Make GCS object URL public to all users within an organization - google-cloud-platform

I have a bucket bucket1 in which there is a file abc.pdf. I want to make this file accessible to all the users in my organization, irrespective of whether they have a GCP account, i.e. make it public specifically to my intranet users, including those who are not part of GCP at all.
For example: if a user with id abc@xyz.com, who is not part of GCP and doesn't have a Google account, clicks the URL, he should be able to access it.
I need guidance and help on this.

I would suggest using signed URLs, as they give users time-limited access to a specific Cloud Storage resource even without a Google account.
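For reference, a minimal sketch with the google-cloud-storage Python client, assuming the bucket and object names from the question and credentials that are allowed to sign (everything else is illustrative):

```python
from datetime import timedelta

from google.cloud import storage

# Assumes a service account with a private key (or signBlob permission),
# since signing requires more than plain end-user credentials.
client = storage.Client()
blob = client.bucket("bucket1").blob("abc.pdf")

# v4 signed URL, valid for one hour; anyone holding it can GET the
# object during that window, no Google account required.
url = blob.generate_signed_url(
    version="v4",
    expiration=timedelta(hours=1),
    method="GET",
)
print(url)
```

The trade-off: the link works for anyone who obtains it before expiry, not just your intranet users, so distribute it through an internal channel.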

Related

Google Cloud : How to connect the google cloud storage to cloud CDN without making the bucket public?

I'm configuring Google Cloud CDN with Google Cloud Storage, following this article:
https://cloud.google.com/cdn/docs/setting-up-cdn-with-bucket#make_your_bucket_public
In my experience with AWS, an S3 bucket can allow read permission only for its CDN (CloudFront).
I wonder if GCP also has a similar feature.
In the above article I grant 'allUsers' read access to the bucket, but I don't want to give read permission to all users, only to Cloud CDN.
I've checked the IAM documents but couldn't find anything.
Please help me:
I want the Cloud Storage bucket to allow read permission only for Cloud CDN, not all users.
I don't want to make my bucket public.
The reason I asked whether you would consider restricting access by IP address is that I've checked this link, wherein you can limit access by IP address.
Another link that I can share is about signed URLs; however, based on that link, "signed URLs give time-limited resource access to anyone in possession of the URL" and "a signed URL is a URL that provides limited permission and time to make a request", so I believe this one is time-limited.
One other thing that I can think of is to use IAM with buckets, wherein you can set the permissions you want for a certain user. For more information on configuring this feature, you can also visit this site.
Google recently released, in public beta, the v4 token signing process to support private origins. In this case, you can use the new v4 token signing process to access a private GCS bucket. One of my colleagues wrote a blog post with directions on how to do this: https://medium.com/@thetechbytes/private-gcs-bucket-access-through-google-cloud-cdn-430d940ebad9
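Separately from that origin-side signing, Cloud CDN also has client-facing signed URLs that follow a documented HMAC-SHA1 scheme, so end users can fetch private content through the CDN without the bucket being public. A minimal sketch, where the key name and base64 key value are placeholders for a key you would attach to the backend bucket:

```python
import base64
import hashlib
import hmac
import time

def sign_cdn_url(url: str, key_name: str, base64_key: str, ttl_seconds: int) -> str:
    """Append Expires, KeyName and Signature query params per the
    Cloud CDN signed-URL scheme (HMAC-SHA1 over the URL-to-sign)."""
    expires = int(time.time()) + ttl_seconds
    separator = "&" if "?" in url else "?"
    url_to_sign = f"{url}{separator}Expires={expires}&KeyName={key_name}"
    key = base64.urlsafe_b64decode(base64_key)
    digest = hmac.new(key, url_to_sign.encode("utf-8"), hashlib.sha1).digest()
    signature = base64.urlsafe_b64encode(digest).decode("utf-8")
    return f"{url_to_sign}&Signature={signature}"

# Hypothetical usage, with a key previously added to the backend bucket:
# signed = sign_cdn_url("https://cdn.example.com/abc.pdf", "my-key", KEY_B64, 3600)
```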

Limited access to AWS S3 bucket

I am trying to understand access security as it relates to Amazon S3. I want to host some files in an S3 bucket, using CloudFront to access it via my domain. I need to limit access to certain companies/individuals. In addition I need to manage that access individually.
A second access model is project based, where I need to make a library of files available to a particular project team, and I need to be able to add and remove team members in an ad hoc manner, and then close access for the whole project at some point. The bucket in question might be the same for both scenarios.
I assume something like this is possible in AWS, but all I can find (and understand) on the AWS site involves using IAM to control access via the AWS console. I don't see any indication that I could create an IAM user, add them to an IAM group, give the group read-only access to the bucket and then provide the name and password via System.Net.WebClient in PowerShell to actually download the available file. Am I missing something, and this IS possible? Or am I not correct in my assumption that this can be done with AWS?
I did find Amazon CloudFront vs. S3 --> restrict access by domain? - Stack Overflow that talks about using CloudFront to limit access by Domain, but that won't work in a WfH scenario, as those home machines won't be on the corporate domain, but the corporate BIM Manager needs to manage access to content libraries for the WfH staff. I REALLY hope I am not running into an example of AWS just not being ready for the current reality.
Content stored in Amazon S3 is private by default. There are several ways that access can be granted:
Use a bucket policy to make the entire bucket (or a directory within it) publicly accessible to everyone. This is good for websites where anyone can read the content.
Assign permission to IAM Users to grant access only to users or applications that need to access the bucket. This is typically used within your organization. Never create an IAM User for somebody outside your organization.
Create presigned URLs to grant temporary access to private objects. This is typically used by applications to grant web-based access to content stored in Amazon S3.
To provide an example for pre-signed URLs, imagine that you have a photo-sharing website. Photos provided by users are private. The flow would be:
A user logs in. The application confirms their identity against a database or an authentication service (eg Login with Google).
When the user wants to view a photo, the application first checks whether they are entitled to view the photo (eg it is their photo). If they are entitled to view the photo, the application generates a pre-signed URL and returns it as a link, or embeds the link in an HTML page (eg in an <img> tag).
When the user accesses the link, the browser sends the URL request to Amazon S3, which verifies the encrypted signature in the signed URL. If it is correct and the link has not yet expired, the photo is returned and is displayed in the web browser.
Users can also share photos with other users. When another user accesses a photo, the application checks the database to confirm that it was shared with the user. If so, it provides a pre-signed URL to access the photo.
This architecture has the application perform all of the logic around Access Permissions. It is very flexible since you can write whatever rules you want, and then the user is sent to Amazon S3 to obtain the file. Think of it like buying theater tickets online -- you just show the ticket at the door and you are allowed to sit in the seat. That's what Amazon S3 is doing -- it is checking the ticket (signed URL) and then giving you access to the file.
See: Amazon S3 pre-signed URLs
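As a rough illustration of that flow (the bucket and key names are made up), the application can mint such a URL with boto3:

```python
import boto3

s3 = boto3.client("s3")

# Time-limited URL for a private object; the signature is computed from
# the credentials this application runs with, no call to AWS needed.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-photo-bucket", "Key": "users/42/photo.jpg"},
    ExpiresIn=3600,  # seconds
)
# Return `url` as a link or embed it in an <img> tag.
```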
Mobile apps
Another common architecture is to generate temporary credentials using the AWS Security Token Service (STS). This is typically done with mobile apps. The flow is:
A user logs into a mobile app. The app sends the login details to a back-end application, which verifies the user's identity.
The back-end app then uses AWS STS to generate temporary credentials and assigns permissions to the credentials, such as being permitted to access a certain directory within an Amazon S3 bucket. (The permissions can actually be for anything in AWS, such as launching computers or creating databases.)
The back-end app sends these temporary credentials back to the mobile app.
The mobile app then uses those credentials to make calls directly to Amazon S3 to access files.
Amazon S3 checks the credentials being used and, if they have permission for the files being requested, grants access. This can be done for uploads, downloads, listing files, etc.
This architecture takes advantage of the fact that mobile apps are quite powerful and they can communicate directly with AWS services such as Amazon S3. The permissions granted are based upon the user who logs in. These permissions are determined by the back-end application, which you would code. Think of it like a temporary employee who has been granted a building access pass for the day, but they can only access certain areas.
See: IAM Role Archives - Jayendra's Blog
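A minimal sketch of the back-end step, assuming a pre-created IAM role for app users (the role ARN, bucket, and prefix are placeholders). The inline session policy scopes the temporary credentials down to one user's directory:

```python
import json

import boto3

sts = boto3.client("sts")

# Restrict the temporary credentials to this user's own prefix;
# the session policy can only narrow what the role itself allows.
session_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::my-app-bucket/users/42/*",
    }],
}

resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/MobileAppUserRole",
    RoleSessionName="user-42",
    Policy=json.dumps(session_policy),
    DurationSeconds=3600,
)
creds = resp["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken
# Send `creds` back to the mobile app, which plugs them into its AWS SDK.
```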
The above architectures are building blocks for how you wish to develop your applications. Every application is different, just like the two use-cases in your question. You can securely incorporate Amazon S3 in your applications while maintaining full control of how access is granted. Your applications can then concentrate on the business logic of controlling access, without having to actually serve the content (which is left up to Amazon S3). It's like selling the tickets without having to run the theater.
You ask whether Amazon S3 is "ready for the current reality". Many of the popular web sites you use every day run on AWS, and you probably never realize it.
If you are willing to issue IAM User credentials (max 5000 per account), the steps would be:
Create an IAM User for each user and select Programmatic access
This will provide an Access Key and Secret Key that you can provide to each user
Attach permissions to each IAM User, or put the users in an IAM Group and attach permissions to the IAM Group
Each user can run aws configure on their computer (using the AWS Command-Line Interface (CLI)) to store their Access Key and Secret Key
They can then use the AWS CLI to upload/download files
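A sketch of the first three steps with boto3 (the user name, bucket, and policy are illustrative; in practice you would more likely attach the policy to an IAM Group):

```python
import json

import boto3

iam = boto3.client("iam")

# 1. Create the IAM User (programmatic access just means issuing keys).
iam.create_user(UserName="external-user-1")

# 2. Generate the Access Key and Secret Key to hand to the user.
key = iam.create_access_key(UserName="external-user-1")["AccessKey"]
print(key["AccessKeyId"], key["SecretAccessKey"])

# 3. Attach read-only permissions for one bucket to the user.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::my-bucket", "arn:aws:s3:::my-bucket/*"],
    }],
}
iam.put_user_policy(
    UserName="external-user-1",
    PolicyName="bucket-read-only",
    PolicyDocument=json.dumps(policy),
)
```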
If you want the users to be able to access via the Amazon S3 management console, you will need to provide some additional permissions: Grant a User Amazon S3 Console Access to Only a Certain Bucket
Alternatively, users could use a program like Cyberduck for an easy Drag & Drop interface to Amazon S3. Cyberduck will also ask for the Access Key and Secret Key.

Granting application users access to Amazon S3 but hitting 5000 user limit

What I am trying to achieve is the following:
Create users dynamically through an API (the user base might grow a lot - eventually 50-100k+)
Give those users access to a specific prefix of an AWS S3 bucket (IAM policy)
Currently my idea is to create AWS IAM users and generate credentials for those users (the credentials should not be temporary). This works fine, but the problem is that AWS is limited to 5000 IAM users. Is there another way to avoid that limit? One way that I found is via Cognito users -> https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_s3_cognito-bucket.html
However, I do not think there is a way to create long-term access keys (like the IAM user access keys) for those Cognito users?
Is there another way to achieve this ?
Thanks in advance!
You should not use IAM for application users. IAM is for staff within your organisation to operate your AWS infrastructure.
Your application should operate its own authentication method separate from IAM (as suggested in the above comments). An example of using AWS for this task would be to use Amazon Cognito.
Once a user has authenticated, you have a couple of options:
Option 1: Using AWS credentials
If you want to allow the authenticated users to access AWS resources (eg Amazon S3) via AWS API calls, then you can create temporary credentials that have limited permissions (eg can access any object within a given path of a given bucket). These credentials can then be provided to the users. This method is commonly used for mobile applications that are capable of making API calls directly to AWS. It requires that the users have software that can use the AWS credentials.
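For instance, if you use Amazon Cognito as suggested above, the sign-in token can be exchanged for temporary AWS credentials. A sketch, in which the identity pool ID, provider name, and token are all placeholders:

```python
import boto3

cognito = boto3.client("cognito-identity", region_name="us-east-1")

# JWT obtained from the user's sign-in (placeholder).
id_token = "eyJ..."
logins = {"cognito-idp.us-east-1.amazonaws.com/us-east-1_example": id_token}

# Exchange the authenticated login for an identity ID...
identity = cognito.get_id(
    IdentityPoolId="us-east-1:00000000-0000-0000-0000-000000000000",
    Logins=logins,
)

# ...and then for temporary, limited-permission AWS credentials.
creds = cognito.get_credentials_for_identity(
    IdentityId=identity["IdentityId"],
    Logins=logins,
)["Credentials"]  # AccessKeyId, SecretKey, SessionToken, Expiration
```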
Option 2: Amazon S3 pre-signed URLS
If you are running a web application and you want users to be able to access private objects in Amazon S3, you can generate pre-signed URLs. For example, let's say you are running a photo-sharing website. The process would be:
Photos are kept in private S3 buckets.
Users authenticate to the application.
The application can then show them their private photos: When the application generates any links to this private content, or embeds content in the page (eg via <img> tags), it generates a pre-signed URL, which provides time-limited access to private content.
The user then accesses the URL, or their browser requests data (eg images) from that URL.
Amazon S3 verifies the signature on the URL and checks the validity time. If both are correct, then S3 returns the private object.
The application uses a set of IAM credentials to sign the pre-signed URL. This can be done in a couple of lines of code and does not require an API call to AWS.
The benefit of this method is that the application is responsible for determining which objects the user may access. For example, let's say a user wants to share their photos with another user. This sharing information can be stored in a database and the application can consult the database when sharing photos. If a user is entitled to view another user's photos, the application can generate a pre-signed URL without caring in which directory the photos are stored. This is a much more flexible approach than using storage location to grant access. However, it does require additional logic within the application.
See: Amazon S3 pre-signed URLs
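A sketch of that application-side check; the photo record and sharing lookup stand in for your own database layer, and only generate_presigned_url is a real boto3 call:

```python
import boto3

s3 = boto3.client("s3")

def photo_url_for(viewer_id: str, photo: dict, is_shared_with) -> str:
    """Return a pre-signed URL only if the viewer may see the photo.

    `photo` (a dict with id/owner_id/bucket/key) and `is_shared_with`
    (a callable) are hypothetical stand-ins for a database layer.
    """
    if photo["owner_id"] != viewer_id and not is_shared_with(photo["id"], viewer_id):
        raise PermissionError("photo is not shared with this user")
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": photo["bucket"], "Key": photo["key"]},
        ExpiresIn=900,
    )
```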

How to encrypt/hide google cloud bucket name in signed URL

I would like to upload an image to a Google Storage bucket; for that I generated a signed URL, which would be passed to the client for the upload. I observed that the Google Cloud bucket name is exposed in the signed URL:
https://storage.googleapis.com/myproject-images/test.PNG?X-Goog-Algorithm=GOOG4-RSA-SHA256&X-Goog-Credential=3242342308700-compute%40developer.gserviceaccount.com%2F20200430%2Fauto%2Fstorage%2Fgoog4_request&X-Goog-Date=20200430T044803Z&X-Goog-Expires=900&X-Goog-SignedHeaders=host&X-Goog-Signature=*********************
My question: is it possible to encrypt/map or hide the Google Cloud bucket name in the signed URL? I do not want to expose my bucket name to the end user.
It's not possible if you want the client to directly access that data. You could obfuscate it by using a URL shortener, but all that would do is hide it from view temporarily.
Once you choose to allow clients to access your project directly, your project id is no longer private information. That ID is absolutely required in order to identify resources within your project (and not just Cloud Storage). The same is true for all Firebase-related client access that goes directly to Google Cloud and Firebase products.
If you don't want anyone to see the name of your project, you will need to either:
Disallow all direct client access
Route all requests through some middleware service identified by another DNS name that hides all the implementation details of the interaction with Google Cloud products.
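A minimal sketch of the second option, using a hypothetical Flask endpoint: the bucket name lives only on the server, and clients see nothing but your own domain and path:

```python
import io

from flask import Flask, abort, send_file
from google.cloud import storage

app = Flask(__name__)
client = storage.Client()
BUCKET = "myproject-images"  # never revealed to clients

@app.route("/images/<name>")
def serve_image(name: str):
    # Fetch from the private bucket server-side and relay the bytes,
    # so the storage location stays hidden behind your own DNS name.
    blob = client.bucket(BUCKET).blob(name)
    if not blob.exists():
        abort(404)
    return send_file(io.BytesIO(blob.download_as_bytes()), mimetype="image/png")
```

The same idea works for uploads: the client POSTs to your endpoint and the middleware writes to the bucket, so no signed URL (and no bucket name) ever reaches the browser.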

Amazon S3 download authentication

I have created a bucket in Amazon S3, uploaded 2 files to it, and made them public. I have the links through which I can access them from anywhere on the Internet. I now want to put some restrictions on who can download the files. Can someone please help me with that? I did try the documentation, but got confused.
I want that at the time of download using the public link it should ask for some credentials or something to authenticate the user at that time. Is this possible?
By default, all objects in Amazon S3 are private. You can then add permissions so that people can access your objects. This can be done via:
Access Control List permissions on individual objects
A Bucket Policy
IAM Users and Groups
A Pre-Signed URL
As long as at least one of these methods is granting access, your users will be able to access the objects from Amazon S3.
1. Access Control List on individual objects
The Make Public option in the Amazon S3 management console will grant Open/Download permissions to all Internet users. This can be used to grant public access to specific objects.
2. Bucket Policy
A Bucket Policy can be used to grant access to a whole bucket or a portion of a bucket. It can also be used to specify limits to access. For example, a policy could make a specific directory within a bucket public to users from a specific range of IP addresses, during particular times of the day, and only when accessing the bucket via SSL.
A bucket policy is a good way to grant public access to many objects (eg a particular directory) without having to specify permissions on each individual object. This is commonly used for static websites served out of an S3 bucket.
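As an illustration, such a policy could be applied with boto3 (the bucket name, directory, and IP range are made up; the SSL condition shows how further limits slot in):

```python
import json

import boto3

# Make one directory public, but only to a given IP range and only over SSL.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-bucket/public-docs/*",
        "Condition": {
            "IpAddress": {"aws:SourceIp": "203.0.113.0/24"},
            "Bool": {"aws:SecureTransport": "true"},
        },
    }],
}

boto3.client("s3").put_bucket_policy(Bucket="my-bucket", Policy=json.dumps(policy))
```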
3. IAM Users and Groups
This is similar to defining a Bucket Policy, but permissions are assigned to specific Users or Groups of users. Thus, only those users have permission to access the objects. Users must authenticate themselves when accessing the objects, so this is most commonly used when accessing objects via the AWS API, such as using the aws s3 commands from the AWS Command-Line Interface (CLI).
Rather than being prompted to authenticate, users must provide the authentication when making the API call. A simple way of doing this is to store user credentials in a local configuration file, which the CLI will automatically use when calling the S3 API.
4. Pre-Signed URL
A Pre-Signed URL can be used to grant access to S3 objects as a way of "overriding" access controls. A normally private object can be accessed via a URL by appending an expiry time and signature. This is a great way to serve private content without requiring a web server.
Typically, an application constructs a Pre-Signed URL when it wishes to grant access to an object. For example, let's say you have a photo-sharing website and a user has authenticated to your website. You now wish to display their pictures in a web page. The pictures are normally private, but your application can generate Pre-Signed URLs that grant them temporary access to the pictures. The Pre-Signed URL will expire after a particular date/time.
Regarding the pre-signed URL: the signature is carried in the URL's query string (for regular SDK requests it is in the request headers), and either way it travels inside the HTTPS/TLS encryption. But do check for yourself.