By requirement, the Google Cloud Storage bucket I use for file storage must be private, but I need to make the files in the bucket accessible over Google Cloud CDN.
Most of the documentation I found describes the best practice involving signed URLs, but I need to make it work without signed URLs or signed cookies. Does anyone know how to achieve that?
I've successfully configured access over signed URLs, including all the permission settings for the bucket and CDN, but that's not what I need.
At this moment, Cloud CDN still requires tokenized access in order to reach a private origin. There is a solution where you can deploy a proxy that dynamically signs your request with an ephemeral token and accesses the private storage bucket:
https://github.com/GoogleCloudPlatform/cdn-auth-proxy
There is work underway on Cloud CDN that will allow you to dynamically generate an access token without having to deploy a proxy, but a definitive release date has not been set.
The new Google Cloud Media CDN service allows you to access a private storage bucket via IAM permissions: https://cloud.google.com/media-cdn/docs/origins?hl=en#private-storage-buckets .
You can keep service account credentials in a JSON key file on the web server that is supposed to serve the files. Just make sure the service account has the proper permissions to access the desired resources. The Google Cloud SDK and client libraries fully support making requests to protected resources as a service account, provided its permissions are sufficient.
This way you can map requests dynamically to the web service and have the service take care of accessing the protected resources with its own credentials in the back end.
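A minimal sketch of that idea, assuming Flask and the google-cloud-storage Python client (the bucket name and key file path are placeholders, and any user-facing authentication is assumed to happen before this handler runs):

```python
from flask import Flask, Response, abort
from google.cloud import storage

app = Flask(__name__)

# Service account credentials from a JSON key file; the account needs
# read access (e.g. roles/storage.objectViewer) on the private bucket.
client = storage.Client.from_service_account_json("service-account.json")
bucket = client.bucket("my-private-bucket")

@app.route("/files/<path:name>")
def serve(name: str) -> Response:
    # get_blob fetches the object's metadata, or returns None if it is missing.
    blob = bucket.get_blob(name)
    if blob is None:
        abort(404)
    # The proxy reads the private object with its own credentials and
    # streams the bytes back to the (already authenticated) caller.
    data = blob.download_as_bytes()
    return Response(data, content_type=blob.content_type or "application/octet-stream")
```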
I'm configuring Google Cloud CDN with Google Cloud Storage, following this article:
https://cloud.google.com/cdn/docs/setting-up-cdn-with-bucket#make_your_bucket_public
In my experience with AWS, an S3 bucket can allow read permission only for its CDN (CloudFront). I wonder if GCP has a similar feature.
Following the article above, I granted 'allUsers' read access to the bucket, but I don't want to give read permission to all users, only to Cloud CDN.
I've checked the IAM documentation but couldn't find such a feature. Please help me: I want the Cloud Storage bucket to allow read permission only for Cloud CDN, not for all users. I don't want to make my bucket public.
The reason I ask whether you would consider accessing with an IP address is that I've checked this link, wherein you can limit access by using IP addresses.
Another link that I can share is about signed URLs; however, based on that link, "signed URLs give time-limited resource access to anyone in possession of the URL" and "a signed URL is a URL that provides limited permission and time to make a request", so I believe this one is time-limited.
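For illustration, generating such a time-limited signed URL with the google-cloud-storage Python client might look like this (bucket and object names are placeholders):

```python
import datetime
from google.cloud import storage

# Requires service account credentials (with a private key) in the environment.
client = storage.Client()
blob = client.bucket("my-private-bucket").blob("reports/report.pdf")

url = blob.generate_signed_url(
    version="v4",
    expiration=datetime.timedelta(minutes=15),  # link stops working after 15 minutes
    method="GET",
)
print(url)
```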
One thing that I can also think of is to use IAM with buckets, wherein you can set the permissions you want for a certain user. For more information on configuring this feature, you can also visit this site.
Google recently released to public beta the v4 token signing process to support private origins. In this case, you can use the new v4 token signing process to access a private GCS bucket. One of my colleagues wrote a blog post with directions on how to do this: https://medium.com/@thetechbytes/private-gcs-bucket-access-through-google-cloud-cdn-430d940ebad9
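For context, here is a minimal sketch of the general shape of Cloud CDN URL signing (the long-documented HMAC-SHA1 scheme; the v4 process described in the post differs in its details), assuming a signing key has already been attached to the backend:

```python
import base64
import hashlib
import hmac
import time

def sign_cdn_url(url: str, key_name: str, base64_key: str, ttl_seconds: int = 300) -> str:
    # Expiration as a Unix timestamp; the CDN rejects the URL after this time.
    expires = int(time.time()) + ttl_seconds
    separator = "&" if "?" in url else "?"
    url_to_sign = f"{url}{separator}Expires={expires}&KeyName={key_name}"
    # The signing key is stored URL-safe base64-encoded, as in the Cloud CDN docs.
    digest = hmac.new(base64.urlsafe_b64decode(base64_key),
                      url_to_sign.encode("utf-8"), hashlib.sha1).digest()
    signature = base64.urlsafe_b64encode(digest).decode("utf-8")
    return f"{url_to_sign}&Signature={signature}"
```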
I'm building an app that authenticates users and then returns their generated files.
I'm using Amazon S3 to store these files. Public access is blocked, and the bucket policy is set so that only an IAM user can access the main bucket.
All I need is to return these files to authenticated users.
I see that one way to achieve this is to create a presigned URL, and it works, but such a URL is available to anyone who has the link.
I know I can set a short time limit, like one minute, but that doesn't completely solve my problem. Maybe I could solve this by using Amazon Cognito, but it forces me to use its authentication flow, which I don't want (I plan to use Firebase Auth).
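For reference, generating such a presigned URL with boto3 looks roughly like this (bucket and key names are placeholders):

```python
import boto3

s3 = boto3.client("s3")  # uses the IAM user's credentials from the environment

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-private-bucket", "Key": "users/123/file.pdf"},
    ExpiresIn=60,  # the URL stops working after 60 seconds
)
print(url)
```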
If you know Firebase Cloud Storage, then you know that I could easily achieve this through Firebase Storage Rules.
So my questions are:
How can I achieve this in Amazon S3? I mean, is there an option to validate in the backend?
Is this really possible, or am I forced to use services such as Google Cloud Storage?
I just want my S3 bucket to be able to access itself. For example, my index.html references a favicon, which resides in the same S3 bucket. When I call index.html, I get a 403 HTTP Access Denied error.
If I turn Block All Public Access off and add a policy, it works, but I do not want the bucket to be public.
How am I able to invoke my website with my AWS user, for example, without making the site public (that is, with all internet access blocked)?
"I just want my S3 bucket to be able to access itself."
No, the request always comes from the client.
"How am I able to invoke my website with my AWS user?"
For site-level access control there is CloudFront with signed cookies. You will still need some logic (API Gateway + Lambda? Lambda@Edge? another server?) to authenticate the user and sign the cookies.
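A rough sketch of what the cookie-signing piece could look like in Python (the key pair ID, key file, and resource pattern are placeholders for a CloudFront key pair you control; the caller is assumed to be already authenticated):

```python
import base64
import json
import time

import rsa  # pip install rsa

KEY_PAIR_ID = "KXXXXXXXXXXXXX"  # hypothetical CloudFront public key ID
PRIVATE_KEY = rsa.PrivateKey.load_pkcs1(open("private_key.pem", "rb").read())

def _cf_b64(data: bytes) -> str:
    # CloudFront uses a modified base64 alphabet for cookie values.
    return (base64.b64encode(data).decode()
            .replace("+", "-").replace("=", "_").replace("/", "~"))

def signed_cookies(resource: str, expires_in: int = 3600) -> dict:
    # Canned policy: access to `resource` until now + expires_in seconds.
    policy = json.dumps({
        "Statement": [{
            "Resource": resource,  # e.g. "https://dxxxx.cloudfront.net/*"
            "Condition": {"DateLessThan": {"AWS:EpochTime": int(time.time()) + expires_in}},
        }]
    }, separators=(",", ":")).encode()
    signature = rsa.sign(policy, PRIVATE_KEY, "SHA-1")  # CloudFront expects RSA-SHA1
    return {
        "CloudFront-Policy": _cf_b64(policy),
        "CloudFront-Signature": _cf_b64(signature),
        "CloudFront-Key-Pair-Id": KEY_PAIR_ID,
    }
```

Your authentication logic would set these three values as cookies on its response, after which the browser can fetch the protected content through CloudFront directly.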
You mention that "the websites in the bucket should only be able to be seen by a few dedicated users, which I will create with IAM."
However, accessing Amazon S3 content with IAM credentials is not compatible with accessing objects via URLs in a web browser. IAM credentials can be used when making AWS API calls, but a different authentication method is required when accessing content via URLs. Authentication normally requires a back-end to perform the authentication steps, or you could use Amazon Cognito.
Without knowing how your bucket is set up and what permissions and access controls you have already deployed, it is hard to give a definitive answer.
Having said that, it sounds like you simply need to walk through the proper steps of building an appropriate permission model. You have already explored part of this with Block All Public Access and a policy, but there are also ACLs and permission specifics based on object ownership that need to be considered.
Ultimately, AWS's documentation is going to do a better job than most at illustrating what to do and where to start:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteAccessPermissionsReqd.html
NOTE: if you share more information about how the bucket is configured and how your client side accesses the website, I can edit this answer to give a more prescriptive solution (assuming the AWS docs don't get you all the way there).
UPDATE: After re-reading your question and your comment on my answer, I think gusto2's and John's answers are pointing you in the right direction. What you want to do is authenticate users before they access the contents of the S3 bucket (which, if I understand you correctly, is an S3-hosted static website). This means you need an authentication layer between the client and the bucket, which can be accomplished in a number of ways (Lambda + CloudFront, or using an IdP like Cognito, are certainly viable options). It would be a moot point for me to regurgitate exactly how to pull off something like this when there are a ton of accessible blog posts on the topic (search "authenticate S3 static website").
HOWEVER, I also want to point out that what you want to accomplish is not possible the way you are hoping to accomplish it (using IAM permission modeling to authenticate users against an S3-hosted static website). You can either authenticate users to your S3 website, OR you can use IAM + S3 permissions and ACLs to set up AWS user- and role-specific access to the contents of a bucket, but you can't use IAM users/roles as a method of authenticating client access to an S3 static website (not in any way I would imagine is simple or recommended, at least...).
GCP seems to allow you to delegate Cloud Storage authentication via IAM, and that's great, but you're only ever able to get a single file at a time using that method.
What I mean is: if I give a user the 'Storage Object Viewer' role on a folder in a bucket, the user can browse to a single file (say, an .html file) at https://storage.cloud.google.com/bucket-name/folder-name/filename and display it, but if that .html file references .css or other files the browser needs to download, those all return 404 Not Found errors.
It seems that whatever token is obtained upon authentication is only valid for the retrieval of the single file that was requested before the token was created.
How does one host a static website with some form of authentication on GCP Cloud Storage?
I did see a similar question asked over 5 years ago, but GCP has changed considerably since then, which is why I'm re-asking.
Edit: OK, let's assume I'm okay with public read-only access to the bucket contents, and instead I'm going to focus on securing the GCP Cloud Functions that make the changes.
Now, I've enabled authentication on the GCP functions using an OAuth ID token. The issue now is CORS: any call made to the GCP functions needs an Access-Control-Allow-Origin response header, but that header does not get returned until AFTER authentication.
Anybody know how to enable CORS on GCP Cloud Functions before any authentication takes place?
Thanks!
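One common workaround (a sketch, not an official recipe) is to allow unauthenticated invocations so the function can answer the CORS preflight itself, then verify the OAuth ID token inside the function body; the allowed origin below is a placeholder:

```python
from google.auth.transport import requests as google_requests
from google.oauth2 import id_token

CORS_HEADERS = {
    "Access-Control-Allow-Origin": "https://your-app.example.com",  # placeholder
    "Access-Control-Allow-Methods": "GET, POST",
    "Access-Control-Allow-Headers": "Authorization, Content-Type",
    "Access-Control-Max-Age": "3600",
}

def handler(request):
    # Preflight requests carry no Authorization header, so answer them first.
    if request.method == "OPTIONS":
        return ("", 204, CORS_HEADERS)

    auth = request.headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return ("Unauthorized", 401, CORS_HEADERS)
    try:
        # Validates the token's signature and expiry against Google's keys.
        claims = id_token.verify_oauth2_token(
            auth.split(" ", 1)[1], google_requests.Request())
    except ValueError:
        return ("Unauthorized", 401, CORS_HEADERS)

    return (f"Hello {claims.get('email')}", 200, CORS_HEADERS)
```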
You can host your static files on App Engine. The content is served for free!
In front of App Engine, you can activate IAP (Identity-Aware Proxy).
Finally, grant your users (or groups, or Google Workspace domains) the IAP-Secured Web App User role.
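For illustration, a minimal app.yaml for serving static files from App Engine might look like this (the directory name www is a placeholder):

```yaml
# Serve everything in ./www as static content; IAP then gates access.
runtime: python312
handlers:
- url: /
  static_files: www/index.html
  upload: www/index.html
- url: /(.*)
  static_files: www/\1
  upload: www/(.*)
```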
Google Cloud allows serving static content from a bucket by adding a load balancer in front of it. So far I have been able to serve public content successfully, but I would also like to be able to authenticate users via an OAuth provider before they can see some or all of the content in a bucket, and to do it serverlessly.
I have found the Grant project, which might solve part of it, but I could really use some guidance on the best way to configure GCP itself to do this, or whether this is even possible.
If possible, a Cloud Function should not be a proxy service for all traffic; instead, it should just instruct GCP to redirect requests without proper credentials to the OAuth flow, and otherwise serve the content straight from the bucket.