Google Cloud allows serving static content from a bucket by adding a load balancer in front of it. So far I have been able to successfully serve public content, but I would also like to authenticate users with an OAuth provider before they can see some or all of the content in a bucket, and to do it serverlessly.
I have found the Grant project, which might solve part of it, but I could really use some guidance on the best way to configure GCP itself to do this, or whether it is even possible.
If possible, a Cloud Function should not act as a proxy for all traffic; instead it should just instruct GCP to redirect requests that lack proper credentials to the OAuth provider, and otherwise serve the content straight from the bucket.
I'm configuring Google Cloud CDN with Google Cloud Storage following this article:
https://cloud.google.com/cdn/docs/setting-up-cdn-with-bucket#make_your_bucket_public
In my experience with AWS, an S3 bucket can allow read permission only for its CDN (CloudFront).
I wonder if GCP also has a similar feature.
Following the article above, I granted 'allUsers' read access to the bucket, but I don't want to give read permission to all users, only to Cloud CDN.
I've checked the IAM documentation but couldn't find anything.
Please help me.
What I want is for the Cloud Storage bucket to allow read permission only for Cloud CDN, not for all users.
I don't want to make my bucket public.
The reason I asked whether you would consider restricting access by IP address is that I've checked this link, which shows that you can limit access by IP address.
Another link I can share is about signed URLs; however, based on that link, “signed URLs give time-limited resource access to anyone in possession of the URL” and “a signed URL is a URL that provides limited permission and time to make a request”, so I believe this one is time-limited.
One more thing I can think of is to use IAM with buckets, where you can grant the permission you want to a certain user. For more information on configuring this feature, you can also visit this site.
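For illustration, a minimal sketch of granting a specific user read access on a bucket with the google-cloud-storage Python library (the bucket name and email address are hypothetical):

```python
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-bucket")  # hypothetical bucket name

# Read the current IAM policy and add a binding for one user only,
# instead of granting read access to 'allUsers'.
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append({
    "role": "roles/storage.objectViewer",
    "members": {"user:jane@example.com"},  # hypothetical user
})
bucket.set_iam_policy(policy)
```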
Google recently released, in public beta, the v4 token signing process to support private origins. In this case, you can use the new v4 token signing process to access a private GCS bucket. One of my colleagues wrote a blog post with directions on how to do this: https://medium.com/@thetechbytes/private-gcs-bucket-access-through-google-cloud-cdn-430d940ebad9
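For illustration, generating a V4-signed URL for an object in a private bucket looks roughly like this with the google-cloud-storage Python library (bucket and object names are hypothetical; the blog post above covers wiring this up to Cloud CDN):

```python
from datetime import timedelta
from google.cloud import storage

client = storage.Client()
blob = client.bucket("my-private-bucket").blob("media/video.mp4")  # hypothetical names

# Create a time-limited V4 signed URL granting read access to this object.
url = blob.generate_signed_url(
    version="v4",
    expiration=timedelta(minutes=15),
    method="GET",
)
print(url)
```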
By requirement, the Google Cloud Storage bucket I use for file storage must be private. But I need to make the files in the bucket accessible over Google Cloud CDN.
Most of the documentation I found describes best practices involving signed URLs, but I need to make it work without signed URLs or cookies. Does anyone know how to achieve that?
I've successfully configured access over signed URLs, including all the permission settings for the bucket/CDN, but that's not what I need.
At this moment, Cloud CDN still requires tokenized access in order to reach a private origin. There is a solution where you can deploy a proxy that dynamically signs your requests with an ephemeral token and accesses the private storage bucket.
https://github.com/GoogleCloudPlatform/cdn-auth-proxy
There is work underway for Cloud CDN that will allow you to dynamically generate an access token without having to deploy a proxy, but a definitive release date has not been set.
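For illustration, the tokenized access mentioned above is Cloud CDN's documented signed-URL scheme: append an expiry and key name to the URL, then add a base64url-encoded HMAC-SHA1 signature. A minimal sketch in Python (the host, key name, and secret are hypothetical):

```python
import base64
import hashlib
import hmac
import time

def sign_cdn_url(url: str, key_name: str, base64_key: str, ttl_seconds: int) -> str:
    """Sign a Cloud CDN URL with an expiry, key name, and HMAC-SHA1 signature."""
    expires = int(time.time()) + ttl_seconds
    separator = "&" if "?" in url else "?"
    url_to_sign = f"{url}{separator}Expires={expires}&KeyName={key_name}"
    digest = hmac.new(
        base64.urlsafe_b64decode(base64_key),  # the secret configured on the backend service
        url_to_sign.encode("utf-8"),
        hashlib.sha1,
    ).digest()
    signature = base64.urlsafe_b64encode(digest).decode("utf-8")
    return f"{url_to_sign}&Signature={signature}"

# Hypothetical host, key name, and base64url-encoded secret.
print(sign_cdn_url("https://cdn.example.com/video.mp4", "my-key", "nZtRohdNF9m3cKM24IcK4w==", 300))
```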
The new Google Cloud Media CDN service allows you to access a private storage bucket via IAM permissions: https://cloud.google.com/media-cdn/docs/origins?hl=en#private-storage-buckets
You can register service account credentials in a JSON file on the web server that is supposed to serve the files. Just make sure the service account has the proper permissions to access the desired resources. The gcloud SDK has full support for making requests to protected resources with a service account, given that its permissions are sufficient.
This way you can map the requests dynamically to the web service and have the service take care of accessing the protected resources with those credentials behind the scenes.
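A minimal sketch of that server-side access with the google-cloud-storage Python library, assuming a key file at sa-key.json and hypothetical bucket/object names:

```python
from google.cloud import storage

# Load explicit service account credentials from the JSON key file on the web server.
client = storage.Client.from_service_account_json("sa-key.json")  # hypothetical path

def fetch_object(bucket_name: str, object_name: str) -> bytes:
    """Download a protected object on behalf of an authenticated request."""
    return client.bucket(bucket_name).blob(object_name).download_as_bytes()

data = fetch_object("my-private-bucket", "reports/q3.pdf")  # hypothetical names
```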
Using IAM, is there any simple way to let a user who is logged in to the GCP console access a Cloud Run URL?
The idea here is to have a lightweight way to protect access to some URLs for people who are already logged in to the console.
So I don't want the world to have access, only my GCP users.
It seems that the options are either:
Set up IAP for Cloud Run => costly (load balancer) and not exactly simple
Set up the container to require authentication, generate a token from the console, and use a browser extension to inject said token on each request.
Note: I tried to set up a container to allow unauthenticated calls while removing the allUsers principal from the Invoker role and restricting it to a particular email address. The URL ended up still being available to unauthenticated browsers.
It seems like a very simple use case, but unless I am missing something, the options are all over-the-top.
Thanks,
Maybe this might work for you (I don't necessarily know if it's the best architecture):
Deploy Cloud Run and ONLY allow authenticated invocation.
Create a very simple GAE project. Add login: required to app.yaml so that anyone trying to load the app is forced to log in.
Your GAE code can then invoke the Cloud Run endpoint. Your code will generate a token and include it as a header when making the call to the Cloud Run endpoint. See this documentation.
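A minimal sketch of that server-to-server call from GAE, assuming the google-auth and requests libraries and a hypothetical Cloud Run URL:

```python
import google.auth.transport.requests
import google.oauth2.id_token
import requests

# Hypothetical Cloud Run URL; the token audience must match the service URL.
CLOUD_RUN_URL = "https://my-service-abc123-uc.a.run.app"

# Fetch an ID token for the Cloud Run service using the app's own credentials.
auth_request = google.auth.transport.requests.Request()
token = google.oauth2.id_token.fetch_id_token(auth_request, CLOUD_RUN_URL)

# Include the token as a Bearer header so Cloud Run accepts the invocation.
response = requests.get(CLOUD_RUN_URL, headers={"Authorization": f"Bearer {token}"})
print(response.status_code, response.text)
```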
I'm building an app that authenticates users and then returns their generated files.
I'm using Amazon S3 to store these files. Public access is blocked, and the bucket policy is set so that only an IAM user can access the main bucket.
All I need is to return these files to authenticated users.
I see that one way to achieve this is creating a presigned URL, and it works, but such a URL will be available to anyone who has the link.
I know I can set a time limit like 1 minute, but that doesn't completely solve my problem. Maybe I could solve this by using Amazon Cognito, but it forces me to use their authentication flow, which I don't want to do (I plan to use Firebase Auth).
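For context, this is roughly how such a presigned URL is generated with boto3 (bucket and key names are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Generate a short-lived presigned GET URL for one object.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-private-bucket", "Key": "users/123/report.pdf"},
    ExpiresIn=60,  # seconds; anyone holding the URL can use it until then
)
print(url)
```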
If you know Firebase Cloud Storage, then you know that I can easily achieve this through Firebase Storage Rules.
So my questions are:
How can I achieve this in Amazon S3? I mean, is there an option to validate this in the backend?
Is this really possible, or am I forced to use a service such as Google Cloud Storage?
GCP seems to allow you to delegate Cloud Storage authentication via IAM, and that's great, but with that method you're only ever able to get a single file at a time.
What I mean is: if I gave a user the 'Storage Object Viewer' role on a folder in a bucket, that user would be able to browse to a single file (let's say an .html file) at https://storage.cloud.google.com/bucket-name/folder-name/filename and display it, but if that .html file references .css or other files the browser needs to download, those all return 404 Not Found errors.
It seems that whatever token is obtained upon authentication is only valid for retrieving the single file that was requested before the token was created.
How does one host a static website with some form of authentication on GCP Cloud Storage?
I did see a similar question asked over 5 years ago and thought GCP has changed considerably since then, so that's why I'm re-asking.
Edit: OK, let's assume I'm okay with public read-only access to the bucket contents, and instead I'm going to focus on securing the GCP Cloud Functions which make the changes.
Now, I've enabled authentication on the GCP functions and used an OAuth ID token. The issue now is CORS: any call made to the GCP functions needs a CORS Access-Control-Allow-Origin header, but that header does not get returned until AFTER the authentication.
Does anybody know how to enable CORS on GCP Cloud Functions before any authentication takes place?
Thanks!
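For illustration, one common workaround is to allow unauthenticated invocation and handle both the CORS preflight and the ID token check inside the function itself; a minimal sketch for the Python runtime, which uses Flask-style requests (the allowed origin and claims handling are hypothetical):

```python
import google.auth.transport.requests
import google.oauth2.id_token

CORS_HEADERS = {
    "Access-Control-Allow-Origin": "https://my-site.example.com",  # hypothetical origin
    "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
    "Access-Control-Allow-Headers": "Authorization, Content-Type",
}

def handler(request):
    # Answer the unauthenticated CORS preflight before any auth check;
    # browsers never attach the Authorization header to OPTIONS requests.
    if request.method == "OPTIONS":
        return ("", 204, CORS_HEADERS)

    # Verify the OAuth ID token in code, since the platform-level check
    # would reject the preflight before CORS headers are returned.
    auth_header = request.headers.get("Authorization", "")
    if not auth_header.startswith("Bearer "):
        return ("Unauthorized", 401, CORS_HEADERS)
    try:
        claims = google.oauth2.id_token.verify_oauth2_token(
            auth_header.split(" ", 1)[1],
            google.auth.transport.requests.Request(),
        )
    except ValueError:
        return ("Unauthorized", 401, CORS_HEADERS)

    return (f"Hello, {claims.get('email', 'user')}", 200, CORS_HEADERS)
```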
You can host your static files on App Engine. The content is served for free!
In front of App Engine, you can activate IAP.
Finally, grant your users (or groups, or Google Workspace domains) the IAP-Secured Web App User role.