Restrict Bucket Access to Certain Origin(s) - google-cloud-platform

I have a Google Cloud Storage bucket with images that I want to serve to users of my website. The public URL is something like this:
https://storage.googleapis.com/example-bucket/filename.jpg
So my website can easily access it, and any random internet user can enter the URL in a browser directly to access it.
Is it possible, via Google Cloud, to restrict this so that if my website tries to access the file, it succeeds, but if a random user tries to enter the URL into a browser window, they get denied?
Cloud Storage lets us set CORS policies, but they only apply to the XML API: https://cloud.google.com/storage/docs/cross-origin#server-side-support
Is it possible to restrict this via a Load Balancer, Cloud Armor, or Cloud CDN?
FYI, let's say my website is accessing it from the DOM directly, like this:
<html>
<body>
<img src="https://storage.googleapis.com/example-bucket/filename.jpg">
</body>
</html>

Based on your use case: Google Cloud Storage currently has no mechanism to allow reads while restricting downloads. Once an image/file has been made public, any random user or website with the URL can read or download it.
I would suggest:
Use a third-party app to render the documents as graphics/images inside your app, which prevents users from downloading them.
Change your use case: have users sign in to your website with their Google accounts, then use IAM permissions and ACLs so that a Cloud Storage object is only accessible to users who are both authenticated and allowed to read it (a sketch follows below).
You could also check this blog on how to control access to Google Cloud Storage.
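As an illustration of the second option, here is a minimal sketch using the google-cloud-storage Python client. The bucket name, object name, and user email are placeholders, and it assumes your credentials are allowed to edit object ACLs:

from google.cloud import storage

# Assumes application default credentials that may edit object ACLs.
client = storage.Client()
bucket = client.bucket("example-bucket")  # placeholder bucket name
blob = bucket.blob("filename.jpg")        # placeholder object name

# Grant read access to one authenticated Google account; the object
# stays private to everyone else.
blob.acl.user("user@example.com").grant_read()
blob.acl.save()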

Related

Accessing private google bucket through google CDN w/o signed URLs

By requirement, the Google Cloud Storage bucket I use for file storage must be private. But I need to make the files in the bucket accessible over Google Cloud CDN.
Most of the documentation I found describes best practices involving signed URLs, but I need to make it work without signed URLs or cookies. Does anyone know how to achieve that?
I've successfully configured access over signed URLs, including all the permission settings for the bucket/CDN, but that's not what I need.
At this moment, Cloud CDN still requires tokenized access in order to reach a private origin. There is a solution where you deploy a proxy that dynamically signs your request with an ephemeral token and accesses the private storage bucket:
https://github.com/GoogleCloudPlatform/cdn-auth-proxy
There is work underway for Cloud CDN that will allow you to dynamically generate an access token without having to deploy a proxy, but a definitive release date has not been set.
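For reference, the signing such a proxy performs follows Cloud CDN's documented signed-URL scheme: append Expires and KeyName query parameters, compute an HMAC-SHA1 over the URL with the signing key, and append the base64url-encoded signature. A minimal sketch, assuming a signing key already created on the CDN-enabled backend (the key name and key material are placeholders):

import base64
import hashlib
import hmac
import time

def sign_cdn_url(url, key_name, base64_key, ttl_seconds=300):
    # Build the string to sign: URL plus Expires and KeyName parameters.
    expires = int(time.time()) + ttl_seconds
    separator = "&" if "?" in url else "?"
    url_to_sign = f"{url}{separator}Expires={expires}&KeyName={key_name}"
    # HMAC-SHA1 with the urlsafe-base64-decoded key, then append the signature.
    key = base64.urlsafe_b64decode(base64_key)
    digest = hmac.new(key, url_to_sign.encode("utf-8"), hashlib.sha1).digest()
    return url_to_sign + "&Signature=" + base64.urlsafe_b64encode(digest).decode("utf-8")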
The new Google Cloud Media CDN service allows you to access a private storage bucket via IAM permissions: https://cloud.google.com/media-cdn/docs/origins?hl=en#private-storage-buckets
You can register service account credentials in a JSON key file on the web server that is supposed to serve the files. Just make sure the service account has the proper permissions to access the desired resources. The Google Cloud client libraries have full support for making requests to protected resources with a service account, given that its permissions are sufficient.
This way you can map the requests dynamically to the web service and have the service take care of accessing the protected resources, keeping the credentials in the back end (a sketch follows below).
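A minimal sketch of such a service, assuming Flask and the google-cloud-storage client; the bucket name and key-file path are placeholders:

from flask import Flask, Response, abort
from google.cloud import storage

app = Flask(__name__)

# Placeholder key-file path; the service account only needs read access
# (e.g. roles/storage.objectViewer) on the bucket.
client = storage.Client.from_service_account_json("service-account.json")
bucket = client.bucket("example-private-bucket")  # placeholder bucket name

@app.route("/files/<path:object_name>")
def serve(object_name):
    blob = bucket.blob(object_name)
    if not blob.exists():
        abort(404)
    # Fetch the object with the service account and relay it to the client.
    data = blob.download_as_bytes()
    return Response(data, mimetype=blob.content_type or "application/octet-stream")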

Is there any way to host a public static website in GCP Cloud Storage and protect it using a username and password?

GCP seems to let you delegate Cloud Storage authentication via IAM, which is great, but with that method you're only ever able to get a single file at a time.
What I mean is: if I give a user the 'Storage Object Viewer' role on a folder in a bucket, the user can browse to a single file (say, an .html file) at https://storage.cloud.google.com/bucket-name/folder-name/filename and display it. But if that .html file references .css or other files the browser then needs to fetch, those all return 404 Not Found errors.
It seems that whatever token is obtained upon authentication is only valid for retrieving the single file that was requested before the token was created.
How does one host a static website, with some form of authentication in GCP Cloud Storage?
I did see a similar question asked over 5 years ago, but GCP has changed considerably since then, so I'm re-asking.
Edit: OK, let's assume I'm fine with public read-only access to the bucket contents; instead, I'm going to focus on securing the GCP Cloud Functions that make the changes.
I've now enabled authentication on the Cloud Functions using an OAuth ID token. The issue is now CORS: any call made to the functions needs a CORS Access-Control-Allow-Origin header in the response, but that header is not returned until AFTER authentication.
Does anybody know how to enable CORS on GCP Cloud Functions before any authentication takes place?
Thanks!
You can host your static files on App Engine; the content is served for free!
In front of App Engine, you can activate IAP (Identity-Aware Proxy).
Finally, grant your users (or groups, or Google Workspace domains) the IAP-Secured Web App User role.
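For illustration, a minimal app.yaml that serves a static site from a www/ directory on App Engine standard; the directory name and runtime are assumptions:

runtime: python312

handlers:
# Serve the site root.
- url: /
  static_files: www/index.html
  upload: www/index.html
  secure: always

# Serve everything else from the www/ directory.
- url: /(.*)
  static_dir: www
  secure: always

With IAP activated in front of the app, every request is authenticated before it reaches these handlers, so the .css and other referenced files load normally once the user passes IAP.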

How to encrypt/hide google cloud bucket name in signed URL

I would like to upload an image to a Google Cloud Storage bucket. For that, I generated a signed URL which is passed to the client for the upload. I noticed that the bucket name is exposed in the signed URL:
https://storage.googleapis.com/myproject-images/test.PNG?X-Goog-Algorithm=GOOG4-RSA-SHA256&X-Goog-Credential=3242342308700-compute%40developer.gserviceaccount.com%2F20200430%2Fauto%2Fstorage%2Fgoog4_request&X-Goog-Date=20200430T044803Z&X-Goog-Expires=900&X-Goog-SignedHeaders=host&X-Goog-Signature=*********************
My question: is it possible to encrypt/map or hide the Google Cloud bucket name in a signed URL? I do not want to expose my bucket name to end users.
It's not possible if you want the client to access that data directly. You could obfuscate it by using a URL shortener, but all that would do is hide it from view temporarily.
Once you choose to allow clients to access your project directly, your project id is no longer private information. That ID is absolutely required in order to identify resources within your project (and not just Cloud Storage). The same is true for all Firebase-related client access that goes directly to Google Cloud and Firebase products.
If you don't want anyone to see the name of your project, you will need to either:
Disallow all direct client access
Route all requests through some middleware service identified by another DNS name that hides all the implementation details of the interaction with Google Cloud products.

Hosting static images in AWS S3

I am integrating the ability for users of my web app to be able to upload images to my site. I want to store these images in an AWS S3 bucket, but I need to be careful with privacy and making sure only people that should have access to these files can see them.
Users should have access to these files via <img src="s3_link"> but should not be able to access the bucket directly or list the objects within.
I can accomplish this by making the bucket public, but this seems dangerous.
How do I set up a proper bucket policy to allow these images to be loaded onto a webpage in an <img> tag?
S3 supports pre-signed URLs. They can be used to restrict access to a specific user.
See: Share an Object with Others
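A minimal sketch with boto3; the bucket and key names are placeholders. The generated URL can go straight into an <img> tag and stops working after the given number of seconds:

import boto3

s3 = boto3.client("s3")

# Placeholder bucket/key; the bucket itself can stay fully private.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-bucket", "Key": "images/photo.jpg"},
    ExpiresIn=3600,  # the URL is valid for one hour
)
print(url)  # embed as <img src="...">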
You might be able to use something like https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html#example-bucket-policies-use-case-4 (Restricting Access to a Specific HTTP Referrer).
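For reference, a referrer-restricted bucket policy along the lines of that example might look like the following; the bucket name and site URL are placeholders, and note that the Referer header is trivial to spoof, so this is a deterrent rather than real security:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowGetWithReferer",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*",
      "Condition": {
        "StringLike": {"aws:Referer": ["https://www.example.com/*"]}
      }
    }
  ]
}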
It's going to be difficult to ensure complete control, since S3 is a basic CDN without the ability to create/grant permissions on the fly, coupled with the fact that you want your <img /> tags to grab the content directly from S3.
If you are trying to restrict downloads, you'll need to set up some basic logic that grants your users some type of access token, which they provide when requesting content to download (this will require a Lambda script/DB, or a service that can pull the images down and serve them if the caller is authenticated).
It sounds like you'll need authenticated users to request access to content via an API, passing in an Authorization token which the API verifies before handing back the requested content.
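A sketch of that flow, assuming Flask and boto3; is_valid_token is a hypothetical stand-in for whatever auth scheme you use, and the bucket name is a placeholder:

import boto3
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
s3 = boto3.client("s3")

def is_valid_token(token):
    # Hypothetical check against your auth system (JWT, session store, ...).
    return token == "demo-token"  # placeholder logic

@app.route("/download-url/<path:key>")
def download_url(key):
    if not is_valid_token(request.headers.get("Authorization", "")):
        abort(401)
    # Hand back a short-lived pre-signed URL instead of proxying the bytes.
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "example-bucket", "Key": key},  # placeholder bucket
        ExpiresIn=300,
    )
    return jsonify({"url": url})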

How can I allow a 3rd party file upload to a private S3 bucket without using IAM?

Can I allow a 3rd party file upload to an S3 bucket without using IAM? I would like to avoid the hassle of sending them credentials for an AWS account, but still take advantage of the S3 UI. I have only found solutions for one or the other.
The pre-signed URL option sounded great, but it appears to only work with the SDKs, and I'm not about to tell my client to install Python on their computer just to upload a file.
The browser-based upload requires me to make my own front-end HTML form and run it on a server just to upload (lol).
Can I not simply create a pre-signed URL which takes the user to the S3 console and allows them to upload before the expiration time? Of course, making the bucket public is not an option either. Why is this so complicated!
Management Console
The Amazon S3 management console will only display S3 buckets that are associated with the AWS account of the user. It is also not possible to limit which buckets are displayed (it shows all buckets in the account, even those the user cannot access).
Thus, you certainly don't want to give them access to your AWS management console.
Pre-Signed URL
Your user does not require the AWS SDK to use a pre-signed URL. Rather, you must run your own system that generates the pre-signed URL and makes it available to the user (eg through a web page or API call).
Web page
You can host a static upload page on Amazon S3, but it will not be able to authenticate the user. Since you only wish to provide access to specific people, you'll need some code running on the back-end to authenticate them.
Generate...
You ask: "Can I not simply create a pre-signed url which navigates the user to the S3 console and allows them to upload before expiration time?"
Yes and no. Yes, you can generate a pre-signed URL. However, it cannot be used with the S3 console (see above).
Why is this so complicated?
Because security is important.
So, what to do?
A few options:
Make a bucket publicly writable, but not publicly readable. Tell your customer how to upload. The downside is that anyone could upload to the bucket (if they know about it), so it is only security by obscurity. But, it might be a simple solution for you.
Generate a long-lived pre-signed URL. (Note that with SigV4 signing, the maximum expiry is seven days.) Provide it to them, and they can upload (eg via a static HTML page that you give them; see the sketch after this list).
Generate some IAM User credentials for them, then have them use a utility like the AWS Command-Line Interface (CLI) or Cloudberry. Give them just enough credentials for upload access. This assumes you only have a few customers that need access.
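A sketch of the server side of the pre-signed URL option (the second one above), using boto3; the bucket and key are placeholders, and ExpiresIn is capped at seven days under SigV4:

import boto3

s3 = boto3.client("s3")

# Placeholder bucket/key. The recipient can upload with a plain HTTP PUT,
# e.g.: curl -X PUT --upload-file report.pdf "<url>"
url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "example-bucket", "Key": "uploads/report.pdf"},
    ExpiresIn=7 * 24 * 3600,  # SigV4 maximum: seven days
)
print(url)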
Bottom line: Security is important. Yet you wish to "avoid the hassle of sending them credentials", and you don't wish to run a system that performs authentication checks. You can't have security without doing some work, and the cost of poor security will be much more than the cost of implementing good security.
You could deploy a Lambda function that generates a pre-signed URL, then use that URL to upload the file. Here is an example:
https://aws.amazon.com/blogs/compute/uploading-to-amazon-s3-directly-from-a-web-or-mobile-application/
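In the spirit of that blog post, a minimal Lambda handler might look like the following; it returns a pre-signed POST (as the post does), and the BUCKET environment variable is an assumption:

import json
import os

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # API Gateway-style handler returning a pre-signed POST for one upload.
    params = event.get("queryStringParameters") or {}
    post = s3.generate_presigned_post(
        Bucket=os.environ["BUCKET"],  # assumed environment variable
        Key=params.get("key", "upload.bin"),
        ExpiresIn=300,  # the browser must start the upload within 5 minutes
    )
    # The client sends a multipart form with post["fields"] plus the file.
    return {"statusCode": 200, "body": json.dumps(post)}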