I uploaded an image to Google Cloud Storage, but someone used the image's URL on their website without my consent, which caused me to lose money. How can I prevent this?
As @Doug and @guillaume said in the comments, by default a Storage URL is accessible to anyone connected to the internet who has the URL.
If you want only entitled persons to have access to your Storage objects, you can use signed URLs.
A signed URL is a URL that provides limited permission and time to make a request. Signed URLs contain authentication information in their query string, allowing users without credentials to perform specific actions on a resource.
Here you can read more about what the process of generating signed URLs looks like and how to achieve it.
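For illustration, here is a minimal sketch of generating a V4 signed URL with the google-cloud-storage Python client; the bucket and object names are hypothetical:

```python
from datetime import timedelta

from google.cloud import storage

# Assumes credentials that are allowed to sign, e.g. a service account
# key file or the Service Account Token Creator role.
client = storage.Client()
blob = client.bucket("my-private-bucket").blob("images/photo.jpg")

# Anyone holding this URL can GET the object for the next 15 minutes.
url = blob.generate_signed_url(
    version="v4",
    expiration=timedelta(minutes=15),
    method="GET",
)
print(url)
```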
There are also other security considerations for Storage buckets that may be worth reading about. For example, if you do not need a 100% secure bucket, you can simply choose hard-to-guess names for your buckets and objects; that way it will be very difficult for someone who is not entitled to guess a link. Stronger solutions would be ACLs and different IAM role assignments.
This is driving me crazy…
I want to store images in Google Cloud Storage, and the images should be accessible only from our app, because an uploaded image may contain privacy-sensitive content.
I've been googling for the last couple of hours and haven't got a clue, and I'm feeling dumb…😫
ChatGPT suggested to create a signed URL, which makes sense, but do we need to go that far, to satisfy this seemingly common requirement?
My first intuition was that I could use IAM and a service account, but… it didn't lead anywhere for me.
If Google doesn’t offer this capability but AWS does, I want to hear that too.
You are right, there is no out-of-the-box solution for this. Google Cloud Storage security is mainly based on IAM roles and permissions. Using signed URLs could be a feasible solution to give a final user temporary access to a single object, but in your case, since you want your app to store and access objects in the bucket, you should just set up a service account for your app and grant it the right permissions according to your business needs.
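As a rough sketch (the key file, project, and object names are hypothetical), the app authenticates as its service account and reads objects directly, so the bucket never needs to be public:

```python
from google.cloud import storage
from google.oauth2 import service_account

# Hypothetical key file; grant the service account a role such as
# roles/storage.objectViewer on the bucket.
credentials = service_account.Credentials.from_service_account_file(
    "my-app-service-account.json"
)
client = storage.Client(project="my-project", credentials=credentials)

# The app fetches the image itself and serves it to its own
# authenticated users; the object is never exposed publicly.
image_bytes = (
    client.bucket("my-private-images")
    .blob("uploads/photo.jpg")
    .download_as_bytes()
)
```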
I've looked all over but can't seem to see how this might be done.
I have a GCP bucket, which is publicly accessible. I need a link to give to an associate which they can use to upload some files into that bucket. There is no need for authentication, the files are just public domain anyway. I need the process to be super simple for the associate.
Once the files are uploaded I can grab them and destroy the bucket/project anyway.
Is this possible?
Use a signed URL. From the documentation (emphasis mine):
In some scenarios, you might not want to require your users to have a Google account in order to access Cloud Storage, but you still want to control access using your application-specific logic. The typical way to address this use case is to provide a signed URL to a user, which gives the user read, write, or delete access to that resource for a limited time. You specify an expiration time when you create the signed URL. Anyone who knows the URL can access the resource until the expiration time for the URL is reached or the key used to sign the URL is rotated.
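As a sketch of how that could look for an upload (all names are hypothetical), you generate a V4 signed URL with method="PUT" and hand it to the associate. Note that each signed URL targets a single object name, so you would generate one URL per file:

```python
from datetime import timedelta

from google.cloud import storage

client = storage.Client()
blob = client.bucket("file-dropoff-bucket").blob("uploads/report.pdf")

# The associate can PUT one file to this exact object name for 24 hours.
url = blob.generate_signed_url(
    version="v4",
    expiration=timedelta(hours=24),
    method="PUT",
    content_type="application/octet-stream",
)
print(url)
```

The associate then needs nothing but a single curl -X PUT --upload-file request against that URL, sending the same Content-Type header the URL was signed with.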
Goal:
For example, users could create courses which have resources such as images, videos, etc.
I want to restrict access to them using signed cookies. i.e. resources on /courses/1 will only be accessible to logged-in users who have a valid signed cookie.
Background
I'll be creating a bucket of media files per course based on https://cloud.google.com/storage/docs/access-control#recommended_bucket_architecture.
Where I am stuck
How to add backend buckets to the load balancer dynamically, since I could only add them in the console.
How to use the same signing key for all buckets for easy maintenance https://cloud.google.com/cdn/docs/using-signed-cookies#creatingkeys. It seems like I need to manually create a key for each bucket.
So is there a standard way to do these or am I thinking about this whole architecture wrong since this won't scale without automation?
You will be limited to 50 path rules, as mentioned in the quotas, and therefore to 50 courses. I hope you expect more than this!
So this pattern isn't suitable for your use case. You need to use the same bucket and control access with a backend app, which then generates signed URLs for the resources requested by the users.
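A minimal sketch of that backend (Flask here, with a hypothetical user_can_access enrollment check and a hypothetical single-bucket layout) might look like:

```python
from datetime import timedelta

from flask import Flask, abort, redirect, session
from google.cloud import storage

app = Flask(__name__)
app.secret_key = "change-me"  # required for session support
client = storage.Client()
BUCKET = "courses-media"  # hypothetical: one bucket for all courses


def user_can_access(user_id, course_id):
    # Placeholder: replace with a real check against your
    # enrollment records.
    return user_id is not None


@app.route("/courses/<course_id>/media/<path:filename>")
def course_media(course_id, filename):
    if not user_can_access(session.get("user_id"), course_id):
        abort(403)
    blob = client.bucket(BUCKET).blob(f"courses/{course_id}/{filename}")
    # Short-lived signed URL; the objects themselves stay private.
    url = blob.generate_signed_url(version="v4", expiration=timedelta(minutes=10))
    return redirect(url)
```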
I want to upload some images on Amazon S3 and based on the user's subscription give them the access of viewing some portions of these images. After reading Amazon S3 documentation I have come up with these solutions:
Assigning each user in my application to one IAM user in Amazon S3, and then defining a user policy or bucket policy to manage who has access to what. But there are two drawbacks: first, user and bucket policies have a size limit, and since the number of users and images is very large, it is very likely that I would exceed that limit. Second, the number of IAM users per AWS account is capped at 5,000, and I would have more users than that in my application.
Amazon S3 makes it possible to define temporary security credentials that act the same as IAM users. I could require the client side to make a request to my server, create temporary credentials with a special policy for them, and pass the credentials back; they could then send requests directly to S3 and access their resources. But the problem is that these credentials last between 15 minutes and 1 hour, and therefore the clients would need to request my server at least every hour to get new ones.
Since I want to serve images, it is good practice to combine Amazon CloudFront and S3 to serve the content as quickly as possible. I have also read the CloudFront documentation on serving private content, and I found that their solution is signed URLs or signed cookies. I would deny all direct access to the S3 resources, so that CloudFront is the only one able to read data from S3, and every time a user signs in to my application I would send them the credentials they need to make a signed URL, or I would send them the necessary cookies. They could then request the required resources with that information, which would remain valid as long as they are signed in to my application. But I have some security concerns: since almost all of the access-control information is sent to the client (e.g., in cookies), they could easily modify it and grant themselves more permissions. This is a big concern, but I think I have to use CloudFront to decrease resource loading time.
I want to know which of these solutions you think is more reasonable and better than the others, and also whether there are other solutions, perhaps using other Amazon web services.
My own approach to serving private content on S3 is to use CloudFront with either signed URLs or signed cookies (or sometimes both). You should not use IAM users or temporary credentials for a large number of users, as in your case.
You can read more about this topic here:
Serving Private Content through CloudFront
Your choice of whether to use signed URLs or signed cookies depends on the following.
Choosing Between Signed URLs and Signed Cookies
CloudFront signed URLs and signed cookies provide the same basic functionality: they allow you to control who can access your content. If you want to serve private content through CloudFront and you're trying to decide whether to use signed URLs or signed cookies, consider the following.

Use signed URLs in the following cases:

- You want to use an RTMP distribution. Signed cookies aren't supported for RTMP distributions.
- You want to restrict access to individual files, for example, an installation download for your application.
- Your users are using a client (for example, a custom HTTP client) that doesn't support cookies.

Use signed cookies in the following cases:

- You want to provide access to multiple restricted files, for example, all of the files for a video in HLS format or all of the files in the subscribers' area of a website.
- You don't want to change your current URLs.
As for your security concerns, CloudFront uses the public key to validate the signature in the signed cookie and to confirm that the cookie hasn't been tampered with. If the signature is invalid, the request is rejected.
You can also follow the guidelines at the end of this page to prevent misuse of signed cookies.
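If it helps, here is a hedged sketch of generating a CloudFront signed URL in Python with botocore's CloudFrontSigner; the key pair ID, key file, and distribution domain are placeholders. Signed cookies are built from the same signed policy, just delivered as Set-Cookie headers instead of query parameters:

```python
import datetime

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def rsa_signer(message):
    # CloudFront requires an RSA SHA-1 signature made with the private
    # key matching the public key registered with the distribution.
    with open("cloudfront_private_key.pem", "rb") as f:
        private_key = serialization.load_pem_private_key(f.read(), password=None)
    return private_key.sign(message, padding.PKCS1v15(), hashes.SHA1())


signer = CloudFrontSigner("K2JCJMDEXAMPLE", rsa_signer)  # placeholder key pair ID
url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/private/image.jpg",
    date_less_than=datetime.datetime.utcnow() + datetime.timedelta(hours=1),
)
print(url)
```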
Let's say that I want to create a simplistic version of Dropbox' website, where you can sign up and perform operations on files such as upload, download, delete, rename, etc. - pretty much like in this question. I want to use Amazon S3 for the storage of the files. This is all quite easy with the AWS SDK, except for one thing: security.
Obviously user A should not be allowed to access user B's files. I can kind of add "security through obscurity" by handling permissions in my application, but it is not good enough to have public files and rely on that, because then anyone with the right URL could access files that they should not be able to. Therefore I have searched and looked through the AWS documentation for a solution, but I have been unable to find a suitable one. The problem is that everything I could find relates to permissions based on AWS accounts, and it is not appropriate for me to create many thousand IAM users. I considered IAM users, bucket policies, S3 ACLs, pre-signed URLs, etc.
I could indeed solve this by authorizing everything in my application and setting permissions on my bucket so that only my application can access the objects, and then having users download files through my application. However, this would put increased load on my application, where I really want people to download the files directly through Amazon S3 to make use of its scalability.
Is there a way that I can do this? To clarify, I want to give a given user in my application access to only a subset of the objects in Amazon S3, without creating thousands of IAM users, which is not so scalable.
Have the users download the files with the help of your application, but not through your application.
Provide each link as a link that points to an endpoint of your application. When a request comes in, use the user's session data to evaluate whether they are authorized to download the file.
If not, return an error response.
If so, pre-sign a download URL for the object, with a very short expiration time (e.g. 5 seconds) and redirect the user's browser with 302 Found and set the signed URL in the Location: response header. As long as the download is started before the signed URL expires, it won't be interrupted if the URL expires while the download is already in progress.
If the connection to your app, and the scheme of the signed URL are both HTTPS, this provides a substantial level of security against any unauthorized download, at very low resource cost.
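A compact sketch of that flow (Flask and boto3, with a hypothetical ownership check and bucket name) could look like this:

```python
import boto3
from flask import Flask, abort, redirect, session

app = Flask(__name__)
app.secret_key = "change-me"  # required for session support
s3 = boto3.client("s3")
BUCKET = "my-app-user-files"  # hypothetical bucket name


def user_owns_file(user_id, key):
    # Placeholder: check ownership in your application's database.
    return key.startswith(f"users/{user_id}/")


@app.route("/download/<path:key>")
def download(key):
    user_id = session.get("user_id")
    if not user_id or not user_owns_file(user_id, key):
        abort(403)
    # Pre-sign a very short-lived GET URL and hand the browser off to S3.
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=5,  # seconds; the download must start before this
    )
    return redirect(url, code=302)
```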