Hosting static images in AWS S3

I am integrating the ability for users of my web app to be able to upload images to my site. I want to store these images in an AWS S3 bucket, but I need to be careful with privacy and making sure only people that should have access to these files can see them.
Users should have access to these files via <img src="s3_link"> but should not be able to access the bucket directly or list the objects within.
I can accomplish this by making the bucket public but this seems dangerous.
How do I set up a proper bucket policy to allow these images to be loaded onto a webpage in an <img> tag?

S3 supports pre-signed URLs, which can be used to restrict access to a specific user.
See: Share an Object with Others
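For example, with the Python SDK (boto3) you can generate a time-limited link for a private object in a couple of lines (the bucket name, key and expiry below are just placeholder values):

```python
import boto3

s3 = boto3.client("s3")

# Generate a URL that grants read access to one private object for 5 minutes.
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "my-image-bucket", "Key": "uploads/user123/photo.png"},
    ExpiresIn=300,  # seconds
)

# The resulting URL can be dropped straight into an <img src="..."> tag.
print(url)
```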

You might be able to use something like https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html#example-bucket-policies-use-case-4 (Restricting Access to a Specific HTTP Referrer).
It's going to be difficult to ensure complete control, since S3 is essentially a basic CDN without the ability to create/grant permissions on the fly, coupled with the fact that you want your <img /> tags to grab the content directly from S3.
If you are trying to restrict downloads, you'll need to set up some basic logic to grant your users some type of access token that they provide when requesting content to download (this will require a Lambda function/DB, or a service that can pull the images down and then serve them if the caller is authenticated).
It sounds like you'll need authenticated users to request access to content via an API, passing in an Authorization token; the API then verifies whether they have access to pull down the requested content.
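As a rough sketch of the referrer-based bucket policy from the linked example (the bucket name and site URL below are placeholders, and keep in mind that the Referer header is easy to spoof, so this only deters casual hotlinking rather than providing real access control):

```python
import json
import boto3

s3 = boto3.client("s3")

# Allow public GetObject only when the request carries a Referer from our site.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowGetRequestsFromMySite",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-image-bucket/*",
            "Condition": {
                "StringLike": {"aws:Referer": ["https://www.example.com/*"]}
            },
        }
    ],
}

s3.put_bucket_policy(Bucket="my-image-bucket", Policy=json.dumps(policy))
```

Note that the bucket's Block Public Access settings have to permit a policy like this, since it grants access to an anonymous principal.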

Related

Offload Multiple S3 Downloads

I'm working to build a web portal that displays the contents of an S3 bucket to authenticated users. Users would be allowed to download objects via presigned URLs so the content/bandwidth wouldn't need to be ushered through the web portal and credentials wouldn't need to be passed to the client. This works well for single objects. However, I'm uncertain how to leverage presigned URLs when users want to download many objects e.g. all objects with a specific prefix. It seems the issue may be more of a limitation with standard web technologies i.e. multiple downloads triggered by a single action.
I've seen some apps dynamically create a .zip containing all the objects, but I'm trying to avoid moving data through the portal. I also found AWS POST Policies leveraging condition keys like 'starts-with' but it doesn't look like a POST Policy will help with getting objects. The STS AssumeRole could be used to generate temporary/limited credentials to download the objects of a specific prefix, but the user would still need to download each object. Am I overlooking a better solution?
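One way to apply the per-object presigned URL approach described above is to have the portal list the keys under the prefix and hand back one presigned URL per object for the client to fetch; a minimal sketch with boto3 (bucket and prefix names are placeholders):

```python
import boto3

s3 = boto3.client("s3")

def presigned_urls_for_prefix(bucket: str, prefix: str, expires: int = 600):
    """Return one presigned GET URL per object under the given prefix."""
    urls = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            urls.append(
                s3.generate_presigned_url(
                    "get_object",
                    Params={"Bucket": bucket, "Key": obj["Key"]},
                    ExpiresIn=expires,
                )
            )
    return urls

# Hypothetical usage: the portal returns these URLs and the client downloads each one.
# urls = presigned_urls_for_prefix("my-portal-bucket", "reports/2023/")
```

The client still has to trigger one download per URL, which is the multiple-downloads limitation described in the question.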

Amazon S3: Allow Dynamic Groups of Users Access

Is it possible in S3 to allow dynamic groups of users access to resources in a bucket? For example, I know you can use Cognito to restrict access of users' content to the respective users. However, I don't know how to apply some dynamic rule which would require DB access. Some example scenarios I can think of:
Instagram-like functionality, users can connect with friends and upload photos. Only friends can view a user's photos.
Project-level resources. Multiple users can be added to a project, and only members of the project may view its resources. Projects can be created and managed by users and so are not pre-defined.
Users have private file storage, but can share files with other users.
Now the obvious 1st layer of protection would be the front-end simply not giving the links to these resources to unauthorized users. But suppose in the second scenario, the S3 link to SECRET_COMPANY_DATA.zip gets leaked. I would hope that when someone tries to access that link, it only succeeds if they're in the associated project and have sufficient privileges.
I think, to some degree, this can be handled with adding custom claims to the cognito token, e.g. you could probably add a project_id claim and do a similar path-based Allow on it. But if a user can be part of multiple projects, this seems to go out the window.
It seems to me like this should be a common enough requirement that there is a simple solution. Any advice?
The best approach would be:
Keep your bucket private, with no Bucket Policy
Users authenticate to your app
When a user requests access to a file stored in Amazon S3, the app should check if they are permitted to access the file. This could check who 'owns' the file, their list of friends, their projects, etc. You would program all this logic in your own app.
If the user is authorised to access the file, your app should generate an Amazon S3 pre-signed URL, which is a time-limited URL that provides temporary access to a private object. This URL can be inserted into HTML, such as in <a href="..."> or <img src="...">.
When the user clicks the link, Amazon S3 will verify the signature and will confirm that the link has not yet expired. If everything is okay, it will return the file to the user's browser.
This approach means that your app can control all the authentication and authorization, while S3 will be responsible for serving the content to the user.
If another person got access to the pre-signed URL, then they can also download the content. Therefore, keep the expiry time to a minimum (a few minutes). After this period, the URL will no longer work.
Your app can generate the pre-signed URL in a few lines of code. It does not require a call to AWS to create the URL.
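A minimal sketch of that flow in Python with boto3; the user_can_access check, bucket name and key layout are hypothetical stand-ins for your own business logic and storage layout:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-private-photos"  # placeholder bucket name

def user_can_access(user_id: str, object_key: str) -> bool:
    # Hypothetical: consult your own database for ownership, friend lists,
    # project membership, etc. This logic lives entirely in your app.
    raise NotImplementedError

def image_tag_for(user_id: str, object_key: str) -> str:
    if not user_can_access(user_id, object_key):
        raise PermissionError("Not allowed to view this object")
    # Signing happens locally in the SDK; no call to AWS is made here.
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": object_key},
        ExpiresIn=180,  # keep the expiry short, e.g. a few minutes
    )
    return f'<img src="{url}">'
```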

AWS S3 Per Bucket Permission for non-AWS accounts

This question is in the same line of thought as Is it possible to give token access to link to amazon s3 storage?.
Basically, we are building an app where groups of users can save pictures, that should be visible only to their own group.
We are thinking of using either a folder per user group, or it could even be an independent S3 bucket per user group.
The rules are very simple:
Any member of Group A should be able to add a picture to the Group A folder (or bucket)
Any member of Group A should be able to read all pictures of the Group A folder (or bucket)
No one outside Group A should have access to any of Group A's pictures
However, the solution used by the post mentioned above (temporary pre-signed URLs) is not usable, as we need the client to be able to write files on his bucket as well as read the files on his bucket, without having any access to any other bucket. The file write part is the difficulty here and the reason why we cannot use pre-signed URLs.
Additionally, the solutions from various AWS security posts that we read (for example https://aws.amazon.com/blogs/security/writing-iam-policies-grant-access-to-user-specific-folders-in-an-amazon-s3-bucket/) do not apply, because they show how to control access for IAM groups or for other AWS accounts. In our case, a group of users does not have an IAM account...
The only solutions that we see so far are either insecure or wasteful:
Open buckets to everybody and rely on obfuscating the folder / bucket names (lots of security issues, including the ability to brute force and read / overwrite anybody's files)
Have a back-end that acts as a facade between the app and S3, validating the accesses. S3 has no public access; the bucket is only opened to an IAM role that the back-end has. However, this is a big waste of bandwidth, since all the data would transit through the EC2 instance(s) of that back-end
Any better solution?
Is this kind of customized access doable with S3?
The correct way to achieve your goal is to use Amazon S3 pre-signed URLs, which are time-limited URLs that provide temporary access to a private object.
You can also Upload objects using presigned URLs - Amazon Simple Storage Service.
The flow is basically:
Users authenticate to your back-end app
When a user wants to access a private object, the back-end verifies that they are permitted to access the object (using your own business logic, such as the Groups you mention). If they are allowed to access the object, the back-end generates a pre-signed URL.
The pre-signed URL is returned to the user's browser, for example by putting it in an <img src="..."> tag.
When the user's browser requests the object, S3 verifies the signature in the pre-signed URL. If it is valid and the time period has not expired, S3 provides the requested object. (Otherwise, it returns Access Denied.)
A similar process is used when users upload objects:
Users authenticate to your back-end app
They request the opportunity to upload a file
Your back-end app generates an S3 Pre-signed URL that is included in the HTML page for upload
Your back-end should track the object in a database so it knows who performed the upload and can keep track of who is permitted to access the object (eg particular users or groups)
Your back-end app is fully responsible for deciding whether particular users can upload/download objects. It then hands off the actual upload/download process to S3 via the pre-signed URLs. This reduces load on your server because all uploads/downloads go direct to/from S3.
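A minimal sketch of the upload side with boto3 (bucket name, key and limits are placeholders; generate_presigned_post is an alternative that also lets you constrain the upload, e.g. a maximum file size):

```python
import boto3

s3 = boto3.client("s3")

# Presigned PUT: the browser uploads the file directly to S3 using this URL.
upload_url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={
        "Bucket": "my-group-photos",           # placeholder bucket
        "Key": "groups/group-a/photo123.jpg",  # key decided by your back-end
        "ContentType": "image/jpeg",
    },
    ExpiresIn=300,
)

# Presigned POST: lets you add conditions, such as a 5 MB size limit.
post = s3.generate_presigned_post(
    Bucket="my-group-photos",
    Key="groups/group-a/photo123.jpg",
    Conditions=[["content-length-range", 0, 5 * 1024 * 1024]],
    ExpiresIn=300,
)
# 'post' contains the form 'url' and 'fields' the browser needs for the upload.
```

Either way, the back-end picks the key, so it can record the object in its database as part of the same request.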

Difficulty of getting image from public S3 bucket?

I want to use S3 to store user uploaded images. Some images like profile pictures and other thumbnails should be visible to anyone. However, I also want to have some images to be visible to users in the group that the image was "posted" to.
My app will handle all of the logic to decide whether or not a certain user has access to the image.
My question is: with a public S3 bucket and my app controlling the visibility of the images, how hard would it potentially be for someone to see images that they generally don't have access to?
Is there a better way to set up an S3 bucket to meet these requirements?
Thanks.
The best approach is:
Do not grant public access to any of the images/objects in Amazon S3
Your application determines at all times whether a user should be allowed access
For users who are allowed access to an image, create an Amazon S3 Pre-Signed URL, which is a time-limited URL that will grant access to the object.
Your application can generate the pre-signed URL in a couple of lines of code, without requiring a call to AWS.
This way, all of your security is maintained by the application rather than by selectively making some objects public, and there is no way for people to gain unauthorized access to objects.

How to prevent brute force file downloading on S3?

I'm storing user images on S3 which are readable by default.
I need to access the images directly from the web as well.
However, I'd like to prevent hackers from brute forcing the URL and downloading my images.
For example, my S3 image url is at http://s3.aws.com/test.png
Could they brute-force test and download all the contents?
I cannot set the items inside my buckets to be private because I need to access directly from the web.
Any idea how to prevent it?
Using good security does not impact your ability to "access directly from the web". All content in Amazon S3 can be accessed from the web if appropriate permissions are used.
By default, all content in Amazon S3 is private.
Permissions to access content can then be assigned in several ways:
Directly on the object (eg make an object 'public')
Via a Bucket Policy (eg permit access to a subdirectory if accessed from a specific range of IP addresses, during a particular time of day, but only via HTTPS); see the sketch after this list
Via a policy assigned to an IAM User (which requires the user to authenticate when accessing Amazon S3)
Via a time-limited Pre-signed URL
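To illustrate the Bucket Policy option above, here is a sketch that allows GetObject on one prefix only from a specific IP range and only over HTTPS (bucket name, prefix and CIDR range are placeholders; the time-of-day condition is omitted for brevity):

```python
import json
import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowGetFromOfficeOverHttps",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-bucket/public-thumbnails/*",
            "Condition": {
                "IpAddress": {"aws:SourceIp": "203.0.113.0/24"},
                "Bool": {"aws:SecureTransport": "true"},
            },
        }
    ],
}

s3.put_bucket_policy(Bucket="my-bucket", Policy=json.dumps(policy))
```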
The most interesting is the Pre-Signed URL. This is a calculated URL that permits access to an Amazon S3 object for a limited period of time. Applications can generate a Pre-signed URL and include the link in a web page (eg as part of an <img> tag). That way, your application determines whether a user is permitted to access an object and can limit the time duration that the link will work.
You should keep your content secure, and use Pre-signed URLs to allow access only for authorized visitors to your web site. You do have to write some code to make it work, but it's secure.
