Restrict Access to S3 bucket on AWS

I am storing files in an S3 bucket and I want access to the files to be restricted.
Currently, anyone with the URL to a file is able to access it.
I want a behavior where a file can be accessed only through my application. The application is hosted on EC2.
Following are two possible ways I could find:
Use the "referer" key in the bucket policy.
Change "allowed origin" in the CORS configuration.
Which of the above two should be used, given the fact that 'referer' could be spoofed in the request header?
Also, can CloudFront play a role here?

I would recommend using a Pre-Signed URL, which permits access to private objects stored in Amazon S3. It is a means of keeping objects secure while still granting temporary access to a specific object.
It is created via a hash calculation based on the object path, the expiry time and a Secret Access Key belonging to an account that has permission to access the Amazon S3 object. The result is a time-limited URL that grants access to the object. Once the expiry time passes, the URL no longer returns the object.
Start by removing existing permissions that grant access to these objects. Then generate Pre-Signed URLs to grant access to private content on a per-object basis, calculated every time you reference an S3 object. (Don't worry, it's fast to do!)
See documentation: Sample code in Java
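If Java is not your language, the same thing can be done with any of the SDKs. Here is a minimal sketch using Python and boto3 (the bucket and key names are made up for illustration, and credentials are assumed to come from the usual environment or instance-role configuration):

    import boto3

    BUCKET = "my-private-bucket"          # hypothetical bucket name
    KEY = "reports/2024/summary.pdf"      # hypothetical object key

    # Credentials are picked up from the environment, ~/.aws/credentials,
    # or the EC2 instance role.
    s3 = boto3.client("s3")

    # Generate a URL that allows GET access to the private object for 15 minutes.
    url = s3.generate_presigned_url(
        ClientMethod="get_object",
        Params={"Bucket": BUCKET, "Key": KEY},
        ExpiresIn=900,  # seconds
    )
    print(url)

After the 15 minutes expire, the same URL returns an error instead of the object.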

When dealing with a private S3 bucket, you'll want to use an AWS SDK appropriate for your use case.
Here is a list of SDKs for many different languages: http://aws.amazon.com/tools/
Within each SDK, you can find sample calls to S3.
If you are trying to make private calls via browser-side JavaScript, you can use CORS.
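As a sketch of the CORS side (the bucket name and allowed origin below are assumptions), the "allowed origin" restriction mentioned in the question can also be set programmatically, for example with boto3's put_bucket_cors:

    import boto3

    s3 = boto3.client("s3")

    # Hypothetical bucket name and application origin.
    s3.put_bucket_cors(
        Bucket="my-private-bucket",
        CORSConfiguration={
            "CORSRules": [
                {
                    "AllowedOrigins": ["https://app.example.com"],  # only your app's origin
                    "AllowedMethods": ["GET", "PUT"],
                    "AllowedHeaders": ["*"],
                    "MaxAgeSeconds": 3000,
                }
            ]
        },
    )

Keep in mind that CORS is enforced by browsers, not by S3's access control, so it complements proper permissions (such as pre-signed URLs) rather than replacing them.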

Related

AWS S3 Per Bucket Permission for non-AWS accounts

This question is in the same line of thought as Is it possible to give token access to link to amazon s3 storage?.
Basically, we are building an app where groups of users can save pictures, that should be visible only to their own group.
We are thinking of using either a folder per user group, or it could even be an independent S3 bucket per user group.
The rules are very simple:
Any member of Group A should be able to add a picture to the Group A folder (or bucket)
Any member of Group A should be able to read all pictures of the Group A folder (or bucket)
No member outside Group A should have access to any of the pictures of Group A
However, the solution used by the post mentioned above (temporary pre-signed URLs) is not usable, as we need the client to be able to write files on his bucket as well as read the files on his bucket, without having any access to any other bucket. The file write part is the difficulty here and the reason why we cannot use pre-signed URLs.
Additionally, the solutions from the various AWS security posts that we read (for example https://aws.amazon.com/blogs/security/writing-iam-policies-grant-access-to-user-specific-folders-in-an-amazon-s3-bucket/) do not apply, because they show how to control access for IAM groups or for other AWS accounts. In our case, a group of users does not have an IAM account...
The only solutions that we see so far are either insecure or wasteful:
Open the buckets to everybody and rely on obfuscating the folder / bucket names (lots of security issues, including the ability to brute force and read / overwrite anybody's files)
Have a back-end that acts as a facade between the app and S3, validating the accesses. S3 has no public access; the bucket is only opened to an IAM role that the back-end has. However, this is a big waste of bandwidth, since all the data would transit through the EC2 instance(s) of that back-end
Any better solution?
Is this kind of customized access doable with S3?
The correct way to achieve your goal is to use Amazon S3 pre-signed URLs, which are time-limited URLs that provide temporary access to a private object.
You can also use them for uploads; see Upload objects using presigned URLs - Amazon Simple Storage Service.
The flow is basically:
Users authenticate to your back-end app
When a user wants to access a private object, the back-end verifies that they are permitted to access the object (using your own business logic, such as the Groups you mention). If they are allowed to access the object, the back-end generates a pre-signed URL.
The pre-signed URL is returned to the user's browser, for example by putting it in an <img src="..."> tag.
When the user's browser requests the object, S3 verifies the signature in the pre-signed URL. If it is valid and the time period has not expired, S3 provides the requested object. (Otherwise, it returns Access Denied.)
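As a rough sketch of steps 2-3 of the download flow above (the bucket name, key layout and user.groups attribute are assumptions standing in for your own data model):

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "group-pictures"  # hypothetical bucket name

    def get_picture_url(user, group_id, picture_key):
        """Return a short-lived pre-signed URL, or None if the user is not in the group."""
        if group_id not in user.groups:   # your own business logic goes here
            return None
        return s3.generate_presigned_url(
            ClientMethod="get_object",
            Params={"Bucket": BUCKET, "Key": f"{group_id}/{picture_key}"},
            ExpiresIn=300,                # 5 minutes is plenty for an <img src="..."> tag
        )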
A similar process is used when users upload objects:
Users authenticate to your back-end app
They request the opportunity to upload a file
Your back-end app generates an S3 Pre-signed URL that is included in the HTML page for upload
Your back-end should track the object in a database so it knows who performed the upload and can keep track of who is permitted to access the object (eg particular users or groups)
Your back-end app is fully responsible for deciding whether particular users can upload/download objects. It then hands off the actual upload/download process to S3 via the pre-signed URLs. This reduces load on your server because all uploads/downloads go directly to/from S3.
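The upload side might look something like this sketch (again, the bucket name, user object and the database call are placeholders for your own back-end):

    import uuid
    import boto3

    s3 = boto3.client("s3")
    BUCKET = "group-pictures"  # hypothetical bucket name

    def create_upload_url(user, group_id):
        """Generate a pre-signed PUT URL for a new object owned by the user's group."""
        if group_id not in user.groups:
            raise PermissionError("user is not a member of this group")
        key = f"{group_id}/{uuid.uuid4()}.jpg"
        url = s3.generate_presigned_url(
            ClientMethod="put_object",
            Params={"Bucket": BUCKET, "Key": key},
            ExpiresIn=300,
        )
        # Record the pending object in your own database here, eg
        # save_object_record(key=key, owner=user.id, group=group_id)  # placeholder
        return url

The client then PUTs the file body directly to the returned URL, so the bytes never pass through your EC2 instance.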

S3: How to grant public write access to an existing bucket file, but not the putObject permission (Private CRUD, Public Read/Update)

So, I want to have a service that creates files in an S3 bucket with specific links, and then allow anyone with a link to a file to write to the file and read it.
But creating files must not be a public privilege; only editing/reading already existing files should be, given you have the link.
Is this possible with a bucket policy? Basically allowing one service CRUD privileges but having public RU privileges.
You will need to write such a service yourself.
First, please note that there is no difference between 'Create' and 'Update' in Amazon S3 -- both use a PutObject operation. Objects cannot be 'edited' -- they can only be overwritten.
You can achieve your goal for Reading, by using public objects with obfuscated URLs -- as long as somebody knows the URL, they could access the object. Not a perfect means of security, but that is your choice.
You do not want to grant public permission to create objects in a bucket, otherwise anybody would be able to upload any files to the bucket (eg copyrighted movies) and you would be paying the cost of storage and data transfer.
The safer way to permit uploads is to have users authenticate to your back-end, and then your back-end can generate an Amazon S3 pre-signed URL that can be used to upload to the bucket. This pre-signed URL can specify limitations such as file size and the filename of the upload.
For more details, see: Uploading objects using presigned URLs - Amazon Simple Storage Service
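As an illustration of those limitations (the bucket, key and size limit below are invented): the file-size restriction in particular requires the pre-signed POST form rather than a plain pre-signed PUT URL, for example via boto3's generate_presigned_post:

    import boto3

    s3 = boto3.client("s3")

    # Hypothetical bucket and object key; the key is chosen by the back-end,
    # not by the client, which fixes the filename of the upload.
    response = s3.generate_presigned_post(
        Bucket="my-private-bucket",
        Key="uploads/user-123/avatar.png",
        Conditions=[
            ["content-length-range", 1, 5 * 1024 * 1024],  # 1 byte to 5 MB
        ],
        ExpiresIn=300,
    )

    # response["url"] is the POST endpoint; response["fields"] must be sent as
    # form fields alongside the file in the multipart/form-data upload.
    print(response["url"], response["fields"])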

how to assign Amazon S3 objects permissions to a particular IAM group?

How to allow read/write/delete etc. permissions to users in a particular IAM group for a specific Amazon S3 object/file.
If you wish to control access to "millions" of individual files where access is not based upon the path (directory/folder) of the files, then you will need to create your own authentication method.
This can be done by using an Amazon S3 Pre-signed URL. Basically:
Users access your application
When they request access to a secure file (or, for example, when the application generates an HTML page that includes a link to such a file, or even a reference in an Image tag), the application generates a time-limited pre-signed URL
Users can use this link/URL to access the object in Amazon S3
After the expiry period, the URL no longer works
This gives your application full control over whether a user can access an object.
The only alternative, if you were to use IAM, would be to grant access based upon the path of the object, which is not a good method for assigning access to individual objects.
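For completeness, a path-based IAM policy of the kind just described might look like the following sketch (the group, bucket and prefix names are invented):

    import json
    import boto3

    iam = boto3.client("iam")

    # Grant the members of a hypothetical IAM group access to one prefix only.
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
                "Resource": "arn:aws:s3:::my-private-bucket/team-reports/*",
            }
        ],
    }

    iam.put_group_policy(
        GroupName="report-readers",
        PolicyName="team-reports-access",
        PolicyDocument=json.dumps(policy),
    )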

How to prevent brute force file downloading on S3?

I'm storing user images on S3 which are readable by default.
I need to access the images directly from the web as well.
However, I'd like to prevent hackers from brute forcing the URL and downloading my images.
For example, my S3 image URL is http://s3.aws.com/test.png
Can they brute force "test" and download all of the contents?
I cannot set the items inside my buckets to be private because I need to access directly from the web.
Any idea how to prevent it?
Using good security does not impact your ability to "access directly from the web". All content in Amazon S3 can be accessed from the web if appropriate permissions are used.
By default, all content in Amazon S3 is private.
Permissions to access content can then be assigned in several ways:
Directly on the object (eg make an object 'public')
Via a Bucket Policy (eg permit access to a subdirectory if accessed from a specific range of IP addresses, during a particular time of day, but only via HTTPS)
Via a policy assigned to an IAM User (which requires the user to authenticate when accessing Amazon S3)
Via a time-limited Pre-signed URL
The most interesting is the Pre-Signed URL. This is a calculated URL that permits access to an Amazon S3 object for a limited period of time. Applications can generate a Pre-Signed URL and include the link in a web page (eg as part of an <img> tag). That way, your application determines whether a user is permitted to access an object and can limit the time duration that the link will work.
You should keep your content secure, and use Pre-signed URLs to allow access only for authorized visitors to your web site. You do have to write some code to make it work, but it's secure.

Amazon S3 download authentication

I have created a bucket in Amazon S3 and have uploaded 2 files to it and made them public. I have the links through which I can access them from anywhere on the Internet. I now want to put some restriction on who can download the files. Can someone please help me with that? I did try the documentation, but got confused.
I want that at the time of download using the public link it should ask for some credentials or something to authenticate the user at that time. Is this possible?
By default, all objects in Amazon S3 are private. You can then add permissions so that people can access your objects. This can be done via:
Access Control List permissions on individual objects
A Bucket Policy
IAM Users and Groups
A Pre-Signed URL
As long as at least one of these methods is granting access, your users will be able to access the objects from Amazon S3.
1. Access Control List on individual objects
The Make Public option in the Amazon S3 management console will grant Open/Download permissions to all Internet users. This can be used to grant public access to specific objects.
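Programmatically, the console's Make Public on a single object corresponds to a public-read ACL, roughly like this sketch (bucket and key are hypothetical, and the bucket's Block Public Access settings must allow ACLs):

    import boto3

    s3 = boto3.client("s3")

    # Grant Open/Download (public read) on one specific object.
    s3.put_object_acl(
        Bucket="my-public-assets",   # hypothetical bucket
        Key="images/logo.png",       # hypothetical key
        ACL="public-read",
    )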
2. Bucket Policy
A Bucket Policy can be used to grant access to a whole bucket or a portion of a bucket. It can also be used to specify limits to access. For example, a policy could make a specific directory within a bucket public to users from a specific range of IP addresses, during particular times of the day, and only when accessing the bucket via SSL.
A bucket policy is a good way to grant public access to many objects (eg a particular directory) without having to specify permissions on each individual object. This is commonly used for static websites served out of an S3 bucket.
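As an illustration, a bucket policy along those lines might be applied like this sketch (the bucket name, prefix and IP range are placeholders):

    import json
    import boto3

    s3 = boto3.client("s3")

    # Allow public GET on one directory, but only from a given IP range and over HTTPS.
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ReadPrefixFromOfficeOverTls",
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::my-example-bucket/public-docs/*",
                "Condition": {
                    "IpAddress": {"aws:SourceIp": "203.0.113.0/24"},
                    "Bool": {"aws:SecureTransport": "true"},
                },
            }
        ],
    }

    s3.put_bucket_policy(Bucket="my-example-bucket", Policy=json.dumps(policy))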
3. IAM Users and Groups
This is similar to defining a Bucket Policy, but permissions are assigned to specific Users or Groups of users. Thus, only those users have permission to access the objects. Users must authenticate themselves when accessing the objects, so this is most commonly used when accessing objects via the AWS API, such as using the aws s3 commands from the AWS Command-Line Interface (CLI).
Rather than being prompted to authenticate, users must provide the authentication when making the API call. A simple way of doing this is to store user credentials in a local configuration file, which the CLI will automatically use when calling the S3 API.
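The SDKs read the same local configuration file as the CLI, so a call like the following sketch (with a hypothetical profile name, bucket and key) is signed with the stored credentials and never prompts interactively:

    import boto3

    # "analyst" is a hypothetical profile defined in ~/.aws/credentials.
    session = boto3.Session(profile_name="analyst")
    s3 = session.client("s3")

    # The request is signed with that profile's credentials; access succeeds
    # only if the IAM policies attached to the user permit it.
    s3.download_file("my-private-bucket", "reports/2024/summary.pdf", "summary.pdf")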
4. Pre-Signed URL
A Pre-Signed URL can be used to grant access to S3 objects as a way of "overriding" access controls. A normally private object can be accessed via a URL by appending an expiry time and signature. This is a great way to serve private content without requiring a web server.
Typically, an application constructs a Pre-Signed URL when it wishes to grant access to an object. For example, let's say you have a photo-sharing website and a user has authenticated to your website. You now wish to display their pictures in a web page. The pictures are normally private, but your application can generate Pre-Signed URLs that grant them temporary access to the pictures. The Pre-Signed URL will expire after a particular date/time.
Regarding the pre-signed URL: the signature is carried in the URL's query string (or in request headers for some signing methods), and when the request is made over HTTPS/TLS it is encrypted in transit along with the rest of the request. But do check for yourself.