I have a s3 bucket that is private and I want specific user to have access to some objects in this bucket. What is the correct way to do that?
For individual objects, you should use a pre-signed URL.
It allows the user who accesses the URL to issue a request as the principal who pre-signed it (inheriting the permissions of the IAM user that generated the URL). It can be generated with the SDK or the CLI. It is valid for 3600 seconds by default, but you can change this duration.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-presigned-url.html
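As a rough illustration, here is how a pre-signed GET URL could be generated with boto3 (the Python SDK); the bucket name, key, and 900-second expiry below are placeholders, not values from your setup:

```python
import boto3

# Assumes credentials are already available (environment, shared config, or an IAM role).
s3 = boto3.client("s3")

# ExpiresIn overrides the default 3600-second lifetime.
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "my-private-bucket", "Key": "reports/2023/summary.pdf"},
    ExpiresIn=900,  # seconds
)
print(url)  # Anyone holding this URL can GET the object until it expires.
```

The CLI equivalent would be something like `aws s3 presign s3://my-private-bucket/reports/2023/summary.pdf --expires-in 900`.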
For multiple objects (if you want a path with a wildcard), you can use signed cookies. This requires you to first put a CloudFront distribution in front of your S3 bucket.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-signed-cookies.html
CloudFront can also provide signed URLs, which are different from S3 pre-signed URLs: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-signed-urls.html
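For comparison, a CloudFront signed URL can be produced with botocore's CloudFrontSigner. This is only a sketch: it assumes you have already registered a CloudFront key pair / trusted key group, and the domain, key ID, and key file path below are placeholders.

```python
import datetime

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def rsa_signer(message):
    # Sign the policy with the private key matching the CloudFront public key.
    with open("cloudfront_private_key.pem", "rb") as f:  # placeholder path
        private_key = serialization.load_pem_private_key(f.read(), password=None)
    return private_key.sign(message, padding.PKCS1v15(), hashes.SHA1())


signer = CloudFrontSigner("K2ABCDEFGHIJKL", rsa_signer)  # placeholder key ID

signed_url = signer.generate_presigned_url(
    "https://d1234567890abc.cloudfront.net/private/video.mp4",  # placeholder URL
    date_less_than=datetime.datetime.utcnow() + datetime.timedelta(hours=1),
)
print(signed_url)
```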
Related
My website has some files hosted in AWS S3 buckets, but I need to restrict access to the S3 object URLs to logged-in users of the website only, similar to how Google Drive works. In other words, anyone trying to access the URL of any file in our S3 bucket needs to be logged in to the website first.
Is this possible?
Thanks
If you want to restrict access to the S3 objects, don't make the objects public and don't use the public URLs shown on AWS S3 console.
S3 provides an option to generate pre-signed URLs for downloading S3 objects. Once a user logs in to your website and requests to download an S3 object, make a request to S3 to generate this pre-signed URL. Clicking on the pre-signed URL will download the object.
With pre-signed URLs, you can configure additional options like expiry time, so that these URLs are more secure.
You can find more info about pre-signed URLs and their implementation here.
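As a sketch of that flow: after your application has authenticated the user, it could generate a short-lived pre-signed URL on demand with boto3. The bucket, key, and filename below are hypothetical.

```python
import boto3

s3 = boto3.client("s3")

def download_url_for(user_file_key: str) -> str:
    """Return a short-lived pre-signed URL for an object the logged-in user may download."""
    return s3.generate_presigned_url(
        "get_object",
        Params={
            "Bucket": "my-app-files",  # placeholder bucket
            "Key": user_file_key,
            # Optional: force a "Save as..." dialog instead of inline display.
            "ResponseContentDisposition": 'attachment; filename="download.pdf"',
        },
        ExpiresIn=300,  # 5 minutes is usually enough for a click-to-download flow
    )
```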
If you happen to use AWS Cognito for login/logout functionality, you can assign IAM roles to logged-in users.
That way, access to the S3 bucket can be restricted through those IAM roles.
AWS Amplify would be a good fit for this use-case.
I need to restrict access to S3 objects using CloudFront, so the users will be hitting the CloudFront URL instead of S3.
How do I specify which users can access the CloudFront URL?
I am aware of OAI and the related bucket access, but that does not allow me to restrict the user group.
I would use Signed URLs for this purpose. You can generate the URL for your specific user, share it with them, and limit access to that URL with the constraints available.
In one case I generated a very short-lived Signed URL and redirected the user to that URL, so it essentially only worked for the user who made the request. Limiting the lifetime to a few seconds and access to the client's IP address was sufficient for my case.
AWS docs here on Private Content: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html
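For reference, a custom policy like the one described (very short expiry plus a client IP restriction) might look like this with botocore's CloudFrontSigner. All names, the IP address, and the key material are placeholders, so treat this as a sketch rather than a drop-in solution.

```python
import datetime

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def rsa_signer(message):
    with open("cloudfront_private_key.pem", "rb") as f:  # placeholder path
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())


signer = CloudFrontSigner("K2ABCDEFGHIJKL", rsa_signer)  # placeholder key ID

resource = "https://d1234567890abc.cloudfront.net/private/report.pdf"  # placeholder

# Custom policy: valid for 30 seconds, only from the requesting client's IP.
policy = signer.build_policy(
    resource=resource,
    date_less_than=datetime.datetime.utcnow() + datetime.timedelta(seconds=30),
    ip_address="203.0.113.42/32",  # the client IP captured from the request
)

redirect_url = signer.generate_presigned_url(resource, policy=policy)
# Redirect the authenticated user to redirect_url immediately.
```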
Is it possible to PUT to S3 using a presigned key-starts-with policy to allow upload of multiple or arbitrarily named files?
This is easy using the browser-based PresignedPost technique, but I've been unable to find a way to use a plain PUT to upload arbitrary files whose keys start with the same prefix.
This isn't possible... not directly.
POST uploads are unique in their support for an embedded policy document, which allows logic like starts-with.
PUT and all other requests require the signature to precisely match the request, because the signature is derived entirely from observable attributes of the request itself.
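To make the contrast concrete, here is a boto3 sketch of the POST-policy approach the question mentions, granting uploads under a key prefix via a starts-with condition, next to a presigned PUT that is tied to one exact key. The bucket name and prefix are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Browser-based POST upload: the embedded policy can use "starts-with",
# so any key under uploads/ is acceptable for the lifetime of the form.
post = s3.generate_presigned_post(
    Bucket="my-private-bucket",          # placeholder
    Key="uploads/${filename}",           # S3 substitutes the submitted filename
    Conditions=[["starts-with", "$key", "uploads/"]],
    ExpiresIn=3600,
)
# post["url"] and post["fields"] are then used to build the HTML upload form.

# A presigned PUT, by contrast, is bound to one exact key.
put_url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "my-private-bucket", "Key": "uploads/exact-name.csv"},
    ExpiresIn=3600,
)
```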
One possible workaround would be to connect the bucket to CloudFront and use a CloudFront pre-signed URL with an appropriate wildcard. The CloudFront origin access identity, after validating the CloudFront URL, would actually handle signing the request in the background on its way to S3 to match the exact request. Giving the origin access identity the s3:PutObject permission in bucket policy then should allow the action.
I suggest this should work, though I have not tried it. Note that the CloudFront docs indicate the client needs to add the x-amz-content-sha256 header to PUT requests for full compatibility with all S3 regions. The same page warns that any permission you assign to the origin access identity will work (such as DELETE), so setting the bucket policy too permissively will allow any operation to be performed via the signed URL: CloudFront signed URLs don't restrict to a specific REST verb.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html
Note that there's no such concept as uploading "to" CloudFront. Uploads go through CloudFront to the origin server, S3 in this case.
I have created a bucket in Amazon S3, uploaded 2 files to it, and made them public. I have the links through which I can access them from anywhere on the Internet. I now want to put some restriction on who can download the files. Can someone please help me with that? I did try the documentation, but got confused.
I want the public link, at download time, to ask for some credentials or something to authenticate the user. Is this possible?
By default, all objects in Amazon S3 are private. You can then add permissions so that people can access your objects. This can be done via:
Access Control List permissions on individual objects
A Bucket Policy
IAM Users and Groups
A Pre-Signed URL
As long as at least one of these methods is granting access, your users will be able to access the objects from Amazon S3.
1. Access Control List on individual objects
The Make Public option in the Amazon S3 management console will grant Open/Download permissions to all Internet users. This can be used to grant public access to specific objects.
2. Bucket Policy
A Bucket Policy can be used to grant access to a whole bucket or a portion of a bucket. It can also be used to specify limits to access. For example, a policy could make a specific directory within a bucket public to users from a specific range of IP addresses, during particular times of the day, and only when accessing the bucket via SSL.
A bucket policy is a good way to grant public access to many objects (e.g. a particular directory) without having to specify permissions on each individual object. This is commonly used for static websites served out of an S3 bucket.
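As an illustration of such a policy (the bucket name, prefix, and IP range are placeholders, so treat this as a sketch), a bucket policy with IP and SSL conditions could be applied with boto3 like this:

```python
import json

import boto3

s3 = boto3.client("s3")

# Grant anonymous read on one "directory", but only from a given IP range and over SSL.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadForOfficeOverSSL",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-private-bucket/public-docs/*",  # placeholder
            "Condition": {
                "IpAddress": {"aws:SourceIp": "198.51.100.0/24"},  # placeholder range
                "Bool": {"aws:SecureTransport": "true"},
            },
        }
    ],
}

s3.put_bucket_policy(Bucket="my-private-bucket", Policy=json.dumps(policy))
```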
3. IAM Users and Groups
This is similar to defining a Bucket Policy, but permissions are assigned to specific Users or Groups of users. Thus, only those users have permission to access the objects. Users must authenticate themselves when accessing the objects, so this is most commonly used when accessing objects via the AWS API, such as using the aws s3 commands from the AWS Command-Line Interface (CLI).
Rather than being prompted to authenticate, users must provide the authentication when making the API call. A simple way of doing this is to store user credentials in a local configuration file, which the CLI will automatically use when calling the S3 API.
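For example (a minimal sketch; the profile name, bucket, and key are hypothetical), the same stored credentials that the CLI reads from ~/.aws/credentials can be used from the SDK via a named profile:

```python
import boto3

# Uses the "analyst" profile from ~/.aws/credentials, just as
# `aws s3 cp s3://my-private-bucket/data.csv . --profile analyst` would.
session = boto3.Session(profile_name="analyst")
s3 = session.client("s3")
s3.download_file("my-private-bucket", "data.csv", "data.csv")
```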
4. Pre-Signed URL
A Pre-Signed URL can be used to grant access to S3 objects as a way of "overriding" access controls. A normally private object can be accessed via a URL by appending an expiry time and signature. This is a great way to serve private content without requiring a web server.
Typically, an application constructs a Pre-Signed URL when it wishes to grant access to an object. For example, let's say you have a photo-sharing website and a user has authenticated to your website. You now wish to display their pictures in a web page. The pictures are normally private, but your application can generate Pre-Signed URLs that grant them temporary access to the pictures. The Pre-Signed URL will expire after a particular date/time.
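A sketch of that photo-sharing flow (the bucket and keys are made up): once the user is authenticated, the page handler could sign each picture's key just before rendering the page.

```python
import boto3

s3 = boto3.client("s3")

def picture_urls(photo_keys):
    """Pre-sign each of the user's photo keys for a short-lived gallery page."""
    return [
        s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": "my-photo-bucket", "Key": key},  # placeholder bucket
            ExpiresIn=600,  # URLs stop working 10 minutes after the page is built
        )
        for key in photo_keys
    ]

# Example: keys previously stored for this user in your database.
urls = picture_urls(["users/42/beach.jpg", "users/42/sunset.jpg"])
```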
Regarding the pre-signed URL: the signature is carried in the URL's query string, and over HTTPS/TLS the path and query string are encrypted along with the headers. But do check for yourself.
I am storing files in an S3 bucket. I want access to the files to be restricted.
Currently, anyone with the URL to a file is able to access it.
I want a behavior where a file can be accessed only through my application. The application is hosted on EC2.
Following are the 2 possible ways I could find:
Use "referer" key in bucket policy.
Change "allowed origin" in CORS configuration
Which of the above 2 should be used, given the fact that 'referer' could be spoofed in the request header?
Also, can CloudFront play a role over here?
I would recommend using a Pre-Signed URL that permits access to private objects stored on Amazon S3. It is a means of keeping objects secure, yet granting temporary access to a specific object.
It is created via a hash calculation based on the object path, expiry time and a shared Secret Access Key belonging to an account that has permission to access the Amazon S3 object. The result is a time-limited URL that grants access to the object. Once the expiry time passes, the URL does not return the object.
Start by removing existing permissions that grant access to these objects. Then generate Pre-Signed URLs to grant access to private content on a per-object basis, calculated every time you reference an S3 object. (Don't worry, it's fast to do!)
See documentation: Sample code in Java
When dealing with a private S3 bucket, you'll want to use an AWS SDK appropriate for your use case.
Here are SDKs for many different languages: http://aws.amazon.com/tools/
Within each SDK, you can find sample calls to S3.
If you are trying to make private calls via browser-side JavaScript, you can use CORS.
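If you go the browser-side route, a CORS configuration can be applied to the bucket. Here is a sketch with boto3; the bucket name and allowed origin are placeholders for your own bucket and website domain.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_cors(
    Bucket="my-private-bucket",  # placeholder
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedOrigins": ["https://www.example.com"],  # your site only
                "AllowedMethods": ["GET", "HEAD"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,
            }
        ]
    },
)
```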