I'm not sure if this is feasible. I'm looking for a way to limit the files that can be saved into an S3 bucket to a certain pattern (number.*, e.g. 2893.jpg or 18928.png). Can this be done through an IAM policy, or is there another way?
Thanks
There is no native way to do this. Permissions can be assigned to upload to a particular Prefix (folder), but there is no method for specifying permitted characters in a Key (filename).
You would likely want to implement a frontend that verifies filenames and allows upload via a pre-signed URL.
See: Uploading Objects Using Presigned URLs - Amazon S3
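As an illustration of that pattern, here is a minimal sketch in Python with boto3; the bucket name, regex, and expiry below are my own assumptions, not anything S3 enforces:

```python
import re
import boto3

s3 = boto3.client("s3")

# Hypothetical pattern: an all-numeric name plus an extension, e.g. 2893.jpg
NUMERIC_NAME = re.compile(r"^\d+\.[A-Za-z0-9]+$")

def presign_upload(filename: str, bucket: str = "my-bucket", expires: int = 300) -> str:
    """Validate the filename, then return a pre-signed PUT URL for it."""
    if not NUMERIC_NAME.match(filename):
        raise ValueError(f"{filename!r} does not match the required pattern")
    return s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": bucket, "Key": filename},
        ExpiresIn=expires,
    )
```

Because the URL is signed for one exact key, the client cannot swap in a different filename after validation.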
So, I want to have a service that creates files in an S3 bucket with specific links, and then allow anyone with a link to a file to write to the file and read it.
But creating files must not be a public privilege; the public should only be able to edit/read already-existing files, given they have the link.
Is this possible with a bucket policy? Basically, allowing one service CRUD privileges while the public has only RU privileges.
You will need to write such a service yourself.
First, please note that there is no difference between 'Create' and 'Update' in Amazon S3 -- both use a PutObject operation. Objects cannot be 'edited' -- they can only be overwritten.
You can achieve your goal for reading by using public objects with obfuscated URLs -- as long as somebody knows the URL, they can access the object. Not a perfect means of security, but that is your choice.
You do not want to grant public permission to create objects in a bucket; otherwise anybody would be able to upload any files to the bucket (e.g. copyrighted movies) and you would be paying the cost of storage and data transfer.
The safer way to permit uploads is to have users authenticate to your back end, which can then generate an Amazon S3 pre-signed URL for uploading to the bucket. The pre-signed request can pin the filename of the upload and, in its browser-based POST form, even limit the file size.
For more details, see: Uploading objects using presigned URLs - Amazon Simple Storage Service
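As a hedged sketch of those limitations (strictly, the file-size cap comes from the browser-based POST policy form rather than a plain pre-signed PUT URL; the bucket name and 5 MB cap below are placeholders):

```python
import boto3

s3 = boto3.client("s3")

def presign_post(key: str, bucket: str = "my-bucket") -> dict:
    """Pre-signed POST whose policy pins the key and caps the size at 5 MB."""
    return s3.generate_presigned_post(
        Bucket=bucket,
        Key=key,  # the policy only accepts this exact key
        Conditions=[
            ["content-length-range", 1, 5 * 1024 * 1024],  # size limit in bytes
        ],
        ExpiresIn=600,
    )
```

The returned dict contains the form url and fields the browser must submit along with the file.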
I'm using S3 to store a bunch of confidential files for clients. The bucket cannot have public access, and only authenticated users can access these files.
This is my current idea:
I'm using Cognito to authenticate the user and allow them to access API Gateway. When they make a request to the path /files, it directs the request to a Lambda function, which generates a signed URL for every file that the user has access to. Then API Gateway returns the list of all these signed URLs and the browser displays them.
Generating a signed URL for every file seems very inefficient. Is there any other way to get confidential files from S3 in one large batch?
A safer approach would be for your application to generate signed URLs, valid for a single request or a limited period, and have your bucket accept only requests originating from CloudFront, using an Origin Access Identity.
See the documentation for this at https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html
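For reference, a sketch of generating such a CloudFront signed URL with botocore's CloudFrontSigner (the key-pair ID, key file, domain, and object path are placeholders):

```python
from datetime import datetime, timedelta

import rsa  # pip install rsa
from botocore.signers import CloudFrontSigner

def rsa_signer(message: bytes) -> bytes:
    """Sign the CloudFront policy with the key pair's private key."""
    with open("private_key.pem", "rb") as f:
        private_key = rsa.PrivateKey.load_pkcs1(f.read())
    return rsa.sign(message, private_key, "SHA-1")  # CloudFront expects SHA-1

signer = CloudFrontSigner("APKAEXAMPLEKEYID", rsa_signer)

signed_url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/clients/acme/report.pdf",
    date_less_than=datetime.utcnow() + timedelta(minutes=15),
)
```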
You say "Gathering a signed url for every file seems very inefficient", but the process of creating the Signed URL itself is very easy — just a few lines of code.
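To show what "a few lines" looks like in your Lambda, here is a sketch in Python with boto3 (the bucket name, key list, and expiry are placeholders; a real handler would derive the keys from the caller's Cognito identity):

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "my-confidential-bucket"  # placeholder

def handler(event, context):
    # Placeholder: look these up from the authenticated user's entitlements.
    keys = ["clients/acme/contract.pdf", "clients/acme/invoice.pdf"]
    urls = [
        s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": BUCKET, "Key": key},
            ExpiresIn=900,  # 15 minutes
        )
        for key in keys
    ]
    return {"statusCode": 200, "body": json.dumps(urls)}
```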
However, if there are many files, it would put a lot of work on your users to download each file individually.
Therefore, another approach could be the following (sketched in code below):
Identify all the files they wish to download
Create a Zip of the files and store it in Amazon S3
Provide a Signed URL to the Zip file
Delete the Zip file later (since it is not required anymore), possibly by creating a lifecycle rule on a folder within the bucket
Please note that AWS Lambda functions have a /tmp disk storage limit of 512 MB by default (configurable up to 10 GB these days), which might not be enough to create the Zip file.
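A minimal sketch of the Zip steps in Python with boto3 (the bucket, the tmp-zips/ prefix, and the key names are my own placeholders; error handling omitted):

```python
import zipfile
import boto3

s3 = boto3.client("s3")
BUCKET = "my-confidential-bucket"  # placeholder

def zip_and_presign(keys: list, zip_key: str = "tmp-zips/batch.zip") -> str:
    """Download the requested objects, zip them in /tmp, and return a signed URL."""
    zip_path = "/tmp/batch.zip"  # Lambda's writable scratch space
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as archive:
        for key in keys:
            local = "/tmp/" + key.replace("/", "_")
            s3.download_file(BUCKET, key, local)
            archive.write(local, arcname=key)
    s3.upload_file(zip_path, BUCKET, zip_key)
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": zip_key},
        ExpiresIn=3600,
    )
```

A lifecycle rule that expires objects under the tmp-zips/ prefix after a day would then take care of the cleanup step automatically.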
Is it possible to PUT to S3 using a presigned key-starts-with policy to allow upload of multiple or arbitrarily named files?
This is easy using the browser-based PresignedPost technique, but I've been unable to find a way to use a normal, simple PUT for uploading arbitrary files whose keys start with the same prefix.
This isn't possible... not directly.
POST uploads are unique in their support for an embedded policy document, which allows logic like starts-with.
PUT and all other requests require the signature to precisely match the request, because the signature is derived entirely from observable attributes of the request itself.
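For contrast, here is what that embedded starts-with policy looks like via boto3's generate_presigned_post (the bucket name and uploads/ prefix are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Any key beginning with "uploads/" is acceptable under this policy.
post = s3.generate_presigned_post(
    Bucket="my-bucket",
    Key="uploads/${filename}",  # S3 substitutes the submitted filename
    Conditions=[["starts-with", "$key", "uploads/"]],
    ExpiresIn=3600,
)
# post["url"] and post["fields"] together form the browser upload request.
```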
One possible workaround would be to connect the bucket to CloudFront and use a CloudFront pre-signed URL with an appropriate wildcard. After CloudFront validates the CloudFront URL, the origin access identity would handle signing the request to S3 in the background, so the signature matches the exact request. Giving the origin access identity the s3:PutObject permission in the bucket policy should then allow the action.
I suggest this should work, though I have not tried it. Note that the CloudFront docs indicate the client needs to add the x-amz-content-sha256 header to PUT requests for full compatibility with all S3 regions. The same page warns that any permission you assign to the origin access identity will be usable (such as DELETE), so setting the bucket policy too permissively will allow any operation to be performed via the signed URL -- CloudFront signed URLs do not restrict the request to a specific REST verb.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html
Note that there's no such concept as uploading "to" CloudFront. Uploads go through CloudFront to the origin server, S3 in this case.
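If you want to experiment with the workaround above, the wildcard would live in a CloudFront custom policy; here is an untested sketch with botocore's signer, matching the caveat that this hasn't been verified (key-pair ID, key file, domain, and paths are placeholders):

```python
from datetime import datetime, timedelta

import rsa  # pip install rsa
from botocore.signers import CloudFrontSigner

def rsa_signer(message: bytes) -> bytes:
    with open("private_key.pem", "rb") as f:
        return rsa.sign(message, rsa.PrivateKey.load_pkcs1(f.read()), "SHA-1")

signer = CloudFrontSigner("APKAEXAMPLEKEYID", rsa_signer)

# Custom policy: any resource under /uploads/ is signable until the deadline.
policy = signer.build_policy(
    "https://d111111abcdef8.cloudfront.net/uploads/*",
    date_less_than=datetime.utcnow() + timedelta(minutes=30),
)

# The same policy can be reused for any object key matching the wildcard.
url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/uploads/myfile.bin",
    policy=policy,
)
# The client would then PUT to `url` (remember x-amz-content-sha256).
```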
So I have an S3 bucket. I want to grant access to a single file within that bucket to a unique person. Is it possible to grant access based on a secure hash or something like that?
So, for instance: a file is uploaded to the bucket, and an email is sent to the user with a link:
https://s3-us-west-2.amazonaws.com/mycoolbucket/test.txt?key=asdqwerwerhsdhsdfh23562346
Access to that file is granted if the key (or whatever) is present and correct; if the key isn't correct, access is denied. And access would be granted only for that single file in the bucket. I'm trying to avoid changing policies and whatnot.
Thanks in advance!
Take a look at pre-signed URLs, for example in Java: http://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURLJavaSDK.html
So after looking over all the existing Golang packages and whatnot, I decided it was best to just build my own package specifically for creating a secure URL to a specific item in an S3 bucket. It works great, but the documentation is a work in progress. Hopefully it helps someone:
https://github.com/markhayden/s3querybuilder
I am using Amazon S3 to host images for a public REST API and serve them. Currently, my bucket allows anyone who enters the URL of an image, with no signature in the params, to access it. However, I'd like to require an expiring signature in each image request, so users will have to go through the API to fetch images. How do I do this? Is this a bucket policy?
You simply set all of the files to private. If you want to be able to give out pre-signed URLs, you'll need to generate them as you hand them out.
AWS’ official SDKs make this trivial.
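As a sketch of that flow in Python with boto3 (the bucket name, key, and expiry are placeholders):

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-image-bucket"  # placeholder

# One-time setup: block public access so bare object URLs stop working.
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Per request: hand out an expiring signed URL instead of the raw link.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": BUCKET, "Key": "images/photo-42.jpg"},
    ExpiresIn=300,  # 5 minutes
)
```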