Presigned S3 URL for PUT with dynamic filename / key starts with - amazon-web-services

Is it possible to PUT to S3 using a presigned key-starts-with policy to allow upload of multiple or arbitrarily named files?
This is easy using the browser-based PresignedPost technique, but I've been unable to find a way to use a normal simple PUT for uploading arbitrary files starting with the same key.

This isn't possible... not directly.
POST uploads are unique in their support for an embedded policy document, which allows logic like starts-with.
PUT and all other requests require the signature to precisely match the request, because the signature is derived entirely from observable attributes of the request itself.
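For contrast, a minimal sketch (Python with boto3; the bucket name and prefix are hypothetical) of the POST policy with a starts-with condition that PUT presigning cannot express:

```python
import boto3

s3 = boto3.client("s3")

# The policy document embedded in a presigned POST can constrain the key to a
# prefix rather than an exact value -- something a presigned PUT cannot do.
post = s3.generate_presigned_post(
    Bucket="my-bucket",             # hypothetical bucket
    Key="uploads/${filename}",      # S3 substitutes the uploaded filename
    Conditions=[["starts-with", "$key", "uploads/"]],
    ExpiresIn=3600,
)
# post["url"] and post["fields"] become the action and hidden inputs of a
# multipart/form-data form (or the parts of a scripted POST).
```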
One possible workaround is to put the bucket behind CloudFront and use a CloudFront signed URL whose custom policy contains an appropriate wildcard. After CloudFront validates the signed URL, its origin access identity signs the actual request in the background on its way to S3, matching that exact request. Granting the origin access identity the s3:PutObject permission in the bucket policy should then allow the upload.
I believe this should work, though I have not tried it. The CloudFront docs indicate that the client needs to add the x-amz-content-sha256 header to PUT requests for full compatibility with all S3 regions. The same page warns that any permission you assign to the origin access identity (such as DELETE) will work through a matching signed URL -- CloudFront signed URLs don't restrict the REST verb -- so an overly permissive bucket policy allows any of those operations to be performed via the signed URL.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html
Note that there's no such concept as uploading "to" CloudFront. Uploads go through CloudFront to the origin server, S3 in this case.
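A sketch of what generating such a URL might look like (Python with botocore's CloudFrontSigner; the key pair ID, key file, and distribution domain are placeholders), assuming the distribution is wired up as described above:

```python
from datetime import datetime, timedelta

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def rsa_signer(message: bytes) -> bytes:
    # Sign with the private key of a CloudFront trusted key pair (placeholder path).
    with open("cf-private-key.pem", "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner("APKAEXAMPLEKEYID", rsa_signer)  # placeholder key pair ID
expires = datetime.utcnow() + timedelta(hours=1)

# A custom policy with a wildcard resource: one signature covers every key
# under the prefix, which is what a plain S3 presigned PUT cannot offer.
policy = signer.build_policy("https://dexample.cloudfront.net/uploads/*",
                             date_less_than=expires)
url = signer.generate_presigned_url(
    "https://dexample.cloudfront.net/uploads/any-name.bin", policy=policy)
# PUT the file to `url`; CloudFront then re-signs the origin request via the OAI.
```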

Related

Give access to specific user to files in private s3 bucket

I have an S3 bucket that is private and I want a specific user to have access to some objects in this bucket. What is the correct way to do that?
For individual objects, you should use a presigned URL.
It allows whoever accesses the URL to issue the request as the identity that presigned it (inheriting the permissions of the IAM user that generated the URL). It can be generated with the SDK or the CLI, and is valid for 3600 seconds by default, though you can change this duration.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-presigned-url.html
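A minimal sketch (Python with boto3; the bucket and key are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# Anyone holding this URL can GET the object until it expires, with the
# permissions of the credentials that signed it.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-private-bucket", "Key": "reports/2024.pdf"},
    ExpiresIn=3600,  # the default; adjust as needed
)
print(url)
```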
For multiple objects (if you want a path with a wildcard), you can use signed cookies. This requires you to first put a CloudFront distribution in front of your S3 bucket.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-signed-cookies.html
CloudFront also supports signed URLs, which are distinct from S3 presigned URLs: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-signed-urls.html

How to securely access a file in the application using s3 bucket URL

In my application we have to open some pdf files in a new tab on click of an icon using the direct s3 bucket url like this:
http://MyBucket.s3.amazonaws.com/Certificates/1.pdf?AWSAccessKeyId=XXXXXXXXXXXXX&Expires=1522947975&Signature=XXXXXXXXXXXXXXXXX
Somehow I feel this is not secure, as the user can see the bucket name, AWSAccessKeyId, Expires, and Signature. Is this still considered secure? Or is there a better way to handle this?
Allowing the user to see these parameters is not a problem, because:
AWSAccessKeyId can be public (do not confuse it with the SecretAccessKey).
Expires and Signature are signed with your SecretAccessKey, so no one can manipulate them (AWS validates them against your secret key).
Since you don't have public objects and the bucket itself is not public, it is fine for the user to know your bucket name -- a valid signature is always needed to access the objects.
But I have two suggestions for you: 1. Use your own domain, so the bucket name is not visible (you can use the free SSL certificate provided by AWS if you use CloudFront); 2. Use HTTPS instead of plain HTTP.
And if for any reason you absolutely don't want your users to see AWS parameters, I suggest you proxy the access to S3 via your own API (though I consider it unnecessary).
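A minimal sketch of that proxy approach (a hypothetical Flask endpoint; the bucket, prefix, and auth check are placeholders):

```python
import boto3
from flask import Flask, Response, abort

app = Flask(__name__)
s3 = boto3.client("s3")

@app.route("/certificates/<name>")
def certificate(name: str):
    # Your own session/permission check goes here, before touching S3.
    try:
        obj = s3.get_object(Bucket="MyBucket", Key=f"Certificates/{name}")
    except s3.exceptions.NoSuchKey:
        abort(404)
    # The client never sees the bucket name, access key ID, or signature.
    return Response(obj["Body"].read(), mimetype="application/pdf")
```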
I see you're accessing over plain HTTP (no SSL). You can do virtual hosting with S3 for multiple domains:
https://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html
Create signed URLs based on your own domain and you are good to go.
If you are using SSL, you can use CloudFront instead and configure the CloudFront origin to point to your S3 bucket.
Hope it helps.

Protect Private Data in Amazon S3

I want to upload some images to Amazon S3 and, based on each user's subscription, give them access to view a subset of these images. After reading the Amazon S3 documentation I have come up with these solutions:
Assign each user in my application to one IAM user in Amazon S3, then define user or bucket policies to manage who has access to what. There are two drawbacks: first, user and bucket policies have a size limit, and since the number of users and images is very large, I would likely exceed it. Second, the number of IAM users per AWS account is capped at 5000, and I would have more users than that in my application.
Amazon S3 makes it possible to define temporary security credentials that act the same as IAM users. My server could handle a request from the client, create temporary credentials with a special policy, and pass them back; the client then sends requests directly to S3 using those credentials to access its resources. But these credentials last between 15 minutes and 1 hour, so clients would need to hit my server at least every hour for fresh credentials.
Since I want to serve images, it is good practice to combine Amazon CloudFront with S3 to serve the content as quickly as possible. I have also read the CloudFront documentation on serving private content, and its solution is signed URLs or signed cookies. I would deny all direct access to the S3 resources, so that CloudFront alone can read data from S3; every time a user signs in to my application, I would send them the credentials needed to make a signed URL, or the necessary cookies, valid for as long as they are signed in. But I have a security concern: since almost all of the access-control information is sent to the client (e.g. in cookies), could they modify it and grant themselves more permissions? Despite this concern, I think I have to use CloudFront to decrease resource loading time.
I want to know which of these solutions you think is the most reasonable, and whether there are other solutions, perhaps using other Amazon web services.
My own approach to serving private content on S3 is to use CloudFront with either signed URLs or signed cookies (or sometimes both). You should not use IAM users or temporary credentials for a large number of users, as in your case.
You can read more about this topic here:
Serving Private Content through CloudFront
Your choice of whether to use signed URLs or signed cookies depends on the following.
Choosing Between Signed URLs and Signed Cookies
CloudFront signed URLs and signed cookies provide the same basic functionality: they allow you to control who can access your content. If you want to serve private content through CloudFront and you're trying to decide whether to use signed URLs or signed cookies, consider the following.
Use signed URLs in the following cases:
You want to use an RTMP distribution. Signed cookies aren't supported for RTMP distributions.
You want to restrict access to individual files, for example, an installation download for your application.
Your users are using a client (for example, a custom HTTP client) that doesn't support cookies.
Use signed cookies in the following cases:
You want to provide access to multiple restricted files, for example, all of the files for a video in HLS format or all of the files in the subscribers' area of a website.
You don't want to change your current URLs.
As for your security concerns, CloudFront uses the public key to validate the signature in the signed cookie and to confirm that the cookie hasn't been tampered with. If the signature is invalid, the request is rejected.
You can also follow the guidelines at the end of this page to prevent misuse of signed cookies.
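To make the signed-cookie flow concrete, here is a minimal sketch (Python with the cryptography package; the key pair ID, key path, and resource pattern are placeholders) of issuing the three CloudFront cookies with a custom wildcard policy:

```python
import base64
import json
import time

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def _cf_b64(data: bytes) -> str:
    # CloudFront's URL-safe base64 variant: + -> -, = -> _, / -> ~
    return (base64.b64encode(data).decode()
            .replace("+", "-").replace("=", "_").replace("/", "~"))

def signed_cookies(resource: str, key_pair_id: str, pem_path: str,
                   ttl: int = 3600) -> dict:
    policy = json.dumps({
        "Statement": [{
            "Resource": resource,  # e.g. "https://dexample.cloudfront.net/subscribers/*"
            "Condition": {"DateLessThan": {"AWS:EpochTime": int(time.time()) + ttl}},
        }]
    }, separators=(",", ":"))
    with open(pem_path, "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    # CloudFront verifies this RSA-SHA1 signature with the matching public key,
    # which is why a tampered policy is rejected.
    sig = key.sign(policy.encode(), padding.PKCS1v15(), hashes.SHA1())
    return {
        "CloudFront-Policy": _cf_b64(policy.encode()),
        "CloudFront-Signature": _cf_b64(sig),
        "CloudFront-Key-Pair-Id": key_pair_id,
    }
```

Your application would set these cookies after a successful sign-in; the client can inspect them but cannot alter the policy without invalidating the signature.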

Relationship between Origin Access Identities (OAIs) and CloudFront Signed URLs

So I've been following guides on CloudFront and S3 and I feel like I am still missing a core piece of information in the relationship between Origin Access Identities (OAIs) and CloudFront Signed URLs.
What I want: a private CDN to host audio snippets (of a few seconds in length) and low-resolution images. I only want these files to be accessible when requested from a specific domain (i.e. the domain the web app will live on) and maybe a testing server, so that my web app can get the files but anyone else just can't access them without going through the web app.
What I'm confused about: I'm fuzzy on the relationship (if there is any) between CloudFront Origin Access Identities (OAIs) and Signed CloudFront URLs.
I have currently created a private S3 bucket, an OAI for my CloudFront distribution, and have generated a signed URL to an image through CloudFront. But I don't see how these things are related and how they prevent someone else from accessing CDN files (e.g. if they were able to inspect an element and get the signed URL).
Is the whole point to make sure the signed URLs expire quickly? And if so, how does the OAI play a role in it? Is this something set in CORS?
An origin access identity is an entity inside CloudFront that can be authorized by bucket policy to access objects in a bucket. When CloudFront uses an origin access identity to access content in a bucket, CloudFront uses the OAI's credentials to generate a signed request that it sends to the bucket to fetch the content. This signature is not accessible to the viewer.
The meaning of the word "origin" as used here should not be confused with the word "origin" as used in other contexts, such as CORS, where "origin" refers to the site that is allowed to access the content.
The origin access identity has nothing to do with access being restricted to requests containing a specific Origin or Referer header.
Once a signed URL is validated by CloudFront as matching a CloudFront signing key associated with your AWS account (or another account that you designate as a trusted signer), the object is fetched from the bucket using whatever permissions the origin access identity has been granted at the bucket.
Is the whole point to make sure the signed URLs expire quickly?
Essentially, yes.
Authenticating and authorizing requests by trying to restrict access based on the site where the link was found is not a viable security measure. It prevents casual hot-linking from other sites, but does nothing to protect against anyone who can forge request headers; defeating a measure like that is trivial.
Signed URLs, by contrast, are extremely tamper-resistant, to the point of computational infeasibility.
A signed URL is not only valid only until it expires; if you use a custom policy, it can optionally also restrict access to a client at the specific IP address included in the policy document. Once signed, any change to the URL, including to the policy statement, makes the entire URL unusable.
The OAI is only indirectly connected with CloudFront signed URLs -- they can be used individually, or together -- but without an OAI, CloudFront has no way to prove that it is authorized to request objects from your bucket, so the bucket would need to be public, which would defeat much of the purpose of signed URLs on CloudFront.
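As a sketch of that last point, the bucket policy that authorizes an OAI typically looks like the following (Python with boto3; the OAI ID and bucket name are placeholders):

```python
import json

import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {
            # The OAI's canonical principal; E1EXAMPLE is a placeholder ID.
            "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1EXAMPLE"
        },
        "Action": "s3:GetObject",  # grant only what CloudFront should be able to do
        "Resource": "arn:aws:s3:::my-private-bucket/*",
    }],
}
boto3.client("s3").put_bucket_policy(
    Bucket="my-private-bucket", Policy=json.dumps(policy))
```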
Add a new CNAME entry that points to your CloudFront domain. This entry should match the one entered under 'Alternate Domain Names' in the CloudFront console.
By default CloudFront generates a domain name automatically (e.g. d3i29vunzqzxrt.cloudfront.net), but you can define your own alternate domain name.
You can also secure CloudFront:
Serving Private Content through CloudFront

Can I restrict Amazon S3 Browser-Based Uploads by URL in my bucket policy

Based on: http://s3.amazonaws.com/doc/s3-example-code/post/post_sample.html
Is there a way to limit a browser-based upload to Amazon S3 such that it is rejected if it does not originate from my secure URL (i.e. https://www.someurl.com)?
Thanks!
I want to absolutely guarantee the post is coming from my website
That is impossible.
The web is stateless and a POST coming "from" a specific domain is just not a valid concept, because the Referer: header is trivial to spoof, and a malicious user most likely knows this. Running through an EC2 server will gain you nothing, because it will tell you nothing new and meaningful.
The post policy document not only expires, it also can constrain the object key to a prefix or an exact match. How is a malicious user going to defeat this? They can't.
in your client form you have encrypted/hashed versions of your credentials. 
No, you do not.
What you have is a signature that attests to your authorization for S3 to honor the form post. It can't feasibly be reverse-engineered such that the policy can be modified, and that's the point. The form has to match the policy, which can't be edited and still remain valid.
You generate this signature using information known only to you and AWS; specifically, the secret that accompanies your access key.
When S3 receives the request, it computes what the signature should have been. If it's a match, then the privileges of the specific user owning that key are checked to see whether the request is authorized.
By constraining the object key in the policy, you prevent the user from uploading (or overwriting) any object other than the specific one authorized by the policy. Or you constrain the key to a specific prefix, in which case you prevent the user from doing harm to anything not under that prefix.
If you are handing over a policy that allows any object key to be overwritten in the entire bucket, then you're solving the wrong problem by trying to constrain posts as coming "from" your website.
I think you've misunderstood how the S3 service authenticates.
Your server holds a credentials file with your access key ID and secret key, and signs each request as a file is uploaded to your S3 bucket.
Amazon's S3 servers then check that the upload request was signed with your access key ID and secret key.
This credentials file should never be publicly exposed anywhere and there's no way to get the keys off the wire.
In the case of browser-based uploads, your form should contain a signature that is passed to Amazon's S3 servers and validated there. This signature is generated from a combination of the upload policy and your access credentials, but it is hashed, so the secret key cannot be recovered from it.
As you mentioned, this could mean that someone could upload to your bucket from outside the confines of your app by simply reusing the policy and its signature.
This is what the policy's expiration element is for, as it allows you to set a reasonably short expiration period on the form to prevent misuse.
So when a user goes to your upload page your server should generate a policy with a short expiration date (for example, 5 minutes after generation time). It should then create a signature from this policy and your Amazon credentials. From here you can now create a form that will post any data to your S3 bucket with the relevant policy and signature.
If a malicious user was to then attempt to copy the policy and signature and use that directly elsewhere then it would still expire 5 minutes after they originally landed on your upload page.
You can also use the policy to restrict other things such as the name of the file or mime types.
More detailed information is available in the AWS docs about browser-based uploads to S3 and how S3 authenticates requests.
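For the curious, a minimal sketch of how the Signature V4 signing key is derived and applied to a base64-encoded POST policy (names and values are illustrative; the secret key itself never leaves your server):

```python
import hashlib
import hmac

def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def sign_post_policy(secret_key: str, date: str, region: str, b64_policy: str) -> str:
    # Derive the scoped signing key: date -> region -> service -> "aws4_request".
    k_date = _hmac(("AWS4" + secret_key).encode(), date)   # e.g. "20240101"
    k_region = _hmac(k_date, region)                       # e.g. "us-east-1"
    k_service = _hmac(k_region, "s3")
    k_signing = _hmac(k_service, "aws4_request")
    # For browser-based POST, the string to sign is the base64 policy itself.
    return hmac.new(k_signing, b64_policy.encode(), hashlib.sha256).hexdigest()
```

S3 repeats the same derivation on its side; because HMAC-SHA256 is one-way, the posted signature reveals nothing about the secret key.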
To further restrict where requests can come from, you should look into enabling Cross-Origin Resource Sharing (CORS) permissions on your S3 bucket.
This allows you to specify which domain(s) each type of request may originate from.
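A minimal sketch of such a CORS rule (Python with boto3; the domain and bucket are placeholders). Note that CORS is enforced by browsers, not by S3 itself, so, per the earlier answer, it deters casual misuse rather than determined clients:

```python
import boto3

boto3.client("s3").put_bucket_cors(
    Bucket="my-bucket",
    CORSConfiguration={
        "CORSRules": [{
            "AllowedOrigins": ["https://www.someurl.com"],  # your site only
            "AllowedMethods": ["POST"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,
        }]
    },
)
```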
Instead of trying to barricade the door, remove the door.
A better solution, IMHO, is to prevent any direct uploads to S3 at all.
Meaning: drop the S3 upload policy that allows strangers to upload.
Make them upload to one of your servers.
Validate the upload however you like.
If it is acceptable, your server can then move the file to S3.
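A minimal sketch of that relay (a hypothetical Flask endpoint; the bucket name and validation rule are placeholders):

```python
import boto3
from flask import Flask, abort, request

app = Flask(__name__)
s3 = boto3.client("s3")

@app.route("/upload", methods=["POST"])
def upload():
    f = request.files.get("file")
    # Validate however you like before anything reaches S3.
    if f is None or not f.filename.lower().endswith(".jpg"):
        abort(400)
    s3.upload_fileobj(f, "my-bucket", f"uploads/{f.filename}")
    return "ok"
```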