S3 pre-signed URL without path

I need a way of letting a client upload data to S3 without showing them the full location (path) of the file. Is that doable with an AWS S3 pre-signed URL?
I'm using boto3 like this:

    import boto3

    s3_client = boto3.client('s3')
    url = s3_client.generate_presigned_url(
        ClientMethod='put_object',
        ExpiresIn=7200,
        Params={'Bucket': BUCKET, 'Key': name},
    )
But the resulting URL will be:
https://s3.amazonaws.com/MY_BUCKET/upload/xxxx-xxxx/file-name.bin?AWSAccessKeyId=XXXX&Signature=XXXX&Expires=XXXX
I need something that won't show the Key name in the path (/upload/xxxx-xxxx/file-name.bin).
What other solutions do I have, if not pre-signed URLs?

I believe the best way is to distribute the files with AWS CloudFront. You can set the origin of the CloudFront distribution to MY_BUCKET.s3.amazonaws.com. It is also possible to use a subfolder, such as MY_BUCKET.s3.amazonaws.com/upload, as the origin.
CloudFront will serve the files from your S3 origin under the generated CDN endpoint domain, or you can set up and use a custom domain as well:
https://d111111abcdef8.cloudfront.net/upload/xxxx-xxxx/file-name.bin
https://uploads.example.com/upload/xxxx-xxxx/file-name.bin
If you use a subfolder as the origin:
https://uploads.example.com/xxxx-xxxx/file-name.bin
More info on setting an S3 bucket as a CloudFront origin: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistS3AndCustomOrigins.html#concept_S3Origin
More info on using a directory path of an S3 bucket as the origin: https://aws.amazon.com/about-aws/whats-new/2014/12/16/amazon-cloudfront-now-allows-directory-path-as-origin-name/
More info on custom URLs: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/CNAMEs.html
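If you prefer to script the setup, here is a minimal boto3 sketch of such a distribution. This is untested on my side; the bucket name, origin ID, and the managed cache policy choice are assumptions:

    import time

    import boto3

    cloudfront = boto3.client('cloudfront')

    cloudfront.create_distribution(DistributionConfig={
        'CallerReference': str(time.time()),  # any unique string
        'Comment': 'Serve MY_BUCKET/upload without exposing the S3 path',
        'Enabled': True,
        'Origins': {
            'Quantity': 1,
            'Items': [{
                'Id': 's3-upload-origin',
                'DomainName': 'my-bucket.s3.amazonaws.com',
                'OriginPath': '/upload',  # directory path as origin
                'S3OriginConfig': {'OriginAccessIdentity': ''},
            }],
        },
        'DefaultCacheBehavior': {
            'TargetOriginId': 's3-upload-origin',
            'ViewerProtocolPolicy': 'redirect-to-https',
            # ID of the AWS-managed "CachingOptimized" cache policy.
            'CachePolicyId': '658327ea-f89d-4fab-a63d-7e88639e58f6',
        },
    })

With OriginPath set to /upload, a request for https://d111111abcdef8.cloudfront.net/xxxx-xxxx/file-name.bin is fetched from MY_BUCKET/upload/xxxx-xxxx/file-name.bin, so clients never see the bucket name or the prefix.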

This isn’t a complete answer, but it’s too long to be a comment.
I think you should be able to use API Gateway as a proxy for S3 to hide the path. You can still use pre-signed URLs, but you might need to create pre-signed API Gateway URLs rather than pre-signed S3 URLs. I've never done it myself, nor will I be able to try it out in the near future, but I'll do my best to lay out how I think it's done, and maybe someone else can try it and write up a more complete answer.
First, we need to set up an API Gateway endpoint that will act as a proxy to S3.
AWS has a very thorough write-up on how to make a general proxy for S3, and I think you can make your custom endpoint point to a specific bucket and folder in S3 by modifying the PathOverride of the proxy. If you look at the screenshot of the PathOverrides in this section of the AWS documentation, you can see they have set the path override to {bucket}/{object}, but I think you could set the PathOverride to mySecretBucket/my/secret/folder/{object}, and then update the path mappings appropriately.
Next, you need to be able to use pre-signed URLs with this proxy. There are two ways you might be able to do this.
The first thing that might work is making the URL signature pass through API Gateway to S3. I know it's possible to map query parameters in a similar way to path parameters. You may need to perform some URL encoding on the pre-signed URL's signature param to make it work; I'm not entirely sure.
The other option is to allow API Gateway to always write to S3 and require a signed request for calling your proxy endpoint (sketched below). This SO question has a pretty detailed answer that looks to me like it should work.
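If it helps, here is a rough sketch of that second option using botocore's SigV4 signer. The endpoint URL, region, and payload are all assumptions on my part, and the method would need IAM authorization enabled in API Gateway:

    import boto3
    import requests  # pip install requests
    from botocore.auth import SigV4Auth
    from botocore.awsrequest import AWSRequest

    # Hypothetical endpoint: an API Gateway proxy whose path override maps
    # {object} onto mySecretBucket/my/secret/folder/{object} behind the scenes.
    endpoint = 'https://abc123.execute-api.us-east-1.amazonaws.com/prod/file-name.bin'
    payload = b'file contents'

    credentials = boto3.Session().get_credentials()
    request = AWSRequest(method='PUT', url=endpoint, data=payload)
    # 'execute-api' is the signing name for IAM-authorized API Gateway calls.
    SigV4Auth(credentials, 'execute-api', 'us-east-1').add_auth(request)

    response = requests.put(endpoint, data=payload,
                            headers=dict(request.headers.items()))
    print(response.status_code)

The caller never sees a bucket or key this way; they only see your API's path.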
Again, I know this isn’t a complete answer, and I haven’t tried to verify that this works, but hopefully someone can start with this and get to a complete answer for your question.

Related

AWS S3 + CloudFront URL validation (CORS?)

I have a bucket s3://my-bucket/ with a lot of tenants in it: s3://my-bucket/app1, s3://my-bucket/app2, s3://my-bucket/app3, etc.
I also have an AWS CloudFront distribution with a custom domain pointing to this bucket as the origin:
app1.mycloudfrontcontenturl.com/app1/images/profilePicture.png
app2.mycloudfrontcontenturl.com/app2/images/customLogo.png
The trick I'm interested in is that someone from one app should not be able to reach another app's files by changing the host. I mean, in this scenario, if you hit someappX.mycloudfrontcontenturl.com/app1/images/profilePicture.png, it works. I want to prohibit that: if the host header does not match the app in the URL, the request should get a Forbidden response or similar.
Any idea that does not use Lambda@Edge?

How does AWS CloudFront forward my request initiated by an S3 bucket?

This may be a simple question, but I can't find any tutorials for it.
My website is stored entirely in S3, but the front end and back end live in different buckets.
In my front-end website, the JS initiates requests using relative paths like /api/***, so the request URL becomes http://front-end.com/api/***.
How can I make all these requests redirect to my back-end bucket, like this:
http://back-end.com/api/***
https://docs.aws.amazon.com/AmazonS3/latest/user-guide/redirect-website-requests.html
This doc doesn't seem to cover it.
Is there a reason you need to use different domain names to serve your content?
To redirect from http://front-end.com/api/* to http://back-end.com/api/*, there are a couple of ways:
1. Use a Lambda@Edge viewer-request function to redirect with a 301/302 to the new URL.
2. Use an S3 bucket redirect rule.
In either case, you need both front-end.com and back-end.com to point to CloudFront so that you can serve them from CloudFront.
An easier way is to access everything through front-end.com and create a cache behavior with the path pattern /api/* that points at the origin bucket you want those requests to hit; see the sketch below.
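A rough boto3 sketch of adding that behavior (untested; the distribution ID and origin ID are placeholders, and the target origin must already exist on the distribution):

    import boto3

    cloudfront = boto3.client('cloudfront')
    DIST_ID = 'E1234567890ABC'  # hypothetical distribution ID

    # Updating a distribution requires the current config plus its ETag.
    resp = cloudfront.get_distribution_config(Id=DIST_ID)
    config = resp['DistributionConfig']

    # Route /api/* to the back-end bucket's origin.
    api_behavior = {
        'PathPattern': '/api/*',
        'TargetOriginId': 'back-end-bucket-origin',  # must match an existing origin
        'ViewerProtocolPolicy': 'redirect-to-https',
        'MinTTL': 0,
        'ForwardedValues': {'QueryString': True, 'Cookies': {'Forward': 'none'}},
    }
    behaviors = config.setdefault('CacheBehaviors', {'Quantity': 0})
    items = behaviors.setdefault('Items', [])
    items.append(api_behavior)
    behaviors['Quantity'] = len(items)

    cloudfront.update_distribution(Id=DIST_ID, DistributionConfig=config,
                                   IfMatch=resp['ETag'])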

How to securely access a file in the application using an S3 bucket URL

In my application we have to open some PDF files in a new tab when an icon is clicked, using a direct S3 bucket URL like this:
http://MyBucket.s3.amazonaws.com/Certificates/1.pdf?AWSAccessKeyId=XXXXXXXXXXXXX&Expires=1522947975&Signature=XXXXXXXXXXXXXXXXX
Somehow I feel this is not secure, as the user can see the bucket name, AWSAccessKeyId, Expires, and Signature. Is this still considered secure? Or is there a better way to handle this?
Allowing the user to see these parameters is not a problem, because:
The AWSAccessKeyId can be public (do not confuse it with the SecretAccessKey).
Expires and the Signature are signed with your SecretAccessKey, so no one can manipulate them (AWS validates the signature against your secret key).
Since your objects and the bucket itself are not public, it is fine for the user to know your bucket name: a valid signature is always required to access the objects.
But I have two suggestions for you: 1. Use your own domain, so the bucket name is not visible (you can use the free SSL certificates provided by AWS if you use CloudFront). 2. Use HTTPS instead of plain HTTP.
And if for any reason you absolutely don't want your users to see the AWS parameters, I suggest you proxy access to S3 via your own API (though I consider that unnecessary).
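For completeness, generating such a time-limited HTTPS link with boto3 looks like this (the bucket and key are the ones from the question; the 15-minute expiry is an arbitrary choice):

    import boto3

    s3 = boto3.client('s3')
    url = s3.generate_presigned_url(
        ClientMethod='get_object',
        Params={'Bucket': 'MyBucket', 'Key': 'Certificates/1.pdf'},
        ExpiresIn=900,  # seconds
    )
    print(url)  # an https link the browser can open until it expires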
I see you are accessing over plain HTTP (no SSL). You can do virtual hosting with S3 for multiple domains:
https://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html
Create signed URLs based on your domain and you are good to go.
If you are using SSL, you can use CloudFront and configure the CloudFront origin to point to your S3 bucket.
Hope it helps.

Presigned S3 URL for PUT with dynamic filename / key starts with

Is it possible to PUT to S3 using a presigned key-starts-with policy to allow upload of multiple or arbitrarily named files?
This is easy using the browser-based PresignedPost technique, but I've been unable to find a way to use a normal, simple PUT for uploading arbitrarily named files whose keys start with the same prefix.
This isn't possible... not directly.
POST uploads are unique in their support for an embedded policy document, which allows logic like starts-with.
PUT and all other requests require the signature to precisely match the request, because the signature is derived entirely from observable attributes of the request itself.
One possible workaround would be to connect the bucket to CloudFront and use a CloudFront pre-signed URL with an appropriate wildcard. The CloudFront origin access identity, after validating the CloudFront URL, would actually handle signing the request in the background on its way to S3 to match the exact request. Giving the origin access identity the s3:PutObject permission in bucket policy then should allow the action.
I suggest this should work, though I have not tried it, because the CloudFront docs indicate that the client needs to add the x-amz-content-sha256 header to PUT requests for full compatibility with all S3 regions. The same page warns that any permissions you assign to the origin access identity will work (such as DELETE), so setting the bucket policy too permissively will allow any such operation to be performed via the signed URL; CloudFront signed URLs don't restrict to a specific REST verb.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html
Note that there's no such concept as uploading "to" CloudFront. Uploads go through CloudFront to the origin server, S3 in this case.
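To make the idea concrete, here is a hedged sketch using botocore's CloudFrontSigner with a custom policy. The key pair ID, key file, and domain are placeholders, and this assumes the workaround above actually holds up:

    from datetime import datetime, timedelta

    import rsa  # pip install rsa
    from botocore.signers import CloudFrontSigner

    def rsa_signer(message):
        # Private key of a CloudFront trusted key pair (hypothetical file name).
        with open('cloudfront-private-key.pem', 'rb') as f:
            private_key = rsa.PrivateKey.load_pkcs1(f.read())
        return rsa.sign(message, private_key, 'SHA-1')

    signer = CloudFrontSigner('APKAEXAMPLEKEYID', rsa_signer)  # hypothetical ID

    # Custom policies support wildcard resources, unlike plain S3 PUT signatures.
    policy = signer.build_policy(
        'https://d111111abcdef8.cloudfront.net/upload/*',
        date_less_than=datetime.utcnow() + timedelta(hours=2),
    )
    url = signer.generate_presigned_url(
        'https://d111111abcdef8.cloudfront.net/upload/file-name.bin',
        policy=policy,
    )
    # The client PUTs to `url`; CloudFront re-signs the request to S3.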

Amazon S3 Bucket - Files are completely public, but how can I restrict to allow only pre-signed requests?

I am using Amazon S3 to host images for a public REST API and serve them. Currently, my bucket allows anyone to enter in the URL to the image, without any signature included in the params, and the image will be accessible. However, I'd like to require an expiring signature in each image request, so users will have to go through the API to fetch images. How do I do this? Is this a bucket policy?
You simply set all of the files to private. If you want to be able to give out pre-signed URLs, you'll need to generate them as you hand them out.
AWS’ official SDKs make this trivial.
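For example, with boto3 it could look roughly like this (the bucket name, key, and expiry are hypothetical):

    import boto3

    s3 = boto3.client('s3')
    BUCKET = 'my-image-bucket'  # hypothetical bucket name

    # Turn off all public access so only signed requests succeed.
    s3.put_public_access_block(
        Bucket=BUCKET,
        PublicAccessBlockConfiguration={
            'BlockPublicAcls': True,
            'IgnorePublicAcls': True,
            'BlockPublicPolicy': True,
            'RestrictPublicBuckets': True,
        },
    )

    # Then your API hands out short-lived links on demand.
    url = s3.generate_presigned_url(
        ClientMethod='get_object',
        Params={'Bucket': BUCKET, 'Key': 'images/photo.jpg'},
        ExpiresIn=300,
    )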