How can AWS CloudFront forward requests from my S3-hosted site to another bucket? - amazon-web-services

This may be a simple question, but I can't find any tutorials for it.
My website is stored entirely in S3, but the front-end and back-end are in different buckets.
In my front-end website, the JS initiates requests using relative paths like /api/***, so the request URL becomes http://front-end.com/api/***.
How can I make all of these requests redirect to my back-end bucket, like this:
http://back-end.com/api/***
https://docs.aws.amazon.com/AmazonS3/latest/user-guide/redirect-website-requests.html
This doc doesn't seem to cover this.

Is there a reason you need to use different domain names to serve your content?
To redirect from http://front-end.com/api/* to http://back-end.com/api/*, there are a couple of ways:
1. Use a Lambda@Edge viewer-request function to return a 301/302 redirect with the new URL (see the sketch below).
2. Use an S3 bucket redirect rule.
In either case, both front-end.com and back-end.com need to point to CloudFront so that you can serve them from CloudFront.
An easy way is to access everything through front-end.com and create a cache behavior with the path pattern "/api/*" whose target origin is the bucket you want to serve those requests from.
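For option 1, here is a minimal sketch of what a Lambda@Edge viewer-request function could look like in Python; the back-end.com hostname comes from the question, and everything else (handler name, choice of 302) is illustrative rather than a tested setup:

# Hedged sketch: 302-redirect /api/* requests to the back-end domain.
# Attach this to the distribution's viewer-request event.
def handler(event, context):
    request = event['Records'][0]['cf']['request']
    if request['uri'].startswith('/api/'):
        return {
            'status': '302',
            'statusDescription': 'Found',
            'headers': {
                'location': [{
                    'key': 'Location',
                    'value': 'http://back-end.com' + request['uri'],
                }],
            },
        }
    # Everything else passes through to the origin unchanged.
    return request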

Related

AWS S3 + Cloudfront URL validation (CORS?)

I have a bucket s3://my-bucket/ with a lot of tenants in it: s3://my-bucket/app1, s3://my-bucket/app2, s3://my-bucket/app3, etc...
I also have an AWS CloudFront distribution with a custom domain pointing to this bucket as the origin:
app1.mycloudfrontcontenturl.com/app1/images/profilePicture.png
app2.mycloudfrontcontenturl.com/app2/images/customLogo.png
What I'm trying to do is prevent someone on one app from reaching another app's files by changing the host. In this scenario, hitting someappX.mycloudfrontcontenturl.com/app1/images/profilePicture.png works; I want to prohibit that. If the host header does not match the app in the URL, the request should be forbidden (or similar).
Any idea that does not use Lambda@Edge?

AWS S3 - secure URL

I'm trying to find a best practice for securely downloading content from S3 in an SPA. A presigned URL seems to be one option, but whoever gets hold of that URL can access the content. Is there a way to protect the presigned URL with an extra layer of security, e.g. to ensure the right person is accessing the files? Just a note: we use a third-party IdP, not Cognito.

S3 pre signed url without path

I need to have a way of letting a client upload data to S3 without showing them the full location (path) of the file. Is that something doable with an AWS S3 pre-signed URL?
I'm using boto3 like this:
import boto3

# BUCKET and name are defined elsewhere in my code
s3_client = boto3.client('s3')
url = s3_client.generate_presigned_url(
    ClientMethod='put_object',
    ExpiresIn=7200,
    Params={'Bucket': BUCKET, 'Key': name},
)
But the outcome will be:
https://s3.amazonaws.com/MY_BUCKET/upload/xxxx-xxxx/file-name.bin?AWSAccessKeyId=XXXX&Signature=XXXX&Expires=XXXX
I need something that won't show the key name in the path (/upload/xxxx-xxxx/file-name.bin).
What other solutions do I have, if not a pre-signed URL?
I believe the best way is to distribute the files with AWS CloudFront. You can set the origin of the CloudFront distribution to MY_BUCKET.s3.amazonaws.com. It is also possible to use a subfolder such as MY_BUCKET.s3.amazonaws.com/upload as the origin.
CloudFront will serve the files from the S3 origin under its generated CDN endpoint domain, or you can set up and use a custom domain as well:
https://d111111abcdef8.cloudfront.net/upload/xxxx-xxxx/file-name.bin
https://uploads.example.com/upload/xxxx-xxxx/file-name.bin
If you use the subfolder as the origin:
https://uploads.example.com/xxxx-xxxx/file-name.bin
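As an illustration of the subfolder-as-origin idea, the S3 origin entry of a CloudFront DistributionConfig could be shaped roughly like the snippet below; the origin Id is arbitrary, the rest of the DistributionConfig is omitted, and this is a sketch of the shape rather than a complete working configuration:

# Sketch of the S3 origin entry only; it would go inside the 'Origins' list
# of a full DistributionConfig when creating or updating the distribution.
origin = {
    'Id': 'my-s3-upload-origin',                 # arbitrary identifier
    'DomainName': 'MY_BUCKET.s3.amazonaws.com',  # bucket from the question
    'OriginPath': '/upload',                     # /xxxx-xxxx/file-name.bin maps
                                                 # to upload/xxxx-xxxx/file-name.bin
    'S3OriginConfig': {'OriginAccessIdentity': ''},
}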
More info on setting S3 Bucket as origin on Cloudfront: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistS3AndCustomOrigins.html#concept_S3Origin
More info on using directory paths of S3 Bucket as origin: https://aws.amazon.com/about-aws/whats-new/2014/12/16/amazon-cloudfront-now-allows-directory-path-as-origin-name/
More info on Custom URLs: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/CNAMEs.html
This isn’t a complete answer, but it’s too long to be a comment.
I think you should be able to use API Gateway as a proxy for S3 to hide the path. You can still use pre-signed URLs, but you might need to create pre-signed API Gateway URLs rather than pre-signed S3 URLs. I've never done it myself, nor will I be able to try it out in the near future, but I'll do my best to lay out how I think it's done, and maybe someone else can try it and write up a more complete answer.
First, we need to set up an API gateway endpoint that will act as a proxy to S3.
AWS has a very thorough write-up on how to make a general proxy for S3, and I think you can make your custom endpoint point to a specific bucket and folder in S3 by modifying the PathOverride of the proxy. If you look at the screenshot of the PathOverrides in this section of the AWS documentation, you can see they have set the path override to {bucket}/{object}, but I think you could set the PathOverride to mySecretBucket/my/secret/folder/{object}, and then update the path mappings appropriately.
Next, you need to be able to use pre-signed URLs with this proxy. There are two ways you might be able to do this.
The first thing that might work is making the url signature pass through API Gateway to S3. I know it’s possible to map query parameters in a similar way to path parameters. You may need to perform some url encoding on the pre-signed URL’s signature param to make it work—I’m not entirely sure.
The other option is to allow Api Gateway to always write to S3, and require a signed request for calling your proxy endpoint. This SO question has a pretty detailed answer that looks to me like it should work.
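For that second option, a rough, untested sketch of signing a PUT to the proxy endpoint with SigV4 via botocore might look like the following; the endpoint URL, stage, and region are made-up placeholders for wherever the proxy is deployed:

# Hedged sketch: sign a PUT to a hypothetical API Gateway proxy endpoint with
# SigV4, instead of handing the client an S3 pre-signed URL.
import boto3
import requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

session = boto3.Session()
credentials = session.get_credentials()

# Hypothetical endpoint that the proxy maps to mySecretBucket/my/secret/folder/{object}
url = 'https://abc123.execute-api.us-east-1.amazonaws.com/prod/file-name.bin'
body = b'...file contents...'

aws_request = AWSRequest(method='PUT', url=url, data=body)
SigV4Auth(credentials, 'execute-api', 'us-east-1').add_auth(aws_request)

response = requests.put(url, data=body, headers=dict(aws_request.headers))
print(response.status_code)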
Again, I know this isn’t a complete answer, and I haven’t tried to verify that this works, but hopefully someone can start with this and get to a complete answer for your question.

Redirecting request for non-existent s3 bucket to different bucket

There is a bucket with some world readable content, which is being referenced from many places. We have migrated the contents of the bucket to a new bucket.
Now we need to remove the old bucket, but we cannot change the endpoints/references to the objects that were generated in the old bucket.
For example:
Old bucket name: xxx-yyy
Sample endpoint : https://s3.amazonaws.com/xxx-yyy/facebook.png
New bucket name: abc-pqr
Sample endpoint : https://s3.amazonaws.com/abc-pqr/facebook.png
Any request coming to the non-existent xxx-yyy bucket should redirect to the abc-pqr bucket. We do not want to remove the endpoints; we just want requests for objects at the old endpoints to redirect to the new bucket.
It appears that you are referencing files directly in Amazon S3. This URL format is not able to redirect requests.
Amazon S3 buckets have a capability called Static Website hosting, which gives additional capabilities such as default Index & Error pages, plus the ability to set up a Webpage Redirect.
However, this requires a different URL to access your objects (e.g. http://xxx-yyy.s3-website-us-west-2.amazonaws.com/facebook.png). Given that you are unable to change your existing links, this would not be an option.
Your only option would be to create web pages in the original S3 bucket that use an HTML redirect to forward browsers to the new location.
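A minimal sketch of that HTML-redirect fallback with boto3 follows; the bucket names come from the question, the helper name is made up, and note it only helps clients that actually render the HTML (a direct image embed will not follow a meta refresh):

# Hedged sketch: overwrite each old object with a small HTML page that
# forwards browsers to the same key in the new bucket.
import boto3

s3 = boto3.client('s3')
OLD_BUCKET = 'xxx-yyy'
NEW_BASE = 'https://s3.amazonaws.com/abc-pqr'

def write_redirect_page(key):
    html = (
        '<html><head>'
        f'<meta http-equiv="refresh" content="0; url={NEW_BASE}/{key}">'
        '</head><body>'
        f'<a href="{NEW_BASE}/{key}">This object has moved</a>'
        '</body></html>'
    )
    s3.put_object(
        Bucket=OLD_BUCKET,
        Key=key,
        Body=html.encode('utf-8'),
        ContentType='text/html',
        ACL='public-read',  # assumption: the old objects were world readable
    )

write_redirect_page('facebook.png')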
With your current setup, that's not possible. If you had used AWS CloudFront, you could have easily achieved it.

Presigned S3 URL for PUT with dynamic filename / key starts with

Is it possible to PUT to S3 using a presigned key-starts-with policy to allow upload of multiple or arbitrarily named files?
This is easy using the browser-based PresignedPost technique, but I've been unable to find a way to use a normal, simple PUT to upload arbitrarily named files whose keys start with the same prefix.
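For comparison, the POST-policy version referenced above can be sketched with boto3 roughly like this; the bucket name and the uploads/ prefix are placeholders:

# Hedged sketch: browser-based POST upload with a starts-with key condition.
import boto3

s3 = boto3.client('s3')

post = s3.generate_presigned_post(
    Bucket='my-bucket',
    Key='uploads/${filename}',  # the browser substitutes the file name
    Conditions=[['starts-with', '$key', 'uploads/']],
    ExpiresIn=3600,
)
# post['url'] and post['fields'] are what the browser form must POST.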
This isn't possible... not directly.
POST uploads are unique in their support for an embedded policy document, which allows logic like starts-with.
PUT and all other requests require the signature to precisely match the request, because the signature is derived entirely from observable attributes of the request itself.
One possible workaround would be to connect the bucket to CloudFront and use a CloudFront pre-signed URL with an appropriate wildcard. After CloudFront validates the signed URL, the origin access identity handles signing the request in the background on its way to S3 so that the signature matches the exact request. Granting the origin access identity the s3:PutObject permission in the bucket policy should then allow the action.
I suggest this should work, though I have not tried it: the CloudFront docs indicate that the client needs to add the x-amz-content-sha256 header to PUT requests for full compatibility with all S3 regions, which suggests PUT requests through CloudFront are expected. The same page warns that any permission you assign to the origin access identity will work (such as DELETE), so making the bucket policy too permissive will allow any operation to be performed via the signed URL -- CloudFront signed URLs don't restrict the request to a specific REST verb.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html
Note that there's no such concept as uploading "to" CloudFront. Uploads go through CloudFront to the origin server, S3 in this case.
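To make the signed-URL side of this workaround concrete, here is a rough sketch with botocore's CloudFrontSigner; the key pair id, private key file, domain, and path are placeholders, and I have not tested PUT uploads through it:

# Hedged sketch: a CloudFront signed URL whose custom policy uses a wildcard
# resource, so one policy/signature pair covers arbitrary keys under /uploads/.
from datetime import datetime, timedelta

import rsa  # pip install rsa
from botocore.signers import CloudFrontSigner

def rsa_signer(message):
    with open('private_key.pem', 'rb') as f:  # CloudFront key pair's private key
        private_key = rsa.PrivateKey.load_pkcs1(f.read())
    return rsa.sign(message, private_key, 'SHA-1')

signer = CloudFrontSigner('KEYPAIRID', rsa_signer)

policy = signer.build_policy(
    'https://d111111abcdef8.cloudfront.net/uploads/*',  # wildcard resource
    date_less_than=datetime.utcnow() + timedelta(hours=2),
)

# The URL itself names a concrete object; the wildcard lives in the policy.
signed_url = signer.generate_presigned_url(
    'https://d111111abcdef8.cloudfront.net/uploads/file-name.bin',
    policy=policy,
)
print(signed_url)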