CloudFront 403 error while accessing files uploaded by another account

I have a CloudFront distribution which uses one of my S3 buckets as its origin. The files are uploaded to S3 by a third-party attachment uploader.
When I try to access a file in S3 via CloudFront I get a 403 Forbidden error with an Access Denied XML body. But when I manually upload files to the S3 bucket, I am able to access them via CloudFront.
The permissions on both files are the same except for the owner: for the file I uploaded manually the owner is my account, and for the file uploaded by the uploader it is the uploader's account. The third-party attachment uploader grants the bucket owner full access to the object. Also, I have restricted bucket access but not viewer access.
What can cause this error, and how do I go about debugging it?

When a second AWS account uploads content to an S3 bucket serving content via CloudFront with an OAI, the uploaded object needs the OAI's canonical ID added as a grantee with read permission, and the S3 bucket owner added as a grantee with full control, at upload time. With the AWS CLI this means uploading with --grants read=id=OAI-canonical-ID full=id=BucketOwnerID; adjust accordingly for whatever upload method is used.
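For example (bucket name, file name, and both canonical IDs below are placeholders; the bucket owner's canonical ID can be retrieved with aws s3api list-buckets --query Owner.ID):

    aws s3 cp ./attachment.pdf s3://my-bucket/attachment.pdf \
        --grants read=id=OAI_CANONICAL_ID full=id=BUCKET_OWNER_CANONICAL_ID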
When the file is viewed in the S3 bucket, its permissions will show CloudFront listed as a grantee, and the file should then be readable via CloudFront.

It is possible that the "Access Denied" response you are receiving is a response from S3 that is cached by CloudFront from before the object was available in S3.
For example, if you try the CloudFront URL and the file does not exist in S3, then you'll get the "Access Denied" response. Even after uploading the file, CloudFront will have the "Access Denied" response cached until the end of the TTL. During this time, you will continue to receive the "Access Denied" response.
Try invalidating the distribution. After that, request the file and see if you get the correct response.
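With the CLI, that looks like this (the distribution ID is a placeholder; narrow the path pattern if you don't want to invalidate everything):

    aws cloudfront create-invalidation --distribution-id E1EXAMPLE12345 --paths "/*"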
If this solves the problem, then you need to figure out how to avoid requesting the object from CloudFront before it exists in S3.

Related

Is there a way to use the CLI to copy data from a presigned URL to my own bucket?

I have a presigned URL for a file in a vendor's S3 bucket. I want to copy that file into my own bucket. I'd rather not copy it to the machine I'm running the copy from. My thought was to use the CLI s3 sync or cp commands to copy the file from one bucket to another. But those commands require s3:// URLs, not https://.
I tried converting the HTTP URL by replacing "https://bucketname.s3.region.amazonaws.com" with "s3://bucketname", but that gives an Access Denied error with s3 sync and a Bad Request with s3 cp. Is there any way to do this, or do I need to download it locally with HTTP, then upload to my bucket with the CLI?
The problem here is that you need to authenticate against two different accounts: the source to read and the destination to write. If you had access to both, i.e. the credentials you use to read could also write to your own bucket, you could cut out the middle-man.
That's not the case here, so your best bet is to download the file first, then authenticate with your own account and put the object there.
Amazon S3 has a built-in CopyObject command that can read from an S3 bucket and write to an S3 bucket without needing to download the data. To use this command, you need credentials that have GetObject permission on the source bucket and PutObject permission on the destination bucket. The credentials themselves can be issued by either the AWS account that owns the source bucket or the AWS account that owns the destination bucket, so you would need to work with the admins who control the 'other' AWS account.
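With such credentials, the copy is a single CLI call that S3 performs server-side (bucket and key names are placeholders):

    aws s3 cp s3://vendor-bucket/data.csv s3://my-own-bucket/data.csv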
If this is too difficult and your only way of accessing the source object is via a pre-signed URL, then you cannot use the CopyObject command. Instead, you would need to download the source file and then separately upload it to Amazon S3.
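In that case the two-step fallback looks like this (URL, bucket, and file names are placeholders; the signed query string is elided):

    curl -o data.csv "https://vendor-bucket.s3.us-east-1.amazonaws.com/data.csv?X-Amz-Signature=..."
    aws s3 cp data.csv s3://my-own-bucket/data.csv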

Configuring Security for an S3 Bucket for Presigned POST Uploads?

Continuing my adventure in AWS, Uploading The File, I am just going to come out and ask it. What is the proper way to configure the security settings for an S3 bucket intended to be the endpoint for a presigned post file upload?
Currently I have created an IAM user with full S3 permissions, which can generate the presigned POST in response to a GET request via an AWS Lambda function. However, if I have the 'Block all public access' setting enabled, I get Access Denied when I use the POST. If I turn it off, the upload works, but I am worried about security.
So I will just ask: what is the proper way to configure the security settings for an S3 bucket intended to be the endpoint for a presigned POST file upload?
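For reference, the generation step in the Lambda described above typically looks something like this minimal Node.js sketch (bucket name, key, region, and expiry are placeholder assumptions):

    const { S3Client } = require('@aws-sdk/client-s3');
    const { createPresignedPost } = require('@aws-sdk/s3-presigned-post');

    exports.handler = async () => {
        const client = new S3Client({ region: 'us-west-2' }); // placeholder region
        // Returns the POST URL plus the form fields the browser must submit with the file.
        const { url, fields } = await createPresignedPost(client, {
            Bucket: 'upload-bucket',      // placeholder bucket
            Key: 'uploads/example.bin',   // placeholder key
            Expires: 600,                 // the POST form expires after 10 minutes
        });
        return { statusCode: 200, body: JSON.stringify({ url, fields }) };
    };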

AWS CloudFront not able to point to specific url subpage

I am using ReactJS to develop our website, which I uploaded to an S3 bucket with both the index and error documents pointing to "index.html".
If I use the S3 bucket's website URL, say http://assets.s3-website-us-west-2.amazonaws.com, I get served my index.html. So far, so good. If I then go to a specific subpage by deliberately appending /merchant, it goes there without any problem, although there is no folder called /merchant in my S3 bucket.
However, if I attach this S3 bucket to my CloudFront distribution and try to directly address "https://blah.cloudfront.net/merchant", it responds with "access denied" because it cannot find the subfolder /merchant in the S3 bucket.
How do people get around this issue with CloudFront? I have so many virtual subpages that don't map to physical folders.
Thank you!
I have the answer.
In CloudFront, set a custom error response like this: for error code 403 (and optionally 404), set the response page path to /index.html and the HTTP response code to 200. CloudFront then serves the SPA entry point for every virtual subpage, and the client-side router takes over from there.
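If the distribution is managed as code, the equivalent CloudFormation fragment would look roughly like this (a sketch; the rest of the DistributionConfig is omitted):

    "CustomErrorResponses": [
        {
            "ErrorCode": 403,
            "ResponseCode": 200,
            "ResponsePagePath": "/index.html"
        }
    ]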

CloudFront Origin Group: detect failover using Lambda@Edge

My setup:
I have a CloudFront origin group where Bucket A is the primary bucket and Bucket B is the secondary. A Lambda@Edge function is attached to the origin-request event to do certain processing.
Whenever a request comes to CloudFront, my Lambda@Edge function modifies it to match the folder structure of my bucket, and the file is returned accordingly.
If Bucket A doesn't have a certain file, it throws an error and CloudFront fails over and requests the file from Bucket B. Bucket B doesn't have the same structure as Bucket A, so it should return the file from the unmodified file path in the bucket.
Example:
My Original Request: /somefile.html
Lambda@Edge modifies this request so the file is fetched from Bucket A at: /en/somefile.html
If Bucket A doesn't have somefile.html, the request goes to Bucket B, which should return the file from the originally requested path, /somefile.html, and not /en/somefile.html.
The above scenario is very simple; my actual scenario is much more complex. Basically, Bucket A's file path is the processed path, while Bucket B should return the file from the originally requested path.
What I want:
Using Lambda@Edge, how can I detect whether the request is going to Bucket A or Bucket B?
What I have tried:
I tried adding a certain header to the request headers and checking for its presence to decide that the request is going to Bucket B, but this doesn't seem to be working.
The hostname of the origin that CloudFront will try to contact after an origin-request trigger returns control can be found in one of two places.
If you are using what CloudFront calls an S3 origin (the REST interface to S3), it will be here:

    event.Records[0].cf.request.origin.s3.domainName

If you are using what CloudFront calls a custom origin -- which includes S3 website hosting endpoints as well as any other origin server that isn't S3 REST -- it's here:

    event.Records[0].cf.request.origin.custom.domainName
These can be used to determine which of two origins in an origin group will receive the request, next.
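A minimal origin-request handler along these lines might look like the following Node.js sketch (the primary bucket hostname and the /en prefix are placeholder assumptions taken from the example above):

    'use strict';

    // Placeholder hostname of the primary origin (Bucket A).
    const PRIMARY_ORIGIN = 'bucket-a.s3.amazonaws.com';

    exports.handler = async (event) => {
        const request = event.Records[0].cf.request;

        // S3 REST origins expose the hostname under origin.s3;
        // website endpoints and other custom origins use origin.custom.
        const origin = request.origin;
        const domainName = origin.s3 ? origin.s3.domainName : origin.custom.domainName;

        if (domainName === PRIMARY_ORIGIN) {
            // First attempt, against Bucket A: rewrite to its folder structure.
            request.uri = '/en' + request.uri;
        }
        // Otherwise this is the failover request to Bucket B:
        // leave the originally requested path untouched.

        return request;
    };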
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-event-structure.html#lambda-event-structure-request

Setting AWS S3 CORS so that files can be fetched only from the website

I have a website where all files are stored in AWS S3, and the client wants certain private files to be inaccessible to anyone who merely has the file link.
Is it possible to set CORS or something else so that if I copy an image link from S3 and paste it into the browser it is denied, but if the browser requests that image to display it on the website it works?
Use CloudFront; here is the official doc.