S3 presigned url limit file size - amazon-web-services

I am uploading files to S3 using a presigned URL from the client side. Is there any way I can:
restrict the upload based on a file size limit and MIME type
throw an error if the file name already exists in the bucket folder, instead of overwriting it?
Note:
I am expecting a no-code solution (e.g. a configuration in the bucket access policy) rather than an additional code-level check.
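As far as I know, a bucket policy alone cannot enforce a size or MIME-type limit on a presigned upload, so purely for reference: a presigned POST (rather than a presigned PUT) lets the signer attach conditions that S3 enforces at upload time. A minimal boto3 sketch under that assumption, with a hypothetical bucket and key; note it still does not reject uploads whose key already exists:

import boto3

s3 = boto3.client("s3")

post = s3.generate_presigned_post(
    Bucket="my-upload-bucket",             # hypothetical bucket name
    Key="uploads/stack.txt",               # hypothetical key
    Fields={"Content-Type": "text/plain"},
    Conditions=[
        ["content-length-range", 0, 5 * 1024 * 1024],  # reject anything over 5 MB
        ["starts-with", "$Content-Type", "text/"],      # constrain the MIME type
    ],
    ExpiresIn=300,
)
# post["url"] and post["fields"] go to the client, which submits a
# multipart/form-data POST; S3 rejects uploads that violate the conditions.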

Related

Restricting upload of any other file on s3 bucket except the file for which the presigned url is generated

I am using PHP to create a presigned URL and send it to the front end, which uploads directly to S3 using that presigned URL. I want to secure and restrict that upload.
For example, I have created a presigned URL for the file "stack.txt", and I want the upload to fail/be rejected if any file with a different name is uploaded to the S3 bucket.
How can I implement that?
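The question is about PHP, but the underlying point is language-agnostic, so here is a hedged sketch in Python/boto3: a presigned URL is signed for one bucket, one key and one HTTP method, so it can only write to that exact key ("stack.txt" below); a PUT against any other key fails the signature check. It does not validate the content itself unless additional headers are signed.

import boto3

s3 = boto3.client("s3")

# The signature covers bucket, key and method, so this URL can only be used
# to PUT an object at exactly "stack.txt" in "my-upload-bucket" (placeholder names).
url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "my-upload-bucket", "Key": "stack.txt"},
    ExpiresIn=300,
)
# The front end then issues: PUT <url> with the file body.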

Different permissions same S3 bucket, parquet files

Problem
I have multiple files in the same S3 bucket. When I try to load one file into Snowflake, I get an "access denied" error. When I try a different file (in the same bucket), I can load it into Snowflake successfully.
Known difference: the file that does not work was generated by AWS. The file that can be loaded into Snowflake was also generated by AWS, saved to my local machine, and then re-uploaded to the bucket.
The only difference is that I brought it down to my local machine.
Question: Is there a known file permission on parquet files? Why does this behavior go away when I download the file and upload it back to the same bucket?
It cannot be an S3 bucket issue. It has to be some encoding on the parquet file.
You are making some bad assumptions here. Each S3 object can have separate ACL (permission) values. You need to check what the ACL settings are by drilling down to view the details of each of those objects in S3. My guess is AWS is writing the objects to S3 with a private ACL, and when you re-uploaded one of them to the bucket you saved it with a public ACL.
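To check the per-object ACLs the answer refers to, a small boto3 sketch (bucket and key names are placeholders):

import boto3

s3 = boto3.client("s3")

# Compare the grants on the AWS-generated object and the re-uploaded one.
for key in ["aws-generated/file.parquet", "reuploaded/file.parquet"]:
    acl = s3.get_object_acl(Bucket="my-bucket", Key=key)
    print(key, acl["Owner"], acl["Grants"])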
Turns out I needed to add KMS permissions to the user accessing the file.
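Given that the eventual fix was KMS permissions, one plausible explanation (an assumption, not something stated in the thread) is that the AWS-generated file was encrypted with SSE-KMS while the re-uploaded copy picked up the bucket's default encryption. A head_object call shows which is the case:

import boto3

s3 = boto3.client("s3")

head = s3.head_object(Bucket="my-bucket", Key="aws-generated/file.parquet")
# "aws:kms" here means the reader also needs kms:Decrypt on the listed key.
print(head.get("ServerSideEncryption"), head.get("SSEKMSKeyId"))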

Can I pass a presigned URL to another s3 bucket to use to download content to it?

So, as the question states, is there any means by which I can simply pass a URL to an S3 bucket and have it download that content itself? As if I were sending a file to an upload presigned URL generated by my bucket, but passing the URL instead.
something like this:
s3Client.uploadToS3FromGivenUrl(s3BucketName, downloadFromThisUrl)
Help appreciated.
You can use the CopyObject API call to copy an object between buckets if you have sufficient access permissions.
However, if you only have read permission via a pre-signed URL, this will not work. You will need to download the object and then upload it to the destination bucket.
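A sketch of both paths in Python/boto3 (bucket names, keys and the URL are placeholders): copy_object when your credentials can read the source object, and download-then-upload when all you have is a presigned GET URL.

import boto3
import requests

s3 = boto3.client("s3")

# Path 1: with IAM read access to the source object, a server-side copy works.
s3.copy_object(
    Bucket="destination-bucket",
    Key="copied/stack.txt",
    CopySource={"Bucket": "source-bucket", "Key": "stack.txt"},
)

# Path 2: with only a presigned GET URL, stream the bytes down and re-upload them.
presigned_get_url = "https://source-bucket.s3.amazonaws.com/stack.txt?X-Amz-Signature=..."  # placeholder
resp = requests.get(presigned_get_url, stream=True)
resp.raise_for_status()
s3.upload_fileobj(resp.raw, "destination-bucket", "downloaded/stack.txt")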

Cloudfront Origin Group: detect failover using Lambda@Edge

My setup:
I have a Cloudfront Origin Group where Bucket A is the primary bucket and Bucket B is the secondary bucket. A Lambda@Edge function is attached on origin-request to do certain processing.
Whenever a request comes to Cloudfront, my Lambda@Edge function modifies it to match the folder structure of my bucket and returns the file accordingly.
If Bucket A doesn't have a certain file, it throws an error and Cloudfront fails over and requests the file from Bucket B. Since Bucket B doesn't have the same structure as Bucket A, it should return the file from the unmodified file path in the bucket.
Example:
My Original Request: /somefile.html
Lambda@Edge modifies this request to get the file from Bucket A at: /en/somefile.html
If Bucket A doesn't have somefile.html, the request goes to Bucket B. It should return the file from the originally requested path, /somefile.html, and not /en/somefile.html.
The above scenario is very simple; my real scenario is much more complex. Basically, Bucket A's file path is the processed path, while Bucket B should return the file from the originally requested path.
What I want:
Using Lambda@Edge, how can I detect whether the request is going to Bucket A or Bucket B?
What I have tried:
I tried adding a certain header to the request headers and checking whether it exists: if the header is present, the request is for Bucket B. But this doesn't seem to be working.
The hostname of the origin that CloudFront will try to contact after an origin-request trigger returns control can be found in one of two places.
If you are using what CloudFront calls an S3 origin (the REST interface to S3) it will be here:
event.Records[0].cf.request.origin.s3.domainName
If you are using what CloudFront calls a custom origin -- which includes S3 website hosting endpoints as well as any other origin server that isn't the S3 REST interface -- it's here:
event.Records[0].cf.request.origin.custom.domainName
These can be used to determine which of the two origins in an origin group will receive the request next.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-event-structure.html#lambda-event-structure-request
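A minimal origin-request handler built on that event structure, sketched in Python (a runtime Lambda@Edge also supports); the "bucket-a" prefix and the "/en" rewrite are placeholders for the real mapping logic:

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    origin = request.get("origin", {})

    # S3 REST origins expose the hostname under origin.s3.domainName,
    # custom origins (including S3 website endpoints) under origin.custom.domainName.
    domain = (origin.get("s3", {}).get("domainName")
              or origin.get("custom", {}).get("domainName", ""))

    if domain.startswith("bucket-a."):
        # Primary origin (Bucket A): rewrite to the processed folder structure.
        request["uri"] = "/en" + request["uri"]
    # Secondary origin (Bucket B): leave the originally requested path untouched.

    return request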

Cloudfront 403 error while accessing files uploaded by another account

I have a Cloudfront distribution which uses one of my S3 buckets as its origin server. The files are uploaded to S3 by a third-party attachment uploader.
When I try to access a file in S3 via Cloudfront, I get a 403 Forbidden error with an Access Denied XML body. But when I manually upload files to the S3 bucket, I am able to access them via Cloudfront.
The permissions for both files are the same except for the owner. For the file I uploaded manually, the owner is my account; for the file uploaded by the uploader, it is the uploader. The third-party attachment uploader gives the bucket owner full access to the object. Also, I have restricted bucket access but not viewer access.
What are the reasons which can cause this error? How do I go about debugging this?
When a second AWS account uploads content to an S3 bucket serving content via CloudFront with an OAI, the uploaded file needs to have the OAI canonical ID added with --grants read=id="OAI-canonical-ID" when the file is uploaded; also add the S3 bucket owner as a grantee with full=id="BucketOwnerID". The AWS CLI was used to perform the upload; adjust accordingly for whatever method is used.
When the file is viewed in the S3 bucket, the permissions will have CloudFront listed as a grantee. The file should be readable via CloudFront.
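The same idea at the API level rather than the CLI, as a boto3 sketch (the canonical IDs and names are placeholders): put_object accepts GrantRead and GrantFullControl, which correspond to the --grants values mentioned above.

import boto3

s3 = boto3.client("s3")

with open("attachment.pdf", "rb") as body:
    s3.put_object(
        Bucket="my-cloudfront-origin-bucket",
        Key="uploads/attachment.pdf",
        Body=body,
        # Let the CloudFront OAI read the object...
        GrantRead='id="OAI-canonical-ID"',
        # ...and give the bucket owner full control over it.
        GrantFullControl='id="BucketOwnerCanonicalID"',
    )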
It is possible that the "Access Denied" response you are receiving is a response from S3 that is cached by CloudFront from before the object was available in S3.
For example, if you try the CloudFront URL and the file does not exist in S3, then you'll get the "Access Denied" response. Even after uploading the file, CloudFront will have the "Access Denied" response cached until the end of the TTL. During this time, you will continue to receive the "Access Denied" response.
Try invalidating the distribution. After that, request the file and see if you get the correct response.
If this solves the problem, then you need to figure out how to avoid requesting the object from CloudFront before it exists in S3.
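The invalidation can be created from the console or, as a sketch, with boto3 (distribution ID and path are placeholders):

import time
import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_invalidation(
    DistributionId="EDFDVBD6EXAMPLE",  # placeholder distribution ID
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/path/to/file.pdf"]},
        # Any unique string works; a timestamp avoids accidental reuse.
        "CallerReference": str(time.time()),
    },
)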