Upload a file with the same name to Amazon S3 but keep the same permissions

Just my thinking: some of us work on several files and frequently upload a file with the same name to Amazon S3. By default, the permissions get reset. Assume that I am not using Versioning.
I need any uploaded file to keep the same permissions as the existing file of the same name already on Amazon S3.
I know it may not be a good idea, but technically how can we achieve it?
Thanks

It is not possible to upload an object and request that the existing ACL settings be kept on the new object.
Instead, you should specify the ACL when the object is uploaded.
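If you really need the new upload to end up with the same permissions as the object it replaces, one workaround is to read the existing object's ACL first and re-apply it after the upload. A minimal sketch with boto3, using made-up bucket and key names and assuming the object already exists:

    import boto3

    s3 = boto3.client("s3")
    bucket, key = "my-bucket", "report.csv"  # hypothetical names

    # Remember the ACL of the object that is about to be overwritten
    old_acl = s3.get_object_acl(Bucket=bucket, Key=key)

    # Upload the new version of the file (this resets the ACL to the default)
    s3.upload_file("report.csv", bucket, key)

    # Re-apply the previous grants to the new object
    s3.put_object_acl(
        Bucket=bucket,
        Key=key,
        AccessControlPolicy={
            "Grants": old_acl["Grants"],
            "Owner": old_acl["Owner"],
        },
    )

Keep in mind this costs two extra API calls per upload and is not atomic: between the upload and the ACL call, the object briefly has the default permissions.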

AWS File Sharing Services With Temporary Storage

I am currently looking for a solution to store temporary files in AWS. I want to build a feature in my app that allows my customers to upload a file and send it by email (something like WeTransfer or Send Anywhere).
I want to save the file temporarily in my AWS storage for 10 hours and then remove it permanently. If the file has not expired, the user can click the link (provided by AWS) in the email and download the file.
I recently came across S3 Bucket Lifecycle rules, but I can only specify days for the expiration and not hours.
I would appreciate any suggestion. Thank you!
Amazon S3 is the appropriate place to store these files.
If you want access controls (to control which users can access the file) and fine-grained control over when the object 'expires', then you would need to code this yourself.
The files should be stored in a private Amazon S3 bucket. You would then need a back-end app that manages user authentication. When an authorized user requests access to a file, the app can generate an Amazon S3 pre-signed URL, which provides time-limited access to private objects in Amazon S3 (eg 10 hours). This is the link you would put into the email.
Deletion could still be handled by S3 Lifecycle rules, but it is less important when the file is actually deleted because the pre-signed URL would block access to the file after 10 hours anyway.
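For illustration, a pre-signed GET link valid for 10 hours could be generated with boto3 roughly like this (bucket and key names are placeholders):

    import boto3

    s3 = boto3.client("s3")

    # 10 hours = 36,000 seconds; the link stops working after that
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "my-transfer-bucket", "Key": "uploads/abc123/file.zip"},
        ExpiresIn=36000,
    )

    # Put `url` into the email sent to the recipient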

Copy images from one S3 bucket to a different account's S3 bucket

I am using a RESTful API; the API provider has more than 80 GB of images in an S3 bucket.
I need to download these images and upload them to my own AWS S3 bucket, which is a time-consuming job.
Is there any way to copy the images to my S3 bucket instead of downloading and re-uploading them?
I talked with the API support team; they said: you are getting the image URLs, so it is up to you how you handle them.
I am using Laravel.
Is there a way to take the source image URLs and move the images directly to S3, instead of downloading them first and then uploading them?
Thanks
I think downloading and re-uploading across accounts would be inefficient, plus pricey for the API provider. Instead, I would talk to the API provider and try to replicate the images across accounts.
After replication you can use Amazon S3 Inventory for various information about the objects in the bucket.
Configuring replication when the source and destination buckets are owned by different accounts
You want "S3 Batch Operations". Search for "xcopy".
You do not say how many images you have, but 1,000 images at 80 GB each would be 80 TB. At that size you would not even want to download them file by file to a temporary EC2 instance in the same region, which might otherwise be a one- or two-day option; either way you will still pay for ingress/egress.
I am sure AWS will do this in an ad-hoc manner for a price, as they would do if you were migrating from the platform.
It may also be easier to allow access to the original bucket from the alternative account, but that is not what was asked.
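If the provider does grant your AWS account read access to their bucket, a server-side copy keeps the data inside S3 and avoids the download/upload round trip entirely. A rough sketch with boto3, using placeholder bucket names and a hard-coded key list where you would normally list the source bucket:

    import boto3

    s3 = boto3.resource("s3")
    source_bucket = "provider-images"       # the API provider's bucket (hypothetical)
    dest_bucket = s3.Bucket("my-images")    # your own bucket (hypothetical)

    # Server-side copy: the image bytes never pass through your machine
    for key in ["img/0001.jpg", "img/0002.jpg"]:
        dest_bucket.copy({"Bucket": source_bucket, "Key": key}, key)

This needs s3:GetObject permission on the source bucket and write permission on your own bucket.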

While moving S3 files to CloudFront, want to exclude some files

I have files stored in S3; some files are accessed often, but other files are just stored for later use.
We do not need to serve these files through the CDN at all.
Is there a way to tell CloudFront not to fetch these files from S3?
The easiest way is to move them to a separate S3 bucket; another option is to keep the objects you don't want exposed private.
By default, your Amazon S3 bucket and all of the objects in it are private—only the AWS account that created the bucket has permission to read or write the objects in it. If you want to allow anyone to access the objects in your Amazon S3 bucket using CloudFront URLs, you must grant public read permissions to the objects. (This is one of the most common mistakes when working with CloudFront and Amazon S3. You must explicitly grant privileges to each object in an Amazon S3 bucket.)
Source:
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/GettingStarted.html#GettingStartedUploadContent
Hopefully that answers your question.
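In practice that means leaving the rarely used objects with their default private ACL and only granting public read to the objects CloudFront should serve. A small sketch with boto3, with made-up bucket and key names:

    import boto3

    s3 = boto3.client("s3")

    # Objects CloudFront should serve: grant public read
    s3.put_object_acl(Bucket="my-site-assets", Key="images/logo.png", ACL="public-read")

    # Objects only stored for later use: keep (or reset to) the default private ACL
    s3.put_object_acl(Bucket="my-site-assets", Key="archive/backup.tar.gz", ACL="private")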

How to implement a one-time write ticket to AWS S3 bucket?

Hi guys,
I am trying to implement a mechanism whereby an anonymous AWS user can write to a specific S3 bucket that belongs to me, using a ticket provided by me (such as a random string). There may be restrictions on the object size, and there should be a time limit (for example, the write must happen within 1 hour after I issue the ticket). Is there any way to implement this using AWS S3 access policies?
Thanks in advance!
Yes, this is possible using the Post Object API call on S3.
You'll need to generate and sign a security policy and pass it along with the upload. This policy will contain rules as to what types of files can be uploaded, restrictions on file size, location in your bucket where new files can be uploaded, an expiration date for the policy, etc.
To learn more, check out this example as well as this article.
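In boto3 the corresponding call is generate_presigned_post; the returned URL and form fields are effectively the "ticket" you hand out. A minimal sketch, assuming a 1-hour validity and a 10 MB size limit (bucket name and key are made up):

    import boto3

    s3 = boto3.client("s3")

    # The random string acts as the ticket: the upload can only land on this key
    ticket_key = "tickets/3f9a2c7e-upload"  # hypothetical random token

    post = s3.generate_presigned_post(
        Bucket="my-upload-bucket",                                    # hypothetical bucket
        Key=ticket_key,
        Conditions=[["content-length-range", 0, 10 * 1024 * 1024]],  # max 10 MB
        ExpiresIn=3600,                                               # valid for 1 hour
    )

    # Give post["url"] and post["fields"] to the user; they POST the file along with them

Anything that violates the signed policy (too large, wrong key, expired) is rejected by S3.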

Import data from URL to Amazon S3

I have a file with a pre-signed URL.
I would like to upload that file directly to my S3 bucket without downloading it first (I know how to do it with the intermediate step, but I want to avoid it).
Any suggestion?
Thanks in advance
There is not a method supported by S3 that will accomplish what you are trying to do.
S3 does not support a request type that says, essentially, "go to this url and whatever you fetch from there, save it into my bucket under the following key."
The only option here is to fetch what you want, and then upload it. If the objects are large, and you don't want to dedicate the necessary disk space, you could fetch it in parts from the origin and upload it in parts using multipart upload... or if you are trying to save bandwidth somewhere, even the very small t1.micro instance located in the same region as the S3 bucket will likely give you very acceptable performance for doing the fetch and upload operation.
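If you do go the fetch-and-upload route, the file never has to touch disk: it can be streamed from the pre-signed URL straight into S3. A rough sketch with the requests library and boto3 (URL, bucket, and key are placeholders); upload_fileobj switches to multipart upload automatically for large objects:

    import boto3
    import requests

    presigned_url = "https://example.com/some-presigned-url"  # placeholder
    s3 = boto3.client("s3")

    # Stream the download so the whole file is never held in memory or on disk
    with requests.get(presigned_url, stream=True) as resp:
        resp.raise_for_status()
        s3.upload_fileobj(resp.raw, "my-bucket", "imported/file.bin")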
The single exception to this is where you are copying an object from S3, to S3, and the object is under 5 GB in size. In this case, you send a PUT request to the target bucket, accompanied by:
x-amz-copy-source: /source_bucket/source_object_key
That's not quite a "URL", and I assume you do not mean copying from bucket to bucket where you own both buckets, or you would have asked this more directly... but this is the only thing S3 has that resembles the behavior you are looking for at all.
http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html
You can't use the signed URL here... the credentials you use to send the PUT request have to have permission to both fetch and store.
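For completeness, that bucket-to-bucket copy (for objects under 5 GB) maps to copy_object in boto3, where CopySource becomes the x-amz-copy-source header shown above:

    import boto3

    s3 = boto3.client("s3")

    # Server-side copy: equivalent to a PUT with x-amz-copy-source
    s3.copy_object(
        Bucket="target-bucket",
        Key="copied/object_key",
        CopySource={"Bucket": "source_bucket", "Key": "source_object_key"},
    )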