Amazon S3 Object Lifecycle Management via header

I've been searching for an answer to this question for quite some time but apparently I'm missing something.
I use s3cmd heavily to automate document uploads to AWS S3, via script. One of the parameters that can be used in s3cmd is --add-header, which I assume allows for lifecycle rules to be added.
My objective is to add this parameter and specify a +X (where X is days) on the upload, so that in the event of ... --add-header=...1 ... the lifecycle rule would delete this file after 24h.
I know this can be easily done via the console, but I would like to have a more detailed control over individual files/scripts.
I've read the parameters that can be passed to S3 via s3cmd, but I somehow can't understand how to put all of those together to get the intended result.
Thank you very much for any help or assistance!

The S3 API itself does not implement support for any request header that triggers lifecycle management at the object level.
The --add-header option for s3cmd can add headers that S3 understands, such as Content-Type, but there is no lifecycle header you can send using any tool.
You might be thinking of this:
If you make a GET or a HEAD request on an object that has been scheduled for expiration, the response will include an x-amz-expiration header that includes this expiration date and the corresponding rule Id
https://aws.amazon.com/blogs/aws/amazon-s3-object-expiration/
This is a response header, and it is read-only.
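If per-object expiration is the goal, the usual workaround is a bucket-level lifecycle rule scoped to an object tag, then tagging each upload. Below is a minimal boto3 sketch of that idea; the bucket name, key, tag key, and rule ID are hypothetical, and keep in mind S3 only evaluates lifecycle rules roughly once a day, so "after 24h" is approximate rather than exact to the hour.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-bucket"  # hypothetical bucket name

# One-time setup: a bucket-level lifecycle rule that expires any object
# carrying the tag expire-after=1d one day after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-tagged-objects",
                "Filter": {"Tag": {"Key": "expire-after", "Value": "1d"}},
                "Status": "Enabled",
                "Expiration": {"Days": 1},
            }
        ]
    },
)

# Per upload: tag the object instead of sending a (non-existent) lifecycle header.
s3.put_object(
    Bucket=BUCKET,
    Key="docs/report.txt",
    Body=b"hello",
    Tagging="expire-after=1d",  # URL-encoded key=value pairs
)
```

The same tagging could be approximated from s3cmd by adding the tag after upload (or by routing such files under a prefix that a prefix-scoped lifecycle rule covers); the point is that the expiration lives in the bucket configuration, not in a request header.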

Related

How to specify custom metadata in resumable upload (via XML API)?

I am following steps for resumable upload outlined here.
According to the documentation, custom metadata has to be specified in the first POST and is passed via x-goog-meta-* headers, i.e.:
x-goog-meta-header1: value1
x-goog-meta-header2: value2
... etc
But in my testing all these values disappear: after the final PUT, the object shows up in the bucket with the proper content-type but without a single piece of custom metadata.
What am I doing wrong?
P.S. It is rather suspicious that the JSON API's resumable upload takes metadata as the payload of the first POST...
P.P.S. I am performing the resumable upload via the XML API described here (only using C++ code instead of the curl utility). Adding an x-goog-meta-mykey: myvalue header has no effect on the object's custom metadata.
If you replace AWS4-HMAC-SHA256 in the Authorization header with GOOG4-HMAC-SHA256, it works. GCS uses this bit as a "should I expect x-amz- or x-goog- headers?" switch. The problem is that with a resumable upload you have to specify x-goog-resumable, and adding x-amz-meta-* headers causes the request to fail with a message about mixing x-goog- and x-amz- headers.
I also went ahead and changed a few other aspects of the signature, namely:
request type: aws4_request -> goog4_request
signing key: AWS4 -> GOOG4 (GOOG1 works too)
service name: s3 -> storage (even though in some errors GCS asks for either s3 or storage to be specified here, it also takes gcs and maybe other values)
... this isn't necessary, it seems. I've done it just for consistency.
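For reference, the initiation request ends up looking roughly like the sketch below (Python instead of C++, with placeholder values; the Authorization header must carry a real V4 signature computed with the GOOG4 scheme described above, which is omitted here).

```python
import requests

# Hypothetical bucket/object; the signature computation is not shown.
session_init_url = "https://storage.googleapis.com/my-bucket/my-object"
headers = {
    "Authorization": "GOOG4-HMAC-SHA256 Credential=...",  # not AWS4-HMAC-SHA256
    "x-goog-resumable": "start",        # marks this POST as a session initiation
    "x-goog-meta-header1": "value1",    # custom metadata must ride on this POST
    "x-goog-meta-header2": "value2",
    "Content-Type": "text/plain",
}

resp = requests.post(session_init_url, headers=headers)
resp.raise_for_status()

# GCS returns the resumable session URI in the Location header;
# the object bytes are then PUT to that URI and carry no custom metadata.
session_uri = resp.headers["Location"]
requests.put(session_uri, data=b"file contents")
```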

AWS Lambda function getting called repeatedly

I have written a Lambda function which gets invoked automatically when a file comes into my S3 bucket.
I perform certain validations on this file, modify it, and put the file back at the same location.
Due to this "put", my Lambda is called again and the process goes on until my Lambda execution times out.
Is there any way to trigger this lambda only once?
I found an approach where I can store the file name in DynamoDB and can apply a check in lambda function, but can there be any other approach where DynamoDB's use can be avoided?
You have a couple options:
You can put the file in a different location in S3 and delete the original.
You can add a metadata field to the S3 object when you update it, then check for the presence of that field so you know whether you have already processed it (see the sketch after this list). This might not work perfectly, since S3 does not always provide the most recent data on reads after updates.
AWS allows different types of S3 event triggers. You can try playing with s3:ObjectCreated:Put vs s3:ObjectCreated:Post.
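For the metadata option, a minimal handler sketch might look like this (boto3, with a hypothetical "validated" metadata key; validate_and_modify stands in for your own logic):

```python
import boto3
import urllib.parse

s3 = boto3.client("s3")
PROCESSED_FLAG = "validated"  # hypothetical metadata key


def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Skip objects we already wrote back, breaking the trigger loop.
        head = s3.head_object(Bucket=bucket, Key=key)
        if head["Metadata"].get(PROCESSED_FLAG) == "true":
            continue

        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        body = validate_and_modify(body)  # your existing validation logic

        # Re-upload with the flag so the next invocation is a no-op.
        s3.put_object(
            Bucket=bucket,
            Key=key,
            Body=body,
            Metadata={PROCESSED_FLAG: "true"},
        )


def validate_and_modify(data: bytes) -> bytes:
    return data  # placeholder
```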
You can upload your files in a folder, say
s3://bucket-name/notvalidated
and store the validated in another folder, say
s3://bucket-name/validated.
Update your S3 event notification to invoke your Lambda function whenever there is an ObjectCreate(All) event under the /notvalidated prefix.
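A sketch of wiring up that prefix filter with boto3 (bucket name and Lambda ARN are placeholders, and the function must already grant s3.amazonaws.com permission to invoke it):

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="bucket-name",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "Id": "validate-incoming",
                "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:validate",
                "Events": ["s3:ObjectCreated:*"],
                # Only objects under notvalidated/ trigger the function,
                # so writing to validated/ does not re-invoke it.
                "Filter": {
                    "Key": {
                        "FilterRules": [
                            {"Name": "prefix", "Value": "notvalidated/"}
                        ]
                    }
                },
            }
        ]
    },
)
```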
The second answer does not seem to be correct (put vs post) - there is not really a concept of update in S3 in terms of POST or PUT. The request to update an object will be the same as the initial POST of the object. See here for details on the available S3 events.
I had this exact problem last year - I was doing an image resize on PUT, and every time a file was overwritten, it would be triggered again. My recommended solution would be to have two folders in your S3 bucket - one for the original file and one for the finalized file. You could then create the Lambda trigger with a prefix filter so it only checks the files in the original folder.
Events are triggered in S3 when an object is Put/Post/Copy/CompleteMultipartUpload - all of these operations correspond to ObjectCreated, as per the AWS documentation:
https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html
The best solution is to restrict your S3 object-created event to a particular bucket location (prefix), so that any change in that location triggers the Lambda function.
You can then write the modified object to some other location that is not configured to trigger the Lambda function.
Hope it helps!

If an updated object in S3 serves as a lambda trigger, is there an inherent race condition?

If I update an object in an S3 Bucket, and trigger on that S3 PUT event as my Lambda trigger, is there a chance that the Lambda could operate on the older version of that object given S3’s eventual consistency model?
I’m having a devil of a time parsing out an authoritative answer either way...
Yes, there is a possibility that a blind GET of an object could fetch a former version.
There are at least two solutions that come to mind.
Weak: the notification event data contains the etag of the newly-uploaded object. If the object you fetch doesn't have this same etag in its response headers, then you know it isn't the intended object.
Strong: enable versioning on the bucket. The event data then contains the object versionId. When you download the object from S3, specify this exact version in the request. The consistency model is not as well documented when you overwrite an object and then download it with a specific version-id, so it is possible that this might result in an occasional 404 -- in which case, you almost certainly just spared yourself from fetching the old object -- but you can at least be confident that S3 will never give you a version other than the one explicitly specified.
If you weren't already using versioning on the bucket, you'll want to consider whether to keep old versions around, or whether to create a lifecycle policy to purge them... but one brilliantly-engineered feature about versioning is that the parts of your code that were written without awareness of versioning should still function correctly with versioning enabled -- if you send non-versioning-aware requests to S3, it still does exactly the right thing... for example, if you delete an object without specifying a version-id and later try to GET the object without specifying a version-id, S3 will correctly respond with a 404, even though the "deleted" version is actually still in the bucket.
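A rough sketch of the strong option in a Python Lambda handler, assuming boto3; the field names are the standard S3 notification event ones, and the eTag comparison shows the weaker fallback for when versioning isn't enabled:

```python
import boto3
import urllib.parse

s3 = boto3.client("s3")


def handler(event, context):
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
    # Present in the event when bucket versioning is enabled.
    version_id = record["s3"]["object"].get("versionId")

    if version_id:
        # Pin the read to exactly the version that fired the event.
        obj = s3.get_object(Bucket=bucket, Key=key, VersionId=version_id)
    else:
        # Weaker fallback: compare the eTag from the event against what we fetched.
        obj = s3.get_object(Bucket=bucket, Key=key)
        if obj["ETag"].strip('"') != record["s3"]["object"]["eTag"]:
            raise RuntimeError("Fetched object does not match the event's eTag")

    data = obj["Body"].read()
    # ... process data ...
```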
How does the file get there in the first place? I'm asking because, if you could reverse the order, it would solve your issue: you put your file in S3 via a Lambda that, before overwriting the file, first gets the existing version from the bucket and does whatever you need.

AWS CloudFront Behavior

I've been setting up AWS Lambda functions for S3 events. I want to set up a new structure for my bucket, but that's not possible, so I set up a new bucket the way I want; I will migrate old things over and send new things there. I wanted to keep some of the structure the same under a given base folder name, old-bucket/images and new-bucket/images. I set up CloudFront to serve from old-bucket/images now, but I wanted to add new-bucket/images as well. I thought the Behaviors tab would let me set it up so that it checks new-bucket/images first, then old-bucket/images. Alas, that didn't work: if the object wasn't found in the first, that was the end of the line.
Am I misunderstanding how behaviors work? Has anyone attempted anything like this?
That is expected behavior. A cache behavior tells Amazon CloudFront which origin to use for a request, matched on a path pattern (prefix, suffix, etc.).
For example, you could serve old-bucket/* from one Amazon S3 bucket, while serving new-bucket/* from a different bucket.
However, there is no capability to 'fall-back' to a different origin if a file is not found.
You could check for the existence of files before serving the link, and then provide a different link depending upon where the files are stored. Otherwise, you'll need to put all of your files in the location that matches the link you are serving.
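A minimal sketch of that existence check with boto3 (bucket names and CloudFront domains here are hypothetical):

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")


def choose_link(key: str) -> str:
    """Return the new-bucket URL if the object exists there, else the old one."""
    try:
        s3.head_object(Bucket="new-bucket", Key=key)
        return f"https://new-distribution.example.com/{key}"
    except ClientError as err:
        if err.response["Error"]["Code"] in ("404", "NoSuchKey"):
            return f"https://old-distribution.example.com/{key}"
        raise
```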

Amazon S3 and Cloudfront with TTL=0 Testing procedure

I would like to test and verify that my TTL=0 actually works.
What I have:
An S3 bucket that is mounted to a directory on my Red Hat machine, so when I edit a simple txt file from the shell, I can open it in the AWS console bucket manager and view the file. I have also created a CloudFront distribution, so I can open the txt file from the CloudFront link.
Test:
I edit the txt file from the shell, then open it from the AWS console in the S3 bucket section and see that the file has changed, but when I open the file via the CloudFront link, it hasn't changed. This suggests the TTL=0 did not work.
How can I verify that TTL=0 works and that it is set correctly? After creating the distribution, I cannot find where to edit the TTL again.
Thanks
Quoting AWS:
Note that our default behavior isn’t changing; if no cache control header is set, each edge location will continue to use an expiration period of 24 hours before checking the origin for changes to that file. You can also continue to use Amazon CloudFront's Invalidation feature to expire a file sooner than the TTL set on that file.
You're likely not setting the cache control correctly. One way to confirm that is to Enable S3 Bucket Logging - New files will appear whenever there are new HTTP GETs from your S3 Bucket, even if they come from CloudFront.
You could also test S3 Directly, with curl (or s3curl) so you can track its headers correctly.
My recommendation is that, whenever you upload new content, you force CloudFront to invalidate. If you're using tools like s3fs, then inotify/incron might help you.
(Disclaimer: I totally hate the whole idea of mapping filesystems off to S3. They're quite different tools and you're likely to get 'leaky abstractions')
It is most likely that you are not sending any caching headers from S3. CloudFront looks for a Cache-Control (or Expires) header on the source object and, if it doesn't find one, defaults to 24 hours.
You could look to set a bucket policy or use a tool like S3 browser to automatically set the headers. http://s3browser.com/automatically-apply-http-headers.php
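If you would rather set the header yourself at upload time, a boto3 sketch along these lines should work (bucket and key are placeholders); the copy_object call is the usual trick for attaching headers to objects that already exist:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-bucket"  # hypothetical

# Upload with an explicit Cache-Control so CloudFront honours a short TTL.
s3.put_object(
    Bucket=BUCKET,
    Key="test.txt",
    Body=b"hello",
    ContentType="text/plain",
    CacheControl="max-age=0, no-cache",
)

# For an object that already exists, copy it onto itself and replace
# the metadata to attach the header after the fact.
s3.copy_object(
    Bucket=BUCKET,
    Key="test.txt",
    CopySource={"Bucket": BUCKET, "Key": "test.txt"},
    MetadataDirective="REPLACE",
    ContentType="text/plain",
    CacheControl="max-age=0, no-cache",
)
```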
If you just want to test, then I would follow the steps below (a small header-check sketch follows the list).
Create a new text file in your bucket
Through the AWS console, locate the file and check and/or add the caching headers
Retrieve the file from CloudFront
Change the file in the bucket
Check the headers of the new file in AWS console (your S3 mapping utility may erase the previous file headers)
Retrieve the new changed file from CloudFront
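A small header-check sketch for the retrieval steps (plain Python standard library; both URLs are placeholders for the same object served directly from S3 and through CloudFront):

```python
import urllib.request

# Hypothetical URLs for the same object via S3 and via CloudFront.
urls = [
    "https://my-bucket.s3.amazonaws.com/test.txt",
    "https://d1234567890.cloudfront.net/test.txt",
]

for url in urls:
    with urllib.request.urlopen(url) as resp:
        print(url)
        # Cache-Control/Expires come from the object; Age and X-Cache are
        # added by CloudFront and show whether the edge served a cached copy.
        for name in ("Cache-Control", "Expires", "ETag", "Age", "X-Cache", "Last-Modified"):
            print(f"  {name}: {resp.headers.get(name)}")
```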
Sending an invalidation call to CloudFront with each request may become chargeable if you have a large number of edits per month. Plus, invalidations take several minutes (sometimes 20 minutes or more) to propagate, meaning you can never instantly change your content.