Lambda function triggers only for an upload matching a filename pattern - amazon-web-services

Let's say I upload folder/key.jpg to an S3 bucket. How would I trigger a Lambda function only when a file whose name ends in jpg is uploaded?
Is this possible, or do I need to check the filename in the function and early-out if it doesn't match what I'm looking for?
The reason I ask is that a lot of stuff will be uploaded to the bucket, and it seems inefficient (and costly) for the function to trigger every time.

It is possible to trigger Lambda only when a jpg image is uploaded to your S3 folder; follow https://n2ws.com/blog/aws-automation/lambda-function-s3-event-triggers. Just add jpg as the suffix and the folder name as the prefix in the event section of S3.

You can use S3 Event Notifications. For example, configure a notification on the bucket with a suffix filter of jpg.

You could use a Suffix filter, as per this blog.
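As a concrete illustration, a notification configuration with both a prefix and a suffix filter could be set up with boto3 roughly as follows; the bucket name, account ID, and function ARN below are placeholders, not values from the question:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and Lambda ARN; substitute your own values.
s3.put_bucket_notification_configuration(
    Bucket="my-upload-bucket",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "Id": "jpg-uploads-only",
                "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:process-jpg",
                "Events": ["s3:ObjectCreated:*"],
                # Only keys starting with folder/ and ending in jpg invoke the function.
                "Filter": {
                    "Key": {
                        "FilterRules": [
                            {"Name": "prefix", "Value": "folder/"},
                            {"Name": "suffix", "Value": "jpg"},
                        ]
                    }
                },
            }
        ]
    },
)
```

Note that the function also needs a resource-based policy allowing s3.amazonaws.com to invoke it; the console adds this automatically when you configure the trigger there.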

Related

How to let AWS Lambda pass a string value to index.html in an S3 bucket

I'm currently building a Lambda function to which an IoT trigger passes an event['key'] value; based on that value, it should update the index.html that is stored in an S3 bucket. For example, if event['key'] = 'Yes', the HTML should display the string 'hi'.
I'm not quite sure how I'd be able to update the HTML since I'm fairly new to AWS. I know there's an API with that functionality but I can't seem to find it. putObject seems fairly close, but it's not what I'm looking for since I need to update a string value inside the HTML. Is there any way to do so?
Details can vary based on the environment/stack you are using to write your Lambda.
If you want to update your index.html file located in S3 based on an (IoT or any other) trigger, your Lambda needs to getObject (read that file from S3), modify the content (by simple find-and-replace or more advanced parsing, traversing, and DOM manipulation), and putObject it back to S3.
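A minimal sketch of that read-modify-write flow using boto3; the bucket name and the placeholder markup being replaced are assumptions for illustration only:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-site-bucket"   # hypothetical bucket holding index.html
KEY = "index.html"

def lambda_handler(event, context):
    # getObject: read the current index.html from S3
    obj = s3.get_object(Bucket=BUCKET, Key=KEY)
    html = obj["Body"].read().decode("utf-8")

    # Simple find-and-replace driven by the incoming trigger value.
    # The <span> markup here is made up; use whatever placeholder your page has.
    if event.get("key") == "Yes":
        html = html.replace('<span id="status"></span>',
                            '<span id="status">hi</span>')

    # putObject: write the modified document back to the same key
    s3.put_object(Bucket=BUCKET, Key=KEY,
                  Body=html.encode("utf-8"),
                  ContentType="text/html")
```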

AWS Lambda avoid recursive trigger

I'm downloading data from an API and writing it to a csv file that I store in an S3 bucket. I'm then copying my file from this input bucket into an output bucket with a Lambda function. From the output bucket I'm ingesting it into a MySQL RDS instance with another Lambda function.
The copy-to-another-bucket and upload-to-RDS lambda functions both get triggered when I create a new object in a bucket. Since I'm appending to my csv file, the upload-to-RDS function gets triggered way more than it should and I end up with ~30 rows in my database instead of 6.
I thought by copying the files between S3 buckets I could avoid this, but it doesn't help. Is there any way to only upload the csv file to the database once it has been written and not while it's being updated? Can I delay the trigger maybe?
The only other solution I can think of is to skip the copy-to-another-bucket function altogether and to schedule the upload-to-RDS function.
You need to realize that S3 doesn't support updating an existing file. If you are appending a row to an existing CSV file in S3, then that operation requires uploading the entire contents of the CSV file to S3 again, which S3 sees as a new object.
If you need to store a temporary version of the CSV file in S3 while you are updating it, then you should store it in a separate path, like s3://your_bucket/tmp and then when you have completed your updates, move it to the final path like s3://your_bucket/complete and only configure the Lambda trigger on the /complete path.
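For instance, the "finalize" step can be a copy followed by a delete, since S3 has no rename; this sketch assumes the bucket layout described above and a made-up report.csv object name:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "your_bucket"

def finalize_csv(key):
    """Move a finished CSV from the tmp/ prefix to the complete/ prefix.

    Only the complete/ prefix is configured as the Lambda trigger, so the
    intermediate writes under tmp/ never invoke the upload-to-RDS function.
    """
    # key is expected to look like "tmp/report.csv" (hypothetical name)
    final_key = "complete/" + key.split("/", 1)[1]
    # S3 has no rename, so copy the object and then delete the original
    s3.copy_object(Bucket=BUCKET, Key=final_key,
                   CopySource={"Bucket": BUCKET, "Key": key})
    s3.delete_object(Bucket=BUCKET, Key=key)

finalize_csv("tmp/report.csv")
```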

How to recover Lambda code or edit it inline after uploading a huge zip file to AWS Lambda?

Firstly, I am a newbie to AWS. I was able to edit my Lambda code inline, but I recently uploaded a zip file (30 MB) to an S3 bucket and added this zip to my Lambda from S3, and now my Lambda inline editor doesn't open anymore, showing the following error:
"The deployment package of your Lambda function "LF2" is too large to
enable inline code editing. However, you can still invoke your function."
I tried deleting my zip file from the S3 bucket, hoping that the URL of the zip would no longer be reachable, the Lambda would lose the zip file, and I could edit the function again. But my Lambda size still reflects the 30 MB zip file. I am unable to delete this zip and can't figure out a way to get rid of it and edit my Lambda code again.
Note: My Lambda code was written inline and is different from the zip file (which only contains Elasticsearch setup files that I uploaded for use in my code, since import elasticsearch wasn't working). I know there would have been a better way to do this without uploading the zip.
Yes, you can download the Lambda function. Go to the AWS console for the Lambda function, make sure you are in the Configuration view, then click Actions | Export function. This will allow you to download a ZIP file containing the Lambda function.
Note that once you upload a Lambda function via S3, it's copied by the Lambda service. There's no connection at that point back to the S3 object that you uploaded. One reason for this is that your Lambda function would break if you, accidentally or otherwise, deleted the file from S3.
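If you prefer the SDK over the console, get_function returns a pre-signed URL to the deployment package stored by the Lambda service, which you can download directly. A minimal sketch, using the function name LF2 from the question:

```python
import urllib.request

import boto3

lam = boto3.client("lambda")

# The Lambda service keeps its own copy of the package, independent of the
# S3 object originally uploaded; Code.Location is a pre-signed download URL.
resp = lam.get_function(FunctionName="LF2")
package_url = resp["Code"]["Location"]

urllib.request.urlretrieve(package_url, "LF2-package.zip")
print("Downloaded deployment package to LF2-package.zip")
```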
I had this problem yesterday. I managed to recover some of my code, but not the full code that had vanished from AWS Lambda. I wrote that code again, tested it, compressed it on my own machine, and then tried to upload it to the same Lambda function under the same name.
While uploading it, Lambda gave me the option to choose between the remote file I had uploaded and the local file it had saved previously. I opted for the local file and boom! I got my code back as it was last saved.
So, I suggest you just try uploading a blank compressed zip file containing one file named the same as the Lambda function. It should give you the option to choose between both files; choose the "local" file. That should take you to the inline editor where your code was.
I just ran into the same issue; it seems the upload replaced the previous index.js with the export handler.

AWS Lambda function getting called repeatedly

I have written a Lambda function which gets invoked automatically when a file comes into my S3 bucket.
I perform certain validations on this file, modify the particulars, and put the file back at the same location.
Due to this "put", my Lambda is called again and the process goes on until my Lambda execution times out.
Is there any way to trigger this lambda only once?
I found an approach where I can store the file name in DynamoDB and apply a check in the Lambda function, but is there any other approach that avoids using DynamoDB?
You have a couple of options:
You can put the file to a different location in S3 and delete the original.
You can add a metadata field to the S3 object when you update it, then check for the presence of that field so you know whether you have already processed it (a sketch follows below). Note this might not work perfectly, since S3 does not always provide the most recent data on reads after updates.
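A minimal sketch of that metadata approach, assuming a made-up "processed" flag and a placeholder validation step:

```python
from urllib.parse import unquote_plus

import boto3

s3 = boto3.client("s3")
PROCESSED_FLAG = "processed"   # hypothetical metadata key


def validate_and_modify(data: bytes) -> bytes:
    # Placeholder for the validation/modification described in the question.
    return data


def lambda_handler(event, context):
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = unquote_plus(record["object"]["key"])

    # Skip objects we have already written back ourselves.
    head = s3.head_object(Bucket=bucket, Key=key)
    if head["Metadata"].get(PROCESSED_FLAG) == "true":
        return

    body = validate_and_modify(s3.get_object(Bucket=bucket, Key=key)["Body"].read())

    # Writing back with the flag set means the re-triggered invocation exits early.
    s3.put_object(Bucket=bucket, Key=key, Body=body,
                  Metadata={PROCESSED_FLAG: "true"})
```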
AWS allows different types of S3 event triggers. You can try playing with s3:ObjectCreated:Put vs s3:ObjectCreated:Post.
You can upload your files to a folder, say
s3://bucket-name/notvalidated
and store the validated files in another folder, say
s3://bucket-name/validated.
Update your S3 Event Notification to invoke your Lambda function whenever there is an ObjectCreated (All) event under the /notvalidated prefix.
The second answer does not seem to be correct (put vs post) - there is not really a concept of update in S3 in terms of POST or PUT. The request to update an object will be the same as the initial POST of the object. See here for details on the available S3 events.
I had this exact problem last year - I was doing an image resize on PUT, and every time a file was overwritten, the function would be triggered again. My recommended solution would be to have two folders in your S3 bucket: one for the original file and one for the finalized file. You could then create the Lambda trigger with a prefix filter so it only checks the files in the original folder.
Events are triggered in S3 based on whether the object operation is a Put, Post, Copy, or Complete Multipart Upload; all of these operations correspond to ObjectCreated as per the AWS documentation.
https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html
The best solution is to restrict your S3 object-created event to a particular bucket location, so that only changes in that location trigger the Lambda function.
You can write your modifications to some other bucket location that is not configured to trigger the Lambda function when an object is created there.
Hope it helps!

S3 bucket script to add timestamp in filename on upload

I'm looking for a way to add a timestamp to the filename of every file that is uploaded to an S3 bucket, Amazon-side. There is, of course, the option to do this client-side before the upload, but I don't think this is as nice and clean as having some script run in the bucket itself every time a new file is uploaded. I didn't find anything in the docs, though.
There is no capability within Amazon S3 to change the Key (filename) of a file based upon upload time.
Given that your desire is to avoid name conflicts, some choices are:
Use a unique GUID or a timestamp to name the file when uploading. This will avoid naming conflicts.
Upload the file to Bucket A, then use a Lambda function triggered on ObjectCreated events to copy the object to Bucket B with a unique name based on the timestamp (sketched below).
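A minimal sketch of the second option, assuming a hypothetical destination bucket and a timestamp prefix on the copied key:

```python
from datetime import datetime, timezone
from urllib.parse import unquote_plus

import boto3

s3 = boto3.client("s3")
TARGET_BUCKET = "bucket-b"   # hypothetical destination bucket


def lambda_handler(event, context):
    # Triggered by ObjectCreated events on Bucket A.
    for record in event["Records"]:
        src_bucket = record["s3"]["bucket"]["name"]
        key = unquote_plus(record["s3"]["object"]["key"])

        # Prepend an upload timestamp so repeated uploads never collide.
        stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S%f")
        s3.copy_object(Bucket=TARGET_BUCKET,
                       Key=f"{stamp}-{key}",
                       CopySource={"Bucket": src_bucket, "Key": key})
```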
You can try a Lambda function handling the ObjectCreated event. See this tutorial.
Not sure that works though.