For some reason I see the triggers for my Lambda function duplicated. If I delete one, both are deleted, and if I add one, two are added.
The real problem is that I think this is making the function fire twice per event, so it's not just a display issue.
Any ideas?
I also tried removing the trigger and re-adding it from the S3 console, but had no luck.
I have a Lambda function that generates thumbnails on an S3 Put event. It works fine, but I want to handle the case when it accidentally takes longer than the configured timeout (3 seconds).
It's because I'm fetching the Lambda-generated thumbnail by suffixing '-small.jpg' or '-medium.jpg'. If the timeout happens and the thumbnails are not generated, I must have an alternative image in my bucket.
If you want to increase the function timeout, you can edit it in the general settings of your function. The steps below explain how to do it.
Click on the Lambda function's hyperlink and then on General configuration.
Click Edit (top right pane) and increase the function timeout.
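If you prefer to script this instead of clicking through the console, here is a minimal sketch using boto3; the function name is a placeholder, and the timeout value can be anything up to the hard limit of 900 seconds (15 minutes):

    import boto3

    lambda_client = boto3.client("lambda")

    # Raise the function timeout to 30 seconds (placeholder value;
    # the maximum Lambda allows is 900 seconds, i.e. 15 minutes).
    lambda_client.update_function_configuration(
        FunctionName="thumbnail-generator",  # placeholder function name
        Timeout=30,
    )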
I suspect that creating thumbnails will not take more than 15 minutes (the maximum Lambda timeout) at maximum Lambda RAM (and correspondingly maximum CPU), but if you need to handle that possibility, configure your Lambda function with a dead-letter queue (DLQ) and trigger subsequent processing of failed invocations from there.
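As a rough sketch of the DLQ idea (the queue ARN and function name below are assumptions, not from the question), you can attach an SQS queue as the function's dead-letter queue and process failed invocations from there:

    import boto3

    lambda_client = boto3.client("lambda")

    # Attach an SQS queue as the dead-letter queue; failed asynchronous
    # invocations (e.g. timeouts, after all retries) end up there.
    lambda_client.update_function_configuration(
        FunctionName="thumbnail-generator",  # placeholder function name
        DeadLetterConfig={
            # Placeholder ARN for a pre-existing SQS queue
            "TargetArn": "arn:aws:sqs:us-east-1:123456789012:thumbnail-dlq"
        },
    )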
I have a Lambda function that gets triggered whenever an object is created in an S3 bucket.
Now, I need to trigger the Lambda only for alternate object creations.
The Lambda should not be triggered when the object is created for the first, third, fifth (and so on) time, but it should be triggered for the second, fourth, sixth (and so on) time.
For this, I created an S3 event for the 'PUT' operation.
The first time, I used the PUT API. The second time, I uploaded the file using:
s3_res.meta.client.upload_file
I thought that it would not trigger the Lambda since this was an upload and not a PUT, but this also triggered the Lambda.
Is there any way to do this?
The reason that meta.client.upload_file is triggering your PUT event Lambda is that it is actually using PUT.
upload_file (docs) uses the TransferManager client, which uses PUT under the hood (you can see this in the code: https://github.com/boto/s3transfer/blob/develop/s3transfer/upload.py).
Looking at the AWS SDK, you'll see that POSTing to S3 is pretty much limited to the case where you want to give a browser/client a pre-signed URL for uploading a file (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPOST.html).
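To illustrate (bucket and file names are made up), both calls below produce the same s3:ObjectCreated:Put notification for small files, so your trigger cannot tell them apart. One caveat: upload_file switches to multipart upload above its threshold (8 MB by default), which emits s3:ObjectCreated:CompleteMultipartUpload instead.

    import boto3

    s3 = boto3.client("s3")
    bucket = "my-bucket"  # placeholder bucket name

    # A plain PutObject call -- emits s3:ObjectCreated:Put.
    s3.put_object(Bucket=bucket, Key="photo.jpg", Body=b"image bytes")

    # upload_file also issues PutObject for files below the multipart
    # threshold, so it emits the exact same event type.
    s3.upload_file("photo.jpg", bucket, "photo.jpg")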
If you want to count the number of times PUT has been called, in order to take action on every even call, then the easiest thing to do is to use something like DynamoDB to keep a table of 'file-name' versus 'put-count', which you update on every PUT and act on accordingly.
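A minimal sketch of that counter approach, assuming a DynamoDB table named put-counts with a string partition key file-name (all names here are placeholders):

    import boto3

    dynamodb = boto3.client("dynamodb")
    TABLE = "put-counts"  # placeholder table, partition key 'file-name'

    def handler(event, context):
        for record in event["Records"]:
            key = record["s3"]["object"]["key"]
            # Atomically increment this object's counter and read back
            # the new value.
            resp = dynamodb.update_item(
                TableName=TABLE,
                Key={"file-name": {"S": key}},
                UpdateExpression="ADD #c :one",
                ExpressionAttributeNames={"#c": "put-count"},
                ExpressionAttributeValues={":one": {"N": "1"}},
                ReturnValues="UPDATED_NEW",
            )
            count = int(resp["Attributes"]["put-count"]["N"])
            if count % 2 == 0:
                # Act only on the second, fourth, sixth ... PUT.
                print(f"processing {key} (PUT number {count})")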
Alternatively, you could enable bucket file versioning. Then you could use list_object_versions to see how many times the file has been updated. Be aware, though, that S3 is eventually consistent, so this may not be accurate if the file is being updated rapidly.
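And a rough sketch of the versioning variant, with the same caveat about consistency (the bucket name is a placeholder, and versioning must already be enabled on it):

    import boto3

    s3 = boto3.client("s3")

    def put_count(bucket, key):
        # Count the stored versions of one key; on a versioned bucket,
        # every successful PUT adds a version.
        paginator = s3.get_paginator("list_object_versions")
        count = 0
        for page in paginator.paginate(Bucket=bucket, Prefix=key):
            count += sum(1 for v in page.get("Versions", []) if v["Key"] == key)
        return count

    if put_count("my-bucket", "data/file.csv") % 2 == 0:
        print("even PUT, take action")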
I'm trying to delete an AWS Lambda function through the GUI, but am getting a response: There was an error deleting your function: Lambda was unable to delete arn:aws:lambda:us-east-1:624929674184:function:lambda-auth:1 because it is a replicated function.
How can one delete replicated Lambda functions?
I have figured out the solution to delete a Lambda@Edge replica.
First, log in to the CloudFront console and go to your distribution.
Under the Behaviors tab, select the listed behavior and click Edit.
Scroll down to Lambda Function Associations and remove each association by clicking the X.
Press Yes, Edit to save the changes.
Now that you have removed the associations, it's time to delete the Lambda@Edge replicas.
Go to the Lambda console and open the Lambda function you wish to delete.
In the top menus: Qualifiers -> Versions -> choose the listed drop-down version.
It will open that Lambda@Edge version.
In the top menus: Actions -> Delete version.
By deleting all the versions this way, you are left with $LATEST.
Delete that too, and you are finally able to delete the Lambda@Edge function.
Note: please remember to delete any IAM roles and permissions associated with Lambda@Edge functions.
I hope this works :)
Please refer to the link Delete Lambda@Edge Functions and Replicas; you will find it very useful.
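If a function has many versions, the same clean-up can be scripted; this is only a sketch (the function name is taken from the error message above), and it will keep failing until CloudFront has actually deleted the replicas:

    import boto3

    lambda_client = boto3.client("lambda", region_name="us-east-1")
    fn = "lambda-auth"

    # Delete every published version first, then the function ($LATEST).
    # Each call fails with the "replicated function" error until
    # CloudFront has finished removing the replicas.
    paginator = lambda_client.get_paginator("list_versions_by_function")
    for page in paginator.paginate(FunctionName=fn):
        for v in page["Versions"]:
            if v["Version"] != "$LATEST":
                lambda_client.delete_function(FunctionName=fn, Qualifier=v["Version"])
    lambda_client.delete_function(FunctionName=fn)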
Replicated functions are something Lambda@Edge uses, so I assume that's the case here even though it's not stated. You should review this document on how to delete them; you can't manually delete them at this time:
You can delete a Lambda@Edge function only when the replicas of the function have been deleted by CloudFront. Replicas of a Lambda function are automatically deleted in the following situations:

After you have removed the last association for the function from all of your CloudFront distributions. If more than one distribution uses a function, the replicas are removed only after the function is disassociated from the last one.

After you delete the last distribution that a function was associated with.

Replicas are typically deleted within a few hours.
Note:
Replicas cannot be manually deleted at this time. This helps prevent a situation where a replica that you're still using is removed, which would result in an error.
BHEERAJ's answer is good, but in my exact case I waited about 6 hours and nothing changed; the error still occurred. Then I also removed the related S3 buckets (and to remove a bucket, I had to remove the items inside it first):
https://s3.console.aws.amazon.com/s3
Then, in about half an hour, I tried to remove those Lambda functions again, and it finally deleted.
I'm looking for a way to temporarily disable Lambda triggers on a DynamoDB table. I want to be able to apply manual updates to a table (such as importing data from an S3 backup) without the Lambda code being triggered. I tried the disable button next to the trigger in the Lambda function's Triggers tab. I also tried disabling the whole stream for the table. In both cases, when reactivating the trigger/stream, all the trigger events that happened while they were deactivated are executed anyway.
How can I prevent this code from being triggered?
Thank you very much!
For others that arrive at this answer: https://alestic.com/2015/11/aws-lambda-kinesis-pause-resume/ provides a CLI solution for pausing stream reading and resuming from the same place at some point in the future.
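The idea in that post, roughly, is to disable the event source mapping rather than the stream itself; here is a sketch with boto3 (the function name is a placeholder). Note that records written while the mapping is disabled remain in the stream and will still be processed on resume, unless they age out of the stream's retention window first:

    import boto3

    lambda_client = boto3.client("lambda")

    def set_trigger_enabled(function_name, enabled):
        # Toggle every event source mapping attached to the function.
        mappings = lambda_client.list_event_source_mappings(
            FunctionName=function_name
        )["EventSourceMappings"]
        for m in mappings:
            lambda_client.update_event_source_mapping(
                UUID=m["UUID"], Enabled=enabled
            )

    set_trigger_enabled("my-stream-consumer", False)  # pause
    # ... apply the manual table updates here ...
    set_trigger_enabled("my-stream-consumer", True)   # resume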
There was an error creating the event source mapping: Configuration is ambiguously defined. Cannot have overlapping suffixes in two rules if the prefixes are overlapping for the same event type.
I created an event from the GUI console 6-7 days ago and it was working fine. The next day the event just went missing; I can't see it anymore in the Lambda console GUI. But every S3 object still seems to trigger the Lambda function without a problem. If I can't see it, that is not good, so I deleted the Lambda function and waited 5-10 seconds before creating a new one. Now I receive the error above when I try to create the event sources like this:
When I click "Submit", the event sources tab says "You do not have any event sources for this function" and Lambda does not get triggered; it means the entire application flow is now broken :(
The problem is almost the same as https://forums.aws.amazon.com/thread.jspa?messageID=670712, but somehow I can't reply to that thread, so I created a new one here instead. Has anyone encountered this issue?
In fact, I tried to respond to the existing AWS forum thread: https://forums.aws.amazon.com/thread.jspa?messageID=670712
but I keep getting this funny error: "Your message quota has been reached. Please try again later." I wasn't even posting anything; how can I have used up my quota?
What I suspect is that your S3 bucket may still be "linked" to the Lambda function.
Maybe check your S3 bucket for events and remove them there, then try creating the Lambda events again?
i.e. S3 bucket -> Properties -> Events
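You can also check for the leftover link from code; a small sketch with boto3 (the bucket name is a placeholder):

    import boto3

    s3 = boto3.client("s3")

    # Print any Lambda notifications still attached to the bucket.
    config = s3.get_bucket_notification_configuration(Bucket="my-bucket")
    for rule in config.get("LambdaFunctionConfigurations", []):
        print(rule["LambdaFunctionArn"], rule["Events"])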
After 6 years, it's nice to see some people still benefiting from this answer.
Here is a shameless plug for a YouTube video I uploaded on 2022-12-13:
https://www.youtube.com/watch?v=rjpOU7jbgEs
The issue must be that the S3 bucket is already linked with the suffix/prefix you are trying to use. Remove the link in S3 and try again.
When you set up a Lambda function with a trigger related to S3, the notification gets added to the Properties section of that S3 bucket.
The mentioned error occurs when the earlier Lambda function has been deleted and you're trying to set up the same kind of trigger again. The thing to note is that the S3 notification is not deleted when you delete the Lambda function.
Go to S3 bucket > Properties > Event notifications,
delete the old setting, and then set up the trigger in the new Lambda function.
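If the console won't let you remove the stale entry, a blunt option is to overwrite the bucket's notification configuration entirely; a sketch with boto3 (the bucket name is a placeholder, and note that this wipes ALL notification rules on the bucket, not just the stale one):

    import boto3

    s3 = boto3.client("s3")

    # Replace the bucket's notification configuration with an empty one.
    # Caution: this removes every notification rule on the bucket.
    s3.put_bucket_notification_configuration(
        Bucket="my-bucket",
        NotificationConfiguration={},
    )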
Here is a link to a YouTube video profiling this issue and demonstrating the solution:
https://www.youtube.com/watch?v=1Tfmc9nEtbU
Just as Ridwaan Manuel said, you must remove the events by going to S3 bucket -> Properties -> Events, as the video shows.
Steps to reproduce this issue:
1. Create a bucket and create a folder called “example/”.
2. Create a Lambda function.
3. Add an S3 trigger to the Lambda using the bucket from (1) with default settings.
4. Save the trigger.
5. Click Save and notice the error.
6. Refresh the page and notice that the triggers disappeared.
7. Add the same bucket again and notice the ambiguous reference error.