AWS Lambda: unable to delete arn because it is a replicated function - amazon-web-services

I'm trying to delete an AWS Lambda function through the GUI, but am getting a response: There was an error deleting your function: Lambda was unable to delete arn:aws:lambda:us-east-1:624929674184:function:lambda-auth:1 because it is a replicated function.
How can one delete replicated Lambda functions?

I have figured out the solution to delete a Lambda@Edge replica.
First, log in to the CloudFront console and go to your distribution.
Under the Behaviors tab, tick the listed behavior and click Edit.
Scroll down to Lambda Function Associations and remove any association by clicking the X.
Press Yes, Edit to save the changes.
--- Now that you have removed the associations, it's time to delete the Lambda@Edge replicas.
Go to the Lambda console and open the Lambda function you wish to delete.
On the top menu: Qualifiers -> Versions -> choose the listed version from the drop-down.
It will open that Lambda@Edge version.
On the top menu: Actions -> Delete version.
Deleting all the versions this way, you are left with $LATEST.
After deleting that as well, you are finally able to delete the Lambda@Edge function.
Note: please remember to delete any IAM roles and permissions associated with Lambda@Edge functions.
I hope this will work :)
Please refer to the link Delete Lambda@Edge Functions and Replicas; you will find it very useful.
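Once CloudFront has removed the replicas, the version-by-version cleanup can also be scripted. Here is a minimal boto3 sketch, assuming the function name from the question (lambda-auth); if replicas still exist, the delete calls will fail with the same error:

```python
import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")
function_name = "lambda-auth"  # function name from the question

# Delete every published version first; replicas must already have
# been cleaned up by CloudFront or these calls will fail.
paginator = lambda_client.get_paginator("list_versions_by_function")
for page in paginator.paginate(FunctionName=function_name):
    for version in page["Versions"]:
        if version["Version"] != "$LATEST":
            lambda_client.delete_function(
                FunctionName=function_name,
                Qualifier=version["Version"],
            )

# With all versions gone, deleting the function removes $LATEST.
lambda_client.delete_function(FunctionName=function_name)
```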

Replicated functions are something Lambda@Edge uses, so I assume that's the case here even though it's not stated. You should review this document on how to delete these. You can't manually delete them at this time:
You can delete a Lambda@Edge function only when the replicas of the function have been deleted by CloudFront. Replicas of a Lambda function are automatically deleted in the following situations:
After you have removed the last association for the function from all of your CloudFront distributions. If more than one distribution uses a function, the replicas are removed only after the function is disassociated from the last one.
After you delete the last distribution that a function was associated with.
Replicas are typically deleted within a few hours.
Note:
Replicas cannot be manually deleted at this time. This helps prevent a situation where a replica is removed that you're still using, which would result in an error.
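While you wait, it can help to confirm that no distribution still references the function version. A minimal boto3 sketch, assuming the ARN from the question; the traversal follows the standard list_distributions response shape:

```python
import boto3

cloudfront = boto3.client("cloudfront")
function_arn = "arn:aws:lambda:us-east-1:624929674184:function:lambda-auth:1"

# Walk every distribution and report any cache behavior that still
# references the Lambda@Edge function version.
for page in cloudfront.get_paginator("list_distributions").paginate():
    for dist in page["DistributionList"].get("Items", []):
        behaviors = [dist["DefaultCacheBehavior"]]
        behaviors += dist.get("CacheBehaviors", {}).get("Items", [])
        for behavior in behaviors:
            associations = behavior.get("LambdaFunctionAssociations", {})
            for assoc in associations.get("Items", []):
                if assoc["LambdaFunctionARN"] == function_arn:
                    print(f"Still associated with distribution {dist['Id']}")
```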

BHEERAJ's answer is good, but in my exact case I waited about 6 hours and nothing changed; the error was still occurring. But then I also removed the related S3 buckets (and to remove a bucket, I had to remove the items inside it first):
https://s3.console.aws.amazon.com/s3
Then, about half an hour later, I tried to remove those Lambda functions again, and the deletion finally went through.

Related

Configure multiple delete events in S3/Lambda

I am trying to build a Lambda function that gets triggered on S3 delete events. If multiple items are deleted at once, I want to use an S3 batch job. What I can't figure out or find in the documentation is what an event like that would look like. I'd assume it would just have multiple similar items in Records and I could iterate through, get all the keys, and then batch delete, but I can't confirm that. I've searched the documentation, and I built a test Lambda that would just log the event, but that came through as multiple distinct events. I'm stumped as to how to do what I'm trying here.
The S3 event you need to subscribe to is s3:ObjectRemoved:Delete, which per the documentation is used to track an object or a batch of objects being removed:
By using the ObjectRemoved event types, you can enable notification when an object or a batch of objects is removed from a bucket.
You can expect an event structured as detailed here.
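To illustrate the iteration the question describes, here is a minimal handler sketch; the Records traversal follows the documented S3 event structure, while batching the keys for a later bulk operation is an assumption about the use case:

```python
import urllib.parse

def handler(event, context):
    # An S3 notification can carry one or more records; collect the
    # removed keys so they can be handled as a batch.
    deleted_keys = []
    for record in event.get("Records", []):
        if record.get("eventName", "").startswith("ObjectRemoved"):
            # Keys arrive URL-encoded in S3 events.
            key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
            deleted_keys.append(key)
    print(f"Objects removed: {deleted_keys}")
    return deleted_keys
```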
However, since in the comments you said you just want to "copy the objects pre-delete to another bucket", you may want to explore S3 bucket versioning capabilities.
Enabling versioning allows you to preserve objects in a "deleted" state, leaving room for future restores, as per the delete workflow here.
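If you go the versioning route instead, enabling it is a single call (the bucket name is a placeholder):

```python
import boto3

# With versioning enabled, a delete only writes a delete marker, so
# the previous version of each object remains restorable.
boto3.client("s3").put_bucket_versioning(
    Bucket="my-bucket",  # placeholder
    VersioningConfiguration={"Status": "Enabled"},
)
```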

Delete all versions of S3 object using lifecycle rule

I have an S3 bucket with multiple folders and versioning enabled.
Out of these folders I want to completely delete one folder, as it has multiple delete markers.
I am using a lifecycle rule to delete the objects, but I'm not sure whether it will work for a specific folder.
In the lifecycle rule, I specify folder_name/ as a prefix and an expiration rule of 1 day after creation for all current and noncurrent versions.
Will it delete all the objects and their versions?
Can someone please confirm?
The other folders are quite critical, so I can't mess with the rule to test.
I can confirm that you can delete at the folder level instead of the entire bucket. We have a rule that does the exact same thing (although with 7 days instead of 1). I will echo John's point that after the initial setup, it will take time to do the deletion. You should see progress STARTING within 1 hour, but actual completion may take a while.
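For reference, a prefix-scoped rule like the one described could be created as in this sketch (the bucket name is a placeholder; the 1-day windows match the question). Note that lingering delete markers may need a separate rule with ExpiredObjectDeleteMarker, which cannot be combined with Days in the same Expiration action:

```python
import boto3

s3 = boto3.client("s3")

# Expire current versions and permanently remove noncurrent versions,
# but only for objects under the given prefix.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-bucket",  # placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-folder",
                "Status": "Enabled",
                "Filter": {"Prefix": "folder_name/"},
                "Expiration": {"Days": 1},
                "NoncurrentVersionExpiration": {"NoncurrentDays": 1},
            }
        ]
    },
)
```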

Cheapest way to delete 2 billion objects from S3 IA

I have a bucket in S3 (Infrequent Access) containing 2 billion objects. It is too big to delete in the console or over the API without taking years.
I can create a lifecycle rule to expire and delete the objects, but the calculator predicts this will cost me >$20,000. Is that correct? Is there a better way to delete a bucket?
I have a file effectively containing a list of all the objects in that bucket, if that helps.
Update 2021:
An answer below from @MAP points out that there is now an "Empty" button. I haven't tested it yet, but it looks like the way to go (I'll accept that answer once tested).
If you have a list of all the objects available, then you can certainly use the Multi-Object Delete action. Apparently this API is free. I would create an AWS Step Functions state machine to loop through the file and delete 1,000 objects at a time; 1,000 appears to be the limit.
It will take around 2M Step Functions state transitions to delete all the objects in the bucket. As per the Step Functions pricing, it will cost you around $50, plus around $1 for the Lambda invocations, so roughly $51 in total.
Update
Using Lambda or Step Functions is probably not the most cost-effective option, because either way you will need to read the file (that contains the object keys) from some source such as S3. So I think running the script from a local machine or in an EC2 Linux screen session is the best option; a sketch follows below.
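A minimal sketch of such a script, assuming a local file keys.txt with one object key per line (the bucket name is a placeholder); DeleteObjects accepts at most 1,000 keys per request:

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-huge-bucket"  # placeholder

def delete_batch(keys):
    # Multi-Object Delete: up to 1,000 keys per request, and DELETE
    # requests themselves are not charged.
    s3.delete_objects(
        Bucket=bucket,
        Delete={"Objects": [{"Key": k} for k in keys], "Quiet": True},
    )

batch = []
with open("keys.txt") as f:
    for line in f:
        batch.append(line.strip())
        if len(batch) == 1000:
            delete_batch(batch)
            batch = []
if batch:  # flush the final partial batch
    delete_batch(batch)
```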
In 2021, anyone who comes across this question may benefit from knowing that the AWS console now provides an "Empty" button.
Select the bucket and click the "Empty" button, and all objects, versioned or not, will be emptied/deleted. Depending on the number of objects, it can take minutes to days.
Expiration lifecycle rules are free. From the original feature announcement:
As with standard delete requests, Amazon S3 doesn’t charge you for using Object Expiration.
Delete operations are free. You can create a lifecycle policy to automate a bulk delete.
I would start with a small number of objects first and check the billing report to confirm 100% that the delete will not be charged, then go for the rest.

AWS Lambda - Double trigger

For some reason I see the triggers for my Lambda function duplicated. If I delete one, both are deleted, and if I add one, two are added.
The real problem is that I think this is making the function fire twice per event, so it's not just a graphical problem.
Any ideas?
I also tried removing the trigger and adding it back from the S3 console, but I had no luck.

AWS Lambda: error creating the event source mapping: Configuration is ambiguously defined

There was an error creating the event source mapping: Configuration is ambiguously defined. Cannot have overlapping suffixes in two rules if the prefixes are overlapping for the same event type.
I created an event from the GUI console 6-7 days ago and it was working fine. The next day the event went missing; I can't see it anymore in the Lambda console GUI, yet every S3 object still seems to trigger the Lambda function without a problem. If I can't see it, that is not good, so I deleted the Lambda function and waited 5-10 seconds before creating a new one. Now I receive the error above when I try to create the event sources like this:
When I click "Submit", the Event Sources tab says "You do not have any event sources for this function" and Lambda does not get triggered; it means the entire application flow is now broken :(
The problem is almost the same as in this thread: https://forums.aws.amazon.com/thread.jspa?messageID=670712#670712. But somehow I can't reply to that thread, so I created a new one here instead. Has anyone encountered this issue?
When I try to respond to that existing AWS forum thread, I keep getting this funny error: "Your message quota has been reached. Please try again later." I wasn't even posting anything; how could I have used up my quota?
What I suspect is that your S3 bucket may still be "linked" to the Lambda function.
Maybe check your S3 bucket for events and remove them there, then try creating the Lambda events again?
i.e. S3 bucket -> Properties -> Events
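The same check and cleanup can be done with boto3. A sketch, with the bucket name as a placeholder; note that putting an empty configuration removes all event notifications on the bucket, so re-create any you still need:

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"  # placeholder

# Inspect what the bucket still references; stale Lambda entries here
# are what cause the "Configuration is ambiguously defined" error.
config = s3.get_bucket_notification_configuration(Bucket=bucket)
print(config.get("LambdaFunctionConfigurations", []))

# Clear all event notifications, then re-create the trigger in Lambda.
s3.put_bucket_notification_configuration(
    Bucket=bucket, NotificationConfiguration={}
)
```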
After 6 years, it's nice to see people still benefiting from this answer.
Here is a shameless plug for a YouTube video I uploaded on 2022-12-13:
https://www.youtube.com/watch?v=rjpOU7jbgEs
The issue must be that the S3 bucket is already linked with the suffix/prefix you are trying to use. Remove the link in S3 and try again.
When you set up a Lambda function with an S3 trigger, the notification gets recorded in the Properties section of that S3 bucket.
The mentioned error occurs when the earlier Lambda function has been deleted and you try to set up the same kind of trigger again. The thing to note is that the S3 notification is not deleted when you delete the Lambda function.
Go to S3 bucket > Properties > Event notifications, delete the old setting, and then set up the trigger again on the new Lambda function.
Here is a link to a youtube video profiling this issue and demonstrating the solution:
https://www.youtube.com/watch?v=1Tfmc9nEtbU
As Ridwaan Manuel said, you must remove the events by going to S3 bucket -> Properties -> Events, as the video shows.
Steps to reproduce this issue:
1. Create a bucket and create a folder called “example/”
2. Create a Lambda function
3. Add an S3 trigger to the Lambda using the bucket from (1) with default settings
4. Save the trigger
5. Click Save and notice the error
6. Refresh the page and notice that the triggers disappeared
7. Add the same bucket again and notice the ambiguous reference error