I have a Lambda that I created following the example in the AWS docs (https://docs.aws.amazon.com/codepipeline/latest/userguide/actions-invoke-lambda-function.html), but I am invoking my Lambda from within a VPC, and CodePipeline never successfully talks to it: the action times out and never seems to enter the Lambda, as CloudWatch has none of my console.logs. This is despite the fact that I have created a CodePipeline endpoint within the VPC and associated it with the private subnet from which I launch the Lambda.
I can give the Lambda an API Gateway endpoint and fire it manually just fine from Postman; it takes ~1 second to run. My CloudWatch logs just show "Task timed out after 20.02 seconds." I'm not sure what else I can try; what else might prevent CodePipeline from talking to the Lambda?
After additional logging, I discovered that I actually had the VPC set up correctly and that the Lambda was being invoked; the Lambda was failing to reach S3 and was hanging on getting objects. I created another endpoint for S3 in the VPC and was able to move past the initial issue.
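For anyone hitting the same wall: a minimal sketch of creating that S3 gateway endpoint with boto3, assuming hypothetical VPC and route table IDs (a console-created endpoint works just as well):

```python
import boto3

# Hypothetical IDs throughout; a Gateway endpoint for S3 needs no ENIs,
# only entries in the route tables used by the Lambda's subnets.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",            # hypothetical VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],  # hypothetical route table
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```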
Related
I have one query that I tried to Google, but I could not find an answer specific to it.
S3 is a global service. We can access it via the internet or via a VPC endpoint from our private network; that much I understand.
If Lambda functions are deployed inside a VPC, how does an S3 event trigger those Lambda functions?
You have to differentiate between the Lambda service, a Lambda function, and an Execution Context.
The Lambda service operates the Lambda functions, and an Execution Context is an instance of a Lambda function. Only the Execution Context is located in the VPC; the rest of the components reside outside of it. The Lambda service can always communicate with the Execution Contexts of any particular Lambda function to pass events to it and monitor the execution. It does that through a private channel, not through the VPC.
S3 is also not really a global service. The buckets and APIs reside in specific regions. It has a global namespace, meaning that bucket names have to be globally unique. This means some APIs will do "global checks", but when S3 acts, it acts inside of a region.
Let's talk through what happens in the S3-Lambda integration. When an event happens in a bucket (e.g., an object is created), the S3 service checks which endpoints are interested in that event. If you want to send an event to a Lambda function, the function has to be in the same region as the bucket. S3 will then contact the Lambda service and tell it to invoke the function with this specific event. S3 doesn't care about the result here.
This is where Lambda takes over. The service checks whether S3 is permitted to invoke the function in question. If that's the case, it looks for an existing Execution Context for that function that isn't busy. Once it finds one, it sends the event to that Execution Context, which executes inside the VPC and can access resources in the VPC.
Assuming everything goes well, this is how it ends; otherwise, Lambda will retry the event in another Execution Context.
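To make the flow above concrete, here is a minimal sketch of wiring up the integration with boto3. The bucket name, function name, and ARNs are hypothetical placeholders, and the permission grant mirrors the check Lambda performs before handing the event to an Execution Context:

```python
import boto3

# Bucket name, function name, and account/region in the ARNs are hypothetical.
s3 = boto3.client("s3")
lam = boto3.client("lambda")

# 1) Allow the S3 service principal to invoke the function; this is the
#    permission Lambda checks before handing the event to an Execution Context.
lam.add_permission(
    FunctionName="my-function",
    StatementId="s3-invoke",
    Action="lambda:InvokeFunction",
    Principal="s3.amazonaws.com",
    SourceArn="arn:aws:s3:::my-bucket",
)

# 2) Tell S3 which bucket events to hand to the Lambda service.
s3.put_bucket_notification_configuration(
    Bucket="my-bucket",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:my-function",
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)
```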
References
AWS Docs: Using AWS Lambda with Amazon S3
AWS Docs: Asynchronous Lambda Invocation
I have an SQS queue which is configured in one VPC, and I want to trigger a Lambda function in another VPC whenever a new message arrives in SQS. How can I achieve this for real-time analysis?
Amazon SQS is not associated with Amazon VPC. It exists "outside" of VPCs.
Therefore, you can simply configure the Lambda function to use the SQS queue as a trigger.
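A minimal sketch of creating that trigger with boto3, assuming hypothetical queue and function names; the function's execution role also needs the usual sqs:ReceiveMessage, sqs:DeleteMessage, and sqs:GetQueueAttributes permissions:

```python
import boto3

# Queue ARN and function name are hypothetical placeholders.
lam = boto3.client("lambda")

lam.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:123456789012:my-queue",
    FunctionName="my-analysis-function",
    BatchSize=10,  # Lambda polls the queue and invokes with up to 10 messages
)
```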
I accidentally deleted a lambda log group in CloudWatch.
Now my lambda fails and I do not see the log group reappear in CloudWatch.
Is it supposed to be recreated automatically? How can I fix the situation?
I tried recreating the log group manually, but it didn't receive any logs.
Try removing and redeploying the Lambda.
Also, make sure it has permission to write to CloudWatch Logs.
If the role configured in the Lambda function has permission to write to CloudWatch Logs, then the function will recreate the log group upon execution. It may take up to a minute after the function has been invoked.
To resolve this issue, modify the role configured in the Lambda function to include the AWSLambdaBasicExecutionRole policy. This is an AWS managed policy that includes everything you need to write to CloudWatch Logs.
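If you prefer to do that programmatically, here is one way to attach the managed policy with boto3; the role name is a hypothetical placeholder for your function's execution role:

```python
import boto3

# The role name is a hypothetical placeholder for your function's execution role.
iam = boto3.client("iam")

iam.attach_role_policy(
    RoleName="my-lambda-execution-role",
    PolicyArn="arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole",
)
```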
See this article and video walkthrough: https://geektopia.tech/post.php?blogpost=Write_To_CloudWatch_Logs_From_Lambda
I have a Lambda function that shares an RDS manual snapshot with another AWS account.
I am trying to create a 'chain reaction': the Lambda executes in the first account, the snapshot becomes visible to the second account, and another Lambda is then triggered that copies the now-visible snapshot to another region (inside the second account).
I tried using RDS event subscriptions and SNS topics, but I noticed that there is no RDS event subscription for sharing and/or modifying an RDS snapshot.
Then I tried to set up cross-account permissions so that the Lambda from the first account publishes to an SNS topic which triggers the Lambda in the second account, but it seems that the topic and the target Lambda must be in the same region (whereas the code that copies the DB snapshot must run in the target region). I followed this guide and ended up with this error:
A client error (InvalidParameter) occurred when calling the Subscribe operation: Invalid parameter: TopicArn
Has anyone tried something like this?
Is cross-region communication eventually feasible?
Could I trigger something in one region from something in another (any AWS service is welcome)?
My next attempts will be:
- cross-region Lambda invocation (see the sketch below)
- making use of API Gateway
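Cross-region invocation is feasible: a boto3 client pinned to the target region can invoke a function there directly. A minimal sketch, assuming hypothetical function names and regions (for the cross-account case, the caller would additionally need to assume a role in the second account):

```python
import json
import boto3

# Client pinned to the target region; for cross-account use you would first
# assume a role in the second account and pass its credentials here.
lam = boto3.client("lambda", region_name="eu-west-1")  # hypothetical target region

lam.invoke(
    FunctionName="copy-shared-snapshot",   # hypothetical function in the target region
    InvocationType="Event",                # asynchronous fire-and-forget
    Payload=json.dumps({"snapshot_id": "my-shared-snapshot"}).encode(),
)
```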
I'd like to run some code using Lambda on the event that I create a new EC2 instance. Looking at the blueprint config-rule-change-triggered, I have the ability to run code depending on various configuration changes, but not when an instance is created. Is there a way to do what I want, or have I misunderstood the use case of Lambda?
We had a similar requirement a couple of days back (users were supposed to get emails whenever a new instance was launched):
1) Go to CloudWatch, then select Rules.
2) Select the service name (EC2 in your case), then select "EC2 instance state-change notification".
3) Select "pending" in the "Specific state" dropdown.
4) Click the "Add target" option and select your Lambda function.
That's it: whenever a new instance gets launched, CloudWatch will trigger your Lambda function.
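The same rule can be created in code. A minimal sketch with boto3, assuming hypothetical rule and function names:

```python
import json
import boto3

# Rule name, function ARN, and account/region are hypothetical placeholders.
events = boto3.client("events")
lam = boto3.client("lambda")

# Match EC2 state-change events for newly launched ("pending") instances.
rule_arn = events.put_rule(
    Name="ec2-launch-notify",
    EventPattern=json.dumps({
        "source": ["aws.ec2"],
        "detail-type": ["EC2 Instance State-change Notification"],
        "detail": {"state": ["pending"]},
    }),
)["RuleArn"]

function_arn = "arn:aws:lambda:us-east-1:123456789012:function:notify"
events.put_targets(
    Rule="ec2-launch-notify",
    Targets=[{"Id": "1", "Arn": function_arn}],
)

# Let CloudWatch Events invoke the function.
lam.add_permission(
    FunctionName=function_arn,
    StatementId="events-invoke",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule_arn,
)
```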
Hope it helps!
You could do this by inserting code into your EC2 instance launch userdata and having that code explicitly invoke a Lambda function, but that's not the best way to do it.
A better way is to use a combination of CloudTrail and Lambda. If you enable CloudTrail logging (every account should have this enabled, all the time, in all regions), then CloudTrail will log to S3 all of the API calls made in your account. You then connect this to Lambda by configuring S3 to publish events to Lambda. Your Lambda function will receive an S3 event, can then retrieve the API logs, find RunInstances API calls, and then do whatever work you need to as a consequence of the new instance being launched.
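A minimal sketch of such a handler, assuming the trail delivers its gzipped log files to the bucket that triggers the function:

```python
import gzip
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Triggered by S3 when CloudTrail delivers a new log object."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # CloudTrail log files are gzipped JSON with a top-level "Records" list.
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        trail = json.loads(gzip.decompress(body))

        for api_call in trail.get("Records", []):
            if api_call.get("eventName") == "RunInstances":
                # React to the new instance here (notify, tag, audit, ...).
                print("RunInstances by", api_call.get("userIdentity", {}).get("arn"))
```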
I don't see a notification trigger for instance startup; however, what you can do is write a startup script and pass it in via userdata. That startup script would need to download and install the AWS CLI, then authenticate to SNS and publish a message to a pre-configured topic. The startup script would authenticate to SNS, and to whatever other AWS services are needed, via your IAM role, so you would need to give the IAM role permission to do whatever you want the script to do. This can be done in the IAM console.
That topic would then have your Lambda function subscribed to it, which would execute. This is similar to the article below (though the author is doing something similar for shutdown, not startup):
http://rogueleaderr.com/post/48795010760/how-to-notifyemail-yourself-when-an-ec2-instance
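Here is a minimal sketch of the script's core in Python with boto3 (instead of the AWS CLI); the topic ARN is a hypothetical placeholder, and it assumes IMDSv1 is available for the metadata lookup:

```python
import urllib.request
import boto3

# Topic ARN is a hypothetical placeholder; the instance's IAM role must
# allow sns:Publish on it. Assumes IMDSv1; IMDSv2 would need a session token.
instance_id = urllib.request.urlopen(
    "http://169.254.169.254/latest/meta-data/instance-id", timeout=2
).read().decode()

boto3.client("sns").publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:instance-launched",
    Subject="EC2 instance started",
    Message=instance_id,
)
```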
If you are putting the EC2 instances into an Auto Scaling group, I believe there is a trigger that gets fired when the group launches a new instance, so you could take advantage of that.
I hope that helps.
If you have CloudTrail enabled, then you can have S3 PutObject events on the trail's bucket trigger a Lambda function. The Lambda function parses the object that is passed to it and, if it finds a RunInstances event, runs your code.
I do the exact same thing to notify certain users when a new instance is launched. With Lambda/Python, it is ~20 lines of code.