Invoking lambda with CloudWatch events across regions - amazon-web-services

I have a lambda function deployed in us-east-1 which runs every time an EC2 instance is started.
The lambda function is triggered with the following EventBridge configuration:
{
  "detail-type": [
    "AWS API Call via CloudTrail"
  ],
  "source": [
    "aws.ec2"
  ],
  "detail": {
    "eventName": [
      "RunInstances"
    ]
  }
}
The lambda function is working great. Now, I'm looking to extend this so that my lambda function is triggered even when an EC2 instance is launched in a different region (e.g. us-east-2).
How can I achieve this?

One option is to set an SNS topic as the event target and subscribe the Lambda to that topic; SNS supports cross-region subscriptions.
Another option is to use cross-region event buses: create a rule that forwards the event to an event bus in the other region, then create a rule in that region that invokes the Lambda. More info here: https://aws.amazon.com/blogs/compute/introducing-cross-region-event-routing-with-amazon-eventbridge/
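A minimal sketch of the second option, built as plain payload dicts. The rule name, account ID, bus ARN, and IAM role ARN are placeholders for illustration:

```python
import json

# Forward RunInstances events from us-east-2 to the default event bus in
# us-east-1, where the existing rule and Lambda already live.
RULE_NAME = "forward-run-instances"
TARGET_BUS_ARN = "arn:aws:events:us-east-1:123456789012:event-bus/default"

# Same pattern as the existing us-east-1 rule; this rule is created in us-east-2.
event_pattern = {
    "detail-type": ["AWS API Call via CloudTrail"],
    "source": ["aws.ec2"],
    "detail": {"eventName": ["RunInstances"]},
}

# Target that routes matched events to the other region's bus. EventBridge
# needs an IAM role it can assume to put events on that bus.
target = {
    "Id": "us-east-1-default-bus",
    "Arn": TARGET_BUS_ARN,
    "RoleArn": "arn:aws:iam::123456789012:role/eventbridge-cross-region",
}

# These dicts map directly onto the EventBridge API, e.g. with boto3:
#   events = boto3.client("events", region_name="us-east-2")
#   events.put_rule(Name=RULE_NAME, EventPattern=json.dumps(event_pattern))
#   events.put_targets(Rule=RULE_NAME, Targets=[target])
```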

A recently announced feature can help with cross-region use cases for AWS Lambda: https://aws.amazon.com/blogs/compute/introducing-cross-region-event-routing-with-amazon-eventbridge/
Amazon EventBridge is a great fit for cross-region (and cross-account) event processing.

Related

How to invoke a REST API when an object arrives on S3?

S3 can be configured to invoke a Lambda when an object arrives in it.
Is it possible to invoke a REST API (endpoint of a microservice running in EKS) when an object arrives in S3?
Since November 2021 it is possible to integrate S3 with Amazon EventBridge.
So you can create an EventBridge rule that is triggered on bucket object creation and has an API destination as its target.
For this, the option Bucket Properties -> Amazon EventBridge -> "Send notifications to Amazon EventBridge for all events in this bucket" must be enabled.
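The same bucket-level toggle can also be set programmatically; a sketch, with the bucket name as a placeholder:

```python
# An empty EventBridgeConfiguration enables "send all events in this bucket
# to Amazon EventBridge" -- equivalent to the console checkbox above.
notification_config = {"EventBridgeConfiguration": {}}

# With boto3 this would be applied as:
#   s3 = boto3.client("s3")
#   s3.put_bucket_notification_configuration(
#       Bucket="my-bucket-name",
#       NotificationConfiguration=notification_config,
#   )
```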
Then, on EventBridge create the rule with an event pattern like this:
{
  "source": ["aws.s3"],
  "detail-type": ["Object Created"],
  "detail": {
    "bucket": {
      "name": ["<my-bucket-name>"]
    }
  }
}
Then configure the target as an API destination (HTTP method, endpoint URL, authorization).
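A sketch of the pieces an API destination target needs: a connection (auth), the destination itself (endpoint plus method), and the rule target. All names, ARNs, the API key, and the endpoint URL below are illustrative placeholders:

```python
# Connection holding the credentials EventBridge sends with each request.
# Passed to events.create_api_connection-style call: events.create_connection(...)
connection = {
    "Name": "eks-api-conn",
    "AuthorizationType": "API_KEY",
    "AuthParameters": {
        "ApiKeyAuthParameters": {"ApiKeyName": "x-api-key", "ApiKeyValue": "secret"}
    },
}

# The HTTP endpoint to invoke. Passed to events.create_api_destination(...)
api_destination = {
    "Name": "eks-microservice",
    "ConnectionArn": "arn:aws:events:us-east-1:123456789012:connection/eks-api-conn/abc",
    "InvocationEndpoint": "https://api.example.com/objects",
    "HttpMethod": "POST",
}

# Rule target referencing the destination; EventBridge assumes RoleArn to
# invoke it. Passed to events.put_targets(Rule=..., Targets=[target])
target = {
    "Id": "api-destination",
    "Arn": "arn:aws:events:us-east-1:123456789012:api-destination/eks-microservice/abc",
    "RoleArn": "arn:aws:iam::123456789012:role/eventbridge-invoke-api",
}
```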
You can set up an SNS topic as a target for the event from S3. In the SNS topic, you can add an HTTP/s subscriber, which can be your API endpoint.
Have the Lambda hit the REST API for you.

AWS EventBridge ECS task status change event

I want to trigger a Lambda function when a Fargate task is deprovisioning, so I created this EventBridge rule:
{
  "source": ["aws.ecs"],
  "detail-type": ["ECS Task State Change"],
  "detail": {
    "clusterArn": ["arn:aws:ecs:eu-west-3:xxx"],
    "lastStatus": ["DEPROVISIONING"]
  }
}
It does not seem to work all the time, i.e. sometimes CloudWatch receives it and sometimes it doesn't (no logs are generated by the Lambda function).
What could cause this issue?
So it seems the error was coming from my Lambda function, and because it was failing so often, EventBridge was blocking some of the invocations of the Lambda.
Not that big of a deal after all...

CodeBuild: how to pass variables / JSON data to a CloudWatch Event rule?

CodeBuild is used to build a project from a repository and deploy it to S3. I want to pass data that is processed in CodeBuild to a CloudWatch event, so that I can send a notification with that information for passed as well as failed builds. Is there a way to send data ($variables) processed in CodeBuild to a CloudWatch Event rule, or any other way?
I have the rule, topic, and email working... but I see no way to pass any extra data beyond what is supplied by CodeBuild.
For example: I have some environment variables in CodeBuild, and I need to send these as part of my notification, which will help me determine what value caused the build to fail.
You have to do this from within CodeBuild as part of your buildspec.yml. If you are using SNS (I guess), you can call aws sns publish from the AWS CLI as part of your build. This also requires adding permission for the sns:Publish action to the CodeBuild role.
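A minimal buildspec.yml sketch of this approach; the topic ARN is a placeholder, while $CODEBUILD_BUILD_ID and $CODEBUILD_RESOLVED_SOURCE_VERSION are environment variables CodeBuild provides:

```yaml
version: 0.2
env:
  variables:
    TOPIC_ARN: "arn:aws:sns:us-east-1:123456789012:build-notifications"
phases:
  build:
    commands:
      - ./build.sh
  post_build:
    commands:
      # Publish whatever build-time data you need in the message body.
      - aws sns publish --topic-arn "$TOPIC_ARN" --message "build $CODEBUILD_BUILD_ID commit $CODEBUILD_RESOLVED_SOURCE_VERSION"
```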
I'll start by saying that @Marcin's answer is totally correct, but it doesn't address the "as well as failed build" part.
So for the first part where you want to send the responses from the processed data you either need to:
publish to SNS directly from your buildspec (as @Marcin pointed out)
or send an event to Amazon EventBridge (aka CloudWatch Events) from your buildspec
With regard to the second part of the question, where you want to catch the CodeBuild execution status, you can rely on the built-in notification events generated by CodeBuild itself:
{
  "source": [
    "aws.codebuild"
  ],
  "detail-type": [
    "CodeBuild Build State Change"
  ],
  "detail": {
    "build-status": [
      "IN_PROGRESS",
      "SUCCEEDED",
      "FAILED",
      "STOPPED"
    ],
    "project-name": [
      "my-demo-project-1",
      "my-demo-project-2"
    ]
  }
}
You can intercept the events for the whole build, and for each phase separately if needed, and act upon them (whether you send them to SNS, trigger a Lambda, or something else is up to you).

Amazon EventBridge rule S3 put object event cannot trigger the AWS StepFunction

After setting up the EventBridge rule, the S3 put-object event still does not trigger the Step Function.
However, when I change the event rule to EC2 status changes, it works!
I also tried changing the rule to all S3 events, but it still does not work.
Amazon EventBridge:
Event pattern:
{
  "source": ["aws.s3"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["s3.amazonaws.com"],
    "eventName": ["PutObject"],
    "requestParameters": {
      "bucketName": ["MY_BUCKETNAME"]
    }
  }
}
Target(s):
Type: Step Functions state machine
ARN: arn:aws:states:us-east-1:xxxxxxx:stateMachine:MY_FUNCTION_NAME
Reference: https://docs.aws.amazon.com/step-functions/latest/dg/tutorial-cloudwatch-events-s3.html
Your Step Function isn't being triggered because the PutObject events aren't being published to CloudTrail. S3 object-level operations are classified as data events, so you must enable data events when creating your trail. The tutorial says "next, next and create", which seems to suggest no additional options need to be selected, but by default data events on the next step (step 2 - Choose log events - as of this writing) are not checked. You have to check the box and fill in the bottom part to specify which buckets/events are to be logged.
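A sketch of enabling S3 data events on an existing trail, built as a plain payload; the trail name is a placeholder and MY_BUCKETNAME stands in for the real bucket:

```python
# Event selector enabling object-level (data event) logging for one bucket.
# With boto3 this dict would be passed as:
#   cloudtrail.put_event_selectors(TrailName="my-trail",
#                                  EventSelectors=event_selectors)
event_selectors = [
    {
        "ReadWriteType": "All",            # log both reads and writes
        "IncludeManagementEvents": True,
        "DataResources": [
            {
                "Type": "AWS::S3::Object",
                # The trailing slash scopes logging to objects in this bucket.
                "Values": ["arn:aws:s3:::MY_BUCKETNAME/"],
            }
        ],
    }
]
```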

How to test lambda using test event

I have a Lambda which is triggered by a CloudWatch event when VPN tunnels go down or up. I searched online but can't find a way to trigger this CloudWatch event.
I see an option for a test event, but what can I enter here to simulate an event that the tunnel is up or down?
You can look into CloudWatch Events and Event Patterns.
Events in Amazon CloudWatch Events are represented as JSON objects.
For more information about JSON objects, see RFC 7159. The following
is an example event:
{
  "version": "0",
  "id": "6a7e8feb-b491-4cf7-a9f1-bf3703467718",
  "detail-type": "EC2 Instance State-change Notification",
  "source": "aws.ec2",
  "account": "111122223333",
  "time": "2017-12-22T18:43:48Z",
  "region": "us-west-1",
  "resources": [
    "arn:aws:ec2:us-west-1:123456789012:instance/i-1234567890abcdef0"
  ],
  "detail": {
    "instance-id": "i-1234567890abcdef0",
    "state": "terminated"
  }
}
Also, you can pick your required event type from the AWS CloudWatch Event Types list.
I believe in your scenario you don't need to pass any input data, as you must have built the logic to test the VPN tunnel connectivity within the Lambda. You can remove that JSON from the test event and then run the test.
If you need to pass in some information as part of the input event, then follow the approach mentioned by @Adiii.
EDIT
The question is more clear through the comment which says
But the question is how will I trigger the lambda? Let's say I want to
trigger it when the tunnel is down? How will the lambda know the tunnel is in
a down state? – NoviceMe
This can be achieved by setting up a rule in CloudWatch to schedule the Lambda trigger at a periodic interval. More details here:
Tutorial: Schedule AWS Lambda Functions Using CloudWatch Events
Lambda does not currently have an invocation trigger that can monitor a VPN tunnel, so the only workaround is to poll the status from within the Lambda.
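A sketch of such a polling Lambda, assuming the EC2 DescribeVpnConnections response shape, where each connection carries a VgwTelemetry list with a per-tunnel Status of "UP" or "DOWN"; the VPN connection ID is a placeholder:

```python
def down_tunnels(vpn_connection):
    """Return the outside IPs of tunnels whose telemetry status is not UP."""
    return [
        t["OutsideIpAddress"]
        for t in vpn_connection.get("VgwTelemetry", [])
        if t.get("Status") != "UP"
    ]

def handler(event, context):
    # boto3 imported here so down_tunnels() stays testable without AWS access
    import boto3
    ec2 = boto3.client("ec2")
    resp = ec2.describe_vpn_connections(
        VpnConnectionIds=["vpn-0123456789abcdef0"]  # placeholder ID
    )
    for conn in resp["VpnConnections"]:
        down = down_tunnels(conn)
        if down:
            # Act on the outage: publish to SNS, raise an alarm, etc.
            print(f"Tunnels down on {conn['VpnConnectionId']}: {down}")
```

Scheduling this handler on a CloudWatch Events rate rule (e.g. every 5 minutes) gives the periodic trigger the answer describes.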