I have a Lambda which is triggered by a CloudWatch Event when VPN tunnels go down or come up. I searched online but can't find a way to trigger this CloudWatch event manually.
I see an option for a test event, but what can I enter here for it to trigger an event that the tunnel is up or down?
You can look into CloudWatch Events and Event Patterns.
Events in Amazon CloudWatch Events are represented as JSON objects.
For more information about JSON objects, see RFC 7159. The following
is an example event:
{
    "version": "0",
    "id": "6a7e8feb-b491-4cf7-a9f1-bf3703467718",
    "detail-type": "EC2 Instance State-change Notification",
    "source": "aws.ec2",
    "account": "111122223333",
    "time": "2017-12-22T18:43:48Z",
    "region": "us-west-1",
    "resources": [
        "arn:aws:ec2:us-west-1:123456789012:instance/i-1234567890abcdef0"
    ],
    "detail": {
        "instance-id": "i-1234567890abcdef0",
        "state": "terminated"
    }
}
Also, to log based on an event, you can pick the event you need from the AWS CloudWatch Event Types.
I believe in your scenario you don't need to pass any input data, as you must have built the logic to test the VPN tunnels' connectivity within the Lambda. You can remove that JSON from the test event and then run the test.
If you need to pass in some information as part of the input event, then follow the approach mentioned by @Adiii.
EDIT
The question is made clearer by this comment:
"But question is how will I trigger the lambda? Lets say I want to trigger it when tunnel is down? How will let lambda know tunnel is in down state?" – NoviceMe
This can be achieved by setting up a rule in CloudWatch to schedule the Lambda trigger at a periodic interval. More details here:
Tutorial: Schedule AWS Lambda Functions Using CloudWatch Events
Lambda does not currently have an invocation trigger that can monitor a VPN tunnel, so the only workaround is to poll the status from within the Lambda.
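For illustration, a minimal polling sketch using boto3's describe_vpn_connections (the handler name and the print-based alert are placeholders for your own notification logic):

import boto3

ec2 = boto3.client("ec2")

def lambda_handler(event, context):
    # Poll every VPN connection and inspect its tunnel telemetry.
    for vpn in ec2.describe_vpn_connections()["VpnConnections"]:
        for tunnel in vpn.get("VgwTelemetry", []):
            if tunnel["Status"] != "UP":
                # Tunnel is DOWN; replace this with your alerting (SNS, etc.).
                print(f"{vpn['VpnConnectionId']} tunnel at "
                      f"{tunnel['OutsideIpAddress']} is {tunnel['Status']}")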
I have a Lambda function deployed in us-east-1 which runs every time an EC2 instance is started.
The Lambda function is triggered with the following EventBridge configuration:
{
    "detail-type": [
        "AWS API Call via CloudTrail"
    ],
    "source": [
        "aws.ec2"
    ],
    "detail": {
        "eventName": [
            "RunInstances"
        ]
    }
}
The Lambda function is working great. Now I'm looking to extend this so that my Lambda function is triggered even when an EC2 instance is launched in a different region (e.g. us-east-2).
How can I achieve this?
One option is to put SNS as the event target and subscribe the Lambda to the SNS topic. SNS supports cross-region subscriptions.
Another option is to use cross-region event buses. You create a rule that forwards the event to another region, and another event rule in that region that invokes a Lambda. More info here: https://aws.amazon.com/blogs/compute/introducing-cross-region-event-routing-with-amazon-eventbridge/
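As a rough sketch of that second option with boto3 (the rule name, account ID and IAM role below are hypothetical; the role must allow events:PutEvents on the target bus):

import json
import boto3

# Runs in the source region (us-east-2 here); all names/ARNs are placeholders.
events = boto3.client("events", region_name="us-east-2")

# Rule matching the same RunInstances pattern as in the question.
events.put_rule(
    Name="forward-run-instances",
    EventPattern=json.dumps({
        "source": ["aws.ec2"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {"eventName": ["RunInstances"]},
    }),
)

# Target is the default event bus in us-east-1, where your Lambda rule lives.
events.put_targets(
    Rule="forward-run-instances",
    Targets=[{
        "Id": "to-us-east-1",
        "Arn": "arn:aws:events:us-east-1:111122223333:event-bus/default",
        "RoleArn": "arn:aws:iam::111122223333:role/EventBridgeCrossRegionRole",
    }],
)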
A recently announced feature can also help with cross-region use cases for AWS Lambda: https://aws.amazon.com/blogs/compute/introducing-cross-region-event-routing-with-amazon-eventbridge/
Amazon EventBridge is a great way to do cross-region (and cross-account) event processing.
CodeBuild is used to build a project from a repository and deploy it to S3. I want to pass data/information that is processed in CodeBuild to a CloudWatch event so that I can send a notification with that information for passed as well as failed builds. Is there a way to send data ($variables) processed in CodeBuild to a CloudWatch event rule, or any other way?
I have the rule, topic, and email working... but I see no way to pass any extra data beyond what is supplied by CodeBuild.
For example: I have some environment variables in CodeBuild and I need to send these as part of my notification, which will help me determine what value caused the build to fail.
You have to do this from within your CodeBuild project as part of your buildspec.yml. If you are using SNS (I guess), then you can call aws sns publish from the AWS CLI as part of your build procedure. This also requires adding permission for the sns:Publish action to the CodeBuild role.
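Alternatively, the buildspec could invoke a small helper script instead of the CLI; here is a minimal boto3 sketch, where the topic ARN and MY_VARIABLE are placeholders (CODEBUILD_BUILD_ID is set by CodeBuild itself):

import os
import boto3

sns = boto3.client("sns")
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:111122223333:build-notifications",  # placeholder
    Subject="CodeBuild result",
    Message=(
        f"Build: {os.environ.get('CODEBUILD_BUILD_ID', 'unknown')}\n"
        f"MY_VARIABLE: {os.environ.get('MY_VARIABLE', 'unset')}"
    ),
)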
I'll start by saying that @Marcin's answer is totally correct, but it doesn't answer the "as well as failed build" part.
So for the first part, where you want to send the responses from the processed data, you need to either:
publish to SNS directly from your buildspec (as @Marcin pointed out)
or send an event to AWS EventBridge (aka CloudWatch Events) from your buildspec
With regard to the second part of the question, where you want to catch the CodeBuild execution status, you can rely on the built-in notification events that are generated by CodeBuild itself:
{
    "source": [
        "aws.codebuild"
    ],
    "detail-type": [
        "CodeBuild Build State Change"
    ],
    "detail": {
        "build-status": [
            "IN_PROGRESS",
            "SUCCEEDED",
            "FAILED",
            "STOPPED"
        ],
        "project-name": [
            "my-demo-project-1",
            "my-demo-project-2"
        ]
    }
}
You can intercept the events for the whole build, and for each phase separately if needed, and act on them (whether you send to SNS, trigger a Lambda, or something else is up to you).
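If the target is a Lambda, a minimal handler that picks the status out of such an event could look like this sketch (the notification step is a placeholder):

def lambda_handler(event, context):
    # These fields come straight from the CodeBuild event shown above.
    detail = event["detail"]
    status = detail["build-status"]
    project = detail["project-name"]
    if status == "FAILED":
        # Placeholder: publish to SNS, page someone, etc.
        print(f"Build of {project} failed")
    else:
        print(f"Build of {project} is {status}")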
After setting up the EventBridge rule, the S3 put-object event still cannot trigger the Step Function.
However, when I tried changing the event rule to EC2 status, it worked!
I also tried changing the rule to all S3 events, but it still doesn't work.
Amazon EventBridge:
Event pattern:
{
    "source": ["aws.s3"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["s3.amazonaws.com"],
        "eventName": ["PutObject"],
        "requestParameters": {
            "bucketName": ["MY_BUCKETNAME"]
        }
    }
}
Target(s):
Type: Step Functions state machine
ARN: arn:aws:states:us-east-1:xxxxxxx:stateMachine:MY_FUNCTION_NAME
Reference: https://docs.aws.amazon.com/step-functions/latest/dg/tutorial-cloudwatch-events-s3.html
Your Step Function isn't being triggered because the PutObject events aren't being published to CloudTrail. S3 operations are classified as data events, so you must enable data events when creating your CloudTrail trail. The tutorial says Next, Next, and Create, which seems to suggest no additional options need to be selected. By default, Data events on the next step (step 2, Choose log events, as of this writing) is not checked. You have to check it and fill in the bottom part to specify whether all buckets/events are to be logged.
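The data events can also be enabled programmatically; a sketch with boto3, assuming placeholder trail and bucket names:

import boto3

cloudtrail = boto3.client("cloudtrail")

# Log S3 object-level (data) events for the bucket from the event pattern.
cloudtrail.put_event_selectors(
    TrailName="my-trail",  # placeholder trail name
    EventSelectors=[{
        "ReadWriteType": "WriteOnly",  # PutObject is a write event
        "IncludeManagementEvents": True,
        "DataResources": [{
            "Type": "AWS::S3::Object",
            "Values": ["arn:aws:s3:::MY_BUCKETNAME/"],
        }],
    }],
)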
We can set up event rules to trigger an ECS task, but I don't see whether the triggering event is passed to the running ECS task, or how the task can fetch the content of this event. If a Lambda is triggered, we can get it from the event variable, for example, in Python:
def lambda_handler(event, context):
...
But in ECS I don't see how I can do anything similar. Going to the CloudTrail log bucket doesn't seem like a good way, because it has around a 5-minute delay before a new log/event shows up, which requires the ECS task to wait and needs additional logic to talk to S3 and find & read the log. And when the triggering events are frequent, this sounds hard to handle.
One way to handle this is to set two targets in the CloudWatch rule:
One target will launch the ECS task
One target will push the same event to SQS
So the SQS message will contain info like:
{
    "version": "0",
    "id": "89d1a02d-5ec7-412e-82f5-13505f849b41",
    "detail-type": "Scheduled Event",
    "source": "aws.events",
    "account": "123456789012",
    "time": "2016-12-30T18:44:49Z",
    "region": "us-east-1",
    "resources": [
        "arn:aws:events:us-east-1:123456789012:rule/SampleRule"
    ],
    "detail": {}
}
So when the ECS task comes up, it will be able to read the event from SQS.
For example, in the Docker entrypoint:
#!/bin/sh
echo "Starting container"
echo "Process SQS event"
node process_schedule_event.js
# or, if you need to process it at run time:
schedule_event=$(aws sqs receive-message --queue-url https://sqs.us-west-2.amazonaws.com/123456789/demo --attribute-names All --message-attribute-names All --max-number-of-messages 1)
echo "Schedule Event: ${schedule_event}"
# once processing is done, start the main process of the container
exec "$@"
After further investigation, I finally worked out another solution: use S3 to invoke a Lambda, and then in that Lambda use the ECS SDK (boto3, since I use Python) to run my ECS task. This way I can easily pass the event content to ECS and it is nearly real time.
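For reference, the core of that Lambda might look like this sketch (the cluster, task definition, subnet and container names are placeholders):

import boto3

ecs = boto3.client("ecs")

def lambda_handler(event, context):
    # Pull bucket/key out of the S3 trigger event.
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    # Run the ECS task, passing the event content as environment overrides.
    ecs.run_task(
        cluster="my-cluster",
        taskDefinition="my-task",
        launchType="FARGATE",
        networkConfiguration={"awsvpcConfiguration": {
            "subnets": ["subnet-12345678"],
            "assignPublicIp": "ENABLED",
        }},
        overrides={"containerOverrides": [{
            "name": "my-container",
            "environment": [
                {"name": "S3_BUCKET", "value": bucket},
                {"name": "S3_KEY", "value": key},
            ],
        }]},
    )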
But I still give credit to @Adiii because his solution also works.
I have created a workflow like this:
User requests an instance creation through an API Gateway endpoint
The gateway invokes a Lambda function that executes the following code
Generate an RDP file with the public DNS to give to the user so that they can connect
import boto3
import time

ec2 = boto3.resource('ec2', region_name='us-east-1')
instances = ec2.create_instances(...)  # instance parameters elided
instance = instances[0]
time.sleep(3)
instance.load()  # refresh attributes so public_dns_name is populated
return instance.public_dns_name
The problem with this approach is that the user has to wait almost 2 minutes before they are able to log in successfully. I'm totally okay with letting the Lambda run for that time by adding the following code:
instance.wait_until_running()
But unfortunately API Gateway has a 29-second timeout for Lambda integration. So even if I'm willing to wait, it wouldn't work. What's the easiest way to overcome this?
My approach to accomplish your scenario would be a CloudWatch Event rule.
The Lambda function that creates the instance must store some kind of relation between the instance and the user, something like this:
Proposed table:
The table structure is up to you, but these are the most important columns.
------------------------------
| Instance_id | User_Id      |
------------------------------
Create a CloudWatch Event rule to execute a Lambda function.
Firstly, pick Event Type: EC2 Instance State-change Notification, then select Specific state(s): Running.
Secondly, pick the target: a Lambda function.
That Lambda function sends the email to the user.
That Lambda function will receive the InstanceId. With that information, you can find the related User_Id and send the necessary information to the user. You can use the SDK to get information about your EC2 instance, for example its public_dns_name.
This is an example of the payload that will be sent by the CloudWatch Event rule notification:
{
    "version": "0",
    "id": "6a7e8feb-b491-4cf7-a9f1-bf3703467718",
    "detail-type": "EC2 Instance State-change Notification",
    "source": "aws.ec2",
    "account": "111122223333",
    "time": "2015-12-22T18:43:48Z",
    "region": "us-east-1",
    "resources": [
        "arn:aws:ec2:us-east-1:123456789012:instance/i-12345678"
    ],
    "detail": {
        "instance-id": "i-12345678",
        "state": "running"
    }
}
That way, you can send the public_dns_name when your instance is fully in service.
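As a sketch, that notification Lambda could look like the following (the DynamoDB table name and the print-based notification are placeholders for your own lookup and email logic):

import boto3

ec2 = boto3.resource("ec2")
# Hypothetical mapping table from the proposal above.
table = boto3.resource("dynamodb").Table("InstanceUserMapping")

def lambda_handler(event, context):
    instance_id = event["detail"]["instance-id"]
    instance = ec2.Instance(instance_id)
    # Find the user who requested this instance.
    item = table.get_item(Key={"Instance_id": instance_id}).get("Item")
    if item:
        # Placeholder: send the DNS name via SES/SNS instead of printing.
        print(f"Notify {item['User_Id']}: {instance.public_dns_name}")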
Hope it helps!