I'd like to run some code using Lambda on the event that I create a new EC2 instance. Looking at the blueprint config-rule-change-triggered, I have the ability to run code depending on various configuration changes, but not when an instance is created. Is there a way to do what I want? Or have I misunderstood the use case of Lambda?
We had a similar requirement a couple of days back (users were supposed to get emails whenever a new instance was launched).
1) Go to CloudWatch, then select Rules.
2) Select the service name (it's EC2 in your case), then select "EC2 Instance State-change Notification".
3) Then select "pending" in the "Specific state" dropdown.
4) Click the "Add target" option and select your Lambda function.
That's it. Whenever a new instance gets launched, CloudWatch will trigger your Lambda function.
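If you'd rather script that setup than click through the console, here's a minimal sketch using boto3; the rule name, function name, and account/region in the ARN are placeholders:

import boto3

events = boto3.client('events')
aws_lambda = boto3.client('lambda')

# Rule that fires whenever an instance enters the "pending" state
rule_arn = events.put_rule(
    Name='ec2-instance-launched',  # placeholder rule name
    EventPattern='{"source": ["aws.ec2"],'
                 ' "detail-type": ["EC2 Instance State-change Notification"],'
                 ' "detail": {"state": ["pending"]}}',
    State='ENABLED')['RuleArn']

# Allow CloudWatch Events to invoke the function
aws_lambda.add_permission(
    FunctionName='my-notify-function',  # placeholder function name
    StatementId='allow-cloudwatch-events',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule_arn)

# Point the rule at the function
events.put_targets(
    Rule='ec2-instance-launched',
    Targets=[{'Id': '1',
              'Arn': 'arn:aws:lambda:us-east-1:123456789012:function:my-notify-function'}])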
Hope it helps!
You could do this by inserting code into your EC2 instance launch user data and having that code explicitly invoke a Lambda function, but that's not the best way to do it.
A better way is to use a combination of CloudTrail and Lambda. If you enable CloudTrail logging (every account should have this enabled, all the time, in all regions), CloudTrail will log all of the API calls made in your account to S3. You then connect this to Lambda by configuring S3 to publish events to Lambda. Your Lambda function receives an S3 event, retrieves the API logs, finds RunInstances API calls, and then does whatever work you need as a consequence of the new instance being launched.
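A rough sketch of what that Lambda function could look like, assuming CloudTrail's gzipped JSON log files land in the bucket that publishes the events:

import gzip
import json
from urllib.parse import unquote_plus
import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # S3 hands us the bucket/key of each newly written CloudTrail log file
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = unquote_plus(record['s3']['object']['key'])  # keys arrive URL-encoded
        body = s3.get_object(Bucket=bucket, Key=key)['Body'].read()
        log = json.loads(gzip.decompress(body))  # CloudTrail logs are gzipped JSON
        for entry in log.get('Records', []):
            if entry.get('eventName') == 'RunInstances':
                # A new instance was launched; do your work here
                print('RunInstances call at', entry.get('eventTime'))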
Some helpful references here and here.
I don't see a notification trigger for instance startup; however, what you can do is write a startup script and pass it in via user data. That startup script would need to download and install the AWS CLI and then publish a message to a pre-configured SNS topic. The startup script would authenticate to SNS and whatever other AWS services are needed via your IAM role, so you would need to give the IAM role permission to do whatever you want the script to do. This can be done in the IAM console.
That topic would then have your Lambda function subscribed to it, which would then execute. This is similar to the article below (though the author is doing something similar for shutdown, not startup).
http://rogueleaderr.com/post/48795010760/how-to-notifyemail-yourself-when-an-ec2-instance
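For illustration, the guts of such a startup script could be a few lines of Python rather than the CLI; the topic ARN is a placeholder, and the instance's IAM role must allow sns:Publish:

import urllib.request
import boto3

# The instance metadata service exposes the instance ID locally
# (IMDSv2-only instances would additionally need a session token)
instance_id = urllib.request.urlopen(
    'http://169.254.169.254/latest/meta-data/instance-id', timeout=2
).read().decode()

boto3.client('sns').publish(
    TopicArn='arn:aws:sns:us-east-1:123456789012:instance-launched',  # placeholder
    Subject='EC2 instance started',
    Message='Instance %s has started.' % instance_id)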
If you are putting the EC2 instances into an Auto Scaling group, I believe there is a trigger that fires when the group launches a new instance, so you could take advantage of that.
I hope that helps.
If you have CloudTrail enabled, you can have S3 PutObject events on the trail bucket trigger a Lambda function. The Lambda function parses the object that is passed to it, and if it finds a RunInstances event, runs your code.
I do the exact same thing to notify certain users when a new instance is launched. With Lambda/Python, it is ~20 lines of code.
I am looking for a way to trigger a Jenkins job whenever the S3 bucket is updated with a particular file format.
I have tried the Lambda function method with an "Add trigger -> S3 bucket PUT method". I followed this article, but it's not working. I have explored and found that AWS SNS and AWS SQS can also be used for this, but some say that approach is outdated. So which is the simplest way to trigger the Jenkins job when the S3 bucket is updated?
I just want a trigger whenever the zip file is updated by job A in Jenkins 1 to the S3 bucket named 'testbucket' in AWS environment 2. Both Jenkins instances are in different AWS accounts, each in a separate private VPC. I have attached my Jenkins workflow as a picture; please refer to the picture below.
The approach you are using seems solid and a good way to go. I'm not sure what specific issue you are having, so I'll list a couple of things that could explain why this might not be working for you:
Permissions issue - Check to ensure that the Lambda can be invoked by the S3 service. If you are doing this in the console (manually), you probably don't have to worry about that, since the permissions should be set up automatically. If you're doing this through infrastructure as code, it's something you need to add (see the sketch after this list).
Lambda VPC config - Your Lambda will need to run out of the same subnet in your VPC that the Jenkins instance runs out of. By default, a Lambda is not associated with a VPC and will not have access to the Jenkins instance (unless it's publicly available over the internet).
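For the permissions point, a minimal sketch of what the infrastructure-as-code path would need; the function name and account ID are placeholders, the bucket name is taken from the question:

import boto3

boto3.client('lambda').add_permission(
    FunctionName='trigger-jenkins-job',   # placeholder function name
    StatementId='allow-s3-invoke',
    Action='lambda:InvokeFunction',
    Principal='s3.amazonaws.com',
    SourceArn='arn:aws:s3:::testbucket',
    SourceAccount='123456789012')         # the bucket owner's account ID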
I found another Stack Overflow post that describes the SNS/SQS setup, if you want to continue down that path: Trigger Jenkins job when a S3 file is updated
Could anyone please help me with the Lambda code? Whenever AWS EC2 instances get stopped, we need to get email notifications via SNS. In the email we need the instance name. I am able to get the instance ID but not the instance name.
AWS CloudTrail allows you to identify and track EC2 instance lifecycle API calls (launch, start, stop, terminate). See How do I use AWS CloudTrail to track API calls to my Amazon EC2 instances?
And you can trigger a Lambda function to run arbitrary code when CloudTrail logs certain events. See Triggering a Lambda function with AWS CloudTrail events.
You can also create an Amazon CloudWatch alarm that monitors an Amazon EC2 instance and triggers a Lambda via CloudWatch Events.
You can create a rule in Amazon CloudWatch Events that:
Triggers when an instance enters the Stopped state
Sends a message to an Amazon SNS Topic
If you want to modify the message that is being sent, then configure the rule to trigger an AWS Lambda function instead. Your function should (see the sketch after this list):
Extract the instance information (e.g. InstanceId) from the event parameter
Call describe-instances to obtain the Name of the instance (presumably the Tag with a Key of Name)
Publish a message to the Amazon SNS Topic
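A minimal sketch of such a function, assuming the CloudWatch Events rule passes the standard EC2 state-change event; the topic ARN is a placeholder:

import boto3

ec2 = boto3.client('ec2')
sns = boto3.client('sns')
TOPIC_ARN = 'arn:aws:sns:us-east-1:123456789012:instance-stopped'  # placeholder

def lambda_handler(event, context):
    # The state-change event carries the instance ID in its detail
    instance_id = event['detail']['instance-id']

    # Look up the instance's Name tag via describe_instances
    instance = ec2.describe_instances(
        InstanceIds=[instance_id])['Reservations'][0]['Instances'][0]
    name = next((t['Value'] for t in instance.get('Tags', [])
                 if t['Key'] == 'Name'), '(no Name tag)')

    sns.publish(
        TopicArn=TOPIC_ARN,
        Subject='EC2 instance stopped',
        Message='Instance %s (%s) has stopped.' % (name, instance_id))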
I have a Lambda function that triggers on any new object created (s3:ObjectCreated:*) in bucket A-prod. This Lambda processes the file and saves the result in B-prod.
Now I want to restore this data on QA. I am writing a script for this, and I don't want to process the files again. In my case I will copy B-prod to B-qa and A-prod to A-qa via a Python script.
It would be great if I could disable the Lambda trigger temporarily, but that is not possible, as AWS does not allow disabling anymore (it is greyed out in the AWS console).
I can delete the trigger from the AWS console but can't find how to do it from Python or the CLI.
"can't find how to do it from python or cli."
In boto3, you use put_bucket_notification_configuration to provide an empty notification configuration.
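A small sketch of that, which also saves the current configuration so you can restore the trigger after the copy script finishes (bucket name taken from the question):

import boto3

s3 = boto3.client('s3')

# Save the current configuration so it can be restored later
current = s3.get_bucket_notification_configuration(Bucket='A-prod')
current.pop('ResponseMetadata', None)

# An empty configuration removes the trigger
s3.put_bucket_notification_configuration(
    Bucket='A-prod',
    NotificationConfiguration={})

# ... run the copy script here ...

# Restore the original trigger
s3.put_bucket_notification_configuration(
    Bucket='A-prod',
    NotificationConfiguration=current)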
For anyone looking to temporarily disable the trigger from the AWS console without deleting it, here's a work-around. Note that this is not meant as a permanent solution; I had to disable a trigger temporarily, and it works.
Go to S3 > Buckets > your-bucket > Properties
Scroll down to "Event Notifications"
You should see your Lambda function under Destination.
Choose Edit.
Uncheck "All Object Create Events".
Check another event type that will not occur in your process, such as "Object Tagging".
Save.
When finished, re-enable by checking "All Object Create Events" and unchecking the other event.
What is the easiest way to trigger an Elastic Beanstalk redeployment from within AWS?
The goal is to just reboot all instances, but in an orderly manner, according to ALB / target group rules.
Currently I am doing this locally via the EB shell by calling eb deploy without making any code changes. But rather than doing this manually on a regular basis, I want to use CloudWatch jobs to trigger it on a schedule.
One way would be to set up a CloudWatch Events rule with a schedule expression.
The rule would trigger a Lambda based on your pre-defined schedule. The Lambda can be as simple as just triggering the re-deployment of the existing application:
import boto3

eb = boto3.client('elasticbeanstalk')

def lambda_handler(event, context):
    # Re-deploy the already-deployed application version (no code changes)
    response = eb.update_environment(
        ApplicationName='<your-eb-app-name>',
        EnvironmentName='<your-eb-env-name>',
        VersionLabel='<existing-label-of-application-version-to-redeploy>')
    print(response)
You could customize the Lambda to be more useful, e.g. by parametrizing it instead of hard-coding all the names required for update_environment.
The lambda execution role also needs to be adjusted to allow the actions on EB.
The other option would be to use CodePipeline with two stages:
A Source stage with S3, where you specify the zip with the application version to deploy. Its bucket must be versioned.
A Deploy stage with the Elastic Beanstalk provider.
The pipeline would also be triggered by the CloudWatch rule on a schedule.
There is actually a feature called Instance Refresh that replaces the instances without deploying a new app version.
Triggering that via a Lambda function scheduled via CloudWatch jobs seems to be the easiest and cleanest way for my use case. However, keep in mind that replacing instances is not the same as rebooting / redeploying, for example when it comes to managing burst credits.
This AWS blog post describes how to set up a scheduled instance refresh with AWS Lambda.
I'm currently setting up a small Lambda to take snapshots of all the important volumes of our EC2 instances. To guarantee application consistency, I need to trigger actions inside the instances: one to quiesce the application before the snapshot and one to wake it up again after the snapshot is done. So far I have no clue how to do this.
I've thought about using SNS or SQS to notify the instances about start and stop of the snapshot, but that has several problems:
I'll need to install (and develop) a custom listener inside the instances.
I'll not get feedback if the quiescing/wake-up is done.
So here's my question: how can I trigger an action inside an instance from a Lambda?
But maybe I'm approaching this from the wrong direction. Is there really no simple backup solution? I know Azure has a snapshot-based backup service that can do application-consistent backups. Did I just miss an equivalent AWS service?
Edit 1:
Ok, it looks like the 'Run Command' feature of AWS Systems Manager is what I really need. It allows me to run scripts, Ansible playbooks, and more inside an EC2 instance. When I've got a working solution I'll post the necessary steps.
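In the meantime, a rough sketch of how a Lambda could drive Run Command and wait for feedback; the instance ID and script path are placeholders, and the instance needs the SSM agent plus an instance profile that allows Systems Manager access:

import time
import boto3

ssm = boto3.client('ssm')
INSTANCE_ID = 'i-0123456789abcdef0'  # placeholder

def lambda_handler(event, context):
    # Run a quiesce script inside the instance via SSM Run Command
    cmd = ssm.send_command(
        InstanceIds=[INSTANCE_ID],
        DocumentName='AWS-RunShellScript',
        Parameters={'commands': ['/usr/local/bin/quiesce-app.sh']})  # hypothetical script
    command_id = cmd['Command']['CommandId']

    # Poll for completion, so we get feedback before taking the snapshot
    while True:
        time.sleep(2)
        inv = ssm.get_command_invocation(
            CommandId=command_id, InstanceId=INSTANCE_ID)
        if inv['Status'] not in ('Pending', 'InProgress', 'Delayed'):
            break
    print('Quiesce finished with status:', inv['Status'])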
You can trigger a Lambda function on demand:
Using AWS Lambda with Amazon API Gateway (On-Demand Over HTTPS)
You can invoke AWS Lambda functions over HTTPS. You can do this by defining a custom REST API and endpoint using Amazon API Gateway, and then mapping individual methods, such as GET and PUT, to specific Lambda functions. Alternatively, you could add a special method named ANY to map all supported methods (GET, POST, PATCH, DELETE) to your Lambda function. When you send an HTTPS request to the API endpoint, the Amazon API Gateway service invokes the corresponding Lambda function. For more information about the ANY method, see Step 3: Create a Simple Microservice using Lambda and API Gateway.
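Aside from the API Gateway route, the SDK can also invoke a function on demand directly, which may be all you need here; a sketch with a placeholder function name and payload:

import json
import boto3

resp = boto3.client('lambda').invoke(
    FunctionName='my-function',            # placeholder
    InvocationType='RequestResponse',      # synchronous; use 'Event' for async
    Payload=json.dumps({'action': 'quiesce'}).encode())
print(json.loads(resp['Payload'].read()))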