I added a CloudWatch Logs trigger to a Lambda function, which gets triggered when a particular word is found (for example: 'application started') and then runs certain functions, like sending an SNS notification. What I need help with is the Python code to reboot an EC2 instance from inside the Lambda function. I've seen everyone starting and stopping EC2 instances from Lambdas, but not rebooting them.
Thanks!
You can check this documentation, which explains the reboot_instances command from the boto3 package.
import boto3

ec2 = boto3.client('ec2')

# Reboot one or more instances by ID (placeholder ID shown)
response = ec2.reboot_instances(InstanceIds=['i-0123456789abcdef0'])
So simple.
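Wrapped in a handler, a minimal sketch could look like this (the instance ID is a placeholder; in practice you might parse it out of the triggering log event):

import boto3

ec2 = boto3.client('ec2')

def lambda_handler(event, context):
    # Reboot the target instance once the log filter has matched
    response = ec2.reboot_instances(InstanceIds=['i-0123456789abcdef0'])
    print(response)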
Related
Could anyone please help me with the Lambda code? Whenever AWS EC2 instances get stopped, we need to get email notifications via SNS. In the email we need the instance name. I am able to get the instance ID, but not the instance name.
AWS CloudTrail allows you to identify and track EC2 instance lifecycle API calls (launch, start, stop, terminate). See How do I use AWS CloudTrail to track API calls to my Amazon EC2 instances?
And you can trigger a Lambda function to run arbitrary code when CloudTrail logs certain events. See Triggering a Lambda function with AWS CloudTrail events.
You can also create an Amazon CloudWatch alarm that monitors an Amazon EC2 instance and triggers a Lambda via CloudWatch Events.
You can create a rule in Amazon CloudWatch Events that:
Triggers when an instance enters the Stopped state
Sends a message to an Amazon SNS Topic
If you want to modify the message that is being sent, then configure the Rule to trigger an AWS Lambda function instead. Your function should:
Extract the instance information (eg InstanceId) from the event parameter
Call describe-instances to obtain the Name of the instance (presumably the Tag with a Key of Name)
Publish a message to the Amazon SNS Topic
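A minimal sketch of such a function, assuming the event is the standard EC2 state-change notification (the topic ARN is a placeholder):

import boto3

ec2 = boto3.client('ec2')
sns = boto3.client('sns')

def lambda_handler(event, context):
    # The state-change event carries the instance ID in its detail
    instance_id = event['detail']['instance-id']
    reservations = ec2.describe_instances(InstanceIds=[instance_id])['Reservations']
    tags = reservations[0]['Instances'][0].get('Tags', [])
    # The instance name is presumably the Tag with a Key of Name
    name = next((t['Value'] for t in tags if t['Key'] == 'Name'), instance_id)
    sns.publish(
        TopicArn='arn:aws:sns:us-east-1:123456789012:instance-stopped-topic',
        Subject='EC2 instance stopped',
        Message=f'Instance {name} ({instance_id}) has stopped.')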
What is the easiest way to trigger an Elastic Beanstalk redeployment from within AWS?
The goal is to just reboot all instances, but in an orderly manner, according to ALB / target group rules.
Currently I am doing this locally via the EB CLI by calling eb deploy without making any code changes. But rather than doing this manually on a regular basis, I want to use a scheduled CloudWatch rule to trigger it.
One way would be to set up a CloudWatch rule with a schedule expression.
The rule would trigger a Lambda on your pre-defined schedule. The Lambda can be as simple as just triggering a redeployment of the existing application:
import boto3

eb = boto3.client('elasticbeanstalk')

def lambda_handler(event, context):
    # Redeploying the current version restarts the environment's instances
    response = eb.update_environment(
        ApplicationName='<your-eb-app-name>',
        EnvironmentName='<your-eb-env-name>',
        VersionLabel='<existing-label-of-application-version-to-redeploy>')
    print(response)
You could customize the Lambda to be more useful, e.g. by parametrizing it instead of hard-coding all the names required for update_environment.
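For example, a parametrized variant could take the names from the event payload (a hedged sketch; the event keys are made up):

import boto3

eb = boto3.client('elasticbeanstalk')

def lambda_handler(event, context):
    # Names come in on the event instead of being hard-coded
    response = eb.update_environment(
        ApplicationName=event['app_name'],
        EnvironmentName=event['env_name'],
        VersionLabel=event['version_label'])
    print(response)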
The Lambda execution role also needs to be adjusted to allow the required Elastic Beanstalk actions.
The other option would be to use CodePipeline with two stages:
A Source stage pointing at the S3 zip containing the application version to deploy; its bucket must be versioned.
A Deploy stage with the Elastic Beanstalk provider.
The pipeline would also be triggered by the CloudWatch rule on a schedule.
There is actually a feature called Instance Refresh that replaces the instances without deploying a new app version.
Triggering that via a Lambda function on a scheduled CloudWatch rule seems to be the easiest and cleanest way for my use case. However, keep in mind that replacing instances is not the same as rebooting / redeploying, for example when it comes to managing burst credits.
This AWS blog post describes how to set up a scheduled instance refresh with AWS Lambda.
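A minimal sketch of such a scheduled Lambda, assuming the refresh targets the environment's Auto Scaling group (the group name is a placeholder):

import boto3

autoscaling = boto3.client('autoscaling')

def lambda_handler(event, context):
    # Roll the instances without deploying a new application version
    response = autoscaling.start_instance_refresh(
        AutoScalingGroupName='<your-environments-asg-name>',
        Strategy='Rolling')
    print(response['InstanceRefreshId'])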
I have an EC2 instance with 300 GB of data (EBS volumes attached). I would like to develop a Lambda function
to start/stop this EC2 instance during non-business hours to save cloud cost. Can anyone help me by sharing any sample code/function?
I think the scenario could be addressed without Lambda:
cron expressions for a CloudWatch Events rule, with
targets of SSM Automation,
and the documents AWS-StartEC2Instance and AWS-StopEC2Instance.
Note that CloudWatch Events has a built-in target for stopping an instance, but there is no target for starting it. Thus, SSM Automation is proposed.
But if Lambda is a requirement, then instead of SSM Automation, just use a Lambda function with CloudWatch Events.
You can use a CloudWatch / EventBridge rule along with cron expressions to define a schedule on which a Lambda function runs. Within that Lambda function, you can then turn off your EC2 instance easily.
import boto3
import logging

logger = logging.getLogger()

def turn_off_instance(instance_ids, region):
    ec2 = boto3.client('ec2', region_name=region)
    ec2.stop_instances(InstanceIds=instance_ids)
    logger.info('Instance(s) stopped: %s', instance_ids)
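The start half of the schedule could use a symmetric sketch with start_instances:

import boto3
import logging

logger = logging.getLogger()

def turn_on_instance(instance_ids, region):
    # Counterpart for the start-of-business-hours schedule
    ec2 = boto3.client('ec2', region_name=region)
    ec2.start_instances(InstanceIds=instance_ids)
    logger.info('Instance(s) started: %s', instance_ids)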
These two guides do something very similar:
EventBridge:
https://medium.com/geekculture/enable-or-disable-aws-alarms-at-given-intervals-d2f867aa9aa4
Lambda code:
https://medium.com/geekculture/terraform-setup-for-automatically-turning-off-ec2-instances-upon-inactivity-d7f414390800
I'd like to run some code using Lambda on the event that I create a new EC2 instance. Looking at the blueprint config-rule-change-triggered, I have the ability to run code on various configuration changes, but not when an instance is created. Is there a way to do what I want? Or have I misunderstood the use case of Lambda?
We had a similar requirement a couple of days back (users were supposed to get emails whenever a new instance was launched):
1) Go to CloudWatch, then select Rules.
2) Select the service name (it's EC2 in your case), then select "EC2 Instance State-change Notification".
3) Then select "pending" in the "Specific state" dropdown.
4) Click the "Add target" option and select your Lambda function.
That's it. Whenever a new instance gets launched, CloudWatch will trigger your Lambda function.
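If you'd rather create the same rule programmatically, a hedged boto3 sketch (the rule name is made up):

import boto3

events = boto3.client('events')

# Fire on instances entering the pending state, i.e. fresh launches
events.put_rule(
    Name='notify-on-instance-launch',
    EventPattern=(
        '{"source": ["aws.ec2"],'
        ' "detail-type": ["EC2 Instance State-change Notification"],'
        ' "detail": {"state": ["pending"]}}'))

You would still attach the Lambda with put_targets, mirroring step 4.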
Hope it helps !!
You could do this by inserting code into your EC2 instance launch userdata and have that code explicitly invoke a Lambda function, but that's not the best way to do it.
A better way is to use a combination of CloudTrail and Lambda. If you enable CloudTrail logging (every account should have this enabled, all the time, in all regions), then CloudTrail will log to S3 all of the API calls made in your account. You then connect this to Lambda by configuring S3 to publish events to Lambda. Your Lambda function will receive an S3 event, can then retrieve the API logs, find RunInstances API calls, and do whatever work you need to as a consequence of the new instance being launched.
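A rough sketch of such a function, assuming the standard gzipped CloudTrail log layout (the bucket and key arrive on the S3 event):

import gzip
import json
import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # Each S3 event record points at one gzipped CloudTrail log file
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        body = s3.get_object(Bucket=bucket, Key=key)['Body'].read()
        trail = json.loads(gzip.decompress(body))
        for entry in trail.get('Records', []):
            if entry['eventName'] == 'RunInstances':
                print('New instance(s) launched:', entry.get('responseElements'))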
I don't see a notification trigger for instance startup; however, what you can do is write a startup script and pass it in via userdata. That startup script would need to download and install the AWS CLI and then publish a message to a pre-configured SNS topic. The script would authenticate to SNS, and to whatever other AWS services are needed, via your IAM role, so you would need to give the IAM role permission to do whatever you want the script to do. This can be done in the IAM console.
That topic would then have your Lambda function subscribed to it, which would then execute. Similar to the article below (though the author is doing something similar for shutdown, not startup).
http://rogueleaderr.com/post/48795010760/how-to-notifyemail-yourself-when-an-ec2-instance
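If Python and boto3 happen to be available on the instance, the publish step of that startup script could be as small as this sketch (the topic ARN and region are placeholders; credentials come from the instance's IAM role):

import boto3

# Run at boot from userdata; the IAM instance role supplies credentials
sns = boto3.client('sns', region_name='us-east-1')
sns.publish(
    TopicArn='arn:aws:sns:us-east-1:123456789012:instance-startup-topic',
    Message='Instance startup script ran.')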
If you are putting the EC2 instances into an autoscale group, I believe there is a trigger that gets fired when the autoscale group launches a new instance, so you could take advantage of that.
I hope that helps.
If you have CloudTrail enabled, then you can have S3 PutObject events on the trail bucket trigger a Lambda function. The Lambda function parses the object that is passed to it and, if it finds a RunInstances event, runs your code.
I do the exact same thing to notify certain users when a new instance is launched. With Lambda/Python, it is ~20 lines of code.
I need to receive notifications whenever my instance is terminated. I know it can be done with CloudTrail and then using SNS and SQS to get an email when the termination event is received.
Is there a simpler way to do that?
Any solution is appreciated, but I'd prefer doing it using boto.
While it is not possible to receive a notification directly from Amazon EC2 when an instance is terminated, there are a couple of ways this could be accomplished:
Auto Scaling can send a notification when an instance managed by Auto Scaling is terminated. See: Configure Your Auto Scaling Group to Send Notifications
AWS Config can also be configured to send a Simple Notification Service (SNS) notification when resources change. This would send many notifications, so you would need to inspect and filter the notifications to find the one(s) indicating an instance termination. See the SNS reference in: Set Up AWS Config Using the Console and Example Amazon SNS Notification and Email from AWS Config.
Amazon Simple Notification Service (SNS) can also push a message to Amazon Simple Queue Service (SQS), which can be easily polled with the boto Python SDK.
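Since boto is preferred, polling that queue could look like this minimal boto3 sketch (the queue URL is a placeholder):

import boto3

sqs = boto3.client('sqs')
queue_url = 'https://sqs.us-east-1.amazonaws.com/123456789012/instance-events'

while True:
    # Long-poll for the SNS-delivered notifications
    response = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=20)
    for msg in response.get('Messages', []):
        print(msg['Body'])
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg['ReceiptHandle'])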
Receiving notifications via CloudTrail and CloudWatch Logs is somewhat messier, so I'd recommend the AWS Config method.
AWS has now introduced Rules under Events in Amazon CloudWatch. In your case, you can select EC2 as the event selector and SNS or SQS as targets.
https://aws.amazon.com/blogs/aws/new-cloudwatch-events-track-and-respond-to-changes-to-your-aws-resources/
According to the AWS doc Spot Instance Interruptions, it is possible to poll the instance metadata in order to get an approximation of the termination time. You can build any custom monitoring solution around that.
> curl http://169.254.169.254/latest/meta-data/spot/instance-action
{"action": "stop", "time": "2017-09-18T08:22:00Z"}
If the instance is not scheduled for interruption, an HTTP 404 is returned.
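A minimal Python polling sketch of that idea (the interval and the reaction are up to you):

import time
import urllib.error
import urllib.request

URL = 'http://169.254.169.254/latest/meta-data/spot/instance-action'

while True:
    try:
        with urllib.request.urlopen(URL, timeout=2) as resp:
            # A 200 response means an interruption is scheduled
            print('Interruption notice:', resp.read().decode())
            break
    except urllib.error.HTTPError:
        pass  # 404: no interruption scheduled yet
    time.sleep(5)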