I have an EC2 instance with 300 GB of data (EBS volumes attached). I would like to develop a Lambda function
to start/stop this EC2 instance during non-business hours to save cloud costs. Can anyone help me by sharing any sample code/function?
I think the scenario could be addressed without Lambda:
cron expressions for a CloudWatch Events rule, with
targets of SSM Automation,
and the documents AWS-StartEC2Instance and AWS-StopEC2Instance.
Note that CloudWatch Events has a built-in target for stopping an instance, but there is no target for starting one. That is why SSM Automation is proposed.
But if Lambda is a requirement, then instead of SSM Automation, just use a Lambda function with CloudWatch Events.
You can use an EventBridge (formerly CloudWatch Events) rule with a cron expression to define a schedule on which a Lambda function runs. Within that Lambda function, you can then stop your EC2 instance easily.
import boto3
import logging

logger = logging.getLogger()

def turn_off_instance(instance_ids, region='us-east-1'):
    ec2 = boto3.client('ec2', region_name=region)
    ec2.stop_instances(InstanceIds=instance_ids)
    logger.info(f'Stopped instance(s): {instance_ids}')
These two guides do something very similar:
EventBridge:
https://medium.com/geekculture/enable-or-disable-aws-alarms-at-given-intervals-d2f867aa9aa4
Lambda code:
https://medium.com/geekculture/terraform-setup-for-automatically-turning-off-ec2-instances-upon-inactivity-d7f414390800
I have a DocumentDB cluster backed up using AWS Backup. When I restore it, it just creates a cluster with no instances and the cluster uses the default security group of the VPC.
I could not find any solution to fix this as part of the restore job. So, I am using a lambda function that uses boto3 to update the security group and add instances to the cluster.
Now is it possible to trigger the Lambda function automatically when the restore job is completed?
When your Backup job finishes, you can capture an event using EventBridge and then trigger your Lambda off of that.
This blog post from AWS covers triggering a Lambda off the back of an AWS Backup job using EventBridge. It's not the exact same scenario since they're triggering the Lambda from the Backup AND Restore jobs, but you should be able to extract the steps you need from that.
What is the easiest way to trigger an Elastic Beanstalk redeployment from within AWS?
The goal is to just reboot all instances, but in an orderly manner, according to ALB / target group rules.
Currently I am doing this locally via the EB CLI by calling eb deploy without any code changes. But rather than doing this manually on a regular basis, I want to trigger it on a schedule from within AWS.
One way would be to set up a CloudWatch rule with a schedule expression.
The rule would trigger a Lambda on your pre-defined schedule. The Lambda can be as simple as re-triggering the deployment of the existing application:
import boto3

eb = boto3.client('elasticbeanstalk')

def lambda_handler(event, context):
    response = eb.update_environment(
        ApplicationName='<your-eb-app-name>',
        EnvironmentName='<your-eb-env-name>',
        VersionLabel='<existing-label-of-application-version-to-redeploy>')
    print(response)
You could customize the Lambda to be more useful, e.g. by parametrizing it instead of hard-coding all the names required for update_environment.
The Lambda execution role also needs to be adjusted to allow the actions on EB.
The other option would be to use CodePipeline with two stages:
a Source stage (S3) where you specify the zip with the application version to deploy (its bucket must be versioned),
and a Deploy stage with the Elastic Beanstalk provider.
The pipeline would also be triggered by the CloudWatch rule on a schedule.
There is actually a feature called Instance Refresh that replaces the instances without deploying a new app version.
Triggering that via a Lambda function on a CloudWatch schedule seems to be the easiest and cleanest way for my use case. However, keep in mind that replacing instances is not the same as rebooting/redeploying, for example when it comes to managing burst credits.
This AWS blog post describes how to set up a scheduled instance refresh with AWS Lambda.
I added a CloudWatch Logs trigger to a Lambda function, which fires when a particular word is found (for example, 'application started') and then runs certain actions, like sending an SNS notification. What I need is help with the Python code to reboot an EC2 instance from inside the Lambda function. I've seen everyone start and stop EC2 instances from Lambdas, but not reboot them.
Thanks!
You can check this documentation, which explains the reboot_instances command from boto3 package.
import boto3
ec2 = boto3.client('ec2')
response = ec2.reboot_instances(InstanceIds=['string'])
So simple.
I want to build an end-to-end automated system consisting of the following steps:
Getting data from source to landing bucket AWS S3 using AWS Lambda
Running some transformation job using AWS Lambda and storing in processed bucket of AWS S3
Running Redshift copy command using AWS Lambda to push the transformed/processed data from AWS S3 to AWS Redshift
From the above points, I've completed pulling data, transforming data and running manual copy command from a Redshift using a SQL query tool.
Doubts:
I've heard AWS CloudWatch can be used to schedule/automate things but never worked on it. So, if I want to achieve the steps above in a streamlined fashion, how to go about it?
Should I use Lambda to trigger copy and insert statements? Or are there better AWS services to do the same?
Any other suggestion on other AWS Services and of the likes are most welcome.
Constraint: Want as many tasks as possible to be serverless (except for semantic layer, Redshift).
CloudWatch:
Your options here are to use either CloudWatch Alarms or Events.
With alarms, you can respond to any metric of your system (eg CPU utilization, Disk IOPS, count of Lambda invocations etc) when it crosses some threshold, and when this alarm is triggered, invoke a lambda function (or send SNS notification etc) to perform a task.
With events you can use either a cron expression or some AWS service event (eg EC2 instance state change, SNS notification etc) to then trigger another service (eg Lambda), so you could for example run some kind of clean-up operation via lambda on a regular schedule, or create a snapshot of an EBS volume when its instance is shut down.
Lambda itself is a very powerful tool, and should allow you to program a decent copy/insert function in a language you are familiar with. AWS has several GitHub repos with lots of examples too, see for example the serverless examples and many samples. There may be other services which could work for you in your specific case, but part of Lambda's power is its flexibility.
I have an AWS auto scaling group. From the instances I collect a variety of metrics and have placed some CloudWatch alarms on these metrics. In specific scenarios I would like to add a CloudWatch alarm action that terminates the entire auto scaling group. Is this possible? I am going over the AWS documentation but it does not seem to be possible.
Thanks!!
You can do this by invoking a Lambda function from your custom CloudWatch event.
You will need to write a Lambda that can use STS to assume a role that permits it to issue EC2 terminate commands.
The workflow would be:
Cloudwatch event triggers
Lambda function is invoked
Lambda function assumes role via STS
Lambda function retrieves list of instances in the ASG
Lambda function cycles through instances, issuing termination commands