How to trigger Elastic Beanstalk redeployment? - amazon-web-services

What is the easiest way to trigger an Elastic Beanstalk redeployment from within AWS?
The goal is to just reboot all instances, but in an orderly manner, according to ALB / target group rules.
Currently I am doing this locally via the EB CLI by calling eb deploy without any code changes. But rather than doing this manually on a regular basis, I want to use a scheduled CloudWatch rule to trigger it.

One way would be to set up a CloudWatch Events rule with a schedule expression.
The rule would trigger a Lambda function based on your pre-defined schedule. The Lambda can be as simple as triggering a redeployment of the existing application version:
import boto3

eb = boto3.client('elasticbeanstalk')

def lambda_handler(event, context):
    # Redeploy the application version that is already deployed.
    response = eb.update_environment(
        ApplicationName='<your-eb-app-name>',
        EnvironmentName='<your-eb-env-name>',
        VersionLabel='<existing-label-of-application-version-to-redeploy>')
    print(response)
You could customize the Lambda to be more useful, e.g. by parametrizing it instead of hard-coding all the names required for update_environment.
The Lambda execution role also needs to be adjusted to allow the Elastic Beanstalk actions (e.g. elasticbeanstalk:UpdateEnvironment).
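A minimal sketch of such a parametrized version, assuming the names are passed in via the rule's input (the event keys below are my assumptions, not fixed names):

import boto3

eb = boto3.client('elasticbeanstalk')

def lambda_handler(event, context):
    # Application, environment and version label come from the event payload
    # (e.g. the rule's constant JSON input) instead of being hard-coded.
    # The key names here are assumptions.
    response = eb.update_environment(
        ApplicationName=event['application_name'],
        EnvironmentName=event['environment_name'],
        VersionLabel=event['version_label'])
    print(response)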
The other option would be to use CodePipeline with two stages:
An S3 source stage where you specify the zip with the application version to deploy. The bucket must have versioning enabled.
A deploy stage with the Elastic Beanstalk provider.
The pipeline would also be triggered by the CloudWatch rule on a schedule.
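If you go that route, the schedule itself is just a CloudWatch Events rule with the pipeline as its target. A rough sketch with boto3 (rule name, pipeline ARN and role ARN are placeholders):

import boto3

events = boto3.client('events')

# All names and ARNs below are placeholders.
events.put_rule(
    Name='nightly-eb-redeploy',
    ScheduleExpression='cron(0 3 * * ? *)')
events.put_targets(
    Rule='nightly-eb-redeploy',
    Targets=[{
        'Id': 'start-pipeline',
        'Arn': 'arn:aws:codepipeline:<region>:<account-id>:<pipeline-name>',
        'RoleArn': 'arn:aws:iam::<account-id>:role/<events-start-pipeline-role>'}])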

There is actually a feature called Instance Refresh that replaces the instances without deploying a new app version.
Triggering that via a Lambda function scheduled with CloudWatch seems to be the easiest and cleanest way for my use case. However, keep in mind that replacing instances is not the same as rebooting / redeploying, for example when it comes to managing burst credits.
This AWS blog post describes how to set up a scheduled instance refresh with AWS Lambda.
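As a rough illustration of that setup (not the blog post's exact code), such a Lambda could start an instance refresh on the Auto Scaling group behind the Elastic Beanstalk environment; the group name and preferences below are assumptions:

import boto3

autoscaling = boto3.client('autoscaling')

def lambda_handler(event, context):
    # Rolling replacement of the instances in the EB environment's
    # Auto Scaling group (group name and preferences are placeholders).
    response = autoscaling.start_instance_refresh(
        AutoScalingGroupName='<your-eb-environments-asg-name>',
        Preferences={'MinHealthyPercentage': 90, 'InstanceWarmup': 120})
    print(response['InstanceRefreshId'])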

Related

How to use an AWS restore job to trigger a Lambda function

I have a DocumentDB cluster backed up using AWS Backup. When I restore it, it just creates a cluster with no instances and the cluster uses the default security group of the VPC.
I could not find any solution to fix this as part of the restore job. So, I am using a lambda function that uses boto3 to update the security group and add instances to the cluster.
Now is it possible to trigger the Lambda function automatically when the restore job is completed?
When your Backup job finishes, you can capture an event using EventBridge and then trigger your Lambda off of that.
This blog post from AWS covers triggering a Lambda off the back of an AWS Backup job using EventBridge. It's not the exact same scenario since they're triggering the Lambda from the Backup AND Restore jobs, but you should be able to extract the steps you need from that.
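A sketch of the EventBridge side of that setup; the detail-type and status values are assumptions to be verified against the AWS Backup documentation or the blog post:

import json
import boto3

events = boto3.client('events')

# Match completed AWS Backup restore jobs; the field values are assumptions.
pattern = {
    'source': ['aws.backup'],
    'detail-type': ['Restore Job State Change'],
    'detail': {'status': ['COMPLETED']}}

events.put_rule(Name='docdb-restore-completed', EventPattern=json.dumps(pattern))
events.put_targets(
    Rule='docdb-restore-completed',
    Targets=[{'Id': 'fixup-lambda', 'Arn': '<your-lambda-function-arn>'}])

You would also need to grant EventBridge permission to invoke the function (lambda add-permission), and the function itself would then do the security group / instance fix-up described in the question.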

Lambda function to stop/start EC2 without loss of data

I have an EC2 instance with 300 GB of data (EBS volumes attached). I would like to develop a Lambda function to start/stop this EC2 instance during non-business hours to save cloud cost. Can anyone help me by sharing any sample code/function?
I think the scenario could be addressed without Lambda:
a cron expression on a CloudWatch Events rule,
with targets of SSM Automation,
and the documents AWS-StartEC2Instance and AWS-StopEC2Instance.
Note that CloudWatch Events has a built-in target for stopping an instance, but there is no target for starting one. Thus, SSM Automation is proposed.
But if Lambda is a requirement, then instead of SSM Automation, just use a Lambda function with CloudWatch Events.
You can use a CloudWatch / EventBridge rule along with a cron expression to define a schedule on which a Lambda function runs. Within that Lambda function, you can then turn off your EC2 instance easily.
import boto3
import logging

logger = logging.getLogger()

def turn_off_instance(instance_ids, region):
    # Stop the instances; data on the attached EBS volumes is preserved.
    ec2 = boto3.client('ec2', region_name=region)
    ec2.stop_instances(InstanceIds=instance_ids)
    logger.info(f'Instances {instance_ids} stopped')
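The start half of the schedule is symmetrical, reusing the imports and logger from the snippet above:

def turn_on_instance(instance_ids, region):
    # Start the instances again at the beginning of business hours.
    ec2 = boto3.client('ec2', region_name=region)
    ec2.start_instances(InstanceIds=instance_ids)
    logger.info(f'Instances {instance_ids} started')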
These two guides do something very similar:
EventBridge:
https://medium.com/geekculture/enable-or-disable-aws-alarms-at-given-intervals-d2f867aa9aa4
Lambda code:
https://medium.com/geekculture/terraform-setup-for-automatically-turning-off-ec2-instances-upon-inactivity-d7f414390800

How to set up a CloudWatch SQL monitor?

I have a view on a PostgreSQL RDS instance that lists any ongoing deadlocks. Ideally, there are no deadlocks in the database, causing the view to show nothing, but on rare occasions, there are.
How would I set up an alarm in CloudWatch to query this view and raise an alarm if any records are returned?
I found a nice script on GitHub specifically for this:
A Serverless MySQL RDS Data Collection script to push Custom Metrics to CloudWatch on AWS
Basically, there are 2 main possibilities to publish any custom metrics on CloudWatch:
Via API
You can run it on a schedule on an EC2 instance (AWS example) or as a Lambda function (a great manual with code examples).
With CloudWatch agent
Here is a good example: Monitor your Microsoft SQL Server using custom metrics with Amazon CloudWatch and AWS Systems Manager.
Finally, you should set up CloudWatch alarms with metric math and relevant thresholds.
It is not possible to configure Amazon CloudWatch to look inside an Amazon RDS database.
You will need some code running somewhere that regularly runs a query on the database and sends a custom metric to Amazon CloudWatch.
For example, you could trigger an AWS Lambda function, or use cron on an Amazon EC2 instance to trigger a script.
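Putting the two answers together, a minimal sketch of such a scheduled Lambda for the PostgreSQL deadlock view could look like this. The view name, connection handling and metric namespace are all assumptions, and psycopg2 would need to be packaged with the function (e.g. as a layer):

import boto3
import psycopg2  # must be packaged with the function, e.g. as a Lambda layer

def lambda_handler(event, context):
    # Connection details are placeholders; in practice read them from
    # environment variables or Secrets Manager.
    conn = psycopg2.connect(host='<rds-endpoint>', dbname='<database>',
                            user='<user>', password='<password>')
    with conn, conn.cursor() as cur:
        cur.execute('SELECT count(*) FROM <your_deadlock_view>')
        deadlocks = cur.fetchone()[0]

    # Publish the row count as a custom metric; alarm on it being > 0.
    boto3.client('cloudwatch').put_metric_data(
        Namespace='Custom/RDS',
        MetricData=[{'MetricName': 'OngoingDeadlocks',
                     'Value': deadlocks,
                     'Unit': 'Count'}])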

Idea and guidelines on an end-to-end AWS solution

I want to build an end-to-end automated system that consists of the following steps:
Getting data from source to landing bucket AWS S3 using AWS Lambda
Running some transformation job using AWS Lambda and storing in processed bucket of AWS S3
Running Redshift copy command using AWS Lambda to push the transformed/processed data from AWS S3 to AWS Redshift
From the above points, I've completed pulling data, transforming data, and running a manual copy command on Redshift using a SQL query tool.
Doubts:
I've heard AWS CloudWatch can be used to schedule/automate things but have never worked with it. So, if I want to achieve the steps above in a streamlined fashion, how do I go about it?
Should I use Lambda to trigger copy and insert statements? Or are there better AWS services to do the same?
Any other suggestion on other AWS Services and of the likes are most welcome.
Constraint: Want as many tasks as possible to be serverless (except for semantic layer, Redshift).
CloudWatch:
Your options here are to use either CloudWatch alarms or events.
With alarms, you can respond to any metric of your system (e.g. CPU utilization, disk IOPS, count of Lambda invocations, etc.) when it crosses some threshold, and when this alarm is triggered, invoke a Lambda function (or send an SNS notification, etc.) to perform a task.
With events you can use either a cron expression or some AWS service event (e.g. EC2 instance state change, SNS notification, etc.) to then trigger another service (e.g. Lambda), so you could for example run some kind of clean-up operation via Lambda on a regular schedule, or create a snapshot of an EBS volume when its instance is shut down.
Lambda itself is a very powerful tool and should allow you to program a decent copy/insert function in a language you are familiar with. AWS has several GitHub repos with lots of examples too; see, for example, the serverless examples and other samples. There may be other services that could work for you in your specific case, but part of Lambda's power is its flexibility.
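For the copy step specifically, a Lambda using the Redshift Data API is one serverless option. A rough sketch, where the cluster, database, user, table and bucket names are all placeholders:

import boto3

redshift_data = boto3.client('redshift-data')

def lambda_handler(event, context):
    # COPY the processed objects from S3 into Redshift; every identifier
    # below is a placeholder.
    sql = ("COPY <schema>.<table> "
           "FROM 's3://<processed-bucket>/<prefix>/' "
           "IAM_ROLE 'arn:aws:iam::<account-id>:role/<redshift-copy-role>' "
           "FORMAT AS PARQUET;")
    response = redshift_data.execute_statement(
        ClusterIdentifier='<cluster-id>',
        Database='<database>',
        DbUser='<db-user>',
        Sql=sql)
    return response['Id']

The same scheduled rule mechanism described above, or an S3 event notification on the processed bucket, can then trigger this function.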

AWS application consistent snapshots of EC2 instances

I'm currently setting up a small Lambda to take snapshots of all the important volumes of our EC2 instances. To guarantee application consistency I need to trigger actions inside the instances: One to quiesce the application before the snapshot and one to wake it up again after the snapshot is done. So far I have no clue how to do this.
I've thought about using SNS or SQS to notify the instances about start and stop of the snapshot, but that has several problems:
I'll need to install (and develop) a custom listener inside the instances.
I'll not get feedback if the quiescing/wake-up is done.
So here's my question: How can I trigger an action inside an instance from a Lambda?
But maybe I'm approaching this from the wrong direction. Is there really no simple backup solution? I know Azure has a snapshot-based backup service that can do application-consistent backups. Did I just miss an equivalent AWS service?
Edit 1:
OK, it looks like the 'Run Command' feature of AWS Systems Manager is what I really need. It allows me to run scripts, Ansible playbooks and more inside an EC2 instance. When I've got a working solution I'll post the necessary steps.
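For reference, a minimal sketch of what such a solution could look like with Run Command (this is not the author's posted steps; the quiesce/resume script paths are placeholders):

import boto3

ssm = boto3.client('ssm')
ec2 = boto3.client('ec2')

def snapshot_with_quiesce(instance_id, volume_id):
    # 1. Quiesce the application via Run Command (script path is a placeholder).
    cmd = ssm.send_command(
        InstanceIds=[instance_id],
        DocumentName='AWS-RunShellScript',
        Parameters={'commands': ['/opt/app/quiesce.sh']})
    ssm.get_waiter('command_executed').wait(
        CommandId=cmd['Command']['CommandId'], InstanceId=instance_id)

    # 2. Take the EBS snapshot while the application is quiesced.
    ec2.create_snapshot(VolumeId=volume_id,
                        Description='Application-consistent snapshot')

    # 3. Wake the application up again.
    ssm.send_command(
        InstanceIds=[instance_id],
        DocumentName='AWS-RunShellScript',
        Parameters={'commands': ['/opt/app/resume.sh']})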
You can trigger a Lambda function on demand:
Using AWS Lambda with Amazon API Gateway (On-Demand Over HTTPS)
You can invoke AWS Lambda functions over HTTPS. You can do this by defining a custom REST API and endpoint using Amazon API Gateway, and then mapping individual methods, such as GET and PUT, to specific Lambda functions. Alternatively, you could add a special method named ANY to map all supported methods (GET, POST, PATCH, DELETE) to your Lambda function. When you send an HTTPS request to the API endpoint, the Amazon API Gateway service invokes the corresponding Lambda function. For more information about the ANY method, see Step 3: Create a Simple Microservice using Lambda and API Gateway.