How to manually rollback CloudFormation deployment of Lambda functions? - amazon-web-services

In my CodePipeline, I am creating a CloudFormation ChangeSet and then executing it to deploy Lambda functions. It doesn't seem like CloudFormation saves the old ChangeSets so that I can revert to an old version. Am I wrong?
CloudFormation does automatically roll back when it fails to create or execute the ChangeSet due to IAM permission issues and the like, but I want the ability to manually roll back in case I deploy a buggy function.

You could use rollback triggers in AWS CloudFormation to detect failed tests in your code, via Amazon CloudWatch metrics and alarms, and perform an automated rollback.
Your application code would need to be modified to perform the tests upon deployment, and then write the metric values into Amazon CloudWatch.
There are a couple of limits you'll want to be aware of:
Maximum of five (5) rollback configurations per CloudFormation stack
Monitoring time: 0 - 180 minutes (3 hours)
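If it helps, here is a minimal boto3 sketch of attaching a rollback trigger when the change set is created; the stack, change set and alarm names are placeholders you would replace with your own:
import boto3

cfn = boto3.client('cloudformation')

# The alarm should fire when your post-deployment tests report failures to CloudWatch.
cfn.create_change_set(
    StackName='my-lambda-stack',
    ChangeSetName='deploy-with-rollback-trigger',
    UsePreviousTemplate=True,
    RollbackConfiguration={
        'RollbackTriggers': [
            {
                'Arn': 'arn:aws:cloudwatch:us-east-1:123456789012:alarm:lambda-smoke-tests-failed',
                'Type': 'AWS::CloudWatch::Alarm',
            },
        ],
        # How long CloudFormation keeps watching the alarm after the deployment completes.
        'MonitoringTimeInMinutes': 15,
    },
)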

Related

How to trigger Elastic Beanstalk redeployment?

What is the easiest way to trigger an Elastic Beanstalk redeployment from within AWS?
The goal is to just reboot all instances, but in an orderly manner, according to ALB / target group rules.
Currently I am doing this locally via the EB shell by calling eb deploy without doing any code changes. But rather than doing this manually on a regular basis, I want to use CloudWatch jobs to trigger it with a schedule.
One way would be to set up a CloudWatch Events rule with a schedule expression.
The rule would trigger a Lambda function based on your pre-defined schedule. The Lambda can be as simple as only triggering the re-deployment of the existing application:
import boto3

eb = boto3.client('elasticbeanstalk')

def lambda_handler(event, context):
    # Redeploy the application version that is already running.
    response = eb.update_environment(
        ApplicationName='<your-eb-app-name>',
        EnvironmentName='<your-eb-env-name>',
        VersionLabel='<existing-label-of-application-version-to-redeploy>')
    print(response)
You could customize the Lambda to be more useful, e.g. by parametrizing it instead of hard-coding all the names required for update_environment.
The Lambda execution role also needs to be adjusted to allow the required actions on Elastic Beanstalk.
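For completeness, a rough sketch of wiring up the schedule with boto3; the rule name, schedule and Lambda ARN are made up for the example:
import boto3

events = boto3.client('events')
lambda_client = boto3.client('lambda')

# Trigger every Monday at 03:00 UTC.
rule = events.put_rule(
    Name='eb-scheduled-redeploy',
    ScheduleExpression='cron(0 3 ? * MON *)')

events.put_targets(
    Rule='eb-scheduled-redeploy',
    Targets=[{'Id': 'redeploy-lambda',
              'Arn': 'arn:aws:lambda:eu-west-1:123456789012:function:eb-redeploy'}])

# Allow the rule to invoke the function.
lambda_client.add_permission(
    FunctionName='eb-redeploy',
    StatementId='allow-scheduled-redeploy-rule',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule['RuleArn'])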
The other option would be to use CodePipeline with two stages:
An S3 source stage where you specify the zip with the application version to deploy. Its bucket must be versioned.
A deploy stage with the Elastic Beanstalk provider.
The pipeline would also be triggered by the CloudWatch rule on a schedule.
There is actually a feature called Instance Refresh that replaces the instances without deploying a new app version.
Triggering that via a Lambda function scheduled with a CloudWatch rule seems to be the easiest and cleanest way for my use case. However, keep in mind that replacing instances is not the same as rebooting / redeploying, for example when it comes to managing burst credits.
This AWS blog post describes how to set up a scheduled instance refresh with AWS Lambda.
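Since an Elastic Beanstalk environment is backed by an Auto Scaling group, one way to script that replacement from a scheduled Lambda is to start an instance refresh on the underlying group. This is only a sketch; the environment name and health percentage are assumptions:
import boto3

eb = boto3.client('elasticbeanstalk')
autoscaling = boto3.client('autoscaling')

def lambda_handler(event, context):
    # Look up the Auto Scaling group behind the (hypothetical) environment.
    resources = eb.describe_environment_resources(EnvironmentName='my-eb-env')
    asg_name = resources['EnvironmentResources']['AutoScalingGroups'][0]['Name']

    # Replace instances in batches while keeping the group healthy.
    response = autoscaling.start_instance_refresh(
        AutoScalingGroupName=asg_name,
        Preferences={'MinHealthyPercentage': 90})
    print(response['InstanceRefreshId'])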

adding CloudWatch to a stack with CloudFormation

I am currently in charge of adding CloudWatch integration to an existing CloudFormation stack.
We create the stacks through the CLI, but at the moment we add CloudWatch manually afterwards.
What I need is to automatically activate CloudWatch for instances and monitor CPU, disk and so on through CloudFormation templates.
Thanks in advance!
My suggestion is that you don't add new CloudWatch items to the existing CloudFormation stack. Instead, create a CF template with the appropriate metrics and deploy from this template for each instance you want to monitor.
From there, I suggest you create an AWS Lambda function that will receive an Instance Id as input and will deploy a CloudFormation stack against the instance. You should enable CloudTrail on your account and create a Rule to match any RunInstances event on the account and trigger the Lambda function.
Keep in mind the default limit for CloudFormation stacks is 200. You might need to request an increase depending on your use case.
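As an illustration of that flow, here is a hedged sketch of the Lambda function. It assumes a CloudTrail-backed CloudWatch Events rule for RunInstances; the template URL, parameter name and event structure are assumptions you should verify against a real test event:
import boto3

cloudformation = boto3.client('cloudformation')

def lambda_handler(event, context):
    # RunInstances can launch several instances in one call.
    items = event['detail']['responseElements']['instancesSet']['items']
    for item in items:
        instance_id = item['instanceId']
        # Deploy one monitoring stack per instance from a shared template.
        cloudformation.create_stack(
            StackName='monitoring-' + instance_id,
            TemplateURL='https://s3.amazonaws.com/my-bucket/instance-monitoring.yaml',
            Parameters=[{'ParameterKey': 'InstanceId',
                         'ParameterValue': instance_id}])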

AWS CloudFormation Rate Exceeded

I am running a multi-branch pipeline in Jenkins for CI/CD that deploys a CloudFormation stack to my AWS account. Occasionally, when multiple developers push to their branches at the same time, I receive this error on one or more branches:
com.amazonaws.services.cloudformation.model.AmazonCloudFormationException:
Rate exceeded (Service: AmazonCloudFormation; Status Code: 400; Error
Code: Throttling;
This seems to be a rate limit that Amazon has imposed on the number of requests to CloudFormation within a specified time frame.
What is the request limit of CloudFormation, and can I request a limit increase?
No - not for requests to the CloudFormation API.
Most likely the issue is that the Jenkins pipeline is requesting updates every few seconds in order to get the current status, and when you are deploying multiple stacks you will hit this error.
This is probably a bug in the CloudFormation plugin in Jenkins - you'll need to raise a ticket and ask them to implement a backoff of requests when the CloudFormation stack is taking longer than expected, so that it doesn't keep requesting the status of the stack as often.
You could also change your Jenkinsfiles to use the AWS CLI, which does a better job of managing requests to AWS during CloudFormation updates.
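If you drive the deployment from a Python step instead of the plugin, boto3 can be configured to back off on throttling and to poll far less aggressively. A sketch with a hypothetical stack name:
import boto3
from botocore.config import Config

# Retry throttled CloudFormation calls with adaptive backoff instead of failing the build.
cfn = boto3.client(
    'cloudformation',
    config=Config(retries={'max_attempts': 10, 'mode': 'adaptive'}))

# ... start the stack update here, then poll every 30 seconds rather than every few seconds.
cfn.get_waiter('stack_update_complete').wait(
    StackName='my-stack',
    WaiterConfig={'Delay': 30, 'MaxAttempts': 120})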

launching AND terminating EMR cluster with boto3 on AWS Lambda

My case is the following: I want to launch a cluster during working hours and terminate it after 18:00 and on weekends. The clusters will be used for a data science project. Years ago we would use a boring crontab for this, but these days I prefer to do this with a Lambda function.
In boto3 I can launch a cluster (thanks to Jose Quinteiro), and this post describes it very well: How to launch and configure an EMR cluster using boto
How can I terminate a cluster in boto3 in the same Lambda function as where I start it?
Using an AWS CloudWatch event/rule and an AWS Lambda function to check for idle EMR clusters, you can accomplish your goal. You get visibility at the AWS Console level and can easily enable and disable it.
With that need in mind, I have developed a small framework to achieve it using the 2nd solution mentioned above. This framework is an AWS-based solution using AWS CloudWatch and AWS Lambda with a Python script that uses Boto3 to terminate AWS EMR clusters that have been idle for a specified period of time.
You specify the maximum idle time threshold, and the CloudWatch event/rule triggers a Lambda function that queries all EMR clusters in the WAITING state. For each cluster it compares the current time with the cluster's ready time (if no EMR steps have been added so far) or with the end time of the cluster's last step. If the threshold has been exceeded, the EMR cluster is terminated, after removing termination protection if it is enabled. If not, that cluster is skipped.
AWS CloudWatch event/rule will decide how often AWS Lambda function should check for idle AWS EMR clusters.
You can disable the AWS CloudWatch event/rule at any time to disable this framework in a single click without deleting its AWS CloudFormation stack.
AWS Lambda function is using Python 3.7 as its runtime environment.
In your case, while creating the stack, you can specify your required Cron expression and maximum idle EMR cluster threshold in minutes to achieve this.
You can get the code and use it from GitHub here: https://github.com/abdullahkhawer/auto-terminate-idle-emr
Any contributions, improvements and suggestions to this solution will be highly appreciated. :)
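A condensed sketch of that idle check is below; the threshold, the cluster states, and the assumption that list_steps returns the most recent step first are simplifications of what the repository actually does:
from datetime import datetime, timezone
import boto3

emr = boto3.client('emr')
MAX_IDLE_MINUTES = 60  # hypothetical threshold

def lambda_handler(event, context):
    now = datetime.now(timezone.utc)
    for summary in emr.list_clusters(ClusterStates=['WAITING'])['Clusters']:
        cluster_id = summary['Id']
        steps = emr.list_steps(ClusterId=cluster_id)['Steps']
        if steps:
            # Idle since the most recent step finished (assumed to be first in the list).
            idle_since = steps[0]['Status']['Timeline'].get('EndDateTime', now)
        else:
            # No steps so far: idle since the cluster became ready.
            cluster = emr.describe_cluster(ClusterId=cluster_id)['Cluster']
            idle_since = cluster['Status']['Timeline']['ReadyDateTime']
        if (now - idle_since).total_seconds() > MAX_IDLE_MINUTES * 60:
            emr.set_termination_protection(JobFlowIds=[cluster_id], TerminationProtected=False)
            emr.terminate_job_flows(JobFlowIds=[cluster_id])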
You can terminate the cluster using boto3 with:
import boto3
emr_client = boto3.client('emr')
# Replace the placeholder with the id of the cluster you want to terminate.
emr_client.terminate_job_flows(JobFlowIds=['<your-cluster-id>'])
You could create a scheduled event in CloudWatch that triggers the Lambda you are using.
Scheduled events use cron expressions, so you will be able to apply the same logic. Once your function is triggered, you will need to determine from the event input whether it is a shutdown trigger, as in the sketch below.
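To keep start and stop in a single function, you can give each scheduled rule a constant JSON input and branch on it; the 'action' key and the cluster settings below are assumptions for the sketch:
import boto3

emr = boto3.client('emr')

def lambda_handler(event, context):
    if event.get('action') == 'start':
        response = emr.run_job_flow(
            Name='datascience-cluster',
            ReleaseLabel='emr-6.3.0',
            Instances={
                'InstanceGroups': [
                    {'InstanceRole': 'MASTER', 'InstanceType': 'm5.xlarge', 'InstanceCount': 1},
                    {'InstanceRole': 'CORE', 'InstanceType': 'm5.xlarge', 'InstanceCount': 2},
                ],
                'KeepJobFlowAliveWhenNoSteps': True,
            },
            JobFlowRole='EMR_EC2_DefaultRole',
            ServiceRole='EMR_DefaultRole')
        print(response['JobFlowId'])
    else:
        # Terminate everything that is still up (adjust the filter to your naming scheme).
        clusters = emr.list_clusters(ClusterStates=['STARTING', 'RUNNING', 'WAITING'])['Clusters']
        if clusters:
            emr.terminate_job_flows(JobFlowIds=[c['Id'] for c in clusters])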

AWS CodePipeline - Run at a specific time only if there are changes

AWS: Is it possible to set up a CloudWatch event to run a pipeline at a specific time, but only if there are changes in my CodeCommit repository?
I don't think this is possible out of the box.
An approach could be to have a Lambda function execute on a regular schedule (e.g. 3am).
Then have your Lambda compare the latest CodePipeline release against the latest committed revision, and trigger the pipeline accordingly.
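A sketch of that comparison, assuming a CodeCommit source action in the first stage of the pipeline; the repository, branch and pipeline names are placeholders:
import boto3

codecommit = boto3.client('codecommit')
codepipeline = boto3.client('codepipeline')

REPO, BRANCH, PIPELINE = 'my-repo', 'main', 'my-pipeline'

def lambda_handler(event, context):
    latest_commit = codecommit.get_branch(
        repositoryName=REPO, branchName=BRANCH)['branch']['commitId']

    # Revision the source action used in its last run.
    source_stage = codepipeline.get_pipeline_state(name=PIPELINE)['stageStates'][0]
    last_revision = source_stage['actionStates'][0].get('currentRevision', {}).get('revisionId')

    if latest_commit != last_revision:
        codepipeline.start_pipeline_execution(name=PIPELINE)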