Roll back a build using AWS CodePipeline

What is the best mechanism to implement to roll back a deployment that is orchestrated using CodePipeline? The source comes from an S3 bucket, and we are looking to see if there is a one-click rollback mechanism that requires no manual intervention.

CodePipeline doesn't currently support rollback. If you are using CodeDeploy as the deployment action, you can set up rollback on alarm or on failed deployment on the CodeDeploy deployment group. The CloudFormation template to enable auto-rollback for a CodeDeploy deployment group looks like:
Type: "AWS::CodeDeploy::DeploymentGroup"
Properties:
...
AutoRollbackConfiguration:
Enabled: true
Events:
- "DEPLOYMENT_FAILURE"
- "DEPLOYMENT_STOP_ON_ALARM"
AlarmConfiguration:
Alarms:
- CloudWatchAlarm1
- CloudWatchAlarm2
Enabled: true
You can find more information about it under Deployments and Redeploy in the CodeDeploy documentation.

If we are not using AWS CodeDeploy, we can always fall back to the manual way of rolling back: redeploy the previous stable build or tag.
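Since the source here is an S3 bucket, one scriptable approach to a manual rollback is to copy a previous version of the source artifact back on top of the current object, so the pipeline's S3 source action picks it up again. A minimal boto3 sketch, assuming versioning is enabled on the bucket (the bucket, key, and version id below are placeholders):

import boto3

s3 = boto3.client("s3")
# Copy a known-good previous version of the source artifact over the
# current one; the pipeline's S3 source action then picks up the change.
s3.copy_object(
    Bucket="my-pipeline-source-bucket",  # placeholder
    Key="release/app.zip",               # placeholder
    CopySource={
        "Bucket": "my-pipeline-source-bucket",
        "Key": "release/app.zip",
        "VersionId": "PREVIOUS_STABLE_VERSION_ID",  # placeholder
    },
)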

Related

AWS EventBridge rule doesn't trigger: Error. NotAuthorizedForSourceException. Not authorized for the source

I'm creating a rule that should fire every time there is a change in status in a SageMaker batch transform job.
I'm using the Serverless Framework, but to simplify things even further, here's what I did:
The rule, exported from AWS console:
AWSTemplateFormatVersion: '2010-09-09'
Description: >-
  CloudFormation template for EventBridge rule
  'sagemaker-transform-status-to-CWL'
Resources:
  EventRule0:
    Type: AWS::Events::Rule
    Properties:
      EventBusName: default
      EventPattern:
        source:
          - aws.sagemaker
        detail-type:
          - SageMaker Training Job State Change
      Name: sagemaker-transform-status-to-CWL
      State: ENABLED
      Targets:
        - Id: XXX
          Arn: >-
            arn:aws:logs:us-east-1:XXX:log-group:/aws/events/sagemaker-notifications
Eventually I want this to trigger a Step Function or a Lambda function, but for now I am configuring the target to be CloudWatch Logs, with the log group 'sagemaker-notifications'.
I expect that every time I run a batch transform job in SageMaker, the rule will fire and the log will show up in CloudWatch.
But I'm not getting any logs, so when I tried to PutEvents manually to test it, I was getting this:
Error. NotAuthorizedForSourceException. Not authorized for the source.
It's probably an issue with roles, but I'm not sure which kind of role to configure, where and who should assume it.
I tried going through AWS tutorials, adding permissions to the default event bus, and using the Serverless Framework, with no luck.
See some sample event patterns here - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-events-rule.html#aws-resource-events-rule--examples
When you call PutEvents yourself, your source should be a custom source, and cannot contain aws. (Reference - https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-events.html). Note also that the rule above matches the detail-type "SageMaker Training Job State Change", while batch transform jobs emit "SageMaker Transform Job State Change" events.
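To test the rule manually, the PutEvents call needs a custom source. A minimal boto3 sketch (the source and detail values here are illustrative, not required names):

import boto3

events = boto3.client("events")
response = events.put_events(
    Entries=[
        {
            "Source": "my.app",  # custom source; must not start with "aws."
            "DetailType": "test-event",
            "Detail": '{"status": "Completed"}',
            "EventBusName": "default",
        }
    ]
)
print(response["FailedEntryCount"])  # 0 means the event was accepted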

Is there any way to stop AWS from starting CodePipeline automatically if I deploy it via CloudFormation?

If you create a CodePipeline via CloudFormation, it starts automatically. That can be a problem, because the pipeline can rewrite the same stack...
Is there any way to disable this behaviour?
Thanks.
I had the same issue; I don't want a pipeline launch on pipeline creation (which is the default behaviour).
The best solution I found is:
- Create an EventBridge rule which catches the pipeline execution started on pipeline creation
- Stop that pipeline execution from the Lambda the rule triggers (see the sketch after the rule below)
The rule looks like this:
{
  "source": ["aws.codepipeline"],
  "detail-type": ["CodePipeline Pipeline Execution State Change"],
  "detail": {
    "state": ["STARTED"],
    "execution-trigger": {
      "trigger-type": ["CreatePipeline"]
    }
  }
}
It works fine.
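For reference, a minimal sketch of the Lambda the rule could trigger, assuming the standard CodePipeline EventBridge event shape (the reason text is arbitrary):

import boto3

codepipeline = boto3.client("codepipeline")

def handler(event, context):
    # The EventBridge event detail carries the pipeline name and execution id.
    detail = event["detail"]
    codepipeline.stop_pipeline_execution(
        pipelineName=detail["pipeline"],
        pipelineExecutionId=detail["execution-id"],
        abandon=True,
        reason="Skipping the automatic run triggered by pipeline creation",
    )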
Sadly, there seems to be no way to avoid this. The docs clearly state that a newly created pipeline immediately starts running:
Now that you've created your pipeline, you can view it in the console. The pipeline starts to run after you create it.
The initial run will always happen. Subsequent runs depend on your source action. For example, if you use CodeCommit as your source, you can disable the CloudWatch Events rule that triggers the pipeline.
Thus, if you want to use CodePipeline in your project, you have to design it so that the immediate start does not cause any issues.
You can disable the EventBridge rule that automatically starts your pipeline.
Go to Amazon EventBridge -> Rules and disable the rule that notifies CodePipeline.
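The same thing can be scripted; a minimal boto3 sketch (the rule name is a placeholder for whatever the console shows for your pipeline):

import boto3

events = boto3.client("events")
# Disable the rule that starts the pipeline on source changes.
events.disable_rule(Name="codepipeline-my-pipeline-source-rule")  # placeholder name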
Further to Marcin's comment, it would seem there are two approaches you can take which would limit the run of the pipeline.
Create a disabled stage transition or a Manual Approval action directly after the Source stage. This would prevent the pipeline from executing any action aside from getting the source, which would have no impact or capability to rewrite anything.
Alternatively, if your source stage pulls from a repository, you can opt to handle the pipeline triggers yourself by disabling the PollForSourceChanges parameter in your CloudFormation template:
Pipeline:
  Type: AWS::CodePipeline::Pipeline
  Properties:
    Name: *NAME*
    RoleArn: *IAMROLE*
    Stages:
      - Name: Source
        Actions:
          - Name: CodeCommitSourceAction
            RunOrder: 1
            ActionTypeId:
              Category: Source
              Provider: CodeCommit
              Owner: AWS
              Version: '1'
            OutputArtifacts:
              - Name: Source
            Configuration:
              RepositoryName: *REPOSITORYNAME*
              BranchName: *BRANCH*
              PollForSourceChanges: "false" # prevents CodePipeline from polling the repository for changes
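With polling disabled, something else has to start the pipeline explicitly. A minimal boto3 sketch (the pipeline name is a placeholder); you would call this from whatever mechanism you choose to own the trigger:

import boto3

codepipeline = boto3.client("codepipeline")
# Kick off a run explicitly, e.g. from a webhook handler or a scheduled job.
codepipeline.start_pipeline_execution(name="my-pipeline")  # placeholder name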
So the correct answer is...
1. Commit your code before you deploy for the first time.
2. Deploy only the pipeline.
3. Let CodePipeline do its thing.
In 99% of cases it will finish sooner than your machine would.

How to automate API Gateway deployment via SAM deploy?

I changed AWS::ApiGateway::Method properties in an AWS SAM template and deployed it, but I noticed that the change was not reflected until I deployed the API manually from the AWS Management Console. I think this is because I didn't change the AWS::ApiGateway::Deployment and AWS::ApiGateway::Stage resources in the template. (The deployment history of the AWS::ApiGateway::Stage was not updated.)
How can I get the change reflected when I trigger sam deploy?
No, you have to manually deploy the API from the console :) it's a limitation at the moment.

How can my CodeBuild in a CodePipeline resolve resources created by the previous CloudFormation step?

I have set up my CodePipeline something like:
- Source: GitHub
- CodeBuild: Package the SAM application (CloudFormation resources like the DB)
- Deploy CloudFormation: Creates & executes the changesets
- CodeBuild: I want to run DB migrations for the DB created by CloudFormation ... but how do I get it? CodeBuild does not support parameters from my pipeline.
Maybe I am creating my pipeline wrong?
The CloudFormation action can output the stack outputs to an artifact, but currently the CodeBuild action in CodePipeline can't accept both a code artifact and an artifact with the CloudFormation outputs.
For now, I'd call aws cloudformation describe-stacks from the CLI inside your build script to retrieve the DB information from your CloudFormation stacks.
Maybe in step 3 you can set up your CloudFormation this way:
1- Create the database and export the database name as an output:
Outputs:
  DataBaseName:
    Description: "Name of the Database"
    Value: !Ref DataBaseName
2- In CodeBuild, use Boto3 to call DescribeStacks and get the output (the name of the database and other information about it); now you can take advantage of Python in your CodeBuild step and start the migration using Boto3:
import boto3

cloudformation = boto3.client("cloudformation")
# "my-sam-stack" is a placeholder for the stack the pipeline deployed.
response = cloudformation.describe_stacks(StackName="my-sam-stack")
outputs = response["Stacks"][0]["Outputs"]
# Find the exported database name among the stack outputs.
db_name = next(o["OutputValue"] for o in outputs if o["OutputKey"] == "DataBaseName")

Deploy CloudFormation and add trigger to Lambda function when done

For an automated deployment workflow, I want to start a CloudFormation deployment and trigger a Lambda function when it is done.
I know I can add a CloudWatch event that triggers whenever an event occurs on CloudFormation in my account. But I do not want to trigger the Lambda on every CloudFormation template being deployed, only on templates where I decide at deployment time that the Lambda should be triggered.
I could add code to the Lambda function, making it decide for itself whether it was supposed to be triggered. That would probably work, but I wonder if there is a better, more direct solution?
Ideas?
Custom resources enable you to write custom provisioning logic in templates that AWS CloudFormation runs anytime you create, update, or delete stacks.
For example, a custom Lambda resource to enable RDS logs after the RDS DB is created:
Resources:
  EnableLogs:
    Type: Custom::EnableLogs
    Version: '1.0'
    Properties:
      ServiceToken: arn:aws:lambda:us-east-1:acc:function:rds-EnableRDSLogs-1O6XLL6LWNR5Z
      DBInstanceIdentifier: mydb
See my Python gist here.
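For illustration, a minimal handler skeleton (not the gist itself) that satisfies the custom resource contract; CloudFormation blocks the stack operation until the function PUTs a response to the pre-signed ResponseURL it passes in the event:

import json
import urllib.request

def handler(event, context):
    # Custom provisioning work goes here, e.g. enabling logs for
    # event["ResourceProperties"]["DBInstanceIdentifier"].
    body = json.dumps({
        "Status": "SUCCESS",
        "Reason": "OK",
        "PhysicalResourceId": event.get("PhysicalResourceId", "EnableLogs"),
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
    }).encode()
    # Signal the result back to CloudFormation.
    req = urllib.request.Request(event["ResponseURL"], data=body, method="PUT")
    req.add_header("Content-Type", "")
    urllib.request.urlopen(req)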