How can I set up manual approval to proceed in AWS CodePipeline - amazon-web-services

I wish to create the following AWS CodePipeline process:
1. Developers push code to GitHub
2. CodeDeploy deploys the code to the Test environment EC2 instance
3. A test engineer tests the web app on EC2
4. The test engineer manually approves this revision
5. CodeDeploy deploys the code to the Live environment EC2 instance
My problem is with steps 4 and 5: how can I make CodePipeline wait for manual approval (step 4) and then, if approved, automatically proceed to deploying the next stage (step 5)?
Thanks

To address your problem with steps 4 and 5, this can be accomplished in two ways:
1) AWS has added the ability to add a manual approval action via the console:
https://aws.amazon.com/about-aws/whats-new/2016/07/aws-codepipeline-adds-manual-approval-actions/
Open the existing CodePipeline
Edit the CodePipeline
Select the pencil icon on the stage you want the manual approval in
Then add an action of category Approval (approval type: Manual approval)
2) A manual approval can also be added as an action in a CodePipeline CloudFormation template, like this:
- InputArtifacts: []
  Name: !Join ["", [!Ref GitHubRepository, "-prd-approval"]]
  ActionTypeId:
    Category: Approval
    Owner: AWS
    Version: '1'
    Provider: Manual
  OutputArtifacts: []
  Configuration:
    NotificationArn: !Ref ManualApprovalNotification
    ExternalEntityLink: OutputTestUrl
  RunOrder: 3
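Once the pipeline reaches the Manual approval action it simply pauses and waits; as soon as the approval is recorded, the pipeline continues to the next action or stage on its own. The approval can be given in the console or through the API. Below is a minimal boto3 sketch of the API route; the pipeline, stage, and action names are placeholders for illustration:
import boto3

codepipeline = boto3.client("codepipeline")

# Placeholder names -- substitute your own pipeline, stage, and approval action.
PIPELINE = "my-pipeline"
STAGE = "Staging"
ACTION = "my-repo-prd-approval"

# Look up the token of the currently pending approval action.
state = codepipeline.get_pipeline_state(name=PIPELINE)
token = None
for stage in state["stageStates"]:
    if stage["stageName"] != STAGE:
        continue
    for action in stage.get("actionStates", []):
        if action["actionName"] == ACTION and "latestExecution" in action:
            token = action["latestExecution"]["token"]

# Approve it; CodePipeline then proceeds to the next stage automatically.
codepipeline.put_approval_result(
    pipelineName=PIPELINE,
    stageName=STAGE,
    actionName=ACTION,
    result={"summary": "Tested on the Test environment, looks good", "status": "Approved"},
    token=token,
)
The same two calls are available in the AWS CLI as get-pipeline-state and put-approval-result.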

Related

Add Snyk Action to CodePipeline with CloudFormation

I wanted to spin up a CodePipeline on AWS with a Snyk Scan action through CloudFormation. The official documentation on how to do this is a little light on details and seems to be missing key bits of information, so I was hoping someone could shed some light on this issue. According to the Snyk action reference, there are only a few variables that need to be configured, so I followed along and set up my CodePipeline CF template with the following configuration:
- Name: Scan
  Actions:
    - Name: Scan
      InputArtifacts:
        - Name: "source"
      ActionTypeId:
        Category: Invoke
        Owner: ThirdParty
        Version: 1
        Provider: Snyk
      OutputArtifacts:
        - Name: "source-scan"
However, it is unclear how CodePipeline authenticates with Snyk with just this configuration. Sure enough, when I tried to spin up this template, I got the following error through the CloudFormation console:
Action configuration for action 'Scan' is missing required configuration 'ClientId'
I'm not exactly sure what the ClientId is in this case, but I assume it is the Snyk ORG id. So, I added ClientId under the Configuration section of the template. When I spun the new template up, I got the following error:
Action configuration for action 'Scan' is missing required configuration 'ClientToken'
Again, there is no documentation (that I could find) on the AWS side for what this ClientToken is, but I assume it is a Snyk API token, so I went ahead and added that. My final template looks like this:
- Name: Scan
  Actions:
    - Name: Scan
      InputArtifacts:
        - Name: "source"
      ActionTypeId:
        Category: Invoke
        Owner: ThirdParty
        Version: 1
        Provider: Snyk
      OutputArtifacts:
        - Name: "source-scan"
      Configuration:
        ClientId: <id>
        ClientToken: <token>
The CloudFormation now goes up fine and without error, but the CodePipeline itself halts on the Scan stage, stalls for ten or so minutes, and then outputs an error that doesn't give much information:
There was an error in the scan execution.
I assume I am not authenticating with Snyk correctly. I can set up the scan fine through the console, but that includes an OAuth page where I enter my username/password before Snyk can authorize AWS. Anyway, I need to be able to set up the scan through CloudFormation as I will not have console access for the project I am working on.
I am looking for a solution and/or some documentation that covers this use case. If anyone could point me in the right direction, I would be much obliged.

Is there any way to stop AWS from starting CodePipeline automatically if I deploy it via CloudFormation?

If you create a CodePipeline via CloudFormation, it starts automatically, which can be a problem because the pipeline can rewrite the same stack...
Is there any way to disable this behaviour?
Thanks.
I had the same issue; I don't want a pipeline launch on pipeline creation (which is the default behaviour).
The best solution I found is:
Create an EventBridge rule which catches the pipeline execution started on pipeline creation
Stop that pipeline execution from the Lambda the rule triggers
The rule looks like this:
{
  "source": ["aws.codepipeline"],
  "detail-type": ["CodePipeline Pipeline Execution State Change"],
  "detail": {
    "state": ["STARTED"],
    "execution-trigger": {
      "trigger-type": ["CreatePipeline"]
    }
  }
}
It works fine
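For reference, a minimal sketch of the Lambda that the rule could trigger (assuming Python and an execution role allowed to call codepipeline:StopPipelineExecution; this is not the poster's exact function):
import boto3

codepipeline = boto3.client("codepipeline")

def handler(event, context):
    # The "CodePipeline Pipeline Execution State Change" event carries the
    # pipeline name and execution id in its detail section.
    detail = event["detail"]
    codepipeline.stop_pipeline_execution(
        pipelineName=detail["pipeline"],
        pipelineExecutionId=detail["execution-id"],
        abandon=True,  # do not wait for in-progress actions to finish
        reason="Skipping the execution started automatically on pipeline creation",
    )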
Sadly, there seems to be no way to do this. The docs clearly state that a newly created pipeline immediately starts running:
Now that you've created your pipeline, you can view it in the console. The pipeline starts to run after you create it.
The initial run will always happen. Subsequent runs depend on your source action. For example, if you use CodeCommit as your source, you can disable the CloudWatch Events rule that triggers the pipeline.
Thus, if you want to use CodePipeline in your project, you have to design it so that the immediate start does not cause any issues.
You can disable the Event rule so that it does not automatically start your pipeline.
Go to Amazon EventBridge -> Rules and disable the rule that notifies the CodePipeline.
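If you would rather do that programmatically than in the console, a small boto3 sketch (the rule name is a placeholder) would be:
import boto3

events = boto3.client("events")

# Placeholder name -- use the rule that starts your pipeline on source changes.
events.disable_rule(Name="codepipeline-my-repo-main-rule")
# Re-enable automatic starts later with events.enable_rule(Name=...).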
Further to Marcin's comment, it would seem there are two approaches you can take which would limit the run of the pipeline.
Create a disabled stage transition or a Manual Approval stage directly after the Source stage. This prevents the pipeline from executing any action other than getting the source, which has no impact or capability to rewrite anything (a sketch of toggling the transition through the API follows the template below).
Alternatively, if your source stage pulls from a repository, you can opt to handle the pipeline triggers yourself by disabling the PollForSourceChanges parameter in your CloudFormation template:
Pipeline:
  Type: AWS::CodePipeline::Pipeline
  Properties:
    Name: *NAME*
    RoleArn: *IAMROLE*
    Stages:
      - Name: Source
        Actions:
          - Name: CodeCommitSourceAction
            RunOrder: 1
            ActionTypeId:
              Category: Source
              Provider: CodeCommit
              Owner: AWS
              Version: '1'
            OutputArtifacts:
              - Name: Source
            Configuration:
              RepositoryName: *REPOSITORYNAME*
              BranchName: *BRANCH*
              PollForSourceChanges: "false" # prevents CodePipeline polling the repository for changes
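For the first approach, the inbound transition into the stage that follows Source can also be toggled through the API rather than the console; a minimal boto3 sketch (pipeline and stage names are placeholders) looks like this:
import boto3

codepipeline = boto3.client("codepipeline")

# Keep the pipeline from moving past Source until the transition is re-enabled.
codepipeline.disable_stage_transition(
    pipelineName="my-pipeline",   # placeholder
    stageName="Deploy",           # placeholder: the stage directly after Source
    transitionType="Inbound",
    reason="Pipeline just created; hold deployments until reviewed",
)

# Later, to let executions through again:
# codepipeline.enable_stage_transition(
#     pipelineName="my-pipeline", stageName="Deploy", transitionType="Inbound")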
So the correct answer is...
Commit your code before you deploy for the first time.
Deploy only the pipeline.
Let CodePipeline do its thing.
In 99% of cases it will finish sooner than your machine would.

Issues Creating Environments For AWS Lambda Service In CodeStar And CodePipeline

I used AWS CodeStar to create a new application with the "Express.js Aws Lambda Webservice" CodeStar template. This was great because it set me up with a simple CI/CD pipeline using AWS CodePipeline. By default the pipeline has 3 steps: grabbing the source code from a git repo, running the build step, and then deploying to the "dev" environment.
My issue is that I can't set it up so that my pipeline has multiple environments: dev, staging, and prod.
My current deploy step has 2 actions: GenerateChangeSet and ExecuteChangeSet. Here are the configurations for the actions in the original dev environment deploy step, which work great:
I've created a new deploy stage at the end of my pipeline to deploy to staging, but honestly I'm not sure how to change the configurations. I'm thinking ultimately I want to be able to go into the AWS Lambda section of the AWS console and see three independent Lambda functions: binance-bot-dev, binance-bot-staging, and binance-bot-prod. Each of these I could then set up as CloudWatch scheduled events or expose with their own API Gateway URL.
This is the configuration that I tried to use for a new deployment stage:
I'm really not sure if this configuration is correct and what exactly I should change in order to deploy in the way I want.
For example, should I be changing "Stack name", or should I keep that as "awscodestar-binance-bot-lambda" or change it for each environment as I am here?
Also, I'm pointing to a different template.yml file in the project. The original template.yml looks like this:
AWSTemplateFormatVersion: 2010-09-09
Transform:
  - AWS::Serverless-2016-10-31
  - AWS::CodeStar
Parameters:
  ProjectId:
    Type: String
    Description: AWS CodeStar projectID used to associate new resources to team members
Resources:
  Dev:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs4.3
      Environment:
        Variables:
          NODE_ENV: dev
      Role:
        Fn::ImportValue:
          !Join ['-', [!Ref 'ProjectId', !Ref 'AWS::Region', 'LambdaTrustRole']]
      Events:
        GetEvent:
          Type: Api
          Properties:
            Path: /
            Method: get
        PostEvent:
          Type: Api
          Properties:
            Path: /
            Method: post
For template.staging.yml I use the exact same config, except I changed "Dev:" to "Staging:" under "Resources", and I also changed the value of the NODE_ENV environment variable. So, I'm basically wondering: is this the correct configuration for what I'm trying to achieve?
Assuming that everything in the configuration is correct, I then need to troubleshoot this error. With everything set as described above I can run my pipeline, but when it gets to my staging stage the GenerateChange_Staging action fails with this error message:
Action execution failed User:
arn:aws:sts::954459734159:assumed-role/CodeStarWorker-binance-bot-CodePipeline/1524253307698
is not authorized to perform: cloudformation:DescribeStacks on
resource:
arn:aws:cloudformation:us-east-1:954459734159:stack/awscodestar-binance-bot-lambda-staging/*
(Service: AmazonCloudFormation; Status Code: 403; Error Code:
AccessDenied; Request ID: dd801664-44d2-11e8-a2de-8fa6c42cbf86)
It seems to me from this error message that I need to add the "cloudformation:DescribeStacks" permission for my "CodeStarWorker-binance-bot-CodePipeline" role, so I go to IAM -> Roles and click on the CodeStarWorker-binance-bot-CodePipeline role. However, when I drill into the policy information for CloudFormation, it looks like this role already has permissions for "DescribeStacks"!
If anyone could point out what I'm doing wrong or offer any guidance on understanding and thinking about how to do multiple environments with AWS CodePipeline, that would be great. Thanks!
UPDATE:
I changed the "Stack name" in my Deploy_To_Staging pipeline stage back to "awscodestar-binance-bot-lambda". However, I then get this error form the GenerateChange_Staging action:
Action execution failed Invalid TemplatePath:
binance-bot-BuildArtifact::template-export.staging.yml. Artifact
binance-bot-BuildArtifact doesn't exist
UPDATE 2:
In the root of my project I have the buildspec.yml file that was generated by CodeStar. It looks like this:
version: 0.2
phases:
  install:
    commands:
      # Install dependencies needed for running tests
      - npm install
      # Upgrade AWS CLI to the latest version
      - pip install --upgrade awscli
  pre_build:
    commands:
      # Discover and run unit tests in the 'tests' directory
      - npm test
  build:
    commands:
      # Use AWS SAM to package the application using AWS CloudFormation
      - aws cloudformation package --template template.yml --s3-bucket $S3_BUCKET --output-template template-export.yml
      - aws cloudformation package --template template.staging.yml --s3-bucket $S3_BUCKET --output-template template-export.staging.yml
      - aws cloudformation package --template template.prod.yml --s3-bucket $S3_BUCKET --output-template template-export.prod.yml
artifacts:
  type: zip
  files:
    - template-export.yml
I then added this to the CloudFormation section:
Then I added this to the "build: -> commands:" section:
- aws cloudformation package --template template.staging.yml --s3-bucket $S3_BUCKET --output-template template-export.staging.yml
- aws cloudformation package --template template.prod.yml --s3-bucket $S3_BUCKET --output-template template-export.prod.yml
And I added this to the "files:" section:
- template-export.staging.yml
- template-export.prod.yml
HOWEVER, I am still getting an error that "binance-bot-BuildArtifact does not exist".
Here is the full error after making the buildspec.yml change:
Action execution failed Invalid TemplatePath:
binance-bot-BuildArtifact::template-export.staging.yml. Artifact
binance-bot-BuildArtifact doesn't exist
It seems very strange to me that I can access "binance-bot-BuildArtifact" in one stage of the pipeline but not another. Could it be that the build artifact is only available to the one pipeline stage directly after the build stage? Can someone please help me to be able to access this "binance-bot-BuildArtifact"? Thanks!
For example, should I be changing "Stack name", or should I keep that as "awscodestar-binance-bot-lambda" or change it for each environment as I am here?
You should use a unique stack name for each environment. If you didn't, you would be replacing your 'dev' environment with your 'staging' environment, and so forth.
So, I'm basically wondering is this the correct configuration for what I'm trying to achieve?
I don't think so. You should use the exact same template for each environment. In order to change the environment name for each of your deploys, you can use the 'Parameter Overrides' field to choose the correct value for your 'Environment' parameter.
it looks like this role already has permissions for "DescribeStacks"!
Could the issue here be that your IAM role only has DescribeStacks permission for the dev stack? It looks like it does not have permission to describe the staging stack. Maybe you can add a 'wildcard'/asterisk to the policy so that it matches all of your stack names?
Could it be that the build artifact is only available to the one pipeline stage directly after the build stage?
No, that has not been my experience with CodePipeline. Unfortunately I don't know why it's telling you that your artifact can't be found.
robrtsql has already provided some good advice in terms of using the same template in both stages.
You might find this walkthrough useful.
Basically, it describes adding a CloudFormation "template configuration" which allows you to specify parameters to the CloudFormation stack.
This will allow you to deploy the same template in both your dev and prod environments, but also allow you to tell the difference between a dev deployment and a prod deployment, by choosing a different template configuration in each stage.

Rollback a build using AWS CodePipeline

What is the best mechanism to implement a rollback of a deployment that is orchestrated using CodePipeline? The source comes from an S3 bucket and we are looking to see if there is a one-click rollback mechanism without manual intervention.
CodePipeline doesn't support rollback currently. If you are using CodeDeploy as the deployment action, you can set up rollback on alarm or failed deployment on the CodeDeploy DeploymentGroup. The CloudFormation template to enable auto-rollback for a CodeDeploy deployment group looks like this:
Type: "AWS::CodeDeploy::DeploymentGroup"
Properties:
...
AutoRollbackConfiguration:
Enabled: true
Events:
- "DEPLOYMENT_FAILURE"
- "DEPLOYMENT_STOP_ON_ALARM"
AlarmConfiguration:
Alarms:
- CloudWatchAlarm1
- CloudWatchAlarm2
Enabled: true
You can find more information about it at Deployments and Redeploy
If you are not using AWS CodeDeploy, you can always use the manual way of rolling back, which is to redeploy the previous stable build or tag.

Getting Commit ID in CodePipeline

I am using CodePipeline with CodeCommit. Builds are triggered automatically with a push to the master branch. In the CodePipeline console it is clearly visible that I am receiving the commit ID, but I need to get it in the build environment so I can add it as a tag to the ECS image when I build it. Is there a way to get it in the build environment?
You can use the CODEBUILD_RESOLVED_SOURCE_VERSION environment variable to retrieve the commit hash displayed in CodePipeline at build time.
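For example, a build step could read that variable and use it as the image tag; a minimal Python sketch (the repository URI is a placeholder, and docker must be available in the build image) might look like this:
import os
import subprocess

# CodeBuild exposes the commit hash resolved for this build in this variable.
commit_id = os.environ["CODEBUILD_RESOLVED_SOURCE_VERSION"]

# Placeholder ECR repository URI -- replace with your own.
image = f"123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:{commit_id[:8]}"
subprocess.run(["docker", "build", "-t", image, "."], check=True)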
Adding an answer that explains how to achieve this in CloudFormation, as it took me a while to figure it out. You need to define your stage as:
- Name: MyStageName
  Actions:
    - Name: StageName
      InputArtifacts:
        - Name: InputArtifact
      ActionTypeId:
        Category: Build
        Owner: AWS
        Version: '1'
        Provider: CodeBuild
      OutputArtifacts:
        - Name: OutputArtifact
      Configuration:
        ProjectName: !Ref MyBuildProject
        EnvironmentVariables: '[{"name":"COMMIT_ID","value":"#{SourceVariables.CommitId}","type":"PLAINTEXT"}]'
In your actions you need to have this kind of syntax. Note that the EnvironmentVariables property of a CodePipeline CodeBuild action is different from the AWS::CodeBuild::Project property. If you were to add #{SourceVariables.CommitId} as an env variable there, it wouldn't be resolved properly.
CodePipeline now also allows you to configure your pipeline with variables that are generated at execution time. In this example your CodeCommit action will produce a variable called CommitId that you can pass into a CodeBuild environment variable via the CodeBuild action configuration.
Here is a conceptual overview of the feature: https://docs.aws.amazon.com/codepipeline/latest/userguide/reference-variables.html
For an example walk through of passing the commit id into your build action you can go here:
https://docs.aws.amazon.com/codepipeline/latest/userguide/actions-variables.html
It would also be worth considering tagging the image with the CodePipeline execution ID instead of the commit ID; that way future builds from the same commit won't overwrite the image. Using the CodePipeline execution ID is also shown in the example above.
Is this what you're looking for?
http://docs.aws.amazon.com/codepipeline/latest/userguide/monitoring-source-revisions-view.html#monitoring-source-revisions-view-cli
Most (if not all) of the language SDKs have this API built in also.
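For example, with boto3 (the Python SDK) the commit of the latest execution can be read like this; the pipeline name is a placeholder:
import boto3

codepipeline = boto3.client("codepipeline")

# Placeholder pipeline name.
summaries = codepipeline.list_pipeline_executions(pipelineName="my-pipeline")
latest = summaries["pipelineExecutionSummaries"][0]

execution = codepipeline.get_pipeline_execution(
    pipelineName="my-pipeline",
    pipelineExecutionId=latest["pipelineExecutionId"],
)
for revision in execution["pipelineExecution"]["artifactRevisions"]:
    # For git-based sources, revisionId is the commit SHA.
    print(revision["name"], revision["revisionId"])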
In addition to #Bar's answer: just adding EnvironmentVariables is not enough; you also need to set Namespace.
For example:
pipeBackEnd:
  Type: AWS::CodePipeline::Pipeline
  Properties:
    ...
    Stages:
      - Name: GitSource
        Actions:
          - Name: CodeSource
            ActionTypeId:
              Category: Source
              ...
            Configuration: (...)
            Namespace: SourceVariables # <<< === HERE, in Source
      - Name: Deploy
        Actions:
          - Name: BackEnd-Deploy
            ActionTypeId:
              Category: Build
              Provider: CodeBuild (...)
            Configuration:
              ProjectName: !Ref CodeBuildBackEnd
              EnvironmentVariables: '[{"name":"BranchName","value":"#{SourceVariables.BranchName}","type":"PLAINTEXT"},{"name":"CommitMessage","value":"#{SourceVariables.CommitMessage}","type":"PLAINTEXT"}]'
Also, it may be useful: list of CodePipeline variables