How do you deploy existing deployment artifacts through CodePipeline?

Background: I am using GitHub Actions for CI and AWS CodePipeline for CD. GitHub Actions pushes a set of versioned artifacts to S3 and ECR. I set up my AWS CodePipeline using the CDK.
Question: how do I get CodePipeline to pick up those artifacts and deploy them?
opt 1: Just tag your images and everything else with "latest"
answer: no, a pipeline that always deploys the latest is not the same as a pipeline that deploys version X.
opt 2: Just send the version number (version X) to CodePipeline so that it knows which artifacts to fetch
answer: no, CodePipeline supports passing variables between actions (actions generate output variables, other actions can pick them up), but I have found no documentation stating that a pipeline can be triggered with input parameters.
opt 3: tag your commit in GitHub and use a webhook to pass that information along to CodePipeline.
answer: no, CodePipeline can filter webhooks so that you can trigger the pipeline for certain events, but it does not support parsing the webhook body and picking out the values you want to use.
opt 4: resolve the version number during cdk synth, before that pesky critter tries to update itself (see the sketch after this list).
answer: yeah, that kinda works. I can query an ECR repo, find the actual version number of the release, and regenerate the pipeline so that it points to the resolved version. It's not the same as passing a version number from GitHub to CodePipeline, but at least my pipeline is versioned, and all my deployment units (ECS services, batch jobs, etc.) point to an explicit version after deployment. Unfortunately, this has several drawbacks: it makes the deployment pipeline (even) slow(er), and if the pipeline fails I will have to update it by running cdk deploy from my machine.
opt 5: you come in to save the day :-)
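For the record, here is roughly what opt 4 looks like in practice. This is a minimal sketch, assuming an ECR repository named my-app, semver-style release tags, and a hypothetical MyPipelineStack; the AWS SDK v3 ECR client resolves the tag before the CDK app synthesizes.

import * as cdk from "aws-cdk-lib";
import { ECRClient, DescribeImagesCommand } from "@aws-sdk/client-ecr";
import { MyPipelineStack } from "./pipeline-stack"; // hypothetical stack

// Resolve the newest semver-tagged image in the repo at synth time.
async function resolveLatestVersionTag(repositoryName: string): Promise<string> {
  const ecr = new ECRClient({});
  const res = await ecr.send(new DescribeImagesCommand({ repositoryName }));
  const isSemver = (t: string) => /^\d+\.\d+\.\d+$/.test(t);
  const versioned = (res.imageDetails ?? [])
    .filter((d) => d.imageTags?.some(isSemver))
    .sort((a, b) => b.imagePushedAt!.getTime() - a.imagePushedAt!.getTime());
  const tag = versioned[0]?.imageTags?.find(isSemver);
  if (!tag) throw new Error(`no versioned image found in ${repositoryName}`);
  return tag;
}

// The pipeline is synthesized with an explicit, pinned version, so every
// deployment unit references version X rather than "latest".
async function main() {
  const versionTag = await resolveLatestVersionTag("my-app");
  const app = new cdk.App();
  new MyPipelineStack(app, "PipelineStack", { versionTag });
  app.synth();
}
main();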

Related

Version Control and Pipeline for AWS CloudFormation

I'm trying to figure out a way to come up with a CI/CD pipeline for CloudFormation. We use the CloudFormation console directly to deploy our infrastructure and app to the cloud.
Does anyone have any examples of how they have created a CI/CD pipeline using Jenkins or another CI tool to do some type of linting, CI, version control, and artifact deployment to Artifactory (or a similar toolset)? I'd like to execute a pipeline once a new version of the CloudFormation templates is uploaded to Artifactory.
You can always use CodePipeline.
see the docs: CodePipeline
CI:
I am using GitHub, so before I can merge a pull request, my code must pass 3 tests.
Those tests are 3 CodeBuild containers that run the tests.
CD:
After my code is merged, it invokes a CodePipeline that mainly uses CodeDeploy and CodeBuild.
About your goal:
I'd like to execute a pipeline once a new version of the CloudFormation templates is uploaded to Artifactory.
I don't really think you need a pipeline for this.
Let's assume your artifacts are uploaded to an S3 bucket called artifact-bucket.
You can create a CloudWatch Events rule that executes a Step Functions state machine when a file is added to artifact-bucket.
see the docs: Trigger Step Functions from S3
You can easily deploy a stack with Step Functions.
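A minimal CDK sketch of that wiring, assuming a recent aws-cdk-lib and that EventBridge (the successor to CloudWatch Events) notifications are enabled on the bucket; the state machine body is a placeholder for the actual deploy steps.

import * as cdk from "aws-cdk-lib";
import * as events from "aws-cdk-lib/aws-events";
import * as targets from "aws-cdk-lib/aws-events-targets";
import * as sfn from "aws-cdk-lib/aws-stepfunctions";

export class ArtifactTriggerStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string) {
    super(scope, id);

    const deployMachine = new sfn.StateMachine(this, "DeployMachine", {
      definitionBody: sfn.DefinitionBody.fromChainable(
        new sfn.Pass(this, "DeployTemplate") // placeholder for real deploy steps
      ),
    });

    // Fires whenever an object lands in artifact-bucket; the bucket must
    // have EventBridge notifications turned on for this event to arrive.
    new events.Rule(this, "OnArtifactUpload", {
      eventPattern: {
        source: ["aws.s3"],
        detailType: ["Object Created"],
        detail: { bucket: { name: ["artifact-bucket"] } },
      },
      targets: [new targets.SfnStateMachine(deployMachine)],
    });
  }
}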

Deploy a previous version in AWS CodePipeline

I am new to AWS and am trying to create a pipeline for CI/CD. The stages involved in my pipeline are:
Source -> CodeCommit
Build -> CodeBuild project
Deploy using CloudFormation
I am able to complete the pipeline, and the deployment is successful. But I am struggling to implement a rollback procedure with it. How do I deploy a previous version without making a code revert in the repository? Any help regarding this?
I changed the pipeline configuration a bit and now I am able to deploy any version from history. Below is the solution:
Source -> CodeCommit
Build -> CodeBuild project
Deploy using CodeDeploy instead of CloudFormation
Now any past deployment can be re-run at any time: pick the version from the deployment history under CodeDeploy and retry the deployment.
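The same retry can also be scripted against the CodeDeploy API instead of clicked through the console. A sketch, with hypothetical application, deployment group, and bundle names:

import { CodeDeployClient, CreateDeploymentCommand } from "@aws-sdk/client-codedeploy";

async function redeployPreviousVersion() {
  const codedeploy = new CodeDeployClient({});
  // Point the new deployment at an older, already-built bundle in S3.
  await codedeploy.send(
    new CreateDeploymentCommand({
      applicationName: "my-app",            // hypothetical
      deploymentGroupName: "my-app-group",  // hypothetical
      revision: {
        revisionType: "S3",
        s3Location: {
          bucket: "my-artifact-bucket",
          key: "releases/my-app-1.2.3.zip", // the previous version's bundle
          bundleType: "zip",
        },
      },
    })
  );
}
redeployPreviousVersion();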
Unfortunately, there is currently no rollback step in CodePipeline; traditionally, people roll back by reverting the change on their master branch (which is meant to represent the state of live).
If you're unable to do this revert, then you will need to manage the rollback from either a different service or a different pipeline.
As you're using CloudFormation, you could take a look at implementing Rollback Triggers, which monitor the status of an alarm. If the alarm fires, the deployment rolls back and the pipeline fails.
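A sketch of attaching a rollback trigger to a stack update through the SDK; the stack name and alarm ARN are made up:

import { CloudFormationClient, UpdateStackCommand } from "@aws-sdk/client-cloudformation";

async function updateWithRollbackTrigger() {
  const cfn = new CloudFormationClient({});
  await cfn.send(
    new UpdateStackCommand({
      StackName: "my-app-stack", // hypothetical
      UsePreviousTemplate: true,
      RollbackConfiguration: {
        // CloudFormation watches the alarm during the update and for 15
        // minutes afterwards; if it goes into ALARM, the update rolls back.
        MonitoringTimeInMinutes: 15,
        RollbackTriggers: [
          {
            Arn: "arn:aws:cloudwatch:us-east-1:123456789012:alarm:my-app-errors", // hypothetical
            Type: "AWS::CloudWatch::Alarm",
          },
        ],
      },
    })
  );
}
updateWithRollbackTrigger();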

Trigger an AWS CodePipeline on every new pull request in a GitHub repo

Source code in my organization is managed in a GitHub repository. For now, our CI process uses AWS CodePipeline as follows:
Webhooks detect code changes in a specific git branch
The updated branch is then used as the input for AWS CodeBuild
The finished build is deployed onto one of our staging environments using Elastic Beanstalk
Tests are run on the Elastic Beanstalk environment.
We want to add detection of new pull requests in our git repository. Whenever a new PR is created in our repo, we'd like to automatically trigger a build to an EB environment, through CodePipeline as above.
Our roadblocks:
Looking at the available settings for GitHub Webhooks in CodePipeline, we cannot find a way to specify that the pipeline's trigger should be a new PR.
In any case, the GitHub source for a CodePipeline must be a specific branch. We'd like PRs to be detected in any branch.
What would be the best approach here? I've seen some methods being discussed, but most of them appear to be on the cumbersome/high-maintenance side. If there's anything new in the AWS toolchain that makes this easy, it'd be cool to know.
Thanks!
The best approach to solving this problem seems to be creating a CodePipeline for each PR using a parameterized CloudFormation stack.
Essentially the steps are:
Define your CodePipeline using CloudFormation, with a parameter that identifies the environment: Prod, QA, PR_xyz, etc.
Set up CodeBuild to trigger on any changes to your GitHub repository. When a new PR is created, have CodeBuild create a new CodePipeline based on your CloudFormation template, supplying the PR name as the environment name when creating the stack (see the sketch below).
Detailed steps are described here: https://moduscreate.com/blog/track-git-branches-aws-codepipeline/
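A sketch of step 2, assuming the shared template sits in the repo and takes an EnvironmentName parameter; CodeBuild exposes the triggering PR through the CODEBUILD_WEBHOOK_TRIGGER environment variable (e.g. pr/123):

import { CloudFormationClient, CreateStackCommand } from "@aws-sdk/client-cloudformation";
import { readFileSync } from "fs";

async function createPipelineForPr() {
  const trigger = process.env.CODEBUILD_WEBHOOK_TRIGGER ?? ""; // e.g. "pr/123"
  const prNumber = trigger.replace("pr/", "");

  const cfn = new CloudFormationClient({});
  await cfn.send(
    new CreateStackCommand({
      StackName: `pipeline-pr-${prNumber}`,
      TemplateBody: readFileSync("pipeline-template.yaml", "utf8"), // hypothetical template
      Parameters: [
        { ParameterKey: "EnvironmentName", ParameterValue: `PR_${prNumber}` },
      ],
      Capabilities: ["CAPABILITY_IAM"], // assuming the template creates IAM roles
    })
  );
}
createPipelineForPr();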

Do I need the code deploy step in aws code pipeline for a static s3 website

I created a repo in CodeCommit for a static S3 website.
Then I created a CodePipeline and configured the CodeBuild part.
There I set the buildspec file with some basic commands:
build, then copy the files to the S3 bucket.
The third step, CodeDeploy, is the one I'm not sure why it's needed.
When I run it, it gets stuck for an hour.
I disabled it and the site was deployed just fine.
Am I missing something?
You can disable the CodeDeploy part if it is working fine for you. Or you can skip the CodeBuild step and use an appspec.yml to deploy the static website onto S3.
You have to use one of the two steps to make it work; you can't skip both.
The CodeDeploy part is present in CodePipeline in case you need to deploy to an EC2 fleet or Auto Scaling group after you have built the artifacts. If it's not needed, just skip it.
CodePipeline has three stages: Source -> CodeBuild -> CodeDeploy. According to Amazon, you must use at least two of them: you cannot skip the first (Source), but you can choose either one or both of the remaining. For your use case, the Source and CodeBuild stages are enough; you don't need CodeDeploy. Just remove the CodeDeploy stage (a sketch follows below).
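To make that concrete, here is a hedged CDK sketch of a two-stage pipeline whose build step publishes the site itself; the repository name, bucket name, and build commands are placeholders:

import * as cdk from "aws-cdk-lib";
import * as codebuild from "aws-cdk-lib/aws-codebuild";
import * as codecommit from "aws-cdk-lib/aws-codecommit";
import * as codepipeline from "aws-cdk-lib/aws-codepipeline";
import * as actions from "aws-cdk-lib/aws-codepipeline-actions";

export class StaticSiteStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string) {
    super(scope, id);

    const repo = codecommit.Repository.fromRepositoryName(this, "Repo", "my-site");
    const sourceOutput = new codepipeline.Artifact();

    // The build step copies the site to S3 itself, so no deploy stage is
    // needed. NB: the build role still needs write access to the bucket.
    const project = new codebuild.PipelineProject(this, "Build", {
      buildSpec: codebuild.BuildSpec.fromObject({
        version: "0.2",
        phases: {
          build: {
            commands: [
              "npm ci && npm run build",
              "aws s3 sync ./dist s3://my-static-site-bucket --delete",
            ],
          },
        },
      }),
    });

    new codepipeline.Pipeline(this, "Pipeline", {
      stages: [
        {
          stageName: "Source",
          actions: [new actions.CodeCommitSourceAction({ actionName: "Source", repository: repo, output: sourceOutput })],
        },
        {
          stageName: "Build",
          actions: [new actions.CodeBuildAction({ actionName: "Build", project, input: sourceOutput })],
        },
      ],
    });
  }
}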

What is the "right" way to set up CodePipeline to deploy different branches to different Elastic Beanstalk environments?

This is a basic question:
I have a basic CodePipeline pipeline set up like the following:
Source is a GitHub branch (master)
Build with AWS CodeBuild
Deploy to Elastic Beanstalk -> deploys to the appname-prod environment
That all makes sense and is straightforward.
Now I want to do the same thing for staging: have a staging branch, and when it's committed to, CodeBuild builds it and it's deployed to the appname-staging environment.
What's the right way to do this?
Should I just have two different pipelines (one for prod and one for staging)?
Or is there a good way to have both of those behaviors (and potentially a third) within the same pipeline?
Thanks!
In my experience, you need a pipeline per branch. The pipeline structure for each environment may be the same or different.
If you opt for a CI/CD setup, the non-production pipeline will include stages for testing etc., and, assuming each stage passes, deployment to the non-prod Elastic Beanstalk environment is then automatic.
For the production environment, you may wish to include a manual approval stage, making it a continuous delivery pipeline. Or, if automatic deployments are acceptable, you can reuse the non-prod pipeline format (see the sketch below).
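For instance, the manual approval stage is only a few lines of CDK. A sketch against a hypothetical existing pipeline:

import * as codepipeline from "aws-cdk-lib/aws-codepipeline";
import * as actions from "aws-cdk-lib/aws-codepipeline-actions";

declare const prodPipeline: codepipeline.Pipeline; // the existing prod pipeline

// Appends a stage that pauses the pipeline until a human approves the
// promotion to production.
prodPipeline.addStage({
  stageName: "Approve",
  actions: [
    new actions.ManualApprovalAction({
      actionName: "PromoteToProd",
      additionalInformation: "Check staging before approving.",
    }),
  ],
});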
A nice trick, if you wish to reuse a pipeline you've created manually over and over again, is to extract its definition via the CLI:
aws codepipeline get-pipeline --name <name>
This returns the pipeline's structure as JSON, which you can fold into a CloudFormation template for the pipeline resource. You will need to edit it a bit, but if it references a repo and branch, you can adjust those. You can also parameterize it so that you can easily create many pipelines that share a similar structure.
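The same parameterization can also be expressed directly in CDK rather than in a hand-edited template; a sketch with hypothetical props:

import * as cdk from "aws-cdk-lib";

// Hypothetical per-branch settings; add whatever else differs per pipeline.
interface BranchPipelineProps extends cdk.StackProps {
  branch: string;            // e.g. "master" or "staging"
  ebEnvironmentName: string; // e.g. "appname-prod" or "appname-staging"
}

class BranchPipelineStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string, props: BranchPipelineProps) {
    super(scope, id, props);
    // ...source from props.branch, build, then deploy to
    // props.ebEnvironmentName; the structure is shared by all instances...
  }
}

// One pipeline per branch: identical structure, different parameters.
const app = new cdk.App();
new BranchPipelineStack(app, "ProdPipeline", { branch: "master", ebEnvironmentName: "appname-prod" });
new BranchPipelineStack(app, "StagingPipeline", { branch: "staging", ebEnvironmentName: "appname-staging" });
app.synth();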
Good luck!
I was fighting with this issue for a while. CodePipeline only allows source actions in the first stage, which means you can't run your source-build-deploy steps for dev and then repeat them for higher environments within the same pipeline.