CDK Pipelines seems to only work with one branch by default. Am I missing something, or is there a way to have:
a dev branch that deploys to the Dev account/env,
a test branch that deploys to the Test account/env,
jons-cool-feature-branch that deploys to account/env X, etc.?
Ideally we do not want to have to push everything to the master branch to deploy to dev / test, so that we can keep the master branch clean, tidy, and stable.
I have thought about having multiple pipelines: one for dev, one for test, and one for master. This would solve the issue, but it doesn't feel like the cleanest solution.
Are there any recommended patterns?
The AWS-prescribed best practice is to use trunk-based development.
Thus, a single pipeline cannot use multiple branches for deploying to different environments cleanly.
You should look into creating a single pipeline that would in turn create environment-specific pipelines.
Here is a relevant issue in the CDK repo:
https://github.com/aws/aws-cdk/issues/9461
Solution
Building on what @gshpychka said (https://stackoverflow.com/a/69812428/12907894):
a pipeline that deploys pipelines. I found lots of overcomplicated solutions online, but in the end it turned out to be quite simple:
just add an extra pipeline for each branch you wish to deploy, plus
a core pipeline that builds the branch pipelines.
The only variables that need to change between any of these are:
- Account ID
- env name
- branch name

Account pattern: Build, Dev, Staging, Prod

core-pipeline
  branch: master
  webhook: null (so it doesn't fire on each build)
  deploys:
    master-pipeline -> build account
    staging-pipeline -> build account

master-pipeline
  branch: master
  deploys:
    app stack -> prod account

staging-pipeline
  branch: Staging
  deploys:
    app stack -> staging account
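For illustration, here is a minimal CDK sketch of that "pipeline that deploys pipelines" layout. The repository name, account IDs, region and branch names are placeholders (not values from the post above), and the app stage inside each branch pipeline is left as a comment; treat it as a starting point, not the exact setup described here.

```ts
import { App, Stack, StackProps, Stage, StageProps } from 'aws-cdk-lib';
import { GitHubTrigger } from 'aws-cdk-lib/aws-codepipeline-actions';
import * as pipelines from 'aws-cdk-lib/pipelines';
import { Construct } from 'constructs';

// Placeholder build account and region.
const BUILD_ACCOUNT = '111111111111';
const REGION = 'eu-west-1';

interface BranchPipelineProps extends StageProps {
  branch: string;   // git branch this pipeline watches
  envName: string;  // staging / prod etc.
}

// A branch pipeline: watches its branch and deploys the app stack into the
// matching account (the app stage is omitted here for brevity).
class BranchPipelineStack extends Stack {
  constructor(scope: Construct, id: string, props: BranchPipelineProps) {
    super(scope, id);
    const pipeline = new pipelines.CodePipeline(this, 'Pipeline', {
      crossAccountKeys: true, // needed to deploy into another account
      synth: new pipelines.ShellStep('Synth', {
        input: pipelines.CodePipelineSource.gitHub('my-org/my-repo', props.branch),
        commands: ['npm ci', 'npx cdk synth'],
      }),
    });
    // Deploy the app stack into the account for this environment, e.g.:
    // pipeline.addStage(new AppStage(this, props.envName, {
    //   env: { account: '<account for this env>', region: REGION },
    // }));
  }
}

// Wraps a branch pipeline so the core pipeline can deploy it as a stage.
class BranchPipelineStage extends Stage {
  constructor(scope: Construct, id: string, props: BranchPipelineProps) {
    super(scope, id, props);
    new BranchPipelineStack(this, 'BranchPipeline', props);
  }
}

// The core pipeline: tied to master with no webhook, deploys the branch
// pipelines into the build account whenever it is run.
class CorePipelineStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);
    const core = new pipelines.CodePipeline(this, 'CorePipeline', {
      synth: new pipelines.ShellStep('Synth', {
        input: pipelines.CodePipelineSource.gitHub('my-org/my-repo', 'master', {
          trigger: GitHubTrigger.NONE, // webhook -> null, so it doesn't fire on each push
        }),
        commands: ['npm ci', 'npx cdk synth'],
      }),
    });
    core.addStage(new BranchPipelineStage(this, 'MasterPipeline', {
      env: { account: BUILD_ACCOUNT, region: REGION },
      branch: 'master',
      envName: 'prod',
    }));
    core.addStage(new BranchPipelineStage(this, 'StagingPipeline', {
      env: { account: BUILD_ACCOUNT, region: REGION },
      branch: 'Staging',
      envName: 'staging',
    }));
  }
}

const app = new App();
new CorePipelineStack(app, 'CorePipelineStack', {
  env: { account: BUILD_ACCOUNT, region: REGION },
});
```

Because the branch pipelines are CDK Pipelines themselves, they self-mutate once created, so the core pipeline only needs to run when you add or remove an environment.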
CodePipeline cannot branch. It is not designed to do so.
If you absolutely must have multiple environments and a single pipeline, one solution is a multi-stage pipeline with manual approval steps in the middle.
That is:
Source (dev branch) -> Build/Deploy -> manual approval step -> use CodeBuild or a Lambda to move your now-tested code (still in the artifact chain) to your test branch for you (i.e. use your git server's API to initiate the merge, based on the commit message of the initial commit that started the chain) -> another Build/Deploy to your test env (you can even do cross-account deployment here) -> manual approval step -> repeat as many times as you want until you deploy to production.
However, this is entirely a hack. You're better off with multiple pipelines. I would use the CDK to dynamically adjust the CloudFormation template for the pipeline itself to handle dev/prod, and then simply deploy it twice, linking one to the dev branch source and one to the main branch source.
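A rough sketch of that "same template, deployed twice" idea with a plain CodePipeline. The owner, repo, branch names and the Secrets Manager secret name are placeholders, and the build project assumes a buildspec.yml in the source:

```ts
import { App, SecretValue, Stack, StackProps } from 'aws-cdk-lib';
import * as codebuild from 'aws-cdk-lib/aws-codebuild';
import * as codepipeline from 'aws-cdk-lib/aws-codepipeline';
import * as actions from 'aws-cdk-lib/aws-codepipeline-actions';
import { Construct } from 'constructs';

interface EnvPipelineProps extends StackProps {
  branch: string; // which git branch this copy of the pipeline tracks
}

class EnvPipelineStack extends Stack {
  constructor(scope: Construct, id: string, props: EnvPipelineProps) {
    super(scope, id, props);

    const sourceOutput = new codepipeline.Artifact();
    const pipeline = new codepipeline.Pipeline(this, 'Pipeline');

    pipeline.addStage({
      stageName: 'Source',
      actions: [
        new actions.GitHubSourceAction({
          actionName: 'GitHub',
          owner: 'my-org',
          repo: 'my-repo',
          branch: props.branch,
          oauthToken: SecretValue.secretsManager('github-token'),
          output: sourceOutput,
        }),
      ],
    });

    pipeline.addStage({
      stageName: 'BuildAndDeploy',
      actions: [
        new actions.CodeBuildAction({
          actionName: 'Build',
          // Uses the buildspec.yml from the source by default.
          project: new codebuild.PipelineProject(this, 'Build'),
          input: sourceOutput,
        }),
      ],
    });
  }
}

// Same template, deployed twice: one pipeline per branch/environment.
const app = new App();
new EnvPipelineStack(app, 'DevPipeline', { branch: 'dev' });
new EnvPipelineStack(app, 'ProdPipeline', { branch: 'main' });
```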
Related
We have a CodePipeline that runs on every GitHub commit/merge to the main branch, building the application and releasing it to a staging environment where we can manually test it. Every now and then (ad hoc, weekly, etc., depending on the project) we release to production manually. To implement this I added a ManualApprovalStep to my CodePipeline between staging and production, but that means my pipeline is never green. It's always stuck in blue.
This makes me think that I'm using the wrong tool here.
My mental model is coming from Heroku (ignore the review apps, I'm not tackling that challenge yet):
In Heroku there's a Tests tab that's green if the tests pass, and there's a pipeline that's green if it gets deployed to staging. Not having promoted to production is not a non-green state in Heroku, but it would be with a ManualApprovalStep.
Is there another tool that AWS gives me to model this way of working that I'm missing?
Update: another big difference. The ManualApprovalStep seems to pile up each change and release them one by one, rather than releasing whatever was last deployed to staging, so clearly it's not analogous to Heroku's release to production.
You are right that the ManualApprovalStep is not a natural "promotion" mechanism. It is meant for yes/no approvals and will fail the execution if rejected or after 7 days. Disabled Stage Transitions also sit awkwardly with your use case.
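For reference, the approval gate described in the question typically looks something like this in CDK Pipelines (a sketch; the pipeline and stage variables are assumed to exist elsewhere in your app):

```ts
import { Stage } from 'aws-cdk-lib';
import * as pipelines from 'aws-cdk-lib/pipelines';

declare const pipeline: pipelines.CodePipeline; // your existing pipeline
declare const prodStage: Stage;                 // the production deployment stage

// A yes/no gate: rejecting it, or letting it sit for 7 days, fails the
// execution, which is why the pipeline never settles into a green state.
pipeline.addStage(prodStage, {
  pre: [new pipelines.ManualApprovalStep('PromoteToProd')],
});
```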
pipelines.CodePipeline executions are (a) triggered on a change to a source and (b) meant to run all stages from start to finish. Executions are hard to interrupt. A consequence of the requirement to deploy environments independently is that environments are best modelled as independent pipelines, not stages within a single pipeline.
Simple Option: 2 GitHub branches, 2 Pipelines
Clone your pipeline setup. A staging pipeline is tied to a staging branch source. A prod pipeline is triggered on changes to the main branch. This setup is easy to reason about and has the advantage that deploys always match your source. But it does not replicate the Heroku "promotion" concept.
Complex Option: 1 GitHub branch, 2 Pipelines
You could probably get something closer to the "promotion" pattern by having a pipelines.CodePipeline deployment for staging (tied to GitHub) and a separate codepipeline.Pipeline for prod. The latter can be triggered by EventBridge events. Asset handling would be complex in this scenario.
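A sketch of how the prod codepipeline.Pipeline could be wired to a custom "promote" event. The event source and detail-type here are made-up names; you would publish that event yourself (for example with the EventBridge PutEvents API) whenever you decide to promote:

```ts
import * as codepipeline from 'aws-cdk-lib/aws-codepipeline';
import * as events from 'aws-cdk-lib/aws-events';
import * as targets from 'aws-cdk-lib/aws-events-targets';
import { Construct } from 'constructs';

// Assumes this is called from inside a Stack, and that prodPipeline is the
// plain codepipeline.Pipeline used for production deploys.
export function wirePromotion(scope: Construct, prodPipeline: codepipeline.IPipeline): void {
  new events.Rule(scope, 'PromoteToProd', {
    // Hypothetical custom event you publish manually when promoting a release.
    eventPattern: {
      source: ['my.release-process'],
      detailType: ['PromoteToProduction'],
    },
    targets: [new targets.CodePipeline(prodPipeline)],
  });
}
```

As noted above, getting the prod pipeline to consume the same assets the staging deployment used is the hard part of this option.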
[Edit:] Amplify CI/CD for the Front-end, CodePipeline for Back-end
AWS Amplify CI/CD gives you automatic feature branch deploys, PR review approvals etc. for front-end apps. Manual deploys require a workaround, but are possible. See this related SO question. The CDK supports Amplify build configurations. The catch is that these CI/CD goodies work for front-end apps, but not for arbitrary infrastructure stacks. To get the best of both worlds, split the app in two. Use Amplify for the high-velocity front-end and stick with CodePipeline for the back-end deploys.
Background
I want to create the following CI/CD flow in AWS and Github, for a react app using Amplify:
A single main branch, with short-lived feature branches and PRs into main.
Each PR triggers its own test environment in Amplify, with its own temporary subdomain, which gets torn down when the PR is merged, as described here.
Merging into main does not automatically trigger a deploy to production.
Instead, there is a separate mechanism (a web page, or amplify command, or even triggers based on git tags) for manually selecting a commit from main to deploy to production.
Questions
It's not clear to me if...
Support for this flow is already built into Amplify (based on the docs I've read, I think the answer is "no", but I'm not sure).
Support for this flow is already built into AWS CodePipeline, or if it can be configured there.
There is another AWS tool that solves this.
I'm looking for answers to those questions, or specific references in the docs which address them.
The answers for Amplify are Yes, Yes, Yes, Partially.
(1) A single main branch, with short-lived feature branches and PRs into main.
Yes. Feature branch deploys. You can define which branch patterns (such as feature/*) you wish to auto-deploy.
(2) Each PR triggers its own test environment in Amplify, with its own temporary subdomain,
Yes. Web Previews for PRs. "A web preview deploys every pull request made to your GitHub repository to a unique preview URL which is completely different from the URL your main site uses."
(3) Merging into main does not automatically trigger a deploy to production.
Yes. Disable automatic builds on main.
(4) Instead, there is a separate mechanism ... for manually selecting a commit from main to deploy to production.
Partially (HEAD only?). Call the StartJob API to manually trigger a build from, say, Lambda. The job type RELEASE starts a new job with the latest change from the specified branch. I am not sure if jobType: MANUAL with a commitId starts a job from an arbitrary commit hash. (A sketch of the Lambda call follows below.)
Another workaround for 3+4 is to skip the build for an arbitrary commit. Amplify will skip building if [skip-cd] appears at the end of a commit message.
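As a sketch of (4), a Lambda handler calling StartJob with AWS SDK v3 could look like this. The app id and branch name are placeholders, and whether MANUAL plus a commitId can build an arbitrary commit is the open question above:

```ts
import { AmplifyClient, StartJobCommand } from '@aws-sdk/client-amplify';

const amplify = new AmplifyClient({});

// Starts a build of the latest commit on the given branch.
export const handler = async (): Promise<void> => {
  await amplify.send(new StartJobCommand({
    appId: 'd1example23',       // placeholder Amplify app id
    branchName: 'main',
    jobType: 'RELEASE',         // RELEASE = build the branch HEAD
    jobReason: 'Manual production release',
  }));
};
```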
In my experience, I don't think there is any easy way to meet your requirement.
If you are using GitLab, you can try GitLab Review Apps to achieve that (I tried this before with some scripts).
Support for this flow is already built into Amplify (based on the docs I've read, I think the answer is "no", but I'm not sure).
Check the links below to see if they help:
https://www.youtube.com/watch?v=QV2WS535nyI
https://dev.to/rajandmr/deploying-react-app-using-aws-amplify-with-ci-cd-pipeline-setup-3lid
Support for this flow is already built into AWS CodePipeline, or if it can be configured there.
For this, you need to create your own full pipeline. Yes, you can configure your pipeline there.
There is another AWS tool that solves this.
If you are okay with Jenkins, then Jenkins will help you achieve this.
You can deploy Jenkins as a Docker container on AWS EC2 and create your pipeline there. You can also use the parameterised option for selecting your environment and git branch.
The AWS Amplify service allows multiple branches to be configured within a single Amplify application, and this is how CD is performed: each stage is assigned to a particular branch and auto-builds when changes are pushed to that branch. From my understanding this is the Git Flow-like approach, having different branches for each stage.
However, we have now split up our Amplify app so that each stage is a separate Amplify application; this was done because we are using the CDK and want CI/CD for the Amplify infrastructure and components. The infrastructure is now continuously deployed using CodePipeline and self-mutates. This all works from the "master" branch, so any push to that branch updates the pipeline, which first deploys to our test stage and then to our production stage.
The trouble is that the infrastructure is now continuously deployed from "master", whereas the actual app that Amplify runs uses separate branches ("master" and "prod"). What I would like is that whenever we push, say, an infrastructure change to production, it also updates the Amplify app with the "master" branch logic.
I have looked into this and found a couple of solutions but neither of them are ideal:
Webhooks - The production app could be set not to auto-build, and a webhook triggered after the infrastructure deployment to perform the update. This could work, but it does mean that the app will use the HEAD state of the master branch, which may in theory be ahead of the infrastructure change should multiple commits be pushed at once. Given that our staged pipeline has a manual deploy step, the chance of this happening is high, which is not ideal. (A sketch of this wiring is shown below.)
Custom Lambda to run aws amplify start-job --commit-id xyz; after a deployment we run this command with the commit id of the CodePipeline execution. This would let us deploy the exact change, but it is extra overhead and infrastructure to maintain. We would also need to do this cross-account.
Is there a solution I am overlooking? Is there a way to align our infrastructure as code and our app code so they deploy at the same time, from the same commit, without the need for separate branches?
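For reference, option 1 (the webhook) could be wired as a post step on the existing CDK pipeline, roughly as below. This is only a sketch: the webhook URL is a placeholder for the incoming webhook you create in the Amplify console (or via the CreateWebhook API), and the pipeline/stage variables are assumed to exist elsewhere:

```ts
import { Stage } from 'aws-cdk-lib';
import * as pipelines from 'aws-cdk-lib/pipelines';

declare const pipeline: pipelines.CodePipeline; // the self-mutating infra pipeline
declare const prodStage: Stage;                 // the production infrastructure stage

// After the prod infrastructure deploys, hit the Amplify incoming webhook to
// kick off a build of the app branch.
pipeline.addStage(prodStage, {
  post: [
    new pipelines.ShellStep('TriggerAmplifyBuild', {
      env: { AMPLIFY_WEBHOOK_URL: 'https://example.invalid/replace-me' }, // placeholder
      commands: ['curl -s -X POST "$AMPLIFY_WEBHOOK_URL"'],
    }),
  ],
});
```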
My team has been running into issues with our CodePipeline where features were pushed out into production when they shouldn't have been, due to our Docker image patching. A little background on our architecture: our pipeline has two sources, one for the source code and one for the Docker image builder. The Docker image is built via CodeBuild and deployed to dev, test, and then prod environments, with manual approval steps in between.
Our Docker image receives monthly patching, which triggers the pipeline to execute and is what caused the features to be pushed out. We redesigned our git branching strategy so that our master branch only contains stable releases, but I could still see this issue occurring again if a specific release date is specified. Is there a way to push out the image patching without pushing out the latest commit?
Can CodePipeline Use a Specific Commit?
This is an often requested feature but unfortunately CodePipeline will always bring the latest commit from the selected branch in the Source action.
Being tied to a single git branch is more of a feature of CodePipeline, as the design leans towards trunk-based development [0]. Also, as per the designers of this service, CodePipeline is designed for post-merge/release validation. That is, once your change is ready to be released to production and is merged into your master/main branch, CodePipeline takes over and automatically tests and releases the final merged set of changes. CodePipeline has a lot of features, like stage locking and superseding versions, which don't work well for the case where you want to test a change in isolation before it's merged (e.g. feature branch testing or pull request testing). Therefore there currently isn't a recommended way to do this in CodePipeline.
[0] https://trunkbaseddevelopment.com/
Having said that, there is a way to hack this by using an S3 source action in the pipeline instead of a GitHub/CodeCommit source action. Essentially, the pipeline's S3 source action is tied to an S3 bucket/key. You can then upload a zip of any specific commit to that bucket/key and trigger the pipeline.
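A sketch of what that S3-sourced pipeline could look like in CDK. The bucket key and build project are placeholders; you would zip the commit you want (for example with git archive), upload it to that key, and run the pipeline:

```ts
import { Stack, StackProps } from 'aws-cdk-lib';
import * as codebuild from 'aws-cdk-lib/aws-codebuild';
import * as codepipeline from 'aws-cdk-lib/aws-codepipeline';
import * as actions from 'aws-cdk-lib/aws-codepipeline-actions';
import * as s3 from 'aws-cdk-lib/aws-s3';
import { Construct } from 'constructs';

export class S3SourcedPipelineStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Versioned bucket that receives a zip of whichever commit you want to run.
    const releaseBucket = new s3.Bucket(this, 'ReleaseBucket', { versioned: true });

    const sourceOutput = new codepipeline.Artifact();
    const pipeline = new codepipeline.Pipeline(this, 'Pipeline');

    pipeline.addStage({
      stageName: 'Source',
      actions: [
        new actions.S3SourceAction({
          actionName: 'S3Source',
          bucket: releaseBucket,
          bucketKey: 'releases/source.zip', // upload the zipped commit here
          output: sourceOutput,
        }),
      ],
    });

    // Build/deploy stages continue exactly as with a git source.
    pipeline.addStage({
      stageName: 'Build',
      actions: [
        new actions.CodeBuildAction({
          actionName: 'Build',
          project: new codebuild.PipelineProject(this, 'BuildProject'),
          input: sourceOutput,
        }),
      ],
    });
  }
}
```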
We are a relatively inexperienced development team trying to do things 'the right way'. We are using Github along with AWS and CodeDeploy for multiple PHP based web applications. We are utilising Github's auto-deployment with CodeDeploy when the master branch is updated.
We have two production EC2 web servers in separate AZ's along with a single EC2 staging server.
It currently works as follows:
We write code in a branch, we push to GitHub, we merge into 'master' which then kicks off CodeDeploy to write to our staging server where we can test it. Once we have tested it we then manually kick off CodeDeploy to write to production (with the same commit ID).
The problem is that if testing brings up issues and we have another branch waiting to be merged and tested, everything becomes backed up.
We are obviously doing something wrong. We are writing to the master branch to utilise GitHub's auto-deploy, but I assumed master was only supposed to be written to when it was ready to be deployed.
Can someone please help us and put us straight?
Thanks
Make another branch called 'livecandidate'; each new feature branch will be merged into it.
Each time a feature branch is merged into 'livecandidate', pull 'livecandidate' into your CodeDeploy process and install it on the test machine.
If the tests pass, merge 'livecandidate' into 'master' and kick off the install to production.
If the tests do not pass, unwind the merge into 'livecandidate' (assuming no dependencies on chains of changes, etc.).
After doing a production install or an un-merge, try the next feature.
The general idea is to never have a broken master.
All problems in computer science can be solved by another level of indirection - David Wheeler