Can CodePipeline Use a Specific Commit

My team has been running into issues with our CodePipeline where features were pushed out to production when they shouldn't have been, due to our Docker image patching. A little background on our architecture: our pipeline has two sources, one for the source code and one for the Docker image builder. The Docker image is built via CodeBuild and deployed to dev, test, and then prod environments, with manual approval steps in between.
Our Docker image receives monthly patching, which triggers the pipeline to execute; this is what caused the features to be pushed out. We redesigned our Git branching strategy so that our master branch only contains stable releases, but I could still see this issue occurring again if a specific release date is specified. Is there a way to push out the image patching without also pushing out the latest commit?

This is an often-requested feature, but unfortunately CodePipeline always pulls the latest commit from the selected branch in the Source action.
Being tied to a single Git branch is more of a feature of CodePipeline, as the design leans towards trunk-based development [0]. Also, as per the designers of this service, CodePipeline is designed for post-merge/release validation. That is, once your change is ready to be released to production and is merged into your master/main branch, CodePipeline takes over and automatically tests and releases the final merged set of changes. CodePipeline has a lot of features like stage locking, superseding versions, etc. which don't work well for the case where you want to test a change in isolation before it's merged (e.g. feature branch testing or pull request testing). Therefore there currently isn't a recommended way to do this in CodePipeline.
[0] https://trunkbaseddevelopment.com/
Having said that, there is a way to hack this with an S3 source action in the pipeline instead of a GitHub/CodeCommit source action. Essentially, your pipeline's S3 source action is tied to an S3 bucket/key. You can then upload a zip of any specific commit to that bucket/key and trigger the pipeline.
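For illustration, a minimal sketch of that hack using git archive and boto3. The bucket, key, and commit are hypothetical; the watched bucket must have versioning enabled for the upload to trigger change detection:

```python
import subprocess
import boto3

COMMIT = "1a2b3c4"           # the specific commit to release (hypothetical)
BUCKET = "my-source-bucket"  # bucket the S3 source action watches (hypothetical)
KEY = "source.zip"           # object key configured in the source action (hypothetical)

# Zip the repository tree exactly as it was at the chosen commit.
subprocess.run(
    ["git", "archive", "--format=zip", "-o", "/tmp/source.zip", COMMIT],
    check=True,
)

# Uploading a new object version to the watched bucket/key is what
# triggers the pipeline, so no explicit start call is needed.
boto3.client("s3").upload_file("/tmp/source.zip", BUCKET, KEY)
```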

Related

How should I deploy from staging to production when using AWS CodePipeline?

We have a CodePipeline that runs on every GitHub commit/merge to the main branch, building the application and releasing it to a staging environment where we can manually test the application. Every now and then (ad hoc, weekly, etc., depending on the project) we'd release to production manually. To implement this I added a ManualApprovalStep to my CodePipeline between staging and production, but that means my pipeline is never green. It's always stuck in blue.
This makes me think that I'm using the wrong tool here.
My mental model is coming from Heroku (ignore the review apps, I'm not tackling that challenge yet).
In Heroku there's a Tests tab that's green if the tests pass, and the pipeline is green if it gets deployed to staging. A lack of promotion to production is not a non-green state in Heroku, but it would be with the ManualApprovalStep.
Is there another tool that AWS gives me to model this way of working that I'm missing?
Update: another big difference. The ManualApprovalStep seems to queue up every change and release them one by one, rather than releasing whatever was last deployed to staging, so it's clearly not analogous to the release to production that Heroku has.
You are right that the ManualApprovalStep is not a natural "promotion" mechanism. It is for yes/no approvals and will fail the execution if rejected or after 7 days. Disabled stage transitions also sit awkwardly with your use case.
pipelines.CodePipeline executions are (a) triggered on a change to a source and (b) meant to run all stages from start to finish. Executions are hard to interrupt. A consequence of the requirement to deploy environments independently is that environments are best modelled as independent pipelines, not stages within a single pipeline.
Simple Option: 2 GitHub branches, 2 Pipelines
Clone your pipeline setup. A staging pipeline is tied to a staging branch source. A prod pipeline is triggered on changes to the main branch. This setup is easy to reason about and has the advantage that deploys always match your source. But it does not replicate the Heroku "promotion" concept.
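A rough CDK (Python) sketch of this option, assuming a GitHub repository "my-org/my-app" and a CodeStar connection (both hypothetical):

```python
from aws_cdk import Stack, pipelines
from constructs import Construct

# Hypothetical CodeStar connection ARN authorizing access to the repo.
CONNECTION_ARN = "arn:aws:codestar-connections:eu-west-1:111111111111:connection/example"

class AppPipelinesStack(Stack):
    def __init__(self, scope: Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)
        # One pipeline per environment, each tied to its own branch.
        for env_name, branch in [("Staging", "staging"), ("Prod", "main")]:
            pipeline = pipelines.CodePipeline(
                self, f"{env_name}Pipeline",
                synth=pipelines.ShellStep(
                    "Synth",
                    input=pipelines.CodePipelineSource.connection(
                        "my-org/my-app", branch, connection_arn=CONNECTION_ARN,
                    ),
                    commands=["npm ci", "npx cdk synth"],
                ),
            )
            # Each pipeline then adds its own deployment stage, e.g.
            # pipeline.add_stage(AppStage(self, env_name)).
```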
Complex Option: 1 GitHub branch, 2 Pipelines
You could probably get something closer to the "promotion" pattern by having a pipelines.CodePipeline deployment for staging (tied to GitHub) and a separate codepipeline.Pipeline pipeline for prod. The latter can be triggered by EventBridge events. Asset handling would be complex in this scenario.
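A hedged sketch of the trigger half, assuming a staging pipeline named "StagingPipeline" and a prod codepipeline.Pipeline built elsewhere and passed in (all names hypothetical):

```python
from aws_cdk import Stack, aws_codepipeline as codepipeline
from aws_cdk import aws_events as events, aws_events_targets as targets
from constructs import Construct

class ProdTriggerStack(Stack):
    def __init__(self, scope: Construct, id: str,
                 prod_pipeline: codepipeline.Pipeline, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)
        # Fire when the staging pipeline's execution succeeds.
        rule = events.Rule(
            self, "PromoteToProd",
            event_pattern=events.EventPattern(
                source=["aws.codepipeline"],
                detail_type=["CodePipeline Pipeline Execution State Change"],
                detail={
                    "pipeline": ["StagingPipeline"],  # hypothetical name
                    "state": ["SUCCEEDED"],
                },
            ),
        )
        # Starting the prod pipeline is the "promotion".
        rule.add_target(targets.CodePipeline(prod_pipeline))
```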
[Edit:] Amplify CI/CD for the Front-end, CodePipeline for Back-end
AWS Amplify CI/CD gives you automatic feature branch deploys, PR review approvals etc. for front-end apps. Manual deploys require a workaround, but are possible. See this related SO question. The CDK supports Amplify build configurations. The catch is that these CI/CD goodies work for front-end apps, but not for arbitrary infrastructure stacks. To get the best of both worlds, split the app in two. Use Amplify for the high-velocity front-end and stick with CodePipeline for the back-end deploys.

Trigger specific AWS Codepipeline source stage when change is made to a specific directory in repo

I have a number of services in a single GitHub repository, each service has its own CodePipeline on AWS managed through Terraform. Instead of triggering all of the pipelines on commit, I'd like to know how I can trigger each service's pipeline only when its directory has changes in a commit, without having to split each service into its own repository.
I don't think there's conditional source stage support per folder in CodePipeline as we speak. I just finished checking the documentation about sources in CodePipeline, and it does not seem to offer folder-level filtering.
You could try this CDK-based template solution, which showcases a mono-repository composed of multiple services, each with its own CI/CD pipeline. The solution detects which top-level directory the modification happened in and triggers the AWS CodePipeline configured for that directory.
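A rough Lambda sketch of that idea, using CodeCommit for illustration (the directory-to-pipeline mapping and names are hypothetical; the event shape is that of a CodeCommit repository state change):

```python
import boto3

codecommit = boto3.client("codecommit")
codepipeline = boto3.client("codepipeline")

# Hypothetical mapping of top-level directories to their pipelines.
PIPELINES = {"service-a": "service-a-pipeline", "service-b": "service-b-pipeline"}

def handler(event, context):
    """Triggered by a CodeCommit push event; starts only the pipelines
    whose top-level directory contains changes."""
    detail = event["detail"]
    diffs = codecommit.get_differences(
        repositoryName=detail["repositoryName"],
        beforeCommitSpecifier=detail["oldCommitId"],
        afterCommitSpecifier=detail["commitId"],
    )["differences"]
    # Collect the top-level directory of every changed file.
    changed_dirs = {
        (d.get("afterBlob") or d.get("beforeBlob"))["path"].split("/", 1)[0]
        for d in diffs
    }
    for directory in changed_dirs:
        if directory in PIPELINES:
            codepipeline.start_pipeline_execution(name=PIPELINES[directory])
```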
This is sad, but they might add it in the future. I've also wanted quality gates and images from README files in CodeCommit, but these features seem too hard to implement, haha.
It ended up being simpler than I had anticipated: there are GitHub Actions that do exactly what I needed.
This action checks whether a path had a change, and this action triggers a specific pipeline.

CD AWS Amplify without Git Flow?

The AWS Amplify service allows for multiple branches to be configured within a single Amplify application, and this is how CD is performed. Each stage is assigned to a particular branch and auto-builds when branch changes are pushed. From my understanding, this is a Git Flow-like approach, with different branches for each stage.
However, we have now split up our Amplify app so that stages are separate Amplify applications; this was done because we are using the CDK and wish to have CI/CD deployment for Amplify infrastructure and components. The infrastructure is now continuously deployed using CodePipeline and self-mutates. This all works off the "master" branch, so any push to this branch pushes changes through the pipeline, first deploying to our test stage and then to our production stage.
The trouble here is that we now have infrastructure being continuously deployed from "master", whereas the actual app that Amplify runs uses separate branches ("master" and "prod"). What I would like is that whenever we push, say, an infrastructure change to production, it also updates the Amplify app with the "master" branch logic.
I have looked into this and found a couple of solutions but neither of them are ideal:
Webhooks - The production app could be set not to auto-build, with a webhook triggered after infrastructure deployment to perform the update. This could work, but it means the app will use the HEAD state of the master branch, which may in theory be ahead of the infrastructure change should multiple commits be pushed at once. Given that our staged pipeline has a manual deploy step, the chance of this happening is high, which is not ideal.
Custom Lambda to run aws amplify start-job with a commit ID; after a deployment we run this command with the commit ID of the CodePipeline execution. This would allow us to get the exact change, but it is extra overhead and infrastructure to maintain. We would also need to do this cross-account. (A sketch follows below.)
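For example, a minimal boto3 sketch of option 2 (the app ID and branch are hypothetical; cross-account use would first assume a role in the target account and pass those credentials to the client):

```python
import boto3

APP_ID = "d1a2b3c4example"  # Amplify app ID (hypothetical)
BRANCH = "master"

def release_amplify(commit_id: str) -> None:
    """Kick off an Amplify build pinned to the exact commit that
    the CodePipeline execution deployed."""
    amplify = boto3.client("amplify")
    amplify.start_job(
        appId=APP_ID,
        branchName=BRANCH,
        jobType="RELEASE",
        commitId=commit_id,
    )
```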
Is there a solution I am overlooking? Is there a way to align our infrastructure as code and our app code to deploy at the same time at the same commit entry without the need for separate branches?

Code pipeline to build a branch on pull request

I am trying to make a CodePipeline which will build my branch when I make a pull request to the master branch in AWS. I have many developers working in my organisation and all the developers work on their own branches. I am not very familiar with creating Lambda functions. Hoping for a solution.
You can dynamically create pipelines every time a new pull request is created. Look for CodeCommit Triggers (in the old CodeCommit UI); you need Lambda for this.
Basically it works like this: copy the existing pipeline and update the source branch.
It is not the best, but AFAIK the only way to do what you want.
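A minimal boto3 sketch of that copy-and-retarget step (pipeline and branch names are hypothetical):

```python
import boto3

cp = boto3.client("codepipeline")

def clone_pipeline_for_branch(template_name: str, branch: str) -> None:
    """Copy an existing pipeline and point its source action at a new branch."""
    decl = cp.get_pipeline(name=template_name)["pipeline"]
    decl["name"] = f"{template_name}-{branch}"  # e.g. "app-pipeline-pr-42"
    decl.pop("version", None)  # a new pipeline starts at version 1
    for stage in decl["stages"]:
        for action in stage["actions"]:
            if action["actionTypeId"]["category"] == "Source":
                # "BranchName" is the configuration key for CodeCommit sources.
                action["configuration"]["BranchName"] = branch
    cp.create_pipeline(pipeline=decl)
```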
I was there and would not recommend it for the following reasons:
I hit this limit of 20 in my region: "Maximum number of pipelines with change detection set to periodically checking for source changes" - but you definitely want this feature (https://docs.aws.amazon.com/codepipeline/latest/userguide/limits.html).
The branch-deleted trigger does not work correctly, so you cannot delete the created pipeline when the branch has been merged into master.
I would recommend using GitHub.com if you need a workflow like the one you described. Sorry for this.
I have recently implemented an approach that uses CodeBuild GitHub webhook support to run initial unit tests and build, and then publish the source repository and built artefacts as a zipped archive to S3.
You can then use the S3 archive as a source in CodePipeline, where you can then transition your PR artefacts and code through Integration testing, Staging deployments etc...
This is quite a powerful pattern, although one trap here is that if you have a lot of pull requests being created at a single time, you can get CodePipeline executions being superseded, given only one execution can proceed through a given stage at a time (this is actually a really important property, especially if your integration tests run against shared resources and you don't want multiple instances of your application running data setup/teardown tasks at the same time). To overcome this, I publish an S3 notification to an SQS FIFO queue when CodeBuild publishes the S3 artifact, and then poll the queue, copying each artifact to a different S3 location that triggers CodePipeline, but only if there are currently no executions waiting to execute after the first CodePipeline source stage.
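A simplified sketch of that queue poller (all names are hypothetical, and it treats "any in-progress execution" as busy rather than inspecting individual stages):

```python
import json
import boto3

cp = boto3.client("codepipeline")
s3 = boto3.client("s3")
sqs = boto3.client("sqs")

QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/111111111111/pr-artifacts.fifo"  # hypothetical
PIPELINE = "pr-pipeline"            # hypothetical
BUCKET = "pr-artifacts"             # hypothetical
TRIGGER_KEY = "trigger/source.zip"  # key the pipeline's S3 source watches (hypothetical)

def pipeline_is_idle() -> bool:
    # Simplification: treat the pipeline as busy while any execution is in progress.
    summaries = cp.list_pipeline_executions(
        pipelineName=PIPELINE, maxResults=10
    )["pipelineExecutionSummaries"]
    return not any(s["status"] == "InProgress" for s in summaries)

def poll_once() -> None:
    msgs = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1).get("Messages", [])
    if not msgs or not pipeline_is_idle():
        return
    # The queued S3 notification carries the key of the published archive.
    record = json.loads(msgs[0]["Body"])["Records"][0]
    src_key = record["s3"]["object"]["key"]
    # Copying to the watched key releases the next PR into the pipeline.
    s3.copy_object(Bucket=BUCKET, Key=TRIGGER_KEY,
                   CopySource={"Bucket": BUCKET, "Key": src_key})
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msgs[0]["ReceiptHandle"])
```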
We can very well have dynamic branching support with the following approach.
One of the limitations of AWS CodePipeline is that we have to specify a branch name while creating the pipeline. We can, however, overcome this using the architecture shown below.
[flow diagram]
Create a Lambda function that takes the GitHub webhook data as input and, using boto3, integrates with the pipeline (pull the pipeline definition and update it). Put an API Gateway in front to invoke the Lambda function as a REST call, and finally create a webhook on the GitHub repository.
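A hedged sketch of such a Lambda, assuming an API Gateway proxy event wrapping a GitHub push payload (the pipeline name is hypothetical, and the source action is assumed to live in the first stage):

```python
import json
import boto3

cp = boto3.client("codepipeline")
PIPELINE = "my-pipeline"  # hypothetical pipeline name

def handler(event, context):
    """API Gateway proxies the GitHub push webhook here; the payload's
    "ref" field looks like "refs/heads/feature-x"."""
    body = json.loads(event["body"])
    branch = body["ref"].rsplit("/", 1)[-1]

    # Pull the current pipeline definition and retarget its source action.
    decl = cp.get_pipeline(name=PIPELINE)["pipeline"]
    for action in decl["stages"][0]["actions"]:
        if action["actionTypeId"]["category"] == "Source":
            action["configuration"]["BranchName"] = branch
    cp.update_pipeline(pipeline=decl)

    return {"statusCode": 200, "body": json.dumps({"branch": branch})}
```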
External links:
https://aws.amazon.com/quickstart/architecture/git-to-s3-using-webhooks/
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/codepipeline.html
Related thread: Dynamically change branches on AWS CodePipeline

AWS Codepipeline Continue Previous Execution

I am using AWS CodePipeline to automatically check out code, build an application with CodeBuild, and deploy the application to an ECS cluster for development. After that, I inserted a manual step to approve deployment to the staging environment. This works well. However, when I run the pipeline again, there seems to be no way to approve the actions in one of the previous executions. As far as I can see, I can only push the latest build artifact to staging (and later to production). This is surely not what I would like to do. I could use more than one pipeline - one for each stage - but then, what is the manual approval good for?
Currently, updating a pipeline will end all in-flight executions at the end of their current action. This includes cancelling in-flight approvals.
After updating your pipeline, you can click "Release Change" to have a fresh execution run through your pipeline, and after that changes will continue to be released as usual.
Unlike creating a pipeline, editing a pipeline does not rerun the most recent revision through the pipeline. If you want to run the most recent revision through a pipeline you've just edited, you must manually rerun it. Otherwise, the edited pipeline will run the next time you make a change to a source location configured in the source stage of the pipeline. For information, see Start a Pipeline Manually in AWS CodePipeline.
From the documentation here: https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-edit.html
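For reference, the console's "Release Change" button corresponds to starting an execution manually, e.g. via boto3 (the pipeline name is hypothetical):

```python
import boto3

# Equivalent of the console's "Release Change" button: start a fresh
# execution that runs the most recent revision through the pipeline.
boto3.client("codepipeline").start_pipeline_execution(name="my-pipeline")
```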