An AWS CI/CD Pipeline that allows manual deploy by commit

Background
I want to create the following CI/CD flow in AWS and Github, for a react app using Amplify:
A single main branch, with short-lived feature branches and PRs into main.
Each PR triggers its own test environment in Amplify, with its own temporary subdomain, which gets torn down when the PR is merged, as described here.
Merging into main does not automatically trigger a deploy to production.
Instead, there is a separate mechanism (a web page, or amplify command, or even triggers based on git tags) for manually selecting a commit from main to deploy to production.
Questions
It's not clear to me if...
Support for this flow is already built into Amplify (based on the docs I've read, I think the answer is "no", but I'm not sure).
Support for this flow is already built into AWS CodePipeline, or if it can be configured there.
There is another AWS tool that solves this.
I'm looking for answers to those questions, or specific references in the docs which address them.

The answers for Amplify are Yes, Yes, Yes, Partially.
(1) A single main branch, with short-lived feature branches and PRs into main.
Yes. Feature branch deploys. You can define which branch patterns, such as feature/*, you wish to auto-deploy.
(2) Each PR triggers its own test environment in Amplify, with its own temporary subdomain,
Yes. Web Previews for PRs. "A web preview deploys every pull request made to your GitHub repository to a unique preview URL which is completely different from the URL your main site uses."
(3) Merging into main does not automatically trigger a deploy to production.
Yes. Disable automatic builds on main.
(4) Instead, there is a separate mechanism ... for manually selecting a commit from main to deploy to production.
Partially (HEAD only?). Call the StartJob API to manually trigger a build, for example from Lambda. The RELEASE job type starts a new job with the latest change from the specified branch. I am not sure whether jobType: MANUAL with a commitId starts a job from an arbitrary commit hash.
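For example, a minimal boto3 sketch of triggering a release from Lambda; the app ID and branch name are placeholders, and the commented-out MANUAL variant is exactly the part I have not verified:

```python
import boto3

amplify = boto3.client("amplify")

def handler(event, context):
    # RELEASE builds the latest commit on the branch; appId and
    # branchName here are placeholders for your own values.
    job = amplify.start_job(
        appId="d1a2b3c4example",
        branchName="main",
        jobType="RELEASE",
        jobReason="Manual production deploy",
    )
    # Unverified: whether MANUAL honors an arbitrary commitId.
    # job = amplify.start_job(
    #     appId="d1a2b3c4example",
    #     branchName="main",
    #     jobType="MANUAL",
    #     commitId=event["commitId"],
    # )
    return job["jobSummary"]["jobId"]
```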
Another workaround for 3+4 is to skip the build for an arbitrary commit. Amplify will skip building if [skip-cd] appears at the end of a commit message.

In my experience, there is no easy way to meet your requirement.
If you are using GitLab, you can try GitLab Review Apps to achieve that (I tried it before with some scripts).

Support for this flow is already built into Amplify (based on the docs I've read, I think the answer is "no", but I'm not sure).
Check the links below to see if they help:
https://www.youtube.com/watch?v=QV2WS535nyI
https://dev.to/rajandmr/deploying-react-app-using-aws-amplify-with-ci-cd-pipeline-setup-3lid
Support for this flow is already built into AWS CodePipeline, or if it can be configured there.
For this, you would need to create your own full pipeline. Yes, you can configure this flow in CodePipeline.
There is another AWS tool that solves this.
If you are okay with Jenkins, then Jenkins will help you achieve this.
You can deploy Jenkins as a Docker container on AWS EC2 and create your pipeline. You can also use a parameterized build for selecting your environment and Git branch.

Related

Trigger specific AWS Codepipeline source stage when change is made to a specific directory in repo

I have a number of services in a single GitHub repository, each with its own CodePipeline on AWS managed through Terraform. Instead of triggering all of the pipelines on every commit, I'd like to know how I can trigger each service's pipeline only if its directory had changes in the commit, without having to split each service into its own repository.
I don't think there is conditional source stage support per folder in CodePipeline as we speak. I just finished checking this documentation about sources in CodePipeline; it does not seem to contain folder-level filtering.
You could try this CDK-based template solution, which showcases a mono-repository composed of multiple services, with a different CI/CD pipeline for each service. The solution detects which top-level directory the modification happened in and triggers the AWS CodePipeline configured for that directory.
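As a rough illustration of that pattern, here is a minimal Lambda sketch; the directory-to-pipeline mapping and the shape of the incoming changed-file list are assumptions, since a real implementation would derive the changed paths from the webhook or CodeCommit event:

```python
import boto3

codepipeline = boto3.client("codepipeline")

# Hypothetical mapping from top-level directory to its pipeline.
PIPELINES = {
    "service-a": "service-a-pipeline",
    "service-b": "service-b-pipeline",
}

def handler(event, context):
    # Assumes the event carries the changed file paths, e.g. extracted
    # from a webhook payload or a CodeCommit GetDifferences call.
    changed_files = event["changed_files"]
    touched_dirs = {path.split("/")[0] for path in changed_files}
    started = []
    for directory in touched_dirs:
        pipeline = PIPELINES.get(directory)
        if pipeline:
            result = codepipeline.start_pipeline_execution(name=pipeline)
            started.append((pipeline, result["pipelineExecutionId"]))
    return started
```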
This is sad, but they might add it in the future. I've also wanted quality gates and images from README files in CodeCommit, but these features seem too hard to implement, haha.
It ended up being simpler than I had anticipated; there are GitHub Actions that do exactly what I needed.
This action checks whether a path had a change, and this action triggers a specific pipeline.

AWS CodeCommit prevent merge until successful build

I'm using an AWS Lambda function to kick off a build in AWS CodeBuild when a Pull Request is created or updated in AWS CodeCommit, which is working well.
However, I'd like to be able to prevent the merging of that Pull Request in to the master branch of the repository, until the latest build for that PR has completed successfully.
Does anyone know if there's a way that can be done in AWS? E.g. so that the Merge button is disabled or not available, like when not enough approvers have been obtained?
I was looking into this myself and from what I understand, it is currently not possible to directly create this rule, but I think it should be doable with a different approach.
Instead of requiring a custom rule that disables merging (which doesn't exist today), you could make it so that the PR requires review from a specific IAM user. With that, you could probably use a fixed "build" user, and fire an automatic approval request for the PR once the build finishes successfully. This will in turn "approve" that rule in the PR and allow it to be merged after the build succeeds.
Since approval can be done via the CLI, I'm sure it is also possible via the API. For example, you could use this API to automatically mark any given PR as approved by the calling user, then ensure the service calling it is the same user registered in the "build" approval template.
Besides the HTTP WebApi, there are also other ways to call into these CodeCommit actions, like the AWS SDK library (C# example: https://www.nuget.org/packages/AWSSDK.CodeCommit/).
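A minimal boto3 sketch of that approval call, assuming the caller runs as the IAM "build" user named in the approval rule template and receives the pull request ID from the build event:

```python
import boto3

codecommit = boto3.client("codecommit")

def approve_pull_request(pull_request_id: str) -> None:
    # The approval is tied to the PR's current revision, so fetch it first.
    pr = codecommit.get_pull_request(pullRequestId=pull_request_id)
    revision_id = pr["pullRequest"]["revisionId"]
    # Marks the PR approved by the calling identity; this must be the
    # IAM user registered in the approval rule template.
    codecommit.update_pull_request_approval_state(
        pullRequestId=pull_request_id,
        revisionId=revision_id,
        approvalState="APPROVE",
    )
```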

Trigger AWS codepipeline manually and not on every commit using bitbucket codestar connection

I am not able to find a way to stop the auto triggering of the pipeline whenever I push code to bitbucket.
My assumption is that you want more control over when your pipeline does certain things.
Rather than achieving this through stopping the pipeline from getting triggered, I'd recommend using either stage transitions or manual approvals to achieve this control inside the pipeline.
Stage transitions are better when you want to "turn off" a pipeline and have the latest thing run through when you turn it back on.
Manual approvals are better when you want the version to be locked while waiting for approval so you can run tests without worrying that the version will change.
You mentioned in your comment that you want to run your pipeline only at certain times, so one way to do that is to enable and disable the stage transition after source on a schedule (see the sketch after the links below).
https://docs.aws.amazon.com/codepipeline/latest/userguide/transitions.html
https://docs.aws.amazon.com/codepipeline/latest/userguide/approvals.html
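For example, a Lambda sketch you could invoke from two EventBridge schedules to open and close the transition; the pipeline name, stage name, and the "action" field on the event are all assumptions:

```python
import boto3

codepipeline = boto3.client("codepipeline")

PIPELINE = "my-pipeline"  # assumed pipeline name
STAGE = "Deploy"          # assumed stage whose inbound transition is gated

def handler(event, context):
    # One schedule could invoke this with {"action": "enable"} during the
    # allowed deploy window, another with {"action": "disable"} outside it.
    if event.get("action") == "enable":
        codepipeline.enable_stage_transition(
            pipelineName=PIPELINE,
            stageName=STAGE,
            transitionType="Inbound",
        )
    else:
        codepipeline.disable_stage_transition(
            pipelineName=PIPELINE,
            stageName=STAGE,
            transitionType="Inbound",
            reason="Outside the scheduled deploy window",
        )
```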
You can disable the DetectChanges parameter on your Source action, as explained here. Here is the extract with the relevant context:
DetectChanges: Controls automatically starting your pipeline when a new commit is made on the configured repository and branch. If unspecified, the default value is true, and the field does not display by default.
This works on Bitbucket, GitHub, and GitHub Enterprise Server actions. I have a CloudFormation template configured with this option and it works. I'm not sure about the same option in the AWS console, because I have seen that some configurations are only available from CloudFormation or the AWS CLI. As it says, "this field does not display by default".
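If you would rather flip the option on an existing pipeline than redeploy a CloudFormation template, a boto3 sketch like this should work, assuming the source action is the first action of the first stage:

```python
import boto3

codepipeline = boto3.client("codepipeline")

def disable_detect_changes(pipeline_name: str) -> None:
    pipeline = codepipeline.get_pipeline(name=pipeline_name)["pipeline"]
    # Assumes the source action is the first action of the first stage.
    source_action = pipeline["stages"][0]["actions"][0]
    # Action configuration values are strings, so use "false", not False.
    source_action["configuration"]["DetectChanges"] = "false"
    codepipeline.update_pipeline(pipeline=pipeline)
```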

Code pipeline to build a branch on pull request

I am trying to make a CodePipeline which will build my branch when I make a pull request to the master branch in AWS. There are many developers in my organisation, and all of them work on their own branches. I am not very familiar with creating Lambda functions. Hoping for a solution.
You can dynamically create pipelines every time a new pull request is created. Look for CodeCommit triggers (in the old CodePipeline UI); you need Lambda for this.
Basically it works like this: copy the existing pipeline and update the source branch.
It is not the best approach, but as far as I know it is the only way to do what you want.
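A rough boto3 sketch of that copy-and-retarget step; the naming scheme is made up, and the branch configuration key depends on the source provider:

```python
import boto3

codepipeline = boto3.client("codepipeline")

def clone_pipeline_for_branch(template_pipeline: str, branch: str) -> str:
    pipeline = codepipeline.get_pipeline(name=template_pipeline)["pipeline"]
    pipeline.pop("version", None)  # version is assigned by CodePipeline
    # Hypothetical naming scheme for the per-branch pipeline.
    pipeline["name"] = f"{template_pipeline}-{branch}".replace("/", "-")
    # Assumes the source action is the first action of the first stage;
    # the key is "Branch" for the classic GitHub source action and
    # "BranchName" for CodeStar connection sources.
    config = pipeline["stages"][0]["actions"][0]["configuration"]
    key = "BranchName" if "BranchName" in config else "Branch"
    config[key] = branch
    codepipeline.create_pipeline(pipeline=pipeline)
    return pipeline["name"]
```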
I was there and would not recommend it for the following reasons:
I hit the limit of 20 in my region: "Maximum number of pipelines with change detection set to periodically checking for source changes" - but you definitely want this feature (https://docs.aws.amazon.com/codepipeline/latest/userguide/limits.html).
The branch-deleted trigger does not work correctly, so you cannot delete the created pipeline when the branch has been merged into master.
I would recommend using GitHub.com if you need a workflow like the one you described. Sorry about that.
I have recently implemented an approach that uses CodeBuild GitHub webhook support to run initial unit tests and build, and then publish the source repository and built artefacts as a zipped archive to S3.
You can then use the S3 archive as a source in CodePipeline, where you can then transition your PR artefacts and code through Integration testing, Staging deployments etc...
This is quite a powerful pattern, although one trap here is that if you have a lot of pull requests being created at a single time, you can get CodePipeline executions being superseded, given only one execution can proceed through a given stage at a time (this is actually a really important property, especially if your integration tests run against shared resources and you don't want multiple instances of your application running data setup/teardown tasks at the same time).
To overcome this, I publish an S3 notification to an SQS FIFO queue when CodeBuild publishes the S3 artifact, and then poll the queue, copying each artifact to a different S3 location that triggers CodePipeline, but only if there are currently no executions waiting to execute after the first CodePipeline source stage.
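A condensed sketch of that gate, with queue URL, bucket name, and pipeline name as placeholders, and assuming the queue messages carry the standard S3 event notification JSON; it releases the next artifact only when nothing is executing past the source stage:

```python
import json
import boto3

sqs = boto3.client("sqs")
s3 = boto3.client("s3")
codepipeline = boto3.client("codepipeline")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/pr-artifacts.fifo"
TRIGGER_BUCKET = "my-pipeline-source-bucket"  # placeholder
PIPELINE = "pr-pipeline"                      # placeholder

def pipeline_is_busy() -> bool:
    # Busy if any stage after the first (source) stage is still running.
    state = codepipeline.get_pipeline_state(name=PIPELINE)
    return any(
        stage.get("latestExecution", {}).get("status") == "InProgress"
        for stage in state["stageStates"][1:]
    )

def handler(event, context):
    if pipeline_is_busy():
        return  # leave messages on the queue for the next poll
    messages = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1)
    for msg in messages.get("Messages", []):
        record = json.loads(msg["Body"])["Records"][0]["s3"]
        bucket = record["bucket"]["name"]
        key = record["object"]["key"]
        # Copying into the trigger bucket starts the pipeline.
        s3.copy_object(
            Bucket=TRIGGER_BUCKET,
            Key=key,
            CopySource={"Bucket": bucket, "Key": key},
        )
        sqs.delete_message(
            QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"]
        )
```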
We can have dynamic branching support with the following approach.
One of the limitations of AWS CodePipeline is that we have to specify the branch name while creating the pipeline. We can, however, overcome this using the architecture shown below.
[flow diagram]
Create a Lambda function that takes the GitHub webhook data as input and, using boto3, integrates it with the pipeline (pull the pipeline definition and update it). Put an API Gateway in front of the Lambda function so it can be called as a REST endpoint, and finally create a webhook on the GitHub repository. A sketch of the Lambda follows the links below.
External links:
https://aws.amazon.com/quickstart/architecture/git-to-s3-using-webhooks/
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/codepipeline.html
Related thread: Dynamically change branches on AWS CodePipeline
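A minimal sketch of that Lambda, assuming a GitHub push webhook payload forwarded through an API Gateway proxy integration, a classic GitHub source action, and a placeholder pipeline name:

```python
import json
import boto3

codepipeline = boto3.client("codepipeline")

PIPELINE = "my-app-pipeline"  # placeholder

def handler(event, context):
    # API Gateway proxy integration delivers the webhook JSON as a string.
    payload = json.loads(event["body"])
    # Push events carry the branch as "refs/heads/<branch>".
    branch = payload["ref"].removeprefix("refs/heads/")
    pipeline = codepipeline.get_pipeline(name=PIPELINE)["pipeline"]
    # Assumes a classic GitHub source action, whose branch key is "Branch".
    pipeline["stages"][0]["actions"][0]["configuration"]["Branch"] = branch
    codepipeline.update_pipeline(pipeline=pipeline)
    return {"statusCode": 200, "body": f"pipeline now tracks {branch}"}
```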

Trigger deployment button in Jenkins pipeline

I'm setting up a Continuous Delivery pipeline for my team with Jenkins. As a final step, we want to deploy to AWS.
I came across this while searching:
The last step is a button you can click to trigger deployment. Very nice! However, I searched through the Jenkins plugins page, but I don't think it is there (or it is under a vague name).
Any ideas what it could be?
I'm not sure about the specific plugin you are looking for, but there is a Jenkins plugin for CodeDeploy, which can automatically create a deployment as a post-build action. See: https://github.com/awslabs/aws-codedeploy-plugin
It really depends on what kind of requirements you have for the actual deployment procedure. One thing to keep in mind, if you do infrastructure as code to set up your pipelines automatically (e.g. through Job DSL or Jenkins Job Builder), is that the particular plugins must be supported. For that reason it sometimes might be more convenient to just script your deployments instead of relying on plugins. I've implemented multiple deployment jobs from Jenkins to AWS by just using plain AWS CLI commands, e.g. triggering CloudFormation creation/updates.
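For instance, a plain boto3 sketch of such a scripted deployment step; the stack name and template path are placeholders:

```python
import boto3

cloudformation = boto3.client("cloudformation")

def deploy(stack_name: str = "my-app-stack",
           template_path: str = "template.yaml") -> None:
    # Equivalent to calling `aws cloudformation update-stack` from a
    # Jenkins shell step; stack name and template path are placeholders.
    with open(template_path) as f:
        template_body = f.read()
    cloudformation.update_stack(
        StackName=stack_name,
        TemplateBody=template_body,
        Capabilities=["CAPABILITY_IAM"],
    )
    # Block until the update finishes so the Jenkins job fails on error.
    waiter = cloudformation.get_waiter("stack_update_complete")
    waiter.wait(StackName=stack_name)
```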
It turns out that there is a button to trigger an operation in the plugin. It was hard to spot because the UI of the plugin was redesigned and the button became smaller.