aws_codepipeline inside cdk-pipelines, how?

I'm exploring how to do CI/CD using cdk-pipelines, but while the setup part works, I don't understand the control part. All the examples are really simple Lambda functions with inline code.
How do I "release a change" of a codepipeline.Pipeline that is defined inside a cdk-pipelines stage?
const cdk_pipeline = new pipelines.CodePipeline(...)
cdk_pipeline.addStage(new BuildImageTestStage(...))
// build-image-test-stage.ts
new BuildImageTestStack(...)
// build-image-test-stack.ts
const pipeline = new codepipeline.Pipeline(...)
pipeline.addStage(...CodeStarConnectionsSourceAction...)
pipeline.addStage(...CodeBuildAction...)
It sets up the pipeline just fine, but it doesn't fire off the actual CodePipeline itself.
Options that I see:
codepipeline.Pipeline(triggerOnPush: true) - doesn't work, because I want the CloudFormation to run first and THEN build+test. Triggering on push effectively runs them in parallel.
Completely separate the setup from the code deployment by using codepipeline_actions.CloudFormationCreateUpdateStackAction.
An option I'm unsure about:
cdk_pipeline.addStage(..., { post: [] }) - use a CDK Pipeline post step to trigger a code deploy, but I'm unsure how to "wait" for it to finish.
Separately, I wish cdk-pipelines were named differently than codepipeline. It's just so hard to search.

[Edit after exchanges in the comments]: pipelines.CodePipeline is a wrapper for a codepipeline.Pipeline. It creates a new codepipeline.Pipeline under the hood in its constructor. The const pipeline = new codepipeline.Pipeline(...) line is adding a *second* codepipeline.Pipeline. You almost certainly don't want two. You have a few options:
Option 1: Pass codePipeline: pipeline in the cdk_pipeline constructor. cdk_pipeline will use your codepipeline.Pipeline instead of creating its own.
Option 2: Get rid of your pipeline instance. Refactor your build/test actions as pre or post Steps in the options for cdk_pipeline.addStage or addWave. A CodeBuildStep will add a build step, for example. You can add arbitrary CodePipeline actions as steps, too.
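For reference, a minimal sketch of both options in TypeScript. The repo, connection ARN, construct IDs, and commands are placeholders of my own, the scope variable stands in for your pipeline stack, and I'm assuming BuildImageTestStage is a cdk.Stage subclass taking (scope, id):

import * as codepipeline from 'aws-cdk-lib/aws-codepipeline';
import * as pipelines from 'aws-cdk-lib/pipelines';
import { Construct } from 'constructs';
import { BuildImageTestStage } from './build-image-test-stage';

declare const scope: Construct; // e.g. your pipeline stack

// Option 1: hand your own codepipeline.Pipeline to the wrapper so only one pipeline exists.
// The pipeline you pass in should not have any stages added to it yet.
const pipeline = new codepipeline.Pipeline(scope, 'Pipeline');

const cdk_pipeline = new pipelines.CodePipeline(scope, 'CdkPipeline', {
  codePipeline: pipeline,
  synth: new pipelines.ShellStep('Synth', {
    input: pipelines.CodePipelineSource.connection('my-org/my-repo', 'main', {
      connectionArn: 'arn:aws:codestar-connections:...', // placeholder
    }),
    commands: ['npm ci', 'npm run build', 'npx cdk synth'],
  }),
});

// Option 2: drop the second pipeline entirely and express build/test as a post step,
// so it runs after the stage's stacks have been deployed.
cdk_pipeline.addStage(new BuildImageTestStage(scope, 'BuildImageTest'), {
  post: [
    new pipelines.CodeBuildStep('BuildAndTestImage', {
      commands: ['./build-image.sh', './run-tests.sh'], // hypothetical scripts
    }),
  ],
});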
AWS CodePipeline executions always run from beginning to end, each action "waiting" for the one before it to finish (unless the execution is stopped by an error). A CDK pipelines.CodePipeline always starts with the source action defined in its synth property.
The addStage method adds a serial stage to the pipeline. To add parallel stages, use addWave.
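Continuing the sketch above (stage IDs are again hypothetical):

cdk_pipeline.addStage(new BuildImageTestStage(scope, 'First'));  // runs after the previous stage

const wave = cdk_pipeline.addWave('Second');                     // stages in a wave run in parallel
wave.addStage(new BuildImageTestStage(scope, 'SecondA'));
wave.addStage(new BuildImageTestStage(scope, 'SecondB'));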
A CDK pipelines.CodePipeline makes it easy to create a pipeline that deploys CDK apps. It abstracts away the details of the more general-purpose codepipeline.Pipeline. If your use case is something other than building/testing/deploying CDK apps, though, you may be better off working directly with the latter, as the docs say.

Related

Multiple configuration files for a CodePipeline action

I want to run a CFN stack from CodePipeline, but the parameters are scattered across different outputs from previous actions. The only way I can think of is to run a Lambda action which loads all of them and creates a new output out of them.
Is there something more straightforward?

Conditionally execute stage in AWS Codepipeline

I would like to conditionally execute a certain stage in AWS CodePipeline depending on whether I put a certain file in a certain location in the repo. So, if I put "some_file.txt" in that location, I want CodePipeline to check for the existence of this file and, if it's there, continue on to deploy the code to production; otherwise, stop at that stage.
With this I would like to avoid a manual approval action and control the release process by committing a file. Is this possible, and what would be best practice?
I think you could create a lambda action for that:
Invoke an AWS Lambda function in a pipeline in CodePipeline
The lambda function can access the input artifact, and check if your file of interest is there or not.
Depending on the outcome of the check, the function will call either put_job_success_result or put_job_failure_result to continue or stop the pipeline.
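As an illustration only, here is a minimal sketch of such a handler in TypeScript with AWS SDK v3 (the JavaScript equivalents of those calls are PutJobSuccessResultCommand and PutJobFailureResultCommand; the artifact check itself is stubbed out as a hypothetical helper):

import {
  CodePipelineClient,
  PutJobSuccessResultCommand,
  PutJobFailureResultCommand,
} from '@aws-sdk/client-codepipeline';

const codepipelineClient = new CodePipelineClient({});

// Hypothetical: inspect the input artifact (e.g. download it from S3) for "some_file.txt".
async function releaseFileExists(event: unknown): Promise<boolean> {
  return false; // placeholder
}

export const handler = async (event: { 'CodePipeline.job': { id: string } }): Promise<void> => {
  const jobId = event['CodePipeline.job'].id;
  if (await releaseFileExists(event)) {
    // Report success so the pipeline continues to the next stage.
    await codepipelineClient.send(new PutJobSuccessResultCommand({ jobId }));
  } else {
    // Report failure so the pipeline stops at this stage.
    await codepipelineClient.send(new PutJobFailureResultCommand({
      jobId,
      failureDetails: { type: 'JobFailed', message: 'some_file.txt not found' },
    }));
  }
};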
You can also use the buildspec file to check whether the needed file is present. If not, you can execute a stop-pipeline-execution command (https://docs.aws.amazon.com/cli/latest/reference/codepipeline/stop-pipeline-execution.html). The required arguments can be fetched from the environment variables; one more thing to note is to give that stage adequate permission(s) to be able to execute the command.

Perform CloudFormation only if there are changes in a Lambda, using AWS CodePipeline

I am using AWS CodePipeline to perform CloudFormation deployments. My source code is committed in a GitHub repository. Whenever a commit happens in my GitHub repository, AWS CodePipeline starts its execution and performs the CloudFormation deployment. These functionalities are working fine.
In my project I have multiple modules. So even if a user modifies only one module, the Lambdas of all the modules are updated. Is there any way to restrict this using AWS CodePipeline?
My Code Pipeline has 3 stages.
Source
Build
Deploy
The following is the snapshot of my code pipeline.
We had a similar issue and eventually came to the conclusion that this is not exactly possible. So unless you separate your modules into different repos and make separate pipelines for each of them, it is always going to execute everything.
The good thing is that each execution of the pipeline does not entirely redeploy everything when the CloudFormation is executed. In the deploy stage you can add a Create Change Set action, which detects what has changed since the previous CloudFormation deployment, redeploys only those parts, and does not touch anything else.
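For reference, if you define the pipeline in CDK (TypeScript), the create-then-execute change set pattern might look roughly like this; the stack, change set, and template names are hypothetical, and the pipeline and build output artifact are assumed to exist already:

import * as codepipeline from 'aws-cdk-lib/aws-codepipeline';
import * as cpactions from 'aws-cdk-lib/aws-codepipeline-actions';

declare const pipeline: codepipeline.Pipeline;
declare const buildOutput: codepipeline.Artifact;

pipeline.addStage({
  stageName: 'Deploy',
  actions: [
    // 1. Diff the template against the currently deployed stack.
    new cpactions.CloudFormationCreateReplaceChangeSetAction({
      actionName: 'CreateChangeSet',
      stackName: 'my-app-stack',          // hypothetical
      changeSetName: 'my-app-changeset',  // hypothetical
      templatePath: buildOutput.atPath('template.yaml'),
      adminPermissions: true,
      runOrder: 1,
    }),
    // 2. Apply only the changes described by the change set.
    new cpactions.CloudFormationExecuteChangeSetAction({
      actionName: 'ExecuteChangeSet',
      stackName: 'my-app-stack',
      changeSetName: 'my-app-changeset',
      runOrder: 2,
    }),
  ],
});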
This is the exact issue we faced recently, and while I see comments mentioning that it isn't possible to achieve with a single repository, I have found a workaround!
Generally, the pipeline is triggered by a CloudWatch event listening to the GitHub/CodeCommit repository. Rather than triggering the pipeline, I made the CloudWatch event trigger a Lambda function. In the Lambda, we can write the logic to execute the pipeline(s) only for the module which has changes. This works really nicely and provides a lot of control over pipeline execution. This way multiple pipelines can be created from a single repository, solving the problem mentioned in the question.
Lambda logic can be something like:
import boto3

# Map config files to pipelines
project_pipeline_mapping = {
    "CodeQuality_ScoreCard": "test-pipeline-code-quality",
    "ProductQuality_ScoreCard": "test-product-quality-pipeline"
}

files_to_ignore = ["readme.md"]

codecommit_client = boto3.client('codecommit')
codepipeline_client = boto3.client('codepipeline')

def lambda_handler(event, context):
    projects_changed = []

    # Extract commits
    print("\n EVENT::: ", event)
    old_commit_id = event["detail"]["oldCommitId"]
    new_commit_id = event["detail"]["commitId"]

    # Get commit differences
    codecommit_response = codecommit_client.get_differences(
        repositoryName="ScorecardAPI",
        beforeCommitSpecifier=str(old_commit_id),
        afterCommitSpecifier=str(new_commit_id)
    )
    print("\n Code commit response: ", codecommit_response)

    # Search commit differences for files that trigger executions
    for difference in codecommit_response["differences"]:
        file_name = difference["afterBlob"]["path"]
        project_name = file_name.split('/')[0]
        print("\nChanged project: ", project_name)

        # If project corresponds to pipeline, add it to the pipeline array
        if project_name in project_pipeline_mapping:
            projects_changed.insert(len(projects_changed), project_name)

    projects_changed = list(dict.fromkeys(projects_changed))
    print("pipeline(s) to be executed: ", projects_changed)

    for project in projects_changed:
        codepipeline_response = codepipeline_client.start_pipeline_execution(
            name=project_pipeline_mapping[project]
        )
Check AWS blog on this topic: Customizing triggers for AWS CodePipeline with AWS Lambda and Amazon CloudWatch Events
Why not model this as a pipeline per module?

How to trigger CodePipeline for GitHub pull requests being merged?

How can I configure CodePipeline to be triggered for Pull Requests being opened, edited or merged?
Here is a Terraform configuration:
resource "aws_codepipeline_webhook" "gh_to_codepipeline_integration" {
name = "gh_to_codepipeline_integration"
authentication = "GITHUB_HMAC"
target_action = "Source"
target_pipeline = aws_codepipeline.mycodepipeline.name
authentication_configuration {
secret_token = var.github_webhook_secret
}
// accept pull requests
// Is there a way to filter on the PR being closed and merged? This isn't it...
filter {
json_path = "$.action"
match_equals = "closed"
}
}
CodePipeline is set to accept webhook events that match all of the conditions specified in the filters, which correspond to pull request events.
Note that the GitHub documentation states for the action field of a PullRequestEvent (my emphasis in bold):
The action that was performed. Can be one of assigned, unassigned,
review_requested, review_request_removed, labeled, unlabeled, opened,
edited, closed, ready_for_review, locked, unlocked, or reopened. If
the action is closed and the merged key is false, the pull request was
closed with unmerged commits. If the action is closed and the merged
key is true, the pull request was merged. While webhooks are also
triggered when a pull request is synchronized, Events API timelines
don't include pull request events with the synchronize action.
It seems like I need to filter on both $.action == closed && $.pull_request.merged == true, but it doesn't look like I can do both. If I just filter on $.action == closed then my pipeline will rebuild if PRs are closed without merging. Is this an oversight on my part, or are CodePipelines not as flexible in their triggers as CodeBuild projects?
For pull requests being opened/updated, this is not natively supported because CodePipeline's Git integrations require a branch name and the source branch of a PR is variable, unless you open PRs on long-running branches like dev, qa, etc. (e.g. if you are using a Gitflow-based workflow).
The way that we support PRs based on dynamic branches is to use CodeBuild for the build/unit test stage of our workflow, and then package up the repository and build artefacts to S3. From there we trigger deployment pipelines for integration and acceptance environments using the S3 artefact as the source. Using CodePipeline for deployments is powerful as it automatically ensures only one stage can execute at a time, meaning only one change for a given application is going through a given environment at any one time.
This approach is however quite complex and requires quite a bit of Lambda magic mixed with SQS FIFO queues to deal with concurrent PRs (this is to overcome the superseding behaviour of CodePipeline), but it's quite a powerful pattern. We also use GitHub reviews to do things like trigger acceptance stage, and auto-approve manual approval steps in CodePipeline.
Once you are ready to merge the PR, we just use normal CodePipeline triggered off master to deploy to production - one thing you also need to do is ensure you use the artefact that was built and tested on the PR.
I'm not sure why you want to trigger the whole pipeline when a pull request is opened? The way I usually set things up is:
CodePipeline watches the master branch and triggers on a push to it
It will run some builds in CodeBuild
If the builds pass it runs a deploy
Then we have CodeBuild which gets triggered by both CodePipeline and also GitHub pull requests:
resource "aws_codebuild_webhook" "dev" {
project_name = aws_codebuild_project.dev.name
filter_group {
filter {
type = "EVENT"
pattern = "PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED"
}
}
}
Then you can use codebuild filters to choose when to trigger the build. The terraform docs are also helpful.

Can I have terraform keep the old versions of objects?

New to Terraform, so perhaps it's just not supposed to work this way. I want to use aws_s3_bucket_object to upload a package to a bucket - this is part of an app deploy. I'm going to be changing the package for each deploy and I want to keep the old versions.
resource "aws_s3_bucket_object" "object" {
bucket = "mybucket-app-versions"
key = "version01.zip"
source = "version01.zip"
}
But after running this for a future deploy I will want to upload version02 and then version03 etc. Terraform replaces the old zip with the new one- expected behavior.
But is there a way to have terraform not destroy the old version? Is this a supported use case here or is this not how I'm supposed to use terraform? I wouldn't want to force this with an ugly hack if terraform doesn't have official support for doing something like what I'm trying to do here.
I could of course just call the S3 api via script, but it would be great to have this defined with the rest of the terraform definition for this app.
When using Terraform for application deployment, the recommended approach is to separate the build step from the deploy step and use Terraform only for the latter.
The responsibility of the build step -- which is implemented using a separate tool, depending on the method of deployment -- is to produce some artifact (an archive, a docker container, a virtual machine image, etc), publish it somewhere, and then pass its location or identifier to Terraform for deployment.
This separation between build and deploy allows for more complex situations, such as rolling back to an older artifact (without rebuilding it) if the new version has problems.
In simple scenarios it is possible to pass the artifact location to Terraform using Input Variables. For example, in your situation where the build process would write a zip file to S3, you might define a variable like this:
variable "archive_name" {
}
This can then be passed to whatever resource needs it using ${var.archive_name} interpolation syntax. To deploy a particular artifact, pass its name on the command line using -var:
$ terraform apply -var="archive_name=version01.zip"
Some organizations prefer to keep a record of the "current" version of each application in some kind of data store, such as HashiCorp Consul, and read it using a data source. This approach can be easier to orchestrate in an automated build pipeline, since it allows this separate data store to be used to indirectly pass the archive name between the build and deploy steps, without needing to pass any unusual arguments to Terraform itself.
Currently, you tell terraform to manage one aws_s3_bucket_object and terraform takes care of its whole life-cycle, meaning terraform will also replace the file if it sees any changes to it.
What you are maybe looking for is the null_resource. You can use it to run a local-exec provisioner to upload the file you need with a script. That way, the old file won't be deleted, as it is not directly managed by terraform. You'd still be calling the API via a script then, but the whole process of uploading to s3 would still be included in your terraform apply step.
Here is an outline of the null_resource:
resource "null_resource" "upload_to_s3" {
  depends_on = ["<any resource that should already be created before upload>"]
  ...
  triggers = {
    trigger = "<a resource change that must have happened so terraform starts the upload>"
  }

  provisioner "local-exec" {
    command = "<command to upload local package to s3>"
  }
}