I'm new to AWS and transitioning from Azure. I would like to create a pipeline in CodePipeline which asks the user for input (for example: the user needs to input a value for the variable "hello"), and uses that input to run a CodeBuild project. In Azure DevOps this was quite easy to define in the pipeline YML specification, but I can't seem to find a way to easily do this in AWS, or am I missing something?
AWS CodePipeline does not support this feature currently. What you can do is pass the parameter in your commit message (if the pipeline triggers on commits to branches) or in your Git tag (if the pipeline triggers on tag pushes).
example:
commit message: my commit message [my_var]
git tag: my_var-1.0.0
Then, in your buildspec.yml file, collect the commit message or tag and check whether it contains your required parameter. If so, execute the next commands; otherwise, exit the script.
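For example, here is a minimal Python sketch of that check (the script name and marker token are illustrative; it assumes the build has the .git metadata available so the last commit message can be read, which may not hold for CodePipeline source artifacts, in which case you would fetch the message via the CodeCommit/GitHub API instead):

# check_commit.py - fail the build unless the marker is present in the last commit message
import subprocess
import sys

REQUIRED_MARKER = "[my_var]"  # the token you embed in the commit message

# Read the most recent commit message (assumes .git metadata is present)
commit_message = subprocess.check_output(
    ["git", "log", "-1", "--pretty=%B"], text=True
)

if REQUIRED_MARKER not in commit_message:
    print(f"{REQUIRED_MARKER} not found in commit message; exiting")
    sys.exit(1)

print("Marker found, continuing with the build")

A buildspec command such as "python check_commit.py" placed before the rest of the build phase would then gate the remaining steps.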
I'm using CDK to set up a CI/CD pipeline. The pipeline currently pulls code from a Git repository. There are then two builds: one that pulls out the code for a Lambda and builds an artifact for it, and a second that runs cdk synth to construct the Lambda framework (including a nested bucket and DynamoDB table).
Then it heads to a deploy stage, but fails because it can't find the parameters for the location of the Lambda code.
I've been using this example: https://docs.aws.amazon.com/cdk/latest/guide/codepipeline_example.html
The only differences from that example are that I'm using Python for all of it and, due to known future needs, the Lambdas are in a directory parallel to the stack code:
|-Lambdas
|--Lambda1
|---Lambda1Code
|--Lambda2
|---Lambda2Code
|-CDKStacks
|--LambdaCreationStack
|--PipelineCreationStack
|--app.py
Everything runs up until the deploy stage, where it fails with the error "The following CloudFormation Parameters are missing a value:" and then lists BucketName and ObjectKey.
I assigned those as overrides as per the above link:
admin_permissions=True,
parameter_overrides=dict(
    lambda_code.assign(
        bucket_name=lambda_location.bucket_name,
        object_key=lambda_location.object_key,
        object_version=lambda_location.object_version
    )
),
as part of the pipeline's CloudFormationCreateUpdateStackAction, and passed the code from the Lambda stack to the pipeline stack just like in the example. But every time the Lambda stack attempts to deploy, the parameters for the location of the code 'do not exist'.
I've tried overriding the parameters, but since they are created dynamically in the pipeline I'm hesitant to go further down that path (and my attempts didn't work anyway). I've tried a bunch of different stack/nested-stack/single-stack configurations, but haven't had a success yet.
Thoughts?
This basically boils down to the fact that the CodeUri in the CloudFormation template will automatically have the S3 bucket appended if your CodeUri starts with ./
So you have two options:
Option 1: In your pipeline, output your artifact as normal; just pass the whole repo from CodeBuild into the deploy stage. The deploy stage can pick up the artifact naturally and will automatically append the S3 URL to it.
If you're using Python, however, you MUST be aware that starting from a Lambda directory deeper in the tree means the Python imports expect that directory to be the root directory. In other words, if you were in Lambdas/Lambda1 and wanted to import a file that exists in the Lambda1 directory, for it to work on AWS Lambda the import would need to be just the file name, ignoring the rest of the path.
This means that coding can be difficult, and running unit tests can be difficult as well. You'll want to add all the individual Lambda folders (and their paths from root) to the PYTHONPATH environment variable of your CodeBuild instance so the unit tests know where to resolve the imports (and add a .env file to your IDE as well to handle this locally).
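As a hedged illustration of the same idea for local test runs, a conftest.py at the repository root can put each Lambda's code directory on sys.path (the glob pattern assumes the layout from the question and may need adjusting):

# conftest.py - make "flat" Lambda-style imports resolvable in unit tests
import sys
from pathlib import Path

REPO_ROOT = Path(__file__).parent

# e.g. Lambdas/Lambda1/Lambda1Code in the layout above
for code_dir in REPO_ROOT.glob("Lambdas/*/*Code"):
    if code_dir.is_dir():
        sys.path.insert(0, str(code_dir))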
Option 2: You use CDK and you cdk synth the stack you want to deploy. This creates a cdk.out folder containing a bunch of asset zips plus the stack template (a JSON file). You adjust your artifact output in CodeBuild to output the cdk.out folder, and the asset zips are automatically (thanks to CDK) substituted into the CodeUri locations in the template, which is also synthesized automatically. Once you know the template's name, it's easy to point the deploy action at that template, and it will find the asset zips individually for each Lambda.
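A minimal sketch of what the synth build project could look like in CDK Python (CDK v1, matching the linked example; the construct ID and install commands are illustrative), with cdk.out as the artifact base directory so the synthesized template and its asset zips travel together:

from aws_cdk import aws_codebuild as codebuild

# Inside your pipeline stack (self)
cdk_build = codebuild.PipelineProject(
    self, "CdkSynth",
    build_spec=codebuild.BuildSpec.from_object({
        "version": "0.2",
        "phases": {
            "install": {"commands": ["npm install -g aws-cdk",
                                     "pip install -r requirements.txt"]},
            "build": {"commands": ["cdk synth"]},
        },
        # Ship the whole cdk.out folder: the synthesized template plus the
        # asset zips it references.
        "artifacts": {
            "base-directory": "cdk.out",
            "files": ["**/*"],
        },
    }),
)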
I would like to conditionally execute a certain stage in AWS CodePipeline depending on whether I put a certain file in a repo location. So, if I put "some_file.txt" at a certain location in the repo, I want CodePipeline to check for the existence of this file and, if it's there, continue on to deploy the code to production; otherwise, stop at that stage.
With this I would like to avoid a manual approval action and control the release process by committing a file. Is this possible, and what would be the best practice?
I think you could create a lambda action for that:
Invoke an AWS Lambda function in a pipeline in CodePipeline
The lambda function can access the input artifact, and check if your file of interest is there or not.
Depending on the outcome of the check, the function will call either put_job_success_result or put_job_failure_result to continue or stop the pipeline.
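Here is a minimal sketch of such a function using boto3 (the file name matches the question; it assumes the Lambda role can read the pipeline's artifact bucket, otherwise you would use the artifactCredentials passed in the event):

import io
import zipfile

import boto3

codepipeline = boto3.client("codepipeline")
s3 = boto3.client("s3")

FILE_TO_CHECK = "some_file.txt"

def lambda_handler(event, context):
    job = event["CodePipeline.job"]
    job_id = job["id"]
    location = job["data"]["inputArtifacts"][0]["location"]["s3Location"]

    # The input artifact is a zip file in S3; download it and look inside.
    obj = s3.get_object(Bucket=location["bucketName"], Key=location["objectKey"])
    with zipfile.ZipFile(io.BytesIO(obj["Body"].read())) as archive:
        present = FILE_TO_CHECK in archive.namelist()

    if present:
        codepipeline.put_job_success_result(jobId=job_id)
    else:
        codepipeline.put_job_failure_result(
            jobId=job_id,
            failureDetails={
                "type": "JobFailed",
                "message": f"{FILE_TO_CHECK} not found in the input artifact",
            },
        )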
You can use the buildspec file to check whether the needed file is present. If not, you can execute a "stop-pipeline-execution" command: https://docs.aws.amazon.com/cli/latest/reference/codepipeline/stop-pipeline-execution.html
The required arguments can be fetched from environment variables. One more thing to note is to give that stage adequate permission(s) to be able to execute the command.
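A sketch of the equivalent call from a small Python script run by the buildspec (boto3 instead of the CLI; the environment variable names are hypothetical and would need to be passed into the action, for example via CodePipeline variables such as the pipeline execution ID):

import os

import boto3

codepipeline = boto3.client("codepipeline")

# Stop the current execution if the marker file is absent from the checkout.
if not os.path.exists("some_file.txt"):
    codepipeline.stop_pipeline_execution(
        pipelineName=os.environ["PIPELINE_NAME"],                 # hypothetical
        pipelineExecutionId=os.environ["PIPELINE_EXECUTION_ID"],  # hypothetical
        abandon=True,
        reason="some_file.txt not present; stopping the release",
    )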
I am using AWS CodePipeline to perform CloudFormation deployments. My source code is committed to a GitHub repository. Whenever a commit happens in my GitHub repository, AWS CodePipeline starts its execution and performs the CloudFormation deployment. These functionalities are working fine.
In my project I have multiple modules. So even if a user modifies only one module, the Lambdas of every module are updated. Is there any way to restrict this using AWS CodePipeline?
My Code Pipeline has 3 stages.
Source
Build
Deploy
We had a similar issue and eventually came to the conclusion that this is not exactly possible. So unless you separate your modules into different repos and make separate pipelines for each of them, it is always going to execute everything.
The good thing is that each execution of the pipeline does not entirely redeploy everything when the CloudFormation is executed. In the deploy stage you can add a Create Changeset part, which is basically going to detect what has changed since the previous CloudFormation deployment, redeploy only those parts, and not touch anything else.
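If the pipeline is defined with CDK, here is a hedged sketch of that change-set pair in Python (CDK v1; the stack, change set, and template names are illustrative, and pipeline/build_output are assumed to already exist):

from aws_cdk import aws_codepipeline_actions as cpactions

deploy_stage = pipeline.add_stage(stage_name="Deploy")

# Create a change set first, then execute it; only the changed resources
# are touched on execution.
deploy_stage.add_action(cpactions.CloudFormationCreateReplaceChangeSetAction(
    action_name="PrepareChanges",
    stack_name="MyAppStack",
    change_set_name="PipelineChangeSet",
    template_path=build_output.at_path("MyAppStack.template.json"),
    admin_permissions=True,
    run_order=1,
))
deploy_stage.add_action(cpactions.CloudFormationExecuteChangeSetAction(
    action_name="ExecuteChanges",
    stack_name="MyAppStack",
    change_set_name="PipelineChangeSet",
    run_order=2,
))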
This is the exact issue we faced recently and while I see comments mentioning that it isn't possible to achieve with a single repository, I have found a workaround!
Generally, the pipeline is triggered by a CloudWatch event listening to the GitHub/CodeCommit repository. Rather than triggering the pipeline, I made the CloudWatch event trigger a Lambda function. In the Lambda, we can write the logic to execute the pipeline(s) only for the modules that have changes. This works really nicely and provides a lot of control over the pipeline execution. This way, multiple pipelines can be created from a single repository, solving the problem mentioned in the question.
Lambda logic can be something like:
import boto3

# Map config files to pipelines
project_pipeline_mapping = {
    "CodeQuality_ScoreCard": "test-pipeline-code-quality",
    "ProductQuality_ScoreCard": "test-product-quality-pipeline"
}

files_to_ignore = ["readme.md"]

codecommit_client = boto3.client('codecommit')
codepipeline_client = boto3.client('codepipeline')

def lambda_handler(event, context):
    projects_changed = []

    # Extract commits
    print("\n EVENT::: ", event)
    old_commit_id = event["detail"]["oldCommitId"]
    new_commit_id = event["detail"]["commitId"]

    # Get commit differences
    codecommit_response = codecommit_client.get_differences(
        repositoryName="ScorecardAPI",
        beforeCommitSpecifier=str(old_commit_id),
        afterCommitSpecifier=str(new_commit_id)
    )
    print("\n Code commit response: ", codecommit_response)

    # Search commit differences for files that trigger executions
    for difference in codecommit_response["differences"]:
        file_name = difference["afterBlob"]["path"]
        project_name = file_name.split('/')[0]
        print("\nChanged project: ", project_name)
        # If the project corresponds to a pipeline, add it to the pipeline array
        if project_name in project_pipeline_mapping:
            projects_changed.append(project_name)

    # De-duplicate while preserving order
    projects_changed = list(dict.fromkeys(projects_changed))
    print("pipeline(s) to be executed: ", projects_changed)

    for project in projects_changed:
        codepipeline_response = codepipeline_client.start_pipeline_execution(
            name=project_pipeline_mapping[project]
        )
Check the AWS blog post on this topic: Customizing triggers for AWS CodePipeline with AWS Lambda and Amazon CloudWatch Events
Why not model this as a pipeline per module?
How can I configure CodePipeline to be triggered for Pull Requests being opened, edited or merged?
Here is a Terraform configuration:
resource "aws_codepipeline_webhook" "gh_to_codepipeline_integration" {
name = "gh_to_codepipeline_integration"
authentication = "GITHUB_HMAC"
target_action = "Source"
target_pipeline = aws_codepipeline.mycodepipeline.name
authentication_configuration {
secret_token = var.github_webhook_secret
}
// accept pull requests
// Is there a way to filter on the PR being closed and merged? This isn't it...
filter {
json_path = "$.action"
match_equals = "closed"
}
}
CodePipeline is set up to accept webhook events that match all of the conditions specified in the filters, which correspond to pull request events.
Note that the GitHub documentation states for the action field of a PullRequestEvent (my emphasis in bold):
The action that was performed. Can be one of assigned, unassigned, review_requested, review_request_removed, labeled, unlabeled, opened, edited, closed, ready_for_review, locked, unlocked, or reopened. If the action is closed and the merged key is false, the pull request was closed with unmerged commits. If the action is closed and the merged key is true, the pull request was merged. While webhooks are also triggered when a pull request is synchronized, Events API timelines don't include pull request events with the synchronize action.
It seems like I need to filter on both $.action == closed and $.pull_request.merged == true, but it doesn't look like I can do both. If I just filter on $.action == closed, then my pipeline will rebuild when PRs are closed without merging. Is this an oversight on my part, or are CodePipeline webhooks not as flexible in their triggers as CodeBuild projects?
For pull requests being opened/updated: because CodePipeline's Git integrations require a branch name, this is not natively supported, as the branch name is variable, unless you open PRs against long-running branches like dev, qa, etc. (e.g. if you are using a Gitflow-based workflow).
The way we support PRs based on dynamic branches is to use CodeBuild for the build/unit-test stage of our workflow, and then package up the repository and build artefacts to S3. From there we trigger deployment pipelines for integration and acceptance environments, using the S3 artefact as the source. Using CodePipeline for deployments is powerful, as it automatically ensures only one stage can execute at a time, meaning only one change for a given application goes through a given environment at any one time.
This approach is, however, quite complex and requires quite a bit of Lambda magic mixed with SQS FIFO queues to deal with concurrent PRs (this is to overcome the superseding behaviour of CodePipeline), but it's quite a powerful pattern. We also use GitHub reviews to do things like trigger the acceptance stage and auto-approve manual approval steps in CodePipeline.
Once you are ready to merge the PR, we just use a normal CodePipeline triggered off master to deploy to production. One thing you also need to do is ensure you use the artefact that was built and tested on the PR.
I'm not sure why you want to trigger the whole pipeline when a pull request is opened. The way I usually set things up is:
CodePipeline watches the master branch and triggers on a push to it
It will run some builds in CodeBuild
If the builds pass it runs a deploy
Then we have CodeBuild which gets triggered by both CodePipeline and also GitHub pull requests:
resource "aws_codebuild_webhook" "dev" {
project_name = aws_codebuild_project.dev.name
filter_group {
filter {
type = "EVENT"
pattern = "PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED"
}
}
}
Then you can use CodeBuild webhook filters to choose when to trigger the build. The Terraform docs are also helpful.
As part of my AWS CodeBuild pipeline, I am sending a Slack notification that includes the commit ID, which I obtain from the environment variable CODEBUILD_RESOLVED_SOURCE_VERSION as documented here: https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-env-vars.html
This is good, but I also want to access the name or email of the person who made the commit.
How can I possibly obtain that in the same way as I obtain the CODEBUILD_RESOLVED_SOURCE_VERSION?
CodeBuild webhook-triggered builds include the .git metadata. You should be able to retrieve this using the Git CLI, e.g.,:
git log -1 --format="%an <%ae>"
Which gives something like:
John Doe <jdoe@example.com>
The aws/codebuild/standard Docker image comes with Git pre-installed.
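For example, here is a hedged sketch of pulling both values together in a small Python script run by the build (the Slack webhook variable name is hypothetical, and the requests library is assumed to be installed):

import os
import subprocess

import requests

# Commit ID comes from the documented CodeBuild environment variable.
commit_id = os.environ.get("CODEBUILD_RESOLVED_SOURCE_VERSION", "unknown")

# Author name and email come from the local .git metadata.
author = subprocess.check_output(
    ["git", "log", "-1", "--format=%an <%ae>"], text=True
).strip()

requests.post(
    os.environ["SLACK_WEBHOOK_URL"],  # hypothetical variable name
    json={"text": f"Build for commit {commit_id} by {author}"},
)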