I've been calling CodeBuild and manually overriding the buildspec like this:
aws codebuild start-build --cli-input-json file://servicea/custom.json
and then in custom.json
{
  "projectName": "myproject",
  "sourceVersion": "master",
  "buildspecOverride": "servicea/buildspec.yml"
}
Now I want to use a Bitbucket trigger (or GitHub, if Bitbucket isn't supported) to build the service automatically after code is pushed to master.
I've been Googling and found this tutorial https://docs.aws.amazon.com/codebuild/latest/userguide/sample-bitbucket-pull-request.html
However, I hit a roadblock: I couldn't find a way to build a specific folder with a specific buildspec.
e.g.
for servicea, the build should run if I push to master and change any files in servicea folder with servicea/buildspec.yaml as the buildspec
for serviceb, the build should run if I push to master and change any files in serviceb folder with serviceb/buildspec.yaml as the buildspec
There is a FILE_PATH filter in the trigger; however, I couldn't find a way to set a custom buildspec per filter.
Is there any way to achieve this?
Note:
I want to use one CodeBuild project for all of my services
Bitbucket's webhook payload doesn't include the list of changed files, unlike GitHub's.
Workaround:
Set the "git-credential-helper" to "yes" (or true) in your buildspec. Details in https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec-ref-syntax
You can then fetch the list of files changed for the specific commit using the call mentioned in https://community.atlassian.com/t5/Bitbucket-questions/Bitbucket-How-to-get-modified-files-of-a-commit-in-JSON-format/qaq-p/704126
You can obtain the commit from the environment variable: CODEBUILD_RESOLVED_SOURCE_VERSION and the branch from: CODEBUILD_WEBHOOK_HEAD_REF. Details in https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-env-vars.html
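Putting those pieces together, here is a minimal sketch of dispatch logic you could run in the shared project's buildspec. It derives the changed files from git rather than the Bitbucket API call above (which assumes the project's Git clone depth is set to Full, so the commit's parent is available); the service names and project name are illustrative:
changed=$(git diff-tree --no-commit-id --name-only -r "$CODEBUILD_RESOLVED_SOURCE_VERSION")
for service in servicea serviceb; do
  if echo "$changed" | grep -q "^$service/"; then
    # Re-run the same project with the service-specific buildspec,
    # mirroring the manual start-build call above. The build's IAM
    # role needs codebuild:StartBuild permission for this to work.
    aws codebuild start-build \
      --project-name myproject \
      --source-version master \
      --buildspec-override "$service/buildspec.yml"
  fi
done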
Related
I have an existing CodePipeline which listens for changes to a CodeCommit repository and triggers a CodeBuild of a build project with specific environment variables and a specific artifact upload location. Is there a way to create another CodeBuild step where the same build project is run, but with overridden environment variables and another artifact upload location, or will I have to create another build project with these settings?
You can set up the CodeBuild project to allow the build to override artifact names when using S3 as the artifact location. Each artifact has an OverrideArtifactName property (in the console it is a checkbox called 'Enable semantic versioning') that is a boolean. If you set it to true, the buildspec will need to specify the name of the file in the artifacts section. While this field is called name, it can include a path as well. That means you can compute the name (including the path) inside the buildspec, including from environment variables.
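For example, a minimal sketch of such an artifacts section, assuming a Maven WAR build and a hypothetical ENVIRONMENT_NAME variable that each pipeline action sets differently:
artifacts:
  files:
    - target/app.war
  # name may include a path and reference environment variables,
  # so each action can write to its own location in the bucket.
  name: builds/$ENVIRONMENT_NAME/app-$CODEBUILD_RESOLVED_SOURCE_VERSION.war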
I've got a flow where I want a CodePipeline to trigger on git commits on GitHub, go via some test and build steps, and end in a CodeDeploy step where the code is deployed on an ECS cluster with blue/green deployment. But I'm stuck on the last step: how to get the image to the CodeDeploy step.
The pipeline looks like this:
Source (GitHub) -> Test -> Build, creates a docker image which is uploaded to ECR. Artifact contains appspec.yaml, taskdefinition.json, imagedefinitions.json. -> Deploy (CodeDeployToECS), using artifact from the build step.
The last step in the pipeline is configured with the "CodeDeployToECS" provider. But what I cannot get my head around is how the image created in the build step ends up in the CodeDeploy step, which uses blue/green deployment.
I've checked out this guide: https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-ecs-ecr-codedeploy.html but it uses an image from the source step as the artifact in the CodeDeploy step, which doesn't match my use case.
This guide deploys to ECS with a rolling update and creates an imagedefinitions.json on the fly, which I've tried to apply without success: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-cd-pipeline.html
With the above setup the Deploy step just tells me the image artifact is invalid. Any pointers on whether this is possible, or any workaround?
I found the answer: to create an image artifact you need to generate a file named imageDetail.json, a JSON object with one property named ImageURI holding the URI of the image. I followed this thread to get to this fact: https://forums.aws.amazon.com/message.jspa?messageID=881131
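For example, here is a minimal sketch of the end of the build stage's buildspec, assuming REPOSITORY_URI and IMAGE_TAG are environment variables you define yourself:
phases:
  post_build:
    commands:
      # Write the imageDetail.json file the CodeDeployToECS action expects.
      - printf '{"ImageURI":"%s"}' "$REPOSITORY_URI:$IMAGE_TAG" > imageDetail.json
artifacts:
  files:
    - appspec.yaml
    - taskdefinition.json
    - imageDetail.json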
I have a pipeline which creates docker images and pushes them to ECR. Since I want to use the AWS-provided build environments, I am using 2 build stages.
The pipeline has a total of 3 stages
Get the source code from GitHub : Source
Install dependencies and create a .war file : Build : aws/codebuild/java:openjdk-9
Build the docker image and push it to ECR : Build : aws/codebuild/docker:17.09.0
I would like to tag the docker images with the commit ID, which is available as CODEBUILD_RESOLVED_SOURCE_VERSION. However, I have noticed that this variable is only available in my second stage, which is immediately after the source.
The worst-case workaround I found is to write this variable to a file in the second stage and include that file in the artifacts that serve as input to the third stage.
Is there a better way to use it in my third stage, or in the pipeline overall?
Can you write the commit ID to a file that sits alongside the WAR file in the CodePipeline artifact?
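For instance, a minimal sketch of the second stage's buildspec; the Maven command and file names are illustrative:
phases:
  build:
    commands:
      - mvn package
      # Persist the commit ID so later stages can read it from the artifact.
      - echo "$CODEBUILD_RESOLVED_SOURCE_VERSION" > commit-id.txt
artifacts:
  files:
    - target/app.war
    - commit-id.txt
The third stage can then recover it with something like IMAGE_TAG=$(cat commit-id.txt).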
And a couple of related thoughts:
CodeBuild can be configured in CodePipeline to have multiple input artifacts, so I assume CODEBUILD_RESOLVED_SOURCE_VERSION refers to the primary artifact. I'm not sure how to generalize getting the commit ID into the third action (publish to ECR), because fan-in (multiple sources with distinct commit IDs) can occur at both CodeBuild actions.
Tagging by commit ID means that multiple pipeline executions may produce an image with the same tag. Ideally I'd like each pipeline execution to be isolated, so I don't have to worry about the tag being changed by a concurrent pipeline execution or later coming to refer to a different dependency closure.
I have managed to do something with jq and sponge, as shown in this buildspec.yaml file.
I modify my config.json file upon each commit and pass it on to the next stage.
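The core of that approach is a one-liner along these lines (a sketch: the .commit key is illustrative, and sponge comes from the moreutils package, which you would need to install):
# Stamp the current commit into config.json in place.
jq --arg sha "$CODEBUILD_RESOLVED_SOURCE_VERSION" '.commit = $sha' config.json | sponge config.json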
I am using a combination of CodePipeline + jq. It's not the best approach, but it's the best I have so far:
commit=$(aws codepipeline get-pipeline-state --name PIPELINE_NAME | jq '.stageStates[0].actionStates[0].currentRevision.revisionId' | tr -d '"')
and then push the docker image with the new tag. You need to install jq first; if you don't like jq, you can parse the response yourself.
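To complete the picture, tagging and pushing with that value might look like this, where REPOSITORY_URI is an illustrative ECR repository URI:
# Retag the freshly built image with the commit ID and push it.
docker tag "$REPOSITORY_URI:latest" "$REPOSITORY_URI:$commit"
docker push "$REPOSITORY_URI:$commit"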
This 'may' be a duplicate of this other question
I've created a pipeline which does the following:
Git changes trigger the next action (CodeBuild)
CodeBuild initiates & builds a docker image from the git source
The latest docker container is set up on Elastic Beanstalk
The first 2 steps work fine: git changes initiate a CodeBuild, the CodeBuild builds a docker image and then tries to set it up on Elastic Beanstalk, which fails. The following error is thrown:
Invalid action configuration: The action failed because either the artifact or the Amazon S3 bucket could not be found. Name of artifact bucket: MY_BUCKET_NAME. Verify that this bucket exists. If it exists, check the life cycle policy, then try releasing a change.
In my CodeBuild project, I've set the artifact location to MY_BUCKET_NAME and named it aws-test-artifact. Is this all I have to do?
I've tried looking around and am unable to find anything on this issue.
I had the same problem. I just changed Input artifacts from BuildArtifact to SourceArtifact in the build stage, and everything worked.
As Adam Loving commented, we must add an artifacts section.
Adding this section to your buildspec.yml file will make this work.
artifacts:
  files:
    - '**/*'
From the documentation (https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec.artifacts.files), adding '**/*' will include all files in the build output.
So I found the fix to this issue! What I had to do was go to CodeBuild => edit project => Show advanced settings => Artifacts packaging.
From here I changed Artifacts packaging to Zip!
I want to deploy a WAR file from Jenkins to the cloud.
Could you please let me know how to deploy a war file from Jenkins on my local machine to AWS Elastic Beanstalk?
I tried using a Jenkins post-process plugin to copy the artifact to S3, but I get the following error:
ERROR: Failed to upload files java.io.IOException: put Destination [bucketName=https:, objectName=/s3-eu-west-1.amazonaws.com/bucketname/test.war]:
com.amazonaws.AmazonClientException: Unable to execute HTTP request: Connect to s3.amazonaws.com/s3.amazonaws.com/ timed out at hudson.plugins.s3.S3Profile.upload(S3Profile.java:85) at hudson.plugins.s3.S3BucketPublisher.perform(S3BucketPublisher.java:143)
Some work has been done on this.
http://purelyinstinctual.com/2013/03/18/automated-deployment-to-amazon-elastic-beanstalk-using-jenkins-on-ec2-part-2-guide/
Basically, this is just adding a post-build task to run the standard command line deployment scripts.
From the referenced page, assuming you have the post-build task plugin on Jenkins and the AWS command line tools installed:
STEP 1
In a Jenkins job configuration screen, add a “Post-build action” and choose the plugin “Publish artifacts to S3 bucket”. Specify the Source (in our case we use Maven, so the source is target/*.war) and the Destination (your S3 bucket name).
STEP 2
Then, add a “Post-build task” (if you don’t have it, this is a plugin in the Maven repo) to the same section above (“Post-build Actions”) and drag it below “Publish artifacts to S3 bucket”. This is important: we want to make sure the war file is uploaded to S3 before proceeding with the scripts.
In the Post-build task portion, make sure you check the box “Run script only if all previous steps were successful”.
In the script text area, put the path of the script that automates the deployment (described in step 3 below). For us, it looks something like this:
<path_to_script_file>/deploy.sh "$VERSION_NUMBER" "$VERSION_DESCRIPTION"
The $VERSION_NUMBER and $VERSION_DESCRIPTION are Jenkins build parameters and must be specified when a deployment is triggered. Both variables will be used for the AEB deployment.
STEP 3
The script
#!/bin/sh
export AWS_CREDENTIAL_FILE=<path_to_your aws.key file>
export PATH=$PATH:<path to bin file inside the "api" folder inside the AEB Command line tool (A)>
export PATH=$PATH:<path to root folder of s3cmd (B)>
# Get the current time and append it to the name of the .war file being deployed.
# This creates a unique identifier for each .war file and allows us to roll back easily.
current_time=$(date +"%Y%m%d%H%M%S")
original_file="app.war"
new_file="app_$current_time.war"
# Rename the deployed war file with the new name.
s3cmd mv "s3://<your S3 bucket>/$original_file" "s3://<your S3 bucket>/$new_file"
# Create an application version in AEB and link it with the renamed WAR file.
elastic-beanstalk-create-application-version -a "Hoiio App" -l "$1" -d "$2" -s "<your S3 bucket>/$new_file"
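For reference, with the current AWS CLI the same two deployment operations could be sketched like this (application and environment names are illustrative, and the WAR is assumed to be in S3 already):
# Register the uploaded WAR as a new application version...
aws elasticbeanstalk create-application-version \
  --application-name "Hoiio App" \
  --version-label "$1" \
  --description "$2" \
  --source-bundle S3Bucket="<your S3 bucket>",S3Key="$new_file"
# ...and point the environment at it.
aws elasticbeanstalk update-environment \
  --environment-name "<your environment name>" \
  --version-label "$1"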