I've got a CodePipeline working with a Java application. I'm pulling the source from GitHub, building a package with Maven using CodeBuild, and deploying to ElasticBeanstalk in the Deploy stage. My problem is that CodeBuild is returning the artifact in a zip file:
[Container] 2019/03/21 13:23:07 Expanding target/*.war
[Container] 2019/03/21 13:23:07 Found 1 file(s)
[Container] 2019/03/21 13:23:09 Phase complete: UPLOAD_ARTIFACTS Success: true
I'm grabbing the resulting war file after the Maven package. I only want the war file to be picked up by ElasticBeanstalk. How can I force CodePipeline/CodeBuild NOT to compress the file?
You can specify any type of file, with or without compression, in the artifacts section of your buildspec.yaml file.
Here is an example I am using with Docker:
artifacts:
  files: imagedefinitions.json
You will find the full documentation of possible values and other examples here: https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html
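For the Maven/war case in the question, the artifacts section would point at the war produced by the package phase rather than at a directory. A minimal sketch (the exact path under target/ depends on your pom, and discard-paths is optional):

artifacts:
  files:
    - target/*.war
  discard-paths: yes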
.. aaaand me again :)
This time with a very interesting problem.
Again an AWS Lambda function: Node.js 12, JavaScript, Ubuntu 18.04 for local development, AWS CLI/AWS SAM/Docker/IntelliJ. Everything is working perfectly locally and it's time to deploy.
So I set up an AWS account for tests, created and assigned an access key/secret, and finally tried to deploy.
Almost at the end, an error pops up, aborting the deployment.
I'm showing the SAM CLI version from a terminal, but the same happens with IntelliJ.
(Of course I've masked/changed some names.)
From a terminal, I go to my local sandbox with the project and then:
$ sam deploy --guided
Configuring SAM deploy
======================
Looking for config file [samconfig.toml] : Not found
Setting default arguments for 'sam deploy'
=========================================
Stack Name [sam-app]: MyActualProjectName
AWS Region [us-east-1]: us-east-2
#Shows you resources changes to be deployed and require a 'Y' to initiate deploy
Confirm changes before deploy [y/N]: y
#SAM needs permission to be able to create roles to connect to the resources in your template
Allow SAM CLI IAM role creation [Y/n]: y
Save arguments to configuration file [Y/n]: y
SAM configuration file [samconfig.toml]: y
SAM configuration environment [default]:
Looking for resources needed for deployment: Not found.
Creating the required resources...
Successfully created!
Managed S3 bucket: aws-sam-cli-managed-default-samclisourcebucket-7qo1hy7mdu9z
A different default S3 bucket can be set in samconfig.toml
Saved arguments to config file
Running 'sam deploy' for future deployments will use the parameters saved above.
The above parameters can be changed by modifying samconfig.toml
Learn more about samconfig.toml syntax at
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-config.html
Error: Unable to upload artifact MyFunctionName referenced by CodeUri parameter of MyFunctionName resource.
ZIP does not support timestamps before 1980
$
I spent quite some time looking around for this problem but I found only some old threads.
In theory this problem was solved in 2018 ... but probably some npm libraries I had to use contain something old ... how in the world do I fix this?
In one thread I found a kind of workaround.
In the buildspec.yml file, somebody suggested adding, AFTER the npm install:
ls $CODEBUILD_SRC_DIR
find $CODEBUILD_SRC_DIR/node_modules -mtime +10950 -exec touch {} \;
Basically the idea is to touch all the files installed by npm install, but the error still happens.
This is my buildspec.yml file after the modification:
version: 0.2
phases:
  install:
    commands:
      # Install all dependencies (including dependencies for running tests)
      - npm install
      - ls $CODEBUILD_SRC_DIR
      - find $CODEBUILD_SRC_DIR/node_modules -mtime +10950 -exec touch {} \;
  pre_build:
    commands:
      # Discover and run unit tests in the '__tests__' directory
      - npm run test
      # Remove all unit tests to reduce the size of the package that will be ultimately uploaded to Lambda
      - rm -rf ./__tests__
      # Remove all dependencies not needed for the Lambda deployment package (the packages from devDependencies in package.json)
      - npm prune --production
  build:
    commands:
      # Use AWS SAM to package the application by using AWS CloudFormation
      - aws cloudformation package --template template.yml --s3-bucket $S3_BUCKET --output-template template-export.yml
artifacts:
  type: zip
  files:
    - template-export.yml
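If the blanket touch still leaves something behind, a more targeted variant is to reset only the timestamps the ZIP format cannot represent. This is just a sketch, assuming GNU find is available in the build image (it is in the standard Ubuntu-based images):

phases:
  install:
    commands:
      - npm install
      # Touch only files whose modification time predates 1980-01-01,
      # which is exactly what "ZIP does not support timestamps before 1980" complains about
      - find $CODEBUILD_SRC_DIR/node_modules ! -newermt "1980-01-01" -exec touch {} \;

The same find command can also be run locally against node_modules before sam deploy, since the error in the question comes from the local packaging step.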
I will continue to search, but I wonder if somebody here has run into this kind of problem and has suggestions or a methodology for solving it.
Many, many thanks!
Steve
I'm creating an AWS CodePipeline with the following phases:
Source: Get code from GitHub when some change occurs in the staging branch.
Build: Read the buildspec.yml to execute "mvn clean package", docker build, and docker push.
Deploy: Deploy to an ECS cluster.
Now I need to create a fourth phase (an AfterDeploy phase) that should commit some files to GitHub. So, after all these phases complete successfully, the AfterDeploy phase should commit some files generated by the Build phase to GitHub.
Any idea how I can do it?
Should I have two buildspec files, because I have two build phases?
Yes, you can do this. For example, you can have a primary buildspec.yml for your first build, and a secondary buildspec_postdeploy.yml for the second build stage.
How to use multiple buildspec files is documented at:
Buildspec file name and storage location
I don't have an example to share, but it would just execute whatever git commit and git push commands are needed. Its exact structure is very use-case specific, so it is difficult to speculate on it.
https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec-ref-name-storage
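Purely as an illustration, a post-deploy buildspec could look something like the sketch below; the directory, branch, repository URL, and GITHUB_TOKEN variable are assumptions, not something CodePipeline provides:

version: 0.2
phases:
  build:
    commands:
      # Identity for the automated commit (placeholder values)
      - git config --global user.email "pipeline@example.com"
      - git config --global user.name "codepipeline-bot"
      # Commit whatever the Build stage generated; don't fail when there is nothing new
      - git add generated-files/
      - git commit -m "Add files generated by the Build stage" || echo "Nothing to commit"
      # GITHUB_TOKEN is assumed to be injected via the CodeBuild environment (e.g. from Secrets Manager)
      - git push "https://$GITHUB_TOKEN@github.com/<your-org>/<your-repo>.git" HEAD:staging

Note that the artifact CodePipeline hands to a stage does not normally include the .git directory, so you would either enable the full-clone output format on the GitHub source action or clone the repository inside this buildspec before committing.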
I have a simple CodeBuild spec that defines artifacts to be uploaded after tests run:
artifacts:
  files:
    - cypress/**/*.png
  discard-paths: yes
These artifacts are only generated if the test action fails (a screenshot of the failing test screen is captured), and in that case they are successfully uploaded to S3.
When the tests succeed, no .png files are generated and the CodeBuild action fails:
[Container] 2018/09/21 20:06:34 Expanding cypress/**/*.png
[Container] 2018/09/21 20:06:34 Phase complete: UPLOAD_ARTIFACTS Success: false
[Container] 2018/09/21 20:06:34 Phase context status code: CLIENT_ERROR Message: no matching artifact paths found
Is there a way to conditionally upload files in the buildspec only if they exist?
Alternatively, I could use the S3 CLI, in which case I would need a way to easily access the bucket name and artifact key.
To get around this, I'm creating a placeholder file that matches the glob pattern if the build succeeds:
post_build:
  commands:
    # CODEBUILD_BUILD_SUCCEEDING is "0" when the build is failing, "1" otherwise
    - if [ "$CODEBUILD_BUILD_SUCCEEDING" = "0" ]; then echo "Build failing, no need to create placeholder image"; else touch cypress/0.png; fi
artifacts:
  files:
    - cypress/**/*.png
  discard-paths: yes
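If you would rather avoid the placeholder file, the S3 CLI route mentioned in the question is also workable from post_build. A sketch, assuming ARTIFACT_BUCKET is an environment variable you set on the CodeBuild project yourself (CodeBuild does not provide it):

post_build:
  commands:
    # Upload screenshots only when some were actually produced
    - |
      if find cypress -name "*.png" | grep -q .; then
        aws s3 cp cypress/ "s3://$ARTIFACT_BUCKET/screenshots/$CODEBUILD_BUILD_ID/" --recursive --exclude "*" --include "*.png"
      fi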
If anyone is still looking for a solution based on tgk's answer:
In my case I want to upload the artifact only for the master ENV, so for anything other than master I create a placeholder and upload it into a TMP folder.
post_build:
  commands:
    # Create a fake file to work around the failing upload in non-prod builds
    - |
      if [ "$ENV" = "master" ]; then
        export FOLDERNAME=myapp-$(date +%Y-%m-%d).$((BUILD_NUMBER))
      else
        touch myapp/0.tmp;
        export FOLDERNAME="TMP"
      fi
artifacts:
  files:
    - myapp/build/outputs/apk/prod/release/*.apk
    - myapp/*.tmp
  discard-paths: yes
  name: $FOLDERNAME
I am attempting to get CodePipeline to fetch my code from GitHub and build it with CodeBuild. The first (Source) step works fine. But the second (Build) step fails during the "UPLOAD_ARTIFACTS" part. Here are the relevant log statements:
[Container] 2017/01/12 17:21:31 Assembling file list
[Container] 2017/01/12 17:21:31 Expanding MyApp
[Container] 2017/01/12 17:21:31 Skipping invalid artifact path MyApp
[Container] 2017/01/12 17:21:31 Phase complete: UPLOAD_ARTIFACTS Success: false
[Container] 2017/01/12 17:21:31 Phase context status code: ARTIFACT_ERROR Message: No matching artifact paths found
[Container] 2017/01/12 17:21:31 Runtime error (No matching artifact paths found)
My app has a buildspec.yml in its root folder. It looks like:
version: 0.1
phases:
  build:
    commands:
      - echo `$BUILD_COMMAND`
artifacts:
  discard-paths: yes
  files:
    - MyApp
It would appear that the "MyApp" in my buildspec.yml should be something different, but I'm poring through all of the AWS docs to no avail (what else is new?). How can I get it to upload the artifact correctly?
The artifacts should refer to files downloaded from your Source action or generated as part of the Build action in CodePipeline. For example, this is from a buildspec.yml I wrote:
artifacts:
  files:
    - appspec.yml
    - target/SampleMavenTomcatApp.war
    - scripts/*
When I see that you used MyApp in your artifacts section, it makes me think you're referring to the OutputArtifacts of the Source action of CodePipeline. Instead, you need to refer to the files that action downloads and stores (i.e., in S3) and/or that your build generates and stores there.
You can find a sample of a CloudFormation template that uses CodePipeline, CodeBuild, CodeDeploy, and CodeCommit here: https://github.com/stelligent/aws-codedeploy-sample-tomcat/blob/master/codebuild-cpl-cd-cc.json The buildspec.yml is in the same forked repo.
Buildspec artifacts are information about where CodeBuild can find the build output and how CodeBuild prepares it for uploading to the Amazon S3 output bucket.
For the error "No matching artifact paths found", a couple of things to check:
The artifact file(s) specified in the buildspec.yml file have the correct path and file name:
artifacts:
  files:
    - 'FileNameWithPath'
If you are using a .gitignore file, make sure the file(s) specified in the artifacts section are not included in it.
Hope this helps.
In my case I received this error because I had changed directory in my build stage (the Java project I am building is in a subdirectory) and did not change back to the root. Adding cd .. at the end of the build stage did the trick.
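For illustration, that ends up looking something like this (the directory and build command are placeholders for my setup):

build:
  commands:
    - cd my-java-module
    - mvn clean package
    # Return to the source root so the paths in the artifacts section resolve
    - cd ..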
I had a similar issue, and the solution was "packaging directories and files inside the archive with no further root folder creation".
https://docs.aws.amazon.com/codebuild/latest/userguide/sample-war-hw.html
Artifacts are the stuff you want from your build process - whether compiled in some way or just files copied straight from the source. So the build server pulls in the code, compiles it as per your instructions, then copies the specified files out to S3.
In my case, using Spring Boot + Gradle, the output jar file (when I run gradle bootJar on my own system) is placed in build/libs/demo1-0.0.1-SNAPSHOT.jar, so I set the following in buildspec.yml:
artifacts:
  files:
    - build/libs/*.jar
This one file appears for me in S3, optionally in a zip and/or a subfolder, depending on the options chosen in the rest of the artifacts section.
Try using version 0.2 of the buildspec.
Here is a typical example for Node.js:
version: 0.2
phases:
  pre_build:
    commands:
      - echo Nothing to do in the pre_build phase...
  build:
    commands:
      - npm install
      - npm run build
  post_build:
    commands:
      - echo Build completed on `date`
artifacts:
  files:
    - appspec.yml
    - build/*
If you're like me and ran into this problem whilst using CodeBuild within a CodePipeline arrangement, you need to use the following:
- printf '[{"name":"container-name-here","imageUri":"%s"}]' $REPOSITORY_URI:$IMAGE_TAG > $CODEBUILD_SRC_DIR/imagedefinitions.json
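and then include that file in the artifacts section so the ECS deploy action in CodePipeline can find it, for example:

artifacts:
  files:
    - imagedefinitions.json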
I had the same issue as #jd96 wrote about. I needed to return to the root directory of the project to export the artifact.
build:
  commands:
    - cd tasks/jobs
    - make build
    - cd ../..
post_build:
  commands:
    - printf '[{"name":"%s","imageUri":"%s"}]' $IMAGE_REPO_NAME $REPOSITORY_URI:$IMAGE_TAG > imagedefinitions.json
artifacts:
  files: imagedefinitions.json
I've created a pipeline which does the following:
Git changes trigger next action (code build)
Codebuild initiates & builds a docker image from git source
Set latest docker container up on Elasticbeanstalk
The first two steps are working fine: git changes initiate a CodeBuild run, CodeBuild builds a docker image, and the pipeline then tries to set it up on Elasticbeanstalk (which fails). The following error is thrown:
Invalid action configuration: The action failed because either the artifact or the Amazon S3 bucket could not be found. Name of artifact bucket: MY_BUCKET_NAME. Verify that this bucket exists. If it exists, check the life cycle policy, then try releasing a change.
In my CodeBuild project, I've set the artifact location to MY_BUCKET_NAME and named it aws-test-artifact. Is this all I have to do?
I've tried looking around and am unable to find anything on this issue.
I had the same problem. Just changed Input artifacts from BuildArtifact to SourceArtifact in the build stage, and everything worked.
As Adam Loving commented, we must add an artifacts section.
Adding this section to your buildspec.yml file will make this work.
artifacts:
  files:
    - '**/*'
From the documentation (https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec.artifacts.files), adding '**/*' will include all files in the build output.
So I found the fix to this issue! What I had to do was go to CodeBuild => Edit project => Show advanced settings => Artifacts packaging.
From here I changed Artifacts packaging to Zip!