I'm using AWS CodeBuild to deploy a function to AWS Lambda with the Serverless Framework.
Here is my buildspec.yml:
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 10
    commands:
      - echo installing Mocha...
      - npm install -g mocha
      - echo installing Serverless...
      - npm install -g serverless
  pre_build:
    commands:
      - echo running npm install for global project...
      - npm install
      - echo running npm install for each function...
      - folders=src/*
      - for value in $folders;
        do
          echo $value
          npm --prefix $value install $value;
        done
  build:
    commands:
      - sls package
      - serverless deploy --stage $STAGE --region $AWS_DEFAULT_REGION | tee deploy.out
  post_build:
    commands:
      - echo done
      # - . ./test.sh
The problem is that even when the serverless deploy --stage $STAGE --region $AWS_DEFAULT_REGION | tee deploy.out command fails, the build project is shown as successful by AWS CodeBuild in CodePipeline.
I want the build status to be a failure when the serverless deploy command fails.
This happens because post_build executes whether build fails or succeeds. It does not matter that build fails; post_build will run anyway. This is explained in the build phase transitions documentation.
You can rectify this by "manually" checking in post_build whether build failed, using the CODEBUILD_BUILD_SUCCEEDING environment variable:
CODEBUILD_BUILD_SUCCEEDING: Whether the current build is succeeding. Set to 0 if the build is failing, or 1 if the build is succeeding.
Thus in your post_build you can check if CODEBUILD_BUILD_SUCCEEDING == 0 and exit 1 if it is true:
post_build:
  commands:
    - if [[ $CODEBUILD_BUILD_SUCCEEDING == 0 ]]; then exit 1; fi
    - echo done
    # - . ./test.sh
Your command:
- serverless deploy --stage $STAGE --region $AWS_DEFAULT_REGION | tee deploy.out
... is not returning a non-zero exit code on failure, which is required to fail the build. The tee command masks the return code from serverless deploy because tee itself returns 0.
I would recommend rewriting the command as:
- serverless deploy --stage $STAGE --region $AWS_DEFAULT_REGION > deploy.out
- cat deploy.out
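Alternatively, if you still want the deploy output mirrored into deploy.out as it streams, a minimal sketch (assuming bash is available in the build image) is to run the pipeline with pipefail enabled, so that the exit code of serverless deploy, not tee, decides the result:
build:
  commands:
    # pipefail propagates serverless deploy's exit code through the pipe,
    # so a failed deploy fails this command and therefore the build
    - bash -c 'set -o pipefail; serverless deploy --stage $STAGE --region $AWS_DEFAULT_REGION | tee deploy.out'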
Related
When I try to commit changes to GitLab for continuous integration, I am facing this error. Even though all my steps pass successfully, GitLab CI shows this:
Cleaning up file based variables 00:01 ERROR: Job failed: exit code 1
I am running one stage, "deploy", at the moment. Here is my script for the deploy:
image: python:3.8
stages:
  - deploy
default:
  before_script:
    - wget https://golang.org/dl/go1.16.5.linux-amd64.tar.gz
    - rm -rf /usr/local/go && tar -C /usr/local -xzf go1.16.5.linux-amd64.tar.gz
    - export PATH=$PATH:/usr/local/go/bin
    - source ~/.bashrc
    - pip3 install awscli --upgrade
    - pip3 install aws-sam-cli --upgrade
deploy-development:
  only:
    - feature/backend/ci/cd
  stage: deploy
  script:
    - sam build -p
    - yes | sam deploy
This command probably creates an issue in the Docker container's shell:
yes | sam deploy
Try this command:
sam deploy --no-confirm-changeset --no-fail-on-empty-changeset
From https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-deploy.html:
--confirm-changeset | --no-confirm-changeset Prompt to confirm whether the AWS SAM CLI deploys the computed changeset.
--fail-on-empty-changeset | --no-fail-on-empty-changeset Specify whether to return a non-zero exit code if there are no changes to be made to the stack. The default behavior is to return a non-zero exit code.
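Applied to the job above, the deploy script could look roughly like this (a sketch; everything else in the job stays unchanged):
deploy-development:
  only:
    - feature/backend/ci/cd
  stage: deploy
  script:
    - sam build -p
    # non-interactive deploy: skip the confirmation prompt and don't fail on an empty changeset
    - sam deploy --no-confirm-changeset --no-fail-on-empty-changeset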
I have the following buildspec.yml:
version: 0.2
phases:
  install:
    commands:
      - curl -L -o sbt-0.13.6.deb http://dl.bintray.com/sbt/debian/sbt-0.13.6.deb && \
      - dpkg -i sbt-0.13.6.deb && \
      - rm sbt-0.13.6.deb && \
      - apt-get update && \
      - apt-get install sbt && \
  pre_build:
    commands:
      - echo Entered the pre_build phase...
      - docker login -u user -p pass
  build:
    commands:
      - echo Build started on `date`
      - sbt test
      - echo test completed on `date`
      - sbt docker:publishLocal
      - docker tag image repo
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image...
      - docker push repo
cache:
  paths:
    - $HOME/.ivy2/cache
    - $HOME/.sbt
and fails with
/codebuild/output/tmp/script.sh: 4: /codebuild/output/tmp/script.sh: docker: not found
in the console. As far as I can see in the examples provided in the docs, Docker should already be available.
How can I avoid this?
Thanks
On your CodeBuild project, select the "privileged" flag to enable Docker in your build container. If you are using a CodeBuild managed image, selecting this flag is all that's needed. If you are using a custom image, ensure that Docker is started as explained in https://docs.aws.amazon.com/codebuild/latest/userguide/sample-docker-custom-image.html
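If the project is defined in CloudFormation, a minimal sketch of enabling that flag looks like the following; only the Environment section is shown, and the logical name and image are placeholders:
Resources:
  MyCodeBuildProject:                     # hypothetical logical name
    Type: AWS::CodeBuild::Project
    Properties:
      # ... Source, Artifacts, ServiceRole etc. omitted ...
      Environment:
        Type: LINUX_CONTAINER
        ComputeType: BUILD_GENERAL1_SMALL
        Image: aws/codebuild/standard:4.0 # example managed image
        PrivilegedMode: true              # enables Docker inside the build container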
I am trying to build a Docker image whenever there is a push to my source code and push that image to ECR (EC2 Container Registry).
I have tried with the following buildspec file:
version: 0.2
env:
  variables:
    IMG: "app"
    REPO: "<<zzzzzzzz>>.dkr.ecr.us-east-1.amazonaws.com/app"
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws ecr get-login --region us-east-1
      - TAG=echo $CODEBUILD_RESOLVED_SOURCE_VERSION | head -c 8
  build:
    commands:
      - echo $TAG
      - docker build -t $IMG:$TAG .
      - docker tag $IMG:$TAG $REPO:$TAG
  post_build:
    commands:
      - docker push $REPO:$TAG
      - printf Image":"%s:%s" $REPO $TAG > build.json
artifacts:
  files: build.json
  discard-paths: yes
When I build this, I receive the error invalid reference format at docker build -t.
I looked into the documentation and found no help.
You can use $() command substitution:
TAG=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | head -c 8)
For example:
version: 0.2
phases:
  install:
    commands:
      - echo Entered the install phase...
      - TAG=$(echo "This is test")
  pre_build:
    commands:
      - echo $TAG
  build:
    commands:
      - echo Entered the build phase...
      - echo Build started on $TAG
Logs:
[Container] 2018/03/17 16:15:31 Running command TAG=$(echo "This is test")
[Container] 2018/03/17 16:15:31 Entering phase PRE_BUILD
[Container] 2018/03/17 16:15:31 Running command echo $TAG
This is test
So after lots of retries, I finally found my mistake.
The CODEBUILD_RESOLVED_SOURCE_VERSION environment variable should be replaced with CODEBUILD_SOURCE_VERSION, because I am using CodeBuild to build directly from the source repo on GitHub.
To log in to ECR, I needed to add the --no-include-email option and wrap the command with $() so that the generated docker login command is actually executed. My updated buildspec file would be similar to the one below:
version: 0.2
env:
  variables:
    REPO: "184665364105.dkr.ecr.us-east-1.amazonaws.com/app"
phases:
  pre_build:
    commands:
      - echo $CODEBUILD_SOURCE_VERSION
      - TAG=$(echo $CODEBUILD_SOURCE_VERSION | head -c 8)
      - echo Logging in to Amazon ECR...
      - $(aws ecr get-login --no-include-email --region us-east-1)
  build:
    commands:
      - echo $TAG
      - echo $REPO
      - docker build --tag $REPO:$TAG .
  post_build:
    commands:
      - docker push $REPO:$TAG
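Note that aws ecr get-login only exists in AWS CLI v1; it was removed in v2. If your build image ships AWS CLI v2, a sketch of the equivalent login step (same account and region as above) is:
pre_build:
  commands:
    # CLI v2 replacement for $(aws ecr get-login ...)
    - aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 184665364105.dkr.ecr.us-east-1.amazonaws.com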
I tried to follow this doc (https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/java-se-nginx.html) but couldn't build the custom nginx conf.
I am able to deploy an application and environment, and it works. After testing a working environment, I wanted to modify some nginx configurations, and I followed these steps:
cd WS
mkdir -p .ebextensions/nginx/conf.d
cp ~/dozee.conf .ebextensions/nginx/conf.d
eb deploy
WS is a directory from which eb deploy works perfectly. After logging in to the instance created by the eb environment (via SSH), I could see dozee.conf at /var/app/current/.ebextensions/nginx/conf.d/ but not at /etc/nginx/conf.d/.
What might I be missing here? Any help is appreciated :)
The most likely problem is that your .ebextensions folder is not being included in your build. Can you post your buildspec.yml? To give you an idea of what needs to happen, here is one of mine:
version: 0.2
phases:
  install:
    commands:
      - echo Entering install phase...
      - echo Nothing to do in the install phase...
  pre_build:
    commands:
      - echo Entering pre_build phase...
      - echo Running tests...
      - mvn test
  build:
    commands:
      - echo Entering build phase...
      - echo Build started on `date`
      - mvn package -Dmaven.test.skip=true
  post_build:
    commands:
      - echo Entering post_build phase...
      - echo Build completed on `date`
      - mv target/app.war app.war
artifacts:
  type: zip
  files:
    - app.war
    - .ebextensions/**/*
I'm building a CI/CD pipeline using Git, CodeBuild and Elastic Beanstalk.
During CodeBuild execution, when the build fails due to a syntax error in a test case, I see CodeBuild progress to the next stage and ultimately go on to produce the artifacts.
My understanding was that if the build fails, execution should stop. Is this the correct behavior?
Please see the buildspec below.
version: 0.2
phases:
  install:
    commands:
      - echo Installing package.json..
      - npm install
      - echo Installing Mocha...
      - npm install -g mocha
  pre_build:
    commands:
      - echo Installing source NPM placeholder dependencies...
  build:
    commands:
      - echo Build started on `date`
      - echo Compiling the Node.js code
      - mocha modules/**/tests/*.js
  post_build:
    commands:
      - echo Build completed on `date`
artifacts:
  files:
    - modules/*
    - node_modules/*
    - package.json
    - config/*
    - server.js
CodeBuild detects build failures by exit codes. You should ensure that your test execution returns a non-zero exit code on failure.
POST_BUILD will always run as long as BUILD was also run (regardless of BUILD's success or failure.) The same goes for UPLOAD_ARTIFACTS. This is so you can retrieve debug information/artifacts.
If you want to do something different in POST_BUILD depending on the success or failure of BUILD, you can test the builtin environment variable CODEBUILD_BUILD_SUCCEEDING, which is set to 1 if BUILD succeeded, and 0 if it failed.
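For example, a minimal guard at the start of post_build (a sketch; the remaining commands are your own) could be:
post_build:
  commands:
    # stop post_build immediately if the build phase failed
    - if [ "$CODEBUILD_BUILD_SUCCEEDING" = "0" ]; then exit 1; fi
    - echo Build completed on `date`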
CodeBuild uses the environment variable CODEBUILD_BUILD_SUCCEEDING to indicate whether the build process is going right.
The best way I have found so far is to create a small script in the install section and then always use it like this:
phases:
  install:
    commands:
      - echo '#!/bin/bash' > /usr/local/bin/ok; echo 'if [[ "$CODEBUILD_BUILD_SUCCEEDING" == "0" ]]; then exit 1; else exit 0; fi' >> /usr/local/bin/ok; chmod +x /usr/local/bin/ok
  post_build:
    commands:
      - ok && echo Build completed on `date`
The post_build section runs even if the build section fails. Expanding on the previous answers, you can use the variable CODEBUILD_BUILD_SUCCEEDING in the post_build section of the buildspec.yml file so that the rest of post_build runs if and only if the build section completed successfully. Below is an example of how this can be achieved:
version: 0.2
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - $(aws ecr get-login --no-include-email --region $AWS_DEFAULT_REGION)
      - CODEBUILD_RESOLVED_SOURCE_VERSION="${CODEBUILD_RESOLVED_SOURCE_VERSION:-$IMAGE_TAG}"
      - IMAGE_TAG=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_URI="$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG"
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t $IMAGE_URI .
  post_build:
    commands:
      - bash -c "if [ \"$CODEBUILD_BUILD_SUCCEEDING\" == \"0\" ]; then exit 1; fi"
      - echo Build stage successfully completed on `date`
      - docker push $IMAGE_URI
      - printf '[{"name":"clair","imageUri":"%s"}]' "$IMAGE_URI" > images.json
artifacts:
  files: images.json
Add this in the build section:
build:
  on-failure: ABORT
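For context, a sketch of how this sits in the buildspec from the question (only the build phase is shown); with ABORT, the build stops after the failing phase instead of continuing on to post_build:
build:
  on-failure: ABORT
  commands:
    - echo Build started on `date`
    - mocha modules/**/tests/*.js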
I just wanted to point out that if you want the whole execution to stop when a command fails, you may specify the -e option:
When running a bash file
- /bin/bash -e ./commands.sh
Or when running a set of commands inside a bash file:
#!/bin/bash
set -e
# ... commands
The post_build stage will be executed and the artifacts will be produced. post_build is good for properly shutting down the build environment, if necessary, and the artifacts can be useful even if the build failed, e.g. extra logs, intermediate files, etc.
I would suggest using post_build only for commands that are agnostic to the result of your build and for properly de-initialising the build environment. Otherwise you can just exclude that step.
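As an illustration, a post_build restricted to result-agnostic cleanup might look like this (the log path and steps are purely hypothetical):
post_build:
  commands:
    # result-agnostic teardown only; runs the same way whether build passed or failed
    - cp -r /tmp/build-logs ./build-logs || true   # hypothetical extra log location
    - docker logout || true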