I usually name artifacts based on the commits they have been built from.
Based on this documentation, CODEBUILD_WEBHOOK_PREV_COMMIT is what I am looking for in AWS CodeBuild.
Here is the buildspec.yml:
phases:
  install:
    commands:
      - apt-get update -y
  build:
    commands:
      - export $CODEBUILD_WEBHOOK_PREV_COMMIT
      - echo Entered the build phase...
      - echo Build started on `date`
      - mvn clean install -Dmaven.test.skip=true
      - for f in ./target/*.car;do mv -- "$f" $(echo $f | sed -E "s/.car$/_${CODEBUILD_WEBHOOK_PREV_COMMIT}.car/") ;done
artifacts:
  files:
    - ./target/*.car
The build works, but the commit does not show up in the final .car name. I would like to understand why.
Hypothesis n°1: VARs need to be explicitly sourced
I tried the following, without much success:
env:
  variable:
    - COMMIT="${CODEBUILD_WEBHOOK_PREV_COMMIT}"
phases:
  install:
    commands:
      - apt-get update -y
  build:
    commands:
      - echo Entered the build phase...
      - echo Build started on `date`
      - mvn clean install -Dmaven.test.skip=true
      - carpath=./*_CA/target/*.car
      - for f in $carpath;do mv -- "$f" $(echo $f | sed -E "s/.car$/_${COMMIT}.car/") ;done
Hypothesis n°2: VARs are only available to AWS default build containers
I am using Maven's official image maven:3.6.3-jdk-8 instead of Amazon's general-purpose build image. Are VARs available for custom images? I can't find any clear indication that they are not.
I lost an entire afternoon on this, so for anyone coming here with the same problem, here is how I solved it:
First, I put printenv in commands to see what was going on, and the $CODEBUILD_WEBHOOK_PREV_COMMIT env variable was completely missing. But you can use $CODEBUILD_RESOLVED_SOURCE_VERSION instead, which was there!
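To illustrate, here is a minimal sketch of the rename step using $CODEBUILD_RESOLVED_SOURCE_VERSION (shortening the SHA to 7 characters is my own convention, not part of the original question):
build:
  commands:
    - mvn clean install -Dmaven.test.skip=true
    # CODEBUILD_RESOLVED_SOURCE_VERSION holds the commit SHA the build ran against
    - COMMIT=$(echo "$CODEBUILD_RESOLVED_SOURCE_VERSION" | cut -c 1-7)
    # insert the short SHA before the .car extension
    - for f in ./target/*.car; do mv -- "$f" "${f%.car}_${COMMIT}.car"; done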
When I run the command go build -o app, I get the following error (for multiple dependencies):
main.go:21:2: cannot find package "github.com/gorilla/mux" in any of:
    /usr/local/go/src/github.com/gorilla/mux (from $GOROOT)
    /go/src/github.com/gorilla/mux (from $GOPATH)
    /codebuild/output/src324986171/src/github.com/gorilla/mux
This means the CodeBuild run fails. Any idea how I can fix this, or, in general, where the problem is?
Thanks for your help.
EDIT:
After adding go get ./... to the build, I get the following error for all my local packages:
# cd .; git clone https://github.com/aristotle/dbhelper /go/src/github.com/aristotle/dbhelper
Cloning into '/go/src/github.com/aristotle/dbhelper'...
My buildspec.yml looks like this:
version: 0.2
phases:
  install:
    commands:
      - echo CODEBUILD_SRC_DIR - $CODEBUILD_SRC_DIR
      - echo GOPATH - $GOPATH
      - echo GOROOT - $GOROOT
  build:
    commands:
      - echo Build started on `date`
      - echo Getting packages
      - go get ./...
      - echo Compiling the Go code...
      - go build -o app main.go
  post_build:
    commands:
      - echo Build completed on `date`
artifacts:
  files:
    - app
According to this article, you need to add it to the install section of your buildspec.yml file:
install:
  commands:
    - go get github.com/gorilla/mux
It may also work to include go get ./..., which will resolve all dependencies, but if you do not have too many, it is good practice to list them explicitly.
This is the source article: https://www.contributing.md/2017/06/30/golang-with-aws-codebuild/
Is there a way to drop root user on AWS CodeBuild?
We are building a Yocto project that fails on CodeBuild if we're root (Bitbake sanity check).
Our desperate approach doesn't work either:
...
build:
  commands:
    - chmod -R 777 $(pwd)/ && chown -R builder $(pwd)/ && su -c "$(pwd)/make.sh" -s /bin/bash builder
...
Fails with:
bash: /codebuild/output/src624711770/src/.../make.sh: Permission denied
Any idea how we could run this as non-root?
I succeeded in using a non-root user in AWS CodeBuild.
It takes much more than knowing some CodeBuild options to come up with a practical solution.
Everyone should spot the run-as option quite easily.
The next question is "which user?"; you cannot just put any word as a username.
In order to find out which users are available, the next clue is at Docker images provided by CodeBuild section. There, you'll find a link to each image definition.
For me, the link led to this page on GitHub.
After inspecting the source code of the Dockerfile, we learn that there is a user called codebuild-user available, and we can use this codebuild-user for our run-as in the buildspec.
Then we face a whole lot of other problems, because the standard image installs each language runtime for root only.
This is as far as generic explanations can go.
For me, I wanted to use the Ruby runtime, so my only concern is the Ruby runtime.
If you use CodeBuild for something else, you are on your own now.
In order to utilize the Ruby runtime as codebuild-user, we have to expose it from the root user. To do that, I changed the required permissions and the owner of the .rbenv used by the CodeBuild image with the following commands:
chmod +x ~
chown -R codebuild-user:codebuild-user ~/.rbenv
Bundler (Ruby's dependency management tool) still wants to access the home directory, which is not writable. We have to set an environment variable to make it use another writable location as the home directory.
The environment variable is BUNDLE_USER_HOME.
Putting everything together, my buildspec looks like this:
version: 0.2
env:
  variables:
    RAILS_ENV: test
    BUNDLE_USER_HOME: /tmp/bundle-user
    BUNDLE_SILENCE_ROOT_WARNING: true
run-as: codebuild-user
phases:
  install:
    runtime-versions:
      ruby: 2.x
    run-as: root
    commands:
      - chmod +x ~
      - chown -R codebuild-user:codebuild-user ~/.rbenv
      - bundle config set path 'vendor/bundle'
      - bundle install
  build:
    commands:
      - bundle exec rails spec
cache:
  paths:
    - vendor/bundle/**/*
My points are that it is, indeed, possible, and to show how I did it for my use case.
Thank you for this feature request. Currently you cannot run as a non-root user in CodeBuild, I have passed it to the team for further review. Your feedback is very much appreciated.
To run CodeBuild as non-root, you need to specify a Linux username using the run-as tag in your buildspec.yml, as shown in the docs:
version: 0.2
run-as: Linux-user-name
env:
  variables:
    key: "value"
    key: "value"
  parameter-store:
    key: "value"
    key: "value"
phases:
  install:
    run-as: Linux-user-name
    runtime-versions:
      runtime: version
What we ended up doing was the following:
Create a Dockerfile which contains all the stuff needed to build a Yocto/Bitbake project, in which we ADD the required sources and create a user builder which we use to build our project.
FROM ubuntu:16.04
RUN apt-get update && apt-get -y upgrade
# Required Packages for the Host Development System
RUN apt-get install -y gawk wget git-core diffstat unzip texinfo gcc-multilib \
build-essential chrpath socat cpio python python3 python3-pip python3-pexpect \
xz-utils debianutils iputils-ping vim
# Additional host packages required by poky/scripts/wic
RUN apt-get install -y curl dosfstools mtools parted syslinux tree
# Create a non-root user that will perform the actual build
RUN id builder 2>/dev/null || useradd --uid 30000 --create-home builder
RUN apt-get install -y sudo
RUN echo "builder ALL=(ALL) NOPASSWD: ALL" | tee -a /etc/sudoers
# Fix error "Please use a locale setting which supports utf-8."
# See https://wiki.yoctoproject.org/wiki/TipsAndTricks/ResolvingLocaleIssues
RUN apt-get install -y locales
RUN sed -i -e 's/# en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen && \
echo 'LANG="en_US.UTF-8"'>/etc/default/locale && \
dpkg-reconfigure --frontend=noninteractive locales && \
update-locale LANG=en_US.UTF-8
ENV LC_ALL en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US.UTF-8
WORKDIR /home/builder/
ADD ./ ./
USER builder
ENTRYPOINT ["/bin/bash", "-c", "./make.sh"]
We build this Docker image during the CodeBuild pre_build step and run the actual build in the ENTRYPOINT (in make.sh) when we run the image. After the container has exited, we copy the artifacts to the CodeBuild host and put them on S3:
version: 0.2
phases:
  pre_build:
    commands:
      - mkdir ./images
      - docker build -t bob .
  build:
    commands:
      - docker run bob:latest
  post_build:
    commands:
      # copy the last exited container's images into the host as build artifact
      - docker cp $(docker container ls -a | head -2 | tail -1 | awk '{ print $1 }'):/home/builder/yocto-env/build/tmp/deploy/images ./images
      - tar -cvzf artifacts.tar.gz ./images/*
artifacts:
  files:
    - artifacts.tar.gz
The only drawback of this approach is that we can't (easily) use CodeBuild's caching functionality. But the build is sufficiently fast for us, since we do local builds during the day and basically one rebuild from scratch at night, which takes about 90 minutes (on the most powerful CodeBuild instance).
Sigh. So I came across this question, and I am disappointed that there is no good or simple answer to this problem. There are many, many processes that strongly discourage running as root, like composer, and others that will flat-out refuse, like wp-cli. If you are using the Ubuntu "standard image" provided by AWS, there appears to be an existing user in the /etc/passwd file, dockremap:x:1000:1000::/home/dockremap:/bin/sh. I think this user is for userns-remap in Docker, and I am not sure about its availability. The other option, which astonishingly hasn't been mentioned, is running useradd -N -G users develop to create a new user in the container. It is far simpler than spinning up a custom container for something so trivial.
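As a minimal sketch of that last approach (the username develop comes from the answer above; the su invocation and the composer command are my own illustrative assumptions):
version: 0.2
phases:
  install:
    commands:
      # runs as root: create an unprivileged user inside the build container
      - useradd -N -G users develop
  build:
    commands:
      # run the root-averse tool as that user; 'composer install' is a
      # placeholder for whatever refuses to run as root
      - su -s /bin/bash develop -c "cd $CODEBUILD_SRC_DIR && composer install"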
I tried to follow this doc (https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/java-se-nginx.html) but couldn't get the custom nginx conf built.
I am able to deploy an application and environment and it works. After testing a working environment I wanted to modify some nginx configurations and I followed the steps as:
cd WS
mkdir -p .ebextensions/nginx/conf.d
cp ~/dozee.conf .ebextensions/nginx/conf.d
eb deploy
WS is the directory from which eb deploy works perfectly. After logging in (via ssh) to the instance created by the eb environment, I could see dozee.conf present at /var/app/current/.ebextensions/nginx/conf.d/, but it was not present at /etc/nginx/conf.d/.
What might I be missing here? Any help is appreciated :)
The most likely problem is that your .ebextensions folder is not being included in your build. Can you post your buildspec.yml? To give you an idea of what needs to happen, here is one of mine:
version: 0.2
phases:
  install:
    commands:
      - echo Entering install phase...
      - echo Nothing to do in the install phase...
  pre_build:
    commands:
      - echo Entering pre_build phase...
      - echo Running tests...
      - mvn test
  build:
    commands:
      - echo Entering build phase...
      - echo Build started on `date`
      - mvn package -Dmaven.test.skip=true
  post_build:
    commands:
      - echo Entering post_build phase...
      - echo Build completed on `date`
      - mv target/app.war app.war
artifacts:
  type: zip
  files:
    - app.war
    - .ebextensions/**/*
I'm building a CI/CD pipeline using git, CodeBuild, and Elastic Beanstalk.
During CodeBuild execution, when the build fails due to a syntax error in a test case, I see CodeBuild progress to the next stage and ultimately go on to produce the artifacts.
My understanding was that if the build fails, execution should stop. Is this the correct behavior?
Please see the buildspec below.
version: 0.2
phases:
  install:
    commands:
      - echo Installing package.json..
      - npm install
      - echo Installing Mocha...
      - npm install -g mocha
  pre_build:
    commands:
      - echo Installing source NPM placeholder dependencies...
  build:
    commands:
      - echo Build started on `date`
      - echo Compiling the Node.js code
      - mocha modules/**/tests/*.js
  post_build:
    commands:
      - echo Build completed on `date`
artifacts:
  files:
    - modules/*
    - node_modules/*
    - package.json
    - config/*
    - server.js
CodeBuild detects build failures by exit codes. You should ensure that your test execution returns a non-zero exit code on failure.
POST_BUILD will always run as long as BUILD was also run (regardless of BUILD's success or failure). The same goes for UPLOAD_ARTIFACTS. This is so you can retrieve debug information/artifacts.
If you want to do something different in POST_BUILD depending on the success or failure of BUILD, you can test the builtin environment variable CODEBUILD_BUILD_SUCCEEDING, which is set to 1 if BUILD succeeded, and 0 if it failed.
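For example, a minimal post_build sketch along those lines (the echo messages and the exit-on-failure behavior are illustrative choices, not mandated by CodeBuild):
post_build:
  commands:
    # CODEBUILD_BUILD_SUCCEEDING is 1 if BUILD succeeded, 0 if it failed
    - |
      if [ "$CODEBUILD_BUILD_SUCCEEDING" = "1" ]; then
        echo "BUILD succeeded; running packaging steps"
      else
        echo "BUILD failed; skipping packaging" && exit 1
      fi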
CodeBuild uses the environment variable CODEBUILD_BUILD_SUCCEEDING to show whether the build process seems to be going right.
The best way I have found so far is to create a small script in the install section and then always use it, like this:
phases:
  install:
    commands:
      - echo '#!/bin/bash' > /usr/local/bin/ok; echo 'if [[ "$CODEBUILD_BUILD_SUCCEEDING" == "0" ]]; then exit 1; else exit 0; fi' >> /usr/local/bin/ok; chmod +x /usr/local/bin/ok
  post_build:
    commands:
      - ok && echo Build completed on `date`
The post_build section runs even if the build section fails. Expanding on the previous answers, you can use the variable CODEBUILD_BUILD_SUCCEEDING in the post_build section of the buildspec.yml file to make the post_build section run if and only if the build section completed successfully. Below is an example of how this can be achieved:
version: 0.2
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - $(aws ecr get-login --no-include-email --region $AWS_DEFAULT_REGION)
      - CODEBUILD_RESOLVED_SOURCE_VERSION="${CODEBUILD_RESOLVED_SOURCE_VERSION:-$IMAGE_TAG}"
      - IMAGE_TAG=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_URI="$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG"
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t $IMAGE_URI .
  post_build:
    commands:
      - bash -c "if [ \"$CODEBUILD_BUILD_SUCCEEDING\" == \"0\" ]; then exit 1; fi"
      - echo Build stage successfully completed on `date`
      - docker push $IMAGE_URI
      - printf '[{"name":"clair","imageUri":"%s"}]' "$IMAGE_URI" > images.json
artifacts:
  files:
    - images.json
Add this in the build section:
build:
  on-failure: ABORT
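For context, a minimal sketch of where that option sits in a full buildspec (the test command and artifact pattern are placeholders):
version: 0.2
phases:
  build:
    # stop the whole build here on failure instead of continuing to post_build
    on-failure: ABORT
    commands:
      - npm test
artifacts:
  files:
    - '**/*'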
I just wanted to point out that if you want the whole execution to stop when a command fails, you may specify the -e option:
When running a bash file:
- /bin/bash -e ./commands.sh
Or when running a set of commands in a bash file:
#!/bin/bash
set -e
# ... commands
The post_build stage will be executed and the artifacts will be produced. The post_build phase is good for properly shutting down the build environment, if necessary, and the artifacts can be useful even if the build failed, e.g. extra logs, intermediate files, etc.
I would suggest using post_build only for commands that are agnostic to the result of your build and that properly de-initialise the build environment. Otherwise, you can just exclude that step.
With Snap-CI going away I've been trying to get our builds working on AWS CodeBuild. I have my buildspec.yml built out, but changing directories doesn't seem to work.
version: 0.1
phases:
  install:
    commands:
      - apt-get update -y
      - apt-get install -y node
      - apt-get install -y npm
  build:
    commands:
      - cd MyDir # Expect to be in MyDir now
      - echo `pwd` # Shows /tmp/blablabla/ instead of /tmp/blablabla/MyDir
      - npm install # Fails because I'm not in the right directory
      - bower install
      - npm run ci
  post_build:
    commands:
      - echo Build completed on `date`
artifacts:
  files:
    - MyDir/MyFile.war
  discard-paths: yes
It seems like this should be fairly simple, but so far I haven't had any luck getting this to work.
If you change the buildspec.yml version to 0.2 then the shell keeps its settings.
In version: 0.1 you get a clean shell for each command.
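For example, a minimal sketch of the difference (directory names are illustrative):
version: 0.2
phases:
  build:
    commands:
      - cd MyDir
      # under v0.2 the shell state persists across commands, so this
      # now prints .../MyDir instead of the source root
      - pwd
      - npm install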
Each command in CodeBuild runs in a separate shell against the root of your source (you can access the root of your source via the CODEBUILD_SRC_DIR environment variable).
Your possible options are:
Short-circuit the commands to run under the same shell (works when you have a relatively simple buildspec, like yours):
commands:
  - cd MyDir && npm install && bower install
  - cd MyDir && npm run ci
Move your commands from the buildspec to a script for more control (useful for more complicated build logic):
commands:
  - ./mybuildscript.sh
Let me know if any of these work for you.
-- EDIT --
CodeBuild has since launched buildspec v0.2, where this workaround is no longer required.