When I run the following command:

go build -o app

I get the following error (for multiple dependencies):

main.go:21:2: cannot find package "github.com/gorilla/mux" in any of:
/usr/local/go/src/github.com/gorilla/mux (from $GOROOT)
/go/src/github.com/gorilla/mux (from $GOPATH)
/codebuild/output/src324986171/src/github.com/gorilla/mux
This means the CodeBuild run fails. Any idea how I can fix this, or where the problem lies in general?
Thanks for your help.
EDIT:
After adding go get ./... to the build I get the following error for all my local packages:

# cd .; git clone https://github.com/aristotle/dbhelper /go/src/github.com/aristotle/dbhelper
Cloning into '/go/src/github.com/aristotle/dbhelper'...
My buildspec.yml looks like this:
version: 0.2

phases:
  install:
    commands:
      - echo CODEBUILD_SRC_DIR - $CODEBUILD_SRC_DIR
      - echo GOPATH - $GOPATH
      - echo GOROOT - $GOROOT
  build:
    commands:
      - echo Build started on `date`
      - echo Getting packages
      - go get ./...
      - echo Compiling the Go code...
      - go build -o app main.go
  post_build:
    commands:
      - echo Build completed on `date`
artifacts:
  files:
    - app
According to this article, you need to add the go get commands to the install section of your buildspec.yml file:
install:
  commands:
    - go get github.com/gorilla/mux
It may also work to include go get ./..., which will resolve all dependencies, but if you do not have too many it is good practice to list them explicitly.
This is the source article: https://www.contributing.md/2017/06/30/golang-with-aws-codebuild/
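Putting that together with the buildspec from the question, the install phase would end up looking roughly like this (a sketch only; gorilla/mux is just the package named in the error, and your project will likely need more go get lines):

version: 0.2

phases:
  install:
    commands:
      - echo Getting packages
      - go get github.com/gorilla/mux
      # one line per external dependency, or fall back to:
      # - go get ./...
  build:
    commands:
      - echo Compiling the Go code...
      - go build -o app main.go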
Related
I have a Spring Boot application deployed on Elastic Beanstalk (Amazon Linux 2). I need to increase client_max_body_size because some of the form data I'm posting contains images, and I'm getting the 413 Request Entity Too Large Nginx error.
I followed AWS's documentation on how to change this property.
My project structure looks like this now:
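Roughly like this, with the new config file under .platform at the project root (the exact tree is inferred from the artifact paths in the buildspec below):

myApp/
├── .platform/
│   └── nginx/
│       └── conf.d/
│           └── proxy.conf
├── src/
├── pom.xml
└── appspec.yml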
And the content of the file is:
client_max_body_size 50M;
After deploying I keep getting the same error (with images > 1MB total).
No file has been created in conf.d.
Is this because of how my buildspec packages my application?
version: 0.2

phases:
  install:
    runtime-versions:
      java: corretto17
  pre_build:
    commands:
      - echo Nothing to do in the pre_build phase...
  build:
    commands:
      - echo Build started on `date`
      - mvn package -Dmaven.test.skip
  post_build:
    commands:
      - echo Build completed on `date`
artifacts:
  files:
    - .platform/nginx/conf.d/proxy.conf
    - target/myApp-0.0.1-SNAPSHOT.jar
    - appspec.yml
  discard-paths: yes
I also tried adding the configuration file into the artifacts.files section of my buildspec.yml.
I also tried to create the file and its content from the files section of the buildspec.
I feel like I've tried everything; is there anything I may be missing?
For now, my workaround:
I manually edited the file:
cd /etc/nginx/
sudo nano nginx.conf
and restarted Nginx. That worked, but I want to avoid this manual configuration so it's configured from the application source, as a good practice.
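(For reference, on Amazon Linux 2 the restart is typically just:

sudo systemctl restart nginx

but the whole point is to avoid having to SSH in and do this by hand.)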
The problem was in my buildspec.
discard-paths: yes
was putting all the files at the root of the bundle. I needed this so that the jar would be at the root, but it was putting proxy.conf at the root as well.
Setting that property to no (or removing it) made it work, but I then needed a way to move the jar from /target/ to the root, so I did it with a post-build command:
version: 0.2

phases:
  install:
    runtime-versions:
      java: corretto17
  pre_build:
    commands:
      - echo Nothing to do in the pre_build phase...
  build:
    commands:
      - echo Build started on `date`
      - mvn package -Dmaven.test.skip
  post_build:
    commands:
      - mv target/myApp-0.0.1-SNAPSHOT.jar myApp-0.0.1-SNAPSHOT.jar   # <------ HERE
      - echo Build completed on `date`
artifacts:
  files:
    - .platform/nginx/conf.d/proxy.conf
    - myApp-0.0.1-SNAPSHOT.jar
    - appspec.yml
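With the bundle laid out like this, Elastic Beanstalk on Amazon Linux 2 copies .platform/nginx/conf.d/*.conf into /etc/nginx/conf.d/ during deployment, so you can sanity-check the result on the instance with something like:

ls /etc/nginx/conf.d/
sudo nginx -T | grep client_max_body_size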
I usually name artifacts based on the commits they have been built from.
Based on this documentation, CODEBUILD_WEBHOOK_PREV_COMMIT is what I am looking for in AWS CodeBuild.
Here is the buildspec.yml
phases:
  install:
    commands:
      - apt-get update -y
  build:
    commands:
      - export $CODEBUILD_WEBHOOK_PREV_COMMIT
      - echo Entered the build phase...
      - echo Build started on `date`
      - mvn clean install -Dmaven.test.skip=true
      - for f in ./target/*.car;do mv -- "$f" $(echo $f | sed -E "s/.car$/_${CODEBUILD_WEBHOOK_PREV_COMMIT}.car/") ;done
artifacts:
  files:
    - ./target/*.car
The build works, but the commit hash does not show up in the final .car name. I would like to understand why.
Hypothesis n°1: VARs need to be explicitly sourced
I tried the following, without much success:
env:
  variable:
    - COMMIT="${CODEBUILD_WEBHOOK_PREV_COMMIT}"
phases:
  install:
    commands:
      - apt-get update -y
  build:
    commands:
      - echo Entered the build phase...
      - echo Build started on `date`
      - mvn clean install -Dmaven.test.skip=true
      - carpath=./*_CA/target/*.car
      - for f in $carpath;do mv -- "$f" $(echo $f | sed -E "s/.car$/_${COMMIT}.car/") ;done
Hypothesis n°2: VARs are only available in AWS's default build container
I am using Maven's official image maven:3.6.3-jdk-8 instead of Amazon's general purpose build image. Are VARs available for custom images? I can't find any clear indication they are not.
I lost an entire afternoon on this. For anyone coming here with the same problem, here is how I solved it:
First, I put printenv in the commands to see what was going on, and the $CODEBUILD_WEBHOOK_PREV_COMMIT environment variable was completely missing. But you can use $CODEBUILD_RESOLVED_SOURCE_VERSION instead, which was there!
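As a sketch, the rename step from the question then becomes something like this (the same for-loop, just swapping in the variable that actually exists; the ./target/*.car path is the one from the question):

build:
  commands:
    - mvn clean install -Dmaven.test.skip=true
    - for f in ./target/*.car; do mv -- "$f" "$(echo "$f" | sed -E "s/\.car$/_${CODEBUILD_RESOLVED_SOURCE_VERSION}.car/")"; done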
TL;DR
How can I cache Go modules in CodeBuild using the AWS-provided image (Go 1.12)?
Background
I'm trying to cache Go modules in CodeBuild using the Go 1.12 image from AWS.
Trying to cache /go/pkg/mod
After digging deeper, I found that there is no /go/pkg folder in that image. Hence, when I tried to cache /go/pkg it would throw an error.
Error mounting /go/pkg/mod: symlink /codebuild/local-cache/custom//go/pkg/mod /go/pkg/mod: no such file or directory
Even if I run go mod download (which creates /go/pkg/mod), it won't cache the folder because CodeBuild could not mount it earlier.
This is my codebuild.yml
version: 0.2

phases:
  install:
    runtime-versions:
      golang: 1.12
      nodejs: 10
    commands:
      - npm install
  build:
    commands:
      - go build -ldflags="-s -w" -o api/bin/main api/main.go
cache:
  paths:
    - /go/src/**/*
    - /go/pkg/mod/**/*
Trying to cache ./vendor
I also tried caching the ./vendor folder, which doesn't throw any errors in CodeBuild. However, I don't think it's caching anything because the build time doesn't decrease. It also says it ignores the symlink.
warning: ignoring symlink /codebuild/output/src074479210/src/github.com/kkesley/myrepo/vendor
go: finding github.com/aws/aws-lambda-go v1.11.1
go: finding github.com/stretchr/testify v1.2.1
go: finding github.com/pmezard/go-difflib v1.0.0
go: finding github.com/davecgh/go-spew v1.1.0
go: finding gopkg.in/urfave/cli.v1 v1.20.0
go: downloading github.com/aws/aws-lambda-go v1.11.1
go: extracting github.com/aws/aws-lambda-go v1.11.1
This is my codebuild.yml for this version:
version: 0.2

phases:
  install:
    runtime-versions:
      golang: 1.12
      nodejs: 10
    commands:
      - npm install
      - go mod vendor
  build:
    commands:
      - go build -mod vendor -ldflags="-s -w" -o api/bin/main api/main.go
cache:
  paths:
    - /go/src/**/*
    - vendor/**/*
Question
How do you cache Go modules in CodeBuild without using a custom Docker image? Is it possible?
To get it working with the default CodeBuild Ubuntu build image (I'm using v4):
Make sure that local caching is enabled on the CodeBuild project: go to Edit, then Artifacts, and make sure Custom cache is ticked.
Set the path to cache as /go/pkg/**/*:
cache:
  paths:
    - '/go/pkg/**/*'
Make sure your build script includes a go mod download step before building. Before I did this, caching didn't seem to work, so this seems to be a key step.
Here is my full buildspec.yml for reference
version: 0.2

phases:
  install:
    runtime-versions:
      golang: latest
    commands:
      - "git config --global credential.helper '!aws codecommit credential-helper $@'"
      - "git config --global credential.UseHttpPath true"
  build:
    commands:
      - 'go mod edit -dropreplace git-codecommit.ap-southeast-2.amazonaws.com/v1/repos/xyz'
      - 'go mod download'
      # Use latest from develop for the build (test env only)
      - 'go get git-codecommit.ap-southeast-2.amazonaws.com/v1/repos/xyz@develop'
      - 'rm -rf "dist"'
      - 'cp -r "eb-template" "dist"'
      - 'env GOOS=linux GOARCH=amd64 go build -o "dist/bin/server"'
      - 'go mod edit -replace git-codecommit.ap-southeast-2.amazonaws.com/v1/repos/xyz=../xyz'
      - 'echo -n ${CODEBUILD_RESOLVED_SOURCE_VERSION} > dist/commithash'
artifacts:
  base-directory: dist
  files:
    - '**/*'
cache:
  paths:
    - '/go/pkg/**/*'
I tried to follow this doc ( https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/java-se-nginx.html ) but couldn't get the custom nginx conf to work.
I am able to deploy an application and environment and it works. After testing the working environment I wanted to modify some nginx configuration, so I followed these steps:
cd WS
mkdir -p .ebextensions/nginx/conf.d
cp ~/dozee.conf .ebextensions/nginx/conf.d
eb deploy
WS is a directory from which eb deploy works perfectly. After logging in (via SSH) to the instance created by the eb environment, I could see dozee.conf at /var/app/current/.ebextensions/nginx/conf.d/, but it was not present at /etc/nginx/conf.d/.
What might I be missing here? Any help is appreciated :)
The most likely problem is that your .ebextensions folder is not being included in your build. Can you post your buildspec.yml? To give you an idea of what needs to happen, here is one of mine:
version: 0.2

phases:
  install:
    commands:
      - echo Entering install phase...
      - echo Nothing to do in the install phase...
  pre_build:
    commands:
      - echo Entering pre_build phase...
      - echo Running tests...
      - mvn test
  build:
    commands:
      - echo Entering build phase...
      - echo Build started on `date`
      - mvn package -Dmaven.test.skip=true
  post_build:
    commands:
      - echo Entering post_build phase...
      - echo Build completed on `date`
      - mv target/app.war app.war
artifacts:
  type: zip
  files:
    - app.war
    - .ebextensions/**/*
I'm building a CI/CD pipeline using Git, CodeBuild and Elastic Beanstalk.
During CodeBuild execution, when the build fails due to a syntax error in a test case, I see CodeBuild progress to the next stage and ultimately go on to produce the artifacts.
My understanding was that if the build fails, execution should stop. Is this the correct behavior?
Please see the buildspec below.
version: 0.2

phases:
  install:
    commands:
      - echo Installing package.json..
      - npm install
      - echo Installing Mocha...
      - npm install -g mocha
  pre_build:
    commands:
      - echo Installing source NPM placeholder dependencies...
  build:
    commands:
      - echo Build started on `date`
      - echo Compiling the Node.js code
      - mocha modules/**/tests/*.js
  post_build:
    commands:
      - echo Build completed on `date`
artifacts:
  files:
    - modules/*
    - node_modules/*
    - package.json
    - config/*
    - server.js
CodeBuild detects build failures by exit codes. You should ensure that your test execution returns a non-zero exit code on failure.
POST_BUILD will always run as long as BUILD was also run (regardless of BUILD's success or failure.) The same goes for UPLOAD_ARTIFACTS. This is so you can retrieve debug information/artifacts.
If you want to do something different in POST_BUILD depending on the success or failure of BUILD, you can test the builtin environment variable CODEBUILD_BUILD_SUCCEEDING, which is set to 1 if BUILD succeeded, and 0 if it failed.
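For example, a minimal sketch of a post_build step gated on that variable (the echo commands here are just placeholders for whatever you actually want to run on success):

post_build:
  commands:
    - if [ "$CODEBUILD_BUILD_SUCCEEDING" = "1" ]; then echo "Build succeeded, running publish steps..."; else echo "Build failed, skipping publish steps."; fi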
CodeBuild uses the environment variable CODEBUILD_BUILD_SUCCEEDING to indicate whether the build process is going right.
The best way I have found so far is to create a small script in the install section and then always use it like this:
phases:
  install:
    commands:
      - echo '#!/bin/bash' > /usr/local/bin/ok; echo 'if [[ "$CODEBUILD_BUILD_SUCCEEDING" == "0" ]]; then exit 1; else exit 0; fi' >> /usr/local/bin/ok; chmod +x /usr/local/bin/ok
  post_build:
    commands:
      - ok && echo Build completed on `date`
The post_build section runs even if the build section fails. Expanding on the previous answers, you can use the variable CODEBUILD_BUILD_SUCCEEDING in the post_build section of the buildspec.yml file to make the post_build commands run if and only if the build section completed successfully. Below is an example of how this can be achieved:
version: 0.2

phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - $(aws ecr get-login --no-include-email --region $AWS_DEFAULT_REGION)
      - CODEBUILD_RESOLVED_SOURCE_VERSION="${CODEBUILD_RESOLVED_SOURCE_VERSION:-$IMAGE_TAG}"
      - IMAGE_TAG=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_URI="$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG"
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t $IMAGE_URI .
  post_build:
    commands:
      - bash -c "if [ \"$CODEBUILD_BUILD_SUCCEEDING\" == \"0\" ]; then exit 1; fi"
      - echo Build stage successfully completed on `date`
      - docker push $IMAGE_URI
      - printf '[{"name":"clair","imageUri":"%s"}]' "$IMAGE_URI" > images.json
artifacts:
  files: images.json
Add this in the build section:

build:
  on-failure: ABORT
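In the buildspec from the question, that would look roughly like this (only the on-failure line is new; it requires a CodeBuild/buildspec version recent enough to support on-failure):

phases:
  build:
    on-failure: ABORT
    commands:
      - echo Build started on `date`
      - echo Compiling the Node.js code
      - mocha modules/**/tests/*.js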
I just wanted to point out that if you want the whole execution to stop when a command fails, you may specify the -e option:
When running a bash file
- /bin/bash -e ./commands.sh
Or inside the bash file itself, when running a set of commands:
#!/bin/bash
set -e
# ... commands
The post_build stage will be executed and the artifacts will be produced even if the build fails. post_build is good for properly shutting down the build environment, if necessary, and the artifacts can still be useful when the build fails, e.g. extra logs, intermediate files, etc.
I would suggest using post_build only for commands that are agnostic to the result of your build and for properly de-initialising the build environment. Otherwise you can just exclude that step.