I have an AWS CodeBuild project, and I need to call the SAM CLI inside my CodeBuild container. In the build phase, I added a command to install Linux Homebrew so that I can install the SAM CLI from the AWS Homebrew tap, per the documentation.
However, upon running this command, I am receiving the error below.
[Container] 2020/01/20 05:29:26 Running command bash -c "$(curl -fsSL https://raw.githubusercontent.com/Linuxbrew/install/master/install.sh)"
-e:196: warning: Insecure world writable dir /go/bin in PATH, mode 040777
Don't run this as root!
[Container] 2020/01/20 05:29:28 Command did not exit successfully bash -c "$(curl -fsSL https://raw.githubusercontent.com/Linuxbrew/install/master/install.sh)" exit status 1
[Container] 2020/01/20 05:29:28 Phase complete: BUILD State: FAILED
[Container] 2020/01/20 05:29:28 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: bash -c "$(curl -fsSL https://raw.githubusercontent.com/Linuxbrew/install/master/install.sh)". Reason: exit status 1
I'm using the Ubuntu standard 3.0 build environment that AWS provides.
buildspec.yml
version: 0.2

phases:
  install:
    runtime-versions:
      docker: 18
      nodejs: 10
      python: 3.8
  build:
    commands:
      - echo Installing SAM CLI
      - sh -c "$(curl -fsSL https://raw.githubusercontent.com/Linuxbrew/install/master/install.sh)"
      - brew tap aws/tap
      - brew install aws-sam-cli
      - sam version
Question: How can I successfully install Linux Homebrew inside an AWS CodeBuild project?
The first and recommended option is to bring your own build image to CodeBuild, e.g. [1], an image that includes the AWS SAM CLI.
[1] https://hub.docker.com/r/pahud/aws-sam-cli
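With a prebuilt image like that, the buildspec no longer needs an install step. A minimal sketch, assuming the image above is configured as the project's build image:

```yaml
version: 0.2

phases:
  build:
    commands:
      - sam --version   # SAM CLI is already on PATH in this image
      - sam build
```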
The second and more difficult option is to install the SAM CLI yourself. Since brew refuses to run as root and the CodeBuild build container runs as root, this gets tricky. The following is a buildspec I have tested and can confirm installs the AWS SAM CLI:
Buildspec:
version: 0.2

phases:
  install:
    commands:
      - curl -fsSL https://raw.githubusercontent.com/Linuxbrew/install/master/install.sh > /tmp/install.sh
      - cat /tmp/install.sh
      - chmod +x /tmp/install.sh
      - useradd -m brewuser
      - echo "brewuser:brewuser" | chpasswd
      - adduser brewuser sudo
      - /bin/su -c /tmp/install.sh - brewuser
      - /bin/su -c '/home/brewuser/.linuxbrew/bin/brew tap aws/tap' - brewuser
      - /bin/su -c '/home/brewuser/.linuxbrew/bin/brew install aws-sam-cli' - brewuser
  build:
    commands:
      - PATH=/home/brewuser/.linuxbrew/bin:$PATH
      - sam --version
Note: As per my tests, the Python 3.8 runtime does not include the SAM CLI.
Building on shariqmaws' answer, I used a public ECR image that includes AWS SAM and Node.js 10: public.ecr.aws/sam/build-nodejs10.x:latest. You can find out more here: https://gallery.ecr.aws/sam/build-nodejs10.x
CloudFormation template:
CodeBuildIntegrationProject:
  Type: AWS::CodeBuild::Project
  Properties:
    Artifacts:
      Type: CODEPIPELINE
    Environment:
      Type: LINUX_CONTAINER
      Image: public.ecr.aws/sam/build-nodejs10.x:latest
      ImagePullCredentialsType: CODEBUILD
      ComputeType: BUILD_GENERAL1_SMALL
    LogsConfig:
      CloudWatchLogs:
        Status: ENABLED
    Name: !Sub ${GitHubRepositoryName}-integration
    ServiceRole: !Sub ${CodeBuildRole.Arn}
    Source:
      Type: CODEPIPELINE
Related
I am trying to set up an S3 bucket and GitHub account to create a pipeline in the AWS CodePipeline service. I am getting an error I can't seem to find. It seems to be related to the npm install command, but I am not sure why. Can someone help, please?
COMMAND_EXECUTION_ERROR: Error while executing command: npm i. Reason: exit status 1
BuildSpec:
version: 0.2

env:
  variables:
    CACHE_CONTROL: "86400"
    S3_BUCKET: "{{s3_bucket_url}}"
    BUILD_FOLDER: "dist"

phases:
  install:
    runtime-versions:
      nodejs: 16
    commands:
      - echo Installing source NPM dependencies...
      - npm install
      - npm install -g @angular/cli
  build:
    commands:
      - echo Build started
      - ng build

artifacts:
  files:
    - '**/*'
  base-directory: 'dist*'
  discard-paths: yes
When I try to commit changes to GitLab for continuous integration, I face this error even though all my steps pass successfully. GitLab CI shows this:
Cleaning up file based variables 00:01 ERROR: Job failed: exit code 1
I am running one stage, "deploy", at the moment. Here is my script for the deploy:
image: python:3.8

stages:
  - deploy

default:
  before_script:
    - wget https://golang.org/dl/go1.16.5.linux-amd64.tar.gz
    - rm -rf /usr/local/go && tar -C /usr/local -xzf go1.16.5.linux-amd64.tar.gz
    - export PATH=$PATH:/usr/local/go/bin
    - source ~/.bashrc
    - pip3 install awscli --upgrade
    - pip3 install aws-sam-cli --upgrade

deploy-development:
  only:
    - feature/backend/ci/cd
  stage: deploy
  script:
    - sam build -p
    - yes | sam deploy
This command likely causes the issue in the Docker shell:
yes | sam deploy
Try this command:
sam deploy --no-confirm-changeset --no-fail-on-empty-changeset
From https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-deploy.html:
--confirm-changeset | --no-confirm-changeset Prompt to confirm whether the AWS SAM CLI deploys the computed changeset.
--fail-on-empty-changeset | --no-fail-on-empty-changeset Specify whether to return a non-zero exit code if there are no changes to be made to the stack. The default behavior is to return a non-zero exit code.
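Applied to the job from the question, the deploy script would look like this (job and stage names taken from the question above):

```yaml
deploy-development:
  only:
    - feature/backend/ci/cd
  stage: deploy
  script:
    - sam build -p
    # non-interactive deploy: no confirmation prompt, and no failure
    # when there are no changes to deploy
    - sam deploy --no-confirm-changeset --no-fail-on-empty-changeset
```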
I'm using AWS CodeBuild to deploy a function to AWS Lambda using the Serverless Framework.
Here is my buildspec.yml,
version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 10
    commands:
      - echo installing Mocha...
      - npm install -g mocha
      - echo installing Serverless...
      - npm install -g serverless
  pre_build:
    commands:
      - echo running npm install for global project...
      - npm install
      - echo running npm install for each function...
      - folders=src/*
      - for value in $folders;
        do
          echo $value
          npm --prefix $value install $value;
        done
  build:
    commands:
      - sls package
      - serverless deploy --stage $STAGE --region $AWS_DEFAULT_REGION | tee deploy.out
  post_build:
    commands:
      - echo done
      # - . ./test.sh
The problem is that even when the serverless deploy --stage $STAGE --region $AWS_DEFAULT_REGION | tee deploy.out command fails, AWS CodeBuild shows the build project as succeeded in CodePipeline. I want the build status to be failure when the serverless deploy command fails.
This happens because post_build executes whether build fails or succeeds; it does not matter that build failed, post_build will run anyway. This is explained in the build phase transitions documentation.
You can rectify this by "manually" checking in post_build whether the build failed, using the CODEBUILD_BUILD_SUCCEEDING environment variable:
CODEBUILD_BUILD_SUCCEEDING: Whether the current build is succeeding. Set to 0 if the build is failing, or 1 if the build is succeeding.
Thus, in your post_build you can check if CODEBUILD_BUILD_SUCCEEDING == 0 and exit 1 if that is true.
post_build:
  commands:
    - if [[ $CODEBUILD_BUILD_SUCCEEDING == 0 ]]; then exit 1; fi
    - echo done
    # - . ./test.sh
Your command:
- serverless deploy --stage $STAGE --region $AWS_DEFAULT_REGION | tee deploy.out
... is not returning a non-zero code on failure, which is required to fail the build. The tee command masks the return code of serverless deploy, as tee itself exits with status 0.
I would recommend rewriting the command as:
- serverless deploy --stage $STAGE --region $AWS_DEFAULT_REGION > deploy.out
- cat deploy.out
I am trying to install SonarQube using AWS CodeBuild. I am using nodejs: 10 as the runtime environment. I am getting the error below when I run the build spec shown. As I understand it, the issue is that the Node.js environment does not include Maven. If that is the case, how can I use Maven within the Node.js environment? Thanks in advance.
[Container] 2020/07/26 18:16:43 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: mvn test. Reason: exit status 1
The issue occurs when it starts to execute mvn test.
buildspec.yml
version: 0.2

env:
  secrets-manager:
    LOGIN: SonarCloud:sonartoken
    HOST: SonarCloud:HOST
    Organization: SonarCloud:Organization
    Project: prod/sonar:Project

phases:
  install:
    runtime-versions:
      nodejs: 10
  pre_build:
    commands:
      - npm install
      - apt-get update
      - apt-get install -y jq
      - wget http://www-eu.apache.org/dist/maven/maven-3/3.5.4/binaries/apache-maven-3.5.4-bin.tar.gz
      - tar xzf apache-maven-3.5.4-bin.tar.gz
      - ln -s apache-maven-3.5.4 maven
      - wget https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-3.3.0.1492-linux.zip
      - unzip ./sonar-scanner-cli-3.3.0.1492-linux.zip
      - export PATH=$PATH:/sonar-scanner-3.3.0.1492-linux/bin/
  build:
    commands:
      - mvn test
      - mvn sonar:sonar -Dsonar.login=$LOGIN -Dsonar.host.url=$HOST -Dsonar.projectKey=$Project -Dsonar.organization=$Organization
      - sleep 5
      - curl https://sonarcloud.io/api/qualitygates/project_status?projectKey=$Project >result.json
      - cat result.json
      - if [ $(jq -r '.projectStatus.status' result.json) = ERROR ] ; then $CODEBUILD_BUILD_SUCCEEDING -eq 0 ;fi
      - echo Build started on `date`
      - echo Compiling the Node.js code
  post_build:
    commands:
      - echo Build completed on `date`

artifacts:
  files:
    - server.js
    - package.json
    - controller/*
Maven is available in the java: openjdk8 runtime.
You need to add it to your yml.
Sample format :
phases:
  install:
    runtime-versions:
      java: openjdk8
  build:
    commands:
      - mvn test
Add either java: corretto11 or java: openjdk8 or java: openjdk11 under runtime-versions:, and Maven will start executing.
You might need a project-specific settings.xml for your Maven build, which you can provide in an S3 bucket and then reference under the build commands of buildspec.yml.
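A sketch of that approach; the bucket name and key here are hypothetical placeholders, not values from the question:

```yaml
build:
  commands:
    # hypothetical bucket/key; replace with your own settings.xml location
    - aws s3 cp s3://my-build-config/settings.xml ./settings.xml
    - mvn -s ./settings.xml test
```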
I'm using Corretto rather than OpenJDK in my AWS configuration, as AWS provides LTS for Corretto. Reference: https://aws.amazon.com/corretto/faqs/
I am trying to run a CodePipeline with GitHub as the source, CodeBuild as the builder, and Elastic Beanstalk as the server infrastructure. I am using the Docker image amazonlinux:2018.03, which works perfectly locally, but during the CodeBuild step of the pipeline I get the following error:
docker-compose: command not found
I have tried to install Docker, docker-compose, etc., but it keeps giving me this error. I've set the build to use a buildspec.yaml file:
version: 0.2

phases:
  install:
    commands:
      - echo "installing"
      - sudo yum install -y yum-utils
      - sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
      - sudo curl -L "https://github.com/docker/compose/releases/download/1.25.5/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
      - sudo chmod +x /usr/local/bin/docker-compose
      - sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
      - docker-compose --version
  build:
    commands:
      - bash compose-local.sh
compose-local.sh:
#!/bin/bash
sudo docker-compose up
I have tried for a couple of days, and I am not sure if I am overlooking something with CodeBuild.
Run /usr/local/bin/docker-compose up instead.
If you are using an Ubuntu 2.0+ or Amazon Linux 2 image, you need to specify docker under runtime-versions in the install phase of the buildspec.yml file, e.g.:
version: 0.2

phases:
  install:
    runtime-versions:
      docker: 18
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image with docker-compose...
      - docker-compose -f docker-compose.yml build
Also, please make sure to enable privileged mode: https://docs.aws.amazon.com/codebuild/latest/userguide/create-project.html#create-project-console
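If the project is defined in CloudFormation (as in the template earlier in this thread), privileged mode is the PrivilegedMode property on the Environment block. A sketch, with an assumed standard image:

```yaml
Environment:
  Type: LINUX_CONTAINER
  Image: aws/codebuild/standard:4.0    # assumed image; use your own
  ComputeType: BUILD_GENERAL1_SMALL
  PrivilegedMode: true                 # required to run the Docker daemon
```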