How to use git lfs in AWS CodeBuild? - amazon-web-services

Since AWS CodeBuild doesn't seem to support git LFS (Large File Storage) I tried to install it:
version: 0.2
phases:
  install:
    commands:
      - apt-get install -y bash curl
      - curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | bash
      - apt-get install -y git-lfs
  pre_build:
    commands:
      - echo Downloading LFS files
      - git lfs pull
  build:
    commands:
      - echo Build started on `date`
  post_build:
    commands:
      - echo Build completed on `date`
For the above buildspec I'm getting the following error (repository address renamed):
[Container] 2020/06/18 16:02:17 Running command git lfs pull
fatal: could not read Password for 'https://username@bitbucket.org': No such device or address
batch response: Git credentials for https://username@bitbucket.org/company/repo.git not found.
error: failed to fetch some objects from 'https://username@bitbucket.org/company/repo.git/info/lfs'
[Container] 2020/06/18 16:02:17 Command did not exit successfully git lfs pull exit status 2
[Container] 2020/06/18 16:02:17 Phase complete: PRE_BUILD State: FAILED
[Container] 2020/06/18 16:02:17 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: git lfs pull. Reason: exit status 2
Can I do something else in order to fetch LFS files?

CodeBuild does not natively support Git LFS. The workaround is to set up Git LFS and clone the repository as part of the buildspec.yml execution.
Use 'git-credential-helper: yes' in the buildspec so that CodeBuild provides credentials to git commands.
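A minimal sketch of how those two pieces could fit together (the package-manager install step below is an assumption; adjust it for the build image in use):
version: 0.2
env:
  git-credential-helper: yes
phases:
  install:
    commands:
      # Assumed install step for an Ubuntu-based build image
      - apt-get update && apt-get install -y git-lfs
  pre_build:
    commands:
      # With the credential helper enabled, git lfs pull can authenticate against the source repository
      - git lfs install
      - git lfs pull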

CodeBuild does not support Git LFS; however, it's possible to install it on the fly and then run git lfs pull from the source directory to download the files, like this:
env:
  git-credential-helper: yes
phases:
  install:
    commands:
      - cd /tmp/
      - curl -OJL https://github.com/git-lfs/git-lfs/releases/download/v2.13.2/git-lfs-linux-amd64-v2.13.2.tar.gz
      - tar xzf git-lfs-linux-amd64-v2.13.2.tar.gz
      - ./install.sh
      - cd $CODEBUILD_SRC_DIR
  pre_build:
    commands:
      - git lfs pull
<rest of your buildspec.yml file>
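If the pull still fails, a couple of optional diagnostic commands (not part of the original answer) can help confirm the install and the endpoint being used:
  pre_build:
    commands:
      - git lfs version   # confirms the git-lfs binary is on PATH
      - git lfs env       # shows the LFS endpoint and credential helper in use
      - git lfs pull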

CodeBuild doesn't support Git LFS out of the box. One workaround is to install it manually, but this works only if you are connecting to GitHub, Bitbucket or another provider directly (e.g. via an SSH key).
If you are using it with CodePipeline and a repository connection (aka "CodeStar Source Connections"), it won't work. When you connect your Bitbucket or GitHub account this way, it creates a kind of "proxy" that doesn't support git-lfs resources:
batch response: Repository or object not found: https://codestar-connections.eu-central-1.amazonaws.com/git-http/[..].git/info/lfs/objects/batch
Check that it exists and that you have proper access to it
Failed to fetch some objects from 'https://codestar-connections.eu-central-1.amazonaws.com/git-http/[..].git/info/lfs'
With GitHub, however, there is a workaround:
GitHub CodeBuild git-lfs workaround
First you have to make sure that in the pipeline's Source stage the output artifact format is set to CODE_ZIP (the "CodePipeline default" option in the Console).
Then in GitHub, in the repository settings, make sure that Git LFS objects are included in source code archives (the "Include Git LFS objects in archives" option).
This will make it work. Now source code downloaded by CodePipeline and passed to CodeBuild will include git-lfs files.
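For reference, a sketch of what that source action could look like in a CloudFormation definition of the pipeline (the account ID, connection ARN, repository and branch names below are placeholders):
# Hypothetical Source stage of an AWS::CodePipeline::Pipeline resource.
# OutputArtifactFormat: CODE_ZIP makes CodePipeline hand CodeBuild a source archive
# (which can include Git LFS objects) instead of a git clone reference.
- Name: Source
  Actions:
    - Name: GitHubSource
      ActionTypeId:
        Category: Source
        Owner: AWS
        Provider: CodeStarSourceConnection
        Version: "1"
      Configuration:
        ConnectionArn: arn:aws:codestar-connections:eu-central-1:123456789012:connection/example-id
        FullRepositoryId: company/repo
        BranchName: main
        OutputArtifactFormat: CODE_ZIP
      OutputArtifacts:
        - Name: SourceOutput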

Related

Running Chromedriver on AWS instance freezes during build - bind() failed: Cannot assign requested address (99)

I'm trying to get Selenium automation tests running with ChromeDriver on AWS, and an error occurs in the logs that freezes the process; I'm unable to get around it. I've tried adding verbose logging to ChromeDriver, but this hasn't helped.
These are the last of the logs (I can provide the full logs on request):
[Container] 2022/06/13 09:02:47 Running command sudo unzip chromedriver_linux64.zip
Archive: chromedriver_linux64.zip
inflating: chromedriver
[Container] 2022/06/13 09:02:47 Running command sudo mv chromedriver /usr/bin/chromedriver
[Container] 2022/06/13 09:02:47 Running command chromedriver –-version
Starting ChromeDriver 80.0.3987.106 (f68069574609230cf9b635cd784cfb1bf81bb53a-refs/branch-heads/3987@{#882}) on port 9515
Only local connections are allowed.
Please protect ports used by ChromeDriver and related test frameworks to prevent access by malicious code.
[1655110968.068][SEVERE]: bind() failed: Cannot assign requested address (99)
The build freezes at this point without failing and I have no idea why it's doing this. My YAML file is below:
version: 0.2
phases:
  build:
    commands:
      - echo Build started on `date`
      - cd /tmp/
      - sudo wget https://chromedriver.storage.googleapis.com/80.0.3987.106/chromedriver_linux64.zip
      - sudo unzip chromedriver_linux64.zip
      - sudo mv chromedriver /usr/bin/chromedriver
      - chromedriver –-version
      - sudo curl https://intoli.com/install-google-chrome.sh | bash
      - sudo mv /usr/bin/google-chrome-stable /usr/bin/google-chrome
      - google-chrome – version && which google-chrome
      - pip3 install selenium – user
      - mvn $PREPROD_CREDENTIALS -Dcucumber.options="--tags @Regression --tags @GUI" test
  post_build:
    commands:
      - echo Build completed on `date`
      - mvn surefire-report:report-only
reports:
  arn:aws:codebuild:eu-west-2:161668806093:report-group/empris-automation-gui-test-preprod-reportGroupCucumberJson:
    files:
      - 'TEST-com.emprisautomationtest.apiDefinition.RunCukesTest.xml'
    base-directory: 'target'
    discard-paths: yes
artifacts:
  files:
    - '**/*'
cache:
  paths:
    - '/root/.m2/**/*'

.gitlab-ci.yaml throws "Cleaning up file based variables 00:01 ERROR: Job failed: exit code 1" at the end after successfully running the job

When I try to commit changes to GitLab for continuous integration, I face this error even though all my steps pass successfully. GitLab CI shows this:
Cleaning up file based variables 00:01 ERROR: Job failed: exit code 1
I am running one stage, "deploy", at the moment. Here is my script for deploy:
image: python:3.8
stages:
  - deploy
default:
  before_script:
    - wget https://golang.org/dl/go1.16.5.linux-amd64.tar.gz
    - rm -rf /usr/local/go && tar -C /usr/local -xzf go1.16.5.linux-amd64.tar.gz
    - export PATH=$PATH:/usr/local/go/bin
    - source ~/.bashrc
    - pip3 install awscli --upgrade
    - pip3 install aws-sam-cli --upgrade
deploy-development:
  only:
    - feature/backend/ci/cd
  stage: deploy
  script:
    - sam build -p
    - yes | sam deploy
This command probably creates an issue in the docker shell:
yes | sam deploy
Try this command:
sam deploy --no-confirm-changeset --no-fail-on-empty-changeset
From https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-deploy.html:
--confirm-changeset | --no-confirm-changeset Prompt to confirm whether the AWS SAM CLI deploys the computed changeset.
--fail-on-empty-changeset | --no-fail-on-empty-changeset Specify whether to return a non-zero exit code if there are no changes to be made to the stack. The default behavior is to return a non-zero exit code.
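Applied to the job above, the deploy script would then look something like this (a sketch; the rest of the job is unchanged):
deploy-development:
  only:
    - feature/backend/ci/cd
  stage: deploy
  script:
    - sam build -p
    # Non-interactive deploy: skip the confirmation prompt and don't fail on an empty changeset
    - sam deploy --no-confirm-changeset --no-fail-on-empty-changeset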

Integrating SonarQube within AWS CodePipeline: Connection Refused

tl;dr
CodePipeline crashes on the mvn sonar:sonar line of my buildspec.yml file with the following log (I formatted it a bit for better readability):
[ERROR] SonarQube server [http://localhost:9000] can not be reached
...
[ERROR] Failed to execute goal
org.sonarsource.scanner.maven:sonar-maven-plugin:3.7.0.1746:sonar
(default-cli) on project myproject:
Unable to execute SonarQube:
Fail to get bootstrap index from server:
Failed to connect to localhost/127.0.0.1:9000:
Connection refused (Connection refused) -> [Help 1]
Goal
This is my first project with AWS, so sorry if I'm providing irrelevant information.
I'm trying to deploy my backend API so that it's reachable by the public. Among other things, I want CI/CD set up to automatically run tests and abort on failure or if a certain quality gate isn't passed. If everything goes fine, the new version should automatically be deployed online.
Current state
My pipeline automatically aborts when one of the tests fails, but that is about all I've managed to get working properly.
I've yet to figure out how to deploy the API (even manually) to be able to send requests to it. Maybe it's already done and I just don't know which URL to use, though.
Anyway, as it is, the pipeline crashes on the mvn sonar:sonar line of my buildspec.yml file.
The files
Here is my buildspec.yml:
version: 0.2
phases:
  install:
    runtime-versions:
      java: openjdk8
    commands:
      ##############################################################################################
      ##### "cd / && ls" returns: [bin, boot, codebuild, dev, etc, go, home, lib, lib32, lib64,
      ##### media, mnt, opt, proc, root, run, sbin, srv, sys, tmp, usr, var]
      ##### Initial directory where this starts is $CODEBUILD_SRC_DIR
      ##### That variable contains something like "/codebuild/output/src511423169/src"
      ##############################################################################################
      # Upgrade AWS CLI to the latest version
      - pip install --upgrade awscli
      # Folder organization
      - cd /root
      - codeAnalysisFolder="Sonar" # todo: refactor to include "/root"
      - mkdir $codeAnalysisFolder && cd $codeAnalysisFolder
      # Get SonarQube
      - wget https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-8.1.0.31237.zip
      - unzip ./sonarqube-8.1.0.31237.zip
      # Launch SonarQube server locally
      - cd ./sonarqube-8.1.0.31237/bin/linux-x86-64
      - sh ./sonar.sh start
      # Get SonarScanner
      - cd /root/$codeAnalysisFolder
      - wget https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-4.2.0.1873-linux.zip
      - unzip ./sonar-scanner-cli-4.2.0.1873-linux.zip
      - export PATH=$PATH:/root/$codeAnalysisFolder/sonar-scanner-cli-4.2.0.1873-linux.zip/bin/ # todo: .zip ?!
  pre_build:
    commands:
      - cd $CODEBUILD_SRC_DIR
      - mvn clean compile test
      - mvn sonar:sonar
  build:
    commands:
      - mvn war:exploded
  post_build:
    commands:
      - cp -r .ebextensions/ target/ROOT/
      - aws cloudformation package --template template.yml --s3-bucket $S3_BUCKET --output-template-file template-export.yml
      # Do not remove this statement. This command is required for AWS CodeStar projects.
      # Update the AWS Partition, AWS Region, account ID and project ID in the project ARN on template-configuration.json file so AWS CloudFormation can tag project resources.
      - sed -i.bak 's/\$PARTITION\$/'${PARTITION}'/g;s/\$AWS_REGION\$/'${AWS_REGION}'/g;s/\$ACCOUNT_ID\$/'${ACCOUNT_ID}'/g;s/\$PROJECT_ID\$/'${PROJECT_ID}'/g' template-configuration.json
artifacts:
  type: zip
  files:
    - 'template-export.yml'
    - 'template-configuration.json'
Here are the last few lines of the log of the failed build:
[INFO] User cache: /root/.sonar/cache
[ERROR] SonarQube server [http://localhost:9000] can not be reached
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 6.071 s
[INFO] Finished at: 2019-12-18T21:27:23Z
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.sonarsource.scanner.maven:sonar-maven-plugin:3.7.0.1746:sonar (default-cli) on project myproject: Unable to execute SonarQube: Fail to get bootstrap index from server: Failed to connect to localhost/127.0.0.1:9000: Connection refused (Connection refused) -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[Container] 2019/12/18 21:27:23 Command did not exit successfully mvn sonar:sonar exit status 1
[Container] 2019/12/18 21:27:23 Phase complete: PRE_BUILD State: FAILED
[Container] 2019/12/18 21:27:23 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: mvn sonar:sonar. Reason: exit status 1
And in case you're also interested, here is the build log related to the sh ./sonar.sh start command:
[Container] 2019/12/18 21:25:49 Running command sh ./sonar.sh start
Starting SonarQube...
Started SonarQube.
Additionally, here is my sonar-project.properties file:
# SONAR SCANNER CONFIGS
sonar.projectKey=bullhubs
# SOURCES
sonar.java.source=8
sonar.sources=src/main/java
sonar.java.binaries=target/classes
sonar.sourceEncoding=UTF-8
# EXCLUSIONS
# (exclusion of Lombok-generated stuff comes from the `lombok.config` file)
sonar.coverage.exclusions=**/*Exception.java
# TESTS
sonar.coverage.jacoco.xmlReportPaths=target/site/jacoco/jacoco.xml
sonar.junit.reportsPath=target/surefire-reports/TEST-*.xml
sonar.tests=src/test/java
The environment
(Sorry for the hidden info: not being sure what should remain private, I erred on the side of caution. If you need any specific information, please let me know!)
I have an Elastic Beanstalk set up with the following properties:
I also have an EC2 instance up and running:
I also use a VPC.
What I've tried
I tried adding a bunch of entries to the inbound rules of my EC2 instance's Security Group:
I started with 0.0.0.0/0 : 9000, then tried 127.0.0.1/32 : 9000, and finally All traffic. None of it worked, so the problem seems to be somewhere else.
I also tried changing some properties in the sonar-project.properties file, namely sonar.web.host and sonar.host.url, to redirect where the SonarQube server is expected to be hosted (I thought maybe I was supposed to point it to the EC2 instance's IPv4 Public IP address or its attached Public DNS (IPv4)), but the failing build log keeps showing the failure to connect to localhost:9000 when trying to contact the SonarQube server.
I've figured it out.
Somehow, SonarQube reports having started properly despite that not being true. Thus, when you see this log after having run your sh ./sonar.sh start command:
[Container] 2019/12/18 21:25:49 Running command sh ./sonar.sh start
Starting SonarQube...
Started SonarQube.
It isn't necessarily true that SonarQube's local server has successfully started. One has to go into the logs folder of the SonarQube installation and read the sonar.log file to figure out that something was actually wrong and that the server had stopped...
In my case, it reported an error that JDK11 was required to run the server. To solve that, I changed the java: openjdk8 line of my buildspec.yml to java: openjdk11.
Then, I had to figure out that a new log file was now available to be read: es.log. Printing that file in the console revealed that the latest Elasticsearch version (which is used by the latest SonarQube server version) does not allow itself to be run by a root user. Thus, I had to create a new user and group and edit a configuration file to run the server with that user:
# Set up non-root user to run SonarQube
- groupadd sonar
- useradd -c "Sonar System User" -d $sonarPath/$sonarQube -g sonar -s /bin/bash sonar
- chown -R sonar:sonar $sonarPath/$sonarQube # recursively changing the folder's ownership
# Launch SonarQube server locally
- cd ./$sonarQube/bin/linux-x86-64
- sed -i 's/#RUN_AS_USER=/RUN_AS_USER=sonar/g' sonar.sh # enabling user execution of server
- sh ./sonar.sh start
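As a side note (not part of the original fix): since sonar.sh start returns before the server is actually ready, one could also add a hedged wait loop before mvn sonar:sonar, assuming the server listens on the default port 9000 and exposes the standard status endpoint:
# Hypothetical extra step: poll SonarQube's status endpoint until it reports UP (give up after ~5 minutes)
- |
  for i in $(seq 1 60); do
    curl -sf http://localhost:9000/api/system/status | grep -q '"status":"UP"' && { echo "SonarQube is up"; break; }
    echo "Waiting for SonarQube ($i)..."; sleep 5
  done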
Complete solution
This gives us the following working version of buildspec.yml:
version: 0.2
phases:
  install:
    runtime-versions:
      java: openjdk11
    commands:
      ##############################################################################################
      ##### "cd / && ls" returns: [bin, boot, codebuild, dev, etc, go, home, lib, lib32, lib64,
      ##### media, mnt, opt, proc, root, run, sbin, srv, sys, tmp, usr, var]
      ##### Initial directory where this starts is $CODEBUILD_SRC_DIR
      ##### That variable contains something like "/codebuild/output/src511423169/src"
      ##### This folder contains the whole structure of the CodeCommit repository. This means that
      ##### the actual Java classes are accessed through "cd src" from there, for example.
      ##############################################################################################
      # Upgrade AWS CLI to the latest version
      - pip install --upgrade awscli
      # Folder organization
      - preSonarPath="/opt/"
      - codeAnalysisFolder="Sonar"
      - sonarPath="$preSonarPath$codeAnalysisFolder"
      - cd $preSonarPath && mkdir $codeAnalysisFolder
      # Get SonarQube
      - cd $sonarPath
      - sonarQube="sonarqube-8.1.0.31237"
      - wget https://binaries.sonarsource.com/Distribution/sonarqube/$sonarQube.zip
      - unzip ./$sonarQube.zip
      # Set up non-root user to run SonarQube
      - groupadd sonar
      - useradd -c "Sonar System User" -d $sonarPath/$sonarQube -g sonar -s /bin/bash sonar
      - chown -R sonar:sonar $sonarPath/$sonarQube # recursively changing the folder's ownership
      # Launch SonarQube server locally
      - cd ./$sonarQube/bin/linux-x86-64
      - sed -i 's/#RUN_AS_USER=/RUN_AS_USER=sonar/g' sonar.sh # enabling user execution of server
      - sh ./sonar.sh start
      # Get SonarScanner and add to PATH
      - sonarScanner="sonar-scanner-cli-4.2.0.1873-linux"
      - cd $sonarPath
      - wget https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/$sonarScanner.zip
      - unzip ./$sonarScanner.zip
      - export PATH=$PATH:$sonarPath/$sonarScanner.zip/bin/ # todo: .zip ?!
  pre_build:
    commands:
      - cd $CODEBUILD_SRC_DIR
      - mvn clean compile test
      # - cd $sonarPath/$sonarQube/logs
      # - cat access.log
      # - cat es.log
      # - cat sonar.log
      # - cat web.log
      # - cd $CODEBUILD_SRC_DIR
      - mvn sonar:sonar
  build:
    commands:
      - mvn war:exploded
  post_build:
    commands:
      - cp -r .ebextensions/ target/ROOT/
      - aws cloudformation package --template template.yml --s3-bucket $S3_BUCKET --output-template-file template-export.yml
      # Do not remove this statement. This command is required for AWS CodeStar projects.
      # Update the AWS Partition, AWS Region, account ID and project ID in the project ARN on template-configuration.json file so AWS CloudFormation can tag project resources.
      - sed -i.bak 's/\$PARTITION\$/'${PARTITION}'/g;s/\$AWS_REGION\$/'${AWS_REGION}'/g;s/\$ACCOUNT_ID\$/'${ACCOUNT_ID}'/g;s/\$PROJECT_ID\$/'${PROJECT_ID}'/g' template-configuration.json
artifacts:
  type: zip
  files:
    - 'template-export.yml'
    - 'template-configuration.json'
Cheers !

Is it possible/recommended to use `sam build` in AWS CodeBuild?

This question spun out of this one. Now that I have a better understanding of what was going wrong there, and a workable, if imperfect, solution, I'm submitting a more focused follow-up. (I'm still something of a novice at Stack Overflow - please let me know if this contravenes etiquette and I should follow up on the original instead.)
This page suggests that "You use AWS CodeBuild to build, locally test, and package your serverless application". However, when I include a sam build command in my buildspec.yml, I get the following log output, suggesting that sam is not installed on CodeBuild images:
[Container] 2018/12/31 11:41:49 Running command sam build --use-container
sh: 1: sam: not found
[Container] 2018/12/31 11:41:49 Command did not exit successfully sam build --use-container exit status 127
[Container] 2018/12/31 11:41:49 Phase complete: BUILD Success: false
[Container] 2018/12/31 11:41:49 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: sam build --use-container. Reason: exit status 127
Furthermore, if I install SAM with pip install aws-sam-cli, running sam build --use-container in CodeBuild gives an error. sam build itself succeeds, but since it doesn't install test dependencies, I'd still need to use pip install -r tests/requirements-test.txt -t . to be able to run tests in CodeBuild. Moreover, this suggests that --use-container is required for "packages that have natively compiled programs".
This makes me wonder whether I'm trying to do something wrong. What's the recommended way of building SAM services in a CI/CD workflow on AWS?
2019_10_18 - Update (confirming @Spiff's answer above):
Apparently CodeBuild now works seamlessly with SAM; this is all I needed in buildspec.yml for a Lambda using pandas and psycopg2-binary:
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.7
  pre_build:
    commands:
      - python -m unittest discover tests
  build:
    commands:
      - sam build
  post_build:
    commands:
      - sam package --output-template-file packaged.yaml --s3-bucket my-code-pipeline-bucketz
artifacts:
  type: zip
  files:
    - packaged.yaml
Cheers
Please see below for a buildspec.yaml that works for me when using AWS SAM with AWS CodeBuild, with cloudformation.yml as the template:
phases:
  build:
    commands:
      - pip install --user aws-sam-cli
      - USER_BASE_PATH=$(python -m site --user-base)
      - export PATH=$PATH:$USER_BASE_PATH/bin
      - sam build -t cloudformation.yml
      - aws cloudformation package --template-file .aws-sam/build/template.yaml --s3-bucket <TARGET_S3_BUCKET> --output-template-file cloudformation-packaged.yaml
      - aws s3 cp ./cloudformation-packaged.yaml <TARGET_S3_BUCKET>/cloudformation-packaged.yaml
As a result, I get a deployment package and a packaged CloudFormation template in the TARGET_S3_BUCKET.
For each function in the ./src folder, I have a requirements.txt file that includes all the dependencies, but I don't run pip install -r requirements.txt manually.
If you want to run the sam build command in CodeBuild, you must install aws-sam-cli first (probably in the install phase of your buildspec.yml), e.g. by running pip install aws-sam-cli or similar.
The --use-container option of sam build causes the command to pull a Docker image resembling the AWS Lambda execution environment, then run a container from that image to pip install (if your Lambda is written in Python) your function dependencies when creating your Lambda deployment package. This ensures that the Lambda function uses natively compiled libraries that are compatible with the actual AWS Lambda runtime environment.
Therefore, if you specify the --use-container option for sam build in CodeBuild, you also need to make sure that the image used by your CodeBuild project supports the Docker runtime.
The easiest way is to use the CodeBuild build environment based on the aws/codebuild/standard:2.0 Docker image and enable the Docker runtime in the runtime-versions property of the install phase of your buildspec.yml. You might also need to enable privileged mode on your CodeBuild project in order to connect to the Docker daemon from your build environment.
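A rough sketch of what that could look like (the runtime version numbers are illustrative, not prescriptive; privileged mode is a setting on the CodeBuild project itself, not in the buildspec):
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.7
      docker: 18            # Docker runtime so that sam build --use-container can start containers
    commands:
      - pip install aws-sam-cli
  build:
    commands:
      - sam build --use-container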
As of October 2019 I had no issues whatsoever deploying a serverless application with CodeBuild using sam build.
First of all, --user is not needed for pip install aws-sam-cli. In fact, including --user appears to be the only reason that sam ends up not being on the PATH.
In addition, --use-container is not needed either, as long as no native libraries (like psycopg) have to be built.

AWS CodeBuild - Unable to find DockerFile during build

Started playing with AWS CodeBuild.
The goal is to have a Docker image as the final result, with Node.js, hapi and a sample app running inside.
Currently I have an issue with:
"unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /tmp/src049302811/src/Dockerfile: no such file or directory"
It appears in the BUILD stage.
Project details:
S3 bucket used as a source
The ZIP file stored in the respective S3 bucket contains buildspec.yml, package.json, a sample *.js file and DockerFile.
aws/codebuild/docker:1.12.1 is used as a build environment.
When I build the image using Docker installed on my laptop there are no issues, so I can't understand which directory I need to specify to get rid of this error message.
Buildspec and DockerFile attached below.
Thanks for any comments.
buildspec.yml
version: 0.1
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - $(aws ecr get-login --region eu-west-1)
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t <CONTAINER_NAME> .
      - docker tag <CONTAINER_NAME>:latest <ID>.dkr.ecr.eu-west-1.amazonaws.com/<CONTAINER_NAME>:latest
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image...
      - docker push <id>.eu-west-1.amazonaws.com/<image>:latest
DockerFile
FROM alpine:latest
RUN apk update && apk upgrade
RUN apk add nodejs
RUN rm -rf /var/cache/apk/*
COPY . /src
RUN cd /src; npm install hapi
EXPOSE 80
CMD ["node", "/src/server.js"]
Ok, so the solution was simple.
The issue was related to the Dockerfile name.
It was not accepting DockerFile (with a capital F; strangely, that worked locally, likely because the local filesystem was case-insensitive), but Dockerfile (with a lower-case f) worked perfectly.
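Alternatively, if you'd rather keep the DockerFile name, docker build can be pointed at a specific file explicitly with the -f flag, e.g.:
      - docker build -f DockerFile -t <CONTAINER_NAME> .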
Can you validate that Dockerfile exists in the root of the directory? One way of doing this would be to run ls -altr as part of the pre-build phase in your buildspec (even before ecr login).
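For instance, something like this at the top of the pre_build phase (purely a diagnostic, safe to remove afterwards):
  pre_build:
    commands:
      # List the extracted source so you can confirm the Dockerfile is where docker build expects it
      - ls -altr
      - echo Logging in to Amazon ECR...
      - $(aws ecr get-login --region eu-west-1)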