Integrating SonarQube within AWS CodePipeline: Connection Refused

tl;dr
CodePipeline crashes on the mvn sonar:sonar line of my buildspec.yml file with the following log (I formatted it a bit for better readability):
[ERROR] SonarQube server [http://localhost:9000] can not be reached
...
[ERROR] Failed to execute goal
org.sonarsource.scanner.maven:sonar-maven-plugin:3.7.0.1746:sonar
(default-cli) on project myproject:
Unable to execute SonarQube:
Fail to get bootstrap index from server:
Failed to connect to localhost/127.0.0.1:9000:
Connection refused (Connection refused) -> [Help 1]
Goal
This is my first project with AWS, so I'm sorry if I provide irrelevant information.
I'm trying to deploy my backend API so that it's reachable by the public. Among other things, I want CI/CD set up to automatically run the tests and abort on failure or if a certain quality gate isn't passed. If everything goes well, the new version should automatically be deployed online.
Current state
My pipeline automatically aborts when one of the tests fails, but that is about all I've managed to get working properly.
I've yet to figure out how to deploy the API (even manually) so that I can send requests to it. Maybe it's already done and I just don't know which URL to use, though.
Anyway, as things stand, the CodePipeline crashes on the mvn sonar:sonar line of my buildspec.yml file.
The files
Here is my buildspec.yml:
version: 0.2
phases:
  install:
    runtime-versions:
      java: openjdk8
    commands:
      ##############################################################################################
      ##### "cd / && ls" returns: [bin, boot, codebuild, dev, etc, go, home, lib, lib32, lib64,
      #####     media, mnt, opt, proc, root, run, sbin, srv, sys, tmp, usr, var]
      ##### Initial directory where this starts is $CODEBUILD_SRC_DIR
      ##### That variable contains something like "/codebuild/output/src511423169/src"
      ##############################################################################################
      # Upgrade AWS CLI to the latest version
      - pip install --upgrade awscli
      # Folder organization
      - cd /root
      - codeAnalysisFolder="Sonar" # todo: refactor to include "/root"
      - mkdir $codeAnalysisFolder && cd $codeAnalysisFolder
      # Get SonarQube
      - wget https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-8.1.0.31237.zip
      - unzip ./sonarqube-8.1.0.31237.zip
      # Launch SonarQube server locally
      - cd ./sonarqube-8.1.0.31237/bin/linux-x86-64
      - sh ./sonar.sh start
      # Get SonarScanner
      - cd /root/$codeAnalysisFolder
      - wget https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-4.2.0.1873-linux.zip
      - unzip ./sonar-scanner-cli-4.2.0.1873-linux.zip
      - export PATH=$PATH:/root/$codeAnalysisFolder/sonar-scanner-cli-4.2.0.1873-linux.zip/bin/ # todo: .zip ?!
  pre_build:
    commands:
      - cd $CODEBUILD_SRC_DIR
      - mvn clean compile test
      - mvn sonar:sonar
  build:
    commands:
      - mvn war:exploded
  post_build:
    commands:
      - cp -r .ebextensions/ target/ROOT/
      - aws cloudformation package --template template.yml --s3-bucket $S3_BUCKET --output-template-file template-export.yml
      # Do not remove this statement. This command is required for AWS CodeStar projects.
      # Update the AWS Partition, AWS Region, account ID and project ID in the project ARN on template-configuration.json file so AWS CloudFormation can tag project resources.
      - sed -i.bak 's/\$PARTITION\$/'${PARTITION}'/g;s/\$AWS_REGION\$/'${AWS_REGION}'/g;s/\$ACCOUNT_ID\$/'${ACCOUNT_ID}'/g;s/\$PROJECT_ID\$/'${PROJECT_ID}'/g' template-configuration.json
artifacts:
  type: zip
  files:
    - 'template-export.yml'
    - 'template-configuration.json'
Here are the last few lines of the log of the failed build:
[INFO] User cache: /root/.sonar/cache
[ERROR] SonarQube server [http://localhost:9000] can not be reached
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 6.071 s
[INFO] Finished at: 2019-12-18T21:27:23Z
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.sonarsource.scanner.maven:sonar-maven-plugin:3.7.0.1746:sonar (default-cli) on project myproject: Unable to execute SonarQube: Fail to get bootstrap index from server: Failed to connect to localhost/127.0.0.1:9000: Connection refused (Connection refused) -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[Container] 2019/12/18 21:27:23 Command did not exit successfully mvn sonar:sonar exit status 1
[Container] 2019/12/18 21:27:23 Phase complete: PRE_BUILD State: FAILED
[Container] 2019/12/18 21:27:23 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: mvn sonar:sonar. Reason: exit status 1
And since you might also be interested, here is the part of the build log related to the sh ./sonar.sh start command:
[Container] 2019/12/18 21:25:49 Running command sh ./sonar.sh start
Starting SonarQube...
Started SonarQube.
Additionally, here is my sonar-project.properties file:
# SONAR SCANNER CONFIGS
sonar.projectKey=bullhubs
# SOURCES
sonar.java.source=8
sonar.sources=src/main/java
sonar.java.binaries=target/classes
sonar.sourceEncoding=UTF-8
# EXCLUSIONS
# (exclusion of Lombok-generated stuff comes from the `lombok.config` file)
sonar.coverage.exclusions=**/*Exception.java
# TESTS
sonar.coverage.jacoco.xmlReportPaths=target/site/jacoco/jacoco.xml
sonar.junit.reportsPath=target/surefire-reports/TEST-*.xml
sonar.tests=src/test/java
The environment
(Sorry for the hidden info: not being sure what should remain private, I erred on the side of caution. If you need any specific information, please let me know!)
I have an Elastic Beanstalk environment set up.
I also have an EC2 instance up and running.
I also use a VPC.
What I've tried
I tried adding a bunch of entries into the inbound rules of my EC2's Security Group:
I started with 0.0.0.0/0 : 9000, then tried 127.0.0.1/32 : 9000, and finally tried All traffic. None of it worked, so the problem seems to lie somewhere else.
I also tried changing some properties in the sonar-project.properties file, namely sonar.web.host and sonar.host.url, to redirect where the SonarQube server is looked for (I thought maybe I was supposed to point it to the EC2's IPv4 Public IP address or its attached Public DNS (IPv4)), but the failing build log kept showing the same failure to connect to localhost:9000 when trying to contact the SonarQube server.
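For reference, sonar.web.host is a server-side setting that belongs in the server's conf/sonar.properties, whereas the scanner reads sonar.host.url. A minimal sketch of pointing the Maven scanner at another host (the <ec2-public-dns> placeholder is hypothetical):

      - mvn sonar:sonar -Dsonar.host.url=http://<ec2-public-dns>:9000

or, equivalently, as a line in sonar-project.properties:

sonar.host.url=http://<ec2-public-dns>:9000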

I've figured it out.
Somehow, SonarQube reports having started properly even when that isn't true. So when you see this log after having run your sh ./sonar.sh start command:
[Container] 2019/12/18 21:25:49 Running command sh ./sonar.sh start
Starting SonarQube...
Started SonarQube.
It isn't necessarily true that SonarQube's local server has successfully started. You have to go into the logs folder of the SonarQube installation and read the sonar.log file to find out that something actually went wrong and that the server has stopped...
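A minimal sketch of the kind of check that could go right after the sh ./sonar.sh start command to surface this earlier (the $sonarPath/$sonarQube variables are the ones defined in the complete buildspec below, and sonar.sh status is the wrapper script's own status command):

      # Sanity check: did the server actually stay up? (sketch)
      - sh ./sonar.sh status
      - cat $sonarPath/$sonarQube/logs/sonar.log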
In my case, sonar.log reported an error saying that JDK 11 was required to run the server. To solve that, I changed the java: openjdk8 line of my buildspec.yml to java: openjdk11.
Then I had to figure out that a new log file was now available to read: es.log. Printing that file to the console revealed that the latest Elasticsearch version (which the latest SonarQube server uses) does not allow itself to be run by a root user. I therefore had to create a new user and group and edit a configuration file to run the server as that user:
# Set up non-root user to run SonarQube
- groupadd sonar
- useradd -c "Sonar System User" -d $sonarPath/$sonarQube -g sonar -s /bin/bash sonar
- chown -R sonar:sonar $sonarPath/$sonarQube # recursively changing the folder's ownership
# Launch SonarQube server locally
- cd ./$sonarQube/bin/linux-x86-64
- sed -i 's/#RUN_AS_USER=/RUN_AS_USER=sonar/g' sonar.sh # enabling user execution of server
- sh ./sonar.sh start
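A related timing note: sh ./sonar.sh start returns as soon as the wrapper is launched, not when the web server is actually ready to accept analyses, so the scanner can still race ahead of it. A hedged sketch of a wait loop that could be added before mvn sonar:sonar (it assumes curl is available in the build image and relies on SonarQube's /api/system/status endpoint):

      # Optional: wait until the local SonarQube server reports UP before scanning (sketch)
      - until curl -s http://localhost:9000/api/system/status | grep -q '"status":"UP"'; do sleep 5; done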
Complete solution
This gives us the following working version of buildspec.yml:
version: 0.2
phases:
  install:
    runtime-versions:
      java: openjdk11
    commands:
      ##############################################################################################
      ##### "cd / && ls" returns: [bin, boot, codebuild, dev, etc, go, home, lib, lib32, lib64,
      #####     media, mnt, opt, proc, root, run, sbin, srv, sys, tmp, usr, var]
      ##### Initial directory where this starts is $CODEBUILD_SRC_DIR
      ##### That variable contains something like "/codebuild/output/src511423169/src"
      ##### This folder contains the whole structure of the CodeCommit repository. This means that
      #####     the actual Java classes are accessed through "cd src" from there, for example.
      ##############################################################################################
      # Upgrade AWS CLI to the latest version
      - pip install --upgrade awscli
      # Folder organization
      - preSonarPath="/opt/"
      - codeAnalysisFolder="Sonar"
      - sonarPath="$preSonarPath$codeAnalysisFolder"
      - cd $preSonarPath && mkdir $codeAnalysisFolder
      # Get SonarQube
      - cd $sonarPath
      - sonarQube="sonarqube-8.1.0.31237"
      - wget https://binaries.sonarsource.com/Distribution/sonarqube/$sonarQube.zip
      - unzip ./$sonarQube.zip
      # Set up non-root user to run SonarQube
      - groupadd sonar
      - useradd -c "Sonar System User" -d $sonarPath/$sonarQube -g sonar -s /bin/bash sonar
      - chown -R sonar:sonar $sonarPath/$sonarQube # recursively changing the folder's ownership
      # Launch SonarQube server locally
      - cd ./$sonarQube/bin/linux-x86-64
      - sed -i 's/#RUN_AS_USER=/RUN_AS_USER=sonar/g' sonar.sh # enabling user execution of server
      - sh ./sonar.sh start
      # Get SonarScanner and add to PATH
      - sonarScanner="sonar-scanner-cli-4.2.0.1873-linux"
      - cd $sonarPath
      - wget https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/$sonarScanner.zip
      - unzip ./$sonarScanner.zip
      - export PATH=$PATH:$sonarPath/$sonarScanner.zip/bin/ # todo: .zip ?!
  pre_build:
    commands:
      - cd $CODEBUILD_SRC_DIR
      - mvn clean compile test
      # - cd $sonarPath/$sonarQube/logs
      # - cat access.log
      # - cat es.log
      # - cat sonar.log
      # - cat web.log
      # - cd $CODEBUILD_SRC_DIR
      - mvn sonar:sonar
  build:
    commands:
      - mvn war:exploded
  post_build:
    commands:
      - cp -r .ebextensions/ target/ROOT/
      - aws cloudformation package --template template.yml --s3-bucket $S3_BUCKET --output-template-file template-export.yml
      # Do not remove this statement. This command is required for AWS CodeStar projects.
      # Update the AWS Partition, AWS Region, account ID and project ID in the project ARN on template-configuration.json file so AWS CloudFormation can tag project resources.
      - sed -i.bak 's/\$PARTITION\$/'${PARTITION}'/g;s/\$AWS_REGION\$/'${AWS_REGION}'/g;s/\$ACCOUNT_ID\$/'${ACCOUNT_ID}'/g;s/\$PROJECT_ID\$/'${PROJECT_ID}'/g' template-configuration.json
artifacts:
  type: zip
  files:
    - 'template-export.yml'
    - 'template-configuration.json'
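As a follow-up to the original goal of aborting the pipeline when the quality gate isn't passed: newer SonarQube versions also accept a sonar.qualitygate.wait analysis parameter that makes the scanner wait for the server's verdict and fail the build if the gate fails, so the scan step could become (a sketch, not something the build above relies on):

      - mvn sonar:sonar -Dsonar.qualitygate.wait=true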
Cheers!

Related

Running Chromedriver on AWS instance freezes during build - bind() failed: Cannot assign requested address (99)

I'm trying to get Selenium automation tests running with ChromeDriver on AWS, and an error occurs in the logs which freezes the process; I'm unable to get around it. I've tried adding verbose logging to ChromeDriver, but this hasn't worked.
These are the last of the logs (I can provide the full logs on request):
[Container] 2022/06/13 09:02:47 Running command sudo unzip chromedriver_linux64.zip
Archive: chromedriver_linux64.zip
inflating: chromedriver
[Container] 2022/06/13 09:02:47 Running command sudo mv chromedriver /usr/bin/chromedriver
[Container] 2022/06/13 09:02:47 Running command chromedriver –-version
Starting ChromeDriver 80.0.3987.106 (f68069574609230cf9b635cd784cfb1bf81bb53a-refs/branch-heads/3987#{#882}) on port 9515
Only local connections are allowed.
Please protect ports used by ChromeDriver and related test frameworks to prevent access by malicious code.
[1655110968.068][SEVERE]: bind() failed: Cannot assign requested address (99)
The build freezes at this point without failing, and I have no idea why. My YAML file is below:
version: 0.2
phases:
  build:
    commands:
      - echo Build started on `date`
      - cd /tmp/
      - sudo wget https://chromedriver.storage.googleapis.com/80.0.3987.106/chromedriver_linux64.zip
      - sudo unzip chromedriver_linux64.zip
      - sudo mv chromedriver /usr/bin/chromedriver
      - chromedriver –-version
      - sudo curl https://intoli.com/install-google-chrome.sh | bash
      - sudo mv /usr/bin/google-chrome-stable /usr/bin/google-chrome
      - google-chrome – version && which google-chrome
      - pip3 install selenium – user
      - mvn $PREPROD_CREDENTIALS -Dcucumber.options="--tags #Regression --tags #GUI" test
  post_build:
    commands:
      - echo Build completed on `date`
      - mvn surefire-report:report-only
reports:
  arn:aws:codebuild:eu-west-2:161668806093:report-group/empris-automation-gui-test-preprod-reportGroupCucumberJson:
    files:
      - 'TEST-com.emprisautomationtest.apiDefinition.RunCukesTest.xml'
    base-directory: 'target'
    discard-paths: yes
artifacts:
  files:
    - '**/*'
cache:
  paths:
    - '/root/.m2/**/*'
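One observation about the buildspec above (an observation, not a confirmed fix): –-version, – version and – user are written with an en dash instead of two ASCII hyphens, so they are not valid flags. That would also explain why the chromedriver –-version step in the log starts the ChromeDriver server on port 9515 instead of printing a version and exiting, leaving the build stuck on a foreground process. A corrected sketch of those lines, assuming the dashes are copy/paste artifacts:

      - chromedriver --version
      - google-chrome --version && which google-chrome
      - pip3 install selenium --user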

How to use git lfs in AWS CodeBuild?

Since AWS CodeBuild doesn't seem to support Git LFS (Large File Storage), I tried to install it:
version: 0.2
phases:
  install:
    commands:
      - apt-get install -y bash curl
      - curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | bash
      - apt-get install -y git-lfs
  pre_build:
    commands:
      - echo Downloading LFS files
      - git lfs pull
  build:
    commands:
      - echo Build started on `date`
  post_build:
    commands:
      - echo Build completed on `date`
For the above code I'm getting the following error (renamed repo address):
[Container] 2020/06/18 16:02:17 Running command git lfs pull
fatal: could not read Password for 'https://username#bitbucket.org': No such device or address
batch response: Git credentials for https://username#bitbucket.org/company/repo.git not found.
error: failed to fetch some objects from 'https://username#bitbucket.org/company/repo.git/info/lfs'
[Container] 2020/06/18 16:02:17 Command did not exit successfully git lfs pull exit status 2
[Container] 2020/06/18 16:02:17 Phase complete: PRE_BUILD State: FAILED
[Container] 2020/06/18 16:02:17 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: git lfs pull. Reason: exit status 2
Can I do something else in order to fetch LFS files?
CodeBuild does not natively support Git LFS. The workaround is to set up Git LFS and clone the repository as part of the buildspec.yml execution.
Use 'git-credential-helper: yes' in the buildspec so that CodeBuild provides credentials to the git commands.
CodeBuild does not support Git LFS; however, it's possible to install it on the fly and then run git lfs pull from the source directory to download the files. Like this:
env:
  git-credential-helper: yes
phases:
  install:
    commands:
      - cd /tmp/
      - curl -OJL https://github.com/git-lfs/git-lfs/releases/download/v2.13.2/git-lfs-linux-amd64-v2.13.2.tar.gz
      - tar xzf git-lfs-linux-amd64-v2.13.2.tar.gz
      - ./install.sh
      - cd $CODEBUILD_SRC_DIR
  pre_build:
    commands:
      - git lfs pull
<rest of your buildspec.yml file>
CodeBuild doesn't support Git LFS out of the box. One workaround would be to install it manually, but this will work only if you are connecting to GitHub, BitBucket or other provider directly (e.g. via SSH key).
If you are using it with CodePipeline and a repository connection (aka "CodeStar Source Connections"), then it won't work. When you connect your Bitbucket or GitHub account this way, it creates a kind of proxy that doesn't support git-lfs resources:
batch response: Repository or object not found: https://codestar-connections.eu-central-1.amazonaws.com/git-http/[..].git/info/lfs/objects/batch
Check that it exists and that you have proper access to it
Failed to fetch some objects from 'https://codestar-connections.eu-central-1.amazonaws.com/git-http/[..].git/info/lfs'
With GitHub, however, there is a workaround:
GitHub CodeBuild git-lfs workaround
First you have to make sure that, in the pipeline's Source stage, the source output artifact is set to CODE_ZIP (there is a corresponding setting in the Console).
Then in GitHub, in the repository settings, make sure that git-lfs resources are included in source code archives.
This will make it work. The source code downloaded by CodePipeline and passed to CodeBuild will now include the git-lfs files.
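For pipelines defined in CloudFormation rather than through the Console, the equivalent setting is the OutputArtifactFormat property of the CodeStarSourceConnections source action; a sketch (connection ARN, repository and branch are placeholders):

      - Name: Source
        Actions:
          - Name: Source
            ActionTypeId:
              Category: Source
              Owner: AWS
              Provider: CodeStarSourceConnections
              Version: '1'
            Configuration:
              ConnectionArn: <your-connection-arn>
              FullRepositoryId: <owner>/<repo>
              BranchName: <branch>
              OutputArtifactFormat: CODE_ZIP
            OutputArtifacts:
              - Name: SourceOutput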

AWS Elastic Beanstalk error: Failed to deploy application

I spent many hours trying to solve my problem. I use CodePipeline: a Source stage (code from Bitbucket) and CodeBuild, which produces a Docker container and stores the image in ECR.
In CodeDeploy I want to deploy that image from ECR to Elastic Beanstalk.
Errors in Elastic Beanstalk:
Environment health has transitioned from Info to Degraded. Command failed on all instances. Incorrect application version found on all instances. Expected version "Sample Application" (deployment 6). Application update failed 15 seconds ago and took 59 seconds.
During an aborted deployment, some instances may have deployed the new application version. To ensure all instances are running the same version, re-deploy the appropriate application version.
Failed to deploy application.
Unsuccessful command execution on instance id(s) 'i-04df549361597208a'. Aborting the operation.
Another error from EB:
Incorrect application version "code-pipeline-1586854202535-MyflashcardsBuildOutput-ce0d6cd7-8290-40ad-a95e-9c57162b9ff1"
(deployment 9). Expected version "Sample Application" (deployment 8).
Error in CodeDeploy:
Action execution failed
Deployment completed, but with errors: During an aborted deployment, some instances may have deployed the new application version. To ensure all instances are running the same version, re-deploy the appropriate application version. Failed to deploy application. Unsuccessful command execution on instance id(s) 'i-04df539061522208a'. Aborting the operation. [Instance: i-04df549333582208a] Command failed on instance. An unexpected error has occurred [ErrorCode: 0000000001].
Does anyone know what happens here?
I use Dockerfile:
### STAGE 1: Build ###
FROM node:12.7-alpine AS build
WORKDIR /usr/src/app
COPY package.json ./
RUN npm install
COPY . .
RUN npm run build
### STAGE 2: Run ###
FROM nginx:1.17.1-alpine
EXPOSE 80
COPY --from=build /usr/src/app/dist /usr/share/nginx/html
and buildspec.yml:
version: 0.2
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws --version
      - $(aws ecr get-login --region eu-west-1 --no-include-email)
      - REPOSITORY_URI=176901363719.dkr.ecr.eu-west-1.amazonaws.com/myflashcards
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=myflashcards
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image
      - docker build --tag $REPOSITORY_URI:latest .
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push $REPOSITORY_URI:latest
      - echo Writing image definitions file...
      - printf '[{"name":"eagle","imageUri":"%s"}]' $REPOSITORY_URI:$IMAGE_TAG > imagedefinitions.json
      # - echo Deleting old artifacts
      # - aws s3 sync dist/ s3://$BUCKET_NAME --delete
artifacts:
  files: imagedefinitions.json
The third step (CodeDeploy) fails:(
I ran into the same issue. The first fix worked for me. Here are all the possible fixes that can resolve this issue:
Reason: a bug in Elastic Beanstalk that makes the multi-stage build step fail. The AWS logs would show you a message like docker pull requires exactly one argument.
Solution: Use an unnamed build stage. By default, stages are not named, and you refer to them by their integer number, starting with 0 for the first FROM instruction. Change your Dockerfile as below:
### STAGE 1: Build ###
FROM node:12.7-alpine
WORKDIR /usr/src/app
COPY package.json ./
RUN npm install
COPY . .
RUN npm run build
### STAGE 2: Run ###
FROM nginx:1.17.1-alpine
EXPOSE 80
COPY --from=0 /usr/src/app/dist /usr/share/nginx/html
Reason: if you are using t2.micro as the instance type, the npm install command sometimes times out on it.
Solution: Change the instance type that Elastic Beanstalk uses to something other than t2.micro (say, t2.small).
If neither of the two fixes above works, try changing the COPY line of your Dockerfile as below:
COPY package*.json ./
As AWS sometimes prefers ./ over '.'

AWS Codedeploy No such file or directory

I have two problems deploying via AWS CodeDeploy.
I'm trying to deploy CodeCommit's code to an EC2 ubuntu instance.
At appspec.yml
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ubuntu
hooks:
  ApplicationStart:
    - location: scripts/ApplicationStart.sh
      timeout: 300
      runas: ubuntu
There are several config files that I need to place in the right spots in the application before starting pm2. I also assume that, since I set runas in appspec.yml to ubuntu, the bash script will run from /home/ubuntu.
My /home/ubuntu contains:
config/ backend/ frontend/
It looks like CodeDeploy won't overwrite the previous deployment, so if the backend/ and frontend/ folders are already in the directory, it fails at the Install stage.
In the ApplicationStart.sh
#!bin/bash
sudo cp config/config1.json backend/config/config1.json
sudo cp config/config2.json backend/config/environments/development/config2.json
sudo cp config/config3.json frontend/config3.json
sudo pm2 kill
cd backend
sudo npm install
sudo pm2 start "strapi start" --name backend
cd ../frontend
sudo npm install
sudo pm2 start "npm start" --name frontend
During the ApplicationStart stage, it gives me the following error:
LifecycleEvent - ApplicationStart
Script - scripts/ApplicationStart.sh
[stderr]bash: /opt/codedeploy-agent/path/to/deployment/scripts/ApplicationStart.sh: bin/bash:
bad interpreter: No such file or directory
I ran the same bash file manually at /home/ubuntu and it works fine.
Question 1.
- How do I run BeforeInstall.sh without the error? Is there a path problem, or am I trying to do something I'm not supposed to?
Question 2.
- How can I get CodeDeploy to overwrite the previous deployment when there are already application folders in the directory (/home/ubuntu)?
- Do I manually delete the directory in the BeforeInstall stage?
You're missing a slash before bin/bash in #!bin/bash.
It should be #!/bin/bash.
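So the first line of scripts/ApplicationStart.sh should read #!/bin/bash; the rest of the script can stay as it is. As for Question 2 (the Install step failing because backend/ and frontend/ already exist), the original answer doesn't cover it, but a common approach (treat this as a sketch, not a confirmed fix) is a BeforeInstall hook, added alongside the existing ApplicationStart hook, that clears out the previous deployment before CodeDeploy copies the new files:

hooks:
  BeforeInstall:
    - location: scripts/BeforeInstall.sh
      timeout: 300

with a hypothetical scripts/BeforeInstall.sh along the lines of:

#!/bin/bash
# Remove folders left behind by the previous deployment so the Install step can copy fresh files
rm -rf /home/ubuntu/backend /home/ubuntu/frontend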

AWS CodeBuild - Unable to find DockerFile during build

Started playing with AWS CodeBuild.
The goal is to have a Docker image as the final result, with Node.js, hapi, and a sample app running inside.
Currently I have an issue with the following error:
"unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /tmp/src049302811/src/Dockerfile: no such file or directory"
Appears on BUILD stage.
Project details:
An S3 bucket is used as the source.
The ZIP file stored in that S3 bucket contains buildspec.yml, package.json, a sample *.js file and the DockerFile.
aws/codebuild/docker:1.12.1 is used as the build environment.
When I build the image using Docker installed on my laptop there are no issues, so I can't understand which directory I need to specify to get rid of this error message.
The buildspec and DockerFile are attached below.
Thanks for any comments.
buildspec.yml
version: 0.1
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - $(aws ecr get-login --region eu-west-1)
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t <CONTAINER_NAME> .
      - docker tag <CONTAINER_NAME>:latest <ID>.dkr.ecr.eu-west-1.amazonaws.com/<CONTAINER_NAME>:latest
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image...
      - docker push <id>.eu-west-1.amazonaws.com/<image>:latest
DockerFile
FROM alpine:latest
RUN apk update && apk upgrade
RUN apk add nodejs
RUN rm -rf /var/cache/apk/*
COPY . /src
RUN cd /src; npm install hapi
EXPOSE 80
CMD ["node", "/src/server.js"]
OK, so the solution was simple.
The issue was related to the Dockerfile name.
It was not accepting DockerFile (with a capital F; strangely, that worked locally), but Dockerfile (with a lower-case f) worked perfectly.
Can you validate that Dockerfile exists in the root of the directory? One way of doing this would be to run ls -altr as part of the pre-build phase in your buildspec (even before ecr login).
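A sketch of what that debugging step could look like in the buildspec above (the listing goes in pre_build, before the ECR login):

  pre_build:
    commands:
      - ls -altr   # confirm Dockerfile is present in the build directory
      - echo Logging in to Amazon ECR...
      - $(aws ecr get-login --region eu-west-1)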