AWS Elastic Beanstalk error: Failed to deploy application - amazon-web-services

I have spent many hours trying to solve this problem. I use CodePipeline: a Source stage (code from Bitbucket) and a CodeBuild stage that produces a Docker container and stores the image in ECR.
In the deploy stage (CodeDeploy) I want to deploy that image from ECR to Elastic Beanstalk.
Errors in Elastic Beanstalk:
Environment health has transitioned from Info to Degraded. Command failed on all instances. Incorrect application version found on all instances. Expected version "Sample Application" (deployment 6). Application update failed 15 seconds ago and took 59 seconds.
During an aborted deployment, some instances may have deployed the new application version. To ensure all instances are running the same version, re-deploy the appropriate application version.
Failed to deploy application.
Unsuccessful command execution on instance id(s) 'i-04df549361597208a'. Aborting the operation.
Another error from EB:
Incorrect application version "code-pipeline-1586854202535-MyflashcardsBuildOutput-ce0d6cd7-8290-40ad-a95e-9c57162b9ff1"
(deployment 9). Expected version "Sample Application" (deployment 8).
Error in CodeDeploy:
Action execution failed
Deployment completed, but with errors: During an aborted deployment, some instances may have deployed the new application version. To ensure all instances are running the same version, re-deploy the appropriate application version. Failed to deploy application. Unsuccessful command execution on instance id(s) 'i-04df539061522208a'. Aborting the operation. [Instance: i-04df549333582208a] Command failed on instance. An unexpected error has occurred [ErrorCode: 0000000001].
Does anyone know what is happening here?
I use this Dockerfile:
### STAGE 1: Build ###
FROM node:12.7-alpine AS build
WORKDIR /usr/src/app
COPY package.json ./
RUN npm install
COPY . .
RUN npm run build
### STAGE 2: Run ###
FROM nginx:1.17.1-alpine
EXPOSE 80
COPY --from=build /usr/src/app/dist /usr/share/nginx/html
and buildspec.yml:
version: 0.2
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws --version
      - $(aws ecr get-login --region eu-west-1 --no-include-email)
      - REPOSITORY_URI=176901363719.dkr.ecr.eu-west-1.amazonaws.com/myflashcards
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=myflashcards
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image
      - docker build --tag $REPOSITORY_URI:latest .
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push $REPOSITORY_URI:latest
      - echo Writing image definitions file...
      - printf '[{"name":"eagle","imageUri":"%s"}]' $REPOSITORY_URI:$IMAGE_TAG > imagedefinitions.json
      # - echo Deleting old artifacts
      # - aws s3 sync dist/ s3://$BUCKET_NAME --delete
artifacts:
  files: imagedefinitions.json
The third step (CodeDeploy) fails. :(

I ran into the same issue. The first fix worked for me. Here are all the possible fixes that can resolve this issue:
Reason: a bug with Elastic Beanstalk that makes the multi-stage build step fail. The AWS logs would show you a message like docker pull requires exactly one argument.
Solution: use an unnamed build stage. By default, the stages are not named, and you refer to them by their integer index, starting with 0 for the first FROM instruction. Change your Dockerfile as below:
### STAGE 1: Build ###
FROM node:12.7-alpine
WORKDIR /usr/src/app
COPY package.json ./
RUN npm install
COPY . .
RUN npm run build
### STAGE 2: Run ###
FROM nginx:1.17.1-alpine
EXPOSE 80
COPY --from=0 /usr/src/app/dist /usr/share/nginx/html
Reason: if you are using t2.micro as the instance type, the npm install command sometimes times out on it.
Solution: change the instance type that Elastic Beanstalk is using to something other than t2.micro (say, t2.small); see the sketch below.
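For illustration (not from the original answer), one way to set the instance type without clicking through the console is an .ebextensions option settings file using the classic launch configuration namespace; the file name here is hypothetical:
# .ebextensions/instance-type.config  (hypothetical file name)
option_settings:
  aws:autoscaling:launchconfiguration:
    InstanceType: t2.small
The same setting can also be changed in the Elastic Beanstalk console under Configuration > Capacity.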
If neither of the above two fixes works, try changing the COPY line of your Dockerfile as below:
COPY package*.json ./
as AWS sometimes prefers ./ over '.'

Related

.gitlab-ci.yaml throws "Cleaning up file based variables 00:01 ERROR: Job failed: exit code 1" at the end after successfully running the job

When I commit changes to GitLab for continuous integration I am facing this error, even though all my steps pass successfully. GitLab CI shows this:
Cleaning up file based variables 00:01 ERROR: Job failed: exit code 1
I am running one stage, "deploy", at the moment. Here is my script for deploy:
image: python:3.8
stages:
  - deploy
default:
  before_script:
    - wget https://golang.org/dl/go1.16.5.linux-amd64.tar.gz
    - rm -rf /usr/local/go && tar -C /usr/local -xzf go1.16.5.linux-amd64.tar.gz
    - export PATH=$PATH:/usr/local/go/bin
    - source ~/.bashrc
    - pip3 install awscli --upgrade
    - pip3 install aws-sam-cli --upgrade
deploy-development:
  only:
    - feature/backend/ci/cd
  stage: deploy
  script:
    - sam build -p
    - yes | sam deploy
This command probably creates an issue in the docker shell:
yes | sam deploy
Try this command:
sam deploy --no-confirm-changeset --no-fail-on-empty-changeset
From https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-deploy.html:
--confirm-changeset | --no-confirm-changeset Prompt to confirm whether the AWS SAM CLI deploys the computed changeset.
--fail-on-empty-changeset | --no-fail-on-empty-changeset Specify whether to return a non-zero exit code if there are no changes to be made to the stack. The default behavior is to return a non-zero exit code.
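Applied to the job from the question, the deploy script would then look roughly like this (a sketch; only the last command changes):
deploy-development:
  only:
    - feature/backend/ci/cd
  stage: deploy
  script:
    - sam build -p
    # non-interactive deploy; an empty changeset no longer fails the job
    - sam deploy --no-confirm-changeset --no-fail-on-empty-changeset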

error with docker build stage of CodeBuild build

I am getting the following error from the BUILD stage of my CodeBuild build process:
"Error while executing command: docker build -t ..." Reason: exit status 1
I have a CodeBuild service role set up with permissions for ECR, the aws ecr login stage succeeds, and my buildspec.yml is really simple, pretty much just the standard template. The runtime is the Amazon-managed Ubuntu image, standard.
Is there any reason why the Docker build could be failing, and is there anything anyone would suggest to troubleshoot?
Thank you
Full buildspec.yml file:
version: 0.2
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - $(aws ecr get-login --no-include-email --region eu-west-1)
  build:
    commands:
      - echo Building the Docker image...
      - docker build -t maxmind:latest .
      - docker tag maxmind:latest 381475286792.dkr.ecr.eu-west-1.amazonaws.com/maxmind:latest
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image...
      - docker push 381475286792.dkr.ecr.eu-west-1.amazonaws.com/maxmind:latest
Full error message (BUILD stage):
COMMAND_EXECUTION_ERROR: Error while executing command docker build -t maxmind:latest .. Reason: exit status 1
Full error message (POST_BUILD stage):
COMMAND_EXECUTION_ERROR: Error while executing command: docker push 381475286792.dkr.ecr.eu-west-1.amazonaws.com/maxmind:latest. Reason: exit status 1
Full error message (logstream):
[Container] 2020/05/20 09:28:54 Running command docker build -t maxmind:latest .
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
[Container] 2020/05/20 09:28:54 Command did not exit successfully docker build -t maxmind:latest . exit status 1
[Container] 2020/05/20 09:28:54 Phase complete: BUILD State: FAILED
Things I have tried
Attached AmazonEC2ContainerRegistryPowerUser policy to the codebuild-service-role created by my build process
Based on the comments, there were two issues. The first one was not enabling PrivilegedMode in the CodeBuild project. This mode is required when building a Docker image inside a Docker container.
The second issue was the missing permission iam:DeletePolicyVersion.
Enabling the mode and adding the missing permission solved the issue.
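If the CodeBuild project is defined in CloudFormation, the flag lives on the project's environment block; a minimal sketch (the resource name, image and compute type are placeholders, and the other required properties are omitted):
MyBuildProject:
  Type: AWS::CodeBuild::Project
  Properties:
    # ... Source, Artifacts and ServiceRole omitted ...
    Environment:
      Type: LINUX_CONTAINER
      ComputeType: BUILD_GENERAL1_SMALL
      Image: aws/codebuild/standard:4.0
      PrivilegedMode: true   # required to build Docker images inside the CodeBuild container
In the console, the same switch is the "Privileged" checkbox in the project's environment settings.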
Just want to share this in case anyone still has this issue.
This issue can be caused by 3 reasons:
Not having PrivilegedMode enabled in the CodeBuild project
Not having enough permissions for the IAM role
An error in your Dockerfile build
In my case it was the 3rd reason.
I activated S3 logs, which helped me see better error messages; it turned out I was missing a folder in my project that my Dockerfile tried to COPY.
But it can be any error, like running an npm command that doesn't exist.
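For reference, the S3 logs mentioned above can be enabled on the project itself; a CloudFormation sketch (the resource name and the bucket/prefix are placeholders):
MyDockerBuild:
  Type: AWS::CodeBuild::Project
  Properties:
    # ... other required properties omitted ...
    LogsConfig:
      S3Logs:
        Status: ENABLED
        Location: my-codebuild-logs-bucket/build-logs   # hypothetical bucket/prefix
The same option is available in the console under the project's logging settings.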

Integrating SonarQube within AWS CodePipeline: Connection Refused

tl;dr
CodePipeline crashes on the mvn sonar:sonar line of my buildspec.yml file with the following log (I formatted it a bit for better readability):
[ERROR] SonarQube server [http://localhost:9000] can not be reached
...
[ERROR] Failed to execute goal
org.sonarsource.scanner.maven:sonar-maven-plugin:3.7.0.1746:sonar
(default-cli) on project myproject:
Unable to execute SonarQube:
Fail to get bootstrap index from server:
Failed to connect to localhost/127.0.0.1:9000:
Connection refused (Connection refused) -> [Help 1]
Goal
This is my first project with AWS, so sorry if I'm providing irrelevant information.
I'm trying to deploy my backend API so that it's reachable by the public. Among other things, I want a CI/CD set up to automatically run tests and abort on failure or if a certain quality gate isn't passed. If everything went fine, then the new version should automatically be deployed online.
Current state
My pipeline automatically aborts when one of the tests fails, but that is about all I've gotten to work properly.
I've yet to figure out how to deploy (even manually) the API to be able to send requests to it. Maybe it's already done and I just don't know which URL to use, though.
Anyways, as it is, the CodePipeline crashes on the mvn sonar:sonar line of my buildspec.yml file.
The files
Here is my buildspec.yml:
version: 0.2
phases:
  install:
    runtime-versions:
      java: openjdk8
    commands:
      ##############################################################################################
      ##### "cd / && ls" returns: [bin, boot, codebuild, dev, etc, go, home, lib, lib32, lib64,
      ##### media, mnt, opt, proc, root, run, sbin, srv, sys, tmp, usr, var]
      ##### Initial directory where this starts is $CODEBUILD_SRC_DIR
      ##### That variable contains something like "/codebuild/output/src511423169/src"
      ##############################################################################################
      # Upgrade AWS CLI to the latest version
      - pip install --upgrade awscli
      # Folder organization
      - cd /root
      - codeAnalysisFolder="Sonar" # todo: refactor to include "/root"
      - mkdir $codeAnalysisFolder && cd $codeAnalysisFolder
      # Get SonarQube
      - wget https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-8.1.0.31237.zip
      - unzip ./sonarqube-8.1.0.31237.zip
      # Launch SonarQube server locally
      - cd ./sonarqube-8.1.0.31237/bin/linux-x86-64
      - sh ./sonar.sh start
      # Get SonarScanner
      - cd /root/$codeAnalysisFolder
      - wget https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-4.2.0.1873-linux.zip
      - unzip ./sonar-scanner-cli-4.2.0.1873-linux.zip
      - export PATH=$PATH:/root/$codeAnalysisFolder/sonar-scanner-cli-4.2.0.1873-linux.zip/bin/ # todo: .zip ?!
  pre_build:
    commands:
      - cd $CODEBUILD_SRC_DIR
      - mvn clean compile test
      - mvn sonar:sonar
  build:
    commands:
      - mvn war:exploded
  post_build:
    commands:
      - cp -r .ebextensions/ target/ROOT/
      - aws cloudformation package --template template.yml --s3-bucket $S3_BUCKET --output-template-file template-export.yml
      # Do not remove this statement. This command is required for AWS CodeStar projects.
      # Update the AWS Partition, AWS Region, account ID and project ID in the project ARN on template-configuration.json file so AWS CloudFormation can tag project resources.
      - sed -i.bak 's/\$PARTITION\$/'${PARTITION}'/g;s/\$AWS_REGION\$/'${AWS_REGION}'/g;s/\$ACCOUNT_ID\$/'${ACCOUNT_ID}'/g;s/\$PROJECT_ID\$/'${PROJECT_ID}'/g' template-configuration.json
artifacts:
  type: zip
  files:
    - 'template-export.yml'
    - 'template-configuration.json'
Here are the last few lines of the log of the failed build:
[INFO] User cache: /root/.sonar/cache
[ERROR] SonarQube server [http://localhost:9000] can not be reached
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 6.071 s
[INFO] Finished at: 2019-12-18T21:27:23Z
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.sonarsource.scanner.maven:sonar-maven-plugin:3.7.0.1746:sonar (default-cli) on project myproject: Unable to execute SonarQube: Fail to get bootstrap index from server: Failed to connect to localhost/127.0.0.1:9000: Connection refused (Connection refused) -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[Container] 2019/12/18 21:27:23 Command did not exit successfully mvn sonar:sonar exit status 1
[Container] 2019/12/18 21:27:23 Phase complete: PRE_BUILD State: FAILED
[Container] 2019/12/18 21:27:23 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: mvn sonar:sonar. Reason: exit status 1
And, since you might also be interested, here is the part of the build log related to the sh ./sonar.sh start command:
[Container] 2019/12/18 21:25:49 Running command sh ./sonar.sh start
Starting SonarQube...
Started SonarQube.
Additionally, here is my sonar-project.properties file:
# SONAR SCANNER CONFIGS
sonar.projectKey=bullhubs
# SOURCES
sonar.java.source=8
sonar.sources=src/main/java
sonar.java.binaries=target/classes
sonar.sourceEncoding=UTF-8
# EXCLUSIONS
# (exclusion of Lombok-generated stuff comes from the `lombok.config` file)
sonar.coverage.exclusions=**/*Exception.java
# TESTS
sonar.coverage.jacoco.xmlReportPaths=target/site/jacoco/jacoco.xml
sonar.junit.reportsPath=target/surefire-reports/TEST-*.xml
sonar.tests=src/test/java
The environment
(Sorry for the hidden info: not being sure what should remain private, I erred on the safe side. If you need any specific information, please let me know!)
I have an Elastic Beanstalk set up with the following properties:
I also have an EC2 instance up and running:
I also use a VPC.
What I've tried
I tried adding a bunch of entries into the inbound rules of my EC2's Security Group:
I started with 0.0.0.0/0 : 9000, then tried 127.0.0.1/32 : 9000, and finally All traffic. None of it worked, so the problem seems to be somewhere else.
I also tried changing some properties of the sonar-project.properties file, namely sonar.web.host and sonar.host.url, to try to redirect where the SonarQube server is hosted (I thought maybe I was supposed to point it to the EC2's IPv4 Public IP address or its attached Public DNS (IPv4)), but somehow the failing build log keeps displaying the failure to connect on localhost:9000 when trying to contact the SonarQube server.
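For illustration (this snippet is not from the original post), that redirection attempt would look something like the following, with the address being a placeholder:
# sonar-project.properties (attempted override; <EC2-public-DNS> is a placeholder)
sonar.host.url=http://<EC2-public-DNS>:9000
or, equivalently, passed straight to the Maven scanner:
mvn sonar:sonar -Dsonar.host.url=http://<EC2-public-DNS>:9000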
I've figured it out.
Somehow, SonarQube reports having started properly despite that not being true. Thus, when you see this log after having run your sh ./sonar.sh start command:
[Container] 2019/12/18 21:25:49 Running command sh ./sonar.sh start
Starting SonarQube...
Started SonarQube.
It isn't necessarily true that SonarQube's local server has successfully started. One would have to go into the logs folder of the SonarQube installation and read the sonar.log file to figure out that something was actually wrong and that the server was stopped...
In my case, it reported an error that JDK11 was required to run the server. To solve that, I changed the java: openjdk8 line of my buildspec.yml to java: openjdk11.
Then, I had to figure out that a new log file was now available to be read: es.log. Printing that file in the console revealed that the latest Elasticsearch version (which is used by the latest SonarQube server version) does not allow itself to be run by a root user. Thus, I had to create a new user and group and edit a configuration file to run the server with that user:
# Set up non-root user to run SonarQube
- groupadd sonar
- useradd -c "Sonar System User" -d $sonarPath/$sonarQube -g sonar -s /bin/bash sonar
- chown -R sonar:sonar $sonarPath/$sonarQube # recursively changing the folder's ownership
# Launch SonarQube server locally
- cd ./$sonarQube/bin/linux-x86-64
- sed -i 's/#RUN_AS_USER=/RUN_AS_USER=sonar/g' sonar.sh # enabling user execution of server
- sh ./sonar.sh start
Complete solution
This gives us the following working version of buildspec.yml:
version: 0.2
phases:
  install:
    runtime-versions:
      java: openjdk11
    commands:
      ##############################################################################################
      ##### "cd / && ls" returns: [bin, boot, codebuild, dev, etc, go, home, lib, lib32, lib64,
      ##### media, mnt, opt, proc, root, run, sbin, srv, sys, tmp, usr, var]
      ##### Initial directory where this starts is $CODEBUILD_SRC_DIR
      ##### That variable contains something like "/codebuild/output/src511423169/src"
      ##### This folder contains the whole structure of the CodeCommit repository. This means that
      ##### the actual Java classes are accessed through "cd src" from there, for example.
      ##############################################################################################
      # Upgrade AWS CLI to the latest version
      - pip install --upgrade awscli
      # Folder organization
      - preSonarPath="/opt/"
      - codeAnalysisFolder="Sonar"
      - sonarPath="$preSonarPath$codeAnalysisFolder"
      - cd $preSonarPath && mkdir $codeAnalysisFolder
      # Get SonarQube
      - cd $sonarPath
      - sonarQube="sonarqube-8.1.0.31237"
      - wget https://binaries.sonarsource.com/Distribution/sonarqube/$sonarQube.zip
      - unzip ./$sonarQube.zip
      # Set up non-root user to run SonarQube
      - groupadd sonar
      - useradd -c "Sonar System User" -d $sonarPath/$sonarQube -g sonar -s /bin/bash sonar
      - chown -R sonar:sonar $sonarPath/$sonarQube # recursively changing the folder's ownership
      # Launch SonarQube server locally
      - cd ./$sonarQube/bin/linux-x86-64
      - sed -i 's/#RUN_AS_USER=/RUN_AS_USER=sonar/g' sonar.sh # enabling user execution of server
      - sh ./sonar.sh start
      # Get SonarScanner and add to PATH
      - sonarScanner="sonar-scanner-cli-4.2.0.1873-linux"
      - cd $sonarPath
      - wget https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/$sonarScanner.zip
      - unzip ./$sonarScanner.zip
      - export PATH=$PATH:$sonarPath/$sonarScanner.zip/bin/ # todo: .zip ?!
  pre_build:
    commands:
      - cd $CODEBUILD_SRC_DIR
      - mvn clean compile test
      # - cd $sonarPath/$sonarQube/logs
      # - cat access.log
      # - cat es.log
      # - cat sonar.log
      # - cat web.log
      # - cd $CODEBUILD_SRC_DIR
      - mvn sonar:sonar
  build:
    commands:
      - mvn war:exploded
  post_build:
    commands:
      - cp -r .ebextensions/ target/ROOT/
      - aws cloudformation package --template template.yml --s3-bucket $S3_BUCKET --output-template-file template-export.yml
      # Do not remove this statement. This command is required for AWS CodeStar projects.
      # Update the AWS Partition, AWS Region, account ID and project ID in the project ARN on template-configuration.json file so AWS CloudFormation can tag project resources.
      - sed -i.bak 's/\$PARTITION\$/'${PARTITION}'/g;s/\$AWS_REGION\$/'${AWS_REGION}'/g;s/\$ACCOUNT_ID\$/'${ACCOUNT_ID}'/g;s/\$PROJECT_ID\$/'${PROJECT_ID}'/g' template-configuration.json
artifacts:
  type: zip
  files:
    - 'template-export.yml'
    - 'template-configuration.json'
Cheers!

Why isn't Kaniko able to push multi-stage Docker Image?

Building the following Dockerfile on GitLab CI using Kaniko results in the error error pushing image: failed to push to destination eu.gcr.io/stritzke-enterprises/eliah-speech-server:latest: Get https://eu.gcr.io/...: exit status 1
If I remove the first FROM, RUN and COPY --from statements from the Dockerfile, the Docker image is built and pushed as expected. If I execute the Kaniko build using Docker on my local machine, everything works as expected. I execute other Kaniko builds and pushes on the same GitLab CI runner with the same GCE service account credentials.
What is going wrong with the GitLab CI based Kaniko build?
Dockerfile
FROM alpine:latest as alpine
RUN apk add -U --no-cache ca-certificates
FROM scratch
COPY --from=alpine /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY binaries/speech-server /speech-server
EXPOSE 8080
ENTRYPOINT ["/speech-server"]
CMD ["serve", "-t", "$GOOGLE_ACCESS_TOKEN"]
GitLab CI build stage
buildDockerImage:
  stage: buildImage
  dependencies:
    - build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  variables:
    GOOGLE_APPLICATION_CREDENTIALS: /secret.json
  script:
    - echo "$GCR_SERVICE_ACCOUNT_KEY" > /secret.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $DOCKER_IMAGE:latest -v debug
  only:
    - branches
  except:
    - master
As tdensmore pointed out, this was most likely an authentication issue.
So for everyone who has come here, the following Dockerfile and Kaniko call work just fine.
FROM ubuntu:latest as ubuntu
RUN echo "Foo" > /foo.txt
FROM ubuntu:latest
COPY --from=ubuntu /foo.txt /
CMD ["/bin/cat", "/foo.txt"]
The Dockerfile can be built by running
docker run -v $(pwd):/workspace gcr.io/kaniko-project/executor:latest --context /workspace --no-push
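To rule out authentication when testing locally, the same service-account key used in CI can be handed to Kaniko by mounting it and pointing GOOGLE_APPLICATION_CREDENTIALS at it. A sketch, with the key path, project ID and image name as placeholders:
docker run \
  -v $(pwd):/workspace \
  -v /path/to/gcr-service-account.json:/secret.json \
  -e GOOGLE_APPLICATION_CREDENTIALS=/secret.json \
  gcr.io/kaniko-project/executor:latest \
  --context /workspace \
  --destination eu.gcr.io/<project-id>/<image>:latest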

AWS CodeBuild - Unable to find DockerFile during build

Started playing with AWS CodeBuild.
The goal is to have a Docker image as the final result, with Node.js, hapi and a sample app running inside.
Currently I have an issue with:
"unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /tmp/src049302811/src/Dockerfile: no such file or directory"
It appears in the BUILD stage.
Project details:
S3 bucket used as a source
The ZIP file stored in the respective S3 bucket contains buildspec.yml, package.json, a sample *.js file and DockerFile.
aws/codebuild/docker:1.12.1 is used as a build environment.
When I build the image using Docker installed on my laptop there are no issues, so I can't understand which directory I need to specify to get rid of this error message.
Buildspec and DockerFile attached below.
Thanks for any comments.
buildspec.yml
version: 0.1
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - $(aws ecr get-login --region eu-west-1)
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t <CONTAINER_NAME> .
      - docker tag <CONTAINER_NAME>:latest <ID>.dkr.ecr.eu-west-1.amazonaws.com/<CONTAINER_NAME>:latest
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image...
      - docker push <id>.eu-west-1.amazonaws.com/<image>:latest
DockerFile
FROM alpine:latest
RUN apk update && apk upgrade
RUN apk add nodejs
RUN rm -rf /var/cache/apk/*
COPY . /src
RUN cd /src; npm install hapi
EXPOSE 80
CMD ["node", "/src/server.js"]
OK, so the solution was simple.
The issue was related to the Dockerfile name.
It was not accepting DockerFile (with a capital F; strangely, it was working locally, likely because the local filesystem is case-insensitive), but Dockerfile (with a lower-case f) worked perfectly.
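Alternatively (not part of the original answer), docker build can be told the exact file name with the -f flag instead of relying on the default Dockerfile name, e.g. in the buildspec:
- docker build -f DockerFile -t <CONTAINER_NAME> .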
Can you validate that Dockerfile exists in the root of the directory? One way of doing this would be to run ls -altr as part of the pre-build phase in your buildspec (even before ecr login).
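For instance, with the version 0.1 buildspec from the question, that check could look like this (only the pre_build phase is shown; the rest stays as above):
phases:
  pre_build:
    commands:
      - ls -altr                            # confirm Dockerfile is present in the source root
      - echo Logging in to Amazon ECR...
      - $(aws ecr get-login --region eu-west-1)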