AWS CodeBuild /codebuild/output/tmp/script.sh: docker: not found

I am using AWS CodeBuild to build my application, with the example build spec file given here: https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec-ref-example
I have already uploaded to AWS ECR a custom Docker image that has the prerequisites to build my (Java/Scala-based) application.
I get the following error:
Reading package lists...
[Container] 2018/10/26 10:40:07 Running command echo Entered the install phase...
Entered the install phase...
[Container] 2018/10/26 10:40:07 Running command docker login -u AWS -p
.....
/codebuild/output/tmp/script.sh: 4: /codebuild/output/tmp/script.sh: docker: not found
Why do I get this error? AWS CodeBuild is supposed to download this Docker image from ECR and then follow the instructions in my build spec file to build my application.

The example buildspec file assumes that your build image already has Docker installed. I wrongly assumed that CodeBuild would install and configure the Docker tools inside the image automatically.

This issue looks similar to AWS CodeBuild - docker: not found. I can't paste the same response to this question, so please check my answer there on how to enable Docker inside your build container and see if that solves your issue.
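For reference, here is a minimal sketch of what that answer describes: starting the Docker daemon from the install phase of buildspec.yml when using a custom image. The dockerd path and the timeout below are assumptions that may need adjusting for your image, and the project must have privileged mode enabled:

phases:
  install:
    commands:
      # assumption: dockerd is at /usr/local/bin/dockerd in your custom image
      - nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://127.0.0.1:2375 --storage-driver=overlay2 &
      # poll docker info until the daemon answers before the build proceeds
      - timeout 15 sh -c "until docker info; do echo .; sleep 1; done"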

Related

error with docker build stage of CodeBuild build

I am getting the following error from the BUILD stage of my CodeBuild build process:
"Error while executing command: docker build -t ..." Reason: exit status 1
I have a CodeBuild service role set up with permissions for ECR, the aws ecr login stage has succeeded, and my buildspec.yml is really simple - pretty much just the standard template. The runtime is the Amazon-managed Ubuntu image, standard.
Is there any reason why the Docker build could be failing and anything anyone would suggest to troubleshoot?
Thank you
Full buildspec.yml file:
version: 0.2
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - $(aws ecr get-login --no-include-email --region eu-west-1)
  build:
    commands:
      - echo Building the Docker image...
      - docker build -t maxmind:latest .
      - docker tag maxmind:latest 381475286792.dkr.ecr.eu-west-1.amazonaws.com/maxmind:latest
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image...
      - docker push 381475286792.dkr.ecr.eu-west-1.amazonaws.com/maxmind:latest
Full error message (BUILD stage):
COMMAND_EXECUTION_ERROR: Error while executing command docker build -t maxmind:latest .. Reason: exit status 1
Full error message (POST_BUILD stage):
COMMAND_EXECUTION_ERROR: Error while executing command: docker push 381475286792.dkr.ecr.eu-west-1.amazonaws.com/maxmind:latest. Reason: exit status 1
Full error message (logstream):
[Container] 2020/05/20 09:28:54 Running command docker build -t maxmind:latest .
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
[Container] 2020/05/20 09:28:54 Command did not exit successfully docker build -t maxmind:latest . exit status 1
[Container] 2020/05/20 09:28:54 Phase complete: BUILD State: FAILED
Things I have tried
Attached AmazonEC2ContainerRegistryPowerUser policy to the codebuild-service-role created by my build process
Based on the comments.
There were two issues. The first was not enabling PrivilegedMode in the CodeBuild project; this mode is required when building a Docker image inside a Docker container.
The second issue was the missing permission iam:DeletePolicyVersion.
Enabling the mode and adding the missing permission solved the issue.
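As a hedged sketch of those two fixes (the project name and image below are placeholders, not values from this build), privileged mode can be enabled from the CLI:

aws codebuild update-project \
  --name my-docker-build \
  --environment "type=LINUX_CONTAINER,computeType=BUILD_GENERAL1_SMALL,image=aws/codebuild/standard:4.0,privilegedMode=true"

and the missing permission can be added to the service role's policy with a statement like the following (scope Resource more tightly in practice):

{
  "Effect": "Allow",
  "Action": ["iam:DeletePolicyVersion"],
  "Resource": "*"
}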
Just want to share this in case anyone still has this issue.
This issue can be caused by one of three reasons:
Not having PrivilegedMode enabled in the CodeBuild project
Not having enough permissions on the IAM role
An error in your Dockerfile build
In my case it was the third reason.
I activated S3 logs, which gave me better error messages; it turned out that I was missing a folder in my project which my Dockerfile tried to COPY.
But it can be any error, like running an npm command that doesn't exist.
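If you want to try the S3-logs route yourself, here is a sketch of enabling it from the CLI (the project and bucket names are placeholders):

aws codebuild update-project \
  --name my-docker-build \
  --logs-config '{"s3Logs":{"status":"ENABLED","location":"my-log-bucket/codebuild-logs"}}'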

AWS Elastic Beanstalk error: Failed to deploy application

I spent many hours trying to solve my problem. I use CodePipeline: CodeSource and CodeBuild, which produces a Docker container (code from Bitbucket) and stores the image in ECR.
In CodeDeploy I want to deploy that image from ECR to Elastic Beanstalk:
Errors in Elastic Beanstalk:
Environment health has transitioned from Info to Degraded. Command failed on all instances. Incorrect application version found on all instances. Expected version "Sample Application" (deployment 6). Application update failed 15 seconds ago and took 59 seconds.
During an aborted deployment, some instances may have deployed the new application version. To ensure all instances are running the same version, re-deploy the appropriate application version.
Failed to deploy application.
Unsuccessful command execution on instance id(s) 'i-04df549361597208a'. Aborting the operation.
Another error from EB:
Incorrect application version "code-pipeline-1586854202535-MyflashcardsBuildOutput-ce0d6cd7-8290-40ad-a95e-9c57162b9ff1"
(deployment 9). Expected version "Sample Application" (deployment 8).
Error in CodeDeploy:
Action execution failed
Deployment completed, but with errors: During an aborted deployment, some instances may have deployed the new application version. To ensure all instances are running the same version, re-deploy the appropriate application version. Failed to deploy application. Unsuccessful command execution on instance id(s) 'i-04df539061522208a'. Aborting the operation. [Instance: i-04df549333582208a] Command failed on instance. An unexpected error has occurred [ErrorCode: 0000000001].
Does anyone know what happens here?
I use this Dockerfile:
### STAGE 1: Build ###
FROM node:12.7-alpine AS build
WORKDIR /usr/src/app
COPY package.json ./
RUN npm install
COPY . .
RUN npm run build
### STAGE 2: Run ###
FROM nginx:1.17.1-alpine
EXPOSE 80
COPY --from=build /usr/src/app/dist /usr/share/nginx/html
and buildspec.yml:
version: 0.2
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws --version
      - $(aws ecr get-login --region eu-west-1 --no-include-email)
      - REPOSITORY_URI=176901363719.dkr.ecr.eu-west-1.amazonaws.com/myflashcards
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=myflashcards
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image
      - docker build --tag $REPOSITORY_URI:latest .
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push $REPOSITORY_URI:latest
      - echo Writing image definitions file...
      - printf '[{"name":"eagle","imageUri":"%s"}]' $REPOSITORY_URI:$IMAGE_TAG > imagedefinitions.json
      # - echo Deleting old artifacts
      # - aws s3 sync dist/ s3://$BUCKET_NAME --delete
artifacts:
  files: imagedefinitions.json
The third step (CodeDeploy) fails:(
Ran into the same issue. The first fix worked for me. Here are all the possible fixes that can resolve this issue:
Reason: a bug with Elastic Beanstalk that makes the multi-stage builder step fail. The AWS logs will show a message like docker pull requires exactly one argument.
Solution: Use an unnamed builder stage. By default the stages are not named, and you refer to them by their integer number, starting with 0 for the first FROM instruction. Change your Dockerfile as below:
### STAGE 1: Build ###
FROM node:12.7-alpine
WORKDIR /usr/src/app
COPY package.json ./
RUN npm install
COPY . .
RUN npm run build
### STAGE 2: Run ###
FROM nginx:1.17.1-alpine
EXPOSE 80
COPY --from=0 /usr/src/app/dist /usr/share/nginx/html
Reason: Using t2.micro as the instance type. The npm install command sometimes times out on a t2.micro instance.
Solution: Change the instance type that Elastic Beanstalk uses to something other than t2.micro (say, t2.small).
If neither of the above fixes works, try changing the COPY line of your Dockerfile as below:
COPY package*.json ./
as AWS sometimes seems to prefer ./ over '.'

Installing Docker during AWS CodeBuild

When running a bash script during CodeBuild, I get this error:
./scripts/test.sh: line 95: docker: command not found
However, I've made sure to install docker at the start of the script using:
curl -sSL https://get.docker.com/ | sh
apt-get install -y docker-ce docker-compose
But this results in the following error:
Package docker-ce is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
E: Package 'docker-ce' has no installation candidate
Any ideas on how to get docker working during CodeBuild?
There are a few different options for this in CodeBuild:
You can use CodeBuild-provided images, which already have Docker installed. To use one of these images, select privileged mode when creating the CodeBuild project (see the sketch after this list).
You can enable Docker in a custom image (an image not managed by CodeBuild, e.g. hosted in your ECR repo or on public Docker Hub) when configuring the CodeBuild project. Select privileged mode in your project settings. Instructions here: https://docs.aws.amazon.com/codebuild/latest/userguide/sample-docker-custom-image.html
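As a minimal sketch of the first option (the image tag and build command are illustrative), with a managed standard image and privileged mode enabled, requesting the Docker runtime in the install phase is enough:

version: 0.2
phases:
  install:
    runtime-versions:
      # the docker runtime is available on aws/codebuild/standard:2.0 and later managed images
      docker: 18
  build:
    commands:
      - docker build -t myapp:latest .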

Is it possible/recommended to use `sam build` in AWS CodeBuild?

This question spun out of this one. Now that I have a better understanding of what was going wrong there, and a workable, if imperfect, solution, I'm submitting a more focused follow-up (I'm still something of a novice at Stack Overflow - please let me know if this contravenes etiquette and whether I should follow up on the original).
This page suggests that "You use AWS CodeBuild to build, locally test, and package your serverless application". However, when I include a sam build command in my buildspec.yml, I get the following log output, suggesting that sam is not installed on CodeBuild images:
[Container] 2018/12/31 11:41:49 Running command sam build --use-container
sh: 1: sam: not found
[Container] 2018/12/31 11:41:49 Command did not exit successfully sam build --use-container exit status 127
[Container] 2018/12/31 11:41:49 Phase complete: BUILD Success: false
[Container] 2018/12/31 11:41:49 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: sam build --use-container. Reason: exit status 127
Furthermore, if I install SAM with pip install aws-sam-cli, running sam build --use-container in CodeBuild gives an error. sam build itself succeeds, but since it doesn't install test dependencies, I'd still need to use pip install -r tests/requirements-test.txt -t . to be able to run tests in CodeBuild. Moreover, this suggests that --use-container is required for "packages that have natively compiled programs".
This makes me wonder whether I'm trying to do something wrong. What's the recommended way of building SAM services in a CI/CD workflow on AWS?
2019-10-18 - Update (confirming @Spiff's answer above):
Apparently CodeBuild now works seamlessly with SAM; this is all I needed in buildspec.yml for a Lambda using pandas and psycopg2-binary:
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.7
  pre_build:
    commands:
      - python -m unittest discover tests
  build:
    commands:
      - sam build
  post_build:
    commands:
      - sam package --output-template-file packaged.yaml --s3-bucket my-code-pipeline-bucketz
artifacts:
  type: zip
  files:
    - packaged.yaml
Cheers
Please see below for a buildspec.yaml that works for me when using AWS SAM with AWS CodeBuild, together with a cloudformation.yml template:
phases:
  build:
    commands:
      - pip install --user aws-sam-cli
      - USER_BASE_PATH=$(python -m site --user-base)
      - export PATH=$PATH:$USER_BASE_PATH/bin
      - sam build -t cloudformation.yml
      - aws cloudformation package --template-file .aws-sam/build/template.yaml --s3-bucket <TARGET_S3_BUCKET> --output-template-file cloudformation-packaged.yaml
      - aws s3 cp ./cloudformation-packaged.yaml <TARGET_S3_BUCKET>/cloudformation-packaged.yaml
As a result, I get a deployment package and a packaged CloudFormation template in the TARGET_S3_BUCKET.
For each function in the ./src folder, I have a requirements.txt file that includes all the dependencies, but I don't run pip install -r requirements.txt manually.
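For context, the project layout this buildspec assumes looks roughly like the following (the names are illustrative; sam build picks up each function's requirements.txt on its own):

.
├── cloudformation.yml
└── src/
    └── my_function/
        ├── app.py
        └── requirements.txt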
If you want to run the sam build command in CodeBuild, you must install aws-sam-cli first (probably in the install phase of the buildspec.yml file), e.g. by running pip install aws-sam-cli or similar.
The --use-container option of sam build pulls a Docker image resembling the AWS Lambda execution environment, then runs a container from that image to pip install (if your Lambda is written in Python) your function dependencies when creating your Lambda deployment package. This ensures that the Lambda function uses natively compiled libraries that are compatible with the actual AWS Lambda runtime environment.
Therefore, if you specify the --use-container option for sam build in CodeBuild, you also need to make sure that the Docker image used by your CodeBuild project supports the Docker runtime.
The easiest way is to use the CodeBuild build environment image aws/codebuild/standard:2.0 and enable the Docker runtime in the runtime-versions property of the install phase of your buildspec.yml. You might also need to enable PrivilegedMode on your CodeBuild project in order to connect to the Docker daemon from your build environment.
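Putting those pieces together, here is a hedged sketch of a buildspec for sam build --use-container on aws/codebuild/standard:2.0 (the bucket name is a placeholder), assuming privileged mode is enabled on the project:

version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.7
      # --use-container talks to the Docker daemon, hence the docker runtime
      docker: 18
    commands:
      - pip install aws-sam-cli
  build:
    commands:
      - sam build --use-container
      - sam package --output-template-file packaged.yaml --s3-bucket my-artifact-bucket
artifacts:
  files:
    - packaged.yaml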
As of October 2019, I had no issues whatsoever deploying a serverless application with CodeBuild using sam build.
First of all, --user is not needed for pip install aws-sam-cli. In fact, including --user appears to be the only reason that sam is not on the PATH.
In addition, --use-container is not needed either, as long as no natively compiled libraries (like psycopg) are being built.

AWS CLI tools on Circle CI: configure: unknown command

I'm trying to deploy a docker application onto Elastic Beanstalk from Circle CI.
The deployment section of my circle.yml is
deployment:
  hub:
    branch: [internal, production]
    commands:
      - pip install awscli
      - docker push company/web:$CIRCLE_SHA1
      - sudo bash deploy.sh $CIRCLE_SHA1 $CIRCLE_BRANCH $CIRCLE_BUILD_NUM
and my deploy.sh calls the AWS CLI as follows:
aws --version
aws configure set aws_access_key_id $AWSKEY
aws configure set aws_secret_access_key $AWSSECRETKEY
aws configure set default.region us-west-2
aws configure set default.output json
echo "SAVING NEW DOCKERRUNFILE: $DOCKERRUN_FILE"
aws s3 cp $DOCKERRUN_FILE s3://$EB_BUCKET/$DOCKERRUN_FILE
But I get the error
--version: mispelled meta parameter?
sanity-check: "/root/.awssecret": file is missing. (Format: AccessKeyID\nSecretAccessKey\n)
configure: unknown command Usage: aws ACTION [--help]
The script works completely fine locally on macOS using the exact same key and secret.
Both versions of awscli (on Circle and on my Mac) are 1.7.14.
I'm Kevin from CircleCI. It looks like the issue here is related to the fact that when you install Python dependencies, CircleCI installs them into a virtualenv. This is usually a great thing, as it isolates your Python environment from the default system Python and supports our dependency caching. The problem here is that you're running your deploy.sh script with sudo, which clobbers the virtualenv environment and runs the default system version (which in this case is actually an older, alternative AWS CLI). Dropping the sudo should fix things for you. (You would also be better off running pip install awscli==x.x.x in the "dependencies" phase, as it would be cached then.)
PS: Please contact sayhi@circleci.com for a timely response to questions in general.
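Applying that advice to the circle.yml above, here is a sketch of the corrected deployment section (the pinned version is an example; the key changes are dropping sudo and moving the awscli install into the dependencies phase so it is cached):

dependencies:
  pre:
    # installed into CircleCI's virtualenv and cached between builds
    - pip install awscli==1.7.14
deployment:
  hub:
    branch: [internal, production]
    commands:
      - docker push company/web:$CIRCLE_SHA1
      # no sudo: keeps the virtualenv's aws on PATH inside deploy.sh
      - bash deploy.sh $CIRCLE_SHA1 $CIRCLE_BRANCH $CIRCLE_BUILD_NUM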