I've been trying to figure this out for a couple of days, and the issue is compounded by the fact that I'm not getting a useful error message.
I'm using the following buildspec.yml file in CodeBuild to build Docker containers and then push them to AWS ECR.
version: 0.2
env:
  parameter-store:
    AWS_DEFAULT_REGION: "/docker_test/region"
    IMAGE_REPO_NAME: "/docker_test/repo_name"
    IMAGE_TAG: "/docker_test/img_tag"
    AWS_ACCOUNT_ID: "account_id"
phases:
  install:
    runtime-versions:
      docker: 18
  pre_build:
    commands:
      - echo Logging in to Amazon ECR and DockerHub...
      - docker login -u AWS -p $(aws ecr get-login-password --region $AWS_DEFAULT_REGION) $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker-compose -f docker-compose.yml -f docker-compose.prod.yml build
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image...
      - docker-compose -f docker-compose.yml -f docker-compose.prod.yml push
artifacts:
  files:
    - 'Dockerrun.aws.json'
I've tried Docker 19, slightly different versions of the docker login line, and made sure my roles were set. I get "Login Succeeded", so I assume the login line is good.
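For reference, the --password-stdin form of that login line (which also avoids the warning shown in the log below) would be something like:

- aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com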
[Container] 2021/12/22 16:19:20 Running command docker login -u AWS -p $(aws ecr get-login-password --region $AWS_DEFAULT_REGION) $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
[Container] 2021/12/22 16:19:26 Phase complete: PRE_BUILD State: SUCCEEDED
[Container] 2021/12/22 16:19:26 Phase context status code: Message:
[Container] 2021/12/22 16:19:26 Entering phase BUILD
However, the post_build phase fails with the following:
Successfully built d6878cbb68ba
Successfully tagged ***.dkr.ecr.***.amazonaws.com/***:latest
[Container] 2021/12/22 16:21:58 Phase complete: BUILD State: SUCCEEDED
[Container] 2021/12/22 16:21:58 Phase context status code: Message:
[Container] 2021/12/22 16:21:58 Entering phase POST_BUILD
[Container] 2021/12/22 16:21:58 Running command echo Build completed on `date`
Build completed on Wed Dec 22 16:21:58 UTC 2021
[Container] 2021/12/22 16:21:58 Running command echo Pushing the Docker image...
Pushing the Docker image...
[Container] 2021/12/22 16:21:58 Running command docker-compose -f docker-compose.yml -f docker-compose.prod.yml push
Pushing myapp (***.dkr.ecr.***.amazonaws.com/***:latest)...
The push refers to repository [***.dkr.ecr.***.amazonaws.com/***]
EOF
[Container] 2021/12/22 16:22:49 Command did not exit successfully docker-compose -f docker-compose.yml -f docker-compose.prod.yml push exit status 1
[Container] 2021/12/22 16:22:49 Phase complete: POST_BUILD State: FAILED
[Container] 2021/12/22 16:22:49 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: docker-compose -f docker-compose.yml -f docker-compose.prod.yml push. Reason: exit status 1
[Container] 2021/12/22 16:22:49 Phase complete: UPLOAD_ARTIFACTS State: SUCCEEDED
[Container] 2021/12/22 16:22:49 Phase context status code: Message:
I'm just not sure how to get more information on this error - that would be ideal.
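One thing that might surface more detail than the bare EOF (assuming docker-compose is swallowing the underlying error) is running the push with docker-compose's verbose flag, or pushing a single service image directly with docker push, e.g.:

- docker-compose --verbose -f docker-compose.yml -f docker-compose.prod.yml push
- docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_TAG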
EDIT:
I'm adding the docker-compose.prod.yml file for additional context:
version: "3.2"
services:
myapp:
image: ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/${IMAGE_TAG}
command: bash -c "
python manage.py migrate
&& gunicorn --bind :8000 --workers 3 --threads 2 --timeout 240 project.wsgi:application"
restart: always
ports:
- "80:80"
celery_worker:
image: ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/${IMAGE_TAG}
command: celery -A project worker --loglevel=${CELERY_LOG_LEVEL:-WARNING}
restart: always
OK, so I figured it out. Your question about making sure the repo exists pointed me in the right direction, @mreferre. I was confused about the use of IMAGE_TAG and IMAGE_REPO_NAME in the code samples I referenced when building this. They were essentially supposed to be the same thing, so the push was failing because I was trying to push to an ECR repo named "proj-name", which didn't exist. I just needed to change it to "repo-name", so the image in docker-compose.prod.yml becomes:
image: ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/${IMAGE_REPO_NAME}
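A quick way to confirm the target repository actually exists before pushing (which was my actual problem) is a describe call along these lines, using the same repo name and region:

aws ecr describe-repositories --repository-names repo-name --region $AWS_DEFAULT_REGION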
Related
I'm trying to run a build after creating a stack in AWS CloudFormation, but unfortunately the build failed with an error message:
Phase context status code: COMMAND_EXECUTION_ERROR Message: Error
while executing command: python -m pip install --upgrade --force pip.
Reason: exit status 1
Here is the log for the build and why it failed:
[Container] 2021/10/15 13:01:39 Waiting for agent ping
[Container] 2021/10/15 13:01:40 Waiting for DOWNLOAD_SOURCE
[Container] 2021/10/15 13:01:41 Phase is DOWNLOAD_SOURCE
[Container] 2021/10/15 13:01:41 CODEBUILD_SRC_DIR=/codebuild/output/src061758247/src
[Container] 2021/10/15 13:01:41 YAML location is /codebuild/output/src061758247/src/buildspec.yml
[Container] 2021/10/15 13:01:41 Processing environment variables
[Container] 2021/10/15 13:01:41 Decrypting parameter store environment variables
[Container] 2021/10/15 13:01:41 [WARN] Skipping install of runtimes. Runtime version selection is not supported by this build image.
[Container] 2021/10/15 13:01:43 Moving to directory /codebuild/output/src061758247/src
[Container] 2021/10/15 13:01:43 Registering with agent
[Container] 2021/10/15 13:01:43 Phases found in YAML: 4
[Container] 2021/10/15 13:01:43 POST_BUILD: 10 commands
[Container] 2021/10/15 13:01:43 INSTALL: 10 commands
[Container] 2021/10/15 13:01:43 PRE_BUILD: 6 commands
[Container] 2021/10/15 13:01:43 BUILD: 1 commands
[Container] 2021/10/15 13:01:43 Phase complete: DOWNLOAD_SOURCE State: SUCCEEDED
[Container] 2021/10/15 13:01:43 Phase context status code: Message:
[Container] 2021/10/15 13:01:43 Entering phase INSTALL
[Container] 2021/10/15 13:01:43 Running command echo 'about to call dockerd'
about to call dockerd
[Container] 2021/10/15 13:01:43 Running command nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://127.0.0.1:2375 --storage-driver=overlay2&
[Container] 2021/10/15 13:01:43 Running command timeout 15 sh -c "until docker info; do echo .; sleep 1; done"
Error starting daemon: pid file found, ensure docker is not running or delete /var/run/docker.pid
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 17.09.0-ce
Storage Driver: overlay
Backing Filesystem: extfs
Supports d_type: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 06b9cb35161009dcb7123345749fef02f7cea8e0
runc version: 3f2f8b84a77f73d38244dd690525642a72156c64
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 4.14.243-185.433.amzn2.x86_64
Operating System: Ubuntu 14.04.5 LTS (containerized)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.645GiB
Name: 9d1ea8d456c4
ID: GA3S:TOF2:A43S:WTEP:JIFT:RNGG:X3XM:5N6S:7JMU:5IE3:HV2Z:AFGS
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
[Container] 2021/10/15 13:01:43 Running command curl -sS -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/bin/linux/amd64/aws-iam-authenticator
[Container] 2021/10/15 13:01:44 Running command curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 44.7M 100 44.7M 0 0 60.0M 0 --:--:-- --:--:-- --:--:-- 60.1M
[Container] 2021/10/15 13:01:45 Running command chmod +x ./kubectl ./aws-iam-authenticator
[Container] 2021/10/15 13:01:45 Running command echo `kubectl version`
/codebuild/output/tmp/script.sh: 1: /codebuild/output/tmp/script.sh: kubectl: not found
[Container] 2021/10/15 13:01:45 Running command export PATH=$PWD/:$PATH
[Container] 2021/10/15 13:01:45 Running command python -m pip install --upgrade --force pip
Collecting pip
/usr/local/lib/python2.7/dist-packages/pip/_vendor/urllib3/util/ssl_.py:339: SNIMissingWarning: An HTTPS request has been made, but the SNI (Subject Name Indication) extension to TLS is not available on this platform. This may cause the server to present an incorrect TLS certificate, which can cause validation failures. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
SNIMissingWarning
/usr/local/lib/python2.7/dist-packages/pip/_vendor/urllib3/util/ssl_.py:137: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
InsecurePlatformWarning
Could not find a version that satisfies the requirement pip (from versions: )
No matching distribution found for pip
/usr/local/lib/python2.7/dist-packages/pip/_vendor/urllib3/util/ssl_.py:137: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
InsecurePlatformWarning
[Container] 2021/10/15 13:01:45 Command did not exit successfully python -m pip install --upgrade --force pip exit status 1
[Container] 2021/10/15 13:01:45 Phase complete: INSTALL State: FAILED
[Container] 2021/10/15 13:01:45 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: python -m pip install --upgrade --force pip. Reason: exit status 1
My buildspec.yaml file looks like this:
---
version: 0.2
phases:
  install:
    commands:
      - curl -sS -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.14.6/2019-08-22/bin/linux/amd64/aws-iam-authenticator
      - curl -sS -o kubectl https://amazon-eks.s3-us-west-2.amazonaws.com/1.14.6/2019-08-22/bin/linux/amd64/kubectl
      - chmod +x ./kubectl ./aws-iam-authenticator
      - export PATH=$PWD/:$PATH
      - apt-get update && apt-get -y install jq python3-pip python3-dev && pip3 install --upgrade awscli
  pre_build:
    commands:
      - TAG="$REPOSITORY_NAME.$REPOSITORY_BRANCH.$ENVIRONMENT_NAME.$(date +%Y-%m-%d.%H.%M.%S).$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | head -c 8)"
      - sed -i 's#CONTAINER_IMAGE#'"$REPOSITORY_URI:$TAG"'#' simple_jwt_api.yml
      - $(aws ecr get-login --no-include-email)
      - export KUBECONFIG=$HOME/.kube/config
      - pip3 install -r requirements.txt
      - pytest
  build:
    commands:
      - docker build --tag $REPOSITORY_URI:$TAG .
  post_build:
    commands:
      - docker push $REPOSITORY_URI:$TAG
      - CREDENTIALS=$(aws sts assume-role --role-arn $EKS_KUBECTL_ROLE_ARN --role-session-name codebuild-kubectl --duration-seconds 900)
      - export AWS_ACCESS_KEY_ID="$(echo ${CREDENTIALS} | jq -r '.Credentials.AccessKeyId')"
      - export AWS_SECRET_ACCESS_KEY="$(echo ${CREDENTIALS} | jq -r '.Credentials.SecretAccessKey')"
      - export AWS_SESSION_TOKEN="$(echo ${CREDENTIALS} | jq -r '.Credentials.SessionToken')"
      - export AWS_EXPIRATION=$(echo ${CREDENTIALS} | jq -r '.Credentials.Expiration')
      - aws eks update-kubeconfig --name $EKS_CLUSTER_NAME
      - kubectl apply -f simple_jwt_api.yml
      - printf '[{"name":"simple_jwt_api","imageUri":"%s"}]' $REPOSITORY_URI:$TAG > build.json
      - pwd
      - ls
artifacts:
  files: build.json
env:
  parameter-store:
    JWT_SECRET: JWT_SECRET
Can anyone help me with this issue or point me to a similar question?
Thanks.
You need to provide more information about your CodeBuild resources, and the log output doesn't exactly match the commands in your buildspec.yaml.
However, the error you have is because you are trying to use the image's Python 2.7 to upgrade pip. You should use python3 instead.
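For example, the install command could call pip through Python 3 instead (a minimal sketch; the key change is python3):

install:
  commands:
    # Use the image's Python 3 interpreter rather than the default Python 2.7
    - python3 -m pip install --upgrade --force pip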
The YAML executed successfully in AWS CodeBuild, but the image was not sent to AWS ECR.
The buildspec.yml build output is given below:
[Container] 2020/10/26 09:50:07 Phase complete: PRE_BUILD State: SUCCEEDED
[Container] 2020/10/26 09:50:07 Phase context status code: Message:
[Container] 2020/10/26 09:50:07 Entering phase BUILD
[Container] 2020/10/26 09:50:07 Phase complete: BUILD State: SUCCEEDED
[Container] 2020/10/26 09:50:07 Phase context status code: Message:
[Container] 2020/10/26 09:50:07 Entering phase POST_BUILD
[Container] 2020/10/26 09:50:07 Phase complete: POST_BUILD State: SUCCEEDED
[Container] 2020/10/26 09:50:07 Phase context status code: Message:
Every phase executed successfully with a SUCCEEDED message.
Below is the buildspec.yml code snippet:
build:
commands:
- echo Build started on `date`
- echo Building the Docker image...
- docker build -t $REPOSITORY_URI:latest .
- docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$IMAGE_TAG
post_build:
commands:
- echo Build completed on `date`
- echo Pushing the Docker images...
- docker push $REPOSITORY_URI:latest
- docker push $REPOSITORY_URI:$IMAGE_TAG
- echo Writing image definitions file...
- printf '[{"name":"ui","imageUri":"%s"}]' $REPOSITORY_URI:$IMAGE_TAG > imagedefinitions.json
- cat imagedefinitions.json
No command is being run due to bad indentation. Please fix the indentation, using the buildspec reference as a guide (a corrected skeleton is sketched at the end of this answer):
https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html
Also, I do not see a docker login before the push:
- aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
A buildspec sample for Docker is here:
https://docs.aws.amazon.com/codebuild/latest/userguide/sample-docker.html#sample-docker-files
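For reference, a minimal, correctly indented skeleton for the snippet above might look like this (just a sketch; it assumes $REPOSITORY_URI, $IMAGE_TAG, $AWS_DEFAULT_REGION and $AWS_ACCOUNT_ID are set as project environment variables):

version: 0.2
phases:
  pre_build:
    commands:
      # Log in to ECR before building and pushing
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
  build:
    commands:
      - docker build -t $REPOSITORY_URI:latest .
      - docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      - docker push $REPOSITORY_URI:latest
      - docker push $REPOSITORY_URI:$IMAGE_TAG
      - printf '[{"name":"ui","imageUri":"%s"}]' $REPOSITORY_URI:$IMAGE_TAG > imagedefinitions.json
artifacts:
  files:
    - imagedefinitions.json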
My build process always fails when I try to install jq. Could you please tell me what I am doing wrong?
env:
  parameter-store:
    JWT_SECRET: JWT_SECRET
phases:
  install:
    runtime-versions:
      python: 3.8
    commands:
      - curl -sS -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.14.6/2019-08-22/bin/linux/amd64/aws-iam-authenticator
      - curl -sS -o kubectl https://amazon-eks.s3-us-west-2.amazonaws.com/1.14.6/2019-08-22/bin/linux/amd64/kubectl
      - chmod +x ./kubectl ./aws-iam-authenticator
      - export PATH=$PWD/:$PATH
      - apt-get update && apt-get -y install jq
      - apt-get -y install python3-pip python3-dev
      - pip3 install --upgrade awscli
      - pip3 install -r requirements.txt
      # - pip3 install -r requirements.txt
      - python3 -m pytest test_main.py
  pre_build:
    commands:
      - TAG="$REPOSITORY_NAME.$REPOSITORY_BRANCH.$ENVIRONMENT_NAME.$(date +%Y-%m-%d.%H.%M.%S).$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | head -c 8)"
      - sed -i 's#CONTAINER_IMAGE#'"$REPOSITORY_URI:$TAG"'#' simple_jwt_api.yml
      - $(aws ecr get-login --no-include-email)
      - export KUBECONFIG=$HOME/.kube/config
  build:
    commands:
      - docker build --tag $REPOSITORY_URI:$TAG .
  post_build:
    commands:
      - docker push $REPOSITORY_URI:$TAG
      - CREDENTIALS=$(aws sts assume-role --role-arn $EKS_KUBECTL_ROLE_ARN --role-session-name codebuild-kubectl --duration-seconds 900)
      - export AWS_ACCESS_KEY_ID="$(echo ${CREDENTIALS} | jq -r '.Credentials.AccessKeyId')"
      - export AWS_SECRET_ACCESS_KEY="$(echo ${CREDENTIALS} | jq -r '.Credentials.SecretAccessKey')"
      - export AWS_SESSION_TOKEN="$(echo ${CREDENTIALS} | jq -r '.Credentials.SessionToken')"
      - export AWS_EXPIRATION=$(echo ${CREDENTIALS} | jq -r '.Credentials.Expiration')
      - aws eks update-kubeconfig --name $EKS_CLUSTER_NAME
      - kubectl apply -f simple_jwt_api.yml
      - printf '[{"name":"simple_jwt_api","imageUri":"%s"}]' $REPOSITORY_URI:$TAG > build.json
      - pwd
      - ls
artifacts:
  files: build.json
Error msg:
[Container] 2020/03/22 15:57:14 Waiting for agent ping
[Container] 2020/03/22 15:57:16 Waiting for DOWNLOAD_SOURCE
[Container] 2020/03/22 15:57:17 Phase is DOWNLOAD_SOURCE
[Container] 2020/03/22 15:57:17 CODEBUILD_SRC_DIR=/codebuild/output/src534423531/src
[Container] 2020/03/22 15:57:17 YAML location is /codebuild/output/src534423531/src/buildspec.yml
[Container] 2020/03/22 15:57:17 Processing environment variables
[Container] 2020/03/22 15:57:17 Decrypting parameter store environment variables
[Container] 2020/03/22 15:57:17 Selecting 'python' runtime version '3.8' based on manual selections...
[Container] 2020/03/22 15:57:17 Running command echo "Installing Python version 3.8 ..."
Installing Python version 3.8 ...
[Container] 2020/03/22 15:57:17 Running command pyenv global $PYTHON_38_VERSION
[Container] 2020/03/22 15:57:17 Moving to directory /codebuild/output/src534423531/src
[Container] 2020/03/22 15:57:17 Registering with agent
[Container] 2020/03/22 15:57:17 Phases found in YAML: 4
[Container] 2020/03/22 15:57:17 POST_BUILD: 11 commands
[Container] 2020/03/22 15:57:17 INSTALL: 9 commands
[Container] 2020/03/22 15:57:17 PRE_BUILD: 4 commands
[Container] 2020/03/22 15:57:17 BUILD: 1 commands
[Container] 2020/03/22 15:57:17 Phase complete: DOWNLOAD_SOURCE State: SUCCEEDED
[Container] 2020/03/22 15:57:17 Phase context status code: Message:
[Container] 2020/03/22 15:57:17 Entering phase INSTALL
[Container] 2020/03/22 15:57:17 Running command curl -sS -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.14.6/2019-08-22/bin/linux/amd64/aws-iam-authenticator
[Container] 2020/03/22 15:57:19 Running command curl -sS -o kubectl https://amazon-eks.s3-us-west-2.amazonaws.com/1.14.6/2019-08-22/bin/linux/amd64/kubectl
[Container] 2020/03/22 15:57:19 Running command chmod +x ./kubectl ./aws-iam-authenticator
[Container] 2020/03/22 15:57:19 Running command export PATH=$PWD/:$PATH
[Container] 2020/03/22 15:57:19 Running command apt-get update && apt-get -y install jq
/codebuild/output/tmp/script.sh: line 4: apt-get: command not found
[Container] 2020/03/22 15:57:19 Command did not exit successfully apt-get update && apt-get -y install jq exit status 127
[Container] 2020/03/22 15:57:19 Phase complete: INSTALL State: FAILED
[Container] 2
The issue is clear from this line:
/codebuild/output/tmp/script.sh: line 4: apt-get: command not found
Please use the Ubuntu-based standard:3.0/4.0 image, which provides Python 3.8 and the 'apt-get' command. In the Amazon Linux image, use the 'yum' command instead.
https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-available.html
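For example, if you stay on an Amazon Linux based image, the jq install step would look roughly like this instead (a sketch; package availability may vary by image version):

install:
  runtime-versions:
    python: 3.8
  commands:
    # Amazon Linux images ship yum rather than apt-get
    - yum -y install jq
    # (remaining install commands unchanged)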
We want to try CodePipeline with an image that we already have on ECR.
So we followed the steps in the documentation.
We have a buildspec.yml like this:
phases:
  install:
    runtime-versions:
      nodejs: 10
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws --version
      - $(aws ecr get-login --no-include-email --region us-east-1)
      - REPOSITORY_URI=OUR_URL_FROM_ECR
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=${COMMIT_HASH:=latest}
      - echo $REPOSITORY_URI
      - echo $COMMIT_HASH
      - echo $IMAGE_TAG
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t $REPOSITORY_URI:latest .
      - docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push $REPOSITORY_URI:latest
      - docker push $REPOSITORY_URI:$IMAGE_TAG
      - echo Writing image definitions file...
      - printf '[{"name":"Petr","imageUri":"%s"}]' $REPOSITORY_URI:$IMAGE_TAG > imagedefinitions.json
artifacts:
  files: imagedefinitions.json
We created a new pipeline flow, but when we push some changes we get this log:
[Container] 2019/11/07 23:30:49 Waiting for agent ping
[Container] 2019/11/07 23:30:51 Waiting for DOWNLOAD_SOURCE
[Container] 2019/11/07 23:30:52 Phase is DOWNLOAD_SOURCE
[Container] 2019/11/07 23:30:52 CODEBUILD_SRC_DIR=/codebuild/output/src386464501/src
[Container] 2019/11/07 23:30:52 YAML location is /codebuild/output/src386464501/src/buildspec.yml
[Container] 2019/11/07 23:30:52 No commands found for phase name: INSTALL
[Container] 2019/11/07 23:30:52 Processing environment variables
[Container] 2019/11/07 23:30:52 Moving to directory /codebuild/output/src386464501/src
[Container] 2019/11/07 23:30:52 Registering with agent
[Container] 2019/11/07 23:30:52 Phases found in YAML: 4
[Container] 2019/11/07 23:30:52 POST_BUILD: 6 commands
[Container] 2019/11/07 23:30:52 INSTALL: 0 commands
[Container] 2019/11/07 23:30:52 PRE_BUILD: 9 commands
[Container] 2019/11/07 23:30:52 BUILD: 4 commands
[Container] 2019/11/07 23:30:52 Phase complete: DOWNLOAD_SOURCE State: SUCCEEDED
[Container] 2019/11/07 23:30:52 Phase context status code: Message:
[Container] 2019/11/07 23:30:52 Entering phase INSTALL
[Container] 2019/11/07 23:30:52 Running command echo "Installing Node.js version 10 ..."
Installing Node.js version 10 ...
[Container] 2019/11/07 23:30:52 Running command n 10.16.3
installed : v10.16.3 (with npm 6.9.0)
[Container] 2019/11/07 23:31:02 Phase complete: INSTALL State: SUCCEEDED
[Container] 2019/11/07 23:31:02 Phase context status code: Message:
[Container] 2019/11/07 23:31:02 Entering phase PRE_BUILD
[Container] 2019/11/07 23:31:02 Running command echo Logging in to Amazon ECR...
Logging in to Amazon ECR...
[Container] 2019/11/07 23:31:02 Running command aws --version
aws-cli/1.16.242 Python/3.6.8 Linux/4.14.143-91.122.amzn1.x86_64 exec-env/AWS_ECS_EC2 botocore/1.12.232
[Container] 2019/11/07 23:31:07 Running command $(aws ecr get-login --no-include-email --region us-east-1)
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
[Container] 2019/11/07 23:31:10 Running command REPOSITORY_URI=***********
[Container] 2019/11/07 23:31:10 Running command COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
[Container] 2019/11/07 23:31:10 Running command IMAGE_TAG=${COMMIT_HASH:=latest}
[Container] 2019/11/07 23:31:10 Running command echo $REPOSITORY_URI
***********
[Container] 2019/11/07 23:31:10 Running command echo $COMMIT_HASH
88f8cfc
[Container] 2019/11/07 23:31:10 Running command echo $IMAGE_TAG
88f8cfc
[Container] 2019/11/07 23:31:10 Phase complete: PRE_BUILD State: SUCCEEDED
[Container] 2019/11/07 23:31:10 Phase context status code: Message:
[Container] 2019/11/07 23:31:10 Entering phase BUILD
[Container] 2019/11/07 23:31:10 Running command echo Build started on `date`
Build started on Thu Nov 7 23:31:10 UTC 2019
[Container] 2019/11/07 23:31:10 Running command echo Building the Docker image...
Building the Docker image...
[Container] 2019/11/07 23:31:10 Running command docker build -t $REPOSITORY_URI:latest .
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
[Container] 2019/11/07 23:31:10 Command did not exit successfully docker build -t $REPOSITORY_URI:latest . exit status 1
[Container] 2019/11/07 23:31:10 Phase complete: BUILD State: FAILED
[Container] 2019/11/07 23:31:10 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: docker build -t $REPOSITORY_URI:latest .. Reason: exit status 1
[Container] 2019/11/07 23:31:10 Entering phase POST_BUILD
[Container] 2019/11/07 23:31:10 Running command echo Build completed on `date`
Build completed on Thu Nov 7 23:31:10 UTC 2019
[Container] 2019/11/07 23:31:10 Running command echo Pushing the Docker images...
Pushing the Docker images...
[Container] 2019/11/07 23:31:10 Running command docker push $REPOSITORY_URI:latest
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
[Container] 2019/11/07 23:31:10 Command did not exit successfully docker push $REPOSITORY_URI:latest exit status 1
[Container] 2019/11/07 23:31:10 Phase complete: POST_BUILD State: FAILED
[Container] 2019/11/07 23:31:10 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: docker push $REPOSITORY_URI:latest. Reason: exit status 1
[Container] 2019/11/07 23:31:10 Expanding base directory path: .
[Container] 2019/11/07 23:31:10 Assembling file list
[Container] 2019/11/07 23:31:10 Expanding .
[Container] 2019/11/07 23:31:10 Expanding file paths for base directory .
[Container] 2019/11/07 23:31:10 Assembling file list
[Container] 2019/11/07 23:31:10 Expanding imagedefinitions.json
[Container] 2019/11/07 23:31:10 Skipping invalid file path imagedefinitions.json
[Container] 2019/11/07 23:31:10 Phase complete: UPLOAD_ARTIFACTS State: FAILED
[Container] 2019/11/07 23:31:10 Phase context status code: CLIENT_ERROR Message: no matching artifact paths found
We want to know if we are missing something; we followed the steps from here:
https://aws.amazon.com/es/blogs/devops/build-a-continuous-delivery-pipeline-for-your-container-images-with-amazon-ecr-as-source/
Any advice?
I ran into a similar error. The fix: because this build project needs to build a Docker image, set Privileged Mode to true.
Privileged mode grants the build project's Docker container elevated access so that it can run the Docker daemon.
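In the console this is the "Privileged" checkbox in the build environment settings; a rough CLI equivalent (project name, image and compute type below are placeholders to adjust) would be something like:

aws codebuild update-project \
  --name my-build-project \
  --environment "type=LINUX_CONTAINER,image=aws/codebuild/standard:4.0,computeType=BUILD_GENERAL1_SMALL,privilegedMode=true"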
There are two possibilities:
One is that you didn't add ECR and ECS permissions to the role you created for your EC2 instance (or for Elastic Beanstalk, if that is what you are using). Verify that first.
Otherwise, look into the second possibility:
Use the following commands in your phases:
phases:
  install:
    commands:
      - nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://127.0.0.1:2375 --storage-driver=overlay2&
      - timeout 15 sh -c "until docker info; do echo .; sleep 1; done"
For more details, see these links:
https://docs.aws.amazon.com/codebuild/latest/userguide/sample-docker-custom-image.html#sample-docker-custom-image-files
https://docs.aws.amazon.com/codebuild/latest/userguide/troubleshooting.html#troubleshooting-cannot-connect-to-docker-daemon
For the error
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is
the docker daemon running?
I found this helpful:
https://docs.aws.amazon.com/codebuild/latest/userguide/sample-docker.html
Specifically, point 5.d.
Follow the steps in Run CodeBuild directly to create a build project, run the build, and view build information.
If you use the console to create your project:
a. For Operating system, choose Ubuntu.
b. For Runtime, choose Standard.
c. For Image, choose aws/codebuild/standard:4.0.
d. Because you use this build project to build a Docker image, select Privileged.
I had the same problem as you, and here is how I fixed it: go to CodeBuild and then to its IAM role, and attach the AmazonEC2ContainerRegistryFullAccess policy. Then click 'Edit' on that CodeBuild project, select 'Environment', and check 'Allow AWS CodeBuild to modify this service role so it can be used with this build project'. Now try again.
Cheers
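If you prefer the CLI, attaching that managed policy to the project's service role would look roughly like this (the role name is a placeholder):

# Attach the ECR full-access managed policy to the CodeBuild service role
aws iam attach-role-policy \
  --role-name codebuild-my-project-service-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess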
I'm trying to get a simple Docker app to build using AWS CodeBuild, but I am coming across an error where the aws command is not found:
[Container] 2016/12/10 04:29:17 Build started on Sat Dec 10 04:29:17 UTC 2016
[Container] 2016/12/10 04:29:17 Running command echo Building the Docker image...
[Container] 2016/12/10 04:29:17 Building the Docker image...
[Container] 2016/12/10 04:29:17 Running command docker build -t aws-test .
[Container] 2016/12/10 04:29:17 sh: 1: docker: not found
[Container] 2016/12/10 04:29:17 Command did not exit successfully docker build -t aws-test . exit status 127
[Container] 2016/12/10 04:29:17 Phase complete: BUILD Success: false
[Container] 2016/12/10 04:29:17 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: docker build -t aws-test .. Reason: exit status 127
I've got a super simple Dockerfile which builds a simple Express app:
FROM node:6.2.0
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
# Bundle app source
COPY . /usr/src/app
EXPOSE 3000
CMD npm install && npm start
And I've got a super simple buildspec.yml which is supposed to build the Docker container and push it to the AWS registry:
version: 0.1
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - $(aws ecr get-login --region us-west-2)
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t <CONTAINER_NAME> .
      - docker tag <CONTAINER_NAME>:latest <ID>.dkr.ecr.us-west-2.amazonaws.com/<CONTAINER_NAME>:latest
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image...
      - docker push <ID>.dkr.ecr.us-west-2.amazonaws.com/<CONTAINER_NAME>:latest
However, once run, it throws the error posted above. I'm not sure why the aws CLI utils aren't found. This guide here:
http://docs.aws.amazon.com/codebuild/latest/userguide/sample-docker.html
suggests I don't need to do anything to set up the aws CLI utils anywhere.
Also, one other thing I noticed: when I removed the $(aws ecr get-login --region us-west-2) step from the buildspec file and built again, it then said that the docker command was not found! Have I missed a step somewhere? (I don't think I have.)
So it turned out I was using the wrong build environment: I was trying to specify my own Docker image, which was ultimately not set up with any of the AWS CLI utils!
Thanks to @Clare Liguori for tipping me off!
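If someone does need a custom build image instead of one of the AWS-managed ones, the image itself has to bundle the AWS CLI and Docker; a rough, hypothetical sketch (base image and package names are assumptions) would be:

# Hypothetical custom CodeBuild image that bakes in the AWS CLI and Docker.
# The Docker daemon still has to be started in the buildspec install phase,
# as shown by the dockerd commands in an earlier answer above.
FROM ubuntu:18.04
RUN apt-get update \
 && apt-get install -y awscli docker.io \
 && rm -rf /var/lib/apt/lists/*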