Error in UPLOAD_ARTIFACTS phase: [pytest_reports: [report files not found in build]] - dockerfile

I am building a Docker image in AWS CodeBuild and running pytest in it.
I have a Dockerfile that looks like this:
FROM python:latest
ENV PORT=80
COPY requirements.txt .
RUN pip install -r requirements.txt
WORKDIR /tests/
COPY . .
RUN apt-get -y update
RUN pip install --upgrade pip
RUN pip install selenium
RUN pip3 install webdriver_manager
RUN apt-get install zip -y
RUN apt-get install unzip -y
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN curl -sS -o - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list
RUN apt-get -y update
RUN apt-get -y install google-chrome-stable
# Download chrome driver
RUN wget -N https://chromedriver.storage.googleapis.com/`curl -sS chromedriver.storage.googleapis.com/LATEST_RELEASE`/chromedriver_linux64.zip -P ~/
RUN unzip ~/chromedriver_linux64.zip -d ~/
RUN rm ~/chromedriver_linux64.zip
RUN mv -f ~/chromedriver /usr/local/bin/chromedriver
RUN chmod 0777 /usr/local/bin/chromedriver
# Install chrome browser
RUN curl -sS -o - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list
EXPOSE $PORT
CMD ["pytest", "-v", "-s", "--junitxml=pytest_report.xml"]
The buildspec.yaml looks like this.
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.8
    commands:
  pre_build:
    commands:
      - IMAGE_TAG=build-$(echo $CODEBUILD_BUILD_ID | awk -F":" '{print $2}')
      - REPOSITORY_URI=3xxxxxxx.dkr.ecr.us-east-1.amazonaws.com/test-ui-python-dkr
  build:
    on-failure: ABORT
    commands:
      - export DOCKER_BUILDKIT=1
      - export BUILDKIT_PROGRESS=plain
      - export PROGRESS_NO_TRUNC=1
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin 359772415770.dkr.ecr.us-east-1.amazonaws.com
      - docker build --progress=plain -t 3xxxxxxx.dkr.ecr.us-east-1.amazonaws.com/test-ui-python-dkr .
      - docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$IMAGE_TAG
      - docker push $REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      - echo Build completed..
      - docker run $REPOSITORY_URI:$IMAGE_TAG
reports:
  pytest_reports:
    files:
      - ./tests/pytest_report.xml
    base-directory: ./tests
    file-format: JUNITXML
artifacts:
  files:
    - "**/*"
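For reference, the IMAGE_TAG expression in pre_build just keeps the part of the build ID after the colon. A quick local sketch, using a made-up build ID for illustration, shows the shape of the resulting tag:

```shell
# CODEBUILD_BUILD_ID has the form "project-name:build-id";
# awk -F":" '{print $2}' keeps only the part after the colon
CODEBUILD_BUILD_ID="test-ui-python-dkr:0a1b2c3d"   # made-up value for illustration
IMAGE_TAG=build-$(echo $CODEBUILD_BUILD_ID | awk -F":" '{print $2}')
echo "$IMAGE_TAG"   # build-0a1b2c3d
```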
I can see that pytest collects the tests successfully and runs them:
collecting ...
collected 2 items
test_login.py::Test_login::test_01 Title==================> Google
PASSED
test_login.py::Test_login::test_02
by_locator====> ('xpath', "//input[@name='q']")
Entered Docker in search....
PASSED
----------------- generated xml file: /tests/pytest_report.xml -----------------
But the pytest_reports section failed.
[Container] 2022/07/18 20:38:27 Phase complete: POST_BUILD State: SUCCEEDED
[Container] 2022/07/18 20:38:27 Phase context status code: Message:
[Container] 2022/07/18 20:38:27 Expanding base directory path: .
[Container] 2022/07/18 20:38:27 Assembling file list
[Container] 2022/07/18 20:38:27 Expanding .
[Container] 2022/07/18 20:38:27 Expanding file paths for base directory .
[Container] 2022/07/18 20:38:27 Assembling file list
[Container] 2022/07/18 20:38:27 Expanding **/*
[Container] 2022/07/18 20:38:27 Found 7 file(s)
[Container] 2022/07/18 20:38:27 Preparing to copy TEST report pytest_reports
[Container] 2022/07/18 20:38:27 Expanding base directory path: ./tests
[Container] 2022/07/18 20:38:27 Assembling file list
[Container] 2022/07/18 20:38:27 Expanding ./tests
[Container] 2022/07/18 20:38:27 Skipping invalid file path ./tests
[Container] 2022/07/18 20:38:27 No matching base directory path found for ./tests, skipping
[Container] 2022/07/18 20:38:27 Phase complete: UPLOAD_ARTIFACTS State: SUCCEEDED
[Container] 2022/07/18 20:38:27 Phase context status code: Message:
Error in UPLOAD_ARTIFACTS phase: [pytest_reports: [report files not found in build]]
I am sure this has something to do with an invalid path reference somewhere, but I am unable to figure it out.
Any help is much appreciated.
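A likely culprit, for anyone hitting the same error: the JUnit XML is generated at /tests/pytest_report.xml inside the container, while CodeBuild's reports section only scans the filesystem of the build host, where no ./tests directory exists. A sketch of one possible fix, assuming nothing about the project beyond the buildspec above (the container name test-run is arbitrary), is to copy the report out of the stopped container in post_build:

```yaml
post_build:
  commands:
    - echo Build completed..
    # run WITHOUT --rm so the stopped container is still around afterwards
    - docker run --name test-run $REPOSITORY_URI:$IMAGE_TAG
    # copy the report onto the build host so it matches base-directory: ./tests
    - mkdir -p tests
    - docker cp test-run:/tests/pytest_report.xml tests/pytest_report.xml
```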

Related

unable to prepare context: unable to evaluate symlinks in Dockerfile path

I'm using AWS CodeBuild to build a Docker image and push it to ECR. This is the CodeBuild configuration.
Here is the buildspec.yml:
version: 0.2
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws --version
      - aws ecr get-login-password --region my-region | docker login --username AWS --password-stdin my-image-uri
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t pos-drf .
      - docker tag pos-drf:latest my-image-uri/pos-drf:latest
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push my-image-uri/pos-drf:latest
It works up until the build command docker build -t pos-drf .
The error message I get is the following:
[Container] 2022/12/30 15:12:39 Running command docker build -t pos-drf .
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /codebuild/output/src696881611/src/Dockerfile: no such file or directory
[Container] 2022/12/30 15:12:39 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: docker build -t pos-drf .. Reason: exit status 1
I'm quite sure this is not a permission-related issue.
Please let me know if I need to share something else.
UPDATE:
This is the Dockerfile
# base image
FROM python:3.8
# setup environment variable
ENV DockerHOME=/home/app/webapp
# set work directory
RUN mkdir -p $DockerHOME
# where your code lives
WORKDIR $DockerHOME
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install dependencies
RUN pip install --upgrade pip
# copy whole project to your docker home directory.
COPY . $DockerHOME
RUN apt-get dist-upgrade
# RUN apt-get install mysql-client mysql-server
# run this command to install all dependencies
RUN pip install -r requirements.txt
# port where the Django app runs
EXPOSE 8000
# start server
CMD python manage.py runserver
My mistake was that I had the Dockerfile locally but hadn't pushed it.
CodeBuild worked successfully after pushing the Dockerfile.
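A cheap guard against this class of problem is to list the downloaded source before building, so a missing file fails fast with a clear message. A sketch, assuming the Dockerfile is expected at the repository root:

```yaml
pre_build:
  commands:
    # show what actually arrived in the source bundle
    - ls -la $CODEBUILD_SRC_DIR
    # fail immediately if the Dockerfile is missing
    - test -f Dockerfile
```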

docker-compose push to ECR fails in codebuild

I've been trying to figure this out for a couple days and the issue is compounded by the fact that I'm not getting a useful error message.
I'm using the following buildspec.yml file in codebuild to build docker containers and then send to AWS ECR.
version: 0.2
env:
  parameter-store:
    AWS_DEFAULT_REGION: "/docker_test/region"
    IMAGE_REPO_NAME: "/docker_test/repo_name"
    IMAGE_TAG: "/docker_test/img_tag"
    AWS_ACCOUNT_ID: "account_id"
phases:
  install:
    runtime-versions:
      docker: 18
  pre_build:
    commands:
      - echo Logging in to Amazon ECR and DockerHub...
      - docker login -u AWS -p $(aws ecr get-login-password --region $AWS_DEFAULT_REGION) $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker-compose -f docker-compose.yml -f docker-compose.prod.yml build
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image...
      - docker-compose -f docker-compose.yml -f docker-compose.prod.yml push
artifacts:
  files:
    - 'Dockerrun.aws.json'
I've tried Docker 19 and slightly different versions of the docker login line, and made sure my roles were set. I get "Login Succeeded", so I assume the login line is good.
[Container] 2021/12/22 16:19:20 Running command docker login -u AWS -p $(aws ecr get-login-password --region $AWS_DEFAULT_REGION) $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
[Container] 2021/12/22 16:19:26 Phase complete: PRE_BUILD State: SUCCEEDED
[Container] 2021/12/22 16:19:26 Phase context status code: Message:
[Container] 2021/12/22 16:19:26 Entering phase BUILD
The post_build phase fails however with the following:
Successfully built d6878cbb68ba
Successfully tagged ***.dkr.ecr.***.amazonaws.com/***:latest
[Container] 2021/12/22 16:21:58 Phase complete: BUILD State: SUCCEEDED
[Container] 2021/12/22 16:21:58 Phase context status code: Message:
[Container] 2021/12/22 16:21:58 Entering phase POST_BUILD
[Container] 2021/12/22 16:21:58 Running command echo Build completed on `date`
Build completed on Wed Dec 22 16:21:58 UTC 2021
[Container] 2021/12/22 16:21:58 Running command echo Pushing the Docker image...
Pushing the Docker image...
[Container] 2021/12/22 16:21:58 Running command docker-compose -f docker-compose.yml -f docker-compose.prod.yml push
Pushing myapp (***.dkr.ecr.***.amazonaws.com/***:latest)...
The push refers to repository [***.dkr.ecr.***.amazonaws.com/***]
EOF
[Container] 2021/12/22 16:22:49 Command did not exit successfully docker-compose -f docker-compose.yml -f docker-compose.prod.yml push exit status 1
[Container] 2021/12/22 16:22:49 Phase complete: POST_BUILD State: FAILED
[Container] 2021/12/22 16:22:49 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: docker-compose -f docker-compose.yml -f docker-compose.prod.yml push. Reason: exit status 1
[Container] 2021/12/22 16:22:49 Phase complete: UPLOAD_ARTIFACTS State: SUCCEEDED
[Container] 2021/12/22 16:22:49 Phase context status code: Message:
I'm just not sure how to get more information on this error - that would be ideal.
EDIT:
I'm adding the docker-compose.prod.yml file for additional context:
version: "3.2"
services:
  myapp:
    image: ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/${IMAGE_TAG}
    command: bash -c "
      python manage.py migrate
      && gunicorn --bind :8000 --workers 3 --threads 2 --timeout 240 project.wsgi:application"
    restart: always
    ports:
      - "80:80"
  celery_worker:
    image: ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/${IMAGE_TAG}
    command: celery -A project worker --loglevel=${CELERY_LOG_LEVEL:-WARNING}
    restart: always
OK, so I figured it out. Your question about making sure the repo exists pointed me in the right direction, @mreferre. I was confused by the use of IMAGE_TAG and IMAGE_REPO_NAME in the code samples I referenced when building this. They were essentially supposed to be the same thing, so the push was failing because I was trying to push to an ECR repo named "proj-name", which didn't exist. I just needed to change it to "repo-name", so the image in docker-compose.prod.yml becomes:
image: ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/${IMAGE_REPO_NAME}
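One way to catch a missing repository before sitting through a long build is an early existence check. A sketch, assuming the same parameter-store variables as the buildspec above:

```yaml
pre_build:
  commands:
    # fails fast if the ECR repository does not exist in this account/region
    - aws ecr describe-repositories --repository-names $IMAGE_REPO_NAME --region $AWS_DEFAULT_REGION
```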

AWS Codebuild - Error while executing command: python -m pip install --upgrade --force pip. Reason: exit status 1

I'm trying to run a build after creating a stack in AWS CloudFormation, but unfortunately the build failed with this error message:
Phase context status code: COMMAND_EXECUTION_ERROR Message: Error
while executing command: python -m pip install --upgrade --force pip.
Reason: exit status 1
here is the log for the build and why it failed:
[Container] 2021/10/15 13:01:39 Waiting for agent ping
[Container] 2021/10/15 13:01:40 Waiting for DOWNLOAD_SOURCE
[Container] 2021/10/15 13:01:41 Phase is DOWNLOAD_SOURCE
[Container] 2021/10/15 13:01:41 CODEBUILD_SRC_DIR=/codebuild/output/src061758247/src
[Container] 2021/10/15 13:01:41 YAML location is /codebuild/output/src061758247/src/buildspec.yml
[Container] 2021/10/15 13:01:41 Processing environment variables
[Container] 2021/10/15 13:01:41 Decrypting parameter store environment variables
[Container] 2021/10/15 13:01:41 [WARN] Skipping install of runtimes. Runtime version selection is not supported by this build image.
[Container] 2021/10/15 13:01:43 Moving to directory /codebuild/output/src061758247/src
[Container] 2021/10/15 13:01:43 Registering with agent
[Container] 2021/10/15 13:01:43 Phases found in YAML: 4
[Container] 2021/10/15 13:01:43 POST_BUILD: 10 commands
[Container] 2021/10/15 13:01:43 INSTALL: 10 commands
[Container] 2021/10/15 13:01:43 PRE_BUILD: 6 commands
[Container] 2021/10/15 13:01:43 BUILD: 1 commands
[Container] 2021/10/15 13:01:43 Phase complete: DOWNLOAD_SOURCE State: SUCCEEDED
[Container] 2021/10/15 13:01:43 Phase context status code: Message:
[Container] 2021/10/15 13:01:43 Entering phase INSTALL
[Container] 2021/10/15 13:01:43 Running command echo 'about to call dockerd'
about to call dockerd
[Container] 2021/10/15 13:01:43 Running command nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://127.0.0.1:2375 --storage-driver=overlay2&
[Container] 2021/10/15 13:01:43 Running command timeout 15 sh -c "until docker info; do echo .; sleep 1; done"
Error starting daemon: pid file found, ensure docker is not running or delete /var/run/docker.pid
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 17.09.0-ce
Storage Driver: overlay
Backing Filesystem: extfs
Supports d_type: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 06b9cb35161009dcb7123345749fef02f7cea8e0
runc version: 3f2f8b84a77f73d38244dd690525642a72156c64
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 4.14.243-185.433.amzn2.x86_64
Operating System: Ubuntu 14.04.5 LTS (containerized)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.645GiB
Name: 9d1ea8d456c4
ID: GA3S:TOF2:A43S:WTEP:JIFT:RNGG:X3XM:5N6S:7JMU:5IE3:HV2Z:AFGS
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
[Container] 2021/10/15 13:01:43 Running command curl -sS -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/bin/linux/amd64/aws-iam-authenticator
[Container] 2021/10/15 13:01:44 Running command curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 44.7M 100 44.7M 0 0 60.0M 0 --:--:-- --:--:-- --:--:-- 60.1M
[Container] 2021/10/15 13:01:45 Running command chmod +x ./kubectl ./aws-iam-authenticator
[Container] 2021/10/15 13:01:45 Running command echo `kubectl version`
/codebuild/output/tmp/script.sh: 1: /codebuild/output/tmp/script.sh: kubectl: not found
[Container] 2021/10/15 13:01:45 Running command export PATH=$PWD/:$PATH
[Container] 2021/10/15 13:01:45 Running command python -m pip install --upgrade --force pip
Collecting pip
/usr/local/lib/python2.7/dist-packages/pip/_vendor/urllib3/util/ssl_.py:339: SNIMissingWarning: An HTTPS request has been made, but the SNI (Subject Name Indication) extension to TLS is not available on this platform. This may cause the server to present an incorrect TLS certificate, which can cause validation failures. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
SNIMissingWarning
/usr/local/lib/python2.7/dist-packages/pip/_vendor/urllib3/util/ssl_.py:137: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
InsecurePlatformWarning
Could not find a version that satisfies the requirement pip (from versions: )
No matching distribution found for pip
/usr/local/lib/python2.7/dist-packages/pip/_vendor/urllib3/util/ssl_.py:137: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
InsecurePlatformWarning
[Container] 2021/10/15 13:01:45 Command did not exit successfully python -m pip install --upgrade --force pip exit status 1
[Container] 2021/10/15 13:01:45 Phase complete: INSTALL State: FAILED
[Container] 2021/10/15 13:01:45 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: python -m pip install --upgrade --force pip. Reason: exit status 1
My buildspec.yaml file looks like this:
---
version: 0.2
phases:
  install:
    commands:
      - curl -sS -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.14.6/2019-08-22/bin/linux/amd64/aws-iam-authenticator
      - curl -sS -o kubectl https://amazon-eks.s3-us-west-2.amazonaws.com/1.14.6/2019-08-22/bin/linux/amd64/kubectl
      - chmod +x ./kubectl ./aws-iam-authenticator
      - export PATH=$PWD/:$PATH
      - apt-get update && apt-get -y install jq python3-pip python3-dev && pip3 install --upgrade awscli
  pre_build:
    commands:
      - TAG="$REPOSITORY_NAME.$REPOSITORY_BRANCH.$ENVIRONMENT_NAME.$(date +%Y-%m-%d.%H.%M.%S).$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | head -c 8)"
      - sed -i 's#CONTAINER_IMAGE#'"$REPOSITORY_URI:$TAG"'#' simple_jwt_api.yml
      - $(aws ecr get-login --no-include-email)
      - export KUBECONFIG=$HOME/.kube/config
      - pip3 install -r requirements.txt
      - pytest
  build:
    commands:
      - docker build --tag $REPOSITORY_URI:$TAG .
  post_build:
    commands:
      - docker push $REPOSITORY_URI:$TAG
      - CREDENTIALS=$(aws sts assume-role --role-arn $EKS_KUBECTL_ROLE_ARN --role-session-name codebuild-kubectl --duration-seconds 900)
      - export AWS_ACCESS_KEY_ID="$(echo ${CREDENTIALS} | jq -r '.Credentials.AccessKeyId')"
      - export AWS_SECRET_ACCESS_KEY="$(echo ${CREDENTIALS} | jq -r '.Credentials.SecretAccessKey')"
      - export AWS_SESSION_TOKEN="$(echo ${CREDENTIALS} | jq -r '.Credentials.SessionToken')"
      - export AWS_EXPIRATION=$(echo ${CREDENTIALS} | jq -r '.Credentials.Expiration')
      - aws eks update-kubeconfig --name $EKS_CLUSTER_NAME
      - kubectl apply -f simple_jwt_api.yml
      - printf '[{"name":"simple_jwt_api","imageUri":"%s"}]' $REPOSITORY_URI:$TAG > build.json
      - pwd
      - ls
artifacts:
  files: build.json
env:
  parameter-store:
    JWT_SECRET: JWT_SECRET
Can anyone help me with this issue or point me to a similar question?
Thanks
You need to provide more information about your CodeBuild resources, and the log output doesn't exactly match the commands in your buildspec.yaml.
However, the error you have occurs because you are running pip under Python 2.7, whose TLS stack can no longer talk to PyPI (note the SNIMissingWarning in the log). You should use python3 instead.
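A minimal sketch of that change in the install phase, assuming python3 is present on the build image:

```yaml
install:
  commands:
    # Python 2.7's pip can no longer reach PyPI (see the SNIMissingWarning
    # in the log above); invoke pip through python3 instead
    - python3 -m pip install --upgrade --force pip
```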

AWS - Python - Flask: Building fails

My build process always fails when I try to install jq. Could you please tell me what I am doing wrong?
env:
  parameter-store:
    JWT_SECRET: JWT_SECRET
phases:
  install:
    runtime-versions:
      python: 3.8
    commands:
      - curl -sS -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.14.6/2019-08-22/bin/linux/amd64/aws-iam-authenticator
      - curl -sS -o kubectl https://amazon-eks.s3-us-west-2.amazonaws.com/1.14.6/2019-08-22/bin/linux/amd64/kubectl
      - chmod +x ./kubectl ./aws-iam-authenticator
      - export PATH=$PWD/:$PATH
      - apt-get update && apt-get -y install jq
      - apt-get -y install python3-pip python3-dev
      - pip3 install --upgrade awscli
      - pip3 install -r requirements.txt
      # - pip3 install -r requirements.txt
      - python3 -m pytest test_main.py
  pre_build:
    commands:
      - TAG="$REPOSITORY_NAME.$REPOSITORY_BRANCH.$ENVIRONMENT_NAME.$(date +%Y-%m-%d.%H.%M.%S).$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | head -c 8)"
      - sed -i 's#CONTAINER_IMAGE#'"$REPOSITORY_URI:$TAG"'#' simple_jwt_api.yml
      - $(aws ecr get-login --no-include-email)
      - export KUBECONFIG=$HOME/.kube/config
  build:
    commands:
      - docker build --tag $REPOSITORY_URI:$TAG .
  post_build:
    commands:
      - docker push $REPOSITORY_URI:$TAG
      - CREDENTIALS=$(aws sts assume-role --role-arn $EKS_KUBECTL_ROLE_ARN --role-session-name codebuild-kubectl --duration-seconds 900)
      - export AWS_ACCESS_KEY_ID="$(echo ${CREDENTIALS} | jq -r '.Credentials.AccessKeyId')"
      - export AWS_SECRET_ACCESS_KEY="$(echo ${CREDENTIALS} | jq -r '.Credentials.SecretAccessKey')"
      - export AWS_SESSION_TOKEN="$(echo ${CREDENTIALS} | jq -r '.Credentials.SessionToken')"
      - export AWS_EXPIRATION=$(echo ${CREDENTIALS} | jq -r '.Credentials.Expiration')
      - aws eks update-kubeconfig --name $EKS_CLUSTER_NAME
      - kubectl apply -f simple_jwt_api.yml
      - printf '[{"name":"simple_jwt_api","imageUri":"%s"}]' $REPOSITORY_URI:$TAG > build.json
      - pwd
      - ls
artifacts:
  files: build.json
Error msg:
[Container] 2020/03/22 15:57:14 Waiting for agent ping
[Container] 2020/03/22 15:57:16 Waiting for DOWNLOAD_SOURCE
[Container] 2020/03/22 15:57:17 Phase is DOWNLOAD_SOURCE
[Container] 2020/03/22 15:57:17 CODEBUILD_SRC_DIR=/codebuild/output/src534423531/src
[Container] 2020/03/22 15:57:17 YAML location is /codebuild/output/src534423531/src/buildspec.yml
[Container] 2020/03/22 15:57:17 Processing environment variables
[Container] 2020/03/22 15:57:17 Decrypting parameter store environment variables
[Container] 2020/03/22 15:57:17 Selecting 'python' runtime version '3.8' based on manual selections...
[Container] 2020/03/22 15:57:17 Running command echo "Installing Python version 3.8 ..."
Installing Python version 3.8 ...
[Container] 2020/03/22 15:57:17 Running command pyenv global $PYTHON_38_VERSION
[Container] 2020/03/22 15:57:17 Moving to directory /codebuild/output/src534423531/src
[Container] 2020/03/22 15:57:17 Registering with agent
[Container] 2020/03/22 15:57:17 Phases found in YAML: 4
[Container] 2020/03/22 15:57:17 POST_BUILD: 11 commands
[Container] 2020/03/22 15:57:17 INSTALL: 9 commands
[Container] 2020/03/22 15:57:17 PRE_BUILD: 4 commands
[Container] 2020/03/22 15:57:17 BUILD: 1 commands
[Container] 2020/03/22 15:57:17 Phase complete: DOWNLOAD_SOURCE State: SUCCEEDED
[Container] 2020/03/22 15:57:17 Phase context status code: Message:
[Container] 2020/03/22 15:57:17 Entering phase INSTALL
[Container] 2020/03/22 15:57:17 Running command curl -sS -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.14.6/2019-08-22/bin/linux/amd64/aws-iam-authenticator
[Container] 2020/03/22 15:57:19 Running command curl -sS -o kubectl https://amazon-eks.s3-us-west-2.amazonaws.com/1.14.6/2019-08-22/bin/linux/amd64/kubectl
[Container] 2020/03/22 15:57:19 Running command chmod +x ./kubectl ./aws-iam-authenticator
[Container] 2020/03/22 15:57:19 Running command export PATH=$PWD/:$PATH
[Container] 2020/03/22 15:57:19 Running command apt-get update && apt-get -y install jq
/codebuild/output/tmp/script.sh: line 4: apt-get: command not found
[Container] 2020/03/22 15:57:19 Command did not exit successfully apt-get update && apt-get -y install jq exit status 127
[Container] 2020/03/22 15:57:19 Phase complete: INSTALL State: FAILED
[Container] 2
The issue is clear from this line:
/codebuild/output/tmp/script.sh: line 4: apt-get: command not found
Please use the Ubuntu-based standard:3.0/4.0 image, which provides Python 3.8 and the apt-get command. On the Amazon Linux image, use the yum command instead.
https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-available.html
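If the image family might change, one hedged way to keep the install step working on both Ubuntu and Amazon Linux images is to branch on whichever package manager exists:

```yaml
install:
  commands:
    # Ubuntu-based CodeBuild images ship apt-get; Amazon Linux images ship yum
    - if command -v apt-get >/dev/null 2>&1; then apt-get update && apt-get -y install jq; else yum -y install jq; fi
```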

Codebuild aws command not found when ran?

I'm trying to get a simple docker app to build using AWS codebuild, but I am coming across an error where the aws command is not found:
[Container] 2016/12/10 04:29:17 Build started on Sat Dec 10 04:29:17 UTC 2016
[Container] 2016/12/10 04:29:17 Running command echo Building the Docker image...
[Container] 2016/12/10 04:29:17 Building the Docker image...
[Container] 2016/12/10 04:29:17 Running command docker build -t aws-test .
[Container] 2016/12/10 04:29:17 sh: 1: docker: not found
[Container] 2016/12/10 04:29:17 Command did not exit successfully docker build -t aws-test . exit status 127
[Container] 2016/12/10 04:29:17 Phase complete: BUILD Success: false
[Container] 2016/12/10 04:29:17 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: docker build -t aws-test .. Reason: exit status 127
I've got a super simple docker file which builds a simple express app:
FROM node:6.2.0
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
# Bundle app source
COPY . /usr/src/app
EXPOSE 3000
CMD npm install && npm start
And I've got a super simple buildspec.yml which is suppose to build the docker container and push it to the aws registry:
version: 0.1
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - $(aws ecr get-login --region us-west-2)
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t <CONTAINER_NAME> .
      - docker tag <CONTAINER_NAME>:latest <ID>.dkr.ecr.us-west-2.amazonaws.com/<CONTAINER_NAME>:latest
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image...
      - docker push <ID>.dkr.ecr.us-west-2.amazonaws.com/<CONTAINER_NAME>:latest
However, once run, it throws the error posted above. I'm not sure why the AWS CLI utils aren't found. This guide here:
http://docs.aws.amazon.com/codebuild/latest/userguide/sample-docker.html
suggests I don't need to do anything to set up the AWS CLI utils anywhere.
Also, one other thing I noticed: I removed the $(aws ecr get-login --region us-west-2) step from the buildspec file, built it again, and it then said that the docker command was not found! Have I missed a step somewhere? (I don't think I have.)
So it turned out I was using the wrong environment: I was trying to specify my own Docker image, which was ultimately not set up with any of the AWS CLI utils!
Thanks to @Clare Liguori for tipping me off!
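For anyone debugging a similar setup, a cheap sanity check at the top of the buildspec makes a missing tool fail loudly instead of partway through the build. A sketch, harmless on the AWS-managed images where both tools are preinstalled:

```yaml
pre_build:
  commands:
    # command -v exits non-zero if the tool is absent, failing the phase early
    - command -v aws
    - command -v docker
```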