AWS - Python - Flask: Build fails

My build always fails when it tries to install jq. Could you please tell me what I am doing wrong? My buildspec.yml:
env:
  parameter-store:
    JWT_SECRET: JWT_SECRET
phases:
  install:
    runtime-versions:
      python: 3.8
    commands:
      - curl -sS -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.14.6/2019-08-22/bin/linux/amd64/aws-iam-authenticator
      - curl -sS -o kubectl https://amazon-eks.s3-us-west-2.amazonaws.com/1.14.6/2019-08-22/bin/linux/amd64/kubectl
      - chmod +x ./kubectl ./aws-iam-authenticator
      - export PATH=$PWD/:$PATH
      - apt-get update && apt-get -y install jq
      - apt-get -y install python3-pip python3-dev
      - pip3 install --upgrade awscli
      - pip3 install -r requirements.txt
      # - pip3 install -r requirements.txt
      - python3 -m pytest test_main.py
  pre_build:
    commands:
      - TAG="$REPOSITORY_NAME.$REPOSITORY_BRANCH.$ENVIRONMENT_NAME.$(date +%Y-%m-%d.%H.%M.%S).$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | head -c 8)"
      - sed -i 's#CONTAINER_IMAGE#'"$REPOSITORY_URI:$TAG"'#' simple_jwt_api.yml
      - $(aws ecr get-login --no-include-email)
      - export KUBECONFIG=$HOME/.kube/config
  build:
    commands:
      - docker build --tag $REPOSITORY_URI:$TAG .
  post_build:
    commands:
      - docker push $REPOSITORY_URI:$TAG
      - CREDENTIALS=$(aws sts assume-role --role-arn $EKS_KUBECTL_ROLE_ARN --role-session-name codebuild-kubectl --duration-seconds 900)
      - export AWS_ACCESS_KEY_ID="$(echo ${CREDENTIALS} | jq -r '.Credentials.AccessKeyId')"
      - export AWS_SECRET_ACCESS_KEY="$(echo ${CREDENTIALS} | jq -r '.Credentials.SecretAccessKey')"
      - export AWS_SESSION_TOKEN="$(echo ${CREDENTIALS} | jq -r '.Credentials.SessionToken')"
      - export AWS_EXPIRATION=$(echo ${CREDENTIALS} | jq -r '.Credentials.Expiration')
      - aws eks update-kubeconfig --name $EKS_CLUSTER_NAME
      - kubectl apply -f simple_jwt_api.yml
      - printf '[{"name":"simple_jwt_api","imageUri":"%s"}]' $REPOSITORY_URI:$TAG > build.json
      - pwd
      - ls
artifacts:
  files: build.json
Error message:
[Container] 2020/03/22 15:57:14 Waiting for agent ping
[Container] 2020/03/22 15:57:16 Waiting for DOWNLOAD_SOURCE
[Container] 2020/03/22 15:57:17 Phase is DOWNLOAD_SOURCE
[Container] 2020/03/22 15:57:17 CODEBUILD_SRC_DIR=/codebuild/output/src534423531/src
[Container] 2020/03/22 15:57:17 YAML location is /codebuild/output/src534423531/src/buildspec.yml
[Container] 2020/03/22 15:57:17 Processing environment variables
[Container] 2020/03/22 15:57:17 Decrypting parameter store environment variables
[Container] 2020/03/22 15:57:17 Selecting 'python' runtime version '3.8' based on manual selections...
[Container] 2020/03/22 15:57:17 Running command echo "Installing Python version 3.8 ..."
Installing Python version 3.8 ...
[Container] 2020/03/22 15:57:17 Running command pyenv global $PYTHON_38_VERSION
[Container] 2020/03/22 15:57:17 Moving to directory /codebuild/output/src534423531/src
[Container] 2020/03/22 15:57:17 Registering with agent
[Container] 2020/03/22 15:57:17 Phases found in YAML: 4
[Container] 2020/03/22 15:57:17 POST_BUILD: 11 commands
[Container] 2020/03/22 15:57:17 INSTALL: 9 commands
[Container] 2020/03/22 15:57:17 PRE_BUILD: 4 commands
[Container] 2020/03/22 15:57:17 BUILD: 1 commands
[Container] 2020/03/22 15:57:17 Phase complete: DOWNLOAD_SOURCE State: SUCCEEDED
[Container] 2020/03/22 15:57:17 Phase context status code: Message:
[Container] 2020/03/22 15:57:17 Entering phase INSTALL
[Container] 2020/03/22 15:57:17 Running command curl -sS -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.14.6/2019-08-22/bin/linux/amd64/aws-iam-authenticator
[Container] 2020/03/22 15:57:19 Running command curl -sS -o kubectl https://amazon-eks.s3-us-west-2.amazonaws.com/1.14.6/2019-08-22/bin/linux/amd64/kubectl
[Container] 2020/03/22 15:57:19 Running command chmod +x ./kubectl ./aws-iam-authenticator
[Container] 2020/03/22 15:57:19 Running command export PATH=$PWD/:$PATH
[Container] 2020/03/22 15:57:19 Running command apt-get update && apt-get -y install jq
/codebuild/output/tmp/script.sh: line 4: apt-get: command not found
[Container] 2020/03/22 15:57:19 Command did not exit successfully apt-get update && apt-get -y install jq exit status 127
[Container] 2020/03/22 15:57:19 Phase complete: INSTALL State: FAILED

The issue is clear from this line:
/codebuild/output/tmp/script.sh: line 4: apt-get: command not found
Use the Ubuntu standard:3.0/4.0 image, which provides both Python 3.8 and the apt-get command. On the Amazon Linux image, use the yum command instead.
https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-available.html
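If you would rather stay on the Amazon Linux image, the failing install lines can be rewritten with yum. This is a sketch, not a drop-in fix: it assumes Amazon Linux 2 package names (the apt package python3-dev is called python3-devel there):

```yaml
phases:
  install:
    commands:
      # yum replaces apt-get on Amazon Linux; -y keeps it non-interactive
      - yum -y update
      - yum -y install jq python3-pip python3-devel
      - pip3 install --upgrade awscli
```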

Related

Error in UPLOAD_ARTIFACTS phase: [pytest_reports: [report files not found in build]]

I'm building a Docker image in AWS CodeBuild and running pytest inside it.
My Dockerfile looks like this:
FROM python:latest
ENV PORT=80
COPY requirements.txt .
RUN pip install -r requirements.txt
WORKDIR /tests/
COPY . .
RUN apt-get -y update
RUN pip install --upgrade pip
RUN pip install selenium
RUN pip3 install webdriver_manager
RUN apt-get install -y zip unzip
# Install the Chrome browser from Google's repository
RUN curl -sS https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list
RUN apt-get -y update
RUN apt-get -y install google-chrome-stable
# Download the matching chromedriver
RUN wget -N https://chromedriver.storage.googleapis.com/`curl -sS chromedriver.storage.googleapis.com/LATEST_RELEASE`/chromedriver_linux64.zip -P ~/
RUN unzip ~/chromedriver_linux64.zip -d ~/
RUN rm ~/chromedriver_linux64.zip
RUN mv -f ~/chromedriver /usr/local/bin/chromedriver
RUN chmod 0777 /usr/local/bin/chromedriver
EXPOSE $PORT
CMD ["pytest", "-v", "-s", "--junitxml=pytest_report.xml"]
The buildspec.yaml looks like this.
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.8
  pre_build:
    commands:
      - IMAGE_TAG=build-$(echo $CODEBUILD_BUILD_ID | awk -F":" '{print $2}')
      - REPOSITORY_URI=3xxxxxxx.dkr.ecr.us-east-1.amazonaws.com/test-ui-python-dkr
  build:
    on-failure: ABORT
    commands:
      - export DOCKER_BUILDKIT=1
      - export BUILDKIT_PROGRESS=plain
      - export PROGRESS_NO_TRUNC=1
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin 359772415770.dkr.ecr.us-east-1.amazonaws.com
      - docker build --progress=plain -t 3xxxxxxx.dkr.ecr.us-east-1.amazonaws.com/test-ui-python-dkr .
      - docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$IMAGE_TAG
      - docker push $REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      - echo Build completed..
      - docker run $REPOSITORY_URI:$IMAGE_TAG
reports:
  pytest_reports:
    files:
      - ./tests/pytest_report.xml
    base-directory: ./tests
    file-format: JUNITXML
artifacts:
  files:
    - "**/*"
I can see that pytest collects the tests successfully and runs them:
collecting ...
collected 2 items
test_login.py::Test_login::test_01 Title==================> Google
PASSED
test_login.py::Test_login::test_02
by_locator====> ('xpath', "//input[@name='q']")
Entered Docker in search....
PASSED
----------------- generated xml file: /tests/pytest_report.xml -----------------
But the pytest_reports report group failed:
[Container] 2022/07/18 20:38:27 Phase complete: POST_BUILD State: SUCCEEDED
[Container] 2022/07/18 20:38:27 Phase context status code: Message:
[Container] 2022/07/18 20:38:27 Expanding base directory path: .
[Container] 2022/07/18 20:38:27 Assembling file list
[Container] 2022/07/18 20:38:27 Expanding .
[Container] 2022/07/18 20:38:27 Expanding file paths for base directory .
[Container] 2022/07/18 20:38:27 Assembling file list
[Container] 2022/07/18 20:38:27 Expanding **/*
[Container] 2022/07/18 20:38:27 Found 7 file(s)
[Container] 2022/07/18 20:38:27 Preparing to copy TEST report pytest_reports
[Container] 2022/07/18 20:38:27 Expanding base directory path: ./tests
[Container] 2022/07/18 20:38:27 Assembling file list
[Container] 2022/07/18 20:38:27 Expanding ./tests
[Container] 2022/07/18 20:38:27 Skipping invalid file path ./tests
[Container] 2022/07/18 20:38:27 No matching base directory path found for ./tests, skipping
[Container] 2022/07/18 20:38:27 Phase complete: UPLOAD_ARTIFACTS State: SUCCEEDED
[Container] 2022/07/18 20:38:27 Phase context status code: Message:
Error in UPLOAD_ARTIFACTS phase: [pytest_reports: [report files not found in build]]
I am sure this has something to do with an invalid path reference somewhere, but I am unable to figure it out.
Any help is much appreciated.
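One plausible cause (an assumption based on the log, not a confirmed diagnosis): pytest runs inside the container started by `docker run`, so /tests/pytest_report.xml only ever exists in the container's filesystem, while the reports section looks for the file on the CodeBuild host, where it never lands. A sketch of a bind-mount workaround, using the Dockerfile's /tests working directory:

```yaml
post_build:
  commands:
    - echo Build completed..
    # Bind-mount the source dir over /tests (where the image copied it),
    # so the JUnit XML the container writes lands on the host as well
    - docker run -v "$CODEBUILD_SRC_DIR:/tests" $REPOSITORY_URI:$IMAGE_TAG
reports:
  pytest_reports:
    files:
      - pytest_report.xml   # now relative to the source root
    file-format: JUNITXML
```

Alternatively, `docker run --name tests …` followed by `docker cp tests:/tests/pytest_report.xml .` copies the file out of the finished container without mounting anything.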

docker-compose push to ECR fails in codebuild

I've been trying to figure this out for a couple of days, and the issue is compounded by the fact that I'm not getting a useful error message.
I'm using the following buildspec.yml file in codebuild to build docker containers and then send to AWS ECR.
version: 0.2
env:
  parameter-store:
    AWS_DEFAULT_REGION: "/docker_test/region"
    IMAGE_REPO_NAME: "/docker_test/repo_name"
    IMAGE_TAG: "/docker_test/img_tag"
    AWS_ACCOUNT_ID: "account_id"
phases:
  install:
    runtime-versions:
      docker: 18
  pre_build:
    commands:
      - echo Logging in to Amazon ECR and DockerHub...
      - docker login -u AWS -p $(aws ecr get-login-password --region $AWS_DEFAULT_REGION) $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker-compose -f docker-compose.yml -f docker-compose.prod.yml build
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image...
      - docker-compose -f docker-compose.yml -f docker-compose.prod.yml push
artifacts:
  files:
    - 'Dockerrun.aws.json'
I've tried docker 19 and slightly different versions of the docker login line, and made sure my IAM roles were set. I get "Login Succeeded", so I assume the login line is good.
[Container] 2021/12/22 16:19:20 Running command docker login -u AWS -p $(aws ecr get-login-password --region $AWS_DEFAULT_REGION) $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
[Container] 2021/12/22 16:19:26 Phase complete: PRE_BUILD State: SUCCEEDED
[Container] 2021/12/22 16:19:26 Phase context status code: Message:
[Container] 2021/12/22 16:19:26 Entering phase BUILD
The post_build phase fails however with the following:
Successfully built d6878cbb68ba
Successfully tagged ***.dkr.ecr.***.amazonaws.com/***:latest
[Container] 2021/12/22 16:21:58 Phase complete: BUILD State: SUCCEEDED
[Container] 2021/12/22 16:21:58 Phase context status code: Message:
[Container] 2021/12/22 16:21:58 Entering phase POST_BUILD
[Container] 2021/12/22 16:21:58 Running command echo Build completed on `date`
Build completed on Wed Dec 22 16:21:58 UTC 2021
[Container] 2021/12/22 16:21:58 Running command echo Pushing the Docker image...
Pushing the Docker image...
[Container] 2021/12/22 16:21:58 Running command docker-compose -f docker-compose.yml -f docker-compose.prod.yml push
Pushing myapp (***.dkr.ecr.***.amazonaws.com/***:latest)...
The push refers to repository [***.dkr.ecr.***.amazonaws.com/***]
EOF
[Container] 2021/12/22 16:22:49 Command did not exit successfully docker-compose -f docker-compose.yml -f docker-compose.prod.yml push exit status 1
[Container] 2021/12/22 16:22:49 Phase complete: POST_BUILD State: FAILED
[Container] 2021/12/22 16:22:49 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: docker-compose -f docker-compose.yml -f docker-compose.prod.yml push. Reason: exit status 1
[Container] 2021/12/22 16:22:49 Phase complete: UPLOAD_ARTIFACTS State: SUCCEEDED
[Container] 2021/12/22 16:22:49 Phase context status code: Message:
I'm just not sure how to get more information on this error - that would be ideal.
EDIT:
I'm adding the docker-compose.prod.yml file for additional context:
version: "3.2"
services:
  myapp:
    image: ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/${IMAGE_TAG}
    command: bash -c "
      python manage.py migrate
      && gunicorn --bind :8000 --workers 3 --threads 2 --timeout 240 project.wsgi:application"
    restart: always
    ports:
      - "80:80"
  celery_worker:
    image: ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/${IMAGE_TAG}
    command: celery -A project worker --loglevel=${CELERY_LOG_LEVEL:-WARNING}
    restart: always
OK, so I figured it out. Your question about making sure the repo exists pointed me in the right direction, @mreferre. I was confused about the use of IMAGE_TAG and IMAGE_REPO_NAME in the code samples I referenced when trying to build this. They were essentially supposed to be the same thing, so the push was failing because I was trying to push to an ECR repo named "proj-name", which didn't exist. I just needed to change it to "repo-name", so the image in docker-compose.prod.yml becomes:
image: ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/${IMAGE_REPO_NAME}
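To catch this class of failure earlier, the buildspec could verify the target repository exists before building anything. A sketch, assuming the repository name is available in $IMAGE_REPO_NAME as in the buildspec above:

```yaml
pre_build:
  commands:
    # describe-repositories exits non-zero (failing the phase with a clear
    # RepositoryNotFoundException) if the ECR repo does not exist
    - aws ecr describe-repositories --repository-names "$IMAGE_REPO_NAME" --region "$AWS_DEFAULT_REGION"
```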

AWS Codebuild: Command did not exit successfully

I am trying to use CodeBuild to install Python, but during the build I am getting the following error:
[Container] 2021/11/05 06:55:13 Successfully updated ssm agent configuration
[Container] 2021/11/05 06:55:13 Registering with agent
[Container] 2021/11/05 06:55:13 Phases found in YAML: 1
[Container] 2021/11/05 06:55:13 INSTALL: 3 commands
[Container] 2021/11/05 06:55:13 Phase complete: DOWNLOAD_SOURCE State: SUCCEEDED
[Container] 2021/11/05 06:55:13 Phase context status code: Message:
[Container] 2021/11/05 06:55:13 Entering phase INSTALL
[Container] 2021/11/05 06:55:13 Running command apt-get install -y python38
/codebuild/output/tmp/script.sh: line 4: apt-get: command not found
[Container] 2021/11/05 06:55:13 Command did not exit successfully apt-get install -y python38 exit status 127
[Container] 2021/11/05 06:55:13 Phase complete: INSTALL State: FAILED
[Container] 2021/11/05 06:55:13 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: apt-get install -y python38. Reason: exit status 127
BuildSpec contains the following:
version: 0.2
phases:
  install:
    commands:
      - apt-get install -y python38
      - python3 -m venv venv
      - source venv/bin/activate
I tried yum too.
Update 1:
I made the changes and ran yum install python3, and now it gives the following:
Dependencies Resolved
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
python3 aarch64 3.7.10-1.amzn2.0.1 amzn2-core 72 k
Installing for dependencies:
libtirpc aarch64 0.2.4-0.16.amzn2 amzn2-core 91 k
python3-libs aarch64 3.7.10-1.amzn2.0.1 amzn2-core 9.1 M
python3-pip noarch 20.2.2-1.amzn2.0.3 amzn2-core 2.0 M
python3-setuptools noarch 49.1.3-1.amzn2.0.2 amzn2-core 1.1 M
Transaction Summary
================================================================================
Install 1 Package (+4 Dependent packages)
Total download size: 12 M
Installed size: 57 M
Is this ok [y/d/N]: Exiting on user command
Your transaction was saved, rerun it with:
yum load-transaction /tmp/yum_save_tx.2021-11-05.07-26.BV6vZH.yumtx
[Container] 2021/11/05 07:26:53 Command did not exit successfully yum install python3 exit status 1
[Container] 2021/11/05 07:26:53 Phase complete: INSTALL State: FAILED
[Container] 2021/11/05 07:26:53 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: yum install python3. Reason: exit status 1
As you can see, yum asks for confirmation before installing python3 (Is this ok [y/d/N]:). A CodeBuild run cannot answer the prompt, so yum exits and the command fails.
Accept the installation of python3 and its dependencies non-interactively:
yum install python3 -y
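In buildspec terms, the install phase might then look like this (a sketch; it assumes the Amazon Linux image where the package is python3, not python38):

```yaml
phases:
  install:
    commands:
      # -y answers yum's confirmation prompt; builds have no interactive tty
      - yum install -y python3
      - python3 -m venv venv
      - . venv/bin/activate   # '.' also works in plain sh, where 'source' may not
```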

AWS Codebuild - Error while executing command: python -m pip install --upgrade --force pip. Reason: exit status 1

I'm trying to run a build after creating a stack in AWS CloudFormation, but unfortunately the build failed with this error message:
Phase context status code: COMMAND_EXECUTION_ERROR Message: Error
while executing command: python -m pip install --upgrade --force pip.
Reason: exit status 1
here is the log for the build and why it failed:
[Container] 2021/10/15 13:01:39 Waiting for agent ping
[Container] 2021/10/15 13:01:40 Waiting for DOWNLOAD_SOURCE
[Container] 2021/10/15 13:01:41 Phase is DOWNLOAD_SOURCE
[Container] 2021/10/15 13:01:41 CODEBUILD_SRC_DIR=/codebuild/output/src061758247/src
[Container] 2021/10/15 13:01:41 YAML location is /codebuild/output/src061758247/src/buildspec.yml
[Container] 2021/10/15 13:01:41 Processing environment variables
[Container] 2021/10/15 13:01:41 Decrypting parameter store environment variables
[Container] 2021/10/15 13:01:41 [WARN] Skipping install of runtimes. Runtime version selection is not supported by this build image.
[Container] 2021/10/15 13:01:43 Moving to directory /codebuild/output/src061758247/src
[Container] 2021/10/15 13:01:43 Registering with agent
[Container] 2021/10/15 13:01:43 Phases found in YAML: 4
[Container] 2021/10/15 13:01:43 POST_BUILD: 10 commands
[Container] 2021/10/15 13:01:43 INSTALL: 10 commands
[Container] 2021/10/15 13:01:43 PRE_BUILD: 6 commands
[Container] 2021/10/15 13:01:43 BUILD: 1 commands
[Container] 2021/10/15 13:01:43 Phase complete: DOWNLOAD_SOURCE State: SUCCEEDED
[Container] 2021/10/15 13:01:43 Phase context status code: Message:
[Container] 2021/10/15 13:01:43 Entering phase INSTALL
[Container] 2021/10/15 13:01:43 Running command echo 'about to call dockerd'
about to call dockerd
[Container] 2021/10/15 13:01:43 Running command nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://127.0.0.1:2375 --storage-driver=overlay2&
[Container] 2021/10/15 13:01:43 Running command timeout 15 sh -c "until docker info; do echo .; sleep 1; done"
Error starting daemon: pid file found, ensure docker is not running or delete /var/run/docker.pid
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 17.09.0-ce
Storage Driver: overlay
Backing Filesystem: extfs
Supports d_type: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 06b9cb35161009dcb7123345749fef02f7cea8e0
runc version: 3f2f8b84a77f73d38244dd690525642a72156c64
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 4.14.243-185.433.amzn2.x86_64
Operating System: Ubuntu 14.04.5 LTS (containerized)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.645GiB
Name: 9d1ea8d456c4
ID: GA3S:TOF2:A43S:WTEP:JIFT:RNGG:X3XM:5N6S:7JMU:5IE3:HV2Z:AFGS
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
[Container] 2021/10/15 13:01:43 Running command curl -sS -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/bin/linux/amd64/aws-iam-authenticator
[Container] 2021/10/15 13:01:44 Running command curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 44.7M 100 44.7M 0 0 60.0M 0 --:--:-- --:--:-- --:--:-- 60.1M
[Container] 2021/10/15 13:01:45 Running command chmod +x ./kubectl ./aws-iam-authenticator
[Container] 2021/10/15 13:01:45 Running command echo `kubectl version`
/codebuild/output/tmp/script.sh: 1: /codebuild/output/tmp/script.sh: kubectl: not found
[Container] 2021/10/15 13:01:45 Running command export PATH=$PWD/:$PATH
[Container] 2021/10/15 13:01:45 Running command python -m pip install --upgrade --force pip
Collecting pip
/usr/local/lib/python2.7/dist-packages/pip/_vendor/urllib3/util/ssl_.py:339: SNIMissingWarning: An HTTPS request has been made, but the SNI (Subject Name Indication) extension to TLS is not available on this platform. This may cause the server to present an incorrect TLS certificate, which can cause validation failures. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
SNIMissingWarning
/usr/local/lib/python2.7/dist-packages/pip/_vendor/urllib3/util/ssl_.py:137: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
InsecurePlatformWarning
Could not find a version that satisfies the requirement pip (from versions: )
No matching distribution found for pip
/usr/local/lib/python2.7/dist-packages/pip/_vendor/urllib3/util/ssl_.py:137: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
InsecurePlatformWarning
[Container] 2021/10/15 13:01:45 Command did not exit successfully python -m pip install --upgrade --force pip exit status 1
[Container] 2021/10/15 13:01:45 Phase complete: INSTALL State: FAILED
[Container] 2021/10/15 13:01:45 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: python -m pip install --upgrade --force pip. Reason: exit status 1
My buildspec.yaml file looks like this:
---
version: 0.2
phases:
  install:
    commands:
      - curl -sS -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.14.6/2019-08-22/bin/linux/amd64/aws-iam-authenticator
      - curl -sS -o kubectl https://amazon-eks.s3-us-west-2.amazonaws.com/1.14.6/2019-08-22/bin/linux/amd64/kubectl
      - chmod +x ./kubectl ./aws-iam-authenticator
      - export PATH=$PWD/:$PATH
      - apt-get update && apt-get -y install jq python3-pip python3-dev && pip3 install --upgrade awscli
  pre_build:
    commands:
      - TAG="$REPOSITORY_NAME.$REPOSITORY_BRANCH.$ENVIRONMENT_NAME.$(date +%Y-%m-%d.%H.%M.%S).$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | head -c 8)"
      - sed -i 's#CONTAINER_IMAGE#'"$REPOSITORY_URI:$TAG"'#' simple_jwt_api.yml
      - $(aws ecr get-login --no-include-email)
      - export KUBECONFIG=$HOME/.kube/config
      - pip3 install -r requirements.txt
      - pytest
  build:
    commands:
      - docker build --tag $REPOSITORY_URI:$TAG .
  post_build:
    commands:
      - docker push $REPOSITORY_URI:$TAG
      - CREDENTIALS=$(aws sts assume-role --role-arn $EKS_KUBECTL_ROLE_ARN --role-session-name codebuild-kubectl --duration-seconds 900)
      - export AWS_ACCESS_KEY_ID="$(echo ${CREDENTIALS} | jq -r '.Credentials.AccessKeyId')"
      - export AWS_SECRET_ACCESS_KEY="$(echo ${CREDENTIALS} | jq -r '.Credentials.SecretAccessKey')"
      - export AWS_SESSION_TOKEN="$(echo ${CREDENTIALS} | jq -r '.Credentials.SessionToken')"
      - export AWS_EXPIRATION=$(echo ${CREDENTIALS} | jq -r '.Credentials.Expiration')
      - aws eks update-kubeconfig --name $EKS_CLUSTER_NAME
      - kubectl apply -f simple_jwt_api.yml
      - printf '[{"name":"simple_jwt_api","imageUri":"%s"}]' $REPOSITORY_URI:$TAG > build.json
      - pwd
      - ls
artifacts:
  files: build.json
env:
  parameter-store:
    JWT_SECRET: JWT_SECRET
Can anyone help me with this issue or point me to a similar question?
Thanks
You need to provide more information about your CodeBuild resources, and the log output doesn't exactly match the commands in your buildspec.yaml.
However, the error you have occurs because you are running Python 2.7 to upgrade pip; its TLS stack can no longer talk to PyPI (note the SNIMissingWarning in the log). You should use python3 instead.
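A minimal sketch of the fix, assuming python3 is present on the build image:

```shell
# Invoke pip through the Python 3 interpreter explicitly, instead of
# the image's default 'python', which resolves to 2.7 here
python3 -m pip install --upgrade pip
# sanity check: confirm which major version we are driving
python3 -c 'import sys; print(sys.version_info.major)'
```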

AWS ECS CodePipeline build error REPOSITORY_URI

We want to try CodePipeline with an image that we already have in ECR, so we followed the steps in the documentation.
Our buildspec.yml looks like this:
phases:
  install:
    runtime-versions:
      nodejs: 10
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws --version
      - $(aws ecr get-login --no-include-email --region us-east-1)
      - REPOSITORY_URI=OUR_URL_FROM_ECR
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=${COMMIT_HASH:=latest}
      - echo $REPOSITORY_URI
      - echo $COMMIT_HASH
      - echo $IMAGE_TAG
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t $REPOSITORY_URI:latest .
      - docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push $REPOSITORY_URI:latest
      - docker push $REPOSITORY_URI:$IMAGE_TAG
      - echo Writing image definitions file...
      - printf '[{"name":"Petr","imageUri":"%s"}]' $REPOSITORY_URI:$IMAGE_TAG > imagedefinitions.json
artifacts:
  files: imagedefinitions.json
We created a new pipeline flow, but when we push some changes we get this log:
[Container] 2019/11/07 23:30:49 Waiting for agent ping
[Container] 2019/11/07 23:30:51 Waiting for DOWNLOAD_SOURCE
[Container] 2019/11/07 23:30:52 Phase is DOWNLOAD_SOURCE
[Container] 2019/11/07 23:30:52 CODEBUILD_SRC_DIR=/codebuild/output/src386464501/src
[Container] 2019/11/07 23:30:52 YAML location is /codebuild/output/src386464501/src/buildspec.yml
[Container] 2019/11/07 23:30:52 No commands found for phase name: INSTALL
[Container] 2019/11/07 23:30:52 Processing environment variables
[Container] 2019/11/07 23:30:52 Moving to directory /codebuild/output/src386464501/src
[Container] 2019/11/07 23:30:52 Registering with agent
[Container] 2019/11/07 23:30:52 Phases found in YAML: 4
[Container] 2019/11/07 23:30:52 POST_BUILD: 6 commands
[Container] 2019/11/07 23:30:52 INSTALL: 0 commands
[Container] 2019/11/07 23:30:52 PRE_BUILD: 9 commands
[Container] 2019/11/07 23:30:52 BUILD: 4 commands
[Container] 2019/11/07 23:30:52 Phase complete: DOWNLOAD_SOURCE State: SUCCEEDED
[Container] 2019/11/07 23:30:52 Phase context status code: Message:
[Container] 2019/11/07 23:30:52 Entering phase INSTALL
[Container] 2019/11/07 23:30:52 Running command echo "Installing Node.js version 10 ..."
Installing Node.js version 10 ...
[Container] 2019/11/07 23:30:52 Running command n 10.16.3
installed : v10.16.3 (with npm 6.9.0)
[Container] 2019/11/07 23:31:02 Phase complete: INSTALL State: SUCCEEDED
[Container] 2019/11/07 23:31:02 Phase context status code: Message:
[Container] 2019/11/07 23:31:02 Entering phase PRE_BUILD
[Container] 2019/11/07 23:31:02 Running command echo Logging in to Amazon ECR...
Logging in to Amazon ECR...
[Container] 2019/11/07 23:31:02 Running command aws --version
aws-cli/1.16.242 Python/3.6.8 Linux/4.14.143-91.122.amzn1.x86_64 exec-env/AWS_ECS_EC2 botocore/1.12.232
[Container] 2019/11/07 23:31:07 Running command $(aws ecr get-login --no-include-email --region us-east-1)
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
[Container] 2019/11/07 23:31:10 Running command REPOSITORY_URI=***********
[Container] 2019/11/07 23:31:10 Running command COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
[Container] 2019/11/07 23:31:10 Running command IMAGE_TAG=${COMMIT_HASH:=latest}
[Container] 2019/11/07 23:31:10 Running command echo $REPOSITORY_URI
***********
[Container] 2019/11/07 23:31:10 Running command echo $COMMIT_HASH
88f8cfc
[Container] 2019/11/07 23:31:10 Running command echo $IMAGE_TAG
88f8cfc
[Container] 2019/11/07 23:31:10 Phase complete: PRE_BUILD State: SUCCEEDED
[Container] 2019/11/07 23:31:10 Phase context status code: Message:
[Container] 2019/11/07 23:31:10 Entering phase BUILD
[Container] 2019/11/07 23:31:10 Running command echo Build started on `date`
Build started on Thu Nov 7 23:31:10 UTC 2019
[Container] 2019/11/07 23:31:10 Running command echo Building the Docker image...
Building the Docker image...
[Container] 2019/11/07 23:31:10 Running command docker build -t $REPOSITORY_URI:latest .
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
[Container] 2019/11/07 23:31:10 Command did not exit successfully docker build -t $REPOSITORY_URI:latest . exit status 1
[Container] 2019/11/07 23:31:10 Phase complete: BUILD State: FAILED
[Container] 2019/11/07 23:31:10 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: docker build -t $REPOSITORY_URI:latest .. Reason: exit status 1
[Container] 2019/11/07 23:31:10 Entering phase POST_BUILD
[Container] 2019/11/07 23:31:10 Running command echo Build completed on `date`
Build completed on Thu Nov 7 23:31:10 UTC 2019
[Container] 2019/11/07 23:31:10 Running command echo Pushing the Docker images...
Pushing the Docker images...
[Container] 2019/11/07 23:31:10 Running command docker push $REPOSITORY_URI:latest
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
[Container] 2019/11/07 23:31:10 Command did not exit successfully docker push $REPOSITORY_URI:latest exit status 1
[Container] 2019/11/07 23:31:10 Phase complete: POST_BUILD State: FAILED
[Container] 2019/11/07 23:31:10 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: docker push $REPOSITORY_URI:latest. Reason: exit status 1
[Container] 2019/11/07 23:31:10 Expanding base directory path: .
[Container] 2019/11/07 23:31:10 Assembling file list
[Container] 2019/11/07 23:31:10 Expanding .
[Container] 2019/11/07 23:31:10 Expanding file paths for base directory .
[Container] 2019/11/07 23:31:10 Assembling file list
[Container] 2019/11/07 23:31:10 Expanding imagedefinitions.json
[Container] 2019/11/07 23:31:10 Skipping invalid file path imagedefinitions.json
[Container] 2019/11/07 23:31:10 Phase complete: UPLOAD_ARTIFACTS State: FAILED
[Container] 2019/11/07 23:31:10 Phase context status code: CLIENT_ERROR Message: no matching artifact paths found
We want to know if we are missing something; we followed the steps from here:
https://aws.amazon.com/es/blogs/devops/build-a-continuous-delivery-pipeline-for-your-container-images-with-amazon-ecr-as-source/
Any advice?
I ran into a similar error. The fix: this build project needs to build a Docker image, so set Privileged Mode to true.
Privileged mode grants the build project's Docker container elevated access, which it needs to run the Docker daemon.
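If you manage the project from the AWS CLI rather than the console, privileged mode can be set like this (a sketch; the project name and the rest of the environment block are placeholders you must match to your own project, since update-project replaces the whole environment structure):

```shell
aws codebuild update-project \
  --name my-build-project \
  --environment "type=LINUX_CONTAINER,image=aws/codebuild/standard:4.0,computeType=BUILD_GENERAL1_SMALL,privilegedMode=true"
```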
There are two possibilities:
One is that you didn't add ECR and ECS permissions to the role you created for your EC2 instance (or for Elastic Beanstalk, if you are using it). Verify that first.
Otherwise, look into the second possibility: start the Docker daemon yourself with the following commands in your phases:
phases:
  install:
    commands:
      - nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://127.0.0.1:2375 --storage-driver=overlay2&
      - timeout 15 sh -c "until docker info; do echo .; sleep 1; done"
For more details, see these links:
https://docs.aws.amazon.com/codebuild/latest/userguide/sample-docker-custom-image.html#sample-docker-custom-image-files
https://docs.aws.amazon.com/codebuild/latest/userguide/troubleshooting.html#troubleshooting-cannot-connect-to-docker-daemon
For the error
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I found this helpful:
https://docs.aws.amazon.com/codebuild/latest/userguide/sample-docker.html
Specifically point 5.d.
Follow the steps in Run CodeBuild directly to create a build project, run the build, and view build information.
If you use the console to create your project:
a. For Operating system, choose Ubuntu.
b. For Runtime, choose Standard.
c. For Image, choose aws/codebuild/standard:4.0.
d. Because you use this build project to build a Docker image, select Privileged.
I had the same problem as you, and this is how I fixed it: go to CodeBuild, then to its IAM role, and attach the AmazonEC2ContainerRegistryFullAccess policy. Then click 'Edit' on that build project, select 'Environment', and tick 'Allow AWS CodeBuild to modify this service role so it can be used with this build project'. Now try again.
Cheers