Docker-compose issue with AWS Fargate - amazon-web-services
I'm having a long-running problem building a new webapp. A while back I asked about some docker-compose problems and about reducing the size of the images:
Decrease docker build size, share conda environment between two images
In short, I have got to a stage (after many iterations of docker-compose, Dockerfile and buildspec.yml) where I can spin the images up during an AWS CodeBuild run. However, when the images are pushed to AWS Fargate, the images in the two containers appear to be the same.
File directory structure:
-worker_app
---service
-----worker.py
-----server.py
-----other_files.py
---other_folders
---Dockerfile
---environment.yml
-buildspec.yml
-docker-compose.yml
Buildspec:
version: 0.2
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws --region $AWS_DEFAULT_REGION ecr get-login-password | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.us-east-2.amazonaws.com
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - pwd
      - ls -la
      - echo checking config
      - docker-compose -f docker-compose.yml config
      - echo building images
      - docker-compose -f docker-compose.yml up --build -d
      # Tag the built docker image using the appropriate Amazon ECR endpoint and relevant
      # repository for our service container. This ensures that when the docker push
      # command is executed later, it will be pushed to the appropriate repository.
      - docker tag co2gasp/worker:latest $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/co2gasp/worker:latest
      - docker tag co2gasp/service:latest $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/co2gasp/service:latest
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image..
      # Push the image to ECR.
      - docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/co2gasp/worker:latest
      - docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/co2gasp/service:latest
      - echo Completed pushing Docker image. Deploying Docker image to AWS Fargate on `date`
      # Create an artifacts file that contains the name and location of the image
      # pushed to ECR. This will be used by AWS CodePipeline to automate
      # deployment of this specific container to Amazon ECS.
      - printf '[{"name":"CO2GASP-Service","imageUri":"%s"},{"name":"CO2GASP-Worker","imageUri":"%s"}]' $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/co2gasp/service:latest $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/co2gasp/worker:latest > imagedefinitions.json
artifacts:
  # Indicate that the imagedefinitions.json file created on the previous
  # line is to be referenceable as an artifact of the build execution job.
  files: imagedefinitions.json
Docker-compose:
version: '3.8'
services:
  web:
    # will build ./docker/web/Dockerfile
    image: co2gasp/service:latest
    build: ./worker_app
    command: ["python", "server.py"]
  worker:
    # will build ./docker/db/Dockerfile
    image: co2gasp/worker:latest
    build: ./worker_app
    command: ["python", "worker.py"]
Dockerfile:
FROM continuumio/miniconda3
RUN apt-get update -y
RUN apt-get install zip -y
RUN apt-get install awscli -y
#RUN aws route53 list-hosted-zones
WORKDIR /app
## Create the environment:
COPY environment.yml .
#Make RUN commands use the new environment:
RUN conda env create -f environment.yml
COPY ./PHREEQC /PHREEQC
COPY ./service /service
COPY ./temp_files /temp_files
COPY ./INPUT_DATA /INPUT_DATA
COPY ./PHREEQC/phreeqc_files/database/pitzer.dat /bin/pitzer.dat
COPY ./PHREEQC/phreeqc_files/bin/phreeqc /bin/phreeqc
#ENV PATH=${PATH}:/app/bin
ENV PATH=${PATH}:/bin/phreeqc
ENV PATH=${PATH}:/bin/pitzer.dat
ENV PATH=${PATH}:/bin
RUN echo 'Adding new'
#RUN phreeqc
RUN echo "conda activate myenv" >> ~/.bashrc
#RUN echo "export PATH=/PHREEQC/phreeqc_files/bin/phreeqc:${PATH}" >> ~/.bashrc
#RUN echo "export PATH=/PHREEQC/phreeqc_files/bin/phreeqc:$PATH" >> ~/.bashrc
#RUN echo "$(cat ~/.bashrc)"
SHELL ["conda", "run", "-n", "myenv", "/bin/bash", "-c"]
# Demonstrate the environment is activated:
RUN echo "Make sure flask is installed:"
RUN python -c "import flask"
RUN echo Copy service directory
WORKDIR /service
ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "myenv"]
CMD ["python","server.py"]
CodeBuild output:
[Container] 2023/01/22 20:22:23 Waiting for agent ping
[Container] 2023/01/22 20:22:24 Waiting for DOWNLOAD_SOURCE
[Container] 2023/01/22 20:22:37 Phase is DOWNLOAD_SOURCE
[Container] 2023/01/22 20:22:37 CODEBUILD_SRC_DIR=/codebuild/output/src693461010/src
[Container] 2023/01/22 20:22:37 YAML location is /codebuild/output/src693461010/src/buildspec.yml
[Container] 2023/01/22 20:22:37 Setting HTTP client timeout to higher timeout for S3 source
[Container] 2023/01/22 20:22:37 Processing environment variables
[Container] 2023/01/22 20:22:37 No runtime version selected in buildspec.
[Container] 2023/01/22 20:22:39 Moving to directory /codebuild/output/src693461010/src
[Container] 2023/01/22 20:22:39 Configuring ssm agent with target id: codebuild:7b0e2985-8075-4ac9-ad81-61c7e146093e
[Container] 2023/01/22 20:22:39 Successfully updated ssm agent configuration
[Container] 2023/01/22 20:22:39 Registering with agent
[Container] 2023/01/22 20:22:39 Phases found in YAML: 3
[Container] 2023/01/22 20:22:39 BUILD: 10 commands
[Container] 2023/01/22 20:22:39 POST_BUILD: 6 commands
[Container] 2023/01/22 20:22:39 PRE_BUILD: 2 commands
[Container] 2023/01/22 20:22:39 Phase complete: DOWNLOAD_SOURCE State: SUCCEEDED
[Container] 2023/01/22 20:22:39 Phase context status code: Message:
[Container] 2023/01/22 20:22:40 Entering phase INSTALL
[Container] 2023/01/22 20:22:40 Phase complete: INSTALL State: SUCCEEDED
[Container] 2023/01/22 20:22:40 Phase context status code: Message:
[Container] 2023/01/22 20:22:40 Entering phase PRE_BUILD
[Container] 2023/01/22 20:22:40 Running command echo Logging in to Amazon ECR...
Logging in to Amazon ECR...
[Container] 2023/01/22 20:22:40 Running command aws --region $AWS_DEFAULT_REGION ecr get-login-password | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.us-east-2.amazonaws.com
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
[Container] 2023/01/22 20:22:49 Phase complete: PRE_BUILD State: SUCCEEDED
[Container] 2023/01/22 20:22:49 Phase context status code: Message:
[Container] 2023/01/22 20:22:49 Entering phase BUILD
[Container] 2023/01/22 20:22:49 Running command echo Build started on `date`
Build started on Sun Jan 22 20:22:49 UTC 2023
[Container] 2023/01/22 20:22:49 Running command echo Building the Docker image...
Building the Docker image...
[Container] 2023/01/22 20:22:49 Running command pwd
/codebuild/output/src693461010/src
[Container] 2023/01/22 20:22:49 Running command ls -la
total 12
drwxr-xr-x 4 root root 139 Jan 22 20:22 .
drwxr-xr-x 3 root root 17 Jan 22 20:22 ..
-rw-r--r-- 1 root root 2338 Jan 22 20:22 buildspec.yml
-rw-r--r-- 1 root root 2888 Jan 22 20:22 buildspec_old.yml
-rw-r--r-- 1 root root 312 Jan 22 20:22 docker-compose.yml
drwxr-xr-x 6 root root 113 Jan 22 20:22 server_app
-rw-r--r-- 1 root root 0 Jan 22 20:22 website_build.txt
drwxr-xr-x 6 root root 135 Jan 22 20:22 worker_app
[Container] 2023/01/22 20:22:49 Running command echo checking config
checking config
[Container] 2023/01/22 20:22:49 Running command docker-compose -f docker-compose.yml config
services:
  web:
    build:
      context: /codebuild/output/src693461010/src/worker_app
    command:
    - python
    - server.py
    image: co2gasp/service:latest
  worker:
    build:
      context: /codebuild/output/src693461010/src/worker_app
    command:
    - python
    - worker.py
    image: co2gasp/worker:latest
version: '3.8'
[Container] 2023/01/22 20:22:50 Running command echo building images
building images
[Container] 2023/01/22 20:22:50 Running command docker-compose -f docker-compose.yml up --build -d
Creating network "src_default" with the default driver
Building web
Step 1/26 : FROM continuumio/miniconda3
latest: Pulling from continuumio/miniconda3
Digest: sha256:10b38c9a8a51692838ce4517e8c74515499b68d58c8a2000d8a9df7f0f08fc5e
Status: Downloaded newer image for continuumio/miniconda3:latest
---> 45461d36cbf1
Step 2/26 : RUN apt-get update -y
---> Running in dd74833eb6a6
Get:1 http://deb.debian.org/debian bullseye InRelease [116 kB]
Get:2 http://deb.debian.org/debian-security bullseye-security InRelease [48.4 kB]
Get:3 http://deb.debian.org/debian bullseye-updates InRelease [44.1 kB]
Get:4 http://deb.debian.org/debian bullseye/main amd64 Packages [8183 kB]
Get:5 http://deb.debian.org/debian-security bullseye-security/main amd64 Packages [214 kB]
Get:6 http://deb.debian.org/debian bullseye-updates/main amd64 Packages [14.6 kB]
Fetched 8620 kB in 1s (6800 kB/s)
Reading package lists...
Removing intermediate container dd74833eb6a6
---> d025f5361af7
Step 3/26 : RUN apt-get install zip -y
---> Running in 93e55c431c12
Reading package lists...
Building dependency tree...
Reading state information...
The following additional packages will be installed:
unzip
The following NEW packages will be installed:
unzip zip
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 404 kB of archives.
After this operation, 1031 kB of additional disk space will be used.
Get:1 http://deb.debian.org/debian bullseye/main amd64 unzip amd64 6.0-26+deb11u1 [172 kB]
Get:2 http://deb.debian.org/debian bullseye/main amd64 zip amd64 3.0-12 [232 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 404 kB in 0s (2258 kB/s)
Selecting previously unselected package unzip.
(Reading database ...
(Reading database ... 5%
(Reading database ... 10%
(Reading database ... 15%
(Reading database ... 20%
(Reading database ... 25%
(Reading database ... 30%
(Reading database ... 35%
(Reading database ... 40%
(Reading database ... 45%
(Reading database ... 50%
(Reading database ... 55%
(Reading database ... 60%
(Reading database ... 65%
(Reading database ... 70%
(Reading database ... 75%
(Reading database ... 80%
(Reading database ... 85%
(Reading database ... 90%
(Reading database ... 95%
(Reading database ... 100%
(Reading database ... 12440 files and directories currently installed.)
Preparing to unpack .../unzip_6.0-26+deb11u1_amd64.deb ...
Unpacking unzip (6.0-26+deb11u1) ...
Selecting previously unselected package zip.
Preparing to unpack .../archives/zip_3.0-12_amd64.deb ...
Unpacking zip (3.0-12) ...
Setting up unzip (6.0-26+deb11u1) ...
Setting up zip (3.0-12) ...
Removing intermediate container 93e55c431c12
---> e3c960679ed3
Step 4/26 : RUN apt-get install awscli -y
---> Running in 5664acef1c09
Reading package lists...
Building dependency tree...
Reading state information...
(removed for brevity)
Removing intermediate container 5c4e38ee01c5
---> 10ae3f85a5dc
Step 8/26 : COPY ./PHREEQC /PHREEQC
---> e90d9f82e4be
Step 9/26 : COPY ./service /service
---> 9adc70933fcd
Step 10/26 : COPY ./temp_files /temp_files
---> 0009a6b30e37
Step 11/26 : COPY ./INPUT_DATA /INPUT_DATA
---> c6fefb1177d2
Step 12/26 : COPY ./PHREEQC/phreeqc_files/database/pitzer.dat /bin/pitzer.dat
---> 6c607db80b5c
Step 13/26 : COPY ./PHREEQC/phreeqc_files/bin/phreeqc /bin/phreeqc
---> 9929ca929c36
Step 14/26 : ENV PATH=${PATH}:/bin/phreeqc
---> Running in 3584df0a38a3
Removing intermediate container 3584df0a38a3
---> bc1fbc3ab44a
Step 15/26 : ENV PATH=${PATH}:/bin/pitzer.dat
---> Running in df6567e946bb
Removing intermediate container df6567e946bb
---> 7884bbf9c81a
Step 16/26 : ENV PATH=${PATH}:/bin
---> Running in e5844cc5a89c
Removing intermediate container e5844cc5a89c
---> 863c92f66cfe
Step 17/26 : RUN echo 'Adding new'
---> Running in d983f0139087
Adding new
Removing intermediate container d983f0139087
---> 165061bdbb1a
Step 18/26 : RUN echo "conda activate myenv" >> ~/.bashrc
---> Running in 10480f5953e0
Removing intermediate container 10480f5953e0
---> 73b398920e88
Step 19/26 : SHELL ["conda", "run", "-n", "myenv", "/bin/bash", "-c"]
---> Running in 7825c13f4d82
Removing intermediate container 7825c13f4d82
---> 28d64beaf762
Step 20/26 : RUN echo "Make sure flask is installed:"
---> Running in 6464253fb0f7
Make sure flask is installed:
Removing intermediate container 6464253fb0f7
---> 8f24b186dbcb
Step 21/26 : RUN python -c "import flask"
---> Running in 35baf159fe93
Removing intermediate container 35baf159fe93
---> 02cef1cee9d9
Step 22/26 : RUN echo "Please work v14 new"
---> Running in 66d087cd8df8
Please work v14 new
Removing intermediate container 66d087cd8df8
---> c601c52eaeb0
Step 23/26 : RUN echo Copy service directory
---> Running in e82660354cd5
Copy service directory
Removing intermediate container e82660354cd5
---> aa3f75d5851f
Step 24/26 : WORKDIR /service
---> Running in 717fcc72d06d
Removing intermediate container 717fcc72d06d
---> ef5fdef9d4f4
Step 25/26 : ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "myenv"]
---> Running in e0560cc2107d
Removing intermediate container e0560cc2107d
---> bd7571eca5cc
Step 26/26 : CMD ["python","server.py"]
---> Running in 0c20ad9202c1
Removing intermediate container 0c20ad9202c1
---> 45b528b9fc92
Successfully built 45b528b9fc92
Successfully tagged co2gasp/service:latest
Building worker
Step 1/26 : FROM continuumio/miniconda3
---> 45461d36cbf1
Step 2/26 : RUN apt-get update -y
---> Using cache
---> d025f5361af7
Step 3/26 : RUN apt-get install zip -y
---> Using cache
---> e3c960679ed3
Step 4/26 : RUN apt-get install awscli -y
---> Using cache
---> 80aedd834d9d
Step 5/26 : WORKDIR /app
---> Using cache
---> 441c997e0184
Step 6/26 : COPY environment.yml .
---> Using cache
---> c7d0ab20c3fd
Step 7/26 : RUN conda env create -f environment.yml
---> Using cache
---> 10ae3f85a5dc
Step 8/26 : COPY ./PHREEQC /PHREEQC
---> Using cache
---> e90d9f82e4be
Step 9/26 : COPY ./service /service
---> Using cache
---> 9adc70933fcd
Step 10/26 : COPY ./temp_files /temp_files
---> Using cache
---> 0009a6b30e37
Step 11/26 : COPY ./INPUT_DATA /INPUT_DATA
---> Using cache
---> c6fefb1177d2
Step 12/26 : COPY ./PHREEQC/phreeqc_files/database/pitzer.dat /bin/pitzer.dat
---> Using cache
---> 6c607db80b5c
Step 13/26 : COPY ./PHREEQC/phreeqc_files/bin/phreeqc /bin/phreeqc
---> Using cache
---> 9929ca929c36
Step 14/26 : ENV PATH=${PATH}:/bin/phreeqc
---> Using cache
---> bc1fbc3ab44a
Step 15/26 : ENV PATH=${PATH}:/bin/pitzer.dat
---> Using cache
---> 7884bbf9c81a
Step 16/26 : ENV PATH=${PATH}:/bin
---> Using cache
---> 863c92f66cfe
Step 17/26 : RUN echo 'Adding new'
---> Using cache
---> 165061bdbb1a
Step 18/26 : RUN echo "conda activate myenv" >> ~/.bashrc
---> Using cache
---> 73b398920e88
Step 19/26 : SHELL ["conda", "run", "-n", "myenv", "/bin/bash", "-c"]
---> Using cache
---> 28d64beaf762
Step 20/26 : RUN echo "Make sure flask is installed:"
---> Using cache
---> 8f24b186dbcb
Step 21/26 : RUN python -c "import flask"
---> Using cache
---> 02cef1cee9d9
Step 22/26 : RUN echo "Please work v14 new"
---> Using cache
---> c601c52eaeb0
Step 23/26 : RUN echo Copy service directory
---> Using cache
---> aa3f75d5851f
Step 24/26 : WORKDIR /service
---> Using cache
---> ef5fdef9d4f4
Step 25/26 : ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "myenv"]
---> Using cache
---> bd7571eca5cc
Step 26/26 : CMD ["python","server.py"]
---> Using cache
---> 45b528b9fc92
Successfully built 45b528b9fc92
Successfully tagged co2gasp/worker:latest
Creating src_worker_1 ...
Creating src_web_1 ...
Creating src_web_1 ... done
Creating src_worker_1 ... done
[Container] 2023/01/22 20:50:09 Running command docker tag co2gasp/worker:latest $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/co2gasp/worker:latest
[Container] 2023/01/22 20:50:09 Running command docker tag co2gasp/service:latest $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/co2gasp/service:latest
[Container] 2023/01/22 20:50:09 Phase complete: BUILD State: SUCCEEDED
[Container] 2023/01/22 20:50:09 Phase context status code: Message:
[Container] 2023/01/22 20:50:09 Entering phase POST_BUILD
[Container] 2023/01/22 20:50:09 Running command echo Build completed on `date`
Build completed on Sun Jan 22 20:50:09 UTC 2023
[Container] 2023/01/22 20:50:09 Running command echo Pushing the Docker image..
Pushing the Docker image..
[Container] 2023/01/22 20:50:09 Running command docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/co2gasp/worker:latest
The push refers to repository [769126297153.dkr.ecr.us-east-2.amazonaws.com/co2gasp/worker]
72e0458bf59f: Preparing
3ed9cb7ff5e4: Preparing
33810354d9da: Preparing
58f71f4114eb: Preparing
edcb85c7c85a: Preparing
89bfec2a6ec0: Preparing
9809700b743d: Preparing
d4ea492f859c: Preparing
aaa1fcd61920: Preparing
edc2c622596c: Preparing
107838da2ee5: Preparing
999b746901d1: Preparing
e7ecfc83aef3: Preparing
b9a946f70034: Preparing
b16bba17811d: Preparing
d8f00b2dd1ec: Preparing
7bd72d2b5d13: Preparing
92d9617bd3c6: Preparing
32a72a3896c6: Preparing
8a70d251b653: Preparing
9809700b743d: Waiting
d4ea492f859c: Waiting
aaa1fcd61920: Waiting
edc2c622596c: Waiting
107838da2ee5: Waiting
89bfec2a6ec0: Waiting
999b746901d1: Waiting
7bd72d2b5d13: Waiting
e7ecfc83aef3: Waiting
92d9617bd3c6: Waiting
b9a946f70034: Waiting
32a72a3896c6: Waiting
b16bba17811d: Waiting
8a70d251b653: Waiting
3ed9cb7ff5e4: Pushed
72e0458bf59f: Pushed
58f71f4114eb: Pushed
edcb85c7c85a: Pushed
33810354d9da: Pushed
9809700b743d: Pushed
aaa1fcd61920: Pushed
edc2c622596c: Pushed
e7ecfc83aef3: Pushed
b9a946f70034: Pushed
89bfec2a6ec0: Pushed
d8f00b2dd1ec: Pushed
7bd72d2b5d13: Pushed
92d9617bd3c6: Layer already exists
32a72a3896c6: Layer already exists
8a70d251b653: Layer already exists
107838da2ee5: Pushed
b16bba17811d: Pushed
d4ea492f859c: Pushed
999b746901d1: Pushed
latest: digest: sha256:ffff1b4491a2e00c440570264e7f1f3d2accb2b704d3be7f09ae6cfef544ed62 size: 4516
[Container] 2023/01/22 20:52:13 Running command docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/co2gasp/service:latest
The push refers to repository [769126297153.dkr.ecr.us-east-2.amazonaws.com/co2gasp/service]
72e0458bf59f: Preparing
3ed9cb7ff5e4: Preparing
33810354d9da: Preparing
58f71f4114eb: Preparing
edcb85c7c85a: Preparing
89bfec2a6ec0: Preparing
9809700b743d: Preparing
d4ea492f859c: Preparing
aaa1fcd61920: Preparing
edc2c622596c: Preparing
107838da2ee5: Preparing
999b746901d1: Preparing
e7ecfc83aef3: Preparing
b9a946f70034: Preparing
b16bba17811d: Preparing
d8f00b2dd1ec: Preparing
7bd72d2b5d13: Preparing
92d9617bd3c6: Preparing
32a72a3896c6: Preparing
89bfec2a6ec0: Waiting
8a70d251b653: Preparing
aaa1fcd61920: Waiting
d4ea492f859c: Waiting
b16bba17811d: Waiting
d8f00b2dd1ec: Waiting
edc2c622596c: Waiting
9809700b743d: Waiting
107838da2ee5: Waiting
7bd72d2b5d13: Waiting
b9a946f70034: Waiting
92d9617bd3c6: Waiting
999b746901d1: Waiting
32a72a3896c6: Waiting
e7ecfc83aef3: Waiting
33810354d9da: Pushed
58f71f4114eb: Pushed
72e0458bf59f: Pushed
edcb85c7c85a: Pushed
3ed9cb7ff5e4: Pushed
9809700b743d: Pushed
aaa1fcd61920: Pushed
edc2c622596c: Pushed
e7ecfc83aef3: Pushed
b9a946f70034: Pushed
89bfec2a6ec0: Pushed
d8f00b2dd1ec: Pushed
7bd72d2b5d13: Pushed
92d9617bd3c6: Layer already exists
32a72a3896c6: Layer already exists
8a70d251b653: Layer already exists
b16bba17811d: Pushed
107838da2ee5: Pushed
d4ea492f859c: Pushed
999b746901d1: Pushed
latest: digest: sha256:ffff1b4491a2e00c440570264e7f1f3d2accb2b704d3be7f09ae6cfef544ed62 size: 4516
[Container] 2023/01/22 20:54:18 Running command echo Completed pushing Docker image. Deploying Docker image to AWS Fargate on `date`
Completed pushing Docker image. Deploying Docker image to AWS Fargate on Sun Jan 22 20:54:18 UTC 2023
[Container] 2023/01/22 20:54:18 Running command printf '[{"name":"CO2GASP-Service","imageUri":"%s"},{"name":"CO2GASP-Worker","imageUri":"%s"}]' $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/co2gasp/service:latest $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/co2gasp/worker:latest > imagedefinitions.json
[Container] 2023/01/22 20:54:18 Phase complete: POST_BUILD State: SUCCEEDED
[Container] 2023/01/22 20:54:18 Phase context status code: Message:
[Container] 2023/01/22 20:54:18 Expanding base directory path: .
[Container] 2023/01/22 20:54:18 Assembling file list
[Container] 2023/01/22 20:54:18 Expanding .
[Container] 2023/01/22 20:54:18 Expanding file paths for base directory .
[Container] 2023/01/22 20:54:18 Assembling file list
[Container] 2023/01/22 20:54:18 Expanding imagedefinitions.json
[Container] 2023/01/22 20:54:18 Found 1 file(s)
[Container] 2023/01/22 20:54:18 Phase complete: UPLOAD_ARTIFACTS State: SUCCEEDED
[Container] 2023/01/22 20:54:18 Phase context status code: Message:
When I ran the CodeBuild job without the -d option, i.e. instead of
docker-compose -f docker-compose.yml up --build -d
I used
docker-compose -f docker-compose.yml up --build
I got this relevant output:
Step 26/26 : CMD ["python","server.py"]
---> Using cache
---> c367c9c15b42
Successfully built c367c9c15b42
Successfully tagged co2gasp/worker:latest
Creating src_web_1 ...
Creating src_worker_1 ...
Creating src_worker_1 ... done
Creating src_web_1 ... done
Attaching to src_worker_1, src_web_1
worker_1 | 18:42:44 Worker rq:worker:0171503f40bb44cfb4cc18b7d60844cc: started, version 1.9.0
worker_1 | 18:42:44 Subscribing to channel rq:pubsub:0171503f40bb44cfb4cc18b7d60844cc
worker_1 | 18:42:44 *** Listening on default...
worker_1 | 18:42:44 Cleaning registries for queue: default
web_1 | * Serving Flask app 'server'
web_1 | * Debug mode: on
web_1 | /service/data_import.py:85: DtypeWarning: Columns (1,3,4,7,8,9,15,16,17,18,20,22,24,25,26,29,31,32,33,34,35,36,37,38,39,40,41,42,47,48,49,172,174,175) have mixed types.Specify dtype option on import or set low_memory=False.
web_1 | rawusgs,geo =read_in_data()
web_1 | /service/data_import.py:89: DtypeWarning: Columns (4,7,10,17,21,25,26,27,32,35,37,39,48,49,175,176) have mixed types.Specify dtype option on import or set low_memory=False.
web_1 | medusgs=medusgs_data_import(rawusgs,grad,sur)
web_1 | WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
web_1 | * Running on all addresses (0.0.0.0)
web_1 | * Running on http://127.0.0.1:8080
web_1 | * Running on http://172.18.0.3:8080
web_1 | Press CTRL+C to quit
web_1 | * Restarting with stat
web_1 | /service/data_import.py:85: DtypeWarning: Columns (1,3,4,7,8,9,15,16,17,18,20,22,24,25,26,29,31,32,33,34,35,36,37,38,39,40,41,42,47,48,49,172,174,175) have mixed types.Specify dtype option on import or set low_memory=False.
web_1 | rawusgs,geo =read_in_data()
web_1 | /service/data_import.py:89: DtypeWarning: Columns (4,7,10,17,21,25,26,27,32,35,37,39,48,49,175,176) have mixed types.Specify dtype option on import or set low_memory=False.
web_1 | medusgs=medusgs_data_import(rawusgs,grad,sur)
web_1 | * Debugger is active!
web_1 | * Debugger PIN: 145-314-329
However, it then hangs and the images aren't pushed. With the -d flag the containers seem to start (detached) and the build carries on.
When I then go to Fargate, the logs for both containers seem to show that the
CMD ["python","server.py"]
line in the Dockerfile has been executed for both images. For example,
my service log:
* Serving Flask app 'server'
* Debug mode: on
/service/data_import.py:85: DtypeWarning: Columns (1,3,4,7,8,9,15,16,17,18,20,22,24,25,26,29,31,32,33,34,35,36,37,38,39,40,41,42,47,48,49,172,174,175) have mixed types.Specify dtype option on import or set low_memory=False.
rawusgs,geo =read_in_data()
/service/data_import.py:89: DtypeWarning: Columns (4,7,10,17,21,25,26,27,32,35,37,39,48,49,175,176) have mixed types.Specify dtype option on import or set low_memory=False.
medusgs=medusgs_data_import(rawusgs,grad,sur)
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
* Running on all addresses (0.0.0.0)
* Running on http://127.0.0.1:8080
* Running on http://10.0.3.102:8080
and the worker log
* Serving Flask app 'server'
* Debug mode: on
/service/data_import.py:85: DtypeWarning: Columns (1,3,4,7,8,9,15,16,17,18,20,22,24,25,26,29,31,32,33,34,35,36,37,38,39,40,41,42,47,48,49,172,174,175) have mixed types.Specify dtype option on import or set low_memory=False.
rawusgs,geo =read_in_data()
/service/data_import.py:89: DtypeWarning: Columns (4,7,10,17,21,25,26,27,32,35,37,39,48,49,175,176) have mixed types.Specify dtype option on import or set low_memory=False.
medusgs=medusgs_data_import(rawusgs,grad,sur)
Address already in use
Port 8080 is in use by another program. Either identify and stop that program, or start the server with a different port.
ERROR conda.cli.main_run:execute(47): `conda run python server.py` failed. (See above for error)
You shouldn't be running docker-compose up inside CodeBuild. You are actually running your Docker images inside the CodeBuild environment, which is pointless. You should change the command to only build the images, not run them:
docker-compose -f docker-compose.yml build
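A minimal sketch of how the build phase could then look (reusing the image names and tag commands from your buildspec, trimmed down; adjust as needed):

  build:
    commands:
      - echo Build started on `date`
      - docker-compose -f docker-compose.yml build
      - docker tag co2gasp/worker:latest $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/co2gasp/worker:latest
      - docker tag co2gasp/service:latest $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/co2gasp/service:latest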
Also, both of your containers are built from exactly the same Dockerfile, so you end up with two identical images. You only really need to build (and push) one image and configure both containers to use it; the only difference between them is the command you select at run time.
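As a rough sketch of that idea (the single repository name co2gasp/app is just an illustration, not something that exists in your setup), the compose file could build the image once for web and reuse it for worker:

version: '3.8'
services:
  web:
    image: co2gasp/app:latest
    build: ./worker_app
    command: ["python", "server.py"]
  worker:
    # no build key: docker-compose reuses the co2gasp/app image built for web
    image: co2gasp/app:latest
    command: ["python", "worker.py"]

You would then tag and push only co2gasp/app to a single ECR repository.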
The runtime issue with ECS/Fargate is almost certainly down to how you are creating the ECS Task Definition and deploying the ECS Task (which you haven't shown in your question). You need to make sure that the task definition correctly specifies a different command for each of the two containers.
The default command is server.py because that's what you configured as the last line in your Dockerfile:
CMD ["python","server.py"]
You are overriding that in your docker-compose file, but that is a run-time setting in docker-compose. When you build images, that run-time setting isn't copied into those images. The command setting is only applied when you run containers using docker-compose. The different command settings in your docker-compose file would only be applied to your ECS deployment if you are using the docker-compose ECS integration to perform your actual ECS deployments. It sounds like that isn't how you are performing your ECS deployments, so however you are deploying to ECS, you need to make sure that you are overriding the command setting in the container definitions inside your ECS Task Definition, just like you are in your docker-compose file.
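If you register the task definition yourself (console, CLI, or CloudFormation), that means giving each container its own command entry, roughly like this (a sketch only: the container names, account ID, and image URIs come from your buildspec and logs, the rest is illustrative):

"containerDefinitions": [
  {
    "name": "CO2GASP-Service",
    "image": "769126297153.dkr.ecr.us-east-2.amazonaws.com/co2gasp/service:latest",
    "command": ["python", "server.py"]
  },
  {
    "name": "CO2GASP-Worker",
    "image": "769126297153.dkr.ecr.us-east-2.amazonaws.com/co2gasp/worker:latest",
    "command": ["python", "worker.py"]
  }
]

Because your images set ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "myenv"], the command here is appended after that entrypoint, exactly as CMD is in the Dockerfile.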