I'm using AWS CodeBuild to build a Docker image and push it to Amazon ECR. This is the CodeBuild configuration.
Here is the buildspec.yml:
version: 0.2
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws --version
      - aws ecr get-login-password --region my-region | docker login --username AWS --password-stdin my-image-uri
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t pos-drf .
      - docker tag pos-drf:latest my-image-uri/pos-drf:latest
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push my-image-uri/pos-drf:latest
Now it's working up until the build command docker build -t pos-drf .
The error message I get is the following:
[Container] 2022/12/30 15:12:39 Running command docker build -t pos-drf .
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /codebuild/output/src696881611/src/Dockerfile: no such file or directory
[Container] 2022/12/30 15:12:39 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: docker build -t pos-drf .. Reason: exit status 1
I'm quite sure this is not a permission-related issue.
Please let me know if I need to share something else.
UPDATE:
This is the Dockerfile
# base image
FROM python:3.8
# setup environment variable
ENV DockerHOME=/home/app/webapp
# set work directory
RUN mkdir -p $DockerHOME
# where your code lives
WORKDIR $DockerHOME
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install dependencies
RUN pip install --upgrade pip
# copy whole project to your docker home directory.
COPY . $DockerHOME
RUN apt-get update && apt-get -y dist-upgrade
# RUN apt-get install mysql-client mysql-server
# run this command to install all dependencies
RUN pip install -r requirements.txt
# port where the Django app runs
EXPOSE 8000
# start server (bind to 0.0.0.0 so the app is reachable from outside the container)
CMD python manage.py runserver 0.0.0.0:8000
My mistake was that I had the Dockerfile locally but hadn't pushed it.
CodeBuild worked successfully after pushing the Dockerfile.
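For anyone hitting the same lstat error: an easy way to confirm what actually reached the build environment is to list the source directory in pre_build before anything else (a minimal sketch; CODEBUILD_SRC_DIR is the directory CodeBuild extracts the source into):
pre_build:
  commands:
    - ls -la "$CODEBUILD_SRC_DIR"   # the Dockerfile should show up here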
Related
I am trying to do a pip install from CodeArtifact from within a docker build in AWS CodeBuild.
This article does not quite solve my problem: https://docs.aws.amazon.com/codeartifact/latest/ug/using-python-packages-in-codebuild.html
The login to AWS CodeArtifact happens in the pre_build phase, outside of the Docker context.
But my pip install is inside my Dockerfile (we pull from a private PyPI registry).
How do I do this without doing something horrible like setting an env variable to the password read from ~/.config/pip/pip.conf after running the login command in pre_build?
You can use the environment variable PIP_INDEX_URL[1].
Below is an AWS CodeBuild buildspec.yml file where we construct the PIP_INDEX_URL for CodeArtifact by using this example from the AWS documentation.
buildspec.yml
pre_build:
  commands:
    - echo Getting CodeArtifact authorization...
    - export CODEARTIFACT_AUTH_TOKEN=$(aws codeartifact get-authorization-token --domain "${CODEARTIFACT_DOMAIN}" --domain-owner "${AWS_ACCOUNT_ID}" --query authorizationToken --output text)
    - export PIP_INDEX_URL="https://aws:${CODEARTIFACT_AUTH_TOKEN}@${CODEARTIFACT_DOMAIN}-${AWS_ACCOUNT_ID}.d.codeartifact.${AWS_DEFAULT_REGION}.amazonaws.com/pypi/${CODEARTIFACT_REPO}/simple/"
In your Dockerfile, add an ARG PIP_INDEX_URL line just above
your RUN pip install -r requirements.txt so it can become an environment
variable during the build process:
Dockerfile
# this needs to be added before your pip install line!
ARG PIP_INDEX_URL
RUN pip install -r requirements.txt
Finally, we build the image with the PIP_INDEX_URL build-arg.
buildspec.yml
build:
  commands:
    - echo Building the Docker image...
    - docker build -t "${IMAGE_REPO_NAME}" --build-arg PIP_INDEX_URL .
As an aside, adding ARG PIP_INDEX_URL to your Dockerfile shouldn't break any
existing CI or workflows. If --build-arg PIP_INDEX_URL is omitted when
building an image, pip will still use the default PyPI index.
Specifying --build-arg PIP_INDEX_URL=${PIP_INDEX_URL} is valid, but
unnecessary. Specifying the argument name with no value will make Docker take
its value from the environment variable of the same
name[2].
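For example (a quick sketch; the image name demo is made up):
export PIP_INDEX_URL="https://private.example.com/simple/"
docker build --build-arg PIP_INDEX_URL -t demo .
# inside the build, ARG PIP_INDEX_URL receives the value from the shell environment above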
Security note: If someone runs docker history ${IMAGE_REPO_NAME}, they can see the value of ${PIP_INDEX_URL}[3]. The token is only good for a maximum of 12 hours though, and you can shorten it to as little as 15 minutes with the --duration-seconds parameter of aws codeartifact get-authorization-token[4], so maybe that's acceptable. If your Dockerfile is a multi-stage build, then it shouldn't be an issue if you're not using ARG PIP_INDEX_URL in your target stage. docker build --secret does not seem to be supported in CodeBuild at this time.
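To illustrate that multi-stage point, here is a minimal sketch (the stage layout and paths are hypothetical, not from the question): the ARG is declared only in the builder stage, so the final stage's history never contains the token.
FROM python:3.8 AS builder
ARG PIP_INDEX_URL
COPY requirements.txt .
# wheels are fetched while the CodeArtifact index (and token) is available
RUN pip wheel -r requirements.txt --wheel-dir /wheels

FROM python:3.8
COPY requirements.txt .
COPY --from=builder /wheels /wheels
# install from the local wheels only; no index URL or token enters this stage
RUN pip install --no-index --find-links /wheels -r requirements.txt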
So, here is how I solved this for now. It seems kinda hacky, but it works. (EDIT: we have since switched to @phistrom's answer.)
In the pre_build phase, I run the login command and copy ~/.config/pip/pip.conf to the current build directory:
pre_build:
  commands:
    - echo Logging in to Amazon ECR...
    ...
    - echo Fetching pip.conf for PYPI
    - aws codeartifact --region us-east-1 login --tool pip --repository ....
    - cp ~/.config/pip/pip.conf .
build:
  commands:
    - docker build -t $IMAGE_REPO_NAME:$IMAGE_TAG .
    - docker tag $IMAGE_REPO_NAME:$IMAGE_TAG $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
Then in the Dockerfile, I COPY that file in, do the pip installs, then rm it. (Note the rm only hides pip.conf from the final filesystem; the earlier COPY layer still contains it in the image history, so treat this as a stopgap.)
COPY requirements.txt pkg/
COPY --chown=myuser:myuser pip.conf /home/myuser/.config/pip/pip.conf
RUN pip install -r ./pkg/requirements.txt
RUN pip install ./pkg
RUN rm /home/myuser/.config/pip/pip.conf
I'm new to Docker and I'm trying to understand the following setup.
I want to debug my docker container to see if it is receiving AWS credentials when running as a task in Fargate. It is suggested that I run the command:
curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
But I'm not sure how to do so.
The setup uses GitLab CI to build and push the Docker container to AWS ECR.
Here is the Dockerfile:
FROM rocker/tidyverse:3.6.3
RUN apt-get update && \
    apt-get install -y openjdk-11-jdk && \
    apt-get install -y liblzma-dev && \
    apt-get install -y libbz2-dev && \
    apt-get install -y libnetcdf-dev
COPY ./packrat/packrat.lock /home/project/packrat/
COPY initiate.R /home/project/
COPY hello.Rmd /home/project/
RUN install2.r packrat
RUN which nc-config
RUN Rscript -e 'packrat::restore(project = "/home/project/")'
RUN echo '.libPaths("/home/project/packrat/lib/x86_64-pc-linux-gnu/3.6.3")' >> /usr/local/lib/R/etc/Rprofile.site
WORKDIR /home/project/
CMD Rscript initiate.R
Here is the gitlab-ci.yml file:
image: docker:stable
variables:
  ECR_PATH: XXXXX.dkr.ecr.eu-west-2.amazonaws.com/
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
services:
  - docker:dind
stages:
  - build
  - deploy
before_script:
  - docker info
  - apk add --no-cache curl jq py-pip
  - pip install awscli
  - chmod +x ./build_and_push.sh
build-rmarkdown-task:
  stage: build
  script:
    - export REPO_NAME=edelta/rmarkdown_report
    - export BUILD_DIR=rmarkdown_report
    - export REPOSITORY_URL=$ECR_PATH$REPO_NAME
    - ./build_and_push.sh
  when: manual
Here is the build and push script:
#!/bin/sh
$(aws ecr get-login --no-include-email --region eu-west-2)
docker pull $REPOSITORY_URL || true
docker build --cache-from $REPOSITORY_URL -t $REPOSITORY_URL ./$BUILD_DIR/
docker push $REPOSITORY_URL
I'd like to run this command on my docker container:
curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
How do I run this command on container startup in Fargate?
To run a command inside a Docker container, you need to be inside that container.
Step 1: Find the container ID / container name that you want to debug:
docker ps
A list of containers will be displayed; pick one of them.
Step 2: Run the following command:
docker exec -it <containerName/containerId> bash
After a few seconds you will be inside the container in an interactive bash session.
For more info read https://docs.docker.com/engine/reference/commandline/exec/
Short answer: just replace the CMD:
CMD ["sh", "-c", "curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI && Rscript initiate.R"]
Long answer: you need to replace the CMD of the Dockerfile, as it currently only runs Rscript.
You have two options: change the CMD (see above) or add an entrypoint.
For the entrypoint, create an entrypoint.sh that runs the curl only when you want to debug:
#!/bin/sh
if [ "${IS_DEBUG}" = "true" ]; then
    echo "Container running in debug mode"
    curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
    # uncomment the line below if you still want to execute the R script afterwards
    # exec "$@"
else
    exec "$@"
fi
Changes required on the Dockerfile side:
WORKDIR /home/project/
ENV IS_DEBUG=true
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
# exec form, so the command is passed to the entrypoint as "$@"
CMD ["Rscript", "initiate.R"]
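Since the question is about Fargate, the IS_DEBUG flag doesn't have to be baked into the image with ENV; it can be toggled per deployment from the ECS task definition instead. A sketch of the relevant containerDefinitions fragment (the name and image values are placeholders):
"containerDefinitions": [
  {
    "name": "rmarkdown-report",
    "image": "XXXXX.dkr.ecr.eu-west-2.amazonaws.com/edelta/rmarkdown_report:latest",
    "environment": [
      { "name": "IS_DEBUG", "value": "true" }
    ]
  }
]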
I am trying to run a CodePipeline with GitHub as the source, CodeBuild as the builder and Elastic Beanstalk as the server infrastructure. I am using a Docker image amazonlinux:2018.03 which works perfectly locally, but during the CodeBuild step in the pipeline I get the following error:
docker-compose: command not found
I have tried to install docker, docker-compose etc. but it keeps giving me this error. I've set the build to use a file buildspec.yaml:
version: 0.2
phases:
  install:
    commands:
      - echo "installing"
      - sudo yum install -y yum-utils
      - sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
      - sudo curl -L "https://github.com/docker/compose/releases/download/1.25.5/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
      - sudo chmod +x /usr/local/bin/docker-compose
      - sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
      - docker-compose --version
  build:
    commands:
      - bash compose-local.sh
compose-local.sh:
#!/bin/bash
sudo docker-compose up
I have tried for a couple of days, and I am not sure if I am overlooking something with CodeBuild.
Run /usr/local/bin/docker-compose up instead.
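That is, since the install phase above downloads the binary to /usr/local/bin, reference it by its full path in compose-local.sh:
#!/bin/bash
sudo /usr/local/bin/docker-compose up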
If using the Ubuntu standard image 2.0+ or the Amazon Linux 2 image, you need to specify docker under runtime-versions in the install phase of the buildspec.yml file, e.g.:
version: 0.2
phases:
  install:
    runtime-versions:
      docker: 18
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image with docker-compose...
      - docker-compose -f docker-compose.yml build
Also please make sure to enable privileged mode: https://docs.aws.amazon.com/codebuild/latest/userguide/create-project.html#create-project-console
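If you manage the project from the CLI instead of the console, privileged mode can be flipped with update-project; a sketch (the project name and image are placeholders):
aws codebuild update-project --name my-project \
  --environment "type=LINUX_CONTAINER,image=aws/codebuild/standard:4.0,computeType=BUILD_GENERAL1_SMALL,privilegedMode=true"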
I'm using Zappa to deploy on AWS, and I wanted to implement CI/CD on AWS.
So I created a pipeline and got the AWS CodeCommit and AWS CodeBuild stages working successfully.
I'm unable to deploy the same using AWS CodeDeploy.
The Buildspec.yaml looks like this:
version: 0.2
phases:
  install:
    commands:
      - echo Setting up virtualenv
      - python -m venv venv
      - source venv/bin/activate
      - echo Installing requirements from file
      - pip install -r requirements.txt
  build:
    commands:
      - echo Build started on `date`
      - echo Building and running tests
      - python tests.py
      - flask db upgrade
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Starting deployment
      - zappa update dev
      - echo Deployment completed
How should I execute zappa deploy or zappa update on AWS?
I'm not sure how to create an appspec.yaml file.
Please HELP! Stuck!!
Here's a buildspec.yml file that I use. You could adjust this to suit your needs (for example, including the DB upgrade command).
version: 0.2
phases:
  install:
    commands:
      - mkdir /tmp/src/
      - mv $CODEBUILD_SRC_DIR/* /tmp/src/
      - cd /tmp/src/
      - python3 -m venv docker_env && source docker_env/bin/activate && pip install --upgrade pip==9.0.3 && pip install -r requirements.txt && zappa update production && deactivate && rm -rf docker_env
  post_build:
    commands:
      - cd $CODEBUILD_SRC_DIR
      - rm -rf /tmp/src/
      - echo Build completed on `date`
Note that this is using the Docker image danielwhatmuff/zappa:python3.6 in CodeBuild. I use this image as it's based on AWS Lambda and has been tuned for Zappa.
Zappa update to CodeDeploy:
Your Buildspec.yaml looks fairly good, but there is one important point to consider: post_build always runs, regardless of whether the build succeeded or failed (by design, so that debug information can be pulled from a failed build). That means your zappa update dev would run even after failed tests.
Either check the reason for the failure in the build log, or modify your yml to look like below (caution: this is only a draft change, test it before using it in real systems):
version: 0.2
phases:
  install:
    commands:
      - yum -y groupinstall development
      - yum -y install zlib-devel
      - yum -y install openssl-devel
      - wget https://www.python.org/ftp/python/3.6.0/Python-3.6.0.tar.xz
      - tar xJf Python-3.6.0.tar.xz
      - cd Python-3.6.0
      - ./configure
      - make
      - make install
      - ln -s /usr/local/bin/python3.6 /usr/bin/python3
      - curl "https://bootstrap.pypa.io/get-pip.py" -o "get-pip.py"
      - python3 get-pip.py
      - pip3 install virtualenv
      - virtualenv -p /usr/bin/python3 venv
      - source venv/bin/activate
      - pip3 install -r requirements.txt
  build:
    commands:
      - echo Build started on `date`
      - echo Building and running tests
      - python3 tests.py
      - flask db upgrade
  post_build:
    commands:
      - if [ $CODEBUILD_BUILD_SUCCEEDING = 1 ]; then echo Build completed on `date`; echo Starting deployment; zappa update dev; else echo Build failed ignoring deployment; fi
      - echo Deployment completed
Hope it answers.
Zappa update on AWS
Below are the steps to run a Zappa update on AWS:
Step 1: Configure AWS with an IAM user.
Step 2: Configure the AWS CLI on the local host:
a. pip install awscli
b. aws configure
Step 3: Run zappa init; it will generate zappa_settings.json based on the details you provide.
Step 4: Run zappa deploy <name provided for the environment in step 3>.
Now your application will be deployed to AWS. Whenever you need to update, call
zappa update <name provided for the environment in step 3>
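For reference, zappa init writes a zappa_settings.json along these lines (a minimal sketch; the stage name, module path and bucket are made up):
{
    "dev": {
        "app_function": "app.app",
        "aws_region": "us-east-1",
        "project_name": "my-flask-app",
        "runtime": "python3.8",
        "s3_bucket": "zappa-deployments-example"
    }
}
zappa deploy dev and zappa update dev then act on the "dev" stage defined here.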
Started playing with AWS CodeBuild.
The goal is to have a Docker image as the final result, with nodejs, hapi and a sample app running inside.
Currently I have an issue with:
"unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /tmp/src049302811/src/Dockerfile: no such file or directory"
Appears at the BUILD stage.
Project details:
S3 bucket used as a source
ZIP file stored in the respective S3 bucket contains buildspec.yml, package.json, a sample *.js file and DockerFile.
aws/codebuild/docker:1.12.1 is used as the build environment.
When I'm building an image using Docker installed on my laptop there are no issues, so I can't understand which directory I need to specify to get rid of this error message.
Buildspec and DockerFile attached below.
Thanks for any comments.
buildspec.yml
version: 0.1
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - $(aws ecr get-login --region eu-west-1)
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t <CONTAINER_NAME> .
      - docker tag <CONTAINER_NAME>:latest <ID>.dkr.ecr.eu-west-1.amazonaws.com/<CONTAINER_NAME>:latest
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image...
      - docker push <id>.eu-west-1.amazonaws.com/<image>:latest
DockerFile
FROM alpine:latest
RUN apk update && apk upgrade
RUN apk add nodejs
RUN rm -rf /var/cache/apk/*
COPY . /src
RUN cd /src; npm install hapi
EXPOSE 80
CMD ["node", "/src/server.js"]
Ok, so the solution was simple.
The issue was related to the Dockerfile name.
CodeBuild was not accepting DockerFile (with a capital F; strangely it worked locally, most likely because the local filesystem is case-insensitive), but Dockerfile (with a lower-case f) worked perfectly.
Can you validate that the Dockerfile exists in the root of the directory? One way of doing this would be to run ls -altr as part of the pre_build phase in your buildspec (even before the ECR login).
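Something like this at the top of the buildspec would do it (a minimal sketch):
pre_build:
  commands:
    - ls -altr        # confirm the file is named Dockerfile and sits in the build context root
    - $(aws ecr get-login --region eu-west-1)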