I am new to Django and am following the book Django for Professionals 3.1 by William S. Vincent. I am trying to move a simple Django project, which currently lives on my system (macOS) in a conda environment managed through the PyCharm IDE, into a Docker container.
The Problem
The book uses pipenv for the project and bases its Dockerfile on that. However, since I am using a conda environment for the project, I cannot use the book's pipenv-based Dockerfile.
What I Tried
Step 1
I started by running the following command to export an environment.yml file listing all the packages the project uses.
(django_for_professionals_31) My-MacBook-Pro:django_for_professionals_31_ch1 me$ conda env export --no-builds > environment.yml
My environment.yml file looks like the following*:
name: django_for_professionals_31
channels:
- defaults
dependencies:
- asgiref=3.4.1
- bzip2=1.0.8
- ca-certificates=2022.4.26
- certifi=2022.5.18.1
- django=3.2.5
- krb5=1.19.2
- libedit=3.1.20210910
- libffi=3.3
- libpq=12.9
- ncurses=6.3
- openssl=1.1.1o
- pip=21.2.4
- psycopg2=2.8.6
- python=3.10.4
- pytz=2021.3
- readline=8.1.2
- setuptools=61.2.0
- sqlite=3.38.3
- sqlparse=0.4.1
- tk=8.6.12
- typing_extensions=4.1.1
- tzdata=2022a
- wheel=0.37.1
- xz=5.2.5
- zlib=1.2.12
prefix: /opt/anaconda3/envs/django_for_professionals_31
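Note that conda env export captures macOS-specific builds (such as the libcxx package mentioned in the footnote below) that will not resolve inside a Linux container. One option is conda env export --from-history, which records only explicitly requested packages; another is to filter the exported file. A rough sketch of the latter, where the MAC_ONLY set and the sample text are illustrative assumptions, not an exhaustive list:

```python
# Strip macOS-only packages from a conda environment.yml so it resolves on Linux.
# MAC_ONLY is an illustrative assumption -- extend it as conda reports
# ResolvePackageNotFound errors for other platform-specific packages.
MAC_ONLY = {"libcxx", "libgfortran", "llvm-openmp"}

def filter_env(text: str) -> str:
    kept = []
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith("- "):
            # dependency lines look like "- name=version"
            name = stripped[2:].split("=")[0]
            if name in MAC_ONLY:
                continue
        if stripped.startswith("prefix:"):
            # the prefix is machine-specific and meaningless inside a container
            continue
        kept.append(line)
    return "\n".join(kept)

sample = """name: demo
dependencies:
- libcxx=12.0.0
- django=3.2.5
prefix: /opt/anaconda3/envs/demo"""

print(filter_env(sample))
```

This drops the libcxx line and the prefix line while keeping everything else untouched.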
Step 2
Then, based on this tutorial, I tried to write my Dockerfile, as shown below:
# Pull base image
FROM continuumio/miniconda3
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Set work directory
WORKDIR /code
# Create the environment:
COPY environment.yml .
RUN conda env create -f environment.yml
# Make RUN commands use the new environment:
SHELL ["conda", "run", "-n", "django_for_professionals_31", "/bin/bash", "-c"]
# Demonstrate the environment is activated:
RUN echo "Make sure django is installed:"
RUN python -c "import django"
# The code to run when container is started:
COPY run.py .
ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "django_for_professionals_31", "python", "run.py"]
# Copy project
COPY . /code/
Thereafter, I ran the command to build a docker image. The command and output I got are given below:
(django_for_professionals_31) My-MacBook-Pro:django_for_professionals_31_ch1 me$ docker build .
[+] Building 3.6s (12/13)
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 737B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/continuumio/miniconda3:latest 3.3s
=> [auth] continuumio/miniconda3:pull token for registry-1.docker.io 0.0s
=> [1/8] FROM docker.io/continuumio/miniconda3@sha256:24103733efebe6d610d868ab16a6f0e5f114c7aad0326a793d946b73af15391d 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 5.29kB 0.0s
=> CACHED [2/8] WORKDIR /code 0.0s
=> CACHED [3/8] COPY environment.yml . 0.0s
=> CACHED [4/8] RUN conda env create -f environment.yml 0.0s
=> CACHED [5/8] RUN echo "Make sure django is installed:" 0.0s
=> CACHED [6/8] RUN python -c "import django" 0.0s
=> ERROR [7/8] COPY run.py . 0.0s
------
> [7/8] COPY run.py .:
------
failed to compute cache key: "/run.py" not found: not found
Step 3
I was not sure what to do next when I got this error, so my first instinct was to comment out the lines under the # The code to run when container is started: section of the Dockerfile. After doing that, my Dockerfile looked like the following:
# Pull base image
FROM continuumio/miniconda3
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Set work directory
WORKDIR /code
# Create the environment:
COPY environment.yml .
RUN conda env create -f environment.yml
# Make RUN commands use the new environment:
SHELL ["conda", "run", "-n", "django_for_professionals_31", "/bin/bash", "-c"]
# Demonstrate the environment is activated:
RUN echo "Make sure django is installed:"
RUN python -c "import django"
# The code to run when container is started:
#COPY run.py .
#ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "django_for_professionals_31", "python", "run.py"]
# Copy project
COPY . /code/
Upon re-running $ docker build . in the terminal, I was able to create the Docker image; the last line of the output was => => writing image sha256:....
Step 4
Then, the book calls for creating a docker-compose.yml file. I am using the exact code from the book, given below:
version: '3.8'
services:
  web:
    build: .
    command: python /code/manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - 8000:8000
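One thing to check here: with the ENTRYPOINT commented out of the Dockerfile, this command: runs the base image's python, not the interpreter inside the conda environment. A sketch of a command that activates the environment explicitly, reusing the env name from the Dockerfile above (not tested against this exact setup):

```yaml
    command: conda run --no-capture-output -n django_for_professionals_31 python /code/manage.py runserver 0.0.0.0:8000
```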
After that, I ran the command to build a Docker container based on the created image. The command and output are shown below.
(django_for_professionals_31) My-MacBook-Pro:django_for_professionals_31_ch1 me$ docker-compose up
[+] Building 3.8s (13/13) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 734B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/continuumio/miniconda3:latest 3.4s
=> [auth] continuumio/miniconda3:pull token for registry-1.docker.io 0.0s
=> [1/7] FROM docker.io/continuumio/miniconda3@sha256:24103733efebe6d610d868ab16a6f0e5f114c7aad0326a793d946b73af15391d 0.0s
=> [internal] load build context 0.1s
=> => transferring context: 5.11kB 0.0s
=> CACHED [2/7] WORKDIR /code 0.0s
=> CACHED [3/7] COPY environment.yml . 0.0s
=> CACHED [4/7] RUN conda env create -f environment.yml 0.0s
=> CACHED [5/7] RUN echo "Make sure django is installed:" 0.0s
=> CACHED [6/7] RUN python -c "import django" 0.0s
=> [7/7] COPY . /code/ 0.1s
=> exporting to image 0.1s
=> => exporting layers 0.1s
=> => writing image sha256:d0166e8db7d10a43f18975955b398d9227ae7c2a217a69a2fe76a9cc869c0917 0.0s
=> => naming to docker.io/library/django_for_professionals_31_ch1_web 0.0s
[+] Running 2/2
⠿ Network django_for_professionals_31_ch1_default Created 0.1s
⠿ Container django_for_professionals_31_ch1-web-1 Created 0.2s
Attaching to django_for_professionals_31_ch1-web-1
django_for_professionals_31_ch1-web-1 | Traceback (most recent call last):
django_for_professionals_31_ch1-web-1 | File "/code/manage.py", line 11, in main
django_for_professionals_31_ch1-web-1 | from django.core.management import execute_from_command_line
django_for_professionals_31_ch1-web-1 | ModuleNotFoundError: No module named 'django'
django_for_professionals_31_ch1-web-1 |
django_for_professionals_31_ch1-web-1 | The above exception was the direct cause of the following exception:
django_for_professionals_31_ch1-web-1 |
django_for_professionals_31_ch1-web-1 | Traceback (most recent call last):
django_for_professionals_31_ch1-web-1 | File "/code/manage.py", line 22, in <module>
django_for_professionals_31_ch1-web-1 | main()
django_for_professionals_31_ch1-web-1 | File "/code/manage.py", line 13, in main
django_for_professionals_31_ch1-web-1 | raise ImportError(
django_for_professionals_31_ch1-web-1 | ImportError: Couldn't import Django. Are you sure it's installed and available on your PYTHONPATH environment variable? Did you forget to activate a virtual environment?
django_for_professionals_31_ch1-web-1 exited with code 1
From the above, it seems to me (though I could be wrong) that after commenting out those two lines, the conda environment never gets activated in the Docker container; but with the two lines included, the build fails with failed to compute cache key: "/run.py" not found, because run.py is a file from the tutorial that does not exist in my project, so the image cannot be created at all.
How can I resolve the above issue and create a docker image and use that to build a container for my project? Any help would be much appreciated. Thanks.
*Note: I removed the libcxx=12.0.0 package from the environment.yml file because, with it, I got the following error upon running $ docker build . in the terminal (libcxx is a macOS-specific package, which is why the Linux container cannot resolve it). I took the advice to remove this package after reading this
> [4/7] RUN conda env create -f environment.yml:
#9 1.036 Collecting package metadata (repodata.json): ...working... done
#9 8.859 Solving environment: ...working... failed
#9 8.874
#9 8.874 ResolvePackageNotFound:
#9 8.874 - libcxx=12.0.0
#9 8.874
Try this:
environment.yml
name: myenv
channels:
- conda-forge
dependencies:
- python=3.8
- asgiref==3.3.1
- Django==3.1.7
- pytz==2021.1
- sqlparse==0.4.1
Dockerfile
FROM continuumio/miniconda3
WORKDIR /app
COPY environment.yml .
RUN conda env create -f environment.yml
SHELL ["conda","run","-n","myenv","/bin/bash","-c"]
COPY . .
#start server inside container
CMD ["conda", "run", "--no-capture-output","-n", "myenv", "python3","manage.py", "runserver","0.0.0.0:8000"]
From the current directory, build the image:
docker build --tag python-django .
Run the image:
docker run --publish 8000:8000 python-django
Related
I created an app using React and Node/Express, and I have a problem when I run the Docker image.
The image builds successfully, but when I run it, it fails because of an ./aws/config problem.
Docker build:
docker build -t test2 .
[+] Building 100.0s (11/11) FINISHED
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 210B 0.1s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/node:14-alpine 2.6s
=> [auth] library/node:pull token for registry-1.docker.io 0.0s
=> [internal] load build context 4.1s
=> => transferring context: 3.35MB 3.8s
=> CACHED [1/5] FROM docker.io/library/node:14-alpine@sha256:4aff4ba0da347e51561587eba037 0.0s
=> [2/5] COPY . /src 12.6s
=> [3/5] WORKDIR /src 0.0s
=> [4/5] RUN npm install 54.3s
=> [5/5] RUN npm run build 17.1s
=> exporting to image 9.1s
=> => exporting layers 9.1s
=> => writing image sha256:d7b426fed3e0fc05947bdc966ad6924e15882b9607a4f89171b472cb6e3719 0.0s
=> => naming to docker.io/library/test2
Docker run :
docker container run -p 3000:3000 --rm test2
/src/node_modules/aws-sdk/lib/node_loader.js:133
if (fileInfo.isConfig) throw err;
^
Error: ENOENT: no such file or directory, open '/root/.aws/config'
at Object.openSync (fs.js:498:3)
at Object.readFileSync (fs.js:394:35)
at Object.readFileSync (/src/node_modules/aws-sdk/lib/util.js:95:26)
at parseFile (/src/node_modules/aws-sdk/lib/shared-ini/ini-loader.js:6:38)
at IniLoader.loadFrom (/src/node_modules/aws-sdk/lib/shared-ini/ini-loader.js:72:25)
at getRegion (/src/node_modules/aws-sdk/lib/node_loader.js:131:32)
at Config.region (/src/node_modules/aws-sdk/lib/node_loader.js:186:18)
at Config.set (/src/node_modules/aws-sdk/lib/config.js:600:39)
at Config.<anonymous> (/src/node_modules/aws-sdk/lib/config.js:359:12)
at Config.each (/src/node_modules/aws-sdk/lib/util.js:520:32) {
errno: -2,
syscall: 'open',
code: 'ENOENT',
path: '/root/.aws/config'
}
Dockerfile:
FROM node:14-alpine
COPY . /src
WORKDIR /src
RUN npm install
RUN npm run build
EXPOSE 3000
CMD ["node", "server.js"]
When you are using the AWS SDK, you have to load your configuration. Refer to this link:
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/loading-node-credentials-json-file.html
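For reference, the linked guide loads credentials from a JSON file via AWS.config.loadFromPath. A minimal sketch of that file (all values are placeholders):

```json
{
  "accessKeyId": "YOUR_ACCESS_KEY_ID",
  "secretAccessKey": "YOUR_SECRET_ACCESS_KEY",
  "region": "us-east-1"
}
```

Alternatively, setting the AWS_REGION environment variable in the container stops the SDK from falling back to ~/.aws/config just to find the region.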
On startup, the following error appears (OS: Windows 10). How do I solve this problem?
(venv) C:\shop>docker-compose up --build
[+] Building 0.1s (2/2) FINISHED
=> [internal] load build definition from shop 0.1s
=> => transferring dockerfile: 1.78MB 0.0s
=> [internal] load .dockerignore 0.1s
=> => transferring context: 2B 0.0s
failed to solve: rpc error: code = Unknown desc = failed to solve with frontend dockerfile.v0: failed to read dockerfile: read /var/lib/docker/tmp/buildkit-mount057531924/shop: is a directory
Dockerfile and docker-compose are in the root of the project.
dima@DESKTOP-1BLNH42:/mnt/c/shop$ ls
Dockerfile  account  blog  cart  discount_system  docker-compose.yaml  favorites
loyalty_program  manage.py  orders  projectshop  requirements.txt  search  shop  venv
Dockerfile:
FROM python:3.9
RUN apt-get update -y
RUN apt-get upgrade -y
WORKDIR /app
COPY ./requirements.txt ./
RUN pip install -r requirements.txt
COPY . ./src
CMD ['python3', './src/manage.py', 'runserver', '0.0.0.0:8000']
docker-compose:
version: '3.9'
services:
  rabbitmq:
    image: rabbitmq
    restart: always
  web:
    restart: always
    build:
      context: ./shop
    ports:
      - 8000:8000
    command: ['python3', './src/manage.py', 'runserver', '0.0.0.0:8000']
    depends_on:
      - pg_db
  pg_db:
    image: postgres:14
    volumes:
      - postgres_data:/var/lib/postgresql/data/
volumes:
  postgres_data:
You may be able to build without using the new Docker BuildKit:
DOCKER_BUILDKIT=0 docker-compose build
docker-compose up
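A second thing worth checking: the ls output shows the Dockerfile at the project root, while the compose file builds with context: ./shop, so BuildKit may be handed the wrong path. A sketch pointing the build at the directory that actually contains the Dockerfile (an assumption about the intended layout):

```yaml
  web:
    build:
      context: .
```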
How do I install (anything) on debian: 'apt is unknown instruction' in Dockerfile
If I search: [how to install wget on debian]
I get articles that say:
$sudo apt install wget
So I try to do that in Docker:
FROM debain
apt install wget
and I get this error
/full/path/to/current/working/directory
[+] Building 0.1s (2/2) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 102B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
failed to solve with frontend dockerfile.v0: failed to create LLB definition: dockerfile parse error line 2: unknown instruction:
APT
/full/path/to/current/working/directory
--
What could be the problem here?
You need to add the RUN instruction, like this:
FROM debian
RUN apt install wget
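One caveat: in a fresh debian image the apt package lists are empty, so the install may fail until they are fetched, and unattended installs need -y. A common pattern (a sketch, not taken from the original question):

```dockerfile
FROM debian
RUN apt-get update && apt-get install -y wget
```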
Unable to create a Docker image using Git Bash (Windows 10)
I created the following Dockerfile:
FROM debian:sid
RUN apt-get -y update
RUN apt-get install nano
CMD ["bin/nano", "/tmp/notes"]
$ docker build -t example .
I get this error:
$ docker build -t example .
[+] Building 0.0s (3/3) FINISHED
=> [internal] load build definition from Dockerfile
=> => transferring dockerfile: 31B
=> [internal] load .dockerignore
=> => transferring context: 2B
=> ERROR [internal] load metadata for docker.io/library/debian:sid
------
> [internal] load metadata for docker.io/library/debian:sid:
------
failed to solve with frontend dockerfile.v0: failed to create LLB definition: failed to authorize: rpc error: code = Unknown desc = failed to fetch oauth token: Get https://auth.docker.io/token?scope=repository%3Alibrary%2Fdebian%3Apull&service=registry.docker.io: dial tcp 3.211.199.249:443: i/o timeout
Steps tried: restarting the terminal, running from cmd, restarting Docker Desktop.
No go.
Please advise, thank you.
The error mentions failed to fetch oauth token
Have you tried to run
docker login
in your terminal prior to running the build command?
I am trying to use CodePipeline to build a Docker image that will run on ARM-64 Graviton2 processors. I have a custom build file as such:
#########
# Build Spec
#
# The build spec is used to build the image in code deploy. When using AWS
# CodePipeline, use this customized buildspec.
#
#########
version: 0.2
run-as: root
artifacts:
  files:
    - Dockerrun.aws.json
    - imagedefinitions.json
phases:
  install:
    runtime-versions:
      php: 7.4
  build:
    commands:
      - echo Build started on `date`
      - cp app/config/config.sample.php app/config/config.php
  post_build:
    commands:
      - echo Build completed on `date`
      - which aws
      - AWS_PASSWORD="$(aws ecr get-login-password --region us-east-1)"
      - docker build -t live -f docker/live/Dockerfile .
      - docker login -u AWS -p $AWS_PASSWORD xxxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com
      - docker tag live:latest xxxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/live:latest
      - docker push xxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/live:latest
      - mv docker/Dockerrun.aws.json Dockerrun.aws.json
      - echo Pushing the Docker image...
      - echo Writing image definitions file...
      - printf '[{"name":"live","imageUri":"%s"}]' xxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/live:latest > imagedefinitions.json
It works totally fine! But when I change the build command to
docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7
it completely fails. Here is how it looks:
#########
# Build Spec
#
# The build spec is used to build the image in code deploy. When using AWS
# CodePipeline, use this customized buildspec.
#
#########
version: 0.2
run-as: root
artifacts:
  files:
    - Dockerrun.aws.json
    - imagedefinitions.json
phases:
  install:
    runtime-versions:
      php: 7.4
  build:
    commands:
      - echo Build started on `date`
      - cp app/config/config.sample.php app/config/config.php
  post_build:
    commands:
      - echo Build completed on `date`
      - which aws
      - AWS_PASSWORD="$(aws ecr get-login-password --region us-east-1)"
      - docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 -t live -f docker/live/Dockerfile .
      - docker login -u AWS -p $AWS_PASSWORD xxxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com
      - docker tag live:latest xxxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/live:latest
      - docker push xxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/live:latest
      - mv docker/Dockerrun.aws.json Dockerrun.aws.json
      - echo Pushing the Docker image...
      - echo Writing image definitions file...
      - printf '[{"name":"live","imageUri":"%s"}]' xxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/live:latest > imagedefinitions.json
The failure error message is:
[Container] 2020/11/09 00:19:02 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 -t live -f docker/live/Dockerfile .. Reason: exit status 125
What am I doing wrong in getting buildx working?
It looks like buildx isn't installed or isn't enabled. You have to enable experimental features in both the daemon and the CLI. The documentation says it's bundled with 19.03, but apparently some distributions don't include it and it still has to be installed. I wasn't able to find the information below in the documentation; I had to piece it together by searching and by trial and error.
The steps that worked for me I found in this issue:
wget https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-amd64
chmod a+x buildx-v0.5.1.linux-amd64
mkdir -p ~/.docker/cli-plugins
mv buildx-v0.5.1.linux-amd64 ~/.docker/cli-plugins/docker-buildx
cat <<EOF >~/.docker/config.json
{
"experimental": "enabled"
}
EOF
cat <<EOF | sudo tee /etc/docker/daemon.json
{
"experimental": true
}
EOF
sudo systemctl restart docker
docker buildx create --use
Notes:
By the time many of you read this, there may be a newer version released. Check the release on GitHub.
That snippet will overwrite any existing content in /etc/docker/daemon.json and ~/.docker/config.json. You may want to check that those files do not exist already.
I'm not familiar with CodePipelines, but I would hope that these steps or some derivative will work to get you unstuck.
To test this out, from a fresh Ubuntu 20.04 install, I ran the following:
sudo apt update
sudo apt install -y docker.io
sudo usermod -a -G docker ubuntu
logout
Next, log in again and run the commands from the snippet above.
cat <<EOF >Dockerfile
FROM ubuntu
RUN arch && sleep 10
EOF
docker buildx build --platform linux/arm64,linux/amd64 .
ubuntu@ip-172-31-94-5:~$ docker buildx build --platform linux/arm64,linux/amd64 .
WARN[0000] No output specified for docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load
[+] Building 5.9s (6/8)
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 70B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [linux/amd64 internal] load metadata for docker.io/library/ubuntu:latest 1.9s
=> [linux/arm64 internal] load metadata for docker.io/library/ubuntu:latest 1.2s
=> [linux/amd64 1/2] FROM docker.io/library/ubuntu@sha256:c95a8e48bf88e9849f3e0f723d9f49fa12c5a00cfc6e60d2bc99d87555295e4c 1.0s
=> => resolve docker.io/library/ubuntu@sha256:c95a8e48bf88e9849f3e0f723d9f49fa12c5a00cfc6e60d2bc99d87555295e4c 0.0s
=> => sha256:2c2d948710f21ad82dce71743b1654b45acb5c059cf5c19da491582cef6f2601 162B / 162B 0.0s
=> => sha256:14428a6d4bcdba49a64127900a0691fb00a3f329aced25eb77e3b65646638f8d 847B / 847B 0.1s
=> => sha256:da7391352a9bb76b292a568c066aa4c3cbae8d494e6a3c68e3c596d34f7c75f8 28.56MB / 28.56MB 0.3s
=> => extracting sha256:da7391352a9bb76b292a568c066aa4c3cbae8d494e6a3c68e3c596d34f7c75f8 0.6s
=> => extracting sha256:14428a6d4bcdba49a64127900a0691fb00a3f329aced25eb77e3b65646638f8d 0.0s
=> => extracting sha256:2c2d948710f21ad82dce71743b1654b45acb5c059cf5c19da491582cef6f2601 0.0s
=> [linux/arm64 1/2] FROM docker.io/library/ubuntu@sha256:c95a8e48bf88e9849f3e0f723d9f49fa12c5a00cfc6e60d2bc99d87555295e4c 1.0s
=> => resolve docker.io/library/ubuntu@sha256:c95a8e48bf88e9849f3e0f723d9f49fa12c5a00cfc6e60d2bc99d87555295e4c 0.0s
=> => sha256:e9c66f1fb5a2d6587841797a3b0d4c2d0fd0b7ccd867e55a1314cee2e90ad47d 848B / 848B 0.0s
=> => sha256:94362ba2c285844f83a1b1e2dac5217b0426427f8bb809af534b5f4d751e298c 188B / 188B 0.1s
=> => sha256:a970164f39c1a46f71b3615bc9d5b6710832766b530d9179db8e36563f705abb 27.17MB / 27.17MB 0.4s
=> => extracting sha256:a970164f39c1a46f71b3615bc9d5b6710832766b530d9179db8e36563f705abb 0.5s
=> => extracting sha256:e9c66f1fb5a2d6587841797a3b0d4c2d0fd0b7ccd867e55a1314cee2e90ad47d 0.0s
=> => extracting sha256:94362ba2c285844f83a1b1e2dac5217b0426427f8bb809af534b5f4d751e298c 0.0s
=> [linux/amd64 2/2] RUN arch && sleep 10 2.8s
=> => # x86_64
=> [linux/arm64 2/2] RUN arch && sleep 10 2.7s
=> => # aarch64