I am getting the following error and have not been able to resolve it with a Google search. Can someone assist, please?
MacBook Pro - Monterey 12.6.3
Docker version 20.10.22, build 3a2c30b
Complete error below for docker build -t webserver .:
[+] Building 0.0s (1/2)
=> ERROR [internal] load build definition from Dockerfile a 0.0s
=> transferring dockerfile: 40B 0.0s
I have tried uninstalling and reinstalling, but that did not correct the issue.
Any help is appreciated.
I was expecting the image to build.
I am new to Django and am trying to follow the book Django for Professionals 3.1 by William S. Vincent. In this context, I am trying to move a simple Django project, which currently lives on my system (macOS) in a conda environment managed through the PyCharm IDE, to a Docker container.
The Problem
The book uses pipenv for the project and suggests entering the following code in the Dockerfile:
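From memory, it is roughly this (I am reproducing it here as an approximation, so details such as the exact base image tag may differ from the book):
# Pull base image
FROM python:3.8
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Set work directory
WORKDIR /code
# Install dependencies with pipenv
COPY Pipfile Pipfile.lock /code/
RUN pip install pipenv && pipenv install --system
# Copy project
COPY . /code/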
However, since I am using a conda environment for the project, I cannot use the above code in my Dockerfile.
What I Tried
Step 1
I started by running the following command to create an environment.yml file containing all the packages that the project uses.
(django_for_professionals_31) My-MacBook-Pro:django_for_professionals_31_ch1 me$ conda env export --no-builds > environment.yml
My environment.yml file looks like the following*:
name: django_for_professionals_31
channels:
- defaults
dependencies:
- asgiref=3.4.1
- bzip2=1.0.8
- ca-certificates=2022.4.26
- certifi=2022.5.18.1
- django=3.2.5
- krb5=1.19.2
- libedit=3.1.20210910
- libffi=3.3
- libpq=12.9
- ncurses=6.3
- openssl=1.1.1o
- pip=21.2.4
- psycopg2=2.8.6
- python=3.10.4
- pytz=2021.3
- readline=8.1.2
- setuptools=61.2.0
- sqlite=3.38.3
- sqlparse=0.4.1
- tk=8.6.12
- typing_extensions=4.1.1
- tzdata=2022a
- wheel=0.37.1
- xz=5.2.5
- zlib=1.2.12
prefix: /opt/anaconda3/envs/django_for_professionals_31
Step 2
Then, based on this tutorial, I tried to write my Dockerfile, as shown below:
# Pull base image
FROM continuumio/miniconda3
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Set work directory
WORKDIR /code
# Create the environment:
COPY environment.yml .
RUN conda env create -f environment.yml
# Make RUN commands use the new environment:
SHELL ["conda", "run", "-n", "django_for_professionals_31", "/bin/bash", "-c"]
# Demonstrate the environment is activated:
RUN echo "Make sure django is installed:"
RUN python -c "import django"
# The code to run when container is started:
COPY run.py .
ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "django_for_professionals_31", "python", "run.py"]
# Copy project
COPY . /code/
Thereafter, I ran the command to build a docker image. The command and output I got are given below:
(django_for_professionals_31) My-MacBook-Pro:django_for_professionals_31_ch1 me$ docker build .
[+] Building 3.6s (12/13)
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 737B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/continuumio/miniconda3:latest 3.3s
=> [auth] continuumio/miniconda3:pull token for registry-1.docker.io 0.0s
=> [1/8] FROM docker.io/continuumio/miniconda3@sha256:24103733efebe6d610d868ab16a6f0e5f114c7aad0326a793d946b73af15391d 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 5.29kB 0.0s
=> CACHED [2/8] WORKDIR /code 0.0s
=> CACHED [3/8] COPY environment.yml . 0.0s
=> CACHED [4/8] RUN conda env create -f environment.yml 0.0s
=> CACHED [5/8] RUN echo "Make sure django is installed:" 0.0s
=> CACHED [6/8] RUN python -c "import django" 0.0s
=> ERROR [7/8] COPY run.py . 0.0s
------
> [7/8] COPY run.py .:
------
failed to compute cache key: "/run.py" not found: not found
Step 3
I was not sure what to do next when I got this error, and my first instinct was to comment out the code under the # The code to run when container is started: section of the Dockerfile. So I did that, and my Dockerfile looked like the following:
# Pull base image
FROM continuumio/miniconda3
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Set work directory
WORKDIR /code
# Create the environment:
COPY environment.yml .
RUN conda env create -f environment.yml
# Make RUN commands use the new environment:
SHELL ["conda", "run", "-n", "django_for_professionals_31", "/bin/bash", "-c"]
# Demonstrate the environment is activated:
RUN echo "Make sure django is installed:"
RUN python -c "import django"
# The code to run when container is started:
#COPY run.py .
#ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "django_for_professionals_31", "python", "run.py"]
# Copy project
COPY . /code/
Upon re-running $ docker build . in the terminal, I was able to create the Docker image; the last line of the output was => => writing image sha256:....
Step 4
Then, the book requires creating a docker-compose.yml file. I am using the exact same code as in the book, given below:
version: '3.8'
services:
  web:
    build: .
    command: python /code/manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - 8000:8000
After that, I ran the command to build a Docker container based on the created image. The command and output are shown below.
(django_for_professionals_31) My-MacBook-Pro:django_for_professionals_31_ch1 me$ docker-compose up
[+] Building 3.8s (13/13) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 734B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/continuumio/miniconda3:latest 3.4s
=> [auth] continuumio/miniconda3:pull token for registry-1.docker.io 0.0s
=> [1/7] FROM docker.io/continuumio/miniconda3@sha256:24103733efebe6d610d868ab16a6f0e5f114c7aad0326a793d946b73af15391d 0.0s
=> [internal] load build context 0.1s
=> => transferring context: 5.11kB 0.0s
=> CACHED [2/7] WORKDIR /code 0.0s
=> CACHED [3/7] COPY environment.yml . 0.0s
=> CACHED [4/7] RUN conda env create -f environment.yml 0.0s
=> CACHED [5/7] RUN echo "Make sure django is installed:" 0.0s
=> CACHED [6/7] RUN python -c "import django" 0.0s
=> [7/7] COPY . /code/ 0.1s
=> exporting to image 0.1s
=> => exporting layers 0.1s
=> => writing image sha256:d0166e8db7d10a43f18975955b398d9227ae7c2a217a69a2fe76a9cc869c0917 0.0s
=> => naming to docker.io/library/django_for_professionals_31_ch1_web 0.0s
[+] Running 2/2
⠿ Network django_for_professionals_31_ch1_default Created 0.1s
⠿ Container django_for_professionals_31_ch1-web-1 Created 0.2s
Attaching to django_for_professionals_31_ch1-web-1
django_for_professionals_31_ch1-web-1 | Traceback (most recent call last):
django_for_professionals_31_ch1-web-1 | File "/code/manage.py", line 11, in main
django_for_professionals_31_ch1-web-1 | from django.core.management import execute_from_command_line
django_for_professionals_31_ch1-web-1 | ModuleNotFoundError: No module named 'django'
django_for_professionals_31_ch1-web-1 |
django_for_professionals_31_ch1-web-1 | The above exception was the direct cause of the following exception:
django_for_professionals_31_ch1-web-1 |
django_for_professionals_31_ch1-web-1 | Traceback (most recent call last):
django_for_professionals_31_ch1-web-1 | File "/code/manage.py", line 22, in <module>
django_for_professionals_31_ch1-web-1 | main()
django_for_professionals_31_ch1-web-1 | File "/code/manage.py", line 13, in main
django_for_professionals_31_ch1-web-1 | raise ImportError(
django_for_professionals_31_ch1-web-1 | ImportError: Couldn't import Django. Are you sure it's installed and available on your PYTHONPATH environment variable? Did you forget to activate a virtual environment?
django_for_professionals_31_ch1-web-1 exited with code 1
From the above, it seems to me (though I could be wrong) that after commenting out the two lines in the Dockerfile, the conda environment does not get activated in the Docker container, while including those two lines leads to the failed to compute cache key: "/run.py" not found: not found error, which prevents the image from being created at all.
How can I resolve this issue, create a Docker image, and use it to build a container for my project? Any help would be much appreciated. Thanks.
*Note: I removed the libcxx=12.0.0 package from the environment.yml file because, with it, I got the following error upon running $ docker build . in the terminal. I took the advice to remove this package after reading this:
> [4/7] RUN conda env create -f environment.yml:
#9 1.036 Collecting package metadata (repodata.json): ...working... done
#9 8.859 Solving environment: ...working... failed
#9 8.874
#9 8.874 ResolvePackageNotFound:
#9 8.874 - libcxx=12.0.0
#9 8.874
Try this:
environment.yml
name: myenv
channels:
- conda-forge
dependencies:
- python=3.8
- asgiref==3.3.1
- Django==3.1.7
- pytz==2021.1
- sqlparse==0.4.1
Dockerfile
FROM continuumio/miniconda3
WORKDIR /app
COPY environment.yml .
RUN conda env create -f environment.yml
SHELL ["conda","run","-n","myenv","/bin/bash","-c"]
COPY . .
#start server inside container
CMD ["conda", "run", "--no-capture-output","-n", "myenv", "python3","manage.py", "runserver","0.0.0.0:8000"]
From the current directory, build the image:
docker build --tag python-django .
Run the image:
docker run --publish 8000:8000 python-django
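If you keep the docker-compose.yml from your question, its command also has to run inside the conda environment, otherwise you will hit the same ModuleNotFoundError. A minimal sketch, assuming the myenv environment name and the /app workdir from the Dockerfile above (so the bind mount target changes from /code to /app):
version: '3.8'
services:
  web:
    build: .
    command: conda run --no-capture-output -n myenv python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/app
    ports:
      - 8000:8000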
How do I install (anything) on debian: 'apt is unknown instruction' in Dockerfile
If I search: [how to install wget on debian]
I get articles that say:
$sudo apt install wget
So I try to do that in Docker:
FROM debain
apt install wget
and I get this error
/full/path/to/current/working/directory
[+] Building 0.1s (2/2) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 102B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
failed to solve with frontend dockerfile.v0: failed to create LLB definition: dockerfile parse error line 2: unknown instruction:
APT
/full/path/to/current/working/directory
--
What could be the problem here?
You need to add the RUN instruction, like this:
FROM debian
RUN apt install wget
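Note that in a fresh debian image the package index is empty and apt will prompt for confirmation, so in practice the RUN line usually needs an update step and the -y flag, something like:
FROM debian
# Refresh the package index and install wget non-interactively
RUN apt-get update && apt-get install -y wget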
I am trying to deploy a Windows container to AWS ECR using GitLab CI.
Here is the GitLab CI YAML file:
variables:
  AWS_REGISTRY: ****************.amazonaws.com/devops
  AWS_DEFAULT_REGION: *****
  APP_NAME: devops
windows:
  stage: build
  tags:
    - prod
  before_script:
    - ./docker_install.sh > /dev/null
  script:
    - docker build -t ${AWS_REGISTRY}/${CI_PROJECT_PATH}
    - docker push ${AWS_REGISTRY}/${CI_PROJECT_PATH}
The Dockerfile is:
FROM mcr.microsoft.com/windows/servercore:ltsc2019
CMD [ "cmd" ]
The error is:
Running with gitlab-runner 13.8.0 (*****)
on *********-aws-gitlab-runner-prod ******
Resolving secrets
00:00
Preparing the "docker" executor
00:02
Using Docker executor with image alpine:latest ...
Pulling docker image alpine:latest ...
Using docker image sha256:*************** for alpine:latest with digest alpine@sha256:********************* ...
Preparing environment
00:01
Running on runner-***************** via ******************.compute.internal...
Getting source from Git repository
00:02
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in /builds/jostens/devops/ci-images/docker-base-windows-2019-std-core/.git/
Checking out 76498ebe as main...
Skipping Git submodules setup
Executing "step_script" stage of the job script
00:00
/bin/sh: eval: line 110: docker: not found
$ docker build -t ${AWS_REGISTRY}/${CI_PROJECT_PATH}:${CI_COMMIT_REF_SLUG} .
Cleaning up file based variables
00:01
ERROR: Job failed: exit code 127
Please help/advise.
I am trying to use CodePipeline to build a Docker image that will run on ARM-64 Graviton2 processors. I have a custom build file as such:
#########
# Build Spec
#
# The build spec is used to build the image in code deploy. When using AWS
# CodePipeline, use this customized buildspec.
#
#########
version: 0.2
run-as: root
artifacts:
  files:
    - Dockerrun.aws.json
    - imagedefinitions.json
phases:
  install:
    runtime-versions:
      php: 7.4
  build:
    commands:
      - echo Build started on `date`
      - cp app/config/config.sample.php app/config/config.php
  post_build:
    commands:
      - echo Build completed on `date`
      - which aws
      - AWS_PASSWORD="$(aws ecr get-login-password --region us-east-1)"
      - docker build -t live -f docker/live/Dockerfile .
      - docker login -u AWS -p $AWS_PASSWORD xxxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com
      - docker tag live:latest xxxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/live:latest
      - docker push xxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/live:latest
      - mv docker/Dockerrun.aws.json Dockerrun.aws.json
      - echo Pushing the Docker image...
      - echo Writing image definitions file...
      - printf '[{"name":"live","imageUri":"%s"}]' xxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/live:latest > imagedefinitions.json
This works totally fine! But when I add/change
docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7
it completely fails. Here is how it looks:
#########
# Build Spec
#
# The build spec is used to build the image in code deploy. When using AWS
# CodePipeline, use this customized buildspec.
#
#########
version: 0.2
run-as: root
artifacts:
  files:
    - Dockerrun.aws.json
    - imagedefinitions.json
phases:
  install:
    runtime-versions:
      php: 7.4
  build:
    commands:
      - echo Build started on `date`
      - cp app/config/config.sample.php app/config/config.php
  post_build:
    commands:
      - echo Build completed on `date`
      - which aws
      - AWS_PASSWORD="$(aws ecr get-login-password --region us-east-1)"
      - docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 -t live -f docker/live/Dockerfile .
      - docker login -u AWS -p $AWS_PASSWORD xxxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com
      - docker tag live:latest xxxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/live:latest
      - docker push xxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/live:latest
      - mv docker/Dockerrun.aws.json Dockerrun.aws.json
      - echo Pushing the Docker image...
      - echo Writing image definitions file...
      - printf '[{"name":"live","imageUri":"%s"}]' xxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/live:latest > imagedefinitions.json
The failure error message is:
[Container] 2020/11/09 00:19:02 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 -t live -f docker/live/Dockerfile .. Reason: exit status 125
What am I doing wrong that keeps buildx from working?
It looks like buildx isn't installed or isn't enabled. You have to enable experimental features in both the daemon and the CLI. The documentation says it's bundled with 19.03, but apparently some distributions don't include it and it still has to be installed. I wasn't able to find the information below in the documentation and had to piece it together by searching and by trial and error.
The steps that worked for me, which I found in this issue, are:
wget https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-amd64
chmod a+x buildx-v0.5.1.linux-amd64
mkdir -p ~/.docker/cli-plugins
mv buildx-v0.5.1.linux-amd64 ~/.docker/cli-plugins/docker-buildx
cat <<EOF >~/.docker/config.json
{
"experimental": "enabled"
}
EOF
cat <<EOF | sudo tee /etc/docker/daemon.json
{
"experimental": true
}
EOF
sudo systemctl restart docker
docker buildx create --use
Notes:
By the time many of you read this, there may be a newer version released. Check the release on GitHub.
That snippet will overwrite any existing content in /etc/docker/daemon.json and ~/.docker/config.json. You may want to check that those files do not exist already.
I'm not familiar with CodePipelines, but I would hope that these steps or some derivative will work to get you unstuck.
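As a rough, untested sketch of what such a derivative might look like in the buildspec itself (the buildx version and URL are simply the ones from the snippet above, and tonistiigi/binfmt is one common way to register the QEMU emulators needed for cross-platform builds; none of this comes from the original buildspec):
env:
  variables:
    DOCKER_CLI_EXPERIMENTAL: enabled
phases:
  install:
    commands:
      # Install the buildx CLI plugin (version/URL assumed from the release above)
      - mkdir -p ~/.docker/cli-plugins
      - wget -q https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-amd64 -O ~/.docker/cli-plugins/docker-buildx
      - chmod a+x ~/.docker/cli-plugins/docker-buildx
      # Register QEMU emulators so non-native platforms can be built
      - docker run --privileged --rm tonistiigi/binfmt --install all
      - docker buildx create --use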
To test this out, from a fresh Ubuntu 20.04 install, I ran the following:
sudo apt update
sudo apt install -y docker.io
sudo usermod -a -G docker ubuntu
logout
Next, log in again and run the commands from the snippet above.
cat <<EOF >Dockerfile
FROM ubuntu
RUN arch && sleep 10
EOF
docker buildx build --platform linux/arm64,linux/amd64 .
ubuntu@ip-172-31-94-5:~$ docker buildx build --platform linux/arm64,linux/amd64 .
WARN[0000] No output specified for docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load
[+] Building 5.9s (6/8)
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 70B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [linux/amd64 internal] load metadata for docker.io/library/ubuntu:latest 1.9s
=> [linux/arm64 internal] load metadata for docker.io/library/ubuntu:latest 1.2s
=> [linux/amd64 1/2] FROM docker.io/library/ubuntu@sha256:c95a8e48bf88e9849f3e0f723d9f49fa12c5a00cfc6e60d2bc99d87555295e4c 1.0s
=> => resolve docker.io/library/ubuntu@sha256:c95a8e48bf88e9849f3e0f723d9f49fa12c5a00cfc6e60d2bc99d87555295e4c 0.0s
=> => sha256:2c2d948710f21ad82dce71743b1654b45acb5c059cf5c19da491582cef6f2601 162B / 162B 0.0s
=> => sha256:14428a6d4bcdba49a64127900a0691fb00a3f329aced25eb77e3b65646638f8d 847B / 847B 0.1s
=> => sha256:da7391352a9bb76b292a568c066aa4c3cbae8d494e6a3c68e3c596d34f7c75f8 28.56MB / 28.56MB 0.3s
=> => extracting sha256:da7391352a9bb76b292a568c066aa4c3cbae8d494e6a3c68e3c596d34f7c75f8 0.6s
=> => extracting sha256:14428a6d4bcdba49a64127900a0691fb00a3f329aced25eb77e3b65646638f8d 0.0s
=> => extracting sha256:2c2d948710f21ad82dce71743b1654b45acb5c059cf5c19da491582cef6f2601 0.0s
=> [linux/arm64 1/2] FROM docker.io/library/ubuntu@sha256:c95a8e48bf88e9849f3e0f723d9f49fa12c5a00cfc6e60d2bc99d87555295e4c 1.0s
=> => resolve docker.io/library/ubuntu@sha256:c95a8e48bf88e9849f3e0f723d9f49fa12c5a00cfc6e60d2bc99d87555295e4c 0.0s
=> => sha256:e9c66f1fb5a2d6587841797a3b0d4c2d0fd0b7ccd867e55a1314cee2e90ad47d 848B / 848B 0.0s
=> => sha256:94362ba2c285844f83a1b1e2dac5217b0426427f8bb809af534b5f4d751e298c 188B / 188B 0.1s
=> => sha256:a970164f39c1a46f71b3615bc9d5b6710832766b530d9179db8e36563f705abb 27.17MB / 27.17MB 0.4s
=> => extracting sha256:a970164f39c1a46f71b3615bc9d5b6710832766b530d9179db8e36563f705abb 0.5s
=> => extracting sha256:e9c66f1fb5a2d6587841797a3b0d4c2d0fd0b7ccd867e55a1314cee2e90ad47d 0.0s
=> => extracting sha256:94362ba2c285844f83a1b1e2dac5217b0426427f8bb809af534b5f4d751e298c 0.0s
=> [linux/amd64 2/2] RUN arch && sleep 10 2.8s
=> => # x86_64
=> [linux/arm64 2/2] RUN arch && sleep 10 2.7s
=> => # aarch64
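As the WARN line notes, the docker-container driver only keeps this result in the build cache. To actually publish a multi-arch image, you would tag it and add --push, for example (the registry/image name here is just the masked placeholder from the question):
docker buildx build --platform linux/arm64,linux/amd64 \
  -t xxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/live:latest \
  --push .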