I have a Dockerfile like this:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
ADD reports /code/
RUN pip install -r requirements.txt
ADD . /code/
RUN ls -l /code/reports/report/manage.py # gives expected result
RUN ls -l /code/reports/build_static/ # gives expected result
RUN python /code/reports/report/manage.py build full_report.views.RenderView # does not work
Everything works fine except for the last command, which runs a Python package (django-bakery) through manage.py build. I don't get any errors.
This command should output some files into the build_static directory in the container.
If I SSH into the container and run the command manually, it works. I used full paths with /code/ to make sure they match, and created all the necessary directories beforehand.
This is how I build the container:
docker-compose run django /bin/bash
This is my docker-compose:
version: '3'
services:
  django:
    build: .
    volumes:
      - .:/code
    ports:
      - "8000:8000"
I wonder how come it is working when I run the command manually through bash inside the container, but not working with the command in the dockerfile.
Thanks!
Update (it seems that the files are created, but when I check on them afterwards they aren't there):
Step 12/12 : RUN ls -l /code/reports/build_static/
---> Running in e294563d26d5
total 11080
-rw-r--r-- 1 root root 11339956 Apr 30 10:53 index.html
drwxr-xr-x 7 root root 4096 Apr 30 10:53 static
Removing intermediate container e294563d26d5
---> b8e72da8ee5c
Successfully built b8e72da8ee5c
Successfully tagged image_django:latest
WARNING: Image for service django was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
root@7483853ecc45:/code# ls -l reports/build_static/
total 0
Try the following steps and let me know the output:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
ADD reports /code/
RUN pip install -r requirements.txt
ADD . /code/
RUN ls -l /code/reports/report/manage.py # gives expected result
RUN ls -l /code/reports/build_static/ # gives expected result
RUN python /code/reports/report/manage.py build full_report.views.RenderView
RUN ls -l /code/reports/build_static/ # should give you expected list of files
Give me the output of the last step; I'll help you out based on it.
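Also worth checking (an assumption based on your docker-compose.yml): the - .:/code bind mount will hide whatever the build wrote into /code when you docker-compose run. You can compare against the plain image, without the mount:
docker run --rm image_django ls -l /code/reports/build_static/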
The following Dockerfile copies your current directory's content into a /code folder (creating it if it doesn't exist), then sets it as the workdir.
The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile.
Then, to keep your Docker image as small as possible, we group all the commands into a single RUN instruction, which reduces the number of layers.
FROM python:3
ENV PYTHONUNBUFFERED 1
COPY . /code
WORKDIR /code
RUN pip install -r requirements.txt && \
    ls -l reports/report/manage.py && \
    ls -l reports/build_static/ && \
    python reports/report/manage.py build full_report.views.RenderView
I haven't tried it with a full Django app example, but it should help you narrow down the problem!
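One way to test it (report-test is just an arbitrary tag) is to build and run the image directly, bypassing the compose bind mount:
docker build -t report-test .
docker run --rm report-test ls -l reports/build_static/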
Related
I want to spin up a low-configuration containerized service, for which I created the Dockerfile below and build it with:
docker build -t apache/druid_nano:0.20.2 -f Dockerfile .
FROM ubuntu:16.04
# Install Java JDK 8
RUN apt-get update \
    && apt-get install -y openjdk-8-jdk
RUN mkdir /app
WORKDIR /app
COPY apache-druid-0.20.2-bin.tar.gz /app
RUN tar xvzf apache-druid-0.20.2-bin.tar.gz
WORKDIR /app/apache-druid-0.20.2
EXPOSE <PORT_NUMBERS>
ENTRYPOINT ["/bin/start/start-nano-quickstart"]
When I start the container using the command docker run -d -p 8888:8888 apache/druid_nano:0.20.2, I get an error as below:
/bin/start-nano-quickstart: no such file or directory
I removed the ENTRYPOINT command and built the image again just to check if the file exists in the bin directory inside the container. There is a file start-nano-quickstart under the bin directory inside the container.
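For reference, that check can be reproduced without editing the Dockerfile by overriding the entrypoint (using the image tag from above):
docker run --rm --entrypoint ls apache/druid_nano:0.20.2 -l bin/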
Am I missing anything here? Please help.
I am trying to upload a Django app to Docker Hub. On the local machine (Ubuntu 18.04) everything works fine, but the Docker Hub build fails because the requirements.txt file cannot be found.
Local machine:
sudo docker-compose build --no-cache
Result (it's okay):
Step 5/7 : COPY . .
---> 5542d55caeae
Step 6/7 : RUN file="$(ls -1 )" && echo $file
---> Running in b85a55aa2640
Dockerfile db.sqlite3 hello_django manage.py requirements.txt venv
Removing intermediate container b85a55aa2640
---> 532e91546d41
Step 7/7 : RUN pip install -r requirements.txt
---> Running in e940ebf96023
Collecting Django==3.2.2....
But, Docker Hub:
Step 5/7 : COPY . .
---> 852fa937cb0a
Step 6/7 : RUN file="$(ls -1 )" && echo $file
---> Running in 281d9580d608
README.md app config docker-compose.yml
Removing intermediate container 281d9580d608
---> 99eaafb1a55d
Step 7/7 : RUN pip install -r requirements.txt
---> Running in d0e180d83772
ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
Removing intermediate container d0e180d83772
The command '/bin/sh -c pip install -r requirements.txt' returned a non-zero code: 1
app/Dockerfile
FROM python:3.8.3-alpine
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
WORKDIR /code
COPY . .
RUN file="$(ls -1 )" && echo $file
RUN pip install -r requirements.txt
docker-compose.yml
version: '3'
services:
  web:
    build:
      context: app
      dockerfile: Dockerfile
    volumes:
      - ./app/:/code/
    ports:
      - "8000:8000"
    env_file:
      - ./config/.env.dev
    command: python manage.py runserver 0.0.0.0:8000
Project Structure:
README.md
app/
    Dockerfile
    db.sqlite3
    hello_django/
    manage.py
    requirements.txt
    venv/
config/
docker-compose.yml
UPDATE:
Docker Hub is building from GitHub.
The requirements.txt file is in the GitHub repository (in the app folder), but for some reason during the build Docker Hub copies the files from the project root folder, not the contents of the app folder.
Github:
https://github.com/sigalglebru/django-on-docker
The problem is that you need to tell Docker Hub where to find your build context.
When you run docker-compose build locally, docker-compose reads your docker-compose.yml file and knows to build inside the app directory, because you've explicitly set the build context:
build:
  context: app
  dockerfile: Dockerfile
When you build on Docker Hub, by default it will assume the build
context is the top level of your repository. If you set the path to
your Dockerfile to, e.g., app/Dockerfile, this is equivalent to
running:
docker build -f app/Dockerfile .
If you try that, you'll see it fail the same way. Rather than setting the path to the Dockerfile, you need to set the path of the build context to the app directory in Docker Hub's build configuration (look at the "Build Context" column).
When configured correctly, your repository builds on Docker Hub without errors.
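For comparison, the working local equivalent of what docker-compose does with context: app is:
docker build -f app/Dockerfile app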
Thank you, I found a solution:
I just copied the files from ./app to the mounted volume and changed the context slightly, but I still don't understand why it worked fine on the local machine.
Dockerfile:
FROM python:3.8.3-alpine
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
WORKDIR /code
COPY ./app .
RUN pip install -r requirements.txt
docker-compose.yml
version: "3.6"
services:
python:
restart: always
build:
context: .
dockerfile: docker/Dockerfile
expose:
- 8000
ports:
- 8000:8000
command: "python manage.py runserver 0.0.0.0:8000"
I want to include static files generated from python manage.py collectstatic in the Docker image.
For this, I included the following line in my Dockerfile
CMD python manage.py collectstatic --no-input
But since it runs the command in an intermediate container, the generated static files aren't present in the STATIC_ROOT directory. I can see the following lines in the build logs:
Step 13/14 : CMD python manage.py collectstatic --no-input
---> Running in 8ea5efada461
Removing intermediate container 8ea5efada461
---> 67aef71cc7b6
I'd like to include the generated static files in the image. What shall I do to achieve this?
Update (solution)
I was using CMD, but I should instead use the RUN instruction for this task, as the docs say:
The RUN instruction will execute any commands in a new layer on top of the current image and commit the results. The resulting committed image will be used for the next step in the Dockerfile.
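So in my case the line simply becomes:
RUN python manage.py collectstatic --no-input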
You need to copy the output of collectstatic into your final container.
For example, my Dockerfile applies the same concept (this isn't the complete Dockerfile, just the relevant pieces):
# Pull base image
FROM python:3.7.7-slim-buster AS python-base
COPY requirements.txt /requirements.txt
WORKDIR /project
RUN apt-get update && \
    apt-get -y upgrade && \
    pip install --upgrade pip && \
    pip install -r /requirements.txt
FROM node:8 AS frontend-deps-npm
WORKDIR /
COPY ./package.json /package.json
RUN npm install
COPY . /app
WORKDIR /app
RUN /node_modules/gulp/bin/gulp.js
FROM python-base AS frontend-deps
COPY --from=frontend-deps-npm /app /app
WORKDIR /app
RUN python manage.py collectstatic -v 2 --noinput
FROM python-base AS app
COPY . /app
COPY --from=frontend-deps /app/static-collection /app/static-collection
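To double-check that the collected files ended up in the final image, you can list them after the build (myapp is a placeholder tag, and this assumes the final stage has no custom ENTRYPOINT):
docker build -t myapp .
docker run --rm myapp ls /app/static-collection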
Quite simple: with the docker-compose configuration below, no files seem to persist after the build. When I do docker exec -i -t aas-job-hunter_web_1 ls /app -alt, I see nothing.
Here is the (non-)working minimal example: https://github.com/philastrophist/test-docker
I'm on Windows 10, I've allowed mounted drives and enabled the TLS connection. I'm not sure what else to do. The thing that most confuses me is that requirements.txt is clearly copied over (since pip installs everything from it), but it isn't there when I take a look with docker exec.
My directory structure is:
parent/
    website/
        manage.py
        ...
    Dockerfile
    docker-compose.yml
    ...
Dockerfile:
FROM python:3.6
#WORKDIR /app
# By copying over requirements first, we make sure that Docker will cache
# our installed requirements rather than reinstall them on every build
COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
# Now copy in our code, and run it
COPY . /app
EXPOSE 8000
CMD python website/manage.py runserver 0.0.0.0:8000
# CMD tail -f /dev/null # use when testing
docker-compose.yml:
version: '2'
services:
  web:
    build: .
    ports:
      - "8000:8000"
    volumes:
      - .:/app
    links:
      - db
  db:
    image: "postgres:9.6"
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: hunter2
Traceback:
> docker-compose -f docker-compose.yml up --build
Building web
Step 1/6 : FROM python:3.6
---> 0668df180a32
Step 2/6 : COPY requirements.txt /app/requirements.txt
---> Using cache
---> 3073d0bef876
Step 3/6 : RUN pip install -r /app/requirements.txt
---> Using cache
---> 8ad63bbb3de5
Step 4/6 : COPY . /app
---> 16390cdd6c2c
Step 5/6 : EXPOSE 8000
---> Running in f628000e8961
Removing intermediate container f628000e8961
---> 80e6994cfbd2
Step 6/6 : CMD python website/manage.py runserver 0.0.0.0:8000
---> Running in acb6b25eb558
Removing intermediate container acb6b25eb558
---> da8876d78103
Successfully built da8876d78103
Successfully tagged aas-job-hunter_web:latest
Starting aas-job-hunter_db_1 ... done
Recreating aas-job-hunter_web_1 ... done
Attaching to aas-job-hunter_db_1, aas-job-hunter_web_1
db_1 | LOG: database system was shut down at 2019-05-24 21:23:31 UTC
db_1 | LOG: MultiXact member wraparound protections are now enabled
db_1 | LOG: database system is ready to accept connections
web_1 | python: can't open file 'website/manage.py': [Errno 2] No such file or directory
aas-job-hunter_web_1 exited with code 2
Actually, it does copy the files.
Solution 1
Change the CMD to:
CMD python /app/website/manage.py runserver 0.0.0.0:8000
Solution 2
You call WORKDIR before the /app folder is created. So change your Dockerfile to:
FROM python:3.6.2
# By copying over requirements first, we make sure that Docker will cache
# our installed requirements rather than reinstall them on every build
COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
# Now copy in our code, and run it
COPY . /app
WORKDIR /app
#EXPOSE 8000
CMD python ./website/manage.py runserver 0.0.0.0:8000
# CMD tail -f /dev/null # use when testing
And call it after.
Moreover, remember that in your current docker-compose file you are using a bind mount, not a volume, so mounting . over /app will entirely replace the content of /app in your container.
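If you do want the image's copy of /app to stay visible, one option (just a sketch) is to narrow the bind mount so it covers only your source tree instead of all of /app:
    volumes:
      - ./website:/app/website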
Uncomment #WORKDIR /app.
I also cleaned up the other parts a bit to make more use of the WORKDIR.
FROM python:3.6
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000
CMD python website/manage.py runserver 0.0.0.0:8000
I think there is nothing wrong with COPY, but you need to set the working directory to /app, since your manage.py file is inside /app/website, not /website, inside the container.
So I think your Dockerfile should be like this:
FROM python:3.6
RUN mkdir /app
COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
COPY . /app
WORKDIR /app
EXPOSE 8000
CMD python website/manage.py runserver 0.0.0.0:8000
When I was trying to dockerize my Django app, I followed a tutorial telling me to structure my Dockerfile like this:
FROM python:3.6
ENV PYTHONUNBUFFERED 1
COPY . /code/
WORKDIR /code/
RUN pip install pipenv
RUN pipenv install --system
EXPOSE 8000
After I saved that and ran docker build ., the system threw this error:
Warning: --system is intended to be used for pre-existing Pipfile installation, not installation of specific packages. Aborting.
I think it is complaining about the --system flag above, but the tutorial says it's crucial to have it so that my packages are applied to the entire Docker container. I'm new to Docker and even to pipenv, because I took over a previous person's code, and I'm not sure where their Pipfile is or even if they have one. If you have any insights on how to fix this error, thank you in advance.
pipenv --rm
This helped me! I was starting "Django for Beginners" and got this error at the very beginning (I had accidentally deleted Pipfile & Pipfile.lock).
Your warning is telling you that there is no Pipfile in your project dir.
--system is intended to be used for pre-existing Pipfile.
So before running
docker build .
run
pipenv install
in your project folder
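That is:
pipenv install   # creates Pipfile and Pipfile.lock if they don't exist yet
docker build .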
The above solution didn't work for me.
After installing in the virtual env, I also had to explicitly include Pipfile and Pipfile.lock in my Dockerfile:
COPY Pipfile* .
# Install dependencies
RUN pip install pipenv && pipenv install --system
Then rebuild with docker compose:
docker-compose build
You can find more info in this thread.
If you get this error in pipenv:
ERROR: --system is intended to be used for pre-existing Pipfile installation, not installation of specific packages. Aborting.
try:
pipenv check or python3 -m pipenv check
Be careful when using Docker bind mounts!
Summary: In my case, I was using bind mounts in my dev environment, and mounting a bind mount over a non-empty directory hides the contents of the container's directory. This removed the Pipfile and Pipfile.lock from view, which produced the error mentioned above when running the container.
Explanation
Directory structure on the host
> ls project/
docker-compose.yml Dockerfile Pipfile Pipfile.lock app/
Dockerfile
My Dockerfile would copy the contents of the project and then install the dependencies with pipenv, like this:
FROM python:3.8
# ...
COPY Pipfile Pipfile.lock /app/
RUN pipenv install --deploy --ignore-pipfile
COPY ./app /app/
CMD ["pipenv", "run", "uvicorn", "etc..", "--reload"]
Pipfile, Pipfile.lock and the code of ./app would all be in the same /app directory inside the container.
docker-compose.yml
I wanted uvicorn to hot-reload, so I mounted the code in /app inside the container's /app directory.
services:
  app:
    # ...
    volumes:
      - type: bind
        source: ./app
        target: /app
This meant that when I changed the code in /app, the code in the container's /app directory would also change.
Effects
The side effect of this bind mount is that the content mounted on /app "obscured" the content previously copied in there.
Container's content with the bind mount:
> ls app/
code1.py code2.py
Container's content without the bind mount:
> ls app/
Pipfile Pipfile.lock code1.py code2.py
Solution
Either make sure that you include the Pipfile and Pipfile.lock as well when mounting the bind mount, or make sure that you COPY these 2 files to a directory that won't get overwritten by a bind mount.
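As a minimal sketch of the second option (the /deps path is arbitrary, main:app is a placeholder for your actual app module, and --system installs the dependencies into the image's Python so no virtualenv has to survive the mount):
FROM python:3.8
# Keep the Pipfiles at a path the bind mount will not cover
COPY Pipfile Pipfile.lock /deps/
WORKDIR /deps
RUN pip install pipenv && pipenv install --system --deploy
# Now the bind mount over /app can't hide anything pipenv needs
COPY ./app /app/
WORKDIR /app
CMD ["uvicorn", "main:app", "--reload"]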