Volume share from container to host docker-compose - django

I'm trying to share data between a container and the host, so that files created in the container are stored on the host. The data must flow from the container to the host.
My docker-compose.yml
version: "3.3"
services:
django:
image: python:slim
volumes:
- type: volume
source: ./env
target: /usr/local/lib/python3.6/site-packages
volume:
nocopy: true
- ./src:/usr/src/app
ports:
- '80:80'
working_dir: /usr/src/app
command: bash -c "pip install -r requirements.txt && python manage.py runserver"
When I run it, Docker throws this:
ERROR: for django Cannot create container for service django: invalid
bind mount spec
"/Users/gustavoopb/git/adv/env:/usr/local/lib/python3.6/site-packages:nocopy":
invalid volume specification:
'/Users/gustavoopb/git/adv/env:/usr/local/lib/python3.6/site-packages:nocopy':
invalid mount config for type "bind": field VolumeOptions must not be
specified ERROR: Encountered errors while bringing up the project.
https://docs.docker.com/compose/compose-file/#long-syntax-3

You're trying to use the named volume syntax with a bind mount. I'd switch your syntax to:
version: "3.3"
services:
django:
image: python:slim
volumes:
- type: bind
source: ./env
target: /usr/local/lib/python3.6/site-packages
- ./src:/usr/src/app
ports:
- '80:80'
working_dir: /usr/src/app
command: bash -c "pip install -r requirements.txt && python manage.py runserver"
Note the change in the type and the removal of the nocopy option. Copying files from the image out to a host bind mount isn't supported; that's only available with named volumes.
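If you do want the image's site-packages to survive container restarts, the nocopy and other volume options only apply to named volumes declared under the top-level volumes key. A minimal sketch of that variant (here without nocopy, so the image's files populate the volume on first use):
version: "3.3"
services:
  django:
    image: python:slim
    volumes:
      - type: volume
        source: env        # a named volume, not a host path
        target: /usr/local/lib/python3.6/site-packages
      - ./src:/usr/src/app
volumes:
  env:                     # declared at the top level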

My problem was preserving the Python environment when my container goes down. To do this I need to share the env inside the container with the host. I tried the Docker docs suggestion, but it wasn't working:
volume:
  nocopy: true
My solution:
I created a named volume.
version: "2"
services:
django:
image: python:2.7
command: bash -c "pip install -r requirements.txt && python manage.py collectstatic --no-input && python manage.py migrate && python manage.py runserver 0.0.0.0:80"
env_file:
- .env
volumes:
- .:/app
- env:/Library/Python/2.7/site-packages
links:
- database
ports:
- "8000:80"
working_dir: /app
volumes:
env:

The named volume in the docs example is not defined inside the service; it is declared under the top-level volumes key, outside the service. With a host path as the source you get a bind mount, which does not accept volume options, so try removing these lines:
volume:
  nocopy: true

Related

Dockerized django container not producing local migrations file

Question
I am a beginner with Docker; this is the first project I have set up with it and I don't particularly know what I am doing. I would very much appreciate it if someone could give me some advice on the best way to get migrations from a dockerized Django app stored locally.
What I have tried so far
I have a local django project setup with the following file structure:
Project
  .docker
    - Dockerfile
  project
    - data
      - models
        - __init__.py
        - user.py
        - test.py
      - migrations
        - 0001_initial.py
        - 0002_user_role.py
        - ...
    - settings.py
    - ...
  manage.py
  Makefile
  docker-compose.yml
  ...
In the current state the migrations for the test.py model have not been run, so I attempted to do so using docker-compose exec main python manage.py makemigrations. This ran successfully, returning the following:
Migrations for 'data':
  project/data/migrations/0003_test.py
    - Create model Test
But it produced no local file. However, if I explore the file system of the container I can see that the file exists on the container itself.
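(For reference, a file that exists only inside the container can be copied out with docker cp; a sketch, where the in-container path /code/... is an assumption about the image layout:)
# copy the generated migration from the running container to the host
docker cp "$(docker-compose ps -q main)":/code/project/data/migrations/0003_test.py ./project/data/migrations/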
Upon running the following:
docker-compose exec main python manage.py migrate
I receive:
Running migrations:
No migrations to apply.
Your models in app(s): 'data' have changes that are not yet reflected in a migration, and so won't be applied.
Run 'manage.py makemigrations' to make new migrations, and then re-run 'manage.py migrate' to apply them.
I was under the impression that even if this did not create the local file it would at least run the migrations on the container.
Regardless, my intention was that when I run docker-compose exec main python manage.py makemigrations it would store the file locally in the project/data/migrations folder, and then I would just run migrate manually. I can't find much documentation on how to do this; the only post I have seen suggested bind mounts (Migrations files not created in dockerized Django), which I attempted by adding the following to my docker-compose file:
volumes:
  - type: bind
    source: ./data/migrations
    target: /var/lib/migrations_test
but I was struggling to get it to work. Following on from this, I had no idea how to run commands through this volume using docker-compose, and I was questioning whether this was even a good idea, as I had read somewhere that it is not best practice to use bind mounts.
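(For reference, the pattern that post describes usually bind mounts the whole project over the image's code directory, so anything written inside the container, migrations included, lands on the host; a minimal sketch, assuming the code lives at /code in the image:)
services:
  main:
    volumes:
      - .:/code   # the host project dir shadows /code inside the container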
Project setup:
The docker-compose.yml file looking like so:
version: '3.7'
x-common-variables: &common-variables
  ENV: 'DEV'
  DJANGO_SETTINGS_MODULE: 'project.settings'
  DATABASE_NAME: 'postgres'
  DATABASE_USER: 'postgres'
  DATABASE_PASSWORD: 'postgres'
  DATABASE_HOST: 'postgres'
  CELERY_BROKER_URLS: 'redis://redis:6379/0'
volumes:
  postgres:
services:
  main:
    command:
      python manage.py runserver 0.0.0.0:8000
    build:
      context: ./
      dockerfile: .docker/Dockerfile
      target: main
    environment:
      <<: *common-variables
    ports:
      - '8000:8000'
    env_file:
      - dev.env
    networks:
      - default
  postgres:
    image: postgres:13.6
    volumes:
      - postgres:/var/lib/postgresql/data
    ports:
      - '25432:5432'
    environment:
      POSTGRES_PASSWORD: 'postgres'
    command: postgres -c log_min_messages=INFO -c log_statement=all
  wait_for_dependencies:
    image: dadarek/wait-for-dependencies
    environment:
      SLEEP_LENGTH: '0.5'
  redis:
    image: redis:latest
    ports:
      - '16379:6379'
  worker:
    build:
      context: .
      dockerfile: .docker/Dockerfile
      target: main
    command: celery -A project worker -l INFO
    environment:
      <<: *common-variables
    volumes:
      - .:/code/delegated
    env_file:
      - dev.env
    networks:
      - default
  beat:
    build:
      context: .
      dockerfile: .docker/Dockerfile
      target: main
    command: celery -A project beat -l INFO
    environment:
      <<: *common-variables
    volumes:
      - .:/code/delegated
    env_file:
      - dev.env
    networks:
      - default
networks:
  default:
Makefile:
build: pre-run
build:
	docker-compose build --pull
dev-deps: pre-run
dev-deps:
	docker-compose up -d postgres redis
	docker-compose run --rm wait_for_dependencies postgres:5432 redis:6379
migrate: pre-run
migrate:
	docker-compose run --rm main python manage.py migrate
setup: build dev-deps migrate
up: dev-deps
	docker-compose up -d main
Dockerfile:
FROM python:3.10.2 as main
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
RUN mkdir -p /code
WORKDIR /code
ADD . ./
RUN useradd -m -s /bin/bash app
RUN chown -R app:app .
USER app
EXPOSE 8000
Follow up based on diptangsu-goswami's response
I tried adding the following:
volumes:
  - type: bind
    source: C:\dev\Project\project
    target: /code/
This creates an empty directory in my Project folder (C:\dev\Project\project), but the app doesn't run as it cannot find the manage.py file... I assumed this was because it was in the parent directory Project, so I tried again with:
volumes:
  - type: bind
    source: C:\dev\Project
    target: /code/
But the same problem occurred. Why is it creating the empty directory? Surely it should just be binding the existing directory to the container directory? Also, using this method, would I need to change my Dockerfile to not copy the codebase to the container in the first place and just mount it instead?
I managed to fix it by adding the following to my 'main' service in my docker compose:
volumes:
  - .:/code:delegated
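(With that mount in place, the commands from earlier write straight through to the host:)
docker-compose exec main python manage.py makemigrations   # the file now appears in project/data/migrations on the host
docker-compose exec main python manage.py migrate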

Docker pull Django image and run container

So, I have followed this tutorial by Docker to create a Django image.
It completely works on my local machine by just running a docker-compose up command from the root directory of my project.
But, after pushing the image to docker hub https://hub.docker.com/repository/docker/vivanks/firsttry
I am pulling the image to another machine and then running:
docker run -p 8020:8020 vivanks/firsttry
But it's not getting started and showing this error:
EXITED(0)
Can anyone help me on how to pull this image and run it?
My Dockerfile
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
My docker-compose.yml
version: '3'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
As @larsks mentioned in his answer, your problem is that your command is in the Compose file rather than in the Dockerfile.
To run your project on another machine as-is, use the following docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    image: vivanks/firsttry:latest
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
    depends_on:
      - db
If you already added CMD python manage.py runserver 0.0.0.0:8000 to your Dockerfile and rebuilt the image, the above can be further simplified to:
version: '3'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    image: vivanks/firsttry:latest
    ports:
      - "8000:8000"
    depends_on:
      - db
Using docker run will fail in either case, since it won't set up a database.
Edit:
OP, I admire your persistence, but at the same time do not understand the insistence on using Docker CLI rather than docker-compose. I recommend using one of the above docker-compose.yml files to start your app.
Nevertheless, I accept the challenge of running it without docker-compose.
Your application fails to start when you use the docker run command because it tries to connect to the database on host db, which does not exist. In your (and my) docker-compose.yml there is a definition of a service called db. Docker-compose uses that definition to set up a database container for you and makes it available to your application under the hostname db.
To start your application without using docker-compose, you need to manually do everything it does for you automatically (the commands below assume you have added CMD... to your Dockerfile):
docker network create --driver bridge django-test-network
docker run --detach --env POSTGRES_DB=postgres --env POSTGRES_USER=postgres --env POSTGRES_PASSWORD=postgres --network django-test-network --name db postgres:latest
docker run -it --rm --network django-test-network --publish 8080:8000 vivanks/firsttry:latest
The above three commands create a new bridged network, create and start a detached (background) container with a properly configured database connected to that network, and finally create and start an attached (foreground) container from your image, also attached to that new network. Since both containers are on the same non-default bridged network, your application will be able to resolve the hostname db to the internal IP address of the database container and start properly.
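To verify that both containers ended up on the network, docker network inspect lists the attached containers and their addresses:
docker network inspect django-test-network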
Once you shut it down with Ctrl+C, the container with your application will delete itself (as it was started with option --rm), but you need to also manually clean up the rest. To do so run the following commands:
docker stop db
docker rm -v db
docker network remove django-test-network
The first one stops the database container, the second one removes it and its anonymous volume and the third one removes the network.
I hope this explains everything.
Your Dockerfile doesn't specify a CMD or ENTRYPOINT. When you run...
docker run -p 8020:8020 vivanks/firsttry
...the container has nothing to do (it will actually try to start an interactive Python shell, but since you're not allocating a terminal with -t, the shell just exits, successfully). In your docker-compose.yml, you're passing in an explicit command:
command: python manage.py runserver 0.0.0.0:8000
So the equivalent docker run command line would look like:
docker run -p 8020:8020 vivanks/firsttry python manage.py runserver 0.0.0.0:8000
But you probably want to bake that into your Dockerfile like this:
CMD python manage.py runserver 0.0.0.0:8000
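That's the shell form of CMD; the equivalent exec form, which avoids wrapping the server in a /bin/sh process, would be:
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]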

What is the docker command to run my Django server?

I'm trying to Dockerize my local Django/MySql setup. I have this directory and file structure ...
apache
docker-compose.yml
web
  - manage.py
  - venv
  - requirements.txt
  - ...
Below is the docker-compose.yml file I'm using ...
version: '3'
services:
  web:
    restart: always
    build: ./web
    expose:
      - "8000"
    links:
      - mysql:mysql
    volumes:
      - web-django:/usr/src/app
      - web-static:/usr/src/app/static
    #env_file: web/venv
    environment:
      DEBUG: 'true'
    command: [ "python", "./web/manage.py runserver 0.0.0.0:8000" ]
  mysql:
    restart: always
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: 'maps_data'
      # So you don't have to use root, but you can if you like
      MYSQL_USER: 'chicommons'
      # You can use whatever password you like
      MYSQL_PASSWORD: 'password'
      # Password for root access
      MYSQL_ROOT_PASSWORD: 'password'
    ports:
      - "3406:3406"
    expose:
      # Opens port 3406 on the container
      - '3406'
    volumes:
      - my-db:/var/lib/mysql
volumes:
  web-django:
  web-static:
  my-db:
However when I run
docker-compose up
I get errors like the below
maps_web_1 exited with code 2
web_1 | python: can't open file './web/manage.py runserver 0.0.0.0:8000': [Errno 2] No such file or directory
maps_web_1 exited with code 2
maps_web_1 exited with code 2
web_1 | python: can't open file './web/manage.py runserver 0.0.0.0:8000': [Errno 2] No such file or directory
maps_web_1 exited with code 2
Is there another way I'm supposed to be referencing the manage.py file?
Edit: Added info requested in comments ...
FROM python:3.7-slim
RUN apt-get update && apt-get install
RUN apt-get install -y libmariadb-dev-compat libmariadb-dev
RUN apt-get update \
    && apt-get install -y --no-install-recommends gcc \
    && rm -rf /var/lib/apt/lists/*
RUN python -m pip install --upgrade pip
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
COPY . .
As others suggested, this is most probably because of running manage.py runserver from the wrong directory, or something very similar.
You are not using the WORKDIR directive in your Dockerfile at all. It is much safer if you do. Change your Dockerfile and docker-compose.yml files as below, and your problem should be solved.
Dockerfile
FROM python:3.7-slim
RUN apt-get update && apt-get install
RUN apt-get install -y libmariadb-dev-compat libmariadb-dev
RUN apt-get update \
    && apt-get install -y --no-install-recommends gcc \
    && rm -rf /var/lib/apt/lists/*
RUN python -m pip install --upgrade pip
RUN mkdir -p /app/
WORKDIR /app/
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
COPY . /app/
docker-compose.yml
version: '3'
services:
  web:
    restart: always
    build: ./web
    expose:
      - "8000"
    links:
      - mysql:mysql
    volumes:
      - web-django:/usr/src/app
      - web-static:/usr/src/app/static
    #env_file: web/venv
    environment:
      DEBUG: 'true'
    command: [ "python", "manage.py", "runserver", "0.0.0.0:8000" ]
  ...
Notice
You should be able to fix the problem by simply deleting web from your command for running the server. That's because when the Dockerfile is built, you are inside the web directory, so when you do COPY . . you are copying the contents of the web directory, not the web directory itself. The file structure inside the Docker image should look something like this:
- root
- home
- var
- ...
- manage.py
- venv
- requirements.txt
- ...
In the command: directive, if you're using the array syntax, you're responsible for breaking the command up into words. As you've shown it, you're running the equivalent of python "manage.py runserver 0.0.0.0:8000" at the shell prompt, and Python dutifully considers the entire command and options, spaces included, as the filename of a script to be run. If you break this up into single words it will work better:
command: ["python", "manage.py", "runserver", "0.0.0.0:8000"]
But there's not really a reason to specify this in docker-compose.yml at all. This is the default command you'd want to run to launch the container no matter how you ran it, so it should be the default command in your image's Dockerfile
...
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
You don't need links: at all on modern Docker (Docker Compose automatically sets up inter-container networking for you). You definitely don't want to mount named volumes over your application code: this hides what's in your image, and (since you've told Docker this is critical user data) it forces Docker to use an old version of your application if you try to update your image.
That leaves you with a simpler docker-compose.yml file:
version: '3'
services:
  web:
    restart: always
    build: ./web
    ports: # to access the container from outside
      - "8000:8000"
    environment:
      DEBUG: 'true'
  mysql:
    restart: always
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: 'maps_data'
      MYSQL_USER: 'chicommons'
      MYSQL_PASSWORD: 'password'
      MYSQL_ROOT_PASSWORD: 'password'
    ports:
      - "3406:3306" # second port is always container-internal port
    volumes:
      - my-db:/var/lib/mysql
volumes:
  my-db:
Let us try to debug this error:
maps_web_1 exited with code 2
web_1 | python: can't open file './web/manage.py runserver 0.0.0.0:8000': [Errno 2] No such file or directory
Looks like either the code is not copied to the container (named 'web'), or the command is triggered from the root/home directory, where manage.py is not accessible.
1. Is the code available in the container? How do we check?
Usually, Docker will just execute the command in the container and exit, unless there is an unfinished running task (like a server running in the background).
To stop it exiting and to enable debugging, let us add a long-running command, so that you can log in to the container and see if the code is present:
command: tail -f /dev/null # trick to keep the container alive for debug mode
docker-compose.yml
web:
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - mysql:mysql
  volumes:
    - web-django:/usr/src/app
    - web-static:/usr/src/app/static
  #env_file: web/venv
  environment:
    DEBUG: 'true'
  command: tail -f /dev/null # trick to keep the container alive for debug mode
Log in to the container 'web': from the command line, run docker exec -it web bash.
Check if the project files are present; now you can run the python manage.py runserver 8000 command manually. If it works, then we can be sure that the server can run in the container, and we can move on to analysing the initial working directory.
2. If the code is present, check why manage.py is not found. Is the working directory set? That is, does the container know which base directory the command should run from?
Specify the working directory in the Dockerfile, before you copy the project files into the container.
Dockerfile in web directory
# base image assumed here, matching the question's Dockerfile
FROM python:3.7-slim
ENV PYTHONUNBUFFERED 1
ARG PROJ_DIR=/usr/project/web
RUN mkdir -p $PROJ_DIR
WORKDIR $PROJ_DIR
COPY . $PROJ_DIR
docker-compose.yml
web:
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - mysql:mysql
  volumes:
    - web-django:/usr/src/app
    - web-static:/usr/src/app/static
  #env_file: web/venv
  environment:
    DEBUG: 'true'
  command: python manage.py runserver 0.0.0.0:8000 # note: this runs from the working directory we set in the Dockerfile
I think this should resolve the issue or help you to figure out the problem.

Behavior from docker-compose command not the same as run in Dockerfile

I have a Django project and I've been struggling to automate the generation of static files. My project structure has a docker-compose.yml file and a Dockerfile for every container image.
The docker-compose.yml file for my project:
version: '3'
services:
  web:
    build: ./dispenser
    command: gunicorn -c gunicorn.conf.py dispenser.wsgi
    volumes:
      - ./dispenser:/dispenser
    ports:
      - "8000:8000"
    restart: on-failure
  nginx:
    build: ./nginx/
    depends_on:
      - web
    command: nginx -g 'daemon off;'
    ports:
      - "80:80"
    volumes:
      - ./dispenser/staticfiles:/var/www/static
    restart: on-failure
The Dockerfile for the Django project I'm using:
FROM python:3.7.4
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    WEBAPP_DIR=/dispenser \
    GUNICORN_LOG_DIR=/var/log/gunicorn
WORKDIR $WEBAPP_DIR
RUN mkdir -p $GUNICORN_LOG_DIR \
    mkdir -p $WEBAPP_DIR
ADD pc-requirements.txt $WEBAPP_DIR
RUN pip install -r pc-requirements.txt
ADD . $WEBAPP_DIR
RUN python manage.py makemigrations && \
    python manage.py migrate && \
    python manage.py collectstatic --no-input
After several hours of testing and research I've found that running the collectstatic and migration commands from the Dockerfile doesn't produce the same result as doing it via the command argument in the docker-compose.yml file.
If I do it as shown above, when the time comes to run the collectstatic command, only the "staticfiles" folder is generated (with no files inside it). The database migrations weren't applied either (note that I'm using the default .sqlite3 db), even though the stdout when creating the container said that migrations were applied and static files generated.
The only workaround I found to make it work was executing bash from the container and then running those commands from there.
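(That workaround looked roughly like this, using the web service from the compose file above:)
# open a shell inside the running container...
docker-compose exec web bash
# ...and run the commands by hand from there
python manage.py makemigrations
python manage.py migrate
python manage.py collectstatic --no-input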
But later I found out that if I specify those commands in the docker-compose.yml file, everything works as expected, leaving the files as follows:
docker-compose.yml
version: '3'
services:
  web:
    build: ./dispenser
    command: bash -c "python manage.py makemigrations && python manage.py migrate && python manage.py collectstatic --no-input && gunicorn -c gunicorn.conf.py dispenser.wsgi"
    volumes:
      - ./dispenser:/dispenser
    ports:
      - "8000:8000"
    restart: on-failure
  nginx:
    build: ./nginx/
    depends_on:
      - web
    command: nginx -g 'daemon off;'
    ports:
      - "80:80"
    volumes:
      - ./dispenser/staticfiles:/var/www/static
    restart: on-failure
Dockerfile
FROM python:3.7.4
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    WEBAPP_DIR=/dispenser \
    GUNICORN_LOG_DIR=/var/log/gunicorn
WORKDIR $WEBAPP_DIR
RUN mkdir -p $GUNICORN_LOG_DIR \
    mkdir -p $WEBAPP_DIR
ADD pc-requirements.txt $WEBAPP_DIR
RUN pip install -r pc-requirements.txt
ADD . $WEBAPP_DIR
Can anyone explain to me why this occurs? And is there another way of achieving what I intend without having to specify the commands in the docker-compose.yml file?
When you mount a host directory into a container, the contents of the host directory shadow the contents of the container:
volumes:
  - ./dispenser:/dispenser
So when you run your container, the initial contents of /dispenser inside container will be the contents of ./dispenser from host machine. Any content already at /dispenser inside the container is shadowed. So the content generated during image build time by the RUN instructions inside your Dockerfile will be lost.
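A quick way to see the shadowing in action (myapp here is a hypothetical image whose /dispenser was populated at build time):
# without a mount: lists the files baked into the image
docker run --rm myapp ls /dispenser
# with an empty host directory mounted on top, the image's files are hidden:
mkdir -p empty
docker run --rm -v "$PWD/empty:/dispenser" myapp ls /dispenser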
In your second approach of using command in the compose file, you are mounting the volume first and then generating the content, hence it works.
The command directive in the compose file overrides the default command of the Docker image, which can be set using the CMD instruction in the Dockerfile. Since you want to use the first approach of running your Python commands at image build time using RUN instructions, you can RUN them in a different directory (say /tmp/dispenser) and, as part of the command in compose or CMD in the Dockerfile, move the generated content from /tmp/dispenser to /dispenser.
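A rough sketch of that idea (the /tmp/dispenser staging path and the copy step are illustrative assumptions, not a drop-in replacement for your files):
# generate everything in a staging directory the mount won't shadow
WORKDIR /tmp/dispenser
ADD . /tmp/dispenser
RUN python manage.py collectstatic --no-input
# at container start, copy the generated files into the (possibly mounted) /dispenser
CMD cp -r /tmp/dispenser/staticfiles /dispenser/ && gunicorn -c gunicorn.conf.py dispenser.wsgi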

How to drop `django_admin_log` with docker-compose?

I am using a docker-compose.yml file for my Django app, and I am trying to do docker-compose run web python manage.py dbshell and drop the table django_admin_log like here.
But this returned
CommandError: You appear not to have the 'psql' program installed or on your path.
How can I do python manage.py dbshell or drop the table django_admin_log?
Here is my docker-compose.yml
storage:
  image: busybox
  volumes:
    - /var/lib/postgresql/data
    - /data
  command: true
db:
  image: postgres
  environment:
    - POSTGRESQL_DB=postgres
    - POSTGRESQL_USER=postgres
    - POSTGRESQL_PASSWORD=password
  volumes_from:
    - storage
web:
  build: .
  environment:
    - DATABASE_HOST=postgres
  command: ./run_web.sh
  ports:
    - "80:80"
  links:
    - db
Thank you
Assuming you are using a Debian or Ubuntu-based image, in your Dockerfile, you just need to add the line:
RUN apt-get update && apt-get -y install postgresql
That will install the psql command for you and allow you to use dbshell. (If you only need psql, the smaller postgresql-client package is sufficient.)
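After rebuilding the image, the dbshell flow from the question should work, and the table can be dropped at the psql prompt; roughly:
docker-compose build web
docker-compose run web python manage.py dbshell
# then, at the psql prompt:
# DROP TABLE django_admin_log;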