Dockerized Django container not producing local migrations file - django

Question
I am a beginner with Docker; this is the first project I have set up with it, and I don't entirely know what I am doing. I would very much appreciate some advice on the best way to get migrations generated by a dockerized Django app stored locally.
What I have tried so far
I have a local Django project set up with the following file structure:
Project
  .docker
    - Dockerfile
  project
    - data
      - models
        - __init__.py
        - user.py
        - test.py
      - migrations
        - 0001_initial.py
        - 0002_user_role.py
        - ...
    - settings.py
    - ...
  manage.py
  Makefile
  docker-compose.yml
  ...
In the current state, the migrations for the test.py model have not been run, so I attempted to create them using docker-compose exec main python manage.py makemigrations. This appeared to succeed, returning the following:
Migrations for 'data':
  project/data/migrations/0003_test.py
    - Create model Test
But it produced no local file. However, if I explore the file system of the container, I can see that the file exists on the container itself.
Upon running the following:
docker-compose exec main python manage.py migrate
I receive:
Running migrations:
No migrations to apply.
Your models in app(s): 'data' have changes that are not yet reflected in a migration, and so won't be applied.
Run 'manage.py makemigrations' to make new migrations, and then re-run 'manage.py migrate' to apply them.
I was under the impression that even if this did not create the local file it would at least run the migrations on the container.
Regardless, my intention was that when I run docker-compose exec main python manage.py makemigrations, it stores the file locally in the project/data/migrations folder, and then I just run migrate manually. I can't find much documentation on how to do this; the only post I have seen suggested bind mounts (Migrations files not created in dockerized Django), which I attempted by adding the following to my docker-compose file:
volumes:
  - type: bind
    source: ./data/migrations
    target: /var/lib/migrations_test
but I was struggling to get it to work. Beyond that, I had no idea how to run commands through this volume using docker-compose, and I was questioning whether this was even a good idea, as I had read somewhere that using bind mounts is not best practice.
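In hindsight, a bind mount can only capture the generated files if its target matches the path where Django actually writes them inside the container. Given that the Dockerfile below uses WORKDIR /code and copies the project there, a minimal sketch of such a mount might look like this (the container path is an assumption based on the project layout above):

volumes:
  - type: bind
    source: ./project/data/migrations
    target: /code/project/data/migrations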
Project setup:
The docker-compose.yml file looks like so:
version: '3.7'

x-common-variables: &common-variables
  ENV: 'DEV'
  DJANGO_SETTINGS_MODULE: 'project.settings'
  DATABASE_NAME: 'postgres'
  DATABASE_USER: 'postgres'
  DATABASE_PASSWORD: 'postgres'
  DATABASE_HOST: 'postgres'
  CELERY_BROKER_URLS: 'redis://redis:6379/0'

volumes:
  postgres:

services:
  main:
    command:
      python manage.py runserver 0.0.0.0:8000
    build:
      context: ./
      dockerfile: .docker/Dockerfile
      target: main
    environment:
      <<: *common-variables
    ports:
      - '8000:8000'
    env_file:
      - dev.env
    networks:
      - default
  postgres:
    image: postgres:13.6
    volumes:
      - postgres:/var/lib/postgresql/data
    ports:
      - '25432:5432'
    environment:
      POSTGRES_PASSWORD: 'postgres'
    command: postgres -c log_min_messages=INFO -c log_statement=all
  wait_for_dependencies:
    image: dadarek/wait-for-dependencies
    environment:
      SLEEP_LENGTH: '0.5'
  redis:
    image: redis:latest
    ports:
      - '16379:6379'
  worker:
    build:
      context: .
      dockerfile: .docker/Dockerfile
      target: main
    command: celery -A project worker -l INFO
    environment:
      <<: *common-variables
    volumes:
      - .:/code/delegated
    env_file:
      - dev.env
    networks:
      - default
  beat:
    build:
      context: .
      dockerfile: .docker/Dockerfile
      target: main
    command: celery -A project beat -l INFO
    environment:
      <<: *common-variables
    volumes:
      - .:/code/delegated
    env_file:
      - dev.env
    networks:
      - default

networks:
  default:
Makefile:
build: pre-run
build:
	docker-compose build --pull

dev-deps: pre-run
dev-deps:
	docker-compose up -d postgres redis
	docker-compose run --rm wait_for_dependencies postgres:5432 redis:6379

migrate: pre-run
migrate:
	docker-compose run --rm main python manage.py migrate

setup: build dev-deps migrate

up: dev-deps
	docker-compose up -d main
Dockerfile:
FROM python:3.10.2 as main
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
RUN mkdir -p /code
WORKDIR /code
ADD . ./
RUN useradd -m -s /bin/bash app
RUN chown -R app:app .
USER app
EXPOSE 8000
Follow-up based on diptangsu-goswami's response
I tried adding the following:
volumes:
  - type: bind
    source: C:\dev\Project\project
    target: /code/
This creates an empty directory in my Project folder, named C:\dev\Project\project, but the app doesn't run, as it cannot find the manage.py file... I assumed this was because it was in the parent directory Project and tried again with:
volumes:
  - type: bind
    source: C:\dev\Project
    target: /code/
But the same problem occurred. Why is it creating the empty directory? Surely it should just be binding the existing directory to the container directory? Also, using this method, would I need to change my Dockerfile so that it doesn't copy the codebase into the container in the first place, and just mount it instead?

I managed to fix it by adding the following to the 'main' service in my docker-compose file:
volumes:
  - .:/code:delegated
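This bind-mounts the project root over /code (the image's WORKDIR), so files that makemigrations writes inside the container land directly in the local source tree; delegated is a mount-consistency flag that mainly matters on Docker Desktop for Mac and is ignored elsewhere. A minimal sketch of the resulting service, assuming the compose file shown above:

services:
  main:
    build:
      context: ./
      dockerfile: .docker/Dockerfile
      target: main
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code:delegated   # host project dir shadows the code baked into the image
    ports:
      - '8000:8000'

With the mount in place, docker-compose exec main python manage.py makemigrations should write 0003_test.py straight into project/data/migrations on the host.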

Related

Failed to read dockerfile

I'm trying to dockerize my Django app with a Postgres DB, and I'm having trouble; when I run docker-compose, the error is:
failed to solve: rpc error: code = Unknown desc = failed to solve with frontend dockerfile.v0: failed to read dockerfile: open /var/lib/docker/tmp/buildkit-mount4260694681/Dockerfile: no such file or directory
Project structure in screenshot:
Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED=1
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . .
docker-compose:
version: "3.9"
services:
gunter:
restart: always
build: .
container_name: gunter
ports:
- "8000:8000"
command: python manage.py runserver 0.0.0.0:8080
depends_on:
- db
db:
image: postgres
Then: docker-compose run gunter
What am I doing wrong?
I tried changing directories, but everything seems to be correct; I also tried changing the location of the Dockerfile.
In your docker-compose.yml you are trying to build your app from ., but the Dockerfile is not there, so docker-compose isn't able to build anything. You can pass the path to your Dockerfile:
version: "3.9"
services:
gunter:
restart: always
build: ./gunter_site/
container_name: gunter
ports:
- "8000:8000"
command: python manage.py runserver 0.0.0.0:8080
depends_on:
- db
db:
image: postgres
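Alternatively, if you want the build context to stay at the project root (for example, so the Dockerfile can COPY files that live outside gunter_site/), compose's long build syntax lets you point just the dockerfile key at the nested file — a sketch, assuming the Dockerfile sits at gunter_site/Dockerfile:

services:
  gunter:
    build:
      context: .                          # build context stays at the project root
      dockerfile: gunter_site/Dockerfile  # Dockerfile path, relative to the context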

Integrating PyCharm, Docker Compose and the Django debugger: getting error /usr/local/bin/python: can't find '__main__' module in ''

I'm having a problem using PyCharm to run a Docker container to debug a Django project on macOS.
The PyCharm setup works fine for running the Django project inside a Docker container, but when I try to debug, I get the following issue:
/usr/local/bin/docker-compose -f /Users/claudius/Studies/Pycharm/test-api/docker-compose.yaml -f /Users/claudius/Library/Caches/JetBrains/PyCharm2021.2/tmp/docker-compose.override.1494.yml up --exit-code-from web --abort-on-container-exit web
Docker Compose is now in the Docker CLI, try `docker compose up`
mongo is up-to-date
postgres is up-to-date
Recreating test-api_web_1 ...
Attaching to test-api_web_1
Connected to pydev debugger (build 212.4746.96)
web_1 | /usr/local/bin/python: can't find '__main__' module in ''
test-api_web_1 exited with code 1
Aborting on container exit...
ERROR: 1
Process finished with exit code 1
To set up PyCharm I did the following:
Add a Python interpreter with docker-compose and the container of the web application (Django).
Add a Django config for the project.
Add a Run/Debugger config.
UPDATE
As requested by @Thy and @DanielM, here is the Dockerfile:
FROM python:3.9.1 AS backend

ARG DJANGO_ENV

ENV DJANGO_ENV=${DJANGO_ENV} \
    # pip:
    PIP_NO_CACHE_DIR=off \
    PIP_DISABLE_PIP_VERSION_CHECK=on \
    PIP_DEFAULT_TIMEOUT=100 \
    # poetry:
    POETRY_VERSION=1.1.7 \
    POETRY_VIRTUALENVS_CREATE=false

# Set work directory
WORKDIR /pysetup

COPY ./pyproject.toml ./poetry.lock* /pysetup/

RUN pip install "poetry==$POETRY_VERSION"
RUN poetry install --no-interaction --no-ansi

# Copy project
COPY . /pysetup/
And here is the docker-compose.yaml:
# docker-compose.yml
version: '3.8'
services:
  db:
    image: postgres:12.0-alpine
    env_file:
      - .env
    ports:
      - "5432:5432"
    container_name: postgres
  mongodb:
    image: mongo:5.0.5
    env_file:
      - .env
    ports:
      - "27017:27017"
    container_name: mongo
  web:
    build: .
    command: ["python", "manage.py", "runserver", "0.0.0.0:8000"]
    env_file:
      - .env
    volumes:
      - .:/test-api-volume
    ports:
      - 8000:8000
    stdin_open: true
    tty: true
    depends_on:
      - db
      - mongodb
Does someone have any suggestions?
Since I couldn't debug with Docker, I tried to debug using virtual environments and received the same message:
Connected to pydev debugger (build 212.4746.96)
/usr/local/bin/python: can't find '__main__' module in ''
So a work colleague suggested updating the PyCharm IDE.
And it just worked. Now I can debug with Docker or using virtual envs.
In other words, the problem was with the PyCharm Professional IDE, version 2021.1.1, running on macOS Monterey.

Dockerize a Django app alongside its Cucumber tests

Here is the case: I have a simple Django app with Cucumber tests. I dockerized the Django app and it works perfectly, but I want to dockerize the Cucumber tests too and run them. Here is my project structure:
-cucumber_drf_tests
  -feature
  -step_definitions
  axiosinst.js
  config.js
  package.json
  cucumber.js
  Dockerfile
  package-lock.json
-project_apps
-common
docker-compose.yaml
Dockerfile
manage.py
requirements.txt
Here is my cucumber_drf_tests/Dockerfile:
FROM node:12
WORKDIR /app/src
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8000
CMD ["yarn", "cucumber-drf"] (this is how I run my test locally)
My second Dockerfile (at the project root):
FROM python:3.8
ENV PYTHONUNBUFFERED=1
RUN mkdir -p /app/src
WORKDIR /app/src
COPY requirements.txt /app/src
RUN pip install -r requirements.txt
COPY . /app/src
And my docker-compose file:
version: "3.8"
services:
test:
build: ./cucumber_drf_tests
image: cucumber_test
container_name: cucumber_container
ports:
- 8000:8000
depends_on:
- app
app:
build: .
image: app:django
container_name: django_rest_container
ports:
- 8000:8000
volumes:
- .:/django #describes a folder that resides on our OS within the container
command: >
bash -c "python manage.py migrate
&& python manage.py loaddata ./project_apps/fixtures/dummy_data.json
&& python manage.py runserver 0.0.0.0:8000"
depends_on:
- db
db:
image: postgres
container_name: postgres_db
volumes:
- ./data/db:/var/lib/postgresql/data
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=bla
- POSTGRES_PASSWORD=blaa
If I remove the test service and run the tests locally, everything is fine, but otherwise I get various errors; the last one is:
Bind for 0.0.0.0:8000 failed: port is already allocated
That is logical, I know, but how do I tell the test container to make its API calls to the address of the running django_rest_container? Maybe this is a dumb question, but I am new to the container world, so any sharing of good practice is welcome.
The issue is in exposing the ports: you are exposing both app and test on the same host port (8000). The container port can stay the same, but the host port has to be different. Ports are mapped in Docker as:

<host port>:<container port>

So change the host port of either app or test. For example, for app keep these ports:

7500:8000

Now your app will be accessible from the host at port 7500 and test at 8000; for container-to-container calls, see the sketch below.
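More to the point of the follow-up question: containers on the same compose network can reach each other by service name, so the test container can call the app at http://app:8000 regardless of host port mappings (published ports only matter for access from the host). A minimal sketch, assuming the Cucumber config reads its base URL from an environment variable — API_BASE_URL is a hypothetical name you would wire into config.js:

services:
  test:
    build: ./cucumber_drf_tests
    environment:
      API_BASE_URL: http://app:8000   # "app" resolves via compose's network DNS
    depends_on:
      - app
  app:
    build: .
    ports:
      - 7500:8000   # published for the host only; test reaches it via app:8000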

What is the docker command to run my Django server?

I'm trying to dockerize my local Django/MySQL setup. I have this directory and file structure ...
apache
docker-compose.yml
web
  - manage.py
  - venv
  - requirements.txt
  - ...
Below is the docker-compose.yml file I'm using ...
version: '3'
services:
  web:
    restart: always
    build: ./web
    expose:
      - "8000"
    links:
      - mysql:mysql
    volumes:
      - web-django:/usr/src/app
      - web-static:/usr/src/app/static
    #env_file: web/venv
    environment:
      DEBUG: 'true'
    command: [ "python", "./web/manage.py runserver 0.0.0.0:8000" ]
  mysql:
    restart: always
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: 'maps_data'
      # So you don't have to use root, but you can if you like
      MYSQL_USER: 'chicommons'
      # You can use whatever password you like
      MYSQL_PASSWORD: 'password'
      # Password for root access
      MYSQL_ROOT_PASSWORD: 'password'
    ports:
      - "3406:3406"
    expose:
      # Opens port 3406 on the container
      - '3406'
    volumes:
      - my-db:/var/lib/mysql

volumes:
  web-django:
  web-static:
  my-db:
However when I run
docker-compose up
I get errors like the following:
maps_web_1 exited with code 2
web_1 | python: can't open file './web/manage.py runserver 0.0.0.0:8000': [Errno 2] No such file or directory
maps_web_1 exited with code 2
maps_web_1 exited with code 2
web_1 | python: can't open file './web/manage.py runserver 0.0.0.0:8000': [Errno 2] No such file or directory
maps_web_1 exited with code 2
Is there another way I'm supposed to be referencing the manage.py file?
Edit: Added info requested in comments ...
FROM python:3.7-slim

RUN apt-get update && apt-get install
RUN apt-get install -y libmariadb-dev-compat libmariadb-dev
RUN apt-get update \
    && apt-get install -y --no-install-recommends gcc \
    && rm -rf /var/lib/apt/lists/*
RUN python -m pip install --upgrade pip

COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt

COPY . .
As others suggested, this is most probably caused by running manage.py runserver from the wrong directory, or something very similar.
You are not using the WORKDIR directive in your Dockerfile at all, and it is much safer to use it. Change your Dockerfile and docker-compose.yml files as below, and your problem should be solved.
Dockerfile
FROM python:3.7-slim

RUN apt-get update && apt-get install
RUN apt-get install -y libmariadb-dev-compat libmariadb-dev
RUN apt-get update \
    && apt-get install -y --no-install-recommends gcc \
    && rm -rf /var/lib/apt/lists/*
RUN python -m pip install --upgrade pip

RUN mkdir -p /app/
WORKDIR /app/

COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt

COPY . /app/
docker-compose.yml
version: '3'
services:
  web:
    restart: always
    build: ./web
    expose:
      - "8000"
    links:
      - mysql:mysql
    volumes:
      - web-django:/usr/src/app
      - web-static:/usr/src/app/static
    #env_file: web/venv
    environment:
      DEBUG: 'true'
    command: [ "python", "manage.py", "runserver", "0.0.0.0:8000" ]
  ...
Notice
You should be able to fix the problem by simply deleting web from your command for running the server. That's because when you build the Dockerfile, you are already inside the web directory, so when you do COPY . . you are copying the contents of the web directory, not the web directory itself. The file structure inside your Docker image actually looks something like this:
- root
- home
- var
- ...
- manage.py
- venv
- requirements.txt
- ...
In the command: directive, if you're using the array syntax, you're responsible for breaking up the command into words. As you've shown it, you're running the equivalent of python "manage.py runserver 0.0.0.0:8000" at the shell prompt, and Python dutifully treats the entire command and options, spaces included, as the filename of a script to be run. If you break this up into single words, it will work better:
command: ["python", "manage.py", "runserver", "0.0.0.0:8000"]
But there's not really a reason to specify this in docker-compose.yml at all. This is the default command you'd want to run to launch the container no matter how you run it, so it should be the default command in your image's Dockerfile:
...
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
You don't need links: at all on modern Docker (Docker Compose automatically sets up inter-container networking for you). You definitely don't want to mount named volumes over your application code: this hides what's in your image, and (since you've told Docker this is critical user data) it forces Docker to use an old version of your application if you try to update your image.
That leaves you with a simpler docker-compose.yml file:
version: '3'
services:
  web:
    restart: always
    build: ./web
    ports: # to access the container from outside
      - "8000:8000"
    environment:
      DEBUG: 'true'
  mysql:
    restart: always
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: 'maps_data'
      MYSQL_USER: 'chicommons'
      MYSQL_PASSWORD: 'password'
      MYSQL_ROOT_PASSWORD: 'password'
    ports:
      - "3406:3306" # second port is always the container-internal port
    volumes:
      - my-db:/var/lib/mysql

volumes:
  my-db:
Let us try to debug this error:
maps_web_1 exited with code 2
web_1 | python: can't open file './web/manage.py runserver 0.0.0.0:8000': [Errno 2] No such file or directory
Looks like either the code is not copied to the container (named 'web'), or the command is triggered from the root/home directory, where manage.py is not accessible.
1. Is the code available in the container? How do we check?
Usually, Docker will just execute the command in the container and exit, unless there is an unfinished running task (like a server running in the background).
To stop it from exiting and enable debugging, let us add a long-running command so that you can log in to the container and see whether the code is present:
command: tail -f /dev/null  # trick to keep the container alive for debug mode
docker-compose.yml
web:
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - mysql:mysql
  volumes:
    - web-django:/usr/src/app
    - web-static:/usr/src/app/static
  #env_file: web/venv
  environment:
    DEBUG: 'true'
  command: tail -f /dev/null  # trick to keep the container alive for debug mode
Log in to the 'web' container: from the command line, run docker exec -it web bash.
Check whether the project files are present; now you can run the python manage.py runserver 8000 command manually. If it works, we can be sure the server can run in the container, and we can then analyse the initial working directory.
If the code is present, check why manage.py is not found: is the working directory set? That is, does the container know which base directory to run the command from?
Specify the working directory in the Dockerfile before you copy the project files into the container.
Dockerfile in the web directory:
FROM python:3.7-slim   # base image, as in the question

ENV PYTHONUNBUFFERED 1

ARG PROJ_DIR=/usr/project/web
RUN mkdir -p $PROJ_DIR
WORKDIR $PROJ_DIR

COPY . $PROJ_DIR
docker-compose.yml
web:
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - mysql:mysql
  volumes:
    - web-django:/usr/src/app
    - web-static:/usr/src/app/static
  #env_file: web/venv
  environment:
    DEBUG: 'true'
  command: python manage.py runserver 0.0.0.0:8000  # note: this command is triggered from the WORKDIR set in the Dockerfile
I think this should resolve the issue or help you to figure out the problem.

Behavior from docker-compose command not the same as RUN in Dockerfile

I have a Django project and I've been struggling with automating the generation of static files. My project structure has a docker-compose.yml file and a Dockerfile for every container image.
The docker-compose.yml file for my project:
version: '3'
services:
  web:
    build: ./dispenser
    command: gunicorn -c gunicorn.conf.py dispenser.wsgi
    volumes:
      - ./dispenser:/dispenser
    ports:
      - "8000:8000"
    restart: on-failure
  nginx:
    build: ./nginx/
    depends_on:
      - web
    command: nginx -g 'daemon off;'
    ports:
      - "80:80"
    volumes:
      - ./dispenser/staticfiles:/var/www/static
    restart: on-failure
The Dockerfile for the Django project I'm using:
FROM python:3.7.4

ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    WEBAPP_DIR=/dispenser \
    GUNICORN_LOG_DIR=/var/log/gunicorn

WORKDIR $WEBAPP_DIR

RUN mkdir -p $GUNICORN_LOG_DIR && \
    mkdir -p $WEBAPP_DIR

ADD pc-requirements.txt $WEBAPP_DIR
RUN pip install -r pc-requirements.txt

ADD . $WEBAPP_DIR

RUN python manage.py makemigrations && \
    python manage.py migrate && \
    python manage.py collectstatic --no-input
After several hours of testing and research, I've found out that running the collectstatic and migrations commands from the Dockerfile doesn't produce the same result as doing it via the command argument in the docker-compose.yml file.
If I do it as shown above, when the time comes to run the collectstatic command, only the "staticfiles" folder is generated (with no files inside it), and the database migrations aren't applied either (note that I'm using the default .sqlite3 db), even though the stdout when creating the container said that migrations were applied and static files generated.
The only workaround I found to make it work was executing bash from the container and then running those commands from there.
But later I found out that if I specify those commands in the docker-compose.yml, everything works as expected, leaving the files as follows:
docker-compose.yml
version: '3'
services:
  web:
    build: ./dispenser
    command: bash -c "python manage.py makemigrations && python manage.py migrate && python manage.py collectstatic --no-input && gunicorn -c gunicorn.conf.py dispenser.wsgi"
    volumes:
      - ./dispenser:/dispenser
    ports:
      - "8000:8000"
    restart: on-failure
  nginx:
    build: ./nginx/
    depends_on:
      - web
    command: nginx -g 'daemon off;'
    ports:
      - "80:80"
    volumes:
      - ./dispenser/staticfiles:/var/www/static
    restart: on-failure
Dockerfile
FROM python:3.7.4

ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    WEBAPP_DIR=/dispenser \
    GUNICORN_LOG_DIR=/var/log/gunicorn

WORKDIR $WEBAPP_DIR

RUN mkdir -p $GUNICORN_LOG_DIR && \
    mkdir -p $WEBAPP_DIR

ADD pc-requirements.txt $WEBAPP_DIR
RUN pip install -r pc-requirements.txt

ADD . $WEBAPP_DIR
Can anyone explain why this occurs? And is there another way of achieving what I intend without having to specify the commands in the docker-compose.yml file?
When you mount a host directory into a container, the contents of the host directory shadow the contents of the container's directory:
volumes:
  - ./dispenser:/dispenser
So when you run your container, the initial contents of /dispenser inside the container will be the contents of ./dispenser from the host machine. Any content already at /dispenser inside the container is shadowed, so the content generated at image build time by the RUN instructions in your Dockerfile is lost.
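A quick way to see the shadowing in action — a self-contained sketch (the busybox image is just for illustration): whatever the image had at /dispenser, once the host directory is mounted there, only the host's files are visible:

services:
  shadow-demo:
    image: busybox
    volumes:
      - ./dispenser:/dispenser   # host dir mounted over the container path
    command: ls /dispenser       # lists the host directory's contents, not the image's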
In your second approach, using command in the compose file, you mount the volume first and only then generate the content, and hence it works.
The command directive in the compose file overrides the default command of the Docker image, which is set using the CMD instruction in the Dockerfile. Since you want to use the first approach of running your Python commands at image build time using RUN instructions, you can RUN them in a different directory (say /tmp/dispenser), and then, as part of command in compose or CMD in the Dockerfile, move the generated content from /tmp/dispenser to /dispenser.
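A minimal sketch of that suggestion, under the assumption that the build step writes the collected files to /tmp/dispenser (the path is illustrative) and the startup command copies them into the mounted directory before launching gunicorn:

services:
  web:
    build: ./dispenser
    command: bash -c "cp -r /tmp/dispenser/staticfiles /dispenser/ && gunicorn -c gunicorn.conf.py dispenser.wsgi"
    volumes:
      - ./dispenser:/dispenser   # mounted first, then the copy runs at startup
    ports:
      - "8000:8000"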