Docker compose run migrations on django web application + postgres db

Hi, I am having issues running migrations on the Postgres DB container from Django.
Here is my docker-compose file:
web:
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - postgres:postgres
  volumes:
    - /usr/src/app
    - /usr/src/app/static
  env_file:
    - ./.env
  environment:
    - DEBUG=1
  command: /usr/local/bin/gunicorn mysite.wsgi:application -w 2 -b :8000
nginx:
  restart: always
  build: ./nginx/
  ports:
    - "80:80"
  volumes:
    - /www/static
  volumes_from:
    - web
  links:
    - web:web
postgres:
  restart: always
  build: ./postgres
  env_file: .env
  ports:
    - "5432:5432"
  volumes:
    - pgdata:/var/lib/postgresql/data/
The .env file defines the Postgres DB name, user name, and password:
DB_NAME=test
DB_USER=test
DB_PASS=test!
DB_SERVICE=postgres
DB_PORT=5432
POSTGRES_USER=test
POSTGRES_DB=test
POSTGRES_PASSWORD=test!
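For context, the Django database settings are presumably built from these variables; the question does not show settings.py, but a minimal sketch consistent with the names above would be:
# settings.py (sketch, not shown in the question) -- assumes the settings read
# the .env values that docker-compose injects into the container's environment
import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': os.environ['DB_NAME'],
        'USER': os.environ['DB_USER'],
        'PASSWORD': os.environ['DB_PASS'],
        'HOST': os.environ['DB_SERVICE'],
        'PORT': os.environ['DB_PORT'],
    }
}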
When I run docker-compose build and docker-compose up -d, the nginx, postgres, and web containers start. The postgres startup (default) creates the DB, user, and password. The Django container installs requirements.txt and starts the Django server; everything looks good.
On running makemigrations:
docker-compose run web /usr/local/bin/python manage.py makemigrations polls
I get the following output:
Migrations for 'polls':
  0001_initial.py:
    - Create model Choice
    - Create model Question
    - Add field question to choice
But when I run
docker-compose run web /usr/local/bin/python manage.py showmigrations polls
the output is:
polls
(no migrations)
On running
docker-compose run web /usr/local/bin/python manage.py migrate --fake polls
the output I see is:
Operations to perform:
Apply all migrations: (none)
Running migrations:
No migrations to apply.
Your models have changes that are not yet reflected in a migration, and so won't be applied.
Run 'manage.py makemigrations' to make new migrations, and then re-run 'manage.py migrate' to apply them.
The tables are not created in Postgres. What am I doing wrong? Sorry for the long post, but I wanted to put all the details here.
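A likely culprit, hedged since it cannot be confirmed from the post alone: docker-compose run starts a fresh one-off container each time, and /usr/src/app is declared as an anonymous volume, so the 0001_initial.py that makemigrations writes never reaches the ./web directory on the host; the next one-off container starts from the built image again, without the file, which is why showmigrations sees nothing. A bind mount of the source tree would make generated migrations persist, along these lines:
web:
  volumes:
    - ./web:/usr/src/app   # bind mount (sketch): files written in the container land on the host
    - /usr/src/app/static
With that in place, the makemigrations run would leave polls/migrations/0001_initial.py under ./web, and a plain migrate (without --fake) should then create the tables.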

Related

Dockerized django container not producing local migrations file

Question
I am a beginner with Docker; this is the first project I have set up with it, and I don't particularly know what I am doing. I would very much appreciate it if someone could give me some advice on the best way to get migrations from a dockerized Django app stored locally.
What I have tried so far
I have a local django project setup with the following file structure:
Project
  .docker
    - Dockerfile
  project
    - data
      - models
        - __init__.py
        - user.py
        - test.py
      - migrations
        - 0001_initial.py
        - 0002_user_role.py
        - ...
    - settings.py
    - ...
  manage.py
  Makefile
  docker-compose.yml
  ...
In the current state the migrations for the test.py model have not been run, so I attempted to create them using docker-compose exec main python manage.py makemigrations. This worked successfully, returning the following:
Migrations for 'data':
  project/data/migrations/0003_test.py
    - Create model Test
But it produced no local file. However, if I explore the file system of the container, I can see that the file exists on the container itself.
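As a stopgap, a file that exists only inside a running container can be copied out with docker cp; a sketch, assuming the image's WORKDIR is /code as in the Dockerfile shown further down:
docker cp <main_container_id>:/code/project/data/migrations/0003_test.py project/data/migrations/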
Upon running the following:
docker-compose exec main python manage.py migrate
I receive:
Running migrations:
No migrations to apply.
Your models in app(s): 'data' have changes that are not yet reflected in a migration, and so won't be applied.
Run 'manage.py makemigrations' to make new migrations, and then re-run 'manage.py migrate' to apply them.
I was under the impression that even if this did not create the local file, it would at least run the migrations on the container.
Regardless, my intention was that when I run docker-compose exec main python manage.py makemigrations, it stores the file locally in the project/data/migrations folder, and then I just run migrate manually. I can't find much documentation on how to do this; the only post I have seen suggested bind mounts (Migrations files not created in dockerized Django), which I attempted by adding the following to my docker-compose file:
volumes:
  - type: bind
    source: ./data/migrations
    target: /var/lib/migrations_test
but I was struggling to get it to work. Following on from this, I had no idea how to run commands through this volume using docker-compose, and I was questioning whether this was even a good idea, as I had read somewhere that using bind mounts is not best practice.
Project setup:
The docker-compose.yml file looks like so:
version: '3.7'
x-common-variables: &common-variables
  ENV: 'DEV'
  DJANGO_SETTINGS_MODULE: 'project.settings'
  DATABASE_NAME: 'postgres'
  DATABASE_USER: 'postgres'
  DATABASE_PASSWORD: 'postgres'
  DATABASE_HOST: 'postgres'
  CELERY_BROKER_URLS: 'redis://redis:6379/0'
volumes:
  postgres:
services:
  main:
    command: python manage.py runserver 0.0.0.0:8000
    build:
      context: ./
      dockerfile: .docker/Dockerfile
      target: main
    environment:
      <<: *common-variables
    ports:
      - '8000:8000'
    env_file:
      - dev.env
    networks:
      - default
  postgres:
    image: postgres:13.6
    volumes:
      - postgres:/var/lib/postgresql/data
    ports:
      - '25432:5432'
    environment:
      POSTGRES_PASSWORD: 'postgres'
    command: postgres -c log_min_messages=INFO -c log_statement=all
  wait_for_dependencies:
    image: dadarek/wait-for-dependencies
    environment:
      SLEEP_LENGTH: '0.5'
  redis:
    image: redis:latest
    ports:
      - '16379:6379'
  worker:
    build:
      context: .
      dockerfile: .docker/Dockerfile
      target: main
    command: celery -A project worker -l INFO
    environment:
      <<: *common-variables
    volumes:
      - .:/code/delegated
    env_file:
      - dev.env
    networks:
      - default
  beat:
    build:
      context: .
      dockerfile: .docker/Dockerfile
      target: main
    command: celery -A project beat -l INFO
    environment:
      <<: *common-variables
    volumes:
      - .:/code/delegated
    env_file:
      - dev.env
    networks:
      - default
networks:
  default:
Makefile:
build: pre-run
build:
	docker-compose build --pull
dev-deps: pre-run
dev-deps:
	docker-compose up -d postgres redis
	docker-compose run --rm wait_for_dependencies postgres:5432 redis:6379
migrate: pre-run
migrate:
	docker-compose run --rm main python manage.py migrate
setup: build dev-deps migrate
up: dev-deps
	docker-compose up -d main
Dockerfile:
FROM python:3.10.2 as main
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
RUN mkdir -p /code
WORKDIR /code
ADD . ./
RUN useradd -m -s /bin/bash app
RUN chown -R app:app .
USER app
EXPOSE 8000
Follow-up based on diptangsu-goswami's response
I tried adding the following:
volumes:
  - type: bind
    source: C:\dev\Project\project
    target: /code/
This creates an empty directory in my Project folder, named C:\dev\Project\project, but the app doesn't run as it cannot find the manage.py file... I assumed this was because it was in the parent directory Project and tried again with:
volumes:
  - type: bind
    source: C:\dev\Project
    target: /code/
But the same problem occurred. Why is it creating the empty directory? Surely it should just be binding the existing directory to the container directory? Also, using this method, would I need to change my Dockerfile to not copy the codebase to the container in the first place and just mount it instead?
I managed to fix it by adding the following to my 'main' service in my docker-compose file:
volumes:
- .:/code:delegated
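With the whole project bind-mounted over /code (the image's WORKDIR), files created inside the container appear directly in the local tree, so the flow from the question then works as intended:
docker-compose exec main python manage.py makemigrations
docker-compose exec main python manage.py migrate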

How to connect to a postgres database in a docker container?

I set up my Django and Postgres containers on my local machine and everything is working fine. The local server is running and the database is running, but I am not able to connect to the created Postgres DB.
docker-compose.yml
version: '3'
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:13.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=my_user
      - POSTGRES_PASSWORD=my_password
      - POSTGRES_DB=my_db
volumes:
  postgres_data:
I tried this command:
docker exec -it container_id psql -U postgres
error:
psql: error: could not connect to server: FATAL: role "postgres" does not exist
I am very new to Docker.
You're not using the username and the password you provided in your docker-compose file. Try this and then enter my_password:
docker exec -it container_id psql -U my_user -d my_db --password
Check the official documentation to find out about the PostgreSQL terminal.
I would also like to add that in your compose file you're not publishing any ports for the db container, so it will be unreachable from outside the Docker network (for example, from psql on your host); other services on the same compose network can still reach it.
I think you need to add environment variables to the project (web) container:
environment:
  - DB_HOST=db
  - DB_NAME=my_db
  - DB_USER=youruser
  - DB_PASS=yourpass
depends_on:
  - db
Add the environment block before depends_on, and see if that solves it.
You should add ports to the docker-compose for the postgres image, as this would allow postgres to be accessible outside the container:
ports:
  - "5432:5432"
You can check out more here: docker-compose for postgres
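For example, once 5432 is published, a connection from the host should work with the credentials from the compose file above (assuming a psql client is installed locally):
psql -h localhost -p 5432 -U my_user -d my_db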

Can't change dyno with Procfile on Heroku

I'm trying to deploy a Django project to Heroku using a docker image. My Procfile contains the command:
web: gunicoron myProject.wsgi
But when I push and release to Heroku, for some reason the dyno process command according to the dashboard is:
web: python3
The command heroku ps reports:
web.1: crashed
And I cannot change it; no manipulation of the Procfile has any effect.
When I deploy the same project with git, everything works fine. So why does the Heroku container deploy not work? Everything was done following the Heroku instructions.
My Dockerfile:
FROM python:3
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
WORKDIR /app
COPY requirements.txt /app/
RUN pip install -r requirements.txt
COPY . /app/
My docker-compose.yml:
version: "3.9"
services:
db:
image: postgres
volumes:
- ./data/db:/var/lib/postgresql/data
# ports:
# - "5432:5432"
environment:
- POSTGRES_DB=${SQL_NAME}
- POSRGRES_USER=${SQL_USER}
- POSTGRES_PASSWORD=${SQL_PASSWORD}
web:
build: .
# command: python manage.py runserver 0.0.0.0:8000
command: gunicorn kereell.wsgi --bind 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
# env_file: .env
environment:
- DEBUG=${DEBUG}
- SECRET_KEY=${SECRET_KEY}
- DB_ENGINE=${SQL_ENGINE}
- DB_NAME=${SQL_NAME}
- DB_USER=${SQL_USER}
- DB_PASSWD=${SQL_PASSWORD}
- DB_HOST=${SQL_HOST}
- DB_PORT=${SQL_PORT}
depends_on:
- db
My requirements.txt:
Django
psycopg2
gunicorn
Please help me resolve this. Thanks in advance.
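A hedged observation, since the thread shows no accepted answer: with heroku container:push / container:release deploys, Heroku runs the image's CMD rather than the Procfile, and the Dockerfile above defines no CMD, so the dyno falls back to the python:3 base image's default command, python3, which matches what the dashboard shows. A sketch of a CMD that would start the web process (Heroku supplies the port in $PORT; myProject.wsgi is taken from the Procfile above):
# Dockerfile (sketch): shell-form CMD so $PORT is expanded at runtime
CMD gunicorn myProject.wsgi --bind 0.0.0.0:$PORT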

How to dump a postgres database in django?

I have an application running in a Docker container and a Postgres database running in a Docker container as well. I want to dump the database while in the Django container. I know there is dumpdata in Django, but this command takes a long time. I also tried docker exec pg_dump, but inside the Django container this command doesn't work.
services:
  db_postgres:
    image: postgres:10.5-alpine
    restart: always
    volumes:
      - pgdata_invivo:/var/lib/postgresql/data/
    env_file:
      - .env
  django:
    build: .
    restart: always
    volumes:
      - ./static:/static
      - ./media:/media
    ports:
      - 8000:8000
    depends_on:
      - db_postgres
    env_file:
      - .env
Is there any way to do pg_dump without using docker exec pg_dump while in the Django container?
While your container is running, type:
docker-compose down -v
This will remove the volumes, and thus all the data stored in your database container will be removed.
Now run:
docker-compose up --build
docker-compose exec django python manage.py migrate
to create your tables again.
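As for the question as asked, a dump is usually easiest to take from the host by running pg_dump inside the db_postgres container; a sketch, assuming the .env (not shown) defines POSTGRES_USER and POSTGRES_DB:
# -T disables the pseudo-TTY so the SQL stream redirects cleanly into the host file
docker-compose exec -T db_postgres sh -c 'pg_dump -U "$POSTGRES_USER" "$POSTGRES_DB"' > dump.sql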

Want to connect mongodb docker service with my django application

I want to create two Docker services: one is a MongoDB service and the other is a web service built with Django. The web service (Django app) needs to connect to that MongoDB Docker service,
but I don't know how to connect to the MongoDB service from my Django application, which is itself a service running in the same Docker swarm. This is my docker-compose.yml:
version: '3'
services:
  mongo:
    image: mongo:latest
    command: mongod --storageEngine wiredTiger
    ports:
      - "27017:27017"
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
  web:
    build: .
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    links:
      - mongo
    depends_on:
      - mongo
Here is what I tried with mongoengine in the Django application's settings.py, but it failed:
MONGO_DATABASE_NAME = "reg_task21"
MONGO_HOST = "mongo"
mongoengine.connect(db=MONGO_DATABASE_NAME, host=MONGO_HOST, port=27017)
You should add the username and password to the connect statement:
mongoengine.connect(db=MONGO_DATABASE_NAME, username='root', password='example', host=MONGO_HOST, port=27017)
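If authentication still fails, note that the root user created via MONGO_INITDB_ROOT_USERNAME lives in Mongo's admin database, so the authentication source may also need pointing there; a hedged addition, depending on the setup:
mongoengine.connect(
    db=MONGO_DATABASE_NAME,
    username='root',
    password='example',
    host=MONGO_HOST,
    port=27017,
    authentication_source='admin',  # root users are defined in the admin DB
)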