How to dump a postgres database in django?

I have an application running in a docker container and a postgres database running in a docker container as well. I want to dump the database from inside the django container. I know Django has dumpdata, but that command takes a long time. I also tried docker exec pg_dump, but that command doesn't work from inside the django container.
services:
  db_postgres:
    image: postgres:10.5-alpine
    restart: always
    volumes:
      - pgdata_invivo:/var/lib/postgresql/data/
    env_file:
      - .env
  django:
    build: .
    restart: always
    volumes:
      - ./static:/static
      - ./media:/media
    ports:
      - 8000:8000
    depends_on:
      - db_postgres
    env_file:
      - .env
Is there any way to do pg_dump without using docker exec pg_dump while inside the django container?

While your container is running, type:
docker-compose down -v
This will remove the volumes, and thus all the data stored in the container's database will be removed.
Now run
docker-compose up --build
docker-compose exec django python manage.py migrate
to create your tables again.
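For the pg_dump part of the question: the postgres client tools can also be installed inside the Django image and pointed at the db_postgres service over the compose network. A minimal sketch, assuming a Debian-based Django image and that the usual POSTGRES_* variables come from the shared .env file:

# Dockerfile of the django service (assumption: Debian-based base image)
RUN apt-get update && apt-get install -y postgresql-client

# then, from a shell inside the running django container
PGPASSWORD="$POSTGRES_PASSWORD" pg_dump -h db_postgres -U "$POSTGRES_USER" "$POSTGRES_DB" > /tmp/dump.sql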

Related

How to connect to a postgres database in a docker container?

I set up my django and postgres containers on my local machine and everything is working fine. The local server is running and the database is running, but I am not able to connect to the created postgres db.
docker-compose.yml
version: '3'
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:13.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=my_user
      - POSTGRES_PASSWORD=my_password
      - POSTGRES_DB=my_db
volumes:
  postgres_data:
I tried this command:
docker exec -it container_id psql -U postgres
error:
psql: error: could not connect to server: FATAL: role "postgres" does not exist
I am very new to Docker.
You're not using the username and the password you provided in your docker-compose file. Try this and then enter my_password:
docker exec -it container_id psql -U my_user -d my_db --password
Check the official documentation to find out about the PostgreSQL terminal.
I would also like to add that in your compose file you're not exposing any ports for the db container, so it will be unreachable from outside the Docker network (by you, or anything else that isn't run on that network).
I think you need to add environment variables to the project container:
environment:
  - DB_HOST=db
  - DB_NAME=my_db
  - DB_USER=youruser
  - DB_PASS=yourpass
depends_on:
  - db
Add the environment block before depends_on and see if that solves it.
You should add ports to the docker-compose file for the postgres image, as this would allow postgres to be accessible outside the container:
ports:
  - "5432:5432"
You can check out more here: docker-compose for postgres
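Once the port is published, a quick sanity check from the host machine (a sketch, assuming psql is installed locally and the credentials from the compose file above):

psql -h localhost -p 5432 -U my_user -d my_db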

Docker pull Django image and run container

So, I have followed this tutorial by Docker to create a Django image.
It completely works on my local machine by just running a docker-compose up command from the root directory of my project.
But, after pushing the image to docker hub https://hub.docker.com/repository/docker/vivanks/firsttry
I am pulling the image to another machine and then running:
docker run -p 8020:8020 vivanks/firsttry
But it's not getting started and showing this error:
EXITED(0)
Can anyone help me on how to pull this image and run it?
My Dockerfile
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
My docker-compose.yml
version: '3'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
As @larsks mentioned in his answer, your problem is that your command is in the Compose file rather than in the Dockerfile.
To run your project on another machine as-is, use the following docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    image: vivanks/firsttry:latest
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
    depends_on:
      - db
If you already added CMD python manage.py runserver 0.0.0.0:8000 to your Dockerfile and rebuilt the image, the above can be further simplified to:
version: '3'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    image: vivanks/firsttry:latest
    ports:
      - "8000:8000"
    depends_on:
      - db
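With either of these files saved as docker-compose.yml on the other machine, starting the stack is just (assuming docker-compose is installed there):

docker-compose up -d
docker-compose logs -f web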
Using docker run will fail in either case, since it won't set up a database.
Edit:
OP, I admire your persistence, but at the same time I do not understand the insistence on using the Docker CLI rather than docker-compose. I recommend using one of the above docker-compose.yml files to start your app.
Nevertheless, I accept the challenge of running it without docker-compose.
Your application fails to start when you use the docker run command because it tries to connect to the database on host db, which does not exist. In your (and my) docker-compose.yml there is a definition of a service called db. Docker-compose uses that definition to set up a database container for you and makes it available to your application under the hostname db.
To start your application without using docker-compose, you need to manually do everything it does for you automatically (the commands below assume you have added the CMD... line to your Dockerfile):
docker network create --driver bridge django-test-network
docker run --detach --env POSTGRES_DB=postgres --env POSTGRES_USER=postgres --env POSTGRES_PASSWORD=postgres --network django-test-network --name db postgres:latest
docker run -it --rm --network django-test-network --publish 8080:8000 vivanks/firsttry:latest
The above three commands create a new bridged network, create and start a detached (background) container with a properly configured database connected to that network, and finally create and start an attached (foreground) container based on your image, also attached to that new network. Since both containers are on the same non-default bridged network, your application will be able to resolve the hostname db to the internal IP address of the database container and start properly.
Once you shut it down with Ctrl+C, the container with your application will delete itself (as it was started with option --rm), but you need to also manually clean up the rest. To do so run the following commands:
docker stop db
docker rm -v db
docker network remove django-test-network
The first one stops the database container, the second one removes it and its anonymous volume and the third one removes the network.
I hope this explains everything.
Your Dockerfile doesn't specify a CMD or ENTRYPOINT. When you run...
docker run -p 8020:8020 vivanks/firsttry
...the container has nothing to do (which means it will actually try to start a Python interactive shell, but since you're not allocating a terminal with -t, the shell just exits. Successfully). In your docker-compose.yml, you're passing in an explicit command:
command: python manage.py runserver 0.0.0.0:8000
So the equivalent docker run command line would look like:
docker run -p 8020:8020 vivanks/firsttry python manage.py runserver 0.0.0.0:8000
But you probably want to bake that into your Dockerfile like this:
CMD python manage.py runserver 0.0.0.0:8000
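For completeness, the Dockerfile from the question with that CMD baked in would look roughly like this:

FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
CMD python manage.py runserver 0.0.0.0:8000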

Developing with celery and docker

I have noticed that when developing with celery in a container, with a service like this:
celeryworker:
  build: .
  user: django
  command: celery -A project.celery worker -Q project -l DEBUG
  links:
    - redis
    - postgres
  depends_on:
    - redis
    - postgres
  env_file: .env
  environment:
    DJANGO_SETTINGS_MODULE: config.settings.celery
if I want to make changes to some celery task, I have to completely rebuild the docker image in order to have the latest changes.
So I tried:
docker-compose -f celery.yml down
docker-compose -f celery.yml up
Nothing changed, then:
docker-compose -f celery.yml down
docker-compose -f celery.yml build
docker-compose -f celery.yml up
I have the new changes.
Is this the way to do it? It seems very slow to me, rebuilding the image all the time. Is it then better to have celery running locally, outside the docker containers?
Mount your . (that is, your working copy) as a volume within the container you're developing in.
That way you're using the fresh code from your working directory without having to rebuild (unless, say, you're changing dependencies or something else that requires a rebuild).
The idea is explained here by Heroku, emphasis mine:
version: '2'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    env_file: .env
    depends_on:
      - db
    volumes:
      - ./webapp:/opt/webapp # <--- Whatever code your Celery workers need should be here
  db:
    image: postgres:latest
    ports:
      - "5432:5432"
  redis:
    image: redis:alpine
    ports:
      - "6379:6379"

Docker not persisting postgres volume [django]

There are many questions on here about similar issues that I went through, such as this, this, this and this, but none of the solutions there solve my problem. Please don't close this question.
Problem:
I am running django with nginx and postgres on docker. Secret information is stored in an .env file. My postgres data is not persisting across docker-compose up/start and docker-compose down/stop/restart.
This is my docker-compose file:
version: '3.7'
services:
  web:
    build: ./app
    command: gunicorn umngane_project.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - ./app/:/usr/src/app/
    expose:
      - 8000
    environment:
      - SECRET_KEY=${SECRET}
      - SQL_ENGINE=django.db.backends.postgresql
      - SQL_DATABASE=postgres
      - SQL_USER=${POSTGRESQLUSER}
      - SQL_PASSWORD=${POSTGRESQLPASSWORD}
      - SQL_HOST=db
      - SQL_PORT=5432
      - SU_NAME=${SU_NAME}
      - SU_EMAIL=${SU_EMAIL}
      - SU_PASSWORD=${SU_PASSWORD}
    depends_on:
      - db
  db:
    image: postgres:11.2-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
  nginx:
    build: ./nginx
    volumes:
      - static_volume:/usr/src/app/assets
    ports:
      - 1337:80
    depends_on:
      - web
volumes:
  postgres_data:
    external: true # I tried running without this and the result is the same
  static_volume:
My entrypoint script is this:
python manage.py flush --no-input
python manage.py makemigrations
python manage.py migrate
python manage.py createsuperuser --user "${SU_NAME}" --email "${SU_EMAIL}" --password "${SU_PASSWORD}"
python manage.py collectstatic --no-input
exec "$@"
where createsuperuser is a custom module that creates a superuser in the application.
This setup is not persisting the information in postgres_data.
Additional information:
Before doing anything, I check to see that there is no volume named postgres_data using docker volume ls and get just that.
At which point I run docker-compose up -d/docker-compose up -d --build and everything works out fine with no errors.
I run docker inspect postgres_data and it shows "CreatedAt": "X1"
I am able to login as the superuser. I proceed to create admin users, logout as the superuser and then login as any of the admin users with no problem. I run docker exec -it postgres_data psql -U <postgres_user> to make sure the admin users are in the database and find just that.
At which point I proceed to run docker-compose down/docker-compose stop with no problem. I run docker volume ls and it shows that postgres_data is still there.
I run docker inspect postgres_data and it shows "CreatedAt": "X2"
To test that everything works as expected I run docker-compose up -d/docker-compose up -d --build/docker-compose start/docker-compose restart.
I run docker inspect postgres_data and it shows "CreatedAt": "X3"
At which point I proceed to try and login as an admin user and am not able to. I run docker exec -it postgres_data psql -U <postgres_user> again but this time only see the superuser, no admin users.
(Explanation: I am here using the forward slash to show all the different things I tried on different attempts. I tried every combination of commands shown here.)
The issue is that you run "flush" in your entrypoint script, which clears the database. The entrypoint runs whenever you boot or recreate the container.
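For illustration, the same entrypoint with the destructive flush step removed (everything else kept as in the question) would be:

python manage.py makemigrations
python manage.py migrate
python manage.py createsuperuser --user "${SU_NAME}" --email "${SU_EMAIL}" --password "${SU_PASSWORD}"
python manage.py collectstatic --no-input
exec "$@"

(createsuperuser here is the custom command mentioned in the question; whether it tolerates an already-existing superuser on a second boot is up to that command.)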
One way of having persistent data is specifying an actual path on the disk instead of creating a volume:
...
db:
  image: postgres:11.2-alpine
  volumes:
    - "/local/path/to/postgres/data:/var/lib/postgresql/data/"
...
This way, the container's postgres data location is mapped to a path you specify, so the data persists directly on disk unless purposely deleted.
An anonymous docker volume, as far as I know, can be removed along with its container (for example with docker rm -v or docker-compose down -v), whereas a bind mount like the one above stays on disk regardless.

Docker: Reconnect new postgres container to existing Data container

I have 3 docker containers. One is running django, another is running postgres, and the third is a data container for postgres. I'm using docker-compose to link them up.
docker-compose.yml
dbdata:
  image: postgres
  container_name: dbdata_container
  volumes:
    - ./data:/var/lib/postgresql/data
  command: true
db:
  image: postgres
  container_name: postgres_container
  ports:
    - "5432:5432"
  volumes_from:
    - dbdata
web:
  build: .
  container_name: django_container
  command: python manage.py runserver 0.0.0.0:8000
  volumes:
    - .:/code
  ports:
    - "8000:8000"
  links:
    - db
I have mistakenly deleted my postgres container.
How to create a new postgres container which connects to existing data container?
I have tried running:
docker-compose up
It fails with following error:
web_1 | django.db.utils.OperationalError: could not translate host name "db" to address: Name or service not known
When I connect to postgres using PgMastero I could not see any tables in it.
Kindly help
Found a workaround. Updating it here so it might help others who face the same issue.
I think the links get messed up once the postgres container is removed.
So if I do docker-compose up, docker is unable to link with the db container, which I suppose is still pointing at the deleted container, or the new container is given a different link name.
Deleting all three containers and doing docker-compose up again does the trick. I am not sure why this is; maybe it is some bug in the docker-compose module.
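For reference, one way to do that reset with the compose file above (a sketch; this removes the django, postgres and data containers, while the ./data bind mount stays on disk):

docker-compose stop
docker-compose rm -f
docker-compose up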