Docker: Reconnect new postgres container to existing Data container - django

I have 3 docker containers: one running Django, another running Postgres, and a third acting as a data container for Postgres. I'm using docker-compose to link them up.
docker-compose.yml
dbdata:
  image: postgres
  container_name: dbdata_container
  volumes:
    - ./data:/var/lib/postgresql/data
  command: "true"
db:
  image: postgres
  container_name: postgres_container
  ports:
    - "5432:5432"
  volumes_from:
    - dbdata
web:
  build: .
  container_name: django_container
  command: python manage.py runserver 0.0.0.0:8000
  volumes:
    - .:/code
  ports:
    - "8000:8000"
  links:
    - db
I have mistakenly deleted my postgres container.
How can I create a new postgres container that connects to the existing data container?
I have tried running:
docker-compose up
It fails with the following error:
web_1 | django.db.utils.OperationalError: could not translate host name "db" to address: Name or service not known
When I connect to postgres using PgMastero I cannot see any tables in it.
Kindly help

Found a workaround. Updating it here so it might help others who face the same issue.
I think the links get messed up once the postgres container is removed.
So if I do docker-compose up, docker is unable to link with the db container: I suppose the link still points at the deleted container, or the new container is given a different link name.
Deleting all three containers and doing docker-compose up does the trick. I am not sure why this is; maybe it is a bug in docker-compose.
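For reference, the full recreate sequence looks roughly like this (a sketch; the container names are the ones set in the compose file above):
# remove all three containers; the bind-mounted ./data directory on the host
# survives, since removing a container never deletes bind-mounted host paths
docker rm -f django_container postgres_container dbdata_container
# recreate the containers (and their links) from scratch
docker-compose up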

Related

How to connect to a postgres database in a docker container?

I set up my django and postgres containers on my local machine and everything is working fine. The local server is running and the database is running, but I am not able to connect to the created postgres db.
docker-compose.yml
version: '3'
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:13.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=my_user
      - POSTGRES_PASSWORD=my_password
      - POSTGRES_DB=my_db
volumes:
  postgres_data:
I tried this command:
docker exec -it container_id psql -U postgres
error:
psql: error: could not connect to server: FATAL: role "postgres" does not exist
I am very new to Docker.
You're not using the username and the password you provided in your docker-compose file. Try this and then enter my_password:
docker exec -it container_id psql -U my_user -d my_db --password
Check the official documentation to find out more about the PostgreSQL terminal.
I would also like to add: in your compose file you're not exposing any ports for the db container, so it will be unreachable from external sources (you, your app, or anything that isn't run within that container).
I think you need to add environment variables to the project (web) container:
environment:
  - DB_HOST=db
  - DB_NAME=my_db
  - DB_USER=youruser
  - DB_PASS=yourpass
depends_on:
  - db
Add the environment block before depends_on and see if that solves it; the settings sketch below shows how those variables get used.
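A minimal settings.py sketch that consumes those variables (the names DB_HOST, DB_NAME, DB_USER and DB_PASS are the ones assumed in the snippet above, not anything Django defines):
import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        # fall back to the compose defaults if the variables are unset
        'HOST': os.environ.get('DB_HOST', 'db'),
        'NAME': os.environ.get('DB_NAME', 'my_db'),
        'USER': os.environ.get('DB_USER', ''),
        'PASSWORD': os.environ.get('DB_PASS', ''),
        'PORT': '5432',
    }
}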
You should add ports to the docker-compose entry for the postgres image, as this allows postgres to be accessible outside the container:
ports:
  - "5432:5432"
You can check out more here: docker-compose for postgres

how to dump postgres database in django?

I have an application running in a docker container and a psql database running in a docker container as well. I want to dump the database while inside the django container. I know there is dumpdata in django, but that command takes a long time. I also tried docker exec pg_dump, but that command doesn't work inside the django container.
services:
  db_postgres:
    image: postgres:10.5-alpine
    restart: always
    volumes:
      - pgdata_invivo:/var/lib/postgresql/data/
    env_file:
      - .env
  django:
    build: .
    restart: always
    volumes:
      - ./static:/static
      - ./media:/media
    ports:
      - 8000:8000
    depends_on:
      - db_postgres
    env_file:
      - .env
volumes:
  pgdata_invivo:
Is there any way to do a pg_dump without using docker exec while inside the django container?
While your containers are running, type:
docker-compose down -v
This will remove the volumes, and thus all the data stored in the container's database will be removed.
Now run
docker-compose up --build
docker-compose exec django python manage.py migrate
to create your tables again.
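As for dumping from inside the django container itself: pg_dump can connect to the db_postgres service over the compose network instead of going through docker exec. A sketch, assuming the django image has the postgres client tools installed and that the usual POSTGRES_USER / POSTGRES_PASSWORD / POSTGRES_DB variables come in via the shared .env file:
# run inside the django container; db_postgres is the service name,
# which docker-compose makes resolvable as a hostname
PGPASSWORD="$POSTGRES_PASSWORD" pg_dump \
    -h db_postgres \
    -U "$POSTGRES_USER" \
    -d "$POSTGRES_DB" \
    -F c -f /tmp/backup.dump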

How to connect a django app to a dockerized postgres db, both from a dockerized django and non-dockerized django using same DATABASE_HOST

I have a postgres container, docker-compose.yml:
services:
  db:
    container_name: db
    expose:
      - "5432"
    ports:
      - 5432:5432
    volumes:
      - postgres_data:/var/lib/postgresql/data/
And a django project with settings.py:
DATABASES = {
    'default': {
        'HOST': os.environ.get('POSTGRES_HOST', '127.0.0.1'),
        # etc
    }
}
.env
POSTGRES_HOST=db
When I run my django app locally with manage.py runserver 0.0.0.0:8000 it connects fine, using the default POSTGRES_HOST=127.0.0.1, because .env isn't loaded.
I also run my django app in a container sometimes:
docker-compose.yml:
web:
  #restart: unless-stopped
  build: .
  env_file: .env
  command: bash -c "cd /app/src/ && python manage.py runserver 0.0.0.0:8000"
  volumes:
    - .:/app
  ports:
    - 8000:8000
  links:
    - db:db
However, it uses the .env file and connects with POSTGRES_HOST=db.
If I try to connect the locally run django app with POSTGRES_HOST=db it fails:
django.db.utils.OperationalError: could not translate host name "db" to address: Name or service not known
And if I try to run the django app in a container with POSTGRES_HOST=127.0.0.1, it fails in the same way.
How can I get them to use the same POSTGRES_HOST?
The problem seems to be in the network config; I don't see one.
The default behavior of docker-compose is to create a network for every compose file; by default the name is the folder name with '_default' appended.
So your Django app is in one network and Postgres is in another.
If your Django app and Postgres containers are in different docker-compose files, using container names to resolve hosts will not work by default (it can be done with a custom network config), as they are in two different networks.
Since you have done a port binding, you can reach Postgres directly by giving the host machine's private IP and port 5432 in the container; this way communication happens through the host network.
If you need the containers to talk to each other directly, make sure they are on the same docker network, as in the sketch below.
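A minimal sketch of that last option, assuming one pre-created network that both compose files join (the name shared_net is made up for illustration):
# create the network once on the host:
#   docker network create shared_net
# then, in each compose file, attach the relevant service to it:
services:
  db:
    networks:
      - shared_net
networks:
  shared_net:
    external: true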
I figured out how to do it. The trick wasn't getting both setups to use the same value; it was getting them to read different values based on how the app is run. So:
from docker-compose.yml:
web:
  build: .
  command: bash -c "cd /app/src/ && python manage.py runserver 0.0.0.0:8000"
  env_file: .env
  environment:
    POSTGRES_HOST: db  # takes precedence over the .env file
And in .env:
POSTGRES_HOST=127.0.0.1
Now, when I run locally with ./manage.py runserver, it uses the .env file and connects to the db container properly at 127.0.0.1:5432.
But if I run docker-compose up web, even though the .env file is also read, the environment variable provided in the compose file takes precedence, so it uses POSTGRES_HOST: db and connects to the db container as well!
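A quick way to confirm which value wins inside the container (a hypothetical check, using the service name from the compose file above):
# prints the value the container actually sees; expected output: POSTGRES_HOST=db
docker-compose run --rm web env | grep POSTGRES_HOST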

127.0.0.1 refused to connect in docker django

I'm trying to connect to an instance of django running in docker. As far as I can tell I've opened the correct port, and docker ps shows tcp on port 8000, but there is no forwarding to the port.
After reading the docker compose docs on ports, I would expect this to work (I can view pgadmin on 127.0.0.1:9000, too).
My docker compose:
version: '3'
services:
  postgresql:
    restart: always
    image: postgres:latest
    environment:
      POSTGRES_USER: pguser
      POSTGRES_PASSWORD: pgpassword
      POSTGRES_DB: pgdb
  pgadmin:
    restart: always
    image: dpage/pgadmin4:latest
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@admin.com
      PGADMIN_DEFAULT_PASSWORD: admin
      GUNICORN_THREADS: 4
      PGADMIN_LISTEN_PORT: 9000
    volumes:
      - ./utility/pgadmin4-servers.json:/pgadmin4/servers.json
    depends_on:
      - postgresql
    ports:
      - "9000:9000"
  app:
    build: .
    environment:
      POSTGRES_DB: pgdb
      POSTGRES_USER: pguser
      POSTGRES_PASSWORD: pgpassword
      POSTGRES_HOST: postgresql
    volumes:
      - .:/code
    ports:
      - "127.0.0.1:8000:8000"
      - "5555:5555"
    depends_on:
      - postgresql
      - pgadmin
I have tried the following combinations for the (app) ports, as suggested here:
app:
  ...
  ports:
    - "8000"
    - "8000:8000"
    - "127.0.0.1:8000:8000"
but I still see "This site can’t be reached: 127.0.0.1 refused to connect." when trying to access the site.
I'm sure that this is a port forwarding problem, and that my server is running correctly in django, because I can docker attach to the container and curl a URL with the expected response.
What am I doing wrong?
I was running my application using the command:
docker-compose run app python3 manage.py runserver 0.0.0.0:8000
With docker-compose you need to use the argument --service-ports to:
Run command with the service's ports enabled and mapped to the host.
Thus my final command looked like this:
docker-compose run --service-ports app python3 manage.py runserver 0.0.0.0:8000
Documentation on run can be found here
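For contrast, a sketch of the two behaviors (service and command names as above):
# "up" publishes the ports: mappings declared in the compose file
docker-compose up app
# "run" skips those mappings unless --service-ports is passed
docker-compose run --service-ports app python3 manage.py runserver 0.0.0.0:8000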
If you are running docker inside a virtual machine, then you need to access your application through the virtual machine's IP address, not localhost or 127.0.0.1. Try to get the virtual machine's IP. Also, please specify which platform/environment you installed and are running docker on.
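If the VM is managed by docker-machine (e.g. Docker Toolbox), one way to find that address is sketched below; "default" is the conventional machine name and may differ on your setup:
# prints the VM's IP address
docker-machine ip default
# then browse to http://<printed-ip>:8000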

Docker-Compose Workflow, docker-compose down?

I am learning docker and have a working docker-compose setup with django/postgresql. Everything is working as expected. My question is what is considered "best practice" with regard to data persistence and the risk to the data.
Here is my full docker-compose.yml:
version: '2'
services:
  db:
    image: postgres
    volumes:
      - postgresql:/var/lib/postgresql
    ports:
      - "5432:5432"
    env_file: .env
  web:
    build: .
    command: python run_server.py
    volumes:
      - .:/project
    ports:
      - "8000:8000"
    depends_on:
      - db
volumes:
  postgresql:
The run_server.py script simply checks to make sure the database can be connected to, and then runs python manage.py runserver.
So if I stop my containers and restart them, the data persists. My concern lies with the docker-compose down command: it deletes the database. Is this intended? It seems like it would be very easy to run this and accidentally do a lot of damage.
Is there a way to make the database persist even if these containers are removed?
I followed this guide for integrating Django with Docker:
https://docs.docker.com/compose/django/
The way to make the data "persist" is to set up the database outside the Docker image and let the app connect to it via settings.py.
With this trick, when the container goes down, the database persists because it lives outside that container.
Another trick is to put the database inside another docker container.
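One more detail worth noting: the compose file above already declares a named volume, but it is mounted at /var/lib/postgresql rather than at the directory the postgres image actually writes to (/var/lib/postgresql/data). Mounting the named volume at the data directory should let the database survive a plain docker-compose down, since named volumes are only removed when you pass -v. A sketch of just the changed service:
db:
  image: postgres
  volumes:
    # mount at the PGDATA directory itself, not its parent
    - postgresql:/var/lib/postgresql/data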