Use local postgres database from a docker container - django

Hey everyone, I am trying to connect a Postgres database installed on Ubuntu 20.04, outside of the container, to a Docker container. I am working on a Django project.
I am able to create a Postgres database inside the Docker container and connect my Django project to it, but what I want is to connect the local database to the Django project running in the Docker container.
Here is my docker-compose.yml file
version: '3.3'
services:
  # Description (for the postgres database)
  kapediadb:
    image: postgres
    restart: always
    container_name: kapediadb
    # For accessing env data
    environment:
      - POSTGRES_DB=${DB_NAME}
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
  # Description (for the django application)
  kapedia:
    restart: always
    container_name: kapedia
    command:
      - /bin/bash
      - -c
      - |
        python manage.py makemigrations accounts
        python manage.py makemigrations posts
        python manage.py makemigrations quiz
        python manage.py migrate
        gunicorn kapedia.wsgi:application --bind 0.0.0.0:8000
    image: kapedia
    # Description (define your Dockerfile location here)
    build: .
    volumes:
      - .:/kapedia
    ports:
      - "8000:8000"
    depends_on:
      - kapediadb
    env_file:
      - .env

# Description (for volumes)
volumes:
  static:

Simply, you can add this inside the project (Django) service in docker-compose.yml:
extra_hosts:
  - "host.docker.internal:172.17.0.1"
To find the Docker bridge IP, i.e. 172.17.0.1 (in my case), you can run this in the local machine's terminal:
$ ifconfig docker0
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
In postgresql.conf, change listen_addresses to listen_addresses = '*'.
In pg_hba.conf, add this line at the end:
host all all 0.0.0.0/0 md5
Now restart the PostgreSQL service with sudo service postgresql restart.
Use the host.docker.internal hostname to connect to the database from the server application, e.g.:
jdbc:postgresql://host.docker.internal:5432/bankDB
Note: use sudo nano /etc/postgresql/<your_version>/main/postgresql.conf to open the postgresql.conf file.
This is how you can connect your local database to a Docker container.
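For a Django project the same idea would show up in settings.py; below is a minimal sketch, assuming the stock Postgres backend. Only the host.docker.internal hostname comes from the answer above; the database name, user, and password placeholders are assumptions to be replaced with your own values.

import os

# Minimal sketch: point Django at the host machine's Postgres through the extra_hosts entry.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ.get('DB_NAME', 'kapedia'),    # assumed database name
        'USER': os.environ.get('DB_USER', 'postgres'),   # assumed user
        'PASSWORD': os.environ.get('DB_PASSWORD', ''),   # assumed password
        'HOST': 'host.docker.internal',  # resolves to 172.17.0.1 via extra_hosts
        'PORT': '5432',
    }
}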

How to connect to a postgres database in a docker container?

I set up my Django and Postgres containers on my local machine and everything is working fine. The local server is running and the database is running, but I am not able to connect to the created Postgres DB.
docker-compose.yml
version: '3'
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:13.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=my_user
      - POSTGRES_PASSWORD=my_password
      - POSTGRES_DB=my_db
volumes:
  postgres_data:
I tried this command:
docker exec -it container_id psql -U postgres
error:
psql: error: could not connect to server: FATAL: role "postgres" does not exist
I am very new to Docker.
You're not using the username and the password you provided in your docker-compose file. Try this and then enter my_password:
docker exec -it container_id psql -U my_user -d my_db --password
Check the official documentation to find out about the PostgreSQL terminal.
I would also like to add that in your compose file you're not exposing any ports for the db container, so it will be unreachable from external sources (you, your app, or anything that isn't run within that container).
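If you do publish the port (for example "5432:5432" on the db service), a quick way to sanity-check the credentials from the host is a short psycopg2 script. This is just an illustration; the connection values are taken from the compose file above.

import psycopg2  # pip install psycopg2-binary

# Connect to the containerized Postgres through the published port on localhost.
conn = psycopg2.connect(
    dbname="my_db",
    user="my_user",
    password="my_password",
    host="127.0.0.1",
    port=5432,
)
with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])
conn.close()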
I think you also need to add environment variables to the project (web) container; add them just before its depends_on entry:
environment:
  - DB_HOST=db
  - DB_NAME=my_db
  - DB_USER=youruser
  - DB_PASS=yourpass
depends_on:
  - db
Now see if that solves it.
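For those variables to have any effect, the Django settings have to read them. Here is a minimal sketch of the DATABASES block, assuming the stock Postgres backend and the variable names from the snippet above; DB_PORT is an extra assumption, 5432 being simply the Postgres default.

import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ.get('DB_NAME', 'my_db'),
        'USER': os.environ.get('DB_USER', 'youruser'),
        'PASSWORD': os.environ.get('DB_PASS', 'yourpass'),
        'HOST': os.environ.get('DB_HOST', 'db'),    # 'db' is the compose service name
        'PORT': os.environ.get('DB_PORT', '5432'),  # assumed; not set in the compose file
    }
}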
You should also add ports to the docker-compose entry for the postgres image, as this allows postgres to be accessible outside the container:
ports:
  - "5432:5432"
You can check out more here: docker-compose for postgres

Docker pull Django image and run container

So, I have followed this tutorial by Docker to create a Django image.
It completely works on my local machine by just running a docker-compose up command from the root directory of my project.
But after pushing the image to Docker Hub (https://hub.docker.com/repository/docker/vivanks/firsttry),
I am pulling the image on another machine and then running:
docker run -p 8020:8020 vivanks/firsttry
But it doesn't start, and shows this error:
EXITED(0)
Can anyone help me on how to pull this image and run it?
My Dockerfile
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
My docker-compose.yml
version: '3'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
As @larsks mentioned in his answer, your problem is that your command is in the Compose file rather than in the Dockerfile.
To run your project on another machine as-is, use the following docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    image: vivanks/firsttry:latest
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
    depends_on:
      - db
If you already added CMD python manage.py runserver 0.0.0.0:8000 to your Dockerfile and rebuilt the image, the above can be further simplified to:
version: '3'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    image: vivanks/firsttry:latest
    ports:
      - "8000:8000"
    depends_on:
      - db
Using docker run will fail in either case, since it won't set up a database.
Edit:
OP, I admire your persistence, but at the same time do not understand the insistence on using Docker CLI rather than docker-compose. I recommend using one of the above docker-compose.yml files to start your app.
Nevertheless, I accept the challenge of running it without docker-compose.
Your application fails to start when you use the docker run command because it tries to connect to a database on host db, which does not exist. In your (and my) docker-compose.yml there is a definition of a service called db. Docker-compose uses that definition to set up a database container for you and makes it available to your application under the hostname db.
To start your application without using docker-compose, you need to manually do everything it does for you automatically (the commands below assume you have added CMD... to your Dockerfile):
docker network create --driver bridge django-test-network
docker run --detach --env POSTGRES_DB=postgres --env POSTGRES_USER=postgres --env POSTGRES_PASSWORD=postgres --network django-test-network --name db postgres:latest
docker run -it --rm --network django-test-network --publish 8080:8000 vivanks/firsttry:latest
The above three commands create a new bridged network, create and start a detached (background) container with a properly configured database connected to that network, and finally create and start an attached (foreground) container based on your image, also attached to that new network. Since both containers are on the same non-default bridged network, your application will be able to resolve the hostname db to the internal IP address of the database container and start properly.
Once you shut it down with Ctrl+C, the container with your application will delete itself (as it was started with the --rm option), but you also need to clean up the rest manually. To do so, run the following commands:
docker stop db
docker rm -v db
docker network remove django-test-network
The first one stops the database container, the second one removes it and its anonymous volume and the third one removes the network.
I hope this explains everything.
Your Dockerfile doesn't specify a CMD or ENTRYPOINT. When you run...
docker run -p 8020:8020 vivanks/firsttry
...the container has nothing to do (which means it will actually try to start an interactive Python shell, but since you're not allocating a terminal with -t, the shell just exits, successfully). In your docker-compose.yml, you're passing in an explicit command:
command: python manage.py runserver 0.0.0.0:8000
So the equivalent docker run command line would look like:
docker run -p 8020:8020 vivanks/firsttry python manage.py runserver 0.0.0.0:8000
But you probably want to bake that into your Dockerfile like this:
CMD python manage.py runserver 0.0.0.0:8000

How to connect a django app to a dockerized postgres db, both from a dockerized django and non-dockerized django using same DATABASE_HOST

I have a postgres container, docker-compose.yml:
services:
  db:
    container_name: db
    expose:
      - "5432"
    ports:
      - 5432:5432
    volumes:
      - postgres_data:/var/lib/postgresql/data/
And a django project with settings.py:
DATABASES = {
    'default': {
        'HOST': os.environ.get('POSTGRES_HOST', '127.0.0.1'),
        # etc
    }
}
.env
POSTGRES_HOST=db
When I run my django app locally with manage.py runserver 0.0.0.0:8000 it connects fine, using the default POSTGRES_HOST=127.0.0.1, because .env isn't loaded.
I also run my django app in a container sometimes:
docker-compose.yml:
web:
  # restart: unless-stopped
  build: .
  env_file: .env
  command: bash -c "cd /app/src/ && python manage.py runserver 0.0.0.0:8000"
  volumes:
    - .:/app
  ports:
    - 8000:8000
  links:
    - db:db
However it uses the .env file and connects with POSTGRES_HOST=db
If I try to connect the locally run django app with POSTGRES_HOST=db it fails:
django.db.utils.OperationalError: could not translate host name "db" to address: Name or service not known
And if I try to run the django app in a container with POSTGRES_HOST=127.0.0.1, it fails in the same way.
How can I get them to use the same POSTGRES_HOST?
The problem seems to be in the network config - I don't see one.
The default behavior of docker-compose is that it creates a network for every compose file; by default the network name is the folder name with '_default' appended.
So your Django app is on one network and Postgres is on another.
If your Django app and Postgres containers are defined in different docker-compose files, using container names to resolve hosts will not work by default (it can be done with a custom network config), as they are on two different networks.
Since you have done a port binding, you can access Postgres directly by using the host machine's private IP and port 5432 from inside the container; this way the communication happens through the host network.
If you find a need to make the containers talk to each other directly, make sure they are on the same Docker network.
I figured out how to do it. It wasn't about getting them to use the same value; it was getting them to read different values depending on how the app is run. So:
In docker-compose.yml:
web:
  build: .
  command: bash -c "cd /app/src/ && python manage.py runserver 0.0.0.0:8000"
  env_file: .env
  environment:
    POSTGRES_HOST: db  # takes precedence over the .env file
And in .env:
POSTGRES_HOST=127.0.0.1
Now, when I run locally with ./manage.py runserver, it uses the .env file and connects to the db container properly at 127.0.0.1:5432.
But if I run docker-compose up web, even though it also reads the .env file, the environment variable provided in the compose file takes precedence, so it uses POSTGRES_HOST: db and connects to the db container as well!

127.0.0.1 refused to connect in docker django

I'm trying to connect to an instance of Django running in Docker. As far as I can tell I've opened the correct port, and docker ps shows TCP on port 8000, but there is no forwarding to the port.
After reading the Docker Compose docs on ports, I would expect this to work (I can view pgAdmin on 127.0.0.1:9000 too).
My docker compose:
version: '3'
services:
  postgresql:
    restart: always
    image: postgres:latest
    environment:
      POSTGRES_USER: pguser
      POSTGRES_PASSWORD: pgpassword
      POSTGRES_DB: pgdb
  pgadmin:
    restart: always
    image: dpage/pgadmin4:latest
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@admin.com
      PGADMIN_DEFAULT_PASSWORD: admin
      GUNICORN_THREADS: 4
      PGADMIN_LISTEN_PORT: 9000
    volumes:
      - ./utility/pgadmin4-servers.json:/pgadmin4/servers.json
    depends_on:
      - postgresql
    ports:
      - "9000:9000"
  app:
    build: .
    environment:
      POSTGRES_DB: pgdb
      POSTGRES_USER: pguser
      POSTGRES_PASSWORD: pgpassword
      POSTGRES_HOST: postgresql
    volumes:
      - .:/code
    ports:
      - "127.0.0.1:8000:8000"
      - "5555:5555"
    depends_on:
      - postgresql
      - pgadmin
I have tried the following combinations for the app service's ports, as suggested here:
app:
  ...
  ports:
    - "8000"
    - "8000:8000"
    - "127.0.0.1:8000:8000"
but I still see "This site can’t be reached - 127.0.0.1 refused to connect." when trying to access the site.
I'm sure this is a port-forwarding problem and that my server is running correctly in Django, because I can docker attach to the container and curl a URL with the expected response.
What am I doing wrong?
I was running my application using the command:
docker-compose run app python3 manage.py runserver 0.0.0.0:8000
With docker-compose you need to use the argument --service-ports to:
Run command with the service's ports enabled and mapped to the host.
Thus my final command looked like this:
docker-compose run --service-ports app python3 manage.py runserver 0.0.0.0:8000
Documentation on run can be found here
If you are running Docker inside a virtual machine, then you need to access your application through the virtual machine's IP address, not localhost or 127.0.0.1. Try to get the virtual machine's IP. Also, please specify on which platform/environment you installed and are running Docker.

Want to connect mongodb docker service with my django application

I want to create two Docker services: one is a MongoDB service, the other is a web service built using Django. The web service (Django app) needs to connect to that MongoDB service.
But I don't know how to connect to the MongoDB service from my Django application, which is also a service running in the same Docker swarm. This is my docker-compose.yml:
version: '3'
services:
  mongo:
    image: mongo:latest
    command: mongod --storageEngine wiredTiger
    ports:
      - "27017:27017"
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
  web:
    build: .
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    links:
      - mongo
    depends_on:
      - mongo
Here is what I tried with mongoengine in the Django application's settings.py, but it failed:
MONGO_DATABASE_NAME = "reg_task21"
MONGO_HOST = "mongo"
mongoengine.connect(db=MONGO_DATABASE_NAME, host=MONGO_HOST,port=27017)
You should add the username and password to the connect statement:
mongoengine.connect(db=MONGO_DATABASE_NAME, username='root', password='example', host=MONGO_HOST, port=27017)
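A slightly fuller sketch, assuming the root credentials from the compose file above: because MONGO_INITDB_ROOT_USERNAME creates the user in the admin database, you will likely also need authentication_source='admin' (an assumption worth verifying for your setup).

import mongoengine

MONGO_DATABASE_NAME = "reg_task21"
MONGO_HOST = "mongo"  # compose service name, resolvable from the web container

# The root user created by MONGO_INITDB_ROOT_USERNAME lives in the 'admin' database,
# hence authentication_source='admin'.
mongoengine.connect(
    db=MONGO_DATABASE_NAME,
    host=MONGO_HOST,
    port=27017,
    username="root",
    password="example",
    authentication_source="admin",
)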