I am running a dockerized Django application and trying to connect to a PostgreSQL database located on an external host with a public IP.
When the container runs, the makemigrations command fails with the following error:
django.db.utils.OperationalError: could not connect to server: Connection refused
Is the server running on host "myhost" (89.xx.xx.102) and accepting
TCP/IP connections on port 5432?
However, it connects successfully when not dockerized.
Here is the docker-compose.yml:
services:
backend:
build: .
ports:
- 65534:65534
and corresponding Dockerfile:
FROM python:3.10 AS builder
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
COPY requirements.txt /app/
RUN pip install -r /app/requirements.txt
RUN pip install gunicorn
FROM builder
COPY ./ /app/
ENTRYPOINT [ "/app/entrypoint.sh" ]
and entrypoint.sh:
#!/bin/bash
python /app/manage.py collectstatic --no-input --clear
python /app/manage.py makemigrations
python /app/manage.py migrate --no-input
gunicorn --pythonpath /app/ -b 0.0.0.0:65534 app.wsgi:application
How can I make it possible for the Django application to connect to the externally hosted PostgreSQL database?
In your terminal, run the command below:
vim /etc/postgresql/14/main/postgresql.conf #14 is the version of postgres.
Uncomment and edit the listen_addresses attribute to start listening on all available IP addresses.
listen_addresses = '*'
Append a new connection policy (the pattern is [CONNECTION_TYPE] [DATABASE] [USER] [ADDRESS] [METHOD]) at the bottom of the pg_hba.conf file (e.g. /etc/postgresql/14/main/pg_hba.conf).
host all all 0.0.0.0/0 md5
We are allowing TCP/IP connections (host) to all databases (all) for all users (all) from any IPv4 address (0.0.0.0/0) using MD5 password authentication (md5).
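To make the five fields of such a rule concrete, here is a toy parser, for illustration only (note that local rules omit the ADDRESS field, so this sketch handles host-type rules only):

```python
# Parse a host-type pg_hba.conf rule line into its five fields:
#   TYPE  DATABASE  USER  ADDRESS  METHOD
def parse_hba_rule(line):
    fields = line.split()
    if len(fields) != 5:
        raise ValueError("expected 5 fields: TYPE DATABASE USER ADDRESS METHOD")
    keys = ("type", "database", "user", "address", "method")
    return dict(zip(keys, fields))

rule = parse_hba_rule("host all all 0.0.0.0/0 md5")
print(rule["type"], rule["address"], rule["method"])
```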
It is now time to restart your PostgreSQL service to load your configuration changes.
systemctl restart postgresql
And make sure your system is listening on port 5432, the default PostgreSQL port.
ss -nlt | grep 5432
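You can also verify from any machine (including inside the container) that the server actually accepts TCP connections, using only the Python standard library. A minimal sketch; "myhost" is a placeholder for your database host:

```python
import socket

def can_connect(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# "myhost" is a placeholder; substitute your database host.
if can_connect("myhost", 5432):
    print("PostgreSQL port is reachable")
else:
    print("Connection refused, timed out, or host not resolvable")
```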
Connect to the PostgreSQL database from a remote host:
Your PostgreSQL server is now running and listening for external requests. It is now time to connect to your database from a remote host.
Connect via Command Line Tool:
You may now connect to a remote database by using the following command pattern:
psql -h [ip address] -p [port] -d [database] -U [username]
Let’s now connect to a remote PostgreSQL database that we have hosted.
psql -h 5.199.162.56 -p 5432 -d test_erp -U postgres
To double check your connection details use the \conninfo command.
Related
I'm currently trying to figure out how to configure my docker-compose.yml to allow a web-server (django) to communicate with a PostgreSQL database running on the host machine.
The app is perfectly working outside of a container.
And now I want to create a container for better management and deployment capabilities.
I've tried this,
docker-compose.yml :
version: '3'
services:
web:
image: myimage
volumes:
- .:/appdir
environment:
- DB_NAME=test
- DB_USER=test
- DB_PASSWORD=test
- DB_HOST=localhost
command: python manage.py runserver 0.0.0.0:8000
ports:
- "8000:8000"
networks:
- mynet
networks:
mynet:
Dockerfile :
FROM python:3
ENV PYTHONUNBUFFERED=1 \
DJANGO_SETTINGS_MODULE=app.settings \
DEBUG=True \
SECRET_KEY=akey
WORKDIR /appdir
COPY . /appdir
EXPOSE 8000
RUN pip install -r requirements.txt
But when I do so, I get the following error :
web_1 | django.db.utils.OperationalError: could not connect to server: Connection refused
web_1 | Is the server running on host "localhost" (127.0.0.1) and accepting
web_1 | TCP/IP connections on port 5432?
Thanks
localhost is relative: inside the Docker container, localhost (aka 127.0.0.1) refers to the container itself. If you want to connect to your host, give the container your host's real IP as the DB_HOST.
There are many ways to find your host IP; for instance, run in your terminal:
hostname -I | awk '{print $1}'
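The same lookup can be done programmatically. This sketch picks the IP of the interface used for outbound traffic (connecting a UDP socket selects a local address from the routing table without sending a packet; 8.8.8.8 is only a routing target) and falls back to 127.0.0.1 on machines without a route:

```python
import socket

def host_ip():
    """Best-effort guess of the host's primary IP address."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # No packet is sent; connect() on a UDP socket only chooses
        # a local source address based on the routing table.
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]
    except OSError:
        return "127.0.0.1"
    finally:
        s.close()

print(host_ip())
```

On Docker Desktop (and on Linux with `--add-host=host.docker.internal:host-gateway`), the special name host.docker.internal can be used as DB_HOST instead of hard-coding an IP.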
I would like to run a dockerized Django app with a dockerized postgres.
I run the dockerized Django app by using:
docker run --rm --env-file /path/to/variables -d -p 8000:8000 django_app:test
I run a dockerized postgres by using:
docker run --rm -d --env-file /path/to/secrets/variables -p 5432:5432 \
-v "$PWD/my-postgres.conf":/etc/postgresql/postgresql.conf \
--mount src=/path/to/db/data,dst=/var/lib/postgresql/data,type=bind \
postgres:alpine -c 'config_file=/etc/postgresql/postgresql.conf'
My postgres config is the default one suggested in the postgres Docker image documentation; it is essentially a config file that contains listen_addresses = '*'.
I use the same environment variables for both containers:
DJANGO_SETTINGS_MODULE=settings.module
PROJECT_KEY=xxyyzzabcdefg
DB_ENGINE=django.db.backends.postgresql
POSTGRES_DB=db_name
POSTGRES_USER=db_user
POSTGRES_PASSWORD=verydifficultpassword
POSTGRES_HOST=localhost # I've also tried to use 0.0.0.0
POSTGRES_PORT=5432
My Django settings module for the database is:
DATABASES = {
'default': {
'ENGINE': os.environ.get('DB_ENGINE'),
'NAME': os.environ.get('POSTGRES_DB'),
'USER': os.environ.get('POSTGRES_USER'),
'PASSWORD': os.environ.get('POSTGRES_PASSWORD'),
'HOST': os.environ.get('POSTGRES_HOST'),
'PORT': os.environ.get('POSTGRES_PORT')
}
}
However, I keep on getting:
django.db.utils.OperationalError: could not connect to server: Connection refused
Is the server running on host "0.0.0.0" and accepting
TCP/IP connections on port 5432?
The Dockerfile for my Django app looks like:
FROM python:alpine3.7
COPY --from=installer /app /app
# required for postgres
COPY --from=installer /usr/lib /usr/lib
COPY --from=installer /usr/local/bin /usr/local/bin
COPY --from=installer /usr/local/lib /usr/local/lib
ARG SETTINGS_MODULE
WORKDIR /app
ENTRYPOINT python manage.py migrate &&\
python manage.py test &&\
python manage.py create_default_groups &&\
python manage.py set_screen_permissions &&\
python manage.py create_test_users &&\
python manage.py init_core &&\
python manage.py runserver 0.0.0.0:8000
Another interesting fact is that if I run the app locally with python manage.py runserver while the postgres container is running, the app seems to work.
Can anyone help me figure out why I am getting a connection refused? Thanks in advance!
Just make use of user-defined bridge networking. First, read a short explanation of the different types of networks in Docker: https://docs.docker.com/network/bridge/
Second, define your own network
docker network create foo
Next, run your containers connected to this network:
docker run --rm --env-file /path/to/variables -d --network foo django_app:test
docker run --rm -d ... --network foo postgres:alpine ...
Notice --network foo in both commands. You also don't need to publish ports in this case; within user-defined networks, containers expose their ports to each other automatically:
Containers connected to the same user-defined bridge network
automatically expose all ports to each other, and no ports to the
outside world. This allows containerized applications to communicate
with each other easily, without accidentally opening access to the
outside world.
Third, give your containers human-readable host names with the --name option:
docker run ... --network foo --name my-django django_app:test ...
docker run ... --network foo --name my-postgres postgres:alpine ...
And finally fix the connection string: change localhost to the container name, e.g. my-postgres:
...
POSTGRES_HOST=my-postgres
...
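With the host fixed, the settings pattern shown in the question resolves correctly. A self-contained sketch of the same idea, with dummy values standing in for the container's real environment:

```python
import os

# Dummy values standing in for the container's environment.
os.environ.setdefault("POSTGRES_HOST", "my-postgres")
os.environ.setdefault("POSTGRES_PORT", "5432")
os.environ.setdefault("POSTGRES_DB", "db_name")

DATABASES = {
    "default": {
        "ENGINE": os.environ.get("DB_ENGINE", "django.db.backends.postgresql"),
        "NAME": os.environ.get("POSTGRES_DB"),
        "USER": os.environ.get("POSTGRES_USER"),
        "PASSWORD": os.environ.get("POSTGRES_PASSWORD"),
        "HOST": os.environ.get("POSTGRES_HOST"),  # the container name, not localhost
        "PORT": os.environ.get("POSTGRES_PORT"),
    }
}
print(DATABASES["default"]["HOST"])
```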
In your Docker container with Django, there is no Postgres running on localhost. You need to point to the Postgres Docker container by specifying the container name instead of localhost.
Also, the Postgres container and your app have to be in the same network; see https://docs.docker.com/network/
To create a network run:
docker network create --driver bridge my-network
To run docker container in the network use:
docker run --network my-network -it container_name
To setup Postgres in container with Django:
POSTGRES_HOST=postgres-container-name
In your case you are not specifying the container names. When setting POSTGRES_HOST=localhost you're telling the Django container to connect to itself, which naturally fails because there is no postgres server inside the Django container.
Your two containers are currently attached to the default bridge network (docker0), where automatic name resolution is not available. On a user-defined bridge network, container names are usable as host names.
This means that from your Django container you can reach the postgres container by using its name as the hostname.
To be able to connect to postgres from you django app you need to:
Give names to your containers by using the --name option. Example: docker run --rm --name mypostgresdb -d ... postgres:alpine -c ...
First create the postgres container and wait for it to boot. Then create the database and user/password.
Only when postgres is up, create the django container. If the postgres container is not up your django container will fail since it's trying to run migrations in the entrypoint.
The POSTGRES_HOST env variable in your django container should receive the postgres container's name as a value. Example: POSTGRES_HOST=mypostgresdb
To check the connection to you postgres container you can ping it from your django container.
I want to create a Dockerfile for the database. In this Dockerfile I want to add a dump and restore it. Then I build an image, and every time I run a container I will have the database restored.
This is my Dockerfile
FROM postgres:9.5.8
WORKDIR /home/
COPY my_dump.sql my_dump.sql
EXPOSE 5432 5432
RUN psql -f my_dump.sql postgres
Then I execute
$ docker build -t my_postgres_db .
I get
Step 5/5 : RUN psql -f my_dump.sql postgres
---> Running in 70f7b511cc7c
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
Here is a nice little script for doing this taken and altered from https://docs.docker.com/compose/startup-order/
All you need to do is create the script
#!/bin/bash
# wait-for-postgres.sh
set -e
host="$1"
shift
until psql -h "$host" -U "postgres" -c '\q'; do
>&2 echo "Postgres is unavailable - sleeping"
sleep 1
done
>&2 echo "Postgres is up - you can execute commands now"
and then invoke it when the container starts, for example as the entrypoint (not with a RUN instruction, which executes at build time, when the database isn't reachable yet):
ENTRYPOINT ["/wait-for-postgres.sh", "host"]
Then run your commands after that
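The same wait loop can be written in Python, which avoids shipping the psql client inside the application image. A sketch; host, port, and timeout are up to you:

```python
import socket
import sys
import time

def wait_for_port(host, port, timeout=60.0, interval=1.0):
    """Block until host:port accepts TCP connections or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            print("Postgres is unavailable - sleeping", file=sys.stderr)
            time.sleep(interval)
    return False
```

Note that an open TCP port only proves the server is accepting connections, not that the database and roles are fully initialized; for a stricter readiness check, use pg_isready or a real login attempt, as the bash script above does.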
These are the steps I am following to run my application within a Docker container:
docker run -i -t -d -p 8000:8000 c4ba9ec8e613 /bin/bash
docker attach c4ba9ec8e613
My startup script:
#!/bin/bash
#activate virtual env
echo Activate vitualenv.
source /home/my_env/bin/activate
#restart nginx
echo Restarting Nginx
service nginx restart
# Start Gunicorn processes
echo Starting Gunicorn.
gunicorn OPC.wsgi:application --bind=0.0.0.0:8000 --daemon
This setup is working fine on the local machine but not within Docker.
You need to change the port mapping for the application to be accessible, since the Nginx server is responding on port 80:
docker run -i -t -d -p 80:80 c4ba9ec8e613 /bin/bash
docker attach c4ba9ec8e613
I've imported into PyCharm 5.1 Beta 2 a tutorial project, which works fine when I run it from the command line with docker-compose up: https://docs.docker.com/compose/django/
Trying to set a remote python interpreter is causing problems.
I've been trying to work out what the service name field is expecting:
remote interpreter - docker compose window - http://i.stack.imgur.com/Vah7P.png
My docker-compose.yml file is:
version: '2'
services:
db:
image: postgres
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
depends_on:
- db
When I try to enter web or db or anything at all that comes to mind, I get an error message: Service definition is expected to be a map
So what am I supposed to enter there?
EDIT1 (new version: Pycharm 2016.1 release)
I have now updated to the latest version and am still having issues: IOError: [Errno 21] Is a directory
Sorry for not tagging all links - have a new user link limit
The only viable way we found to workaround this (Pycharm 2016.1) is setting up an SSH remote interpreter.
Add this to the main service Dockerfile:
RUN apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Then log into the docker container like this (the password is 'screencast', as set in the Dockerfile above):
$ ssh root@192.168.99.100 -p 2000
Note: be aware that the IP and port might change depending on your Docker and Compose configs.
For PyCharm just set up a remote SSH Interpreter and you are done!
https://www.jetbrains.com/help/pycharm/2016.1/configuring-remote-interpreters-via-ssh.html