In my case, a Postgres database serves as the main Django backend database, and some additional Postgres initialization is required. The problem is that the postgres service reports ready before that additional initialization has finished, so the dependent Django app starts running before the database is actually initialized.
Is there any way to configure the postgres service so that it only becomes ready after the additional initialization?
docker-compose.yml:
version: "3.3"
services:
postgres:
image: library/postgres:11
volumes:
- some_folder:/docker-entrypoint-initdb.d
django_app:
image: custom_django_image:latest
volumes:
- $PWD:/app
ports:
- 8000:8000
depends_on:
- postgres
Your some_folder, which is mapped to the Postgres container's /docker-entrypoint-initdb.d directory, is where you should place your initialization scripts (and it seems you are already doing that). As long as there is no existing data in a volume attached to the Postgres container's /var/lib/postgresql/data directory (persisted data), Postgres will, upon container creation, run those scripts before reporting itself ready. The scripts must be either .sh or .sql files (documentation). I'll show a typical workflow I use:
I have this script which creates multiple databases:
#!/bin/bash
# create-multiple-databases.sh
set -e
set -u

function create_user_and_database() {
    local database=$1
    echo "  Creating user and database '$database'"
    psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" <<-EOSQL
        CREATE USER $database;
        CREATE DATABASE $database;
        GRANT ALL PRIVILEGES ON DATABASE $database TO $database;
EOSQL
}

if [ -n "$POSTGRES_MULTIPLE_DATABASES" ]; then
    echo "Multiple database creation requested: $POSTGRES_MULTIPLE_DATABASES"
    for db in $(echo $POSTGRES_MULTIPLE_DATABASES | tr ',' ' '); do
        create_user_and_database $db
    done
    echo "Multiple databases created"
fi
In the docker-compose.yml file, on the postgres service, I set:
environment:
  - POSTGRES_MULTIPLE_DATABASES=dev,test
Now when running docker-compose up, I see the output from the script, and then finally:
postgres_1 | CREATE DATABASE
postgres_1 | GRANT
postgres_1 | Multiple databases created
...
postgres_1 | PostgreSQL init process complete; ready for start up.
postgres_1 |
postgres_1 | 2020-05-23 16:18:40.055 UTC [1] LOG: starting PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
postgres_1 | 2020-05-23 16:18:40.056 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres_1 | 2020-05-23 16:18:40.056 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres_1 | 2020-05-23 16:18:40.063 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
Now inside my Django application, in a typical settings.py file (or a similar settings-related file), I create a while loop that only completes once the database is ready. Once it completes, Django continues its initialization and then starts the server.
import logging
import time

from django.db import connections
from django.db.utils import OperationalError

LOGGER = logging.getLogger(__name__)

while True:
    conn = connections["default"]  # or some other key in `DATABASES`
    try:
        c = conn.cursor()
        LOGGER.info("Postgres Ready")
        break
    except OperationalError:
        LOGGER.warning("Postgres Not Ready...")
        time.sleep(0.5)
Provided I have understood your question correctly, I hope this gives you the information you need.
I would highly suggest looking at health checks in docker-compose.yml. You can set the healthcheck command to a Postgres-specific check; once the health check passes, Django will start sending requests to the postgresql container.
Please consider using the file below.
version: "3.3"
services:
postgres:
image: library/postgres:11
volumes:
- some_folder:/docker-entrypoint-initdb.d
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 10s
timeout: 5s
retries: 5
django_app:
image: custom_django_image:latest
volumes:
- $PWD:/app
ports:
- 8000:8000
depends_on:
- postgres
Ref:- https://docs.docker.com/compose/compose-file/#healthcheck
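One caveat about the file above: with the short depends_on form, Compose only waits for the postgres container to be started, not for its health check to pass. Depending on your Compose version (the Compose specification supports it; the classic v3 file format used with docker-compose 1.x does not), you can gate django_app on the health check explicitly with the long depends_on form. A minimal sketch reusing the services above:
  django_app:
    image: custom_django_image:latest
    depends_on:
      postgres:
        condition: service_healthy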
You can take a look at how Django Cookiecutter Template does it: https://github.com/pydanny/cookiecutter-django
What they do is use a startup script for production with Docker: here
This entrypoint script checks whether Postgres is up and running and only proceeds once it is; otherwise it keeps retrying the connection. You can write a similar script based on that snippet to check whether your Postgres instance is up and running correctly. Whether it is ready or not is decided by your script in docker-entrypoint-initdb.d, which should do all the operations it requires. Once that is done, Postgres starts up and the Django script can proceed because it can reach the Postgres instance.
This works well for me; I do some other checks in the entrypoint script before I proceed to starting the server itself.
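For illustration, here is a minimal sketch of such a wait loop as a standalone Python script (this is not the cookiecutter-django entrypoint itself; the environment variable names are assumptions you would adapt to your own settings):
# wait_for_postgres.py - hedged sketch of a "wait until Postgres accepts connections" loop
import os
import sys
import time

import psycopg2

while True:
    try:
        psycopg2.connect(
            dbname=os.environ.get("POSTGRES_DB", "postgres"),
            user=os.environ.get("POSTGRES_USER", "postgres"),
            password=os.environ.get("POSTGRES_PASSWORD", ""),
            host=os.environ.get("POSTGRES_HOST", "postgres"),
            port=os.environ.get("POSTGRES_PORT", "5432"),
        ).close()
        break
    except psycopg2.OperationalError:
        sys.stderr.write("Postgres is unavailable - sleeping\n")
        time.sleep(1)

sys.stdout.write("Postgres is up - continuing\n")
An entrypoint can run a script like this first and then exec the container's main command.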
It looks like a common issue: I can't connect to Postgres from a Django app in Docker Compose.
I have tried several solutions from the web, but I'm probably missing something I cannot see.
The error I got is:
django.db.utils.OperationalError: could not translate host name "db" to address: Try again
Where the "db" should be the name of the docker-compose service and which must setup in the .env.
My docker-compose.yml:
version: '3.3'
services:
  web:
    build: .
    container_name: drf_app
    volumes:
      - ./src:/drf
    links:
      - db:db
    ports:
      - 9090:8080
    env_file:
      - /.env
    depends_on:
      - db
  db:
    image: postgres:13-alpine
    environment:
      - POSTGRES_HOST_AUTH_METHOD=trust
      - POSTGRES_USER=myuser
      - POSTGRES_PASSWORD=mypass
      - POSTGRES_DB=mydb
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ports:
      - 5432:5432
volumes:
  postgres_data:
My .env:
SQL_ENGINE=django.db.backends.postgresql
SQL_DATABASE=mydb
SQL_USER=myuser
SQL_PASSWORD=mypass
SQL_HOST=db #this one should match the service name
SQL_PORT=5432
As far as I know, web and db should automatically see each other on the same network, but this doesn't happen.
Inspecting the IP address with ifconfig in each container: the Django app has 172.17.0.2 and the db has 172.19.0.2. They are not able to ping each other.
The result of docker ps command:
400879d47887 postgres:13-alpine "docker-entrypoint.s…" 38 minutes ago Up 38 minutes 0.0.0.0:5432->5432/tcp, :::5432->5432/tcp backend_db_1
I really cannot figure out the issue, so am I missing something?
I am writing this to save anyone else from the same issue in the future.
After countless tries, I started thinking that nothing was wrong from the pure Docker perspective: I was right.
SOLUTION: My only remaining suspect was the execution inside a virtual machine, and indeed, running the same Docker setup on the host worked like a charm!
The networking issue was related to the VM (VirtualBox, Ubuntu 20.04).
I do not know if there is a way to make docker-compose work inside a VM, so any suggestion is appreciated.
You said in a comment:
The command I run is the following: docker run -it --entrypoint /bin/sh backend_web
Docker Compose creates several Docker resources, including a default network. If you separately docker run a container it doesn't see any of these resources. The docker run container is on the "default bridge network" and can't use present-day Docker networking capabilities, but the docker-compose container is on a separate "user-defined bridge network" named backend_default. That's why you're seeing a couple of the symptoms you do: the two networks have separate IPv4 CIDR ranges, and Docker's container DNS resolution only happens for the current network.
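You can see this for yourself (a hedged illustration; backend_default and backend_web are the network and image names from your setup above):
# list the networks Compose created; one of them is backend_default
docker network ls
# a one-off container can join that network explicitly, after which
# the service name "db" resolves from inside it
docker run -it --network backend_default --entrypoint /bin/sh backend_web
But as explained next, you normally don't need to do this at all.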
There's no reason to start a container with an interactive shell and then start your application from within it (any more than you'd normally run python and then manually call main() from its REPL). Just start the entire application:
docker-compose up -d
If you do happen to need an interactive shell to debug your container setup or to run some manual tasks like database migrations, you can use docker-compose run for this. This honors most, but not all, of the settings in the docker-compose.yml file. In particular you can't docker-compose run an interactive shell and start your application from it, since it ignores ports:.
# Typical use: run database migrations
docker-compose run web \
./manage.py migrate
# For debugging: run an interactive shell
docker-compose run web bash
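If you do need the published ports: during a one-off run, docker-compose run accepts a --service-ports flag; a hedged example with the web service from the compose file above:
docker-compose run --service-ports web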
Celery is not able to connect to PostgreSQL in my Docker setup, and I'm getting this error:
could not connect to server: Cannot assign requested address
celery_1 | Is the server running on host "localhost" (::1) and accepting
celery_1 | TCP/IP connections on port 5432?
PostgreSQL itself is working fine for the database and I am able to perform actions; the problem only occurs with Celery.
There are two cases with this celery service (with and without the PostgreSQL variables):
celery:
  build:
    context: ./
    dockerfile: Dockerfile
  command: celery -A sampleproject worker -l info
  environment:
    - POSTGRES_USER=${POSTGRES_USER}
    - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    - POSTGRES_DB=${POSTGRES_DB}
    - POSTGRES_HOST=${POSTGRES_HOST}
    - POSTGRES_PORT=${POSTGRES_PORT}
  volumes:
    - .:/usr/src/app/
  depends_on:
    - database
    - app
    - redis
When I pass all the PostgreSQL variables in the celery environment it works; when I delete them it does not. Why is this happening, and how can I resolve it so that I can run Celery the proper way?
I found this while searching for the same problem, so I'll share my solution for anyone else who lands here.
For me, the issue was that ${POSTGRES_HOST} was set to localhost. It should be set to database, the name of the Postgres service in docker-compose.yml.
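For example (a hedged sketch; the variable names match the compose file in the question, and database is the service name listed under depends_on):
# .env
POSTGRES_HOST=database
POSTGRES_PORT=5432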
I am using cloud_sql_proxy to connect to a Google Cloud Postgres instance. I followed the steps on the GCP website: https://cloud.google.com/sql/docs/postgres/connect-admin-proxy. When I run it locally using python manage.py runserver, with the db host set to 127.0.0.1 and the port to 5432, the program works fine.
If I try to dockerize the application and run it, I get the error:
could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
My docker-compose.yml:
services:
  web:
    build: .
    command: python manage.py runserver
    volumes:
      - .:/code
    ports:
      - 8000:8000
So I tried to dockerize the application using the Stack Overflow answer Is there a way to access google cloud SQL via proxy inside docker container, and modified the host in the settings.py file too.
Now I'm facing the error:
gcloud is not in the path and -instances and -projects are empty
services:
  web:
    build: .
    command: python manage.py runserver
    depends_on:
      - cloud-sql-proxy
    volumes:
      - .:/code
    ports:
      - 8000:8000
    env_file:
      - ./.env.dev
  cloud-sql-proxy:
    image: gcr.io/cloudsql-docker/gce-proxy:1.16
    command: /cloud_sql_proxy --dir=/cloudsql instances=abc:us-central1:def=tcp:0.0.0.0:5432 -credential_file=/secrets/cloudsql/credentials.json
    ports:
      - 5432:5432
    volumes:
      - credentials.json:/secrets/cloudsql/credentials.json
    restart: always
Could you please help me with this issue? My requirement is to create a Docker image with the Django application so that it can be deployed to GCP.
I think you are missing the leading - on -instances. It should be:
command: /cloud_sql_proxy --dir=/cloudsql -instances=abc:us-central1:def=tcp:0.0.0.0:5432 -credential_file=/secrets/cloudsql/credentials.json
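In context, the service block from your compose file would then look like this (same image, paths and instance name as in your question; only the flag is corrected):
cloud-sql-proxy:
  image: gcr.io/cloudsql-docker/gce-proxy:1.16
  command: /cloud_sql_proxy --dir=/cloudsql -instances=abc:us-central1:def=tcp:0.0.0.0:5432 -credential_file=/secrets/cloudsql/credentials.json
  ports:
    - 5432:5432
  volumes:
    - credentials.json:/secrets/cloudsql/credentials.json
  restart: always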
I recommend you follow this documentation:
Connecting psql client using the Cloud SQL Proxy Docker image
That page describes how to connect a psql client to your Cloud SQL instance from a client machine running Linux, or from a Compute Engine Linux instance, using the Cloud SQL Proxy Docker image; I think this guide could meet your needs.
The guide shows the way to start the proxy at point 9.
Unix sockets:
docker run -d -v /cloudsql:/cloudsql \
-v <PATH_TO_KEY_FILE>:/config \
gcr.io/cloudsql-docker/gce-proxy:1.16 /cloud_sql_proxy -dir=/cloudsql \
-instances=<INSTANCE_CONNECTION_NAME> -credential_file=/config
If you are using the credentials provided by your Compute Engine instance, do not include the credential_file parameter and the -v <PATH_TO_KEY_FILE>:/config line.
If you are using a container optimized image, use a writeable directory in place of /cloudsql, for example:
-v /mnt/stateful_partition/cloudsql:/cloudsql
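Since your Django settings connect over TCP (127.0.0.1:5432), the TCP variant of the same command may be closer to what you need; a hedged sketch with the same placeholders:
docker run -d \
  -v <PATH_TO_KEY_FILE>:/config \
  -p 127.0.0.1:5432:5432 \
  gcr.io/cloudsql-docker/gce-proxy:1.16 /cloud_sql_proxy \
  -instances=<INSTANCE_CONNECTION_NAME>=tcp:0.0.0.0:5432 -credential_file=/config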
Additionally, if you want to know more about the Cloud SQL Proxy parameters and flags, I recommend taking a look at this page.
I hope this information is useful to you.
I am trying to dockerize my Django project. For this purpose I am dividing the whole project into two parts:
All the web-related things in one container.
The database, i.e. Postgres, in another.
I am creating the Postgres database container using the command:
docker run --name postgres -it -e POSTGRES_USER=username -e POSTGRES_PASSWORD=mysecretpassword postgres
When this Postgres instance was running, I entered its shell using:
docker exec -it postgres /bin/bash
root@ae052fbce400:/# psql -U username
Inside the psql shell, I created a database named DBNAME and granted all privileges on it to username.
The database settings inside the webapp container are:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'DBNAME',
        'USER': 'username',
        'PASSWORD': 'mysecretpassword',
        'HOST': 'postgres',
        'PORT': 5432,
    }
}
Here is my docker-compose.yml file
services:
  web:
    image: 1ce04167758d  # image build of webapp container
    command: python manage.py runserver
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - postgres
  postgres:
    image: postgres
    env_file:
      - .env
    expose:
      - "5432"
When I run docker-compose up, I get the following error:
web_1 | django.db.utils.OperationalError: FATAL: password authentication failed for user "username"
I tried various steps, but this is the one which solved my problem:
docker stop $(docker ps -qa) && docker system prune -af --volumes
docker-compose up
This is because you created two database services: one manually via docker run and one via docker-compose. Unfortunately, both are unusable as they stand, meaning they'd have to be reconfigured in order to cooperate.
Scenario 1 - using a separate DB
You should remove the database definition from compose file - so that it looks like this:
version: '3'
services:
  web:
    image: 1ce04167758d
    command: python manage.py runserver
    volumes:
      - .:/code
    ports:
      - "8000:8000"
And in your config you should change postgres to your host machine's address - for example 192.168.1.2:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'itoucan',
        'USER': 'mannu',
        'PASSWORD': 'mysecretpassword',
        'HOST': '192.168.1.2',
        'PORT': 5432,
    }
}
Then, run a separate database service just like you did, via the run command, but exposing a port publicly.
docker run --name postgres -it -p 5432:5432 -e POSTGRES_USER=mannu -e POSTGRES_PASSWORD=mysecretpassword postgres
When it has finished initializing, and when you have finished adding databases and users, you can fire up your Django app and it'll connect.
further reading on postgres env variables
Scenario 2 - using composed database
There's a lot of explaining here, as you have to set up an entrypoint that waits until the DB is fully initialized. But I've already written a step-by-step answer on how to do it here on Stack Overflow.
Your situation is basically the same, except for the DB service. You leave your compose file nearly as it is now, with a few changes:
version: '3'
services:
  web:
    image: 1ce04167758d
    command: python manage.py runserver
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - postgres
    entrypoint: ["./docker-entrypoint.sh"]
  postgres:
    image: postgres
    env_file:
      - .env
I've added an entrypoint that waits until your DB service completes initialization (for instructions on how to set it up, refer to the link I provided earlier).
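As a rough sketch only (not the exact script from the linked answer), such a docker-entrypoint.sh could look like this; it assumes the DB service is named postgres, as in the compose file above, and that the pg_isready client tool is available in the web image:
#!/bin/sh
set -e

# wait until the postgres service accepts connections
until pg_isready -h postgres -p 5432 -q; do
  echo "Waiting for postgres..."
  sleep 1
done

# then hand over to the service's command (python manage.py runserver above)
exec "$@"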
I can see you've defined an entrypoint already - I'd suggest removing that entrypoint from the Dockerfile, moving it to the compose file, and merging it with what I've described in the linked answer. This is common practice in commercial/bigger environments, as you might have many entrypoints, or your entrypoint might not be intended to run at build time - like the one I suggest here.
I've removed the DB port mapping, as you shouldn't expose services when there's no need: if only the web service is supposed to use the DB, we shouldn't expose the DB to anything else.
With the above configuration, your Django configuration would be perfectly fine.
edit from comments
The 0.0.0.0 address given to Postgres means the server listens for all incoming connections. It does not mean that in settings.py you should specify 0.0.0.0; you should specify the address of the host your service runs on - in your case I guess that's your computer - so simply running:
$ ifconfig
on your host will give you your local IP address (192.x.x.x or 10.x.x.x), and that IP is what you specify in settings.
I have Docker configured to run Postgres and Django using docker-compose.yml and it is working fine.
The trouble I am having is with Selenium not being able to connect to the Django liveserver.
Now it makes sense (to me at least) that Django has to access Selenium to control the browser, and Selenium has to access Django to reach the test server.
I have tried using the docker 'ambassador' pattern using the following configuration for docker-compose.yml from here: https://github.com/docker/compose/issues/666
postgis:
  dockerfile: ./docker/postgis/Dockerfile
  build: .
  container_name: postgis
django-ambassador:
  container_name: django-ambassador
  image: cpuguy83/docker-grand-ambassador
  volumes:
    - "/var/run/docker.sock:/var/run/docker.sock"
  command: "-name django -name selenium"
django:
  dockerfile: ./docker/Dockerfile-dev
  build: .
  command: python /app/project/manage.py test my-app
  container_name: django
  volumes:
    - .:/app
  ports:
    - "8000:8000"
    - "8081:8081"
  links:
    - postgis
    - "django-ambassador:selenium"
  environment:
    - SELENIUM_HOST=http://selenium:4444/wd/hub
selenium:
  container_name: selenium
  image: selenium/standalone-firefox-debug
  ports:
    - "4444:4444"
    - "5900:5900"
When I check http://DOCKER-MACHINE-IP:4444/wd/hub/static/resource/hub.html
I can see that Firefox starts, but all the tests fail as Firefox is unable to connect to Django:
'Firefox can't establish a connection to the server at localhost:8081'
I also tried this solution here: https://github.com/docker/compose/issues/1991, however it is not working because I can't get Django to connect to postgis and selenium at the same time:
'django.db.utils.OperationalError: could not translate host name "postgis" to address: Name or service not known'
I tried using the networking feature as listed below
postgis:
  dockerfile: ./docker/postgis/Dockerfile
  build: .
  container_name: postgis
  net: appnet
django:
  dockerfile: ./docker/Dockerfile-dev
  build: .
  command: python /app/project/manage.py test foo
  container_name: django
  volumes:
    - .:/app
  ports:
    - "8000:8000"
    - "8081:8081"
  net: appnet
  environment:
    - SELENIUM_HOST=http://selenium:4444/wd/hub
selenium:
  container_name: selenium
  image: selenium/standalone-firefox-debug
  ports:
    - "4444:4444"
    - "5900:5900"
  net: appnet
but the result is the same
'Firefox can't establish a connection to the server at localhost:8081'
So how can I get selenium to connect to django?
I have been playing around with this for days - would really appreciate any help.
More Info
Another weird thing is that when the test server is running without Docker (using my old virtualenv setup), if I run ./manage.py test foo I can access the server through any browser at http://localhost:8081 and get served web pages, but I can't access the test server when I run the equivalent command under Docker. This is weird because I am mapping port 8081:8081 - is this related?
Note: I am using OSX and Docker v1.9.1
I ended up coming up with a better solution that didn't require me to hardcode the IP Address. Below is the configuration I used to run tests in django with docker.
Docker-compose file
# docker-compose base file for everything
version: '2'
services:
  postgis:
    build:
      context: .
      dockerfile: ./docker/postgis/Dockerfile
    container_name: postgis
    volumes:
      # If you are using boot2docker, postgres data has to live in the VM for now until #581 fixed
      # for more info see here: https://github.com/boot2docker/boot2docker/issues/581
      - /data/dev/docker_cookiecutter/postgres:/var/lib/postgresql/data
  django:
    build:
      context: .
      dockerfile: ./docker/django/Dockerfile
    container_name: django
    volumes:
      - .:/app
    depends_on:
      - selenium
      - postgis
    environment:
      - SITE_DOMAIN=django
      - DJANGO_SETTINGS_MODULE=settings.my_dev_settings
    links:
      - postgis
      - mailcatcher
  selenium:
    container_name: selenium
    image: selenium/standalone-firefox-debug:2.52.0
    ports:
      - "4444:4444"
      - "5900:5900"
Dockerfile (for Django)
ENTRYPOINT ["/docker/django/entrypoint.sh"]
In Entrypoint file
#!/bin/bash
set -e

# Now we need to get the ip address of this container so we can supply it as an environment
# variable for django, so that selenium knows what url the test server is on.
# Use the below, or alternatively you could have used
# something like "$@ --liveserver=$THIS_DOCKER_CONTAINER_TEST_SERVER"
if [[ "'$*'" == *"manage.py test"* ]]  # only add if 'manage.py test' is in the args
then
    # get the container id
    THIS_CONTAINER_ID_LONG=`cat /proc/self/cgroup | grep 'docker' | sed 's/^.*\///' | tail -n1`
    # take the first 12 characters - that is the format used in /etc/hosts
    THIS_CONTAINER_ID_SHORT=${THIS_CONTAINER_ID_LONG:0:12}
    # search /etc/hosts for the line with the ip address, which will look like this:
    #     172.18.0.4    8886629d38e6
    THIS_DOCKER_CONTAINER_IP_LINE=`cat /etc/hosts | grep $THIS_CONTAINER_ID_SHORT`
    # take the ip address from this
    THIS_DOCKER_CONTAINER_IP=`(echo $THIS_DOCKER_CONTAINER_IP_LINE | grep -o '[0-9]\+[.][0-9]\+[.][0-9]\+[.][0-9]\+')`
    # add the port you want on the end
    # Issues here include: django changing port if in use (I think)
    # and parallel tests needing multiple ports etc.
    THIS_DOCKER_CONTAINER_TEST_SERVER="$THIS_DOCKER_CONTAINER_IP:8081"
    echo "this docker container test server = $THIS_DOCKER_CONTAINER_TEST_SERVER"
    export DJANGO_LIVE_TEST_SERVER_ADDRESS=$THIS_DOCKER_CONTAINER_TEST_SERVER
fi

eval "$@"
In your django settings file
SITE_DOMAIN = 'django'
Then to run your tests
docker-compose run django ./manage.py test
Whenever you see localhost, try first to port-forward that port (at the VM level)
See "Connect to a Service running inside a docker container from outside"
VBoxManage controlvm "default" natpf1 "tcp-port8081,tcp,,8081,,8081"
VBoxManage controlvm "default" natpf1 "udp-port8081,udp,,8081,,8081"
(Replace default with the name of your docker-machine: see docker-machine ls)
This differs for port mapping at the docker host level (which is your boot2docker-based Linux host)
The OP luke-aus confirms in the comments:
entering the IP address for the network solved the problem!
I've been struggling with this as well, and I finally found a solution that worked for me. You can try something like this:
postgis:
  dockerfile: ./docker/postgis/Dockerfile
  build: .
django:
  dockerfile: ./docker/Dockerfile-dev
  build: .
  command: python /app/project/manage.py test my-app
  volumes:
    - .:/app
  ports:
    - "8000:8000"
  links:
    - postgis
    - selenium  # django can access selenium:4444, selenium can access django:8081-8100
  environment:
    - SELENIUM_HOST=http://selenium:4444/wd/hub
    - DJANGO_LIVE_TEST_SERVER_ADDRESS=django:8081-8100  # this gives selenium the correct address
selenium:
  image: selenium/standalone-firefox-debug
  ports:
    - "5900:5900"
I don't think you need to include port 4444 in the selenium config. That port is exposed by default, and there's no need to map it to the host machine, since the django container can access it directly via its link to the selenium container.
[Edit] I've found you don't need to explicitly expose the 8081 port of the django container either. Also, I used a range of ports for the test server, because if tests are run in parallel, you can get an "Address already in use" error, as discussed here.