I am trying to dockerize my Django project. For this purpose, I am splitting the project into two parts:
All the web-related things in one container.
The database, i.e. Postgres, in another.
I am creating the Postgres database container using the command:
docker run --name postgres -it -e POSTGRES_USER=username -e POSTGRES_PASSWORD=mysecretpassword postgres
Once this Postgres instance was running, I entered its shell using:
docker exec -it postgres /bin/bash
root@ae052fbce400:/# psql -U username
Inside the psql shell I got, I created a database named DBNAME and granted all privileges on it to username.
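The statements were roughly the following (using the names above):
CREATE DATABASE "DBNAME";
GRANT ALL PRIVILEGES ON DATABASE "DBNAME" TO username;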
The database settings inside the webapp container are:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'DBNAME',
        'USER': 'username',
        'PASSWORD': 'mysecretpassword',
        'HOST': 'postgres',
        'PORT': 5432
    }
}
Here is my docker-compose.yml file
services:
  web:
    image: 1ce04167758d  # image build of webapp container
    command: python manage.py runserver
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - postgres
  postgres:
    image: postgres
    env_file:
      - .env
    expose:
      - "5432"
When I ran docker-compose up, I got the following error:
web_1 | django.db.utils.OperationalError: FATAL: password authentication failed for user "username"
I tried various steps, but this is the one that solved my problem:
docker stop $(docker ps -qa) && docker system prune -af --volumes
docker-compose up
This is because you created two database services: one manually via docker run and one via docker-compose. Unfortunately, neither is usable as it stands; they would have to be reconfigured to work together.
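A quick way to see the duplication is to list the running containers and their names (the compose-created name depends on your project directory, so the one below is only illustrative):
docker ps --format 'table {{.Names}}\t{{.Image}}\t{{.Ports}}'
# expect two Postgres containers: the manually created "postgres"
# and a compose-created one named roughly <project>_postgres_1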
Scenario 1 - using a separate DB
You should remove the database definition from the compose file, so that it looks like this:
version: '3'
services:
  web:
    image: 1ce04167758d
    command: python manage.py runserver
    volumes:
      - .:/code
    ports:
      - "8000:8000"
And in your config you should change postgres to your host machine's address - for example 192.168.1.2:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'DBNAME',
        'USER': 'username',
        'PASSWORD': 'mysecretpassword',
        'HOST': '192.168.1.2',
        'PORT': 5432
    }
}
Then, run a separate database service just like you did, via the run command, but exposing a port publicly.
docker run --name postgres -it -p 5432:5432 -e POSTGRES_USER=username -e POSTGRES_PASSWORD=mysecretpassword postgres
Once it has finished initializing, and once you have finished adding databases and users, you can fire up your Django app and it will connect.
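Before firing up Django you can verify the credentials from the host with a plain psql client (a quick sanity check; swap 192.168.1.2 for your host's actual address):
psql -h 192.168.1.2 -p 5432 -U username -d DBNAME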
further reading on postgres env variables
Scenario 2 - using composed database
There's a lot of explaining here, as you have to set up an entrypoint that waits until the DB is fully initialized. I've already written a step-by-step answer on how to do it here on Stack Overflow.
Your situation is basically the same, except for the DB service. You can leave your compose file nearly as it is now, with a few changes:
version: '3'
services:
  web:
    image: 1ce04167758d
    command: python manage.py runserver
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - postgres
    entrypoint: ["./docker-entrypoint.sh"]
  postgres:
    image: postgres
    env_file:
      - .env
I've added an entrypoint that is supposed to wait until your DB service completes initialization (for instructions on how to set it up, refer to the link I provided earlier).
I can see you've defined an entrypoint already - I'd suggest removing that entrypoint from the Dockerfile, moving it to the compose file, and merging it with what I've described in the referred link. This is common practice in commercial/bigger environments, as you might have many entrypoints, and/or your entrypoint might not be intended to run while building - like the one I suggest.
I've removed the DB port mapping, as you shouldn't publish ports when there's no need - if only the web service is supposed to use the DB, then we shouldn't expose the DB to anything else.
With the above configuration, your Django settings will work just fine.
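For reference, a minimal sketch of what such a docker-entrypoint.sh can look like, assuming netcat (nc) is available in the web image and the DB service is called postgres as above (the linked answer has the full version):
#!/bin/sh
# Wait until Postgres accepts TCP connections, then run whatever command
# compose passes in (here: python manage.py runserver).
until nc -z postgres 5432; do
  echo "Waiting for postgres..."
  sleep 1
done
exec "$@"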
edit from comments
The 0.0.0.0 IP provided to Postgres means that the server will listen for all incoming connections. It means that in settings.py you should specify not the 0.0.0.0 address but the address of the host on which your service runs - in your case I guess it runs on your computer - so simply running:
$ ifconfig
on your host will give you your local IP address (192.x.x.x or 10.x.x.x), and this IP is what you specify in settings.
Related
It looks like a common issue: can't connect to Postgres from a Django app in Docker Compose.
I actually tried several solutions from the web, but I'm probably missing something I cannot see.
The error I got is:
django.db.utils.OperationalError: could not translate host name "db" to address: Try again
where "db" should be the name of the docker-compose service, which must be set in the .env.
My docker-compose.yml:
version: '3.3'
services:
  web:
    build: .
    container_name: drf_app
    volumes:
      - ./src:/drf
    links:
      - db:db
    ports:
      - 9090:8080
    env_file:
      - /.env
    depends_on:
      - db
  db:
    image: postgres:13-alpine
    environment:
      - POSTGRES_HOST_AUTH_METHOD=trust
      - POSTGRES_USER=myuser
      - POSTGRES_PASSWORD=mypass
      - POSTGRES_DB=mydb
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ports:
      - 5432:5432
volumes:
  postgres_data:
My .env:
SQL_ENGINE=django.db.backends.postgresql
SQL_DATABASE=mydb
SQL_USER=myuser
SQL_PASSWORD=mypass
SQL_HOST=db #this one should match the service name
SQL_PORT=5432
As far as I know, web and db should automatically see each other on the same network, but this doesn't happen.
Inspecting the IP address with ifconfig in each container: the Django app has 172.17.0.2 and the db has 172.19.0.2. They are not able to ping each other.
The result of the docker ps command:
400879d47887 postgres:13-alpine "docker-entrypoint.s…" 38 minutes ago Up 38 minutes 0.0.0.0:5432->5432/tcp, :::5432->5432/tcp backend_db_1
I really cannot figure out the issue, so am I missing something?
I am writing this to save anyone in the future from the same issue.
After countless tries, I started thinking that nothing was wrong from the pure Docker perspective: I was right.
SOLUTION: My only suspect was the execution inside a virtual machine, so I executed the same Docker image on the host and it worked like a charm!
The networking issue was related to the VM (VirtualBox, Ubuntu 20.04).
I do not know if there is a way to make docker-compose work inside a VM, so any suggestion is appreciated.
You said in a comment:
The command I run is the following: docker run -it --entrypoint /bin/sh backend_web
Docker Compose creates several Docker resources, including a default network. If you separately docker run a container it doesn't see any of these resources. The docker run container is on the "default bridge network" and can't use present-day Docker networking capabilities, but the docker-compose container is on a separate "user-defined bridge network" named backend_default. That's why you're seeing a couple of the symptoms you do: the two networks have separate IPv4 CIDR ranges, and Docker's container DNS resolution only happens for the current network.
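For completeness, a one-off docker run container can be attached to the Compose network explicitly (assuming it is named backend_default as above), after which the db hostname resolves:
docker run -it --network backend_default --entrypoint /bin/sh backend_web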
That said, there's no reason to start a container with an interactive shell and then start your application from within it (any more than you'd normally run python and then manually call main() from its REPL). Just start the entire application:
docker-compose up -d
If you do happen to need an interactive shell to debug your container setup or to run some manual tasks like database migrations, you can use docker-compose run for this. This honors most, but not all, of the settings in the docker-compose.yml file. In particular you can't docker-compose run an interactive shell and start your application from it, since it ignores ports:.
# Typical use: run database migrations
docker-compose run web \
./manage.py migrate
# For debugging: run an interactive shell
docker-compose run web bash
I am surprised that I cannot get this to work, as I have run Django applications with a Postgres DB in Docker containers multiple times. However, this time, on my Linux machine, the Django application cannot connect to the Postgres Docker container, whereas on my Mac the same setup works without a problem.
I am getting the standard error:
django.db.utils.OperationalError: could not connect to server: Host is unreachable
Is the server running on host "postgresdb" (172.27.0.2) and accepting
TCP/IP connections on port 5432?
The postgres container is running and should be reachable. I checked the IP address of the Docker container and it seems correct. My docker-compose file looks like this:
version: "3.8"
services:
  app:
    container_name: data-management
    restart: always
    build: ./
    ports:
      - '3000:3000'
    links:
      - 'postgresdb'
    volumes:
      - ./:/usr/local/share/src/app
    env_file:
      - .dev.env
  postgresdb:
    container_name: be-postgres
    restart: always
    image: postgres:13.4-alpine
    ports:
      - '5432:5432'
    volumes:
      - postgres-db:/var/lib/postgresql/data
    env_file:
      - .dev.env
volumes:
  postgres-db:
The configuration is stored in an environment file, which is used by both containers to avoid any mismatches:
POSTGRES_PASSWORD=devpassword123
POSTGRES_USER=devuser
POSTGRES_SERVICE=postgresdb
POSTGRES_PORT=5432
POSTGRES_DB=data_management
The Django DB settings are as standard as they could be:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'HOST': os.environ['POSTGRES_SERVICE'],
        'NAME': os.environ['POSTGRES_DB'],
        'PORT': os.environ['POSTGRES_PORT'],
        'USER': os.environ['POSTGRES_USER'],
        'PASSWORD': os.environ['POSTGRES_PASSWORD'],
    }
}
I am out of ideas as to what could be causing this, since it happens only on my Fedora machine and there are no issues when running on my Mac. I have had SELinux problems with some Docker volumes in the past; however, after googling I could not find any suggestions that SELinux could be the culprit again.
I am using the Cloud SQL Proxy to connect to a Google Cloud Postgres instance. I followed the steps on the GCP website: https://cloud.google.com/sql/docs/postgres/connect-admin-proxy. When I run it locally using python manage.py runserver, with the DB host as 127.0.0.1 and port as 5432, the program works fine.
If I try to dockerize the application and run it, I am facing the error:
could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
My docker-compose file:
services:
  web:
    build: .
    command: python manage.py runserver
    volumes:
      - .:/code
    ports:
      - 8000:8000
So I tried to dockerize the application using the Stack Overflow answer Is there a way to access google cloud SQL via proxy inside docker container, and modified the host in the settings.py file too.
Now I am facing the error:
gcloud is not in the path and -instances and -projects are empty
services:
  web:
    build: .
    command: python manage.py runserver
    depends_on:
      - cloud-sql-proxy
    volumes:
      - .:/code
    ports:
      - 8000:8000
    env_file:
      - ./.env.dev
  cloud-sql-proxy:
    image: gcr.io/cloudsql-docker/gce-proxy:1.16
    command: /cloud_sql_proxy --dir=/cloudsql instances=abc:us-central1:def=tcp:0.0.0.0:5432 -credential_file=/secrets/cloudsql/credentials.json
    ports:
      - 5432:5432
    volumes:
      - credentials.json:/secrets/cloudsql/credentials.json
    restart: always
Could you please help me with this issue? My requirement is to create a Docker image with the Django application so that it can be deployed to GCP.
I think you are missing a "-" before instances. It should be:
command: /cloud_sql_proxy --dir=/cloudsql -instances=abc:us-central1:def=tcp:0.0.0.0:5432 -credential_file=/secrets/cloudsql/credentials.json
I recommend that you follow this documentation:
Connecting psql client using the Cloud SQL Proxy docker Image
This page describes how to connect a psql client to your Cloud SQL instance from a client machine running Linux, or from a Compute Engine Linux instance, using the Cloud SQL Proxy Docker image. I think this guide could meet your needs.
The guide shows how to start the proxy in step 9.
Unix sockets:
docker run -d -v /cloudsql:/cloudsql \
-v <PATH_TO_KEY_FILE>:/config \
gcr.io/cloudsql-docker/gce-proxy:1.16 /cloud_sql_proxy -dir=/cloudsql \
-instances=<INSTANCE_CONNECTION_NAME> -credential_file=/config
If you are using the credentials provided by your Compute Engine instance, do not include the credential_file parameter and the -v <PATH_TO_KEY_FILE>:/config line.
If you are using a container-optimized image, use a writeable directory in place of /cloudsql, for example:
-v /mnt/stateful_partition/cloudsql:/cloudsql
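Since your compose setup connects over TCP on port 5432 rather than a Unix socket, the proxy can also be started in TCP mode; a rough sketch along the same lines (same placeholders as above, not your actual values):
docker run -d \
-v <PATH_TO_KEY_FILE>:/config \
-p 127.0.0.1:5432:5432 \
gcr.io/cloudsql-docker/gce-proxy:1.16 /cloud_sql_proxy \
-instances=<INSTANCE_CONNECTION_NAME>=tcp:0.0.0.0:5432 -credential_file=/config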
Additionally, if you want to know more about Cloud SQL Proxy parameters and flags, I recommend taking a look at this page.
I hope this information is useful to you.
I just tried switching from docker-compose to docker stacks/kubernetes. In compose I was able to specify where the postgres data volume was and the data persisted nicely.
volumes:
  - ./postgres-data:/var/lib/postgresql/data
I tried doing the same thing with the stack file; I can connect to the pod and use psql to see the schema, but none of the data entered via docker-compose is there.
Any ideas why this might be?
Here's the stack.yml
version: '3.3'
services:
  django:
    image: image
    build:
      context: .
      dockerfile: docker/Dockerfile
    deploy:
      replicas: 5
    environment:
      - DJANGO_SETTINGS_MODULE=config.settings.local
      - SECRET_KEY=password
      - NAME=postgres
      - USER=postgres
      - HOST=db
      - PASSWORD=password
      - PORT=5432
    volumes:
      - .:/application
    command: ["gunicorn", "--bind 0.0.0.0:8000", "config.wsgi"]
    ports:
      - "8000:8000"
    links:
      - db
  db:
    image: mdillon/postgis:9.6-alpine
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
You didn't mention how your cluster is provisioned, where it is running, etc., so I will assume we're talking about local tests here. If so, you probably have local docker/docker-compose and minikube installed.
If that is the case, please note that minikube runs in its own VM, so it will not be affected by changes you make on your host with e.g. docker, as it has its own filesystem in the VM.
Hint: you can run docker against minikube's docker daemon if you first run eval $(minikube docker-env)
For docker stacks, run the docker service inspect command below; it should show the mount point of the Postgres container.
docker service inspect --format='{{range .Spec.TaskTemplate.ContainerSpec.Mounts}} {{.Source}}{{end}}' <StackName>
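On a single node you can also inspect the Postgres container directly to see its mounts, for example (the container name is a placeholder):
docker inspect --format '{{ json .Mounts }}' <postgres-container-name>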
Fixed in the last Docker Edge update.
I have Docker configured to run Postgres and Django using docker-compose.yml and it is working fine.
The trouble I am having is with Selenium not being able to connect to the Django liveserver.
Now it makes sense (to me at least) that django has to access selenium to control the browser and selenium has to access django to access the server.
I have tried using the docker 'ambassador' pattern using the following configuration for docker-compose.yml from here: https://github.com/docker/compose/issues/666
postgis:
  dockerfile: ./docker/postgis/Dockerfile
  build: .
  container_name: postgis
django-ambassador:
  container_name: django-ambassador
  image: cpuguy83/docker-grand-ambassador
  volumes:
    - "/var/run/docker.sock:/var/run/docker.sock"
  command: "-name django -name selenium"
django:
  dockerfile: ./docker/Dockerfile-dev
  build: .
  command: python /app/project/manage.py test my-app
  container_name: django
  volumes:
    - .:/app
  ports:
    - "8000:8000"
    - "8081:8081"
  links:
    - postgis
    - "django-ambassador:selenium"
  environment:
    - SELENIUM_HOST=http://selenium:4444/wd/hub
selenium:
  container_name: selenium
  image: selenium/standalone-firefox-debug
  ports:
    - "4444:4444"
    - "5900:5900"
  links:
    - "django-ambassador:django"
When I check http://DOCKER-MACHINE-IP:4444/wd/hub/static/resource/hub.html
I can see that firefox starts, but all the tests fail as firefox is unable to connect to django
'Firefox can't establish a connection to the server at localhost:8081'
I also tried the solution here: https://github.com/docker/compose/issues/1991, however this is not working because I can't get django to connect to postgis and selenium at the same time:
'django.db.utils.OperationalError: could not translate host name "postgis" to address: Name or service not known'
I tried using the networking feature as listed below
postgis:
  dockerfile: ./docker/postgis/Dockerfile
  build: .
  container_name: postgis
  net: appnet
django:
  dockerfile: ./docker/Dockerfile-dev
  build: .
  command: python /app/project/manage.py test foo
  container_name: django
  volumes:
    - .:/app
  ports:
    - "8000:8000"
    - "8081:8081"
  net: appnet
  environment:
    - SELENIUM_HOST=http://selenium:4444/wd/hub
selenium:
  container_name: selenium
  image: selenium/standalone-firefox-debug
  ports:
    - "4444:4444"
    - "5900:5900"
  net: appnet
but the result is the same
'Firefox can't establish a connection to the server at localhost:8081'
So how can I get selenium to connect to django?
I have been playing around with this for days - would really appreciate any help.
More Info
Another weird thing is that when the test server is running without Docker (using my old virtualenv-based config), if I run ./manage.py test foo I can access the server through any browser at http://localhost:8081 and get served up webpages, but I can't access the test server when I run the equivalent command under Docker. This is weird because I am mapping port 8081:8081 - is this related?
Note: I am using OSX and Docker v1.9.1
I ended up with a better solution that didn't require me to hardcode the IP address. Below is the configuration I used to run tests in Django with Docker.
Docker-compose file
# docker-compose base file for everything
version: '2'
services:
  postgis:
    build:
      context: .
      dockerfile: ./docker/postgis/Dockerfile
    container_name: postgis
    volumes:
      # If you are using boot2docker, postgres data has to live in the VM for now until #581 fixed
      # for more info see here: https://github.com/boot2docker/boot2docker/issues/581
      - /data/dev/docker_cookiecutter/postgres:/var/lib/postgresql/data
  django:
    build:
      context: .
      dockerfile: ./docker/django/Dockerfile
    container_name: django
    volumes:
      - .:/app
    depends_on:
      - selenium
      - postgis
    environment:
      - SITE_DOMAIN=django
      - DJANGO_SETTINGS_MODULE=settings.my_dev_settings
    links:
      - postgis
      - mailcatcher
  selenium:
    container_name: selenium
    image: selenium/standalone-firefox-debug:2.52.0
    ports:
      - "4444:4444"
      - "5900:5900"
Dockerfile (for Django)
ENTRYPOINT ["/docker/django/entrypoint.sh"]
In the entrypoint file
#!/bin/bash
set -e
# Now we need to get the ip address of this container so we can supply it as an environmental
# variable for django so that selenium knows what url the test server is on
# Use below or alternatively you could have used
# something like "$@ --liveserver=$THIS_DOCKER_CONTAINER_TEST_SERVER"
if [[ "'$*'" == *"manage.py test"* ]] # only add if 'manage.py test' is in the args
then
    # get the container id
    THIS_CONTAINER_ID_LONG=`cat /proc/self/cgroup | grep 'docker' | sed 's/^.*\///' | tail -n1`
    # take the first 12 characters - that is the format used in /etc/hosts
    THIS_CONTAINER_ID_SHORT=${THIS_CONTAINER_ID_LONG:0:12}
    # search /etc/hosts for the line with the ip address which will look like this:
    #     172.18.0.4    8886629d38e6
    THIS_DOCKER_CONTAINER_IP_LINE=`cat /etc/hosts | grep $THIS_CONTAINER_ID_SHORT`
    # take the ip address from this
    THIS_DOCKER_CONTAINER_IP=`(echo $THIS_DOCKER_CONTAINER_IP_LINE | grep -o '[0-9]\+[.][0-9]\+[.][0-9]\+[.][0-9]\+')`
    # add the port you want on the end
    # Issues here include: django changing port if in use (I think)
    # and parallel tests needing multiple ports etc.
    THIS_DOCKER_CONTAINER_TEST_SERVER="$THIS_DOCKER_CONTAINER_IP:8081"
    echo "this docker container test server = $THIS_DOCKER_CONTAINER_TEST_SERVER"
    export DJANGO_LIVE_TEST_SERVER_ADDRESS=$THIS_DOCKER_CONTAINER_TEST_SERVER
fi
eval "$@"
In your django settings file
SITE_DOMAIN = 'django'
Then to run your tests
docker-compose run django ./manage.py test
Whenever you see localhost, try first to port-forward that port (at the VM level)
See "Connect to a Service running inside a docker container from outside"
VBoxManage controlvm "default" natpf1 "tcp-port8081,tcp,,8081,,8081"
VBoxManage controlvm "default" natpf1 "udp-port8081,udp,,8081,,8081"
(Replace default with the name of your docker-machine: see docker-machine ls)
This differs from port mapping at the docker host level (which is your boot2docker-based Linux host).
The OP luke-aus confirms in the comments:
entering the IP address for the network solved the problem!
I've been struggling with this as well, and I finally found a solution that worked for me. You can try something like this:
postgis:
  dockerfile: ./docker/postgis/Dockerfile
  build: .
django:
  dockerfile: ./docker/Dockerfile-dev
  build: .
  command: python /app/project/manage.py test my-app
  volumes:
    - .:/app
  ports:
    - "8000:8000"
  links:
    - postgis
    - selenium  # django can access selenium:4444, selenium can access django:8081-8100
  environment:
    - SELENIUM_HOST=http://selenium:4444/wd/hub
    - DJANGO_LIVE_TEST_SERVER_ADDRESS=django:8081-8100  # this gives selenium the correct address
selenium:
  image: selenium/standalone-firefox-debug
  ports:
    - "5900:5900"
I don't think you need to include port 4444 in the selenium config. That port is exposed by default, and there's no need to map it to the host machine, since the django container can access it directly via its link to the selenium container.
[Edit] I've found you don't need to explicitly expose the 8081 port of the django container either. Also, I used a range of ports for the test server, because if tests are run in parallel, you can get an "Address already in use" error, as discussed here.