Can't connect to PostgreSQL in Docker (receiving fast shutdown request) - Django

I am running Django/PostgreSQL in Docker containers. When I run docker-compose up, the postgresql service starts and listens for a moment, then shuts down with the logs
"listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"" then
"database system was shut down".
I have read somewhere that it is receiving a postmaster fast shutdown request, but I don't know how to solve this. I have tried changing ports and other PostgreSQL env variables without success.
Here is my .yml for docker-compose:
version: '3'
volumes:
  postgres_data_local: {}
  postgres_backup_local: {}
services:
  django:
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    depends_on:
      - postgres
      - redis
    volumes:
      - .:/app
    env_file: .env
    ports:
      - "8000:8000"
    command: /start.sh
  postgres:
    image: postgres:10.1-alpine
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    volumes:
      - postgres_data_local:/var/lib/postgresql/data
      - postgres_backup_local:/backups
    env_file: .env
  redis:
    image: redis:3.0
    ports:
      - '6379:6379'
My .env file is something like:
# PostgreSQL conf
POSTGRES_PASSWORD=p3wrd
POSTGRES_USER=postgres
POSTGRES_DB=postgres
POSTGRES_HOST=127.0.0.1  # have tried localhost, 0.0.0.0, etc.
POSTGRES_PORT=5432
DATABASE_URL=postgresql://postgres:p3wrd@127.0.0.1:5432/postgres
# General settings
READ_DOT_ENV_FILE=True
SETTINGS_MODULE=config.settings.test
SECRET_KEY=Sup3rS3cr3tP#22word
DEBUG=True
ALLOWED_HOSTS=*
# URL for Redis
REDIS_URL=redis://127.0.0.1:6379

Why are you setting POSTGRES_HOST to 127.0.0.1 or variants thereof? That means "localhost", which in a container means "the local container". Postgres isn't running inside the django container so that won't work.
Since you're using docker-compose, your containers are all running in a user-defined network. This means that Docker maintains a DNS server for you that maps service names to container IP addresses. In other words, you can simply set:
POSTGRES_HOST=postgres
DATABASE_URL=postgresql://postgres:p3wrd@postgres:5432/postgres
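For reference, here is a minimal sketch of Django settings that consume those variables. This is an illustration, not the poster's actual settings module; it assumes the .env values above are injected into the container environment via the compose env_file directive:
# settings.py - minimal sketch, assuming the .env values above are present
# in the container environment (docker-compose injects them via env_file)
import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ.get('POSTGRES_DB', 'postgres'),
        'USER': os.environ.get('POSTGRES_USER', 'postgres'),
        'PASSWORD': os.environ['POSTGRES_PASSWORD'],
        # 'postgres' is the compose service name, resolved by Docker's DNS
        'HOST': os.environ.get('POSTGRES_HOST', 'postgres'),
        'PORT': os.environ.get('POSTGRES_PORT', '5432'),
    }
}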

Related

No connection in a multicontainer docker environment

I am trying to deploy an application in Docker running on 64bit Amazon Linux 2. I am using a pipeline, which publishes images to a private repository on Dockerhub. Elastic beanstalk uses docker-compose to run containers, but so far I've had no success in accessing the application. I am not using a dockerrun.aws.json file, as v.3 does not support any container configuration, and as far as I know, it's not needed for docker compose.
My docker-compose file contains several services, one of which is a RabbitMQ message broker.
version: '3.9'
services:
  Some.API:
    image: ...
    container_name: some-api
    networks:
      - my-network
    ports:
      - "9002:80"
  Another.API:
    image: ...
    container_name: another-api
    networks:
      - my-network
    ports:
      - "9003:80"
  rabbitmQ:
    image: rabbitmq:3-management-alpine
    container_name: rabbit-mq
    labels:
      NAME: rabbitmq
    volumes:
      - ./rabbitconfig/rabbitmq-isolated.conf:/etc/rabbitmq/rabbitmq.config
    networks:
      - my-network
    ports:
      - "4369:4369"
      - "5671:5671"
      - "5672:5672"
      - "25672:25672"
      - "15671:15671"
      - "15672:15672"
  front-end:
    image: ...
    container_name: front-end
    networks:
      - my-network
    ports:
      - "9001:80"
networks:
  my-network:
    driver: bridge
Once the current version of the application is successfully deployed to Beanstalk, I see that there is no successful communication on the bridge network.
In eb-stdouterr.log I see errors while establishing connections between the APIs and the message broker:
RabbitMQ.Client.Exceptions.BrokerUnreachableException: None of the specified endpoints were reachable.
The APIs are .NET Core applications, which use Beanstalk's environment variables to determine the name of the broker service. In the Configuration/Software/Environment properties section there is the following entry:
RABBIT_HOSTNAME | rabbitmq
which should ensure that the services use a proper host name.
Yet, I get exceptions. Any advice?
It turned out that I needed to reference the automatically generated .env file in docker-compose.yml like so:
front-end:
  image: ...
  container_name: front-end
  networks:
    - my-network
  ports:
    - "9001:80"
  env_file: # <-- these
    - .env  # <-- two lines
for each service. Only after doing this were the Environment properties from AWS Beanstalk passed to the containers.
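As a quick sanity check that the variable actually reaches a container, any service can read it at runtime. The services in the question are .NET Core, so the following is only an illustrative sketch in Python using the pika client:
# sketch: confirm RABBIT_HOSTNAME was delivered and the broker is reachable
import os
import pika

host = os.environ.get('RABBIT_HOSTNAME', 'rabbitmq')  # value set in Beanstalk
connection = pika.BlockingConnection(pika.ConnectionParameters(host=host))
print('connected to rabbitmq at', host)
connection.close()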

Error while trying to communicate between Django and PostgreSQL inside Docker [duplicate]

I am trying to deploy a second database container on a remote server using Docker compose. This postgresql server runs on port 5433 as opposed to 5432 as used by the first postgresql container.
When I set up the application I get this error output:
web_1 | django.db.utils.OperationalError: could not connect to server: Connection refused
web_1 | Is the server running on host "db" (172.17.0.2) and accepting
web_1 | TCP/IP connections on port 5433?
and my docker compose file is:
db:
  image: postgres:latest
  environment:
    POSTGRES_PASSWORD: route_admin
    POSTGRES_USER: route_admin
  expose:
    - "5433"
  ports:
    - "5433"
  volumes:
    - ./backups:/home/backups
web:
  build: .
  command: bash -c "sleep 5 && python -u application/manage.py runserver 0.0.0.0:8081"
  volumes:
    - .:/code
  ports:
    - "81:8081"
  links:
    - db
  environment:
    - PYTHONUNBUFFERED=0
I feel the issue must be that the postgresql.conf file on the server instance sets the port to 5432, causing the error when my app tries to connect. Is there a simple way of changing the port using a command in the compose file, as opposed to messing around with volumes to replace the file?
I am using the official postgresql container for this job.
Some people may wish to actually change the port Postgres is running on, rather than remapping the exposed port to the host using the ports directive.
To do so, use command: -p 5433
In the example used for the question:
db:
  image: postgres:latest
  environment:
    POSTGRES_PASSWORD: route_admin
    POSTGRES_USER: route_admin
  expose:
    - "5433" # Publishes 5433 to other containers but NOT to the host machine
  ports:
    - "5433:5433"
  volumes:
    - ./backups:/home/backups
  command: -p 5433
Note that only the host goes through the ports mapping. Other containers connect directly to the container's own port (5433 here).
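From the web container the connection then goes to the service name and the in-container port. A minimal sketch, assuming psycopg2 is installed in the web image and the default database is used (the official image names it after POSTGRES_USER when POSTGRES_DB is unset):
# sketch: connecting from the web container after "command: -p 5433"
import psycopg2  # assumed to be available in the web image

conn = psycopg2.connect(
    host='db',             # compose service name, not localhost
    port=5433,             # postgres now listens on 5433 inside the container
    user='route_admin',
    password='route_admin',
    dbname='route_admin',  # defaults to POSTGRES_USER when POSTGRES_DB is unset
)
conn.close()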
Assuming postgres is running on port 5432 in the container and you want to expose it on the host on 5433, this ports strophe:
ports:
  - "5433:5432"
will expose the server on port 5433 on the host. You can get rid of your existing expose strophe in this scenario.
If you only want to expose the service to other services declared in the compose file (and NOT localhost), just use the expose strophe and point it to the already internally exposed port 5432.
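To make the distinction concrete: with the "5433:5432" mapping, host-side clients dial 5433 while other containers keep dialing db:5432. A short sketch of the host-side connection (same assumed credentials as above):
# sketch: from the HOST, the "5433:5432" mapping exposes the server on 5433,
# while other containers keep using db:5432
import psycopg2

conn = psycopg2.connect(host='localhost', port=5433, user='route_admin',
                        password='route_admin', dbname='route_admin')
conn.close()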

Python Flask not running with docker-compose [duplicate]

This question already has answers here:
Deploying a minimal flask app in docker - server connection issues
(8 answers)
Closed 8 months ago.
I have a simple Flask app consisting of a web part and a database part. Code can be found here. Running this Flask app locally works perfectly well.
I'd like to have the app run on Docker. Therefore I created the following docker-compose file.
version: '3.6'
services:
  web:
    build: .
    depends_on:
      - db
    networks:
      - default
    ports:
      - 50000:5000
    volumes:
      - ./app:/usr/src/app/app
      - ./migrations:/usr/src/app/migrations
    restart: always
  db:
    environment:
      MYSQL_ROOT_PASSWORD: ***
      MYSQL_DATABASE: flask_employees
      MYSQL_USER: root
      MYSQL_PASSWORD: ***
    image: mariadb:latest
    networks:
      - default
    ports:
      - 33060:3306
    restart: always
    volumes:
      - db_data:/var/lib/mysql
volumes:
  db_data: {}
Inside the Flask app, I'm setting the following: SQLALCHEMY_DATABASE_URI = 'mysql://root:***@localhost:33060/flask_employees'.
When I do docker-compose up, both containers are created and are running however when I go to http://localhost:50000 or http://127.0.0.1:50000 I get:
This site can’t be reached, 127.0.0.1 refused to connect.
You have to run your Flask app on host "0.0.0.0" in order to be able to map ports from the Docker container.
if __name__ == '__main__':
    app.run(host='0.0.0.0', port='5000')
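Note that the SQLALCHEMY_DATABASE_URI in the question will run into the same problem once the web container tries to reach the database: localhost inside that container is not the host machine. A sketch of a container-to-container URI, assuming the compose service name db and the internal port 3306:
# sketch: inside the web container, reach MariaDB by its compose service
# name and internal port, not via the host port mapping
SQLALCHEMY_DATABASE_URI = 'mysql://root:***@db:3306/flask_employees'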

Docker-compose: db connection from web container to neo4j container using bolt

I'm working on a Django project with a neo4j db, using neomodel and django-neomodel.
I'm trying to containerize it using docker-compose.
When I build the images everything seems fine, but any connection from the web container to the db using bolt is refused, although I can access the neo4j db from the browser over http, and even from my local machine over bolt.
This is the error I get:
neo4j.exceptions.ServiceUnavailable: Failed to establish connection to ('127.0.0.1', 7688) (reason 111)
I'm using the following configs:
Django==3.1.1
neo4j==4.1.0
neomodel==3.3.0
neobolt==1.7.17
this is my docker-compose file:
version: '3'
services:
  backend:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    depends_on:
      - neo4j_db
    networks:
      - mynetwork
    links:
      - neo4j_db
  neo4j_db:
    image: neo4j:3.5.17-enterprise
    ports:
      - "7474:7474"
      - "7688:7687"
    expose:
      - 7474
      - 7687
    volumes:
      - ./db/dbms:/data/dbms
    environment:
      - NEO4J_ACCEPT_LICENSE_AGREEMENT=yes
      - dbms.connector.bolt.listen_address=:7688
      - dbms.connector.bolt.advertised_address=:7688
    networks:
      - mynetwork
networks:
  mynetwork:
    driver: bridge
and here is the connection config in my Django settings:
NEOMODEL_NEO4J_BOLT_URL = os.environ.get('NEO4J_BOLT_URL', 'bolt://neo4j:pass#123@127.0.0.1:7688')
Thanks in advance.
To connect from one container to another (inside the same docker-compose project) you should use the container name of the target container instead of localhost (or 127.0.0.1). In your case that would be neo4j_db.
When connecting from another container you should use the internal port, in your case 7687.
In the neo4j service, the bolt.listen_address should be 7687 instead of 7688 (honestly, I'm not sure why you are changing the default port).
To wrap up, the connection url should be:
bolt://neo4j:pass#123@neo4j_db:7687
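In the Django settings that could look like the following sketch (the env var name is taken from the question; the password here is a placeholder, and note that a literal # in a real password would have to be percent-encoded in the URL):
# sketch: prefer the env var, fall back to the in-network URL
import os

NEOMODEL_NEO4J_BOLT_URL = os.environ.get(
    'NEO4J_BOLT_URL',
    'bolt://neo4j:password@neo4j_db:7687',  # service name + internal port
)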

Docker, Django and Selenium - Selenium unable to connect

I have Docker configured to run Postgres and Django using docker-compose.yml and it is working fine.
The trouble I am having is with Selenium not being able to connect to the Django liveserver.
Now it makes sense (to me at least) that django has to access selenium to control the browser and selenium has to access django to access the server.
I have tried using the docker 'ambassador' pattern using the following configuration for docker-compose.yml from here: https://github.com/docker/compose/issues/666
postgis:
  dockerfile: ./docker/postgis/Dockerfile
  build: .
  container_name: postgis
django-ambassador:
  container_name: django-ambassador
  image: cpuguy83/docker-grand-ambassador
  volumes:
    - "/var/run/docker.sock:/var/run/docker.sock"
  command: "-name django -name selenium"
django:
  dockerfile: ./docker/Dockerfile-dev
  build: .
  command: python /app/project/manage.py test my-app
  container_name: django
  volumes:
    - .:/app
  ports:
    - "8000:8000"
    - "8081:8081"
  links:
    - postgis
    - "django-ambassador:selenium"
  environment:
    - SELENIUM_HOST=http://selenium:4444/wd/hub
selenium:
  container_name: selenium
  image: selenium/standalone-firefox-debug
  ports:
    - "4444:4444"
    - "5900:5900"
  links:
    - "django-ambassador:django"
When I check http://DOCKER-MACHINE-IP:4444/wd/hub/static/resource/hub.html
I can see that firefox starts, but all the tests fail as firefox is unable to connect to django
'Firefox can't establish a connection to the server at localhost:8081'
I also tried this solution here https://github.com/docker/compose/issues/1991
however this is not working because I can't get django to connect to postgis and selenium at the same time
'django.db.utils.OperationalError: could not translate host name "postgis" to address: Name or service not known'
I tried using the networking feature as listed below
postgis:
  dockerfile: ./docker/postgis/Dockerfile
  build: .
  container_name: postgis
  net: appnet
django:
  dockerfile: ./docker/Dockerfile-dev
  build: .
  command: python /app/project/manage.py test foo
  container_name: django
  volumes:
    - .:/app
  ports:
    - "8000:8000"
    - "8081:8081"
  net: appnet
  environment:
    - SELENIUM_HOST=http://selenium:4444/wd/hub
selenium:
  container_name: selenium
  image: selenium/standalone-firefox-debug
  ports:
    - "4444:4444"
    - "5900:5900"
  net: appnet
but the result is the same
'Firefox can't establish a connection to the server at localhost:8081'
So how can I get selenium to connect to django?
I have been playing around with this for days - would really appreciate any help.
More Info
Another weird thing is that when the test server runs outside Docker (using my old virtualenv config), I can run ./manage.py test foo and access the server through any browser at http://localhost:8081 and get served up webpages, but I can't access the test server when I run the equivalent command under Docker. This is weird because I am mapping port 8081:8081 - is this related?
Note: I am using OSX and Docker v1.9.1
I ended up coming up with a better solution that didn't require me to hardcode the IP Address. Below is the configuration I used to run tests in django with docker.
Docker-compose file
# docker-compose base file for everything
version: '2'
services:
  postgis:
    build:
      context: .
      dockerfile: ./docker/postgis/Dockerfile
    container_name: postgis
    volumes:
      # If you are using boot2docker, postgres data has to live in the VM for now until #581 fixed
      # for more info see here: https://github.com/boot2docker/boot2docker/issues/581
      - /data/dev/docker_cookiecutter/postgres:/var/lib/postgresql/data
  django:
    build:
      context: .
      dockerfile: ./docker/django/Dockerfile
    container_name: django
    volumes:
      - .:/app
    depends_on:
      - selenium
      - postgis
    environment:
      - SITE_DOMAIN=django
      - DJANGO_SETTINGS_MODULE=settings.my_dev_settings
    links:
      - postgis
      - mailcatcher
  selenium:
    container_name: selenium
    image: selenium/standalone-firefox-debug:2.52.0
    ports:
      - "4444:4444"
      - "5900:5900"
Dockerfile (for Django)
ENTRYPOINT ["/docker/django/entrypoint.sh"]
In Entrypoint file
#!/bin/bash
set -e
# Now we need to get the ip address of this container so we can supply it as an
# environment variable for django, so that selenium knows what url the test server is on.
# Use the below, or alternatively you could have used
# something like "$@ --liveserver=$THIS_DOCKER_CONTAINER_TEST_SERVER"
if [[ "$*" == *"manage.py test"* ]]  # only add if 'manage.py test' is in the args
then
    # get the container id
    THIS_CONTAINER_ID_LONG=`cat /proc/self/cgroup | grep 'docker' | sed 's/^.*\///' | tail -n1`
    # take the first 12 characters - that is the format used in /etc/hosts
    THIS_CONTAINER_ID_SHORT=${THIS_CONTAINER_ID_LONG:0:12}
    # search /etc/hosts for the line with the ip address, which will look like this:
    # 172.18.0.4    8886629d38e6
    THIS_DOCKER_CONTAINER_IP_LINE=`cat /etc/hosts | grep $THIS_CONTAINER_ID_SHORT`
    # take the ip address from this
    THIS_DOCKER_CONTAINER_IP=`echo $THIS_DOCKER_CONTAINER_IP_LINE | grep -o '[0-9]\+[.][0-9]\+[.][0-9]\+[.][0-9]\+'`
    # add the port you want on the end
    # Issues here include: django changing port if in use (I think)
    # and parallel tests needing multiple ports etc.
    THIS_DOCKER_CONTAINER_TEST_SERVER="$THIS_DOCKER_CONTAINER_IP:8081"
    echo "this docker container test server = $THIS_DOCKER_CONTAINER_TEST_SERVER"
    export DJANGO_LIVE_TEST_SERVER_ADDRESS=$THIS_DOCKER_CONTAINER_TEST_SERVER
fi
eval "$@"
In your django settings file
SITE_DOMAIN = 'django'
Then to run your tests
docker-compose run django ./manage.py test
Whenever you see localhost, try first to port-forward that port (at the VM level)
See "Connect to a Service running inside a docker container from outside"
VBoxManage controlvm "default" natpf1 "tcp-port8081,tcp,,8081,,8081"
VBoxManage controlvm "default" natpf1 "udp-port8081,udp,,8081,,8081"
(Replace default with the name of your docker-machine: see docker-machine ls)
This differs from port mapping at the docker host level (which is your boot2docker-based Linux host).
The OP luke-aus confirms in the comments:
entering the IP address for the network solved the problem!
I've been struggling with this as well, and I finally found a solution that worked for me. You can try something like this:
postgis:
  dockerfile: ./docker/postgis/Dockerfile
  build: .
django:
  dockerfile: ./docker/Dockerfile-dev
  build: .
  command: python /app/project/manage.py test my-app
  volumes:
    - .:/app
  ports:
    - "8000:8000"
  links:
    - postgis
    - selenium # django can access selenium:4444, selenium can access django:8081-8100
  environment:
    - SELENIUM_HOST=http://selenium:4444/wd/hub
    - DJANGO_LIVE_TEST_SERVER_ADDRESS=django:8081-8100 # this gives selenium the correct address
selenium:
  image: selenium/standalone-firefox-debug
  ports:
    - "5900:5900"
I don't think you need to include port 4444 in the selenium config. That port is exposed by default, and there's no need to map it to the host machine, since the django container can access it directly via its link to the selenium container.
[Edit] I've found you don't need to explicitly expose the 8081 port of the django container either. Also, I used a range of ports for the test server, because if tests are run in parallel, you can get an "Address already in use" error, as discussed here.
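For reference, a test under this setup might look like the following sketch (selenium 2.x API to match the era of the images above; the test itself is illustrative):
# tests.py - sketch: the live server binds to an address selenium can reach
# over the compose network; SELENIUM_HOST comes from the compose file above
import os

from django.test import LiveServerTestCase
from selenium import webdriver


class HomePageTest(LiveServerTestCase):
    def setUp(self):
        self.browser = webdriver.Remote(
            command_executor=os.environ['SELENIUM_HOST'],
            desired_capabilities={'browserName': 'firefox'},
        )

    def tearDown(self):
        self.browser.quit()

    def test_home_page_loads(self):
        # live_server_url resolves to django:8081-8100 because of
        # DJANGO_LIVE_TEST_SERVER_ADDRESS
        self.browser.get(self.live_server_url)
        self.assertIn('html', self.browser.page_source)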