I'm working on a Django project with a Neo4j DB, using neomodel and django-neomodel.
I'm trying to containerize it using docker-compose.
When I build the images everything seems fine, but any connection from the web container to the DB over Bolt is refused, although I can access the Neo4j DB from the browser over HTTP, and even from my local machine over Bolt.
This is the error I get:
neo4j.exceptions.ServiceUnavailable: Failed to establish connection to ('127.0.0.1', 7688) (reason 111)
I'm using the following configs:
<pre>Django==3.1.1
neo4j==4.1.0
neomodel==3.3.0
neobolt==1.7.17</pre>
This is my docker-compose file:
version: '3'
services:
  backend:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    depends_on:
      - neo4j_db
    networks:
      - mynetwork
    links:
      - neo4j_db
  neo4j_db:
    image: neo4j:3.5.17-enterprise
    ports:
      - "7474:7474"
      - "7688:7687"
    expose:
      - 7474
      - 7687
    volumes:
      - ./db/dbms:/data/dbms
    environment:
      - NEO4J_ACCEPT_LICENSE_AGREEMENT=yes
      - dbms.connector.bolt.listen_address=:7688
      - dbms.connector.bolt.advertised_address=:7688
    networks:
      - mynetwork
networks:
  mynetwork:
    driver: bridge
And here's the connection config in my Django settings:
NEOMODEL_NEO4J_BOLT_URL = os.environ.get('NEO4J_BOLT_URL', 'bolt://neo4j:pass#123@127.0.0.1:7688')
Thanks in advance.
To connect from one container to another (inside the same docker-compose project) you should use the service name of the target container instead of localhost (or 127.0.0.1). In your case that's neo4j_db.
When connecting from another container you should use the internal port, in your case 7687.
In the neo4j service, the bolt.listen_address should be 7687 instead of 7688 (honestly, I'm not sure why you are changing the default port).
To wrap up, the connection url should be:
bolt://neo4j:pass@neo4j_db:7687
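As a quick sanity check (a sketch, not part of the original answer), Python's standard library shows how a driver will split the corrected URL into host and port:

```python
from urllib.parse import urlparse

# The corrected URL: compose service name as host, internal Bolt port.
parts = urlparse('bolt://neo4j:pass@neo4j_db:7687')

print(parts.hostname)  # neo4j_db -- resolved by Docker's embedded DNS
print(parts.port)      # 7687 -- the port Neo4j listens on inside the container
```

If the hostname here came out as 127.0.0.1, the driver would try to connect to the web container itself, which is exactly the "connection refused" symptom above.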
I am trying to deploy an application in Docker running on 64bit Amazon Linux 2. I am using a pipeline, which publishes images to a private repository on Dockerhub. Elastic beanstalk uses docker-compose to run containers, but so far I've had no success in accessing the application. I am not using a dockerrun.aws.json file, as v.3 does not support any container configuration, and as far as I know, it's not needed for docker compose.
My docker-compose file contains several services, one of which is a RabbitMQ message broker.
version: '3.9'
services:
  Some.API:
    image: ...
    container_name: some-api
    networks:
      - my-network
    ports:
      - "9002:80"
  Another.API:
    image: ...
    container_name: another-api
    networks:
      - my-network
    ports:
      - "9003:80"
  rabbitmQ:
    image: rabbitmq:3-management-alpine
    container_name: rabbit-mq
    labels:
      NAME: rabbitmq
    volumes:
      - ./rabbitconfig/rabbitmq-isolated.conf:/etc/rabbitmq/rabbitmq.config
    networks:
      - my-network
    ports:
      - "4369:4369"
      - "5671:5671"
      - "5672:5672"
      - "25672:25672"
      - "15671:15671"
      - "15672:15672"
  front-end:
    image: ...
    container_name: front-end
    networks:
      - my-network
    ports:
      - "9001:80"
networks:
  my-network:
    driver: bridge
Once the current version of the application is successfully deployed to Beanstalk, I see that there is no successful communication in the bridge network.
In eb-stdouterr.log I see errors while establishing the connection between the APIs and the message broker:
RabbitMQ.Client.Exceptions.BrokerUnreachableException: None of the specified endpoints were reachable.
The APIs are .NET Core applications, which use Beanstalk's environment variables to determine the name of the broker service. In the Configuration/Software/Environment properties section there is the following entry:
RABBIT_HOSTNAME | rabbitmq
which should ensure that the services use the proper host name.
Yet, I get exceptions. Any advice?
It turned out that I needed to reference the automatically generated .env file in docker-compose.yml like so:
front-end:
  image: ...
  container_name: front-end
  networks:
    - my-network
  ports:
    - "9001:80"
  env_file: # <--- these
    - .env  # <--- 2 lines
for each service. Only after doing this were the Environment properties from AWS Beanstalk passed to the containers.
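The way the APIs consume that variable can be sketched in Python (a hedged illustration; the actual services are .NET Core, and the AMQP URL shape and guest credentials here are assumptions):

```python
import os

# Resolve the broker host from the environment; fall back to the
# host name used in the question if the variable is missing.
rabbit_host = os.environ.get('RABBIT_HOSTNAME', 'rabbitmq')

# Hypothetical AMQP connection string built from that host.
amqp_url = f'amqp://guest:guest@{rabbit_host}:5672/'
print(amqp_url)
```

Without the env_file entries, RABBIT_HOSTNAME never reaches the container, so only the fallback (or an empty value) is available and the broker stays unreachable.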
I am trying to deploy a second database container on a remote server using Docker compose. This postgresql server runs on port 5433 as opposed to 5432 as used by the first postgresql container.
When I set up the application I get this error output:
web_1 | django.db.utils.OperationalError: could not connect to server: Connection refused
web_1 | Is the server running on host "db" (172.17.0.2) and accepting
web_1 | TCP/IP connections on port 5433?
and my docker compose file is:
db:
  image: postgres:latest
  environment:
    POSTGRES_PASSWORD: route_admin
    POSTGRES_USER: route_admin
  expose:
    - "5433"
  ports:
    - "5433"
  volumes:
    - ./backups:/home/backups
web:
  build: .
  command: bash -c "sleep 5 && python -u application/manage.py runserver 0.0.0.0:8081"
  volumes:
    - .:/code
  ports:
    - "81:8081"
  links:
    - db
  environment:
    - PYTHONUNBUFFERED=0
I feel the issue must be the postgresql.conf file on the server instance having its port set to 5432, causing the error when my app tries to connect to it. Is there a simple way of changing the port using a command in the compose file, as opposed to messing around with volumes to replace the file?
I am using the official PostgreSQL image for this job.
Some people may wish to actually change the port Postgres runs on, rather than remapping the exposed port to the host using the ports directive.
To do so, use command: -p 5433
In the example used for the question:
db:
  image: postgres:latest
  environment:
    POSTGRES_PASSWORD: route_admin
    POSTGRES_USER: route_admin
  expose:
    - "5433" # publishes 5433 to other containers but NOT to the host machine
  ports:
    - "5433:5433"
  volumes:
    - ./backups:/home/backups
  command: -p 5433
Note that only the host respects the ports mapping; other containers ignore it and connect directly to the port Postgres actually listens on (here 5433).
Assuming Postgres is running on port 5432 in the container and you want to expose it on the host on 5433, this ports stanza:
ports:
  - "5433:5432"
will expose the server on port 5433 on the host. You can get rid of your existing expose stanza in this scenario.
If you only want to expose the service to other services declared in the compose file (and NOT to localhost), just use the expose stanza and point it to the already internally exposed port 5432.
I am using a React client, Django for the backend, and Postgres for the DB. I am preparing Docker images of the client, server and the DB.
My docker-compose.yml looks like this:
version: '3'
services:
  client:
    build: ./client
    stdin_open: true
    ports:
      - "3000:3000"
    depends_on:
      - server
  server:
    build: ./server
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: "postgres:12-alpine"
    restart: always
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: bLah6laH614h
Because the Docker images can be deployed anywhere separately, I am unsure how to access the server from the client code, or the DB's IP from the server. I am new to React, Django and Docker. Please help!
Using your docker-compose.yml configuration as a basis, 4 things will happen, as per the docs:
A network called myapp_default is created. (Let's say your project folder is named myapp.)
A container is created using db’s configuration. It joins the network myapp_default under the name db.
A container is created using server’s configuration. It joins the network myapp_default under the name server.
A container is created using client’s configuration. It joins the network myapp_default under the name client.
Now to send an HTTP request from client to server you should use this URL:
http://server:8000
because of item 3 and because the server configured port is 8000.
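By the same logic (item 2), the server reaches the database at the host name db. A minimal sketch of the corresponding Django DATABASES setting (the engine, database name and user here are assumed defaults, not the asker's actual configuration; only the password mirrors the compose file above):

```python
# settings.py -- a sketch, not the asker's actual configuration
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'postgres',          # assumed default database
        'USER': 'postgres',          # assumed default user
        'PASSWORD': 'bLah6laH614h',  # from the compose file
        'HOST': 'db',                # compose service name, via Docker DNS
        'PORT': '5432',
    }
}
```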
I am creating a web service which uses react for the frontend and django REST for the backend. Both are running in separate docker containers. My docker-compose file looks like this.
services:
  db:
    image: postgres
    volumes:
      - ./config/load.sql:/docker-entrypoint-initdb.d/init.sql
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    build: .
    image: gudmundurorri4/hoss-web
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    stdin_open: true
    ports:
      - "8000:8000"
    depends_on:
      - db
  frontend:
    image: gudmundurorri4/hoss-frontend
    stdin_open: true
    ports:
      - "3000:3000"
    depends_on:
      - "web"
Both the web and frontend containers work fine. The backend works when I open it in the browser, and when I execute a GET request to http://web:8000 from within the frontend container I get the correct response. However, when I execute a GET request from my React app using the same address (http://web:8000), it always fails with the error
net::ERR_NAME_NOT_RESOLVED
The Compose service names are only a thing within the containers themselves (set up using local DNS magic).
From the viewpoint of your (or a user's) browser, there's no such thing as http://web:8000.
You'll have to either
publish the backend on a port and DNS name that's accessible to your users, e.g. https://backend.myservice.kittens/, and have the frontend that runs in the browser request things from there (be mindful of CORS),
or have the frontend service proxy those requests through to the backend (which it can access via http://web:8000/). If you're using webpack-dev-server, it has this capability built in.
I am running Django/PostgreSQL in Docker containers. When I run docker-compose, the PostgreSQL service starts and listens for a moment, then is shut down with the log:
"listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"" then
"database system was shut down".
I have read somewhere that it is receiving a postmaster fast shutdown request, but I don't know how to solve this. I have tried changing ports and other PostgreSQL env variables without success.
Here is my .yml for docker-compose:
version: '3'
volumes:
  postgres_data_local: {}
  postgres_backup_local: {}
services:
  django:
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    depends_on:
      - postgres
      - redis
    volumes:
      - .:/app
    env_file: .env
    ports:
      - "8000:8000"
    command: /start.sh
  postgres:
    image: postgres:10.1-alpine
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    volumes:
      - postgres_data_local:/var/lib/postgresql/data
      - postgres_backup_local:/backups
    env_file: .env
  redis:
    image: redis:3.0
    ports:
      - '6379:6379'
My .env file looks something like this:
# PostgreSQL conf
POSTGRES_PASSWORD=p3wrd
POSTGRES_USER=postgres
POSTGRES_DB=postgres
POSTGRES_HOST=127.0.0.1 #have tried localhost, 0.0.0.0 etc
POSTGRES_PORT=5432
DATABASE_URL= postgresql://postgres:p3wrd@127.0.0.1:5432/postgres
# General settings
READ_DOT_ENV_FILE=True
SETTINGS_MODULE=config.settings.test
SECRET_KEY=Sup3rS3cr3tP#22word
DEBUG=True
ALLOWED_HOSTS=*
# URL for Redis
REDIS_URL=redis://127.0.0.1:6379
Why are you setting POSTGRES_HOST to 127.0.0.1 or variants thereof? That means "localhost", which in a container means "the local container". Postgres isn't running inside the django container so that won't work.
Since you're using docker-compose, your containers are all running in a user-defined network. This means that Docker maintains a DNS server for you that maps service names to container ip addresses. In other words, you can simply set:
POSTGRES_HOST=postgres
DATABASE_URL= postgresql://postgres:p3wrd@postgres:5432/postgres
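One related gotcha: depends_on only waits for the postgres container to start, not for the database to accept connections, so the django service can still race it at startup. A small stdlib-only wait loop is a common companion fix (a sketch; the host and port values are the ones from this answer):

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Poll until host:port accepts TCP connections, or give up after timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            time.sleep(0.5)
    return False

# In the django container's start script this would be:
# wait_for_port('postgres', 5432)
```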