HTTP request between two containers - django

I am creating a web service which uses React for the frontend and Django REST for the backend. Both are running in separate Docker containers. My docker-compose file looks like this:
services:
  db:
    image: postgres
    volumes:
      - ./config/load.sql:/docker-entrypoint-initdb.d/init.sql
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    build: .
    image: gudmundurorri4/hoss-web
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    stdin_open: true
    ports:
      - "8000:8000"
    depends_on:
      - db
  frontend:
    image: gudmundurorri4/hoss-frontend
    stdin_open: true
    ports:
      - "3000:3000"
    depends_on:
      - "web"
Both the web and frontend containers work fine. The backend works when I open it in the browser, and when I execute a GET request to http://web:8000 from within the frontend container I get the correct response. However, when I execute a GET request from my React app using the same address (http://web:8000), it always fails with the error
net::ERR_NAME_NOT_RESOLVED

Compose service names are only resolvable from inside the containers themselves (Compose sets up internal DNS for them).
From the viewpoint of your (or a user's) browser, there's no such thing as http://web:8000.
You'll have to either
publish the backend with a port and DNS name that is accessible to your users, e.g. https://backend.myservice.kittens/, and have the frontend that runs in your browser request things from there (be mindful of CORS; a sketch of that follows below),
or have the frontend service proxy those requests through to the backend (which it can access via http://web:8000/). If you're using webpack-dev-server, it has this capability built in.
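For the first option with a Django backend, one common way to handle the CORS part is the third-party django-cors-headers package (an assumption; it is not mentioned in the original posts). A minimal sketch of the relevant settings.py changes, with the React dev server on port 3000 as in the compose file above:

    # settings.py (sketch; assumes "pip install django-cors-headers")
    INSTALLED_APPS = [
        # ... existing apps ...
        "corsheaders",
    ]

    MIDDLEWARE = [
        "corsheaders.middleware.CorsMiddleware",  # placed as high as possible
        "django.middleware.common.CommonMiddleware",
        # ... the rest of the middleware ...
    ]

    # Allow the browser-side React app (served on port 3000) to call the API.
    CORS_ALLOWED_ORIGINS = [
        "http://localhost:3000",
    ]

The browser-side code would then call the published address (for example http://localhost:8000 during local development, or the public DNS name in production) rather than http://web:8000.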

connect to rabbitMQ cloud using django docker

I am trying to follow a "Microservices Application with Django, Flask, Mysql" guide on YouTube.
In this example a Django app uses RabbitMQ (the cloud version) to send a message that must be consumed. The following is consumer.py:
import pika

# 'amqps_key' stands for the connection URL obtained from the RabbitMQ cloud provider.
params = pika.URLParameters('amqps_key')
connection = pika.BlockingConnection(params)
channel = connection.channel()

channel.queue_declare(queue='admin')

def callback(ch, method, properties, body):
    print('Received in admin')
    print(body)

channel.basic_consume(queue='admin', on_message_callback=callback, auto_ack=True)

print('Started consuming')
channel.start_consuming()  # blocks here until the consumer is stopped
channel.close()
I have an endpoint in the urls.py that calls the producer.py
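producer.py itself is not shown in the post; as a rough sketch (an assumption based on the pika API and the queue name used in consumer.py), such a producer could look like this:

    import pika

    # Hypothetical producer.py sketch; 'amqps_key' is the same placeholder
    # connection URL as in consumer.py.
    params = pika.URLParameters('amqps_key')
    connection = pika.BlockingConnection(params)
    channel = connection.channel()

    channel.queue_declare(queue='admin')

    def publish(body):
        # Publish to the default exchange, routed to the 'admin' queue.
        channel.basic_publish(exchange='', routing_key='admin', body=body)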
This is the docker-compose:
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 8000:8000
    volumes:
      - .:/app
    depends_on:
      - db
    command: ["./wait-for-it.sh","db:3306","--","python","manage.py","runserver","0.0.0.0:8000"]
  db:
    image: mysql:5.7.36
    restart: always
    environment:
      MYSQL_DATABASE: tutorial_ms_admin
      MYSQL_USER: ms_admin
      MYSQL_PASSWORD: ms_admin
      MYSQL_ROOT_PASSWORD: ms_admin
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 3308:3306
Now: if I start docker-compose, then open a terminal, go inside the backend service and type "python consumer.py" in the service shell, and then hit the endpoint with Postman, I can see the message being consumed (both on screen and in the RabbitMQ cloud administration panel).
If I try to add python consumer.py as a command after the "wait-for-it.sh ..." part, it does not work. Messages are stuck in the RabbitMQ cloud administration panel, as if nothing consumes them.
Basically I would like not to have to open a terminal to start consumer.py, but to make it start automatically as soon as the backend service's runserver has started.
I cannot understand how to fix this.
EDIT:
Following the YouTube lesson I should have created a service to handle the call to the RabbitMQ cloud, but that is the point where I got stuck. I simplified the problem, otherwise it would have been too difficult for me to explain clearly.
The amqps_key is the key I get from RabbitMQ Cloud to access a free channel to exchange messages, so
params = pika.URLParameters('amqps_key')
connection = pika.BlockingConnection(params)
is where the script knows where to access the channel.
The script consumer.py is inside the backend (Django) service's structure. So after I launch docker-compose I open a terminal, enter the running service (docker-compose exec backend sh) and run
python consumer.py
Then, using Postman, when I hit 127.0.0.1:8000/api/products I can see the message received in the terminal (so the message started from that endpoint, reached the RabbitMQ cloud channel, and consumer.py is able to listen to that channel and consume the message).
If I try to run python consumer.py at docker-compose startup it doesn't work, and neither does setting up a separate service that tries to handle it.
For the first solution I tried different approaches, as variations of:
command: bash -c "python consumer.py && ./wait-for-it.sh db:3306 python manage.py runserver 0.0.0.0:800"
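Note that with bash -c "A && B", B only starts after A has exited successfully, and consumer.py never exits on its own because channel.start_consuming() blocks, so the runserver part is never reached. A rough sketch of a variant that backgrounds the consumer instead (still assuming the same wait-for-it.sh helper) would be:
command: bash -c "python consumer.py & ./wait-for-it.sh db:3306 -- python manage.py runserver 0.0.0.0:8000"
Running the consumer as its own service, as in the second compose file below, is the other way to keep it out of the web process.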
For the second approach, following that guide, I set up a service as such:
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 8000:8000
    volumes:
      - .:/app
    depends_on:
      - db
    command: ["./wait-for-it.sh","db:3306","--","python","manage.py","runserver","0.0.0.0:8000"]
  queue:
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - backend
    command: 'python consumer.py'
  db:
    image: mysql:5.7.36
    restart: always
    environment:
      MYSQL_DATABASE: tutorial_ms_admin
      MYSQL_USER: ms_admin
      MYSQL_PASSWORD: ms_admin
      MYSQL_ROOT_PASSWORD: ms_admin
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 3308:3306

No connection in a multicontainer docker environment

I am trying to deploy an application in Docker running on 64-bit Amazon Linux 2. I am using a pipeline which publishes images to a private repository on Docker Hub. Elastic Beanstalk uses docker-compose to run the containers, but so far I've had no success in accessing the application. I am not using a Dockerrun.aws.json file, as v3 does not support any container configuration and, as far as I know, it's not needed for docker-compose.
My docker-compose file contains several services, one of which is a RabbitMQ message broker.
version: '3.9'
services:
  Some.API:
    image: ...
    container_name: some-api
    networks:
      - my-network
    ports:
      - "9002:80"
  Another.API:
    image: ...
    container_name: another-api
    networks:
      - my-network
    ports:
      - "9003:80"
  rabbitmQ:
    image: rabbitmq:3-management-alpine
    container_name: rabbit-mq
    labels:
      NAME: rabbitmq
    volumes:
      - ./rabbitconfig/rabbitmq-isolated.conf:/etc/rabbitmq/rabbitmq.config
    networks:
      - my-network
    ports:
      - "4369:4369"
      - "5671:5671"
      - "5672:5672"
      - "25672:25672"
      - "15671:15671"
      - "15672:15672"
  front-end:
    image: ...
    container_name: front-end
    networks:
      - my-network
    ports:
      - "9001:80"
networks:
  my-network:
    driver: bridge
Once the current version of the application is successfully deployed to Beanstalk, I see that there is no successful communication in the bridge network.
In eb-stdouterr.log I see that there are errors while establishing the connection between the APIs and the message broker:
RabbitMQ.Client.Exceptions.BrokerUnreachableException: None of the specified endpoints were reachable.
The APIs are .NET Core applications, which use Beanstalk's environment variables to determine the name of the broker service. In the Configuration / Software / Environment properties section there is the following entry:
RABBIT_HOSTNAME | rabbitmq
which should ensure that the services use the proper host name.
Yet, I get exceptions. Any advice?
It turned out that I needed to reference the automatically generated .env file in docker-compose.yml like so:
front-end:
  image: ...
  container_name: front-end
  networks:
    - my-network
  ports:
    - "9001:80"
  env_file:   # <--- these
    - .env    # <--- 2 lines
for each service. Only after doing this were the Environment properties from AWS Beanstalk passed to the containers.

Docker-compose: db connection from web container to neo4j container using bolt

I'm working on a Django project with a neo4j db, using neomodel and django-neomodel.
I'm trying to containerize it using docker-compose.
When I build the images everything seems fine, but any connection from the web container to the db using Bolt is refused, although I can access the neo4j db from the browser over HTTP, and even from the local machine over Bolt.
This is the error I get:
neo4j.exceptions.ServiceUnavailable: Failed to establish connection to ('127.0.0.1', 7688) (reason 111)
I'm using the following configs:
Django==3.1.1
neo4j==4.1.0
neomodel==3.3.0
neobolt==1.7.17
this is my docker-compose file:
version: '3'
services:
  backend:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    depends_on:
      - neo4j_db
    networks:
      - mynetwork
    links:
      - neo4j_db
  neo4j_db:
    image: neo4j:3.5.17-enterprise
    ports:
      - "7474:7474"
      - "7688:7687"
    expose:
      - 7474
      - 7687
    volumes:
      - ./db/dbms:/data/dbms
    environment:
      - NEO4J_ACCEPT_LICENSE_AGREEMENT=yes
      - dbms.connector.bolt.listen_address=:7688
      - dbms.connector.bolt.advertised_address=:7688
    networks:
      - mynetwork
networks:
  mynetwork:
    driver: bridge
and here are the connection configs in the Django settings:
NEOMODEL_NEO4J_BOLT_URL = os.environ.get('NEO4J_BOLT_URL', 'bolt://neo4j:pass#123@127.0.0.1:7688')
Thanks in advance..
To connect from one container to another (inside the same docker-compose project) you should use the service name of the target container instead of localhost (or 127.0.0.1). In your case that is neo4j_db.
When connecting from another container you should use the internal port, in your case 7687.
In the neo4j service, the bolt listen_address should be 7687 instead of 7688 (honestly, I'm not sure why you are changing the default port).
To wrap up, the connection URL should be:
bolt://neo4j:pass@neo4j_db:7687
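Applied to the settings line from the question, that corrected URL would look roughly like this (keeping the question's environment-variable fallback and the placeholder password used above):

    import os

    # Sketch of the corrected setting; 'pass' is just the placeholder password from the answer.
    NEOMODEL_NEO4J_BOLT_URL = os.environ.get('NEO4J_BOLT_URL', 'bolt://neo4j:pass@neo4j_db:7687')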

How does the Client in a docker image know the ip address of a server which is in another docker image?

I am using React for the client, Django for the backend and Postgres for the db. I am preparing Docker images of the client, the server and the db.
My docker-compose.yml looks like this:
version: '3'
services:
  client:
    build: ./client
    stdin_open: true
    ports:
      - "3000:3000"
    depends_on:
      - server
  server:
    build: ./server
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: "postgres:12-alpine"
    restart: always
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: bLah6laH614h
Because the Docker images can be deployed anywhere separately, I am unsure how to access the server from the client code, and likewise how the server should find the db's IP. I am new to React, Django and Docker. Please help!
Using your docker-compose.yml file configuration as a basis, 4 things will happen, as per the docs:
1. A network called myapp_default is created (let's say your project folder is named myapp).
2. A container is created using db's configuration. It joins the network myapp_default under the name db.
3. A container is created using server's configuration. It joins the network myapp_default under the name server.
4. A container is created using client's configuration. It joins the network myapp_default under the name client.
Now, to send an HTTP request from client to server you should use this URL:
http://server:8000
because of item 3 and because the server's configured port is 8000.
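By the same reasoning (item 2), the Django server would reach Postgres under the host name db. A sketch of the corresponding DATABASES entry in settings.py, assuming psycopg2 is installed and the postgres image's default database and user names are used; only the password comes from the compose file above:

    # settings.py (sketch, not from the original post)
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql',
            'NAME': 'postgres',          # default database created by the postgres image
            'USER': 'postgres',          # default superuser of the postgres image
            'PASSWORD': 'bLah6laH614h',  # POSTGRES_PASSWORD from docker-compose.yml
            'HOST': 'db',                # the service name, resolvable on myapp_default
            'PORT': '5432',
        }
    }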

Can't connect to postgresql in docker (receiving fast shutdown request)

I am running Django/PostgreSQL in Docker containers. When I run docker-compose, the PostgreSQL service starts and listens for a moment, then it shuts down with the logs:
"listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"" then
"database system was shut down".
I have read somewhere that it is receiving a postmaster fast shutdown request, but I don't know how to solve this. I have tried changing ports and other PostgreSQL env variables without success.
Here is my .yml for docker-compose:
version: '3'
volumes:
  postgres_data_local: {}
  postgres_backup_local: {}
services:
  django:
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    depends_on:
      - postgres
      - redis
    volumes:
      - .:/app
    env_file: .env
    ports:
      - "8000:8000"
    command: /start.sh
  postgres:
    image: postgres:10.1-alpine
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    volumes:
      - postgres_data_local:/var/lib/postgresql/data
      - postgres_backup_local:/backups
    env_file: .env
  redis:
    image: redis:3.0
    ports:
      - '6379:6379'
My .env file looks something like this:
# PostgreSQL conf
POSTGRES_PASSWORD=p3wrd
POSTGRES_USER=postgres
POSTGRES_DB=postgres
POSTGRES_HOST=127.0.0.1 #have tried localhost, 0.0.0.0 etc
POSTGRES_PORT=5432
DATABASE_URL= postgresql://postgres:p3wrd@127.0.0.1:5432/postgres
# General settings
READ_DOT_ENV_FILE=True
SETTINGS_MODULE=config.settings.test
SECRET_KEY=Sup3rS3cr3tP#22word
DEBUG=True
ALLOWED_HOSTS=*
# URL for Redis
REDIS_URL=redis://127.0.0.1:6379
Why are you setting POSTGRES_HOST to 127.0.0.1 or variants thereof? That means "localhost", which in a container means "the local container". Postgres isn't running inside the django container, so that won't work.
Since you're using docker-compose, your containers are all running in a user-defined network. This means that Docker maintains a DNS server for you that maps service names to container IP addresses. In other words, you can simply set:
POSTGRES_HOST=postgres
DATABASE_URL=postgresql://postgres:p3wrd@postgres:5432/postgres
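If the Django settings build the database connection from these variables (a sketch; the project's actual settings module is not shown in the question), the same idea carries over:

    # settings sketch; reads the compose service name from POSTGRES_HOST
    import os

    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql',
            'NAME': os.environ.get('POSTGRES_DB', 'postgres'),
            'USER': os.environ.get('POSTGRES_USER', 'postgres'),
            'PASSWORD': os.environ.get('POSTGRES_PASSWORD', ''),
            'HOST': os.environ.get('POSTGRES_HOST', 'postgres'),  # the service name, not 127.0.0.1
            'PORT': os.environ.get('POSTGRES_PORT', '5432'),
        }
    }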