I have two Django projects (microservices) running in separate Docker containers. Both projects use django-tenant-schemas. How can I send a request from service-bar to service-foo at the URL http://boohoo.site.com:18150/api/me/, where 18150 is the port of the foo project? I need to use the tenant URL so that the foo service can verify the tenant and process the request.
I can send a request using the container name, but that doesn't work: if I use http://site.foo:18150/api/me, the request goes through, but there is no tenant defined for the hostname site.foo.
Here's the docker-compose.yml:
version: '3.3'
services:
  db:
    container_name: site.postgres
    image: postgres
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  foo:
    container_name: site.foo
    build:
      context: ../poll
    command: python /app/foo/src/manage.py runserver 0.0.0.0:18150
    depends_on:
      - db
    environment:
      - DB_HOST=site.postgres
      - DJANGO_SETTINGS_MODULE=main.settings.dev
    stdin_open: true
    tty: true
    ports:
      - "18150:18150"
  bar:
    container_name: site.bar
    build:
      context: ../toll
    command: python /app/bar/src/manage.py runserver 0.0.0.0:18381
    depends_on:
      - db
    environment:
      - DB_HOST=site.postgres
      - DJANGO_SETTINGS_MODULE=main.settings.dev
    stdin_open: true
    tty: true
    ports:
      - "18381:18381"
You can do this using aliases on the default (or any other) network. For more info on this feature, see the documentation. I checked, and this is supported by your current Compose file version (3.3), although I do suggest you move up to the latest supported one (3.7) if possible.
For compactness, I'm reproducing only the modified foo service declaration below; the only change is the added networks stanza.
foo:
  container_name: site.foo
  build:
    context: ../poll
  command: python /app/foo/src/manage.py runserver 0.0.0.0:18150
  depends_on:
    - db
  environment:
    - DB_HOST=site.postgres
    - DJANGO_SETTINGS_MODULE=main.settings.dev
  networks:
    default:
      aliases:
        - boohoo.site.com
  stdin_open: true
  tty: true
  ports:
    - "18150:18150"
After this change, your foo service container will be reachable from any other container on the same network via foo (the service name), site.foo (your custom container name), or boohoo.site.com (the network alias).
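To tie this back to the original question, here is a minimal sketch (assuming the requests package is available in the bar container) of the call from bar to foo; because the request goes to the tenant hostname, django-tenant-schemas on the foo side sees the Host header it expects:

# bar side: call foo through the network alias so the Host header matches the tenant domain
import requests  # assumption: requests is installed in the bar image

response = requests.get("http://boohoo.site.com:18150/api/me/", timeout=5)
response.raise_for_status()
print(response.json())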
Related
I have a large monorepo Django app that I want to break into two separate repositories (one to handle external API requests and the other to handle the front end that I plan on showing to users). I would still like both Django apps to have access to the same DB when running things locally. Is there a way for me to do this? I'm running Docker for both, and my front-end-facing Django app is having issues connecting to the Postgres DB I have set up in a separate docker-compose file from the one I made for my front-end app.
External API docker-compose file (Postgres DB docker image gets created here when running docker-compose up --build)
---
version: "3.9"
services:
  db:
    image: postgres:13.4
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  api:
    restart: always
    build: .
    image: &img img-one
    command: bash start.sh
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    depends_on:
      - db
    env_file:
      - variables.env
Front-end-facing docker-compose file (this is the one I want to be able to connect to the DB above):
---
version: "3.9"
services:
  dashboard:
    restart: always
    build: .
    image: &img img-two
    volumes:
      - .:/code
    ports:
      - "8010:8010"
    depends_on:
      - react-app
    env_file:
      - variables.env
  react-app:
    restart: always
    build: .
    image: *img
    command: yarn start
    env_file:
      - variables.env
    volumes:
      - .:/app
      - /app/node_modules
    ports:
      - "3050:3050"
Below is the database configuration I have set up in the front-end Django app that I want to connect to the DB above, but I keep getting connection refused errors when I try to run python manage.py runserver.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("DB_NAME", "postgres"),
        "USER": os.environ.get("DB_USERNAME", "postgres"),
        "PASSWORD": os.environ.get("DB_PASSWORD", "postgres"),
        "HOST": os.environ.get("DB_HOSTNAME", "db"),
        "PORT": os.environ.get("DB_PORT", 5432),
    }
}
Any ideas on how to fix this issue? (For reference, I've also tried changing HOST to localhost instead of db but still get the same connection refused errors)
I've been having issues connecting to my Elasticsearch container since day one.
First I used elasticsearch as the hostname, then I tried the container name web_elasticsearch_1, and finally I set a static IP address on the container and passed it in my configuration file.
PyPI packages:
django==3.2.9
elasticsearch==7.15.1
elasticsearch-dsl==7.4.0
docker-compose.yml
version: "3.3"
services:
  web:
    build:
      context: .
      dockerfile: local/Dockerfile
    image: project32439/python
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    env_file:
      - local/python.env
    depends_on:
      - elasticsearch
  elasticsearch:
    image: elasticsearch:7.10.1
    environment:
      - xpack.security.enabled=false
      - discovery.type=single-node
    networks:
      default:
        ipv4_address: 172.18.0.10
settings.py
# Elasticsearch
ELASTICSEARCH_HOST = "172.18.0.10"
ELASTICSEARCH_PORT = 9200
service.py
from django.conf import settings
from elasticsearch import Elasticsearch, RequestsHttpConnection

es = Elasticsearch(
    hosts=[{"host": settings.ELASTICSEARCH_HOST, "port": settings.ELASTICSEARCH_PORT}],
    use_ssl=False,
    verify_certs=False,
    connection_class=RequestsHttpConnection,
)
traceback
HTTPConnectionPool(host='172.18.0.10', port=9200): Max retries exceeded with url: / (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x7f1973ebd6d0>, 'Connection to 172.18.0.10 timed out. (connect timeout=5)'))
By default, Docker Compose uses a bridge network to provision inter-container communication. You can read more about this network at the Debian Wiki.
What matters for you is that, by default, Docker Compose creates a hostname that equals the service name in the docker-compose.yml file. So update your file:
version: "3.3"
services:
  web:
    build:
      context: .
      dockerfile: local/Dockerfile
    image: project32439/python
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    env_file:
      - local/python.env
    depends_on:
      - elasticsearch
  elasticsearch:
    image: elasticsearch:7.10.1
    environment:
      - xpack.security.enabled=false
      - discovery.type=single-node
Now you can connect to elasticsearch:9200 instead of 172.18.0.10 from your web container. For more info, see this article.
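The only application-side change is then in settings.py: swap the hard-coded bridge IP for the service name (the client construction in service.py stays exactly as it is):

# settings.py
# Elasticsearch -- use the Compose service name; Docker's embedded DNS resolves it
ELASTICSEARCH_HOST = "elasticsearch"
ELASTICSEARCH_PORT = 9200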
I realize this question comes up a lot. I've read many threads and blog posts, but here I am. Long story short, I have a Docker setup running wurstmeister/zookeeper and wurstmeister/kafka, and then some services running in their own containers. I'll just mention the NodeJS one for now. Everything works fine at home using IP addresses (not localhost), so I'm baffled at what the difference is here. On AWS, it simply "doesn't work", even though it seems to at least connect to the broker in the beginning. I'm explicitly using internal IPs in the config, as I don't want this exposed to anything external.
Reading around, I've tried two setups. One works at home (KAFKA_ADVERTISED_HOST_NAME); one doesn't (KAFKA_ADVERTISED_LISTENERS). Neither works on my EC2 Linux box:
Kafka docker-compose.yml
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
    networks:
      - my-network
  kafka:
    image: wurstmeister/kafka
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: <internal-ip>
      KAFKA_ADVERTISED_PORT: "9092"
      KAFKA_CREATE_TOPICS: "test:1:1"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - my-network
networks:
  my-network:
NodeJS docker-compose.yml
version: '2'
services:
  nodejs:
    build:
      context: ./
      dockerfile: Dockerfile
    networks:
      - kafka_my-network
    restart: unless-stopped
    ports:
      - "1337:3000"
    volumes:
      - "/tmp:/tmp"
      - "/var/log:/var/log"
networks:
  kafka_my-network:
    external: true
Then in NodeJS
const kafka = require('kafka-node'); // client library providing KafkaClient/Producer

const kafkaHost = '<internal-ip>:9092';
const client = new kafka.KafkaClient({kafkaHost});
const producer = new kafka.Producer(client);
const kafkaTopic = 'test';

producer.on('ready', function() {
  console.log(`kafka producer is ready`); // I see this, so I'm assuming all is well
  ready = true;
});

producer.on('error', function(err) {
  console.error(err);
});

const payload = [
  {
    topic: kafkaTopic,
    messages: JSON.stringify(myMessages)
  }
];

producer.send(payload, function(err, data) {
  if (err) {
    console.error(`Send error ${JSON.stringify(err)}`);
  }
  console.log(`Sent data ${JSON.stringify(data)}`);
});
When I start my NodeJS server, I see that I've connected to a Kafka broker. I can also confirm that port 9092 is open, having checked with telnet and nc. Then, when it sends a request, the callback gets an empty error.
I realize KAFKA_ADVERTISED_HOST_NAME is deprecated, so in the name of completeness, here is my attempt using KAFKA_ADVERTISED_LISTENERS, which failed. With this configuration I seemed to get the same results at home as I did on EC2.
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
    networks:
      - my-network
  kafka:
    image: wurstmeister/kafka
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://<internal-ip>:9092
      KAFKA_CREATE_TOPICS: "test:1:1"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - my-network
networks:
  my-network:
EDIT: I won't offer this as a solution, but the Bitnami image with the following config works. The main difference is that it has a pretty straightforward README, which I followed. I can't be certain I tried the equivalent config with wurstmeister (I tried many, and again, at least one of them worked in Docker containers on my own machine but not on a single EC2 instance).
Note that I listed 'kafka' with an internal IP (not loopback) in /etc/hosts for this. This should be tantamount to using the internal IP explicitly, as I had done above.
version: '2'
services:
  zookeeper:
    image: 'bitnami/zookeeper:3'
    ports:
      - '2181:2181'
    volumes:
      - 'zookeeper_data:/bitnami'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
    networks:
      - my-network
  kafka:
    image: 'bitnami/kafka:2'
    ports:
      - '9092:9092'
      - '29092:29092'
    volumes:
      - 'kafka_data:/bitnami'
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,PLAINTEXT_HOST://:29092
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
    depends_on:
      - zookeeper
    networks:
      - my-network
volumes:
  zookeeper_data:
    driver: local
  kafka_data:
    driver: local
networks:
  my-network:
    driver: bridge
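To illustrate what this listener setup gives you, here is a hedged sketch in Python using kafka-python (not the kafka-node client shown above): a client running directly on the EC2 host uses the PLAINTEXT_HOST listener advertised as localhost:29092, while a client in another container on my-network would use kafka:9092 instead.

# Sketch only: assumes the kafka-python package; swap the bootstrap address
# for "kafka:9092" when running inside a container attached to my-network.
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:29092")
producer.send("test", b"hello from the EC2 host")
producer.flush()  # block until pending messages have been sent to the broker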
I'm trying to connect my application to my MySQL database, which I've got up and running with a docker-compose file. I'm using Flask and trying to connect using DBUtils.
I keep getting the error message described in my title:
(pymysql.err.OperationalError: (2003, "Can't connect to MySQL server on 'db' ([Errno 8] nodename nor servname provided, or not known)"))
I've tried using the IP addresses of my Docker instances as well as several other solutions to similar problems discussed here on Stack Overflow:
Docker-Compose can't connect to MySQL
Connecting to MySQL from Flask Application using docker-compose.
However, the offered solutions don't seem to be working for me.
My docker-compose file looks as follows:
version: '3.3'
services:
  db:
    image: mysql:8.0
    container_name: mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: 'myPassword'
      MYSQL_DATABASE: 'databaseName'
    volumes:
      - .:/dockerFiles
    ports:
      - "3306:3306"
    expose:
      - "3306"
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: phpmyadmin
    restart: always
    ports:
      - "8080:80"
    volumes:
      - /sessions
And my connection code looks as follows:
import pymysql
from dbutils.persistent_db import PersistentDB  # DBUtils >= 2.0; older releases use DBUtils.PersistentDB


def connect_db():
    # Connects to the database and takes care of the connection
    return PersistentDB(
        creator=pymysql, host='db',
        user='root', password='myPassword', database='databaseName', port=3306,
        autocommit=True, charset='utf8mb4',
        cursorclass=pymysql.cursors.DictCursor)
The error [Errno 8] nodename nor servname provided, or not known states that it can't find the database server. Thus, host='db' will only work if your Python code runs inside a container on the same Docker network as the database service db. Try adding a service for the Python code to your docker-compose file:
version: '3.3'
services:
  db:
    image: mysql:8.0
    container_name: mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: 'myPassword'
      MYSQL_DATABASE: 'databaseName'
    volumes:
      - .:/dockerFiles
    ports:
      - "3306:3306"
    expose:
      - "3306"
  web:
    build: .  # where your Dockerfile is
    command: sh -c 'python app.py'  # this should be the command to start your application
    ports:
      - "8082:8082"
    volumes:
      - .:/code  # it depends on the WORKDIR of your Dockerfile
    links:
      - db
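Once the web service runs on the same default network as db, the db hostname resolves and connect_db() works unchanged. Here is a hedged diagnostic sketch you could run inside the web container to confirm this, using the credentials from the compose file above:

# Diagnostic sketch: verify that the 'db' service is reachable from the web container
import socket

import pymysql

print(socket.gethostbyname("db"))  # should print the MySQL container's IP

conn = pymysql.connect(host="db", user="root", password="myPassword",
                       database="databaseName", port=3306)
with conn.cursor() as cur:
    cur.execute("SELECT 1")
    print(cur.fetchone())  # (1,)
conn.close()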
django views.py
import os

import jwt
import redis
from access import utils
from django.http import JsonResponse

redis_url = os.environ['REDIS_URI']
# Parse the full URI; StrictRedis(redis_url) would treat the whole URI as a hostname
R = redis.StrictRedis.from_url(redis_url)


def set(request):
    R.set('foo', 'bar')
    return JsonResponse({"code": 200, "msg": "success"})
docker-compose
version: "3"
services:
  rango:
    container_name: rango
    build: ./
    command: python backend/manage.py runserver 0.0.0.0:8000
    # command: npm start --prefix frontend/rango-frontend/
    working_dir: /usr/src/rango
    environment:
      REDIS_URI: redis://redis_db:6379
    ports:
      - "8000:8000"
    tty: true
    links:
      - elasticsearch
      - node
      - redis
  #elastic search
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.0
    ports:
      - "9200:9200"
  #node
  node:
    image: node:10.13.0
  #redis
  redis:
    image: redis
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    ports:
      - "6379:6379"
Here I am connecting to Redis from Django inside Docker, and it is giving me connection refused exceptions. Please have a look at my code and the screenshot shared below.
By default, Docker Compose makes containers discoverable with a hostname identical to the service name. Your redis container is thus discoverable via the hostname redis. However, your Django container is using the hostname redis_db.
Update your docker-compose.yml and change the REDIS_URI to reference the correct hostname:
REDIS_URI: redis://redis:6379
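With that change, the Django side needs no further modification. A minimal sketch of the resulting behaviour, using the same client setup as in views.py:

# Inside the rango container: REDIS_URI now points at the redis service name
import os

import redis

R = redis.StrictRedis.from_url(os.environ["REDIS_URI"])  # redis://redis:6379
R.set("foo", "bar")
print(R.get("foo"))  # b'bar'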