I am trying to set up Traefik to serve my Django API over HTTPS without exposing it to the outside network/world.
My docker-compose:
---
version: "3.6"
services:
  backend_prod:
    image: $BACKEND_IMAGE
    restart: always
    environment:
      - DJANGO_SECRET_KEY=$DJANGO_SECRET_KEY
      - DATABASE_ENGINE=$DATABASE_ENGINE
      - DATABASE_NAME=$DATABASE_NAME
      - DATABASE_USER=$DATABASE_USER
      - DATABASE_PASSWORD=$DATABASE_PASSWORD
      - DATABASE_HOST=$DATABASE_HOST
      - DATABASE_PORT=$DATABASE_PORT
      - PRODUCTION=TRUE
    security_opt:
      - no-new-privileges:true
    container_name: backend_prod
    networks:
      - traefik_default
  calendar_frontend_prod:
    image: $FRONTEND_IMAGE
    restart: always
    security_opt:
      - no-new-privileges:true
    container_name: frontend_prod
    environment:
      - PRODUCTION=TRUE
    networks:
      - traefik_default
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.frontend.entrypoints=webs"
      - "traefik.http.routers.frontend.rule=Host(`mywebsite.org`)"
      - "traefik.http.routers.frontend.tls.certresolver=letsencrypt"
      - "traefik.http.services.frontend.loadbalancer.server.port=4200"
      - "traefik.http.services.frontend.loadbalancer.server.scheme=http"
networks:
  traefik_default:
    external: true
Inside my frontend files, it is set up like this:
export const environment = {
  production: true,
  apiUrl: 'http://backend_prod'
};
After that, when I go to mywebsite.org and look at the network tab, I see:
polyfills.js:1 Mixed Content: The page at 'https://mywebsite.org/auth/login' was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint 'http://backend_prod/api/users/login'. This request has been blocked; the content must be served over HTTPS.
I tried adding the lines below to the backend_prod service:
- "traefik.enable=true"
- "traefik.http.routers.backend_prod.entrypoints=webs"
- "traefik.http.routers.backend_prod.rule=Host(`be.localhost`)"
- "traefik.http.services.backend_prod.loadbalancer.server.port=80"
- "traefik.http.services.backend_prod.loadbalancer.server.scheme=http"
but then the frontend gave me an error: https://be.localhost Connection Refused.
How could I solve this problem?
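For reference, one common pattern (a sketch only, not a confirmed fix; the /api path prefix and the backend port are assumptions) is to route the API through Traefik on the same public host, so the browser only ever speaks HTTPS to mywebsite.org:

# hypothetical labels on backend_prod: serve the API under the same HTTPS host
# as the frontend, e.g. https://mywebsite.org/api/...
- "traefik.enable=true"
- "traefik.http.routers.backend_prod.entrypoints=webs"
- "traefik.http.routers.backend_prod.rule=Host(`mywebsite.org`) && PathPrefix(`/api`)"
- "traefik.http.routers.backend_prod.tls.certresolver=letsencrypt"
- "traefik.http.services.backend_prod.loadbalancer.server.port=80"

The frontend's apiUrl would then point at https://mywebsite.org rather than at the internal container name, since it is the browser, not the frontend container, that makes the request.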
I'm running two applications using docker-compose. Each application has a bunch of containers. The intention is for App A (django app) to host the OIDC provider, while App B (some other app) will authenticate users by calling the App A API.
I'm using the django-oidc-provider library (https://django-oidc-provider.readthedocs.io/en/latest/index.html)
I've already configured the OIDC integration on both sides. However, every time App B redirects to App A, I hit the following error:
Redirect URI Error
The request fails due to a missing, invalid, or mismatching redirection URI (redirect_uri).
Even though the redirect_uri matches exactly on both sides.
Here's my docker-compose.yml:
version: '3'
networks:
  default:
    external:
      name: datahub-gms_default
services:
  django:
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    image: dqt
    container_name: dqt
    hostname: dqt
    platform: linux/x86_64
    depends_on:
      - postgres
    volumes:
      - .:/app:z
    environment:
      - DJANGO_READ_DOT_ENV_FILE=true
    env_file:
      - ./.envs/.local/.django
      - ./.envs/.local/.postgres
    ports:
      - "8000:8000"
    command: /start
  postgres:
    build:
      context: .
      dockerfile: ./compose/local/postgres/Dockerfile
    image: postgres
    container_name: postgres
    hostname: postgres
    volumes:
      - dqt_local_postgres_data:/var/lib/postgresql/data:Z
      - dqt_local_postgres_data_backups:/backups:z
    env_file:
      - ./.envs/.local/.postgres
  broker:
    container_name: broker
    depends_on:
      - zookeeper
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
      - KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
      - KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS=0
      - KAFKA_HEAP_OPTS=-Xms256m -Xmx256m
    hostname: broker
    image: confluentinc/cp-kafka:5.4.0
    ports:
      - 29092:29092
      - 9092:9092
  datahub-actions:
    depends_on:
      - datahub-gms
    environment:
      - GMS_HOST=datahub-gms
      - GMS_PORT=8080
      - KAFKA_BOOTSTRAP_SERVER=broker:29092
      - SCHEMA_REGISTRY_URL=http://schema-registry:8081
      - METADATA_AUDIT_EVENT_NAME=MetadataAuditEvent_v4
      - METADATA_CHANGE_LOG_VERSIONED_TOPIC_NAME=MetadataChangeLog_Versioned_v1
      - DATAHUB_SYSTEM_CLIENT_ID=__datahub_system
      - DATAHUB_SYSTEM_CLIENT_SECRET=JohnSnowKnowsNothing
      - KAFKA_PROPERTIES_SECURITY_PROTOCOL=PLAINTEXT
    hostname: actions
    image: public.ecr.aws/datahub/acryl-datahub-actions:${ACTIONS_VERSION:-head}
  datahub-frontend-react:
    container_name: datahub-frontend-react
    depends_on:
      - datahub-gms
    environment:
      - DATAHUB_GMS_HOST=datahub-gms
      - DATAHUB_GMS_PORT=8080
      - DATAHUB_SECRET=YouKnowNothing
      - DATAHUB_APP_VERSION=1.0
      - DATAHUB_PLAY_MEM_BUFFER_SIZE=10MB
      - JAVA_OPTS=-Xms512m -Xmx512m -Dhttp.port=9002 -Dconfig.file=datahub-frontend/conf/application.conf
        -Djava.security.auth.login.config=datahub-frontend/conf/jaas.conf -Dlogback.configurationFile=datahub-frontend/conf/logback.xml
        -Dlogback.debug=false -Dpidfile.path=/dev/null
      - KAFKA_BOOTSTRAP_SERVER=broker:29092
      - DATAHUB_TRACKING_TOPIC=DataHubUsageEvent_v1
      - ELASTIC_CLIENT_HOST=elasticsearch
      - ELASTIC_CLIENT_PORT=9200
      - AUTH_OIDC_ENABLED=true
      - AUTH_OIDC_CLIENT_ID=778948
      - AUTH_OIDC_CLIENT_SECRET=some-client-secret
      - AUTH_OIDC_DISCOVERY_URI=http://dqt:8000/openid/.well-known/openid-configuration/
      - AUTH_OIDC_BASE_URL=http://datahub:9002/
    hostname: datahub
    image: linkedin/datahub-frontend-react:${DATAHUB_VERSION:-head}
    ports:
      - 9002:9002
  datahub-gms:
    container_name: datahub-gms
    depends_on:
      - mysql
    environment:
      - DATASET_ENABLE_SCSI=false
      - EBEAN_DATASOURCE_USERNAME=datahub
      - EBEAN_DATASOURCE_PASSWORD=datahub
      - EBEAN_DATASOURCE_HOST=mysql:3306
      - EBEAN_DATASOURCE_URL=jdbc:mysql://mysql:3306/datahub?verifyServerCertificate=false&useSSL=true&useUnicode=yes&characterEncoding=UTF-8
      - EBEAN_DATASOURCE_DRIVER=com.mysql.jdbc.Driver
      - KAFKA_BOOTSTRAP_SERVER=broker:29092
      - KAFKA_SCHEMAREGISTRY_URL=http://schema-registry:8081
      - ELASTICSEARCH_HOST=elasticsearch
      - ELASTICSEARCH_PORT=9200
      - GRAPH_SERVICE_IMPL=elasticsearch
      - JAVA_OPTS=-Xms1g -Xmx1g
      - ENTITY_REGISTRY_CONFIG_PATH=/datahub/datahub-gms/resources/entity-registry.yml
      - MAE_CONSUMER_ENABLED=true
      - MCE_CONSUMER_ENABLED=true
    hostname: datahub-gms
    image: linkedin/datahub-gms:${DATAHUB_VERSION:-head}
    ports:
      - 8080:8080
    volumes:
      - ${HOME}/.datahub/plugins:/etc/datahub/plugins
  elasticsearch:
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
      - ES_JAVA_OPTS=-Xms256m -Xmx256m -Dlog4j2.formatMsgNoLookups=true
    healthcheck:
      retries: 4
      start_period: 2m
      test:
        - CMD-SHELL
        - curl -sS --fail 'http://localhost:9200/_cluster/health?wait_for_status=yellow&timeout=0s' || exit 1
    hostname: elasticsearch
    image: elasticsearch:7.9.3
    mem_limit: 1g
    ports:
      - 9200:9200
    volumes:
      - esdata:/usr/share/elasticsearch/data
  elasticsearch-setup:
    container_name: elasticsearch-setup
    depends_on:
      - elasticsearch
    environment:
      - ELASTICSEARCH_HOST=elasticsearch
      - ELASTICSEARCH_PORT=9200
      - ELASTICSEARCH_PROTOCOL=http
    hostname: elasticsearch-setup
    image: linkedin/datahub-elasticsearch-setup:${DATAHUB_VERSION:-head}
  kafka-setup:
    container_name: kafka-setup
    depends_on:
      - broker
      - schema-registry
    environment:
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_BOOTSTRAP_SERVER=broker:29092
    hostname: kafka-setup
    image: linkedin/datahub-kafka-setup:${DATAHUB_VERSION:-head}
  mysql:
    command: --character-set-server=utf8mb4 --collation-server=utf8mb4_bin
    container_name: mysql
    environment:
      - MYSQL_DATABASE=datahub
      - MYSQL_USER=datahub
      - MYSQL_PASSWORD=datahub
      - MYSQL_ROOT_PASSWORD=datahub
    hostname: mysql
    image: mysql:5.7
    ports:
      - 3306:3306
    volumes:
      - ../mysql/init.sql:/docker-entrypoint-initdb.d/init.sql
      - mysqldata:/var/lib/mysql
  mysql-setup:
    container_name: mysql-setup
    depends_on:
      - mysql
    environment:
      - MYSQL_HOST=mysql
      - MYSQL_PORT=3306
      - MYSQL_USERNAME=datahub
      - MYSQL_PASSWORD=datahub
      - DATAHUB_DB_NAME=datahub
    hostname: mysql-setup
    image: acryldata/datahub-mysql-setup:head
  schema-registry:
    container_name: schema-registry
    depends_on:
      - zookeeper
      - broker
    environment:
      - SCHEMA_REGISTRY_HOST_NAME=schemaregistry
      - SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=zookeeper:2181
    hostname: schema-registry
    image: confluentinc/cp-schema-registry:5.4.0
    ports:
      - 8081:8081
  zookeeper:
    container_name: zookeeper
    environment:
      - ZOOKEEPER_CLIENT_PORT=2181
      - ZOOKEEPER_TICK_TIME=2000
    hostname: zookeeper
    image: confluentinc/cp-zookeeper:5.4.0
    ports:
      - 2181:2181
    volumes:
      - zkdata:/var/opt/zookeeper
volumes:
  dqt_local_postgres_data: {}
  dqt_local_postgres_data_backups: {}
  esdata: null
  mysqldata: null
  zkdata: null
In the above, the datahub-frontend-react container is supposed to integrate with the dqt container for OIDC authentication.
The Docker log doesn't show any exceptions, and the HTTP status code is 200:
dqt | [28/Feb/2022 10:43:43] "GET /openid/.well-known/openid-configuration/ HTTP/1.1" 200 682
dqt | [28/Feb/2022 10:43:44] "GET /openid/authorize?response_type=code&redirect_uri=http%3A%2F%2Fdatahub%3A9002%2F%2Fcallback%2Foidc&state=9Fj1Bog-ZN8fhN2kufWng2fRGaqCYnkMz6n3yKxPowo&client_id=778948&scope=openid+profile+email HTTP/1.1" 200 126
Here's the redirect_uri configuration in django admin:
I suspect it could be related to the fact that they are different containers with different hostnames (I don't know what to do about that).
What could be the root cause of this issue?
Your log shows that the app is redirecting with this login URL, containing two %2F characters, so the URL used by the app is different from the one configured:
http://datahub:9002//callback/oidc
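If the extra slash comes from the trailing slash on the configured base URL, the fix may be as small as this one-line change in the compose file (an assumption based on the configuration shown above):

- AUTH_OIDC_BASE_URL=http://datahub:9002  # no trailing slash, so appending /callback/oidc yields a single slash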
INTERNAL AND EXTERNAL URLs
I'm not sure it will work once you resolve that, though, since the callback URL looks like a Docker Compose internal URL that the browser will be unable to reach. Aim to return a URL such as this instead:
http://localhost:9002/callback/oidc
One option that can be useful to make URLs more understandable during development, and to plan the real deployment, is to add custom host names to your computer's hosts file. You can then log in via URLs such as http://www.myapp.com, which I find clearer.
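For example, a single line in the hosts file (/etc/hosts on Linux/macOS; the host name is illustrative):

127.0.0.1    www.myapp.com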
See these resources for something to compare against, which describe a setup with both internal and external URLs.
Custom Hosts
Docker Compose Example
I'm currently trying to make a connection between two of my docker containers (the requesting container running Gunicorn/Django and the API container running Kroki).
I've had a look at other answers but seem to be coming up blank with a solution, so was hoping for a little poke in the right direction.
Docker-compose:
version: '3.8'
services:
  app:
    build:
      context: ./my_app
      dockerfile: Dockerfile.prod
    command: gunicorn my_app.wsgi:application --bind 0.0.0.0:8000 --access-logfile -
    volumes:
      - static_volume:/home/app/web/staticfiles
    expose:
      - 8000
    environment:
      - DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 kroki
    env_file:
      - ./.env.prod
    depends_on:
      - db
  db:
    image: postgres:13.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - ./.env.prod.db
  nginx:
    build: ./nginx
    volumes:
      - static_volume:/home/app/web/staticfiles
    ports:
      - 1337:80
    depends_on:
      - app
  kroki:
    image: yuzutech/kroki
    ports:
      - 7331:8000
volumes:
  postgres_data:
  static_volume:
settings.py
ALLOWED_HOSTS = os.environ.get("DJANGO_ALLOWED_HOSTS").split(" ")
Requesting code in django
url = 'http://kroki:7331/bytefield/svg/' + base64_var
try:
    response = requests.get(url)
    return response.text
except requests.exceptions.ConnectionError as e:
    # note: the built-in ConnectionError would not catch requests' exception
    print("Connection to bytefield module unavailable")
    return None
I'm able to access both containers via my browser successfully, however initiating the code for an internal call between the two throws out
requests.exceptions.ConnectionError: HTTPConnectionPool(host='kroki', port=7331): Max retries exceeded with url: /bytefield/svg/<API_URL_HERE> (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f286f5ecaf0>: Failed to establish a new connection: [Errno 111] Connection refused'))
I've had a go accessing the URL via localhost:7331 & 127.0.0.1:7331, however neither seems to help at all.
When you access other containers within the same network, you don't use the port published on the host; you use the actual port the application inside the container is listening on.
I made a really simple example so you can see the problem:
version: '3.8'
services:
  app:
    image: busybox
    entrypoint: tail -f /dev/null
  kroki:
    image: yuzutech/kroki
    ports:
      - 7331:8000
From Host
❯ curl -s -o /dev/null -w "%{http_code}" localhost:7331
200
From App
/ # wget kroki:7331
Connecting to kroki:7331 (172.18.0.3:7331)
wget: can't connect to remote host (172.18.0.3): Connection refused
/ # wget kroki:8000
Connecting to kroki:8000 (172.18.0.3:8000)
saving to 'index.html'
index.html 100% |************************************************************************| 51087 0:00:00 ETA
'index.html' saved
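Applied to the code in the question, that means targeting the container port in the request URL, a one-line change:

url = 'http://kroki:8000/bytefield/svg/' + base64_var  # 8000 is the port Kroki listens on inside the container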
I'm working on an application which uses these technologies:
Django backend
Vue.js front end
PostgreSQL database
Docker as the container runtime
Traefik (version 1.7) as a reverse proxy
Redis
awscli
Here is my production.yml file, which I use with docker-compose to run all the applications:
version: '3'
volumes:
  production_postgres_data: {}
  production_postgres_data_backups: {}
  production_traefik: {}
services:
  django:
    build:
      context: .
      dockerfile: ./compose/production/django/Dockerfile
    image: goplus_backend_production_django
    depends_on:
      - postgres
      - redis
    env_file:
      - ./.envs/.production/.django
      - ./.envs/.production/.postgres
    command: /start
    volumes:
      - ./static:/static # adding static & media file for django
      - ./media:/media
  vue:
    build:
      context: /root/goplus_front/new_dashboard_v2
    container_name: new_dashboard_v2
    environment:
      - HOST=localhost
      - PORT=8080
    ports:
      - "8080:8080"
    depends_on:
      - django
  postgres:
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    image: goplus_backend_production_postgres
    volumes:
      - production_postgres_data:/var/lib/postgresql/data
      - production_postgres_data_backups:/backups
    env_file:
      - ./.envs/.production/.postgres
    ports:
      - "5432:5432"
  traefik:
    build:
      context: .
      dockerfile: ./compose/production/traefik/Dockerfile
    image: goplus_backend_production_traefik
    depends_on:
      - django
    # volumes:
    #   - production_traefik:/etc/traefik/acme
    ports:
      - "0.0.0.0:80:80"
      - "0.0.0.0:443:443"
  redis:
    image: redis:5.0
  awscli:
    build:
      context: .
      dockerfile: ./compose/production/aws/Dockerfile
    env_file:
      - ./.envs/.production/.django
    volumes:
      - production_postgres_data_backups:/backups
So far everything is fine: if I use IP:80 I get redirected to the Django app (admin panel), and if I use IP:8080 I get redirected to the Vue app.
The problem is that now I want to use a domain name, not an IP address. I bought a domain name and added SSL to it, and it works fine when I access the Django app but not when I access the Vue app.
Here is the traefik.toml configuration file, which is responsible for routing requests to Django or Vue:
logLevel = "INFO"
defaultEntryPoints = ["http", "https"]

# Entrypoints, http and https
[entryPoints]
  # http should be redirected to https
  [entryPoints.http]
  address = ":80"
  # https is the default
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
      certFile = "/etc/certs/<crt_file_name>.crt"
      keyFile = "/etc/certs/<crt_key_file_name>.key"

[file]

[backends]
  [backends.django]
    [backends.django.servers.server1]
    url = "http://django:5000"
  [backends.vue]
    [backends.vue.servers.server1]
    url = "http://vue:8080"

[frontends]
  [frontends.django]
  backend = "django"
  passHostHeader = true
    [frontends.django.headers]
    HostsProxyHeaders = ['X-CSRFToken']
    [frontends.django.routes.dr1]
    rule = "Host:<domain name>, <IP name>"
  [frontends.vue]
  backend = "vue"
    [frontends.vue.routes.dr1]
    rule = "Host:<domain name>;PathPrefixStrip:/dashboard"
So, from that configuration, I want anyone accessing the Django app to use just the full domain name, and anyone accessing the Vue app to use DomainName/dashboard. It works fine for the Django app, but with the Vue app I get these errors.
Any help or recommendation to solve this problem?
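One common cause in this kind of setup (an assumption, since the actual errors aren't shown) is that PathPrefixStrip:/dashboard strips the prefix before forwarding, so the Vue app's assets are still requested from / and fail to load. With a Vue CLI project, that is usually handled by building the app with a matching public path:

// vue.config.js (hypothetical, for a Vue CLI project)
module.exports = {
  publicPath: '/dashboard/'
}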
I realize this question comes up a lot. I've read many threads and blog posts, but here I am. Long story short, I have a docker container running wurstmeister/zookeeper & wurstmeister/kafka, and then some services running in their own containers. I'll just mention the NodeJS one for now. Everything works fine at home using IP addresses (not localhost), so I'm baffled at what the difference is here. On AWS, it simply "doesn't work", even though it seems to at least connect to the broker in the beginning. I'm explicitly using internal IPs in the config as I don't want this exposed externally.
Reading around, I've tried two setups. One works at home (KAFKA_ADVERTISED_HOST_NAME). One doesn't (KAFKA_ADVERTISED_LISTENERS). Neither works on my EC2 Linux box:
Kafka docker-compose.yml
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
    networks:
      - my-network
  kafka:
    image: wurstmeister/kafka
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: <internal-ip>
      KAFKA_ADVERTISED_PORT: "9092"
      KAFKA_CREATE_TOPICS: "test:1:1"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - my-network
networks:
  my-network:
NodeJS docker-compose.yml
version: '2'
services:
  nodejs:
    build:
      context: ./
      dockerfile: Dockerfile
    networks:
      - kafka_my-network
    restart: unless-stopped
    ports:
      - "1337:3000"
    volumes:
      - "/tmp:/tmp"
      - "/var/log:/var/log"
networks:
  kafka_my-network:
    external: true
Then in NodeJS
const kafka = require('kafka-node'); // assuming the kafka-node client

const kafkaHost = '<internal-ip>:9092';
const client = new kafka.KafkaClient({kafkaHost});
const producer = new kafka.Producer(client);
const kafkaTopic = 'test';

producer.on('ready', function() {
  console.log(`kafka producer is ready`); // I see this, so I'm assuming all is well
  ready = true;
});

producer.on('error', function(err) {
  console.error(err);
});

const payload = [
  {
    topic: kafkaTopic,
    messages: JSON.stringify(myMessages)
  }
];

producer.send(payload, function(err, data) {
  if (err) {
    console.error(`Send error ${JSON.stringify(err)}`);
  }
  console.log(`Sent data ${JSON.stringify(data)}`);
});
When I start my NodeJS server, I see that I've connected to a Kafka broker. I can confirm as well that :9092 is open after checking w/ telnet and/or nc. Then, when it sends a request, the callback gets an empty error.
I realize KAFKA_ADVERTISED_HOST_NAME is deprecated, so for completeness, here is my attempt using KAFKA_ADVERTISED_LISTENERS, which failed. With this configuration I seemed to get the same results at home as I did on EC2.
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
    networks:
      - my-network
  kafka:
    image: wurstmeister/kafka
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://<internal-ip>:9092
      KAFKA_CREATE_TOPICS: "test:1:1"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - my-network
networks:
  my-network:
EDIT: I won't offer this as a solution, but the bitnami image with the following config works. The main difference is that it had a pretty straightforward README that I went through. I can't be certain I tried the exact equivalent config with wurstmeister (I tried many, and again, at least one of them worked in Docker containers on my own machine but not on a single EC2 instance).
Note that I did list 'kafka' with an internal IP (not loopback) in /etc/hosts for this. This should be tantamount to using the internal IP explicitly, as I had done above; an example entry is sketched after the compose file below.
version: '2'
services:
  zookeeper:
    image: 'bitnami/zookeeper:3'
    ports:
      - '2181:2181'
    volumes:
      - 'zookeeper_data:/bitnami'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
    networks:
      - my-network
  kafka:
    image: 'bitnami/kafka:2'
    ports:
      - '9092:9092'
      - '29092:29092'
    volumes:
      - 'kafka_data:/bitnami'
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,PLAINTEXT_HOST://:29092
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
    depends_on:
      - zookeeper
    networks:
      - my-network
volumes:
  zookeeper_data:
    driver: local
  kafka_data:
    driver: local
networks:
  my-network:
    driver: bridge
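The /etc/hosts entry mentioned above looked roughly like this (the IP is illustrative; it was the instance's internal, non-loopback address):

10.0.0.12    kafka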
I have two django projects (microservices) running in separate docker containers. Both projects use django-tenant-schemas. How can I send a request from service-bar to service-foo at the url http://boohoo.site.com:18150/api/me/, where 18150 is the port of project-a? I need to use the tenant url so that project-a can verify the tenant and process the request.
I can send a request using the container name, but that doesn't work: if I use http://site.foo:18150/api/me, the request goes through, but there's no tenant defined for site.foo.
Here's the docker-compose.yml:
version: '3.3'
services:
  db:
    container_name: site.postgres
    image: postgres
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  foo:
    container_name: site.foo
    build:
      context: ../poll
    command: python /app/foo/src/manage.py runserver 0.0.0.0:18150
    depends_on:
      - db
    environment:
      - DB_HOST=site.postgres
      - DJANGO_SETTINGS_MODULE=main.settings.dev
    stdin_open: true
    tty: true
    ports:
      - "18150:18150"
  bar:
    container_name: site.bar
    build:
      context: ../toll
    command: python /app/bar/src/manage.py runserver 0.0.0.0:18381
    depends_on:
      - db
    environment:
      - DB_HOST=site.postgres
      - DJANGO_SETTINGS_MODULE=main.settings.dev
    stdin_open: true
    tty: true
    ports:
      - "18381:18381"
You can do this using aliases on the default (or any other) network. For more info on this feature, see the documentation. I checked, and this is supported by your current compose file version (3.3), although I suggest you move up to the latest supported one if possible (3.7).
For compactness, I'm only reproducing the modified foo service declaration below, to which I only added the necessary networks stanza.
foo:
  container_name: site.foo
  build:
    context: ../poll
  command: python /app/foo/src/manage.py runserver 0.0.0.0:18150
  depends_on:
    - db
  environment:
    - DB_HOST=site.postgres
    - DJANGO_SETTINGS_MODULE=main.settings.dev
  networks:
    default:
      aliases:
        - boohoo.site.com
  stdin_open: true
  tty: true
  ports:
    - "18150:18150"
After this change, your foo service container will be reachable from any other container on the same network either with foo (the service name), site.foo (your custom container name) or boohoo.site.com (the network alias).
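As a quick sanity check (hypothetical code using the Python requests library), the bar service could then call:

import requests

# boohoo.site.com resolves via the network alias inside the compose network,
# and 18150 is the port the foo container listens on
response = requests.get('http://boohoo.site.com:18150/api/me/')
print(response.status_code)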