How to host multiple Django apps with an nginx proxy and route by subdomain?

I created a proxy container with Docker and generated SSL certificates for my domain with jwilder/nginx-proxy. That works, but now I've tried to route my Django apps by subdomain, and every request returns 502 Bad Gateway. I'm new to this and need help figuring out what I'm doing wrong.
This is my nginx-proxy docker-compose:
version: '3'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - certs:/etc/nginx/certs:ro
      - vhostd:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
    labels:
      - com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    restart: always
    environment:
      - NGINX_PROXY_CONTAINER=nginx-proxy
    volumes:
      - certs:/etc/nginx/certs:rw
      - vhostd:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/var/run/docker.sock:ro
  www:
    image: nginx
    restart: always
    expose:
      - "80"
    volumes:
      - /Users/kbs/git/peladonerd/varios/1/www:/usr/share/nginx/html:ro
    environment:
      - VIRTUAL_HOST=pablokbs.com,www.pablokbs.com
      - LETSENCRYPT_HOST=pablokbs.com,www.pablokbs.com
      - LETSENCRYPT_EMAIL=pablo@pablokbs.com
    depends_on:
      - nginx-proxy
      - letsencrypt
volumes:
  certs:
  html:
  vhostd:
and this is the docker-compose for the Django app (bak_web is the app to route by subdomain):
version: "3"
services:
core_api:
build:
context: .
env_file: .env
container_name: "bak-api"
ports:
- 8181:8181
volumes:
- ./BAK_API:/bak_api
- ./bak:/bak_api
command: uvicorn bak.asgi:app --host 0.0.0.0 --port 8181
bak_web:
build:
context: .
expose:
- "80"
env_file: .env
container_name: "bak-web"
volumes:
- static:/bak_web/static
- .:/bak_web
- ./bak_chatbot:/app
nginx-bak-web:
image: nginx
restart: always
expose:
- "80"
volumes:
- ./config/nginx/conf.d:/etc/nginx/conf.d
- static:/bak_web/static
environment:
- VIRTUAL_HOST=bakzion.duckdns.org
- LETSENCRYPT_HOST=bakzion.duckdns.org
- LETSENCRYPT_EMAIL=omar.cravioto.p#gmail.com
depends_on:
- bak_web
volumes:
.:
static:
Lastly, this is the local.conf configuration:
upstream bakzion.duckdns.org {
    server bak_web:80;
}

server {
    listen 80;
    server_name bakzion.duckdns.org;

    location /static/ {
        alias /bak_web/static/;
    }

    location / {
        include uwsgi_params;
        uwsgi_pass uwsgi://webapp.docker.localhost;
        include /etc/nginx/vhost.d/default_location;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
        uwsgi_read_timeout 180;
    }
}
I tried to create a proxy container with nginx to route every Django app hosted on the server.
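With jwilder/nginx-proxy, the proxy generates its own upstream and server blocks from each container's VIRTUAL_HOST variable, so a hand-written upstream file is usually unnecessary, and a 502 typically means the proxy cannot resolve or reach the app container. A minimal sketch of the usual pattern, with the network name as a placeholder, is to share one Docker network between the proxy and each app:

# In the proxy project: give the default network a fixed, shareable name
networks:
  default:
    name: nginx-proxy-net

# In each app project: join that network and declare the subdomain
services:
  bak_web:
    expose:
      - "80"
    environment:
      - VIRTUAL_HOST=bakzion.duckdns.org
      - LETSENCRYPT_HOST=bakzion.duckdns.org

networks:
  default:
    external: true
    name: nginx-proxy-net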

Do you really need this part?
www:
  image: nginx
  restart: always
  expose:
    - "80"
  volumes:
    - /Users/kbs/git/peladonerd/varios/1/www:/usr/share/nginx/html:ro
  environment:
    - VIRTUAL_HOST=pablokbs.com,www.pablokbs.com
    - LETSENCRYPT_HOST=pablokbs.com,www.pablokbs.com
    - LETSENCRYPT_EMAIL=pablo@pablokbs.com
  depends_on:
    - nginx-proxy
    - letsencrypt
It points to another person's domain; maybe that is causing a conflict.
I've recently hosted two sites with jwilder and both of them work.
Example of my jwilder config:
nginx-proxy:
  image: bbtsoftwareag/nginx-proxy-unrestricted-requestsize:alpine
  networks:
    - nginx-net
  container_name: nginx-proxy
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
    - certs:/etc/nginx/certs:ro
    - vhostd:/etc/nginx/vhost.d
    - html:/usr/share/nginx/html
    # If you have a default config, use the next line (from a config folder of your project):
    - ./config/default_location:/etc/nginx/vhost.d/default_location
    - myProject_static_myProject:/myProject/static
    - myProject_media_myProject:/myProject/media
    - myProject2_static_myProject2:/myProject2/static
    - myProject2_media_myProject2:/myProject2/media
  labels:
    - com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy
letsencrypt:
  image: jrcs/letsencrypt-nginx-proxy-companion
  restart: always
  environment:
    - NGINX_PROXY_CONTAINER=nginx-proxy
  volumes:
    - certs:/etc/nginx/certs:rw
    - vhostd:/etc/nginx/vhost.d
    - html:/usr/share/nginx/html
    - /var/run/docker.sock:/var/run/docker.sock:ro
volumes:
  certs:
  html:
  vhostd:
  myProject_static_myProject:
    external: true
  myProject_media_myProject:
    external: true
  myProject2_static_myProject2:
    external: true
  myProject2_media_myProject2:
    external: true
# If you have a network
networks:
  nginx-net:
    name: network_name
As the Pelado Nerd says... IMPRESIONANTE!
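For completeness, the app side that pairs with a proxy like this might look roughly as follows (a sketch; the service, domain, and image names are placeholders, reusing the external volumes and network from above):

myProject:
  build: .
  restart: always
  expose:
    - "8000"
  environment:
    - VIRTUAL_HOST=myproject.example.com
    - LETSENCRYPT_HOST=myproject.example.com
  volumes:
    # the same external volumes the proxy mounts, so it can serve /static and /media directly
    - myProject_static_myProject:/myProject/static
    - myProject_media_myProject:/myProject/media
  networks:
    - nginx-net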

Related

Custom per-VIRTUAL_HOST location config without effect: 502 Bad Gateway

I am trying to set up a dockerized nginx reverse proxy in front of a dockerized Gunicorn web server. The Gunicorn server should be accessible through example.com/django, so I added custom nginx directives as described here: https://github.com/nginx-proxy/nginx-proxy/blob/main/README.md#per-virtual_host-location-configuration
I placed the directives in example.com_django and www.example.com_django files. However, they are not picked up in the main nginx config file and don't seem to have any effect. The website is not accessible and results in a 502 Bad Gateway error.
Docker Compose files and the custom nginx config file are further below.
The main resources I used are:
https://testdriven.io/blog/dockerizing-django-with-postgres-gunicorn-and-nginx/#project-setup
https://github.com/nginx-proxy/nginx-proxy/blob/main/README.md
https://docs.gunicorn.org/en/stable/deploy.html#nginx-configuration
docker-compose.yml: NGINX Proxy
version: "3.9"
services:
nginx-proxy:
image: nginxproxy/nginx-proxy:alpine
container_name: nginx-proxy
volumes:
- conf:/etc/nginx/conf.d
- html:/usr/share/nginx/html
- dhparam:/etc/nginx/dhparam
- vhost:/etc/nginx/vhost.d:ro
- certs:/etc/nginx/certs:ro
- /var/run/docker.sock:/tmp/docker.sock:ro
restart: always
networks:
- nginx-net
ports:
- 80:80
- 443:443
acme:
image: nginxproxy/acme-companion:latest
container_name: nginx-proxy-acme
depends_on:
- nginx-proxy
volumes:
- html:/usr/share/nginx/html
- conf:/etc/nginx/conf.d
- dhparam:/etc/nginx/dhparam
- vhost:/etc/nginx/vhost.d:ro
- certs:/etc/nginx/certs:rw
- acme:/etc/acme.sh
- /var/run/docker.sock:/var/run/docker.sock:ro
environment:
- NGINX_PROXY_CONTAINER=nginx-proxy
- DEFAULT_EMAIL=account#domain.com
restart: always
networks:
- nginx-net
volumes:
conf:
certs:
html:
vhost:
dhparam:
acme:
networks:
nginx-net:
external: true
docker-compose.yml: Django Server
version: '3.8'
# Prod environment
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    command: gunicorn core.wsgi:application --forwarded-allow-ips="172.31.0.0/24,www.example.com,example.com" --bind 0.0.0.0:8083
    expose:
      - 8083
    env_file:
      - ./.env
    environment:
      - VIRTUAL_HOST=example.com,www.example.com
      - VIRTUAL_PATH=/django
      - LETSENCRYPT_HOST=example.com,www.example.com
      - LETSENCRYPT_EMAIL=account@domain.com
    depends_on:
      - db
    networks:
      - nginx-net
  db:
    image: postgis/postgis:15-3.3-alpine
    env_file:
      - ./.env
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
      - POSTGRES_DB=${DB_NAME}
    ports:
      - ${DB_PORT_EXT}:${DB_PORT_INT}
volumes:
  postgres_data:
networks:
  nginx-net:
    external: true
example.com_django / www.example.com_django in /etc/nginx/vhost.d
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
proxy_redirect off;
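One detail worth checking, as an assumption based on the compose file above: vhost is a named volume, so files created on the host never show up in /etc/nginx/vhost.d by themselves; they have to be copied into the volume (e.g. with docker cp) or bind-mounted explicitly, roughly like this:

nginx-proxy:
  volumes:
    # bind-mount the per-host files instead of relying on the named volume
    - ./vhost.d/example.com_django:/etc/nginx/vhost.d/example.com_django:ro
    - ./vhost.d/www.example.com_django:/etc/nginx/vhost.d/www.example.com_django:ro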

VueJS + Django Rest Framework in Docker

I have a VueJS front end and a Django Rest Framework backend which are independent (Django does not serve my VueJS app).
Locally they work very well together, but after deploying them to the server with docker-compose they no longer communicate. I can see my frontend, but the axios requests time out.
Here is how it's set up in my docker-compose:
version: '3'
networks:
  intern:
    external: false
  extern:
    external: true
services:
  backend:
    image: #from_registry
    container_name: Backend
    env_file:
      - ../.env
    depends_on:
      - db
    networks:
      - intern
    volumes:
      - statics:/app/static_assets/
      - medias:/app/media/
    expose:
      - "8000"
  db:
    image: "postgres:latest"
    container_name: Db
    environment:
      POSTGRES_PASSWORD: ****
    networks:
      - intern
    volumes:
      - pgdb:/var/lib/postgresql/data
  frontend:
    image: from_registry
    container_name: Frontend
    volumes:
      - statics:/home/app/web/staticfiles
      - medias:/home/app/web/mediafiles
    env_file:
      - ../.env.local
    depends_on:
      - backend
    networks:
      - intern
      - extern
    labels:
      - traefik.http.routers.site.rule=Host(`dev.x-fantasy.com`)
      - traefik.http.routers.site.tls=true
      - traefik.http.routers.site.tls.certresolver=lets-encrypt
      - traefik.port=80
volumes:
  pgdb:
  statics:
  medias:
In my axios configuration I put:
baseURL="http://backend:8000"
My frontend tries to access this URL but gets a timeout error.
In the console I have this error:
xhr.js:177 POST https://backend:8000/api/v1/token/login net::ERR_TIMED_OUT
It seems that https is used in place of http. Could the problem come from here?
Any idea how to make them communicate?
Thanks
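One plausible reading, hedged since only the compose file is shown: axios runs in the visitor's browser, and http://backend:8000 is a Docker-internal hostname that only resolves on the intern network, never from outside; the page being served over https also explains the browser upgrading the call. A sketch of one fix is to expose the API through the same Traefik host and point axios at a public path (the api router and service names here are illustrative):

backend:
  networks:
    - intern
    - extern   # Traefik must be able to reach the backend directly
  labels:
    - traefik.http.routers.api.rule=Host(`dev.x-fantasy.com`) && PathPrefix(`/api`)
    - traefik.http.routers.api.tls=true
    - traefik.http.routers.api.tls.certresolver=lets-encrypt
    - traefik.http.services.api.loadbalancer.server.port=8000

The axios baseURL then becomes https://dev.x-fantasy.com, so requests go through Traefik instead of at the Docker DNS name.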

Docker: How do I access a service that's in another container from the frontend?

I'm running a RESTful Django project on port 8000 and a React project on port 3000.
For development I've had all my URLs on the frontend as
href='localhost:8000/api/name' or href='localhost:8000/api/address'.
Now that I'm going into production, I want my hrefs to be href='mysite.com/api/name' or href='mysite.com/api/address'. I can't figure out how to do this. How do I access my RESTful data, which is in another container?
I found this article but don't think it's correct for production.
docker-compose.yml
version: "3.2"
services:
backend:
build: ./backend
volumes:
- ./backend:/app/backend
ports:
- "8000:8000"
stdin_open: true
tty: true
command: python3 manage.py runserver 0.0.0.0:8000
depends_on:
- db
- cache
links:
- db
frontend:
build: ./frontend
volumes:
- ./frontend:/app
#One-way volume to use node_modules from inside image
- /app/node_modules
ports:
- "3000:3000"
environment:
- NODE_ENV=development
- CHOKIDAR_USEPOLLING=true
depends_on:
- backend
tty: true
command: npm start
db:
image: mysql:latest
command: --default-authentication-plugin=mysql_native_password
volumes:
- "./mysql:/var/lib/mysql"
- "./.data/conf:/etc/mysql/conf.d"
ports:
- "3306:3306"
restart: always
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: temp
MYSQL_USER: root
MYSQL_PASSWORD: root
volumes:
mysql: {}
You can pass, for example, an API_URL ("mysite.com/api/" for prod and "localhost:8000/api/" for dev) as an environment variable to React, and use it to build the URLs.
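A minimal sketch of that idea, assuming Create React App, which only exposes variables prefixed with REACT_APP_:

frontend:
  build: ./frontend
  environment:
    # https://mysite.com/api/ in prod, http://localhost:8000/api/ in dev
    - REACT_APP_API_URL=https://mysite.com/api/

In the React code the links are then built from process.env.REACT_APP_API_URL instead of a hard-coded localhost:8000. Note that CRA bakes env vars in at build time, so for a production build the variable must be present when npm run build runs, not just when the container starts.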

How to serve static files using Traefik and Nginx in docker-compose

I am trying to serve static files using Traefik and Nginx, in Docker. My Django application works well, I can access all pages, but I can't get static file serving set up. When I open site.url/static/ it redirects me to the 404 page. For the code skeleton, I am using cookiecutter-django.
Here is my docker configuration:
django:
  build:
    context: .
    dockerfile: ./compose/production/django/Dockerfile
  image: dreamway_team_production_django
  depends_on:
    - postgres
    - redis
  env_file:
    - ./.envs/.production/.django
    - ./.envs/.production/.postgres
  command: /start
postgres:
  **
traefik:
  build:
    context: .
    dockerfile: ./compose/production/traefik/Dockerfile
  image: dreamway_team_production_traefik
  depends_on:
    - django
    - nginx
  volumes:
    - production_traefik:/etc/traefik/acme
  ports:
    - "0.0.0.0:80:80"
    - "0.0.0.0:443:443"
redis:
  **
nginx:
  image: nginx:1.17.4
  depends_on:
    - django
  volumes:
    - ./config/nginx.conf:/etc/nginx/conf.d/default.conf
    - ./dreamway_team/static:/static
and my config for traefik:
log:
  level: INFO
entryPoints:
  web:
    address: ":80"
  web-secure:
    address: ":443"
certificatesResolvers:
  letsencrypt:
    acme:
      email: "mail"
      storage: /etc/traefik/acme/acme.json
      httpChallenge:
        entryPoint: web
http:
  routers:
    web-router:
      rule: "Host(`[DOMAIN_NAME]`)"
      entryPoints:
        - web
      middlewares:
        - redirect
        - csrf
      service: django
    web-secure-router:
      rule: "Host(`[DOMAIN_NAME]`)"
      entryPoints:
        - web-secure
      middlewares:
        - csrf
      service: django
      tls:
        certResolver: letsencrypt
  middlewares:
    redirect:
      redirectScheme:
        scheme: https
        permanent: true
    csrf:
      headers:
        hostsProxyHeaders: ["X-CSRFToken"]
  services:
    django:
      loadBalancer:
        servers:
          - url: http://django:5000
providers:
  file:
    filename: /etc/traefik/traefik.yml
    watch: true
Any help would be appreciated! Thanks!
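A hedged guess at the missing piece: both routers send every path, including /static/, to the django service, so the nginx container that actually holds the static files is never consulted. Adding a more specific router that targets an nginx service would look roughly like this in the same file (the static-router name is illustrative; the nginx service and port 80 are assumptions from the compose file above):

http:
  routers:
    static-router:
      rule: "Host(`[DOMAIN_NAME]`) && PathPrefix(`/static`)"
      entryPoints:
        - web-secure
      tls:
        certResolver: letsencrypt
      service: nginx
  services:
    nginx:
      loadBalancer:
        servers:
          - url: http://nginx:80

Traefik gives longer, more specific rules higher priority by default, so /static requests would match this router before the catch-all Host rule.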

In my Dockerized Django application, my Celery task does not update the SQLite database (in another container). What should I do?

This is my docker-compose.yml.
version: "3"
services:
nginx:
image: nginx:latest
container_name: nginx_airport
ports:
- "8080:8080"
volumes:
- ./:/app
- ./docker_nginx:/etc/nginx/conf.d
- ./timezone:/etc/timezone
depends_on:
- web
rabbit:
image: rabbitmq:latest
environment:
- RABBITMQ_DEFAULT_USER=admin
- RABBITMQ_DEFAULT_PASS=asdasdasd
ports:
- "5672:5672"
- "15672:15672"
web:
build:
context: .
dockerfile: Dockerfile
command: /app/start_web.sh
container_name: django_airport
volumes:
- ./:/app
- ./timezone:/etc/timezone
expose:
- "8080"
depends_on:
- celerybeat
celerybeat:
build:
context: .
dockerfile: Dockerfile
command: /app/start_celerybeat.sh
volumes:
- ./:/app
- ./timezone:/etc/timezone
depends_on:
- celeryd
celeryd:
build:
context: .
dockerfile: Dockerfile
command: /app/start_celeryd.sh
volumes:
- ./:/app
- ./timezone:/etc/timezone
depends_on:
- rabbit
I have a task that executes every minute and updates the database located in "web". Everything works fine in the development environment. However, "celerybeat" and "celeryd" don't update my database when run via docker-compose. What went wrong?
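A common culprit here, offered as an educated guess since settings.py isn't shown: all four services bind-mount ./ to /app and therefore share one SQLite file, but SQLite is built for a single writer, and its file locking behaves poorly when several containers write concurrently, so updates from the Celery containers can be lost or never become visible to "web". The usual remedy is a client-server database that every container reaches over the network; a sketch with Postgres (service name and credentials are placeholders):

db:
  image: postgres:latest
  environment:
    - POSTGRES_USER=airport
    - POSTGRES_PASSWORD=change-me
    - POSTGRES_DB=airport
  volumes:
    - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:

The web, celerybeat and celeryd services then each add "db" to their depends_on, and Django's DATABASES setting points at HOST "db" instead of the SQLite file.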