Docker Django nginx gunicorn URL dropping port

Firstly, apologies if this is a duplicate, but I have not found a solution in the similar posts shown on SO.
I have a Docker Django image which uses nginx and gunicorn.
Gunicorn script:
exec /var/www/venv/bin/gunicorn wsgi:application \
--bind 0.0.0.0:8001 \
--access-logfile /var/log/gunicorn/access.log \
--error-logfile /var/log/gunicorn/error.log
Nginx config:
server {
server_name 172.0.0.1;
access_log off;
location / {
proxy_pass http://127.0.0.1:8001;
proxy_set_header Host $host:$server_port;
}
location /static/ {
autoindex on;
alias /var/www/django/assets/;
expires 7d;
}
}
I am exposing port 80 and mapping to 49260.
When browsing to the Docker host's external IP including the port, the site is published and serves the static files.
http://xxx.xx.xx.xxx:49260/
The issue is that when I navigate to any other page in the Django site, the mapped port is dropped from the URL, and the request is then picked up by the host server's nginx config.
What I am trying to achieve is to keep the port in the URL, so that I can later reverse proxy it from the host server.
Any advice would be really appreciated.

The answer was adding:
proxy_set_header Host $http_host;
to the nginx conf, which passes hostname:port through in the Host header.
See the original thread on serverfault.com for the full explanation.
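For reference, a minimal sketch of the adjusted location block, assuming the same 127.0.0.1:8001 upstream as in the config above:
location / {
    # $http_host is the Host header exactly as the client sent it,
    # including the non-standard port, so hostname:port is preserved
    proxy_set_header Host $http_host;
    proxy_pass http://127.0.0.1:8001;
}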

Related

Deploying Django with Nginx, Gunicorn and Supervisor

I'm trying to deploy my Django app with Nginx and Gunicorn by following this tutorial, but I modified some steps so I can use Conda instead of virtualenv.
The setup looks like this:
Nginx serves my Vue app
Requests from Vue are made to api.example.com
Nginx listens to api.example.com and directs requests to Gunicorn's unix socket
Things I've checked:
I can see the Vue requests in Nginx's access.log.
I can also see those requests with journalctl -f -u gunicorn, in the supervisor.log, and gunicorn's access.log
When my Django app starts, it creates a log file, so I can see that Gunicorn starts it. But Django is not responding to requests from the unix socket.
I can see a response from Django when I ssh in and run the following command:
curl --no-buffer -XGET --unix-socket /var/www/example/run/gunicorn.sock http://localhost/about. This command only gives a response when any of my ALLOWED_HOSTS are used in place of localhost.
My Nginx, Supervisor and Gunicorn configurations all use the full path to gunicorn.sock.
Should I see Django running on port 8000 or anything if I do something like nmap localhost?
I saw another post mention that Nginx should point to port 8000 and that gunicorn should be run with one of:
gunicorn --bind 0.0.0.0:8000 <djangoapp>.wsgi --daemon
gunicorn <djangoapp>.wsgi:application --bind <IP>:8000 --daemon
gunicorn <djangoapp>.wsgi:application --bind=unix:/var/www/example/run/gunicorn.sock
But doesn't exposing port 8000 defeat the purpose of using Nginx as a reverse proxy and Gunicorn's unix socket? Doesn't exposing 8000 also increase the attack surface? Or is it best practice to expose port 8000? I'm a bit confused about why I would both expose that port and use both Nginx and Gunicorn.
My main problem: Why can I get responses from Django via the unix socket with curl, but not via requests from Vue? Why aren't Vue's requests making it from Gunicorn to Django via the unix socket?
I'm really stuck. Any suggestions?
Frontend Nginx config
server {
listen 80 default_server;
listen [::]:80 default_server;
# server_name example.com;
# server_name myIP;
root /var/www/example/frontend/dist;
server_name example.com www.example.com;
location =/robots.txt {
root /opt/example;
}
location /thumbnail/ {
alias /opt/example/static/img/thumbnail/;
}
location /bg/ {
alias /opt/example/static/img/bg/;
}
location / {
try_files $uri $uri/ /index.html;
}
}
API Nginx config
upstream backend_server {
server unix:/var/www/example/run/gunicorn.sock fail_timeout=0;
}
server {
listen 80;
server_name api.example.com;
client_max_body_size 4G;
access_log /var/log/nginx/api-access.log;
error_log /var/log/nginx/api-error.log;
location / {
include proxy_params;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_headers_hash_max_size 512;
proxy_headers_hash_bucket_size 128;
proxy_redirect off;
if (!-f $request_filename) {
proxy_pass http://backend_server;
}
}
}
Gunicorn config
#!/bin/bash
NAME="backend"
DJANGODIR=/var/www/example/backend
SOCKFILE=/var/www/example/run/gunicorn.sock
USER=django
GROUP=example
NUM_WORKERS=3
DJANGO_SETTINGS_MODULE=backend.settings
DJANGO_WSGI_MODULE=backend.wsgi
CONDA_SRC=/home/justin/anaconda3/etc/profile.d/conda.sh
GUNICORN=/home/justin/anaconda3/envs/production/bin/gunicorn
echo "starting backend"
cd $DJANGODIR
source $CONDA_SRC
conda activate production
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
exec $GUNICORN \
${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--user=$USER --group=$GROUP \
--bind=unix:$SOCKFILE \
--log-level=debug \
--log-file=- \
--error-logfile=/var/www/example/backend/logs/gunicorn-error.log \
--access-logfile=/var/www/example/backend/logs/gunicorn-access.log
Gunicorn access.log
- - [08/Sep/2020:01:51:24 -0400] "OPTIONS /c/about/ HTTP/1.0" 200 0 "http://example.com/c/about" "Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.83 Mobile Safari/537.36"
- - [08/Sep/2020:01:51:24 -0400] "POST /c/about/ HTTP/1.0" 400 143 "http://example.com/c/about" "Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.83 Mobile Safari/537.36"
But doesn't exposing port 8000 defeat the purpose of using Nginx as a reverse proxy and Gunicorn's unix socket?
In gunicorn, you have to expose port 8000 on localhost, like this: gunicorn --bind 127.0.0.1:8000 <djangoapp>.wsgi --daemon. Exposing it on 0.0.0.0 would obviously be a security vulnerability, considering your nginx is on the same server.
Doesn't exposing 8000 also increase the surface area for attack vectors? Or is it best practice to expose port 8000? I'm a bit confused why I would use both expose that port and use both Nginx and Gunicorn.
You don't need to expose port 8000 specifically; you can expose any port, but you need to tell gunicorn to listen on at least one port (or socket) so that nginx can pass requests to it.
And regarding using both nginx and gunicorn: they are really different and handle very different functions of an application.
Nginx uses an "event-driven" approach to handle requests, so a single nginx worker can handle thousands of requests simultaneously. Gunicorn, on the other hand, mostly (by default) uses sync workers, which means a request stays with a worker until it is processed. (posted this twice today :p)
So you need both: if you remove nginx, all your requests will return 50x errors, except those currently being handled by gunicorn, until a worker is free. Also, gunicorn is not made to handle user traffic directly, and in bigger applications things like load balancing can only be done by nginx. So nginx has its own purpose in an application.
After neeraj9194 pointed out the 400, I did more searching for issues relating to Nginx, Gunicorn, 400 errors and Django, and I came across a ton of similar issues. It looks like it's mainly an Nginx issue. The answer in this blog fixed my issue.
I replaced the location block in my API Nginx config with:
location / {
proxy_set_header Host $host;
proxy_pass http://backend_server;
proxy_set_header X-Forwarded-Host $server_name;
proxy_set_header X-Real-IP $remote_addr;
}

Bad Gateway when configuring nginx with Django app container and Gunicorn

I'm using docker-compose to deploy a Django app on a VM with Nginx installed on the VM as a web server.
But I'm getting "502 Bad Gateway". I believe it's a network issue: I think Nginx can't access the Docker container. However, when I used the same configuration in an Nginx container it worked perfectly with the Django app, but I need to use the nginx installed on the VM, not the one running in Docker.
This is my docker-compose file:
version: "3.2"
services:
web:
image: ngrorra/newsapp:1.0.2
restart: always
ports:
- "8000:8000"
volumes:
- type: volume
source: django-static
target: /code/static
- type: volume
source: django-media
target: /code/media
environment:
- "DEBUG_MODE=False"
- "DB_HOST=…”
- "DB_PORT=5432"
- "DB_NAME=db_1”
- "DB_USERNAME=username1111"
volumes:
django-static:
django-media:
And this is my nginx.conf file:
upstream web_app {
server web:8000;
}
server {
listen 80;
location /static/ {
autoindex on;
alias /code/static/;
}
location /media/ {
autoindex on;
alias /code/media/;
}
location / {
proxy_pass http://web_app;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_redirect off;
}
#For favicon
location /favicon.ico {
alias /code/assets/favicon.ico;
}
# Error pages
error_page 404 /404.html;
location = /404.html {
root /code/templates/;
}
}
Does anyone know what is the issue?
Thank you!
As commented above, using "web" as the host name will not work, because that service name only resolves inside the Docker compose network, not for the nginx installed on the VM; you could try localhost or the Docker IP (you can get it using ifconfig on Ubuntu, for example).
For the network issue, I think you could create a new external Docker network using docker network create and add it to the networks definition inside your compose file (https://docs.docker.com/compose/networking/#use-a-pre-existing-network). Another possibility is to use the host network.
When I run Docker applications with Nginx, usually I first create an external Docker network with a defined IP (using some Docker network IP, usually 172.x.x.x), then add an Nginx container to my docker-compose.yaml, and my server block inside nginx.conf is something like this:
upstream web_app {
server 172.x.x.x:8000;
}
.
.
.
It works without problems. Hope this can help you.
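As a rough sketch of that first step (the network name and subnet below are illustrative, not from the original setup):
# create an external bridge network with a fixed, known subnet
docker network create --driver bridge --subnet 172.25.0.0/16 web_net
The compose file then references that network as a pre-existing external network, and the web service can be pinned to an address inside the subnet, which is the 172.x.x.x value used in the upstream block above.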

Django Channels Nginx production

I have a Django project and recently added Channels to use websockets. This seems to all work fine, but the problem I have is getting it production ready.
My setup is as follows:
Nginx web server
Gunicorn for django
SSL enabled
Since I have added Channels to the mix, I have spent the last day trying to get it to work.
All the tutorials say to run daphne on some port and then show how to set up nginx for that.
But what about having gunicorn serving Django?
So now I have gunicorn running this Django app on 8001.
If I run daphne on another port, let's say 8002, how does it know it's part of this Django project? And what about running workers?
Should Gunicorn, Daphne and the runworker processes all run together?
This question is actually addressed in the latest Django Channels docs:
It is good practice to use a common path prefix like /ws/ to
distinguish WebSocket connections from ordinary HTTP connections
because it will make deploying Channels to a production environment in
certain configurations easier.
In particular for large sites it will be possible to configure a
production-grade HTTP server like nginx to route requests based on
path to either (1) a production-grade WSGI server like Gunicorn+Django
for ordinary HTTP requests or (2) a production-grade ASGI server like
Daphne+Channels for WebSocket requests.
Note that for smaller sites you can use a simpler deployment strategy
where Daphne serves all requests - HTTP and WebSocket - rather than
having a separate WSGI server. In this deployment configuration no
common path prefix like /ws/ is necessary.
In practice, your NGINX configuration would then look something like (shortened to only include relevant bits):
upstream daphne_server {
server unix:/var/www/html/env/run/daphne.sock fail_timeout=0;
}
upstream gunicorn_server {
server unix:/var/www/html/env/run/gunicorn.sock fail_timeout=0;
}
server {
listen 80;
server_name _;
location /ws/ {
proxy_pass http://daphne_server;
}
location / {
proxy_pass http://gunicorn_server;
}
}
(Above it is assumed that you are binding the Gunicorn and Daphne servers to Unix socket files.)
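For illustration, a rough sketch of how the two servers might be started against those socket files; the project module name myproject is an assumption, and daphne's -u flag binds it to a unix socket:
# WSGI server for ordinary HTTP requests, bound to a unix socket (illustrative module name)
gunicorn myproject.wsgi:application --bind unix:/var/www/html/env/run/gunicorn.sock
# ASGI server for WebSocket requests, bound to its own unix socket
daphne -u /var/www/html/env/run/daphne.sock myproject.asgi:application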
I have created an example of how to mix Django Channels and Django REST Framework. I set up nginx routing so that:
websocket connections go to the daphne server
HTTP connections (REST API) go to the gunicorn server
Here is my nginx configuration file:
upstream app {
server wsgiserver:8000;
}
upstream ws_server {
server asgiserver:9000;
}
server {
listen 8000 default_server;
listen [::]:8000;
client_max_body_size 20M;
location / {
try_files $uri @proxy_to_app;
}
location /tasks {
try_files $uri @proxy_to_ws;
}
location @proxy_to_ws {
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_redirect off;
proxy_pass http://ws_server;
}
location @proxy_to_app {
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Url-Scheme $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://app;
}
}
I recently answered a similar question; have a look there for an explanation of how Django Channels works.
Basically, you don't need gunicorn anymore. You have daphne, which is the interface server that accepts HTTP/WebSockets, and you have your workers that run Django views. Then obviously you have your channel backend that glues everything together.
To make it work you have to configure CHANNEL_LAYERS in settings.py and also run the interface server: $ daphne my_project.asgi:channel_layer
and your worker:
$ python manage.py runworker
NB! If you choose redis as the channel backend, pay attention to the file sizes you're serving. If you have large static files, make sure NGINX serves them, otherwise clients will experience cryptic errors that may occur due to redis running out of memory.

Nginx conf for Gunicorn on another vm?

I have hosted my Django project on Ubuntu using Gunicorn as the application server.
Now I want to serve my requests through Nginx, but it should be on a different VM.
Normally my nginx project.conf would be like:
server {
listen 80;
server_name server_domain_or_IP;
location /static/ {
root /home/user/myproject;
}
location / {
include proxy_params;
proxy_pass http://unix:/home/user/myproject/myproject.sock;
}
}
What changes should be made here to let Nginx route requests to my Gunicorn server?
You need to bind Gunicorn to an IP address and port instead of a UNIX socket.
Then in your Nginx config, change proxy_pass to the IP address and port that you are running gunicorn on.
proxy_pass http://1.2.3.4:8000;
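For example, a minimal sketch of the gunicorn side; the module name and port are assumptions, so pick whatever matches your project:
# on the Django VM: listen on an interface the Nginx VM can reach (illustrative module/port)
gunicorn myproject.wsgi:application --bind 0.0.0.0:8000
Since that port is now reachable over the network rather than via a local socket, it is worth firewalling it so that only the Nginx VM can connect.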

gunicorn, nginx, and using port 80 for running a django web application

I have django, nginx, and gunicorn installed on a web server.
Nginx listens on port 80
Gunicorn runs django project on port 8000
This works fine. If I go to www.mysite.com:8000/myapp/ the Django application comes up OK. But what if I want users to go to www.mysite.com/myapp/ to view the Django application? I don't think getting rid of Nginx is the answer, and I'm hoping I missed some configuration tweak I can apply to make this work.
Any advice is appreciated.
You can use the following configuration so you can access your website normally on port 80.
This is your nginx configuration file, sudo vim /etc/nginx/sites-available/django:
upstream app_server {
server 127.0.0.1:9000 fail_timeout=0;
}
server {
listen 80 default_server;
listen [::]:80 default_server ipv6only=on;
root /usr/share/nginx/html;
index index.html index.htm;
client_max_body_size 250M;
server_name _;
keepalive_timeout 15;
# Your Django project's media files - amend as required
location /media {
alias /home/xxx/yourdjangoproject/media;
}
# your Django project's static files - amend as required
location /static {
alias /home/xxx/yourdjangoproject/static;
}
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://app_server;
}
}
and configure gunicorn as an Upstart job:
description "Gunicorn daemon for Django project"
start on (local-filesystems and net-device-up IFACE=eth0)
stop on runlevel [!12345]
# If the process quits unexpectedly trigger a respawn
respawn
setuid yourdjangousernameonlinux
setgid yourdjangousernameonlinux
chdir /home/xxx/yourdjangoproject
exec gunicorn \
--name=yourdjangoproject \
--pythonpath=yourdjangoproject \
--bind=0.0.0.0:9000 \
--config /etc/gunicorn.d/gunicorn.py \
yourdjangoproject.wsgi:application
No, getting rid of nginx is definitely not the answer. The answer is to follow the very nice documentation to configure nginx as a reverse proxy to gunicorn.