How to allow NGINX to buffer for multiple Django App Servers - django

How can one allow NGINX to buffer client requests for multiple Django App Servers that all run a WSGI server like Gunicorn? What do I need to change in the config files?

Use nginx's upstream directive to define a pool of application servers; then you can proxy_pass to the named upstream:
upstream my-upstream {
    server 127.0.0.1:9000;
    server 127.0.0.1:9001;
}

location / {
    proxy_pass http://my-upstream;
}
Unless you specify otherwise, requests will be round-robined between the different upstream servers.
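If you want some servers to receive a larger share of traffic, the round-robin can be weighted. A minimal sketch (the ports are assumed to match the example above; `weight` defaults to 1):

```nginx
# Weighted round-robin: :9000 receives roughly three requests
# for every one sent to :9001.
upstream my-upstream {
    server 127.0.0.1:9000 weight=3;
    server 127.0.0.1:9001;
}
```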

upstream my-upstream {
    least_conn;
    server 127.0.0.1:9000;
    server 127.0.0.1:9001;
}

location / {
    proxy_pass http://my-upstream;
}
Suppose you are using four servers. When server 1 goes down, nginx will intelligently shift the next request to the next available server; once server 1 is back up, subsequent requests will be sent to server 1 again. By default nginx uses a round-robin algorithm to distribute the requests.
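The failover behaviour described above can be tuned per server with the `max_fails` and `fail_timeout` parameters. A sketch (the values are illustrative, not recommendations):

```nginx
# Mark a server as unavailable for 30s after 3 failed attempts.
upstream my-upstream {
    server 127.0.0.1:9000 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:9001 max_fails=3 fail_timeout=30s;
}
```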
upstream my-upstream {
    ip_hash;
    server 127.0.0.1:9000;
    server 127.0.0.1:9001;
}

location / {
    proxy_pass http://my-upstream;
}
In this case the behaviour from the previous scenario won't be the same: a given client is always routed to the same server (say, server 1), and only when server 1 is down do its requests go to server 2; once server 1 is back up, that client is served by server 1 again.
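If instead you want a server that only takes traffic when the primaries are down, nginx also supports the `backup` parameter; a minimal sketch (note that `backup` cannot be combined with `ip_hash`):

```nginx
# :9001 receives traffic only while :9000 is unavailable.
upstream my-upstream {
    server 127.0.0.1:9000;
    server 127.0.0.1:9001 backup;
}
```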

Related

Prevent Nginx from changing host

I am building an application which currently runs on localhost. I have my entire dockerized application up and running at https://localhost/.
HTTP requests are being redirected to HTTPS.
My nginx configuration in docker-compose.yml is handling all the requests as it should.
I want my application to be accessible from anywhere, so I tried using ngrok to route requests to my localhost. I have a mobile app in development, so I need a local server for the APIs.
Now, when I enter ngrok's URL, like abc123.ngrok.io, in the browser, nginx converts it to https://localhost/. That works in my host system's browser, since my web app is running there, but when I open the same URL in my mobile emulator it doesn't work.
I am a newbie to nginx. Any suggestions are welcome.
Here's my nginx configuration.
nginx.conf
upstream web {
    ip_hash;
    server web:443;
}

# Redirect all HTTP requests to HTTPS
server {
    listen 80;
    server_name localhost;
    return 301 https://$server_name$request_uri;
}

# for https requests
server {
    # Pass request to the web container
    location / {
        proxy_pass https://web/;
    }

    location /static/ {
        root /var/www/mysite/;
    }

    listen 443 ssl;
    server_name localhost;

    # SSL properties
    # (http://nginx.org/en/docs/http/configuring_https_servers.html)
    ssl_certificate /etc/nginx/conf.d/certs/localhost.crt;
    ssl_certificate_key /etc/nginx/conf.d/certs/localhost.key;

    root /usr/share/nginx/html;
    add_header Strict-Transport-Security "max-age=31536000" always;
}
I got this configuration from a tutorial.
First of all, you set redirection from every HTTP request to HTTPS:
# Redirect all HTTP requests to HTTPS
server {
    listen 80;
    server_name localhost;
    return 301 https://$server_name$request_uri;
}
You are using the $server_name variable here, so every HTTP request to /some/path?request_string would be redirected to https://localhost/some/path?request_string. At the very least, change the return directive to
return 301 https://$host$request_uri;
Check this question for information about the difference between the $host and $server_name variables.
If these are the only server blocks in your nginx config, you can safely remove the server_name localhost; directive entirely; those blocks still remain the default blocks for all incoming requests on TCP ports 80 and 443.
Second, if you are using a self-signed certificate for localhost, be ready for the browser to complain about a mismatched certificate (issued for localhost, appearing at abc123.ngrok.io). If that doesn't break your mobile app, it's OK; if it does, you can get a certificate for your abc123.ngrok.io domain from Let's Encrypt for free after you start your ngrok connection (check this page for available ACME clients and options). Or you can disable HTTPS altogether if it isn't strictly required for your debugging process, using just this single server block:
server {
    # Pass request to the web container
    location / {
        proxy_pass https://web/;
    }

    location /static/ {
        root /var/www/mysite/;
    }
}
Of course this should not be used in production, only for debugging.
And one last thing: I don't see any sense in encrypting traffic between the nginx and web containers inside Docker itself, especially since you have already set up HTTP-to-HTTPS redirection with nginx. It gives you nothing in terms of security, only some extra overhead. Use the plain HTTP protocol on port 80 for communication between the nginx and web containers:
upstream web {
    ip_hash;
    server web:80;
}

server {
    ...
    location / {
        proxy_pass http://web;
    }
}
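For this to work, the web container has to listen for plain HTTP on port 80. Assuming the container runs Gunicorn, its docker-compose service might be adjusted along these lines (a sketch; "web" and "mysite.wsgi" are placeholders for your actual service and Django project names):

```yaml
# docker-compose.yml fragment
web:
  command: gunicorn mysite.wsgi:application --bind 0.0.0.0:80
  expose:
    - "80"
```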

Django Channels Nginx production

I have a Django project and recently added Channels to use WebSockets. This all seems to work fine, but the problem I have is getting it production ready.
My setup is as follows:
Nginx web server
Gunicorn for django
SSL enabled
Since I added Channels to the mix, I have spent the last day trying to get it to work.
All the tutorials say to run Daphne on some port and then show how to set up nginx for that.
But what about having Gunicorn serve Django?
So now I have Gunicorn running this Django app on 8001.
If I run Daphne on another port, let's say 8002, how would it know it's part of this Django project? And what about running workers?
Should Gunicorn, Daphne and runworker all run together?
This question is actually addressed in the latest Django Channels docs:
It is good practice to use a common path prefix like /ws/ to
distinguish WebSocket connections from ordinary HTTP connections
because it will make deploying Channels to a production environment in
certain configurations easier.
In particular for large sites it will be possible to configure a
production-grade HTTP server like nginx to route requests based on
path to either (1) a production-grade WSGI server like Gunicorn+Django
for ordinary HTTP requests or (2) a production-grade ASGI server like
Daphne+Channels for WebSocket requests.
Note that for smaller sites you can use a simpler deployment strategy
where Daphne serves all requests - HTTP and WebSocket - rather than
having a separate WSGI server. In this deployment configuration no
common path prefix like /ws/ is necessary.
In practice, your NGINX configuration would then look something like (shortened to only include relevant bits):
upstream daphne_server {
    server unix:/var/www/html/env/run/daphne.sock fail_timeout=0;
}

upstream gunicorn_server {
    server unix:/var/www/html/env/run/gunicorn.sock fail_timeout=0;
}

server {
    listen 80;
    server_name _;

    location /ws/ {
        proxy_pass http://daphne_server;
    }

    location / {
        proxy_pass http://gunicorn_server;
    }
}
(Above it is assumed that you are binding the Gunicorn and Daphne servers to Unix socket files.)
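For completeness, the two servers might be bound to those socket files along these lines (a sketch; the project module names are assumptions, and the exact Daphne invocation depends on your Channels version):

```shell
# Gunicorn serving ordinary HTTP requests over a unix socket (WSGI)
gunicorn myproject.wsgi:application \
    --bind unix:/var/www/html/env/run/gunicorn.sock

# Daphne serving WebSocket requests on its own unix socket (ASGI, Channels 2+)
daphne -u /var/www/html/env/run/daphne.sock myproject.asgi:application
```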
I have created an example of how to mix Django Channels and Django Rest Framework. I set up nginx routing so that:
websockets connections are going to daphne server
HTTP connections (REST API) are going to gunicorn server
Here is my nginx configuration file:
upstream app {
    server wsgiserver:8000;
}

upstream ws_server {
    server asgiserver:9000;
}

server {
    listen 8000 default_server;
    listen [::]:8000;

    client_max_body_size 20M;

    location / {
        try_files $uri @proxy_to_app;
    }

    location /tasks {
        try_files $uri @proxy_to_ws;
    }

    location @proxy_to_ws {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_pass http://ws_server;
    }

    location @proxy_to_app {
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Url-Scheme $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app;
    }
}
I recently answered a similar question; have a look there for an explanation of how Django Channels works.
Basically, you don't need Gunicorn anymore. You have Daphne, which is the interface server that accepts HTTP/WebSocket connections, and you have your workers that run Django views. Then, obviously, you have your channel backend that glues everything together.
To make it work you have to configure CHANNEL_LAYERS in settings.py and also run the interface server: $ daphne my_project.asgi:channel_layer
and your worker:
$ python manage.py runworker
NB! If you chose Redis as the channel backend, pay attention to the size of the files you're serving. If you have large static files, make sure nginx serves them; otherwise clients will experience cryptic errors caused by Redis running out of memory.
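For reference, a minimal CHANNEL_LAYERS sketch, assuming the channels_redis backend (Channels 2+; the older asgi_redis backend used by Channels 1.x has a different BACKEND path):

```python
# settings.py fragment -- a sketch; the host/port are the Redis
# defaults and would need to match your actual Redis deployment.
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("127.0.0.1", 6379)],
        },
    },
}
```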

nginx map request post with files to a django upstream server

I have three upstream Gunicorn servers with nginx sitting in front of them. What I want to do now is map all POST requests that contain files to one particular set of servers, and every other request to the main group of upstream servers, so that some servers are dedicated to file upload and processing. I'd appreciate it if this can be done with two sets of upstream servers.
What I currently have:
upstream appservers {
    server 192.168.1.1:8000;
    server 192.168.1.2:8000;
    server 192.168.1.3:8000;
}
What I want to do is
upstream appservers {
    server 192.168.1.1:8000;
    server 192.168.1.2:8000;
}

upstream file_processors {
    server 192.168.1.3:8000;
    server 192.168.1.4:8000;
}

server {
    location / {
        if (-f $request_filename) {
            proxy_pass http://file_processors;
            break;
        }
        proxy_pass http://appservers;
    }
}
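Note that -f $request_filename only tests whether the requested URI maps to an existing file on disk; it does not detect file uploads. A more predictable approach is to give uploads their own URL prefix and route on that. A sketch (the /upload/ path and the size limit are assumptions):

```nginx
server {
    # Uploads are posted to a dedicated prefix and go to the
    # file-processing pool; everything else is round-robined.
    location /upload/ {
        client_max_body_size 50M;   # illustrative limit
        proxy_pass http://file_processors;
    }

    location / {
        proxy_pass http://appservers;
    }
}
```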

How to set up load balancing with nginx and Gunicorn?

I set up a Django server using nginx, Gunicorn and Ubuntu 14.04. Now I need to enable nginx as a load balancer. However, I couldn't find anything related to this on the web, and I am unclear how to enable load balancing on nginx with Gunicorn being part of my upstream block.
According to the nginx documentation, load balancing requires at least the following basic setup:
http {
    upstream myapp1 {
        server srv1.example.com;
        server srv2.example.com;
        server srv3.example.com;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://myapp1;
        }
    }
}
This setup is clear and understood. My configuration with gunicorn looks like this and contains a reference to the gunicorn.sock file.
http {
    upstream myapp1 {
        server unix:/home/myapp/run/stage/gunicorn.sock fail_timeout=0;
    }

    server {
        listen 80;

        location / {
            if (!-f $request_filename) {
                proxy_pass http://myapp1;
                break;
            }
        }
    }
}
It would be great, if you could please help me with the following questions:
Is the following upstream block correct to enable load balancing in this setup?
Or do I have to reference the .sock file on the other servers instead of using, e.g., srv2.example.com?
Do I have to add a similar block to all my servers or just to one server?
upstream myapp1 {
    server unix:/home/myapp/run/stage/gunicorn.sock fail_timeout=0;
    server srv2.example.com;
    server srv3.example.com;
}
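For what it's worth, nginx does allow mixing a local unix socket with remote hosts in a single upstream, so a block along these lines is syntactically valid (a sketch; the hostnames are from the question, and whether the local instance belongs in the pool is a design decision):

```nginx
upstream myapp1 {
    # local Gunicorn instance, reached via its socket file
    server unix:/home/myapp/run/stage/gunicorn.sock fail_timeout=0;
    # remote app servers, reached over the network
    server srv2.example.com;
    server srv3.example.com;
}
```

A unix socket is only reachable on the same machine, so each remote server would run its own Gunicorn bound to a TCP port that nginx can reach.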
Hope somebody has done this before.
Thank you,
Chris

How to configure protected access to files on remote nginx with X-Accel-Redirect

I have 2 servers. The first (domain.com) is a Django/Apache server; the second (f1.domain.com) is a file server (nginx). Some files are protected and should only be downloadable by registered users. To that end I have set up the nginx server with the following configuration:
server {
    listen 80 default_server;
    server_name *.domain.com;

    access_log /home/domain/logs/access.log;

    location /files/ {
        internal;
        root /home/domain;
    }
}
and from Django I send a request with the X-Accel-Redirect header, but it doesn't work. I think it's because the request comes from a remote server.
How can I accomplish this task?
"and from django I send a request via X-Accel-Redirect header" -- it's incorrect, the "X-Accel" header must be a part of response header from the upstream server.
As http://wiki.nginx.org/X-accel said, there must be a proxy_pass or fastcgi_pass directive to send the response header to nginx.
location /protected_files {
    internal;
    proxy_pass http://127.0.0.2;
}
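On the application side, the upstream response just needs to carry the header. A minimal, framework-free WSGI sketch (in Django you would set the same header on an HttpResponse; the URL-to-location mapping here is an assumption):

```python
def protected_download(environ, start_response):
    """Tell nginx to serve the real file from the internal location.

    nginx sees X-Accel-Redirect in this response and internally
    re-dispatches the request to its `internal` /protected_files
    location, streaming the file itself.
    """
    filename = environ.get("PATH_INFO", "").rsplit("/", 1)[-1]
    # Real code would authenticate the user before handing out the file.
    start_response("200 OK", [
        ("Content-Type", "application/octet-stream"),
        ("X-Accel-Redirect", "/protected_files/" + filename),
    ])
    return [b""]
```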