nginx proxy causing an error when the response takes too long - Django

I have an nginx configuration that proxies to a Django REST service (through gunicorn).
Everything works correctly, but when the response takes too long (more than 30s), I get a 503 Service Unavailable error.
I am sure this is the cause, because other requests work correctly; it fails only on specific requests where the response is big and fetching the data from a third-party API takes too long.
Below is my nginx configuration:
server {
    listen www.server.com:80;
    server_name www.server.com;

    client_max_body_size 200M;
    keepalive_timeout 300;

    location /server/ {
        proxy_pass http://127.0.0.1:8000/;
        proxy_connect_timeout 120s;
        proxy_read_timeout 300s;
        client_max_body_size 200M;
    }

    location / {
        root /var/www/html;
        index index.html index.htm;
    }
}
I am sure the issue is in nginx and not gunicorn, because if I curl gunicorn directly from inside the machine I get a response.
Thanks,

You do specify proxy_connect_timeout and proxy_read_timeout, but never proxy_send_timeout. (TBH, I don't think you need to modify the timeout for connect(2), as that call simply establishes the TCP connection and wouldn't depend on the size or generation time of an individual page; but the other two seem like fair game.)
Additionally, as per https://stackoverflow.com/a/48614613/1122270, another consideration might be proxy_http_version — your curl is probably using HTTP/1.1, whereas nginx does HTTP/1.0 by default, and your backend might behave differently.
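For example, a sketch of your location block with both suggestions applied (the 300s send timeout simply mirrors your existing read timeout; tune both to your slowest request):

location /server/ {
    proxy_pass http://127.0.0.1:8000/;
    proxy_http_version 1.1;       # match what your curl test negotiates
    proxy_connect_timeout 120s;
    proxy_read_timeout 300s;      # waiting for the backend to produce the response
    proxy_send_timeout 300s;      # transmitting the request to the backend
    client_max_body_size 200M;
}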

When you run the following:
$ gunicorn --help | grep -A2 -i time
--graceful-timeout INT
Timeout for graceful workers restart. [30]
--do-handshake-on-connect
Whether to perform SSL handshake on socket connect
--
-t INT, --timeout INT
Workers silent for more than this many seconds are
killed and restarted. [30]
you can see that gunicorn's default worker timeout is 30 seconds, so I would assume the timeout comes from gunicorn and not from nginx. You don't just need a timeout increase on the nginx side, but also on gunicorn.
You can either add
timeout=180
to your config.py file, or you can pass it on the command line when launching gunicorn:
gunicorn -t 180 ......
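For reference, a minimal gunicorn config file sketch (the file name config.py comes from the answer above; the bind address and worker count are assumptions, not from the question):

# config.py: minimal gunicorn config sketch
bind = "127.0.0.1:8000"  # matches the proxy_pass target in the nginx config (assumption)
workers = 3              # assumption; keep your existing worker count
timeout = 180            # workers silent for more than 180s are killed and restarted

which you would launch with something like gunicorn -c config.py myproject.wsgi:application (myproject is a placeholder).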

Related

Nginx unable to proxy django application through gunicorn

I'm trying to deploy a Django app on Ubuntu Server 18.04 using Nginx and Gunicorn. Every tool seems to work properly, at least judging from the logs and service statuses.
The point is that if I log into my server over SSH and use curl, gunicorn sees the request and handles it. However, if I type my IP directly into the browser, I simply get the typical Welcome to nginx home page and the request is completely invisible to gunicorn, so it seems nginx is unable to pass the request to the gunicorn socket.
I'm using nginx 1.14.0, Django 2.2.1, Python 3.6.7, gunicorn 19.9.0 and PostgreSQL 10.8.
This is my nginx config
server {
    listen 80;
    server_name localhost;

    location /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        alias /home/django/myproject/myproject;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
    }
}
And this is my gunicorn.socket unit
[Unit]
Description=gunicorn socket

[Socket]
ListenStream=/run/gunicorn.sock

[Install]
WantedBy=sockets.target
and gunicorn.service
[Unit]
Description=gunicorn daemon
Requires=gunicorn.socket
After=network.target

[Service]
User=django
Group=www-data
WorkingDirectory=/home/django/myproject/myproject
ExecStart=/home/django/myproject/myproject/myproject/bin/gunicorn \
          --access-logfile - \
          --workers 3 \
          --bind unix:/run/gunicorn.sock \
          MyProject.wsgi:application

[Install]
WantedBy=multi-user.target
I've been following this guide (https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-18-04), where most of it worked as expected, with the difference that my project is not a completely new one as in the tutorial but cloned from my git repo (it's tested, though, and the code works properly).
I was expecting the Django admin to be accessible from my browser already with this config, but it's not. When I access my IP from my browser I get Welcome to nginx, and a 404 if I visit /admin. In addition, the gunicorn logs show no requests.
On the other hand, if I log into my server through SSH and execute curl --unix-socket /run/gunicorn.sock localhost, I can see the curl request in the gunicorn logs.
Any help is welcome. I've been at this for hours and I'm not able to get even one request from outside the server.
PS: it's also not something related to the server's ports, since when I access the root of my IP, I get the nginx answer. It just seems like nginx has no config at all.
In your nginx config, you should use your proper server_name instead of localhost:
server_name mydomain.com;
If not, you will fall back to the default nginx server, which returns the "welcome to nginx" message. You can change which virtual server is the default by changing the order of servers, removing the nginx default, or using the default_server parameter. You can also listen for multiple server names.
listen 80 default_server;
server_name mydomain.com localhost;
If the Host header field does not match a server name, NGINX Plus routes the request to the default server for the port on which the request arrived. The default server is the first one listed in the nginx.conf file, unless you include the default_server parameter to the listen directive to explicitly designate a server as the default.
https://docs.nginx.com/nginx/admin-guide/web-server/web-server/#setting-up-virtual-servers
Remember that you have to reload the nginx config after making changes. sudo nginx -s reload for example.
Finally, I've got it working properly. You were right about the nginx config, although my real problem was that I hadn't deleted/modified the default nginx config file in the sites-enabled folder. Thus, when I set listen 80 default_server I got the following error:
[emerg] 10619#0: a duplicate default server for 0.0.0.0:80 in
/etc/nginx/sites-enabled/mysite.com:4
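If you hit that duplicate default server error, a minimal fix sketch (assuming the conflict comes from the stock default site) is:

sudo rm /etc/nginx/sites-enabled/default   # just a symlink; the original stays in sites-available
sudo nginx -t && sudo systemctl reload nginx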
Anyway, I had a problem with the static files that I still don't understand: I needed to set DEBUG = True to be able to see the static files of the admin module.
I'll keep investigating the proper way of serving the admin panel's static files in production.
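For reference, the usual production approach (a sketch; the STATIC_ROOT path is an assumption, not from the question) is to set STATIC_ROOT in settings.py, run python manage.py collectstatic, and let nginx serve the collected directory, so DEBUG can stay False:

location /static/ {
    # serves the files gathered by `python manage.py collectstatic`,
    # assuming STATIC_ROOT = '/home/django/myproject/static/' in settings.py
    alias /home/django/myproject/static/;
}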
Thank you so much for the help!

Django Channels Nginx production

I have a django project and recently added channels to use websockets. This all seems to work fine, but the problem I have is getting it production ready.
My setup is as follows:
Nginx web server
Gunicorn for django
SSL enabled
Since I added channels to the mix, I have spent the last day trying to get it to work.
All the tutorials say to run daphne on some port and then show how to set up nginx for that.
But what about having gunicorn serving django?
So now I have gunicorn running this django app on 8001.
If I run daphne on another port, let's say 8002, how does it know it's part of this django project? And what about running workers?
Should Gunicorn, Daphne and runworkers all run together?
This question is actually addressed in the latest Django Channels docs:
It is good practice to use a common path prefix like /ws/ to distinguish WebSocket connections from ordinary HTTP connections because it will make deploying Channels to a production environment in certain configurations easier.

In particular for large sites it will be possible to configure a production-grade HTTP server like nginx to route requests based on path to either (1) a production-grade WSGI server like Gunicorn+Django for ordinary HTTP requests or (2) a production-grade ASGI server like Daphne+Channels for WebSocket requests.

Note that for smaller sites you can use a simpler deployment strategy where Daphne serves all requests - HTTP and WebSocket - rather than having a separate WSGI server. In this deployment configuration no common path prefix like /ws/ is necessary.
In practice, your NGINX configuration would then look something like (shortened to only include relevant bits):
upstream daphne_server {
    server unix:/var/www/html/env/run/daphne.sock fail_timeout=0;
}

upstream gunicorn_server {
    server unix:/var/www/html/env/run/gunicorn.sock fail_timeout=0;
}

server {
    listen 80;
    server_name _;

    location /ws/ {
        proxy_pass http://daphne_server;
    }

    location / {
        proxy_pass http://gunicorn_server;
    }
}
(Above it is assumed that you are binding the Gunicorn and Daphne servers to Unix socket files.)
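For completeness, the two servers might be started along these lines (a sketch; my_project is a placeholder module name, and the socket paths match the upstreams above):

gunicorn --bind unix:/var/www/html/env/run/gunicorn.sock my_project.wsgi:application
daphne -u /var/www/html/env/run/daphne.sock my_project.asgi:application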
I have created an example of how to mix Django Channels and Django Rest Framework. I set up nginx routing so that:
websockets connections are going to daphne server
HTTP connections (REST API) are going to gunicorn server
Here is my nginx configuration file:
upstream app {
    server wsgiserver:8000;
}

upstream ws_server {
    server asgiserver:9000;
}

server {
    listen 8000 default_server;
    listen [::]:8000;

    client_max_body_size 20M;

    location / {
        try_files $uri @proxy_to_app;
    }

    location /tasks {
        try_files $uri @proxy_to_ws;
    }

    location @proxy_to_ws {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_pass http://ws_server;
    }

    location @proxy_to_app {
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Url-Scheme $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app;
    }
}
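For completeness, the two backends behind those upstreams might be started along these lines (a sketch; app is a placeholder module name, and wsgiserver/asgiserver are assumed to resolve to the hosts running them):

gunicorn app.wsgi:application --bind 0.0.0.0:8000   # REST API
daphne -b 0.0.0.0 -p 9000 app.asgi:application      # websockets (app.asgi:channel_layer on Channels 1.x)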
I recently answered a similar question; have a look there for an explanation of how django channels work.
Basically, you don't need gunicorn anymore. You have daphne, which is the interface server that accepts HTTP/WebSockets, and you have your workers that run django views. Then obviously you have your channel backend that glues everything together.
To make it work you have to configure CHANNEL_LAYERS in settings.py and also run the interface server: $ daphne my_project.asgi:channel_layer
and your worker:
$ python manage.py runworker
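A minimal CHANNEL_LAYERS sketch for the redis backend mentioned below, in the Channels 1.x style that matches the daphne my_project.asgi:channel_layer invocation above (the routing module path is an assumption):

# settings.py
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "CONFIG": {"hosts": [("localhost", 6379)]},
        "ROUTING": "my_project.routing.channel_routing",  # assumed module path
    },
}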
NB! If you chose redis as the channel backend, pay attention to the file sizes you're serving. If you have large static files, make sure NGINX serves them; otherwise clients will experience cryptic errors that may be caused by redis running out of memory.

nginx returning malformed packet/no response with 200 when request body is large

I've hosted my Django rest framework API server with gunicorn behind nginx. When I hit the API through nginx with a small request body, the response comes back fine. But with a large payload, it returns nothing, with a 200 OK response.
However, when I hit gunicorn directly, it returns a proper response.
NGINX is messing up the response when the request payload is large.
I captured packets via tcpdump, and the capture shows that the response contains a malformed packet. Following is the TCP dump:
[Malformed Packet: JSON]
    [Expert Info (Error/Malformed): Malformed Packet (Exception occurred)]
        [Malformed Packet (Exception occurred)]
        [Severity level: Error]
        [Group: Malformed]
NGINX config:
server {
    listen 6678 backlog=10000;

    client_body_timeout 180s;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_read_timeout 120s;
        proxy_connect_timeout 120s;
        proxy_pass http://localhost:8000;
        proxy_redirect default;
    }
}
I've never seen NGINX play up on me like this. Any help is appreciated.
If nginx and gunicorn are running on the same server, a unix socket is a bit more performant than a loopback connection, I believe; I can't tell from the config snippet whether you're already doing that. The only other thing I'm seeing in the gunicorn deploy docs that might be helpful here is client_max_body_size 4G;, which according to the nginx docs defaults to 1 MB.
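A sketch of both suggestions applied to the location block from the question (the socket path is a placeholder; 4G is the value from the gunicorn deploy docs):

location / {
    client_max_body_size 4G;                     # nginx default is 1M
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_read_timeout 120s;
    proxy_connect_timeout 120s;
    proxy_pass http://unix:/run/gunicorn.sock;   # placeholder path; bind gunicorn to the same socket
}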

Amazon EC2 Deployment Not Working When IP Address Typed Into Browser, Suspect Nginx Problems

I am nearing the last step of deploying my Django app and I think I am having an Nginx problem. This is my first time deploying, so go easy on me.
Basically, the problem is that when I navigate to my public IP in my browser I get a "webpage is not available" error.
I am thinking it is an issue with how I am writing out my directory structure in my Nginx configuration file, but am unsure. I am following a tutorial and don't really understand the config they are asking me to use.
Here is my app's directory structure within my server...
/home/ubuntu/flower_shop/flowershop
Here is my Nginx's file that configures Nginx
server {
    listen 80;
    server_name 54.213.141.60;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /home/ubuntu/flower_shop/flowershop;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/ubuntu/flower_shop/flowershop/flowershop.sock;
    }
}
I am creating the above file by typing the following into my command line...
sudo vim /etc/nginx/sites-available/flower_shop
Can you see anything obvious that I am doing wrong? Gunicorn is set up fine and my app works on localhost. I have tried restarting Nginx, but I get the same results.
Hope you have done the following step:
sudo ln -s /etc/nginx/sites-available/flower_shop /etc/nginx/sites-enabled/flower_shop
Some other diagnostic commands which will help pin down the problem:
Supply nginx error and access logs
output of netstat -tulpn | grep nginx
In an ssh session, run curl -D - http://localhost:80
Try replacing the above snippet with the following extremely simple server config. Notice that the only filtering it has for now is for port 80. It assumes your gunicorn is serving at 8080; change the port appropriately, if required.
```
server {
    listen 80;

    location / {
        proxy_pass http://localhost:8080;
    }
}
```

How to allow NGINX to buffer for multiple Django App Servers

How can one allow NGINX to buffer client requests for multiple Django App Servers that all run a WSGI server like Gunicorn? What do I need to change in the config files?
Use nginx's upstream option to define a pool of application servers; when you proxy_pass, you can proxy_pass to the named upstream:
upstream my-upstream {
    server 127.0.0.1:9000;
    server 127.0.0.1:9001;
}

location / {
    proxy_pass http://my-upstream;
}
Unless you specify otherwise, requests will be round-robined between the different upstream servers. You can choose a different balancing method; for example, least_conn sends each new request to the server with the fewest active connections:
upstream my-upstream {
    least_conn;
    server 127.0.0.1:9000;
    server 127.0.0.1:9001;
}

location / {
    proxy_pass http://my-upstream;
}
Let's assume you are using 4 servers: when server 1 is down, nginx will intelligently shift your next request to the next available server, and once server 1 is back up your requests will be sent to it again. By default, nginx uses a round-robin algorithm to distribute requests. With ip_hash, requests from the same client IP always go to the same server:
upstream my-upstream {
    ip_hash;
    server 127.0.0.1:9000;
    server 127.0.0.1:9001;
}

location / {
    proxy_pass http://my-upstream;
}
In this case the behaviour differs from the previous scenario: a given client will always get its response from the same server (say server 1), and only when server 1 is down do its requests go to server 2; once server 1 is back up, that client is routed to server 1 again.
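The failover behaviour described above can also be tuned per server with standard upstream parameters; a sketch (the values are illustrative, not from the answers):

upstream my-upstream {
    server 127.0.0.1:9000 weight=2 max_fails=3 fail_timeout=30s;  # receives roughly twice the traffic
    server 127.0.0.1:9001;                                        # defaults: max_fails=1, fail_timeout=10s
    server 127.0.0.1:9002 backup;                                 # only used when the others are unavailable
}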