I have a long-running task in one of my Django views that takes 120-200 seconds to generate its response.
For this particular view, Nginx returns 502 Bad Gateway after about 1 minute, with this error message in the logs:
[error] 7719#7719: *33 upstream prematurely closed connection while reading response header from upstream,
Here is my Nginx configuration:
upstream DjangoServer {
    server 127.0.0.1:8000;
    keepalive 300;
}

location / {
    include proxy_params;
    proxy_pass http://DjangoServer;
    allow all;
    proxy_http_version 1.1;
    proxy_set_header X-Cluster-Client-Ip $remote_addr;
    client_max_body_size 20M;
    keepalive_timeout 300;
    proxy_connect_timeout 300;
    proxy_send_timeout 300;
    proxy_read_timeout 300;
    send_timeout 300;
}
And here is my uWSGI configuration:
uid=www-data
gid=www-data
http=127.0.0.1:8000
http-keepalive=300
master=1
vacuum=1
workers=2
threads=5
log-5xx=1
Note:
Nginx and uWSGI work fine for all other views.
The Django development server runs the task with no problem.
After the Nginx 502 error, uWSGI keeps running in the background and completes the job (according to print statements in the view).
If I try to connect to uWSGI directly via the browser, after a while (less than 120 s) it shows ERR_EMPTY_RESPONSE.
You can assume the task looks like this:
import time

from django.http import HttpResponse


def long_task_view(request):
    start_time = time.time()
    print(start_time)
    # doing stuff
    time.sleep(130)
    print(time.time() - start_time)
    return HttpResponse("The result")
You can try increasing the timeout in Nginx:
vim /etc/nginx/nginx.conf
Add this in the http block:
http {
    ...
    fastcgi_read_timeout 300;
    ...
}
But the best practice is to hand off work that takes this long to an asynchronous process; I usually use Celery for async tasks.
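For illustration, here is a minimal sketch of that approach with Celery (the tasks module and the JSON "task_id" response are assumptions, not part of the original code): the view only enqueues the job and returns immediately, so no request ever sits open long enough to hit the proxy timeouts.

# tasks.py -- hypothetical Celery task wrapping the slow work
import time

from celery import shared_task


@shared_task
def long_task():
    # stands in for the real 120-200 second job
    time.sleep(130)
    return "The result"


# views.py -- the view now only enqueues the task and returns at once
from django.http import JsonResponse

from .tasks import long_task


def long_task_view(request):
    async_result = long_task.delay()
    # the client can poll another endpoint (or the Celery result backend)
    # with this id to fetch "The result" once the task finishes
    return JsonResponse({"task_id": async_result.id})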
Related
There is one backend process which takes around 1-2 minutes to complete. The loading screen runs for 1 minute and then shows 504 Gateway Timeout.
Here's the log entry in nginx.access.log:
172.18.0.2 - - [16/Dec/2022:23:54:02 +0000] "POST /some_request HTTP/1.1" 499 0 "http://localhost/some_url" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36"
But from the Django debug log, I can see that the POST request is still being processed in the backend.
docker-compose.yml
The gunicorn timeout is set to 300; without it, the browser returns a 502 error (connection prematurely closed).
services:
  web:
    image: genetic_ark_web:2.0.0
    build: .
    command: bash -c "python manage.py collectstatic --noinput && gunicorn ga_core.wsgi:application --bind :8000 --timeout 300 --workers 2 --threads 4"
These are all the parameters I have tried in nginx.conf, but the 504 timeout is still returned after 60s:
server {
    # port to serve the site
    listen 80;
    # optional server_name
    server_name untitled_name;
    ...
    # request timed out -- default 60
    client_body_timeout 300;
    client_header_timeout 300;
    # if client stops responding, free up memory -- default 60
    send_timeout 300;
    # server will close connection after this time -- default 75
    keepalive_timeout 300;

    location /url_pathway/ {
        proxy_pass http://ga;
        # header
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
        proxy_connect_timeout 300s;
        proxy_send_timeout 300s;
        proxy_read_timeout 300s;
        send_timeout 300s;
        keepalive_timeout 300s;
        fastcgi_connect_timeout 300s;
        fastcgi_send_timeout 180s;
        fastcgi_read_timeout 180s;
        uwsgi_read_timeout 300s;
        uwsgi_connect_timeout 300s;
Any idea which parameter might be controlling the timeout?
It turned out not to be caused by the nginx container or the web container.
Fixed by changing the timeout in nginx-proxy, which was timing out and causing the 499 in the nginx access log while the web container carried on with its backend processing.
Added the line below to nginx.conf for the nginx container:
proxy_read_timeout 300s;
For nginx-proxy, proxy_read_timeout 300s; is added to its config as well.
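For reference, one common way to do that when the proxy is a stock nginx-based container (the file name and mount path below are assumptions, not taken from the original setup) is to mount a small extra config file into the container's conf.d directory, which the default nginx.conf includes inside the http context:

# timeouts.conf -- example extra file, assumed to be mounted at
# /etc/nginx/conf.d/timeouts.conf inside the proxy container
proxy_connect_timeout 300s;
proxy_send_timeout 300s;
proxy_read_timeout 300s;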
I'm currently running a Django (2.0.2) server with uWSGI and 10 workers.
I'm trying to implement real-time chat and took a look at Channels.
The documentation mentions that the server needs to be run with Daphne, and Daphne requires ASGI, the asynchronous counterpart of WSGI.
I managed to install and set up ASGI and run the server with Daphne, but with only one worker (a limitation of ASGI as I understood it), and the load is too high for that worker.
Is it possible to run the server with uWSGI and 10 workers to serve HTTP/HTTPS requests, and use ASGI/Daphne for WS/WSS (WebSocket) requests?
Or maybe it's possible to run multiple instances of ASGI?
It is possible to run WSGI alongside ASGI. Here is an example of an Nginx configuration:
server {
    listen 80;
    server_name {{ server_name }};
    charset utf-8;

    location /static {
        alias {{ static_root }};
    }

    # this is the endpoint of the channels routing
    location /ws/ {
        proxy_pass http://localhost:8089;  # daphne (ASGI) listening on port 8089
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location / {
        proxy_pass http://localhost:8088;  # gunicorn (WSGI) listening on port 8088
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_connect_timeout 75s;
        proxy_read_timeout 300s;
        client_max_body_size 50m;
    }
}
To use the /ws/ endpoint correctly, you will need to use a URL like this:
ws://localhost/ws/your_path
Then nginx will be able to upgrade the connection.
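For completeness, here is a minimal sketch of the Channels 2-style routing that this Nginx split assumes on the Django side (the module path and ChatConsumer are illustrative, not from the original answer); Daphne serves this application on port 8089 while the WSGI server keeps handling plain HTTP:

# routing.py -- hypothetical ASGI routing served by Daphne
from channels.auth import AuthMiddlewareStack
from channels.routing import ProtocolTypeRouter, URLRouter
from django.urls import path

from chat.consumers import ChatConsumer  # illustrative consumer


application = ProtocolTypeRouter({
    # WebSocket connections proxied to /ws/ end up here; plain HTTP
    # never reaches Daphne because nginx sends it to the WSGI upstream.
    "websocket": AuthMiddlewareStack(
        URLRouter([
            path("ws/your_path", ChatConsumer),
        ])
    ),
})

Daphne would then be started with something like daphne -b 127.0.0.1 -p 8089 yourproject.routing:application (the module path is assumed).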
$request_time in the nginx logs is the time taken by a request, measured from when it is first received by nginx until the response is sent back by nginx to the client.
With my nginx config, the request is closed by nginx and marked 504 Gateway Timeout when $request_time reaches 60s.
I have tried using the following location-context directives:
proxy_connect_timeout 300s;
proxy_send_timeout 300s;
proxy_read_timeout 300s;
send_timeout 300s;
uwsgi_read_timeout 300s;
uwsgi_send_timeout 300s;
But I am still facing the same issue. Is there some config that I am missing?
Here is the location block from my nginx.conf:
location / {
    set $uwsgi_script "wsgi";
    set $script_name "/";
    set $socket "uwsgi.sock";

    #client_max_body_size 1000m;
    keepalive_timeout 0;

    # host and port to uwsgi server
    uwsgi_pass unix:///tmp/$socket;
    uwsgi_param UWSGI_SCRIPT $uwsgi_script;
    uwsgi_param SCRIPT_NAME $script_name;

    proxy_connect_timeout 300s;
    proxy_send_timeout 300s;
    proxy_read_timeout 300s;
    send_timeout 300s;

    uwsgi_pass_header Authorization;
    include /etc/nginx/uwsgi_params;

    #fix redirect issue to proxy port
    uwsgi_param SERVER_PORT 80;
    #set correct scheme
    uwsgi_param UWSGI_SCHEME $http_x_forwarded_proto;
    uwsgi_intercept_errors off;

    uwsgi_read_timeout 300s;
    uwsgi_send_timeout 300s;
}
This is not nginx timing out the request; the request is being timed out at the uWSGI level itself.
harakiri
A feature of uWSGI that aborts workers that are serving requests for an excessively long time. Configured using the harakiri family of options. Every request that will take longer than the seconds specified in the harakiri timeout will be dropped and the corresponding worker recycled.
So you need to set the harakiri parameter in your uWSGI config. See the question linked below for more details:
uWSGI request timeout in Python
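As an illustration only (the original uWSGI config for this setup isn't shown, and the 300 second value is an assumption), the option would sit alongside the other settings in the ini file, in the same style as the uWSGI config shown earlier on this page:

# hypothetical addition: give slow requests up to 300 s before the worker is killed and recycled
harakiri=300
# log which requests tripped harakiri
harakiri-verbose=true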
I am trying to set up a Django project on my server and cannot get it to run. I am using virtualenv, gunicorn, and nginx for static files. I am not sure where I am going wrong. My current setup is as follows:
myenv
- project (my Django project)
- bin (and all it contains)
- lib (and all it contains)
- include (and all it contains)
- gunicorn_config.py
gunicorn_config.py:
command = '/home/me/django/myenv/bin/gunicorn'
pythonpath = '/home/me/django/myenv/project'
bind = '127.0.0.1:80'
workers = 2
nginx project.conf:
upstream project_server {
    server unix:/tmp/gunicorn_project.sock fail_timeout=0;
}

server {
    listen 80;
    client_max_body_size 4G;

    # set the correct host(s) for your site
    server_name project.com www.project.com;

    keepalive_timeout 5;

    # path for static files
    root /home/me/django/myenv/assets;

    location / {
        # checks for static file, if not found proxy to app
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://project_server;
    }

    error_page 500 502 503 504 /500.html;
    location = /500.html {
        root /home/me/django/myenv/project/project/templates;
    }
}
And I run the following to try to start it all up:
/home/me/django/myenv/bin/gunicorn -c /home/me/django/myenv/gunicorn_config.py project/project/wsgi.py
But it just says "Can't connect to ('127.0.0.1', 80)"
You've configured gunicorn to bind on a TCP port, but nginx is expecting it on a unix socket. You should use the same thing on both sides; preferably the socket, so it doesn't conflict with the port nginx is actually listening on.
In gunicorn_config.py:
bind = 'unix:/tmp/gunicorn_project.sock'
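For illustration, the whole config might then look like this (the paths come from the question; the dotted WSGI module name below is an assumption that the inner package is also called project):

# gunicorn_config.py -- bind to the unix socket that nginx's upstream block points at
command = '/home/me/django/myenv/bin/gunicorn'
pythonpath = '/home/me/django/myenv/project'
bind = 'unix:/tmp/gunicorn_project.sock'
workers = 2

Gunicorn should also be started with a dotted module path rather than a file path, e.g. /home/me/django/myenv/bin/gunicorn -c /home/me/django/myenv/gunicorn_config.py project.wsgi:application.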
Basically, I would guess nginx spins up before gunicorn. It takes port 80 (from your listen). gunicorn comes next, also wants port 80 (from your bind), and finds it occupied so it errors out. Run gunicorn on a different port and use proxy_pass to tell nginx about it.
Gunicorn
bind = '127.0.0.1:8000'
Nginx
proxy_pass http://127.0.0.1:8000/;
I had my Django app on Heroku for a while with no problems. I now want to move it to a DigitalOcean droplet, partly as a learning exercise, partly for scalability (and cost) reasons.
After following this excellent tutorial almost to the letter, the app is working but with a huge gotcha: I now get an infinite redirect loop when I try to log in to the admin site. The first request is a POST to ?next=/admin/ with the username and password; this gets a 302 redirect to GET /admin/, which gets a 302 redirect back to ?next=/admin/, and so on.
I have spent 2 or 3 hours with Google and various nginx tutorials, and this is the first time my "google the error message, copy and paste random code snippets, repeat" algorithm has ever failed me. I'm hoping the reason is that the error is trivial to solve and I just can't see it.
If it's not trivial to solve, let me know and I'll post more info.
Thanks in advance
Edit 1: my nginx config file for the app is basically a verbatim copy of the tutorial's. It looks like this:
upstream hello_app_server {
    # fail_timeout=0 means we always retry an upstream even if it failed
    # to return a good HTTP response (in case the Unicorn master nukes a
    # single worker for timing out).
    server unix:/webapps/hello_django/run/gunicorn.sock fail_timeout=0;
}

server {
    listen 80;
    server_name example.com;
    client_max_body_size 4G;

    access_log /webapps/hello_django/logs/nginx-access.log;
    error_log /webapps/hello_django/logs/nginx-error.log;

    location /static/ {
        alias /webapps/hello_django/static/;
    }

    location /media/ {
        alias /webapps/hello_django/media/;
    }

    location / {
        # an HTTP header important enough to have its own Wikipedia entry:
        # http://en.wikipedia.org/wiki/X-Forwarded-For
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # enable this if and only if you use HTTPS, this helps Rack
        # set the proper protocol for doing redirects:
        # proxy_set_header X-Forwarded-Proto https;

        # pass the Host: header from the client right along so redirects
        # can be set properly within the Rack application
        proxy_set_header Host $http_host;

        # we don't want nginx trying to do something clever with
        # redirects, we set the Host: header above already.
        proxy_redirect off;

        # set "proxy_buffering off" *only* for Rainbows! when doing
        # Comet/long-poll stuff. It's also safe to set if you're
        # using only serving fast clients with Unicorn + nginx.
        # Otherwise you _want_ nginx to buffer responses to slow
        # clients, really.
        # proxy_buffering off;

        # Try to serve static files from nginx, no point in making an
        # *application* server like Unicorn/Rainbows! serve static files.
        if (!-f $request_filename) {
            proxy_pass http://hello_app_server;
            break;
        }
    }

    # Error pages
    error_page 500 502 503 504 /500.html;
    location = /500.html {
        root /webapps/hello_django/static/;
    }
}