504 Gateway Timeout (Gunicorn-Django-Nginx) Docker Compose Problem

There is one backend process which takes around 1-2 minutes to complete. The loading screen runs for 1 minute and then shows a 504 Gateway Timeout.
Here's the log entry in nginx.access.log:
172.18.0.2 - - [16/Dec/2022:23:54:02 +0000] "POST /some_request HTTP/1.1" 499 0 "http://localhost/some_url" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36"
But from the Django debug log, I can see that the POST request is still being processed at the backend.
docker-compose.yml
The Gunicorn timeout is set to 300; without it, the browser returns a 502 error (connection prematurely closed):
services:
web:
image: genetic_ark_web:2.0.0
build: .
command: bash -c "python manage.py collectstatic --noinput && gunicorn ga_core.wsgi:application --bind :8000 --timeout 300 --workers 2 --threads 4"
These are all the parameters I have tried in nginx.conf, but the 504 timeout is still returned after 60s:
server {
# port to serve the site
listen 80;
# optional server_name
server_name untitled_name;
...
# request timed out -- default 60
client_body_timeout 300;
client_header_timeout 300;
# if client stop responding, free up memory -- default 60
send_timeout 300;
# server will close connection after this time -- default 75
keepalive_timeout 300;
location /url_pathway/ {
proxy_pass http://ga;
# header
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_redirect off;
proxy_connect_timeout 300s;
proxy_send_timeout 300s;
proxy_read_timeout 300s;
send_timeout 300s;
keepalive_timeout 300s;
fastcgi_connect_timeout 300s;
fastcgi_send_timeout 180s;
fastcgi_read_timeout 180s;
uwsgi_read_timeout 300s;
uwsgi_connect_timeout 300s;
}
}
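For completeness: proxy_pass http://ga above points at an upstream block that is not shown in the snippet; assuming the web service from docker-compose.yml, it would look something like this sketch:
upstream ga {
# "web" is the docker-compose service name; gunicorn binds on :8000
server web:8000;
}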
Any idea which parameter might be controlling the timeout?

Apparently this was not caused by the nginx container or the web container.
It was fixed by changing the timeout in nginx-proxy, which was timing out (producing the 499 error in the nginx access log) while the web container was still working on the backend process. The request chain is browser -> nginx-proxy -> nginx container -> gunicorn, and each hop has its own timeout.
I added the line below to nginx.conf for the nginx container:
proxy_read_timeout 300s;
For nginx-proxy, proxy_read_timeout 300s; was added to its config as well, as sketched below.
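For reference, here is a minimal sketch of one way to wire that in, assuming the widely used nginx-proxy image (which includes any extra configuration files mounted into /etc/nginx/conf.d/); the file name and service layout here are illustrative, not taken from the original setup:
# timeout.conf -- mounted into the nginx-proxy container
proxy_read_timeout 300s;
# docker-compose.yml (nginx-proxy service)
nginx-proxy:
  image: nginxproxy/nginx-proxy
  ports:
    - "80:80"
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
    - ./timeout.conf:/etc/nginx/conf.d/timeout.conf:ro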

Related

Django Admin keeps returning 504 timeout (nginx + uWSGI)

This is my nginx configuration:
server {
listen 80;
listen [::]:80;
root /var/www/html;
index index.html index.htm index.nginx-debian.html;
server_name www.nameOfSite.id nameOfSite.id;
access_log off;
error_log /var/www/log_nginx/error.log;
gzip on;
gzip_disable "msie6";
client_header_timeout 180s;
client_body_timeout 180s;
client_max_body_size 100m;
proxy_connect_timeout 120s;
proxy_send_timeout 180s;
proxy_read_timeout 180s;
send_timeout 600s;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
location /static {
alias /var/www/django/static;
}
location /media {
alias /var/www/django/media;
}
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
include uwsgi_params;
uwsgi_read_timeout 500;
uwsgi_send_timeout 500;
uwsgi_pass unix:/var/www/uwsgi_texas.sock;
}
}
This is my uWSGI ini file in /var/www/texas_uwsgi.ini:
[uwsgi]
socket = /var/www/uwsgi_texas.sock
chdir = /var/www/django/
wsgi-file = /var/www/django/django/wsgi.py
processes = 8
threads = 1
master = true
harakiri = 900
chmod-socket = 777
vacuum = true
This is my service file in /etc/systemd/system/texas.service:
[Unit]
Description=TEXAS
After=syslog.target
[Service]
ExecStart=/usr/local/bin/uwsgi --ini /var/www/texas_uwsgi.ini
Restart=always
KillSignal=SIGQUIT
Type=notify
StandardError=syslog
NotifyAccess=main
[Install]
WantedBy=multi-user.target
The problem is that when I open the Django admin for one model object that has a lot of inline objects and fields, it keeps returning a 504 timeout because the page takes more than 60 seconds to process. I checked my NGINX and uWSGI configurations and cannot find how to increase this "60 seconds timeout". The rest of the pages work fine.
In my nginx configuration, I have already tried:
proxy_connect_timeout 120s;
proxy_send_timeout 180s;
proxy_read_timeout 180s;
send_timeout 600s;
uwsgi_read_timeout 500;
uwsgi_send_timeout 500;
This is the result when I try to open that model admin page: [screenshot of the 504 Gateway Time-out page]
Maybe use raw_id_fields (for the inline models) in the admin wherever necessary.
Refer: Django Admin raw_id_fields
By doing so you can bypass the 504 error; see the sketch below.
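For illustration, a minimal sketch of what that looks like; the model and field names here are hypothetical:
from django.contrib import admin
from .models import Order, OrderItem  # hypothetical models

class OrderItemInline(admin.TabularInline):
    model = OrderItem
    # render a plain ID input instead of a select box that loads every related object
    raw_id_fields = ("product",)

class OrderAdmin(admin.ModelAdmin):
    inlines = [OrderItemInline]

admin.site.register(Order, OrderAdmin)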

Getting 504 with nginx as reverse proxy and uwsgi as app server when $request_time in nginx logs reaches 60s

$request_time in the nginx logs is the time a request takes from when nginx first receives it until nginx sends the response back to the client.
With my nginx config, the request is closed by nginx and marked 504 Gateway Timeout when $request_time reaches 60s.
I have tried using the following location context directives:
proxy_connect_timeout 300s;
proxy_send_timeout 300s;
proxy_read_timeout 300s;
send_timeout 300s;
uwsgi_read_timeout 300s;
uwsgi_send_timeout 300s;
But I am still facing the same issue. Is there some config that I am missing?
Here is the location context of my nginx.conf:
location / {
set $uwsgi_script "wsgi";
set $script_name "/";
set $socket "uwsgi.sock";
#client_max_body_size 1000m;
keepalive_timeout 0;
# host and port to uwsgi server
uwsgi_pass unix:///tmp/$socket;
uwsgi_param UWSGI_SCRIPT $uwsgi_script;
uwsgi_param SCRIPT_NAME $script_name;
proxy_connect_timeout 300s;
proxy_send_timeout 300s;
proxy_read_timeout 300s;
send_timeout 300s;
uwsgi_pass_header Authorization;
include /etc/nginx/uwsgi_params;
#fix redirect issue to proxy port
uwsgi_param SERVER_PORT 80;
#set correct scheme
uwsgi_param UWSGI_SCHEME $http_x_forwarded_proto;
uwsgi_intercept_errors off;
uwsgi_read_timeout 300s;
uwsgi_send_timeout 300s;
}
This is not because nginx is timing out the request. It is being timed out at the uWSGI level itself.
harakiri
A feature of uWSGI that aborts workers that are serving requests for an excessively long time. Configured using the harakiri family of options. Every request that will take longer than the seconds specified in the harakiri timeout will be dropped and the corresponding worker recycled.
So you need to set the harakiri parameter in your uwsgi config; a sketch follows. See also the link below for more details:
uWSGI request timeout in Python
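For illustration, the relevant addition to the ini file would be something like this (the value is an example; keep harakiri at least as large as nginx's uwsgi_read_timeout so that uWSGI is not the first to give up):
[uwsgi]
# ... existing options ...
# abort and recycle any worker whose request runs longer than 300 seconds
harakiri = 300
# log details about requests killed by harakiri
harakiri-verbose = true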

Django, Gunicorn Setup

I am trying to set up a Django project on my server and cannot get it to run. I am using virtualenv, gunicorn, and nginx for static files. I am not sure where I am going wrong. My current setup is as follows:
myenv
- project (my django project)
- bin (and all it contains)
- lib (and all it contains)
- include (and all it contains)
- gunicorn_config.py
gunicorn_config.py:
command = '/home/me/django/myenv/bin/gunicorn'
pythonpath = '/home/me/django/myenv/project'
bind = '127.0.0.1:80'
workers = 2
nginx project.conf:
upstream project_server {
server unix:/tmp/gunicorn_project.sock fail_timeout=0;
}
server {
listen 80;
client_max_body_size 4G;
# set the correct host(s) for your site
server_name project.com www.project.com;
keepalive_timeout 5;
# path for static files
root /home/me/django/myenv/assets;
location / {
# checks for static file, if not found proxy to app
try_files $uri @proxy_to_app;
}
location @proxy_to_app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://project_server;
}
error_page 500 502 503 504 /500.html;
location = /500.html {
root /home/me/django/myenv/project/project/templates;
}
}
And I run the following to try start it all up:
/home/me/django/myenv/bin/gunicorn -c /home/me/django/myenv/gunicorn_config.py project/project/wsgi.py
But it just says "Can't connect to ('127.0.0.1', 80)"
You've configured gunicorn to bind on a TCP port, but nginx is connecting to a unix socket. You should use the same thing in both places; preferably the socket, so it doesn't conflict with the port nginx is actually listening on.
In gunicorn_config.py:
bind = 'unix:/tmp/gunicorn_project.sock'
Basically, I would guess nginx spins up before gunicorn. It takes port 80 (from your listen). gunicorn comes next, also wants port 80 (from your bind), and finds it occupied so it errors out. Run gunicorn on a different port and use proxy_pass to tell nginx about it.
Gunicorn
bind = '127.0.0.1:8000'
Nginx
proxy_pass http://127.0.0.1:8000/;
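If you go the TCP route, the upstream block from the question would change to match, e.g.:
upstream project_server {
server 127.0.0.1:8000 fail_timeout=0;
}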

EC2 Django App Only Accessible without www. prefix

if I visit:
myapp.com (it works)
if I visit:
www.myapp.com (throws http 500 error)
or
the fully-qualified version: http://www.myapp.com (throws http 500 error)
That HTTP 500 error, as recorded in the access.log from my nginx configuration, is:
xx.xx.xxx.xxx - - [26/Oct/2013:18:33:10 +0000] "GET / HTTP/1.1" 500 460 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0"
(Note: error.log has nothing new in it.) Here is my nginx configuration:
server {
#listen 8001;
listen 80;
#listen 127.0.0.1;
server_name myapp.com www.myapp.com; #*.myapp.com;
#server_name ec2-xx-xxx-xxx-xx.compute-1.amazonaws.com;
access_log /home/ubuntu/virtualenv/myapp/error/access.log;
error_log /home/ubuntu/virtualenv/myapp/error/error.log warn;
connection_pool_size 2048;
fastcgi_buffer_size 4K;
fastcgi_buffers 64 4k;
root /home/ubuntu/virtualenv/myapp/homelaunch/;
location /static/ {
alias /home/ubuntu/virtualenv/myapp/homelaunch/static/;
#alias /static/;
#root /home/ubuntu/virtualenv/myapp/homelaunch/;
}
location / {
proxy_pass http://127.0.0.1:8001;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
#proxy_set_header X-Forwarded-Host $server_name;
#proxy_set_header X-Real-IP $remote_addr;
add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
}
}
my EC2 security policy looks like: [screenshot of the EC2 security group rules]
What am I doing wrong here?
Thank you!
This didn't appear to be a problem with the server configuration so much as an issue with settings.py: the settings from settings.py weren't being applied because I didn't:
1) Stop Gunicorn and Stop Nginx first
2) Then start them up again using proper commands like:
sudo /usr/local/bin/gunicorn -c /home/ubuntu/virtualenv/gunicorn_config.py myapp.wsgi; sudo nginx -c /etc/nginx/nginx.conf;
Special thanks to user Steve above for pointing this out.
If anyone runs into a problem like this in the future, where your app is accessible via <domainhere>.com but not www.<domainhere>.com, check the following:
1) Make sure your ALLOWED_HOSTS setting includes entries like:
ALLOWED_HOSTS = ['www.myapp.com','myapp.com','<server-ip-here>','ec2-xx-xxx-xxx-xxx.compute-1.amazonaws.com']
2) That you are properly starting both gunicorn and nginx using commands like:
sudo /usr/local/bin/gunicorn -c /home/ubuntu/virtualenv/gunicorn_config.py myapp.wsgi;
sudo nginx -c /etc/nginx/nginx.conf;
3) Use a command like this to check that both are properly running:
ps ax|grep nginx; ps ax|grep gunicorn;
Good luck!

Getting error 502 when processing an excel file with many rows

Using Django / Nginx.
The problem is not the size of the file, which is less than 1 MB.
The page works correctly with files of around 200 rows; the problem starts when the file has more rows and the page takes too long to process it.
This is the error:
2012/07/28 14:29:54 [error] 18515#0: *34 upstream prematurely closed connection while reading response header from upstream, client: 127.0.0.1, server: localhost, request: "POST /import/ HTTP/1.1", upstream: "http://127.0.0.1:9000/import/", host: "localhost:8080", referrer: "http://localhost:8080/import/"
I am using very large values for the timeout variables, but I keep getting the same error.
This is the configuration of the site:
upstream app_server {
server 127.0.0.1:9000 fail_timeout=3600s;
keepalive 3600s;
}
server {
listen 8080;
client_max_body_size 4G;
server_name localhost;
keepalive_timeout 3600s;
client_header_timeout 3600s;
client_body_timeout 3600s;
send_timeout 3600s;
location /static/ {
root /my path/;
autoindex on;
expires 7d;
}
location /media/ {
root /my path/;
autoindex on;
expires 7d;
}
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_connect_timeout 3600s;
proxy_send_timeout 3600s;
proxy_read_timeout 3600s;
if (!-f $request_filename) {
proxy_pass http://app_server;
break;
}
}
}
And this is the global configuration:
user www-data;
worker_processes 1;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
access_log /var/log/nginx/access.log;
sendfile on;
keepalive_timeout 3600s;
tcp_nodelay on;
client_header_timeout 3600s;
client_body_timeout 3600s;
send_timeout 3600s;
proxy_connect_timeout 3600s;
proxy_send_timeout 3600s;
proxy_read_timeout 3600s;
client_max_body_size 200m;
client_body_buffer_size 128k;
gzip on;
gzip_disable "MSIE [1-6]\.(?!.*SV1)";
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
Can you give me some help?
Best regards
The best option is to rewrite the routine to use django-celery (sketched further below), but if you want a quick solution you could try increasing the timeout for your proxy_pass in Nginx by adding:
proxy_connect_timeout 300s;
proxy_read_timeout 300s;
You should add this config in /etc/nginx/sites-available/[site-config] for a specific site, or in /etc/nginx/nginx.conf if you want to increase the timeout on all sites served by nginx.
If you are using gunicorn, you must add --timeout 300 as well. Example:
gunicorn_django -D -b 127.0.0.1:8901 --workers=2 --pid=/var/webapp/campus.pid --settings=settings.production --timeout 300 --pythonpath=/var/webapp/campus/
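As for the django-celery suggestion, the idea is to move the slow import out of the request/response cycle entirely, so nginx never has to wait. A minimal sketch, with hypothetical task and file names:
# tasks.py
from celery import shared_task

@shared_task
def import_excel(file_path):
    # parse the spreadsheet and save the rows here;
    # this runs in a Celery worker, not inside the web request
    ...

# views.py would save the upload, enqueue the task, and return immediately:
# import_excel.delay(saved_file_path)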
References:
http://wiki.nginx.org/HttpProxyModule#proxy_connect_timeout
http://reinout.vanrees.org/weblog/2011/11/24/apache-nginx-gunicorn-timeout.html
Gunicorn Nginx timeout problem