Getting error 502 when processing an Excel file with many rows - Django

Getting error 502 when processing an Excel file with many rows.
Using Django / Nginx.
The problem is not the size of the file; it is less than 1 MB.
The page works correctly with files of around 200 rows; the problem starts when the file has more rows, and then the page takes too long to process it.
This is the error:
2012/07/28 14:29:54 [error] 18515#0: *34 upstream prematurely closed connection while reading response header from upstream, client: 127.0.0.1, server: localhost, request: "POST /import/ HTTP/1.1", upstream: "http://127.0.0.1:9000/import/", host: "localhost:8080", referrer: "http://localhost:8080/import/"
I am already using very large values for the timeout variables, but I keep getting the same error.
This is the configuration of the site:
upstream app_server {
server 127.0.0.1:9000 fail_timeout=3600s;
keepalive 3600s;
}
server {
listen 8080;
client_max_body_size 4G;
server_name localhost;
keepalive_timeout 3600s;
client_header_timeout 3600s;
client_body_timeout 3600s;
send_timeout 3600s;
location /static/ {
root /my path/;
autoindex on;
expires 7d;
}
location /media/ {
root /my path/;
autoindex on;
expires 7d;
}
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_connect_timeout 3600s;
proxy_send_timeout 3600s;
proxy_read_timeout 3600s;
if (!-f $request_filename) {
proxy_pass http://app_server;
break;
}
}
}
And this is the global configuration:
user www-data;
worker_processes 1;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
access_log /var/log/nginx/access.log;
sendfile on;
keepalive_timeout 3600s;
tcp_nodelay on;
client_header_timeout 3600s;
client_body_timeout 3600s;
send_timeout 3600s;
proxy_connect_timeout 3600s;
proxy_send_timeout 3600s;
proxy_read_timeout 3600s;
client_max_body_size 200m;
client_body_buffer_size 128k;
gzip on;
gzip_disable "MSIE [1-6]\.(?!.*SV1)";
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
Can you give me some help?
Best regards

The best option is to rewrite the routine to use django-celery, but if you want a quick fix you can try raising the timeout for your proxy_pass in Nginx by adding:
proxy_connect_timeout 300s;
proxy_read_timeout 300s;
You should add this config in /etc/nginx/sites-available/[site-config] for a specific site, or in /etc/nginx/nginx.conf if you want to increase the timeout on all sites served by nginx.
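For example, in the site config the two directives would sit inside the location block that proxies to the Django app; a minimal sketch (the app_server upstream name is taken from the config above, adjust to your own setup):
location / {
    proxy_pass http://app_server;
    proxy_connect_timeout 300s;
    proxy_read_timeout 300s;
}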
If you are using gunicorn, you must add --timeout=300 as well. Example:
gunicorn_django -D -b 127.0.0.1:8901 --workers=2 --pid=/var/webapp/campus.pid --settings=settings.production --timeout 300 --pythonpath=/var/webapp/campus/
References:
http://wiki.nginx.org/HttpProxyModule#proxy_connect_timeout
http://reinout.vanrees.org/weblog/2011/11/24/apache-nginx-gunicorn-timeout.html
Gunicorn Nginx timeout problem
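For the longer-term fix this answer recommends (moving the import out of the request/response cycle), here is a rough sketch of what the Celery approach could look like; the task, the upload helper and the file-reading library (openpyxl) are illustrative assumptions, not the poster's actual code:
# tasks.py -- hypothetical sketch of an offloaded import
from celery import shared_task
import openpyxl

@shared_task
def import_spreadsheet(path):
    # runs in a worker process, so the HTTP request can return immediately
    wb = openpyxl.load_workbook(path, read_only=True)
    for row in wb.active.iter_rows(values_only=True):
        ...  # create/update model instances row by row

# views.py -- enqueue the task instead of processing inline
from django.shortcuts import redirect

def import_view(request):
    path = handle_uploaded_file(request.FILES["file"])  # your own upload helper
    import_spreadsheet.delay(path)
    return redirect("import-status")  # e.g. a page that polls for progress
With this pattern the view responds within milliseconds, and no nginx or gunicorn timeout needs to be raised at all.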

Related

(nginx + gunicorn) small server instance drops/timeouts connections on +60 simple API requests / second. Can it be improved?

I'm setting up the first production architecture for my Django-based app. I'm using nginx + gunicorn + remote postgres database setup.
After performing simple API load tests with https://loader.io I've found that when I increase the number of clients sending API requests beyond 60 clients/second in a 30-second test, the tool reports that the connections time out.
With a two-server setup behind a load balancer I can double the clients/second, but I would expect a single 3 vCPU / 1 GB RAM instance to handle more than 30 requests/second on its own - am I right?
I've tried a lot of different gunicorn / nginx config parameters but nothing seems to help.
This is the content of my /etc/nginx/nginx.conf file:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
worker_rlimit_nofile 100000;
events {
worker_connections 4000;
multi_accept on;
use epoll;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
server_names_hash_bucket_size 512;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
gzip_vary on;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
reset_timedout_connection on;
keepalive_requests 100000;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
This is the content of my /etc/nginx/sites-available/MY_DOMAIN file:
server {
listen 80;
listen [::]:80;
server_name MY_DOMAIN www.MY_DOMAIN;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
listen [::]:443 ssl;
ssl on;
client_max_body_size 5M;
server_name MY_DOMAIN www.MY_DOMAIN;
location = /favicon.ico { access_log off; log_not_found off; }
location /static/ {
root /var/www/backend;
}
location /loaderio-b061bddf86a67379411d4ef54f7ee430/ {
root /var/www/backend;
}
location / {
include proxy_params;
proxy_pass http://unix:/var/www/backend/MY_SOCKET.sock;
}
location /ws/ {
include proxy_params;
proxy_pass http://unix:/var/www/backend/ws.sock;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
proxy_connect_timeout 300;
proxy_send_timeout 300;
proxy_read_timeout 300;
send_timeout 300;
}
ssl_certificate /etc/letsencrypt/live/MY_DOMAIN/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/MY_DOMAIN/privkey.pem; # managed by Certbot
}
This is the content of my supervisor file:
[program:gunicorn]
directory=/var/www/backend
command=/root/.pyenv/versions/VENV_NAME/bin/gunicorn --workers 5 --keep-alive 15 --worker-class gevent --bind unix:/var/www/backend/SOCK_NAME.sock config.wsgi:application
autostart=true
autorestart=true
log_level=debug
stderr_logfile=/var/log/gunicorn/gunicorn.out.log
stdout_logfile=/var/log/gunicorn/gunicorn.err.log
user=root
group=www-data
environment=LANG=en_US.UTF-8,LC_ALL=en_US.UTF-8
[program:daphne]
directory=/var/www/backend
command=/root/.pyenv/versions/VENV_NAME/bin/daphne -u /var/www/backend/ws.sock config.asgi:application
autostart=true
autorestart=true
stderr_logfile=/var/log/daphne/daphne.out.log
stdout_logfile=/var/log/daphne/daphne.err.log
user=root
group=www-data
environment=LANG=en_US.UTF-8,LC_ALL=en_US.UTF-8
[group:GROUP_NAME]
programs=gunicorn,daphne
While the load test is running, the CPU cores are between 10-18% load and RAM usage is around 70%.
Is it possible to push this single server instance above 60 req/sec, or is it just a hardware limitation? (I've already tried a DigitalOcean 16 vCPU / 8 GB RAM droplet and the results were pretty much the same, no matter whether 5 or 15 workers were used.)
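For reference, the gunicorn docs suggest roughly (2 x CPU cores) + 1 workers, which for 3 vCPUs would be about 7; with --worker-class gevent each worker additionally has a --worker-connections pool (default 1000). A sketch of the kind of command line I have been experimenting with (exact flags varied between runs):
gunicorn config.wsgi:application \
    --workers 7 \
    --worker-class gevent \
    --worker-connections 1000 \
    --keep-alive 15 \
    --bind unix:/var/www/backend/SOCK_NAME.sock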

Serving multiple Django applications with Nginx and Gunicorn under same domain

Right now I have one Django project on one domain. I want to serve three Django projects under one domain, separated by path, for example: www.domain.com/firstone/, www.domain.com/secondone/, etc. How do I configure nginx to serve multiple Django projects under one domain, and how do I configure static file serving in this case?
My current nGinx config is:
server {
listen 80;
listen [::]:80;
server_name domain.com www.domain.com;
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl;
server_name domain.com www.domain.com;
ssl_certificate /etc/nginx/ssl/Certificate.crt;
ssl_certificate_key /etc/nginx/ssl/Certificate.key;
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 5m;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
root /home/admin/web/project;
location /static {
alias /home/admin/web/project/static;
}
location /media {
alias /home/admin/web/project/media;
}
location /assets {
alias /home/admin/web/project/assets;
}
location / {
proxy_pass_header Server;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Scheme $scheme;
proxy_set_header X-Forwarded-Proto https;
proxy_connect_timeout 75s;
proxy_read_timeout 300s;
proxy_pass http://127.0.0.1:8000/;
client_max_body_size 100M;
}
# Proxies
# location /first {
# proxy_pass http://127.0.0.1:8001/;
# }
#
# location /second {
# proxy_pass http://127.0.0.1:8002/;
# }
error_page 500 502 503 504 /media/50x.html;
}
You have to run your projects on different ports, e.g. firstone on 8000 and secondone on 8001.
Then in the nginx conf, in place of the single location /, write a location /firstone/ block and proxy-pass it to port 8000, and an equivalent location /secondone/ block proxy-passed to port 8001, as sketched below.
For static files and media you have to make them available as /firstone/static (and the same for secondone).
The other way is to point MEDIA_ROOT and STATIC_ROOT to the same locations for both projects.
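A minimal sketch of that layout (ports as above; the filesystem paths are illustrative and the ssl directives are omitted):
server {
    listen 443 ssl;
    server_name domain.com www.domain.com;
    # first project, mounted under /firstone/
    location /firstone/static/ {
        alias /home/admin/web/firstone/static/;
    }
    location /firstone/ {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://127.0.0.1:8000/;
    }
    # second project, mounted under /secondone/
    location /secondone/static/ {
        alias /home/admin/web/secondone/static/;
    }
    location /secondone/ {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://127.0.0.1:8001/;
    }
}
The trailing slash on proxy_pass strips the /firstone/ or /secondone/ prefix before the request reaches the corresponding Django app.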
As @prof.phython correctly states, you'll need to run a separate gunicorn process for each of the apps. This results in you having each app running on a separate port.
Next create a separate upstream block, under http for each of these app servers:
upstream app1 {
# fail_timeout=0 means we always retry an upstream even if it failed
# to return a good HTTP response
# for UNIX domain socket setups
#server unix:/tmp/gunicorn.sock fail_timeout=0;
# for a TCP configuration
server 127.0.0.1:9000 fail_timeout=0;
}
Obviously change the title, and port number for each upstream block accordingly.
Then, under your http->server block define the following for each:
location @app1_proxy {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
# we don't want nginx trying to do something clever with
# redirects, we set the Host: header above already.
proxy_redirect off;
proxy_pass http://app1;
}
Make sure the last line there points at what you called the upstream block (app1); @app1_proxy should be specific to that app as well.
Finally within the http->server block, use the following code to map a URL to the app server:
location /any/subpath {
# checks for static file, if not found proxy to app
try_files $uri @app1_proxy;
}
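One extra point on the question's static-files aspect: when a project is mounted under a URL prefix, the Django project itself usually also needs to generate prefix-aware URLs. A hedged sketch of the settings commonly involved, assuming a /firstone/ prefix (illustrative values, not from the question):
# settings.py of the project mounted at /firstone/
FORCE_SCRIPT_NAME = "/firstone"
STATIC_URL = "/firstone/static/"
MEDIA_URL = "/firstone/media/"
Whether FORCE_SCRIPT_NAME is needed depends on how the proxy passes the path through, so treat this as something to verify rather than copy.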
What prof.phython said should be correct. I'm not an expert on this but I saw a similar situation with our server as well. Hope the shared nginx.conf file helps!
server {
listen 80;
listen [::]:80;
server_name alicebot.tech;
return 301 https://web.alicebot.tech$request_uri;
}
server {
listen 80;
listen [::]:80;
server_name web.alicebot.tech;
return 301 https://web.alicebot.tech$request_uri;
}
server {
listen 443 ssl;
server_name alicebot.tech;
ssl_certificate /etc/ssl/alicebot_tech_cert_chain.crt;
ssl_certificate_key /etc/ssl/alicebot.key;
location /static/ {
expires 1M;
access_log off;
add_header Cache-Control "public";
proxy_ignore_headers "Set-Cookie";
}
location / {
include proxy_params;
proxy_pass http://unix:/var/www/html/alice/alice.sock;
proxy_set_header X-Forwarded-Host $server_name;
proxy_set_header X-Real-IP $remote_addr;
add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
}
}
server {
listen 443 ssl;
server_name web.alicebot.tech;
ssl_certificate /etc/letsencrypt/live/web.alicebot.tech/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/web.alicebot.tech/privkey.pem; # managed by Certbot
location /static/ {
autoindex on;
alias /var/www/html/static/;
expires 1M;
access_log off;
add_header Cache-Control "public";
proxy_ignore_headers "Set-Cookie";
}
location / {
include proxy_params;
proxy_pass http://unix:/var/www/alice_v2/alice/alice.sock;
proxy_set_header X-Forwarded-Host $server_name;
proxy_set_header X-Real-IP $remote_addr;
add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
}
}
server {
listen 8000 ssl;
listen [::]:8000 ssl;
server_name alicebot.tech;
ssl_certificate /etc/ssl/alicebot_tech_cert_chain.crt;
ssl_certificate_key /etc/ssl/alicebot.key;
location /static/ {
autoindex on;
alias /var/www/alice_v2/static/;
expires 1M;
access_log off;
add_header Cache-Control "public";
proxy_ignore_headers "Set-Cookie";
}
location / {
include proxy_params;
proxy_pass http://unix:/var/www/alice_v2/alice/alice.sock;
}
}
As you can see, we had different domain names here, which you won't be needing, so you'll need to change the server_name entries inside the server { ... } blocks.

Django Admin keeps returning 504 timeout ( nginx +uWSGI )

This is my nginx configuration:
server {
listen 80;
listen [::]:80;
root /var/www/html;
index index.html index.htm index.nginx-debian.html;
server_name www.nameOfSite.id nameOfSite.id;
access_log off;
error_log /var/www/log_nginx/error.log;
gzip on;
gzip_disable "msie6";
client_header_timeout 180s;
client_body_timeout 180s;
client_max_body_size 100m;
proxy_connect_timeout 120s;
proxy_send_timeout 180s;
proxy_read_timeout 180s;
send_timeout 600s;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
location /static {
alias /var/www/django/static;
}
location /media {
alias /var/www/django/media;
}
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
include uwsgi_params;
uwsgi_read_timeout 500;
uwsgi_send_timeout 500;
uwsgi_pass unix:/var/www/uwsgi_texas.sock;
}
}
This is my uWSGI ini file in /var/www/texas_uwsgi.ini:
[uwsgi]
socket = /var/www/uwsgi_texas.sock
chdir = /var/www/django/
wsgi-file = /var/www/django/django/wsgi.py
processes = 8
threads = 1
master = true
harakiri = 900
chmod-socket = 777
vacuum = true
This is my service file in /etc/systemd/system/texas.service:
[Unit]
Description=TEXAS
After=syslog.target
[Service]
ExecStart=/usr/local/bin/uwsgi --ini /var/www/texas_uwsgi.ini
Restart=always
KillSignal=SIGQUIT
Type=notify
StandardError=syslog
NotifyAccess=main
[Install]
WantedBy=multi-user.target
The problem is that when I open the Django admin for one model object that has a lot of inline objects and fields, it keeps returning a 504 timeout because the page takes more than 60 seconds to process. I have checked my nginx and uWSGI configurations and cannot find how to increase this 60-second timeout. The rest of the pages work fine.
In my nginx configuration I have already tried:
proxy_connect_timeout 120s;
proxy_send_timeout 180s;
proxy_read_timeout 180s;
send_timeout 600s;
uwsgi_read_timeout 500;
uwsgi_send_timeout 500;
Maybe use raw_id_fields (for the inline models) wherever necessary to make the admin change form load faster, as sketched below.
Refer: Django Admin raw_id_fields
By doing so you can bypass the 504 error.
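A short sketch of where raw_id_fields goes (the model and inline names are invented for illustration):
# admin.py -- hypothetical models, only to show the raw_id_fields placement
from django.contrib import admin
from .models import Order, OrderItem

class OrderItemInline(admin.TabularInline):
    model = OrderItem
    # render the related object as a plain ID box instead of a huge <select>,
    # which is what makes change forms with many inlines slow to build
    raw_id_fields = ("product",)

@admin.register(Order)
class OrderAdmin(admin.ModelAdmin):
    inlines = [OrderItemInline]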

nginx (nginx/1.10.0) not working after Ubuntu OS upgrade

I upgraded Ubuntu, and after the upgrade example.com (name changed) stopped working and started giving the error below. I am using nginx (nginx/1.10.0) and Ubuntu 16.04. The whole setup is on AWS EC2.
502 Bad Gateway
Below are contents from nginx error log
[error] 1201#1201: *27 connect() failed (111: Connection refused) while connecting to upstream, client: 171.76.106.51, server: example.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://127.0.0.1:3000/favicon.ico", host: "public IP", referrer: "https://publicIP/"
Below is nginx.conf file
user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
The nginx sites-enabled/default file is a symlink to sites-available/default; below is its content:
server {
listen 80 default_server;
listen [::]:80 default_server ipv6only=on;
root /usr/share/nginx/html;
index index.html index.htm;
# Make site accessible from http://localhost/
server_name localhost;
location / {
try_files $uri $uri/ =404;
}
}
Another nginx conf file
server {
listen 443 ssl;
server_name example.com;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
root /usr/share/nginx/html;
index index.html index.htm;
# Make site accessible from http://localhost/
server_name localhost;
location / {
proxy_pass http://localhost:3000/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forward-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forward-Proto http;
proxy_set_header X-Nginx-Proxy true;
proxy_redirect off;
}
}
server {
listen 80;
server_name example.com;
return 301 https://$host$request_uri;
}
When I request http://localhost:3000/ it says connection refused:
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:3000... failed: Connection refused.
When I am doing "wget www.google.com" its says connected
Can you please help here ?
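Since "connection refused" on 127.0.0.1:3000 means nothing is listening there, it looks like the upstream app itself did not come back up after the upgrade. Commands along these lines should show whether anything is bound to the port (the service name here is a placeholder):
# is anything listening on the upstream port?
sudo ss -ltnp | grep ':3000'
# did the backend service survive the upgrade? (replace with your actual unit name)
sudo systemctl status my-backend-app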

Omnibus 7.10.0 Gitlab Redirect https to http

https://mydomainName.com --> AWS-ELB [ingress 443 --> egress 80]) --> OmnibusGitlab
Now Omnibus redirects to the following URL and times out:
http://mydomainName.com/users/sign_in
Is there any way to debug this issue?
The full path has to be HTTPS, because if you go in through a reverse proxy that accepts HTTPS, you have to come back out as HTTPS as well.
Separate out the Nginx configuration, because the Omnibus bundle has constraints that block the flexibility you get with a standard nginx.
Do the following to make this change:
edit /etc/gitlab/gitlab.rb
and add
nginx['enable'] = false
web_server['external_users'] = ['www-data'] #for ubuntu nginx user
web_server['external_users'] = ['nginx'] # for centos 6-7
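After editing gitlab.rb, the change has to be applied with the usual Omnibus reconfigure step:
sudo gitlab-ctl reconfigure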
Add the following configuration to serve GitLab via plain nginx, in
/etc/nginx/sites-available/server
server {
listen *:443 default_server ssl;
ssl_certificate /etc/ssl/certs/myserver.crt;
ssl_certificate_key /etc/ssl/private/myserver.key;
server_name myhostname.com;
server_tokens off;
root /opt/gitlab/embedded/service/gitlab-rails/public;
client_max_body_size 50m; #or 5000
access_log /var/log/gitlab/nginx_access.log;
error_log /var/log/gitlab/nginx_error.log;
location / {
try_files $uri $uri/index.html $uri.html @gitlab;
}
location @gitlab {
proxy_read_timeout 300; # Some requests take more than 30 seconds.
proxy_connect_timeout 300; # Some requests take more than 30 seconds.
proxy_redirect off;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://gitlab;
}
error_page 502 /502.html;
}
Then add the HTTP-to-HTTPS redirect in
/etc/nginx/sites-available/gitlab-redirect
server {
listen 80;
server_name myhostname.com;
return 301 https://myhostname.com;
}