Django send_mail not working in production - django

I have the following setup, and emails are sent fine locally but not in production. It is a Django, Docker, Dokku, and nginx setup hosted on a VPS.
Locally I run it in Docker with docker compose up, and in production with the same Dockerfile. Below is all the info I thought was useful. Does anyone know what I have to do?
settings.py
...
EMAIL_BACKEND = "django.core.mail.backends.smtp.EmailBackend"
EMAIL_USE_TLS = True
EMAIL_HOST_USER = os.environ.get("Email_Username", "")
EMAIL_HOST_PASSWORD = os.environ.get("Email_Password", "")
EMAIL_HOST = os.environ.get("Email_Host", "mail.domain.nl")
EMAIL_PORT = os.environ.get("Email_Port", 587)
...
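One detail worth normalizing while debugging (my observation, not part of the original post): os.environ.get always returns a string, so production gets EMAIL_PORT as "587" while local runs use the integer default. Casting keeps both environments identical:

EMAIL_PORT = int(os.environ.get("Email_Port", 587))  # ensure an int in both environments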
actions.py
from django.conf import settings
from django.core.mail import send_mail

def sendRegisterMail(modeladmin, request, queryset):
    for lid in queryset:
        print("email sending")
        achievements = '...'
        subject = "..."
        message = f""" ... """
        send_mail(
            subject,
            message,
            settings.EMAIL_HOST_USER,
            [lid.email],
            fail_silently=True,
        )
        print("email sent")
dokku nginx:show-config app
server {
    listen [::]:80;
    listen 80;
    server_name app.dispuutstropdas.nl;
    access_log /var/log/nginx/app-access.log;
    error_log /var/log/nginx/app-error.log;
    include /home/dokku/app/nginx.conf.d/*.conf;
    location / {
        return 301 https://$host:443$request_uri;
    }
}
server {
    listen [::]:443 ssl http2;
    listen 443 ssl http2;
    server_name app.dispuutstropdas.nl;
    access_log /var/log/nginx/app-access.log;
    error_log /var/log/nginx/app-error.log;
    ssl_certificate /home/dokku/app/tls/server.crt;
    ssl_certificate_key /home/dokku/app/tls/server.key;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers off;
    keepalive_timeout 70;
    location / {
        gzip on;
        gzip_min_length 1100;
        gzip_buffers 4 32k;
        gzip_types text/css text/javascript text/xml text/plain text/x-component application/javascript application/x-javascript application/json application/xml application/rss+xml font/truetype application/x-font-ttf font/opentype application/vnd.ms-fontobject image/svg+xml;
        gzip_vary on;
        gzip_comp_level 6;
        proxy_pass http://app-8000;
        http2_push_preload on;
        proxy_http_version 1.1;
        proxy_read_timeout 60s;
        proxy_buffer_size 4096;
        proxy_buffering on;
        proxy_buffers 8 4096;
        proxy_busy_buffers_size 8192;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $http_connection;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Request-Start $msec;
    }
    error_page 400 401 402 403 405 406 407 408 409 410 411 412 413 414 415 416 417 418 420 422 423 424 426 428 429 431 444 449 450 451 /400-error.html;
    location /400-error.html {
        root /var/lib/dokku/data/nginx-vhosts/dokku-errors;
        internal;
    }
    error_page 404 /404-error.html;
    location /404-error.html {
        root /var/lib/dokku/data/nginx-vhosts/dokku-errors;
        internal;
    }
    error_page 500 501 503 504 505 506 507 508 509 510 511 /500-error.html;
    location /500-error.html {
        root /var/lib/dokku/data/nginx-vhosts/dokku-errors;
        internal;
    }
    error_page 502 /502-error.html;
    location /502-error.html {
        root /var/lib/dokku/data/nginx-vhosts/dokku-errors;
        internal;
    }
    include /home/dokku/app/nginx.conf.d/*.conf;
}
upstream app-8000 {
    server 172.17.0.7:8000;
}
ports in dokku
DOKKU_PROXY_PORT_MAP: http:80:8000 https:443:8000
Dockerfile (which works)
FROM python:3.10.7-alpine
COPY backend/requirements.txt ./
RUN apk update \
    && apk add postgresql-dev gcc python3-dev musl-dev \
    && apk add -u zlib-dev jpeg-dev gcc musl-dev \
    && pip install -r requirements.txt
RUN mkdir /code
WORKDIR /code
ADD . /code/
ENV PYTHONUNBUFFERED 1
EXPOSE 8000
CMD gunicorn core.wsgi:application --chdir backend -b 0.0.0.0:8000 --log-file -
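A related failure mode worth ruling out (an assumption on my part, not from the original post): if the Email_* variables were never set on the Dokku app, the empty-string defaults in settings.py apply silently, and most SMTP servers then reject the login. A quick check from a Django shell inside the production container:

import os
from django.conf import settings

# None here means the variable is not set in the container at all;
# note the names are case-sensitive (Email_Username, not EMAIL_USERNAME).
print(repr(os.environ.get("Email_Username")))
print(repr(settings.EMAIL_HOST_USER), repr(settings.EMAIL_HOST), repr(settings.EMAIL_PORT))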
Thanks in advance for your help.

Related

502 Bad Gateway | Gunicorn - nginx - django

I wanted my Django app to run on a server, so I tried gunicorn. When I run the app with gunicorn directly, the server works.
I mean
`# gunicorn --bind 0.0.0.0:8000 myapp.wsgi`
is working.
But when I disconnect from the server, it stops working, so I set up nginx.
I followed this source https://github.com/satssehgal/URLShortnerDjango- and did everything in it. I checked my paths and they are all correct. What can I do to make the server work?
systemctl status nginx
(screenshot of output not reproduced)
ps aux | grep php-fpm
(screenshot of output not reproduced)
nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
    worker_connections 768;
    # multi_accept on;
}
http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;
    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    ##
    # Gzip Settings
    ##
    gzip on;
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/x$
    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
    fastcgi_buffers 8 16k;
    fastcgi_buffer_size 32k;
    fastcgi_connect_timeout 300;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;
}
/etc/nginx/sites-available/myapp:
server {
    listen 80;
    location = /favicon.ico { access_log off; log_not_found off; }
    keepalive_timeout 10;
    charset utf-8;
    location /static/ {
        root /root/ayzguci/myapp;
    }
    location / {
        include proxy_params;
        proxy_pass http://unix:/root/ayzguci/myapp/myapp.sock;
        proxy_redirect off;
        proxy_set_header X-Forwarded-Host $5.105.5.140;
    }
}
ps ax|grep gunicorn output:
1782 ? Ss 0:00 /root/ayzguci/myenv/bin/python3 /root/ayzguci/myenv/bin/gunicorn --access-logfile - --timeout 1000 --workers 3 --bind unix:/root/ayzguci/myapp/myapp.sock myapp.wsgi:application
1812 ? S 0:00 /root/ayzguci/myenv/bin/python3 /root/ayzguci/myenv/bin/gunicorn --access-logfile - --timeout 1000 --workers 3 --bind unix:/root/ayzguci/myapp/myapp.sock myapp.wsgi:application
1813 ? S 0:00 /root/ayzguci/myenv/bin/python3 /root/ayzguci/myenv/bin/gunicorn --access-logfile - --timeout 1000 --workers 3 --bind unix:/root/ayzguci/myapp/myapp.sock myapp.wsgi:application
1814 ? S 0:00 /root/ayzguci/myenv/bin/python3 /root/ayzguci/myenv/bin/gunicorn --access-logfile - --timeout 1000 --workers 3 --bind unix:/root/ayzguci/myapp/myapp.sock myapp.wsgi:application
1900 pts/0 S+ 0:00 grep --color=auto gunicorn
netstat -an|grep 8000 output:
root@localhost:~# sudo apt install net-tools
Reading package lists... Done
Building dependency tree
Reading state information... Done
net-tools is already the newest version (1.60+git20161116.90da8a0-1ubuntu1).
0 upgraded, 0 newly installed, 0 to remove and 72 not upgraded.
root@localhost:~# netstat -an|grep 8000
root@localhost:~#
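One observation about this output (my note, not from the original post): the ps listing shows gunicorn bound to a unix socket, not TCP port 8000, so the empty netstat result is expected and not itself an error. A minimal probe of that socket, using the path from the ps output, can tell whether gunicorn is answering at all:

import socket

# Speak plain HTTP/1.0 to the gunicorn unix socket; a valid HTTP response
# means gunicorn is fine and the problem is on the nginx side.
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect("/root/ayzguci/myapp/myapp.sock")
sock.sendall(b"GET / HTTP/1.0\r\nHost: localhost\r\n\r\n")
print(sock.recv(4096).decode(errors="replace"))
sock.close()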

Unable to implement websocket with Nginx and Daphne

I am trying to set up websockets on my Django application using Daphne and Nginx. On my local setup everything works as expected, but when I upload to the server the websockets do not respond. This is the nginx.conf file:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
    worker_connections 768;
    # multi_accept on;
}
http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;
    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    ##
    # Gzip Settings
    ##
    gzip on;
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
    client_max_body_size 10M;
}
and this is my sites-available file which is accessed by Nginx:
server {
    server_name 139.59.9.118 newadmin.aysle.tech;
    location = /favicon.ico { access_log off; log_not_found off; }
    location /static/ {
        root /home/django/AysleServer/src;
    }
    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
    }
    location /wss/ {
        proxy_pass http://0.0.0.1:8001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/newadmin.aysle.tech/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/newadmin.aysle.tech/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = newadmin.aysle.tech) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    server_name 139.59.9.118 newadmin.aysle.tech;
    listen 80;
    return 404; # managed by Certbot
}
and this is my daphne.service file:
[Unit]
Description=WebSocket Daphne Service
After=network.target
[Service]
Type=simple
User=root
WorkingDirectory=/home/django/AysleServer/src
#ExecStart=/home/django/AysleServer/MyEnv/bin/python /home/django/AysleServer/MyEnv/bin/daphne -b 0.0.0.0 -p 8001 adminPanel.asgi:application
ExecStart=/home/django/AysleServer/MyEnv/bin/python /home/django/AysleServer/MyEnv/bin/daphne -e ssl:8001:privateKey=/etc/letsencrypt/live/newadmin.aysle.tech/privkey.>
Restart=on-failure
[Install]
WantedBy=multi-user.target
I tried sending a websocket request like this:
ws://newadmin.aysle.tech/ws/test/
ws://newadmin.aysle.tech:8001/ws/test/
But I do not get any response back. I checked the log files, but there are no errors. My guess is that Nginx is not forwarding the request to Daphne, probably a configuration issue, but I do not know what to change. Please help me with this; thanks for your time in advance. Note that I am also using Gunicorn to handle the HTTP requests, and those work as expected.
Since you are using SSL in your nginx config, you also have to use wss instead of ws as the scheme.
Also, your location is /wss/, so your URI should use that location too.
Try this for a request from the client:
wss://newadmin.aysle.tech/wss/test/
If this doesn't work, you could also check whether your host even allows WebSockets, or whether you have to activate them. For example, I used a Djangoeurope server and had to activate WebSockets for the URI.
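To test that suggestion from a client machine, a minimal sketch (this assumes the third-party websockets package, pip install websockets; the endpoint is the one proposed above):

import asyncio
import websockets  # third-party: pip install websockets

async def main():
    # wss scheme plus the /wss/ location, as suggested in the answer
    uri = "wss://newadmin.aysle.tech/wss/test/"
    async with websockets.connect(uri) as ws:
        await ws.send("ping")
        print(await ws.recv())  # any reply means nginx forwarded the upgrade

asyncio.run(main())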

Unable to use http2 on nginx docker container

I want to use HTTP/2 with the nginx image, but no matter how long I try, the protocol is still HTTP/1.1.
Dockerfile for nginx:
FROM nginx
COPY ./docker/nginx/etc/nginx/nginx.conf /etc/nginx/nginx.conf
COPY ./docker/nginx/etc/nginx/conf.d/default.conf.https /etc/nginx/conf.d/default.conf
/etc/nginx/nginx.conf is
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    # run ulimit -n to check
    worker_connections 1024;
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    # Buffer size for post submission
    client_body_buffer_size 10k;
    client_max_body_size 8m;
    # Buffer size for header
    client_header_buffer_size 1k;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;
    include /etc/nginx/conf.d/*.conf;
}
/etc/nginx/conf.d/default.conf is:
# Expires map
map $sent_http_content_type $expires {
    default off;
    text/html epoch;
    text/css max;
    application/javascript max;
    ~image/ max;
}
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name 0.0.0.0;
    ssl_certificate /etc/nginx/certs/server.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
    expires $expires;
    location = /favicon.ico {
        log_not_found off;
    }
    location /static/ {
        alias /static_files/;
    }
    location / {
        access_log /var/log/nginx/wsgi.access.log;
        error_log /var/log/nginx/wsgi.error_log warn;
        proxy_pass http://app_wsgi:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    location /ws/ {
        try_files $uri @proxy_to_ws;
    }
    location @proxy_to_ws {
        access_log /var/log/nginx/asgi.access.log;
        error_log /var/log/nginx/asgi.error_log warn;
        proxy_pass http://app_asgi:8001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }
}
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}
Docker-compose file for nginx part:
nginx:
  restart: always
  build:
    context: .
    dockerfile: docker/nginx/Dockerfile.https
  ports:
    - 80:80
    - 443:443
  volumes:
    - ./app/static:/static_files
    - ./ssl/certs:/etc/nginx/certs
  depends_on:
    - app_wsgi
    - app_asgi
Go inside the nginx container and run the nginx -V command:
root@0a15f404bf1d:/# nginx -V
nginx version: nginx/1.17.9
built by gcc 8.3.0 (Debian 8.3.0-6)
built with OpenSSL 1.1.1d 10 Sep 2019
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-g -O2 -fdebug-prefix-map=/data/builder/debuild/nginx-1.17.9/debian/debuild-base/nginx-1.17.9=. -fstack-protector-strong -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fPIC' --with-ld-opt='-Wl,-z,relro -Wl,-z,now -Wl,--as-needed -pie'
Is there anything wrong with my settings?
I checked in the Chrome dev tools and saw that all requests are still sent over the HTTP/1.1 protocol.
My architecture is:
Nginx <-> gunicorn <-> Django application
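Two things worth checking before touching the config (my notes, not from the original post): HTTP/2 only applies to the browser-to-nginx hop, since proxy_pass to gunicorn always speaks HTTP/1.x, and the browser will only negotiate HTTP/2 over TLS via ALPN. A client-side check of what actually gets negotiated, assuming the third-party httpx package with HTTP/2 support (pip install 'httpx[http2]') and the self-signed certificate from this setup:

import httpx  # third-party: pip install 'httpx[http2]'

# verify=False because the certificate in this setup is self-signed
with httpx.Client(http2=True, verify=False) as client:
    resp = client.get("https://localhost/")
    print(resp.http_version)  # "HTTP/2" if ALPN negotiation succeeded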
I had a similar issue: I was implementing a proxy pass and calling the nginx server, and I had been receiving status 426 until I set up the following configuration:
upstream mservername {
    server my.example.domain:443;
    keepalive 20;
}
server {
    listen 8443 ssl http2;
    server_name my.example.domain;
    access_log /opt/bitnami/nginx/logs/access_my_example_domain.log;
    error_log /opt/bitnami/nginx/logs/error_my_example_domain.log;
    ssl_certificate /opt/bitnami/nginx/conf/bitnami/certs/server.crt;
    ssl_certificate_key /opt/bitnami/nginx/conf/bitnami/certs/server.key;
    ssl_protocols TLSv1.3 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;
    location /resource {
        http2_push_preload on;
        proxy_ssl_session_reuse off;
        proxy_ssl_server_name on;
        proxy_ssl_name my.example.domain;
        proxy_ssl_trusted_certificate /opt/bitnami/nginx/conf/bitnami/certs/my_example_domain/my_domain_cert.crt;
        proxy_set_header content-type "application/xml";
        proxy_set_header accept "application/xml";
        proxy_hide_header X-Frame-Options;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass https://my.example.domain/resource;
    }
}
Hope this helps. In my case it solved the issue.

(nginx + gunicorn) small server instance drops/timeouts connections on +60 simple API requests / second. Can it be improved?

I'm setting up the first production architecture for my Django-based app, using nginx + gunicorn + a remote Postgres database.
After performing simple API load tests with https://loader.io, I found that when the number of clients sending API requests rises above 60 clients/second during a 30-second test, the tool reports connection timeouts.
When using a double-server setup with a load balancer I can double the clients/second number, but I would expect a single 3 vCPU / 1 GB RAM setup to handle more than 30 requests/second. Am I right?
I've tried a lot of different gunicorn / nginx config parameters, but nothing seems to help.
This is the content of my /etc/nginx/nginx.conf file:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
worker_rlimit_nofile 100000;
events {
    worker_connections 4000;
    multi_accept on;
    use epoll;
}
http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    server_names_hash_bucket_size 512;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;
    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    ##
    # Gzip Settings
    ##
    gzip on;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
    reset_timedout_connection on;
    keepalive_requests 100000;
    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
This is the content of my /etc/nginx/sites-available/MY_DOMAIN file:
server {
    listen 80;
    listen [::]:80;
    server_name MY_DOMAIN www.MY_DOMAIN;
    return 301 https://$host$request_uri;
}
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    ssl on;
    client_max_body_size 5M;
    server_name MY_DOMAIN www.MY_DOMAIN;
    location = /favicon.ico { access_log off; log_not_found off; }
    location /static/ {
        root /var/www/backend;
    }
    location /loaderio-b061bddf86a67379411d4ef54f7ee430/ {
        root /var/www/backend;
    }
    location / {
        include proxy_params;
        proxy_pass http://unix:/var/www/backend/MY_SOCKET.sock;
    }
    location /ws/ {
        include proxy_params;
        proxy_pass http://unix:/var/www/backend/ws.sock;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_connect_timeout 300;
        proxy_send_timeout 300;
        proxy_read_timeout 300;
        send_timeout 300;
    }
    ssl_certificate /etc/letsencrypt/live/MY_DOMAIN/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/MY_DOMAIN/privkey.pem; # managed by Certbot
}
This is the content of my supervisor file:
[program:gunicorn]
directory=/var/www/backend
command=/root/.pyenv/versions/VENV_NAME/bin/gunicorn --workers 5 --keep-alive 15 --worker-class gevent --bind unix:/var/www/backend/SOCK_NAME.sock config.wsgi:application
autostart=true
autorestart=true
log_level=debug
stderr_logfile=/var/log/gunicorn/gunicorn.out.log
stdout_logfile=/var/log/gunicorn/gunicorn.err.log
user=root
group=www-data
environment=LANG=en_US.UTF-8,LC_ALL=en_US.UTF-8
[program:daphne]
directory=/var/www/backend
command=/root/.pyenv/versions/VENV_NAME/bin/daphne -u /var/www/backend/ws.sock config.asgi:application
autostart=true
autorestart=true
stderr_logfile=/var/log/daphne/daphne.out.log
stdout_logfile=/var/log/daphne/daphne.err.log
user=root
group=www-data
environment=LANG=en_US.UTF-8,LC_ALL=en_US.UTF-8
[group:GROUP_NAME]
programs:gunicorn,daphne
When performing the load test, the CPU vCores are between 10-18% load and the RAM usage is around 70%.
Is it possible to push this single server instance above 60 req/sec, or is it just a hardware limitation? (I've already tried a DigitalOcean 16 vCPU / 8 GB RAM droplet and the results were pretty much the same, no matter whether 5 or 15 workers were used.)
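For reference (a heuristic from the gunicorn docs, not a measurement of this setup): the suggested starting point for worker count is (2 x cores) + 1, and gevent workers are sized by concurrent connections rather than CPU, so worker count alone rarely explains a hard ceiling at 60 req/sec:

import multiprocessing

def suggested_workers(cores: int = multiprocessing.cpu_count()) -> int:
    # gunicorn's documented rule of thumb for sync workers
    return cores * 2 + 1

print(suggested_workers(3))   # 7 for the 3 vCPU droplet
print(suggested_workers(16))  # 33 for the 16 vCPU droplet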

Django Admin keeps returning 504 timeout (nginx + uWSGI)

This is my nginx configuration:
server {
    listen 80;
    listen [::]:80;
    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
    server_name www.nameOfSite.id nameOfSite.id;
    access_log off;
    error_log /var/www/log_nginx/error.log;
    gzip on;
    gzip_disable "msie6";
    client_header_timeout 180s;
    client_body_timeout 180s;
    client_max_body_size 100m;
    proxy_connect_timeout 120s;
    proxy_send_timeout 180s;
    proxy_read_timeout 180s;
    send_timeout 600s;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
    location /static {
        alias /var/www/django/static;
    }
    location /media {
        alias /var/www/django/media;
    }
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        include uwsgi_params;
        uwsgi_read_timeout 500;
        uwsgi_send_timeout 500;
        uwsgi_pass unix:/var/www/uwsgi_texas.sock;
    }
}
This is my uWSGI ini file in /var/www/texas_uwsgi.ini:
[uwsgi]
socket = /var/www/uwsgi_texas.sock
chdir = /var/www/django/
wsgi-file = /var/www/django/django/wsgi.py
processes = 8
threads = 1
master = true
harakiri = 900
chmod-socket = 777
vacuum = true
This is my service file in /etc/systemd/system/texas.service:
[Unit]
Description=TEXAS
After=syslog.target
[Service]
ExecStart=/usr/local/bin/uwsgi --ini /var/www/texas_uwsgi.ini
Restart=always
KillSignal=SIGQUIT
Type=notify
StandardError=syslog
NotifyAccess=main
[Install]
WantedBy=multi-user.target
The problem is that when I open the Django admin page for one Model object that has a lot of inline objects and fields, it keeps returning a 504 timeout because it takes more than 60 seconds to process. I have checked my NGINX and uWSGI configurations, but I cannot find out how to increase this "60 seconds timeout". The rest of the pages work fine.
In my nginx configuration, I already tried:
proxy_connect_timeout 120s;
proxy_send_timeout 180s;
proxy_read_timeout 180s;
send_timeout 600s;
uwsgi_read_timeout 500;
uwsgi_send_timeout 500;
This is the result when I try to open that Model Admin page: (screenshot not reproduced)
Maybe use raw_id_fields (for the inline models) to lighten the admin page wherever necessary.
Refer: Django Admin raw_id_fields
By doing so you can bypass the 504 error.
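A minimal sketch of that raw_id_fields suggestion (the Order/OrderItem/product names are hypothetical placeholders; substitute the actual inline models):

from django.contrib import admin
from .models import Order, OrderItem  # hypothetical models

class OrderItemInline(admin.TabularInline):
    model = OrderItem
    # Render the FK as a plain ID input instead of a <select> that loads
    # every related row, which is what makes large admin pages time out.
    raw_id_fields = ("product",)

@admin.register(Order)
class OrderAdmin(admin.ModelAdmin):
    inlines = [OrderItemInline]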