Django + Nginx + Waitress + AWS Elastic Load Balancer: 504 Error - django

I have hosted a Django app on an Nginx + Waitress architecture. Both are running on different ports on the same server, and an AWS ELB directs requests to the Nginx port.
I have a feature that queries the database based on user input and produces multiple CSV files. These files are then zipped and sent to the user for download.
The user gets a 504 error after 2-3 minutes, even though the file is still being generated (or has already been generated) on the server. The whole process takes around 4-5 minutes to generate a 500 MB zipped file, and the file size is greater than 100 MB in most cases.
I have tried different permutations and combinations of the settings below in Nginx and Waitress, but it makes no difference.
nginx.conf
http {
    include mime.types;
    default_type application/octet-stream;
    include C:/nginx-1.23.3/sites-enabled/senginx.conf;
    charset utf-8;
    sendfile on;
    client_max_body_size 500m;
    send_timeout 15m;
}
senginx.conf
server {
    listen 6003;
    server_name rnd.aicofindia.com;
    location /static/ {
        alias C:/Users/Administrator/Desktop/Project/sarus-master/static/;
    }
    location /media/ {
        alias C:/Users/Administrator/Desktop/Project/sarus-master/media/;
    }
    location / {
        proxy_read_timeout 15m;
        proxy_connect_timeout 15m;
        proxy_send_timeout 15m;
        keepalive_timeout 15m;
        keepalive_time 1h;
        proxy_socket_keepalive on;
        proxy_pass http://localhost:9090;
    }
}
waitress.conf
from waitress import serve
from sarus_project.wsgi import application

if __name__ == '__main__':
    serve(application, host='localhost', port=9090, channel_timeout=900, cleanup_interval=900)
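For context, the query-to-zip step described above can be sketched with the standard library alone. Everything below (the function name, the table layout, and the Django response line in the comment) is a hypothetical reconstruction from the question's description, not the actual project code:

```python
import csv
import io
import zipfile


def build_zip(tables):
    """Build an in-memory zip of CSV files, one per (filename, rows) pair.

    `tables` maps a csv filename to a list of rows (each row a list of cells).
    """
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, rows in tables.items():
            text = io.StringIO()
            csv.writer(text).writerows(rows)
            zf.writestr(name, text.getvalue())
    buf.seek(0)
    return buf


# In a Django view the buffer would then be returned roughly as:
#   return FileResponse(build_zip(tables), as_attachment=True, filename="export.zip")
```

Note that with this pattern the whole archive is built before a single byte reaches the client, so the client (and every proxy in between, including the ELB) has to hold the request open for the full 4-5 minutes the question mentions.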

Related

Route 53 isn't letting me access to the ec2 instance through the application load balancer

I built a Rails app on an EC2 instance and deployed it using Route 53.
Currently I have succeeded in associating the instance with the domain provided by Amazon,
and I can access the page with the domain name.
However, once I create a record and associate the domain with the ALB, I'm no longer able to access the page.
I'm doing this to make HTTPS access available.
I checked the things below:
The target group succeeds its health check
Typing the DNS name of the ALB works with both HTTP and HTTPS
The record on Route 53 shows the DNS name above
The security group used for the ALB allows all access, regardless of HTTP or HTTPS
A listener is configured for both HTTP and HTTPS
I also checked the article below:
Unable to Access HTTPs in AWS Application Load Balancer EC2 Instance
I have no idea what else to check.
Please help me; I've been working on this the whole day...
Here are my config files:
ruby_gems_bootcamp.conf (for nginx)
# log directory
error_log /var/www/rails/ruby-gems-bootcamp/log/nginx.error.log;
access_log /var/www/rails/ruby-gems-bootcamp/log/nginx.access.log;
# max body size
client_max_body_size 2G;

upstream app_server {
    # for unix domain socket setups
    server unix:/var/www/rails/ruby-gems-bootcamp/tmp/sockets/.unicorn.sock fail_timeout=0;
}

server {
    listen 80;
    server_name 54.248.194.243;
    # nginx so increasing this is generally safe ...
    keepalive_timeout 5;
    # path for static files
    root /var/www/rails/ruby-gems-bootcamp/public;
    # page cache loading
    try_files $uri/index.html $uri.html $uri @app;
    location @app {
        # http headers
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        # referring upstream app_server
        proxy_pass http://app_server;
    }
    # Rails error pages
    error_page 500 502 503 504 /500.html;
    location = /500.html {
        root /var/www/rails/ruby-gems-bootcamp/public;
    }
}
nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 4096;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80;
        listen [::]:80;
        server_name _;
        root /usr/share/nginx/html;
        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;
        error_page 404 /404.html;
        location = /404.html {
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}
route53 configuration

Nginx caching of uwsgi served static files

The Django project is deployed using uWSGI as the application server, which also serves static files from a specified directory (as shown in the command below); Nginx is used as a reverse proxy. Everything is deployed using Docker.
The uWSGI command to run the server is as follows:
uwsgi -b 65535 --socket :4000 --workers 100 --cpu-affinity 1 --module wui.wsgi --py-autoreload 1 --static-map /static=/project/static;
The application is working fine at this point. I would like to cache the static files in the Nginx server, so I referred to the blog https://www.nginx.com/blog/maximizing-python-performance-with-nginx-parti-web-serving-and-caching and included the following configuration in my nginx.conf:
location ~* .(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg
|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid
|midi|wav|bmp|rtf)$ {
expires max;
log_not_found off;
access_log off;
}
After adding this to my nginx.conf, the Nginx container exits with the following error:
[emerg] 1#1: invalid number of arguments in "location" directive in /etc/nginx/nginx.conf:43
Is this how uWSGI-served static files can be cached in Nginx? If yes, please suggest what has gone wrong here.
My complete nginx.conf is as follows:
events {
    worker_connections 1024; ## Default: 1024
}

http {
    include conf/mime.types;
    # the upstream component nginx needs to connect to
    upstream uwsgi {
        server backend:4000; # for a web port socket (we'll use this first)
    }
    # configuration of the server
    server {
        # the port your site will be served on
        listen 8443 ssl http2 default_server;
        # the domain name it will serve for
        server_name _; # substitute your machine's IP address or FQDN
        charset utf-8;
        ssl_certificate /secrets/server.crt;
        ssl_certificate_key /secrets/server.key;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!MD5;
        add_header Strict-Transport-Security "max-age=31536000" always;
        # Redirect HTTP to HTTPS
        error_page 497 https://$http_host$request_uri;
        # max upload size
        client_max_body_size 75M; # adjust to taste
        uwsgi_read_timeout 600s;
        # Finally, send all non-media requests to the Django server.
        location / {
            uwsgi_pass uwsgi;
            include /config/uwsgi_params; # the uwsgi_params file you installed
        }
        location ~* .(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg
        |jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid
        |midi|wav|bmp|rtf)$ {
            expires max;
            log_not_found off;
            access_log off;
        }
    }
}
Nginx version: 1.16
The problem with your config is that the location block has newlines inside the list of file extensions: nginx treats each line as a separate argument, and the location directive does not accept that many arguments. I tried nginx -t -c <filename> with a modified version of your location block, with the whole pattern on one line (note the dot should also be escaped as \. so it matches a literal dot rather than any character):
location ~* \.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
    expires max;
    log_not_found off;
    access_log off;
}
... and this passes the test!

Preventing DDOS attack, for Django app with nginx reverse proxy + gunicorn

I am writing a Django app which uses an Nginx reverse proxy + Gunicorn as the web server in production.
I want to include the capability to stop DDoS attacks from a certain IP (or pool of IPs). This has to be at the Nginx level, rather than any deeper in the code. Do I need a web application firewall? If so, how do I integrate it?
My project's Nginx file, located in sites-available, has:
server {
    listen 80;
    charset utf-8;
    underscores_in_headers on;
    location = /favicon.ico { access_log off; log_not_found off; }
    location /static/ {
        root /home/sarahm/djangoproject/djangoapp;
    }
    location /static/admin {
        root /home/sarahm/.virtualenvs/myenv/local/lib/python2.7/site-packages/django/contrib/admin/static/;
    }
    location / {
        proxy_pass_request_headers on;
        proxy_buffering on;
        proxy_buffers 8 24k;
        proxy_buffer_size 2k;
        include proxy_params;
        proxy_pass http://unix:/home/sarahm/djangoproject/djangoapp/djangoapp.sock;
    }
    error_page 500 502 503 504 /500.html;
    location = /500.html {
        root /home/sarahm/djangoproject/djangoapp/templates/;
    }
}
Let me know if I should include more information, and what that information should be.
If you want to prevent certain IPs or even subnets from accessing your app, add the following to your server block:
# specify addresses that are not allowed to access your server
deny 192.168.1.1/24;
deny 192.168.2.1/24;
allow all;
Also, if you're not using REST, you might want to limit the possible HTTP verbs by adding the following to your server block:
if ($request_method !~ ^(GET|HEAD|POST)$ ) {
    return 403;
}
To lessen the possibility of a DoS attack, you might want to limit the number of simultaneous connections from a single host (see http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html), by adding the following to your nginx.conf:
limit_conn_zone $binary_remote_addr zone=limitzone:1M;
and the following to your server block:
limit_conn limitzone 20;
Some other useful settings for nginx.conf that help mitigate DoS when set correctly:
server_tokens off;
autoindex off;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
client_body_timeout 10;
client_header_timeout 10;
send_timeout 10;
keepalive_timeout 20 15;
open_file_cache max=5000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
Since it's too broad to explain all of these here, I suggest you look in the docs (http://nginx.org/en/docs/) for details. Choosing correct values, though, is a matter of trial and error on your particular setup.
Django serves error pages itself as templates, so you should remove:
error_page 500 502 503 504 /500.html;
location = /500.html {
    root /home/sarahm/djangoproject/djangoapp/templates/;
}
Adding access_log off; log_not_found off; to your static location, if you don't really care about logging, is also an option:
location /static/ {
    access_log off;
    log_not_found off;
    root /home/sarahm/djangoproject/djangoapp;
}
This will lower the frequency of filesystem writes for logging, and therefore increase performance.
NGINX is a great web server, and setting it up is a broad topic, so it's best to either read the docs (at least the HOW-TO section) or find an article that describes a setup close to yours.

Nginx + Django + Phpmyadmin Configuration

I've migrated my server to Amazon EC2 and am trying to set up the following environment there:
Nginx in front serving static content, passing to Django for dynamic content. I would also like to use phpMyAdmin in this setup.
I am not a server admin, so I simply followed a few tutorials to get Nginx and Django up and running. But I've now spent two days trying to hook phpMyAdmin into this setup, to no avail. My current server configuration is below; how can I serve phpMyAdmin here?
server {
    listen 80;
    server_name localhost;
    access_log /opt/django/logs/nginx/vc_access.log;
    error_log /opt/django/logs/nginx/vc_error.log;
    # no security problem here, since / is always passed to upstream
    root /opt/django/;
    # serve directly - analogous for static/staticfiles
    location /media/ {
        # if asset versioning is used
        if ($query_string) {
            expires max;
        }
    }
    location /admin/media/ {
        # this changes depending on your python version
        root /path/to/test/lib/python2.7/site-packages/django/contrib;
    }
    location /static/ {
        # if asset versioning is used
        if ($query_string) {
            expires max;
        }
    }
    location / {
        proxy_pass_header Server;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_connect_timeout 10;
        proxy_read_timeout 10;
        proxy_pass http://localhost:8000/;
    }
    # what to serve if upstream is not available or crashes
    error_page 500 502 503 504 /media/50x.html;
}
This question rightly belongs on http://serverfault.com.
Nevertheless, the first thing you ought to do is configure a separate subdomain for your phpMyAdmin, for ease of administration.
There will then be two apps running with Nginx as reverse proxy: one Nginx server for your Django app above, and another server (also known as a virtual host) for your phpMyAdmin, with a configuration similar to this:
server {
    server_name phpmyadmin.<domain.tld>;
    access_log /srv/http/<domain>/logs/phpmyadmin.access.log;
    error_log /srv/http/<domain.tld>/logs/phpmyadmin.error.log;
    location / {
        root /srv/http/<domain.tld>/public_html/phpmyadmin;
        index index.html index.htm index.php;
    }
    location ~ \.php$ {
        root /srv/http/<domain.tld>/public_html/phpmyadmin;
        fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /srv/http/<domain.tld>/public_html/phpmyadmin/$fastcgi_script_name;
        include fastcgi_params;
    }
}
Each server configuration can point at a different domain name via the server_name directive; in this example, server_name phpmyadmin.<domain.tld>;.
Here's an example taken from http://wiki.nginx.org/ServerBlockExample
http {
    index index.html;

    server {
        server_name www.domain1.com;
        access_log logs/domain1.access.log main;
        root /var/www/domain1.com/htdocs;
    }

    server {
        server_name www.domain2.com;
        access_log logs/domain2.access.log main;
        root /var/www/domain2.com/htdocs;
    }
}
As you can see, there are two declarations of server inside the enclosing http block. One server declaration should contain your Django configuration, and the other the phpMyAdmin configuration: two "virtual hosts" ("server" instances) taken care of by Nginx.

How to run multiple Django sites on Nginx and uWSGI?

Is it possible to run multiple Django sites on the same server using Nginx and uWSGI?
I suppose it's necessary to run multiple uWSGI instances (one for each site). I copied /etc/init.d/uwsgi to uwsgi2 and changed the port number, but I got the following error:
# /etc/init.d/uwsgi2 start
Starting uwsgi: /usr/bin/uwsgi already running.
How is it possible to run multiple uWSGI instances?
Thanks
You can create multiple virtual hosts that allow you to host multiple sites, independent from each other. More info here: http://wiki.nginx.org/VirtualHostExample.
There is a bit more detailed info as well on how to set up virtual hosts at http://projects.unbit.it/uwsgi/wiki/RunOnNginx#VirtualHosting.
You can run multiple instances of uWSGI using Emperor mode.
The Emperor handles the creation of new worker instances; these instances are brilliantly and hilariously named vassals. Each vassal just needs a config file, which is usually placed (or symlinked) in /etc/uwsgi/vassals.
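For illustration, a minimal vassal config might look like the following sketch (the path, module, and socket names are hypothetical placeholders, not taken from the question):

```ini
; /etc/uwsgi/vassals/mysite.ini -- hypothetical example vassal
[uwsgi]
chdir = /var/www/mysite
module = mysite.wsgi:application
socket = /run/uwsgi/mysite.sock
processes = 4
vacuum = true
```

The Emperor itself is then started once, e.g. with uwsgi --emperor /etc/uwsgi/vassals, and it spawns or reloads a vassal whenever a config file appears or changes in that directory.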
For nginx you'll need to create a server block for each host you wish to serve. Just change the server_name directive for each host you want to serve. Here's an example:
#Simple HTTP server
server {
    listen 80;
    root /usr/share/nginx/www;
    server_name host1.example.com;
}

#Django server
server {
    listen 80;
    server_name host2.example.com;
    #...upstream config...
}
Important: make sure you have specified your host names in /etc/hosts. I found that without doing this, my Django site was also served on the default server IP, despite my specifying that it should only be served on a specific host name in my Nginx configuration.
I see many suggestions like @donturner's answer, i.e. setting two or more different server blocks in the Nginx configuration file. But each server needs a unique server_name: either a different domain name or a sub-domain name. What about this kind of situation? I want to serve two different Django projects like this:
www.ourlab.cn/site1/ # first Django project
www.ourlab.cn/site2/ # second Django project
This way, we can configure all of the settings in one server block.
This is my setting in /etc/nginx/nginx.conf
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name _;
        root /usr/share/nginx/html;
        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;
        location / {
        }
        error_page 404 /404.html;
        location = /40x.html {
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}
This is my setting in /etc/nginx/conf.d/self_configure.conf
# /etc/nginx/conf.d/self_configure.conf
server {
    listen 80;
    server_name www.ourlab.cn;
    # note that these lines are originally from the "location /" block
    root /mnt/data/www/ourlab;
    index index.php index.html index.htm;
    location / {
        try_files $uri $uri/ =404;
    }
    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
    # Django media
    location /media {
        # your Django project's media files - amend as required
        alias /mnt/data/www/metCCS/metCCS/static/media;
    }
    location /static {
        # your Django project's static files - amend as required
        # first project's static files path
        alias /mnt/data/www/metCCS/metCCS/static/static_dirs;
    }
    location /static_lip {
        # second project's static files path
        alias /mnt/data/www/lipidCCS/lipidCCS/static/static_dirs;
    }
    # match www.ourlab.cn/metccs/*
    location ~* ^/metccs {
        include uwsgi_params;
        uwsgi_pass unix:/run/uwsgi/metCCS.sock;
    }
    # match www.ourlab.cn/lipidccs/*
    location ~* ^/lipidccs {
        include uwsgi_params;
        uwsgi_pass unix:/run/uwsgi/lipidCCS.sock;
    }
}
You also need to change one of the Django projects' settings.py files to STATIC_URL = '/static_lip/', so the two projects can use their static files separately.
A further finding: Nginx can serve the static files by itself. Even when uWSGI and Django are shut down, these files are still accessible through the browser.