Preventing DDoS attacks for a Django app with nginx reverse proxy + gunicorn - django

I am writing a Django app which uses an nginx reverse proxy + gunicorn as the web server in production.
I want to include the capability to stop DDoS attacks from a certain IP (or pool of IPs). This has to be at the nginx level, rather than any deeper in the code. Do I need a web application firewall? If so, how do I integrate it?
My project's nginx config file in sites-available has:
server {
    listen 80;
    charset utf-8;
    underscores_in_headers on;
    location = /favicon.ico { access_log off; log_not_found off; }
    location /static/ {
        root /home/sarahm/djangoproject/djangoapp;
    }
    location /static/admin {
        root /home/sarahm/.virtualenvs/myenv/local/lib/python2.7/site-packages/django/contrib/admin/static/;
    }
    location / {
        proxy_pass_request_headers on;
        proxy_buffering on;
        proxy_buffers 8 24k;
        proxy_buffer_size 2k;
        include proxy_params;
        proxy_pass http://unix:/home/sarahm/djangoproject/djangoapp/djangoapp.sock;
    }
    error_page 500 502 503 504 /500.html;
    location = /500.html {
        root /home/sarahm/djangoproject/djangoapp/templates/;
    }
}
Let me know if I should include more information, and what that information should be.

If you want to prevent certain IPs or even whole subnets from accessing your app, add the following to your server block:
# Specify addresses that are not allowed to access your server
deny 192.168.1.1/24;
deny 192.168.2.1/24;
allow all;
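If you need to block a larger pool of addresses, one option (a sketch; the file name /etc/nginx/blockips.conf is hypothetical) is to keep the deny rules in a separate file, one rule per address or subnet:
# /etc/nginx/blockips.conf (hypothetical file)
deny 203.0.113.0/24;
deny 198.51.100.42;
and then pull the list into the server block before allowing everyone else:
include /etc/nginx/blockips.conf;
allow all;
This keeps the blocklist in one place so it can be updated without editing the site configuration itself.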
Also, if you're not using REST, you might want to limit the allowed HTTP verbs by adding the following to your server block:
if ($request_method !~ ^(GET|HEAD|POST)$ ) {
    return 403;
}
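A variation sometimes used for junk traffic is nginx's non-standard 444 status, which closes the connection without sending any response at all, saving a little work per blocked request:
if ($request_method !~ ^(GET|HEAD|POST)$ ) {
    return 444;
}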
To lessen the possibility of a DoS attack, you might want to limit the number of simultaneous connections from a single host (see http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html), by adding the following to nginx.conf:
limit_conn_zone $binary_remote_addr zone=limitzone:1M;
and the following to your server block:
limit_conn limitzone 20;
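limit_conn caps simultaneous connections; if you also want to cap the request rate per client, the ngx_http_limit_req_module can be combined with it. A minimal sketch (the zone name, size and rate are placeholders to tune for your setup) - in nginx.conf:
limit_req_zone $binary_remote_addr zone=reqlimit:10m rate=10r/s;
and in your server (or location) block:
limit_req zone=reqlimit burst=20 nodelay;
Requests beyond the burst are rejected (with 503 by default), which helps absorb floods coming from a single address.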
Some other useful settings for nginx.conf that help mitigate DoS if set correctly:
server_tokens off;
autoindex off;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
client_body_timeout 10;
client_header_timeout 10;
send_timeout 10;
keepalive_timeout 20 15;
open_file_cache max=5000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
Since it's too broad to explain all of these here, I suggest you look at the docs (http://nginx.org/en/docs/) for details. Choosing the correct values, though, is a matter of trial and error on your particular setup.
Django serves error pages itself as templates, so you should remove:
error_page 500 502 503 504 /500.html;
location = /500.html {
    root /home/sarahm/djangoproject/djangoapp/templates/;
}
Adding access_log off; and log_not_found off; to the /static/ location, if you don't really care about logging, is also an option:
location /static/ {
    access_log off;
    log_not_found off;
    root /home/sarahm/djangoproject/djangoapp;
}
This lowers the amount of log writes to the filesystem and therefore improves performance.
NGINX is a great web server, and configuring it is a broad topic, so it's best to either read the docs (at least the HOW-TO section) or find an article that describes a setup close to yours.

Related

Getting 504 Gateway Time-out nginx/1.18.0 (Ubuntu) in django nginx

I am facing problems launching my Django app on DigitalOcean. I have done everything needed, but I am quite sure I made an error somewhere. Below is my nginx configuration file. What could be the problem? Thanks a lot.
I would also love to know how to access the nginx error log file through the command line.
server {
    server_name server_Ip;
    location = /favicon.ico { access_log off; log_not_found off; }
    location /static/ {
        root /home/username/myproject/smsdigo;
    }
    location /media/ {
        root /home/username/myproject/smsdigo;
    }
    location /locale/ {
        root /home/ebong/myproject/smsdigo;
    }
    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
    }
}
server {
    listen 80;
    client_max_body_size 3000M;
    client_body_buffer_size 3000M;
    client_body_timeout 120;
}
Probably you need to increase the fastcgi timeout directives.
Try 60s or more. For example:
server {
    fastcgi_read_timeout 60s;
    fastcgi_send_timeout 60s;
}
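Note that the configuration in the question proxies to gunicorn over a unix socket with proxy_pass rather than FastCGI, so if the fastcgi_* directives have no effect in that setup, the equivalent proxy timeout directives may be what actually applies (a sketch, with example values to tune):
server {
    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
        # give the upstream (gunicorn) more time before nginx returns a 504
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
    }
}
Gunicorn's own worker timeout (the --timeout flag, 30 s by default) may also need raising if the view genuinely takes that long.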

Route 53 isn't letting me access to the ec2 instance through the application load balancer

I built a Rails app on an EC2 instance and deployed it using Route 53.
Currently I have succeeded in associating the instance with the domain provided by Amazon.
I can access the page with the domain name.
However, once I create a record and associate the domain with the ALB, I'm no longer able to access the page.
I'm doing this to make HTTPS access available.
I checked the things below:
The target group passes health checks
Typing the DNS name of the ALB works with both HTTP and HTTPS
The record on Route 53 shows the DNS name above
The security group used for the ALB allows all access, regardless of HTTP or HTTPS
A listener is configured for both HTTP and HTTPS
I also checked the article below
Unable to Access HTTPs in AWS Application Load Balancer EC2 Instance
I have no idea what else to check.
Please help me.
I've been working on this the whole day...
Here are my config files.
ruby_gems_bootcamp.conf (for nginx):
# log directory
error_log /var/www/rails/ruby-gems-bootcamp/log/nginx.error.log;
access_log /var/www/rails/ruby-gems-bootcamp/log/nginx.access.log;
# max body size
client_max_body_size 2G;
upstream app_server {
    # for unix domain socket setups
    server unix:/var/www/rails/ruby-gems-bootcamp/tmp/sockets/.unicorn.sock fail_timeout=0;
}
server {
    listen 80;
    server_name 54.248.194.243
    # nginx so increasing this is generally safe ...
    keepalive_timeout 5;
    # path for static files
    root /var/www/rails/ruby-gems-bootcamp/public;
    # page cache loading
    try_files $uri/index.html $uri.html $uri @app;
    location @app {
        # http headers
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        # referring upstream app_server
        proxy_pass http://app_server;
    }
    # Rails error pages
    error_page 500 502 503 504 /500.html;
    location = /500.html {
        root /var/www/rails/ruby-gems-bootcamp/public;
    }
}
nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
    worker_connections 1024;
}
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 4096;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;
    server {
        listen 80;
        listen [::]:80;
        server_name _;
        root /usr/share/nginx/html;
        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;
        error_page 404 /404.html;
        location = /404.html {
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}
route53 configuration

Nginx + uWSGI + Django too slow

I have Django running on nginx and uWSGI. Cached responses load very fast, but at other times the website takes more than 30 s to load. I am unable to diagnose the root cause of the slowdown. Here's what I can provide as info to help narrow down the issue:
GTMetrix - What I can conclude from the waterfall report is that the waiting time for static files is too high, along with the initial server response time. Here is a more detailed breakdown:
Link to the lighthouse parameters Waterfall report
nginx.conf - Here is the nginx config file:
user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
    worker_connections 768;
}
http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 75;
    types_hash_max_size 2048;
    client_max_body_size 5M;
    sendfile_max_chunk 512;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format upstream_time '$remote_addr - $remote_user [$time_local] '
                             '"$request" $status $body_bytes_sent '
                             '"$http_referer" "$http_user_agent"'
                             'rt="$request_time" uct="$upstream_connect_time" '
                             'uht="$upstream_header_time" urt="$upstream_response_time"';
    access_log /var/log/nginx/access.log upstream_time;
    error_log /var/log/nginx/error.log;
    gzip on;
    gzip_disable msie6;
    # And all the gzip mime types here
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
    proxy_cache_path /data/cache levels=1:2 keys_zone=my_cache:10m max_size=10g
                     inactive 60m use_temp_path off;
    server {
        location ~* \.(jpg|jpeg|png|gif|ico|css|js) {
            proxy_cache my_cache;
            proxy_cache_revalidate on;
            proxy_cache_min_uses 3;
            proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
            proxy_cache_lock on;
            expires 365d;
            proxy_pass http://example.net;
        }
    }
}
Nginx Project Config -
map $sent_http_content_type $expires {
    default on;
    text/html epoch;
    text/css max;
    appplication/javascript max;
    ~image/ max;
}
server {
    listen 80;
    server_name example.com;
    location = /favicon.ico { access_log off; log_not_found off; }
    location /static/ {
        root /home/mysite/project_dir/app_dir;
        expires $expires;
    }
    location /images/ {
        expires $expires;
        root /home/mysite/project_dir/app_dir/static/images/;
    }
    location /media/ {
        expires $expires;
        root /home/mysite/project_dir/;
    }
    location / {
        include uwsgi_params;
        uwsgi_pass unix:/run/uwsgi/mysite.sock;
        gzip_static on;
        proxy_buffering off;
        proxy_cache my_cache;
        proxy_cache_revalidate on;
        proxy_cache_min_uses 3;
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
        proxy_cache_lock on;
        expires 365d;
        proxy_set_header X-Real-IP $remote-addr;
        proxy_set_header Host $http-host;
        proxy_set_header Connection "";
    }
    listen 443 ssl http2; # Managed by certbot
    # All the subsequent certbot settings not tampered with
}
Logs - So, when I log nginx using the above config, the access logs show upstream_response_time correctly only if the website was loaded from cache. When it takes >30 s to load, all the upstream timing parameters except the request time show a hyphen '-'.
UPDATE:
django-debug-toolbar - Resource Usage:
Resource            Value
User CPU time       964.000 msec
System CPU time     52.000 msec
Total CPU time      1016.000 msec
System CPU time     1019.185 msec
All the SQL queries are taking minimal time (10.78 ms). The logger shows 0 errors too.
I would highly appreciate it if anyone could help me diagnose the root cause of this slowdown. Thank you!
Phew! So I figured out the solution. I used https://www.webpagetest.org and came to the conclusion that the initial connection time was very high (~30 s). When that happens, it is most likely a DNS or firewall issue. My issue was DNS-based: I had 2 IPs added as A records to my domain, and one of them was a private IP. The browser actually took ~30 s trying that IP, and once the website loaded, the browser cached the response, so subsequent response times were low. Simply removing the private IP worked for me.

Nginx caching of uwsgi served static files

The Django project is deployed using uWSGI as the application server, which also serves static files from a specified directory (as shown in the command below), and nginx is used as a reverse proxy. This is deployed using Docker.
The uwsgi command to run the server is as follows:
uwsgi -b 65535 --socket :4000 --workers 100 --cpu-affinity 1 --module wui.wsgi --py-autoreload 1 --static-map /static=/project/static;
The application is working fine at this point. I would like to cache the static files in the nginx server, so I referred to the blog https://www.nginx.com/blog/maximizing-python-performance-with-nginx-parti-web-serving-and-caching and included the following configuration in my nginx.conf:
location ~* .(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg
|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid
|midi|wav|bmp|rtf)$ {
expires max;
log_not_found off;
access_log off;
}
After adding this to my nginx conf, the nginx server container exits with the following error:
[emerg] 1#1: invalid number of arguments in "location" directive in /etc/nginx/nginx.conf:43
Is this how uwsgi-served static files can be cached in nginx? If yes, please suggest what has gone wrong here.
My complete nginx.conf is as follows:
events {
    worker_connections 1024; ## Default: 1024
}
http {
    include conf/mime.types;
    # the upstream component nginx needs to connect to
    upstream uwsgi {
        server backend:4000; # for a web port socket (we'll use this first)
    }
    # configuration of the server
    server {
        # the port your site will be served on
        listen 8443 ssl http2 default_server;
        # the domain name it will serve for
        server_name _; # substitute your machine's IP address or FQDN
        charset utf-8;
        ssl_certificate /secrets/server.crt;
        ssl_certificate_key /secrets/server.key;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!MD5;
        add_header Strict-Transport-Security "max-age=31536000" always;
        # Redirect HTTP to HTTPS
        error_page 497 https://$http_host$request_uri;
        # max upload size
        client_max_body_size 75M; # adjust to taste
        uwsgi_read_timeout 600s;
        # Finally, send all non-media requests to the Django server.
        location / {
            uwsgi_pass uwsgi;
            include /config/uwsgi_params; # the uwsgi_params file you installed
        }
        location ~* .(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg
        |jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid
        |midi|wav|bmp|rtf)$ {
            expires max;
            log_not_found off;
            access_log off;
        }
    }
}
Nginx version: 1.16
The problem with your config is that the location block has newlines in the list of file extensions. I tried nginx -t -c <filename> with a modified version of your location block:
location ~* .(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
expires max;
log_not_found off;
access_log off;
}
... and this passes the test!
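One further tweak worth considering: the dot before the extension list is unescaped, so the regex matches any character in that position; escaping it makes the match stricter:
location ~* \.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
    expires max;
    log_not_found off;
    access_log off;
}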

Nginx + Django + Phpmyadmin Configuration

I've migrated my server to Amazon EC2 and am trying to set up the following environment there:
nginx in front serving static content, passing requests to Django for dynamic content. I would also like to use phpMyAdmin in this setup.
I am not a server admin, so I simply followed a few tutorials to get nginx and Django up and running. But I've been working for two days now trying to hook phpMyAdmin into this setup, to no avail. Here is my current server configuration; how can I serve phpMyAdmin here?
server {
    listen 80;
    server_name localhost;
    access_log /opt/django/logs/nginx/vc_access.log;
    error_log /opt/django/logs/nginx/vc_error.log;
    # no security problem here, since / is always passed to upstream
    root /opt/django/;
    # serve directly - analogous for static/staticfiles
    location /media/ {
        # if asset versioning is used
        if ($query_string) {
            expires max;
        }
    }
    location /admin/media/ {
        # this changes depending on your python version
        root /path/to/test/lib/python2.7/site-packages/django/contrib;
    }
    location /static/ {
        # if asset versioning is used
        if ($query_string) {
            expires max;
        }
    }
    location / {
        proxy_pass_header Server;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_connect_timeout 10;
        proxy_read_timeout 10;
        proxy_pass http://localhost:8000/;
    }
    # what to serve if upstream is not available or crashes
    error_page 500 502 503 504 /media/50x.html;
}
This question rightly belongs on http://serverfault.com.
Nevertheless, the first thing you ought to do is configure a separate subdomain for phpMyAdmin, for ease of administration.
So there will be two apps running behind nginx as a reverse proxy: one nginx server block for your Django app above, and another server block (also known as a virtual host) for phpMyAdmin, with a configuration similar to this:
server {
    server_name phpmyadmin.<domain.tld>;
    access_log /srv/http/<domain>/logs/phpmyadmin.access.log;
    error_log /srv/http/<domain.tld>/logs/phpmyadmin.error.log;
    location / {
        root /srv/http/<domain.tld>/public_html/phpmyadmin;
        index index.html index.htm index.php;
    }
    location ~ \.php$ {
        root /srv/http/<domain.tld>/public_html/phpmyadmin;
        fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /srv/http/<domain.tld>/public_html/phpmyadmin/$fastcgi_script_name;
        include fastcgi_params;
    }
}
Each of your server configurations can point at a different domain name via the server_name directive. In this example, server_name phpmyadmin.<domain.tld>;
Here's an example taken from http://wiki.nginx.org/ServerBlockExample
http {
    index index.html;
    server {
        server_name www.domain1.com;
        access_log logs/domain1.access.log main;
        root /var/www/domain1.com/htdocs;
    }
    server {
        server_name www.domain2.com;
        access_log logs/domain2.access.log main;
        root /var/www/domain2.com/htdocs;
    }
}
As you can see, there are two declarations of server inside the enclosing http block. One server declaration should contain the configuration you have for Django, and the other the configuration for phpMyAdmin: two "virtual hosts" ("server" instances) taken care of by nginx.