Nginx Browser Caching with Alias - Django

I'm attempting to set up browser caching on nginx with Django. My current (working) nginx configuration for static files is the following:
server {
    listen 443 ssl;
    server_name SERVER;
    ssl_certificate /etc/ssl/CERT.pem;
    ssl_certificate_key /etc/ssl/KEY.key;
    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;
    client_max_body_size 4G;
    access_log /webapps/site/logs/nginx-access.log;
    error_log /webapps/site/logs/nginx-error.log;

    location /static/ {
        alias /webapps/site/static/;
    }

    # other locations, etc.
}
I would like to set up a rule that caches images and other static assets in the browser to limit the number of requests per page (there are often 100 or so images per page, but they are the same throughout the entire site). I tried adding a few variations of the following rule:
location ~* \.(css|js|gif|jpe?g|png)$ {
    expires 365d;
    add_header Pragma public;
    add_header Cache-Control "public, must-revalidate, proxy-revalidate";
}
However, when I do this, I get nothing but 404 errors (though the configuration file checks out and reloads without errors). I believe that this has something to do with the alias but I am not sure how to fix it.
Any suggestions would be appreciated!

You are missing the root directive for the images location block. Therefore, nginx will look for the files in the default location, which varies by installation, and since you have most likely not placed the files there, you will get a 404 Not Found error.
It works for the /static/ location block because you defined an alias. I suspect, though, that the alias simply points to what should be the root for both. If so, then try:
server {
    listen 443 ssl;
    server_name SERVER;
    root /path/to/web/root/folder/;
    [...]
    # Your locations ... most likely no need for alias in any of them.
}
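Putting that together with the caching rule from the question, a minimal sketch might look like this (the root path is an assumption inferred from the alias above; adjust it to the actual layout):
server {
    listen 443 ssl;
    server_name SERVER;
    # Assumption: assets live under /webapps/site/static/, so a single
    # root lets both the prefix and the regex location resolve correctly.
    root /webapps/site/;

    location /static/ {
        # /static/img/logo.png -> /webapps/site/static/img/logo.png
    }

    location ~* \.(css|js|gif|jpe?g|png)$ {
        # regex locations win over the /static/ prefix match, and they
        # inherit the server-level root, so the files resolve correctly
        expires 365d;
        add_header Cache-Control "public, must-revalidate, proxy-revalidate";
    }
}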

Related

Is Django 500 error Invalid HTTP_HOST header a security concern?

I have a custom Django web app sitting behind an NGINX proxy server.
I am seeing occasional errors come through from my Django server with messages like
Invalid HTTP_HOST header: 'my.domain.com:not-a-port-number'. The domain name provided is not valid according to RFC 1034/1035.
where the part after the colon has been a variety of junk like:
my.domain.com:§port§/api/jsonws/invoke
my.domain.com:xrcr0x4a/?non-numeric-port-bypass
my.domain.com:80#+xxxxtestxxxx/
my.domain.com:80#+${12311231}{{12311231}}/
I am able to replicate a similar error using curl where the request url and the host header are different:
curl https://my.domain.com --header 'Host: my.domain.com:not-a-port-number'
I suspect these are coming from our network security scanning software probing for vulnerabilities in our custom web apps, but I am a little surprised that NGINX lets these requests through to my Django server, especially if, as the 500 error output suggests, the headers are invalidly formatted.
Trying to prepare for the worst: is there anything I should change or be concerned about here for the sake of security? Is there a best practice for this situation that I am unaware of? Should NGINX be filtering out these sorts of requests?
For my own convenience it would be nice not to see the noise of these 500 errors from Django while I am on the lookout for real app-level errors, but simply hiding the alerts seems like a poor solution.
Additional Info:
I have ALLOWED_HOSTS set to 'my.domain.com' in my Django settings.py file.
NGINX configuration:
server {
    listen 80 default_server;
    return 444;
}

server {
    listen 80;
    server_name my.domain.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl default_server;
    ssl_certificate /path/to/cert.cabundle.pem;
    ssl_certificate_key /path/to/cert.key;
    return 444;
}
# App https server.
# Serve static files and pass requests for application urls to gunicorn socket.
server {
    listen 443 ssl;
    server_name my.domain.com;
    ssl_certificate /path/to/cert.cabundle.pem;
    ssl_certificate_key /path/to/cert.key;
    ...

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://unix:/path/to/gunicorn.sock;
    }
}
I have never encountered this situation in practice, but my guess is that nginx selects your last server block to handle the request based on the SNI Client Hello extension, which for some reason (a malformed request from the scanning software?) differs from the Host header value in the encrypted request.
If you want to filter those requests at the nginx level, you can try this:
server {
    listen 443 ssl;
    server_name my.domain.com;
    ssl_certificate /path/to/cert.cabundle.pem;
    ssl_certificate_key /path/to/cert.key;

    if ($http_host != my.domain.com) {
        return 444;
    }
    ...
}
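As a side note, pairing if with a plain return as above is one of the few patterns nginx's own "If is Evil" documentation lists as safe, so this filter should not run into the usual if pitfalls.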
P.S. For security reasons, I never use my real certificate in a stub server block like this one:
server {
    listen 443 ssl default_server;
    ssl_certificate /path/to/cert.cabundle.pem;
    ssl_certificate_key /path/to/cert.key;
    return 444;
}
Use a self-signed one as shown here. The reason is obvious: dear hacker, if you don't know exactly what domain you are scanning, I'm not going to tell you what it is.
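If your nginx is new enough (1.19.4 or later), a stub block arguably needs no certificate at all; a minimal sketch:
server {
    listen 443 ssl default_server;
    # Available since nginx 1.19.4: abort the TLS handshake for names
    # that match no other server block, so neither a real nor a
    # self-signed certificate is needed here.
    ssl_reject_handshake on;
}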

400 the plain http request was sent to https port

I have two servers, QA and Prod. QA works great. Answers for this error are popular and I have seen a few on SO itself. Let me explain why mine is different, but first, here's QA's nginx config:
server {
    listen 80;
    listen [::]:80;
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    # ssl on;
    ssl_certificate /etc/nginx/ssl/cert.crt;
    ssl_certificate_key /etc/nginx/ssl/ssl.key;

    # https://ma.ttias.be/deploying-laravel-websockets-with-nginx-reverse-proxy-and-supervisord/
    ssl_session_timeout 3m;
    ssl_session_cache shared:SSL:30m;
    ssl_protocols TLSv1.1 TLSv1.2;

    # Diffie-Hellman performance improvements
    ssl_ecdh_curve secp384r1;

    root /var/www/api/html/dist/public;
    index index.php index.html index.htm index.nginx-debian.html;
    server_name <PUB_IP> <SOME-DOMAINS>;

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php8.1-fpm.sock;
        fastcgi_param DOCUMENT_ROOT $realpath_root;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
    }
    [..]
}
Mine is different because on QA I do not have separate server blocks for ports 80 and 443; it works as is. On Prod, I'm getting
400 the plain http request was sent to https port
This is triggered from our React server, a separate server used as the web client.
PUB_IP: AWS Public IP address
SOME-DOMAINS: Examples: qa.website.com www.website.com
I'm not a DevOps person, just a full-stack web developer who sometimes deals with basic server installs. Question: should PUB_IP be on the server_name line? I've never seen docs with it there.
Both QA and Prod have the same nginx config, minus the IP addresses etc., and only QA works. OK, let's still make the change on Prod:
# Redirect traffic on port 80 to use HTTPS
# Per: https://aws.amazon.com/blogs/compute/deploying-an-nginx-based-http-https-load-balancer-with-amazon-lightsail/
server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    # ssl on;
    ssl_certificate /etc/nginx/ssl/cert.crt;
    ssl_certificate_key /etc/nginx/ssl/ssl.key;
    [..]
I restarted nginx and got the same error. How weird. I changed QA to use the above and all is well, so something is wrong with Prod. QA was configured before Prod, so maybe I missed a step when installing some packages?
Please advise.
Note: the GoDaddy wildcard SSL certs are valid. I'm using nginx version 1.14.0. Visiting PUB_IP directly over https, the web page does not load; without https it works (when ssl http2 is removed from the listen directives).

Nginx shows only Welcome page after changing server_name from IP address to domain

I use Nginx as a reverse proxy for a Django project with Gunicorn.
After following the DigitalOcean tutorial How To Set Up an ASGI Django App, I was able to visit my project through the server IP address in a browser over http.
In the next step I followed the How To Secure Nginx with Let's Encrypt tutorial from DigitalOcean. Now the site was available with http:// and https:// in front of the IP address.
To redirect the user automatically to https I used code from another tutorial: 5 Steps to Deploy Django.
The outcome is the following file in /etc/nginx/sites-available:
# Force http to https
server {
    listen 80;
    server_name EXAMPLE_IP_ADRESS;
    return 301 https://EXAMPLE_IP_ADRESS$request_uri;
}

server {
    listen 80; # managed by Certbot
    server_name EXAMPLE_IP_ADRESS;

    # serve static files
    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /home/user/projectdir;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
    }

    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/www.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/www.example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
The redirect to https is working fine, so I assume the changes I made according to the last tutorial are okay.
After the tests with EXAMPLE_IP_ADRESS as server_name went well, I changed the server_name to my domain in the form www.example.com.
When I type the domain in the browser, the only result is the Nginx welcome page. So the connection to the server is successful, but Nginx is loading the wrong server block.
After searching for hours I came across this question. The answer by ThorSummoner worked for me; the comment by mauris under that answer, to unlink the default file in sites-enabled, was the command I needed.
unlink sites-enabled/default
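For context, on a typical Debian/Ubuntu layout (an assumption; adjust paths to your install), the full sequence would be something like:
sudo unlink /etc/nginx/sites-enabled/default
sudo nginx -t                  # verify the configuration parses
sudo systemctl reload nginx    # apply without dropping connections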
(I posted this Q&A because I searched for hours for the solution and hope it reduces the search time for others who have a Django project with the same problem.)

Django Nginx Browser Caching Configuration

I am trying to configure Nginx to leverage browser caching of static files.
My configuration file is as follows:
server {
    listen 80;
    server_name localhost;
    client_max_body_size 4G;
    access_log /home/user/webapps/app_env/logs/nginx-access.log;
    error_log /home/user/webapps/app_env/logs/nginx-error.log;

    location /static/ {
        alias /home/user/webapps/app_env/static/;
    }

    location /media/ {
        alias /home/user/webapps/app_env/media/;
    }
    ...
}
When I add the following caching configuration, the server fails to load static files and I am not able to restart Nginx.
location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
    expires 365d;
}
The nginx-error log shows open() "/usr/share/nginx/html/media/cover_photos/292f109e-17ef-4d23-b0b5-bddc80708d19_thumbnail.jpeg" failed (2: No such file or directory)
I have done quite some research online but cannot solve this problem.
Can anyone help me or just give me some suggestions on implementing static file caching in Nginx?
Thank you!
Reference: Leverage browser caching for Nginx
Again, I have to answer my own question.
The root of the problem lies in the path.
I found the answer from Dayo; here I quote:
You are missing the root directive for the images location block.
Therefore, nginx will look for the files in the default location,
which varies by installation, and since you have most likely not
placed the files there, you will get a 404 Not Found error.
Answer from Dayo
Thus, I added the root path to my configuration file as follows:
root /home/user/webapps/app_env/;
The whole configuration will look like this:
server {
    listen 80;
    server_name localhost;
    root /home/user/webapps/app_env/;
    client_max_body_size 4G;
    access_log /home/user/webapps/app_env/logs/nginx-access.log;
    error_log /home/user/webapps/app_env/logs/nginx-error.log;

    location /static/ {
        alias /home/user/webapps/app_env/static/;
    }

    location /media/ {
        alias /home/user/webapps/app_env/media/;
    }

    location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
        expires 365d;
    }
    ...
}
And everything just works nicely.
I hope people with the same problem can learn from this.
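As an aside, a sketch of an alternative that avoids adding a root directive entirely: keep the existing alias locations and set the cache headers inside them (paths as in the question). Note this caches everything under those prefixes, not just the listed extensions:
location /static/ {
    alias /home/user/webapps/app_env/static/;
    expires 365d;
}

location /media/ {
    alias /home/user/webapps/app_env/media/;
    expires 365d;
}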

Nginx: 413 entity too large - file does not reach the application

I'm using Nginx and uWSGI with a WSGI app. When I try to upload an image, sometimes the application does not get the image, and there used to be a 413 Request Entity Too Large error.
I fixed this issue by adding client_max_body_size 4M; and my Nginx conf looks something like:
# sample nginx server block here
The error stopped showing, but the file still does not reach the application. I don't understand why it works on some computers and doesn't work on others.
If you're getting 413 Request Entity Too Large errors when uploading, you need to increase the size limit in nginx.conf or another configuration file. Add client_max_body_size xxM inside the server section, where xx is the size (in megabytes) that you want to allow.
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        client_max_body_size 20M;
        listen 80;
        server_name localhost;

        # Main location
        location / {
            proxy_pass http://127.0.0.1:8000/;
        }
    }
}
It means the uploaded file is larger than the configured maximum body size. See client_max_body_size.
So try the following instead of using a fixed value:
server {
    [...]
    client_max_body_size 0;
    [...]
}
A value of 0 disables the max upload size check. I'd recommend using a fixed value such as 3M or 10M instead, though.
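If only one endpoint needs large uploads, another option is to scope the directive to a location; a minimal sketch (the /upload/ path and backend address are illustrative):
server {
    listen 80;
    server_name localhost;
    client_max_body_size 1M;   # strict default for everything else

    # allow larger bodies only where uploads actually happen
    location /upload/ {
        client_max_body_size 50M;
        proxy_pass http://127.0.0.1:8000;
    }

    location / {
        proxy_pass http://127.0.0.1:8000;
    }
}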