400 the plain http request was sent to https port - amazon-web-services

I have two servers, QA and Prod. QA works great. Questions with this title are popular and I've seen a few on SO itself. Let me explain why mine is different, but first, here's QA's nginx config:
server {
    listen 80;
    listen [::]:80;
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    # ssl on;
    ssl_certificate /etc/nginx/ssl/cert.crt;
    ssl_certificate_key /etc/nginx/ssl/ssl.key;

    # https://ma.ttias.be/deploying-laravel-websockets-with-nginx-reverse-proxy-and-supervisord/
    ssl_session_timeout 3m;
    ssl_session_cache shared:SSL:30m;
    ssl_protocols TLSv1.1 TLSv1.2;

    # Diffie-Hellman performance improvements
    ssl_ecdh_curve secp384r1;

    root /var/www/api/html/dist/public;
    index index.php index.html index.htm index.nginx-debian.html;
    server_name <PUB_IP> <SOME-DOMAINS>;

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php8.1-fpm.sock;
        fastcgi_param DOCUMENT_ROOT $realpath_root;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
    }

    [..]
}
Mine is different because on QA I do not have separate server blocks for ports 80 and 443; it works as is. For Prod, I'm getting
400 the plain http request was sent to https port
This is triggered from our React server, a separate server used as the client (web).
PUB_IP: AWS Public IP address
SOME-DOMAINS: Examples: qa.website.com www.website.com
I'm not a DevOps person, just a full-stack web developer who sometimes deals with basic server installs. Question: should PUB_IP be on the server_name line? I've never seen docs with it there.
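For context, this is roughly what I mean; <PUB_IP> and the domains are just the placeholders from above:

server_name <PUB_IP> qa.website.com www.website.com;   # what we currently have
# vs. the form I usually see in docs:
server_name qa.website.com www.website.com;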
Both QA and Prod have the same nginx config, minus the IP addresses etc., and only QA works. OK, let's still make the change on Prod:
# Redirect traffic on port 80 to use HTTPS
# Per: https://aws.amazon.com/blogs/compute/deploying-an-nginx-based-http-https-load-balancer-with-amazon-lightsail/
server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    # ssl on;
    ssl_certificate /etc/nginx/ssl/cert.crt;
    ssl_certificate_key /etc/nginx/ssl/ssl.key;
    [..]
Restarted nginx and I'm getting that same error. How weird. I changed QA to use the above and all is well, so something is wrong with Prod. QA was configured before Prod, so maybe I missed a step installing some packages?
Please advise.
Note: the GoDaddy wildcard SSL certs are valid. I'm using nginx version 1.14.0. Visiting PUB_IP directly over https, the web page does not load. Visiting without https works (when I remove ssl http2 from the server's listen directives).
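For reference, this error shows up whenever a client sends plain HTTP to a port where nginx expects TLS. One pattern that comes up in answers to similarly titled questions (not necessarily the fix for my Prod issue) is to intercept nginx's non-standard 497 code on the combined block and redirect, roughly like this, reusing the cert paths from the config above:

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    ssl_certificate /etc/nginx/ssl/cert.crt;
    ssl_certificate_key /etc/nginx/ssl/ssl.key;

    # 497 is the internal code nginx raises when a plain HTTP request
    # arrives on an HTTPS port; redirect instead of returning the 400 page
    error_page 497 =301 https://$host$request_uri;

    [..]
}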

Related

Is Django 500 error Invalid HTTP_HOST header a security concern?

I have a custom Django web app sitting behind an NGINX proxy server.
I am seeing occasional errors come through from my Django server with messages like
Invalid HTTP_HOST header: 'my.domain.com:not-a-port-number'. The domain name provided is not valid according to RFC 1034/1035.
where the part after the colon has been a variety of wack garbage like,
my.domain.com:§port§/api/jsonws/invoke
my.domain.com:xrcr0x4a/?non-numeric-port-bypass
my.domain.com:80#+xxxxtestxxxx/
my.domain.com:80#+${12311231}{{12311231}}/
I am able to replicate a similar error using curl where the request url and the host header are different:
curl https://my.domain.com --header 'Host: my.domain.com:not-a-port-number'
I suspect that these are likely coming from our network security scanning software trying to find vulnerabilities in our custom web apps, but I am a little surprised that NGINX allows these requests to make it through to my Django server, especially if, as the 500 error output suggests, these are invalidly formatted headers.
Trying to prepare for the worst, is there anything I should change or be concerned about with this for the sake of security? Is there a best practice for this situation that I am unaware of? Should NGINX be filtering out these sorts of requests?
For my own convenience it would be nice to not to see the noise of these 500 errors coming from Django while I am on the lookout for real app level errors, but simply hiding the alerts seems to be a poor solution.
Additional Info:
I have ALLOWED_HOSTS set to 'my.domain.com' in my Django settings.py file.
NGINX configuration:
server {
    listen 80 default_server;
    return 444;
}

server {
    listen 80;
    server_name my.domain.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl default_server;
    ssl_certificate /path/to/cert.cabundle.pem;
    ssl_certificate_key /path/to/cert.key;
    return 444;
}

# App https server.
# Serve static files and pass requests for application urls to gunicorn socket.
server {
    listen 443 ssl;
    server_name my.domain.com;
    ssl_certificate /path/to/cert.cabundle.pem;
    ssl_certificate_key /path/to/cert.key;
    ...
    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://unix:/path/to/gunicorn.sock;
    }
}
I've never encountered such a situation in practice, but my guess is that nginx selects your last server block to handle the request based on the information from the SNI Client Hello extension, which for some reason (a malformed request from the scanning software?) differs from the Host header value in the encrypted request.
If you want to filter those requests at the nginx level, you can try this:
server {
    listen 443 ssl;
    server_name my.domain.com;
    ssl_certificate /path/to/cert.cabundle.pem;
    ssl_certificate_key /path/to/cert.key;
    if ($http_host != my.domain.com) {
        return 444;
    }
    ...
}
PS. For security considerations, I never use my real certificate in a stub server block like this one:
server {
    listen 443 ssl default_server;
    ssl_certificate /path/to/cert.cabundle.pem;
    ssl_certificate_key /path/to/cert.key;
    return 444;
}
Use a self-signed one as shown here. The reason is obvious: dear hacker, if you don't know exactly which domain you are scanning right now, I'm not going to tell you what it is.
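For illustration, a stub like that with a self-signed certificate could look as follows; the /etc/nginx/ssl/selfsigned.* paths are placeholders, and the openssl command in the comment is just one common way to generate such a pair:

# Generate a throwaway pair first, e.g.:
# openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
#   -keyout /etc/nginx/ssl/selfsigned.key -out /etc/nginx/ssl/selfsigned.crt
server {
    listen 443 ssl default_server;
    ssl_certificate /etc/nginx/ssl/selfsigned.crt;
    ssl_certificate_key /etc/nginx/ssl/selfsigned.key;

    # Drop the connection without revealing which real domain lives here
    return 444;
}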

Nginx shows only Welcome page after changing server_name from IP address to domain

I use Nginx as a reverse proxy for a Django project with Gunicorn.
After following this tutorial from Digital Ocean, How To Set Up an ASGI Django App, I was able to visit my project through the server IP address in a browser over http.
In the next step I followed the How To Secure Nginx with Let's Encrypt tutorial from Digital Ocean. Now the site was available with http:// and https:// in front of the IP address.
To redirect the user automatically to https I used code from this tutorial: 5 Steps to deploy Django.
The outcome is the following file in /etc/nginx/sites-available:
# Force http to https
server {
    listen 80;
    server_name EXAMPLE_IP_ADRESS;
    return 301 https://EXAMPLE_IP_ADRESS$request_uri;
}

server {
    listen 80; # managed by Certbot
    server_name EXAMPLE_IP_ADRESS;

    # serve static files
    location = /favicon.ico { access_log off; log_not_found off; }
    location /static/ {
        root /home/user/projectdir;
    }
    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
    }

    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/www.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/www.example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
The redirect to https is working fine, so I assume the changes I made according to the last tutorial are okay.
After the tests with EXAMPLE_IP_ADRESS as the server_name went well, I changed the server_name to my domain in the form www.example.com.
When I type the domain in the browser, the only result is the Nginx Welcome page. So the connection to the server is successful, but Nginx is loading the wrong server block.
After searching for hours I came across this question. The answer by ThorSummoner there worked for me; the comment by mauris under that answer, to unlink the default file in sites-enabled, was the command I needed.
unlink sites-enabled/default
(I posted this Q&A because I searched for hours for the solution and hope it reduces the search time for others who also have a Django project with the same problem.)
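For anyone wondering why the Welcome page wins: the stock Debian/Ubuntu default site in sites-enabled/default is marked default_server for port 80, so any request whose Host header doesn't match one of your server_name entries falls through to it and gets the Welcome page; unlinking it removes that fallback. For reference, that file looks roughly like this:

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;

    server_name _;

    location / {
        try_files $uri $uri/ =404;
    }
}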

Prevent Nginx from changing host

I am building an application which is right now working on localhost. I have my entire dockerized application up and running at https://localhost/.
HTTP request is being redirected to HTTPS
My nginx configuration in docker-compose.yml is handling all the requests as it should.
I want my application to be accessible from anywhere, hence I tried using ngrok to route requests to my localhost. Actually, I have a mobile app in development, so I need a local server for the APIs.
Now, when I enter ngrok's URL, like abc123.ngrok.io, in the browser, nginx converts it to https://localhost/. That works for my host system's browser, since my web app is running there anyway, but when I open the same URL in my mobile emulator it doesn't work.
I am a newbie to nginx. Any suggestions are welcome.
Here's my nginx configuration.
nginx.conf
upstream web {
    ip_hash;
    server web:443;
}

# Redirect all HTTP requests to HTTPS
server {
    listen 80;
    server_name localhost;
    return 301 https://$server_name$request_uri;
}

# for https requests
server {
    # Pass request to the web container
    location / {
        proxy_pass https://web/;
    }
    location /static/ {
        root /var/www/mysite/;
    }

    listen 443 ssl;
    server_name localhost;

    # SSL properties
    # (http://nginx.org/en/docs/http/configuring_https_servers.html)
    ssl_certificate /etc/nginx/conf.d/certs/localhost.crt;
    ssl_certificate_key /etc/nginx/conf.d/certs/localhost.key;

    root /usr/share/nginx/html;
    add_header Strict-Transport-Security "max-age=31536000" always;
}
I got this configuration from a tutorial.
First of all, you set up a redirect of every HTTP request to HTTPS:
# Redirect all HTTP requests to HTTPS
server {
    listen 80;
    server_name localhost;
    return 301 https://$server_name$request_uri;
}
You are using the $server_name variable here, so every /some/path?request_string HTTP request to your app gets redirected to https://localhost/some/path?request_string. At least change the return directive to
return 301 https://$host$request_uri;
Check this question for information about the difference between the $host and $server_name variables.
If these are the only server blocks in your nginx config, you can safely remove the server_name localhost; directive altogether; those blocks will still remain the default blocks for all incoming requests on TCP ports 80 and 443.
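For illustration, a minimal sketch of that redirect block with server_name removed and $host used instead:

server {
    listen 80;
    # $host carries whatever hostname the client actually used
    # (localhost or abc123.ngrok.io), so the redirect stays on that host
    return 301 https://$host$request_uri;
}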
The second one: if you are using a self-signed certificate for localhost, be ready for browser complaints about a mismatched certificate (issued for localhost, appearing at abc123.ngrok.io). If that doesn't break your mobile app, it's OK; but if it does, you can get a certificate for your abc123.ngrok.io domain from Let's Encrypt for free after you start your ngrok connection; check this page for available ACME clients and options. Or you can disable HTTPS altogether if it isn't strictly required for your debugging process, and just use this single server block:
server {
    listen 80;
    # Pass request to the web container
    location / {
        proxy_pass https://web/;
    }
    location /static/ {
        root /var/www/mysite/;
    }
}
Of course this should not be used in production, only for debugging.
And the last one: I don't see any sense in encrypting traffic between the nginx and web containers inside Docker itself, especially if you have already set up the HTTP-to-HTTPS redirect in nginx. It gives you nothing in terms of security, only some extra overhead. Use plain HTTP on port 80 for communication between the nginx and web containers:
upstream web {
    ip_hash;
    server web:80;
}

server {
    ...
    location / {
        proxy_pass http://web;
    }
}

Django & Certbot - unauthorized, Invalid response (HTTPS)

I'm trying to configure Certbot (Let's Encrypt) with Nginx.
I get this error:
- The following errors were reported by the server:
Domain: koomancomputing.com
Type: unauthorized
Detail: Invalid response from
http://koomancomputing.com/.well-known/acme-challenge/xvDuo8MqaKvUhdDMjE3FFbnP1fqbp9R66ah5_uLdaZk
[2600:3c03::f03c:92ff:fefb:794b]: "<html>\r\n<head><title>404 Not
Found</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>404
Not Found</h1></center>\r\n<hr><center>"
Domain: www.koomancomputing.com
Type: unauthorized
Detail: Invalid response from
http://www.koomancomputing.com/.well-known/acme-challenge/T8GQaufb9qhKIRAva-_3IPfdu6qsDeN5wQPafS0mKNA
[2600:3c03::f03c:92ff:fefb:794b]: "<html>\r\n<head><title>404 Not
Found</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>404
Not Found</h1></center>\r\n<hr><center>"
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A/AAAA record(s) for that domain
contain(s) the right IP address.
- Your account credentials have been saved in your Certbot
configuration directory at /etc/letsencrypt. You should make a
secure backup of this folder now. This configuration directory will
also contain certificates and private keys obtained by Certbot so
making regular backups of this folder is ideal.
In /etc/nginx/sites-available/koomancomputing:
server {
    listen 80;
    server_name koomancomputing.com www.koomancomputing.com;

    location = /favicon.ico { access_log off; log_not_found off; }
    location /staticfiles/ {
        root /home/kwaku/koomancomputing;
    }
    location /media/ {
        root /home/kwaku/koomancomputing;
    }
    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
    }
}
My DNS A/AAAA records:
I didn't know what to do, so I did a search and found the django-letsencrypt app, but I don't know how to use it.
Your domain has a proper AAAA record configured to your server over IPv6, and certbot chose that to validate your server.
However, your server block as configured under nginx only listens on port 80 over IPv4 for your domain. When certbot asks Let's Encrypt to access your challenge and issue a certificate, nginx isn't configured to properly respond with the challenge over IPv6. In this case it often returns something else (such as a 404, as in your case, or a default site).
You can resolve this by modifying the first two lines to also listen on all IPv6 addresses for your server:
server {
    listen 80;
    listen [::]:80;
    # other configuration
}
After editing, restart nginx and run certbot again.
Your Nginx server is responding with a 404 error because it does not define a route to /.well-known needed by certbot to verify challenges. You need to modify the Nginx config file to tell it how to respond to certbot's challenges.
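For reference, a minimal sketch of such a location block, assuming certbot's webroot plugin is used and /var/www/certbot is the (hypothetical) webroot path passed to it:

location ^~ /.well-known/acme-challenge/ {
    # Serve the challenge files certbot writes with its webroot plugin, e.g.
    # certbot certonly --webroot -w /var/www/certbot -d koomancomputing.com
    root /var/www/certbot;
    default_type "text/plain";
}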
Certbot can update the Nginx config file for you.
First, make sure your config file is enabled. Run sudo service nginx reload and check for the presence of a file called /etc/nginx/sites-enabled/koomancomputing.
Then, run certbot --nginx -d koomancomputing.com -d www.koomancomputing.com
The --nginx flag tells certbot to find an Nginx config file with a matching server name and update that file with SSL info.
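Assuming certbot succeeds, the server block in sites-available typically ends up looking roughly like this; the paths are the standard Let's Encrypt locations, and the exact lines certbot writes can differ by version:

server {
    listen 80;
    listen [::]:80;
    server_name koomancomputing.com www.koomancomputing.com;
    # ... existing location blocks ...

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/koomancomputing.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/koomancomputing.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}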
server {
    listen 80;
    listen [::]:80;
    # other configuration
}
This works for both IPv4 and IPv6; after adding it, restart nginx.
For me, it worked after I removed certbot and installed the latest version using snapd.
I use the Cloudflare proxy option and it failed with certbot 0.31.0.
After installing certbot 1.27 and configuring the cert anew, it works fine even with the proxy toggle on in Cloudflare.

Nginx reverse proxy for domain.com:port

I have a web application running and publicly available on http://example.com:8099
To run the application over HTTPS, the app documentation suggests using a standard reverse proxy, because it does not natively support HTTPS. All the guides I found are about proxying with just a domain root and do not take the port into consideration.
To begin with, I'm not sure which port I should even listen on in the first place. Is it 443, or 8099?
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name example.com;

    error_log /var/log/nginx/sonar-error.log;
    access_log /var/log/nginx/sonar-access.log;

    location / {
        proxy_pass http://localhost:8099;
    }
}
On my server (an AWS EC2 instance), the application is also running on that same port, at http://localhost:8099, just as it is on the domain.
I've tried different configurations and checked whether anything is logged to these log files, but they were empty, so I don't think I'm doing it right.
You need to listen on port 443 (the port Nginx is allowing connections on), and proxy_pass to 8099 (the port application traffic is being passed to).
You also need to ensure the server_name line contains the DNS name that traffic is being requested for, or an underscore in quotation marks ("_"), to ensure all requests are matched to that server entry.
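Putting that together, a minimal sketch of such a proxy configuration under those assumptions (the certificate paths are placeholders):

# Optional: send plain-HTTP visitors to HTTPS
server {
    listen 80;
    listen [::]:80;
    server_name example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name example.com;

    ssl_certificate /path/to/fullchain.pem;       # placeholder
    ssl_certificate_key /path/to/privkey.pem;     # placeholder

    location / {
        # Forward everything to the app listening locally on 8099
        proxy_pass http://localhost:8099;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}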