Nginx 2 different domains on one server - web-services

I'd like to know how to configure nginx to get two domains working on one server (one IP address).
I want to set up a Keycloak SSO next to a BookStack instance.
My issue is that when I want to access bookstack.domain.com it redirects to keycloak.domain.com.
Here's my /etc/nginx/conf.d/keycloak.conf:
upstream keycloak {
    # Use IP Hash for session persistence
    ip_hash;

    # List of Keycloak servers
    server 127.0.0.1:8080;
}

server {
    listen 80;
    server_name keycloak.domain.com;

    # Redirect all HTTP to HTTPS
    location / {
        return 301 https://$server_name$request_uri;
    }
}

server {
    listen 443 ssl http2;
    server_name keycloak.domain.com;

    ssl_certificate /path/to/certificate.crt;
    ssl_certificate_key /path/to/certificate_key.key;
    ssl_session_cache shared:SSL:1m;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://keycloak;
    }
}
Here's my /etc/nginx/conf.d/bookstack.conf:
server {
    listen 3480;
    access_log /var/log/nginx/bookstack_access.log;
    error_log /var/log/nginx/bookstack_error.log;
    server_name bookstack.domain.com;
    root /var/www/bookstack/public;

    #
    # redirect all HTTP requests to HTTPS with a 301 Moved Permanently response.
    #
    return 301 https://$host$request_uri;
}

server {
    listen 5443 ssl http2;
    ssl_certificate /path/to/certificate.crt;
    ssl_certificate_key /path/to/certificate_key.key;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AE;
    ssl_prefer_server_ciphers on;
    ssl_dhparam /etc/nginx/dhparam.pem;
    server_name bookstack.domain.com;

    # HSTS
    add_header Strict-Transport-Security "max-age=63072000" always;

    root /var/www/bookstack/public;
    access_log /var/log/nginx/bookstack_access.log;
    error_log /var/log/nginx/bookstack_error.log;
    client_max_body_size 1G;
    fastcgi_buffers 64 4K;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ ^/(?:\.htaccess|data|config|db_structure\.xml|README) {
        deny all;
    }

    location ~ \.php(?:$|/) {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_pass unix:/var/run/php-fpm.sock;
    }

    location ~* \.(?:jpg|jpeg|gif|bmp|ico|png|css|js|swf)$ {
        expires 30d;
        access_log off;
    }
}
Please let me know :)

This is exactly the expected nginx behavior for the given configuration. One of the server blocks always acts as the default server for any request arriving on a given IP/port combination, no matter what the Host HTTP header value is. Here is the official documentation on this subject. You can use the default_server parameter of the listen directive to explicitly specify which server block should act as the default server; otherwise it will be the first server block listening on that IP/port. On multihomed servers things can be more complicated, as discussed here.
Now back to the question. You have four server blocks in your configuration: the first one listens on TCP port 80 (the default port for the http:// scheme), the second one listens on TCP port 443 (the default port for the https:// scheme), one listens on port 3480 and the last one listens on port 5443. Since only one server block listens on each port, each server block acts as the default server for any request coming to that port. So if you type http://bookstack.domain.com in your browser address bar, the default port 80 for the http:// scheme will be used and your request will be redirected to https://keycloak.domain.com. You are using
return 301 https://$server_name$request_uri;
for redirection, and the $server_name variable will always be keycloak.domain.com for that server block (read this answer to understand the difference between the $host, $http_host and $server_name variables). If you explicitly specify the port and type http://bookstack.domain.com:3480, your request will be served by the third server block and thus redirected to https://bookstack.domain.com (here you are using the $host variable, which is right). The default TCP port for the https:// scheme is 443. But the only server block that listens on that port is for keycloak.domain.com! Oops. The only way you can reach your bookstack.domain.com is to type https://bookstack.domain.com:5443 in your browser. And if you correctly understood all the above information, you know you can type https://keycloak.domain.com:5443 too, and it won't make any difference.
Well, I tried to explain what happened here with your nginx configuration. Get rid of the non-standard ports, as @Evil_skunk recommends in his answer. Don't forget to clear your browser cache before trying the new configuration - permanent HTTP 301 redirects are often cached by browsers, unlike temporary HTTP 302 redirects.
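To make that concrete, here is a minimal sketch (not your exact config; certificates and locations omitted) of the usual layout: both HTTPS server blocks listen on the standard port 443, nginx picks the block whose server_name matches the Host header, and default_server marks the block that catches everything else:
server {
    listen 443 ssl http2 default_server;   # catches requests whose Host matches nothing else
    server_name keycloak.domain.com;
    # ssl_certificate / ssl_certificate_key ... then proxy_pass http://keycloak;
}
server {
    listen 443 ssl http2;                  # same port; selected purely by server_name
    server_name bookstack.domain.com;
    # ssl_certificate / ssl_certificate_key ... then the PHP-FPM locations
}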

Your keycloak config seems OK.
It listens on port 80 (http) and port 443 (https), and all requests to 80 (http) are redirected to 443 (https).
Your bookstack config looks wrong to me.
It does not listen on port 80 or 443 (instead it listens on 5443 and 3480). If you don't have some kind of special port forwarding, then a request to bookstack.domain.com will never reach the server block defined in bookstack.conf, and as a result the only matching server will serve the request => keycloak.
You should change bookstack.conf's listen ports:
server {
    listen 80;
    # ... redirect to https
}
server {
    listen 443 ssl;
    # ssl config, webroot, ...
}
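Filled in with the values from the question, a corrected bookstack.conf could look roughly like this (a sketch only - the cipher settings, PHP handling details and remaining locations are assumed to stay exactly as in the original file):
server {
    listen 80;
    server_name bookstack.domain.com;
    return 301 https://$host$request_uri;
}
server {
    listen 443 ssl http2;
    server_name bookstack.domain.com;
    ssl_certificate /path/to/certificate.crt;
    ssl_certificate_key /path/to/certificate_key.key;
    root /var/www/bookstack/public;
    index index.php;
    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }
    location ~ \.php(?:$|/) {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php-fpm.sock;
    }
}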

Related

Run Daphne in production on (or forwarded to?) port 443

I am trying to build a speech recognition-based application. It runs on Django with Django-channels and Daphne, and Nginx as the web server, on an Ubuntu EC2 instance on AWS. It should run in the browser, so I am using WebRTC to get the audio stream – or at least that’s the goal. I'll call my domain mysite.co here.
Django serves the page properly on http://www.mysite.co:8000 and Daphne seems to run too; the logs show:
2022-10-17 13:05:02,950 INFO Starting server at fd:fileno=0, unix:/run/daphne/daphne0.sock
2022-10-17 13:05:02,951 INFO HTTP/2 support enabled
2022-10-17 13:05:02,951 INFO Configuring endpoint fd:fileno=0
2022-10-17 13:05:02,965 INFO Listening on TCP address [Private IPv4 address of my EC2 instance]:8000
2022-10-17 13:05:02,965 INFO Configuring endpoint unix:/run/daphne/daphne0.sock
I used the Daphne docs to set up Daphne with supervisor. There, they use port 8000.
My first Nginx config file nginx.conf (I shouldn't use that one, should I?) looks like this:
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    types_hash_max_size 2048;
    # server_tokens off;
    server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    # Gzip Settings
    gzip on;

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;

    upstream channels-backend {
        server mysite.co:80;
    }

    server {
        location / {
            try_files $uri @proxy_to_app;
        }
        location @proxy_to_app {
            proxy_pass http://mysite.co;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
# and the mail settings, but I don't use them
Currently, the homepage of my server just serves an HTML page that I set up in my first Nginx server block (I set this up while figuring out how to get TLS working on Nginx; I don't need the HTML here):
server {
    root /var/www/mysite/html;
    index index.html index.htm index.nginx-debian.html;
    server_name mysite.co www.mysite.co;
    location / {
        try_files $uri $uri/ =404;
    }
    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/mysite.co/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/mysite.co/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = www.mysite.co) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    if ($host = mysite.co) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    listen 80;
    listen [::]:80;
    server_name mysite.co www.mysite.co;
    return 404; # managed by Certbot
}
I need WebRTC to access the audio stream that should run through Daphne, but for that, I need HTTPS because you can’t access user media via unencrypted protocols. I have created a TLS cert with Let’s Encrypt for Nginx (cf. above), but of course this only works on port 443. I can’t (and probably shouldn’t be able to?) reach port 8000 via HTTPS.
I am a bit lost at this point, my Nginx experience is very limited. Do I need to bind port 8000 to 443? If so, what do I need to do with my Nginx config for the HTML file that is currently served there? Am I on the right track at all?
If I should share other config files from Nginx or supervisor, please let me know.
I was on the wrong track; actually it's very straightforward. There's no need to run it on port 8000 - you can run it conveniently on 443.
You don't configure the SSL in the Nginx server blocks; instead you do it right where you start the Daphne server, by adding -e ssl:443:privateKey=key.pem:certKey=crt.pem to your daphne command. You must have generated an SSL certificate previously, of course - Let's Encrypt works just fine here as well; privateKey is then privkey.pem and certKey is fullchain.pem.
(This snippet won't work on its own; depending on your needs you might have to add other flags as well, like -u or --endpoint.)

Django CSRF "Referer Malformed"... but it isn't

I'm trying to test a deployment config for a Django setup that works fine in development mode.
I have name-based routing via Nginx's ssl_preread module on a load balancer, and SSL terminates at another Nginx instance on the server itself where the requests are proxied to uwsgi by socket.
server {
    server_name dev.domain.net;
    listen 80 proxy_protocol;
    listen [::]:80 proxy_protocol;
    location / {
        return 301 https://$host$request_uri;
    }
}
server {
    server_name dev.domain.net;
    listen 443 ssl;
    listen [::]:443 ssl;
    location / {
        include uwsgi_params;
        uwsgi_pass unix:/run/uwsgi/website.sock;
    }
    location /favicon.ico {
        access_log off; log_not_found off;
    }
}
I have uWSGI set to log %(host) and %(referer); they match in the logs.
In my uwsgi_params I'm passing $host and $referer like so (since I'm using name-based routing, I pick up the $server_name variable that triggered the Nginx response):
uwsgi_param HTTP_REFERER $server_name;
uwsgi_param HTTP_HOST $host;
Adding (or taking away) protocols and ports to these makes no difference. Taking them away predictably generates a Django ALLOWED_HOSTS debug error.
I've confirmed that my ALLOWED_HOSTS includes the $host. I've tried adding CSRF_TRUSTED_ORIGINS for the same $host variable. I've tried setting CSRF_COOKIE_DOMAIN for the same $host variable. I have CSRF_COOKIE_SECURE set to True per the docs recommendation.
No matter what combination of the above settings is used, I get
Referer checking failed - Referer is malformed.
on all POST requests.
Short answer: don't use the uwsgi unix socket; instead use http-socket and send the proxied request to localhost over unencrypted http (in the uwsgi ini file):
http-socket = 127.0.0.1:8001
In nginx, get rid of the uwsgi proxy params and simply proxy_pass, with proxy_protocol headers enabled:
server {
    server_name dev.domain.net;
    listen 443 ssl proxy_protocol;
    listen [::]:443 ssl proxy_protocol;
    location / {
        proxy_pass http://127.0.0.1:8001;
    }
    location /favicon.ico {
        access_log off; log_not_found off;
    }
}
At that point you can enable all of the recommended deployment settings in the Django docs, explicitly declare your ALLOWED_HOSTS and everything works fine.
This is a rather silly series of hoops with no apparent correct set of answers, especially considering referers are client headers that are easily forged.
The better answer is that Django needs to get rid of the client referer check in its CSRF mechanism; it's pointless and makes no sense...
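For reference, a common variant of that proxy block also forwards the original Host header and scheme, so that Django's ALLOWED_HOSTS and CSRF origin checks see what the client actually sent - a minimal sketch, assuming Django is configured to trust the forwarded scheme (e.g. via SECURE_PROXY_SSL_HEADER):
server {
    server_name dev.domain.net;
    listen 443 ssl proxy_protocol;
    listen [::]:443 ssl proxy_protocol;
    location / {
        proxy_pass http://127.0.0.1:8001;
        # pass the original host and scheme through to Django
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}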

For nginx, am I listening to port 443 or port 3000 for this url https://localhost:3000?

I am trying to navigate through the weeds of nginx and reverse proxy passing and one area that I am getting confused on is the port mappings. Here is an example nginx configuration file:
server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;
    server_name www.domain.com;
    passenger_enabled on;
    root /home/ubuntu/app/public;
    include snippets/self-signed.conf;
    include snippets/ssl-params.conf;
}
What I am specifying here is that my app should listen on port 443 because it has a self-signed certificate on it. It won't accept port 80 (http), only 443. Here is an example I found about proxy_passing to localhost, which is what I want to do. Here is the example:
server {
    listen 443;
    server_name localhost;
    ssl on;
    ssl_certificate server.crt;
    ssl_certificate_key server.key;
    ssl_session_timeout 5m;
    ssl_protocols SSLv2 SSLv3 TLSv1;
    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
    ssl_prefer_server_ciphers on;
    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Client-Verify SUCCESS;
        proxy_set_header X-Client-DN $ssl_client_s_dn;
        proxy_set_header X-SSL-Subject $ssl_client_s_dn;
        proxy_set_header X-SSL-Issuer $ssl_client_i_dn;
        proxy_read_timeout 1800;
        proxy_connect_timeout 1800;
    }
}
Here is what I don't understand and could use some clarification on. What port/URL am I listening on in the second example? In the server block I see this:
listen 443;
server_name localhost;
That means we are listening on localhost on port 443 over https. That is simple enough to understand so far. Now we get to the location block.
location / {
proxy_pass http://localhost:3000;
What is going on here? If I started nginx and typed http://localhost:3000 in the address bar, what is going to happen? Will it fail because I typed http? Shouldn't it have been https://localhost:3000? Am I listening on port 80, 443, or 3000?
Also, a small side question: what would happen if I opened up my Postman application and sent a GET request to http://localhost:3000 or https://localhost with the second configuration? Would it hit the nginx server or try to reach my laptop's localhost?
Your desired reference uses the ssl on directive, which has been deprecated.
Take a look at http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl
When you specify which ports to listen on, you need to group them by scheme (HTTP|HTTPS).
An example server block would look like:
server {
    listen 80 http2 default_server;
    listen [::]:80 http2 default_server;
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;
    server_name www.domain.com;
    passenger_enabled on;
    root /home/ubuntu/app/public;
    include snippets/self-signed.conf;
    include snippets/ssl-params.conf;
}
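Putting the two examples from the question together, a sketch of the intended setup might look like this (assuming the app itself listens on 127.0.0.1:3000 and nginx terminates TLS): the browser only ever talks to nginx on ports 80/443, and port 3000 is reached internally via proxy_pass.
server {
    listen 80;
    server_name www.domain.com;
    return 301 https://$host$request_uri;   # plain http gets redirected to https
}
server {
    listen 443 ssl http2;
    server_name www.domain.com;
    include snippets/self-signed.conf;
    include snippets/ssl-params.conf;
    location / {
        # nginx listens on 443; the app on port 3000 is only reached internally
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}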

Adding SSL to the Django app, Ubuntu 16+, DigitalOcean

I am trying to add my SSL certificate to my Django application according to this tutorial. I can open my website using 'https://ebluedesign.online', but web browsers return something along the lines of 'The certificate cannot be verified by a trusted certificate authority.' After accepting the warnings, my page is displayed correctly.
My nginx file looks like this:
upstream app_server {
    server unix:/home/app/run/gunicorn.sock fail_timeout=0;
}

server {
    #listen 80;
    # add here the ip address of your server
    # or a domain pointing to that ip (like example.com or www.example.com)
    server_name ebluedesign.online;
    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/ebluedesign.online/cert.pem;
    ssl_certificate_key /etc/letsencrypt/live/ebluedesign.online/privkey.pem;

    keepalive_timeout 5;
    client_max_body_size 4G;

    access_log /home/app/logs/nginx-access.log;
    error_log /home/app/logs/nginx-error.log;

    location /static/ {
        alias /home/app/static/;
    }

    # checks for static file, if not found proxy to app
    location / {
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app_server;
    }
}

server {
    listen 80;
    listen [::]:80;
    server_name ebluedesign.online;
    return 301 https://$host$request_uri;
}
My certificates are also visible here:
/etc/letsencrypt/live/ebluedesign.online/...
How can I solve this problem with the SSL certificate? I use a free SSL certificate from https://letsencrypt.org/.
What is odd is that if you go to http://ebluedesign.online/ it works fine, even though your file makes it seem as if port 80 isn't listened to at all. Do you happen to have two setup files in nginx? It is possible that the one you posted is not being used.
I’ve followed this tutorial many times with success. You could try using it from scratch if you have the opportunity: https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-16-04

Nginx redirect (non-www to www) not working with Certbot

I have a website running with a Python/Django/uWSGI/Nginx setup. I also use Certbot to enable https on my site. My redirects from non-www to www (e.g. "example.com" to "www.example.com") result in a "Bad Request (400)" message even though I couldn't spot any deviations from the Nginx/Certbot documentation. Here is the relevant part of my sites-available Nginx code:
server {
    listen 80;
    server_name example.com www.example.com;
    location = /favicon.ico { access_log off; log_not_found off; }
    location /static/ {
        root /home/myname/example;
    }
    location / {
        include uwsgi_params;
        uwsgi_pass unix:/run/uwsgi/activities.sock;
    }
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    if ($scheme != "https") {
        return 301 https://$host$request_uri;
    } # managed by Certbot
}
I found a similar StackOverflow answer (Nginx: redirect non-www to www on https) but none of the solutions worked for me. I have SSL certificates for both example.com and www.example.com. I also tried creating a separate 443 ssl server block for example.com based on the comments in that answer but it didn't work either. My sites-available and sites-enabled code is the same.
What am I doing wrong?
It seems that when $host falls back to the server_name, it selects the first name in the server_name list. Let me know if that works; I can't quite test this currently.
Try swapping the server_name to server_name www.example.com example.com; as well as changing return 301 https://$host$request_uri; to return 301 https://$server_name$request_uri;
server {
    server_name www.example.com example.com;
    return 301 https://$server_name$request_uri;
}
server {
    listen 443 ssl;
    # SSL CERT STUFF.
    server_name example.com;
    return 301 https://www.$server_name$request_uri;
}
server {
    listen 443 ssl;
    # SSL CERT STUFF.
    server_name www.example.com;
    # LOCATION STUFF
}
This is not an efficient configuration for Nginx request processing. It's messy, your if condition gets evaluated on every request, and I don't see where your non-www to www redirect is even meant to happen.
I'd split http and https:
server {
    listen 80 default_server;
    return 301 https://www.example.com$request_uri;
}
That's all non-https traffic taken care of in a single redirect. Now for the https:
server {
    listen 443 default_server ssl;
    server_name www.example.com;
    root # should be outside location blocks ideally
    ......
}
The default_server parameter means this server will handle any requests which do not have a matching server configuration. If you don't want that, then add example.com after www.example.com, not before it. Any requests ending up here will display the first entry in the client's browser address bar.
Based on your comments you might need to add a separate block for the other domain to avoid an SSL certificate mismatch. Try this:
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate .....;
    ssl_certificate_key .....;
    return 301 https://www.example.com$request_uri;
}
Although the OP has accepted one of the answers as the solution, I just want to point out that it may not be the best practice.
The correct way is to use $host instead of $server_name (as per Mitchell Walls' example) or a hardcoded www.example.com (as per miknik's example). Both result in an extra 443 server block that is unnecessary and messy.
server {
    listen 80 default_server;
    server_name www.example.com example.com;
    root /var/www/html; # define your root directory here
    return 301 https://$host$request_uri;
}
server {
    listen 443 ssl;
    # SSL CERT STUFF.
    #server_name www.example.com; you don't need to specify again here
    # LOCATION STUFF
}
There is a difference between $host and $server_name:
$host contains "in this order of precedence: host name from the request line, or host name from the 'Host' request header field, or the server name matching a request".
$server_name contains the server_name of the virtual host which processed the request, as it was defined in the nginx configuration. If a server contains multiple server_names, only the first one will be present in this variable.
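To illustrate the difference, here is a minimal hypothetical sketch (not taken from the configs above) with two names in server_name; the comments show what each variable holds for a request sent with Host: example.com:
server {
    listen 443 ssl;
    server_name www.example.com example.com;
    # ssl_certificate / ssl_certificate_key ...
    location / {
        # For a request sent with "Host: example.com":
        #   $host        -> example.com      (taken from the request)
        #   $server_name -> www.example.com  (always the first name configured above)
        return 200 "host=$host server_name=$server_name\n";
    }
}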