Run Daphne in production on (or forwarded to?) port 443 - django

I am trying to build a speech recognition-based application. It runs on Django with Django Channels and Daphne, with Nginx as the web server, on an Ubuntu EC2 instance on AWS. It should run in the browser, so I am using WebRTC to get the audio stream, or at least that's the goal. I'll call my domain mysite.co here.
Django serves the page properly on http://www.mysite.co:8000 and Daphne seems to run too; the logs show:
2022-10-17 13:05:02,950 INFO Starting server at fd:fileno=0, unix:/run/daphne/daphne0.sock
2022-10-17 13:05:02,951 INFO HTTP/2 support enabled
2022-10-17 13:05:02,951 INFO Configuring endpoint fd:fileno=0
2022-10-17 13:05:02,965 INFO Listening on TCP address [Private IPv4 address of my EC2 instance]:8000
2022-10-17 13:05:02,965 INFO Configuring endpoint unix:/run/daphne/daphne0.sock
I used the Daphne docs to set up Daphne with supervisor. There, they use port 8000.
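The fcgi-program example from the Daphne docs looks roughly like the following (the ASGI module, directory, and log path here are placeholders, not my exact file):

[fcgi-program:asgi]
# TCP socket used by Nginx as the backend upstream
socket=tcp://localhost:8000

# Directory where the project lives
directory=/my/app/path

# Each process gets its own unix socket; supervisor passes the TCP socket as fd 0
command=daphne -u /run/daphne/daphne%(process_num)d.sock --fd 0 --access-log - --proxy-headers mysite.asgi:application

# Number of processes to start, roughly the number of CPUs
numprocs=4

# Give each process a unique name so they can be told apart
process_name=asgi%(process_num)d

# Automatically start and recover processes
autostart=true
autorestart=true

stdout_logfile=/your/log/file
redirect_stderr=true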
My first Nginx config file nginx.conf (I shouldn't use that one, should I?) looks like this:
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    types_hash_max_size 2048;
    # server_tokens off;
    server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;

    upstream channels-backend {
        server mysite.co:80;
    }

    server {
        location / {
            try_files $uri @proxy_to_app;
        }

        location @proxy_to_app {
            proxy_pass http://mysite.co;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
# and the mail settings, but I don't use them
Currently, the homepage of my server just serves an HTML file that I set up in my first Nginx server block (I created this while figuring out how to get TLS working in Nginx; I don't need the HTML there):
server {
    root /var/www/mysite/html;
    index index.html index.htm index.nginx-debian.html;
    server_name mysite.co www.mysite.co;

    location / {
        try_files $uri $uri/ =404;
    }

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/mysite.co/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/mysite.co/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = www.mysite.co) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    if ($host = mysite.co) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    listen [::]:80;
    server_name mysite.co www.mysite.co;
    return 404; # managed by Certbot
}
I need WebRTC to access the audio stream that should run through Daphne, but for that I need HTTPS, because browsers won't give access to user media over unencrypted connections. I have created a TLS cert with Let's Encrypt for Nginx (see above), but of course that only works on port 443; I can't (and probably shouldn't be able to?) reach port 8000 via HTTPS.
I am a bit lost at this point; my Nginx experience is very limited. Do I need to bind port 8000 to 443? If so, what do I do with the Nginx config for the HTML file that is currently served there? Am I on the right track at all?
If I should share other config files from Nginx or supervisor, please let me know.

I was on the wrong track; it's actually very straightforward. There's no need to run it on port 8000, you can run it directly on 443.
You don't configure the SSL in the Nginx server blocks; you do it right where you start the Daphne server, by adding -e ssl:443:privateKey=key.pem:certKey=crt.pem to your daphne command. You must of course have generated an SSL certificate beforehand; Let's Encrypt works just fine here as well. privateKey is then privkey.pem and certKey is fullchain.pem.
(This snippet by itself won't work; depending on your needs you might have to add other flags as well, like -u or --endpoint.)
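For example, the full command could look roughly like this (the Let's Encrypt paths match the cert set up above; the ASGI module name is a placeholder):

# binding to 443 requires root or the CAP_NET_BIND_SERVICE capability
daphne -e ssl:443:privateKey=/etc/letsencrypt/live/mysite.co/privkey.pem:certKey=/etc/letsencrypt/live/mysite.co/fullchain.pem mysite.asgi:application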

Related

How to handle SSL certificates for implementing WhiteLabel option in a web app running on NGINX server

I'm working on a web app. My app runs on the subdomain app.mydomain.com.
I need to white-label my app, so I'm asking my customers to point their own domain to my app via a CNAME:
design.customerwebsite.com points to app.mydomain.com
Here is what I have tried to solve this.
I created a new file in /etc/nginx/sites-available named customerwebsite.com
Added a symlink to the file.
I installed SSL using certbot with the below command.
sudo certbot --nginx -n --redirect -d design.customerwebsite.com
Here is my NGINX conf file for customerwebsite.com:
server {
    server_name www.customerwebsite.com;
    return 301 $scheme://customerwebsite.com$request_uri;
}

server {
    # proxy_hide_header X-Frame-Options;
    listen 80;
    listen 443;
    server_name design.customerwebsite.com;

    ssl_certificate /etc/letsencrypt/live/design.customerwebsite.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/design.customerwebsite.com/privkey.pem;

    root /opt/bitnami/apps/myapp/dist;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_hide_header X-Frame-Options;
        proxy_pass http://localhost:3000;
    }

    proxy_set_header X-Forwarded-Proto $scheme;

    if ( $http_x_forwarded_proto != 'https' ) {
        return 301 https://$host$request_uri;
    }
}
I'm successfully able to run my web app at https://design.customerwebsite.com.
But the certificate that gets served is the one for app.mydomain.com, and the browser marks the site as insecure.
My app.mydomain.com has an SSL certificate from Amazon ACM, which is attached via the Load Balancer.
What should be the approach to solve this?
There are two solutions for this:
1. Add the SSL certs to the load balancer: you need to request a cert that covers all the supported DNS names (app.mydomain.com and design.customerwebsite.com), and you need to manage the customerwebsite.com domain with Route53. I think that is not possible in your case.
2. Do not terminate SSL on the load balancer: with this option TLS is not terminated at the load balancer; it is passed through for Nginx to handle. The load balancer listeners then just forward plain TCP, roughly like this:
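# Sketch only (assumes a Network Load Balancer; the ARNs are placeholders):
# plain TCP listeners, so TLS is not terminated at the load balancer.
aws elbv2 create-listener --load-balancer-arn <nlb-arn> --protocol TCP --port 80 --default-actions Type=forward,TargetGroupArn=<tg-arn-80>
aws elbv2 create-listener --load-balancer-arn <nlb-arn> --protocol TCP --port 443 --default-actions Type=forward,TargetGroupArn=<tg-arn-443>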
You need to generate a new SSL cert that includes both domains:
sudo certbot --nginx -n --redirect -d app.mydomain.com -d *.mydomain.com -d design.customerwebsite.com -d *.customerwebsite.com
Nginx configs
server {
    server_name www.customerwebsite.com;
    return 301 $scheme://customerwebsite.com$request_uri;
}

server {
    listen 80 default_server;
    server_name design.customerwebsite.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl default_server;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_certificate /etc/letsencrypt/live/design.customerwebsite.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/design.customerwebsite.com/privkey.pem;

    server_name design.customerwebsite.com;
    root /opt/bitnami/apps/myapp/dist;

    location / {
        resolver 127.0.0.11 ipv6=off;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_hide_header X-Frame-Options;
        proxy_pass http://localhost:3000;
    }
}
I think that the certificate provided to the ACM load balancer must cover every domain on which you may receive requests. In the certificate, you should have a Subject Alternative Name entry for every matching domain.
For example, on stackoverflow.com the certificate has the CN *.stackexchange.com but has this Subject Alternative Name list:
DNS:*.askubuntu.com, DNS:*.blogoverflow.com, DNS:*.mathoverflow.net, DNS:*.meta.stackexchange.com, DNS:*.meta.stackoverflow.com, DNS:*.serverfault.com, DNS:*.sstatic.net, DNS:*.stackexchange.com, DNS:*.stackoverflow.com, DNS:*.stackoverflow.email, DNS:*.superuser.com, DNS:askubuntu.com, DNS:blogoverflow.com, DNS:mathoverflow.net, DNS:openid.stackauth.com, DNS:serverfault.com, DNS:sstatic.net, DNS:stackapps.com, DNS:stackauth.com, DNS:stackexchange.com, DNS:stackoverflow.blog, DNS:stackoverflow.com, DNS:stackoverflow.email, DNS:stacksnippets.net, DNS:superuser.com
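To check which names the certificate served for your domain actually covers, something like this works (assuming openssl is installed; the hostname is just the one from the question):

echo | openssl s_client -connect design.customerwebsite.com:443 -servername design.customerwebsite.com 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'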
You're forgetting some details: you have to add a configuration for the subdomain app.mydomain.com, just as you did for the main domain, and also create an SSL certificate for that subdomain. You can use the Let's Encrypt script for this.
Configure a path for the Nginx log so you can check the errors that Nginx reports.
You could also use a wildcard, *.mydomain.com, in the Nginx settings (where * covers app).
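A wildcard certificate from Let's Encrypt requires the DNS-01 challenge, so the command would be roughly this (an assumption; it requires being able to create TXT records for the domain):

sudo certbot certonly --manual --preferred-challenges dns -d "*.mydomain.com" -d mydomain.com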

Adding SSL to the Django app, Ubuntu 16+, DigitalOcean

I am trying to add my SSL certificate to my Django application according to this tutorial. I can open my website at https://ebluedesign.online, but web browsers return a warning along the lines of 'The certificate cannot be verified by a trusted certificate authority.' After accepting the warning, my page is displayed correctly.
My nginx file looks like this:
upstream app_server {
    server unix:/home/app/run/gunicorn.sock fail_timeout=0;
}

server {
    #listen 80;
    # add here the ip address of your server
    # or a domain pointing to that ip (like example.com or www.example.com)
    server_name ebluedesign.online;

    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/ebluedesign.online/cert.pem;
    ssl_certificate_key /etc/letsencrypt/live/ebluedesign.online/privkey.pem;

    keepalive_timeout 5;
    client_max_body_size 4G;

    access_log /home/app/logs/nginx-access.log;
    error_log /home/app/logs/nginx-error.log;

    location /static/ {
        alias /home/app/static/;
    }

    # checks for static file, if not found proxy to app
    location / {
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app_server;
    }
}

server {
    listen 80;
    listen [::]:80;
    server_name ebluedesign.online;
    return 301 https://$host$request_uri;
}
My certificates are also visible here:
/etc/letsencrypt/live/ebluedesign.online/...
How can I solve this problem with the SSL certificate? I use a free certificate from https://letsencrypt.org/.
EDIT:
What is odd is that if you go to http://ebluedesign.online/ it works fine, even though your file makes it seem as if port 80 isn't listened on at all. Do you happen to have two setup files in nginx? It is possible that the one you posted is not the one being used.
I've followed this tutorial many times with success. You could try it from scratch if you have the opportunity: https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-16-04
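One way to confirm which server blocks nginx is actually loading is to dump the full active configuration (nginx -T is available in any reasonably recent nginx):

sudo nginx -T | grep -E 'server_name|listen'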

Can I use nginx as reverse proxy in this particular case?

I need to know if it is possible to use Nginx as a reverse proxy to serve several web apps, each hosted on a different Raspberry Pi.
As can be seen in the diagram, the Raspberry Pis will all be connected to an unmanaged switch. On the first one I intend to install Nginx so it can act as a reverse proxy depending on which website is requested from the internet, e.g. www.site1.com, www.site2.com, etc.
Is this possible?
Will I be able to access those RPis from a computer connected to the modem, not to the switch?
Note: The modem is a wifi modem and the switch is an unmanaged wired switch.
Apologies for my poor drawing skills, and thanks for any help. I need to know if this idea is possible before buying all this stuff.
I think it is possible, but there are some requirements:
a static external IP assigned to the modem;
static IPs on the RPis;
correct forwarding rules on the modem.
I mean, you need to forward all requests like the following:
modem:80 -> rp0:80
modem:443 -> rp0:443
On rp0 the ports may differ from 80 and 443, so please set up the correct rules and note them in the nginx config.
After that, set up upstreams or use the IPs of rp1-3 in the website configs:

upstream rp1 {
    server 192.168.1.11:port;
}
upstream rp2 {
    server 192.168.1.12:port;
}
upstream rp3 {
    server 192.168.1.13:port;
}

Replace port with the port that the appropriate RPi listens on.
The website configs will look like the following:

server {
    server_name site1.com www.site1.com;
    location / { proxy_pass http://rp1; }
}
server {
    server_name site2.com www.site2.com;
    location / { proxy_pass http://rp2; }
}

Add any params you need.
Also, if you are going to host some static websites, the best way is to place them on rp0.
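For instance, such a static site could be served straight from rp0 with a minimal block like this (the name and path are placeholders):

server {
    listen 80;
    server_name static.example.com;
    root /var/www/static-site;
    index index.html;
    location / {
        try_files $uri $uri/ =404;
    }
}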
EDIT 1
Example of working config:
server {
    listen 80;
    server_name site1.com www.site1.com;
    location / { rewrite ^ https://$host$request_uri permanent; }
}

server {
    listen 443 ssl;
    server_name site1.com www.site1.com;

    ssl_certificate /etc/letsencrypt/live/site1/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/site1/key.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;

    location / {
        proxy_pass http://rp1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-for $remote_addr;
        port_in_redirect off;
        proxy_redirect http://rp1/ /;
    }
}
Please note: if you are going to use Let's Encrypt, the best way is to set up certbot (or something similar) on rp0, so it is easier to renew certs automatically. Also, use /etc/letsencrypt/live/site1/fullchain.pem.
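Getting and renewing the cert on rp0 could then be as simple as this (assuming port 80 on rp0 is reachable from the internet for the HTTP-01 challenge):

sudo certbot --nginx -d site1.com -d www.site1.com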
In order to use multiple SSL domains, make sure the installed nginx supports SNI:
# nginx -V
nginx version: nginx/1.14.0
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC)
built with OpenSSL 1.0.2k-fips 26 Jan 2017
TLS SNI support enabled
This is the nginx conf on the head node side:
server {
    listen 80;
    server_name www.codingindfw.com codingindfw.com;
    location / { rewrite ^ https://$host$request_uri permanent; }
}

server {
    listen 443 ssl;
    server_name www.codingindfw.www codingindfw.com;
    client_max_body_size 4G;

    ssl_certificate /etc/letsencrypt/live/www.koohack.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/www.koohack.com/privkey.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location / {
        proxy_pass http://192.168.0.8;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-for $remote_addr;
        port_in_redirect off;
        proxy_redirect http://192.168.0.8/ /;
    }
}
And this is the nginx conf file on the client running the actual Django app:
server {
    listen 80 default_server;
    server_name www.codingindfw.com;
    client_max_body_size 4G;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /home/pi/coding-in-dfw;
    }

    location /media/ {
        root /home/pi/coding-in-dfw;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/pi/coding-in-dfw/mysocket.sock;
    }
}

certbot nginx doesn't finish

A question regarding letsencrypt.org's certbot:
whenever I run the certbot --nginx command, it never finishes.
Full output (running as root):
$ certbot --nginx --agree-tos --redirect --uir --hsts --staple-ocsp --must-staple -d <DOMAINS> --email <EMAIL>
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator nginx, Installer nginx
Starting new HTTPS connection (1): acme-v01.api.letsencrypt.org
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for <DOMAIN>
http-01 challenge for <DOMAIN>
nginx: [emerg] duplicate listen options for [::]:80 in /etc/nginx/sites-enabled/django:50
Cleaning up challenges
nginx restart failed:
b''
b''
Running certbot certificates:
$ certbot certificates
Saving debug log to /var/log/letsencrypt/letsencrypt.log
-------------------------------------------------------------------------------
No certs found.
-------------------------------------------------------------------------------
The only thing I messed up was not properly configuring my DNS before running certbot the first time (I messed up my A record, et al.; I'm new at this :P), but I don't know what to do moving forward. This is my first web server, so I'm still on a bit of a learning curve. I'm not sure whether this is a configuration error or something else.
For info, I'm running a DigitalOcean Django/Ubuntu 16.04 droplet (I only edited /etc/nginx/sites-available/default, to change server_name). I'll update below with any additional info needed; thanks in advance. ^_^
=========================================================================
edit 1.
/etc/nginx/sites-enabled/django
upstream app_server {
    server unix:/home/django/gunicorn.socket fail_timeout=0;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    root /usr/share/nginx/html;
    index index.html index.htm;
    client_max_body_size 4G;
    server_name _;
    keepalive_timeout 5;

    # Your Django project's media files - amend as required
    location /media {
        alias /home/django/django_project/django_project/media;
    }

    # your Django project's static files - amend as required
    location /static {
        alias /home/django/django_project/django_project/static;
    }

    # Proxy the static assets for the Django Admin panel
    location /static/admin {
        alias /usr/lib/python2.7/dist-packages/django/contrib/admin/static/admin/;
    }

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
        proxy_buffering off;
        proxy_pass http://app_server;
    }
}
I think the issue is that you're trying to specify two default_server directives on the same port. This is invalid - there can be only one default server. Changing your configuration as follows should fix your issue:
listen 80;
listen [::]:80 default_server;
You can also remove the ipv6only directive as this is the default anyway.
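Once the listen lines are fixed, it's worth validating the config and then re-running your original certbot command (standard steps, nothing specific to this setup):

sudo nginx -t && sudo systemctl reload nginx
certbot --nginx --agree-tos --redirect --uir --hsts --staple-ocsp --must-staple -d <DOMAINS> --email <EMAIL>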

ssl with django gunicorn and nginx

I am currently working on deploying my project over HTTPS, but I am running into some issues. I have it working with HTTP, but when I try to incorporate SSL it breaks. I think I am misconfiguring the gunicorn upstream in my nginx block, but I am uncertain. Could the issue be in the unix socket binding in my gunicorn service file? I am very new to gunicorn, so I'm a little lost.
Here is my configuration below.
Gunicorn:
[Unit]
Description=gunicorn daemon
After=network.target
[Service]
Environment=PYTHONHASHSEED=random
User=USER
Group=www-data
WorkingDirectory=/path/to/project
ExecStart=/path/to/project/project_env/bin/gunicorn --workers 3 --bind unix:/path/to/project/project.sock project.wsgi:application
[Install]
WantedBy=multi-user.target
Nginx (working-http):
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name server_domain;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /path/to/project;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/path/to/project/project.sock;
    }
}
Nginx (https):
upstream server_prod {
    server unix:/path/to/project/project.sock fail_timeout=0;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name server_domain;
}

server {
    server_name server_domain;
    listen 443;
    ssl on;
    ssl_certificate /etc/ssl/server_domain.crt;
    ssl_certificate_key /etc/ssl/server_domain.key;

    location /static/ {
        root /path/to/project;
    }

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Host $http_host;
        proxy_redirect off;

        if (!-f $request_filename) {
            proxy_pass http://server_prod;
            break;
        }
    }
}
Your gunicorn systemd unit file seems OK, and your nginx config is generally OK too, but you have posted too little info for a precise diagnosis. I'm guessing you are missing passing the X-Forwarded-Proto header to gunicorn, but it could be something else. Here's an nginx configuration file that works for me:
upstream gunicorn {
    # fail_timeout=0 means we always retry an upstream even if it failed
    # to return a good HTTP response (in case the Unicorn master nukes a
    # single worker for timing out).

    # for UNIX domain socket setups:
    server unix:/path/to/project/project.sock fail_timeout=0;

    # for TCP setups, point these to your backend servers
    # server 127.0.0.1:9000 fail_timeout=0;
}

server {
    listen 80;
    listen 443 ssl http2;
    server_name server_domain;

    ssl_certificate /etc/ssl/server_domain.crt;
    ssl_certificate_key /etc/ssl/server_domain.key;

    # path for static files
    root /path/to/collectstatic/dir;

    location / {
        # checks for static file, if not found proxy to app
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # When Nginx is handling SSL it is helpful to pass the protocol information
        # to Gunicorn. Many web frameworks use this information to generate URLs.
        # Without this information, the application may mistakenly generate http
        # URLs in https responses, leading to mixed content warnings or broken
        # applications. In this case, configure Nginx to pass an appropriate header:
        proxy_set_header X-Forwarded-Proto $scheme;

        # pass the Host: header from the client right along so redirects
        # can be set properly within the Rack application
        proxy_set_header Host $http_host;

        # we don't want nginx trying to do something clever with
        # redirects, we set the Host: header above already.
        proxy_redirect off;

        # Try to serve static files from nginx, no point in making an
        # *application* server like Unicorn/Rainbows! serve static files.
        proxy_pass http://gunicorn;
    }
}
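On the Django side, the usual counterpart to forwarding X-Forwarded-Proto (an assumption about your settings, not something shown in your question) is to tell Django to trust that header:

# settings.py: only safe when Nginx always sets or overrides X-Forwarded-Proto
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")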