certbot --nginx doesn't finish - Django

A question regarding letsencrypt.org's certbot: whenever I run the certbot --nginx command, it never finishes the process.
Full output (running as root):
$ certbot --nginx --agree-tos --redirect --uir --hsts --staple-ocsp --must-staple -d <DOMAINS> --email <EMAIL>
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator nginx, Installer nginx
Starting new HTTPS connection (1): acme-v01.api.letsencrypt.org
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for <DOMAIN>
http-01 challenge for <DOMAIN>
nginx: [emerg] duplicate listen options for [::]:80 in /etc/nginx/sites-enabled/django:50
Cleaning up challenges
nginx restart failed:
b''
b''
Running certbot certificates:
$ certbot certificates
Saving debug log to /var/log/letsencrypt/letsencrypt.log
-------------------------------------------------------------------------------
No certs found.
-------------------------------------------------------------------------------
The only place I know I messed up was not properly configuring my DNS before running certbot the first time (I botched my A record, et al.; I'm new at this :P), but I don't know what to do moving forward. This is my first web server, so I'm still on a bit of a learning curve, and I'm not sure whether this is a configuration error or something else.
For info, I'm running a DigitalOcean Django/Ubuntu 16.04 droplet (I've only edited /etc/nginx/sites-available/default, to change server_name). I'll update below with any additional info needed; thanks in advance. ^_^
=========================================================================
edit 1.
/etc/nginx/sites-enabled/django
upstream app_server {
    server unix:/home/django/gunicorn.socket fail_timeout=0;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    root /usr/share/nginx/html;
    index index.html index.htm;

    client_max_body_size 4G;
    server_name _;

    keepalive_timeout 5;

    # Your Django project's media files - amend as required
    location /media {
        alias /home/django/django_project/django_project/media;
    }

    # Your Django project's static files - amend as required
    location /static {
        alias /home/django/django_project/django_project/static;
    }

    # Proxy the static assets for the Django Admin panel
    location /static/admin {
        alias /usr/lib/python2.7/dist-packages/django/contrib/admin/static/admin/;
    }

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
        proxy_buffering off;
        proxy_pass http://app_server;
    }
}

I think the issue is that you're specifying two default_server parameters on the same port. This is invalid: there can be only one default server per address/port pair. Changing your configuration as follows should fix your issue:
listen 80;
listen [::]:80 default_server;
You can also remove the ipv6only=on parameter, as it is the default anyway.
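Once the duplicate is gone, it's worth confirming that nginx parses the file cleanly before letting certbot try again. A minimal sketch, assuming a standard systemd setup (keep your own <DOMAINS> and <EMAIL> placeholders):
sudo nginx -t
sudo systemctl reload nginx
sudo certbot --nginx --agree-tos --redirect -d <DOMAINS> --email <EMAIL>
If nginx -t reports errors, certbot's installer step will keep failing the same way regardless of how the challenges go.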

Related

Run Daphne in production on (or forwarded to?) port 443

I am trying to build a speech recognition-based application. It runs on Django with Django-channels and Daphne, and Nginx as the web server, on an Ubuntu EC2 instance on AWS. It should run in the browser, so I am using WebRTC to get the audio stream – or at least that’s the goal. I'll call my domain mysite.co here.
Django serves the page properly on http://www.mysite.co:8000, and Daphne seems to be running too; the logs show:
2022-10-17 13:05:02,950 INFO Starting server at fd:fileno=0, unix:/run/daphne/daphne0.sock
2022-10-17 13:05:02,951 INFO HTTP/2 support enabled
2022-10-17 13:05:02,951 INFO Configuring endpoint fd:fileno=0
2022-10-17 13:05:02,965 INFO Listening on TCP address [Private IPv4 address of my EC2 instance]:8000
2022-10-17 13:05:02,965 INFO Configuring endpoint unix:/run/daphne/daphne0.sock
I used the Daphne docs to set up Daphne with supervisor. There, they use port 8000.
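For reference, the supervisor example from those docs looks roughly like this (paths and names are the docs' placeholders, not my actual server; the socket and --fd 0 setup matches the log lines above):
[fcgi-program:asgi]
# TCP socket used by Nginx backend upstream
socket=tcp://localhost:8000
# Directory where the site's project files are located
directory=/my/app/path
# Each process needs a separate socket file, so process_num is used
command=daphne -u /run/daphne/daphne%(process_num)d.sock --fd 0 --access-log - --proxy-headers mysite.asgi:application
# Number of processes to start
numprocs=4
process_name=asgi%(process_num)d
autostart=true
autorestart=true
stdout_logfile=/your/log/asgi.log
redirect_stderr=true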
My first Nginx config file nginx.conf (I shouldn't use that one, should I?) looks like this:
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    types_hash_max_size 2048;
    # server_tokens off;
    server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;

    upstream channels-backend {
        server mysite.co:80;
    }

    server {
        location / {
            try_files $uri @proxy_to_app;
        }
        location @proxy_to_app {
            proxy_pass http://mysite.co;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
# and the mail settings, but I don't use them
Currently, the homepage of my server just serves an HTML file that I set up in my first Nginx server block (I created this while figuring out how to get TLS working on Nginx; I don't need the HTML there):
server {
    root /var/www/mysite/html;
    index index.html index.htm index.nginx-debian.html;
    server_name mysite.co www.mysite.co;

    location / {
        try_files $uri $uri/ =404;
    }

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/mysite.co/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/mysite.co/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = www.mysite.co) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    if ($host = mysite.co) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    listen [::]:80;
    server_name mysite.co www.mysite.co;
    return 404; # managed by Certbot
}
I need WebRTC to access the audio stream that should run through Daphne, but for that, I need HTTPS because you can’t access user media via unencrypted protocols. I have created a TLS cert with Let’s Encrypt for Nginx (cf. above), but of course this only works on port 443. I can’t (and probably shouldn’t be able to?) reach port 8000 via HTTPS.
I am a bit lost at this point; my Nginx experience is very limited. Do I need to bind port 8000 to 443? If so, what do I need to do with my Nginx config for the HTML file that is currently served there? Am I on the right track at all?
If I should share other config files from Nginx or supervisor, please let me know.
I was on the wrong track, actually it's very straightforward. There's no need to run it on port 8000, you can run it conveniently on 443.
You don't configure the SSL in the Nginx server blocks, but you do it right in the place where you start the Daphne server adding -e ssl:443:privateKey=key.pem:certKey=crt.pem to your daphne command. You must have generated an SSL certificate previously of course, Let'sEncrypt works just fine here as well. privateKey is privkey.pem and certKey is fullchain.pem then.
(This snippet in itself won't work, depending on your needs you might have to add other flags as well like -u or --endpoint.)
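To illustrate, a full invocation might look roughly like this (a sketch: myproject.asgi:application is a placeholder for your own ASGI module, and binding to port 443 normally requires root or the CAP_NET_BIND_SERVICE capability):
daphne -e ssl:443:privateKey=/etc/letsencrypt/live/mysite.co/privkey.pem:certKey=/etc/letsencrypt/live/mysite.co/fullchain.pem myproject.asgi:application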

Gunicorn/Django/Nginx - 502 Bad Gateway Error when uploading files above 100 MB

I have been stuck on this error for a week. I am officially at a loss with this one.
I have a React/Django web app where users can upload audio files (.WAV) (via React Dropzone). The React and Django code are completely separated into frontend/ and backend/ folders, communicating via fetch() calls. For some reason I am able to upload files smaller than 100 MB, but if I upload a larger file, for example 180 MB, Nginx errors with the following:
2020/07/14 02:29:18 [error] 21023#21023: *71 upstream prematurely closed connection while reading response header from upstream, client: 50.***.***.***, server: api.example.com, request: "POST /api/upload_audio HTTP/1.1", upstream: "http://unix:/home/exampleuser/AudioUploadApp/AudioUploadApp.sock:/api/upload_audio", host: "api.example.com", referrer: "https://example.com/profile/audio/record"
My Gunicorn error log does not show any errors. I can see each of the 5 workers starting, but there are no WORKER TIMEOUT errors or anything else that I can see.
My gunicorn.service file:
[Unit]
Description=gunicorn daemon
After=network.target

[Service]
User=exampleuser
Group=www-data
WorkingDirectory=/home/exampleuser/AudioUploadApp/Backend
ExecStart=/home/exampleuser/virtualenvs/uploadenv/bin/gunicorn --access-logfile "/tmp/gunicorn_access.log" --error-logfile "/tmp/gunicorn_error.log" --capture-output --workers 5 --worker-class=gevent --timeout=900 --bind unix:/home/exampleuser/AudioUploadApp/AudioUploadApp.sock AudioUploadApp.wsgi:application --log-level=error

[Install]
WantedBy=multi-user.target
My Nginx config:
server {
    server_name api.example.com;

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/exampleuser/AudioUploadApp/AudioUploadApp.sock;
        client_max_body_size 200M;
    }

    location /static {
        autoindex on;
        alias /home/exampleuser/AudioUploadApp/Backend/static/;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    client_max_body_size 200M;
}
server {
    server_name www.example.com example.com;
    root /home/exampleuser/AudioUploadApp/build;
    index index.html index.htm;

    location / {
        try_files $uri /index.html;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    client_max_body_size 200M;
}
server {
    if ($host = www.example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    if ($host = example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    server_name www.example.com example.com;
    listen 80;
    return 404; # managed by Certbot
}

server {
    if ($host = api.example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name api.example.com;
    client_max_body_size 200M;
    return 404; # managed by Certbot
}
And my Nginx proxy params:
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 900s;
proxy_send_timeout 900s;
proxy_read_timeout 900s;
I realize that my timeouts for Gunicorn and Nginx are way too high, but I don't have the best upload speed where I live, so I just want to make sure that timeouts due to upload speed are not the issue.
Here is what I've tried, with no luck:
- Increased the timeout for both Gunicorn and Nginx. At one point I was getting a 504 error, which increasing the timeout fixed.
- Increased the number of workers.
- Set client_max_body_size 0; (to remove the limit on upload size).
- Increased the maxFile variable on the React Dropzone component.
- Upgraded the Amazon EC2 instance type to get more CPU and RAM.
- Verified that Django is not failing: I included print statements right at the beginning of the first method that Django runs on the request. From what I can tell, the request is not getting that far, for some reason.
To reiterate, this only seems to happen with .WAV files above 100 MB. I have successfully uploaded files of around 80 MB, but have not been able to upload files of around 150 MB.
I have been on this for about a week and am pretty stuck. I would really appreciate any help, and I can include more information if I've missed anything that would be helpful.
The fix for this was to upgrade the EC2 instance that Gunicorn/Django/Nginx runs on. I went from a t2.medium instance to an r5.large instance, and uploads worked. Then I went from the r5.large down to a t2.large instance, and it still works. t2.medium and t2.large have the same number of virtual CPUs, but the t2.large has twice the memory (8 GiB vs. 4 GiB). I claimed above to have already tried this, but I must have tried it back when the first error I was getting was about the client body being too large. I fixed that error by changing client_max_body_size in Nginx, and it was after that change that I started getting the error this post is about. I had simply tried upgrading the hardware at the wrong point.
I also made the following changes compared to what I have in my original post, since the larger numbers seemed unnecessary:
- Number of Gunicorn workers: from 5 to 3
- Gunicorn timeout: from 900 to 300
- Nginx timeouts: from 900s to 300s
The Gunicorn and Nginx timeouts could probably go back down to their defaults, but I haven't tested that yet; I have awful upload speed where I live.
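As a side note for anyone hitting the same symptom (an "upstream prematurely closed connection" from Nginx with nothing in the Gunicorn error log): that pattern is consistent with a worker being killed for running out of memory, which you can usually confirm in the kernel log. A quick check, assuming a systemd-based Ubuntu host:
dmesg -T | grep -i 'killed process'
journalctl -k | grep -i 'out of memory'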

How to handle SSL certificates for implementing WhiteLabel option in a web app running on NGINX server

I'm working on a Web App.
My app runs on the subdomain app.mydomain.com
I need to WhiteLabel my app, so I'm asking my customers to point their own websites to my app via a CNAME record:
design.customerwebsite.com points to app.mydomain.com
Here is what I have tried to solve this.
I created a new file in /etc/nginx/sites-available named customerwebsite.com
Added a symlink to the file in sites-enabled.
I installed SSL using certbot with the below command.
sudo certbot --nginx -n --redirect -d design.customerwebsite.com
Here is my NGINX conf file for customerwebsite.com:
server {
    server_name www.customerwebsite.com;
    return 301 $scheme://customerwebsite.com$request_uri;
}

server {
    # proxy_hide_header X-Frame-Options;
    listen 80;
    listen 443;
    server_name design.customerwebsite.com;

    ssl_certificate /etc/letsencrypt/live/design.customerwebsite.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/design.customerwebsite.com/privkey.pem;

    root /opt/bitnami/apps/myapp/dist;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_hide_header X-Frame-Options;
        proxy_pass http://localhost:3000;
    }

    proxy_set_header X-Forwarded-Proto $scheme;
    if ( $http_x_forwarded_proto != 'https' ) {
        return 301 https://$host$request_uri;
    }
}
I'm successfully able to run my web app on https://design.customerwebsite.com
But the SSL certificate shows that it is issued to app.mydomain.com, and the browser flags the site as insecure.
My app.mydomain.com has an SSL certificate from Amazon ACM, which is attached via the Load Balancer.
What should be the approach to solve this?
There are two solutions for this:
1. Add the SSL certs to the load balancer: you would need to request a cert with all the supported DNS names (app.mydomain.com and design.customerwebsite.com), and you would need to manage the customerwebsite.com domain with Route 53. I think that is not possible in your case.
2. Do not use SSL on the load balancer: for this option we do not terminate SSL on the load balancer; instead, traffic is passed through for nginx to handle, so the load balancer just forwards ports 80 and 443 to the instance.
You need to generate a new SSL cert that includes both domains:
sudo certbot --nginx -n --redirect -d app.mydomain.com -d *.mydomain.com -d design.customerwebsite.com -d *.customerwebsite.com
Nginx configs:
server {
    server_name www.customerwebsite.com;
    return 301 $scheme://customerwebsite.com$request_uri;
}

server {
    listen 80 default_server;
    server_name design.customerwebsite.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl default_server;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_certificate /etc/letsencrypt/live/design.customerwebsite.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/design.customerwebsite.com/privkey.pem;

    server_name design.customerwebsite.com;
    root /opt/bitnami/apps/myapp/dist;

    location / {
        resolver 127.0.0.11 ipv6=off;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_hide_header X-Frame-Options;
        proxy_pass http://localhost:3000;
    }
}
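One caveat about the certbot command above: wildcard names like *.mydomain.com can only be issued via the DNS-01 challenge, which the --nginx authenticator does not perform, so the command as written would be rejected for the wildcard entries. A manual DNS-01 variant might look roughly like this (certbot will prompt you to create TXT records):
sudo certbot certonly --manual --preferred-challenges dns -d mydomain.com -d '*.mydomain.com'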
I think that the certificate provided to the ACM Load Balancer must match every domain on which you may receive requests. The certificate should carry a Subject Alternative Name entry for every matching domain.
For example, on stackoverflow.com the certificate has CN *.stackexchange.com but has this Subject Alternative Name list:
DNS:*.askubuntu.com, DNS:*.blogoverflow.com, DNS:*.mathoverflow.net, DNS:*.meta.stackexchange.com, DNS:*.meta.stackoverflow.com, DNS:*.serverfault.com, DNS:*.sstatic.net, DNS:*.stackexchange.com, DNS:*.stackoverflow.com, DNS:*.stackoverflow.email, DNS:*.superuser.com, DNS:askubuntu.com, DNS:blogoverflow.com, DNS:mathoverflow.net, DNS:openid.stackauth.com, DNS:serverfault.com, DNS:sstatic.net, DNS:stackapps.com, DNS:stackauth.com, DNS:stackexchange.com, DNS:stackoverflow.blog, DNS:stackoverflow.com, DNS:stackoverflow.email, DNS:stacksnippets.net, DNS:superuser.com
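If you want to see what a server is actually presenting, you can inspect the SAN list directly with openssl; for example (a generic sketch, substitute your own host, and note that the -ext option needs OpenSSL 1.1.1 or newer):
openssl s_client -connect design.customerwebsite.com:443 -servername design.customerwebsite.com </dev/null 2>/dev/null | openssl x509 -noout -subject -ext subjectAltName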
You're forgetting some details: you have to create a configuration for the domain app.mydomain.com, just as you did for the normal domain, and also issue an SSL certificate for that domain alone. You can use the Let's Encrypt script for this.
Configure a path for the NGINX log so you can check for any errors that NGINX detects.
You can also use *.domain.com in the NGINX settings (where * would cover app).

Adding SSL to the Django app, Ubuntu 16+, DigitalOcean

I am trying to add my SSL certificate to my Django application according to this tutorial. I can open my website via 'https://ebluedesign.online', but web browsers return something like 'The certificate cannot be verified by a trusted certificate authority.' After accepting the warning, my page is displayed correctly.
My nginx file looks like this:
upstream app_server {
    server unix:/home/app/run/gunicorn.sock fail_timeout=0;
}

server {
    #listen 80;
    # add here the ip address of your server
    # or a domain pointing to that ip (like example.com or www.example.com)
    server_name ebluedesign.online;

    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/ebluedesign.online/cert.pem;
    ssl_certificate_key /etc/letsencrypt/live/ebluedesign.online/privkey.pem;

    keepalive_timeout 5;
    client_max_body_size 4G;

    access_log /home/app/logs/nginx-access.log;
    error_log /home/app/logs/nginx-error.log;

    location /static/ {
        alias /home/app/static/;
    }

    # checks for static file, if not found proxy to app
    location / {
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app_server;
    }
}

server {
    listen 80;
    listen [::]:80;
    server_name ebluedesign.online;
    return 301 https://$host$request_uri;
}
My certificates are also visible here:
/etc/letsencrypt/live/ebluedesign.online/...
How can I solve this problem with the SSL certificate? I use a free certificate from https://letsencrypt.org/.
EDIT:
What is odd is that if you go to http://bluedesign.online/ it works fine, even though your file makes it seem as if port 80 isn't listened to at all. Do you happen to have two config files in nginx? It is possible that the one you posted is not being used.
I’ve followed this tutorial many times with success. You could try using it from scratch if you have the opportunity: https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-16-04
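One more thing worth double-checking in the config you posted: ssl_certificate points at cert.pem, but Let's Encrypt recommends serving fullchain.pem so the intermediate certificate is included; a missing chain is a classic cause of "not trusted" warnings in some browsers. Assuming the paths from your question, the change would be just:
ssl_certificate /etc/letsencrypt/live/ebluedesign.online/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/ebluedesign.online/privkey.pem;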

ssl with django gunicorn and nginx

I am currently working on deploying my project over HTTPS, but I am running into some issues. I have it working with HTTP, but when I try to incorporate SSL it breaks. I think I am misconfiguring the Gunicorn upstream in my Nginx server block, but I am uncertain. Could the issue be the Unix socket binding in my Gunicorn service file? I am very new to Gunicorn, so I'm a little lost.
Here is my configuration below.
Gunicorn:
[Unit]
Description=gunicorn daemon
After=network.target

[Service]
Environment=PYTHONHASHSEED=random
User=USER
Group=www-data
WorkingDirectory=/path/to/project
ExecStart=/path/to/project/project_env/bin/gunicorn --workers 3 --bind unix:/path/to/project/project.sock project.wsgi:application

[Install]
WantedBy=multi-user.target
Nginx (working-http):
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name server_domain;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /path/to/project;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/path/to/project/project.sock;
    }
}
Nginx (https):
upstream server_prod {
    server unix:/path/to/project/project.sock fail_timeout=0;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name server_domain;
}

server {
    server_name server_domain;

    listen 443;
    ssl on;
    ssl_certificate /etc/ssl/server_domain.crt;
    ssl_certificate_key /etc/ssl/server_domain.key;

    location /static/ {
        root /path/to/project;
    }

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        if (!-f $request_filename) {
            proxy_pass http://server_prod;
            break;
        }
    }
}
Your gunicorn systemd unit file seems OK, and your nginx config is generally OK too, but you have posted too little info for an exact diagnosis. My guess is that you are missing passing the X-Forwarded-Proto header to gunicorn, but it could be something else. Here's an nginx configuration file that works for me:
upstream gunicorn {
    # fail_timeout=0 means we always retry an upstream even if it failed
    # to return a good HTTP response (in case the Unicorn master nukes a
    # single worker for timing out).

    # for UNIX domain socket setups:
    server unix:/path/to/project/project.sock fail_timeout=0;

    # for TCP setups, point these to your backend servers
    # server 127.0.0.1:9000 fail_timeout=0;
}

server {
    listen 80;
    listen 443 ssl http2;
    server_name server_domain;

    ssl_certificate /etc/ssl/server_domain.crt;
    ssl_certificate_key /etc/ssl/server_domain.key;

    # path for static files
    root /path/to/collectstatic/dir;

    location / {
        # checks for static file, if not found proxy to app
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # When Nginx is handling SSL it is helpful to pass the protocol information
        # to Gunicorn. Many web frameworks use this information to generate URLs.
        # Without this information, the application may mistakenly generate http
        # URLs in https responses, leading to mixed content warnings or broken
        # applications. In this case, configure Nginx to pass an appropriate header:
        proxy_set_header X-Forwarded-Proto $scheme;

        # pass the Host: header from the client right along so redirects
        # can be set properly within the Rack application
        proxy_set_header Host $http_host;

        # we don't want nginx trying to do something clever with
        # redirects; we set the Host: header above already.
        proxy_redirect off;

        # Try to serve static files from nginx, no point in making an
        # *application* server like Unicorn/Rainbows! serve static files.
        proxy_pass http://gunicorn;
    }
}
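On the Django side, the X-Forwarded-Proto header only takes effect if Django is told to trust it. A minimal sketch of the corresponding settings.py addition (a standard Django setting, offered as a suggestion since only nginx should be able to set that header in this setup):
# settings.py
# Treat requests whose X-Forwarded-Proto header is "https" as secure,
# since TLS is terminated at nginx in front of Gunicorn.
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")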