letsencrypt django webroot

I am trying to set up nginx and Django so that I can renew certificates. However, something goes wrong with the webroot plugin.
In nginx:
location ~ /.well-known {
    allow all;
}
When I run the renewal command:
./letsencrypt-auto certonly -a webroot --agree-tos --renew-by-default --webroot-path=/home/sult/huppels -d huppels.nl -d www.huppels.nl
it seems that the renewal wants to retrieve a file from my server, because I get the following error:
The following errors were reported by the server:
Failed authorization procedure. www.huppels.nl (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from http://www.huppels.nl/.well-known/acme-challenge/some_long_hash [51.254.101.239]: 400
How do I make this work with nginx or Django?

I have my Django app running with gunicorn. I followed the instructions here.
I made sure to include the proper location blocks:
location /static {
    alias /home/user/webapp;
}
location / {
    proxy_pass http://127.0.0.1:8000;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
I made sure to include any template location aliases as well.
I set the .well-known location block like this:
location /.well-known {
    alias /home/user/webapp/.well-known;
}
This points it directly to the root of the webapp instead of using allow all.
I did have to make sure to use only the non-SSL server block until the certificate was generated; after that I switched to a different nginx config based on h5bp's nginx configs.
Note: Make sure you have proper A records for your domain, including www, if you are going to use h5bp to redirect to www.
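For reference, putting the pieces of this answer together, a minimal pre-certificate HTTP server block might look like the sketch below. The domain comes from the question and the /home/user/webapp paths from the answer; both are placeholders to adjust for your setup.

```nginx
server {
    listen 80;
    server_name huppels.nl www.huppels.nl;

    # Serve ACME challenge files straight from disk so the webroot
    # plugin can be verified without the request reaching Django.
    location /.well-known {
        alias /home/user/webapp/.well-known;
    }

    location /static {
        alias /home/user/webapp;
    }

    # Everything else goes to gunicorn.
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

With this layout, the renewal command's --webroot-path would point at /home/user/webapp.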

Related

403 Forbidden, Nginx config

I just started using Django, and I am trying to deploy my project on a DigitalOcean server using Nginx. I am trying to set up SSL and a domain (and they look good in an SSL checker and Google Dig DNS); however, I get a 403 error in the browser when I try to access the webpage:
403 Forbidden
nginx/1.14.0 (Ubuntu)
I have been trying different things with the Nginx config, but nothing seems to help. Here is what I have now:
http {
    ...
    server {
        listen 443;
        ssl on;
        ssl_certificate /etc/ssl/server_merged_key.crt;
        ssl_certificate_key /etc/ssl/other_key.key;
        root /var/www/html;
        server_name domain.net www.domain.net;
        location / {
            root /var/www/html;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
            proxy_set_header Host $http_host;
            proxy_redirect off;
        }
        ...
    }
}
The Django project is located in the server's home directory (just two nested folders, /mysite1/mysite/). The server starts fine.
I do not see any GET requests on the server side when I get the 403 error on the page. I do see the 400 error You're accessing the development server over HTTPS, but it only supports HTTP. if I try to access it through http://IP-IN-NUMBER:8000.
Also, the settings.py looks like this, if this relevant to the issue:
DEBUG = False
ALLOWED_HOSTS = ['IP-IN-NUMBERS', 'localhost', '127.0.0.1', 'HTTP_X_FORWARDED_PROTO', 'https', 'domain.net', 'www.domain.net']
SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = True
SECURE_SSL_REDIRECT = True
How do I correctly set up Nginx for Django? Thank you so much for help!
Okay, so I figured it out with additional help; I'll put the answer here in case it is helpful for others. Basically, I just needed to put my Django files in the root of /var/www/html, so they could be together with the index file. This way, Nginx allows access to the directory and does not throw a 403 error.
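As a side note on why the 403 appears with the config above: the location / block sets proxy headers but never hands the request to Django (there is no proxy_pass), so nginx tries to serve /var/www/html from disk, and a directory it cannot serve an index from yields 403 Forbidden. A sketch of the more common setup, proxying to an app server on port 8000 (the port is an assumption taken from the question), would be:

```nginx
server {
    listen 443 ssl;
    server_name domain.net www.domain.net;
    ssl_certificate /etc/ssl/server_merged_key.crt;
    ssl_certificate_key /etc/ssl/other_key.key;

    location / {
        # Hand the request to the Django app server instead of
        # serving /var/www/html from disk.
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```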

Reverse proxy on Elastic Beanstalk to forward a subdirectory to subdomain

We use Elastic Beanstalk for our frontend: https://tutorspot.co.uk. We also have a WordPress blog at https://blog.tutorspot.co.uk, but I'd like that to be available at https://tutorspot.co.uk/blog. I'm trying to configure a reverse proxy in Nginx on EB, but am not having any luck getting the app deployed with the config.
I don't want to override the entire Nginx configuration, just extend it to include a new location block for the /blog path.
Here's the config I'm trying to get working:
location /blog/ {
    proxy_pass https://blog.tutorspot.co.uk;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
The Elastic Beanstalk docs suggest this should go in a *.conf file in .platform/nginx/conf.d/; however, I'm running into the following issue when deploying the new config:
Unsuccessful command execution on instance id(s) 'i-xxxxxxxxxxx'. Aborting the operation.
I'm unsure whether this is the correct config and it's just in the wrong place, or whether the config itself is incorrect. For example, does the location block need to be wrapped in either a server or http context?
Any help would be greatly appreciated!
NB 1:
In production we're using the Amazon AMI; however, I'm taking this opportunity to upgrade us to Amazon Linux 2.
NB 2:
Here's the folder structure showing the location of the config file relative to .ebextensions etc.
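One detail worth checking, assuming the Amazon Linux 2 platform mentioned in NB 1: files dropped in .platform/nginx/conf.d/ are included at the http level, where a bare location block is invalid, which can make a deploy fail exactly as described. Location blocks intended for the default server block generally belong in .platform/nginx/conf.d/elasticbeanstalk/ instead. A sketch, to be verified against the EB platform docs:

```nginx
# .platform/nginx/conf.d/elasticbeanstalk/blog.conf
# On the Amazon Linux 2 platform, files here are included inside the
# default server block, so a bare location directive is valid.
location /blog/ {
    proxy_pass https://blog.tutorspot.co.uk/;
    proxy_set_header Host blog.tutorspot.co.uk;  # upstream is a separate vhost
    proxy_ssl_server_name on;                    # send SNI to the HTTPS upstream
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```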

Redirecting correctly HTTP to HTTPS using Django and Nginx

I'm trying to redirect my website from HTTP to HTTPS, and I've partially succeeded. The thing is, when I type mywebsite.fr, I get the name of the container service that holds the website's code in my address bar (e.g. djangoapp/) with a DNS_PROBE_FINISHED_NXDOMAIN error.
Now, I tried the same thing in Chrome on another computer, and this time when I type www.mywebsite.fr I get the same result, whereas the non-www version is correctly redirected to the secure address.
Finally, I tried the exact same process on my smartphone (Brave), with www and without: I get https://djangoapp with an ERR_NAME_NOT_RESOLVED error, whereas when I explicitly type https://mywebsite, I get no issues.
So here is the NGINX portion that redirects to the HTTPS server:
server {
    ...
    location / {
        return 301 https://djangoapp$request_uri;
    }
}
This is the location in the HTTPS server that refers to the upstream:
server {
    ...
    location / {
        ...
        proxy_pass http://djangoapp;
    }
}
And, this is the service that runs the code:
djangoapp:
  build: .
  ports:
    - "8000:80"
  links:
    - db
  depends_on:
    - db
I have not yet mastered all the intricacies of NGINX, and I do not really understand what I'm doing wrong here. Any solutions or advice on this issue?
You are returning your Django app's URL in the redirect instead of the host the client originally requested.
In the http part of your config:
server {
    listen 80;
    return 301 https://$host$request_uri;
}
And in the https server, when proxy passing, if you don't want the URL to change to the URL of your Django app, you should add proxy_set_header Host $http_host;. It's also useful to add some additional headers, like the client's IP address. So the overall server block will look like:
server {
    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://djangoapp;
    }
}
My problem was resolved: my proxy_set_header X-Forwarded-Proto was set to $https instead of https. Using $scheme as suggested also works fine.
By reading mehrad's comment and searching the web a little more, I found why the redirection was not working properly. The fix also includes using $host as opposed to djangoapp.
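Since a wrong X-Forwarded-Proto value was the culprit here, one related Django setting is worth mentioning: Django only acts on that header if SECURE_PROXY_SSL_HEADER is configured; otherwise request.is_secure() and SECURE_SSL_REDIRECT ignore it. A minimal settings sketch, safe only when nginx always sets or overwrites the header itself:

```python
# settings.py (fragment): trust nginx's X-Forwarded-Proto header.
# Only enable this behind your own proxy, so that clients cannot
# spoof "https" by sending the header directly.
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
```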

Nginx does not forward remote address to gunicorn

I have the following nginx configuration to forward requests to gunicorn.
location / {
    proxy_pass http://127.0.0.1:8000;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
However, when I access the remote address using request.META['REMOTE_ADDR'], it always returns 127.0.0.1. I am using Django 1.9.
That is correct and expected behavior. If you would like to access the user's IP, you will need to use:
request.META['HTTP_X_FORWARDED_FOR']
Note that in development (without nginx in front), REMOTE_ADDR is still correct. My recommendation is to add a middleware or a utility method that does the conditional logic to get the actual user's IP, depending on your settings.
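A minimal sketch of such a utility method (the name get_client_ip is an assumption, not a Django API): it prefers the header set by nginx and falls back to REMOTE_ADDR, so the same code works in development without a proxy. Keep in mind that X-Forwarded-For is client-controllable unless your proxy overwrites it, so only trust it behind your own nginx.

```python
# Hypothetical helper; works on any object with a META dict,
# including django.http.HttpRequest.
def get_client_ip(request):
    forwarded_for = request.META.get('HTTP_X_FORWARDED_FOR')
    if forwarded_for:
        # The header can carry a chain: "client, proxy1, proxy2";
        # the left-most entry is the original client.
        return forwarded_for.split(',')[0].strip()
    # Development fallback: no proxy, REMOTE_ADDR is the real client.
    return request.META.get('REMOTE_ADDR')
```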

Nginx: different robots.txt for alternate domain

Summary
I have a single web app with an internal and external domain pointing at it, and I want a robots.txt to block all access to the internal domain, but allow all access to the external domain.
Problem Detail
I have a simple Nginx server block that I use to proxy to a Django application (see below). As you can see, this server block responds to any domain (due to the lack of a server_name parameter). However, I'm wondering how to mark specific domains so that Nginx will serve up a custom robots.txt file for them.
More specifically, the domains example.com and www.example.com should serve up the default robots.txt file from the htdocs directory (since "root /sites/mysite/htdocs" is set and a robots.txt file is located at /sites/mysite/htdocs/robots.txt).
BUT, I also want the domain internal.example.com (which refers to the same server as example.com) to have a custom robots.txt file served; I'd like to create a custom robots.txt so Google doesn't index that internal domain.
I thought about duplicating the server block and specifying the following in one of the server blocks. And then somehow overriding the robots.txt lookup in that server block.
"server_name internal.example.com;"
But duplicating the whole server block just for this purpose doesn't seem very DRY.
I also thought about maybe using an if statement to check whether the host header contains the internal domain, and then serving the custom robots.txt file that way. But Nginx says If Is Evil.
What is a good approach for serving up a custom robots.txt file for an internal domain?
Thank you for your help.
Here is a code sample of the server block that I'm using.
upstream app_server {
    server unix:/sites/mysite/var/run/wsgi.socket fail_timeout=0;
}
server {
    listen 80;
    root /sites/mysite/htdocs;
    location / {
        try_files $uri @proxy_to_app;
    }
    location @proxy_to_app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Protocol $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app_server;
    }
}
You can use map to define a conditional variable. Add this outside your server directive:
map $host $robots_file {
    default robots.txt;
    internal.example.com internal-robots.txt;
}
Then the variable can be used with try_files like this:
location = /robots.txt {
    try_files /$robots_file =404;
}
Now you can have two robots.txt files in your root:
robots.txt
internal-robots.txt