I'm testing cookiecutter-django in production using docker-compose and Traefik with Let's Encrypt. I'm trying to configure it to work with two domains (mydomain1.com and mydomain2.com) using Django sites.
How do I configure Traefik so that it forwards traffic to the right domain?
This is my traefik.toml
logLevel = "INFO"
defaultEntryPoints = ["http", "https"]
# Entrypoints, http and https
[entryPoints]
# http should be redirected to https
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
# https is the default
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
# Enable ACME (Let's Encrypt): automatic SSL
[acme]
# Email address used for registration
email = "mail#mydomain1.com"
storage = "/etc/traefik/acme/acme.json"
entryPoint = "https"
onDemand = false
OnHostRule = true
# Use a HTTP-01 acme challenge rather than TLS-SNI-01 challenge
[acme.httpChallenge]
entryPoint = "http"
[file]
[backends]
[backends.django]
[backends.django.servers.server1]
url = "http://django:5000"
[frontends]
[frontends.django]
backend = "django"
passHostHeader = true
[frontends.django.headers]
HostsProxyHeaders = ['X-CSRFToken']
[frontends.django.routes.dr1]
rule = "Host:mydomain1.com"
Now all domains work over SSL, but I can only see mydomain1.com; mydomain2.com shows ERR_TOO_MANY_REDIRECTS.
What have you tried, and what didn't work? It's hard to tell from your question.
There is an element of an answer in the issue you seem to have opened in the cookiecutter-django repo.
First things first, I would take Traefik out of the equation and make this work locally, as suggested there. Once it works locally, it's a matter of mapping the right port/container to the right domain in Traefik.
Assuming you've configured docker-compose to run the Django containers on ports 5000 and 5001, I think you would need to adjust your backends and frontends sections as below:
[backends]
[backends.django1]
[backends.django1.servers.server1]
url = "http://django:5000"
[backends.django2]
[backends.django2.servers.server1]
url = "http://django:5001"
[frontends]
[frontends.django1]
backend = "django1"
passHostHeader = true
[frontends.django1.headers]
HostsProxyHeaders = ['X-CSRFToken']
[frontends.django1.routes.dr1]
rule = "Host:mydomain1.com"
[frontends.django2]
backend = "django2"
passHostHeader = true
[frontends.django2.headers]
HostsProxyHeaders = ['X-CSRFToken']
[frontends.django2.routes.dr1]
rule = "Host:mydomain2.com"
I didn't try these, but that would be the first thing I would do. Also, it looks like we can specify rules on frontends to adjust routing.
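On the Django side, a rough sketch of what each container's settings might look like, assuming each one runs its own settings module tied to a Sites entry (the module names, base import, and SITE_ID values here are my assumptions, not something from your setup):
# settings_site1.py -- hypothetical per-container settings module
from base import *  # assumed shared base settings

SITE_ID = 1  # the django.contrib.sites entry for mydomain1.com
ALLOWED_HOSTS = ['mydomain1.com']

# settings_site2.py would mirror this with SITE_ID = 2 and
# ALLOWED_HOSTS = ['mydomain2.com'], served on port 5001.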
Related
My original question was how to enable HTTPS for a Django login page, and the only response recommended that I make the entire site HTTPS-only.
Given that I'm using Django 1.3 and nginx, what's the correct way to make a site HTTPS-only?
The one response mentioned a middleware solution, but had the caveat:
Django can't perform a SSL redirect while maintaining POST data.
Please structure your views so that redirects only occur during GETs.
A question on Server Fault about nginx rewriting to https also mentioned problems with POSTs losing data, and I'm not familiar enough with nginx to determine how well the solution works.
And EFF's recommendation to go HTTPS-only notes that:
The application must set the Secure attribute on the cookie when
setting it. This attribute instructs the browser to send the cookie
only over secure (HTTPS) transport, never insecure (HTTP).
Do apps like Django-auth have the ability to set cookies as Secure? Or do I have to write more middleware?
So, what is the best way to configure the combination of Django/nginx to implement HTTPS-only, in terms of:
security
preservation of POST data
cookies handled properly
interaction with other Django apps (such as Django-auth) works properly
any other issues I'm not aware of :)
Edit - another issue I just discovered, while testing multiple browsers. Say I have the URL https://mysite.com/search/, which has a search form/button. I click the button, process the form in Django as usual, and do a Django HttpResponseRedirect to http://mysite.com/search?results="foo". Nginx redirects that to https://mysite.com/search?results="foo", as desired.
However - Opera has a visible flash when the redirection happens. And it happens every search, even for the same search term (I guess https really doesn't cache :) Worse, when I test it in IE, I first get the message:
You are about to be redirected to a connection that is not secure - continue?
After clicking "yes", this is immediately followed by:
You are about to view pages over a secure connection - continue?
Although the second IE warning has an option to turn it off - the first warning does not, so every time someone does a search and gets redirected to a results page, they get at least one warning message.
For the 2nd part of John C's answer, and Django 1.4+...
Instead of extending HttpResponseRedirect, you can change the request.scheme to https.
Because Django is behind Nginx's reverse proxy, it doesn't know the original request was secure.
In your Django settings, set the SECURE_PROXY_SSL_HEADER setting:
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
Then, you need Nginx to set the custom header in the reverse proxy. In the Nginx site settings:
location / {
    # ...
    proxy_set_header X-Forwarded-Proto $scheme;
}
This way request.scheme == 'https' and request.is_secure() returns True.
request.build_absolute_uri() returns https://... and so on...
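A quick way to sanity-check this is a throwaway view that echoes what Django sees (the view name and where you wire it up are hypothetical):
# views.py -- a throwaway sketch to verify the proxy header is honored
from django.http import HttpResponse

def scheme_check(request):
    # With SECURE_PROXY_SSL_HEADER set and Nginx sending X-Forwarded-Proto,
    # both values below should reflect the client's original connection.
    return HttpResponse("scheme=%s is_secure=%s"
                        % (request.scheme, request.is_secure()))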
Here is the solution I've worked out so far. There are two parts: configuring nginx, and writing code for Django. The nginx part handles external requests, redirecting http pages to https, and the Django code handles internally generated URLs that have an http prefix (at least, those resulting from an HttpResponseRedirect()). Combined, it seems to work well - as far as I can tell, the client browser never sees an http page that the user didn't type in themselves.
Part one, nginx configuration
# nginx.conf
# Redirects any requests on port 80 (http) to https:
server {
    listen 80;
    server_name www.mysite.com mysite.com;
    rewrite ^ https://mysite.com$request_uri? permanent;
    # rewrite ^ https://mysite.com$uri permanent;  # also works
}

# django pass-thru via uWSGI, only from https requests:
server {
    listen 443;
    ssl on;
    ssl_certificate /etc/ssl/certs/mysite.com.chain.crt;
    ssl_certificate_key /etc/ssl/private/mysite.com.key;
    server_name mysite.com;

    location / {
        uwsgi_pass 127.0.0.1:8088;
        include uwsgi_params;
    }
}
Part two A, various secure cookie settings, from settings.py
SERVER_TYPE = "DEV"
SESSION_COOKIE_HTTPONLY = True
SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = True # currently only in Dev branch of Django.
SESSION_EXPIRE_AT_BROWSER_CLOSE = True
Part two B, Django code
# mysite/utilities/decorators.py
from django.conf import settings
from django.http import HttpResponseRedirect

def HTTPS_Response(request, URL):
    if settings.SERVER_TYPE == "DEV":
        new_URL = URL
    else:
        # Turn the relative URL into an absolute http:// URL,
        # then swap the scheme for https.
        absolute_URL = request.build_absolute_uri(URL)
        new_URL = "https%s" % absolute_URL[4:]
    return HttpResponseRedirect(new_URL)

# views.py
from django.core.urlresolvers import reverse
from django.shortcuts import render_to_response
from django.template import RequestContext
from mysite.utilities.decorators import HTTPS_Response

def show_items(request):
    if request.method == 'POST':
        newURL = handle_post(request)
        return HTTPS_Response(request, newURL)  # replaces HttpResponseRedirect()
    else:  # request.method == 'GET'
        theForm = handle_get(request)
        csrfContext = RequestContext(request, {'theForm': theForm})
        return render_to_response('item-search.html', csrfContext)

def handle_post(request):
    URL = reverse('item-found')  # name of view in urls.py
    item = request.REQUEST.get('item')
    full_URL = '%s?item=%s' % (URL, item)
    return full_URL
Note that it is possible to rewrite HTTPS_Response() as a decorator. The advantage would be not having to go through all your code replacing HttpResponseRedirect(). The disadvantage: you'd have to put the decorator in front of HttpResponseRedirect(), which lives in Django at django/http/__init__.py. I didn't want to modify Django's code, but that's up to you - it's certainly one option.
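For what it's worth, here is an untested sketch of the decorator variant that wraps your own views instead of patching Django (the decorator name is hypothetical):
# mysite/utilities/decorators.py -- untested sketch
from functools import wraps
from django.conf import settings
from django.http import HttpResponseRedirect

def https_redirects(view_func):
    """Rewrite any http:// redirect a view returns into https://."""
    @wraps(view_func)
    def wrapper(request, *args, **kwargs):
        response = view_func(request, *args, **kwargs)
        if (isinstance(response, HttpResponseRedirect)
                and settings.SERVER_TYPE != "DEV"):
            url = request.build_absolute_uri(response['Location'])
            if url.startswith('http:'):
                response['Location'] = 'https' + url[4:]
        return response
    return wrapper
You could then put @https_redirects on show_items() and keep using plain HttpResponseRedirect() inside it, with no changes to Django itself.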
If you stick your entire site behind https, you don't need to worry about it on the Django end (assuming you don't need to protect your data between nginx and Django, only between users and your server).
I am trying to create an app using Flask with more than 9 controllers, some of which live on different subdomains.
I am using Flask-Login to let users log in. The users controller lives on a separate subdomain. The problem happens when I visit that subdomain: the console shows a redirect to log the user in first, and I can't see the remember_me token in the cookies.
Here are the configurations for the extension:
SERVER_NAME = 'localhost:5000'
# Login configurations
REMEMBER_COOKIE_DURATION = timedelta(seconds=7*24*60*60)
REMEMBER_COOKIE_NAME = 'myapp.remember'
REMEMBER_COOKIE_SECURE = True
REMEMBER_COOKIE_HTTPONLY = True
REMEMBER_COOKIE_REFRESH_EACH_REQUEST = True
REMEMBER_COOKIE_DOMAIN = '.localhost:5000'
from .controllers.client import client_route
app.register_blueprint(client_route, subdomain='client')
The domain inside the cookies is localhost; how can I change it to something like .localhost?
You need to set REMEMBER_COOKIE_SECURE in your config to False.
According to Flask-Login's documentation:
Restricts the “Remember Me” cookie’s scope to secure channels
(typically HTTPS).
It only sets the cookie on the HTTPS version of your application, so over plain HTTP on localhost:5000 the cookie is never set.
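If you still want the Secure flag in production, one common pattern is to key it off the environment; a minimal sketch, assuming a DEBUG flag is how you distinguish dev from prod:
# config.py -- sketch: secure cookies only where HTTPS actually exists
DEBUG = True  # True while developing on http://localhost:5000

REMEMBER_COOKIE_SECURE = not DEBUG  # False locally, True behind HTTPS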
I am testing out deploying my Django application into AWS's Fargate Service.
Everything seems to run, but I am getting health check errors because the Application Load Balancer sends requests to my Django application using the local IP of the host. This gives me an Allowed Host error in the logs:
Invalid HTTP_HOST header: '172.31.86.159:8000'. You may need to add '172.31.86.159' to ALLOWED_HOSTS
I have tried getting the local IP at task startup time and appending it to my ALLOWED_HOSTS, but this fails under Fargate:
import requests

EC2_PRIVATE_IP = None
try:
    EC2_PRIVATE_IP = requests.get(
        'http://169.254.169.254/latest/meta-data/local-ipv4',
        timeout=0.01).text
except requests.exceptions.RequestException:
    pass

if EC2_PRIVATE_IP:
    ALLOWED_HOSTS.append(EC2_PRIVATE_IP)
Is there a way to get the ENI IP Address so I can append it to ALLOWED_HOSTS?
Now this works, and it lines up with the documentation, but I don't know if it's the BEST way or if there is a BETTER WAY.
My containers are running under the awsvpc network mode.
https://aws.amazon.com/blogs/compute/under-the-hood-task-networking-for-amazon-ecs/
...the ECS agent creates an additional "pause" container for each task before starting the containers in the task definition. It then sets up the network namespace of the pause container by executing the previously mentioned CNI plugins. It also starts the rest of the containers in the task so that they share their network stack of the pause container. (emphasis mine)
I assume that
so that they share their network stack of the pause container
means we really just need the IPv4 address of the pause container. In my non-exhaustive testing it appears this is always Containers[0] in the ECS metadata: http://169.254.170.2/v2/metadata
With those assumptions in play this does work, though I don't know how wise it is to do:
import os
import requests

EC2_PRIVATE_IP = None
METADATA_URI = os.environ.get('ECS_CONTAINER_METADATA_URI',
                              'http://169.254.170.2/v2/metadata')
try:
    resp = requests.get(METADATA_URI)
    data = resp.json()
    container_meta = data['Containers'][0]
    EC2_PRIVATE_IP = container_meta['Networks'][0]['IPv4Addresses'][0]
except Exception:
    # Silently fail, as we may not be in an ECS environment.
    pass

if EC2_PRIVATE_IP:
    # Be sure your ALLOWED_HOSTS is a list, NOT a tuple,
    # or .append() will fail.
    ALLOWED_HOSTS.append(EC2_PRIVATE_IP)
Of course, if we pass in the container name that we must set in the ECS task definition, we could do this too:
import os
import requests

EC2_PRIVATE_IP = None
METADATA_URI = os.environ.get('ECS_CONTAINER_METADATA_URI',
                              'http://169.254.170.2/v2/metadata')
try:
    resp = requests.get(METADATA_URI)
    data = resp.json()
    container_name = os.environ.get('DOCKER_CONTAINER_NAME', None)
    search_results = [x for x in data['Containers'] if x['Name'] == container_name]
    if len(search_results) > 0:
        container_meta = search_results[0]
    else:
        # Fall back to the pause container.
        container_meta = data['Containers'][0]
    EC2_PRIVATE_IP = container_meta['Networks'][0]['IPv4Addresses'][0]
except Exception:
    # Silently fail, as we may not be in an ECS environment.
    pass

if EC2_PRIVATE_IP:
    # Be sure your ALLOWED_HOSTS is a list, NOT a tuple,
    # or .append() will fail.
    ALLOWED_HOSTS.append(EC2_PRIVATE_IP)
Either of these snippets of code would then go in the production settings for Django.
Is there a better way to do this that I am missing? Again, this is to allow the Application Load Balancer health checks. When using ECS (Fargate) the ALB sends the host header as the Local IP of the container.
In Fargate, there is an environment variable injected by the AWS container agent: ${ECS_CONTAINER_METADATA_URI}
This contains the URL to the metadata endpoint, so now you can do
curl ${ECS_CONTAINER_METADATA_URI}
The output looks something like
{
    "DockerId": "redact",
    "Name": "redact",
    "DockerName": "ecs-redact",
    "Image": "redact",
    "ImageID": "redact",
    "Labels": {},
    "DesiredStatus": "RUNNING",
    "KnownStatus": "RUNNING",
    "Limits": {},
    "CreatedAt": "2019-04-16T22:39:57.040286277Z",
    "StartedAt": "2019-04-16T22:39:57.29386087Z",
    "Type": "NORMAL",
    "Networks": [
        {
            "NetworkMode": "awsvpc",
            "IPv4Addresses": [
                "172.30.1.115"
            ]
        }
    ]
}
Under the key Networks you'll find the IPv4Addresses list.
Putting this into Python, you get:
import os
import requests

METADATA_URI = os.environ['ECS_CONTAINER_METADATA_URI']
container_metadata = requests.get(METADATA_URI).json()
ALLOWED_HOSTS.append(container_metadata['Networks'][0]['IPv4Addresses'][0])
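If the same settings file also runs outside ECS (local dev, CI), you may want a guarded variant so the missing variable doesn't raise a KeyError at import time; a sketch:
# settings.py -- guarded sketch for environments without the variable
import os
import requests

METADATA_URI = os.environ.get('ECS_CONTAINER_METADATA_URI')
if METADATA_URI:
    container_metadata = requests.get(METADATA_URI).json()
    ALLOWED_HOSTS.append(
        container_metadata['Networks'][0]['IPv4Addresses'][0])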
An alternative solution to this is to create a middleware that bypasses the ALLOWED_HOSTS check just for your health check endpoint, e.g.:
from django.http import HttpResponse
from django.utils.deprecation import MiddlewareMixin

class HealthEndpointMiddleware(MiddlewareMixin):
    def process_request(self, request):
        if request.META["PATH_INFO"] == "/health/":
            return HttpResponse("OK")
I solved this issue by doing this:
First, I installed this middleware, which can handle CIDR masks on top of ALLOWED_HOSTS: https://github.com/mozmeao/django-allow-cidr
With this middleware I can use an env var like this:
ALLOWED_CIDR_NETS = ['192.168.1.0/24']
So you need to find out the subnets configured on your ECS service definition; for me they were 10.3.112.0/24 and 10.3.111.0/24.
You add those to your ALLOWED_CIDR_NETS and you're good to go.
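In other words, with the subnets above the setting would end up as:
ALLOWED_CIDR_NETS = ['10.3.112.0/24', '10.3.111.0/24']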
I have just pushed a web app into production, and requests to my Node.js server no longer contain the user cookie that Django has been setting by default on localhost (where it was working).
My Node.js code looks for the cookie like this:
io.configure(function(){
    io.set('authorization', function(data, accept){
        if (data.headers.cookie) {
            data.cookie = cookie_reader.parse(data.headers.cookie);
            return accept(null, true);
        }
        return accept('error', false);
    });
    io.set('log level', 1);
});
and on localhost it has been getting this
cookie: 'username="name:1V7yRg:n_Blpzr2HtxmlBOzCipxX9ZlJ9U"; password="root:1V7yRg:Dos81LjpauTABHrN01L1aim-EGA"; csrftoken=UwYBgHUWFIEEKleM8et1GS9FuUPEmgKF; sessionid=6qmyso9qkbxet4isdb6gg9nxmcnw4rp3' },
in the request header.
But in production the header is the same, except there is no cookie anymore. Does Django only set this on localhost? How can I get it working in production?
I've tried setting these in my settings.py
CSRF_COOKIE_DOMAIN = '.example.com'
CSRF_COOKIE_SECURE = True
SESSION_COOKIE_SECURE = True
SESSION_COOKIE_HTTPONLY = False
But so far no good.
Any insight would be great.
I just figured it out. I was making a request to Node.js on the client like this:
Message.socket = io.connect('http://123.456.789.10:5000');
where I used my server's IP address and the port that my Node.js was listening on. This is considered cross-domain, so browsers won't include cookies in the request. It's an easy fix; change it to:
Message.socket = io.connect('http://www.mydomain.com:5000');
Why would Perlbal's reproxying give me a 503 for any remote URL?
X-REPROXY-URL: /path/to/a/local/file.jpg = working
X-REPROXY-URL: http://a-public-file-in-an-s3-bucket.jpg = HTTP 503
My Perlbal conf looks like:
CREATE POOL test_pool
POOL test_pool ADD 127.0.0.1:8888
POOL test_pool ADD 127.0.0.1:8889
CREATE SERVICE balancer
SET listen = 0.0.0.0:80
SET role = reverse_proxy
SET pool = test_pool
SET persist_client = on
SET persist_backend = on
SET verify_backend = on
SET enable_reproxy = true
ENABLE balancer
And I know I'm setting the header properly because, as I said, it works for local files and URLs.
It looks like Perlbal doesn't deal well with virtual-hosted-style URLs like "bucket-name.s3.amazonaws.com"; changing the URL to the path-style "s3.amazonaws.com/bucket-name/" works.
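So whatever sets the header on the backend should emit the path-style form. A hypothetical Django-flavored sketch (the view name, bucket, and file are made up):
# sketch -- backend response asking Perlbal to reproxy from S3
from django.http import HttpResponse

def serve_from_s3(request):
    response = HttpResponse()
    # Path-style URL works; the virtual-hosted style
    # ("http://bucket-name.s3.amazonaws.com/file.jpg") returned 503s here.
    response['X-REPROXY-URL'] = 'http://s3.amazonaws.com/bucket-name/file.jpg'
    return response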