Django and Elastic Beanstalk URL health checks

I have a Django webapp. It runs inside Docker on Elastic Beanstalk.
I'd like to specify a health check URL for slightly more advanced health checking than "can the ELB establish a TCP connection".
Entirely reasonably, the ELB does this by connecting to the instance over HTTP, using the instance's hostname (e.g. ec2-127-0-0-1.compute-1.amazonaws.com) as the Host header.
Django has ALLOWED_HOSTS which validates the Host header of incoming requests. I set this to my application's external domain via environment variable.
Unsurprisingly and entirely reasonably, Django thus rejects ELB URL health checks due to lack of matching Host.
We don't want to disable ALLOWED_HOSTS because we'd like to be able to trust get_host().
The solutions so far seem to be:
Somehow persuade Django to not care about ALLOWED_HOSTS for certain specific paths (i.e. the health check URL)
Do something funky like calling the EC2 info API on startup to get the host's FQDN and append it to ALLOWED_HOSTS
Neither of these seem particularly pleasant. Can anyone recommend a better / existing solution?
(For the avoidance of doubt, I believe this problem to be identical to the scenario of "Disabled ALLOWED_HOSTS, fronting HTTPD that filters on host" - I want the health check to hit Django, not a fronting HTTPD)

If the ELB health check is sending its request with a host header containing the elastic beanstalk domain (*.elasticbeanstalk.com, or an EC2 domain *.amazonaws.com) then the standard ALLOWED_HOSTS can still be used with a wildcard entry of '.amazonaws.com' or '.elasticbeanstalk.com'.
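For illustration, a minimal settings.py sketch of that approach (the first entry is a placeholder for your real domain):

# settings.py - a minimal sketch; a leading dot matches the domain and any subdomain
ALLOWED_HOSTS = [
    '.example.com',            # placeholder for your application's external domain
    '.elasticbeanstalk.com',   # Elastic Beanstalk environment URLs
    '.amazonaws.com',          # EC2 public DNS names sometimes used by health checks
]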
In my case I received plain IPv4 addresses as the health check hosts, so a different solution was needed. If you can't predict the host at all - and it might be safer to assume you can't - you would need to take a route such as one of the following.
You can use Apache to handle approved hosts instead of propagating ambiguous requests to Django. Since the Host header is intended to be the hostname of the server receiving the request, this solution changes the header of valid requests to use the expected site hostname. With Elastic Beanstalk you'll need to configure Apache using .ebextensions as described here. Under the .ebextensions directory in your project root, add the following to a .config file.
files:
  "/etc/httpd/conf.d/eb_healthcheck.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      <If "req('User-Agent') == 'ELB-HealthChecker/1.0' && %{REQUEST_URI} == '/status/'">
          RequestHeader set Host "example.com"
      </If>
Replacing /status/ with your health check URL and example.com with your site's appropriate domain. This tells Apache to check all incoming requests and change the host headers on requests with the appropriate health check user agent that are requesting the appropriate health check URL.
If you would really prefer not to configure Apache, you could write a custom middleware to authenticate health checks. The middleware would have to override Django's CommonMiddleware, which calls HttpRequest's get_host() method that validates the request's host. You could do something like this:
from django.middleware.common import CommonMiddleware


class CommonOverrideMiddleware(CommonMiddleware):
    def process_request(self, request):
        if not ('HTTP_USER_AGENT' in request.META
                and request.META['HTTP_USER_AGENT'] == 'ELB-HealthChecker/1.0'
                and request.get_full_path() == '/status/'):
            return super().process_request(request)
Which just allows any health check requests to skip the host validation. You'd then replace django.middleware.common.CommonMiddleware with path.CommonOverrideMiddleware in your settings.py.
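For example, assuming the class above lives in a hypothetical module myproject/middleware.py, the settings change could look like this:

# settings.py - a sketch; 'myproject.middleware' is a hypothetical module path
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    # ... other middleware ...
    'myproject.middleware.CommonOverrideMiddleware',  # in place of django.middleware.common.CommonMiddleware
    # ... the rest of your middleware ...
]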
I would recommend using the Apache configuration approach to avoid any details in the middleware, and to completely isolate Django from host issues.

This is what I use, and it works well:
import socket

local_ip = str(socket.gethostbyname(socket.gethostname()))
ALLOWED_HOSTS = [local_ip, '.mydomain.com', 'mydomain.elasticbeanstalk.com']
where you replace mydomain and mydomain.elasticbeanstalk.com with your own.

Related

GCP: load balancer rewrite path

I'm setting up traffic management for a global external HTTP(S) load balancer. I have two backend Cloud Run services, serverless-lb-www-service and serverless-lb-api-service, that I want to serve from the same IP/domain.
I want to configure them like this:
example.com -> serverless-lb-www-service
example.com/api -> serverless-lb-api-service
I can use simple routing rules to serve traffic mostly as expected:
path      backend
/*        serverless-lb-www-service
/api      serverless-lb-api-service
/api/*    serverless-lb-api-service
However, I'm running into an issue when I try to access an endpoint that is not the API root, like example.com/api/test: I always see the response I would expect from example.com/api.
I believe it has something to do with my API (running Express.js) receiving the path with the /api prefix still attached, while it expects to serve that route as just /test. I think I might need to set up a rewrite that removes /api before the request hits the API.
Any help would be much appreciated. Thanks
update
I can confirm that the requests, as logged by the API, are all prefixed with /api. I can solve my issue by changing all API route handlers to expect the /api prefix in the production environment. However, I would still rather do this via a path rewrite so that the application code is the same in all environments.
You can customize the host and path rules by following the steps in this link. It also uses Cloud Run services, so it might help with your path-rewrite issue.
Note: just scroll all the way down if the link does not jump straight to the "Customize the host and path rules" steps.
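For what it's worth, here is a hedged sketch of the kind of URL map route rule that strips the prefix when the map is edited via gcloud compute url-maps import; the backend service names come from the question, the project path is a placeholder, and the exact field layout should be checked against the current docs:

# Sketch only - names and project path are placeholders
defaultService: projects/PROJECT_ID/global/backendServices/serverless-lb-www-service
hostRules:
- hosts:
  - example.com
  pathMatcher: main
pathMatchers:
- name: main
  defaultService: projects/PROJECT_ID/global/backendServices/serverless-lb-www-service
  routeRules:
  - priority: 1
    matchRules:
    - prefixMatch: /api/
    routeAction:
      urlRewrite:
        pathPrefixRewrite: /   # e.g. /api/test reaches the API service as /test
      weightedBackendServices:
      - backendService: projects/PROJECT_ID/global/backendServices/serverless-lb-api-service
        weight: 100

A bare /api request (no trailing slash) would likely need its own rule, or can stay handled by the app.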

Invalid HOST Header from router IP

I keep getting an Invalid HOST Header error which I am trying to find the cause of. It reads as such:
Report at /GponForm/diag_Form
Invalid HTTP_HOST header: '192.168.0.1:443'. You may need to add '192.168.0.1' to ALLOWED_HOSTS
I do not know what /GponForm/diag_Form is, but from the looks of it, it may be a vulnerability being probed by malware.
I am also wondering why the IP looks like a router's (192.168.0.1), and why it is coming in over SSL (:443).
Should I consider putting a HoneyPot and blocking this IP address? Before I do, why does the IP look like a local router?
The full Request URL in the report looks like this:
Request URL: https://192.168.0.1:443/GponForm/diag_Form?style/
I am getting this error at least ~10x/day now so I would like to stop it.
Yes, this surely represents a vulnerability probe - someone tried to access this URL, which targets routers (which usually have the IP 192.168.0.1).
It looks this way because the request from the attacker contains a Host header with that value.
Maybe Django is being run locally with DEBUG=True.
You may consider running it in a more production-ready way, with a web server (e.g. nginx) in front filtering unwanted requests in the nginx config, and further adding fail2ban to parse the nginx error logs and ban offending IPs.
Or make the site available only from specific IPs / add simple authorization, e.g. Basic Auth at the web-server level.
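A minimal nginx sketch of that idea (the network range, credentials file, and upstream address are placeholders):

server {
    listen 80;
    server_name example.com;

    location / {
        allow 203.0.113.0/24;                        # your allowed network (placeholder)
        deny  all;

        auth_basic           "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;   # created with htpasswd

        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:8000;            # assumed Django/gunicorn upstream
    }
}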
Previous irrelevant answer
The ALLOWED_HOSTS option specifies the domains the Django project can serve.
When running locally - python manage.py runserver or with DEBUG=True - it defaults to localhost, 127.0.0.1 and similar.
If you are accessing Django via a different URL, it will complain in this manner.
To allow access from other domains, add them to ALLOWED_HOSTS: ALLOWED_HOSTS = ['localhost', '127.0.0.1', '[::1]', '192.168.0.1'].

How to set ALLOWED_HOSTS Django setting in production when using Docker?

I always set my ALLOWED_HOSTS from an environment variable in Django. In my development .env I always set ALLOWED_HOSTS=.localhost,.127.0.0.1 and in production ALLOWED_HOSTS=mydomain.dom,my_ip_address
Now I am getting acquainted with Docker, and the question is what the value of ALLOWED_HOSTS should be in production. Should it remain localhost, since I understand localhost will refer to the container itself, or should I set it to my domain? I am using Nginx as a reverse proxy to forward requests.
You should set it to your domain. ALLOWED_HOSTS is used to determine whether the request originated from the correct domain name.
If you look at the docs for ALLOWED_HOSTS, you'll see that it is compared to the request's Host header, which is set by the User agent of the person visiting your site.
So although the Docker container is serving on its own localhost, the request is originating from example.com.
Check out this part of the docs to see exactly why host header validation is necessary, and you will probably better understand the purpose of ALLOWED_HOSTS
You can just use your regular domain/IP address. ALLOWED_HOSTS is about the Host header sent by the user matching the domain/IP of the server; the internal mechanics on the server are not its concern.
ALLOWED_HOSTS=mydomain.dom,my_ip_address
Is what you should go with.
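For example, a minimal sketch of parsing that value in settings.py (variable name as in the question, fallback values are placeholders):

# settings.py - split the comma-separated ALLOWED_HOSTS env var into a list
import os

ALLOWED_HOSTS = os.environ.get('ALLOWED_HOSTS', 'localhost,127.0.0.1').split(',')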
Thanks for the answers, and I did confirm it's true. I would like to add that I also remembered this can be confirmed by adding your domain to /etc/hosts pointing to 127.0.0.1. If the domain is not included in /etc/hosts, Django will throw a debug error telling you that the domain is not added to ALLOWED_HOSTS.
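For instance, a line like this in /etc/hosts (domain as a placeholder) makes the domain resolve to the local machine:

127.0.0.1   localhost mydomain.dom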

Elastic Beanstalk Health Severe, failing with a code 400 -- even though I can visit my site

I have a Django application running on Elastic Beanstalk. I can visit my site at example.com with no problem. I've set up automatic HTTPS redirect, so that it always redirects to https. I've also set it up so you can't view the site at the example.elasticbeanstalk.com domain -- if you go there you get response code 400.
My auto scaling group is load balanced. My app is failing the health checks with status code 400, even though I can navigate to my site no problem with response code 200. My logs show:
***amazon IP*** (-) - - [date] "GET / HTTP/1.1" 400 26 "-" "ELB-HealthChecker/2.0"
I'm guessing the error comes from either:
Not allowing connections at example.elasticbeanstalk.com
Having the automatic HTTP -> HTTPS redirect (although that would come up with a 302, I'd guess)
When the health check pings a site, is it pinging your custom domain (example.com) or the elasticbeanstalk.com domain? What can I do to either fix this or further diagnose the error? I'd rather not allow traffic at the elasticbeanstalk.com domain, because I don't think I can get SSL on it.
The reason this is failing is that the health check hits the EC2 instance's private IP. This can change with ELB, so you need to dynamically get the instance's private IP and add it to ALLOWED_HOSTS. See How to dynamically add EC2 ip addresses to Django ALLOWED_HOSTS
import requests

EC2_PRIVATE_IP = None
try:
    EC2_PRIVATE_IP = requests.get(
        'http://169.254.169.254/latest/meta-data/local-ipv4',
        timeout=0.01).text
except requests.exceptions.RequestException:
    pass

if EC2_PRIVATE_IP:
    ALLOWED_HOSTS.append(EC2_PRIVATE_IP)
(potentially) Bad Answer
I found this answer in another SO post. While it solves the problem, I do not think it is a good answer, and it may be insecure.
If you add this code to your .ebextensions/something.config file, it will rewrite the Host header of requests that come from the health checker and target the health check path, so that they match your domain.
files:
  "/etc/httpd/conf.d/eb_healthcheck.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      <If "req('User-Agent') == 'ELB-HealthChecker/2.0' && %{REQUEST_URI} == '/status/'">
          RequestHeader set Host "sub.example.com"
      </If>
Replace /status/ with the health check URL specified in Configuration -> Load Balancer -> Health Check Path, and sub.example.com with your domain. They've also updated the health checker, so it's ELB-HealthChecker/2.0 now -- another thing to pay attention to.
HOWEVER: it may not be great for security reasons; I think this could be spoofed. If you were using the default / path, someone could spoof ELB-HealthChecker/2.0 and then easily guess your link. I'm not very familiar with what someone could do with a forced Host header; it may be harmless.
If you recently migrated to Amazon Linux 2 and got hit with IMDSv2, then you have to use a security token, like this:
import requests

EC2_PRIVATE_IP = None
try:
    security_token = requests.put(
        'http://169.254.169.254/latest/api/token',
        headers={'X-aws-ec2-metadata-token-ttl-seconds': '60'}).text
    EC2_PRIVATE_IP = requests.get(
        'http://169.254.169.254/latest/meta-data/local-ipv4',
        headers={'X-aws-ec2-metadata-token': security_token},
        timeout=0.01).text
except requests.exceptions.RequestException:
    pass

if EC2_PRIVATE_IP:
    ALLOWED_HOSTS.append(EC2_PRIVATE_IP)
Just to follow up: I was running a multi-container Docker environment on Amazon Linux 2 with Django on Elastic Beanstalk. My environment was in a permanent severe state even though my app was accessible! Thanks to the answers above, I learned that the health checks were hitting addresses that were simply not the Elastic Beanstalk URL! Also, the HTTP statuses were not visible on the EB environment health page; I had to go to the EC2 console and the "target groups" health checks tab under load balancers to find out that my app was returning 400 codes to the health checks. To quickly test the solution, I just added ALLOWED_HOSTS = ['*'] (not good for production!) and sure enough, the issues disappeared!
I originally thought it was an Nginx issue, so I configured an Nginx container to work with my Django app container. Not sure if that's necessary anymore. A totally frustrating and undocumented issue, but that comes with the territory of Elastic Beanstalk.

DNS hosting public and web application on different hosts

Here is my setup.
Public site hosted by squarespace.com (www.example-domain.com)
Web application (AWS EC2/ELB), which I would like to be available via the same domain (my.example-domain.com)
Custom profile pages available as www.example-domain.com/username
My question is how can I set up the DNS to achieve this? If I can't do it just through DNS, any suggestions? The problem I am facing is that if squarespace.com is handling the www.example-domain.com traffic, how can I have it only partially handle it for certain URLs? Maybe I am going about this the wrong way altogether, though.
The first two are OK. As you mention, (1) is not compatible with (3) for a pure DNS config, as www.example-domain.com has to be configured to point to a single endpoint.
Some ideas of non-DNS workaround:
Having the Squarespace site on sqsp.example-domain.com and pointing your www domain to a custom web server, on which you configure the root (/) to redirect (HTTP 301/302) to sqsp.example-domain.com. It will be quite transparent for the user, except for the address shown in the browser.
The same, but serving on / a full-page HTML iframe containing sqsp.example-domain.com.
The iframe approach is less clean; Google the solutions to build your own opinion.
EDIT:
As #mike-ryan mentioned, there is also the proxy solution, where you configure your web server to request the content from another server and return it to your user. If you are already using AWS, a smart way to do this is to use CloudFront: you can set up CloudFront to proxy one server on one URL and another server on another URL. Actually, this may be the fastest way to implement what you need. Of course, a proxy is one more "hop", so it may add some delay.
If you really want to have content served from different servers while only using a single domain name, you'll need to set up a proxy server to handle the request routing for you. I am assuming your custom profile pages must be served from your EC2 instance.
Nginx will receive all requests, and will then decide whether they should be sent to Square Space or your web app. Requests will be reverse proxied to Square Space or to your app, depending on the URL.
This is similar to #smad's answer, except it will all be invisible to the users, which IMHO is better than redirecting the user to a new domain name.
Example steps:
Set up an Nginx server, create two virtual hosts - one for my.example.com, and one for www.example.com
Create two upstreams in your Nginx config - one for Square Space, and one for your app
Configure the www.example.com virtual host to reverse proxy connections to the Square Space upstream, if the URL is "/". Otherwise, traffic should be proxied to your app upstream [0]
Configure the my.example.com virtual host to proxy all traffic to your app upstream
[0] how to reverse proxy via nginx a specific url?
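A hedged sketch of such an Nginx config (domain names, upstream addresses, and the Squarespace endpoint are placeholders; proxying Squarespace may need additional SSL/Host tweaks in practice):

upstream squarespace { server ext-cust.squarespace.com:443; }   # placeholder Squarespace endpoint
upstream webapp      { server 10.0.0.10:8000; }                 # placeholder app address

server {
    listen 80;
    server_name www.example.com;

    # Exact match: only the root page is proxied to Squarespace
    location = / {
        proxy_set_header Host www.example.com;
        proxy_ssl_server_name on;
        proxy_pass https://squarespace;
    }

    # Everything else (e.g. /username profile pages) goes to the web application
    location / {
        proxy_set_header Host $host;
        proxy_pass http://webapp;
    }
}

server {
    listen 80;
    server_name my.example.com;

    location / {
        proxy_set_header Host $host;
        proxy_pass http://webapp;
    }
}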