Scenario -
Someone is requesting a URL served by a Django app. Their IP is 3.3.3.3. It could be a client, but it could be a server too; I don't know at the time of the request.
In project settings file, there is
ALLOWED_HOSTS = ["1.1.1.1", "2.2.2.2"]
They (3.3.3.3) are still served anyway. What is ALLOWED_HOSTS for, then?
Related
I keep getting an Invalid HOST Header error whose cause I am trying to find. It reads as follows:
Report at /GponForm/diag_Form
Invalid HTTP_HOST header: '192.168.0.1:443'. You may need to add '192.168.0.1' to ALLOWED_HOSTS
I do not know what /GponForm/diag_Form is, but from the looks of it, it may be a vulnerability being probed by malware.
I am also wondering why the IP is a router address (192.168.0.1) and why the request comes in over SSL (:443).
Should I consider putting up a honeypot and blocking this IP address? Before I do, why does the IP look like a local router?
The full Request URL in the report looks like this:
Request URL: https://192.168.0.1:443/GponForm/diag_Form?style/
I am getting this error at least ~10x/day now so I would like to stop it.
Yes, this almost certainly represents a vulnerability probe: someone tried to access this URL on a router (which usually has the IP 192.168.0.1).
It looks that way because the attacker's request contains a Host header with that value; the Host header is supplied by the client and can be set to anything.
Maybe Django is being run locally with DEBUG=True.
You may consider running it in a more production-ready way, with a web server (e.g. nginx) in front, filtering unwanted requests in the nginx config (a sketch follows below), and further adding fail2ban to parse the nginx error logs and ban offending IPs.
Or make the site available only from specific IPs, or add simple authorization, e.g. Basic Auth, at the web-server level.
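For the filtering idea, here is a minimal sketch, assuming nginx serves the site on port 80 and example.com stands in for your real domain: a catch-all server block swallows any request whose Host header matches no configured server_name.

# Catch-all: requests whose Host header matches no server_name land here;
# 444 is nginx's special "close the connection without responding" code.
server {
    listen 80 default_server;
    server_name _;
    return 444;
}

# Only requests that actually name your site reach Django.
server {
    listen 80;
    server_name example.com;
    # ... proxy_pass to the Django app ...
}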
Previous irrelevant answer
The ALLOWED_HOSTS setting specifies which domains the Django project is allowed to serve.
When running locally (python manage.py runserver, or with DEBUG=True) it defaults to localhost, 127.0.0.1 and similar.
If you are accessing Django via a different URL, it will complain in exactly this manner.
To allow access from other domains, add them to ALLOWED_HOSTS: ALLOWED_HOSTS = ['localhost', '127.0.0.1', '[::1]', '192.168.0.1'].
The admins where I work have a public-facing URL, http://company.com/app. They use Apache HTTP Server. The sysadmins have http://company.com/app pointing to http://internal_ip_address:8080. I have nginx sitting on internal_ip_address listening on port 8080. I am trying to get nginx to take requests coming into internal_ip_address and route them to localhost:9000, which is a Django app. Once Django is done with the request, the resulting page should render with the appropriate public-facing URL (e.g. http://company.com/app . . .). Thanks for any help!
Thanks to all those who tried to help. After spending a notable amount of time on the issue, I determined that it was an HTTP server config issue on a server managed elsewhere.
I have a Django webapp. It runs inside Docker on Elastic Beanstalk.
I'd like to specify a health check URL for slightly more advanced health checking than "can the ELB establish a TCP connection".
Entirely reasonably, the ELB does this by connecting to the instance over HTTP, using the instance's hostname (e.g. ec2-127-0-0-1.compute-1.amazonaws.com) as the Host header.
Django has ALLOWED_HOSTS, which validates the Host header of incoming requests. I set this to my application's external domain via an environment variable.
Unsurprisingly, and entirely reasonably, Django thus rejects the ELB's health checks because their Host header doesn't match.
We don't want to disable ALLOWED_HOSTS because we'd like to be able to trust get_host().
The solutions so far seem to be:
Somehow persuade Django to not care about ALLOWED_HOSTS for certain specific paths (i.e. the health check URL)
Do something funky like calling the EC2 info API on startup to get the host's FQDN and append it to ALLOWED_HOSTS (see the sketch after this question)
Neither of these seems particularly pleasant. Can anyone recommend a better or existing solution?
(For the avoidance of doubt, I believe this problem to be identical to the scenario of "Disabled ALLOWED_HOSTS, fronting HTTPD that filters on host" - I want the health check to hit Django, not a fronting HTTPD)
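(For illustration, the second option might look like the following sketch at the bottom of settings.py. It assumes the classic EC2 instance metadata endpoint at 169.254.169.254 is reachable from the instance; under IMDSv2 a token would be required first.)

import urllib.request

def ec2_local_hostname():
    # Best-effort fetch of this instance's hostname from the EC2 metadata service.
    try:
        with urllib.request.urlopen(
                'http://169.254.169.254/latest/meta-data/local-hostname',
                timeout=1) as resp:
            return resp.read().decode()
    except OSError:
        return None

_ec2_host = ec2_local_hostname()
if _ec2_host:
    ALLOWED_HOSTS.append(_ec2_host)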
If the ELB health check sends its request with a Host header containing the Elastic Beanstalk domain (*.elasticbeanstalk.com) or an EC2 domain (*.amazonaws.com), then the standard ALLOWED_HOSTS can still be used, with a wildcard entry of '.amazonaws.com' or '.elasticbeanstalk.com'.
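For example (a sketch; the domain names are placeholders), a leading dot in an entry acts as a subdomain wildcard:

# A leading dot matches the domain itself and any subdomain of it.
ALLOWED_HOSTS = ['www.example.com', '.elasticbeanstalk.com', '.amazonaws.com']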
In my case the health checks arrived with plain IPv4 addresses as the Host, so a different solution was needed. If you can't predict the host at all (and it is probably safer to assume you can't), you will need to take a route such as one of the following.
You can use Apache to handle approved hosts instead of propagating ambiguous requests to Django. Since the Host header is intended to be the hostname of the server receiving the request, this solution rewrites the header of valid requests to the expected site hostname. With Elastic Beanstalk you'll need to configure Apache using .ebextensions, as described here. Under the .ebextensions directory in your project root, add the following to a .config file:
files:
  "/etc/httpd/conf.d/eb_healthcheck.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      <If "req('User-Agent') == 'ELB-HealthChecker/1.0' && %{REQUEST_URI} == '/status/'">
        RequestHeader set Host "example.com"
      </If>
Replace /status/ with your health check URL and example.com with your site's domain. This tells Apache to inspect all incoming requests and rewrite the Host header on requests that carry the health-check user agent and target the health-check URL.
If you would really prefer not to configure Apache, you can write custom middleware to authenticate health checks. The middleware has to override Django's CommonMiddleware, which calls HttpRequest's get_host() method to validate the request's host. You could do something like this:
from django.middleware.common import CommonMiddleware

class CommonOverrideMiddleware(CommonMiddleware):
    def process_request(self, request):
        # Skip host validation only for the ELB health checker hitting the status URL.
        is_health_check = (request.META.get('HTTP_USER_AGENT') == 'ELB-HealthChecker/1.0'
                           and request.get_full_path() == '/status/')
        if not is_health_check:
            return super().process_request(request)
This just allows any health-check request to skip the host validation. You'd then replace django.middleware.common.CommonMiddleware with path.CommonOverrideMiddleware in your settings.py.
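In settings.py, that swap might look like this (the myapp.middleware module path is a placeholder for wherever you put the class):

MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    # 'django.middleware.common.CommonMiddleware',  # replaced by:
    'myapp.middleware.CommonOverrideMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
]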
I would recommend using the Apache configuration approach to avoid any details in the middleware, and to completely isolate Django from host issues.
This is what I use, and it works well:
import socket

# Resolve this machine's own IP so health checks addressed to it pass validation.
local_ip = socket.gethostbyname(socket.gethostname())

ALLOWED_HOSTS = [local_ip, '.mydomain.com', 'mydomain.elasticbeanstalk.com']
where you replace mydomain and mydomain.elasticbeanstalk.com with your own.
I have a Django project set up with nginx + Apache. The HTTP port for outside access is 20111, which is then forwarded to the server machine (which has an internal IP) on port 80. So nginx listens on port 80 (and passes relevant requests to Apache on port 5000).
Now the initial login page can be reached from the outside via http://externalip:20111, but when I complete an admin action, like saving an entry, I get redirected to http://externalip/path/to/model, without the port 20111. The result is a timeout. How can I tell Django to use a specific hostname and port (i.e. http://externalip:20111) for all admin redirects?
When deploying applications behind a proxy or load balancer, it is common to rely on the X-Forwarded-Host header, and Django has support for it.
First of all, you have to set up nginx to send the proper header. Add this to your nginx host configuration (inside your location section):
proxy_set_header X-Forwarded-Host $host:20111;
Second, add to your settings.py:
USE_X_FORWARDED_HOST = True
This allows Django to trust the X-Forwarded-Host header of a request.
That should make it work for you. For security reasons, you should not trust every value sent in X-Forwarded-Host, so add your trusted domains/IPs to ALLOWED_HOSTS in settings.py.
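Putting both pieces together in settings.py might look like this (a sketch; the host values are placeholders for your own):

# Trust the proxy-supplied host header set by nginx above...
USE_X_FORWARDED_HOST = True
# ...but only for hosts that are explicitly whitelisted.
ALLOWED_HOSTS = ['company.com', 'externalip']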
I have multiple Django projects running on one server using gunicorn and nginx. Currently they are each configured to run on a unique port of the same IP address using the server directive in nginx. All this works fine.
...
server {
    listen 81;
    server_name my.ip.x.x;
    ... #static hosting and reverse proxy to site1
}
server {
    listen 84;
    server_name my.ip.x.x;
    ... #static hosting and reverse proxy to site2
}
...
I came across a problem when I had two different projects open in two tabs and realized that I could not be logged into both sites at once (both use the built-in Django User model and auth). Upon inspecting the cookies saved in my browser, I realized that a cookie is bound to just the domain name (in my case just an IP address); it does not include the port.
On the second site, I tried changing SESSION_COOKIE_NAME and SESSION_COOKIE_DOMAIN, but it doesn't seem to be working, and with these current settings I can't even log in.
SESSION_COOKIE_DOMAIN = 'my.ip.x.x:84' #solution is to leave this as default
SESSION_COOKIE_NAME = 'site2' #just using this works
SESSION_COOKIE_PATH = '/' #solution is to leave this as default
#site1 is using all default values for these
What do I need to do to get cookies for both sites working independently?
Just change the SESSION_COOKIE_NAME. SESSION_COOKIE_DOMAIN doesn't support port numbers, as far as I know, so it is the same for all of your apps.
Another solution that doesn't require hard-coding different cookie names for each site is to write a middleware that changes the cookie name based on the port the request came in on.
Here's a simple version (just a few lines of code):
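A minimal sketch of that idea, with class and cookie names of my own choosing; place it above SessionMiddleware in MIDDLEWARE so it runs first on the way in and last on the way out:

from django.conf import settings

class PortScopedSessionCookieMiddleware:
    """Alias the session cookie to a per-port name so sites served on
    different ports of the same host don't overwrite each other's sessions."""

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        port_name = '%s_%s' % (settings.SESSION_COOKIE_NAME, request.get_port())
        # Inbound: expose the per-port cookie under the name SessionMiddleware expects.
        if port_name in request.COOKIES:
            request.COOKIES[settings.SESSION_COOKIE_NAME] = request.COOKIES[port_name]
        response = self.get_response(request)
        # Outbound: rename any session cookie SessionMiddleware just set.
        if settings.SESSION_COOKIE_NAME in response.cookies:
            morsel = response.cookies.pop(settings.SESSION_COOKIE_NAME)
            response.set_cookie(port_name, morsel.value,
                                max_age=morsel['max-age'] or None,
                                path=morsel['path'] or '/')
        return response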