I have hosted a Django admin project on a local machine X.
http://10.4.x.y/myapp/admin works.
I have an external IP on another machine Y and I am doing a proxy pass from Y to X.
http://proxypassname.com/myapp/admin works.
But when I click the "Save" or "Save and continue editing" button after editing in the admin page, it redirects to the local machine IP (i.e. http://10.4.x.y/myapp/blah_blah_blah).
How do I make sure that the Django project redirects to the proxy pass name instead of the local IP?
This is happening because the admin redirects to the host it thinks it has, which it takes from the HTTP request's Host header.
However, the fix is very easy. Assuming your proxy server sets the X-Forwarded-Host header, it can easily be fixed.
In your settings.py, simply set:
USE_X_FORWARDED_HOST = True
and restart Django.
If that doesn't work, check whether your proxy sets a different kind of header, and write a middleware that does the same thing. It's the first example in Django's documentation chapter on middleware.
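For illustration, here is a minimal middleware sketch along those lines. The header name (X-Custom-Forwarded-Host) and class name are assumptions; substitute whatever your proxy actually sends.

# A sketch only; header and class names are illustrative. Add
# "myapp.middleware.ForwardedHostMiddleware" near the top of MIDDLEWARE.
class ForwardedHostMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        # Copy the host supplied by the proxy into the Host header that
        # Django uses when building absolute URLs and redirects.
        forwarded_host = request.META.get("HTTP_X_CUSTOM_FORWARDED_HOST")
        if forwarded_host:
            request.META["HTTP_HOST"] = forwarded_host
        return self.get_response(request)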
I did these two things and it worked.
Whenever you add a ProxyPass, you should also add a matching ProxyPassReverse, so the proxy rewrites the Location header of redirects back to the proxied hostname.
SITE_ID should be set to the ID of the Site (in the sites framework) whose domain you want this Django project to use.
I always set ALLOWED_HOSTS from an environment variable in Django. In my development .env I set ALLOWED_HOSTS=.localhost,.127.0.0.1 and in production ALLOWED_HOSTS=mydomain.dom,my_ip_address.
I am currently getting acquainted with Docker, and the question is what the value of ALLOWED_HOSTS should be in production. Should it remain localhost, since I understand localhost will refer to the host container, or should I set it to my domain? I am using Nginx as a reverse proxy to forward requests.
You should set it to your domain. ALLOWED_HOSTS is used to determine whether the request originated from the correct domain name.
If you look at the docs for ALLOWED_HOSTS, you'll see that it is compared to the request's Host header, which is set by the User agent of the person visiting your site.
So although the Docker container is serving on its own localhost, the request originates from example.com.
Check out this part of the docs to see exactly why host header validation is necessary, and you will probably better understand the purpose of ALLOWED_HOSTS
You can just use your regular domain/IP address. ALLOWED_HOSTS is about the Host header sent by the user's browser matching your server's domain or IP; the internal mechanics of the server are not its concern.
ALLOWED_HOSTS=mydomain.dom,my_ip_address
Is what you should go with.
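As a concrete sketch of the environment-variable pattern from the question (the variable name and defaults are assumptions; adapt them to however you load your .env):

# settings.py -- a sketch only; names and defaults are illustrative.
import os

ALLOWED_HOSTS = os.environ.get("ALLOWED_HOSTS", "localhost,127.0.0.1").split(",")
# In production the container is then started with, e.g.:
#   ALLOWED_HOSTS=mydomain.dom,my_ip_address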
Thanks for the answers, and I did confirm it's true. I would like to add that you can also verify this by adding your domain to /etc/hosts pointing to 127.0.0.1: if the domain is not included in ALLOWED_HOSTS, Django will throw a debug error telling you that the domain needs to be added to ALLOWED_HOSTS.
I just don't understand: in the Django documentation and other articles, ALLOWED_HOSTS is not recommended to be ['*'] for security reasons. But a website should be open to the whole internet, so what value should it be?
But a website should be open to the whole internet
ALLOWED_HOSTS in the Django settings does not control who is allowed to access your site. It simply specifies the addresses at which your site will be accessible. For example, www.google.com is the address of Google's site; that does not determine who is allowed to access it (it is already public).
Allowing or disallowing a particular user access to your site is usually done with a firewall or with a proxy server like nginx.
what value should it be?
It is simply the list of addresses at which your site can be accessed, e.g. ALLOWED_HOSTS = ['your_site.com', 'IP_ADDRESS_OF_YOUR_SITE']. For more information, visit the docs.
And for why ['*'] is dangerous and why ALLOWED_HOSTS was added to Django, please refer to this post.
It should be set to your application domain. For example, if your domain is http://example.com then you need to set ALLOWED_HOSTS to:
ALLOWED_HOSTS = ['example.com']
I am using dnsmasq on my mac in order to force the domain I'm developing for (example.com) to resolve to localhost.
This means that if I go to http://example.com:8000, it uses the local development server, which is what I want.
But it also means that if I try and go to the real example.com, that doesn't resolve to the real site, obviously - it's using localhost.
Is there a way for me to develop locally with port 8000 but also be able to view the real site (without a port - or port 80)?
EDIT:
Everyone seems a bit confused about what I'm trying to do here so let me explain.
I'm trying to develop a site so that it can display different content on any subdomain of example.com. In order to do that, I need to use the Sites framework without setting SITE_ID and let the Sites framework figure out the Site by looking at the domain in the request.
That means that I can't use localhost:8000 when testing as there is no Site with localhost as the domain. I need to use example.com:8000 (or site1.example.com:8000, site2.example.com:8000, sitewhatever.example.com) instead.
But in order to do that, I need to point example.com at localhost in the hosts file. However, that means that the real example.com doesn't resolve any more.
That's what I'm trying to figure out here.
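For reference, a minimal sketch of the domain-based lookup described above, assuming a Site record exists for each subdomain (the view is purely illustrative):

# A sketch only. With SITE_ID unset, get_current_site() looks up the Site
# whose domain matches the host in the request (e.g. site1.example.com).
from django.contrib.sites.shortcuts import get_current_site
from django.http import HttpResponse


def landing(request):
    site = get_current_site(request)
    return HttpResponse(f"Content for {site.domain}")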
I have multiple Django projects running on one server using gunicorn and nginx. Currently they are each configured to run on a unique port of the same IP address using the server directive in nginx. All this works fine.
...
server {
    listen 81;
    server_name my.ip.x.x;
    ... # static hosting and reverse proxy to site1
}
server {
    listen 84;
    server_name my.ip.x.x;
    ... # static hosting and reverse proxy to site2
}
...
I came across a problem when I had two different projects open in two tabs and realized that I could not be logged into both sites at once (both use the built-in Django User model and auth). Upon inspecting the cookies saved in my browser, I realized that the cookie is bound to just the domain name (in my case just an IP address) and does not include the port.
On the second site, I tried changing SESSION_COOKIE_NAME and SESSION_COOKIE_DOMAIN, but it doesn't seem to be working, and with these current settings I can't even log in.
SESSION_COOKIE_DOMAIN = 'my.ip.x.x:84' #solution is to leave this as default
SESSION_COOKIE_NAME = 'site2' #just using this works
SESSION_COOKIE_PATH = '/' #solution is to leave this as default
#site1 is using all default values for these
What do I need to do to get cookies for both sites working independently?
Just change SESSION_COOKIE_NAME. SESSION_COOKIE_DOMAIN doesn't support port numbers, as far as I know, so it is effectively the same for all your apps.
Another solution that doesn't require hard-coding different cookie names for each site is to write a middleware that changes the cookie name based on the port the request came in on.
Here's a simple version (just a few lines of code).
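Roughly, such a middleware might look like the sketch below. This is not necessarily the linked snippet; the class name is an assumption, and mutating settings per request is only safe here because each port is served by its own gunicorn process.

# A sketch only; place it before
# django.contrib.sessions.middleware.SessionMiddleware in MIDDLEWARE.
from django.conf import settings


class PortSessionCookieNameMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        # Derive the session cookie name from the port the request came in on,
        # e.g. sessionid_81 vs sessionid_84, so the two sites don't collide.
        settings.SESSION_COOKIE_NAME = f"sessionid_{request.get_port()}"
        return self.get_response(request)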
I have a Django web application running on Apache 2.2.14 and I want to run the admin application over HTTPS.
Having read considerable discussions on using a proxy, writing middleware, running alternative wsgi scripts, the chaps in #httpd came to my rescue. The solution is so simple, I was surprised I didn't find it online, so I'm curious to see if I've made some glaring assumptions or errors.
One complication was that I also wanted to run one of my Django apps in the site over HTTPS, that is, everything on /checkout.
Essentially, if a user requests a URI starting with /admin or /checkout on http, they are to be redirected to that URI but on https. Conversely, if a user requests a URI that does not start with /admin or /checkout on https, they are to be redirected to that URI but on http.
The key to solving this problem was to use Redirect and RedirectMatch directives in my VirtualHost configuration.
<VirtualHost *:80>
    ... host config stuff ...
    Redirect /admin https://www.mywebsite.com/admin
    Redirect /checkout https://www.mywebsite.com/checkout
</VirtualHost>

<VirtualHost *:443>
    ... ssl host config stuff ...
    RedirectMatch ^(/(?!admin|checkout).*) http://www.mywebsite.com$1
</VirtualHost>
Another approach is to use a @secure_required decorator. This will automatically rewrite the requested URL and redirect to the https://... version. Then you don't need the Redirect directives in the *:80 configuration. The *:443 configuration may still be required if, for performance reasons, you want other traffic to go over plain HTTP.
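A minimal sketch of what such a decorator could look like (this is not a specific package's implementation):

# A sketch only: redirect to the HTTPS version of the same URL when the
# request is not already secure.
from functools import wraps

from django.http import HttpResponsePermanentRedirect


def secure_required(view_func):
    @wraps(view_func)
    def wrapper(request, *args, **kwargs):
        if not request.is_secure():
            secure_url = request.build_absolute_uri().replace("http://", "https://", 1)
            return HttpResponsePermanentRedirect(secure_url)
        return view_func(request, *args, **kwargs)
    return wrapper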
I tried your solution, but ran into several problems. First, the formatting on the admin site disappeared, as if it could not find the admin static files. Second, if I tried to reach the non-admin site through https, the browser would not find it and redirected me to Yahoo search. Oddly, if I edited the Yahoo search URL to eliminate all text except my correct URL (minus the http://), it would continue to search through Yahoo for my site. However, typing the exact same URL afresh sent me to my site.
I solved all of these issues by simply removing the
RedirectMatch ^(/(?!admin|checkout).*) http://www.mywebsite.com$1
directive.
I should mention that I don't have a /checkout section on my site and am only trying to secure /admin. ... and yes, I did substitute my URL for "mywebsite.com"
What you described should work, but there may be a problem in the future if you need to make changes to which paths are or are not HTTPS. Because this method requires the ability to correctly modify the Apache config file, you do not want novices in the loop. Screw up the config file and your site can go 500-error in the blink of an eye.
We chose to have a simple text file that had a list of the must-be-HTTPS paths. Anyone on the project can edit it and it is checked for correctness when it is loaded. We handle any needed redirects to/from HTTPS in middleware and it seems to work just fine. This method will also work if you are running anything other than Apache.
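A minimal sketch of that kind of middleware, assuming a plain-text file with one must-be-HTTPS path prefix per line (the file name and class name are illustrative):

# A sketch only; https_paths.txt and the class name are assumptions.
from django.http import HttpResponsePermanentRedirect


def _load_https_prefixes(path="https_paths.txt"):
    # One path prefix per line, e.g. "/admin" and "/checkout".
    with open(path) as fh:
        return [line.strip() for line in fh if line.strip()]


class HttpsPathMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response
        self.prefixes = _load_https_prefixes()

    def __call__(self, request):
        needs_https = any(request.path.startswith(p) for p in self.prefixes)
        if needs_https and not request.is_secure():
            url = request.build_absolute_uri().replace("http://", "https://", 1)
            return HttpResponsePermanentRedirect(url)
        if not needs_https and request.is_secure():
            url = request.build_absolute_uri().replace("https://", "http://", 1)
            return HttpResponsePermanentRedirect(url)
        return self.get_response(request)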