Accessing HTTP content from an HTTPS server || HTTP request blocked - django

My Django application currently runs over HTTPS on the server. Recently I added new functionality that needs to access another link, an HTTP link, to fetch a JSON object.
It works fine on localhost, but when I deploy it to the server it shows the following error.
Site was loaded over HTTPS, but requested an insecure resource http link. This request has been blocked; the content must be served over HTTPS.
Can someone please suggest a workaround to bypass this so that the new functionality runs smoothly?

This error comes from the browser, so there is not much you can do on the server side.
The easiest thing would be to enable HTTPS on those external resources, if you have control over them.
The next workaround would be to add a proxy for your HTTP resources and make that proxy HTTPS. For example, you could add a simple nginx server with proxy_pass to your HTTP server and add HTTPS on that proxying nginx.
Note that if the JSON you are talking about contains anything sensitive, security-wise you really should serve it via HTTPS and not via the proxy workaround I described above. If nothing sensitive is served, the workaround might be OK.
Since you have control over your HTTP server, you can terminate SSL on nginx and proxy through it, with a configuration that might look something like this:
server {
    listen 443 ssl;
    server_name my.host.name;

    ssl_certificate     /path/to/cert;
    ssl_certificate_key /path/to/key;

    location / {
        # forward everything to the plain-HTTP backend on this machine
        proxy_pass http://localhost:80;
    }
}
Note that if you're using something like AWS / GCP / Azure, you can terminate SSL on the load balancer side instead of nginx.
Otherwise, you can use Let's Encrypt to get an actual certificate; its tooling (e.g. certbot) can even auto-configure nginx for you.
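If you would rather not run an extra nginx proxy, another sketch of the same idea is to proxy the JSON fetch through the Django app itself: the browser calls your HTTPS view, and the view fetches the HTTP resource server-side. The upstream URL, the view name and the use of the requests library below are assumptions for illustration, not part of the original setup:

# views.py - minimal sketch; the upstream URL is a placeholder
import requests  # assumes the python-requests package is installed
from django.http import JsonResponse, HttpResponseServerError

def proxied_json(request):
    # Fetch the plain-HTTP JSON on the server, then return it over the site's HTTPS
    try:
        upstream = requests.get("http://example.com/data.json", timeout=5)
        upstream.raise_for_status()
    except requests.RequestException:
        return HttpResponseServerError("upstream JSON source unavailable")
    return JsonResponse(upstream.json(), safe=False)

Wire this view into urls.py and point the front end at it instead of the HTTP link; the same sensitivity caveat as for the nginx proxy applies.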

Related

Nginx rewrite to https from http on same server_name block when ssl is handled downstream

We have had this issue for ages now, and it's starting to bite us in the ass. We run a site for a client written in Python on the Django framework. We then use nginx as a web server/proxy for Django. This is about the most standard setup and works well.
The issue is that our client has another Apache server higher up. That server handles the SSL termination and just passes requests to us via plain HTTP. The Apache server accepts both HTTP and HTTPS on two domain names.
We can easily rewrite HTTP to HTTPS at the nginx level, but the problem is that a user can strip the https and just use http.
Is there a way at the nginx level to force users back to https://secure.example.com if they are on http://secure.example.com?
Thanks
The usual technique is for the proxy handling ssl termination to add an X-Forwarded-Proto header. The upstream application can then conditionally redirect when entering a secure area.
With nginx this could be accomplished using a map:
map $http_x_forwarded_proto $insecure {
    default 1;
    https   0;
}

server {
    ...
    if ($insecure) {
        return 301 https://$host$request_uri;
    }
    ...
}
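Since the upstream application here is Django, a hedged alternative (my sketch, not part of the original answer) is to let Django itself trust the proxy's X-Forwarded-Proto header and issue the redirect via its built-in settings:

# settings.py - assumes the SSL-terminating proxy reliably sets X-Forwarded-Proto
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")  # treat such requests as secure
SECURE_SSL_REDIRECT = True  # SecurityMiddleware then redirects anything still seen as http

Only enable SECURE_PROXY_SSL_HEADER when clients cannot reach Django without going through that proxy; otherwise the header can be spoofed.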

AWS Load Balancer https issue

I was trying to set up the load balancer for our servers. If I use HTTP it works fine, but when I switch to HTTPS I get the following error in the browser console:
Mixed Content: The page at 'https://www.something.com/' was loaded over HTTPS, but requested an insecure script '...mootools.js'. This request has been blocked; the content must be served over HTTPS
I thought I had hardcoded something like "http://www.something.com/library/....",
but I did not; I only use "/library/...." for including the JavaScript files.
When I set up the load balancer, it asked me to set the listening port. I set it as HTTPS, load balancer port 443, forwarding to instance port 80.
Does anybody know how I could solve this problem?
Thanks.
The forwarding back to port 80 isn't responsible for this. The culprit is either HTML that is hardcoded to http, or a redirect/server-generated URL pointing to http.
Use the network panel of dev tools (like in Chrome's menu) and inspect each request until you find the culprit.
Here's an example, using this question page. I've selected an insecure request.

Restrict RESTful endpoint on tomcat to local webapp

Is there a mechanism built into Tomcat or Apache to differentiate between a local web application calling a local web service vs a direct call to a webservice?
For example, a single server has both Apache and Tomcat, with a front-end web app deployed to Apache and back-end services deployed on Tomcat. The web app can call the service via port 80 using mod_proxy; however, examining Tomcat's logs shows that a direct call to the web service and the proxied call look identical. For example:
http://127.0.0.1/admin/tools
<Location /admin/tools>
    Order Deny,Allow
    Deny from all
    Allow from 127.0.0.1
</Location>

ProxyPass        /admin/tools http://localhost:8080/admin/tools
ProxyPassReverse /admin/tools http://localhost:8080/admin/tools
This only blocks (or allows, if you remove the Deny) all external requests, and both kinds of request still appear identical in Tomcat's log.
Is there a recommended mechanism to differentiate and limit a direct remote service request vs the web application making a service request?
You need to use Tomcat's Remote IP Filter to extract the client IP provided by the proxy in the X-Forwarded-For HTTP header and use it to populate Tomcat's internal data structures. Then you will be able to correctly identify the source of the request.
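A minimal configuration sketch (assuming Tomcat 7 or later; adjust to your deployment) declares the filter in the web application's web.xml so the client address supplied in X-Forwarded-For replaces the proxy's address in Tomcat's request data:

<!-- web.xml: enable Tomcat's RemoteIpFilter for all requests -->
<filter>
    <filter-name>RemoteIpFilter</filter-name>
    <filter-class>org.apache.catalina.filters.RemoteIpFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>RemoteIpFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>

With the filter in place, requests that came through the local Apache proxy report the originating client address instead of 127.0.0.1, so proxied and direct calls are no longer indistinguishable in the logs or in IP-based checks.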

Django Socketio Nginx proxy & session cookie issue

I have followed this tutorial: http://www.stephendiehl.com/?p=309 describing how to run a gevent pywsgi server serving Django with socketio behind an nginx front end.
As the tutorial says, nginx doesn't support WebSockets unless you use a TCP proxy module. That proxy module doesn't support using the same port for socketio and classic serving, so from what I understood the configuration looks like this:
nginx listen on port 80
nginx tcp proxy listen on port 7000
Everything is forwarded to port 8000
Problem: the resulting socketio request doesn't include the Django cookie containing the session id, so I have no information about the requesting user in my Django view.
I guess this is caused by the request being made to another port (7000), causing the browser to treat it as cross-domain?
What would be the cleanest way to include the Django cookie in the request?
Most answers in this question seem to indicate that port doesn't matter.
I also checked, and supposedly the WebSocket handshake is regarded as HTTP, so HttpOnly cookies should still be sent.
SocketIO seems to be using a custom Session manager to track users. Maybe try and link that up?
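If the cookie does arrive with the handshake, one way to link them up is to read the sessionid cookie from the handshake's WSGI environ and load the Django session from it. This is only a sketch under that assumption; the helper name and the Python 3 style http.cookies import are mine, and django-socketio's own session handling may differ:

from http.cookies import SimpleCookie
from django.contrib.sessions.backends.db import SessionStore

def user_id_from_environ(environ):
    # Parse the cookies sent with the socketio handshake request
    cookies = SimpleCookie(environ.get("HTTP_COOKIE", ""))
    if "sessionid" not in cookies:
        return None  # no Django session cookie reached this port
    session = SessionStore(session_key=cookies["sessionid"].value)
    return session.get("_auth_user_id")  # set by django.contrib.auth at login

Swap SessionStore for whichever session backend your settings actually use.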

Access HTTP_X_FORWARDED_FOR Header in Apache for Django

I want to read the client's IP address in Django. When I try to do so now with the HTTP_X_FORWARDED_FOR header, it fails; the key is not present.
Apparently this is related to configuring my Apache server (I'm deploying with Apache and mod_wsgi). Do I have to configure it as a reverse proxy? How do I do that, and are there security implications?
Thanks,
Brendan
Usually these headers are available in request.META. So you might try request.META['HTTP_X_FORWARDED_FOR'].
Are you using Apache as a reverse proxy as well? This doesn't seem right to me. Usually one uses a lighter weight static server like nginx as the reverse proxy to Apache running the app server. Nginx can send any headers you like using the proxy_set_header config entry.
I'm not familiar with mod_wsgi, but usually the client IP address is available in the REMOTE_ADDR environment variable.
If the client is accessing the website through a proxy, or if your setup includes a reverse proxy, the proxy address will be in the REMOTE_ADDR variable instead, and the proxy may copy the original client IP into HTTP_X_FORWARDED_FOR (depending on its configuration).
If you have a request object, you can access these environment variables like this:
request.environ.get('REMOTE_ADDR')
request.environ.get('HTTP_X_FORWARDED_FOR')
There should be no need to change your Apache configuration or configure a reverse proxy just to get the client's IP address.
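If you want one lookup that works both with and without a proxy, a small helper along these lines is a common pattern (a sketch, not part of the original answer; only trust the forwarded header when a proxy you control sets it):

def get_client_ip(request):
    # Prefer X-Forwarded-For when a trusted proxy supplies it, else fall back to REMOTE_ADDR
    forwarded_for = request.META.get('HTTP_X_FORWARDED_FOR')
    if forwarded_for:
        # The header may hold a comma-separated chain; the left-most entry is the original client
        return forwarded_for.split(',')[0].strip()
    return request.META.get('REMOTE_ADDR')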