Django - How to protect web service URL - API key

I use GeoDjango to create and serve map tiles, which I usually display in OpenLayers as an OpenLayers.Layer.TMS layer.
I am worried that anybody could grab the web service URL, plug it into their own map without asking permission, and then consume a lot of the server's CPU and violate private data ownership. On the other hand, I want the tile service to be publicly available without a login, but from my website only.
Am I right to think that such a violation is possible? If yes, how can I protect against it? Is it possible to hide the URL in the client browser?
Edit:
The way you initiate a tile map service in OpenLayers is through JavaScript that can be read from the client browser, like this:
tiledLayer = new OpenLayers.Layer.TMS('TMS',
    "{{ tmsURL }}1.0/{{ shapefile.id }}/${z}/${x}/${y}.png"
);
It's really easy to copy/paste this into another website and gain access to the web service data.
How can I add an API key to the URL and manage to regenerate it regularly?

There's a great answer on RESTful Authentication that can really help you out. Those principles can be adapted and implemented in Django as well.
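If you want to turn that into the rotating API key you asked about, one lightweight option is a time-bucketed HMAC token. Below is a minimal sketch of the idea, assuming Django's SECRET_KEY as the signing secret; the function and decorator names are mine, not from the linked answer:

import hashlib
import hmac
import time
from functools import wraps

from django.conf import settings
from django.http import HttpResponseForbidden

def tile_token(bucket=None):
    # One token per hour; old tokens expire automatically when the bucket rolls over.
    if bucket is None:
        bucket = int(time.time() // 3600)
    msg = str(bucket).encode()
    return hmac.new(settings.SECRET_KEY.encode(), msg, hashlib.sha256).hexdigest()

def require_tile_token(view):
    @wraps(view)
    def wrapped(request, token, *args, **kwargs):
        now = int(time.time() // 3600)
        # Accept the current and the previous hour, so tokens embedded in a
        # recently rendered page keep working across the rollover.
        if token not in (tile_token(now), tile_token(now - 1)):
            return HttpResponseForbidden('bad or expired tile token')
        return view(request, *args, **kwargs)
    return wrapped

You would pass tile_token() into the template context and add a token segment to the tile URL pattern. Note that a determined scraper can still fetch your page and harvest a fresh token, so this only raises the bar; it doesn't contradict the security-through-obscurity caveat in the second answer below.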
The other thing you can do is handle this one level above Django, in your web server.
For example, I use the following in my nginx + uWSGI + Django setup:
# the IP address of my front-end web app (calling my REST API) is 192.168.1.100
server {
    listen 80;
    server_name my_api;

    # allow only my front end's IP address - ranges work too, e.g. 192.168.0.0/16
    allow 192.168.1.100;
    # deny everyone else
    deny all;

    location / {
        # pass to uwsgi stuff here...
    }
}
This way, even if they got the URL, nginx would cut them off before the request ever reached your application, potentially saving you some resources.
You can read more about the HTTP access module in the nginx documentation.
It's also worth noting that you can do this in Apache too - I just prefer the setup listed above.

This may not answer your question, but there's no way to hide a web request in the browser. Normal users will find it very hard to see the actual request, but for network-savvy users (typically the programmers who would want to take advantage of your API), sniffing the traffic and then seeing and reusing your web request can be very easy.
What you're trying to do is called security through obscurity, and it is generally not recommended. You'll have to create a stronger authentication mechanism if you want your API to be completely secure from unauthorized users.
Good luck!

Related

HTTP 407 Proxy Authentication Required while accessing Amazon S3

I have tried everything, but I can't seem to fix this issue, which is happening for only one client behind a corporate proxy/firewall. Our Silverlight application connects to Amazon S3 for downloading/uploading some documents. On one client, and one client only, it returns a 407 error, and after that the application fails to save anything.
Inner Exception:
System.ServiceModel.ProtocolException: [UnexpectedHttpResponseCode]
Arguments: 407,Proxy Authentication Required
We had something similar at a different client, but that was more of a CORS issue. To resolve it, I used CloudFront to fake a sub-domain that then accesses the S3 bucket, and it solved the issue. I was hoping it would fix things for this client as well, but it didn't.
I have tried adding this code to web.config, as suggested by a lot of answers:
<system.net>
  <defaultProxy useDefaultCredentials="true">
  </defaultProxy>
</system.net>
I have read articles about passing proxy headers with basic authentication using a username and password, but I am not sure how this would help us. The proxy server is used by the client, and any authentication it requires is outside our domain.
**Additional Information**
The Silverlight code references two services. One is our WCF service, which retrieves all the data for the application. The other is the Amazon S3 service, which uses the Amazon SOAP API; its endpoint is at http://s3.amazonaws.com/doc/2006-03-01/AmazonS3.wsdl?
If I go into our app and only use the parts of the system that don't make any calls to the Amazon S3 API, the application works fine. As soon as I go to a part of the system that makes a call to S3, the problem starts. Funnily enough, the call to S3 goes fine and I can retrieve the document, but then any calls to our WCF service return 407.
Any ideas?
**Update 2**
Based on comments from Elliot Nelson, I checked the stack we were using for making HTTP requests in our application. It turns out we are using ClientHttp for both http and https requests by default. Here is the code we have in the App.xaml constructor:
public App()
{
    Startup += Application_Startup;
    UnhandledException += Application_UnhandledException;
    InitializeComponent();
    WebRequest.RegisterPrefix("http://", WebRequestCreator.ClientHttp);
    WebRequest.RegisterPrefix("https://", WebRequestCreator.ClientHttp);
}
Now I need to understand the differences between ClientHttp and BrowserHttp and when to use each, as well as the potential impacts/issues of switching to BrowserHttp.
**Update 3**
Is there a way to request that browsers run your in-browser Silverlight application in trusted mode, and would that help bypass this issue?
(Answer #2)
So, most likely (for corporate environments like this network), almost nothing can be done without whatever custom proxy settings are set in IE, usually pushed by corporate policy. To take advantage of these proxy settings, you want to use WebRequestCreator.BrowserHttp, which automatically uses the browser's default settings when making requests.
There's a table of the differences between these two clients available in the Microsoft docs. I'm guessing you were using something (maybe setting custom headers or reading the raw response body) that wasn't supported in BrowserHttp.
For security reasons, you can't "ask" the browser what its proxy settings are and use them, so this is a tricky situation. You can specify Browser vs Client handling by domain, or even for a specific request (the same page above describes how); in this case you may be able to get away with using BrowserHttp for your service calls and ClientHttp for your S3 calls, and avoid the problem altogether!
For next steps, I'd try that approach; if it doesn't work, I'd try switching wholesale to BrowserHttp just to see if it bypasses the proxy issue (there's almost no chance the application will actually work, since you're probably using ClientHttp-only options).
Long term, you may want to consider making changes to your services so they are usable by a BrowserHttp-only application (this would require you to be pretty basic in your requests/responses, but using only BrowserHttp would be a guarantee you'd work in pretty much any corp network).
Running in trusted mode is probably a group policy thing, which would require their AD admins to approve/whitelist your app.
I think the underlying issue you are facing is that the proxy requires NTLM authentication and for whatever reason the browser declines to provide your app with that context.
One way to prove that it's an NTLM auth issue is to test with curl: get it to make a request through the proxy, and then it should be a bit easier to code against. E.g. the following curl command will get you through 99% of Windows corporate proxies (assuming the proxy is at proxy-host.corp:3128):
C:\> curl.exe -v --proxy proxy-host.corp:3128 --proxy-user : --proxy-ntlm https://www.google.com
NOTE: The --proxy-user : tells curl to use the current user's session to perform the NTLM challenge.
So if you can get the client to run that, you can at least verify that NTLM works; then it's just a matter of getting the app to perform the NTLM challenge using the default credentials (which may or may not be provided by the browser session).
Since you described this as a Silverlight application, I'm going to assume you can't use classic browser-proxy troubleshooting like "move the browser to a public network" or "try a different browser" to isolate the problem.
You should try to isolate the proxy server, and have the customer use the required proxy-auth.
The application is making the request, but it might be intercepted by a transparent proxy, and the response might not be coming from what you consider the web server.
In the early days, a 401 status was strictly associated with web authentication and a 407 with proxy authentication.
Architecturally, the separation is a convenience: a single host can combine web server, proxy, and reverse-proxy behaviors.
What happens is that your customer's environment makes a web connection towards the destination, but receives an HTTP 407 status from some host along the way, probably on their network, or sometimes at their provider. Almost certainly the request is being answered there, not forwarded. The HTTP client your application lives in needs to provide the credentials that host requires. Corporate environments are complex enough that your customer may well say this is the first time they have heard of this (some proxy auth is also dynamic or destination-specific).
Also, in some corporate environments, the operator will allow temporary or permanent white-listing from the proxy-auth service. You should see if they can do this, even temporarily, to confirm there aren't going to be other problems.
In the end, it sounds like your application might not robustly support proxy-auth, or the proxy-auth type they use in their environment.

Redirecting API requests in Django Rest Framework

I have a two-layer backend architecture:
- a "front" server, which serves web clients. This server's codebase is shared with a 3rd-party developer
- a "back" server, which holds top-secret-proprietary-kick-ass-algorithms, and has a single endpoint to do its calculation
When a client sends a request to a specific endpoint in the "front" server, the server should pass the request to the "back" server. The back server then crunches some numbers, and returns the result.
One way of achieving it is to use the requests library. A simpler way would be to have the "front" server simply redirect the request to the "back" server. I'm using DRF throughout both servers.
Is redirecting an ajax request possible using DRF?
You don't even need DRF to add a redirect to the urlconf. All you need is a simple rule:
from django.conf import settings
from django.conf.urls import url, include
from django.views.generic import RedirectView

urlpatterns = [
    url(r"^secret-computation/$",
        RedirectView.as_view(url=settings.BACKEND_SECRET_COMPUTATION_URL)),
    url(r"^", include(your_drf_router.urls)),
]
Of course, you may extend this to a proper DRF view, register it with the DRF's router (instead of adding the url to the urlconf directly), etc., but there isn't much sense in doing so just to return a redirect response.
However, the code above only works for GET requests. You may subclass HttpResponseRedirect to return HTTP 307 (replacing RedirectView with your own simple view class or function), and depending on your clients, things may or may not work. If your clients are web browsers that may include IE9 (or worse), then 307 won't help.
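A minimal sketch of that subclassing idea (the class and view names are mine, not from Django):

from django.conf import settings
from django.http import HttpResponseRedirect

class HttpResponseTemporaryRedirect(HttpResponseRedirect):
    # 307 tells a well-behaved client to repeat the request against the new
    # location with the same method and body, unlike 301/302.
    status_code = 307

def secret_computation(request):
    return HttpResponseTemporaryRedirect(settings.BACKEND_SECRET_COMPUTATION_URL)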
So, unless your clients are known to all be well-behaved (and on non-hostile networks without any weird, way-too-smart proxies - you'd never believe what kinds of insanity those may inflict on HTTP requests), I'd suggest actually proxying the request.
Proxying can be done either in Django - write a GenericViewSet subclass that uses the requests library - or by putting something in front of it, e.g. nginx or Caddy (or any other HTTP server/load balancer that you know best).
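Here is a minimal sketch of the Django-side option, using a plain APIView instead of a GenericViewSet for brevity; the class name is illustrative, and it assumes a JSON-only endpoint:

import requests
from django.conf import settings
from rest_framework.response import Response
from rest_framework.views import APIView

class SecretComputationProxy(APIView):
    def post(self, request):
        # Forward the client's JSON payload to the "back" server.
        upstream = requests.post(
            settings.BACKEND_SECRET_COMPUTATION_URL,
            json=request.data,
            timeout=30,  # don't let a slow back server pin the worker forever
        )
        # Relay the back server's JSON body and status code to the client.
        return Response(data=upstream.json(), status=upstream.status_code)

You would then hook this view into the urlconf in place of the RedirectView shown above.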
For production purposes, as you probably have a fronting web server anyway, I suggest using that. This saves implementation time and also a little bit of server resources, since your "front" Django project won't even have to handle the request and keep a worker busy while waiting for the response.
For development purposes, your options may vary. If you use bare runserver then a proxy view may be your best option. If you use e.g. Docker, you may just throw in an HTTP server container in front of your Django container.
For example, I currently have a two-project setup (a legacy Django 1.6 project and a newer Django 1.11 project, sharing the same database) and a Caddy server in front of them, routing on a per-URL basis. With a simple 9-line Caddyfile things just work:
:80
tls off
log / stdout "{common}"
proxy /foo project1:8000 {
    transparent
}
proxy / project2:8000 {
    transparent
}
(This is a development-mode config.) If you can have something similar, then, I guess, that would be the simplest option.

Restrict access to a Django view, only from the server itself (localhost)

I want to create a localhost-only API in Django, and I'm trying to find a way to restrict access to a view so that it is reachable only from the server itself (localhost). I've tried using:
'HTTP_HOST',
'HTTP_X_FORWARDED_FOR',
'REMOTE_ADDR',
'SERVER_ADDR'
but with no luck.
Is there any other way?
You could configure your web server (Apache, nginx, etc.) to bind only to localhost.
This works well if you want to restrict access to all views; if you want to allow remote users access to some views, you'd have to configure a second Django project.
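For example, in nginx this is just a matter of the listen directive (a sketch; the port and upstream are placeholders for your setup):

server {
    # bind to the loopback interface only, so remote hosts can't connect at all
    listen 127.0.0.1:8000;
    location / {
        # pass to uwsgi/gunicorn here...
    }
}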
The problem is a bit more complex than just checking a variable. To identify the client's IP address, you need
request.META['REMOTE_ADDR'] - the IP address of the client
and then to compare it with request.get_host(). But you should take into account that the server might be bound to 0.0.0.0:80, in which case you'll probably need to do:
import socket
socket.gethostbyaddr(request.META['REMOTE_ADDR'])
and compare this with, let's say,
socket.gethostbyaddr("127.0.0.1")
But you'll need to handle lots of edge cases with these headers and values.
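If you do go down this road, a minimal sketch of a view decorator looks something like this (the names are mine; it deliberately checks only REMOTE_ADDR, which the WSGI server sets from the TCP connection, rather than spoofable headers like X-Forwarded-For):

from functools import wraps
from django.http import HttpResponseForbidden

def localhost_only(view):
    @wraps(view)
    def wrapped(request, *args, **kwargs):
        # Reject anything that did not arrive over the loopback interface.
        if request.META.get('REMOTE_ADDR') not in ('127.0.0.1', '::1'):
            return HttpResponseForbidden()
        return view(request, *args, **kwargs)
    return wrapped

Keep in mind that behind a reverse proxy, REMOTE_ADDR is the proxy's address, so this check only makes sense when clients connect to Django's WSGI server directly.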
A much simpler approach is to put a reverse proxy in front of your app that sets a custom header, let's say X-Source: internet. Then you arrange for traffic from the internet to go through the proxy, while local traffic (in your local network) goes directly to the web server. If you want a specific view to be accessible only from your local network, just check for this header:
# Django exposes the custom header "X-Source" as HTTP_X_SOURCE in request.META
if 'HTTP_X_SOURCE' in request.META:
    ...  # request is coming from the internet, not the local network
else:
    ...  # presumably we have a local request
But again - this is the 'firewall approach': it requires some more setup, plus making sure there is no way to reach the app from outside that doesn't go through the reverse proxy.
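On the proxy side (assuming nginx is the reverse proxy; the upstream name is a placeholder), setting such a header is a single directive:

location / {
    # mark everything that comes through this proxy as internet traffic
    proxy_set_header X-Source internet;
    proxy_pass http://django_upstream;
}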

Website Forms (POST) On Multiple Instances (Servers) Website (Python Django / PHP)

Suppose I have a PHP / Python (Django) website.
The website is running on multiple instances servers.
Meaning the URL for the website is www.test.com, and from a load balancer, it can get the client to www.server1.com or www.server2.com and so on.
When there is a form on the website, and the processing of this form is located on the same page:
Can the following situation exist?
- The user goes to www.test.com - behind the scenes, through the load balancer, he gets to www.server1.com. He fills in a form.
- The form action (URL) is www.test.com - so behind the scenes, through the load balancer, he gets to www.server2.com.
So here, will the needed form data - and, more importantly for my question, the request data (like request.SOMETHING in Python/Django) - be missing? Because maybe it was saved in the session on www.server1.com, and now it is missing on www.server2.com?
The request will always carry all of its own data, as that gets forwarded to the edge server: request.POST and request.GET will have everything from the request. The problem, however, is that the session data might not be available on that edge server. For example, you started your session on server1, then requested another page which went to server2; server2 might assign a new session and forbid you access to certain contents.
To overcome this session problem, you can do one of two things:
Share sessions between servers (central session storage) - see the sketch after this list.
Always forward the user to the same edge server. Some load balancers store the forwarded-to edge server in a cookie; on subsequent requests, the user gets forwarded to the same edge node every time, and that node keeps the user's session, so there are no problems.
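For the first option, Django can keep sessions in any central store that every server behind the load balancer can reach. A minimal sketch (settings module only; the memcached host is hypothetical, and the cache backend class name varies by Django version):

# settings.py on every server behind the load balancer.
# Simplest central store: the shared database both servers already use.
SESSION_ENGINE = 'django.contrib.sessions.backends.db'

# Or a shared cache, e.g. memcached:
# SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
# CACHES = {'default': {
#     'BACKEND': 'django.core.cache.backends.memcached.PyMemcacheCache',
#     'LOCATION': 'sessions.internal:11211',  # hypothetical shared host
# }}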
Yes, this is a valid concern. Due to the nature of the Web (HTTP), the next request might end up on the other server; the technique for avoiding this is called persistence or stickiness.
The solution here would be to save all this information on the client side (using cookies) and not rely on server-side sessions. It would be up to you to implement it this way using Python/Django. The client-side approach gives the best performance and should be the easiest to implement.
Keep in mind that this solution bears quite a significant risk of man-in-the-middle attacks unless you encrypt the connection with SSL/TLS (using HTTPS), as all of the client data is stored in cookies, which could be intercepted.
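In Django specifically, this client-side approach is available out of the box as the signed-cookie session backend; a minimal sketch:

# settings.py - keep the session in the user's cookie instead of on any server.
# The cookie is signed (tamper-evident) but NOT encrypted: users can read its
# contents, so never put secrets in it.
SESSION_ENGINE = 'django.contrib.sessions.backends.signed_cookies'
SESSION_COOKIE_SECURE = True  # HTTPS-only cookie, for the MITM risk noted above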

Asp Mvc 3 - Restful web service for consuming on multiple platforms

I want to expose a RESTful web service for posting and retrieving data, which may be consumed by mobile devices or a web site.
Now the actual creation of the service isn't a problem; what does seem to be a problem is communicating from a different domain.
I have made a simple example service deployed on the ASP.NET development server, which just exposes a simple POST action to accept a request with JSON content. Then I created a simple web page that uses jQuery AJAX to send some dummy data over, yet I believe I am getting stung by the same-origin policy.
Is this a common thing, and how do you get around it? Some places have mentioned having a proxy on the domain that you always send GET requests to, but then you cannot use the service in a RESTful manner...
So is this a common issue with a simple fix? There seem to be plenty of RESTful services out there that allow 3rd parties to use them...
How exactly are you "getting stung with the same origin policy"? From your description, I don't see how it could be relevant. If yourdomain.com/some-path/defined-request.json returns a certain JSON response, then it will return that response regardless of what is requesting the file, unless you have specifically defined required credentials that are not satisfied.
Here is an example of such a web service. It will return the same JSON object regardless of from where the request is made: http://maps.googleapis.com/maps/api/geocode/json?address=1600+Amphitheatre+Parkway,+Mountain+View,+CA&sensor=true
Unless I am misunderstanding you (in which case you should clarify your actual problem), the same origin policy doesn't really seem to apply here.
Update Re: Comment
"I make a simple HTML page and load it as file://myhtmlfilelocation/myhtmlfile.html and try to make an ajax request"
The cause of your problem is that you are using the file:// URL scheme instead of the http:// scheme. You can find information about this scheme in Section 3.10 of RFC 1738. Here is an excerpt:
The file URL scheme is used to designate files accessible on a particular host computer. This scheme, unlike most other URL schemes, does not designate a resource that is universally accessible over the Internet.
You should be able to resolve your issue by using the http:// scheme instead of the file:// scheme when you make your asynchronous HTTP request.
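As a quick way to test this locally, any minimal static file server will do; for example, Python 3's built-in one (the port number is arbitrary):

python -m http.server 8080

Run it from the directory containing myhtmlfile.html and load http://localhost:8080/myhtmlfile.html instead of the file:// URL.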