Sorry for the dense title.
I'd like a way to dynamically map the proxy_pass URL in my nginx config. I'd prefer the logic that does this to be in any language, because I might need to call some web services. For example:
Request url: http://example.com/id-a
Proxied url: http://partner-domain.com/123
The mapping from id-a to 123 is non-trivial and, as mentioned, may have some impure dependencies (such as a web service call). The calls will be fast, though, and I expect great performance. This is why I'm reluctant to have, say, Python/Flask do the proxying itself; nginx is very good at it. Python/Flask would be fast enough to compute the mapping, but I don't know how to hand the result back to nginx to work with.
Any ideas?
Related
I searched and I couldn't find any way to do this, so I am wondering if it is possible.
I have a service running that accepts requests and everything works fine. However, I'd like to answer some of these requests with a different service running on the same machine. These requests are the ones going to some/path/{variable}/etc and method POST.
So I would like to know if it possible to do this directly from nginx without adding any overhead.
My first solution was creating a different API that received all the requests and, if a request was not one I wanted to intercept, simply proxied it to the original service. But this added between 200 and 500 ms to every response, which is not acceptable in our use case.
So I thought that doing this through nginx would resolve much faster, but I couldn't find a way or even find out if it is possible.
Any help would be highly appreciated. If you have any other idea or alternative that I could test or implement, it would be welcome as well.
Edit: per Ivan's request in the comments.
We already have nginx running, serving all the requests via service1.
What we would like to do is:
if request.path is in the form of /path/{variable}/etc and request.method == POST:
    serve using service2
else:
    serve using service1
Assuming your services are hosted on the same server and differ only in their ports, a chain of map blocks should do the trick:
map $uri $service_by_uri {
    ~^/path/[\w]+/etc    127.0.0.1:10002;  # service 2
    default              127.0.0.1:10001;  # service 1
}
map $request_method $service {
    POST       $service_by_uri;  # select service according to the request URI
    default    127.0.0.1:10001;  # service 1
}
server {
    ...
    proxy_pass http://$service;
    ...
}
For some reason I am in need of a views.py that returns only some text. Normally, I'd use HttpResponse("text") for this. In this case, however, I require the text to be sent over HTTPS, to counter the inevitable mixed-content warning.
What is the simplest way of sending plain text via Django (1.7.11) over HTTPS?
The relevant Django docs for HttpRequest.build_absolute_uri read:
Mixing HTTP and HTTPS on the same site is discouraged, therefore
build_absolute_uri() will always generate an absolute URI with the
same scheme the current request has. If you need to redirect users to
HTTPS, it’s best to let your Web server redirect all HTTP traffic to
HTTPS.
The docs make clear that
the method of communication is entirely the responsibility of the server
as Daniel Roseman commented.
My preferred choice is to force HTTPS throughout a site, but it is also possible to do it only for a certain page.
The above can be achieved by either:
Upgrading to a secure and supported release of Django where the use of SECURE_SSL_REDIRECT and SecurityMiddleware will redirect all traffic to SSL
Asking your hosting provider for advice on how this could be implemented on their servers.
Using the Apache config files.
Using .htaccess to redirect a single page.
There are also other, off-the-beaten-path, hackish solutions, like a snippet that can be used with a decorator in urls.py to force HTTPS, or a custom middleware that redirects certain URLs to HTTPS.
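For the first option above, the settings change is small. A minimal sketch, assuming you have upgraded to Django 1.8 or later (where SecurityMiddleware and SECURE_SSL_REDIRECT exist; the 1.7.11 release in the question does not ship them):

```python
# settings.py -- sketch only; requires Django 1.8+ as the answer suggests
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    # ... the rest of your middleware ...
]
SECURE_SSL_REDIRECT = True  # SecurityMiddleware now redirects all HTTP requests to HTTPS
```

With this in place, any plain-HTTP request is answered with a permanent redirect to the HTTPS URL before your views run.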
I've run into mixed-content problems as well. In my experience, you simply can't use HttpResponse objects without running into trouble. I was never totally sure why, though, and eventually found a way "around" it.
My solution was to use the JsonResponse object instead to return JSON strings, kind of a workaround, with the views returning something like:
from django.http import JsonResponse

def my_view(request):
    mytext = 'stuff blablabla'
    return JsonResponse({'response_text': mytext})
Which is super easy to parse, and OK with HTTPS.
Maybe not what you're looking for, but I hope it helps you find your way.
I have a two-layer backend architecture:
a "front" server, which serves web clients. This server's codebase is shared with a 3rd party developer
a "back" server, which holds top-secret-proprietary-kick-ass-algorithms, and has a single endpoint to do its calculation
When a client sends a request to a specific endpoint in the "front" server, the server should pass the request to the "back" server. The back server then crunches some numbers, and returns the result.
One way of achieving it is to use the requests library. A simpler way would be to have the "front" server simply redirect the request to the "back" server. I'm using DRF throughout both servers.
Is redirecting an ajax request possible using DRF?
You don't even need DRF to add a redirect to the urlconf. All you need is a simple rule:
from django.conf import settings
from django.conf.urls import include, url
from django.views.generic import RedirectView

urlpatterns = [
    url(r"^secret-computation/$",
        RedirectView.as_view(url=settings.BACKEND_SECRET_COMPUTATION_URL)),
    url(r"^", include(your_drf_router.urls)),
]
Of course, you may extend this to a proper DRF view, register it with DRF's router (instead of adding the URL to the urlconf directly), and so on, but there isn't much sense in doing that just to return a redirect response.
However, the code above will only work for GET requests. You may subclass HttpResponseRedirect to return HTTP 307 (replacing RedirectView with your own simple view class or function), and depending on your clients, things may or may not work. If your clients are web browsers and those may include IE9 (or worse), then 307 won't help.
So, unless your clients are known to all be well-behaved (and on non-hostile networks without any weird, way-too-smart proxies; you'll never believe what kinds of insanity those may inflict on HTTP requests), I'd suggest actually proxying the request.
Proxying can be done either in Django - write a GenericViewSet subclass that uses requests library - or by using something in front of it, e.g. nginx or Caddy (or any other HTTP server/load balancer that you know best).
For production purposes, as you probably have a fronting web server already, I suggest using that. This saves implementation time and also a little bit of server resources, as your "front" Django project won't even have to handle the request and keep a worker busy while it waits for the response.
For development purposes, your options may vary. If you use bare runserver, then a proxy view may be your best option. If you use e.g. Docker, you may just throw in an HTTP server container in front of your Django container.
For example, I currently have a two-project setup (a legacy Django 1.6 project and a newer Django 1.11 project sharing the same database) and a Caddy server in front of them, routing on a per-URL basis. With a simple 9-line Caddyfile, things just work:
:80
tls off
log / stdout "{common}"
proxy /foo project1:8000 {
    transparent
}
proxy / project2:8000 {
    transparent
}
(This is a development-mode config.) If you can have something similar, then, I guess, that would be the simplest option.
I extend DS.ActiveModelAdapter to use a custom host, since my API is on a subdomain, using, for example, http://api.lvh.me:3000 when working locally.
In my tests I try to use Pretender to mock the responses to the API requests, but Pretender isn't handling the requests, I suspect due to this custom host setting.
I've tried many different variations to make this work, including setting the host to different values, not setting the host at all, running the tests with the --proxy command, and so on.
I'm obviously just throwing darts at a wall and hoping something will stick. Can anyone guide me to understanding what I should be doing?
It might work if you define the host of your adapter as a config variable:
import DS from 'ember-data';
import config from '../config/environment'; // adjust the path to your app's config module

export default DS.ActiveModelAdapter.extend({
  host: config.apiHost
});
You define host to be the "real" host in non-testing environments (http://api.lvh.me:3000) and just omit config.apiHost in testing. If you do so, you can use Pretender to stub out the requests, since they are now same-host (in other words, relative) requests.
I use GeoDjango to create and serve map tiles that I usually display in OpenLayers as OpenLayers.Layer.TMS.
I am worried that anybody could grab the web service URL, plug it into their own map without asking permission, and then consume a lot of the server's CPU and violate private data ownership. On the other hand, I want the tile service to be publicly available without a login, but from my website only.
Am I right to think that such a violation is possible? If yes, what would be the way to protect against it? Is it possible to hide the URL in the client browser?
Edit:
The way you initiate a tile map service in OpenLayers is through JavaScript that can be read from the client browser, like this:
tiledLayer = new OpenLayers.Layer.TMS('TMS',
    "{{ tmsURL }}1.0/{{ shapefile.id }}/${z}/${x}/${y}.png"
);
It's really easy to copy/paste this into another website and gain access to the web service data.
How can I add an API key to the URL, and manage to regenerate it regularly?
There's a great answer on RESTful Authentication that can really help you out. Those principles can be adapted and implemented in Django as well.
The other thing you can do is take it one level higher than implementing this in Django and use your web server instead.
For example I use the following in my nginx + uwsgi + django setup:
# The IP address of my front-end web app (calling my REST API) is 192.168.1.100.
server {
    listen 80;
    server_name my_api;

    # allow only my subnet's IP address - but can also do ranges, e.g. 192.168.1.100/16
    allow 192.168.1.100;
    # deny everyone else
    deny all;

    location / {
        # pass to uwsgi stuff here...
    }
}
This way, even if they got the URL, nginx would cut them off before the request even reached your application (potentially saving you some resources).
You can read more about HTTP Access in the nginx documentation.
It's also worth noting that you can do this in Apache too - I just prefer the setup listed above.
This may not answer your question, but there's no way to hide a web request in the browser. For normal users, seeing the actual request will be very hard, but for network/computer-savvy users (typically the programmers who would want to take advantage of your API), doing some sniffing and eventually seeing and using your web request may be very easy.
What you're trying to do is called security through obscurity, and it is generally not recommended. You'll have to create a stronger authentication mechanism if you want your API to be completely secure from unauthorized users.
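As a concrete illustration of one such stronger mechanism (all names here are hypothetical, not from the question): expiring, HMAC-signed tile URLs. The server embeds an expiry timestamp and an HMAC of the path plus expiry in each URL it hands to the page; the tile view recomputes the HMAC and rejects stale or forged requests. Regenerating the secret key invalidates every previously issued URL at once:

```python
import hashlib
import hmac
import time

SECRET_KEY = b"rotate-me-regularly"  # regenerate this to invalidate old URLs


def sign_tile_url(path, ttl=3600, now=None):
    """Return `path` with an expiry timestamp and HMAC signature appended."""
    expires = int(now if now is not None else time.time()) + ttl
    sig = hmac.new(SECRET_KEY, f"{path}:{expires}".encode(), hashlib.sha256).hexdigest()
    return f"{path}?expires={expires}&sig={sig}"


def verify_tile_url(path, expires, sig, now=None):
    """Check the signature and expiry before serving the tile."""
    if int(now if now is not None else time.time()) > int(expires):
        return False  # link has expired
    expected = hmac.new(SECRET_KEY, f"{path}:{expires}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

Your Django tile view would call verify_tile_url before rendering, and the template would emit pre-signed URLs instead of the bare {{ tmsURL }} pattern. It doesn't stop a determined attacker who scrapes your page live, but copied URLs stop working once they expire.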
Good luck!