Suppose I have a PHP / Python (Django) website.
The website is running on multiple server instances.
Meaning the URL for the website is www.test.com, and a load balancer can route the client to www.server1.com or www.server2.com and so on.
When there is a form on the website, and the processing of this form is located on the same page:
Can the following situation exist?
- The user goes to www.test.com - behind the scenes, through the load balancer, he gets to www.server1.com. He fills out a form.
- The form action (URL) points to www.test.com - so behind the scenes, through the load balancer, he gets to www.server2.com.
So here, will the needed form data, and more importantly for my question - the 'request' data (like request.SOMETHING in Django) - be missing? Because maybe it was saved earlier in the session on www.server1.com, and now it is missing on www.server2.com?
The request will always have all its data, as that gets forwarded to the edge server. request.POST and request.GET will have all the data from the request. The problem, however, is that the session data might not be available at that edge server. For example, you start your session on server1, then request another page from server2. server2 might assign a new session and forbid you access to certain content.
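To illustrate the distinction, here is a minimal sketch (the view and key names are made up): form data travels inside the request itself, while session data is looked up in server-side storage that server2 may not share.

from django.http import HttpResponse

def submit(request):
    # Form data travels with the request itself, so it is always here.
    name = request.POST.get("name")
    # Session data lives in server-side storage; on a server that doesn't
    # share that storage, the key is simply gone.
    step = request.session.get("wizard_step")
    return HttpResponse("ok" if step else "session lost")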
To overcome this session problem, you can do one of two things:
- Share sessions between servers (central session storage) - see the sketch below.
- Always forward the user to the same edge server. Some load balancers store the forwarded-to edge server in a cookie. On subsequent requests, the user gets forwarded to the same edge node every time. That same edge node will keep the session of that user, so no problems.
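For the first option, Django supports central session storage out of the box. A minimal sketch, assuming a recent Django (4.0+, for the built-in Redis cache backend) and a Redis instance reachable by all servers - the host name here is an assumption:

# settings.py - cache-backed sessions shared by every instance in the pool
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        "LOCATION": "redis://session-store.internal:6379",  # assumed shared Redis host
    }
}
SESSION_ENGINE = "django.contrib.sessions.backends.cache"

With this, it no longer matters which edge server handles a request: they all read and write the same session store.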
Yes, this is a valid concern. Due to the nature of the Web (HTTP), the other request might end up on the other server. Keeping a user pinned to one server is called persistence or stickiness.
The solution here would be to save all this information on the client side (using cookies) and not rely on server-side sessions. So it would be up to you to implement it like this using Python/Django. The client-side approach gives the best performance and should be the easiest to implement.
Keep in mind that this solution bears quite a significant security risk for man-in-the-middle attacks unless you encrypt the connection with SSL/TLS (using HTTPS), as all of the client data is stored in cookies which could be intercepted.
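If you go this client-side route, Django actually ships a signed-cookie session backend that stores the whole session in the client's cookie (signed against tampering, but not encrypted - so the interception caveat above still applies). A minimal sketch:

# settings.py - store the session client-side in a signed cookie
SESSION_ENGINE = "django.contrib.sessions.backends.signed_cookies"
SESSION_COOKIE_HTTPONLY = True   # keep JavaScript away from the cookie
SESSION_COOKIE_SECURE = True     # only send it over HTTPS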
I have a two-layer backend architecture:
a "front" server, which serves web clients. This server's codebase is shared with a 3rd party developer
a "back" server, which holds top-secret-proprietary-kick-ass-algorithms, and has a single endpoint to do its calculation
When a client sends a request to a specific endpoint in the "front" server, the server should pass the request to the "back" server. The back server then crunches some numbers, and returns the result.
One way of achieving it is to use the requests library. A simpler way would be to have the "front" server simply redirect the request to the "back" server. I'm using DRF throughout both servers.
Is redirecting an ajax request possible using DRF?
You don't even need DRF to add a redirection to the urlconf. All you need for a redirect is a simple rule:
from django.conf import settings
from django.conf.urls import url, include
from django.views.generic import RedirectView

urlpatterns = [
    url(r"^secret-computation/$",
        RedirectView.as_view(url=settings.BACKEND_SECRET_COMPUTATION_URL)),
    url(r"^", include(your_drf_router.urls)),
]
Of course, you may extend this to a proper DRF view, register it with DRF's router (instead of adding the url directly to the urlconf), and so on - but there isn't much sense in doing that just to return a redirect response.
However, the code above only works for GET requests. You may subclass HttpResponseRedirect to return HTTP 307 (replacing RedirectView with your own simple view class or function), and depending on your clients, things may or may not work. If your clients are web browsers and those may include IE9 (or worse), then 307 won't help.
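A minimal sketch of that subclassing idea (the view name is made up):

from django.conf import settings
from django.http import HttpResponseRedirect

class HttpResponseTemporaryRedirect(HttpResponseRedirect):
    # 307 tells the client to repeat the same method and body at the new
    # URL, unlike 301/302, which most clients replay as a GET.
    status_code = 307

def secret_computation(request):
    return HttpResponseTemporaryRedirect(settings.BACKEND_SECRET_COMPUTATION_URL)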
So, unless your clients are known to be all well-behaved (and on non-hostile networks without any weird way-too-smart proxies - you'll never believe what kinds of insanity those may inflict on HTTP requests), I'd suggest actually proxying the request.
Proxying can be done either in Django - write a GenericViewSet subclass that uses requests library - or by using something in front of it, e.g. nginx or Caddy (or any other HTTP server/load balancer that you know best).
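For the Django-side option, here is a minimal sketch using the requests library (the class name is made up, a plain ViewSet is used for brevity, and BACKEND_SECRET_COMPUTATION_URL is assumed to live in settings):

import requests
from django.conf import settings
from rest_framework import viewsets
from rest_framework.response import Response

class SecretComputationViewSet(viewsets.ViewSet):
    """Relays the client's payload to the back server and returns its reply."""
    def create(self, request):  # handles POST when registered with a router
        upstream = requests.post(
            settings.BACKEND_SECRET_COMPUTATION_URL,
            json=request.data,
            timeout=10,
        )
        return Response(upstream.json(), status=upstream.status_code)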
For production purposes, as you probably have a fronting webserver anyway, I suggest using that. This saves implementation time and also a little bit of server resources, as your "front" Django project won't even have to handle the request and keep a worker busy while it waits for the response.
For development purposes, your options may vary. If you use bare runserver then a proxy view may be your best option. If you use e.g. Docker, you may just throw in an HTTP server container in front of your Django container.
For example, I currently have a two-project setup (a legacy Django 1.6 project and a newer Django 1.11 project, sharing the same database) and a Caddy server in front of them, routing on a per-URL basis. With a simple 9-line Caddyfile, things just work:
:80
tls off
log / stdout "{common}"
proxy /foo project1:8000 {
    transparent
}
proxy / project2:8000 {
    transparent
}
(This is a development-mode config.) If you can have something similar, then, I guess, that would be the simplest option.
I have my production servers running behind a load balancer on AWS (they scale up based on an AMI). Some websites have cookies - for example, a restaurant with multiple locations, and each location is set in a cookie.
I noticed that a cookie wasn't being saved across multiple servers, so I remedied this by going into Load Balancers -> Port Configuration, clicking Enable Application Generated Cookie Stickiness, and inserting the name of the cookie.
As far as I know, this only allows one cookie name, and I have many - Google Analytics, for example. (Perhaps they can be comma separated, I haven't checked yet.)
My port configuration now looks like this:
80 (HTTP) forwarding to 80 (HTTP)
Stickiness: AppCookieStickinessPolicy, cookieName='MY_COOKIE'
I was wondering if there was any way to allow ANY app generated cookie to be recognized, instead of having to name them individually.
Any input greatly appreciated. Thank you!
I think you're misunderstanding the use and purpose of session stickiness.
If you don't have a shared session store - e.g. memcached, redis, or something else that is available to ALL instances in your pool - then you're probably using a session mechanism that involves local storage. Saving sessions on the local file system is a common mechanism for PHP, while IIS will usually have a local session store.
If you're using a local session store, then you need to make sure that all subsequent requests come back to the node that has the session stored - because if they don't, then whatever information your application has saved in the session is no longer available.
To do this, you have two choices: let the ELB set and manage the session affinity cookie itself, or have it key the affinity off the session cookie your application sets. Note that in both cases the ELB will create a cookie named AWSELB whose value lets it map the request back to the instance that originally created the session - the difference is that, when you tie it to your application's session cookie, the ELB only generates a new AWSELB cookie when it sees a new session cookie.
It sounds like the application problem could be because you're pulling the location from the session, not from the cookie, but that's just a guess.
I use geodjango to create and serve map tiles that I usually display in OpenLayers as OpenLayers.Layer.TMS.
I am worried that anybody could grab the web service URL and plug it into their own map without asking permission, and then consume a lot of the server's CPU and violate private data ownership. On the other hand, I want the tile service to be publicly available without login, but from my website only.
Am I right to think that such a violation is possible? If yes, what would be the way to protect against it? Is it possible to hide the URL in the client browser?
Edit:
The way you initiate the tile map service in OpenLayers is through javascript that can be read from the client browser, like this:
var tiledLayer = new OpenLayers.Layer.TMS('TMS',
    "{{ tmsURL }}1.0/{{ shapefile.id }}/${z}/${x}/${y}.png"
);
It's really easy to copy/paste this into another website and gain access to the web service data.
How can I add an API Key in the url and manage to regenerate it regularly?
There's a great answer on RESTful Authentication that can really help you out. Those principles can be adapted and implemented in Django as well.
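As a minimal sketch of the API-key idea from the question (the decorator, the TILE_API_KEY setting, and the ?key= parameter are all assumptions; "regenerating the key regularly" then just means changing that setting on a schedule):

import hmac
from django.conf import settings
from django.http import HttpResponseForbidden

def require_api_key(view_func):
    """Rejects tile requests whose ?key= parameter doesn't match the current key."""
    def wrapper(request, *args, **kwargs):
        supplied = request.GET.get("key", "")
        # Constant-time comparison to avoid timing side channels.
        if not hmac.compare_digest(supplied, settings.TILE_API_KEY):
            return HttpResponseForbidden("invalid API key")
        return view_func(request, *args, **kwargs)
    return wrapper

Your template would then append the current key to the tmsURL it renders, so the JavaScript always picks up a fresh key. Keep in mind this only raises the bar: anyone who can load your page can also read the key.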
The other thing you can do is handle this one level above Django, in your web server.
For example I use the following in my nginx + uwsgi + django setup:
# the IP address of my front end web app (calling my REST API) is 192.168.1.100
server {
    listen 80;
    server_name my_api;

    # allow only my subnet - this can also be a range, e.g. 192.168.0.0/16
    allow 192.168.1.100;
    # deny everyone else
    deny all;

    location / {
        # pass to uwsgi stuff here...
    }
}
This way, even if they got the URL, nginx would cut them off before it even reached your application (potentially saving you some resources...??).
You can read more about HTTP Access in the nginx documentation.
It's also worth noting that you can do this in Apache too - I just prefer the setup listed above.
This may not answer your question, but there's no way to hide a web request in the browser. For normal users, seeing the actual request will be very hard, but for network/computer-savvy users (typically the programmers who would want to take advantage of your API), sniffing the traffic and finally seeing/using your web request may be very easy.
What you're trying to do is called security through obscurity, and it is generally not recommended. You'll have to create a stronger authentication mechanism if you want your API to be completely secure from unauthorized users.
Good luck!
We are presently rewriting an in-production Django site. We would like to deploy the new site in parallel with the old site, and slowly divert traffic from old to new using the following scheme:
New accounts go to the new site
Existing accounts go to the old site
Existing accounts may be offered the opportunity to opt in to the new site
Accounts diverted to the new site may opt out and be returned to the old site
It's clear to me that a cookie is involved, and that Nginx is capable of rewriting requests based on a cookie:
Nginx redirect if cookie present
How do I run two rails apps behind the same domain and have nginx route requests based on cookie?
How the cookie gets set remains a bit of a mystery to me. It seems like a chicken-and-egg problem. Has anyone successfully run a scheme like this? How did you do it?
I think the most suitable solution for your problem would be (see the sketch after this list):
1. Nginx, on every request, checks for a specific cookie (say, route):
- If it is present and equals old, the request goes to the old site.
- Otherwise the request goes to the new site.
2. Every site (new and old) checks the request for that route cookie:
- If the cookie isn't present (or has the wrong value), the app sets it to the right value; if the request is for this site, it just proceeds.
- If not, it sends a redirect, and we begin again at step 1.
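A minimal sketch of the application side as Django middleware - the cookie name route, the values old/new, and the class itself are all assumptions:

from django.shortcuts import redirect

SITE_VERSION = "new"  # in the old project's settings this would be "old"

class RouteCookieMiddleware:
    """Sets and honors the 'route' cookie that nginx uses to pick a site."""
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        route = request.COOKIES.get("route")
        if route in ("old", "new") and route != SITE_VERSION:
            # The cookie says the user belongs to the other site:
            # redirect so nginx re-routes based on the cookie (step 1).
            return redirect(request.get_full_path())
        response = self.get_response(request)
        if route != SITE_VERSION:
            # No cookie yet (or a stale one): claim the user for this site.
            response.set_cookie("route", SITE_VERSION, max_age=30 * 24 * 3600)
        return response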
I'm trying to accomplish the following behaviour:
When the user access to the site by means of:
http://example.com/
I want him to be redirected to:
https://example.com/
Via middleware, if the user is not logged in, the login template is rendered when accessing /; if the user is logged in, / is the main view. Once the user logs in, I want the site working over plain http.
To do so, I am running the same server on ports 80 and 443 (is this really necessary? I have the impression that I'm running two separate servers with the same application, while I want one server listening on two ports).
When the user navigates away from the login page, the data in request.session is not present due to the redirection to the http server (although it is present on https), thus showing that no user is logged in. So, assuming the Apache setup is correct (running the same application on two different ports), I guess I have to pass the cookie from the server running on https over to http.
Can anybody shed some light on this? Thank you
First off, make sure that the setting SESSION_COOKIE_SECURE is set to False. As long as the domains are the same, the cookies in the browser should be present, and so the session information should still be there.
Take a look at your cookies using a browser plugin and search for the session cookie you have set. By default this cookie is named "sessionid" by Django. Make sure the domains and paths are in fact correct for both the secure session and the regular session.
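As a quick sketch of the relevant settings (these are Django's actual setting names; the values shown are the defaults, except SESSION_COOKIE_SECURE, which is what this answer suggests):

# settings.py
SESSION_COOKIE_SECURE = False    # let the browser send the cookie over plain http too
SESSION_COOKIE_NAME = "sessionid"   # Django's default
SESSION_COOKIE_DOMAIN = None        # default: only the exact domain that set it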
I want to warn against this, however. Recently, tools like Firesheep have exploited an issue that people have known about but ignored for a long time: these cookies are not secure in any way. It would be easy for someone to "sniff" the cookie over the HTTP connection and gain access to the site as your logged-in user. This essentially eliminates the entire reason you set up a secure connection to log in in the first place.
Is there a reason you don't use a secure connection across the entire site? Traditional arguments about it being more intensive on the server really don't apply with modern CPUs any longer, and the exploits I refer to above are becoming so prevalent that the marginal (really marginal) cost of encrypting all of your traffic is well worth it.
Apache needs to have essentially two different servers running because a.) it is listening on two different ports and b.) one is adding some additional encryption logic. That said, this is a normal thing for Apache. I run machines with dozens of "servers" listening on different ports and doing different logic. In the grand scheme of things, this shouldn't really weigh your server down.
That said, once you pass the request to *WSGI or mod_python, you will then need logic to make sure that no one tries to log in over your non-encrypted connection, because the only difference visible to Django will be the value of request.is_secure(). All the URLs and views in your urlconf will be accessible on both ports.
Whew that is a lot. I hope that helps.