Routing traffic between two servers based on a cookie - django

We are presently rewriting an in-production Django site. We would like to deploy the new site in parallel with the old site, and slowly divert traffic from old to new using the following scheme:
New accounts go to the new site
Existing accounts go to the old site
Existing accounts may be offered the opportunity to opt in to the new site
Accounts diverted to the new site may opt out and be returned to the old site
It's clear to me that a cookie is involved, and that Nginx is capable of rewriting requests based on a cookie:
Nginx redirect if cookie present
How do I run two rails apps behind the same domain and have nginx route requests based on cookie?
How the cookie gets set remains a bit of a mystery to me. It seems like a chicken-and-egg problem. Has anyone successfully run a scheme like this? How did you do it?

I think the most suitable solution for your problem would be:
Nginx, at every request, should check for a specific cookie, say route:
If it's present and equals old, the request goes to the old site.
Otherwise the request goes to the new site.
Every site (new and old) should check the request for that cookie (route):
If the cookie isn't present (or has the wrong value), your app should set it to the right value, and if the request is meant for that site, just process it.
If not, it should send a redirect, and we start again at step 1.
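The nginx side then just maps the cookie value to an upstream, and the app side is what resolves the chicken-and-egg problem: whichever site first handles a cookie-less request sets the cookie. A rough Django middleware sketch, assuming a cookie named route with values old / new (names are illustrative, nothing nginx requires):

# Hypothetical middleware sketch; add it to MIDDLEWARE in settings.py.
from django.shortcuts import redirect

THIS_SITE = "new"  # set to "old" in the old deployment's settings

class RouteCookieMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        route = request.COOKIES.get("route")
        if route in ("old", "new") and route != THIS_SITE:
            # The user belongs to the other deployment: redirect, so the
            # request goes back through nginx, which routes by the cookie.
            return redirect(request.build_absolute_uri())
        response = self.get_response(request)
        if route not in ("old", "new"):
            # No (or invalid) cookie yet: claim the user for this site.
            # A real implementation would choose "old" or "new" based on
            # whether the account already exists, per the scheme above.
            response.set_cookie("route", THIS_SITE, max_age=60 * 60 * 24 * 365)
        return response

Opt-in and opt-out then reduce to a view that rewrites the route cookie to the other value and redirects to /, after which nginx sends the user to the other deployment.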

Keystone session cookie only working on localhost

Edit:
After investigating this further, it seems cookies are sent correctly with most API requests. However, something goes wrong in the specific request that checks whether the user is logged in: it always returns null. When refreshing the browser, a successful preflight request is sent and nothing else, even though there is a session and a valid session cookie.
Original question:
I have a NextJS frontend authenticating against a Keystone backend.
When running on localhost, I can log in and then refresh the browser without getting logged out, i.e. the browser reads the cookie correctly.
When the application is deployed on an external server, I can still log in, but when refreshing the browser it seems no cookie is found and it is as if I'm logged out. However if I then go to the Keystone admin UI, I am still logged in.
In the browser settings, I can see that for localhost there is a "keystonejs-session" cookie being created. This is not the case for the external server.
Here are the session settings from the Keystone config file.
The value of process.env.DOMAIN on the external server would be, for example, example.com when Keystone is deployed to admin.example.com. I have also tried .example.com, with a leading dot, with the same result. (I believe the leading dot is ignored in newer specifications.)
import { statelessSessions } from '@keystone-6/core/session';

const sessionConfig = {
  maxAge: 60 * 60 * 24 * 30,
  secret: process.env.COOKIE_SECRET,
  sameSite: 'lax',
  secure: true,
  domain: process.env.DOMAIN,
  path: "/",
};
const session = statelessSessions(sessionConfig);
(The session object is then passed to the config function from @keystone-6/core.)
Current workaround:
I'm currently using a workaround which involves routing all API requests to '/api/graphql' and rewriting that request to the real URL using Next's own rewrites. Someone recommended this might work and it does, sort of. When refreshing the browser window the application is still in a logged-out state, but after a second or two the session is validated.
To use this workaround, add the following rewrite directive to next.config.js
rewrites: () => [
  {
    source: '/api/graphql',
    destination:
      process.env.NODE_ENV === 'development'
        ? `http://localhost:3000/api/graphql`
        : process.env.NEXT_PUBLIC_BACKEND_ENDPOINT,
  },
],
Then make sure you use this URL for queries. In my case that's the URL I feed to createUploadLink().
This workaround still means constant error messages in the logs since relative URLs are not supposed to work. I would love to see a proper solution!
It's hard to know what's happening for sure without knowing more about your setup. Inspecting the requests and responses your browser is making may help figure this out. Look in the "network" tab of your browser dev tools. When you make the request to sign in, you should see the cookie being set in the headers of the response.
Some educated guesses:
Are you accessing your external server over HTTPS?
The Keystone docs for the session API mention that, when setting secure to true...
[...] the cookie is only sent to the server when a request is made with the https: scheme (except on localhost)
So, if you're running your deployed env over plain HTTP, the cookie is never set, producing the behaviour you're describing. Somewhat confusingly, the requirement is waived on localhost, which is why it works in development.
A similar thing can happen if you're deploying behind a proxy, like nginx:
In this scenario, a lot of people choose to have the proxy terminate the TLS connection, so requests are forwarded to the backend over HTTP (but on a private network, so still relatively secure). In that case, you need to do two things:
Ensure the proxy is configured to forward the X-Forwarded-Proto header, which tells the backend which protocol was used for the original request
Tell express to trust what the proxy is saying by configuring the trust proxy setting
I did a write up of this proxy issue a while back. It's for Keystone 5 (so some of the details are off) but, if you're using a reverse proxy, most of it's still relevant.
Update
From Simon's comment, the above guesses missed the mark 😭 but I'll leave them here in case they help others.
Since posting about this issue a month ago I was actually able to work around it by routing API requests via a relative path like '/api/graphql' and then forwarding that request to the real API on a separate subdomain. For some mysterious reason it works this way.
This is starting to sound like a CORS issue.
If you want to serve your frontend from a different origin (domain) than the API, the API needs to return a specific header to allow this. Read up on CORS and the Access-Control-Allow-Origin header. You can configure this by setting the cors option in the Keystone server config, which Keystone uses to configure the cors package.
Alternatively, the solution of proxying API requests via the Next app should also work. It's not obvious to me why your proxying "workaround" is experiencing problems.

Cookies when using separate Dyno for react frontend and Django backend

I am building a simple web app using React.js for the frontend and Django for the server side.
They run on separate dynos: frontend.herokuapp.com and backend.herokuapp.com.
When I attempt to make calls to my API through the React app, the cookie that was received from the API is not sent with the requests.
I had expected that I would be able to support this configuration without having to do anything special since all server-side requests would (I thought) be made by the JS client app directly to the backend process with their authentication cookies attached.
In an attempt to find a solution, I tried setting
SESSION_COOKIE_DOMAIN = "herokuapp.com"
While less than ideal (herokuapp.com is a vast domain), this seemed safe enough for production, where the apps would instead live on api.myapp.com and www.myapp.com.
However, with this value set in settings.py I get an AuthStateMissing when hitting my /oauth/complete/linkedin-oauth2/ endpoint.
Searching Google for AuthStateMissing SESSION_COOKIE_DOMAIN yields one solitary result, which implies that the issue was reported as a bug in Django social auth and has since been closed without further commentary.
Any light anyone could throw would be very much appreciated.
I ran into the exact same problem while using herokuapp.com.
I even posted a question on SO here.
According to Heroku documentation:
In other words, in browsers that support the functionality, applications in the herokuapp.com domain are prevented from setting cookies for *.herokuapp.com
In short: frontend.herokuapp.com and backend.herokuapp.com cannot share cookies, because cookies for *.herokuapp.com are blocked.
You need to add custom domains to both frontend.herokuapp.com and backend.herokuapp.com.
The full answer: https://stackoverflow.com/a/54513216/1501643
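Once both apps sit on custom domains you control (say www.myapp.com and api.myapp.com; the names here are just illustrative), the Django side can scope its cookies to the shared parent domain. A minimal settings.py sketch under that assumption:

# settings.py -- assumes frontend and backend share the (hypothetical) parent
# domain myapp.com; adjust the names to your own domains.
SESSION_COOKIE_DOMAIN = ".myapp.com"   # cookie visible to www. and api.
CSRF_COOKIE_DOMAIN = ".myapp.com"
SESSION_COOKIE_SECURE = True           # both subdomains are served over HTTPS
SESSION_COOKIE_SAMESITE = "Lax"        # may need "None" (plus Secure) if the
                                       # browser treats the two as cross-site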

Are cookies safe in a Heroku app on herokuapp.com?

I am developing an app, which I will deploy on Heroku. The app is only used within an iframe on another site, so I don't care about the domain name. I plan to deploy my app on example.herokuapp.com instead of using a custom domain on example.com.
My app uses cookies, and I want to be sure that others cannot manipulate my cookies to protect my app against session fixation and similar attacks. If attacker.herokuapp.com is able to set a cookie for herokuapp.com, browsers will not be able to protect me, since herokuapp.com is not a public suffix. See http://w2spconf.com/2011/papers/session-integrity.pdf for a detailed description of the issue.
My question is: When browsers can't protect my users, will Heroku do it by blocking cookies for herokuapp.com?
Just wanted to post an update for anyone who ran across this question as I did. I was working on a similar problem, except that I wanted to purposefully allow access to the same cookie from two different heroku apps.
"herokuapp.com" and "herokussl.com" are now on the Public Suffix List, so your cookies should be safe if they are set for one of those domains. I ended up having to use custom domains in order to share cookies across both apps.
Heroku also released an article on the topic: https://devcenter.heroku.com/articles/cookies-and-herokuapp-com
I just tried to add a cookie from my Heroku app with the response header Set-Cookie: name=value;Path=/;Domain=.herokuapp.com, and to my disappointment, I could see the header intact in my browser. So the Heroku infrastructure does not detect and remove this cross-app supercookie.
I see three possible ways to protect a Heroku app against cross-app supercookies:
Don't use cookies at all.
Use a custom domain.
Verify that each cookie was actually set by your app, and restrict it to the client's IP address by checking the X-Forwarded-For header (a rough Django sketch of this follows below).
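A rough sketch of that third option in Django; the cookie name is made up, django.core.signing provides the "was this set by my app" check, and the client IP is read from X-Forwarded-For as populated by Heroku's router:

# Hypothetical middleware: issues a signed marker cookie bound to the client
# IP and rejects any marker cookie it did not sign itself.
from django.core import signing
from django.http import HttpResponseForbidden

COOKIE_NAME = "app_marker"  # illustrative name

def client_ip(request):
    forwarded = request.META.get("HTTP_X_FORWARDED_FOR", "")
    # Heroku's router appends the IP it saw; take the last entry.
    return forwarded.split(",")[-1].strip() or request.META.get("REMOTE_ADDR", "")

class SignedMarkerMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        ip = client_ip(request)
        raw = request.COOKIES.get(COOKIE_NAME)
        if raw is not None:
            try:
                if signing.loads(raw) != ip:
                    return HttpResponseForbidden("cookie/IP mismatch")
            except signing.BadSignature:
                return HttpResponseForbidden("cookie was not set by this app")
        response = self.get_response(request)
        if raw is None:
            response.set_cookie(COOKIE_NAME, signing.dumps(ip), httponly=True)
        return response

Note that binding to the client IP is a blunt instrument (mobile clients change IPs), so treat this as a mitigation rather than a substitute for a custom domain.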
My feature request to Heroku would be for them to filter HTTP responses that go through their HTTP routing, such that applications hosted on their infrastructure cannot set cookies with Domain=herokuapp.com.
It seems to me that, as long as you set the cookie for example.herokuapp.com, then the cookie is safe from manipulation. The cookie will only be presented to the app running on example.herokuapp.com and to herokuapp.com (where no app runs).

Website Forms (POST) On Multiple Instances (Servers) Website (Python Django / PHP)

Suppose I have a PHP / Python (Django) website.
The website is running on multiple instances servers.
Meaning the URL for the website is www.test.com, and the load balancer can send the client to www.server1.com or www.server2.com, and so on.
When there is a form on the website, and the processing of this form is handled on the same page:
Can the following situation occur?
- The user goes to www.test.com; behind the scenes, through the load balancer, he lands on www.server1.com. He fills in a form.
- The form action (URL) points to www.test.com, so behind the scenes, through the load balancer, he lands on www.server2.com.
So, will the needed form data, and (more importantly for my question) the request data (like request.SOMETHING in Python/Django), be missing? Perhaps because it was saved in the session on www.server1.com and is now missing on www.server2.com?
The request itself will always have all of its data, because the whole request is forwarded to the edge server: request.POST and request.GET will contain everything that was sent. The problem, however, is that the session data might not be available on that edge server. For example, you start your session on server1, then request another page that lands on server2; server2 might assign a new session and forbid you from accessing certain content.
To overcome this session problem, you can do one of two things:
Share sessions between servers (central session storage; a Django sketch follows below)
Always forward the user to the same edge server. Some load balancers store the forwarded-to edge server in a cookie, so on subsequent requests the user is sent to the same edge node every time. That node keeps the user's session, so there are no problems.
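A sketch of the first option in Django, assuming a Redis instance reachable by every app server (the hostname is a placeholder) and Django 4.0+ for the built-in Redis cache backend:

# settings.py on every app server -- they must all point at the same store.
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        "LOCATION": "redis://shared-redis.internal:6379/0",  # placeholder URL
    }
}
# Keep sessions in the shared cache rather than per-server memory or files.
SESSION_ENGINE = "django.contrib.sessions.backends.cache"

Database-backed sessions (SESSION_ENGINE = "django.contrib.sessions.backends.db") pointed at a shared database achieve the same thing without Redis.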
Yes, this is a valid concern. Due to the nature of the Web (HTTP), the other request might end up on the other server. This issue is called persistence or stickiness.
The solution here would be to save all this information on the client side (using cookies) and not rely on server-side sessions, so it would be up to you to implement it this way in Python/Django. The client-side approach gives the best performance and should be the easiest to implement.
Keep in mind that this solution carries a significant risk of man-in-the-middle attacks unless you encrypt the connection with SSL/TLS (HTTPS), since all of the client data is stored in cookies that could be intercepted.
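In Django, the built-in signed-cookie session backend is the minimal way to do this: the whole session lives in a cookie signed with SECRET_KEY, so it cannot be tampered with (though, per the warning above, it can still be read in transit without HTTPS). A sketch:

# settings.py -- store session data in a signed cookie on the client.
SESSION_ENGINE = "django.contrib.sessions.backends.signed_cookies"
SESSION_COOKIE_HTTPONLY = True   # keep JavaScript away from the cookie
SESSION_COOKIE_SECURE = True     # only send it over HTTPS (see caveat above)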

Django+apache: HTTPS only for login page

I'm trying to accomplish the following behaviour:
When the user accesses the site via:
http://example.com/
I want him to be redirected to:
https://example.com/
Through middleware, if the user is not logged in, the login template is rendered when accessing /. If the user is logged in, / is the main view. Once the user has logged in, I want the site to work over plain http.
To do so, I am running the same server on ports 80 and 443 (is this really necessary? I have the impression that I'm running two separate servers with the same application, while I want one server listening on two ports).
When the user navigates away from the login page, the data in request.session is not present because of the redirection to the http server (although it is present over https), so it looks as if no user is logged in. So, assuming the Apache setup is correct (running the same application on two different ports), I guess I have to pass the cookie from the server running on https over to the one on http.
Can anybody shed some light on this? Thank you
First off, make sure the setting SESSION_COOKIE_SECURE is set to False. As long as the domains are the same, the cookies should be present in the browser, and so the session information should still be there.
Take a look at your cookies using a plugin. Search for the session cookie you have set. By default these cookies are named "sessionid" by Django. Make sure the domains and paths are in fact correct for both the secure session and regular session.
I want to warn against this, however. Recently, things like Firesheep have exploited an issue that people have known about but ignored for a long time: these cookies are not secure in any way. It would be easy for someone to "sniff" the cookie over the HTTP connection and gain access to the site as your logged-in user. This essentially eliminates the entire reason you set up a secure connection to log in in the first place.
Is there a reason you don't have a secure connection across the entire site? Traditional arguments about it being more intensive on the server really don't apply with modern CPUs any longer and the exploits that I refer to above are becoming so prevalent that the marginal (really marginal) cost of encrypting all of your traffic is well worth it.
Apache needs to have essentially two different servers running because a) it is listening on two different ports and b) one is adding some additional encryption logic. That said, this is a normal thing for Apache. I run servers with dozens of "servers" running on different ports and doing different logic. In the grand scheme of things, this shouldn't really weigh your server down.
That said, once you pass the request on to *WSGI or mod_python, you will need logic to make sure that no one tries to log in over your non-encrypted connection, because the only difference Django will see is the return value of request.is_secure(). All the URLs and views in your urlconf will be accessible either way.
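A sketch of that check, written as current-style Django middleware (the login path is an assumption; adjust it to your urlconf):

# Hypothetical middleware: bounce plain-HTTP hits on the login page to HTTPS,
# using request.is_secure() to tell the two connections apart.
from django.http import HttpResponsePermanentRedirect

LOGIN_PATH = "/accounts/login/"  # adjust to your urlconf

class HttpsLoginMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        if request.path.startswith(LOGIN_PATH) and not request.is_secure():
            secure_url = request.build_absolute_uri().replace("http://", "https://", 1)
            return HttpResponsePermanentRedirect(secure_url)
        return self.get_response(request)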
Whew that is a lot. I hope that helps.