I am developing an app, which I will deploy on Heroku. The app is only used within an iframe on another site, so I don't care about the domain name. I plan to deploy my app on example.herokuapp.com instead of using a custom domain on example.com.
My app uses cookies, and I want to be sure that others cannot manipulate my cookies to protect my app against session fixation and similar attacks. If attacker.herokuapp.com is able to set a cookie for herokuapp.com, browsers will not be able to protect me, since herokuapp.com is not a public suffix. See http://w2spconf.com/2011/papers/session-integrity.pdf for a detailed description of the issue.
My question is: When browsers can't protect my users, will Heroku do it by blocking cookies for herokuapp.com?
Just wanted to post an update for anyone who ran across this question as I did. I was working on a similar problem, except that I wanted to purposefully allow access to the same cookie from two different Heroku apps.
"herokuapp.com" and "herokussl.com" are now on the Public Suffix List, so your cookies should be safe if they are set for one of those domains. I ended up having to use custom domains in order to share cookies across both apps.
Heroku also released an article on the topic: https://devcenter.heroku.com/articles/cookies-and-herokuapp-com
I just tried to add a cookie from my Heroku app with the response header Set-Cookie: name=value;Path=/;Domain=.herokuapp.com, and to my disappointment, I could see the header intact in my browser. So the Heroku infrastructure does not detect and remove this cross-app supercookie.
I see three possible ways to protect a Heroku app against cross-app supercookies:
Don't use cookies at all.
Use a custom domain.
Verify that each cookie was actually set by your app, and bind it to the client's IP address by checking the X-Forwarded-For header (a sketch follows this list).
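For the third option, here is a minimal sketch (sign_cookie, verify_cookie, and SECRET_KEY are illustrative names, not an established API): the app signs each cookie value together with the client IP, so a value planted by another herokuapp.com app, or replayed from a different client, fails verification.

import hashlib
import hmac
from typing import Optional

SECRET_KEY = b"app-specific-secret-unknown-to-other-apps"

def sign_cookie(value: str, client_ip: str) -> str:
    # On Heroku, take client_ip from the last entry of X-Forwarded-For,
    # which is the one appended by Heroku's own router.
    mac = hmac.new(SECRET_KEY, f"{value}|{client_ip}".encode(), hashlib.sha256)
    return f"{value}|{mac.hexdigest()}"

def verify_cookie(cookie: str, client_ip: str) -> Optional[str]:
    value, _, digest = cookie.rpartition("|")
    mac = hmac.new(SECRET_KEY, f"{value}|{client_ip}".encode(), hashlib.sha256)
    return value if hmac.compare_digest(mac.hexdigest(), digest) else None

Note the tradeoff: binding the cookie to an IP address logs users out whenever their address changes, which is common on mobile networks.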
My feature request to Heroku would be that they filter HTTP responses that go through their HTTP routing, so that applications hosted on their infrastructure cannot set cookies with Domain=herokuapp.com.
It seems to me that, as long as you set the cookie for example.herokuapp.com, the cookie is safe from manipulation. The cookie will only be presented to the app running on example.herokuapp.com and to herokuapp.com (where no app runs).
I have an Angular app using Electron as the desktop wrapper, and a separate Django backend that provides HTTP APIs to the Electron client.
So normally, when I call the login API, the response header has a Set-Cookie field containing the sessionId. I can clearly see that sessionId in Postman; however, I can't see this cookie in my Angular app (in Electron's dev tools).
After some further debugging I noticed a warning sign beside my Set-Cookie in dev tools. It said the cookie was blocked because SameSite was set to Lax, so I found a way to modify the server code to return SameSite=None (together with the Secure property, even though I'm using HTTP):
# settings.py
SESSION_COOKIE_SECURE = True
SESSION_COOKIE_SAMESITE = 'None'
This did work (the warning sign is gone), but the cookie is still not visible.
So what's the problem here? Why can't I see that cookie, and how can I see it, so that I can make sure the login works in the actual client and not just in Postman?
(By the way, both ends are currently being developed on localhost.)
There's no need to worry. A good way to check whether it works is to actually make a request that requires login (after the API has been tested in Postman) and see if the desired data are returned. If so, you are good to go (especially now that the warning is gone).
If the sessionId cookie is saved, it should automatically be included in the request, unless something is wrong with the cookie's path; a path of / is fine.
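As an illustration, here is a hypothetical login-gated endpoint on the Django side to test against (whoami is an assumed name, not part of your API). If calling it from the Electron app returns the username rather than a redirect or an error, the session cookie made the round trip:

# views.py
from django.contrib.auth.decorators import login_required
from django.http import JsonResponse

@login_required
def whoami(request):
    # Only returns data when a valid sessionid cookie accompanied the request.
    return JsonResponse({"username": request.user.username})

Bear in mind that a cross-origin request only carries cookies when the client opts in (e.g. withCredentials on the browser side).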
Why is the cookie not visible? It's probably due to the separation of the front and back ends. In Electron, the pages are typically local HTML files, since a common configuration step is to point loadURL in main.js at a local file, for instance:
mainWindow.loadURL(`file://${__dirname}/dist/your-project/index.html`);
So the "site" you are accessing from Electron can be considered as local filesystem (which has no domain and hence no cookie at all), and you should see an empty file:// entry in dev tools -> application -> storage -> cookie. It doesn't mean a local path containing all cookies of the Electron app. Although your backend may be on the same local machine, you are accessing as http:// instead of file:// so the browser (Electron) will treat it as an actual web server.
Therefore, your cookies should be stored in another entry like http(s)://localhost and you can't see it in Electron. (Note that the same cookie will work in both HTTP and HTTPS)
If you use Chrome instead to test, you may be able to see it in all cookies. In some cases where the frontend and backend are deployed to the same host you may see the cookie in dev tools. But I guess there're always some reasons why you need Electron to create a desktop app (e.g. Python scripts).
Further reading
Using HTTPS
Although moving to HTTPS does not necessarily solve the original problem, it may be worth doing to prevent potential problems and to get ready for release.
In your case, for the backend, you can use django-sslserver as a temporary solution before getting a real certificate, but it uses a self-signed certificate, which may make your frontend complain.
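For reference, a minimal sketch of that setup, assuming the django-sslserver package has been installed with pip:

# settings.py
INSTALLED_APPS += ["sslserver"]

# then run the development backend over HTTPS:
#   python manage.py runsslserver

The certificate is still self-signed, though, so Electron will refuse the connection.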
To fix this, consider adding the following code to the main process:
const { app } = require('electron');

if (!app.isPackaged) {
  // Disable certificate validation only in development (unpackaged app).
  app.commandLine.appendSwitch('ignore-certificate-errors');
}
This gives a clean way to distinguish development (unpackaged) from production (packaged), and disables the certificate check only in development so that the code works against the self-signed certificate.
Assuming that SESSION_COOKIE_SECURE in your config refers to the cookie's Secure flag, you'll have to set
SESSION_COOKIE_SECURE = False
because if this flag is set to True, the browser will only allow the cookie to be set over an HTTPS connection.
PS: This is just for localhost. Hopefully you'll be using an HTTPS connection in other environments.
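One way to express that in settings.py, assuming DEBUG is True only on your local machine:

# settings.py
SESSION_COOKIE_SECURE = not DEBUG  # plain HTTP locally, HTTPS-only cookies elsewhere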
I have a production environment and a staging environment, and I am wondering whether I can sandbox cookies between the two. My setup looks like:
Production
domain.com - frontend SPA
api.domain.com - backend Node
Staging
staging.domain.com - frontend SPA
api.staging.domain.com - backend Node
My staging cookies use the domain .staging.domain.com, so everything is fine there. But my production cookies use the domain .domain.com, so those cookies also show up in the staging environment.
I've read that one possible solution is to use a separate domain for staging, like staging-domain.com, but I would like to avoid this if possible. Are there any other solutions, or am I missing something about how cookies work?
There are multiple alternatives:
Set your production domains to be www.domain.com and api.www.domain.com and set your cookie to .www.domain.com
This way, your production cookie will not be seen in the staging environment.
or
Use .domain.com, but have your backend behave differently depending on which environment it receives the cookie in.
One solution would be to change the passphrase used to encrypt cookies in the staging environment.
Doing so will render cookies coming from production invalid.
The method to do so is web server dependent, for example on Apache HTTP server:
http://httpd.apache.org/docs/current/mod/mod_session_crypto.html
Text from above link:
SessionCryptoPassphrase secret
The session will be encrypted with the given key. Different servers can be configured to share sessions by ensuring the same encryption key is used on each server.
If the encryption key is changed, sessions will be invalidated automatically.
So find out how to change the passphrase on the staging environment's web server, and all cookies coming from production, along with all previously issued staging cookies, will be considered invalid on staging.
An alternative option, if you don't want to use a separate domain or a www subdomain: append the environment name to the cookie name.
But personally, I would put an API gateway/proxy in front of the backend and the SPA to keep both services under a single domain (domain.com and domain.com/api).
For staging: staging.domain.com and staging.domain.com/api, or a completely separate domain to avoid exposing the staging address in the SSL certificate.
And I would prevent cookie sharing by omitting the Domain attribute when setting the cookie, which makes it host-only. I would probably also set the cookie path to /api, as in the sketch below.
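Here is a minimal stdlib sketch of such a host-only, environment-suffixed cookie (ENV and token are illustrative placeholders):

from http.cookies import SimpleCookie

ENV = "staging"  # or "production"
token = "opaque-session-token"

cookie = SimpleCookie()
name = f"sessionid-{ENV}"
cookie[name] = token
cookie[name]["path"] = "/api"
# No "domain" attribute: the browser treats the cookie as host-only,
# so the production and staging hosts each keep their own cookie.
print(cookie.output())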
I have JupyterHub (TLJH) running on AWS. It is served on my site inside an iframe. Since the latest Chrome update, the "SameSite" cookie attribute has been causing an issue: the iframe no longer loads.
This is the warning I get in my console:
A cookie associated with a cross-site resource at http://www._____.com/ was set without the SameSite attribute. A future release of Chrome will only deliver cookies with cross-site requests if they are set with SameSite=None and Secure. You can review cookies in developer tools under Application>Storage>Cookies and see more details at https://www.chromestatus.com/feature/5088147346030592 and https://www.chromestatus.com/feature/5633521622188032.
When I disable the SameSite attribute in chrome://flags/, the iframe loads perfectly.
I understand that I need to edit my cookie settings to add SameSite=None; Secure somewhere in JupyterHub, but I don't know where.
It looks to me as if you may be able to use the cookie_options setting to add SameSite=None; Secure to the cookies, but I am not 100% sure.
I've raised https://github.com/jupyterhub/jupyterhub/issues/3117 to ask the team to validate.
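If it does turn out to work, the configuration would presumably look something like this (an unverified sketch; see the issue above for confirmation):

# jupyterhub_config.py
# Assumption: tornado_settings['cookie_options'] is forwarded to the
# cookie-setting calls; this has not been confirmed by the JupyterHub team.
c.JupyterHub.tornado_settings = {
    "cookie_options": {"SameSite": "None", "Secure": True},
}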
I could only make it work by mapping my server to a subdomain. For example, if the main website embedding the iframe is www.mydomain.com, I had to map my Jupyter server to subdomain.mydomain.com.
Obviously, that approach was only possible because I owned the page I was trying to embed. Hoping for an answer to the other scenario!
You can use the JupyterHub proxy to give your server a domain name like http://***.mydomain.com, but this must be a subdomain of your site (http://www._____.com/).
I am building a simple web app using React.js for the frontend and Django for the server side.
Both are deployed separately to Heroku: thus frontend.herokuapp.com and backend.herokuapp.com.
When I attempt to make calls to my API through the react app the cookie that was received from the API is not sent with the requests.
I had expected that I would be able to support this configuration without having to do anything special since all server-side requests would (I thought) be made by the JS client app directly to the backend process with their authentication cookies attached.
In an attempt to find a solution, I tried setting
SESSION_COOKIE_DOMAIN = "herokuapp.com"
While this is less than ideal (herokuapp.com is a vast domain), it would seem quite safe in production, since the apps would then be on api.myapp.com and www.myapp.com.
However, with this value set in settings.py I get an AuthStateMissing when hitting my /oauth/complete/linkedin-oauth2/ endpoint.
Searching Google for AuthStateMissing SESSION_COOKIE_DOMAIN yields one solitary result, which implies that the issue was reported as a bug in Django social auth and has since been closed without further commentary.
Any light anyone could throw would be very much appreciated.
I ran into the exact same problem while using herokuapp.com.
I even posted a question on SO here.
According to Heroku documentation:
In other words, in browsers that support the functionality, applications in the herokuapp.com domain are prevented from setting cookies for *.herokuapp.com
Because herokuapp.com is on the Public Suffix List, browsers block cookies shared between frontend.herokuapp.com and backend.herokuapp.com.
You need to add a custom domain to each of frontend.herokuapp.com and backend.herokuapp.com.
The entire answer: https://stackoverflow.com/a/54513216/1501643
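Once both apps sit on custom domains, the production plan from the question (www.myapp.com and api.myapp.com) would translate into Django settings roughly like this (a sketch, not a drop-in config):

# settings.py
SESSION_COOKIE_DOMAIN = ".myapp.com"  # shared by www.myapp.com and api.myapp.com
CSRF_COOKIE_DOMAIN = ".myapp.com"

Since www.myapp.com and api.myapp.com share a registrable domain, requests between them count as same-site, so even the default SameSite=Lax lets the browser attach the session cookie.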
We are presently rewriting an in-production Django site. We would like to deploy the new site in parallel with the old site, and slowly divert traffic from old to new using the following scheme:
New accounts go to the new site
Existing accounts go to the old site
Existing accounts may be offered the opportunity to opt in to the new site
Accounts diverted to the new site may opt out and be returned to the old site
It's clear to me that a cookie is involved, and that Nginx is capable of rewriting requests based on a cookie:
Nginx redirect if cookie present
How do I run two rails apps behind the same domain and have nginx route requests based on cookie?
How the cookie gets set remains a bit of a mystery to me. It seems like a chicken-and-egg problem. Has anyone successfully run a scheme like this? How did you do it?
I think the most suitable solution for your problem would be:
On every request, nginx checks for a specific cookie, say route:
If it is present and equals old, the request goes to the old site.
Otherwise, the request goes to the new site.
Every site (new and old) checks the request for that route cookie:
If the cookie isn't present (or has the wrong value), the app sets it to the right value, and if the request is for this site, simply processes it.
If not, it sends a redirect, and we begin again with step 1. A middleware sketch follows.
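Here is a sketch of steps 2-3 as Django middleware (the route cookie name, the SITE constant, and the 30-day lifetime are all illustrative choices, not part of any established scheme):

# middleware.py -- add to MIDDLEWARE in both deployments
from django.shortcuts import redirect

SITE = "new"  # set to "old" in the old site's deployment

class RouteCookieMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        route = request.COOKIES.get("route")
        if route in ("old", "new") and route != SITE:
            # The cookie claims the other site: redirect so that nginx,
            # which routes on this cookie, sends the user there.
            return redirect(request.build_absolute_uri())
        response = self.get_response(request)
        if route != SITE:
            # First visit (or a bad value): pin the user to this site.
            response.set_cookie("route", SITE, max_age=30 * 24 * 3600)
        return response

The chicken-and-egg problem resolves itself because a missing cookie simply means nginx applies its default route, and whichever site then handles the request sets the cookie explicitly. In practice, the "right value" in step 2 would come from account state (new vs. existing, opted in or out) rather than the simple pin shown here.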