Problem handling cookies for Blazor Server using an OpenID server (Keycloak)

I have a baffling issue with cookie handling in a Blazor Server app (.NET 6) using OpenID Connect (Keycloak). Actually, more than one issue, and they may or may not be linked. It's a typical (?) reverse-proxy architecture:
A central nginx receives requests for services like Jenkins, JupyterHub, SonarQube, Discourse etc. These are mapped through aliases to internal IPs where this nginx can reach them. It intercepts URLs like https://hub.domain.eu.
A reverse proxy which resolves to https://dsc.domain.eu. This forwards requests to a Blazor app running on Kestrel on port 5001. Both Kestrel and nginx run under SSL, which is required to get the WebSockets working (a sketch of such a server block is below).
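For orientation only, this is the kind of nginx server block implied by that description; it is a sketch with the usual directives for proxying Blazor's SignalR WebSocket, not the author's actual config (names and the upstream address are taken from the question, everything else is illustrative):

server {
    listen 443 ssl;
    server_name dsc.domain.eu;

    location / {
        proxy_pass         https://127.0.0.1:5001;
        proxy_http_version 1.1;
        proxy_set_header   Upgrade $http_upgrade;    # WebSocket upgrade for SignalR
        proxy_set_header   Connection "upgrade";
        proxy_set_header   Host $host;
        proxy_set_header   X-Forwarded-Proto $scheme;
    }
}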
Some required background: the Blazor app is essentially a 'hub' whose various Razor pages host the above-mentioned services in iframe-like fashion. How it works: when the user asks for the root path (https://hub.domain.eu), it opens the root page of the Blazor app (/).
The nav menu contains links to the Razor pages that contain the iframes for the above-mentioned services. For example:
The relative path is intercepted by the 'central' nginx, which loads Jenkins. Everything is under the same Keycloak OpenID server. Note that everything works fine without the Blazor app.
Scenarios that cause the same problem
Assume the user logs in to my app using Keycloak's login page (NOT the REST API) via redirection, then follows a link to one of the services and is indeed logged in there as well. The controls in the app change accordingly to indicate that the user is authenticated. If you close the tab and open a new one, the Blazor app acts as if it's not logged in, while the other services (e.g. Jenkins) still show the user logged in from before. When you press the Login link, you're greeted with a 502 nginx error. If you clear the cookies in the browser (or use private/incognito mode), everything works again. Or if you just log off, e.g. from Jenkins.
Assume the user is now in a service such as Jenkins, SonarQube, etc. If you press F5 you now have two problems: you get a 404 error, but only on SOME services (such as SonarQube) and not on others. That is a side problem for another post. The point here is that the Blazor app again appears not logged in after pressing Back / Refresh.
The critical part of Program.cs looks like the following:
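For context, a typical Keycloak OIDC registration in a Blazor Server Program.cs looks roughly like the sketch below; the authority, client id and secret are placeholders, not the author's values:

using Microsoft.AspNetCore.Authentication.Cookies;
using Microsoft.AspNetCore.Authentication.OpenIdConnect;

var builder = WebApplication.CreateBuilder(args);

builder.Services
    .AddAuthentication(options =>
    {
        options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
        options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
    })
    .AddCookie()
    .AddOpenIdConnect(options =>
    {
        options.Authority = "https://keycloak.domain.eu/realms/myrealm"; // placeholder
        options.ClientId = "hub-client";                                 // placeholder
        options.ClientSecret = builder.Configuration["Oidc:ClientSecret"];
        options.ResponseType = "code";
        options.SaveTokens = false; // see the side note below
        options.GetClaimsFromUserInfoEndpoint = true;
        options.Scope.Add("openid");
        options.Scope.Add("profile");
    });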
This class handles the login / logoff:
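Again purely as a reference point, login/logoff in a Blazor Server + OIDC setup is commonly a small MVC controller that triggers the challenge and sign-out. A hedged sketch, with illustrative route and class names rather than the author's:

using Microsoft.AspNetCore.Authentication;
using Microsoft.AspNetCore.Authentication.Cookies;
using Microsoft.AspNetCore.Authentication.OpenIdConnect;
using Microsoft.AspNetCore.Mvc;

[Route("account")]
public class AccountController : Controller
{
    // Redirects to Keycloak's login page, then back to returnUrl
    [HttpGet("login")]
    public IActionResult Login(string returnUrl = "/") =>
        Challenge(new AuthenticationProperties { RedirectUri = returnUrl },
                  OpenIdConnectDefaults.AuthenticationScheme);

    // Clears the local auth cookie and signs out of Keycloak
    [HttpGet("logout")]
    public IActionResult Logout() =>
        SignOut(new AuthenticationProperties { RedirectUri = "/" },
                CookieAuthenticationDefaults.AuthenticationScheme,
                OpenIdConnectDefaults.AuthenticationScheme);
}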
Side notes:
SaveTokens = false still causes large-header errors and results in an empty token (flagged in the code above with the warning "Token received was null"). I'm still able to obtain user details from HttpContext, though.
No errors show up in the reverse proxy's error.log or in Kestrel (everything is deployed on Linux).
MOST important: if I copy-paste the failed login link (the one that produced the 502 error) into a "clean" browser, it works fine.
There are lots of properties affecting OpenID Connect, and it could also be an nginx issue, but I've run out of ideas over the last five days. The nginx config has been adjusted for large headers and WebSockets.
Any clues as to where I should at least focus my research to track down the error?

The 502 error indicates a problem on NGINX's side. The reverse proxy had the proper configuration, but as it turned out, the front one did not. Once we set the header buffer sizes on the front nginx to the suggested values, everything worked.
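For reference, these are the kinds of directives involved. The sizes below are illustrative rather than the exact values used, and should be tuned to the size of the Keycloak cookies and headers:

# In the http/server block of the *front* nginx (illustrative sizes)
large_client_header_buffers 4 32k;
proxy_buffer_size           32k;
proxy_buffers               8 32k;
proxy_busy_buffers_size     64k;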

Keystone session cookie only working on localhost

Edit:
After investigating this further, it seems cookies are sent correctly on most API requests. However, something happens in the specific request that checks whether the user is logged in, and it always returns null. When refreshing the browser, a successful preflight request is sent and nothing else, even though there is a session and a valid session cookie.
Original question:
I have a NextJS frontend authenticating against a Keystone backend.
When running on localhost, I can log in and then refresh the browser without getting logged out, i.e. the browser reads the cookie correctly.
When the application is deployed on an external server, I can still log in, but when refreshing the browser it seems no cookie is found and it is as if I'm logged out. However if I then go to the Keystone admin UI, I am still logged in.
In the browser settings, I can see that for localhost there is a "keystonejs-session" cookie being created. This is not the case for the external server.
Here are the session settings from the Keystone config file.
The value of process.env.DOMAIN on the external server would be for example example.com when Keystone is deployed to admin.example.com. I have also tried .example.com, with a leading dot, with the same result. (I believe the leading dot is ignored in newer specifications.)
const sessionConfig = {
  maxAge: 60 * 60 * 24 * 30,
  secret: process.env.COOKIE_SECRET,
  sameSite: 'lax',
  secure: true,
  domain: process.env.DOMAIN,
  path: "/",
};
const session = statelessSessions(sessionConfig);
(The session object is then passed to the config function from @keystone-6/core.)
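For completeness, the wiring is roughly the following; this is a sketch assuming a standard Keystone 6 setup, and the lists import and database URL are placeholders:

import { config } from '@keystone-6/core';
import { statelessSessions } from '@keystone-6/core/session';
import { lists } from './schema'; // placeholder for your own lists

// sessionConfig and session as defined above
export default config({
  db: { provider: 'postgresql', url: process.env.DATABASE_URL }, // placeholder db
  lists,
  session,
});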
Current workaround:
I'm currently using a workaround which involves routing all API requests to '/api/graphql' and rewriting that request to the real URL using Next's own rewrites. Someone recommended this might work and it does, sort of. When refreshing the browser window the application is still in a logged-out state, but after a second or two the session is validated.
To use this workaround, add the following rewrite directive to next.config.js
rewrites: () => [
  {
    source: '/api/graphql',
    destination:
      process.env.NODE_ENV === 'development'
        ? `http://localhost:3000/api/graphql`
        : process.env.NEXT_PUBLIC_BACKEND_ENDPOINT,
  },
],
Then make sure you use this URL for queries. In my case that's the URL I feed to createUploadLink().
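For example, assuming the upload link comes from apollo-upload-client, that would be something along these lines; the credentials option matters so the session cookie is actually sent:

import { createUploadLink } from 'apollo-upload-client';

// Point the client at the relative path so the Next.js rewrite above proxies it
const link = createUploadLink({
  uri: '/api/graphql',
  credentials: 'include', // send the session cookie with every request
});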
This workaround still means constant error messages in the logs since relative URLs are not supposed to work. I would love to see a proper solution!
It's hard to know what's happening for sure without knowing more about your setup. Inspecting the requests and responses your browser is making may help figure this out. Look in the "Network" tab in your browser dev tools. When you make the request to sign in, you should see the cookie being set in the headers of the response.
Some educated guesses:
Are you accessing your external server over HTTPS?
The Keystone docs for the session API mention that, when setting secure to true...
[...] the cookie is only sent to the server when a request is made with the https: scheme (except on localhost)
So, if you're running your deployed env over plain HTTP, the cookie is never set, creating the behaviour you're describing. Somewhat confusingly, in development the flag is ignored, allowing it to work.
A similar thing can happen if you're deploying behind a proxy, like nginx:
In this scenario, a lot of people choose to have the proxy terminate the TLS connection, so requests are forwarded to the backend over HTTP (but on a private network, so still relatively secure). In that case, you need to do two things:
Ensure the proxy is configured to forward the X-Forwarded-Proto header, which informs the backend which protocol was used for the original request
Tell Express to trust what the proxy is saying by configuring the trust proxy setting (see the sketch below)
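On the nginx side that usually means adding proxy_set_header X-Forwarded-Proto $scheme; to the location block that proxies to Keystone. On the Keystone 6 side, the underlying Express app can be reached through the server.extendExpressApp hook; this is a sketch, so check the docs for your exact version:

// keystone config: trust the proxy so `secure` cookies survive TLS termination
export default config({
  // ...db, lists, session as before...
  server: {
    extendExpressApp: (app) => {
      app.set('trust proxy', true);
    },
  },
});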
I did a write up of this proxy issue a while back. It's for Keystone 5 (so some of the details are off) but, if you're using a reverse proxy, most of it's still relevant.
Update
From Simon's comment, the above guesses missed the mark 😭 but I'll leave them here in case they help others.
Since posting about this issue a month ago I was actually able to work around it by routing API requests via a relative path like '/api/graphql' and then forwarding that request to the real API on a separate subdomain. For some mysterious reason it works this way.
This is starting to sound like a CORS issue.
If you want to serve your front end from a different origin (domain) than the API, the API needs to return a specific header to allow this. Read up on CORS and the Access-Control-Allow-Origin header. You can configure this using the cors option in the Keystone server config, which Keystone uses to configure the cors package.
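In Keystone 6 that would look roughly like the following sketch; the origin value is a placeholder for your frontend's URL, and credentials: true is what lets the browser send the session cookie cross-origin:

// Sketch: the cors option is forwarded to the `cors` package
export default config({
  // ...db, lists, session as before...
  server: {
    cors: {
      origin: ['https://www.example.com'], // the Next.js frontend's origin
      credentials: true,
    },
  },
});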
Alternatively, the solution of proxying API requests via the Next app should also work. It's not obvious to me why your proxying "workaround" is experiencing problems.

nginx API cross origin calls not working only from some browsers

TLDR: The React app's API calls return with status code 200 but without a body in the response; this happens only when accessing the web app from some browsers.
I have a React + Django application deployed using nginx and uWSGI on a single CentOS 7 VM.
The React app is served by nginx on the main domain, and when users log in on the JavaScript app, REST API requests are made to the same nginx on a subdomain (i.e. backend.mydomain.com), for things like validating the token and fetching data.
This works on all recent versions of Firefox, Chrome, Safari, and Edge. However, some users have complained that they could not log in from their work network. They can visit the site, so obviously the JavaScript application is served to them, but when they log in, all of the requests come back with status 200, except the response has an empty body (and logging in requires a few pieces of information to be sent back with the login response to work).
For example, when I log in from where I am, I get a response with status 200 and a JSON object with a few parameters in the body of the response.
But when one of the users showed me the same from their browser, they got status 200 back, but the response body was empty. They are using the same browser versions as I am. They tried both Firefox and Chrome with the same behaviour.
After finally getting hold of one of the users to send me some screenshots, I found the problem. In my browser, which works with the site, the API calls to the backend had the Referrer Policy set to strict-origin-when-cross-origin in the headers. However, on their browser the same was showing up as no-referrer-when-downgrade.
I had not explicitly set the referrer policy, so the browsers were using their own default values, which differ between browser versions (https://developers.google.com/web/updates/2020/07/referrer-policy-new-chrome-default).
To fix this, I added add_header 'Referrer-Policy' 'strict-origin-when-cross-origin'; to the nginx.conf file and restarted the server. More details here: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referrer-Policy
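In nginx terms the change is just this one directive, placed in the relevant server or location block of nginx.conf:

add_header 'Referrer-Policy' 'strict-origin-when-cross-origin';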
The users who had trouble before can now access the site API resources after clearing cache in their browsers.

Youtube not able to play on my django-heroku app. Giving me a mixed content error message

I tried to view youtube videos on my app and it didn't work. I checked the console and got this error message
Mixed Content: The page at 'https://hispanicheights.herokuapp.com/blog/youtube-video/'
was loaded over HTTPS, but requested an insecure script
'http://content.jwplatform.com/libraries/WQWJdvRx.js'.
This request has been blocked; the content must be served over HTTPS.
Is there a way around this or is this just the situation until I get a paid account with a domain?
This has nothing to do with Heroku, paid plans or not. It is simply that you are linking to an http resource inside a page that is served via https; since that potentially sidesteps the man-in-the-middle protection that https gives you, modern browsers forbid it.
The solution is to serve all your dependent scripts via https as well.
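Concretely, in the template that embeds the player, the fix is simply to reference the script over https (the jwplatform CDN serves the same library over HTTPS):

<!-- before: blocked as mixed content -->
<script src="http://content.jwplatform.com/libraries/WQWJdvRx.js"></script>

<!-- after: loads fine on an https page -->
<script src="https://content.jwplatform.com/libraries/WQWJdvRx.js"></script>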

Cubesviewer configuration for proper authentication

I'm trying to configure CubesViewer and try out the setup.
I've got the app installed and running, along with the Cubes Slicer app too.
However, when I visit the home page
http://127.0.0.1:8000/cubesviewer/
it fails, popping up the error "Error occurred while accessing the data server".
Debugging with the browser console shows an HTTP 403 error for the URL http://localhost:8000/cubesviewer/view/list/
After some googling and reading, I figured I'd need to add the REST framework auth settings (as mentioned here).
Now, after running migrate and runserver, I get a 401 error on that URL.
Clearly I'm missing something in settings.py. Can somebody help me out?
I'm using the CubesViewer tag v0.10 from the GitHub repo.
You can find my settings here: http://dpaste.com/2G5VB5K
P.S.: I've verified that Cubes Slicer works separately on its own.
I have reproduced this. This error may occur when you use a different URL to access the website than to access related resources. For security reasons, browsers only allow access to resources from exactly the same host as the page you are viewing.
It seems you are accessing the app via http://127.0.0.1:8000, but you have configured CubesViewer to tell clients to access the data backend via http://localhost:8000. While it's the same IP address, they are different strings.
Try accessing the app as http://localhost:8000.
If you deploy to a different server, you need to adjust settings. Here are the relevant configuration options, now with more comments:
# Base Cubes Server URL.
# Your Cubes Server needs to be running and listening on this URL, and it needs
# to be accessible to clients of the application.
CUBESVIEWER_CUBES_URL="http://localhost:5000"
# CubesViewer Store backend URL. It should point to this application.
# Note that this must match the URL that you use to access the application,
# otherwise you may hit security issues. If you access your server
# via http://localhost:8000, use the same here. Note that 127.0.0.1 and
# 'localhost' are different strings for this purpose. (If you wish to accept
# requests from different URLs, you may need to add CORS support).
CUBESVIEWER_BACKEND_URL="http://localhost:8000/cubesviewer"
Alternatively, you could change CUBESVIEWER_BACKEND_URL to "http://127.0.0.1:8000/cubesviewer", but I recommend using hostnames rather than IP addresses for this.
Finally, I haven't yet tested with CORS support, but check this pull request if you wish to try that approach.

Persistent cookies are not being shared across subdomains

I have 2 ASP.NET applications running on the same domain (both in staging and production). Application A opens a page from Application B in a popup window. The cookie names for staging and production are different.
But strangely, for some users, even though the request is for production, staging cookies are being appended to the request. Is it a cached request being pulled from somewhere? Where are the production cookies going? In Application A the cookies are found and they are fine, but Application B is getting the staging/wrong cookies.
Sorry for not sharing any code, due to confidentiality issues. Here's an example:
In Application A, the following cookies are present:
BDT, path="/", domain=".sample.com" (this is production)
In Application B, the cookies are something like this:
SBDT, path="/", domain=".sample.com" (this is the staging cookie)
Is the request being cached (on the machine or on some proxy server) and being reissued repeatedly? Or could it be some malware/virus?
The user is on IE9 (in IE7 mode) on Windows 7.
Finally, we cracked it.
We again carefully analyzed the HttpWatch logs and noticed that App B is being run in IE Protected Mode, whereas App A is not.
We asked the client to clear the cache and launch the applications again. After that, we found that
App B was getting NO cookies at all.
We guessed that they were running in IE Protected Mode with the High security level enabled, and that they had ONLY App A in the trusted sites list, i.e. AppA.sample.com.
We asked them to add *.sample.com instead, and that FIXED the issue.
For more details check:
Persistent cookies are not shared between Internet Explorer and Office applications