Safari doesn't forward session cookies to JVM when requesting applet JAR

Our web app restricts access to authenticated users; our servers are configured to refuse any resource request unless it includes the session cookies. We use a Java applet, and access to its JAR file is likewise blocked unless the request carries the correct session cookies.
This works fine in all major browsers we have tried on Windows clients, except Safari (we don't have a Mac to test Safari on). All page resources (HTML, JS, images, etc.) load fine, except the JAR file, for which our server returns a 'not authorised' page, which obviously doesn't work in the applet container.
It looks like the JVM isn't sending the session cookies when it requests the JAR. I suspect Safari isn't sharing the cookies with the JVM, because everything works fine in other browsers (IE, Chrome, Firefox) with the same JVM.
Is there anything we can do to fix or work around this? We can't make the JAR available to unauthorised users due to licensing issues, nor can we change the hosting environment.
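One possible workaround, if the session cookie is readable from JavaScript (i.e. not HttpOnly) and the server accepts URL-rewritten session ids (Java servlet containers generally honour ;jsessionid=), is to embed the session id directly in the JAR URL so the JVM's request is authorised without cookies. A sketch under those assumptions; the cookie name, applet class, and JAR name are placeholders:
// Hedged sketch: pass the session id in the archive URL instead of relying on
// cookies being forwarded to the JVM. Assumes the cookie is not HttpOnly and
// the server supports ;jsessionid= URL rewriting.
var m = document.cookie.match(/JSESSIONID=([^;]+)/);
var jar = 'applet.jar' + (m ? ';jsessionid=' + m[1] : '');
document.write('<applet code="com.example.MyApplet" archive="' + jar +
    '" width="600" height="400"></applet>');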

Related

cypress browser sends all cookies in all requests

I am using Cypress for e2e testing, with the session storage feature enabled.
Until recently, the only two cookies in the project were "access_token" and "refresh_token". Now I have added two more cookies, which store data that is automatically written and read while you're using the website.
When browsing the website with any native browser (Chrome, Firefox, Edge), no cookies are sent by the frontend to the backend; only the "access_token" cookie's content is used as the bearer token in the Authorization header.
When browsing in any browser inside Cypress, or letting Cypress browse automatically, every cookie that exists is added to every request: not only requests sent by cy.request(), but also the requests the frontend itself sends.
This is a problem because the header size gets too large and the backend won't accept it. The quick fix was to increase the accepted header size on the backend, but I'd prefer not to send the cookies at all.
Is there a way to tell Cypress which cookies to send, or to prevent it from sending cookies at all? I don't really care which cookies are stored in the Cypress session, only which ones get sent.
EDIT:
All cookies use "strict" SameSite settings.
When testing against a deployed system, HTTPS is used, but with an invalid certificate.
When testing against a locally running system, HTTP is used.
The cookies only get sent when running Cypress against a local system (localhost).
Using SameSite=Strict means that the cookie will never be included in requests to other sites, so I guess that is your core problem here. You need SameSite=None to get cookies included in cross-site requests (and note that browsers require the Secure attribute alongside SameSite=None).
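If you want to stop Cypress-driven requests from carrying the extra cookies regardless of SameSite, another option is to strip the Cookie header in an intercept. A minimal sketch, assuming your API routes live under /api/ (the pattern is a placeholder):
// Hedged sketch: remove the Cookie header from intercepted API requests so the
// oversized header never reaches the backend. Adjust the pattern to your routes.
beforeEach(() => {
  cy.intercept('**/api/**', (req) => {
    delete req.headers['cookie'];
  });
});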

Why can't I see my (localhost) cookie being stored in Electron app?

I have an Angular app using Electron as the desktop wrapper. And there's a separate Django backend which provides HTTP APIs to the Electron client.
So normally, when I call the login API, the response header has a Set-Cookie field containing the sessionId. I can clearly see that sessionId in Postman; however, I can't see this cookie in my Angular app (in Electron's dev tools).
After some further debugging, I noticed a warning sign beside my Set-Cookie in dev tools. It said that the cookie was blocked because SameSite was set to Lax. So I found a way to modify the server code to return a SameSite of None (together with the Secure property; I'm using HTTP):
# settings.py
SESSION_COOKIE_SECURE = True
SESSION_COOKIE_SAMESITE = 'None'
which did work (the warning sign is gone), but the cookie is still not visible.
So what's the problem here? Why can't I see that cookie, and how can I, so as to make sure the login works in the actual client and not just in Postman?
(By the way, both ends are currently being developed on localhost.)
There's no need to worry. A good way to check whether login works is to actually make a request that requires login (after the API has been tested in Postman) and see if the desired data is returned. If so, you are good to go (especially since the warning is gone).
If the sessionId cookie is saved, it will automatically be included in requests, unless something is wrong with the cookie's path; a path of / is fine.
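As a concrete check, a minimal sketch from the frontend side; the URL and endpoint are assumptions, so adjust them to your API:
// Hedged sketch: call an endpoint that requires login and confirm data comes back.
// credentials: 'include' makes the browser attach the stored session cookie.
fetch('http://localhost:8000/api/profile/', { credentials: 'include' })
  .then((res) => res.json())
  .then((data) => console.log('Logged-in data:', data));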
Why is the cookie not visible? It's probably due to the separation of the front and back ends. In Electron, the pages are typically local HTML files, as one common configuration step is to modify loadURL (or similar) in main.js, for instance:
mainWindow.loadURL(`file://${__dirname}/dist/your-project/index.html`);
So the "site" you are accessing from Electron can be considered as local filesystem (which has no domain and hence no cookie at all), and you should see an empty file:// entry in dev tools -> application -> storage -> cookie. It doesn't mean a local path containing all cookies of the Electron app. Although your backend may be on the same local machine, you are accessing as http:// instead of file:// so the browser (Electron) will treat it as an actual web server.
Therefore, your cookies should be stored in another entry like http(s)://localhost and you can't see it in Electron. (Note that the same cookie will work in both HTTP and HTTPS)
If you use Chrome instead to test, you may be able to see it in all cookies. In some cases where the frontend and backend are deployed to the same host you may see the cookie in dev tools. But I guess there're always some reasons why you need Electron to create a desktop app (e.g. Python scripts).
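If you want to confirm programmatically that the cookie was stored, Electron's session module can list cookies for an origin. A minimal sketch, assuming the backend runs at http://localhost:8000 (a placeholder address):
// main.js - hedged sketch: print the cookies Electron stored for the backend origin.
const { app, session } = require('electron');

app.whenReady().then(async () => {
  const cookies = await session.defaultSession.cookies.get({ url: 'http://localhost:8000' });
  console.log(cookies); // the sessionId cookie should appear here if it was saved
});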
Further reading
Using HTTPS
Although moving to HTTPS does not necessarily solve the original problem, it may be worth doing to prevent potential problems and to get ready for release.
In your case, for the backend, you can use django-sslserver as a temporary solution before getting a proper SSL certificate, but it uses a self-signed certificate, which may make your frontend complain.
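For reference, assuming django-sslserver is installed and added to INSTALLED_APPS, the development server is then started with:
python manage.py runsslserver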
To fix this, consider adding the following code to the main process:
// main.js
const { app } = require('electron');

if (!app.isPackaged) {
  // Only ignore certificate errors in development (unpacked) builds.
  app.commandLine.appendSwitch('ignore-certificate-errors');
}
This also provides a clean way to distinguish development (unpacked) from production (packaged), and disables the certificate check only in development.
Assuming that SESSION_COOKIE_SECURE in your config refers to the cookie's Secure flag, you'll have to set
SESSION_COOKIE_SECURE = False
because if this flag is set to True, the browser will only allow this cookie to be set over an HTTPS connection.
PS: This is just for your localhost. Hopefully you'll be using an HTTPS connection in other environments.
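A minimal sketch of covering both environments in one settings.py, assuming DEBUG is True only during local development (an assumption about your setup):
# settings.py - hedged sketch: Secure flag off only for local HTTP development
SESSION_COOKIE_SECURE = not DEBUG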

nginx API cross origin calls not working only from some browsers

TL;DR: The React app's API calls return status code 200 but with an empty response body; this happens only when accessing the web app from some browsers.
I have a React + Django application deployed using nginx and uWSGI on a single CentOS 7 VM.
The React app is served by nginx on the main domain, and when users log in on the JavaScript app, REST API requests are made to the same nginx instance on a subdomain (i.e. backend.mydomain.com) for things like token validation and data fetching.
This works on all recent versions of Firefox, Chrome, Safari, and Edge. However, some users have complained that they could not log in from their work network. They can visit the site, so the JavaScript application is clearly being served to them, but when they log in, all of the requests come back with status 200 and an empty response body (and the login requires a few pieces of information to be sent back with the login response in order to work).
For example, when I log in from where I am, I get a response with status 200 and a JSON object with a few parameters in the body.
But when one of the users showed me the same request from their browser, they got status 200 back with an empty response. They are using the same browser versions as I am, and they tried both Firefox and Chrome with the same behaviour.
After finally getting hold of one of the users to send me some screenshots, I found the problem. In my browser, which works with the site, the API calls to the backend had the Referrer Policy set to strict-origin-when-cross-origin in the headers. On their browser, however, the same calls showed no-referrer-when-downgrade.
I had not explicitly set a referrer policy, so each browser was using its default value, and that default differs between browser versions (https://developers.google.com/web/updates/2020/07/referrer-policy-new-chrome-default).
To fix this, I added add_header 'Referrer-Policy' 'strict-origin-when-cross-origin'; to the nginx.conf file and restarted the server. More details here: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referrer-Policy
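For context, a sketch of where the directive can live in the nginx configuration; the server name and the location body are placeholders, not the actual config:
# Hedged sketch: send an explicit referrer policy with every response
server {
    server_name mydomain.com;
    add_header 'Referrer-Policy' 'strict-origin-when-cross-origin';

    location / {
        # ... existing React app / proxy configuration ...
    }
}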
The users who had trouble before can now access the site's API resources after clearing their browser cache.

Persistent cookies are not being shared across subdomains

I have two ASP.NET applications running on the same domain (both in staging and production). Application A opens a page from Application B in a popup window. The cookie names for staging and production are different.
But strangely, for some users, even though the request is for production, the staging cookies are being appended to the request. Is a cached request being pulled from somewhere? Where are the production cookies going? In Application A the cookies are found and they are fine, but Application B is getting the staging (wrong) cookies.
Sorry for not sharing any code, for confidentiality reasons. Here's an example:
In Application A, the following cookie is present:
BDT, path="/", domain=".sample.com" (this is the production cookie)
In Application B, the cookie looks like this:
SBDT, path="/", domain=".sample.com" (this is the staging cookie)
Is the request being cached (on the machine or on some proxy server) and being reissued repeatedly? Or could it be some malware or a virus?
The user is on IE9 (in IE7 mode) on Windows 7.
Finally, we cracked it.
We carefully analyzed the HttpWatch logs again and noticed that App B was running in IE Protected Mode, whereas App A was not.
We asked the client to clear the cache and launch the applications again. After that, we found that App B was getting NO cookies at all.
We guessed that they were running IE Protected Mode with the High security level enabled, and that they had ONLY App A in the trusted sites list, i.e. AppA.sample.com.
We asked them to add *.sample.com instead, and that FIXED the issue.
For more details, see:
Persistent cookies are not shared between Internet Explorer and Office applications

IE7 & IE8, JSESSIONID cookie breaks file download

Is there a way to prevent WebSphere from sending cookies in a response on a per-request/URL basis?
Our users get a link which allows them to download a file. This works fine in all major browsers except IE7 and IE8, where the file download breaks when cookies are sent with the response.
When a new session is created, WebSphere sends a JSESSIONID cookie and sets Cache-Control to no-cache=set-cookie. This causes the download process to break in IE8 and lower.
Things I tried:
1) I know that no-cache=set-cookie can be turned off in the WebSphere admin console, but that's not an option.
2) WebSphere is fronted by a web server, so the response headers could be changed there, but that's not really an option either.
3) I created a servlet filter, but it seems that whatever WebSphere does happens after the filter runs.
4) I created a JSP page that would trigger the file download on load. The idea was that the cookie would be exchanged during the page load, so it wouldn't interfere with the download. Unfortunately, because the download is triggered through JavaScript, IE blocks it, and the user has to approve it manually.
Is there any way to make it work?
IE8 has a bug that may be connected with your problem (see the bug description on Stack Overflow).
I solved a similar problem with the help of a good article.