Cypress browser sends all cookies in all requests

I am using Cypress for e2e testing with the session storage feature enabled.
Until recently the only two cookies in the project were "access_token" and "refresh_token". I have now added two more cookies that store some data which is automatically written and read while you use the website.
When browsing the website with any native browser (Chrome, Firefox, Edge), no cookies are sent by the frontend to the backend; only the content of "access_token" is used as the Authorization bearer token.
When browsing in any browser inside Cypress, or letting Cypress browse automatically, every existing cookie is added to every request that is sent, not only requests made with cy.request() but also the requests the frontend sends natively.
This is a problem because the header size gets too large and the backend won't accept it. The quick fix was to increase the accepted header size in the backend, but I'd prefer not to send the cookies at all.
Is there a way to tell Cypress which cookies to send, or to prevent sending cookies at all? I don't really care which cookies are stored in the Cypress session, only which cookies get sent.
EDIT:
All cookies use the "Strict" SameSite setting.
When testing against a deployed system, HTTPS is used, but with an invalid certificate.
When testing against a locally running system, HTTP is used.
The cookies only get sent when running Cypress against a local system (localhost).
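For reference, one way to keep cookies out of the test traffic entirely is to rewrite outgoing requests with cy.intercept. Below is a rough sketch; the catch-all URL pattern and the beforeEach placement are assumptions rather than anything from this setup, and it needs a Cypress version where cy.intercept is available (6.0+).

    // In the support file or the affected spec: drop the Cookie header
    // from every request the application under test sends.
    beforeEach(() => {
      cy.intercept({ url: '**/*' }, (req) => {
        delete req.headers['cookie'];
      });
    });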

Using SameSite=Strict means that the cookie will never be included in requests to other sites, so I guess that is your core problem here. You need to use SameSite=None to get cookies included in cross-site requests, such as HTTP POSTs to another origin.
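For illustration, issuing such a cookie could look roughly like this on the backend, assuming a Node/Express server (the route, cookie value, and port are placeholders). Note that SameSite=None only works together with the Secure flag, i.e. over HTTPS.

    const express = require('express');
    const app = express();

    app.post('/login', (req, res) => {
      // Cookie name taken from the question above; the value is a placeholder
      res.cookie('access_token', 'token-value', {
        httpOnly: true,
        secure: true,     // mandatory when SameSite=None
        sameSite: 'none', // allow the cookie on cross-site requests
      });
      res.sendStatus(200);
    });

    app.listen(8080);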

Related

nginx API cross origin calls not working only from some browsers

TL;DR: The React app's API calls return status code 200 but with no body in the response; this happens only when accessing the web app from some browsers.
I have a React + Django application deployed using nginx and uWSGI on a single CentOS 7 VM.
The React app is served by nginx on the main domain, and when users log in on the JavaScript app, REST API requests are made to the same nginx on a subdomain (i.e. backend.mydomain.com) for things like validating the token and fetching data.
This works on all recent versions of Firefox, Chrome, Safari, and Edge. However, some users have complained that they could not log in from their work network. They can visit the site, so obviously the JavaScript application is served to them, but when they log in, all of the requests come back with status 200, except that the response has an empty body (and the log-in requires a few pieces of information to be sent back in the log-in response to work).
For example, when I log in from where I am, I get a response with status 200 and a JSON object with a few parameters in the body.
But when one of the users showed me the same from their browser, they got status 200 back with an empty response body. They are using the same browser versions as I have. They tried both Firefox and Chrome with the same behaviour.
After finally getting hold of one of the users to send me some screenshots, I found the problem. In my browser, which works with the site, the API calls to the backend had the Referrer Policy set to strict-origin-when-cross-origin in the headers. However, on their browser the same call showed up with no-referrer-when-downgrade.
I had not explicitly set the referrer policy, so the browsers were using their respective default values, which differ between browser versions (https://developers.google.com/web/updates/2020/07/referrer-policy-new-chrome-default).
To fix this, I added add_header 'Referrer-Policy' 'strict-origin-when-cross-origin'; to the nginx.conf file and restarted the server. More details here: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referrer-Policy
The users who had trouble before can now access the site's API resources after clearing the cache in their browsers.
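For context, a minimal sketch of where that directive can live in the nginx configuration; the server name, certificate paths, and uWSGI socket are placeholders, not taken from the actual deployment.

    server {
        listen 443 ssl;
        server_name backend.mydomain.com;        # placeholder
        ssl_certificate     /etc/ssl/site.crt;   # placeholder paths
        ssl_certificate_key /etc/ssl/site.key;

        # Make the referrer policy explicit instead of relying on browser defaults
        add_header 'Referrer-Policy' 'strict-origin-when-cross-origin';

        location / {
            include uwsgi_params;
            uwsgi_pass unix:/run/uwsgi/app.sock; # placeholder socket
        }
    }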

Debugging with local instance and authentication cookies

So, I've recently taken over the front-end of a project in which the previous front-end developer always did his debugging with a localhost instance connecting to a remote staging back-end.
Right now we are addressing some security issues and have started using CSRF tokens.
For each POST, PUT, or DELETE request, I first GET a CSRF endpoint, which sets an HttpOnly JSESSIONID cookie and returns a CSRF token in the response body. For the subsequent request the CSRF token goes into a request header, and the cookie of course gets sent along automatically.
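For illustration, that round trip might look something like the sketch below on the client; the endpoint path, the X-CSRF-TOKEN header name, and the response shape are assumptions rather than details from the project.

    // Fetch a CSRF token, then send the mutating request with it.
    async function postWithCsrf(url, body) {
      // 1. GET the CSRF endpoint; the HttpOnly JSESSIONID cookie is set by this response
      const csrfRes = await fetch('/api/csrf', { credentials: 'include' });
      const { token } = await csrfRes.json();

      // 2. Send the actual request; the token goes in a header, the cookie rides along automatically
      return fetch(url, {
        method: 'POST',
        credentials: 'include',
        headers: { 'Content-Type': 'application/json', 'X-CSRF-TOKEN': token },
        body: JSON.stringify(body),
      });
    }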
Now, my code works fine when deployed on the remote staging front-end. But this new functionality has completely prevented me from debugging with a local instance, because the cookie won't work when I GET the CSRF token from localhost, which is of course a different domain.
This forces me to deploy every single code change to the staging front-end, which is a very uncomfortable workflow for the usual trial-and-error fixes.
Adding the remote as a localhost alias in my hosts file also doesn't work, because that routes all my requests for the remote to my own machine, which doesn't run a local instance of the remote.
I would have thought there would be a Chrome extension or something like that for problems like this, but since I couldn't find any, I wondered whether I'm missing a very obvious point here.
Okay, it seems that this was a really general issue:
Set withCredentials to true for the requests to the remote, and use a Chrome plugin to overwrite the response headers to:
Access-Control-Allow-Origin: http://localhost
Access-Control-Allow-Credentials: true
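In client code that could look roughly like this, assuming axios is used; the staging URL is a placeholder:

    import axios from 'axios';

    const api = axios.create({
      baseURL: 'https://staging.example.com/api', // placeholder for the remote staging back-end
      withCredentials: true,                      // send and accept cookies across origins
    });

    // The JSESSIONID cookie set by this response is now stored and re-sent automatically
    api.get('/csrf').then(({ data }) => {
      // data would contain the CSRF token for the next request's header
    });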

Set-Cookie for a login system

I've run into a few problems with setting cookies, and based on the reading I've done, this should work, so I'm probably missing something important.
The situation:
Previously I received responses from my API and used JavaScript to save them as cookies, but then I found that using the Set-Cookie response header is more secure in a lot of situations.
I have two cookies: "nuser" (contains a username) and "key" (contains a session key). nuser shouldn't be HttpOnly, so that JavaScript can access it; key should be HttpOnly to prevent rogue scripts from stealing a user's session. Also, any request from the client to my API should contain the cookies.
The log-in request
Here's my current implementation: I make a request to my login API at localhost:8080/login/login (keep in mind that the web client is hosted on localhost:80, but based on what I've read, port numbers shouldn't matter for cookies).
First the web browser makes an OPTIONS request to confirm that all the headers are allowed. I've made sure that the server response includes Access-Control-Allow-Credentials to alert the browser that it's okay to store cookies.
Once the OPTIONS response comes back, the browser makes the actual POST request to the login API. The response includes the Set-Cookie header, and everything looks good at this point.
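For comparison, here is a minimal sketch of what the login API's side of this could look like, assuming Node/Express and the cors package (your backend stack may well differ):

    const express = require('express');
    const cors = require('cors');
    const app = express();

    app.use(cors({
      origin: 'http://localhost', // the web client on port 80
      credentials: true,          // emits Access-Control-Allow-Credentials: true
    }));

    app.post('/login/login', (req, res) => {
      res.cookie('nuser', 'some-username');                       // readable via document.cookie
      res.cookie('key', 'session-key-value', { httpOnly: true }); // hidden from scripts
      res.sendStatus(200);
    });

    app.listen(8080);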
The Problems
This setup yields two problems. First, even though the nuser cookie is not HttpOnly, I don't seem to be able to access it via JavaScript. I can see nuser in my browser's cookie menu, but document.cookie yields "".
Second, the browser only seems to include the Cookie request header in requests to the exact same API (the login API).
But if I make a request to a different API that's still on my localhost server, the Cookie header isn't present.
Oh, and that request returns a 406 just because my server is currently configured to do that if the user isn't validated. I know it should probably be a 403, but the thing to focus on is that the Cookie header isn't included among the request headers.
So, I've explained my implementation based on my current understanding of cookies, but I'm obviously missing something. Posting exactly what the request and response headers should look like for each task would be greatly appreciated. Thanks.
Okay, I'm still not sure exactly what was causing the problem in this specific case, but I updated my localhost:80 server to accept API requests and then make a subsequent request to localhost:8080 to get the proper information. Because the Set-Cookie header is now being set by localhost:80 (the client's origin), everything works fine. From my reading before, I thought that ports didn't matter, but apparently they do.
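For anyone with a similar setup, a rough sketch of that kind of same-origin pass-through, assuming Express and the http-proxy-middleware package (both assumptions, not what the original setup used):

    const express = require('express');
    const { createProxyMiddleware } = require('http-proxy-middleware');

    const app = express();

    app.use(express.static('public')); // serves the web client on port 80

    // Forward API calls to the original backend so cookies stay first-party
    app.use('/api', createProxyMiddleware({
      target: 'http://localhost:8080',
      changeOrigin: true,
    }));

    app.listen(80);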

Safari doesn't forward session cookies to JVM when requesting applet JAR

Our web app restricts access to authenticated users; our servers are configured to refuse access to any resource requests unless the HTTP request includes the session cookies. We use a Java applet, for which access to the JAR file is also prevented unless the request has the correct session cookies set.
This works fine for all major browsers we have tried on Windows clients, except Safari (we don't have a Mac to test Safari on). All page resources (e.g. HTML, JS, images) load fine, except for the JAR file, for which our server returns a 'not authorised' page, which obviously doesn't work in the applet container.
It looks like the JVM isn't sending the session cookies when it requests the JAR. I suspect that Safari isn't sharing the cookies with the JVM, because everything works OK in other browsers with the same JVM (IE, Chrome, Firefox).
Is there anything we can do to fix this? Or work around this? We can't make the JAR available to non-authorised users due to licensing issues, nor can we change the hosting environment.

Does every web request send the browser cookies?

Does every web request send the browser's cookies?
I'm not talking about page views, but about requests for an image, a .js file, etc.
Update
If a web page has 50 elements, that is 50 requests. Why would it send the SAME cookie(s) with each request? Doesn't it cache them or know it already has them?
Yes. As long as the requested URL is within the domain and path defined in the cookie, and all of the other restrictions (Secure flag, expiry, etc.) hold, the cookie will be sent with every request.
As others have said, if the cookie's host, path, and other restrictions are met, it'll be sent, all 50 times.
But you also asked why: because cookies are an HTTP feature, and HTTP is stateless. HTTP is designed to work without the server storing any state between requests.
In fact, the server doesn't have a solid way of recognizing which user is sending a given request; there could be a thousand users behind a single web proxy (and thus a single IP address). If the cookies were not sent with every request, the server would have no way to know which user is requesting a given resource.
Finally, the browser has no clue whether the server needs the cookies or not; it just knows the server instructed it to send the cookie for any request to foo.com, so it does so. Sometimes images need them (e.g. images dynamically generated per user), sometimes not, but the browser can't tell.
Yes. Every request sends the cookies that belong to the same domain. They're not cached, as HTTP is stateless, which means every request must carry enough information for the server to figure out what to do with it. Say you have images that are only accessible to certain users; you must send your auth cookie with every one of those 50 requests, so the server knows it's you and not someone else, or a guest, among the pool of requests it's getting.
Having said that, cookies might not be sent because of the other restrictions mentioned in the other responses, such as the Secure flag, path, or domain. An important thing to note there: cookies are not shared between domains. That helps reduce the size of HTTP calls for static files, such as the images and scripts you mentioned.
Example: you have 4 cookies at www.stackoverflow.com; if you make a request to www.stackoverflow.com/images/logo.png, all 4 of those cookies will be sent.
However, if you request stackoverflow.com/images/logo.png (notice the domain change) or images.stackoverflow.com/logo.png, those 4 cookies won't be present, but the ones belonging to those domains might be.
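To make that concrete, a small sketch with made-up cookie names, run from a page on www.stackoverflow.com:

    // Scoped to www.stackoverflow.com:
    document.cookie = "pref=dark; Domain=www.stackoverflow.com; Path=/";
    // -> sent with www.stackoverflow.com/images/logo.png,
    //    not with images.stackoverflow.com/logo.png

    // Scoped to the parent domain, so subdomains receive it too:
    document.cookie = "seen_banner=1; Domain=stackoverflow.com; Path=/";
    // -> sent with both of the URLs above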
You can read more about cookies and images requesting, for example, at this StackOverflow Blog Post.
No, not every request sends the cookies. It depends on the cookie configuration and the client-server connection.
For example, if a cookie's Secure option is set, it will only be transmitted over a secure HTTPS connection. That means when you visit that website over plain HTTP, browsers won't send those cookies, because the Secure flag is set.
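For example, a cookie issued with the Secure attribute (header shown only for illustration) will only ever be sent back over https:// requests:

    Set-Cookie: session=abc123; Secure; HttpOnly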
Three years have passed, but there's another reason why a browser might not send cookies: you can add a crossorigin attribute to your <script> tag with the value "anonymous". This prevents cookies from being sent to the destination server. 99.9% of the time your JavaScript is served as static files, and you don't generate that JS code based on the request's cookies. If you have 1 KB of cookies and 200 resources on your page, then your user is uploading 200 KB, which might take some time on 3G and has zero effect on the resulting page. See HTML attribute: crossorigin for reference.
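A quick sketch of what that looks like when the script tag is created from JavaScript; the CDN URL is a placeholder:

    const s = document.createElement('script');
    s.src = 'https://cdn.example.com/bundle.js'; // placeholder cross-origin URL
    s.crossOrigin = 'anonymous';                 // equivalent to crossorigin="anonymous": no cookies sent
    document.head.appendChild(s);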
Cookies have a "path" property. If path=/, the answer is yes.
I know this is an old thread, but I've just noticed that most browsers won't send cookies for a domain if you add a trailing dot. For example, http://example.com. won't receive cookies set for .example.com. Apache, on the other hand, treats them as the same host. I find this useful for making cross-domain tracking more difficult for external resources I include, but you could also use it for performance reasons. Note that this breaks validation of HTTPS certificates. I've run a few tests using Browsershots and my own devices. The hack works on almost all browsers, except for Safari (mobile and desktop), which will include cookies in the request.
The short answer is yes. The lines below are from the MDN documentation:
Cookies were once used for general client-side storage. While this was legitimate when they were the only way to store data on the client, it is now recommended to use modern storage APIs. Cookies are sent with every request, so they can worsen performance (especially for mobile data connections).
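As a small illustration of that recommendation (key and value are made up):

    // Stored on the client only; never attached to outgoing HTTP requests
    localStorage.setItem('ui-theme', 'dark');

    // By contrast, this cookie travels with every matching request to the server
    document.cookie = 'ui-theme=dark; Path=/';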