Why won't ASP.NET create cookies on localhost?

Okay, this is really kind of starting to bug me. I have a simple web project set up at "C:\Projects\MyTestProject\". In IIS on my machine, I have mapped a virtual directory to this location so I can run my sites locally (I understand I can run it from Visual Studio, but I like this method better). I have named this virtual directory "mtp" and I access it via http://localhost/mtp/index.aspx. All this is working fine.
However, whenever I try to create a cookie, it simply never gets written out. I've tried this in FF3 and IE7 and it just plain won't write the cookie out. I don't get it. I do have "127.0.0.1 localhost" in my hosts file; I can't really think of anything else I can do. Thanks for any advice.
James

The cookie specs effectively require a domain with at least two labels separated by a dot, so your cookie domain cannot be "localhost". Here's how I solved it:
Add this to your %WINDIR%\System32\drivers\etc\hosts file:
127.0.0.1 dev.livesite.com
When developing, use http://dev.livesite.com instead of http://localhost.
Use ".livesite.com" as the cookie domain (with a leading dot) when creating the cookie.
Modern browsers don't require the leading dot anymore, but you may want to use it anyway for backwards compatibility.
Now it works on all of these sites:
http://dev.livesite.com
http://www.livesite.com
http://livesite.com
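For reference, here's a minimal sketch of creating such a cookie in classic ASP.NET (System.Web); the cookie name and value are placeholders, not from the original question:
// In a page or handler, e.g. Page_Load in index.aspx.cs
var cookie = new HttpCookie("myCookie", "myValue");
cookie.Domain = ".livesite.com"; // leading dot covers dev.livesite.com, www.livesite.com and livesite.com
Response.Cookies.Add(cookie);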

Since an answer has never been chosen, I suppose I can still throw something else out there.
One reason you can run into cookies not being written for an application running under localhost is the httpCookies setting in web.config. When the domain attribute was set to a specific domain and I ran under localhost, the cookies did not get written for me.
Remove the domain attribute in development and the cookies are written:
<!-- Development -->
<httpCookies httpOnlyCookies="true" requireSSL="false" />
<!-- Production -->
<!--<httpCookies domain=".domain.com" httpOnlyCookies="true" requireSSL="true" />-->

Are you assigning an expiration date to the cookie? By default, the cookie will expire when the browser session expires, meaning it won't write anything to disk.
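For illustration, a hedged sketch in classic ASP.NET (the cookie name and lifetime are placeholders):
// Without Expires, this is a session cookie held in memory and discarded when the browser closes
Response.Cookies.Add(new HttpCookie("myCookie", "myValue"));
// With Expires set, the cookie becomes persistent and is written to disk
var persistent = new HttpCookie("myCookie", "myValue");
persistent.Expires = DateTime.Now.AddDays(7);
Response.Cookies.Add(persistent);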

Why can't I see my (localhost) cookie being stored in Electron app?

I have an Angular app using Electron as the desktop wrapper. And there's a separate Django backend which provides HTTP APIs to the Electron client.
So normally when I call the login API, the response header will have a Set-Cookie field containing the sessionId. I can clearly see that sessionId in Postman; however, I can't see this cookie in my Angular app (in Electron's dev tools).
After some further debugging I noticed a warning sign beside my Set-Cookie in dev tools. It said that the cookie is blocked because its SameSite is set to Lax. So I found a way to modify the server code to return SameSite=None (together with a Secure property; I'm using HTTP):
# settings.py
SESSION_COOKIE_SECURE = True
SESSION_COOKIE_SAMESITE = 'None'
which did work (and the warning sign is gone) but the cookie is still not visible.
So what's the problem here? Why can't (and How can) I see that cookie so as to make sure that the login works in the actual client, not just Postman?
(btw, now both ends are being developed in localhost.)
There's no need to worry. A good way to check if it works is to actually make a request that requires login (after the API has been Postman tested) and see if the desired data are returned. If so, you are good to go (especially when the warning is gone).
If the sessionId cookie is saved it should automatically be included in the request. Unless there's something wrong with the cookie's path; but a / path would be fine.
Why is the cookie not visible? It's probably due to the separation of front and back ends. In Electron, the pages are typically local HTML files, since one common configuration step is to point loadURL (or something similar) at a local file in main.js, for instance:
mainWindow.loadURL(`file://${__dirname}/dist/your-project/index.html`);
So the "site" you are accessing from Electron can be considered as local filesystem (which has no domain and hence no cookie at all), and you should see an empty file:// entry in dev tools -> application -> storage -> cookie. It doesn't mean a local path containing all cookies of the Electron app. Although your backend may be on the same local machine, you are accessing as http:// instead of file:// so the browser (Electron) will treat it as an actual web server.
Therefore, your cookies should be stored in another entry like http(s)://localhost and you can't see it in Electron. (Note that the same cookie will work in both HTTP and HTTPS)
If you use Chrome instead to test, you may be able to see it in all cookies. In some cases where the frontend and backend are deployed to the same host you may see the cookie in dev tools. But I guess there're always some reasons why you need Electron to create a desktop app (e.g. Python scripts).
Further reading
Using HTTPS
Although moving to HTTPS does not necessarily solve the original problem, it may be worth doing in order to prevent potential problems and get ready for publishing.
In your case, for the backend, you can use django-sslserver as a temporary solution before getting your SSL certificate, but it uses a self-signed certificate and may make your frontend complain.
To fix this, consider adding the following code to the main process:
// const { app } = require('electron');
if (!app.isPackaged) {
  app.commandLine.appendSwitch('ignore-certificate-errors');
}
Checking app.isPackaged provides a good way to distinguish between development (unpackaged) and production (packaged), so the certificate check is only disabled in development in order to make the code work.
Assuming that SESSION_COOKIE_SECURE in your config refers to the cookie's Secure flag, you'll have to set
SESSION_COOKIE_SECURE = False
because if this flag is set to True, the browser will allow this cookie to be set only if you are using an HTTPS connection.
PS: This is just for your localhost. Hopefully you'll be using an HTTPS connection in other environments.

Cookie “PHPSESSID” will be soon treated as cross-site cookie against <file> because the scheme does not match

I've just noticed my console is littered with this warning, appearing for every single linked resource. This includes all referenced CSS files, JavaScript files, SVG images, and even URLs from AJAX calls (which respond in JSON). But not images.
The warning, for example in case of a style.css file, will say:
Cookie “PHPSESSID” will be soon treated as cross-site cookie against “http://localhost/style.css” because the scheme does not match.
But the scheme doesn't match what? The document's scheme? Because it does match that.
The URL of my site is http://localhost/.
The site and its resources are all on http (no https on localhost)
The domain name is definitely not different because everything is referenced relative to the domain name (meaning the filepaths start with a slash href="/style.css")
The Network inspector just reports a green 200 OK response, showing everything as normal.
It's only Mozilla Firefox that is complaining about this. Chromium seems to not be concerned by anything. I don't have any browser add-ons. The warnings seem to originate from the browser, and each warning links to view the corresponding file source in Debugger.
Why is this appearing?
That was exactly what was happening to me. The issue was that Firefox kept showing me cookies from different websites hosted on the same URL ("localhost:port number") stored inside browser memory.
In my case, I have two projects configured to run at http://localhost:62601. When I run the first project, it saves its cookie in browser memory. When I run the second project at the same URL, that cookie is available inside the second project's console as well.
What you can do is delete all of the cookies from the browser.
Paramjot Singh's answer is correct and got me most of the way to where I needed to be. I also wasted a lot of time staring at those warnings.
But to clarify a little, you don't have to delete ALL of your cookies to resolve this. In Firefox, you can delete individual site cookies, which will keep your settings on other sites.
To do so, click the hamburger menu in the top right, then Options -> Privacy & Security (or Settings -> Privacy & Security).
From here, scroll down about half-way and find Cookies and Site Data. Don't click Clear Data. Instead, click Manage Data. Then search for the site you are having the notices on, highlight it, and click Remove Selected.
Simple, I know, but I made the mistake of clearing everything the first time - maybe this will prevent someone from doing same.
The warning is given because, according to MDN web docs:
Standards related to the Cookie SameSite attribute recently changed such that:
The cookie-sending behaviour if SameSite is not specified is SameSite=Lax. Previously the default was that cookies were sent for all requests.
Cookies with SameSite=None must now also specify the Secure attribute (they require a secure context/HTTPS).
Which indicates that a secure context/HTTPS is required in order to allow cross site cookies by setting SameSite=None Secure for the cookie.
According to Mozilla, you should explicitly communicate the intended SameSite policy for your cookie (rather than relying on browsers to apply SameSite=Lax automatically), otherwise you might get a warning like this:
Cookie “myCookie” has “SameSite” policy set to “Lax” because it is missing a “SameSite” attribute, and “SameSite=Lax” is the default value for this attribute.
The suggestion to simply delete localhost cookies is not actually solving the problem. The solution is to properly set the SameSite attribute of cookies being set by the server and use HTTPS if needed.
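As an illustration of setting the attribute explicitly on the server (the exact mechanism depends on your stack; here is a hedged sketch in ASP.NET on .NET Framework 4.7.2+, where HttpCookie exposes a SameSite property, with placeholder names):
var cookie = new HttpCookie("myCookie", "myValue");
cookie.SameSite = SameSiteMode.None; // or Lax/Strict, but state it explicitly
cookie.Secure = true;                // SameSite=None is only honoured together with Secure (HTTPS)
Response.Cookies.Add(cookie);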
Firefox is not the only browser making these changes. Apparently the version of Chrome I am using (84.0.4147.125) has already implemented the changes, as I got a similar message in its console.
The previously mentioned MDN article and this article by Mike Conca have great information about changes to SameSite cookie behavior.
I guess you are using WAMP or LAMP, etc. The first thing you need to do is enable SSL on WAMP, as you will find many references saying you need to adjust the cookie settings to SameSite=None; Secure, and that entails your local connection being secure. There are instructions at https://articlebin.michaelmilette.com/how-to-add-ssl-https-to-wampserver/ as well as some YouTube videos.
The important thing to note is that when creating the SSL certificate you should use SHA-256 signing, as SHA-1 is now deprecated and will throw another warning.
There is a good explanation of SameSite cookies on https://web.dev/samesite-cookies-explained/
I was struggling with the same issue and solved it by making sure the Apache 2.4 headers module was enabled and then adding one line of code:
Header always edit Set-Cookie ^(.*)$ $1;HttpOnly;Secure
I wasted lots of time staring at the same sets of warnings in the Inspector until it dawned on me that the cookies were persisting and needed purging.
Apparently Chrome was going to introduce the new rules by now, but Covid-19 meant a lot of websites might have been broken while people worked from home. The major browsers are working together on the SameSite attribute, so it will be in force soon.

Delete postman cache

I use the Postman extension to check out my RESTful APIs.
I am trying to make a request to my localhost, but it seems to have cached one of the query parameters.
I tried clearing the cache of my Chrome browser, but this does not seem to work. I even went to the extent of changing the API resource name.
Has anyone come across such an issue?
The Cache-Control request header can be used, but one thing to clarify:
no-cache does not mean "do not cache". In fact, it means the client must revalidate with the server on every HTTP request before using any cached response. If the server says that the resource is still valid, then the cache will still use the cached version.
no-store, on the other hand, effectively asks not to cache at all and is intended to prevent anything from being stored in the cache.
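As a rough illustration of the two directives on a request (a hedged sketch using C#'s HttpClient; the endpoint URL is a placeholder):
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class CacheControlDemo
{
    static async Task Main()
    {
        using var client = new HttpClient();
        var request = new HttpRequestMessage(HttpMethod.Get, "http://localhost:8080/api/items");
        request.Headers.CacheControl = new CacheControlHeaderValue
        {
            NoCache = true, // caches must revalidate with the server before reusing a stored response
            NoStore = true  // stricter: nothing may be written to any cache at all
        };
        var response = await client.SendAsync(request);
        Console.WriteLine(response.StatusCode);
    }
}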
I tried the solution above and it didn't work for me. What worked was restarting the application. I'm using Eclipse and running a Spring Boot application.
In case someone is using the same environment and facing the same problem it may help.
I suggest using the Postman app rather than the extension, because with the app you can do a lot more cool things, like using the console to debug your APIs and creating/deleting cookies and cache through an excellent GUI.
I came across the same situation where requests were cached in Postman. I deleted the JSESSIONID cookie from the Cookies section in Postman rather than closing the app, and it solved my problem (meaning the call reached my localhost app) and I got an accurate response. Please try it if you need this solution.
I usually just request the data in a Chrome incognito tab or Firefox private tab, and I guess that this just resets the cache, and then it appears in my Postman app.
(I would recommend using the Postman app instead of the website as it has many more features!)

Facebook Connect not setting cookies

I'm trying to implement Facebook Connect on a website with .NET MVC using C#.
I've followed the instructions here: http://wiki.developers.facebook.com/index.php/Trying_Out_Facebook_Connect step by step. I can make the login work, in that when I log in through the site I'm also logged into Facebook.
In order to work with this on the server, I think I need to access the cookies Facebook is supposed to leave, like:
APIKEY_user
APIKEY_session_key
...
as mentioned here http://wiki.developers.facebook.com/index.php/Verifying_The_Signature.
The thing is, I'm not getting any of these cookies. I've googled and it seems like I'm the only person with this problem. Any ideas as to what I could be doing wrong? Has this happened to anyone else?
The issue was that I was developing locally using localhost.
I resolved the problem by changing the settings for the application to point to a certain web address instead of localhost, and changing my hosts file to point that same web address to 127.0.0.1.
From the UI/client-side perspective, always ensure you have the correct path indicated for the xd_receiver file in your FB.init() method.
Firecookie is very useful for seeing what cookies are/aren't being set.

Issue with Incorrect URLs in the WSDL of a .NET Web Service

We have installed an ASP.NET web site on a client's server. This site has a web service with a couple of web methods that are called by a Flash object in order to display a news feed. If you browse to their site (ex: www.domain.com), everything's working fine except the flash.
The issue is that when we browse to the .asmx, the header shows that the Host is a subdomain internal to their network (internal.domain.com). Obviously this doesn't resolve to any public IP when browsing from outside of their network. This causes the Flash to fail since the flash object is embedded on a page and is therefore running client side.
I checked the computer name on the server in question, and it doesn't even match "internal.domain.com"; it is something completely different. Where is it getting this information from? It is not coming from IIS, since we have no host headers set up, and the IP for the site is set to (All Unassigned).
We either need to force the web service to run against a specific host, or we need to change something on the server so that it resolves to a valid public-facing host name. Any and all help is greatly appreciated!!!!
The solution is to add a host header for www.domain.com
More details here
While you probably did this already, it's always a good first step:
Do a global Find in the source code of both the Flash object and the web service for the string in question.
It sounds like someone may have configured/coded the internal.domain.com string into the Flash object's request. (Host: is an HTTP request header, not a response header, IIRC.)
Does the Flash object get the web service URL from the C# code? If so, it might be getting the default web service URL that you chose when adding a Web Reference to your project in VS. Therefore it might be pointing to a URL local to the developer's machine/server, which is not recognized on the live server.
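If the URL does come from a generated Web Reference proxy, one option is to override it at runtime rather than relying on the design-time default. A hedged sketch (the NewsService proxy class, GetNews method and config key are placeholders):
// ASMX proxies generated by "Add Web Reference" derive from SoapHttpClientProtocol,
// which exposes a Url property that can be set per environment.
var service = new NewsService();
service.Url = System.Configuration.ConfigurationManager.AppSettings["NewsServiceUrl"]
              ?? "http://www.domain.com/NewsService.asmx";
var items = service.GetNews();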