Sense of secure cookies over HTTPS

Is there any sense in encrypting cookies (secure cookies) when using HTTPS?
As far as I know, the whole request is encrypted under HTTPS, so do we need additional encryption of the cookies on top of that?

That depends entirely on your security model. Some reasons why you would still need to encrypt cookies:
Do you care if the user of your application gets the contents of the cookie? In other words, do you store anything there that's internal and shouldn't be disclosed to the user?
Do you care if the user tampers with the contents of the cookie? Encryption can be a way to get integrity protection depending on how you do it. (There are, of course, other ways as well.)
What are the consequences of disclosure of the cookie? If it's a bearer token, whether it's encrypted or not won't make a lot of difference, but if it contains valuable data, encrypting it provides some protection against an attacker gaining access to the browser's stored cookies in some way (whether via a web attack or an attack on the actual system hosting the browser). You still may lose to an attacker in other ways, but it could provide some defense in depth.
The main thing that encrypting the cookie gives you is protection against the user who receives the cookie (or an attacker who can access that user's data), if you need that.
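As a rough illustration of that last point, here is a minimal Node/TypeScript sketch of encrypting a cookie value with authenticated encryption (AES-256-GCM). The key handling and the cookie layout are invented for the example; the GCM tag is what gives you the integrity protection mentioned above as well as confidentiality.

import { createCipheriv, createDecipheriv, randomBytes } from 'node:crypto';

const key = randomBytes(32);   // placeholder: a real key would come from configuration, not be regenerated on restart

function encryptCookieValue(plaintext: string): string {
  const iv = randomBytes(12);                                  // unique nonce per cookie
  const cipher = createCipheriv('aes-256-gcm', key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  const tag = cipher.getAuthTag();                             // authenticity/integrity tag
  // iv.tag.ciphertext, base64url-encoded so the result is safe to put in a Set-Cookie header
  return [iv, tag, ciphertext].map(b => b.toString('base64url')).join('.');
}

function decryptCookieValue(cookie: string): string {
  const [iv, tag, ciphertext] = cookie.split('.').map(p => Buffer.from(p, 'base64url'));
  const decipher = createDecipheriv('aes-256-gcm', key, iv);
  decipher.setAuthTag(tag);                                    // decryption throws if the cookie was tampered with
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString('utf8');
}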

Why is the BFF pattern deemed safer for SPAs?

I am designing a new web application which requires an OAuth2 implementation. I've been reading up on the OAuth2 Authorization Code flow with PKCE. That makes sense: it ensures that the client initiating the Authorization Code flow is the same client that exchanges the authorization code for an access token (and/or refresh token).
But then I was wondering how we should deal with refresh tokens. I understand that a BFF is now the preferred solution for this, where we use a separate component (Backend for Frontend) that handles all calls from the web app and proxies them to the backend API, all the while handling all access tokens and refresh tokens. The web app and BFF maintain a session cookie, so the BFF can track which access token should be added to which request and so forth.
Most blogs mention something along the lines of "and this is safe if you set the session cookie to SameSite=Strict, HttpOnly and Secure, because no malicious JS can get at that cookie then".
And that is where I have trouble understanding why this is safer. If cookies are safe enough to deal with session IDs, then why are they not safe enough to deal with access tokens? Or even refresh tokens? If they are cookie-based, then they are sent with each request and no malicious JS can access them. If it could, then the BFF model doesn't provide any additional security, just a little more complexity, no?
So the bottom line is: if the BFF is deemed safe(r) because the sessions are kept in secure, HTTP-only cookies, why is keeping access/refresh tokens in secure, HTTP-only cookies not safe?
Adding a BFF to the mix does change the security of using tokens. The BFF is a confidential client so it makes the whole solution more secure, for sure, but this is not the only reason you would use a BFF. One of the main benefits is to keep tokens away from the browser. You don't want the tokens to be accessible by Javascript, that's why you want to rely on cookie-based sessions instead. This protects you from XSS attacks that want to steal the tokens. However, using sessions opens you to CSRF attacks and session-riding attacks. Protecting against CSRF and session-riding should be a bit easier than mitigating XSS as you can become vulnerable to XSS through a third-party library's dependency, for example.
E.g. in the case of a BFF: steal the session cookie -> make a request to the BFF -> get access to user data. In the case of no BFF: steal the AT/RT from the cookie -> make a request to the API -> get access to user data. I still don't understand how this is safer; I am sorry for not understanding it.
You don't have to be sorry! It's good that you're trying to understand that.
The problem with your example is this: you assume that there is no BFF, but the AT/RT are kept in cookies. If you're using cookies, then it means that you have some sort of backend component that sits between your SPA and the APIs. Otherwise, you would have to deal with tokens in the browser (readable by JS). So the difference that you should be contemplating here is: am I using HTTP-only cookies to call my APIs, or do I need tokens and have to set Authorization headers in JS? In the former case, it doesn't matter whether you have a session ID in the cookie or the actual AT. The important part is that JS can't read the content of that cookie.
Another thing is that it's harder to steal sessions than tokens kept in JS. For someone to steal data from cookies you would need a Man-in-the-browser attack. If the tokens are available to JS, then all you need is an XSS attack to steal them.
As I mentioned, using cookies opens you up to CSRF and session-riding attacks, but their impact is limited:
the attacker can only perform an attack when the user has an open session. As soon as the user closes the browser no more data can be stolen (whereas, when an attacker steals a token, they can read data for as long as the token is valid)
the attacker can only perform the same actions that a user can from the front end. This should also be the case for a stolen AT, but in reality, access tokens usually have too broad privileges, and there are APIs you can call with a token that are not normally accessible from the UI.
At Curity we have spent some time researching this and you can have a look at this whitepaper we wrote about the security of SPAs and the pros and cons of different approaches: https://curity.io/resources/documents/single-page-application-security-whitepaper
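To make the earlier point concrete (the page JavaScript only ever sees an HTTP-only session cookie; the tokens stay in the backend), here is a rough Express (v4-style) sketch of the proxying part of a BFF. The cookie name, the in-memory token store and the API URL are invented for the example; a real BFF would also handle login, token refresh, expiry and CSRF defences.

import express from 'express';
import cookieParser from 'cookie-parser';

const app = express();
app.use(cookieParser());

// Hypothetical store: opaque session id -> tokens obtained earlier via the code flow.
const tokensBySession = new Map<string, { accessToken: string; refreshToken: string }>();

app.get('/api/*', async (req, res) => {
  // The 'sid' cookie is HttpOnly, so page JavaScript can never read it.
  const session = tokensBySession.get(req.cookies['sid']);
  if (!session) return res.status(401).end();

  // The browser never sees the access token; the BFF attaches it on the way out.
  const upstream = await fetch('https://api.example.com' + req.path, {   // Node 18+ global fetch
    headers: { Authorization: `Bearer ${session.accessToken}` },
  });
  res.status(upstream.status).send(await upstream.text());
});

app.listen(3000);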
The BFF is considered safer not because of how the cookies are used when calling APIs with the access token, but because the way of obtaining tokens is more secure. SPAs by definition are not able to keep a secret (in the browser) and thus have to use a flow that involves a public client. The BFF allows for a confidential client, because the client secret is kept in the backend.
Using PKCE with a public client does indeed give you assurance that the same entity is requesting and receiving the tokens, but it gives you little assurance about the authenticity of that client. A confidential client takes care of the latter.
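For completeness, the PKCE part discussed in this question is only a few lines on the client side: generate a random verifier, send its SHA-256 hash with the authorization request, and present the original verifier at the token endpoint. A sketch (the parameter names follow RFC 7636; everything else is illustrative):

import { createHash, randomBytes } from 'node:crypto';

// code_verifier: a high-entropy secret the client keeps to itself for the duration of the flow.
const codeVerifier = randomBytes(32).toString('base64url');

// code_challenge: sent with the initial authorization request (code_challenge_method=S256).
const codeChallenge = createHash('sha256').update(codeVerifier).digest().toString('base64url');

// The later token request includes the original code_verifier, so the authorization server
// can confirm that the party finishing the flow is the one that started it.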

How to protect web application from cookie stealing attack?

My web application's authentication mechanism currently is quite simple.
When a user logs in, the website sends back a session cookie which is stored (using localStorage) on the user's browser.
However, this cookie can too easily be stolen and used to replay the session from another machine. I notice that other sites, like Gmail for example, have much stronger mechanisms in place to ensure that just copying a cookie won't allow you access to that session.
What are these mechanisms and are there ways for small companies or single developers to use them as well?
We ran into a similar issue. How do you store client-side data securely?
We ended up going with an HttpOnly cookie that contains a UUID, plus an additional copy of that UUID stored in localStorage. On every request, the client has to send both the UUID and the cookie back to the server, and the server verifies that the UUIDs match. I think this is how OWASP's double-submit cookie pattern works.
Essentially, the attacker needs access to both the cookie and localStorage.
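A rough Express sketch of that check, in case it helps; the cookie name, header name and routes are made up, and a production implementation should follow the OWASP cheat sheet rather than this outline.

import express from 'express';
import cookieParser from 'cookie-parser';
import { randomUUID, timingSafeEqual } from 'node:crypto';

const app = express();
app.use(cookieParser());

app.post('/login', (_req, res) => {
  const id = randomUUID();
  // One copy in an HttpOnly cookie, one copy in the response body for the page to keep in localStorage.
  res.cookie('session', id, { httpOnly: true, secure: true, sameSite: 'strict' });
  res.json({ csrf: id });
});

app.post('/api/action', (req, res) => {
  const fromCookie: string = req.cookies['session'] ?? '';
  const fromHeader = String(req.headers['x-csrf-token'] ?? '');
  // Reject unless the value the JS sent back matches the value in the cookie.
  const ok = fromCookie.length === fromHeader.length &&
    timingSafeEqual(Buffer.from(fromCookie), Buffer.from(fromHeader));
  if (!ok) return res.status(403).end();
  res.send('ok');
});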
Here are a few ideas:
Always use HTTPS, and HTTPS-only (Secure) cookies.
Save the cookie value in a server-side store (NoSQL/cache system/DB) and give it a TTL (expiry).
Never save the cookie value in the store as received; add a salt and hash it before you save or check it, just like you would with a password (see the sketch after this list).
Always clean up expired sessions from the store.
Save the issuing IP and its IP2Location area, so you can check whether the IP changes.
Use exclusive sessions: one user, one session.
If a session collision is detected (another IP), kick the user and require two-factor authentication on the next login, for instance by sending an SMS to a registered phone number so the code can be entered at login.
Under no circumstances load untrusted libraries. Better yet, host all the libraries you use on your own server/CDN.
Check that you have no injection vulnerabilities. Anything that echoes back to the user what they entered in one way or another (profiles, for instance) must be heavily sanitized, as it is a prime vector of compromise. The same goes for data sent to the server via any channel: cookies, GET, POST, headers; everything you may or may not use from the client must be sanitized.
Should I even mention SQL injection?
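A minimal sketch of the "hash it like a password" point from the list above. Because the token is already high-entropy, an HMAC with a server-side secret (playing the role of the salt) is a common way to do it; the store and the TTL below are placeholders.

import { createHmac, randomBytes } from 'node:crypto';

const serverSecret = randomBytes(32);        // placeholder: load from configuration in practice

// Hypothetical store: HMAC(token) -> { userId, expiresAt }. The raw token is never stored.
const sessions = new Map<string, { userId: string; expiresAt: number }>();

const hash = (token: string) => createHmac('sha256', serverSecret).update(token).digest('hex');

function issueSession(userId: string): string {
  const token = randomBytes(32).toString('base64url');                            // what goes into the cookie
  sessions.set(hash(token), { userId, expiresAt: Date.now() + 30 * 60_000 });     // 30-minute TTL
  return token;
}

function lookupSession(token: string): string | undefined {
  const entry = sessions.get(hash(token));
  if (!entry) return undefined;
  if (entry.expiresAt < Date.now()) { sessions.delete(hash(token)); return undefined; }   // expired: clean up
  return entry.userId;
}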
Double sessions, either using a URL session or storing an encrypted session ID in local storage, are nice and all, but they are ultimately useless: both are accessible to malicious code that is already included in your site, like, say, a library loaded from a domain that has been hijacked in one way or another (DNS poisoning, compromised server, proxies, interceptors, etc.). The effort is valiant but ultimately futile.
There are a few other options that further increase the difficulty of fetching and effectively using a session. For instance, you could reissue session IDs very frequently, say whenever a session ID is older than one minute: even though you keep the user logged in, they get a new session ID, so a possible attacker has just one minute to do something with a hijacked session ID.
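A sketch of that rotation idea, assuming a server-side session store and an Express-style response object; the one-minute threshold is just the figure from the paragraph above.

import { randomBytes } from 'node:crypto';
import type { Response } from 'express';

// Hypothetical store: session id -> { userId, issuedAt }
const store = new Map<string, { userId: string; issuedAt: number }>();

// Called on every authenticated request: the user stays logged in, but any individual
// session id only lives for about a minute, so a hijacked id is worth very little.
function rotateIfStale(currentId: string, res: Response): string {
  const entry = store.get(currentId);
  if (!entry || Date.now() - entry.issuedAt < 60_000) return currentId;
  store.delete(currentId);                                        // the old id stops working immediately
  const newId = randomBytes(32).toString('base64url');
  store.set(newId, { userId: entry.userId, issuedAt: Date.now() });
  res.cookie('sid', newId, { httpOnly: true, secure: true, sameSite: 'strict' });
  return newId;
}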
Even if you apply all of these, there is no guarantee that your session won't be hijacked one way or another; you just make it incredibly hard to do, to the point of being impractical. But make no mistake: making it 100% secure is impossible.
There are loads of other security features you need to consider at the server level, like execution isolation, data isolation, etc. This is a very large discussion. Security is not something you apply to a system; it must be how the system is built from the ground up!
Make sure you're absolutely not vulnerable to XSS attacks. Everything below is useless if you are!
Apparently, you are mixing two things up: localStorage and cookies.
They are two entirely different storage mechanisms:
Cookies are a string of data that is sent with every single request to your server. Cookies are sent as HTTP headers and can be read using JavaScript if HttpOnly is not set.
localStorage, on the other hand, is a key/value storage mechanism offered by the browser. The data is stored locally in the browser and is not sent anywhere; the only way to access it is via JavaScript.
Now I will assume you use a token (maybe JWT?) to authenticate users.
If you store your token in localStorage, then just make sure that when you send it along to your server you send it as an HTTP header, and you'll be all set; you won't be vulnerable to virtually anything. This kind of storage/authentication technique is very well suited to single-page applications (VueJS, ReactJS, etc.).
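For example, the browser-side call could look roughly like this (the storage key and endpoint are invented):

// Runs in the browser. The token is attached explicitly, never sent automatically by the
// browser the way a cookie would be, which is why CSRF does not apply to it.
const token = localStorage.getItem('access_token');

const response = await fetch('/api/profile', {
  headers: { Authorization: `Bearer ${token}` },
});
const profile = await response.json();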
However, if you use cookies to store the token, then there comes the problem: while the token cannot be stolen by other websites, it can be used by them. This is called Cross-Site Request Forgery (CSRF).
This kind of attack basically works by an attacker's page including something like:
<img src="https://yourdomain.com/account/delete">
When your browser loads their page, it'll attempt to load the image, and it'll send the authentication cookie along, too, and eventually, it'll delete the user's account.
Now there is an awesome CSRF prevention cheat sheet that lists possible ways to defend against that kind of attack.
One really good way is to use the synchronizer token method. It basically works by generating a token server-side and adding it as a hidden field to the form you're trying to secure. When the form is submitted, you simply verify that token before applying changes. This technique works well for websites that use templating engines with simple forms (not AJAX).
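A bare-bones sketch of the synchronizer token pattern; the routes and field names are invented, and in practice you would use your framework's built-in CSRF protection and key the token by the real session.

import express from 'express';
import { randomBytes } from 'node:crypto';

const app = express();
app.use(express.urlencoded({ extended: false }));

// Hypothetical per-session storage of the expected token (keyed by a fixed id for brevity).
const csrfBySession = new Map<string, string>();

app.get('/settings', (_req, res) => {
  const token = randomBytes(32).toString('hex');
  csrfBySession.set('demo-session', token);
  // The token is embedded as a hidden field; a cross-site attacker has no way to read it.
  res.send(`<form method="POST" action="/settings">
    <input type="hidden" name="csrf" value="${token}">
    <button>Save</button>
  </form>`);
});

app.post('/settings', (req, res) => {
  if (req.body.csrf !== csrfBySession.get('demo-session')) return res.status(403).send('bad token');
  res.send('saved');
});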
The HttpOnly flag adds more security to cookies, too.
You can also use two-step authentication via phone number or email. Steam is a good example: every time you log in from a new computer, you either have to mark it as a "Safe Computer" or verify using your phone number/email.

Do I need to sign/encrypt cookies if I am using SSL?

If I am serving all of my web content over SSL, do I need to add another layer of encryption and sign my cookie data?
SSL/TLS only offers protection against communications being intercepted and/or modified. It guarantees nothing about text files sitting on a client's hard drive (i.e. cookies).
If you want to prevent a user from presenting your web application with falsified cookie information, then yes, you need to sign your cookie data. If you want to prevent a user from seeing the cookie data, then you should encrypt it as well.
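A minimal sketch of what signing a cookie value could look like in Node/TypeScript; the key handling and the value.signature layout are illustrative only, and most frameworks have this built in.

import { createHmac, timingSafeEqual } from 'node:crypto';

const secret = 'load-me-from-configuration';             // server-side signing key (placeholder)

function sign(value: string): string {
  const mac = createHmac('sha256', secret).update(value).digest('base64url');
  return `${value}.${mac}`;                               // this is what goes into Set-Cookie
}

function verify(cookie: string): string | null {
  const i = cookie.lastIndexOf('.');
  if (i < 0) return null;
  const value = cookie.slice(0, i);
  const expected = Buffer.from(createHmac('sha256', secret).update(value).digest('base64url'));
  const given = Buffer.from(cookie.slice(i + 1));
  // Constant-time comparison; any tampering with the value invalidates the signature.
  if (expected.length !== given.length || !timingSafeEqual(expected, given)) return null;
  return value;
}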

Can the user agent be used to tie a cookie to hardware?

I help maintain a site that is sold to about 100 clients. We take security pretty seriously and we have a multiple step login process. One part of the process can be skipped if you have already logged in before and choose to get a cookie. When you login again and still have that cookie, that step is skipped. Of course, the value in the cookie is random and different for every user.
My boss wants to make it impossible to copy the cookie to another computer. Of course, I've explained that this is not possible, but he still insists it is, by requiring the user agent to remain the same.
"We can then document that we have a “hardened” cookie that is specific to the user’s hardware and software."
Of course, I've explained that spoofing the user agent would be many, many times easier than spoofing the cookie value, and compared it to putting a band-aid on a padlock. Not to mention that any opportunity you have to copy the cookie would allow you to copy the user agent as well. He doesn't care.
It doesn't bother me to require the same user agent, but I have some integrity, and a problem working on something being sold with such a lie about its security.
I'm a professional not a grunt. I wouldn't design a bridge that supports one weight when I know will be advertised as supporting a higher weight.
Am I being reasonable?
Suggest an alternative, since cookies are not intended to provide security:
An active network attacker can overwrite Secure cookies from an insecure channel, disrupting their integrity
Transport-layer encryption, such as that employed in HTTPS, is insufficient to prevent a network attacker from obtaining or altering a victim's cookies because the cookie protocol itself has various vulnerabilities.
A server that uses cookies to authenticate users can suffer security vulnerabilities because some user agents let remote parties issue HTTP requests from the user agent (e.g., via HTTP redirects or HTML forms). When issuing those requests, user agents attach cookies even if the remote party does not know the contents of the cookies, potentially letting the remote party exercise authority at an unwary server.
Cookies do not provide integrity guarantees for sibling domains (and their subdomains). For example, consider foo.example.com and bar.example.com. The foo.example.com server can set a cookie with a Domain attribute of "example.com" (possibly overwriting an existing "example.com" cookie set by bar.example.com), and the user agent will include that cookie in HTTP requests to bar.example.com. In the worst case, bar.example.com will be unable to distinguish this cookie from a cookie it set itself. The foo.example.com server might be able to leverage this ability to mount an attack against bar.example.com.
Cookies rely upon the Domain Name System (DNS) for security. If the DNS is partially or fully compromised, the cookie protocol might fail to provide the security properties required by applications.
A cookie that has a signature involving a server-side "secret", using the user agent as part of the salt, will be more difficult to spoof than a cookie that does not have the user agent as part of the salt. That is indisputable. First of all, it takes time to figure out how the salt is created, and a lot of "attackers" will be discouraged immediately.
Yes, but it is not more secure...
Your boss has a goal; to be able to tell his customers that the cookie is "hardened". You shouldn't assume that your boss does not understand the implications.
The fact is, it won't affect your application's security at all in either direction. It will, however, result in the cookie being slightly more difficult to move from one machine to another, and it will make the cookie stop working if the client updates their browser or Flash version or changes their user agent in other ways.
Conclusion:
If everything else is equal, I consider a user-agent salt in cookies to be better than no user-agent salt, by a tiny amount. I guess you could implement the thing in less time than you spent asking this question.
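For what it's worth, the implementation really is tiny. A sketch, with the caveat from the question that this only notices a changed User-Agent string; it does not tie anything to hardware:

import { createHmac } from 'node:crypto';

const secret = 'server-side-secret';                      // placeholder

// Mix the User-Agent header into the MAC, so replaying the cookie value from a browser
// that reports a different UA string fails verification.
function signForAgent(value: string, userAgent: string): string {
  const mac = createHmac('sha256', secret).update(`${value}|${userAgent}`).digest('base64url');
  return `${value}.${mac}`;
}

function verifyForAgent(cookie: string, userAgent: string): boolean {
  const i = cookie.lastIndexOf('.');
  if (i < 0) return false;
  const expected = createHmac('sha256', secret).update(`${cookie.slice(0, i)}|${userAgent}`).digest('base64url');
  return cookie.slice(i + 1) === expected;
}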

Unstealable persistent login cookie

Having read articles like http://jaspan.com/improved_persistent_login_cookie_best_practice I'm wondering whether there's a reasonably good way to achieve this.
So, what I want is to make it fairly hard for a crook to steal a cookie and use it on his own computer. Using secure cookies is out of the question. What I've been thinking about is hashing some information about the user's browser into the cookie, which would be verified whenever an auto-login is attempted.
So, the problem I'm facing right now is what info to hash. The browser name should be OK, but the version number would invalidate the auto-login on each browser upgrade. The same goes for feature sniffing. What I've been thinking of is hashing the browser name and the user's locale, to get a reasonably certain way of counteracting cookie theft.
Am I on the right track? Is there a de-facto way of doing this?
The system doesn't need to be 100% impregnable, just reasonably so.
PS: You don't have to worry about the other data in the cookie. I'm just curious about the "don't steal this cookie"-part.
Edit 1: A weakness in hashing client info, as I was told elsewhere, is that it's enough for the attacker to know that client info is used, and to copy that client info when the cookie is stolen. Granted, an additional step for the attacker, but not as big a step as I imagined... Any additional thoughts?
First of all, there aren't any foolproof ways to deal with this, but I'll try to give you a more suitable answer. However, I'll start with some other things you should probably consider.
Start by thinking about how to avoid a user's cookie being compromised in the first place. Probably the most common ways of cookie-jacking are listening to unsecured HTTP traffic, using XSS attacks, or exploiting incorrectly defined cookie paths.
You mentioned that secure cookies are out of the question in your case, but I want to note this for reference for other readers. Make sure your site uses HTTPS all the way; this way you ensure that traffic to your site stays secure even if the user is on an unencrypted wireless connection.
Make sure your site defines the proper domain and path for its cookies; in other words, make sure that cookies aren't sent to parts of the domain that shouldn't have access to them.
Enable HttpOnly on your cookies. This means that your cookies are only sent with HTTP(S) requests and cannot be read, for example, using JavaScript. This mitigates the chances of the user's cookies being stolen by means of XSS.
That said, to answer your actual question: probably a common way of identifying the user by other means is to use a browser fingerprint. A browser fingerprint is a hash built from information unique to the user's environment; for example, it can include browser plugin details, time zone, screen size, system fonts and user agent. Note, however, that if any of these change, so does the fingerprint, thus, in your case, invalidating the cookie; I don't necessarily see this as a bad thing from a security point of view.
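As a rough sketch of that kind of check on the server side, here is one way to hash a coarse browser family (so upgrades don't invalidate it) and the locale from request headers into a value stored next to the persistent-login token; the header mix and the crude family regex are just illustrative choices.

import { createHash } from 'node:crypto';
import type { Request } from 'express';

// Coarse fingerprint: browser family without the version, plus the user's preferred locale.
// Both come from headers the browser sends with every request anyway.
function fingerprint(req: Request): string {
  const ua = String(req.headers['user-agent'] ?? '');
  const family = /firefox|chrome|safari|edg|opera/i.exec(ua)?.[0].toLowerCase() ?? 'other';  // crude on purpose
  const locale = String(req.headers['accept-language'] ?? '').split(',')[0];
  return createHash('sha256').update(`${family}|${locale}`).digest('hex');
}

// Store fingerprint(req) when the remember-me cookie is issued and compare it on auto-login.
// As noted in Edit 1, an attacker who can steal the cookie can usually copy these values too.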