Some frameworks (e.g. Django) support CSRF protection for users without any kind of session. What is the benefit of that?
What is the exploit that a CSRF attack could take advantage of when there is no existing session for the user?
Off the top of my head:
Having CSRF protection from day 1 means you don't need to worry about adding it after the fact if, on day 17, you add user sessions.
Even if there are no explicit sessions, there could still be some other authentication or mechanism protecting the site. For example, if you were running a Django site on your private network and browsing both that site AND evil.com from inside the network, evil.com could trick your browser into sending requests to your private site. Rather unlikely, but I hope it at least makes the point.
You might also want to raise this on the Security Stack Exchange.
(Updated based on comment below)
Even if there were no authentication or other reason to trust the browser, there are still some weak benefits to using CSRF protection:
As Bobince points out, it does prevent simpler spamming attacks (since the attacker now needs to fetch the first page to get the CSRF token), and it also means that if someone does do something malicious, the IP address in the server logs will be linked to them and not to some innocent client's IP. (Of course that's spoofable, etc., but it's still slightly better than making it trivial to make the attack look like it's coming from someone else.)
If you were using some form of persistent authentication that isn't based on session association (e.g. HTTP Basic Auth), that would still need CSRF protection.
For entirely anonymous connections, it can still act as an obfuscation measure that blocks automated form submissions from the stupider kind of bots (see the sketch below).
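To make the sessionless case concrete, here is a minimal sketch in plain Python (not Django's actual scheme; the secret, function names and 30-minute lifetime are made up for illustration) of a CSRF token the server can hand to anonymous visitors, e.g. as a hidden form field, and verify later without keeping any per-user state. A third-party page can't read the form that carries the token, and a naive bot that never fetches the form never obtains one:

    import hashlib
    import hmac
    import time

    SECRET_KEY = b"change-me"  # server-side secret; placeholder value

    def make_token(now=None):
        """Issue a stateless token: an HMAC over a timestamp."""
        ts = str(int(now if now is not None else time.time()))
        sig = hmac.new(SECRET_KEY, ts.encode(), hashlib.sha256).hexdigest()
        return ts + ":" + sig

    def check_token(token, max_age=30 * 60, now=None):
        """Verify the signature and reject tokens older than max_age seconds."""
        try:
            ts, sig = token.split(":", 1)
        except (AttributeError, ValueError):
            return False
        expected = hmac.new(SECRET_KEY, ts.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            return False
        age = (now if now is not None else time.time()) - int(ts)
        return 0 <= age <= max_age

Because validation only needs the server secret, no session or database lookup is involved, which is the sense in which CSRF protection can work for users without any kind of session.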
Related
My web application's authentication mechanism currently is quite simple.
When a user logs in, the website sends back a session cookie, which is then stored (using localStorage) in the user's browser.
However, this cookie can too easily be stolen and used to replay the session from another machine. I notice that other sites, like Gmail for example, have much stronger mechanisms in place to ensure that just copying a cookie won't allow you access to that session.
What are these mechanisms and are there ways for small companies or single developers to use them as well?
We ran into a similar issue. How do you store client-side data securely?
We ended up going with an HttpOnly cookie that contains a UUID, plus an additional copy of that UUID stored in localStorage. On every request, the client has to send both the UUID and the cookie back to the server, and the server verifies that the two UUIDs match. I think this is essentially how OWASP's double-submit cookie pattern works.
Essentially, the attacker needs access to both the cookie and localStorage.
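A rough sketch of the server-side check described above, assuming the localStorage copy travels in a custom header (the header name, cookie name, and request API shown in comments are illustrative, not from any particular framework):

    import hmac

    def verify_double_submit(cookie_uuid, header_uuid):
        """Accept the request only if the HttpOnly cookie's UUID and the copy
        sent from localStorage (e.g. in an 'X-Session-Check' header) match."""
        if not cookie_uuid or not header_uuid:
            return False
        # constant-time comparison to avoid leaking a near-match via timing
        return hmac.compare_digest(cookie_uuid, header_uuid)

    # Inside a request handler (framework-specific, shown only as a pattern):
    # if not verify_double_submit(request.cookies.get("session_check"),
    #                             request.headers.get("X-Session-Check")):
    #     return error_response(403)

Since scripts on another origin can read neither the HttpOnly cookie nor your localStorage, an attacker has to compromise the page itself (XSS) to obtain both values.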
Here are a few ideas:
Always use HTTPS, and mark cookies Secure (HTTPS-only).
Save the session token in a storage system (NoSQL/cache system/DB) and set a TTL (expiry) on it.
Never save the token into storage as received; salt and hash it before you save or check it, just like you would a password (see the sketch at the end of this answer).
Always clean up expired sessions from the store.
Save the issuing IP and its IP2Location area, so you can check whether the IP changes.
Enforce exclusive sessions: one user, one session.
If a session collision is detected (another IP), kick the user out and require two-factor authentication for the next login, for instance by sending an SMS to a registered phone number that the user then enters at login.
Under no circumstances load untrusted libraries. Better yet, host all the libraries you use on your own server/CDN.
Check that you have no injection vulnerabilities. Things like profiles, or generally anything that echoes back to the user what they entered in one way or another, must be heavily sanitized, as they are a prime vector of compromise. The same goes for any data sent to the server: cookies, GET, POST, headers; everything you may or may not use from the client must be sanitized.
Should I even mention SQL injection?
Double sessions, whether using a URL session or storing an encrypted session ID in local storage, are nice and all, but they are ultimately useless, as both are accessible to malicious code that is already included in your site, say a library loaded from a domain that has been hijacked in one way or another (DNS poisoning, compromised server, proxies, interceptors, etc.). The effort is valiant but ultimately futile.
There are a few other options that further increase the difficulty of fetching and effectively using a session. For instance, you could reissue session IDs very frequently, say reissue a session ID once it is older than one minute. Even if you keep the user logged in, they get a new session ID, so a possible attacker has just one minute to do something with a hijacked session ID.
Even if you apply all of these, there is no guarantee that your session won't be hijacked one way or the other; you just make it incredibly hard to do so, to the point of being impractical. But make no mistake, making it 100% secure is impossible.
There are loads of other security features you need to consider at the server level, like execution isolation, data isolation, etc. This is a very large discussion. Security is not something you apply to a system; it must be how the system is built from the ground up!
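A minimal sketch of the "hash the token before storing it, with a TTL" items from the list above, using an in-memory dict as a stand-in for your cache/DB (the names and the 30-minute TTL are placeholders; a salt could be mixed into the hash exactly as suggested):

    import hashlib
    import secrets
    import time

    SESSION_TTL = 30 * 60
    _store = {}  # maps the *hashed* token to its expiry timestamp

    def create_session():
        token = secrets.token_urlsafe(32)               # value sent to the client
        hashed = hashlib.sha256(token.encode()).hexdigest()
        _store[hashed] = time.time() + SESSION_TTL      # only the hash is stored
        return token

    def session_is_valid(token):
        hashed = hashlib.sha256(token.encode()).hexdigest()
        expiry = _store.get(hashed)
        if expiry is None or expiry < time.time():
            _store.pop(hashed, None)                    # drop expired entries
            return False
        return True

With this layout, someone who gets read access to the session store still cannot reconstruct a usable client-side token, for the same reason a password hash dump doesn't directly yield passwords.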
Make sure you're absolutely not vulnerable to XSS attacks. Everything below is useless if you are!
It seems you are mixing up two things: localStorage and cookies.
They are two completely different storage mechanisms:
Cookies are strings of data that are sent with every single request to your server. Cookies are sent as HTTP headers and can be read using JavaScript if HttpOnly is not set.
LocalStorage, on the other hand, is a key/value storage mechanism that is offered by the browser. The data is stored there, locally on the browser, and it's not sent anywhere. The only way to access this is using JavaScript.
Now I will assume you use a token (maybe JWT?) to authenticate users.
If you store your token in localStorage, then just make sure that when you send it to your server you send it as an HTTP header, and you will essentially not be vulnerable to CSRF. This kind of storage/authentication technique is very good for single-page applications (VueJS, ReactJS, etc.).
However, if you use cookies to store the token, then there comes the problem: while the token cannot be read by other websites, it can still be used by them. This is called Cross-Site Request Forgery (CSRF).
This kind of attack basically works by the malicious page including something like:
<img src="https://yourdomain.com/account/delete">
When your browser loads their page, it will attempt to load the image, send the authentication cookie along too, and eventually it will delete the user's account.
Now, there is an awesome CSRF prevention cheat sheet that lists possible ways to defend against this kind of attack.
One really good way is to use the synchronizer token pattern. It basically works by generating a token server-side and then adding it as a hidden field to the form you're trying to secure. When the form is submitted, you simply verify that token before applying the changes. This technique works well for websites that use templating engines with simple forms (not AJAX).
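A bare-bones sketch of that pattern (framework-agnostic; "session" stands for whatever dict-like, per-user server-side session store you use, and the key name "csrf_token" is just an example):

    import hmac
    import secrets

    def get_form_token(session):
        """Create (once per session) the value to embed as a hidden form field."""
        if "csrf_token" not in session:
            session["csrf_token"] = secrets.token_urlsafe(32)
        return session["csrf_token"]

    def form_post_is_allowed(session, submitted_token):
        """On POST, compare the hidden-field value with the server-side copy."""
        expected = session.get("csrf_token")
        if not expected or not submitted_token:
            return False
        return hmac.compare_digest(expected, submitted_token)

The attacker's page cannot read your form (same-origin policy), so it cannot supply a matching hidden-field value in the forged request.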
The HttpOnly flag adds more security to cookies, too.
You can also use two-step authentication via phone number or email. Steam is a good example: every time you log in from a new computer, you either have to mark it as a "Safe Computer" or verify using a phone number/email.
I have a Django site that uses Django's CSRF token for protection against CSRF attacks. One of the forms can be accessed by the public, including people who have not logged in.
The CSRF token is supposed to give protection against cross-domain requests.
Edit: (quote from my comment)
"But, in case of post requests that are allowed without requiring authorization, csrf is no better than a trival spam filter(captcha would do better here). In fact, it should be a security risk to include CSRF token(that expire after say, 30 mins) in pages that require no authentication what so ever.(but my site is doing it, which is why I made this post in the first place)"
Also, in this case, one could just fetch that page in browser js console, get the csrf token through some specific xpath and then post some arbitrary data with that csrf. Also, steps being easily reproducible, one could design a specific attack for the site, or any Django site for that matter cause you'll find csrf token besides 'csrfmiddlewaretoken' every time (and that includes sites like reddit, pinterest etc.).
As far as I can see, apart from making it a little difficult, csrf token didn't help much.
Is there an aspect to it I am missing? Is my implementation wrong? and If I'am correct is it dumb to have your csrf token flying around in your html source(esp. those not requiring any authentication)?
This question has a couple of really good answers about the same thing. The last answer there also addresses the fact that it technically would be possible to scrape the form for the token (via JavaScript) and then submit a POST request with it (via JavaScript), but the victim would have to be logged in.
The point of the CSRF protection is to specifically prevent tricking a random user. It has nothing to do with client-side exploits. You also have to consider that part of the protection includes denying cross-site origin requests. The request would have to come from the same origin as the target site.
Bottom line, CSRF protection has value. It's one layer of protection, but it's not the be-all and end-all, and you can't defend against everything.
Quote from a blog post about CSRF:
Secret hidden form value. Send down a unique server form value with each form -- typically tied to the user session -- and validate that you get the same value back in the form post. The attacker can't simply scrape your remote form as the target user through JavaScript, thanks to same-domain request limits in the XmlHttpRequest function.
... And comments of interest:
I'm not a javascript wizard, but is it possible to load a remote page in a hidden iframe on the malicious page, parse it with javascript to find the hidden token and then populate the form the user is (presumably) about to submit with the right values?
David Goodwin on September 24, 2008 2:35 AM
#David Goodwin: No, the same-origin policy would prevent the malicious page from reading the contents of the iframe.
Rico on September 24, 2008 3:03 AM
If your form is public and doesn't require authentication, then there is nothing stopping anyone (including malicious sites/plugins/people) from posting to it. That problem is called Spam, not CSRF.
Read this: http://en.wikipedia.org/wiki/Cross-site_request_forgery
CSRF involves a malicious site posting to your forms by pretending to be an authenticated user.
I have a RESTful API whose resources have annotations like @Consumes(MediaType.APPLICATION_JSON). In that case, would a CSRF attack still be possible against such a service? I've been tinkering with securing my services with CSRFGuard on the server side, or with a double submit from the client side. However, when I tried to POST requests using a FORM with enctype="text/plain", it didn't work. (The technique is explained here.) It does work if I have MediaType.APPLICATION_FORM_URLENCODED in my @Consumes annotation. The content negotiation helps when I'm using the POST/PUT/DELETE verbs, but GET is still accessible, which might need looking into.
Any suggestions or inputs would be great, also please let me know if you need more info.
Cheers
JAX-RS is designed for creating REST APIs, which are supposed to be stateless.
Cross-Site Request Forgery is NOT a problem for stateless applications.
The way Cross-Site Request Forgery works is that someone tricks you into clicking on a link, or opening a link in your browser, that points to a site where you are already logged in, for example some online forum. Since you are already logged in on that forum, the attacker can construct a URL, say something like this: someforum.com/deletethread?id=23454
That forum software, being badly designed, will recognize you based on the session cookie, confirm that you have the capability to delete the thread, and will in fact delete that thread.
All because the program authenticated you based on the session cookie (or even based on a "remember me" cookie).
With a RESTful API there is no cookie and no state is maintained between requests, so there is no need to protect against session hijacking.
The way you usually authenticate with a RESTful API is by sending some additional headers. If someone tricks you into clicking a URL that points to the RESTful API, the browser is not going to send those extra headers, so there is no risk.
In short: if the REST API is designed the way it is supposed to be, stateless, then there is no risk of cross-site request forgery and no need for CSRF protection.
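To illustrate the "extra headers" point, a small sketch (the token value, bearer scheme, and URL are placeholders, not from any specific framework): the server only honors requests that carry an Authorization header, which a planted link or <img> tag cannot add.

    import hmac

    API_TOKEN = "example-token"  # placeholder; in reality issued per user/client

    def request_is_authenticated(headers):
        """Server-side check: a browser following a planted link or loading an
        <img> sends cookies automatically, but it never adds this header."""
        auth = headers.get("Authorization", "")
        return hmac.compare_digest(auth, "Bearer " + API_TOKEN)

    # A legitimate client attaches the header explicitly, e.g. with `requests`:
    # requests.delete("https://api.example.com/threads/23454",
    #                 headers={"Authorization": "Bearer " + API_TOKEN})

Whether this actually holds depends on the header-based scheme being enforced everywhere, which is exactly what the next answer picks apart.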
Adding another answer, as Dmitri's answer mixes up server-side state and cookies.
An application is not stateless if your server stores user information in memory across multiple requests. This decreases horizontal scalability, as you need to find the "correct" server for every request.
Cookies are just a special kind of HTTP header. They are often used to identify a user's session, but not every cookie implies server-side state. The server could also use the information from the cookie without starting a session. On the other hand, using other HTTP headers does not necessarily mean that your application is automatically stateless; if you store user data in your server's memory, it's not.
The difference between cookies and other headers is the way they are handled by the browser. Most important for us is that the browser will resend them on every subsequent request. This is problematic if someone tricks a user into making a request they don't want to make.
Is this a problem for an API which consumes JSON? Yes, in two cases:
The attacker makes the user submit a form with enctype=text/plain: URL-encoded content is not a problem, because the result can't be valid JSON. text/plain is a problem if your server interprets the content not as plain text but as JSON. If your resource is annotated with @Consumes(MediaType.APPLICATION_JSON), you should not have a problem, because it won't accept text/plain and should return status 415 (see the sketch after this list). (Note that JSON may become a valid enctype one day, and then this won't hold any more.)
The attacker makes the user submit an AJAX request: the Same-Origin Policy prevents AJAX requests to other domains, so you are safe as long as you don't disable this protection by using CORS headers such as Access-Control-Allow-Origin: *.
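A framework-agnostic sketch of the content-type gate that such an annotation effectively gives you (this is an illustration in Python, not JAX-RS code): everything a plain HTML form can send, i.e. text/plain, application/x-www-form-urlencoded, or multipart/form-data, is turned away with 415.

    ACCEPTED = "application/json"

    def status_for_write_request(content_type):
        """Return the HTTP status a JSON-only endpoint would answer with."""
        if not content_type:
            return 415
        # Content-Type may carry parameters, e.g. "application/json; charset=utf-8"
        if content_type.split(";", 1)[0].strip().lower() != ACCEPTED:
            return 415  # Unsupported Media Type
        return 200

    assert status_for_write_request("application/json; charset=utf-8") == 200
    assert status_for_write_request("text/plain") == 415
    assert status_for_write_request("application/x-www-form-urlencoded") == 415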
Having read articles like http://jaspan.com/improved_persistent_login_cookie_best_practice I'm wondering whether there's a reasonably good way to achieve this.
So, what I want is for a crook to have a fairly hard time stealing a cookie and using it on his own computer. Using secure cookies is out of the question. What I've been thinking about is hashing some information about the user's browser into the cookie, which would be verified whenever an auto-login is attempted.
So, the problem I'm facing right now is what info to hash. The browser name should be OK, but the version number would invalidate the auto-login on each browser upgrade. The same goes for feature sniffing. What I've been thinking of is hashing the browser name and the user's locale, to get a reasonably certain way of counteracting cookie theft.
Am I on the right track? Is there a de-facto way of doing this?
The system doesn't need to be 100% impregnable, just reasonably so.
PS: You don't have to worry about the other data in the cookie. I'm just curious about the "don't steal this cookie"-part.
Edit 1: A weakness of hashing client info, as was pointed out to me elsewhere, is that it's enough for the attacker to know that client info is used and to copy the client info along with the stolen cookie. Granted, that's an additional step for the attacker, but not as big a step as I imagined... Any additional thoughts?
First of all, there aren't any foolproof ways to deal with this, but I'll try to give you a suitable answer. I'll start, however, with some other things you should probably consider.
Start by thinking about how to avoid a user's cookie being compromised in the first place. Probably the most common ways of cookie-jacking are listening to unsecured HTTP traffic, XSS attacks, and exploiting incorrectly defined cookie paths.
You mentioned that secure cookies are out of the question in your case, but I'll want to note this for further reference for other readers. Make sure your site uses HTTPS all the way, this way you will ensure that traffic to your site is secure even if the user is using an unencrypted wireless internet access.
Make sure your site defines the proper domain and path for the cookies, in other words, make sure that the cookies aren't sent to such part of the domain which shouldn't get access to the cookie.
Enable HttpOnly in your cookies. This means that your cookies are only sent on HTTP(S) requests and cannot be read, for example, by using JavaScript. This will mitigate the chances of the user's cookies being stolen by means of XSS.
That said, to answer your actual question: probably the most common way of identifying the user by other means is a browser fingerprint. A browser fingerprint is a hash built from information unique to the user's environment; for example, it can include browser plugin details, time zone, screen size, system fonts and user agent. Note, however, that if any of these changes, so does the fingerprint, which in your case invalidates the cookie. From a security point of view I don't necessarily see that as a bad thing.
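A minimal sketch of such a fingerprint. The attributes chosen here (user agent, Accept-Language, and a time-zone offset) are just examples; the more you include, the more often legitimate changes will invalidate the cookie.

    import hashlib

    def browser_fingerprint(user_agent, accept_language, tz_offset_minutes):
        """Hash a few request attributes into a single comparable value."""
        raw = "|".join([user_agent or "", accept_language or "", str(tz_offset_minutes)])
        return hashlib.sha256(raw.encode("utf-8")).hexdigest()

    # Store the fingerprint alongside the persistent-login token and compare it
    # on each auto-login attempt; on a mismatch, fall back to a full login.
    print(browser_fingerprint("Mozilla/5.0 ...", "en-US,en;q=0.9", -120))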
I consider myself newbie when it comes to securing my web applications.
I have built a website which updates the webpages regularly through an AJAX call. The Ajax call returns a decent JSON object to be used at the client side.
There is a simple problem I need to overcome: How can I prevent other people to use the same AJAX call without permission? What if they build a website, AND at the client side they allow their users to make the same AJAX call to my servers and grab what they need.. AND THEN parse it to their own needs at the client side?
I cannot put an extra layer of security like user authentication.
They won't be able to actually do this from the client directly, because the browser will prevent cross-domain AJAX requests for anything other than JSONP (scripts). That said, they can proxy it through their own server if they want, so it doesn't buy you much.
ASP.NET MVC has an antiforgery token mechanism that you should look at for inspiration. The basic idea is that you use both an encrypted cookie and an encrypted, hidden form input containing the same data that you write to each page that you want to secure. Do your AJAX calls using a POST and make sure to send back the form input. On the server-side decrypt the cookie and input and compare the data to ensure they're the same. Since the cookie is tied to your domain, it will be much harder to inject in the request that is being sent back. Use SSL and regenerate the cookie/input content periodically to make it even harder to fake the cookie/input.
You can check the Referer HTTP header (HTTP_REFERER) and see whether the request originates from your own page. This can be spoofed, however, so don't think of it as a bulletproof solution (a sketch follows below). The best counter-measure really is user authentication.
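A tiny sketch of that check (the allowed host is a placeholder). As said, the header can be spoofed by non-browser clients and may be stripped by some proxies, so treat it only as a weak, best-effort signal.

    from urllib.parse import urlparse

    ALLOWED_HOST = "www.example.com"

    def referer_looks_ok(referer_header):
        if not referer_header:
            return False  # many clients legitimately omit it; decide your policy
        return urlparse(referer_header).hostname == ALLOWED_HOST

    assert referer_looks_ok("https://www.example.com/some/page")
    assert not referer_looks_ok("https://evil.example.net/")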
You can't, because you can't differentiate between an AJAX call coming from your web app and one coming from someone else's web app.
Here are some things that might help a little bit.
Obscuring/encrypting your AJAX response. This fails mainly because you have to include the decryption code in your app as well.
Check the IP origin. If the IP didn't access your server before, you can assume that the AJAX call is not coming from your website. This doesn't work if a) the user switches IPs while on your site or after timing out, or b) another website sends a fake HTTP request first before using your AJAX API.
Another idea would be to send JavaScript instead of a JSON object. The JavaScript should contain all the logic needed to update your website, and of course it could check that the page is your own (window.location). That has some disadvantages, though: more work for you, a higher traffic load, and it can be broken anyway.
I don't think it's a bad thing actually. Another website could have just as easily scraped the info from your website.
If by "stealing" you mean getting some content from your website (using HTTP GET), that's more or less the same problem as hot-linking. You could have some basic protection technique using the HTTP Referer header (it can be worked around, but it works in most cases).
The other problem you have (making sure the requests come from your application) has to do with CSRF (Cross-Site Request Forgery). There are various protection mechanisms against this, mostly based on embedding tokens in forms, for example.
You could potentially combine the two approaches, although the real protection against getting the content would come from user authentication (otherwise, the other site could also get the page from which you're delivering those tokens and proxy it).
(In addition, techniques that rely on remembering the IP address would probably not work well in the whole web architecture: it might cause problems if you get a pool of proxy servers or if the client is a mobile device that may change IP address between various requests, which would be perfectly legitimate.)