First apologies: This feels to me like a "dumb" question, and I expect I'll soon regret even asking it ...but I can't figure it out at the moment as my mind seems to be stuck in the wrong rut. So please bear with me and help me out:
My understanding is that "Same Origin" is a pain in the butt for web services, and in response CORS loosens the restrictions just enough to make web services work reasonably, yet still provides decent security to the user. My question is exactly how does CORS do this?
Suppose the user visits website A, which provides code that makes web service requests to website Z. But I've broken into and subverted website Z, and turned it into an attack site. I quickly made it respond positively to all CORS requests (e.g., by adding the header Access-Control-Allow-Origin: * to every response). Soon the user's computer is subverted by my attack from Z.
It seems to me the user never visited Z directly, knows nothing about Z's existence, and never "approved" Z. And it seems to me that even after the break-in becomes known, there's nothing website A can do to stop it (short of going offline itself :-). Wouldn't security concerns mandate A certifying Z, rather than Z certifying A? What am I missing?
I was investigating this as well, as my thought process was akin to yours. Per my new understanding: CORS doesn't provide security, it circumvents it to provide functionality. Browsers in general don't allow cross-origin requests; if they did, a script on shady.com could access bank.com using a cookie on your machine and perform actions on bank.com with that cookie, impersonating you. To prevent this, bank.com would not mark its APIs as CORS-enabled, so that when shady.com's script begins the HTTP request, the browser itself blocks the request.
So the same-origin policy protects users from themselves, because they don't know what auth cookies are lying around; CORS allows a server that owns resources on behalf of the user to mark APIs as accessible from other sites' scripts, which causes the browser to relax its own cross-origin protection for those APIs.
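To make that "marking" concrete, here is a minimal sketch (Express-style; the route, origin, and balance value are made-up assumptions, not anything from the posts above) of a server deciding which origin, if any, may read one of its responses:

```typescript
// Minimal sketch (Express, hypothetical endpoint): the server only exposes this
// response to scripts running on https://app.bank.com. Any other origin's script
// can still cause the request to be sent, but the browser withholds the response.
import express from "express";

const app = express();

app.get("/api/balance", (req, res) => {
  const origin = req.get("Origin");
  if (origin === "https://app.bank.com") {
    // Only now will the browser let a cross-origin script read the body.
    res.set("Access-Control-Allow-Origin", origin);
    res.set("Access-Control-Allow-Credentials", "true");
  }
  res.json({ balance: 1234.56 });
});

app.listen(3000);
```

Any other page can still cause the request to be sent; the header only controls whether that page's script gets to see the response.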
(anyone that understands this better, please add or correct as needed!)
CORS does nothing for security. It does allow someone selling web fonts to decide which websites get easy access to their fonts though. That's pretty much the only use case.
The user is just as unaware as they were before the introduction of CORS. And please remember that cross-origin requests used to work before CORS (people often complain that you have to shim jQuery to get CORS support in IE... but in IE you could just make the request and get the response without any extra effort; it just worked).
Generally speaking, the trust model is backwards. As others said, you have implied trust by referencing some other site... so give me the freaking data!
CORS protects the website that receives the request (Z in your example) against the one that makes the request (A in your example) by telling the user's browser who is or is not allowed to see the response of the request.
When a JavaScript application asks the browser to make an HTTP request to an origin that is different from its own, the browser does not know whether there is mutual agreement between the two origins to make such calls. Certainly, if the request comes from origin A then A agrees (and A is responsible to its users if Z is malicious), but does Z, the recipient, agree? The only way for the browser to know is to ask Z, and it does that by actually making the request. Unless Z explicitly allows A to receive the response, the browser will not let A's application read it.
You are right that the only effect of CORS is to relax the same-origin policy. Before the same-origin policy existed, cross-origin requests were permitted, and the browser would automatically include the cookies it had for the destination; that is, it would send an authenticated request to Z. This means that, without the same-origin policy, A could browse Z just as if it were the user, see its data, and so on. The same-origin policy fixes this very severe security vulnerability, but because some services still legitimately need to make cross-origin requests, CORS was created.
Note that CORS does not prevent the request from being sent, so if A's JS app sends a request to Z ordering it to send all the user's money to some account, Z will receive this request with all the cookies in it. This is called a Cross-Site Request Forgery (CSRF). Interestingly, the main defence against this type of attack is based on CORS. It consists of requiring some secret value in the request (a "CSRF token") that can only be learned by reading a response from Z, which the browser only permits for Z's own pages or for origins on Z's CORS allow list, so A cannot obtain it. Nowadays, same-site cookies can be used as well; they are easier to manage but don't work cross-origin.
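A minimal sketch of that token defence, in Express-style TypeScript (the routes, cookie name, and header name are my assumptions, not anything prescribed above):

```typescript
// Sketch of the CSRF-token defence described above (hypothetical routes/names).
// The token can only be learned by a page the browser allows to read this
// response (same origin, or an origin on the CORS allow list); it must then be
// echoed back on state-changing requests. SameSite is the simpler alternative
// mentioned at the end.
import express from "express";
import cookieParser from "cookie-parser";
import crypto from "crypto";

const app = express();
app.use(express.json());
app.use(cookieParser());

app.get("/csrf-token", (req, res) => {
  const token = crypto.randomBytes(32).toString("hex");
  // SameSite=Strict: the browser won't attach this cookie to cross-site requests at all.
  res.cookie("csrf", token, { httpOnly: true, secure: true, sameSite: "strict" });
  // Only a page permitted (by SOP/CORS) to read this response can learn the token.
  res.json({ token });
});

app.post("/transfer", (req, res) => {
  // A forged cross-site request still arrives with the session cookies, but the
  // attacker's page cannot know the token, so the check fails.
  if (req.get("X-CSRF-Token") !== req.cookies.csrf) {
    return res.status(403).send("CSRF check failed");
  }
  res.send("transfer accepted");
});
```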
I'm working on the legal portion of my site, the Privacy Policy in particular. I've done the research and found that nearly all the answers to my question (below) are generalized.
Question: Do cookies "collect" data from user browsers, or do cookies "request" then receive data from user browsers?
This seems to be a very important distinction. Do I put into my privacy policy that my site "collects" data from my users, or that it "requests" data from my users?
My understanding of the core functionality is that cookies request data from the user's browser or about browser activity. Users control how their browser will respond (or handle cookies) in their browser settings. If users have the ultimate control over handling "responses" to cookies, is it proper for website privacy policies to state that they use cookies to collect browser data? Isn't it more accurate to state something like: "We use cookies to request data from your browser. Depending on how you have configured your settings, your response to our request may impact your experience." Or something along those lines.
For years the way I understood the phrase "cookies collect browser data" is that we (websites) force code (the cookie) onto your browser that opens a sieve for all your activity to flow back to us. But this isn't the case at all. Cookies actually make a "request" (i.e., they ask) for the user's permission first, and depending on how the user has set up their browser settings, the cookie request is honored or denied.
I'm trying to stay away from the term "collect" as a general matter. I think it's improperly used and leaves the wrong impression on users.
Has anyone else thought about this? Am I missing something?
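For what it's worth, the mechanics the question describes come down to two plain HTTP headers. A minimal sketch (Express-style; the cookie name and value are made up) of a site "requesting" that the browser store and later return a value:

```typescript
// Sketch (hypothetical cookie): the site asks the browser to store a value via a
// Set-Cookie response header; whether it is stored, and whether it comes back on
// later requests in the Cookie header, is up to the browser and the user's
// settings. The cookie itself runs no code and does not siphon activity anywhere.
import express from "express";
import cookieParser from "cookie-parser";

const app = express();
app.use(cookieParser());

app.get("/", (req, res) => {
  // The "request": Set-Cookie in the response. The browser may honour or ignore it.
  res.cookie("theme", "dark", { maxAge: 30 * 24 * 60 * 60 * 1000 });
  // On later visits, the browser decides whether to send the cookie back at all.
  const theme = req.cookies.theme ?? "default";
  res.send(`Using theme: ${theme}`);
});
```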
Cookies are stored on the user's system/computer, or you could say in the browser. Cookies are used for authentication, preferences, advertisements, performance and analytics, and security purposes. Yes, you need to mention that in your privacy policy; some organizations also add a separate cookie policy.
The following should be mentioned about cookies in policies for standard web applications:
The application may use and store cookies on your system/computer, which helps it better know your preferences when you visit the website later. Cookies can also be used for authentication/session checks, advertising, performance, analytics and research, and security purposes. //Remove whichever does not apply to your site.
Everyone says CORS doesn't do anything to defend against CSRF attacks. This is because CORS blocks outside domains from accessing (reading) resources on your domain -- but doesn't prevent the request from being processed. So evil sites can send state-changing DELETE requests, without caring that they can't read back the result.
That's all well and good.
Except for pre-flight CORS.
In this case, the browser inspects the request BEFORE it is sent and asks the target (with an OPTIONS pre-flight) whether it's allowed. If it's not, the actual request is never sent.
So the DELETE request that the CSRF attacker tries to send fails the pre-flight check, and thus is rejected. The CSRF attack fails.
What am I missing here?
Pre-flight requests don't prevent CSRF in general. For example, not all cross-domain AJAX calls generate a pre-flight request; plain POSTs don't. There may be specific cases where pre-flight requests do indeed help reduce the risk, though.
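A rough browser-side sketch of that distinction (the URL is made up): the form-style POST is a "simple request" and goes out immediately, cookies attached, while the DELETE is pre-flighted first.

```typescript
// Browser-side sketch (hypothetical URL). The first call is a CORS "simple
// request": no pre-flight, it is sent right away with cookies, and only the
// response is withheld from the page. The second uses DELETE, so the browser
// sends an OPTIONS pre-flight first and drops the real request if the target
// doesn't approve it.
const target = "https://victim.example/api/thread/23454";

// No pre-flight: a form-encoded POST counts as a simple request.
fetch(target, {
  method: "POST",
  credentials: "include",
  headers: { "Content-Type": "application/x-www-form-urlencoded" },
  body: "action=delete",
});

// Pre-flighted: DELETE is not a simple method, so OPTIONS goes first.
fetch(target, {
  method: "DELETE",
  credentials: "include",
});
```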
Another problem is the same as with checking the Referer/Origin headers. While it is not possible for an attacker to override Referer or Origin in plain JavaScript on a malicious website, it may be possible to do so using a suitable browser plugin, like an old version of Flash for instance. If a browser plugin allows that, the attacker might be able to send cross-origin requests without a pre-flight. So you don't want to rely on pre-flight requests alone.
I help maintain a site that is sold to about 100 clients. We take security pretty seriously and we have a multiple-step login process. One part of the process can be skipped if you have already logged in before and chose to get a cookie. When you log in again and still have that cookie, that step is skipped. Of course, the value in the cookie is random and different for every user.
My boss wants to make it impossible to copy the cookie to another computer. Of course, I've explained that this is not possible, but he still insists it is, by requiring the user agent to remain the same.
"We can then document that we have a “hardened” cookie that is specific to the user’s hardware and software."
Of course, I've explained that spoofing the user agent would be many times easier than spoofing the cookie value, and compared it to putting a band-aid on a padlock. Not to mention that any opportunity you have to copy the cookie would allow you to copy the user agent as well. He doesn't care.
It doesn't bother me to require the same user agent, but I have some integrity and a problem working on something being sold with such a lie about its security.
I'm a professional, not a grunt. I wouldn't design a bridge that supports one weight when I know it will be advertised as supporting a higher weight.
Am I being reasonable?
Suggest an alternative, since cookies are not intended to provide security. Among their documented weaknesses (these points come from RFC 6265, the cookie specification):
* An active network attacker can overwrite Secure cookies from an insecure channel, disrupting their integrity.
* Transport-layer encryption, such as that employed in HTTPS, is insufficient to prevent a network attacker from obtaining or altering a victim's cookies because the cookie protocol itself has various vulnerabilities.
* A server that uses cookies to authenticate users can suffer security vulnerabilities because some user agents let remote parties issue HTTP requests from the user agent (e.g., via HTTP redirects or HTML forms). When issuing those requests, user agents attach cookies even if the remote party does not know the contents of the cookies, potentially letting the remote party exercise authority at an unwary server.
* Cookies do not provide integrity guarantees for sibling domains (and their subdomains). For example, consider foo.example.com and bar.example.com. The foo.example.com server can set a cookie with a Domain attribute of "example.com" (possibly overwriting an existing "example.com" cookie set by bar.example.com), and the user agent will include that cookie in HTTP requests to bar.example.com. In the worst case, bar.example.com will be unable to distinguish this cookie from a cookie it set itself. The foo.example.com server might be able to leverage this ability to mount an attack against bar.example.com (see the sketch after this list).
* Cookies rely upon the Domain Name System (DNS) for security. If the DNS is partially or fully compromised, the cookie protocol might fail to provide the security properties required by applications.
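As an illustration of the sibling-domain item (Express-style sketch; the hostnames come from the list above, everything else is made up):

```typescript
// Sketch: a response served from foo.example.com plants a cookie scoped to the
// whole registrable domain. The browser will then send it to bar.example.com,
// which cannot tell it apart from a cookie it set itself.
import express from "express";

const app = express();

app.get("/", (req, res) => {
  // Domain=example.com makes the cookie apply to example.com and all subdomains.
  res.cookie("session", "value-chosen-by-foo", { domain: "example.com" });
  res.send("cookie planted for the whole domain");
});
```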
References
Sharing a Session across multiple domains
RFC 7258: Pervasive Monitoring is an Attack
A cookie that has a signature involving a server side "secret" and using the user agent as part of the salt will be more difficult to spoof, than a cookie that does not have the user agent as part of the salt. That is indisputable. First of all, it takes time to figure out how the salt is created - and a lot of "attackers" will be discouraged immediately.
Yes, but it is not more secure...
Your boss has a goal: to be able to tell his customers that the cookie is "hardened". You shouldn't assume that your boss does not understand the implications.
The fact is, it won't affect your application's security at all in either direction. It will, however, make the cookie slightly more difficult to move from one machine to another, and it will make the cookie stop working if the client updates their browser or Flash version or changes their user agent in other ways.
Conclusion:
If everything else is equal, I consider the user agent salt in cookies as better than no user agent salt, by a tiny amount. I guess you could implement the thing faster than the time you spent asking this question.
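For reference, a minimal sketch of the scheme being discussed (Node crypto; the names and secret are hypothetical): the User-Agent string is mixed into an HMAC over the random session id.

```typescript
// Sketch of the "user agent in the signature" idea. The cookie value is an HMAC,
// keyed with a server-side secret, over the session id plus the User-Agent
// string. As discussed above, this only breaks when the UA changes; anyone able
// to copy the cookie can usually copy the UA too.
import crypto from "crypto";

const SERVER_SECRET = process.env.COOKIE_SECRET ?? "change-me";

function signCookie(sessionId: string, userAgent: string): string {
  const mac = crypto
    .createHmac("sha256", SERVER_SECRET)
    .update(`${sessionId}|${userAgent}`)
    .digest("hex");
  return `${sessionId}.${mac}`;
}

function verifyCookie(cookie: string, userAgent: string): boolean {
  const [sessionId, mac] = cookie.split(".");
  if (!sessionId || !mac) return false;
  const expected = signCookie(sessionId, userAgent);
  // Constant-time comparison avoids leaking the result through timing.
  return (
    expected.length === cookie.length &&
    crypto.timingSafeEqual(Buffer.from(expected), Buffer.from(cookie))
  );
}
```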
I have a RESTful API which has annotations like @Consumes(MediaType.APPLICATION_JSON) - in that case, would a CSRF attack still be possible on such a service? I've been tinkering with securing my services with CSRFGuard on the server side or having a double submit from the client side. However, when I tried to POST requests using a FORM with enctype="text/plain", it didn't work. The technique is explained here. This works if I have MediaType.APPLICATION_FORM_URLENCODED in my consumes annotation. The content negotiation is useful when I'm using POST/PUT/DELETE verbs, but GET is still accessible, which might need looking into.
Any suggestions or inputs would be great, also please let me know if you need more info.
Cheers
JAX-RS is designed for creating REST APIs, which are supposed to be stateless.
Cross-Site Request Forgery is NOT a problem for stateless applications.
The way Cross-Site Request Forgery works is that someone tricks you into clicking on a link, or opening a link in your browser, that directs you to a site where you are logged in, for example some online forum. Since you are already logged in on that forum, the attacker can construct a URL, say something like this: someforum.com/deletethread?id=23454
That forum program, being badly designed, will recognize you based on the session cookie, confirm that you have the capability to delete the thread, and will in fact delete it.
All because the program authenticated you based on the session cookie (or even based on a "remember me" cookie).
With a RESTful API there are no cookies and no state maintained between requests, so there is no need to protect against session hijacking.
The way you usually authenticate with a RESTful API is by sending some additional headers. If someone tricks you into clicking a URL that points to the RESTful API, the browser is not going to send those extra headers, so there is no risk.
In short: if the REST API is designed the way it is supposed to be, stateless, then there is no risk of cross-site request forgery and no need for CSRF protection.
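To illustrate the "additional headers" point, a browser-side sketch (the URL and token are made up): the credential only travels when the application code explicitly attaches it, so a link or auto-submitted form the attacker controls carries nothing.

```typescript
// Sketch: header-based authentication for a stateless API (hypothetical URL and
// token). Unlike a cookie, the Authorization header is only present when the
// calling code adds it explicitly, so a forged link or form sends no credential.
async function deleteThread(): Promise<void> {
  const response = await fetch("https://api.example.com/v1/threads/23454", {
    method: "DELETE",
    headers: {
      // Token obtained at login and stored by the application, not by the browser.
      Authorization: "Bearer eyJhbGciOiJIUzI1NiJ9.example.token",
    },
  });
  console.log(response.status);
}
```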
Adding another answer, as Dmitri's answer mixes server-side state and cookies.
An application is not stateless if your server stores user information in memory across multiple requests. This decreases horizontal scalability, as you need to find the "correct" server for every request.
Cookies are just a special kind of HTTP header. They are often used to identify a user's session, but not every cookie means server-side state. The server could also use the information from the cookie without starting a session. On the other hand, using other HTTP headers does not necessarily mean that your application is automatically stateless; if you store user data in your server's memory, it isn't.
The difference between cookies and other headers is the way they are handled by the browser. Most important for us is that the browser will resend them on every subsequent request. This is problematic if someone tricks a user into making a request they don't want to make.
Is this a problem for an API which consumes JSON? Yes, in two cases:
The attacker makes the user submit a form with enctype=text/plain: URL-encoded content is not a problem because the result can't be valid JSON. text/plain is a problem if your server interprets the content not as plain text but as JSON. If your resource is annotated with @Consumes(MediaType.APPLICATION_JSON) you should not have a problem, because it won't accept text/plain and should return status 415 (see the sketch after these two cases). (Note that JSON may become a valid enctype one day, and then this won't hold any more.)
The attacker makes the user submit an AJAX request: the Same-Origin Policy prevents AJAX requests to other domains, so you are safe as long as you don't disable this protection by using CORS headers such as Access-Control-Allow-Origin: *.
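Here is a rough analogue of the first case's content-type check, sketched in Express rather than JAX-RS (an assumption on my part, not the asker's stack), to show where the 415 comes from:

```typescript
// Sketch: a JSON-only endpoint. A forged <form enctype="text/plain"> POST
// arrives with Content-Type: text/plain and is refused with 415 before any
// handler logic runs, mirroring what a strict @Consumes check does in JAX-RS.
import express from "express";

const app = express();
app.use(express.json()); // only parses bodies sent as application/json

app.post("/api/orders", (req, res) => {
  if (!req.is("application/json")) {
    return res.status(415).send("Unsupported Media Type");
  }
  res.status(201).json({ received: req.body });
});
```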
http://en.wikipedia.org/wiki/Same_origin_policy
The same origin policy prevents a script from one site talking to another site. Wiki says it's an "important security concept", but I'm not clear on what threat it prevents.
I understand that cookies from one site should not be shared with another, but that can be (and is) enforced separately.
The CORS standard http://en.wikipedia.org/wiki/Cross-Origin_Resource_Sharing provides a legitimate system for bypassing the same origin policy. Presumably it doesn't allow whatever threat the same origin policy is designed to block.
Looking at CORS I'm even less clear who is being protected from what. CORS is enforced by the browser so it doesn't protect either site from the browser. And the restrictions are determined by the site the script wants to talk to, so it doesn't seem to protect the user from either site.
So just what is the same origin policy for?
The article @EricLaw mentions, "Same Origin Policy Part 1: No Peeking", is good.
Here's a simple example of why we need the 'same origin policy':
It's possible to display other webpages in your own webpage by using an iframe (an "inline frame" places another HTML document in a frame). Let's say you display www.yourbank.com. The user enters their bank information. If you can read the inner HTML of that page (which requires using a script), you can easily read the bank account information, and boom. Security breach.
Therefore, we need the same origin policy to make sure one webpage can't use a script to read the information of another webpage.
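A small sketch of that scenario (the bank URL is illustrative): the page can embed the frame, but the same-origin policy stops its script from reading the framed document.

```typescript
// Browser-side sketch: embedding the bank page works, reading it does not.
const frame = document.createElement("iframe");
frame.src = "https://www.yourbank.com/";
document.body.appendChild(frame);

frame.addEventListener("load", () => {
  try {
    // Cross-origin access to the framed document throws a SecurityError
    // instead of exposing the bank page's DOM.
    const doc = (frame.contentWindow as Window).document;
    console.log(doc.body.innerHTML);
  } catch (err) {
    console.log("Blocked by the same-origin policy:", err);
  }
});
```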
The purpose of the same origin policy is to avoid the threat of a malicious site M reading information from trusted site A using the authority (i.e. authorization cookies) of a user of A. It is a browser policy, not a server policy or an HTTP standard, and is meant to mitigate the risk of another browser policy—sending cookies from site A when contacting site A.
Note that there's nothing to stop M from accessing A outside of a browser. It can send as many requests as it wants. But it won't be doing so with the authority of an unknowing user of A, which is what might otherwise happen in the browser.
Also note that the policy prevents the M page from reading from A. It does not protect the A server from the effects of the request. In particular, the browser will allow cross-domain POSTS—cookies and all—from M to A. That threat is called Cross-Site Request Forgery; it is not mitigated by the Same Origin Policy and so additional measures must be provided to protect against it.
As an example, it prevents Farmville from checking the balance of your banking account. Or, even worse, messing with the form you are about to send (after entering the PIN/TAN) so they get all the money.
CORS is mainly a standard for web sites which are sure they do not need this kind of protection. It basically says "it's OK for a script from any web site to talk to me, no security can possibly be broken". So it really does allow things which would be forbidden by the SOP, in places where the protection is not needed and cross-domain web sites are beneficial. Think of mashups.