http://en.wikipedia.org/wiki/Same_origin_policy
The same origin policy prevents a script from one site from talking to another site. Wikipedia says it's an "important security concept", but I'm not clear on what threat it prevents.
I understand that cookies from one site should not be shared with another, but that can be (and is) enforced separately.
The CORS standard http://en.wikipedia.org/wiki/Cross-Origin_Resource_Sharing provides a legitimate system for bypassing the same origin policy. Presumably it doesn't re-enable whatever threat the same origin policy is designed to block.
Looking at CORS I'm even less clear who is being protected from what. CORS is enforced by the browser so it doesn't protect either site from the browser. And the restrictions are determined by the site the script wants to talk to, so it doesn't seem to protect the user from either site.
So just what is the same origin policy for?
The article @EricLaw mentions, "Same Origin Policy Part 1: No Peeking", is good.
Here's a simple example of why we need the 'same origin policy':
It's possible to display other webpages in your own webpage by using an iframe (an "inline frame" places another HTML document in a frame). Let's say you display www.yourbank.com. The user enters their bank information. If you can read the inner HTML of that page (which requires using a script), you can easily read the bank account information, and boom. Security breach.
Therefore, we need the same origin policy to make sure one webpage can't use a script to read the information of another webpage.
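As a hedged sketch (the bank URL is made up), here is roughly what such an attack attempt looks like, and where the browser steps in:

    // Hypothetical attacker page framing a banking site and trying to
    // read the framed document with a script.
    const frame = document.createElement("iframe");
    frame.src = "https://www.yourbank.example/"; // cross-origin document
    document.body.appendChild(frame);

    frame.addEventListener("load", () => {
      // For a cross-origin frame, contentDocument is simply null...
      console.log(frame.contentDocument); // null
      try {
        // ...and touching the framed document directly throws.
        const doc = frame.contentWindow!.document; // SecurityError
        console.log(doc.body.innerHTML); // never reached
      } catch (e) {
        console.log("Blocked by the same origin policy:", e);
      }
    });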
The purpose of the same origin policy is to avoid the threat of a malicious site M reading information from trusted site A using the authority (i.e. authorization cookies) of a user of A. It is a browser policy, not a server policy or an HTTP standard, and is meant to mitigate the risk of another browser policy—sending site A's cookies along with any request to site A.
Note that there's nothing to stop M from accessing A outside of a browser. It can send as many requests as it wants. But it won't be doing so with the authority of an unknowing user of A, which is what might otherwise happen in the browser.
Also note that the policy prevents the M page from reading from A. It does not protect the A server from the effects of the request. In particular, the browser will allow cross-domain POSTs—cookies and all—from M to A. That threat is called Cross-Site Request Forgery; it is not mitigated by the Same Origin Policy, so additional measures must be provided to protect against it.
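To make the distinction concrete, here is a sketch of such a cross-domain POST (the bank URL and field names are made up); the browser sends it, cookies and all, and the SOP only keeps M from reading the response:

    // Script on malicious page M forging a POST to trusted site A.
    // Absent SameSite cookie restrictions, the browser attaches the
    // user's cookies for A to this request.
    const form = document.createElement("form");
    form.method = "POST";
    form.action = "https://www.yourbank.example/transfer"; // hypothetical
    const amount = document.createElement("input");
    amount.type = "hidden";
    amount.name = "amount";
    amount.value = "10000";
    form.appendChild(amount);
    document.body.appendChild(form);
    form.submit(); // the request goes out; only the response is hidden from M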
As an example, it prevents Farmville from checking the balance of your bank account. Or, even worse, messing with the form you are about to send (after entering the PIN/TAN) so that they get all the money.
CORS is mainly a standard for web sites which are sure they do not need this kind of protection. It basically says "it's OK for a script from any web site to talk to me, no security can possibly be broken". So it really does allow things which would be forbidden by the SOP, in places where the protection is not needed and cross-domain web sites are beneficial. Think of mashups.
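For illustration, a minimal sketch of a server opting out of the protection in that way, assuming Node's built-in http module (the port and payload are arbitrary):

    import http from "node:http";

    // A public API declaring "any origin may read my responses".
    http
      .createServer((req, res) => {
        // This CORS header tells the browser it may lift the SOP read-block.
        res.setHeader("Access-Control-Allow-Origin", "*");
        res.setHeader("Content-Type", "application/json");
        res.end(JSON.stringify({ public: true }));
      })
      .listen(8080);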
I'm working on the legal portion of my site, the Privacy Policy in particular. I've done the research and found that nearly all the answers to my question (below) are generalized.
Question: Do cookies "collect" data from user browsers, or do cookies "request" then receive data from user browsers?
This seems to be a very important distinction. Do I put into my privacy policy that my site "collects" data from my users or do I "request" data from my users.
My understanding of the core functionality is that cookies request data about the user's browser or browser activity. Users control how their browser will respond (or handle cookies) in their browser settings. If users have ultimate control over how "responses" to cookies are handled, is it proper for website privacy policies to state that they use cookies to collect browser data? Isn't it more accurate to state something like: "We use cookies to request data from your browser. Depending on how you have configured your settings, your response to our request may impact your experience." Or something along those lines.
For years, the way I understood the phrase "cookies collect browser data" is that we (websites) force code (the cookie) onto your browser, which opens a sieve for all your activity to flow back to us. But this isn't the case at all. Cookies actually make a "request" (i.e., ask) for the user's permission first, and depending on how the user has set up their browser settings, the cookie request is honored or denied.
I'm trying to stay away from the term "collect" as a general matter. I think it's improperly used and leaves the wrong impression on users.
Has anyone else thought about this? Am I missing something?
Cookies are stored on the user's system/computer, or you could say by the browser. Cookies are used for authentication, preferences, advertisements, performance and analytics, and security purposes. Yes, you need to mention that in the privacy policy; some organizations also add a separate cookie policy.
The following should be mentioned about cookies in the policies of standard web applications:
The application may use and store cookies on your system/computer, which can help it better know your preferences when you visit the website later. Cookies can also be used for authentication/session checks, advertising, performance, analytics and research, and security purposes. // Remove whichever does not apply to your site.
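To make the "request, then receive" nature concrete, here is a minimal sketch assuming Node's built-in http module (the cookie name and port are arbitrary). The server can only ask the browser to store a value and read back whatever the browser chooses to send on later requests:

    import http from "node:http";

    http
      .createServer((req, res) => {
        // Whatever the browser sent back voluntarily (empty if blocked):
        console.log("Cookie header:", req.headers.cookie ?? "(none)");
        // A request that the browser's settings are free to honor or ignore:
        res.setHeader("Set-Cookie", "prefs=dark; Max-Age=86400; Path=/");
        res.end("ok");
      })
      .listen(8080);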
Hi, I'll try to keep it brief; hopefully one of you knows the answer and I'm not duplicating content.
At the moment I'm using a bucket to take the strain off my server and upload large user files to Amazon. These are then served back to users when they want them via expiring URLs. When a URL expires, the user is sent an XML response saying access is denied, and I want to show them a custom error page instead.
Here: Create my own error page for Amazon S3
and here: http://docs.aws.amazon.com/AmazonS3/latest/dev/CustomErrorDocSupport.html
It says you must enable web hosting on the bucket for custom error pages...
So the question is: if I do this and then just grant all users permission to access only the custom error pages, will this mess anything up with my current usage scenario?
Or is it as simple as everything else staying the same? The docs seem vague and I don't want to mess up my current system...
Sorry if this is a noob question, but everyone with the same problem in my research seems happy with the 'Enable hosting' answer and I just want to be sure...
Cheers all
Ed
It's not possible to combine the two things you're trying to combine: query string authentication and custom error pages.
S3 buckets can be made accessible by two different sets of endpoints, each providing a different set of front-end behaviors.
The REST endpoints provide authentication and private content (and SSL), while the Web site endpoints provide custom error (and index) documents, but the objects must be public in order to be accessible, since the web site endpoint does not support authentication (or SSL).
The differences are explained here:
http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteEndpoints.html#WebsiteRestEndpointDiff
In some environments, I use an intermediate reverse proxy, hosted in EC2, acting as a front end for S3 (which gives me the additional capability of rewriting portions of the request headers and capturing access logs in real time). I suspect this is also the most viable mechanism for providing "friendly" errors. My proxy already does this if the URL is completely missing elements like Signature= (since that can't possibly be anything but an error), but I have not yet implemented anything to capture 403 Forbidden responses and style them up.
I did do some preliminary testing to add a Link: header to the error response (in the proxy), in an attempt to convince the browser to load an XSL stylesheet, but so far that has not proven viable.
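For what it's worth, here is a minimal sketch of that intermediate-proxy idea on Node (the bucket hostname is made up, and a real deployment would also need header rewriting, request signing, logging, and so on):

    import http from "node:http";
    import https from "node:https";

    const BUCKET_HOST = "my-bucket.s3.amazonaws.com"; // hypothetical

    // Forward each GET to S3; replace S3's XML 403 with a friendly page.
    http
      .createServer((clientReq, clientRes) => {
        const upstream = https.request(
          { host: BUCKET_HOST, path: clientReq.url ?? "/", method: "GET" },
          (s3Res) => {
            if (s3Res.statusCode === 403) {
              clientRes.writeHead(403, { "Content-Type": "text/html" });
              clientRes.end("<h1>Sorry, this link has expired.</h1>");
              s3Res.resume(); // discard the XML error body
            } else {
              clientRes.writeHead(s3Res.statusCode ?? 500, s3Res.headers);
              s3Res.pipe(clientRes);
            }
          }
        );
        upstream.end();
      })
      .listen(8080);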
I've read the AWS docs about using the S3 + CloudFront + signed URL architecture to securely serve private content to public users. However, it doesn't seem secure enough to me. Let me describe it in steps:
Step 1: user logs in to my website.
Step 2: user clicks download (pdf, images, etc.)
Step 3: my web server generates a signed URL (expiry time: 30 secs) and redirects the user to it, and the download happens (see the sketch below).
Step 4: now, even though it times out after 30 secs, there is still a chance that a malicious sniffer on my network will be able to capture the signed URL and download my user's private content.
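To make step 3 concrete, here is a minimal sketch using the AWS SDK for JavaScript (v2 is assumed; the bucket and key names are hypothetical):

    import AWS from "aws-sdk";

    const s3 = new AWS.S3();

    // Generate a GET URL that S3 will reject once the signature expires.
    const url = s3.getSignedUrl("getObject", {
      Bucket: "my-private-bucket",    // hypothetical
      Key: "users/123/statement.pdf", // hypothetical
      Expires: 30,                    // seconds until expiry
    });
    // Redirect the logged-in user to `url` to start the download.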
Any thoughts on this?
The risks you anticipate exist no matter what mechanism you use to "secure" anything on the web, if you aren't also using HTTPS to encrypt your users' interactions with the web site.
Without encryption, the login information, or perhaps cookies conveying the user's authentication state are also being sent in cleartext, and anything the user downloads can be directly captured without need for the signed link... making concern about capturing a download link via sniffing seem somewhat uninteresting compared to the more significant risk of general and overall insecurity that exists in such a setup.
On the other hand, if your site is using SSL, then when you deliver the signed URL to the user, there's a reasonable expectation that it will be hidden from snooping by the encryption... and, similarly, if the link to S3 also uses HTTPS, the SSL on that new connection is established before the browser transmits any information over the wire that would be discoverable by sniffing.
So, although it seems correct that there are potential security issues involved with this mechanism, I would suggest that a valid overall approach to security for user interactions should reduce the implications of any S3 signed URL-specific concerns down to a level comparable to any other mechanism allowing a browser to request a resource based on possession of a set of credentials.
First, apologies: this feels to me like a "dumb" question, and I expect I'll soon regret even asking it... but I can't figure it out at the moment, as my mind seems to be stuck in the wrong rut. So please bear with me and help me out:
My understanding is that "Same Origin" is a pain in the butt for web services, and in response CORS loosens the restrictions just enough to make web services work reasonably, yet still provides decent security to the user. My question is exactly how does CORS do this?
Suppose the user visits website A, which provides code that makes web service requests to website Z. But I've broken into and subverted website Z, and made it into an attack site. I quickly made it respond positively to all CORS requests (Header set Access-Control-Allow-Origin "*"). Soon the user's computer is subverted by my attack from Z.
It seems to me the user never visited Z directly, knows nothing about Z's existence, and never "approved" Z. And it seems to me that, even after the break-in becomes known, there's nothing website A can do to stop it (short of going offline itself :-). Wouldn't security concerns mandate A certifying Z, rather than Z certifying A? What am I missing?
I was investigating this as well, as my thought process was akin to yours. Per my new understanding: CORS doesn't provide security, it circumvents it to provide functionality. Browsers in general don't allow cross-origin requests; if you go to shady.com, and there is a script there that tries to access bank.com using a cookie on your machine, shady.com's script would then be able to perform actions on bank.com using that cookie to impersonate you. To prevent this, bank.com would not mark its APIs as CORS-enabled, so that when shady.com's script begins the HTTP request, the browser itself prevents the request.
So same-origin protects users from themselves, because they don't know what auth cookies are lying around; CORS allows a server that owns resources on behalf of the user to mark APIs as accessible from other sites' scripts, which causes the browser to relax its own cross-origin protection policy for them.
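A sketch of what that looks like from the browser's side (bank.com standing in for the resource owner; the API path is made up):

    // Script running on shady.com, asking the browser to call bank.com
    // with the user's bank cookies attached.
    fetch("https://bank.com/api/balance", { credentials: "include" })
      .then((res) => res.json())
      .then((data) => console.log("Readable only with CORS approval:", data))
      .catch(() => {
        // Unless bank.com answers with Access-Control-Allow-Origin for
        // shady.com (plus Access-Control-Allow-Credentials: true), the
        // browser withholds the response and the promise rejects.
        console.log("Blocked by the browser's cross-origin policy");
      });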
(anyone that understands this better, please add or correct as needed!)
CORS does nothing for security. It does allow someone selling web fonts to decide which websites get easy access to their fonts though. That's pretty much the only use case.
The user is just as unaware as they were before the introduction of CORS. And please remember that cross-origin requests used to work before CORS (people often complain that you have to shim jQuery to get CORS support in IE... but in IE you could just make the request and get the response without any extra effort... it just worked).
Generally speaking, the trust model is backwards. As others said, you have implied trust by referencing some other site... so give me the freaking data!
CORS protects the website that receives the request (Z in your example) against the one that makes the request (A in your example) by telling the user's browser who is or is not allowed to see the response of the request.
When a JavaScript application asks the browser to make an HTTP request to an origin that's different from its own, the browser does not know if there is mutual agreement between the two origins to make such calls. For sure, if the request comes from origin A then A agrees (and A is responsible to its users if Z is malicious), but does Z, the recipient, agree? The only way for the browser to know is to ask Z, and it does that by actually making the request. Unless Z explicitly allows A to receive the response, the browser will not let A's application read it.
You are right that the only effect of CORS is to relax the same-origin policy. Before that policy existed, cross-origin requests were permitted, and the browser would automatically include the cookies it had for the destination, that is, it would send an authenticated request to Z. This means that, without the same-origin policy, A could browse Z just as if it were the user, see its data, etc. The same-origin policy fixes this very severe security vulnerability, but because some services still need to use cross-origin requests sometimes, CORS was created.
Note that CORS does not prevent the request from being sent, so if A's JS app sends a request to Z ordering it to send all the user's money to some account, Z will receive this request with all the cookies in it. This is called a Cross-Site Request Forgery (CSRF). Interestingly, the main defence against this type of attack is based on CORS. It consists of requiring some secret value in the request (a "CSRF token") that can only be obtained by reading a response from Z, which A cannot do if it's not on Z's authorized list. Nowadays, same-site cookies can be used as well; they are easier to manage but don't work cross-origin.
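As a hedged sketch of that token defence (the single-route layout and in-memory token are made up for illustration), the server embeds a secret in its own pages and rejects state-changing requests that don't echo it back:

    import http from "node:http";
    import crypto from "node:crypto";

    // A cross-origin page can send requests to us but cannot read our
    // pages, so it can never learn this value.
    const token = crypto.randomBytes(16).toString("hex");

    http
      .createServer((req, res) => {
        if (req.method === "GET") {
          // Our own page carries the token in a hidden form field.
          res.setHeader("Content-Type", "text/html");
          res.end(`<form method="POST">
            <input type="hidden" name="csrf" value="${token}">
            <button>Transfer</button></form>`);
        } else {
          let body = "";
          req.on("data", (chunk) => (body += chunk));
          req.on("end", () => {
            // Reject any POST that doesn't echo the token back.
            const ok = new URLSearchParams(body).get("csrf") === token;
            res.statusCode = ok ? 200 : 403;
            res.end(ok ? "transfer accepted" : "missing or bad CSRF token");
          });
        }
      })
      .listen(8080);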
I help maintain a site that is sold to about 100 clients. We take security pretty seriously and we have a multiple step login process. One part of the process can be skipped if you have already logged in before and choose to get a cookie. When you login again and still have that cookie, that step is skipped. Of course, the value in the cookie is random and different for every user.
My boss wants to make it impossible to copy the cookie to another computer. Of course, I've explained that is not possible, but he still insists it is by requiring the user agent to remain the same.
"We can then document that we have a “hardened” cookie that is specific to the user’s hardware and software."
Of course, I've explained that spoofing the user agent would be many times easier than spoofing the cookie value, and compared it to putting a band-aid on a padlock. Not to mention that any opportunity you have to copy the cookie would allow you to copy the user agent as well. He doesn't care.
It doesn't bother me to require the same user agent but I have some integrity and a problem working on something being sold with such a lie about its security.
I'm a professional, not a grunt. I wouldn't design a bridge that supports one weight when I know it will be advertised as supporting a higher weight.
Am I being reasonable?
Suggest an alternative, since cookies are not intended to provide security:
* An active network attacker can overwrite Secure cookies from an insecure channel, disrupting their integrity.
* Transport-layer encryption, such as that employed in HTTPS, is insufficient to prevent a network attacker from obtaining or altering a victim's cookies because the cookie protocol itself has various vulnerabilities.
* A server that uses cookies to authenticate users can suffer security vulnerabilities because some user agents let remote parties issue HTTP requests from the user agent (e.g., via HTTP redirects or HTML forms). When issuing those requests, user agents attach cookies even if the remote party does not know the contents of the cookies, potentially letting the remote party exercise authority at an unwary server.
* Cookies do not provide integrity guarantees for sibling domains (and their subdomains). For example, consider foo.example.com and bar.example.com. The foo.example.com server can set a cookie with a Domain attribute of "example.com" (possibly overwriting an existing "example.com" cookie set by bar.example.com), and the user agent will include that cookie in HTTP requests to bar.example.com. In the worst case, bar.example.com will be unable to distinguish this cookie from a cookie it set itself. The foo.example.com server might be able to leverage this ability to mount an attack against bar.example.com.
* Cookies rely upon the Domain Name System (DNS) for security. If the DNS is partially or fully compromised, the cookie protocol might fail to provide the security properties required by applications.
References
Sharing a Session across multiple domains
RFC 7258: Pervasive Monitoring is an Attack
A cookie that has a signature involving a server-side "secret" and using the user agent as part of the salt will be more difficult to spoof than a cookie that does not have the user agent as part of the salt. That is indisputable. First of all, it takes time to figure out how the salt is created, and a lot of "attackers" will be discouraged immediately.
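A minimal sketch of that scheme, assuming Node's built-in crypto module (the secret value is hypothetical and belongs in server-side configuration, not source):

    import crypto from "node:crypto";

    const SECRET = "server-side-secret"; // hypothetical

    // Sign the cookie value with the user agent folded into the input.
    function sign(value: string, userAgent: string): string {
      return crypto
        .createHmac("sha256", SECRET)
        .update(value + "|" + userAgent)
        .digest("hex");
    }

    // On each request, recompute with the current User-Agent header.
    // A copied cookie fails unless the user agent is copied too.
    function verify(value: string, sig: string, userAgent: string): boolean {
      if (sig.length !== 64) return false; // not a SHA-256 hex digest
      return crypto.timingSafeEqual(
        Buffer.from(sig, "hex"),
        Buffer.from(sign(value, userAgent), "hex")
      );
    }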
Yes, but it is not more secure...
Your boss has a goal: to be able to tell his customers that the cookie is "hardened". You shouldn't assume that your boss does not understand the implications.
The fact is, it won't affect your application's security at all in either direction. It will, however, result in the cookie being slightly more difficult to move from one machine to another, and it will make the cookie stop working if the client updates their browser or Flash version or changes their user agent in other ways.
Conclusion:
If everything else is equal, I consider a user agent salt in cookies better than no user agent salt, by a tiny amount. I guess you could implement the thing faster than the time you spent asking this question.