Puzzle on DOM-based XSS

I have a page scanned by IBM AppScan, and it reports 2 potential DOM-based XSS issues. But after lengthy analysis and Google searching I could not figure out what the risk in this code is. Could you help me identify the issue?
window.location = window.location.href + '&a=b' //compose a new url and redirect
var width = $(window).width(); //$ is jQuery

The jQuery framework is prone to DOM-based XSS because many jQuery functions/methods will interpret a string passed to them as HTML (and therefore as script). Your code on line 2 could be vulnerable to XSS if window were a string; since it is the window object, this call should be safe.
Your code on line 1 reads window.location.href, which could contain injected JavaScript. This possibly tainted data could lead to XSS if it were interpreted as code or added to the DOM, which does not appear to be the case here.
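For illustration, here is a sketch (mine, not from the scan report) contrasting your two lines with a pattern that actually is dangerous:
// Safe in this form: the tainted value only becomes part of a URL the browser
// navigates to, and location.href already starts with http(s).
window.location = window.location.href + '&a=b';

// Dangerous: the tainted value reaches an HTML-interpreting sink. jQuery's $()
// and .html() treat strings containing '<' as markup, so a crafted hash such as
// #<img src=x onerror=alert(1)> would execute.
var fragment = window.location.hash.substring(1);
$('#output').html(fragment); // sink: .html() interprets markup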

Related

XSS via CSS selector

Burp Suite detected DOM-based XSS in the following code.
var C=window.location.hash.substring(1);
$("div[data-hash="+C+"]").length>0
Can anyone suggest an XSS payload for this?
The only thing I could think of was executing JS via CSS expressions in old Internet Explorer versions but my few attempts have all failed.
It should be noted though that an attacker can introduce errors in your page by supplying a payload that causes an invalid CSS selector:
var C=window.location.hash.substring(1);
$("div[data-hash="+C+"]").length>0
where C contains garbage like '"foo"', resulting in an invalid CSS selector.
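If you just need the check to keep working with arbitrary hash values, one option (a sketch, not the only fix) is to stop building a selector string from untrusted input and compare attribute values directly instead:
var c = window.location.hash.substring(1);
var matched = $('div[data-hash]').filter(function () {
    return $(this).attr('data-hash') === c; // plain string comparison, nothing parsed as a selector
});
if (matched.length > 0) {
    // ...
}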

XSS DOM vulnerable

I tested a site for vulnerabilities (folder /service-contact) and a possible DOM-based XSS issue came up (using Kali Linux, Vega and XSSer). However, I tried to manually test the URL with an 'alert' script to make sure it's vulnerable. I used
www.babyland.nl/service-contact/alert("test")
No alert box/pop-up was shown; only the HTML code showed up in the contact form box.
I am not sure I used the right code (I'm a rookie) or interpreted the result correctly. The server is Apache, using JavaScript/JS.
Can you help?
Thanks!
This is not vulnerable to XSS. Whatever you write in the URL ends up in the form section below (Vraag/opmerking), and the double quotes (") are escaped. If you try another payload like <script>alert(/xss/)</script>, that also won't work, because the input is neither reflected nor stored; you will only see it as text in Vraag/opmerking. Don't rely on online scanners; test manually. For DOM-based XSS, check sinks and sources and analyze them.
The tool is right. There is an XSS vulnerability on the site, but the proof-of-concept (PoC) code is wrong. The content of a <textarea> can only be character data (see the <textarea> description on MDN), so your <script>alert("test")</script> is interpreted as text and not as HTML code. But you can close the <textarea> tag and insert the JavaScript code after it.
Here is the working PoC URL:
https://www.babyland.nl/service-contact/</textarea><script>alert("test")</script>
which is rendered as:
<textarea rows="" cols="" id="comment" name="comment"></textarea><script>alert("test")</script></textarea>
A little note on testing for XSS injection: Chrome/Chromium has built-in XSS protection, so this code won't execute there. For manual testing you can use Firefox, or run Chrome with --disable-web-security (see this Stack Overflow question and this one for more information).

Preventing XSS in Node.js / server side javascript

Any idea how one would go about preventing XSS attacks in a Node.js app? Any libs out there that handle removing JavaScript from hrefs, onclick attributes, etc. in POSTed data?
I don't want to have to write a regex for all that :)
Any suggestions?
I've created a module that bundles the Caja HTML Sanitizer
npm install sanitizer
http://github.com/theSmaw/Caja-HTML-Sanitizer
https://www.npmjs.com/package/sanitizer
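A minimal usage sketch, assuming the module exposes Caja's sanitizer as sanitize():
var sanitizer = require('sanitizer');

var dirty = '<p onclick="alert(1)">hi <script>alert(2)</script></p>';
var clean = sanitizer.sanitize(dirty); // strips the script tag and the event handler
console.log(clean);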
Any feedback appreciated.
One of the answers to Sanitize/Rewrite HTML on the Client Side suggests borrowing the whitelist-based HTML sanitizer in JS from Google Caja which, as far as I can tell from a quick scroll-through, implements an HTML SAX parser without relying on the browser's DOM.
Update: Also, keep in mind that the Caja sanitizer has apparently been given a full, professional security review while regexes are known for being very easy to typo in security-compromising ways.
Update 2017-09-24: There is also now DOMPurify. I haven't used it yet, but it looks like it meets or exceeds every point I look for:
Relies on functionality provided by the runtime environment wherever possible. (Important both for performance and to maximize security by relying on well-tested, mature implementations as much as possible.)
Relies on either a browser's DOM or jsdom for Node.JS.
Default configuration designed to strip as little as possible while still guaranteeing removal of javascript.
Supports HTML, MathML, and SVG
Falls back to Microsoft's proprietary, un-configurable toStaticHTML under IE8 and IE9.
Highly configurable, making it suitable for enforcing limitations on an input which can contain arbitrary HTML, such as a WYSIWYG or Markdown comment field. (In fact, it's the top of the pile here)
Supports the usual tag/attribute whitelisting/blacklisting and URL regex whitelisting
Has special options to sanitize further for certain common types of HTML template metacharacters.
They're serious about compatibility and reliability
Automated tests running on 16 different browsers as well as three different major versions of Node.JS.
To ensure developers and CI hosts are all on the same page, lock files are published.
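A usage sketch for Node.js, following the pattern in the DOMPurify README (jsdom supplies the DOM):
const createDOMPurify = require('dompurify');
const { JSDOM } = require('jsdom');

const window = new JSDOM('').window;
const DOMPurify = createDOMPurify(window);

const dirty = '<img src=x onerror=alert(1)><p>hello</p>';
const clean = DOMPurify.sanitize(dirty); // onerror handler removed, harmless markup kept
console.log(clean);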
All usual techniques apply to node.js output as well, which means:
Blacklists will not work.
You're not supposed to filter input in order to protect HTML output. It will not work or will work by needlessly malforming the data.
You're supposed to HTML-escape text in HTML output.
I'm not sure if Node.js comes with a built-in for this, but something like this should do the job:
function htmlEscape(text) {
    return text.replace(/&/g, '&amp;').
        replace(/</g, '&lt;'). // it's not necessary to escape >
        replace(/"/g, '&quot;').
        replace(/'/g, '&#39;');
}
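For example (my illustration, not part of the original answer):
var comment = '<script>alert("xss")</script>';
var safe = '<p>' + htmlEscape(comment) + '</p>';
// safe === '<p>&lt;script>alert(&quot;xss&quot;)&lt;/script></p>'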
I recently discovered node-validator by chriso.
Example
get('/', function (req, res) {
    // Sanitize user input
    req.sanitize('textarea').xss(); // No longer supported
    req.sanitize('foo').toBoolean();
});
XSS Function Deprecation
The XSS function is no longer available in this library.
https://github.com/chriso/validator.js#deprecations
You can also look at ESAPI. There is a javascript version of the library. It's pretty sturdy.
In newer versions of the validator module you can use the following to help prevent XSS attacks:
var validator = require('validator');
var escaped_string = validator.escape(someString);
Try out the npm module strip-js. It performs the following actions:
Sanitizes HTML
Removes script tags
Removes attributes such as "onclick", "onerror", etc. which contain JavaScript code
Removes "href" attributes which contain JavaScript code
https://www.npmjs.com/package/strip-js
Update 2021-04-16: xss is a module used to filter input from users to prevent XSS attacks.
Sanitize untrusted HTML (to prevent XSS) with a configuration specified by a Whitelist.
Visit https://www.npmjs.com/package/xss
Project Homepage: http://jsxss.com
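A usage sketch based on the js-xss documentation:
var xss = require('xss');

var html = xss('<script>alert("xss")</script><a href="#" onclick="steal()">link</a>');
console.log(html);
// the script tag is escaped and the onclick attribute is stripped,
// per the default whitelist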
You should try the npm library insane.
https://github.com/bevacqua/insane
I use it in production and it works well. It is very small (around 3 kB gzipped).
Sanitizes HTML
Removes all attributes or tags that evaluate JS
Lets you whitelist attributes or tags that you don't want sanitized
The documentation is very easy to read and understand.
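A usage sketch based on the insane README:
var insane = require('insane');

// default options strip <script> and event-handler attributes
console.log(insane('<div onclick="alert(1)">hi<script>alert(2)</script></div>'));

// an explicit whitelist can also be supplied
console.log(insane('<span>keep</span><em>not whitelisted</em>', { allowedTags: ['span'] }));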

Does HTML encoding prevent XSS security exploits?

By simply converting the following ("the big 5"):
& -> &amp;
< -> &lt;
> -> &gt;
" -> &quot;
' -> &#39;
Will you prevent XSS attacks?
I think you need to whitelist at a character level too, to prevent certain attacks, but the following answer states that this overcomplicates matters.
EDIT: This page details that it does not prevent more elaborate injections, does not help with "out of range characters = question marks" when outputting Strings to Writers with single-byte encodings, and does not prevent character reinterpretation when the user switches the browser encoding on the displayed page. In essence, just escaping these characters seems to be quite a naive approach.
Will you prevent XSS attacks?
If you do this escaping at the right time(*) then yes, you will prevent HTML injection, which is the most common form of XSS attack. It is not just a matter of security: you need to do the escapes anyway so that strings containing those characters display correctly. The issue of security is a subset of the issue of correctness.
I think you need to whitelist at a character level too, to prevent certain attacks
No. HTML-escaping will render every one of those attacks as inactive plain text on the page, which is what you want. The range of attacks on that page is demonstrating different ways to do HTML-injection, which can get around the stupider “XSS filters” that some servers deploy to try to prevent common HTML-injection attacks. This demonstrates that “XSS filters” are inherently leaky and ineffective.
There are other forms of XSS attack that might or might not affect you, for example bad schemes on user-submitted URIs (javascript: et al), injection of code into data echoed into a JavaScript block (where you need JSON-style escaping) or into stylesheets or HTTP response headers (again, you always need the appropriate form of encoding when you drop text into another context; you should always be suspicious if you see anything with unescaped interpolation like PHP's "string $var string").
Then there's file upload handling, Flash origin policy, UTF-8 overlong sequences in legacy browsers, and application-level content generation issues; all of these can potentially lead to cross-site scripting. But HTML injection is the main one that every web application will face, and most PHP applications get wrong today.
(*: which is when inserting text content into HTML, and at no other time. Do not HTML-escape form submission data in $_POST/$_GET at the start of your script; this is a common wrong-headed mistake.)
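To make the timing point concrete, here is a sketch with a hypothetical Express-style handler (the route, the escapeHtml helper and the saveComment stub are mine, purely illustrative):
var express = require('express');
var app = express();
app.use(express.urlencoded({ extended: false }));

function escapeHtml(text) {
    return String(text)
        .replace(/&/g, '&amp;')
        .replace(/</g, '&lt;')
        .replace(/>/g, '&gt;')
        .replace(/"/g, '&quot;')
        .replace(/'/g, '&#39;');
}

function saveComment(comment) { /* hypothetical storage stub */ }

app.post('/comment', function (req, res) {
    var comment = req.body.comment;                  // keep the raw text; do not escape here
    saveComment(comment);
    res.send('<p>' + escapeHtml(comment) + '</p>');  // escape only at the point of HTML output
});

app.listen(3000);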
OWASP has a great cheat sheet.
Golden Rules
Strategies
Etc.
https://github.com/OWASP/CheatSheetSeries/blob/master/cheatsheets/Cross_Site_Scripting_Prevention_Cheat_Sheet.md
Countermeasures depend on the context the data is inserted into. If you insert the data into HTML, replacing the HTML meta characters with escape sequences (i.e. character references) prevents inserting HTML code.
But if you're in another context (e.g. an HTML attribute value that is interpreted as a URL) there are additional meta characters, with different escape sequences, that you have to deal with.
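As a sketch of that second context (mine, illustrative only): a user-supplied URL destined for an href attribute needs its scheme checked before any HTML escaping, because escaping alone does nothing against javascript: URLs:
function safeHref(userUrl) {
    var parsed;
    try {
        parsed = new URL(userUrl, 'https://example.invalid/'); // base only so relative URLs parse
    } catch (e) {
        return '#';
    }
    if (parsed.protocol !== 'http:' && parsed.protocol !== 'https:') {
        return '#'; // rejects javascript:, data:, vbscript:, ...
    }
    return userUrl; // still HTML-escape this when writing it into the attribute
}

console.log(safeHref('javascript:alert(1)'));      // '#'
console.log(safeHref('https://example.com/page')); // passed through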

Markdown and XSS

OK, so I have been reading about Markdown here on SO and elsewhere, and the steps between user input and the DB are usually given as:
convert markdown to html
sanitize html (w/whitelist)
insert into database
but to me it makes more sense to do the following:
sanitize markdown (remove all tags - no exceptions)
convert to html
insert into database
Am I missing something? This seems to me to be pretty nearly XSS-proof.
Please see this link:
http://michelf.com/weblog/2010/markdown-and-xss/
> hello <a name="n"
> href="javascript:alert('xss')">*you*</a>
Becomes
<blockquote>
<p>hello <a name="n"
href="javascript:alert('xss')"><em>you</em></a></p>
</blockquote>
∴ you must sanitize after converting to HTML.
There are two issues with what you've proposed:
I don't see a way for your users to be able to format posts. You took advantage of Markdown to provide nice numbered lists, for example. In the proposed no-tags-no-exceptions world, I'm not seeing how the end user would be able to do such a thing.
Considerably more important: when using Markdown as the "native" formatting language and whitelisting the other available tags, you are limiting not just the input side of the world, but the output as well. In other words, if your display engine expects Markdown and only allows whitelisted content out, then even if (God forbid) somebody gets to the database and injects some nasty malware-laden code into a bunch of posts, the actual site and its users are protected, because you are sanitizing it upon display as well.
There are some good resources on the web about output sanitization:
Sanitizing user data: Where and how to do it
Output sanitization (One of my clients, who shall remain nameless and whose affected system was not developed by me, was hit with this exact worm. We have since secured those systems, of course.)
BizTech: Best Practices: Never heard of XSS?
Well certainly removing/escaping all tags would make a markup language more secure. However the whole point of Markdown is that it allows users to include arbitrary HTML tags as well as its own forms of markup(*). When you are allowing HTML, you have to clean/whitelist the output anyway, so you might as well do it after the markdown conversion to catch everything.
*: It's a design decision I don't agree with at all, and one that I think has not proven useful at SO, but it is a design decision and not a bug.
Incidentally, step 3 should be ‘output to page’; this normally takes place at the output stage, with the database containing the raw submitted text.
insert into database
convert markdown to html
sanitize html (w/whitelist)
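In Node.js terms, that ordering might look like the following sketch (the marked and sanitize-html packages are my choice here; any converter plus whitelist-based sanitizer works the same way):
const { marked } = require('marked');          // markdown -> HTML
const sanitizeHtml = require('sanitize-html'); // whitelist-based HTML sanitizer

function renderComment(rawMarkdown) {
    const html = marked.parse(rawMarkdown);    // 1. convert; the output may contain raw HTML
    return sanitizeHtml(html, {                // 2. sanitize the final HTML
        allowedTags: ['p', 'em', 'strong', 'a', 'ul', 'ol', 'li', 'blockquote', 'code', 'pre'],
        allowedAttributes: { a: ['href'] },
        allowedSchemes: ['http', 'https', 'mailto'] // blocks javascript: hrefs
    });
}

// the database stores the raw submitted markdown; sanitize at output time
console.log(renderComment('hello <a href="javascript:alert(\'xss\')">*you*</a>'));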
In Perl:
use Text::Markdown ();
use HTML::StripScripts::Parser ();

my $hss = HTML::StripScripts::Parser->new(
    {
        Context        => 'Document',
        AllowSrc       => 0,
        AllowHref      => 1,
        AllowRelURL    => 1,
        AllowMailto    => 1,
        EscapeFiltered => 1,
    },
    strict_comment => 1,
    strict_names   => 1,
);

# convert the markdown first, then filter the resulting HTML
$hss->filter_html(Text::Markdown::markdown(shift))
convert markdown to html
sanitize html (w/whitelist)
insert into database
Here, the assumptions are
Given dangerous HTML, the sanitizer can produce safe HTML.
The definition of safe HTML will not change, so if it is safe when I insert it into the DB, it is safe when I extract it.
sanitize markdown (remove all tags - no exceptions)
convert to html
insert into database
Here the assumptions are
Given dangerous markdown, the sanitizer can produce markdown that when converted to HTML by a different program will be safe.
The definition of safe HTML will not change, so if it is safe when I insert it into the DB, it is safe when I extract it.
The markdown sanitizer has to know not just about dangerous HTML and dangerous markdown, but how the markdown->HTML converter does its job. That makes it more complex, and more likely to be wrong than the simpler unsafeHTML->safeHTML function above.
As a concrete example, "remove all tags" assumes you can identify tags, and would not work against UTF-7 attacks. There might be other encoding attacks out there that render this assumption moot, or there might be a bug that causes the markdown->HTML program to convert (full-width '<', exotic white-space characters stripped by markdown, SCRIPT) into a <script> tag.
The most secure would be:
sanitize markdown (remove all tags - no exceptions)
convert markdown to HTML
sanitize HTML
insert into a DB column marked risky
re-sanitize HTML every time you fetch that column from the DB
That way, when you update your HTML sanitizer you get protection against any newly discovered attacks. This is often inefficient, but you can get pretty good security by storing a timestamp alongside the inserted HTML, so that you can tell which rows might have been inserted while an attack that bypasses your sanitizer was known.
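A sketch of that re-sanitize-on-read idea (the storage calls are hypothetical, and sanitize-html stands in for whatever your current sanitizer is):
const sanitizeHtml = require('sanitize-html');

function storeComment(db, generatedHtml) {
    // keep the risky content together with when it arrived
    db.insert({ body: generatedHtml, insertedAt: Date.now() }); // hypothetical storage call
}

function renderComment(row) {
    // the current sanitizer runs at fetch time, so upgrading it also protects old rows
    return sanitizeHtml(row.body);
}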