What is a regular expression that can be used to determine whether a string is an XSS (cross-site scripting) security risk?
That depends on the context in which that string is being used.
For instance, if the string is being printed out as part of an HTML page, then the special HTML characters <, >, ", and ' can potentially be XSS risks.
If it's being passed around via JSON, then ' and " could potentially be XSS risks.
If it's being included in SQL statements (which it really shouldn't be, at least not directly - use parameterized queries), then things like ; and backticks may be an issue.
Et cetera.
There can never be a bulletproof function to stop all XSS, and a regular expression isn't the best choice. XSS is highly dependent on where on the page the string appears, and limiting characters such as " ' < > is a good start, but by no means a comprehensive solution. Even after stopping these characters there are MANY other ways of exploiting XSS. To name a few, there are malicious hrefs: javascript:alert(/xss/), and injection of event handlers: onload=alert(/xss/), neither of which will be stopped if you filter for the four characters listed.
HTMLPurifier is made up of literally thousands of regular expressions, and it gets bypassed all the time.
Look for any unencoded < characters in html generated from user data. Without any < characters, there can be no nasty html injected into your site.
If you want to allow for user-generated formatting, then limit the allowed html to a subset. It's going to be impossible to check this with regular expressions, so I recommend a good html parser instead.
I am trying to add form validation to my HTML site in order to prevent XSS injection attacks.
I am using a simple JavaScript form validator, genvalidator_v4.js, that allows me to use regular expressions to determine what is allowed in a text box. I am trying to write one that would prevent "<" or ">" or any other tags that could be used in this kind of attack, but still allow alphanumeric characters, punctuation, and other special characters.
Any ideas? Also open to other methods of preventing xss attacks but I am very inexperienced in this area so please keep it as simple as possible.
You are trying to blacklist dangerous input. That's very tricky: it's easy to get wrong because of the sheer number of tokens that could be dangerous.
Thus, the following two practices are recommended instead:
Escape everything read from the database before outputting it on a web page. If you correctly HtmlEncode everything (your language of choice surely has a library method for that; see the sketch below), it doesn't matter if a user entered <script>/* do something evil */</script> and that code got stored in your database. Correctly encoded, this will just be printed verbatim and do no harm.
If you still want to filter input (which might be useful as an additional layer of security), whitelists are generally safer than blacklists. So, instead of saying that < is harmful, you say that letters, digits, punctuation, etc. are safe. What exactly is safe depends on what type of field you are filtering.
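For instance, in C# that library method is WebUtility.HtmlEncode (or HttpUtility.HtmlEncode in older ASP.NET code); a minimal sketch, reusing the evil input from above:

using System;
using System.Net;

class EncodeDemo
{
    static void Main()
    {
        string userInput = "<script>/* do something evil */</script>";
        // Encoding turns the markup into inert text the browser displays rather than executes.
        string safe = WebUtility.HtmlEncode(userInput);
        Console.WriteLine(safe); // &lt;script&gt;/* do something evil */&lt;/script&gt;
    }
}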
I am trying to create a regex expression for client side validation (before server side validation which will also take place) to prevent sql/script injection i.e something like this - which does not work
(script)|(<)|(>)|(%3c)|(%3e)|(SELECT) |(UPDATE) |(INSERT) |(DELETE)|(GRANT) |(REVOKE)|(UNION)|(<)|(>)
What is the correct format for this (above) expression so I can get it to work?
e.g. my email checker is like this:
/^[^\\W][a-zA-Z0-9\\_\\-\\.]+([a-zA-Z0-9\\_\\-\\.]+)*\\@[a-zA-Z0-9_]+(\\.[a-zA-Z0-9_]+)*\\.[a-zA-Z]{2,4}$/
Oh and if you can think of anything else to add please "shout".
You cannot in any way even hinder SQL injection attempts on the client side. It is a terrible, terrible idea which cannot help you, but may cause headaches for genuine users. It will not stop anyone who has a chance of actually exploiting an SQLi.
As far as the regex goes, you need to add the / at the beginning and end, as in your email example, to denote that it is a regex. Also, I think the design is flawed, as it still allows many injection vectors: for example, the dreaded single quote ', -- comments, and others. It doesn't even start to cover all the built-in functions of your RDBMS that might be knocking around. And an attacker will often make use of, e.g., SELECT statements already on your server side, so removing them probably wouldn't help either.
Your best defense is to use parameterized queries on the server side (e.g. pg_prepare for PHP and Postgres).
Generally, SQL injection occurs in strings passed to the parameters of a SQL command such as INSERT, UPDATE, DELETE, or SELECT. This regular expression detects whether there is any inline or block comment in the SQL command.
/[\t\r\n]|(--[^\r\n]*)|(\/\*[\w\W]*?(?=\*)\*\/)/gi
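If it helps, here is how that pattern might be applied in C# (sample inputs invented; the /gi flags become RegexOptions.IgnoreCase, and the g flag is implicit when using IsMatch or Matches):

using System;
using System.Text.RegularExpressions;

class SqlCommentCheck
{
    // The same pattern as above, minus the /.../gi delimiters.
    static readonly Regex SqlComments = new Regex(
        @"[\t\r\n]|(--[^\r\n]*)|(/\*[\w\W]*?(?=\*)\*/)",
        RegexOptions.IgnoreCase);

    static void Main()
    {
        Console.WriteLine(SqlComments.IsMatch("Jim & Sons"));                 // False
        Console.WriteLine(SqlComments.IsMatch("x' OR 1=1 -- hide the rest")); // True
        Console.WriteLine(SqlComments.IsMatch("1 /* block comment */ = 1"));  // True
    }
}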
Only a-z, A-Z, or 0-9, between 4 and 8 characters:
^([a-z]|[A-Z]|[0-9]){4,8}$
SQL injection and escaping sound magical to many people, like a shield against some mysterious danger, but don't be scared of it: there is nothing magical about it. Escaping is just the way to enable special characters to be processed by the query.
So don't invent new magical shields and ways to protect against the magical injection danger! Instead, try to understand how escaping of the input works.
It's more common to escape the control characters like ` and '; that way one can still enter SQL code into the database. Say it is on a CMS and I'm adding an article about SQL injection: I want to use those words and characters without triggering an injection. Looking at it, this seems to be for something HTML-based, so convert < and > to &lt; and &gt;; that will sanitize any and all HTML tags while still allowing HTML demo content to be displayed.
As already said, this should all be done server-side, as the data comes into the system.
I'm trying to come up with a validation expression to prevent users from entering html or javascript tags into a comment box on a web page.
The following works fine for a single line of text:
^(?!.*(<|>)).*$
...but it won't allow any newline characters because of the dot (.). If I go with something like this:
^(?!.*(<|>))(.|\s)*$
it will allow multiple lines but the expression only matches '<' and '>' on the first line. I need it to match any line.
This works fine:
^[-_\s\d\w"'\.,:;#/&\$\%\?!@\+\*\\(\)]{0,4000}$
but it's ugly and I'm concerned that it's going to break for some users because it's a multi-lingual application.
Any ideas? Thanks!
Note that your RE prevents users from entering < and >, in any context. "2 > 1", for example. This is very undesirable.
Rather than trying to use regular expressions to match HTML (which they aren't well suited to do), simply escape < and > by transforming them to &lt; and &gt;. Alternatively, find a package for your language of choice that implements whitelisting to allow a limited subset of HTML, or that supports its own markup language (I hear Markdown is nice).
As for "." not matching newline characters, some regexp implementations support a flag (usually "m" for "multi-line" and "s" for "single line"; the latter causes "." to match newlines) to control this behavior.
The first two are basically equivalent to /^[^<>]*$/, except this one works on multiline strings. Any reason why you didn't write the RE that way?
So, I looked into it and there is a .NET 'Singleline' option for regular expressions that causes "." to also match the newline character. Unfortunately, this isn't available in the ASP.Net RegularExpressionValidator. As far as I can see, there's no way to make something like ^(?!.*(<\w+>)).*$ work on a multi-line textbox without doing server-side validation.
I took your advice and went the route of escaping the tags on the server side. This requires setting the page's ValidateRequest directive to 'false', but in this particular instance that isn't a big deal because the comment box is really the only thing to worry about.
I am using a regular expression to convert plain text URL to clickable links.
#(https?://([-\w\.]+)+(:\d+)?(/([\w/_\.-]*(\?\S+)?)?)?)#
However, sometimes in the body of the text, URLs are enumerated one per line with a semicolon at the end. The real URLs do not contain any ";".
http://www.aaa.org/pressdetail.asp?PRESS_REL_ID=275;
http://www.aaa.org/pressdetail.asp?PRESS_REL_ID=123;
http://www.aaa.org/pressdetail.asp?PRESS_REL_ID=124
Is it permitted to have a semicolon (;) in a URL, or can the semicolon be considered a marker of the end of a URL? How would that fit into my regular expression?
A semicolon is reserved and should only be used for its special purpose (which depends on the scheme).
RFC 1738, Section 2.2:
Many URL schemes reserve certain characters for a special meaning: their appearance in the scheme-specific part of the URL has a designated semantics. If the character corresponding to an octet is reserved in a scheme, the octet must be encoded. The characters ";", "/", "?", ":", "@", "=" and "&" are the characters which may be reserved for special meaning within a scheme. No other characters may be reserved within a scheme.
The W3C encourages CGI programs to accept ; as well as & in query strings (i.e. to treat ?name=fred&age=50 and ?name=fred;age=50 the same way). This is supposed to be because & has to be encoded as &amp; in HTML, whereas ; doesn't.
The semi-colon is a legal URI character; it belongs to the sub-delimiter category: http://www.ietf.org/rfc/rfc3986.txt
However, the specification states that whether the semicolon is legitimate for a specific URI depends on the scheme or producer of that URI. So, if the site using those links doesn't allow semicolons, then they're not valid for that particular case.
Technically, a semicolon is a legal sub-delimiter in a URL string; plenty of source material is quoted above including http://www.ietf.org/rfc/rfc3986.txt.
And some sites do use it for legitimate purposes, though its use is likely site-specific (i.e., only for use with that site) because its usage has to be defined by the site using it.
In the real world, however, the primary use for semicolons in URLs is to hide a virus or phishing URL behind a legitimate URL.
For example, sending someone an email with this link:
http:// www.yahoo.com/junk/nonsense;0200.0xfe.0x37.0xbf/malicious_file/
will result in the Yahoo! link (www.yahoo.com/junk/nonsense) being ignored, because even though it is legitimate (i.e., properly formed), no such page exists. But the second link (0200.0xfe.0x37.0xbf/malicious_file/) presumably exists,* and the user will be directed to the malicious_file page, whereupon one's corporate IT manager will get a report and one will likely get a pink slip.
And before all the nay-sayers get their dander up, this is exactly how the new Facebook phishing problem works. The names have been changed to protect the guilty as usual.
*No such page actually exists to my knowledge. The link shown is for purposes of this discussion only.
http://www.ietf.org/rfc/rfc3986.txt covers URLs and what characters may appear in unencoded form. Given that URLs containing semicolons work properly in browsers, your code should support them.
Yes, semicolons are valid in URLs. However, if you're plucking them from relatively unstructured prose, it's probably safe to assume a semicolon at the end of a URL is meant as sentence punctuation. The same goes for other sentence-punctuation characters like periods, question marks, quotes, etc.
If you're only interested in URLs with an explicit http[s] protocol, and your regex flavor supports lookbehinds, this regex should suffice:
https?://[\w!#$%&'()*+,./:;=?@\[\]-]+(?<![!,.?;:"'()-])
After the protocol, it simply matches one or more characters that may be valid in a URL, without worrying about structure at all. But then it backs off as many positions as necessary until the final character is not something that might be sentence punctuation.
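In .NET, which supports lookbehind, a quick sketch using the sample URLs from the question:

using System;
using System.Text.RegularExpressions;

class UrlExtractor
{
    static void Main()
    {
        string text = "http://www.aaa.org/pressdetail.asp?PRESS_REL_ID=275;\n" +
                      "http://www.aaa.org/pressdetail.asp?PRESS_REL_ID=124";
        var url = new Regex(@"https?://[\w!#$%&'()*+,./:;=?@\[\]-]+(?<![!,.?;:""'()-])");
        foreach (Match m in url.Matches(text))
            Console.WriteLine(m.Value); // the trailing ';' on the first URL is excluded
    }
}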
Quoting RFCs is not all that helpful in answering this question, because you will encounter URLs with semicolons (and commas for that matter). We had a Regex that did not handle semicolons and commas, and some of our users at NutshellMail complained because URLs containing them do in fact exist in the wild. Try building a dummy URL in Facebook or Twitter that contains a ';' or ',' and you will see that those two services encode the full URL properly.
I replaced the Regex we were using with the following pattern (and have tested that it works):
string regex = @"((www\.|(http|https|ftp|news|file)+\:\/\/)[_.a-zA-Z0-9-]+\.[a-zA-Z0-9\/_:@=.+?,##%&~_-]*[^.|\'|\# |!|\(|?|,| |>|<|;|\)])";
This Regex came from http://rickyrosario.com/blog/converting-a-url-into-a-link-in-csharp-using-regular-expressions/ (with a slight modification)
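For completeness, a sketch of how that pattern plugs into Regex.Replace for the original link-conversion use case (the anchor-tag replacement string is my own illustration, not from the blog post):

using System;
using System.Text.RegularExpressions;

class Linkifier
{
    static void Main()
    {
        string regex = @"((www\.|(http|https|ftp|news|file)+\:\/\/)[_.a-zA-Z0-9-]+\.[a-zA-Z0-9\/_:@=.+?,##%&~_-]*[^.|\'|\# |!|\(|?|,| |>|<|;|\)])";
        string text = "Check out http://www.nutshellmail.com/ for details.";
        // $1 re-inserts the captured URL as both the href and the link text.
        Console.WriteLine(Regex.Replace(text, regex, "<a href=\"$1\">$1</a>"));
    }
}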
I've run into a few problems using a C# regex to implement a whitelist of allowed characters on web inputs. I am trying to avoid SQL injection and XSS attacks. I've read that whitelists of the allowable characters are the way to go.
The inputs are people names and company names.
Some of the problems are:
Company names that have ampersands. Like "Jim & Sons". The ampersand is important, but it is risky.
Unicode characters in names (we have Asian customers, for example) who enter their names using their own character sets. I need to whitelist all of these.
Company names can have all kinds of slashes, like "S/A" and "S\A". Are those risky?
I find myself wanting to allow almost every character after seeing all the data that is in the DB already (and being entered by new users).
Any suggestions for a good whitelist that will handle these (and other) issues?
NOTE: It's a legacy system, so I don't have control of all the code. I was hoping to reduce the number of attacks by preventing bad data from getting into the system in the first place.
This SO thread has a lot of good discussion on protecting yourself from injection attacks.
In short:
Filter your input as best as you can
Escape your strings using framework based methods
Parameterize your sql statements
In your case, you can limit the name field to a small character set. The company field will be more difficult, and you need to consider and balance your users' need for freedom of entry with your need for site security. As others have said, trying to write your own custom sanitization methods is tricky and risky. Keep it simple and protect yourself through your architecture - don't simply rely on strings being "safe", even after sanitization.
EDIT:
To clarify: if you're trying to develop a whitelist, it's not something that the community can hand out, since it's entirely dependent on the data you want. But let's look at an example of a regex whitelist, perhaps for names. Say I've whitelisted A-Z, a-z, and the space character.
Regex reWhiteList = new Regex("^[A-Za-z ]+$");
That checks to see if the entire string is composed of those characters. Note that a string with a number, a period, a quote, or anything else would NOT match this regex and thus would fail the whitelist.
if (reWhiteList.IsMatch(strInput))
{
    // it's OK, proceed to step 2
}
else
{
    // it's not OK; inform the user they've entered invalid characters and have them try again
}
Hopefully this helps some more! With names and company names you'll have a tough-to-impossible time developing a rigorous pattern to check against, but you can do a simple allowable character list, as I showed here.
Do not try to sanitize names, especially with regex!
Just make sure that you are properly escaping the values and saving them safely in your DB, and then escaping them again when presenting them in HTML.
Company names might have almost any kind of symbol in them, so I don't know how well this is going to work for you. I'd concentrate on shielding yourself directly from various attacks, not hoping that your strings are "naturally" safe.
(Certainly they can have ampersands, colons, semicolons, exclamation points, hyphens, percent signs, and all kinds of other things that could be "unsafe" in a host of contexts.)
Why filter or regex the data at all, or even escape it? You should be using bind variables to access the database.
This way, the customer could enter something like: anything' OR 'x'='x
And your application doesn't care, because your SQL code doesn't parse the variable: it isn't set when you prepare the statement. I.e.
'SELECT count(username) FROM usertable WHERE username = ? and password = ?'
then you execute that code with those variables set.
This works in PHP, Perl, J2EE applications, and so on.
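The same idea in C# with ADO.NET looks like this (connection string and table are placeholders; SQL Server uses named parameters instead of ?):

using System.Data.SqlClient;

class LoginCheck
{
    static int CountMatchingUsers(string connStr, string username, string password)
    {
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(
            "SELECT count(username) FROM usertable WHERE username = @username AND password = @password",
            conn))
        {
            // The values travel separately from the SQL text, so an input like
            // anything' OR 'x'='x stays a literal string and is never parsed as SQL.
            cmd.Parameters.AddWithValue("@username", username);
            cmd.Parameters.AddWithValue("@password", password);
            conn.Open();
            return (int)cmd.ExecuteScalar();
        }
    }
}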
I think writing your own regex is not a good idea: it would be very hard to get right. Try leveraging the existing functions of your web framework; there are lots of resources on the net. Since you say C#, I assume you are using ASP.NET, so try the following article:
How To: Protect From Injection Attacks in ASP.NET
This is my current regex WHITELIST for a company name. Any input outside of these characters is rejected:
"^[0-9\p{L} '\-\.,\/\&]{0,50}$"
The \p{L} matches any Unicode "letter", so accented and Asian characters are whitelisted.
The \& is a bit problematic because it potentially allows JavaScript special characters.
The \' is problematic if not using parameterized queries, because of SQL injection.
The \- could allow "--", also a potential for SQL injection if not using parameterized queries.
Also, the \p{L} won't work client-side, so you can't use it in the ASP.NET regular expression validator without disabling client-side validation:
EnableClientScript="False"
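For what it's worth, this is how that whitelist behaves when checked server-side (sample inputs invented for illustration):

using System;
using System.Text.RegularExpressions;

class CompanyNameWhitelist
{
    static readonly Regex CompanyName = new Regex(@"^[0-9\p{L} '\-\.,\/\&]{0,50}$");

    static void Main()
    {
        Console.WriteLine(CompanyName.IsMatch("Jim & Sons"));                // True
        Console.WriteLine(CompanyName.IsMatch("Ångström S/A"));              // True: \p{L} covers accented letters
        Console.WriteLine(CompanyName.IsMatch("<script>alert(1)</script>")); // False
    }
}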