XSS / SQL Injection: OWASP ESAPI vs. the Microsoft Anti-Cross Site Scripting Library (AntiXSS)

We are looking at using a library to help us detect SQL injection.
We are using sprocs and parameterized statements, but for the sake of this post assume we are only using some sort of library that detects/verifies user input.
What's the best one? Easiest to implement? Easiest to update/manage?
Why prefer one over the other?
On a side note:
I've just started using the OWASP ESAPI library with C#.
I was hoping that there would be more default rules available when validating.
When using the IsValid function, there are only five default rules:
CREDIT_CARD -- Rule name key for the credit card validation rule.
DATE -- Rule name key for the date validation rule.
DOUBLE -- Rule name key for the double validation rule.
INTEGER -- Rule name key for the integer validation rule.
PRINTABLE -- Rule name key for the printable validation rule.
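For context, calling one of these built-in rules looks roughly like this. The Owasp.Esapi namespace and the Esapi.Validator entry point below are my best reading of the ESAPI.NET API, so verify them against the version you're actually shipping:

// Hedged sketch: confirm the namespace and Validator signature against your ESAPI.NET version.
using Owasp.Esapi;

class ValidationDemo
{
    static void Main()
    {
        string input = "4111111111111111";
        // "CREDIT_CARD" is one of the five built-in rule name keys listed above.
        bool ok = Esapi.Validator.IsValid("CREDIT_CARD", input);
        System.Console.WriteLine(ok);
    }
}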
I was hoping that there would be more default rules, in particular for detecting SQL injection in strings.
Thanks

Using stored procs is a pretty big step in the right direction. What I'd add to that is input validation, which it looks like you're trying to do with the OWASP ESAPI library, but which is pretty simple to implement with regexes in most cases. You should find plenty of publicly available patterns for most kinds of untrusted data.
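A minimal whitelist sketch in C# (the field and pattern below are just examples; define one tight pattern per field you accept):

using System.Text.RegularExpressions;

static class InputWhitelist
{
    // Example whitelist: a US ZIP code, nothing else gets through.
    static readonly Regex ZipCode = new Regex(@"^\d{5}(-\d{4})?$");

    public static bool IsValidZip(string input)
    {
        // Reject anything that doesn't match the expected shape outright;
        // don't try to strip "bad" characters out of it.
        return input != null && ZipCode.IsMatch(input);
    }
}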
The other thing you might want to do is apply the principle of least privilege at your data layer. Consider using more than one SQL account, and restrict the account(s) used by publicly facing users to the absolute bare minimum of functions. Since you're using stored procs, try to avoid granting any datareader or datawriter rights if you haven't already.
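To make that concrete, here's a hedged sketch; the account name, proc name, and connection string are hypothetical, and the SQL-side setup would be run once by a DBA rather than from the app:

using System.Data;
using System.Data.SqlClient;

class PartsRepository
{
    // Hypothetical low-privilege account: it can EXECUTE whitelisted procs only,
    // with no db_datareader/db_datawriter membership. Set up on the SQL side with:
    //   CREATE USER web_public FOR LOGIN web_public;
    //   GRANT EXECUTE ON dbo.GetPart TO web_public;
    const string PublicConnStr =
        "Server=.;Database=Parts;User Id=web_public;Password=...;";

    public static DataTable GetPart(int id)
    {
        using (var conn = new SqlConnection(PublicConnStr))
        using (var cmd = new SqlCommand("dbo.GetPart", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;       // stored proc, not inline SQL
            cmd.Parameters.Add("@Id", SqlDbType.Int).Value = id; // parameterized input
            var table = new DataTable();
            new SqlDataAdapter(cmd).Fill(table);                 // adapter opens/closes the connection
            return table;
        }
    }
}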
More info in OWASP Top 10 for .NET developers part 1: Injection

I'm using AntiXSS for validating user input, specifically including protection against SQL injection. I've seen a few attacks, but nothing's gotten through, so it seems to work well for me.
Also - Troy knows what he's talking about - his article on the subject is a really good one :)

Related

tastypie: why reference objects using uris rather than ids?

When creating or editing a model that contains a reference (foreign key) to another object, you have to use the URI of that object. For example, imagine we have two classes: User and Group. Each Group has many Users and each User belongs to exactly one Group.
Then, if we are creating a User, we might send an object that looks like this:
{"name":"John Doe", "group":"/path/to/group/1/"}
instead of
{"name":"John Doe", "group_id":1}
I believe this is related to one of the principles of HATEOAS, but I can't find the rationale for using the resource URI rather than the ID. What are some reasons for using the URI?
(I'm not interested in opinions about which is better, but in any resources that can help me understand this design choice.)
I'll take a stab
The simplest reason is that surrogate keys like your 1 only mean something within the boundaries of your system. They are meaningless outside of the system.
Expanding on this, you could build your app such that there are no limitations on the URLs that identify groups, only on the conformance of the resources returned from those URLs. Someone could add a user in your system that is in a group in the Facebook system, as long as the two systems could negotiate what a group is. There are standards for concepts like "group", and it's not impossible to do such a thing.
This is how most web apps work, e.g. the citation links in a Wikipedia article, which can point to any other article (until the wiki trolls remove them for not being appropriate citation resources...).
Having your app work like this gets you closer to RESTful conformance. Whether or not you consider RESTful architecture a good idea is what you asked us not to discuss, so I won't.
Another often-cited benefit is the ability to completely re-key your setup. You may dismiss this at first... but if you really use 1 for IDs, that's probably an int or long, and you may eventually run out of those. Such an ID also means you have to sequence them appropriately. At some point you may wish you had used a GUID for your IDs, and anyone holding on to your old ID scheme would be considered legacy. URLs give you a little abstraction from this: old URLs remain a legacy thing, but it's easier to identify a legacy URL than a legacy ID (granted, not by much... it's pretty easy to know whether you're getting a long or a GUID, but a bit easier still to see a URL as /old/path/group/1 vs /new/path/group/). Generally, using URLs gives you a little more forward compatibility and room to grow.
I also find that providing URLs as identifiers makes it very easy for a client to retrieve information about that thing; the self link is very convenient. Suppose I have some reference to a group: 1... what good is that? How many UIs are going to show a control that says "add group 1"? You'll want to show more. If you pass around URLs as identifiers of selections, then clients can always retrieve more information about what that selection actually is. In some apps you could pass around the whole object (which would include the ID) to deal with this, but it's nice to just save the URL for later retrieval (think bookmarks). Even more importantly, it's always nice to be able to refresh that object regularly to get its latest state. A self link does that very nicely, and I'd argue it's useful enough to always include... and if an always-included self link identifies the resource, why do you need to also provide your surrogate key as a secondary identifier?
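tastypie itself works this way: its responses carry a resource_uri self link alongside the object's fields, so anything a client holds can be re-fetched or bookmarked. Roughly like this (the exact paths depend on how you register your resources):
{
  "name": "John Doe",
  "group": "/api/v1/group/1/",
  "resource_uri": "/api/v1/user/1/"
}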
One side note: I try to avoid services that require a URL as a parameter. I'd prefer to create the user, then have the service offer up possible group memberships as links, then have the client choose to request those state transitions from non-membership to membership. If you need to "create the user with groups", I'd go with intermediate states prior to the actual submission/commitment of the new user to the service. I've found that the fewer inputs the client has to provide, the easier the application is to use.

ilog jrules and database connection

I am using the IBM ILOG JRules 7.1 trial for a POC. I am using decision tables to check customer registration data.
My decision table rule is: if a customer's state is any of CA, IL, or AL, then set the status to 'eligible'; otherwise mark the customer 'ineligible' for the offer.
In the happy path, I can add the state codes as domain literals and the rule works fine.
But I need to load these domain values dynamically from a database (MySQL) using some IRL code. Has anyone implemented a similar requirement? It would be very helpful if someone could point me in the right direction.
One of the general principles of JRules is that you should call the rules engine with all the necessary information, if possible. From a performance perspective, accessing the database during rule execution isn't a good idea, and you might also lose the ability to use your rule app in a clustered environment. Decisions also become less traceable and reproducible, because it's harder to know what's in your database at any given moment.
Depending on how often your data changes, I suggest you add these values as a second input parameter and retrieve the data before you call the rules engine. The second possibility is to use the dynamic domain plugin to load those values from the database prior to deployment, but you would then have to redeploy the RuleApp every time the data changes. With the dynamic domain plugin you can specify a data provider (e.g. Excel, MySQL, etc.) and populate your BOM with the attributes contained in the database. These dynamic domain values show up as attributes and can be synced from the BOM view in Rule Studio as well as from the team server.
In WODM (the successor of JRules 7.1) this functionality is built in; it's possible that this plugin is not part of the demo and has to be added to 7.1 individually.

Web Application Cross Site Scripting

My website http://www.imayne.com seems to have this issue, verified by McAfee. Can someone show me how to fix it?
It says this:
General Solution:
When accepting user input ensure that you are HTML encoding potentially malicious characters if you ever display the data back to the client.
Ensure that parameters and user input are sanitized by doing the following:
Remove < input and replace with &lt;
Remove > input and replace with &gt;
Remove ' input and replace with &apos;
Remove " input and replace with &#x22;
Remove ) input and replace with &#x29;
Remove ( input and replace with &#x28;
I cannot seem to show the actual code here; this website renders it as something else.
I'm not a web dev, but I can do a little. I'm trying to be PCI compliant.
Let me both answer your question and give you some advice. Preventing XSS properly needs to be done by defining a whitelist of acceptable values at the point of user input, not a blacklist of disallowed values. This needs to happen first and foremost, before you even begin thinking about encoding.
Once you get to encoding, use a library from your chosen framework; don't attempt character substitution yourself. There's more information about this in OWASP Top 10 for .NET developers part 2: Cross-Site Scripting (XSS) (don't worry about it being .NET orientated; the concepts are consistent across all frameworks).
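For instance, in .NET you can lean on the framework's encoder instead of rolling your own substitutions (the AntiXSS call below is from memory of the 4.x API, so double-check it against your version):

// Built into System.Web; encodes <, >, & and " for you.
string safe = System.Web.HttpUtility.HtmlEncode(userInput);

// The AntiXSS library takes the whitelist approach recommended above:
// string safe = Microsoft.Security.Application.Encoder.HtmlEncode(userInput);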
Now for some friendly advice: get some expert support ASAP. You've got a fundamentally obvious reflective XSS flaw in an e-commerce site and based on your comments on this page, this is not something you want to tackle on your own. The obvious nature of this flaw suggests you've quite likely got more obscure problems in the site as well. By your own admission, "you're a noob here" and you're not going to gain the competence required to sufficiently secure a website such as this overnight.
The type of changes you are describing is often accomplished in several languages via an HTML encoding function. What is the site written in? If it is an ASP.NET site, this article may help:
http://weblogs.asp.net/scottgu/archive/2010/04/06/new-lt-gt-syntax-for-html-encoding-output-in-asp-net-4-and-asp-net-mvc-2.aspx
In PHP use this function to wrap all text being output:
http://ch2.php.net/manual/en/function.htmlentities.php
Anyplace you see echo(...) or print(...) you can replace it with:
echo(htmlentities( $whateverWasHereOriginally, ENT_COMPAT));
Take a look at the examples section in the middle of the page for other guidance.
Follow those steps exactly, and you're good to go. The main thing is to ensure that you don't treat anything the user submits to you as code (HTML, SQL, JavaScript, or otherwise). If you fail to properly clean up the inputs, you run the risk of script injection.
If you want to see a trivial example of this problem in action, search for
<span style="color:red">red</span>
on your site, and you'll see that the echoed search term renders in red, because the markup is executed rather than displayed as text.

What does using RESTful URLs buy me?

I've been reading up on REST, and I'm trying to figure out what the advantages to using it are. Specifically, what is the advantage to REST-style URLs that make them worth implementing over a more typical GET request with a query string?
Why is this URL:
http://www.parts-depot.com/parts/getPart?id=00345
Considered inferior to this?
http://www.parts-depot.com/parts/00345
In the above examples (taken from here) the second URL is indeed more elegant looking and concise. But it comes at a cost... the first URL is pretty easy to implement in any web language, out of the box. The second requires additional code and/or server configuration to parse out values, as well as additional documentation and time spent explaining the system to junior programmers and justifying it to peers.
So, my question is, aside from the pleasure of having URLs that look cool, what advantages do RESTful URLs gain for me that would make using them worth the cost of implementation?
The hope is that if you make your URL refer to a noun then there is a better chance that you will implement the HTTP verbs correctly. Beyond that there is absolutely no advantage of one URL versus another.
The reality is that the contents of a URL are completely irrelevant to a RESTful system. It is simply an identifier.
It's not what it looks like, it is what you do with it that is important.
One way of looking at REST:
http://tomayko.com/writings/rest-to-my-wife (which has now been taken down, sadly, but can still be seen on web.archive.org)
So anyway, HTTP—this protocol Fielding and his friends created—is all about applying verbs to nouns. For instance, when you go to a web page, the browser does an HTTP GET on the URL you type in and back comes a web page.
...
Instead, the large majority are busy writing layers of complex specifications for doing this stuff in a different way that isn't nearly as useful or eloquent. Nouns aren't universal and verbs aren't polymorphic. We're throwing out decades of real field usage and proven technique and starting over with something that looks a lot like other systems that have failed in the past. We're using HTTP but only because it helps us talk to our network and security people less. We're trading simplicity for flashy tools and wizards.
One thing that jumps out at me (nice question by the way) is what they describe. The first describes an operation (getPart), the second describes a resource (part 00345).
Also, maybe you couldn't use other HTTP verbs with the first; you'd need a new method such as putPart, for example. The second can be reused with different verbs (like PUT, DELETE, POST) to manipulate the resource. I suppose you're also kinda saying GET twice, once with the verb and again in the method name, so the second is more consistent with the intent of the HTTP protocol.
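To make that concrete, the same resource URL pairs with every verb, while the RPC-style URL needs a new method name per operation:
GET    /parts/00345    read the part
PUT    /parts/00345    replace the part
DELETE /parts/00345    remove the part
POST   /parts          create a new part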
One that I always like as a savvy web user, though it certainly shouldn't be the guiding principle for choosing such a URL scheme, is that those types of URLs are "hackable". In particular, for things like blogs, I can just edit a date or a page number within the URL instead of having to find the "next page" button.
The biggest advantage of REST, IMO, is that it allows a clean way to use the HTTP verbs (which are the most important part of REST services). Actually, using REST means you are using the HTTP protocol and its verbs as designed.
Using your URLs, and imagining you want to post a "part" instead of getting it:
In the first case it would look like this (a GET where you should have used a POST):
http://www.parts-depot.com/parts/postPart?param1=lalala&param2=lelele&param3=lilili
While in a REST context, it should be
http://www.parts-depot.com/parts
and, in the body, (for example) XML like this:
<part>
  <param1>lalala</param1>
  <param2>lelele</param2>
  <param3>lilili</param3>
</part>
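On the wire, that request might look something like this:
POST /parts HTTP/1.1
Host: www.parts-depot.com
Content-Type: application/xml

<part>
  <param1>lalala</param1>
  <param2>lelele</param2>
  <param3>lilili</param3>
</part>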
URI semantics are defined by RFC 2396. The extracts particularly pertinent to this question are 3.3. "Path Component":
The path component contains data, specific to the authority (or the scheme if there is no authority component), identifying the resource within the scope of that scheme and authority.
And 3.4 "Query Component":
The query component is a string of information to be interpreted by the resource.
Note that the query component is not part of the resource identifier; it is merely information to be interpreted by the resource.
As such, the resource being identified by your first example is actually just /parts/getPart. If your intention is that the URL should identify one specific part resource then the first example does not do that, whereas the second one (/parts/00345) does.
So the 'advantage' of the second style of URL is that it is semantically correct, whereas the first one is not.
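Put side by side:
/parts/getPart?id=00345    resource identified: /parts/getPart (the query is just input to it)
/parts/00345               resource identified: the part itself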
"The second requires additional code
and/or server configuration to parse
out values,"
Really? You chose a poor framework, then. My experience is that the RESTful version is exactly the same amount of code. Maybe I just lucked into a cool framework.
"as well as additional documentation
and time spent explaining the system
to junior programmers"
Only once. After they get it, you shouldn't have to explain it again.
"and justifying it to peers."
Only once. After they get it, you shouldn't have to explain it again.
Don't use query/search parts in URLs which aren't queries or searches; if you do, then according to the URL spec you are likely implying something about that resource that you don't really want to.
Use query parts for resources that are a subset of some bigger resource; pagination is a good example of where this can be applied.
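For example, page selection fits naturally in the query component, because each page is a subset of the same parts collection (the parameter names are illustrative):
GET /parts?page=2&per_page=50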

Are there cross-platform tools to write XSS attacks directly to the database?

I've recently found this blog entry on a tool that writes XSS attacks directly to the database. It looks like a terribly good way to scan my applications for weaknesses.
I've tried to run it on Mono, since my development platform is Linux. Unfortunately it crashes with a System.ArgumentNullException deep inside Microsoft.Practices.EnterpriseLibrary and I seem to be unable to find sufficient information about the software (it seems to be a single-shot project, with no homepage and no further development).
Is anyone aware of a similar tool? Preferably it should be:
cross-platform (Java, Python, .NET/Mono, even cross-platform C is ok)
open source (I really like being able to audit my security tools)
able to talk to a wide range of DB products (the big ones are most important: MySQL, Oracle, SQL Server, ...)
Edit: I'd like to clarify my goal: I'd like a tool that directly writes the result of a successful XSS/SQL injection attack into the database. The idea is that I want to check that every place in my app does correct output encoding. Detecting and avoiding the data getting there in the first place is an entirely different thing (and might not be possible when I display data that's written to the DB by a third-party application).
Edit 2: Corneliu Tusnea, the author of the tool I linked to above, has since released the tool as free software on codeplex: http://xssattack.codeplex.com/
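For anyone wanting the gist of the approach: a stripped-down version is just a script that stamps a self-identifying payload into every string column and then browses the app watching for alerts. Run it only against a disposable test copy of the database; the connection string and table/column names below are hypothetical:

using System.Data.SqlClient;

class XssSeeder
{
    static void Main()
    {
        // Hypothetical test database; never point this at real data.
        const string connStr = "Server=.;Database=MyAppTest;Integrated Security=true;";
        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();
            // Tag the payload with the column name so a firing alert
            // tells you exactly which output lacks encoding.
            var cmd = new SqlCommand(
                "UPDATE Users SET DisplayName = " +
                "'<script>alert(''Users.DisplayName'')</script>'", conn);
            cmd.ExecuteNonQuery();
        }
    }
}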
I think Metasploit has most of the attributes you are looking for. It may even be the only one that has all of what you specify, since all the others I can think of are closed source. There are a few existing modules that deal with XSS, and one in particular you should take a peek at: HTTP Microsoft SQL Injection Table XSS Infection. From the sound of that module, it is capable of doing exactly what you want.
The framework is written in Ruby, I believe, and is supposed to be easy to extend with your own modules, which you may need/want to do.
I hope that helps.
http://www.metasploit.com/
Not sure if this is what you're after; it's a parameter fuzzer for HTTP/HTTPS.
I haven't used it in a while, but IIRC it acts as a proxy between you and the web application in question, and will insert XSS/SQL injection attack strings into any input fields before deeming whether the response was "interesting" or not, i.e. whether the application is vulnerable.
http://www.owasp.org/index.php/Category:OWASP_WebScarab_Project
From your question I'm guessing it is a type of fuzzer you're looking for, one specifically for XSS and web applications; if I'm right, then that might help you!
It's part of the Open Web Application Security Project (OWASP) that "jah" has linked you to above.
There are some Firefox plugins to do some XSS testing here:
http://labs.securitycompass.com/index.php/exploit-me/
A friend of mine keeps saying that PHPIDS is pretty good. I haven't tried it myself, but it sounds as if it could approximately match your description:
Open Source (LGPL),
Cross Platform - PHP is not in your list, but maybe it's ok?
Detects "all sorts of XSS, SQL Injection, header injection, directory traversal, RFE/LFI, DoS and LDAP attacks" (this is from the FAQ)
Logs to databases.
I don't think there is such a tool, other than the one you pointed us to. I think there's a good reason for that: It's probably not the best way to test that each and every output is properly encoded for the applicable context.
From reading about that tool, it seems the premise is to insert random XSS vectors into the database and then browse your application to see if any of those vectors succeed. This is rather a hit-and-miss methodology, to say the least.
A much better idea, I think, would be to perform code reviews.
You may find it helpful to have a look at some of the resources available at http://owasp.org - namely the Application Security Verification Standard (ASVS), the Testing Guide and the Code Review Guide.