AWS WAF XSS check blocking form with "ON" keyword in form field value - xss

Posting a form with " on" or any word starting with "on" as the last word in a form field results in an XSS block from AWS WAF.
It is blocked by this rule:
Body contains a cross-site scripting threat after decoding as URL
e.g. "twenty only", " online" or "check on" all result in an XSS block.
These seem to be normal words, so why are they being blocked as XSS?
With whitespace at the end, however, it doesn't block,
e.g. "twenty only ", " online " or "check on " all work.

Just flagging that we got started with WAF last night, and overnight a few dozen legitimate requests were blocked.
Sure enough, each request blocked by the XSS rule had the string "on" in the request body, followed by other characters.
I wonder if it was trying to detect the hundred or so onerror, onload and other JavaScript event handlers? It feels like it could have been a lot more specific than matching "on" followed by "some stuff"...
The only solution here seems to be to disable this rule; otherwise it's going to be a constant source of false positives for us, which makes it worthless.

This is a known problem with the "CrossSiteScripting_BODY" WAFv2 rule provided by AWS as part of the AWSManagedRulesCommonRuleSet rule group. The rule will block any input that matches on*=*
In a form with multiple inputs, any text that has " on" in it will likely trigger this rule as a false positive, e.g. a=three two one&b=something else
In September 2021, I complained to AWS Enterprise Support about this clearly broken rule and they replied "It's better to block the request when in doubt than to allow a malicious one", which I strongly disagree with. The support engineer also suggested that I could attempt to whitelist inputs which have triggered this rule, which is totally impractical for any non-trivial web app.
I believe the rule is attempting to block XSS attacks containing scripts like onerror=eval(src), see https://cheatsheetseries.owasp.org/cheatsheets/XSS_Filter_Evasion_Cheat_Sheet.html#waf-bypass-strings-for-xss
I would recommend excluding all the black box CrossSiteScripting rules from your WAF, as they are not fit for purpose.
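If you want to keep the rest of the managed rule group, one option is to switch just the offending rules to "count" via ExcludedRules. Here is a sketch of how that can look in the WAFv2 API, shown as the Python dict you would put in the Rules list of your web ACL (e.g. via boto3's update_web_acl); the name, priority and metric name below are illustrative:

    # Illustrative entry for the Rules list of a WAFv2 web ACL.
    # ExcludedRules switches the listed managed rules to "count",
    # so they are logged but no longer block requests.
    common_rule_set = {
        "Name": "AWS-AWSManagedRulesCommonRuleSet",   # illustrative name
        "Priority": 0,                                # illustrative priority
        "OverrideAction": {"None": {}},
        "Statement": {
            "ManagedRuleGroupStatement": {
                "VendorName": "AWS",
                "Name": "AWSManagedRulesCommonRuleSet",
                "ExcludedRules": [
                    {"Name": "CrossSiteScripting_BODY"},
                    # add the other CrossSiteScripting_* rules here if needed
                ],
            }
        },
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "CommonRuleSet",
        },
    }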

You can try upgrading to WAFv2; however, certain combinations of the characters "on" and "&" may still cause a false positive. The rule causing the problem is the XSS check on the body after URL decoding, so if your form data is submitted URL-encoded, you can hit the problem. If you submit your form as JSON or as MIME multipart/form-data, it should work. I have two applications, one that submits form data via a JavaScript XHR using the fetch API as multipart/form-data and another that submits JSON data, and neither was getting blocked.
Otherwise, you have to tune your XSS rules or set that specific rule to count. I will not post how to tune it, lest someone lurking here tries to be funny.
Your suggestion of adding whitespace works as well; the backend can strip the whitespace or leave it as-is. A little annoying, but it works.
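To illustrate the encoding difference mentioned above, here is a small sketch using Python's requests library (the endpoint URL is a placeholder):

    import requests

    payload = {"amount": "twenty only", "note": "check on"}

    # application/x-www-form-urlencoded body: this is what the
    # "XSS on body after URL decoding" rule inspects, so it may be
    # blocked with a 403 by the WAF.
    form_response = requests.post("https://example.com/submit", data=payload)

    # application/json body: the same values, but not URL-encoded form data,
    # which, as noted above, was not getting blocked.
    json_response = requests.post("https://example.com/submit", json=payload)

    print(form_response.status_code, json_response.status_code)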

Related

ModSecurity not checking POST data even though SecRequestBodyAccess is On

I have installed ModSecurity on Nginx along with the OWASP rules,
and I have set SecRequestBodyAccess to On,
but when I send a request with malicious POST data, it passes with no problem.
Can anyone help me?
ModSecurity by default has the parameter "SecRuleEngine" set to "DetectionOnly" and works in monitoring mode. It must be set to "On".
ModSecurity must also have a rule enabled that detects the malicious code; the audit log will tell you whether the malicious POST data was found. Most CRS rules detect malicious input using regular expressions. Fancier attacks require special configuration or new rules, and some of them will never be detected.
Whether a violation is then actually blocked depends on whether you're using Anomaly Scoring mode or Self-Contained mode.
For Self-Contained mode (the older way), it is enough to have a configuration line like this (for POST data = phase 2):
SecDefaultAction "phase:2,log,auditlog,deny,status:403"
And that's all: if the POST data violates any rule, the attacker gets a 403.
For Anomaly Scoring mode (the newer, more flexible way), the line looks like:
SecDefaultAction "phase:2,log,auditlog,pass"
Then all rules for which anomalies were found are counted and their scores are summed. Depending on the rule, the score is one of "critical_anomaly_score", "error_anomaly_score", "warning_anomaly_score" or "notice_anomaly_score"; by default these count as 5, 4, 3 and 2 respectively.
If the total score is greater than or equal to "inbound_anomaly_score_threshold" (default 5), the request is blocked.
That's why, by default, a single rule with critical_anomaly_score (counted as 5) can block traffic, while a single rule with "error_anomaly_score" (counted as 4) is not enough to stop the request.
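To make the arithmetic concrete, here is a tiny illustrative sketch (plain Python, not ModSecurity configuration) of the scoring logic with the default values:

    # Default CRS anomaly scores and inbound threshold described above.
    SCORES = {"critical": 5, "error": 4, "warning": 3, "notice": 2}
    INBOUND_THRESHOLD = 5

    def is_blocked(matched_severities):
        # Sum the scores of all matched rules and compare against the threshold.
        total = sum(SCORES[severity] for severity in matched_severities)
        return total >= INBOUND_THRESHOLD

    print(is_blocked(["critical"]))          # True: 5 >= 5, one critical rule blocks
    print(is_blocked(["error"]))             # False: 4 < 5, a single error rule is not enough
    print(is_blocked(["error", "notice"]))   # True: 4 + 2 = 6 >= 5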

Create single and multiple resources using restful HTTP

In my API server I have this route defined:
POST /categories
To create one category you do:
POST /categories {"name": "Books"}
I thought that if you want to create multiple categories, then you could do:
POST /categories [{"name": "Books"}, {"name": "Games"}]
I just want to confirm that this is good practice for a RESTful HTTP API.
Or should one have a
POST /bulk
for allowing clients to do several operations at once (creating, reading, updating and deleting)?
In true REST, you should probably POST this in multiple separate calls. The reason is that each one will result in a new representation. How would you expect to get that back otherwise?
Each post should return the resultant resource location:
POST -> New Resource Location
POST -> New Resource Location
...
However, if you need a bulk, then create a bulk. Be dogmatic where possible, but if not, pragmatism gets the job done. If you get too hung up on dogmatism, then you never get anything done.
Here is a similar question
Here is one that suggests HTTP Pipelining to make this more efficient
There's nothing particularly wrong with having a bulk operation that you POST to in order to activate it (it'll be non-idempotent, so POST is the right verb), but there are some caveats:
You're making multiple resources, so you need to respond with multiple URLs. This means you can't use the redirect pattern: you'll have to send a list of URLs back in some form.
You have a problem in that bulk operations are often not very discoverable. Discoverability is one of the most important things about RESTfulness, as it means that someone can come along and figure out how to write a client without lots of help from the server author.
Dealing with partial failures when you've got bulk operations remains problematic. It's a problem with any other paradigm too (I've watched people tie themselves in knots over this when working with extensions to SOAP) so it isn't a surprise, but unless you can guarantee that all the creations will work, you're going to have to work out what happens when you make one resource and fail to make the second. (Also, if the bulk request wanted a third one done, would you go on and try that?)
The simplest approach is just to support one create per request; that's a much easier pattern to get right and is better understood all round.
There's nothing wrong with creating multiple resources at once with POST (just don't try it with PUT). It's not "un-REST-ful", especially if you create a representation for the bulk operation itself. I suggest you create an index resource at the same time you create the individual resources, and return a "303 See Other" to it. That index representation would then contain links to all of the created resources (and possibly error information if any of them failed).
POST /categories/uploads/
[{"name": "Books"}, {"name": "Games"}]
303 See Other
Location: /categories/uploads/321/
(actually, now that I think about it, 201 might be better than 303)
GET /categories/uploads/321/
200 OK
Content-Type: application/json
[{"name": "Books", "link": "/categories/Books/"},
{"name": "Games", "error": "The 'Games' category already exists."}]
In your case I would also go the /bulk resource way, but the pattern I would suggest is the following, which to my understanding is the most natural: work with the 202 Accepted status code.
The idea of a bulk request is that the server should not be forced to answer immediately, as this would mean the client has to wait until its bulk request has completed.
Here is the pattern:
POST /bulk [{"name": "Books"}, {"name": "Games"}]
202 Accepted | Location: /bulk/processing/status/resourceId
GET /bulk/processing/status/resourceId
entry = "REST in peace" | completed | 0 errors | /categories/category/resourceId
entry = "Walking dead" | processing | 0 errors ->
So, the client POSTs the bulk information to the server. The server just accepts it with a 202, which gives no guarantee about the processing state at the time of the response.
But the server also provides a link to a status resource. Here the client can have a look at each of the created resources and their processing state. When processing has finished, the client can access each resource via the given link.
Error cases can be identified by the client, and erroneous data can be resent via a PUT on the completed resource.
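A minimal server-side sketch of that pattern (assuming Flask; the routes, in-memory job store and field names below are illustrative, and a real implementation would process the entries in a background worker):

    from uuid import uuid4

    from flask import Flask, jsonify, request

    app = Flask(__name__)
    jobs = {}  # in-memory stand-in for a real job store

    @app.route("/bulk", methods=["POST"])
    def bulk_create():
        items = request.get_json()
        job_id = str(uuid4())
        # Record every entry as "processing"; a background worker would
        # later create the real resources and update these statuses.
        jobs[job_id] = [{"name": item["name"], "status": "processing", "errors": 0}
                        for item in items]
        status_url = f"/bulk/processing/status/{job_id}"
        # 202 Accepted: nothing is guaranteed about the processing state yet,
        # the client only gets a link to the status resource.
        response = jsonify({"status": status_url})
        response.status_code = 202
        response.headers["Location"] = status_url
        return response

    @app.route("/bulk/processing/status/<job_id>")
    def bulk_status(job_id):
        # The client polls this resource to see per-entry state, errors and,
        # once completed, the link to each created resource.
        return jsonify(jobs[job_id])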
Finally, a piece of advice I usually follow: whenever you hit something in your design that cannot be mapped onto an HTTP feature, it is probably because of a missing resource.
Actually, this is still a hot topic today, but to simplify things I would say there is usually a better-suited scenario for each practice.
E.g.:
1. If you are receiving likes on a post, you don't need bulk, since there is only one like per comment.
2. If you are receiving favorited comments, bulk can fit well: consider someone reviewing the comments they read, ticking a checkbox for each of their favorites, and sending them all at once.
Again, this is based on my experience working with RESTful APIs, but currently, for the sake of multitasking and other things, my colleague and I find ourselves doing bulk all the time in most MIS (Management Information Systems) we build. This is because modern web and mobile apps can do a lot of work on the client and send the final results to the back end; this way the back end has little to do, as long as the received data doesn't violate the business logic.

Hash anchor tag causing errors in URL

On very rare occasions, my error log shows the following error:
"You specified a Fuseaction of registrationaction#close which is not defined in Circuit public."
The full link is: "http://myUrl/index.cfm?do=public.registrationAction#close"
As you can see, the hash merely points to an anchor (close) on the page.
This code works 99% of the time, but on the odd occasion ColdFusion / Fusebox throws this error.
Why is this happening?
Could it be related to the device accessing my page somehow? Like a cell phone or Apple product that for some reason doesn't handle hashes the way I am expecting it to?
Could it be JavaScript / jQuery being disabled?
Any guidance would be appreciated
Thanks
I used to see stuff like that. Older versions of Internet Explorer did not handle the hash properly when there were URL parameters. The best solution I could come up with was kludgey at best, but basically it forced the anchor to be separated from the URL parameter:
http://myUrl/index.cfm?do=public.registrationAction&#close
I'm not sure there is a simple answer to this. We get odd exceptions all the time on our site for all sorts of reasons. Sometimes it's people not using the site the way you expect, and sometimes it's stuff like you mention, such as user-agent edge cases, etc.
Basically you need to start to gather evidence and see what comes up that's unusual with these requests.
So to start: do you catch exceptions in your application? If so, dumping all scopes (CGI/CLIENT/FORM/URL/SESSION) along with the full exception and emailing them to a custom email address (such as errors#yourdomain.com) will give you a reference you can square up against your error times, and this might give you a hint as to the real issue.
Hope that helps some!

Externally linked images - How to prevent cross site scripting

On my site, I want to allow users to add references to images which are hosted anywhere on the internet. These images can then be seen by all users of my site. As far as I understand, this could open the risk of cross-site scripting, as in the following scenario:
User A adds a link to a gif which he hosts on his own web server. This web server is configured in such a way that it returns JavaScript instead of the image.
User B opens the page containing the image. Instead of seeing the image, the JavaScript is executed.
My current security measures are such that all content is encoded, both on save and on open.
I am using ASP.NET (C#) on the server and a lot of jQuery on the client to build UI elements, including the generation of image tags.
Is this fear of mine justified? Am I missing any other important security loopholes here? And most important of all, how do I prevent this attack? The only secure way I can think of right now is to make a web request to the image URL from the server and check whether it contains anything other than binary data...
Checking that the file is indeed an image won't help. An attacker could return one thing when your server requests the URL and another when a potential victim's browser makes the same request.
Having said that, as long as you restrict the URL to only ever be printed inside the src attribute of an img tag, then you have a CSRF flaw, but not an XSS one.
Someone could for instance create an "image" URL along the lines of:
http://yoursite.com/admin/?action=create_user&un=bob&pw=alice
Or, more realistically but more annoyingly: http://yoursite.com/logout/
If all sensitive actions (logging out, editing profiles, creating posts, changing language/theme) have tokens, then an attack vector like this wouldn't give the user any benefit.
But going back to your question: unless there's some current browser bug I can't think of, you won't have XSS. Oh, and remember to ensure the image URL doesn't include odd characters, i.e. an image URL of "><script>alert(1)</script><!-- may obviously have bad effects. I presume you know to escape that.
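For that "odd characters" point, here is a small server-side sketch (Python, purely for illustration; adapt to your stack) that only accepts plain http/https URLs and escapes them before they are placed in an src attribute:

    from html import escape
    from urllib.parse import urlparse

    def safe_img_tag(url):
        # Only allow plain http/https URLs; this also rejects javascript: and data: URIs.
        parsed = urlparse(url)
        if parsed.scheme not in ("http", "https") or not parsed.netloc:
            return None
        # HTML-escape the URL so characters like " and > cannot break out of the attribute.
        return '<img src="%s" alt="">' % escape(url, quote=True)

    print(safe_img_tag("http://example.com/cat.gif"))
    print(safe_img_tag('"><script>alert(1)</script><!--'))  # rejected, returns None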
Your approach to security is incorrect.
Don't approach the topic as "I have user input, so how can I prevent XSS?". Rather, approach it like this: "I have user input; it should be as restrictive as possible, i.e. allowing nothing through." Then, based on that, allow only what's absolutely essential: plain-text strings thoroughly sanitized so that nothing but a URL, and only the specific characters necessary for URLs, gets through. Once the input is sanitized, you should only allow images. Testing for that is hard, because it can easily be tricked, but it should still be tested for. Then, because you're using an input field, you should make sure that everything from JavaScript, escape characters, HTML and XML to SQL injection is converted to plain text and rendered harmless and useless. Consider your users to be both idiots and hackers: assume they'll input everything incorrectly and try to hack something into your input field.
Aside from that, you may run into some legal issues with regard to copyright. Copyrighted images generally may not be used on other people's sites without the copyright owner's consent and permission, usually obtained in writing (or by email). So giving users the opportunity to simply lift images from a site could allow them to take copyrighted material and repost it on your site without permission, which is illegal. Some sites are okay with citing the source, others require a fee to be paid, and others will sue you and bring your whole domain down for copyright infringement.

Measures to prevent XSS vulnerabilities (like Twitter's one a few days ago)

Even famous sites like Twitter suffer from XSS vulnerabilities. What should we do to prevent this kind of attack?
The #1 thing you can do is set your cookies to HttpOnly, which at least protects against session cookie hijacking, e.g. someone stealing your cookie when you are likely the admin of your own site.
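For example, a sketch using Python's standard http.cookies module (the cookie name and value are placeholders; most web frameworks expose the same flags on their own cookie helpers):

    from http.cookies import SimpleCookie

    cookie = SimpleCookie()
    cookie["session_id"] = "opaque-session-token"  # placeholder value
    cookie["session_id"]["httponly"] = True        # not readable from document.cookie
    cookie["session_id"]["secure"] = True          # only ever sent over HTTPS

    # Emit this as a Set-Cookie header with the response;
    # it includes the Secure and HttpOnly flags.
    print(cookie["session_id"].OutputString())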
The rest comes down to validating all user input.
RULE #0 - Never Insert Untrusted Data Except in Allowed Locations
RULE #1 - HTML Escape Before Inserting Untrusted Data into HTML Element Content
RULE #2 - Attribute Escape Before Inserting Untrusted Data into HTML Common Attributes
RULE #3 - JavaScript Escape Before Inserting Untrusted Data into HTML JavaScript Data Values
RULE #4 - CSS Escape Before Inserting Untrusted Data into HTML Style Property Values
RULE #5 - URL Escape Before Inserting Untrusted Data into HTML URL Attributes
This is a very lengthy subject, discussed in detail here:
http://www.owasp.org/index.php/XSS_(Cross_Site_Scripting)_Prevention_Cheat_Sheet
http://www.owasp.org/index.php/Cross_site_scripting
XSS is only one of many exploits, and every web dev should learn the OWASP Top 10 by heart, imho.
http://www.owasp.org/index.php/Top_10_2007
Just like you can make SQL injection a non-issue by using prepared statements, you can make XSS a non-issue by using a templating engine (DOM serializer) that does a similar thing.
Design your application so that all output goes via the templating engine. Make that templating engine HTML-escape all data by default. This way you'll have a system that's secure by default and does not rely on humans (and the rest of a large system) being diligent about escaping HTML.
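For example, a sketch with Jinja2, one templating engine that can do this (note that autoescaping has to be switched on; it is not the default for a plain Environment):

    from jinja2 import Environment

    # With autoescape=True every substituted variable is HTML-escaped
    # unless it is explicitly marked as safe.
    env = Environment(autoescape=True)
    template = env.from_string("<p>Hello, {{ name }}!</p>")

    print(template.render(name="<script>alert(1)</script>"))
    # -> <p>Hello, &lt;script&gt;alert(1)&lt;/script&gt;!</p>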
I don't know what you write your code with, but if you use ASP.NET, you are partly covered.
ASP.NET has what they call request validation, which, when enabled, prevents malicious script from being introduced via user input.
But sometimes you'll have to allow some kind of rich text editor, like the one you typed this question into. In that case, you'll have to partly disable request validation so the end user can submit some "rich text" HTML, and you will have to build some kind of whitelist filtering mechanism.
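The same idea applies outside ASP.NET; as a language-agnostic illustration of whitelist filtering, here is a sketch using the Python bleach library (the allowed tags and attributes are just examples):

    import bleach

    ALLOWED_TAGS = ["a", "b", "em", "i", "p", "strong"]
    ALLOWED_ATTRS = {"a": ["href", "title"]}

    def clean_rich_text(user_html):
        # Anything not on the whitelist (tags, attributes, event handlers)
        # is stripped rather than passed through.
        return bleach.clean(user_html, tags=ALLOWED_TAGS,
                            attributes=ALLOWED_ATTRS, strip=True)

    print(clean_rich_text('<p onclick="evil()">hi <script>alert(1)</script></p>'))
    # The onclick handler and the <script> tags are removed.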
FYI, I don't know about others, but Microsoft has a library called AntiXSS.