I have installed ModSecurity on Nginx as well as the OWASP rules,
and I have set SecRequestBodyAccess to On,
but when I send a request with malicious POST data, it passes through with no problem.
Can anyone help me?
ModSecurity has the parameter "SecRuleEngine" set to "DetectionOnly" by default, which means it works in monitoring mode only. It must be set to "On".
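In an nginx setup that usually means two things, sketched here with assumed file paths (the engine switch in modsecurity.conf plus the ModSecurity-nginx connector directives):

    # /etc/nginx/modsec/modsecurity.conf (path is an assumption)
    SecRuleEngine On
    SecRequestBodyAccess On

    # nginx server or location block
    modsecurity on;
    modsecurity_rules_file /etc/nginx/modsec/main.conf;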
ModSecurity must also have a rule enabled that detects the malicious code - the audit log will tell you whether the malicious POST data was found. Most CRS rules detect malicious input with regular expressions; more sophisticated attacks require special configuration or new rules, and some will never be detected.
Whether the request is then blocked depends on whether you are using Anomaly Scoring mode or Self-Contained mode.
For Self-Contained mode (the older way) it is enough to have a configuration line like this (for POST data = phase 2):
SecDefaultAction "phase:2,log,auditlog,deny,status:403"
And that's all: if the POST data violates any rule, the attacker gets a 403.
For Anomaly Scoring mode (the newer, more flexible way) the line looks like:
SecDefaultAction "phase:2,log,auditlog,pass"
Then all rules that matched are counted and their scores are summed up. Depending on the rule, the score can be "critical_anomaly_score", "error_anomaly_score", "warning_anomaly_score" or "notice_anomaly_score". By default these count as 5, 4, 3 and 2.
If the summed score equals or exceeds "inbound_anomaly_score_threshold" (default 5), the request is blocked.
That's why, by default, a single rule with "critical_anomaly_score" (counted as 5) can block traffic, while a single rule with "error_anomaly_score" (counted as 4) is not enough to stop the request.
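For reference, both the per-severity scores and the blocking threshold live in crs-setup.conf; a sketch of the relevant SecActions, shown here with the CRS 3.x default values, looks like this:

    # crs-setup.conf - per-severity anomaly scores
    SecAction \
        "id:900100,\
        phase:1,\
        nolog,\
        pass,\
        t:none,\
        setvar:tx.critical_anomaly_score=5,\
        setvar:tx.error_anomaly_score=4,\
        setvar:tx.warning_anomaly_score=3,\
        setvar:tx.notice_anomaly_score=2"

    # crs-setup.conf - blocking thresholds
    SecAction \
        "id:900110,\
        phase:1,\
        nolog,\
        pass,\
        t:none,\
        setvar:tx.inbound_anomaly_score_threshold=5,\
        setvar:tx.outbound_anomaly_score_threshold=4"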
I am using ModSecurity v3 on a CentOS machine with OpenLiteSpeed.
My PHP file access.php creates a cookie named honey_bot_trap whose value is a dynamic 16-character [0-9a-zA-Z] string, e.g. au4abbgjk190Bl.
In ModSecurity I created this rule:
SecRule REQUEST_HEADERS:Cookie "!@contains honey_bot_trap" "chain,id:'990014',phase:1,t:none,block,msg:'fake cookie'"
I want to create rules so that:
All requests to my domain are redirected to access.php (which creates the honey_bot_trap cookie, e.g. au4abbgjk190Bl).
ModSecurity blocks the request if it has no honey_bot_trap cookie.
If the request has the honey_bot_trap cookie, it is added to a rate check.
If an IP's rate exceeds 2 clicks per second, it is blocked (or redirected to https://mydomain.com/verify.php).
Please help me. Thanks for all.
OpenLiteSpeed is not a creator of rules, but a consumer of them. We generally recommend the use of pre-created rules like OWASP or Comodo. If you wish to create rules you should check out the rules guide: https://github.com/SpiderLabs/ModSecurity/wiki/Reference-Manual-(v3.x)
The rule you are attempting to create is very, very complicated. It may sound simple, but I've written the 2nd edition of the ModSecurity Handbook and trust me, it would take me 2-3 hours to get this working.
With that being said, ModSec is probably not the best tool for what you have in mind. If you want to push through, try to get your hands on a copy of the ModSecurity Handbook (instead of the reference linked above), and use mod_qos or something along those lines for rate limiting rather than ModSec.
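If you do decide to experiment, the narrowest piece of it - denying requests that carry no honey_bot_trap cookie at all - could be sketched roughly like this; the rule id and status code are arbitrary, and the cookie name is taken from the question:

    SecRule &REQUEST_COOKIES:honey_bot_trap "@eq 0" \
        "id:990015,phase:1,t:none,deny,status:403,msg:'missing honeypot cookie'"

You would still need to exempt access.php itself, and the redirect, the value check and the rate limiting are the genuinely hard parts.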
#CRSDevOnDuty
P.S. Hat tip to Robert Perper.
Posting a form with " on" or any word starting with "on" as the last word in a form field results in an XSS block from AWS WAF.
It is blocked by this rule:
Body contains a cross-site scripting threat after decoding as URL
e.g. "twenty only", " online" or "check on" all result in an XSS block.
These seem to be normal words, so why are they getting blocked as XSS?
But with whitespace at the end it doesn't block,
e.g. "twenty only ", " online " or "check on " all work.
Just flagging up we got started with WAF last night, and overnight a few dozen legitimate requests were blocked.
Sure enough, each request flagged by the XSS rule had the string "on" in the request body, followed by other characters.
I wonder if it was trying to detect the hundred or so onerror, onload and other JavaScript event handlers? It feels like it could have been a lot more specific than matching "on" followed by "some stuff"...
The only solution here seems to be to disable this rule - otherwise it's going to be a constant source of false positives for us, which makes it worthless.
This is a known problem with the "CrossSiteScripting_BODY" WAFv2 rule provided by AWS as part of the AWSManagedRulesCommonRuleSet ruleset. The rule will block any input that matches on*=*
In a form with multiple inputs, any text that has " on" in it will likely trigger this rule as a false positive, e.g. a=three two one&b=something else
In Sept 2021, I complained to AWS Enterprise Support about this clearly broken rule and they replied "Its better to block the request when in doubt than to allow a malicious one", which I strongly disagree with. The support engineer also suggested that I could attempt to whitelist inputs which have triggered this rule, which is totally impractical for any non-trivial web app.
I believe the rule is attempting to block XSS attacks containing scripts like onerror=eval(src), see https://cheatsheetseries.owasp.org/cheatsheets/XSS_Filter_Evasion_Cheat_Sheet.html#waf-bypass-strings-for-xss
I would recommend excluding all the black box CrossSiteScripting rules from your WAF, as they are not fit for purpose.
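If you go down that route, the exclusion lives on the managed rule group statement in the WAFv2 web ACL. A rough sketch of the JSON fragment follows - the rule and rule-group names are the AWS-managed ones, everything else is a placeholder, and note that "excluded" rules are still evaluated, just in count mode:

    {
      "Name": "AWS-AWSManagedRulesCommonRuleSet",
      "Priority": 0,
      "OverrideAction": { "None": {} },
      "Statement": {
        "ManagedRuleGroupStatement": {
          "VendorName": "AWS",
          "Name": "AWSManagedRulesCommonRuleSet",
          "ExcludedRules": [
            { "Name": "CrossSiteScripting_BODY" },
            { "Name": "CrossSiteScripting_QUERYARGUMENTS" },
            { "Name": "CrossSiteScripting_COOKIE" },
            { "Name": "CrossSiteScripting_URIPATH" }
          ]
        }
      },
      "VisibilityConfig": {
        "SampledRequestsEnabled": true,
        "CloudWatchMetricsEnabled": true,
        "MetricName": "common-rule-set"
      }
    }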
You can try upgrading to WAFv2; however, certain combinations of the characters "on" and "&" may still cause a false positive. The rule causing the problem is the XSS-on-body-with-URL-decoding rule, so if your form data is submitted URL-encoded you can hit the problem. If you submit your form as JSON data or as MIME multipart/form-data it should work. I have two applications, one submitting form data via a JavaScript XHR using the fetch API (which sends multipart/form-data) and another sending JSON data, and neither was getting blocked.
Otherwise, you have to tune your XSS rules or set that specific rule to count. I will not post how to tune, lest someone lurking here tries to be funny.
Your suggestion of adding whitespace works as well; the backend can remove the whitespace or leave it as is. A little annoying, but it works.
If I define a URL pattern like "^optional/slash/?$", so that the web page it is bound to is available under both URL versions - with and without the trailing slash - will I violate any conventions or standards by doing that?
Wouldn't a redirection be more appropriate?
If I remember correctly, trailing slashes should be used with resources that list other resources. Like a directory that lists files, a list of articles or a category query (e.g http://www.example.com/category/cakes/). Without trailing slashes the URI should point to a single resource. Like a file, an article or a complex query with parameters (e.g http://www.example.com/search?ingredients=strawberry&taste=good)
Just use the HTTP code 302 FOUND to redirect typos to their correct URIs.
EDIT: Thanks to AndreD for pointing it out: an HTTP 301 MOVED PERMANENTLY code is more appropriate for permanently aliasing typos. Search engines and other clients should stop querying the misspelled URL after getting a 301 code once, and Google recommends using it for changing the URL of a page in their index.
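If the server in front happens to be nginx (an assumption on my part - the pattern in the question looks like a framework URLconf), the permanent alias can be sketched like this, with placeholder paths:

    # permanently redirect the slash-less form to the canonical trailing-slash form
    location ~ ^(?<slashless>/category/[^/.]+)$ {
        return 301 $slashless/;
    }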
According to RFC 3986: Uniform Resource Identifier (URI): Generic Syntax:
Section 6.2.4. Protocol-Based Normalization -
"Substantial effort to reduce the incidence of false negatives is
often cost-effective for web spiders. Therefore, they implement even
more aggressive techniques in URI comparison. For example, if they
observe that a URI such as
http://example.com/data
redirects to a URI differing only in the trailing slash
http://example.com/data/
they will likely regard the two as equivalent in the future. This
kind of technique is only appropriate when equivalence is clearly
indicated by both the result of accessing the resources and the
common conventions of their scheme's dereference algorithm (in this
case, use of redirection by HTTP origin servers to avoid problems
with relative references)."
My interpretation of this statement would be that making the two URIs functionally equivalent (e.g. by means of an .htaccess statement, redirect, or similar) does not violate any standard conventions. According to the RFC, web spiders are prepared to treat them functionally equivalent if they point to the same resource.
No, you are not violating any standards by doing that; you can use an optional trailing slash in your site's URLs.
But you need to stay on the safe side, because servers handle the issue in different ways:
Sometimes it doesn't matter for SEO: many web servers will just redirect to the default version with a 301 status code;
Some web servers may return a 404 page for the non-trailing-slash address = wasted link juice and effort;
Some web servers may return a 302 redirect to the correct version = wasted link juice and effort;
Some web servers may return a 200 response for both versions = wasted link juice and effort, as well as potential duplicate-content problems.
On my site, I want to allow users to add references to images which are hosted anywhere on the internet. These images can then be seen by all users of my site. As far as I understand, this could open the risk of cross-site scripting, as in the following scenario:
User A adds a link to a GIF which he hosts on his own webserver. This webserver is configured in such a way that it returns JavaScript instead of the image.
User B opens the page containing the image. Instead of seeing the image, the JavaScript is executed.
My current security measures are such that all content is encoded both on save and on display.
I am using ASP.NET (C#) on the server and a lot of jQuery on the client to build UI elements, including the generation of image tags.
Is this fear of mine correct? Am I missing any other important security loopholes here? And most important of all, how do I prevent this attack? The only secure way I can think of right now is to make a web request for the image URL on the server and check whether it returns anything other than binary data...
Checking that the file is indeed an image won't help. An attacker could return one thing when the server makes the request and another when a potential victim makes the same request.
Having said that, as long as you restrict the URL to only ever be printed inside the src attribute of an img tag, you have a CSRF flaw, but not an XSS one.
Someone could for instance create an "image" URL along the lines of:
http://yoursite.com/admin/?action=create_user&un=bob&pw=alice
Or, more realistically but more annoyingly: http://yoursite.com/logout/
If all sensitive actions (logging out, editing profiles, creating posts, changing language/theme) have tokens, then an attack vector like this wouldn't give the user any benefit.
But going back to your question: unless there's some current browser bug I can't think of, you won't have XSS. Oh, and remember to ensure their image URL doesn't include odd characters, i.e. an image URL of "><script>alert(1)</script><!-- may obviously have bad effects. I presume you know to escape that.
Your approach to security is incorrect.
Don't approach the topic as "I have user input, so how can I prevent XSS?" Rather, approach it like this: "I have user input - it should be as restrictive as possible, i.e. allow nothing through by default." Then, based on that, allow only what's absolutely essential: plain-text strings thoroughly sanitized so that nothing but a URL gets through, restricted to the specific characters URLs actually need. Once it is sanitized, you should only allow images. Testing for that is hard because it can easily be tricked; however, it should still be tested for. Then, because you're using an input field, you should make sure that everything from JavaScript, escape characters, HTML, XML and SQL injection is converted to plain text and rendered harmless and useless. Consider your users as being both idiots and hackers: assume they'll input everything incorrectly and try to hack something into your input field.
Aside from that, you may run into some legal issues with regard to copyright. Copyrighted images generally may not be used on other people's sites without the copyright owner's consent and permission - usually obtained in writing (or by email). So allowing users to simply lift images from a site risks letting them take copyrighted material and repost it on your site without permission, which is illegal. Some sites are okay with citing the source, others require a fee to be paid, and others will sue you and bring your whole domain down for copyright infringement.
I have a resource at a URL that both humans and machines should be able to read:
http://example.com/foo-collection/foo001
What is the best way to distinguish between human browsers and machines, and return either HTML or a domain-specific XML response?
(1) The Accept type field in the request?
(2) An additional bit of URL? eg:
http://example.com/foo-collection/foo001 -> returns HTML
http://example.com/foo-collection/foo001?xml -> returns, er, XML
I do not wish to oblige machines reading the resource to parse HTML (or XHTML for that matter). Machines like the googlebot should receive the HTML response.
It is reasonable to assume I control the machine readers.
If this is under your control, rather than adding a query parameter why not add a file extension:
http://example.com/foo-collection/foo001.html - return HTML
http://example.com/foo-collection/foo001.xml - return XML
Apart from anything else, that means if someone fetches it with wget or saves it from their browser, it'll have an appropriate filename without any fuss.
My preference is to make it a first-class part of the URI. This is debatable, since there are -- in a sense -- multiple URIs for the same resource. And is "format" really part of the URI?
http://example.com/foo-collection/html/foo001
http://example.com/foo-collection/xml/foo001
These are very easy to deal with in a web framework that has URI parsing to direct the request to the proper application.
If this is indeed the same resource with two different representations, HTTP invites you to use the Accept header as you suggest. This is probably a very reliable way to distinguish between the two scenarios. You can be quite sure that user agents (including search engine spiders) send the Accept header properly.
As for the machine agents you are going to give XML to: are they under your control? In that case you can be doubly sure that Accept will work. If they do not set the header properly, you can serve XML as the default. Regular user agents DO set the header properly.
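Purely as an illustration of the Accept-header route - assuming an nginx front end and pre-rendered foo001.html / foo001.xml files on disk, which are my assumptions and not part of the question - the dispatch could be sketched like this:

    # http context: choose a representation from the Accept header;
    # the fallback is a policy choice, here anything unrecognised gets HTML
    map $http_accept $foo_rep {
        default             html;
        ~*text/html         html;   # browsers and spiders that accept HTML
        ~*application/xml   xml;    # machine readers asking only for XML
    }

    server {
        listen 80;
        root /var/www/foo;   # assumed docroot holding the pre-rendered files

        location ~ ^/foo-collection/(?<doc>[^./]+)$ {
            # internally rewrite to the matching static representation
            rewrite ^ /foo-collection/$doc.$foo_rep last;
        }
    }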
I would try to use the Accept header for this, because that is exactly what the Accept header is there for.
The problem with having two different URLs is that it is not automatically apparent that they represent the same underlying resource. This can be bad if a user finds a URL in one program, which renders HTML, and pastes it into another, which needs XML. At that point a smart user could probably change the URL appropriately, but it is just a source of error that you don't need.
I would say adding a query string parameter is your best bet. The only way to automatically detect whether your client is a browser (human) or an application would be to read the User-Agent string from the HTTP request. But since this is easily set by any application to mimic a browser, you're not guaranteed that it will work.