How to whitelist Country for ModSecurity Rule? - mod-security

I manage a WordPress site, but the editors and contributors get a lot of false-positive flags from ModSecurity, which affects their publishing experience. So I intend to just whitelist the country they are publishing from.
My problem is that I am not that good with ModSecurity rule configuration, so I am not sure whether what I have prepared is correct.
Secondly, how do I get the legacy version of GeoLiteCity.dat?
Below is what I have put together:
# Allow NG Country
SecGeoLookupDb /usr/local/geo/data/GeoLiteCity.dat
SecRule REMOTE_ADDR "@geoLookup" "chain,phase:2,id:1100234,allow,msg:'Custom WAF Rules: Allow NG Country'"
SecRule GEO:COUNTRY_CODE "@streq NG"

With this rule, you allow clients from the country with the NG country code, but you never deny the other countries.
You should try this:
SecGeoLookupDb /usr/local/geo/data/GeoLiteCity.dat
SecRule REMOTE_ADDR "@geoLookup" "id:1100234,phase:1,drop,msg:'Custom WAF Rules: Allow NG Country',chain"
SecRule GEO:COUNTRY_CODE "!@streq NG"
As you can see, the main difference between this and your rule is that the disruptive action is not allow, but drop.
Please note that you can use this rule in phase:1.
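If you prefer to keep the allow action from your original rule, an equivalent sketch (with SecGeoLookupDb configured as above) is to allow NG explicitly and then deny everything else. Note the rule IDs 1100235/1100236 and the 403 status here are placeholders, not from the original answer:
# Allow NG clients and stop further processing for them
SecRule REMOTE_ADDR "@geoLookup" "chain,id:1100235,phase:1,allow,msg:'Custom WAF Rules: Allow NG Country'"
SecRule GEO:COUNTRY_CODE "@streq NG"
# Everything that is not NG gets denied
SecRule REMOTE_ADDR "@geoLookup" "chain,id:1100236,phase:1,deny,status:403,msg:'Custom WAF Rules: Deny non-NG Country'"
SecRule GEO:COUNTRY_CODE "!@streq NG"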
Secondly, How do I get the legacy version of GeoLiteCity.dat?
I'm not familiar with the licensing, but this is what I found:
https://mailfud.org/geoip-legacy/
Edit: you can find more information about the GEO variable here.

Related

modsecurity blocking but not logging a violation

So it's not that there are no logs; there are actually many violations logged. It's just an issue I'm having with a few people: tens of violations out of millions of requests. To make it easy to differentiate between ModSecurity and backend violations I changed SecDefaultAction to a status of 406, which works like a charm.
It's not a performance issue; the ModSecurity servers are in an auto-scaling group and hardly taxed. I can see in our Kinesis logs the return code of 406 being sent to the user, as well as actually seeing the 406 in their browser. There is no corresponding ModSecurity violation though.
The ModSecurity servers are all behind load balancers and don't see the users' IPs, and I don't have any DoS or IP reputation rules enabled anyway.
The only thing I really have to go on is that, while we were in DetectionOnly, these particular users would trigger a 930120 when they logged in.
"request": "GET /a/environment_settings.js HTTP/2.0", "id": "930120"
"Matched Data: <omitted> found within REQUEST_COOKIES:access_token: <omitted>
We turned the rule on and I wrote the following in crs-after:
SecRuleUpdateTargetByTag "attack-lfi" "!REQUEST_COOKIES:access_token"
Everybody was fine logging in except for this one user. Unfortunately I have nothing to go on, because while they get a 406, nothing is logged for it. At one time 941150 would silently increment the anomaly counter, but that rule isn't in play here. I was wondering if there are any other rules that may silently increment it, or any thoughts on how to debug this.
OWASP ModSecurity Core Rule Set dev-on-duty here. To resolve the false positive with the CRS rule 930120 you can do the following:
Put the following tuning rule into crs-after (you're right here).
SecRuleUpdateTargetById 930120 "!REQUEST_COOKIES:access_token"
I highly recommend the tuning tutorials by CRS co-lead Christian, which can be found here: https://www.netnea.com/cms/apache-tutorials/. There you'll also find a tuning cheat sheet.
In the logs you should see the rules that increment the anomaly score. There shouldn't be a rule that increments silently.
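If it helps to see which rules were hit, a rough way to pull the rule IDs out of the audit log is something like the following (this assumes the default serial audit log at /var/log/modsec_audit.log; adjust the path to your setup):
# Count how often each CRS rule ID appears in the audit log
grep -oE '\[id "[0-9]{6}"\]' /var/log/modsec_audit.log | sort | uniq -c | sort -rn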

WAZUH/OSSEC - overwriting rules doesn't seem to work

I'm trying to overwrite a rule as per the documentation:
https://documentation.wazuh.com/3.12/learning-wazuh/replace-stock-rule.html
So I've copied one rule to local_rules.xml and created my own group (prior to that I also tried to put it within the rule's original group tag), but it seems to be completely ignoring it.
This is what I've put in local_rules.xml:
<group name="istvan">
  <rule id="31533" level="9" frequency="8" timeframe="20" overwrite="yes">
    <if_matched_sid>31530</if_matched_sid>
    <same_source_ip/>
    <description>High amount of POST requests in a small period of time (likely bot).</description>
    <group>pci_dss_6.5,pci_dss_11.4,gdpr_IV_35.7.d,nist_800_53_SA.11,nist_800_53_SI.4,</group>
  </rule>
</group>
I've only changed the level to 9 and added the overwrite="yes" attribute. The idea is that it no longer sends me these alerts (as my threshold is set to level 10+). I save and restart, but it's completely ignoring it, and I'm still getting those alerts tagged with level 10.
Frankly, I'm starting to be clueless as to why this is happening.
Any ideas?
Thanks.
A good way to test the expected behaviour would be using /var/ossec/bin/ossec-logtest as mentioned in that doc.
To elaborate, I will take the example from that doc:
I will overwrite rule 5716 (https://github.com/wazuh/wazuh-ruleset/blob/317052199f751e5ea936730710b71b27fdfe2914/rules/0095-sshd_rules.xml#L121) as below:
[root@localhost vagrant]# egrep -iE "ssh" /var/ossec/etc/rules/local_rules.xml -B 4 -A 3
<rule id="5716" overwrite="yes" level="9">
  <if_sid>5700</if_sid>
  <match>^Failed|^error: PAM: Authentication</match>
  <description>sshd: authentication failed.</description>
  <group>authentication_failed,pci_dss_10.2.4,pci_dss_10.2.5,gpg13_7.1,gdpr_IV_35.7.d,gdpr_IV_32.2,hipaa_164.312.b,nist_800_53_AU.14,nist_800_53_AC.7,</group>
</rule>
The logs can be tested without having to restart the Wazuh manager, by opening /var/ossec/bin/ossec-logtest and then pasting the log:
2020/05/26 09:03:00 ossec-testrule: INFO: Started (pid: 9849).
ossec-testrule: Type one log per line.
Oct 23 17:27:17 agent sshd[8221]: Failed password for root from ::1 port 60164 ssh2
**Phase 1: Completed pre-decoding.
full event: 'Oct 23 17:27:17 agent sshd[8221]: Failed password for root from ::1 port 60164 ssh2'
timestamp: 'Oct 23 17:27:17'
hostname: 'agent'
program_name: 'sshd'
log: 'Failed password for root from ::1 port 60164 ssh2'
**Phase 2: Completed decoding.
decoder: 'sshd'
dstuser: 'root'
srcip: '::1'
srcport: '60164'
**Phase 3: Completed filtering (rules).
Rule id: '5716'
Level: '9'
Description: 'sshd: authentication failed.'
As expected, the level, which was initially 5, has been overwritten. In your case, though, you will have to paste the log 8 times within a timeframe of less than 20 s to trigger that rule.
If you can share the logs triggering that alert, I can test with them.
On the other hand, you can create a sibling rule to simply ignore your rule 31533, similar to the one below:
<rule id="100010" level="2">
  <if_sid>31533</if_sid>
  <description>Ignore rule 31533</description>
</rule>
Make sure to restart the Wazuh manager afterward to apply the change.
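On a systemd-based installation that is typically:
systemctl restart wazuh-manager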
You can find more information about customizing rules/decoders here: https://wazuh.com/blog/creating-decoders-and-rules-from-scratch/
Hope this helps,
After finally talking to the developers, it turns out that it was indeed ignoring local_rules.xml. I had a strange exclusion of one rule (probably problematic syntax, although it didn't report an error):
"rule_exclude": [
    "31151"
]
When I removed it, it started working as described in the user's guide.

Is there any way for the IP once denied by a WAF rule to be unbarred again passing through the rule?

I have set up a Google Cloud Armor security policy referring to https://cloud.google.com/armor/docs/rules-language-reference. It worked fine: my simulated SQL injection attack from my office was detected and subsequent accesses were blocked. The Stackdriver log entry shows the corresponding enforcedSecurityPolicy outcome of "deny", and the applied expression ID was "owasp-crs-v030001-id942421-sqli". The key WAF rule is as follows:
evaluatePreconfiguredExpr('xss-stable') && evaluatePreconfiguredExpr('sqli-stable')
There is one point I cannot control. After my simulated attack, all accesses from my office remain blocked. Even after I detached the Cloud Armor security policy from the LB and re-attached it, access from my office is still blocked. Deleting that security policy and re-creating it does not help either. This implies there is an unseen persistent database of SQLi & XSS attackers, and my office IP might be registered in it, causing that permanent denial.
The question is: how can I remove my IP from that unseen 'SQLi & XSS blacklist' database to regain backend access without modifying rules? In our Cloud Armor production operation, a once-forbidden IP may want to regain access to the target backend service after its attack source is removed.
Certainly, if I add a permission rule with higher priority than the WAF rule, I can regain access to the target backend, but the WAF check will be bypassed, which is not what I want.
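For reference, such a higher-priority permission rule would look roughly like the sketch below (the policy name my-policy, the priority 100 and the office CIDR are placeholders), though as said it bypasses the WAF check:
gcloud compute security-policies rules create 100 \
  --security-policy=my-policy \
  --expression="inIpRange(origin.ip, '203.0.113.0/24')" \
  --action=allow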
Thank you in advance for your time.
R.Kurishima
I had a similar situation and almost concluded the same thing you did: that there's some kind of hidden blacklist. But after playing around some more, I figured out that instead some other non-malicious cookies in my request were triggering owasp-crs-v030001-id942421-sqli ("Restricted SQL Character Anomaly Detection (cookies): # of special characters exceeded (3)") and later owasp-crs-v030001-id942420-sqli ("Restricted SQL Character Anomaly Detection (cookies): # of special characters exceeded (8)"). Not a hidden blacklist.
As near as I can tell, these two rules count the number of 'special' characters in the whole Cookie header, not per cookie. Furthermore, the equals sign used in each cookie counts as a special character, and so does the semicolon separator, which is irritating. The second curl example below, for instance, already contains two equals signs and two semicolons, i.e. four special characters, exceeding the threshold of three.
So this request will trigger 942420:
curl 'https://example.com/' -H 'cookie: a=a; b=b; c=c; d=d; e=e;'
And this will trigger 942421:
curl 'https://example.com/' -H 'cookie: a=a; b=b;'
So it's probably best to disable these two rules, with something like:
evaluatePreconfiguredExpr('sqli-canary', [
'owasp-crs-v030001-id942420-sqli',
'owasp-crs-v030001-id942421-sqli'
])
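To apply that, you would update the existing WAF rule with the new expression, roughly like this (the policy name my-policy and the priority 1000 are assumptions about your setup):
gcloud compute security-policies rules update 1000 \
  --security-policy=my-policy \
  --expression="evaluatePreconfiguredExpr('sqli-canary', ['owasp-crs-v030001-id942420-sqli', 'owasp-crs-v030001-id942421-sqli'])" \
  --action=deny-403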

Does Amazon allow raising the SES mail *receiving* limits?

I run an email forwarding service (https://kopi.cloud).
I'm investigating the feasibility of building a feature to allow users to "bring their own domain".
It seems like this should work fine with SES, except there are limits on the total number of rules and the total number of recipients (see https://docs.aws.amazon.com/ses/latest/DeveloperGuide/limits.html).
With the current limits on rules and recipients, I could pack the subscriber domains into the receipt rules and Kopi could support up to about 10,000 separate domains.
10K domains will be plenty for a while; I don't expect that many people will actually want to bring their own domain (I reckon most people who'd want this would just go ahead and do their own forwarding), so I'm going to go ahead and prototype the feature.
But I need to check whether these limits are "soft limits", like the sending limits that can be raised on request, or "hard limits", where no increase is possible.
I'm still going to prototype the feature, and if it were to be wildly successful, I guess I could jury-rig something together with multiple accounts or some other shenanigans.
So my question: "Is it possible to get the SES receiving rule limits raised?"
Answer from AWS support, as of June 2019: "these limits are a hard limit and cannot be increased at the moment".
There is an outstanding request to raise them, though:
https://forums.aws.amazon.com/thread.jspa?threadID=303902

SimpleSAML_Error_Error: UNHANDLEDEXCEPTION --Destination in response doesn't match the current URL

I am receiving the below error when I try to log in from an IdP.
Caused by: Exception: Destination in response doesn't match the current URL. Destination is "http://example.com/simplesaml/module.php/saml/sp/saml2-acs.php/SP", current URL is "https://example.com:16116/simplesaml/module.php/saml/sp/saml2-acs.php/SP".
I am using Drupal 8 and simplesamlphp_auth module.
URL matching in SimpleSAML is very particular.
Notice that one request is using the protocol prefix 'https', while the other is using 'http'.
Additionally, the port number being reflected could throw it off.
I last worked with this technology a year and a half ago, but we did make use of hosts-file switching to target different environments (Dev, QA, Production, etc.). By avoiding the use of custom ports we had few problems on that front.
Checking the source, you can observe that the error thrown stems from a very literal string comparison:
https://github.com/simplesamlphp/simplesamlphp/blob/4bbcdd84ebda4c876542af582cac53bbd7056160/modules/saml/lib/Message.php#L585
Check the baseurlpath value in config.php of the SimpleSAMLphp library.
Also cross-verify the RelayState in authsources.php.
If required, the metadata also needs to be updated.
ref: https://groups.google.com/g/simplesamlphp/c/AcWUojq2aWg
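For example, a minimal sketch of the baseurlpath setting in config/config.php, assuming the public URL really is the https one on port 16116 shown in the error message:
// config/config.php (SimpleSAMLphp) - adjust host and port to your environment
'baseurlpath' => 'https://example.com:16116/simplesaml/',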