I'm trying to overwrite a rule as per the documentation, like this:
https://documentation.wazuh.com/3.12/learning-wazuh/replace-stock-rule.html
So I've copied one rule to local_rules.xml and created my own group (prior to that I also tried to put it within the rule's original group tag), but Wazuh seems to be completely ignoring it:
This is what I've put in local_rules.xml:
<group name="istvan">
<rule frequency="8" id="31533" level="9" overwrite="yes" timeframe="20">
<if_matched_sid>31530</if_matched_sid>
<same_source_ip/>
<description>High amount of POST requests in a small period of time (likely bot).</description>
<group>pci_dss_6.5,pci_dss_11.4,gdpr_IV_35.7.d,nist_800_53_SA.11,nist_800_53_SI.4,</group>
</rule>
</group>
I've only changed the level to 9 and added the overwrite="yes" attribute. The idea is that it doesn't send me these alerts (as my threshold is set to level 10+). Save, restart, but it's completely ignoring it, and I'm still getting those alerts tagged with level 10.
Frankly, I'm starting to be clueless as to why this is happening.
Any ideas?
Thanks.
A good way to test the expected behaviour would be using /var/ossec/bin/ossec-logtest as mentioned in that doc.
To elaborate, I will take the example from that doc:
I will overwrite rule 5716: https://github.com/wazuh/wazuh-ruleset/blob/317052199f751e5ea936730710b71b27fdfe2914/rules/0095-sshd_rules.xml#L121, as below:
[root@localhost vagrant]# egrep -iE "ssh" /var/ossec/etc/rules/local_rules.xml -B 4 -A 3
<rule id="5716" overwrite="yes" level="9">
<if_sid>5700</if_sid>
<match>^Failed|^error: PAM: Authentication</match>
<description>sshd: authentication failed.</description>
<group>authentication_failed,pci_dss_10.2.4,pci_dss_10.2.5,gpg13_7.1,gdpr_IV_35.7.d,gdpr_IV_32.2,hipaa_164.312.b,nist_800_53_AU.14,nist_800_53_AC.7,</group>
</rule>
The logs can be tested without having to restart the Wazuh manager, by opening /var/ossec/bin/ossec-logtest and then pasting my log:
2020/05/26 09:03:00 ossec-testrule: INFO: Started (pid: 9849).
ossec-testrule: Type one log per line.
Oct 23 17:27:17 agent sshd[8221]: Failed password for root from ::1 port 60164 ssh2
**Phase 1: Completed pre-decoding.
full event: 'Oct 23 17:27:17 agent sshd[8221]: Failed password for root from ::1 port 60164 ssh2'
timestamp: 'Oct 23 17:27:17'
hostname: 'agent'
program_name: 'sshd'
log: 'Failed password for root from ::1 port 60164 ssh2'
**Phase 2: Completed decoding.
decoder: 'sshd'
dstuser: 'root'
srcip: '::1'
srcport: '60164'
**Phase 3: Completed filtering (rules).
Rule id: '5716'
Level: '9'
Description: 'sshd: authentication failed.'
As expected, the level has been overwritten (it was initially 5). In your case, though, you will have to paste the log 8 times within a timeframe of less than 20 s to trigger that rule.
If you can share the logs triggering that alert, I can test with them.
On the other hand, you can create a sibling rule to simply ignore your rule 31533, something similar to the one below:
<rule id="100010" level="2">
<if_sid>31533</if_sid>
<description>Ignore rule 31533</description>
</rule>
Make sure to restart the Wazuh manager afterward to apply the change.
You can find more information about customizing rules/decoders here : https://wazuh.com/blog/creating-decoders-and-rules-from-scratch/
Hope this helps,
After finally talking to the developers, it turns out that it was indeed ignoring local_rules.xml. I had a strange exclusion of one rule (probably problematic syntax, although it didn't report an error):
"rule_exclude": [
"31151"
When I removed it, it started working as described in the user's guide.
I am trying to set up an application as a service on OpenSUSE LEAP 15.
Googling around I found that one does that (or should I say "one can do that"?) by providing a file <servicename>.service in /usr/lib/systemd/system/ and then one enables that service using YaST.
I provided such a file (copying from a tomcat.service file already on the machine and replacing the misc. entries with values relevant for my application).
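For illustration, the unit file I put there looks roughly like this (a minimal sketch; the description, paths and user are placeholders, not my actual values):
[Unit]
Description=My application
After=network.target

[Service]
Type=forking
ExecStart=/opt/myapp/ctlscript.sh start
ExecStop=/opt/myapp/ctlscript.sh stop
User=myapp

[Install]
WantedBy=multi-user.target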
The setup using YaST seemed to have worked OK: the service was listed and I enabled it. But now I have an issue: when I start the application using service <servicename> start, the startup fails. Using service <servicename> status I see the last 10 lines of some log, which read:
Jun 08 14:41:04 test-vm ctlscript.sh[31955]: at java.lang.reflect.Method.invoke(Method.java:498)
Jun 08 14:41:04 test-vm ctlscript.sh[31955]: at org.apache.catalina.startup.Bootstrap.stopServer(Bootstrap.java:371)
Jun 08 14:41:04 test-vm ctlscript.sh[31955]: at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:458)
...
This is the tail of some Java stacktrace, so obviously there is some exception while starting up.
But to be able to figure out what is going wrong I would need to see more of that log. Where is this service command logging to? I.e., which logfile does the above output of the service ... status command come from?
Ah - I finally found the solution: one can use journalctl -u <servicename> to see all log entries of a specific service. While that doesn't answer the original question (which logfile the output comes from), it serves its underlying purpose, namely seeing all log entries for a specific service.
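For example (the service name placeholder is as above; the flags are standard journalctl options):
# full log for the unit
journalctl -u <servicename>
# last 100 entries, following new output
journalctl -u <servicename> -n 100 -f
# only entries from the current boot
journalctl -u <servicename> -b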
So it's not that there are no logs; there are actually many violations logged. It's just an issue I'm having with a few people: tens of violations out of millions of requests. To make it easy to differentiate between ModSecurity and backend violations I changed SecDefaultAction to a status of 406, which works like a charm.
It's not a performance issue; the ModSecurity servers are in an auto-scaling group and hardly taxed. I can see in our Kinesis logs the 406 return code being sent to the user, as well as the actual 406 in their browser. There is no corresponding ModSecurity violation, though.
The ModSecurity servers are all behind load balancers and don't see the users' IPs; I don't have any DoS or IP reputation rules enabled anyway.
The only thing I really have to go on is, while we were in DetectionOnly these particular users would trigger a 930120 when they logged in.
"request": "GET /a/environment_settings.js HTTP/2.0", "id": "930120"
"Matched Data: <omitted> found within REQUEST_COOKIES:access_token: <omitted>
We turned the rule on and I wrote the following in crs-after:
SecRuleUpdateTargetByTag "attack-lfi" "!REQUEST_COOKIES:access_token"
Everybody was fine logging in except for this one user. Unfortunately I have nothing to go on, because while they get a 406, nothing is logged for it. At one time 941150 would silently increment the anomaly counter, but that rule isn't in play here. I was wondering if there are any other rules that may silently increment, or if anyone has thoughts on how to debug this.
OWASP ModSecurity Core Rule Set dev-on-duty here. To resolve the false positive with the CRS rule 930120 you can do the following:
Put the following tuning rule into crs-after (you're right here).
SecRuleUpdateTargetById 930120 !REQUEST_COOKIES:access_token
I highly recommend the tuning tutorials of CRS co-lead Christian that can be found here: https://www.netnea.com/cms/apache-tutorials/. There you'll also find a tuning cheat sheet.
In the logs you should see the rules that increment the anomaly score. There shouldn't be a rule that increments silently.
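If the logs really show nothing for those 406s, one further option (a sketch assuming ModSecurity v2-style directives; the log path is just an example) is to temporarily raise the debug log level on a single test host and reproduce the login:
# very verbose - only enable briefly on a test system
SecDebugLog /var/log/modsec_debug.log
SecDebugLogLevel 9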
So I had a working configuration with Fluent Bit on EKS and Elasticsearch on AWS, pointing at the AWS Elasticsearch service, but for cost-saving purposes we deleted that Elasticsearch and created an instance with a standalone Elasticsearch, which is enough for dev purposes (and the AWS service doesn't cope well with only one instance).
The issue is that during this migration Fluent Bit seems to have broken, and I get lots of "[warn] failed to flush chunk" and some "[error] [upstream] connection #55 to ES-SERVER:9200 timed out after 10 seconds".
My current configuration:
[FILTER]
Name kubernetes
Match kube.*
Kube_URL https://kubernetes.default.svc:443
Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token
Kube_Tag_Prefix kube.var.log.containers.
Merge_Log On
Merge_Log_Key log_processed
K8S-Logging.Parser On
K8S-Logging.Exclude Off
[INPUT]
Name tail
Tag kube.*
Path /var/log/containers/*.log
Parser docker
DB /var/log/flb_kube.db
Mem_Buf_Limit 50MB
Skip_Long_Lines On
Refresh_Interval 10
Ignore_Older 1m
I think the issue is in one of those configurations; if I comment out the kubernetes filter I don't get the errors anymore, but I'm losing the fields in the indices...
I tried tweaking some parameters in Fluent Bit to no avail. Does anyone have a suggestion?
So, the previous logs did not indicate anything, but I finally found something when activating trace_error in the Elasticsearch output:
{"index":{"_index":"fluent-bit-2021.04.16","_type":"_doc","_id":"Xkxy 23gBidvuDr8mzw8W","status":400,"error":{"type":"mapper_parsing_exception","reas on":"object mapping for [kubernetes.labels.app] tried to parse field [app] as o bject, but found a concrete value"}}
Has anyone seen that error before and knows how to solve it?
So, after looking into the logs and finding the mapping issue, I seem to have resolved it. The logs are now correctly parsed and sent to Elasticsearch.
To resolve it I had to raise the output retry limit and add the Replace_Dots option.
[OUTPUT]
Name es
Match *
Host ELASTICSERVER
Port 9200
Index <fluent-bit-{now/d}>
Retry_Limit 20
Replace_Dots On
It seems that at the beginning I had issues with the content being sent; because of that, the error seemed to continue after the change until a new index was created, which made me think that the error was still not resolved.
Found the solution after searching, but I'm leaving this here in case somebody runs into a similar kind of confusion. See the resolution at the end.
I'm trying to figure out why AWS CloudWatch log service fails to understand the right timestamp for my log events. Currently all my events are being saved under Time 2017-01-01 no matter what the actual timestamp in the event is.
I'm feeding the log from syslog, where Docker is saving the logged events, and I configured Docker to put the timestamp in the format:
170105/103242 (%y%m%d/%H%M%S)
I configured awslogs service with parameters:
datetime_format = %y%m%d/%H%M%S
I restarted the service and hit the server, but still, when I go to CloudWatch and look at the log entries, even entries that indeed start with the timestamp 170105/103242 are saved as events belonging to date 2017-01-01, which contains all events between 01-01 and 01-05.
When I look at the awslogs.log I can see following lines:
2017-01-05 11:05:28,633 - cwlogs.push - INFO - 29223 - MainThread - Missing or invalid value for use_gzip_http_content_encoding config. Defaulting to using gzip encoding.
2017-01-05 11:05:28,633 - cwlogs.push - INFO - 29223 - MainThread - Using default logging configuration.
This makes me think that the configuration probably isn't actually reading/using the datetime_format, but I don't understand why it falls back to the default. I tried to put
use_gzip_http_content_encoding = true
under general settings, but it doesn't change the errors.
I am running out of ideas - has anyone managed to configure the awslogs agent in a way where the datetime_format is actually used correctly?
Edit:
I'm currently hacking more console logging into the local python2.7 push.py to see what is going on :)
RESOLVED:
Ok, the problem was that I came into this project after the initial setup had been done, and I was under the impression that the logger was configured to use the .conf file at:
/etc/awslogs/awslogs.conf
which was dynamically populated.
The environment had a script that gave this location to awslogs-agent-setup.py, which was supposed to make the agent read its configuration from there.
However, this script didn't actually do what it was supposed to, and when the service started it actually read the config from
/var/awslogs/etc/awslogs.conf
which contained the default values.
So the actual resolution was to change the datetime_format parameter in the default config and forget about the config I thought the service was using.
Add logging to /var/awslogs/lib/python2.7/site-packages/cwlogs/push.py and see how the actual config parameters are interpreted.
You will probably find out that the service is actually using configuration file at default location:
/var/awslogs/etc/awslogs.conf
and hence you have to edit configuration values there for them to be actually read.
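For reference, a per-log stanza in that file looks roughly like this (a sketch; the file path, log group and stream names are placeholders, and datetime_format uses the format discussed above):
[/var/log/syslog]
file = /var/log/syslog
log_group_name = my-log-group
log_stream_name = {instance_id}
datetime_format = %y%m%d/%H%M%S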
I am creating a multi-instance setup on my developer edition of ColdFusion. I am running on Mavericks. My guide to the process is this article by Rob Brooks-Bilson.
I did everything right. However, I get a 'Bad Gateway' error when I try to ping the ColdFusion Administrator.
I think you might have one of the following issues:
The uriworkermap.properties file for your particular instance (cf10/config/wsconfig/1/) has the instance name spelled wrong.
Recheck the workers.properties file to make sure you have added the content properly. This step is very prone to copy-paste errors. There are two places you need to add your instance name: in the worker list and then in the port configuration (copied from the existing entry); see the sketch after this list.
There is some glitch in your mod_jk file.
Last but not least, please recheck that your server.xml (cf10//runtime/conf/) has been edited properly. Also check that the value of the port attribute of the Server tag and the Connector tag are different; due to some glitch they might get generated as the same.
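For the workers.properties step, a minimal sketch of the two places (the instance name "cfinstance2" and AJP port 8013 are hypothetical placeholders; use the values from your own connector setup):
# 1) add the instance to the worker list
worker.list=cfusion,cfinstance2
# 2) add its port configuration (copied from the existing cfusion block)
worker.cfinstance2.type=ajp13
worker.cfinstance2.host=localhost
worker.cfinstance2.port=8013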