How to warn instead of block on invalid quoting in ModSecurity - mod-security

I've got an issue with ModSecurity and I'm wondering if anyone can help. I'm running into a problem uploading files to my application whenever the file in question has a quote in its filename. Eventually I will add client-side validation that alerts a user to a quote in the filename they are trying to upload and tells them to rename it, but for now I need to amend my ModSecurity settings to ignore that particular check.
The modsecurity rule is:
SecRule MULTIPART_STRICT_ERROR "!@eq 0" \
"phase:2,t:none,log,deny,msg:'Multipart request body \
failed strict validation: \
PE %{REQBODY_PROCESSOR_ERROR}, \
BQ %{MULTIPART_BOUNDARY_QUOTED}, \
BW %{MULTIPART_BOUNDARY_WHITESPACE}, \
DB %{MULTIPART_DATA_BEFORE}, \
DA %{MULTIPART_DATA_AFTER}, \
HF %{MULTIPART_HEADER_FOLDING}, \
LF %{MULTIPART_LF_LINE}, \
SM %{MULTIPART_SEMICOLON_MISSING}, \
IQ %{MULTIPART_INVALID_QUOTING}, \
IH %{MULTIPART_INVALID_HEADER_FOLDING}, \
IH %{MULTIPART_FILE_LIMIT_EXCEEDED}'"
The error I'm getting is:
[2016-10-11T16:08:06.8336+01:00] [OHS] [ERROR:32] [OHS-9999] [blah.c] [host_id: blah-web-kc1d] [host_addr: 1.2.3.4] [tid: 1724] [user: SYSTEM] [ecid: 00ibIu6vODDF4ETzA8m3SD0000_^001B9G] [rid: 0] [VirtualHost: main] [client 1.2.3.4] ModSecurity: Access denied with code 403 (phase 2). Match of "eq 0" against "MULTIPART_STRICT_ERROR" required. [file "E:/blah/security/blah_base_rules.conf"] [line "65"] [msg "Multipart request body failed strict validation: PE 0, BQ 0, BW 0, DB 0, DA 0, HF 0, LF 0, SM , IQ 1, IH 0, IH 0"] [hostname "www.dev.uk"] [uri "/pls/dev/blah_details_form.process_blah"] [unique_id "ZOMG!ROFL.TL;DR"]
IQ 1 suggests it's the invalid quoting, which makes sense. How do I tell ModSecurity not to block when it detects invalid quoting, without disabling the rest of the rule?
Thanks
P.S. I know allowing quotes in a filename potentially introduces SQL injection, but we aren't worried about that for reasons I can't go into.

Just replace the current rule (which checks the overall MULTIPART_STRICT_ERROR variable) with separate rules for each individual variable, changing the deny to a warn for the one variable you don't want to block on:
SecRule REQBODY_PROCESSOR_ERROR "!@eq 0" \
"phase:2,t:none,log,deny,msg:'Multipart request body \
failed strict validation: \
PE %{REQBODY_PROCESSOR_ERROR}'"
SecRule MULTIPART_BOUNDARY_QUOTED "!@eq 0" \
"phase:2,t:none,log,deny,msg:'Multipart request body \
failed strict validation: \
BQ %{MULTIPART_BOUNDARY_QUOTED}'"
...etc.
SecRule MULTIPART_INVALID_QUOTING "!@eq 0" \
"phase:2,t:none,log,warn,msg:'Multipart request body \
failed strict validation: \
IQ %{MULTIPART_INVALID_QUOTING}'"
SecRule MULTIPART_INVALID_HEADER_FOLDING "!@eq 0" \
"phase:2,t:none,log,deny,msg:'Multipart request body \
failed strict validation: \
IH %{MULTIPART_INVALID_HEADER_FOLDING}'"
...etc.
Note that newer versions of ModSecurity (since 2.7) require a unique id action, so if your rule has an id (not shown in your question), make sure each of the new rules gets its own unique id.
Finally, it is also possible to check all the variables in one rule, or have rules sum up the values (or make them part of one large chained rule where the values are similarly summed) and then check that the sum is 0, but separate rules are probably simpler and easier to follow in future.
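As a rough illustration of that summing alternative (the rule ids and the tx.mp_score variable name below are made up for this sketch, not taken from the question):

```
# Accumulate each multipart flag we still want to block on into tx.mp_score,
# skipping MULTIPART_INVALID_QUOTING so it never contributes to the deny.
SecRule MULTIPART_BOUNDARY_QUOTED "!@eq 0" "id:900101,phase:2,t:none,pass,nolog,setvar:tx.mp_score=+1"
SecRule MULTIPART_HEADER_FOLDING "!@eq 0" "id:900102,phase:2,t:none,pass,nolog,setvar:tx.mp_score=+1"
# ...one pass/setvar rule per remaining variable...
SecRule TX:MP_SCORE "!@eq 0" "id:900110,phase:2,t:none,log,deny,msg:'Multipart strict validation failed'"
```

The warn-only variable simply never increments the counter, so invalid quoting gets logged by its own warn rule while everything else still denies.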

I fixed this by renaming the file being uploaded, since its name contained a pattern the rule doesn't accept.
How do I resolve the issue?
Simply put, rename the file to remove the offending special character from the file name,
or disable this security rule in /etc/{path}/mod_security.conf by commenting out the line "SecRule MULTIPART_STRICT_ERROR "!@eq 0" \" or via an .htaccess file - NOT RECOMMENDED AT ALL.
How is this error caused?
This error is caused by mod_security blocking a potentially malicious upload. While the upload may be completely harmless, mod_security has no way of knowing whether it is or not.
Typically, the content in question is an uploaded file whose name contains a special character, such as a single or double quote, which attackers often use to inject malicious scripts into websites.


How to disable JSON format and send only the log message to Sumologic with Fluentbit?

We are using Fluent Bit as a sidecar container in our ECS Fargate cluster, which runs a .NET application. Initially we faced an issue with Fluent Bit sending logs split across multiple lines, which we solved using the Fluent Bit multiline feature. Now the logs reach Sumo Logic as single multiline entries, but they are sent in JSON format, whereas we want Fluent Bit to send only the raw log line.
Logs are currently
{
  "date": 1675120653.269619,
  "container_id": "xvgbertytyuuyuyu",
  "container_name": "XXXXXXXXXX",
  "source": "stdout",
  "log": "2023-01-30 23:17:33.269Z DEBUG [.NET ThreadPool Worker] Connection.ManagedDbConnection - ComponentInstanceEntityAsync - Executing stored proc: dbo.prcGetComponentInstance"
}
We want only the line
2023-01-30 23:17:33.269Z DEBUG [.NET ThreadPool Worker] Connection.ManagedDbConnection - ComponentInstanceEntityAsync - Executing stored proc: dbo.prcGetComponentInstance
You need to modify the Fluent Bit configuration to have the following filters and output settings:
fluent.conf:
## prepare headers for Sumo Logic
[FILTER]
Name record_modifier
Match *
Record headers.content-type text/plain
## Set headers as headers attribute
[FILTER]
Name nest
Match *
Operation nest
Wildcard headers.*
Nest_under headers
Remove_prefix headers.
[OUTPUT]
Name http
...
# use log key as body
body_key $log
# use headers key as headers
headers_key $headers
That way you are crafting the HTTP request manually. This sends one request per log record, which is not necessarily a good idea. To mitigate that, you can add the following parser and use it (flush_timeout may need adjustment):
parsers.conf
# merge everything as one big log
[MULTILINE_PARSER]
name multiline-all
type regex
flush_timeout 500
#
# Regex rules for multiline parsing
# ---------------------------------
#
# configuration hints:
#
# - first state always has the name: start_state
# - every field in the rule must be inside double quotes
#
# rules | state name | regex pattern | next state
# ------|---------------|--------------------------------------------
rule "start_state" ".*" "cont"
rule "cont" ".*" "cont"
fluent.conf:
[INPUT]
name tail
...
multiline.parser multiline-all

Why do I get a signature error with this AWS bash deploy script?

I am trying to create a bash script to upload files to my s3 bucket. I am having difficulty generating the correct signature.
I get the following error message:
The request signature we calculated does not match the signature you provided. Check your key and signing method.
Thanks for your help!
Here is my script:
#!/usr/bin/env bash
#upload to S3 bucket
sourceFilePath="$1"
#file path at S3
folderPathAtS3="packages";
#S3 bucket region
region="eu-central-1"
#S3 bucket name
bucket="my-bucket-name";
#S3 HTTP Resource URL for your file
resource="/${bucket}/${folderPathAtS3}";
#set content type
contentType="gzip";
#get date as RFC 7231 format
dateValue="$(date +'%a, %d %b %Y %H:%M:%S %z')"
acl="x-amz-acl:private"
#String to generate signature
stringToSign="PUT\n\n${contentType}\n${dateValue}\n${acl}\n${resource}";
#S3 key
s3Key="my-key";
#S3 secret
s3Secret="my-secret-code";
#Generate signature, Amazon re-calculates the signature and compares if it matches the one that was contained in your request. That way the secret access key never needs to be transmitted over the network.
signature=$(echo -en "${stringToSign}" | openssl sha1 -hmac ${s3Secret} -binary | base64);
#Curl to make PUT request.
curl -L -X PUT -T "${sourceFilePath}" \
-H "Host: ${bucket}.${region}.amazonaws.com" \
-H "Date: ${dateValue}" \
-H "Content-Type: ${contentType}" \
-H "$acl" \
-H "Authorization: AWS ${s3Key}:${signature}" \
https://s3.amazonaws.com/${bucket}/${folderPathAtS3}
Your signature itself may be fine, but your request doesn't match it.
-H "Host: ${bucket}.${region}.amazonaws.com" \ is incorrect.
The correct value would be ${bucket}.s3.${region}.amazonaws.com. You're overlooking the s3. in the hostname... but even with that corrected, the request is still invalid, because your URL https://s3.amazonaws.com/${bucket}/... also includes the bucket, which means your bucket name is being implicitly added to the beginning of the object key because it appears twice.
Additionally, https://s3.amazonaws.com is us-east-1. To connect to the correct region, your URL needs to be one of these variants:
https://s3.${region}.amazonaws.com/${bucket}/${folderPathAtS3}
https://${bucket}.s3.${region}.amazonaws.com/${folderPathAtS3}
https://${bucket}.s3.amazonaws.com/${folderPathAtS3}
Use one of these formats, and eliminate -H "Host: ..." because it will then be redundant.
The last of the three URL formats only starts to work once the bucket is more than a few minutes or hours old; S3 creates these DNS entries automatically, but it takes some time.
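For reference, here is a minimal, self-contained sketch of how the SigV2 string-to-sign from the script above is assembled and signed. All values below are dummies, not real credentials:

```shell
#!/usr/bin/env bash
# Assemble the SigV2 string-to-sign exactly as the script above does,
# then HMAC-SHA1 it and base64-encode the result. Values are placeholders.
contentType="gzip"
dateValue="Tue, 11 Oct 2016 16:08:06 +0100"
acl="x-amz-acl:private"
resource="/my-bucket-name/packages"
s3Secret="dummy-secret"

stringToSign="PUT\n\n${contentType}\n${dateValue}\n${acl}\n${resource}"
signature=$(echo -en "${stringToSign}" | openssl sha1 -hmac "${s3Secret}" -binary | base64)

# An HMAC-SHA1 digest is 20 bytes, so the base64 signature is always 28 chars.
echo "${#signature}"
```

If the signature is the right shape but S3 still rejects the request, the mismatch is almost always in the canonicalized resource or the headers, as described above.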

Modsecurity - REQUEST_URI allow rule is not working

We have the following rules that are not working. We want to whitelist this warning (seen in the event viewer) for URIs containing "testinguri".
SecRule REQUEST_URI "@contains testinguri\?op\=message" "id:200006,phase:1,nolog,allow,ctl:ruleEngine=DetectionOnly,msg:'Test 1'"
SecRule REQUEST_URI "@beginsWith /en-us/testinguri?op=message" "id:200007,phase:1,nolog,allow,ctl:ruleEngine=DetectionOnly,msg:'Test 2'"
SecRule REQUEST_URI "^/en-us/testinguri\?op=message.*" "id:200008,phase:1,nolog,allow,ctl:ruleEngine=DetectionOnly,msg:'Test 3'"
SecRule REQUEST_URI "@contains testinguri" "id:200009,phase:1,nolog,allow,ctl:ruleEngine=DetectionOnly,msg:'Test 4'"
The rules above all serve the same purpose; we added several variants in the hope that one of them would work, but no luck.
Below is the warning from the event viewer. We want to allow URIs that contain "testinguri". ModSecurity is running in detection mode right now.
ModSecurity: Warning. Operator GE matched 5 at TX:inbound_anomaly_score. [file "C:\Program Files\ModSecurity IIS\owasp-modsecurity-crs/rules/RESPONSE-980-CORRELATION.conf"] [line "86"]
[id "980130"] [msg "Inbound Anomaly Score Exceeded (Total Inbound Score: 5 - SQLI=0,XSS=0,RFI=0,LFI=0,RCE=5,PHPI=0,HTTP=0,SESS=0): Remote Command Execution: Windows Command Injection; individual paranoia level scores: 5, 0, 0, 0"] [tag "event-correlation"] [hostname "computerName"] [uri "/en-us/testinguri?op=message&to=FULL URI..."] [unique_id "454534234234234"]
Can you please help with this? Thanks.
We were able to figure it out. We created a conf file, test.conf, and put the rules we wanted to whitelist in it.
Then we added a reference to that file at the end of modsecurity_iis.conf.
This works for us. Hope this will help someone. Thanks.
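A rough sketch of that wiring (the file name, path, and rule id are illustrative; adjust them for your install):

```
# test.conf -- whitelist rules kept in their own file
SecRule REQUEST_URI "@contains testinguri" \
    "id:200009,phase:1,nolog,allow,msg:'Whitelist testinguri'"

# modsecurity_iis.conf -- reference the whitelist file as the last line
Include test.conf
```

Loading the whitelist last matters: an allow in phase 1 stops further phase-1 rule processing, so the rule has to be registered where it can actually fire before the scoring rules act on the transaction.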

"YYYYMMDD": Invalid identifier error while trying through SQOOP

Please help me out with the error below. The query works fine when run directly in Oracle, but fails when run through a Sqoop import.
version : Hadoop 0.20.2-cdh3u4 and Sqoop 1.3.0-cdh3u5
sqoop import $SQOOP_CONNECTION_STRING \
--query 'SELECT st.reference,u.unit,st.reading,st.code,st.read_id,st.avg FROM reading st,tunit tu,unit u
WHERE st.reference=tu.reference and st.number IN ('218730','123456') and tu.unit_id = u.unit_id
and u.enrolled='Y' AND st.reading <= latest_off and st.reading >= To_Date('20120701','yyyymmdd')
and st.type_id is null and $CONDITIONS' \
--split-by u.unit \
--target-dir /sample/input
Error:
12/10/10 09:33:21 ERROR manager.SqlManager: Error executing statement:
java.sql.SQLSyntaxErrorException: ORA-00904: "YYYYMMDD": invalid identifier
followed by....
12/10/10 09:33:21 ERROR sqoop.Sqoop: Got exception running Sqoop:
java.lang.NullPointerException
Thanks & Regards,
Tamil
I believe the problem is actually on the Bash side (or in your command-line interpreter). Your query contains, for example, the fragment u.enrolled='Y'. Notice that you're quoting character constants with single quotes, while also putting the entire query into single quotes: --query 'YOUR QUERY'. That results in something like --query '...u.enrolled='Y'...', which Bash collapses to '...u.enrolled=Y...'. You can verify this by using echo to see exactly what Bash will do with your string before it is passed to Sqoop.
jarcec@jarcec-thinkpad ~ % echo '...u.enrolled='Y'...'
...u.enrolled=Y...
I would recommend either escaping all single quotes (\') inside your query, or using double quotes around the entire query. Note that the latter option requires escaping $ characters with a backslash (\$).
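A quick way to see both fixes at work (this just echoes the strings so you can inspect what the shell produces; the query text is illustrative):

```shell
#!/usr/bin/env bash
# Option 1: double-quote the whole query; inner single quotes survive,
# but $ must be escaped so the shell doesn't expand $CONDITIONS.
echo "SELECT 1 FROM t WHERE u.enrolled='Y' and \$CONDITIONS"

# Option 2: keep single quotes, writing each inner quote as '\''
# (close quote, escaped literal quote, reopen quote).
echo 'SELECT 1 FROM t WHERE u.enrolled='\''Y'\'' and $CONDITIONS'

# Both lines print: SELECT 1 FROM t WHERE u.enrolled='Y' and $CONDITIONS
```

Either form hands Sqoop the query with the quotes intact and $CONDITIONS left for Sqoop to substitute.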

Procmail: Move to folder and mark as read

a simple question:
I want to move emails with a certain subject to a folder and mark them as read afterwards. Moving works for me with
:0: H
* ^Subject:.*(ThisIsMySubject)
$HOME/mail/ThisIsMyFolder
But how to mark the mails as read?
Note: Updated dec. 16th 2011
Procmail solution
The following recipe works for me. .Junk is the spam folder:
MAILDIR=$HOME/Maildir
:0
* ^X-Spam-Flag: YES
{
# First deliver to maildir so LASTFOLDER gets set
:0 c
.Junk
# Manipulate the filename
:0 ai
* LASTFOLDER ?? ()\/[^/]+^^
|mv "$LASTFOLDER" "$MAILDIR/.Junk/cur/$MATCH:2,S"
}
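The rename in that recipe works because Maildir encodes message flags in the filename: everything after ":2," is a flag list, and "S" means seen/read. A tiny illustration of the filename manipulation the mv performs (the paths and message name are made up):

```shell
#!/usr/bin/env bash
# Maildir flags live after ":2," in the filename; appending "S" marks the
# message as seen. This mirrors the mv in the procmail recipe above.
MAILDIR="$HOME/Maildir"
LASTFOLDER="$MAILDIR/.Junk/new/1323990000.M1P2.host"   # hypothetical message
base="${LASTFOLDER##*/}"                               # strip the directory
target="$MAILDIR/.Junk/cur/$base:2,S"
echo "$target"
```

Note that the message also moves from new/ to cur/; mail clients only honor the flag suffix for messages in cur/.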
Maildrop solution
Preface: recently I had (no, I wanted) to do the same thing with a maildrop filter. After reading man maildropfilter I concocted the following recipe. I'm sure people will find this handy; I know I do.
The example below marks new emails as read, and also marks old unread messages as read.
SPAMDIRFULL="$DEFAULT/.Junk"
if ( /^X-Spam-Flag: YES$/ || \
/^X-Spam-Level: \*\*\*/ || \
/^Subject: \*+SPAM\*/ )
{
exception {
cc "$SPAMDIRFULL"
`for x in ${SPAMDIRFULL}/new/*; do [ -f $x ] && mv $x ${SPAMDIRFULL}/cur/${x##*/}:2,S; done`
`for x in ${SPAMDIRFULL}/cur/*:2,; do [ -f $x ] && mv $x ${SPAMDIRFULL}/cur/${x##*/}S; done`
to "/dev/null"
}
}
Note that the exception command might seem counterintuitive. The manual states the following:
The exception statement traps errors that would normally cause
maildrop to terminate. If a fatal error is encountered anywhere within
the block of statements enclosed by the exception clause, execution
will resume immediately following the exception clause.