mod_security false positives

I'm getting lots of false positives after just setting up mod_security. I'm running it in detection-only mode, so nothing is affected yet, but these filters will start blocking requests once it goes live.
I'm afraid I don't fully understand the significance of these filters; I get hundreds of them on nearly every domain, and all the requests look legitimate.
Request Missing a User Agent Header
Request Missing an Accept Header
What is the best thing to do here? Should I disable these filters? Can I set the severity lower so that requests won't be blocked?
Here is a complete entry
[22/Nov/2011:21:32:37 --0500] u6t6IX8AAAEAAHSiwYMAAAAG 72.47.232.216 38543 72.47.232.216 80
--5fcb9215-B--
GET /Assets/XHTML/mainMenu.html HTTP/1.0
Host: www.domain.com
Content-type: text/html
Cookie: pdgcomm-babble=413300:451807c5d49b8f61024afdd94e57bdc3; __utma=100306584.1343043347.1321115981.1321478968.1321851203.4; __utmz=100306584.1321115981.1.1.utmcsr=google|utmccn=(organic)|utmcmd=organic|utmctr=XXXXXXXX%20clip%20ons
--5fcb9215-F--
HTTP/1.1 200 OK
Last-Modified: Wed, 23 Nov 2011 02:01:02 GMT
ETag: "21e2a7a-816d"
Accept-Ranges: bytes
Content-Length: 33133
Vary: Accept-Encoding
Connection: close
Content-Type: text/html
--5fcb9215-H--
Message: Operator EQ matched 0 at REQUEST_HEADERS. [file "/etc/httpd/modsecurity_crs/base_rules/modsecurity_crs_21_protocol_anomalies.conf"] [line "47"] [id "960015"] [rev "2.2.1"] [msg "Request Missing an Accept Header"] [severity "CRITICAL"] [tag "PROTOCOL_VIOLATION/MISSING_HEADER_ACCEPT"] [tag "WASCTC/WASC-21"] [tag "OWASP_TOP_10/A7"] [tag "PCI/6.5.10"]
Message: Operator EQ matched 0 at REQUEST_HEADERS. [file "/etc/httpd/modsecurity_crs/base_rules/modsecurity_crs_21_protocol_anomalies.conf"] [line "66"] [id "960009"] [rev "2.2.1"] [msg "Request Missing a User Agent Header"] [severity "NOTICE"] [tag "PROTOCOL_VIOLATION/MISSING_HEADER_UA"] [tag "WASCTC/WASC-21"] [tag "OWASP_TOP_10/A7"] [tag "PCI/6.5.10"]
Message: Warning. Operator LT matched 5 at TX:inbound_anomaly_score. [file "/etc/httpd/modsecurity_crs/base_rules/modsecurity_crs_60_correlation.conf"] [line "33"] [id "981203"] [msg "Inbound Anomaly Score (Total Inbound Score: 4, SQLi=5, XSS=): Request Missing a User Agent Header"]
Stopwatch: 1322015557122593 24656 (- - -)
Stopwatch2: 1322015557122593 24656; combined=23703, p1=214, p2=23251, p3=2, p4=67, p5=168, sr=88, sw=1, l=0, gc=0
Producer: ModSecurity for Apache/2.6.1 (http://www.modsecurity.org/); core ruleset/2.2.1.
Server: Apache/2.2.3 (CentOS)

If you look at the Producer line under Section H of the audit log entry you posted, you will see that you are using the OWASP ModSecurity Core Rule Set (CRS) v2.2.1. In that case, I suggest you review the documentation on the project page -
https://www.owasp.org/index.php/Category:OWASP_ModSecurity_Core_Rule_Set_Project#tab=Documentation
Specifically, you should review these two blog posts that I wrote -
http://blog.spiderlabs.com/2010/11/advanced-topic-of-the-week-traditional-vs-anomaly-scoring-detection-modes.html
http://blog.spiderlabs.com/2011/08/modsecurity-advanced-topic-of-the-week-exception-handling.html
Blog post #1 is useful so that you understand which "mode of operation" you are using for the CRS. Looking at your audit log, it appears you are running in anomaly scoring mode, where the rules do detection but the blocking decision is made separately by inspecting the overall anomaly score in the modsecurity_crs_49_inbound_blocking.conf file.
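For reference, in CRS 2.x that threshold is set by a SecAction in the modsecurity_crs_10_config.conf file mentioned below; the line looks roughly like this (variable names can differ slightly between CRS versions, so treat it as a sketch rather than an exact quote):
# In modsecurity_crs_10_config.conf: the inbound blocking threshold (default 5)
SecAction "phase:1,t:none,nolog,pass,setvar:tx.inbound_anomaly_score_level=5"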
Blog post #2 is useful so that you can decide exactly how you want to handle these two rules. If you feel that they are not important to you, then I would suggest using the SecRuleRemoveById directive to disable these rules in your own modsecurity_crs_60_exceptions.conf file. As it stands now, these two alerts only generate an inbound anomaly score of 4, which is below the default threshold of 5 set in the modsecurity_crs_10_config.conf file, so the request is not blocked.
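For example, a minimal exceptions entry disabling both rules could look like this (the rule IDs come from the audit log above; note that SecRuleRemoveById has to be loaded after the rules it removes):
# In modsecurity_crs_60_exceptions.conf:
# 960015 = "Request Missing an Accept Header", 960009 = "Request Missing a User Agent Header"
SecRuleRemoveById 960015
SecRuleRemoveById 960009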
Looking at your audit log example, while this request did generate alerts, the transaction was not blocked. If it had been, the message data under Section H would have stated "Access denied...".
As for the purpose of these rules: they are meant to flag requests that were not generated by standard web browsers (IE, Chrome, Firefox, etc.), since all of these browsers send both User-Agent and Accept request headers per the HTTP RFC.
One last comment - I would suggest that you use the official OWASP ModSecurity CRS mailing list for these types of questions -
https://lists.owasp.org/mailman/listinfo/owasp-modsecurity-core-rule-set
You can also search the archives for answers.
Cheers,
Ryan Barnett
ModSecurity Project Lead
OWASP ModSecurity CRS Project Lead

These aren't false positives. Your request headers lack User-Agent and Accept headers; such requests are usually sent by scanners or hacking tools.

Related

Send email with Microsoft Flow when Power BI alert is triggered

I am trying to build a flow that sends an email to me when a Power BI alert is triggered. I have built the flow and am now trying the test option.
This gives me a status code 429 error.
Additional details:
Headers
Retry-After: 600
Strict-Transport-Security: max-age=31536000;includeSubDomains
X-Frame-Options: deny
X-Content-Type-Options: nosniff
RequestId: ad5eb81f-a02d-4edd-b0c2-964cef662d01
Timing-Allow-Origin: *
x-ms-apihub-cached-response: false
Cache-Control: no-store, must-revalidate, no-cache
Date: Thu, 28 Mar 2019 12:35:42 GMT
Content-Length: 254
Content-Type: application/json
Body
{
  "error": {
    "code": "MicrosoftFlowCheckAlertStatusEndpointThrottled",
    "pbi.error": {
      "code": "MicrosoftFlowCheckAlertStatusEndpointThrottled",
      "parameters": {},
      "details": [],
      "exceptionCulprit": 1
    }
  }
}
I noticed this 429 is caused by too many requests, but I do not understand it, since I only have one alert, and this is a very simple flow that is connected to this one alert and should then send an email.
In general, a 429 error means you have exceeded the limit of triggers per period (probably 60 seconds, according to https://learn.microsoft.com/en-gb/connectors/powerbi/). You can find these parameters with the Peek code tool.
My suggestion is to check how many alerts were raised for the tracked data in the Power BI service; a limit that is too low might be the answer.
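As a general illustration (not specific to Power Automate, which handles retries on its own), a throttled client is expected to wait the number of seconds given in the Retry-After header shown above before retrying; a minimal Python sketch of that behaviour, assuming the requests library is available:
import time
import requests  # third-party HTTP library, assumed available

def call_with_backoff(url, max_attempts=3):
    """GET a URL, sleeping for the advertised Retry-After interval whenever a 429 comes back."""
    response = None
    for _ in range(max_attempts):
        response = requests.get(url)
        if response.status_code != 429:
            break
        # The error above advertised Retry-After: 600; fall back to that if the header is missing.
        time.sleep(int(response.headers.get("Retry-After", "600")))
    return response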
I got the same error.
It was appearing when testing manually.
When I changed the testing to "Automatic", the error changed, and it became clear that the "Send an e-mail" step caused the issue.
It turned out that the second step needed to be changed to the Outlook "Send an email (V2)" step.
It was really confusing, as the MicrosoftFlowCheckAlertStatusEndpointThrottled error was irrelevant and not the real issue!

Google IAP Public Keys Expiry?

This page provides public keys to decrypt headers from Google's Identity Aware Proxy. Making a request to the page provides its own set of headers, one of which is Expires (it contains a datetime).
What does the expiration date actually mean? I have noticed it fluctuating occasionally, and have not noticed the public keys changing at the expiry time.
I have read about Securing Your App With Signed Headers, and it goes over how to fetch the keys after every key ID mismatch, but I am looking to make a more efficient cache that can fetch the keys less often based on the expiry time.
Here are all the headers from the public keys page:
Accept-Ranges →bytes
Age →1358
Alt-Svc →quic=":443"; ma=2592000; v="39,38,37,36,35"
Cache-Control →public, max-age=3000
Content-Encoding →gzip
Content-Length →519
Content-Type →text/html
Date →Thu, 29 Jun 2017 14:46:55 GMT
Expires →Thu, 29 Jun 2017 15:36:55 GMT
Last-Modified →Thu, 29 Jun 2017 04:46:21 GMT
Server →sffe
Vary →Accept-Encoding
X-Content-Type-Options →nosniff
X-XSS-Protection →1; mode=block
The Expires header controls how long HTTP caches are supposed to hold onto that page. We didn't bother giving Google's content-serving infrastructure any special instructions for the keyfile, so whatever you're seeing there is the default value.
Is there a reason the "refresh the keyfile on lookup failure" approach isn't a good fit for your application? I'm not sure you'll be able to do any better than that, since:
Unless there's a bug or problem, you should never get a key lookup failure.
Even if you did have some scheduled key fetch, it'd probably still be advisable to refresh the keyfile on lookup failure as a fail-safe.
We don't currently rotate the keys super-frequently, though that could change in the future (which is why we don't publish the rotation interval), so it shouldn't be a significant source of load. Are you observing that refreshing the keys is impacting you?
--Matthew, Google Cloud IAP engineer
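For what it's worth, a minimal sketch of the refresh-on-lookup-failure cache described above, assuming the key page returns a JSON object mapping key IDs to keys and that the requests library is available (the URL below is a placeholder, not the real endpoint):
import requests  # third-party HTTP library, assumed available

KEYS_URL = "https://example.com/iap-public-keys"  # placeholder: use the key page you already fetch

_keys = {}

def get_key(kid):
    """Return the key for a key ID, refetching the keyfile only when the ID is unknown."""
    global _keys
    if kid not in _keys:
        # Lookup failure: refresh the cache once and try again.
        _keys = requests.get(KEYS_URL).json()
    return _keys[kid]  # still raises KeyError if the key ID genuinely doesn't exist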

Akka Http turn off header parsing

I'm trying to implement a transparent proxy with Akka-Http & Akka-Stream.
However, I'm running into an issue where Akka HTTP manipulates and parses the response headers from the upstream server.
For example, when the upstream server sends the following header:
Expires: "0"
Akka will parse this into an Expires header and "correct" the value to:
Expires: "Wed, 01 Jan 1800 00:00:00 GMT"
Although an actual date is arguably better than "0", I don't want this proxy to touch any of the headers. I want the proxy to be transparent and not "fix" any headers passing through.
Here is the simple proxy:
Http().bind("localhost", 9000).to(Sink.foreach { connection =>
  logger.info("Accepted new connection from " + connection.remoteAddress)
  connection handleWith pipeline
}).run()
The proxy flow:
Flow[HttpRequest].map(x => (x, UUID.randomUUID().toString())).via(Http().superPool[String]()).map(x => x._1)
I noticed that the http-server configuration allows me to configure and keep the raw request headers, but there doesn't seem to be one for http-client.
raw-request-uri-header = off
Is there way I can configure Akka to leave the header values as is when I respond to the client?
This is not possible currently.
I wonder how hard it would be to expose such a mode, and how much complexity we'd have to pay for it; however, I err on the side of this feature not being able to pull its weight.
Feel free to open a ticket for it on http://github.com/akka/akka where we can discuss it further. Some headers are treated specially, and we really do want to parse them into the proper model (imagine WebSocket upgrades, Connection headers, etc.), so there would have to be a strong case behind this feature request to make it pull its weight, IMO.
(I'm currently maintaining Akka HTTP).

Is there a way to configure Amazon Cloudfront to delay the time before my S3 object reaches clients by specifying a release date? [closed]

I would like to upload content to S3 but schedule a time at which CloudFront delivers it to clients, rather than having it vended immediately upon processing. Is there a configuration option to accomplish this?
EDIT: This time should be able to differ per object in S3.
There is something of a configuration option to allow this, and it does allow you to restrict specific files -- or path prefixes -- from being served up prior to a given date and time... though it's slightly... well, I don't even know what derogatory term to use to describe it. :) But it's the only thing I can come up with that uses entirely built-in functionality.
First, a quick reminder, that public/unauthenticated read access to objects in S3 can be granted at the bucket level with bucket policies, or at the object level, using "make everything public" when uploading the object in the console, or sending x-amz-acl: public-read when uploading via the API. If either or both of these is present, the object is publicly readable, except in the face of any policy denying the same access. Deny always wins over Allow.
So, we can create a bucket policy statement matching a specific file or prefix, denying access prior to a certain date and time.
{
  "Version": "2012-10-17",
  "Id": "Policy1445197123468",
  "Statement": [
    {
      "Sid": "Stmt1445197117172",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/hello.txt",
      "Condition": {
        "DateLessThan": {
          "aws:CurrentTime": "2015-10-18T15:55:00.000-0400"
        }
      }
    }
  ]
}
Using a wildcard would allow everything under a specific path to be subject to the same restriction.
"Resource": "arn:aws:s3:::example-bucket/cant/see/these/yet/*",
This works, even if the object is public.
This example blocks all GET requests for matching objects by anybody, regardless of permissions they may have. Signed URLs, etc., are not sufficient to override this policy.
The policy statement is checked for validity when it is created; however, the object being matched does not have to exist yet, so creating the policy before the object does not make the policy invalid.
Live test:
Before the expiration time: (unrelated request/response headers removed for clarity)
$ curl -v example-bucket.s3.amazonaws.com/hello.txt
> GET /hello.txt HTTP/1.1
> Host: example-bucket.s3.amazonaws.com
> Accept: */*
>
< HTTP/1.1 403 Forbidden
< Content-Type: application/xml
< Transfer-Encoding: chunked
< Date: Sun, 18 Oct 2015 19:54:55 GMT
< Server: AmazonS3
<
<?xml version="1.0" encoding="UTF-8"?>
* Connection #0 to host example-bucket.s3.amazonaws.com left intact
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>AAAABBBBCCCCDDDD</RequestId><HostId>g0bbl3dyg00kbunc4Ofl1n3n0iz3h3rehahahasqlbot1337kenqweqwel24234kj41l1ke</HostId></Error>
After the specified date and time:
$ curl -v example-bucket.s3.amazonaws.com/hello.txt
> GET /hello.txt HTTP/1.1
> Host: example-bucket.s3.amazonaws.com
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Sun, 18 Oct 2015 19:55:05 GMT
< Last-Modified: Sun, 18 Oct 2015 19:36:17 GMT
< ETag: "78016cea74c298162366b9f86bfc3b16"
< Accept-Ranges: bytes
< Content-Type: text/plain
< Content-Length: 15
< Server: AmazonS3
<
Hello, world!
These tests were done against the S3 REST endpoint for the bucket, but the website endpoint for the same bucket yields the same results -- only the error message is in HTML rather than XML.
The positive aspect of this policy is that since the object is public, the policy can be removed any time after the date passes, because it is denying access before a certain time, rather than allowing access after a certain time -- logically the same, but implemented differently. (If the policy allowed access after rather than denying access before, the policy would have to stick around indefinitely; this way, it can just be deleted.)
You could use custom error documents in either S3 or CloudFront to present the viewer with slightly nicer output... probably CloudFront, since you can customize each error code individually, creating a custom 403 page.
The major drawback to this approach is, of course, that the policy must be edited for each object or path prefix; even though it works per object, it is not something that is set on the object itself.
And there is a limit to how many policy statements you can include, because of the size restriction on bucket policies:
Note
Bucket policies are limited to 20 KB in size.
http://docs.aws.amazon.com/AmazonS3/latest/dev/access-policy-language-overview.html
The other solution that comes to mind involves deploying a reverse proxy component (such as HAProxy) in EC2 between CloudFront and the bucket, passing requests through and reading the custom metadata from the object's response headers, looking for a header such as x-amz-meta-embargo-until: 2015-10-18T19:55:00Z and comparing its value to the system clock; if the current time is before the cutoff time, the proxy would drop the connection from S3 and replace the response headers and body with a locally generated 403 message, so the client would not be able to fetch the object until the designated time had passed.
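For illustration only, a rough Python sketch of that embargo check (using the header name from the example above; this is not a tested implementation):
from datetime import datetime, timezone

def is_embargoed(response_headers):
    """Return True while the object's x-amz-meta-embargo-until timestamp is still in the future."""
    embargo = response_headers.get("x-amz-meta-embargo-until")  # e.g. "2015-10-18T19:55:00Z"
    if embargo is None:
        return False
    cutoff = datetime.strptime(embargo, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
    return datetime.now(timezone.utc) < cutoff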
This solution seems fairly straightforward to implement, but it requires a non-built-in component, so it doesn't meet the constraints of the question, and I haven't built a proof of concept; however, I already use HAProxy with Lua in front of some buckets to give S3 other capabilities not offered natively, such as removing sensitive custom metadata from responses and modifying the XML of S3 error responses (and directing the browser to apply an XSL stylesheet to it), so there's no obvious reason why this application wouldn't work equally well.
Lambda@Edge can apply your customized access control easily.

S3 PUT Bucket to a location endpoint results in a MalformedXML exception

I'm trying to create an AWS S3 bucket using libcurl, as follows.
Location endpoint:
curl_easy_setopt(curl, CURLOPT_URL, "http://s3-us-west-2.amazonaws.com/");
Assembled RESTful HTTP header:
PUT / HTTP/1.1
Date:Fri, 18 Apr 2014 19:01:15 GMT
x-amz-content-sha256:ce35ff89b32ad0b67e4638f40e1c31838b170bbfee9ed72597d92bda6d8d9620
host:tempviv.s3-us-west-2.amazonaws.com
x-amz-acl:private
content-type:text/plain
Authorization: AWS4-HMAC-SHA256 Credential=AKIAISN2EXAMPLE/20140418/us-west-2/s3/aws4_request, SignedHeaders=date;x-amz-content-sha256;host;x-amz-acl;content-type, Signature=e9868d1a3038d461ff3cfca5aa29fb5e4a4c9aa3764e7ff04d0c689d61e6f164
Content-Length: 163
The body contains the bucket configuration
<CreateBucketConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><LocationConstraint>us-west-2</LocationConstraint></CreateBucketConfiguration>
I get the following exception back.
MalformedXML: The XML you provided was not well-formed or did not validate against our published schema
I've been able to carry out the same operation through the aws cli.
Things I've also tried:
1) In the xml, used \ to escape the quotes (i.e., xmlns=\"http:.../\").
2) Not providing a CreateBucketConfiguration ("Although s3 documentation suggests this is not allowed when sending the request to a location endpoint").
3) A GET Service call to the same endpoint lists all the provisioned buckets correctly.
Please do let me know if there is anything else I might be missing here.
OK, the problem was that I was not transferring the entire XML body across, as revealed by a Wireshark trace. Once I fixed that, the problem went away.
By the way, escaping the quotes with a backslash works, but using &quot; does not.