ModSecurity blocked requests stats

I installed ModSecurity and I would like to get some usage statistics out of it. The number of blocked requests would be a good starting point.
The only solution I could come up with is to parse access_log for 403 HTTP statuses; is there something more clever out there?

There are several other ways to see the blocked requests.
error.log contains detailed information, e.g. which rule blocked the request and why (it shows the target and the operator with its argument).
audit.log (the default config uses modsec_audit.log) shows even more detail: HTTP status, client IP and port, the request headers and body, and also the response headers and body.
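If a simple count is enough, the access_log approach from the question also works with a small script. Here is a minimal sketch in Python; it assumes a combined-format access_log at a hypothetical path and that ModSecurity denies with status 403, so adjust both to your setup:
import re
from collections import Counter

# Hypothetical path; adjust to your Apache/ModSecurity layout.
ACCESS_LOG = "/var/log/apache2/access_log"

# Combined log format: client IP is the first field, the status code
# follows the quoted request line.
LINE_RE = re.compile(r'^(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "[^"]*" (?P<status>\d{3}) ')

blocked = Counter()
with open(ACCESS_LOG) as fh:
    for line in fh:
        m = LINE_RE.match(line)
        if m and m.group("status") == "403":
            blocked[m.group("ip")] += 1

print("total blocked requests:", sum(blocked.values()))
for ip, count in blocked.most_common(10):
    print(ip, count)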

Localhost 8080 is not allowed 'Access-Control-Allow-Origin'

Building the app and running it on the iPhone device gives the following errors:
I managed to get the details of the request sent from the iOS device. Please check the image below: is the request valid, or do we need to change something related to how requests are sent? There is no other place we can make changes except the way we are sending requests.
The headers sent in the request are shown here:
To enable cross-origin requests, the server that is responding to the request needs to set headers that tell the requesting browser (which is on a different domain) that it is allowed to load the resource from the other domain. Essentially, on the server (localhost) you need to add headers that white-list the calling domain as one that is allowed to fetch resources cross-domain.
This is a useful article to help you ramp up: https://www.html5rocks.com/en/tutorials/cors/
Also see http://excellencenodejsblog.com/handing-cors-for-your-mobile-app/
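As a rough illustration only (it stands in for whatever server you actually have on localhost, and the allowed origin below is a placeholder), a minimal Python server that white-lists one origin and answers the CORS preflight could look like this sketch:
from http.server import BaseHTTPRequestHandler, HTTPServer

ALLOWED_ORIGIN = "https://app.example.com"  # placeholder: the calling origin you want to allow

class CorsHandler(BaseHTTPRequestHandler):
    def _cors_headers(self):
        # These headers tell the browser the cross-origin request is allowed.
        self.send_header("Access-Control-Allow-Origin", ALLOWED_ORIGIN)
        self.send_header("Access-Control-Allow-Methods", "GET, POST, OPTIONS")
        self.send_header("Access-Control-Allow-Headers", "Content-Type, Authorization")

    def do_OPTIONS(self):  # CORS preflight request
        self.send_response(204)
        self._cors_headers()
        self.end_headers()

    def do_GET(self):
        body = b'{"ok": true}'
        self.send_response(200)
        self._cors_headers()
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("localhost", 8080), CorsHandler).serve_forever()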

ShimmerCat with reverse proxy when using "the old way"

I have used ShimmerCat with sc-tool to connect to my development sites as described here, and everything has always worked like a charm, but I also wanted to follow the "old way" and configure my /etc/hosts. In this case I had a small problem: the server ran OK and I could access my development site (let's say I used https://www.example.com:4043/), but I'm also using a reverse proxy as described in this article and in the config file reference. It redirects to a Django app I'm using. Let's say this is my devlove.yaml config file:
---
shimmercat-devlove:
    domains:
        www.example.com:
            root-dir: site
            consultant: 8080
            cache-key: xxxxxxx
        api.example.com:
            port: 8080
The problem is that when I try to access a URL that requests the API, a 404 response comes back from the API. Let me explain with an example: I access https://www.example.com:4043/country/, and on that page I make a request to the API: /api/<country>/towns/. The API endpoint returns a 404, so it is not finding that URL, which does not happen when using Google Chrome with sc-tool. I have set both domains, www.example.com and api.example.com, in my /etc/hosts. I have been trying to solve it, but without any luck. Is there something I'm missing? Any help will be welcome. Thanks in advance.
With a bit more data, we may be able to find the issue. In the meantime, here is a list of troubleshooting tips:
Possible issue: DNS is cached in browser, /etc/hosts is not being used (yet)
This can happen if your browser has not done a DNS lookup since before you changed your /etc/hosts file. In that case the connection goes to a host on the Internet that may not have the API endpoint you are calling.
Troubleshooting: Check ShimmerCat's log for the requests. If this is the issue, closing and opening the browser may solve the issue.
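As a side check (a sketch only, using the domains from the question), you can confirm that the operating-system resolver, as opposed to the browser's own cache, is picking up the /etc/hosts entries:
import socket

# Both names should resolve to the loopback address if /etc/hosts is in effect.
for name in ("www.example.com", "api.example.com"):
    try:
        print(name, "->", socket.gethostbyname(name))
    except socket.gaierror as exc:
        print(name, "-> resolution failed:", exc)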
Possible issue: the host header is incorrect
ShimmerCat uses the Host header in HTTP/1.1 requests and the :authority header in HTTP/2 requests to distinguish the domains. It always discards any port number present in them. If these headers are not set, or are set to a domain other than the ones ShimmerCat is configured to listen for, the server will consider the situation so despicable that it will just close the connection.
Troubleshooting: This is not a 404 error, but a connection close (if trying to connect un-proxied, directly to the SSL port where ShimmerCat is listening), or a "Socks Connection Failed" (if trying to connect through ShimmerCat's built-in SOCKS5 proxy). In the former case, the server will print the message "Rejected request to Just https://some-domain-or-ip/some/path" in its log, using the actual value for the domain, or "Rejected request to Nothing" if no header was present. The second case is more complicated, because the SOCKS5 proxy sits before the HTTP routing algorithm.
In any case, the browser will put a red line in the network panel of the developer tools. If you are accessing the server using curl, like this:
curl -k -H host:api.incorrect-domain.com https://127.0.0.1:4043/contents/blog/data-density/
or like this:
curl -k --http2 -H host:api.incorrect-domain.com https://127.0.0.1:4043/contents/blog/data-density/
(notice the --http2 parameter in the second form), you will get a response:
curl: (56) Unexpected EOF
Extra-tip: There is a field for the network address in the browser's developer tools. Check it, it may tell you something!
Possible issue: something gets messed up when passing the request to the api back-end.
API backends are also sensitive to the host header, and to additional things like authentication cookies and request parameters.
Troubleshooting: A way to diagnose things is to invoke ShimmerCat with the --show-proxied-headers command-line option. It makes ShimmerCat report the proxied headers in its log:
Issuing request with headers :authority: api.example.com
:method: GET
:path: /my/api/endpoint/path/
:scheme: https
accept: */*
user-agent: curl/7.47.0
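To tell a proxying problem apart from a back-end problem, you can also send the same request straight to the back-end with the same host header the proxy forwards. A small sketch, assuming the Django app from the question listens on 127.0.0.1:8080 and using a hypothetical concrete country in place of /api/<country>/towns/:
import http.client

# Talk to the back-end directly, bypassing ShimmerCat.
conn = http.client.HTTPConnection("127.0.0.1", 8080)
conn.request(
    "GET",
    "/api/france/towns/",                 # hypothetical value for /api/<country>/towns/
    headers={"Host": "api.example.com"},  # same host header the proxy would forward
)
resp = conn.getresponse()
print(resp.status, resp.reason)
print(resp.read(200))
conn.close()
If this request also returns a 404, the problem is in the Django app's routing rather than in ShimmerCat's proxying.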
Possible issue: there are two instances or more of ShimmerCat running
...and they are using different configurations. ShimmerCat uses port sharing among several processes to increase availability. A downside of this is that it is perfectly possible to mistakenly start ShimmerCat, forget to stop it, and start it again after changing some configuration bit. The two instances will be running at the same time, and either of them may pick up connections made to the listening port.
Troubleshooting: Shutdown all instances of ShimmerCat, then double-check there are none running by using the corresponding form of the ps command, and start the server with the configuration you want.

Discriminating between infrastructure and business logic when using HTTP status codes

We are trying to build a REST interface that allows users to test the existence of a specific resource. Let's assume we're selling domain names: the user needs to determine if the domain is available.
An HTTP GET combined with 200 and 404 response codes seems sensible at first glance.
The problem we have is discriminating between a request successfully served by our lookup service, and a request served under exceptional behaviour from other components. For example:
404 and 200 can be returned by intermediary proxies that actually block the request. This can be due to proxy misconfiguration, or even external infrastructure such as coffee shop Wifi using poor forms-based authentication.
Clients could be using broken URLs. This could occur through deprecation or (again) by misconfiguration. We could combat the former through 301, however.
What is the current best practice for discriminating between responses that have been successfully fulfilled against the client's intention for that request, and responses served through exceptional behaviour?
The problem is eliminated by tunnelling responses through the response body, as we can ensure these are unique to our service. However, that doesn't seem very RESTful!
Simply have your application add some content to its HTTP responses that will distinguish them from the responses thrown by intermediaries. Any or all of these would work:
Information about the error in the response content that is recognizable as your application's content (for example, Application error: Domain name not found (404))
A Content-Type header in the response that indicates that the response content should be decoded as an application error (for example, Content-Type: application/vnd.domain-finder.error+json)
A custom header in the response that indicates it is an application error
Once you implement a scheme like this, your API clients will need to be aware of the mechanism you choose if they want to react differently to application errors versus infrastructure errors, so just document it clearly.
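As an illustration of the second option, a client-side check might look like the following sketch (the lookup URL is hypothetical, the vendor media type is the one suggested above, and the requests library is assumed to be available):
import requests

APP_ERROR_TYPE = "application/vnd.domain-finder.error+json"  # media type from the answer above

def check_domain(url):
    resp = requests.get(url, timeout=5)
    content_type = resp.headers.get("Content-Type", "")
    if content_type.startswith(APP_ERROR_TYPE):
        # Our service answered: the 404 (or other status) is an application error.
        return ("application-error", resp.status_code, resp.json())
    if resp.ok:
        return ("found", resp.status_code, resp.json())
    # Anything else likely came from an intermediary (proxy, captive portal, ...).
    return ("infrastructure-error", resp.status_code, None)

# Hypothetical endpoint, for illustration only.
print(check_domain("https://domains.example/api/v1/domains/exists.com/"))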
I tend to follow the "do what's RESTful as long as it makes sense" line of thinking.
Let's say you have an API that looks like this:
/api/v1/domains/<name>/
Hitting /api/v1/domains/exists.com/ could then return a 200 with some whois information.
Hitting /api/v1/domains/doesnt.com/ could return a 404 with links to purchase options.
That would probably work. If the returned content follows a strict format (e.g. a JSON response with a results key) then your API's responses can be differentiated from your proxies' responses.
Alternatively, you could offer
/api/v1/domains/?search=maybe
/api/v1/domains/?lookup=maybe.com
This is now slightly less RESTful but it's still self-describing and (in my opinion) not that bad. Now every response can be a 200 and your content can reveal the results.

How to identify GET/POST requests made by a human ignoring any requests following

I'm writing an application that listens to HTTP traffic and tries to recognize which requests were initiated by a human.
For example:
The user types cnn.com in their address bar, which starts a request. Then I want to find
CNN's server response while discarding any other requests (such as XHR, etc.).
How can you tell from the header information which request is which?
After doing some research I've found that relevant responses come with:
Content-Type: text/html
The HTML comes with a meaningful title
Status 200 OK
There is no way to tell from the bits on the wire. The HTTP protocol has a defined format, which all (non-broken) user agents adhere to.
You are probably thinking that the translation of a user's typing of just 'cnn.com' into 'http://www.cnn.com/' on the wire can be detected from the protocol payload. The answer is no, it can't.
To detect the user agent allowing the user such shorthand, you would have to snoop the user agent application (e.g. a browser) itself.
Actually, detecting non-human agency is the interesting problem (with spam detection as one obvious motivation). This is because HTTP belongs to the family of NVT protocols, where the basic idea, believe it or not, is that a human should be able to run the protocol "by hand" in a network terminal/console program (such as a telnet client.) In other words, the protocol is basically designed as if a human were using it.
I don't think header information can suffice to identify real users from bots, since bots are made to mimic real users and headers are very easy to imitate.
One thing you can do is track the path (sequence of clicks) followed by a user, which is most likely to be different from one made by a bot, and do some analysis on the posted information (e.g. Bayesian filters).
A very easy-to-implement check is based on the source IP. There are databases of blacklisted IP addresses, see Project Honeypot - and if you are writing your software in Java, here is an example of how to check an IP address: How to query HTTP:BL for spamming IP addresses.
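The linked example is in Java, but the check itself is just a DNS lookup, so here is a rough Python equivalent. It assumes you have an http:BL access key from Project Honeypot (the key below is a placeholder) and that any 127.x.y.z answer means the address is listed:
import socket

ACCESS_KEY = "abcdefghijkl"  # placeholder: your own Project Honeypot http:BL key

def httpbl_lookup(ip):
    # Query format: key.reversed-ip.dnsbl.httpbl.org
    reversed_ip = ".".join(reversed(ip.split(".")))
    query = f"{ACCESS_KEY}.{reversed_ip}.dnsbl.httpbl.org"
    try:
        answer = socket.gethostbyname(query)
    except socket.gaierror:
        return None  # not listed (NXDOMAIN) or lookup failed
    # Answer looks like 127.<days since last activity>.<threat score>.<visitor type>
    _, days, threat, visitor_type = (int(part) for part in answer.split("."))
    return {"days": days, "threat": threat, "type": visitor_type}

print(httpbl_lookup("127.0.0.1"))  # expect None: loopback is not listed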
What I do on my blog is this (using wordpress plugins):
check if an IP address is in the HTTP:BL; if it is, the user is shown an HTML page where they can take action to whitelist their IP address. This is done in WordPress by the Bad Behavior plugin.
when the user submits some content, a Bayesian filter verifies the content of the submission, and if the comment is identified as spam, a CAPTCHA is displayed before the submission completes. This is done with Akismet and Conditional CAPTCHA, and the comment is also queued for manual approval.
After being approved once, the same user is considered safe, and can post without restrictions/checks.
Applying the above rules, I have no more spam on my blog, and I think that a similar logic can be used for any website.
The advantage of this approach is that most users don't even notice any security mechanism, since no CAPTCHA is displayed and nothing unusual happens 99% of the time. But there are still quite restrictive, and effective, checks going on under the hood.
I can't offer any code to help, but I'd say look at the Referer HTTP header. The initial GET request shouldn't have a Referer, but when you start loading the resources on the page (such as JavaScript, CSS, and so on) the Referer will be set to the URL that requested those resources.
So when I type in "stackoverflow.com" in my browser and hit enter, the browser will send a GET request with no Referer, like this:
GET / HTTP/1.1
Host: stackoverflow.com
# ... other Headers
When the browser loads the supporting static resources on the page, though, each request will have a Referer header, like this:
GET /style.css HTTP/1.1
Host: stackoverflow.com
Referer: http://www.stackoverflow.com
# ... other Headers
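Combining the heuristics from this thread (no Referer on the initial request, a 200 text/html response), a filter over captured request/response pairs might look like this sketch; the record format here is made up purely for illustration:
def looks_human_initiated(record):
    """record is a dict describing one request/response pair, e.g.
    {"method": "GET", "request_headers": {...}, "status": 200,
     "response_headers": {"Content-Type": "text/html; charset=utf-8"}}"""
    req = {k.lower(): v for k, v in record["request_headers"].items()}
    resp = {k.lower(): v for k, v in record["response_headers"].items()}
    return (
        record["method"] in ("GET", "POST")
        and "referer" not in req                        # first navigation, not a sub-resource
        and record["status"] == 200
        and resp.get("content-type", "").startswith("text/html")
    )

capture = [
    {"method": "GET", "request_headers": {"Host": "cnn.com"},
     "status": 200, "response_headers": {"Content-Type": "text/html"}},
    {"method": "GET", "request_headers": {"Host": "cnn.com", "Referer": "http://cnn.com/"},
     "status": 200, "response_headers": {"Content-Type": "text/css"}},
]
print([r for r in capture if looks_human_initiated(r)])  # keeps only the first record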

How can you drop an HttpRequest connection in Django when TLS/SSL is not being used?

I'm writing a Django 1.3 view method which requires TLS/SSL to be used. I want to entirely drop the connection if an HttpRequest is received without using TLS/SSL and NOT return any kind of response. This is for security reasons.
Currently I am returning a response like so:
from django.http import HttpResponse

def some_view(request):
    if not request.is_secure():
        return HttpResponse(status=426)
    ...
However, returning 426 - Upgrade Required poses a couple of problems:
It's part of a proposed standard from May 2000 (RFC 2817), and is not an official HTTP standard.
The HttpResponse is open to a man-in-the-middle (MITM) attack. As mentioned in the comments here, if the server returns any type of response to the client without a TLS/SSL connection first being established, a MITM could hijack the response, alter it to re-direct elsewhere, and deliver the malicious re-direct response to the client.
Having the server re-direct from an HTTP URI to an HTTPS URI is open to the same MITM attack as noted above.
So, how can you entirely drop a connection inside a Django 1.3 view method without returning any type of HttpResponse?
As I was saying in this answer, I'm generally against the use of automatic redirections from http:// to https:// for the reasons you're mentioning. I would certainly not recommend resorting only to bulk mod_rewrite-style redirections for securing a site.
However, in your case, by the time the request reaches the server, it's too late. If there is a MITM, the attack (or part of it) has already happened before you got the request.
The best you can do by then is to reply without any useful content. In this case, a redirection (using 301 or 302 and the Location header) could be appropriate. However, it may hide problems if the user (or even you as a developer) ignores the warnings (in this case, the browser will follow the redirection and retry the request almost transparently).
Therefore, I would simply suggest returning a 404 status:
http://yoursite/ and https://yoursite/ are effectively two distinct sites. There is no reason to expect a 1:1 mapping of all resources from the URI spaces from one to the other (just in the same way as you could have a completely different hierarchy for ftp://yoursite/).
More importantly, this is a problem that should be treated upstream: the link that led your user to this resource using http:// should be considered as broken. Don't make it work automatically. Having a 404 status for a resource that shouldn't be there is fine. In addition, returning an error message when there is an error is good: it will force you (or at least remind you) as a developer that you need to fix the page/form/link that led to this problem.
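In Django terms, that amounts to raising Http404 instead of returning the 426 from the question; a minimal sketch of the view:
from django.http import Http404

def some_view(request):
    if not request.is_secure():
        # Pretend the resource does not exist on the plain-HTTP side.
        raise Http404
    ...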
Dropping the connection is just a bonus, if you can do it with this framework: it will only be really useful if it can be sent asynchronously by the server (before the client has finished sending the request), if the browser can read it asynchronously (in which case it should stop sending immediately when there is an error), and if the MITM attacker is passive (an active MITM could stop the response from going back to the client and make sure the client sends the whole request by consuming it with its own "proxy", whether or not the server has dropped the connection). These conditions can happen, but fixing the problem at the source is still better anyway.
If you are using Nginx as your front-end proxy, you can use another non-standard HTTP status code, 444, which closes the connection without sending any headers: http://wiki.nginx.org/HttpRewriteModule#return. It would require that Nginx knows enough about which URLs to deny over plain HTTP.