Is there a way to filter logged elmah entries by the http code? - elmah

I know that I can stop HTTP 404 errors from being registered in ELMAH via configuration.
But I want to register both 404 and 500 errors, and in some situations see only the 500s.
Can I pass a URL parameter to elmah.axd, like the existing "page" and "size" parameters, to do that?

Short answer: no.
Longer: if you store your ELMAH errors in SQL Server or another data store that supports search, you can write custom SQL or whatever query language your database supports.
I'm the founder of https://elmah.io (which supports search). We also wrote an ErrorLog implementation for ELMAH that stores data in Elasticsearch. With Kibana on top, you can create a custom dashboard with filters. You can find the error logger here: https://github.com/elmahio/Elmah.Io.ElasticSearch
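To illustrate the custom-query idea, here is a minimal sketch using SQLite and an invented table loosely shaped like ELMAH's SQL error table. The table and column names here are assumptions for illustration; check the actual schema of your ELMAH store before writing real queries.

```python
import sqlite3

# Illustrative schema, loosely modeled on an ELMAH-style error log;
# the real table/column names in your store may differ.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE elmah_error (
        error_id    INTEGER PRIMARY KEY,
        status_code INTEGER,
        message     TEXT,
        time_utc    TEXT
    )
""")
conn.executemany(
    "INSERT INTO elmah_error (status_code, message, time_utc) VALUES (?, ?, ?)",
    [(404, "Not found: /old-page",    "2015-01-01T10:00:00"),
     (500, "NullReferenceException",  "2015-01-01T10:05:00"),
     (500, "Timeout contacting DB",   "2015-01-01T10:07:00")],
)

def errors_by_status(conn, status):
    """Return logged error messages matching one HTTP status code."""
    cur = conn.execute(
        "SELECT message FROM elmah_error WHERE status_code = ? ORDER BY time_utc",
        (status,),
    )
    return [row[0] for row in cur.fetchall()]
```

With a filter like this you get exactly the "show only the 500s" view that elmah.axd itself doesn't offer.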

Related

Block part of a request using WAF or ModSecurity

Is it possible to block just part of a request using ModSecurity, Azure WAF, or similar? For example, could you block a cookie because it contains invalid characters while allowing the rest of the request through?
I'm trying to trace an issue where a cookie is sometimes lost.
ModSecurity or the web server could possibly be used to drop cookies. The easiest way to troubleshoot is to use an application proxy like Burp Suite and watch what happens to the cookie; often it is the browser that decides whether or not to send it.

How to find out if an app is currently stopped with CloudFoundry in Swisscom Cloud? Header X-Cf-Routererror reliable?

We would like to add a maintenance page to our front-end which should appear when the back-end is currently unavailable (e.g. stopped or deploying). When the application is not running, the following message is displayed together with a 404 status code:
404 Not Found: Requested route ('name.scapp.io') does not exist.
Additionally, there is a header present when the application is stopped (and only then):
X-Cf-Routererror: unknown_route
Is this header reliably added if the application is not running? If this is the case, I can use this flag to display a maintenance page.
By the way: wouldn't it make more sense to provide a 5xx status code if the application is stopped or has crashed, i.e. to differentiate between stopped applications and wrong request routes? Catching a 503 error would be much easier, as it does not interfere with our business logic (404 is used inside the application).
Another option is to use a wildcard route.
https://docs.cloudfoundry.org/devguide/deploy-apps/routes-domains.html#create-an-http-route-with-wildcard-hostname
An application mapped to a wildcard route acts as a fallback app for route requests if the requested route does not exist.
Thus you can map a wildcard route to a static app that displays a maintenance page. Then if your app mapped to a specific route is down or unavailable the maintenance page will get displayed instead of the 404.
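The wildcard mapping described above can be set up with the cf CLI roughly as follows. App and domain names are placeholders, and the exact flag syntax may vary by CLI version, so check `cf map-route --help`:

```
# Push a tiny static app that serves the maintenance page
cf push maintenance-page

# Map it to the wildcard hostname on your domain; it then catches
# requests for any route on scapp.io that has no specific mapping
cf map-route maintenance-page scapp.io --hostname "*"
```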
In regards to your question...
By the way: wouldn't it make more sense to provide a 5xx status code if the application is stopped or has crashed, i.e. to differentiate between stopped applications and wrong request routes? Catching a 503 error would be much easier, as it does not interfere with our business logic (404 is used inside the application).
The GoRouter maintains a list of routes for mapping incoming requests to applications. If your application is down then there is no route in the routing table, that's why you end up with a 404. If you think about it from the perspective of the GoRouter, it makes sense. There's no route, so it returns a 404 Not Found. For a 503 to make sense, the GoRouter would have to know about the app and know it's down or not responding.
I suppose you might be able to achieve that behavior if you used a wildcard route above, but instead of displaying a maintenance page just have it return an HTTP 503.
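The "wildcard app that just returns 503" idea can be sketched with nothing but the Python standard library. This is an illustrative stand-in, not a real CF buildpack app; the port and body are arbitrary:

```python
# Minimal fallback app that answers every GET with 503, so a stopped app
# behind a wildcard route yields 503 instead of the router's 404.
from http.server import BaseHTTPRequestHandler, HTTPServer

class MaintenanceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Service temporarily unavailable"
        self.send_response(503)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Retry-After", "120")  # hint clients to retry later
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the example quiet
        pass

def serve(port=8080):
    """Run the fallback server in the foreground (illustrative entry point)."""
    HTTPServer(("127.0.0.1", port), MaintenanceHandler).serve_forever()
```

The front-end can then catch the 503 and show the maintenance page without colliding with application-level 404s.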
Hope that helps!
The 404 error you see is generated by Cloud Foundry's routing tier, which is maintained upstream.
Generally, if you don't want such error messages you can use blue-green deployments. Here is a detailed description in the CF docs: https://docs.cloudfoundry.org/devguide/deploy-apps/blue-green.html
Another option is to add a routing service that implements this functionality for you. Have a look at the CF docs for this: https://docs.cloudfoundry.org/services/route-services.html

S3 Static Website: Return HTTP 410

Background
I have a static website on S3 with tens of thousands of HTML pages indexed by Google. I'm moving to a new version and want to remove old pages (which may no longer exist) from Google's index. I've read that the most efficient way to do that is to return HTTP 410 (Gone).
Problem
According to http://docs.aws.amazon.com/AmazonS3/latest/dev/CustomErrorDocSupport.html, you cannot return an HTTP 410 when using S3 static website hosting.
API Gateway
I created a mock integration in API Gateway which returns HTTP 410. Then I configured my S3 bucket to redirect a specific prefix to this URL. However, the status code the client sees is HTTP 301 (for the initial redirect). If I GET the API endpoint directly, I receive the 410 successfully; if I access the API through an S3 GET, the status code is 301.
What's next
If anyone has an idea how to return HTTP 410 from a static website hosted on S3, let me know.
Additionally, if you can think of a better alternative to de-index old pages from Google (the manual tool isn't a solution, as I have a large number of pages), let me know :)
I really feel a better answer would be to put a server in front of the S3 content with a very simple database table. Your real issue is distinguishing a 410 from a 404: you know a page is gone, but how do you differentiate that from a typo or other error?
What I would envision is a table that is indexed by the path name - i.e., /path/to/my/file.html and a status of some sort. The server takes in a request for the full path, does a lookup in the database and either serves the page (assuming that the page is "active" or "available") or a 410 if you know the page is not active. If the page can't be found in the database then return a 404.
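The lookup logic described above is small enough to sketch directly. The table contents and state names here are invented for illustration; a real deployment would back this with the database table the answer envisions:

```python
# Decide which HTTP status the front server should send for a path.
# PAGES stands in for the database table; states are illustrative.
PAGES = {
    "/path/to/my/file.html":  "active",
    "/old/retired-page.html": "gone",
}

def status_for(path):
    """Map a requested path to 200 (serve), 410 (known-removed), or 404."""
    state = PAGES.get(path)
    if state == "active":
        return 200  # page exists and should be served
    if state == "gone":
        return 410  # deliberately removed: tell crawlers it's gone for good
    return 404      # unknown path: typo, bad link, or stale external reference
```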
The two issues I see with this approach are:
The initial population of the database. If you've already removed the pages from S3, how will you know which paths to insert with a "not available" flag? I'm not sure how many pages we're talking about, but the initial load could be quite big.
Maintenance - you will likely need an administrative interface of some sort down the road for the next time you need to deactivate some number of pages.
There are content management systems that will do some of this for you, or it wouldn't be too hard to write a simple server to do it, pending the issues I've outlined.

404 error Sharepoint 2010 accessing Lists.asmx

When attempting to checkout a document I receive the following message
"This document could not be checked out. You may not have permission..." etc etc
So I used Fiddler and discovered that the REAL error was a 404 when SharePoint attempts to access the Lists.asmx URL. I assume it uses the web service to perform the checkout.
SharePoint is set up in IIS with HTTPS/SSL. Another test farm I put together without SSL has no issues reaching its web service.
Another curiosity is that there was no issue checking a document out when the SharePoint OpenDocuments Class browser add-on is turned off.
Any Ideas?
I was stumped on a 404 error as well while communicating with SharePoint, but it was when I was calling GetListCollection(). It turned out that the default behavior of GetListCollection() is to return all of the lists from the root web. To filter it more specifically, change the .Url property to point to a lists.asmx lower down in the site hierarchy (regardless of where you set your initial reference).
For example:
ListsWS.Lists lws = new ListsWS.Lists();
// Target the subsite's service endpoint so only that web's lists are returned
lws.Url = "http://server/sites/someSubSite/_vti_bin/lists.asmx";

Discriminating between infrastructure and business logic when using HTTP status codes

We are trying to build a REST interface that allows users to test the existence of a specific resource. Let's assume we're selling domain names: the user needs to determine if the domain is available.
An HTTP GET combined with 200 and 404 response codes seems sensible at first glance.
The problem we have is discriminating between a request successfully served by our lookup service, and a request served under exceptional behaviour from other components. For example:
404 and 200 can be returned by intermediary proxies that actually block the request. This can be due to proxy misconfiguration, or to external infrastructure such as coffee-shop Wi-Fi using poor forms-based authentication.
Clients could be using broken URLs. This could occur through deprecation or (again) misconfiguration. We could combat the former with a 301 redirect, however.
What is the current best practice for discriminating between responses that have been successfully fulfilled against the client's intention for that request, and responses served through exceptional behaviour?
The problem is eliminated by tunnelling status through the response body, as we can ensure the body is unique to our service. However, that doesn't seem very RESTful!
Simply have your application add some content to its HTTP responses that will distinguish them from the responses thrown by intermediaries. Any or all of these would work:
Information about the error in the response content that is recognizable as your application's content (for example, Application error: Domain name not found (404))
A Content-Type header in the response that indicates that the response content should be decoded as an application error (for example, Content-Type: application/vnd.domain-finder.error+json)
A custom header in the response that indicates it is an application error
Once you implement a scheme like this, your API clients will need to be aware of the mechanism you choose if they want to react differently to application errors versus infrastructure errors, so just document it clearly.
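The vendor media-type option above can be sketched in a few lines. The media type is the one from the example; the function names and payload shape are illustrative, not a fixed contract:

```python
import json

# Media type from the example above; anything a proxy generates won't carry it.
APP_ERROR_TYPE = "application/vnd.domain-finder.error+json"

def make_app_error(status, message):
    """Build a (status, headers, body) triple for an application-level error."""
    body = json.dumps({"error": message, "status": status})
    return status, {"Content-Type": APP_ERROR_TYPE}, body

def is_app_error(headers):
    """True if the response was produced by our service, not an intermediary."""
    return headers.get("Content-Type", "").startswith(APP_ERROR_TYPE)
```

A client checks `is_app_error` before trusting the status code as an application answer; anything else is treated as an infrastructure failure and retried or surfaced differently.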
I tend to follow the "do what's RESTful as long as it makes sense" line of thinking.
Let's say you have an API that looks like this:
/api/v1/domains/<name>/
Hitting /api/v1/domains/exists.com/ could then return a 200 with some whois information.
Hitting /api/v1/domains/doesnt.com/ could return a 404 with links to purchase options.
That would probably work. If the returned content follows a strict format (e.g. a JSON response with a results key) then your API's responses can be differentiated from your proxies' responses.
Alternatively, you could offer
/api/v1/domains/?search=maybe
/api/v1/domains/?lookup=maybe.com
This is now slightly less RESTful but it's still self-describing and (in my opinion) not that bad. Now every response can be a 200 and your content can reveal the results.
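The always-200 lookup style can be sketched as a plain function: the HTTP layer only says "request served", and the body carries the answer. The registry data and payload shape here are made up for illustration:

```python
# Emulates GET /api/v1/domains/?lookup=<domain>: the transport status is
# always 200; availability lives in the body under a fixed "results" key.
REGISTERED = {"exists.com"}  # stand-in for a real registry lookup

def lookup(domain):
    """Return the JSON-style payload for a domain availability lookup."""
    return {
        "results": [
            {"domain": domain, "registered": domain in REGISTERED},
        ],
    }
```

Because every genuine response has the `results` key, a response missing it (a proxy's captive-portal page, for instance) is immediately recognizable as not coming from the service.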