Is it OK to reuse HTTP status codes like 416? - web-services

I want to notify a client of a specific error condition using an HTTP status code.
The closest match I can find is "416 Range Not Satisfiable", although the service has nothing to do with serving byte ranges from files.
Can I liberally interpret the meaning of "Range Not Satisfiable", or must I respect the technical definition involving byte ranges of files?

You can liberally interpret that. However, that doesn't make it the correct thing to do.
Errors that aren't specifically covered by the current 4xx set generally use the more generic 400 code along with an added explanation of why. The general rule is: if your error is an exact match for the more specific code, use it; otherwise, use the less specific code.
Overloading the meaning of the specific codes is likely to lead to mass confusion.
As per RFC 7231, section 6.5:
The 4xx (Client Error) class of status code indicates that the client seems to have erred. Except when responding to a HEAD request, the server SHOULD send a representation containing an explanation of the error situation, and whether it is a temporary or permanent condition. These status codes are applicable to any request method. User agents SHOULD display any included representation to the user.

Related

multiple error reporting with menhir: which token?

I am writing a small parser with Menhir + Ocamllex, and I have two requirements I cannot seem to meet at the same time:
1. I would like to keep parsing after an error (to report more errors).
2. I would like to print the token at which the error occurred.
I can do only (1) easily, by using the error token. I can also do only (2)
easily, using the approach suggested for this question. However, I don't know of an easy way to achieve both.
The way I handle errors right now goes something like this:
pair:
| left = prodA SEPARATOR right = prodA { (* happy case *) }
| error SEPARATOR right = prodA { print_error_report $startpos;
(* would like to continue after the first error, just in case
there is a second error, so I report both *) }
One thing that would help me is access to the lexbuf itself, so I could get the token directly. This would mean passing something like $lexbuf instead of $startpos. But as far as I can tell, there is no official way to access the lexbuf. The solution in (1) works only at the level of the caller of the parser, where the caller is itself passing the lexbuf to the parser, but not within semantic actions.
Does anyone know if it is actually available somehow? Or is there perhaps a workaround?
Thanks to combined work by Frédéric Bour and François Pottier, there is a new version of Menhir available that supports incremental parsing. See the announcement email sent on December 17.
The idea of this incremental API is to invert control: instead of the parser calling the lexer to process the input, you have a lower-level API where you manipulate the parser state, which returns an updated state after each consumed token (this is slightly more fine-grained, in that you can observe internal reductions that do not require new tokens). In particular, you can observe whether the resulting parser state is an error, and choose to backtrack and provide a different input (depending on your error-recovery strategy) to get farther along in your input.
The general idea is that this will allow you to implement good error-recovery and error-reporting strategies on the parser-user side, and slowly deprecate the rather inflexible "error token" mechanism.
This is already usable, but work on these features is still ongoing, and you should expect more robust support for them in releases over the following months.

Transport Rule Logical And for Exchange 2010

Good Afternoon,
I have exhausted my googling and best-guess ideas, so I hope someone here has an idea of whether this is possible or not.
I am using Exchange Server 2010 (vanilla) in a test environment and trying to create a Hub Transport Rule using the Exchange Management Console. The requirements of the rules filtering are similar to the following scenario:
1.) If a recipient's address matches (ends with) "#testdomain.com" AND (begins with) "john"
2.) If the sender's address matches (ends with) "#testdomain.com"
3.) Copy the message to the "SupervisorOfJohns#testdomain.com" mailbox
I have no problems with items 2 and 3, but I cannot figure out how to get item 1 into the same condition. I have come across some threads that simply concluded that MS goofed on this, but I am hesitant to fault them for something that seems like it should be really straightforward. I must be missing something. Expressions I have tried so far:
1.) (^john)(#testdomain.com$)
2.) ^(john)(#testdomain.com)$
3.) (^john)#testdomain.com
4.) ^john #testdomain.com$
5.) ^(john)#testdomain.com
If you use the interface and +Add them as two separate entries, it treats them as an OR clause (if a recipient address begins with "john", OR it ends with "#testdomain.com"). As you can see from my simplistic attempts, I have barely any clue what can/should work in this case. Any suggestions or ideas would be appreciated.
Respectfully,
B. Whitman
Here's what I ended up using:
john\w*#testdomain.com
The reasoning behind the question is that I'm trying to make a service to catch certain e-mails and do some processing with them. I also wanted to restrict the senders/recipients to certain domains (though some checking will also be done with the processing service). Thanks to hjpotter92 for his solutions!
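As a rough sanity check of that pattern outside Exchange (Exchange transport rules use their own pattern dialect, so std::regex ECMAScript behaviour is only an approximation; the helper below is made up for this example):

```cpp
#include <regex>
#include <string>

// Approximate check of the accepted pattern john\w*#testdomain.com
// with std::regex. Note the dot is escaped here, whereas the original
// pattern leaves it as a match-any character.
bool matches_rule(const std::string& address) {
    static const std::regex rule("john\\w*#testdomain\\.com");
    return std::regex_search(address, rule);
}
```

One caveat this makes visible: `\w` does not match a dot, so an address like "john.smith#testdomain.com" would not match, while "johnsmith#testdomain.com" would.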

WIF has IsSessionMode = true but still produces chunked cookies on occasion

I've got an unusual problem with WIF. I have to use WIF 3.5 because of compatibility with .NET 4.0.
Following the advice from Vittorio Bertocci here http://www.cloudidentity.com/blog/2010/05/26/your-fedauth-cookies-on-a-diet-issessionmode-true/, we have set IsSessionMode = true in WSFederationAuthenticationModule_SecurityTokenValidated, and most of the time it works perfectly: we get small FedAuth tokens which are pointers to our token in our memory cache.
However, periodically we are getting chunked FedAuth cookies which contain the full token information.
There is no obvious place in our code where we have an alternative code path.
I can't find any other examples of this particular inconsistency on Stack Overflow, or in any blogs about WIF on the wider internet, so I'm throwing this question out here in case anyone else has seen this problem and resolved it.
Meanwhile we are going to try setting up so that we can debug through the WIF code if we can make the problem occur reliably.
We've found the problem: IsSessionMode was being set in the wrong place; it should have been set in SessionSecurityTokenCreated. It appears it was being set per-instance rather than on init, which meant that in some circumstances it had the default value of false.
Could you be sharing another relying party's cookie? One which is not using session mode? Try explicitly naming each RP's cookie, each one differently.

How to correctly parse incoming HTTP requests

I've created a C++ application using WinSock, which has a small HTTP server implemented (it handles just the few features I need). This is used to communicate with the outside world using HTTP requests. It works, but sometimes the requests are not handled correctly, because the parsing fails. Now I'm quite sure the requests are correctly formed, since they are sent by major web browsers like Firefox/Chrome, or from Perl/C# (which have HTTP modules/DLLs).
After some debugging I found out that the problem is in fact in receiving the message. When the message comes in more than one part (it is not read in one recv() call), the parsing sometimes fails. I have gone through numerous attempts to resolve this, but nothing seems reliable enough.
What I do now is read in data until I find the "\r\n\r\n" sequence that indicates the end of the header. If WSAGetLastError() reports something other than 10035 (WSAEWOULDBLOCK) before such a sequence is found, I discard the message. When I know I have the whole header, I parse it and look for information about the body length. However, I'm not sure whether this information is mandatory (I think not), and what I should do if there is no such information - does it mean there will be no body? Another problem is that I do not know whether I should look for a "\r\n\r\n" after the body (if its length is greater than zero).
Does anybody know how to reliably parse an HTTP message?
Note: I know there are implementations of HTTP servers out there. I want my own for various reasons. And yes, I know reinventing the wheel is bad, too.
If you're set on writing your own parser, I'd take the Zed Shaw approach: use the Ragel state machine compiler and build your parser based on that. Ragel can handle input arriving in chunks, if you're careful.
Honestly, though, I'd just use something like this.
Your go-to resource should be RFC 2616, which describes HTTP 1.1; you can use it to construct a parser. Good luck!
You could try looking at their code to see how they handle an HTTP message.
Or you could look at the spec; there are message-length fields you should use. Only buggy browsers send additional CRLFs at the end, apparently.
Anyway, an HTTP request has "\r\n\r\n" at the end of the request headers and before the request data (if any), even if the request is just "GET / HTTP/1.0\r\n\r\n".
If the method is "POST", you should read as many bytes after "\r\n\r\n" as specified in the Content-Length field.
So the pseudocode is:
read_until(buf, "\r\n\r\n");
if (buf.starts_with("POST"))
{
    contentLength = regex("^Content-Length: (\d+)$").find(buf)[1];
    read_all(buf, contentLength);
}
There will be a "\r\n\r\n" after the content only if the content itself includes it. The content may be binary data; it has no terminating sequence, and the only way to get its size is to use the Content-Length field.
HTTP GET/HEAD requests have no body, and a POST request can have an empty body too. You have to check whether it's a GET/HEAD; if it is, no content (body/message) was sent. If it's a POST, do as the specs say about parsing a message of known/unknown length, as @gbjbaanb said.
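The framing rules described in this thread can be sketched in C++ against an in-memory buffer (the function below is illustrative, not part of WinSock; for brevity it assumes an exactly-cased "Content-Length" header and ignores chunked transfer encoding):

```cpp
#include <cstddef>
#include <string>

// Returns true once `data` holds a complete request: the full header
// block terminated by "\r\n\r\n", plus Content-Length bytes of body
// (zero bytes if no Content-Length header is present, as for GET/HEAD).
bool request_complete(const std::string& data) {
    std::size_t header_end = data.find("\r\n\r\n");
    if (header_end == std::string::npos)
        return false;                       // keep recv()-ing and appending
    std::size_t body_start = header_end + 4;

    std::size_t content_length = 0;
    std::size_t pos = data.find("Content-Length:");
    if (pos != std::string::npos && pos < header_end)
        content_length = std::stoul(data.substr(pos + 15));

    // Complete when the announced body has fully arrived.
    return data.size() - body_start >= content_length;
}
```

The caller keeps appending each recv() result to the buffer and retries until this returns true; no trailing "\r\n\r\n" after the body is ever expected.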

How to handle server-client requests

Currently I'm working on a Server-Client system which will be the backbone of my application.
I have to find the best way to send requests and handle them on the server-side.
The server-side should be able to handle requests like this one:
getPortfolio -i 2 -d all
In an old project I decided to send such a request as a string, and the server application had to look at the first part of the string ("getPortfolio"). Afterwards the server application had to find the correct method in a map which linked methods to the first part of the string ("getPortfolio"). The second part ("-i 2 -d all") was passed as a parameter, and the method itself had to handle this string/parameter.
I doubt that this is the best solution in order to handle many different requests.
Rgds
Layne
To me it seems you're asking two different questions.
For the socket part, I suggest Beej's Guide to Network Programming if you want full control over what you do. If you don't want to, or don't have the time to, handle this part yourself, you can just use a C++ socket library instead. There are plenty of them; I have only used this one so far, but others may be just as good (or even better).
Regarding your parsing algorithm, first write down everything about the message format, so you have a strict guideline to follow. Then process step by step:
First, extract the "first word" and keep the following parameters in a list. Check that the first word is valid and known. If it does not match any of the predefined existing functions, ignore the message (and possibly report the error back to the client application).
Once you have the matching function, simply call it, passing the other parameters.
This way, each function will do a specific task and your code will be split up in an elegant way.
Unfortunately, it is difficult for me to be more explicit, since we are somewhat lacking in details here.
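Under those caveats, the map-based dispatch described above could be sketched like this (the command name comes from the question, but the Handler signature and the handler body are invented for the example):

```cpp
#include <functional>
#include <map>
#include <sstream>
#include <string>
#include <vector>

// Each handler receives the already-split parameters and returns a reply.
using Handler = std::function<std::string(const std::vector<std::string>&)>;

std::map<std::string, Handler> handlers = {
    {"getPortfolio", [](const std::vector<std::string>& args) {
        return "portfolio with " + std::to_string(args.size()) + " args";
    }},
};

std::string dispatch(const std::string& request) {
    std::istringstream in(request);
    std::string command;
    in >> command;                        // first word selects the handler

    std::vector<std::string> args;        // remaining words become parameters
    for (std::string a; in >> a; )
        args.push_back(a);

    auto it = handlers.find(command);
    if (it == handlers.end())
        return "error: unknown command";  // report back to the client
    return it->second(args);
}
```

Adding a new request type is then just one more entry in the map, and each handler parses only its own parameter list.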