Mongoose Web Server HTTP Headers extremely slow - c++

I have a Mongoose server with commands callable via AJAX. If I reply without sending HTTP headers from Mongoose, I get a CORS error (though visiting the address directly in the browser works just fine), but when I do send headers, it can take up to a minute before I get a response (though it does work), both via AJAX and in the browser. My reply code:
// without headers
mg_printf(conn, reply.c_str());

// with headers
mg_printf(conn, "HTTP/1.1 200 OK\r\n"
                "Content-Type: text/plain\n"
                "Cache-Control: no-cache\n"
                "Access-Control-Allow-Origin: *\n\n"
                "%s\n", reply.c_str());
How can I speed this up? Am I sending my headers wrong?
OK, I found a workaround: it works if I first check whether the request is an API call or not, and only send the headers when it is.

The reason Mongoose is slow is that it waits for the rest of the content until it times out. And the reason it waits is that you do not set Content-Length, in which case the end-of-content marker is the closing of the connection.
So the correct solution is one of the following (see the sketch after this list):
- Add a Content-Length header with the correct body length, or
- Use the mg_send_header() and mg_printf_data() functions, in which case you don't need to bother with Content-Length, because these functions use chunked encoding.
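A minimal sketch of both options, assuming the classic Mongoose 5.x API that mg_send_header() and mg_printf_data() come from, with conn and reply as in the question (adapt to your Mongoose version):

// Option 1: send the status line and headers manually, CRLF-terminated,
// with an explicit Content-Length so the client knows when the body ends
mg_printf(conn,
          "HTTP/1.1 200 OK\r\n"
          "Content-Type: text/plain\r\n"
          "Cache-Control: no-cache\r\n"
          "Access-Control-Allow-Origin: *\r\n"
          "Content-Length: %d\r\n"
          "\r\n"
          "%s",
          (int) reply.size(), reply.c_str());

// Option 2: let Mongoose emit the headers and use chunked encoding,
// so no Content-Length is needed
mg_send_header(conn, "Content-Type", "text/plain");
mg_send_header(conn, "Cache-Control", "no-cache");
mg_send_header(conn, "Access-Control-Allow-Origin", "*");
mg_printf_data(conn, "%s", reply.c_str());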

Related

Is there a workaround for Postman's bug when content is returned with a 204?

Using Postman, when I make a PUT request to an endpoint which returns a 204 with content, Postman is unable to parse the response, and my collection runner stops that iteration, indicating that an error has occurred.
When run outside of the runner, Postman displays a similar parsing error. Other people have also had this problem.
Unfortunately I cannot fix the non-standard endpoint. Is there a workaround that will let Postman continue without throwing an error, especially when using the collection runner?
A 204 (No Content) response from the server means that the server processed your request successfully and that no response body is needed.
More here: https://httpstatuses.com/204
Actually, as far as I know, if the server is sending a 204 with a response payload, the endpoint is not implemented as it should be.
That would be the main reason Postman is not showing a response payload; you will only be able to read the response headers.
So if you send a PUT request and only receive headers, it means everything is OK. If you expect data, the server should respond with a 200 code.
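For illustration (both messages below are hypothetical), a spec-compliant 204 carries headers only:

HTTP/1.1 204 No Content

whereas the problematic endpoint returns a body along with the 204, which is what trips Postman up:

HTTP/1.1 204 No Content
Content-Type: application/json
Content-Length: 15

{"status":"ok"}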
Now, having said this, if Postman is telling you that it "could not get any response", it basically means the server is not sending anything back. Try increasing the timeout in the Postman settings; it's very probable that the server is taking too much time. Check outside the runner how long it takes to respond.
I hope this helps you.

Jetty - large messages filtering

Hi, I want to refuse incoming requests with too large a body or headers in my Jetty. I suppose that I have to set some filter, but I haven't found any solution. Do you have any suggestions? Thanks.
It's easy enough to build a Servlet Filter or Jetty Handler that pays attention to the request's Content-Length header and rejects the request (responds with an HTTP error status code) when it is too large; see the sketch further below.
As for the header size limit, that's controlled by HttpConfiguration.setRequestHeaderSize(int). For example:
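A minimal sketch of wiring that limit into a connector, assuming the Jetty 9 API (port and size are example values):

import org.eclipse.jetty.server.HttpConfiguration;
import org.eclipse.jetty.server.HttpConnectionFactory;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

public class HeaderLimitServer {
    public static void main(String[] args) throws Exception {
        Server server = new Server();
        HttpConfiguration config = new HttpConfiguration();
        config.setRequestHeaderSize(8192); // bytes; oversized request headers are rejected

        ServerConnector connector =
            new ServerConnector(server, new HttpConnectionFactory(config));
        connector.setPort(8080);
        server.addConnector(connector);
        server.start();
        server.join();
    }
}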
However, there is a class of requests that uses chunked Transfer-Encoding; with these kinds of requests there is no Content-Length, and you will just have to reject the request while reading from HttpServletRequest.getInputStream(), once it exceeds a certain size, as in the sketch below.
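A minimal sketch of the filter approach, covering both cases (the class name, limit, and helper are hypothetical; Servlet 3.1 API assumed):

import java.io.IOException;
import java.io.InputStream;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class BodySizeFilter implements Filter {
    private static final long MAX_BODY = 1024 * 1024; // e.g. 1 MiB

    @Override public void init(FilterConfig cfg) {}
    @Override public void destroy() {}

    @Override
    public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest httpReq = (HttpServletRequest) req;
        long declared = httpReq.getContentLengthLong(); // -1 when absent (e.g. chunked)
        if (declared > MAX_BODY) {
            ((HttpServletResponse) resp).sendError(413); // Payload Too Large
            return;
        }
        chain.doFilter(req, resp);
    }

    // For chunked bodies the size is only known while reading:
    // count the bytes and bail out once the cap is exceeded.
    static boolean withinCap(InputStream in, long cap) throws IOException {
        byte[] buf = new byte[8192];
        long total = 0;
        for (int n; (n = in.read(buf)) != -1; ) {
            total += n;
            if (total > cap) return false;
        }
        return true;
    }
}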
There is also the complication of MIME multipart request body content and how you determine that such request content is too large.
One other note: unfortunately, due to how HTTP connection handling must be performed, even if a client sends too large a request body, the server still has to read that entire body content and throw it away. This is the half-closed scenario found in the spec; it's up to the client to see the early rejected HTTP response and close/terminate the connection.

Set-Cookie for a login system

I've run into a few problems with setting cookies, and based on the reading I've done, this should work, so I'm probably missing something important.
The situation:
Previously I received responses from my API and used JavaScript to save them as cookies, but then I found that using the Set-Cookie response header is more secure in a lot of situations.
I have two cookies: "nuser" (contains a username) and "key" (contains a session key). nuser shouldn't be httpOnly, so that JavaScript can access it; key should be httpOnly to prevent rogue scripts from stealing a user's session. Also, any request from the client to my API should contain the cookies.
The log-in request
Here's my current implementation: I make a request to my login API at localhost:8080/login/login (keep in mind that the web client is hosted on localhost:80, but based on what I've read, port numbers shouldn't matter for cookies).
First the web browser makes an OPTIONS request to confirm that all the headers are allowed. I've made sure that the server response includes Access-Control-Allow-Credentials to alert the browser that it's okay to store cookies.
Once the OPTIONS response has been received, the browser makes the actual POST request to the login API. The server sends back the Set-Cookie header and everything looks good at this point.
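For reference, a sketch of what the login response headers could look like in a credentialed cross-origin setup (all values are hypothetical; note that Access-Control-Allow-Origin cannot be the wildcard * when credentials are involved, and the client must send the request with withCredentials / credentials: "include"):

HTTP/1.1 200 OK
Access-Control-Allow-Origin: http://localhost
Access-Control-Allow-Credentials: true
Set-Cookie: nuser=alice; Path=/
Set-Cookie: key=abc123; Path=/; HttpOnly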
The Problems
This set-up yields two problems. Firstly, though the nuser cookie is not httpOnly, I don't seem to be able to access it via JavaScript. I'm able to see nuser in my browser's cookie options menu, but document.cookie yields "".
Secondly, the browser seems to place the Cookie request header only in requests to the exact same API (the login API). If I make a request to a different API that's still on my localhost server, the Cookie header isn't present.
(That request returns a 406 just because my server is currently configured to do that when the user isn't validated. I know that this should probably be a 403, but the thing to focus on is that the Cookie header isn't included among the request headers.)
So, I've explained my implementation based on my current understanding of cookies, but I'm obviously missing something. Posting exactly what the request and response headers should look like for each task would be greatly appreciated. Thanks.
Okay, I'm still not exactly sure what was causing the problem in this specific case, but I updated my localhost:80 server to accept API requests and then make a subsequent request to localhost:8080 to get the proper information. Because the Set-Cookie header is now being set by localhost:80 (the client's origin), everything works fine. From my reading before, I thought that ports didn't matter for cookies, but apparently they do.

Several 100 Continue responses received from the server

I'm using the libcurl (C++) library to make requests to an IIS 7.5 server. The transaction is a common SOAP web service.
Everything is working fine: my requests send an "Expect: 100-continue" header, and the server responds with a 100 Continue and, immediately after that, a 200 OK code along with the web service response.
But from time to time the client receives a 100 Continue message and, after that, another 100 code. This makes the client report an error, as it expects a final status code right after the server's 100 code. I read in the W3C HTTP/1.1 protocol:
An origin server that sends a 100 (Continue) response MUST ultimately send a final status code, once the request body is received and processed, unless it terminates the transport connection prematurely.
The word "ultimately" makes me loose the track. Is it possible/common that a server sends several 100 codes after a final status code?
If anyone has faced this issue before, can point me to any explanation on how to handle multiple 100 response codes with libcurl?
Thanks in advance
The current spec says this on 100-continue:
The 100 (Continue) status code indicates that the initial part of a request has been received and has not yet been rejected by the server. The server intends to send a final response after the request has been fully received and acted upon.
When the request contains an Expect header field that includes a 100-continue expectation, the 100 response indicates that the server wishes to receive the request payload body, as described in Section 5.1.1. The client ought to continue sending the request and discard the 100 response.
If the request did not contain an Expect header field containing the 100-continue expectation, the client can simply discard this interim response.
The way I read it, there is not supposed to be more than one 100 Continue response, and that's why libcurl works like this. I've never seen this (multiple 100 responses) happen, and I've been doing HTTP for a while (I am the main developer of curl). To change this behavior, I would expect you'd need to patch libcurl slightly to allow for it.
I suspect it's because there is an error that the client is not handling properly. Make sure you set the CURLOPT_FAILONERROR flag. See this SO post for more information.
(Per the answer above, though: it is not related to CURLOPT_FAILONERROR.)
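If patching libcurl is not an option, one common workaround (my suggestion, not something proposed in the answers above) is to suppress the Expect: 100-continue handshake entirely by sending an empty Expect header, so the server never emits interim 100 responses. A minimal sketch (URL and payload are placeholders):

#include <curl/curl.h>

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (curl) {
        // An empty value removes the header libcurl would otherwise add
        struct curl_slist *headers = nullptr;
        headers = curl_slist_append(headers, "Expect:");

        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/soap");
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "<soap:Envelope/>");
        curl_easy_perform(curl);

        curl_slist_free_all(headers);
        curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
    return 0;
}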

Incorrect response from the server after GET request

When I send a GET request in C++ like this:
GET / HTTP/1.1\r\nHost: site.com\r\n\r\n
I receive the proper answer. But when I configure the request according to what browsers do (I captured the headers with a packet sniffer), the response from the server is 200 OK but the HTML body is a piece of garbage; the Content-Length shown in the header also proves that I didn't get the correct HTML response.
The problem occurs when I add "Accept-Encoding: gzip, deflate". I send exactly what the browser sends, but I receive a different response than the browser does.
Why do you think this happens?
If you accept gzipped content, the server may send gzipped content. (In fact, some buggy servers send gzipped content even if you don't say you accept it!)
Notice that the returned headers will include Content-Encoding: gzip (or maybe deflate instead of gzip). This tells you about the encoding. If it is gzipped, you need to decompress it with a library like zlib; see the sketch below.
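A minimal sketch of inflating a gzip-compressed body with zlib (the function name gunzip is my own; windowBits = 15 + 32 tells zlib to auto-detect gzip or zlib framing):

#include <zlib.h>
#include <stdexcept>
#include <string>

std::string gunzip(const std::string &compressed) {
    z_stream zs{}; // zero-initialized: default allocators, no input yet
    if (inflateInit2(&zs, 15 + 32) != Z_OK)
        throw std::runtime_error("inflateInit2 failed");

    zs.next_in  = (Bytef *) compressed.data();
    zs.avail_in = (uInt) compressed.size();

    std::string out;
    char buf[16384];
    int ret;
    do {
        zs.next_out  = (Bytef *) buf;
        zs.avail_out = sizeof(buf);
        ret = inflate(&zs, Z_NO_FLUSH);
        if (ret != Z_OK && ret != Z_STREAM_END) {
            inflateEnd(&zs); // truncated or corrupt stream
            throw std::runtime_error("inflate failed");
        }
        out.append(buf, sizeof(buf) - zs.avail_out);
    } while (ret != Z_STREAM_END);

    inflateEnd(&zs);
    return out;
}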
Another thing you might see in replies to HTTP/1.1 requests is that the connection won't necessarily close when the response is complete, and you might get Transfer-Encoding: chunked, which formats the body differently: a series of chunks, each a hex length followed by that much content, terminated by an empty chunk. Non-chunked responses, by contrast, are sent with a Content-Length header which tells you how much to expect. The content length is the length of the data as sent, which will be smaller if the data is compressed.
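As an illustration, a hypothetical chunked body that decodes to "Wikipedia" looks like this on the wire (CRLFs shown explicitly; the zero-length chunk terminates the body):

4\r\n
Wiki\r\n
5\r\n
pedia\r\n
0\r\n
\r\n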
Unless you implement decompression, don't send Accept-Encoding. Chunked responses are something you'll probably have to implement, though, since they are common in HTTP/1.1, and if you only do HTTP/1.0 you won't get to use the important Host header.