Why does Fiddler introduce a delay when we set the Expect header with 100-continue? - libcurl

When the client sets the Expect: 100-continue header, Fiddler adds around 1 second of latency to the request.
This can be fixed by following this article.
The next question is: why does Fiddler have this overhead?

More info:
So the client is libcurl.
Apparently, when the Expect header is present with the value 100-continue, libcurl will wait up to 1 second before sending the remaining body.
This is per spec. This blog post (About the HTTP Expect: 100-continue header) explains the behavior well.
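That client-side wait can be sketched in Python (a minimal sketch of libcurl's documented behavior, not its actual code; the function name is mine): the sender pauses for up to one second hoping for an interim 100 Continue response, then sends the body whether or not one arrived.

```python
import select
import socket

def wait_for_continue(sock, timeout=1.0):
    """Mimic libcurl's Expect: 100-continue wait: block for up to
    `timeout` seconds for an interim response, then give up and let
    the caller send the body anyway."""
    readable, _, _ = select.select([sock], [], [], timeout)
    if readable:
        return sock.recv(4096).startswith(b"HTTP/1.1 100")
    return False  # timed out -- libcurl sends the body regardless
```

If the proxy neither forwards nor answers the 100 Continue promptly, the client eats the full timeout on every request, which matches the roughly 1 second of latency observed through Fiddler.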

Related

Python: reading from rfile in SimpleHTTPRequestHandler

While subclassing SimpleHTTPRequestHandler, my function blocks on self.rfile.read(). How do I find out whether there is any data in rfile before reading? Alternatively, is there a non-blocking read call that returns in the absence of data?
For the record, the solution is to read only as many bytes as specified in the Content-Length header, i.e. something like:
content_length = int(self.headers['Content-Length'])
payload = self.rfile.read(content_length)  # returns bytes, not str
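In context, that read looks something like the following self-contained handler (a sketch; the class name EchoHandler is mine, not from the original answer). ThreadingHTTPServer, added in Python 3.7, handles each connection in its own thread:

```python
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    timeout = 5  # applied to the connection socket, so a stalled read raises instead of blocking forever

    def do_POST(self):
        # Read exactly Content-Length bytes; a bare self.rfile.read()
        # would block waiting for the client to close the connection.
        length = int(self.headers.get('Content-Length', 0))
        payload = self.rfile.read(length)
        self.send_response(200)
        self.send_header('Content-Length', str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # keep the demo quiet
```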
I just solved a case like this.
I'm pretty sure the "gotcha" here is an unbounded stream of bytes being written to your socket by the client you're connected to. This is often called "pre-connect", and it happens because HTTP/TCP/HTTPServer doesn't make a significant distinction between "chunks" and single bytes being fed slowly to the connection. If the response header contains Transfer-Encoding: chunked, you are a candidate for this happening. Google Chrome works this way and is a good test: if Firefox and IE work but Chrome doesn't when you receive a response from the same website, this is probably what's happening.
Possible solutions:
In CPython 3.7, http.server gained ThreadingHTTPServer, which handles each connection in its own thread; look it up in the docs.
Put the request handling in a separate thread, and time out that thread when the read operation takes too long.
Both of these are potentially painful solutions, but at least they should get you started.

HTTP keep-alive with C++ recv winsocket2

I'm coding my own HTTP fetcher socket, in C++ with MSVC++ and winsock2.h.
I was able to program the socket to connect to the required website's server and send an HTTP GET request.
The problem is that after I send an HTTP GET request on a keep-alive connection and call the recv function, it works fine, except that after retrieving the website it stays lingering, waiting for a timeout hint from the server or for the connection to close.
This takes a few seconds or less, depending on the keep-alive timeout the server has set.
Therefore, I can't benefit from the keep-alive HTTP settings.
How can I tell the recv function to stop after retrieving the website and return control to me, so I can send another HTTP request while avoiding another handshake?
When I use non-blocking sockets it works faster, but I don't know when to stop; I use a str.rfind("",-1,7) check to stop retrieving data, which is not very efficient.
Does anybody know a way to do this, or what the last character sent by the HTTP server is when the connection is kept alive, so I can use it as a stopping condition?
Best,
Moe
Check for a Content-Length: xxxxx header, and read only xxxxx bytes after the headers, which are terminated by a blank line (CR-LF-CR-LF in the stream).
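A sketch of that read loop in Python (the question is C++/Winsock, but the logic is identical): read until the blank line, parse Content-Length, then read exactly that many more bytes and stop, leaving the keep-alive connection free for the next request.

```python
import socket

def recv_http_response(sock):
    """Read exactly one Content-Length-delimited HTTP response,
    then return without waiting for the server to close."""
    buf = b""
    while b"\r\n\r\n" not in buf:  # headers end at the blank line
        chunk = sock.recv(4096)
        if not chunk:
            break
        buf += chunk
    head, _, body = buf.partition(b"\r\n\r\n")
    length = 0
    for line in head.split(b"\r\n"):
        if line.lower().startswith(b"content-length:"):
            length = int(line.split(b":", 1)[1])
    while len(body) < length:  # read only the advertised byte count
        body += sock.recv(length - len(body))
    return head, body
```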
Update
If the data is chunked:
Chunked Transfer-Encoding (reference)
...
A chunked message body contains a series of chunks, followed by a line with "0" (zero), followed by optional footers (just like headers), and a blank line. Each chunk consists of two parts:
a line with the size of the chunk data, in hex, possibly followed by a semicolon and extra parameters you can ignore (none are currently standard), and ending with CRLF.
the data itself, followed by CRLF.
Also, the w3.org description of chunked Transfer-Encoding is in section 3.6.1: http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html.
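The format quoted above can be decoded in a few lines of Python (a minimal sketch that ignores chunk extensions and trailers, as the description allows):

```python
def parse_chunked(data: bytes) -> bytes:
    """Decode an HTTP/1.1 chunked message body into plain bytes."""
    body = b""
    pos = 0
    while True:
        # Size line: hex length, optionally ';extensions', ending with CRLF.
        eol = data.index(b"\r\n", pos)
        size = int(data[pos:eol].split(b";")[0], 16)
        pos = eol + 2
        if size == 0:
            break  # zero-size chunk: optional trailers and a blank line follow
        body += data[pos:pos + size]
        pos += size + 2  # skip the chunk data plus its trailing CRLF
    return body
```

Seeing the terminating zero-size chunk, rather than some magic last character, is the stopping condition the questioner was looking for.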
Set the non-blocking I/O flag on the socket, so that recv will return immediately with only as much data as has already been received. Combine this with select, WSAEventSelect, WSAAsyncSelect, or completion ports to be notified when data arrives (instead of busy-waiting).
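In Python, the same select-then-recv pattern looks roughly like this (select.select is the portable cousin of the Winsock calls named above; the function name is mine):

```python
import select
import socket

def recv_ready(sock, timeout=1.0):
    """Sleep in select() until the socket is readable, then take
    whatever bytes have arrived so far -- no busy-waiting."""
    readable, _, _ = select.select([sock], [], [], timeout)
    if not readable:
        return None  # nothing arrived within the timeout
    return sock.recv(4096)
```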

How to buffer and process chunked data before sending headers in IIS7 Native Module

So I've been working on porting an IIS6 ISAPI module to IIS7. One problem is that I need to be able to parse and process responses, and then change/delete/add some HTTP headers based on the content. I got this working fine for most content, but it appears to break down when chunked encoding is used on the response body.
It looks like CHttpModule::OnSendResponse is called once for each chunk. I've been able to detect when a chunked response is being sent, buffer the data until all of the chunks have been passed in, and set the entity count to 0 to prevent that data from being sent out. However, after the first OnSendResponse call the headers have already been sent to the client, so I'm not able to modify them later, after I've processed the chunked data.
I realize that doing this is going to eliminate the benefits of the chunked encoding, but in this case it is necessary.
The only example code I can find for IIS native modules is very simplistic and doesn't demonstrate any filtering of response data. Any tips or links would be great.
Edit: Okay, I found IHttpResponse::SuppressHeaders, which prevents the headers from being sent after the first OnSendResponse. However, then it will not send the headers at all. So when it's a chunked response I set it to suppress headers; later, after I process the response, I check whether the headers were suppressed, and if so I read all of the headers from the raw response structure (HTTP_RESPONSE) and insert them at the beginning of the response entity chunks myself. This seems to work okay so far.
Still open to other ideas if anybody has any better option.

C Web Server and Chrome Dev tools question

I recently started diving into HTTP programming in C and have a functioning server that can handle GET and POST. My question concerns my site's load times and how I should send the response headers and response body.
I notice in Chrome's resource tracking tool that there is almost no (a few ms) connecting/sending/proxy/blocking/waiting time in most cases (on the same network as the server), but the receive time can vary wildly. I'm not entirely sure what the receive time includes. I mostly see a long receive time (40 to 140 ms or more) on the PNG files, sometimes on JavaScript files, and rarely on other files, but it's not really consistent.
Could anyone shed some light on this for me?
I haven't done much testing yet, but I was wondering whether changing the method I use to send the header/body would help. I currently have every file for the site cached in server memory along with its header (all in the same char*). When I send the requested file, I just do one send() call with the header/file combo (no string operations are involved, because it is all done in advance at server start-up).
Would it be better to break it into multiple small send() calls?
Just some stats I get with the Chrome dev tools (again, on the local network through a wireless router): the site loads in 120 ms to 570 ms. It's 19 files at a total of 139.85 KB. The computer it's on is an Asus 901 netbook (Atom 1.6 GHz, 2 GB DDR2) running TinyCore Linux. I know there are some optimizations I could be doing with how threads start up and a few other things, but I'm not sure that's affecting it much at the moment.
If you're sending the entire response in one send(), you should set the TCP_NODELAY socket option.
If that doesn't help, you may want to try using a packet capturing tool like Wireshark to see if you can spot where the delay is introduced.
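Setting the option is one setsockopt() call; the Python equivalent below shows the same IPPROTO_TCP/TCP_NODELAY flag the C server would set. It disables Nagle's algorithm, so the single send() is pushed out immediately instead of being coalesced with later writes:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Disable Nagle's algorithm: don't hold small writes back, send them now.
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
nodelay = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
```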

Stop QNetworkRequest buffering entire request

How can I stop QNetworkRequest from buffering the entire contents of a QIODevice during a put/post over an HTTPS connection? It works fine when posting over HTTP, but HTTPS causes the entire file to be read into memory before the post starts.
This isn't supported by the Qt classes. The reason is that Qt needs to know the total data length for the SSL headers. Chunked encoding is not supported from a send perspective. You can, however, roll your own: you'll need to create your own SSL header, then create your own chunks of SSL-encoded data.
I suggest you wrap this all up in your own class, so it's nicely re-usable (why not post it online?).
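The framing for "your own chunks" is simple enough to sketch in a few lines of Python (this illustrates the HTTP/1.1 chunked wire format generally, not the Qt API):

```python
def encode_chunked(chunks):
    """Frame an iterable of byte strings as an HTTP/1.1 chunked body."""
    out = b""
    for chunk in chunks:
        if chunk:  # a zero-length chunk would terminate the body early
            out += b"%X\r\n" % len(chunk) + chunk + b"\r\n"
    out += b"0\r\n\r\n"  # terminating zero-size chunk, no trailers
    return out
```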
BTW, most of this information was taken from a recent thread on the Qt-interest mailing list; a thread from 30 September 2009 discussed this exact problem.
You will probably have more success with Qt 4.6; it has some bugfixes in that area.