When I send a request with "GET" in C++ like this:
GET / HTTP/1.1\r\nHost: site.com\r\n\r\n
I receive the proper answer. But when I configure the request according to what browsers do (I captured the headers with a packet sniffer), the response from the server is 200 OK but the HTML body is garbage. The Content-Length shown in the headers also shows that I didn't get the correct HTML response.
The problem occurs when I add "Accept-Encoding: gzip, deflate". I send exactly what the browser sends, but I receive a different response than the browser does.
Why do you think this happens?
If you accept gzipped content, the server may send gzipped content. (In fact, some buggy servers send gzipped content even if you don't say you accept it!)
Notice that the returned headers will include Content-Encoding: gzip (or perhaps deflate instead of gzip). This tells you about the encoding. If it is gzipped, you need to decompress it with a library like zlib.
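As an illustration (not taken from the question's code), here is a minimal zlib sketch for inflating a gzip-encoded body; the function name and buffer size are made up, and error handling is reduced to exceptions:

#include <zlib.h>
#include <string>
#include <vector>
#include <stdexcept>

// Sketch: inflate a gzip-encoded HTTP body with zlib.
// 16 + MAX_WBITS tells zlib to expect a gzip wrapper
// (a raw "deflate" body would use different window bits).
std::string gunzipBody(const std::string& body) {
    z_stream strm{};
    if (inflateInit2(&strm, 16 + MAX_WBITS) != Z_OK)
        throw std::runtime_error("inflateInit2 failed");

    strm.next_in  = reinterpret_cast<Bytef*>(const_cast<char*>(body.data()));
    strm.avail_in = static_cast<uInt>(body.size());

    std::string out;
    std::vector<unsigned char> buf(16384);
    int ret = Z_OK;
    while (ret != Z_STREAM_END) {
        strm.next_out  = buf.data();
        strm.avail_out = static_cast<uInt>(buf.size());
        ret = inflate(&strm, Z_NO_FLUSH);
        if (ret != Z_OK && ret != Z_STREAM_END) {
            inflateEnd(&strm);
            throw std::runtime_error("inflate failed");
        }
        out.append(reinterpret_cast<char*>(buf.data()), buf.size() - strm.avail_out);
    }
    inflateEnd(&strm);
    return out;
}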
Another thing you might see in replies to HTTP/1.1 requests is that the connection won't necessarily close when the response is complete, and you might get Transfer-Encoding: chunked, which formats the body differently. A chunked response is a series of chunks, each with a hex length followed by that much content, terminated by an empty (zero-length) chunk. Non-chunked responses, by contrast, are sent with a Content-Length header which tells you how much to expect. The Content-Length is the length of the data as sent, which will be smaller if the data is compressed.
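Here is a rough POSIX-socket sketch of that framing, assuming the headers have already been consumed; readLine and readExact are helpers written here just for illustration, and trailer headers after the final chunk are ignored:

#include <string>
#include <cstdlib>
#include <sys/types.h>
#include <sys/socket.h>

// Read up to and including CRLF; the CR and LF are stripped.
static std::string readLine(int sock) {
    std::string line;
    char c;
    while (recv(sock, &c, 1, 0) == 1) {
        if (c == '\n') break;
        if (c != '\r') line += c;
    }
    return line;
}

// Read exactly n bytes (error handling omitted for brevity).
static std::string readExact(int sock, size_t n) {
    std::string data(n, '\0');
    size_t got = 0;
    while (got < n) {
        ssize_t r = recv(sock, &data[got], n - got, 0);
        if (r <= 0) break;
        got += static_cast<size_t>(r);
    }
    return data;
}

// Decode a Transfer-Encoding: chunked body.
std::string readChunkedBody(int sock) {
    std::string body;
    for (;;) {
        // Each chunk starts with its size in hex, optionally followed by ";extensions".
        size_t chunkSize = std::strtoul(readLine(sock).c_str(), nullptr, 16);
        if (chunkSize == 0) {
            readLine(sock);   // line after the final zero chunk (blank if no trailers)
            break;
        }
        body += readExact(sock, chunkSize);
        readLine(sock);       // CRLF that terminates each chunk's data
    }
    return body;
}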
Unless you implement decompression, don't send Accept-Encoding. Chunked responses are something you'll probably have to implement, though, since they are common in HTTP/1.1, and if you drop back to plain HTTP/1.0 you won't get to use the important Host header.
I am kind of a newbie in Python, hence the question.
I am trying to create a simple HTTP web server that can receive chunked data from a POST request.
I realized later that once a request is sent with chunked data, the Content-Length header will be set to zero, so reading the sent data with 'request.get_data()' fails.
Is there another way of reading the chunked data?
The request I receive does give me the data length in the 'X-Data-Length' header.
Did you write both the JS file-upload code and the Flask backend that handles the upload request? If not, you will need some help on the JS side to upload it.
One way to achieve chunked data upload is:
Chunk the file in the frontend with JS. Give each request headers for the total size, chunk number, chunk size, etc., and send each chunk in a separate POST request (you can use dropzone.js, for example; it will do the job for you, you just need to configure the parameters).
In the backend, create an upload API which reads those request headers and merges the file chunks back together.
I'm proxying HTTP/2 client -> HTTP/1.1 server, and I'm not sure how to handle multiple set-cookie in the response.
I believe set-cookie is the only header which is allowed to be set multiple times for HTTP/1.1 - is this the case for HTTP/2 as well?
If I receive set-cookie multiple times in the HTTP/1.1 response, how do I send that back to the client over HTTP/2? Can I merge it together into a single header, or do I need to send multiple set-cookie headers back via HTTP/2.0?
The HTTP/2 specification describes how to handle cookies in Section 8.1.2.5 of RFC 7540.
It is the case for HTTP/2 as well that set-cookie is allowed to be set multiple times - its format would not allow otherwise.
A client receiving multiple set-cookie headers may send multiple cookie headers, or may concatenate them.
The server receiving multiple cookie headers must concatenate them before invoking an application.
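For illustration, that recombination of split cookie header fields uses "; " as the delimiter per the spec; a rough sketch, with a made-up function name:

#include <string>
#include <vector>

// Sketch: recombine multiple HTTP/2 "cookie" header fields into the single
// value an HTTP/1.1 application expects. RFC 7540 mandates "; " as the delimiter.
std::string joinCookieFields(const std::vector<std::string>& crumbs) {
    std::string joined;
    for (const std::string& crumb : crumbs) {
        if (!joined.empty()) joined += "; ";
        joined += crumb;
    }
    return joined;
}
// e.g. {"a=b", "c=d"} -> "a=b; c=d"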
Hi, I want to refuse incoming requests with too large a body or headers in my Jetty server. I suppose I have to set up some filter, but I haven't found any solution. Do you have any suggestions? Thanks.
It is easy enough to build a Servlet Filter or Jetty Handler that pays attention to the request's Content-Length header and then rejects the request (responds with an HTTP error status code).
As for the header size limit, that's controlled by HttpConfiguration.setRequestHeaderSize(int).
However, there is a class of requests that uses chunked Transfer-Encoding; with these, there is no Content-Length, and you will just have to reject the request while reading from HttpServletRequest.getInputStream(), once it exceeds a certain size.
There is also the complication of MIME multipart request body content and how you determine that the request content is too large.
One other note: unfortunately, due to how HTTP connection handling must be performed, even if a client sends you too large a request body, the server still has to read that entire body content and throw it away. This is the half-closed scenario found in the spec; it's up to the client to see the early rejected HTTP response and close/terminate the connection.
I have a mongoose server, with commands callable via AJAX. I get a CORS error if I call it without sending HTTP headers from mongoose (but visiting the address in the browser works just fine), yet when I do send headers, it may take up to a minute before I get a response (it does work), both with AJAX and in the browser. My reply code:
//without headers
mg_printf(conn,reply.c_str());
//with headers
mg_printf(conn,"HTTP/1.1 200 OK\r\n"
"Content-Type: text/plain\n"
"Cache-Control: no-cache\n"
"Access-Control-Allow-Origin: *\n\n"
"%s\n", reply.c_str());
How can I speed this up? Am I sending my headers wrong?
OK, I found a solution: it works if I first check whether the request is an API call or not, and only send the headers when it is.
The reason mongoose is slow is that it waits for the rest of the content until it times out. And the reason it waits is that you do not set Content-Length, in which case the "end of content" marker is the closing of the connection.
So the correct solution is:
Add a Content-Length header with the correct body length (see the sketch below), OR
Alternatively, use the mg_send_header() and mg_printf_data() functions, in which case you don't need to bother with Content-Length because these functions use chunked encoding.
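For the first option, a sketch of what the reply from the question might look like with an explicit Content-Length (still using mg_printf; the declared length just has to match the number of body bytes actually written):

// Sketch: send the body with an explicit Content-Length so the client knows
// where the response ends. Note the consistent CRLF line endings and the
// blank line that terminates the headers.
mg_printf(conn,
          "HTTP/1.1 200 OK\r\n"
          "Content-Type: text/plain\r\n"
          "Cache-Control: no-cache\r\n"
          "Access-Control-Allow-Origin: *\r\n"
          "Content-Length: %d\r\n"
          "\r\n"
          "%s",
          (int) reply.size(), reply.c_str());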
I'm using the libcurl (C++) library to make requests to an IIS 7.5 server. The transaction is a common SOAP web service.
Everything is working fine: my requests send an "Expect: 100-continue" header, and the server responds with a 100 Continue and immediately after that a 200 OK code along with the web service response.
But from time to time, the client receives a 100 Continue message and, after that, another 100 code. This makes the client report an error, as it expects a final status code right after the server's 100 code. I read in the W3C HTTP/1.1 specification:
An origin server that sends a 100 (Continue) response MUST
ultimately send a final status code, once the request body is
received and processed, unless it terminates the transport
connection prematurely.
The word "ultimately" makes me loose the track. Is it possible/common that a server sends several 100 codes after a final status code?
If anyone has faced this issue before, can point me to any explanation on how to handle multiple 100 response codes with libcurl?
Thanks in advance
The current spec says this on 100-continue:
The 100 (Continue) status code indicates that the initial part of a request has been received and has not yet been rejected by the server. The server intends to send a final response after the request has been fully received and acted upon.
When the request contains an Expect header field that includes a 100-continue expectation, the 100 response indicates that the server wishes to receive the request payload body, as described in Section 5.1.1. The client ought to continue sending the request and discard the 100 response.
If the request did not contain an Expect header field containing the 100-continue expectation, the client can simply discard this interim response.
The way I read it, there is not supposed to be more than one 100 (Continue) response, and that's why libcurl works like this. I've never seen this (multiple 100 responses) happen, and I've been doing HTTP for a while (I am the main developer of curl). To change this behavior, I would expect you'd need to patch libcurl slightly to allow for it.
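A common client-side workaround, rather than patching libcurl, is to suppress the Expect header so the server never has a reason to send 100 responses at all; a sketch, assuming curl is an already-configured easy handle:

// Sketch: stop libcurl from sending "Expect: 100-continue" on POSTs.
// Assumes `curl` is an existing CURL* easy handle for the SOAP request.
struct curl_slist* headers = nullptr;
headers = curl_slist_append(headers, "Expect:");   // an empty value removes the header
curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
// ... perform the request ...
// curl_slist_free_all(headers);  // free the list once the handle is done with it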
It is not related to CURLOPT_FAILONERROR.
I suspect there is an error response that the client is not handling properly. Make sure you set the CURLOPT_FAILONERROR flag.
See this SO post for more information.