Whenever I add a Range header to an HTTP request for a .cfm or .cfc file on my server, I get a timeout. The server simply does not respond.
To debug, I created an empty file at /signup/test.cfm on my server. Then I made a normal request and a ranged request for the file:
Request:
GET /signup/test.cfm HTTP/1.1
Host: site.com
Response:
HTTP/1.1 200 OK
Request:
GET /signup/test.cfm HTTP/1.1
Host: site.com
Range: bytes=0-40960
Response:
timeout in transmission from site.com
If I include the Range header in a request to a static file, there is no problem.
What could be causing this, and how can I debug it? The file I am requesting is empty and Application.cfc is empty, so no code should be executing. If no code is executing, does that mean it is a server configuration problem?
EDIT: By adding a tag to my script, I have confirmed that the ColdFusion code does execute. The response is just never sent back to me.
Most likely the ColdFusion handler doesn't support ranged requests, while the static file handler does.
Range is for fetching part of the content and subsequently fetching more later, e.g. video streaming or resuming a download. That is not something easily handled by a script handler, since a script's response is meant to be generated and returned all at once.
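If you'd rather reproduce this from code than by hand-editing requests, here is a minimal sketch using libcurl (which also comes up later on this page); the URL is just the placeholder from the question:

// Minimal sketch, assuming libcurl is installed; site.com is the
// placeholder host from the question. Sends the same ranged request
// as above so the hang can be reproduced and timed.
#include <curl/curl.h>

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://site.com/signup/test.cfm");
        curl_easy_setopt(curl, CURLOPT_RANGE, "0-40960"); // adds Range: bytes=0-40960
        curl_easy_setopt(curl, CURLOPT_TIMEOUT, 10L);     // fail fast instead of hanging
        CURLcode res = curl_easy_perform(curl);
        // CURLE_OPERATION_TIMEDOUT here confirms the server never answered
        curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
    return 0;
}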
Here's a Stack Overflow answer with a sample range request that shows how this works:
Sample http range request session
I've managed to make a file downloader in C++ (using Winsock). It can download any simple direct link to a file, like www.page.com/image.png.
I want to make it download all of the images from an entire page, such as all the images from a 4chan thread, but I don't know what I should send in the HTTP request to get the page's source. How can I request the source of a webpage?
You don't send anything special in the HTTP request, in the manner you're thinking.
An HTTP request asks for a single document and gets a single document back from the server.
To download an entire page, you will have to parse the downloaded HTML document, extract all the relative links from the HTML source, then issue a separate HTTP request for every image, CSS, JS, etc. referenced from the main document.
This is how tools like wget (with its --recursive option) download entire pages.
If the page is located at the root of the http://www.page.com server, you would send a GET request to the www.page.com server asking for the / resource:
GET / HTTP/1.1
Host: www.page.com
Let's say the page was actually located at http://www.page.com/thepage.html. You would send a GET request asking for /thepage.html instead:
GET /thepage.html HTTP/1.1
Host: www.page.com
Either way, you would then have to parse the resulting HTML to get the individual URLs from all the <img> tags on the page.
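As a rough illustration of those two steps, here is a sketch that uses libcurl instead of raw Winsock to keep it short, and a naive regex where a real crawler would use an HTML parser; the URL is the hypothetical one from the answer:

// Minimal sketch: fetch a page, then list the src attributes of its
// <img> tags. Each listed URL would then be fetched with its own GET.
#include <curl/curl.h>
#include <iostream>
#include <regex>
#include <string>

// libcurl write callback: append received bytes to a std::string.
static size_t collect(char *data, size_t size, size_t nmemb, void *out) {
    static_cast<std::string *>(out)->append(data, size * nmemb);
    return size * nmemb;
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    std::string html;
    CURL *curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL, "http://www.page.com/thepage.html");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &html);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);

    // Pull out every src="..." from an <img> tag. A real crawler should
    // use an HTML parser and resolve relative URLs against the page URL.
    std::regex img("<img[^>]*src=\"([^\"]+)\"", std::regex::icase);
    for (auto it = std::sregex_iterator(html.begin(), html.end(), img);
         it != std::sregex_iterator(); ++it)
        std::cout << (*it)[1] << "\n"; // fetch each of these separately
    curl_global_cleanup();
    return 0;
}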
I'm trying to do a simple web service test using JMeter. I'm using a SOAP/XML-RPC Request sampler with the simplest configuration:
URL = https://...address here..?wsdl
SOAP action is empty and Use KeepAlive stays unchecked
The XML request is loaded from a file, correctly
In addition, I have added a View Results Tree listener to see the results.
That's all.
The problem is that I keep getting the whole WSDL file as a response (I expected a normal SOAP response to my XML SOAP request).
I have tested this request and URL in SoapUI and everything works fine. Do I need to add something more? Maybe this is a problem with the HTTPS protocol?
What's more, I have tried the WebService(SOAP) Request (DEPRECATED) sampler, but I get an exception because of HTTPS when I try to load the WSDL.
Any ideas how to solve my problem?
Here is the request from the View Results Tree:
POST https://...address here..?wsdl
POST data:
Filename: D:\install\apache-jmeter-2.11\TEST\request.xml
<actual file content, not shown here>
[no cookies]
Request Headers:
Content-Type: text/xml
Connection: close
User-Agent: Jakarta Commons-HttpClient/3.1
Host: hostname
Content-Length: 1826
EDIT: I solved this problem with the following configuration:
URL = https://..address here.. (no ?wsdl)
SOAP action specified (URL from the WSDL)
Use KeepAlive checked
XML pasted into the text box
However, when I load the XML from a file, the test fails with the message "could not parse stream". The same message pasted into the text box works perfectly. What's wrong?
Configuration:
URL : scheme://..address here.. (no ?wsdl)
SOAP action specified (URL from the WSDL)
Use KeepAlive checked
path to the XML file entered in the appropriate field
File encoding:
The XML was not loading as I expected because of its encoding.
I had saved the file as UTF-8 with a BOM, while my service expected UTF-8 without a BOM.
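For reference, a UTF-8 BOM is just the three bytes EF BB BF at the start of the file. A small C++ sketch (the file names are hypothetical) that checks for and strips it:

// Minimal sketch: read a file, drop a leading UTF-8 BOM if present,
// and write the result to a new file the service will accept.
#include <fstream>
#include <iostream>
#include <string>

int main() {
    std::ifstream in("request.xml", std::ios::binary);
    std::string data((std::istreambuf_iterator<char>(in)),
                     std::istreambuf_iterator<char>());
    if (data.size() >= 3 && data.compare(0, 3, "\xEF\xBB\xBF") == 0) {
        data.erase(0, 3); // drop the three BOM bytes
        std::ofstream("request_nobom.xml", std::ios::binary) << data;
        std::cout << "BOM removed\n";
    } else {
        std::cout << "No UTF-8 BOM found\n";
    }
    return 0;
}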
I'm using the libcurl (C++) library to make requests to an IIS 7.5 server. The transaction is a common SOAP web service call.
Everything works fine: my requests send an "Expect: 100-continue" header, and the server responds with a 100 Continue and immediately after that a 200 OK along with the web service response.
But from time to time, the client receives a 100 Continue message and, after that, another 100 code. This makes the client report an error, as it expects a final status code right after the server's 100 code. I read in the W3C HTTP/1.1 specification:
An origin server that sends a 100 (Continue) response MUST ultimately send a final status code, once the request body is received and processed, unless it terminates the transport connection prematurely.
The word "ultimately" makes me loose the track. Is it possible/common that a server sends several 100 codes after a final status code?
If anyone has faced this issue before, can point me to any explanation on how to handle multiple 100 response codes with libcurl?
Thanks in advance
The current spec says this on 100-continue:
The 100 (Continue) status code indicates that the initial part of a request has been received and has not yet been rejected by the server. The server intends to send a final response after the request has been fully received and acted upon.
When the request contains an Expect header field that includes a 100-continue expectation, the 100 response indicates that the server wishes to receive the request payload body, as described in Section 5.1.1. The client ought to continue sending the request and discard the 100 response.
If the request did not contain an Expect header field containing the 100-continue expectation, the client can simply discard this interim response.
The way I read it, there is not supposed to be more than one 100 Continue response, and that's why libcurl works like this. I've never seen this (multiple 100 responses) happen and I've been doing HTTP for a while (I am the main developer of curl). To change this behavior I would expect you'd need to patch libcurl slightly to allow for it.
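One common workaround, which is my addition rather than part of the answer above, is to stop sending the Expect header entirely, so the server never has a reason to emit any 100 responses at all. A minimal libcurl sketch (the URL and body are placeholders):

// Minimal sketch: sending an empty "Expect:" header tells libcurl not
// to ask for 100-continue, so the POST body goes out immediately and
// the server's first response is the final status code.
#include <curl/curl.h>

int main() {
    CURL *curl = curl_easy_init();
    struct curl_slist *headers = NULL;
    headers = curl_slist_append(headers, "Expect:"); // disable Expect: 100-continue
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/service");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "<soap/>"); // body placeholder
    curl_easy_perform(curl);
    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    return 0;
}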
It is not related to CURLOPT_FAILONERROR.
I suspect there is an error that is not being handled by the client properly. Make sure you set the CURLOPT_FAILONERROR flag.
See this SO post for more information.
When I send an HTTP request using a wrong server address, like 127.0.0.1, as the server address in a URL, libcurl returns CURLE_OK and gives me HTTP code 0. However, I get HTTP code 404 when I send the same request with IE. Does anyone know how I can get an error code rather than 0 from libcurl when sending a request like that?
libcurl returns CURLE_OK when the transfer went fine. Getting a 404 from an HTTP server is considered a fine transfer. You can make >= 4xx HTTP response codes cause a libcurl error by setting the CURLOPT_FAILONERROR option.
Alternatively, and this may be the nicer way, you can extract the HTTP response code after the transfer, for example with curl_easy_getinfo(), to see what the HTTP server thought about the resource you requested.
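A minimal sketch of both approaches from this answer (the URL is a placeholder):

// Option 1: CURLOPT_FAILONERROR makes HTTP >= 400 fail the transfer.
// Option 2: curl_easy_getinfo() reads the response code afterwards.
#include <curl/curl.h>
#include <iostream>

int main() {
    CURL *curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/missing");
    curl_easy_setopt(curl, CURLOPT_FAILONERROR, 1L);
    CURLcode res = curl_easy_perform(curl);
    if (res == CURLE_HTTP_RETURNED_ERROR)
        std::cout << "Server returned an HTTP error\n";
    long code = 0;
    curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &code);
    std::cout << "HTTP response code: " << code << "\n"; // 0 if no response
    curl_easy_cleanup(curl);
    return 0;
}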
Try using it to visit a site that's actually running a web server, and try to retrieve a file that doesn't exist. For example, http://www.google.com/404. Your browser is almost certainly not actually getting a 404 from visiting 127.0.0.1, even if it's telling you that's what it got.
I have a VB.NET app that sends a POST request to a script on my server that is behind Cloudflare. I always get an error when sending the request from the app, but using a Firefox extension to simulate the request works fine. Using Fiddler, I think I have found the cause of the problem:
When sending the request with the Firefox add-on, an extra header is attached to the request:
Cookie: __cfduidxxxxxxxxxxxx
This cookie is from Cloudflare, but where does it come from, i.e. how can I get this cookie value and send it with my requests from the VB app? I tried copying and pasting the cookie into the app and it worked fine, so I conclude that I do need this cookie. However, the value is unique for each user, so I cannot simply hardcode it into the app.
Quick side note: not sure if this helps, but if I send a GET request from the VB app, it works fine without the __cfduid cookie.
Look for a Set-Cookie header coming back from the server in its response. The server will expect to get that value back on subsequent requests in a Cookie: header. This value is usually an opaque string that is scoped to a path, although not always.
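To illustrate the round trip, here is a minimal sketch in C++ with libcurl, since that's what the rest of this page uses; in VB.NET the analogous approach would be attaching a CookieContainer to your requests so cookies persist across them. The URLs are placeholders:

// Minimal sketch: enabling libcurl's cookie engine makes the client
// store Set-Cookie values and replay them automatically, which is
// exactly the __cfduid round trip described above.
#include <curl/curl.h>

int main() {
    CURL *curl = curl_easy_init();
    // An empty filename enables the in-memory cookie engine.
    curl_easy_setopt(curl, CURLOPT_COOKIEFILE, "");
    // First request: the server's Set-Cookie (e.g. __cfduid) is stored.
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_perform(curl);
    // Second request on the same handle: stored cookies are sent back
    // in a Cookie: header without any manual copying.
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/script");
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "field=value"); // POST body placeholder
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    return 0;
}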