Does qtwebkit load complete page in single shot? - c++

I'm using QtWebKit for HTTP requests.
Does QtWebKit load an HTTP page in a single shot? Does QtWebKit create separate requests for JavaScript (.js), style sheet (.css) and image links?
If it creates separate requests for those links, do we have access to/control over those requests?

First of all, a single HTTP request fetches a single file, so there are multiple requests in any case.
Second, a single TCP connection can be re-used for several of those requests - a persistent connection.
See QNetworkAccessManager and HTTP persistent connection for more info.
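As for access and control: every request the page makes (the HTML itself, .js, .css, images) goes through the page's QNetworkAccessManager, so you can install your own manager on the QWebPage and override createRequest() to inspect, modify or block each sub-request. A minimal sketch (the class name and the logging are just illustrative):

#include <QNetworkAccessManager>
#include <QNetworkRequest>
#include <QNetworkReply>
#include <QDebug>

// Sees every sub-request (HTML, .js, .css, images) the page triggers.
class LoggingNetworkAccessManager : public QNetworkAccessManager
{
protected:
    QNetworkReply *createRequest(Operation op,
                                 const QNetworkRequest &request,
                                 QIODevice *outgoingData = 0)
    {
        qDebug() << "Requesting" << request.url();
        // Headers can be modified, or URLs blocked/redirected, here.
        return QNetworkAccessManager::createRequest(op, request, outgoingData);
    }
};

// Usage: install the manager on the page before loading anything.
// QWebView view;
// view.page()->setNetworkAccessManager(new LoggingNetworkAccessManager);
// view.load(QUrl("http://example.com/"));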

Related

Does libcurl load complete page in single shot?

I'm using libcurl to fire HTTP requests.
Does libcurl load the complete page in a single shot, or does it issue separate requests for sub-links on the page, i.e. .css or .png files?
libcurl does not automatically send any sub-requests for links found in the requested resource; fetching linked media behind the caller's back would be completely unreasonable behaviour for a transfer library.
To retrieve linked media, you have to extract the links from the resource you initially retrieve, and then do separate requests for them as needed (just like a web browser does behind the scenes).
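To illustrate: one libcurl transfer is one HTTP request for one resource, and any linked .css/.png has to be fetched with its own transfer. A rough sketch (URLs are placeholders, error handling is omitted, and the link extraction is left as a comment):

#include <curl/curl.h>
#include <string>
#include <iostream>

// Collect the response body into a std::string.
static size_t writeToString(char *ptr, size_t size, size_t nmemb, void *userdata)
{
    static_cast<std::string *>(userdata)->append(ptr, size * nmemb);
    return size * nmemb;
}

// One libcurl transfer == one HTTP request == one resource.
static std::string fetch(const std::string &url)
{
    std::string body;
    CURL *curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, writeToString);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    return body;
}

int main()
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    // First request: the HTML only; no .css/.png is fetched implicitly.
    std::string html = fetch("http://example.com/");
    // You would parse `html` for <link>/<img> URLs here, then request
    // each one yourself with a separate transfer:
    std::string css = fetch("http://example.com/style.css");
    std::cout << html.size() << " bytes of HTML, "
              << css.size() << " bytes of CSS\n";
    curl_global_cleanup();
    return 0;
}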

Best way to cache statistics that don't change often

I am writing a Django app, and as part of the job I need to crunch some data, generate statistics and serve them on a RESTful API. The statistics don't change that often; however, when they do change, the next request needs to serve the most up-to-date data. What I am currently doing is using a caching mechanism like django-redis to store the statistics; when a request is made, the view calls the cache client and serves its content. What I would prefer is a caching mechanism that prevents my view from ever being called and still provides up-to-date content. Is there such a way (a Django plugin) that would allow me to do this?
One way to accomplish this would be to use a 'reverse proxy cache' such as Nginx or Varnish.
Basically, when a request is made to your Django application, it first passes through the proxy cache. The proxy cache checks whether the response is available in the cache and, if so, serves it from the cache. If it isn't in the cache, the proxy hands the request off to Django to process it and issue a response. The response then passes back through the proxy cache, which stores it so that subsequent requests are served from the cache.
Invalidating items in the cache (write-through style, e.g. whenever an item is updated in the database) can be accomplished by issuing a cache purge command specific to the reverse proxy cache server you have installed.
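With Varnish, for instance, the purge command is usually just an HTTP request with the PURGE method sent to the proxy for the URL you want invalidated (the proxy has to be configured to accept PURGE from your host). The sketch below shows that request in Qt/C++ to stay consistent with the rest of this page; in a Django app you would fire the equivalent request from Python whenever the statistics change. Host and path are placeholders:

#include <QCoreApplication>
#include <QNetworkAccessManager>
#include <QNetworkRequest>
#include <QNetworkReply>
#include <QUrl>
#include <QDebug>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);
    QNetworkAccessManager manager;

    // Ask the reverse proxy (placeholder URL) to drop its cached copy of the
    // statistics endpoint so the next GET is regenerated by the application.
    QNetworkRequest request(QUrl("http://cache.example.com/api/statistics/"));
    QNetworkReply *reply = manager.sendCustomRequest(request, "PURGE");

    QObject::connect(reply, SIGNAL(finished()), &app, SLOT(quit()));
    app.exec();

    qDebug() << "Purge HTTP status:"
             << reply->attribute(QNetworkRequest::HttpStatusCodeAttribute).toInt();
    return 0;
}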

The procedure of Opening a website using IE8

I want to know which API is called by IE8 when I use it to open a website (like www.yahoo.com), so that I can hook that API and capture which website IE8 is currently opening.
When you enter a URL into the browser, the browser (usually) makes an HTTP request to the server identified by the URL. To make the request, the IP address of the server is required, which is obtained by a DNS lookup of the host (domain) name.
Once the response -- usually containing HTML markup -- is received, the browser renders it to display the webpage.
More details available here: what happens when you type in a URL in browser
So, in the general case, no "API" request as such is made (technically speaking, you can think of the original HTTP request to the server as an API request). The sort of "API" request you presumably mean happens when JavaScript executing on the page makes an Ajax HTTP request (XMLHttpRequest) to the web server to carry out some operation.
I am not sure about IE8, but the "developer tools" feature of most modern browsers (including IE9 and IE10) lets you see the Ajax HTTP requests that the webpage makes as it carries out different operations.
Hope this helps.
IE uses Microsoft's WinSock library API to interact with web servers.
You may want to look for a network monitoring/sniffing API, which you could use to examine HTTP requests, and determine the URLs the browser is using.
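For reference, the sequence described above (DNS lookup, TCP connect, HTTP request) boils down to a handful of WinSock calls, which is roughly what a hook or sniffer would see. A minimal sketch with no error handling (the host is just an example):

#include <winsock2.h>
#include <ws2tcpip.h>
#include <cstdio>
#include <cstring>
#pragma comment(lib, "ws2_32.lib")

int main()
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    // DNS lookup: host name -> address.
    addrinfo hints = {}, *result = 0;
    hints.ai_socktype = SOCK_STREAM;
    getaddrinfo("www.yahoo.com", "80", &hints, &result);

    // TCP connection to the web server.
    SOCKET s = socket(result->ai_family, result->ai_socktype, result->ai_protocol);
    connect(s, result->ai_addr, (int)result->ai_addrlen);

    // The HTTP request itself.
    const char *req = "GET / HTTP/1.1\r\nHost: www.yahoo.com\r\nConnection: close\r\n\r\n";
    send(s, req, (int)strlen(req), 0);

    // First chunk of the response (status line and headers).
    char buf[4096];
    int n = recv(s, buf, sizeof(buf) - 1, 0);
    if (n > 0) { buf[n] = '\0'; printf("%s\n", buf); }

    closesocket(s);
    freeaddrinfo(result);
    WSACleanup();
    return 0;
}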

Consume REST service that returns a single value

I am used to consuming web services via an XMLHttpRequest, to retrieve XML or JSON.
Recently, I have been working with SharePoint REST services, which can return a single value (for example 5532, or "Jeff"). I am wondering if there is a more efficient way than XMLHttpRequest to retrieve this single value. For example, would it work if I loaded the REST url via an iframe, then retrieved the iframe content? Or is there any other well established method?
[Edit] By single value, I really mean that the service returns just those characters; they are not even wrapped in a JSON or XML response.
Any inefficiency in XMLHttpRequest is largely due to the overhead of HTTP, which the iframe approach is going to incur, as well. Furthermore, if the Sharepoint service expects to speak HTTP, you're going to need to speak HTTP. However, an API does not have to run over HTTP to be RESTful, per Roy Fielding, so if the service provided an API over a raw socket -- or if you simply wanted to craft your own slimmer HTTP request -- you could use a Flash socket via a library like: http://code.google.com/p/javascript-as3-socket/. You could cut the request message size down to under 100 bytes, and could pull out the response data trivially.
The jQuery library is a well-established framework which you can use. There is also an article on Stack Overflow which answers your concrete question.

Connecting a desktop application with a website

I made an application using Qt/C++ that reads some values every 5-7 seconds and sends them to a website.
My approach is very simple. I am just reading the values I want to send and then making an HTTP POST to the website. I also send the username and password to the website.
The problem is that I cannot find out whether the request was successful. If I send the request and the server gets it, I will always get an HTTP 200; for example, if the password is not correct, there is no way to know it. That is just the way HTTP works.
Now I think I will need some kind of protocol to take care of the communication between the application and the website.
The question is what protocol to use?
If the action completes before the response headers are sent, you have the option of adding a custom status header to the response. If your website is built on PHP you can call header() to add the custom status of the operation.
header('XAppRequest-Status: complete');
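On the client side (the asker's app is Qt/C++) you can then read that header, or simply the HTTP status code, from the QNetworkReply once it has finished. A rough sketch with placeholder URL, form data and header name:

#include <QCoreApplication>
#include <QNetworkAccessManager>
#include <QNetworkRequest>
#include <QNetworkReply>
#include <QUrl>
#include <QDebug>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);
    QNetworkAccessManager manager;

    // Placeholder URL and form data for the periodic upload.
    QNetworkRequest request(QUrl("http://example.com/upload.php"));
    request.setHeader(QNetworkRequest::ContentTypeHeader,
                      "application/x-www-form-urlencoded");
    QNetworkReply *reply =
        manager.post(request, "user=me&password=secret&value=42");

    QObject::connect(reply, SIGNAL(finished()), &app, SLOT(quit()));
    app.exec();

    // Check the real outcome, not just "the request was sent".
    int httpStatus =
        reply->attribute(QNetworkRequest::HttpStatusCodeAttribute).toInt();
    QByteArray appStatus = reply->rawHeader("XAppRequest-Status");
    qDebug() << "HTTP status:" << httpStatus << "app status:" << appStatus;

    reply->deleteLater();
    return 0;
}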
If you can modify the server-side script, you could do the following.
On one end: make the HTTP POST request via Ajax and evaluate the result of that Ajax request.
On the server side: process the request and, if everything goes accordingly, send data back to the Ajax script that called it.
Does that solve your problem?