How to stop Curl from caching data - c++

I have an HTTP streaming video server that I access using a URL like so:
https://192.168.50.23:8011/livevideo/8
If I paste this URL into my web browser's address bar I can see live video, but after a few minutes the video stops streaming. If I kill the browser and repeat the process I get video for another few minutes before it stops again.
I figured it must be something to do with the web browser caching the MJPEG frames and running out of memory, so as an experiment I mocked up a simple HTML page like so:
<!DOCTYPE html>
<html>
<body>
<img src="https://192.168.50.23:8011/livevideo/8" width="500" height="500">
</body>
</html>
And the result is that the video streams constantly and never stops. So I guess the <img> tag takes care of disposing of the MJPEG frames and does not cause a crash like before.
I used Firefox to analyse the HTTP requests and responses for both scenarios above to see if there is anything different, and here are the results:
URL pasted into web browser address bar:
URL embedded into web page:
The only difference seems to be the Accept: parameters.
Now to move on to my real problem. I am using the same URL in my C++ curl program and I am seeing the exact same behaviour: I receive video data for a few minutes and then, all of a sudden, the curl callbacks stop.
I have used the following options and headers in my curl program:
CURL *pEasy = curl_easy_init ();
curl_easy_setopt ( pEasy, CURLOPT_USERNAME, user.c_str() );
curl_easy_setopt ( pEasy, CURLOPT_PASSWORD, pass.c_str() );
curl_easy_setopt ( pEasy, CURLOPT_URL, urlToConnectTo.c_str() );
//Set authentication
curl_easy_setopt ( pEasy, CURLOPT_HTTPAUTH, CURLAUTH_BASIC );
curl_easy_setopt ( pEasy, CURLOPT_SSL_VERIFYPEER, false );
curl_easy_setopt ( pEasy, CURLOPT_SSL_VERIFYHOST, false );
curl_easy_setopt ( pEasy, CURLOPT_HEADER, 1L );
curl_easy_setopt ( pEasy, CURLOPT_NOBODY, 0L );
curl_easy_setopt ( pEasy, CURLOPT_WRITEFUNCTION, OnReceiveHttpBodyResponse );
curl_easy_setopt ( pEasy, CURLOPT_WRITEDATA, pEasy );
struct curl_slist *headers = NULL;
headers = curl_slist_append( headers, "User-Agent: MyCurlDll");
headers = curl_slist_append( headers, "Content-Type: text/xml");
headers = curl_slist_append( headers, "Connection: Keep-Alive");
headers = curl_slist_append( headers, "Accept: image/png, text/xml, text/html, application/xml");
headers = curl_slist_append( headers, "Cache-Control: max-age=0");
curl_easy_setopt(pEasy, CURLOPT_HTTPHEADER, headers);
curl_multi_add_handle(m_curlMulti, pEasy);
//Process this curl handle in another function
What can I do to stop this behaviour in curl? I assume it must be caching in the same way the browser was.

Sorry, but you've ended up with the wrong conclusion and thus you're sort of barking up the wrong tree here.
curl doesn't cache anything; it simply sends an HTTP request to the server (and with CURLOPT_VERBOSE you can easily inspect it) and then pipes all the data it receives on to the write callback that you provide. There's no caching, no middle layers, no magic.
If you stop getting traffic before it should've ended, it is because there's no more data being delivered, or rather being received by libcurl. It could be the server that stopped sending, or it could be something in your network that interferes. libcurl sent the request and it'll keep waiting for data to arrive until the entire thing has been delivered.
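As a diagnostic (my own suggestion, not part of the original answer): turn on verbose logging and let libcurl abort when the stream stalls, so you can see whether the server simply stopped sending. CURLOPT_VERBOSE, CURLOPT_LOW_SPEED_LIMIT and CURLOPT_LOW_SPEED_TIME are standard libcurl options; the "1 byte/second for 10 seconds" threshold below is just an example value.
#include <curl/curl.h>

// Sketch: make a stalled stream visible instead of waiting on it forever.
// If less than 1 byte/second arrives for 10 seconds, the transfer ends with
// CURLE_OPERATION_TIMEDOUT and the verbose log (on stderr) shows what
// happened last on the connection.
void AddStallDetection(CURL *pEasy)
{
    curl_easy_setopt(pEasy, CURLOPT_VERBOSE, 1L);          // log request/response traffic
    curl_easy_setopt(pEasy, CURLOPT_LOW_SPEED_LIMIT, 1L);  // bytes per second considered "too slow"
    curl_easy_setopt(pEasy, CURLOPT_LOW_SPEED_TIME, 10L);  // for this many seconds
}
With the multi interface, the finished transfer is then reported through curl_multi_info_read, and its result code tells you whether the stream stalled (timeout) or ended cleanly.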

Related

libcurl: curl_easy_perform blocks unless CURLOPT_READFUNCTION is set

I am trying to use libcurl from C++ to make REST/HTTP requests. I noticed curl_easy_perform blocks, but if I set CURLOPT_READFUNCTION it doesn't. I just want to understand why that is; I am new to libcurl and the HTTP/REST protocol.
Here is the code:
m_pCurl = curl_easy_init();
curl_easy_setopt(m_pCurl, CURLOPT_URL, "https://blahblahblah/api/auth/user/login");
curl_easy_setopt(m_pCurl, CURLOPT_VERBOSE, 1L);
curl_easy_setopt(m_pCurl, CURLOPT_POST, 1);
curl_easy_setopt(m_pCurl, CURLOPT_COOKIE, "SKEY=BLAHBLAHBLAH");
struct curl_slist *list = NULL;
list = curl_slist_append(list, "Accept: application/json");
list = curl_slist_append(list, "Connection: keep-alive");
list = curl_slist_append(list, "Expect:");
list = curl_slist_append(list, "Content-Type: application/json");
list = curl_slist_append(list, "x-website-parameters: LALALALA");
curl_easy_setopt(m_pCurl, CURLOPT_HTTPHEADER, list);
// Callbacks
readarg_t rarg;
// readcb is a callback function
// Removing the two lines below will cause curl_easy_perform to hang
curl_easy_setopt(m_pCurl, CURLOPT_READFUNCTION, readcb);
curl_easy_setopt(m_pCurl, CURLOPT_READDATA, &rarg);
CURLcode res = curl_easy_perform(m_pCurl);
Note: Some of the encoded data are changed above.
Any help would be greatly appreciated.
Thanks,
K
According to The Manual...
CURLOPT_READFUNCTION explained
...
If you set this callback pointer to NULL, or don't set it at all, the default internal read function will be used. It is doing an fread() on the FILE * userdata set with CURLOPT_READDATA.
However, you also don't set CURLOPT_READDATA. So looking again at the manual...
CURLOPT_READDATA explained
...
By default, this is a FILE * to stdin.
So the reason your program "hangs" appears to be that it is waiting for something to arrive on standard input (stdin).
So the way it is supposed to work is this.
1) If you do nothing the data sent to the server comes from the standard input (which is often the keyboard).
2) If you set only CURLOPT_READDATA then it must be a FILE* you opened to an input file that contains the data you want to send.
3) If you set CURLOPT_READFUNCTION then CURLOPT_READDATA can point to anything your function needs to fulfil its task of sending data to the server.
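To make point 3 concrete, here is a minimal sketch (my own illustration, not code from the question; the readarg_t fields and the way it is filled are assumptions) of a read callback that feeds a request body held in memory:
#include <curl/curl.h>
#include <cstring>
#include <string>

// Hypothetical state handed to the callback via CURLOPT_READDATA.
struct readarg_t {
    std::string payload;   // body to send to the server
    size_t offset = 0;     // how much has already been handed to libcurl
};

// libcurl calls this repeatedly; returning 0 signals the end of the data.
static size_t readcb(char *buffer, size_t size, size_t nitems, void *userdata)
{
    readarg_t *arg = static_cast<readarg_t *>(userdata);
    size_t room = size * nitems;
    size_t left = arg->payload.size() - arg->offset;
    size_t copy = left < room ? left : room;
    std::memcpy(buffer, arg->payload.data() + arg->offset, copy);
    arg->offset += copy;
    return copy;
}
With this in place CURLOPT_READDATA points at a readarg_t instead of a FILE*, so curl_easy_perform no longer sits waiting on stdin. For a small known buffer you can also skip the read callback entirely and hand the body to CURLOPT_POSTFIELDS.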

Libcurl progress callback not working with multi

I'm trying to manage the progress of a download with libcurl in C++.
I have managed to do this with curl_easy, but the issue with curl_easy is that it blocks the program until the request has completed.
I need to use curl_multi so the HTTP request is asynchronous, but when I try changing to curl_multi, my progress function stops working.
I have the following curl_easy request code:
int progressFunc(void* p, double TotalToDownload, double NowDownloaded, double TotalToUpload, double NowUploaded) {
std::cout << TotalToDownload << ", " << NowDownloaded << std::endl;
return 0;
}
FILE* file = std::fopen(filePath.c_str(), "wb");
curl_easy_setopt(curl, CURLOPT_URL, url);
curl_easy_setopt(curl, CURLOPT_NOPROGRESS, false);
curl_easy_setopt(curl, CURLOPT_XFERINFOFUNCTION, progressFunc);
curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, writeData);
curl_easy_setopt(curl, CURLOPT_WRITEDATA, file);
CURLcode res = curl_easy_perform(curl);
which works perfectly and prints to the console the progress of the download.
However, when trying to modify this code to use curl_multi instead, the file does not download correctly (shows 0 bytes) and the download progress callback function shows only 0, 0.
FILE* file = std::fopen(filePath.c_str(), "wb");
curl_easy_setopt(curl, CURLOPT_URL, url);
curl_easy_setopt(curl, CURLOPT_NOPROGRESS, false);
curl_easy_setopt(curl, CURLOPT_XFERINFOFUNCTION, progressFunc);
curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, writeData);
curl_easy_setopt(curl, CURLOPT_WRITEDATA, file);
curl_multi_add_handle(curlm, curl);
int runningHandles;
CURLMcode res = curl_multi_perform(curlm, &runningHandles);
TL;DR: you are supposed to call curl_multi_perform in a loop. If you don't use an event loop with poll/epoll, you should probably stick with using curl_easy in a separate thread.
The whole point of the curl_multi API is not blocking: instead of magically downloading the entire file in a single call, you use epoll or similar means to monitor curl's non-blocking sockets and invoke curl_multi_perform each time some data arrives from the network. When you use its multi mode, curl itself does not start any internal threads and does not monitor its sockets; you are expected to do that yourself. This allows writing highly performant event loops that run multiple simultaneous curl transfers in the same thread. People who need that usually already have the necessary harness or can easily write it themselves.
The first time you invoke curl_multi_perform it will most likely return before the DNS resolution completes and/or before the TCP connection is accepted by the remote side. So the amount of payload data transferred in the first call will indeed be 0. Depending on server configuration, the second call might not transfer any payload either. By "payload" I mean actual application data (as opposed to DNS requests, SSL negotiation, HTTP headers and HTTP/2 frame metadata).
To actually complete a transfer you have to repeatedly invoke epoll_wait, curl_multi_perform and a number of other functions until you are done. Curl's corresponding example stops after completing one transfer, but in practice it is often more beneficial to create a permanently running thread that handles all HTTP transfers for the application's lifetime.
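For completeness, here is a minimal driving loop (my sketch, not the answerer's code) that uses curl_multi_wait instead of a hand-rolled epoll loop; it assumes curlm already has the easy handle added exactly as in the question:
// Drive the transfer to completion: curl_multi_perform must be called
// repeatedly, and curl_multi_wait blocks until a socket is ready or the
// timeout expires. The progress callback fires from inside curl_multi_perform.
int runningHandles = 1;
while (runningHandles > 0) {
    CURLMcode mc = curl_multi_perform(curlm, &runningHandles);
    if (mc == CURLM_OK && runningHandles > 0)
        mc = curl_multi_wait(curlm, nullptr, 0, 1000 /* ms */, nullptr);
    if (mc != CURLM_OK)
        break;  // bail out on a multi-interface error
}
fclose(file);  // flush what the write callback wrote to disk
curl_multi_poll (added in curl 7.66.0) is a drop-in replacement for curl_multi_wait that also sleeps properly when there are no file descriptors to wait on.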

libcurl download speed too slow

I have a case where I'm using libcurl with C++ to download a 240 MB file, but it takes 15 minutes to do so. I have made sure that my write callback is as fast as possible. It is just writing into an in-memory buffer that is plenty big enough for the data. When I use the curl command to download this same file from the same server, it takes less than a minute. When I use a browser to download the file, it also takes less than a minute. Is it possible that I'm using libcurl incorrectly? Here's a snippet of my code...
wxString postFields;
postFields += "package_name=" + packageName;
if( desiredVersion != 0 )
postFields += wxString::Format( "&package_version=v%d", desiredVersion );
curl_easy_reset( curlHandleEasy );
curl_slist_free_all( headers );
headers = nullptr;
headers = curl_slist_append( headers, "Content-Type: application/x-www-form-urlencoded" );
headers = curl_slist_append( headers, "Accept: application/x-zip-compressed" );
url = "http://" + packageServer + ":7000/package_download";
urlData = url.c_str();
binResponse = new BinaryResponse( packageSize );
curl_easy_setopt( curlHandleEasy, CURLOPT_HTTPHEADER, headers );
curl_easy_setopt( curlHandleEasy, CURLOPT_POSTFIELDS, postFieldsData );
curl_easy_setopt( curlHandleEasy, CURLOPT_URL, urlData );
curl_easy_setopt( curlHandleEasy, CURLOPT_WRITEFUNCTION, &Response::WriteCallback );
curl_easy_setopt( curlHandleEasy, CURLOPT_WRITEDATA, binResponse );
curlCode = curl_easy_perform( curlHandleEasy );
Is there something wrong with my request setup? If I change my write callback to be a dummy routine that just claims to have written the data but actually throws it away (to be as fast as possible), my download rate is still super slow.
Is it possible that the bottleneck is some sort of security scanning on my network that the browser and the curl command aren't subjected to?
I had claimed to have tested with a dummy write function, but I actually hadn't. When I did test with a dummy write function, the download speed was fast.
So I investigated why my write function was slow and discovered that I was using an in-memory stream class that wasn't initialized with the required buffer size, so it was growing as needed. The growth increment was probably small, and every time the buffer grew it probably had to copy the entire contents of the old buffer into the new one... so, long story short: I'm dumb, and the write stream was slow.
Now I initialize my memory stream to the total size of the file so that it never has to grow. Ugh! Problem solved.
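For anyone hitting the same thing, here is a minimal sketch of the pattern that fixed it (my illustration, not the poster's actual classes; expectedSizeInBytes stands for the download size known up front):
#include <curl/curl.h>
#include <string>

// Append incoming data to a pre-reserved std::string so the buffer never
// has to grow (and re-copy its whole contents) during the download.
static size_t WriteToString(char *ptr, size_t size, size_t nmemb, void *userdata)
{
    std::string *buf = static_cast<std::string *>(userdata);
    buf->append(ptr, size * nmemb);
    return size * nmemb;  // tell libcurl everything was consumed
}

// ...
std::string body;
body.reserve(expectedSizeInBytes);  // hypothetical: the package size known in advance
curl_easy_setopt(curlHandleEasy, CURLOPT_WRITEFUNCTION, WriteToString);
curl_easy_setopt(curlHandleEasy, CURLOPT_WRITEDATA, &body);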

error 411 Length Required c++, libcurl PUT request

Even though I set the Content-Length header, I'm getting a 411 error. I'm trying to send a PUT request.
struct curl_slist *headers = NULL;
curl = curl_easy_init();
std::string paramiters =
"<data_file><archive>false</archive><data_type_id>0a7a184a-dcc6-452a-bcd3-52dbd2a83ea2</data_type_id><data_file_name>backwardstep.stt</data_file_name><description>connectionfile</description><job_id>264cf297-3bc7-42e1-8edc-5e2948ee62b6</job_id></data_file>";
if (curl) {
headers = curl_slist_append(headers, "Accept: */*");
headers = curl_slist_append(headers, "Content-Length: 123");
headers = curl_slist_append(headers, "Content-Type: application/xml");
curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
curl_easy_setopt(curl, CURLOPT_VERBOSE, true);
curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "PUT");
curl_easy_setopt(curl, CURLOPT_URL,
"..url/data_files/new/link_upload.xml");
curl_easy_setopt(curl, CURLOPT_USERPWD, "kolundzija#example.ch:PASS");
curl_easy_setopt(curl, CURLOPT_HEADER, 1L);
curl_easy_setopt(curl, CURLOPT_POSTFIELDS, paramiters.c_str());
curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE,
strlen(paramiters.c_str()));
curl_easy_setopt(curl, CURLOPT_FAILONERROR, 1L);
res = curl_easy_perform(curl);
and this is the verbose output showing the request headers and the server's error:
Host: cloud...
Transfer-Encoding: chunked
Accept: */*
Content-Length: 123
Content-Type: application/xml
Expect: 100-continue
* The requested URL returned error: 411 Length Required
* Closing connection #0
OK, I honestly cannot find your error. But there is an example on the curl website (first Google hit for "curl put c code"): http://curl.haxx.se/libcurl/c/httpput.html
Maybe mixing the easy and advanced interface confuses curl.
What confuses me are the options CURLOPT_POSTFIELDS and CURLOPT_POSTFIELDSIZE. This is a PUT request, so why are they even there? With PUT the arguments are in the URL; the body is opaque, at least from the perspective of HTTP.
You DON'T need to use a file and do NOT use custom requests. Instead set the UPLOAD and PUT options as specified in the documentation here:
http://curl.haxx.se/libcurl/c/httpput.html
Unlike the example above, where they use a file as the data source, you can USE ANYTHING to hold your data. It all comes down to using a callback function with this option:
CURLOPT_READFUNCTION
The difference is in how you set up your callback function, which only has to do two things:
1. Measure the size of your payload (your data) in bytes.
2. Copy the data to the memory address that curl passes to the callback (that is the first argument of your callback function, the FIRST void pointer in this definition):
static size_t read_callback(void *ptr, size_t size, size_t nmemb, void *stream)
That is the ptr argument.
Use memcpy to copy the data.
Take a look at this link. I ran into the same problem as you and was able to solve it using this approach. One thing YOU need to keep in mind is that you ALSO need to set the file size before sending the curl request.
How do I send long PUT data in libcurl without using file pointers?
Use CURLOPT_INFILESIZE or CURLOPT_INFILESIZE_LARGE for that.
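Putting those pieces together, a minimal sketch of an in-memory PUT along the lines described above (my illustration, not the original poster's code; UploadSource and the placeholder body are assumptions):
#include <curl/curl.h>
#include <cstring>
#include <string>

struct UploadSource {
    const std::string *data = nullptr;  // body to upload
    size_t offset = 0;                  // bytes already handed to libcurl
};

// Copies the next chunk of the in-memory body into the buffer curl provides.
static size_t read_callback(void *ptr, size_t size, size_t nmemb, void *stream)
{
    UploadSource *src = static_cast<UploadSource *>(stream);
    size_t room = size * nmemb;
    size_t left = src->data->size() - src->offset;
    size_t copy = left < room ? left : room;
    std::memcpy(ptr, src->data->data() + src->offset, copy);
    src->offset += copy;
    return copy;  // returning 0 signals the end of the upload body
}

// ...
std::string body = "<data_file>...</data_file>";   // placeholder payload
UploadSource src;
src.data = &body;
curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);        // an HTTP upload defaults to PUT, no CUSTOMREQUEST needed
curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_callback);
curl_easy_setopt(curl, CURLOPT_READDATA, &src);
curl_easy_setopt(curl, CURLOPT_INFILESIZE_LARGE, (curl_off_t)body.size());
With the size set, libcurl sends the Content-Length itself, so there is no need to add that header by hand; a hand-written Content-Length combined with chunked Transfer-Encoding is exactly the mix that tends to end in 411.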

libcurl 7.26.0 : garbage at the end of every http response

I am using the cocos2d-x game engine to develop a game. The game fetches a lot of data from the server, so to reduce loading time and data consumption I used gzip encoding.
curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, "gzip,deflate");
But strangely, I see garbage at the end of each HTTP response; when I don't use gzip, every HTTP response is fine and there is no garbage at the end.
Please suggest what the possible reason for this issue could be. Your help will be appreciated.
Thanks.
Try
curl_easy_cleanup(curl);
and
curl_global_cleanup();
after you have finished sending the request with curl_easy_perform(), then see if the error still exists.
I have faced the same bug in C with the same library.
curl_easy_setopt(curl, CURLOPT_POSTFIELDS, postString);
curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, strlen(postString));
You can try to ensure that the length given in CURLOPT_POSTFIELDSIZE is the same as the actual length of the CURLOPT_POSTFIELDS data.
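To illustrate that last point (a sketch of mine, not the answerer's code; buildRequestBody is a hypothetical helper): if CURLOPT_POSTFIELDSIZE claims more bytes than the buffer actually holds, whatever lies past the end of the string is sent in the request as well, and a confused server can then produce an odd response. Deriving both the body and its size from the same object keeps them in sync:
#include <curl/curl.h>
#include <string>

// Take the POST body and its size from the same std::string so they can
// never disagree.
std::string postString = buildRequestBody();           // hypothetical helper
curl_easy_setopt(curl, CURLOPT_POSTFIELDS, postString.c_str());
curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, (long)postString.size());
curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, "gzip,deflate");  // libcurl then decompresses the response itself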