libcurl download speed too slow - c++

I have a case where I'm using libcurl with C++ to download a 240 MB file, but it takes 15 minutes to do so. I have made sure that my write callback is as fast as possible. It is just writing into an in-memory buffer that is plenty big enough for the data. When I use the curl command to download this same file from the same server, it takes less than a minute. When I use a browser to download the file, it also takes less than a minute. Is it possible that I'm using libcurl incorrectly? Here's a snippet of my code...
wxString postFields;
postFields += "package_name=" + packageName;
if( desiredVersion != 0 )
postFields += wxString::Format( "&package_version=v%d", desiredVersion );
curl_easy_reset( curlHandleEasy );
curl_slist_free_all( headers );
headers = nullptr;
headers = curl_slist_append( headers, "Content-Type: application/x-www-form-urlencoded" );
headers = curl_slist_append( headers, "Accept: application/x-zip-compressed" );
url = "http://" + packageServer + ":7000/package_download";
urlData = url.c_str();
postFieldsData = postFields.c_str();
binResponse = new BinaryResponse( packageSize );
curl_easy_setopt( curlHandleEasy, CURLOPT_HTTPHEADER, headers );
curl_easy_setopt( curlHandleEasy, CURLOPT_POSTFIELDS, postFieldsData );
curl_easy_setopt( curlHandleEasy, CURLOPT_URL, urlData );
curl_easy_setopt( curlHandleEasy, CURLOPT_WRITEFUNCTION, &Response::WriteCallback );
curl_easy_setopt( curlHandleEasy, CURLOPT_WRITEDATA, binResponse );
curlCode = curl_easy_perform( curlHandleEasy );
Is there something wrong with my request setup? Even if I change my write callback to a dummy routine that claims to have written the data but just throws it away (to be as fast as possible), my download rate is still super slow.
Is it possible that the bottleneck is some sort of security scanning on my network that the browser and the curl command aren't subjected to?

I had claimed to have tested with a dummy write function, but I actually hadn't. When I tested with a dummy write function, the download speed was fast.
So I investigated why my write function was slow and discovered that I was using an in-memory stream class that wasn't initialized with the required buffer size, so it was growing as needed. Each growth step was probably small, and every time the buffer grew, it probably had to copy the entire contents of the old buffer into the new one. So, long story short: I'm dumb, and the write stream was slow.
Now I initialize my memory stream to the total size of the file so that it never has to grow. Ugh! Problem solved.
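A minimal sketch of that fix (the type and names here are illustrative stand-ins, not the original BinaryResponse class): reserve the buffer to the full expected size once, so the write callback appends without ever triggering a reallocation-and-copy.

```cpp
#include <cstddef>
#include <string>

// Illustrative stand-in for the response buffer: reserving the full expected
// size up front means append() never has to grow (and copy) the buffer
// mid-transfer.
struct BinaryBuffer {
    std::string data;
    explicit BinaryBuffer(std::size_t expectedSize) { data.reserve(expectedSize); }
};

// Write callback in the CURLOPT_WRITEFUNCTION shape: append the received
// chunk and report how many bytes were consumed.
static std::size_t WriteCallback(char* ptr, std::size_t size,
                                 std::size_t nmemb, void* userdata) {
    BinaryBuffer* buf = static_cast<BinaryBuffer*>(userdata);
    std::size_t bytes = size * nmemb;
    buf->data.append(ptr, bytes);
    return bytes;  // returning fewer bytes makes libcurl abort the transfer
}
```

Wired up with `curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, WriteCallback)` and `curl_easy_setopt(curl, CURLOPT_WRITEDATA, &buf)`.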

Related

libcurl: curl_easy_perform blocks unless CURLOPT_READFUNCTION is set

I am trying to use libcurl in C++ to make REST/HTTP requests. I noticed curl_easy_perform blocks, but if I set CURLOPT_READFUNCTION it doesn't. I just want to understand why that is; I am new to libcurl and the HTTP/REST protocol.
Here is the code:
m_pCurl = curl_easy_init();
curl_easy_setopt(m_pCurl, CURLOPT_URL, "https://blahblahblah/api/auth/user/login");
curl_easy_setopt(m_pCurl, CURLOPT_VERBOSE, 1L);
curl_easy_setopt(m_pCurl, CURLOPT_POST, 1L);
curl_easy_setopt(m_pCurl, CURLOPT_COOKIE, "SKEY=BLAHBLAHBLAH");
struct curl_slist *list = NULL;
list = curl_slist_append(list, "Accept: application/json");
list = curl_slist_append(list, "Connection: keep-alive");
list = curl_slist_append(list, "Expect:");
list = curl_slist_append(list, "Content-Type: application/json");
list = curl_slist_append(list, "x-website-parameters: LALALALA");
curl_easy_setopt(m_pCurl, CURLOPT_HTTPHEADER, list);
// Callbacks
readarg_t rarg;
// readcb is a callback function
// Removing the two lines below will cause curl_easy_perform to hang
curl_easy_setopt(m_pCurl, CURLOPT_READFUNCTION, readcb);
curl_easy_setopt(m_pCurl, CURLOPT_READDATA, &rarg);
CURLcode res = curl_easy_perform(m_pCurl);
Note: Some of the encoded data are changed above.
Any help would be greatly appreciated.
Thanks,
K
According to The Manual...
CURLOPT_READFUNCTION explained
...
If you set this callback pointer to NULL, or don't set it at all, the default internal read function will be used. It is doing an fread() on the FILE * userdata set with CURLOPT_READDATA.
However, you also don't set CURLOPT_READDATA. So, looking again at The Manual...
CURLOPT_READDATA explained
...
By default, this is a FILE * to stdin.
So the reason your program "hangs" appears to be because it is waiting for something to arrive on the standard input stdin.
So the way it is supposed to work is this.
1) If you do nothing the data sent to the server comes from the standard input (which is often the keyboard).
2) If you set only CURLOPT_READDATA then it must be a FILE* you opened to an input file that contains the data you want to send.
3) If you set CURLOPT_READFUNCTION then CURLOPT_READDATA can point to anything your function needs to fulfil its task of sending data to the server.
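As a rough sketch of option 3, here is a hypothetical read callback that serves a request body from an in-memory string (the struct and names are illustrative):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstring>

// Hypothetical upload source: a request body held in memory plus a cursor.
struct UploadSource {
    const char* data;
    std::size_t length;
    std::size_t offset;
};

// Read callback in the CURLOPT_READFUNCTION shape: libcurl calls it whenever
// it wants more request-body bytes; returning 0 signals end of data.
static std::size_t ReadCallback(char* buffer, std::size_t size,
                                std::size_t nitems, void* userdata) {
    UploadSource* src = static_cast<UploadSource*>(userdata);
    std::size_t room = size * nitems;
    std::size_t n = std::min(room, src->length - src->offset);
    std::memcpy(buffer, src->data + src->offset, n);
    src->offset += n;
    return n;  // 0 once the whole body has been handed over
}
```

Wired up with `curl_easy_setopt(curl, CURLOPT_READFUNCTION, ReadCallback)` and `curl_easy_setopt(curl, CURLOPT_READDATA, &src)`; for a POST you would normally also set CURLOPT_POSTFIELDSIZE so libcurl knows the body length.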

Libcurl progress callback not working with multi

I'm trying to manage the progress of a download with libcurl in C++.
I have managed to do this with curl_easy, but the issue with curl_easy is that it blocks the program until the request has completed.
I need to use curl_multi so the HTTP request is asynchronous, but when I switch to curl_multi, my progress function stops working.
I have the following curl_easy request code:
int progressFunc(void* p, curl_off_t TotalToDownload, curl_off_t NowDownloaded, curl_off_t TotalToUpload, curl_off_t NowUploaded) {
std::cout << TotalToDownload << ", " << NowDownloaded << std::endl;
return 0;
}
FILE* file = std::fopen(filePath.c_str(), "wb");
curl_easy_setopt(curl, CURLOPT_URL, url);
curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0L);
curl_easy_setopt(curl, CURLOPT_XFERINFOFUNCTION, progressFunc);
curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, writeData);
curl_easy_setopt(curl, CURLOPT_WRITEDATA, file);
CURLcode res = curl_easy_perform(curl);
which works perfectly and prints to the console the progress of the download.
However, when trying to modify this code to use curl_multi instead, the file does not download correctly (it shows 0 bytes) and the download progress callback prints only 0, 0.
FILE* file = std::fopen(filePath.c_str(), "wb");
curl_easy_setopt(curl, CURLOPT_URL, url);
curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0L);
curl_easy_setopt(curl, CURLOPT_XFERINFOFUNCTION, progressFunc);
curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, writeData);
curl_easy_setopt(curl, CURLOPT_WRITEDATA, file);
curl_multi_add_handle(curlm, curl);
int runningHandles;
CURLMcode res = curl_multi_perform(curlm, &runningHandles);
TL;DR: you are supposed to call curl_multi_perform in a loop. If you don't use an event loop and poll/epoll, you should probably stick with using curl_easy in a separate thread.
The whole point of the curl_multi API is not blocking: instead of magically downloading the entire file in a single call, you can use epoll or similar means to monitor curl's non-blocking sockets and invoke curl_multi_perform each time some data arrives from the network. When you use its multi mode, curl itself does not start any internal threads and does not monitor its sockets; you are expected to do that yourself. This allows writing highly performant event loops that run multiple simultaneous curl transfers in the same thread. People who need that usually already have the necessary harness or can easily write it themselves.
The first time you invoke curl_multi_perform, it will most likely return before the DNS resolution completes and/or before the TCP connection is accepted by the remote side. So the amount of payload data transferred in the first call will indeed be 0. Depending on server configuration, the second call might not transfer any payload either. By "payload" I mean actual application data (as opposed to DNS requests, SSL negotiation, HTTP headers and HTTP/2 frame metadata).
To actually complete a transfer you have to repeatedly invoke epoll_wait, curl_multi_perform and a number of other functions until you are done. Curl's corresponding example stops after completing one transfer, but in practice it is more beneficial to create a permanently running thread that handles all HTTP transfers for the application's lifetime.

c++ libcurl progress callback with download not working

I'm using curl for uploads and downloads and am also trying to include the progress bar curl provides. I managed to get the progress bar working when uploading files, but unfortunately the callback function only receives 0 values on the download.
Here are the options that are set for the download:
::curl_easy_reset( m_pimpl->curl ) ;
::curl_easy_setopt( m_pimpl->curl, CURLOPT_SSL_VERIFYPEER, 0L ) ;
::curl_easy_setopt( m_pimpl->curl, CURLOPT_SSL_VERIFYHOST, 0L ) ;
::curl_easy_setopt( m_pimpl->curl, CURLOPT_HEADERFUNCTION, &CurlAgent::HeaderCallback ) ;
::curl_easy_setopt( m_pimpl->curl, CURLOPT_HEADERDATA, this ) ;
::curl_easy_setopt( m_pimpl->curl, CURLOPT_HEADER, 0L ) ;
::curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, method.c_str() ); // "GET" in download
::curl_easy_setopt(curl, CURLOPT_ERRORBUFFER, error ) ;
::curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
::curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, &CurlAgent::Receive ) ;
::curl_easy_setopt(curl, CURLOPT_WRITEDATA, this ) ;
//setting the progress callback function
curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0L);
curl_easy_setopt(curl, CURLOPT_XFERINFOFUNCTION, progress_callback);
curl_easy_setopt(curl, CURLOPT_XFERINFODATA, this);
CURLcode curl_code = ::curl_easy_perform(curl);
And this is the callback used for the progress bar:
static int progress_callback(void *ptr, curl_off_t TotalDownloadSize, curl_off_t finishedDownloadSize, curl_off_t TotalToUpload, curl_off_t NowUploaded) {
curl_off_t processed = (TotalDownloadSize > TotalToUpload) ? finishedDownloadSize : NowUploaded;
curl_off_t total = (TotalDownloadSize > TotalToUpload) ? TotalDownloadSize : TotalToUpload;
...
return 0;
}
As mentioned, when I perform uploads of files the parameters TotalToUpload and NowUploaded contain the correct values. But when downloading, all four parameters contain 0!?
Do I have to set another option when downloading files to receive the correct sizes?
Alternative solution
I found an alternative solution, by using another request that delivers information about the files on the drive, which also contains the file size.
The write callback function set via
::curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, &CurlAgent::Receive )
receives the size of each newly downloaded chunk as a parameter; accumulating those sizes against the total from the extra request makes it possible to drive the progress bar.
Here is also the documentation of the used service and the requests:
Per the libcurl documentation:
CURLOPT_XFERINFOFUNCTION explained
Unknown/unused argument values passed to the callback will be set to zero (like if you only download data, the upload size will remain 0). Many times the callback will be called one or more times first, before it knows the data sizes so a program must be made to handle that.
If the callback is never giving you non-zero values during a download, then either:
there is a bug in libcurl (less likely)
libcurl simply doesn't know the sizes (more likely), for example when the download is delivered with chunked transfer encoding and no Content-Length header, so the total size cannot be calculated up front.
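One way to cope with that in the callback is to fall back to reporting raw byte counts whenever the total is still unknown. A standalone sketch (names are illustrative, and long long stands in for curl_off_t so the snippet compiles without libcurl headers):

```cpp
#include <cstdio>
#include <string>

// Turn the xferinfo counters into a human-readable status, tolerating the
// early calls where libcurl does not yet know the total size.
static std::string describeProgress(long long dlnow, long long dltotal) {
    if (dltotal > 0)
        return std::to_string(dlnow * 100 / dltotal) + "%";
    if (dlnow > 0)
        return std::to_string(dlnow) + " bytes so far";  // total unknown
    return "starting";
}

// Callback in the CURLOPT_XFERINFOFUNCTION shape; curl_off_t is a 64-bit
// integer type, so long long stands in for it in this standalone sketch.
static int xferinfo(void* clientp, long long dltotal, long long dlnow,
                    long long ultotal, long long ulnow) {
    (void)clientp; (void)ultotal; (void)ulnow;
    std::printf("%s\n", describeProgress(dlnow, dltotal).c_str());
    return 0;  // returning non-zero would abort the transfer
}
```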

How to stop Curl from caching data

I have an HTTP streaming video server that I access using a URL like so:
https://192.168.50.23:8011/livevideo/8
If I paste this URL into my web browser's address bar I can see live video, but after a few minutes the video stops streaming. If I kill the browser and repeat the process, I can get video streaming for another few minutes before it stops again.
I figured that it must be something to do with the web browser caching the MJPEG frames and running out of memory, so as an experiment I mocked up a simple HTML page like so:
<!DOCTYPE html>
<html>
<body>
<img src="https://192.168.50.23:8011/livevideo/8" width="500" height="500">
</body>
</html>
And the result is that the video streams continuously and never stops. So I guess the <img> tag is dealing with disposing of the MJPEG frames and not causing a crash like before.
I used Firefox to analyse the HTTP requests and responses for both scenarios above to see if there is anything different, and here are the results:
URL pasted into web browser address bar:
URL embedded into web page:
The only difference seems to be the Accept: parameters.
Now to move onto my real problem. I am using the same url in my C++ curl program and I am seeing the exact same behaviour where I receive video data for a few mins and then all of a sudden the curl callbacks stop.
I have used the following headers in my curl program:
CURL *pEasy = curl_easy_init ();
curl_easy_setopt ( pEasy, CURLOPT_USERNAME, user.c_str() );
curl_easy_setopt ( pEasy, CURLOPT_PASSWORD, pass.c_str() );
curl_easy_setopt ( pEasy, CURLOPT_URL, urlToConnectTo.c_str() );
//Set authentication
curl_easy_setopt ( pEasy, CURLOPT_HTTPAUTH, CURLAUTH_BASIC );
curl_easy_setopt ( pEasy, CURLOPT_SSL_VERIFYPEER, 0L );
curl_easy_setopt ( pEasy, CURLOPT_SSL_VERIFYHOST, 0L );
curl_easy_setopt ( pEasy, CURLOPT_HEADER, 1L );
curl_easy_setopt ( pEasy, CURLOPT_NOBODY, 0L );
curl_easy_setopt ( pEasy, CURLOPT_WRITEFUNCTION, OnReceiveHttpBodyResponse );
curl_easy_setopt ( pEasy, CURLOPT_WRITEDATA, pEasy );
struct curl_slist *headers = NULL;
// curl_slist_append returns the new list head, so keep the return value
headers = curl_slist_append( headers, "User-Agent: MyCurlDll");
headers = curl_slist_append( headers, "Content-Type: text/xml");
headers = curl_slist_append( headers, "Connection: Keep-Alive");
headers = curl_slist_append( headers, "Accept: image/png, text/xml, text/html, application/xml");
headers = curl_slist_append( headers, "Cache-Control: max-age=0");
curl_easy_setopt(pEasy, CURLOPT_HTTPHEADER, headers);
curl_multi_add_handle(m_curlMulti, pEasy);
//Process this curl handle in another function
What can I do to stop this behaviour in curl? I assume it must be caching in the same way the browser was.
Sorry, but you've ended up with the wrong conclusion and thus you're sort of barking up the wrong tree here.
curl doesn't cache anything; it simply sends an HTTP request to the server (and with CURLOPT_VERBOSE you can easily inspect it) and then pipes all data it receives on to the write callback that you provide. There's no caching, no middle layers, no magic.
If you stop getting traffic before it should've ended, it is because there's no more data being delivered, or rather, being received by libcurl. It could be the server that stopped sending, or it could be something in your network that interferes. libcurl sent the request, and it will keep waiting for data to arrive until the entire thing has been delivered.

How to do curl_multi_perform() asynchronously in C++?

I have been using curl to do a synchronous HTTP request. My question is: how can I do it asynchronously?
I did some searching, which led me to the documentation of the curl_multi_* interface from this question and this example, but it didn't solve anything at all.
My simplified code:
CURLM *curlm;
int handle_count = 0;
curlm = curl_multi_init();
CURL *curl = NULL;
curl = curl_easy_init();
if(curl)
{
curl_easy_setopt(curl, CURLOPT_URL, "https://stackoverflow.com/");
curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, writeCallback);
curl_multi_add_handle(curlm, curl);
curl_multi_perform(curlm, &handle_count);
}
curl_global_cleanup();
The callback method writeCallback doesn't get called and nothing happens.
Please advise me.
EDIT:
According to @Remy's answer below, I got this far, but it seems it's not quite what I really needed, because using a loop like this still blocks. Please tell me if I'm doing something wrong or misunderstanding something. I'm actually pretty new to C++.
Here's my code again:
int main(int argc, const char * argv[])
{
using namespace std;
CURLM *curlm;
int handle_count;
curlm = curl_multi_init();
CURL *curl1 = NULL;
curl1 = curl_easy_init();
CURL *curl2 = NULL;
curl2 = curl_easy_init();
if(curl1 && curl2)
{
curl_easy_setopt(curl1, CURLOPT_URL, "https://stackoverflow.com/");
curl_easy_setopt(curl1, CURLOPT_WRITEFUNCTION, writeCallback);
curl_multi_add_handle(curlm, curl1);
curl_easy_setopt(curl2, CURLOPT_URL, "http://google.com/");
curl_easy_setopt(curl2, CURLOPT_WRITEFUNCTION, writeCallback);
curl_multi_add_handle(curlm, curl2);
CURLMcode code;
while(1)
{
code = curl_multi_perform(curlm, &handle_count);
if(handle_count == 0)
{
break;
}
}
}
curl_global_cleanup();
cout << "Hello, World!\n";
return 0;
}
I can now do 2 HTTP requests simultaneously. Callbacks are called, but they still need to finish before the following lines execute. Will I have to use a thread?
Read the documentation again more carefully, particularly these portions:
http://curl.haxx.se/libcurl/c/libcurl-multi.html
Your application can acquire knowledge from libcurl when it would like to get invoked to transfer data, so that you don't have to busy-loop and call that curl_multi_perform(3) like crazy. curl_multi_fdset(3) offers an interface using which you can extract fd_sets from libcurl to use in select() or poll() calls in order to get to know when the transfers in the multi stack might need attention. This also makes it very easy for your program to wait for input on your own private file descriptors at the same time or perhaps timeout every now and then, should you want that.
http://curl.haxx.se/libcurl/c/curl_multi_perform.html
When an application has found out there's data available for the multi_handle or a timeout has elapsed, the application should call this function to read/write whatever there is to read or write right now etc. curl_multi_perform() returns as soon as the reads/writes are done. This function does not require that there actually is any data available for reading or that data can be written, it can be called just in case. It will write the number of handles that still transfer data in the second argument's integer-pointer.
If the amount of running_handles is changed from the previous call (or is less than the amount of easy handles you've added to the multi handle), you know that there is one or more transfers less "running". You can then call curl_multi_info_read(3) to get information about each individual completed transfer, and that returned info includes CURLcode and more. If an added handle fails very quickly, it may never be counted as a running_handle.
When running_handles is set to zero (0) on the return of this function, there is no longer any transfers in progress.
In other words, you need to run a loop that polls libcurl for its status, calling curl_multi_perform() whenever there is data waiting to be transferred, repeating as needed until there is nothing left to transfer.
The blog article you linked to mentions this looping:
The code can be used like
Http http;
http.AddRequest("http://www.google.com");
// In some update loop called each frame
http.Update();
You are not doing any looping in your code; that is why your callback is not being called. No data has been received yet by the time of your single call to curl_multi_perform().