Synchronized curl requests - C++

I'm trying to make HTTP requests to multiple targets, and I need them to run (almost) exactly at the same moment.
I'm creating a thread for each request, but I don't know why curl is crashing when doing the perform. I'm using an easy handle per thread, so in theory everything should be OK...
Has anybody had a similar problem? Or does anyone know if the multi interface allows you to choose when to perform all the requests?
Thanks a lot.
EDIT:
Here is an example of the code:
void Clazz::function(std::vector<std::string> urls, const std::string& data)
{
    for (auto it : urls)
    {
        std::thread thread(&Clazz::DoRequest, this, it, data);
        thread.detach();
    }
}
int Clazz::DoRequest(const std::string& url, const std::string& data)
{
    CURL* curl = curl_easy_init();
    curl_slist* headers = NULL;
    headers = curl_slist_append(headers, "Expect:");
    headers = curl_slist_append(headers, "Content-Type: application/json");
    curl_easy_setopt(curl, CURLOPT_POST, 1L);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, data.c_str());
    curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT, 15);
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(curl, CURLOPT_FAILONERROR, 1L);
    //curlMutex.lock();
    curl_easy_perform(curl);
    //curlMutex.unlock();
    long responseCode = 404;
    curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &responseCode);
    curl_easy_cleanup(curl);
    curl_slist_free_all(headers);
    return responseCode;
}
I hope this can help, thanks!

Are you calling curl_global_init anywhere? Perhaps rather early in your main() method?
Quoting from http://curl.haxx.se/libcurl/c/curl_global_init.html:
This function is not thread safe. You must not call it when any other thread in the program (i.e. a thread sharing the same memory) is running. This doesn't just mean no other thread that is using libcurl. Because curl_global_init calls functions of other libraries that are similarly thread unsafe, it could conflict with any other thread that uses these other libraries.
Quoting from http://curl.haxx.se/libcurl/c/curl_easy_init.html:
If you did not already call curl_global_init, curl_easy_init does it automatically. This may be lethal in multi-threaded cases, since curl_global_init is not thread-safe, and it may result in resource problems because there is no corresponding cleanup.
It sounds like you're not calling curl_global_init and are letting curl_easy_init take care of it for you. Since you're doing that on multiple threads simultaneously, you're hitting the thread-unsafe scenario, with the lethal result that was mentioned.
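As a minimal sketch of the fix (assuming you control main() and can initialize before spawning the request threads):
#include <curl/curl.h>

int main(int argc, char* argv[])
{
    // Initialize libcurl exactly once, before any other thread exists.
    curl_global_init(CURL_GLOBAL_ALL);

    // ... start the request threads and run the application ...

    // Matching cleanup, after every thread that uses libcurl has finished.
    curl_global_cleanup();
    return 0;
}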

After being able to debug properly on the device, I have found that the problem is an old known issue with curl.
http://curl.haxx.se/mail/lib-2010-11/0181.html
After setting CURLOPT_NOSIGNAL on every curl handle, the crash has disappeared. :)
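For reference, this is the one-line change being described; with it set, libcurl avoids using signals (such as SIGALRM for DNS timeouts), which is what makes easy handles safe to use from multiple threads:
// Disable libcurl's use of signals for timeouts; needed when using
// easy handles from multiple threads.
curl_easy_setopt(curl, CURLOPT_NOSIGNAL, 1L);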

Related

Libcurl progress callback not working with multi

I'm trying to manage the progress of a download with libcurl in C++.
I have managed to do this with curl_easy, but the issue with curl_easy is that it blocks the program until the request has completed.
I need to use curl_multi so the HTTP request is asynchronous, but when I switch to curl_multi, my progress function stops working.
I have the following curl_easy request code:
int progressFunc(void* p, double TotalToDownload, double NowDownloaded, double TotalToUpload, double NowUploaded) {
std::cout << TotalToDownload << ", " << NowDownloaded << std::endl;
return 0;
}
FILE* file = std::fopen(filePath.c_str(), "wb");
curl_easy_setopt(curl, CURLOPT_URL, url);
curl_easy_setopt(curl, CURLOPT_NOPROGRESS, false);
curl_easy_setopt(curl, CURLOPT_XFERINFOFUNCTION, progressFunc);
curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, writeData);
curl_easy_setopt(curl, CURLOPT_WRITEDATA, file);
CURLcode res = curl_easy_perform(curl);
which works perfectly and prints to the console the progress of the download.
However, when trying to modify this code to use curl_multi instead, the file does not download correctly (shows 0 bytes) and the download progress callback function shows only 0, 0.
FILE* file = std::fopen(filePath.c_str(), "wb");
curl_easy_setopt(curl, CURLOPT_URL, url);
curl_easy_setopt(curl, CURLOPT_NOPROGRESS, false);
curl_easy_setopt(curl, CURLOPT_XFERINFOFUNCTION, progressFunc);
curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, writeData);
curl_easy_setopt(curl, CURLOPT_WRITEDATA, file);
curl_multi_add_handle(curlm, curl);
int runningHandles;
CURLMcode res = curl_multi_perform(curlm, &runningHandles);
TL;DR: you are supposed to call curl_multi_perform in a loop. If you don't use an event loop and poll/epoll, you should probably stick with using curl_easy in a separate thread.
The whole point of the curl_multi API is not blocking: instead of magically downloading the entire file in a single call, you can use epoll or similar means to monitor curl's non-blocking sockets and invoke curl_multi_perform each time some data arrives from the network. When you use its multi mode, curl itself does not start any internal threads and does not monitor its sockets — you are expected to do that yourself. This allows writing highly performant event loops that run multiple simultaneous curl transfers in the same thread. People who need that usually already have the necessary harness or can easily write it themselves.
The first time you invoke curl_multi_perform it will most likely return before the DNS resolution completes and/or before the TCP connection is accepted by the remote side. So the amount of payload data transferred in the first call will indeed be 0. Depending on server configuration, the second call might not transfer any payload either. By "payload" I mean actual application data (as opposed to DNS requests, SSL negotiation, HTTP headers and HTTP/2 frame metadata).
To actually complete a transfer you have to repeatedly invoke epoll_wait, curl_multi_perform, and a number of other functions until you are done. Curl's corresponding example stops after completing one transfer, but in practice it is more beneficial to create a permanently running thread that handles all HTTP transfers for the application's lifetime.
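As a rough sketch of such a loop (using curl_multi_wait() instead of raw epoll for brevity, and assuming curl has already been added to curlm as in your last snippet):
int runningHandles = 0;
do {
    // Drive all transfers; returns as soon as the currently possible
    // reads/writes are done, so it must be called repeatedly.
    CURLMcode mc = curl_multi_perform(curlm, &runningHandles);
    if (mc != CURLM_OK)
        break;

    // Sleep until libcurl's sockets have activity (or 1 second passes)
    // instead of busy-looping on curl_multi_perform().
    int numfds = 0;
    mc = curl_multi_wait(curlm, nullptr, 0, 1000, &numfds);
    if (mc != CURLM_OK)
        break;
} while (runningHandles > 0);
Once the transfer is actually driven to completion like this, the progress callback is invoked with real byte counts rather than 0, 0, and the write callback fills the file incrementally.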

How to do curl_multi_perform() asynchronously in C++?

I have been using curl synchronously to do an HTTP request. My question is: how can I do it asynchronously?
I did some searches which led me to the documentation of the curl_multi_* interface from this question and this example, but it didn't solve anything at all.
My simplified code:
CURLM *curlm;
int handle_count = 0;
curlm = curl_multi_init();
CURL *curl = NULL;
curl = curl_easy_init();
if(curl)
{
    curl_easy_setopt(curl, CURLOPT_URL, "https://stackoverflow.com/");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, writeCallback);
    curl_multi_add_handle(curlm, curl);
    curl_multi_perform(curlm, &handle_count);
}
curl_global_cleanup();
The callback method writeCallback doesn't get called and nothing happens.
Please advise me.
EDIT:
Following Remy's answer below, I got this, but it seems it's not quite what I really needed, because using a loop is still blocking. Please tell me if I'm doing something wrong or misunderstanding something. I'm actually pretty new to C++.
Here's my code again:
int main(int argc, const char * argv[])
{
    using namespace std;
    CURLM *curlm;
    int handle_count;
    curlm = curl_multi_init();
    CURL *curl1 = NULL;
    curl1 = curl_easy_init();
    CURL *curl2 = NULL;
    curl2 = curl_easy_init();
    if(curl1 && curl2)
    {
        curl_easy_setopt(curl1, CURLOPT_URL, "https://stackoverflow.com/");
        curl_easy_setopt(curl1, CURLOPT_WRITEFUNCTION, writeCallback);
        curl_multi_add_handle(curlm, curl1);
        curl_easy_setopt(curl2, CURLOPT_URL, "http://google.com/");
        curl_easy_setopt(curl2, CURLOPT_WRITEFUNCTION, writeCallback);
        curl_multi_add_handle(curlm, curl2);
        CURLMcode code;
        while(1)
        {
            code = curl_multi_perform(curlm, &handle_count);
            if(handle_count == 0)
            {
                break;
            }
        }
    }
    curl_global_cleanup();
    cout << "Hello, World!\n";
    return 0;
}
I can now do 2 HTTP requests simultaneously. The callbacks are called, but they still need to finish before the following lines execute. Will I have to use a thread?
Read the documentation again more carefully, particularly these portions:
http://curl.haxx.se/libcurl/c/libcurl-multi.html
Your application can acquire knowledge from libcurl when it would like to get invoked to transfer data, so that you don't have to busy-loop and call that curl_multi_perform(3) like crazy. curl_multi_fdset(3) offers an interface using which you can extract fd_sets from libcurl to use in select() or poll() calls in order to get to know when the transfers in the multi stack might need attention. This also makes it very easy for your program to wait for input on your own private file descriptors at the same time or perhaps timeout every now and then, should you want that.
http://curl.haxx.se/libcurl/c/curl_multi_perform.html
When an application has found out there's data available for the multi_handle or a timeout has elapsed, the application should call this function to read/write whatever there is to read or write right now etc. curl_multi_perform() returns as soon as the reads/writes are done. This function does not require that there actually is any data available for reading or that data can be written, it can be called just in case. It will write the number of handles that still transfer data in the second argument's integer-pointer.
If the amount of running_handles is changed from the previous call (or is less than the amount of easy handles you've added to the multi handle), you know that there is one or more transfers less "running". You can then call curl_multi_info_read(3) to get information about each individual completed transfer, and that returned info includes CURLcode and more. If an added handle fails very quickly, it may never be counted as a running_handle.
When running_handles is set to zero (0) on the return of this function, there is no longer any transfers in progress.
In other words, you need to run a loop that polls libcurl for its status, calling curl_multi_perform() whenever there is data waiting to be transferred, repeating as needed until there is nothing left to transfer.
The blog article you linked to mentions this looping:
The code can be used like
Http http;
http.AddRequest("http://www.google.com");
// In some update loop called each frame
http.Update();
You are not doing any looping in your code; that is why your callback is not being called. New data has not been received yet when you call curl_multi_perform() only one time.
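To make the fdset-based waiting from the quoted documentation concrete, here is a rough sketch of that loop on POSIX (it assumes the handles have already been added to curlm as in your code, and needs <sys/select.h> in addition to <curl/curl.h>):
int handle_count = 0;
do {
    // Let libcurl do whatever it can right now.
    CURLMcode mc = curl_multi_perform(curlm, &handle_count);
    if (mc != CURLM_OK || handle_count == 0)
        break;

    // Ask libcurl which sockets to watch and how long to wait at most.
    fd_set fdread, fdwrite, fdexcep;
    FD_ZERO(&fdread);
    FD_ZERO(&fdwrite);
    FD_ZERO(&fdexcep);
    int maxfd = -1;
    curl_multi_fdset(curlm, &fdread, &fdwrite, &fdexcep, &maxfd);

    long timeout_ms = -1;
    curl_multi_timeout(curlm, &timeout_ms);
    if (timeout_ms < 0)
        timeout_ms = 100;  // fall back to a short poll interval

    struct timeval tv;
    tv.tv_sec  = timeout_ms / 1000;
    tv.tv_usec = (timeout_ms % 1000) * 1000;

    if (maxfd == -1) {
        // libcurl has nothing to watch yet (e.g. during name resolution); wait briefly.
        struct timeval wait = {0, 100 * 1000};
        select(0, nullptr, nullptr, nullptr, &wait);
    } else {
        select(maxfd + 1, &fdread, &fdwrite, &fdexcep, &tv);
    }
} while (handle_count > 0);
Instead of breaking out when handle_count reaches 0, a long-running program can keep this loop (or an equivalent one in your frame update) alive and keep adding new easy handles to the same multi handle.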

error 411 Length Required c++, libcurl PUT request

Even though I set the Content-Length header, I'm getting a 411 error. I'm trying to send a PUT request.
struct curl_slist *headers = NULL;
curl = curl_easy_init();
std::string paramiters =
    "<data_file><archive>false</archive><data_type_id>0a7a184a-dcc6-452a-bcd3-52dbd2a83ea2</data_type_id><data_file_name>backwardstep.stt</data_file_name><description>connectionfile</description><job_id>264cf297-3bc7-42e1-8edc-5e2948ee62b6</job_id></data_file>";
if (curl) {
    headers = curl_slist_append(headers, "Accept: */*");
    headers = curl_slist_append(headers, "Content-Length: 123");
    headers = curl_slist_append(headers, "Content-Type: application/xml");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_VERBOSE, true);
    curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
    curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "PUT");
    curl_easy_setopt(curl, CURLOPT_URL, "..url/data_files/new/link_upload.xml");
    curl_easy_setopt(curl, CURLOPT_USERPWD, "kolundzija#example.ch:PASS");
    curl_easy_setopt(curl, CURLOPT_HEADER, 1L);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, paramiters.c_str());
    curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, strlen(paramiters.c_str()));
    curl_easy_setopt(curl, CURLOPT_FAILONERROR, 1L);
    res = curl_easy_perform(curl);
and this is the response from the server:
Host: cloud...
Transfer-Encoding: chunked
Accept: */*
Content-Length: 123
Content-Type: application/xml
Expect: 100-continue
* The requested URL returned error: 411 Length Required
* Closing connection #0
OK, I honestly cannot find your error. But there is an example on the curl website (first Google hit for "curl put c code"): http://curl.haxx.se/libcurl/c/httpput.html
Maybe mixing the easy and advanced interface confuses curl.
What confuses me are the options CURLOPT_POSTFIELDS and CURLOPT_POSTFIELDSIZE. This is a PUT request, so why are they even there? With PUT, the arguments are in the URL; the body is opaque, at least from the perspective of HTTP.
You DON'T need to use a file, and do NOT use custom requests. Instead, set the UPLOAD and PUT options as specified in the documentation here:
http://curl.haxx.se/libcurl/c/httpput.html
Unlike the example above, where they use a file as the data source, you can use anything to hold your data. It all comes down to using a callback function with this option:
CURLOPT_READFUNCTION
The difference lies in how you set up your callback function, which only has to do two things:
1. Measure the size of your payload (your data) in bytes.
2. Copy the data to the memory address that curl passes to the callback (that is, the first argument of your callback function, the first void pointer in this definition):
static size_t read_callback(void *ptr, size_t size, size_t nmemb, void *stream)
That is the ptr argument.
Use memcpy to copy the data.
Take a look at this link. I ran into the same problem as you and was able to solve it using this approach. One thing you need to keep in mind is that you also need to set the file size before sending the curl request:
How do I send long PUT data in libcurl without using file pointers?
Use CURLOPT_INFILESIZE or CURLOPT_INFILESIZE_LARGE for that.
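Pulling that together, a minimal sketch of a PUT from an in-memory string might look like the following; the read_callback and ReadContext names are just for illustration, and curl is assumed to be the easy handle from your snippet:
#include <curl/curl.h>
#include <algorithm>
#include <cstring>
#include <string>

// Hypothetical helper: the body to upload plus how much of it has been sent.
struct ReadContext {
    const std::string* data;
    size_t offset;
};

static size_t read_callback(void* ptr, size_t size, size_t nmemb, void* userp)
{
    ReadContext* ctx = static_cast<ReadContext*>(userp);
    size_t room = size * nmemb;
    size_t left = ctx->data->size() - ctx->offset;
    size_t chunk = std::min(room, left);
    std::memcpy(ptr, ctx->data->data() + ctx->offset, chunk);  // copy the next chunk into curl's buffer
    ctx->offset += chunk;
    return chunk;  // returning 0 tells curl the upload is complete
}

// ... after curl_easy_init(), headers, URL, and credentials are set up ...

std::string body = "<data_file>...</data_file>";
ReadContext ctx = { &body, 0 };

curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);                   // issue a PUT; no CURLOPT_CUSTOMREQUEST needed
curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_callback);
curl_easy_setopt(curl, CURLOPT_READDATA, &ctx);
curl_easy_setopt(curl, CURLOPT_INFILESIZE_LARGE, (curl_off_t)body.size());  // lets curl send Content-Length
With CURLOPT_UPLOAD set, libcurl issues the PUT by itself, so CURLOPT_CUSTOMREQUEST, CURLOPT_POSTFIELDS, and the hand-written Content-Length header are no longer needed.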

libcurl http post timeout

I am using curl version 7.15.5 in a multi-threaded environment. Each thread initializes and frees its own curl object. Below is the code executed by each thread:
CURL* curl = curl_easy_init();
tRespBuffer respBuffer = {NULL, 0};
char errorBuf[CURL_ERROR_SIZE +1];
struct curl_slist *headers=NULL;
headers = curl_slist_append(headers, "Content-Type: text/xml; charset=gbk");
headers = curl_slist_append(headers, "Expect:");
curl_easy_setopt(curl, CURLOPT_URL, url_);
curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
curl_easy_setopt(curl, CURLOPT_POSTFIELDS,encr.c_str());
curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE,strlen(encr.c_str()));
curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, HttpSmsServer::processHttpResponse);
curl_easy_setopt(curl, CURLOPT_WRITEDATA, (void*)&respBuffer);
curl_easy_setopt(curl, CURLOPT_TIMEOUT, 20); // wait for 20 seconds before aborting the transaction
curl_easy_setopt(curl, CURLOPT_ERRORBUFFER, errorBuf); // error returned if any..
curl_easy_setopt(curl, CURLOPT_NOSIGNAL, 1); // No signals allowed in case of multithreaded apps
res = curl_easy_perform(curl);
curl_slist_free_all(headers);
curl_easy_cleanup(curl);
All four threads are posting data to the HTTP server simultaneously. I see HTTP response timeouts for some of the POST requests (~3% of requests). Any idea what could be the reason for the timeouts? I assume the HTTP server should not take more than 20 seconds to respond.
CURLOPT_TIMEOUT covers the entire time of the HTTP request; are you transferring a large amount of data?
CURLOPT_TIMEOUT: Pass a long as parameter containing the maximum time in seconds that you allow the libcurl transfer operation to take. Normally, name lookups can take a considerable time and limiting operations to less than a few minutes risk aborting perfectly normal operations.
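If the goal is to catch stalled transfers rather than to cap the total transfer time, a sketch of the relevant alternatives (the values here are arbitrary examples):
// Limit only the connection phase, not the whole transfer.
curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT, 10L);

// Abort if the transfer averages below 1 byte/s for 30 seconds (a stall),
// instead of capping the total time with CURLOPT_TIMEOUT.
curl_easy_setopt(curl, CURLOPT_LOW_SPEED_LIMIT, 1L);
curl_easy_setopt(curl, CURLOPT_LOW_SPEED_TIME, 30L);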

LibCurl WriteCallback (Async?) - C++

I am successfully making an http POST call using the following code:
std::string curlString;
CURL* pCurl = curl_easy_init();
if(!pCurl)
return NULL;
string outgoingUrl = Url;
string postFields = fields;
curl_easy_setopt(pCurl, CURLOPT_TIMEOUT, 0);
curl_easy_setopt(pCurl, CURLOPT_URL, outgoingUrl.c_str());
curl_easy_setopt(pCurl, CURLOPT_POST, 1);
curl_easy_setopt(pCurl, CURLOPT_POSTFIELDS, postFields.c_str());
curl_easy_setopt(pCurl, CURLOPT_POSTFIELDSIZE, (long)postFields.size());
curl_easy_setopt(pCurl, CURLOPT_WRITEFUNCTION, CurlWriteCallback);
curl_easy_setopt(pCurl, CURLOPT_WRITEDATA, &curlString);
curl_easy_perform(pCurl);
curl_easy_cleanup(pCurl);
The write callback has the following prototype:
size_t CurlWriteCallback(char* a_ptr, size_t a_size, size_t a_nmemb, void* a_userp);
Is there a way to do this asynchronously? Currently it waits for the callback to finish before curl_easy_perform returns. This blocking method won't work for a server with many users.
From the libcurl easy documentation:
When all is setup, you tell libcurl to perform the transfer using curl_easy_perform(3). It will then do the entire operation and won't return until it is done (successfully or not).
From the libcurl multi interface docs, one of the features as opposed to the "easy" interface:
Enable multiple simultaneous transfers in the same thread without making it complicated for the application.
Sounds like you want to use the "multi" approach.
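As a rough sketch of what that could look like for this POST (reusing your setopt calls on pCurl before curl_easy_cleanup; in a real server you would drive many handles from one such loop rather than waiting on a single transfer):
CURLM* multi = curl_multi_init();
curl_multi_add_handle(multi, pCurl);

int stillRunning = 0;
do {
    curl_multi_perform(multi, &stillRunning);
    if (stillRunning) {
        int numfds = 0;
        // Wait for socket activity (up to 1 second); other transfers or work
        // can be interleaved between iterations instead of blocking a thread per request.
        curl_multi_wait(multi, nullptr, 0, 1000, &numfds);
    }
} while (stillRunning);

// Find out how the transfer finished.
int msgsLeft = 0;
while (CURLMsg* msg = curl_multi_info_read(multi, &msgsLeft)) {
    if (msg->msg == CURLMSG_DONE && msg->easy_handle == pCurl) {
        CURLcode result = msg->data.result;  // CURLE_OK on success
        (void)result;
    }
}

curl_multi_remove_handle(multi, pCurl);
curl_multi_cleanup(multi);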