No difference between curl_easy and curl_multi - C++

I'm performing HTTP requests from my C++ program to my PHP script with libcurl.
The first, easy_-based version below works well, but it is quite slow (12 requests per second on localhost). Nothing strange: I got similar results using ab -n 1000 -c 1.
On the other hand, ab -n 1000 -c 100 performs much better, with 600 requests per second. The thing is, using libcurl's multi interface doesn't seem to be concurrent. I used only slightly modified example code and the result is also about 12 req/s.
Do I understand curl_multi right? How can I achieve results similar to ab?
PS. I know the two snippets differ a bit, but almost all of the time is spent in curl's work.
The easy_ way:
CURL *curl;
CURLcode response; // result of the transfer
curl = curl_easy_init();
if(curl)
{
    curl_easy_setopt(curl, CURLOPT_URL, "http://localhost/process.php");
    while(true)
    {
        if(!requestsQueue.empty())
        {
            mtx.lock();
            string data = requestsQueue.front();
            requestsQueue.pop();
            mtx.unlock();
            const char *post = data.c_str(); // convert string to the char* CURL expects
            curl_easy_setopt(curl, CURLOPT_POSTFIELDS, post);
            do
            {
                response = curl_easy_perform(curl);
            } while(response != CURLE_OK); // retry until the request succeeds
        }
        else
        {
            // there are no requests to perform, so wait for them
            cout << "Sleeping...\n";
            sleep(2);
            continue;
        }
    }
    //curl_easy_cleanup(curl);
}
else
{
    cout << "CURL init failed!\n";
}
The multi way:
CURLM *multi_handle;
CURL *handles[300];
int still_running; /* keep number of running handles */

/* init a multi stack */
multi_handle = curl_multi_init();

/* add the individual transfers */
for(int i = 0; i < 300; i++)
{
    handles[i] = curl_easy_init();
    curl_easy_setopt(handles[i], CURLOPT_URL, "http://localhost/process.php");
    curl_multi_add_handle(multi_handle, handles[i]);
}

/* we start some action by calling perform right away */
curl_multi_perform(multi_handle, &still_running);

do {
    struct timeval timeout;
    int rc; /* select() return code */
    fd_set fdread;
    fd_set fdwrite;
    fd_set fdexcep;
    int maxfd = -1;
    long curl_timeo = -1;

    FD_ZERO(&fdread);
    FD_ZERO(&fdwrite);
    FD_ZERO(&fdexcep);

    /* set a suitable timeout to play around with */
    timeout.tv_sec = 1;
    timeout.tv_usec = 0;

    curl_multi_timeout(multi_handle, &curl_timeo);
    if(curl_timeo >= 0) {
        timeout.tv_sec = curl_timeo / 1000;
        if(timeout.tv_sec > 1)
            timeout.tv_sec = 1;
        else
            timeout.tv_usec = (curl_timeo % 1000) * 1000;
    }

    /* get file descriptors from the transfers */
    curl_multi_fdset(multi_handle, &fdread, &fdwrite, &fdexcep, &maxfd);

    /* In a real-world program you would of course check the return code of
       these calls. On success, maxfd is guaranteed to be greater than or
       equal to -1. In the special case of (maxfd == -1), we call
       select(0, ...), which is basically equivalent to a sleep. */
    rc = select(maxfd + 1, &fdread, &fdwrite, &fdexcep, &timeout);

    switch(rc) {
    case -1:
        /* select error */
        break;
    case 0:
    default:
        /* timeout or readable/writable sockets */
        curl_multi_perform(multi_handle, &still_running);
        break;
    }
} while(still_running);

/* remove and clean up each easy handle, then the multi handle */
for(int i = 0; i < 300; i++)
{
    curl_multi_remove_handle(multi_handle, handles[i]);
    curl_easy_cleanup(handles[i]);
}
curl_multi_cleanup(multi_handle);

return 0;

curl_multi does indeed handle any number of transfers in parallel, but it does all the work on a single thread. The side effect is that if anything anywhere takes a long time, that action blocks all the other transfers.
One example of such a blocking operation, which sometimes causes exactly what you're describing, is the name-resolution phase when the old blocking resolver is used. Another explanation is an application-supplied callback that takes a long time for some reason.
You can build libcurl to use the c-ares or threaded-resolver backend instead; both avoid this blocking behavior and allow much better concurrency. The threaded resolver has been the default in libcurl for many years now (as of this writing, late 2021).

Related

LibCurl C++: slowing down the sending of requests when multiplexing

Goal:
To slightly slow down the sending of requests when multiplexing with libcurl, possibly by introducing small time delays between the sending of each HTTP/2 request to a server. The multiplexing program needs to listen for any changes on one webpage for around 3 seconds, at a set time once a day. However, the program finishes execution in under a second, even with the variable num_transfers set to the thousands (the variable is seen in the code further below).
It would be useful if there were a way to introduce, for example, a 3 millisecond delay between the transmission of each group of multiplexed requests. The program could then still send requests asynchronously (so it won't be blocked waiting for a response from the server before sending the next request), just at a slightly slower rate.
A definition of multiplexing taken from this resource:
Multiplexing is a method in HTTP/2 by which multiple HTTP requests can be sent and responses can be received asynchronously via a single TCP connection. Multiplexing is the heart of HTTP/2 protocol.
Ideal outcome:
An ideal program for this situation would send a few non-blocking/multiplexed requests approximately every 3 milliseconds, and would run for around 3-4 seconds in total.
Current problem:
Currently the program is too fast when multiplexing: a few thousand requests can be sent and received within around 350 milliseconds, which can lead to the sending IP address being blocked for a few minutes.
Please note that a synchronous/blocking approach is not an option in this scenario; a requirement of this program is that it must not be forced to wait for a response before sending another request. The issue is simply that the program sends a high number of requests too fast.
Attempts at solving:
In the DO...WHILE loop seen in the code below, an attempt was made to introduce artificial time delays at various points within the loop using usleep(microseconds) from unistd.h, but this delayed things either before or after all of the requests were sent, rather than interleaving delays between individual sends.
Current code:
#include <iostream>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <chrono>
#include <string>

/* somewhat unix-specific */
#include <sys/time.h>
#include <unistd.h>

/* curl stuff */
#include <curl/curl.h>
#include <curl/mprintf.h>

#ifndef CURLPIPE_MULTIPLEX
#define CURLPIPE_MULTIPLEX 0
#endif

struct CURLMsg *msg;

struct transfer {
    CURL *easy;
    unsigned int num;
    //FILE *out;
    std::string contents;
};

struct MemoryStruct {
    char *memory;
    size_t size;
};
struct MemoryStruct chunk;

#define NUM_HANDLES 1000

static size_t WriteMemoryCallback(void *contents, size_t size, size_t nmemb, void *userp) {
    transfer *t = (transfer *)userp;
    size_t realsize = size * nmemb;
    t->contents.append((const char*)contents, realsize);
    return realsize;
}

static void setup(struct transfer *t, int num)
{
    CURL *hnd;
    hnd = t->easy = curl_easy_init();
    curl_easy_setopt(hnd, CURLOPT_WRITEFUNCTION, WriteMemoryCallback);
    curl_easy_setopt(hnd, CURLOPT_WRITEDATA, (void *)t);

    /* set the same URL */
    curl_easy_setopt(hnd, CURLOPT_URL, "https://someurl.xyz");

    /* HTTP/2 please */
    curl_easy_setopt(hnd, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_2_0);

    /* we use a self-signed test server, skip verification during debugging */
    curl_easy_setopt(hnd, CURLOPT_SSL_VERIFYPEER, 0L);
    curl_easy_setopt(hnd, CURLOPT_SSL_VERIFYHOST, 0L);

#if (CURLPIPE_MULTIPLEX > 0)
    /* wait for pipe connection to confirm */
    curl_easy_setopt(hnd, CURLOPT_PIPEWAIT, 1L);
#endif
}

int main() {
    struct transfer trans[NUM_HANDLES];
    CURLM *multi_handle;
    int i;
    int still_running = 0; /* keep number of running handles */
    int num_transfers = 3;

    chunk.memory = (char*)malloc(1);
    chunk.size = 0;

    /* init a multi stack */
    multi_handle = curl_multi_init();

    for(i = 0; i < num_transfers; i++) {
        setup(&trans[i], i);

        /* add the individual transfer */
        curl_multi_add_handle(multi_handle, trans[i].easy);
    }

    curl_multi_setopt(multi_handle, CURLMOPT_PIPELINING, CURLPIPE_MULTIPLEX);
    // curl_multi_setopt(multi_handle, CURLMOPT_MAX_TOTAL_CONNECTIONS, 1L);

    // Main loop
    do {
        CURLMcode mc = curl_multi_perform(multi_handle, &still_running);

        if(still_running) {
            /* wait for activity, timeout or "nothing" */
            mc = curl_multi_poll(multi_handle, NULL, 0, 1000, NULL);
        }

        if(mc) {
            break;
        }

        // Get response
        do {
            int queued;
            msg = curl_multi_info_read(multi_handle, &queued);
            if((msg) && (msg->msg == CURLMSG_DONE) && (msg->data.result == CURLE_OK)) {
                // Get size of payload
                curl_off_t dl_size;
                curl_easy_getinfo(msg->easy_handle, CURLINFO_SIZE_DOWNLOAD_T, &dl_size);

                for(int i = 0; i < num_transfers; i++) {
                    std::cout << trans[i].contents;
                }
                std::cout << std::flush;
            }
        } while(msg);
    } while(still_running);

    for(i = 0; i < num_transfers; i++) {
        curl_multi_remove_handle(multi_handle, trans[i].easy);
        curl_easy_cleanup(trans[i].easy);
    }

    free(chunk.memory);
    curl_multi_cleanup(multi_handle);

    return 0;
}
Summary question:
Is there a way to send a group of multiplexed requests to a single URL approximately every 3 milliseconds? Another idea for solving this was to wrap the entire functionality of main() in a FOR loop and put a time delay at the end of each iteration.

Curl - Sending hundreds of requests but only four at a time - Programming

How do you proceed to solve this problem? I have hundreds of requests to send with curl, but I can send only four at a time.
Thus, I need to make four requests at the same time and process their responses. However, as soon as one of the curl handles becomes available, I need to send another request.
This is because the server can handle only four requests at a time, but I have hundreds of requests to send to it.
Following is the code I got from the curl site:
int main(void)
{
    const int HANDLECOUNT = 4;
    CURL *handles[HANDLECOUNT];
    CURLM *multi_handle;
    int still_running = 0; /* keep number of running handles */
    int i;

    CURLMsg *msg; /* for picking up messages with the transfer status */
    int msgs_left; /* how many messages are left */

    /* Allocate one CURL handle per transfer */
    for(i = 0; i < HANDLECOUNT; i++)
        handles[i] = curl_easy_init();

    /* set the options (I left out a few, you'll get the point anyway) */
    curl_easy_setopt(handles[0], CURLOPT_URL, "website");
    curl_easy_setopt(handles[0], CURLOPT_POSTFIELDS, XMLRequestToPost.c_str());
    curl_easy_setopt(handles[0], CURLOPT_POSTFIELDSIZE, (long)strlen(XMLRequestToPost.c_str()));

    curl_easy_setopt(handles[1], CURLOPT_URL, "website");
    curl_easy_setopt(handles[2], CURLOPT_URL, "website");
    curl_easy_setopt(handles[3], CURLOPT_URL, "website");
    /* set the request for the other 3 handles too */

    /* init a multi stack */
    multi_handle = curl_multi_init();

    /* add the individual transfers */
    for(i = 0; i < HANDLECOUNT; i++)
        curl_multi_add_handle(multi_handle, handles[i]);

    /* we start some action by calling perform right away */
    curl_multi_perform(multi_handle, &still_running);

    while(still_running) {
        /* empty busy-wait loop; a real program would drive the transfers
           here with curl_multi_wait()/curl_multi_perform() */
    }
}
Create a thread-safe queue to put your requests into.
Start 4 threads, each one with its own CURL object.
Have each thread run a loop that:
pulls the next request from the queue,
sends it,
processes/dispatches the response as needed,
and repeats,
until the queue is empty.

C++ Curl Multi Perform Blocking Issue

In my QT app, I have been using curl's easy interface (curl_easy_perform), and noticed that it is actually synchronous, which was blocking my main GUI. Inside my app I have a timer with a set interval that calls curl. Any time my timer callback ran curl, it would block my app for a few seconds and then continue. So now I'm trying to use curl's multi interface and curl_multi_perform, which is asynchronous, and it's giving me the same blocking/lagging issue. Can anyone give me advice?
Below is my code, as well as curl's website demo for multi perform.
/******** Header Files ******/
#include <sys/time.h>
#include <unistd.h>
....

/******** My timer runs the code below every 10 seconds ********/
std::string url = searchEngineParam.toStdString();
std::string userAgent = options[5]->userAgentsOptions[0].toStdString();

CURL *http_handle;
CURLM *multi_handle;
int still_running; /* keep number of running handles */
int repeats = 0;

curl_global_init(CURL_GLOBAL_DEFAULT);

http_handle = curl_easy_init();
curl_easy_setopt(http_handle, CURLOPT_URL, url.c_str());
curl_easy_setopt(http_handle, CURLOPT_FOLLOWLOCATION, 1L);
curl_easy_setopt(http_handle, CURLOPT_USERAGENT, userAgent.c_str());
curl_easy_setopt(http_handle, CURLOPT_SSL_VERIFYPEER, 0L);
curl_easy_setopt(http_handle, CURLOPT_WRITEDATA, (void *)&chunk);
curl_easy_setopt(http_handle, CURLOPT_WRITEFUNCTION, WriteMemoryCallback);

/* init a multi stack */
multi_handle = curl_multi_init();

/* add the individual transfers */
curl_multi_add_handle(multi_handle, http_handle);

/* we start some action by calling perform right away */
curl_multi_perform(multi_handle, &still_running);

do {
    CURLMcode mc; /* curl_multi_wait() return code */
    int numfds;

    /* wait for activity, timeout or "nothing" */
    mc = curl_multi_wait(multi_handle, NULL, 0, 1000, &numfds);
    if(mc != CURLM_OK) {
        fprintf(stderr, "curl_multi_wait() failed, code %d.\n", mc);
        break;
    }

    /* 'numfds' being zero means either a timeout or no file descriptors to
       wait for. Try the timeout on the first occurrence; after that, assume
       there are no file descriptors to wait for and sleep for 100
       milliseconds instead. */
    if(!numfds) {
        repeats++; /* count number of repeated zero numfds */
        if(repeats > 1) {
            WAITMS(100); /* sleep 100 milliseconds */
        }
    }
    else
        repeats = 0;

    curl_multi_perform(multi_handle, &still_running);
} while(still_running);

curl_multi_remove_handle(multi_handle, http_handle);
curl_easy_cleanup(http_handle);
curl_multi_cleanup(multi_handle);
curl_global_cleanup();

Proper timeout nonblock socket event handling?

I don't see this sort of question asked often, which is odd given the benefits of single-threaded server applications. How would I implement a timeout system in my code when the server sockets are in a non-blocking state?
Currently I'm using this method.
while(true)
{
    FD_ZERO(&readfds);
    FD_SET(server_socket, &readfds);

    for(std::size_t i = 0; i < cur_sockets.size(); i++)
    {
        uint32_t sd = cur_sockets.at(i).socket;
        if(sd > 0)
            FD_SET(sd, &readfds);
        if(sd > max_sd) {
            max_sd = sd;
        }
    }

    int activity = select(max_sd + 1, &readfds, NULL, NULL, NULL);
    if(activity < 0)
    {
        continue;
    }

    if(FD_ISSET(server_socket, &readfds))
    {
        struct sockaddr_in cli_addr;
        int newsockfd = accept((int)server_socket,
                               (struct sockaddr *) &cli_addr,
                               &clientlength);
        if(newsockfd < 0) { /* accept() returns -1 on error */
            continue;
        }

        // Ensure we can even accept the client...
        if(num_clients >= op_max_clients) {
            close(newsockfd);
            continue;
        }

        fcntl(newsockfd, F_SETFL, O_NONBLOCK);

        /* DISABLE SIGPIPE SO A DEAD PEER DOESN'T KILL THE PROCESS */
#ifdef __APPLE__
        int set = 1;
        setsockopt(newsockfd, SOL_SOCKET, SO_NOSIGPIPE, (void *) &set, sizeof(int));
#elif __LINUX__
        signal(SIGPIPE, SIG_IGN);
#endif

        /* ONCE WE HAVE ACCEPTED THE CONNECTION, ADD THE CLIENT */
        num_clients++;
        client_con newCon;
        newCon.socket = newsockfd;

        time_t ltime;
        time(&ltime);
        newCon.last_message = (uint64_t) ltime;

        cur_sockets.push_back(newCon);
    }

    handle_clients();
}
As you can tell, I've added a unix timestamp to each client when it successfully connects. I was thinking of adding another thread that wakes up every second and checks whether any clients haven't sent anything for the maximum duration, but I'm afraid I'll run into bottlenecks from that second thread constantly taking locks when dealing with large numbers of connections.
Thank you,
Ajm.
The last argument of select() is the timeout for the select call, and select()'s return code tells you whether it returned because a socket was ready or because the timeout expired.
In order to implement your own timeout handling for all sockets, you could keep a timestamp for each socket and update it on any socket operation. Then, before calling select(), compute the remaining timeout for each socket and use the minimal value as the timeout of the select call. This is just the basic idea; it can be implemented more efficiently so that you don't recompute all timeouts before every select() call. But I consider a separate thread overkill.

Why select() timeouts sometimes when the client is busy receiving data

I have written simple client/server applications to test the characteristics of non-blocking sockets. Here is some brief information about the server and client:
// On Linux: the server thread sends a file
// to the client using a non-blocking socket
void *SendFileThread(void *param){
    CFile* theFile = (CFile*) param;
    int sockfd = theFile->GetSocket();
    set_non_blocking(sockfd);
    set_sock_sndbuf(sockfd, 1024 * 64); // set the send buffer to 64K

    // get the total packet count of the target file
    int packetCount = theFile->GetFilePacketsCount();
    int currPacket = 0;

    while(currPacket < packetCount){
        char buffer[512];
        int len = 0;
        // get packet data by packet no.
        GetPacketData(currPacket, buffer, len);

        // send_non_blocking_sock_data will loop and send
        // data from buffer into sockfd until there is an error
        int ret = send_non_blocking_sock_data(sockfd, buffer, len);
        if(ret < 0 && errno == EAGAIN){
            continue;
        } else if(ret <= 0){
            break;
        } else {
            currPacket++;
        }
        ......
    }
}
// On Windows: the client thread does something like the below
// to receive the file data sent by the server via a blocking socket
void *RecvFileThread(void *param){
    int sockfd = (int) param; // blocking socket
    set_sock_rcvbuf(sockfd, 1024 * 256); // set the receive buffer to 256K

    while(1){
        struct timeval timeout;
        timeout.tv_sec = 1;
        timeout.tv_usec = 0;

        fd_set rds;
        FD_ZERO(&rds);
        FD_SET(sockfd, &rds);

        // actually, the first parameter of select() is
        // ignored on Windows, though on Linux this parameter
        // should be (maximum socket value + 1)
        int ret = select(sockfd + 1, &rds, NULL, NULL, &timeout);
        if(ret == 0){
            // log that the timer expired
            CLogger::log("RecvFileThread---Calling select() timeouts\n");
        } else if(ret > 0){
            // log the amount of data received
            char buffer[1024 * 256];
            int len = recv(sockfd, buffer, sizeof(buffer), 0);
            // handle error
            process_tcp_data(buffer, len);
        } else {
            // handle the error and break
            break;
        }
    }
}
What surprised me is that the server thread fails frequently because the socket buffer is full; e.g., to send a file of 14M size it reports 50000 failures with errno == EAGAIN. However, via logging I observed tens of timeouts during the transfer. The flow is like below:
on the Nth loop, select() succeeds and reads 256K of data successfully.
on the (N+1)th loop, select() fails with a timeout.
on the (N+2)th loop, select() succeeds and reads 256K of data successfully.
Why would there be timeouts interleaved with the receiving? Can anyone explain this phenomenon?
[UPDATE]
1. Uploading the 14M file to the server takes only 8 seconds.
2. Using the same file as in 1), the server takes nearly 30 seconds to send all the data to the client.
3. All sockets used by the client are blocking. All sockets used by the server are non-blocking.
Regarding #2, I think the timeouts are the reason #2 takes much more time than #1, and I wonder why there would be so many timeouts when the client is busy receiving data.
[UPDATE2]
Thanks for the comments from @Duck, @ebrob, @EJP, and @ja_mesa. I will do more investigation today and then update this post.
Regarding why I send 512 bytes per loop in the server thread: it is because I found the server thread sends data much faster than the client thread receives it. I am very confused about why the timeouts happened in the client thread.
Consider this more of a long comment than an answer, but as several people have noted, the network is orders of magnitude slower than your processor. The point of non-blocking I/O is that the difference is so great that you can actually use it to do real work rather than block. Here you are just pounding on the elevator button hoping that makes a difference.
I'm not sure how much of your code is real and how much was chopped up for posting, but in the server you don't account for (ret == 0), i.e. a normal shutdown by the peer.
The select() in the client is wrong. Again, not sure whether that was sloppy editing or not, but if not, then the number of parameters is wrong and, more concerning, the first parameter (which should be the highest file descriptor for select to look at, plus one) is zero. Depending on the implementation of select, I wonder if that in fact just turns select into a fancy sleep statement.
You should call recv() first and then call select() only if recv() tells you to do so. Don't call select() first; that is a waste of processing. recv() knows whether data is immediately available or whether it has to wait for data to arrive:
void *RecvFileThread(void *param){
    int sockfd = (int) param; // blocking socket
    set_sock_rcvbuf(sockfd, 1024 * 256); // set the receive buffer to 256K
    char buffer[1024 * 256];

    while(1){
        int len = recv(sockfd, buffer, sizeof(buffer), 0);
        if(len == -1) {
            if(WSAGetLastError() != WSAEWOULDBLOCK) {
                // handle error
                break;
            }

            struct timeval timeout;
            timeout.tv_sec = 1;
            timeout.tv_usec = 0;

            fd_set rds;
            FD_ZERO(&rds);
            FD_SET(sockfd, &rds);

            // actually, the first parameter of select() is
            // ignored on Windows, though on Linux this parameter
            // should be (maximum socket value + 1)
            int ret = select(sockfd + 1, &rds, NULL, NULL, &timeout);
            if(ret == -1) {
                // handle error
                break;
            }
            if(ret == 0) {
                // log that the timer expired
                break;
            }

            // socket is readable, so try the read again
            continue;
        }

        if(len == 0) {
            // handle graceful disconnect
            break;
        }

        // log the amount of data received
        process_tcp_data(buffer, len);
    }
}
Do something similar on the sending side as well: call send() first, and then call select() (waiting for writability) only if send() tells you to do so.