libcurl : Handle multiple asynchronous requests in C++

I've been working with curl_easy_perform until now and it worked as expected, but because the application is single threaded and each call can block on timeouts, running multiple operations adds a lot of latency. I'm looking at making these calls asynchronous with the multi_perform interface, though I'm having a hard time understanding the correct way to use it.
From my understanding, the flow looks something like the following:
Create an easy handle for each call.
Add this easy handle to the multi stack using curl_multi_add_handle.
curl_multi_perform: this is where it gets tricky.
As I understand it, this call happens in a loop.
My application calls this API to read/write whatever there is to read or write right now.
If running_handles has changed from the previous call, there is data to read, which we should retrieve using curl_multi_info_read.
Clean up when the easy handle has been processed:
curl_multi_remove_handle
curl_easy_cleanup
curl_multi_cleanup
Q:
Does that mean my application needs to poll periodically to check if there's data to read?
Is there a way to handle this with callbacks, so that the callback triggers the action in my application asynchronously?
References I've already reviewed:
Looking at http://www.godpatterns.com/2011/09/asynchronous-non-blocking-curl-multi.html, it says the same thing. Correct me if I'm wrong.
Stack Overflow thread and other related: How to do curl_multi_perform() asynchronously in C++?

The prerequisite knowledge needed to understand the curl_multi API is asynchronous sockets.
curl_multi_perform is not a blocking API. As explained in the documentation:
When an application has found out there's data available for the multi_handle or a timeout has elapsed, the application should call this function to read/write whatever there is to read or write right now etc. curl_multi_perform returns as soon as the reads/writes are done.
It just needs to be called periodically.
Does that mean my application needs to poll periodically to check if there's data to read?
Yes. curl_multi_fdset conveniently extracts the related file descriptors so that you can select on them (select = wait), but you're free to add other descriptors to the same select call, thus interleaving curl work with your own work. Here's an example of how to do it.
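As a rough illustration (not the example linked above), a select-based loop might look like the sketch below. It assumes the easy handles have already been added to the CURLM* handle cm; error handling is omitted and the helper name drive_transfers is made up.
#include <curl/curl.h>
#include <sys/select.h>

void drive_transfers(CURLM* cm)
{
    int still_running = 0;
    do {
        curl_multi_perform(cm, &still_running);

        fd_set fdread, fdwrite, fdexcep;
        FD_ZERO(&fdread);
        FD_ZERO(&fdwrite);
        FD_ZERO(&fdexcep);
        int maxfd = -1;
        curl_multi_fdset(cm, &fdread, &fdwrite, &fdexcep, &maxfd);

        // let libcurl suggest how long we may sleep
        long timeout_ms = -1;
        curl_multi_timeout(cm, &timeout_ms);
        if (timeout_ms < 0)
            timeout_ms = 1000;

        struct timeval tv;
        tv.tv_sec = timeout_ms / 1000;
        tv.tv_usec = (timeout_ms % 1000) * 1000;

        if (maxfd == -1) {
            // libcurl has nothing to watch yet; sleep briefly and retry
            struct timeval wait = { 0, 100 * 1000 };
            select(0, NULL, NULL, NULL, &wait);
        } else {
            // your own descriptors can be added to these sets before selecting
            select(maxfd + 1, &fdread, &fdwrite, &fdexcep, &tv);
        }
    } while (still_running);
}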
Is there a way to handle this with callbacks?
Yes. The transferred data is passed during the curl_multi_perform call into a CURLOPT_WRITEFUNCTION callback. Note: curl_multi_info_read is not for reading data, it's for reading information about the transfer.
for (/* each transfer */) {
    curl_easy_setopt(eh, CURLOPT_WRITEFUNCTION, write_cb);
    curl_easy_setopt(eh, CURLOPT_WRITEDATA, /* pass userp value */);
    curl_multi_add_handle(cm, eh);
}
int still_running;
do {
    if (curl_multi_perform(cm, &still_running)) { // will call write_cb() when data is read
        /* handle error */
        break;
    }
    if (curl_multi_wait(cm, NULL, 0, 1000, NULL)) {
        /* handle error */
        break;
    }
} while (still_running);
Here's a complete example of using a data callback with multi-transfer: 10-at-a-time.
Note: curl_multi_wait used in this example is a convenience wrapper around a select call.
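For completeness, once the loop finishes you typically reap the finished transfers with curl_multi_info_read and clean up. A sketch, using the same cm handle as above:
// After the transfer loop: find out how each transfer ended and clean up.
CURLMsg* msg;
int msgs_left = 0;
while ((msg = curl_multi_info_read(cm, &msgs_left))) {
    if (msg->msg == CURLMSG_DONE) {
        CURL* eh = msg->easy_handle;
        fprintf(stderr, "transfer finished: %s\n",
                curl_easy_strerror(msg->data.result));
        curl_multi_remove_handle(cm, eh);
        curl_easy_cleanup(eh);
    }
}
curl_multi_cleanup(cm);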

Related

boost::asio write() API stuck while writing data

While sending some data to a client (multiple chunks of data), if the client stops reading after some packets, the server gets stuck in boost::asio::write(), which results in unwanted behavior in the product.
We thought of shifting to async_write() with a timer over it, so that if such a condition occurs we could fall back to the original good state; but due to design faults we could not run the io_service (because of high concurrency) after async_write, which meant we never got the callbacks needed to stop the timer.
So, is there any way (without using io_service) to unblock the write() API?
Something like executing the write() API on a separate thread and terminating it through a timer. But then the question arises: is there any way to clear out the Boost buffers that already hold pending write data?
Any help would be appreciated.
Thanks.
Eventually went with boost::asio::async_write(), but combined with io_service::poll(), poll() being non-blocking.
run() was not an option as the system is highly concurrent and read/write had to share the same io_service.
Pseudo code looks something like this:
data_to_write = size of data;
set current_bytes_transferred = 0
set timeout_occurred to false
/*
current_bytes_transferred -> obtained from the async_write() callback
timeout_occurred -> obtained from a separate timer
*/
while ((data_to_write != current_bytes_transferred) && (!timeout_occurred))
{
    // poll() is used instead of run() as the system
    // has high concurrency and read and write operations
    // share the same io_service
    io_service.poll();
}
if (data_to_write == current_bytes_transferred)
{
    // SUCCESS write logic
}
else if (timeout_occurred)
{
    // timeout logic
}
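A minimal C++ sketch of the same idea, assuming sock is an already-connected boost::asio::ip::tcp::socket bound to io_service; the function name write_with_timeout is illustrative, and the loop busy-polls exactly like the pseudocode above:
#include <boost/asio.hpp>

// Returns true if all bytes were written before the timeout fired.
bool write_with_timeout(boost::asio::io_service& io_service,
                        boost::asio::ip::tcp::socket& sock,
                        const char* data, std::size_t len, long timeout_ms)
{
    std::size_t bytes_transferred = 0;
    bool write_done = false, timer_done = false, timeout_occurred = false;

    boost::asio::async_write(
        sock, boost::asio::buffer(data, len),
        [&](const boost::system::error_code& ec, std::size_t n) {
            write_done = true;
            if (!ec)
                bytes_transferred = n;   // filled in by the write callback
        });

    boost::asio::deadline_timer timer(io_service);
    timer.expires_from_now(boost::posix_time::milliseconds(timeout_ms));
    timer.async_wait([&](const boost::system::error_code& ec) {
        timer_done = true;
        if (!ec)
            timeout_occurred = true;     // fires only if not cancelled
    });

    // poll() instead of run(): other readers/writers share this io_service.
    while (!write_done && !timeout_occurred)
        io_service.poll();

    timer.cancel();
    if (!write_done)
        sock.cancel();                   // abandon the pending write on timeout

    // Drain the cancelled handlers so the captured references stay valid.
    while (!write_done || !timer_done)
        io_service.poll();

    return write_done && !timeout_occurred && bytes_transferred == len;
}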

gRPC / C++ - How to keep a server streaming RPC open

Hello, I have this code:
Status ListFeatures(ServerContext* context, const Rectangle* rectangle,
                    ServerWriter<Feature>* writer) override {
  auto lo = rectangle->lo();
  auto hi = rectangle->hi();
  long left = std::min(lo.longitude(), hi.longitude());
  long right = std::max(lo.longitude(), hi.longitude());
  long top = std::max(lo.latitude(), hi.latitude());
  long bottom = std::min(lo.latitude(), hi.latitude());
  for (const Feature& f : feature_list_) {
    if (f.location().longitude() >= left &&
        f.location().longitude() <= right &&
        f.location().latitude() >= bottom &&
        f.location().latitude() <= top) {
      writer->Write(f);
    }
  }
  return Status::OK;
}
This is a server-streaming RPC initiated by a unary client request.
I would like to not close the stream: once the client initiates the call, I would like to keep the server stream open "forever" so I can send messages whenever I like.
As far as I understand, the moment this line is executed:
return Status::OK;
the stream is closed. Is there any way I can keep it open so that I can send more server streaming messages later?
There are two major solutions here for you.
First, the API is designed to be thread-safe and usable through thread pools. It's okay to never return, consuming a thread in the process, and to continue writing endlessly. Obviously, this means keeping a thread up for sending out your responses continuously, which can be resource-intensive depending on your situation. Also, you'd need to set up your thread pool properly for this: holding the response open indefinitely will cause problems if you haven't tuned the pool beforehand.
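A sketch of that first option, staying with the synchronous API from the question; the feature_queue_ member and its WaitAndPop helper are hypothetical, standing in for however your application hands new messages to this handler:
Status ListFeatures(ServerContext* context, const Rectangle* rectangle,
                    ServerWriter<Feature>* writer) override {
  // Never return until the call actually ends: this pins a server thread.
  while (!context->IsCancelled()) {
    Feature f;
    // Block (with a short timeout so cancellation is noticed) until the
    // application produces the next message to stream out.
    // feature_queue_ / WaitAndPop are hypothetical application plumbing.
    if (feature_queue_.WaitAndPop(&f, std::chrono::seconds(1))) {
      if (!writer->Write(f)) {
        break;  // the client went away
      }
    }
  }
  return Status::OK;
}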
Secondly, there's the reactor mechanism, which allows you to return without ending the RPC. Your callbacks have void return types, and you need to explicitly call Finish(Status::OK) to terminate the RPC. It's a different base API, so you'd need to convert your code to it. You can see an example here:
https://github.com/grpc/grpc/blob/master/examples/cpp/route_guide/route_guide_callback_server.cc
The Chatter call is an example of a streaming server API that hops threads to send its replies.
The details of the callback API can be found here: https://github.com/grpc/proposal/blob/master/L67-cpp-callback-api.md
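To make the reactor option more concrete, here is a hedged sketch (not the route_guide example itself) of a callback-API handler that returns immediately but keeps the stream open until the application decides to finish it. The FeatureStreamer class, its Push() method, and the queueing are illustrative; only one StartWrite() may be outstanding at a time, which is why writes are queued:
#include <deque>
#include <mutex>
#include <grpcpp/grpcpp.h>
#include "route_guide.grpc.pb.h"

using routeguide::Feature;
using routeguide::Rectangle;

// The reactor owns its own lifetime: it deletes itself in OnDone().
class FeatureStreamer : public grpc::ServerWriteReactor<Feature> {
 public:
  // May be called from any thread whenever the application has a new message.
  void Push(const Feature& f) {
    std::lock_guard<std::mutex> lock(mu_);
    queue_.push_back(f);
    MaybeWriteLocked();
  }

  void OnWriteDone(bool ok) override {
    std::lock_guard<std::mutex> lock(mu_);
    writing_ = false;
    if (!ok) {
      Finish(grpc::Status::CANCELLED);  // client went away
      return;
    }
    MaybeWriteLocked();
  }

  void OnDone() override { delete this; }

 private:
  // Only one StartWrite() may be outstanding at a time, so writes are queued.
  void MaybeWriteLocked() {
    if (!writing_ && !queue_.empty()) {
      current_ = queue_.front();
      queue_.pop_front();
      writing_ = true;
      StartWrite(&current_);
    }
  }

  std::mutex mu_;
  std::deque<Feature> queue_;
  Feature current_;
  bool writing_ = false;
};

class RouteGuideImpl final : public routeguide::RouteGuide::CallbackService {
 public:
  grpc::ServerWriteReactor<Feature>* ListFeatures(
      grpc::CallbackServerContext* /*context*/,
      const Rectangle* /*rectangle*/) override {
    auto* streamer = new FeatureStreamer();
    // Store `streamer` somewhere the rest of the application can reach it and
    // call Push(); call streamer->Finish(grpc::Status::OK) only when you
    // actually want to end the stream.
    return streamer;
  }
};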

epoll_wait returns EPOLLOUT even with the EPOLLET flag

I am using Linux epoll in edge-triggered mode.
Each time a new connection comes in, I add its file descriptor to epoll with the EPOLLIN|EPOLLOUT|EPOLLET flags. My first question is: what's the right way to check which kind of event(s) occurred for each ready file descriptor after epoll_wait returns? I see some example code, e.g. from https://github.com/yedf/handy/blob/master/raw-examples/epoll-et.cc line 124, doing it like this:
for (int i = 0; i < n; i++) {
    //...
    if (events & (EPOLLIN | EPOLLERR)) {
        if (fd == lfd) {
            handleAccept(efd, fd);
        } else {
            handleRead(efd, fd);
        }
    } else if (events & EPOLLOUT) {
        if (output_log)
            printf("handling epollout\n");
        handleWrite(efd, fd);
    } else {
        exit_if(1, "unknown event");
    }
}
What caught my attention is that it uses if / else if / else to check which event occurred, which means that if it calls handleRead it can't also call handleWrite for the same wakeup. I think this can lose events in the following situation: both the read and the write side have previously hit EAGAIN, and then the remote end both sends some data and reads some data, so epoll_wait may set both EPOLLIN and EPOLLOUT; but the code only calls handleRead, and the data remaining in the output buffer is never sent because handleWrite is not called.
So is the above usage wrong?
According to the man 7 epoll Q&A:
If more than one event occurs between epoll_wait(2) calls, are they combined or reported separately?
They will be combined.
If I got it right, several events can occur on a single file descriptor between epoll_wait calls. So I think I should use separate if statements to check one by one whether readable/writable/error events occurred, instead of if / else if. I went to see how the nginx epoll module does it; from https://github.com/nginx/nginx/blob/953f53921505a884f3912f2d8db5217a71c0479a/src/event/modules/ngx_epoll_module.c#L867 I see the following code:
if (revents & (EPOLLERR|EPOLLHUP)) {
    //...
}

if ((revents & EPOLLIN) && rev->active) {
    //....
    rev->handler(rev);
}

if ((revents & EPOLLOUT) && wev->active) {
    //....
    wev->handler(wev);
}
It seems to follow my idea of checking the EPOLLERR, EPOLLIN, and EPOLLOUT events one after another.
Then I did the same kind of thing as nginx in my application. But what I realized after experimenting is: if I add the file descriptor to epoll with the EPOLLIN|EPOLLOUT|EPOLLET flags and I haven't filled up the output buffer, I always get the EPOLLOUT flag set when epoll_wait returns because some data arrived and the fd became readable; therefore a redundant write_handler is called, which is not what I expect.
I did some searching and found that this situation indeed exists and is not caused by any bug in my application. The top-voted answer at "epoll with edge triggered event" says:
On a somewhat related note: if you register for EPOLLIN and EPOLLOUT events and assuming you never fill up the send buffer, you still get the EPOLLOUT flag set in the event returned by epoll_wait each time EPOLLIN is triggered - see https://lkml.org/lkml/2011/11/17/234 for a more detailed explanation.
And the link in this answer says:
It's doesn't mean there's an EPOLLOUT "event", it just means a message is triggered (by the socket becoming readable) so you get a status update. In theory the program doesn't need to be told about EPOLLOUT here (it should be assuming the socket is writable already), but it doesn't do any harm.
So far, what I understand about epoll edge-triggered mode is:
epoll_wait returns when the state of any monitored fd has changed, e.g. from nothing-to-read -> readable, or from buffer-full -> buffer-writable
epoll_wait may return one or several events (flags) for each fd in the ready list.
the flags in the struct epoll_event events field indicate the current state of the fd. Even if we never fill up the output buffer, the EPOLLOUT flag will be set when epoll_wait returns because the fd became readable, since the current state of the fd is simply writable.
Please correct me if I am wrong.
Then my question would be: should I maintain a flag in each connection that indicates whether EAGAIN occurred when writing to the output buffer, and if it is not set, skip calling write_handler/handleWrite in the "if (events & EPOLLOUT)" branch, so that my upper-layer program is not told about EPOLLOUT here?
What a great question (since I had pretty much the same question)! I'll just summarize what I think I know now with respect to your informative question/description and your helpful links, and hopefully smarter folk will correct any mistakes.
Yes, the if/else handling of event flags is definitely bogus. For sure at least two events can arrive at effectively the same time. E.g., both the read and write sides might have become unblocked since you last called epoll_wait(). And, of course, as soon as you accept() the connection, both reading and writing suddenly become possible, so you get an "event" of EPOLLIN|EPOLLOUT.
I really didn't grok that epoll_wait() is always delivering the entire current state, rather than only the parts of the state that changed -- thanks for clearing that up. To be perhaps clearer, epoll_wait() won't return an fd unless something changed on that socket, but if something did change, it returns all the flags representing the current state. So, I found myself staring at a stream of EPOLLIN|EPOLLOUT events wondering why it was claiming there was an "output" event, even though I hadn't written anything yet. Your answer being correct: it's just telling me the output side is still writeable.
"Should I maintain a flag..." Yes, but I would imagine that in all but the most trivial situations you were probably going to end up maintaining at least one bit of "am I currently blocked" state for your readers/writers anyway. For example, if you ever want to process data in an order different than how it arrives (e.g., prioritize responses over requests to make your server more resistant to overload) you instantly have to give up the simplicity of just having the arrival of I/O drive everything. In the particular case of writing, epoll simply doesn't have enough information to notify you at the "right" time. As soon as you accept a connection, there's an event that says "you can write now"--but you probably have nothing to write if you're a server who couldn't possibly have already gotten a request from the client. epoll just can't know whether you have something to write or not, so you were always going to have to either suffer essentially "extraneous" events, or maintain your own state.
In all but the simplest cases, the socket file descriptor ends up being insufficient information for handling I/O events, so you invariably have to associate some data structure with it, or object if you prefer. So, my C++ looks something like:
nAwake = epoll_wait(epollFd, events, 100, milliseconds);
if (nAwake < 0)
{
    perror("epoll_wait failed");
    assert(false);
}
for (int iSocket = 0; iSocket < nAwake; ++iSocket)
{
    auto This = static_cast<Eventable*>(events[iSocket].data.ptr);
    auto eventFlags = events[iSocket].events;
    fprintf(stderr, "%s event on socket [%d] -> %s\n",
            This->ClassName(), This->fd, DumpEvent(eventFlags));
    This->Event(eventFlags);
}
Where Eventable is a C++ class (or derivative thereof) that has all the state needed to decide how to handle the flags epoll delivers. (Of course, this is letting the kernel store a pointer to a C++ object, requiring a design that is very clear about pointer ownership/lifetimes.)
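For what it's worth, here is a stripped-down sketch of such an object; the name Eventable, its members, and the Send() helper are illustrative rather than taken from any library. It keeps the single bit of "am I currently blocked on write?" state discussed earlier, so that the spurious EPOLLOUT notifications can simply be ignored:
#include <cerrno>
#include <cstdint>
#include <string>
#include <unistd.h>
#include <sys/epoll.h>

struct Eventable {
    int fd = -1;
    bool write_blocked = false;   // true only after write() returned EAGAIN
    std::string pending;          // bytes we still owe the peer

    const char* ClassName() const { return "Connection"; }

    // Queue data; try to send immediately, fall back to EPOLLOUT if blocked.
    void Send(const std::string& data) {
        pending += data;
        FlushPending();
    }

    void Event(uint32_t flags) {
        if (flags & (EPOLLERR | EPOLLHUP)) {
            close(fd);
            return;
        }
        if (flags & (EPOLLIN | EPOLLRDHUP)) {
            // read() in a loop until EAGAIN (required in edge-triggered mode)
        }
        // Only act on EPOLLOUT if a previous write actually hit EAGAIN.
        if ((flags & EPOLLOUT) && write_blocked) {
            write_blocked = false;
            FlushPending();
        }
    }

private:
    void FlushPending() {
        while (!pending.empty()) {
            ssize_t n = write(fd, pending.data(), pending.size());
            if (n > 0) {
                pending.erase(0, static_cast<size_t>(n));
            } else if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
                write_blocked = true;   // wait for the next real EPOLLOUT edge
                return;
            } else {
                close(fd);              // hard error
                return;
            }
        }
    }
};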
And since you're writing low-level code on Linux, you may also care about EPOLLRDHUP. This not-highly-portable flag lets you save one call to read(). If the client (curl seems pretty good at evoking this behavior) closes its write side of the connection (sends a FIN), you normally discover that when epoll tells you EPOLLIN, but read() returns zero bytes. However, Linux maintains an extra bit to indicate your client's write side (your read side) has been closed. So, if you tell epoll you want the EPOLLRDHUP event you can use it to avoid doing a read() whose sole purpose will turn out to be telling you the writer closed their side.
Note that EPOLLIN will still be turned on whenever EPOLLRDHUP is, AFAIK. Even after you do a shutdown(fd, SHUT_RD). Another example of how you will usually be driven to maintain your own idea of the state of the connection. You care more about clients who are kind enough to do half-shutdowns if you are implementing HTTP.
From the suggested-usage notes in man 7 epoll:
When used as an edge-triggered interface, for performance reasons, it is possible to add the file descriptor inside the epoll interface (EPOLL_CTL_ADD) once by specifying (EPOLLIN|EPOLLOUT). This allows you to avoid continuously switching between EPOLLIN and EPOLLOUT calling epoll_ctl(2) with EPOLL_CTL_MOD.
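In code, that one-time registration looks something like this (a sketch: efd is the epoll descriptor and conn points to the per-connection object from the example above):
struct epoll_event ev = {};
ev.events = EPOLLIN | EPOLLOUT | EPOLLET;   // add EPOLLRDHUP if you also want the half-close notification
ev.data.ptr = conn;                         // e.g. an Eventable*
if (epoll_ctl(efd, EPOLL_CTL_ADD, conn->fd, &ev) < 0) {
    perror("epoll_ctl(EPOLL_CTL_ADD)");
}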

Is it expected for poll() to take 40ms to return even though data will be available sooner?

I created a proxy server to handle CQL orders from website clients. The proxy listens for incoming connections and each connection is given a thread. The thread loops as long as the socket exists and dies on HUP. You may also stop the proxy, which will stop the threads by sending an event (See eventfd()) to each thread.
By itself, this already allows me to save a good 100ms, because the proxy is local and connecting to a local service is much faster than connecting to a service on a remote computer... (even if that computer is on the local network.)
However, I send orders and once in a while the proxy sees no incoming data (i.e. it calls read() on the socket, which is set up as non-blocking, and gets -1 in return with errno == EAGAIN). When that happens, I call poll() to wait for additional data, the HUP, or a hit on the eventfd meaning I have to quit (i.e. 2 fds, the socket and the eventfd).
Somehow, more often than not, when I hit the poll() function call, it adds an extra 40ms to the time it takes for a message to go round trip. Although one would think this only happens on larger messages, it happens when I receive an order, which is less than 100 bytes! So the size should not be the culprit. I also changed the code to make sure I send the entire order from the client to the proxy in one write() and to avoid the poll() if at all possible (i.e. I call read() first, and poll() only if nothing is available.)
Note that I have no timeout in this case because there is nothing to check other than the incoming orders and the eventfd. So I would imagine that the timeout won't be a problem.
The code base is really big, but the client/server comes down to something like this (the sizes in the original are fully dynamic):
// Client
...
connect(socket);
...
write(socket, order, sizeof(order));
read(socket, result, sizeof(result));
// repeat for other orders, as required by client...
// server
...
socket = accept(); // happens for each client
...
pthread_create(runner);
...
// server thread (runner)
...
for(;;)
{
    int r(0);
    for(;;)
    {
        r += read(socket, order + r, sizeof(order) - r);
        if(r >= sizeof(order))
        {
            break;
        }
        // wait for more data if not enough was received yet
        poll(..."socket" + "eventfd"...); // <-- this will often take 40ms
        if(eventfd_happened)
        {
            // quit thread
            return;
        }
    }
    ...
    [work on order]
    ...
    write(socket, result, sizeof(result));
}
Note 1: I see the problem when I have a single client. So having multiple clients does not in itself cause the problem either.
Note 2: The client really uses BIO_connect(), BIO_read() and BIO_write() [from OpenSSL], but I doubt that would be a problem. I do not use any kind of encryption.
I don't see why you're using non-blocking I/O given you have a dedicated thread per socket. Just block in read(). Use SO_RCVTIMEO if you need an overall read timeout.
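A sketch of that suggestion, assuming the socket is switched back to blocking mode; the 5-second timeout and the read_order name are arbitrary:
#include <cerrno>
#include <cstddef>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

// Blocking read of a fixed-size order with a receive timeout via SO_RCVTIMEO.
bool read_order(int sock, void* buf, size_t len)
{
    struct timeval tv = {};
    tv.tv_sec = 5;                 // per-read() timeout
    tv.tv_usec = 0;
    setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    size_t got = 0;
    while (got < len) {
        ssize_t n = read(sock, static_cast<char*>(buf) + got, len - got);
        if (n > 0) {
            got += static_cast<size_t>(n);
        } else if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            return false;          // the receive timeout expired
        } else {
            return false;          // peer closed the socket or a hard error
        }
    }
    return true;
}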

Sending IOCTL from IRQL=DISPATCH_LEVEL (KbFilter/KMDF)

I am using the KbFilter example in the WDK, trying to send an IOCTL in a function that is called by KbFilter_ServiceCallback and is therefore executed at DISPATCH_LEVEL. The function just has to send an IOCTL and return; I am not waiting for an output buffer to be filled, so it can be asynchronous, fire and forget.
I am currently using the WDF functions WdfIoTargetFormatRequestForIoctl and WdfRequestSend to try and send at DISPATCH_LEVEL and getting nothing. The call to WdfRequestSend is succeeding but the IOCTL doesn't appear to be received.
Using either WdfIoTargetSendIoctlSynchronously or the WDM pattern of IoBuildDeviceIoControlRequest() and IoCallDriver() requires PASSIVE_LEVEL, and the only way I know to call these at PASSIVE_LEVEL is to create a separate thread that runs at PASSIVE_LEVEL and pass it instructions via a buffer or a queue, synchronized with a spinlock and semaphore.
Can someone tell me if there is an easier way to pass IOCTLs to the drivers below my filter, or is the thread/queue approach the normal pattern when you need to do things at a higher IRQL? Under what circumstances can I use KeRaiseIrql and is this what I should use? Thanks.
Use IoAllocateIrp and IoCallDriver. They can be run at IRQL <= DISPATCH_LEVEL.
You cannot lower your IRQL (unless it is you who raised it). KeRaiseIrql is used only to raise IRQL. A call to KeRaiseIrql is valid if the caller specifies NewIrql >= CurrentIrql.
Be careful: Is your IOCTL expected at DISPATCH_LEVEL?
Here is a code snippet:
PIRP Irp = IoAllocateIrp(DeviceObject->StackSize, FALSE);
Irp->Tail.Overlay.Thread = PsGetCurrentThread();
Irp->RequestorMode = KernelMode;
Irp->IoStatus.Status = STATUS_NOT_SUPPORTED;
Irp->IoStatus.Information = 0;
PIO_STACK_LOCATION stack = IoGetNextIrpStackLocation(Irp);
stack->MajorFunction = IRP_MJ_DEVICE_CONTROL;
stack->Parameters.DeviceIoControl.IoControlCode = ...