add curl_easy handles to working curl_multi_handle - c++

I'm trying to implement multi-threaded downloading using the CURL library.
I prepare N threads (easy handles that download different ranges) and then invoke
curl_multi_perform(multiHandle, &running)
My questions:
How can I check whether a specific thread (easy handle added to the multi handle) is currently downloading? I haven't found any option for this.
When a specific thread finishes downloading, it has to make a connection again and continue downloading another range. Is that possible?

The libcurl multi interface is not threaded. It does parallel transfers in the same thread!
You can add easy handles to the multi handle at any time you like. Just call curl_multi_perform() then and it'll drive all added easy handles. You can also remove handles at any time.
You should use curl_multi_info_read() to figure out which handles have completed. Until they are completed, you can consider them in use. If you want to put an easy handle back into the multi handle to do another transfer, just remove it from the multi handle (possibly set new options) and add it again.
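Here is a minimal, untested sketch of that pattern (the URL, ranges and handle count are placeholders, not taken from the question; error checking and write callbacks are omitted, so by default libcurl writes the data to stdout): a few easy handles are driven by one multi handle, and curl_multi_info_read() tells us when a handle can be removed, optionally re-configured, and added back.

#include <curl/curl.h>

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURLM *multi = curl_multi_init();

    // One easy handle per byte range (placeholder URL and ranges).
    const char *ranges[] = { "0-999999", "1000000-1999999", "2000000-2999999" };
    for (const char *range : ranges) {
        CURL *easy = curl_easy_init();
        curl_easy_setopt(easy, CURLOPT_URL, "https://example.com/bigfile");
        curl_easy_setopt(easy, CURLOPT_RANGE, range);
        curl_multi_add_handle(multi, easy);
    }

    int running = 0;
    do {
        curl_multi_perform(multi, &running);
        curl_multi_wait(multi, nullptr, 0, 1000, nullptr);

        // Which transfers finished? A finished handle can be removed,
        // given new options (e.g. another CURLOPT_RANGE) and added again.
        CURLMsg *msg;
        int msgs_left;
        while ((msg = curl_multi_info_read(multi, &msgs_left))) {
            if (msg->msg == CURLMSG_DONE) {
                CURL *done = msg->easy_handle;
                curl_multi_remove_handle(multi, done);
                curl_easy_cleanup(done); // or set new options and re-add it
            }
        }
    } while (running);

    curl_multi_cleanup(multi);
    curl_global_cleanup();
    return 0;
}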
See also http://curl.se/libcurl/c/example.html for lots of libcurl examples, including a bunch that use the multi interface. The general multi interface "tutorial"-style docs are here: http://curl.se/libcurl/c/libcurl-multi.html

Related

How to properly use the asynchronous libusb?

I worked with the synchronous libusb API in my Qt project with good results, and now I need the asynchronous features of this library. From what I've read here, here and here, after registering my callback function with libusb_fill_control_transfer and submitting a transfer with libusb_submit_transfer, I need to keep libusb_handle_events_completed running inside a while loop to receive the transfer-related events, since libusb doesn't have its own thread. For example, you typically see code like this:
libusb_fill_control_transfer(transfer, dev, buffer, cb, &completed, 1000);
libusb_submit_transfer(transfer);
while (!completed) {
    libusb_handle_events_completed(ctx, &completed);
}
Now, if I want to read a packet that can arrive at any time, it seems to go against the asynchronous approach to submit a read and then sit in a while loop calling libusb_handle_events_completed until the event is triggered.
So, do I need to create a separate thread that runs libusb_handle_events_completed in an infinite while loop?
Can anyone with experience in the asynchronous features of libusb give some suggestions on the right approach to handling the transfer events?
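For reference, a common approach (this sketch is my own assumption, not something stated above) is a dedicated thread that drives the libusb event loop, so that callbacks for submitted transfers fire without blocking the GUI thread:

#include <libusb-1.0/libusb.h>
#include <sys/time.h>   // struct timeval (POSIX)
#include <atomic>
#include <thread>

std::atomic<bool> keepRunning{true};

void usbEventLoop(libusb_context *ctx) {
    // Wake up once per second so the loop can notice keepRunning == false.
    timeval tv{1, 0};
    while (keepRunning) {
        libusb_handle_events_timeout_completed(ctx, &tv, nullptr);
    }
}

// Usage sketch: submit transfers from any thread; their callbacks are invoked
// from the event-loop thread above.
//   std::thread events(usbEventLoop, ctx);
//   ...
//   keepRunning = false;
//   events.join();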

Reading/writing Cap’n Proto messages partially

I'm trying to use Cap’n Proto in an existing project consisting of a client and a server communicating over UDS. I don't have the resources (and I doubt it would be accepted) to redo all of the client-server RPC, but I wanted to benefit from Cap’n Proto's serialization mechanisms. Unfortunately, it seems to me that it's impossible.
The biggest problem is the server side, which is single-threaded (and it will remain so, unless there are serious arguments for multithreading) and uses its own poll-based loop. All events are read partially; the server can't block waiting for any event to be fully read - and this is where I am stuck. We have our own protocol and classes which wrap the message, which can consume bytes from a file descriptor and notify when the event has been fully read, so the server can process it. I think I've analysed most of the Cap’n Proto interfaces (serialization, async serialization) and it seems that they can't be used this way without modifications.
I really hope that I've missed something. Did I?
There are two ways you can solve this:
Hard way: You can attempt to integrate with the KJ async I/O framework (used by Cap'n Proto). The KJ event loop can actually integrate with other event loops and run on top of them -- but it's tricky. For example, node-capnp includes code to integrate the KJ event loop with libuv, as seen in the first part of this source file. Once you have the necessary glue, you can write KJ-style async code that uses the interfaces in capnp/serialize-async.h.
Easy way: Instead of trying to integrate KJ, you can write simple code using your event infrastructure which reads data from the file descriptor directly and then uses capnp::expectedSizeInWordsFromPrefix() (from capnp/serialize.h) to figure out if it has received the whole message yet. If that function returns a number greater than what you already have, then you don't have the full message and have to keep waiting. Once you have the full message, you can then use capnp::FlatArrayMessageReader to parse it.
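A rough sketch of the easy way (the function name and buffer handling are my own placeholders; it assumes your poll loop accumulates the received bytes into a word-aligned buffer):

#include <capnp/serialize.h>

// Returns true once the complete message is in 'prefix' and has been parsed;
// returns false if more bytes still need to arrive.
bool tryParseMessage(kj::ArrayPtr<const capnp::word> prefix) {
    // Ask Cap'n Proto how many words the whole message will occupy, judging
    // from the segment table at the start of the prefix.
    size_t expected = capnp::expectedSizeInWordsFromPrefix(prefix);
    if (prefix.size() < expected) {
        return false;  // keep polling the fd for more data
    }

    capnp::FlatArrayMessageReader reader(prefix);
    // auto root = reader.getRoot<MyEvent>();  // MyEvent: your schema's root type
    (void)reader;
    return true;
}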

changing threads standard input and output

I have an application that creates two threads: thread_1 for a Qt GUI and thread_2 for an app that runs a TCL interpreter.
I want thread_1 (Qt GUI) to create a command and send it to thread_2 (TCL interpreter).
I'm thinking of connecting thread_1's stdout to thread_2's stdin, but I don't know how to do it.
If you know how to do this, or can suggest a different way of working, I'd appreciate your help.
The solution I propose requires setting up two std::queue<> (or std::list) instances so that each thread can pass messages to the other and vice versa. The simplest way is to have each thread set up its own incoming message queue and let other threads get a pointer to it. First you need a synchronized version of the queue datatype: as mentioned in the comment, there's an implementation there.
Then you only need to extend your thread class (or runnable class, or whatever abstraction of a task you're using) with one such queue available internally, plus a publicly accessible send method so that other tasks may post messages to it. Your task will then have to periodically check that queue for incoming messages and process them.
NB: I found that page on Stack Overflow itself, since the blog owner is a member of this community. See that page for a discussion of queue synchronization issues.
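A minimal sketch of such a synchronized queue (my own illustration, not the implementation linked above):

#include <condition_variable>
#include <mutex>
#include <queue>

template <typename T>
class SyncQueue {
public:
    // Called by other threads to post a message.
    void send(T msg) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(msg));
        }
        cv_.notify_one();
    }

    // Called by the owning thread; blocks until a message is available.
    T receive() {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return !queue_.empty(); });
        T msg = std::move(queue_.front());
        queue_.pop();
        return msg;
    }

private:
    std::mutex mutex_;
    std::condition_variable cv_;
    std::queue<T> queue_;
};

// Usage: the GUI thread calls tclQueue.send("some tcl command"); the TCL thread
// calls tclQueue.receive() (or polls) and evaluates whatever it gets.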
I am not sure why you would want to go through standard input and output here; the issue might be much simpler than you think. Personally, I would just use the Qt signal-slot mechanism as follows:
connect(guiThreadSender, SIGNAL(sendCommand(const QByteArray&)),
        tclThreadReceiver, SLOT(handleCommand(const QByteArray&)));
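Fleshed out a little (the class and object names here are hypothetical): with the default Qt::AutoConnection, a signal emitted from the GUI thread is delivered as a queued call in the thread the receiver lives in, so no explicit locking is needed.

#include <QObject>
#include <QByteArray>
#include <QThread>

class TclWorker : public QObject {
    Q_OBJECT
public slots:
    void handleCommand(const QByteArray &cmd) {
        // Evaluate 'cmd' with the TCL interpreter owned by this thread.
    }
};

// Setup, e.g. in main():
//   QThread tclThread;
//   TclWorker worker;
//   worker.moveToThread(&tclThread);
//   tclThread.start();
//   QObject::connect(guiThreadSender, SIGNAL(sendCommand(const QByteArray&)),
//                    &worker, SLOT(handleCommand(const QByteArray&)));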

How to check if an application is in waiting

I have two applications running on my machine. One is supposed to hand in the work and the other is supposed to do the work. How can I make sure that the first application/process is in a wait state? I can check this via the resources it's consuming, but that doesn't guarantee it. What tools should I use?
Your two applications should communicate. There are a lot of ways to do that:
Send messages through sockets. This way the two processes can run on different machines if you use normal network sockets instead of local ones.
If you are using C you can use semaphores with semget/semop/semctl. There should be interfaces for that in other languages.
Named pipes block until there is both a read and a write operation in progress. You can use that for synchronisation.
Signals are also good for this. For message-based communication over sockets, C has sendmsg/recvmsg.
DBUS can also be used and has bindings for various languages.
Update: If you can't modify the processing application then it is harder. You have to rely on some sign that indicates the progress. (I am assuming your processing application reads a file, does some processing, then writes the result to an output file.) Do you know the final size the result should have? If so, you need to check the size repeatedly (or whenever it changes).
If you don't know the size but you know how the processing works, you may be able to use that. For example, the processing is done when the output file is closed. You can use strace to see all the system calls, including the close. You can also interpose your own close() function via the LD_PRELOAD environment variable (on Windows you would have to replace DLLs). This way you can, in a sense, modify the processing program without recompiling it or even having access to its source.
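A hypothetical LD_PRELOAD shim for the close() idea (Linux-specific; file names and build commands are assumptions):

// close_spy.cpp - logs every close() made by the process it is preloaded into
#ifndef _GNU_SOURCE
#define _GNU_SOURCE   // for RTLD_NEXT
#endif
#include <dlfcn.h>
#include <unistd.h>
#include <cstdio>

extern "C" int close(int fd) {
    // Look up the real close() once, then forward to it.
    static int (*real_close)(int) =
        reinterpret_cast<int (*)(int)>(dlsym(RTLD_NEXT, "close"));
    std::fprintf(stderr, "close(%d) called\n", fd);
    return real_close(fd);
}

// Build and run (assumed commands):
//   g++ -shared -fPIC -o close_spy.so close_spy.cpp -ldl
//   LD_PRELOAD=./close_spy.so ./processing_app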
You can use named pipes: the first app will read from the pipe, but the pipe will be empty, so it will keep waiting (blocked). The second app will write into it when it wants the first one to continue.
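For example (the path and message are placeholders), the waiting side could look like this; the worker opens the same FIFO with O_WRONLY and writes a byte when it is done:

#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

int main() {
    mkfifo("/tmp/work_done", 0666);             // create the FIFO (ignore EEXIST)
    int fd = open("/tmp/work_done", O_RDONLY);  // blocks until a writer opens it
    char buf[16];
    read(fd, buf, sizeof(buf));                 // blocks until the worker writes
    close(fd);
    // The worker has signalled; continue with the next step.
    return 0;
}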
Nothing can guarantee that your application is in a waiting state. You have to pass it some work and get back a response. This may or may not be transactional: the application can confirm that it got the message before it starts to process it, or only after it has been processed (successfully or not). If it is not waiting, passing it a piece of work should fail, whether because a write to a TCP/IP socket (or whatever transport you use) fails or because a timeout occurs. This depends on the implementation, what kind of transport you are using and other requirements.
There is actually a way of figuring out whether a process (thread) is blocked waiting for data on a socket (or another source), but it requires the client to be on the same computer and to have the necessary access privileges, and it makes little sense outside of debugging, which you can do with any debugger anyway.
Overall, the idea of making sure that the application is waiting for data before trying to pass it that data smells bad. Not to mention the race condition: what if you checked and it was OK, but by the time you actually tried to send the data the application was no longer waiting (even if only by microseconds)?

XMLRPCPP asynchronously handling multiple calls?

I have a remote server which handles various different commands, one of which is an event fetching method.
The event fetch returns right away if there are one or more events in the queue ready for processing. If the event queue is empty, the method does not return until a timeout of a few seconds, so I don't run into any HTTP/socket timeouts; the moment an event becomes available, the method returns right away. This way the client only ever makes connections to the server, and the server does not have to make any connections to the client.
This event mechanism works nicely. I'm using the boost library to handle queues, event notifications, etc.
Here's the problem: while the server is holding back the return from the event-fetch method, I can't issue any other commands during that time.
In the source code, XmlRpcDispatch.cpp, I see in the work method a simple loop that uses a blocking call to select.
It seems that while one method call is being handled, no other requests are processed.
Question: am I missing something, or can XmlRpcpp (xmlrpc++) simply not handle multiple requests asynchronously? Does anyone know of a better XML-RPC library for C++? I don't suppose the Boost library has a component that lets me issue remote commands?
I actually don't care about the XML or the over-HTTP feature; I simply need to issue (asynchronous) commands over TCP in some shape or form.
I look forward to any input anyone might offer.
I had some problems with XML-RPC too, and investigated many solutions like gSOAP and XMLRPC++, but in the end I gave up and wrote the whole HTTP + XML-RPC layer from scratch using Boost.ASIO and TinyXML++ (later I swapped TinyXML for Expat). It wasn't really that much work; I did it myself in about a week, starting from scratch and ending up with many RPC calls fully implemented.
Boost.ASIO gave great results. It is, as its name says, totally async, with excellent performance and little overhead, which to me was very important because it was running in an embedded environment (MIPS).
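To give an idea of the style, here is a minimal, single-threaded async TCP server sketch with Boost.ASIO (assumes Boost 1.66 or newer for io_context; the port and buffer handling are placeholders):

#include <boost/asio.hpp>
#include <array>
#include <iostream>
#include <memory>

using boost::asio::ip::tcp;

void startAccept(boost::asio::io_context &io, tcp::acceptor &acceptor) {
    auto socket = std::make_shared<tcp::socket>(io);
    acceptor.async_accept(*socket,
        [&io, &acceptor, socket](const boost::system::error_code &ec) {
            if (!ec) {
                auto buf = std::make_shared<std::array<char, 1024>>();
                socket->async_read_some(boost::asio::buffer(*buf),
                    [socket, buf](const boost::system::error_code &readEc, std::size_t n) {
                        if (!readEc)
                            std::cout << "received " << n << " bytes\n";
                    });
            }
            startAccept(io, acceptor);  // keep accepting further connections
        });
}

int main() {
    boost::asio::io_context io;
    tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 5555));
    startAccept(io, acceptor);
    io.run();  // one thread drives all asynchronous operations
}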
Later (and this might be your case too) I changed XML to Google's Protocol Buffers, and was even happier. Its API, as well as its message containers, are all type-safe (i.e. you send an int and a float, and they never get converted to a string and back, as is the case with XML), and once you get the hang of it, which doesn't take very long, it's a very productive solution.
My recommendation: if you can ditch XML, go with Boost.ASIO + Protobuf. If you need XML: Boost.ASIO + Expat.
Doing this stuff from scratch is really worth it.