Based on this documentation, librdkafka's Handle::poll() automatically calls the configured callbacks when it is invoked. The documentation, however, does not say how these callbacks are called.
For example, does poll() wait for all the triggered callbacks to finish, does it spawn new threads to handle the callbacks and forget about their completion, or a mixture, or something else?
This is important to me because some of my callbacks take time to complete.
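For concreteness, here is a minimal sketch of the scenario, assuming the standard librdkafka C++ API (rdkafkacpp.h); the broker address and the SlowDrCb name are illustrative:

#include <librdkafka/rdkafkacpp.h>
#include <iostream>
#include <string>

// A delivery-report callback that takes time to complete.
class SlowDrCb : public RdKafka::DeliveryReportCb {
public:
    void dr_cb(RdKafka::Message& message) override {
        // Does poll() block until this returns, or is it run elsewhere?
        std::cout << "delivered, err=" << message.errstr() << std::endl;
    }
};

int main() {
    std::string errstr;
    SlowDrCb dr_cb;
    RdKafka::Conf* conf = RdKafka::Conf::create(RdKafka::Conf::CONF_GLOBAL);
    conf->set("bootstrap.servers", "localhost:9092", errstr);  // illustrative
    conf->set("dr_cb", &dr_cb, errstr);
    RdKafka::Producer* producer = RdKafka::Producer::create(conf, errstr);
    // ... produce() some messages ...
    producer->poll(1000);  // the configured callbacks are served from this call
    delete producer;
    delete conf;
}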
In Qt one can:
connect(object, &Object::someSignal, objectInAnotherThread, &Object::someSlot);
So, when I connect a signal from an object in a thread to an object in another thread, Qt queues the signal and someSlot will be executed in the thread of objectInAnotherThread.
This particular feature is very handy and safe, although it may copy data.
Lambdas in C++11 are handy, but when replacing this kind of connection with a plain lambda callback (without Qt), the lambda is executed in the caller's thread. Making that safe then usually requires mutexes and other error-prone logic.
I'm aware of boost::signals2 etc., but AFAIK they don't provide this same Qt-like behavior when used across thread boundaries?
If I'd like to remove Qt for one reason or another, what are my options for a drop-in replacement for my signal-slot connections?
What’s wrong with spinning up a thread and sending wrapped function calls to a queue that the thread pulls from and executes? The event queue in Qt is not very special other than that it uses the “native” event loop. There’s no need to do that, though; QtConcurrent::run threads, for example, implement a simple queue protected by a mutex and a wait condition. Whenever new events are delivered, the thread wakes up and processes them until the queue is empty. The events can carry functor calls; in fact, the events can simply be std::function.
The only sticking point is timers, which you’d have to implement on top of the primitive that waits on the wait condition. Those waits have timeouts, so you’d use a sorted timeout queue and schedule wake-ups whenever a timer object should “tick”. This has the benefit of not using up any native timers and can potentially perform better.
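A minimal sketch of such a queue in plain C++11, with illustrative names (FunctionQueue, post); timers are left out:

#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>

class FunctionQueue {
public:
    FunctionQueue() : worker_([this] { run(); }) {}
    ~FunctionQueue() {
        post(std::function<void()>());  // an empty function is the quit marker
        worker_.join();
    }
    // Called from any thread; fn executes in the worker thread, like a
    // queued slot.
    void post(std::function<void()> fn) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push_back(std::move(fn));
        }
        cv_.notify_one();
    }
private:
    void run() {
        for (;;) {
            std::function<void()> fn;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                cv_.wait(lock, [this] { return !queue_.empty(); });
                fn = std::move(queue_.front());
                queue_.pop_front();
            }
            if (!fn) return;  // quit marker
            fn();
        }
    }
    std::mutex mutex_;
    std::condition_variable cv_;
    std::deque<std::function<void()>> queue_;
    std::thread worker_;  // declared last: starts only after the queue exists
};

Anything you would have sent over a queued connection becomes queue.post([data] { /* use data */ });, copying the captured data much as a queued Qt connection does.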
I'm having a hard time finding out exactly how a thread pool built with boost::asio::io_service behaves.
The documentation says:
Multiple threads may call the run() function to set up a pool of threads from which the io_service may execute handlers. All threads that are waiting in the pool are equivalent and the io_service may choose any one of them to invoke a handler.
I would imagine that when a thread executing run() takes a handler to execute, it executes it and then comes back to wait for the next handler. When executing a handler, a thread is not considered waiting, and hence no new handlers are assigned to it. Is that correct? Or does the io_service assign work to threads without considering whether they are busy or not?
I am asking because in one project that we are using (OSRM), which uses a boost::asio::io_service based thread pool to handle incoming HTTP requests, I noticed that long-running requests sometimes block other, fast requests, even though more threads and cores are available.
When executing a handler, a thread is not considered waiting, and hence no new handlers are assigned to it. Is that correct?
Yes. It's a pull-model queue: a thread takes the next ready handler only when it is free.
A notable "apparent" exception is when strands are used: handlers wrapped on a strand do synchronize with other handlers running on that same strand.
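A small sketch of both points, assuming the (pre-1.66) io_service API: four threads pull handlers from one io_service, and strand-wrapped handlers never run concurrently with one another:

#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <iostream>

int main() {
    boost::asio::io_service io_service;
    boost::asio::io_service::strand strand(io_service);

    // Each plain handler is executed by whichever waiting thread pulls it
    // first; a long-running handler only ties up its own thread.
    for (int i = 0; i < 8; ++i)
        io_service.post([i] {
            std::cout << "job " << i << " on thread "
                      << boost::this_thread::get_id() << std::endl;
        });

    // Strand-wrapped handlers are still pulled by arbitrary threads, but
    // never run concurrently with each other.
    for (int i = 0; i < 4; ++i)
        io_service.post(strand.wrap([i] {
            std::cout << "strand job " << i << std::endl;
        }));

    boost::thread_group pool;
    for (int i = 0; i < 4; ++i)
        pool.create_thread([&io_service] { io_service.run(); });
    pool.join_all();  // run() returns in each thread once the queue drains
}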
I have a process which receives multiple jobs, picks a thread from a thread pool, and assigns a job to it; this thread in turn may spawn another set of threads from its own thread pool. Now, when a STOP request for a job comes to the main process, it should be forwarded to the corresponding thread for that request, and all the threads associated with that job should clean up and exit. My question is how to notify the worker threads about the "STOP".
A global variable could be used, with the worker threads polling it frequently, but there are a lot of functions that a worker can be executing, and adding checks everywhere is unappealing.
Is there a cleaner approach, some kind of messaging layer? By the way, the code is C++.
The Boost.Thread library is a wrapper around pthreads that's also portable to Windows. The boost::thread class has an interrupt() method that'll interrupt the thread at the next interruption point.
Boost.Thread also has a thread_group class, which manages a collection of related threads. thread_group has an interrupt_all() method that invokes interrupt() on each thread in the group.
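A hedged sketch of that mechanism; sleep_for() and interruption_point() are two of Boost.Thread's documented interruption points:

#include <boost/chrono.hpp>
#include <boost/thread.hpp>
#include <iostream>

void worker() {
    try {
        for (;;) {
            boost::this_thread::interruption_point();  // explicit check
            // ... a slice of real work ...
            boost::this_thread::sleep_for(             // also an
                boost::chrono::milliseconds(100));     // interruption point
        }
    } catch (const boost::thread_interrupted&) {
        // Release this job's resources, then let the thread exit.
        std::cout << "worker interrupted, cleaning up" << std::endl;
    }
}

int main() {
    boost::thread_group job_threads;  // one group per job, for example
    job_threads.create_thread(worker);
    job_threads.create_thread(worker);
    boost::this_thread::sleep_for(boost::chrono::seconds(1));
    job_threads.interrupt_all();      // the "STOP" message for this job
    job_threads.join_all();
}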
I want to create an HTTP client using Boost.Asio. To arrive at a structured and optimized design, I have looked into the Boost.Asio examples to get an idea of what a good implementation should look like.
Mostly, I have followed the structure of the HTTP Server example, so I have a connection manager that holds a set of pointers to each individual connection. Now, the big difference here is that an asynchronous function is already called in the constructor of the server class in server.cpp, namely:
acceptor_.async_accept(new_connection_->socket(),
boost::bind(&server::handle_accept, this,
boost::asio::placeholders::error));
and in winmain.cpp the io_service is started through a call to server::run():
io_service_.run();
In my implementation, since it's a client and not a server, I want to wait for the user to call a send() function before I start connecting to the server. I have therefore moved all the connect-to-server-related function calls into the connection class. When a user requests to send a message to the server, the following is called:
resolver.async_resolve(query,
boost::bind(&connection::handle_resolve, boost::ref(*this),
boost::asio::placeholders::error,
boost::asio::placeholders::iterator));
io_service_.run();
I want to run every connection object in a separate thread, and that is really the background of my question: how do I do that while keeping the code structured and optimized?
I have tried, as in the HTTP Server 2 example, to set up a thread pool of io_services and assign work to them so that they will not return until stopped. This seems like a good idea, since I would then have the io_services running in the background all the time. Consequently, I start the thread pool from my equivalent of server.cpp, in a thread:
boost::thread t(boost::bind(&geocast::enabler::io_service_pool::run, &io_service_pool_));
BUT, from my own trial-and-error analysis, it seems as though you cannot start an io_service before you have issued an asynchronous operation; is that true? My program just gets stuck. In my case I want to call async_resolve only when the user issues a POST or GET request. To support my theory: the Chat Client example starts off by calling async_connect, with an async_read as its callback; this way it can safely call io_service.run() just after the client has been created. I don't want to read from the server all the time just to be able to start the io_service, because that is not how a normal client works, right? A browser does not read from every possible server on the planet without the user having navigated to a website...
If I don't use the thread pool from example 2 but instead run every connection class in a separate thread, each owning its own io_service, everything works fine. But a thread pool with a simple round-robin routine to select an appropriate io_service seems really attractive. What is the best approach for going multi-threaded? Or am I just being picky and should stick to one connection, one io_service?
I have tried, as in the HTTP Server 2 example, to set up a thread pool of io_services and assign work to them so that they will not return until stopped.
When using asynchronous programming, I strongly suggest using the following designs in order:
single io_service with a single thread
pool of threads invoking a single io_service
io_service per thread or other exotic designs
You should only move to the next design if, after profiling, your current design proves to be a bottleneck.
BUT, from my own trial-and-error analysis, it seems as though you cannot start an io_service before you have issued an asynchronous operation; is that true?
You are correct; the io_service::run() documentation spells this out very clearly:
The run() function blocks until all work has finished and there are no more handlers to be dispatched, or until the io_service has been stopped.
The correct way to prevent io_service::run() from returning immediately is to queue up some handlers, or instantiate an io_service::work object and keep it in scope for as long as you want run() to stay active.
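For example (the helper thread and the lambda are illustrative, assuming the pre-1.66 io_service API):

#include <boost/asio.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/thread.hpp>
#include <iostream>

int main() {
    boost::asio::io_service io_service;

    // The work object keeps run() from returning even though nothing has
    // been queued yet.
    boost::shared_ptr<boost::asio::io_service::work> work(
        new boost::asio::io_service::work(io_service));

    boost::thread t([&io_service] { io_service.run(); });  // stays in run()

    io_service.post([] { std::cout << "posted later, still runs" << std::endl; });

    work.reset();  // release the work: run() returns once the queue drains
    t.join();
}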
When using Asio, you are handing control of your program's flow over to Asio. You can share control if you change your code to use a thread pool and call run_one() instead of run(). run_one() runs at most one ready handler, so if there are multiple pending events in the io_service, you will have to call run_one() several times.
Have you thought about spawning a new thread as your boss thread and then having the boss thread create a bunch of worker threads? Your boss thread could call run(), and your UI thread could call post() to dispatch new units of work, along the lines of the sketch below. Besides not having to manually schedule tasks in the io_service, this also makes cleanup and shutdown more straightforward, since your boss thread blocks when it calls run().
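A hedged sketch of that layout; the Boss class and submit() are illustrative names, not Asio API:

#include <boost/asio.hpp>
#include <boost/function.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/thread.hpp>

class Boss {
public:
    Boss() : work_(new boost::asio::io_service::work(io_)) {
        for (int i = 0; i < 4; ++i)  // pool size is arbitrary here
            workers_.create_thread([this] { io_.run(); });
    }
    ~Boss() {
        work_.reset();        // let run() return in every worker
        workers_.join_all();  // shutdown is just joining the pool
    }
    // Callable from the UI thread; the job executes on a worker thread.
    void submit(const boost::function<void()>& job) { io_.post(job); }
private:
    boost::asio::io_service io_;
    boost::shared_ptr<boost::asio::io_service::work> work_;
    boost::thread_group workers_;
};

The UI thread then only ever calls submit() and never touches the io_service directly.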
I was just going over the Asio chat server example. My question is about its usage of the io_service.run() function. The documentation for io_service.run() says:
The run() function blocks until all work has finished and there are no more handlers to be dispatched, or until the io_service has been stopped. Multiple threads may call the run() function to set up a pool of threads from which the io_service may execute handlers. All threads that are waiting in the pool are equivalent and the io_service may choose any one of them to invoke a handler. The run() function may be safely called again once it has completed only after a call to reset().
It says that the run() function will return, and I'm assuming that when it does, the network thread stops until it is called again. If that is true, then why isn't the run() function called in a loop, or at least given its own thread? The io_service.run() function is pretty much a mystery to me.
"until all work has finished and there are no more handlers to be dispatched, or until the io_service has been stopped"
Notice that you DO install a handler, named handle_accept, which re-installs itself on each execution. Hence io_service.run() will never return, at least until you quit it manually.
Basically, from the moment you run io_service.run() in a thread, the io_service's proactor takes over program flow, using the handlers you installed. From that point on, you handle the program based on events (like handle_accept) instead of normal procedural program flow. The loop you mention is somewhere deep in the scary depths of Asio's proactor ;-).
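A self-contained sketch of that re-installation pattern, assuming the same pre-1.66 Asio API as the example (the port and the logging are illustrative; the chat server does the same thing with its chat_session objects):

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/shared_ptr.hpp>
#include <iostream>

using boost::asio::ip::tcp;

class server {
public:
    server(boost::asio::io_service& io_service, unsigned short port)
        : acceptor_(io_service, tcp::endpoint(tcp::v4(), port)) {
        start_accept();  // install the first handler before run() is called
    }
private:
    void start_accept() {
        boost::shared_ptr<tcp::socket> socket(
            new tcp::socket(acceptor_.get_io_service()));
        acceptor_.async_accept(*socket,
            boost::bind(&server::handle_accept, this, socket,
                boost::asio::placeholders::error));
    }
    void handle_accept(boost::shared_ptr<tcp::socket> socket,
                       const boost::system::error_code& error) {
        if (!error)
            std::cout << "accepted a connection" << std::endl;
        start_accept();  // re-install the handler: run() keeps going
    }
    tcp::acceptor acceptor_;
};

int main() {
    boost::asio::io_service io_service;
    server s(io_service, 12345);  // arbitrary port for the sketch
    io_service.run();  // never returns: there is always a pending accept
}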