boost asio io_service.run() - c++

I was just going over the asio chat server example. My question is about their usage of the io_service.run() function. The documentation for the io_service.run() function says:
The run() function blocks until all work has finished and there are no more handlers to be dispatched, or until the io_service has been stopped. Multiple threads may call the run() function to set up a pool of threads from which the io_service may execute handlers. All threads that are waiting in the pool are equivalent and the io_service may choose any one of them to invoke a handler. The run() function may be safely called again once it has completed only after a call to reset().
It says that the run function will return, and I'm assuming that when it does return, the network thread stops until it is called again. If that is true, then why isn't the run function called in a loop, or at least given its own thread? The io_service.run() function is pretty much a mystery to me.

"until all work has finished and there are no more handlers to be dispatched, or until the io_service has been stopped"
Notice that you DO install a handler, named handle_accept, that reinstalls itself on each execution. Hence, io_service.run() will never return, at least until you stop it manually.
Basically, the moment you call io_service.run() in a thread, the io_service's proactor takes over program flow, using the handlers you installed. From that point on, you handle the program based on events (like handle_accept) instead of normal procedural program flow. The loop you're mentioning is somewhere deep in the scary depths of asio's proactor ;-).
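For illustration, here is a rough sketch of such a self-reinstalling accept handler (the class layout and names like start_accept are my own simplification of the server examples, not verbatim code from them):

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/shared_ptr.hpp>

using boost::asio::ip::tcp;

class server {
public:
    server(boost::asio::io_service& io, unsigned short port)
        : io_(io), acceptor_(io, tcp::endpoint(tcp::v4(), port)) {
        start_accept();   // queue the first asynchronous operation
    }

private:
    void start_accept() {
        boost::shared_ptr<tcp::socket> sock(new tcp::socket(io_));
        acceptor_.async_accept(*sock,
            boost::bind(&server::handle_accept, this, sock,
                        boost::asio::placeholders::error));
    }

    void handle_accept(boost::shared_ptr<tcp::socket> sock,
                       const boost::system::error_code& ec) {
        if (!ec) {
            // ... hand *sock off to a session object that starts its own async reads ...
        }
        start_accept();   // reinstall the handler, so run() never runs out of work
    }

    boost::asio::io_service& io_;
    tcp::acceptor acceptor_;
};

int main() {
    boost::asio::io_service io;
    server s(io, 12345);
    io.run();   // blocks "forever": there is always an outstanding accept operation
}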

Related

Behavior of boost::asio::io_service thread pool during uneven load

I have a hard time finding out how exactly a thread pool built with boost::asio::io_service behaves.
The documentation says:
Multiple threads may call the run() function to set up a pool of threads from which the io_service may execute handlers. All threads that are waiting in the pool are equivalent and the io_service may choose any one of them to invoke a handler.
I would imagine that when threads executing run() take a handler to execute, they execute it and then come back to wait for the next handlers to execute. When executing a handler, a thread is not considered waiting, and hence no new handlers to execute are assigned to it. Is that correct? Or does the io_service assign work to threads without considering whether they are busy or not?
I am asking because in one project that we are using (OSRM), which uses a boost::asio::io_service-based thread pool to handle incoming HTTP requests, I noticed that long-running requests sometimes block other, fast requests, even though more threads and cores are available.
When executing a handler, a thread is not considered waiting, and hence no new handlers to execute are assigned to it. Is that correct?
Yes. It's a pull model queue.
A notable "apparent" exception is when strands are used: handlers wrapped on a on a strand do synchronize with other handlers running on that same strand.
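Roughly, a pull-model pool with a strand looks like this (a sketch only, assuming C++11 lambdas; the handler bodies are placeholders, not OSRM code):

#include <boost/asio.hpp>
#include <boost/thread.hpp>

int main() {
    boost::asio::io_service io;
    boost::asio::io_service::strand strand(io);

    // Queue some handlers first, so run() has something to pull.
    io.post([] { /* fast request */ });
    io.post([] { /* slow request: it only ties up the one thread that pulled it */ });

    // These two never run concurrently, because they are wrapped on the same strand:
    strand.post([] { /* step 1 */ });
    strand.post([] { /* step 2 */ });

    // Each thread blocks in run(), pulls the next ready handler, executes it,
    // then goes back for more; run() returns once the queue is drained.
    boost::thread_group pool;
    for (int i = 0; i < 4; ++i)
        pool.create_thread([&io] { io.run(); });
    pool.join_all();
    return 0;
}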

Boost::Asio - wake up a thread when there are handlers to run

The common way to process Asio handlers is to have a thread (or several threads) either poll the io_service (i.e. call io_service::poll()) regularly to run the handlers, or use io_service::run(), which blocks the thread until there is work to do, in which case the thread runs the required handlers and then either returns or goes back to sleep.
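To make sure we are talking about the same thing, the two patterns I mean look roughly like this (a sketch only):

#include <boost/asio.hpp>

void run_style(boost::asio::io_service& io) {
    // Blocks until all work is finished or the io_service is stopped,
    // running handlers as they become ready.
    io.run();
}

void poll_style(boost::asio::io_service& io) {
    // Non-blocking: runs only the handlers that are ready right now, then returns,
    // so the caller has to keep coming back (i.e. polling) regularly.
    io.poll();
}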
However, I want to make a system where a thread is not only responsible for running Asio handlers, but also needs to sync up with another thread using a condition variable. Basically, I want the thread to do all of these:
Wake up when there are Asio handlers that need to be processed (i.e. if I call io_service::poll(), one or more handlers will be processed).
Wake up when there is non-Asio work to be done, indicated by my condition variable.
Sleep otherwise.
In other words, I need a way for Asio to signal me that there are handlers ready to execute, without having to busy-wait or continuously poll. Ideally, Asio will somehow signal a thread when work is available, and that thread will in turn wake up my main worker thread, which will process Asio handlers. That worker thread will also be occasionally woken up by yet another thread, and will process other, non-Asio related work.
Is this even feasible, or should I reconsider how I am designing my system?

Boost async main thread callback

This is my first time using threads in C++. I've been looking at using boost, which is very confusing for me. Basically, all I'm trying to do is:
1. Create a worker thread that does some work asynchronously. Continue main thread while work is being done.
2. When the worker thread is done, fire a callback function with some results that executes in the main thread context.
So something similar to thread handling in C#.
There doesn't seem to be any support for (2). Using an io_service together with an async function, and thereafter calling run() on the io_service, seems to block the main thread. So not very asynchronous.
I've tried using boost::future as per the example here: Using boost::future with "then" continuations
Here the "then" continuation is done in a separate thread, not the main thread, so not what I'm after. Is there any way to alter this? Using boost::launch::deferred and wait() makes the call synchronous, so that doesn't help either. Same with just using get() on the boost::future construct.
It seems the only option is to create a mutex-locked shared event queue, and just poll it continuously for new data in the main thread?
It's unusual to preempt the main thread in whatever it was doing to start working on the callback. Even in "thread handling in C#" (which is quite a broad subject) the main thread will typically process callbacks when it is processing the thread's message queue.
So typically, the main thread only executes callbacks when it is ready to do so. One way of implementing that is by calling run() on an io_service.
Your main thread can only process one message queue at a time. If your application happens to be a Windows GUI application, then your main thread is already processing a message queue (the windows message queue) and should not perform a blocking function call like run() on an IO service (which is handling another message queue). In such a case, you can decide to write code that wraps your callback in a windows event message and process that.
If you happen to be using Qt, then the answer to this question shows you how you could combine an asio io_service with your message loop (I haven't tried that one).
If your process is not a GUI application, then, since you already seem to be somewhat familiar with asio, you could still use an io_service. In that case however, all functions that the main thread performs (after initialization) should be run as events on that message queue. For example: "Continue main thread" in your question could then be implemented as another callback on the io_service.
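A minimal sketch of that idea, with the main thread using io_service::run() as its message loop and the worker thread posting its callback to it (the names and the result value are made up for illustration):

#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <iostream>

int main() {
    boost::asio::io_service main_loop;
    boost::asio::io_service::work keep_alive(main_loop);   // keep run() from returning early

    // Worker thread: does the slow job, then posts the callback back to the main loop.
    boost::thread worker([&main_loop] {
        int result = 42;                        // pretend this took a while to compute
        main_loop.post([&main_loop, result] {
            std::cout << "callback on the main thread, result = " << result << std::endl;
            main_loop.stop();                   // only so this little demo terminates
        });
    });

    // "Continue main thread": further main-thread work is also posted as handlers.
    main_loop.post([] { std::cout << "main thread keeps doing its own things" << std::endl; });

    main_loop.run();   // the main thread executes callbacks only when it is ready to
    worker.join();
    return 0;
}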

Routine when starting boost::asio::io_service

I want to create an HTTP client using boost asio. To have a structured and optimized implementation, I have looked into the boost asio examples to get some idea of what a good implementation should look like.
Mostly, I have followed the structure of the HTTP Server example, so I have a connection manager that holds a set of pointers to each individual connection. Now, the big difference here is that an asynchronous function is already called in the constructor of server.cpp, namely
acceptor_.async_accept(new_connection_->socket(),
boost::bind(&server::handle_accept, this,
boost::asio::placeholders::error));
and in winmain.cpp the io_service is started through a call to server::run():
io_service_.run();
In my implementation, since it's a client and not a server, I want to wait for the user to call a send() function before I start connecting to the server. I have therefore moved all connecting-to-server-related function calls into the connection class. When a user requests to send a msg to the server the following is called:
resolver.async_resolve(query,
boost::bind(&connection::handle_resolve, boost::ref(*this),
boost::asio::placeholders::error,
boost::asio::placeholders::iterator));
io_service_.run();
I want to start every connection object in a separate thread, and this is really the background of my question. How do I do that in order to have structured and optimized code?
I have tried, as in the HTTP Server 2 example, to set up a thread pool of io_services and assign work to them so that they will not return until stopped. This seems like a good idea, since I would then have the io_services running in the background all the time. Consequently, I start the thread pool from my equivalent of server.cpp, in a thread:
boost::thread t(boost::bind(&geocast::enabler::io_service_pool::run, &io_service_pool_));
BUT, from my own trial-and-error analysis, it seems as if you cannot start the io_service BEFORE you have issued an asynchronous function, is that true? Because my program gets stuck. In my case I want to call async_resolve only when a user means to send a POST request or a GET request. To support my theory: the chat client example starts off by calling async_connect with an async_read as the callback; this way it can safely call io_service.run() just after the client has been created. I don't want to read from the server all the time just to be able to start the io_service, because that is not how a normal client works, right? A browser does not read from every possible server on the planet without the user having navigated to a website...
If I don't use the thread pool from example 2 but instead start every connection in a separate thread, each with its own io_service, everything works fine. But a thread pool with a simple round-robin routine to select an appropriate io_service seems really attractive. What is the best approach for me to go multi-threaded? Am I just being picky and should I stick to the one-connection-one-io_service approach?
I have tried, as in the HTTP Server 2 example, to set up a thread pool of io_services and assign work to them so that they will not return until stopped.
When using asynchronous programming, I strongly suggest using the following designs in order:
single io_service with a single thread
pool of threads invoking a single io_service
io_service per thread or other exotic designs
You should only move to the next design if, after profiling, your current design proves to be a bottleneck.
BUT, from my own trial-and-error analysis, it seems as if you cannot start the io_service BEFORE you have issued an asynchronous function, is that true?
You are correct; the io_service::run() documentation spells this out very clearly:
The run() function blocks until all work has finished and there are no more handlers to be dispatched, or until the io_service has been stopped.
The correct way to prevent io_service::run() from returning immediately is to queue up some handlers, or instantiate an io_service::work object and keep it in scope for as long as you want run() to stay active.
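For example, a sketch of the io_service::work approach, so that run() can be started before the user ever calls send() (the names here are illustrative):

#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <boost/scoped_ptr.hpp>

int main() {
    boost::asio::io_service io;

    // As long as this object exists, run() will not return, even with no handlers queued.
    boost::scoped_ptr<boost::asio::io_service::work> work(
        new boost::asio::io_service::work(io));

    boost::thread t([&io] { io.run(); });   // safe to start before any async_resolve/async_connect

    // ... later, when the user calls send(), queue the real work:
    // resolver.async_resolve(query, ...);  // its handlers will run on thread t

    work.reset();   // release the work object: run() returns once the remaining handlers drain
    t.join();
    return 0;
}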
When using Asio, you are giving up control of your program's flow to Asio. You can share control if you change your code to use a thread pool and call run_one() instead of run(). run_one() dispatches only one IO job to a thread, so if you have multiple events pending in the io_service, you will have to call run_one() several times.
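A sketch of that shared-control loop with run_one() (the quit flag and the posted handler are just for illustration):

#include <boost/asio.hpp>

int main() {
    boost::asio::io_service io;
    boost::asio::io_service::work work(io);   // so run_one() waits instead of returning immediately

    bool quit = false;
    io.post([&quit] { quit = true; });        // pretend some handler eventually asks us to stop

    while (!quit) {
        io.run_one();   // block until exactly one handler is ready, dispatch it, return
        // ... interleave your own non-Asio work here between handlers ...
    }
    return 0;
}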
Have you thought about spawning a new thread as your boss thread and then having your boss thread create a bunch of worker threads? Your boss thread could call run() and then your UI's thread could call post() to dispatch a new unit of work. Along with not having to manually call and schedule tasks in the io_service, it also makes the cleanup and shutdown more straightforward, since your boss thread would block when it calls run().

How to implement POSIX select()-based behaviour within boost::asio

I've already wasted two days reading the boost::asio documentation, and I still don't know how I could implement a blocking, select()-like function for several sockets using only one thread (using the boost framework).
The asynchronous functions of boost::asio return immediately, so there would need to be some kind of wait in the main thread until one of the async_read operations finishes.
I suspect that this would be time consuming, but I'm really restricted by performance requirements.
The io_service object is an abstraction of the select function. Set up your sockets and then call the io_service::run member function from your main thread. The io_service::run function will block until all of the work associated with the io_service instance is completed. You can schedule more work in your asynchronous handlers.
You can also use io_service::run_one, io_service::poll, or io_service::poll_one in place of io_service::run.
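As a small sketch of that idea: one thread, two sockets, and a read handler that re-arms itself, which plays the role of keeping the descriptor in the select() set (the connection setup is omitted and the handler is hypothetical):

#include <boost/asio.hpp>
#include <boost/bind.hpp>

using boost::asio::ip::tcp;

// Re-arms itself after each completed read, like adding the fd back into the select() set.
void handle_read(tcp::socket& sock, char* buf, std::size_t buf_size,
                 const boost::system::error_code& ec, std::size_t bytes_read) {
    if (ec) return;   // error or socket closed: drop this socket from the "set"
    // ... consume bytes_read bytes from buf ...
    sock.async_read_some(boost::asio::buffer(buf, buf_size),
        boost::bind(&handle_read, boost::ref(sock), buf, buf_size,
                    boost::asio::placeholders::error,
                    boost::asio::placeholders::bytes_transferred));
}

int main() {
    boost::asio::io_service io;
    tcp::socket a(io), b(io);
    // ... connect a and b here ...

    char buf_a[1024], buf_b[1024];
    a.async_read_some(boost::asio::buffer(buf_a),
        boost::bind(&handle_read, boost::ref(a), buf_a, sizeof buf_a,
                    boost::asio::placeholders::error,
                    boost::asio::placeholders::bytes_transferred));
    b.async_read_some(boost::asio::buffer(buf_b),
        boost::bind(&handle_read, boost::ref(b), buf_b, sizeof buf_b,
                    boost::asio::placeholders::error,
                    boost::asio::placeholders::bytes_transferred));

    io.run();   // one thread blocks here and multiplexes both sockets, like a blocking select() loop
    return 0;
}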