I am trying to understand the difference between io_service's poll()/poll_one() and run()/run_one(). The difference as stated in the documentation is that poll() executes ready handlers, as opposed to run(), which executes any handler.
But nowhere in the boost documentation could I find the definition of a 'ready handler'.
A valid answer to this question would show, preferably with a code example, the difference between a ready and a non-ready handler, and the difference in how poll() and run() execute them.
Thanks.
A "ready handler" is a handler that is ready to be executed. If you have issued an asynchronous call, it gets executed in the background and its handler becomes ready when the async call is done. Before that, the handler is pending, but not ready.
poll_one executes one ready handler, if there is one.
poll executes all ready handlers, but not the pending ones. Both poll versions return immediately after executing those handlers.
run_one executes a ready handler if there is one; if not, it waits (blocks) until the first pending handler becomes ready.
run executes and waits until there are neither ready nor pending handlers. After it returns, the io_service is in the stopped state.
See also Boost::Asio : io_service.run() vs poll() or how do I integrate boost::asio in mainloop
#include <boost/asio.hpp>
#include <iostream>

int main()
{
    boost::asio::io_service io_service;
    boost::asio::deadline_timer timer(io_service);
    timer.expires_from_now(boost::posix_time::seconds(5));
    timer.async_wait([](const boost::system::error_code& err)
        { std::cout << (err ? "error" : "okay") << std::endl; });

    //io_service.poll_one();
    io_service.run_one();
}
If you use io_service.poll_one(); you will most likely not see any output, because the timer has not elapsed yet. A ready handler simply means a handler that is ready to run (such as when a timer elapses or an operation finishes). However, if you use io_service.run_one(); the call will block until the timer expires and then execute the handler.
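To make the ready vs. pending distinction concrete, here is a minimal sketch (not taken from the answers above): a handler queued with post() is ready immediately, while the timer's handler stays pending until the timer expires. poll() therefore runs only the first and returns, whereas run() waits for both:

#include <boost/asio.hpp>
#include <iostream>

int main()
{
    boost::asio::io_service io_service;

    // Ready as soon as it is queued: nothing has to complete first.
    io_service.post([] { std::cout << "ready handler ran\n"; });

    // Pending: becomes ready only when the timer expires in 5 seconds.
    boost::asio::deadline_timer timer(io_service, boost::posix_time::seconds(5));
    timer.async_wait([](const boost::system::error_code&)
        { std::cout << "timer handler ran\n"; });

    io_service.poll(); // runs the posted handler only, returns immediately
    io_service.run();  // blocks ~5 seconds, then runs the timer handler
}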
I am new to Boost::asio and I am currently looking at io_context.
I have a question regarding the function io_context::post
Can posting on a thread preempt whatever is currently running on that thread?
I ask because in the documentation I have seen:
(Deprecated: Use post.) Request the io_context to invoke the given handler and return immediately.
My expectation is that the posted handler is added to the event queue and will only be considered for execution when control is passed back to the event loop.
No, it cannot interrupt the running thread(s) associated with the io_context. post() enqueues the task on the io_context, which will execute it eventually. The "return immediately" refers to the post() call itself, not to the task: post() returns without blocking, while the task is scheduled for later execution.
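For illustration, a minimal sketch (not from the original answer) using the free boost::asio::post() that the deprecation note presumably points to: the calls return immediately, and the handlers only run once run() hands control to the event loop:

#include <boost/asio.hpp>
#include <iostream>

int main()
{
    boost::asio::io_context ctx;

    // post() only enqueues the handlers and returns right away.
    boost::asio::post(ctx, [] { std::cout << "task A\n"; });
    boost::asio::post(ctx, [] { std::cout << "task B\n"; });

    std::cout << "nothing has run yet\n";

    // The handlers run here, inside run(), in the order they were posted.
    ctx.run();
}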
I wrote an asynchronous SSL socket implementation using standalone asio and am struggling to get it to reconnect after a connection reset / close by the server. I am rather new to the asio library so please bear with me.
The thread that calls io_context::run remains blocked even after a disconnect because of the steady_timer. My close() logic is responsible for resetting the socket resources and is also responsible for trying to kill the timer. This is what my code looks like right now:
Creating my async job:
timer.async_wait(std::bind(&ssl_socket::heartbeat, this));
In my close() method:
timer.expires_at(std::chrono::steady_clock::now());
timer.cancel();
According to the boost docs, cancel() should:
Cancel any asynchronous operations that are waiting on the timer.
Perhaps I am misinterpreting this, but I would imagine this also cancels the asynchronous job that is bound to the io_context; however, it doesn't. io_context::run is never released, which creates a deadlock.
This is what my timer handler looks like:
void ssl_socket::heartbeat() {
spdlog::get("console")->trace("heartbeat called");
if (connected_) {
write(heartbeat_token);
spdlog::get("console")->trace("heartbeat sent");
}
timer.expires_at(std::chrono::steady_clock::now() + std::chrono::seconds(heartbeat_interval));
timer.async_wait(std::bind(&ssl_socket::heartbeat, this));
}
I would like to keep the handler from having to check whether it should renew its timer, and let close() deal with that instead (if possible).
You are ignoring the error code.
According to the boost docs, cancel() should:
Cancel any asynchronous operations that are waiting on the timer.
This is a bit misleading. When you read the full description for the cancel function you'll see:
This function forces the completion of any pending asynchronous wait
operations against the timer. The handler for each cancelled operation
will be invoked with the boost::asio::error::operation_aborted error
code.
Which means your handler will be invoked by the cancel() call, and since your handler just resets the expiry time and waits again, the cycle never ends. You need to check the error code and break out of the cycle if it is set:
if(error) return;
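For completeness, a minimal self-contained sketch of the pattern (standalone asio, hypothetical names, not the original ssl_socket code): the repeating handler takes the error code, and stops as soon as cancel() aborts the pending wait, which lets run() return:

#include <asio.hpp>
#include <chrono>
#include <functional>
#include <iostream>

int main()
{
    asio::io_context io;
    asio::steady_timer timer(io, std::chrono::seconds(2));

    // The repeating "heartbeat": note the error-code parameter.
    std::function<void(const asio::error_code&)> heartbeat =
        [&](const asio::error_code& error)
    {
        if (error)          // asio::error::operation_aborted after cancel()
            return;         // break out of the cycle
        std::cout << "heartbeat\n";
        timer.expires_at(timer.expiry() + std::chrono::seconds(2));
        timer.async_wait(heartbeat);
    };
    timer.async_wait(heartbeat);

    // Stand-in for close(): after 5 seconds, cancel the pending wait.
    // The handler is invoked with operation_aborted, the cycle ends and run() returns.
    asio::steady_timer closer(io, std::chrono::seconds(5));
    closer.async_wait([&](const asio::error_code&) { timer.cancel(); });

    io.run();
}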
While trying to hack a clean shutdown of an asio app, I find it quite irritating that I can't know whether the io_service stopped because I called .stop() or because it ran out of handlers.
Also, when I want to kill it, I can't find a way to see whether it still has handlers in its queue, or even whether some handlers are running at the moment.
So:
1) Is there any way to see what stopped the io_service: .stop() or running out of work (other than an awful manual bIsAppShuttingDown flag)?
2) Is there any way to see whether the io_service (after I called stop()) is still processing something?
so that I can write something like:
ios->stop();
while (!ios->finished())
    sleep(1); // :/
delete ios;
Typically the pattern is to dispatch on the io_service in a separate thread, for example:
_thread.reset(new std::thread([&]() { _service.run(); })); // the dispatching here happens in a separate thread
Subsequently, if you want to stop it and wait for it to finish cleanly, then the best way is:
_service.stop();
_thread->join();
This way the calling thread is blocked until the dispatch thread terminates, which happens when run() finishes executing its last handler. There is no way (AFAIK) of knowing whether the io_service ran out of work or whether stop() was called, but you can certainly prevent the former by instantiating an io_service::work on the service. See the docs.
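A minimal sketch of that pattern (hypothetical names, using the classic io_service::work object rather than the newer executor_work_guard): the work object keeps run() from returning early, and join() is how you know nothing is still being processed:

#include <boost/asio.hpp>
#include <iostream>
#include <memory>
#include <thread>

int main()
{
    boost::asio::io_service service;

    // The work object keeps run() from returning while the queue is empty.
    auto work = std::make_unique<boost::asio::io_service::work>(service);

    std::thread dispatcher([&] { service.run(); });

    service.post([] { std::cout << "a handler\n"; });

    // Shutdown: either let the queue drain, or stop immediately.
    work.reset();       // graceful: run() returns once all queued handlers ran
    // service.stop();  // abrupt: run() returns without executing queued handlers
    dispatcher.join();  // after join() returns, nothing is processing any more
}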
This is my server code:
socket_.async_read_some(boost::asio::buffer(data_read.data(), Message::header_length),
boost::bind(&TcpConnection::handle_read_header, shared_from_this(),
boost::asio::placeholders::error));
If I write the following code in a loop
boost::this_thread::sleep(boost::posix_time::seconds(2));
in the 'handle_read_header' function, which is called by the above 'async_read_some', the whole thread waits until the sleep ends. So when another request comes in, it is not handled until the sleep finishes. Isn't it supposed to handle each request asynchronously? I am new to Boost and C++. Please let me know if I have mentioned anything wrong.
A read scheduled with async_read_some completes, and its handler runs, in a thread that called io_service::run().
If you have only one such thread, it must finish one read handler before it can start another one.
You can create a thread pool by calling io_service::run() from several threads, or make the read handler's execution shorter, as sketched below.
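A minimal sketch of such a pool (hypothetical helper, using std::thread): a two-second sleep in one handler then only occupies one of the workers while the other threads keep serving requests:

#include <boost/asio.hpp>
#include <thread>
#include <vector>

// Run the same io_service from several threads so handlers can execute
// concurrently (each individual handler still runs on a single thread).
// If handlers share state, wrap them in an io_service::strand.
void run_pool(boost::asio::io_service& service, std::size_t thread_count)
{
    std::vector<std::thread> workers;
    for (std::size_t i = 0; i < thread_count; ++i)
        workers.emplace_back([&service] { service.run(); });

    for (auto& worker : workers)
        worker.join();
}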
I am using boost::asio::io_service to manage some asynchronous TCP communication. That means I create a boost::asio::ip::tcp::socket and give the io_service to it. When I start the communication it goes schematically like this:
Async Resolve -> Callback -> Async Connect -> Callback -> Async Write -> Callback -> Async Read
I omitted parts like resolve and bind. Just assume the socket has been bound to a port and the hostname has been resolved (so connect means establishing the actual connection to the endpoint).
Now the point is that I may start several async connections with the same io_service object. For example, while the io_service thread is about to Async Write some data, the main thread may call Async Resolve on another socket (but with the same io_service).
This means that my io_service now has some parallel work to do; what I'd like to know is how it will prioritize that work.
For example, it could go like this:
Main Thread | io_service Thread
-------------------------+-----------------------------------------------
SocketA->Async Connect |
//Some other Stuff | SocketA->Callback from Async Connect
| SocketA->Async Write
SocketB->Async Connect |
| --> ?
Now at this point I have to admit I am not quite sure how the io_service works. In the fourth line there are now two different asynchronous functions which need to be executed.
Is io_service capable of doing the Async Connect and the Async Write simultaneously? If so, it is clear that whichever operation finishes first will have its callback called first.
If the io_service is not capable of doing so, in which order will it do the work? If SocketA's Async Write is called first, will its callback also be called first? Actually, there will always be work until the whole operation on SocketA is finished.
EDIT:
Following ereOn's comment, I'll try to make my question a bit more precise:
From the point of view of the io_service thread, is the SocketA Async Connect call asynchronous or synchronous? From the point of view of my main thread it is of course asynchronous (it just dispatches the command and then moves on). But in the io_service thread, will this specific connect call block other operations?
In other words: is a single io_service capable of connecting to one socket while it is reading from another?
Another example would be if I just call 2 Async Connect in my main function right after each other:
SocketA->AsyncConnect();
SocketB->AsyncConnect();
Let's say the host for SocketA is a bit slow and takes two seconds to answer. While SocketA is trying to connect, would SocketB connect in the meantime as well, or would it have to wait until SocketA is done or has timed out?
All the work is done in the thread where io_service.run() runs.
However, the call to any async_ method won't block this specific thread: it behaves much as if io_service.run() called select() on several events and "returned" (called a callback) whenever such an event is raised. That is, if you call:
socketA->async_connect();
socketB->async_connect();
socketB may well connect before socketA, and its callback would then be called first, still in the thread where io_service.run() runs.
That's the beauty of Boost.Asio: it takes very good care of polling, waiting and raising events at the appropriate time, leaving you with the "easy" part.
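To illustrate (hypothetical endpoints, not from the original answer), a small sketch where two sockets connect through the same io_service; whichever connection completes first has its callback invoked first, all inside the run() thread:

#include <boost/asio.hpp>
#include <iostream>

int main()
{
    boost::asio::io_service io_service;
    boost::asio::ip::tcp::socket socketA(io_service), socketB(io_service);

    // Hypothetical endpoints; in real code these would come from async_resolve.
    boost::asio::ip::tcp::endpoint endpointA(
        boost::asio::ip::address::from_string("192.0.2.1"), 80);
    boost::asio::ip::tcp::endpoint endpointB(
        boost::asio::ip::address::from_string("192.0.2.2"), 80);

    socketA.async_connect(endpointA, [](const boost::system::error_code& ec)
        { std::cout << "A finished: " << ec.message() << '\n'; });
    socketB.async_connect(endpointB, [](const boost::system::error_code& ec)
        { std::cout << "B finished: " << ec.message() << '\n'; });

    // Both connects are in flight at the same time; neither blocks the other.
    // The callbacks run here, in completion order, not in the order they were started.
    io_service.run();
}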
You shouldn't try to predict the order of execution of asynchronous operations here. async_connect just signals the io_service and returns immediately. The real work gets done in the io_service object's event processing loop (io_service::run), but you don't know the exact specifics. It most likely uses OS-specific asynchronous I/O functions.
It's not clear what you're trying to achieve. Maybe you should use synchronous operations. Maybe you should use thread synchronization functionality.
Maybe io_service::run_one will help you (it executes at most one handler).
Maybe you'll want to call io_service::run multiple times in separate threads, creating a thread pool. That way one long completion handler won't block all the others.
boost::asio::io_service service;
const size_t ASIO_THREAD_COUNT = 3;

boost::thread_group threadGroup;
// The error_code argument selects the run(error_code&) overload, so boost::bind can resolve it.
for (size_t i = 0; i < ASIO_THREAD_COUNT; ++i)
    threadGroup.create_thread(boost::bind(&boost::asio::io_service::run,
                                          &service, boost::system::error_code()));
// ... start async operations / post handlers ...
threadGroup.join_all(); // wait for all worker threads to finish