Clear boost::asio::io_service after stop() - c++

I am using a (single threaded) boost::asio::io_service to handle a lot of TCP connections. For each connection I use a deadline_timer to catch timeouts. If any of the connections times out, I can use none of the results of the other connections. Therefore I want to completely restart my io_service. I thought that calling io_service.stop() would allow "finished" handlers in the queue to be called and would call the remaining handlers in the queue with an error.
However, it looks like the handlers remain in the queue, and therefore calling io_service.reset() and later io_service.run() brings the old handlers back up.
Can anyone confirm that the handlers indeed remain in the queue even after io_service.stop() is called? And if so, what are the possibilities to completely reset the io_service, e.g. remove all queued handlers?

io_service::stop() and io_service::reset() only control the state of the io_service's event loop; neither affects the lifespan of handlers scheduled for deferred invocation (ready-to-run) nor that of user-defined handler objects.
The destructor for io_service will cause all outstanding handlers to be destroyed:
Each service object associated with the io_service will have its shutdown_service() member function invoked. Per the Service type requirement, the shutdown_service() member function will destroy all copies of user-defined handler objects that are held by the service.
Uninvoked handler objects scheduled for deferred invocation are destroyed for the io_service and any of its strands.
Consider either:
Controlling the lifespan of the io_service object. One approach can be found in this answer; a minimal sketch is also shown after this list.
Running the io_service to completion. This often requires setting state, cancelling outstanding operations, and preventing completion handlers from posting additional work into the io_service. Boost.Asio provides an official timeout example, and a timeout approach that runs the io_service to completion is also shown here.
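For illustration, here is a minimal sketch of the first option (assuming C++11 and that the io_service is held behind a std::unique_ptr; the names are illustrative), so that destroying it discards any uninvoked handlers and a fresh instance can be constructed afterwards:
#include <boost/asio.hpp>
#include <memory>

// Hold the io_service indirectly so it can be destroyed and recreated.
std::unique_ptr<boost::asio::io_service> io_service(new boost::asio::io_service);

void reset_io_service()
{
    // ~io_service() destroys all uninvoked handlers still in the queue.
    io_service.reset();
    // Construct a fresh io_service with an empty handler queue.
    io_service.reset(new boost::asio::io_service);
}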

Related

Are ASIO completion handlers invoked through the strand for cancelled operations?

Say there's a pending asynchronous operation with its completion handler wrapped by a strand when it is cancelled - for instance by closing a socket, cancelling a timer etc.
So, as I see it, the completion handlers will be enqueued with the error code operation_aborted. Now they can be dequeued by the io_service to be dispatched.
Is the way I'm telling this story right? If so, when the io_service invokes the completion handler, does it do so through the strand even if the handler results from a cancelled operation?
Yes, absolutely. It is an invariant that every asynchronous operation that is started completes. Regardless of the error code or success, the completion handler is executed the same way -- if it's strand wrapped, the handler will execute on the strand.
Typically you don't need to do anything in this case and the handler just checks for operation_aborted and returns. But if you want to do anything, you can. Also, the destruction of the callback object may cause things to happen. For example, if the invocation of the completion handler was through a shared_ptr, the destruction of that shared_ptr may trigger other destructors to run.
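As a rough illustration (assuming C++11 lambdas; the setup here is made up), a strand-wrapped handler for a cancelled deadline_timer is still invoked through the strand, just with operation_aborted:
#include <boost/asio.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

int main()
{
    boost::asio::io_service io_service;
    boost::asio::io_service::strand strand(io_service);
    boost::asio::deadline_timer timer(io_service, boost::posix_time::seconds(5));

    timer.async_wait(strand.wrap(
        [&strand](const boost::system::error_code& error)
        {
            // Runs on the strand even though the operation was cancelled.
            if (error == boost::asio::error::operation_aborted)
                std::cout << "timer aborted, inside strand: "
                          << strand.running_in_this_thread() << std::endl;
        }));

    timer.cancel();   // the pending wait completes with operation_aborted
    io_service.run();
}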

What does inside a strand mean?

I'm currently trying to get my hands on boost::asio strands. Doing so, I keep reading about "invoking strand post/dispatch inside or outside a strand". Somehow I can't figure out how inside a strand differs from through a strand, and therefore can't grasp the concept of invoking a strand function outside the strand at all.
Probably there is just a small piece missing in my puzzle. Can somebody please give an example how calls to a strand can be inside or outside it?
What I think I've understood so far is that posting something through a strand would be
m_strand.post(myfunctor);
or
m_strand.wrap(myfunctor);
io_svc.post(myfunctor);
Is the latter considered a call to dispatch outside the strand (as opposed to the other being a call to post inside it)? Is there some relation between the strand's "inside realm" and the threads the strand operates on?
If being inside a strand simply meant to invoke a strand's function, then the strand class's documentation would be pointless. It states that strand::post can be invoked outside the strand... That's precisely the part I don't understand.
I also had some trouble understanding this concept, but it became clear once I started working with libdispatch. That helped me map the concepts onto asio better.
Now let's see how to make sense of a strand. Consider a strand as a serial queue of handlers that need to be executed.
Now, where do these handlers get executed? Within the worker threads.
Where do these worker threads come from? From the io_service object you passed while creating the strand.
Something like:
boost::asio::io_service::strand s(io_serv_obj);
Now, as you may know, io_service::run can be called from a single thread or from multiple threads. The threads calling the run method of io_serv_obj are the worker threads for that strand in our case. So it could be either single threaded or multithreaded.
Coming back to strands: when you post a handler, that handler is always enqueued in the serial queue we talked about. The worker threads pick up handlers from the queue one after the other.
Now, when you do a dispatch, asio does some optimization for you:
It checks whether you are calling it from inside one of the worker threads or from some other thread (maybe a thread of some other io_service instance). When it is called outside the current execution context of the strand, that is when it is called outside the strand. In the outside case, dispatch will either enqueue the handler like post (when there are other handlers waiting in the queue) or call it directly (when it can guarantee that it will not be called concurrently with any other handler from that queue that may be running in one of the worker threads at that moment).
UPDATE:
As noted in the comments section, inside means called within another handler, e.g. I posted a handler A, and inside that handler I am doing a dispatch of another handler. Now, as explained below, if there are no other handlers waiting in the strand's serial queue, the dispatched handler will be called synchronously. If this condition is not met, the dispatch is treated as coming from outside.
Now, if you call dispatch from outside the strand, i.e. not within its current execution context, asio checks its call stack to see whether any other handler from the strand's serial queue is currently running. If not, it will call that handler directly, synchronously, so there is no cost of enqueueing the handler (and presumably no extra allocation either, though I am not sure).
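A small sketch of the inside case (illustrative, assuming C++11 lambdas): dispatching from within a handler that is already running on the strand executes the new handler synchronously, before the outer handler returns:
#include <boost/asio.hpp>
#include <iostream>

int main()
{
    boost::asio::io_service io_service;
    boost::asio::io_service::strand strand(io_service);

    strand.post([&strand]()
    {
        std::cout << "A begins" << std::endl;
        // We are inside the strand and no other queued handler is running,
        // so B is executed right here, before A returns.
        strand.dispatch([]() { std::cout << "B (dispatched inside A)" << std::endl; });
        std::cout << "A ends" << std::endl;
    });

    io_service.run();  // prints: A begins, B (dispatched inside A), A ends
}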
Let's look at the documentation now:
s.dispatch(a) happens-before s.post(b), where the former is performed outside the strand
This means that if dispatch was called from outside the current execution context, or there are other handlers already enqueued, then it needs to enqueue the handler; it cannot call it synchronously. Since it is a serial queue, a will get executed before b.
Had there been another call s.dispatch(c) along with a and b, but made before a and b were enqueued (in that order), then c would get executed before a and b, but in no case can b get executed before a.
Hope this clears your doubt.
For a given strand object s, running outside s implies that s.running_in_this_thread() returns false. This returns true if the calling thread is executing a handler that was submitted to the strand via post(), dispatch(), or wrap(). Otherwise, it returns false:
io_service.post(handler); // handler will run outside of strand
strand.post(handler); // handler will run inside of strand
strand.dispatch(handler); // handler will run inside of strand
io_service.post(strand.wrap(handler)); // handler will run inside of strand
Given:
a strand object s
a function object f1 that is added to strand s via s.post(), or s.dispatch() when s.running_in_this_thread() == false
a function object f2 that is added to strand s via s.post(), or s.dispatch() when s.running_in_this_thread() == false
then the strand provides a guarantee of ordering and non-concurrency, such that f1 and f2 will not be invoked concurrently. Furthermore, if the addition of f1 happens before the addition of f2, then f1 will be invoked before f2.
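To make the guarantee concrete, here is an illustrative sketch (assuming C++11 and Boost.Thread): even with two threads running the io_service, handlers posted through one strand never run concurrently and are invoked in the order they were added:
#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <iostream>

int main()
{
    boost::asio::io_service io_service;
    boost::asio::io_service::strand strand(io_service);

    // Each handler is added before the next, so they run in this order
    // and never overlap, regardless of which thread picks them up.
    for (int i = 0; i < 5; ++i)
        strand.post([i]() { std::cout << "handler " << i << std::endl; });

    boost::thread_group threads;
    for (int i = 0; i < 2; ++i)
        threads.create_thread([&io_service]() { io_service.run(); });
    threads.join_all();
}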

What is the impact of calling io_service::run method twice

The following diagram comes from the Boost.Asio documentation:
I understand that if I call the io_service::run method twice (in two separate threads), I will have two threads to dequeue events from the completion Event Queue via the Asynchronous Event Demultiplexer. Am I right?
More precisely, my doubt is about the parallelization achieved by multiple calls to the io_service::run method. For instance, when dealing with sockets: if I have two sockets bound to the same io_service object, each calling the socket.async_read_some method, does that mean the two registered callbacks (registered via async_read_some) can be called concurrently when io_service::run is called twice?
Your assumptions are correct. Each thread which calls io_service::run() will dequeue and execute handlers (simple function objects) in parallel. This of course only makes sense if you have more than one source of events feeding the io_service (such as two sockets, a socket and a timer, several simultaneous post() calls and so on).
Each call to a socket's async_read() will result in exactly one handler being queued in the io_service. Only one of your threads will dequeue it and execute it.
Be careful not to call async_read() more than once at a time per socket.
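For example, here is a rough sketch (connection setup elided, buffers and names made up): with two sockets on the same io_service and run() called from two threads, the two read handlers may execute concurrently, so any shared state they touch needs a strand or a mutex:
#include <boost/asio.hpp>
#include <boost/thread.hpp>

void start_read(boost::asio::ip::tcp::socket& socket, char* data, std::size_t size)
{
    socket.async_read_some(boost::asio::buffer(data, size),
        [](const boost::system::error_code& error, std::size_t bytes_transferred)
        {
            // May run on either thread calling io_service::run().
        });
}

int main()
{
    boost::asio::io_service io_service;
    boost::asio::ip::tcp::socket socket1(io_service), socket2(io_service);
    char buffer1[512], buffer2[512];

    // ... connect socket1 and socket2 to their peers ...
    start_read(socket1, buffer1, sizeof(buffer1));
    start_read(socket2, buffer2, sizeof(buffer2));

    boost::thread thread1([&io_service]() { io_service.run(); });
    boost::thread thread2([&io_service]() { io_service.run(); });
    thread1.join();
    thread2.join();
}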

Do boost asio sockets have proper RAII cleanup

I tried looking through the source but I can't navigate that much template code.
Basically, this is what the documentation says (for close()):
Remarks: For portable behaviour with respect to graceful closure of a connected socket, call shutdown() before closing the socket.
I can do that manually, but if possible it would be nice to rely on RAII.
So if I have a socket going out of scope, do I need to call shutdown() and close() on it, or will that be done automatically?
One can rely on the socket performing proper cleanup with RAII.
When an IO object, such as a socket, is destroyed, its destructor will invoke destroy() on the IO object's service, passing in an instance of the implementation_type on which the IO object's service operates. The SocketService requirements state that destroy() will implicitly cancel asynchronous operations, as if by calling close() on the service, which has a post-condition that is_open() returns false. Furthermore, the service's close() will cause outstanding asynchronous operations to complete as soon as possible. Handlers for cancelled operations will be passed the error code boost::asio::error::operation_aborted and scheduled for deferred invocation within the io_service. These handlers are removed from the io_service when they are either invoked from a thread processing the event loop or when the io_service is destroyed.
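As an illustrative sketch, you can simply let the destructor do the closing, optionally calling shutdown() first (using the error_code overload so it never throws) for portable graceful closure:
#include <boost/asio.hpp>

void handle_connection(boost::asio::io_service& io_service)
{
    boost::asio::ip::tcp::socket socket(io_service);
    // ... connect, read, write ...

    // Optional: portable graceful closure before destruction.
    boost::system::error_code ignored_error;
    socket.shutdown(boost::asio::ip::tcp::socket::shutdown_both, ignored_error);
}   // ~socket() cancels any outstanding operations and closes the descriptor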

What happens to handlers when socket is deleted

What happens to completion handlers when a socket is shut down/closed and then deleted (that is, its destructor is run and the memory released)? AFAIK, after the socket is closed, all completion handlers will receive an error code the next time the event loop is polled. But what happens if the socket is deleted before the handlers even had a chance to run? Is it OK to delete a socket before dispatching its event handlers?
It is safe to delete a socket before its outstanding handlers have been executed. Outstanding operations will have their handlers set to be invoked with boost::asio::error::operation_aborted. It is the responsibility of the application code to make sure that the handlers do not invoke operations on the deleted socket.
For details: destroying an IO object, such as a socket, will cause destroy() to be invoked on the IO object's service. The SocketService requirements state that destroy() will implicitly cancel asynchronous operations. Outstanding asynchronous operations will try to complete as soon as possible. This causes the handlers for cancelled operations to be passed the error code boost::asio::error::operation_aborted and scheduled for deferred invocation within the io_service. These handlers are removed from the io_service when they are either invoked from a thread processing the event loop or when the io_service is destroyed.
All handlers are guaranteed to be called. If the socket was closed, the handlers will be called with an error code.
Normally, you use this guarantee to control the lifetime of your objects with boost::enable_shared_from_this.
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/enable_shared_from_this.hpp>

class Writer : public boost::enable_shared_from_this<Writer>
{
public:
    void StartWrite()
    {
        // shared_from_this() keeps this Writer (and its socket_) alive
        // until Handler has been invoked.
        boost::asio::async_write(socket_,
            boost::asio::buffer(buffer_, bytes_sent_),
            boost::bind(&Writer::Handler, shared_from_this(),
                boost::asio::placeholders::error));
    }

private:
    void Handler(boost::system::error_code const& err) { /* ... */ }

    boost::asio::ip::tcp::socket socket_;
    // buffer_, bytes_sent_, constructor, etc. omitted
};
With this approach, your socket object will outlive all pending handlers.
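For completeness, a hypothetical usage of such a class might look like this (assuming Writer gains a public constructor that takes the io_service and initializes its socket):
boost::shared_ptr<Writer> writer(new Writer(io_service));
writer->StartWrite();
// Even if 'writer' goes out of scope now, the Writer and its socket stay
// alive until Handler has been invoked, because the pending handler holds
// a shared_ptr obtained from shared_from_this().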