Are ASIO completion handlers invoked through the strand for cancelled operations? - c++

Say there's a pending asynchronous operation with its completion handler wrapped by a strand when it is cancelled - for instance by closing a socket, cancelling a timer etc.
So, as I see it, the completion handlers will be enqueued with the error code operation_aborted. Now they can be dequeued by the io_service to be dispatched.
Is the way I'm telling this story right? If so, when the io_service invokes the completion handler, does it do so through the strand even when the handlers result from cancelled operations?

Yes, absolutely. It is an invariant that every asynchronous operation that is started completes. Regardless of the error code or success, the completion handler is executed the same way -- if it's strand wrapped, the handler will execute on the strand.
Typically you don't need to do anything in this case and the handler just checks for operation_aborted and returns. But if you want to do anything, you can. Also, the destruction of the callback object may cause things to happen. For example, if the invocation of the completion handler was through a shared_ptr, the destruction of that shared_ptr may trigger other destructors to run.
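For illustration, here is a minimal sketch (assuming Boost.Asio's io_service, io_service::strand and deadline_timer; the variable names are made up) of a strand-wrapped wait handler that simply checks for operation_aborted and returns. Cancelling the timer still delivers the handler through the strand:

#include <boost/asio.hpp>
#include <iostream>

int main()
{
    boost::asio::io_service io;
    boost::asio::io_service::strand strand(io);
    boost::asio::deadline_timer timer(io, boost::posix_time::seconds(5));

    // The handler is wrapped by the strand; it will also run through the
    // strand when the wait is cancelled.
    timer.async_wait(strand.wrap([](const boost::system::error_code& ec) {
        if (ec == boost::asio::error::operation_aborted)
        {
            std::cout << "wait cancelled\n";
            return; // typically nothing else to do
        }
        std::cout << "timer expired\n";
    }));

    timer.cancel(); // the completion handler will see operation_aborted
    io.run();
}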

Related

Boost Asio Async Connection Race Condition?

I am looking at the Boost Asio Blocking TCP Client timeout example with a special interest in how connection timeouts are implemented. How do we know from the documentation that the callback handler and subsequent checks don't introduce a race condition?
The Asynchronous connection command
boost::asio::async_connect(socket_, iter, var(ec) = _1);
registers var(ec) = _1 as the completion handler, which assigns the error code once it executes. Alternatively, a full and explicit lambda could be used here.
At the same time, the check_deadline function appears to be
called by the deadline_ member. The timeout appears to be enforced by having the deadline forcibly close the socket, whereupon we assume that the blocking statement
do io_service_.run_one(); while (ec == boost::asio::error::would_block);
would return. At first I thought that the error code must be atomic, but that doesn't appear to be the case. Instead, this page appears to indicate that the strand model will work whenever the calls to the socket/context come from the same thread.
So we assume that the deadline callback (which is in Asio) and the handler for the async_connect routine will not run concurrently. Pages such as this in the documentation hint that handlers only execute during run() calls, which prevents the while(ec == whatever) check from being executed while a handler is changing its value.
How do I know this explicitly? What in the documentation tells me explicitly that no handlers will ever execute outside these routines? If true, the page on the proactor design pattern must imply this, but it is never stated explicitly where the "Initiator" leads to the "Completion Handler".
The closest I've found is the documentation for io_context saying
Synchronous operations on I/O objects implicitly run the io_context
object for an individual operation. The io_context functions run(),
run_one(), run_for(), run_until(), poll() or poll_one() must be called
for the io_context to perform asynchronous operations on behalf of a
C++ program. Notification that an asynchronous operation has completed
is delivered by invocation of the associated handler. Handlers are
invoked only by a thread that is currently calling any overload of
run(), run_one(), run_for(), run_until(), poll() or poll_one() for the
io_context.
This implies that if I have one thread running the run_one() command, then its control path will wait until a handler is available and eventually wind its way through a handler, whereupon it will return and check the ec value.
Is this correct and is "Handlers are invoked only by a thread that is currently calling any overload of run(), run_one(), run_for(), run_until(), poll() or poll_one() for the io_context." the best statement to find for understanding how the code will always function? Is there any other exposition?
The Asio library is gearing up to be standardized as the Networking TS. This part is indeed the key:
Handlers are invoked only by a thread that is currently calling any overload of run(), run_one(), run_for(), run_until(), poll() or poll_one() for the io_context
You are correct in concluding that the whole example is 100% single-threaded¹. There cannot be a race.
I personally feel the best resource is the Threads and Boost.Asio page:
By only calling io_context::run() from a single thread, the user's code can avoid the development complexity associated with synchronisation. For example, a library user can implement scalable servers that are single-threaded (from the user's point of view).
It also reiterates the truth from earlier:
[...] the following guarantee:
Asynchronous completion handlers will only be called from threads that are currently calling io_context::run().
¹ except potential internal service threads depending on platform/extensions, as the threads page details
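To make that reasoning concrete, here is a condensed sketch of the single-threaded pattern under discussion. It mirrors the shape of the official blocking TCP client timeout example but is not that code verbatim; the member names (io_service_, socket_, deadline_) and the lambda used in place of var(ec) = _1 are assumptions for illustration:

#include <boost/asio.hpp>
#include <boost/asio/deadline_timer.hpp>

class client
{
public:
    client() : socket_(io_service_), deadline_(io_service_)
    {
        deadline_.expires_at(boost::posix_time::pos_infin);
        check_deadline();
    }

    void connect(boost::asio::ip::tcp::resolver::iterator iter,
                 boost::posix_time::time_duration timeout)
    {
        deadline_.expires_from_now(timeout);

        boost::system::error_code ec = boost::asio::error::would_block;

        // The completion handler only assigns the error code; it can run
        // only inside the run_one() calls below.
        boost::asio::async_connect(socket_, iter,
            [&ec](const boost::system::error_code& e,
                  boost::asio::ip::tcp::resolver::iterator) { ec = e; });

        // Single thread: reading ec here cannot race with the handler
        // writing it, because handlers execute only within run_one().
        do io_service_.run_one(); while (ec == boost::asio::error::would_block);

        if (ec || !socket_.is_open())
            throw boost::system::system_error(
                ec ? ec : boost::asio::error::operation_aborted);
    }

private:
    void check_deadline()
    {
        if (deadline_.expires_at() <= boost::asio::deadline_timer::traits_type::now())
        {
            // Deadline reached: force the pending connect to complete with an error.
            boost::system::error_code ignored;
            socket_.close(ignored);
            deadline_.expires_at(boost::posix_time::pos_infin);
        }
        deadline_.async_wait([this](const boost::system::error_code&) { check_deadline(); });
    }

    boost::asio::io_service io_service_;
    boost::asio::ip::tcp::socket socket_;
    boost::asio::deadline_timer deadline_;
};

The only thread that can run the connect handler or check_deadline is the one spinning in run_one(), which is exactly the guarantee quoted above.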

ASIO IO completion callbacks order vs the order of actual IO operations

It is obvious from the implementation that IO completion callbacks are invoked in the same order as the actual IO operations when running in single-threaded mode, but I cannot find the part of the documentation confirming that. Is it written explicitly anywhere?
The documentation of all of the async_xxx methods on io-object classes has a passage like this:
Regardless of whether the asynchronous operation completes immediately or not, the handler will not be invoked from within this function. Invocation of the handler will be performed in a manner equivalent to using boost::asio::io_service::post().
Looking at the documentation of boost::asio::io_service::post()...
This function is used to ask the io_service to execute the given handler, but without allowing the io_service to call the handler from inside this function.
The io_service guarantees that the handler will only be called in a thread in which the run(), run_one(), poll() or poll_one() member functions is currently being invoked.
And that is the full extent of your guarantee.
If your code relies on the temporal ordering of asynchronous events, then it is not asynchronous code.
Even the documentation of run_one() makes no guarantees about which handler it will dispatch:
The run_one() function blocks until one handler has been dispatched, or until the io_service has been stopped.
If you must sequence individual async operations (such as reads), then you are obliged to either:
initiate the second operation from the handler of the first (a minimal sketch of this approach follows below), or
keep a flag set while an operation's handler is outstanding, and only initiate another operation when the flag is false.
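A minimal sketch of the first approach (hypothetical class and member names), which guarantees ordering by chaining each read from the previous read's completion handler:

#include <boost/asio.hpp>
#include <memory>
#include <vector>

class reader : public std::enable_shared_from_this<reader>
{
public:
    explicit reader(boost::asio::ip::tcp::socket socket)
        : socket_(std::move(socket)), buffer_(1024) {}

    void start() { read_next(); }

private:
    void read_next()
    {
        auto self = shared_from_this();
        socket_.async_read_some(boost::asio::buffer(buffer_),
            [this, self](const boost::system::error_code& ec, std::size_t n)
            {
                if (ec) return;      // includes operation_aborted
                handle_data(n);
                read_next();         // only now is the next read initiated
            });
    }

    void handle_data(std::size_t /*n*/) { /* process buffer_[0..n) */ }

    boost::asio::ip::tcp::socket socket_;
    std::vector<char> buffer_;
};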

Do boost asio sockets have proper RAII cleanup

I tried looking through the source but I can't navigate that much template code.
Basically, this is what the documentation says (for close()):
Remarks
For portable behaviour with respect to graceful
closure of a connected socket, call shutdown() before closing the socket.
I can do that manually, but if possible it would be nice to rely on RAII.
So if I have a socket going out of scope, do I need to call shutdown() and close() on it, or will that be done automatically?
One can rely on the socket performing proper cleanup with RAII.
When an IO object, such as a socket, is destroyed, its destructor will invoke destroy() on the IO object's service, passing in an instance of the implementation_type on which the IO object's service will operate. The SocketService requirements state that destroy() will implicitly cancel asynchronous operations as if by calling close() on the service, which has a post condition that is_open() returns false. Furthermore, the service's close() will cause outstanding asynchronous operations to complete as soon as possible. Handlers for cancelled operations will be passed the error code boost::asio::error::operation_aborted, and scheduled for deferred invocation within the io_service. These handlers are removed from the io_service if they are either invoked from a thread processing the event loop or the io_service is destroyed.
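If you still want the portable graceful closure the Remarks mention, you can perform it explicitly before the destructor runs; a minimal sketch (the helper name is made up):

// Explicit graceful shutdown before the destructor does the rest.
// The error codes are deliberately ignored; the socket may already be closed.
void graceful_close(boost::asio::ip::tcp::socket& socket)
{
    boost::system::error_code ignored;
    socket.shutdown(boost::asio::ip::tcp::socket::shutdown_both, ignored);
    socket.close(ignored);
    // If you skip this entirely, the destructor still closes the socket and
    // cancels outstanding operations, as described above.
}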

Clear boost::asio::io_service after stop()

I am using a (single-threaded) boost::asio::io_service to handle a lot of TCP connections. For each connection I use a deadline_timer to catch timeouts. If any of the connections times out, I can use none of the results of the other connections. Therefore I want to completely restart my io_service. I thought that calling io_service.stop() would allow "finished" handlers in the queue to be called and would call the remaining handlers in the queue with an error.
However, it looks like the handlers remain in the queue, and therefore calling io_service.reset() and later io_service.run() brings the old handlers back up.
Can anyone confirm that the handlers indeed remain in the queue even after io_service.stop() is called? And if so, what are the possibilities to completely reset the io_service, e.g. remove all queued handlers?
io_service::stop() and io_service::reset() only control the state of the io_service's event loop; neither affect the lifespan of handlers scheduled for deferred invocation (ready-to-run) or user-defined handler objects.
The destructor for io_service will cause all outstanding handlers to be destroyed:
Each service object associated with the io_service will have its shutdown_service() member function invoked. Per the Service type requirement, the shutdown_service() member function will destroy all copies of user-defined handler objects that are held by the service.
Uninvoked handler objects scheduled for deferred invocation are destroyed for the io_service and any of its strands.
Consider either:
Controlling the lifespan of the io_service object. One approach can be found in this answer; a minimal sketch follows after this list.
Running the io_service to completion. This often requires setting state, cancelling outstanding operations, and preventing completion handlers from posting additional work into the io_service. Boost.Asio provides an official timeout example, and a timeout approach that runs the io_service to completion is also shown here.
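As a rough sketch of the first option (the names here are assumptions, not taken from the linked answer), holding the io_service behind a unique_ptr lets you destroy it, and with it all queued handlers, and then start over:

#include <boost/asio.hpp>
#include <memory>

std::unique_ptr<boost::asio::io_service> io_service =
    std::make_unique<boost::asio::io_service>();

void restart_io_service()
{
    // Precondition (assumption): all sockets/timers bound to the old
    // io_service have already been destroyed.
    io_service->stop();                                        // stop the event loop
    io_service = std::make_unique<boost::asio::io_service>();  // old instance is destroyed,
                                                               // dropping its queued handlers
}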

What happens to handlers when socket is deleted

What happens to completion handlers when a socket is shutdown/closed and then deleted (that is, its destructor is run and the memory released)? AFAIK after the socket is closed, all completion handlers will receive an error code the next time the event loop is polled. But what happens if the socket is deleted before the handlers have had a chance to run? Is it OK to delete a socket before its event handlers have been dispatched?
It is safe to delete a socket before its outstanding handlers have been executed. Outstanding operations will have their handlers set to be invoked with boost::asio::error::operation_aborted. It is the responsibility of the application code to make sure that the handlers do not invoke operations on the deleted socket.
For details, destroying an IO object, such as a socket, will cause destroy() to be invoked on the IO object's service. The SocketService requirements state that destroy() will implicitly cancel asynchronous operations. Outstanding asynchronous operations will try to complete as soon as possible. This causes the handlers for cancelled operations to be passed the error code boost::asio::error::operation_aborted, and scheduled for deferred invocation within the io_service. These handlers are removed from the io_service if they are either invoked from a thread processing the event loop or the io_service is destroyed.
All handlers are guaranteed to be called. If the socket was closed, the handlers will be called with an error code.
Normally, you use this guarantee to control the lifetime of your objects via boost::enable_shared_from_this.
class Writer : public boost::enable_shared_from_this<Writer>
{
    boost::asio::ip::tcp::socket socket_;
    std::vector<char> buffer_;      // data to send
    std::size_t bytes_sent_;
    ...
    void StartWrite()
    {
        boost::asio::async_write(socket_,
            boost::asio::buffer(buffer_, bytes_sent_),
            boost::bind(&Writer::Handler, shared_from_this(),
                boost::asio::placeholders::error));
    }
    ...
    void Handler(boost::system::error_code const& err) { ... }
};
With this approach, your socket object will outlive all pending handlers.