Are there any problems if I use close rather than cancel? The close function closes the socket, and any outstanding asynchronous operations are stopped, their handlers being invoked with the boost::asio::error::operation_aborted error. So why should I use cancel instead of close?
I also worry that if an asynchronous operation's handler is already executing, cancel cannot cancel it, right?
For example, with asio::ip::tcp::resolver::cancel I have tried many times to cancel the resolve_handler after calling async_resolve, but resolve_handler is always invoked without the boost::asio::error::operation_aborted error.
I assume that means resolve_handler was already executing when I called cancel?
Is that right?
Cancel is useful if you want to stop pending operations without closing down the socket.
Note that the Boost documentation recommends using close for greater portability (from doc page):
...
For portable cancellation, consider using one of the following alternatives:
Disable asio's I/O completion port backend by defining BOOST_ASIO_DISABLE_IOCP.
Use the close() function to simultaneously cancel the outstanding operations and close the socket.
cancel won't close the socket, so use cancel if you intend to continue using the socket object. In particular, if you have code in asynchronous handler methods that references the socket's member functions, you may not want to close the socket until you are guaranteed that your currently executing asynchronous handlers have completed.
cancel doesn't guarantee anything about currently executing asynchronous handlers; it only guarantees (per the Boost documentation) that "This function causes all outstanding asynchronous connect, send and receive operations to finish immediately" in the case of the socket::cancel() call, or "This function forces the completion of any pending asynchronous operations on the host resolver" in the case of the resolver::cancel() call. This "completion" means that Boost will call your asynchronous handler method; it has no way to inject any cancellation logic into your handler (not to mention it doesn't know anything about the handler's implementation to begin with).
I would suggest adding your own logic into your asynchronous handler method to handle the case where the socket/resolver/etc. is canceled. If you are calling the cancel method, then you likely have the ability to communicate this cancellation to the asynchronous handler method.
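For example, a minimal sketch of that kind of handler-side logic (the session class, the stopped_ flag and all names here are mine, not from the question):

#include <boost/asio.hpp>
#include <cstddef>
#include <iostream>
#include <memory>

// Hypothetical session: the read chain re-arms itself until stop() is requested.
class session : public std::enable_shared_from_this<session>
{
public:
    explicit session(boost::asio::ip::tcp::socket socket)
        : socket_(std::move(socket)) {}

    void start() { do_read(); }

    // Must be called from the io_context thread (or posted to it).
    void stop()
    {
        stopped_ = true;
        boost::system::error_code ignored;
        socket_.cancel(ignored);   // pending ops complete with operation_aborted
    }

private:
    void do_read()
    {
        auto self = shared_from_this();
        socket_.async_read_some(boost::asio::buffer(data_),
            [this, self](const boost::system::error_code& ec, std::size_t n)
            {
                // Our own cancellation logic: operation_aborted (or the flag)
                // means "shutting down", so stop the chain here.
                if (ec == boost::asio::error::operation_aborted || stopped_)
                    return;
                if (ec)
                {
                    std::cerr << "read error: " << ec.message() << "\n";
                    return;
                }
                std::cout.write(data_, static_cast<std::streamsize>(n));
                do_read();         // continue the chain only on success
            });
    }

    boost::asio::ip::tcp::socket socket_;
    bool stopped_ = false;
    char data_[1024];
};

The handler treats operation_aborted (or the flag set by whoever called cancel) as "shutting down" and simply declines to start another operation, which is exactly the extra logic that Asio itself cannot add for you.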
Related
I have a WebSocket server built with Boost.ASIO and Boost.Beast. It follows the idiomatic ASIO design: session (connection) objects own the communication socket and derive from std::enable_shared_from_this. Async completion handlers capture a std::shared_ptr to self, keeping the object alive while there are pending operations, and the object is destructed automatically when the chain of async ops ends. The io_context runs on a single thread, so everything is in an implicit strand.
All this is fairly simple when there is only one chain of async handlers. My session objects, however, contain an additional TCP socket and a timer. Read operations are concurrently pending on two sockets, forwarding messages back and forth, while the timer runs periodically to clean things up. To kill such an object I created a destroySession method that calls cancel on all resources, and eventually the completion handlers get called with operation_aborted. When they all return without scheduling any new async op, the object gets destructed. destroySession calls are carefully placed at every location where a critical error happens that should result in session termination.
Question 1: Is there a better way to destruct such an object? With the above solution I feel like I'm back in the 90s, where I forget a delete somewhere and get a leak...
Given that all the destroySession calls are there, is it still possible to leak objects? In some test environments I see 1 session object in 1000 that fails to destruct. I'm thinking of a scenario like this:
websocket closure and timer expiry happen at the same time
the websocket completion handler gets invoked, and the timer handler is enqueued
the websocket completion handler cancels everything
the timer expiry handler gets called (seeing no error, since it was already queued) and reschedules the timeout
the cancelled handlers get invoked and simply return, but the object remains alive (kept alive by the rescheduled timer)
Is this scenario plausible?
Question 2: After calling cancel on a timer/socket, can ASIO invoke an already enqueued completion handler with a status other than operation_aborted?
Nice description. Even though code is missing, I have a very good sense of both your design and your understanding of Asio. Both of which seem fine :)
First thoughts:
I kind of agree with the sentiment that destroySession might be a code smell in itself. I can't say for certain for lack of details. In my code, I make sure to cancel the "complementary async chain", not just do a broad cancel of everything, and the need rarely arises outside the common case of an async timer.
Also, I'm a little worried about the vague "timer runs periodically to clean up things" - in the sketched design there is nothing to clean up, so I worry whether the things you're not showing (leaving out of the description) might cause the symptoms you're trying to explain.
The Timer Scenario
Yes, this is a plausible scenario. In fact it's a common pitfall with Asio timers:
Cancelling boost asio deadline timer safely
SUMMARY TL;DR
Cancelling a timer only cancels asynchronous operations in flight.
If you want to shut down an asynchronous call chain, you'll have to use additional logic for that. An example is given below.
The linked answer goes into detail on how to trace cases like this, and also shows an approach to fix it.
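As a rough sketch of that kind of fix (all names, the stopped_ flag and the five-second period are illustrative, not taken from the question; Boost 1.66+ assumed for steady_timer::expires_after): the periodic timer handler checks both the error code and a shutdown flag before re-arming itself, so a handler that was already queued with a success code when destroySession() ran cannot resurrect the chain:

#include <boost/asio.hpp>
#include <chrono>
#include <memory>

class session : public std::enable_shared_from_this<session>
{
public:
    explicit session(boost::asio::io_context& io) : timer_(io) {}

    void start() { arm_timer(); }

    // Simplified stand-in for the question's destroySession().
    void destroySession()
    {
        stopped_ = true;   // extra logic beyond cancel(), to cover the race above
        timer_.cancel();   // only affects a wait that is still pending
    }

private:
    void arm_timer()
    {
        timer_.expires_after(std::chrono::seconds(5));
        auto self = shared_from_this();
        timer_.async_wait([this, self](const boost::system::error_code& ec)
        {
            // If destroySession() ran after this handler was already queued,
            // ec can still be success here; the flag prevents re-arming.
            if (ec == boost::asio::error::operation_aborted || stopped_)
                return;
            cleanup();     // hypothetical periodic work
            arm_timer();
        });
    }

    void cleanup() {}

    boost::asio::steady_timer timer_;
    bool stopped_ = false;
};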
I have a C++ program. The main thread creates a new thread that is dedicated solely to handling a websocket. This new thread reads and writes using, for example, Boost.Beast's async_read() calls. It is much like https://www.boost.org/doc/libs/1_69_0/libs/beast/example/websocket/server/async/websocket_server_async.cpp where each async call gives rise to another async call.
But what is the idiomatic way for the main thread to tell the websocket thread to shut down, given that there will likely always be some async read or write call outstanding, such as an async_read() idly waiting for the server to eventually send data? A shutdown would need to do something like cancel the remaining async_read() without introducing some kind of race condition where the read starts happening just before the cancel.
Use boost::asio::post to post a lambda to the io_context (using the appropriate strand if necessary) which calls cancel on the underlying basic_socket. Pending operations will complete immediately with boost::asio::error::operation_aborted. Inside your completion handler you can check basic_socket::is_open to know whether or not you should attempt new asynchronous calls.
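A minimal sketch of that idea, using a plain tcp::socket for brevity (with Beast you would cancel or close the underlying stream in the same posted lambda); the function name is mine:

#include <boost/asio.hpp>

// Called from the main thread; the io_context itself runs on the websocket thread.
// Posting ensures cancel() executes on that thread, so there is no race with a
// read that is just being started there.
void request_shutdown(boost::asio::io_context& io,
                      boost::asio::ip::tcp::socket& sock)
{
    boost::asio::post(io, [&sock]
    {
        boost::system::error_code ec;
        sock.cancel(ec);   // pending async_read/async_write finish with operation_aborted
        sock.close(ec);    // optional; handlers can then test sock.is_open()
        // completion handlers should check the error / is_open() and refrain
        // from starting new asynchronous operations
    });
}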
I'd like to write a server (TCP/IP) and I have some questions, because I'm not sure my thinking is correct.
I need a server with only one thread. I need to read and write data to some clients. I'd like to use async_accept, async_write, async_read etc. from boost::asio.
Is it OK to call async_write for different clients at the same time? What if my program calls async_write for one client and, before its handler is called, calls async_write for another client?
The same question applies to async_read.
Is that a problem?
Is it guaranteed (in this case) that the callback from the first async_write call will be invoked before the callback from the second async_write?
What if some callback (handler) takes a long time? Do other callbacks just wait in a queue until this one finishes? And if this callback never ends, will the other callbacks never be executed? Am I right?
There is no problem with overlapping async read and write calls to different sockets. I would recommend you give each connection its own strand so that things won't break in the future should you decide to add TLS support or use more than one thread. The completion handlers can be called in any order, depending on the order in which the operations actually complete. Of course, you cannot have two async read operations or two async write operations on the same connection at the same time.
No, it isn't a problem.
No, it isn't guaranteed.
No, callbacks should not wait for each other.
No, you aren't right.
Socket I/O proceeds independently for each socket.
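As an illustration, here is a sketch of the per-connection pattern described above, an echo-style read/write chain where the socket is created on its own strand (all names are mine; Boost 1.70 or newer assumed for make_strand):

#include <boost/asio.hpp>
#include <cstddef>
#include <memory>

namespace asio = boost::asio;
using asio::ip::tcp;

// One object per client. The socket is created on its own strand, so even if
// you later run the io_context on several threads (or add TLS), this
// connection's handlers never run concurrently with each other.
class connection : public std::enable_shared_from_this<connection>
{
public:
    explicit connection(asio::io_context& io)
        : socket_(asio::make_strand(io)) {}

    tcp::socket& socket() { return socket_; }  // handed to the acceptor

    void start() { do_read(); }

private:
    void do_read()
    {
        auto self = shared_from_this();
        socket_.async_read_some(asio::buffer(buf_),
            [this, self](boost::system::error_code ec, std::size_t n)
            {
                if (ec) return;                // peer closed or error
                do_write(n);                   // never two reads in flight at once
            });
    }

    void do_write(std::size_t n)
    {
        auto self = shared_from_this();
        asio::async_write(socket_, asio::buffer(buf_, n),
            [this, self](boost::system::error_code ec, std::size_t)
            {
                if (ec) return;
                do_read();                     // never two writes in flight at once
            });
    }

    tcp::socket socket_;
    char buf_[1024];
};

Note that at any moment this connection has at most one read and one write outstanding; operations on other connections proceed completely independently.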
I've been doing a lot of reading, but I just cannot wrap my head around the difference between synchronous and asynchronous calls in Boost ASIO: what they are, how they work, and why to pick one over the other.
My model is a server which accepts connections and appends the new connection to a list. A different thread loops over the list and sends each registered connection data as it becomes available. Each write operation should be safe. It should have a timeout so that it cannot hang, it should not allocate arbitrarily large amounts of memory, or in general cause the main application to crash.
Confusion:
How does async_accept differ from regular accept? Is a new thread allocated for each connection accepted? From the examples I've seen, it looks like after a connection is accepted, a request handler is called. This request handler must tell the acceptor to prepare to accept again. Nothing about this seems asynchronous. If the request handler hangs, then the acceptor blocks.
On the Boost mailing list the OP was told to use async_write with a timer instead of regular write. In this configuration I don't see any asynchronous behaviour or why it would be recommended. From the Boost docs, async_write seems more dangerous than write because the user must not call async_write again before the first one completes.
Asynchronous calls return immediately.
That's the important bit.
Now how do you control "the next thing" that happens when the asynchronous operation has completed? You got it, you supply the completion handler.
The strength of asynchrony is that you can have an I/O operation (or similar) run "in the background" without necessarily incurring any thread switch or synchronization overhead. This way you can handle many asynchronous control flows at the same time, on a single thread.
Indeed asynchronous operations can be more complicated and require more thought (e.g. about lifetime of references used in the completion handler). However, when you need it, you need it.
Boost.Asio basic overview from the official site explains it well:
http://www.boost.org/doc/libs/1_61_0/doc/html/boost_asio/overview/core/basics.html
The io_service object is what handles the multiple operations.
Calls to io_service.run() should be made carefully (that could explain the "dangerous async_write").
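To make the async_write-plus-timer idea mentioned in the question concrete, here is a rough sketch (the function, names and timeout handling are mine, not from the mailing list post; a recent Boost is assumed): the deadline timer and the write race each other, and whichever completes first cancels the other, so the caller never blocks and the write cannot hang:

#include <boost/asio.hpp>
#include <chrono>
#include <cstddef>
#include <memory>
#include <string>

namespace asio = boost::asio;
using asio::ip::tcp;

// Start an async_write with a deadline. Neither call blocks the caller.
void write_with_timeout(std::shared_ptr<tcp::socket> sock,
                        std::shared_ptr<std::string> payload,
                        std::chrono::seconds timeout)
{
    auto timer = std::make_shared<asio::steady_timer>(sock->get_executor());
    timer->expires_after(timeout);

    timer->async_wait([sock](boost::system::error_code ec)
    {
        if (ec != asio::error::operation_aborted)
        {
            boost::system::error_code ignored;
            sock->close(ignored);          // deadline hit: abort the pending write
        }
    });

    asio::async_write(*sock, asio::buffer(*payload),
        [sock, payload, timer](boost::system::error_code ec, std::size_t)
        {
            timer->cancel();               // the write finished (or failed) first
            // ec distinguishes a successful write, a genuine error, and an
            // abort caused by the timeout closing the socket
        });
}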
I'm using asio synchronous sockets to read data over TCP from a background thread. This is encapsulated in a "server" class.
However, I want the thread to exit when the destructor of this class is called.
The problem is that a call to any of the read functions blocks, so the thread cannot be easily terminated. In Win32 there is an API for that, WaitForMultipleObjects, which would do exactly what I want.
How would I achieve a similar effect with boost?
In our application, we set a "terminating" condition and then use a self-connection to the port the thread is listening on, so that it wakes up, notices the terminate condition, and terminates.
You could also check the Boost implementation - if they are only doing a plain read on the socket (i.e., not using something like WaitForMultipleObjects internally themselves), then you can probably conclude that there isn't any way to simply and cleanly unblock the thread. If they are waiting on multiple objects (or a completion port), you could dig around to see if the ability to wake the blocking thread is exposed to the outside.
Finally, you could kill the thread - but you'll have to go outside of boost to do this, and understand the consequences, such as dangling or leaked resources. If you are shutting down, this may not be a concern, depending on what else that thread was doing.
I have found no easy way to do this. Supposedly, there are ways to cancel Win32 IOCP, but it doesn't work well on Windows XP. MS did fix it for Windows Vista and 7. The recommended approach to cancel an asio async_read or async_write is to close the socket.
[destructor] note that we want to teardown
[destructor] close the socket
[destructor] wait for completion handlers
[completion] if tearing down and we just failed because the socket closed, notify the destructor that the completion handlers are done.
[completion] return immediately.
Be careful if you choose to implement this. Closing the socket is pretty straightforward. 'Wait for completion handlers', however, is a huge understatement. There are several subtle corner cases and race conditions that can occur when the server's thread and its destructor interact.
This was subtle enough that we built a completion wrapper (similar to io_service::strand) just to handle synchronously cancelling all pending completion callbacks.
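For what it's worth, a rough sketch of those steps (hypothetical names; it sidesteps some of the corner cases just mentioned by posting the close to the io thread and letting run() return once the handlers are done, rather than hand-rolling the wait):

#include <boost/asio.hpp>
#include <atomic>
#include <cstddef>
#include <thread>

// Hypothetical server: one background thread runs the io_context and an
// async_read chain on socket_; the destructor performs the teardown steps.
class server
{
public:
    server() : socket_(io_), work_(boost::asio::make_work_guard(io_)) {}
    // ... code that connects socket_ and starts the async_read chain ...

    ~server()
    {
        tearing_down_ = true;                  // [destructor] note that we want to tear down
        boost::asio::post(io_, [this]
        {
            boost::system::error_code ec;
            socket_.close(ec);                 // [destructor] close, but on the io thread
        });
        work_.reset();                         // allow run() to return when handlers finish
        if (io_thread_.joinable())
            io_thread_.join();                 // [destructor] wait for completion handlers
    }

private:
    void handle_read(const boost::system::error_code& ec, std::size_t /*bytes*/)
    {
        if (tearing_down_ && ec)               // [completion] failed because the socket closed
            return;                            // [completion] return immediately, no new ops
        // ... normal processing, then start the next async_read ...
    }

    boost::asio::io_context io_;
    boost::asio::ip::tcp::socket socket_;
    boost::asio::executor_work_guard<boost::asio::io_context::executor_type> work_;
    std::atomic<bool> tearing_down_{false};
    std::thread io_thread_{[this] { io_.run(); }};
};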
The best way is to create a socketpair() (whatever that is in boost::asio parlance), add the reader end to the event loop, then shut the writer end down. You'll be woken up immediately with an EOF event on that socket.
The thread must then voluntarily shut itself down.
The spawner of the thread should, in its destructor, have the following:
~object()
{
shutdown_queue.shutdown(); // ask thread to shut down
thread.join(); // wait until it does
}
boost::system::error_code _error_code;
client_socket_->shutdown(client_socket_->shutdown_both, _error_code);
The above code helped me close a sync read immediately.
Use socket.cancel(); to end all current asynchronous operations blocking on a socket. Client sockets might need to be killed in a loop. I've never had to shut the server down this way, but you can use shared_from_this() and run cancel()/close() in a loop, similarly to how the Boost chat example async_writes to all clients.
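For illustration, a sketch of that loop over the live client sessions (the registry and all names are mine, loosely in the spirit of the chat example's "room"; call stop_all from the io_context thread, or post it there):

#include <boost/asio.hpp>
#include <memory>
#include <set>

namespace asio = boost::asio;
using asio::ip::tcp;

// Hypothetical session: the real class would also own the async read/write chain.
struct session : std::enable_shared_from_this<session>
{
    explicit session(tcp::socket s) : socket(std::move(s)) {}
    tcp::socket socket;
};

// Registry of live sessions, so the server can shut them all down in a loop.
class session_registry
{
public:
    void join(std::shared_ptr<session> s)          { sessions_.insert(std::move(s)); }
    void leave(const std::shared_ptr<session>& s)  { sessions_.erase(s); }

    // Every pending async op on these sockets finishes with operation_aborted.
    void stop_all()
    {
        for (const auto& s : sessions_)
        {
            boost::system::error_code ec;
            s->socket.cancel(ec);
            s->socket.close(ec);
        }
        sessions_.clear();   // drop our refs; sessions die once their handlers return
    }

private:
    std::set<std::shared_ptr<session>> sessions_;
};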