From the standard SSL client example, say I call this function:
boost::asio::async_connect(socket_.lowest_layer(), endpoint_iterator,
    boost::bind(&SSLClient::handle_connect, this,
        boost::asio::placeholders::error));
But then, once the function has been called and the program is connecting, I would like to cancel my request and stop the connection! How can I do that?
Special case: Say I have those objects in a thread. Is there a way to do it in this case?
Now if I try to do this, the program simply doesn't respond. I don't see a way to force it to stop!
There are several ways to achieve what you want ¹.
You could hard-stop the service (service.stop()). But this gives you no control over the running operations. It's the "nuclear" approach, so to speak.
The controlled way would be to call cancel():
Cancel all asynchronous operations associated with the socket.
socket_.cancel()
Now, you have the additional task of maintaining the lifetime of your connection object (presumably the this in your bound completion handler). A very common pattern to use is to make the connection class derive from enable_shared_from_this and bind the completion handler to shared_from_this() instead of just this.
That way, the shared connection object will automatically "go away" after the last pending async operation has been canceled, and you don't have to worry about leaking connection objects.
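A minimal sketch of that pattern, assuming an SSL stream over TCP (the class layout and the stop() helper are illustrative, not from the question's code):

#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>
#include <boost/bind.hpp>
#include <boost/enable_shared_from_this.hpp>

class SSLClient : public boost::enable_shared_from_this<SSLClient> {
public:
    SSLClient(boost::asio::io_service& io, boost::asio::ssl::context& ctx)
        : socket_(io, ctx) {}

    void start(boost::asio::ip::tcp::resolver::iterator endpoint_iterator) {
        // Bind shared_from_this() instead of this: the connection object
        // stays alive as long as any handler is still pending.
        boost::asio::async_connect(socket_.lowest_layer(), endpoint_iterator,
            boost::bind(&SSLClient::handle_connect, shared_from_this(),
                boost::asio::placeholders::error));
    }

    void stop() {
        // Cancels the pending connect; handle_connect then runs with
        // boost::asio::error::operation_aborted.
        boost::system::error_code ec;
        socket_.lowest_layer().cancel(ec);
    }

private:
    void handle_connect(const boost::system::error_code& error) {
        if (error)
            return; // cancelled or failed; the last shared_ptr copy dies here
        // ... proceed with the handshake ...
    }

    boost::asio::ssl::stream<boost::asio::ip::tcp::socket> socket_;
};

For the threaded special case: if other threads are running io_service::run(), don't call cancel() directly from an outside thread; post the stop() call through the io_service (or a strand), so it cannot race with the completion handlers.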
¹ short of exit, abort, quick_exit etc. :)
What exactly does the destructor of boost::asio::ip::tcp::socket do? I can't tell, even after scouring Boost docs and source code, if I need to use
socket->shutdown(boost::asio::ip::tcp::socket::shutdown_both);
socket->close();
before calling
delete socket;
Do I need to close the socket manually, or does the destructor handle this?
When a socket is destroyed, it will be closed as-if by socket.close(ec) during the destruction of the socket.
I/O objects, such as socket, derive from basic_io_object. Within the basic_io_object destructor, destroy() will be invoked on the I/O object's I/O service, passing in an instance of the implementation_type on which the I/O object's service will operate. In the case of socket, destroy() will be invoked on a type that fulfills the SocketService type requirement, closing the underlying socket. In the documentation below, a is an instance of a socket service class, and b is an instance of the implementation_type for the socket service class:
a.destroy(b):
[...] Implicitly cancels asynchronous operations, as if by calling a.close(b, ec).
a.close(b, ec):
If a.is_open() is true, causes any outstanding asynchronous operations to complete as soon as possible. Handlers for cancelled operations shall be passed the error code error::operation_aborted.
post: !a.is_open(b).
No, you don't need to close it. Though it might be cleaner to do so if you want to report any errors surrounding protocol shutdown.
The destructor just /appears/ to be empty; that's a good sign of Modern C++:
http://en.cppreference.com/w/cpp/language/rule_of_three
Rule Of Zero
The answers have skipped over the issue of shutdown(). From the close() documentation, "For portable behaviour with respect to graceful closure of a connected socket, call shutdown() before closing the socket".
If deleting the socket does an implicit close, it seems that a call to shutdown() is still recommended before deleting it.
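A sketch of that graceful-closure sequence, using the error_code overloads so cleanup never throws (the helper name is illustrative):

void graceful_close(boost::asio::ip::tcp::socket& socket) {
    boost::system::error_code ec;
    // Disable further sends/receives first, so the peer observes an
    // orderly shutdown rather than an abortive close.
    socket.shutdown(boost::asio::ip::tcp::socket::shutdown_both, ec);
    // close() is what the destructor would do implicitly anyway; calling
    // it explicitly lets you inspect or report ec.
    socket.close(ec);
}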
I tried looking through the source, but I can't navigate that much template code.
Basically, this is what the documentation says (for close()):
Remarks
For portable behaviour with respect to graceful closure of a connected socket, call shutdown() before closing the socket.
I can do that manually, but if possible it would be nice to rely on RAII.
So if I have a socket going out of scope, do I need to call shutdown() and close() on it, or will that be done automatically?
One can rely on the socket performing proper cleanup with RAII.
When an IO object, such as socket, is destroyed, its destructor will invoke destroy() on the IO object's service, passing in an instance of the implementation_type on which the IO object's service will operate. The SocketService requirements state that destroy() will implicitly cancel asynchronous operations as-if by calling the close() on the service, which has a post condition that is_open() returns false. Furthermore, the service's close() will cause outstanding asynchronous operations to complete as soon as possible. Handlers for cancelled operations will be passed the error code boost::asio::error::operation_aborted, and scheduled for deferred invocation within the io_service. These handlers are removed from the io_service if they are either invoked from a thread processing the event loop or the io_service is destroyed.
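To illustrate, a small sketch of a socket going out of scope with a connect still pending (the address is a placeholder; the handler observes operation_aborted):

#include <boost/asio.hpp>
#include <iostream>

void on_connect(const boost::system::error_code& ec) {
    // Prints "Operation canceled": destroying the socket closed it, and
    // the pending connect completed with operation_aborted.
    std::cout << ec.message() << "\n";
}

int main() {
    boost::asio::io_service io;
    {
        boost::asio::ip::tcp::socket socket(io);
        boost::asio::ip::tcp::endpoint ep(
            boost::asio::ip::address::from_string("192.0.2.1"), 80);
        socket.async_connect(ep, &on_connect);
    } // socket destroyed here; no explicit shutdown()/close() was called
    io.run(); // the queued, aborted handler is invoked here
}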
I am trying to implement async_connect() with a timeout.
void async_connect_with_timeout(
    socket_type& s,
    std::function<void(BoostAndCustomError const& error)> const& connect_handler,
    time_type timeout);
When the operation completes, connect_handler(error) is called, with error indicating the result of the operation (including timeout).
I was hoping to use the code from the Boost 1.51 timeouts example. The biggest difference is that I am using multiple worker threads calling io_service.run().
What changes are necessary to keep the example code working?
My issues are:
When calling:

void Start() {
    socket_.async_connect(endpoint_, HandleConnect);
    deadline_.async_wait(HandleTimeout);
}
HandleConnect() can complete in another thread even before async_wait() is called (unlikely, but possible). Do I have to strand-wrap Start(), HandleConnect(), and HandleTimeout()?
What if HandleConnect() is called first without error, but deadline_timer.cancel() or deadline_timer.expires_from_now() fails because HandleTimeout() has already "been queued for invocation in the near future"? It looks like the example code lets HandleTimeout() close the socket. Such behavior (the timer closing the connection after we have happily started some operations following the connect) can easily lead to a serious headache.
What if HandleTimeout() and socket.close() are called first? Is it possible for HandleConnect() to already be "queued" without an error? The documentation says: "Any asynchronous send, receive or connect operations will be cancelled immediately, and will complete with the boost::asio::error::operation_aborted error". What does "immediately" mean in a multithreaded environment?
You should wrap each handler with a strand if you want to prevent their parallel execution in different threads. I guess some completion handlers will access socket_ or the timer, so you'll definitely have to wrap Start() with a strand as well. But wouldn't it be much simpler to use an io_service-per-CPU model, i.e. to base your application on an io_service pool? IMHO, you'd get much less headache.
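A sketch of the strand-wrapped version under multiple run() threads (the class layout is illustrative; the connected_ flag guards against the stale-timeout case raised in the second question):

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/enable_shared_from_this.hpp>

class Connector : public boost::enable_shared_from_this<Connector> {
public:
    explicit Connector(boost::asio::io_service& io)
        : strand_(io), socket_(io), deadline_(io), connected_(false) {}

    // If io_service threads are already running, dispatch Start() through
    // strand_ as well, so it cannot race with the handlers below.
    void Start(const boost::asio::ip::tcp::endpoint& ep,
               boost::posix_time::time_duration timeout) {
        socket_.async_connect(ep, strand_.wrap(
            boost::bind(&Connector::HandleConnect, shared_from_this(),
                boost::asio::placeholders::error)));
        deadline_.expires_from_now(timeout);
        deadline_.async_wait(strand_.wrap(
            boost::bind(&Connector::HandleTimeout, shared_from_this(),
                boost::asio::placeholders::error)));
    }

private:
    void HandleConnect(const boost::system::error_code& ec) {
        if (ec == boost::asio::error::operation_aborted)
            return;            // the timeout fired first and closed the socket
        connected_ = true;     // guards against a HandleTimeout already queued
        deadline_.cancel();
        // ... connected, or failed with some other ec ...
    }

    void HandleTimeout(const boost::system::error_code& ec) {
        if (ec == boost::asio::error::operation_aborted || connected_)
            return;            // the connect won the race; do nothing
        boost::system::error_code ignored;
        socket_.close(ignored); // forces the pending connect to abort
    }

    boost::asio::io_service::strand strand_;
    boost::asio::ip::tcp::socket socket_;
    boost::asio::deadline_timer deadline_;
    bool connected_;
};

Because both handlers run on the same strand, the connected_ flag needs no lock, and a "false timeout" can at worst be a no-op.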
Yes, it's possible. But why is it a headache? The socket gets closed because of a "false timeout", and you start the re-connection (or whatever) procedure just as if it had been closed due to a network failure.
Yes, that's also possible, but again, it shouldn't cause any problem for a correctly designed program: if HandleConnect() tries to issue some operation on the closed socket, it will get the appropriate error. In any case, when you attempt to send or receive data you don't really know the current socket/network status.
Using boost::asio, I use async_accept to accept connections. This works well, but there is one issue and I need a suggestion on how to deal with it. Using the typical async_accept:
Listener::Listener(int port)
    : acceptor(io, ip::tcp::endpoint(ip::tcp::v4(), port))
    , socket(io) {
    start_accept();
}

void Listener::start_accept() {
    Request *r = new Request(io);
    acceptor.async_accept(r->socket(),
        boost::bind(&Listener::handle_accept, this, r, placeholders::error));
}
Works fine, but there is an issue: the Request object is created with plain new, so it can "leak" memory. Not really a leak, since it only leaks at program stop, but I want to make valgrind happy.
Sure, there is an option: I can replace it with a shared_ptr and pass it to every event handler. This will work until program stop, when the asio io_service stops and all objects are destroyed, freeing the Request. But this way I must always have an active asio event for the Request, or it will be destroyed! I think that's a direct way to crash, so I don't like this variant either.
UPD: A third variant: the Listener holds a list of shared_ptrs to the active connections. This looks good, and I prefer to use it unless some better way is found. The drawback: since this scheme allows "garbage collection" of idle connections, it's not safe: removing a connection pointer from the Listener destroys it immediately, which can lead to a segfault when one of the connection's handlers is active in another thread. A mutex can't fix this because we would have to lock nearly everything.
Is there a way to make the acceptor work with connection management in a clean and safe way? I'd be glad to hear any suggestions.
The typical recipe for avoiding memory leaks when using this library is to use a shared_ptr; the io_service documentation specifically mentions this:
Remarks

The destruction sequence described above permits programs to simplify their resource management by using shared_ptr<>. Where an object's lifetime is tied to the lifetime of a connection (or some other sequence of asynchronous operations), a shared_ptr to the object would be bound into the handlers for all asynchronous operations associated with it. This works as follows:

When a single connection ends, all associated asynchronous operations complete. The corresponding handler objects are destroyed, and all shared_ptr references to the objects are destroyed.

To shut down the whole program, the io_service function stop() is called to terminate any run() calls as soon as possible. The io_service destructor defined above destroys all handlers, causing all shared_ptr references to all connection objects to be destroyed.
For your scenario, change your Listener::handle_accept() method to take a boost::shared_ptr<Request> parameter. Your second concern
removing a connection pointer from the Listener destroys it immediately, which can lead to a segfault when one of the connection's handlers is active in another thread. A mutex can't fix this because we would have to lock nearly everything.
is mitigated by inheriting from the boost::enable_shared_from_this template in your classes:
class Listener : public boost::enable_shared_from_this<Listener>
{
...
};
then when you dispatch handlers, use shared_from_this() instead of this when binding to member functions of Listener.
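Applied to the accept loop above, a sketch (it assumes Listener is itself held in a shared_ptr, and that Request has a start() method; both are illustrative and need <boost/shared_ptr.hpp>):

void Listener::start_accept() {
    boost::shared_ptr<Request> r(new Request(io));
    acceptor.async_accept(r->socket(),
        boost::bind(&Listener::handle_accept, shared_from_this(),
            r, placeholders::error));
}

void Listener::handle_accept(boost::shared_ptr<Request> r,
                             const boost::system::error_code& error) {
    if (!error)
        r->start();     // Request binds its own handlers to a shared_ptr
    start_accept();     // keep accepting; r is released here unless
                        // start() registered another async operation
}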
If anyone is interested, I found another way: the Listener holds a list of shared_ptrs to the active connections. Ending/terminating a connection is done via io_service::post, which calls Listener::FinishConnection wrapped with asio::strand. I usually wrap the Request's methods with a strand anyway; it's safer in terms of DDoS resistance and thread safety. So calling FinishConnection from post through the strand protects against a segfault in another thread.
Not sure whether this is directly related to your issue, but I was also having similar memory leaks when using the Boost Asio libraries, in particular with the same acceptor object you mentioned. It turned out I was not shutting down the service correctly; some connections would stay open and their corresponding objects would not be freed. Calling the following got rid of the leaks reported by Valgrind:
acceptor.close();
Hope this can be useful for someone!
In the following code:
tcp::socket socket(io_service);
tcp::endpoint ep(boost::asio::ip::address::from_string(addr), i);
socket.async_connect(ep, &connect_handler);
socket.close();
is it correct to close the socket object there, or should I close it only in connect_handler(), resorting to a shared_ptr to prolong the life of the socket object?
Thanks.
Closing the socket isn't much of an issue, but the socket being destructed and deallocated is. One way to deal with it is to make sure the socket outlives the io_service where its work is being done; in other words, don't delete it until after the io_service has exited. Obviously this won't work in every situation.
In a variety of conditions it can be difficult or impossible to tell when all work on the socket is really done while it's active within the io_service, and ASIO doesn't provide any mechanism to explicitly remove or disconnect the object's callbacks so they don't get called. So you should consider holding the connection in a shared_ptr, which keeps the connection object alive until the last reference inside the io_service has been released.
Meanwhile your handler functors should handle all possible errors passed in, including the connection being destroyed.
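A minimal sketch of that combination, based on the snippet from the question (the placeholder address stands in for addr and i; the shared_ptr copy bound into the handler keeps the socket alive):

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/shared_ptr.hpp>
#include <iostream>

void connect_handler(boost::shared_ptr<boost::asio::ip::tcp::socket> socket,
                     const boost::system::error_code& ec) {
    if (ec) {
        // Covers operation_aborted (the socket was closed early, as in
        // the question) as well as genuine connection failures.
        std::cout << "connect failed: " << ec.message() << "\n";
        return;
    }
    // ... use *socket; it lives as long as some handler still holds it ...
}

int main() {
    boost::asio::io_service io;
    boost::shared_ptr<boost::asio::ip::tcp::socket> socket(
        new boost::asio::ip::tcp::socket(io));
    boost::asio::ip::tcp::endpoint ep(
        boost::asio::ip::address::from_string("192.0.2.1"), 80);
    socket->async_connect(ep,
        boost::bind(&connect_handler, socket,
            boost::asio::placeholders::error));
    socket->close(); // as in the question; the handler still runs
    socket.reset();  // safe: the copy bound into the handler keeps it alive
    io.run();
}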
It is safe. The connect_handler will be invoked with ec == boost::asio::error::operation_aborted, since close() cancels the outstanding connect. Of course, you need to call io_service.run() for the handler to be invoked.
As Chila already answered, it's safe to close the socket whenever you want. If the socket in question has an outstanding operation at the time, the handler/callback will be invoked to tell you the operation was cancelled; that's where operation_aborted shows up.
As for your question about shared_ptr, I consider it a big win if you have another thread or other objects referencing your sockets; however, it isn't required in many cases. All you have to do is allocate them dynamically and deallocate them when they're no longer needed. Of course, if other objects or threads reference your socket, you must update them before the delete/dealloc. That way you avoid invalid memory access through a pointer whose object no longer exists (a dangling pointer).