Can anyone tell me under what conditions boost::asio's io_service::run() method will return? The documentation for io_service::run() seems to suggest that as long as there is work to be done or handlers to be dispatched, run() won't return.
The reason I'm asking is that we have a legacy HTTPS client that contacts a server and executes HTTP POSTs. The separation of concerns in the client is a bit different from what we'd like, so we're changing a few things about it, but we're running into problems.
Right now, the client basically has a mis-named connect() call that effectively drives the entire protocol conversation with the server. The connect() call starts off by creating a boost::asio::ip::tcp::resolver object and calling ::async_resolve() on it. This starts a chain where new asio calls are made from within asio callbacks.
void connect()
{
    m_resolver.async_resolve( query, bind( &clientclass::resolve_callback, this, _1, _2 ) );
    thread = new boost::thread( bind( &boost::asio::io_service::run, m_io_service ) );
}
void resolve_callback( const error_code & e, resolver::iterator i )
{
    if (!e)
    {
        tcp::endpoint endpoint = *i;
        m_socket.lowest_layer().async_connect(endpoint, bind(&clientclass::connect_callback,this,_1,++i));
    }
}
void connect_callback( const error_code & e, resolver::iterator i )
{
    if (!e)
    {
        // The handshake runs on the ssl stream itself, not on lowest_layer().
        m_socket.async_handshake(boost::asio::ssl::stream_base::client,
            bind(&clientclass::handshake_callback,this,_1));
    }
}
void handshake_callback( const error_code & e )
{
    if (!e)
    {
        mesg = format_hello_message();
        http_send( mesg, bind(&clientclass::hello_resp_handler,this,_1,_2) );
    }
}
void http_send( stringstream & mesg, reply_handler handler )
{
    async_write(m_socket, m_request_buffer, bind(&clientclass::write_complete_callback,this,_1,handler));
}
void write_complete_callback( const error_code & e, reply_handler handler )
{
    if (!e)
    {
        async_read_until(m_socket,m_reply_buffer,"\r\n\r\n", bind(&clientclass::handle_reply,this,_1,handler));
    }
}
...
Anyway, this continues through the protocol until the conversation is done. As you can see from the code, while connect() is running on the main thread, all of the subsequent callbacks and requests come back on the worker thread created in connect(). This is 'working' code.
When I try to break this chain up and expose it via an external interface, it stops working. In particular, I'm having handshake_callback() call out to code outside of the clientclass object. Then http_send() is part of the interface (or is called by the external interface), and it creates a new worker thread to call io_service::run(). What happens is that even though async_write() has been called and write_complete_callback() hasn't run yet, io_service::run() exits. It exits without error and reports that no handlers were dispatched, even though there's still 'work' to be done.
So what I'm wondering is: what is io_service::run()'s definition of 'work'? Is it a pending request? Why does io_service::run() never return during this chain of requests and responses in the existing code, yet when I try to start the thread up again and begin a new chain, it returns almost immediately, before it has finished its work?
The definition of work in the context of the run() call is any pending asynchronous operations on that io_service object. This includes the invocations of the handlers in response to an operation. So, if a handler for one operation starts another operation, there is always work available.
In addition, there is an io_service::work class that can be used to create work on an io_service that never completes until the object is destroyed.
When a single chain completes, the io_service has completed all asynchronous operations, and all of the handlers have been invoked without starting a new operation, so it returns. Until you call io_service::reset(), further calls to run() will return without executing any operations.
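For example, to start a second chain on the same io_service after the first run() has returned (a minimal sketch reusing the question's member names, which are assumptions here):

// run() returned because the previous chain of handlers finished.
m_io_service.reset(); // clear the stopped state left by the last run()
// Queue the first operation of the new chain *before* calling run(),
// so that run() sees pending work instead of returning immediately.
boost::asio::async_write(m_socket, m_request_buffer,
    bind(&clientclass::write_complete_callback, this, _1, handler));
m_io_service.run(); // blocks until the new chain completes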
Related
I have a problem where two threads are called like this, one after another.
new boost::thread( &SERVER::start_receive, this);
new boost::thread( &SERVER::run_io_service, this);
Where the first thread calls this function.
void start_receive()
{
udp_socket.async_receive(....);
}
and the second thread calls,
void run_io_service()
{
io_service.run();
}
and sometimes the io_service thread ends up finishing before the start_receive() thread and then the server cannot receive packets.
I thought about putting a sleep between the two threads to wait a while for start_receive() to complete, and that works, but I wondered if there was another surefire way to make this happen?
When you call io_service.run(), the thread will block, dispatching posted handlers until either:
There are no io_service::work objects associated with the io_service, or
io_service.stop() is called.
If either of these happens, the io_service enters the stopped state and will refuse to dispatch any more handlers in future until its reset() method is called.
Every time you initiate an asynchronous operation on an io object associated with the io_service, an io_service::work object is embedded in the asynchronous handler.
For this reason, point (1) above cannot happen until the asynchronous handler has run.
This code therefore will guarantee that the async process completes and that the asserts pass:
asio::io_service ios; // ios is not in stopped state
assert(!ios.stopped());
auto obj = some_io_object(ios);
bool completed = false;
obj.async_something(..., [&](auto const& ec) { completed = true; });
// nothing will happen yet. There is now 1 work object associated with ios
assert(!completed);
auto ran = ios.run();
assert(completed);
assert(ran == 1); // only 1 async op waiting for completion.
assert(ios.stopped()); // io_service is exhausted and no work remaining
ios.reset();
assert(!ios.stopped()); // io_service is ready to run again
If you want to keep the io_service running, create a work object:
boost::asio::io_service svc;
auto work = std::make_shared<boost::asio::io_service::work>(svc);
svc.run(); // this will block as long as the work object is valid.
The nice thing about this approach is that the work object above will keep the svc object "running", but not block any other operations on it.
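A self-contained sketch of that pattern (the names here are illustrative, not from the question):

#include <boost/asio.hpp>
#include <memory>
#include <thread>

int main()
{
    boost::asio::io_service svc;
    auto work = std::make_shared<boost::asio::io_service::work>(svc);

    std::thread t([&svc]() { svc.run(); }); // blocks while `work` exists

    svc.post([]() { /* handlers can be posted at any time */ });

    work.reset(); // release the work object; run() returns once idle
    t.join();
}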
I have a program (client + server) that works with no issue with this write:
boost::asio::write(this->socket_, boost::asio::buffer(message.substr(count,length_to_send)));
where socket_ is boost::asio::ssl::stream<boost::asio::ip::tcp::socket> and message is an std::string.
I would like to make this better and non-blocking, so I created a function that could replace this, it's called like follows:
write_async_sync(socket_,message.substr(count,length_to_send));
The purpose of this function is:
To make the call async, intrinsically
To keep the interface unchanged
The function I implemented simply uses promise/future to simulate sync behavior, which I will modify later (after it works) to be cancellable:
std::size_t
SSLClient::write_async_sync(boost::asio::ssl::stream<boost::asio::ip::tcp::socket>& socket,
const std::string& message_to_send)
{
boost::system::error_code write_error;
std::promise<std::size_t> write_promise;
auto write_future = write_promise.get_future();
boost::asio::async_write(socket,
boost::asio::buffer(message_to_send),
[this,&write_promise,&write_error,&message_to_send]
(const boost::system::error_code& error,
std::size_t size_written)
{
logger.write("HANDLING WRITING");
if(!error)
{
write_error = error;
write_promise.set_value(size_written);
}
else
{
write_promise.set_exception(std::make_exception_ptr(std::runtime_error(error.message())));
}
});
std::size_t size_written = write_future.get();
return size_written;
}
The problem: I'm unable to get the async functionality to work. The sync one works fine, but async simply freezes and never enters the lambda part (the writing never happens). What am I doing wrong?
Edit: I realized that using poll_one() makes the function execute and it proceeds, but I don't understand it. This is how I'm calling run() for io_service (before starting the client):
io_service_work = std::make_shared<boost::asio::io_service::work>(io_service);
io_service_thread.reset(new std::thread([this](){io_service.run();}));
where these are shared_ptrs. Is this wrong? Does this approach necessitate using poll_one()?
Re. EDIT:
You have the io_service::run() call correct. This tells me you are blocking on the future inside a (completion) handler. That, obviously, prevents run() from progressing the event loop.
The question asked by @florgeng was NOT whether you have an io_service instance.
The question is whether you are calling run() (or poll()) on it suitably for async operations to proceed.
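To illustrate the failure mode (a hypothetical sketch, not the asker's exact call site):

// If write_async_sync() is invoked from a handler running inside
// io_service::run(), then write_future.get() blocks the only io_service
// thread, so the async_write completion handler can never be dispatched:
io_service.post([&]() {
    write_async_sync(socket_, message); // get() parks the io_service thread
});                                     // -> deadlock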
Besides, you can already use the built-in future<> support:
http://www.boost.org/doc/libs/1_64_0/doc/html/boost_asio/overview/cpp2011/futures.html
Example: http://www.boost.org/doc/libs/1_64_0/doc/html/boost_asio/example/cpp11/futures/daytime_client.cpp
std::future<std::size_t> recv_length = socket.async_receive_from(
boost::asio::buffer(recv_buf),
sender_endpoint,
boost::asio::use_future);
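A usage sketch: some thread must be running the io_service, and get() must be called from a different thread, or the future never becomes ready (the same deadlock as above):

std::thread t([&io_service]() { io_service.run(); });
std::size_t n = recv_length.get(); // blocks until the receive completes or throws
t.join();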
Regarding this post:
Why do I need strand per connection when using boost::asio?
I'm focusing on this statement regarding async calls:
"However, it is not safe for multiple threads to make calls concurrently"
This example:
http://www.boost.org/doc/libs/1_55_0/doc/html/boost_asio/example/cpp11/chat/chat_client.cpp
If I refer to main as "thread 1" and the spawned thread t as "thread 2", then it seems like thread 1 is calling async_write (assuming no write_in_progress) while thread 2 is calling async_read. What am I missing?
In the official chat example, chat_client::write() defers work to the io_service via io_service::post(), which will:
request that the io_service execute the given handler via a thread that is currently invoking the poll(), poll_one(), run(), or run_one() function on the io_service
not allow the given handler to be invoked within the calling function (e.g. chat_client::write())
As only one thread is running the io_service, and all socket read, write, and close operations are only initiated from handlers that have been posted to the io_service, the program satisfies the thread-safety requirement for socket.
class chat_client
{
void write(const chat_message& msg)
{
// The nullary function `handler` is created, but not invoked within
// the calling function. `msg` is captured by value, allowing `handler`
// to append a valid `msg` object to `write_msgs_`.
auto handler = [this, msg]()
{
bool write_in_progress = !write_msgs_.empty();
write_msgs_.push_back(msg);
if (!write_in_progress)
{
do_write();
}
};
// Request that `handler` be invoked within the `io_service`.
io_service_.post(handler);
}
};
I am trying to wrap my head around resource management in boost::asio. I am seeing callbacks called after the corresponding sockets are already destroyed. A good example of this is in the boost::asio official example: http://www.boost.org/doc/libs/1_60_0/doc/html/boost_asio/example/cpp11/chat/chat_client.cpp
I am particularly concerned with the close method:
void close()
{
io_service_.post([this]() { socket_.close(); });
}
If you call this function and afterwards destruct chat_client instance that holds socket_, socket_ will be destructed before the close method is called on it. Also any pending async_* callbacks can be called after the chat_client has been destroyed.
How would you correctly handle this?
You can call socket_.close() almost any time you want, but you should keep a few points in mind:
If you have threads, the close() call should be wrapped in a strand or you can crash; see the Boost strand documentation and the sketch after this list.
The io_service may already have queued handlers when you close, and they will be invoked anyway with the old state/error code.
close() can throw an exception.
close() does NOT destroy the ip::tcp::socket; it just closes the underlying system socket.
You must manage object lifetime yourself to ensure objects are destroyed only when there are no more handlers. Usually this is done with enable_shared_from_this on your Connection or socket object.
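A minimal sketch of the strand point above (strand_ and socket_ are illustrative members of a hypothetical Connection class):

boost::asio::io_service::strand strand_{io_service_}; // one strand per connection

void close()
{
    strand_.post([this]() {
        boost::system::error_code ec;
        socket_.close(ec); // the error_code overload avoids the throwing close()
        if (ec) { /* log ec.message() */ }
    });
}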
Invoking socket.close() does not destroy the socket. However, the application may need to manage the lifetime of objects for which the operation and completion handlers depend upon, but this is not necessarily the socket object itself. For instance, consider a client class that holds a buffer, a socket, and has a single outstanding read operation with a completion handler of client::handle_read(). One can close() and explicitly destroy the socket, but the buffer and client instance must remain valid until at least the handler is invoked:
class client
{
...
void read()
{
// Post handler that will start a read operation.
io_service_.post([this]() {
async_read(*socket_, boost::asio::buffer(buffer_),
    boost::bind(&client::handle_read, this,
        boost::asio::placeholders::error,
        boost::asio::placeholders::bytes_transferred));
});
}
void handle_read(
const boost::system::error_code& error,
std::size_t bytes_transferred
)
{
// make use of data members...if socket_ is not used, then it
// is safe for socket to have already been destroyed.
}
void close()
{
io_service_.post([this]() {
socket_->close();
// As long as outstanding completion handlers do not
// invoke operations on socket_, then socket_ can be
// destroyed.
socket_.reset();
});
}
private:
boost::asio::io_service& io_service_;
// Not a typical pattern, but used to exemplify that outstanding
// operations on `socket_` are not explicitly dependent on the
// lifetime of `socket_`.
std::unique_ptr<boost::asio::ip::tcp::socket> socket_;
std::array<char, 512> buffer_;
...
};
The application is responsible for managing the lifetime of objects upon which the operation and handlers are dependent. The chat client example accomplishes this by guaranteeing that the chat_client instance is destroyed after it is no longer in use, by waiting for the io_service.run() to return within the thread join():
int main(...)
{
try
{
...
boost::asio::io_service io_service;
chat_client c(...);
std::thread t([&io_service](){ io_service.run(); });
...
c.close();
t.join(); // Wait for `io_service.run` to return, guaranteeing
// that `chat_client` is no longer in use.
} // The `chat_client` instance is destroyed.
catch (std::exception& e)
{
...
}
}
One common idiom for managing object lifetime is to have the I/O object be managed by a single class that inherits from enable_shared_from_this<>. When a class inherits from enable_shared_from_this, it provides a shared_from_this() member function that returns a valid shared_ptr instance managing this. A copy of the shared_ptr is passed to completion handlers, such as in a lambda's capture list or as the instance handle to bind(), causing the lifetime of the I/O object to be extended to at least as long as the handler. See the Boost.Asio asynchronous TCP daytime server tutorial for an example using this approach.
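A minimal sketch of the idiom (the session class and its members are illustrative):

#include <boost/asio.hpp>
#include <array>
#include <memory>

class session : public std::enable_shared_from_this<session>
{
public:
    explicit session(boost::asio::io_service& io) : socket_(io) {}

    void start_read()
    {
        auto self = shared_from_this(); // the handler holds a reference
        socket_.async_read_some(boost::asio::buffer(buffer_),
            [this, self](const boost::system::error_code& ec, std::size_t)
            {
                // `self` guarantees this session outlives the handler.
                if (!ec)
                    start_read();
            });
    }

private:
    boost::asio::ip::tcp::socket socket_;
    std::array<char, 512> buffer_;
};

Instances must be created as shared_ptr (e.g. std::make_shared<session>(io)) for shared_from_this() to be valid.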
I have a class called ServerConnectionHandler that creates a boost thread for reading data from the server. The boost thread is bound to the ServerConnectionHandler object. Relevant pieces of code are below:
ServerConnectionHandler::~ServerConnectionHandler()
{
close();
}
void ServerConnectionHandler::close()
{
closesocket(m_ConnectSocket);
WSACleanup();
}
void ServerConnectionHandler::MsgLoop()
{
int size_recv = 0;
char chunk[DEFAULT_BUFLEN];
while(1)
{
memset(chunk, 0, DEFAULT_BUFLEN);
size_recv = recv(m_ConnectSocket, chunk, DEFAULT_BUFLEN, 0);
if(size_recv > 0)
{
for( int i=0; i < size_recv; ++i )
{
if(chunk[i] == '\n')
{
m_tcpEventHandler.OnClientMessage(m_RecBuffer);
m_RecBuffer.clear();
}
else
{
m_RecBuffer.append(1, chunk[i]);
}
}
}
else if(size_recv == 0)
{
close();
const std::string error = "MsgReceiver Received 0 bytes because connection was closed. MsgReceiver shutting down.\n";
m_tcpEventHandler.OnClientSocketError(error);
break;
}
else
{
char error [512];
sprintf(error, "Error on Receiving Socket. Recv=[%d], WSAError=[%d]. MsgReceiver shutting down.\n", size_recv, WSAGetLastError());
m_tcpEventHandler.OnClientSocketError(error);
close();
break;
}
}
// NOTE: This will eventually call the destructor of ServerConnectionHandler...
m_tcpEventHandler.OnClientDisconnect("Disconnected. Reason: Remote host snapped connection.");
}
My problem is that when close() is called in the destructor, the receiver thread is still running and crashes when it attempts to call any of the m_tcpEventHandler.OnClient...() methods because the object has been destroyed at this point.
I need to be able to handle this cleanly in 3 different cases:
When the user manually disconnects the client (the destructor will be called in this case).
When the client is disconnected from the server (maybe because the server crashed for example).
When the application shuts down (needs to cleanly disconnect everything - similar to #1).
Right now, this code only works for case #2. I don't want to slow down the receiver thread with any locking, as the performance is critical. From what I've read, people create a volatile bool flag that tells the receiver thread to stop. The problem I see with this approach is: what if the thread is in the middle of handling a message (m_tcpEventHandler.OnClientMessage()) right when the destructor is called? Then it could immediately hit code for the destroyed object (m_tcpEventHandler could in turn use ServerConnectionHandler's member variables or methods). I can't think of a clean way to handle all 3 cases here.
Before you close the socket in the destructor, shut it down for input. That will cause the receive thread to get an end of stream and exit nicely. You might want to add a little handshake between the dtor and the receiver thread before the final close, or you might just want to rely on the receiver thread closing the socket and not close it in the dtor at all.
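A sketch of that sequence, assuming Winsock (consistent with the recv()/WSAGetLastError() calls in the question) and a hypothetical m_ReceiverThread member:

ServerConnectionHandler::~ServerConnectionHandler()
{
    // Shut down the receive side first: the blocked recv() in MsgLoop()
    // returns 0 (end of stream) and the thread exits its loop cleanly.
    shutdown(m_ConnectSocket, SD_RECEIVE);
    m_ReceiverThread.join();      // hypothetical member: wait for MsgLoop()
    closesocket(m_ConnectSocket); // only now is it safe to close
    WSACleanup();
}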
"My problem is that when close() is called in the destructor, the receiver thread is still running" - seems to me your problem is merely thread synchronization, then. Little to do with connections.
Making the communications asynchronous gives you a lot more control over the receiving thread.
You could e.g. use Boost Asio to do the asynchronous socket reads (and writes, of course). If you add an "infinite" deadline_timer to the asynch queue, you can just cancel() that timer, which could be used by the receiving thread to stop the receive and do some more cleanups (e.g. write a "Goodbye" message to the remote end).
(If the latter were not required, just cancelling all async operations could be achieved by simply shutting down the io_service. That would be rather uncourteous, but not a bad idea in fast shutdown paths.)
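A minimal sketch of the "infinite timer as a stop signal" idea (all names illustrative):

#include <boost/asio.hpp>

boost::asio::io_service io;
boost::asio::deadline_timer stop_timer(io, boost::posix_time::pos_infin);

void arm_stop_signal()
{
    // The wait never completes on its own; cancel() completes it with
    // operation_aborted, which the receiving thread treats as "stop".
    stop_timer.async_wait([](const boost::system::error_code& ec)
    {
        if (ec == boost::asio::error::operation_aborted)
        {
            // cancel outstanding reads, write a "Goodbye" message, etc.
        }
    });
}

void request_stop()
{
    // post() serializes the cancel with handlers running in the io thread.
    io.post([]() { stop_timer.cancel(); });
}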