boost asio - session thread does not end - c++

I use boost asio to handle a session per thread like this:
Server::Server(ba::io_service& ioService, int port) : ioService_(ioService), port_(port)
{
    ba::ip::tcp::acceptor acceptor(ioService_, ba::ip::tcp::endpoint(ba::ip::tcp::v4(), port_));
    for (;;)
    {
        socket_ptr sock(new ba::ip::tcp::socket(ioService_));
        acceptor.accept(*sock);
        boost::thread thread(boost::bind(&Server::Session, this, sock));
    }
}

void Server::Session(socket_ptr sock)
{
    const int max_length = 1024;
    try
    {
        char buffer[256] = "";
        // HandleRequest() function performs async operations
        if (HandleHandshake(sock, buffer))
            HandleRequest(sock, buffer);
        ioService_.run();
    }
    catch (std::exception& e)
    {
        std::cerr << "Exception in thread: " << e.what() << "\n";
    }
    std::cout << "Session thread ended \r\n"; // THIS LINE IS NEVER REACHED
}
In Server::Session() I do some asynchronous I/O at some point using the async_read_some() and async_write() functions.
All of this works, but to make it work I have to call ioService_.run() inside the spawned thread; otherwise Server::Session() exits without processing the pending I/O work.
The problem is that calling ioService_.run() from my thread prevents the thread from ever exiting, because in the meantime other requests keep arriving on my listening server socket.
What I end up with is threads starting and processing new sessions but never releasing their resources (ending). Is it possible to use only one boost::asio::io_service with this approach?

I believe you are looking for run_one() or poll_one(). These let the thread either execute a ready handler (poll_one()) or wait for a handler (run_one()). Because each call handles at most one handler, you can decide how many handlers to execute before exiting your thread. This is in contrast to run(), which executes handlers until the io_service is stopped, whereas poll() stops after it has handled all the handlers that are currently ready.
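For illustration, a minimal sketch of how the session loop could use run_one() (the done flag here is hypothetical; in the question's code the final completion handler of the HandleRequest() chain would have to set it, e.g. through a shared_ptr, which is omitted):
void Server::Session(socket_ptr sock)
{
    char buffer[256] = "";
    bool done = false; // hypothetical flag, set by the session's final completion handler

    if (HandleHandshake(sock, buffer))
        HandleRequest(sock, buffer); // queues asynchronous work on ioService_

    // Instead of run(), execute handlers one at a time and stop as soon as
    // this session reports completion, so the thread can actually exit.
    while (!done && ioService_.run_one() > 0)
        ;

    std::cout << "Session thread ended \r\n"; // now reachable
}
Note that with a single shared io_service this thread may also execute handlers belonging to other sessions while it waits for its own to finish.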

The way I structured connection handling here was bad.
There is quite a good video presentation below about how to design your asio server (by the Asio author):
Thinking Asynchronously: Designing Applications with Boost Asio

Related

Standalone ASIO library has different behaviour on OSX and Linux systems after write error

I have noticed that on OSX the asio::async_write function always calls the handler callback. But on Linux (Ubuntu 18.04), after the async_write operation has completed with an error three times (Connection reset by peer or Broken pipe), the handler callback is no longer called for subsequent calls to async_write.
Please take a look at the code example:
asio::io_service ioService;
asio::ip::tcp::resolver resolver(ioService);

// ---- Initialize server -----
auto acceptor = make_unique<asio::ip::tcp::acceptor>(ioService,
    resolver.resolve(asio::ip::tcp::resolver::query(asio::ip::tcp::v4(), "localhost", "12345"))->endpoint());
asio::ip::tcp::socket serverSocket(ioService);
std::promise<void> connectedPromise;
std::promise<void> disconnectedPromise;
std::vector<uint8_t> readBuffer(1);
acceptor->async_accept(serverSocket, [&](asio::error_code errorCode) {
    std::cout << "Socket accepted!" << std::endl;
    connectedPromise.set_value();
    serverSocket.async_read_some(asio::buffer(readBuffer), [&](asio::error_code errorCode, std::size_t length) {
        if (errorCode) {
            std::cout << "Read error: " << errorCode.message() << std::endl;
            disconnectedPromise.set_value();
        }
    });
});

// ----- Initialize client --------
asio::ip::tcp::socket clientSocket(ioService);
asio::connect(clientSocket, resolver.resolve({asio::ip::tcp::v4(), "localhost", "12345"}));

// ----- Start io service loop
std::thread mainLoop([&]() {
    ioService.run();
});
connectedPromise.get_future().get(); // Wait until connected

// ----- Perform 10 async_write operations with 100 ms delay --------
std::promise<void> done;
std::atomic<int> writesCount{0};
std::vector<uint8_t> writeBuffer(1);
std::function<void (const asio::error_code&, std::size_t)> writeHandler = [&](const asio::error_code& errorCode, std::size_t) -> void {
    if (errorCode) {
        std::cout << errorCode.message() << std::endl;
    }
    if (++writesCount < 10) {
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        asio::async_write(serverSocket, asio::buffer(writeBuffer), writeHandler);
    } else {
        done.set_value();
    }
};
asio::async_write(serverSocket, asio::buffer(writeBuffer), writeHandler);
clientSocket.close(); // Perform disconnect from client side
disconnectedPromise.get_future().get(); // Wait until disconnected
std::cout << "Waiting for all operations complete" << std::endl;
done.get_future().get(); // Wait until all 10 async_write operations complete
std::cout << "All operations complete" << std::endl;
ioService.stop();
mainLoop.join();
Output on OSX:
Socket accepted!
Broken pipe
Read error: Connection reset by peer
Broken pipe
Waiting for all operations complete
Broken pipe
Broken pipe
Broken pipe
Broken pipe
Broken pipe
Broken pipe
Broken pipe
All operations complete
Output on Ubuntu 18.04:
Socket accepted!
Read error: End of file
Connection reset by peer
Waiting for all operations complete
Broken pipe
Broken pipe
The Linux version hangs on the done.get_future().get() line because the async_write completion handler is not called after several Broken pipe errors. I would expect every async_write operation to lead to a handler call regardless of the socket status, as in the OSX version.
Is this a bug in the Linux version?
Asio version: 1.14.0 (standalone)
You have a race condition on ioService.run().
The reference states:
The run() function blocks until all work has finished and there are no
more handlers to be dispatched.
You have to call reset() on ioService if the service stopped because it ran out of handlers.
A normal exit from the run() function implies that the io_context
object is stopped (the stopped() function returns true). Subsequent
calls to run(), run_one(), poll() or poll_one() will return
immediately unless there is a prior call to restart().
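As a small self-contained illustration of that rule (a sketch assuming standalone Asio, not taken from the question's program):
#include <iostream>
#include <asio.hpp>

int main()
{
    asio::io_service ioService;

    asio::post(ioService, [] { std::cout << "first handler" << std::endl; });
    ioService.run();     // runs the handler, then returns: no more work, the service is stopped

    asio::post(ioService, [] { std::cout << "second handler" << std::endl; });
    ioService.run();     // returns immediately: the service is still in the stopped state

    ioService.restart(); // clear the stopped state (reset() in older versions)
    ioService.run();     // now the second handler actually runs
}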
The diagram below shows where the race condition occurs (the numbers in square brackets give the steps in time order, and each step is annotated with the thread it runs on):
[1] main thread:       async_accept
[2] background thread: ioService.run() starts
[3] background thread: handler for async_accept is called; connectedPromise.set_value();
[4] background thread: async_read_some
[5] main thread:       connectedPromise.get_future().get();
---> now here is the problem: which of [6.a] or [6.b] runs first?
[6.a] main thread:       async_write, which can push a new handler to be processed
or
[6.b] background thread: the handler for async_read_some; if this handler runs first,
      ioService.run() ends, and you have to call reset() on it before it will
      process new incoming handlers
In your case [6.b] happens. The handler for async_read_some is called first, and in this handler you don't initiate any new asynchronous operations. As a result, ioService.run() stops, and the handler for async_write is never invoked.
Try using an executor_work_guard to prevent ioService.run() from stopping when there are no handlers left to be dispatched.
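A minimal sketch of adding such a guard, assuming standalone Asio 1.14 where asio::make_work_guard is available (the posted handler is only a placeholder):
#include <iostream>
#include <thread>
#include <asio.hpp>

int main()
{
    asio::io_service ioService;

    // Keep ioService.run() from returning while it momentarily has no handlers.
    auto workGuard = asio::make_work_guard(ioService);

    std::thread mainLoop([&]() {
        ioService.run(); // blocks until the guard is reset or the service is stopped
    });

    // Handlers can now be posted at any time; run() will not exit in between.
    asio::post(ioService, [] { std::cout << "handler ran" << std::endl; });

    workGuard.reset(); // release the guard so run() can finish once the queue drains
    mainLoop.join();
}
With the guard in place, the order in which [6.a] and [6.b] happen no longer matters, because run() keeps waiting for further handlers either way.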

C++ asio provide async execution of thread

I have a simple server app. When a new client connects, it handles the request from the client and sends data back to it. My problem is providing asynchronous execution of the handler thread. Right now, when a handler thread is started, the acceptor loop stops and waits for the corresponding function to return.
The question is how to keep the acceptor loop going (so that other connections can be handled simultaneously) after starting a handler thread.
Server.h:
class Server
{
private:
    //Storage
    boost::asio::io_service service;
    boost::asio::ip::tcp::acceptor* acceptor;
    boost::mutex mtx;
    //Methods
    void acceptorLoop();
    void HandleRequest(boost::asio::ip::tcp::socket* clientSock);
public:
    Server();
};
Server.cpp
void Server::acceptorLoop()
{
    std::cout << "Waiting for clients..." << std::endl;
    while (TRUE)
    {
        boost::asio::ip::tcp::socket clientSock (service);
        acceptor->accept(clientSock); //new socket accepted
        std::cout << "New client joined! ";
        boost::thread request_thread (&Server::HandleRequest, this, &clientSock); //create a thread
        request_thread.join(); //here I start thread, but I want to continue acceptor loop and not wait until function return.
    }
}

void Server::HandleRequest(boost::asio::ip::tcp::socket* clientSock)
{
    if (clientSock->available())
    {
        //Works with socket
    }
}

Server::Server()
{
    acceptor = new boost::asio::ip::tcp::acceptor(service, boost::asio::ip::tcp::endpoint(boost::asio::ip::tcp::v4(), 8001));
    acceptorLoop(); //loop started
}
You have two main problems here:
Thread joining - you are waiting for the thread to finish before accepting a new connection.
Using a pointer to a socket created on the stack, which goes out of scope at the end of each loop iteration.
I recommend these changes:
boost::asio::ip::tcp::socket clientSock (service);
acceptor->accept(clientSock); //new socket accepted
std::cout << "New client joined! ";
std::thread{std::bind(&Server::HandleRequest, this, std::placeholders::_1), std::move(clientSock)}.detach();
And HandleRequest will change to this:
void Server::HandleRequest(boost::asio::ip::tcp::socket&& clientSock)
{
    if (clientSock.available())
    {
        //Works with socket
    }
}
You can also store the thread somewhere and join it later instead of detaching it.
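For example, a minimal sketch of keeping the spawned threads in a container so they can be joined during shutdown (the threads_ member and the destructor are hypothetical additions, and HandleRequest is the rvalue-reference overload shown above):
// Hypothetical member added to Server:
//   std::vector<std::thread> threads_;

void Server::acceptorLoop()
{
    std::cout << "Waiting for clients..." << std::endl;
    while (true)
    {
        boost::asio::ip::tcp::socket clientSock(service);
        acceptor->accept(clientSock);
        std::cout << "New client joined! ";
        // Keep the worker thread so it can be joined later instead of detached.
        threads_.emplace_back(&Server::HandleRequest, this, std::move(clientSock));
    }
}

Server::~Server()
{
    // Join all outstanding workers on shutdown.
    for (auto& t : threads_)
        if (t.joinable())
            t.join();
}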
So why do you call join? Join is about waiting for a thread to finish, and you say you don't want to wait for the thread, so, well... just don't call join?

Safely interrupt C++11 blocking operation

I have a std::thread that uses Boost's asio to read from a serial port:
std::atomic<bool> quit(false);

void serialThread()
{
    try
    {
        asio::io_service io;
        asio::serial_port port(io);
        port.open("COM9"); // Yeay no port enumeration support!
        port.set_option(asio::serial_port_base::baud_rate(9600));
        while (!quit)
        {
            asio::streambuf buf;
            asio::read_until(port, buf, "\n");
            auto it = asio::buffers_begin(buf.data());
            string line(it, it + buf.size());
            doStuffWithLine(line);
        }
    }
    catch (std::exception& e)
    {
        cout << "Serial thread error: " << e.what() << endl;
    }
}

void SetupSignals()
{
    // Arrange it so that `quit = true;` happens when Ctrl-C is pressed.
}

int main(int argc, char *argv[])
{
    SetupSignals();
    thread st(serialThread);
    st.join();
    return 0;
}
When I press Ctrl-C I want to cleanly exit the thread, so that all destructors are called appropriately (some drivers on Windows hate it if you don't close their resources properly).
Unfortunately as you can see, the current code blocks in read_until() so when you press Ctrl-C nothing will happen until a new line of text is received.
One solution is to use polling, something like this:
asio::async_read_until(port, buf, "\n", ...);
while (!quit)
    io.poll();
But I'd rather not use polling. It is pretty inelegant. The only solution I can currently see is to have a std::condition_variable quitOrIoFinished that is triggered either when quit is set to true, or when the read finishes. But I didn't write asio so I can't give it a condition variable to wait on.
Is there any clean sane solution? In Go I would just use a select to wait on multiple channels, where one of them is a quit channel. I can't see a similar solution in C++ though.
Use an asio::signal_set to await the INT signal (control-C tends to send interrupt).
When it arrives, simply call cancel() on your IO objects with pending asynchronous operations. They will return with error_code equal to boost::asio::error::operation_aborted.
Now, if you have an io_service::work object, destroy it and all the threads running io_service::run() will return, so you can join them.
Note: Take care to synchronize access to your IO objects (e.g. when you invoke cancel() on them) because these objects are not thread-safe, unlike io_service and strand.
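A minimal sketch of that approach adapted to the question's serial-port loop, assuming the standalone asio namespace used in the snippet (doStuffWithLine and the "COM9" port name are placeholders carried over from the question):
#include <csignal>
#include <functional>
#include <iostream>
#include <string>
#include <asio.hpp>

void doStuffWithLine(const std::string& line) { std::cout << line << "\n"; }

int main()
{
    asio::io_service io;
    asio::serial_port port(io);
    port.open("COM9"); // placeholder port name from the question
    port.set_option(asio::serial_port_base::baud_rate(9600));

    // On Ctrl-C, cancel the pending read instead of setting an atomic flag.
    bool quitting = false;
    asio::signal_set signals(io, SIGINT);
    signals.async_wait([&](const asio::error_code&, int) {
        quitting = true;
        port.cancel(); // the pending read completes with operation_aborted
    });

    asio::streambuf buf;
    std::function<void(const asio::error_code&, std::size_t)> onLine =
        [&](const asio::error_code& ec, std::size_t) {
            if (ec || quitting) return; // cancelled or real error: stop reading
            std::istream is(&buf);
            std::string line;
            std::getline(is, line);
            doStuffWithLine(line);
            asio::async_read_until(port, buf, "\n", onLine); // queue the next line
        };
    asio::async_read_until(port, buf, "\n", onLine);

    io.run(); // returns once the read is cancelled and no work remains
    // Destructors run here, closing the port cleanly.
}
Because everything runs on the single thread inside io.run(), no extra synchronization around the port is needed when cancel() is invoked from the signal handler.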

How do I send a SIGTERM or SIGINT signal to the server in the boost HTTP Server 3 example?

I am using the HTTP Server 3 example from boost as my learning tool (http://www.boost.org/doc/libs/1_53_0/doc/html/boost_asio/examples.html#boost_asio.examples.http_server_3) for asynchronous message handling.
I have taken the example and turned it into a library with a server object I can instantiate in my programs. The only thing I have done to the example is remove main.cpp and compile it as a library. And it works to the extent that I can instantiate the server object in my code and pass messages to it from the command line.
Where I am struggling is how to terminate the server gracefully. From the sample code I see this:
server::server(const std::string& address, const std::string& port,
    std::size_t thread_pool_size,
    Handler& handler)
  : thread_pool_size_(thread_pool_size),
    signals_(io_service_),
    acceptor_(io_service_),
    new_connection_(),
    request_handler_(handler)
{
    // Register to handle the signals that indicate when the server should exit.
    // It is safe to register for the same signal multiple times in a program,
    // provided all registration for the specified signal is made through Asio.
    signals_.add(SIGINT);
    signals_.add(SIGTERM);
    signals_.async_wait(boost::bind(&server::handle_stop, this));
So an asynchronous wait is set up to listen for these signals and respond to them.
I have implemented this server object in a thread in my program as follows:
class ServerWorker
{
public:
    ServerWorker(std::string theHost, std::string thePort)
    {
        Host = theHost;
        Port = thePort;
    }

    void Start()
    {
        try
        {
            MYRequestHandler handler;
            int nCores = boost::thread::hardware_concurrency();
            mServer = new server(Host, Port, nCores, handler);
            mServer->run();
        }
        catch(std::exception &e) { /* do something */ }
    }

    void Stop()
    {
        mServer->stop(); // this should raise a signal and send it to the server
                         // but don't know how to do it
    }

private:
    std::string Host;
    std::string Port;
    server *mServer;
};

TEST(BSGT_LBSSERVER_STRESS, BSGT_SINGLETON)
{
    // Launch as server on a new thread
    ServerWorker sw(BSGT_DEFAULT_IPADDRESS, BSGT_DEFAULT_PORT_STR);
    boost::function<void()> th_func = boost::bind(&ServerWorker::Start, &sw);
    boost::thread swThread = boost::thread(th_func);
    // DO SOMETHING
    // How do I signal the server in the swThread to stop?
}
How do I implement the stop() method on the server object to send the signal to itself? I have tried:
1) raise(SIGTERM) - kills the whole program
2) raise(SIGINT) - kills the whole program
raise() is appropriate for having a process signal itself.
void ServerWorker::Stop()
{
    std::raise(SIGTERM);
}
Be aware that raise() is asynchronous. It will issue the signal and return immediately. Hence, control may continue before the io_service processes the enqueued SignalHandler.
void run_server()
{
    // Launch as server on a new thread
    ServerWorker server_worker(...);
    boost::thread worker_thread([&server_worker]() { server_worker.Start(); });
    ...
    // Raises SIGTERM. May return before io_service is stopped.
    server_worker.Stop();
    // Need to synchronize with worker_thread. The `worker_thread` may still be
    // in `ServerWorker::Start()` which would go out of scope. Additionally,
    // the `worker_thread` is joinable, so its destructor may invoke
    // `std::terminate()`.
}
Here is a minimal example demonstrating using Boost.Asio signal handling, raise(), and synchronization:
#include <cassert>
#include <csignal>
#include <iostream>
#include <thread>
#include <boost/asio.hpp>

int main()
{
    boost::asio::io_service io_service;
    // Prevent io_service from running out of work.
    boost::asio::io_service::work work(io_service);
    // Boost.Asio will register an internal handler for SIGTERM.
    boost::asio::signal_set signal_set(io_service, SIGTERM);
    signal_set.async_wait(
        [&io_service](
            const boost::system::error_code& error,
            int signal_number)
        {
            std::cout << "Got signal " << signal_number << "; "
                         "stopping io_service." << std::endl;
            io_service.stop();
        });

    // Raise SIGTERM.
    std::raise(SIGTERM);

    // By the time raise() returns, Boost.Asio has handled SIGTERM with its
    // own internal handler, queuing it internally. At this point, Boost.Asio
    // is ready to dispatch this notification to a user signal handler
    // (i.e. those provided to signal_set.async_wait()) within the
    // io_service event loop.
    std::cout << "io_service stopped? " << io_service.stopped() << std::endl;
    assert(false == io_service.stopped());

    // Initiate thread that will run the io_service. This will invoke
    // the queued handler that is ready for completion.
    std::thread work_thread([&io_service]() { io_service.run(); });

    // Synchronize on the work_thread, letting it run to completion.
    work_thread.join();

    // The io_service has been explicitly stopped in the async_wait
    // handler.
    std::cout << "io_service stopped? " << io_service.stopped() << std::endl;
    assert(true == io_service.stopped());
}
Output:
io_service stopped? 0
Got signal 15; stopping io_service.
io_service stopped? 1

Matching boost::deadline_timer callbacks to corresponding async_wait

Consider this short code snippet where one boost::deadline_timer interrupts another:
#include <iostream>
#include <boost/bind.hpp>
#include <boost/function.hpp>
#include <boost/asio.hpp>

static boost::asio::io_service io;
boost::asio::deadline_timer timer1(io);
boost::asio::deadline_timer timer2(io);

static void timer1_handler1(const boost::system::error_code& error)
{
    std::cout << __PRETTY_FUNCTION__ << " time:" << time(0) << " error:" << error.message() << " expect:Operation canceled." << std::endl;
}

static void timer1_handler2(const boost::system::error_code& error)
{
    std::cout << __PRETTY_FUNCTION__ << " time:" << time(0) << " error:" << error.message() << " expect:success." << std::endl;
}

static void timer2_handler1(const boost::system::error_code& error)
{
    std::cout << __PRETTY_FUNCTION__ << " time:" << time(0) << " error:" << error.message() << " expect:success." << std::endl;
    std::cout << "cancel and restart timer1. Bind to timer1_handler2" << std::endl;
    timer1.cancel();
    timer1.expires_from_now(boost::posix_time::milliseconds(10000));
    timer1.async_wait(boost::bind(timer1_handler2, boost::asio::placeholders::error));
}

int main()
{
    std::cout << "Start timer1. Bind to timer1_handler1." << std::endl;
    timer1.expires_from_now(boost::posix_time::milliseconds(2000));
    timer1.async_wait(boost::bind(timer1_handler1, boost::asio::placeholders::error));
    std::cout << "Start timer2. Bind to timer2_handler1. Will interrupt timer1." << std::endl;
    timer2.expires_from_now(boost::posix_time::milliseconds(2000));
    timer2.async_wait(boost::bind(timer2_handler1, boost::asio::placeholders::error));
    std::cout << "Run the boost io service." << std::endl;
    io.run();
    return 0;
}
If the time for timer2 is varied around the 2 second mark, sometimes timer1_handler1 reports success and sometimes operation cancelled. This is probably deterministic in this trivial example because we know what time timer2 is set to.
./timer1
Start timer1. Bind to timer1_handler1.
Start timer2. Bind to timer2_handler1. Will interrupt timer1.
Run the boost io service.
void timer1_handler1(const boost::system::error_code&) time:1412680360 error:Success expect:Operation canceled.
void timer2_handler1(const boost::system::error_code&) time:1412680360 error:Success expect:success.
cancel and restart timer1. Bind to timer1_handler2
void timer1_handler2(const boost::system::error_code&) time:1412680370 error:Success expect:success.
This represents a more complex system where timer1 is implementing a timeout, and timer2 is really an asynchronous socket. Occasionally I've observed a scenario where timer1 is cancelled too late, and the first handler returns after the second async_wait() has been called, thus giving a spurious timeout.
Clearly I need to match up the handler callbacks with the corresponding async_wait() call. Is there a convenient way of doing this?
One convenient way of solving the posed problem, managing higher-level asynchronous operations composed of multiple non-chained asynchronous operations, is by using the approach used in the official Boost timeout example. Within it, handlers make decisions by examining current state, rather than coupling handler logic with an expected or provided state.
Before working on a solution, it is important to identify all possible cases of handler execution. When the io_service is run, a single iteration of the event loop will execute all operations that are ready to run, and upon completion of each operation, the user's completion handler is queued with an error_code indicating the operation's status. The io_service will then invoke the queued completion handlers. Hence, in a single iteration, all ready-to-run operations are executed in an unspecified order before completion handlers, and the order in which completion handlers are invoked is unspecified. For instance, when composing an async_read_with_timeout() operation from async_read() and async_wait(), where either operation is only cancelled within the other operation's completion handler, the following cases are possible:
async_read() runs and async_wait() is not ready to run, then async_read()'s completion handler is invoked and cancels async_wait(), causing async_wait()'s completion handler to run with an error of boost::asio::error::operation_aborted.
async_read() is not ready to run and async_wait() runs, then async_wait()'s completion handler is invoked and cancels async_read(), causing async_read()'s completion handler to run with an error of boost::asio::error::operation_aborted.
async_read() and async_wait() run, then async_read()'s completion handler is invoked first, but the async_wait() operation has already completed and cannot be cancelled, so async_wait()'s completion handler will run with no error.
async_read() and async_wait() run, then async_wait()'s completion handler is invoked first, but the async_read() operation has already completed and cannot be cancelled, so async_read()'s completion handler will run with no error.
The completion handler's error_code indicates the status of the operation and does not reflect changes in state resulting from other completion handlers; therefore, when the error_code is successful, one may need to examine the current state to perform conditional branching. However, before introducing additional state, it can be worth taking the effort to examine the goal of the higher-level operation and what state is already available. For this example, let's define that the goal of async_read_with_timeout() is to close a socket if data has not been received before a deadline has been reached. For state, the socket is either open or closed; the timer provides the expiration time; and the system clock provides the current time. After examining the goal and available state information, one may propose that:
async_wait()'s handler should only close the socket if the timer's current expiration time is in the past.
async_read()'s handler should set the timer's expiration time into the future.
With that approach, if async_read()'s completion handler runs before async_wait(), then either async_wait() will be cancelled or async_wait()'s completion handler will not close the connection, as the current expiration time is in the future. On the other hand, if async_wait()'s completion handler runs before async_read(), then either async_read() will be cancelled or async_read()'s completion handler can detect that the socket is closed.
Here is a complete minimal example demonstrating this approach for various use cases:
#include <cassert>
#include <iostream>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>

class client
{
public:
    // This demo is only using status for asserting code paths. It is not
    // necessary nor should it be used for conditional branching.
    enum status_type
    {
        unknown,
        timeout,
        read_success,
        read_failure
    };

public:
    client(boost::asio::ip::tcp::socket& socket)
      : strand_(socket.get_io_service()),
        timer_(socket.get_io_service()),
        socket_(socket),
        status_(unknown)
    {}

    status_type status() const { return status_; }

    void async_read_with_timeout(boost::posix_time::seconds seconds)
    {
        strand_.post(boost::bind(
            &client::do_async_read_with_timeout, this, seconds));
    }

private:
    void do_async_read_with_timeout(boost::posix_time::seconds seconds)
    {
        // Start a timeout for the read.
        timer_.expires_from_now(seconds);
        timer_.async_wait(strand_.wrap(boost::bind(
            &client::handle_wait, this,
            boost::asio::placeholders::error)));
        // Start the read operation.
        boost::asio::async_read(socket_,
            boost::asio::buffer(buffer_),
            strand_.wrap(boost::bind(
                &client::handle_read, this,
                boost::asio::placeholders::error,
                boost::asio::placeholders::bytes_transferred)));
    }

    void handle_wait(const boost::system::error_code& error)
    {
        // On error, such as cancellation, return early.
        if (error)
        {
            std::cout << "timeout cancelled" << std::endl;
            return;
        }
        // The timer may have expired, but it is possible that handle_read()
        // ran successfully and updated the timer's expiration:
        // - a new timeout has been started. For example, handle_read() ran and
        //   invoked do_async_read_with_timeout().
        // - there are no pending timeout reads. For example, handle_read() ran
        //   but did not invoke do_async_read_with_timeout().
        if (timer_.expires_at() > boost::asio::deadline_timer::traits_type::now())
        {
            std::cout << "timeout occured, but handle_read ran first" << std::endl;
            return;
        }
        // Otherwise, a timeout has occurred and handle_read() has not executed, so
        // close the socket, cancelling the read operation.
        std::cout << "timeout occured" << std::endl;
        status_ = client::timeout;
        boost::system::error_code ignored_ec;
        socket_.close(ignored_ec);
    }

    void handle_read(
        const boost::system::error_code& error,
        std::size_t bytes_transferred)
    {
        // Update timeout state to indicate handle_read() has run. This
        // cancels any pending timeouts.
        timer_.expires_at(boost::posix_time::pos_infin);
        // On error, return early.
        if (error)
        {
            std::cout << "read failed: " << error.message() << std::endl;
            // Only set status if it is unknown.
            if (client::unknown == status_) status_ = client::read_failure;
            return;
        }
        // The read was successful, but if a timeout occurred and handle_wait()
        // ran first, then the socket is closed, so return early.
        if (!socket_.is_open())
        {
            std::cout << "read was succesful but timeout occured" << std::endl;
            return;
        }
        std::cout << "read was succesful" << std::endl;
        status_ = client::read_success;
    }

private:
    boost::asio::io_service::strand strand_;
    boost::asio::deadline_timer timer_;
    boost::asio::ip::tcp::socket& socket_;
    char buffer_[1];
    status_type status_;
};
// This example is not interested in the connect handlers, so provide a noop
// function that will be passed to bind to meet the handler concept
// requirements.
void noop() {}

/// @brief Create a connection between the server and client socket.
void connect_sockets(
    boost::asio::ip::tcp::acceptor& acceptor,
    boost::asio::ip::tcp::socket& server_socket,
    boost::asio::ip::tcp::socket& client_socket)
{
    boost::asio::io_service& io_service = acceptor.get_io_service();
    acceptor.async_accept(server_socket, boost::bind(&noop));
    client_socket.async_connect(acceptor.local_endpoint(), boost::bind(&noop));
    io_service.reset();
    io_service.run();
    io_service.reset();
}
int main()
{
    using boost::asio::ip::tcp;
    boost::asio::io_service io_service;
    tcp::acceptor acceptor(io_service, tcp::endpoint(tcp::v4(), 0));

    // Scenario 1: timeout
    // The server writes no data, causing a client timeout to occur.
    {
        std::cout << "[Scenario 1: timeout]" << std::endl;
        // Create and connect I/O objects.
        tcp::socket server_socket(io_service);
        tcp::socket client_socket(io_service);
        connect_sockets(acceptor, server_socket, client_socket);
        // Start read with timeout on client.
        client client(client_socket);
        client.async_read_with_timeout(boost::posix_time::seconds(0));
        // Allow do_async_read_with_timeout to initiate the actual operations.
        io_service.run_one();
        // Run timeout and read operations.
        io_service.run();
        assert(client.status() == client::timeout);
    }

    // Scenario 2: no timeout, succesful read
    // The server writes data and the io_service is run before the timer
    // expires. In this case, the async_read operation will complete and
    // cancel the async_wait.
    {
        std::cout << "[Scenario 2: no timeout, succesful read]" << std::endl;
        // Create and connect I/O objects.
        tcp::socket server_socket(io_service);
        tcp::socket client_socket(io_service);
        connect_sockets(acceptor, server_socket, client_socket);
        // Start read with timeout on client.
        client client(client_socket);
        client.async_read_with_timeout(boost::posix_time::seconds(10));
        // Allow do_async_read_with_timeout to initiate the actual operations.
        io_service.run_one();
        // Write to client.
        boost::asio::write(server_socket, boost::asio::buffer("test"));
        // Run timeout and read operations.
        io_service.run();
        assert(client.status() == client::read_success);
    }

    // Scenario 3: no timeout, failed read
    // The server closes the connection before the timeout, causing the
    // async_read operation to fail and cancel the async_wait operation.
    {
        std::cout << "[Scenario 3: no timeout, failed read]" << std::endl;
        // Create and connect I/O objects.
        tcp::socket server_socket(io_service);
        tcp::socket client_socket(io_service);
        connect_sockets(acceptor, server_socket, client_socket);
        // Start read with timeout on client.
        client client(client_socket);
        client.async_read_with_timeout(boost::posix_time::seconds(10));
        // Allow do_async_read_with_timeout to initiate the actual operations.
        io_service.run_one();
        // Close the socket.
        server_socket.close();
        // Run timeout and read operations.
        io_service.run();
        assert(client.status() == client::read_failure);
    }

    // Scenario 4: timeout and read success
    // The server writes data, but the io_service is not run until the
    // timer has had time to expire. In this case, both the async_wait and
    // async_read operations complete, but the order in which the
    // handlers run is indeterminate.
    {
        std::cout << "[Scenario 4: timeout and read success]" << std::endl;
        // Create and connect I/O objects.
        tcp::socket server_socket(io_service);
        tcp::socket client_socket(io_service);
        connect_sockets(acceptor, server_socket, client_socket);
        // Start read with timeout on client.
        client client(client_socket);
        client.async_read_with_timeout(boost::posix_time::seconds(0));
        // Allow do_async_read_with_timeout to initiate the actual operations.
        io_service.run_one();
        // Allow the timeout to expire, then write to the client, causing both
        // operations to complete with success.
        boost::this_thread::sleep_for(boost::chrono::seconds(1));
        boost::asio::write(server_socket, boost::asio::buffer("test"));
        // Run timeout and read operations.
        io_service.run();
        assert( (client.status() == client::timeout)
             || (client.status() == client::read_success));
    }
}
And its output:
[Scenario 1: timeout]
timeout occured
read failed: Operation canceled
[Scenario 2: no timeout, succesful read]
read was succesful
timeout cancelled
[Scenario 3: no timeout, failed read]
read failed: End of file
timeout cancelled
[Scenario 4: timeout and read success]
read was succesful
timeout occured, but handle_read ran first
You can boost::bind additional parameters to the completion handler, which can be used to identify the source.
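For instance, a minimal sketch of binding a generation counter so each handler can tell whether it belongs to the most recent async_wait() (the generation variable and start_timer1() helper are illustrative, not part of the question's code):
#include <iostream>
#include <boost/asio.hpp>
#include <boost/bind.hpp>

static boost::asio::io_service io;
boost::asio::deadline_timer timer1(io);
unsigned generation = 0; // incremented every time timer1 is (re)started

static void timer1_handler(const boost::system::error_code& error, unsigned my_generation)
{
    if (my_generation != generation)
    {
        // This completion belongs to an earlier async_wait() that has since
        // been cancelled or superseded; ignore it.
        std::cout << "stale handler for generation " << my_generation
                  << " error:" << error.message() << std::endl;
        return;
    }
    std::cout << "current handler, error:" << error.message() << std::endl;
}

static void start_timer1(boost::posix_time::time_duration delay)
{
    timer1.cancel();
    timer1.expires_from_now(delay);
    // Bind the current generation as an extra argument to the handler.
    timer1.async_wait(boost::bind(&timer1_handler,
        boost::asio::placeholders::error, ++generation));
}

int main()
{
    start_timer1(boost::posix_time::milliseconds(2000));
    start_timer1(boost::posix_time::milliseconds(10)); // cancels and restarts timer1
    io.run();
    return 0;
}
Here the stale completion (whether it reports Operation canceled or Success) is recognized purely by its bound generation value, independent of its error_code.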