How do I send a SIGTERM or SIGINT signal to the server in the Boost HTTP Server 3 example? - c++

I am using the HTTP Server 3 example from Boost as my learning tool (http://www.boost.org/doc/libs/1_53_0/doc/html/boost_asio/examples.html#boost_asio.examples.http_server_3) for asynchronous message handling.
I have taken the example and turned it into a library with a server object I can instantiate in my programs. The only thing I have done to the above example is remove the main.cpp and compile it as a library. It works to the extent that I can instantiate the server object in my code and pass messages to it from the command line.
Where I am struggling is how to terminate the server gracefully. From the sample code I see this:
server::server(const std::string& address, const std::string& port,
std::size_t thread_pool_size,
Handler &handler)
: thread_pool_size_(thread_pool_size),
signals_(io_service_),
acceptor_(io_service_),
new_connection_(),
request_handler_(handler)
{
// Register to handle the signals that indicate when the server should exit.
// It is safe to register for the same signal multiple times in a program,
// provided all registration for the specified signal is made through Asio.
signals_.add(SIGINT);
signals_.add(SIGTERM);
signals_.async_wait(boost::bind(&server::handle_stop, this));
So an asynchronous wait is set up to listen for those signals and respond to them.
I have implemented this server object in a thread in my program as follows:
class ServerWorker
{
public:
ServerWorker(std::string theHost, std::string thePort)
{
Host = theHost;
Port = thePort;
}
void Start()
{
try
{
MYRequestHandler handler;
int nCores = boost::thread::hardware_concurrency();
mServer = new server(Host, Port, nCores, handler);
mServer->run();
}
catch(std::exception &e) { /* do something */ }
}
void Stop()
{
mServer->stop(); // this should raise a signal and send it to the server
// but don't know how to do it
}
private:
std::string Host;
std::string Port;
server *mServer;
};
TEST(BSGT_LBSSERVER_STRESS, BSGT_SINGLETON)
{
// Launch as server on a new thread
ServerWorker sw(BSGT_DEFAULT_IPADDRESS, BSGT_DEFAULT_PORT_STR);
boost::function<void()> th_func = boost::bind(&ServerWorker::Start, &sw);
boost::thread swThread = boost::thread(th_func);
// DO SOMETHING
// How do I signal the server in the swThread to stop?
}
How do I implement the stop() method on the server object to send the signal to itself? I have tried:
1) raise(SIGTERM) - kills the whole program
2) raise(SIGINT) - kills the whole program

raise() is appropriate for having a process signal itself.
void ServerWorker::Stop()
{
std::raise(SIGTERM);
}
Be aware that the overall stop is asynchronous: raise() issues the signal and returns immediately, so control may continue before the io_service has processed the enqueued SignalHandler.
void run_server()
{
// Launch as server on a new thread
ServerWorker server_worker(...);
boost::thread worker_thread([&server_worker]() { server_worker.Start(); });
...
// Raises SIGTERM. May return before io_service is stopped.
server_worker.Stop();
// Need to synchronize with worker_thread. The `worker_thread` may still be
// in `ServerWorker::Start()` which would go out of scope. Additionally,
// the `worker_thread` is joinable, so its destructor may invoke
// `std::terminate()`.
}
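One way to address that, sketched under the assumption that ServerWorker keeps the interface from the question (with the member-pointer fix above), that Stop() raises SIGTERM as shown earlier, and that Start() returns once server::run() unwinds, is to join the worker thread after raising the signal. The constants come from the question's test.

void run_server()
{
    ServerWorker server_worker(BSGT_DEFAULT_IPADDRESS, BSGT_DEFAULT_PORT_STR);
    boost::thread worker_thread(&ServerWorker::Start, &server_worker);

    // ... exercise the server ...

    // Raise SIGTERM; Boost.Asio's signal_set will eventually invoke
    // server::handle_stop() within the io_service running on worker_thread.
    server_worker.Stop();

    // Block until Start() has returned, i.e. until server::run() has
    // unwound. After join() the thread is no longer joinable and
    // server_worker can safely go out of scope.
    worker_thread.join();
}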
Here is a minimal example demonstrating Boost.Asio signal handling, raise(), and synchronization:
#include <cassert>
#include <csignal>
#include <iostream>
#include <thread>
#include <boost/asio.hpp>
int main()
{
boost::asio::io_service io_service;
// Prevent io_service from running out of work.
boost::asio::io_service::work work(io_service);
// Boost.Asio will register an internal handler for SIGTERM.
boost::asio::signal_set signal_set(io_service, SIGTERM);
signal_set.async_wait(
[&io_service](
const boost::system::error_code& error,
int signal_number)
{
std::cout << "Got signal " << signal_number << "; "
"stopping io_service." << std::endl;
io_service.stop();
});
// Raise SIGTERM.
std::raise(SIGTERM);
// By the time raise() returns, Boost.Asio has handled SIGTERM with its
// own internal handler, queuing it internally. At this point, Boost.Asio
// is ready to dispatch this notification to a user signal handler
// (i.e. those provided to signal_set.async_wait()) within the
// io_service event loop.
std::cout << "io_service stopped? " << io_service.stopped() << std::endl;
assert(false == io_service.stopped());
// Initiate thread that will run the io_service. This will invoke
// the queued handler that is ready for completion.
std::thread work_thread([&io_service]() { io_service.run(); });
// Synchronize on the work_thread, letting it run to completion.
work_thread.join();
// The io_service has been explicitly stopped in the async_wait
// handler.
std::cout << "io_service stopped? " << io_service.stopped() << std::endl;
assert(true == io_service.stopped());
}
Output:
io_service stopped? 0
Got signal 15; stopping io_service.
io_service stopped? 1

Related

boost::asio completion handler on async_connect never called again after first failure

I'm writing a small client class that uses boost asio to connect to a remote socket. It should be able to try to reconnect if the initial connection failed.
When testing that scenario, i.e. when there is no open remote socket, the completion handler of async_connect gets called correctly the first time. But my completion handler is never called again for the second attempt, when m_state goes into State_Connect again. What am I doing wrong?
class Test
{
public:
Test() : m_socket(m_io)
{
}
void update()
{
switch (m_state)
{
case State_Connect:
std::cout << "Start connect\n";
m_socket.async_connect(tcp::endpoint(tcp::v4(), 33000),
boost::bind(&Test::onCompleted, this, asio::placeholders::error));
m_state = State_Connecting;
break;
case State_Connecting:
if (m_error)
{
m_error.clear();
std::cout << "Could not connect\n";
m_state = State_Connect;
}
break;
}
m_io.poll_one();
}
private:
void onCompleted(const bs::error_code& error)
{
if (error)
{
m_error = error;
m_socket.close();
}
}
enum State
{
State_Connect,
State_Connecting,
};
State m_state = State_Connect;
asio::io_service m_io;
tcp::socket m_socket;
bs::error_code m_error;
};
int main(int argc, char* argv[])
{
Test test;
for (;;)
{
test.update();
boost::this_thread::sleep(boost::posix_time::milliseconds(20));
}
return 0;
}
Output is:
Start connect
Could not connect
Start connect
But I expect it to repeat indefinitely.
Reference
When an io_context object is stopped, calls to run(), run_one(),
poll() or poll_one() will return immediately without invoking any
handlers.
When you call poll_one() and no handler is ready to run, poll_one() marks the io_service as stopped. While m_state is State_Connecting there is nothing for poll_one() to do, and at that moment the io_service is marked as stopped due to its empty queue of handlers.
You can test whether the io_service is stopped and, if so, call reset():
if (m_io.stopped())
m_io.reset();
m_io.poll_one();
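Applied to the update() function from the question, a minimal sketch (same members as the Test class above; newer Boost versions also provide restart() for this):

void update()
{
    switch (m_state)
    {
    case State_Connect:
        std::cout << "Start connect\n";
        m_socket.async_connect(tcp::endpoint(tcp::v4(), 33000),
            boost::bind(&Test::onCompleted, this, asio::placeholders::error));
        m_state = State_Connecting;
        break;
    case State_Connecting:
        if (m_error)
        {
            m_error.clear();
            std::cout << "Could not connect\n";
            m_state = State_Connect;
        }
        break;
    }
    // A previous poll_one() with nothing to do marked the io_service as
    // stopped; clear that flag so the next completion handler can run.
    if (m_io.stopped())
        m_io.reset();
    m_io.poll_one();
}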

Handling multiple connections using QThreadPool

Consider a situation where you need to maintain 256 TCP connections to devices just for occasionally sending commands. I want to do this in parallel (it needs to block until it gets the response). I'm trying to use QThreadPool for this purpose, but I have some doubts about whether it is possible.
I tried to use QRunnable, but I'm not sure how sockets behave between threads (should sockets be used only in the thread they were created in?).
I'm also worried about the efficiency of this solution; I would be glad if somebody could propose some alternatives, not necessarily using Qt.
Below I'm posting some snippets of the code.
class Task : public QRunnable {
Task(){
//creating TaskSubclass instance and socket in it
}
private:
TaskSubclass *sub;
void run() override {
//some debug info and variable setting...
sub->doSomething( args );
return;
}
};
class TaskSubclass {
Socket *sock; // socket instance
//...
void doSomething( args )
{
//writing to socket here
}
};
class MainProgram : public QObject{
Q_OBJECT
private:
QThreadPool *pool;
Task *tasks;
public:
MainProgram(){
pool = new QThreadPool(this);
//create tasks here
}
void run(){
//decide which task to start
pool->start(tasks[i]);
}
};
My favorite solution for this problem is multiplexing your sockets using select(). That way you don't need to create additional threads, and it is a "very POSIX" way to do it.
See, for example, this tutorial:
http://www.binarytides.com/multiple-socket-connections-fdset-select-linux/
Or a related question in:
Using select(..) on client
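For reference, the core of a select()-based loop looks roughly like this; it is a bare POSIX sketch without Qt, and the descriptor list and the response handling are placeholders:

#include <sys/select.h>
#include <unistd.h>
#include <algorithm>
#include <vector>

// Wait for data on any of the already-connected descriptors and read it.
void poll_sockets(std::vector<int>& fds)
{
    fd_set read_set;
    FD_ZERO(&read_set);
    int max_fd = -1;
    for (int fd : fds)
    {
        FD_SET(fd, &read_set);
        max_fd = std::max(max_fd, fd);
    }

    // Blocks until at least one socket is readable; a timeval can be passed
    // instead of the last nullptr to add a timeout.
    if (select(max_fd + 1, &read_set, nullptr, nullptr, nullptr) > 0)
    {
        for (int fd : fds)
        {
            if (FD_ISSET(fd, &read_set))
            {
                char buffer[1024];
                ssize_t n = read(fd, buffer, sizeof(buffer));
                (void)n; // hand the bytes to whatever parses responses
            }
        }
    }
}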
As OMD_AT has already pointed out, the best solution is to use select() and let the kernel do the job for you :-)
Here is an example of an async single-threaded approach and a synchronous multi-threaded approach.
In this example we create 10 connections to a Google web service and make a simple GET request to the server; we measure how long all the connections in each approach need to receive the response from the Google server.
Be aware that you should use a faster web server for a real test, such as one on localhost, because network latency has a big impact on the result.
#include <QCoreApplication>
#include <QTcpSocket>
#include <QtConcurrent/QtConcurrentRun>
#include <QElapsedTimer>
#include <QAtomicInt>
class Task : public QRunnable
{
public:
Task() : QRunnable() {}
static QAtomicInt counter;
static QElapsedTimer timer;
virtual void run() override
{
QTcpSocket* socket = new QTcpSocket();
socket->connectToHost("www.google.com", 80);
socket->write("GET / HTTP/1.1\r\nHost: www.google.com\r\n\r\n");
socket->waitForReadyRead();
if(!--counter) {
qDebug("Multiple Threads elapsed: %lld nanoseconds", timer.nsecsElapsed());
}
}
};
QAtomicInt Task::counter;
QElapsedTimer Task::timer;
int main(int argc, char *argv[])
{
QCoreApplication app(argc, argv);
// init
int connections = 10;
Task::counter = connections;
QElapsedTimer timer;
/// Async via One Thread (Select)
// handle the data
auto dataHandler = [&timer,&connections](QByteArray data) {
Q_UNUSED(data);
if(!--connections) qDebug(" Single Threads elapsed: %lld nanoseconds", timer.nsecsElapsed());
};
// create 10 connection to google.com and send an http get request
timer.start();
for(int i = 0; i < connections; i++) {
QTcpSocket* socket = new QTcpSocket();
socket->connectToHost("www.google.com", 80);
socket->write("GET / HTTP/1.1\r\nHost: www.google.com\r\n\r\n");
QObject::connect(socket, &QTcpSocket::readyRead, [dataHandler,socket]() {
dataHandler(socket->readAll());
});
}
/// Async via Multiple Threads
Task::timer.start();
for(int i = 0; i < connections; i++) {
QThreadPool::globalInstance()->start(new Task());
}
return app.exec();
}
Prints:
Multiple Threads elapsed: 62324598 nanoseconds
Single Threads elapsed: 63613967 nanoseconds
Although the answer is already accepted, I would like to share mine.
What I understood from your question: you have 256 currently active connections, and from time to time you send a request (a "command", as you named it) to one of them and wait for the response. You want to make this process multithreaded and, though you said "it needs to block until it gets the response", I assume you mean blocking the thread that handles the request-response exchange, not the main thread.
If I indeed understand the question correctly, here is how I suggest doing it with Qt:
#include <functional>
#include <QObject> // need to add "QT += core" in .pro
#include <QTcpSocket> // QT += network
#include <QtConcurrent> // QT += concurrent
#include <QFuture>
#include <QFutureWatcher>
class CommandSender : public QObject
{
Q_OBJECT
public:
// Sends a command via connection and blocks
// until the response arrives or timeout occurs
// then passes the response to a handler
// when the handler is done - unblocks
void SendCommand(
QTcpSocket* connection,
const Command& command,
void(*responseHandler)(Response&&))
{
const int timeout = 1000; // milliseconds, set it to -1 if you want no timeouts
// Sending a command (blocking)
connection->write(command.ToByteArray()); // See QByteArray for more details
if (!connection->waitForBytesWritten(timeout)) {
qDebug() << connection->errorString() << endl;
emit error(connection);
return;
}
// Waiting for a response (blocking)
QDataStream in{ connection };
QString message;
do {
if (!connection->waitForReadyRead(timeout)) {
qDebug() << connection->errorString() << endl;
emit error(connection);
return;
}
in.startTransaction();
in >> message;
} while (!in.commitTransaction());
responseHandler(Response{ message }); // Translate message to a response and handle it
}
// Non-blocking version of SendCommand
void SendCommandAsync(
QTcpSocket* connection,
const Command& command,
void(*responseHandler) (Response&&))
{
QFutureWatcher<void>* watcher = new QFutureWatcher<void>{ this };
connect(watcher, &QFutureWatcher<void>::finished, [this, connection, watcher] ()
{
emit done(connection);
watcher->deleteLater();
});
// Does not block,
// emits "done" when finished
QFuture<void> future
= QtConcurrent::run(this, &CommandSender::SendCommand, connection, command, responseHandler);
watcher->setFuture(future);
}
signals:
void done(QTcpSocket* connection);
void error(QTcpSocket* connection);
};
Now you can send a command to a socket using a separate thread taken from a thread pool: under the hood, QtConcurrent::run() uses the global QThreadPool instance provided by Qt for you. That thread blocks until it gets a response back and then handles it with responseHandler. Meanwhile, your main thread, which manages all your commands and sockets, stays unblocked. Just catch the done() signal, which tells you that the response was received and handled successfully.
One thing to note: the asynchronous version sends the request only when there is a free thread in the thread pool, and waits for one otherwise. Of course, that is the behavior of any thread pool (that is exactly the point of the pattern), but do not forget about it.
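If all 256 devices might need commanding at once, one option (my assumption, not something the approach above requires) is to raise the global pool's limit during startup so the blocking SendCommand() calls are not serialized:

#include <QThreadPool>

// The global pool defaults to QThread::idealThreadCount() threads; raise it
// if many blocking SendCommand() calls should be in flight concurrently.
QThreadPool::globalInstance()->setMaxThreadCount(256);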
Also, I was writing this code without Qt at hand, so it may contain some errors.
Edit: As it turned out, this is not thread safe as sockets are not reentrant in Qt.
What you can do about it is associate a mutex with each socket and lock it each time you call one of its functions. This can be done easily by creating a wrapper around the QTcpSocket class, as sketched below. Please correct me if I am wrong.
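A minimal sketch of such a wrapper, using a hypothetical LockedSocket class purely to illustrate serializing access to one QTcpSocket:

#include <QMutex>
#include <QMutexLocker>
#include <QTcpSocket>
#include <QByteArray>

// Hypothetical wrapper: every operation on the underlying socket is guarded
// by the same mutex, so two pool threads cannot touch it at the same time.
class LockedSocket
{
public:
    explicit LockedSocket(QTcpSocket* socket) : socket_(socket) {}

    qint64 write(const QByteArray& data)
    {
        QMutexLocker lock(&mutex_);
        return socket_->write(data);
    }

    bool waitForReadyRead(int timeoutMs)
    {
        QMutexLocker lock(&mutex_);
        return socket_->waitForReadyRead(timeoutMs);
    }

    QByteArray readAll()
    {
        QMutexLocker lock(&mutex_);
        return socket_->readAll();
    }

private:
    QMutex mutex_;
    QTcpSocket* socket_;
};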

C++ asio provide async execution of thread

I have a simple server app. When a new client connects, it handles the request from the client and sends data back to it. My problem is making the request handling run asynchronously: right now, when a handler thread is started, the acceptor loop stops and waits for the corresponding function to return.
The question is how to continue the acceptor loop (so other connections can be handled simultaneously) after starting a handler thread.
Server.h:
class Server
{
private:
//Storage
boost::asio::io_service service;
boost::asio::ip::tcp::acceptor* acceptor;
boost::mutex mtx;
//Methods
void acceptorLoop();
void HandleRequest(boost::asio::ip::tcp::socket* clientSock);
public:
Server();
};
Server.cpp
void Server::acceptorLoop()
{
std::cout << "Waiting for clients..." << std::endl;
while (TRUE)
{
boost::asio::ip::tcp::socket clientSock (service);
acceptor->accept(clientSock); //new socket accepted
std::cout << "New client joined! ";
boost::thread request_thread (&Server::HandleRequest, this, &clientSock); //create a thread
request_thread.join(); //here I start thread, but I want to continue acceptor loop and not wait until function return.
}
}
void Server::HandleRequest(boost::asio::ip::tcp::socket* clientSock)
{
if (clientSock->available())
{
//Works with socket
}
}
Server::Server()
{
acceptor = new boost::asio::ip::tcp::acceptor(service, boost::asio::ip::tcp::endpoint(boost::asio::ip::tcp::v4(), 8001));
acceptorLoop(); //loop started
}
You have two main problems here:
Thread joining: you are waiting for the thread to finish before accepting a new connection
Using a pointer to a socket created on the stack
I recommend these changes:
boost::asio::ip::tcp::socket clientSock (service);
acceptor->accept(clientSock); //new socket accepted
std::cout << "New client joined! ";
std::thread{std::bind(&Server::HandleRequest, this, std::placeholders::_1), std::move(clientSock)}.detach();
And HandleRequest will change to this:
void Server::HandleRequest(boost::asio::ip::tcp::socket&& clientSock)
{
if (clientSock.available())
{
//Works with socket
}
}
You can also store the thread somewhere and join it later instead of detaching it.
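A sketch of that variant, assuming Server gains a std::vector<std::thread> member (here called workers_):

// In the accept loop: keep the thread handle instead of detaching it.
// The socket is moved into the thread, which forwards it to HandleRequest.
workers_.emplace_back(&Server::HandleRequest, this, std::move(clientSock));

// Later, e.g. during shutdown:
for (std::thread& t : workers_)
{
    if (t.joinable())
        t.join();
}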
So why do you call join? Join is about waiting for a thread to finish, and you say you don't want to wait for the thread, so, well... just don't call join?

Matching boost::deadline_timer callbacks to corresponding wait_async

Consider this short code snippet where one boost::deadline_timer interrupts another:
#include <iostream>
#include <boost/bind.hpp>
#include <boost/function.hpp>
#include <boost/asio.hpp>
static boost::asio::io_service io;
boost::asio::deadline_timer timer1(io);
boost::asio::deadline_timer timer2(io);
static void timer1_handler1(const boost::system::error_code& error)
{
std::cout << __PRETTY_FUNCTION__ << " time:" << time(0) << " error:" << error.message() << " expect:Operation canceled." << std::endl;
}
static void timer1_handler2(const boost::system::error_code& error)
{
std::cout << __PRETTY_FUNCTION__ << " time:" << time(0) << " error:" << error.message() << " expect:success." << std::endl;
}
static void timer2_handler1(const boost::system::error_code& error)
{
std::cout << __PRETTY_FUNCTION__ << " time:" << time(0) << " error:" << error.message() << " expect:success." << std::endl;
std::cout << "cancel and restart timer1. Bind to timer1_handler2" << std::endl;
timer1.cancel();
timer1.expires_from_now(boost::posix_time::milliseconds(10000));
timer1.async_wait(boost::bind(timer1_handler2, boost::asio::placeholders::error));
}
int main()
{
std::cout << "Start timer1. Bind to timer1_handler1." << std::endl;
timer1.expires_from_now(boost::posix_time::milliseconds(2000));
timer1.async_wait(boost::bind(timer1_handler1, boost::asio::placeholders::error));
std::cout << "Start timer2. Bind to timer2_handler1. Will interrupt timer1." << std::endl;
timer2.expires_from_now(boost::posix_time::milliseconds(2000));
timer2.async_wait(boost::bind(timer2_handler1, boost::asio::placeholders::error));
std::cout << "Run the boost io service." << std::endl;
io.run();
return 0;
}
If the time for timer2 is varied around the 2 second mark, sometimes timer1_handler1 reports success, and sometimes operation cancelled. This is probably determinate in the trivial example because we know what time timer2 is set to.
./timer1
Start timer1. Bind to timer1_handler1.
Start timer2. Bind to timer2_handler1. Will interrupt timer1.
Run the boost io service.
void timer1_handler1(const boost::system::error_code&) time:1412680360 error:Success expect:Operation canceled.
void timer2_handler1(const boost::system::error_code&) time:1412680360 error:Success expect:success.
cancel and restart timer1. Bind to timer1_handler2
void timer1_handler2(const boost::system::error_code&) time:1412680370 error:Success expect:success.
This represents a more complex system where timer1 is implementing a timeout, and timer2 is really an asynchronous socket. Occasionally I've observed a scenario where timer1 is cancelled too late, and the first handler returns after the second async_wait() has been called, thus giving a spurious timeout.
Clearly I need to match up the handler callbacks with the corresponding async_wait() call. Is there a convenient way of doing this?
One convenient way of solving the posed problem, managing higher-level asynchronous operations composed of multiple non-chained asynchronous operations, is to use the approach from the official Boost.Asio timeout example. Within it, handlers make decisions by examining current state, rather than coupling handler logic with an expected or provided state.
Before working on a solution, it is important to identify all possible cases of handler execution. When the io_service is run, a single iteration of the event loop will execute all operations that are ready to run, and upon completion of an operation, the user's completion handler is queued with an error_code indicating the operation's status. The io_service will then invoke the queued completion handlers. Hence, in a single iteration, all ready-to-run operations are executed in an unspecified order before completion handlers, and the order in which completion handlers are invoked is also unspecified. For instance, when composing an async_read_with_timeout() operation from async_read() and async_wait(), where either operation is only cancelled within the other operation's completion handler, the following cases are possible:
async_read() runs and async_wait() is not ready to run, then async_read()'s completion handler is invoked and cancels async_wait(), causing async_wait()'s completion handler to run with an error of boost::asio::error::operation_aborted.
async_read() is not ready to run and async_wait() runs, then async_wait()'s completion handler is invoked and cancels async_read(), causing async_read()'s completion handler to run with an error of boost::asio::error::operation_aborted.
async_read() and async_wait() run, then async_read()'s completion handler is invoked first, but the async_wait() operation has already completed and cannot be cancelled, so async_wait()'s completion handler will run with no error.
async_read() and async_wait() run, then async_wait()'s completion handler is invoked first, but the async_read() operation has already completed and cannot be cancelled, so async_read()'s completion handler will run with no error.
The completion handler's error_code indicates the status of the operation and does not reflect changes in state resulting from other completion handlers; therefore, when the error_code indicates success, one may need to examine the current state to perform conditional branching. However, before introducing additional state, it can be worth taking the effort to examine the goal of the higher-level operation and what state is already available. For this example, let's define the goal of async_read_with_timeout() as closing the socket if data has not been received before a deadline has been reached. For state, the socket is either open or closed; the timer provides the expiration time; and the system clock provides the current time. After examining the goal and the available state information, one may propose that:
async_wait()'s handler should only close the socket if the timer's current expiration time is in the past.
async_read()'s handler should set the timer's expiration time into the future.
With that approach, if async_read()'s completion handler runs before async_wait(), then either async_wait() will be cancelled or async_wait()'s completion handler will not close the connection, as the current expiration time is in the future. On the other hand, if async_wait()'s completion handler runs before async_read(), then either async_read() will be cancelled or async_read()'s completion handler can detect that the socket is closed.
Here is a complete minimal example demonstrating this approach for various use cases:
#include <cassert>
#include <iostream>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>
class client
{
public:
// This demo is only using status for asserting code paths. It is not
// necessary nor should it be used for conditional branching.
enum status_type
{
unknown,
timeout,
read_success,
read_failure
};
public:
client(boost::asio::ip::tcp::socket& socket)
: strand_(socket.get_io_service()),
timer_(socket.get_io_service()),
socket_(socket),
status_(unknown)
{}
status_type status() const { return status_; }
void async_read_with_timeout(boost::posix_time::seconds seconds)
{
strand_.post(boost::bind(
&client::do_async_read_with_timeout, this, seconds));
}
private:
void do_async_read_with_timeout(boost::posix_time::seconds seconds)
{
// Start a timeout for the read.
timer_.expires_from_now(seconds);
timer_.async_wait(strand_.wrap(boost::bind(
&client::handle_wait, this,
boost::asio::placeholders::error)));
// Start the read operation.
boost::asio::async_read(socket_,
boost::asio::buffer(buffer_),
strand_.wrap(boost::bind(
&client::handle_read, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred)));
}
void handle_wait(const boost::system::error_code& error)
{
// On error, such as cancellation, return early.
if (error)
{
std::cout << "timeout cancelled" << std::endl;
return;
}
// The timer may have expired, but it is possible that handle_read()
// ran successfully and updated the timer's expiration:
// - a new timeout has been started. For example, handle_read() ran and
// invoked do_async_read_with_timeout().
// - there are no pending timeout reads. For example, handle_read() ran
// but did not invoke do_async_read_with_timeout();
if (timer_.expires_at() > boost::asio::deadline_timer::traits_type::now())
{
std::cout << "timeout occured, but handle_read ran first" << std::endl;
return;
}
// Otherwise, a timeout has occurred and handle_read() has not executed, so
// close the socket, cancelling the read operation.
std::cout << "timeout occured" << std::endl;
status_ = client::timeout;
boost::system::error_code ignored_ec;
socket_.close(ignored_ec);
}
void handle_read(
const boost::system::error_code& error,
std::size_t bytes_transferred)
{
// Update timeout state to indicate handle_read() has run. This
// cancels any pending timeouts.
timer_.expires_at(boost::posix_time::pos_infin);
// On error, return early.
if (error)
{
std::cout << "read failed: " << error.message() << std::endl;
// Only set status if it is unknown.
if (client::unknown == status_) status_ = client::read_failure;
return;
}
// The read was successful, but if a timeout occurred and handle_wait()
// ran first, then the socket is closed, so return early.
if (!socket_.is_open())
{
std::cout << "read was succesful but timeout occured" << std::endl;
return;
}
std::cout << "read was succesful" << std::endl;
status_ = client::read_success;
}
private:
boost::asio::io_service::strand strand_;
boost::asio::deadline_timer timer_;
boost::asio::ip::tcp::socket& socket_;
char buffer_[1];
status_type status_;
};
// This example is not interested in the connect handlers, so provide a noop
// function that will be passed to bind to meet the handler concept
// requirements.
void noop() {}
/// @brief Create a connection between the server and client socket.
void connect_sockets(
boost::asio::ip::tcp::acceptor& acceptor,
boost::asio::ip::tcp::socket& server_socket,
boost::asio::ip::tcp::socket& client_socket)
{
boost::asio::io_service& io_service = acceptor.get_io_service();
acceptor.async_accept(server_socket, boost::bind(&noop));
client_socket.async_connect(acceptor.local_endpoint(), boost::bind(&noop));
io_service.reset();
io_service.run();
io_service.reset();
}
int main()
{
using boost::asio::ip::tcp;
boost::asio::io_service io_service;
tcp::acceptor acceptor(io_service, tcp::endpoint(tcp::v4(), 0));
// Scenario 1: timeout
// The server writes no data, causing a client timeout to occur.
{
std::cout << "[Scenario 1: timeout]" << std::endl;
// Create and connect I/O objects.
tcp::socket server_socket(io_service);
tcp::socket client_socket(io_service);
connect_sockets(acceptor, server_socket, client_socket);
// Start read with timeout on client.
client client(client_socket);
client.async_read_with_timeout(boost::posix_time::seconds(0));
// Allow do_async_read_with_timeout to initiate actual operations.
io_service.run_one();
// Run timeout and read operations.
io_service.run();
assert(client.status() == client::timeout);
}
// Scenario 2: no timeout, succesful read
// The server writes data and the io_service is run before the timer
// expires. In this case, the async_read operation will complete and
// cancel the async_wait.
{
std::cout << "[Scenario 2: no timeout, succesful read]" << std::endl;
// Create and connect I/O objects.
tcp::socket server_socket(io_service);
tcp::socket client_socket(io_service);
connect_sockets(acceptor, server_socket, client_socket);
// Start read with timeout on client.
client client(client_socket);
client.async_read_with_timeout(boost::posix_time::seconds(10));
// Allow do_async_read_with_timeout to initiate actual operations.
io_service.run_one();
// Write to client.
boost::asio::write(server_socket, boost::asio::buffer("test"));
// Run timeout and read operations.
io_service.run();
assert(client.status() == client::read_success);
}
// Scenario 3: no timeout, failed read
// The server closes the connection before the timeout, causing the
// async_read operation to fail and cancel the async_wait operation.
{
std::cout << "[Scenario 3: no timeout, failed read]" << std::endl;
// Create and connect I/O objects.
tcp::socket server_socket(io_service);
tcp::socket client_socket(io_service);
connect_sockets(acceptor, server_socket, client_socket);
// Start read with timeout on client.
client client(client_socket);
client.async_read_with_timeout(boost::posix_time::seconds(10));
// Allow do_async_read_with_timeout to initiate actual operations.
io_service.run_one();
// Close the socket.
server_socket.close();
// Run timeout and read operations.
io_service.run();
assert(client.status() == client::read_failure);
}
// Scenario 4: timeout and read success
// The server writes data, but the io_service is not run until the
// timer has had time to expire. In this case, both the async_wait and
// async_read operations complete, but the order in which the
// handlers run is indeterminate.
{
std::cout << "[Scenario 4: timeout and read success]" << std::endl;
// Create and connect I/O objects.
tcp::socket server_socket(io_service);
tcp::socket client_socket(io_service);
connect_sockets(acceptor, server_socket, client_socket);
// Start read with timeout on client.
client client(client_socket);
client.async_read_with_timeout(boost::posix_time::seconds(0));
// Allow do_async_read_with_timeout to initiate actual operations.
io_service.run_one();
// Allow the timeout to expire, then write to the client, causing both
// operations to complete with success.
boost::this_thread::sleep_for(boost::chrono::seconds(1));
boost::asio::write(server_socket, boost::asio::buffer("test"));
// Run timeout and read operations.
io_service.run();
assert( (client.status() == client::timeout)
|| (client.status() == client::read_success));
}
}
And its output:
[Scenario 1: timeout]
timeout occured
read failed: Operation canceled
[Scenario 2: no timeout, succesful read]
read was succesful
timeout cancelled
[Scenario 3: no timeout, failed read]
read failed: End of file
timeout cancelled
[Scenario 4: timeout and read success]
read was succesful
timeout occured, but handle_read ran first
You can boost::bind additional parameters into the completion handler, which can then be used to identify the source.
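For instance, a generation counter bound into the handler lets the callback tell which async_wait() it corresponds to. This is a sketch against the question's global timer1; the counter and helper names are mine, not part of Boost:

std::size_t timer1_generation = 0;

void handle_timeout(const boost::system::error_code& error,
                    std::size_t generation)
{
    if (generation != timer1_generation)
        return; // a stale wait that has since been superseded
    if (error)
        return; // cancelled
    // ... a genuine timeout for the latest wait ...
}

void start_timeout()
{
    // Bump the generation and bind its current value into the handler so
    // the callback can be matched to this particular async_wait().
    ++timer1_generation;
    timer1.expires_from_now(boost::posix_time::milliseconds(2000));
    timer1.async_wait(boost::bind(&handle_timeout,
        boost::asio::placeholders::error,
        timer1_generation));
}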

boost asio - session thread does not end

I use boost asio to handle a session per thread like this:
Server::Server(ba::io_service& ioService, int port): ioService_(ioService), port_(port)
{
ba::ip::tcp::acceptor acceptor(ioService_, ba::ip::tcp::endpoint(ba::ip::tcp::v4(), port_));
for (;;)
{
socket_ptr sock(new ba::ip::tcp::socket(ioService_));
acceptor.accept(*sock);
boost::thread thread(boost::bind(&Server::Session, this, sock));
}
}
void Server::Session(socket_ptr sock)
{
const int max_length = 1024;
try
{
char buffer[256] = "";
// HandleRequest() function performs async operations
if (HandleHandshake(sock, buffer))
HandleRequest(sock, buffer);
ioService_.run();
}
catch (std::exception& e)
{
std::cerr << "Exception in thread: " << e.what() << "\n";
}
std::cout << "Session thread ended \r\n"; // THIS LINE IS NEVER REACHED
}
In Server::Session() I at some point do async I/O using the async_read_some() and async_write() functions.
It all works, but for it to work I have to call ioService_.run() inside my spawned thread; otherwise the Server::Session() function exits and the required I/O work is not processed.
The problem is that ioService_.run() called from my thread causes the thread to never exit, because in the meantime other requests keep arriving on my listening server socket.
What I end up with is threads starting and processing new sessions but never releasing their resources (never ending). Is it possible to use only one boost::asio::io_service with this approach?
I believe you are looking for run_one() or poll_one(). These allow the thread to either execute a ready handler (poll) or wait for one (run). By handling only one at a time, you can pick how many handlers to execute before exiting your thread, as opposed to run(), which executes handlers until the io_service is stopped, whereas poll() would stop after handling all of the handlers that are currently ready.
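A rough sketch of that idea inside Server::Session(), where sessionFinished is a hypothetical flag that the session's completion handlers would set once the session is done:

// Pump handlers one at a time instead of calling ioService_.run(), so this
// session thread decides when it is finished and can return.
while (!sessionFinished)
{
    // Runs at most one ready handler, blocking until one becomes ready.
    ioService_.run_one();
}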
The way I structured connection handling here was bad.
There is quite a good video presentation below about how to design your Asio server (made by the Asio creator):
Thinking Asynchronously: Designing Applications with Boost Asio