How to do boost::asio::spawn with io_service-per-CPU?

My server is based on the Boost spawn echo server example.
The server runs fine on a single-core machine; it hasn't crashed once in several months, and it keeps working even at 100% CPU.
But I need to handle more client requests, so now I'm using a multi-core machine. To use all the CPUs I run the io_service on several threads, like this:
#include <boost/asio/io_service.hpp>
#include <boost/asio/ip/tcp.hpp>
#include <boost/asio/spawn.hpp>
#include <boost/asio/steady_timer.hpp>
#include <boost/asio/write.hpp>
#include <boost/thread/thread.hpp>
#include <iostream>
#include <memory>
#include <thread>
using namespace std;
using boost::asio::ip::tcp;
class session : public std::enable_shared_from_this<session>{
public:
explicit session(tcp::socket socket)
: socket_(std::move(socket)),
timer_(socket_.get_io_service()),
strand_(socket_.get_io_service())
{}
void go()
{
auto self(shared_from_this());
boost::asio::spawn(strand_, [this, self](boost::asio::yield_context yield)
{
try {
char data[1024] = {'3'};
for( ; ;) {
timer_.expires_from_now(std::chrono::seconds(10));
std::size_t n = socket_.async_read_some(boost::asio::buffer(data, sizeof(data)), yield);
// do something with data
// write back something
boost::asio::async_write(socket_, boost::asio::buffer(data, sizeof(data)), yield);
}
} catch(...) {
socket_.close();
timer_.cancel();
}
});
boost::asio::spawn(strand_, [this, self](boost::asio::yield_context yield)
{
while(socket_.is_open()) {
boost::system::error_code ignored_ec;
timer_.async_wait(yield[ignored_ec]);
if(timer_.expires_from_now() <= std::chrono::seconds(0))
socket_.close();
}
});
}
private:
tcp::socket socket_;
boost::asio::steady_timer timer_;
boost::asio::io_service::strand strand_;
};
int main(int argc, char* argv[]) {
try {
boost::asio::io_service io_service;
boost::asio::spawn(io_service, [&](boost::asio::yield_context yield)
{
#define PORT "7788"
tcp::acceptor acceptor(io_service,
tcp::endpoint(tcp::v4(), std::atoi(PORT)));
for( ; ;) {
boost::system::error_code ec;
tcp::socket socket(io_service);
acceptor.async_accept(socket, yield[ec]);
if(!ec)
// std::make_shared<session>(std::move(socket))->go();
io_service.post(boost::bind(&session::go, std::make_shared<session>(std::move(socket))));
}
});
// ----------- this works fine on single-core machine ------------
{
// io_service.run();
}
// ----------- this crashes (with multi core) ----------
{
auto thread_count = std::thread::hardware_concurrency(); // for multi core
boost::thread_group threads;
for(auto i = 0; i < thread_count; ++i)
threads.create_thread(boost::bind(&boost::asio::io_service::run, &io_service));
threads.join_all();
}
} catch(std::exception& e) {
std::cerr << "Exception: " << e.what() << "\n";
}
return 0;
}
The code works fine on a single-core machine, but crashes all the time on 2-core/4-core/8-core machines. From the crash dump I don't see anything related to my code, just something about boost::spawn and some randomly named lambda.
So I want to try this: run one io_service per CPU.
I found a demo, but it uses plain async callbacks:
void server::start_accept()
{
new_connection_.reset(new connection(
io_service_pool_.get_io_service(), request_handler_));
acceptor_.async_accept(new_connection_->socket(),
boost::bind(&server::handle_accept, this,
boost::asio::placeholders::error));
}
void server::handle_accept(const boost::system::error_code& e)
{
if (!e)
{
new_connection_->start();
}
start_accept();
}
The io_service_pool_.get_io_service() call picks an io_service from the pool, but my code uses spawn:
boost::asio::spawn(io_service, ...
How do I spawn with a randomly picked io_service?

It seems I was asking the wrong question: spawn cannot work across multiple io_services, but the sockets can. I modified the code to this:
int main(int argc, char* argv[]) {
try {
boost::asio::io_service io_service;
boost::asio::io_service::work work(io_service);
auto core_count = std::thread::hardware_concurrency();
// io_service_pool.hpp and io_service_pool.cpp from boost's example
io_service_pool pool(core_count);
boost::asio::spawn(io_service, [&](boost::asio::yield_context yield)
{
#define PORT "7788"
tcp::acceptor acceptor(io_service, tcp::endpoint(tcp::v4(), std::atoi(PORT)));
for( ; ;) {
boost::system::error_code ec;
boost::asio::io_service& ios = pool.get_io_service();
tcp::socket socket(ios);
acceptor.async_accept(socket, yield[ec]);
if(!ec)
ios.post(boost::bind(&session::go, std::make_shared<session>(std::move(socket))));
}
});
{ // run all io_service
thread t([&] { pool.run(); });
t.detach();
io_service.run();
}
} catch(std::exception& e) {
std::cerr << "Exception: " << e.what() << "\n";
}
return 0;
}
Now the server doesn't crash anymore. But I still have no idea what could cause the crash when a single io_service is used for all threads.
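For reference, a minimal io_service_pool along the lines of the Boost HTTP server 2 example might look like the sketch below (the real io_service_pool.hpp/io_service_pool.cpp from the Boost examples differ in details such as error handling and signal hookup):
#include <boost/asio.hpp>
#include <memory>
#include <thread>
#include <vector>
class io_service_pool {
public:
    explicit io_service_pool(std::size_t pool_size) : next_(0) {
        for (std::size_t i = 0; i < pool_size; ++i) {
            io_services_.push_back(std::make_shared<boost::asio::io_service>());
            // a work object per service keeps run() from returning early
            work_.push_back(std::make_shared<boost::asio::io_service::work>(*io_services_.back()));
        }
    }
    // run every io_service on its own thread; blocks until stop() is called
    void run() {
        std::vector<std::thread> threads;
        for (auto& ios : io_services_)
            threads.emplace_back([ios] { ios->run(); });
        for (auto& t : threads)
            t.join();
    }
    void stop() {
        for (auto& ios : io_services_)
            ios->stop();
    }
    // round-robin selection; the Boost example does the same (not truly random)
    boost::asio::io_service& get_io_service() {
        boost::asio::io_service& ios = *io_services_[next_];
        next_ = (next_ + 1) % io_services_.size();
        return ios;
    }
private:
    std::vector<std::shared_ptr<boost::asio::io_service>> io_services_;
    std::vector<std::shared_ptr<boost::asio::io_service::work>> work_;
    std::size_t next_;
};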

Related

How to close async client connection in ASIO?

I'm trying to create a client for the C++20 server example, the one that uses coroutines.
I'm not quite sure how I'm supposed to close the client connection. As far as I'm aware, there are two ways:
#1
This one seems to shut things down once there is nothing else left to do (no pending read/write operations):
asio::signal_set signals(io_context, SIGINT, SIGTERM);
signals.async_wait([&](auto, auto) { io_context.stop(); });
#2
Force close?
asio::post(io_context_, [this]() { socket_.close(); });
Which one should I use?
Client code (unfinished)
#include <cstdlib>
#include <deque>
#include <iostream>
#include <thread>
#include <string>
#include <asio.hpp>
using asio::ip::tcp;
using asio::awaitable;
using asio::co_spawn;
using asio::detached;
using asio::redirect_error;
using asio::use_awaitable;
awaitable<void> connect(tcp::socket socket, const tcp::endpoint& endpoint)
{
co_await socket.async_connect(endpoint, use_awaitable);
}
int main()
{
try
{
asio::io_context io_context;
tcp::endpoint endpoint(asio::ip::make_address("127.0.0.1"), 666);
tcp::socket socket(io_context);
co_spawn(io_context, connect(std::move(socket), endpoint), detached);
io_context.run();
}
catch (std::exception& e)
{
std::cerr << "Exception: " << e.what() << "\n";
}
return 0;
}
Server code
#include <cstdlib>
#include <deque>
#include <iostream>
#include <list>
#include <memory>
#include <set>
#include <string>
#include <utility>
#include <asio/awaitable.hpp>
#include <asio/detached.hpp>
#include <asio/co_spawn.hpp>
#include <asio/io_context.hpp>
#include <asio/ip/tcp.hpp>
#include <asio/read_until.hpp>
#include <asio/redirect_error.hpp>
#include <asio/signal_set.hpp>
#include <asio/steady_timer.hpp>
#include <asio/use_awaitable.hpp>
#include <asio/write.hpp>
using asio::ip::tcp;
using asio::awaitable;
using asio::co_spawn;
using asio::detached;
using asio::redirect_error;
using asio::use_awaitable;
//----------------------------------------------------------------------
class chat_participant
{
public:
virtual ~chat_participant() = default;
virtual void deliver(const std::string& msg) = 0;
};
typedef std::shared_ptr<chat_participant> chat_participant_ptr;
//----------------------------------------------------------------------
class chat_room
{
public:
void join(chat_participant_ptr participant)
{
participants_.insert(participant);
for (const auto &msg : recent_msgs_)
participant->deliver(msg);
}
void leave(chat_participant_ptr participant)
{
participants_.erase(participant);
}
void deliver(const std::string& msg)
{
recent_msgs_.push_back(msg);
while (recent_msgs_.size() > max_recent_msgs)
recent_msgs_.pop_front();
for (const auto &participant : participants_)
participant->deliver(msg);
}
private:
std::set<chat_participant_ptr> participants_;
enum { max_recent_msgs = 100 };
std::deque<std::string> recent_msgs_;
};
//----------------------------------------------------------------------
class chat_session
: public chat_participant,
public std::enable_shared_from_this<chat_session>
{
public:
chat_session(tcp::socket socket, chat_room& room)
: socket_(std::move(socket)),
timer_(socket_.get_executor()),
room_(room)
{
timer_.expires_at(std::chrono::steady_clock::time_point::max());
}
void start()
{
room_.join(shared_from_this());
co_spawn(socket_.get_executor(),
[self = shared_from_this()]{ return self->reader(); },
detached);
co_spawn(socket_.get_executor(),
[self = shared_from_this()]{ return self->writer(); },
detached);
}
void deliver(const std::string& msg) override
{
write_msgs_.push_back(msg);
timer_.cancel_one();
}
private:
awaitable<void> reader()
{
try
{
for (std::string read_msg;;)
{
std::size_t n = co_await asio::async_read_until(socket_,
asio::dynamic_buffer(read_msg, 1024), "\n", use_awaitable);
room_.deliver(read_msg.substr(0, n));
read_msg.erase(0, n);
}
}
catch (std::exception&)
{
stop();
}
}
awaitable<void> writer()
{
try
{
while (socket_.is_open())
{
if (write_msgs_.empty())
{
asio::error_code ec;
co_await timer_.async_wait(redirect_error(use_awaitable, ec));
}
else
{
co_await asio::async_write(socket_,
asio::buffer(write_msgs_.front()), use_awaitable);
write_msgs_.pop_front();
}
}
}
catch (std::exception&)
{
stop();
}
}
void stop()
{
room_.leave(shared_from_this());
socket_.close();
timer_.cancel();
}
tcp::socket socket_;
asio::steady_timer timer_;
chat_room& room_;
std::deque<std::string> write_msgs_;
};
//----------------------------------------------------------------------
awaitable<void> listener(tcp::acceptor acceptor)
{
chat_room room;
for (;;)
{
std::make_shared<chat_session>(co_await acceptor.async_accept(use_awaitable), room)->start();
}
}
//----------------------------------------------------------------------
int main()
{
try
{
unsigned short port = 666;
asio::io_context io_context(1);
co_spawn(io_context,
listener(tcp::acceptor(io_context, { tcp::v4(), port })),
detached);
asio::signal_set signals(io_context, SIGINT, SIGTERM);
signals.async_wait([&](auto, auto) { io_context.stop(); });
io_context.run();
}
catch (std::exception& e)
{
std::cerr << "Exception: " << e.what() << "\n";
}
return 0;
}
In the example provided by asio, the listener runs within the io_context thread (or thread pool) that is driven by run(). Note that the constructor argument in io_context(1) is a concurrency hint telling asio the context will be run from a single thread, not a thread-pool size.
The listener uses an acceptor to listen for new connections from within the io_context. The acceptor creates a new chat_session for each new socket connection and hands it over to the chat_room.
Thus, to safely close a connection, you need to post a lambda to asio. asio::post will queue the lambda to be run from within the io_context thread(s).
You need to provide the correct io_context and the socket owned by the chat_session. The connection MUST be closed from within the io_context as follows:
// Where "this" is the current chat_session owning the socket
asio::post(io_context_, [this]() { socket_.close(); });
The io_context will then close the connection; the completion handlers of any outstanding async_read / async_write operations of the chat_session will be invoked with an error (typically operation_aborted), as in the C++11 example:
void do_read()
{
asio::async_read(socket_,
asio::buffer(read_msg_.data(), chat_message::header_length),
/* You can provide a lambda to be called on a read / error */
[this](std::error_code ec, std::size_t /*length read*/)
{
if (!ec)
{
do_read(); // No error -> Keep on reading
}
else
{
// You'll reach this point if an active async_read was stopped
// due to an error or if you called socket_.close()
// Error -> You can close the socket here as well,
// because it is called from within the io_context
socket_.close();
}
});
}
Your first option will actually stop the entire io_context. This should be used to gracefully exit your program or stop the asio io_context as a whole.
You should thus use the second option to "close an async client connection in ASIO".
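For the coroutine-based chat_session above, such a helper could be sketched as a member of the class (close() is a hypothetical addition, not part of the asio example; it reuses the stop() member shown in the server code):
// inside chat_session (hypothetical helper; may need #include <asio/post.hpp>)
void close()
{
    // hop onto the socket's executor so the close runs inside the io_context
    asio::post(socket_.get_executor(),
        [self = shared_from_this()]() { self->stop(); });
}
Capturing self keeps the session alive until the posted handler has actually run.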

C++ Boost::ASIO: system error 995 after second call to io_context::run

I'm having trouble with the following scenario using the asio 1.66.0 Windows implementation:
bind socket
run io_context
stop io_context
close socket
restart io_context
repeat 1-4
A call to io_context::run in the second iteration is followed by system error 995:
The I/O operation has been aborted because of either a thread exit or an application request
It looks like this error comes from the socket closure: asio uses PostQueuedCompletionStatus/GetQueuedCompletionStatus to signal itself that io_context::stop was called. But the I/O operation enqueued by WSARecvFrom in socket_.async_receive_from fails because the socket is closed, and in the next call to io_context::run that failure is the first thing delivered to the handler passed to socket_.async_receive_from.
Is this the intended behavior of asio's io_context? How do I avoid this error in subsequent iterations?
If I instead stop io_context::run by closing the socket, everything works, except the same error still appears, which looks a little dirty.
Another odd thing: if I resubmit do_receive after receiving the error, I get as many errors as there were previous iterations, and only then do I receive data from the socket.
// based on boost_asio/example/cpp11/multicast/receiver.cpp
// https://www.boost.org/doc/libs/1_66_0/doc/html/boost_asio/example/cpp11/multicast/receiver.cpp
#include <array>
#include <iostream>
#include <string>
#include <boost/asio.hpp>
#include <future>
#include <chrono>
#include <thread>
using namespace std::chrono_literals;
constexpr short multicast_port = 30001;
class receiver
{
public:
explicit receiver(boost::asio::io_context& io_context) : socket_(io_context)
{}
~receiver()
{
close();
}
void open(
const boost::asio::ip::address& listen_address,
const boost::asio::ip::address& multicast_address)
{
// Create the socket so that multiple may be bound to the same address.
boost::asio::ip::udp::endpoint listen_endpoint(
listen_address, multicast_port);
socket_.open(listen_endpoint.protocol());
socket_.set_option(boost::asio::ip::udp::socket::reuse_address(true));
socket_.bind(listen_endpoint);
// Join the multicast group.
socket_.set_option(
boost::asio::ip::multicast::join_group(multicast_address));
do_receive();
}
void close()
{
if (socket_.is_open())
{
socket_.close();
}
}
private:
void do_receive()
{
socket_.async_receive_from(
boost::asio::buffer(data_), sender_endpoint_,
[this](boost::system::error_code ec, std::size_t length)
{
if (!ec)
{
std::cout.write(data_.data(), length);
std::cout << std::endl;
do_receive();
}
else
{
// A call to io_context::run in second iteration is followed by system error 995
std::cout << ec.message() << std::endl;
}
});
}
boost::asio::ip::udp::socket socket_;
boost::asio::ip::udp::endpoint sender_endpoint_;
std::array<char, 1024> data_;
};
int main(int argc, char* argv[])
{
try
{
const std::string listen_address = "0.0.0.0";
const std::string multicast_address = "239.255.0.1";
boost::asio::io_context io_context;
receiver r(io_context);
std::future<void> fut;
for (int i = 5; i > 0; --i)
{
io_context.restart();
r.open(
boost::asio::ip::make_address(listen_address),
boost::asio::ip::make_address(multicast_address));
fut = std::async(std::launch::async, [&](){ io_context.run(); });
std::this_thread::sleep_for(3s);
io_context.stop();
fut.get();
r.close();
}
}
catch (std::exception& e)
{
std::cerr << "Exception: " << e.what() << "\n";
}
return 0;
}
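One common mitigation, assuming the 995 errors really do come from the receive operations cancelled by close() (a sketch, not a verified fix for this exact scenario): treat boost::asio::error::operation_aborted as a benign cancellation in do_receive() and do not resubmit the read, so stale completions drained by later run() calls are silently discarded:
void do_receive()
{
    socket_.async_receive_from(
        boost::asio::buffer(data_), sender_endpoint_,
        [this](boost::system::error_code ec, std::size_t length)
        {
            if (!ec)
            {
                std::cout.write(data_.data(), length);
                std::cout << std::endl;
                do_receive();
            }
            else if (ec != boost::asio::error::operation_aborted)
            {
                std::cout << ec.message() << std::endl; // a real error
            }
            // operation_aborted: cancelled by close(); do nothing
        });
}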

asio async_send memory leak

I have the following snippet:
void TcpConnection::Send(const std::vector<uint8_t>& buffer) {
std::shared_ptr<std::vector<uint8_t>> bufferCopy = std::make_shared<std::vector<uint8_t>>(buffer);
auto socket = m_socket;
m_socket->async_send(asio::buffer(bufferCopy->data(), bufferCopy->size()), [socket, bufferCopy](const boost::system::error_code& err, size_t bytesSent)
{
if (err)
{
logwarning << "clientcomms_t::sendNext encountered error: " << err.message();
// Assume that the communications path is no longer
// valid.
socket->close();
}
});
}
This code leads to a memory leak. If the m_socket->async_send call is commented out, there is no memory leak. I cannot understand why bufferCopy is not freed after the callback is dispatched. What am I doing wrong?
This is on Windows.
Since you don't show the relevant surrounding code, and the code shown does not contain a definite problem, I'm going to go by the code smells.
The smell is that you have a TcpConnection class that is not enable_shared_from_this<TcpConnection> derived. This leads me to suspect you didn't plan ahead, because there's no possible reasonable way to continue using the instance after the completion of any asynchronous operation (like the async_send).
This leads me to suspect you have a crucially simple problem, which is that your completion handler never runs. There's only one situation that could explain this, and that leads me to assume you never run() the io_service instance.
Here's the situation live:
Live On Coliru
#include <boost/asio.hpp>
namespace asio = boost::asio;
using asio::ip::tcp;
#include <iostream>
auto& logwarning = std::clog;
struct TcpConnection {
using Buffer = std::vector<uint8_t>;
void Send(Buffer const &);
TcpConnection(asio::io_service& svc) : m_socket(std::make_shared<tcp::socket>(svc)) {}
tcp::socket& socket() const { return *m_socket; }
private:
std::shared_ptr<tcp::socket> m_socket;
};
void TcpConnection::Send(Buffer const &buffer) {
auto bufferCopy = std::make_shared<Buffer>(buffer);
auto socket = m_socket;
m_socket->async_send(asio::buffer(bufferCopy->data(), bufferCopy->size()),
[socket, bufferCopy](const boost::system::error_code &err, size_t /*bytesSent*/) {
if (err) {
logwarning << "clientcomms_t::sendNext encountered error: " << err.message();
// Assume that the communications path is no longer
// valid.
socket->close();
}
});
}
int main() {
asio::io_service svc;
tcp::acceptor a(svc, tcp::v4());
a.bind({{}, 6767});
a.listen();
boost::system::error_code ec;
do {
TcpConnection conn(svc);
a.accept(conn.socket(), ec);
char const* greeting = "whale hello there!\n";
conn.Send({greeting, greeting+strlen(greeting)});
} while (!ec);
}
You'll see that any client connection (e.g. with netcat localhost 6767) will receive the greeting, after which, surprisingly, the connection will stay open instead of being closed.
You'd expect the connection to be closed by the server side either way, either because
a transmission error occurred in async_send
or because after the completion handler is run, it is destroyed and hence the captured shared pointers are destructed. Not only would that free the copied buffer, but it would also run the destructor of the socket, which would close the connection.
This clearly confirms that the completion handler never runs. The fix is "easy", find a place to run the service:
int main() {
asio::io_service svc;
tcp::acceptor a(svc, tcp::v4());
a.set_option(tcp::acceptor::reuse_address());
a.bind({{}, 6767});
a.listen();
std::thread th;
{
asio::io_service::work keep(svc); // prevent service running out of work early
th = std::thread([&svc] { svc.run(); });
boost::system::error_code ec;
for (int i = 0; i < 11 && !ec; ++i) {
TcpConnection conn(svc);
a.accept(conn.socket(), ec);
char const* greeting = "whale hello there!\n";
conn.Send({greeting, greeting+strlen(greeting)});
}
}
th.join();
}
This runs 11 connections and exits leak-free.
Better:
It becomes a lot cleaner when the accept loop is also async, and the TcpConnection is properly shared as hinted above:
Live On Coliru
#include <boost/asio.hpp>
namespace asio = boost::asio;
using asio::ip::tcp;
#include <memory>
#include <thread>
#include <iostream>
auto& logwarning = std::clog;
struct TcpConnection : std::enable_shared_from_this<TcpConnection> {
using Buffer = std::vector<uint8_t>;
TcpConnection(asio::io_service& svc) : m_socket(svc) {}
void start() {
char const* greeting = "whale hello there!\n";
Send({greeting, greeting+strlen(greeting)});
}
void Send(Buffer);
private:
friend struct Server;
Buffer m_output;
tcp::socket m_socket;
};
struct Server {
Server(unsigned short port) {
_acceptor.set_option(tcp::acceptor::reuse_address());
_acceptor.bind({{}, port});
_acceptor.listen();
do_accept();
}
~Server() {
keep.reset();
_svc.post([this] { _acceptor.cancel(); });
if (th.joinable())
th.join();
}
private:
void do_accept() {
auto conn = std::make_shared<TcpConnection>(_svc);
_acceptor.async_accept(conn->m_socket, [this,conn](boost::system::error_code ec) {
if (ec)
logwarning << "accept failed: " << ec.message() << "\n";
else {
conn->start();
do_accept();
}
});
}
asio::io_service _svc;
// prevent service running out of work early:
std::unique_ptr<asio::io_service::work> keep{std::make_unique<asio::io_service::work>(_svc)};
std::thread th{[this]{_svc.run();}}; // TODO handle handler exceptions
tcp::acceptor _acceptor{_svc, tcp::v4()};
};
void TcpConnection::Send(Buffer buffer) {
m_output = std::move(buffer);
auto self = shared_from_this();
m_socket.async_send(asio::buffer(m_output),
[self](const boost::system::error_code &err, size_t /*bytesSent*/) {
if (err) {
logwarning << "clientcomms_t::sendNext encountered error: " << err.message() << "\n";
// not holding on to `self` means the socket gets closed
}
// do more with `self` which points to the TcpConnection instance...
});
}
int main() {
Server server(6868);
std::this_thread::sleep_for(std::chrono::seconds(3));
}

Boost asio TCP async server not async?

I am using the code provided in the Boost example.
The server accepts only one connection at a time; no new connections are accepted until the current one is closed.
How can I make the code below accept an unlimited number of connections at the same time?
#include <cstdlib>
#include <iostream>
#include <memory>
#include <utility>
#include <boost/asio.hpp>
#include <boost/thread.hpp> // needed for boost::this_thread::sleep used below
using boost::asio::ip::tcp;
class session
: public std::enable_shared_from_this<session>
{
public:
session(tcp::socket socket)
: socket_(std::move(socket))
{
}
void start()
{
do_read();
}
private:
void do_read()
{
auto self(shared_from_this());
socket_.async_read_some(boost::asio::buffer(data_, max_length),
[this, self](boost::system::error_code ec, std::size_t length)
{
if (!ec)
{
boost::this_thread::sleep(boost::posix_time::milliseconds(10000));//sleep some time
do_write(length);
}
});
}
void do_write(std::size_t length)
{
auto self(shared_from_this());
boost::asio::async_write(socket_, boost::asio::buffer(data_, length),
[this, self](boost::system::error_code ec, std::size_t /*length*/)
{
if (!ec)
{
do_read();
}
});
}
tcp::socket socket_;
enum { max_length = 1024 };
char data_[max_length];
};
class server
{
public:
server(boost::asio::io_service& io_service, short port)
: acceptor_(io_service, tcp::endpoint(tcp::v4(), port)),
socket_(io_service)
{
do_accept();
}
private:
void do_accept()
{
acceptor_.async_accept(socket_,
[this](boost::system::error_code ec)
{
if (!ec)
{
std::make_shared<session>(std::move(socket_))->start();
}
do_accept();
});
}
tcp::acceptor acceptor_;
tcp::socket socket_;
};
int main(int argc, char* argv[])
{
try
{
if (argc != 2)
{
std::cerr << "Usage: async_tcp_echo_server <port>\n";
return 1;
}
boost::asio::io_service io_service;
server s(io_service, std::atoi(argv[1]));
io_service.run();
}
catch (std::exception& e)
{
std::cerr << "Exception: " << e.what() << "\n";
}
return 0;
}
As you can see, the program waits for the sleep to finish and doesn't accept a second connection in the meantime.
You're doing a synchronous wait inside the handler, which runs on the only thread that serves your io_service. This prevents Asio from invoking the handlers for any new requests in the meantime.
Use a deadline_timer with async_wait:
void do_read() {
auto self(shared_from_this());
socket_.async_read_some(boost::asio::buffer(data_, max_length),
[this, self](boost::system::error_code ec, std::size_t length) {
if (!ec) {
timer_.expires_from_now(boost::posix_time::seconds(1));
timer_.async_wait([this, self, length](boost::system::error_code ec) {
if (!ec)
do_write(length);
});
}
});
}
Where the timer_ field is a boost::asio::deadline_timer member of session
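A sketch of that member, assuming a Boost version where sockets still expose get_io_service() (newer versions use get_executor() instead); socket_ must be declared before timer_ for the in-class initializer to be valid:
// inside session:
boost::asio::deadline_timer timer_{socket_.get_io_service()};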
Or, as a poor man's solution, add more threads (this simply means that if more requests arrive at the same time than there are threads to handle them, the server will still block until the first thread becomes available to pick up the new request):
boost::thread_group tg;
for (int i=0; i < 10; ++i)
tg.create_thread([&]{ io_service.run(); });
tg.join_all();
Both the original code and the modified code are asynchronous and accept multiple connections. As can be seen in the following snippet, the async_accept operation's AcceptHandler initiates another async_accept operation, forming an asynchronous loop:
.-----------------------------------.
V |
void server::do_accept() |
{ |
acceptor_.async_accept(..., |
[this](boost::system::error_code ec) |
{ |
// ... |
do_accept(); ----------------------'
});
}
The sleep() within the session's ReadHandler causes the one thread running the io_service to block until the sleep completes. Hence, the program will be doing nothing. However, this does not cause any outstanding operations to be cancelled. For a better understanding of asynchronous operations and io_service, consider reading this answer.
Here is an example demonstrating the server handling multiple connections. It spawns off a thread that creates 5 client sockets and connects them to the server.
#include <cstdlib>
#include <iostream>
#include <memory>
#include <utility>
#include <vector>
#include <boost/asio.hpp>
#include <boost/thread.hpp>
using boost::asio::ip::tcp;
class session
: public std::enable_shared_from_this<session>
{
public:
session(tcp::socket socket)
: socket_(std::move(socket))
{
}
~session()
{
std::cout << "session ended" << std::endl;
}
void start()
{
std::cout << "session started" << std::endl;
do_read();
}
private:
void do_read()
{
auto self(shared_from_this());
socket_.async_read_some(boost::asio::buffer(data_, max_length),
[this, self](boost::system::error_code ec, std::size_t length)
{
if (!ec)
{
do_write(length);
}
});
}
void do_write(std::size_t length)
{
auto self(shared_from_this());
boost::asio::async_write(socket_, boost::asio::buffer(data_, length),
[this, self](boost::system::error_code ec, std::size_t /*length*/)
{
if (!ec)
{
do_read();
}
});
}
tcp::socket socket_;
enum { max_length = 1024 };
char data_[max_length];
};
class server
{
public:
server(boost::asio::io_service& io_service, short port)
: acceptor_(io_service, tcp::endpoint(tcp::v4(), port)),
socket_(io_service)
{
do_accept();
}
private:
void do_accept()
{
acceptor_.async_accept(socket_,
[this](boost::system::error_code ec)
{
if (!ec)
{
std::make_shared<session>(std::move(socket_))->start();
}
do_accept();
});
}
tcp::acceptor acceptor_;
tcp::socket socket_;
};
int main(int argc, char* argv[])
{
try
{
if (argc != 2)
{
std::cerr << "Usage: async_tcp_echo_server <port>\n";
return 1;
}
boost::asio::io_service io_service;
auto port = std::atoi(argv[1]);
server s(io_service, port);
boost::thread client_main(
[&io_service, port]
{
tcp::endpoint server_endpoint(
boost::asio::ip::address_v4::loopback(), port);
// Create and connect 5 clients to the server.
std::vector<std::shared_ptr<tcp::socket>> clients;
for (auto i = 0; i < 5; ++i)
{
auto client = std::make_shared<tcp::socket>(
std::ref(io_service));
client->connect(server_endpoint);
clients.push_back(client);
}
// Wait 2 seconds before destroying all clients.
boost::this_thread::sleep(boost::posix_time::seconds(2));
});
io_service.run();
client_main.join();
}
catch (std::exception& e)
{
std::cerr << "Exception: " << e.what() << "\n";
}
return 0;
}
The output:
session started
session started
session started
session started
session started
session ended
session ended
session ended
session ended
session ended

boost::asio::io_service destructor runs a very long time

I'm a novice with boost::asio and have run into my first problems.
I created a simple host resolver (see the full code below).
Problem 1.
When the Internet connection is lost, my host resolver stops resolving after the first time the deadline_timer fires.
My assumption is that "localhost" should resolve at any time. But "localhost" is not resolved after a timeout that occurred while resolving google.us (for example, after unplugging the Ethernet jack).
The same behaviour occurs when resolving a nonexistent TLD (for example, google.usd instead of google.us).
Problem 2.
When the Internet connection is lost, the io_service destructor runs for a very long time (usually about 5 seconds).
What's wrong?
I use VS2012 and Boost 1.54.
File hostresolver.h
#pragma once
#include <set>
#include <boost/system/error_code.hpp>
#include <boost/asio.hpp>
#include <boost/asio/ip/basic_resolver.hpp>
#include <boost/asio/ip/basic_resolver_iterator.hpp>
typedef std::set<unsigned long> hostresolver_result_container;
class hostresolver
{
public:
hostresolver(boost::asio::io_service* io_service);
~hostresolver(void);
boost::asio::io_service* ios_ptr;
boost::asio::ip::tcp::resolver resolver_;
boost::asio::deadline_timer timer_;
volatile bool is_completed;
bool is_timeout;
std::string hostname;
hostresolver_result_container result;
void on_timeout(const boost::system::error_code &err);
void start_resolve(const char* hostname, int timeout_seconds);
void finish_resolve(const boost::system::error_code& err, boost::asio::ip::tcp::resolver::iterator endpoint_iterator);
private:
void stop();
};
File hostresolver.cpp
#include "stdafx.h"
#include "hostresolver.h"
#include <boost/bind.hpp>
hostresolver::hostresolver(boost::asio::io_service* io_service) :
resolver_(*io_service), timer_(*io_service), is_completed(false), is_timeout(false)
{
ios_ptr = io_service;
}
hostresolver::~hostresolver(void)
{
}
void hostresolver::start_resolve(const char* hostname, int timeout_second)
{
this->hostname.assign(hostname);
timer_.expires_from_now(boost::posix_time::seconds(timeout_second));
timer_.async_wait(boost::bind(&hostresolver::on_timeout, this, _1));
boost::asio::ip::tcp::resolver::query query(hostname, "http");
resolver_.async_resolve(query,
boost::bind(&hostresolver::finish_resolve, this,
boost::asio::placeholders::error,
boost::asio::placeholders::iterator));
do
{
ios_ptr->run_one();
}
while (!is_completed);
}
void hostresolver::stop()
{
resolver_.cancel();
timer_.cancel();
is_completed = true;
}
void hostresolver::on_timeout(const boost::system::error_code &err)
{
if ((!err) && (err != boost::asio::error::operation_aborted))
{
is_timeout = true;
stop();
}
}
void hostresolver::finish_resolve(const boost::system::error_code& err, boost::asio::ip::tcp::resolver::iterator endpoint_iterator)
{
if (!err)
{
while (endpoint_iterator != boost::asio::ip::tcp::resolver::iterator())
{
boost::asio::ip::tcp::endpoint endpoint = *endpoint_iterator;
if (endpoint.address().is_v4())
{
result.insert(endpoint.address().to_v4().to_ulong());
}
endpoint_iterator++;
}
}
stop();
}
File main.cpp
#include "stdafx.h"
#include "hostresolver.h"
int _tmain(int argc, _TCHAR* argv[])
{
boost::asio::io_service ios;
for (int i = 0; i < 2; i++)
{
std::cout << "iteration: " << i << std::endl;
{
hostresolver hres(&ios);
hres.start_resolve("localhost", 1);
if (hres.result.size() == 0)
std::cout << "failed" << std::endl;
}
{
hostresolver hres(&ios);
hres.start_resolve("google.usd", 1);
}
}
return 0;
}
After run_one() returns, the io_service has most likely entered the "stopped" state. Thus, you should call ios_ptr->reset() prior to calling run_one() again. Quoting from the run_one reference:
Return Value: The number of handlers that were executed. A zero return value implies that the io_service object is stopped (the stopped() function returns true). Subsequent calls to run(), run_one(), poll() or poll_one() will return immediately unless there is a prior call to reset().
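A minimal sketch of the fix inside hostresolver::start_resolve(), assuming Boost 1.54 where the call is spelled io_service::reset() (newer versions name it restart()):
void hostresolver::start_resolve(const char* hostname, int timeout_second)
{
    // ... set up the timer and async_resolve as before ...
    ios_ptr->reset(); // clear the stopped state left by the previous resolve
    do
    {
        ios_ptr->run_one();
    }
    while (!is_completed);
}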