Boost Asio: checking socket ability to be readable/writable - c++

In my application I have a mix of both Asio-created sockets and native ones (coming from the C PostgreSQL library).
What I need is the ability to get a notification out of Boost's io_service instance when a particular socket becomes readable/writable in non-blocking mode, but without performing the actual read/write (that will be done by the third-party library), i.e. effectively doing only a select()/poll().
Can this be achieved by passing 0 as the buffer length to a function like async_read_some()?
I've made a quick test, and a call to async_read_some() with a zero buffer length does indeed invoke the read handler, but I am not sure whether that happens only after blocking in select()/poll() on the corresponding socket handle, waiting for a real "can read" state.

This is often referred to as reactor-style operations.
These can be obtained by providing boost::asio::null_buffers to the asynchronous operations. Reactor-style operations provide a way to be informed when a read or write operation can be performed, and are useful for integrating with third party libraries, using shared memory pools, etc. The Boost.Asio documentation provides some information and the following example code:
ip::tcp::socket socket(my_io_service);
...
socket.non_blocking(true);
...
socket.async_read_some(null_buffers(), read_handler);
...
void read_handler(boost::system::error_code ec)
{
    if (!ec)
    {
        std::vector<char> buf(socket.available());
        socket.read_some(buffer(buf));
    }
}
Boost.Asio also provides an official nonblocking example, illustrating how to integrate with libraries that want to perform the read and write operations directly on a socket.
Providing a zero-length buffer to operations will often result in a no-op, as the operation's completion condition will have been met without attempting to perform any I/O. Here is a complete example demonstrating the difference between the two:
#include <array>
#include <cassert>
#include <iostream>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
// This example is not interested in the handlers, so provide a noop function
// that will be passed to bind to meet the handler concept requirements.
void noop() {}

void print_status(
    const boost::system::error_code& error,
    std::size_t bytes_transferred,
    boost::asio::ip::tcp::socket& socket)
{
    std::cout << "error: " << error.message() << "; "
                 "transferred: " << bytes_transferred << "; "
                 "available: " << socket.available() << std::endl;
}
int main()
{
    using boost::asio::ip::tcp;

    // Create all I/O objects.
    boost::asio::io_service io_service;
    tcp::acceptor acceptor(io_service, tcp::endpoint(tcp::v4(), 0));
    tcp::socket socket1(io_service);
    tcp::socket socket2(io_service);

    // Connect the sockets.
    acceptor.async_accept(socket1, boost::bind(&noop));
    socket2.async_connect(acceptor.local_endpoint(), boost::bind(&noop));
    io_service.run();
    io_service.reset();

    std::array<char, 512> buffer;

    // Reading into a zero-length buffer is a no-op and will be
    // considered immediately completed.
    socket1.async_receive(boost::asio::buffer(buffer, 0),
        boost::bind(&print_status,
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred,
            boost::ref(socket1)));

    // Guarantee the handler runs.
    assert(1 == io_service.poll());
    io_service.reset();

    // Start a reactor-style read operation by providing null_buffers.
    socket1.async_receive(boost::asio::null_buffers(),
        boost::bind(&print_status,
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred,
            boost::ref(socket1)));

    // Guarantee that the handler did not run.
    assert(0 == io_service.poll());

    // Write to the socket so that data becomes available.
    boost::asio::write(socket2, boost::asio::buffer("hello"));
    assert(1 == io_service.poll());
}
Output:
error: Success; transferred: 0; available: 0
error: Success; transferred: 0; available: 6

How to ensure that the messages will be enqueued in chronological order on multithreaded Asio io_service?

Following Michael Caisse's CppCon talk, I created a connection handler MyUserConnection which has a sendMessage method. sendMessage adds a message to the queue, similarly to send() in the talk. My sendMessage method is called from multiple threads outside of the connection handler at a high rate. The messages must be enqueued chronologically.
When I run my code with only one io_service::run call (i.e. one io_service thread), it async_writes and empties my queue as expected (FIFO). However, the problem occurs when there are, for example, four io_service::run calls; then the queue is not filled, or the send calls are not made, in chronological order.
class MyUserConnection : public std::enable_shared_from_this<MyUserConnection> {
  public:
    MyUserConnection(asio::io_service& io_service, SslSocket socket) :
        service_(io_service),
        socket_(std::move(socket)),
        strand_(io_service) {
    }

    void sendMessage(std::string msg) {
        auto self(shared_from_this());
        service_.post(strand_.wrap([self, msg]() {
            self->queueMessage(msg);
        }));
    }

  private:
    void queueMessage(const std::string& msg) {
        bool writeInProgress = !sendPacketQueue_.empty();
        sendPacketQueue_.push_back(msg);
        if (!writeInProgress) {
            startPacketSend();
        }
    }

    void startPacketSend() {
        auto self(shared_from_this());
        asio::async_write(socket_,
            asio::buffer(sendPacketQueue_.front().data(), sendPacketQueue_.front().length()),
            strand_.wrap([self](const std::error_code& ec, std::size_t /*n*/) {
                self->packetSendDone(ec);
            }));
    }

    void packetSendDone(const std::error_code& ec) {
        if (!ec) {
            sendPacketQueue_.pop_front();
            if (!sendPacketQueue_.empty()) { startPacketSend(); }
        } else {
            // end(); // My end call
        }
    }

    asio::io_service& service_;
    SslSocket socket_;
    asio::io_service::strand strand_;
    std::deque<std::string> sendPacketQueue_;
};
I'm quite sure that I misinterpreted the strand and io_service::post when running the connection handler on a multithreaded io_service. I'm also fairly sure the problem is that the messages are not enqueued chronologically, rather than the messages not being written (async_write) in chronological order. How can I ensure that the messages are enqueued in chronological order in the sendMessage call on a multithreaded io_service?
If you use a strand, the order is guaranteed to be the order in which you post the operations to the strand.
Of course, if there is some kind of "correct ordering" between the threads that post, then you have to synchronize the posting between them; that's part of your application domain.
Here's a modernized, simplified take on your MyUserConnection class with a self-contained server test program:
Live On Coliru
#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>
#include <deque>
#include <iostream>
#include <memory>
#include <mutex>

namespace asio = boost::asio;
namespace ssl  = asio::ssl;
using asio::ip::tcp;
using boost::system::error_code;
using SslSocket = ssl::stream<tcp::socket>;

class MyUserConnection : public std::enable_shared_from_this<MyUserConnection> {
  public:
    MyUserConnection(SslSocket&& socket) : socket_(std::move(socket)) {}

    void start() {
        std::cerr << "Handshake initiated" << std::endl;
        socket_.async_handshake(ssl::stream_base::handshake_type::server,
                                [self = shared_from_this()](error_code ec) {
                                    std::cerr << "Handshake complete" << std::endl;
                                });
    }

    void sendMessage(std::string msg) {
        post(socket_.get_executor(),
             [self = shared_from_this(), msg = std::move(msg)]() {
                 self->queueMessage(msg);
             });
    }

  private:
    void queueMessage(std::string msg) {
        outbox_.push_back(std::move(msg));
        if (outbox_.size() == 1)
            sendLoop();
    }

    void sendLoop() {
        std::cerr << "Sendloop " << outbox_.size() << std::endl;
        if (outbox_.empty())
            return;

        asio::async_write( //
            socket_, asio::buffer(outbox_.front()),
            [this, self = shared_from_this()](error_code ec, std::size_t) {
                if (!ec) {
                    outbox_.pop_front();
                    sendLoop();
                } else {
                    end();
                }
            });
    }

    void end() {}

    SslSocket               socket_;
    std::deque<std::string> outbox_;
};

int main() {
    asio::thread_pool ioc;

    ssl::context ctx(ssl::context::sslv23_server);
    ctx.set_password_callback([](auto...) { return "test"; });
    ctx.use_certificate_file("server.pem", ssl::context::file_format::pem);
    ctx.use_private_key_file("server.pem", ssl::context::file_format::pem);
    ctx.use_tmp_dh_file("dh2048.pem");

    tcp::acceptor a(ioc, {{}, 8989u});

    for (;;) {
        auto s = a.accept(make_strand(ioc.get_executor()));
        std::cerr << "accepted " << s.remote_endpoint() << std::endl;
        auto sess = std::make_shared<MyUserConnection>(SslSocket(std::move(s), ctx));
        sess->start();

        for (int i = 0; i < 30; ++i) {
            post(ioc, [sess, i] {
                std::string msg = "message #" + std::to_string(i) + "\n";
                {
                    static std::mutex mx;
                    // Lock so console output is guaranteed in the same order
                    // as the sendMessage call
                    std::lock_guard lk(mx);
                    std::cout << "Sending " << msg << std::flush;
                    sess->sendMessage(std::move(msg));
                }
            });
        }

        break; // for online demo
    }

    ioc.join();
}
If you run it a few times, you will see that:
the order in which the threads post is not deterministic (that's up to the kernel's scheduling);
the order in which messages are sent (and received) is exactly the order in which they are posted.
On a multi-core, or even on a single-core preemptive OS, you cannot truly feed messages into a queue in strictly chronological order. Even if you use a mutex to synchronize write access to the queue, strict order is no longer guaranteed once multiple writers are waiting on the mutex and the mutex becomes free. At best, the order in which the waiting writer threads acquire the mutex is implementation dependent (OS dependent), but it is safest to assume it is simply random.
That being said, strict chronological order is a matter of definition in the first place. To see why, imagine your PC had one digital output bit per writer thread and you connected a logic analyzer to those bits. Now imagine you pick some spot in the code where you toggle the respective bit inside your enqueue function. Even if that bit toggle takes place just one assembly instruction prior to acquiring the mutex, the order may already have changed while the writer code approached that point. You could also put the toggle at other arbitrary earlier points (e.g. on entry to the enqueue function), but the same reasoning applies. Hence, strict chronological order is itself a matter of definition.
There is an analogy to the case where a CPU's interrupt controller has multiple inputs and you try to build a system which processes those interrupts in strictly chronological order. Even if all interrupt inputs were signaled at exactly the same moment (a switch pulling them all to the signaled state simultaneously), some order would still emerge, caused for example by hardware logic, by noise at the input pins, or by the system's interrupt dispatcher function (some CPUs, e.g. the MIPS 4102, have a single interrupt vector, and assembly code checks the possible interrupt sources and dispatches to dedicated interrupt handlers).
This analogy helps to see the pattern: it comes down to asynchronous inputs on a synchronous system, which is a notoriously hard problem in itself.
So the best you can do is to make a suitable definition of your application's "strict ordering" and live with it.
Then, to avoid violations of your definition, you could use a priority queue instead of a plain FIFO data type and use an atomic counter as the priority (a sketch follows this list):
At your chosen point in the code, atomically read and increment the counter.
This is your message's sequence number.
Assemble your message and enqueue it into the priority queue, using the sequence number as the priority.
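A minimal sketch of that sequence-number idea (the class and member names are illustrative, not taken from any code above; it uses a plain mutex/condition_variable pair rather than Asio):
#include <atomic>
#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <queue>
#include <string>
#include <vector>

// Message tagged with a sequence number taken from an atomic counter.
struct SequencedMessage {
    std::uint64_t seq;
    std::string payload;
    // Inverted comparison so the priority_queue pops the lowest sequence number first.
    bool operator<(SequencedMessage const& other) const { return seq > other.seq; }
};

class SequencedOutbox {
public:
    // Called by any writer thread; the atomic increment is the chosen
    // definition of "chronological" order.
    void push(std::string payload) {
        std::uint64_t seq = next_seq_.fetch_add(1, std::memory_order_relaxed);
        std::lock_guard<std::mutex> lock(mx_);
        queue_.push({seq, std::move(payload)});
        cv_.notify_one();
    }

    // Called by the single consumer (e.g. the strand that performs async_write).
    SequencedMessage pop() {
        std::unique_lock<std::mutex> lock(mx_);
        cv_.wait(lock, [this] { return !queue_.empty(); });
        SequencedMessage msg = queue_.top();
        queue_.pop();
        return msg;
    }

private:
    std::atomic<std::uint64_t> next_seq_{0};
    std::mutex mx_;
    std::condition_variable cv_;
    std::priority_queue<SequencedMessage> queue_;
};
Note that the consumer may still pop sequence number N+1 while message N is in flight between the fetch_add and the push; how to treat that case is exactly the "definition of strict ordering" question discussed above.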
Another possible approach is to define a notion of "simultaneous" which is detectable on the other side of the queue (so the reader cannot assume strict ordering within a set of "simultaneous" messages). This could be implemented by reading a high-frequency tick count; all messages carrying the same "time stamp" are considered simultaneous on the reader side.
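A rough sketch of that variant; the clock source, tick granularity, and names are illustrative assumptions:
#include <chrono>
#include <cstdint>
#include <string>

// Message stamped with a coarse tick; messages sharing a tick are treated as
// "simultaneous" by the reader, which must not assume an order among them.
struct StampedMessage {
    std::uint64_t tick;
    std::string payload;
};

StampedMessage stamp(std::string payload) {
    using namespace std::chrono;
    // E.g. microsecond ticks from a monotonic clock; all messages produced
    // within the same tick carry the same stamp.
    auto tick = duration_cast<microseconds>(
        steady_clock::now().time_since_epoch()).count();
    return StampedMessage{static_cast<std::uint64_t>(tick), std::move(payload)};
}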

How to safely cancel a Boost ASIO asynchronous accept operation?

Everything I've read in the Boost ASIO docs and here on StackOverflow suggests I can stop an async_accept operation by calling close on the acceptor socket. However, I get an intermittent not_socket error in the async_accept handler when I try to do this. Am I doing something wrong or does Boost ASIO not support this?
(Related questions: here and here.)
(Note: I'm running on Windows 7 and using the Visual Studio 2015 compiler.)
The core problem I face is a race condition between the async_accept operation accepting an incoming connection and my call to close. This happens even when using a strand, explicit or implicit.
Note my call to async_accept strictly happens before my call to close. I conclude the race condition is between my call to close and the under-the-hood code in Boost ASIO that accepts the incoming connection.
I've included code demonstrating the problem. The program repeatedly creates an acceptor, connects to it, and immediately closes the acceptor. It expects the async_accept operation to either complete successfully or else be canceled. Any other error causes the program to abort, which is what I'm seeing intermittently.
For synchronization the program uses an explicit strand. Nevertheless, the call to close is unsynchronized with the effect of the async_accept operation, so sometimes the acceptor closes before it accepts the incoming connection, sometimes it closes afterward, sometimes neither—hence the problem.
Here's the code:
#include <algorithm>
#include <boost/asio.hpp>
#include <cassert>
#include <cstdlib>
#include <future>
#include <iostream>
#include <memory>
#include <thread>
int main()
{
    boost::asio::io_service ios;
    auto work = std::make_unique<boost::asio::io_service::work>(ios);

    const auto ios_runner = [&ios]()
    {
        boost::system::error_code ec;
        ios.run(ec);
        if (ec)
        {
            std::cerr << "io_service runner failed: " << ec.message() << '\n';
            abort();
        }
    };
    auto thread = std::thread{ios_runner};

    const auto make_acceptor = [&ios]()
    {
        boost::asio::ip::tcp::resolver resolver{ios};
        boost::asio::ip::tcp::resolver::query query{
            "localhost",
            "",
            boost::asio::ip::resolver_query_base::passive |
            boost::asio::ip::resolver_query_base::address_configured};
        const auto itr = std::find_if(
            resolver.resolve(query),
            boost::asio::ip::tcp::resolver::iterator{},
            [](const boost::asio::ip::tcp::endpoint& ep) { return true; });
        assert(itr != boost::asio::ip::tcp::resolver::iterator{});
        return boost::asio::ip::tcp::acceptor{ios, *itr};
    };

    for (auto i = 0; i < 1000; ++i)
    {
        auto acceptor = make_acceptor();
        const auto saddr = acceptor.local_endpoint();

        boost::asio::io_service::strand strand{ios};
        boost::asio::ip::tcp::socket server_conn{ios};

        // Start accepting.
        std::promise<void> accept_promise;
        strand.post(
            [&]()
            {
                acceptor.async_accept(
                    server_conn,
                    strand.wrap(
                        [&](const boost::system::error_code& ec)
                        {
                            accept_promise.set_value();
                            if (ec.category() == boost::asio::error::get_system_category()
                                && ec.value() == boost::asio::error::operation_aborted)
                                return;
                            if (ec)
                            {
                                std::cerr << "async_accept failed (" << i << "): " << ec.message() << '\n';
                                abort();
                            }
                        }));
            });

        // Connect to the acceptor.
        std::promise<void> connect_promise;
        strand.post(
            [&]()
            {
                boost::asio::ip::tcp::socket client_conn{ios};
                {
                    boost::system::error_code ec;
                    client_conn.connect(saddr, ec);
                    if (ec)
                    {
                        std::cerr << "connect failed: " << ec.message() << '\n';
                        abort();
                    }
                    connect_promise.set_value();
                }
            });

        connect_promise.get_future().get(); // wait for connect to finish

        // Close the acceptor.
        std::promise<void> stop_promise;
        strand.post([&acceptor, &stop_promise]()
        {
            acceptor.close();
            stop_promise.set_value();
        });

        stop_promise.get_future().get();   // wait for close to finish
        accept_promise.get_future().get(); // wait for async_accept to finish
    }

    work.reset();
    thread.join();
}
Here's the output from a sample run:
async_accept failed (5): An operation was attempted on something that is not a socket
The number in parentheses denotes how many successful iterations the program ran before failing.
UPDATE #1: Based on Tanner Sansbury's answer, I've added a std::promise for signaling the completion of the async_accept handler. This has no effect on the behavior I'm seeing.
UPDATE #2: The not_socket error originates from a call to setsockopt, from call_setsockopt, from socket_ops::setsockopt in the file boost\asio\detail\impl\socket_ops.ipp (Boost version 1.59). Here's the full call:
socket_ops::setsockopt(new_socket, state,
SOL_SOCKET, SO_UPDATE_ACCEPT_CONTEXT,
&update_ctx_param, sizeof(SOCKET), ec);
Microsoft's documentation for setsockopt says about SO_UPDATE_ACCEPT_CONTEXT:
Updates the accepting socket with the context of the listening socket.
I'm not sure what exactly this means, but it sounds like something that fails if the listening socket is closed. This suggests that, on Windows, one cannot safely close an acceptor that is currently running a completion handler for an async_accept operation.
I hope someone can tell me I'm wrong and that there is a way to safely close a busy acceptor.
The example program will not cancel the async_accept operation. Once the connection has been established, the async_accept operation is posted internally for completion. At this point, the operation is no longer cancelable and will not be affected by acceptor.close().
The issue being observed is the result of undefined behavior. The program fails to meet a lifetime requirement for async_accept's peer parameter:
The socket into which the new connection will be accepted. Ownership of the peer object is retained by the caller, which must guarantee that it is valid until the handler is called.
In particular, the peer socket, server_conn, has automatic scope within the for loop. The loop may begin a new iteration while the async_accept operation is still outstanding, causing server_conn to be destroyed and violating the lifetime requirement. Consider extending server_conn's lifetime by either:
setting a std::promise within the accept handler and waiting on the related std::future before continuing to the next iteration of the loop, or
managing server_conn via a smart pointer and passing ownership to the accept handler (a sketch of the latter follows).
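A rough sketch of the second option, reusing the question's names (ios, strand, accept_promise) purely for illustration; it is not a drop-in fix for the race discussed above, only the ownership idea:
// Give the peer socket shared ownership so it outlives the loop iteration.
auto server_conn = std::make_shared<boost::asio::ip::tcp::socket>(ios);
acceptor.async_accept(
    *server_conn,
    strand.wrap(
        [server_conn, &accept_promise](const boost::system::error_code& ec)
        {
            // The lambda's copy of the shared_ptr keeps the socket alive
            // until the handler has run, satisfying the lifetime requirement.
            accept_promise.set_value();
            if (ec && ec != boost::asio::error::operation_aborted)
            {
                std::cerr << "async_accept failed: " << ec.message() << '\n';
                abort();
            }
        }));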

C++ Boost.Asio object lifetimes

asio::io_service ioService;
asio::ip::tcp::socket* socket = new asio::ip::tcp::socket(ioService);
socket->async_connect(endpoint, handler);
delete socket;
The socket's destructor should close the socket. But can the asynchronous backend handle this? Will it cancel the asynchronous operation and call the handler? Probably not?
When the socket is destroyed, it invokes destroy on its service. When a SocketService's destroy() function is invoked, it cancels asynchronous operations by calling a non-throwing close(). Handlers for cancelled operations will be posted for invocation within io_service with a boost::asio::error::operation_aborted error.
Here is a complete example demonstrating the documented behavior:
#include <iostream>
#include <boost/asio.hpp>

void handle_connect(const boost::system::error_code& error)
{
    std::cout << "handle_connect: " << error.message() << std::endl;
}

int main()
{
    namespace ip = boost::asio::ip;
    using ip::tcp;

    boost::asio::io_service io_service;

    // Create socket with a scoped life.
    {
        tcp::socket socket(io_service);
        socket.async_connect(
            tcp::endpoint(ip::address::from_string("1.2.3.4"), 12345),
            &handle_connect);
    }

    io_service.run();
}
And its output:
handle_connect: Operation canceled
Why did you create the socket using new? It definitely won't behave normally that way.
If you really want to create the socket with new, you have to close and delete it at the end of your program.
Here is a simple sample:
io_service service_;
ip::tcp::socket sock(service_);
sock.async_connect(ep, connect_handler);
deadline_timer t(service_, boost::posix_time::seconds(5));
t.async_wait(timeout_handler);
service_.run();
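If dynamic allocation really is required, a sketch of a safer variant (assuming C++14) keeps the socket alive until run() returns and uses std::unique_ptr instead of a raw new/delete pair; the endpoint and handler below are illustrative, mirroring the complete example above:
#include <memory>
#include <boost/asio.hpp>

void connect_handler(const boost::system::error_code& error) {}

int main()
{
    namespace ip = boost::asio::ip;
    boost::asio::io_service service;

    // Own the dynamically allocated socket with a unique_ptr instead of a raw pointer.
    auto socket = std::make_unique<ip::tcp::socket>(service);
    socket->async_connect(
        ip::tcp::endpoint(ip::address::from_string("1.2.3.4"), 12345),
        &connect_handler);

    service.run();   // blocks until the connect attempt completes or fails
    socket.reset();  // closes and destroys the socket only after run() has returned
}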

boost asio udp socket async_receive_from does not call the handler

I want to create an autonomous thread devoted only to receiving data from a UDP socket using the Boost libraries (Asio). This thread should be an infinite loop triggered by data received from the UDP socket. In my application I need to use an asynchronous receive operation.
If I use the synchronous function receive_from, everything works as expected.
However, if I use async_receive_from, the handler is never called. Since I use a semaphore to detect that some data has been received, the program locks and the loop is never triggered.
I have verified (with a network analyzer) that the sender device properly sends the data on the UDP socket.
I have isolated the problem in the following code.
#include <boost/array.hpp>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>
#include <boost/interprocess/sync/interprocess_semaphore.hpp>
#include <iostream>
typedef boost::interprocess::interprocess_semaphore Semaphore;

using namespace boost::asio::ip;

class ReceiveUDP
{
public:
    boost::thread* m_pThread;
    boost::asio::io_service m_io_service;
    udp::endpoint m_local_endpoint;
    udp::endpoint m_sender_endpoint;
    udp::socket m_socket;
    size_t m_read_bytes;
    Semaphore m_receive_semaphore;

    ReceiveUDP() :
        m_socket(m_io_service),
        m_local_endpoint(boost::asio::ip::address::from_string("192.168.0.254"), 11),
        m_sender_endpoint(boost::asio::ip::address::from_string("192.168.0.11"), 5550),
        m_receive_semaphore(0)
    {
        Start();
    }

    void Start()
    {
        m_pThread = new boost::thread(&ReceiveUDP::_ThreadFunction, this);
    }

    void _HandleReceiveFrom(
        const boost::system::error_code& error,
        size_t received_bytes)
    {
        m_receive_semaphore.post();
        m_read_bytes = received_bytes;
    }

    void _ThreadFunction()
    {
        try
        {
            boost::array<char, 100> recv_buf;

            m_socket.open(udp::v4());
            m_socket.bind(m_local_endpoint);

            m_io_service.run();

            while (1)
            {
#if 1 // THIS WORKS
                m_read_bytes = m_socket.receive_from(
                    boost::asio::buffer(recv_buf), m_sender_endpoint);
#else // THIS DOESN'T WORK
                m_socket.async_receive_from(
                    boost::asio::buffer(recv_buf),
                    m_sender_endpoint,
                    boost::bind(&ReceiveUDP::_HandleReceiveFrom, this,
                        boost::asio::placeholders::error,
                        boost::asio::placeholders::bytes_transferred));

                /* The program locks on this wait since _HandleReceiveFrom
                   is never called. */
                m_receive_semaphore.wait();
#endif
                std::cout.write(recv_buf.data(), m_read_bytes);
            }

            m_socket.close();
        }
        catch (std::exception& e)
        {
            std::cerr << e.what() << std::endl;
        }
    }
};

int main()
{
    ReceiveUDP receive_thread;
    receive_thread.m_pThread->join();
}
A timed_wait on the semaphore is to be preferred, however for debug purposes I have used a blocking wait as in the code above.
Did I miss something? Where is my mistake?
Your call to io_service.run() is exiting because there is no work for the io_service to do. The code then enters the while loop and calls m_socket.async_receive_from. At this point the io_service is not running, so it never reads the data and never calls your handler.
You need to schedule the work before calling io_service::run(), i.e.:
// Configure io service
ReceiveUDP receiver;
m_socket.open(udp::v4());
m_socket.bind(m_local_endpoint);
m_socket.async_receive_from(
    boost::asio::buffer(recv_buf),
    m_sender_endpoint,
    boost::bind(&ReceiveUDP::_HandleReceiveFrom, &receiver,
        boost::asio::placeholders::error,
        boost::asio::placeholders::bytes_transferred));
The handler function will do the following:
void HandleReceiveFrom(
    const boost::system::error_code& error,
    size_t received_bytes)
{
    m_receive_semaphore.post();

    // schedule the next asynchronous read
    m_socket.async_receive_from(
        boost::asio::buffer(recv_buf),
        m_sender_endpoint,
        boost::bind(&ReceiveUDP::_HandleReceiveFrom, &receiver,
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));

    m_read_bytes = received_bytes;
}
Your thread then simply waits for the semaphore:
while (1)
{
    m_receive_semaphore.wait();
    std::cout.write(recv_buf.data(), m_read_bytes);
}
Notes:
Do you really need this additional thread? The handler is completely asynchronous, and boost::asio can be used to manage a thread pool (see: think-async)
Please do not use a leading underscore followed by a capital letter for variable / function names; such identifiers are reserved.
m_io_service.run() returns immediately, so no one dispatches completion handlers. Note that io_service::run is a kind of "message loop" for an asio-based application, and it should keep running as long as you want asio functionality to be available (this is a somewhat simplified description, but it's good enough for your case).
Besides, you should not invoke the async operation in a loop. Instead, issue the subsequent async operation in the completion handler of the previous one, to ensure that two async reads never run simultaneously.
See the asio examples for the typical asio application design.
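Applied to the question's class, that restructuring might look roughly like the sketch below; startReceive, handleReceiveFrom and the m_recv_buf member are illustrative names, not from the original code, and the receive buffer becomes a member so it outlives each asynchronous call:
// Sketch of a reworked thread routine: queue work before run(), chain reads in the handler.
void threadFunction()
{
    m_socket.open(udp::v4());
    m_socket.bind(m_local_endpoint);
    startReceive();        // queue work *before* run(), so run() has something to do
    m_io_service.run();    // dispatches completion handlers until no work remains
}

void startReceive()
{
    m_socket.async_receive_from(
        boost::asio::buffer(m_recv_buf), m_sender_endpoint,
        boost::bind(&ReceiveUDP::handleReceiveFrom, this,
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
}

void handleReceiveFrom(const boost::system::error_code& error, size_t received_bytes)
{
    if (!error)
    {
        std::cout.write(m_recv_buf.data(), received_bytes);
        startReceive();    // chain the next read from inside the completion handler
    }
}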

Boost::asio - how to interrupt a blocked tcp server thread?

I'm working on a multithreaded application in which one thread acts as a tcp server which receives commands from a client. The thread uses a Boost socket and acceptor to wait for a client to connect, receives a command from the client, passes the command to the rest of the application, then waits again. Here's the code:
void ServerThreadFunc()
{
    using boost::asio::ip::tcp;

    boost::asio::io_service io_service;
    tcp::acceptor acceptor(io_service, tcp::endpoint(tcp::v4(), port_no));

    for (;;)
    {
        // listen for command connection
        tcp::socket socket(io_service);
        acceptor.accept(socket);

        // connected; receive command
        boost::array<char,256> msg_buf;
        socket.receive(boost::asio::buffer(msg_buf));

        // do something with received bytes here
    }
}
This thread spends most of its time blocked on the call to acceptor.accept(). At the moment, the thread only gets terminated when the application exits. Unfortunately, this causes a crash after main() returns - I believe because the thread tries to access the app's logging singleton after the singleton has been destroyed. (It was like that when I got here, honest guv.)
How can I shut this thread down cleanly when it's time for the application to exit? I've read that a blocking accept() call on a raw socket can be interrupted by closing the socket from another thread, but this doesn't appear to work on a Boost socket. I've tried converting the server logic to asynchronous i/o using the Boost asynchronous tcp echo server example, but that just seems to exchange a blocking call to acceptor::accept() for a blocking call to io_service::run(), so I'm left with the same problem: a blocked call which I can't interrupt. Any ideas?
In short, there are two options:
Change code to be asynchronous (acceptor::async_accept() and async_read), run within the event loop via io_service::run(), and cancel via io_service::stop().
Force blocking calls to interrupt with lower level mechanics, such as signals.
I would recommend the first option, as it is more likely to be portable and easier to maintain. The important concept to understand is that io_service::run() only blocks as long as there is pending work. When io_service::stop() is invoked, it will try to cause all threads blocked on io_service::run() to return as soon as possible; it will not interrupt synchronous operations, such as acceptor::accept() and socket::receive(), even if the synchronous operations are invoked within the event loop. It is important to note that io_service::stop() is a non-blocking call, so synchronization with threads that were blocked on io_service::run() must use another mechanism, such as thread::join().
Here is an example that will run for 10 seconds and listens to port 8080:
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/thread.hpp>
#include <iostream>

void StartAccept( boost::asio::ip::tcp::acceptor& );

void ServerThreadFunc( boost::asio::io_service& io_service )
{
    using boost::asio::ip::tcp;
    tcp::acceptor acceptor( io_service, tcp::endpoint( tcp::v4(), 8080 ) );

    // Add a job to start accepting connections.
    StartAccept( acceptor );

    // Process event loop.
    io_service.run();

    std::cout << "Server thread exiting." << std::endl;
}

void HandleAccept( const boost::system::error_code& error,
                   boost::shared_ptr< boost::asio::ip::tcp::socket > socket,
                   boost::asio::ip::tcp::acceptor& acceptor )
{
    // If there was an error, then do not add any more jobs to the service.
    if ( error )
    {
        std::cout << "Error accepting connection: " << error.message()
                  << std::endl;
        return;
    }

    // Otherwise, the socket is good to use.
    std::cout << "Doing things with socket..." << std::endl;

    // Perform async operations on the socket.

    // Done using the socket, so start accepting another connection. This
    // will add a job to the service, preventing io_service::run() from
    // returning.
    std::cout << "Done using socket, ready for another connection."
              << std::endl;
    StartAccept( acceptor );
}

void StartAccept( boost::asio::ip::tcp::acceptor& acceptor )
{
    using boost::asio::ip::tcp;
    boost::shared_ptr< tcp::socket > socket(
        new tcp::socket( acceptor.get_io_service() ) );

    // Add an accept call to the service. This will prevent io_service::run()
    // from returning.
    std::cout << "Waiting on connection" << std::endl;
    acceptor.async_accept( *socket,
        boost::bind( HandleAccept,
            boost::asio::placeholders::error,
            socket,
            boost::ref( acceptor ) ) );
}

int main()
{
    using boost::asio::ip::tcp;

    // Create io service.
    boost::asio::io_service io_service;

    // Create server thread that will start accepting connections.
    boost::thread server_thread( ServerThreadFunc, boost::ref( io_service ) );

    // Sleep for 10 seconds, then shutdown the server.
    std::cout << "Stopping service in 10 seconds..." << std::endl;
    boost::this_thread::sleep( boost::posix_time::seconds( 10 ) );
    std::cout << "Stopping service now!" << std::endl;

    // Stopping the io_service is a non-blocking call. The threads that are
    // blocked on io_service::run() will try to return as soon as possible, but
    // they may still be in the middle of a handler. Thus, perform a join on
    // the server thread to guarantee a block occurs.
    io_service.stop();

    std::cout << "Waiting on server thread..." << std::endl;
    server_thread.join();
    std::cout << "Done waiting on server thread." << std::endl;

    return 0;
}
While running, I opened two connections. Here is the output:
Stopping service in 10 seconds...
Waiting on connection
Doing things with socket...
Done using socket, ready for another connection.
Waiting on connection
Doing things with socket...
Done using socket, ready for another connection.
Waiting on connection
Stopping service now!
Waiting on server thread...
Server thread exiting.
Done waiting on server thread.
When you receive an event indicating that it's time to exit, you can call acceptor.cancel(), which will cancel the pending accept (with an error code of operation_aborted). On some systems, you might also have to close() the acceptor to be safe.
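A minimal sketch of that approach; StopAccepting and HandleAccept are illustrative names, and it assumes the shutdown request is executed on (or posted to) the thread running the io_service:
// Posted to the io_service (or its strand) when it is time to shut down.
void StopAccepting(boost::asio::ip::tcp::acceptor& acceptor)
{
    boost::system::error_code ignored;
    acceptor.cancel(ignored);   // pending async_accept completes with operation_aborted
    acceptor.close(ignored);    // belt and braces on platforms where cancel alone is not enough
}

// The accept handler then distinguishes cancellation from real errors.
void HandleAccept(const boost::system::error_code& error)
{
    if (error == boost::asio::error::operation_aborted)
        return;                 // shutting down, do not re-issue async_accept
    // ... handle the new connection and call async_accept again ...
}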
If it comes to it, you could open a temporary client connection to it on localhost - that will wake it up. You could even send it a special message so that you can shut down your server from the pub - there should be an app for that:)
Simply call shutdown() with the native handle and the SHUT_RD option to cancel the existing receive (accept) operation.
The accepted answer is not exactly correct. In fact, @JohnYu answered correctly.
Using the blocking API of ASIO is much like using the BSD sockets API that the ASIO library wraps in its classes.
The problem is that the boost::asio::ip::tcp::acceptor class does not expose shutdown() functionality, so you must do it using the "old" sockets API.
Additional note: make sure the acceptor, socket and io_service are not deleted before all threads using them have exited. In the following code, std::shared_ptr is used to keep the shared resources alive, so a user of the ApplicationContext class can delete the ApplicationContext object without causing a SEGFAULT crash.
Additional note: pay attention to the Boost documentation; there are overloaded methods that throw exceptions and ones that return an error code. The original poster's code used acceptor->accept(socket); without a try/catch, which would cause program exit instead of a normal thread-routine exit and cleanup.
Here is the solution description:
#include <sys/socket.h> // for ::shutdown()
// other includes ...

using boost::asio::ip::tcp;
using boost::asio::io_service;

class ApplicationContext {
    // Use shared pointers to extend the life of the resources after ApplicationContext
    // is deleted, so running threads can still keep using the shared resources
    std::shared_ptr<tcp::acceptor> acceptor;
    std::shared_ptr<io_service> ioservice;

    // called `ServerThreadFunc` in the question's code example
    void AcceptLoopThreadRoutine(int port_no) {
        ioservice = std::make_shared<io_service>();
        acceptor = std::make_shared<tcp::acceptor>(*ioservice, tcp::endpoint(tcp::v4(), port_no));
        try {
            for (;;) {
                // listen for client connection
                tcp::socket socket(*ioservice);
                // Note boost::system::system_error is raised when using this overload
                acceptor->accept(socket);
                // connected; receive some data ...
                // // boost::array<char,256> msg_buf;
                // // socket.receive(boost::asio::buffer(msg_buf));
                // do something with received bytes here
            }
        } catch (std::exception const& exception) {
            // boost::system::system_error here indicates clean exit ;)
        }
    }

    void StopAcceptThread() {
        if (acceptor) {
            // boost::asio::ip::tcp::acceptor does not have shutdown() functionality
            // exposed, so we need to do it with this low-level approach
            int shutdown_status = shutdown(acceptor->native_handle(), SHUT_RDWR);
        }
    }
};
Also note that using signals to unblock the accept thread is a very nasty implementation, and a temporary client connection on localhost to unblock the accept thread is very awkward.
ASIO is there to help you accomplish everything in a single thread with callbacks. If you are mixing threads and ASIO, chances are your design is bad.
Additional note: do not confuse shutdown() and close(). Some systems may allow you to use close() on the accept socket to unblock the accept loop, but this is not portable.