I'm writing a simple tcp socket server capable of handling multiple concurrent connections. The idea is that the main listening thread will do a blocking accept and offload socket handles to a worker thread (in a thread pool) to handle the communication asynchronously from there.
void server::run() {
    {
        io_service::work work(io_service);
        for (std::size_t i = 0; i < pool_size; i++)
            thread_pool.push_back(std::thread([&] { io_service.run(); }));

        boost::asio::io_service listener;
        boost::asio::ip::tcp::acceptor acceptor(listener, ip::tcp::endpoint(ip::tcp::v4(), port));
        while (listening) {
            boost::asio::ip::tcp::socket socket(listener);
            acceptor.accept(socket);
            io_service.post([&] { callback(std::move(socket)); });
        }
    }
    for (ThreadPool::iterator it = thread_pool.begin(); it != thread_pool.end(); it++)
        it->join();
}
I'm creating the socket on the stack because I don't want to repeatedly allocate memory inside the while (listening) loop.
The callback function callback has the following prototype:
void callback(boost::asio::ip::tcp::socket socket);
It is my understanding that calling callback(std::move(socket)) will transfer ownership of socket to callback. However when I attempt to call socket.receive() from inside callback, I get a Bad file descriptor error, so I assume something is wrong here.
How can I transfer ownership of socket to the callback function, ideally without having to create sockets on the heap?
Undefined behavior is potentially being invoked, as the lambda may be invoking std::move() on a previously destroyed socket via a dangling reference. For example, consider the case where the loop containing the socket ends its current iteration, causing socket to be destroyed, before the lambda is invoked:
Main Thread                        | Thread Pool
-----------------------------------+----------------------------------
tcp::socket socket(...);           |
acceptor.accept(socket);           |
io_service.post([&socket] {...});  |
~socket(); // end iteration        |
...        // next iteration       | callback(std::move(socket));
To resolve this, one needs to transfer socket ownership to the handler rather than transferring ownership within the handler. Per the documentation, handlers must be CopyConstructible, and hence everything they capture, including the non-copyable socket, must be as well. However, this requirement can be relaxed if Asio can eliminate all calls to the handler's copy constructor and BOOST_ASIO_DISABLE_HANDLER_TYPE_REQUIREMENTS has been defined.
#define BOOST_ASIO_DISABLE_HANDLER_TYPE_REQUIREMENTS
#include <boost/asio.hpp>

void callback(boost::asio::ip::tcp::socket socket);

...

// Transfer ownership of socket to the handler.
io_service.post(
    [socket = std::move(socket)]() mutable
    {
        // Transfer ownership of socket to `callback`.
        callback(std::move(socket));
    });
For more details on Asio's type checking, see this answer.
Here is a complete example demonstrating a socket's ownership being transferred to a handler:
#include <algorithm>  // std::equal
#include <cassert>    // assert
#include <functional> // std::bind
#include <string>     // std::string
#include <utility>    // std::move
#include <vector>     // std::vector

#define BOOST_ASIO_DISABLE_HANDLER_TYPE_REQUIREMENTS
#include <boost/asio.hpp>

const auto noop = std::bind([]{});

void callback(boost::asio::ip::tcp::socket socket)
{
    const std::string actual_message = "hello";
    boost::asio::write(socket, boost::asio::buffer(actual_message));
}

int main()
{
    using boost::asio::ip::tcp;

    // Create all I/O objects.
    boost::asio::io_service io_service;
    tcp::acceptor acceptor(io_service, tcp::endpoint(tcp::v4(), 0));
    tcp::socket client_socket(io_service);

    // Connect the sockets.
    client_socket.async_connect(acceptor.local_endpoint(), noop);
    {
        tcp::socket socket(io_service);
        acceptor.accept(socket);

        // Transfer ownership of socket to the handler.
        assert(socket.is_open());
        io_service.post(
            [socket = std::move(socket)]() mutable
            {
                // Transfer ownership of socket to `callback`.
                callback(std::move(socket));
            });
        assert(!socket.is_open());
    } // ~socket

    io_service.run();

    // At this point, sockets have been connected, and `callback`
    // should have written data to `client_socket`.
    std::vector<char> buffer(client_socket.available());
    boost::asio::read(client_socket, boost::asio::buffer(buffer));

    // Verify the correct message was read.
    const std::string expected_message = "hello";
    assert(std::equal(
        begin(buffer), end(buffer),
        begin(expected_message), end(expected_message)));
}
Related
I am trying to wrap my head around resource management in boost::asio. I am seeing callbacks called after the corresponding sockets are already destroyed. A good example of this is in the boost::asio official example: http://www.boost.org/doc/libs/1_60_0/doc/html/boost_asio/example/cpp11/chat/chat_client.cpp
I am particularly concerned with the close method:
void close()
{
    io_service_.post([this]() { socket_.close(); });
}
If you call this function and afterwards destruct the chat_client instance that holds socket_, socket_ will be destroyed before the close method is called on it. Also, any pending async_* callbacks can be invoked after the chat_client has been destroyed.
How would you correctly handle this?
You can call socket_.close() at almost any time, but you should keep a few points in mind:

- If you have multiple threads, this call should be wrapped with a strand or you can crash. See the boost strand documentation, and the sketch after this list.
- Whenever you close, remember that the io_service can already have queued handlers, and they will be called anyway with the old state/error code.
- close can throw an exception.
- close does NOT include ip::tcp::socket destruction. It just closes the underlying system socket.
- You must manage object lifetime yourself to ensure objects are destroyed only when there are no more handlers. Usually this is done with enable_shared_from_this on your Connection or socket object.
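For example, here is a minimal sketch of the strand point from the first item above; the connection class, its strand_ member, and the close() body are illustrative assumptions rather than code from the question:

#include <boost/asio.hpp>

// Hypothetical connection type owning a socket and a strand.
class connection
{
public:
    explicit connection(boost::asio::io_service& io_service)
        : strand_(io_service),
          socket_(io_service)
    {}

    void close()
    {
        // Run the close through the strand so it cannot execute concurrently
        // with handlers (also dispatched through the strand) that touch socket_.
        strand_.post([this]()
        {
            boost::system::error_code ignored_ec;
            socket_.close(ignored_ec); // non-throwing overload
        });
    }

private:
    boost::asio::io_service::strand strand_;
    boost::asio::ip::tcp::socket socket_;
};

For this to actually be safe, the completion handlers that use socket_ would also need to be wrapped with strand_.wrap(...), so that every access to the socket is serialized through the same strand.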
Invoking socket.close() does not destroy the socket. However, the application may need to manage the lifetime of the objects on which the operation and its completion handler depend, and this is not necessarily the socket object itself. For instance, consider a client class that holds a buffer and a socket, and has a single outstanding read operation with a completion handler of client::handle_read(). One can close() and explicitly destroy the socket, but the buffer and the client instance must remain valid until at least the handler is invoked:
class client
{
    ...
    void read()
    {
        // Post handler that will start a read operation.
        io_service_.post([this]() {
            async_read(*socket_, boost::asio::buffer(buffer_),
                boost::bind(&client::handle_read, this,
                    boost::asio::placeholders::error,
                    boost::asio::placeholders::bytes_transferred));
        });
    }

    void handle_read(
        const boost::system::error_code& error,
        std::size_t bytes_transferred
    )
    {
        // Make use of data members... If socket_ is not used, then it
        // is safe for the socket to have already been destroyed.
    }

    void close()
    {
        io_service_.post([this]() {
            socket_->close();
            // As long as outstanding completion handlers do not
            // invoke operations on socket_, then socket_ can be
            // destroyed.
            socket_.reset();
        });
    }

private:
    boost::asio::io_service& io_service_;
    // Not a typical pattern, but used to exemplify that outstanding
    // operations on `socket_` are not explicitly dependent on the
    // lifetime of `socket_`.
    std::unique_ptr<boost::asio::ip::tcp::socket> socket_;
    std::array<char, 512> buffer_;
    ...
};
The application is responsible for managing the lifetime of the objects on which the operations and handlers depend. The chat client example accomplishes this by guaranteeing that the chat_client instance is destroyed only after it is no longer in use, by waiting for io_service.run() to return via the thread's join():
int main(...)
{
    try
    {
        ...
        boost::asio::io_service io_service;
        chat_client c(...);

        std::thread t([&io_service](){ io_service.run(); });
        ...
        c.close();
        t.join(); // Wait for `io_service.run` to return, guaranteeing
                  // that `chat_client` is no longer in use.
    } // The `chat_client` instance is destroyed.
    catch (std::exception& e)
    {
        ...
    }
}
One common idiom for managing object lifetime is to have the I/O object be managed by a single class that inherits from enable_shared_from_this<>. When a class inherits from enable_shared_from_this, its shared_from_this() member function returns a valid shared_ptr instance managing this. A copy of the shared_ptr is passed to completion handlers, either captured in a lambda's capture-list or passed as the instance handle to bind(), causing the lifetime of the I/O object to be extended to at least as long as the handler. See the Boost.Asio asynchronous TCP daytime server tutorial for an example using this approach.
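As a rough illustration of that idiom, here is a minimal sketch; the session class and its members are hypothetical and not taken from the tutorial:

#include <array>
#include <memory>
#include <utility>
#include <boost/asio.hpp>

// Hypothetical session type demonstrating enable_shared_from_this.
class session : public std::enable_shared_from_this<session>
{
public:
    explicit session(boost::asio::ip::tcp::socket socket)
        : socket_(std::move(socket))
    {}

    void start() { do_read(); }

private:
    void do_read()
    {
        // The captured shared_ptr keeps *this alive until the handler runs.
        auto self = shared_from_this();
        socket_.async_read_some(boost::asio::buffer(buffer_),
            [this, self](const boost::system::error_code& error, std::size_t /*bytes*/)
            {
                if (!error)
                    do_read(); // safe: `self` guarantees the session still exists
            });
    }

    boost::asio::ip::tcp::socket socket_;
    std::array<char, 512> buffer_;
};

// Typical usage after accepting a connection:
//   std::make_shared<session>(std::move(socket))->start();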
Consider this test program:
#include <boost/asio/io_service.hpp>
#include <boost/asio/ip/tcp.hpp>
#include <functional>
#include <iostream>
static void callback (boost::asio::ip::tcp::socket && socket)
{
    //boost::asio::ip::tcp::socket new_socket = std::move(socket);
    std::cout << "Accepted" << std::endl;
}

static void on_accept (boost::asio::ip::tcp::acceptor & acceptor,
                       boost::asio::ip::tcp::socket & socket,
                       boost::system::error_code const & error)
{
    if (error)
    {
        std::cerr << error << ' ' << error.message() << std::endl;
        return ;
    }

    callback(std::move(socket));

    acceptor.async_accept
    (
        socket,
        std::bind(on_accept, std::ref(acceptor), std::ref(socket), std::placeholders::_1)
    );
}

int main ()
{
    boost::asio::io_service service;
    boost::asio::io_service::work work { service };

    boost::asio::ip::tcp::acceptor acceptor { service };
    boost::asio::ip::tcp::socket socket { service };
    boost::asio::ip::tcp::endpoint endpoint { boost::asio::ip::tcp::v4(), 5555 };

    boost::system::error_code ec;
    using socket_base = boost::asio::socket_base;
    auto option = socket_base::reuse_address { false };

    if (acceptor.open(endpoint.protocol(), ec) ||
        acceptor.set_option(option, ec) ||
        acceptor.bind(endpoint, ec) ||
        acceptor.listen(socket_base::max_connections, ec) ||
        acceptor.is_open() == false)
        return 1;

    acceptor.async_accept
    (
        socket,
        std::bind(on_accept, std::ref(acceptor), std::ref(socket), std::placeholders::_1)
    );

    service.run();
}
When I connect a client to it, I get an error:
Accepted
system:1 Incorrect function
(The on_accept() function is called with an error code when the socket object from the callback() function is destroyed).
Also, the client is not disconnected at all.
If I uncomment the line in the callback() function, everything works fine, no error message and the client is disconnected as expected.
Now for the environment settings, I'm under Windows 8.1, using a MinGW-w64 v4.9.2 compiler with Boost.Asio v1.58.0 compiled with that same compiler.
The command line used to compile the file is as follows:
$ g++ -std=c++14 -IC:/C++/boost/1.58.0 main.cpp -LC:/C++/boost/1.58.0/lib -lboost_system-mgw49-mt-1_58 -lwsock32 -lws2_32 -o test.exe
Note that using Boost 1.57.0 results in the same behavior.
I can also remove the commented line completely, and then use this:
static void callback (boost::asio::ip::tcp::socket && socket)
{
    std::cout << "Accepted" << std::endl;
    socket.shutdown(socket.shutdown_both);
    socket.close();
}
And the program will behave correctly too.
So, how come I need to add extra steps to avoid getting an error here? IIRC this behavior wasn't present a couple of months ago when I first tried this program.
The code only creates a single socket, which is an automatic variable whose lifetime will end once main() returns. std::move(socket) only returns an xvalue that can be provided to socket's move constructor; it does not construct a socket.
To resolve this, consider changing the callback() signature to accept the socket by value, allowing the compiler to invoke the move constructor for you when given an xvalue. Change:
static void callback (boost::asio::ip::tcp::socket && socket)
to:
static void callback (boost::asio::ip::tcp::socket socket)
Overall, the flow of the code is as follows:
void callback(socket&&); // rvalue reference.

void on_accept(acceptor&, socket&, ...) // lvalue reference.
{
    ...
    callback(static_cast<socket&&>(socket)); // Cast to xvalue.
    acceptor.async_accept(socket,
        std::bind(&on_accept,
            std::ref(acceptor),
            std::ref(socket), // lvalue reference.
            ...));
}

int main()
{
    boost::asio::io_service io_service;
    boost::asio::io_service::work work(io_service);
    boost::asio::ip::tcp::acceptor acceptor(io_service);
    boost::asio::ip::tcp::socket socket(io_service); // Constructor.
    ...
    acceptor.async_accept(socket,
        std::bind(&on_accept,
            std::ref(acceptor),
            std::ref(socket), // lvalue reference.
            ...));
    io_service.run();
}
Upon successfully accepting the first connection, the socket in main() is open. The on_accept() function invokes callback() with an xvalue, which does not change the state of the socket. Another async_accept() operation is then initiated on the already open socket, causing the operation to fail immediately; on_accept() is invoked with an error and returns early, stopping the accept chain. As io_service::work is attached to the io_service, execution never returns from io_service::run(), preventing main() from returning and destroying the socket. The net result is that no more connections are accepted (no async_accept() operations are outstanding) and the client is never disconnected (the socket is never closed or destroyed).
When callback() changes the state of the socket to closed, the issue is no longer present because the pre-condition for async_accept() is met. The other examples meet this pre-condition because:

- Uncommenting the one line results in the move constructor being invoked. The moved-from socket is left in the same state as if constructed with the socket(io_service&) constructor.
- The socket is explicitly closed via socket.close().
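As a small self-contained check (not part of the original post), one can observe the moved-from state directly:

#include <cassert>
#include <utility>
#include <boost/asio.hpp>

int main()
{
    using boost::asio::ip::tcp;
    boost::asio::io_service io_service;

    tcp::socket a(io_service);
    a.open(tcp::v4());           // a now wraps an open system socket
    assert(a.is_open());

    tcp::socket b(std::move(a)); // move-construct b from a
    assert(b.is_open());         // b took over the descriptor
    assert(!a.is_open());        // a is as if constructed as socket(io_service)
}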
In most Boost.Asio examples, people usually do the following:
boost::asio::io_service io_service;
tcp::socket s1(io_service);
tcp::socket s2(io_service);
io_service.run();
But I am writing a class that already has an io_service running in a thread, and it has to create sockets with this io_service. Hence my question: how do I make this thread safe?
class MySocket
{
private:
    boost::asio::io_service* ioService;
    tcp::socket* socket;

public:
    MySocket(boost::asio::io_service* nioService,
             tcp::resolver::iterator endpoint_iterator):
        ioService(nioService)
    {
        socket = new tcp::socket(*ioService);
    }
    ~MySocket();
};
SocketHandler handler;
handler.run(); //run io_service in thread
MySocket* s1 = handler.createSocket("localhost", "80");
//do something
MySocket* s2 = handler.createSocket("localhost", "81");
//dododo
handler.destroySocket(s1);
handler.destroySocket(s2);
You can create new sockets at any time with boost::asio.
io_service::run() blocks until its work queue is empty; if there is no work in the queue, the function returns immediately. That's why people usually add work to it (create timers, bind sockets, start asynchronous operations, etc.) prior to calling io_service::run().
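A common way to keep run() from returning while work arrives later is an io_service::work guard; this is a minimal sketch rather than code from the question:

#include <thread>
#include <boost/asio.hpp>

int main()
{
    boost::asio::io_service io_service;

    // The work object prevents run() from returning while the queue is empty.
    boost::asio::io_service::work work(io_service);
    std::thread t([&io_service] { io_service.run(); });

    // ... sockets using io_service can now be created and asynchronous
    // operations started; run() keeps processing their handlers ...

    io_service.stop(); // or destroy `work` and let pending handlers drain
    t.join();
}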
BTW: I don't recommend doing it this way:
MySocket* s1 = handler.createSocket("localhost", "80");
...
handler.destroySocket(s1);
use RAII-objects (smart pointers) instead.
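A hedged sketch of that suggestion, reusing the question's MySocket and SocketHandler but wrapping the raw pointer returned by createSocket (the exact signature is assumed):

#include <memory>

std::unique_ptr<MySocket> s1(handler.createSocket("localhost", "80"));
// do something
std::unique_ptr<MySocket> s2(handler.createSocket("localhost", "81"));
// dododo
// No destroySocket() calls: each socket is destroyed automatically when its
// unique_ptr goes out of scope.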
asio::io_service ioService;
asio::ip::tcp::socket* socket = new asio::ip::tcp::socket(ioService);
socket->async_connect(endpoint, handler);
delete socket;
The socket's destructor should close the socket. But can the asynchronous backend handle this? Will it cancel the asynchronous operation and call the handler? Probably not?
When the socket is destroyed, it invokes destroy on its service. When a SocketService's destroy() function is invoked, it cancels asynchronous operations by calling a non-throwing close(). Handlers for cancelled operations will be posted for invocation within io_service with a boost::asio::error::operation_aborted error.
Here is a complete example demonstrating the documented behavior:
#include <iostream>
#include <boost/asio.hpp>

void handle_connect(const boost::system::error_code& error)
{
    std::cout << "handle_connect: " << error.message() << std::endl;
}

int main()
{
    namespace ip = boost::asio::ip;
    using ip::tcp;
    boost::asio::io_service io_service;

    // Create socket with a scoped life.
    {
        tcp::socket socket(io_service);
        socket.async_connect(
            tcp::endpoint(ip::address::from_string("1.2.3.4"), 12345),
            &handle_connect);
    }

    io_service.run();
}
And its output:
handle_connect: Operation canceled
Why did you create the socket using new? It definitely won't behave normally that way.
If you really want to create the socket using new, you have to close and delete it at the end of your program.
Here is a simple sample:
io_service service_;
ip::tcp::socket sock(service_);
sock.async_connect(ep, connect_handler);
deadline_timer t(service_, boost::posix_time::seconds(5));
t.async_wait(timeout_handler);
service_.run();
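For the heap-allocated case described above, the close-and-delete step might look like this rough sketch, reusing the socket pointer from the question's snippet (the error-handling choices are assumptions):

// Clean up a heap-allocated socket before the io_service goes away.
boost::system::error_code ec;
socket->shutdown(boost::asio::ip::tcp::socket::shutdown_both, ec); // ignore errors
socket->close(ec);
delete socket; // cancels any remaining asynchronous operations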
I want to create an autonomous thread devoted only to receiving data from a UDP socket using the Boost libraries (Asio). This thread should be an infinite loop triggered by data received from the UDP socket. In my application I need to use an asynchronous receive operation.
If I use the synchronous function receive_from, everything works as expected.
However, if I use async_receive_from, the handler is never called. Since I use a semaphore to detect that some data has been received, the program blocks and the loop is never triggered.
I have verified (with a network analyzer) that the sender device properly sends the data on the UDP socket.
I have isolated the problem in the following code.
#include <boost/array.hpp>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>
#include <boost/interprocess/sync/interprocess_semaphore.hpp>
#include <iostream>
typedef boost::interprocess::interprocess_semaphore Semaphore;
using namespace boost::asio::ip;
class ReceiveUDP
{
public:
    boost::thread* m_pThread;
    boost::asio::io_service m_io_service;
    udp::endpoint m_local_endpoint;
    udp::endpoint m_sender_endpoint;
    udp::socket m_socket;
    size_t m_read_bytes;
    Semaphore m_receive_semaphore;

    ReceiveUDP() :
        m_socket(m_io_service),
        m_local_endpoint(boost::asio::ip::address::from_string("192.168.0.254"), 11),
        m_sender_endpoint(boost::asio::ip::address::from_string("192.168.0.11"), 5550),
        m_receive_semaphore(0)
    {
        Start();
    }

    void Start()
    {
        m_pThread = new boost::thread(&ReceiveUDP::_ThreadFunction, this);
    }

    void _HandleReceiveFrom(
        const boost::system::error_code& error,
        size_t received_bytes)
    {
        m_receive_semaphore.post();
        m_read_bytes = received_bytes;
    }

    void _ThreadFunction()
    {
        try
        {
            boost::array<char, 100> recv_buf;
            m_socket.open(udp::v4());
            m_socket.bind(m_local_endpoint);
            m_io_service.run();

            while (1)
            {
#if 1 // THIS WORKS
                m_read_bytes = m_socket.receive_from(
                    boost::asio::buffer(recv_buf), m_sender_endpoint);
#else // THIS DOESN'T WORK
                m_socket.async_receive_from(
                    boost::asio::buffer(recv_buf),
                    m_sender_endpoint,
                    boost::bind(&ReceiveUDP::_HandleReceiveFrom, this,
                        boost::asio::placeholders::error,
                        boost::asio::placeholders::bytes_transferred));

                /* The program locks on this wait since _HandleReceiveFrom
                   is never called. */
                m_receive_semaphore.wait();
#endif
                std::cout.write(recv_buf.data(), m_read_bytes);
            }

            m_socket.close();
        }
        catch (std::exception& e)
        {
            std::cerr << e.what() << std::endl;
        }
    }
};

int main()
{
    ReceiveUDP receive_thread;
    receive_thread.m_pThread->join();
}
A timed_wait on the semaphore would be preferable; however, for debugging purposes I have used a blocking wait, as in the code above.
Did I miss something? Where is my mistake?
Your call to io_service.run() is exiting because there is no work for the io_service to do. The code then enters the while loop and calls m_socket.async_receive_from. At this point the io_service is not running, so it never reads the data and never calls your handler.
You need to schedule the work before calling io_service::run(), i.e.:
// Configure the io_service's work before running it.
ReceiveUDP receiver;
m_socket.open(udp::v4());
m_socket.bind(m_local_endpoint);
m_socket.async_receive_from(
    boost::asio::buffer(recv_buf),
    m_sender_endpoint,
    boost::bind(&ReceiveUDP::HandleReceiveFrom, &receiver,
        boost::asio::placeholders::error,
        boost::asio::placeholders::bytes_transferred));
Then start the io_service. The handler function stores the byte count, posts the semaphore, and schedules the next asynchronous read:

void HandleReceiveFrom(
    const boost::system::error_code& error,
    size_t received_bytes)
{
    m_read_bytes = received_bytes;
    m_receive_semaphore.post();

    // Schedule the next asynchronous read.
    m_socket.async_receive_from(
        boost::asio::buffer(recv_buf),
        m_sender_endpoint,
        boost::bind(&ReceiveUDP::HandleReceiveFrom, this,
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
}
Your thread then simply waits for the semaphore:
while (1)
{
    m_receive_semaphore.wait();
    std::cout.write(recv_buf.data(), m_read_bytes);
}
Notes:

- Do you really need this additional thread? The handler is completely asynchronous, and Boost.Asio can be used to manage a thread pool (see: think-async).
- Please do not use identifiers beginning with an underscore followed by a capital letter for variable/function names; such identifiers are reserved.
m_io_service.run() returns immediately, so no one dispatches completion handlers. Note that io_service::run is a kind of "message loop" for an asio-based application, and it should keep running as long as you want asio functionality to be available (this is a slightly simplified description, but it is good enough for your case).
Besides, you should not invoke the async operation in a loop. Instead, issue the subsequent async operation from the completion handler of the previous one, as sketched below, to ensure that two async reads never run simultaneously.
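For example, a minimal sketch of that pattern using the question's class; the member buffer m_recv_buf and the function names start_receive/handle_receive are illustrative assumptions:

void ReceiveUDP::start_receive()
{
    // Schedule one read; the next one is scheduled only from the handler.
    m_socket.async_receive_from(
        boost::asio::buffer(m_recv_buf), // assumed member buffer
        m_sender_endpoint,
        boost::bind(&ReceiveUDP::handle_receive, this,
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
}

void ReceiveUDP::handle_receive(
    const boost::system::error_code& error, std::size_t received_bytes)
{
    if (!error)
    {
        m_read_bytes = received_bytes;
        m_receive_semaphore.post();
        start_receive(); // issue the next read only after this one completed
    }
}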
See the Asio examples for the typical design of an asio-based application.