I'm testing my application which contains a TCP client. To test it I've created a simple TCP server based on the boost examples. The problem is that roughly once every ~5 test invocations under valgrind the test fails to connect to the local server. When not running under valgrind, all the tests pass on every invocation.
I can't find the cause of it. The server implementation:
class arcturus_mock
{
private:
boost::asio::io_service ios;
boost::asio::ip::tcp::socket socket;
boost::asio::ip::tcp::acceptor acceptor;
std::thread t;
public:
arcturus_mock(short port)
: acceptor(ios,
boost::asio::ip::tcp::endpoint(boost::asio::ip::tcp::v4(), port))
, socket(ios)
{
do_accept();
}
// A new thread is created so that the call to the
// io_service::run function does not block
void run()
{
t = std::thread([&ios = this->ios]() { ios.run(); });
}
void stop()
{
ios.stop();
t.join();
}
private:
void do_accept()
{
acceptor.async_accept(socket, [this](boost::system::error_code ec) {
if (!ec)
std::make_shared<arcturus_mock_session>(std::move(socket))
->start();
do_accept();
});
}
};
And the corresponding session:
class arcturus_mock_session : public std::enable_shared_from_this<arcturus_mock_session>
{
private:
boost::asio::ip::tcp::socket socket;
char data[1024];
public:
arcturus_mock_session(boost::asio::ip::tcp::socket &&used_socket)
: socket(std::move(used_socket))
{
}
void start()
{
using boost::asio::async_write;
async_write( ...
}
};
I run the tests using the Catch2 framework. This is what the test case looks like:
TEST_CASE(" ... ")
{
arcturus_mock mock(1050);
mock.run();
SECTION(" ... ")
{
client c;
// That throws sometimes
REQUIRE_NOTHROW(c.connect_and_handle("localhost", 1050));
}
mock.stop();
}
Could the problem be that the thread doesn't manage to start the server before the client tries to connect to it?
It's a race condition.
When the server thread starts, there is not necessarily any work to be done (async_accept may not happen quickly enough). This means run() simply exits immediately, and the server doesn't run.
Either make sure the async_accept precedes the thread launch, or use an io_service::work object to keep the service occupied.
I am new to Asio, so I am a little confused about the control flow of asynchronous operations. Let's see this server:
class session
{
...
sendMsg()
{
bool idle = msgQueue.empty();
msgQueue.push(msg);
if (idle)
send();
}
send()
{
async_write(write_handler);
}
write_handler()
{
msgQueue.pop()
if (!msgQueue.empty())
send();
}
recvMsg()
{
async_read(read_handler);
}
read_handler()
{
...
recvMsg();
}
...
};
class server
{
...
start()
{
async_accept(accept_handler);
}
accept_handler()
{
auto client = make_shared<session>(move(socket));
client->recvMsg();
...
start();
}
...
};
int main()
{
io_context;
server srv(io_context, 22222);
srv.start();
io_context.run();
return 0;
}
In this case, all completion handlers (accept_handler, read_handler, write_handler) will be called in the thread calling io_context.run(), which is the main thread. If they run in the same thread, that means they run sequentially, not concurrently, right? And further, the msgQueue will be accessed sequentially, so there is no need for a mutex lock on this queue, right?
I think the async_* functions tell the operating system to do some work, and these operations will run simultaneously in some other threads with their own buffers. Even if several of them complete at the same time (say, at one point a new connection request arrives, a new message from an existing client arrives, and sending a message to an existing client finishes), the completion handlers (accept_handler, read_handler, write_handler) will still be called sequentially. They will not run concurrently, am I correct?
Thank you so much for your help.
Yes. There's only one thread running the io_context, so all completion handlers will run on that one thread. Indeed this implies a strand (the implicit strand) of execution, namely, all handlers will execute sequentially.
See: https://www.boost.org/doc/libs/1_81_0/doc/html/boost_asio/overview/core/threads.html
and these operations will run simultaneously in some other threads with their own buffers
They will run asynchronously, but not usually on another thread. There could be internal threads, or kernel threads, but also just hardware. Their "own" buffer is true, but dangerously worded, because in Asio the operations never own the buffer - you have to make sure it stays valid until the operation completes.
Note:
- if there can be multiple threads running (or polling) the io service, you need to make sure access to IO objects is synchronized. In Asio this can be achieved with strand executors
- not all IO operations may be active in overlapping fashion. You seem to be aware of this given the msgQueue in your pseudo code
Bonus
As a bonus, let me convert your code into non-pseudo code, showing an explicit strand per connection to be future-proof:
Live On Coliru
#include <boost/asio.hpp>
#include <deque>
namespace asio = boost::asio;
using asio::ip::tcp;
using boost::system::error_code;
using namespace std::placeholders;
class session : public std::enable_shared_from_this<session> {
public:
session(tcp::socket s) : s(std::move(s)) {}
void start() {
post(s.get_executor(), [self = shared_from_this()] { self->recvMsg(); });
}
void sendMsg(std::string msg) {
post(s.get_executor(), [=, self = shared_from_this()] { self->do_sendMsg(msg); });
}
private:
//... all private members on strand
void do_sendMsg(std::string msg) {
bool was_idle = msgQueue.empty();
msgQueue.push_back(std::move(msg));
if (was_idle)
do_writeloop();
}
void do_writeloop() {
if (!msgQueue.empty())
async_write(s, asio::buffer(msgQueue.front()),
std::bind(&session::write_handler, shared_from_this(), _1, _2));
}
void write_handler(error_code ec, size_t) {
if (!ec) {
msgQueue.pop_front();
do_writeloop();
}
}
void recvMsg() {
//async_read(s, asio::dynamic_buffer(incoming),
//std::bind(&session::read_handler, shared_from_this(), _1, _2));
async_read_until(s, asio::dynamic_buffer(incoming), "\n",
std::bind(&session::read_handler, shared_from_this(), _1, _2));
}
void read_handler(error_code ec, size_t n) {
if (!ec) {
auto msg = incoming.substr(0, n);
incoming.erase(0, n);
recvMsg();
sendMsg("starting job for " + msg);
sendMsg("finishing job for " + msg);
sendMsg(" -- some other message --\n");
}
}
tcp::socket s;
std::string incoming;
std::deque<std::string> msgQueue;
};
class server {
public:
server(auto ex, uint16_t port) : acc(ex, tcp::v4()) {
acc.set_option(tcp::acceptor::reuse_address(true));
acc.bind({{}, port});
acc.listen();
}
void accept_loop() {
acc.async_accept(make_strand(acc.get_executor()),
std::bind(&server::accept_handler, this, _1, _2));
}
void accept_handler(error_code ec, tcp::socket s) {
if (!ec ){
std::make_shared<session>(std::move(s))->start();
accept_loop();
}
}
private:
tcp::acceptor acc;
};
int main() {
boost::asio::io_context ioc;
server srv(ioc.get_executor(), 22222);
srv.accept_loop();
ioc.run();
}
With a sample client
for a in foo bar qux; do (sleep 1.$RANDOM; echo "command $a")|nc 127.0.0.1 22222 -w2; done
Prints
starting job for command foo
finishing job for command foo
-- some other message --
starting job for command bar
finishing job for command bar
-- some other message --
starting job for command qux
finishing job for command qux
-- some other message --
I am building an application in Qt where I am also creating a socket server in a separate thread using boost.
Now when I close the GUI of the Qt application I want that thread to be closed as well.
But currently I do not understand how to signal the thread to close when the GUI is closed.
int main(int argc, char *argv[])
{
QApplication a(argc, argv);
cont.SetName("RootItem");
TreeModel* model = new TreeModel("RootElement", &cont);
WavefrontRenderer w(model);
w.show(); // Show the QT ui
boost::asio::io_service io_service;
server server1(io_service, 1980);
boost::thread t(boost::bind(&io_service::run, &io_service));
return a.exec();
}
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
#include "ConHandler.h"
#include "WavefrontRenderer.h"
class Server
{
private:
tcp::acceptor acceptor_;
void start_accept()
{
// socket
con_handler::pointer connection = con_handler::create(acceptor_.get_io_service());
// asynchronous accept operation and wait for a new connection.
acceptor_.async_accept(connection->socket(),
boost::bind(&Server::handle_accept, this, connection,
boost::asio::placeholders::error));
}
public:
//constructor for accepting connection from client
Server(boost::asio::io_service& io_service ) : acceptor_(io_service, tcp::endpoint(tcp::v4(), 1980))
{
start_accept();
}
void handle_accept(con_handler::pointer connection, const boost::system::error_code& err)
{
if (!err) {
connection->start();
}
start_accept();
}
};
///////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
By saying "stop the thread" you mean calling the stop method on the io_service instance. Without this, the destructor of boost::thread is invoked on an unfinished thread, which leads to UB.
All GUI events are processed in the exec method. If you close all windows, this method returns, main ends as well, and at the end of main's scope all local variables are destroyed.
So you can just make a wrapper around a lambda which will be called at the end of main, and there you can call the stop method. Then the destructor of the thread will run without any problems.
template<class F>
struct Cleaner {
Cleaner(F in) : f(in) {}
~Cleaner() { f(); }
F f;
};
template<class F>
Cleaner<F> makeCleaner(F f) {
return Cleaner<F>(f);
}
int main(int argc, char *argv[])
{
QApplication a(argc, argv);
boost::asio::io_service io_service;
server server1(io_service, 1980);
boost::thread t([&io_service] { io_service.run(); }); // run the event loop on a worker thread
auto raii = makeCleaner( [&](){ io_service.stop(); } );
return a.exec();
}
I would like to use a dedicated thread to receive UDP data using the asio library. Example code is given below.
#define ASIO_STANDALONE // we are using the stand-alone version of ASIO and not Boost.Asio
#include <iostream>
#include "include/asio.hpp"
#include <array>
#include <thread>
class UDPServer
{
public:
UDPServer( asio::io_service& ioService): m_socket(ioService)
{}
~UDPServer(){}
void listen(const int& port)
{
m_socket.open(asio::ip::udp::v4());
m_socket.bind(asio::ip::udp::endpoint(asio::ip::udp::v4(), port));
#define DEDICATED_THREAD_FLAG 1
#if DEDICATED_THREAD_FLAG
m_thread = std::thread( &UDPServer::receive, this);
std::cout<<"Thead Id in listen:"<<std::this_thread::get_id()<<std::endl;
m_thread.join();
#else
receive();
#endif
}
template<std::size_t SIZE>
void processReceivedData(const std::array<char, SIZE>& rcvdMessage,
const int& rcvdMessageSizeInBytes,
const std::error_code& error)
{
std::cout<<"Rcvd Message: "<<rcvdMessage.data()<<std::endl;
receive();
}
void receive()
{
std::cout<<"Thead Id in receive0:"<<std::this_thread::get_id()<<std::endl;
asio::ip::udp::endpoint m_udpRemoteEndpoint;
m_socket.async_receive_from(asio::buffer(recv_buffer, recv_buffer.size()/*NetworkBufferSize*/), m_udpRemoteEndpoint,
[this](std::error_code ec, std::size_t bytesReceived)
{
std::cout<<"Thead Id in receive1:"<<std::this_thread::get_id()<<std::endl;
processReceivedData(recv_buffer, bytesReceived, ec);
});
}
private:
asio::ip::udp::socket m_socket;
std::thread m_thread;
static const int NetworkBufferSize = 9000;
std::array<char, NetworkBufferSize> recv_buffer;
};
int main()
{
std::cout<<"Main Thead Id:"<<std::this_thread::get_id()<<std::endl;
asio::io_service m_ioService;
UDPServer myServer( m_ioService);
myServer.listen(12345); // starting the UDP server
std::cout<<"Program waiting.."<<std::endl;
m_ioService.run();
std::cout<<"Program ending.."<<std::endl;
}
A non-dedicated-thread version can be enabled by changing DEDICATED_THREAD_FLAG to 0, and that works as expected.
However, when DEDICATED_THREAD_FLAG is set to 1, a new thread starts and enters the receive function, but when a UDP packet arrives it is handled only by the main thread and not by the dedicated thread.
What is going wrong here?
The whole event loop that handles the asynchronous calls is run by the io_service, which you run in the main thread.
Instead of running the receive function on the dedicated thread (it returns immediately anyway, because it only queues an async operation), you should run io_service::run there.
I'm trying to implement a single C++ application that holds two processing loops. Currently the first processing loop (boost's io_service::run) blocks the execution of the second one.
Approaches using threads or std::async have failed (I don't have experience/background in multi-threading).
Is there an elegant way to run io_service::run in another thread, while still executing the callbacks upon incoming UDP datagrams?
Main-File:
class Foo
{
public:
Foo();
void callback(const int&);
private:
// ... (hopefully) non-relevant stuff...
};
int main()
{
Foo foo_obj;
// I need to run this function (blocking) but the constructor is blocking (io_service::run())
run();
return 0;
}
Foo::Foo(){
boost::asio::io_service io;
UDP_Server UDP_Server(io);
// Set function to be called on received message
UDP_Server.add_handler(std::bind(&Foo::callback, this, std::placeholders::_1));
// This function should be non-blocking
// -> tried several things, like threads, async, ... (unfortunately not successful)
io.run();
}
// realization of callback function here (see class definition)
Included custom "library":
class UDP_Server
{
public:
UDP_Server(boost::asio::io_service&);
void add_handler(std::function<void(int)>);
private:
// Function handle
std::function<void(int)> callbackFunctionHandle;
// Functions
void start_receive();
void handle_receive(const boost::system::error_code&, std::size_t);
// ... (hopefully) non-relevant stuff...
};
// Constructor
UDP_Server::UDP_Server(boost::asio::io_service& io_service)
: socket_(io_service, udp::endpoint(udp::v4(), UDP_PORT)){
}
// Store a callback function (class foo) to be called whenever a message is received
void UDP_Server::add_handler(std::function<void(int)> callbackFunction){
try
{
callbackFunctionHandle = callbackFunction;
start_receive();
}
catch (std::exception& e)
{
std::cerr << e.what() << std::endl;
}
}
// Async receive
void UDP_Server::start_receive()
{
socket_.async_receive_from(
boost::asio::buffer(recv_buffer_), remote_endpoint_,
boost::bind(&UDP_Server::handle_receive, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
// When message is received
void UDP_Server::handle_receive(const boost::system::error_code& error,
std::size_t bytes_transferred)
{
if (!error || error == boost::asio::error::message_size)
{
// ... do smth. with the received data ...
// Call specified function in Foo class
callbackFunctionHandle(some_integer);
start_receive();
}
else{
// ... handle errors
}
}
Have a look at what they did here:
boost::asio::io_service io_service;
/** your code here **/
boost::thread(boost::bind(&boost::asio::io_service::run, &io_service));
ros::spin();
So you basically start the blocking call to io_service::run() in a separate thread from the ros::spin().
If you bind that thread to a single CPU core (so you don't waste two cores on blocking waits), the scheduler can handle the rest.
All of the boost examples work until I try to implement the exact same thing myself. I'm starting to think there must be some required order of creation, or some io_service ownership rule, for things to block properly.
My server structure is as follows:
class Server {
public:
Server(unsigned short port)
: ioService_(), acceptor_(ioService_), socket_(ioService_) {
acceptClient(); // begin async accept
}
void start(); // runs ioService_.run();
private:
void acceptClient();
asio::io_service ioService_;
tcp::acceptor acceptor_;
tcp::socket socket_;
Cluster cluster_; // essentially just a connection manager
};
The acceptClient() function works like this:
void Server::acceptClient() {
acceptor_.async_accept(socket_, [this](const system::error_code& e){
if(!acceptor_.is_open()) return;
if(!e) {
cluster_.add(std::make_shared<Client>(std::move(socket_), cluster_));
}
acceptClient();
});
}
I'm not sure if you need an outline of the Client class since the server should run and block even with no clients.
The creation of the server goes as follows:
try {
Server server(port);
server.start(); // this calls the server's member io_service's run();
} catch (const std::exception& e) {
std::cerr << e.what() << std::endl;
}
The problem is that the server closes instantly after that call. The program starts and then exits with no errors. Is there something that io_service.run() relies on, e.g. some form of asynchronous link that I've forgotten? I learned this design from boost asio's HTTP server example, but I've adapted it to fit my basic purposes. The confusing part is that some boost examples create a new tcp::socket member in the client itself rather than moving the server's socket into the client. They also tend to use boost's version of bind instead of lambdas, and so on.
So, can anyone give me a brief rundown on how to create a basic, stripped-down async server? The boost examples are really confusing because the code conventions differ from example to example. I was wondering if anybody notices anything straight away that would cause my server to close instantly.
Thanks.
I tested async_accept with the following code, which sends Hello to clients connecting to the port. At the very least, the creation of the endpoint object and the acceptor.open(endpoint.protocol()), acceptor.bind(endpoint) and acceptor.listen() calls seem to be missing from your code.
#include <boost/bind.hpp>
#include <boost/asio.hpp>
#include <string>
using namespace boost::asio;
void handle_accept(
io_service * ios,
ip::tcp::acceptor * acceptor,
ip::tcp::socket * socket,
const boost::system::error_code & error)
{
if (!error) {
std::string msg("Hello\n");
socket->send(buffer(msg, msg.length()));
ip::tcp::socket * temp = new ip::tcp::socket(*ios);
acceptor->async_accept(*temp,
boost::bind(handle_accept,
ios, acceptor, temp,
placeholders::error));
}
}
int main(void)
{
io_service ios;
ip::tcp::socket socket(ios);
ip::tcp::acceptor acceptor(ios);
ip::tcp::endpoint endpoint(ip::tcp::v4(), 1500);
acceptor.open(endpoint.protocol());
acceptor.set_option(ip::tcp::acceptor::reuse_address(true));
acceptor.bind(endpoint);
acceptor.listen();
acceptor.async_accept(socket,
boost::bind(handle_accept,
&ios, &acceptor, &socket,
placeholders::error));
ios.run();
/*
acceptor.accept(socket);
std::string msg("Hello\n");
socket.send(buffer(msg, msg.length()));
*/
}
A version with a Server class and a lambda as an argument for async_accept:
#include <boost/asio.hpp>
#include <functional>
#include <string>
using namespace boost::asio;
class Server {
public:
Server(unsigned short port) : ios(), acceptor(ios), socket(ios),
endpoint(ip::tcp::v4(), port) {
acceptor.open(endpoint.protocol());
acceptor.set_option(ip::tcp::acceptor::reuse_address(true));
acceptor.bind(endpoint);
acceptor.listen();
nsocket = &socket;
}
void run() {
std::function<void (const boost::system::error_code &)> f;
f = [&f, this] (const boost::system::error_code & error) {
if (!error) {
std::string msg("Hello\n");
nsocket->send(buffer(msg, msg.length()));
nsocket = new ip::tcp::socket(ios);
acceptor.async_accept(*nsocket, f);
}
};
acceptor.async_accept(socket, f);
ios.run();
}
protected:
io_service ios;
ip::tcp::acceptor acceptor;
ip::tcp::socket socket;
ip::tcp::endpoint endpoint;
ip::tcp::socket * nsocket;
};
int main(void)
{
Server srv(1500);
srv.run();
}