I have a protocol structure where one class takes care of protocol states (Protocol) and another class takes care of sending and receiving messages (Comm).
I'm using boost::asio in asynchronous mode.
So I have the following code structure:
#include <string>
#include <iostream>
#include "boost/asio.hpp"
#include "boost/bind.hpp"
class Comm {
public:
Comm();
void SendMessage(std::string message, void (callback) (const boost::system::error_code& errorCode, std::size_t bytesTranferred));
private:
boost::asio::io_service ioService;
std::shared_ptr<boost::asio::ip::tcp::socket> mySocket;
};
Comm::Comm()
{
boost::asio::ip::tcp::resolver resolver(ioService);
boost::asio::ip::tcp::resolver::query query("192.168.0.1");
boost::asio::ip::tcp::resolver::iterator iterator = resolver.resolve(query);
mySocket->connect(*iterator);
}
void Comm::SendMessage(std::string message, void (callback) (const boost::system::error_code& errorCode, std::size_t bytesTranferred))
{
mySocket->async_send(boost::asio::buffer(message.c_str(), message.length()), boost::bind(&callback)); // <<< ERROR HERE
}
class Protocol {
public:
void SendMessage(std::string message);
void SendMessageHandler(const boost::system::error_code& errorCode, std::size_t bytesTranferred);
private:
Comm channel;
};
void Protocol::SendMessage(std::string message)
{
channel.SendMessage(message, &SendMessageHandler); // <<< ERROR HERE
}
void Protocol::SendMessageHandler(const boost::system::error_code& errorCode, std::size_t bytesTranferred)
{
if (!errorCode)
std::cout << "Send OK" << std::endl;
else
std::cout << "Send FAIL." << std::endl;
}
As shown, I need the callback of async_send to be a non-static member function of the caller's class, so I have to pass the callback function into SendMessage and use it as a parameter to async_send.
Both of these statements are not compiling. I've tried variations but I can't figure out what's going on here.
Help appreciated.
Try something like this, binding to a class method:
void Comm::SendMessage(std::string message, boost::function< void(const boost::system::error_code& , std::size_t) > callback )
{
mySocket->async_send(boost::asio::buffer(message.c_str(), message.length()), callback);
}
...//later
channel.SendMessage(message, boost::bind(&Protocol::SendMessageHandler, this, _1, _2));
Note, and more importantly, you have a number of all-but-unfixable bugs here:
You are taking std::string message by value several times - it will copy the contents.
Comm::SendMessage uses a local message object, which will be destroyed before the async operation completes (boost::asio::buffer does not copy the contents).
It will be hard to use 2 or more Comm objects, since each has its own ioService (you will not be able to run them all at the same time).
There is no shared_ptr or any other means to control object lifetime: your SendMessageHandler can be called after the Protocol has already been destroyed.
Protocol does not control write parallelism, and multiple SendMessage calls can interleave buffers being written to the socket; this can/will send complete garbage over the network.
There are more fatal/minor issues; no point in searching for them all.
Consider taking one of the asio examples as a base usage pattern.
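To make the buffer-lifetime and shared-io_service points concrete, here is a minimal sketch (not your exact classes; the connect logic is omitted): the io_service is shared by reference so several Comm objects can run on one loop, and the message is moved into a std::shared_ptr that the completion handler captures, so the buffer outlives the async operation:
#include <boost/asio.hpp>
#include <functional>
#include <memory>
#include <string>

class Comm {
public:
    using Handler = std::function<void(const boost::system::error_code&, std::size_t)>;

    explicit Comm(boost::asio::io_service& io) : socket_(io) {} // io_service shared, not owned

    void SendMessage(std::string message, Handler callback) {
        // Move the message into a shared_ptr; the handler keeps it alive
        // until the async operation completes.
        auto payload = std::make_shared<std::string>(std::move(message));
        socket_.async_send(
            boost::asio::buffer(*payload),
            [payload, callback](const boost::system::error_code& ec, std::size_t n) {
                callback(ec, n); // payload is released only after this runs
            });
    }

private:
    boost::asio::ip::tcp::socket socket_;
};
The same capture trick (or std::enable_shared_from_this on Protocol) also addresses the handler-outliving-the-object problem.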
Related
I am new to Asio, so I am a little confused about the control flow of asynchronous operations. Let's see this server:
class session
{
...
sendMsg()
{
bool idle = msgQueue.empty();
msgQueue.push(msg);
if (idle)
send();
}
send()
{
async_write(write_handler);
}
write_handler()
{
msgQueue.pop()
if (!msgQueue.empty())
send();
}
recvMsg()
{
async_read(read_handler);
}
read_handler()
{
...
recvMsg();
}
...
};
class server
{
...
start()
{
async_accept(accept_handler);
}
accept_handler()
{
auto client = make_shared<session>(move(socket));
client->recvMsg();
...
start();
}
...
};
int main()
{
io_context;
server srv(io_context, 22222);
srv.start();
io_context.run();
return 0;
}
In this case, all completion handlers accept_handler, read_handler, write_handler will be called in the thread calling io_context.run(), which is the main thread. If they run in the same thread, that means they run sequentially, not concurrently, right? And further, msgQueue will be accessed sequentially, so there is no need for a mutex on this queue, right?
I think the async_* functions tell the operating system to do some work, and this work will run simultaneously in some other threads with their own buffers. Even if several pieces of work complete at the same time (say, at one point a new connection request arrives, a new message from an existing client arrives, and sending a message to an existing client finishes), the completion handlers (accept_handler, read_handler, write_handler) will still be called sequentially. They will not run concurrently, am I correct?
Thank you so much for your help.
Yes. There's only one thread running the io_context, so all completion handlers will run on that one thread. Indeed this implies a strand (the implicit strand) of execution, namely, all handlers will execute sequentially.
See: https://www.boost.org/doc/libs/1_81_0/doc/html/boost_asio/overview/core/threads.html
and this work will run simultaneously in some other threads with their own buffers
They will run asynchronously, but not usually on another thread. There could be internal threads, or kernel threads, but also just hardware. Their "own" buffer is true, but dangerously worded, because in Asio the operations never own the buffer - you have to make sure it stays valid until the operation completes.
Note:
if there can be multiple threads running (or polling) the io service, you need to make sure access to IO objects is synchronized. In Asio this can be achieved with strand executors (see the sketch after these notes)
not all IO operations may be active in an overlapping fashion. You seem to be aware of this, given the msgQueue in your pseudo code
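As a sketch of the first note (assuming Boost 1.70+ for make_strand): four threads run the io_context, yet the strand serializes every handler touching the shared counter, so no mutex is needed:
#include <boost/asio.hpp>
#include <iostream>
#include <thread>
#include <vector>

int main() {
    boost::asio::io_context ioc;

    // All handlers posted through the strand run sequentially, even though
    // four threads are running the io_context concurrently.
    auto strand = boost::asio::make_strand(ioc);
    int counter = 0; // only ever touched from the strand

    for (int i = 0; i < 10000; ++i)
        boost::asio::post(strand, [&counter] { ++counter; });

    std::vector<std::thread> pool;
    for (int t = 0; t < 4; ++t)
        pool.emplace_back([&ioc] { ioc.run(); });
    for (auto& th : pool)
        th.join();

    std::cout << "counter = " << counter << "\n"; // always 10000
}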
Bonus
For a bonus, let me convert your code into non-pseudo code, showing an explicit strand per connection to be future-proof:
Live On Coliru
#include <boost/asio.hpp>
#include <deque>
namespace asio = boost::asio;
using asio::ip::tcp;
using boost::system::error_code;
using namespace std::placeholders;
class session : public std::enable_shared_from_this<session> {
public:
session(tcp::socket s) : s(std::move(s)) {}
void start() {
post(s.get_executor(), [self = shared_from_this()] { self->recvMsg(); });
}
void sendMsg(std::string msg) {
post(s.get_executor(), [=, self = shared_from_this()] { self->do_sendMsg(msg); });
}
private:
//... all private members on strand
void do_sendMsg(std::string msg) {
bool was_idle = msgQueue.empty();
msgQueue.push_back(std::move(msg));
if (was_idle)
do_writeloop();
}
void do_writeloop() {
if (!msgQueue.empty())
async_write(s, asio::buffer(msgQueue.front()),
std::bind(&session::write_handler, shared_from_this(), _1, _2));
}
void write_handler(error_code ec, size_t) {
if (!ec) {
msgQueue.pop_front();
do_writeloop();
}
}
void recvMsg() {
//async_read(s, asio::dynamic_buffer(incoming),
//std::bind(&session::read_handler, shared_from_this(), _1, _2));
async_read_until(s, asio::dynamic_buffer(incoming), "\n",
std::bind(&session::read_handler, shared_from_this(), _1, _2));
}
void read_handler(error_code ec, size_t n) {
if (!ec) {
auto msg = incoming.substr(0, n);
incoming.erase(0, n);
recvMsg();
sendMsg("starting job for " + msg);
sendMsg("finishing job for " + msg);
sendMsg(" -- some other message --\n");
}
}
tcp::socket s;
std::string incoming;
std::deque<std::string> msgQueue;
};
class server {
public:
server(auto ex, uint16_t port) : acc(ex, tcp::v4()) {
acc.set_option(tcp::acceptor::reuse_address(true));
acc.bind({{}, port});
acc.listen();
}
void accept_loop() {
acc.async_accept(make_strand(acc.get_executor()),
std::bind(&server::accept_handler, this, _1, _2));
}
void accept_handler(error_code ec, tcp::socket s) {
if (!ec ){
std::make_shared<session>(std::move(s))->start();
accept_loop();
}
}
private:
tcp::acceptor acc;
};
int main() {
boost::asio::io_context ioc;
server srv(ioc.get_executor(), 22222);
srv.accept_loop();
ioc.run();
}
With a sample client
for a in foo bar qux; do (sleep 1.$RANDOM; echo "command $a")|nc 127.0.0.1 22222 -w2; done
Prints
starting job for command foo
finishing job for command foo
-- some other message --
starting job for command bar
finishing job for command bar
-- some other message --
starting job for command qux
finishing job for command qux
-- some other message --
I'm trying to pass a socket along a connection handshake, and use std::bind to do so. The compile issue I'm getting (in one continuous block, which I've added spaces to for readability) is:
'std::_Bind<_Functor(_Bound_args ...)>::_Bind(_Functor&&, _Args&& ...)
[with _Args = {socket_state**, std::function<void(socket_state*)>&, boost::asio::basic_socket_acceptor<boost::asio::ip::tcp, boost::asio::executor>&, boost::asio::io_context&};
_Functor = void (*)(socket_state*, std::function<void(socket_state*)>&, boost::asio::basic_socket_acceptor<boost::asio::ip::tcp>&, boost::asio::io_context&);
_Bound_args = {socket_state**, std::function<void(socket_state*)>, boost::asio::basic_socket_acceptor<boost::asio::ip::tcp, boost::asio::executor>, boost::asio::io_context}]':
My code is below, with the error appearing to nag at the std::bind arguments given to boost::asio::acceptor.async_accept(socket, ...) and the parameters of the accept_new_client method:
void start_server(std::function<void(socket_state*)>& func, tcp::acceptor& acceptor, boost::asio::io_context& context)
{
acceptor.listen();
// Start client connection loop
networking::wait_for_client(func, acceptor, context);
}
void wait_for_client(std::function<void(socket_state*)>& func, tcp::acceptor& acceptor, boost::asio::io_context& context)
{
boost::asio::ip::tcp::socket socket(context);
// socket_state is its own class which links a particular socket with an ID and buffer data
// it also holds a function to indicate which part of the connection handshake it needs to go to next
socket_state* state = new socket_state(func, &socket);
acceptor.async_accept(socket, std::bind(&networking::accept_new_client, state, func, acceptor, context));
}
void accept_new_client(socket_state* state, std::function<void(socket_state*)>& func, tcp::acceptor& acceptor, boost::asio::io_context& context)
{
state->on_network_action(state);
wait_for_client(func, acceptor, context);
}
It seems like they would match, but you can see in the error that my std::bind arguments are socket_state** instead of socket_state*, and boost::asio::basic_socket_acceptor<boost::asio::ip::tcp, boost::asio::executor>& instead of boost::asio::basic_socket_acceptor<boost::asio::ip::tcp>&.
I have no idea what the "with _Args" vs. "_Bound_args" distinction is either.
There are many problems in this code.
The shared pointer seems to be at the wrong level of abstraction. You would want the entire "connection" type to be of shared lifetime, not just the socket. In your case, socket_state is a good candidate.
Regardless, your socket is a local variable that you pass a stale pointer to inside socket_state. socket_state looks like it will necessarily be leaked.
So that will never work already.
Next up, the bind is binding all parameters eagerly, leaving a nullary signature. That's not what any overload accepts [no pun intended]. You need to match
AcceptHandler or
MoveAcceptHandler
Let's go for AcceptHandler. Also, let's not bind all the redundant args (func was already in the socket_state, remember; io_context is overshared, etc.).
In general it looks like you need to develop confidence in knowing where your state is. E.g. this line is symptomatic:
state->on_network_action(state);
Since on_network_action is a member function of socket_state, there should never be any need to pass the state as an argument (it will be this implicitly). The same goes for acceptor and context in all occurrences.
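For contrast, here is a minimal sketch of the MoveAcceptHandler route, where the handler receives the connected socket by value, so no socket member is needed at all (the listener type and port are illustrative):
#include <boost/asio.hpp>
#include <iostream>

using boost::asio::ip::tcp;

struct listener {
    tcp::acceptor acceptor;

    explicit listener(boost::asio::io_context& ctx)
        : acceptor(ctx, {{}, 8989}) {} // opens, binds and listens in one go

    void wait_for_client() {
        // MoveAcceptHandler: the freshly connected socket arrives by value.
        acceptor.async_accept(
            [this](boost::system::error_code ec, tcp::socket peer) {
                if (!ec) {
                    std::cout << "accepted " << peer.remote_endpoint() << std::endl;
                    // hand peer off to a session/socket_state object here
                    wait_for_client(); // keep accepting
                }
            });
    }
};

int main() {
    boost::asio::io_context ctx;
    listener l(ctx);
    l.wait_for_client();
    ctx.run();
}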
Demo
Fixing all the above, using std::shared_ptr and bind (which you already did), notice the placeholders::_1 to accept the error_code, etc.
Live On Coliru
#include <boost/asio.hpp>
#include <memory>
#include <iostream>
namespace ba = boost::asio;
using namespace std::chrono_literals;
using boost::system::error_code;
using ba::ip::tcp;
struct socket_state;
using Callback = std::function<void(socket_state&)>;
struct socket_state : std::enable_shared_from_this<socket_state> {
Callback _callback;
tcp::socket _socket;
template <typename Executor>
socket_state(Callback cb, Executor ex) : _callback(cb)
, _socket(ex)
{
}
void on_network_action() {
std::cout << __PRETTY_FUNCTION__ << std::endl;
}
};
struct networking {
using StatePtr = std::shared_ptr<socket_state>;
explicit networking(ba::io_context& ctx, Callback callback)
: context(ctx)
, callback(callback)
{
}
ba::io_context& context;
tcp::acceptor acceptor {context, {{}, 8989}};
Callback callback;
void start_server()
{
std::cout << "start_server" << std::endl;
acceptor.listen();
wait_for_client(); // Start client connection loop
}
void stop_server() {
std::cout << "stop_server" << std::endl;
acceptor.cancel();
acceptor.close();
}
void wait_for_client()
{
std::cout << "wait_for_client" << std::endl;
// socket_state is its own class which links a particular socket with
// an ID and buffer data it also holds a function to indicate which
// part of the connection handshake it needs to go to next
auto state =
std::make_shared<socket_state>(callback, context.get_executor());
acceptor.async_accept(state->_socket,
std::bind(&networking::accept_new_client, this,
std::placeholders::_1, state));
}
void accept_new_client(error_code ec, StatePtr state)
{
if (ec) {
std::cout << "accept_new_client " << ec.message() << std::endl;
return;
}
std::cout << "accept_new_client " << state->_socket.remote_endpoint()
<< std::endl;
state->on_network_action();
wait_for_client();
}
};
int main() {
ba::io_context ctx;
networking server(ctx, [](socket_state&) {
std::cout << "This is our callback" << std::endl;
});
server.start_server();
ctx.run_for(5s);
server.stop_server();
ctx.run();
}
With some random connections:
start_server
wait_for_client
accept_new_client 127.0.0.1:54376
void socket_state::on_network_action()
wait_for_client
accept_new_client 127.0.0.1:54378
void socket_state::on_network_action()
wait_for_client
accept_new_client 127.0.0.1:54380
void socket_state::on_network_action()
wait_for_client
accept_new_client 127.0.0.1:54382
void socket_state::on_network_action()
wait_for_client
stop_server
accept_new_client Operation canceled
Note that this version makes the comments
// socket_state is its own class which links a particular socket with
// an ID and buffer data it also holds a function to indicate which
// part of the connection handshake it needs to go to next
no longer complete lies :)
I'm trying to implement a single C++ application that holds two processing loops. Currently the first processing loop (boost's io_service::run) blocks the execution of the second one.
Approaches using threads or std::async failed (I don't have experience/background in multi-threading).
Is there an elegant way to run io_service::run in another thread, while still executing the callbacks upon incoming UDP datagrams?
Main-File:
class Foo
{
public:
Foo();
void callback(const int&);
private:
// ... (hopefully) non-relevant stuff...
};
int main()
{
Foo foo_obj;
// I need to run this function (blocking), but the constructor blocks too (io_service::run())
run();
return 0;
}
Foo::Foo(){
boost::asio::io_service io;
UDP_Server UDP_Server(io);
// Set function to be called on received message
UDP_Server.add_handler(std::bind(&Foo::callback, this, std::placeholders::_1));
// This function should be non-blocking
// -> tried several things, like threads, async, ... (unfortunately not successful)
io.run();
}
// realization of callback function here (see class definition)
Included custom "library":
class UDP_Server
{
public:
UDP_Server(boost::asio::io_service&);
void add_handler(std::function<void(int)>);
private:
// Function handle
std::function<void(int)> callbackFunctionHandle;
// Functions
void start_receive();
void handle_receive(const boost::system::error_code&, std::size_t);
// ... (hopefully) non-relevant stuff...
};
// Constructor
UDP_Server::UDP_Server(boost::asio::io_service& io_service)
: socket_(io_service, udp::endpoint(udp::v4(), UDP_PORT)){
}
// Store a callback function (class foo) to be called whenever a message is received
void UDP_Server::add_handler(std::function<void(int)> callbackFunction){
try
{
callbackFunctionHandle = callbackFunction;
start_receive();
}
catch (std::exception& e)
{
std::cerr << e.what() << std::endl;
}
}
// Async receive
void UDP_Server::start_receive()
{
socket_.async_receive_from(
boost::asio::buffer(recv_buffer_), remote_endpoint_,
boost::bind(&UDP_Server::handle_receive, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
// When message is received
void UDP_Server::handle_receive(const boost::system::error_code& error,
std::size_t bytes_transferred)
{
if (!error || error == boost::asio::error::message_size)
{
// ... do smth. with the received data ...
// Call specified function in Foo class
callbackFunctionHandle(some_integer);
start_receive();
}
else{
// ... handle errors
}
}
Have a look at what they did here:
boost::asio::io_service io_service;
/** your code here **/
boost::thread(boost::bind(&boost::asio::io_service::run, &io_service));
ros::spin();
So you basically start the blocking call to io_service::run() in a separate thread from ros::spin().
If you pin that thread to a single CPU core (in order not to waste two cores on waiting), the scheduler should handle the rest.
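Translated to your case, a minimal sketch with std::thread and a work guard (assuming Boost 1.66+ for make_work_guard; on older versions io_service::work plays the same role), so run() keeps going even while no async operation happens to be pending:
#include <boost/asio.hpp>
#include <thread>

int main() {
    boost::asio::io_service io;

    // Keeps run() from returning while the loop is momentarily out of work.
    auto guard = boost::asio::make_work_guard(io);

    // The event loop runs on its own thread...
    std::thread io_thread([&io] { io.run(); });

    // ...so the main thread is free for the second, blocking loop:
    // run();  // <- the blocking call from your main()

    guard.reset();    // let run() finish once pending work drains
    io_thread.join();
}
In your code that means Foo should keep the io_service and UDP_Server as members and start the thread, instead of calling io.run() in the constructor; as written, both io and UDP_Server are destroyed as soon as the constructor returns.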
All of the boost examples work until I try to implement the exact same thing myself. I'm starting to think there must be an order of creation or io_service ownership for things to block properly.
My server structure is as follows:
class Server {
public:
Server(unsigned short port)
: ioService_(), acceptor_(ioService_), socket_(ioService_) {
acceptClient(); // begin async accept
}
void start(); // runs ioService_.run();
private:
void acceptClient();
asio::io_service ioService_;
tcp::acceptor acceptor_;
tcp::socket socket_;
Cluster cluster_; // essentially just a connection manager
};
The acceptClient() function works like this:
void Server::acceptClient() {
acceptor_.async_accept(socket_, [this](const system::error_code& e){
if(!acceptor_.is_open()) return;
if(!e) {
cluster_.add(std::make_shared<Client>(std::move(socket_), cluster_));
}
acceptClient();
});
}
I'm not sure if you need an outline of the Client class since the server should run and block even with no clients.
The creation of the server goes as follows:
try {
Server server(port);
server.start(); // this calls the server's member io_service's run();
} catch (const std::exception& e) {
std::cerr << e.what() << std::endl;
}
The problem is the server instantly closes after that call. The program starts and then exits with no errors. Is there something that io_service.run() relies on, e.g. some form of asynchronous link that I've forgotten? I learned this design from boost asio's http server example, but I've reworked it to fit my basic purposes. The confusing part is that some boost examples establish a new member tcp::socket in the client itself rather than moving the server's socket into the client. They also tend to use boost's versions of std::bind instead of lambdas, etc.
So, can anyone give me a brief rundown on how to create a basic, stripped-down async server? The boost examples are really confusing, since the code conventions differ per example. I was wondering if anybody noticed anything straight away that would cause my server to instantly close.
Thanks.
I tested async_accept with the following code, which sends Hello to clients connecting to the port. At the very least, the creation of the endpoint object and the acceptor.open(endpoint.protocol()), acceptor.bind(endpoint) and acceptor.listen() calls seem to be missing from your code.
#include <boost/bind.hpp>
#include <boost/asio.hpp>
#include <string>
using namespace boost::asio;
void handle_accept(
io_service * ios,
ip::tcp::acceptor * acceptor,
ip::tcp::socket * socket,
const boost::system::error_code & error)
{
if (!error) {
std::string msg("Hello\n");
socket->send(buffer(msg, msg.length()));
ip::tcp::socket * temp = new ip::tcp::socket(*ios);
acceptor->async_accept(*temp,
boost::bind(handle_accept,
ios, acceptor, temp,
placeholders::error));
}
}
int main(void)
{
io_service ios;
ip::tcp::socket socket(ios);
ip::tcp::acceptor acceptor(ios);
ip::tcp::endpoint endpoint(ip::tcp::v4(), 1500);
acceptor.open(endpoint.protocol());
acceptor.set_option(ip::tcp::acceptor::reuse_address(true));
acceptor.bind(endpoint);
acceptor.listen();
acceptor.async_accept(socket,
boost::bind(handle_accept,
&ios, &acceptor, &socket,
placeholders::error));
ios.run();
/*
acceptor.accept(socket);
std::string msg("Hello\n");
socket.send(buffer(msg, msg.length()));
*/
}
A version with a Server class and a lambda as an argument for async_accept:
#include <boost/asio.hpp>
#include <functional>
#include <string>
using namespace boost::asio;
class Server {
public:
Server(unsigned short port) : ios(), acceptor(ios), socket(ios),
endpoint(ip::tcp::v4(), port) {
acceptor.open(endpoint.protocol());
acceptor.set_option(ip::tcp::acceptor::reuse_address(true));
acceptor.bind(endpoint);
acceptor.listen();
nsocket = &socket;
}
void run() {
std::function<void (const boost::system::error_code &)> f;
f = [&f, this] (const boost::system::error_code & error) {
if (!error) {
std::string msg("Hello\n");
nsocket->send(buffer(msg, msg.length()));
nsocket = new ip::tcp::socket(ios);
acceptor.async_accept(*nsocket, f);
}
};
acceptor.async_accept(socket, f);
ios.run();
}
protected:
io_service ios;
ip::tcp::acceptor acceptor;
ip::tcp::socket socket;
ip::tcp::endpoint endpoint;
ip::tcp::socket * nsocket;
};
int main(void)
{
Server srv(1500);
srv.run();
}
I am a bit lost in a tangle of libraries which I have to tie together. I need help introducing some timers into this construct.
I have the following:
com.cpp which has main and includes com.hpp
com.hpp which includes a host.h and needed boost includes and defines a class comClient
host.c with included host.h
wrapper.cpp with included com.hpp and some needed boost includes
Now, my com.cpp creates a comClient and uses it for async communication on the COM port, using boost::asio::serial_port and boost::asio::io_service.
I need to work with some timers, in order to catch when a packet took too long to transmit.
When creating an instance of comClient, the packet timer should be initialised.
Using async_read_some in a private function of comClient, I call a private handler of comClient; this handler calls a function in host.c, which in turn calls a function in wrapper.cpp to restart the timer.
This is the function to init the timer:
//wrapper.cpp
void IniPacketTimer(void *pCHandle){
boost::asio::io_service io;
boost::asio::deadline_timer t(io, boost::posix_time::milliseconds(25));
t.async_wait(&hostOnTimeout(pCHandle));
io.run();
}
This would be the command chain in short:
//comClient.cpp
main{
comClient cc();
}
//comClient.hpp
class comClient(boost::asio::io_service& io_service){
comClient(){
hostInit();
aread();
}
private:
aread( call aread_done)
areaddone(call hostNewData())
}
//host.c
hostInit(){
IniPacketTimer()
}
hostNewData(){
resetTimer
}
//wrapper.cpp
resetTimer(){
t.expires_from_now
}
Questions:
How can I provide an asynchronous timer which does not affect the async read/write operations on my serial port, but triggers execution of a function when the deadline is hit?
Should I use the already existing io_service, or is it OK to just create another one?
Why do I get an error C2102 ('&' requires l-value) for my t.async_wait line?
Your problem is not clear, and since you don't post real code it is quite hard to guess what it is.
In particular your threading is not clear, which is very important for asio.
Below is an example that will compile but not run as-is. I hope it gives you a hint on how to proceed.
It will open a serial port and a timer; whenever the timer expires it will start a new one. It is a stripped version of code I used some time ago, so maybe it will help you.
#include <boost/asio.hpp>
#include <boost/asio/serial_port.hpp>
#include <boost/function.hpp>
#include <boost/bind.hpp>
#include <vector>
class SerialCommunication
{
public:
SerialCommunication(boost::asio::io_service& io_service, const std::string& serialPort)
: m_io_service(io_service)
, m_serialPort(m_io_service)
, m_timeoutTimer(m_io_service, boost::posix_time::milliseconds(5))
, m_header(3)    // give the buffers real storage: .back() on an empty vector is undefined
, m_payload(256)
{
configureSerialPort(serialPort);
}
void configureSerialPort(const std::string& serialPort)
{
if(m_serialPort.is_open())
{
m_serialPort.close();
m_timeoutTimer.cancel();
}
boost::system::error_code ec;
m_serialPort.open(serialPort, ec);
if(m_serialPort.is_open())
{
// start Timer
m_timeoutTimer.async_wait(boost::bind(&SerialCommunication::TimerExpired, this, _1));
header_sync();
}
}
void header_sync()
{
m_serialPort.async_read_some(boost::asio::buffer(&m_header.back(), 1),
boost::bind(&SerialCommunication::header_sync_complete, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
void header_sync_complete(
const boost::system::error_code& error, size_t bytes_transferred)
{
// stripped
read_payload(&m_payload[0], 0);
}
void read_payload(uint8_t* buffer, uint8_t length)
{
m_serialPort.async_read_some(boost::asio::buffer(buffer, length),
boost::bind(&SerialCommunication::payload_read_complete, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
void payload_read_complete(
const boost::system::error_code& error, size_t bytes_transferred)
{
// stripped
// timer cancel and reset
m_timeoutTimer.cancel();
m_timeoutTimer.expires_from_now(boost::posix_time::milliseconds(5)); // deadline_timer runs on universal_time, so local_time arithmetic would skew the deadline
m_timeoutTimer.async_wait(boost::bind(&SerialCommunication::TimerExpired, this, _1));
memset(&m_header[0], 0, 3);
header_sync();
}
void TimerExpired(const boost::system::error_code& e)
{
if (e == boost::asio::error::operation_aborted)
return; // the timer was cancelled or reset; the next wait is scheduled elsewhere
m_timeoutTimer.expires_at(m_timeoutTimer.expires_at() + boost::posix_time::milliseconds(5));
m_timeoutTimer.async_wait(boost::bind(&SerialCommunication::TimerExpired, this, _1));
}
boost::asio::io_service& m_io_service;
boost::asio::serial_port m_serialPort;
boost::asio::deadline_timer m_timeoutTimer; // declared after the port so the initialiser list order matches
std::vector<uint8_t> m_header;
std::vector<uint8_t> m_payload;
};
int main()
{
boost::asio::io_service io_service;
SerialCommunication cc(io_service, "/dev/ttyS0");
io_service.run();
return 0;
}
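As for your third question: the C2102 error comes from &hostOnTimeout(pCHandle), which takes the address of the result of calling hostOnTimeout rather than passing a callable. A sketch of the fix (hostOnTimeout's signature is assumed here; boost::bind's result simply ignores the error_code that async_wait passes unless you bind the placeholder for it):
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <iostream>

// Hypothetical stand-in for the callback declared in host.h.
void hostOnTimeout(void* pCHandle) {
    std::cout << "packet timer expired" << std::endl;
}

int main() {
    boost::asio::io_service io;
    boost::asio::deadline_timer t(io, boost::posix_time::milliseconds(25));
    void* pCHandle = 0; // stands in for the real handle

    // Pass a callable, not the result of a call. The bound object ignores
    // the error_code that async_wait supplies; bind
    // boost::asio::placeholders::error as well if you want to inspect it.
    t.async_wait(boost::bind(&hostOnTimeout, pCHandle));

    io.run();
}
And regarding your second question: as the example above shows, the timer and the serial port can (and should) share the one io_service; a second io_service needs its own run() call and buys you nothing here.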