Current Scheme
I am developing a serial port routine that regards the current receive transfer as complete if no new data is received for 25 milliseconds. I start the timer on the first read_handler (Boost ASIO callback method) call. For every new read_handler call, I cancel the asynchronous operation waiting on the timer and create a new asynchronous wait on the timer.
Problem
The problem I am facing is that, randomly, a receive transfer that was supposed to be one transfer is treated as two separate transfers, because the receive_timeout event (receive_timeout_handler) is triggered (called) multiple times.
I'm not sure whether this is because of my incorrect implementation/usage of the Boost ASIO system_timer or due to a driver issue in my USB-to-serial converter.
I'm currently using an FT4232 module (containing 4 UARTs/serial ports) to test my routines, whereby I send data (a 4 KB text file) from UART1 and receive it on UART0.
I expect the serial port class to signal the main thread only after all 4 KB of data have been received; however, sometimes this one 4 KB transfer is signaled 2-3 times.
Code:
class SerialPort
{
public:
SerialPort() : io(), port(io), receive_timeout_timer(io) {}
bool open_port(void);
bool read_async(std::int32_t read_timeout = -1);
void read_handler(const boost::system::error_code& error, std::size_t bytes_transferred);
void receive_timeout_handler(const boost::system::error_code& error);
private:
static constexpr std::int32_t bulk_data_receive_complete = 25; // idle timeout in milliseconds
boost::asio::io_context io;
boost::asio::serial_port port;
boost::asio::system_timer receive_timeout_timer;
std::array<std::byte, 8096> read_byte_buffer;
std::string received_data;
std::int32_t read_timeout = bulk_data_receive_complete;
};
bool SerialPort::open_port(void)
{
try
{
this->port.open("COM3");
return true;
}
catch (const std::exception& ex)
{
}
return false;
}
bool SerialPort::read_async(std::int32_t read_timeout)
{
try
{
this->read_byte_buffer.fill(static_cast<std::byte>(0)); //Clear Buffer
if (read_timeout not_eq -1)
{
this->read_timeout = read_timeout;//If read_timeout is not set to ignore_timeout, update the read_timeout else use old read_timeout
}
this->port.async_read_some(
boost::asio::buffer(
this->read_byte_buffer.data(),
this->read_byte_buffer.size()
),
boost::bind(
&SerialPort::read_handler,
this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred
)
);
return true;
}
catch (const std::exception& ex)
{
return false;
}
}
void SerialPort::read_handler(const boost::system::error_code& error, std::size_t bytes_transferred)
{
std::string temporary_receive_data;
try
{
if (error not_eq boost::system::errc::success) //Error in serial port read
{
return;
}
std::transform(this->read_byte_buffer.begin(), this->read_byte_buffer.begin() + bytes_transferred,
std::back_inserter(temporary_receive_data), [](std::byte character) {
return static_cast<char>(character);
}
);
this->read_async(); //Again Start the read operation
this->received_data += temporary_receive_data;
this->receive_timeout_timer.cancel(); // Cancel existing timers if any are running
this->receive_timeout_timer.expires_after(boost::asio::chrono::milliseconds(SerialPort::bulk_data_receive_complete)); // Reset timer to current timestamp + 25 milliseconds
this->receive_timeout_timer.async_wait(boost::bind(&SerialPort::receive_timeout_handler, this, boost::asio::placeholders::error));
}
catch (const std::exception& ex)
{
}
}
void SerialPort::receive_timeout_handler(const boost::system::error_code& error)
{
try
{
if (error not_eq boost::system::errc::success) //Timer was cancelled or another error occurred
{
return;
}
// this->signal(this->port_number, SerialPortEvents::read_data, this->received_data); //Signal to main thread that data has been received
}
catch (const std::exception& ex)
{
}
}
read_timer.cancel(); // Cancel existing timers if any are running
read_timer.expires_after(
SerialPort::bulk_data_receive_complete); // Reset timer to current timestamp + 25 milliseconds
Here the cancel is redundant, because setting the expiration cancels any pending wait.
You reschedule the timer regardless of whether it ran out. Your code misses the possibility that both the read and the timer could have completed successfully. In that case your main thread gets signaled multiple times, even though the idle period only "nearly" exceeded 25 ms.
You would expect to see partially duplicated data, then, because received_data isn't cleared.
To clearly see what is going on, build your code with -DBOOST_ASIO_ENABLE_HANDLER_TRACKING=1 and run the output through handler_viz.pl (see also Cancelling boost asio deadline timer safely).
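If you'd rather enable handler tracking in code than via the compiler flag, a minimal sketch (the macro must be defined before any Asio header is included, consistently in every translation unit; the tracking output is written to stderr):
// Equivalent to compiling with -DBOOST_ASIO_ENABLE_HANDLER_TRACKING=1
#define BOOST_ASIO_ENABLE_HANDLER_TRACKING
#include <boost/asio.hpp>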
Suggestions
You could probably avoid the double firing by being explicit about the flow:
To achieve that, only cancel the read from the timeout handler:
void SerialPort::receive_timeout_handler(error_code ec) {
if (!ec.failed()) {
port.cancel(ec);
std::cerr << "read canceled: " << ec.message() << std::endl;
}
}
Then you could move the signal to the read-handler, where you expect the cancellation:
void SerialPort::read_handler(error_code ec, size_t bytes_transferred) {
if (ec == asio::error::operation_aborted) {
signal(port_number, SerialPortEvents::read_data, std::move(received_data));
} else if (ec.failed()) {
std::cerr << "SerialPort read: " << ec.message() << std::endl;
} else {
copy_n(begin(read_buffer), bytes_transferred, back_inserter(received_data));
read_timer.expires_after(bulk_data_receive_complete); // reset timer
read_timer.async_wait(boost::bind(&SerialPort::receive_timeout_handler, this, ph::error));
start_async_read(); // continue reading
}
}
To be completely fool-proof, you can check that the timer wasn't actually expired even on successful read (see again Cancelling boost asio deadline timer safely).
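As a minimal sketch (assuming the member names used in the demo below), the success branch of the read handler could compare the timer's expiry with the current time before re-arming it:
// Inside the success branch of read_handler: if the deadline already passed while
// this completion was queued, treat the transfer as complete instead of re-arming.
if (read_timer.expiry() <= std::chrono::system_clock::now()) {
    signal(port_number, SerialPortEvents::read_data, std::move(received_data));
    return; // don't restart the read; the pending timer wait becomes a no-op
}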
Intuitively, I think it makes even more sense to schedule the timer from start_async_read.
ASIDE #1
Currently your code completely ignores read_timeout (even aside from the unnecessary confusion between the argument read_timeout and the member read_timeout). It is unclear to me whether you want the read_timeout override argument to "stick" for the entire chain of read operations.
If you want it to stick, change the
start_async_read(bulk_data_receive_complete); // continue reading
call to
start_async_read(); // continue reading
below. I kept it as it is because it allows for easier timing demonstrations.
ASIDE #2
I've undone the exception swallowing code. Instead of just squashing all exceptions into a boolean (which you'll then check to change control flow), use the native language feature to change the control flow, retaining error information.
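For illustration only, a sketch (not taken verbatim from the demo below) of what a call site can look like when SerialPort simply lets exceptions propagate:
// Errors surface here with their original message, instead of being collapsed
// into a bool inside SerialPort.
int main() try {
    SerialPort sp;
    sp.open_port("COM3");
    sp.start_async_read();
    sp.run();
} catch (std::exception const& e) {
    std::cerr << "SerialPort error: " << e.what() << std::endl;
    return 1;
}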
Full Demo
Live On Coliru
#include <boost/asio.hpp>
#include <boost/bind/bind.hpp>
#include <boost/signals2.hpp>
#include <iomanip>
#include <iostream>
namespace asio = boost::asio;
namespace ph = boost::asio::placeholders;
using boost::system::error_code;
using namespace std::chrono_literals;
enum class SerialPortEvents { read_data };
class SerialPort {
using duration = std::chrono::system_clock::duration;
static constexpr duration //
ignore_timeout = duration::min(), // e.g. -0x8000000000000000ns
bulk_data_receive_complete = 25ms;
public:
SerialPort() : io(), port(io), read_timer(io) {}
void open_port(std::string device);
void start_async_read(duration read_timeout = ignore_timeout);
void run() {
if (io.stopped())
io.restart();
io.run();
}
boost::signals2::signal<void(unsigned, SerialPortEvents, std::string)> signal;
private:
void read_handler(error_code ec, size_t bytes_transferred);
void receive_timeout_handler(error_code ec);
duration read_timeout = bulk_data_receive_complete;
asio::io_context io;
asio::serial_port port;
asio::system_timer read_timer;
std::array<char, 8096> read_buffer;
std::string received_data;
// TODO
unsigned const port_number = 0;
};
void SerialPort::open_port(std::string device) { port.open(device); }
void SerialPort::start_async_read(duration timeout_override) {
read_buffer.fill(0); // Clear Buffer (TODO redundant)
if (timeout_override != ignore_timeout)
read_timeout = timeout_override;
std::cerr << "Expiry: " << read_timeout/1.s << "s from now" << std::endl;
read_timer.expires_after(read_timeout); // reset timer
read_timer.async_wait(boost::bind(&SerialPort::receive_timeout_handler, this, ph::error));
port.async_read_some( //
boost::asio::buffer(read_buffer),
boost::bind(&SerialPort::read_handler, this, ph::error, ph::bytes_transferred));
}
void SerialPort::read_handler(error_code ec, size_t bytes_transferred) {
if (ec == asio::error::operation_aborted) {
signal(port_number, SerialPortEvents::read_data, std::move(received_data));
} else if (ec.failed()) {
std::cerr << "SerialPort read: " << ec.message() << std::endl;
} else {
copy_n(begin(read_buffer), bytes_transferred, back_inserter(received_data));
start_async_read(bulk_data_receive_complete); // continue reading
}
}
void SerialPort::receive_timeout_handler(error_code ec) {
if (!ec.failed()) {
port.cancel(ec);
std::cerr << "read canceled: " << ec.message() << std::endl;
}
}
int main(int argc, char** argv) {
SerialPort sp;
sp.open_port(argc > 1 ? argv[1] : "COM3");
int count = 0;
sp.signal.connect([&count](unsigned port, SerialPortEvents event, std::string data) {
assert(port == 0);
assert(event == SerialPortEvents::read_data);
std::cout << "data #" << ++count << ": " << std::quoted(data) << "\n----" << std::endl;
});
sp.start_async_read(10s);
sp.run();
sp.start_async_read();
sp.run();
}
Testing with
socat -d -d pty,raw,echo=0 pty,raw,echo=0
./build/sotest /dev/pts/7
And various device emulations:
for a in hello world bye world; do sleep .01; echo "$a"; done >> /dev/pts/9
for a in hello world bye world; do sleep .025; echo "$a"; done >> /dev/pts/9
for a in hello world bye world; do sleep 1.0; echo "$a"; done >> /dev/pts/9
cat /etc/dictionaries-common/words >> /dev/pts/9
You can see all the outputs match the expectations. With sleep .025 you can see the input split over two read operations, but never with repeated data.
Handler tracking graphs for the various runs confirm this. The last one (literally throwing the dictionary at it) is way too big to be useful: https://imgur.com/a/I5lHnCV
Simplifying Notes
Note that your entire SerialPort re-implements a composed read operation. You might simplify all that with asio::async_read_until and a MatchCondition.
This has the benefit of allowing directly asio::dynamic_buffer(received_data) as well.
Here's a simpler version that doesn't use a timer, but instead updates the deadline inside the manual run() loop.
It uses a single composed read operation with a MatchCondition that checks when the connection is "idle".
Live On Coliru
#include <boost/asio.hpp>
#include <iomanip>
#include <iostream>
namespace asio = boost::asio;
using namespace std::chrono_literals;
enum class SerialPortEvents { read_data };
class SerialPort {
using Clock = std::chrono::system_clock;
using Duration = Clock::duration;
static constexpr Duration default_idle_timeout = 25ms;
public:
void open_port(std::string device);
void read_till_idle(Duration idle_timeout = default_idle_timeout);
std::function<void(unsigned, SerialPortEvents, std::string)> signal;
private:
asio::io_context io;
asio::serial_port port{io};
std::string received_data;
};
void SerialPort::open_port(std::string device) { port.open(device); }
namespace {
// Asio requires nested result_type to be MatchCondition... :(
template <typename F> struct AsMatchCondition {
using CBT = boost::asio::dynamic_string_buffer<char, std::char_traits<char>,
std::allocator<char>>::const_buffers_type;
using It = asio::buffers_iterator<CBT>;
using result_type = std::pair<It, bool>;
F _f;
AsMatchCondition(F f) : _f(std::move(f)) {}
auto operator()(It f, It l) const { return _f(f, l); }
};
}
void SerialPort::read_till_idle(Duration idle_timeout) {
if (io.stopped())
io.restart();
using T = Clock::time_point;
T start = Clock::now();
auto current_timeout = idle_timeout;
auto deadline = T::max();
auto is_idle = [&](T& new_now) { // atomic w.r.t. a new_now
new_now = Clock::now();
return new_now >= deadline;
};
auto update = [&](int invocation) {
auto previous = start;
bool idle = is_idle(start);
if (invocation > 0) {
current_timeout = default_idle_timeout; // or not, your choice
std::cerr << " [update deadline for current timeout:" << current_timeout / 1ms << "ms after "
<< (start - previous) / 1ms << "ms]" << std::endl;
}
deadline = start + current_timeout;
return idle;
};
int invocation = 0; // to avoid updating current_timeout on first invocation
auto condition = AsMatchCondition([&](auto, auto e) { return std::pair(e, update(invocation++)); });
async_read_until(port, asio::dynamic_buffer(received_data), condition,
[this](auto...) { signal(0, SerialPortEvents::read_data, std::move(received_data)); });
for (T t; !io.stopped(); io.run_for(5ms))
if (is_idle(t))
port.cancel();
}
void data_received(unsigned port, SerialPortEvents event, std::string data) {
static int count = 0;
assert(port == 0);
assert(event == SerialPortEvents::read_data);
std::cout << "data #" << ++count << ": " << std::quoted(data) << std::endl;
}
int main(int argc, char** argv) {
SerialPort sp;
sp.signal = data_received;
sp.open_port(argc > 1 ? argv[1] : "COM3");
sp.read_till_idle(3s);
}
I want my TCP client to connect to multiple servers (each server has a separate IP and port).
I am using async_connect. I can successfully connect to different servers, but the read/write fails since the server's corresponding tcp::socket object is not available.
Can you please suggest how I could store each server's socket in some data structure? I tried saving the IP and socket in a std::map, but the first server's socket object is not available in memory and the app crashes. I tried making the socket static, but it does not help either.
Please help me!!
Also, I hope I am logically correct in making a single TCP client connect to 2 different servers.
I am sharing below the simplified header & cpp file.
class TCPClient: public Socket
{
public:
TCPClient(boost::asio::io_service& io_service,
boost::asio::ip::tcp::endpoint ep);
virtual ~TCPClient();
void Connect(boost::asio::ip::tcp::endpoint ep, boost::asio::io_service &ioService, void (Comm::*SaveClientDetails)(std::string,void*),
void *pClassInstance);
void TransmitData(const INT8 *pi8Buffer);
void HandleWrite(const boost::system::error_code& err,
size_t szBytesTransferred);
void HandleConnect(const boost::system::error_code &err,
void (Comm::*SaveClientDetails)(std::string,void*),
void *pClassInstance, std::string sIPAddr);
static tcp::socket* CreateSocket(boost::asio::io_service &ioService)
{ return new tcp::socket(ioService); }
static tcp::socket *mSocket;
private:
std::string sMsgRead;
INT8 i8Data[MAX_BUFFER_LENGTH];
std::string sMsg;
boost::asio::deadline_timer mTimer;
};
tcp::socket* TCPClient::mSocket = NULL;
TCPClient::TCPClient(boost::asio::io_service &ioService,
boost::asio::ip::tcp::endpoint ep) :
mTimer(ioService)
{
}
void TCPClient::Connect(boost::asio::ip::tcp::endpoint ep,
boost::asio::io_service &ioService,
void (Comm::*SaveServerDetails)(std::string,void*),
void *pClassInstance)
{
mSocket = CreateSocket(ioService);
std::string sIPAddr = ep.address().to_string();
/* To send connection request to server*/
mSocket->async_connect(ep,boost::bind(&TCPClient::HandleConnect, this,
boost::asio::placeholders::error, SaveServerDetails,
pClassInstance, sIPAddr));
}
void TCPClient::HandleConnect(const boost::system::error_code &err,
void (Comm::*SaveServerDetails)(std::string,void*),
void *pClassInstance, std::string sIPAddr)
{
if (!err)
{
Comm* pInstance = (Comm*) pClassInstance;
if (NULL == pInstance)
{
return;
}
(pInstance->*SaveServerDetails)(sIPAddr,(void*)(mSocket));
}
else
{
return;
}
}
void TCPClient::TransmitData(const INT8 *pi8Buffer)
{
sMsg = pi8Buffer;
if (sMsg.empty())
{
return;
}
mSocket->async_write_some(boost::asio::buffer(sMsg, MAX_BUFFER_LENGTH),
boost::bind(&TCPClient::HandleWrite, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
void TCPClient::HandleWrite(const boost::system::error_code &err,
size_t szBytesTransferred)
{
if (!err)
{
std::cout<< "Data written to TCP Client port! ";
}
else
{
return;
}
}
You seem to know your problem: the socket object is unavailable. That's 100% by choice. You chose to make it static, so of course there will be only one instance.
Also, I hope I am logically correct in making a single TCP client connect to 2 different servers.
It sounds wrong to me. You can redefine "client" to mean something having multiple TCP connections. In that case, at the very minimum you expect a container of tcp::socket objects to hold them (or, you know, a Connection object that contains the tcp::socket).
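A minimal sketch of that shape, with illustrative names only (the demo below fleshes it out properly):
struct Connection {
    tcp::socket socket;             // every connection owns its own socket
    std::deque<std::string> outbox; // plus any per-connection state
};

class TCPClient {
    std::vector<std::shared_ptr<Connection>> connections_; // one entry per server
};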
BONUS: Demo
For fun and glory, here's what I think you should be looking for.
Notes:
no more new, delete
no more void*, reinterpret casts (!!!)
less manual buffer sizing/handling
no more bind
buffer lifetimes are guaranteed for the corresponding async operations
message queues per connection
connections are on a strand for proper synchronized access to shared state in multi-threading environments
I added a connection max-idle-time timeout; it also limits the time taken for any async operation (connect/write). I assumed you wanted something like this because (a) it's common and (b) there was an unused deadline_timer in your question code.
Note the technique of using shared pointers to have Comm manage its own lifetime. Note also that _socket and _outbox are owned by the individual Comm instance.
Live On Coliru
#include <boost/asio.hpp>
#include <deque>
#include <iostream>
using INT8 = char;
using boost::asio::ip::tcp;
using boost::system::error_code;
//using SaveFunc = std::function<void(std::string, void*)>; // TODO abolish void*
using namespace std::chrono_literals;
using duration = std::chrono::high_resolution_clock::duration;
static inline constexpr size_t MAX_BUFFER_LENGTH = 1024;
using Handle = std::weak_ptr<class Comm>;
class Comm : public std::enable_shared_from_this<Comm> {
public:
template <typename Executor>
explicit Comm(Executor ex, tcp::endpoint ep, // ex assumed to be strand
duration max_idle)
: _ep(ep)
, _max_idle(max_idle)
, _socket{ex}
, _timer{_socket.get_executor()}
{
}
~Comm() { std::cerr << "Comm closed (" << _ep << ")\n"; }
void Start() {
post(_socket.get_executor(), [this, self = shared_from_this()] {
_socket.async_connect(
_ep, [this, self = shared_from_this()](error_code ec) {
std::cerr << "Connect: " << ec.message() << std::endl;
if (!ec)
DoIdle();
else
_timer.cancel();
});
DoIdle();
});
}
void Stop() {
post(_socket.get_executor(), [this, self = shared_from_this()] {
if (not _outbox.empty())
std::cerr << "Warning: some messages may be undelivered ("
<< _ep << ")" << std::endl;
_socket.cancel();
_timer.cancel();
});
}
void TransmitData(std::string_view msg) {
post(_socket.get_executor(),
[this, self = shared_from_this(), msg = std::string(msg.substr(0, MAX_BUFFER_LENGTH))] {
_outbox.emplace_back(std::move(msg));
if (_outbox.size() == 1) { // no send loop already active?
DoSendLoop();
}
});
}
private:
// The DoXXXX functions are assumed to be on the strand
void DoSendLoop() {
DoIdle(); // restart max_idle even after last successful send
if (_outbox.empty())
return;
boost::asio::async_write(
_socket, boost::asio::buffer(_outbox.front()),
[this, self = shared_from_this()](error_code ec, size_t xfr) {
std::cerr << "Write " << xfr << " bytes to " << _ep << " " << ec.message() << std::endl;
if (!ec) {
_outbox.pop_front();
DoSendLoop();
} else
_timer.cancel(); // causes Comm shutdown
});
}
void DoIdle() {
_timer.expires_from_now(_max_idle); // cancels any pending wait
_timer.async_wait([this, self = shared_from_this()](error_code ec) {
if (!ec) {
std::cerr << "Timeout" << std::endl;
_socket.cancel();
}
});
}
tcp::endpoint _ep;
duration _max_idle;
tcp::socket _socket;
boost::asio::high_resolution_timer _timer;
std::deque<std::string> _outbox;
};
class TCPClient {
boost::asio::any_io_executor _ex;
std::deque<Handle> _comms;
public:
TCPClient(boost::asio::any_io_executor ex) : _ex(ex) {}
void Add(tcp::endpoint ep, duration max_idle = 3s)
{
auto pcomm = std::make_shared<Comm>(make_strand(_ex), ep, max_idle);
pcomm->Start();
_comms.push_back(pcomm);
// optionally garbage collect expired handles:
std::erase_if(_comms, std::mem_fn(&Handle::expired));
}
void TransmitData(std::string_view msg) {
for (auto& handle : _comms)
if (auto pcomm = handle.lock())
pcomm->TransmitData(msg);
}
void Stop() {
for (auto& handle : _comms)
if (auto pcomm = handle.lock())
pcomm->Stop();
}
};
int main() {
using std::this_thread::sleep_for;
boost::asio::thread_pool ctx(1);
TCPClient c(ctx.get_executor());
c.Add({{}, 8989});
c.Add({{}, 8990}, 1s); // shorter timeout for demo
c.TransmitData("Hello world\n");
c.Add({{}, 8991});
sleep_for(2s); // times out second connection
c.TransmitData("Three is a crowd\n"); // only delivered to 8989 and 8991
sleep_for(1s); // allow for delivery
c.Stop();
ctx.join();
}
Prints (on Coliru):
for p in {8989..8991}; do netcat -t -l -p $p& done
sleep .5; ./a.out
Hello world
Connect: Success
Connect: Success
Hello world
Connect: Success
Write 12 bytes to 0.0.0.0:8989 Success
Write 12 bytes to 0.0.0.0:8990 Success
Timeout
Comm closed (0.0.0.0:8990)
Write Three is a crowd
17Three is a crowd
bytes to 0.0.0.0:8989 Success
Write 17 bytes to 0.0.0.0:8991 Success
Comm closed (0.0.0.0:8989)
Comm closed (0.0.0.0:8991)
The output is a little out of sequence there.
I have started with this example, so I won't post all the code. My objective is to download a large file without blocking my main thread. The second objective is to get notifications so I can update a progress bar. I do have the code working a couple of ways. The first is to just call ioc.run(); and let it go to work, and I get the file downloaded. But I cannot find any way to start the session without blocking.
The second way, I can make the calls down to http::async_read_some and the call works, but I cannot get a response that I can use. I don't know if there is a way to pass a lambda that captures.
The #if 0..#else..#endif switches the methods. I'm sure there is a simple way, but I just cannot see it. I'll clean up the code when I get it working, like setting the local file name. Thanks.
std::size_t on_read_some(boost::system::error_code ec, std::size_t bytes_transferred)
{
if (ec);//deal with it...
if (!bValidConnection) {
std::string_view view((const char*)buffer_.data().data(), bytes_transferred);
auto pos = view.find("Content-Length:");
if (pos == std::string_view::npos)
;//error
file_size = std::stoi(view.substr(pos+sizeof("Content-Length:")).data());
if (!file_size)
;//error
bValidConnection = true;
}
else {
file_pos += bytes_transferred;
response_call(ec, file_pos);
}
#if 0
std::cout << "in on_read_some caller\n";
http::async_read_some(stream_, buffer_, file_parser_, std::bind(
response_call,
std::placeholders::_1,
std::placeholders::_2));
#else
std::cout << "in on_read_some inner\n";
http::async_read_some(stream_, buffer_, file_parser_, std::bind(
&session::on_read_some,
shared_from_this(),
std::placeholders::_1,
std::placeholders::_2));
#endif
return buffer_.size();
}
The main, messy but.....
struct lambda_type {
bool bDone = false;
void operator ()(const boost::system::error_code ec, std::size_t bytes_transferred) {
;
}
};
int main(int argc, char** argv)
{
auto const host = "reserveanalyst.com";
auto const port = "443";
auto const target = "/downloads/demo.msi";
int version = argc == 5 && !std::strcmp("1.0", argv[4]) ? 10 : 11;
boost::asio::io_context ioc;
ssl::context ctx{ ssl::context::sslv23_client };
load_root_certificates(ctx);
//ctx.load_verify_file("ca.pem");
auto so = std::make_shared<session>(ioc, ctx);
so->run(host, port, target, version);
bool bDone = false;
auto const lambda = [](const boost::system::error_code ec, std::size_t bytes_transferred) {
std::cout << "data lambda bytes: " << bytes_transferred << " er: " << ec.message() << std::endl;
};
lambda_type lambda2;
so->set_response_call(lambda);
ioc.run();
std::cout << "not in ioc.run()!!!!!!!!" << std::endl;
so->async_read_some(lambda);
//pseudo message pump when working.........
for (;;) {
std::this_thread::sleep_for(250ms);
std::cout << "time" << std::endl;
}
return EXIT_SUCCESS;
}
And stuff I've added to the class session
class session : public std::enable_shared_from_this<session>
{
using response_call_type = void(*)(boost::system::error_code ec, std::size_t bytes_transferred);
http::response_parser<http::file_body> file_parser_;
response_call_type response_call;
//
bool bValidConnection = false;
std::size_t file_pos = 0;
std::size_t file_size = 0;
public:
auto& get_result() { return res_; }
auto& get_buffer() { return buffer_; }
void set_response_call(response_call_type the_call) { response_call = the_call; }
I've updated this as I finally put it to use, and I wanted the old method where I could download to either a file or a string. Here is a link to a great talk on how Asio works:
CppCon 2016: Michael Caisse, "Asynchronous IO with Boost.Asio"
As for my misunderstanding of how to pass a lambda, here is Adam Nevraumont's answer; the gist is sketched below.
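The gist of that fix, as a minimal sketch with hypothetical names: a capturing lambda cannot be converted to a plain function pointer, but it can be stored in a std::function.
// The original response_call_type was a raw function pointer, which rejects
// capturing lambdas; std::function accepts them.
using response_call_type = std::function<void(boost::system::error_code, std::size_t)>;

std::size_t progress = 0; // hypothetical progress counter owned by the caller
response_call_type response_call =
    [&progress](boost::system::error_code, std::size_t bytes_transferred) {
        progress += bytes_transferred; // the capture now works
    };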
There are two ways to compile this, using a type to select the method. Both are shown at the beginning of main. You can construct either a file downloader or a string downloader by selecting the type of Beast parser. The parsers don't have the same constructs, so if constexpr compile-time conditions are used. And I checked: a release build of the downloader is about 1K, so pretty lightweight for what it does. In the case of a small string you don't have to handle the callbacks; either pass an empty lambda or add the likes of:
if(response_call)
response_call(resp_ok, test);
This looks to be a pretty clean way to get the job done, so I've updated this post as of 11/27/2022.
The code:
//
// Copyright (c) 2016-2019 Vinnie Falco (vinnie dot falco at gmail dot com)
//
// Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
//
// Official repository: https://github.com/boostorg/beast
//------------------------------------------------------------------------------
//
// Example: HTTP SSL client, synchronous, usable in a thread with a message pump
// Added code to use from a message pump
// Also useable as body to a file download, or body to string
//
//------------------------------------------------------------------------------
#include <boost/beast/core.hpp>
#include <boost/beast/http.hpp>
#include <boost/beast/ssl.hpp>
#include <boost/beast/version.hpp>
#include <boost/asio/connect.hpp>
#include <boost/asio/ip/tcp.hpp>
#include <boost/asio/ssl/error.hpp>
#include <boost/asio/ssl/stream.hpp>
#include <cstdlib>
#include <iostream>
#include <string>
#include <fstream>
//the boost shipped certificates
#include <boost/../libs/beast/example/common/root_certificates.hpp>
//TODO add your ssl libs as you would like
#ifdef _M_IX86
#pragma comment(lib, "libcrypto.lib")
#pragma comment(lib, "libssl.lib")
#elif _M_X64
#pragma comment(lib, "libcrypto-3-x64.lib")
#pragma comment(lib, "libssl-3-x64.lib")
#endif
namespace downloader {
namespace beast = boost::beast; // from <boost/beast.hpp>
namespace http = beast::http; // from <boost/beast/http.hpp>
namespace net = boost::asio; // from <boost/asio.hpp>
namespace ssl = net::ssl; // from <boost/asio/ssl.hpp>
using tcp = net::ip::tcp; // from <boost/asio/ip/tcp.hpp>
//specialization if using < c++17; see both 'if constexpr' below.
//this is not needed otherwise
//namespace detail {
// template<typename Type>
// void open_file(http::parser < false, Type>& p, const char* name, boost::system::error_code& file_open_ec) { }
// template<>
// void open_file(http::parser<false, http::file_body>& p, const char* name, boost::system::error_code& file_open_ec) {
// p.get().body().open(name, boost::beast::file_mode::write, file_open_ec);
// }
// template<typename Type>
// std::string get_string(http::parser < false, Type>& p) { return std::string{}; }
// template<>
// std::string get_string(http::parser<false, http::string_body>& p) {
// return p.get().body();
// }
//} //namespace detail
enum responses {
resp_null,
resp_ok,
resp_done,
resp_error,
};
using response_call_type = std::function< void(responses, std::size_t)>;
template<typename ParserType>
struct download {
//as these can be set with array initialization
const char* target_ = "/";
const char* filename_ = "test.txt";
const char* host_ = "lakeweb.net";
std::string body_;
using response_call_type = std::function< void(responses, std::size_t)>;
response_call_type response_call;
boost::asio::io_context ioc_;
ssl::context ctx_{ ssl::context::sslv23_client };
ssl::stream<tcp::socket> stream_{ ioc_, ctx_ };
tcp::resolver resolver_{ ioc_ };
boost::beast::flat_buffer buffer_;
uint64_t file_size_{};
int version{ 11 };
void set_response_call(response_call_type the_call) { response_call = the_call; }
uint64_t get_file_size() { return file_size_; }
void stop() { ioc_.stop(); }
bool stopped() { return ioc_.stopped(); }
std::string get_body() { return std::move(body_); }
void run() {
try {
// TODO should have a timer in case of a hang
load_root_certificates(ctx_);
// Set SNI Hostname (many hosts need this to handshake successfully)
if (!SSL_set_tlsext_host_name(stream_.native_handle(), host_)) {
boost::system::error_code ec{ static_cast<int>(::ERR_get_error()), boost::asio::error::get_ssl_category() };
throw boost::system::system_error{ ec };
}
//TODO resolve is deprecated, use endpoint
auto const results = resolver_.resolve(host_, "443");
boost::asio::connect(stream_.next_layer(), results.begin(), results.end());
stream_.handshake(ssl::stream_base::client);
// Set up an HTTP GET request message
http::request<http::string_body> req{ http::verb::get, target_, version };
req.set(http::field::host, host_);
req.set(http::field::user_agent, "mY aGENT");
// Send the HTTP request to the remote host
http::write(stream_, req);
// Read the header
boost::system::error_code file_open_ec;
http::parser<false, ParserType> p;
p.body_limit((std::numeric_limits<std::uint32_t>::max)());
//detail::open_file(p, filename_, file_open_ec);
//or => c++17
if constexpr (std::is_same_v<ParserType, http::file_body>)
p.get().body().open(filename_, boost::beast::file_mode::write, file_open_ec);
http::read_header(stream_, buffer_, p);
file_size_ = p.content_length().has_value() ? p.content_length().value() : 0;
//Read the body
uint64_t test{};
boost::system::error_code rec;
for (;;) {
test += http::read_some(stream_, buffer_, p, rec);
if (test >= file_size_) {
response_call(resp_done, 0);
break;
}
response_call(resp_ok, test);
}
// Gracefully close the stream
boost::system::error_code ec;
stream_.shutdown(ec);
if (ec == boost::asio::error::eof)
{
// Rationale:
// http://stackoverflow.com/questions/25587403/boost-asio-ssl-async-shutdown-always-finishes-with-an-error
ec.assign(0, ec.category());
}
if (ec)
throw boost::system::system_error{ ec };
//value = detail::get_string(p);
//or => c++17
if constexpr (std::is_same_v<ParserType, http::string_body>)
body_ = p.get().body();
}
catch (std::exception const& e)
{
std::cerr << "Error: " << e.what() << std::endl;
response_call(resp_error, -1);
}
ioc_.stop();
}
};
}//namespace downloader
//comment to test with string body
#define THE_FILE_BODY_TEST
int main(int argc, char** argv)
{
using namespace downloader;
#ifdef THE_FILE_BODY_TEST
download<http::file_body> dl{"/Nasiri%20Abarbekouh_Mahdi.pdf", "test.pdf"};
#else //string body test
download<http::string_body> dl{ "/robots.txt" };
#endif
responses dl_response{ resp_null };
size_t cur_size{};
auto static const lambda = [&dl_response, &dl, &cur_size](responses response, std::size_t bytes_transferred) {
if ((dl_response = response) == resp_ok) {
cur_size += bytes_transferred;
size_t sizes = dl.get_file_size() - cur_size;//because size is what is left
//drive your progress bar from here in a GUI app
}
};
dl.set_response_call(lambda);
std::thread thread{ [&dl]() { dl.run(); } };
//thread has started, now the pseudo message pump
bool quit = false; //true: as if a cancel button was pushed; won't finish download
for (int i = 0; ; ++i) {
switch (dl_response) { //ad hoc as if messaged
case resp_ok:
std::cout << "from sendmessage: " << cur_size << std::endl;
dl_response = resp_null;
break;
case resp_done:
std::cout << "from sendmessage: done" << std::endl;
dl_response = resp_null;
break;
case resp_error:
std::cout << "from sendmessage: error" << std::endl;
dl_response = resp_null;
}//switch
if (!(i % 5))
std::cout << "in message pump, stopped: " << std::boolalpha << dl.stopped() << std::endl;
std::this_thread::sleep_for(std::chrono::milliseconds(100));
if (quit && i == 10) //the cancel message
dl.stop();
if (!(i % 20) && dl.stopped()) {//dl job was quit or error or finished
std::cout << "dl is stopped" << std::endl;
break;
}
}
#ifdef THE_FILE_BODY_TEST
std::cout << "file written named: 'test.txt'" << std::endl;
#else
std::string res = dl.get_body();
std::cout << "body retrieved:\n" << res << std::endl;
#endif
if (thread.joinable())//in the case a thread was never started
thread.join();
std::cout << "exiting, program all done" << std::endl;
return EXIT_SUCCESS;
}
I strongly recommend against using the low-level [async_]read_some functions; instead, use http::[async_]read as intended, with http::response_parser<http::buffer_body>.
I do have an example of that - which is a little bit complicated by the fact that it also uses Boost Process to concurrently decompress the body data, but regardless it should show you how to use it:
How to read data from Internet using muli-threading with connecting only once?
I guess I could tailor it to your specific example given more complete code, but perhaps the above is good enough? Also see "Relay an HTTP message" in libs/beast/example/doc/http_examples.hpp which I used as "inspiration".
Caution: the buffer arithmetic is not intuitive. I think this is unfortunate and should not have been necessary, so pay (very) close attention to these samples for exactly how that's done.
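For reference, the core of that pattern looks roughly like this; a sketch only, assuming stream_ and buffer_ are the question's SSL stream and flat_buffer, and following the incremental-read/relay examples from the Beast documentation:
// Read the body in fixed-size chunks with http::response_parser<http::buffer_body>.
http::response_parser<http::buffer_body> parser;
parser.body_limit((std::numeric_limits<std::uint64_t>::max)());
http::read_header(stream_, buffer_, parser);
auto total = parser.content_length(); // optional<uint64_t>; drives the progress bar

char chunk[8192];
while (!parser.is_done()) {
    parser.get().body().data = chunk;
    parser.get().body().size = sizeof(chunk);
    boost::system::error_code ec;
    http::read(stream_, buffer_, parser, ec);
    if (ec == http::error::need_buffer)
        ec = {}; // expected: chunk was filled, more body follows
    if (ec)
        throw boost::system::system_error(ec);
    std::size_t received = sizeof(chunk) - parser.get().body().size;
    // chunk[0..received) now holds body bytes: write them out and report progress here
}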
I am creating a TCP server using Boost Asio that will accept connections from many clients, receive data, and send confirmations. The thing is that I want to be able to accept all the clients, but I want to work with only one at a time. I want all the other transactions to be kept in a queue.
Example:
Client1 connects
Client2 connects
Client1 sends data and asks for reply
Client2 sends data and asks for reply
Client2's request is put into queue
Client1's data is read, server replies, end of transaction
Client2's request is taken from the queue, server reads data, replies end of transaction.
So this is something between an asynchronous server and a blocking server. I want to do just one thing at a time, but at the same time I want to be able to store all client sockets and their requests in the queue.
I was able to create server-client communication with all the functionality that I need, but only on a single thread. Once the client disconnects, the server terminates as well. I don't really know how to start implementing what I have mentioned above. Should I open a new thread each time a connection is accepted? Should I use async_accept or a blocking accept?
I have read the boost::asio chat example, where many clients connect to a single server, but it has no queuing mechanism like the one I need here.
I am aware that this post might be a bit confusing, but TCP servers are new to me, so I am not familiar enough with the terminology. There is also no source code to post, because I am asking only for help with the concept of this project.
Just keep accepting.
You show no code, but it typically looks like this:
void do_accept() {
acceptor_.async_accept(socket_, [this](boost::system::error_code ec) {
std::cout << "async_accept -> " << ec.message() << "\n";
if (!ec) {
std::make_shared<Connection>(std::move(socket_))->start();
do_accept(); // THIS LINE
}
});
}
If you don't include the line marked // THIS LINE you will indeed not accept more than 1 connection.
If this doesn't help, please include some code we can work from.
For Fun, A Demo
This uses just standard library features for the non-network part.
Network Listener
The network part is as outlined before:
#include <boost/asio.hpp>
#include <boost/asio/high_resolution_timer.hpp>
#include <istream>
using namespace std::chrono_literals;
using Clock = std::chrono::high_resolution_clock;
namespace Shared {
using PostRequest = std::function<void(std::istream& is)>;
}
namespace Network {
namespace ba = boost::asio;
using ba::ip::tcp;
using error_code = boost::system::error_code;
using Shared::PostRequest;
struct Connection : std::enable_shared_from_this<Connection> {
Connection(tcp::socket&& s, PostRequest poster) : _s(std::move(s)), _poster(poster) {}
void process() {
auto self = shared_from_this();
ba::async_read(_s, _request, [this,self](error_code ec, size_t) {
if (!ec || ec == ba::error::eof) {
std::istream reader(&_request);
_poster(reader);
}
});
}
private:
tcp::socket _s;
ba::streambuf _request;
PostRequest _poster;
};
struct Server {
Server(unsigned port, PostRequest poster) : _port(port), _poster(poster) {}
void run_for(Clock::duration d = 30s) {
_stop.expires_from_now(d);
_stop.async_wait([this](error_code ec) { if (!ec) _svc.post([this] { _a.close(); }); });
_a.listen();
do_accept();
_svc.run();
}
private:
void do_accept() {
_a.async_accept(_s, [this](error_code ec) {
if (!ec) {
std::make_shared<Connection>(std::move(_s), _poster)->process();
do_accept();
}
});
}
unsigned short _port;
PostRequest _poster;
ba::io_service _svc;
ba::high_resolution_timer _stop { _svc };
tcp::acceptor _a { _svc, tcp::endpoint {{}, _port } };
tcp::socket _s { _svc };
};
}
The only "connection" to the work service part is the PostRequest handler that is passed to the server at construction:
Network::Server server(6767, handler);
I've also opted for async operations, so we can have a timer to stop the service, even though we do not use any threads:
server.run_for(3s); // this blocks
The Work Part
This is completely separate, and will use threads. First, let's define a Request, and a thread-safe Queue:
namespace Service {
struct Request {
std::vector<char> data; // or whatever you read from the sockets...
};
Request parse_request(std::istream& is) {
Request result;
result.data.assign(std::istream_iterator<char>(is), {});
return result;
}
struct Queue {
Queue(size_t max = 50) : _max(max) {}
void enqueue(Request req) {
std::unique_lock<std::mutex> lk(mx);
cv.wait(lk, [this] { return _queue.size() < _max; });
_queue.push_back(std::move(req));
cv.notify_one();
}
Request dequeue(Clock::time_point deadline) {
Request req;
{
std::unique_lock<std::mutex> lk(mx);
_peak = std::max(_peak, _queue.size());
if (cv.wait_until(lk, deadline, [this] { return _queue.size() > 0; })) {
req = std::move(_queue.front());
_queue.pop_front();
cv.notify_one();
} else {
throw std::range_error("dequeue deadline");
}
}
return req;
}
size_t peak_depth() const {
std::lock_guard<std::mutex> lk(mx);
return _peak;
}
private:
mutable std::mutex mx;
mutable std::condition_variable cv;
size_t _max = 50;
size_t _peak = 0;
std::deque<Request> _queue;
};
This is nothing special, and doesn't actually use threads yet. Let's make a worker function that accepts a reference to a queue (more than 1 worker can be started if so desired):
void worker(std::string name, Queue& queue, Clock::duration d = 30s) {
auto const deadline = Clock::now() + d;
while(true) try {
auto r = queue.dequeue(deadline);
(std::cout << "Worker " << name << " handling request '").write(r.data.data(), r.data.size()) << "'\n";
}
catch(std::exception const& e) {
std::cout << "Worker " << name << " got " << e.what() << "\n";
break;
}
}
}
The main Driver
Here's where the Queue gets instantiated and both the network server as well as some worker threads are started:
int main() {
Service::Queue queue;
auto handler = [&](std::istream& is) {
queue.enqueue(Service::parse_request(is));
};
Network::Server server(6767, handler);
std::vector<std::thread> pool;
pool.emplace_back([&queue] { Service::worker("one", queue, 6s); });
pool.emplace_back([&queue] { Service::worker("two", queue, 6s); });
server.run_for(3s); // this blocks
for (auto& thread : pool)
if (thread.joinable())
thread.join();
std::cout << "Maximum queue depth was " << queue.peak_depth() << "\n";
}
Live Demo
See It Live On Coliru
With a test load looking like this:
for a in "hello world" "the quick" "brown fox" "jumped over" "the pangram" "bye world"
do
netcat 127.0.0.1 6767 <<< "$a" || echo "not sent: '$a'"&
done
wait
It prints something like:
Worker one handling request 'brownfox'
Worker one handling request 'thepangram'
Worker one handling request 'jumpedover'
Worker two handling request 'Worker helloworldone handling request 'byeworld'
Worker one handling request 'thequick'
'
Worker one got dequeue deadline
Worker two got dequeue deadline
Maximum queue depth was 6
The includes you need (some may be unnecessary):
boost/asio.hpp, boost/thread.hpp, boost/asio/io_service.hpp
boost/asio/spawn.hpp, boost/asio/write.hpp, boost/asio/buffer.hpp
boost/asio/ip/tcp.hpp, iostream, stdlib.h, array, string
vector, string.h, stdio.h, process.h, iterator
using namespace boost::asio;
using namespace boost::asio::ip;
io_service ioservice;
tcp::endpoint sim_endpoint{ tcp::v4(), 4066 }; //{which connectiontype, portnumber}
tcp::acceptor sim_acceptor{ ioservice, sim_endpoint };
std::vector<tcp::socket> sim_sockets;
static int iErgebnis;
int iSocket = 0;
void do_write(int a) //int a is the position of the socket in the vector
{
int iWSchleife = 1; //to stay connected with putty or something
static char chData[32000];
std::string sBuf = "Received!\r\n";
while (iWSchleife > 0)
{
boost::system::error_code error;
memset(chData, 0, sizeof(chData)); //clear the char
iErgebnis = sim_sockets[a].read_some(boost::asio::buffer(chData), error); //recv data from client
iWSchleife = iErgebnis; //if iErgebnis is bigger than 0 it will stay in the loop. iErgebnis is always >0 when data is received
if (iErgebnis > 0) {
printf("%d data received from client : \n%s\n\n", iErgebnis, chData);
write(sim_sockets[a], boost::asio::buffer(sBuf), error); //send data to client
}
else {
boost::system::error_code ec;
sim_sockets[a].shutdown(boost::asio::ip::tcp::socket::shutdown_send, ec); //close the socket when no data
if (ec)
{
printf("studown error"); // An error occurred.
}
}
}
}
void do_accept(yield_context yield)
{
while (1) //endless loop to accept limitless clients
{
sim_sockets.emplace_back(ioservice); //look to the link below for more info
sim_acceptor.async_accept(sim_sockets.back(), yield); //waits here to accept an client
boost::thread dosome(do_write, iSocket); //when accepted, starts the thread do_write and passes the parameter iSocket
iSocket++; //to know the position of the socket in the vector
}
}
int main()
{
sim_acceptor.listen();
spawn(ioservice, do_accept); //here you can learn more about Coroutines https://theboostcpplibraries.com/boost.coroutine
ioservice.run(); //from here you jump to do_accept
getchar();
}
I'm trying to detect lost connections that closed without sending the close frame, by sending pings from a websocket++ application.
I'm having trouble setting up the handler.
I initially tried to set it up the way the other handlers are set up in the broadcast_server example:
m_server.set_ping_handler(bind(&broadcast_server::on_m_server_ping,this,::_1,::_2));
That gives this error:
note: candidate is:
websocketpp/endpoint.hpp:240:10: note: void websocketpp::endpoint<connection, config>::set_ping_handler(websocketpp::ping_handler) [with connection = websocketpp::connection<websocketpp::config::asio_tls_client>; config = websocketpp::config::asio_tls_client; websocketpp::ping_handler = std::function<bool(std::weak_ptr<void>, std::basic_string<char>)>]
void set_ping_handler(ping_handler h) {
I thought that setting up a typedef, as in this related problem, would solve it, but putting it outside the class broadcast_server makes it impossible to access m_server.
How can this handler be properly implemented?
Includes & flags
Boost 1.54
#include <websocketpp/config/asio.hpp>
#include <websocketpp/server.hpp>
#include <websocketpp/common/thread.hpp>
typedef websocketpp::server<websocketpp::config::asio_tls> server;
flags
-std=c++0x -I ~/broadcast_server -D_WEBSOCKETPP_CPP11_STL_
-D_WEBSOCKETPP_NO_CPP11_REGEX_ -lboost_regex -lboost_system
-lssl -lcrypto -pthread -lboost_thread
typedef
typedef websocketpp::lib::function<bool(connection_hdl,std::string)> ping_handler;
Solving this is quite easy. First, the definition in websocketpp/connection.hpp:
/// The type and function signature of a ping handler
/**
* The ping handler is called when the connection receives a WebSocket ping
* control frame. The string argument contains the ping payload. The payload is
* a binary string up to 126 bytes in length. The ping handler returns a bool,
* true if a pong response should be sent, false if the pong response should be
* suppressed.
*/
typedef lib::function<bool(connection_hdl,std::string)> ping_handler;
gives the basic idea that the function must have the signature:
bool on_ping(connection_hdl hdl, std::string s)
{
/* Do something */
return true;
}
Now everything falls into place:
m_server.set_ping_handler(bind(&broadcast_server::on_ping,this,::_1,::_2));
The complete modified example source looks like:
#include <websocketpp/config/asio_no_tls.hpp>
#include <websocketpp/server.hpp>
#include <iostream>
/*#include <boost/thread.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/thread/condition_variable.hpp>*/
#include <websocketpp/common/thread.hpp>
typedef websocketpp::server<websocketpp::config::asio> server;
using websocketpp::connection_hdl;
using websocketpp::lib::placeholders::_1;
using websocketpp::lib::placeholders::_2;
using websocketpp::lib::bind;
using websocketpp::lib::thread;
using websocketpp::lib::mutex;
using websocketpp::lib::unique_lock;
using websocketpp::lib::condition_variable;
/* on_open insert connection_hdl into channel
* on_close remove connection_hdl from channel
* on_message queue send to all channels
*/
enum action_type {
SUBSCRIBE,
UNSUBSCRIBE,
MESSAGE
};
struct action {
action(action_type t, connection_hdl h) : type(t), hdl(h) {}
action(action_type t, connection_hdl h, server::message_ptr m)
: type(t), hdl(h), msg(m) {}
action_type type;
websocketpp::connection_hdl hdl;
server::message_ptr msg;
};
class broadcast_server {
public:
broadcast_server() {
// Initialize Asio Transport
m_server.init_asio();
// Register handler callbacks
m_server.set_open_handler(bind(&broadcast_server::on_open,this,::_1));
m_server.set_close_handler(bind(&broadcast_server::on_close,this,::_1));
m_server.set_message_handler(bind(&broadcast_server::on_message,this,::_1,::_2));
m_server.set_ping_handler(bind(&broadcast_server::on_ping,this,::_1,::_2));
}
void run(uint16_t port) {
// listen on specified port
m_server.listen(port);
// Start the server accept loop
m_server.start_accept();
// Start the ASIO io_service run loop
try {
m_server.run();
} catch (const std::exception & e) {
std::cout << e.what() << std::endl;
} catch (websocketpp::lib::error_code e) {
std::cout << e.message() << std::endl;
} catch (...) {
std::cout << "other exception" << std::endl;
}
}
void on_open(connection_hdl hdl) {
unique_lock<mutex> lock(m_action_lock);
//std::cout << "on_open" << std::endl;
m_actions.push(action(SUBSCRIBE,hdl));
lock.unlock();
m_action_cond.notify_one();
}
void on_close(connection_hdl hdl) {
unique_lock<mutex> lock(m_action_lock);
//std::cout << "on_close" << std::endl;
m_actions.push(action(UNSUBSCRIBE,hdl));
lock.unlock();
m_action_cond.notify_one();
}
void on_message(connection_hdl hdl, server::message_ptr msg) {
// queue message up for sending by processing thread
unique_lock<mutex> lock(m_action_lock);
//std::cout << "on_message" << std::endl;
m_actions.push(action(MESSAGE,hdl,msg));
lock.unlock();
m_action_cond.notify_one();
}
bool on_ping(connection_hdl hdl, std::string s)
{
/* Do something */
return true;
}
void process_messages() {
while(1) {
unique_lock<mutex> lock(m_action_lock);
while(m_actions.empty()) {
m_action_cond.wait(lock);
}
action a = m_actions.front();
m_actions.pop();
lock.unlock();
if (a.type == SUBSCRIBE) {
unique_lock<mutex> con_lock(m_connection_lock);
m_connections.insert(a.hdl);
} else if (a.type == UNSUBSCRIBE) {
unique_lock<mutex> con_lock(m_connection_lock);
m_connections.erase(a.hdl);
} else if (a.type == MESSAGE) {
unique_lock<mutex> con_lock(m_connection_lock);
con_list::iterator it;
for (it = m_connections.begin(); it != m_connections.end(); ++it) {
m_server.send(*it,a.msg);
}
} else {
// undefined.
}
}
}
private:
typedef std::set<connection_hdl,std::owner_less<connection_hdl>> con_list;
server m_server;
con_list m_connections;
std::queue<action> m_actions;
mutex m_action_lock;
mutex m_connection_lock;
condition_variable m_action_cond;
};
int main() {
try {
broadcast_server server_instance;
// Start a thread to run the processing loop
thread t(bind(&broadcast_server::process_messages,&server_instance));
// Run the asio loop with the main thread
server_instance.run(9002);
t.join();
} catch (std::exception & e) {
std::cout << e.what() << std::endl;
}
}
I'm currently trying to log real-time data by using boost::thread and a check box. When I check the box, the logging thread starts. When I uncheck, the logging thread stops. The problem arises when I check/uncheck repeatedly and very fast (program crashes, some files aren't logged, etc.). How can I write a reliable thread-safe program where these problems don't occur when repeatedly and quickly checking/unchecking? I also don't want to use join() since this temporarily stops the data input coming from the main thread. In the secondary thread, I'm opening a log file, reading from a socket into a buffer, copying this into another buffer, and then writing this buffer to a log file. I'm thinking that maybe I should use mutex locks for reading/writing. If so, what specific locks should I use? Below is a code snippet:
//Main thread
if(m_loggingCheckBox->isChecked()) {
...
if(m_ThreadLogData.InitializeReadThread(socketInfo))//opens the socket.
//If socket is opened and can be read, start thread.
m_ThreadLogData.StartReadThread();
else
std::cout << "Did not initialize thread\n";
}
else if(!m_loggingCheckBox->isChecked())
{
m_ThreadLogData.StopReadThread();
}
void ThreadLogData::StartReadThread()
{
//std::cout << "Thread started." << std::endl;
m_stopLogThread = false;
m_threadSendData = boost::thread(&ThreadLogData::LogData,this);
}
void ThreadLogData::StopReadThread()
{
m_stopLogThread = true;
m_ReadDataSocket.close_socket(); // close the socket
if(ofstreamLogFile.is_open())
{
ofstreamLogFile.flush(); //flush the log file before closing it.
ofstreamLogFile.close(); // close the log file
}
m_threadSendData.interrupt(); // interrupt the thread
//m_threadSendData.join(); // join the thread. Commented out since this
// temporarily stops data input.
}
//secondary thread
bool ThreadLogData::LogData()
{
unsigned short int buffer[1024];
bool bufferflag;
unsigned int iSizeOfBuffer = 1024;
int iSizeOfBufferRead = 0;
int lTimeout = 5;
if(!ofstreamLogFile.is_open())
{
ofstreamLogFile.open(directory_string().c_str(), ios::out);
if(!ofstreamLogFile.is_open())
{
return 0;
}
}
while(!m_stopLogThread)
{
try {
int ret = m_ReadDataSocket.read_sock(&m_msgBuffer.m_buffer
[0],iSizeOfBuffer,lTimeout,&iSizeOfBufferRead);
memcpy(&buffer[0],m_msgBuffer.m_buffer,iSizeOfBufferRead);
bufferflag = m_Buffer.setBuffer(buffer);
if(!bufferflag) return false;
object = &m_Buffer;
unsigned int data = object->getData();
ofstreamLogFile << data << std::endl;
boost::this_thread::interruption_point();
} catch (boost::thread_interrupted& interruption) {
std::cout << "ThreadLogData::LogData(): Caught Interruption thread." << std::endl;
StopReadThread();
} catch (...) {
std::cout << "ThreadLogData::LogData(): Caught Something." << std::endl;
StopReadThread();
}
} // end while()
}
I like to use Boost Asio for async stuff
#include <iostream>
#include <fstream>
#include <boost/asio.hpp>
#include <boost/asio/signal_set.hpp>
#include <boost/asio/ip/tcp.hpp>
#include <boost/bind.hpp>
#include <boost/optional.hpp>
#include <thread>
using boost::asio::ip::tcp;
namespace asio = boost::asio;
struct program
{
asio::io_service _ioservice;
asio::deadline_timer _timer;
asio::signal_set _signals;
std::array<char, 1024> _buffer;
tcp::socket _client;
tcp::resolver _resolver;
std::ofstream _logfile;
std::thread _thread;
program()
: _timer(_ioservice),
_signals(_ioservice),
_client(_ioservice),
_resolver(_ioservice)
{
do_connect(_resolver.resolve({ "localhost", "6767" }));
do_toggle_logging_cycle();
_signals.add(SIGINT);
_signals.async_wait([this](boost::system::error_code ec, int) { if (!ec) close(); });
_thread = std::thread(boost::bind(&asio::io_service::run, boost::ref(_ioservice)));
}
~program()
{
if (_thread.joinable())
_thread.join();
}
void close() {
_ioservice.post([this]() {
_signals.cancel();
_timer.cancel();
_client.close();
});
}
private:
void do_toggle_logging_cycle(boost::system::error_code ec = {})
{
if (ec != boost::asio::error::operation_aborted)
{
if (_logfile.is_open())
{
_logfile.close();
_logfile.clear();
} else
{
_logfile.open("/tmp/output.log");
}
_timer.expires_from_now(boost::posix_time::seconds(2));
_timer.async_wait(boost::bind(&program::do_toggle_logging_cycle, this, boost::asio::placeholders::error()));
} else
{
std::cerr << "\nDone, goobye\n";
}
}
void do_connect(tcp::resolver::iterator endpoint_iterator) {
boost::asio::async_connect(
_client, endpoint_iterator,
[this](boost::system::error_code ec, tcp::resolver::iterator) {
if (!ec) do_read();
else close();
});
}
void do_read() {
boost::asio::async_read(
_client, asio::buffer(_buffer.data(), _buffer.size()),
[this](boost::system::error_code ec, std::size_t length) {
if (!ec) {
if (_logfile.is_open())
{
_logfile.write(_buffer.data(), length);
}
do_read();
} else {
close();
}
});
}
};
int main()
{
{
program p; // does socket reading and (optional) logging on a separate thread
std::cout << "\nMain thread going to sleep for 15 seconds...\n";
std::this_thread::sleep_for(std::chrono::seconds(15));
p.close(); // if the user doesn't press ^C, let's take the initiative
std::cout << "\nDestruction of program...\n";
}
std::cout << "\nMain thread ends\n";
};
The program connects to port 6767 of localhost and asynchronously reads data from it.
If logging is active (_logfile.is_open()), all received data is written to /tmp/output.log.
Now
the reading/writing is on a separate thread, but all operations are serialized using _ioservice (see e.g. the post in close())
the user can abort the socket reading loop with Ctrl+C
every 2 seconds, the logging will be (de)activated (see do_toggle_logging_cycle)
The main thread just sleeps for 15 seconds before canceling the program (similar to the user pressing Ctrl-C).