Boost::asio, Shared Memory and Interprocess Communication - c++

I have an application that is written to use boost::asio exclusively as its source of input data as most of our objects are network communication based. Due to some specific requirements, we now require the ability to use shared memory as an input method as well. I've already written the shared memory component and it is working relatively well.
The problem is how to handle notifications from the shared memory process to the consuming application that data is available to be read -- we need to handle the data in the existing input thread (using boost::asio), and we also need to not block that input thread waiting for data.
I've implemented this by introducing an intermediate thread that waits on events to be signaled from the shared memory provider process then posts a completion handler to the input thread to handle reading in the data.
This also works, but the introduction of the intermediate thread means that in a significant number of cases we have an extra context switch before we can read the data, which has a negative impact on latency, and the overhead of the additional thread is also relatively expensive.
Here's a simplistic example of what the application is doing:
#include <iostream>
using namespace std;

#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <boost/scoped_ptr.hpp>
#include <boost/bind.hpp>

class simple_thread
{
public:
    simple_thread(const std::string& name)
        : name_(name)
    {}

    void start()
    {
        thread_.reset(new boost::thread(
            boost::bind(&simple_thread::run, this)));
    }

private:
    virtual void do_run() = 0;

    void run()
    {
        cout << "Started " << name_ << " thread as: " << thread_->get_id() << "\n";
        do_run();
    }

protected:
    boost::scoped_ptr<boost::thread> thread_;
    std::string name_;
};

class input_thread
    : public simple_thread
{
public:
    input_thread() : simple_thread("Input")
    {}

    boost::asio::io_service& svc()
    {
        return svc_;
    }

    void do_run()
    {
        boost::system::error_code e;
        boost::asio::io_service::work w(svc_); // keeps run() from returning
        svc_.run(e);
    }

private:
    boost::asio::io_service svc_;
};

struct dot
{
    void operator()()
    {
        cout << '.';
    }
};

class interrupt_thread
    : public simple_thread
{
public:
    interrupt_thread(input_thread& input)
        : simple_thread("Interrupt")
        , input_(input)
    {}

    void do_run()
    {
        do
        {
            boost::this_thread::sleep(boost::posix_time::milliseconds(500));
            input_.svc().post(dot());
        }
        while(true);
    }

private:
    input_thread& input_;
};

int main()
{
    input_thread inp;
    interrupt_thread intr(inp);
    inp.start();
    intr.start();
    while(true)
    {
        // portable sleep (the original used the Windows Sleep() call)
        boost::this_thread::sleep(boost::posix_time::seconds(1));
    }
}
Is there any way to get the data handled in the input_thread directly (without having to post it in via the interrupt_thread)? The assumption is that the interrupt thread is totally driven by timings from an external application (notification that data is available via a semaphore). Also, assume that we have total control of both the consuming and providing applications, and that we have additional objects that need to be handled by the input_thread object (so we cannot simply block and wait on the semaphore objects there). The goal is to reduce the overhead, CPU utilization and latency of the data coming in from the shared memory providing application.

I guess you have found your answer since you posted this question; this is for others' benefit...
Try checking out boost::asio strands.
A strand gives you the ability to choose which thread some work is done on: work posted through the same strand is automatically queued and serialized, so that's something you won't have to think about.
It even gives you a completion handler if you need to know when the work is done.
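A minimal sketch of the idea (my illustration, not part of the original answer): work posted through the same strand is serialized, so the shared-memory notification can be posted straight onto the io_service that the input thread is already running, with no extra lock:

#include <boost/asio.hpp>
#include <iostream>

int main()
{
    boost::asio::io_service io;
    boost::asio::io_service::strand strand(io);

    // Handlers posted through the same strand never run concurrently,
    // so they need no additional synchronization between them.
    strand.post([] { std::cout << "shared memory data available\n"; });
    strand.post([] { std::cout << "handled on the input thread\n"; });

    io.run(); // in the question's setup, the input thread is already running this
}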


C++ condition variable without mutexes?

Problem
I think I'm misunderstanding the CV-Mutex design pattern because I'm creating a program that seems to not need a mutex, only CV.
Goal Overview
I am parsing a feed from a website from 2 different accounts. Alice, Bob. The parsing task is slow, so I have two separate threads each dedicated to handling the feeds from Alice and Bob.
I then have a thread that receives messages from the network and assigns the work to either thread A or thread B, depending on who the update message is for. That way the reader/network thread isn't stalled, the messages for Alice are in order, and the messages for Bob are in order too.
I don't care if Alice thread is a little bit behind Bob thread chronologically, as long as the individual account feeds are in-order.
Implementation Details
This is very similar to a thread pool, except the threads are essentially locked to a fixed-size array of size 2, and I use the same thread for each feed.
I create an AccountThread class which maintains a queue of JSON messages to be processed as soon as possible within the class. Here is the code for that:
#include <queue>
#include <string>
#include <condition_variable>
#include <mutex>

using namespace std;

class AccountThread {
public:
    AccountThread(const string& name) : name(name) { }

    void add_message(const string& d) {
        this->message_queue.push(d);
        this->cv.notify_all(); // could also do notify_one but whatever
    }

    void run_parsing_loop() {
        while (true) {
            std::unique_lock<std::mutex> mlock(lock_mutex);
            cv.wait(mlock, [&] {
                return this->is_dead || this->message_queue.size() > 0;
            });
            if (this->is_dead) { break; }

            const auto message = this->message_queue.front();
            this->message_queue.pop();
            // Do message parsing...
        }
    }

    void kill_thread() {
        this->is_dead = true;
    }

private:
    const string& name;
    condition_variable cv;
    mutex lock_mutex;
    queue<string> message_queue;
    // To Kill Thread if Needed
    bool is_dead;
};
I can add the main.cpp code, but it's essentially just a reader loop that calls thread.add_message(message) based on what the account name is.
Question
Why do I need the lock_mutex here? I don't see its purpose since this class is essentially single-threaded. Is there a better design pattern for this? I feel like if I'm including a variable that I don't really need, such as the mutex, then I'm using the wrong design pattern for this task.
I'm just adapting the code from some article I saw online about a threadpool implementation and was curious.
First things first: there's no condition_variable::wait without a mutex. The interface of wait requires a mutex. So regarding
I'm creating a program that seems to not need a mutex, only CV
note that the mutex is needed to protect the condition variable itself. If the notion of how you'd have a data race without the mutex doesn't immediately make sense, check Why do pthreads’ condition variable functions require a mutex.
Secondly, there are multiple pain points in the code you provided. Consider this version where the problems are addressed; I'll explain the issues below:
#include <iostream> // additionally needed, beyond the question's includes
#include <thread>

class AccountThread {
public:
    AccountThread(const string& name) : name(name)
    {
        consumer = std::thread(&AccountThread::run_parsing_loop, this); // 1
    }

    ~AccountThread()
    {
        kill_thread(); // 2
        consumer.join();
    }

    void add_message(const string& d) {
        {
            std::lock_guard lok(lock_mutex); // 3
            this->message_queue.push(d);
        }
        this->cv.notify_one();
    }

private:
    void run_parsing_loop()
    {
        while (!is_dead) {
            std::unique_lock<std::mutex> mlock(lock_mutex);
            cv.wait(mlock, [this] { // 4
                return is_dead || !message_queue.empty();
            });
            if (this->is_dead) { break; }

            std::string message = this->message_queue.front();
            this->message_queue.pop();

            string parsingMsg = name + " is processing " + message + "\n";
            std::cout << parsingMsg;
        }
    }

    void kill_thread() {
        {
            std::lock_guard lock(lock_mutex);
            this->is_dead = true;
        }
        cv.notify_one(); // 5
    }

private:
    string name; // 6
    mutable condition_variable cv; // 7
    mutable mutex lock_mutex;
    std::thread consumer;
    queue<string> message_queue;
    bool is_dead{false}; // 8
};
Top to bottom, the problems noted in the numbered comments are:
1. If you have a worker thread class, like AccountThread, it's easier to get right when the class provides the thread. This way only the relevant interface is exposed and you have better control over the lifetime and workings of the consumer.
2. Case in point: when an AccountThread "dies", the worker should also die. In the example above I fix this dependency by killing the consumer thread inside the destructor.
3. add_message caused a data race in your code. Since you intend to run the parsing loop in a different thread, it's wrong to simply push to the queue without a critical section.
4. It's cleaner to capture this here; you probably don't need the reference to mlock captured.
5. kill_thread was not correct. You need to notify the (potentially waiting) consumer thread that a change in state has happened. To do that correctly you need to protect the state checked in the predicate with a lock.
6. The initial version with const string& name is probably not something you want. Member const references don't extend the lifetime of temporaries, and the way your constructor is written can leave an instance with dangling state. Even if you do the typical checks and overload the constructor with an r-value reference version, you'll be depending on an external string being alive longer than your AccountThread object. Better use a value member.
7. Remember the M&M rule: mutable and mutex go together, so you can still lock (and wait) inside const member functions.
8. You had undefined behavior. The is_dead member was used without being initialized.
Demo
All in all, I think the suggested changes point in the right direction. You can also check an implementation of a Go-like communication channel if you want more insight into how something like the TBB component you mention is implemented. Such a channel (or buffer queue) would simplify the implementation by avoiding manual usage of mutexes, CVs and alive states:
class AccountThread {
public:
    AccountThread(const string& name) : name(name) {
        consumer = std::thread(&AccountThread::run_parsing_loop, this);
    }

    ~AccountThread() {
        kill_thread();
        consumer.join();
    }

    void add_message(const string& d) { _data.push(d); }

private:
    void run_parsing_loop() {
        try {
            while (true) {
                // This pop waits until there's data or the channel is closed.
                auto message = _data.pop();
                // TODO: Implement parsing here
            }
        } catch (...) {
            // Single exception thrown per thread lifetime
        }
    }

    void kill_thread() { _data.set(yap::BufferBehavior::Closed); }

private:
    string name;
    std::thread consumer;
    yap::BufferQueue<string> _data;
};
Demo2

How to design proper release of a boost::asio socket or wrapper thereof

I am making a few attempts at writing my own simple async TCP server using boost::asio after not having touched it for several years.
The latest example listing I can find is:
http://www.boost.org/doc/libs/1_54_0/doc/html/boost_asio/tutorial/tutdaytime3/src.html
The problem I have with this example listing is that (I feel) it cheats and it cheats big, by making the tcp_connection a shared_ptr, such that it doesn't worry about the lifetime management of each connection. (I think) They do this for brevity, since it is a small tutorial, but that solution is not real world.
What if you wanted to send a message to each client on a timer, or something similar? A collection of client connections is going to be necessary in any real world non-trivial server.
I am worried about the lifetime management of each connection. I figure the natural thing to do would be to keep some collection of tcp_connection objects or pointers to them inside tcp_server, adding to that collection from the OnConnect callback and removing from it in OnDisconnect.
Note that OnDisconnect would most likely be called from an actual Disconnect method, which in turn would be called from OnReceive callback or OnSend callback, in the case of an error.
Well, therein lies the problem.
Consider we'd have a callstack that looked something like this:
tcp_connection::~tcp_connection
tcp_server::OnDisconnect
tcp_connection::OnDisconnect
tcp_connection::Disconnect
tcp_connection::OnReceive
This would cause errors as the call stack unwinds and we are executing code in an object that has had its destructor called... I think, right?
I imagine everyone doing server programming comes across this scenario in some fashion. What is a strategy for handling it?
I hope the explanation is good enough to follow. If not let me know and I will create my own source listing, but it will be very large.
Edit:
Related
Memory management in asynchronous C++ code
IMO not an acceptable answer: it relies on cheating with a shared_ptr kept outstanding on receive calls and nothing more, and is not real world. What if the server wanted to say "Hi" to all clients every 5 minutes? A collection of some kind is necessary. What if you are calling io_service.run on multiple threads?
I am also asking on the boost mailing list:
http://boost.2283326.n4.nabble.com/How-to-design-proper-release-of-a-boost-asio-socket-or-wrapper-thereof-td4693442.html
Like I said, I fail to see how using smart pointers is "cheating, and cheating big". I also do not think your assessment that "they do this for brevity" holds water.
Here's a slightly redacted excerpt¹ from our code base that exemplifies how using shared_ptrs doesn't preclude tracking connections.
It shows just the server side of things, with
a very simple connection object in connection.hpp; this uses the enable_shared_from_this idiom
just the fixed size connection_pool (we have dynamically resizing pools too, hence the locking primitives). Note how we can do actions on all active connections.
So you'd trivially write something like this to write to all clients, like on a timer:
_pool.for_each_active([](auto const& conn) {
    send_message(conn, hello_world_packet);
});
a sample listener that shows how it ties in with the connection_pool (which has a sample method to close all connections)
Code Listings
connection.hpp
#pragma once

#include "xxx/net/rpc/protocol.hpp"
#include "log.hpp"
#include "stats_filer.hpp"
#include <memory>

namespace xxx { namespace net { namespace rpc {

    struct connection : std::enable_shared_from_this<connection>, protected LogSource {
        typedef std::shared_ptr<connection> ptr;

      private:
        friend struct io;
        friend struct listener;

        boost::asio::io_service& _svc;
        protocol::socket         _socket;
        protocol::endpoint       _ep;
        protocol::endpoint       _peer;

      public:
        connection(boost::asio::io_service& svc, protocol::endpoint ep)
            : LogSource("rpc::connection"),
              _svc(svc),
              _socket(svc),
              _ep(ep)
        {}

        void init() {
            _socket.set_option(protocol::no_delay(true));
            _peer = _socket.remote_endpoint();
            g_stats_filer_p->inc_value("asio." + _ep.address().to_string() + ".sockets_accepted");
            debug() << "New connection from " << _peer;
        }

        protocol::endpoint endpoint() const { return _ep;     }
        protocol::endpoint peer() const     { return _peer;   }
        protocol::socket&  socket()         { return _socket; }

        // TODO encapsulation
        int handle() {
            return _socket.native_handle();
        }

        bool valid() const { return _socket.is_open(); }

        void cancel() {
            _svc.post([this] { _socket.cancel(); });
        }

        using shutdown_type = boost::asio::ip::tcp::socket::shutdown_type;
        void shutdown(shutdown_type what = shutdown_type::shutdown_both) {
            _svc.post([=] { _socket.shutdown(what); });
        }

        ~connection() {
            g_stats_filer_p->inc_value("asio." + _ep.address().to_string() + ".sockets_disconnected");
        }
    };

} } }
connection_pool.hpp
#pragma once

#include <mutex>
#include "xxx/threads/null_mutex.hpp"
#include "xxx/net/rpc/connection.hpp"
#include "stats_filer.hpp"
#include "log.hpp"

namespace xxx { namespace net { namespace rpc {

    // not thread-safe by default, but pass e.g. std::mutex for `Mutex` if you need it
    template <typename Ptr = xxx::net::rpc::connection::ptr, typename Mutex = xxx::threads::null_mutex>
    struct basic_connection_pool : LogSource {
        using WeakPtr = std::weak_ptr<typename Ptr::element_type>;

        basic_connection_pool(std::string name = "connection_pool", size_t size = 10)
            : LogSource(std::move(name)), _pool(size)
        { }

        bool try_insert(Ptr const& conn) {
            std::lock_guard<Mutex> lk(_mx);
            auto slot = std::find_if(_pool.begin(), _pool.end(), std::mem_fn(&WeakPtr::expired));
            if (slot == _pool.end()) {
                g_stats_filer_p->inc_value("asio." + conn->endpoint().address().to_string() + ".connections_dropped");
                error() << "dropping connection from " << conn->peer() << ": connection pool (" << _pool.size() << ") saturated";
                return false;
            }
            *slot = conn;
            return true;
        }

        template <typename F>
        void for_each_active(F action) {
            auto locked = [=] {
                using namespace std;
                lock_guard<Mutex> lk(_mx);
                vector<Ptr> locked(_pool.size());
                transform(_pool.begin(), _pool.end(), locked.begin(), mem_fn(&WeakPtr::lock));
                return locked;
            }();

            for (auto const& p : locked)
                if (p) action(p);
        }

        constexpr static bool synchronizing() {
            return not std::is_same<xxx::threads::null_mutex, Mutex>();
        }

      private:
        void dump_stats(LogSource::LogTx tx) const {
            // lock is assumed!
            size_t empty = 0, busy = 0, idle = 0;

            for (auto& p : _pool) {
                switch (p.use_count()) {
                    case 0:  empty++; break;
                    case 1:  idle++;  break;
                    default: busy++;  break;
                }
            }

            tx << "usage empty:" << empty << " busy:" << busy << " idle:" << idle;
        }

        Mutex _mx;
        std::vector<WeakPtr> _pool;
    };

    // TODO FIXME use null_mutex once growing is no longer required AND if
    // en-pooling still only happens from the single IO thread (XXX-2535)
    using server_connection_pool = basic_connection_pool<xxx::net::rpc::connection::ptr, std::mutex>;

} } }
listener.hpp
#pragma once

#include "xxx/threads/null_mutex.hpp"
#include <mutex>
#include "xxx/net/rpc/connection_pool.hpp"
#include "xxx/net/rpc/io_operations.hpp"

namespace xxx { namespace net { namespace rpc {

    struct listener : std::enable_shared_from_this<listener>, LogSource {
        typedef std::shared_ptr<listener> ptr;

        protocol::acceptor _acceptor;
        protocol::endpoint _ep;

        listener(boost::asio::io_service& svc, protocol::endpoint ep, server_connection_pool& pool)
            : LogSource("rpc::listener"), _acceptor(svc), _ep(ep), _pool(pool)
        {
            _acceptor.open(ep.protocol());

            _acceptor.set_option(protocol::acceptor::reuse_address(true));
            _acceptor.set_option(protocol::no_delay(true));
            ::fcntl(_acceptor.native(), F_SETFD, FD_CLOEXEC); // FIXME use non-racy socket factory?
            _acceptor.bind(ep);

            _acceptor.listen(32);
        }

        void accept_loop(std::function<void(connection::ptr conn)> on_accept) {
            auto self = shared_from_this();
            auto conn = std::make_shared<xxx::net::rpc::connection>(_acceptor.get_io_service(), _ep);

            _acceptor.async_accept(conn->_socket, [this, self, conn, on_accept](boost::system::error_code ec) {
                if (ec) {
                    auto tx = ec == boost::asio::error::operation_aborted ? debug() : warn();
                    tx << "failed accept " << ec.message();
                } else {
                    ::fcntl(conn->_socket.native(), F_SETFD, FD_CLOEXEC); // FIXME use non-racy socket factory?

                    if (_pool.try_insert(conn)) {
                        on_accept(conn);
                    }

                    self->accept_loop(on_accept);
                }
            });
        }

        void close() {
            _acceptor.cancel();
            _acceptor.close();

            _acceptor.get_io_service().post([=] {
                _pool.for_each_active([](auto const& sp) {
                    sp->shutdown(connection::shutdown_type::shutdown_both);
                    sp->cancel();
                });
            });

            debug() << "shutdown";
        }

        ~listener() {
        }

      private:
        server_connection_pool& _pool;
    };

} } }
¹ download as gist https://gist.github.com/sehe/979af25b8ac4fd77e73cdf1da37ab4c2
While others have answered similarly to the second half of this answer, the most complete answer I could find came from asking the same question on the Boost mailing list.
http://boost.2283326.n4.nabble.com/How-to-design-proper-release-of-a-boost-asio-socket-or-wrapper-thereof-td4693442.html
I will summarize here in order to assist those that arrive here from a search in the future.
There are two options:
1) Close the socket in order to cancel any outstanding io, then post a callback for the post-disconnection logic on the io_service and let the server class be called back when the socket has been disconnected. It can then safely release the connection. As long as there was only one thread that had called io_service::run, other asynchronous operations will already have been resolved when the callback is made. However, if there are multiple threads that had called io_service::run, then this is not safe.
2) As others have been pointing out in their answers, using shared_ptr to manage the connections' lifetimes, letting outstanding io operations keep them alive, is viable. We can use a collection of weak_ptrs to the connections in order to access them when we need to. The latter is the tidbit that had been omitted from the other posts on the topic, which confused me.
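As a minimal sketch of option 2 (names are hypothetical; the connection_pool in the answer above is a production-grade version of the same idea):

#include <memory>
#include <vector>

struct connection; // kept alive by the shared_ptrs bound into its outstanding handlers

class server {
public:
    void track(const std::shared_ptr<connection>& conn) {
        connections_.push_back(conn); // stored weakly; ownership stays with the handlers
    }

    template <typename F>
    void for_each_live(F f) {
        for (auto& weak : connections_)
            if (auto conn = weak.lock()) // non-null only while IO is still outstanding
                f(*conn);
    }

private:
    std::vector<std::weak_ptr<connection>> connections_;
};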
The way that asio solves the "deletion problem", where there are outstanding async methods, is that it splits each async-enabled object into 3 classes, e.g.:
server
server_service
server_impl
There is one service per io_loop (see use_service<>). The service creates an impl for the server, which is now a handle class.
This has separated the lifetime of the handle and the lifetime of the implementation.
Now, in the handle's destructor, a message can be sent (via the service) to the impl to cancel all outstanding IO.
The handle's destructor is free to wait for those io calls to be queued if necessary (for example if the server's work is being delegated to a background io loop or thread pool).
It has become a habit with me to implement all io_service-enabled objects this way, as it makes coding with asio very much simpler.
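A rough sketch of that split, with my own naming rather than asio's actual internals:

#include <memory>
#include <boost/asio.hpp>

// The impl owns the IO objects; outstanding handlers keep it alive via shared_ptr.
class server_impl : public std::enable_shared_from_this<server_impl> {
public:
    explicit server_impl(boost::asio::io_service& io) : acceptor_(io) {}
    void cancel_all() {
        boost::system::error_code ec;
        acceptor_.cancel(ec); // outstanding operations complete with operation_aborted
    }
private:
    boost::asio::ip::tcp::acceptor acceptor_;
};

// The handle exposes the public interface; destroying it merely requests shutdown.
class server {
public:
    explicit server(boost::asio::io_service& io)
        : io_(io), impl_(std::make_shared<server_impl>(io)) {}
    ~server() {
        auto impl = impl_; // the impl outlives the handle
        io_.post([impl] { impl->cancel_all(); });
    }
private:
    boost::asio::io_service& io_;
    std::shared_ptr<server_impl> impl_;
};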
Connection lifetime is a fundamental issue with boost::asio. Speaking from experience, I can assure you that getting it wrong causes "undefined behaviour"...
The asio examples use shared_ptr to ensure that a connection is kept alive whilst it may have outstanding handlers in an asio::io_service. Note that even in a single thread, an asio::io_service runs asynchronously to the application code, see CppCon 2016: Michael Caisse "Asynchronous IO with Boost.Asio" for an excellent description of the precise mechanism.
A shared_ptr enables the lifetime of a connection to be controlled by the shared_ptr instance count. IMHO it's not "cheating and cheating big", but an elegant solution to a complicated problem.
However, I agree with you that just using shared_ptr's to control connection lifetimes is not a complete solution since it can lead to resource leaks.
In my answer here: Boost async_* functions and shared_ptr's, I proposed using a combination of shared_ptr and weak_ptr to manage connection lifetimes. An HTTP server using a combination of shared_ptr's and weak_ptr's can be found here: via-httplib.
The HTTP server is built upon an asynchronous TCP server which uses a collection of (shared_ptr's to) connections, created on connects and destroyed on disconnects as you propose.

C++ Session management methods

I want to create a session management scheme that, in a few words, works this way:
A std::map to keep track of the current active sessions (filled up
with an id string and an associated message queue)
A set of threads, one polling each message queue.
There are two ways to create this scheme:
Keep all the components in the main program and, when needed, fill up the map with a new session id and associated message queue, and start a new detached thread that polls on a reference to the queue passed as its argument.
//main program code
_session_map[id] = queue;
// a std::thread starts running on construction; poll_queue stands in for the
// function that polls the session's message queue
std::thread worker(poll_queue, std::ref(_session_map[id]));
worker.detach();
//main program code
Create a session_manager class that hides all those mechanisms from the main program.
class session_manager
{
public:
    //session_manager code
    void new_session(std::string id)
    {
        _session_map[id] = message_queue();
        // the thread starts polling the queue on construction
        std::thread worker(poll_queue, std::ref(_session_map[id]));
        worker.detach();
    }

private:
    std::map<std::string, message_queue> _session_map;
};
Which is the better way to create this scheme? I'm not sure whether the second version would work correctly, because I'm not much of an expert on using threads.
Also, I don't have a good idea of how to keep track of closed sessions; does anyone have a suggestion?
A few years back I would have given an entirely different answer than I will now.
I am also not sure what you mean by "session management" or how the queues relate to the workers, so I will start with a bit of assuming:
You want N threads to do work in parallel, competing with each other over jobs found in a queue.
Your session is kind of a working session, which is to last until the sequence of jobs is done.
In the older days of C++ thread-programming, I would have actually outlined architectures on how one can implement such a scheme.
But I suspect that you only need a job to get done, not some "theory class".
So, instead of fiddling with low level threads (which are OS specific), I chose to showcase an (also OS specific) more abstract way to get things done.
Advantages:
Concurrent programming without having to deal with locks, mutexes, static thread functions calling pthis->Execute() etc.
Conceptually easy to understand. Message blocks, sources, targets, messages, worker objects (Actors/Agents). No promise-future C++-linq reactive functional programming replacement attempts (attempt at humor!).
Appears to come pretty close to what you have in mind.
Disadvantages:
Hides all the low level stuff us oldies are so proud of knowing and spent years of pure joy and despair with.
Runs only on Windows platforms (AFAIK), unfortunately.
All this code here uses the Windows Concurrency runtime.
#include "stdafx.h"
#include <thread>
#include <concrt.h>
#include <agents.h>
#include <iostream>
template<class _Job>
class Worker
: public concurrency::agent
{
concurrency::ISource<_Job> *m_source;
volatile bool m_running;
uint32_t m_counter;
public:
Worker(concurrency::ISource<_Job> *source)
: m_source(source)
, m_running(true)
, m_counter(0UL)
{}
~Worker()
{}
uint32_t Counter() const
{
return m_counter;
}
void Stop()
{
m_running = false;
}
virtual void run()
{
while (m_running)
{
try
{
_Job job = concurrency::receive(m_source, 1000);
m_counter++;
}
catch (concurrency::operation_timed_out& /*timeout*/)
{
std::cout << "Timeout." << std::endl;
}
}
_Job job;
while (concurrency::try_receive(m_source, job))
{
m_counter++;
}
done();
}
};
typedef uint64_t Job_t;
int _tmain(int argc, _TCHAR* argv[])
{
const size_t NUM_WORKERS = 4;
concurrency::unbounded_buffer<Job_t> buffer;
Worker<Job_t> workers[NUM_WORKERS] =
{ Worker<Job_t>(&buffer)
, Worker<Job_t>(&buffer)
, Worker<Job_t>(&buffer)
, Worker<Job_t>(&buffer)
};
std::vector<concurrency::agent*> agents;
for (auto& worker : workers)
{
agents.push_back(&worker);
worker.start();
}
for (uint64_t jobid = 0ULL; jobid < 1000000ULL; jobid++)
{
concurrency::asend(buffer, jobid);
}
for (auto& worker : workers)
{
worker.Stop();
}
concurrency::agent::wait_for_all(NUM_WORKERS,&agents[0]);
for (auto& worker : workers)
{
std::cout << "counter: " << worker.Counter() << std::endl;
}
return 0;
}
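For readers not on Windows: the same producer/consumer idea can be sketched with only the standard library. This portable variant is mine, not part of the original answer; a mutex/condition_variable pair replaces the unbounded_buffer:

#include <condition_variable>
#include <cstdint>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

int main()
{
    std::queue<uint64_t> jobs;
    std::mutex mx;
    std::condition_variable cv;
    bool done = false;

    auto worker = [&] {
        uint64_t counter = 0;
        for (;;)
        {
            std::unique_lock<std::mutex> lock(mx);
            cv.wait(lock, [&] { return done || !jobs.empty(); });
            if (jobs.empty()) break; // done and fully drained
            jobs.pop();              // "process" the job
            ++counter;
        }
        std::lock_guard<std::mutex> lock(mx); // serialize output
        std::cout << "counter: " << counter << "\n";
    };

    std::vector<std::thread> pool;
    for (size_t i = 0; i < 4; ++i)
        pool.emplace_back(worker);

    for (uint64_t jobid = 0; jobid < 100000; ++jobid)
    {
        { std::lock_guard<std::mutex> lock(mx); jobs.push(jobid); }
        cv.notify_one();
    }

    { std::lock_guard<std::mutex> lock(mx); done = true; }
    cv.notify_all();

    for (auto& t : pool) t.join();
}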

How do I make a non concurrent print to file system in C++?

I am programming in C++ with the intention to provide some client/server communication between Unreal Engine 4 and my server.
I am in need of a logging system but the current ones are flooded by system messages.
So I made a Logger class with an ofstream object, to which I write with file << "Write message." << endl.
The problem is that each object makes another instance of the ofstream, and longer writes to the file get cut off by newer writes.
I am looking for a way to queue writes to the file, with this system/function/stream being easy to include and call.
Bonus points: the ofstream seems to complain whenever I try to write std::string and FString :|
Log asynchronously using e.g. g2log, or use a non-blocking socket wrapper such as zeromq.
An ofstream can't be used across multiple threads. It needs to be synchronized using a mutex or similar object. Check the thread below for details: ofstream shared by multiple threads - crashes after a while
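For illustration, a minimal sketch of that synchronization (my example, assuming a single logger instance shared between threads):

#include <fstream>
#include <mutex>
#include <string>

class SyncLogger {
public:
    explicit SyncLogger(const std::string& path) : file_(path) {}

    void write(const std::string& msg) {
        std::lock_guard<std::mutex> lock(mutex_); // one writer at a time
        file_ << msg << '\n';
    }

private:
    std::ofstream file_;
    std::mutex mutex_;
};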
I wrote a quick example of how you can implement something like that. Please keep in mind that this may not be a final solution and still requires additional error checking and so on ...
#include <concurrent_queue.h>
#include <string>
#include <thread>
#include <fstream>
#include <future>
#include <cassert>

class Message
{
public:
    Message() : quit_(true), text_(), sender_()
    {}
    Message(std::string text, std::thread::id sender)
        : quit_(false), text_(std::move(text)), sender_(sender)
    {}

    bool isQuit() const { return quit_; }
    std::string getText() const { return text_; }
    std::thread::id getSender() const { return sender_; }

private:
    bool quit_;
    std::string text_;
    std::thread::id sender_;
};

class Log
{
public:
    Log(const std::string& fileName)
        : workerThread_(&Log::threadFn, this, fileName)
    {}

    ~Log()
    {
        queue_.push(Message()); // push quit message
        workerThread_.join();
    }

    void write(std::string text)
    {
        queue_.push(Message(std::move(text), std::this_thread::get_id()));
    }

private:
    static void threadFn(Log* log, std::string fileName)
    {
        std::ofstream out;
        out.open(fileName, std::ios::out);
        assert(out.is_open());
        // Todo: ... some error checking here

        Message msg;
        while(true)
        {
            if(log->queue_.try_pop(msg))
            {
                if(msg.isQuit())
                    break;
                out << msg.getText() << std::endl;
            }
            else
            {
                std::this_thread::yield();
            }
        }
    }

    concurrency::concurrent_queue<Message> queue_;
    std::thread workerThread_;
};

int main(int argc, char* argv[])
{
    Log log("test.txt");
    Log* pLog = &log;

    auto fun = [pLog]()
    {
        for(int i = 0; i < 100; ++i)
            pLog->write(std::to_string(i));
    };

    // start some test threads
    auto f0 = std::async(fun);
    auto f1 = std::async(fun);
    auto f2 = std::async(fun);
    auto f3 = std::async(fun);

    // wait for all
    f0.get();
    f1.get();
    f2.get();
    f3.get();

    return 0;
}
The main idea is to use one Log class that has a thread-safe write() method which may be called from multiple threads simultaneously. The Log class uses a worker thread to move all the file access onto another thread. It uses a threadsafe (possibly lock-free) data structure to transfer all messages from the sending threads to the worker thread (I used concurrent_queue here, but there are others as well). Using a small Message wrapper it is very simple to tell the worker thread to shut down. Afterwards, join it and everything is fine.
You have to make sure that the Log is not destroyed as long as any thread that may possibly write to it is still running.

boost asio asynchronously waiting on a condition variable

Is it possible to perform an asynchronous wait (read: non-blocking) on a condition variable in boost::asio? If it isn't directly supported, any hints on implementing it would be appreciated.
I could implement a timer and fire a wakeup event every few ms, but this approach is vastly inferior; I find it hard to believe that condition variable synchronization is not implemented/documented.
If I understand the intent correctly, you want to launch an event handler when some condition variable is signaled, in the context of the asio thread pool? I think it would be sufficient to wait on the condition variable at the beginning of the handler, and io_service::post() itself back into the pool at the end, something of this sort:
#include <iostream>
#include <boost/asio.hpp>
#include <boost/thread.hpp>

boost::asio::io_service io;
boost::mutex mx;
boost::condition_variable cv;

void handler()
{
    boost::unique_lock<boost::mutex> lk(mx);
    cv.wait(lk);
    std::cout << "handler awakened\n";
    io.post(handler);
}

void buzzer()
{
    for(;;)
    {
        boost::this_thread::sleep(boost::posix_time::seconds(1));
        boost::lock_guard<boost::mutex> lk(mx);
        cv.notify_all();
    }
}

int main()
{
    io.post(handler);
    boost::thread bt(buzzer);
    io.run();
}
I can suggest a solution based on boost::asio::deadline_timer, which works fine for me. It is a kind of async event in the boost::asio environment.
One very important thing is that the 'handler' must be serialised through the same 'strand_' as 'cancel', because using 'boost::asio::deadline_timer' from multiple threads is not thread safe.
class async_event
{
public:
    async_event(
        boost::asio::io_service& io_service,
        boost::asio::strand<boost::asio::io_context::executor_type>& strand)
        : strand_(strand)
        , deadline_timer_(io_service, boost::posix_time::ptime(boost::posix_time::pos_infin))
    {}

    // 'handler' must be serialised through the same 'strand_' as 'cancel' or 'cancel_one'
    // because using 'boost::asio::deadline_timer' from multiple threads is not thread safe
    template<class WaitHandler>
    void async_wait(WaitHandler&& handler) {
        deadline_timer_.async_wait(handler);
    }

    void async_notify_one() {
        boost::asio::post(strand_, boost::bind(&async_event::async_notify_one_serialized, this));
    }

    void async_notify_all() {
        boost::asio::post(strand_, boost::bind(&async_event::async_notify_all_serialized, this));
    }

private:
    void async_notify_one_serialized() {
        deadline_timer_.cancel_one();
    }

    void async_notify_all_serialized() {
        deadline_timer_.cancel();
    }

    boost::asio::strand<boost::asio::io_context::executor_type>& strand_;
    boost::asio::deadline_timer deadline_timer_;
};
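A hypothetical usage sketch (the wiring below is mine, not the answer's; note the handler is bound to the same strand, as the comment above requires):

#include <iostream>
#include <boost/asio.hpp>
#include <boost/bind/bind.hpp>

// ... async_event as defined above ...

int main()
{
    boost::asio::io_context io;
    boost::asio::strand<boost::asio::io_context::executor_type> strand(io.get_executor());
    async_event event(io, strand);

    // The wait handler runs with operation_aborted when the event is notified.
    event.async_wait(boost::asio::bind_executor(strand,
        [](const boost::system::error_code&) {
            std::cout << "event signaled\n";
        }));

    event.async_notify_one();
    io.run();
}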
Unfortunately, Boost ASIO doesn't have an async_wait_for_condvar() method.
In most cases, you also won't need it. Programming the ASIO way usually means, that you use strands, not mutexes or condition variables, to protect shared resources. Except for rare cases, which usually focus around correct construction or destruction order at startup and exit, you won't need mutexes or condition variables at all.
When modifying a shared resource, the classic, partially synchronous threaded way is as follows:
Lock the mutex protecting the resource
Update whatever needs to be updated
Signal a condition variable, if further processing by a waiting thread is required
Unlock the mutex
The fully asynchronous ASIO way is though:
Generate a message, that contains everything, that is needed to update the resource
Post a call to an update handler with that message to the resource's strand
If further processing is needed, let that update handler create further message(s) and post them to the appropriate resources' strands.
If jobs can be executed on fully private data, then post them directly to the io-context instead.
Here is an example of a class some_shared_resource that receives a string state and triggers some further processing depending on the state received. Please note that all processing in the private method some_shared_resource::receive_state() is fully thread-safe, as the strand serializes all calls.
Of course, the example is not complete; some_other_resource needs a similar send_code_red() method as some_shared_resource::send_state().
#include <boost/asio.hpp>
#include <memory>
#include <string>
#include <utility>

using asio_context = boost::asio::io_context;
using asio_executor_type = asio_context::executor_type;
using asio_strand = boost::asio::strand<asio_executor_type>;

class some_other_resource;

class some_shared_resource : public std::enable_shared_from_this<some_shared_resource> {
    asio_strand strand;
    std::shared_ptr<some_other_resource> other;
    std::string state;

    void receive_state(std::string&& new_state) {
        std::string oldstate = std::exchange(state, std::move(new_state));
        if(state == "red" && oldstate != "red") {
            // state transition to "red":
            other->send_code_red(true);
        } else if(state != "red" && oldstate == "red") {
            // state transition from "red":
            other->send_code_red(false);
        }
    }

public:
    some_shared_resource(asio_context& ctx, const std::shared_ptr<some_other_resource>& other)
        : strand(ctx.get_executor()), other(other) {}

    void send_state(std::string&& new_state) {
        boost::asio::post(strand, [me = weak_from_this(), new_state = std::move(new_state)]() mutable {
            if(auto self = me.lock(); self) {
                self->receive_state(std::move(new_state));
            }
        });
    }
};
As you see, posting always into ASIO's strands can be a bit tedious at first. But you can move most of that "equip a class with a strand" code into a template.
The good thing about message passing: As you are not using mutexes, you cannot deadlock yourself anymore, even in extreme situations. Also, using message passing, it is often easier to create a high level of parallelity than with classical multithreading. On the downside, moving and copying around all these message objects is time consuming, which can slow down your application.
A last note: Using the weak pointer in the message formed by send_state() facilitates the reliable destruction of some_shared_resource objects: Otherwise, if A calls B and B calls C and C calls A (possibly only after a timeout or similar), using shared pointers instead of weak pointers in the messages would create cyclic references, which then prevents object destruction. If you are sure that you never will have cycles, and that processing messages from to-be-deleted objects doesn't pose a problem, you can use shared_from_this() instead of weak_from_this(), of course. If you are sure that objects won't get deleted before ASIO has been stopped (and all working threads been joined back to the main thread), then you can also directly capture the this pointer instead.
FWIW, I implemented an asynchronous mutex using the rather good continuable library:
class async_mutex
{
    cti::continuable<> tail_{cti::make_ready_continuable()};
    std::mutex mutex_;

public:
    async_mutex() = default;
    async_mutex(const async_mutex&) = delete;
    const async_mutex& operator=(const async_mutex&) = delete;

    [[nodiscard]] cti::continuable<std::shared_ptr<int>> lock()
    {
        std::shared_ptr<int> result;
        cti::continuable<> tail = cti::make_continuable<void>(
            [&result](auto&& promise) {
                result = std::shared_ptr<int>((int*)1,
                    [promise = std::move(promise)](auto) mutable {
                        promise.set_value();
                    }
                );
            }
        );
        {
            std::lock_guard _{mutex_};
            std::swap(tail, tail_);
        }
        co_await std::move(tail);
        co_return result;
    }
};
Usage, e.g.:
async_mutex mutex;
...
{
    const auto _ = co_await mutex.lock();
    // only one lock per mutex-instance
}