What is the best way to compose a state machine using Boost SML to manage ASIO objects and threads? - c++

First of all, I'd like to clarify that by ASIO objects I mean a simple timer for the time being, but further down the line I want to create a state machine to deal with sockets and data transmission.
I have been exploring the Boost SML library for a few weeks now, trying out different things. I quite like it; however, the documentation doesn't cover my use case, and its source is not exactly inviting for someone still fairly new to metaprogramming.
For the time being, I'd like to create a state machine that manages an ASIO timer (to wait asynchronously). The interface would provide a start call where you can tell it how long it should wait, a cancel call (to cancel an ongoing wait), and some callback to be invoked when the timer fires.
I have already achieved this in one way, following both sml examples in this repository and it works well - I have a class which manages the timer and contains a state machine. The public interface provides means to inject the appropriate events into the FSM and query its state. The private interface provides means to start and stop the timer. The FSM class is a friend to the controller so it has access to the private functions.
However I was wondering if there is a way to take some of the controller functionality and move it into the FSM - it would hold all ASIO objects and run the io_context/io_service in a thread it has spawned.
(1) The first problem I'd encounter is that the state machine is copied; ASIO objects don't allow this, but it can be worked around by wrapping them in shared pointers.
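For illustration only, a minimal sketch of that workaround (m_io is an assumed io_context member; the m_timer name matches the code further down):
// Hypothetical member of the FSM/controller class: the timer lives on the heap,
// so copying the state machine only copies the shared_ptr, not the ASIO object.
std::shared_ptr<asio::steady_timer> m_timer =
    std::make_shared<asio::steady_timer>(m_io); // m_io: an asio::io_context owned by this class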
(2) Next is sending events to the FSM from within. I figured out how to do it from actions by obtaining a callable boost::sml::back::process<> object and using it to "post" the event to the queue, but that is of no use from an ASIO handler, since the handler is not, by default, invoked from an action. I suppose a way around this is to capture the callable in the timer handler with a lambda, like this:
// Some pseudo code (this would all be done in one class):
// Transition table
"idle"_s + event<Wait> / start_waiting = "waiting"_s
// Events
struct Wait { std::chrono::seconds duration; };
struct Cancel {};
struct Expire {};
// Actions
std::function<void(Wait const&, boost::sml::back::process<Cancel, Expire>)> start_waiting =
    [this] (Wait const& e, boost::sml::back::process<Cancel, Expire> p) {
        run_timer(e, p);
    };
// Private function
void run_timer(Wait const& e, boost::sml::back::process<Cancel, Expire> p) {
    m_timer->expires_after(e.duration);
    // Capture the process callable by value: a reference to the parameter
    // would dangle by the time the asynchronous handler runs.
    auto timerHandler = [p] (asio::error_code const& ec) mutable {
        if (ec == asio::error::operation_aborted)
            p(Cancel{});
        else
            p(Expire{});
    };
    m_timer->async_wait(timerHandler);
}
But this feels like a bit of a botch.
(3) The last thing that worries me is how the state machine will handle threads. Obviously the timer handler will be executed in the thread running the io_context. If that handler posts an event to the FSM's queue, will the event be processed by the same thread that posted it? I'm assuming yes, because I can't see any mention of threads (other than thread safety) in the header. This will dictate how I go about managing the threads' lifetimes.
Any tips on alternative ways to architect this, and their pros and cons, would be of great help.

Related

Custom creation of QFuture

I've faced quite an odd problem with QtConcurrent, mostly because of strange programming desires, maybe it's just an XY-problem, but...
So, here is my code, which talks to a database; backend code, actually (on Qt, yes). It has to work quickly and handle many requests, so I need a thread pool. Establishing a connection is, as is well known, a very time-consuming operation, so persistent database connections are needed, which in turn means persistent threads (QSqlDatabase cannot be moved between threads). It is also quite natural to want asynchronous request handling, hence the need for a simple way to pass requests to those persistent threads.
Nothing too complex; let's assume there already exists some boilerplate of the form...
// That's what I want for now
QFuture<int> res = workers[i]->async(param1, param2);
// OR
// That's what I DO NOT want to get
workers[i]->async(param1, param2, [](QFuture<int> res) { // QFuture to pass exceptions
// callback here
});
That can be done, for sure. Why not std::future? Well, it is much easier to use QFutureWatcher and its signals for notification about a result's readiness. Pure C++ notification solutions are much more complex, and callbacks are also something that has to be dragged through the class hierarchy. Each worker interfaces a thread with DB connections, obviously.
Okay, all of that can be written, but... a custom thread pool would mean losing the QtConcurrent convenience, and there seem to be only risky ways to create a QFuture that a custom worker could return. QThreadPool is of no use, because creating persistent runnables in it would be a whole saga of its own. What's more, the boilerplate I've briefly described is going to be part of the project's core, used in many places, not something that can easily be replaced by a hundred hand-written thread-management routines.
In short: if I could construct a QFuture for my results, the problem would be solved.
Could anyone point me to a solution or a workaround? Would be grateful for any bright ideas.
UPD:
@VladimirBershov offered a good modern solution which implements the observer pattern. After some googling I've found the QPromise library. Of course, constructing a custom QFuture is still hacky and can only be done via the undocumented QFutureInterface class, but a "promise-like" solution still makes asynchronous calls neater by far, as far as I can judge.
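For reference, the hack looks roughly like this (a sketch only; QFutureInterface is an undocumented class, so this may change between Qt versions):
#include <QFuture>
#include <QFutureInterface>

// Manually create a QFuture that a worker can later fulfil.
QFutureInterface<int> iface;
iface.reportStarted();
QFuture<int> future = iface.future();   // hand this to the caller

// ...later, from the worker, when the result is ready:
iface.reportResult(42);
iface.reportFinished();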
You can use AsyncFuture library as a custom QFuture creation tool or ideas source:
AsyncFuture - Use QFuture like a Promise object
QFuture is used together with QtConcurrent to represent the result of
an asynchronous computation. It is a powerful component for
multi-thread programming. But its usage is limited to the result of
threads, it doesn't work with the asynchronous signal emitted by
QObject. And it is a bit trouble to setup the listener function via
QFutureWatcher.
AsyncFuture is designed to enhance the function to offer a better way
to use it for asynchronous programming. It provides a Promise object
like interface. This project is inspired by AsynQt and RxCpp.
Features:
Convert a signal from QObject into a QFuture object
Combine multiple futures with different type into a single future object
Use Future like a Promise object
Chainable Callback - Advanced multi-threading programming model
Convert a signal from QObject into a QFuture object:
#include "asyncfuture.h"
using namespace AsyncFuture;
// Convert a signal from QObject into a QFuture object
QFuture<void> future = observe(timer, &QTimer::timeout).future();
/* Listen from the future without using QFutureWatcher<T>*/
observe(future).subscribe([]() {
    // onCompleted. It is invoked when the observed future is finished successfully
    qDebug() << "onCompleted";
}, []() {
    // onCanceled
    qDebug() << "onCancel";
});
My idea is to use thread pools with maximum 1 thread available for each.
QThreadPool* persistentThread = new QThreadPool; // no need to write custom thread pool
persistentThread->setMaxThreadCount(1);
persistentThread->setExpiryTimeout(-1);
and then
QFuture<int> future_1 = QtConcurrent::run(persistentThread, func_1);
QFuture<int> future_2 = QtConcurrent::run(persistentThread, func_2);
func_2 will be executed after func_1 in the same single "persistent" thread.

Consume a std::future by connecting a QObject

I have some existing code that uses std::future/std::promise that I'd like to integrate with a Qt GUI cleanly.
Ideally, one could just:
std::future<int> future{do_something()};
connect(future, this, &MyObject::resultOfFuture);
and then implement resultOfFuture as a slot that gets one argument: the int value that came out of the std::future<int>. I've added this suggestion as a comment on QTBUG-50676. I like this best because most of my future/promises are not concurrent anyway, so I'd like to avoid firing up a thread just to wait on them. Also, type inference could then work between the future and the slot's parameter.
But it seems to me that this shouldn't be hard to implement using a wrapper Qt object (e.g., a version of QFutureWatcher that takes a std::future<int>). The two issues with a wrapper are:
the wrapper will have to be concrete in its result type.
the watcher would have to wait concurrently in a thread?
Is there a best-practice to implement this sort of connection? Is there another way that can hook into the Qt main loop and avoid thread creation?
std::future is missing continuations. The only way to turn the result of a std::future asynchronously into a function call delivering the result is to launch a thread watching it, and if you want to avoid busy-waiting you need one such thread per std::future, as there is no way to lazy-wait on multiple futures at once.
There are plans to create a future with continuation (a then operation), but they are not in C++ as of C++17, let alone C++11.
You could write your own system of future/promise that mimics the interface of std::future and std::promise that does support continuations, or find a library that already did that.
A busy-wait solution that regularly checked if the future was ready could avoid launching a new thread.
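A sketch of that polling idea, assuming Qt is available and using a std::shared_future (copyable, unlike std::future) so the lambda can hold it; watchFuture and the 10 ms interval are made up for illustration:
#include <QTimer>
#include <chrono>
#include <future>

// Poll 'future' from the Qt event loop; call 'handler' once the result is ready.
template <typename T, typename Handler>
void watchFuture(QObject* context, std::shared_future<T> future, Handler handler)
{
    auto timer = new QTimer(context);
    QObject::connect(timer, &QTimer::timeout, context, [timer, future, handler]() {
        if (future.wait_for(std::chrono::seconds(0)) == std::future_status::ready) {
            timer->stop();
            timer->deleteLater();
            handler(future.get());
        }
    });
    timer->start(10); // poll every 10 ms
}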
In any case, std::experimental::then would make your problem trivial.
future.then([some_state](auto future) {
    try {
        auto x = future.get();
        // send message with x
    } catch (...) {
        // deal with exception
    }
});
You can write your own std::experimental::future or find an implementation to use, but with a plain std::future this functionality cannot be provided without an extra thread.

Design a transmitter class in C++: buffer data from server & send to client

I'm writing a class "Tmt" that acts between a server and clients through sockets. My Tmt class will receive data from server, build up a queue internally and perform some operation on the data in the queue before they are available to the client.
I have already setup the socket connection and I can call
receiverData(); // to get data from server
The client will use my class Tmt as follows:
Tmt* mytmt = new Tmt();
mytmt->getProcessedData(); // to get one frame.
My question is how to let the Tmt class keep receiving data from the server in the background once it is created, and add the data to the queue. I have some experience with multi-threading in C, but I'm not sure how this "working in the background" concept would be implemented in a class in C++. Please advise, thanks!
One option would be to associate a thread with each instance of the class (perhaps by creating a thread in the constructor). This thread continuously reads data from the network and adds the data to the queue as it becomes available. If the thread is marked private (i.e. class clients aren't aware of its existence), then it will essentially be running "in the background" with no explicit intervention. It would be up to the Tmt object to manage its state.
As for actual thread implementations in C++, you can just use Good ol' Pthreads in C++ just fine. However, a much better approach would probably be to use the Boost threading library, which encapsulates all the thread state into its own class. They also offer a whole bunch of synchronization primitives that are just like the pthread versions, but substantially easier to use.
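As a rough illustration of the thread-in-the-constructor idea, here is a minimal sketch using std::thread (playing the role boost::thread plays above); the Frame type and receiveFrame() helper are placeholders for the existing receiverData() machinery:
#include <atomic>
#include <mutex>
#include <queue>
#include <thread>

struct Frame { /* one frame of data */ };   // placeholder for whatever the server sends
Frame receiveFrame();                       // hypothetical blocking read built on receiverData()

class Tmt {
public:
    Tmt() : m_running(true), m_reader([this] { readLoop(); }) {}

    ~Tmt() {
        m_running = false;   // a real implementation must also interrupt the blocking read
        m_reader.join();
    }

    // Called by the client; only takes the lock briefly, never blocks on the network.
    bool getProcessedData(Frame& out) {
        std::lock_guard<std::mutex> lock(m_mutex);
        if (m_queue.empty()) return false;
        out = m_queue.front();
        m_queue.pop();
        return true;
    }

private:
    void readLoop() {
        while (m_running) {
            Frame f = receiveFrame();               // blocking network read
            std::lock_guard<std::mutex> lock(m_mutex);
            m_queue.push(f);                        // hand the frame to the client side
        }
    }

    std::atomic<bool> m_running;
    std::mutex m_mutex;
    std::queue<Frame> m_queue;
    std::thread m_reader;
};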
Hope this helps!
By the way - I'd recommend just naming the class Transmit. No reason to be overly terse. ;-)
IMHO, multithreading is not the best solution for this kind of class.
Introducing background threads can cause many problems; at the very least you must devise guards against unnecessary multiple thread creation. Threads also need explicit initialization and cleanup. For instance, the usual thread cleanup includes a join operation (waiting for the thread to stop) that can cause deadlocks, resource leaks, unresponsive UIs, etc.
Single thread asynchronous socket communication could be more appropriate to this scenario.
Let me draw sample code about this:
class Tmt {
    ...
public:
    ...
    bool doProcessing()
    {
        receiverData();
        // process data
        // return true if data available
    }
    T getProcessedData()
    {
        // return processed data
    }
};
Tmt users must run a loop calling doProcessing and getProcessedData.
Tmt myTmt;
...
while (needMoreData)
{
    if (myTmt.doProcessing())
    {
        myTmt.getProcessedData();
        ...
    }
}
If Tmt users want background processing, they can easily create another thread and do the Tmt job there. In that case, thread management is the responsibility of the Tmt users.
If they prefer a single-threaded approach, they can do that without any problem.
Also note that the well-known curl library uses this kind of design.

Avoid stuck calling callback

This is a question about generic c++ event driven applications design.
Let's assume that we have two threads, a "Dispatcher" (or "Engine"...) and a "Listener" (or "Client"...).
Let's assume that I write the Dispatcher code, and release it as a library. I also write the Listener interface, of course.
When the Dispatcher executes (after Listener registration)
listenerInstance.onSomeEvent();
the event handling code will actually be executed by the Dispatcher thread, so if the person that implements the Listener writes something like
void Listener::onSomeEvent() { while(true) ; }
The Dispatcher will be stuck forever.
Is there a "plain old C++" way (I mean no Boost or libsigc++) to "decouple" the two classes, so I can be sure that my Dispatcher will work fine whatever the Listener does in its callbacks?
bye and thanks in advance,
Andrea
Well if the event gets invoked in the same thread (as I seem to understand can be a requirement), then there isn't much you can do about it.
If this is under a Win32 app with a message pump, you could register a Windows message, call PostMessage with data representing this event, and patch the message loop to interpret that message and call the event handler. What you gain is a decoupling of sorts: the event call is asynchronous (i.e. it returns no matter what). But later on, when you process your messages and actually call the handler, your main thread will still be stalled and nothing else will run until the handler returns.
Another alternative is just creating a new thread (or using a thread pool) for the call. This won't work for events that must run on a specific thread (e.g. UI-updating threads). Additionally, this adds synchronization and thread-spawning overhead, AND you might starve the system of threads and/or CPU time.
But really, I don't think it's your job as the library designer to anticipate and avoid these problems. If the end-user wants to create a long event handler, let him spawn a new thread on his own. If he doesn't and just wants his specific thread to handle an event, let him. It simplifies your job and doesn't add any overhead that's not needed.
I'm afraid there's no native C++ way to do this. For Windows, you can use asynchronous procedure calls (APCs).
One approach could be to call onSomeEvent in a dedicated thread, as sketched below. This is not 100% bulletproof, but it would avoid the while(true); issue.
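A plain-C++ sketch of that idea, assuming C++11 is acceptable (the CallbackWorker class and its member names are invented for illustration): the Dispatcher hands each callback to a worker object instead of calling the Listener directly, so a runaway handler only blocks the worker thread.
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// Runs listener callbacks on its own thread, so a misbehaving handler can only
// block this worker, never the Dispatcher thread.
class CallbackWorker {
public:
    CallbackWorker() : m_stop(false), m_thread([this] { run(); }) {}

    ~CallbackWorker() {
        { std::lock_guard<std::mutex> lock(m_mutex); m_stop = true; }
        m_cv.notify_one();
        m_thread.join();
    }

    // Called by the Dispatcher instead of invoking the listener directly.
    void post(std::function<void()> callback) {
        { std::lock_guard<std::mutex> lock(m_mutex); m_queue.push(std::move(callback)); }
        m_cv.notify_one();
    }

private:
    void run() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(m_mutex);
                m_cv.wait(lock, [this] { return m_stop || !m_queue.empty(); });
                if (m_stop && m_queue.empty()) return;
                task = std::move(m_queue.front());
                m_queue.pop();
            }
            task(); // a while(true) here stalls only this worker thread
        }
    }

    std::mutex m_mutex;
    std::condition_variable m_cv;
    std::queue<std::function<void()>> m_queue;
    bool m_stop;
    std::thread m_thread;
};

// Dispatcher side: worker.post([&] { listenerInstance.onSomeEvent(); });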
I hope it helps
There is a pure C++ way to achieve what you're mentioning. However, it's very inefficient. Here's a sample:
class Listener
{
    bool myHasEvent = false; // in real code this should be std::atomic<bool>, or the loop below may never see the update

private:
    void ProcessEvent()
    {
        while (true)
        {
            if (!myHasEvent)
                continue; // spin lock
            // Do real processing
            myHasEvent = false;
        }
    }

public:
    void onSomeEvent() { myHasEvent = true; }
};
However, I would recommend against this approach. Instead, I would transform this into more platform-specific code. I would replace the if (!myHasEvent) continue; spin lock with an OS-specific wait routine (i.e. WaitForSingleObject on Win32) passing an event handle. Then, in onSomeEvent, instead of myHasEvent = true; I would set the event into the signaled state (i.e. SetEvent on Win32). This would be a lot more efficient because the thread wouldn't eat processor time while waiting.
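A portable sketch of the same replacement using std::condition_variable instead of the Win32 event (names mirror the code above; this is illustrative, not the original answer's code):
#include <condition_variable>
#include <mutex>

class Listener
{
public:
    void onSomeEvent() {
        { std::lock_guard<std::mutex> lock(myMutex); myHasEvent = true; }
        myCondition.notify_one();   // plays the role of SetEvent
    }

private:
    void ProcessEvent() {
        for (;;) {
            std::unique_lock<std::mutex> lock(myMutex);
            myCondition.wait(lock, [this] { return myHasEvent; }); // plays the role of WaitForSingleObject
            myHasEvent = false;
            lock.unlock();
            // Do real processing
        }
    }

    std::mutex myMutex;
    std::condition_variable myCondition;
    bool myHasEvent = false;
};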
Another method is the PostMessage as suggested by Blindly.

Designing Thread Class

I have a design question. Is it better to define separate classes for SENDING and RECEIVING, or is it better to define a single Thread class? I like the idea of a single Thread class because it is easier to share a queue which can be locked by a mutex.
Design Option #1 (Separate):
mySendThread = new SendThread(); // Have thread properties and separate members
myRcvThread = new RcvThread(); // Have thread properties and separate members
Design Option #2 (Master):
Master thread -
Execute()
{
    if (threadType == RCV_THREAD)
    {
        globalVar = new MasterThread(serialPortHandle);
    }
    while (!Terminated)
    {
        if (threadType == RCV_THREAD)
        {
            if (globalVar)
            {
                // do work
            }
        }
        if (threadType == SND_THREAD)
        {
            tCountSnd = GetTickCount() / SND_THREAD_DELAY;
            if (tCountSnd != tCountSnd2)
            {
                tCountSnd2 = tCountSnd;
                if (globalVar)
                {
                    // do sending work
                }
            }
        }
    }
}
I think it's better to completely decouple the purpose or execution of a thread from the actual thread abstraction that you'll be using.
Make your thread class just a thin wrapper to allow you to start, stop, and join a thread. Have it take a functor object (or function pointer) in the constructor for the actual execution.
Or better yet, use one of the many available thread abstractions already out there instead of writing your own (boost::thread for one, but I bet whatever framework you're using already has a thread class).
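For illustration, a minimal sketch of such a thin wrapper using std::thread (the Worker name and the stop-flag convention are just one possible design; boost::thread would look almost identical):
#include <atomic>
#include <functional>
#include <thread>

// Thin wrapper: owns the thread, knows nothing about sending or receiving.
class Worker {
public:
    explicit Worker(std::function<void(std::atomic<bool>&)> body)
        : m_stop(false), m_thread([this, body] { body(m_stop); }) {}

    void stop() { m_stop = true; }
    void join() { if (m_thread.joinable()) m_thread.join(); }
    ~Worker() { stop(); join(); }

private:
    std::atomic<bool> m_stop;
    std::thread m_thread;
};

// Usage: the sending and receiving logic become two functors, not two thread classes.
// Worker receiver([&](std::atomic<bool>& stop) { while (!stop) { /* read port */ } });
// Worker sender  ([&](std::atomic<bool>& stop) { while (!stop) { /* send queued data */ } });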
I've designed a thread for communicating on the serial port (in Python, not C++, but it doesn't matter much) as follows:
There's a single thread and two queues - one for sent and one for received messages. The thread always listens (asynchronously) on both the serial port (for received data) and the sending queue (to send stuff the application asks to send).
If data arrived on the serial port, it's placed in the receive queue for the application's use
If the application placed data into the send queue, the thread sends it down the serial port
This design makes more sense to me because the single resource (the serial port) is held by a single thread, not shared by two. Breaking it into several classes sounds like overkill to me, since reading/writing the queues and reading/writing the serial port are trivial operations (naturally the serial port is wrapped in a convenient class - by the way, I really recommend this class by Ramon De Klein).
Oh, and it works very well.
Regarding the queue to be shared: wrap it in a separate class and implement the mutex handling there. Every thread class then holds a reference to the queue wrapper and doesn't need to deal with mutexes at all.
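A minimal sketch of such a queue wrapper (the SharedQueue name and the Message element type are placeholders):
#include <mutex>
#include <queue>

// All locking lives here; the send/receive threads just call push/tryPop.
template <typename Message>
class SharedQueue {
public:
    void push(Message m) {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_queue.push(std::move(m));
    }

    bool tryPop(Message& out) {
        std::lock_guard<std::mutex> lock(m_mutex);
        if (m_queue.empty()) return false;
        out = std::move(m_queue.front());
        m_queue.pop();
        return true;
    }

private:
    std::mutex m_mutex;
    std::queue<Message> m_queue;
};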
The 2nd choice is clearly the worse one.
It is better to have two different classes; perhaps with a base class that holds the common implementation. This is just an initial assessment; please provide more information about your problem, as only then can a good analysis be done.