C++ Thread State Machine with Timer

Explanation part:
I have tried to read several articles about threads, but I'm quite new to this topic, so I'm not sure whether I can accomplish what I have in mind.
I have already implemented an object-oriented state machine for movement detection based on this tutorial: C++ State machine implementation.
There is also a tutorial for multi-threading with state machines, but the example is a bit too complex for me and is implemented for Windows, so for now I'm trying to do it myself. The goal is to have several finite state machines running in parallel to label sensor information. I thought multi-threading was needed because events will arrive asynchronously and it must be ensured that every piece of information is processed (a queue has to be implemented as well).
As you can see in the picture, my FSM has 4 states, which should be realized within one thread. It waits for an event (movement or timer) to happen and transitions to the next state. The states are stored in an object and each transition is performed by a member function.
Question part:
The movement event will be triggered from outside (receiving of sensor event).
Depending on the event, I can execute the corresponding member function and change into the next state.
But how can I make a timer start when a certain state is entered, which eventually leads back to the previous state? The timer also has to be stopped if another event arrives asynchronously. Should this be handled within the thread or outside?

I see no motivation to involve multi-threading in this scenario. Therefore, "thread safety" is not an issue. You will define some software object which represents "the state machine" with its internal state, and two methods that can be called, one for each possible notification. These should be handled serially: a message arrives (in a thread-safe queue) to tell you the timer went off, or a message arrives to tell you about the alarm, and you pop and process those messages one at a time. Or, you can use a mutex within the methods to ensure that two method calls are never attempted simultaneously ... if that's an issue, and I see no reason for it to be. Just do all of this in one thread and be done.
Apparently, your state machine receives two notifications: (1) that the timer has just gone off, and (2) that the sensor/alarm has something to report.
And then you simply work out the logic that needs to happen for each of these, depending on (1) the current state and (2) various internal variables. The logic within your state machine handles both of these: nothing is actually being done in parallel. For example:
switch (state)
{
case MOVING_BETWEEN_ROOM:
    switch (stimulus)
    {
    case TIMER_POP:
        timer_counter++;
        if (timer_counter > 1)
        {
            state = UNSURE_MOVEMENT;
            timer_counter = 0; // maybe?
        }
        break;
    case ALARM:
        timer_counter = 0; // a fresh stimulus cancels the pending timeout count
        if (current_room == room_last_alarm)
        {
            in_same_room_count++;
            if (in_same_room_count > n)
            {
                state = MOVING_SAME_ROOM;
                in_same_room_count = 0; // maybe???
            }
        }
        else
        {
            in_same_room_count = 0;
            room_last_alarm = current_room;
        }
        break;
    }
    break;
// ... other states ...
}
... and so on. I'm not saying that the above logic is correct, but it should point you in the right direction.
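To make the timer part concrete, here is a minimal sketch of the single-threaded message loop described above. The names (EventLoop, Stimulus, handle) and the 5-second timeout are made up for illustration and are not from the original post. The FSM thread blocks on a condition variable with a deadline: if a sensor message arrives first, the wait wakes up and the pending timeout is effectively cancelled; otherwise the timeout itself becomes the TIMER_POP stimulus.
#include <chrono>
#include <condition_variable>
#include <deque>
#include <mutex>

enum class Stimulus { ALARM, TIMER_POP };

class EventLoop
{
public:
    void post(Stimulus s)                     // called from the sensor/receiver thread
    {
        { std::lock_guard<std::mutex> lk(m_); q_.push_back(s); }
        cv_.notify_one();
    }

    void run()                                // the single FSM thread
    {
        for (;;)
        {
            std::unique_lock<std::mutex> lk(m_);
            // Wait for an external event, but at most until the timer deadline.
            bool got_event = cv_.wait_for(lk, std::chrono::seconds(5),
                                          [this] { return !q_.empty(); });
            Stimulus s = got_event ? pop_locked() : Stimulus::TIMER_POP;
            lk.unlock();
            handle(s);                        // the switch shown above
        }
    }

private:
    Stimulus pop_locked() { Stimulus s = q_.front(); q_.pop_front(); return s; }
    void handle(Stimulus);                    // state machine logic goes here

    std::mutex m_;
    std::condition_variable cv_;
    std::deque<Stimulus> q_;
};
Because the deadline is re-armed on every pass through the loop, an incoming sensor event simply pre-empts the timeout; there is no separate timer thread that needs to be stopped.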

Related

Is there a limit to the number of created events?

I'm developing a C++14 Windows DLL on VS2015 that runs on all Windows version >= XP.
TL;DR
Is there a limit to the number of events, created with CreateEvent, with different names of course?
Background
I'm writing a thread pool class.
The class interface is simple:
void AddTask(std::function<void()> task);
A task is added to a queue of tasks, and waiting workers (a vector<thread>) pick tasks up when available.
Requirement
Wait (block) for a task for a little while before continuing with the flow. Meaning, some users of ThreadPool, after calling AddTask, may want to wait for a while (say 1 second) for the task to end before continuing with the flow. If the task is not done yet, they will continue with the flow anyway.
Problem
The ThreadPool class cannot provide a Wait interface. It is not its responsibility.
Solution
ThreadPool will SetEvent when a task is done.
Users of ThreadPool will wait (or not, depending on their needs) for the event to be signaled.
So I've changed the return value of ThreadPool::AddTask from void to int, where int is a unique task ID which is essentially the name of the event to be signaled when the task is done.
Question
I don't expect more than ~500 tasks, but I'm afraid that creating hundreds of events might not be possible, or might even be bad practice.
So, is there a limit? Or a better approach?
Of course there is a limit (if nothing else, at some point the system runs out of memory).
In reality, the limit is around 16 million per process.
You can read more details here: https://blogs.technet.microsoft.com/markrussinovich/2009/09/29/pushing-the-limits-of-windows-handles/
You're asking the wrong question. Fortunately you gave enough background to answer your real question. But before we get to that:
First, if you're asking what's the maximum number of events a process can open or a system can hold, you're probably doing something very very wrong. Same goes for asking what's the maximum number of files a process can open or what's the maximum number of threads a process can create.
You can create 50, 100, 200, 500, 1000... but where does it stop? If you're even considering creating that many of them that you have to ask about a limit, you're on the wrong track.
Second, the answer depends on too many implementation details: OS version, amount of RAM installed, registry settings, and maybe more. Other programs running also affect that "limit".
Third, even if you knew the limit - even if you could somehow calculate it at runtime based on all the relevant factors - it wouldn't allow you to do anything that you can't already do now.
Let's say you find out the limit is L and you have created exactly L events by now. Another task comes in. What do you do? Throw away the task? Execute the task without signaling an event? Wait until there are fewer than L events and only then create an event and start executing the task? Crash the process?
Whatever you decide, you can do just the same when CreateEvent fails. All of this is completely pointless. And this is yet another indication that you're asking the wrong question.
But maybe the most wrong thing you're doing is saying "the thread pool class can't provide wait because it's not its responsibility, so let's have the thread pool class provide an event for each task that the thread pool will signal when the task ends" (in paraphrase).
It looks like by the end of the sentence you forgot the premise from the beginning: It's not the thread pool's responsibility!
If you want to wait for the task to finish, have the task itself signal when it's done. There's no reason to complicate the thread pool because someone sometimes wants to wait on tasks. Signaling that the task is done is the task's job:
event evt;            ///// this
thread_pool.queue([evt] {
    // whatever
    evt.signal();     ///// and this
});

auto reason = wait(evt, 1s);
if (reason == timeout) {
    log("bummer");
}
The event class could be anything you want: a Windows event, an std::promise/std::future pair, or anything else.
This is so simple and obvious.
Complicating the thread pool infrastructure, taking up valuable system resources for nothing, and signaling synchronization primitives even when no one's listening just to save the two marked code lines above in the few cases where you actually want to wait for the task is unjustifiable.
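As a concrete illustration of that pattern, here is a sketch using an std::promise/std::future pair. The ThreadPool stand-in and the helper function are made up for this example and are not the asker's actual class.
#include <chrono>
#include <functional>
#include <future>
#include <iostream>
#include <memory>
#include <thread>

// Stand-in pool just so the sketch is self-contained; the real one queues tasks for workers.
struct ThreadPool
{
    void AddTask(std::function<void()> task) { std::thread(std::move(task)).detach(); }
};

void wait_briefly_for_task(ThreadPool& pool)
{
    // shared_ptr because std::function needs a copyable callable and std::promise is move-only
    auto done = std::make_shared<std::promise<void>>();
    std::future<void> finished = done->get_future();

    pool.AddTask([done] {
        // ... whatever the task does ...
        done->set_value();                    // the task itself signals completion
    });

    // The caller decides whether and how long to wait.
    if (finished.wait_for(std::chrono::seconds(1)) == std::future_status::timeout)
        std::cout << "task still running, moving on\n";
}
The thread pool itself stays untouched; only callers that actually care about completion pay for the synchronization.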

Qt: How to make one thread wait for a temporary roadblock, and temporarily increase another thread's priority to remove the roadblock?

I have two threads:
GUI, which does the typical GUI stuff and manages a bunch of flags that affect the Processing thread
Processing, which handles realtime data on a 30Hz period forever
There are lots of examples of how to have one thread wait for another to finish, but none for how to make a temporary roadblock without killing the thread.
There's a function in my GUI thread that contains this:
Scene* scene = getSceneToFadeFrom();
scene->setSelected(false);
///TODO: wait until (!scene->processing)
fadeFrom = scene->dmx;
and one in my Processing thread that contains this while looping through a QList:
if(scene->getSelected())
{
    scene->processing = true;
    scene->run(); // updates scene->dmx
    scene->processing = false;
}
If this were an embedded project on bare metal, I could use the global interrupt enable flag in place of scene->processing (invert the logic) and be done, which dedicates the entire CPU to that task at the expense of all others.
But because this is a desktop project with an operating system to play nice with, how can I achieve the same effect within this project? Basically, pause the GUI thread at that point until scene->processing == false (which it might be already) and guarantee that the Processing thread is actually running while the GUI thread waits for it.
And here's what I came up with. It was actually an XY problem. I'm surprised that I didn't think of this right away because I had already done something similar for deleting a Scene:
GUI thread:
//(sceneToReplace != 0) means there's something for Processing to do
sceneToReplace = getSceneToFadeFrom();
if(sceneToReplace)
{
    sceneToReplace->setSelected(false);
}
Processing thread, same class:
if(sceneToReplace)
{
    fadeFrom = sceneToReplace->dmx;
    sceneToReplace = 0;
}
and I don't even need the processing flag anymore!
fadeFrom gets set a little later than in the original version, but it's not actually needed until then anyway.
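If the same handoff ever needs to be spelled out without relying on any other synchronization, one way to express it is with an atomic pointer. This is only a sketch with stand-in types and members, not the original classes:
#include <atomic>

struct Scene { void setSelected(bool) {} int dmx = 0; };   // minimal stand-in for the real class

std::atomic<Scene*> sceneToReplace{nullptr};
int fadeFrom = 0;                                          // stand-in for the real member

// GUI thread: publish the scene to hand over.
void requestFade(Scene* scene)
{
    if (scene)
    {
        scene->setSelected(false);
        sceneToReplace.store(scene, std::memory_order_release);
    }
}

// Processing thread: claim the request exactly once, then clear the marker.
void checkFadeRequest()
{
    if (Scene* s = sceneToReplace.exchange(nullptr, std::memory_order_acq_rel))
    {
        fadeFrom = s->dmx;
    }
}
The exchange makes the "is there something for Processing to do?" check and the reset a single step, so the two threads cannot both act on the same request.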

reacting to all orthogonal states having finished in Boost.statechart

I'm working on a robot control program that is based on a state machine. While the program uses the Qt State Machine Framework, I also attempted to implement it using Boost.statechart (BS) as a theoretical exercise and a way to learn and evaluate the library.
In the Qt version I used the following pattern in several places: a compound state has parallel nested sub-graphs, each of which eventually reaches a final state. When all parallel sub-states finish, the parent state emits the "finished()" signal, which causes the machine to transition to the next top-level state. E.g. (beware: pseudo diagram):
Idle -calibRequest-> Calibrate( calibrate_camera | calibrate_arm ) -finished-> Idle
and the calibrate_* states in turn have nested states inside them, like S -trigger[calibrated?]-> F, where F is a final state. When both calibrate_* states reach their respective F states, the finished signal causes the state machine to transition into Idle.
Qt's parallel child states are analogous to BS's orthogonal nested states. At first I thought "termination" was BS's analogue to final states, but in fact it isn't. It's more like "terminate the state machine unless there is still some orthogonal thing going on somewhere": once you terminate all orthogonal states, the parent state terminates as well without any chance to transit. Posting events upon termination doesn't help either, since there is no state these events could be delivered to.
I ended up implementing "final states" which post a notification event when reached, and reacting to this event in the parent state, checking whether all orthogonal states have reached their final states and transiting then. Which is basically a reimplementation of Qt State Machine's approach, but it has to be redone each time I need this pattern. But maybe I'm just so used to one way of achieving this effect that I don't see an alternative?
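For reference, the hand-rolled pattern described above can be sketched roughly as follows. The state and event names are invented for the example and the actual calibration logic is omitted, so treat this as an outline rather than working robot code.
#include <boost/mpl/list.hpp>
#include <boost/statechart/custom_reaction.hpp>
#include <boost/statechart/event.hpp>
#include <boost/statechart/simple_state.hpp>
#include <boost/statechart/state.hpp>
#include <boost/statechart/state_machine.hpp>
#include <boost/statechart/transition.hpp>

namespace sc = boost::statechart;
namespace mpl = boost::mpl;

struct EvCalibRequest : sc::event<EvCalibRequest> {};
struct EvCamDone : sc::event<EvCamDone> {};
struct EvArmDone : sc::event<EvArmDone> {};
struct EvRegionFinished : sc::event<EvRegionFinished> {}; // posted by each hand-rolled "final" state

struct Idle;
struct Calibrate;
struct Robot : sc::state_machine<Robot, Idle> {};

struct Idle : sc::simple_state<Idle, Robot>
{
    typedef sc::transition<EvCalibRequest, Calibrate> reactions;
};

struct CalibCamera;
struct CalibArm;

// Parent of two orthogonal regions; it counts the notifications itself.
struct Calibrate : sc::simple_state<Calibrate, Robot, mpl::list<CalibCamera, CalibArm>>
{
    typedef sc::custom_reaction<EvRegionFinished> reactions;
    int finished = 0;
    sc::result react(const EvRegionFinished&)
    {
        return (++finished == 2) ? transit<Idle>() : discard_event();
    }
};

struct CameraFinal;
struct CalibCamera : sc::simple_state<CalibCamera, Calibrate::orthogonal<0>>
{
    typedef sc::transition<EvCamDone, CameraFinal> reactions;
};
// Hand-rolled "final" state: entering it notifies the parent.
struct CameraFinal : sc::state<CameraFinal, Calibrate::orthogonal<0>>
{
    CameraFinal(my_context ctx) : my_base(ctx) { post_event(EvRegionFinished()); }
};

struct ArmFinal;
struct CalibArm : sc::simple_state<CalibArm, Calibrate::orthogonal<1>>
{
    typedef sc::transition<EvArmDone, ArmFinal> reactions;
};
struct ArmFinal : sc::state<ArmFinal, Calibrate::orthogonal<1>>
{
    ArmFinal(my_context ctx) : my_base(ctx) { post_event(EvRegionFinished()); }
};
The counting in Calibrate is exactly the boilerplate that Qt's finished() signal provides for free, which is the point of the question.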

Resetting Threaded Events - C++

Let's say that I have a switch statement in my thread function that evaluates triggered events. Each case is a different event. Is it better to put the call to ResetEvent at the end of the case, or at the beginning? It seems to me that it should go at the end, so that the event cannot be triggered again until the thread has finished processing the previous event. If it is placed at the beginning, the event could be triggered again while the previous one is being processed.
Yes, I think that is the way to go. Create a manual-reset event (second parameter of the CreateEvent API) so that the event is not automatically reset after it is set.
If you handle incoming traffic using a single Event object (implying you have no inbound queue), you will miss events. Is this really what you want?
If you want to catch all events, a full-blown producer-consumer queue would be a better bet. Reference implementation for Boost.Thread here.
One problem that comes up time and again with multi-threaded code is how to transfer data from one thread to another. For example, one common way to parallelize a serial algorithm is to split it into independent chunks and make a pipeline: each stage in the pipeline can be run on a separate thread, and each stage adds the data to the input queue for the next stage when it's done. For this to work properly, the input queue needs to be written so that data can safely be added by one thread and removed by another thread without corrupting the data structure.
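Along the lines of that reference, a minimal producer-consumer queue can be sketched with std::mutex and std::condition_variable instead of the Boost primitives; this is an illustrative sketch, not the linked implementation:
#include <condition_variable>
#include <mutex>
#include <queue>
#include <utility>

template <typename T>
class concurrent_queue
{
public:
    void push(T value)
    {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(value)); }
        cv_.notify_one();                  // wake one waiting consumer
    }

    // Blocks until an item is available, so no event is ever missed.
    T wait_and_pop()
    {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        T value = std::move(q_.front());
        q_.pop();
        return value;
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<T> q_;
};
The worker thread then loops on wait_and_pop() and handles each queued event in turn, instead of racing to observe a single auto- or manual-reset event.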

How to design a state machine in face of non-blocking I/O?

I'm using the Qt framework, which has non-blocking I/O by default, to develop an application that navigates through several web pages (online stores) and carries out different actions on those pages. I'm "mapping" a specific web page to a state machine which I use to navigate through that page.
This state machine has these transitions:
Connect, LogIn, Query, LogOut, Disconnect
and these states:
Start, Connecting, Connected, LoggingIn, LoggedIn, Querying, QueryDone, LoggingOut, LoggedOut, Disconnecting, Disconnected
Transitions from *ing to *ed states (Connecting->Connected) are due to asynchronous LoadFinished network events received from the network object when the currently requested URL has loaded. Transitions from *ed to *ing states (Connected->LoggingIn) are due to events sent by me.
I want to be able to send several events (commands) to this machine (like Connect, LogIn, Query("productA"), Query("productB"), LogOut, LogIn, Query("productC"), LogOut, Disconnect) at once and have it process them. I don't want to block waiting for the machine to finish processing all the events I sent to it. The problem is that they have to be interleaved with the above-mentioned network events informing the machine about the URL being downloaded. Without interleaving, the machine can't advance its state (and process my events), because advancing from *ing to *ed occurs only after receiving a network event.
How can I achieve my design goal?
EDIT
The state machine I'm using has its own event loop, and events are not queued in it, so they could be missed by the machine if they arrive while the machine is busy.
Network I/O events are not posted directly to either the state machine or the event queue I'm using. They are posted to my code (a handler) and I have to handle them. I can forward them as I wish, but please keep remark no. 1 in mind.
Take a look at my answer to this question, where I described my current design in detail. The question is whether and how I can improve this design by making it
More robust
Simpler
Sounds like you want the state machine to have an event queue. Queue up the events, start processing the first one, and when that completes pull the next event off the queue and start on that. So instead of the state machine being driven by the client code directly, it's driven by the queue.
This means that any logic which involves using the result of one transition in the next one has to be in the machine. For example, if the "login complete" page tells you where to go next. If that's not possible, then the event could perhaps include a callback which the machine can call, to return whatever it needs to know.
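A rough sketch of that suggestion; the class and member names are invented for illustration, and the stubs stand in for the actual Qt network requests:
#include <queue>
#include <string>
#include <utility>

// Commands the client queues up front; each may carry a payload.
struct Command { std::string name; std::string argument; };

class PageMachine
{
public:
    void enqueue(Command c)
    {
        pending_.push(std::move(c));
        if (idle_) startNext();            // kick the machine if it is currently waiting
    }

    // Called by the network handler when the requested page has finished loading.
    void onLoadFinished()
    {
        // ... move from the *ing state to the matching *ed state ...
        startNext();                       // then pull the next queued command
    }

private:
    void startNext()
    {
        if (pending_.empty()) { idle_ = true; return; }
        idle_ = false;
        Command c = std::move(pending_.front());
        pending_.pop();
        // ... issue the request corresponding to c (Connect, LogIn, Query, ...) ...
    }

    std::queue<Command> pending_;
    bool idle_ = true;
};
Everything runs in the Qt event loop's thread, so no locking is needed; the queue only decouples when commands are submitted from when they are executed.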
When asking this question I already had a working design, which I didn't want to write about so as not to skew the answers in any direction :) In this pseudo-answer I'm going to describe the design I have.
In addition to the state machine I have a queue of events. Instead of posting events directly to the machine I'm placing them in the queue. There is, however, a problem with network events, which are asynchronous and can arrive at any moment. If the queue is not empty and a network event arrives, I can't place it in the queue, because the machine would be stuck waiting for it before processing the events already in the queue. And the machine would wait forever, because this network event would be sitting behind all the events placed in the queue earlier.
To overcome this problem I have two types of messages: normal and priority ones. Normal ones are those sent by me, and priority ones are all network ones. When I get a network event I don't place it in the queue; instead I send it directly to the machine. This way it can finish its current task and progress to the next state before pulling the next event from the queue.
This design only works because there is an exact 1:1 interleaving of my events and network events. Because of this, when the machine is waiting for a network event it is not busy doing anything (so it is ready to accept it and does not miss it), and vice versa: when the machine waits for my task it is only waiting for my task and not for another network event.
I asked this question in the hope of finding a simpler design than what I have now.
Strictly speaking, you can't. Because you only have the state "Connecting", you don't know whether you need to log in afterwards. You'd have to introduce a state "ConnectingWithIntentToLogin" to represent the result of a "Connect, then Login" event from the Start state.
Naturally there will be a lot of overlap between the "Connecting" and the "ConnectingWithIntentToLogin" states. This is most easily achieved by a state machine architecture that supports state hierarchies.
--- edit ---
Reading your later reactions, it's now clear what your actual problem is.
You do need extra state, obviously, whether that's ingrained in the FSM or outside it in a separate queue. Let's follow the model you prefer, with extra events in a queue. The trick here is that you're wondering how to "interleave" those queued events vis-a-vis the realtime events. You don't: events from the queue are actively extracted when entering specific states. In your case, those would be the "*ed" states like "Connected". Only when the queue is empty would you stay in the "Connected" state.
If you don't want to block, that means you don't care about the network replies. If on the other hand the replies interest you, you have to block waiting for them. Trying to design your FSM otherwise will quickly lead to your automaton's size reaching infinity.
How about moving the state machine to a different thread, i.e. a QThread? I would implement an input queue in the state machine so I could send queries without blocking, and an output queue to read the results of the queries. You could even call back a slot in your main thread via connect(...) when the result of a query arrives; Qt is thread-safe in this regard.
This way your state machine could block as long as it needs to without blocking your main program.
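A minimal sketch of that suggestion using the usual worker-object pattern; PageWorker is invented here, and the Gui class with queryRequested/onQueryDone slots and signals is assumed to exist, so adapt the names to the real code.
#include <QObject>
#include <QString>
#include <QThread>

// Lives in the worker thread; its slots run there, so they may block freely.
class PageWorker : public QObject
{
    Q_OBJECT
public slots:
    void runQuery(const QString& product)
    {
        // ... drive the page state machine, blocking on I/O as needed ...
        emit queryDone(product, QString("result"));
    }
signals:
    void queryDone(const QString& product, const QString& result);
};

// In the main (GUI) thread:
void startWorker(Gui* gui)
{
    QThread* thread = new QThread(gui);
    PageWorker* worker = new PageWorker;
    worker->moveToThread(thread);
    QObject::connect(thread, &QThread::finished, worker, &QObject::deleteLater);
    // Cross-thread connections default to Qt::QueuedConnection, which provides
    // the input and output queues mentioned above for free.
    QObject::connect(gui, &Gui::queryRequested, worker, &PageWorker::runQuery);
    QObject::connect(worker, &PageWorker::queryDone, gui, &Gui::onQueryDone);
    thread->start();
}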
Sounds like you just want to do a list of blocking I/O in the background.
So have a thread execute:
while( !commands.empty() )
{
    auto command = commands.front();
    commands.pop_front();
    switch( command )
    {
    case Connect:
        DoBlockingConnect();
        break;
    // ... other commands ...
    }
}
NotifySenderDone();