For unit testing, I'm trying to record all the state transitions after I kick off a state machine event.
E.g., if I post_event A to the fifo_scheduler of an async_state_machine, the state machine will go through states B, C, then back to D.
Without being able to record all the intermediate states, my unit test can only check that the machine ended up in state D once it was done :-(
The only thing I can think of is to modify the react methods or constructors of all the states I create (derived from simple_state) so they do the recording. This seems a bit hackish when I really want to hook into the async_state_machine just before it calls a state's react() method...
Why don't you? Create a new class that extends async_state_machine and add your desired hooks to it. If access is a problem (it probably will be), use the ever-spectacular #define private public (or protected) hack before including statechart.
I've done something similar to add local variables to the history of a state and add a new kind of state-ctor so I have for-real full history.
Added a different hack. Each state is created just before it's used by the Boost state machine (then destroyed after it moves to the next state... seems so inefficient), so each state was derived from another class that has a callback in its constructor.
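Roughly, the hack looks like this (RecordedState and onEnter() are hypothetical names; the concrete states still derive from boost::statechart::simple_state as usual):

#include <functional>
#include <string>

// Base class whose constructor fires a callback every time boost::statechart
// constructs a state object, so a test can record the sequence of entries.
struct RecordedState {
    // Installed once by the unit test, e.g. to push state names into a vector.
    static std::function<void(const std::string&)>& onEnter() {
        static std::function<void(const std::string&)> callback;
        return callback;
    }
protected:
    explicit RecordedState(const std::string& name) {
        if (onEnter()) onEnter()(name);
    }
};

// Usage sketch:
// struct StateB : sc::simple_state<StateB, Machine>, RecordedState {
//     StateB() : RecordedState("StateB") {}
// };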
Still seems kinda hackish...wish boost++ had a cleaner way to do this :-P
I am new to Unreal (switched from Unity) and still have some trouble understanding the main concepts, in this case how to access other objects and their components via C++ scripting. I am using UE5 (but I guess solutions for UE4 should also work fine).
My project looks as follows:
In my scene I have a "Target" Actor (a Blueprint class) that has a self-written C++ "movement" component with some public functions to update its position.
Moreover, I have an "Experiment" BP Actor that has a "TrialProcedure" C++ component attached.
Here is what I want to do: I want to call the Target's movement component's update-position function from this Actor's component.
I guess once I can access the Target Actor, I can use GetComponentByClass() to access the component I need and then call its method. But how do I get access to that other actor without using Blueprints? The Actor is already in the level, so I don't want to spawn it from code.
Thanks in advance!
That is a sub-optimal solution. Use a collision and get the Other Actor, then try to cast it to your desired class if it's a collision event. If the cast fails, don't do anything; if it succeeds, pull a pin from it and execute your functions.
A better alternative to your current solution would be to use the Get All Actors Of Class function if you only have a single target. Just use the Get (a copy) node and you'll have your target Blueprint. There's a C++ version of the function as well. If you have more than one such actor, then you'll have to filter for the one you want.
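Roughly, the C++ version of that (Get All Actors Of Class, then fetch the component) might look like the sketch below; ATargetActor, UTargetMovementComponent, UpdatePosition() and UTrialProcedureComponent are hypothetical names standing in for your own classes:

#include "Kismet/GameplayStatics.h"
// ...plus the headers for your own ATargetActor / UTargetMovementComponent classes.

void UTrialProcedureComponent::UpdateTargetPosition()
{
    // Find every actor of the Target class currently placed in the level.
    TArray<AActor*> FoundActors;
    UGameplayStatics::GetAllActorsOfClass(GetWorld(), ATargetActor::StaticClass(), FoundActors);
    if (FoundActors.Num() == 0)
    {
        return; // No target in the level.
    }

    // With a single target, just take the first entry.
    AActor* Target = FoundActors[0];

    // Fetch the custom movement component and call its public update function.
    if (UTargetMovementComponent* Movement = Target->FindComponentByClass<UTargetMovementComponent>())
    {
        Movement->UpdatePosition();
    }
}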
I am making an application in C++ that runs a simulation for a health club. The user simply enters the simulation data at the beginning (3 integer values) and presses run. After that there is no user input - so very simple.
After starting the simulation, a lot of the logic is deep down in lower classes, but many of them need to print simple output messages to the UI. Returning a message is not possible, as the objects need to print to the UI and then keep working.
I was going to pass a reference to the UI object to all classes that need it but I end up passing it around quite a lot - there must be a better way.
What I really need is something that makes calling the UI's printOutput(string) function as easy as (or not much more difficult than) cout << string;
The UI also has a displayConnectionStatus(bool[] connections) method.
Bear in mind the UI inherits from an abstract 'UserInterface' class so simple console UIs and GUIs can be swapped in and out easily.
How do you suggest I implement this link to the UI?
If I were to use a global function, how can I redirect it to call methods of the UserInterface implementation that I selected to use?
Don't be afraid of globals.
Global objects hurt encapsulation, but for a targeted solution with no concern for immediate reusability, globals are just fine.
Expose a global object that processes events from your simulation. You can then choose to print the events, send them by e-mail, render them with OpenGL or whatever you fancy. Make a uniform interface that catches what happens inside the simulation via report callbacks, and then you can subclass this global object to suit your needs.
If the object wasn't global, you'd be passing a pointer around all the codebase.
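To answer the question about redirecting a global function to the UserInterface implementation you selected, a minimal sketch might look like this (setUI() and uiPrint() are hypothetical helper names; UserInterface is the abstract class you already have):

#include <string>

class UserInterface {
public:
    virtual ~UserInterface() = default;
    virtual void printOutput(const std::string& message) = 0;
};

namespace {
    UserInterface* g_ui = nullptr; // Set once at startup to the chosen implementation.
}

void setUI(UserInterface& ui) { g_ui = &ui; }

// Any class deep in the simulation can call this almost as easily as cout <<.
void uiPrint(const std::string& message) {
    if (g_ui) {
        g_ui->printOutput(message);
    }
}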
I would suggest going with a logging framework, i.e. your own class LogMessages, with functions that take data and log it, whether to a UI, a file, over the network, or anything else.
Each class that needs logging can then use your logging class.
This way you avoid globals and get a generic solution. Also have a look at http://www.pantheios.org/, an open-source C/C++ diagnostic logging API library; you may use that as well...
I am trying to figure out how to use a node graph for processing a set of data.
It is for an application that manipulates sound data, much like if you had a bunch of pedals for your guitar.
You have some nodes with predefined procedures connected to each other in a directed graph.
Each takes a turn to process the data, and when one is finished it gives a signal to the next node to do its thing. The idea is that you piece these nodes together using the UI.
I am using Qt for creating the UI, and as such I was looking through its documentation to see if there was something I could use for the above-mentioned problem. I found the Qt state machine, and from what I can read it seems to do what I need: a state is entered, you do some processing, a finished signal is emitted when it is done, and the next state in the graph is started. The fact that you can nest states, giving me the ability to create new nodes by combining existing ones, also seems attractive.
However the state machine was created for changing the attributes of widgets (changing their state) and not for wrapping procedures. For example, a button is pressed and the state machine changes the state of another widget, and e.g. if the button is released the state is swapped back.
So, could anyone with more experience with Qt, the state machine, or processing by node graphs give me a hint as to whether tweaking the state machine to wrap my procedures will work? Or is there something else in the Qt library I could use?
I used QStateMachine for online message processing (online in the sense of an online algorithm) and it worked fine; there were no restrictions just because the original idea was to modify widgets.
However, personally I would not use it for your project, because a state machine is not exactly what you describe. It might be possible to bend it to your needs, but it would certainly be weird. A better solution would be a nice polymorphic OO model in which your "effects" share a base class, plus a decoupled graph implementation to connect them. You can use Qt signals to signal that a node has finished so the graph can take the next step. It is also easier to build your custom graph from data than to create the states and transitions for the state machine dynamically.
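For what it's worth, a minimal sketch of that effect-node idea might look like this (EffectNode, process() and finished() are hypothetical names, and QByteArray merely stands in for whatever audio buffer type you actually use):

#include <QByteArray>
#include <QObject>

class EffectNode : public QObject {
    Q_OBJECT
public slots:
    // Each concrete effect overrides this, transforms the buffer, then emits
    // finished() so the next node in the graph takes over.
    virtual void process(const QByteArray& buffer) = 0;
signals:
    void finished(const QByteArray& buffer);
};

class GainEffect : public EffectNode {
    Q_OBJECT
public slots:
    void process(const QByteArray& buffer) override {
        QByteArray out = buffer;
        // ... scale the samples in 'out' ...
        emit finished(out);
    }
};

// Wiring the graph is then just connecting signals to slots, e.g.:
// connect(nodeA, &EffectNode::finished, nodeB, &EffectNode::process);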
BRAND NEW to unit testing, I mean really new. I've read quite a bit and am moving slowly, trying to follow best practices as I go. I'm using MS-Test in Visual Studio 2010.
I have come up against a requirement that I'm not quite sure how to proceed on. I'm working on a component that's responsible for interacting with external hardware. There are a few more developers on this project and they don't have access to the hardware so I've implemented a "dummy" or simulated implementation of the component and moved as much shared logic up into a base class as possible.
Now this works fine as far as allowing them to compile and run the code, but it's not terribly useful for simulating the events and internal state changes needed for my unit tests (don't forget I'm new to testing).
For example, there are a couple events on the component that I want to test, however I need them to be invoked in order to test them. Normally to raise the event I would push a button on the hardware or shunt two terminals, but in the simulated object (obviously) I can't do that.
There are two concerns/requirements that I have:
I need to provide state changes and raise events for my unit tests
I need to provide state changes and raise events for my team to test dependencies on the component (e.g. a button on a WPF view becomes enabled when a certain hardware event occurs)
For the latter I thought about some complicated control panel dialog that would let me trigger events and generally simulate hardware operation and user interaction. This is complicated as it requires a component with no message pump to provide a window with controls. Stinky. Or another approach could be to implement the simulated component to take a "StateInfo" object that I could use to change the internals of the object.
This can't be a new problem; I'm sure many of you have had to do something similar to this and I'm just wondering what patterns or strategies you've used to accomplish it. I know I can access private fields with an accessor, but this doesn't really provide interactive changes (in the case of runtime simulation).
If there is an interface on the library you use to interact with the external hardware you can just create a mock object for it and raise events from that in your unit tests.
If there isn't, then you'll need to wrap the hardware calls in a wrapper class so you can mock it and provide the behaviours you want in your tests.
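For instance, a hand-rolled version of that wrapper might look like the sketch below (written in C++, but the same shape carries over to C# with an interface, events, and a mocking framework); IHardware, FakeHardware and SimulateButtonPress() are hypothetical names:

#include <functional>
#include <utility>
#include <vector>

// The hardware access the production code depends on, hidden behind an interface.
class IHardware {
public:
    virtual ~IHardware() = default;
    using ButtonPressedHandler = std::function<void()>;
    virtual void OnButtonPressed(ButtonPressedHandler handler) = 0;
};

// The simulated implementation shared with the rest of the team and the tests.
class FakeHardware : public IHardware {
public:
    void OnButtonPressed(ButtonPressedHandler handler) override {
        handlers_.push_back(std::move(handler));
    }
    // Test / control-panel hook: pretend the physical button was pushed.
    void SimulateButtonPress() {
        for (auto& handler : handlers_) handler();
    }
private:
    std::vector<ButtonPressedHandler> handlers_;
};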
For examples of how to raise events from mock objects have a look at Mocking Comparison - Raising Events
I hope that helps!
I have a program that (amongst other things) has a command line interface that lets the user enter strings, which will then be sent over the network. The problem is that I'm not sure how to connect the events, which are generated deep inside the GUI, to the network interface. Suppose for instance that my GUI class hierarchy looks like this:
GUI -> MainWindow -> CommandLineInterface -> EntryField
Each GUI object holds some other GUI objects and everything is private. Now the entryField object generates an event/signal that a message has been entered. At the moment I'm passing the signal up the class hierarchy so the CLI class would look something like this:
public:
sigc::signal<void, string> msgEntered;
And in the c'tor:
entryField.msgEntered.connect(sigc::mem_fun(this, &CLI::passUp));
The passUp function just emits the signal again for the owning class (MainWindow) to connect to until I can finally do this in the main loop:
gui.msgEntered.connect(sigc::mem_fun(networkInterface, &NetworkInterface::sendMSG));
Now this seems like a real bad solution. Every time I add something to the GUI I have to wire it up all through the class hierarchy. I do see several ways around this. I could make all objects public, which would allow me to just do this in the main loop:
gui.mainWindow.cli.entryField.msgEntered.connect(sigc::mem_fun(networkInterface, &NetworkInterface::sendMSG));
But that would go against the idea of encapsulation. I could also pass a reference to the network interface all over the GUI, but I would like to keep the GUI code as separate as possible.
It feels like I'm missing something essential here. Is there a clean way to do this?
Note: I'm using GTK+/gtkmm/LibSigC++, but I'm not tagging it as such because I've had pretty much the same problem with Qt. It's really a general question.
The root problem is that you're treating the GUI like it's a monolithic application, only the GUI is connected to the rest of the logic via a bigger wire than usual.
You need to re-think the way the GUI interacts with the back-end server. Generally this means your GUI becomes a stand-alone application that does almost nothing by itself and talks to the server without any direct coupling between the internals of the GUI (i.e. your signals and events) and the server's processing logic. That is, when you click a button you may want it to perform some action, in which case you need to call the server, but nearly all the other events need only change state inside the GUI and do nothing to the server - not until you're ready, or the user wants some response, or you have enough idle time to make the calls in the background.
The trick is to define an interface for the server totally independently of the GUI. You should be able to change GUIs later without modifying the server at all.
This means you will not be able to have the events sent automatically, you'll need to wire them up manually.
Try the Observer design pattern. The link includes sample code as of now.
The essential thing you are missing is that you can pass a reference without violating encapsulation if that reference is cast as an interface (abstract class) which your object implements.
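A minimal sketch of that idea, reusing names from the question (ServerInterface is a hypothetical abstraction; sendMSG matches the NetworkInterface call above):

#include <string>

// The abstraction the GUI is allowed to know about.
class ServerInterface {
public:
    virtual ~ServerInterface() = default;
    virtual void sendMSG(const std::string& msg) = 0;
};

// The concrete network code implements it...
class NetworkInterface : public ServerInterface {
public:
    void sendMSG(const std::string& msg) override { /* write to the socket */ }
};

// ...and the GUI is constructed against the abstraction, never the concrete class.
class GUI {
public:
    explicit GUI(ServerInterface& server) : server_(server) {}
    void onMsgEntered(const std::string& msg) { server_.sendMSG(msg); }
private:
    ServerInterface& server_;
};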
Short of having some global pub/sub hub, you aren't going to get away from passing something up or down the hierarchy. Even if you abstract the listener to a generic interface or a controller, you still have to attach the controller to the UI event somehow.
With a pub/sub hub you add another layer of indirection, but there's still duplication - the entryField still says 'publish message-ready event' and the listener/controller/network interface says 'listen for message-ready event', so there's a common event ID that both sides need to know about, and if you're not going to hard-code that in two places then it needs to be passed into both files (though as a global it's not passed as an argument, which in itself isn't any great advantage).
I've used all four approaches - direct coupling, controller, listener and pub-sub - and in each successor you loosen the coupling a bit, but you don't ever get away from having some duplication, even if it's only the id of the published event.
It really comes down to variance. If you find you need to switch to a different implementation of the interface, then abstracting the concrete interface as a controller is worthwhile. If you find you need to have other logic observing the state, change it to an observer. If you need to decouple it between processes, or want to plug into a more general architecture, pub/sub can work, but it introduces a form of global state, and isn't as amenable to compile-time checking.
But if you don't need to vary the parts of the system independently it's probably not worth worrying about.
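For illustration, a tiny hand-rolled hub might look like this (EventHub, publish() and subscribe() are hypothetical names); note the shared "msgEntered" ID that both sides have to agree on:

#include <functional>
#include <map>
#include <string>
#include <utility>
#include <vector>

class EventHub {
public:
    using Handler = std::function<void(const std::string&)>;

    void subscribe(const std::string& eventId, Handler handler) {
        handlers_[eventId].push_back(std::move(handler));
    }
    void publish(const std::string& eventId, const std::string& payload) {
        for (auto& handler : handlers_[eventId]) handler(payload);
    }
private:
    // The event ID is the duplicated knowledge mentioned above.
    std::map<std::string, std::vector<Handler>> handlers_;
};

// EntryField:  hub.publish("msgEntered", text);
// Main loop:   hub.subscribe("msgEntered", [&](const std::string& s) { network.sendMSG(s); });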
As this is a general question I’ll try to answer it even though I’m “only” a Java programmer. :)
I prefer to use interfaces (abstract classes, or whatever the corresponding mechanism is in C++) on both sides of my programs. On one side there is the program core that contains the business logic. It can generate events that GUI classes, for example, can receive, e.g. (for your example) "stringReceived". The core, on the other hand, implements a "UI listener" interface which contains methods like "stringEntered".
This way the UI is completely decoupled from the business logic. By implementing the appropriate interfaces you can even introduce a network layer between your core and your UI.
[Edit] In the starter class for my applications there is almost always this kind of code:
Core core = new Core(); /* Core implements GUIListener */
GUI gui = new GUI(); /* GUI implements CoreListener */
core.addCoreListener(gui);
gui.addGUIListener(core);
[/Edit]
You can decouple ANY GUI and communicate easily with messages using templatious virtual packs. Check out this project also.
In my opinion, the CLI should be independent of the GUI. In an MVC architecture, it should play the role of the model.
I would put in a controller which manages both the EntryField and the CLI: each time the EntryField changes, the CLI gets invoked, and all of this is managed by the controller.