How to build a graph of specific function calls? - c++

I have a project where I want to dynamically build a graph of specific function calls. For example, if I have two template classes, A and B, where A has a tracked method (saved as a graph node) and B has three methods (a non-tracked method, a tracked method, and a tracked method which calls A's tracked method), then I want to register only the tracked method calls as nodes in the graph object. The graph object could be a singleton.
template <class TA>
class A
{
public:
    void runTracked()
    {
        // do stuff
    }
};

template <class TB>
class B
{
public:
    void runNonTracked()
    {
        // do stuff
    }

    void runTracked()
    {
        // do stuff
    }

    void callATracked()
    {
        auto a = A<TB>();
        a.runTracked();
        // do stuff
    }
};
void root()
{
    auto b1 = B<int>();
    auto b2 = B<double>();
    b1.runTracked();
    b2.runNonTracked();
    b2.callATracked();
}
int main()
{
    auto b = B<int>();
    b.runTracked();
    root();
    return 0;
}
This should produce a graph object similar to the one below:
root()
|-- B<int>::runTracked()
\-- B<double>::callATracked()
    \-- A<double>::runTracked()
The set of tracked functions should be adjustable. Ideally, the root would be adjustable as well (as in the above example).
Is there an easy way to achieve this?
I was thinking about introducing a macro for the tracked functionalities and a singleton graph object which would register the tracked functions as nodes. However, I'm not sure how to determine which is the last tracked function in the call stack, or (from the graph's perspective) which graph node should be the parent when I want to add a new node.

In general, you have two strategies:
1. Instrument your application with some sort of logging/tracing framework, and then try to replicate some sort of tracing mixin-like functionality to apply global/local tracing depending on which parts of the code you apply the mixins to.
2. Recompile your code with some sort of tracing instrumentation feature enabled for your compiler or runtime, and then use the associated compiler/runtime-specific tracing tools/frameworks to transform/sift through the data.
For 1, this will require you to manually insert code (something like _penter/_pexit for MSVC) or create some sort of ScopedLogger that would (hopefully!) log asynchronously to some external file/stream/process. This is not necessarily a bad thing, as having a separate process control the trace tracking would probably be better in the case where the traced process crashes. Regardless, you'd probably have to refactor your code, since C++ does not have great first-class support for metaprogramming to refactor/instrument code at a module/global level. This is not an uncommon pattern for larger applications, though; for example, AWS X-Ray is a commercial tracing service (although, typically, I believe it fits the use case of tracing network and RPC calls rather than in-process function calls).
For 2, you can try something like uftrace, or something compiler-specific: MSVC has various tools like Performance Explorer, LLVM has XRay, GCC has gprof. You essentially compile in a sort of "debug++" mode, or some special OS/hardware/compiler magic automatically inserts tracing instructions or markers that help the runtime trace your desired code. These tracing-enabled programs/runtimes typically emit a unique tracing format that must then be read by a matching tracing-format reader.
Finally, dynamically building the graph in memory is a similar story. Like the tracing strategies above, there are a variety of application- and runtime-level libraries to help trace your code that you can interact with programmatically. Even the simplest version, creating ScopedTracer objects that log to a tracing file, can then be fitted with a consumer thread that owns and updates the trace graph with whatever latency and data-durability requirements you have.
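To illustrate the ScopedTracer idea and the macro approach from the question, here is a minimal sketch (all names are illustrative, and it is only thread-aware to the extent of the thread_local parent tracking). The trick for finding the parent node is a thread_local pointer to the innermost tracked scope currently open on the calling thread:
#include <memory>
#include <string>
#include <vector>

struct Node
{
    std::string name;
    std::vector<std::unique_ptr<Node>> children;
};

class CallGraph
{
public:
    static CallGraph& instance()
    {
        static CallGraph g;
        return g;
    }

    Node root{"root", {}};
    static thread_local Node* current; // innermost open tracked scope
};

thread_local Node* CallGraph::current = &CallGraph::instance().root;

// RAII guard: adds a node under the current parent on entry and
// restores the previous parent on exit.
class ScopedTracer
{
public:
    explicit ScopedTracer(std::string name)
        : _parent(CallGraph::current)
    {
        auto child = std::make_unique<Node>();
        child->name = std::move(name);
        CallGraph::current = child.get();
        _parent->children.push_back(std::move(child));
    }

    ~ScopedTracer() { CallGraph::current = _parent; }

private:
    Node* _parent;
};

// __PRETTY_FUNCTION__ is GCC/Clang-specific; MSVC has __FUNCSIG__ instead.
#define TRACK() ScopedTracer tracer_(__PRETTY_FUNCTION__)
Each tracked method then simply starts with TRACK();. Nested tracked calls see their caller's node as CallGraph::current, so the graph nests automatically, and non-tracked methods stay invisible.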
Edit: If you would like, OpenTelemetry/Jaeger may be a good place to start visualizing traces once you have extracted the data (and you can also report directly to it if you want), although it prefers a tree presentation format; see the Jaeger documentation for the Trace Detail View.

Related

OOP for global system/task monitoring class

I'm trying to create a performance monitor of sorts to run on a Particle board (STM32 based). I'm used to programming in C, so the OOP approach is a bit new, but I think it would fit well here.
For the purpose of this question let's assume I have two types of monitors:
Frequency. The application can call a "tick" method of the monitor to calculate the time since it last ran and store it.
Period. The application can call "start" and "stop" methods of the monitor to calculate how long a process takes to run and store it.
What I would like to do is to create instances of these monitors throughout my application and be able to report on the stats of all monitors of all types from the main module.
I've read about the singleton design pattern which seems like it might be what I need but I'm not sure and I'm also concerned about thread safety with that.
I'm thinking I will create a "StatMonitor" class and derived classes "FrequencyMonitor" and "PeriodMonitor". StatMonitor would be a singleton, and everywhere I wanted to create a new monitor I would request an instance of it and use that like so:
freqMonitor * task1FreqMonitor = StatMonitor::GetInstance()->Add_Freq_Monitor("Task1");
The StatMonitor would track all monitors I've added, and when I wanted to print the stats I could just call the printAll method, which would iterate over its array of monitors and request their results like so:
StatMonitor::GetInstance()->PrintAllStats();
Am I going down the right path?
Your path sounds good, except that FrequencyMonitor and PeriodMonitor should not derive from the class that "manages" all these monitors (let's call it MonitorPrinter).
MonitorPrinter should be a singleton and could look like this:
class MonitorPrinter
{
public:
    static MonitorPrinter& getInstance()
    {
        static MonitorPrinter monitorPrinter;
        return monitorPrinter;
    }

    void printAllStats()
    {
        for (const auto& [_, frequencyMonitor] : _frequencyMonitors)
            frequencyMonitor.print();
        for (const auto& [_, periodMonitor] : _periodMonitors)
            periodMonitor.print();
    }

    FrequencyMonitor& getFrequencyMonitor(std::string name)
    { return _frequencyMonitors[name]; }

    PeriodMonitor& getPeriodMonitor(std::string name)
    { return _periodMonitors[name]; }

private:
    MonitorPrinter() = default;

    std::map<std::string, FrequencyMonitor> _frequencyMonitors;
    std::map<std::string, PeriodMonitor> _periodMonitors;
};
(The const auto& [_, frequencyMonitor] is a structured binding).
FrequencyMonitor and PeriodMonitor should not have anything to do with singletons, and from your description, they need not be part of a class hierarchy either (as they have different interfaces). If you want, you can prevent users (other than the MonitorPrinter) from instantiating these classes using other techniques, but I won't elaborate on that here.
In short, there is no need to use OOP here. Use a singleton to provide (and keep track of) the monitors, and implement the monitors to your liking. Be wary of thread safety if this is relevant (the above is not thread-safe!).
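For completeness, here is a minimal sketch of what the two monitor classes could look like; the stored statistics are kept deliberately simple, and all names are illustrative:
#include <chrono>
#include <cstdio>

class FrequencyMonitor
{
public:
    // Called by the application; measures the time since the last tick.
    void tick()
    {
        const auto now = std::chrono::steady_clock::now();
        if (_hasPrevious)
            _lastInterval = now - _previous;
        _previous = now;
        _hasPrevious = true;
    }

    void print() const
    {
        const auto us = std::chrono::duration_cast<std::chrono::microseconds>(_lastInterval);
        std::printf("last tick interval: %lld us\n", static_cast<long long>(us.count()));
    }

private:
    std::chrono::steady_clock::time_point _previous{};
    std::chrono::steady_clock::duration _lastInterval{};
    bool _hasPrevious = false;
};

class PeriodMonitor
{
public:
    void start() { _start = std::chrono::steady_clock::now(); }
    void stop()  { _period = std::chrono::steady_clock::now() - _start; }

    void print() const
    {
        const auto us = std::chrono::duration_cast<std::chrono::microseconds>(_period);
        std::printf("measured period: %lld us\n", static_cast<long long>(us.count()));
    }

private:
    std::chrono::steady_clock::time_point _start{};
    std::chrono::steady_clock::duration _period{};
};
Usage then matches the MonitorPrinter above, e.g. MonitorPrinter::getInstance().getFrequencyMonitor("Task1").tick();.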

c++ best way to realise global switches/flags to control program behaviour without tying the classes to a common point

Let me elaborate on the title:
I want to implement a system that would allow me to enable/disable/modify the general behavior of my program. Here are some examples:
I could switch off and on logging
I could change if my graphing program should use floating or pixel coordinates
I could change if my calculations should be based upon some method or some other method
I could enable/disable certain aspects like maybe an extension API
I could enable/disable some basic integrated profiler (if I had one)
These are some made-up examples.
Now I want to know what the most common solution for this sort of thing is.
I could imagine this working with some sort of singleton class that gets instantiated globally, or some other globally available object. Another possibility would be just constexpr or other variables floating around in a namespace, again globally.
However, doing something like that globally feels like bad practice.
Second part of the question
This might sound like I can't decide what I want, but I want a way to modify all these switches/flags (or whatever they are actually called) in a single location, without tying any of my classes to it. I don't know if this is possible, however.
Why don't I want to do that? Well, I like to make my classes somewhat reusable, and I don't like tying classes together unless it's required by the DRY principle and/or inheritance. I basically couldn't get rid of the flags without modifying the possibly hundreds of classes that use them.
What I have tried in the past
Having it all as compiler defines. This worked reasonably well; however, I didn't like that I couldn't make it so that, if the flag file was gone, there were some default settings that would keep the classes themselves operational and changeable (through those default values).
Having it as a class and instancing it globally (system class). Worked OK; however, I didn't like instancing anything globally. Also, same problem as above.
Instancing the system class locally and passing it to the classes on construction. This was kind of cool, since I could make multiple instruction sets. However, at the same time that somewhat ruined the point, since things that needed one flag set the same could end up having it set differently, and therefore fail to work together properly. Also, passing it on every construction was a pain.
A static class. This one worked OK for the longest time; however, there is still the problem when there are missing dependencies.
Summary
Basically, I am looking for a way to have a single "place" where I can mess with some values (bools, floats, etc.) that will change the behaviour of all classes using them, where said values either overwrite default values or get replaced by default values if said "place" isn't defined.
If a singleton class does not work for you, maybe using a DI (dependency injection) container would fit your third approach? It may help with the construction and make the code more testable.
There are some DI frameworks for C++, like https://github.com/google/fruit/wiki or https://github.com/boost-experimental/di, which you can use.
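Each framework has its own syntax, so rather than quoting either API, here is a hand-rolled sketch of the underlying idea, constructor injection (all names are illustrative): classes receive their flags through an interface instead of reaching for a global, and one composition root decides which implementation everyone gets.
#include <memory>
#include <utility>

struct IFlags
{
    virtual ~IFlags() = default;
    virtual bool loggingEnabled() const = 0;
};

// Default settings used when no overriding configuration exists.
struct DefaultFlags : IFlags
{
    bool loggingEnabled() const override { return true; }
};

class Grapher
{
public:
    explicit Grapher(std::shared_ptr<const IFlags> flags)
        : _flags(std::move(flags)) {}

    void draw()
    {
        if (_flags->loggingEnabled())
        {
            // log the draw call
        }
        // ...
    }

private:
    std::shared_ptr<const IFlags> _flags;
};

int main()
{
    // The composition root: swapping the implementation touches one place only.
    auto flags = std::make_shared<DefaultFlags>();
    Grapher grapher(flags);
    grapher.draw();
}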
If you decide to use switches/flags, pay attention to "cyclomatic complexity".
If you do not change the skeleton of your algorithm but only its behaviour according to the objects passed as parameters, have a look at the "template method" design pattern. This pattern allows you to define a generic algorithm and specify particular steps for particular situations.
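A minimal sketch of the template method pattern (class names are made up for illustration): the base class fixes the algorithm's skeleton, while derived classes override individual steps.
#include <iostream>

class Calculation
{
public:
    // The skeleton of the algorithm is fixed here...
    double run(double input) { return compute(prepare(input)); }
    virtual ~Calculation() = default;

private:
    // ...while the individual steps can vary per situation.
    virtual double prepare(double x) { return x; }
    virtual double compute(double x) = 0;
};

class FastCalculation : public Calculation
{
    double compute(double x) override { return x * x; }
};

class PreciseCalculation : public Calculation
{
    double prepare(double x) override { return x + 0.5; }
    double compute(double x) override { return x * x; }
};

int main()
{
    FastCalculation fast;
    PreciseCalculation precise;
    std::cout << fast.run(2.0) << " " << precise.run(2.0) << "\n"; // 4 and 6.25
}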
Here's an approach I found useful; I don't know if it's what you're looking for, but maybe it will give you some ideas.
First, I created a BehaviorFlags.h file that declares the following function:
// Returns true iff the given feature/behavior flag was specified for us to use
bool IsBehaviorFlagEnabled(const char * flagName);
The idea being that any code in any of your classes could call this function to find out if a particular behavior should be enabled or not. For example, you might put this code at the top of your ExtensionsAPI.cpp file:
#include "BehaviorFlags.h"
static const bool enableExtensionsAPI = IsBehaviorFlagEnabled("enable_extensions_api");
[...]
void DoTheExtensionsAPIStuff()
{
if (enableExtensionsAPI == false) return;
[... otherwise do the extensions API stuff ...]
}
Note that the IsBehaviorFlagEnabled() call is only executed once at program startup, for best run-time efficiency; but you also have the option of calling IsBehaviorFlagEnabled() on every call to DoTheExtensionsAPIStuff(), if run-time efficiency is less important than being able to change your program's behavior without having to restart it.
As far as how the IsBehaviorFlagEnabled() function itself is implemented, it looks something like this (simplified version for demonstration purposes):
bool IsBehaviorFlagEnabled(const char * flagName)
{
    // Note: a real implementation would find the user's home directory
    // using the proper API and not just rely on ~ to expand to the home-dir path
    std::string filePath = "~/MyProgram_Settings/";
    filePath += flagName;

    FILE * fpIn = fopen(filePath.c_str(), "r");  // i.e. does the file exist?
    const bool ret = (fpIn != NULL);
    if (fpIn) fclose(fpIn);  // don't call fclose() on a NULL pointer
    return ret;
}
The idea being that if you want to change your program's behavior, you can do so by creating a file (or folder) in the ~/MyProgram_Settings directory with the appropriate name. E.g. if you want to enable your Extensions API, you could just do a
touch ~/MyProgram_Settings/enable_extensions_api
... and then re-start your program, and now IsBehaviorFlagEnabled("enable_extensions_api") returns true and so your Extensions API is enabled.
The benefits I see of doing it this way (as opposed to parsing a .ini file at startup or something like that) are:
There's no need to modify any "central header file" or "registry file" every time you add a new behavior-flag.
You don't have to put a ParseINIFile() function at the top of main() in order for your flags-functionality to work correctly.
You don't have to use a text editor or memorize a .ini syntax to change the program's behavior
In a pinch (e.g. no shell access) you can create/remove settings simply using the "New Folder" and "Delete" functionality of the desktop's window manager.
The settings are persistent across runs of the program (i.e. no need to specify the same command line arguments every time)
The settings are persistent across reboots of the computer
The flags can be easily modified by a script (via e.g. touch ~/MyProgram_Settings/blah or rm -f ~/MyProgram_Settings/blah) -- much easier than getting a shell script to correctly modify a .ini file
If you have code in multiple different .cpp files that needs to be controlled by the same flag-file, you can just call IsBehaviorFlagEnabled("that_file") from each of them; no need to have every call site refer to the same global boolean variable if you don't want them to.
Extra credit: If you're using a bug-tracker and therefore have bug/feature ticket numbers assigned to various issues, you can creep the elegance a little bit further by also adding a class like this one:
/** This class encapsulates a feature that can be selectively disabled/enabled by putting an
 * "enable_behavior_xxxx" or "disable_behavior_xxxx" file into the ~/MyProgram_Settings folder.
 */
class ConditionalBehavior
{
public:
    /** Constructor.
     * @param bugNumber Bug-Tracker ID number associated with this bug/feature.
     * @param defaultState If true, this behavior will be enabled by default (i.e. if no corresponding
     *                     file exists in ~/MyProgram_Settings). If false, it will be disabled by default.
     * @param switchAtVersion If specified, this feature's default-enabled state will be inverted if
     *                        GetMyProgramVersion() returns any version number greater than this.
     */
    ConditionalBehavior(int bugNumber, bool defaultState, int switchAtVersion = -1)
        : _enabled(defaultState)
    {
        if ((switchAtVersion >= 0)&&(GetMyProgramVersion() >= switchAtVersion)) _enabled = !_enabled;

        // Look for the flag-file that would invert the effective default
        std::string fn = _enabled ? "disable" : "enable";
        fn += "_behavior_";
        fn += std::to_string(bugNumber);
        if ((IsBehaviorFlagEnabled(fn.c_str()))
          ||(IsBehaviorFlagEnabled("enable_everything")))
        {
            _enabled = !_enabled;
            printf("Note: %s Behavior #%i\n", _enabled?"Enabling":"Disabling", bugNumber);
        }
    }

    /** Returns true iff this feature should be enabled. */
    bool IsEnabled() const {return _enabled;}

private:
    bool _enabled;
};
Then, in your ExtensionsAPI.cpp file, you might have something like this:
// Extensions API feature is tracker #4321; disabled by default for now,
// but you can try it out via "touch ~/MyProgram_Settings/enable_behavior_4321"
static const ConditionalBehavior _feature4321(4321, false);

// Also tracker #4222 is now enabled-by-default, but you can disable
// it manually via "touch ~/MyProgram_Settings/disable_behavior_4222"
static const ConditionalBehavior _feature4222(4222, true);
[...]
void DoTheExtensionsAPIStuff()
{
if (_feature4321.IsEnabled() == false) return;
[... otherwise do the extensions API stuff ...]
}
... or if you know that you are planning to make your Extensions API enabled-by-default starting with version 4500 of your program, you can set it so that Extensions API will be enabled-by-default only if GetMyProgramVersion() returns 4500 or greater:
static ConditionalBehavior _feature4321(4321, false, 4500);
[...]
... also, if you wanted to get more elaborate, the API could be extended so that IsBehaviorFlagEnabled() can optionally return a string to the caller containing the contents of the file it found (if any), so that you could do shell commands like:
echo "opengl" > ~/MyProgram_Settings/graphics_renderer
... to tell your program to use OpenGL for its 3D graphics, etc.:
// In Renderer.cpp
std::string rendererType;
if (IsBehaviorFlagEnabled("graphics_renderer", &rendererType))
{
    printf("The user wants me to use [%s] for rendering 3D graphics!\n", rendererType.c_str());
}
else printf("The user didn't specify what renderer to use.\n");
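For completeness, a sketch of what that extended overload could look like (the buffer size and the newline handling are arbitrary choices for the sketch):
bool IsBehaviorFlagEnabled(const char * flagName, std::string * optContents = NULL)
{
    std::string filePath = "~/MyProgram_Settings/";  // see earlier caveat about ~
    filePath += flagName;

    FILE * fpIn = fopen(filePath.c_str(), "r");
    if (fpIn == NULL) return false;

    if (optContents)
    {
        // Hand back the first line of the flag-file's contents, if any
        char buf[256];
        if (fgets(buf, sizeof(buf), fpIn))
        {
            *optContents = buf;
            while (!optContents->empty() &&
                   ((optContents->back() == '\n') || (optContents->back() == '\r')))
                optContents->pop_back();  // strip trailing newline characters
        }
        else *optContents = "";
    }

    fclose(fpIn);
    return true;
}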

boost::log - Using independent severity levels inside a library/plugin

This is a kind of follow-up to another question I asked (here), where I was made aware that using the same backend with multiple sinks is not a safe approach.
What I am trying to obtain is to "decouple" the severity levels inside a library/plugin from the applications using them, while being able to write the different logs to the same output (be it stdout or, more likely, a file or a remote logger); this for the following reasons:
I wouldn't like to tie the severity levels inside my library/plugin to those of the applications using them, because a change (for whatever reason) of the severity-level list in one of the applications would cause the library/plugin to be "updated" with the new severities and, as a waterfall, all other applications which use the library/plugin.
I would like to be able to use library-specific severity levels (which, for being correctly displayed in the log messages, should be supplied to the sink formatter - thus my need of using different sinks).
Which is the best way to obtain this?
Some afterthoughts: as per Andrey's reply to my previous question, the "problem" is that the backend is not synchronized to receive data from multiple sources (sinks); thus the solution might seem to be to create a synchronized version of the backends (e.g. wrapping the writes to the backend in a boost::asio post)...
Is this the only solution?
Edit/Update
I update the question after Andrey's awesome reply, mainly for sake of completeness: the libraries/plugins are meant to be used with internally developed applications only, thus it is assumed that there will be a common API we can shape for defining the log structure and behaviour.
Plus, most applications are meant to run mainly "unmanned", i.e. with really minimal, if not null, user/runtime interaction, so the basic idea is to have the log level set in some plugin specific configuration file, read at startup (or set to be reloaded upon a specific application API command from the application).
First, I'd like to address this premise:
for being correctly displayed in the log messages, should be supplied to the sink formatter - thus my need of using different sinks
You don't need different sinks to be able to filter or format different types of severity levels. Your filters and formatters have to deal with that, not the sink itself. Only create multiple sinks if you need multiple log targets. So to answer your question, you should focus on the protocol of setting up filters and formatters rather than sinks.
The exact way to do that is difficult to suggest, because you didn't specify the design of your application/plugin system. What I mean is that there must be some common API shared by both the application and the libraries, and the way you set up logging will depend on where that API belongs. Severity levels, among other things, must be a part of that API. For example:
If you're writing plugins for a specific application (e.g. plugins for a media player) then the application is the one that defines the plugin API, including the severity levels and even possibly the attribute names the plugins must use. The application configures sinks, including filters and formatters, using the attributes mandated by the API, and plugins never do any configuration and only emit log records. Note that the API may include some attributes that allow to distinguish plugins from each other (e.g. a channel name), which would allow the application to process logs from different plugins differently (e.g. write to different files).
If you're writing both plugins and application(s) to adhere some common API, possibly defined by a third party, then logging protocol must still be defined by that API. If it's not, then you cannot assume that any other application or plugin not written by you supports logging of any kind, even that it uses Boost.Log at all. In this case every plugin and the application itself must deal with logging independently, which is the worst case scenario because the plugins and the application may affect each other in unpredictable ways. It is also difficult to manage the system like that because every component will have to be configured separately by the user.
If you're writing an application that must be compatible with multiple libraries, each having its own API, then it is the application who should be aware of logging convention taken in each an every library it uses, there's no way around it. This may include setting up callbacks in the libraries, intercepting file output and translating between library's log severity levels and the application severity levels. If the libraries use Boost.Log to emit log records then they should document the attributes they use, including the severity levels, so that the application is able to setup the logging properly.
So, in order to take one approach or the other, you should first decide how your application and plugins interface each other and what API they share and how that API defines logging. The best case scenario is when you define the API, so you can also set the logging conventions you want. In that case, although possible, it is not advisable or typical to have arbitrary severity levels allowed by the API because it significantly complicates implementation and configuration of the system.
However, if for some reason you do need to support arbitrary severity levels and there's no way around that, you can define an API for the library to provide which can help the application set up filters and formatters. For example, each plugin can provide an API like this:
// Returns the filter that the plugin wishes to use for its records
boost::log::filter get_filter();
// The function extracts log severity from the log record
// and converts it to a string
typedef std::function<
std::string(boost::log::record_view const&)
> severity_formatter;
// Returns the severity formatter, specific for the plugin
severity_formatter get_severity_formatter();
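For illustration, a plugin might implement this API along these lines. The my_severity type and the "Severity" attribute name are assumptions made for this sketch; boost::log::extract returns an empty value_ref when the attribute is missing or has a different type, which is how records from other plugins fall through:
#include <boost/log/expressions.hpp>
#include <boost/log/attributes/value_extraction.hpp>

enum class my_severity { debug, info, error };

boost::log::filter get_filter()
{
    namespace expr = boost::log::expressions;
    // Pass this plugin's records at severity info or above
    return expr::attr< my_severity >("Severity") >= my_severity::info;
}

severity_formatter get_severity_formatter()
{
    return [](boost::log::record_view const& rec) -> std::string
    {
        auto sev = boost::log::extract< my_severity >("Severity", rec);
        if (!sev)
            return std::string(); // not this plugin's record
        switch (sev.get())
        {
        case my_severity::debug: return "DEBUG";
        case my_severity::info:  return "INFO";
        case my_severity::error: return "ERROR";
        }
        return std::string();
    };
}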
Then the application can use a special filter that will make use of this API.
struct plugin_filters
{
std::shared_mutex mutex;
// Plugin-specific filters
std::vector< boost::log::filter > filters;
};
// Custom filter
bool check_plugin_filters(
boost::log::attribute_value_set const& values,
std::shared_ptr< plugin_filters > const& p)
{
// Filters can be called in parallel, we need to synchronize
std::shared_lock< std::shared_mutex > lock(p->mutex);
for (auto const& f : p->filters)
{
// Call each of the plugin's filter and pass the record
// if any of the filters passes
if (f(values))
return true;
}
// Suppress the record by default
return false;
}
std::shared_ptr< plugin_filters > pf = std::make_shared< plugin_filters >();
// Set the filter
sink->set_filter(std::bind(&check_plugin_filters, std::placeholders::_1, pf));
// Add filters from plugins
std::unique_lock< std::shared_mutex > lock(pf->mutex);
pf->filters.push_back(plugin1->get_filter());
pf->filters.push_back(plugin2->get_filter());
...
And a similar formatter:
struct plugin_formatters
{
std::shared_mutex mutex;
// Plugin-specific severity formatters
std::vector< severity_formatter > severity_formatters;
};
// Custom severity formatter
std::string plugin_severity_formatter(
boost::log::record_view const& rec,
std::shared_ptr< plugin_formatters > const& p)
{
std::shared_lock< std::shared_mutex > lock(p->mutex);
for (auto const& f : p->severity_formatters)
{
// Call each of the plugin's formatter and return the result
// if any of the formatters is able to extract the severity
std::string str = f(rec);
if (!str.empty())
return str;
}
// By default return an empty string
return std::string();
}
std::shared_ptr< plugin_formatters > pf =
std::make_shared< plugin_formatters >();
// Set the formatter
sink->set_formatter(
boost::log::expressions::stream << "["
<< boost::phoenix::bind(&plugin_severity_formatter,
boost::log::expressions::record, pf)
<< "] " << boost::log::expressions::message);
// Add formatters from plugins
std::unique_lock< std::shared_mutex > lock(pf->mutex);
pf->severity_formatters.push_back(plugin1->get_severity_formatter());
pf->severity_formatters.push_back(plugin2->get_severity_formatter());
...
Note, however, that at least with regard to filters, this approach is flawed, because you allow the plugins to define the filters. Normally it should be the application that selects which records are being logged, and for that there must be a way to translate library-specific severity levels to some common levels, probably defined by the application.
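One illustrative way to perform that translation, reusing the my_severity type assumed in the sketch above: the application defines the common levels, and each library ships a small mapping function alongside its other logging API.
enum class app_severity { debug, info, warning, error };

// Provided by each plugin next to get_filter()/get_severity_formatter()
app_severity translate_severity(my_severity s)
{
    switch (s)
    {
    case my_severity::debug: return app_severity::debug;
    case my_severity::info:  return app_severity::info;
    case my_severity::error: return app_severity::error;
    }
    return app_severity::info; // fallback for unexpected values
}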

How to exchange custom data between Ops in Nuke?

This question is addressed to developers using C++ and the NDK of Nuke.
Context: Assume a custom Op which implements the interfaces of DD::Image::NoIop and DD::Image::Executable. The node iterates over a range of frames, extracting information at each frame, which is stored in a custom data structure. A custom knob, which is a member variable of the above Op (but invisible in the UI), handles the loading and saving (serialization) of the data structure.
Now I want to exchange that data structure between Ops.
So far I have come up with the following ideas:
Expression linking
Knobs can share information (matrices, etc.) using expression linking. Can this feature be exploited for custom data as well?
Serialization to image data
The custom data would be serialized and written into a (new) channel. A node further down the processing tree could grab that and de-serialize again. Of course, the channel must not be altered between serialization and de-serialization or else ... this is a hack, I know, but, hey, any port in a storm!
GeoOp + renderer
In cases where the custom data is purely point-based (which, unfortunately, it isn't in my case), I could turn the above node into a 3D node and pass point data to other 3D nodes. At some point a render node would be required to come back to 2D.
Am I going in the correct direction with this? If not, what would be a sensible approach to make this data structure available to other nodes which rely on the information contained in it?
This question has been answered on the Nuke-dev mailing list:
If you know the actual class of your Op's input, it's possible to cast the input to that class type and access it directly. A simple example could be the snippet below:
//! @file DownstreamOp.cpp
#include "UpstreamOp.h" // The Op that contains your custom data.
// ...
UpstreamOp * upstreamOp = dynamic_cast< UpstreamOp * >( input( 0 ) );
if ( upstreamOp )
{
    YourCustomData * data = upstreamOp->getData();
    // ...
}
// ...
// ...
UPDATE
Update with reference to a question that I received via email:
I am trying to do this exact same thing, pass custom data from one Iop plugin to another. But these two plugins are defined in different dso/dll files. How did you get this to work?
Short answer:
Compile your Ops into a single shared object.
Long answer:
Say
UpstreamOp.cpp
DownstreamOp.cpp
define the dependent Ops.
In a first attempt I compiled the first plugin using only UpstreamOp.cpp,
as usual. For the second plugin I compiled both DownstreamOp.cpp and
UpstreamOp.cpp into that plugin.
Strangely enough, that worked (on Linux; I didn't test Windows).
However, by overriding
bool Op::test_input( int input, Op * op ) const;
things will break. Creating and saving a Comp using the above plugins still works, but loading that same Comp again breaks the connection in the node graph between UpstreamOp and DownstreamOp, and it is no longer possible to connect them again.
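For reference, the kind of override in question looks roughly like the sketch below (the signature is the one quoted above; the base-class call assumes DownstreamOp derives from DD::Image::NoIop):
bool DownstreamOp::test_input( int input, DD::Image::Op * op ) const
{
    // Accept only the known upstream type on input 0; this is the
    // dynamic_cast that fails when a second copy of UpstreamOp's
    // symbols has been loaded from another plugin.
    if ( input == 0 )
        return dynamic_cast< UpstreamOp * >( op ) != nullptr;
    return DD::Image::NoIop::test_input( input, op );
}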
My hypothesis is this: since both plugins contain symbols for UpstreamOp, it depends on the load order of the plugins whether a node uses instances of UpstreamOp from the first or from the second plugin. So, if UpstreamOp from the first plugin is used, then any dynamic_cast in Op::test_input() will fail and the two Ops cannot be connected anymore.
It is still surprising that Nuke would even bother to start at all with the above configuration, since it can be rather picky about symbols from plugins, e.g. if they are missing.
Anyway, to get around this problem I did the following:
compile both Ops into a single shared object, e.g. myplugins.so, and
add a TCL script or Python script (init.py/menu.py) which instructs Nuke how to load the Ops correctly.
An example of a TCL script can be found in the dev guide, and the instructions for your menu.py could be something like this:
menu = nuke.menu( 'Nodes' ).addMenu( 'my-plugins' )
menu.addCommand('UpstreamOp', lambda: nuke.createNode('UpstreamOp'))
menu.addCommand('DownstreamOp', lambda: nuke.createNode('DownstreamOp'))
nuke.load('myplugins')
So far, this works reliably for us (on Linux & Windows, haven't tested Mac).

How to test asynchronous code

I've written my own access layer to a game engine. There is a GameLoop which gets called every frame and lets me process my own code. I'm able to do specific things and to check if these things happened. In a very basic way it could look like this:
void cycle()
{
    // set a specific value
    Engine::setText("Hello World");

    // read the value
    std::string text = Engine::getText();
}
I want to test if my Engine layer is working by writing automated tests. I have some experience using the Boost Unittest Framework for simple comparison tests like this.
The problem is that some things I want the engine to do are only processed after the call to cycle(). So calling Engine::getText() directly after Engine::setText(...) would return an empty string. If I waited until the next call to cycle(), the right value would be returned.
I am now wondering how I should write my tests if it is not possible to process them in the same cycle. Are there any best practices? Is it possible to use the "traditional testing" approach given by the Boost Unittest Framework in such an environment? Are there perhaps other frameworks aimed at such a specialized case?
I'm using C++ for everything here, but I could imagine that there are answers unrelated to the programming language.
UPDATE:
It is not possible to access the Engine outside of cycle()
In your example above, std::string text = Engine::getText(); is the code you want to remember from one cycle but execute in the next. You can save it for later execution. For example, using C++11 you could use a lambda to wrap the test into a simple function specified inline, as in the sketch below.
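Here is a minimal sketch of that idea. It assumes the test harness owns a queue of checks that cycle() drains at the start of the next frame; the queue and all names are illustrative, not part of the engine API. Note that it also respects the update above, since everything still happens inside cycle():
#include <cassert>
#include <functional>
#include <queue>
#include <string>

std::queue< std::function<void()> > pendingChecks;

void cycle()
{
    // First run the checks queued during the previous cycle; by now
    // the engine has processed that cycle's commands.
    while (!pendingChecks.empty())
    {
        pendingChecks.front()();
        pendingChecks.pop();
    }

    // set a specific value
    Engine::setText("Hello World");

    // defer the read to the next cycle, wrapped in a lambda
    pendingChecks.push([] {
        assert(Engine::getText() == "Hello World");
    });
}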
There are two options available to you:
If the library you have can be used synchronously, or offers a C++11-futures-like facility (which can indicate the readiness of the result), then in your test case you can do something like the below:
void testcycle()
{
    // set a specific value
    Engine::setText("Hello World");
    while (!Engine::isResultReady())
        ; // busy-wait until the engine signals the result is ready

    // read the value
    assert(Engine::getText() == "WHATEVERVALUEYOUEXPECT");
}
If you don't have the above, the best you can do is use a timeout (this is not a good option though, because you may get spurious failures):
void testcycle()
{
    using namespace std::chrono;

    // set a specific value
    Engine::setText("Hello World");

    const auto start = steady_clock::now();
    while (Engine::getText() != "WHATEVERVALUEYOUEXPECT")
    {
        std::this_thread::sleep_for(milliseconds(1));
        if (steady_clock::now() - start > seconds(1)) // whatever max time you allow
            assert(0);
    }
}
}