I am currently writing my biggest C++ program so far, using multiple objects to manage a window (via Windows API functions), a 3D engine (DirectX), and other objects that all have to work together to accomplish their tasks.
I am having trouble organizing my objects so they can interact with each other intelligently.
When an object has to perform operations on another object, or worse, pass parameters to it, I struggle to find the best way to harmonize and organize my code.
Here is an excellent example:
As I said, my program has 2 objects: (1) "windowManager", which handles all operations related to the window itself, and (2) "rendererManager", which handles all my DirectX operations, from initialisation to rendering every frame.
When the window gets resized, DirectX has to be reset to adjust the drawing area, otherwise the image gets stretched, so "windowManager" has to be able to tell "rendererManager" to reset.
I could not think of a better solution than to create a "rendererManager.Reset()" method, plus 2 methods on the window manager: "windowManager.GetRequestRendererReset()" and "windowManager.SetRequestRendererReset(bool reset)".
When my window gets resized, "windowManager" calls its own "windowManager.SetRequestRendererReset(true)" method to set a member flag that can later be queried through the get method.
With this in place, in my main loop (the infinite loop in main) I added a check such as:
if (windowManager.GetRequestRendererReset()) {
    // The window has been resized, we need to reset DirectX
    rendererManager.Reset(windowManager.GetWindow());
    windowManager.SetRequestRendererReset(false);
}
This feels very sub-optimal, and I suspect there is a better solution.
In addition, I now need to send more complex information to "rendererManager", such as text to draw into the image at a specific location, and without an intelligent organization my whole code might turn into a swamp.
Are there guidelines on how to optimise interactions and communication between objects?
Do you have a good lesson/tutorial about this, or do you have examples of good practices?
I am making an application in C++ that runs a simulation for a health club. The user simply enters the simulation data at the beginning (3 integer values) and presses run. After that there is no user input - so very simple.
After starting the simulation, a lot of the logic sits deep down in lower-level classes, but many of them need to print simple output messages to the UI. Simply returning a message is not possible, because objects need to print to the UI and then keep working.
I was going to pass a reference to the UI object to every class that needs it, but I end up passing it around quite a lot - there must be a better way.
What I really need is something that makes calling the UI's printOutput(string) function as easy (or not much more difficult) than cout << string;
The UI also has a displayConnectionStatus(bool[] connections) method.
Bear in mind the UI inherits from an abstract 'UserInterface' class, so simple console UIs and GUIs can be swapped in and out easily.
How do you suggest I implement this link to the UI?
If I were to use a global function, how can I redirect it to call methods of the UserInterface implementation that I selected to use?
Don't be afraid of globals.
Global objects hurt encapsulation, but for a targeted solution with no concern for immediate reusability, globals are just fine.
Expose a global object that processes events from your simulation. You can then choose to print the events, send them by e-mail, render them with OpenGL or whatever you fancy. Make a uniform interface that catches what happens inside the simulation via report callbacks, and then you can subclass this global object to suit your needs.
If the object weren't global, you'd be passing a pointer around the whole codebase.
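For illustration, a minimal sketch of that idea (the names are made up; only the abstract UserInterface class is taken from the question):

#include <iostream>
#include <string>

// Uniform interface for everything the simulation wants to report.
class SimulationReporter {
public:
    virtual ~SimulationReporter() {}
    virtual void report(const std::string& message) = 0;
};

// One concrete subclass prints to the console; another could forward to
// your UserInterface::printOutput(), send e-mail, render with OpenGL, etc.
class ConsoleReporter : public SimulationReporter {
public:
    void report(const std::string& message) { std::cout << message << '\n'; }
};

// The single global, installed once at startup.
SimulationReporter* g_reporter = 0;

// As easy to call from deep inside the simulation as cout << message.
void report(const std::string& message)
{
    if (g_reporter)
        g_reporter->report(message);
}

At startup you pick the implementation (g_reporter = new ConsoleReporter; or a subclass wrapping your chosen UserInterface), and no class below that point ever needs a reference passed to it.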
I would suggest going for a logging framework, i.e. your own class LogMessages, with functions that take data and log it; the target can be a UI, a file, the network or anything else.
Each class that needs logging can then use your logging class.
This way you avoid globals and get a generic solution. Also have a look at http://www.pantheios.org/, an open-source C/C++ diagnostic logging API library; you may use that as well.
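As a rough sketch of the non-global variant (class and member names are only illustrative): the logger is handed to the classes that need it and forwards to whatever sink was configured:

#include <string>

// Abstract UI class from the question (sketched).
class UserInterface {
public:
    virtual ~UserInterface() {}
    virtual void printOutput(const std::string& message) = 0;
};

// Hypothetical logging facade; the sink could just as well be a file or a socket.
class LogMessages {
public:
    explicit LogMessages(UserInterface& sink) : sink_(sink) {}
    void log(const std::string& message) { sink_.printOutput(message); }
private:
    UserInterface& sink_;
};

// A simulation class receives the logger once instead of the whole UI.
class Treadmill {
public:
    explicit Treadmill(LogMessages& log) : log_(log) {}
    void tick() { log_.log("treadmill still running"); }
private:
    LogMessages& log_;
};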
I've been having an issue with the basic structure of a program that I'm working on. I'm a very inexperienced programmer trying to teach myself the basics of programs that use multiple states.
Right now, I have a very simple game-ish program with a game loop that forwards events, logic, and rendering to my StateManager class, which pushes and pops states onto a vector. The StateManager then forwards the events, logic, and rendering to whatever state is at the back() of the vector. The idea is to have an assortment of different states for each phase of the program (in this case a simple game with splash screens, menus, gameplay, death screens, etc.)...
However, I'm a VERY novice coder (trying my best to learn), and I've run into a fundamental issue with my program right from the very first state class...
The first class that I made is the SplashScreenState. The basic concept is to have a state that essentially just shows a series of 'splash screen images' (let's say 3, for the sake of example); each time the user presses a key, it switches to the next image, and finally (when it is out of splash screen images to cycle through) it switches to the next state (menuState).
My problem is that I'm having a hard time figuring out how to structure this. Originally, I made the mistake of treating each splash screen image as its own instance of SplashScreenState. However, I figured that doing so was incorrect, as all 3 splash screens are technically part of the same 'state'.
So now I have two main issues:
The first issue is that I'm not sure how/where to store all the splash screen images. If I want to cycle between 3 different images when the program starts up, should I make them all members of the SplashScreenState class? Or is it smarter to have one class member for the 'currentImage', and each time the user hits a key run a load() function that loads the next image into the currentImage pointer? Is it better to make an array or vector of images and cycle through them? I'm just not sure...
My second issue is with the event handling of the SplashScreenState. I know that I want the image on the screen to change like image1 -> image2 -> image3 -> changeState(menuState), so that each time the user hits a key it switches to the next splash screen, until the last one, where it then changes state to the main menu. I'm not sure what the best way of doing this is either. Should I create an enum for each splash screen and increment through them (until the final screen, where it changes states)? I also think that if I stored all my screens in an array, I could easily increment through them, but would that be wasteful because all the screens would have to be stored in memory at all times?
Anyway, I know this question might be very basic and noobish, but that's where I am right now, unfortunately! I haven't had any formal education in programming and I've been teaching myself, so I really appreciate all the help and expertise on this site! ^^
Thanks!
You seem to be torn between an object-oriented and a procedural paradigm for handling state transitions. The other answer suggesting a switch statement to handle enumerated state changes is a good procedural way to go about it. The drawback is that you might end up with a monolithic game class that contains all the code and all the superfluous state-specific data for handling your event/logic/rendering for all possible states.
The object-oriented way to handle this is much cleaner: it encapsulates these into their own separate state objects, where they can be used polymorphically through a shared interface. Then, instead of holding all the details for handling all the states in your game class, your game class only needs to store a state pointer and does not have to worry about the implementation details of the concrete state object. This moves the responsibility of handling state transitions out of the game class and into the state class, where it belongs.
You should read up on the state/strategy pattern from a design patterns book. Handling the changing of a state should be the responsibility of the state object itself. Here is some reading:
http://www.codeproject.com/Articles/14325/Understanding-State-Pattern-in-C
http://sourcemaking.com/design_patterns/state
http://sourcemaking.com/design_patterns/state/cpp/1
http://codewrangler.home.comcast.net/~codewrangler/tech_info/patterns_code.html#State
http://en.wikipedia.org/wiki/State_pattern
http://www.codeproject.com/Articles/38962/State-Design-Pattern
Quoting from the Design Patterns and Pattern-Oriented Software Architecture books:
The pattern-based approach uses code instead of data structures to specify state transitions, but does a good job of accommodating state transition actions. The state pattern doesn't specify where the state transitions must be defined. It can be done in the context object, or in each individual derived state class. It is generally more flexible and appropriate to let the state subclasses specify their successor state and when to make the transition.
You have the option of creating state objects ahead of time and never destroying them, which can be good when state changes occur rapidly and you want to avoid destroying states that might be needed again shortly. On the other hand, it can be inconvenient because the context must keep references to all states that might be entered.
When the states that will be entered are not known at run-time and contexts change state infrequently, it is preferable to create state objects as needed and destroy them thereafter. In determining which to use, you need to consider cost as well as transition frequency.
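To make the idea concrete, here is a minimal sketch of the pattern (all class names are made up; it only shows the shape, not a full game):

#include <memory>

class Game;  // the context

// Shared interface every concrete state implements.
class GameState {
public:
    virtual ~GameState() {}
    virtual void handleEvents(Game& game) = 0;
    virtual void update(Game& game) = 0;
    virtual void render(Game& game) = 0;
};

// The context only stores a pointer to the current state.
class Game {
public:
    void changeState(std::unique_ptr<GameState> next) { pending_ = std::move(next); }
    void frame() {
        if (pending_) state_ = std::move(pending_);  // swap states between frames
        if (!state_) return;
        state_->handleEvents(*this);
        state_->update(*this);
        state_->render(*this);
    }
private:
    std::unique_ptr<GameState> state_;
    std::unique_ptr<GameState> pending_;
};

// A concrete state decides its own successor.
class MenuState : public GameState {
public:
    void handleEvents(Game& game) override { /* on "start": game.changeState(...) */ }
    void update(Game&) override {}
    void render(Game&) override {}
};

Each splash screen, menu, or gameplay phase then becomes one such class, and the transition logic lives inside the state that triggers it. Deferring the swap to the start of the next frame avoids destroying a state object while it is still executing.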
Excellent explanation of your problem, first of all.
The portion of your game managing the splash screen can work in two ways. You've examined the problem and it's really that simple:
Receive input; set next state.
So, examples:
STATE_SPLASH1
STATE_SPLASH2
STATE_SPLASH3
STATE_TITLE
STATE_GAME_INIT
STATE_GAME
STATE_EXIT
pseudocode:
state = STATE_SPLASH1
while (state != STATE_EXIT)
    ... receive input ...
    ... process events to responders ...
    ... update ...
    ... yadda yadda ...
    switch (state) {
        case STATE_SPLASH1:
            show_splash1()
        case STATE_SPLASH2:
            show_splash2()
        case ..:
        case STATE_TITLE:
            show_title()
        case STATE_GAME_INIT:
            show_loading()
            setup_level()
            setup_characters()
        case STATE_GAME:
            game_update()
        case STATE_EXIT:
            cleanup_and_quit()
    }
Another way is to treat the splash sequence as one 'game state' and keep the current splash image as an internal state within it; when the splash has no more logic to run, set the game state to the next one. When I was learning, I found the DOOM source to be a valuable resource, and man, is it nothing but hundreds of state machines. :)
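A rough sketch of that second approach, with the images held in a vector and an index as the internal state (the names and the Image type are only illustrative):

#include <string>
#include <vector>

// Hypothetical image handle; in practice this would be your texture/surface type.
struct Image { std::string path; };

class SplashScreenState {
public:
    SplashScreenState()
    {
        // All splash images belong to this one state.
        images_.push_back(Image{"splash1.png"});
        images_.push_back(Image{"splash2.png"});
        images_.push_back(Image{"splash3.png"});
    }

    // Call this from the state's event handling on any key press.
    // Returns true while there are still splash images left, false when
    // the caller should switch to the menu state.
    bool onKeyPress()
    {
        ++current_;
        return current_ < images_.size();
    }

    const Image& currentImage() const { return images_[current_]; }

private:
    std::vector<Image> images_;
    std::size_t current_ = 0;
};

Whether you preload all three images or load them lazily inside onKeyPress() is a memory/latency trade-off; for three small splash images, keeping them all loaded is usually fine.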
We are currently facing the following problem: We have an application that needs to display a multitude of separate OpenSceneGraph scenes in different Qt widgets. For example, we might have one Qt widget depicting a sphere, while another widget depicts an icosahedron. Since we are using OpenSceneGraph 3.0.1, we followed the osgViewerQt example from the official documentation for implementing this.
The example code uses a QTimer in order to force updates for the viewer widget:
connect( &_timer, SIGNAL(timeout()), this, SLOT(update()) );
_timer.start( 10 );
The problems now begin when we want to create and show multiple widgets. Since each widget comes with its own timer, performance rapidly decreases with the number of open widgets. Not only is the interaction with the OSG widgets very slow, also the interaction with other Qt widgets noticeably lags. Even a halfway recent quad-core system is almost overwhelmed when approximately 5 windows are open. This issue is definitely not related to our graphics hardware. Other applications may render much larger scenes (Blender, Meshlab etc.) without any negative performance impact.
So, to summarize: What would be the best way of creating multiple Qt widgets showing different OpenSceneGraph scenes without a performance impact?
What we already tried:
We already considered using a single osgViewer::CompositeViewer for rendering all scene objects. However, we discarded this idea for now because it will probably make interactions with a single widget very complicated.
We tried putting the rendering portion of each osgViewer::CompositeViewer in a separate thread as detailed by the osgQtWidgets example.
Our second try (using threads) looked roughly like this:
class ViewerFrameThread : public OpenThreads::Thread
{
public:
    ViewerFrameThread(osgViewer::ViewerBase* viewerBase):
        _viewerBase(viewerBase) {}

    ~ViewerFrameThread()
    {
        cancel();
        while (isRunning())
        {
            OpenThreads::Thread::YieldCurrentThread();
        }
    }

    int cancel()
    {
        // Ask the viewer to leave its frame loop; run() will then return.
        _viewerBase->setDone(true);
        return 0;
    }

    void run()
    {
        // Blocks until the viewer is done.
        _viewerBase->run();
    }

    osg::ref_ptr<osgViewer::ViewerBase> _viewerBase;
};
However, this also resulted in a remarkable performance decrease. Each thread still requires a lot of CPU time (which is not surprising, as the basic interaction is still driven by a timer). The only advantage of this approach is that interaction with other Qt widgets at least remains possible.
The ideal solution for us would be a widget that only fires redraw requests whenever the user interacts with it, for example by clicking, double-clicking, scrolling etc. More precisely, this widget should remain idle until there is a need for an update. Is something akin to this possible at all? We would welcome any suggestions.
Having tried out several models for this problem, I am happy to report that I found one that is working perfectly. I am using a QThread (similar to the thread described above) that essentially wraps an osgViewer::ViewerBase object and simply calls viewer->run().
The trick to keep CPU usage low is to force OpenSceneGraph to render on demand only. Having tried out the various options, I found the following two settings to work best:
viewer->setRunFrameScheme( osgViewer::ViewerBase::ON_DEMAND );
viewer->setThreadingModel( osgViewer::ViewerBase::CullDrawThreadPerContext );
A viewer that is modified like this will not use spurious CPU cycles for continuous updates while still using multiple threads for culling and drawing. Other threading models might of course perform better in some cases, but for me, this was sufficient.
If anyone else attempts a similar solution, be warned that some operations now require explicit redraw requests. For example, when handling interactions with OSG objects or when writing your own CameraManipulator class, it doesn't hurt to call viewer->requestRedraw() after changing viewer settings. Otherwise, the viewer will only refresh when the widget requires a repaint.
In short, here's what I learned:
Don't use timers for rendering
Don't give up on multiple threads just yet
Nothing beats reading the source code (the official OSG examples were sometimes scarce on details, but the source never lies...)
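For illustration, a minimal sketch of such a thread wrapper (the class name and members here are made up, but the viewer calls are the ones shown above):

#include <QThread>
#include <osg/ref_ptr>
#include <osgViewer/ViewerBase>

// Wraps one viewer and runs its frame loop on a background thread.
class ViewerThread : public QThread
{
public:
    explicit ViewerThread(osgViewer::ViewerBase* viewer) : _viewer(viewer) {}

    ~ViewerThread()
    {
        _viewer->setDone(true);   // make run() return
        wait();                   // join the thread
    }

protected:
    void run()
    {
        // Render on demand only; cull/draw still get their own threads per context.
        _viewer->setRunFrameScheme(osgViewer::ViewerBase::ON_DEMAND);
        _viewer->setThreadingModel(osgViewer::ViewerBase::CullDrawThreadPerContext);
        _viewer->run();           // blocks until setDone(true)
    }

private:
    osg::ref_ptr<osgViewer::ViewerBase> _viewer;
};

The destructor flags the viewer as done and joins, so the wrapper can simply be owned by whatever widget owns the scene.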
It should be possible. I can't believe they used a timer running at 100Hz to update the widgets -- it's really not the right way to do it.
I don't know about OSG's architecture, but you need to figure a way to obtain a callback from OSG when the data has been changed. In the callback, you simply queue an update event to the appropriate widget like so:
#include <QCoreApplication>
#include <QEvent>

// Invoked by OSG whenever the scene data shown in widgetA has changed.
void OSGCallbackForWidgetA()
{
    QCoreApplication::postEvent(widgetA, new QEvent(QEvent::UpdateRequest),
                                Qt::LowEventPriority);
}
This callback code is thread-safe and you can invoke it from any thread, whether it was started by QThread or not. The event will be compressed, which means it acts like a flag: posting it sets the flag, and the flag is reset when the widget finishes the update. Posting it multiple times while an update is pending does not add more events to the event queue and does not cause multiple updates.
I ran into a similar problem when I had multiple OSG viewers in one Qt application: one of them worked well, while the rest were very "slow".
By turning on the rendering stats (press "s" in the viewer), I found that the "slowness" is actually not caused by the rendering; the rendering is fast.
The reason the viewers are "slow" is that many GUI events are not handled. For example, when you drag the scene, many drag events are generated by Qt, but only a few are passed to the OSG viewer, so the viewers respond "slowly".
The dropped events are actually due to the rendering being too fast: in Viewer::eventTraversal(), only relatively new events are processed, and "relatively new" is measured against cutOffTime = _frameStamp->getReferenceTime(). So if Qt generates events more slowly than the viewer renders frames, many events will be cut off and thus not processed.
And finally, after I found the root cause, the solution is easy. Let's cheat a bit on the reference time of _frameStamp used in Viewer::eventTraversal():
class MyViewer : public osgViewer::Viewer
{
    ...
    virtual void eventTraversal();
    ...
};

void MyViewer::eventTraversal()
{
    // Temporarily inflate the reference time so the base implementation does
    // not cut off events that arrived "too long" ago, then restore it.
    double reference_time = _frameStamp->getReferenceTime();
    _frameStamp->setReferenceTime(reference_time * 100);
    osgViewer::Viewer::eventTraversal();
    _frameStamp->setReferenceTime(reference_time);
}
I also spent quite some time figuring out how to make this work.
I think that the answer from Gnosophilon cannot work with Qt 5, as the rules for switching the context's thread are stricter than with Qt 4 (i.e. a call to moveToThread() is required on the OpenGL context object). At the time of writing, OSG does not satisfy these rules.
(At least I couldn't make it work.)
I haven't figured out how to do it in a separate thread; however, to render the scene smoothly in the UI thread without a fixed-interval timer, one may do the following:
viewer_->setRunFrameScheme(osgViewer::ViewerBase::ON_DEMAND);
viewer_->setThreadingModel(osgViewer::CompositeViewer::SingleThreaded);
osgQt::initQtWindowingSystem();
osgQt::setViewer(viewer_.get());
osgQt::setViewer() works with global variables, so only one viewer at a time can be used (which can of course be a CompositeViewer).
I'm looking for suggestion on how to handle this situation as anything I've thought of thus far does not work.
I'm working on an RPG game and am currently developing the graphical system. My graphics system consists of a series of ScreenStacks which are arranged in a particular order and then drawn.
A ScreenStack is basically just a collection of related Screens, along with a unique id and a draw order.
i.e.
class ScreenStack
{
    //Constructors, getters/setters etc.
private:
    std::string StackName;
    int DrawPriority;
    int UID;
    bool Valid;
    bool DrawStack;
    bool UpdateStack;
    bool SendInputs;
    bool DeleteStack;
    std::vector<screen_ptr> OwnedScreens; //screen_ptr is a shared_ptr around a Screen object
};
A screen is a simple graphical layer responsible for visualizing some part of the game, for example there's a screen for showing the player inventory, party overview, party status in battle, enemy party, etc.
I have a screen manager that is responsible for storing the various functions for creating screen stacks (e.g. the function that creates the battle stack makes a screen for the player party, the enemy party, the attack animation screen, the background, and the user interface). Each of the various screen stack creation functions needs a different set of parameters to construct it. What I'm doing now is manually adding stack creation functions to the screen manager on an as-needed basis. For instance, right now the screen manager has a function for creating the title screen stack, the start menu stack, the battle stack, the overworld stack, the tilemap stack, etc.
This works, but it is cumbersome and cluttered. What I'd like is for files external to the screen manager to be able to register stack creation functions with the screen manager, so that I can just look up the creation function instead of having to add a new function to the screen manager for each stack I need to create.
I initially tried adding an unordered_map<std::string, StackCreationFunction> with StackCreationFunction being typedef'd as
boost::function<ScreenStack (Engine& engine, ScreenManager& manager, const std::string screenName, const int StackID, const std::vector<std::string>& TryToCopyScreens, ...)>
The ... was from cstdarg. The idea was that each ScreenStack would add its own StackCreationFunction to this map. This doesn't work, however, as boost::function is invalid with ...
So essentially what I'm trying to do is allow external files/screens to register their own creation functions (with variable arguments) with the screen manager, ideally at compile time or immediately after startup. This feels like it should be possible with the preprocessor, but I'm unsure how to do it. I'm pretty stuck here, and I would really like a better solution than adding many stack creator functions to the screen manager and cluttering it up.
Any suggestions or a better solution would be greatly appreciated, and I can provide more details if the above was not clear enough.
Thanks
Each of the various screen stack creation functions needs a different set of parameters to construct it.
I think that's the problem here, and you need to think of an abstraction to make your map work. After all, how are these extra arguments actually provided by the calling code? When the calling code knows which class is instantiated and can provide the arguments, you don't need the map. When it doesn't, the map is of no use, since you can't tell which arguments to provide to the stack creation function.
The most likely solution is to boost::bind away all the arguments of your StackCreationFunction that would normally go into the ... and register this bound version of the function in the unordered_map, which can then be freed of the ....
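A minimal sketch of what that could look like (the stub classes, factory name, and parameter values are invented for illustration; the point is that everything beyond the common signature gets bound before registration):

#include <string>
#include <boost/bind.hpp>
#include <boost/function.hpp>
#include <boost/unordered_map.hpp>

// Stand-ins for the real classes from the question.
class Engine {};
class ScreenManager {};
class ScreenStack {};

// Common signature: everything stack-specific has already been bound away.
typedef boost::function<ScreenStack (Engine&, ScreenManager&)> StackCreationFunction;

// Hypothetical battle-stack factory with its own extra parameters.
ScreenStack CreateBattleStack(Engine& /*engine*/, ScreenManager& /*manager*/,
                              const std::string& /*enemyParty*/, int /*difficulty*/)
{
    return ScreenStack();
}

void RegisterCreators(boost::unordered_map<std::string, StackCreationFunction>& registry)
{
    // Bind the extra arguments now; the map only ever sees the common signature.
    registry["battle"] = boost::bind(&CreateBattleStack, _1, _2,
                                     std::string("goblin_camp"), 2);
}

The part of the code that knows the battle-specific parameters does the binding; the screen manager itself only ever deals with the uniform StackCreationFunction.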
I would suggest that rather than having something external register into your system, you think about having your system call the external entity. In this case your system would locate and load a series of plugins that have specific entry points. You will need to package your variable-length argument list in some conventional container, either an array or a data structure.
I'm not sure if that answers your question, but if your concern is to call various functions of different signatures from an interpreter-like code, you could (under GNU/Linux at least) use the LibFFI (a library for foreign function interface) for that.
I'm writing some low-level window code for a window in X (in C++), and I want to prevent the user from either maximising or minimising the window. I don't mind whether this is done by rejecting the request to resize or by removing the buttons themselves; however, I am tied to X and can't use Qt or other higher-level libraries which I know provide this functionality.
At the moment all I have managed to do is intercept the ResizeRequest event and then set the window size back using XResizeWindow... but this causes the window to momentarily maximise and then return to its original state. Is there a way to directly reject a ResizeRequest? That would seem to be the proper way to handle this, but a fair bit of googling and document trawling has not come up with a solution.
Thanks,
James
You can't.
Essentially you would be fighting like hell against the window manager (and against the user in the end).
E.g., you could watch PropertyNotify events to check whether your window (or rather the window your window is attached to, provided by the window manager) gets minimized, and then unminimize it. And then the user minimizes it again, or the window manager does. Technically you can fight it, but I would strongly advise against it.
That said: you can try to give the window manager some hints about what you think is appropriate for the window. See http://standards.freedesktop.org/wm-spec/1.3/ar01s05.html#id2523223:
_NET_WM_ALLOWED_ACTIONS
is a property the window manager maintains per window (to tell other tools what is possible with that window). One such action is
_NET_WM_ACTION_RESIZE indicates that the window may be resized. (Implementation note: Window Managers can identify a non-resizable window because its minimum and maximum size in WM_NORMAL_HINTS will be the same.)
So, if your users are using a window manager which interprets WM_NORMAL_HINTS correctly and drops any resizing, maximizing and minimizing, then you can count yourself lucky.
What do you really want to achieve? Some kind of kiosk mode? Some kind of trade-fair mode where people walking by cannot "shut down", close, resize, or fiddle around with the app you are presenting?
If so: consider running an X session without any window manager involved at all. Just start your app as big as you need it and you're done.
Technically you can't prevent anything, as WMs can do whatever they want, but most reasonable window managers will let you control this.
The preferred modern way to do it is to set _NET_WM_WINDOW_TYPE semantic type if any of those are applicable. For example, in many WMs a dialog type may imply not-maximizable.
http://standards.freedesktop.org/wm-spec/1.3/
It sounds like none of these apply to your app though probably, so you'll have to set the specific hints.
To avoid maximization you just want to make the window not-resizable. As you've discovered, "fighting" the resize by just resizing back is a Bad Idea. It has infinite loop potential among other things.
XSetWMSizeHints() is the correct way to avoid maximization. Set min size = max size, and voila: not resizable.
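A minimal sketch of that, assuming dpy and win are your already-opened Display* and Window (here using the XSetWMNormalHints() convenience wrapper, which stores the hints in WM_NORMAL_HINTS):

#include <X11/Xlib.h>
#include <X11/Xutil.h>

// Make 'win' non-resizable by pinning its minimum and maximum size
// to the same values.
void makeNonResizable(Display* dpy, Window win, int width, int height)
{
    XSizeHints* hints = XAllocSizeHints();
    hints->flags = PMinSize | PMaxSize;
    hints->min_width  = hints->max_width  = width;
    hints->min_height = hints->max_height = height;
    XSetWMNormalHints(dpy, win, hints);
    XFree(hints);
}

With min and max pinned, spec-following window managers disable resizing and usually drop the maximize control on their own.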
To avoid minimization, you have to use a bit of old legacy cruft called the Mwm hints. Unfortunately this involves cut-and-pasting a struct definition and then setting a property to the bits of the struct.
I just googled for MWM hints docs, and one of the results is me suggesting documenting them, 9 years ago ;-)
http://mail.gnome.org/archives/wm-spec-list/2001-December/msg00044.html
Unfortunately, none of the results are actual docs.
You can likely figure it out from http://git.gnome.org/browse/gtk+/tree/gdk/x11/MwmUtil.h and gdk_window_set_mwm_hints() http://git.gnome.org/browse/gtk+/tree/gdk/x11/gdkwindow-x11.c#n4389
MwmUtil.h is the struct that's cut-and-pasted everywhere (into most WMs and toolkits).
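For illustration, a hedged sketch of what that cut-and-paste usually looks like (the struct layout and flag values follow the commonly copied MwmUtil.h; double-check against that file before relying on them):

#include <X11/Xlib.h>

// Copied-by-convention Motif WM hints structure (see MwmUtil.h).
struct MotifWmHints {
    unsigned long flags;
    unsigned long functions;
    unsigned long decorations;
    long          input_mode;
    unsigned long status;
};

static const unsigned long MWM_HINTS_FUNCTIONS = 1L << 0;
static const unsigned long MWM_FUNC_MOVE       = 1L << 2;
static const unsigned long MWM_FUNC_CLOSE      = 1L << 5;

// Allow only move and close: no resize, no minimize, no maximize.
void restrictWindowFunctions(Display* dpy, Window win)
{
    MotifWmHints hints = {};
    hints.flags = MWM_HINTS_FUNCTIONS;
    hints.functions = MWM_FUNC_MOVE | MWM_FUNC_CLOSE;

    Atom mwmHintsAtom = XInternAtom(dpy, "_MOTIF_WM_HINTS", False);
    XChangeProperty(dpy, win, mwmHintsAtom, mwmHintsAtom, 32,
                    PropModeReplace,
                    reinterpret_cast<unsigned char*>(&hints),
                    5 /* number of 32-bit (long) elements */);
}

As the answer notes, whether the WM honours these bits varies; the GDK sources linked above are the closest thing to authoritative documentation.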
The _NET_WM_ALLOWED_ACTIONS hint is set on your window by the WM indicating which features the WM has decided to put on the window. The main purpose of this hint is that pagers and task lists and other desktop components can then offer the matching actions for the window.
The specs that cover all this are the ICCCM (old spec, still mostly valid) and the EWMH (new extensions and clarifications, since the ICCCM left lots of things unaddressed).
For gory details, try the source code... for example recalc_window_features() in metacity's window.c file, currently on line 6185 http://git.gnome.org/browse/metacity/tree/src/core/window.c#n6185
A philosophical adjustment when coding for X: mileage will vary with window manager. The "mainstream" ones lots of people use generally will follow the specs and work about as you'd expect. However, there are all kinds of WMs out there, some broken, others deliberately quirky. The worst thing you can do is try to "fight" or work around the WM, because basically all ways of doing that will end up breaking the app when running with a sane WM. Your best bet is make things follow the specs, work with the normal WMs, and if you get users upset that they can resize your not-resizable window because their WM allows that, you just have to tell them to complain to whoever provides that WM. The whole point of the pluggable WM design is that the WM determines some of this behavior, rather than the app.
Good luck. Modern X is pretty complex and coding Xlib with no toolkit is kind of asking for things to be... not quite right. But you can probably get it going well enough. :-P
It's an old question, but there is an unofficial way, supported by most window managers, to do such things: _MOTIF_WM_HINTS.
Look here: Disable actions, move, resize, minimize, etc using python-xlib for sample code.