I'm trying to create a subclass of QUndoCommand that represents a movement of one or more QGraphicsItem. If several items are moved at once (by selecting them first), this should be represented as a single command within the QUndoStack.
Since the whole movement logic (including selections) is already implemented by QGraphicsScene, I'm not sure what would be the best place to create the undo commands.
The undoframework example provided by Qt subclasses QGraphicsScene and overrides mousePressEvent() / mouseReleaseEvent() to perform some manual hit testing. Is this really the way to go?
I'm afraid that this approach could miss some special cases, so that the generated undo commands would not reflect exactly the movement performed by Qt's internal implementation; some items could have the ItemIsMovable flag unset, for example. Also, when moving multiple items at once, it seems better to store just the delta movement instead of the old and new positions.
It would be great if you could provide some short sample code/sketch for clarification.
I personally would wrap and encapsulate QGraphicsScene and implement the undo/redo stuff there. This allows you to define your own, cleaner interface, free of the clutter of all the functionality you won't need. It will also make it easier to replace QGraphicsScene with another API if such a need arises in the future, and it makes your core stuff more portable.
Many of the high-level classes in Qt, especially those for graphics, are not exactly what I'd call "cutting edge", and the same applies even more so to the rudimentary examples Qt provides for them. There are several professional graphics applications that use Qt, but none of them uses stuff like QGraphicsScene. I suppose the reason for this is the same one I extrapolated from my own experience using Qt: as big a framework as it may be, there is a lot of stuff that's missing, and a lot of stuff that just doesn't work the way people have come to expect from professional software, or even from common logic in such workflows. Qt was developed by very good programmers, but it seems they didn't have someone experienced in actual graphics application workflows to advise them.
Focus on a wrapper that features the functionality you need, and make it work with QGraphicsScene instead of using it directly. This way there is less chance that you miss something, and your design is more self-contained and cleaner too. The same applies to the undo API: don't feel obligated to use Qt's. I tried it myself and ended up implementing my own, as I found the one provided by Qt rather "stiff".
In the case of moving several items at once: yes, that should create a single undo entry, one that undoes the move of the whole selection group with all the selected items in it. Whether you use a delta or absolute positions is entirely up to you. The delta does make a little more sense when you move multiple selected items, since the selection group is not really an item with a position.
Another consideration, if you plan on making history persistent across different sessions - you can't rely on pointers, since those will be arbitrary each time you run your application. My strategy is to implement a unique integer ID and an associative registry which stores the id and pointer for each item. Then you use the ID for the history, and that will internally map to the pointer of the actual item.
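To make this concrete, here is a minimal sketch of such a command. ItemRegistry and MoveCommand are names I made up; only QUndoCommand, QGraphicsItem, QPointF, QHash and QList are real Qt types:

#include <QUndoCommand>
#include <QGraphicsItem>
#include <QHash>
#include <QList>
#include <QPointF>
#include <utility>

// Hypothetical registry mapping persistent integer IDs to live item
// pointers, so the history can survive across sessions (pointers cannot).
class ItemRegistry
{
public:
    void add(int id, QGraphicsItem *item) { m_items[id] = item; }
    QGraphicsItem *itemOf(int id) const { return m_items.value(id); }
private:
    QHash<int, QGraphicsItem *> m_items;
};

// One command per completed drag; a single delta is shared by all moved items.
class MoveCommand : public QUndoCommand
{
public:
    MoveCommand(ItemRegistry *reg, QList<int> ids, QPointF delta)
        : m_reg(reg), m_ids(std::move(ids)), m_delta(delta)
    {
        setText("move items");
    }

    void undo() override { moveAll(-m_delta); }

    void redo() override
    {
        // QUndoStack::push() calls redo() immediately; the scene has
        // already performed the first move, so skip it once.
        if (m_first) { m_first = false; return; }
        moveAll(m_delta);
    }

private:
    void moveAll(const QPointF &d)
    {
        for (int id : m_ids)
            if (QGraphicsItem *item = m_reg->itemOf(id))
                item->moveBy(d.x(), d.y());
    }

    ItemRegistry *m_reg;
    QList<int> m_ids;
    QPointF m_delta;
    bool m_first = true;
};

The wrapper around QGraphicsScene would record the selected item IDs and the accumulated delta when a drag finishes, then push one MoveCommand onto the stack.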
In this presentation SceneGraph and SceneTree are presented.
ST (SceneTree) is essentially an unfolded synchronized copy of SG (SceneGraph).
Since you are going to need the ST anyway, because:
it offers instances, on which you need to save attributes
you need to traverse the ST anyway to update transformations and other cascading variables, such as visibility
even if the SG offers less redundancy, what is the point of having the SG in the first place?
To quote Markus Tavenrath, the guy who gave that presentation, when he was asked the same question (http://pastebin.com/SRpnbBEj, the result of an IRC discussion starting at ~9:30):
[my emphasis]
One argument for a SceneGraph in general is that people have their
custom SceneGraphs implementation already and need to keep them for
legacy reasons and this SceneGraph is the reason why the CPU is the
bottleneck during the rendering phase. The goal of my talks is to show
ways to enhance a legacy SceneGraph in a way that the GPU can get the
bottleneck. One very interesting experiment would be adding the
techniques to OpenSceneGraph…
Besides this a SceneGraph is way more powerful with regards to data
manipulation, data access and reuse. I.e. the diamond shape can
be really powerful if you want to instantiate complex objects with
multiple nodes. Another powerful feature in our SceneGraph is the
properties feature which will be used for animation in the future
(search for LinkManager). With the LinkManager you can create a
network of connected properties and let data flow along the links in a
defined order. What’s missing for the animations are Curve objects
which can be used as input for animations. All those features have a
certain cost on the per object level with regards to memory size and
more memory means more touched cachelines during traversal which means
more CPU clocks spent on traversal.
The SceneTree is designed to keep only the hierarchy and the few
attributes which have to be propagated along the hierarchy. This way
the nodes are very lightweight and thus traversal is, if required,
very efficient. This also means that a lot of features provided in the
SceneGraph are no longer available in the SceneTree. Though with the
knowledge gained by the SceneTree it should be possible to write a
SceneGraph which provides most of the performance features of the
SceneTree.
Depending on the type of application you’re writing you might even
decide that the SceneTree is too big and you could write your
application directly on an interface like RiX which doesn’t have the
hierarchy anymore and thus eliminating yet another layer ;-).
So the first reason is that people are already used to using a scene graph, and many custom implementations exist.
Other than that, the scene tree is really a pruned scene graph that keeps only the necessary components, which means a scene graph is more useful to artists and modelers, as it better abstracts the scene. A complex property may, for example, be an RNG seed that affects the look of each person in a crowd.
In a scene graph, that root Person node would just have a single seed property. Its children would select the exact mesh and color based on that seed. But in the expanded tree, the unused meshes for alternate hairstyles, clothing and such will already have been pruned.
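A toy sketch of that difference, with every name invented for illustration; the graph node keeps the single seed property and its variant lists, while the expanded tree instance keeps only the chosen meshes:

#include <cstdint>
#include <string>
#include <vector>

// Scene-graph side: one Person node, one 'seed' property; the children
// select a concrete mesh from the seed at expansion time.
struct PersonNode
{
    std::uint32_t seed = 0;
    std::vector<std::string> hairVariants{"hair_a", "hair_b"};
    std::vector<std::string> clothVariants{"cloth_a", "cloth_b", "cloth_c"};
};

// Scene-tree side: the unfolded instance, pruned of unused variants.
struct PersonInstance
{
    std::string hairMesh;
    std::string clothMesh;
};

// "Unfolding" the graph node into a tree instance.
PersonInstance expand(const PersonNode &n)
{
    return { n.hairVariants[n.seed % n.hairVariants.size()],
             n.clothVariants[(n.seed / 7) % n.clothVariants.size()] };
}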
Context
I'm dealing with a large, nested, structured document, which shows up on the screen as a scene graph. The user needs to have the ability to modify this document -- which I am currently using functional zippers to implement.
Now, currently, I'm redrawing the entire document after every change. However, with a scene graph, I would only need to modify the part that changes.
However (without intending to start a flamewar), I'm increasingly starting to feel that this problem is fundamentally a problem about state, not functions. In particular:
user actions (up, down, left, right, insert char, delete char) are about manipulating state, not doing calculations
there's only one copy of the document at a time, it's only hit by one thread, and there's no gain from sharing
Current Approach
I use functional zippers to store the document and to provide state manipulation functions. Then, I continuously redraw.
Solution I suspect might be correct:
Store everything in a scene graph, ditch functional zippers, and after a modification, directly modify the part of the scene graph that changes.
Technical Question:
Have others dealt with this? Is there a technical, formal name for this problem? (Equiv of "Dining Philosophers" in concurrency?) Is there a well established solution to this problem?
Disclaimer
This question is a bit soft. I can't paste a minimal broken code example, since
* my code is around 5K LOC
* it's not for me to share publicly
However, this is a real technical problem, and I hope others can share their answers.
Thanks!
Your fundamental process sounds fine. It is flexible in its current form because you have the option of having a separate thread update the tree, for instance, and you can implement an "undo" button rather trivially. If after each modification you return both the new tree and the path to the part that changed, you could then update only the part that changed in the display. Lots of people have stronger opinions than me on separating "models" and "views", though I suspect that at least some of their wisdom applies here.
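Here is a rough sketch of that idea (the node type and function names are mine, not from the question's code): the edit returns both the new root and the path to the changed node, so the display layer can redraw only that subtree:

#include <cstddef>
#include <memory>
#include <string>
#include <vector>

struct Node
{
    std::string text;
    std::vector<std::shared_ptr<const Node>> children;
};

using NodePtr = std::shared_ptr<const Node>;
using Path = std::vector<std::size_t>; // child indices from the root

struct EditResult
{
    NodePtr newRoot; // structurally shares everything off the path
    Path changed;    // points at the one rebuilt subtree
};

// Persistent update: copy only the nodes along the path, share the rest.
NodePtr updateAt(const NodePtr &node, const Path &path, std::size_t depth,
                 const std::string &newText)
{
    auto copy = std::make_shared<Node>(*node);
    if (depth == path.size())
        copy->text = newText;
    else
        copy->children[path[depth]] =
            updateAt(node->children[path[depth]], path, depth + 1, newText);
    return copy;
}

EditResult setText(const NodePtr &root, const Path &path,
                   const std::string &newText)
{
    return { updateAt(root, path, 0, newText), path };
}

// The renderer can then redraw only the subtree at result.changed instead
// of the whole document.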
Basically, I'm unable to find any good articles for developing your own GUI, that deal with good practices, the basic structure, event bubbling, tips and avoiding all the usual pitfalls. I'm specifically not interested on how to build some proof-of-concept GUI in 5 minutes that just barely works... nor am I interested in building the next future GUI.
The purpose is to build a reasonably capable GUI to be used for tools for a game; however, the tools will exist within the game itself, so I don't want to use existing large-scale GUIs, and I find most game GUIs rather bloated for what I need. And I enjoy the experience of doing it myself.
I have done a GUI in the past which worked very well to a point, however, due to some bad design decisions and inexperience it could only do so much (and was built in Flash so it got a lot of stuff for free). So I would like to really understand the basics this time.
A few tips -
1) Pick the style in which your UI will work: will it be stateless? If so, how are you going to handle events appropriately? With a stateless UI you may have to re-evaluate your UI code twice in order to get up-to-date event changes from the user's side. If your UI stores state, you won't have to care about handling events that way, but it will limit your UI when it comes to rapid mutations and rebuilds.
2) Do not rely on OO too much; virtual methods are not the fastest thing in the world, so use them with care, though having some sort of inheritance-based structure might help. Beware of dynamic_cast and RTTI if you use objects; they will slow you down. Instead, set up an enum and a get_type() method for every widget class, and do manual checks for castability (see the sketch after this list).
3) Try to separate the looks from the UI logic/layout.
4) If you want dynamic windows, layouts, etc., then you'll have to handle alignment, clamping, positions, etc., and their updates. If you just want statically positioned widgets, it'll be much easier.
5) Do not overdesign; you won't benefit from it.
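For tip 2, a minimal sketch of the enum/get_type() approach, with made-up widget names:

// Cheap manual RTTI: every widget reports its concrete type via an enum,
// so callers can check castability without dynamic_cast.
enum class WidgetType { Button, EditBox, CheckBox };

class Widget
{
public:
    virtual ~Widget() = default;
    virtual WidgetType get_type() const = 0;
};

class EditBox : public Widget
{
public:
    WidgetType get_type() const override { return WidgetType::EditBox; }
    void set_text(const char * /*text*/) { /* store and redraw */ }
};

// A static_cast guarded by the manual type check replaces dynamic_cast.
void fill(Widget &w)
{
    if (w.get_type() == WidgetType::EditBox)
        static_cast<EditBox &>(w).set_text("hello");
}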
There is not really anything too specific I can tell you; having some concrete question would help, maybe?
Take a look at the docs for existing GUI libraries. That should give you details on proven designs to handle the issues you've run into.
You might want to start with one you're familiar with, but one that I think is designed quite well is AppKit. Its API is Obj-C, so it would require some adjustment if you wanted to copy it, but the docs give all kinds of details about how objects interact to handle events, for example, and how layout constraints work, which should be directly applicable to designing an OO GUI in most any language.
I'm working on a game and I'm finding myself having a ton of listeners.
For example, when the language of the game changes, the LanguageManager sends a languageChanged event which causes all interested GUI elements to get the new text and redraw themselves. This means every interested GUI element implements the LanguageListener class.
Another thing I find myself doing a lot is injecting dependencies. The GuiFactory needs to set fonts so it has a pointer to the FontManager (which holds an array of TTF fonts), it also has a GuiSkinManager so it can get Image* based on the skin of the game. Is this the right way to do this?
My main concern, however, is with the listeners and the massive number of them, especially for the GUI. The GUI library I'm using is almost entirely listener-driven. I can get away with ActionListeners most of the time, but I still have nested ifs like if (event.getSource() == button2), etc.
In general what type of design patterns are used to avoid these issues in large game projects that are much more complex than mine?
Thanks
Listeners are often used in huge applications as well. Don't be scared of them. I'm working on one of the biggest Java apps and I have seen tons of different listeners there, so it looks like that's the way to go.
For smaller applications, the Mediator pattern could be a way around it if you really hate listening for events ;)
Instead of having one interface per event type containing a single callback (i.e. LanguageListener), if you were to follow the Observer design pattern, you would have a single xObserver interface with a callback for every kind of event.
class GuiElementObserver
{
public:
    virtual ~GuiElementObserver() = default;

    // Empty default implementations; concrete classes override only
    // the events they care about.
    virtual void LanguageChangedEvent(/* ... */) {}
    virtual void OtherEvent() {}
};
The default empty implementations allow you to override, in the concrete classes, only the events you are interested in.
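For example, a concrete element (a hypothetical TitleLabel) that only cares about language changes:

class TitleLabel : public GuiElementObserver
{
public:
    void LanguageChangedEvent(/* ... */) override
    {
        // Fetch the translated text and redraw; OtherEvent() keeps
        // its empty default implementation.
    }
};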
Concerning dependencies, whenever I refactor code, I like to use the Single Responsibility Principle (http://en.wikipedia.org/wiki/Single_responsibility_principle). You said that the GuiFactory needs to set fonts... But is it really the responsibility of a factory to set fonts?
If you limit the responsibilities of your classes to what they were originally meant to do, you should have leaner classes, cleaner code and fewer dependencies. In this case, maybe adding a font manager and moving the font related code there could be a solution.
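As a rough sketch of that separation, with stand-in types guessed from the question: the factory's only responsibility is assembling widgets, while font and skin lookups stay in the injected managers:

#include <string>
#include <utility>

// Minimal stand-ins for the question's managers; all signatures are my guesses.
struct Font {};
struct Image {};
struct FontManager { Font defaultFont() const { return {}; } };
struct GuiSkinManager { Image *image(const std::string &) { return nullptr; } };

struct Button
{
    explicit Button(std::string label) : label(std::move(label)) {}
    void setFont(Font) {}
    void setImage(Image *) {}
    std::string label;
};

// Constructor injection: the factory assembles, the managers own their domains.
class GuiFactory
{
public:
    GuiFactory(FontManager &fonts, GuiSkinManager &skins)
        : m_fonts(fonts), m_skins(skins) {}

    Button makeButton(const std::string &label)
    {
        Button b(label);
        b.setFont(m_fonts.defaultFont()); // lookup delegated, not owned
        b.setImage(m_skins.image("button"));
        return b;
    }

private:
    FontManager &m_fonts;
    GuiSkinManager &m_skins;
};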
I'm in the early stages of developing a C++ multi platform (mobile) application which has a graphical user interface. I'm trying to find a good way of abstracting the actual UI/windowing implementation. Here is basically what I have tried so far:
I created an interface hierarchy
Screen
+Canvas
+Form
+Dialog
Control
+EditBox
+CheckBox
+...
and I also have a class, Application, which basically uses all of these interfaces to implement the UI logic. The Application class also provides abstract factory methods which it uses to create UI class instances.
These interfaces all have implementations. For example, for Windows Mobile, Win32Form implements Form, Win32Canvas implements Canvas and Win32Dialog implements Dialog. As you can see, the problem is that the implementations lose the hierarchy, i.e. Win32Dialog doesn't extend Win32Form.
This becomes a problem in, for example, the method Form::addControl(Control &ctrl). In the Windows Mobile version of the application, I know the ctrl parameter is one of the Win32... control implementations. But since the hierarchy is lost, there is no way of knowing if it's a Win32EditBox or a Win32CheckBox, which I really need to know in order to perform any platform specific operations on the control.
What I am looking for is some design pattern on how to solve this problem. Note that there is no requirement to retain this interface hierarchy solution, that is just my current approach. I'll take any approach that solves the problem of letting the UI logic be separate from the several different UI implementations.
Please don't tell me to use this or that UI library; I like coding stuff from scratch and I'm in this for the learning experience... ;-)
Some inspiration from my side:
Don't inherit Form from a Screen. It's not an "is-a" relationship.
Screen
Canvas
GUIObject
+Form
+Dialog
+Control
++Button
++EditBox
++CheckBox
++...
+Panel
More detailed description:
Screen
* "has-a" canvas
* provides display-specific functions
* don't be tempted to inherit from Canvas
Canvas
* your abstracted drawing environment
* target is a screen canvas, form canvas, image, memory...
GUIObject
* called Wnd in some hierarchies
* usually provides some introspection capabilities
* usually provides a message-processing interface
Form
* in some toolkits actually called a "dialog"
* "has-a" canvas on which you can draw
* "has-a" list of controls or a client-area Panel
* "has-a" title, scrollbars, etc.
Dialog
* what I like to call a Dialog is the standardized YesNo, OpenFile, etc.; basically a simplified Form (no close button, no menu, etc.)
* may or may not be inherited from Form
Controls are hopefully self-explanatory (a Panel is an area with controls inside, with optional scrollbars).
This should work. However, how to exploit the specific features of each platform without ruining the genericity of this interface and/or its ease of use is the difficult part. I usually try to separate all drawing from these basic classes and have some platform-specific "Painter" that can take each basic object and draw it in a platform-specific way. Another solution is a parallel hierarchy for each platform (I tend to avoid this one, but it's sometimes needed, for example if you need to wrap window handles), or a "capability bits" approach.
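A compressed sketch of that Painter separation, with all names invented: the generic widgets hold only state, and one Painter subclass per platform does the drawing:

// Platform-independent widgets hold only state, no drawing code.
class Button  { /* generic state: label, rect, ... */ };
class EditBox { /* generic state: text, rect, ... */ };

// All drawing lives behind one interface, implemented once per platform.
class Painter
{
public:
    virtual ~Painter() = default;
    virtual void draw(const Button &) = 0;
    virtual void draw(const EditBox &) = 0;
};

class Win32Painter : public Painter
{
public:
    void draw(const Button &) override  { /* Win32 drawing calls */ }
    void draw(const EditBox &) override { /* Win32 drawing calls */ }
};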
You can use containment. For example the class that implements Dialog would contain a Win32Dialog as a property, and the class that implements Form would contain a Win32Form; all the accesses to the Dialog or Form members would delegate to accesses to the contained classes. This way, you can still make Dialog inherit from Form, if this is what makes sense for your project.
Having said that, I see something strange in your interface design. If I understood correctly (I may have not; if so, please correct me), Dialog inherits from Form, and Form inherits from Screen. Ask yourself this to validate the design: Is a Dialog also a Form? Is it also a Screen? Inheritance should be used only when an is-a relationship makes sense.
The GOF pattern that addresses this issue is the Bridge Pattern. Konamiman's answer explains it, you use delegation/containment to abstract one of the concerns.
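A minimal sketch of that Bridge/containment idea, with invented names: the public hierarchy keeps Dialog inheriting from Form, while the platform specifics live in a contained implementation object:

// The platform side: one implementation interface, one class per platform.
class FormImpl
{
public:
    virtual ~FormImpl() = default;
    virtual void show() = 0;
};

class Win32Form : public FormImpl
{
public:
    void show() override { /* Win32-specific code */ }
};

// The public side: members delegate to the contained implementation.
class Form
{
public:
    explicit Form(FormImpl *impl) : m_impl(impl) {}
    void show() { m_impl->show(); }
protected:
    FormImpl *m_impl;
};

class Dialog : public Form // the logical hierarchy survives intact
{
public:
    explicit Dialog(FormImpl *impl) : Form(impl) {}
};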
I've spent years doing mobile phone development.
My recommendation is that you do not fall into the 'make an abstraction' trap!
Such frameworks are more code (and tiresome hell, not fun!) than just implementing the application on each platform.
Sorry; you're looking for fun and experience, and I'd say there are funner places to get it.
First, the primary problem seems to be that there is no abstraction in your "abstraction" layer.
Implementing the same controls as are provided by the OS, and simply prefixing their names with "Win32" does not constitute abstraction.
If you're not going to actually simplify and generalize the structure, why bother writing an abstraction layer in the first place?
For this to actually be useful, you should write the classes that represent concepts you need. And then internally map them to the necessary Win32 code. Don't make a Form class or a Window class just because Win32 has one.
Second, if you need the precise types, I'd say ditch the OOP approach, and use templates.
Instead of the function
void Form::addControl(Control &ctrl)
define
template <typename Ctrl_Type>
void Form::addControl(Ctrl_Type &ctrl)
now the function knows the exact type of the control, and can perform whatever type-specific actions you want. (Of course, if it has to save the control into a list in the form, you lose the type information again. But the type information is propagated that much further, at least, which may help.)
The collection could store something like boost::variant objects, even, which would allow you to retain some type information even there.
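A sketch of how those two ideas combine, using std::variant rather than boost::variant and made-up widget types:

#include <type_traits>
#include <variant>
#include <vector>

struct EditBox  { void setPlaceholder(const char *) {} };
struct CheckBox { void setTristate(bool) {} };

// The variant keeps the concrete alternative instead of erasing it
// behind a base-class pointer.
using AnyControl = std::variant<EditBox, CheckBox>;

class Form
{
public:
    // The exact type is known here, so type-specific setup can happen
    // before the control goes into the collection.
    template <typename Ctrl_Type>
    void addControl(Ctrl_Type &ctrl)
    {
        if constexpr (std::is_same_v<Ctrl_Type, EditBox>)
            ctrl.setPlaceholder("...");
        m_controls.push_back(ctrl);
    }

private:
    std::vector<AnyControl> m_controls;
};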
Otherwise, if you're going the OOP route, at least do it properly. If you're going to hide both EditBoxes and CheckBoxes behind a common IControl interface, then it should be because you don't need the exact type. The interface should be all you need. That's the point in interfaces.
If that contract isn't respected, if you're going to downcast to the actual types all the time, then your nice OOP hierarchy is flawed.
If you don't know what you're trying to achieve, it's going to be hard to succeed. What is the purpose of this abstraction? By trying to write an "abstraction of the user interface exposed by Windows", you're achieving nothing, since that's what Windows already exposes. Write abstraction layers if the abstraction exposed by Windows doesn't suit your needs. If you want a different hierarchy, or no hierarchy at all, or a hierarchy without everything being a Form or a Control, then writing an abstraction layer makes sense. If you just want a layer that is "the same as Win32, but with my naming convention", you're wasting your time.
So my advice: start with how you'd like your GUI layer to work. And then worry about how it should be mapped to the underlying Win32 primitives. Forget you ever heard of forms or dialog boxes or edit boxes, or even controls or windows. All those are Win32 concepts. Define the concepts that make sense in your application. There's no rule that every entity in the GUI has to derive from a common Control class or interface. There's no rule that there has to be a single "Window" or "Canvas" for everything to be placed on. This is the convention used by Windows. Write an API that'd make sense for you.
And then figure out how to implement it in terms of the Win32 primitives available.
Interesting problem, so a couple of thoughts:
first: you could always use MI, so that Win32Dialog inherits from both Win32Form and Dialog; since you then get a nice diamond, you need virtual inheritance
second: try to avoid MI ;)
If you want to use specific controls, then you are in a Win32-specific method, right? I hope you are not talking about downcasting :/
Then how come this Win32-specific control method has been handed a generic object, when it could have received a more precise type?
You could perhaps apply the Law of Demeter here. Which works great with a Facade approach.
The idea is that you instantiate a Win32Canvas and then ask the canvas to add a dialog (with some parameters), but you never actually manipulate the dialog yourself; this way the canvas knows its actual type and can execute actions specific to the platform.
If you want to control things a bit, have the canvas return an Id that you'll use to target this specific dialog in the future.
You can factor out most of the common implementation if the canvas inherits from a template (templated on the platform).
On the other hand: Facade actually means a bloated interface...
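Still, a minimal sketch of that Demeter-friendly facade (the whole canvas API below is invented): the caller only ever holds an integer handle, and the concrete platform dialog never leaves the canvas:

#include <memory>
#include <string>
#include <unordered_map>

// Platform-specific class, visible only inside the canvas.
class Win32Dialog
{
public:
    void setTitle(const std::string &) { /* Win32-specific code */ }
};

class Win32Canvas
{
public:
    int addDialog(const std::string &title)
    {
        const int id = m_nextId++;
        auto dlg = std::make_unique<Win32Dialog>();
        dlg->setTitle(title); // platform-specific work stays in here
        m_dialogs[id] = std::move(dlg);
        return id; // caller keeps only the handle
    }

    void setDialogTitle(int id, const std::string &title)
    {
        m_dialogs.at(id)->setTitle(title);
    }

private:
    int m_nextId = 0;
    std::unordered_map<int, std::unique_ptr<Win32Dialog>> m_dialogs;
};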