I'm in the early stages of developing a C++ multi-platform (mobile) application which has a graphical user interface. I'm trying to find a good way of abstracting the actual UI/windowing implementation. Here is basically what I have tried so far:
I created an interface hierarchy
Screen
+Canvas
+Form
+Dialog
Control
+EditBox
+CheckBox
+...
and I also have a class, Application, which basically uses all of these interfaces to implement the UI logic. The Application class also provides abstract factory methods which it uses to create UI class instances.
These interfaces all have implementations. For example, for Windows Mobile, Win32Form implements Form, Win32Canvas implements Canvas and Win32Dialog implements Dialog. As you can see, the problem is that the implementations lose the hierarchy, i.e. Win32Dialog doesn't extend Win32Form.
This becomes a problem in, for example, the method Form::addControl(Control &ctrl). In the Windows Mobile version of the application, I know the ctrl parameter is one of the Win32... control implementations. But since the hierarchy is lost, there is no way of knowing if it's a Win32EditBox or a Win32CheckBox, which I really need to know in order to perform any platform specific operations on the control.
What I am looking for is some design pattern on how to solve this problem. Note that there is no requirement to retain this interface hierarchy solution, that is just my current approach. I'll take any approach that solves the problem of letting the UI logic be separate from the several different UI implementations.
Please don't tell me to use this or that UI library; I like coding stuff from scratch and I'm in this for the learning experience... ;-)
Some inspiration from my side:
Don't inherit Form from a Screen. It's not an "is-a" relationship.
Screen
Canvas
GUIObject
    Form
    Dialog
    Control
        Button
        EditBox
        CheckBox
        ...
    Panel
More detailed description:
Screen
"has-a" canvas
provides display specific functions
don't be tempted to inherit from Canvas
Canvas
your abstracted drawing environment
target is screen canvas, form canvas, image, memory...
GUIObject
called Wnd in some hierarchies
usually provides some introspection capabilities
usually provides message processing interface
Form
in some toolkits actually called "dialog"
"has-a" canvas on which you can draw
"has-a" list of controls or a client area Panel
"has-a" title, scrollbars, etc.
Dialog
I like to use "dialog" for the standardized YesNo, OpenFile etc. dialogs; basically a simplified Form (no close button, no menu, etc.)
may or may not be inherited from Form
Controls are hopefully self-explanatory (a Panel is an area with controls inside, with optional scrollbars).
This should work. The difficult part, however, is how to exploit the specific features of each platform without ruining the genericity of this interface and/or its ease of use. I usually try to separate all drawing from these basic classes and have some platform-specific "Painter" that can take each basic object and draw it in a platform-specific way. Another solution is a parallel hierarchy for each platform (I tend to avoid this one, but it is sometimes needed, for example if you need to wrap Windows handles), or a "capability bits" approach.
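A rough sketch of that Painter idea (all class names here are mine, not from an existing toolkit): the widgets stay platform-neutral, and each platform supplies one object that knows how to draw every widget type.

class EditBox  { /* platform-neutral state only */ };
class CheckBox { /* platform-neutral state only */ };

// One Painter per platform; the generic code only ever sees this interface.
class Painter {
public:
    virtual ~Painter() = default;
    virtual void draw(const EditBox& box) = 0;
    virtual void draw(const CheckBox& box) = 0;
};

// Windows Mobile implementation (hypothetical).
class Win32Painter : public Painter {
public:
    void draw(const EditBox& box) override { /* Win32-specific drawing of box */ }
    void draw(const CheckBox& box) override { /* Win32-specific drawing of box */ }
};

Adding a platform then means adding one Painter class rather than touching the widget hierarchy.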
You can use containment. For example the class that implements Dialog would contain a Win32Dialog as a property, and the class that implements Form would contain a Win32Form; all the accesses to the Dialog or Form members would delegate to accesses to the contained classes. This way, you can still make Dialog inherit from Form, if this is what makes sense for your project.
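A minimal sketch of that containment idea (the class names are taken from the question, the details are illustrative): the cross-platform Dialog keeps its place in the hierarchy and forwards the platform-specific work to a contained Win32 object.

#include <memory>
#include <utility>

// Hypothetical wrapper around the native dialog.
class Win32Dialog {
public:
    void show() { /* call the platform's dialog code here */ }
};

// The cross-platform hierarchy is preserved: Dialog still is-a Form...
class Form {
public:
    virtual ~Form() = default;
    virtual void show() = 0;
};

class Dialog : public Form {
public:
    explicit Dialog(std::unique_ptr<Win32Dialog> impl) : impl_(std::move(impl)) {}
    // ...and every platform-specific member just delegates to the contained object.
    void show() override { impl_->show(); }
private:
    std::unique_ptr<Win32Dialog> impl_;
};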
Having said that, I see something strange in your interface design. If I understood correctly (I may have not; if so, please correct me), Dialog inherits from Form, and Form inherits from Screen. Ask yourself this to validate the design: Is a Dialog also a Form? Is it also a Screen? Inheritance should be used only when an is-a relationship makes sense.
The GOF pattern that addresses this issue is the Bridge Pattern. Konamiman's answer explains it, you use delegation/containment to abstract one of the concerns.
I've spent years doing mobile phone development.
My recommendation is that you do not fall into the 'make an abstraction' trap!
Such frameworks are more code (and tiresome hell, not fun!) than just implementing the application on each platform.
Sorry; you're looking for fun and experience, and I'd say there are funner places to get it.
First, the primary problem seems to be that there is no abstraction in your "abstraction" layer.
Implementing the same controls as are provided by the OS, and simply prefixing their names with "Win32" does not constitute abstraction.
If you're not going to actually simplify and generalize the structure, why bother writing an abstraction layer in the first place?
For this to actually be useful, you should write the classes that represent concepts you need. And then internally map them to the necessary Win32 code. Don't make a Form class or a Window class just because Win32 has one.
Second, if you need the precise types, I'd say ditch the OOP approach, and use templates.
Instead of the function
Form::addControl(Control &ctrl)
define
template <typename Ctrl_Type>
void Form::addControl(Ctrl_Type &ctrl)
now the function knows the exact type of the control, and can perform whatever type-specific actions you want. (Of course, if it has to save the control into a list in the form, you lose the type information again. But the type information is propagated that much further, at least, which may help.)
The collection could store something like boost::variant objects, even, which would allow you to retain some type information even there.
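A sketch of that idea using std::variant, the standard counterpart of boost::variant (the control names come from the question, everything else is illustrative):

#include <variant>
#include <vector>

class Win32EditBox  { /* ... */ };
class Win32CheckBox { /* ... */ };

// Each stored control keeps its exact type inside the variant.
using AnyControl = std::variant<Win32EditBox*, Win32CheckBox*>;

class Form {
public:
    template <typename CtrlType>
    void addControl(CtrlType& ctrl) { controls_.push_back(&ctrl); }

    void layoutControls() {
        for (AnyControl& c : controls_) {
            // std::visit dispatches back to the concrete type without a downcast.
            std::visit([](auto* ctrl) { /* type-specific handling of *ctrl */ }, c);
        }
    }

private:
    std::vector<AnyControl> controls_;
};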
Otherwise, if you're going the OOP route, at least do it properly. If you're going to hide both EditBoxes and CheckBoxes behind a common IControl interface, then it should be because you don't need the exact type. The interface should be all you need. That's the point in interfaces.
If that contract isn't respected, if you're going to downcast to the actual types all the time, then your nice OOP hierarchy is flawed.
If you don't know what you're trying to achieve, it's going to be hard to succeed. What is the purpose of this layer? If you're trying to write an "abstraction of the user interface exposed by Windows", you're achieving nothing, since that's what Windows already exposes. Write abstraction layers if the abstraction exposed by Windows doesn't suit your needs. If you want a different hierarchy, or no hierarchy at all, or a hierarchy where not everything is a Form or a Control, then writing an abstraction layer makes sense. If you just want a layer that is "the same as Win32, but with my naming convention", you're wasting your time.
So my advice: start with how you'd like your GUI layer to work. And then worry about how it should be mapped to the underlying Win32 primitives. Forget you ever heard of forms or dialog boxes or edit boxes, or even controls or windows. All those are Win32 concepts. Define the concepts that make sense in your application. There's no rule that every entity in the GUI has to derive from a common Control class or interface. There's no rule that there has to be a single "Window" or "Canvas" for everything to be placed on. This is the convention used by Windows. Write an API that'd make sense for you.
And then figure out how to implement it in terms of the Win32 primitives available.
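As a toy illustration of that direction (the class is entirely hypothetical): the public type speaks the application's own vocabulary, and Win32 only ever appears inside the implementation.

// A concept the application actually needs; no Form, Control or Window in sight.
class PlayerStatusBar {
public:
    void setHealth(int percent) { health_ = percent; redraw(); }
    void setScore(long points)  { score_ = points;  redraw(); }
private:
    void redraw() { /* the Win32 build would call InvalidateRect/DrawText here */ }
    int  health_ = 100;
    long score_  = 0;
};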
Interesting problem, so a couple of thoughts:
first: you could always use MI so that Win32Dialog inherits from both Win32Form and Dialog; since you then get a nice diamond, you need virtual inheritance
second: try to avoid MI ;)
If you want to use specific controls, then you are in a Win32-specific method, right? I hope you are not talking about downcasting :/
Then how come this Win32-specific control method has been handed a generic object, when it could have been given a more precise type?
You could perhaps apply the Law of Demeter here. Which works great with a Facade approach.
The idea is that you instantiate a Win32Canvas, and then ask the canvas to add a dialog (with some parameters), but you never actually manipulate the dialog yourself; this way the canvas knows its actual type and can execute platform-specific actions.
If you want to control things a bit, have the canvas return an Id that you'll use to target this specific dialog in the future.
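A bare-bones sketch of that facade (the identifiers are mine): callers only ever hold an Id, and the canvas keeps the platform-specific objects to itself.

#include <map>
#include <memory>
#include <string>

class Win32Dialog { /* wraps the native dialog */ };

using DialogId = int;

class Win32Canvas {
public:
    // The caller only describes what it wants and gets an opaque Id back;
    // the canvas creates and keeps the platform-specific object itself.
    DialogId addDialog(const std::string& title) {
        DialogId id = nextId_++;
        dialogs_[id] = std::make_unique<Win32Dialog>();
        (void)title;   // would be forwarded to the real dialog
        return id;
    }

    void showDialog(DialogId id) {
        auto& dlg = dialogs_.at(id);
        (void)dlg;   // only the canvas touches the concrete Win32Dialog here
    }

private:
    DialogId nextId_ = 0;
    std::map<DialogId, std::unique_ptr<Win32Dialog>> dialogs_;
};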
You can alleviate most of the common implementation if Canvas inherits from a template (templated on the platform).
On the other hand: Facade actually means a bloated interface...
Related
I'm working on a fairly large project that involves 3D drawing, and I want to add some visualizers (for example, to see the bounding boxes of the objects) to make debugging easier. However, I'm having a problem in deciding how to go about this.
One option would be to create a public function to draw the visualizers, and call this function when I enable debugging from the UI. This would have the advantage of not modifying the existing functions, but extending the class with a new function. The disadvantage would be "creating dependencies", as one of my colleagues said: we would need to modify the base class and all the derived classes to add this function.
Another option would be to modify the existing drawing function so that it handles the drawing of the visualizers. This hides the implementation details, but also it seems to me it makes the code less modular.
Yet another option is extending the class, adding the visualizer in the drawing function, and swapping classes when debugging is enabled. Mixins would be of help, but C++ doesn't support that.
What would be the best approach to do this? I'm looking for a solution that is modular and respects the SOLID principles.
It looks like you are looking for the "Delegation Pattern". See http://en.wikipedia.org/wiki/Delegation_pattern
In software engineering, the delegation pattern is a design pattern in object-oriented programming where an object, instead of performing one of its stated tasks, delegates that task to an associated helper object. There is an Inversion of Responsibility in which a helper object, known as a delegate, is given the responsibility to execute a task for the delegator.
See also http://best-practice-software-engineering.ifs.tuwien.ac.at/patterns/delegation.html
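Applied to the visualizer question, a delegation sketch might look like this (all names invented for illustration): the drawable object hands the debug drawing off to an optional helper, so neither the base class nor the derived classes grow a new public method per visualizer.

#include <memory>
#include <utility>

// Helper that knows how to draw one kind of debug overlay, e.g. bounding boxes.
class DebugVisualizer {
public:
    virtual ~DebugVisualizer() = default;
    virtual void draw() = 0;
};

class Drawable {
public:
    virtual ~Drawable() = default;

    void draw() {
        drawGeometry();
        if (visualizer_) visualizer_->draw();   // delegate the debug drawing
    }
    void setVisualizer(std::unique_ptr<DebugVisualizer> v) { visualizer_ = std::move(v); }

protected:
    virtual void drawGeometry() { /* normal 3D drawing */ }

private:
    std::unique_ptr<DebugVisualizer> visualizer_;   // empty when debugging is off
};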
The question applies to the class QGraphicsView of the qt library.
However, the problem is more general. So, if I am not missing any special mechanism in qt, it can probably be discussed without knowing qt.
I am subclassing QGraphicsView to add some features that I need.
E.g. I have a ScalableView, PannableView and LabeledView to add independent functionality.
The subclassing I use for now is linear in the following sense:
ScalableView is derived from QGraphicsView.
PannableView is derived from ScalableView.
LabeledView is derived from PannableView.
Since those features are independent, there is a design flaw.
Applying the decorator pattern to work around this seems appropriate to me.
The problem there is, QGraphicsView is not an interface and there exists no interface class like QAbstractGraphicsView. So, for me it is not clear how that pattern could be implemented.
A different idea would be to use templates. So I could derive each of the views from a template parameter T. Then, I could make instantiations like ScalableView<PannableView<LabeledView>>.
Do you see any better solution to that? I would prefer a way to implement the decorator pattern in this situation, since I would like to avoid too many template classes that would increase compilation time.
A simple solution in Qt style would be to create a class that derives from QGraphicsView and simply has flags that control its behavior (whether it's scalable, pannable, labeled, etc.). The implementations of those behaviors would still be split into methods, so it's not as monolithic as it seems.
The decorator pattern can be of course easily implemented by defining an intermediate (shim) interface. The QGraphicsView does not need to implement that interface - the interface is only for the decorators to use.
The problem with deep inheritance is that it's impossible to finely control the interaction of the behaviors. The only control you have is the order of event processing. This may happen to be sufficient, but it has me somewhat worried. The decorator pattern implemented without embellishment shares this issue.
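A sketch of that shim-interface idea (the class names are illustrative): the decorators program against a small interface of their own; only the innermost wrapper touches the concrete QGraphicsView.

// Small shim interface used only by the decorators; QGraphicsView itself
// never has to implement it.
class ViewBehavior {
public:
    virtual ~ViewBehavior() = default;
    virtual void handleWheel(int delta) = 0;
};

// Innermost adapter around the concrete QGraphicsView (details omitted).
class BaseViewBehavior : public ViewBehavior {
public:
    void handleWheel(int /*delta*/) override { /* default: do nothing */ }
};

// Decorator that adds scaling on top of whatever behaviour it wraps.
class ScalableBehavior : public ViewBehavior {
public:
    explicit ScalableBehavior(ViewBehavior& inner) : inner_(inner) {}
    void handleWheel(int delta) override {
        // scale the view here (e.g. via QGraphicsView::scale), then pass on
        inner_.handleWheel(delta);
    }
private:
    ViewBehavior& inner_;
};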
Without knowing the full design, this may or may not work, but perhaps this will help.
One possible method would be to encapsulate the QGraphicsView and create delegate objects that provide the different functionality that you require. The delegates would then be responsible for providing the interface to the graphics view and forwarding messages.
This would mean creating a different delegate for the different types of functionality; a ScalableDelegate, a PannableDelegate and a LabeledDelegate.
As the delegates are separate objects and not inherited from the QGraphicsView, you can gain a lot of functionality from installing event filters on the graphics views.
Then, instead of objects interacting with your GraphicsView, they communicate via the relevant delegate.
If this is too restrictive, you may need to inherit from QGraphicsView to create a view with the functionality you need and then use the delegates to expose the required functionality as desired.
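A rough Qt-flavoured sketch of that (the delegate class itself is hypothetical): the pannable behaviour lives in a QObject installed as an event filter rather than in a QGraphicsView subclass.

#include <QGraphicsView>
#include <QEvent>
#include <QObject>

// Delegate that adds panning to a view it neither owns nor subclasses.
class PannableDelegate : public QObject {
public:
    explicit PannableDelegate(QGraphicsView* view) : QObject(view), view_(view) {
        view_->viewport()->installEventFilter(this);   // intercept the viewport's events
    }

protected:
    bool eventFilter(QObject* watched, QEvent* event) override {
        if (event->type() == QEvent::MouseMove) {
            // translate mouse movement into scrolling of view_ here
        }
        return QObject::eventFilter(watched, event);   // let everything else through
    }

private:
    QGraphicsView* view_;
};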
I'm working on a game and I'm finding myself having a ton of listeners.
For example, when the language of the game changes, the LanguageManager sends a languageChanged event which causes all interested gui elements to get the new text and redraw themselves. This means every interested gui element implements the LanguageListener class.
Another thing I find myself doing a lot is injecting dependencies. The GuiFactory needs to set fonts, so it has a pointer to the FontManager (which holds an array of TTF fonts); it also has a GuiSkinManager so it can get an Image* based on the skin of the game. Is this the right way to do this?
My main concern, however, is with the listeners and the massive number of them, especially for the GUI. The GUI library I'm using is almost entirely listener driven. I can get away with ActionListeners most of the time, but I still have nested ifs like if event.getSource() == button2, etc.
In general what type of design patterns are used to avoid these issues in large game projects that are much more complex than mine?
Thanks
Listeners are often used in huge applications as well. Don't be scared of them. I'm working on one of the biggest Java apps around and I have seen tons of different listeners there, so it looks like that's the way it's done.
For smaller applications, a Mediator could be a way around it if you really hate listening for events ;)
Instead of having one interface per event type containing a single callback (ie. LanguageListener), if you were to follow the Observer design pattern, you would have one interface xObserver with a callback for every kind of event.
class GuiElementObserver
{
public:
    virtual ~GuiElementObserver() = default;

    // Default empty implementations: concrete observers override only
    // the events they care about.
    virtual void LanguageChangedEvent(/* event data */) {}
    virtual void OtherEvent() {}
};
The default empty implementations allow you to redefine only the events you are interested in in the concrete classes.
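For example (a hypothetical concrete observer), a widget that only cares about language changes overrides just that one callback:

class TitleLabel : public GuiElementObserver
{
public:
    void LanguageChangedEvent(/* event data */) override
    {
        // fetch the translated text and trigger a redraw here
    }
    // OtherEvent() keeps the default empty implementation
};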
Concerning dependencies, whenever I refactor code, I like to use the Single Responsibility Principle (http://en.wikipedia.org/wiki/Single_responsibility_principle). You said that the GuiFactory needs to set fonts... But is it really the responsibility of a factory to set fonts?
If you limit the responsibilities of your classes to what they were originally meant to do, you should have leaner classes, cleaner code and fewer dependencies. In this case, maybe adding a font manager and moving the font related code there could be a solution.
I'm developing a 2D game and I want to separate the game engine from the graphics.
I decided to use the model-view pattern in the following way: the game engine owns game's entities (EnemyModel, BulletModel, ExplosionModel) which implement interfaces (Enemy, Bullet, Explosion).
The View receives events when entities are created, getting a pointer to the interface: in this way the View can only use the interface methods (i.e. ask for information to perform the drawing) and cannot change the object state. The View has its own classes (EnemyView, BulletView, ExplosionView) which own pointers to the interfaces.
(There is also an event-based pattern involved so that the Model can notify the View about entity changes, since a pure query approach is impracticable, but I won't discuss it here.)
*Model classes use a compile-time component approach: they use the boost::fusion library to store different state components, like PositionComponent, HealthComponent and so on.
At the moment the View isn't aware of the component-based design but only of the model-view part: to get the position of an enemy it calls the Enemy::get_xy() method. The EnemyModel, which implements the interface, forwards this call to the PositionComponent and returns the result.
Since the bullet has position too, I have to add the get_xy method to Bullet too. BulletModel uses then the same implementation as the EnemyModel class (i.e. it forwards the call).
This approach then leads to a lot of duplicate code: the interfaces have a lot of similar methods and the *Model classes are full of forwarding methods.
So I have basically two options:
1) Expose the component-based design so that each component has an interface as well: the View can use this interface to query the component directly. It keeps the View and the Model separated, only at a component level instead of an entity level.
2) Abandon the model-view part and go for a pure component-based design: the View is just a component (the RenderableComponent part) which basically has full access to the game engine.
Based on your experience which approach would be best?
I'll give my two cents worth. From the problem you're describing, it seems to me that you need an abstract class that will do the operations that are common amongst all of your classes (like the get_xy, which should apply to bullet, enemy, explosion, etc.). This class is a game entity that does the basic grunt work. Inheriting classes can override it if they want.
This abstract class should be the core of all your interfaces (luckily you're in C++, where there is no physical difference between a class, an abstract class and an interface). Thus the Views will know about the specific interfaces, and still have the generic entity methods.
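A compact sketch of that shared base (get_xy and the entity names come from the question, the rest is illustrative): the position handling lives once in the base, so the per-entity classes stop duplicating forwarding methods.

#include <utility>

// Shared base: common state and accessors are written once.
class GameEntity {
public:
    virtual ~GameEntity() = default;
    std::pair<float, float> get_xy() const { return {x_, y_}; }
    void set_xy(float x, float y) { x_ = x; y_ = y; }
private:
    float x_ = 0.f;
    float y_ = 0.f;
};

// Entity classes only add what actually differs.
class Enemy : public GameEntity {
public:
    int health() const { return health_; }
private:
    int health_ = 100;
};

class Bullet : public GameEntity {
    // position comes from GameEntity; nothing to forward
};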
A rule of thumb I have for design - if more than one class has the same data members or methods, it should probably be a single class from which they inherit.
Anyway, exposing the internal structure of your Model classes is not a good idea. Say you'll want to replace boost with something else? You'd have to re-write the entire program, not just the relevant parts.
MVC isn't easy for games: as the game becomes larger (menus, enemies, levels, GUI...) and gains transitions, it'll break.
Component-based or entity-system designs are pretty good for games.
As a simpler case for you, you may consider using HMVC. You'll still have issues with transitions, but at least your code will be grouped together in a cleaner manner. You probably want your tank's code (rendering and logic) to be close together.
There have been presentation architectures designed especially for agent-based systems, such as Presentation-Abstraction-Control. The hard part in designing such a system is that you ultimately end up hardwiring sequences of collaborations between the agents.
You can do this, but don't use OO inheritance to model the message passing hierarchy. You will regret it. If you think about it, you are really not interested in using the OO inheritance relationship, since the interfaces defined are really just a "Record of functions" that the object can respond to. In that case, you are better off formally modeling your communication protocol.
If you have questions, please ask -- this is not an obvious solution and easy to get wrong.
I'm working on a flexible GUI application that can have ~12 varied layouts. These layouts are all well-defined and won't change. Each layout consists of multiple widgets that interface with a DLL using bit patterns. While the majority of the widgets are the same, the bit patterns used vary depending on the interface type being presented.
My gut instinct is to use inheritance: define a generic 'Panel' and have subclasses for the different configurations. However, there are parts of the interface that are user-defined and that, per the spec, must be described in an XML file.
Should the entire panel be defined in XML, or just the user configured sections?
YAGNI: Design your screens for the current requirements, which you specifically state aren't going to change. If a year down the line more customization is needed, make it more customizable then, not now.
KISS: If using XML results in less overall code and is simpler than subclassing, use XML. If subclassing results in less code, use subclassing. Experience tells me subclassing is simpler.
My feeling is that you should do whatever will give you the greater flexibility to change your mind, add new features or adjust the layout in the future.