Register Game Object Components in Game Subsystems? (Component-based Game Object design) - c++

I'm creating a component-based game object system. Some tips:
GameObject is simply a list of Components.
There are GameSubsystems: rendering, physics, etc. Each GameSubsystem contains pointers to some of the Components. A GameSubsystem is a very powerful and flexible abstraction: it represents any slice (or aspect) of the game world.
We need a mechanism for registering Components in GameSubsystems (when a GameObject is created and composed). There are four approaches:
1: Chain of responsibility pattern. Every Component is offered to every GameSubsystem. The GameSubsystem decides which Components to register (and how to organize them). For example, GameSubsystemRender can register Renderable Components.
pro. Components know nothing about how they are used. Low coupling. A. We can add a new GameSubsystem. For example, let's add GameSubsystemTitles that registers all ComponentTitle instances, guarantees that every title is unique, and provides an interface for querying objects by title. Of course, ComponentTitle should not be rewritten or subclassed in this case. B. We can reorganize existing GameSubsystems. For example, GameSubsystemAudio, GameSubsystemRender and GameSubsystemParticleEmitter can be merged into GameSubsystemSpatial (to place all audio, emitter and render Components in the same hierarchy and use parent-relative transforms).
con. Every-to-every check. Very inefficient.
con. Subsystems know about Components.
2: Each Subsystem searches for Components of specific types.
pro. Better performance than in Approach 1.
con. Subsystems still know about Components.
3: Component registers itself in GameSubsystem(s). We know at compile time that there is a GameSubsystemRenderer, so ComponentImageRender can call something like GameSubsystemRenderer::register(ComponentRenderBase*).
Observer pattern. Component subscribes to "update" event (sent by GameSubsystem(s)).
pro. Performance. No unnecessary checks as in Approach 1 and Approach 2.
con. Components are badly coupled with GameSubsystems.
4: Mediator pattern. GameState (that contains GameSubsystems) can implement registerComponent(Component*).
pro. Components and GameSubsystems know nothing about each other.
con. In C++ it would look like an ugly and slow typeid switch.
Questions:
Which approach is better and most used in component-based design? What does practice say? Any suggestions about a (data-driven) implementation of Approach 4?
Thank you.

Vote for the third approach.
I am currently working on a component-based game object system and I clearly see some additional advantages of this approach:
The Component becomes a more and more self-sufficient subentity, as it depends only on the set of available subsystems (I presume this set is fixed in your project).
Data-driven design is more applicable. Ideally, this way you may design a system where components are completely defined in terms of data rather than C++.
EDIT: One feature I thought about while working on CBGOS: sometimes it is convenient to be able to design and construct subsystemless passive components. With this in mind, the fourth approach is the only way.

My approach was to implement the proxy pattern within each subsystem. As each subsystem is only interested in a subset of the total components each entity may contain, the proxy stores pointers to only the components the system cares about. For example, a motion system only cares about position and velocity, so it needs a proxy which stores two pointers, to those components. If the entity is missing one or more of those, the subsystem will ignore it. If both components are present, a proxy node is created and added to an internal collection. It is also useful for the proxy to store the unique identifier of the entity, so that proxies may be added/removed in constant time from each subsystem, should it be necessary.
In such a way, should an entity be required to be removed from the engine, a single message containing the id of the entity can be sent to every subsystem. The proxy can then be removed from each subsystem collection independently.

Related

Modeling frontend and backend in a use case diagram

I am trying to make a use case diagram for my project. The backend is going to be made using Django REST framework and the frontend using React. My question is: how can I model this situation the right way? Should I model the frontend and represent the backend as an actor, or the opposite, given that I am thinking of making a mobile application as a second frontend?
The right answer here is the Standard Answer of the Business Analyst no 1: It depends.
The question is - what do you want to model and why. Then - what is the correct tool (diagram) to do it.
The goal of the Use Case diagram is to show what functionalities a system is going to offer. Now the system can be treated as a whole, in which case you show the functionalities without depicting how the system is internally organised (this is the most common scenario and most probably the best way to use a Use Case diagram in your case - but it does not show the fact of having a FE and a BE; note that this type of diagram isn't really well suited to do so, so keep reading).
You may also treat e.g. the BE as the system itself (this can make sense especially when you're preparing a headless API and really separate BE from FE; even more so when your BE and FE teams are totally separate). In that case the FE will become an actor (just like e.g. another system that can interact with your BE). Obviously the FE can be treated in the same way (i.e. be considered the system with the BE being an actor), however there's usually less reason to do so.
Now having said that, if you want to depict the distinction between BE and FE, you should consider other types of diagrams. Keep in mind that Use Case diagram is a dynamic diagram, and the internal structure of the system is static, so obviously it should be one of the static diagrams instead. One that is dedicated to show the internal structure of the system is the Component diagram and it would most likely serve best the purpose of indicating existence of FE and BE (potentially with further level of details, e.g. existing microservices).
If on the other hand you would like to show the specific technology in use, the Deployment diagram might be your best shot. It allows you to show the actual runtime environments, artifacts and their technologies.
Keep in mind - trying to use one type of diagram, or even worse one diagram, to show everything is usually a bad idea and a mistake often made by newbies. Be smarter than that.
Use cases are about a set of behaviors with an observable result that is of value for the actors. They should not care about the internals of a system:
UseCases define the offered Behaviors of the subject without reference to its internal structure.
Therefore, you should in principle not care about the distinction between front-end and back-end, but focus on actor goals with the system.
The only situation where you'd care for the back-end in a use-case diagram, is the case where the front-end would be an independent application that is of value on its own, but can interact with actors that represent external independent systems. (More here)

What's the point to have a SceneGraph beyond a SceneTree

In this presentation SceneGraph and SceneTree are presented.
ST (SceneTree) is essentially an unfolded synchronized copy of SG (SceneGraph).
Since you are going to need ST anyway because:
it offers instances, where you need to save attributes
you need to traverse ST anyway for updating transformation and other cascading variables, such as visibility
even if SG offers less redundancy, what is the point of having SG in the first place?
To quote Markus Tavenrath, the guy who gave that presentation, who was asked the same question: http://pastebin.com/SRpnbBEj (as a result of a discussion on IRC (starts ~9:30))
[my emphasis]
One argument for a SceneGraph in general is that people have their
custom SceneGraphs implementation already and need to keep them for
legacy reasons and this SceneGraph is the reason why the CPU is the
bottleneck during the rendering phase. The goal of my talks is to show
ways to enhance a legacy SceneGraph in a way that the GPU can get the
bottleneck. One very interesting experiment would be adding the
techniques to OpenSceneGraph…
Besides this a SceneGraph is way more powerful with regards to data
manipulation, data access and reuse. I.e. the diamond shape can
be really powerful if you want to instantiate complex objects with
multiple nodes. Another powerful feature in our SceneGraph is the
properties feature which will be used for animation in the future
(search for LinkManager). With the LinkManager you can create a
network of connected properties and let data flow along the links in a
defined order. What’s missing for the animations are Curve objects
which can be used as input for animations. All those features have a
certain cost on the per object level with regards to memory size and
more memory means more touched cachelines during traversal which means
more CPU clocks spend on traversal.
The SceneTree is designed to keep only the hierarchy and the few
attributes which have to be propagated along the hierarchy. This way
the nodes are very lightweight and thus traversal is, if required,
very efficient. This also means that a lot of features provided in the
SceneGraph are no longer available in the SceneTree. Though with the
knowledge gained by the SceneTree it should be possible to write a
SceneGraph which provides most of the performance features of the
SceneTree.
Depending on the type of application you’re writing you might even
decide that the SceneTree is too big and you could write your
application directly on an interface like RiX which doesn’t have the
hierarchy anymore and thus eliminating yet another layer ;-).
So first reason is that people are already used to using a scenegraph and many custom implementations exist.
Other than that, the scenetree is really a pruned scenegraph where only the necessary components remain, which means that a scenegraph is more useful to the artists and modelers as it better abstracts the scene. A complex property may for example be an RNG seed that affects the look of each person in a crowd.
In a scenegraph that root Person node would just have a single seed property. Its children would select the exact mesh and color based on that seed. But in the expanded tree the unused meshes for alternate hairstyles, clothing and such will already have been pruned.

How do I avoid "coupling" in OOP

OK, I'm not sure coupling truly describes my problem. The problem is, I'm creating my own 3D game engine based on Ogre. I'm also using a physics library, PhysX, but since I need an abstraction layer between my main engine code and PhysX - just in case we want to use another physics engine - I decided to create a wrapper, Quantum.
So Ogre has an Entity class which will control the entity on the screen.
PhysX also has a PxActor class which contains the information about the actor's position in the PhysX world.
The quantum wrapper will have a QEntity class which will act as a layer between the engine and PhysX
Finally, the engine will have an Entity class which will have two members at least, the ogre entity object and the quantum entity. On each update() it will ask the QEntity what's up and update the ogre entity position.
Spot the problem? Four classes for one entity? And remember that we need to access all four at least 60 times/s! So what about data partitioning? Not really optimised. Besides, there might be more classes: one for the AI, one for the scripting engine...
Using objects of multiple classes to represent the same thing in multiple contexts is not a bad thing in itself. In fact, it would probably be worse if you had used the same object in all these contexts - for example, through some creative use of multiple inheritance.
Your QEntity class already does the decoupling for you - as long as you program to its interface, and don't let classes specific to PhysX "stick out" from the interface of QEntity*, you are good.
It looks like your project introduces coupling in the "bridge" classes, which is exactly where it belongs. As long as you keep it there, your design would not have a coupling problem.
As far as 60 FPS goes, don't worry about it too much at the design stage. As long as there are no lengthy chains of responsibility relying on virtual functions, your compiler should be able to do a good job optimizing it for you.
* for example, QEntity shouldn't accept parameters or return objects that are specific to PhysX, except in a constructor that creates a "wrapper".

Connecting C++ Model to Controller with Signals2 in Objective-C/C++

I'm developing a cross-platform C++ data model with almost 1,000 different kinds of data items (but < 100 different data structures/classes). To handle the Model to Controller messages (i.e. notification for the Controller that some data item has changed) I'm thinking about using boost::signals2. I think this will help create a unified Observer pattern that remains the same across different OS platforms. Initial implementation is on Mac OS / iOS, with subsequent UIs developed on .NET and Unix.
Questions:
1) To set up observation of the data model, what type of object should a Controller connect to the signals2 object/slots? Should there be specific functions/methods/selectors in the Controller for EACH data item observed? Should these functions be C, C++, or Objective-C/C++ functions?
2) What level of granularity should a signal have? Should each item of data have its own? Or should the Model's management of related structures/records maintain a single signal for each type of structure/record? For example — should application preferences have ONE signal for all preferences data — passing information about what data item was changed?
3) Should the process of sending signals (or the slots receiving signals) be performed in a SEPARATE THREAD?
4) I understand Cocoa has its own system for Key-Value Observing. But would such a system be advantageous to COMBINE with the Model's signals2-based Observer paradigm — or just redundant?
UPDATE:
Regarding "granularity" of actual signal2 objects (not observers), I think having one per document and one for application prefs might be a good place to start. My document data already has a "key" concept, so it may be possible to generalize common UI cases where a UI component is tied to a specific item of model data.
There are quite a lot of facets to this question, so I'll tackle what I can.
Why do we want to be able to observe something in Objective-C and .NET?
I think it's instructive to think about why we'd want to observe a data-model in a 4th generation language? Aside from decoupling subsystems, we can also bind UI interface elements directly to the data model in some cases, avoiding the need to write a great deal of code in the controllers. This is possible in a WPF application built on .NET and in AppKit on MacOSX. There are fewer opportunities in UIKit on iOS.
Meta-model
The first obvious thing to point out is that two of the language run-times you want to target (Objective-C and .NET) provide language-level implementations of the observer pattern which are built around object properties. If you want to integrate them without writing thousands of lines of boiler-plate code, you will want to do this too.
This strongly suggests that all of your C++ model classes need to inherit from a meta-model that provides a generic property mechanism. Wrapped up with this will be a mechanism for observing them.
The alternative will be to generate an enormous quantity of boiler-plate setter/getters in C++, and binding objects to make use of them in .NET and Objective-C.
Adaptors
For both .NET and Objective-C you would design a generic adaptor class that uses reflection to do their dirty work. In the case of Objective-C you do this by overriding methods in NSObject to forward property sets/get messages onto the underlying model's property mechanism. I believe in .NET you can use reflection to inject properties corresponding to those in your model into the adaptors.
[Aside] IDL/ORM
Surely by the time you have a data model built on top of a meta-model, you might as well use some kind of IDL or ORM to describe and automatically generate the very large number of model classes? The huge number of classes you are dealing with suggests this too.
Granularity of Observers
In terms of granularity, you could do either - it really depends on the way in which your data model changes and the UI needs to react to it. KVO in Objective-C is designed around changes to named properties on an object, but is restricted in that an observer has a single delegate method for all objects and properties it observes. This is, however, only an issue for your Objective-C UI components.
Integrating with this is definitely worthwhile unless you want to write masses of adaptor methods for value-changed events that bridge between C++ and Objective-C. Remember that while there is toll-free interworking between C++ and Objective-C++, Objective-C++ classes can't inherit interfaces from C++ classes, nor can you send a message to an Objective-C object from C++ directly without a bridge and some nasty casting in an Objective-C++ compilation unit.
With .NET you'd probably get into all kinds of similar machinations with an intermediate managed-C++ layer.
Threading
The only sane way to do this with more than one thread is to queue signals for processing on the UI thread. After all, if you generate signals in another thread you'll need to do this at some point later to update the UI anyway. Using separate threads for the Model and View/Controller portions of the app sounds like a recipe for deadlocks. It'll also be very hard to debug the source of a signal, as it'll be long gone in another thread. Your life will be much easier keeping it all on one thread.

Mixing component based design and the model-view(-controller) pattern

I'm developing a 2D game and I want to separate the game engine from the graphics.
I decided to use the model-view pattern in the following way: the game engine owns game's entities (EnemyModel, BulletModel, ExplosionModel) which implement interfaces (Enemy, Bullet, Explosion).
The View receives events when entities are created, getting a pointer to the interface: this way the View can only use the interface methods (i.e. ask for information to perform the drawing) and cannot change the object's state. The View has its own classes (EnemyView, BulletView, ExplosionView) which own pointers to the interfaces.
(There is also an event-based pattern involved so that the Model can notify the View about entity changes, since a pure query approach is impracticable, but I won't discuss it here.)
The *Model classes use a compile-time component approach: they use the boost::fusion library to store different state components, like PositionComponent, HealthComponent and so on.
At present moment the View isn't aware of the component based design but only of the model-view part: to get the position of an enemy it calls the Enemy::get_xy() method. The EnemyModel, which implements the interface, forwards this call to the PositionComponent and returns the result.
Since the bullet has a position too, I have to add the get_xy method to Bullet as well. BulletModel then uses the same implementation as EnemyModel (i.e. it forwards the call).
This approach leads to a lot of duplicated code: the interfaces have many similar methods and the *Model classes are full of forwarding methods.
So I have basically two options:
1) Expose the component-based design so that each component has an interface as well: the View can use this interface to query the component directly. This keeps the View and the Model separated, only at a component level instead of an entity level.
2) Abandon the model-view part and go for pure component based design: the View is just a component (the RenderableComponent part) which has basically full access to the game engine.
Based on your experience which approach would be best?
I'll give my two cents worth. From the problem you're describing, it seems to me that you need an abstract class that will do the operations that are common amongst all of your classes (like the get_xy, which should apply to bullet, enemy, explosion, etc.). This class is a game entity that does the basic grunt work. Inheriting classes can override it if they want.
This abstract class should be the core of all your interfaces (luckily you're in C++, where there is no physical difference between a class, an abstract class and an interface). Thus the Views will know about the specific interfaces, and still have the generic entity methods.
A rule of thumb I have for design - if more than one class has the same data members or methods, it should probably be a single class from which they inherit.
Anyway, exposing the internal structure of your Model classes is not a good idea. Say you'll want to replace boost with something else? You'd have to re-write the entire program, not just the relevant parts.
MVC isn't easy for games: as the game becomes larger (menus, enemies, levels, GUI...) and gains transitions, it'll break.
Component or entity systems are pretty good for games.
As a simpler option you may consider using HMVC. You'll still have issues with transitions, but at least your code will be grouped together in a cleaner manner. You probably want your tank's code (rendering and logic) to be kept close together.
There have been presentation architectures designed especially for agent-based systems, such as Presentation-Abstraction-Control. The hard part in designing such a system is that you ultimately end up hardwiring sequences of collaborations between the agents.
You can do this, but don't use OO inheritance to model the message passing hierarchy. You will regret it. If you think about it, you are really not interested in using the OO inheritance relationship, since the interfaces defined are really just a "Record of functions" that the object can respond to. In that case, you are better off formally modeling your communication protocol.
If you have questions, please ask -- this is not an obvious solution and easy to get wrong.