Advice on setting up a Qt3D scene with redundant objects - C++

I'm new to the Qt3D module and am currently writing a game in Qt5/C++ using Qt3D. This question is of the "Am I on the correct path?" or "Can you give me some advice on...?" kind.
The scene of the game has a static part (the "world") and some objects (buildings and movable units). Some of the buildings might be animated in the future, but most of them are completely static (though of course destructible).
I divide the question into two parts: how to handle copies of the same model placed at different positions in the scene, and how to manage the scene as a whole in the viewer class.
Redundant objects in the scene:
Of course the objects share the same library of buildings / movable units, so it would be dumb to upload the models for these objects to the graphics card for every instance of such a unit. I read through the documentation of QGLSceneNode, from which I guess that it is designed to share the same QGeometryData among multiple scene nodes, but apply different transformations in order to place the objects at different positions in my scene. Sharing the same QGLSceneNode for all instances of a building would be the wrong way, I guess.
I currently have a unit "library class" telling me the properties of each type of building / movable unit, among other things the geometry, including textures. Now, I'd provide a QGeometryData for each building in this library class, which is uploaded during the game's loading procedure (if I decide to do this for all buildings at startup...).
When creating a new instance of a unit, I'd now create a new QGLSceneNode, request the QGeometryData (which is explicitly shared) from the library, and set it on the node. Then I'd set the transformation for this new node and put it in my scene. In code, I imagine something like the sketch below.
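A minimal sketch of what I have in mind, assuming a hypothetical UnitLibrary class with a geometryFor() accessor (and assuming I'm reading the QGLSceneNode API correctly):

```cpp
// Hypothetical sketch: one shared QGeometryData per unit type, one
// QGLSceneNode per instance. UnitLibrary, UnitType and geometryFor() are
// invented names for the library class described above.
#include <Qt3D/qglscenenode.h>    // header paths for the old Qt3D (QGL*) module;
#include <Qt3D/qgeometrydata.h>   // adjust to your setup
#include <QMatrix4x4>
#include <QVector3D>

enum class UnitType { Barracks, Tower /* ... */ };

struct UnitLibrary {
    QGeometryData geometryFor(UnitType type); // hands out the shared geometry
};

QGLSceneNode *createUnitNode(UnitLibrary &library, UnitType type,
                             const QVector3D &position, QGLSceneNode *root)
{
    QGLSceneNode *node = new QGLSceneNode(root);
    node->setGeometry(library.geometryFor(type)); // explicitly shared, not copied
    QMatrix4x4 transform;
    transform.translate(position);
    node->setLocalTransform(transform);           // place this instance in the world
    return node;
}
```

This leads us to the second part of my question: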
Manage the scene as a whole:
My "scene" currently is neither a QGLSceneNode nor a QGLAbstractScene, but a struct of some QGLSceneNodes, one for each object (or collection of objects) in the scene. I see three approaches:
My current approach, but I guess it's "the wrong way".
The composition: Putting everything as child nodes in one root QGLSceneNode. This seemed the correct way for me, until I realized that it is very difficult to access specific nodes in such a composition. But when would I even need to access such "specific" nodes? Most operations require to take all nodes into account (rendering them, updating positions for animations), or even operate on a signal-slot-basis so I even don't need to find the nodes manually at all. For example, animations can be done using QPropertyAnimations. Acting on events can also be done by connecting a QObject in the game engine core (all buildings are QObjects in the engine's core part) with the corresponding QGLSceneNode.
But this approach has another downside: During rendering, I might need to change some properties of the QGLPainter. I'm not sure which properties I need to change, this is because I don't know Qt3D enough and can't guess what can be done without changing the properties (for example: using a specific shader to render a specific scene node).
Then I found QGLAbstractScene, but I can't see the advantages when comparing with the two solutions above, since I can't define the rendering process in the scene. But maybe it's not the correct location where to define it?
Which is the best approach to manage such a scene in Qt3D?
With "best" I mean: What am I going to do wrong? What can I do better? What other things should I take into account? Have I overlooked anything important in the Qt3D library?

Related

Entity Component System - Components requiring each other

I have written an entity component system for my game (C++). I then refactored my render system to work with Entities / RenderComponents rather than some virtual drawable interface. There are some classes for which I don't think it makes much sense to force them to be a component. One of those classes is the map.
My map class consists of a tiled terrain class and some other data (not important). The tiled terrain class manages multiple layers in the form of (what is at the moment) the TiledTerrainLayer class. Before refactoring the render system I simply inherited from Drawable and Transformable to enable this class to be drawn by the render system. Now it is required to be an entity with at least a TransformComponent and some RenderComponent.
Now, the TiledTerrainLayerRenderComponent should really only own the vertices, a reference to the texture, and maybe a flag for whether it has been created yet. The TiledTerrainComponent would then own the list of tile indices as well as the tile and map size.
Now my problem is that when I set a tile (using something like a SetTile(size_t tileIndex, const Position &pos) method), I also have to update the texture coordinates of the vertex array.
I am generally fine with one component requiring another component. For example, the SpriteRenderComponent requires a TransformComponent, and I am also fine with one component accessing the information of another, e.g. the GetBoundingBox() method using the position of the transform component.
What I want to avoid is two components 'cross-referencing' each other, as would be the case with the TiledTerrainComponent (TTC) and the TiledTerrainRenderComponent (TTRC): the TTRC gets the TTC's tileIndexList to create itself, and the TTC calls the TTRC's UpdateVertices() method when its SetTile() method is called.
Lastly, I am aware that components should mainly be data. I have only added methods that directly get or modify that data, such as SetTile() or GetTexture(). Would a system be viable in the case described above, and if so, what would it look like?
It sounds like all you need here is a Dirty Flag.
When you change a tile index, size, or other properties on your Tiled Terrain, you do not immediately phone the Tiled Renderer to update its vertices (after all, you might have many tile updates yet to come this frame, and it could be wasteful to recalculate your vertices every time).
Instead, the Tiled Terrain just sets its internal hasBeenModifiedSinceLastUse flag to true. It doesn't need to know about the Renderer at all.
Next, when updating your Tiled Renderer just prior to drawing, you have it ask its Tiled Terrain whether it's been updated since the last draw (you could even query a list of updates if you want to target the changes). If so, you update the vertices in one big batch, for better code & data locality.
In the process, you reset the modified flag so that if there are no updates on subsequent frames you can re-use the last generated set of vertices as-is.
Now your dependency points only one way — the renderer depends on the tile data, but the tile data has no knowledge of the rendering apart from maintaining its flag.
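A minimal sketch of this pattern, assuming component interfaces along the lines the question describes (the exact names and signatures are illustrative, not prescriptive):

```cpp
// Dirty-flag sketch: the terrain records changes; the renderer rebuilds
// its vertices at most once per frame. Names follow the question's
// terminology but the interfaces are assumptions.
#include <cstddef>
#include <vector>

struct TiledTerrainComponent {
    std::vector<std::size_t> tileIndices;
    bool dirty = false;

    void SetTile(std::size_t cell, std::size_t tileIndex) {
        tileIndices[cell] = tileIndex;
        dirty = true;  // just record that something changed; no renderer call
    }
};

struct TiledTerrainRenderComponent {
    // Called once per frame, just before drawing.
    void Update(TiledTerrainComponent &terrain) {
        if (!terrain.dirty)
            return;                            // last frame's vertices still valid
        RebuildVertices(terrain.tileIndices);  // one batched update
        terrain.dirty = false;
    }

    void RebuildVertices(const std::vector<std::size_t> &) { /* fill vertex array */ }
};
```

A system then calls Update once per frame, so many SetTile calls in a single frame collapse into a single vertex rebuild.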

DirectX11: How to manage multiple vertex/index buffers?

I'm working on a small game framework as I am learning DirectX11.
What would be the best method to have a BufferManager class (maybe static) to handle all the vertex and index data of the models, created either in real time or beforehand? The class should be responsible for creating the buffers (dynamic or static, depending on the model info) and then drawing them.
Should I have one vertex and one index list, append all new models to them, recreate the buffers whenever new data is appended, and set the new buffers before drawing?
Or should I have separate vertex and index buffers per model, access the respective model's buffer and pass it to IASetVertexBuffers(model[i].getVertBuff()) before each draw call (see the sketch below)?
Also, some models could be dynamic and others static; how can I do batching here?
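To make the second option concrete, this is roughly what I mean (the Model struct and DrawModels helper are my own placeholder names; the context calls are the real D3D11 API, and shaders / input layout are assumed to be bound already):

```cpp
// Per-model buffers, bound before each draw: a sketch of option 2.
#include <cstddef>
#include <d3d11.h>

struct Model {
    ID3D11Buffer *vertexBuffer;
    ID3D11Buffer *indexBuffer;
    UINT          stride;        // size of one vertex, in bytes
    UINT          indexCount;
};

void DrawModels(ID3D11DeviceContext *context, Model *models, std::size_t count)
{
    for (std::size_t i = 0; i < count; ++i) {
        UINT offset = 0;
        context->IASetVertexBuffers(0, 1, &models[i].vertexBuffer,
                                    &models[i].stride, &offset);
        context->IASetIndexBuffer(models[i].indexBuffer, DXGI_FORMAT_R32_UINT, 0);
        context->DrawIndexed(models[i].indexCount, 0, 0);
    }
}
```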
I'm not showing any code here, but the construct you are requesting would be as follows:
Create a file loader for textures, model meshes, vertex data, normals, audio, etc.
Have a reusable structure that stores all of this data for a particular mesh of a model.
When creating this you will also want a separate texture class to hold information about different textures. This way the same texture can be referenced by different models or meshes and you won't have to load it into memory each time.
The same can be done with different meshes; you can reference a mesh that may be a part of different model objects.
To do this you would need an Asset Storage class that manages all of your assets. This way, if an asset (a font, a texture, a model, an audio file, etc.) is already in memory, it will not be loaded again.
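As a rough illustration of such an Asset Storage class (all names here are hypothetical), a cache keyed by file path might look like this:

```cpp
// Hypothetical asset cache: each asset is loaded once and shared thereafter.
// Texture stands in for any asset type (fonts, models, audio, ...).
#include <memory>
#include <string>
#include <unordered_map>

struct Texture { /* GPU handle, dimensions, ... */ };

class AssetStorage {
public:
    std::shared_ptr<Texture> LoadTexture(const std::string &path) {
        auto it = textures_.find(path);
        if (it != textures_.end())
            return it->second;                   // already in memory: reuse it
        auto tex = std::make_shared<Texture>();  // load from disk here
        textures_[path] = tex;
        return tex;
    }
private:
    std::unordered_map<std::string, std::shared_ptr<Texture>> textures_;
};
```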
Then the next part you will need is a Batch class and a Batch Manager class.
The Batch class defines what a batch is, based on a few parameters: primitive type, whether it has transparency or not (priority queue), etc.
The Batch Manager class does the organization and sends the batches to the rendering stage. This class also determines how many vertices a batch can hold and how many batches (buckets) you have; the ratio will depend on the game content. A good ratio for a basic 2D sprite application would be approximately 10 batches, where each batch contains no fewer than 10,000 vertices. The manager then populates a bucket with similar data based on primitive type and priority (alpha channel, for Z depth). If a bucket cannot hold the data, the manager looks for another bucket to fill; if no bucket is available, it takes the bucket that is most filled with the highest priority, sends that batch to the video card to be rendered, and then reuses that bucket.
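A rough sketch of what these two classes might look like (the class names, members and the 10 buckets x 10,000 vertices ratio follow the text above, but none of this is a real API):

```cpp
// Hypothetical Batch / Batch Manager sketch.
#include <cstddef>
#include <vector>

struct Vertex { float pos[3]; float uv[2]; };

enum class PrimitiveType { TriangleList, LineList };

struct Batch {
    PrimitiveType type = PrimitiveType::TriangleList;
    bool transparent = false;                      // drives draw priority
    std::vector<Vertex> vertices;                  // CPU-side staging bucket
    static constexpr std::size_t kCapacity = 10000;

    bool CanHold(std::size_t n) const { return vertices.size() + n <= kCapacity; }
    bool Matches(PrimitiveType t, bool alpha) const {
        return type == t && transparent == alpha;
    }
};

class BatchManager {
public:
    explicit BatchManager(std::size_t bucketCount) : buckets_(bucketCount) {}

    // Fill a compatible bucket; if none has room, flush the fullest bucket
    // to the video card and reuse it, as described above.
    void Submit(PrimitiveType t, bool alpha, const std::vector<Vertex> &verts) {
        for (Batch &b : buckets_)
            if (b.Matches(t, alpha) && b.CanHold(verts.size())) {
                b.vertices.insert(b.vertices.end(), verts.begin(), verts.end());
                return;
            }
        Batch &victim = Fullest();
        Render(victim);                            // send to the GPU now
        victim.vertices.clear();
        victim.type = t;
        victim.transparent = alpha;
        victim.vertices = verts;
    }

private:
    Batch &Fullest() {
        Batch *best = &buckets_.front();
        for (Batch &b : buckets_)
            if (b.vertices.size() > best->vertices.size()) best = &b;
        return *best;
    }
    void Render(const Batch &) { /* issue the draw call here */ }
    std::vector<Batch> buckets_;
};
```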
The last part would be a ShaderManager class to manage different types of shaders that your program will use. So all of these classes or structures will be tied together.
If you design this appropriately you can abstract all of the engine's behaviors and responsibilities away from the actual game content, properties, and logic or set of rules. This way your engine has no dependencies on a particular game; when you are ready to reuse it, all you have to do is create a main project that links against this static or dynamic library, and all of the engine components will carry over to the next game. This separation of code is an excellent approach for generic, reusable code.
For an excellent demonstration of this approach I suggest checking out www.MarekKnows.com and following the Shader Engine series of video tutorials. That site focuses on Win32 C++ with OpenGL rather than DirectX, but the overall design pattern is the same; the only difference would be to strip out the OpenGL parts and replace them with the DirectX API.
EDIT - Additional References:
Geometric Tools
Rastertek
3D Buzz
Learn OpenGL
GPU Gems by NVidia
Batches PDF by NVidia
Hieroglyph3 # Codeplex
I also found this write-up on the batch rendering process by Marek Krzeminski, from his video tutorials: Batch Rendering by Marek at Gamedev.

Game engines: What are scene graphs?

I've started reading the material on Wikipedia, but I still feel like I don't really understand how a scene graph works and how it can benefit a game.
What is a scene graph in the game engine development context?
Why would I want to implement one for my 2D game engine?
Does the usage of a scene graph stand as an alternative to a classic entity system with a linear entity manager?
What is a scene graph in the game engine development context?
Well, it's some code that actively sorts your game objects in the game space in a way that makes it easy to quickly find which objects are around a given point.
That way, it's easy to:
quickly find which objects are in the camera view (and send only them to the graphics card, making rendering very fast)
quickly find objects near to the player (and apply collision checks to only those ones)
And other things. It's about allowing quick search in space. It's called "space partitioning". It's about divide and conquer.
Why would I want to implement one for my 2D game engine?
That depends on the type of game, more precisely on the structure of your game space.
For example, a game like Zelda might not need such techniques if it's fast enough to test collisions between all objects on the screen. However, that can easily become really, really slow, so most of the time you at least set up a scene graph (or space partition of some kind) to know what is around each moving object and test collisions only on those objects.
So, that depends. Most of the time it's required for performance reasons. But the implementation of your space partitioning depends entirely on the way your game space is structured.
Does the usage of a scene graph stand as an alternative to a classic entity system with a linear entity manager?
No.
Whatever way you manage your game entities' lifetimes, the space partition / scene graph is there only to allow you to quickly search for objects in space, no more, no less. Most of the time it will be an object with slots corresponding to different parts of the game space, and each slot holds the objects that lie in that part.
It can be flat (like a 2D screen divided in 2 or 4), it can be a tree (like a binary tree, a quadtree, or any other kind of tree), or it can be any other sorting structure that limits the number of operations you have to execute to get some space-related information.
Note one thing:
In some cases, you even need separate space partition systems for different purposes. Often a "scene graph" is about rendering, so it's optimized in a way that depends on the player's point of view, and its purpose is to allow quickly gathering the list of objects to render and sending it to the graphics card. It's not really suited to searching for objects around another object, which makes it hard to use for precise collision detection, such as when you use a physics engine. So to help, you might have a separate space partition system just for physics purposes.
To give an example, I want to make a "bullet hell" game, where there are a lot of bullets that the player's spaceship has to dodge in a very precise way. To achieve enough rendering and collision-detection performance I need to know:
when bullets appear in the screen space
when bullets leave the screen space
when the player enters in collision with bullets
when the player enters in collision with monsters
So I recursively cut the 2D screen into 4 parts, which gives me a quadtree. The quadtree is updated each game tick: because everything moves constantly, I have to keep track of each object's (spaceship, bullet, monster) position in the quadtree to know which one is in which part of the screen.
Achieving 1. is easy: just insert the bullet into the system.
To achieve 2., I keep a list of the leaves of the quadtree (square sections of the screen) that are on the border of the screen. Those leaves contain the ids/pointers of the bullets that are near the border, so I just have to check when those bullets move out to know that I can stop rendering them and managing their collisions. (It might be a bit more complex, but you get the idea.)
To achieve 3. and 4., I need to retrieve the objects that are near the player's spaceship. First I get the leaf the player's spaceship is in, then I get all of the objects in it. That way I only test collisions between the player's spaceship and the objects around it, not all objects. (It IS a bit more complex, but you get the idea.)
That way I can make sure that my game will run smoothly even with thousands of bullets constantly moving.
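To make the structure concrete, here is a minimal C++ quadtree sketch along the lines of what I described (AABB, Entity and the fixed depth are illustrative assumptions, not from an actual game):

```cpp
// Minimal quadtree sketch for the bullet-hell example above.
#include <array>
#include <memory>
#include <vector>

struct AABB {
    float x, y, w, h;
    bool Contains(float px, float py) const {
        return px >= x && px < x + w && py >= y && py < y + h;
    }
};

struct Entity { float x, y; };  // spaceship, bullet or monster

class Quadtree {
public:
    Quadtree(const AABB &bounds, int depth) : bounds_(bounds) {
        if (depth > 0)                       // recursively cut into 4 parts
            for (int i = 0; i < 4; ++i)
                children_[i] = std::make_unique<Quadtree>(Quarter(i), depth - 1);
    }

    // Called every game tick for every object, since everything moves.
    void Insert(Entity *e) {
        for (auto &child : children_)
            if (child && child->bounds_.Contains(e->x, e->y))
                return child->Insert(e);
        entities_.push_back(e);              // deepest node that fits
    }

    // Everything in the leaf around a point (e.g. the player's spaceship).
    const std::vector<Entity *> &Near(float px, float py) const {
        for (const auto &child : children_)
            if (child && child->bounds_.Contains(px, py))
                return child->Near(px, py);
        return entities_;
    }

private:
    AABB Quarter(int i) const {              // one of the 4 sub-rectangles
        const float hw = bounds_.w / 2, hh = bounds_.h / 2;
        return { bounds_.x + (i % 2) * hw, bounds_.y + (i / 2) * hh, hw, hh };
    }

    AABB bounds_;
    std::array<std::unique_ptr<Quadtree>, 4> children_;
    std::vector<Entity *> entities_;
};
```

Each tick you re-insert the moving objects, then query Near(player.x, player.y) for the collision tests in 3. and 4.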
Other types of game space call for other types of space partitioning. Typically, kart/racing games will have a "tunnel" scene graph, because visually the player only sees things along the road, so you just have to check where the player is on the road to retrieve all the visible objects around in the "tunnel".
What is a scene graph? A scene graph contains all of the geometry of a particular scene. Scene graphs are useful for representing the translations, rotations and scales (along with other affine transformations) of objects relative to each other.
For instance, consider a tank (the type with tracks and a gun). Your scene may have multiple tanks, but each one is oriented and positioned differently, with each turret rotated to a different azimuth and each gun at a different elevation. Rather than figuring out exactly how the gun should be positioned for each tank, you can accumulate the affine transformations as you traverse your scene graph to position it properly. That makes such computations much easier.
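For illustration, here's how that accumulation might look using GLM (the tank/turret/gun parameters are hypothetical; angles are in radians):

```cpp
// Illustrative sketch: the gun's world transform is accumulated
// hull -> turret -> gun, each part relative to its parent.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 GunWorldTransform(const glm::vec3 &tankPosition, float tankHeading,
                            float turretAzimuth, float gunElevation)
{
    glm::mat4 hull = glm::translate(glm::mat4(1.0f), tankPosition)
                   * glm::rotate(glm::mat4(1.0f), tankHeading,  glm::vec3(0, 1, 0));
    glm::mat4 turret = hull
                   * glm::rotate(glm::mat4(1.0f), turretAzimuth, glm::vec3(0, 1, 0));
    glm::mat4 gun = turret
                   * glm::rotate(glm::mat4(1.0f), gunElevation,  glm::vec3(1, 0, 0));
    return gun;  // no per-tank special casing needed
}
```

A scene graph does this composition implicitly as it walks the tree, so each node only ever stores its local transform.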
2D Scene Graphs: Using a scene graph for 2D may be useful if your content is sufficiently complex and if your objects have a number of sub-components not rigidly fixed to the larger body. Otherwise, as others have mentioned, it's probably overkill. The complexity of affine transformations in 2D is quite a bit lower than in the 3D case.
Linear Entity Manager: I'm not clear on exactly what you mean by a linear entity manager, but if you are referring to just keeping track of where things are positioned in your scene, then a scene graph can make things easier when there is a high degree of spatial dependence between the various objects or sub-objects in your scene.
A scene graph is a way of organizing all objects in the environment. Usually care is taken to organize the data for efficient rendering. The graph, or tree if you like, can show ownership of sub-objects. For example, at the highest level there may be a city object; under it would be many building objects, and under those may be walls, furniture...
For the most part though, these are only used for 3D scenes. I would suggest not going with something that complicated for a 2D scene.
There appear to be quite a few different philosophies on the web as to what the responsibilities of a scene graph are. People tend to put in a lot of different things, like geometry, cameras, light sources, game triggers, etc.
In general I would describe a scene graph as a description of a scene, composed of one or more data structures containing the entities present in the scene. These data structures can be of any kind (array, tree, Composite pattern, etc.) and can describe any property of the entities or any relationship between the entities in the scene.
These entities can be anything, ranging from solid drawable objects to collision meshes, cameras and light sources.
The only real restriction I have seen so far is that people recommend keeping game-specific components (like game triggers) out, to prevent dependency problems later on. Such things would have to be abstracted away to, say, a "LogicEntity", an "InvisibleEntity" or just an "Entity".
Here are some common uses of, and data structures in, a scene graph.
Parent/Child relationships
One way to use a scene graph in a game or engine is to describe parent/child relationships between anything that has a position, be it a solid object, a camera or anything else. Such a relationship means that the position, scale and orientation of a child are relative to those of its parent. This allows you to make the camera follow the player, or to have a light source follow a flashlight object. It also allows you to build things like a solar system, in which the positions of planets are described relative to the sun and the positions of moons relative to their planets, if that is what you're making.
Things specific to some system in your game/engine can also be stored in the scene graph. For example, as part of a physics engine you may have defined simple collision meshes for solid objects whose geometry is too complex to test collisions against. You could put these collision meshes (I'm sure they have another name, but I forgot it :P) in your scene graph and have them follow the objects they model.
Space-partitioning
Another possible data structure in a scene graph is some form of space partitioning, as stated in other answers. This allows you to perform fast queries on the scene, like clipping any object that isn't in the viewing frustum or efficiently filtering out objects that need collision checking. You can also allow client code (in case you're writing an engine) to perform custom queries for whatever purpose, so that client code doesn't have to maintain its own space-partitioning structures.
I hope I have given you, and other readers, some idea of how you can use a scene graph and what you could put in it. I'm sure there are a lot of other ways to use one, but these are the things I came up with.
In practice, scene objects in videogames are rarely organized into a graph that is "walked" as a tree when the scene is rendered. A graphics system typically expects one big array of stuff to render, and this big array is walked linearly.
Games that require geometric parenting relationships, such as those with people holding guns or tanks with turrets, define and enforce those relationships on an as-needed basis outside of the graphics system. These relationships tend to be only one-deep, and so there is almost never a need for an arbitrarily deep tree structure.

OpenGL game development - scenes that span far into view

I am working on a 2D game. Imagine an XY plane where you are a character. As your character walks, the rest of the scene comes into view.
Imagine that the XY plane is quite large and there are other characters outside of your current view.
Here is my question: with OpenGL, do those objects outside of the current view eat up processing time if I submit them for rendering?
Also, what are some approaches to avoid rendering parts of the scene that aren't in view? If I have a cube that is 1000 units away from my current position, I don't want that object rendered. How can I have OpenGL not render it?
I guess the easiest approach is to calculate the position and then not draw that cube/object if it is too far away.
The OpenGL FAQ on "Clipping, Culling and Visibility Testing" says this:
OpenGL provides no direct support for determining whether a given primitive will be visible in a scene for a given viewpoint. At worst, an application will need to perform these tests manually. The previous question contains information on how to do this.
Go ahead and read the rest of that link, it's all relevant.
If you've set up your scene graph correctly, objects outside your field of view should be culled early in the display pipeline. It requires a box check in your code to verify that the object is invisible, so there will be some processing overhead (but not much).
If you organise your objects into a sensible hierarchy then you could cull large sections of the scene with only one box check.
Typically your application must perform these optimisations - OpenGL is literally just the rendering part and doesn't perform object management or anything like that. If you pass in data for something invisible, it still has to transform the relevant coordinates into view space before it can determine that it's entirely off-screen or beyond one of your clip planes.
There are several ways of culling invisible objects from the pipeline. Checking whether an object is behind the camera is probably the easiest and cheapest check to perform, since you can reject half your data set on average with a simple calculation per object. It's not much harder to perform the same sort of test against the actual view frustum, to reject everything that isn't visible at all.
Obviously in a complex game you won't want to do this for every tiny object, so it's typical to group them, either hierarchically (e.g. you wouldn't render a gun if you've already determined that you're not rendering the character holding it), spatially (e.g. dividing the world up into a grid/quadtree/octree and rejecting any object that you know is within a zone that you have already determined is currently invisible), or more commonly a combination of both.
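As an illustration of the cheapest of those tests, a "behind the camera" rejection might look like this (a sketch using GLM; a full frustum test applies the same plane test to all six frustum planes):

```cpp
// Reject objects that are entirely behind the camera.
#include <glm/glm.hpp>

bool IsBehindCamera(const glm::vec3 &objectPos, float objectRadius,
                    const glm::vec3 &cameraPos, const glm::vec3 &cameraForward)
{
    // Signed distance of the object's centre along the view direction.
    float d = glm::dot(objectPos - cameraPos, glm::normalize(cameraForward));
    // Anything further behind the camera than its bounding radius
    // cannot possibly be visible, so it can be skipped entirely.
    return d < -objectRadius;
}
```

Group checks work the same way, just with the group's combined bounding volume.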
"the only winning move is not to play"
Every glVertex etc. is going to be a performance hit regardless of whether it ultimately gets rendered on your screen. The only way to get around that is to not draw (i.e. cull) objects which won't ever be rendered anyway.
The most common method is to have a viewing frustum tied to your camera. Couple that with an octree or quadtree, depending on whether your game is 3D or 2D, so you don't need to check every single game object against the frustum.
The underlying driver may do some culling behind the scenes, but you can't depend on that since it's not part of the OpenGL standard. Maybe your computer's driver does it, but maybe someone else's (who might run your game) doesn't. It's best for you do to your own culling.

Scene graph implementation for Papervision?

I'm trying to use Papervision for Flash for this project of mine, which involves a 3D model of a mechanical frame consisting of several connected parts. Movement of one of the parts results in a corresponding change in the orientation and position of other parts of the frame.
My understanding is that using a scene graph to handle this kind of linked movement would be the ideal way to go, at least, if I were to implement in one of the more established 3D development options, like OpenGL or DirectX.
My question is, is there an existing scene graph implementation for Papervision? Or, an alternative way to generate the required 3D motion?
Thanks!
I thought Papervision is basically a Flash-based 3D rendering engine, and should therefore contain its own scene graph.
See org.papervision3d.scenes.Scene3D in the API.
And see this article for a lengthier explanation of the various objects in Papervision. One thing you can do is google for articles with the key objects in P3D, such as EngineManager, Viewport3D, BasicRenderEngine, Scene3D and Camera3D.
As for "generating the motion", it depends on what you are trying to achieve exactly. Either you code that up and alter the scene yourself, or use a third-party library like a physics library so as to not have to code all that up yourself.
You can honestly build one in the time it would take you to search for one:
Create a class called Node that holds an array of child nodes and has a virtual method Render(matrix:Matrix).
Create a subclass of Node called TransformNode which takes a reference to a matrix.
Create a subclass of Node called ModelNode which takes a reference to a model.
The Render method of TransformNode multiplies the incoming matrix with its own, then calls the render method of its children with the resulting matrix.
The Render method of ModelNode sends its model off to the renderer at the location specified by the incoming matrix.
That's it. You can enhance things further with a BoundsNode that doesn't call its children if its bounding shape is not visible in the viewing frustum.
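Here's a sketch of that structure in C++ (the original would be ActionScript for Papervision, but the design is identical; Model and DrawModel are stand-ins for whatever your renderer provides):

```cpp
// Node / TransformNode / ModelNode, as described above.
#include <memory>
#include <vector>
#include <glm/glm.hpp>

struct Model { /* mesh, material, ... */ };
void DrawModel(const Model &, const glm::mat4 &) { /* hand off to the renderer */ }

class Node {
public:
    virtual ~Node() = default;
    virtual void Render(const glm::mat4 &m) {
        for (auto &child : children)
            child->Render(m);                // pass the matrix down unchanged
    }
    std::vector<std::unique_ptr<Node>> children;
};

class TransformNode : public Node {
public:
    explicit TransformNode(const glm::mat4 &t) : local(t) {}
    void Render(const glm::mat4 &m) override {
        Node::Render(m * local);             // compose, then recurse
    }
    glm::mat4 local;
};

class ModelNode : public Node {
public:
    explicit ModelNode(const Model &m) : model(&m) {}
    void Render(const glm::mat4 &m) override {
        DrawModel(*model, m);                // draw at the accumulated matrix
        Node::Render(m);
    }
    const Model *model;
};
```

A render pass is then just root->Render(glm::mat4(1.0f)); transforms accumulate down the tree exactly as described above.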