DirectX11: How to manage multiple vertex/index buffers? - c++

I'm working on a small game framework as I am learning DirectX11.
What would be the best way to design a BufferManager class (maybe static) that handles all the vertex and index data of the models, whether they are created ahead of time or at runtime? The class should be responsible for creating the buffers (dynamic or static, depending on the model info) and then drawing them.
Should I keep one vertex list and one index list, append every new model to them, recreate the buffers whenever new data is appended, and set the new buffers before drawing?
Or should I keep separate vertex and index buffers per model, and bind the respective model's buffer with IASetVertexBuffers(model[i].getVertBuff()) before each draw call?
Also, some models could be dynamic and others static; how can I do batching here?

I'm not showing any code here, but the construct you are asking for would look as follows:
Create a file loader for textures, model meshes, vertex data, normals, audio, etc.
Have a reusable structure that stores all of this data for a particular mesh of a model.
When creating this you will also want a separate texture class to hold information about the different textures. That way the same texture can be referenced by different models or meshes and you won't have to load it into memory each time.
The same goes for meshes; a single mesh can be referenced by several different model objects.
To do this you need an AssetStorage class that manages all of your assets, so that an asset that is already in memory is not loaded again; this applies to fonts, textures, models, audio files, etc.
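As a rough C++ sketch of that caching idea (the Texture and Mesh types and the loadTextureFromFile helper are placeholders, not part of any particular API):

#include <memory>
#include <string>
#include <unordered_map>

// Placeholder resource types; in a real engine these would wrap
// ID3D11ShaderResourceView, vertex/index buffers, and so on.
struct Texture { /* GPU texture handle goes here */ };
struct Mesh    { /* vertex/index buffer handles go here */ };

class AssetStorage {
public:
    // Returns the cached texture if it was loaded before,
    // otherwise loads it once and stores it under its path.
    std::shared_ptr<Texture> getTexture(const std::string& path)
    {
        auto it = textures_.find(path);
        if (it != textures_.end())
            return it->second;
        auto tex = loadTextureFromFile(path);
        textures_[path] = tex;
        return tex;
    }

private:
    std::shared_ptr<Texture> loadTextureFromFile(const std::string& path)
    {
        // Pretend loader: a real one would read the file and create the GPU texture.
        (void)path;
        return std::make_shared<Texture>();
    }

    std::unordered_map<std::string, std::shared_ptr<Texture>> textures_;
    std::unordered_map<std::string, std::shared_ptr<Mesh>>    meshes_;
};

Meshes, fonts and audio files would be cached the same way.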
The next part you will need is a Batch class and a BatchManager class.
The Batch class defines what a batch contains, based on a few parameters: primitive type, whether or not it has transparency (which determines its place in the priority queue), etc.
The BatchManager class does the organization and sends the batches to the rendering stage. This class also determines how many vertices a batch can hold and how many batches (buckets) you have; the ratio depends on the game content. A good ratio for a basic 2D sprite application would be approximately 10 batches, where each batch holds at least 10,000 vertices. The manager then populates a bucket with data of a similar type, based on primitive type and priority (the alpha channel, for Z depth). If a bucket cannot hold the data, it looks for another bucket to fill. If no bucket is available, the BatchManager takes the bucket that is most full with the highest priority, sends that batch to the video card to be rendered, and then reuses that bucket.
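To illustrate that bucket logic, here is a minimal sketch; the vertex layout, the capacity numbers and the flush step are placeholders rather than a complete D3D11 implementation:

#include <algorithm>
#include <cstddef>
#include <vector>

// Illustrative vertex layout; a real engine would use its own.
struct Vertex { float position[3]; float uv[2]; };

enum class PrimitiveType { TriangleList, LineList };

struct Batch
{
    PrimitiveType       primitive   = PrimitiveType::TriangleList;
    bool                transparent = false;   // used as a priority hint
    std::vector<Vertex> vertices;

    bool fits(std::size_t count, std::size_t capacity) const
    {
        return vertices.size() + count <= capacity;
    }
};

class BatchManager
{
public:
    BatchManager(std::size_t bucketCount, std::size_t verticesPerBucket)
        : capacity_(verticesPerBucket), buckets_(bucketCount) {}

    void submit(PrimitiveType type, bool transparent,
                const std::vector<Vertex>& vertices)
    {
        // 1. Look for an empty or compatible bucket with room left.
        for (Batch& b : buckets_)
        {
            bool compatible = b.vertices.empty() ||
                              (b.primitive == type && b.transparent == transparent);
            if (compatible && b.fits(vertices.size(), capacity_))
            {
                b.primitive   = type;
                b.transparent = transparent;
                b.vertices.insert(b.vertices.end(), vertices.begin(), vertices.end());
                return;
            }
        }
        // 2. No room anywhere: flush the fullest bucket and reuse it.
        auto fullest = std::max_element(buckets_.begin(), buckets_.end(),
            [](const Batch& a, const Batch& b) { return a.vertices.size() < b.vertices.size(); });
        flush(*fullest);
        fullest->primitive   = type;
        fullest->transparent = transparent;
        fullest->vertices.assign(vertices.begin(), vertices.end());
    }

private:
    // In a real renderer this would copy the bucket into a dynamic vertex buffer
    // (Map/Unmap with D3D11_MAP_WRITE_DISCARD) and issue the draw call.
    void flush(Batch& batch) { batch.vertices.clear(); }

    std::size_t        capacity_;
    std::vector<Batch> buckets_;
};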
The last part would be a ShaderManager class to manage the different types of shaders that your program will use. All of these classes or structures are then tied together.
If you design this appropriately, you can abstract all of the engine's behaviors and responsibilities away from the actual game content, properties, and logic (its set of rules). That way your game engine can be reused for multiple games: the engine has no dependencies on a particular game, and when you are ready to reuse it, all you have to do is create a main project that links against this static or dynamic library, and all of the engine components are available to the next game. This separation of code is an excellent approach to writing generic, reusable code.
For an excellent example of this approach, I would suggest checking out www.MarekKnows.com and following the Shader Engine series of video tutorials. That site focuses on Win32 C++ with OpenGL instead of DirectX, but the overall design pattern is the same; the only difference would be to strip out the OpenGL parts and replace them with the DirectX API.
EDIT - Additional References:
Geometric Tools
Rastertek
3D Buzz
Learn OpenGL
GPU Gems by NVidia
Batches PDF by NVidia
Hieroglyph3 @ CodePlex
I also found a write-up of the batch rendering process by Marek Krzeminski, taken from his video tutorials but available here: Batch Rendering by Marek at Gamedev.

Related

Using Qt3D for a huge topography model?

My scene currently consists of a huge topography model (millions of vertices). The scene is now becoming more complex and contains many smaller 3D objects. Does it make sense to switch from a QFramebufferObject (with C++/OpenGL) to Qt3D?
What about the speed?
Which file format (Wavefront OBJ, Stanford Triangle Format PLY, STL (stereolithography)) is suitable? I am currently loading the data from the .hgt format into a vector array and then into the QFramebufferObject.
With Qt3D, is it possible to load only the tiles that are currently visible in the viewport into the graphics card, and to delete them as soon as they leave the viewport when the camera angle changes?
That's difficult to say. I assume that by "Does it make sense to switch" you mean: will it be fast enough? Regarding your two points:
Qt3D uses assimp internally to load models, so if that library is able to load .hgt files, then so should Qt3D.
What do you mean by reload? Do they change dynamically in the model? Or do you mean re-render? There's QFrustumCulling, but that still seems to render the whole entity. Do you know whether your object consists of multiple parts? When Qt3D loads models (i.e. whole scenes) from files, I think it preserves their structure. If your model is split into multiple standalone components, QFrustumCulling might improve rendering speed, because components don't get drawn when they are not in view.
I personally feel like Qt3D is slower compared to a hand-crafted renderer in OpenGL. But it constantly improves so if it doesn't take long to try it out maybe it's worth a shot. Also, the flexibility of Qt3D might make up for the slower rendering speed.

Simple 3D scene visualisation data format

My PhD project revolves around simulating the paths of photons through objects with different optical properties. My code has classes which create CCD images etc., but it would be much more useful to be able to create a simple rendering of the 3D objects and the paths the photons take through them.
I've written an OpenGL system for viewing such a scene, but it would be much better if I could use something much more lightweight, where I could simply specify the vertices of an object and then a photon path as a list of connected vertices.
Exporting all the data and then visualising it in another program isn't ideal, as things like mesh transformations need to be taken into account, and I'd rather avoid exporting several new mesh objects just to import them all into another program.
What I essentially need is the three-dimensional equivalent of an SVG image. Does such a '3D scene' file format with a simple visualiser exist?
I write in C++ on macOS, though I'd prefer to avoid using a visualisation library. I appreciate that what I'm asking for is rather niche and picky, but that's why I'm asking the internet, as someone might have come across a similar need for such a tool.

best way to wrap opengl models

In short: What is the "preferred" way to wrap OpenGL's buffers, shaders and/or matrices required for a more high level "model" object?
I am trying to write a tiny graphics engine in C++ built on core OpenGL 3.3, and I would like to implement as clean a solution as possible for wrapping a higher-level "model" object, which would contain its vertex buffer, global position/rotation, textures (and maybe a shader too?) and potentially other information.
I have looked into this open source engine, called GamePlay3D and don't quite agree with many aspects of its solution to this problem. Is there any good resource that discusses this topic for modern OpenGL? Or is there some simple and clean way to do this?
That depends a lot on what you want your engine to be able to do. Also note that these concepts are the same in DirectX (or any other graphics API), so don't focus your search too much on OpenGL. Here are a few concepts that are very common in a 3D engine (names can differ):
Mesh:
A mesh contains submeshes, each submesh contains a vertex buffer and an index buffer. The idea being that each submesh will use a different material (for example, in the mesh of a character, there could be a submesh for the body and one for the clothes.)
Instance:
An instance (or mesh instance) references a mesh, a list of materials (one for each submesh in the mesh), and contains the "per instance" shader uniforms (world matrix etc.), usually grouped in a uniform buffer.
Material: (This part changes a lot depending on the complexity of the engine). A basic version would contain some textures, some render states (blend state, depth state), a shader program, and some shader uniforms that are common to all instances (for example a color, but that could also be in the instance depending on what you want to do.)
More complex versions usually separate the material into passes (or sometimes techniques that contain passes) that contain everything from the previous paragraph. You can check the Ogre3D documentation for more information about that and to look at one possible implementation. There's also a very good article called Designing a Data-Driven Renderer in GPU Pro 3 that describes an even more flexible system based on the same idea (but also more complex).
Scene: (I call it a scene here, but it could really be called anything.) It provides the shader parameters and textures that come from the environment (lighting values, environment maps, that kind of thing).
And I think that's it for the basics. With that in mind, you should be able to find your way around the code of any open-source 3D engine if you want the implementation details.
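To make those relationships concrete, here is a bare-bones sketch of how the concepts could map onto plain structs (the handle types and field names are illustrative, not a specific API):

#include <cstdint>
#include <vector>

// Handles stand in for GL/DX buffer, texture and program objects.
using BufferHandle  = std::uint32_t;
using TextureHandle = std::uint32_t;
using ProgramHandle = std::uint32_t;

struct SubMesh
{
    BufferHandle  vertexBuffer = 0;
    BufferHandle  indexBuffer  = 0;
    std::uint32_t indexCount   = 0;
};

struct Mesh
{
    std::vector<SubMesh> submeshes;   // one material slot per submesh
};

struct Material
{
    ProgramHandle              shader = 0;
    std::vector<TextureHandle> textures;
    // render states (blend, depth) and per-material uniforms would go here
};

struct Instance
{
    const Mesh*                  mesh = nullptr;
    std::vector<const Material*> materials;        // one per submesh
    float                        worldMatrix[16];  // per-instance uniforms
};

The point is simply that the instance owns the per-object state (materials, world matrix) while the mesh geometry itself stays shared.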
This is in addition to Jerem's excellent answer.
At a low level, there is no such thing as a "model", there is only buffer data and the code used to process it. At a high level, the concept of a "model" will differ from application to application. A chess game would have a static mesh for each chess piece, with shared textures and materials, but a first-person shooter could have complicated models with multiple parts, swappable skins, hit boxes, rigging, animations, et cetera.
Case study: chess
For chess, there are six pieces and two colors. Let's over-engineer the graphics engine to show how it could be done if you needed to draw, say, thousands of simultaneous chess games on the same screen instead of just one game. Here is how you might do it.
Store all models in one big buffer. This buffer has all of the vertex and index data for all six models clumped together. This means that you never have to switch buffers / VAOs when you're drawing pieces. Also, this buffer never changes, except when the user goes into settings and chooses a different style for the chess pieces.
Create another buffer containing the current location of each piece in the game, the color of each piece, and a reference to the model for that piece. This buffer is updated every frame.
Load the necessary textures. Maybe the normals would be in one texture, and the diffuse map would be an array texture with one layer for white and another for black. The textures are designed so you don't have to change them while you're drawing chess pieces.
To draw all the pieces, you just have to update one buffer, and then call glMultiDrawElementsIndirect()... once per frame, and it draws all of the chess pieces. If that's not available, you can fall back to glDrawElements() or something else.
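For illustration, a sketch of what that single call per frame could look like in OpenGL 4.3+; the buffer setup and the per-frame command filling are assumed to happen elsewhere, and the loader header is just an example:

#include <glad/glad.h>   // or whichever OpenGL loader the project uses
#include <vector>

// Standard layout of one indirect draw command for glMultiDrawElementsIndirect.
struct DrawElementsIndirectCommand
{
    GLuint count;          // index count of the piece's model
    GLuint instanceCount;  // how many pieces use this model this frame
    GLuint firstIndex;     // where the model's indices start in the shared index buffer
    GLuint baseVertex;     // where the model's vertices start in the shared vertex buffer
    GLuint baseInstance;   // offset into the per-instance attribute buffer
};

// Assumed to be filled each frame from the game state: one command per model
// (king, queen, ...), with the per-instance buffer holding position and color.
void drawAllPieces(GLuint vao, GLuint indirectBuffer,
                   const std::vector<DrawElementsIndirectCommand>& commands)
{
    glBindVertexArray(vao);
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
    glBufferData(GL_DRAW_INDIRECT_BUFFER,
                 commands.size() * sizeof(DrawElementsIndirectCommand),
                 commands.data(), GL_DYNAMIC_DRAW);

    // One call draws every piece of every board.
    glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, nullptr,
                                static_cast<GLsizei>(commands.size()), 0);
}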
Analysis
You can see how this kind of design won't work for everything.
What if you have to stream new models into memory, and remove old ones?
What if the models have different size textures?
What if the models are more complex, with animations or forward kinematics?
What about translucent models?
What about hit boxes and physics data?
What about different LODs?
The problem here is that your solution, and even the very concept of what a "model" is, will be very different depending on what your needs are.

Sharing data between graphics and physics engine in the game?

I'm writing a game engine that consists of a few modules. Two of them are the graphics engine and the physics engine.
I wonder if it's a good solution to share data between them?
The two approaches (sharing or not) look like this:
Without sharing data
struct GraphicsModel {
    // data common to graphics and physics, like position
    // graphics-only data,
    // like textures and the detailed mesh vertices that physics doesn't need
};

struct PhysicsModel {
    // data common to graphics and physics, like position
    // physics-only data
    // (usually my physics data contains a lot more information than the graphics data)
};

engine3D->createModel3D(...);
physicsEngine->createModel3D(...);
// connect graphics and physics data,
// e.g. update the graphics model's position whenever the physics model's position changes
I see two main problems:
A lot of redundant data (like holding the position twice, once in the physics data and once in the graphics data)
Problems with keeping the data in sync (I have to manually update the graphics data whenever the physics data changes)
With sharing data
struct Model {
    // data common to graphics and physics, like position
};

struct GraphicsModel : public Model {
    // graphics-only data,
    // like textures and the detailed mesh vertices that physics doesn't need
};

struct PhysicsModel : public Model {
    // physics-only data
    // (usually my physics data contains a lot more information than the graphics data)
};

auto model = engine3D->createModel3D(...);
physicsEngine->assignModel3D(&model); // will cast to PhysicsModel for its purposes??

// when physics changes anything (like position) in the model
// (which it treats as a PhysicsModel), the position used by the graphics data
// changes as well (because it's the same model)
Problems here:
The physicsEngine cannot create new objects; it can only "assign" existing ones from engine3D (which somehow makes the modules feel less independent to me).
The cast inside the assignModel3D function.
physicsEngine and graphicsEngine must be careful: neither can delete data it no longer needs, because the other may still need it. But that is a rare situation. Moreover, each can just delete its pointer rather than the object, or we can decide that graphicsEngine deletes the objects and physicsEngine deletes only its pointers to them.
Which way is better?
Which will produce more problems in the future?
I like the second solution more, but I wonder why most graphics and physics engines prefer the first one (maybe because they are usually written as standalone graphics or physics engines, and somebody else connects them in the game?).
Do they have any other hidden pros and cons?
Nowadays, more game engines adopt a component-based design (e.g. Unity, Unreal). In this kind of design, a GameObject is composed of a list of components. In your situation, there could be a MeshComponent and a PhysicalComponent, both attached to a single game object.
For simplicity, you can put a world transform variable in the GameObject. During the update phase, the PhysicalComponent writes the world transform to that variable. During rendering, the MeshComponent reads that variable.
The rationale behind this design is to decouple the components from each other. Neither MeshComponent nor PhysicalComponent knows about the other; they just depend on a common interface. It is also easier to extend the system by composition than with a single inheritance hierarchy.
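A minimal sketch of that composition idea (the component names and the update signature are illustrative, not Unity's or Unreal's actual API):

#include <memory>
#include <vector>

// Both components read/write the owner's transform and know nothing about each other.
struct Transform { float position[3] = {0, 0, 0}; float rotation[4] = {0, 0, 0, 1}; };

class GameObject;

class Component
{
public:
    virtual ~Component() = default;
    virtual void update(GameObject& owner, float dt) = 0;
};

class GameObject
{
public:
    Transform transform;

    void addComponent(std::unique_ptr<Component> c) { components_.push_back(std::move(c)); }

    void update(float dt)
    {
        for (auto& c : components_)
            c->update(*this, dt);
    }

private:
    std::vector<std::unique_ptr<Component>> components_;
};

class PhysicalComponent : public Component
{
public:
    void update(GameObject& owner, float dt) override
    {
        // step the simulation, then write the result back to the shared transform
        owner.transform.position[1] -= 9.81f * dt;   // placeholder "physics"
    }
};

class MeshComponent : public Component
{
public:
    void update(GameObject& owner, float /*dt*/) override
    {
        // during rendering, read owner.transform to build the world matrix
        (void)owner;
    }
};

int main()
{
    GameObject unit;                                           // one object,
    unit.addComponent(std::make_unique<PhysicalComponent>());  // two components
    unit.addComponent(std::make_unique<MeshComponent>());      // sharing one transform
    unit.update(1.0f / 60.0f);
}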
In a realistic scenario, however, you may need more sophisticated handling of the physics/graphics synchronization. For example, the physics simulation may need to run at a fixed time step (e.g. 30 Hz), while rendering needs to run at a variable rate, so you may need to interpolate the output of the physics engine. Some physics engines (e.g. Bullet) have direct support for this, though.
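For example, a common shape for that fixed-step loop with render-time interpolation looks roughly like this (stepPhysics, interpolate and render stand in for whatever your engine provides):

#include <cstdio>

struct State { float position = 0.0f; };   // whatever the simulation tracks

// Stand-ins for the engine's own functions.
State stepPhysics(State s, float dt)         { s.position += 1.0f * dt; return s; }
State interpolate(State a, State b, float t) { State r; r.position = a.position + (b.position - a.position) * t; return r; }
void  render(const State& s)                 { std::printf("render at %f\n", s.position); }

const float physicsStep = 1.0f / 30.0f;   // fixed simulation rate
float accumulator = 0.0f;
State previousState, currentState;

// Called once per rendered frame with the frame's elapsed time.
void frame(float frameDeltaSeconds)
{
    accumulator += frameDeltaSeconds;
    while (accumulator >= physicsStep)
    {
        previousState = currentState;                       // keep the last state
        currentState  = stepPhysics(currentState, physicsStep);
        accumulator  -= physicsStep;
    }
    // Rendering runs at its own (variable) rate; blend the last two physics
    // states so motion looks smooth between fixed simulation steps.
    const float alpha = accumulator / physicsStep;
    render(interpolate(previousState, currentState, alpha));
}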
Unity provides a good reference for its Components, which is worth a look.

Advice on setting up a Qt3D scene with redundant objects

I'm new to the Qt3D module and am currently writing a game in Qt5/C++ using Qt3D. This question is about "Am I on the correct path?" or "Can you give me some advice on...".
The scene of the game has a static part (the "world") and some objects (buildings and movable units). Some of the buildings might be animated in the future, but most of them are very static (but of course destructible).
I divide the question into two parts: how to handle copies of the same model placed at different positions in the scene, and how to manage the scene as a whole in the viewer class.
Redundant objects in the scene:
Of course the objects share the same library of buildings / movable units, so it would be dumb to upload the models for these objects to the graphics card for every instance of such a unit. I read through the documentation of QGLSceneNode, from which I guess that it is designed to share the same QGeometryData among multiple scene nodes, but apply different transformations in order to place the objects at different positions in my scene. Sharing the same QGLSceneNode for all instances of a building would be the wrong way, I guess.
I currently have a unit "library class" telling me the properties of each type of building/movable unit, among other things the geometry, including textures. Now I'd provide a QGeometryData for each building in this library class, which is uploaded during the loading procedure of the game (if I decide to do this for all buildings at startup...).
When creating a new instance of a unit, I'd now create a new QGLSceneNode, request the QGeometryData (which is explicitly shared) from the library and set it on the node. Then I set the transformation for this new node and put it in my scene. This leads us to the second part of my question:
Manage the scene as a whole:
My "scene" currently is neither a QGLSceneNode nor a QGLAbstractScene, but a struct of some QGLSceneNodes, one for each object (or collection of objects) in the scene. I see three approaches:
My current approach, but I guess it's "the wrong way".
The composition: putting everything as child nodes into one root QGLSceneNode. This seemed to be the correct way to me, until I realized that it is very difficult to access specific nodes in such a composition. But when would I even need to access such "specific" nodes? Most operations need to take all nodes into account (rendering them, updating positions for animations), or even operate on a signal-slot basis, so I don't need to find the nodes manually at all. For example, animations can be done using QPropertyAnimations, and acting on events can be done by connecting a QObject in the game engine core (all buildings are QObjects in the engine's core part) with the corresponding QGLSceneNode.
But this approach has another downside: during rendering, I might need to change some properties of the QGLPainter. I'm not sure which properties I would need to change, because I don't know Qt3D well enough to guess what can be done without changing them (for example, using a specific shader to render a specific scene node).
Then I found QGLAbstractScene, but I can't see its advantages compared with the two solutions above, since I can't define the rendering process in the scene. But maybe that's not the correct place to define it?
Which is the best approach to manage such a scene in Qt3D?
With "best" I mean: What am I going to do wrong? What can I do better? What other things should I take into account? Have I overlooked anything important in the Qt3D library?