best way to wrap opengl models - opengl

In short: What is the "preferred" way to wrap OpenGL's buffers, shaders and/or matrices required for a more high level "model" object?
I am trying to write a tiny graphics engine in C++ built on core OpenGL 3.3, and I would like to implement as clean a solution as possible for wrapping a higher-level "model" object, which would contain its vertex buffer, global position/rotation, textures (and maybe a shader too?) and potentially other information.
I have looked into an open-source engine called GamePlay3D and don't quite agree with many aspects of its solution to this problem. Is there any good resource that discusses this topic for modern OpenGL? Or is there some simple and clean way to do this?

That depends a lot on what you want to be able to do with your engine. Also note that these concepts are the same with DirectX (or any other graphics API), so don't focus your search too much on OpenGL. Here are a few concepts that are very common in a 3D engine (names can differ):
Mesh:
A mesh contains submeshes, each submesh contains a vertex buffer and an index buffer. The idea being that each submesh will use a different material (for example, in the mesh of a character, there could be a submesh for the body and one for the clothes.)
Instance:
An instance (or mesh instance) references a mesh, a list of materials (one for each submesh in the mesh), and contains the "per instance" shader uniforms (world matrix etc.), usually grouped in a uniform buffer.
Material: (This part changes a lot depending on the complexity of the engine). A basic version would contain some textures, some render states (blend state, depth state), a shader program, and some shader uniforms that are common to all instances (for example a color, but that could also be in the instance depending on what you want to do.)
More complex versions usually separate the material into passes (or sometimes techniques that contain passes) that contain everything in the previous paragraph. You can check the Ogre3D documentation for more info and to look at one possible implementation. There's also a very good article called Designing a Data-Driven Renderer in GPU PRO 3 that describes an even more flexible system based on the same idea (but also more complex).
Scene: (I call it a scene here, but it could really be called anything). It provides the shader parameters and textures from the environment (lighting values, environment maps, this kind of things).
And I think that's it for the basics. With that in mind, you should be able to find your way around the code of any open-source 3D engine if you want the implementation details.
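As a rough illustration, here is one minimal way these concepts might map to C++ structures. All names are made up for this sketch and not taken from any particular engine:

#include <vector>

struct SubMesh
{
    unsigned int vertexBuffer;   // GL buffer object handles (GLuint)
    unsigned int indexBuffer;
    unsigned int indexCount;
};

struct Mesh
{
    std::vector<SubMesh> subMeshes;
};

struct Material
{
    unsigned int shaderProgram;
    std::vector<unsigned int> textures;   // e.g. diffuse map, normal map
    // render states (blend, depth) and per-material uniforms would also live here
};

struct Instance
{
    const Mesh* mesh;
    std::vector<const Material*> materials;   // one material per submesh
    float worldMatrix[16];                    // per-instance uniforms, usually a uniform buffer
};

struct Scene
{
    std::vector<Instance> instances;
    // environment data: lighting values, environment maps, ...
};

The renderer then walks the scene, binds each instance's materials, and draws the submeshes of its mesh.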

This is in addition to Jerem's excellent answer.
At a low level, there is no such thing as a "model", there is only buffer data and the code used to process it. At a high level, the concept of a "model" will differ from application to application. A chess game would have a static mesh for each chess piece, with shared textures and materials, but a first-person shooter could have complicated models with multiple parts, swappable skins, hit boxes, rigging, animations, et cetera.
Case study: chess
For chess, there are six piece types and two colors. Let's over-engineer the graphics engine to show how it could be done if you needed to draw, say, thousands of simultaneous chess games on the same screen, instead of just one game. Here is how you might do it.
Store all models in one big buffer. This buffer has all of the vertex and index data for all six models clumped together. This means that you never have to switch buffers / VAOs when you're drawing pieces. Also, this buffer never changes, except when the user goes into settings and chooses a different style for the chess pieces.
Create another buffer containing the current location of each piece in the game, the color of each piece, and a reference to the model for that piece. This buffer is updated every frame.
Load the necessary textures. Maybe the normals would be in one texture, and the diffuse map would be an array texture with one layer for white and another for black. The textures are designed so you don't have to change them while you're drawing chess pieces.
To draw all the pieces, you just have to update one buffer, and then call glMultiDrawElementsIndirect()... once per frame, and it draws all of the chess pieces. If that's not available, you can fall back to glDrawElements() or something else.
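As a hedged sketch of what that draw submission might look like (this needs GL 4.3 for glMultiDrawElementsIndirect; indirectBuffer, commands, commandCount and sharedVao are assumed to exist and are named here only for illustration):

// Per-draw command layout as defined by the OpenGL spec:
struct DrawElementsIndirectCommand
{
    GLuint count;          // index count of this piece's model
    GLuint instanceCount;  // how many pieces use this model this frame
    GLuint firstIndex;     // where the model's indices start in the shared index buffer
    GLint  baseVertex;     // where the model's vertices start in the shared vertex buffer
    GLuint baseInstance;   // where this model's per-piece data starts
};

// One command per model (pawn, rook, ...), updated each frame and stored in a
// GL_DRAW_INDIRECT_BUFFER (commands is assumed to be a plain array of the struct above):
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
glBufferSubData(GL_DRAW_INDIRECT_BUFFER, 0, sizeof(commands), commands);

// A single call then draws every piece of every game:
glBindVertexArray(sharedVao);
glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                            nullptr,          // commands come from the bound indirect buffer
                            commandCount, 0); // stride 0 = tightly packed commands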
Analysis
You can see how this kind of design won't work for everything.
What if you have to stream new models into memory, and remove old ones?
What if the models have different size textures?
What if the models are more complex, with animations or forward kinematics?
What about translucent models?
What about hit boxes and physics data?
What about different LODs?
The problem here is that your solution, and even the very concept of what a "model" is, will be very different depending on what your needs are.

Related

Can't understand the concept of merge-instancing

I was reading slides from a presentation about "merge-instancing" (the presentation is by Emil Persson: www.humus.name/Articles/Persson_GraphicsGemsForGames.pptx, from slide 19).
I can't understand what's going on. I know instancing only from OpenGL, and I thought it could only draw the same mesh multiple times. Can somebody explain? Does it work differently with DirectX?
Instancing: You upload a mesh to the GPU and activate its buffers whenever you want to render it. Data is not duplicated.
Merging: You want to create a mesh from multiple smaller meshes (such as the building complex in the example), so you either:
Draw each complex using instancing, which means multiple draw calls for each complex, or
Merge the instances into a single mesh, which replicates the vertices and other data for each complex, but lets you render the whole complex with a single draw call.
Instance-Merging: You create the complex by referencing the vertices of the instances that take part in it. Then you use those vertices to know where to fetch the data for each instance. This way you get the advantage of instancing (each mesh is uploaded to the GPU only once) and the benefit of merging (you draw the whole complex with a single draw call).
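One way to read that in code terms (this is only a rough sketch of the idea with made-up names; the slides themselves are the reference for the exact layout):

// Every vertex of the merged mesh carries the index of the instance it belongs
// to, and per-instance data (e.g. world matrices) lives in one shared buffer
// that the vertex shader indexes into. The whole complex is one draw call.
struct MergedVertex
{
    float position[3];
    float normal[3];
    unsigned int instanceIndex;   // which instance of the complex this vertex belongs to
};

struct InstanceData
{
    float worldMatrix[16];        // fetched in the vertex shader via instanceIndex
};
// On the GPU the InstanceData array would typically sit in a uniform or shader
// storage buffer, and the vertex shader would do roughly:
//   gl_Position = viewProj * instances[instanceIndex].worldMatrix * vec4(position, 1.0);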

How should you efficiently batch complex meshes?

What is the best way to render complex meshes? I wrote up different solutions below and would like your opinion on them.
Let's take an example: how to render the 'Crytek-Sponza' mesh?
PS: I do not use an ubershader, only separate shaders.
If you download the mesh on the following link:
http://graphics.cs.williams.edu/data/meshes.xml
and load it in Blender, you'll see that the whole mesh is composed of about 400 sub-meshes, each with its own material/texture.
A dummy renderer (version 1) will render each of the 400 sub-meshes separately! That means (to simplify the situation) 400 draw calls, each with its own material/texture binding. Very bad for performance. Very slow!
pseudo-code version_1:
foreach mesh in meshList //400 iterations :(!
    mesh->BindVBO();
    Material material = mesh->GetMaterial();
    Shader bsdf = ShaderManager::GetBSDFByMaterial(material);
    bsdf->Bind();
    bsdf->SetMaterial(material);
    bsdf->SetTexture(material->GetTexture()); //Bind texture
    mesh->Render();
Now, if we pay attention to the materials being loaded, we notice that the Sponza is in reality composed of ONLY (if I remember correctly :)) 25 different materials!
So a smarter solution (version 2) would be to gather all the vertex/index data into batches (25 in our example) and to store the VBO/IBO not in the sub-mesh classes but in a new class called Batch.
pseudo-code version_2:
foreach batch in batchList //25 iterations :)!
    batch->BindVBO();
    Material material = batch->GetMaterial();
    Shader bsdf = ShaderManager::GetBSDFByMaterial(material);
    bsdf->Bind();
    bsdf->SetMaterial(material);
    bsdf->SetTexture(material->GetTexture()); //Bind texture
    batch->Render();
In this case each VBO contains data that shares exactly the same texture/material settings!
It's so much better! But now I think 25 VBOs to render the Sponza is still too much! The problem is the number of buffer bindings needed to render it! I think a good solution would be to allocate a new VBO when the first one is 'full' (for example, let's assume that the maximum size of a VBO, a value defined as an attribute of the VBO class, is 4MB or 8MB).
pseudo-code version_3:
foreach vbo in vboList //for example 5 VBOs (depends on the maxVBOSize)
    vbo->Bind();
    BatchList batchList = vbo->GetBatchList();
    foreach batch in batchList
        Material material = batch->GetMaterial();
        Shader bsdf = ShaderManager::GetBSDFByMaterial(material);
        bsdf->Bind();
        bsdf->SetMaterial(material);
        bsdf->SetTexture(material->GetTexture()); //Bind texture
        batch->Render();
In this case each VBO does not necessarily contain data that shares exactly the same texture/material settings! It depends on the sub-mesh loading order!
So OK, there are fewer VBO/IBO bindings but not necessarily fewer draw calls! (Do you agree with that statement?) But in general I think this version 3 is better than the previous one! What do you think about it?
Another optimization would be to store all the textures (or groups of textures) of the Sponza model in texture array(s)! But if you download the Sponza package you will see that the textures all have different sizes! So I think they can't be bound together because of those size/format differences.
But if it were possible, version 4 of the renderer would need only a few texture bindings rather than 25 bindings for the whole mesh! Do you think that's possible?
So, in your opinion, what is the best way to render the Sponza mesh? Do you have any other suggestions?
You are focused on the wrong things. In two ways.
First, there's no reason you can't stick all of the mesh's vertex data into a single buffer object. Note that this has nothing to do with batching. Remember: batching is about the number of draw calls, not the number of buffers you use. You can render 400 draw calls out of the same buffer.
This "maximum size" that you seem to want to have is a fiction, based on nothing from the real world. You can have it if you want. Just don't expect it to make your code faster.
So when rendering this mesh, there is no reason to be switching buffers at all.
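As a minimal sketch of that, all 400 sub-meshes can live in one shared VBO/IBO pair, and each draw just points at a different region of it (SubMesh, subMeshes and sharedVao are made-up names; glDrawElementsBaseVertex needs GL 3.2+):

struct SubMesh
{
    GLsizei indexCount;
    GLsizei firstIndex;   // offset, in indices, into the shared index buffer
    GLint   baseVertex;   // offset, in vertices, into the shared vertex buffer
};

glBindVertexArray(sharedVao);   // bound once; never switched between sub-meshes
for (const SubMesh& sm : subMeshes)
{
    // ...set per-material state here (the expensive part, see below)...
    glDrawElementsBaseVertex(GL_TRIANGLES, sm.indexCount, GL_UNSIGNED_INT,
        (void*)(sm.firstIndex * sizeof(GLuint)), sm.baseVertex);
}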
Second, batching is not really about the number of draw calls (in OpenGL). It's really about the cost of the state changes between draw calls.
This video clearly spells out (about 31 minutes in) the relative costs of different state changes. Issuing two draw calls with no state changes between them is cheap (relatively speaking). But different kinds of state changes have different costs.
The cost of changing buffer bindings is quite small (assuming you're using separate vertex formats, so that changing buffers doesn't mean changing vertex formats). The cost of changing programs and even texture bindings is far greater. So even if you had to make multiple buffer objects (which again, you don't have to), that's not going to be the primary bottleneck.
So if performance is your goal, you'd be better off focusing on the expensive state changes, not the cheap ones. Make a single shader that can handle all of the material settings for the entire mesh, so that you only need to change uniforms between draws. Use array textures so that you only need one texture binding call; this turns a texture bind into a uniform setting, which is a much cheaper state change.
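For example, a rough sketch of packing all same-sized diffuse maps into one array texture (GL 4.2+ for glTexStorage3D; mipLevels, width, height, layerCount and pixelData are assumed inputs):

GLuint diffuseArray;
glGenTextures(1, &diffuseArray);
glBindTexture(GL_TEXTURE_2D_ARRAY, diffuseArray);
glTexStorage3D(GL_TEXTURE_2D_ARRAY, mipLevels, GL_RGBA8, width, height, layerCount);
for (GLsizei layer = 0; layer < layerCount; ++layer)
{
    // pixelData[layer] is assumed to hold width*height RGBA8 texels for one material
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, layer,
                    width, height, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixelData[layer]);
}
// In the fragment shader, the layer becomes just another uniform:
//   texture(diffuseMaps, vec3(uv, materialIndex));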
There are even fancier things you can do, involving base instance counts and the like. But that's overkill for a trivial example like this.

Proper Implementation of Texture Atlas

I'm currently working alongside a piece of software that generates game maps by taking several images and then tiling them into a game map. Right now I'm working with OpenGL to draw these maps. As you know, switching states in OpenGL and making multiple draw calls is costly. I've decided to implement a texture atlas system, which would allow me to draw the entire map in a single draw call with no state switching. However, I'm having a problem with implementing the texture atlas. Firstly, would it be better to store each TILE in the texture atlas, or the images themselves? Secondly, not all of the images are guaranteed to be square, or even powers of two. Do I pad them to the nearest power of two, a square, or both? Another thing that concerns me is that the images can get quite large, and I'm worried about exceeding the OpenGL size limitation for textures, which would force me to split the map up, ruining the entire concept.
Here's what I have so far, conceptually:
- Generate texture
- Bind texture
- Generate image large enough to hold textures (take padding into account?)
- Sort textures?
- Upload subtexture to blank texture, store offsets
- Unbind texture
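Here's a rough sketch of those steps in OpenGL, just to make the idea concrete (AtlasEntry, Image, images, atlasW and atlasH are placeholders; the packing positions img.x/img.y are assumed to be computed beforehand, e.g. by a simple shelf packer):

#include <vector>

struct AtlasEntry { float u0, v0, u1, v1; };     // normalized offsets per source image

GLint maxSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize);    // stay within the GPU's limit

GLuint atlas;
glGenTextures(1, &atlas);
glBindTexture(GL_TEXTURE_2D, atlas);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, atlasW, atlasH, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);   // allocate the blank atlas

std::vector<AtlasEntry> entries;
for (const Image& img : images)                     // Image is a hypothetical loaded-image type
{
    glTexSubImage2D(GL_TEXTURE_2D, 0, img.x, img.y, img.w, img.h,
                    GL_RGBA, GL_UNSIGNED_BYTE, img.pixels);
    entries.push_back({ float(img.x) / atlasW,         float(img.y) / atlasH,
                        float(img.x + img.w) / atlasW, float(img.y + img.h) / atlasH });
}
glBindTexture(GL_TEXTURE_2D, 0);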
This is not so much a direct answer, but I can't really answer directly since you are asking many questions at once. I'll simply try to give you as much info as I can on the related subjects.
The following is a list of considerations for you, allowing you to rethink exactly what your priorities are and how you wish to execute them.
First of all, in my experience (!!), using texture arrays is much easier than using a texture atlas, and the performance is about equal. Texture arrays do exactly what you'd expect: you sample them in shaders based on a variable name and an index, instead of just a name (i.e. mytexarray[0]). The big drawback is that every texture in the array must have the same size; the advantages are easy indexing of subtextures and binding everything in one draw call.
Second of all, always use powers of 2. I don't know if some recent systems handle non-power-of-2 textures entirely without problems, but (again in my experience) it is best to use powers of 2 everywhere. One of the problems I had with a 500*500 texture was black lines when drawing textured quads; those black lines were exactly the size needed to pad to the nearest power of two (12 pixels on x and y). So OpenGL somewhat creates this problem for you even on recent hardware.
Third of all (is that even English?), concerning size: all your problems seem to involve images and textures. You might want to look at texture buffers; they allow large amounts of data to be streamed to your graphics card and are easier to update than textures (which allows for LOD map systems). This is mostly useful if you use textures but only need the data represented in their colors, not the colors directly.
Finally, you might want to look at "texture splatting", which is a way to increase detail without increasing data. I don't know exactly what you are making, so I don't know if you can use it, but it's easy and it's used a lot in the game industry. You create a set of textures (rock, sand, grass, etc.) that you use everywhere, and one big texture that keeps track of which smaller texture is applied where.
I hope at least one of the things I wrote here will help you out,
Good luck!
PS: OpenGL texture size limitations depend on the user's graphics card, so be careful with sizes greater than 2048*2048; even if your computer runs fine, others might have serious issues. Safe values are anything up to 1024*1024.
PPS: excuse any grammar mistakes, and ask for clarification if needed. Also, this is my first answer ever, so excuse my lack of protocol.

Is there a simple way to detect collisions between 2 GL objects? e.g. glutSolidCylinder & glutSolidTorus

Is there a simple way to detect collisions between two GL objects, e.g. glutSolidCylinder and glutSolidTorus?
If there is no simple way, how do I refer to these objects and to their locations?
And if I have their locations, what mathematical considerations should I take into account?
No, there is no simple way. Those aren't GL objects anyway: OpenGL doesn't know about any objects, as it is not a scene graph or geometry library. It just draws simple shapes, like triangles or points, onto the screen, and that's exactly what glutSolidTorus and friends do. They don't construct some abstract object with properties like a position; they draw a bunch of triangles to the screen, transforming the vertices using the current transformation matrices.
When you want to do things like collision detection, or even just simple object and scene management, you won't get around managing the objects yourself, complete with positions, geometry and whatnot, since again OpenGL only draws triangles and has no notion of the abstract objects they may compose.
Once you have complete control over your objects' geometry (the triangles and vertices they're composed of), you can draw them yourself and/or feed them to any collision detection algorithms/libraries. For mathematically describable objects like spheres, cylinders, or even tori, you may also find specialized algorithms. But keep in mind: it's up to you to manage those things as objects with whatever abstract properties you'd like them to have. OpenGL just draws them, and those glutSolid... functions are just helper functions containing nothing more than a simple glBegin/glEnd block.
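For example, a very small sketch of the kind of bookkeeping you'd have to do yourself: wrap each shape in your own object carrying a position and a bounding-sphere radius, then test the spheres for overlap (a coarse approximation; all names here are made up):

struct SceneObject
{
    float x, y, z;   // the world-space position you use when drawing the shape
    float radius;    // bounding-sphere radius, e.g. the outer radius for a torus
};

bool spheresOverlap(const SceneObject& a, const SceneObject& b)
{
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    const float r  = a.radius + b.radius;
    return dx * dx + dy * dy + dz * dz <= r * r;   // compare squared distances, no sqrt needed
}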
You will need some system that checks for and manages collisions; if you insist on using the GLUT objects, you will need to wrap them in some other class/geometry representation to check for intersections.
Some interesting reads/links on physics/collision detection:
www.realtimerendering.com/intersections.html
http://www.wildbunny.co.uk/blog/2011/04/20/collision-detection-for-dummies/ (he also has other articles; the principles for 2D can easily be extended to 3 dimensions)
http://www.dtecta.com/files/GDC2012_vandenBergen_Gino_Physics_Tut.pdf
Edit: this book is good imo: http://www.amazon.co.uk/gp/product/1558607323/ref=wms_ohs_product

Store 3d models in game, the best way

What is the best method to store 3D models in a game?
I store them in vectors:
a vector of triangles (each triangle contains the indices of its texcoords, vertices and normals),
a vector of points,
a vector of normals,
a vector of texCords.
I'm not sure what constitutes "the best method" in this case, as that's going to be situation-dependent, and in your question it's somewhat open to interpretation.
If you're talking about how to rapidly render static objects, you can go a long way using display lists. They can be used to memoize all of the OpenGL calls once and then replay those instructions to render the object whenever it is used in your game. All of the overhead you incurred to calculate vertex locations, normals, etc. is only paid once, when you build each display list. The drawback is that you won't see much of a performance gain if your models change too often.
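A minimal sketch of what that looks like (drawModelImmediate is a hypothetical function that issues the actual glBegin/glEnd drawing commands; see the deprecation caveat in the EDIT below):

GLuint listId = glGenLists(1);
glNewList(listId, GL_COMPILE);   // record the draw commands once
    drawModelImmediate(model);   // hypothetical: issues the immediate-mode calls
glEndList();

// Every frame afterwards, replaying the recorded commands is a single call:
glCallList(listId);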
EDIT: SurvivalMachine below mentions that display lists are deprecated. In particular, they were deprecated in OpenGL version 3.0 and completely removed from the standard in version 3.1. After a little research, it appears that the vertex buffer object (VBO) extension is the preferred alternative, though a number of sources I found claimed that performance wasn't as good as with display lists.
I chose to import models from the .ms3d format, and while I may refactor later, I think it provided a decent foundation for the data structure of my 3D models.
The spec (in C header format) is a pretty straightforward read; I am writing my game in Java so I simply ported over each data structure: vertex, triangle, group, material, and optionally the skeletal animation elements.
But really, a model is just triplets of vertices (i.e. triangles), each with a material, right? Start by creating those basic structures, write a draw function that takes a model as an argument and draws it, and then add any other features as you need them. Iterative design, if you will.
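A bare-bones starting point along those lines might look like this (all names are made up; an OpenGL header is assumed to be included, and in modern GL you'd upload the data to a VBO/IBO instead of using immediate mode):

#include <vector>

struct Vertex   { float position[3]; float normal[3]; float texCoord[2]; };
struct Triangle { unsigned int indices[3]; };

struct Model
{
    std::vector<Vertex>   vertices;
    std::vector<Triangle> triangles;
    unsigned int          materialId;   // index into your material list
};

void draw(const Model& model)
{
    // bind the material/shader for model.materialId first, then submit the geometry
    glBegin(GL_TRIANGLES);              // immediate mode, just to show the data flow
    for (const Triangle& t : model.triangles)
        for (unsigned int i : t.indices)
        {
            glNormal3fv(model.vertices[i].normal);
            glTexCoord2fv(model.vertices[i].texCoord);
            glVertex3fv(model.vertices[i].position);
        }
    glEnd();
}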