OpenGL Model Class requirements

I've been wondering this for a while: when loading your mesh, textures, and whatnot in your model class, what do you keep? I figure that once the vertices are passed with glBufferData() I don't need them anymore, since the call to glDrawArrays() depends on the last glBindBuffer(). This is all my Model class keeps after the loadModel() method is called:
GLuint vboID;//for... something
GLuint vaoID;//for glBindVertexArray(vaoID);
GLuint textureID;//for glBindTexture(GL_TEXTURE_2D, textureID);
GLuint vboElementCount;//for glDrawArrays(GL_TRIANGLES,0,vboElementCount);
From what I can tell, that would be all I need to render a model. Other than that, what else should I keep for a simple model?
Edit: Before someone mentions position, rotation, scale, and what have you: I have a ModelInstance class that contains the matrices for those, and I pass it a pointer to the Model object so I can have multiple instances of the same Model.

I keep:
VBO IDs, so I can delete the VBOs when I no longer need them.
VAO ID, so I can bind it before drawing and delete it eventually.
The number of indices to pass to glDrawElements().
IDs of all required textures.
Material and transformation properties (uniforms in general).
This works fine as long as meshes are static. As soon as you want to manipulate your meshes on the CPU, for example to smooth or split them, you have to keep a copy in main memory.
Maybe you also want to keep a low-resolution placeholder, for example for more specialized rendering techniques like occlusion culling. But again, this is a special case; your question lists everything needed for basic rendering.
Note: you should think about switching to glDrawElements(). It will reduce the amount of vertex data you have to store and transfer.
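For reference, here is a minimal sketch of such a Model class in C++ (following the glDrawElements() suggestion above, and assuming OpenGL 3.3 core, an indexed position-only mesh, and a GL loader such as glad already initialized; loadModel() and the member names mirror the question, the rest is made up for illustration):
#include <vector>
// plus your GL loader header, e.g. glad or GLEW

class Model {
public:
    // Upload the mesh once; the CPU-side vectors can be discarded afterwards.
    void loadModel(const std::vector<float>& positions,          // xyz per vertex
                   const std::vector<unsigned int>& indices,
                   GLuint texture)
    {
        indexCount = static_cast<GLsizei>(indices.size());
        textureID  = texture;

        glGenVertexArrays(1, &vaoID);
        glGenBuffers(1, &vboID);
        glGenBuffers(1, &eboID);

        glBindVertexArray(vaoID);
        glBindBuffer(GL_ARRAY_BUFFER, vboID);
        glBufferData(GL_ARRAY_BUFFER, positions.size() * sizeof(float),
                     positions.data(), GL_STATIC_DRAW);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, eboID);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(unsigned int),
                     indices.data(), GL_STATIC_DRAW);

        glEnableVertexAttribArray(0);   // position at attribute location 0
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), nullptr);
        glBindVertexArray(0);
    }

    void draw() const
    {
        glBindTexture(GL_TEXTURE_2D, textureID);
        glBindVertexArray(vaoID);
        glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);
    }

    ~Model()
    {
        glDeleteBuffers(1, &vboID);
        glDeleteBuffers(1, &eboID);
        glDeleteVertexArrays(1, &vaoID);
        // The texture may be shared between models, so its owner deletes it.
    }

private:
    GLuint vaoID = 0, vboID = 0, eboID = 0, textureID = 0;
    GLsizei indexCount = 0;
};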

Related

Using the same vertex array object for different shader programs

My goal is to pack all mesh data into a C++ class, with the possibility of using an object of such a class with more than one GLSL shader program.
I'm stuck on this problem: if I understand it correctly, the way vertex data is passed from the CPU to the GPU in GLSL is based on Vertex Array Objects, which:
are made of Vertex Buffer Objects which store raw vertex data, and
Vertex Attribute Pointers, which tell which shader variable location to use and the format of the data stored in the VBO above
additionally one can make an Element Buffer Object to store indices
other extra elements necessary to draw an object are textures, which are made separately
I was able to make all that data and store only a VAO in my mesh class, and only pass the VAO handle to my shader program to draw it. It works, but, for my goal of using multiple shader programs, the VAP stands in my way. It is the only element in my class that relies on a specific shader program's properties (namely the locations of its input variables).
I wonder if I'm forced to remake all VAOs every time I switch shaders when I want to draw something. I'm afraid that the efficiency of my drawing code will suffer drastically.
So I'm asking whether I should forget about making a universal mesh class and instead make separate objects for each shader program I'm using in my application. I would rather not, as I want to avoid unnecessary copying of my mesh data. On the other hand, if this means my drawing routine will slow down because of all that remaking of VAOs every millisecond during drawing, then I have to rethink the design of my code :(
EDIT: OK, I misunderstood: VAOs store bindings to other objects, not the data itself. But one thing is still left: to make a VAP I have to provide both the location of an input variable from a shader and the layout of the data in a mesh's VBO.
What I'm trying to avoid is making a separate object that stores that VAP and relies on both a mesh object and a shader object. I might solve my problem if all shader programs use the same location for the same thing, but at this moment I'm not sure if this is the case.
additionally one can make an Element Buffer Object to store indices
other extra elements necessary to draw an object are textures, which are made separately
Just to be clear, that's you being lazy and/or uninformed. OpenGL strongly encourages you to use a single buffer for both vertex and index data. Vulkan goes a step further by limiting the number of object names (VAOs, VBOs, textures, shaders, everything) you can create, and actively punishes you for not doing so.
As to the problem you're facing: it would help if you used correct nomenclature, or at least full names. "VAP" is yet another sign of laziness that hides what your actual issue is.
I will mention that VAOs are separate from VBOs, and you can have multiple VAOs linked to the same VBO (VBOs are just buffers of raw data; again, Vulkan goes a step further here, and its buffers store literally everything: vertex data, element indices, textures, shader constants, etc.).
You can also have multiple programs use the same VAOs: you just bind whatever VAO you want active (glBindVertexArray) once, then use your programs and call your render functions as needed. Here's an example from one of my projects that uses the same terrain data to render both the shaded terrain and a grid on top of it:
chunk.VertexIndexBuffer.Bind();
// pass 1, render the terrain
terrainShader.ProgramUniform("selectedPrimitiveID", selectedPrimitiveId);
terrainShader.Use();
roadsTexture.Bind();
GL.DrawArrays(PrimitiveType.Triangles, 0, chunk.VertexIndexBuffer.VertexCount);
// pass 2, render the grid
gridShader.Use();
GL.DrawElements(BeginMode.Lines, chunk.VertexIndexBuffer.IndexCount, DrawElementsType.UnsignedShort, 0);
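Regarding the question's edit about attribute locations: one common way to make a single VAO layout work with every program is to pin the locations explicitly in each shader, which is core in GLSL 3.30. A sketch, with made-up attribute names:
// Every vertex shader in the project agrees on the same locations,
// so one VAO / glVertexAttribPointer setup works for all programs.
#version 330 core
layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec3 inNormal;
layout(location = 2) in vec2 inTexCoord;

uniform mat4 mvp;
out vec2 vTexCoord;

void main()
{
    vTexCoord   = inTexCoord;
    gl_Position = mvp * vec4(inPosition, 1.0);
}
Alternatively, you can call glBindAttribLocation(program, 0, "inPosition") and so on before linking each program to enforce the same convention from the application side.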

How to render multiple objects that can each have multiple shaders in OpenGL 3.3?

I'm trying to make a 3D renderer with OpenGL using C++. So far I have a Scene class that contains lists of Object and Material objects (I also have classes for those), and I've written my code so that an object can have multiple shaders (each shader affects a group of vertices in an object). Now I'm trying to find a good way to send all that information to OpenGL.
I've seen people suggest taking everything that uses the same shader and rendering it at once, and doing the same for every shader. If I understood that correctly, is it a good idea when the same shaders are included in different objects? If I merged every vertex that uses shader A, for example, won't it be a problem that the group contains vertices of separate objects when I try to draw them at once? And if I instead take each object and split it according to its shaders, so that for rendering I would take object A, split it into its shader groups, then draw shader group 1 of object 1, then shader group 2 of object 2, and so on, won't that be too many draw calls?
What strategy do you recommend to accomplish that ?
The first thing I recommend is that you stop thinking in terms of "objects" as far as the rendering process is concerned. When rendering, the only sensible grouping is drawing batches (of a certain primitive: points, lines, triangles) for which the same rendering steps (render pipeline) are executed. The modern rendering APIs released over the past months (Vulkan, DirectX 12 and Metal) make this explicit.
When rendering your scene, the recommended strategy is to iterate over all your objects, split them into render pipeline groups, and perform a single drawing batch call for each primitive-by-pipeline group. The overall goal should be to minimize the total number of drawing calls made.
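A rough C++ sketch of that loop (the Object fields, and keying the groups by shader program, are simplifying assumptions for illustration):
#include <map>
#include <vector>

struct Object {
    GLuint  program;     // the pipeline this object is drawn with
    GLuint  vao;
    GLsizei indexCount;
};

void renderScene(const std::vector<Object>& objects)
{
    // Bucket objects by program so each pipeline is bound exactly once.
    std::map<GLuint, std::vector<const Object*>> byProgram;
    for (const Object& obj : objects)
        byProgram[obj.program].push_back(&obj);

    for (const auto& [program, batch] : byProgram) {
        glUseProgram(program);
        for (const Object* obj : batch) {
            glBindVertexArray(obj->vao);
            glDrawElements(GL_TRIANGLES, obj->indexCount, GL_UNSIGNED_INT, nullptr);
        }
    }
}
This still issues one draw call per object; going further means merging objects that share a pipeline into shared buffers so each group becomes a single call.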
If you are using OpenGL 3.3, you are using Vertex Array Objects (VAOs) and Vertex Buffer Objects (VBOs). You have an object, a table for example, which can have three (or more, or fewer) VBOs: one for vertex data, one for normal data and one for texture coordinate data. You enclose the VBOs of that table inside one VAO. So every object has its own VAO, stored in GPU memory.
When you want to render your objects or a part of them, you bind one of your shaders for use and draw the VAOs you want rendered with that shader. It may be important that you render the right objects in the right order and use the right shaders (of course!) on each VAO.
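A sketch of that per-object setup (attribute locations 0/1/2 for position/normal/texture coordinate are an assumption, and positions/normals/texCoords stand for std::vector<float> data filled elsewhere):
GLuint vao, vbo[3];
glGenVertexArrays(1, &vao);
glGenBuffers(3, vbo);
glBindVertexArray(vao);

glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);   // vertex positions
glBufferData(GL_ARRAY_BUFFER, positions.size() * sizeof(float), positions.data(), GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);

glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);   // normals
glBufferData(GL_ARRAY_BUFFER, normals.size() * sizeof(float), normals.data(), GL_STATIC_DRAW);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, nullptr);

glBindBuffer(GL_ARRAY_BUFFER, vbo[2]);   // texture coordinates
glBufferData(GL_ARRAY_BUFFER, texCoords.size() * sizeof(float), texCoords.data(), GL_STATIC_DRAW);
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 0, nullptr);

glBindVertexArray(0);   // the VAO now records all three attribute bindings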

How could I morph one 3D object into another using shaders?

For the latest Ludum Dare competition the theme was shapeshifting, so my idea involved simple morphing from one geometric shape to another.
So what I did was I made a few objects in Blender with the same vertex count. In OpenGL I made separate VAOs for each object and one additional VAO (with dynamic draw attributes) for the "morphing" object. Every frame, while the player is shapeshifting, I would upload interpolated vertex data, between the current object and the target object, into this extra VAO and then render. Otherwise just render the object's corresponding VAO.
Morphing looked like this (the vertices have a different ordering, so morphing is not "smooth").
Since I had little time I just made something quick and dirty but now I think this is not a great way of doing this process, because I have to upload a lot of data to the GPU every frame. And it doesn't look scalable either, if I ever wanted to draw multiple morphing objects at different morphing stages.
As a first step to improve this process I would like to move those interpolation calcs into the shaders.
I could perhaps store the data for all objects in a single VAO, in separate attributes, and then select which of the attributes to interpolate from.
But I was wondering: is there a way to somehow send multiple (two) objects/buffers into the shaders, along with an interpolation rate uniform, and then in the shaders I would do the interpolation?
You can create a buffer that holds several sets of coordinates for each vertex. Just like you normally have coordinates, normals and texture coordinates, you can have coordinate1, coordinate2, coordinate3, etc. Then in the shader you can have a uniform variable that says which to use.
With two it's of course easy, since the uniform goes from zero to one and you just multiply the first coordinate by it and add the second multiplied by (1.0 - value).
Then just make sure you create the meshes from the same base shape and they will morph nicely.
Also if you use normals, make sure you have several normals and interpolate between them also.
The downside is that the more data you put through, the more skipping around in memory the shader has to do, so it might not be the prettiest solution if you have a lot of forms.
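A sketch of such a vertex shader for two shapes (GLSL 3.30; the attribute and uniform names are made up, and mix() performs the same interpolation described above):
#version 330 core
layout(location = 0) in vec3 positionA;   // shape we are morphing from
layout(location = 1) in vec3 positionB;   // shape we are morphing to
layout(location = 2) in vec3 normalA;
layout(location = 3) in vec3 normalB;

uniform float morphFactor;   // 0.0 = shape A, 1.0 = shape B
uniform mat4 mvp;

out vec3 vNormal;

void main()
{
    vec3 position = mix(positionA, positionB, morphFactor);
    vNormal       = normalize(mix(normalA, normalB, morphFactor));
    gl_Position   = mvp * vec4(position, 1.0);
}
With this, morphing only requires updating the morphFactor uniform each frame instead of re-uploading vertex data.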

Render multiple objects in OpenGL

So I read somewhere that I should use as few VBOs as possible and render many objects from a single VBO.
I get the point, but isn't that less organized?
I planned on creating 3D shape classes where each instance of the class has its own VBO that is bound when it needs to be drawn. This is in contrast to what I wrote earlier.
How do 3D applications (video games etc.) get this done?
Firstly, a VBO is just a collection of vertices stored in your video card's memory. You can use a single VBO and a model matrix to transform it, and you can also apply different shaders and textures on top of it.
This is best explained with a 2D game in mind where every graphic is basically a quad; 4 vertices.
Would you really need to flood your video card memory with repeated allocations for a VBO of 4 vertices?
Instead we can have a pointer to the VBO and model matrix that transforms it. The VBO is usually stored in a resource manager so it stays in memory for as long as we like.
So now we're reusing a single VBO for everything and applying different transformations and textures to it. It looks like there are lots and lots of VBOs, but actually we're being very efficient with our video card memory.
Here's some pseudo code:
class Sprite {
    mat4 transform;                          // per-sprite model matrix
    GLuint VBO = ResourceManager::quadVBO;   // shared quad VBO, not a per-sprite copy
};
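A possible draw loop using it (assuming the shared quad lives in a 4-vertex VAO drawn as a triangle strip, and that spriteShader, its setMat4() helper and sprite.texture are hypothetical names standing in for whatever shader wrapper and texture handle you actually use):
glBindVertexArray(quadVAO);   // the one shared quad, uploaded once
spriteShader.use();
for (const Sprite& sprite : sprites) {
    spriteShader.setMat4("model", sprite.transform);   // per-instance transform
    glBindTexture(GL_TEXTURE_2D, sprite.texture);      // per-instance texture
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}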

How to handle multiple VBOs and IBOs

I have a lot of models that are produced on the CPU (around 100K models, each around 100 triangles), and every model has its own VBO and IBO. If I try to draw each model with glDrawElements(), it is quite slow. If I instead combine all VBOs and IBOs, then whenever a model is deleted I need to update the VBO and almost all of the IBO, because the removed points change the index order, and re-buffering all of that is also slow. I'm also not sure about instancing performance, and for picking I need to know which triangle belongs to which model. Is there any way to buffer everything and then have one draw function draw each individual model with its own vertex and index data?
You can set a base vertex per mesh and pass it to glDrawElementsBaseVertex(). This still requires one call per mesh, which can be solved with glMultiDrawElementsBaseVertex(), which lets you combine them all into a single draw call.
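A sketch of the single-call version in C++ (assuming all meshes were packed into one shared VBO/IBO pair behind one VAO, with 16-bit indices; the PackedMesh bookkeeping struct is made up for illustration):
#include <vector>

struct PackedMesh {
    GLsizei    indexCount;        // indices belonging to this mesh
    GLsizeiptr indexOffsetBytes;  // byte offset of its first index in the shared IBO
    GLint      baseVertex;        // offset of its first vertex in the shared VBO
};

void drawAll(GLuint sharedVAO, const std::vector<PackedMesh>& meshes)
{
    std::vector<GLsizei>     counts;
    std::vector<const void*> offsets;
    std::vector<GLint>       baseVertices;
    for (const PackedMesh& m : meshes) {
        counts.push_back(m.indexCount);
        offsets.push_back(reinterpret_cast<const void*>(m.indexOffsetBytes));
        baseVertices.push_back(m.baseVertex);
    }
    glBindVertexArray(sharedVAO);
    glMultiDrawElementsBaseVertex(GL_TRIANGLES, counts.data(), GL_UNSIGNED_SHORT,
                                  offsets.data(),
                                  static_cast<GLsizei>(counts.size()),
                                  baseVertices.data());
}
With this layout, deleting a model can simply mean dropping its PackedMesh entry (and compacting the shared buffers later, if ever), rather than re-uploading everything immediately.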