Design for a good relation between shader program and VAO - OpenGL

I am learning OpenGL and trying to get a grasp of the best practices. I am working on a simple demonstration project in C++ which, however, aims to be a bit more generic and better structured (not everything just put in main()) than most of the tutorials I have seen on the web. I want to use the modern OpenGL way, which means VAOs and shaders. My biggest concern is the relation between VAOs and shader programs. Maybe I am missing something here.
I am now thinking about the best design. Consider the following scenario:
there is a scene which contains multiple objects
each object has its individual size, position and rotation (i.e. transform matrix)
each object has a certain basic shape (e.g. box, ball), there can be multiple objects of the same shape
there can be multiple shader programs (e.g. one with plain interpolated RGBA colors, another with textures)
This leads me to the basic three components of my design:
ShaderProgram class - each instance contains a vertex shader and fragment shader (initialized from given strings)
Object class - has transform matrix and reference to a shape instance
Shape base class - and derived classes e.g. BoxShape, SphereShape; each derived class knows how to generate its mesh, turn it into buffers and map it to vertex attributes, in other words it will initialize its own VAO; it also knows which glDraw... function(s) to use to render itself
When a scene is being rendered, I will call glUseProgram(rgbaShaderProgram). Then I will go through all objects which can be rendered using this program and render them. Then I will switch to glUseProgram(textureShaderProgram) and go through all textured objects.
When rendering an individual object:
1) I will call glUniformMatrix4fv() to set the individual transformation matrix (of course including the projection matrix etc.)
2) then I will ask the shape to which the object is associated to render itself
3) when the shape is rendered, it will bind its VAO, call its specific glDraw...() function and then unbind the VAO (a minimal sketch of this loop follows below)
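For illustration, a minimal sketch of that loop (rgbaShaderProgram, mvpLocation, objectsByProgram and the glm usage are placeholders, not a finished design):

// Hypothetical per-frame loop matching the description above.
glUseProgram(rgbaShaderProgram);
for (const Object& obj : objectsByProgram[rgbaShaderProgram]) {
    glm::mat4 mvp = projection * view * obj.transform();
    glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, glm::value_ptr(mvp));
    obj.shape().draw();   // binds its VAO, calls glDraw...(), unbinds
}
glUseProgram(textureShaderProgram);
// ... same loop for the textured objects ...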
In my design I wanted to decouple Shape from ShaderProgram, since in theory they should be interchangeable. But some dependency still remains. When generating vertices in a specific ...Shape class and filling buffers with them, I already need to know that, for example, I have to generate texture coordinates rather than RGBA components for each vertex. And when setting the vertex attribute pointers with glVertexAttribPointer, I already must know that the shader program will use, for example, floats rather than integers (otherwise I would have to call glVertexAttribIPointer). I also need to know which attribute will be at which location in the shader program. In other words, I am mixing the responsibility for pure shape geometry with prior knowledge about how it will be rendered. And as a consequence, I cannot render a shape with a shader program that is not compatible with it.
So finally my question: how can I improve my design to achieve the goal (render the scene), keep the versatility (interchangeability of shaders and shapes), force correct usage (not allow mixing shapes with incompatible shaders), get the best performance possible (avoid unnecessary program or context switches) and maintain good design principles (one class, one responsibility)?

What I would do is prepare ShaderProgram templates, which I adapt to the VAO attributes.
After all, shader programs are initially text. What you might do is write your main functions for the vertex and fragment programs, and attach a "header" to them depending on the bound VAO. It can be useful to use standardized names for the variables inside the shaders, such as InPos, InNor, InTan, InTex.
That way, you can scan for elements that are used inside the shader but missing in the VAO, and simply append them in the header as const values with some default setting.
I do this via a ShaderManager with a RequestShader(template, VAO) kind of function, which adapts the template to the VAO and caches the compiled shader for later use.
If another VAO has the same attributes and requires the same template, the precompiled cached version is returned to avoid repeating the adaptation process.
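For illustration, a rough sketch of how that could look (VertexFormat, CompileAndLink and the cache-key scheme are assumptions, not a fixed API):

#include <string>
#include <unordered_map>
#include <vector>

GLuint CompileAndLink(const std::string& source);  // assumed helper

struct VertexFormat {
    std::vector<std::string> attributes;           // e.g. {"InPos", "InNor"}
    bool Has(const std::string& name) const;
    std::string Key() const;                       // canonical cache-key string
};

class ShaderManager {
public:
    GLuint RequestShader(const std::string& templateSource,
                         const VertexFormat& vao) {
        const std::string key = vao.Key() + "|" + templateSource;
        auto it = cache_.find(key);
        if (it != cache_.end())
            return it->second;        // same attributes + template: reuse
        // Real inputs for attributes the VAO provides, const defaults
        // for the ones the shader uses but the VAO lacks.
        std::string header;
        for (const char* name : {"InPos", "InNor", "InTan", "InTex"}) {
            if (vao.Has(name))
                header += std::string("in vec4 ") + name + ";\n";
            else
                header += std::string("const vec4 ") + name + " = vec4(0.0);\n";
        }
        const GLuint program = CompileAndLink(header + templateSource);
        cache_[key] = program;
        return program;
    }
private:
    std::unordered_map<std::string, GLuint> cache_;
};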

Related

Using the same vertex array object for different shader programs

my goal is to pack all mesh data into a C++ class, with the possibility of using an object of such a class with more than one GLSL shader program.
I'm stuck on this problem: if I understand it correctly, the way vertex data is passed from the CPU to the GPU in GLSL is based on Vertex Array Objects, which:
are made of Vertex Buffer Objects, which store the raw vertex data, and
Vertex Attribute Pointers, which tell which shader variable location to assign and the format of the data passed to the VBO above
additionally one can make an Element Buffer Object to store indices
other extra elements necessary to draw an object are textures, which are made separately
I was able to create all that data, store only the VAO in my mesh class and pass just the VAO handle to my shader program to draw it. It works, but, for my goal of using multiple shader programs, the VAP stands in my way. It is the only element in my class that relies on a specific shader program's property (namely the locations of its input variables).
I wonder if I'm forced to remake all VAOs every time I switch shaders when I want to draw something. I'm afraid the efficiency of my drawing code would suffer drastically.
So, I'm asking whether I should forget about making my universal mesh class and instead make separate objects for each shader program I'm using in my application. I would rather not, as I want to avoid unnecessary copying of my mesh data. On the other hand, if this means my drawing routine will slow down because of all that remaking of VAOs every millisecond during drawing, then I have to rethink the design of my code :(
EDIT: OK, I misunderstood: VAOs store bindings to other objects, not the data itself. But one thing is still left: to make a VAP I have to provide the location of an input variable from a shader and also the layout of the data in the VBO from a mesh.
What I'm trying to avoid is making a separate object that stores that VAP and relies on both a mesh object and a shader object. My problem would be solved if all shader programs used the same locations for the same things, but at this moment I'm not sure that's the case.
additionally one can make an Element Buffer Object to store indices
other extra elements necessary to draw an object are textures, which are made separately
Just to be clear, that's you being lazy and/or uninformed. OpenGL strongly encourages you to use a single buffer for both vertex and index data. Vulkan goes a step further by limiting the number of object names (VAOs, VBOs, textures, shaders, everything) you can create, and actively punishes you for not doing so.
As to the problem you're facing: it would help if you used correct nomenclature, or at least full names. "VAP" is yet another sign of laziness that hides what your actual issue is.
I will mention that VAOs are separate from VBOs, and you can have multiple VAOs linked to the same VBO (VBOs are just buffers of raw data; again, Vulkan goes a step further here, and its buffers store literally everything: vertex data, element indices, textures, shader constants, etc.).
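To illustrate the single-buffer point above, a sketch of that layout (vertexBytes, indexBytes, vertexData, indexData and vao are placeholders); the index data simply lives after the vertex data in one buffer and is addressed by byte offset:

// One buffer: vertex data first, index data appended after it.
GLuint buf = 0;
glGenBuffers(1, &buf);
glBindBuffer(GL_ARRAY_BUFFER, buf);
glBufferData(GL_ARRAY_BUFFER, vertexBytes + indexBytes, nullptr, GL_STATIC_DRAW);
glBufferSubData(GL_ARRAY_BUFFER, 0, vertexBytes, vertexData);
glBufferSubData(GL_ARRAY_BUFFER, vertexBytes, indexBytes, indexData);

glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, buf);                  // vertex binding
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buf);          // index binding, same buffer
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT,
               reinterpret_cast<const void*>(vertexBytes));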
You can also have multiple programs use the same VAOs: you just bind whatever VAO you want active (glBindVertexArray) once, then use your programs and call your render functions as needed. Here's an example from one of my projects that uses the same terrain data to render both the shaded terrain and a grid on top of it:
chunk.VertexIndexBuffer.Bind(); // bound once; stays bound for both passes
// pass 1, render the terrain
terrainShader.ProgramUniform("selectedPrimitiveID", selectedPrimitiveId);
terrainShader.Use();
roadsTexture.Bind();
GL.DrawArrays(PrimitiveType.Triangles, 0, chunk.VertexIndexBuffer.VertexCount);
// pass 2, render the grid
gridShader.Use();
GL.DrawElements(BeginMode.Lines, chunk.VertexIndexBuffer.IndexCount, DrawElementsType.UnsignedShort, 0);

How to render multiple objects that can each have multiple shaders in OpenGL 3.3?

I'm trying to make a 3D renderer with OpenGL using C++. So far I have a Scene class that contains lists of Object and Material instances (I have classes for those as well), and I've written my code so that an object can have multiple shaders (every shader will be able to affect a group of vertices in an object). But now I'm trying to find a good way to send all that information to OpenGL.
I've seen people suggest taking everything that uses the same shader and rendering it at once, and doing the same for every shader, if I understood well enough. But is that a good idea when the same shader appears in different objects? If I merged every vertex that uses shader A, for example, won't it hurt that the group contains vertices of separate objects when I try to draw them at once? And if I take each object and split it according to its shaders, so that for rendering I would take object A, split it into its shader groups, then draw shader group 1 in object 1, then shader group 2 in object 2 and so on, won't that be too many draw calls?
What strategy do you recommend to accomplish that ?
The first thing I recommend is that you stop thinking in terms of "objects" as far as the rendering process is concerned. When rendering, the only sensible grouping is drawing batches (of a certain primitive: points, lines, triangles) for which the same rendering steps (render pipeline) are executed. The modern rendering APIs released over the past months (Vulkan, DirectX 12 and Metal) make this explicit.
When rendering your scene, the recommended strategy is to iterate over all your objects, split them into render pipeline groups and perform a single batched draw call for each primitive-by-pipeline group. The overall goal should be to minimize the total number of drawing calls made.
If you are using OpenGL 3.3, you are using Vertex Array Objects (VAOs) and Vertex Buffer Objects (VBOs). You have an object, a table for example, which can have three (or more, or fewer) VBOs: one for vertex data, one for normal data and one for texture coordinate data. You enclose the VBOs of that table inside one VAO. So every object has its own VAO stored in GPU memory.
When you want to render your objects or a part of them, you bind one of your shaders for use and draw the VAOs you want rendered by that shader. It is important that you render the right objects in the right order and use the right shaders (of course!) on each VAO.
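For illustration, a rough sketch of that setup (the attribute locations 0-2 and the variable names are assumptions):

// One VAO per object, backed by separate VBOs per attribute.
GLuint vao, vbo[3];
glGenVertexArrays(1, &vao);
glGenBuffers(3, vbo);
glBindVertexArray(vao);

glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);               // vertex positions
glBufferData(GL_ARRAY_BUFFER, sizeof(positions), positions, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
glEnableVertexAttribArray(0);
// ... same for normals (location 1) and texture coordinates (location 2) ...
glBindVertexArray(0);

// later, with any program whose inputs match those locations:
glUseProgram(someProgram);
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);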

Render multiple models in OpenGL with a single draw call

I built a 2D graphics engine and created a batching system for it, so if I have 1000 sprites with the same texture, I can draw them with one single call to OpenGL.
This is achieved by putting the vertices of all the sprites with the same texture into a single VBO vertex array.
Instead of "print these vertices, print these vertices, print these vertices", I do "put all the vertices together, print", just to be very clear.
Easy enough, but now I'm trying to achieve the same thing in 3D, and I'm having a big problem.
The problem is that I'm using a Model View Projection matrix to place and render my models, which is the common approach to render a model in 3D space.
For each model on screen, I need to pass the MVP matrix to the shader, so that I can use it to transform each vertex to the correct position.
If I did the transformation outside the shader, it would be executed by the CPU, which is not a good idea, for obvious reasons.
But the problem lies there. I need to pass the matrix to the shader, but for each model the matrix is different.
So I cannot do the same thing I did with 2D sprites, because changing a shader uniform requires a separate draw call every time.
I hope I've been clear. Maybe you have a good idea I didn't have, or you've already had the same problem. I know for a fact that there is a solution somewhere, because in engines like Unity you can use the same shader for multiple models and get away with one draw call.
There exists a feature exactly like what you're looking for, and it's called instancing. With instancing, you store n matrices (or whatever else you need) in a Uniform Buffer and call glDrawElementsInstanced to draw n copies. In the shader, you get an extra input gl_InstanceID, with which you index into the Uniform Buffer to fetch the matrix you need for that particular instance.
You can read more about instancing here: https://www.opengl.org/wiki/Vertex_Rendering#Instancing
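For illustration, a rough sketch of that setup (matrixUbo, program, vao, n and the array size 256 are assumptions):

// GLSL vertex shader (sketch), indexing the buffer with gl_InstanceID:
//   layout(std140) uniform Matrices { mat4 mvp[256]; };
//   in vec4 position;
//   void main() { gl_Position = mvp[gl_InstanceID] * position; }

glBindBuffer(GL_UNIFORM_BUFFER, matrixUbo);
glBufferData(GL_UNIFORM_BUFFER, n * sizeof(glm::mat4),
             matrices.data(), GL_DYNAMIC_DRAW);
glUniformBlockBinding(program, glGetUniformBlockIndex(program, "Matrices"), 0);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, matrixUbo);

glBindVertexArray(vao);
glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT,
                        nullptr, n);                 // draws n instances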
The answer depends on whether the vertex data for each item is identical or not. If it is, you can use instancing as in #orost's answer, using glDrawElementsInstanced, and gl_InstanceID within the vertex shader, and that method should be preferred.
However, if each 3D model requires different vertex data (which is frequently the case), you can still render them using a single draw call. To do this, you would add another stream to your vertex data with glVertexAttribPointer (and glEnableVertexAttribArray). This extra stream would contain the index of the matrix within the uniform buffer that the vertex should use when rendering, so each mesh within the VBO would have an identical index in the extra stream. The uniform buffer contains the same data as in the instancing setup.
Note this method may require some extra CPU processing if you need to redo the batching, for example when an object within a batch should no longer be rendered. If that happens frequently, you should determine whether batching the items is actually beneficial.
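For illustration, a sketch of that extra stream (matrixIndexVbo, location 3 and the variable names are assumptions; glVertexAttribIPointer is used here so the index stays an integer in the shader):

// One GLint per vertex; all vertices of the same mesh share the same value.
glBindBuffer(GL_ARRAY_BUFFER, matrixIndexVbo);
glBufferData(GL_ARRAY_BUFFER, vertexCount * sizeof(GLint),
             matrixIndices.data(), GL_STATIC_DRAW);
glVertexAttribIPointer(3, 1, GL_INT, 0, nullptr);
glEnableVertexAttribArray(3);
// vertex shader (sketch):  in int matrixIndex;
//                          gl_Position = mvp[matrixIndex] * position;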
Besides instancing and adding another vertex attribute as some object ID, I'd like to also mention another strategy (which requires modern OpenGL, though):
The extension ARB_multi_draw_indirect (in core since GL 4.3) adds indirect drawing commands. These commands source their parameters (number of vertices, starting index and so on) directly from another buffer object. With these functions, many different objects can be drawn with a single draw call.
However, as you still want some per-object state like transformation matrices, that feature alone is not enough. But in combination with ARB_shader_draw_parameters (not in core GL yet), you get the gl_DrawID parameter, which is incremented by one for each object in a multi draw indirect call. That way, you can index into some UBO, or TBO, or SSBO (or whatever) where you store the per-object data.
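For illustration, a rough sketch (indirectBuf, vao and BuildCommands are assumptions; the command struct layout is fixed by the extension):

struct DrawElementsIndirectCommand {
    GLuint count, instanceCount, firstIndex;
    GLint  baseVertex;
    GLuint baseInstance;
};
std::vector<DrawElementsIndirectCommand> cmds = BuildCommands(); // assumed helper

glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuf);
glBufferData(GL_DRAW_INDIRECT_BUFFER,
             cmds.size() * sizeof(cmds[0]), cmds.data(), GL_STATIC_DRAW);
glBindVertexArray(vao);
glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_SHORT,
                            nullptr, (GLsizei)cmds.size(), 0);
// With ARB_shader_draw_parameters the vertex shader reads gl_DrawIDARB and
// uses it to index per-object data in a UBO/TBO/SSBO.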

How to render multiple shapes and objects in OpenGL 3.2+?

I am following this guide http://open.gl/ to learn how to program using modern OpenGL. I am stuck on one thing. Earlier, with the older OpenGL API, I used to create different functions for drawing different types of shapes, like drawRect(), drawCircle(), etc. All I needed to do was use glVertex3f() in different combinations inside glBegin() and glEnd().
But in OpenGL 3.2+, even drawing a rectangle takes a hell of a lot of code. I understand that it gives you more control over rendering, but I am confused about what to do when rendering multiple things like rectangles, circles, etc. together. Do I need to write all that code multiple times, including the shaders, or can I reuse the code?
I have implemented this code http://open.gl/content/code/c2_triangle_elements.txt
This example draws a rectangle, but what if I wanted to draw a triangle along with this rectangle, how would I do that?
You can reuse most of the code. All the shaders can be loaded once and then reused by calling glUseProgram. As for all the buffers, you can create them once and then reuse them as many times as you want.
You might want to create some classes like ShaderClass and ShapeClass.
A base shader class will usually have 2 or 3 members like program, vertexShader and fragmentShader (all being IDs of some sort). When you extend it, also add all the uniforms and attributes from the shaders so you can later use them to modify your drawing (change the colour, for instance).
A ShapeClass will usually contain a vertex buffer ID (and an optional index buffer ID) and the number of vertices to draw.
So in the end, when you want to draw some shape, you only use a shader you already created and compiled, bind and set the vertex data from your shape's buffer, and call the draw.
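For illustration, a minimal sketch of that reuse (program, rectangleVao and triangleVao are assumptions, prepared once at startup):

glUseProgram(program);             // one compiled program for both shapes

glBindVertexArray(rectangleVao);   // rectangle: indexed, two triangles
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, nullptr);

glBindVertexArray(triangleVao);    // triangle: plain array draw
glDrawArrays(GL_TRIANGLES, 0, 3);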

OpenGL and OOP programming

I want to structure my OpenGL program in classes. At first, I think I'll have Mesh, Shader, Vertex and Face (indices) classes. Shader is a class for handling shader operations like create, compile, link, bind, unbind. Mesh contains a shader, a list of Vertex and a list of Face; it will create the buffers and draw the object to the screen using the attached shader. Hence all objects (like Box, Sphere, Monkey, TeaPot, ...) will inherit from the Mesh class (or even use the Mesh class directly). The problem is in the draw method of Mesh: each shader has different attribute and uniform variables, and I can't find a way to draw an object with a dynamic shader. Is my OOP model bad, or is there a way to do this?
I just want my program to look like the code below:
Vertex vertices[] = {....};
Face faces[] = {...}; // Indices
Shader color_shader = ColorShader::getOne();
Mesh box;
box.pushVertices(vertices);
box.pushFaces(faces);
box.shader = color_shader;
box.buildObject();
In the render method, something like this:
box.draw(MVP);
Can anyone suggest a way to draw objects with a dynamic shader, or another way to achieve this?
Assembled from comments:
Conventionally, a mesh is just geometry. A material is a shader (more generally, an effect) and its parameters. And something like a 'sceneobject' is a virtual node entity that contains references to a mesh (or another object type) and materials. I highly doubt a mesh should draw itself; this approach lacks flexibility and doesn't make much sense.
I think you should use something like a 'renderer' or 'render_queue' or whatever it's called, that takes objects to draw (bundles of meshes and materials) and draws them at some point. It will even allow you to sort the rendering queue to minimise state switches (switching shaders/shader parameters/textures has a performance cost).
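For illustration, a rough sketch of such a queue (RenderItem, Material, Mesh and their members are assumptions, not a fixed design):

#include <algorithm>
#include <vector>

struct Material { GLuint program; void ApplyUniforms() const; };
struct Mesh     { void Draw() const; };
struct RenderItem { const Material* material; const Mesh* mesh; };

void Flush(std::vector<RenderItem>& queue) {
    // Sort by program so items with equal shaders end up adjacent.
    std::sort(queue.begin(), queue.end(),
              [](const RenderItem& a, const RenderItem& b) {
                  return a.material->program < b.material->program;
              });
    GLuint current = 0;
    for (const RenderItem& item : queue) {
        if (item.material->program != current) {   // switch only when needed
            current = item.material->program;
            glUseProgram(current);
        }
        item.material->ApplyUniforms();            // assumed helper
        item.mesh->Draw();                         // binds VAO, issues draw
    }
    queue.clear();
}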
Attributes come from the mesh (maybe it would be easier to use some standard names like 'position', 'normal', 'texcoord0', ...). As for uniforms, there's more than one way. E.g. you have a material file with contents like
use_effect phong_single_texture;
texture t0 = "path_to_texture";
vec3 diffuse_colour = { 1, 0, 0 };
Then you load this file into a material structure, link it with the effect (what you called 'shader'), texture, etc. Then the renderer pulls this all together and sets up the attributes and uniforms by name.
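For illustration, a sketch of setting one uniform by name (program is an assumption; the name matches the material file above):

GLint loc = glGetUniformLocation(program, "diffuse_colour");
if (loc != -1)                            // shader may not use this parameter
    glUniform3f(loc, 1.0f, 0.0f, 0.0f);   // value parsed from the material file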
Side note:
OOP everywhere isn't very good if you aim for high performance, but it is good for educational purposes. Once your experience is good enough, google something like 'data oriented design', but it isn't an easy topic.