OpenGL and OOP programming - C++

I want to structure my OpenGL program in classes. To start with, I think I'll have Mesh, Shader, Vertex and Face (indices) classes. Shader handles shader operations such as create, compile, link, bind and unbind. Mesh contains a shader, a list of Vertex and a list of Face; it creates the buffers and draws the object to the screen using the attached shader. All objects (Box, Sphere, Monkey, TeaPot, ...) would then inherit from Mesh (or even use Mesh directly). The problem is in the draw method of Mesh: each shader has different attribute and uniform variables, and I can't find a way to draw an object with a dynamically chosen shader. Is my OOP model flawed, or is there a way to do this?
I just want my program to look like this:
Vertex vertices[] = {....};
Face faces[] = {...}; // Indices
Shader color_shader = ColorShader::getOne();
Mesh box;
box.pushVertices(vertices);
box.pushFaces(faces);
box.shader = color_shader;
box.buildObject();
And in the render method:
box.draw(MVP);
Can anyone suggest a way to draw objects with a dynamic shader, or another way to achieve this?

Assembled from comments:
Conventionally, a mesh is just geometry. A material is a shader (more generally, an effect) plus its parameters. Then there is something like a 'sceneobject': a virtual node entity that holds a reference to a mesh (or another object type) and to materials. I highly doubt a mesh should draw itself; that approach lacks flexibility and doesn't make much sense.
I think you should use something like a 'renderer' or 'render_queue' (whatever you call it) that takes objects to draw (bundles of meshes and materials) and draws them at some later point. That also lets you sort the render queue to minimise state switches (switching shaders, shader parameters or textures has a performance cost).
Attributes come from the mesh (it may be easiest to use standard names like 'position', 'normal', 'texcoord0', ...). As for uniforms, there's more than one way. For example, you could have a material file with contents like:
use_effect phong_single_texture;
texture t0 = "path_to_texture";
vec3 diffuse_colour = { 1, 0, 0 };
You then load this file into a material structure and link it with the effect (what you called a 'shader'), textures, and so on. The renderer pulls it all together and sets up attributes and uniforms by name.
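A minimal sketch of what that material/renderer split might look like in C++ (the Material struct, its field names and the apply() helper are illustrative only, not a prescribed API; a GL loader header and GLM are assumed to be available):
#include <map>
#include <string>
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

struct Material {
    GLuint program;                              // the "effect" (linked shader program)
    std::map<std::string, glm::vec3> vec3s;      // e.g. "diffuse_colour" -> {1, 0, 0}
    std::map<std::string, GLuint>    textures;   // e.g. "t0" -> texture object
};

// The renderer pulls mesh + material together and sets uniforms by name.
void apply(const Material& m) {
    glUseProgram(m.program);
    for (const auto& [name, v] : m.vec3s) {
        GLint loc = glGetUniformLocation(m.program, name.c_str());
        if (loc != -1) glUniform3fv(loc, 1, glm::value_ptr(v));
    }
    int unit = 0;
    for (const auto& [name, tex] : m.textures) {
        glActiveTexture(GL_TEXTURE0 + unit);
        glBindTexture(GL_TEXTURE_2D, tex);
        GLint loc = glGetUniformLocation(m.program, name.c_str());
        if (loc != -1) glUniform1i(loc, unit);
        ++unit;
    }
}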
Side note:
OOP everywhere isn't great if you're aiming for high performance, but it is fine for educational purposes. Once you have enough experience, google something like 'data-oriented design', though it isn't an easy topic.

Related

How could I morph one 3D object into another using shaders?

For the latest Ludum Dare competition the theme was shapeshifting, so my idea involved simple morphing from one geometric shape to another.
So what I did was make a few objects in Blender with the same vertex count. In OpenGL I made separate VAOs for each object and one additional VAO (with dynamic draw attributes) for the "morphing" object. Every frame while the player is shapeshifting, I upload vertex data interpolated between the current object and the target object into this extra VAO and render that; otherwise I just render the object's own VAO.
Morphing looked like this:
(The vertices have a different ordering, so morphing is not "smooth")
Since I had little time, I made something quick and dirty, but now I think this is not a great way to do it, because I have to upload a lot of data to the GPU every frame. It doesn't look scalable either, if I ever wanted to draw multiple morphing objects at different morphing stages.
As a first step to improve this process I would like to move those interpolation calcs into the shaders.
I could perhaps store the data for all objects in a single VAO, in separate attributes, and then select which of the attributes to interpolate from.
But I was wondering: is there a way to somehow send multiple (two) objects/buffers into the shaders, along with an interpolation rate uniform, and then in the shaders I would do the interpolation?
You can create a buffer that holds several coordinates for each vertex. Just as you normally have coordinates, normals and texture coordinates, you can have coordinate1, coordinate2, coordinate3, etc. Then in the shader you can have a uniform variable that says which to use.
With two it's easy, of course: the uniform goes from zero to one, and you multiply the first coordinate by it and add the second multiplied by (1.0 - value).
Then just make sure you create the meshes from the same base shape and they will morph nicely.
Also, if you use normals, make sure you store several normals too and interpolate between them as well.
The downside is that the more data you push through, the more jumping around in memory the shader has to do, so it might not be the prettiest solution if you have a lot of shapes.
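For illustration, here is a sketch of that approach with just two shapes living in one VAO (the attribute locations and the uniform name "morph" are arbitrary choices, not taken from the question's code):
const char* morphVS = R"(#version 330 core
layout(location = 0) in vec3 positionA;   // source shape
layout(location = 1) in vec3 positionB;   // target shape
layout(location = 2) in vec3 normalA;
layout(location = 3) in vec3 normalB;
uniform float morph;                      // 0.0 = shape A, 1.0 = shape B
uniform mat4  mvp;
out vec3 vNormal;
void main() {
    vNormal     = normalize(mix(normalA, normalB, morph));
    gl_Position = mvp * vec4(mix(positionA, positionB, morph), 1.0);
})";

// Per frame only the uniform changes; no vertex data is re-uploaded:
glUniform1f(glGetUniformLocation(program, "morph"), t);   // t in [0, 1]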

Design for good relation between shader program and VAO

I am learning OpenGL and trying to get grasp of the best practices. I am working on a simple demonstration project in C++ which however aims to be a bit more generic and better structured (not everything just put in main()) than most of the tutorials I have seen on the web. I want to use the modern OpenGL ways which means VAOs and shaders. My biggest concern is about the relation of VAOs and shader programs. Maybe I am missing something here.
I am now thinking about the best design. Consider the following scenario:
there is a scene which contains multiple objects
each object has its individual size, position and rotation (i.e. transform matrix)
each object has a certain basic shape (e.g. box, ball), there can be multiple objects of the same shape
there can be multiple shader programs (e.g. one with plain interpolated RGBA colors, another with textures)
This leads me to the basic three components of my design:
ShaderProgram class - each instance contains a vertex shader and fragment shader (initialized from given strings)
Object class - has transform matrix and reference to a shape instance
Shape base class - and derived classes, e.g. BoxShape, SphereShape; each derived class knows how to generate its mesh, turn it into a buffer and map it to vertex attributes, in other words it initializes its own VAO; it also knows which glDraw... function(s) to use to render itself
When a scene is being rendered, I will call glUseProgram(rgbaShaderProgram). Then I will go through all objects which can be rendered using this program and render them. Then I will switch to glUseProgram(textureShaderProgram) and go through all textured objects.
When rendering an individual object:
1) I will call glUniformMatrix4fv() to set the individual transformation matrix (of course including projection matrix etc.)
2) then I will call the shape to which the object is associated to render
3) when the shape is rendered, it will bind its VAO, call its specific glDraw...() function and then unbind the VAO
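In code, the pass described above might look roughly like this (Object and Shape are the hypothetical classes from the design; the projection and view matrices are collapsed into one MVP uniform for brevity, and GLM is assumed):
glUseProgram(rgbaShaderProgram);
for (const Object& obj : rgbaObjects) {
    glm::mat4 mvp = projection * view * obj.transform;
    glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, glm::value_ptr(mvp));
    obj.shape->render();   // binds its VAO, issues its glDraw...() call, unbinds
}
glUseProgram(textureShaderProgram);
// ... same loop over the textured objects ...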
In my design I wanted to decouple Shape from ShaderProgram, since in theory they should be interchangeable. But some dependency still seems to be there. When generating vertices in a specific ...Shape class and filling buffers with them, I already need to know that, for example, I have to generate texture coordinates rather than RGBA components for each vertex. And when setting vertex attribute pointers with glVertexAttribPointer, I already have to know that the shader program will use, for example, floats rather than integers (otherwise I would have to call glVertexAttribIPointer). I also need to know which attribute will be at which location in the shader program. In other words, I am mixing the responsibility for pure shape geometry with prior knowledge of how it will be rendered. As a consequence, I cannot render a shape with a shader program that is not compatible with it.
So finally my question: how can I improve my design to achieve the goal (render the scene) while keeping the versatility (interchangeability of shaders and shapes), enforcing correct usage (not allowing shapes to be mixed with incompatible shaders), getting the best performance possible (avoiding unnecessary program or context switching) and maintaining good design principles (one class, one responsibility)?
What I would do is prepare ShaderProgram templates, which I adapt to the VAO attributes.
After all, shader programs are text, initially. What you might do is write your main functions for vertex and fragment programs, and attach the "header" to them, depending on the bound VAO. It could be useful to use standardized names for the variables you use inside the shaders, such as InPos, InNor, InTan, InTex.
That way, you can scan for elements that are missing in the VAO but used inside the shader, and simply append them in the header as const values with some default setting.
I do this via a ShaderManager, with a RequestShader(template, VAO) kind of function, which adapts the template to the VAO and caches the compiled shader for later use.
If another VAO has the same attributes and requires the same template, the precompiled cached version is returned to avoid repeating the adaptation process.
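A rough sketch of the caching idea (VaoDescription, buildHeaderFor(), loadTemplate() and compileProgram() are hypothetical helpers, not the answerer's actual code):
#include <string>
#include <unordered_map>

struct ShaderManager {
    // Key combines the template name with the VAO's attribute signature.
    std::unordered_map<std::string, GLuint> cache;

    GLuint RequestShader(const std::string& templ, const VaoDescription& vao) {
        std::string key = templ + "|" + vao.attributeSignature(); // e.g. "InPos,InNor,InTex"
        auto it = cache.find(key);
        if (it != cache.end())
            return it->second;                   // reuse previously adapted program

        // Prepend a header declaring the attributes the VAO provides, plus
        // const defaults for names the template uses but the VAO lacks.
        std::string source = buildHeaderFor(vao) + loadTemplate(templ);
        GLuint program = compileProgram(source);
        cache[key] = program;
        return program;
    }
};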

Render multiple models in OpenGL with a single draw call

I built a 2D graphics engine and created a batching system for it, so if I have 1000 sprites with the same texture, I can draw them with a single call to OpenGL.
This is achieved by putting the vertices of all the sprites with the same texture into a single VBO.
Instead of "draw these vertices, draw these vertices, draw these vertices", I do "put all the vertices together, draw", just to be very clear.
Easy enough, but now I'm trying to achieve the same thing in 3D, and I'm having a big problem.
The problem is that I'm using a Model View Projection matrix to place and render my models, which is the common approach to render a model in 3D space.
For each model on screen, I need to pass the MVP matrix to the shader, so that I can use it to transform each vertex to the correct position.
If I did the transformation outside the shader, it would be executed by the CPU, which is not a good idea, for obvious reasons.
But that is exactly where the problem lies. I need to pass the matrix to the shader, but for each model the matrix is different.
So I cannot do the same thing I did with 2D sprites, because changing a shader uniform requires a separate draw call every time.
I hope I've been clear; maybe you have a good idea I didn't have, or you have already had the same problem. I know for a fact that there is a solution somewhere, because in engines like Unity you can use the same shader for multiple models and get away with one draw call.
There exists a feature exactly like what you're looking for, and it's called instancing. With instancing, you store n matrices (or whatever else you need) in a Uniform Buffer and call glDrawElementsInstanced to draw n copies. In the shader, you get an extra input gl_InstanceID, with which you index into the Uniform Buffer to fetch the matrix you need for that particular instance.
You can read more about instancing here: https://www.opengl.org/wiki/Vertex_Rendering#Instancing
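A minimal sketch of that setup, assuming the matrices sit in a std140 uniform block (the block name InstanceData, the fixed array size of 256, and the mvps vector of glm::mat4 are illustrative assumptions):
// Vertex shader (GLSL 330): the per-instance matrix is picked with gl_InstanceID.
//   layout(location = 0) in vec3 position;
//   layout(std140) uniform InstanceData { mat4 mvp[256]; };
//   void main() { gl_Position = mvp[gl_InstanceID] * vec4(position, 1.0); }

// C++ side: fill a uniform buffer with one MVP per instance, then draw n copies.
GLuint ubo;
glGenBuffers(1, &ubo);
glBindBuffer(GL_UNIFORM_BUFFER, ubo);
glBufferData(GL_UNIFORM_BUFFER, mvps.size() * sizeof(glm::mat4), mvps.data(), GL_DYNAMIC_DRAW);

GLuint blockIndex = glGetUniformBlockIndex(program, "InstanceData");
glUniformBlockBinding(program, blockIndex, 0);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);

glUseProgram(program);
glBindVertexArray(vao);
glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr, (GLsizei)mvps.size());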
The answer depends on whether the vertex data for each item is identical or not. If it is, you can use instancing as in #orost's answer, using glDrawElementsInstanced, and gl_InstanceID within the vertex shader, and that method should be preferred.
However, if each 3D model requires different vertex data (which is frequently the case), you can still render them using a single draw call. To do this, you would add another stream into your vertex data with glVertexAttribPointer (and glEnableVertexAttribArray). This extra stream would contain the index of the matrix within the uniform buffer that vertex should use when rendering - so each mesh within the VBO would have an identical index in the extra stream. The uniform buffer contains the same data as in the instancing setup.
Note this method may require some extra CPU processing if you need to redo the batching - for example, when an object within a batch should no longer be rendered. If that happens frequently, it's worth checking whether batching the items is actually beneficial.
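A sketch of the extra stream (attribute location 3, the uint type and matrixIndexVbo are arbitrary choices here, not part of the answer's code):
// C++ side: one matrix index per vertex; every vertex of a given mesh in the
// batch carries the same value.
glBindBuffer(GL_ARRAY_BUFFER, matrixIndexVbo);
glEnableVertexAttribArray(3);
glVertexAttribIPointer(3, 1, GL_UNSIGNED_INT, 0, nullptr);   // integer attribute

// Vertex shader side:
//   layout(location = 3) in uint matrixIndex;
//   layout(std140) uniform ObjectData { mat4 mvp[256]; };
//   ...
//   gl_Position = mvp[matrixIndex] * vec4(position, 1.0);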
Besides instancing and adding another vertex attribute as some object ID, I'd like to also mention another strategy (which requires modern OpenGL, though):
The extension ARB_multi_draw_indirect (in core since GL 4.3) adds indirect drawing commands. These commands do source their parameters (number of vertices, starting index and so on) directly from another buffer object. With these functions, many different objects can be drawn with a single draw call.
However, as you still want some per-object state like transformation matrices, that feature alone is not enough. But in combination with ARB_shader_draw_parameters (not in core GL yet), you get the gl_DrawID parameter, which is incremented by one for each object in a single multi-draw indirect call. That way, you can index into some UBO, or TBO, or SSBO (or whatever) where you store per-object data.
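As a sketch (GL 4.3 assumed; the command list and the buildCommands() helper are hypothetical):
#include <vector>

struct DrawElementsIndirectCommand {
    GLuint count;          // number of indices for this object
    GLuint instanceCount;  // usually 1
    GLuint firstIndex;     // offset into the shared element buffer
    GLint  baseVertex;     // offset added to each index
    GLuint baseInstance;
};

std::vector<DrawElementsIndirectCommand> cmds = buildCommands(); // one command per object

GLuint indirectBuffer;
glGenBuffers(1, &indirectBuffer);
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
glBufferData(GL_DRAW_INDIRECT_BUFFER,
             cmds.size() * sizeof(DrawElementsIndirectCommand),
             cmds.data(), GL_STATIC_DRAW);

glBindVertexArray(vao);   // VAO whose element buffer contains all objects' indices
glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, nullptr, (GLsizei)cmds.size(), 0);

// In the vertex shader (with ARB_shader_draw_parameters):
//   #extension GL_ARB_shader_draw_parameters : require
//   ...
//   gl_Position = mvp[gl_DrawIDARB] * vec4(position, 1.0);  // per-object matrix from a UBO/SSBO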

How to render multiple shapes and objects in OpenGL 3.2+?

I am following this guide http://open.gl/ to learn how to program using modern OpenGL. I am stuck on one thing. With the older OpenGL API, I used to create different functions for drawing different types of shapes, like drawRect(), drawCircle(), etc. All I needed to do was use glVertex3f() in different combinations inside glBegin() and glEnd().
But in OpenGL 3.2+, even drawing a rectangle takes a lot of code. I understand that it gives you more control over rendering, but I am confused about what to do when rendering multiple things like rectangles, circles, etc. together. Do I need to write all that code multiple times, including the shaders, or can I reuse the code?
I have implemented this code http://open.gl/content/code/c2_triangle_elements.txt
This example draws a rectangle, but what if I wanted to draw a triangle along with the rectangle? How would I do that?
You can reuse most of the code. All the shaders can be loaded once and then reused by calling glUseProgram. As for the buffers, you can create them once and then reuse them as many times as you want.
You might want to create some classes like ShaderClass and ShapeClass.
A base shader class will usually have 2 or 3 members like program, vertexShader and fragmentShader (all being some IDs). When you extend it, also add all the uniforms and attributes from the shaders so you can later use them to modify your drawing (change the colour, for instance).
A ShapeClass will usually contain vertex buffer ID (and optional index buffer ID) and number of vertices to draw.
So in the end, when you want to draw a shape, you just use a shader you have already created and compiled, bind and set the vertex data from your shape's buffer, and issue the draw call.
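As a minimal sketch of that reuse (Shape here is a hypothetical struct holding a VAO handle and an index count, both built once at startup):
glUseProgram(program);                        // same shader for both shapes

glBindVertexArray(rectangle.vao);
glDrawElements(GL_TRIANGLES, rectangle.indexCount, GL_UNSIGNED_INT, nullptr);

glBindVertexArray(triangle.vao);
glDrawElements(GL_TRIANGLES, triangle.indexCount, GL_UNSIGNED_INT, nullptr);

glBindVertexArray(0);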

OpenGL define colors using shaders

I am learning OpenGL. At the moment I know how to define primitives using VBOs.
I implemented a simple Mesh class and, derived from it, some primitives like Square.
Now I want to learn a good way to define colors. I am thinking about using shaders. My idea is to get something like this:
class ColorShader {
public:
    static GLuint red;
};
// defined once in a .cpp file (after a GL context exists):
GLuint ColorShader::red = LoadShaders( "SimpleVertexShader.vertexshader", "red.fragmentshader" );
But I am not sure this is a good way to do it. I think the plus of this method is that I'd use 30-50% less memory per triangle. The minus is that I'd need to prepare more fragment shaders.
Per-vertex colors give me more power to define objects, but they consume more memory, and I don't like the idea of setting colors and vertices in the same place.
If I understand you correctly, you want to set the color per object instead of per vertex? I would go with sending a float4 value to the fragment shader for every object in a constant buffer (at least that is what it is called in DirectX). It is really simple and about the most basic thing you can do.
Edit: in OpenGL these are called uniform buffers: http://www.opengl.org/wiki/Uniform_Buffer_Object
The basic idea is that you declare a uniform variable in your shader, get its location in your C++ code, and upload 4 floats to it every time you draw an object (or set it once and the same color will be used for every object).
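A minimal sketch of that approach (the uniform name "objectColor" and the drawObject() helper are made up for illustration):
// Fragment shader side:
//   uniform vec4 objectColor;
//   out vec4 fragColor;
//   void main() { fragColor = objectColor; }

GLint colorLoc = glGetUniformLocation(program, "objectColor");

glUseProgram(program);
glUniform4f(colorLoc, 1.0f, 0.0f, 0.0f, 1.0f);   // red for this object
drawObject(boxVao, boxIndexCount);                // hypothetical draw helper

glUniform4f(colorLoc, 0.0f, 0.0f, 1.0f, 1.0f);   // blue for the next one
drawObject(sphereVao, sphereIndexCount);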