How to render multiple shapes and objects in OpenGL 3.2+? - C++

I am following this guide http://open.gl/ to learn how to program using modern OpenGL. I am stuck on one thing. With the older OpenGL API, I used to create different functions for drawing different types of shapes, like drawRect(), drawCircle(), etc. All I needed to do was call glVertex3f() in different combinations between glBegin() and glEnd().
But in OpenGL 3.2+, even drawing a rectangle takes a lot of code. I understand that it gives you more control over rendering, but I am confused about what to do when rendering multiple things like rectangles and circles together. Do I need to write all that code multiple times, including the shaders, or can I reuse the code?
I have implemented this code http://open.gl/content/code/c2_triangle_elements.txt
This example draws a rectangle, but what if I wanted to draw a triangle along with the rectangle? How would I do that?

You can reuse most of the code. All the shaders can be loaded once and then reused by calling glUseProgram. As for the buffers, you can create them once and then reuse them as many times as you want.
You might want to create some classes like ShaderClass and ShapeClass.
A base shader class will usually have 2 or 3 members like program, vertexShader and fragmentShader (all being GL object IDs). When you extend it, also add the uniforms and attributes from the shaders so you can later use them to modify your drawing (to change the colour, for instance).
A ShapeClass will usually contain a vertex buffer ID (and an optional index buffer ID) and the number of vertices to draw.
So in the end, when you want to draw some shape, you just use a shader you already created and compiled, bind the vertex data from your shape's buffer, and issue the draw call.
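For example, here is a minimal sketch of that reuse, assuming an active OpenGL 3.2+ context, a loader such as GLEW, and an already linked shaderProgram whose position attribute is at location 0 (all other names are illustrative):

    #include <GL/glew.h>  // assumes GLEW (or similar) provides the GL function pointers

    // Creates one VAO/VBO pair for a 2D shape; position attribute at location 0.
    GLuint makeShape(const float* verts, GLsizei vertexCount) {
        GLuint vao, vbo;
        glGenVertexArrays(1, &vao);
        glBindVertexArray(vao);
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, vertexCount * 2 * sizeof(float), verts, GL_STATIC_DRAW);
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, 0);
        return vao;
    }

    // At startup: two shapes, built once.
    float rectVerts[] = { -0.5f,-0.5f,  0.5f,-0.5f,  0.5f, 0.5f,
                          -0.5f,-0.5f,  0.5f, 0.5f, -0.5f, 0.5f };
    float triVerts[]  = {  0.6f, 0.6f,  0.9f, 0.6f,  0.75f, 0.9f };
    GLuint rectVao = makeShape(rectVerts, 6);
    GLuint triVao  = makeShape(triVerts, 3);

    // Every frame: one shader program, two draw calls.
    glUseProgram(shaderProgram);   // compiled and linked elsewhere, once
    glBindVertexArray(rectVao);
    glDrawArrays(GL_TRIANGLES, 0, 6);
    glBindVertexArray(triVao);
    glDrawArrays(GL_TRIANGLES, 0, 3);

Note that the linked example draws the rectangle with glDrawElements and an index buffer instead; this sketch uses plain glDrawArrays to keep the reuse pattern visible.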

Related

Is it possible to reprocess a fragment shader before it is drawn to the screen?

Is there a way to make the output of one fragment shader pass through another fragment shader before it is drawn? As in the following example: consider that I want to draw a scene, but only inside a shape; I can check in the shader whether the fragment's TexCoords are inside the shape I want.
Pass 1: Bind post-processing shader
Pass 2: Draw the scene
Pass 3: Bind the default shader, or disable the post-processing shader
(Screenshots in the original: drawing without vs. with the post-processing shader.)
I'm aware of framebuffers, and they work, but a framebuffer pass re-renders the whole screen, and that could cost me performance later, especially since this post-processing shader will be turned on, off and reset several times during the rendering of a single frame.
OpenGL does not recognize the idea of chaining shader stages in the manner you suggest. During any particular rendering operation, there is exactly one fragment shader active. Period.
Of course, OpenGL also does not care where your shader strings come from. It doesn't care if there's a single file on a disk with that text in it or not. All it cares about is that you pass text corresponding to valid GLSL to glShaderSource.
So if you like, you can manufacture a single shader from multiple conceptual "shaders". This can be as simple as just concatenating a bunch of file strings together (which glShaderSource can do for you, since it takes multiple strings), or it can be a complex operation where you recognize certain variables as interface variables and carefully synthesize a main function from these disparate pieces.
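For instance, since glShaderSource takes multiple strings, stitching pieces together can be as simple as this sketch (the GLSL snippets are placeholders):

    // glShaderSource concatenates the strings in order before compiling,
    // so shared declarations can live in one "header" string.
    const char* header   = "#version 150\n"
                           "uniform float uTime;\n";
    const char* effectA  = "vec4 applyA(vec4 c) { return c * uTime; }\n";
    const char* mainBody = "out vec4 outColor;\n"
                           "void main() { outColor = applyA(vec4(1.0)); }\n";

    const char* pieces[] = { header, effectA, mainBody };
    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 3, pieces, NULL);  // lengths == NULL: strings are null-terminated
    glCompileShader(fs);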
How you go about doing that is ultimately up to you.
Alternatively, you can take an "ubershader" approach. That is, put all of the possible post-processing stuff in one shader, and use uniform variables to tell whether or not a particular post-processing step is currently active.
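A sketch of the ubershader idea, with an illustrative uGrayscale switch embedded as a C++ source string:

    // Ubershader sketch: the optional post-processing step is compiled in once
    // and switched per draw call with a uniform, instead of changing programs.
    const char* uberFragSrc =
        "#version 150\n"
        "uniform bool uGrayscale;\n"
        "in vec4 vColor;\n"
        "out vec4 outColor;\n"
        "void main() {\n"
        "    vec4 c = vColor;\n"
        "    if (uGrayscale)\n"
        "        c.rgb = vec3(dot(c.rgb, vec3(0.299, 0.587, 0.114)));\n"
        "    outColor = c;\n"
        "}\n";

    // Toggling the step costs one uniform update rather than a program switch:
    glUniform1i(glGetUniformLocation(program, "uGrayscale"), GL_TRUE);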
Write a shader that does both things: calculates the colour, and discards the fragment if it's outside a certain shape. Render the scene with that shader.
Perhaps if you want to avoid wasting time processing pixels outside the shape, you can set the "scissor rectangle" to the bounding box of your shape, so OpenGL won't even run the shader for pixels outside that box.
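A sketch of that, assuming the shape's bounding box is already known in window coordinates (boxX etc. and drawScene are illustrative):

    // Restrict rasterization to the shape's bounding box; fragments outside
    // the scissor rectangle are discarded before the fragment shader runs.
    glEnable(GL_SCISSOR_TEST);
    glScissor(boxX, boxY, boxWidth, boxHeight);  // window coordinates, origin at bottom-left
    drawScene();                                 // hypothetical scene-drawing routine
    glDisable(GL_SCISSOR_TEST);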

How to render multiple objects that can each have multiple shaders in OpenGL 3.3?

I'm trying to make a 3D renderer with OpenGL using C++. So far I have a Scene class that contains lists of Object and Material instances (I also have classes for those). I have written my code so that an object can have multiple shaders (each shader affects a group of vertices in the object), but now I'm trying to find a good way to send all that information to OpenGL.
I've seen people suggest taking everything that uses the same shader and rendering it at once, and doing the same for every shader. If I understood correctly, is that a good idea when the same shader appears in different objects? If I merged every vertex that uses shader A, for example, wouldn't it be a problem that the group contains vertices from separate objects when I draw them at once? Alternatively, if I split each object by its shaders, so that for rendering I take object A, split it into its shader groups, draw shader group 1 of object 1, then shader group 2 of object 2, and so on, wouldn't that be too many draw calls?
What strategy do you recommend to accomplish that ?
The first thing I recommend is that you stop thinking in terms of "objects" as far as the rendering process is concerned. When rendering, the only sensible grouping is drawing batches (of a certain primitive: points, lines, triangles) for which the same rendering steps (render pipeline) are executed. The modern rendering APIs released over the past months (Vulkan, DirectX 12 and Metal) make this explicit.
When rendering your scene, the recommended strategy is to iterate over all your objects, split them into render-pipeline groups and perform a single batched draw call for each primitive-by-pipeline group, as in the sketch below. The overall goal should be to minimize the total number of draw calls made.
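As a sketch of that strategy (hypothetical Object struct; grouping here only by shader program, but the same idea extends to full pipeline state):

    #include <map>
    #include <vector>

    // Hypothetical scene type: each object knows its program and VAO.
    struct Object { GLuint program; GLuint vao; GLsizei vertexCount; };

    void renderScene(const std::vector<Object>& objects) {
        // Group by pipeline so expensive state changes happen once per group.
        std::map<GLuint, std::vector<const Object*>> byProgram;
        for (const Object& o : objects)
            byProgram[o.program].push_back(&o);

        for (const auto& group : byProgram) {
            glUseProgram(group.first);  // one program switch per group
            for (const Object* o : group.second) {
                glBindVertexArray(o->vao);
                glDrawArrays(GL_TRIANGLES, 0, o->vertexCount);
            }
        }
    }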
If you are using OpenGL 3.3, you are using Vertex Array Objects (VAOs) and Vertex Buffer Objects (VBOs). You have an object, a table for example, which can have three (more or fewer) VBOs: one for vertex data, one for normal data and one for texture coordinate data. You enclose the table's VBOs inside one VAO. So every object has its own VAO, stored in GPU memory.
When you want to render your objects, or a part of them, you bind one of the shaders you are using and draw the VAOs you want rendered by that shader. It may be important that you render the right objects in the right order and use the right shaders (of course!) on each VAO.
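A minimal sketch of the table example above, assuming an active context and that attribute locations 0-2 match the shader (vertexCount and the data pointers positions, normals, texcoords are illustrative and filled elsewhere):

    // One VAO per object: the table's three VBOs, captured by a single VAO.
    GLuint tableVao;
    glGenVertexArrays(1, &tableVao);
    glBindVertexArray(tableVao);

    GLuint vbos[3];
    glGenBuffers(3, vbos);

    glBindBuffer(GL_ARRAY_BUFFER, vbos[0]);  // positions: 3 floats per vertex
    glBufferData(GL_ARRAY_BUFFER, vertexCount * 3 * sizeof(float), positions, GL_STATIC_DRAW);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);

    glBindBuffer(GL_ARRAY_BUFFER, vbos[1]);  // normals: 3 floats per vertex
    glBufferData(GL_ARRAY_BUFFER, vertexCount * 3 * sizeof(float), normals, GL_STATIC_DRAW);
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, 0);

    glBindBuffer(GL_ARRAY_BUFFER, vbos[2]);  // texture coordinates: 2 floats per vertex
    glBufferData(GL_ARRAY_BUFFER, vertexCount * 2 * sizeof(float), texcoords, GL_STATIC_DRAW);
    glEnableVertexAttribArray(2);
    glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 0, 0);

    glBindVertexArray(0);  // the VAO now remembers all of the above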

Design for good relation between shader program and VAO

I am learning OpenGL and trying to get a grasp of best practices. I am working on a simple demonstration project in C++ which aims to be a bit more generic and better structured (not everything just put in main()) than most of the tutorials I have seen on the web. I want to use the modern OpenGL approach, which means VAOs and shaders. My biggest concern is the relation between VAOs and shader programs. Maybe I am missing something here.
I am now thinking about the best design. Consider the following scenario:
there is a scene which contains multiple objects
each object has its individual size, position and rotation (i.e. transform matrix)
each object has a certain basic shape (e.g. box, ball), there can be multiple objects of the same shape
there can be multiple shader programs (e.g. one with plain interpolated RGBA colors, another with textures)
This leads me to the basic three components of my design:
ShaderProgram class - each instance contains a vertex shader and fragment shader (initialized from given strings)
Object class - has transform matrix and reference to a shape instance
Shape base class - and derived classes e.g. BoxShape, SphereShape; each derived class knows how to generate its mesh, turn it into a buffer and map it to vertex attributes, in other words it will initialize its own VAO; it also knows which glDraw...() function(s) to use to render itself
When a scene is being rendered, I will call glUseProgram(rgbaShaderProgram). Then I will go through all objects which can be rendered using this program and render them. Then I will switch to glUseProgram(textureShaderProgram) and go through all textured objects.
When rendering an individual object:
1) I will call glUniformMatrix4fv() to set the individual transformation matrix (of course including projection matrix etc.)
2) then I will ask the shape associated with the object to render itself
3) when the shape renders, it will bind its VAO, call its specific glDraw...() function and then unbind the VAO
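A sketch of what steps 1-3 might look like in code, assuming GLM for the matrix math and the illustrative names uMvp, rgbaObjects and BoxShape:

    #include <glm/glm.hpp>  // assumes GLM for matrices

    // Outer loop: one program bind, then per-object uniforms and draw.
    glUseProgram(rgbaShaderProgram);
    GLint mvpLoc = glGetUniformLocation(rgbaShaderProgram, "uMvp");
    for (Object* obj : rgbaObjects) {
        glm::mat4 mvp = projection * view * obj->transform();  // step 1
        glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, &mvp[0][0]);
        obj->shape()->render();                                // steps 2 and 3
    }

    // The shape's side of it (step 3):
    void BoxShape::render() const {
        glBindVertexArray(vao_);
        glDrawElements(GL_TRIANGLES, indexCount_, GL_UNSIGNED_INT, 0);
        glBindVertexArray(0);
    }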
In my design I wanted to decouple Shape from ShaderProgram, as in theory they should be interchangeable. But some dependency seems to remain. When generating vertices in a specific ...Shape class and setting up their buffers, I already need to know that, for example, I need to generate texture coordinates rather than RGBA components for each vertex. And when setting vertex attribute pointers with glVertexAttribPointer, I must already know that the shader program will use, for example, floats rather than integers (otherwise I would have to call glVertexAttribIPointer). I also need to know which attribute will be at which location in the shader program. In other words, I am mixing the responsibility for pure shape geometry with prior knowledge about how it will be rendered. And as a consequence, I cannot render a shape with a shader program that is not compatible with it.
So finally my question: how can I improve my design to achieve the goal (render the scene) while keeping versatility (interchangeability of shaders and shapes), forcing correct usage (not allowing wrong shapes to be mixed with incompatible shaders), getting the best performance possible (avoiding unnecessary program or context switches) and maintaining good design principles (one class, one responsibility)?
What I would do is prepare ShaderProgram templates, which I adapt to the VAO attributes.
After all, shader programs are text, initially. What you might do is write your main functions for vertex and fragment programs, and attach the "header" to them, depending on the bound VAO. It could be useful to use standardized names for the variables you use inside the shaders, such as InPos, InNor, InTan, InTex.
That way, you can scan for elements that are missing in the VAO but used inside the shader, and simply append them in the header as const values with some default setting.
I do this via a ShaderManager, with a RequestShader(template, VAO) kind of function, which adapts the template to the VAO and caches the compiled shader for later use.
If another VAO has the same attributes, and requires the same template, the precompiled cached version is returned to avoid the same adaptation process.
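A sketch of such a cache, with buildAdaptedProgram standing in (hypothetically) for the template-adaptation step described above:

    #include <map>
    #include <string>

    // Cache sketch: the key combines the template with the VAO's attribute
    // signature, so VAOs with identical layouts share one compiled program.
    class ShaderManager {
        std::map<std::string, GLuint> cache_;
        // Hypothetical helper: appends the adapted header, compiles and links.
        GLuint buildAdaptedProgram(const std::string& tmpl, const std::string& sig);
    public:
        GLuint RequestShader(const std::string& tmpl, const std::string& vaoSignature) {
            std::string key = tmpl + "|" + vaoSignature;
            auto it = cache_.find(key);
            if (it != cache_.end())
                return it->second;  // precompiled cached version
            GLuint program = buildAdaptedProgram(tmpl, vaoSignature);
            cache_[key] = program;
            return program;
        }
    };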

OpenGL big projects, VAOs and more

So I've been learning OpenGL 3.3 on https://open.gl/ and I got really confused about some stuff.
VAOs. By my understanding, they are used to store the glVertexAttribPointer calls.
VBOs. They store vertices. So if I am making something with multiple objects, do I need a VBO for every object?
Shader programs. Why do we need multiple ones, and what exactly do they do?
What exactly does this line do: glBindFragDataLocation(shaderProgram, 0, "outColor");
The most important thing is how does all of this fit into a big program? What exactly are VAOs used for? Most tutorials only cover drawing a cube or two with hard-coded vertices, so how would one go about managing scenes with a lot of objects? I've read this thread and got a bit of an understanding of how scene management happens, but I still can't figure out how to connect the OpenGL stuff to it all.
1. Yes. VAOs store vertex array bindings in general. When you find yourself making lots of calls that enable, disable and change GPU state, you can do all of that at some early point in the program and use a VAO to take a "snapshot" of what is bound and what isn't at that point in time. Later, during your actual draw calls, all you need to do is bind that VAO again to restore all the vertex states to what they were. Just as VBOs are faster than immediate mode because they send all vertices at once, VAOs work faster by changing many vertex states at once.
2. VBOs are just another way to send your position, colour, etc. vertex data to the GPU for rendering on screen. The idea, unlike immediate mode where you send your vertex data one attribute at a time, is to upload all your vertices to the GPU in advance and receive a buffer ID for them. At render time, you only point the GPU at that location (you bind the VBO ID to a target like GL_ARRAY_BUFFER and use glVertexAttribPointer to specify how the vertex data is laid out) and issue the draw. That saves lots of time by doing the work ahead of time, and so it's much faster.
As for whether one should have one VBO per object, or even one VBO for all the objects, that's up to the programmer and the structure of the objects they want to render. After all, VBOs themselves are just a bunch of data stored on the GPU, and you tell the computer how that data is arranged using the glVertexAttribPointer calls.
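For example, a single shared VBO can hold several objects back to back, with the first/count arguments of glDrawArrays selecting the object (a sketch; sharedVbo, allVerts and the vertex counts are illustrative):

    // One VBO holding two meshes back to back; the "first" argument of
    // glDrawArrays picks which mesh to draw.
    glBindBuffer(GL_ARRAY_BUFFER, sharedVbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(allVerts), allVerts, GL_STATIC_DRAW);

    // Later, with a matching VAO bound:
    glDrawArrays(GL_TRIANGLES, 0, 36);   // object A: first 36 vertices
    glDrawArrays(GL_TRIANGLES, 36, 24);  // object B: next 24 vertices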
3. Shaders are used to define a pipeline, a routine of what happens to the vertices, colours, normals, etc. after they've been sent to the GPU and before they're rendered as fragments or pixels on the screen. When you send vertices over to the GPU, they're often still 3D coordinates, but the screen is a 2D sheet of pixels. There is still the process of re-positioning these vertices according to the projection/model/view matrices (the job of the vertex shader), after which the 3D geometry is rasterized onto a 2D plane (a fixed-function step; a geometry shader, if present, runs before it and can emit new primitives). Then comes colouring the rasterized fragments (the fragment shader), which finally determines the pixels lit on your screen. In OpenGL versions 1.5 and below, you didn't have much control over those stages, as it was all fixed (hence the term fixed-function pipeline). Just think about what you could do in any of these shader stages and you will see there are a lot of awesome things you can do with them. For example, in the fragment shader, just before you output the fragment colour, subtract the colour from 1 to have objects rendered with that shader appear with inverted colours!
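That inversion example might look like this as a fragment shader (a sketch, embedded as a C++ string; vColor is an assumed input from the vertex shader):

    // Fragment shader sketch: invert the colour (1 - c) just before output.
    const char* invertFragSrc =
        "#version 330 core\n"
        "in vec4 vColor;\n"
        "out vec4 fragColor;\n"
        "void main() { fragColor = vec4(1.0 - vColor.rgb, vColor.a); }\n";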
As for how many shaders one needs, again, it's up to the programmer. You could merge all the functionality you need into one giant shader (an ubershader) and switch features on and off with boolean uniforms (often considered bad practice), or have every shader do one specific thing and bind the right one according to what you need.
What exactly does this line do:
glBindFragDataLocation(shaderProgram, 0, "outColor");
It means that whatever is stored in the out-declared variable "outColor" at the end of the fragment shader's execution will be written to colour number 0 as the final primary fragment colour.
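In code, the call has to happen before the program is linked, since fragment data locations only take effect at link time:

    // Route the fragment shader's "outColor" output to colour number 0
    // (the default framebuffer's colour buffer), then (re)link.
    glBindFragDataLocation(shaderProgram, 0, "outColor");
    glLinkProgram(shaderProgram);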
The most important thing is how does all of this fit into a big program? What exactly are VAOs used for? Most tutorials only cover drawing a cube or two with hard-coded vertices, so how would one go about managing scenes with a lot of objects? I've read this thread and got a bit of an understanding of how scene management happens, but I still can't figure out how to connect the OpenGL stuff to it all.
They all work together to draw your nicely coloured shapes on the screen: VBOs are the structures where the vertices of your scene are stored (all packed in a flat layout), glVertexAttribPointer calls tell the GPU how the data in the VBO is arranged, VAOs store all these attribute-pointer instructions ahead of time so you can apply them all at once by simply binding one during rendering in your main loop, and shaders give you more control over the process of drawing your scene on the screen.
All of this can sound overwhelming at first, but with practice you will get used to it.

OpenGL glBegin vs Vertex Buffer Arrays VBO

I want to make an OpenGL game, but I now realize that what I learned in school (drawing objects directly with glBegin/glEnd) is old-school and bad. Reading on the internet, I see that Vertex Buffer Objects and GLSL are the "proper" method for 3D programming.
1) Should I use OpenGL's glDrawArrays or glDrawElements?
2) For a game with many entities/grounds/objects, must the render function be many glDrawArrays():
foreach (entity in entities):
    glDrawArrays(entity->arrayFromMesh()) // where we get an array of the mesh's vertex points
or only one glDrawArrays() that encapsulates every entity:
glDrawArrays(map->arrayFromAllMeshes())
3) Does every graphics layer (such as DirectX) now standardize Vertex Buffer Objects?
1) Should I use OpenGL's glDrawArrays or glDrawElements?
This depends on what your source data looks like. glDrawArrays is a little less complex, as you just have to draw a stream of vertices, but glDrawElements is useful in a situation where you have many vertices which are shared by several triangles. If you're just drawing sprites, then glDrawArrays is probably the simplest, but glDrawElements is useful for huge complex meshes where you want to save buffer space.
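A sketch contrasting the two for a quad (quadVaoArrays and quadVaoIndexed are illustrative VAOs, assumed set up beforehand):

    // glDrawArrays: the buffer holds every vertex, duplicates included.
    glBindVertexArray(quadVaoArrays);  // its VBO holds 6 vertices (2 triangles)
    glDrawArrays(GL_TRIANGLES, 0, 6);

    // glDrawElements: 4 unique vertices, shared through an index buffer.
    GLuint indices[] = { 0, 1, 2,  2, 3, 0 };
    glBindVertexArray(quadVaoIndexed); // its VBO holds only the 4 corners
    GLuint ebo;
    glGenBuffers(1, &ebo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);  // binding is stored in the VAO
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);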
2) For a game with many entities/grounds/objects, must the render function be many glDrawArrays() or only one glDrawArrays() that encapsulates every entity:
Again this is somewhat optional, but note that you can't change program state in the middle of a draw call. So any two objects which need different state (different textures, different shaders, different blend settings) must be drawn with separate draw calls. However, it can help performance to group as many similar things into a single draw call as you can.
3) Does every graphics layer (such as DirectX) now standardize Vertex Buffer Objects?
DirectX does have a similar concept, yes.