I want to make an OpenGL game, but I now realize that what I've learned in school (drawing objects directly with glBegin/glEnd) is old-school and deprecated. Reading around the internet, I see that Vertex Buffer Objects and GLSL are the "proper" method for 3D programming.
1) Should I use OpenGL's glDrawArrays or glDrawElements?
2) For a game with many entities/grounds/objects, must the render function make many glDrawArrays() calls:
foreach(entity in entities):
    glDrawArrays(entity->arrayFromMesh()) // where we get an array of the mesh's vertex points
or only one glDrawArrays() that encapsulates every entity:
glDrawArrays(map->arrayFromAllMeshes())
3) Does every graphics layer (such as DirectX) now standardize Vertex Buffer Objects?
1) Should I use OpenGL's glDrawArrays or glDrawElements?
This depends on what your source data looks like. glDrawArrays is a little less complex, as you just have to draw a stream of vertices, but glDrawElements is useful in a situation where you have many vertices which are shared by several triangles. If you're just drawing sprites, then glDrawArrays is probably the simplest, but glDrawElements is useful for huge complex meshes where you want to save buffer space.
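For instance, here is a hedged sketch of the difference (it assumes the vertex and index data have already been uploaded to bound buffers and a shader is active):

    // One quad drawn as two triangles. With glDrawArrays the two shared
    // corners must be duplicated, so the stream holds 6 vertices:
    GLfloat streamed[] = {
        -0.5f,-0.5f,   0.5f,-0.5f,   0.5f, 0.5f,   // triangle 1
         0.5f, 0.5f,  -0.5f, 0.5f,  -0.5f,-0.5f,   // triangle 2
    };
    glDrawArrays(GL_TRIANGLES, 0, 6);

    // With glDrawElements the 4 unique vertices are stored once, and an
    // index buffer (bound to GL_ELEMENT_ARRAY_BUFFER) reuses them:
    GLfloat unique[]  = { -0.5f,-0.5f,  0.5f,-0.5f,  0.5f,0.5f,  -0.5f,0.5f };
    GLuint  indices[] = { 0, 1, 2,   2, 3, 0 };
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);

For a quad the savings are trivial, but on a large mesh where most vertices are shared by several triangles, the index buffer saves a lot of space.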
2) For a game with many entities/grounds/objects, must the render function be many glDrawArrays() or only one glDrawArrays() that encapsulates every entity:
Again, this is somewhat up to you, but note that you can't change program state in the middle of a glDrawArrays call. That means any two objects which need different state (different textures, different shaders, different blend settings) must use separate draw calls. However, it often helps performance to group as many similar things into a single draw call as you can.
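As a hedged sketch of that grouping idea (the Entity struct and its fields are hypothetical, and all meshes are assumed to live in one shared, already-bound VAO/VBO):

    #include <algorithm>
    #include <tuple>
    #include <vector>

    struct Entity {                 // hypothetical per-entity state
        GLuint  shader, texture;
        GLint   firstVertex;        // where this mesh starts in the shared VBO
        GLsizei vertexCount;
    };

    void drawEntities(std::vector<Entity>& entities) {
        // Sort so entities needing the same state end up adjacent.
        std::sort(entities.begin(), entities.end(),
                  [](const Entity& a, const Entity& b) {
                      return std::tie(a.shader, a.texture)
                           < std::tie(b.shader, b.texture);
                  });

        for (size_t i = 0; i < entities.size(); ) {
            const GLuint shader  = entities[i].shader;
            const GLuint texture = entities[i].texture;
            glUseProgram(shader);                  // state changes happen
            glBindTexture(GL_TEXTURE_2D, texture); // between groups only

            // Submit every consecutive entity sharing this state together.
            std::vector<GLint>   firsts;
            std::vector<GLsizei> counts;
            while (i < entities.size() && entities[i].shader == shader
                                       && entities[i].texture == texture) {
                firsts.push_back(entities[i].firstVertex);
                counts.push_back(entities[i].vertexCount);
                ++i;
            }
            glMultiDrawArrays(GL_TRIANGLES, firsts.data(), counts.data(),
                              (GLsizei)firsts.size());
        }
    }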
3) Does every graphics layer (such as DirectX) now standardize Vertex Buffer Objects?
DirectX does have a similar concept, yes.
Related
Suppose I want to render many different models, each with a different transformation matrix I want to be applied to their vertices. As far as I understand, the naive approach is to specify a matrix uniform in the vertex shader, the value of which is updated for each mesh during rendering.
It's obvious to me that this is a bad idea, due to the expense of many uniform updates and draw calls. So, what is the most efficient way to achieve this in modern OpenGL?
I've genuinely tried to find a straight, clear answer to this question. Most answers I find vaguely mention UBOs, or instance drawing (which afaik won't work unless you are drawing instances of the same mesh many times, which is not my goal).
With OpenGL 4.6 or with ARB_shader_draw_parameters, each draw in a multi-draw rendering command (functions of the form glMultiDraw*) is assigned a draw index from 0 to the number of draw calls specified by that function. This index is provided to the Vertex Shader via the gl_DrawID input value. You can then use this index to fetch a matrix from any number of constructs: UBOs, SSBOs, buffer textures, etc.
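A hedged sketch of that lookup (the SSBO layout and binding point are arbitrary choices, and firsts/counts/drawCount on the C++ side are placeholders describing where each mesh lives in the shared buffers):

    // Vertex shader (GLSL 4.60; with ARB_shader_draw_parameters use
    // gl_DrawIDARB instead). Each sub-draw of a glMultiDraw* command
    // indexes its own matrix.
    const char* vsSource = R"(
        #version 460 core
        layout(location = 0) in vec3 position;
        layout(std430, binding = 0) buffer ModelMatrices {
            mat4 model[];
        };
        uniform mat4 viewProj;
        void main() {
            gl_Position = viewProj * model[gl_DrawID] * vec4(position, 1.0);
        }
    )";

    // C++ side: the whole set of meshes goes out in one command.
    glMultiDrawArrays(GL_TRIANGLES, firsts, counts, drawCount);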
This works for multi-draw indirect rendering as well. So in theory, you can have a compute shader operation generate a bunch of rendering commands, then render your entire scene with a single draw call (assuming that all of your objects live in the same vertex buffers and can use the same shader and other state). Or at the very least, a large portion of the scene.
Furthermore, this index is considered dynamically uniform, so you can also use it (or values derived from it and other dynamically uniform values) to index into arrays of textures, fetch a texture from an array of bindless textures, or the like.
I'm trying to make a 3D renderer in C++ with OpenGL. So far I have a Scene class that contains lists of Object and Material instances (I have classes for those as well), and I've written my code so that an object can have multiple shaders (each shader affects a group of vertices in the object). Now I'm trying to find a good way to send all that information to OpenGL.
I've seen people suggest taking everything that uses the same shader and rendering it at once, and doing the same for every other shader. But is that a good idea when the same shader is shared by different objects? If I merge every vertex that uses shader A, for example, won't it be a problem that the group contains vertices from separate objects when I try to draw them at once? And if instead I take each object and split it by shader, so that for rendering I draw shader group 1 of object 1, then shader group 2 of object 2, and so on, won't that be too many draw calls?
What strategy do you recommend to accomplish that ?
The first thing I recommend is that you stop thinking in terms of "objects" as far as the rendering process is concerned. When rendering, the only sensible grouping is drawing batches (of a certain primitive: points, lines, triangles) for which the same rendering steps (render pipeline) are executed. The modern rendering APIs (Vulkan, DirectX 12 and Metal) make this explicit.
When rendering your scene, the recommended strategy is to iterate over all your objects, split them into render-pipeline groups, and perform a single batched drawing call for each primitive-by-pipeline group, as in the sketch below. The overall goal should be to minimize the total number of drawing calls made.
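A hedged sketch of that iteration (Object and Mesh here are hypothetical stand-ins for your own classes, and all meshes are assumed to share one vertex buffer and VAO):

    #include <map>
    #include <tuple>
    #include <vector>

    struct Mesh {                        // hypothetical
        GLuint  program;                 // the pipeline state it needs
        GLenum  primitive;               // GL_POINTS / GL_LINES / GL_TRIANGLES
        GLint   first;                   // location in the shared vertex buffer
        GLsizei count;
    };
    struct Object { std::vector<Mesh> meshes; };

    void drawScene(const std::vector<Object>& objects) {
        // Bucket every mesh by (pipeline, primitive), ignoring which
        // "object" it came from.
        using Key = std::tuple<GLuint, GLenum>;
        std::map<Key, std::vector<const Mesh*>> groups;
        for (const Object& obj : objects)
            for (const Mesh& m : obj.meshes)
                groups[{m.program, m.primitive}].push_back(&m);

        // One state switch per group; ideally also one batched draw.
        for (const auto& [key, meshes] : groups) {
            glUseProgram(std::get<0>(key));
            for (const Mesh* m : meshes)
                glDrawArrays(std::get<1>(key), m->first, m->count);
        }
    }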
If you are using OpenGL 3.3, you are using Vertex Array Objects (VAOs) and Vertex Buffer Objects (VBOs). Take an object, a table for example: it can have three (or more, or fewer) VBOs, one for vertex data, one for normal data and one for texture coordinate data. You enclose the table's VBOs inside one VAO, so every object has its own VAO stored in GPU memory.
When you want to render your objects, or a part of them, you bind one of your shaders and draw the VAOs you want rendered by that shader. It may be important that you render the right objects in the right order and use the right shaders (of course!) on each VAO. A minimal setup sketch follows.
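Here is that setup as a hedged sketch, assuming positions, normals and texCoords arrays exist and the shader uses attribute locations 0, 1 and 2:

    // Hypothetical table setup: one VAO capturing three VBOs, one per attribute.
    GLuint vao, vbo[3];
    glGenVertexArrays(1, &vao);
    glGenBuffers(3, vbo);
    glBindVertexArray(vao);

    glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);                      // positions
    glBufferData(GL_ARRAY_BUFFER, sizeof(positions), positions, GL_STATIC_DRAW);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
    glEnableVertexAttribArray(0);

    glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);                      // normals
    glBufferData(GL_ARRAY_BUFFER, sizeof(normals), normals, GL_STATIC_DRAW);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
    glEnableVertexAttribArray(1);

    glBindBuffer(GL_ARRAY_BUFFER, vbo[2]);                      // texture coords
    glBufferData(GL_ARRAY_BUFFER, sizeof(texCoords), texCoords, GL_STATIC_DRAW);
    glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 0, nullptr);
    glEnableVertexAttribArray(2);

    glBindVertexArray(0);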
I'm trying to use modern OpenGL and shaders instead of the immediate mode I have been using so far. I recently learned about VBOs and VAOs, and I'm still trying to get my head round them, but I know that a VBO takes an array of floats that are vertices, which it then passes to the GPU, etc.
What is the best way to draw multiple objects (which are all identical) but in different positions, using VBOs? Will I have to draw one, then modify the array passed in beforehand, and then draw it again, and modify and draw and modify and so on for all blocks on the screen, every frame? Or is there a better way?
I'm trying to achieve this: http://imgur.com/cBgJ0sK
Any help is appreciated - I don't want to learn bad (deprecated, old) immediate mode habits, when I could be learning a more modern way!
You should not modify the vertices in your program; that should be done in the shaders. For this, you will create a matrix that represents the transformation and use that matrix in the vertex shader.
The main idea is:
You create a VAO holding the information of your VBO (vertices, normals, texture coordinates, tangent information, etc.)
Then, for every different object, you generate a model matrix that holds the information of the position, orientation and scale (and other homogeneous transformations) and send that to your shader to make the transformations.
The idea is that you bind your VAO just once and then draw all the different objects, sending only the information that changes (the model matrix, maybe textures) for each one.
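A minimal sketch of that loop, assuming GLM for the math and a vertex-shader uniform named "model" (the names objects, vao, program and vertexCount are placeholders):

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>
    #include <glm/gtc/type_ptr.hpp>

    // One VAO bind, then one small uniform update per object.
    glUseProgram(program);
    glBindVertexArray(vao);
    GLint modelLoc = glGetUniformLocation(program, "model"); // assumed name

    for (const auto& obj : objects) {   // objects carry position/angle/scale
        glm::mat4 model = glm::translate(glm::mat4(1.0f), obj.position)
                        * glm::rotate(glm::mat4(1.0f), obj.angle, glm::vec3(0, 1, 0))
                        * glm::scale(glm::mat4(1.0f), obj.scale);
        glUniformMatrix4fv(modelLoc, 1, GL_FALSE, glm::value_ptr(model));
        glDrawArrays(GL_TRIANGLES, 0, vertexCount);
    }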
To learn about how to use the model matrix, read tutorials like this:
http://ogldev.atspace.co.uk/www/tutorial06/tutorial06.html
There are even better ways to do this, but you can start from here.
Another technique that would be good for your case is instancing:
http://ogldev.atspace.co.uk/www/tutorial33/tutorial33.html
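The gist of instancing, as a hedged sketch (instanceVBO is an assumed buffer holding one 4x4 model matrix per instance, and the shader is assumed to read the matrix from attribute locations 3 to 6):

    // A mat4 attribute occupies four consecutive vec4 slots; each one is
    // advanced once per instance rather than once per vertex.
    glBindBuffer(GL_ARRAY_BUFFER, instanceVBO);
    for (int i = 0; i < 4; ++i) {
        glVertexAttribPointer(3 + i, 4, GL_FLOAT, GL_FALSE,
                              16 * sizeof(float),
                              (void*)(4 * sizeof(float) * i));
        glEnableVertexAttribArray(3 + i);
        glVertexAttribDivisor(3 + i, 1);   // step once per instance
    }
    // One call draws every copy of the object at its own position.
    glDrawArraysInstanced(GL_TRIANGLES, 0, vertexCount, instanceCount);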
Later, you can move on to indirect drawing for even better performance. Later...
So I've been learning OpenGL 3.3 on https://open.gl/ and I got really confused about some stuff.
VAO-s. By my understanding they are used to store the glVertexAttribPointer calls.
VBO-s. They store vertices. So if I am making something with multiple objects, do I need a VBO for every object?
Shader Programs - Why do we need multiple ones, and what exactly do they do?
What exactly does this line do : glBindFragDataLocation(shaderProgram, 0, "outColor");
The most important thing: how does all of this fit into a big program? What exactly are VAOs used for? Most tutorials just cover drawing a cube or two with hard-coded vertices, so how would one go about managing scenes with a lot of objects? I've read this thread and got a bit of an understanding of how scene management happens, but I still can't figure out how to connect the OpenGL stuff to it all.
1 - Yes. VAOs store vertex array bindings in general. When you find yourself making lots of calls that enable, disable and change GPU state, you can do all of that at some early point in the program and then use a VAO to take a "snapshot" of what is bound and what isn't at that point in time. Later, during your actual draw calls, all you need to do is bind that VAO again to restore all the vertex states to what they were. Just as VBOs are faster than immediate mode because they send all vertices at once, VAOs work faster by changing many vertex states at once.
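A hedged sketch of that record/replay split (vao, vbo and vertexCount are placeholders, and the buffer is assumed to be filled elsewhere):

    // Setup, once: record the vertex-input state into the VAO.
    glBindVertexArray(vao);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
    glEnableVertexAttribArray(0);
    glBindVertexArray(0);

    // Draw loop, every frame: one bind replays the whole snapshot.
    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);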
2 - VBOs are just another way to send your position, color, etc. data to the GPU to render on screen. The idea is, unlike with immediate mode where you send your vertex data one element at a time with the glVertex*/glColor* family of calls, to upload all your vertices to the GPU in advance and refer to them by an ID (the buffer's "name"). At rendering time, you only point the GPU at that location (you bind the VBO id to something like GL_ARRAY_BUFFER, and use glVertexAttribPointer to specify the details of how you stored the vertex data) and issue your order to render. That obviously saves lots of per-frame traffic by doing the work ahead of time, and so it's much faster.
As for whether one should have one VBO per object or even one VBO for all the objects is up to the programmer and the structure of the objects they want to render. After all, VBOs themselves are just a bunch of data you stored in the GPU, and you tell the computer how they're arranged using the glVertexAttribPointer calls.
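In code, the split looks roughly like this (a sketch; vertices and vertexCount are assumed to be defined, and a VAO is assumed bound):

    // At load time: upload the whole vertex array once.
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

    // Every frame: only the small buffer "name" is re-bound, not the data.
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
    glEnableVertexAttribArray(0);
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);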
3 - Shaders are used to define a pipeline (a routine) of what happens to the vertices, colors, normals, etc. after they've been sent to the GPU, until they're rendered as fragments (pixels) on the screen. When you send vertices over to the GPU, they're still 3D coordinates, but the screen is a 2D sheet of pixels. The vertices first get re-positioned according to the projection/view/model matrices (the job of the vertex shader); an optional geometry shader can then create or modify whole primitives; the fixed-function rasterizer then "flattens" the 3D geometry into 2D fragments; and the fragment shader colors those fragments before they light up the pixels on your screen. Before OpenGL 2.0 you didn't have much control over those stages, as it was all fixed (hence the term "fixed-function pipeline"). Just think about what you could do in any of these shader stages and you will see that there are a lot of awesome things you can do with them. For example, in the fragment shader, just before you output the fragment color, negate it and add 1 (that is, compute 1 - c per channel) to have the colors of objects rendered with that shader inverted!
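That inversion example as a fragment shader (a sketch; the in/out variable names are arbitrary):

    const char* fsSource = R"(
        #version 330 core
        in vec4 vertexColor;   // assumed to arrive from the vertex shader
        out vec4 outColor;
        void main() {
            // 1.0 - c inverts each channel; alpha is left untouched.
            outColor = vec4(vec3(1.0) - vertexColor.rgb, vertexColor.a);
        }
    )";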
As for how many shaders one needs, again, it's up to the programmer. They can merge all the functionality they need into one giant shader (an "uber shader") and switch features on and off with boolean uniforms (very often considered bad practice), or have every shader do one specific thing and bind the right one according to what they need.
What exactly does this line do :
glBindFragDataLocation(shaderProgram, 0, "outColor");
It means that whatever is stored in the out variable "outColor" at the end of the fragment shader's execution will be written to color attachment 0 of the framebuffer, i.e. it becomes the fragment's final primary color.
The most important thing: how does all of this fit into a big program? What exactly are VAOs used for? Most tutorials just cover drawing a cube or two with hard-coded vertices, so how would one go about managing scenes with a lot of objects? I've read this thread and got a bit of an understanding of how scene management happens, but I still can't figure out how to connect the OpenGL stuff to it all.
They all work together to draw your nice colored shapes on the screen. VBOs are the structures where the vertices of your scene are stored (all aligned in an ugly flat fashion), glVertexAttribPointer calls tell the GPU how the data in the VBO is arranged, VAOs store all those glVertexAttribPointer instructions ahead of time so they can all be applied at once by simply binding the VAO during rendering in your main loop, and shaders give you more control during the process of drawing your scene on the screen.
All of this can sound overwhelming at first, but with practice you will get used to it.
I'm trying to write a game that deals with many circles (well, Triangle Fans, but you get the idea). Each circle will have an x position, a y position, and a mass property. Every circle's mass property will be different. Also, I want to color some groups of circles different, while keeping a transparent circle center, and fading to opaque along the perimeter of the circles.
I was told to use VBOs, and have been Googling all day. I would like a full example on how I would draw these circles and an explanation as to how VBOs work, while keeping that explanation simple.
I have not implemented VBOs myself yet, but from what I understand, they work similarly to texture objects. In fact, when I am reminding myself and explaining to others what VBOs are, I like to incorrectly call texture objects 'texture buffer objects', to reinforce the conceptual similarity.
(Not to be confused with buffer textures from the NVIDIA-specified extension GL_EXT_texture_buffer_object.)
So let's think: what are texture objects? They are objects you generate using glGenTextures(). glGenBuffersARB() does a similar thing, and the same analogy holds between glBindTexture() and glBindBufferARB().
(As of OpenGL 1.5, functions glGenBuffers() and glBindBuffer() have entered core OpenGL, so you can use them in place of the extension equivalents.)
But what exactly are these 'texture objects', and what do they do? Well, consider that you could, in principle, call glTexImage2D() every frame to re-upload a texture. Texture objects exist only to reduce traffic between the GPU and main memory: instead of sending the entire pixel array each time, you send just the "OpenGL name" (that is, an integer identifier) of a texture object which you know to be static.
VBOs serve a similar purpose. Instead of sending the vertex array over and over, you upload the array once using glBufferData() and then send just the "OpenGL name" of the object. They are great for static objects, and not so great for dynamic ones. In fact, many generic 3D engines such as Ogre3D provide a way to specify whether a mesh is dynamic or static, quite probably to let them decide between VBOs and plain vertex arrays.
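The analogy in code, as a hedged sketch (pixels, w, h and verts are assumed to be defined elsewhere):

    // Texture object: generate a name, bind it, upload once.
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    // Buffer object: exactly the same pattern.
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

    // Per frame, only the small integer names cross to the driver:
    glBindTexture(GL_TEXTURE_2D, tex);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);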
For your purposes, VBOs are not the right answer. You want numerous simple objects that are continuously morphing and changing; by simple, I mean those with fewer than 200 vertices. Unless you intend to write a very smart and complex vertex shader, VBOs are not for you. You want to use vertex arrays, which you can easily manipulate from the CPU and update each frame without making special calls to the graphics card to re-upload the entire VBO (which may turn out slower than just sending vertex arrays).
Here's a quite good, let's call it "man page", from NVIDIA about the VBO API. Read it for further info!
good vbo tutorial
What you're doing looks like particles, you might want to google "particle rendering" just in case.