glMultiDrawElements with GL_TRIANGLES - usage [closed] - opengl

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 2 years ago.
In my OpenGL projects I have always used glDrawElements or glDrawElementsBaseVertex with GL_TRIANGLES. When rendering a complex object I can, for example, sort meshes by material index, update textures and uniforms, and call glDrawElements for each group.
I have started exploring other draw commands, and I wonder when glMultiDrawElements is useful. I have found the following example. There glMultiDrawElements is used with GL_TRIANGLE_STRIP and is in effect an equivalent of the primitive restart functionality. That is clear.
When might glMultiDrawElements/glMultiDrawArrays with GL_TRIANGLES be useful? Could you please provide an example?

You would use them when you have multiple, discontiguous ranges of primitives to draw (without any state changes between them).
The availability of gl_DrawID in vertex shaders makes it possible to issue multiple draws in such a way that the VS can index into some memory to find the specific per-model data for that rendering call. So in that event, you could draw dozens of entirely separate models, using different transforms stored in a buffer, all with the same draw call.
So long as you can put their per-model data in a buffer object indexable by the draw identifier (and so long as all of these models use the same vertex format and buffer storage), you could render much of the entire scene in a single call.
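As a CPU-side sketch (MeshRange and buildArgs are illustrative names, not from the question): once several meshes share one index buffer, glMultiDrawElements with GL_TRIANGLES only needs a count and a byte offset per mesh, and gl_DrawID tells the shader which of those draws it is handling.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical description of where each mesh lives in a shared index buffer.
struct MeshRange { std::size_t firstIndex; std::size_t indexCount; };

struct MultiDrawArgs {
    std::vector<int>         counts;   // GLsizei count per draw
    std::vector<const void*> offsets;  // byte offsets into GL_ELEMENT_ARRAY_BUFFER
};

// Build the two parallel arrays glMultiDrawElements expects.
MultiDrawArgs buildArgs(const std::vector<MeshRange>& meshes) {
    MultiDrawArgs args;
    for (const MeshRange& m : meshes) {
        args.counts.push_back(static_cast<int>(m.indexCount));
        // With an index buffer bound, the "indices" pointers are byte offsets.
        args.offsets.push_back(
            reinterpret_cast<const void*>(m.firstIndex * sizeof(unsigned)));
    }
    return args;
}

// With a context, VAO, and buffers set up, the whole batch is one call:
//   glMultiDrawElements(GL_TRIANGLES, args.counts.data(), GL_UNSIGNED_INT,
//                       args.offsets.data(), (GLsizei)args.counts.size());
```

Each element of those arrays produces one GL_TRIANGLES draw, and the vertex shader sees gl_DrawID = 0, 1, 2, … accordingly, which is what lets it fetch per-model transforms from a buffer.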

Related

Why does OpenGL assign the same buffer to two different VAOs? [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Closed 1 year ago.
I am using glBufferSubData on two different VAOs with two different sets of objects (I am using instanced rendering in one of them). The problem is that when I analyze the render call with RenderDoc, I see that they are sharing the same internal buffer (which I don't think should happen). I am definitely binding different VAOs when calling glBufferSubData and updating their corresponding attributes, but I don't understand why OpenGL would make the two sets of objects share the same buffer. Does anyone know why this is happening and whether there is a solution?
In case it is useful, one of the buffers is quite big (1527864 bytes) and the other one is not small either.
glBufferSubData doesn't care about VAOs, it affects buffers (a.k.a. VBOs).
If you want to put data in two different buffers, then you need to bind the first buffer with glBindBuffer(GL_ARRAY_BUFFER), call glBufferSubData, then bind the other buffer, and call glBufferSubData. (Same with glBufferData)
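For instance (handle and data names here are made up; this assumes a current GL context and that both buffers were already created with glBufferData):

```cpp
#include <glad/glad.h>  // or whichever GL loader/header the project uses

// Upload new data into two distinct buffers. Which VAO is bound is
// irrelevant here: glBufferSubData writes into the buffer currently
// bound to the given target, nothing else.
void updateTwoBuffers(GLuint instanceVbo, const void* instanceData, GLsizeiptr instanceBytes,
                      GLuint meshVbo,     const void* meshData,     GLsizeiptr meshBytes) {
    glBindBuffer(GL_ARRAY_BUFFER, instanceVbo);
    glBufferSubData(GL_ARRAY_BUFFER, 0, instanceBytes, instanceData);

    glBindBuffer(GL_ARRAY_BUFFER, meshVbo);
    glBufferSubData(GL_ARRAY_BUFFER, 0, meshBytes, meshData);
}
```

If both uploads end up in one buffer, the likely cause is that the same buffer object was bound (or the same glGenBuffers result reused) for both, not anything VAO-related.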

Calculate indices from a vector of vec3 objects by finding duplicate vertices [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 5 years ago.
I am trying to build an .obj file importer that renders said model to the screen.
At present I have imported the vertices and they are stored in a
std::vector<vec3> vertices;
My renderer class is quite large, so I'll link to GitHub instead of posting it here.
https://github.com/rob-DEV/OpenGL-Model-Viewer/blob/master/OpenGL-Model-Viewer/src/graphics/renderer/renderer.cpp
At line 41 in renderer.cpp I submit these vertices to the renderer. Originally I just drew raw triangles, but I would like to take advantage of indexed drawing with glDrawElements.
My question is: how (if possible) can I calculate these indices from a list of vertices? I have tried to find duplicates when the model is loaded, but I don't know how to map them.
Using mapped buffers (by glMapBuffer) does not work in your case.
For example, to know whether a vertex has already been stored (a duplicate) you would have to read back all of the currently mapped vertices, which is slow.
If you want to use a sorted array (so that a fast binary search can be used), then many portions of that mapped buffer must be rewritten during the sorting/storing process. That is slow too.
A better option is to build your sorted vertex array yourself, in normal RAM. Be advised of search issues due to numerical precision when comparing two floats.
In a second pass, each index is just the position of the corresponding vertex in that array. The same advice about numerical issues applies.
Data can be sent to the GPU by two simple glBufferSubData, one for vertices (and their colors and normals and etc) and another one for the indices.
If the object moves (and you prefer to update coordinates instead of use a translation matrix) then it's right to use a mapped buffer with new coordinates (as you currently do). Indices don't change, so they don't need to be updated.
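A sketch of that CPU-side pass, assuming a plain triangle soup (three vertices per triangle, duplicates included); quantizing each coordinate by an epsilon is one way around the float-comparison issue mentioned above. Vec3 and the names are illustrative stand-ins for the question's types:

```cpp
#include <cmath>
#include <map>
#include <tuple>
#include <vector>

struct Vec3 { float x, y, z; };  // stand-in for your vec3

// Quantize each coordinate so vertices within `eps` of each other
// collapse to the same key -- this sidesteps exact float comparison.
static std::tuple<long, long, long> key(const Vec3& v, float eps) {
    return { std::lround(v.x / eps), std::lround(v.y / eps), std::lround(v.z / eps) };
}

// Build a deduplicated vertex array plus an index list from the soup.
void buildIndices(const std::vector<Vec3>& soup,
                  std::vector<Vec3>& outVertices,
                  std::vector<unsigned>& outIndices,
                  float eps = 1e-5f) {
    std::map<std::tuple<long, long, long>, unsigned> seen;
    for (const Vec3& v : soup) {
        auto k = key(v, eps);
        auto it = seen.find(k);
        if (it == seen.end()) {
            unsigned idx = static_cast<unsigned>(outVertices.size());
            seen.emplace(k, idx);
            outVertices.push_back(v);
            outIndices.push_back(idx);
        } else {
            outIndices.push_back(it->second);  // duplicate: reuse its index
        }
    }
}
```

The two outputs map directly onto the two glBufferSubData uploads described above: outVertices into the vertex buffer, outIndices into the element buffer.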

Should objects draw themselves? (Mixture of geometry and textures) [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
I am aware that similar questions have been asked many times before. However, this is a different case.
I am writing a game in C++ using SDL2. Currently, all objects draw themselves. They do this because they all draw slightly differently.
For instance, a Button contains a rectangle, drawn with SDL_RenderFillRect();
Buttons also contain text, which are drawn using SDL_RenderCopy(), which takes a texture generated by SDL_TTF as a parameter.
Additionally, a MapView widget (basically a grid that can load a tilemap) draws the grid out using a 'for' loop containing horizontal and vertical SDL_RenderDrawLine() calls.
Finally, the tiles themselves are stored as textures, drawn using SDL_RenderCopy().
I understand that it is generally preferable to NOT have objects draw themselves. However, because there is so much variation in how the objects are drawn, I'm not sure of another way!
I thought it might be possible to have a GetTexture() function for each object, and the ones using textures could simply 'return texture', while the geometric objects could generate a texture. This gets complicated with my MapView object, because the grid is constantly updated when the user navigates around the game world (an offset value is changed and the grid is redrawn when moved).
Like so many questions of this type the answer is: it depends on your program.
If you are only ever going to draw it the same way using SDL, then there's no reason not to. Another alternative might be to have a specific rendering class for each object, but that's doubling the effort. Having all your rendering code in a single class or function works fine too, but it gets big and complicated fast.
It's a judgement call based on the complexity of your code and what you want to do with it in the future, and my advice is to choose the simplest solution. As long as you've considered the potential downsides, you can make an educated decision.
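For what it's worth, "objects draw themselves" doesn't have to mean objects own the renderer. A small sketch of that shape (Renderer here is a stand-in that just records which SDL call each object would issue; a real one would wrap SDL_Renderer):

```cpp
#include <string>
#include <vector>

// Stand-in for the real thing; the log records which SDL call would run.
struct Renderer { std::vector<std::string> log; };

// Each object knows its own drawing calls but borrows the renderer.
struct Drawable {
    virtual ~Drawable() = default;
    virtual void draw(Renderer& r) const = 0;
};

struct Button : Drawable {
    void draw(Renderer& r) const override {
        r.log.push_back("SDL_RenderFillRect");  // rectangle background
        r.log.push_back("SDL_RenderCopy");      // SDL_TTF text texture
    }
};

struct MapView : Drawable {
    int rows = 2, cols = 2;
    void draw(Renderer& r) const override {
        for (int i = 0; i <= rows; ++i) r.log.push_back("SDL_RenderDrawLine"); // horizontal grid
        for (int j = 0; j <= cols; ++j) r.log.push_back("SDL_RenderDrawLine"); // vertical grid
        r.log.push_back("SDL_RenderCopy");                                     // tile textures
    }
};
```

The scene then becomes a list of Drawable pointers that one loop walks, which preserves the per-object variation without a single god rendering function.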

Deferred shading, store position or construct it from depth [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
I'm in the middle of implementing deferred shading in an engine I'm working on, and now I have to decide whether to use a full RGB32F texture to store positions, or to reconstruct them from the depth buffer. So it's basically an RGB32F texel fetch vs. a matrix-vector multiplication in the fragment shader, and also a trade-off between memory and extra ALU operations.
Please direct me to useful resources and tell me your own experience with the subject.
In my opinion it is preferable to recalculate the position from depth. This is what I do in my deferred engine. The recalculation is fast enough that it doesn't even show up when I profile the render loop. And that (virtually no performance impact) compared to ~24 MB of extra video memory usage (for a 1920x1080 RGB32F texture) made it an easy choice for me.
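For illustration, here is that round trip written out on the CPU for a gluPerspective-style projection (the constants are made up; in a real engine the inverse step runs in the lighting pass's fragment shader, usually as one inverse-projection matrix multiply followed by a perspective divide):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

constexpr float kNear = 0.1f, kFar = 100.0f;
constexpr float kFovY = 1.0f;            // vertical FOV in radians (made up)
constexpr float kAspect = 16.0f / 9.0f;

// Forward path: eye-space point -> NDC xy plus the [0,1] depth the GPU stores.
void project(Vec3 eye, float& ndcX, float& ndcY, float& depth01) {
    float t = std::tan(kFovY * 0.5f);
    float d = -eye.z;                    // camera looks down -z, so d > 0
    ndcX = eye.x / (kAspect * t * d);
    ndcY = eye.y / (t * d);
    float ndcZ = ((kFar + kNear) * d - 2.0f * kFar * kNear) / ((kFar - kNear) * d);
    depth01 = ndcZ * 0.5f + 0.5f;        // default glDepthRange(0, 1)
}

// Inverse path: what the deferred lighting pass does per fragment,
// recovering the eye-space position from screen position + depth.
Vec3 reconstruct(float ndcX, float ndcY, float depth01) {
    float t = std::tan(kFovY * 0.5f);
    float ndcZ = depth01 * 2.0f - 1.0f;
    float d = 2.0f * kFar * kNear / (kFar + kNear - ndcZ * (kFar - kNear));
    return { ndcX * kAspect * t * d, ndcY * t * d, -d };
}
```

The inverse is exact up to floating-point error, which is why the position G-buffer attachment can be dropped entirely.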

opengl vbo advice [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
I have a model designed in Blender that I am going to render in my game engine, built using OpenGL, after exporting it to COLLADA. The model in Blender is divided into groups. These groups contain vertices, normals and texture coordinates, each with their own indices. Is it a good idea to render each group as a separate VBO, or should I render the whole model as a single VBO?
Rules of thumb are:
- always do as few OpenGL state changes as possible;
- always talk to OpenGL as little as possible.
The main reason is that talking to the GPU is costly. The GPU is much happier if you give it something that will take a long time to do and then don't bother it again while it's working. So a single VBO is likely the better solution, unless it would lead to a substantial increase in the amount of storage you need and hence work against caching elsewhere.
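A sketch of the single-buffer layout (the names are illustrative): pack every group into one shared vertex buffer and one shared index buffer, record per-group offsets at load time, and then drawing the whole model rebinds nothing between groups:

```cpp
#include <cstddef>
#include <vector>

// How many vertices/indices each Blender group contributes.
struct GroupSize { std::size_t vertexCount, indexCount; };

// Where each group ends up inside the shared buffers.
struct GroupDraw {
    int         baseVertex;      // added to every index of this group
    std::size_t firstIndexByte;  // byte offset into the shared index buffer
    int         indexCount;
};

// Lay the groups out back-to-back and remember their offsets.
std::vector<GroupDraw> packGroups(const std::vector<GroupSize>& groups) {
    std::vector<GroupDraw> draws;
    std::size_t vtx = 0, idx = 0;
    for (const GroupSize& g : groups) {
        draws.push_back({ static_cast<int>(vtx),
                          idx * sizeof(unsigned),
                          static_cast<int>(g.indexCount) });
        vtx += g.vertexCount;
        idx += g.indexCount;
    }
    return draws;
}

// With the two shared buffers bound once, each group is then drawn
// without rebinding anything:
//   for (const GroupDraw& d : draws)
//       glDrawElementsBaseVertex(GL_TRIANGLES, d.indexCount, GL_UNSIGNED_INT,
//                                (void*)d.firstIndexByte, d.baseVertex);
```

Per-group material changes (texture/uniform updates) can still happen between those calls; the point is that the buffer bindings, the expensive part, are set up only once.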