Optimizing rendering through packing vertex buffers [closed] - opengl

John Carmack recently tweeted:
"Another outdated habit is making separate buffer objects for vertexes
and indexes."
I am afraid I don't fully understand what he means. Is he implying that packing all 3D data including vertex, index, and joint data into a single vertex buffer is optimal compared to separate buffers for each? And, if so, would such a technique apply only to OpenGL or could a Vulkan renderer benefit as well?

I think he means there's no particular need to put them in different buffer objects. You probably don't want to interleave them at fine-granularity, but putting e.g. all the indices for a mesh at the beginning of a buffer and then all the vertex data for the mesh following it is not going to be any worse than using separate buffer objects. Use offsets to point the binding points at the correct location in the buffer.
Whether it's better to put them in one buffer I don't know; if it is, the benefits are probably ancillary: fewer, larger memory allocations tend to be a little more efficient, you (or the driver) can do one large copy instead of two smaller ones when copies are necessary, and so on.
Edit: I'd expect this all to apply to both GL and Vulkan.

John Carmack has replied with an answer regarding his original tweet:
"The performance should be the same, just less management. I wouldn't bother changing old code."
...
"It isn't a problem, it just isn't necessary on modern hardware. No big efficiency to packing them, just a modest management overhead."
So perhaps it isn't an outdated habit at all, especially since it goes against the intended use case of most APIs and, as noted by Nico, can break compatibility in some cases (WebGL, for instance, does not allow one buffer to be bound to both the vertex and index targets).

Why does OpenGL assign the same buffer to two different VAOs? [closed]

I am using glBufferSubData on two different VAOs with two different sets of objects (I am using instanced rendering in one of them). The problem is that when I analyze the render call with RenderDoc, I see that they are sharing the same internal buffer (which I don't think should happen). I am certainly binding different VAOs when calling glBufferSubData and updating their corresponding attributes, but I don't understand why OpenGL would make the two sets of objects share the same buffer. Does anyone know why this is happening and whether there is a solution?
In case it is useful, one of the buffers is quite big (1527864 bytes) and the other one is not small either.
glBufferSubData doesn't care about VAOs, it affects buffers (a.k.a. VBOs).
If you want to put data in two different buffers, then you need to bind the first buffer with glBindBuffer(GL_ARRAY_BUFFER), call glBufferSubData, then bind the other buffer, and call glBufferSubData. (Same with glBufferData)

glMultiDrawElements with GL_TRIANGLES - usage [closed]

In my OpenGL projects I have always used glDrawElements or glDrawElementsBaseVertex with GL_TRIANGLES. When rendering a complex object I can, for example, sort meshes by material index, update textures and uniforms, and call glDrawElements for each group.
I have started exploring other draw commands and wonder when glMultiDrawElements is useful. I have found the following example, where glMultiDrawElements is used with GL_TRIANGLE_STRIP and is in effect an equivalent of the primitive-restart functionality. That is clear.
When might glMultiDrawElements/glMultiDrawArrays with GL_TRIANGLES be useful? Could you please provide an example?
You would use them when you have multiple, discontiguous ranges of primitives to draw (without any state changes between them).
The availability of gl_DrawID in vertex shaders makes it possible to issue multiple draws in such a way that the VS can index into some memory to find the specific per-model data for that rendering call. So in that event, you could draw dozens of entirely separate models, using different transforms stored in a buffer, all with the same draw call.
So long as you can put their per-model data in a buffer object indexable by the draw identifier (and so long as all of these models use the same vertex format and buffer storage), you could render much of the entire scene in a single call.

Protocol Buffers vs Flat Buffers [closed]

So, I'm currently working on a project where Protocol Buffers is used extensively, mainly as a way to store complex objects in a key-value database.
Would a migration to Flat Buffers provide a considerable benefit in terms of performance?
More in general, is there ever a good reason to use Protocol Buffers instead of Flat Buffers?
Protocol buffers are optimized for space consumption on the wire, so for archival and storage, they are very efficient. However, the complex encoding is expensive to parse, and so they are computationally expensive, and the C++ API makes heavy use of dynamic allocations. Flat buffers, on the other hand, are optimized for efficient parsing and in-memory representation (e.g. offering zero-copy views of the data in some cases).
It depends on your use case which of those aspects is more important to you.
Quoting from the flatbuffer page:
Why not use Protocol Buffers, or .. ?
Protocol Buffers is indeed relatively similar to FlatBuffers, with the
primary difference being that FlatBuffers does not need a parsing/
unpacking step to a secondary representation before you can access
data, often coupled with per-object memory allocation. The code is an
order of magnitude bigger, too. Protocol Buffers has neither optional
text import/export nor schema language features like unions.

Starting with 3d objects [closed]

I have just started with C++ and would like to program a little with 3D objects. I could use C++ or Objective-C; it doesn't matter.
What books are good with 3d objects?
I want to load a 3d object/file created by a 3d application, and then manipulate the 3d object.
Move it on the screen, rotate it etc.
Where is a good place to start to learn this? A book, tutorials etc.
Lesson 31 on gamedev.net should get you started.
Here's a pretty nice site with 3d engine tutorials: http://www.spacesimulator.net/wiki/index.php/3d_Engine_Programming_Tutorials
I have just started with C++
Woah there, have you done any C programming? In order to get anywhere (besides stuck!) in OpenGL, you really need to know C well, since the OpenGL API is a C API. At the very least you need to know all about pointers, functions, and arrays.
I'd also suggest getting started with 2D objects, and then going to 3D. There really isn't any difference with OpenGL. To render a 2D object, you render the same exact way as a 3D object, but you give every object the same z (depth) value.
Although most of these are deprecated, I'd suggest starting by learning Immediate Mode, moving towards Display Lists, then Vertex Arrays, and finally Vertex Buffer Objects and Index Buffer Objects. These are all different methods of how the GPU gets your vertex/color/texture information, and they all vary in speed.

opengl vbo advice [closed]

I have a model designed in Blender that I am going to render in my game engine, built using OpenGL, after exporting it to COLLADA. The model in Blender is divided into groups. These groups contain vertices, normals, and texture coordinates, each with its own index. Is it better to render each group as a separate VBO, or should I render the whole model as a single VBO?
Rules of thumb are:
always do as few OpenGL state changes as possible;
always talk to OpenGL as little as possible.
The main reason being that talking to the GPU is costly. The GPU is much happier if you give it something it'll take a long time to do and then don't bother it again while it's working. So it's likely that a single VBO would be a better solution, unless it would lead to a substantial increase in the amount of storage you need to use and hence run against caching elsewhere.