How could I morph one 3D object into another using shaders? - c++

For the latest Ludum Dare competition the theme was shapeshifting, so my idea involved simple morphing from one geometric shape to another.
So what I did was make a few objects in Blender with the same vertex count. In OpenGL I made separate VAOs for each object, plus one additional VAO (with dynamic-draw attributes) for the "morphing" object. Every frame while the player is shapeshifting, I upload vertex data interpolated between the current object and the target object into this extra VAO and render it; otherwise I just render the object's own VAO.
Morphing was not smooth in motion, since the vertices of the two meshes have a different ordering.
Since I had little time, I just made something quick and dirty, but now I think this is not a great way to do it, because I have to upload a lot of vertex data to the GPU every frame. It doesn't look scalable either, if I ever wanted to draw multiple morphing objects at different morphing stages.
As a first step to improve this process, I would like to move those interpolation calculations into the shaders.
I could perhaps store the data for all objects in a single VAO, in separate attributes, and then select which of the attributes to interpolate from.
But I was wondering: is there a way to somehow send multiple (two) objects/buffers into the shaders, along with an interpolation rate uniform, and then in the shaders I would do the interpolation?

You can create a buffer that holds several positions for each vertex. Just as you normally have positions, normals, and texture coordinates, you can have coordinate1, coordinate2, coordinate3, and so on. Then in the shader you have a uniform variable that says which set to use.
With two sets it's easy: the uniform runs from zero to one, and you multiply the first coordinate by it and add the second multiplied by (1.0 - value). GLSL's mix() built-in performs exactly this kind of linear blend.
Then just make sure you create the meshes from the same base shape and they will morph nicely.
Also, if you use normals, store several normals per vertex and interpolate between them as well (and renormalize afterwards, since the interpolated vector is generally no longer unit length).
The downside is that the more per-vertex data you push through, the more memory the shader has to skip over, so it might not be the prettiest solution if you have a lot of shapes.
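A minimal vertex-shader sketch of this approach, assuming two position attributes and two normal attributes per vertex (the attribute locations and uniform names are illustrative, not from the answer); here a blend of 0.0 shows shape A and 1.0 shows shape B:

    #version 330 core
    layout(location = 0) in vec3 positionA;   // shape we morph from
    layout(location = 1) in vec3 positionB;   // shape we morph to
    layout(location = 2) in vec3 normalA;
    layout(location = 3) in vec3 normalB;

    uniform mat4  uMVP;
    uniform float uBlend;   // 0.0 = shape A, 1.0 = shape B

    out vec3 vNormal;

    void main()
    {
        vec3 pos = mix(positionA, positionB, uBlend);
        // Interpolated normals shrink, so renormalize before lighting.
        vNormal = normalize(mix(normalA, normalB, uBlend));
        gl_Position = uMVP * vec4(pos, 1.0);
    }

With this, both meshes are uploaded once into static buffers bound to the same VAO, and the only per-frame traffic is a single glUniform1f call updating uBlend.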

Related

Efficiently transforming many different models in modern OpenGL

Suppose I want to render many different models, each with a different transformation matrix I want to be applied to their vertices. As far as I understand, the naive approach is to specify a matrix uniform in the vertex shader, the value of which is updated for each mesh during rendering.
It's obvious to me that this is a bad idea, due to the expense of many uniform updates and draw calls. So, what is the most efficient way to achieve this in modern OpenGL?
I've genuinely tried to find a straight, clear answer to this question. Most answers I find vaguely mention UBOs or instanced rendering (which, as far as I know, won't work unless you are drawing many instances of the same mesh, which is not my goal).
With OpenGL 4.6 or with ARB_shader_draw_parameters, each draw in a multi-draw rendering command (functions of the form glMultiDraw*) is assigned a draw index from 0 to the number of draw calls specified by that function. This index is provided to the Vertex Shader via the gl_DrawID input value. You can then use this index to fetch a matrix from any number of constructs: UBOs, SSBOs, buffer textures, etc.
This works for multi-draw indirect rendering as well. So in theory, you can have a compute shader operation generate a bunch of rendering commands, then render your entire scene with a single draw call (assuming that all of your objects live in the same vertex buffers and can use the same shader and other state). Or at the very least, a large portion of the scene.
Furthermore, this index is considered dynamically uniform, so you can also use it (or values derived from it and other dynamically uniform values) to index into arrays of textures, fetch a texture from an array of bindless textures, or the like.
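As an illustration, a vertex-shader sketch in the GL 4.6 style (the binding point and names are assumptions, not from the answer): one matrix per sub-draw lives in an SSBO, indexed by gl_DrawID:

    #version 460 core
    layout(location = 0) in vec3 position;

    // One model matrix per draw in the multi-draw command.
    layout(std430, binding = 0) readonly buffer ModelMatrices {
        mat4 model[];
    };

    uniform mat4 uViewProj;

    void main()
    {
        gl_Position = uViewProj * model[gl_DrawID] * vec4(position, 1.0);
    }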

OpenGL - How to render many different models?

I'm currently struggling with finding a good approach to render many (thousands) slightly different models. The model itself is a simple cube with some vertex offset, think of a skewed quad face. Each 'block' has a different offset of its vertices, so basically I have a voxel engine on steroids as each block is not a perfect cube but rather a skewed cuboid. To render this shape 48 vertices are needed but can be cut to 24 vertices as only 3 faces are visible. With indexing we are at 12 vertices (4 for each face).
But, now that I have the vertices for each block in the world, how do I render them?
What I've tried:
Instanced Rendering. Sounds good, doesn't work as my models are not the same.
I could simplify distant blocks to a cube and render them with glDrawArraysInstanced/glDrawElementsInstanced.
Put everything in one giant VBO. This has better performance than rendering each cube individually, but has the downside of producing one large mesh. This is not desirable, as I need every cube to have different textures, lighting, etc... Selecting a single cube within that huge mesh is not possible.
I am aware of frustum culling and occlusion culling, but I already have performance problems with just the cubes in front of me (tested with a 128x128 world).
My requirements:
Draw a few thousand models.
Each model has vertex offsets to make the block less cubic, stored in another VBO.
Each block has to be an individual object, as you should be able to place/remove blocks.
Any good performance advice?
This is not desirable as I need every cube to have different textures, lighting, etc... Selecting a single cube within that huge mesh is not possible.
Programmers should avoid declaring that something is "impossible"; it limits your thinking.
Giving each face of these cubes different textures has many solutions. The Minecraft approach uses texture atlases. Each "texture" is really just a sub-section of one large texture, and you use texture coordinates to select which sub-section a particular face uses. But you can get more complex.
Array textures allow a more direct way to solve this problem. Here, the texture coordinates would be the same, but you use a per-vertex integer to select the correct texture layer for a face; all of the vertices of a particular face carry the same index. And if you're clever, you don't even really need texture coordinates: you can generate them in your vertex shader from per-vertex values like gl_VertexID and the like.
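A sketch of that route, with made-up attribute and uniform names (note that on the C++ side an integer attribute must be set up with glVertexAttribIPointer):

    // Vertex shader: pass the per-face layer index through unchanged.
    #version 330 core
    layout(location = 0) in vec3 position;
    layout(location = 1) in vec2 texCoord;
    layout(location = 2) in int  texLayer;   // same value on every vertex of a face

    uniform mat4 uMVP;
    out vec2 vUV;
    flat out int vLayer;

    void main() {
        vUV    = texCoord;
        vLayer = texLayer;
        gl_Position = uMVP * vec4(position, 1.0);
    }

    // Fragment shader: the third coordinate selects the array layer.
    #version 330 core
    uniform sampler2DArray uBlockTextures;
    in vec2 vUV;
    flat in int vLayer;
    out vec4 fragColor;

    void main() {
        fragColor = texture(uBlockTextures, vec3(vUV, float(vLayer)));
    }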
Lighting parameters would work the same way: use some per-vertex data to select parameters from a UBO or SSBO.
As for the "individual object" bit, that's merely a matter of how you're thinking about the problem. Do not confuse what happens in the player's mind with what happens in your code. Games are an elaborate illusion; just because something appears to the user to be an "individual object" doesn't mean it is one to your rendering engine.
What you need is the ability to modify your world's data to remove and add new blocks. And if you need to show a block as "selected" or something, then you simply need another per-block value (like the lighting parameters and index for the texture) which tells you whether to draw it as a "selected" block or as an "unselected" one. Or you can just redraw that specific selected block. There are many ways of handling it.
Any decent graphics card (since about 2010) can render a few million vertices in the blink of an eye.
The right approach depends on how much changes per frame; in other words, on how much data must be transferred to the GPU each frame.
For a small number of changes, storing the data in one big VBO or in many smaller VBOs (with their VAOs), sending the changes via uniforms, and issuing several glDraw* calls all show similar performance, with little difference across hardware. Indexed data may improve speed.
When most of the data changes every frame and those changes are hard or impossible to do in the shaders, your app is memory-transfer bound, and streaming is good advice.
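For the transfer-bound case, one common streaming pattern is buffer orphaning; a minimal sketch (the function and names are illustrative, not from the answer):

    // Re-specify the buffer's storage each frame so the driver can hand us
    // fresh memory instead of stalling on commands still reading the old data.
    void streamVertexData(GLuint vbo, const void* data, GLsizeiptr sizeBytes)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeBytes, nullptr, GL_STREAM_DRAW); // orphan
        glBufferSubData(GL_ARRAY_BUFFER, 0, sizeBytes, data);              // upload
    }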

How to render multiple objects that can each have multiple shaders in OpenGL 3.3?

I'm trying to make a 3D renderer with OpenGL using C++. So far I have a Scene class that contains lists of Object and Material instances (I have classes for those too), and I've written my code so that an object can have multiple shaders (each shader affects a group of vertices in the object). Now I'm trying to find a good way to send all that information to OpenGL.
I've seen people suggest taking everything that uses the same shader and rendering it at once, then doing the same for every other shader. But is that a good idea when the same shader appears in different objects? If I merged every vertex that uses shader A, for example, wouldn't it hurt that the group contains vertices from separate objects when I draw them at once? And if instead I take each object and split it by shader (draw shader group 1 of object 1, then shader group 2 of object 1, and so on), won't that be too many draw calls?
What strategy do you recommend to accomplish that ?
The first thing I recommend is that you stop thinking in terms of "objects" as far as the rendering process is concerned. When rendering, the only sensible grouping is drawing batches (of a certain primitive: points, lines, triangles) for which the same rendering steps (render pipeline) are executed. The modern rendering APIs (Vulkan, DirectX 12 and Metal) make this explicit.
When rendering your scene, the recommended strategy is to iterate over all your objects, split them into render-pipeline groups, and perform one batched draw call per primitive-by-pipeline group. The overall goal should be to minimize the total number of draw calls made.
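In code, that grouping might look something like the following sketch (the types and container layout are placeholders, not from the answer):

    #include <map>
    #include <vector>

    // Geometry bucketed by the shader program (pipeline state) that draws it.
    struct DrawBatch {
        GLuint  vao;
        GLsizei indexCount;
    };
    std::map<GLuint, std::vector<DrawBatch>> batchesByProgram;

    void renderScene()
    {
        for (const auto& [program, batches] : batchesByProgram) {
            glUseProgram(program);            // one pipeline change per group...
            for (const DrawBatch& b : batches) {
                glBindVertexArray(b.vao);     // ...then draw everything using it
                glDrawElements(GL_TRIANGLES, b.indexCount, GL_UNSIGNED_INT, nullptr);
            }
        }
    }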
If you are using OpenGL 3.3, you are using Vertex Array Objects (VAOs) and Vertex Buffer Objects (VBOs). You have an object, a table for example, which can have three (or more, or fewer) VBOs: one for vertex data, one for normal data, and one for texture-coordinate data. You enclose the table's VBOs inside one VAO, so every object has its own VAO referencing its buffers in GPU memory.
When you want to render your objects, or a part of them, you bind one of your shaders and draw the VAOs you want rendered with that shader. It matters that you render the right objects in the right order and, of course, use the right shader on each VAO.

Render multiple models in OpenGL with a single draw call

I built a 2D graphical engine, and I created a batching system for it, so, if I have 1000 sprites with the same texture, I can draw them with one single call to openGl.
This is achieved by putting in a single vbo vertex array all the vertices of all the sprites with the same texture.
Instead of "print these vertices, print these vertices, print these vertices", I do "put all the vertices toghether, print", just to be very clear.
Easy enough, but now I'm trying to achieve the same thing in 3D, and I'm having a big problem.
The problem is that I'm using a Model View Projection matrix to place and render my models, which is the common approach to render a model in 3D space.
For each model on screen, I need to pass the MVP matrix to the shader, so that I can use it to transform each vertex to the correct position.
If I did the transformation outside the shader, it would be executed by the CPU, which is not a good idea, for obvious reasons.
But the problem lies there. I need to pass the matrix to the shader, but for each model the matrix is different.
So I cannot do the same thing I did with 2D sprites, because changing a shader uniform forces a separate draw call each time.
I hope I've been clear; maybe you have a good idea I didn't have, or you've already had the same problem. I know for a fact that there is a solution somewhere, because in engines like Unity you can use the same shader for multiple models and get away with one draw call.
There exists a feature exactly like what you're looking for, and it's called instancing. With instancing, you store n matrices (or whatever else you need) in a Uniform Buffer and call glDrawElementsInstanced to draw n copies. In the shader, you get an extra input gl_InstanceID, with which you index into the Uniform Buffer to fetch the matrix you need for that particular instance.
You can read more about instancing here: https://www.opengl.org/wiki/Vertex_Rendering#Instancing
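A sketch of that setup (the names and array size are illustrative; note that core GL only guarantees 16 KB of uniform block space, i.e. 256 mat4s, so larger instance counts belong in an SSBO or buffer texture):

    // Vertex shader:
    #version 330 core
    layout(location = 0) in vec3 position;

    layout(std140) uniform InstanceData {
        mat4 model[256];        // one matrix per instance
    };

    uniform mat4 uViewProj;

    void main() {
        gl_Position = uViewProj * model[gl_InstanceID] * vec4(position, 1.0);
    }

On the C++ side, a single call such as glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr, instanceCount) then draws all copies.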
The answer depends on whether the vertex data for each item is identical or not. If it is, you can use instancing as in #orost's answer, using glDrawElementsInstanced, and gl_InstanceID within the vertex shader, and that method should be preferred.
However, if each 3D model requires different vertex data (which is frequently the case), you can still render them using a single draw call. To do this, you add another stream to your vertex data with glVertexAttribPointer (and glEnableVertexAttribArray). This extra stream contains the index of the matrix within the uniform buffer that the vertex should use when rendering, so all vertices of a given mesh within the VBO carry the same index. The uniform buffer contains the same data as in the instancing setup.
Note that this method may require some extra CPU processing if you need to redo the batching, for example when an object within a batch should no longer be rendered. If that happens frequently, it's worth measuring whether batching the items is actually beneficial.
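A sketch of setting up that extra stream (the attribute location and buffer names are made up; note the integer variant glVertexAttribIPointer, needed when the index is declared as a uint in the shader):

    // One GLuint per vertex, constant across all vertices of a given mesh.
    glBindBuffer(GL_ARRAY_BUFFER, matrixIndexVbo);
    glVertexAttribIPointer(3, 1, GL_UNSIGNED_INT, 0, nullptr);
    glEnableVertexAttribArray(3);

    // In the vertex shader:
    //   layout(location = 3) in uint matrixIndex;
    //   ...
    //   gl_Position = uViewProj * model[matrixIndex] * vec4(position, 1.0);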
Besides instancing and adding another vertex attribute as some object ID, I'd like to also mention another strategy (which requires modern OpenGL, though):
The extension ARB_multi_draw_indirect (in core since GL 4.3) adds indirect drawing commands. These commands do source their parameters (number of vertices, starting index and so on) directly from another buffer object. With these functions, many different objects can be drawn with a single draw call.
However, as you still want some per-object state like transformation matrices, that feature alone is not enough. But in combination with ARB_shader_draw_parameters (core since GL 4.6), you get the gl_DrawID input, which is incremented by one for each sub-draw of a multi-draw indirect call. That way, you can index into some UBO, or TBO, or SSBO (or whatever) where you store the per-object data.
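The layout of each indirect command is fixed by the spec; a sketch of issuing the whole batch (the buffer names and counts are illustrative):

    // Per-object command layout mandated for glMultiDrawElementsIndirect.
    struct DrawElementsIndirectCommand {
        GLuint count;          // number of indices for this object
        GLuint instanceCount;  // usually 1
        GLuint firstIndex;     // offset into the shared index buffer
        GLint  baseVertex;     // offset into the shared vertex buffer
        GLuint baseInstance;
    };

    // With the commands uploaded to a GL_DRAW_INDIRECT_BUFFER, everything
    // renders in one call; gl_DrawID runs from 0 to numObjects - 1.
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, commandBuffer);
    glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                nullptr, numObjects, 0);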

Rendering a mesh in OpenGL as a series of subgroups?

I'm completing a Wavefront object parser and I want to use it to construct generic mesh objects. My engine uses OpenGL 4 and shaders to draw everything.
My question is about how to ensure best rendering efficiency for rendering a mesh.
A Wavefront .obj file normally has many object sub-groups specified.
A sub-group might be assigned a specific material (e.g. a shiny red colour).
So a mesh might be a fairly complex collection of sub-groups, each with their own material assigned.
My questions are -
Q. Do I need to draw each sub-group separately e.g. with a call to glDrawElements for each sub-group ? (So if I had 4 separate sub-groups, I'd have to make four glDrawElements calls, thereby invoking the shader 4 times with 4 uniform changes (for the materials/textures) )
glDrawElements( GL_TRIANGLES, numIndicesInGroup, GL_UNSIGNED_INT, (const void*)(firstIndexInGroup * sizeof(GLuint)) );
If this is correct, then I'll have to calculate:
The indices in each sub-group (implying a separate index array and VAO for each sub-group)
The vertex offset of the start of the sub-group
This seems terribly inefficient; am I barking up the wrong tree?
Also, from the Wavefront obj wiki page:
Smooth shading across polygons is enabled by smoothing groups.
s 1
...
# Smooth shading can be disabled as well.
s off
...
Can anyone suggest what the smooth-shading values indicate? E.g. s 1, s 2, s 4, etc.
Yes, you should draw each sub-group separately from the others; this is required as long as the state differs between sub-groups. But you are jumping too far ahead.
To avoid multiple draw calls, you could introduce a vertex attribute holding an index used to access uniform arrays (an array of materials, an array of textures). That way you need only one draw call, at the cost of one additional attribute and the bookkeeping that goes with it.
I would avoid that approach, though. What if one sub-group is textured and another is not? How do you decide whether to texture or not? By introducing yet more attributes? It gets confusing.
The first point is that buffer-object management is very flexible. You could have a single element buffer object and a single vertex buffer object: with offsets and interleaving you can satisfy every level of complexity. And on modern hardware, vertex array objects minimize the cost of the different buffer bindings.
The second point is that your software can group sub-groups that share the same uniform state, joining multiple draw calls into a single one. Remember that you can use the multi-draw entry-point variants (glMultiDraw*), and primitive restart can also help in the case of strip primitives.
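For sub-groups that share the same material state, a sketch using glMultiDrawElements (the SubGroup type and field names are made up):

    #include <vector>

    std::vector<GLsizei>     counts;    // index count of each sub-group
    std::vector<const void*> offsets;   // byte offset of each sub-group in the EBO

    for (const SubGroup& g : groupsWithSameMaterial) {
        counts.push_back(g.indexCount);
        offsets.push_back(reinterpret_cast<const void*>(g.firstIndex * sizeof(GLuint)));
    }

    // All sub-groups in the batch render with one call and one set of uniforms.
    glMultiDrawElements(GL_TRIANGLES, counts.data(), GL_UNSIGNED_INT,
                        offsets.data(), static_cast<GLsizei>(counts.size()));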
Further considerations are premature at this point: you have to draw anyway, complex or not. Once the rendering is correct, profile the application and cut down the hot spots.
Smoothing groups identify sets of faces whose shared vertices should use averaged (smooth) normals; edges between different groups keep separate normals and therefore appear hard. This matters for element-indexed vertices, since vertices inside a group can be shared while hard edges require duplicates.
To go deeper on the subject, read one of the OBJ format specifications available online.