Managing 3D resources (samplers, specifically) - OpenGL

I've got a question about how to organise my graphics resources/objects. I'm importing scenes from 3D Studio using AssImp. At the moment I'm processing the AssImp structures into my own so I can stream them to and from disk more quickly than AssImp can. Anyway, I'm also processing the scene materials, which gives me a set of choices I'm not too sure how to best handle.
I have the following classes for materials:
Material
Contains things like ambient and diffuse colours, specular power and so on; it also contains an array of maps, with each slot in the array being a "channel" (i.e. slot 0 is always the ambient map, slot 1 the diffuse map, slot 2 gloss, and so on).
Map
An instance of a map, with parameters such as wrap mode and a pointer to a texture object.
Texture
An OpenGL texture map.
Sampler
An OpenGL sampler object.
Now I'm kind-of wondering how I'm going to handle configuring samplers for any given map in a material. Technically each map can have different sampler parameters: UV wrap modes, min and mag filters, and anisotropy. Some of these parameters are defined in the scene (wrap modes) and others are engine settings (filtering, anisotropy).
What I'm thinking about doing is creating one sampler object per map (not per texture, remember, a map points to a texture). Then when I render an object that has a material with map X, it will automatically bind in the sampler for map X to whichever channel it is. As it's possible there will be thousands of maps, I'm wondering if the corresponding thousands of samplers is a good idea.
Another way of doing it would be to add samplers to my resource dictionary so they can be shared by any map that happens to have the same parameters.
Does anyone have an opinion on how best to manage things like this?

This is really not a hard problem.
How many different kinds of samplers could you possibly have? If your filtering is a global concept, then the main difference between samplers will be in wrap modes. There are three wrap axes (S, T, R) and 4 possible wrap modes (CLAMP_TO_EDGE, CLAMP_TO_BORDER, REPEAT, and MIRROR_REPEAT). Even allowing every combination across the three axes, that's a grand total of... 4^3 = 64 possible samplers for normal things, and since most maps use the same mode on every axis, in practice you'll see far fewer.
CLAMP_TO_BORDER requires a border color, so that would require a unique sampler object (unless many textures share the same border color, which is not unreasonable. Black is a popular choice). And if you're doing depth comparison sampling, then you'd need a few more sampler objects.
So just have a sampler object manager that you can ask for a sampler. You give it some parameters and it regurgitates a sampler: either one that already exists or one it created just for you. If you need a unique one (for border colors), you can ask for one that won't be shared. Otherwise, if two callers ask for the same parameters, they get the same sampler object.
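
As a rough illustration, here is a minimal sketch of such a manager in C++. The SamplerKey fields and all names are my own invention; only the GL calls themselves are real:

    #include <GL/glew.h>   // or whichever GL loader you use
    #include <map>
    #include <tuple>

    // Hypothetical key describing the sampler state we care about.
    struct SamplerKey {
        GLenum wrapS, wrapT, wrapR;
        GLenum minFilter, magFilter;
        float  maxAnisotropy;

        bool operator<(const SamplerKey& o) const {
            return std::tie(wrapS, wrapT, wrapR, minFilter, magFilter, maxAnisotropy)
                 < std::tie(o.wrapS, o.wrapT, o.wrapR, o.minFilter, o.magFilter, o.maxAnisotropy);
        }
    };

    class SamplerManager {
    public:
        // Returns a shared sampler matching the key, creating it on first use.
        GLuint get(const SamplerKey& key) {
            auto it = cache_.find(key);
            if (it != cache_.end())
                return it->second;

            GLuint sampler = 0;
            glGenSamplers(1, &sampler);
            glSamplerParameteri(sampler, GL_TEXTURE_WRAP_S, key.wrapS);
            glSamplerParameteri(sampler, GL_TEXTURE_WRAP_T, key.wrapT);
            glSamplerParameteri(sampler, GL_TEXTURE_WRAP_R, key.wrapR);
            glSamplerParameteri(sampler, GL_TEXTURE_MIN_FILTER, key.minFilter);
            glSamplerParameteri(sampler, GL_TEXTURE_MAG_FILTER, key.magFilter);
            // Requires EXT_texture_filter_anisotropic (core in GL 4.6).
            glSamplerParameterf(sampler, GL_TEXTURE_MAX_ANISOTROPY_EXT, key.maxAnisotropy);
            cache_[key] = sampler;
            return sampler;
        }

    private:
        std::map<SamplerKey, GLuint> cache_;
    };

Maps that need a unique border color can simply bypass the cache and own their sampler.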

Efficiently transforming many different models in modern OpenGL

Suppose I want to render many different models, each with a different transformation matrix I want to be applied to their vertices. As far as I understand, the naive approach is to specify a matrix uniform in the vertex shader, the value of which is updated for each mesh during rendering.
It's obvious to me that this is a bad idea, due to the expense of many uniform updates and draw calls. So, what is the most efficient way to achieve this in modern OpenGL?
I've genuinely tried to find a straight, clear answer to this question. Most answers I find vaguely mention UBOs, or instanced drawing (which AFAIK won't help unless you are drawing many instances of the same mesh, which is not my goal).
With OpenGL 4.6, or with ARB_shader_draw_parameters, each draw in a multi-draw rendering command (functions of the form glMultiDraw*) is assigned a draw index, from 0 up to one less than the number of draws specified in that call. This index is provided to the vertex shader via the gl_DrawID input value. You can then use this index to fetch a matrix from any number of constructs: UBOs, SSBOs, buffer textures, etc.
This works for multi-draw indirect rendering as well. So in theory, you can have a compute shader operation generate a bunch of rendering commands, then render your entire scene with a single draw call (assuming that all of your objects live in the same vertex buffers and can use the same shader and other state). Or at the very least, a large portion of the scene.
Furthermore, this index is considered dynamically uniform, so you can also use it (or values derived from it and other dynamically uniform values) to index into arrays of textures, fetch a texture from an array of bindless textures, or the like.
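
A minimal sketch of what that looks like, assuming one model matrix per draw stored in an SSBO at binding point 0 (all names here are illustrative):

    #include <GL/glew.h>   // or your GL loader of choice

    // GLSL 4.60 vertex shader, embedded as a C++ string for brevity.
    const char* vertexSrc = R"GLSL(
        #version 460 core
        layout(location = 0) in vec3 aPosition;

        layout(std430, binding = 0) readonly buffer ModelMatrices {
            mat4 uModel[];          // one matrix per draw in the multi-draw
        };
        uniform mat4 uViewProj;

        void main() {
            // gl_DrawID selects this draw's matrix; it is dynamically uniform.
            gl_Position = uViewProj * uModel[gl_DrawID] * vec4(aPosition, 1.0);
        }
    )GLSL";

    // C++ side (context, VAO, and indirect command buffer assumed set up):
    void drawScene(GLuint matrixSSBO, GLsizei drawCount) {
        glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, matrixSSBO);
        glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                    nullptr, drawCount, 0);
    }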

How could I morph one 3D object into another using shaders?

For the latest Ludum Dare competition the theme was shapeshifting, so my idea involved simple morphing from one geometric shape to another.
So what I did was make a few objects in Blender with the same vertex count. In OpenGL I made separate VAOs for each object, plus one additional VAO (whose buffer uses dynamic-draw usage) for the "morphing" object. Every frame, while the player is shapeshifting, I would upload interpolated vertex data, between the current object and the target object, into that buffer and then render. Otherwise I just render the object's corresponding VAO.
Morphing looked like this (animated screenshot omitted): because the vertices have a different ordering, the morph is not "smooth".
Since I had little time I just made something quick and dirty but now I think this is not a great way of doing this process, because I have to upload a lot of data to the GPU every frame. And it doesn't look scalable either, if I ever wanted to draw multiple morphing objects at different morphing stages.
As a first step to improve this process I would like to move those interpolation calcs into the shaders.
I could perhaps store the data for all objects in a single VAO, in separate attributes, and then select which of the attributes to interpolate from.
But I was wondering: is there a way to somehow send multiple (two) objects/buffers into the shaders, along with an interpolation rate uniform, and then in the shaders I would do the interpolation?
You can create a buffer that holds several positions for each vertex. Just as you normally have positions, normals and texture coordinates, you can have position1, position2, position3 and so on. Then in the shader you can have a uniform variable that controls the blend.
With two shapes it's of course easy: the uniform runs from zero to one, and you multiply the first position by the value and add the second multiplied by (1.0 - value), i.e. a plain linear interpolation.
Then just make sure you create the meshes from the same base shape and they will morph nicely.
Also if you use normals, make sure you have several normals and interpolate between them also.
The downside is that the more data you push through, the more the shader has to skip around in memory, so it might not be the prettiest solution if you have a lot of shapes.
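
To make this concrete, a vertex shader along these lines does the blend on the GPU; per frame you then update only a single uniform instead of re-uploading vertex data (a sketch; attribute locations and names are my own):

    const char* morphVS = R"GLSL(
        #version 330 core
        layout(location = 0) in vec3 aPositionA;   // base shape
        layout(location = 1) in vec3 aPositionB;   // target shape
        layout(location = 2) in vec3 aNormalA;
        layout(location = 3) in vec3 aNormalB;

        uniform float uMorph;   // 0.0 = shape A, 1.0 = shape B
        uniform mat4  uMVP;

        out vec3 vNormal;

        void main() {
            // Linear interpolation of both positions and normals.
            vec3 pos = mix(aPositionA, aPositionB, uMorph);
            vNormal  = normalize(mix(aNormalA, aNormalB, uMorph));
            gl_Position = uMVP * vec4(pos, 1.0);
        }
    )GLSL";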

Render multiple models in OpenGL with a single draw call

I built a 2D graphics engine and created a batching system for it, so if I have 1000 sprites with the same texture, I can draw them with one single call to OpenGL.
This is achieved by putting the vertices of all the sprites with the same texture into a single VBO.
Instead of "draw these vertices, draw these vertices, draw these vertices", I do "put all the vertices together, draw once", just to be very clear.
Easy enough, but now I'm trying to achieve the same thing in 3D, and I'm having a big problem.
The problem is that I'm using a Model View Projection matrix to place and render my models, which is the common approach to render a model in 3D space.
For each model on screen, I need to pass the MVP matrix to the shader, so that I can use it to transform each vertex to the correct position.
If I did the transformation outside the shader, it would be executed by the CPU, which is not a good idea, for obvious reasons.
But the problem lies there. I need to pass the matrix to the shader, but for each model the matrix is different.
So I cannot do the same thing I did with 2D sprites, because changing a uniform requires a separate draw call every time.
I hope I've been clear; maybe you have a good idea I didn't have, or you've already run into the same problem. I know for a fact that there is a solution somewhere, because in engines like Unity you can use the same shader for multiple models and still get away with one draw call.
There exists a feature exactly like what you're looking for, and it's called instancing. With instancing, you store n matrices (or whatever else you need) in a Uniform Buffer and call glDrawElementsInstanced to draw n copies. In the shader, you get an extra input gl_InstanceID, with which you index into the Uniform Buffer to fetch the matrix you need for that particular instance.
You can read more about instancing here: https://www.opengl.org/wiki/Vertex_Rendering#Instancing
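
A rough sketch of that setup (the block name, binding point, and the 256-instance cap are my own choices; a std140 UBO is guaranteed to hold at least 16 KB, which is exactly 256 mat4s):

    #include <GL/glew.h>     // or your GL loader of choice
    #include <glm/glm.hpp>   // assuming glm for the matrix type

    // GLSL: one matrix per instance, indexed with gl_InstanceID.
    const char* instancedVS = R"GLSL(
        #version 330 core
        layout(location = 0) in vec3 aPosition;
        layout(std140) uniform Transforms {
            mat4 uMVP[256];
        };
        void main() {
            gl_Position = uMVP[gl_InstanceID] * vec4(aPosition, 1.0);
        }
    )GLSL";

    // C++: upload the matrices and issue one instanced draw. Assumes the
    // "Transforms" block was bound to binding point 0 with
    // glUniformBlockBinding, and that the mesh's VAO is currently bound.
    void drawInstanced(GLuint ubo, const glm::mat4* mvps,
                       GLsizei instanceCount, GLsizei indexCount) {
        glBindBuffer(GL_UNIFORM_BUFFER, ubo);
        glBufferSubData(GL_UNIFORM_BUFFER, 0,
                        instanceCount * sizeof(glm::mat4), mvps);
        glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);
        glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT,
                                nullptr, instanceCount);
    }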
The answer depends on whether the vertex data for each item is identical or not. If it is, you can use instancing as in @orost's answer, with glDrawElementsInstanced and gl_InstanceID in the vertex shader; that method should be preferred.
However, if each 3D model requires different vertex data (which is frequently the case), you can still render them using a single draw call. To do this, you would add another stream into your vertex data with glVertexAttribPointer (and glEnableVertexAttribArray). This extra stream would contain the index of the matrix within the uniform buffer that vertex should use when rendering - so each mesh within the VBO would have an identical index in the extra stream. The uniform buffer contains the same data as in the instancing setup.
Note this method may require some extra CPU processing if you need to redo the batching - for example, when an object within a batch should no longer be rendered. If that happens frequently, you should evaluate whether batching the items is actually beneficial.
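
The extra stream might be set up roughly like this (a sketch; the attribute location and names are my choice). Using the integer variant glVertexAttribIPointer keeps the index un-normalised:

    // C++: one GLuint per vertex, holding that object's matrix index.
    glBindBuffer(GL_ARRAY_BUFFER, matrixIndexVBO);
    glEnableVertexAttribArray(3);
    glVertexAttribIPointer(3, 1, GL_UNSIGNED_INT, sizeof(GLuint), nullptr);

    // GLSL side:
    //   layout(location = 3) in uint aMatrixIndex;
    //   gl_Position = uMVP[aMatrixIndex] * vec4(aPosition, 1.0);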
Besides instancing and adding another vertex attribute as some object ID, I'd like to also mention another strategy (which requires modern OpenGL, though):
The extension ARB_multi_draw_indirect (in core since GL 4.3) adds indirect drawing commands. These commands source their parameters (number of vertices, starting index and so on) directly from another buffer object. With these functions, many different objects can be drawn with a single draw call.
However, since you still want some per-object state like transformation matrices, that feature alone is not enough. In combination with ARB_shader_draw_parameters (in core since GL 4.6), you get the gl_DrawID input, which is incremented by one for each object within a single multi-draw indirect call. That way, you can index into some UBO, TBO or SSBO (or whatever) where you store the per-object data.
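
For reference, this is what one indirect command looks like and how the multi-draw is issued (the struct layout is fixed by the spec; buffer setup is assumed):

    // Layout of one indexed indirect command, as defined by the GL spec.
    struct DrawElementsIndirectCommand {
        GLuint count;          // number of indices for this draw
        GLuint instanceCount;  // usually 1 for distinct objects
        GLuint firstIndex;     // offset into the index buffer
        GLuint baseVertex;     // added to every index
        GLuint baseInstance;
    };

    // With an array of these commands in the bound GL_DRAW_INDIRECT_BUFFER,
    // one call issues drawCount draws; gl_DrawID runs 0..drawCount-1.
    glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                nullptr,     // byte offset into the indirect buffer
                                drawCount,
                                0);          // 0 = tightly packed commands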

Drawing multiple objects with different textures

Do I understand correctly that the typical way to draw multiple objects that each have a different texture in OpenGL is to simply bind one texture at a time using glBindTexture and draw the objects that use it, then bind a different texture and do the same?
In other words, only one texture can be "active" for drawing at any particular time, and that is the last texture that was bound using glBindTexture. Is this correct?
"bind one texture at a time using glBindTexture and draw the objects that use it, then bind a different texture and do the same?"
"In other words, only one texture can be "active" for drawing at any particular time"
These two statements are not the same thing.
A single rendering operation can only use a single set of textures at any time. This set is defined by the textures currently bound to the various texture image units. The number of texture image units available is queryable through GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS.
So multiple textures can be "active" for a single rendering command.
However, that's not the same thing as binding a bunch of textures and then rendering several objects, where each object only uses some (or one, in your case) of the bound textures. You could do that, but really, there's not much point.
The association between a GLSL sampler and a texture image unit is part of the program object state. It's the value you set on the sampler uniform. Therefore, in order to do what you suggest, you would have to bind a bunch of textures, set the uniform to one of them, render, then set the uniform to the next, render, etc.
You are still incurring all of the cost of binding those textures. You're still incurring all of the overhead of changing uniforms. Indeed, it might be less efficient, since the normal way (bind, render, bind, render) doesn't involve changing uniform state.
Plus, it's just weird. Generally, you shouldn't be changing uniform sampler state dynamically. You define a simple convention (diffuse texture comes from unit 0, specular from unit 1, etc), and you stick to that convention. Any GL 3.x-class hardware is required to provide no less than 48 texture image units (16 per stage), so it's not like you're going to run out.
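
Concretely, the convention amounts to setting the sampler uniforms once at program initialisation and never touching them again (uniform names here are illustrative):

    // Once, after linking: fix the convention (diffuse = unit 0, specular = unit 1).
    glUseProgram(program);
    glUniform1i(glGetUniformLocation(program, "uDiffuseMap"),  0);
    glUniform1i(glGetUniformLocation(program, "uSpecularMap"), 1);

    // Per object, only the textures bound to those units change.
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, diffuseTex);
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, specularTex);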
There are mechanisms to do things not entirely unlike what you're talking about. For example, array textures can be leveraged, though this requires explicit shader logic and support. That would almost certainly be faster than the "bind a bunch of textures" method, since you're only binding one. Also, the uniform you'd be changing is a regular data uniform rather than a sampler uniform, which is likely to be faster.
With the array texture mechanism, you can also leverage instancing, assuming you're rendering the same object with slightly different parameters.
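
A sketch of the array-texture variant in a fragment shader (names are my own; note that all layers of an array texture must share the same size and format):

    const char* arrayFS = R"GLSL(
        #version 330 core
        uniform sampler2DArray uTextures;
        uniform float uLayer;   // plain data uniform, not a sampler uniform
        in vec2 vUV;
        out vec4 fragColor;

        void main() {
            // The third texture coordinate selects the layer in the array.
            fragColor = texture(uTextures, vec3(vUV, uLayer));
        }
    )GLSL";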

Rendering a mesh in OpenGL as a series of subgroups?

I'm completing a wavefront object parser and I want to use it to construct generic mesh objects. My engine uses OpenGL 4 and shaders to draw everything in my engine.
My question is about how to ensure best rendering efficiency for rendering a mesh.
A wavefront .obj file normally has many object sub-groups specified.
A sub-group might be assigned a specific material (e.g. a shiny red colour).
So a mesh might be a fairly complex collection of sub-groups, each with their own material assigned.
My questions are -
Q. Do I need to draw each sub-group separately, e.g. with a call to glDrawElements for each sub-group? (So if I had 4 separate sub-groups, I'd have to make four glDrawElements calls, invoking the pipeline 4 times with 4 uniform changes for the materials/textures.)
glDrawElements(GL_TRIANGLES, numIndicesInGroup, GL_UNSIGNED_INT, (const void*)firstIndexByteOffset);
If this is correct, then I'll have to calculate:
The indices in each sub-group (implying a separate index array and VAO for each sub-group)
The byte offset of the start of each sub-group within the index buffer
This seems terribly inefficient, am I barking up the wrong tree?
Also, from the Wavefront obj wiki page:
Smooth shading across polygons is enabled by smoothing groups.
s 1
...
# Smooth shading can be disabled as well.
s off
...
Can anyone explain what the smoothing group values indicate? E.g. s 1, s 2, s 4, etc.
Yes, you should draw each sub-group separately from the others; this is required whenever the state differs between sub-groups.
But you are getting ahead of yourself.
To avoid multiple draw calls, you could introduce a vertex attribute holding an index used to access uniform array values (an array of materials, an array of textures). In this way you need only one draw call, but you pay the cost of one additional attribute and its management.
I would avoid that approach, though. What if one sub-group is textured and another is not? How do you decide whether to texture or not? By introducing yet more attributes? It quickly gets confusing.
The first point is that buffer object management is very flexible. You could have a single element buffer object and a single vertex buffer object: using offsets and interleaving, you can satisfy every level of complexity. And on modern hardware, using vertex array objects, you can minimise the cost of the different buffer bindings.
The second point is that your software can group sub-groups having the same uniform state, joining multiple draw calls into a single one, as sketched below. Remember that you can use the glMultiDraw* entry-point variants, and primitive restart can also help (in the case of strip primitives).
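
For example, sub-groups that share the same material can be merged into a single glMultiDrawElements call (a sketch; the counts and offsets are made up, and all indices are assumed to live in one bound element buffer):

    // One entry per sub-group that shares the same uniform state.
    GLsizei     counts[]  = { 300, 504, 96 };          // index count per sub-group
    const void* offsets[] = {                          // byte offsets into the EBO
        (const void*)(  0 * sizeof(GLuint)),
        (const void*)(300 * sizeof(GLuint)),
        (const void*)(804 * sizeof(GLuint)),
    };
    glMultiDrawElements(GL_TRIANGLES, counts, GL_UNSIGNED_INT, offsets, 3);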
Further considerations aren't useful at this stage, because you have to draw anyway, complex mesh or not. Later, once the rendering is correct, profile the application and cut down the hot spots.
Smoothing groups are sets of faces whose shared vertices should use averaged normals, giving smooth shading across those faces; faces in different groups keep separate normals, producing a hard edge between them. The values (s 1, s 2, s 4 and so on) are simply group identifiers.
To go deeper on the subject, read the full .obj specification, which is easy to find online.