As far as I know, OpenGL doesn't support per-face attributes [citation needed]. I have decided to use the material (.mtl) files that accompany .obj files and have already loaded them successfully into my project. However, I had assumed that materials were applied per object group, and then realized that the .obj format can actually assign materials per face. Therefore a vertex group (or, let's say, a mesh) can have more than one material, each applied to specific faces of it.
I could convert small properties like the specular exponent into per-vertex attributes, but the whole material can vary from face to face: illumination model, ambient, specular, texture maps (diffuse, normal, etc.). It would be easy if the materials were per mesh, because then I could load them as sub-meshes and attach the corresponding material to each.
How am I going to handle multiple materials for ONE mesh, where the materials are not uniformly distributed among its faces?
Firstly, what values do these per-face materials hold? Unless you are able to render them in a single pass, you may as well split them into separate meshes anyway. If you are using index buffers, just use a few of them, one for each material. Then you can set uniforms / change shaders for each material type.
The way my renderer works:
iterate through meshes
    bind the mesh's vertex array object
    bind the mesh's uniform buffer object
    iterate through the mesh's materials
        use shaders, bind textures, set uniforms...
        draw the material's index buffer with glDrawElements
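In rough C++ terms (Mesh and Material here are hypothetical structs, not from any particular engine; note it naively switches programs per material, which the next paragraph addresses), that loop might look like this:

    #include <vector>
    // GL types and functions are assumed to come from your loader (e.g. glad or GLEW).

    struct Material {
        GLuint shaderProgram;
        GLuint diffuseTexture;
        GLuint indexBuffer;      // element buffer holding only this material's faces
        GLsizei indexCount;
    };

    struct Mesh {
        GLuint vao;              // vertex data shared by the whole mesh
        GLuint ubo;              // per-mesh constants (e.g. the model matrix)
        std::vector<Material> materials;
    };

    void drawScene(const std::vector<Mesh>& meshes)
    {
        for (const Mesh& mesh : meshes) {
            glBindVertexArray(mesh.vao);
            glBindBufferBase(GL_UNIFORM_BUFFER, 0, mesh.ubo);

            for (const Material& mat : mesh.materials) {
                glUseProgram(mat.shaderProgram);
                glActiveTexture(GL_TEXTURE0);
                glBindTexture(GL_TEXTURE_2D, mat.diffuseTexture);
                // ...set any other per-material uniforms here...

                // Same vertices, different faces: one index buffer per material.
                glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mat.indexBuffer);
                glDrawElements(GL_TRIANGLES, mat.indexCount, GL_UNSIGNED_INT, nullptr);
            }
        }
    }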
Of course, you wouldn't want to change shaders for every material, so if you do need multiple shaders rather than just different uniform values, you will need to batch the materials together by shader.
This isn't specific to obj/mtl; it applies to any mesh / material format.
Related
Suppose I want to render many different models, each with a different transformation matrix I want to be applied to their vertices. As far as I understand, the naive approach is to specify a matrix uniform in the vertex shader, the value of which is updated for each mesh during rendering.
It's obvious to me that this is a bad idea, due to the expense of many uniform updates and draw calls. So, what is the most efficient way to achieve this in modern OpenGL?
I've genuinely tried to find a straight, clear answer to this question. Most answers I find vaguely mention UBOs, or instanced drawing (which, as far as I know, won't work unless you are drawing many instances of the same mesh, which is not my goal).
With OpenGL 4.6 or with ARB_shader_draw_parameters, each draw in a multi-draw rendering command (functions of the form glMultiDraw*) is assigned a draw index from 0 to the number of draw calls specified by that function. This index is provided to the Vertex Shader via the gl_DrawID input value. You can then use this index to fetch a matrix from any number of constructs: UBOs, SSBOs, buffer textures, etc.
This works for multi-draw indirect rendering as well. So in theory, you can have a compute shader operation generate a bunch of rendering commands, then render your entire scene with a single draw call (assuming that all of your objects live in the same vertex buffers and can use the same shader and other state). Or at the very least, a large portion of the scene.
Furthermore, this index is considered dynamically uniform, so you can also use it (or values derived from it and other dynamically uniform values) to index into arrays of textures, fetch a texture from an array of bindless textures, or the like.
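As a minimal sketch (assuming OpenGL 4.6, an SSBO holding one model matrix per object, and hypothetical buffer names like drawCmdBuffer and modelMatrixBuffer), the vertex shader and the multi-draw call could look roughly like this:

    // A sketch only: "drawCmdBuffer", "modelMatrixBuffer" and "objectCount" are
    // hypothetical names for buffers/counts filled in elsewhere.
    const char* vsSource = R"GLSL(
    #version 460 core
    layout(location = 0) in vec3 position;

    // One model matrix per sub-draw, indexed by gl_DrawID.
    layout(std430, binding = 0) buffer ModelMatrices { mat4 model[]; };
    uniform mat4 viewProj;

    void main()
    {
        gl_Position = viewProj * model[gl_DrawID] * vec4(position, 1.0);
    }
    )GLSL";

    // Matches the layout the GL spec defines for indirect draw commands.
    struct DrawElementsIndirectCommand {
        GLuint count, instanceCount, firstIndex;
        GLint  baseVertex;
        GLuint baseInstance;
    };

    // One command per object, one matrix per object, one GL call for all of them.
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, drawCmdBuffer);
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, modelMatrixBuffer);
    glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, nullptr,
                                objectCount, sizeof(DrawElementsIndirectCommand));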
I'm trying to write a deferred renderer in OpenGL that supports multiple materials (different lighting models etc.) and layered materials (different materials blended together).
I'm writing the material ID to a g-buffer as well as the standard vertex attribute g-buffers. How would I use a different shader for each pixel in the second stage (when the lighting is calculated and rendered to the screen)?
I thought about using a compute shader to build a list of pixels for each material ID, generating a mixture of quads, points, and maybe lines from it, reading these meshes back to the CPU, and rendering them with their respective materials. I think this would be a bit slow, though, since the mesh has to be read back and re-uploaded every frame.
A. Write an uber-shader that chooses the exact shader path based on the pixel's MaterialID attribute. That could work well for multiple materials. The uber-shader could consist of several sections stitched together programmatically, to simplify development.
B. Reduce materials count. Speaks for itself.
C. Add more channels to your g-buffer to store varying material parameters (e.g. Specular)
D. Do multiple passes with different shaders and use the MaterialID as a sort of "stencil": either render if it matches the current material and shader, or discard to skip the pixel as early as possible.
You can combine these solutions as well.
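For option A, here is a minimal sketch of the lighting-pass fragment shader, assuming the material ID was written to an integer g-buffer attachment; the texture names and the two shading functions are placeholders:

    const char* lightingFs = R"GLSL(
    #version 430 core
    layout(location = 0) out vec4 fragColor;

    uniform usampler2D gMaterialID;  // integer texture holding the per-pixel material ID
    uniform sampler2D  gAlbedo;
    uniform sampler2D  gNormal;
    in vec2 uv;

    // Placeholder lighting models; real ones would take lights, view vector, etc.
    vec3 shadeLambert(vec3 albedo, vec3 n) { return albedo * max(n.z, 0.0); }
    vec3 shadeMetal  (vec3 albedo, vec3 n) { return albedo * 0.5 + 0.5 * max(n.z, 0.0); }

    void main()
    {
        uint id  = texelFetch(gMaterialID, ivec2(gl_FragCoord.xy), 0).r;
        vec3 alb = texture(gAlbedo, uv).rgb;
        vec3 n   = normalize(texture(gNormal, uv).xyz * 2.0 - 1.0);

        // Branch on the per-pixel material ID; each case is one lighting model.
        if      (id == 0u) fragColor = vec4(shadeLambert(alb, n), 1.0);
        else if (id == 1u) fragColor = vec4(shadeMetal(alb, n), 1.0);
        else               fragColor = vec4(alb, 1.0);  // unknown material: fall back
    }
    )GLSL";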
For the latest Ludum Dare competition the theme was shapeshifting, so my idea involved simple morphing from one geometric shape to another.
So what I did was make a few objects in Blender with the same vertex count. In OpenGL I made a separate VAO for each object and one additional VAO (with dynamic-draw attributes) for the "morphing" object. Every frame while the player is shapeshifting, I upload vertex data interpolated between the current object and the target object into this extra VAO and render that; otherwise I just render the object's own VAO.
Morphing looked like this:
(The vertices have a different ordering, so morphing is not "smooth")
Since I had little time I just made something quick and dirty, but now I think this is not a great way of doing it, because I have to upload a lot of data to the GPU every frame. It doesn't look scalable either, if I ever wanted to draw multiple morphing objects at different morphing stages.
As a first step to improve this process I would like to move those interpolation calcs into the shaders.
I could perhaps store the data for all objects in a single VAO, in separate attributes, and then select which of the attributes to interpolate from.
But I was wondering: is there a way to somehow send multiple (two) objects/buffers into the shaders, along with an interpolation rate uniform, and then in the shaders I would do the interpolation?
You can create a buffer that holds several sets of coordinates for each vertex. Just like you normally have positions, normals, and texture coordinates, you can have position1, position2, position3, etc. Then in the shader you have a uniform variable that says which ones to use.
With two it's of course easy, since the uniform just goes from zero to one: multiply the first position by it and add the second multiplied by (1.0 - value).
Then just make sure you create the meshes from the same base shape and they will morph nicely.
Also, if you use normals, make sure you store several sets of normals and interpolate between them as well.
The downside is that the more data you push through, the more the shader has to skip around in memory, so it might not be the prettiest solution if you have a lot of shapes.
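As a minimal sketch of that idea (assuming both shapes share a vertex count and sit in one VAO as separate attributes; every name here is a placeholder):

    const char* morphVs = R"GLSL(
    #version 330 core
    layout(location = 0) in vec3 positionA;  // shape being morphed from
    layout(location = 1) in vec3 positionB;  // shape being morphed to
    layout(location = 2) in vec3 normalA;
    layout(location = 3) in vec3 normalB;

    uniform float blend;          // 0.0 = shape A, 1.0 = shape B
    uniform mat4  modelViewProj;
    out vec3 vNormal;

    void main()
    {
        // The interpolation now happens on the GPU; only 'blend' changes per frame.
        vec3 p  = mix(positionA, positionB, blend);
        vNormal = normalize(mix(normalA, normalB, blend));
        gl_Position = modelViewProj * vec4(p, 1.0);
    }
    )GLSL";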
I'm writing a parser for .obj files with multiple materials and groups (so I'm also parsing usemtl and material files). I can load and render the vertices. How do I deal with different textures and materials?
Do I render each material one by one, or do I write a giant shader that chooses based on an ID? And how do I store the different textures on the GPU? (Currently I am using GL_TEXTURE_2D_ARRAY, but the layers must all have the same size.)
To handle different materials, each object has material specifications such as ambient_color, diffuse_color, and specular_color. You simply pass these values as uniforms to the fragment shader and render each object with its own material specs.
You can also bind several textures simultaneously in one fragment shader (GL_MAX_TEXTURE_IMAGE_UNITS is at least 16; many GPUs expose more), so you can render an object with more than one texture. But most of the time an object is made of groups and each group has just one texture, so a single sampler2D in the fragment shader is enough; only the texture bound to that unit changes from group to group.
The best way to handle this efficiently is to render the groups that share the same texture together, which avoids lots of texture changes.
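A rough sketch of that per-group loop, assuming the parser produced one index range per group (Group and Material are hypothetical structs, and glm supplies the vector types):

    #include <vector>
    #include <glm/glm.hpp>
    // GL functions assumed to come from your loader (e.g. glad or GLEW).

    struct Material {
        glm::vec3 ambient, diffuse, specular;
        GLuint diffuseTexture;
    };

    struct Group {
        GLsizei indexCount;
        GLsizei indexOffset;            // offset (in indices) into the shared index buffer
        const Material* material;
    };

    void drawGroups(GLuint program, GLuint vao, const std::vector<Group>& groups)
    {
        glUseProgram(program);
        glBindVertexArray(vao);
        for (const Group& g : groups) {
            // Upload this group's material constants as plain uniforms.
            glUniform3fv(glGetUniformLocation(program, "u_ambient"),  1, &g.material->ambient[0]);
            glUniform3fv(glGetUniformLocation(program, "u_diffuse"),  1, &g.material->diffuse[0]);
            glUniform3fv(glGetUniformLocation(program, "u_specular"), 1, &g.material->specular[0]);

            // One sampler2D in the shader; only the bound texture changes per group.
            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_2D, g.material->diffuseTexture);

            glDrawElements(GL_TRIANGLES, g.indexCount, GL_UNSIGNED_INT,
                           (const void*)(g.indexOffset * sizeof(GLuint)));
        }
    }

Sorting the groups by texture before this loop gives you the batching mentioned above.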
Taking the standard OpenGL 4.0+ functions and specifications into consideration, I've seen that geometry and shapes can be created in one of two ways:
making use of VAOs & VBOs.
using shader programs.
Which one is the standard way of creating shapes? Are they consistent with each other, or are they two different ways of creating geometry and shapes?
Geometry is loaded into the GPU with VAOs & VBOs.
Geometry shaders produce new geometry based on what was uploaded. Use them for special effects such as particles or shadows (shadow volumes) in a more efficient way.
Tessellation shaders subdivide geometry, for effects such as displacement mapping.
I strongly (like, really strongly) recommend reading this: http://fgiesen.wordpress.com/2011/07/09/a-trip-through-the-graphics-pipeline-2011-index/
VAOs and VBOs are about what geometry to draw (they specify the per-vertex data). Shader programs are about how to draw it (which program gets applied to each provided vertex, each fragment, and so on).
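To make the split concrete, here is a minimal sketch: the VAO/VBO half says what to draw, and a hypothetical, already-compiled shaderProgram says how:

    // One triangle's worth of "what": positions in a VBO, described by a VAO.
    GLuint vao = 0, vbo = 0;
    const float triangle[] = { -0.5f, -0.5f, 0.0f,
                                0.5f, -0.5f, 0.0f,
                                0.0f,  0.5f, 0.0f };

    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);

    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(triangle), triangle, GL_STATIC_DRAW);

    // Attribute 0 = vec3 position, tightly packed.
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);

    // The "how": a vertex + fragment shader program compiled and linked elsewhere.
    glUseProgram(shaderProgram);
    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, 3);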
Let's lay out the full facts.
Shaders need input. Without input that changes, every shader invocation will produce exactly the same values. That's how shaders work. When you issue a draw call, a number of shader invocations are launched. The only variables that will change from invocation to invocation within this draw call are in variables. So unless you use some sort of input, every shader will produce the same outputs.
However, that doesn't mean you absolutely need a VAO that actually contains things. It is perfectly legal (though there are some drivers that don't support it) to render with a VAO that doesn't have any attributes enabled (though you have to use array rendering, not indexed rendering). In which case, all user-defined inputs to the vertex shader (if any) will be filled in with context state, which will be constant.
The vertex shader does have some other, built-in per-vertex inputs generated by the system. Namely gl_VertexID. This is the index used by OpenGL to uniquely identify this particular vertex. It will be different for every vertex.
So you could, for example, fetch geometry data yourself based on this index through uniform buffers, buffer textures, or some other mechanism. Or you can procedurally generate vertex data based on the index. Or something else. You could pass that data along to tessellation shaders for them to tessellate the generated data. Or to geometry shaders to do whatever it is you want with those. However you want to turn that index into real data is up to you.
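For illustration (a minimal sketch of the idea, not the code from the tutorial linked below): a vertex shader that builds a full-screen triangle purely from gl_VertexID, with no vertex attributes at all:

    const char* fullscreenVs = R"GLSL(
    #version 330 core
    void main()
    {
        // Three vertices of an oversized triangle covering the whole screen:
        // IDs 0,1,2 map to (-1,-1), (3,-1), (-1,3).
        vec2 pos = vec2((gl_VertexID << 1) & 2, gl_VertexID & 2) * 2.0 - 1.0;
        gl_Position = vec4(pos, 0.0, 1.0);
    }
    )GLSL";

    // Drawn with an attribute-less VAO:
    //   glBindVertexArray(emptyVao);
    //   glDrawArrays(GL_TRIANGLES, 0, 3);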
Here's an example from my tutorial series that generates vertex data from nothing more than an index.
I've seen that geometry and shapes can be created in one of two ways:
Not either. In modern OpenGL 4 you need both: data and programs.
VBOs and VAOs contain the raw geometry data. Shaders are the programs (usually executed on the GPU) that turn the raw data into pixels on the screen.
Vertex shaders can be used to displace vertices, or to generate them from a formula and the vertex index, which is available as a built-in attribute (gl_VertexID) in later OpenGL versions.
The difference between vertex and geometry shaders is that a vertex shader is a 1:1 mapping, while a geometry shader can create more vertices. This can be used for automatic level-of-detail generation, e.g. for NURBS or Perlin-noise-based terrains.
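To illustrate that 1:N behaviour, a small sketch of a geometry shader that turns each incoming triangle into two (the second copy is offset purely for illustration):

    const char* gsSource = R"GLSL(
    #version 330 core
    layout(triangles) in;
    layout(triangle_strip, max_vertices = 6) out;

    void emitTriangle(vec4 offset)
    {
        for (int i = 0; i < 3; ++i) {
            gl_Position = gl_in[i].gl_Position + offset;
            EmitVertex();
        }
        EndPrimitive();
    }

    void main()
    {
        emitTriangle(vec4(0.0));                   // the original triangle
        emitTriangle(vec4(0.0, 0.1, 0.0, 0.0));    // an extra copy: 1 primitive in, 2 out
    }
    )GLSL";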