OpenGL render .obj files with multiple material and textures - opengl

I'm writing a parser for .obj files with multiple materials and groups (so I'm also parsing usemtl and material files). I can load and render the vertices. How do I deal with different textures and materials?
Do I render each material one by one, or use one giant shader that chooses by material ID? And how do I store the different textures on the GPU? (Currently I am using GL_TEXTURE_2D_ARRAY, but all layers must have the same size.)

So, to handle different materials, each object has material specifications like ambient_color, diffuse_color, and specular_color. You simply pass these values as uniforms to the fragment shader and render each object with its own material specs.
You can also sample several textures in one fragment shader (the limit is GL_MAX_TEXTURE_IMAGE_UNITS, which is at least 16), so an object can be rendered with more than one texture. But most of the time an object is made of groups and each group has just one texture, so you only need a single sampler2D in the fragment shader; only the texture you bind for it changes between groups.
The best way to handle this efficiently is to render the groups that share a texture together, to prevent lots of texture changes.
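As a rough CPU-side sketch of that batching advice (the Group struct and function names here are made up for illustration), you can sort the draw list by texture and count how many glBindTexture calls a frame would then need:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

struct Group {
    std::uint32_t textureId;  // GL texture object name used by this group
    // vertex offsets, counts, material uniforms would live here too
};

// Sort the draw list so groups sharing a texture are adjacent,
// then count how many glBindTexture calls the frame would need.
inline int bindCountAfterSort(std::vector<Group>& groups) {
    std::sort(groups.begin(), groups.end(),
              [](const Group& a, const Group& b) { return a.textureId < b.textureId; });
    int binds = 0;
    std::uint32_t bound = 0;  // 0 is never a valid texture name
    for (const Group& g : groups) {
        if (g.textureId != bound) {  // only bind when the texture changes
            ++binds;
            bound = g.textureId;
        }
        // draw the group here (glDrawElements / glDrawArrays)
    }
    return binds;
}
```

With five groups using textures {3, 1, 3, 2, 1}, an unsorted loop would bind five times; after sorting, only three binds are needed, one per distinct texture.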

Related

Using a different shader for each pixel in OpenGL

I'm trying to write a deferred renderer in OpenGL that supports multiple materials (different lighting models etc.) and layered materials (different materials blended together).
I'm writing the material ID to a g-buffer as well as the standard vertex attribute g-buffers. How would I use a different shader for each pixel in the second stage (when the lighting is calculated and rendered to the screen)?
I thought about using a compute shader to make a list of pixels for each material ID, generating a mixture of quads, points, and maybe lines out of it, then reading these meshes back to the CPU and rendering them with their respective materials. I think this would be a bit slow, since the mesh has to be read and written back each frame.
A. Write an uber-shader that chooses the exact shader path based on the pixel's MaterialID attribute. That could work well for multiple materials. That uber-shader could consist of several sections stitched together programmatically, to simplify development.
B. Reduce materials count. Speaks for itself.
C. Add more channels to your g-buffer to store varying material parameters (e.g. Specular)
D. Do multiple passes with different shaders, using the MaterialID as a sort of "stencil": render the pixel if it matches the pass's material and shader, otherwise discard as early as possible to skip the pixel ASAP.
You can combine these solutions as well.
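As a rough illustration of option A, here is a CPU-side sketch of what the uber-shader's branch would do (the material IDs and shading formulas are made up; in GLSL this would be the same switch on the ID read from the g-buffer):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical material IDs as they would be stored in the g-buffer.
enum MaterialId : std::uint8_t { MAT_LAMBERT = 0, MAT_PHONG = 1, MAT_EMISSIVE = 2 };

// One entry point, one branch per material path -- the uber-shader idea.
inline float shade(MaterialId id, float ndotl, float specular) {
    switch (id) {                 // in GLSL: switch on the MaterialID attribute
        case MAT_LAMBERT:  return ndotl;             // diffuse only
        case MAT_PHONG:    return ndotl + specular;  // diffuse + specular
        case MAT_EMISSIVE: return 1.0f;              // ignores lighting
    }
    return 0.0f;                  // unknown ID: output black
}
```

The point is that a single shader program handles every pixel, so the lighting pass needs only one full-screen draw regardless of how many materials are present.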

Converting .obj per-face variables for OpenGL

As far as I know, OpenGL doesn't support per-face attributes [citation needed]. I have decided to use the material files of .obj files and have already successfully loaded them into my project. However, I thought that materials were used per object group, and I realized that the .obj format can actually use per-face materials. Therefore, a vertex group (or, let's say, mesh) can have more than one material for specific faces of it.
I would be able to convert small variables like specular etc. into per-vertex attributes, but the whole material can vary from face to face: illumination, ambient, specular, texture maps (diffuse, normal, etc.). It would be easy if the materials were per-mesh, so that I could load them as sub-meshes and attach the corresponding materials to them.
How am I going to handle multiple materials for ONE mesh in which the materials are not uniformly distributed among the faces in it?
Firstly, what values do these per-face materials hold? Unless you are able to render them all in a single pass, you may as well split them into separate meshes anyway. If using index buffers, just use several of them, one for each material. Then you can set uniforms / change shaders for each material type.
The way my renderer works:
    iterate through meshes
        bind mesh's vertex array object
        bind mesh's uniform buffer object
        iterate through mesh's materials
            use shaders, bind textures, set uniforms...
            draw material's index buffer with glDrawElements
Of course, you wouldn't want to change shaders for every material, so if you do need to use multiple shaders rather than just changing uniforms, then you will need to batch them together.
This isn't specific to obj/mtl, but any mesh / material format.
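A minimal sketch of the splitting step described above (the Face struct and function name are made up): walk the faces once and collect indices into one index buffer per material, so each material becomes a single glDrawElements call.

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <vector>

// One triangle of the mesh, tagged with the material its faces use
// (as produced by a usemtl statement in the .obj file).
struct Face { std::uint32_t i0, i1, i2; int materialId; };

// Group a mesh's faces by material: one index buffer per material.
inline std::map<int, std::vector<std::uint32_t>>
splitByMaterial(const std::vector<Face>& faces) {
    std::map<int, std::vector<std::uint32_t>> buffers;
    for (const Face& f : faces) {
        auto& ib = buffers[f.materialId];  // index buffer for this material
        ib.push_back(f.i0);
        ib.push_back(f.i1);
        ib.push_back(f.i2);
    }
    return buffers;
}
```

The vertex buffer stays shared; only the index buffers differ, so no vertex data is duplicated even when materials are scattered across the faces.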

How do I render multiple textures in modern OpenGL?

I am currently writing a 2d engine for a small game.
The idea was that I could render the whole scene in just one draw call. I thought I could render every 2d image on a quad which means that I could use instancing.
I imagined that my vertex shader could look like this
...
in vec2 pos;
in mat3 model;
in sampler2d tex;
in vec2 uv;
...
I thought I could just load a texture on the gpu and get a handle to it like I would do with a VBO, but it seems it is not that simple.
It seems that I have to call
glActiveTexture(GL_TEXTURE0..N);
for every texture that I want to load. Now this doesn't seem as easy to program as I thought. How do modern game engines render multiple textures?
I read that the texture limit of GL_TEXTURE is dependent on the GPU but it is at least 45. What if I want to render an image that consists of more than 45 textures for example 90?
It seems that I would have to render the first 45 textures, delete all the textures from the GPU, and load the other 45 textures from the HDD to the GPU. That doesn't seem very reasonable to do every frame, especially when I want to animate a 2D image.
I could easily think that a simple animation of a 2d character could consist of 10 different images. That would mean I could easily over step the texture limit.
A small idea of mine was to combine multiple images in to one mega image and then offset them via uv coordinates.
I wonder if I just misunderstood how textures work in OpenGL.
How would you render multiple textures in OpenGL?
The question is somewhat broad, so this is just a quick overview of some options for using multiple textures in the same draw call.
Bind to multiple texture units
For this approach, you bind each texture to a different texture unit, using the typical sequence:
glActiveTexture(GL_TEXTURE0 + i);
glBindTexture(GL_TEXTURE_2D, tex[i]);
In the shader, you can have either a bunch of separate sampler2D uniforms, or an array of sampler2D uniforms.
The main downside of this is that you're limited by the number of available texture units.
Array textures
You can use array textures. This is done by using the GL_TEXTURE_2D_ARRAY texture target. In many ways, a 2D texture array is similar to a 3D texture. It's basically a bunch of 2D textures stacked on top of each other, and stored in a single texture object.
The downside is that all textures need to have the same size. If they don't, you have to use the largest size for the size of the texture array, and you waste memory for the smaller textures. You'll also have to apply scaling to your texture coordinates if the sizes aren't all the same.
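That coordinate scaling can be sketched as a small helper (the names are illustrative): if a layer's real image is smaller than the array's allocated dimensions, scale the incoming [0,1] UVs down so they cover only the valid region of that layer.

```cpp
#include <cassert>

struct Uv { float u, v; };

// Map [0,1] UVs for an image of imgW x imgH texels into a texture
// array layer allocated at arrayW x arrayH texels.
inline Uv scaleToLayer(Uv uv, int imgW, int imgH, int arrayW, int arrayH) {
    return { uv.u * float(imgW) / float(arrayW),
             uv.v * float(imgH) / float(arrayH) };
}
```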
Texture atlas
This is the idea you already presented. You store all textures in a single large texture, and use the texture coordinates to control which texture is used.
While a popular approach, there are some technical challenges with this. You have to be careful at the seams between textures so that they don't bleed into each other when using linear sampling. And while this approach, unlike texture arrays, allows for different texture sizes without wasting memory, allocating regions within the atlas gets a little trickier with variable sizes.
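The UV remapping and the seam problem above can be sketched like this (the structs and the half-texel inset amount are illustrative assumptions): each sprite's local [0,1] UVs are remapped into its atlas rectangle, inset by half a texel so linear filtering doesn't bleed in the neighbouring sprite.

```cpp
#include <cassert>

struct AtlasUv { float u, v; };
struct Rect { float x, y, w, h; };     // sprite region, in [0,1] atlas space

// Remap a sprite's local [0,1] UVs into the atlas, with a half-texel
// inset on each side to keep linear sampling inside the sprite.
inline AtlasUv atlasUv(AtlasUv local, Rect r, float atlasSize) {
    float inset = 0.5f / atlasSize;    // half a texel in atlas UV space
    return { r.x + inset + local.u * (r.w - 2.0f * inset),
             r.y + inset + local.v * (r.h - 2.0f * inset) };
}
```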
Bindless textures
This is only available as an extension so far: ARB_bindless_texture.
You need to learn about the difference of texture units and texture objects.
Texture units are like "texture cartridges" of the OpenGL rasterizer. The rasterizer has a limited number of "cartridge" slots (called texture units). To load a texture into a texture unit, you first select the unit with glActiveTexture, then you load the texture "cartridge" (the texture object) using glBindTexture.
The number of texture objects you can have is limited only by your system's memory (and storage capabilities), but only a limited number of textures can be "slotted" into the texture units at the same time.
Samplers are like "taps" into the texture units. Different samplers within a shader may "tap" into the same texture unit. By setting the sampler uniform to a texture unit you select which unit you want to sample from.
And then you can also have the same texture "slotted" into multiple texture units at the same time.
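The "cartridge" analogy above can be modeled as a tiny CPU-side data structure (purely illustrative, not real GL code): a fixed set of units, each holding at most one bound texture object, and sampler uniforms that are just indices into those units.

```cpp
#include <array>
#include <cassert>
#include <cstdint>

// Toy model of the GL binding points described in the answer.
struct Context {
    std::array<std::uint32_t, 16> unit{};  // unit -> bound texture name (0 = none)
    int active = 0;                        // last glActiveTexture selection

    void activeTexture(int i) { active = i; }           // glActiveTexture(GL_TEXTURE0 + i)
    void bindTexture(std::uint32_t tex) { unit[active] = tex; }  // glBindTexture
    std::uint32_t resolve(int samplerUniform) const {   // what a sampler would read
        return unit[samplerUniform];
    }
};
```

Note how the same texture object can sit in two units at once, and how a sampler uniform knows nothing about texture objects, only about unit indices.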
Update (some clarification)
I read that the texture limit of GL_TEXTURE is dependent on the GPU but it is at least 45. What if I want to render an image that consists of more than 45 textures for example 90?
Normally you don't try to render the whole image with a single drawing call. It's practically impossible to catch all variations on which textures to use in what situation. Normally you write shaders for specific looks of a "material". Say you have a shader simulating paint on some metal. You'd have 3 textures: Metal, Paint and a modulating texture that controls where metal and where paint is visible. The shader would then have 3 sampler uniforms, one for each texture. To render the surface with that appearance you'd
select the shader program to use (glUseProgram)
for each texture, activate its texture unit in turn (glActiveTexture(GL_TEXTURE0 + i)) and bind the texture (glBindTexture)
set the sampler uniforms to the texture units to use (glUniform1i(…, i)).
draw the geometry.

OpenGL rendering with multiple textures

Is there a way in OpenGL to render a vertex buffer using multiple independent textures in VRAM without manually binding them (i.e. returning control to the CPU) in between?
Edit: So I'm currently rendering objects with multiple textures by rendering with a single texture, binding a new texture, and repeating, until everything is done. This is slow and requires returning control to CPU and making syscalls for every texture. Is there a way to avoid this switching, and make multiple textures available to the shaders to choose based on vertex data?
As mentioned in the comments on the question, glActiveTexture is the key - samplers in GLSL bind to texture units (e.g. GL_TEXTURE0), not specific texture targets (e.g. GL_TEXTURE_2D), so you can bind a GL_TEXTURE_2D texture under glActiveTexture(GL_TEXTURE0), another under glActiveTexture(GL_TEXTURE1), and then set your GLSL sampler2D values to 0, 1, etc. (NB: do not make your sampler2D values GL_TEXTURE0, GL_TEXTURE1, etc. - they are offsets from GL_TEXTURE0).
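That NB is worth spelling out, because the GL_TEXTUREi enums are large consecutive constants, not small indices. The values below match the ones in the official GL headers; the constants carry a trailing underscore only to keep this snippet self-contained.

```cpp
#include <cassert>

// Real header values: GL_TEXTURE0 is 0x84C0 and the enums are consecutive.
constexpr unsigned GL_TEXTURE0_ = 0x84C0;
constexpr unsigned GL_TEXTURE1_ = 0x84C1;

// The value a sampler2D uniform expects is the *offset* from GL_TEXTURE0,
// i.e. the unit index you pass to glUniform1i -- never the enum itself.
constexpr int samplerValueFor(unsigned textureUnitEnum) {
    return int(textureUnitEnum - GL_TEXTURE0_);
}
```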

Binding textures inside shaders

I am coding a wavefront (.obj) loader with VBO's.
When "usemtl" is called, I am thinking about sending textureID together with vertex, texCoord and normal data.
With that texture ID can I bind the texture inside vertex/fragment shader without calling glBindTexture?
With that texture ID can I bind the texture inside vertex/fragment shader without calling glBindTexture?
No. Textures are not bound to shaders; they're bound to the context.
If you want to get technical, NV_bindless_texture allows such functionality, but that's NVIDIA-specific (ARB_bindless_texture offers similar functionality, though it is also only an extension).
That is a problem with materials in general: they need to be switched before rendering the geometry.
The simplest way is:
    foreach object in renderQueue
        set_material()
        draw_geometry()
And of course you get into some trouble when one object has to be rendered with two different materials. Another problem is performance: you would usually sort objects by material to save on the switching (of textures, shaders, and other data).
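One common way to do that sorting (sketched here with made-up names) is a composite sort key: order the queue by shader first, then texture, so the most expensive state switches happen least often.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

struct DrawItem {
    std::uint32_t shader;   // shader program name
    std::uint32_t texture;  // texture object name
};

// Pack (shader, texture) into one 64-bit key; shader in the high bits
// so it dominates the ordering.
inline std::uint64_t sortKey(const DrawItem& d) {
    return (std::uint64_t(d.shader) << 32) | d.texture;
}

inline void sortQueue(std::vector<DrawItem>& queue) {
    std::sort(queue.begin(), queue.end(),
              [](const DrawItem& a, const DrawItem& b) { return sortKey(a) < sortKey(b); });
}
```

After sorting, all draws sharing a shader are contiguous, and within each shader run, all draws sharing a texture are contiguous, so the render loop only switches state at run boundaries.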