How do I render multiple textures in modern OpenGL?

I am currently writing a 2D engine for a small game.
The idea was that I could render the whole scene in just one draw call. I thought I could render every 2D image on a quad, which means that I could use instancing.
I imagined that my vertex shader could look like this:
...
in vec2 pos;
in mat3 model;
in sampler2D tex;
in vec2 uv;
...
I thought I could just load a texture onto the GPU and get a handle to it like I would with a VBO, but it seems it is not that simple.
It seems that I have to call
glActiveTexture(GL_TEXTURE0..N);
for every texture that I want to load. Now this doesn't seem as easy to program as I thought. How do modern game engines render multiple textures?
I read that the number of texture units (GL_TEXTURE0..N) depends on the GPU, but that it is at least 45. What if I want to render an image that consists of more than 45 textures, for example 90?
It seems that I would have to render the first 45 textures, delete all of those textures from the GPU, and load the other 45 textures from the HDD onto the GPU. That doesn't seem like a reasonable thing to do every frame, especially when I want to animate a 2D image.
I can easily imagine a simple animation of a 2D character consisting of 10 different images. That would mean I could easily overstep the texture limit.
A small idea of mine was to combine multiple images into one mega image and then offset them via UV coordinates.
I wonder if I just misunderstood how textures work in OpenGL.
How would you render multiple textures in OpenGL?

The question is somewhat broad, so this is just a quick overview of some options for using multiple textures in the same draw call.
Bind to multiple texture units
For this approach, you bind each texture to a different texture unit, using the typical sequence:
glActiveTexture(GL_TEXTURE0 + i);
glBindTexture(GL_TEXTURE_2D, tex[i]);
In the shader, you can have either a bunch of separate sampler2D uniforms, or an array of sampler2D uniforms.
The main downside of this is that you're limited by the number of available texture units.
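For illustration, the separate-uniforms variant could look roughly like this (the uniform names are made up; each sampler is associated with a texture unit via glUniform1i, as shown in later answers):
#version 330 core
in vec2 uv;
out vec4 fragColor;
uniform sampler2D baseTex;   // expected to be bound to texture unit 0
uniform sampler2D detailTex; // expected to be bound to texture unit 1
void main()
{
    // modulate the base texture by the detail texture
    fragColor = texture(baseTex, uv) * texture(detailTex, uv);
}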
Array textures
You can use array textures. This is done by using the GL_TEXTURE_2D_ARRAY texture target. In many ways, a 2D texture array is similar to a 3D texture. It's basically a bunch of 2D textures stacked on top of each other, and stored in a single texture object.
The downside is that all textures need to have the same size. If they don't, you have to use the largest size for the size of the texture array, and you waste memory for the smaller textures. You'll also have to apply scaling to your texture coordinates if the sizes aren't all the same.
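A minimal sketch of creating and filling one (the sizes, layer count, and variable names are just assumptions for the example):
GLuint texArray;
glGenTextures(1, &texArray);
glBindTexture(GL_TEXTURE_2D_ARRAY, texArray);
// allocate 16 layers of 256x256 RGBA8, a single mip level
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8, 256, 256, 16, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
// upload each source image into its own layer ("layer" and "pixels" are your own data)
glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, layer, 256, 256, 1,
                GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// in GLSL: uniform sampler2DArray textures;
//          color = texture(textures, vec3(uv, layerIndex));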
Texture atlas
This is the idea you already presented. You store all textures in a single large texture, and use the texture coordinates to control which texture is used.
While a popular approach, there are some technical challenges with this. You have to be careful at the seams between textures so that they don't bleed into each other when using linear sampling. And while this approach, unlike texture arrays, allows for different texture sizes without wasting memory, allocating regions within the atlas gets a little trickier with variable sizes.
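The lookup itself is simple; here is a sketch of the fragment-shader side, assuming you track each sub-image's placement as an offset and scale in normalized atlas coordinates (the uniform names are made up):
#version 330 core
in vec2 uv;                 // the sprite's local UVs in [0,1]
out vec4 fragColor;
uniform sampler2D atlas;
uniform vec2 regionOffset;  // lower-left corner of the sub-image within the atlas
uniform vec2 regionScale;   // size of the sub-image relative to the whole atlas
void main()
{
    // remap the local UVs into the sub-region of the atlas;
    // with linear filtering you may want to shrink the region by half a texel to avoid bleeding
    fragColor = texture(atlas, regionOffset + uv * regionScale);
}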
Bindless textures
This is only available as an extension so far: ARB_bindless_texture.
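If the extension is present, the rough usage pattern is to turn a texture object into a 64-bit handle, make it resident, and hand that value to the shader. A sketch only (tex and location are placeholders; check for ARB_bindless_texture before relying on it):
// get a 64-bit handle for an existing texture object and make it resident
GLuint64 handle = glGetTextureHandleARB(tex);
glMakeTextureHandleResidentARB(handle);
// hand the handle to the shader, e.g. directly as a uniform...
glUniformHandleui64ARB(location, handle);
// ...or packed by the thousands into a uniform/shader storage buffer.
// GLSL side (with #extension GL_ARB_bindless_texture : require):
//   layout(bindless_sampler) uniform sampler2D tex;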

You need to learn about the difference between texture units and texture objects.
Texture units are like "texture cartridges" of the OpenGL rasterizer. The rasterizer has a limited number of "cartridge" slots (called texture units). To load a texture into a texture unit you first select the unit with glActiveTexture, then you load the texture "cartridge" (the texture object) using glBindTexture.
The number of texture objects you can have is limited only by your system's memory (and storage capabilities), but only a limited number of textures can be "slotted" into texture units at the same time.
Samplers are like "taps" into the texture units. Different samplers within a shader may "tap" into the same texture unit. By setting the sampler uniform to a texture unit you select which unit you want to sample from.
And then you can also have the same texture "slotted" into multiple texture units at the same time.
Update (some clarification)
I read that the number of texture units (GL_TEXTURE0..N) depends on the GPU, but that it is at least 45. What if I want to render an image that consists of more than 45 textures, for example 90?
Normally you don't try to render the whole image with a single drawing call. It's practically impossible to catch all variations on which textures to use in what situation. Normally you write shaders for specific looks of a "material". Say you have a shader simulating paint on some metal. You'd have 3 textures: Metal, Paint and a modulating texture that controls where metal and where paint is visible. The shader would then have 3 sampler uniforms, one for each texture. To render the surface with that appearance you'd
select the shader program to use (glUseProgram)
for each texture, activate the corresponding texture unit (glActiveTexture(GL_TEXTURE0 + i)) and bind the texture (glBindTexture)
set the sampler uniforms to the texture units to use (glUniform1i(…, i)).
draw the geometry.
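Put together for the metal/paint example, the per-draw setup might look roughly like this (the texture, uniform, and variable names are made up for the sketch):
glUseProgram(paintedMetalProgram);

// bind each texture to its own texture unit
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, metalTex);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, paintTex);
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, maskTex);

// point each sampler uniform at the unit its texture is bound to
glUniform1i(glGetUniformLocation(paintedMetalProgram, "metal"), 0);
glUniform1i(glGetUniformLocation(paintedMetalProgram, "paint"), 1);
glUniform1i(glGetUniformLocation(paintedMetalProgram, "mask"), 2);

// draw the geometry
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);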

Related

OpenGL confused about multiple textures

I'm developing a 3d program with OpenGL 3.3. So far I managed to render many cubes and some spheres. I need to texture all the faces of all the cubes with a common texture except one face which should have a different texture. I tried with a single texture and everything worked fine but when I try to add another one the program seems to behave randomly.
My questions:
is there a suitable way of passing multiple textures to the shaders?
how am I supposed to keep track of faces in order to render the right texture?
Googling I found out that it could be useful to define vertices twice, but I don't really get why.
Is there a suitable way of passing multiple textures to the shaders?
You'd use glUniform1i() along with glActiveTexture(). So, given that your fragment shader has multiple sampler2D uniforms:
uniform sampler2D tex1;
uniform sampler2D tex2;
Then as you're setting up your shader, you set the sampler uniforms to the texture units you want them associated with:
glUniform1i(glGetUniformLocation(program, "tex1"), 0);
glUniform1i(glGetUniformLocation(program, "tex2"), 1);
You then set the active texture to either GL_TEXTURE0 or GL_TEXTURE1 and bind a texture.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture1);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texture2);
How am I supposed to keep track of faces in order to render the right texture?
It depends on what you want.
You could decide which texture to use based on the normal (as is usually done with tri-planar texture mapping).
You could also have another attribute that decides how much to crossfade between the two textures.
color = mix(texture(tex1, texCoord), texture(tex2, texCoord), 0.2);
Like before, you can equally have a uniform float transition. This would allow you to fade between textures, making it possible to fade between slides like in PowerPoint, so to speak.
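A minimal fragment shader along those lines might look like this (tex1, tex2 and the transition uniform are just illustrative names):
#version 330 core
in vec2 texCoord;
out vec4 color;
uniform sampler2D tex1;
uniform sampler2D tex2;
uniform float transition; // 0.0 = only tex1, 1.0 = only tex2
void main()
{
    // sample both textures and crossfade between them
    color = mix(texture(tex1, texCoord), texture(tex2, texCoord), transition);
}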
Try reading LearnOpenGL's Textures tutorial.
Lastly, the spec guarantees a minimum number of combined texture units (48 in OpenGL 3.3, 80 or more in OpenGL 4.x). You can check how many you actually have available by querying GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS with glGetIntegerv().
You can use index buffers. Define the vertices once, and then use one index buffer to draw the portion of the mesh with the first texture, then use the second index buffer to draw the portion that needs the second texture.
Here's the general formula:
Set up the vertex buffer
Set up the shader
Set up the first texture
Set up and draw the index buffer for the part of the mesh that should use the first texture
Set up the second texture
Set up and draw the index buffer for the part of the mesh that should use the second texture.
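A rough sketch of those steps, assuming both index ranges live in a single element buffer (firstCount indices for the first texture followed by secondCount for the second; all names are placeholders):
glUseProgram(program);
glBindVertexArray(vao); // vertex buffer and element buffer already set up in the VAO

// draw the part of the mesh that uses the first texture
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture1);
glDrawElements(GL_TRIANGLES, firstCount, GL_UNSIGNED_INT, (void*)0);

// rebind and draw the part that uses the second texture
glBindTexture(GL_TEXTURE_2D, texture2);
glDrawElements(GL_TRIANGLES, secondCount, GL_UNSIGNED_INT,
               (void*)(firstCount * sizeof(GLuint)));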

How to bind multiple textures to primitives drawn using `glDrawRangeElements()`?

I am using glDrawRangeElements() to draw textured quads (as triangles). My problem is that I can only bind one texture before that function call, and so all quads are drawn using the same texture.
How to bind a different texture for each quad?
Is this possible when using the glDrawRangeElements() function? If not, what other OpenGL function should I look at?
First, you need to give your fragment shader access to multiple textures. To do this you can use:
Array textures - basically a 3D texture, where the 3rd dimension is the number of different 2D texture layers. The restriction is that all the textures in the array must be the same size. Cube map array textures (GL 4.0 and later) can also be used to stack multiple textures.
Bindless textures - these you can use on relatively new hardware only. For Nvidia that's Kepler and later. Because a bindless texture is essentially a pointer to texture memory on the GPU, you can fill an array or uniform buffer with thousands of them and then index into that array in the fragment shader, accessing the sampler object directly.
Now, how can you index into those arrays per primitive? There are a number of ways. First, you can use instanced drawing if you render the same primitives several times. There you have gl_InstanceID in GLSL to track which instance is currently drawn.
If you don't use instancing and still want to texture different parts of the geometry in a single draw call, it gets more complex. You should add texture index information on a per-vertex basis. That is, if your geometry has an interleaved per-vertex structure like VTN, VTN, VTN... (where V = vertices, T = texture coords, N = normals), you should add another set of data, let's call it I (texture index), so your vertex array will have the structure VTNI, VTNI, VTNI...
You can also use a separate vertex buffer containing only the texture indices, but for large geometry buffers that will probably be less efficient; interleaving usually allows faster data access.
Once you have that, you can pass the texture index as a varying into the fragment shader (declared flat so it is not interpolated) and index into the specific texture. Yes, that means your vertex array will be larger and contain redundant data, but that's the downside of using multiple textures at the single-primitive level.
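To illustrate the per-vertex index approach with an array texture (the attribute locations and names are made up for the sketch; the integer attribute must be set up with glVertexAttribIPointer):
// vertex shader
#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec2 texCoord;
layout(location = 2) in int texIndex;   // the extra "I" attribute
uniform mat4 mvp;
out vec2 uv;
flat out int layer;                     // flat: not interpolated across the triangle
void main()
{
    uv = texCoord;
    layer = texIndex;
    gl_Position = mvp * vec4(position, 1.0);
}

// fragment shader
#version 330 core
uniform sampler2DArray textures;
in vec2 uv;
flat in int layer;
out vec4 fragColor;
void main()
{
    // the third texture coordinate selects the array layer
    fragColor = texture(textures, vec3(uv, float(layer)));
}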
Hope it helps.

OpenGL rendering with multiple textures

Is there a way in OpenGL to render a vertex buffer using multiple independent textures in VRAM without manually binding them (i.e. returning control to the CPU) in between?
Edit: So I'm currently rendering objects with multiple textures by rendering with a single texture, binding a new texture, and repeating, until everything is done. This is slow and requires returning control to CPU and making syscalls for every texture. Is there a way to avoid this switching, and make multiple textures available to the shaders to choose based on vertex data?
As mentioned in the comments on the question, glActiveTexture is the key - samplers in GLSL bind to texture units (e.g. GL_TEXTURE0), not specific texture targets (e.g. GL_TEXTURE_2D), so you can bind one GL_TEXTURE_2D texture under glActiveTexture(GL_TEXTURE0), another under glActiveTexture(GL_TEXTURE1), and then set your GLSL sampler2D uniforms to 0, 1, etc. (NB: do not set your sampler2D values to GL_TEXTURE0, GL_TEXTURE1, etc. - the uniform values are offsets from GL_TEXTURE0).
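A short sketch of that, assuming two already-created textures and a linked program whose sampler2D uniforms are called diffuseTex and normalTex (all names here are illustrative):
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, diffuseTexture);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, normalTexture);

glUseProgram(program);
// pass the unit index (0, 1), NOT GL_TEXTURE0/GL_TEXTURE1
glUniform1i(glGetUniformLocation(program, "diffuseTex"), 0);
glUniform1i(glGetUniformLocation(program, "normalTex"), 1);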

How many depthtextures can i bind to a framebuffer?

I am trying to create shadow maps of many objects in a sceneRoom, with their shadows being projected onto the sceneRoom. Until now I've been able to project the shadows of the sceneRoom onto itself, but I want to project the shadows of other objects in the sceneRoom onto the sceneRoom's floor.
Is it possible to create multiple depth textures in one framebuffer? Or should I use several framebuffers, each with one depth texture?
There is only one GL_DEPTH_ATTACHMENT point, so you can only have at most one attached depth buffer at any time. So you have to use some other method.
No, there is only one attachment point (well, technically two if you count GL_DEPTH_STENCIL_ATTACHMENT) for depth in an FBO. You can only attach one thing to the depth, but that does not mean you are limited to a single image.
You can use an array texture to store multiple depth images and then attach this array texture to GL_DEPTH_ATTACHMENT.
However, the only way to draw into an explicit array level in this texture would be to use a Geometry Shader to do layered rendering. Since it sounds like each one of these depth images you are interested in are actually completely different sets of geometry, this does not sound like the approach you want. If you used a Geometry Shader to do this, you would process the same set of geometry for each layer.
One thing you could consider is actually using a single depth buffer, but packing your shadow maps into an atlas. If each of your shadow maps is 512x512, you could store 4 of them in a single texture with dimensions 1024x1024 and adjust texture coordinates (and viewport when you draw into the atlas) appropriately. The reason you might consider doing this is because changing the render target (FBO state) tends to be the most expensive thing you would do between draw calls in a series of depth-only draws. You might change a few uniforms or vertex pointers, but those are dirt cheap to change.
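For instance, packing four 512x512 shadow maps into one 1024x1024 depth texture might look roughly like this (fbo, shadowAtlas and renderShadowCaster() are placeholders for your own setup):
// a single FBO with one 1024x1024 depth texture attached, depth-only rendering
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, shadowAtlas, 0);
glDrawBuffer(GL_NONE); // no color attachment
glClear(GL_DEPTH_BUFFER_BIT);

// render each shadow map into its own 512x512 quadrant of the atlas
glViewport(0,   0,   512, 512); renderShadowCaster(0);
glViewport(512, 0,   512, 512); renderShadowCaster(1);
glViewport(0,   512, 512, 512); renderShadowCaster(2);
glViewport(512, 512, 512, 512); renderShadowCaster(3);

// when sampling, scale/offset the shadow coordinates into the matching quadrant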

How do I assign multiple textures to a single mesh in OpenGL?

First example:
You can take a huge rock shaped mesh and put a tiled rock texture all over it.
Now, some places need to be covered with a grass texture (or other vegetation).
Another example:
Usually, terrain is built from tiled textures. In order to achieve a less obviously tiled look, you can apply a 4-times-bigger (or 16, and so on) tiled texture on top of it, and by that you'll get a nice "random"-looking tiled texture (I've seen that in the UDK docs).
Blender (the 3d graphics app) is OpenGL based, and it allows you to assign multiple materials to a single mesh.
How can I do it in my own OpenGL application?
Thanks,
Amir
P.S:
I'm looking for a better solution than rendering 50 tris with tex a and 3 more tris with tex b.
What you're looking for is called multitexturing. Modern graphics cards have several texture units that each can sample a different texture. When you render your rock you specify vertices that have UV coordinates for each texture you want to render.
In OpenGL you can use glActiveTexture to select your active texture unit so that you can bind a texture to it and use it in subsequent rendering. Your vertices will need additional texture coordinate pairs; one pair per texture you intend to render.
The modern way to do multitexturing is using shaders (GLSL in OpenGL typically). Load and bind each texture to a different texture unit, set your shader uniforms to the value of the texture units (0 for texture unit 0 etc) you're using, sample each texture, and blend using the desired blending function to get your output color.
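As a small illustrative sketch (the blend-map idea and the names here are just one possible setup, not the only way), a fragment shader blending a tiled rock and grass texture by a third mask texture could look like this:
#version 330 core
in vec2 uv;        // tiled UVs for the rock/grass textures
in vec2 maskUv;    // a second UV set covering the whole mesh once
out vec4 fragColor;
uniform sampler2D rockTex;
uniform sampler2D grassTex;
uniform sampler2D blendMap; // red channel: 0 = rock, 1 = grass
void main()
{
    vec4 rock = texture(rockTex, uv);
    vec4 grass = texture(grassTex, uv);
    float t = texture(blendMap, maskUv).r;
    // blend between the two tiled textures based on the mask
    fragColor = mix(rock, grass, t);
}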