How to bind multiple textures to primitives drawn using `glDrawRangeElements()`?

I am using glDrawRangeElements() to draw textured quads (as triangles). My problem is that I can only bind one texture before that function call, and so all quads are drawn using the same texture.
How can I bind a different texture for each quad?
Is this possible when using the glDrawRangeElements() function? If not, what other OpenGL function should I look at?

First, you need to give your fragment shader access to multiple textures. To do this you can use:
Array textures - basically a 3D texture where the third dimension is the number of different 2D texture layers. The restriction is that all the textures in the array must be the same size. Cube map array textures (GL 4.0 and later) can also be used to stack multiple textures.
Bindless textures - these you can use on relatively new hardware only; for Nvidia that's Kepler and later. Because a bindless texture is essentially a pointer to texture memory on the GPU, you can fill an array or uniform buffer with thousands of them and then index into that array in the fragment shader, accessing the sampler object directly.
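For the bindless route, a minimal fragment-shader sketch might look like this, assuming ARB_bindless_texture is available; the buffer layout and names here are hypothetical:

#version 450
#extension GL_ARB_bindless_texture : require
// Hypothetical SSBO filled on the CPU with handles from glGetTextureHandleARB(),
// each made resident with glMakeTextureHandleResidentARB() before drawing.
layout(std430, binding = 0) readonly buffer TextureHandles {
    sampler2D textures[];
};
in vec2 uv;
flat in int texIndex;  // per-primitive index; strictly this should be dynamically uniform
out vec4 fragColor;
void main() {
    fragColor = texture(textures[texIndex], uv);
}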
Now, how can you index into those arrays per primitive? There are a number of ways. First, you can use instanced drawing if you render the same primitives several times; there you have gl_InstanceID in GLSL to track which instance is currently drawn.
If you don't use instancing and want to texture different parts of the geometry in a single draw call, it is more complex. You should add texture index information on a per-vertex basis. That is, if your geometry has an interleaved per-vertex structure looking like this:
VTN,VTN,VTN... (V - vertex position, T - texture coords, N - normal), you should add another set of data, let's call it I (texture index), so your vertex array will have the structure VTNI,VTNI,VTNI...
You can also use a separate vertex buffer containing only the texture indices, but for large geometry buffers that will probably be less efficient; interleaving usually allows faster data access.
Once you have it, you can pass that texture index as a varying into the fragment shader (declared flat to make sure it is not interpolated) and index into the specific texture. Yes, that means your vertex array will be larger and contain redundant data, but that's the downside of using multiple textures at the single-primitive level.
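For the array-texture route, that could look like the following sketch (attribute locations and names are illustrative; the integer attribute must be set up with glVertexAttribIPointer):

// Vertex shader: pass the per-vertex texture index through unchanged.
#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec2 texCoord;
layout(location = 2) in int texIndex;  // the extra "I" in VTNI
uniform mat4 mvp;
out vec2 uv;
flat out int layer;  // flat: not interpolated across the primitive
void main() {
    uv = texCoord;
    layer = texIndex;
    gl_Position = mvp * vec4(position, 1.0);
}

// Fragment shader: select the layer of the array texture.
#version 330 core
uniform sampler2DArray textures;
in vec2 uv;
flat in int layer;
out vec4 fragColor;
void main() {
    fragColor = texture(textures, vec3(uv, float(layer)));
}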
Hope it helps.

Related

Efficiently transforming many different models in modern OpenGL

Suppose I want to render many different models, each with a different transformation matrix I want to be applied to their vertices. As far as I understand, the naive approach is to specify a matrix uniform in the vertex shader, the value of which is updated for each mesh during rendering.
It's obvious to me that this is a bad idea, due to the expense of many uniform updates and draw calls. So, what is the most efficient way to achieve this in modern OpenGL?
I've genuinely tried to find a straight, clear answer to this question. Most answers I find vaguely mention UBOs, or instance drawing (which afaik won't work unless you are drawing instances of the same mesh many times, which is not my goal).
With OpenGL 4.6 or with ARB_shader_draw_parameters, each draw in a multi-draw rendering command (functions of the form glMultiDraw*) is assigned a draw index, from 0 to one less than the number of draws specified by that function. This index is provided to the vertex shader via the gl_DrawID input value. You can then use this index to fetch a matrix from any number of constructs: UBOs, SSBOs, buffer textures, etc.
This works for multi-draw indirect rendering as well. So in theory, you can have a compute shader operation generate a bunch of rendering commands, then render your entire scene with a single draw call (assuming that all of your objects live in the same vertex buffers and can use the same shader and other state). Or at the very least, a large portion of the scene.
Furthermore, this index is considered dynamically uniform, so you can also use it (or values derived from it and other dynamically uniform values) to index into arrays of textures, fetch a texture from an array of bindless textures, or the like.
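As a sketch of the shader side, assuming the implementation exposes vertex-stage SSBOs (a UBO works the same way); the buffer layout and names are illustrative:

#version 460 core
layout(location = 0) in vec3 position;
// One model matrix per draw, uploaded by the application.
layout(std430, binding = 0) readonly buffer ModelMatrices {
    mat4 model[];
};
uniform mat4 viewProj;
void main() {
    gl_Position = viewProj * model[gl_DrawID] * vec4(position, 1.0);
}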

How to draw many textured quads faster, and retain glScissor (or something like it)?

I'm using OpenGL 4 and C++11.
Currently I make a whole bunch of individual calls to glDrawElements using separate VAOs with a separate VBO and an IBO.
I do this because the texture coords change for each, and my vertex data features the texture coords. I understand that there's some redundant position information in this vertex data; however, it's always -1,-1,1,1 because I use a translation and a scale matrix in my vertex shader to then position and scale the vertex data.
The VAO, VBO, IBO, position and scale matrix and texture ID are stored in an object. It's one object per quad.
Currently, some of the drawing would occur like this:
Draw a quad object via glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0). The bound VBO is just -1,-1,1,1 and the IBO draws me a quad. The bound VBO also contains the texture coords for a common texture (the same texture is used to texture all drawn quads). Matrix transformations in the shader position it.
Repeat with another quad object
glEnable(GL_SCISSOR_TEST) is called and the position information of the previous quad is used in a call to glScissor
Next quad object is drawn; only the parts of it visible from the previous quad are actually shown.
Draw another quad object
The performance I'm getting now is acceptable but I want it faster because I've only scratched the surface of what I have in mind. So I'm looking at optimizing. So far I've read that I should:
Remove the position information from my vertex data and just keep texture coords. Instead bind a single position VBO at the start of drawing quads so it's used by all of them.
But I'm unsure how this would work, because I can only have one VBO active at any one time.
Would I then have to call glBufferSubData and update the texture coordinates prior to drawing each quad? Would this be better for performance or worse (a call to glBindVertexArray for every object versus a call to glBufferSubData)?
Would I still pass the position and scale as matrices to the shader, or would I take that opportunity to also update the position info of the vertices along with the texture coords? Which would be faster?
Create one big VBO with or without an IBO and update the vertex data for the position (rather than use a transformation and scale matrix) of each quad within this. It seems like this would be difficult to manage.
Even if I did manage to do this, I would only have a single glDraw call, which sounds fast. Is this true? What sort of performance impact does a single glBindVertexArray call have over multiple?
I don't think there's any way to use this method to implement something like the glScissor call that I'm making now?
Another option I've read is instancing. So I draw the quad however many times I need it; which means I would pass the shader an array of translation matrices and an array of texture coords?
Would this be a lot faster?
I think I could do something like the glScissor test by passing an additional array of booleans which defines whether the current quad should be only drawn within the bounds of the previous one. However, I think this means that for each gl_InstanceID I would have to traverse all previous instances looking for true and false values, and it seems like it would be slow.
I'm trying to save time by not implementing all of these individually. Hopefully an expert can point me towards which is probably better. If anyone has an even better idea, please let me know.
You can have multiple VBOs attached to different attributes!
The following sequence binds two VBOs to attributes 0 and 1. Note that glBindBuffer() only sets the current buffer temporarily; the actual VBO-to-attribute assignment is made when glVertexAttribPointer() is called.
glBindBuffer(GL_ARRAY_BUFFER, buf1);  // buf1 becomes the current array buffer
glVertexAttribPointer(0, ...);        // attribute 0 now sources its data from buf1
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, buf2);  // rebinding does not disturb attribute 0
glVertexAttribPointer(1, ...);        // attribute 1 now sources its data from buf2
glEnableVertexAttribArray(1);
The fastest way to provide quad positions & sizes is to use a texture and sample it inside the vertex shader. You'd need at least an RGBA (x, y, width, height) texture with 16 bits per channel. But then you can update quad positions using glTexSubImage2D(), or you could even render into it via an FBO.
Everything other than that will perform slower. Of course, if you want, we can elaborate on using uniforms, attribs in VBOs, or using attribs without enabled arrays for them.
Putting it all together:
use a single VBO and store the quad id in it (an int) plus your texturing data
prepare the (x, y, w, h) texture and define the mapping from quad id to its texcoord, i.e. u = quad_id & 0xFF, v = quad_id >> 8 (for a 256x256 texture, max 65536 quads)
use the vertex shader to sample the displacement and size from that texture for the quad_id stored in the attribute (or use gl_VertexID / 4 or gl_VertexID / 6), as sketched below
fill the VBO and the texture
draw everything with a single glDrawArrays or glDrawElements call
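A minimal vertex-shader sketch of that scheme; the attribute setup (via glVertexAttribIPointer for the integer id) and the quadData texture format (256x256 RGBA16F holding x, y, width, height) are assumptions:

#version 330 core
layout(location = 0) in vec2 corner;  // unit-quad corner in [0,1]^2
layout(location = 1) in int quadId;   // which quad this vertex belongs to
uniform sampler2D quadData;           // (x, y, width, height) per quad
uniform mat4 projection;
void main() {
    ivec2 texel = ivec2(quadId & 0xFF, quadId >> 8);  // quad id -> texel coordinate
    vec4 posSize = texelFetch(quadData, texel, 0);
    vec2 pos = posSize.xy + corner * posSize.zw;      // place and scale the unit corner
    gl_Position = projection * vec4(pos, 0.0, 1.0);
}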

How do I render multiple textures in modern OpenGL?

I am currently writing a 2d engine for a small game.
The idea was that I could render the whole scene in just one draw call. I thought I could render every 2d image on a quad which means that I could use instancing.
I imagined that my vertex shader could look like this
...
in vec2 pos;
in mat3 model;
in sampler2d tex;
in vec2 uv;
...
I thought I could just load a texture on the gpu and get a handle to it like I would do with a VBO, but it seems it is not that simple.
It seems that I have to call
glActiveTexture(GL_TEXTURE0..N);
for every texture that I want to load. Now this doesn't seem as easy to program as I thought. How do modern game engines render multiple textures?
I read that the texture limit of GL_TEXTURE is dependent on the GPU but it is at least 45. What if I want to render an image that consists of more than 45 textures for example 90?
It seems that I would have to render the first 45 textures, delete all those textures from the GPU, and load the other 45 textures from the HDD to the GPU. That doesn't seem very reasonable to do every frame, especially when I want to animate a 2D image.
I could easily imagine that a simple animation of a 2D character could consist of 10 different images. That would mean I could easily overstep the texture limit.
A small idea of mine was to combine multiple images into one mega-image and then offset them via UV coordinates.
I wonder if I just misunderstood how textures work in OpenGL.
How would you render multiple textures in OpenGL?
The question is somewhat broad, so this is just a quick overview of some options for using multiple textures in the same draw call.
Bind to multiple texture units
For this approach, you bind each texture to a different texture unit, using the typical sequence:
glActiveTexture(GL_TEXTURE0 + i);
glBindTexture(GL_TEXTURE_2D, tex[i]);
In the shader, you can have either a bunch of separate sampler2D uniforms, or an array of sampler2D uniforms.
The main downside of this is that you're limited by the number of available texture units.
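On the shader side, the array-of-samplers variant might look like the sketch below. Note that before GLSL 4.00 a sampler array may only be indexed with a constant expression; from 4.00 on, a dynamically uniform index (such as a uniform) is allowed:

#version 400 core
uniform sampler2D textures[4];  // each element set to a texture unit via glUniform1i
uniform int which;              // must be dynamically uniform
in vec2 uv;
out vec4 fragColor;
void main() {
    fragColor = texture(textures[which], uv);
}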
Array textures
You can use array textures. This is done by using the GL_TEXTURE_2D_ARRAY texture target. In many ways, a 2D texture array is similar to a 3D texture. It's basically a bunch of 2D textures stacked on top of each other, and stored in a single texture object.
The downside is that all textures need to have the same size. If they don't, you have to use the largest size for the size of the texture array, and you waste memory for the smaller textures. You'll also have to apply scaling to your texture coordinates if the sizes aren't all the same.
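Allocating and filling an array texture might look like this sketch (glTexStorage3D needs GL 4.2 or ARB_texture_storage; width, height, layerCount and pixels are placeholders):

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, width, height, layerCount);
for (int layer = 0; layer < layerCount; ++layer)
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, layer,
                    width, height, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixels[layer]);
// In GLSL, sample with: texture(myArraySampler, vec3(u, v, layer))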
Texture atlas
This is the idea you already presented. You store all textures in a single large texture, and use the texture coordinates to control which texture is used.
While a popular approach, there are some technical challenges with this. You have to be careful at the seams between textures so that they don't bleed into each other when using linear sampling. And while this approach, unlike texture arrays, allows for different texture sizes without wasting memory, allocating regions within the atlas gets a little trickier with variable sizes.
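The coordinate remapping itself is simple; a minimal fragment-shader sketch, with the per-sprite offset and scale supplied as hypothetical uniforms:

#version 330 core
uniform sampler2D atlas;
uniform vec2 uvOffset;  // lower-left corner of the sub-image, in atlas space
uniform vec2 uvScale;   // sub-image extent relative to the whole atlas
in vec2 localUV;        // 0..1 across the quad
out vec4 fragColor;
void main() {
    fragColor = texture(atlas, uvOffset + localUV * uvScale);
}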
Bindless textures
This is only available as an extension so far: ARB_bindless_texture.
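On the application side, the extension boils down to turning a texture object into a 64-bit handle that shaders can read directly. A sketch, assuming the extension is present:

GLuint64 handle = glGetTextureHandleARB(tex);  // tex is an ordinary texture object
glMakeTextureHandleResidentARB(handle);        // must be resident before shader access
// Store 'handle' in a UBO/SSBO; the shader sees it as a sampler2D.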
You need to learn about the difference of texture units and texture objects.
Texture units are like "texture cartridges" of the OpenGL rasterizer. The rasterizer has a limited amount of "cartridge" slots (called texture units). To load a texture into a texture unit you first select the unit with glActiveTexture, then you load the texture "cartridge" (the texture object) using glBindTexture.
The number of texture objects you can have is limited only by your system's memory (and storage capabilities), but only a limited number of textures can be "slotted" into the texture units at the same time.
Samplers are like "taps" into the texture units. Different samplers within a shader may "tap" into the same texture unit. By setting the sampler uniform to a texture unit you select which unit you want to sample from.
And then you can also have the same texture "slotted" into multiple texture units at the same time.
Update (some clarification)
I read that the texture limit of GL_TEXTURE is dependent on the GPU but it is at least 45. What if I want to render an image that consists of more than 45 textures for example 90?
Normally you don't try to render the whole image with a single drawing call. It's practically impossible to catch all variations of which textures to use in which situation. Normally you write shaders for specific looks of a "material". Say you have a shader simulating paint on some metal. You'd have 3 textures: metal, paint, and a modulating texture that controls where metal and where paint is visible. The shader would then have 3 sampler uniforms, one for each texture. To render the surface with that appearance you'd (see the sketch after these steps):
select the shader program to use (glUseProgram)
for each texture, activate its texture unit in turn (glActiveTexture(GL_TEXTURE0 + i)) and bind the texture (glBindTexture)
set the sampler uniforms to the texture units to use (glUniform1i(…, i))
draw the geometry.
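In code, those steps might look like the following sketch; program, tex, samplerLocation and drawGeometry are placeholders for your own objects:

glUseProgram(program);                     // the painted-metal shader
for (int i = 0; i < 3; ++i) {              // metal, paint, modulation mask
    glActiveTexture(GL_TEXTURE0 + i);      // select texture unit i
    glBindTexture(GL_TEXTURE_2D, tex[i]);  // slot the texture into that unit
    glUniform1i(samplerLocation[i], i);    // point the i-th sampler at unit i
}
drawGeometry();                            // e.g. glDrawElements(...)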

OpenGL - Associate Texture Coordinates Array With Index Array Rather Than Vertex Array?

Whenever we use an index array to render textured polygons with the glDrawElements family of functions, we can provide an array of vertices and an array of texture coordinates. Then each index in the index array refers to a vertex at some position in the vertex array and the corresponding texture coordinate at the same position in the texture array. Now, if for instance several separate primitives (like QUADS) share one vertex, but require different texture coordinates for that vertex, we have to duplicate that vertex in our array as many times as we have different texture coordinates for it. Therefore, it would be much more convenient if the texture coordinate array could be associated with the positions in the index array. That way no vertex duplication would be necessary to associate one specific vertex with different texture coordinates.
Is this possible? If yes, what syntax to use?
No. Not in a simple way.
You could use buffer textures and shader logic to implement it. But there is no simple API to make attributes index the way you want. All attributes are sampled from the same index (except when instanced array divisors are used, but that won't help you either).
Note that doing this will be a memory/performance tradeoff. Using buffer textures to access vertex data will take up less memory, but it will be significantly slower and more limiting than just using regular attributes. You won't have access to normalized vertex attributes, so compressing the vertex data will require explicit shader logic. And accessing buffer textures is just slower overall.
You should only do this if memory is at a premium.
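A sketch of that buffer-texture approach, drawn non-indexed with glDrawArrays so that gl_VertexID counts corners 0, 1, 2, ...; all names and texture formats (RGB32F / RG32F / R32I buffer textures) are illustrative:

#version 330 core
uniform samplerBuffer positions;    // one vec3 per unique vertex position
uniform isamplerBuffer posIndices;  // one position index per corner
uniform samplerBuffer texCoords;    // one vec2 per corner
uniform mat4 mvp;
out vec2 uv;
void main() {
    int posIndex = texelFetch(posIndices, gl_VertexID).r;  // manual index fetch
    vec3 position = texelFetch(positions, posIndex).rgb;   // shared position
    uv = texelFetch(texCoords, gl_VertexID).rg;            // per-corner texcoord
    gl_Position = mvp * vec4(position, 1.0);
}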
Now, if for instance several separate primitives (like QUADS) share one vertex, but require different texture coordinates for that vertex, we have to duplicate that vertex in our array as many times as we have different texture coordinates for it.
If the texture coordinates differ on primitives sharing a vertex position, then the vertices as a whole are not shared! A vertex is a single vector consisting of
position
normal
texture coordinate(s)
other attributes
If you alter any of these, you end up with a different vertex. Because of that, vertex sharing does not work the way you thought.
You can duplicate the vertices so that one has one texture coordinate and the other has the other. The only downfall of that is if you need to morph the surface: you may move one vertex but not both. Of course it is possible to do it "imperatively" (in immediate mode: run through a loop and submit a different texture coordinate as you go), but that would not use VBOs and would be much slower.

OpenGL - How do I compare pixel values from separate textures with the same location

I was wondering what is the best way to go about comparing a pixel that is currently being rendered (and accessed using a fragment shader) to a pixel with the same location in a previously stored, unbound texture (both textures are the same size)?
Now that the question is clearer, it's possible to give an answer.
The main issue is that the framebuffer contents and the fragment parameters (position) are not available in the fragment shader; indeed, you can't execute the "compare" operation while rendering.
You have to render the model to a texture (search for render-to-texture using framebuffer objects), and then run a fragment shader (maybe using GL_ARB_texture_rectangle) on an ortho view with a viewport of the same size as the texture.
The fragment shader shall have two textures as input: the first texture (containing detected edges) and the texture-rendered wireframe model. Then it's easy to perform complex computation in the fragment shader once you can access each texel of both textures.
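A minimal sketch of such a comparison pass, run over a full-screen quad; the texture names and the difference metric are illustrative:

#version 330 core
uniform sampler2D edgeTex;       // previously stored texture (detected edges)
uniform sampler2D wireframeTex;  // render-to-texture result of the model
in vec2 uv;                      // from the full-screen quad, 0..1
out vec4 fragColor;
void main() {
    vec4 a = texture(edgeTex, uv);
    vec4 b = texture(wireframeTex, uv);
    fragColor = vec4(vec3(distance(a.rgb, b.rgb)), 1.0);  // difference as grayscale
}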
Hope this can help you.