Deleting unused textures in OpenGL - C++

While browsing here I noticed something strange.
After loading a compressed texture and getting the location of the shader's sampler uniform:
GLuint Texture = loadDDS("uvtemplate.DDS");
// Get a handle for our "myTextureSampler" uniform
GLuint TextureID = glGetUniformLocation(programID, "myTextureSampler");
He deleted the texture using the ID he got from glGetUniformLocation
glDeleteTextures(1, &TextureID);
Shouldn't he use this instead?
glDeleteTextures(1, &Texture );

Yes, that is correct.
Some OpenGL implementations (such as AMD and NVIDIA drivers) tend to return ascending resource IDs, starting from 1. If this is the first texture the code allocates and the sampler is the first uniform in the shader program, the two IDs happen to match and the code accidentally works. However, it will likely break on other platforms (such as Intel drivers), or as soon as more resources are used.
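For clarity, here is a minimal sketch of the intended cleanup, reusing the variable names from the question (Texture is the texture object name returned by loadDDS, TextureID is only a uniform location and must not be passed to glDeleteTextures):
// Texture object names and uniform locations live in different namespaces.
// glDeleteTextures expects the name returned by glGenTextures (here via loadDDS),
// never a location obtained from glGetUniformLocation.
glDeleteTextures(1, &Texture);
// The uniform location itself needs no cleanup; it becomes meaningless
// once the program object is deleted.
glDeleteProgram(programID);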

Related

Why the types of regular textures or texture buffer objects must be uniform in GLSL

I have found that all the OpenGL tutorials declare sampler types or TBO types as uniforms in GLSL, but I don't know why. Could anyone explain this in more detail?
In GLSL you have two kinds of input data: data that can change per vertex/instance (attributes), and global data that stays the same throughout one rendering call (uniforms, buffers).
To use a texture you bind it to a texture unit, and to use it in the shader program you set the value of the sampler uniform to the index of that texture unit.
So a setup would look like this:
glActiveTexture(GL_TEXTURE0 + 4);                     // select texture unit 4
glBindTexture(GL_TEXTURE_2D, texture);                // bind the texture to unit 4
glUniform1i(glGetUniformLocation(program, "tex"), 4); // tell the sampler to read from unit 4
Since OpenGL 4.2 you can define the binding within the shader itself, so you do not need the glUniform1i call if the texture unit to use is known in advance:
layout(binding=4) uniform sampler2D tex;
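With that explicit binding in the shader, the host-side setup reduces to selecting the unit and binding the texture; a minimal sketch, assuming the same texture and program as above:
glUseProgram(program);                 // layout(binding=4) already initialized the sampler uniform to 4
glActiveTexture(GL_TEXTURE0 + 4);      // select texture unit 4
glBindTexture(GL_TEXTURE_2D, texture); // bind the texture to unit 4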

Texture data not being passed to sampler in fragment shader

I need to pass a 2D (Rectangle) texture to my fragment shader. The data in the texture must be read in the sampler as a uniform. Depending on the value read in the texture, the color of a pixel will or will not be changed.
Here is my code in the API to bind the texture and sampler:
//create texture and sampler
glEnable(GL_TEXTURE_2D);
GLuint CosineTexture;
glGenTextures(1,&CosineTexture);
glTexStorage2D(GL_TEXTURE_RECTANGLE,1,GL_RGBA,1440,1080);
glTexSubImage2D(GL_TEXTURE_RECTANGLE,0,0,0,1440,1080,GL_RGBA,GL_FLOAT,dataCosines1);
GLuint SensorCosines;
glGenSamplers(1,&SensorCosines);
glActiveTexture(GL_TEXTURE0 + 0);
glBindTexture(GL_TEXTURE_RECTANGLE,CosineTexture);
glBindSampler(0,SensorCosines);
and here is my code in the shader to receive and read from the texture:
uniform samplerRect SensorCosines;
...
float textureValue;
textureValue = texture(SensorCosines, Bin).r;
Nothing happens. So I guess that my texture is not being passed to the shader. My feeling is that it is at the level of binding the texture to the sampler in the API, but I have no clue what my error is.
glTexStorage2D(GL_TEXTURE_RECTANGLE,1,GL_RGBA,1440,1080);
This call will produce an OpenGL error. Specifically, GL_INVALID_ENUM. You must use sized image formats when using glTexStorage functions. GL_RGBA is not a sized image format.
You may have other problems in un-shown code, but this is certainly one of the issues.
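As a sketch of what the allocation could look like with a sized format: since the upload passes GL_FLOAT data, GL_RGBA32F is one plausible choice (assuming the data really is 32-bit float RGBA), and the texture also has to be bound to the target before the storage call:
glBindTexture(GL_TEXTURE_RECTANGLE, CosineTexture);               // bind before allocating storage
glTexStorage2D(GL_TEXTURE_RECTANGLE, 1, GL_RGBA32F, 1440, 1080);  // sized internal format required by glTexStorage*
glTexSubImage2D(GL_TEXTURE_RECTANGLE, 0, 0, 0, 1440, 1080, GL_RGBA, GL_FLOAT, dataCosines1);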

OpenGL 4.5 Buffer Texture: extensions support

I use OpenGL Version 4.5.0, but somehow I cannot make the texture_buffer_object extensions work for me ("GL_EXT_texture_buffer_object" or "GL_ARB_texture_buffer_object"). I am quite new to OpenGL, but if I understand correctly, these extensions are quite old and even already included in the core functionality...
I looked for the extensions with "OpenGL Extensions Viewer 4.1"; it says they are supported on my computer, and glewGetExtension("GL_EXT_texture_buffer_object") and glewGetExtension("GL_ARB_texture_buffer_object") both also return true.
But the data from the buffer does not appear in the texture sampler (in the fragment shader the texture contains only zeros).
So, I thought maybe the extensions are somehow disabled by default, and I included enabling these extensions in my fragment shader:
#version 440 core
#extension GL_ARB_texture_buffer_object : enable
#extension GL_EXT_texture_buffer_object : enable
And now I get such warnings at run-time:
***GLSL Linker Log:
Fragment info
-------------
0(3) : warning C7508: extension ARB_texture_buffer_object not supported
0(4) : warning C7508: extension EXT_texture_buffer_object not supported
Please see the code example below:
//#define GL_TEXTURE_BIND_TARGET GL_TEXTURE2D
#define GL_TEXTURE_BIND_TARGET GL_TEXTURE_BUFFER_EXT
.....
glGenTextures(1, &texObject);
glBindTexture(GL_TEXTURE_BIND_TARGET, texObject);
GLuint bufferObject;
glGenBuffers(1, &bufferObject);
// Make this the current UNPACK buffer (OpenGL is state-based)
glBindBuffer(GL_TEXTURE_BIND_TARGET, bufferObject);
glBufferData(GL_TEXTURE_BIND_TARGET, nWidth*nHeight*4*sizeof(float), NULL, GL_DYNAMIC_DRAW);
float *test = (float *)glMapBuffer(GL_TEXTURE_BIND_TARGET, GL_READ_WRITE);
for(int i = 0; i < nWidth*nHeight*4; i++)
    test[i] = i / (nWidth*nHeight*4.0);
glUnmapBuffer(GL_TEXTURE_BIND_TARGET);
glTexBufferEXT(GL_TEXTURE_BIND_TARGET, GL_RGBA32F_ARB, bufferObject);
//glTexImage2D(GL_TEXTURE_BIND_TARGET, 0, components, nWidth, nHeight,
// 0, format, GL_UNSIGNED_BYTE, data);
............
So if I use GL_TEXTURE2D target and load some data array directly to the texture, everything works fine. If I use GL_TEXTURE_BUFFER_EXT target and try to load texture from the buffer, then I get an empty texture in the shader.
Note: I have to load texture data from the buffer because in my real project I generate the data on the CUDA side, and the only way (that I know of) to visualize data from CUDA is using such texture buffers.
So, the questions are :
1) Why do I get no data in the texture, although the OpenGL version is fine and the Extensions Viewer shows the extensions as supported?
2) Why does trying to enable the extensions in the shader fail?
Edit details: I updated the post because I found out the reason for the "Invalid Enum" error that I mentioned first; it was caused by a glTexParameteri call, which is not allowed for buffer textures.
I solved this. I was in a hurry and stupidly missed a very important thing on a wiki page:
https://www.opengl.org/wiki/Buffer_Texture
Access in shaders
In GLSL, buffer textures can only be accessed with the texelFetch function. This function takes pixel offsets into the texture rather than normalized texture coordinates. The sampler type for buffer textures is samplerBuffer.
So in GLSL we should use buffer textures like this:
uniform samplerBuffer myTexture;
void main (void)
{
vec4 color = texelFetch(myTexture, [index]); // [index] stands for an integer texel offset
}
not like usual textures:
uniform sampler1D myTexture;
void main (void)
{
vec4 color = texture(myTexture, gl_FragCoord.x);
}
Warnings about unsupported extensions: I think I get them because this functionality has been part of the core since OpenGL 3.1, so it should not need to be enabled separately any more.
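For completeness, a minimal sketch of the host-side buffer texture setup using only core entry points (no EXT suffixes), assuming a GL 3.1+ context and a float RGBA buffer like the one in the question:
GLuint bufferObject, texObject;
glGenBuffers(1, &bufferObject);
glBindBuffer(GL_TEXTURE_BUFFER, bufferObject);
glBufferData(GL_TEXTURE_BUFFER, nWidth*nHeight*4*sizeof(float), NULL, GL_DYNAMIC_DRAW);
glGenTextures(1, &texObject);
glBindTexture(GL_TEXTURE_BUFFER, texObject);
// Attach the buffer's data store to the buffer texture, with a sized internal format:
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, bufferObject);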

OpenGL ES 3 (iOS) texturing oddness - want to know why

I have a functioning OpenGL ES 3 program (iOS), but I'm having a difficult time understanding OpenGL textures. I'm trying to render several quads to the screen, all with different textures. The textures are all 256-color images with a separate palette.
This is the C++ code that sends the textures to the shaders:
// THIS CODE WORKS, BUT I'M NOT SURE WHY
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, _renderQueue[idx]->TextureId);
glUniform1i(_glShaderTexture, 1); // what does the 1 mean here?
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, _renderQueue[idx]->PaletteId);
glUniform1i(_glShaderPalette, 2); // what does the 2 mean here?
glDrawElements(GL_TRIANGLES, sizeof(Indices)/sizeof(Indices[0]), GL_UNSIGNED_BYTE, 0);
This is the fragment shader
uniform sampler2D texture; // New
uniform sampler2D palette; // A palette of 256 colors
varying highp vec2 texCoordOut;
void main()
{
highp vec4 palIndex = texture2D(texture, texCoordOut);
gl_FragColor = texture2D(palette, palIndex.xy);
}
As I said, the code works, but I'm unsure WHY it works. Several seemingly minor changes break it. For example, using GL_TEXTURE0 and GL_TEXTURE1 in the C++ code breaks it. Changing the numbers in glUniform1i to 0 and 1 breaks it. I'm guessing I do not understand something about texturing in OpenGL 3+ (maybe texture units???), but I need some guidance to figure out what.
Since it's often confusing to newer OpenGL programmers, I'll try to explain the concept of texture units on a very basic level. It's not a complex concept once you pick up on the terminology.
The whole thing is motivated by offering the possibility of sampling multiple textures in shaders. Since OpenGL traditionally operates on objects that are bound with glBind*() calls, this means that an option to bind multiple textures is needed. Therefore, the concept of having one bound texture was extended to having a table of bound textures. What OpenGL calls a texture unit is an entry in this table, designated by an index.
If you wanted to describe this state in a C/C++ style notation, you could define the table of bound textures as an array of texture IDs, where the size is the maximum number of bound textures supported by the implementation (queried with glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, ...)):
GLuint BoundTextureIds[MAX_TEXTURE_UNITS];
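As a side note, a minimal sketch of that query (maxUnits is just an illustrative variable name):
GLint maxUnits = 0;
glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &maxUnits); // texture units available to the fragment shader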
If you bind a texture, it gets bound to the currently active texture unit. This means that the last call to glActiveTexture() determines which entry in the table of bound textures is modified. In a typical call sequence, which binds a texture to texture unit i:
glActiveTexture(GL_TEXTUREi);
glBindTexture(GL_TEXTURE_2D, texId);
this would correspond to modifying our imaginary data structure by:
BoundTextureIds[i] = texId;
That covers the setup. Now, the shaders can access all the textures in this table. Variables of type sampler2D are used to access textures in the GLSL code. To determine which texture each sampler2D variable accesses, we need to specify which table entry each one uses. This is done by setting the uniform value to the table index:
glUniform1i(samplerLoc, i);
specifies that the sampler uniform at location samplerLoc reads from table entry i, meaning that it samples the texture with id BoundTextureIds[i].
In the specific case of the question, the first texture was bound to texture unit 1 because glActiveTexture(GL_TEXTURE1) was called before glBindTexture(). To access this texture from the shader, the shader uniform needs to be set to 1 as well. Same thing for the second texture, with texture unit 2.
(The description above was slightly simplified because it did not take into account different texture targets. In reality, textures with different targets, e.g. GL_TEXTURE_2D and GL_TEXTURE_3D, can be bound to the same texture unit.)
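To illustrate that parenthetical point with a small sketch (tex2d and tex3d are hypothetical texture names): each unit holds one binding per target, so the two bindings below coexist on unit 0.
glActiveTexture(GL_TEXTURE0);        // select unit 0
glBindTexture(GL_TEXTURE_2D, tex2d); // bound to unit 0 under target GL_TEXTURE_2D
glBindTexture(GL_TEXTURE_3D, tex3d); // bound to unit 0 under target GL_TEXTURE_3D; neither replaces the other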
GL_TEXTURE1 and GL_TEXTURE2 refer to texture units. For samplers, glUniform1i takes a texture unit index as its second argument. This is why they are 1 and 2.
From the OpenGL website:
The value of a sampler uniform in a program is not a texture object,
but a texture image unit index. So you set the texture unit index for
each sampler in a program.

OpenGL, how to use depthbuffer from framebuffer as usual depth buffer

I have a framebuffer with a depth component and 4 color attachments (4 textures).
I draw some stuff into it and unbind the buffer afterwards, using the 4 textures in the fragment shader (deferred lighting).
Later I want to draw some more stuff on the screen, using the depth buffer from my framebuffer. Is that possible?
I tried binding the framebuffer again and specifying glDrawBuffer(GL_FRONT), but it does not work.
Like Nicol already said, you cannot use an FBO's depth buffer as the default framebuffer's depth buffer directly.
But you can copy the FBO's depth buffer over to the default framebuffer using the EXT_framebuffer_blit extension (which has been core since GL 3.0):
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
GL_DEPTH_BUFFER_BIT, GL_NEAREST);
If this extension is not supported (which I doubt, since you already have FBOs), you can use a depth texture as the FBO's depth attachment and render it to the default framebuffer using a textured quad and a simple pass-through fragment shader that writes to gl_FragDepth. Though this might be slower than just blitting it over.
I just experienced that copying a depth buffer from a renderbuffer to the main (context-provided) depth buffer is highly unreliable when using glBlitFramebuffer, simply because you cannot guarantee that the formats match. Using GL_DEPTH_COMPONENT24 as my internal depth-texture format just didn't work on my AMD Radeon 6950 (latest driver) because Windows (or the driver) decided to use the equivalent of GL_DEPTH24_STENCIL8 as the depth format for my front/back buffer, although I did not request any stencil precision (stencil bits set to 0 in the pixel format descriptor). When using GL_DEPTH24_STENCIL8 for my framebuffer's depth texture the blitting worked as expected, but I had other issues with that format. The first attempt worked fine on NVIDIA cards, so I'm pretty sure I did not mess things up.
What works best (in my experience) is copying via shader:
The Fragment-Program (aka Pixel-Shader) [GLSL]
#version 150
uniform sampler2D depthTexture;
in vec2 texCoords; //texture coordinates from vertex-shader
void main( void )
{
gl_FragDepth = texture(depthTexture, texCoords).r;
}
The C++ code for copying looks like this:
glDepthMask(GL_TRUE);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glEnable(GL_DEPTH_TEST); //has to be enabled for some reason
glBindFramebuffer(GL_FRAMEBUFFER, 0);
depthCopyShader->Enable();
DrawFullscreenQuad(depthTextureIndex);
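Presumably you would restore color writes (and any other state you changed) once the depth copy is done; a minimal sketch:
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE); // re-enable color writes for subsequent rendering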
I know the thread is old, but it was one of my first results when googling my issue, so I want to keep it as consistent as possible.
You cannot attach images (color or depth) to the default framebuffer. Similarly, you can't take images from the default framebuffer and attach them to an FBO.