glDeleteTextures not deleting texture - opengl

I'm trying to write a class that does fragment shader chaining by using a Framebuffer Object to render to a texture with a fragment shader, then render that texture to another texture with another fragment shader, and so on.
Right now I am chasing a memory leak: when I resize my window and delete/reallocate the textures I am using, the textures are not being deleted properly.
Here is a code snippet:
//Allocate first texture
glGenTextures( 1, &texIds[0] );
glBindTexture( GL_TEXTURE_2D, texIds[0] );
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA8, screenX, screenY, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL );
//Allocate second texture
glGenTextures( 1, &texIds[1] );
glBindTexture( GL_TEXTURE_2D, texIds[1] );
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA8, screenX, screenY, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL );
//Try to free first texture -- ALWAYS FAILS
glDeleteTextures( 1, &texIds[0] );
//Try to free second texture
glDeleteTextures( 1, &texIds[1] );
When I run this with gDEBugger, it tells me "Warning: The debugged program delete a texture that does not exist. Texture name: 1" when I try to delete texIds[0]. (The reason I have them in an array right now is that I used to create and free them at the same time; however, when you free two textures at once, the call fails silently on one and continues with the other.)
If I don't create texIds[1], I can free texIds[0], but as soon as I create a second texture, I can no longer free the first texture I created. Any ideas?

Perhaps the error is in the texIds array. Is it an array of GLuint?
If you erroneously declared it as an array of a smaller type (a 16-bit word, say), then generating the texture name for element [1] would clobber the GLuint stored at element [0], leaving a broken name behind.
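For comparison, a minimal sketch of the layout the snippet above assumes (the array size of 2 is an assumption):
GLuint texIds[2]; // each element must be a full 4-byte GLuint
glGenTextures(1, &texIds[0]); // writes a complete GLuint into element 0
glGenTextures(1, &texIds[1]); // writes element 1 without touching element 0
glDeleteTextures(1, &texIds[0]);
glDeleteTextures(1, &texIds[1]);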

Related

copy GL_TEXTURE_2D into one slice of a GL_TEXTURE_2D_ARRAY Texture

I'm trying to copy one GL_TEXTURE_2D into a chosen slice of a GL_TEXTURE_2D_ARRAY Texture.
I bind the ordinary TEXTURE_2D to one framebuffer and a single slice of the TEXTURE_2D_ARRAY to another framebuffer (both have the same width, height, GL_RGB and GL_UNSIGNED_BYTE format).
I then expected glBlitFramebuffer to copy that texture into this one slice... but I think I misunderstand the glFramebufferTexture3D command.
By the way: the GL_TEXTURE_2D is loaded correctly, and I have also displayed it on screen (that works).
Here is my code:
//Create 2 FBOs for copying textures
glGenFramebuffers(1, &nFrameBufferRead); //FBO for texture2D
glGenFramebuffers(1, &nFrameBufferWrite); //FBO for one slice of the texture2d_array
CBasics::GetOpenGLError();
//generate the GL_TEXTURE_2D_ARRAY with the given values (glGenTextures has already been called for this texture)
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGB, nWidth, nHeight, countSlices, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
CBasics::GetOpenGLError();
//Bind the Texture2D to the readFramebuffer
glBindFramebuffer(GL_READ_FRAMEBUFFER, nFrameBufferRead);
glFramebufferTexture(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, texture2D_ID, 0);
CBasics::GetOpenGLError();
//try to bind the Texture2D_Array to the drawFramebuffer
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, nFrameBufferWrite);
CBasics::GetOpenGLError(); //till here everything works (no glerror)
glFramebufferTexture3D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D_ARRAY, texture2D_Array_ID, 0, slicenumber); // here the error appears
CBasics::GetOpenGLError();
//because of the error one step earlier, the next error will appear here...
glBlitFramebuffer(0, 0, nWidth, nHeight, 0, 0, nWidth, nHeight, GL_COLOR_BUFFER_BIT, GL_NEAREST);
CBasics::GetOpenGLError();
The error appears at glFramebufferTexture3D: GL_INVALID_VALUE. I think it is because of this:
GL_INVALID_VALUE is generated if texture is not zero or the name of an
existing texture object.
1st: Is this the correct way to copy a texture into an array slice, or is there a better way to do it?
2nd: Is it possible to bind only one slice of a GL_TEXTURE_2D_ARRAY?
3rd: Do I need the glFramebufferTexture3D command or the glFramebufferTexture2D command for GL_TEXTURE_2D_ARRAYs?
Assuming that "Here's my code" actually does contain all of your code, it doesn't work because you didn't bind the 2D array texture before calling glTexImage3D to allocate storage for it.
However, you don't have to render or blit to copy texture data. You can copy texture data by... copying texture data. The glCopyImageSubData function can copy layers between textures with different array layer counts. In your case:
glCopyImageSubData(
    texture2D_ID, GL_TEXTURE_2D, 0, 0, 0, 0,                        // source: level 0, origin (0, 0, 0)
    texture2D_Array_ID, GL_TEXTURE_2D_ARRAY, 0, 0, 0, slicenumber,  // destination: level 0, layer = slicenumber
    nWidth, nHeight, 1);                                            // copy nWidth x nHeight texels, one layer deep
This requires OpenGL 4.3 or better, or one of the ARB/NV_copy_image extensions. The NVIDIA extension is actually quite widely implemented.
But you still need to use glTexImage3D correctly.
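For example, the allocation step would then look something like this (a minimal sketch using the names from the question; the array texture name is assumed to have been generated already):
glBindTexture(GL_TEXTURE_2D_ARRAY, texture2D_Array_ID); // bind first so glTexImage3D has an object to allocate
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGB, nWidth, nHeight, countSlices,
             0, GL_RGB, GL_UNSIGNED_BYTE, NULL);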

deferred rendering - Renderbuffer vs Texture

So, I've been reading about this, and I still haven't found a conclusion. Some examples use textures as their render targets, some people use renderbuffers, and some use both!
For example, using just textures:
// Create the gbuffer textures
glGenTextures(ARRAY_SIZE_IN_ELEMENTS(m_textures), m_textures);
glGenTextures(1, &m_depthTexture);
for (unsigned int i = 0 ; i < ARRAY_SIZE_IN_ELEMENTS(m_textures) ; i++) {
glBindTexture(GL_TEXTURE_2D, m_textures[i]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, WindowWidth, WindowHeight, 0, GL_RGB, GL_FLOAT, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, m_textures[i], 0);
}
both:
glGenRenderbuffersEXT ( 1, &m_diffuseRT );
glBindRenderbufferEXT ( GL_RENDERBUFFER_EXT, m_diffuseRT );
glRenderbufferStorageEXT ( GL_RENDERBUFFER_EXT, GL_RGBA, m_width, m_height );
glFramebufferRenderbufferEXT ( GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_RENDERBUFFER_EXT, m_diffuseRT );
glGenTextures ( 1, &m_diffuseTexture );
glBindTexture ( GL_TEXTURE_2D, m_diffuseTexture );
glTexImage2D ( GL_TEXTURE_2D, 0, GL_RGBA, m_width, m_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
// Attach the texture to the FBO
glFramebufferTexture2DEXT ( GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, m_diffuseTexture, 0 );
What's the difference? What's the point of creating a texture, a render buffer, and then assigning one to the other? Once you successfully supply a texture with an image, its memory is allocated, so why does one need to bind it to a render buffer?
Why would one use textures or renderbuffers? What would the advantages be?
I've read that you cannot read from a renderbuffer, only from a texture. What's the use of it, then?
EDIT:
So, my current code for a GBuffer is this:
enum class GBufferTextureType
{
Depth = 0,
Position,
Diffuse,
Normal,
TexCoord
};
.
.
.
glGenFramebuffers ( 1, &OpenGLID );
if ( Graphics::GraphicsBackend->CheckError() == false )
{
Delete();
return false;
}
glBindFramebuffer ( GL_FRAMEBUFFER, OpenGLID );
if ( Graphics::GraphicsBackend->CheckError() == false )
{
Delete();
return false;
}
uint32_t TextureGLIDs[5];
glGenTextures ( 5, TextureGLIDs );
if ( Graphics::GraphicsBackend->CheckError() == false )
{
Delete();
return false;
}
// Create the depth texture
glBindTexture ( GL_TEXTURE_2D, TextureGLIDs[ ( int ) GBufferTextureType::Depth] );
glTexImage2D ( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, In_Dimensions.x, In_Dimensions.y, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL );
glFramebufferTexture2D ( GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, TextureGLIDs[ ( int ) GBufferTextureType::Depth], 0 );
// Create the color textures
for ( unsigned cont = 1; cont < 5; ++cont )
{
glBindTexture ( GL_TEXTURE_2D, TextureGLIDs[cont] );
glTexImage2D ( GL_TEXTURE_2D, 0, GL_RGB32F, In_Dimensions.x, In_Dimensions.y, 0, GL_RGB, GL_FLOAT, NULL );
glFramebufferTexture2D ( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + cont, GL_TEXTURE_2D, TextureGLIDs[cont], 0 );
}
// Specify draw buffers
GLenum DrawBuffers[4];
for ( unsigned cont = 0; cont < 4; ++cont )
DrawBuffers[cont] = GL_COLOR_ATTACHMENT0 + cont;
glDrawBuffers ( 4, DrawBuffers );
if ( Graphics::GraphicsBackend->CheckError() == false )
{
Delete();
return false;
}
GLenum Status = glCheckFramebufferStatus ( GL_FRAMEBUFFER );
if ( Status != GL_FRAMEBUFFER_COMPLETE )
{
Delete();
return false;
}
Dimensions = In_Dimensions;
// Unbind
glBindFramebuffer ( GL_FRAMEBUFFER, 0 );
Is this the way to go?
I still have to write the corresponding shaders...
What's the point of creating a texture, a render buffer, and then assigning one to the other?
That's not what's happening. But that's OK, because that second code example is errant nonsense. The glFramebufferTexture2DEXT call overrides the attachment made by glFramebufferRenderbufferEXT; the renderbuffer is never actually used after it is created.
If you found that code online somewhere, I strongly advise you to disregard anything that source told you about OpenGL development. Though I would advise that anyway, since it's using the "EXT" extension functions in 2016, almost a decade after core FBOs became available.
I've read that you cannot read from a renderbuffer, only from a texture. What's the use of it, then?
That is entirely the point of them: you use a renderbuffer for images that you don't want to read from. That's not useful for deferred rendering, since you really do want to read from them.
But imagine if you're generating a reflection image of a scene, which you will later use as a texture in your main scene. Well, to render the reflection scene, you need a depth buffer. But you're not going to read from that depth buffer (not as a texture, at any rate); you need a depth buffer for depth testing. But the only image you're going to read from after is the color image.
So you would make the depth buffer a renderbuffer. That tells the implementation that the image can be put into whatever storage is most efficient for use as a depth buffer, without having to worry about read-back performance. This may or may not have a performance impact. But at the very least, it won't be any slower than using a texture.
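As a concrete illustration, creating such a write-only depth buffer and attaching it to the framebuffer looks roughly like this (a minimal sketch with assumed variable names, using the core, non-EXT entry points):
GLuint depthRb;
glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
// Attach to the currently bound FBO; the image is only ever written by depth testing.
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRb);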
Most rendering scenarios need a depth and/or stencil buffer, though it is rare that you would ever need to sample the data stored in the stencil buffer from a shader.
It would be impossible to do depth/stencil tests if your framebuffer had no place to store this data, and any render pass that uses these fragment tests requires a framebuffer with the appropriate images attached.
If you are not going to use the depth/stencil buffer data in a shader, a renderbuffer will happily satisfy storage requirements for fixed-function fragment tests. Renderbuffers have fewer format restrictions than textures do, particularly if we detour this discussion to multisampling.
D3D10 introduced support for multisampled color textures but omitted multisampled depth textures; D3D10.1 later fixed that problem, and GL 3.0 was finalized after D3D10's initial design oversight had been corrected.
That pre-GL3 / pre-D3D10.1 design would manifest itself in GL as a multisampled framebuffer object that allows either texture or renderbuffer color attachments but forces you to use a renderbuffer for the depth attachment.
Renderbuffers are ultimately the lowest common denominator for storage; they will get you through tough jams on feature-limited hardware. You can actually blit the data stored in a renderbuffer into a texture in some situations where you could not draw directly into the texture.
To that end, you can resolve a multisampled renderbuffer into a single-sampled texture by blitting from one framebuffer to another. This is implicit multisampling, and it (would) allow you to use the anti-aliased results of a previous render pass with a standard texture lookup. Unfortunately it is thoroughly useless for anti-aliasing in deferred shading--you need explicit multisample resolve for that.
Nonetheless, it is incorrect to say that a renderbuffer is not readable; it is, in every sense of the word, but since your goal is deferred shading, reading it back would require additional GL commands to copy the data into a texture first.
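The resolve mentioned above is just a blit between two framebuffers. A rough sketch (msaaFbo, resolveFbo, width and height are assumed names):
// msaaFbo has a multisampled renderbuffer on GL_COLOR_ATTACHMENT0;
// resolveFbo has an ordinary single-sampled texture attached there.
glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST); // the blit performs the multisample resolve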

OpenGL glDeleteTextures leading to memory leak?

My program renders large jpegs as textures one at a time and allows the user to switch between them by re-using the "same" texture.
I call glGenTextures(1, &texture); once at the start.
Then each time I want to swap the image I use:
FreeTexture( texture );
ROI_img = fetch_image(temp, sortVector[tPiece]);
loadTexture_Ipl( ROI_img , &texture );
here are the two functions being called:
int loadTexture_Ipl(IplImage *image, GLuint *text)
{
if (image==NULL) return -1;
glBindTexture( GL_TEXTURE_2D, *text );
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, ROI_WIDTH, ROI_HEIGHT,0, GL_BGR, GL_UNSIGNED_BYTE, image->imageData);
return 0;
}
void FreeTexture(GLuint texture)
{
glDeleteTextures( 1, &texture);
}
My problem is that after a couple of images they stop rendering (the texture is all black). If I keep trying to switch, I get this error message:
test(55248,0xacdf22c0) malloc: *** mmap(size=744001536) failed (error code=12)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
Mar 19 11:57:51 Ants-MacBook-Pro.local test[55248] <Error>: CGSImageDataLock: Cannot allocate memory
I thought that glDeleteTextures would free the memory each time??
Any ideas on how to better implement this?
P.S. Here is a screenshot of the memory leaks being reported (https://p.twimg.com/AoWVy8FCMAEK8q7.png:large).
Since you want to reuse your texture, you shouldn't delete it with glDeleteTextures. If you do delete it, you need to create a new texture with glGenTextures, and I don't see you doing that in the loadTexture_Ipl function.
glGenTextures and glDeleteTextures go in pairs. If you really do want to delete the texture (perhaps because you won't be using it again), delete it as you already do, but call glGenTextures again before uploading the next image.
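In other words, a minimal sketch of the reuse approach, using the names from the question (generate the texture once at startup, re-upload into it on each switch, and delete it only once at shutdown):
// On each image switch: no FreeTexture(), just re-upload into the same name.
ROI_img = fetch_image(temp, sortVector[tPiece]);
loadTexture_Ipl(ROI_img, &texture); // glBindTexture + glTexImage2D replaces the old contents
// Only when the texture is no longer needed at all:
glDeleteTextures(1, &texture);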

cudaGraphicsResourceGetMappedPointer returns "unknown error"

I am creating an OpenGL texture like this:
glGenTextures( 1, &boardTex );
glBindTexture( GL_TEXTURE_2D, boardTex );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, NULL);
I get a valid handle back in boardTex, so I assume the texture has been created successfully. I want to share this texture with CUDA, so I register and map the resource:
cudaGLSetGLDevice(0);
cudaGraphicsGLRegisterImage( &boardImage, boardTex, GL_TEXTURE_2D, cudaGraphicsMapFlagsNone );
cudaGraphicsMapResources( 1, &boardImage, 0 );
Then I try to get the mapped pointer like this:
float4* mappedPointer;
size_t mappedSize;
cudaGraphicsResourceGetMappedPointer( (void**)&mappedPointer, &mappedSize, boardImage );
Unfortunately this call returns an error and refuses to work. I made sure the texture wasn't bound in the OpenGL context, just in case; still not working. cudaGetErrorString yields "unknown error", so I'm pretty stuck here. I'd appreciate any ideas.
Okay, I found this out myself:
cudaGraphicsSubResourceGetMappedArray(&array, resource, 0, 0); returns a cudaArray to work with. I have yet to wrap my mind around how cudaArrays work (and I might end up using PBOs), but at least it's not crashing.
Edit:
From the CUDA Reference Guide entry for cudaGraphicsResourceGetMappedPointer():
If resource is not a buffer then it cannot be accessed via a pointer
and cudaErrorUnknown is returned.
From the CUDA Reference Guide entry for cudaGraphicsSubResourceGetMappedArray():
If resource is not a texture then it cannot be accessed via an array
and cudaErrorUnknown is returned.
In other words, use GetMappedPointer for mapped buffer objects. Use GetMappedArray for mapped texture objects.
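For example, mapping the registered image and copying the level-0 contents back to host memory might look like this (a sketch; hostBuffer, width and height are assumed, and error checking is omitted):
cudaArray_t mappedArray;
cudaGraphicsMapResources(1, &boardImage, 0);
cudaGraphicsSubResourceGetMappedArray(&mappedArray, boardImage, 0, 0);
// The texture was allocated as GL_RGBA32F, so each texel is a float4.
size_t rowBytes = width * sizeof(float4);
cudaMemcpy2DFromArray(hostBuffer, rowBytes, mappedArray,
                      0, 0, rowBytes, height, cudaMemcpyDeviceToHost);
cudaGraphicsUnmapResources(1, &boardImage, 0);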

SDL_surface to OpenGL texture

Hey, I have this function to load an SDL_Surface and save it as an OpenGL texture:
typedef GLuint texture;
texture load_texture(std::string fname){
SDL_Surface *tex_surf = IMG_Load(fname.c_str());
if(!tex_surf){
return 0;
}
texture ret;
glGenTextures(1, &ret);
glBindTexture(GL_TEXTURE_2D, ret);
glTexImage2D(GL_TEXTURE_2D, 0, 3, tex_surf->w, tex_surf->h, 0, GL_RGB, GL_UNSIGNED_BYTE, tex_surf->pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
SDL_FreeSurface(tex_surf);
return ret;
}
The problem is that it isn't working. When I call the function from the main function, it just doesn't load any image (when displayed, it just shows the current drawing color), and when I call it from any function outside main, the program crashes.
It's this line that makes the program crash:
glTexImage2D(GL_TEXTURE_2D, 0, 3, tex_surf->w, tex_surf->h, 0, GL_RGB, GL_UNSIGNED_BYTE, tex_surf->pixels);
Can anybody see a mistake in this?
My bet is you need to convert the SDL_Surface before trying to cram it into an OpenGL texture. Here's something that should give you the general idea:
SDL_Surface* originalSurface; // Load like any other SDL_Surface
int w = pow(2, ceil( log(originalSurface->w)/log(2) ) ); // Round up to the nearest power of two
SDL_Surface* newSurface =
SDL_CreateRGBSurface(0, w, w, 24, 0xff000000, 0x00ff0000, 0x0000ff00, 0);
SDL_BlitSurface(originalSurface, 0, newSurface, 0); // Blit onto a purely RGB Surface
texture ret;
glGenTextures( 1, &ret );
glBindTexture( GL_TEXTURE_2D, ret );
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB, w, w, 0, GL_RGB,   // GL_RGB, not the legacy value 3, as internalformat
GL_UNSIGNED_BYTE, newSurface->pixels );
I found the original code here. There may be some other useful posts on GameDev as well.
The problem probably lies in the 3rd argument (internalformat) of the call to glTexImage2D.
glTexImage2D(GL_TEXTURE_2D, 0, 3, tex_surf->w, tex_surf->h, 0, GL_RGB, GL_UNSIGNED_BYTE, tex_surf->pixels);
You have to use constants like GL_RGB or GL_RGBA, because the actual values of those macros are not related to the number of color components.
A list of allowed values is in the reference manual: https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glTexImage2D.xhtml .
This seems to be a frequent mistake. Legacy OpenGL accepted the raw values 1-4 as shorthand for the number of components, and some drivers still tolerate them, so the wrong line might still work for some people.
/usr/include/GL/gl.h:473:#define GL_RGB 0x1907
/usr/include/GL/gl.h:474:#define GL_RGBA 0x1908
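A corrected version of the offending line from the question would therefore be:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, tex_surf->w, tex_surf->h,
             0, GL_RGB, GL_UNSIGNED_BYTE, tex_surf->pixels);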
I'm not sure if you're doing this somewhere outside your code snippet, but have you called
glEnable(GL_TEXTURE_2D);
at some point?
Some older hardware (and, surprisingly, emscripten's OpenGL ES 2.0 emulation, running on the new machine I bought this year) doesn't seem to support textures whose dimensions aren't powers of two. That turned out to be the problem I was stuck on for a while (I was getting a black rectangle rather than the sprite I wanted), so it's possible the poster's problem would go away after resizing the image to power-of-two dimensions.
See: https://www.khronos.org/opengl/wiki/NPOT_Texture
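If you want to detect that case at load time, a tiny helper is enough (a sketch, not from the original posts):
// Returns true if n is a positive power of two (1, 2, 4, 8, ...).
static bool is_power_of_two(int n)
{
    return n > 0 && (n & (n - 1)) == 0;
}
// e.g.: if (!is_power_of_two(tex_surf->w) || !is_power_of_two(tex_surf->h)) { /* resize or warn */ }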