OpenGL glDeleteTextures leading to memory leak? - c++

My program renders large jpegs as textures one at a time and allows the user to switch between them by re-using the "same" texture.
I call glGenTextures(1, &texture); once at the start.
Then each time I want to swap the image I use:
FreeTexture( texture );
ROI_img = fetch_image(temp, sortVector[tPiece]);
loadTexture_Ipl( ROI_img , &texture );
Here are the two functions being called:
int loadTexture_Ipl(IplImage *image, GLuint *text)
{
    if (image == NULL) return -1;

    glBindTexture(GL_TEXTURE_2D, *text);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, ROI_WIDTH, ROI_HEIGHT, 0, GL_BGR, GL_UNSIGNED_BYTE, image->imageData);
    return 0;
}
void FreeTexture(GLuint texture)
{
    glDeleteTextures(1, &texture);
}
My problem is that after a couple of images they stop rendering (texture is all black). If I keep trying to switch I get this error message:
test(55248,0xacdf22c0) malloc: *** mmap(size=744001536) failed (error code=12)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
Mar 19 11:57:51 Ants-MacBook-Pro.local test[55248] <Error>: CGSImageDataLock: Cannot allocate memory
I thought that glDeleteTextures would free the memory each time??
Any ideas on how to better implement this?
P.S. Here is a screenshot of the memory leaks being encountered: https://p.twimg.com/AoWVy8FCMAEK8q7.png:large

Since you want to reuse your texture, you shouldn't delete it with glDeleteTextures. If you do delete it, you need to create a new texture with glGenTextures, and I don't see you doing that in the loadTexture_Ipl function.

glGenTextures and glDeleteTextures go in pairs. If you really want to delete your texture (for example because you won't be using it again), delete it like you already do and call glGenTextures again before uploading a new image.
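For illustration, with the reuse approach the per-image swap needs neither FreeTexture nor a second glGenTextures call; a minimal sketch using the names from the question:

// Created once at startup, exactly as before:
GLuint texture;
glGenTextures(1, &texture);

// Each time the user switches images: no glDeleteTextures, just re-upload
// into the same texture object; glTexImage2D replaces the previous image.
ROI_img = fetch_image(temp, sortVector[tPiece]);
loadTexture_Ipl(ROI_img, &texture);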

Related

"Frame not in module" when using glTexImage2D

I have never come across this error before, and I use glTexImage2D elsewhere in the project without error. Below is a screenshot of the error Visual Studio shows, together with a view of the disassembly:
Given that the line has ptr in it, I assume there's a pointer error, but I don't know what I'm doing wrong.
Below is the function I use to convert from an SDL_Surface to a texture.
void surfaceToTexture(SDL_Surface *&surface, GLuint &texture) {
    glEnable(GL_TEXTURE_2D);
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, surface->w, surface->h, 0, GL_BGRA, GL_UNSIGNED_BYTE, surface->pixels);
    glDisable(GL_TEXTURE_2D);
}
This function succeeds elsewhere in the program, for example when loading text:
SDL_Surface *surface;
surface = TTF_RenderText_Blended(tempFont, message.c_str(), color);
if (surface == NULL)
    printf("Unable to generate text surface using font: %s! SDL_ttf Error: %s\n", font.c_str(), TTF_GetError());
else {
    SDL_LockSurface(surface);
    width = surface->w;
    height = surface->h;
    if (style != TTF_STYLE_NORMAL)
        TTF_SetFontStyle(tempFont, TTF_STYLE_NORMAL);
    surfaceToTexture(surface, texture);
    SDL_UnlockSurface(surface);
}
SDL_FreeSurface(surface);
But not when loading an image:
SDL_Surface* surface = IMG_Load(path.c_str());
if (surface == NULL)
    printf("Unable to load image %s! SDL_image Error: %s\n", path.c_str(), IMG_GetError());
else {
    SDL_LockSurface(surface);
    width = (w == 0) ? surface->w : w;
    height = (h == 0) ? surface->h / 4 : h;
    surfaceToTexture(surface, texture);
    SDL_UnlockSurface(surface);
}
SDL_FreeSurface(surface);
Both examples are extracted from a class where texture is defined.
The path to the image is correct.
I know it's glTexImage2D that causes the problem as I added a breakpoint at the start of surfaceToTexture and stepped through the function.
Even when it doesn't work, texture and surface do have seemingly correct values/properties.
Any ideas?
The error you're getting means that the process crashed within a section of code for which the debugger could not find any debugging information (the association between assembly and source code) whatsoever. This is typically the case for anything that's not part of your program's debug build, such as the driver's code.
Now in your case what happens is that you called glTexImage2D with parameters that "lie" to it about the memory layout of the buffer you pointed it to with the data parameter. Pointers don't carry any meaningful meta information (as far as the assembly level is concerned, they're just another integer with a special meaning). So you must make sure that all the parameters you pass to a function along with a pointer actually match up. If not, somewhere deep in the bowels of that function, or whatever it calls (or what that calls, etc.), the memory might be accessed in a way that violates constraints set up by the operating system, triggering that kind of crash.
Solution to your problem: Fix your code, i.e. make sure that what you pass to OpenGL is consistent. It crashes within the OpenGL driver, but only because you lied to it.
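For illustration, one way to make those parameters consistent is to derive the format from the SDL_Surface itself instead of hard-coding GL_BGRA. This is a sketch under the assumption that the surfaces are 24- or 32-bit; it is not code from the question:

void surfaceToTexture(SDL_Surface *&surface, GLuint &texture) {
    // Pick a client format that matches what the surface actually stores.
    GLenum format;
    if (surface->format->BytesPerPixel == 4)
        format = (surface->format->Rmask == 0x000000ff) ? GL_RGBA : GL_BGRA;
    else
        format = (surface->format->Rmask == 0x000000ff) ? GL_RGB : GL_BGR;

    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

    // SDL rows are surface->pitch bytes apart and not necessarily 4-byte
    // aligned, so describe the layout to OpenGL instead of assuming it.
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, surface->pitch / surface->format->BytesPerPixel);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, surface->w, surface->h, 0,
                 format, GL_UNSIGNED_BYTE, surface->pixels);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
}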

Segmentation fault loading texture with Devil into OpenGL

I am attempting to load a texture into OpenGL using DevIL, and I get a segmentation fault when this constructor is called:
Sprite::Sprite(const char *path){
    ILuint tex = 0;
    ilutEnable(ILUT_OPENGL_CONV);
    ilGenImages(1, &tex);
    ilBindImage(tex);
    ilLoadImage(path);
    ilConvertImage(IL_RGBA, IL_UNSIGNED_BYTE);
    width = (GLuint*)ilGetInteger(IL_IMAGE_WIDTH);
    height = (GLuint*)ilGetInteger(IL_IMAGE_HEIGHT);
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexImage2D(GL_TEXTURE_2D,
                 0,
                 GL_RGBA,
                 width,
                 height,
                 0,
                 GL_RGBA,
                 GL_UNSIGNED_BYTE,
                 &tex);
    ilBindImage(0);
    ilDeleteImages(1, &tex);
    ilutDisable(ILUT_OPENGL_CONV);
}
and texture is a protected member
GLuint texture;
As soon as this constructor is called I receive a segfault and the program exits. I am using freeglut, GL, IL, ILU, and ILUT. Any help would be appreciated.
Edit:
I also decided to take a different approach and use the
texture = ilutGLLoadImage(path)
function to load the file directly into the GL texture, because I located the segfault coming from
ilLoadImage(path)
but the compiler tells me that ilutGLLoadImage() is not declared in this scope, even though I have IL/il.h, IL/ilu.h and IL/ilut.h all included and initialized.
I have never used DevIL, but glTexImage2D wants a pointer to the pixel data as its last argument, and you pass a pointer to the local variable tex there instead, which is allocated on the stack and does not contain the expected image data. So glTexImage2D reads through your stack and eventually attempts to access memory it is not supposed to access, and you get a segmentation fault.
I guess you'd want to use ilGetData() instead.
Make sure you have DevIL initialized with ilInit(), change &tex to ilGetData(), and then it should work.
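Putting both answers together, the upload part of the constructor might look roughly like this (a sketch; it assumes ilInit() has already been called once at startup and that width and height are plain integer members):

Sprite::Sprite(const char *path){
    ILuint img = 0;
    ilGenImages(1, &img);
    ilBindImage(img);
    ilLoadImage(path);
    ilConvertImage(IL_RGBA, IL_UNSIGNED_BYTE);

    width  = ilGetInteger(IL_IMAGE_WIDTH);
    height = ilGetInteger(IL_IMAGE_HEIGHT);

    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    // Pass the pixel data returned by ilGetData(), not the address of the
    // DevIL image name.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, ilGetData());

    ilBindImage(0);
    ilDeleteImages(1, &img);
}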

Issue with glGetTexLevelParameter

I'm having an issue trying to get the width and height of a texture using the glGetTexLevelParameter function. No matter what I try, the function will not set the value of the width or height variable. I checked for errors, but no error is reported. Here is my code (based on the NeHe tutorials, if that helps):
int LoadGLTextures()
{
    //load image file directly into opengl as new texture
    GLint width = 0;
    GLint height = 0;
    texture[0] = SOIL_load_OGL_texture("NeHe.bmp", SOIL_LOAD_AUTO, SOIL_CREATE_NEW_ID, SOIL_FLAG_INVERT_Y); //image must be in same place as lib
    if (texture[0] == 0)
    {
        return false;
    }
    glEnable(GL_TEXTURE_2D);
    glGenTextures(3, &texture[0]);
    glBindTexture(GL_TEXTURE_2D, texture[0]);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &width);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); //no filtering bc of GL_NEAREST, looks really bad
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    const GLubyte* test = gluErrorString(glGetError());
    cout << test << endl;
    return true;
}
I'm also using Visual Studio 2010, if that helps. The call that loads texture[0] is from the SOIL image library.
Let's break this down:
This call loads an image, creates a new texture ID and loads the image into the texture object named by this ID. In case of success the ID is returned and stored in texture[0].
texture[0] = SOIL_load_OGL_texture(
"NeHe.bmp",
SOIL_LOAD_AUTO,
SOIL_CREATE_NEW_ID,
SOIL_FLAG_INVERT_Y);
BTW: The image file does not need to be in the same directory as the library; it must be in the current working directory of the process at the time this function is called. If you didn't change the working directory, that is whatever directory your program was started from.
Check if the texture was loaded successfully:
if(texture[0] == 0)
{
return false;
}
Enabling texturing here makes little sense; glEnable calls belong in the rendering code.
glEnable(GL_TEXTURE_2D);
Okay, here's a problem. glGenTextures generates new texture IDs and places them in the array provided to it. Whatever was stored in that array before is overwritten; in your case that is the very texture ID generated and returned by SOIL_load_OGL_texture. Note that this ID is just a handle and is not garbage collected in any way. You now have, in effect, a texture object dangling in OpenGL with no way to access it, because you threw away the handle.
glGenTextures(3, &texture[0]);
Now you bind a texture object named by the newly created ID. Since this is a new ID you're effectively creating a new texture object with no image data assigned.
glBindTexture(GL_TEXTURE_2D, texture[0]);
All the following calls operate on an entirely different texture than the one created by SOIL.
How to fix the code: Remove glGenTextures. In your case it's not only redundant, it's the cause of your problem.
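For reference, a sketch of the function with that fix applied (the height query is added the same way as the width query; nothing else needs to change):

int LoadGLTextures()
{
    GLint width = 0;
    GLint height = 0;

    // SOIL creates the texture object and returns its ID; nothing else
    // needs to be generated.
    texture[0] = SOIL_load_OGL_texture("NeHe.bmp", SOIL_LOAD_AUTO,
                                       SOIL_CREATE_NEW_ID, SOIL_FLAG_INVERT_Y);
    if (texture[0] == 0)
        return false;

    // Bind the texture SOIL created and query its size.
    glBindTexture(GL_TEXTURE_2D, texture[0]);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &width);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &height);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    return true;
}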
This line:
texture[0] = SOIL_load_OGL_texture("NeHe.bmp", SOIL_LOAD_AUTO, SOIL_CREATE_NEW_ID, SOIL_FLAG_INVERT_Y);
Creates a texture, storing the OpenGL texture in texture[0].
This line:
glGenTextures(3, &texture[0]);
Creates three textures, storing them in the texture array, overwriting whatever was there.
See the problem? You get a texture from SOIL, then you immediately throw it away by overwriting it with a newly-created texture.
This is no different conceptually than the following:
int *pInt = new int(5);
pInt = new int(10);
Hm, doesn't glGenTextures(howmany, where) work just like glGenBuffers? Why do you ask for three textures but hand it a single element's address? How is that expected to work?
I think it should be
GLuint textures[3];
glGenTextures(3, textures);
so that the three generated texture names are placed in the textures array.
Or
GLuint tex1, tex2, tex3;
glGenTextures(1, &tex1);
glGenTextures(1, &tex2);
glGenTextures(1, &tex3);
so you have three separate texture names.

Problems trying to create multiple Frame Buffer Objects in OSX but not Linux

I'm writing an application that contains a collection of small scatter plots (called ProjectionAxes). Rather than re-rendering the entire plot whenever new data is acquired, I have created a texture, render to that texture, and then render the texture to a quad. Each plot is an object and has its own texture, render buffer object, and frame buffer object.
This has been working great under Linux but not under OSX. When I run the program on Linux each object creates its own texture, FBO, and RBO and everything renders fine. However, when I run the same code on OSX the objects do not generate separate FBOs but appear to all be using the same FBO.
In my test program I create two instances of ProjectionAxes. On the first call to plot() the axes detect that the textures haven't been created and then generate them. During this generation process I print the integer values of the textureId, RBOid, and FBOid. This is the output I get when I run the program under Linux:
ProjectionAxes::plot() --> Texture is invalid regenerating it!
Creating a new texture, textureId:1
Creating a new frame buffer object fboID:1 rboID:1
ProjectionAxes::plot() --> Texture is invalid regenerating it!
Creating a new texture, textureId:2
Creating a new frame buffer object fboID:2 rboID:2
And for OSX:
ProjectionAxes::plot() --> Texture is invalid regenerating it!
Creating a new texture, textureId:1
Creating a new frame buffer object fboID:1 rboID:1
ProjectionAxes::plot() --> Texture is invalid regenerating it!
Creating a new texture, textureId:2
Creating a new frame buffer object fboID:1 rboID:2
Notice that under Linux the two FBOs have different IDs, whereas under OSX they do not.
What do I need to do to indicate to OSX that I want each object to use its own FBO?
Here is the code I use to create my FBO:
void ProjectionAxes::createFBO(){
    std::cout<<"Creating a new frame buffer object";//<<std::endl;
    glDeleteFramebuffers(1, &fboId);
    glDeleteRenderbuffers(1, &rboId);

    // Generate and Bind the frame buffer
    glGenFramebuffersEXT(1, &fboId);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboId);

    // Generate and bind the new Render Buffer
    glGenRenderbuffersEXT(1, &rboId);
    glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, rboId);
    std::cout<<" fboID:"<<fboId<<" rboID:"<<rboId<<std::endl;
    glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT, texWidth, texHeight);
    glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, 0);

    // Attach the texture to the framebuffer
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, textureId, 0);
    glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, rboId);

    // If the FrameBuffer wasn't created then we have a bigger problem. Abort the program.
    if(!checkFramebufferStatus()){
        std::cout<<"FrameBufferObject not created! Are you running the newest version of OpenGL?"<<std::endl;
        std::cout<<"FrameBufferObjects are REQUIRED! Quitting!"<<std::endl;
        exit(1);
    }

    glClearColor(0.0, 0.0, 0.0, 0.0);
    glClear(GL_COLOR_BUFFER_BIT);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
}
And here is the code I use to create my texture:
void ProjectionAxes::createTexture(){
    texWidth = BaseUIElement::width;
    texHeight = BaseUIElement::height;
    std::cout<<"Creating a new texture,";

    // Delete the old texture
    glDeleteTextures(1, &textureId);
    // Generate a new texture
    glGenTextures(1, &textureId);
    std::cout<<" textureId:"<<textureId<<std::endl;

    // Bind the texture, and set the appropriate parameters
    glBindTexture(GL_TEXTURE_2D, textureId);
    glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, texWidth, texHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
    glGenerateMipmap(GL_TEXTURE_2D);

    // generate a new FrameBufferObject
    createFBO();

    // the texture should now be valid, set the flag appropriately
    isTextureValid = true;
}
Try temporarily commenting out the glDelete* calls. You're probably doing something strange with the handles.
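Beyond that, one defensive pattern (an assumption, not taken from the question's code) is to zero-initialize the handles and delete only objects that were actually generated, so a delete can never act on an uninitialized ID. The helper name destroyFBO is made up for this example:

// Handles start at 0, meaning "not created yet".
ProjectionAxes::ProjectionAxes()
    : textureId(0), fboId(0), rboId(0), isTextureValid(false) {}

void ProjectionAxes::destroyFBO()
{
    if (fboId)     { glDeleteFramebuffersEXT(1, &fboId);   fboId = 0; }
    if (rboId)     { glDeleteRenderbuffersEXT(1, &rboId);  rboId = 0; }
    if (textureId) { glDeleteTextures(1, &textureId);      textureId = 0; }
}

// createTexture()/createFBO() would then call destroyFBO() first and
// continue exactly as in the code above.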

glDeleteTextures not deleting texture

I'm trying to write a class that chains fragment shaders: a Frame Buffer Object renders to a texture with one fragment shader, that texture is rendered to another texture with another fragment shader, and so on.
I am trying to deal with a memory leak right now: when I resize my window and delete/reallocate the textures I am using, the textures are not being deleted properly.
Here is a code snippet:
//Allocate first texture
glGenTextures( 1, &texIds[0] );
glBindTexture( GL_TEXTURE_2D, texIds[0] );
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA8, screenX, screenY, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL );
//Allocate second texture
glGenTextures( 1, &texIds[1] );
glBindTexture( GL_TEXTURE_2D, texIds[1] );
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA8, screenX, screenY, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL );
//Try to free first texture -- ALWAYS FAILS
glDeleteTextures( 1, &texIds[0] );
//Try to free second texture
glDeleteTextures( 1, &texIds[1] );
When I run this with gDEBugger, it tells me "Warning: The debugged program delete a texture that does not exist. Texture name: 1" when I try to delete texIds[0]. (The reason I have them in an array right now is that I used to create and free them at the same time; however, when you free two textures at once, it fails silently on one and continues with the other.)
If I don't create texIds[1], I can free texIds[0], but as soon as I create a second texture, I can no longer free the first texture I create. Any ideas?
Perhaps the error is in the texIds array. Is it an array of GLuint?
You could have erroneously declared it as an array of a narrower type (say, a 16-bit word); glGenTextures still writes a full GLuint through the pointer, so the elements overlap and the ID later read back from element [0] is broken.
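To illustrate (the actual declaration is not shown in the question, so this is hypothetical):

// Hypothetical: if texIds had a 16-bit element type, each glGenTextures call
// would still write a full 32-bit GLuint through the pointer, the elements
// would overlap, and glDeleteTextures would later read back a value that
// never named a texture. Declaring the array with the type glGenTextures
// actually writes avoids that:
GLuint texIds[2];
glGenTextures(1, &texIds[0]);
glGenTextures(1, &texIds[1]);
glDeleteTextures(1, &texIds[0]);   // deletes the texture that was created
glDeleteTextures(1, &texIds[1]);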