Issue with glGetTexLevelParameter - C++

I'm having an issue trying to get the width and height of a texture using the glGetTexLevelParameter function. No matter what I try, the function will not set the value of the width or height variable. I check for errors but get none. Here is my code (based on the NeHe tutorials, if that helps):
int LoadGLTextures()
{
//load image file directly into opengl as new texture
GLint width = 0;
GLint height = 0;
texture[0] = SOIL_load_OGL_texture("NeHe.bmp", SOIL_LOAD_AUTO, SOIL_CREATE_NEW_ID, SOIL_FLAG_INVERT_Y); //image must be in same place as lib
if(texture[0] == 0)
{
return false;
}
glEnable(GL_TEXTURE_2D);
glGenTextures(3, &texture[0]);
glBindTexture(GL_TEXTURE_2D, texture[0]);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &width);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); //no filtering bc of GL_NEAREST, looks really bad
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
const GLubyte* test = gluErrorString(glGetError());
cout << test << endl;
return true;
}
I'm also using Visual Studio 2010, if that helps. The call that loads texture[0] is from the SOIL image library.

Let's break this down:
This call loads an image, creates a new texture ID and loads the image into the texture object named by this ID. In case of success the ID is returned and stored in texture[0].
texture[0] = SOIL_load_OGL_texture(
"NeHe.bmp",
SOIL_LOAD_AUTO,
SOIL_CREATE_NEW_ID,
SOIL_FLAG_INVERT_Y);
BTW: the image file does not have to be in the same directory as the library, but in the current working directory of the process at the time this function is called. If you didn't change the working directory, that is whatever directory your process was started from.
Check whether the texture was loaded successfully:
if(texture[0] == 0)
{
return false;
}
Enabling texturing here makes little sense; glEnable calls belong in the rendering code.
glEnable(GL_TEXTURE_2D);
Okay, here's a problem. glGenTextures generates new texture IDs and places them in the array provided to it. Whatever was stored in that array before is overwritten; in your case, that is the very texture ID generated and returned by SOIL_load_OGL_texture. Note that this is just a handle and is not garbage collected in any way. You now in fact have a texture object dangling in OpenGL and no longer have access to it, because you threw away the handle.
glGenTextures(3, &texture[0]);
Now you bind a texture object named by the newly created ID. Since this is a new ID you're effectively creating a new texture object with no image data assigned.
glBindTexture(GL_TEXTURE_2D, texture[0]);
All the following calls operate on an entirely different texture than the one created by SOIL.
How to fix the code: Remove glGenTextures. In your case it's not only redundant, it's the cause of your problem.
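For reference, here is a minimal sketch of the fixed loader (assuming the same texture array and SOIL setup as in the question; not necessarily the exact code the asker ended up with):
int LoadGLTextures()
{
    GLint width = 0;
    GLint height = 0;
    // SOIL already creates the texture object and returns its ID
    texture[0] = SOIL_load_OGL_texture("NeHe.bmp", SOIL_LOAD_AUTO, SOIL_CREATE_NEW_ID, SOIL_FLAG_INVERT_Y);
    if(texture[0] == 0)
        return false;
    // no glGenTextures here: just bind the ID that SOIL returned
    glBindTexture(GL_TEXTURE_2D, texture[0]);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &width);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &height);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    return true;
}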

This line:
texture[0] = SOIL_load_OGL_texture("NeHe.bmp", SOIL_LOAD_AUTO, SOIL_CREATE_NEW_ID, SOIL_FLAG_INVERT_Y);
Creates a texture, storing the OpenGL texture in texture[0].
This line:
glGenTextures(3, &texture[0]);
Creates three textures, storing them in the texture array, overwriting whatever was there.
See the problem? You get a texture from SOIL, then you immediately throw it away by overwriting it with a newly-created texture.
This is no different conceptually than the following:
int *pInt = new int(5);
pInt = new int(10);

Hm, doesn't glGenTextures(howmany, where) work just like glGenBuffers? Why do you assign three textures to one pointer? How is that expected to work?
I think it should be
GLuint textures[3];
glGenTextures(3, textures);
this way the three generated texture names will be placed in the textures array.
Or
GLuint tex1, tex2, tex3;
glGenTextures(1, &tex1);
glGenTextures(1, &tex2);
glGenTextures(1, &tex3);
so you have three separate texture IDs.

Related

CUDA/OpenGL Interop: Writing to surface object does not erase previous contents

I am attempting to use a CUDA kernel to modify an OpenGL texture, but am having a strange issue where my calls to surf2Dwrite() seem to blend with the previous contents of the texture, as you can see in the image below. The wooden texture in the back is what's in the texture before modifying it with my CUDA kernel. The expected output would include ONLY the color gradients, not the wood texture behind it. I don't understand why this blending is happening.
Possible Problems / Misunderstandings
I'm new to both CUDA and OpenGL. Here I'll try to explain the thought process that led me to this code:
I'm using a cudaArray to access the texture (rather than e.g. an array of floats) because I read that it's better for cache locality when reading/writing a texture.
I'm using surfaces because I read somewhere that it's the only way to modify a cudaArray
I wanted to use surface objects, which I understand to be the newer way of doing things. The old way is to use surface references.
Some possible problems with my code that I don't know how to check/test:
Am I being inconsistent with image formats? Maybe I didn't specify the correct number of bits/channel somewhere? Maybe I should use floats instead of unsigned chars?
Code Summary
You can find a full minimum working example in this GitHub Gist. It's quite long because of all the moving parts, but I'll try to summarize. I welcome suggestions on how to shorten the MWE. The overall structure is as follows:
create an OpenGL texture from a file stored locally
register the texture with CUDA using cudaGraphicsGLRegisterImage()
call cudaGraphicsSubResourceGetMappedArray() to get a cudaArray that represents the texture
create a cudaSurfaceObject_t that I can use to write to the cudaArray
pass the surface object to a kernel that writes to the texture with surf2Dwrite()
use the texture to draw a rectangle on-screen
OpenGL Texture Creation
I am new to OpenGL, so I'm using the "Textures" section of the LearnOpenGL tutorials as a starting point. Here's how I set up the texture (using the image library stb_image.h)
GLuint initTexturesGL(){
// load texture from file
int numChannels;
unsigned char *data = stbi_load("img/container.jpg", &g_imageWidth, &g_imageHeight, &numChannels, 4);
if(!data){
std::cerr << "Error: Failed to load texture image!" << std::endl;
exit(1);
}
// opengl texture
GLuint textureId;
glGenTextures(1, &textureId);
glBindTexture(GL_TEXTURE_2D, textureId);
// wrapping
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_MIRRORED_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_MIRRORED_REPEAT);
// filtering
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// set texture image
glTexImage2D(
GL_TEXTURE_2D, // target
0, // mipmap level
GL_RGBA8, // internal format (#channels, #bits/channel, ...)
g_imageWidth, // width
g_imageHeight, // height
0, // border (must be zero)
GL_RGBA, // format of input image
GL_UNSIGNED_BYTE, // type
data // data
);
glGenerateMipmap(GL_TEXTURE_2D);
// unbind and free image
glBindTexture(GL_TEXTURE_2D, 0);
stbi_image_free(data);
return textureId;
}
CUDA Graphics Interop
After calling the function above, I register the texture with CUDA:
void initTexturesCuda(GLuint textureId){
// register texture
HANDLE(cudaGraphicsGLRegisterImage(
&g_textureResource, // resource
textureId, // image
GL_TEXTURE_2D, // target
cudaGraphicsRegisterFlagsSurfaceLoadStore // flags
));
// resource description for surface
memset(&g_resourceDesc, 0, sizeof(g_resourceDesc));
g_resourceDesc.resType = cudaResourceTypeArray;
}
Render Loop
Every frame, I run the following to modify the texture and render the image:
while(!glfwWindowShouldClose(window)){
// -- CUDA --
// map
HANDLE(cudaGraphicsMapResources(1, &g_textureResource));
HANDLE(cudaGraphicsSubResourceGetMappedArray(
&g_textureArray, // array through which to access subresource
g_textureResource, // mapped resource to access
0, // array index
0 // mipLevel
));
// create surface object (compute >= 3.0)
g_resourceDesc.res.array.array = g_textureArray;
HANDLE(cudaCreateSurfaceObject(&g_surfaceObj, &g_resourceDesc));
// run kernel
kernel<<<gridDim, blockDim>>>(g_surfaceObj, g_imageWidth, g_imageHeight);
// unmap
HANDLE(cudaGraphicsUnmapResources(1, &g_textureResource));
// --- OpenGL ---
// clear
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// use program
shader.use();
// triangle
glBindVertexArray(vao);
glBindTexture(GL_TEXTURE_2D, textureId);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
glBindVertexArray(0);
// glfw: swap buffers and poll i/o events
glfwSwapBuffers(window);
glfwPollEvents();
}
CUDA Kernel
The actual CUDA kernel is as follows:
__global__ void kernel(cudaSurfaceObject_t surface, int nx, int ny){
int x = blockIdx.x * blockDim.x + threadIdx.x;
int y = blockIdx.y * blockDim.y + threadIdx.y;
if(x < nx && y < ny){
uchar4 data = make_uchar4(x % 255,
y % 255,
0, 255);
surf2Dwrite(data, surface, x * sizeof(uchar4), y);
}
}
If I understand correctly, you initially register the texture, map it once, create a surface object for the array representing the mapped texture, and then unmap the texture. Every frame, you then map the resource again, ask for the array representing the mapped texture, and then completely ignore that one and use the surface object created for the array you got back when you first mapped the resource. From the documentation:
[…] The value set in array may change every time that resource is mapped.
You have to create a new surface object every time you map the resource because you might get a different array every time. And, in my experience, you will actually get a different one every so often. It may be valid to only create a new surface object when the array actually changes; the documentation seems to allow for that, but I never tried it, so I can't tell for sure whether that works…
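A sketch of what the mapped section of the render loop might look like with the surface object created and destroyed inside the map/unmap pair (reusing the names from the question; error handling via the same HANDLE macro is assumed):
// per frame: map, rebuild the surface object, run the kernel, tear down, unmap
HANDLE(cudaGraphicsMapResources(1, &g_textureResource));
HANDLE(cudaGraphicsSubResourceGetMappedArray(&g_textureArray, g_textureResource, 0, 0));
// the array may differ from the previous frame, so create a fresh surface object for it
g_resourceDesc.res.array.array = g_textureArray;
cudaSurfaceObject_t surfaceObj = 0;
HANDLE(cudaCreateSurfaceObject(&surfaceObj, &g_resourceDesc));
kernel<<<gridDim, blockDim>>>(surfaceObj, g_imageWidth, g_imageHeight);
// destroy the per-frame surface object before unmapping
HANDLE(cudaDestroySurfaceObject(surfaceObj));
HANDLE(cudaGraphicsUnmapResources(1, &g_textureResource));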
Apart from that: You generate mipmaps for your texture. You only overwrite mip level 0. You then render the texture using mipmapping with trilinear interpolation. So my guess would be that you just happen to render the texture at a resolution that does not match the resolution of mip level 0 exactly and, thus, you will end up interpolating between level 0 (in which you wrote) and level 1 (which was generated from the original texture)…
It turns out the problem is that I had mistakenly generated mipmaps for the original wood texture, and my CUDA kernel was only modifying the level-0 mipmap. The blending I noticed was the result of OpenGL interpolating between my modified level-0 mipmap and a lower-resolution version of the wood texture.
Here's the correct output, obtained by disabling mipmap interpolation. Lesson learned!
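For completeness, a sketch of the texture-setup change that avoids the issue by not using mipmaps at all (assuming the initTexturesGL() from the question):
// filtering: use a non-mipmapped minification filter
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// ...then upload mip level 0 with glTexImage2D as before, and do not call glGenerateMipmap(GL_TEXTURE_2D)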

Loading OpenGL texture with SOIL in std::thread raises "Integer Division by Zero"

I can load a texture just fine in SOIL/OpenGL normally. No errors, everything works fine:
// this is inside my texture loading code in my texture class
// that i normally use for loading textures
image = SOIL_load_OGL_texture
(
file,
SOIL_LOAD_AUTO,
SOIL_CREATE_NEW_ID,
NULL
);
However, using that same code and calling it from a std::thread, at the line image = SOIL_load_OGL_texture I get an unhandled exception, Integer Division by Zero:
void loadMe() {
Texture* abc = new Texture("res/img/office.png");
}
void loadStuff() {
Texture* loading = new Texture("res/img/head.png"); // < always works
loadMe(); // < always works
std::thread textures(loadMe); // < always "integer division by zero"
Here's some relevant code from my Texture class:
// inside the class
private:
GLint w, h;
GLuint image;
// loading the texture (called by constructor if filename is given)
void Texture::loadImage(const char* file)
{
image = SOIL_load_OGL_texture
(
file,
SOIL_LOAD_AUTO,
SOIL_CREATE_NEW_ID,
NULL
);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, image);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &w);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &h);
glBindTexture(GL_TEXTURE_2D, 0);
if (image <= 0)
std::cout << file << " failed to load!\n";
else
std::cout << file << " loaded.\n";
glDisable(GL_TEXTURE_2D);
}
It raises the exception exactly at image = SOIL_load_OGL_texture, and when I go into the debugger, I see things like w = -816294792 and h = -816294792, but I guess that just means they haven't been set yet, as the debugger also shows that when loading the other textures.
Also, the SOIL_load_OGL_texture part of the code works fine by itself, outside of the Texture class, even in a std::thread.
Any idea what's going on here?
This is how you do it. Note that, as others have mentioned in the comments, a context needs to be made current in every thread that uses GL. In practice this means you cannot make GL API calls from multiple threads without making one thread the owner of the GL context. So if the intention is to move the image-loading overhead off the main thread, the recommended approach is to load the image file into a buffer using an image library in a separate thread, and then hand that buffer to glTexImage2D in the main thread. Until the image is loaded, a dummy texture can be displayed.
I tried checking what platform you are on (see comment above); since I did not see a response, I am assuming Linux below.
/* Regular GL context creation foo */
/* Regular attribute, uniform, shader creation foo */
/* Create a thread that does loading with SOIL in function SOIL_loader */
std::thread textureloader(SOIL_loader);
/* Wait for loader thread to finish,
thus defeating the purpose of a thread. Ideally,
only the image file read/decode should happen in separate thread */
textureloader.join();
/* Make the GL context current back again in the main thread
for other actions */
glfwMakeContextCurrent((GLFWwindow*)window);
/* Some other foo */
======
And this is the loader thread function:
void SOIL_loader()
{
glfwMakeContextCurrent((GLFWwindow*)window);
SOIL_load_OGL_texture
(
"./img_test.png",
SOIL_LOAD_AUTO,
SOIL_CREATE_NEW_ID /* or passed ID */,
NULL
);
GL_CHECK(SOIL);
}
Tested on Ubuntu 14.04, Mesa, and glfw3.
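If the goal is only to hide the file-decode cost, here is a hedged sketch of the pattern recommended above: decode in a worker thread without touching GL, then upload on the GL thread. It assumes <thread> and the SOIL header are included; the file name is just a placeholder.
// worker thread: file I/O and decode only, no GL calls
int width = 0, height = 0, channels = 0;
unsigned char* pixels = nullptr;
std::thread decoder([&]{
    pixels = SOIL_load_image("res/img/office.png", &width, &height, &channels, SOIL_LOAD_RGBA);
});
decoder.join(); // a real program would poll a flag and show a dummy texture in the meantime
// main (GL) thread: upload the decoded pixels into a texture
GLuint tex = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
SOIL_free_image_data(pixels);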

Loading texture using SOIL's OGL function and OpenGL

I have a function to load a texture from a JPEG image using SOIL.
So far I have been loading the texture with the SOIL_load_image() function and then supplying the image to OpenGL using glTexImage2D (see code below). However, my textures are upside down, so I wanted to use SOIL_load_OGL_texture() instead and supply SOIL_FLAG_INVERT_Y to flip the images. My problem, though, is that I get an unhandled exception at the SOIL_load_OGL_texture() call.
The code is almost a copy-paste from the documentation, so I don't understand why this error occurs.
(NOTE: I could invert the textures in my vertex shader, but I would like to use SOIL.)
The old way
int width;
int height;
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textures[0]);
image = SOIL_load_image(filename, &width, &height, 0, SOIL_LOAD_RGB);
if (image == NULL) {
std::cout << "An error occurred while loading image." << std::endl;
exit(EXIT_FAILURE);
}
std::cout << "Loaded first texture image" << std::endl;
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
SOIL_free_image_data(image);
What I am trying now
GLuint image;
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textures[0]);
image = SOIL_load_OGL_texture(
filename,
SOIL_LOAD_RGB,
SOIL_CREATE_NEW_ID,
SOIL_FLAG_INVERT_Y
);
if (image == 0)
cerr << "SOIL loading error: '" << SOIL_last_result() << "' (" << "res_texture.png" << ")" << endl;
And the error
Unhandled exception at 0x0F5427FF (msvcr110d.dll) in AnotherTutorial.exe: 0xC0000005: Access violation reading location 0x00000000.
Seems like there is no answer that uses SOIL, so I'll post my solution:
In the vertex shader I do:
Texcoord = vec2(texcoord.x, 1.0-texcoord.y);
gl_Position = proj * view * model * vec4( position, 1.0 );
The 1.0-texcoord.y inverts the y-axis of the image. Not as clean a solution, but it works.
void loadTexture(GLuint* texture, const char* filename){
*texture = SOIL_load_OGL_texture(filename,
SOIL_LOAD_AUTO,
SOIL_CREATE_NEW_ID,
SOIL_FLAG_NTSC_SAFE_RGB | SOIL_FLAG_MULTIPLY_ALPHA
);
if(*texture == 0){
printf("[Texture loader] \"%s\" failed to load!\n", filename);
}
}
void drawTexturedRect(int x, int y, int w, int h, GLuint texture){
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);
glDisable(GL_DEPTH_TEST);
glBegin(GL_QUADS);
glColor3f(1.0f, 1.0f, 1.0f);
glTexCoord2f(0,0);
glVertex2f(x,y);
glTexCoord2f(1,0);
glVertex2f(x+w,y);
glTexCoord2f(1,1);
glVertex2f(x+w,y+h);
glTexCoord2f(0,1);
glVertex2f(x,y+h);
glEnd();
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);
}
And then you can do:
// At initialization
GLuint texture;
loadTexture(&texture, "filename.png");
// Drawing part
drawTexturedRect(25,25,256,256,texture);
This is what I personally use, and it works perfectly. I'm using Visual Studio 2012 combined with SDL and SOIL.
SOIL_load_image returns an unsigned char*, an array holding your image data.
You then normally feed this data to GL by calling glGenTextures to get a texture ID, binding it, and uploading the data with glTexImage2D.
SOIL_load_OGL_texture returns a texture id, so you could do this instead:
texture[0] = SOIL_load_OGL_texture(...)
if your goal is to load textures 0,1,2.
As for inverting Y, the easiest and cheapest solution is to flip the texture coordinates you pass to glTexCoord, but it depends on what you are doing. If you load resources as a one-time operation at startup, the flag can't hurt, apart from some startup time probably not worth mentioning. But if you load resources dynamically in the main loop as they are needed, then the invert-Y flag (and just about any flag) can hurt performance because of the additional processing.
One big advantage of SOIL_load_image is that you can retrieve the width, height and channel count of the original image, which SOIL_load_OGL_texture doesn't provide.
If it helps, SOIL_load_OGL_texture would crash after a while when loading RGB images as RGB but worked fine all the way with SOIL_LOAD_RGBA; that could be a fix to your problem.
I still find it easier to use SOIL_load_image. I hope any of this helps. Also check the SOIL source code that ships with the library to see what's going on.
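If you want the flip without SOIL_FLAG_INVERT_Y, here is a rough sketch of flipping the rows of the SOIL_load_image buffer in place before uploading (assumes SOIL_LOAD_RGB, i.e. 3 bytes per pixel, and that <vector> and <cstring> are available):
int width, height, channels;
unsigned char* image = SOIL_load_image(filename, &width, &height, &channels, SOIL_LOAD_RGB);
const int stride = width * 3;                 // bytes per row with SOIL_LOAD_RGB
std::vector<unsigned char> row(stride);       // temporary row buffer
for (int y = 0; y < height / 2; ++y) {
    unsigned char* top = image + y * stride;
    unsigned char* bottom = image + (height - 1 - y) * stride;
    std::memcpy(row.data(), top, stride);     // swap top and bottom rows
    std::memcpy(top, bottom, stride);
    std::memcpy(bottom, row.data(), stride);
}
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
SOIL_free_image_data(image);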
Basically, in the command
SOIL_load_image(filename, &width, &height, 0, SOIL_LOAD_RGB);
the 0 (NULL) you are passing is the pointer through which the library returns the channel count, so, of course, when the library tries to access it you get:
Unhandled exception at 0x0F5427FF (msvcr110d.dll) in
AnotherTutorial.exe: 0xC0000005: Access violation reading location
0x00000000
Try declaring a variable and using it:
int channels;
SOIL_load_image(filename, &width, &height, &channels, SOIL_LOAD_RGB);
None of the solutions above worked for me.
I had the same problem just now, like the following:
SOIL_load_OGL_texture Unhandled exception at xxxxxx (msvcr120d.dll)
After reading the solution at http://www.idevgames.com/forums/thread-10281.html,
I changed my path from a relative path to an absolute path.
BTW, since I am Chinese, I also had to make sure that the path is ALL ENGLISH, without non-ASCII characters.
For example:
I change my code from
SOIL_load_OGL_texture("my_texture.bmp",
SOIL_LOAD_AUTO,
SOIL_CREATE_NEW_ID,
SOIL_FLAG_NTSC_SAFE_RGB | SOIL_FLAG_MULTIPLY_ALPHA
);
to
SOIL_load_OGL_texture("D:\temp_20160926\ConsoleApplication20\ConsoleApplication19\my_texture.bmp",
SOIL_LOAD_AUTO,
SOIL_CREATE_NEW_ID,
SOIL_FLAG_NTSC_SAFE_RGB | SOIL_FLAG_MULTIPLY_ALPHA
);
And it solved my problem perfectly.
I think this is a bug/restriction in SOIL, but you can avoid it by specifying an absolute path as I did here.
Hope it can help you too.

Problems trying to create multiple Frame Buffer Objects in OSX but not Linux

I'm writing an application that contains a collection of small scatter plots (called ProjectionAxes). Rather than re-render the entire plot whenever new data is acquired, I have created a texture, render to that texture, and then render the texture to a quad. Each plot is an object and has its own texture, render buffer object, and frame buffer object.
This has been working great under Linux but not under OSX. When I run the program on Linux, each object creates its own texture, FBO, and RBO and everything renders fine. However, when I run the same code on OSX, the objects do not generate separate FBOs but appear to all be using the same FBO.
In my test program I create two instances of ProjectionAxes. On the first call to plot() the axes detect that the textures haven't been created and generate them. During this generation process I print the integer values of the textureId, RBOid, and FBOid. This is the output I get when I run the program under Linux:
ProjectionAxes::plot() --> Texture is invalid regenerating it!
Creating a new texture, textureId:1
Creating a new frame buffer object fboID:1 rboID:1
ProjectionAxes::plot() --> Texture is invalid regenerating it!
Creating a new texture, textureId:2
Creating a new frame buffer object fboID:2 rboID:2
And for OSX:
ProjectionAxes::plot() --> Texture is invalid regenerating it!
Creating a new texture, textureId:1
Creating a new frame buffer object fboID:1 rboID:1
ProjectionAxes::plot() --> Texture is invalid regenerating it!
Creating a new texture, textureId:2
Creating a new frame buffer object fboID:1 rboID:2
Notice that under Linux the two FBOs have different IDs, whereas under OSX they do not.
What do I need to do to indicate to OSX that I want each object to use its own FBO?
Here is the code I use to create my FBO:
void ProjectionAxes::createFBO(){
std::cout<<"Creating a new frame buffer object";//<<std::endl;
glDeleteFramebuffers(1, &fboId);
glDeleteRenderbuffers(1, &rboId);
// Generate and Bind the frame buffer
glGenFramebuffersEXT(1, &fboId);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboId);
// Generate and bind the new Render Buffer
glGenRenderbuffersEXT(1, &rboId);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, rboId);
std::cout<<" fboID:"<<fboId<<" rboID:"<<rboId<<std::endl;
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT, texWidth, texHeight);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, 0);
// Attach the texture to the framebuffer
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, textureId, 0);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, rboId);
// If the FrameBuffer wasn't created then we have a bigger problem. Abort the program.
if(!checkFramebufferStatus()){
std::cout<<"FrameBufferObject not created! Are you running the newest version of OpenGL?"<<std::endl;
std::cout<<"FrameBufferObjects are REQUIRED! Quitting!"<<std::endl;
exit(1);
}
glClearColor(0.0, 0.0, 0.0, 0.0);
glClear(GL_COLOR_BUFFER_BIT);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
}
And here is the code I use to create my texture:
void ProjectionAxes::createTexture(){
texWidth = BaseUIElement::width;
texHeight = BaseUIElement::height;
std::cout<<"Creating a new texture,";
// Delete the old texture
glDeleteTextures(1, &textureId);
// Generate a new texture
glGenTextures(1, &textureId);
std::cout<<" textureId:"<<textureId<<std::endl;
// Bind the texture, and set the appropriate parameters
glBindTexture(GL_TEXTURE_2D, textureId);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, texWidth, texHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glGenerateMipmap(GL_TEXTURE_2D);
// generate a new FrameBufferObject
createFBO();
// the texture should now be valid, set the flag appropriately
isTextureValid = true;
}
Try temporarily commenting out the glDelete* function calls; you're probably doing something strange with the handles.
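A possible variant of that suggestion (a sketch, assuming the handles can be zero-initialized in the ProjectionAxes constructor): only delete handles that were actually generated, so the very first createFBO()/createTexture() calls never delete anything.
// assumed to be set in the ProjectionAxes constructor: textureId = 0; fboId = 0; rboId = 0;
void ProjectionAxes::createFBO(){
    std::cout<<"Creating a new frame buffer object";
    if (fboId != 0) glDeleteFramebuffersEXT(1, &fboId);   // only delete handles we own
    if (rboId != 0) glDeleteRenderbuffersEXT(1, &rboId);
    glGenFramebuffersEXT(1, &fboId);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboId);
    // ... rest of the original createFBO() unchanged ...
}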

OpenGL glDeleteTextures leading to memory leak?

My program renders large jpegs as textures one at a time and allows the user to switch between them by re-using the "same" texture.
I call glGenTextures(1, &texture); once at the start.
Then each time I want to swap the image I use:
FreeTexture( texture );
ROI_img = fetch_image(temp, sortVector[tPiece]);
loadTexture_Ipl( ROI_img , &texture );
here are the two functions being called:
int loadTexture_Ipl(IplImage *image, GLuint *text)
{
if (image==NULL) return -1;
glBindTexture( GL_TEXTURE_2D, *text );
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, ROI_WIDTH, ROI_HEIGHT,0, GL_BGR, GL_UNSIGNED_BYTE, image->imageData);
return 0;
}
void FreeTexture(GLuint texture)
{
glDeleteTextures( 1, &texture);
}
My problem is that after a couple of images they stop rendering (texture is all black). If I keep trying to switch I get this error message:
test(55248,0xacdf22c0) malloc: *** mmap(size=744001536) failed (error code=12)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
Mar 19 11:57:51 Ants-MacBook-Pro.local test[55248] <Error>: CGSImageDataLock: Cannot allocate memory
I thought that glDeleteTextures would free the memory each time??
Any ideas on how to better implement this?
p.s. here is a screen of the memory leaks being encountered (https://p.twimg.com/AoWVy8FCMAEK8q7.png:large)
Since you want to reuse your texture, you shouldn't delete it with glDeleteTextures. If you do that, you need to create a new texture with glGenTextures, and I don't see you doing that in the loadTexture_Ipl function.
glGenTextures and glDeleteTextures go in pairs. If you really want to delete your texture (maybe because you most likely won't be using it again), delete it like you already do, and call glGenTextures again before setting up a new texture.
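A short sketch of the reuse pattern described above, using the same loadTexture_Ipl helper as the question: generate the name once, re-upload into it on every switch, and delete it only once at shutdown.
GLuint texture;
glGenTextures(1, &texture);              // once, at startup
// every time the user switches image: no glDeleteTextures, just re-upload
ROI_img = fetch_image(temp, sortVector[tPiece]);
loadTexture_Ipl(ROI_img, &texture);      // binds the same ID and calls glTexImage2D again
glDeleteTextures(1, &texture);           // once, at shutdown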