I'm loading a texture using a class that I've written, as shown below. The aim is to create a new texture, load it with data stored in an array, and render it onto the screen. The texture is created and loaded without any problem. However, it causes a memory leak: the application's memory size keeps increasing, despite calling delete to invoke the Texture class' destructor, which deletes the texture.
void renderCurrentGrid(){
// Get the current grid (custom wrapper class, it's working fine)
Array2D<real_t> currentGrid = m_Continuum->getCurrentGrid();
// load it as a texture
Texture *currentGridTexture = new Texture(GL_TEXTURE_2D);
currentGridTexture->load(currentGrid.sizeWd(), currentGrid.sizeHt(), &currentGrid(0,0));
......
delete currentGridTexture;
}
Texture class' load function:
bool Texture::load(const size_t width, const size_t height, const float *data)
{
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
// Allocate a texture name
glGenTextures(1, &m_TextureObj);
// select current texture
glBindTexture(GL_TEXTURE_2D, m_TextureObj);
// select replace to ensure texture retains each texel's colour
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
// if texture wraps over at the edges
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glGenerateMipmap(GL_TEXTURE_2D);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE,/* &density(0,0)*/data);
m_Loaded = true;
return true;
}
and the destructor of the Texture class:
Texture::~Texture()
{
glBindTexture(m_TextureTarget, m_TextureObj);
glDeleteTextures(1, &m_TextureObj);
glBindTexture(m_TextureTarget, NULL);
}
Texture class' private data members:
private:
GLenum m_TextureTarget;
GLuint m_TextureObj;
bool m_Loaded;
Here, I've created the new texture and loaded it using the Texture class' load function. The textures render absolutely fine, but the loading causes a memory leak: the application's memory size keeps increasing when I check it in Task Manager.
However, when I comment out the line
currentGridTexture->load(....);
the memory leaks stop. I've omitted the rendering functions, as they're not relevant to the question.
I've managed to solve the problem above. In the Texture class' load function, I'm calling
glGenTextures(1, &m_TextureObj)
When calling this function, according to the OpenGL documentation, the second argument specifies an array in which the generated texture names are stored. In order to store the texture, it allocates a certain amount of CPU memory.
The load function was being called on every rendering update, so OpenGL was allocating resources every time glGenTextures was called within that function. Although glDeleteTextures was being called by the destructor, which in theory should deallocate the resources, according to this link that's not the case; quote below:
Just because you tell OpenGL that you're done with a texture object doesn't mean that OpenGL must immediately relinquish that memory. Doing that generally requires some pretty heavy-weight operations. So instead, drivers will defer this until later.
As I was creating and deleting a texture every update, I came up with a solution that lets me reuse the same texture object and load it with different data, to prevent OpenGL from allocating resources every update.
I moved the glGenTextures call to the constructor of the Texture class, so that it is called only once, when the Texture object is instantiated. I now create one texture object and load it with new data every update.
To ensure glGenTextures is called only once, I also removed the instantiation and deletion of the Texture object from the rendering function, i.e. these lines:
Texture *currentGridTexture = new Texture(GL_TEXTURE_2D);
...
delete currentGridTexture;
Instead, I declared currentGridTexture as a private data member of the rendering class, instantiated it in that class's constructor, and deleted it in its destructor, so that only one texture object is created and deleted during the runtime of my application, instead of one per update.
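For reference, here is a minimal sketch of what the reworked class could look like under this scheme. It assumes the member names shown above; note also that the type argument is GL_FLOAT here to match the float* data, whereas the original code passed GL_UNSIGNED_BYTE:

Texture::Texture(GLenum target)
    : m_TextureTarget(target), m_TextureObj(0), m_Loaded(false)
{
    // Generate the texture name exactly once, when the object is created
    glGenTextures(1, &m_TextureObj);
}

bool Texture::load(const size_t width, const size_t height, const float *data)
{
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    // Re-bind the existing texture name instead of generating a new one
    glBindTexture(m_TextureTarget, m_TextureObj);
    glTexParameterf(m_TextureTarget, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameterf(m_TextureTarget, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    // Respecify the image on the same texture object; the driver can reuse
    // the existing storage instead of allocating a new object every update
    glTexImage2D(m_TextureTarget, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_FLOAT, data);
    m_Loaded = true;
    return true;
}

If the grid dimensions never change between updates, glTexSubImage2D could replace the glTexImage2D call above to update the pixels without respecifying the storage at all.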
Related
I was trying to work out the relationship between glActiveTexture(...) and glBindTexture(...), and I found an awesome answer here; the very top answer (by the author/user Alfonse) gives pseudocode for how the two functions behave, and I understood most of it. But in it he mentions that in calls such as this:
glActiveTexture(GL_TEXTURE0 + 5);
glBindTexture(GL_TEXTURE_2D, object);
glTexImage2D(GL_TEXTURE_2D, ...);
but one often binds a texture to the context just to upload some data or to modify it. It doesn’t matter at that point which texture unit you bind it to, so there’s no need to set the current texture unit. glTexImage2D doesn’t care if the current active texture is 0, 1, 40, or whatever.
So my problem is:
when generating two textures we do something like this:
glGenTextures(1, &texture1);
glBindTexture(GL_TEXTURE_2D, texture1);
glTextureParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glGenTextures(1, &texture2);
glBindTexture(GL_TEXTURE_2D, texture2);
glTextureParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
//some more code
//inside the render loop
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture1);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texture2);
glDrawElements(...); glSwapBuffers();
//end of render loop
Notice that in the code before the render loop, I have called glBindTexture(...) for two textures without calling glActiveTexture(...). Since the default active texture unit is GL_TEXTURE0, does this mean the parameters set for texture1 are overwritten by texture2?
No. Texture parameters (set by glTexParameter) are set for a specific texture (the one that's currently bound to the active texture unit), not for a texture unit (that is, not for one of the GL_TEXTUREi units).
Note that your use of glTextureParameteri is incorrect. Rather than GL_TEXTURE_2D, it expects the handle of a texture, as returned by glGenTextures [1]. You're confusing it with glTexParameteri, which indeed can be called with GL_TEXTURE_2D.
[1] As noted by @derhass, glGenTextures (unlike glCreateTextures) merely reserves the handle. A texture with this handle is created only when you pass it to glBindTexture. This doesn't matter if you use glTexParameter, but if you want to use glTextureParameteri and other functions that operate directly on texture handles, it can be important.
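To make the distinction concrete, here is a short sketch (the texture names are placeholders): glTexParameteri operates on whatever texture is bound to the given target, while glTextureParameteri (core since OpenGL 4.5, direct state access) takes the texture handle itself and needs no bind:

// Bind-to-edit style: the target enum identifies the currently bound texture
GLuint tex1;
glGenTextures(1, &tex1);
glBindTexture(GL_TEXTURE_2D, tex1); // binding is what creates the texture object
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);

// Direct state access style: pass the handle, no bind required.
// glCreateTextures (unlike glGenTextures) initializes the object immediately,
// so the handle is valid for glTexture* calls right away.
GLuint tex2;
glCreateTextures(GL_TEXTURE_2D, 1, &tex2);
glTextureParameteri(tex2, GL_TEXTURE_WRAP_S, GL_REPEAT);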
This is the only part of the code that could be buggy:
GLuint tex_name;
glGenTextures(1, &tex_name);
// set id to the gl_texture_id map for later use
gl_texture_id[t] = tex_name;
// bind texture
glBindTexture(GL_TEXTURE_2D, tex_name);
// set texture filtering parameters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glGenerateMipmap(GL_TEXTURE_2D);
// load texture data
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, t->width(), t->height(), 0, GL_BGRA, GL_UNSIGNED_BYTE, t->data());
Could you see something wrong in this code?
Enabling GL_TEXTURE_2D with glEnable(GL_TEXTURE_2D) doesn't make a difference. The texture coordinates are right, and the fragment and vertex shaders are right for sure.
SOLVED
That was not the issue; I'm still calling glGenerateMipmap(...) before glTexImage2D(...). The real problem was that I passed GL_RGBA as the format when my image is in GL_RGB format. Additionally, my t->data() array was height*width*sizeof(GL_FLOAT) bytes long while I was passing GL_UNSIGNED_BYTE as the type parameter, causing data loss. Although this works, you're still right: preceding glTexImage2D with glGenerateMipmap causes weird effects on NVIDIA hardware, while life is (strangely) beautiful on ATI GPUs.
Why are you calling glGenerateMipmap (...) on a texture that has no data store?
You need to allocate at least image level 0 before this will work (e.g. by calling glTexImage2D(...)). You should be calling this function after you draw into your texture each frame; the way you have it right now, it actually does nothing, and when you finally draw into your texture you are only generating an image for one LOD. I would remove the mipmap texture filter if you are not going to re-compute the mipmaps every time you give texture image level 0 new data.
I also do not see what this has to do with rendering to a texture. You are passing image data to your texture from client memory. Usually when you render to a texture, this is done using either a pixel buffer (old school) or a framebuffer object.
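As a sketch of the order described above (tex, width, height, pixels, and newPixels are placeholder names): allocate level 0 first, then build the mipmap chain from it, and regenerate the chain whenever level 0 changes:

glBindTexture(GL_TEXTURE_2D, tex);
// Allocate and fill image level 0 first...
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// ...then derive the remaining mipmap levels from it
glGenerateMipmap(GL_TEXTURE_2D);

// Later, after each update of level 0, regenerate the chain again
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGBA, GL_UNSIGNED_BYTE, newPixels);
glGenerateMipmap(GL_TEXTURE_2D);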
The last question I asked was about how to draw different models in OpenGL. I've got that covered, but now I'm stuck on textures. Once again, I can easily get textures to work, as long as I only ever use one texture.
The class that loads the textures:
public class Textures
{
public static int tex, tex2;
public static void load() throws IOException
{
tex = glGenTextures();
tex2 = glGenTextures();
load(tex2, "moreModels/robot.jpg", "jpg");
load(tex, "moreModels/sample.jpg", "jpg");
}
public static void load(int tex, String name, String type) throws IOException
{
glBindTexture(GL43.GL_TEXTURE_2D, tex);
glTexParameterIi(GL43.GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameterIi(GL43.GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glGenerateMipmap(GL43.GL_TEXTURE_2D);
glTexParameterIi(GL43.GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameterIi(GL43.GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR);
Texture texture = TextureLoader.getTexture(type, ResourceLoader.getResourceAsStream(name));
byte[] pixels = texture.getTextureData();
ByteBuffer buffer = BufferUtils.createByteBuffer(pixels.length);
buffer.put(pixels);
buffer.flip();
glTexImage2D(GL43.GL_TEXTURE_2D, 0, GL_BGR, texture.getImageWidth(), texture.getImageHeight(), 0, GL_BGR, GL_BYTE, buffer);
}
}
Now, I would think that simply calling glBindTexture(GL43.GL_TEXTURE_2D, textureID) would change the texture, but no: if I call the glBindTexture method at any time, everything is rendered black.
Is it the same as before, where I have to run the entire load method every time I need to change the texture? (Perhaps saving the Texture object, so I don't have to load it over and over again.) And if so, what's the point of glGenTextures() when it apparently forgets everything about the texture when I switch?
I think you have a slightly wrong understanding of OpenGL textures. You must remember: every time you load a new image, you have to create a new texture object using the routine you have shown in your question. Additionally, if you just want to update (replace the pixels of) an existing texture, you should use glTexSubImage2D. This way you don't create a new texture object but update the data in the existing one.
Now, if you want to have several textures, let's say one as a color map and another as a normal map, then you have to create two textures from scratch, which will be accessed by your fragment shader samplers.
Based on your code it is hard to see how you do the whole setup (rendering loop, shaders). If you supply more info, it will be possible to assist you further.
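In the meantime, here is a rough sketch of the color map/normal map case (written in C-style GL for brevity; the LWJGL calls are the same, and the uniform names are made up). Each texture object is bound to its own texture unit, and each shader sampler is pointed at a unit:

// Assumes colorTex and normalTex already contain their image data
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, colorTex);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, normalTex);

// Point the fragment shader's samplers at those units
glUseProgram(program);
glUniform1i(glGetUniformLocation(program, "colorMap"), 0);
glUniform1i(glGetUniformLocation(program, "normalMap"), 1);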
In my application I am using glTexImage2D extensively. I copy part of an image and render it as a texture; I do this frequently, on every mouse click. I pass it as a byte array for rendering. The memory is being eaten up, and swap memory is also being allocated. Is it a memory leak, or is it because glTexImage2D holds references, or something else?
Edit:
//I allocate the memory once
GLuint texname;
texture_data = new GLubyte[width*height];
// Each time user click I repeat the following code (this code in in callback)
// Before this code the texture_data is modified to reflect the changes
glGenTextures(3, &texname);
glBindTexture(GL_TEXTURE_2D, texname);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE,texture_data);
I hope the close requests and downvoting will stop now!
Assuming you're generating a new texture with glGenTextures every time you call glTexImage2D, you are wasting memory, and leaking it if you don't keep track of all the textures you generate. glTexImage2D takes the input data and stores it in video card memory. The texture name that you bind before calling glTexImage2D (the one you generate with glGenTextures) is a handle to that chunk of video card memory.
If your texture is large and you allocate new memory to store another copy of it every time you use it, you will quickly run out of memory. The solution is to call glTexImage2D once during your application's initialization and only call glBindTexture when you want to use the texture. If you want to change the texture itself when you click, call only glBindTexture and glTexImage2D. If your new image is the same size as the previous image, you can call glTexSubImage2D to tell OpenGL to overwrite the old image data instead of deleting it and uploading the new one.
UPDATE
In response to your new code, I'm updating my answer with a more specific one. You're dealing with OpenGL textures in the wrong way entirely. The output of glGenTextures is a GLuint[], not a String or char[]. For every texture you generate with glGenTextures, OpenGL gives you back a handle (an unsigned integer) to a texture. This handle stores the state you give it with glTexParameteri, as well as a chunk of memory on the graphics card if you give it data with glTexImage[1/2/3]D. It's up to you to store the handle and send it new data when you want to update it. If you overwrite the handle or forget about it, the data still stays on the graphics card, but you can't access it. You're also telling OpenGL to generate three textures when you only need one.
Seeing as texture_data is of a fixed size, you can update the texture with glTexSubImage2D instead of glTexImage2D. Here is your code modified to avoid the memory leak from this issue:
texture_data = new GLubyte[width*height*3](); //3 bytes per GL_RGB pixel
GLuint texname; //handle to a texture
glGenTextures(1, &texname); //Gen a new texture and store the handle in texname
//These settings stick with the texture that's bound. You only need to set them
//once.
glBindTexture(GL_TEXTURE_2D, texname);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
//allocate memory on the graphics card for the texture. It's fine if
//texture_data doesn't have any data in it, the texture will just appear black
//until you update it.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB,
GL_UNSIGNED_BYTE, texture_data);
...
//bind the texture again when you want to update it.
glBindTexture(GL_TEXTURE_2D, texname);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGB,
GL_UNSIGNED_BYTE, texture_data);
...
//When you're done using the texture, delete it. This frees all of the graphics
//card memory associated with the texture (note that it does not zero out the
//texname variable itself). If you don't call this, the texture will stay in
//graphics card memory until you close the application.
glDeleteTextures(1, &texname);
I've got a strange problem here: I have a potentially large (as in up to 500 MB) 3D texture which is created several times per second. The size of the texture might change, so reusing the old texture is not an option every time. The logical step to avoid memory exhaustion is to delete the texture every time it is no longer used (using glDeleteTextures), but the program crashes with a read or write access violation pretty soon. The same thing happens with glDeleteBuffers when called on the buffer I use to update the texture.
In my eyes this can't happen, as the glDelete* functions are pretty fail-safe. If you give them a GL handle which does not correspond to an object, they just don't do anything.
The interesting thing is that if I just don't delete the textures and buffers, the program runs fine until it eventually runs out of memory on the graphics card.
This is running on Windows XP 32-bit, an NVIDIA GeForce 9500 GT with 266.58 drivers; the programming language is C++ in Visual Studio 2005.
Update
Apparently glDelete is not the only function affected. I just got violations in several other methods (which wasn't the case yesterday) ... looks like something is damn broken here.
Update 2
This shouldn't fail, should it?
template <> inline
Texture<GL_TEXTURE_3D>::Texture(
GLint internalFormat,
glm::ivec3 size,
GLint border ) : Wrapper<detail::gl_texture>()
{
glGenTextures(1,&object.t);
std::vector<GLbyte> tmp(glm::compMul(size)*4);
glTextureImage3DEXT(
object, // texture
GL_TEXTURE_3D, // target
0, // level
internalFormat, // internal format
size.x, size.y, size.z, // size
border, // border
GL_RGBA, // format
GL_BYTE, // type
&tmp[0]); // don't load anything
}
fails with:
Exception (first chance) at 0x072c35c0: 0xC0000005: Access violation while writing to position 0x00000004.
Unhandled exception at 0x072c35c0 in Project.exe: 0xC0000005: Access violation while writing to position 0x00000004.
Best guess: something is messing up the program memory?
I don't know why glDelete would crash, but I am fairly certain you don't need it anyway and are overcomplicating this.
glGenTextures creates a 'name' for your texture. glTexImage3D gives OpenGL some data to attach to that name. If my understanding is correct, there is no reason to delete the name when you don't want the data anymore.
Instead, you should simply call glTexImage3D again on the same texture name and trust that the driver will know that your old data is no longer needed. This allows you to respecify a new size each time, instead of specifying a maximum size first and then calling glTexSubImage3D, which would make actually using the data difficult since the texture would still retain its maximum size.
Below is a silly test in Python (pyglet needed) that allocates a whole bunch of textures (just to check that the GPU memory usage measurement in GPU-Z actually works), then re-allocates new data to the same texture every frame, with a random new size and some random data, just to work around any optimizations that might exist if the data stayed constant.
It's (obviously) slow as hell, but it definitely shows, at least on my system (Windows Server 2003 x64, NVIDIA Quadro FX 1800, drivers 259.81), that GPU memory usage does NOT go up while looping over the re-allocation of the texture.
import pyglet
from pyglet.gl import *
import random

def toGLArray(input):
    return (GLfloat*len(input))(*input)

w, h = 800, 600
AR = float(h)/float(w)
window = pyglet.window.Window(width=w, height=h, vsync=False, fullscreen=False)

def init():
    glActiveTexture(GL_TEXTURE1)
    tst_tex = GLuint()
    some_data = [11.0, 6.0, 3.2, 2.8, 2.2, 1.90, 1.80, 1.80, 1.70, 1.70, 1.60, 1.60, 1.50, 1.50, 1.40, 1.40, 1.30, 1.20, 1.10, 1.00]
    some_data = some_data * 1000*500
    # allocate a few useless textures just to see GPU memory load go up in GPU-Z
    for i in range(10):
        dummy_tex = GLuint()
        glGenTextures(1, dummy_tex)
        glBindTexture(GL_TEXTURE_2D, dummy_tex)
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, 1000, 1000, 0, GL_RGBA, GL_FLOAT, toGLArray(some_data))
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
    # our real test texture
    glGenTextures(1, tst_tex)
    glBindTexture(GL_TEXTURE_2D, tst_tex)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, 1000, 1000, 0, GL_RGBA, GL_FLOAT, toGLArray(some_data))
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)

def world_update(dt):
    pass
pyglet.clock.schedule_interval(world_update, 0.015)

@window.event
def on_draw():
    glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT)
    # randomize texture size and data
    size = random.randint(1, 1000)
    data = [random.randint(0, 100) for i in xrange(size)]
    data = data*1000*4
    # just to see our draw calls 'tick'
    print pyglet.clock.get_fps()
    # reallocate texture every frame
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, 1000, size, 0, GL_RGBA, GL_FLOAT, toGLArray(data))

def main():
    init()
    pyglet.app.run()

if __name__ == '__main__':
    main()
Sprinkle glGetError() calls throughout your code. I would wager you are getting caught out by the fact that glDelete doesn't actually destroy the object; the object may remain in use for several frames longer. As such, I suspect you are running out of memory (i.e. glGetError is returning GL_OUT_OF_MEMORY).
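One low-effort way to do that sprinkling, as a sketch (the macro name is made up):

#include <cstdio>

// Report the first pending GL error, if any, with a caller-supplied label
#define GL_CHECK(where) \
    do { \
        GLenum glcheck_err = glGetError(); \
        if (glcheck_err != GL_NO_ERROR) \
            std::fprintf(stderr, "GL error 0x%04x after %s\n", glcheck_err, where); \
    } while (0)

// Usage, e.g. around the suspect calls:
//   glTexImage3D(...);          GL_CHECK("glTexImage3D");
//   glDeleteTextures(1, &tex);  GL_CHECK("glDeleteTextures");

If glGetError reports GL_OUT_OF_MEMORY shortly before the crash, that supports the theory above.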