Access Violation on glDelete* - opengl

I have a strange problem here: a potentially large (up to 500 MB) 3D texture is created several times per second. The size of the texture can change, so reusing the old texture is not always an option. The logical step to avoid running out of memory is to delete the texture whenever it is no longer used (via glDeleteTextures), but the program soon crashes with a read or write access violation. The same thing happens with glDeleteBuffers when it is called on the buffer I use to update the texture.
In my eyes this shouldn't be possible, as the glDelete* functions are pretty fail-safe: if you pass them a GL handle that does not name a corresponding object, they simply do nothing.
The interesting thing is that if I just don't delete the textures and buffers, the program runs fine until it eventually runs out of memory on the graphics card.
This is running on Windows XP 32-bit, an NVIDIA GeForce 9500 GT with 266.58 drivers; the programming language is C++ in Visual Studio 2005.
Update
Apparently glDelete* is not the only thing affected. I just got access violations in several other GL calls (which wasn't the case yesterday), so it looks like something is seriously broken here.
Update 2
This shouldn't fail, should it?
template <> inline
Texture<GL_TEXTURE_3D>::Texture(
    GLint internalFormat,
    glm::ivec3 size,
    GLint border ) : Wrapper<detail::gl_texture>()
{
    glGenTextures(1, &object.t);
    std::vector<GLbyte> tmp(glm::compMul(size) * 4);
    glTextureImage3DEXT(
        object,                  // texture
        GL_TEXTURE_3D,           // target
        0,                       // level
        internalFormat,          // internal format
        size.x, size.y, size.z,  // size
        border,                  // border
        GL_RGBA,                 // format
        GL_BYTE,                 // type
        &tmp[0]);                // don't load anything
}
fails with:
Exception (first chance) at 0x072c35c0: 0xC0000005: Access violation while writing to position 0x00000004.
Unhandled exception at 0x072c35c0 in Project.exe: 0xC0000005: Access violation while writing to position 0x00000004.
Best guess: something is corrupting the program's memory?

I don't know why glDelete would crash, but I am fairly certain you don't need it anyway and are overcomplicating this.
glGenTextures creates a 'name' for your texture. glTexImage3D gives OpenGL some data to attach to that name. If my understanding is correct, there is no reason to delete the name when you don't want the data anymore.
Instead, you should simply call glTexImage3D again on the same texture name and trust that the driver will know that your old data is no longer needed. This allows you to respecify a new size each time, instead of specifying a maximum size first and then calling glTexSubImage3D, which would make actually using the data difficult since the texture would still retain its maximum size.
Below is a silly test in Python (pyglet needed) that allocates a whole bunch of textures (just to check that the GPU memory usage measurement in GPU-Z actually works), then re-allocates new data to the same texture every frame, with a random new size and some random data, just to work around any optimizations that might kick in if the data stays constant.
It's (obviously) slow as hell, but it definitely shows, at least on my system (Windows Server 2003 x64, NVIDIA Quadro FX 1800, drivers 259.81), that GPU memory usage does NOT go up while looping over the re-allocation of the texture.
import pyglet
from pyglet.gl import *
import random

def toGLArray(input):
    return (GLfloat*len(input))(*input)

w, h = 800, 600
AR = float(h)/float(w)
window = pyglet.window.Window(width=w, height=h, vsync=False, fullscreen=False)

def init():
    glActiveTexture(GL_TEXTURE1)
    tst_tex = GLuint()
    some_data = [11.0, 6.0, 3.2, 2.8, 2.2, 1.90, 1.80, 1.80, 1.70, 1.70, 1.60, 1.60, 1.50, 1.50, 1.40, 1.40, 1.30, 1.20, 1.10, 1.00]
    some_data = some_data * 1000*500
    # allocate a few useless textures just to see GPU memory load go up in GPU-Z
    for i in range(10):
        dummy_tex = GLuint()
        glGenTextures(1, dummy_tex)
        glBindTexture(GL_TEXTURE_2D, dummy_tex)
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, 1000, 1000, 0, GL_RGBA, GL_FLOAT, toGLArray(some_data))
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
    # our real test texture
    glGenTextures(1, tst_tex)
    glBindTexture(GL_TEXTURE_2D, tst_tex)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, 1000, 1000, 0, GL_RGBA, GL_FLOAT, toGLArray(some_data))
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)

def world_update(dt):
    pass
pyglet.clock.schedule_interval(world_update, 0.015)

@window.event
def on_draw():
    glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT)
    # randomize texture size and data
    size = random.randint(1, 1000)
    data = [random.randint(0, 100) for i in xrange(size)]
    data = data*1000*4
    # just to see our draw calls 'tick'
    print pyglet.clock.get_fps()
    # reallocate texture every frame
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, 1000, size, 0, GL_RGBA, GL_FLOAT, toGLArray(data))

def main():
    init()
    pyglet.app.run()

if __name__ == '__main__':
    main()
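Translated back to your C++ 3D-texture case, the same respecify-in-place idea would look roughly like the sketch below. This is only an illustration under assumptions (a plain GLuint handle and glTexImage3D on a bound texture, rather than your wrapper class or the EXT_direct_state_access call):

// Create the texture object once, up front.
GLuint volumeTex = 0;
glGenTextures(1, &volumeTex);

// Whenever the contents (and possibly the size) change, respecify level 0 on the
// same texture name instead of deleting and regenerating the object.
void updateVolume(GLint internalFormat, int w, int h, int d, const void* data)
{
    glBindTexture(GL_TEXTURE_3D, volumeTex);
    glTexImage3D(GL_TEXTURE_3D, 0, internalFormat,
                 w, h, d, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, data);
    // The driver is now free to release or reuse the storage of the previous image.
}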

Sprinkle glGetError() calls throughout your code. I would wager you are getting caught out by the fact that glDelete doesn't destroy the object immediately; the object may remain in use for several more frames. As such, I suspect you are running out of memory (i.e. glGetError is returning GL_OUT_OF_MEMORY).
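For the "sprinkle glGetError()" advice, a small helper keeps the checks readable. A minimal sketch (the macro name is made up, and it assumes the usual GL headers plus <cstdio>):

// Hypothetical debug helper: call it after suspicious GL calls.
#define GL_CHECK(label)                                                     \
    do {                                                                    \
        GLenum err = glGetError();                                          \
        if (err != GL_NO_ERROR)                                             \
            std::fprintf(stderr, "GL error 0x%04X after %s\n", err, label); \
    } while (0)

// Usage:
//   glDeleteTextures(1, &tex);
//   GL_CHECK("glDeleteTextures");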

Related

Loading image to opengl texture qt c++

I'm using OpenGL calls in Qt with C++ in order to display images on the screen. To get an image into a texture, I first read it as a QImage and then use the following code to load it into the texture:
void imageDisplay::loadImage(const QImage &img)
{
    glEnable(GL_TEXTURE_2D);               // Enable texturing
    glBindTexture(GL_TEXTURE_2D, texture); // Set as the current texture
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, img.width(), img.height(),
                 0, GL_BGRA, GL_UNSIGNED_BYTE, img.bits());
    glFinish();
    glDisable(GL_TEXTURE_2D);
}
However, when profiling performance, I suspect this is not the most efficient way of doing this. Performance is critical for the application that I am developing, and I would really appreciate any suggestions on how to improve this part.
(BTW - reading the image is done in a separate module from the one that displays it - is it possible to read and load into a texture, and then hand the texture to the displaying object?)
Thank you
Kick that glFinish() out; it hurts performance badly and you don't need it at all.
It's hard to say without looking at your profiling results, but a few things to try:
You are sending your data via glTexImage2D() in GL_BGRA format and having the driver reformat it. Does it work any faster when you pre-format the bits? (It would be surprising, but you never know what drivers do under the hood.)
QImage imageUpload = imageOriginal.convertToFormat( QImage::Format_ARGB32 ).rgbSwapped();
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, imageUpload.width(), imageUpload.height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, imageUpload.bits() );
Are you populating that QImage from a file on disk? How big is it? If the sheer volume of image data is the problem and it's a static texture, you can try compressing the image to a DXT/S3TC format offline and saving that for later, then read and upload those bits at runtime instead (see the sketch after these suggestions). This reduces the number of bytes transferred to the GPU (and stored in GPU memory) to 1/6 or 1/4 of the original. If you need to generate the QImage at runtime and can't precompress, this will probably slow things down instead, so keep that in mind.
Is the alpha combine important? That is, if you remove glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);, does it help? I wonder if that's causing some unwanted churn, i.e. the texture being downloaded to the host, modified, then sent back or something...
I agree that you should remove the glFinish(); unless you have a VERY specific reason for it.
How often do you need to do this? Every frame?
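To illustrate the precompressed-texture suggestion above: once the DXT blocks exist on disk, the runtime upload could look something like this sketch. It assumes the file already contains raw DXT1 blocks with no header and that GL_EXT_texture_compression_s3tc is available; the function and parameter names are hypothetical:

// Upload precompressed DXT1 data directly, skipping any driver-side conversion.
void uploadPrecompressedDXT1(GLuint tex, int width, int height,
                             const unsigned char* blocks, int byteCount)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                           GL_COMPRESSED_RGB_S3TC_DXT1_EXT,
                           width, height, 0,
                           byteCount,   // size of the compressed data in bytes
                           blocks);     // the blocks are stored on the GPU as-is
}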
What about the STB image loader?
It gives you raw pixel data in a buffer: it returns a pointer to a byte buffer which you can use and then release after the data has been loaded into OpenGL.
int width, height, comp;
// Ask stb_image for 4 components since it's RGBA (demo only).
unsigned char *data = stbi_load(filename, &width, &height, &comp, 4);
glGenBuffers(1, &m_vbo);
glBindBuffer(GL_ARRAY_BUFFER, m_vbo);
glBufferData(GL_ARRAY_BUFFER, vertexByteSize, this->Vertices, GL_STATIC_DRAW);
glGenTextures(...);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D( ... );
// you can generate mipmaps here
// Important: free the CPU-side buffer once the data has been uploaded.
stbi_image_free(data);
You can find STB (the stb_image header) here:
https://github.com/nothings/stb
Drawing with mipmaps in OpenGL improves performance.
Generate the mipmaps after you have called glTexImage2D:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
glGenerateMipmap(GL_TEXTURE_2D);
Good luck

OpenGL Invalid Texture or State

We are developing a C++ plug-in within an OpenGL application. The application will call a "render" method on our plug-in as necessary. While rendering our textures, we noticed that sometimes some of the textures are drawn completely white even though they are created with valid data. It appears to be random about which texture and when. While investigating what could cause some of the textures to render white, I noticed that simply trying to retrieve the size of a texture (even for the ones that render correctly) doesn't work. Here is the simple code to create the texture and retrieve its size:
GLuint textureId;
glGenTextures(1, &textureId);
glBindTexture(GL_TEXTURE_2D, textureId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, imageWidth, imageHeight, 0,
             GL_BGRA, GL_UNSIGNED_BYTE, imageData);
// try to lookup the size of the texture
int textureWidth = 0, textureHeight = 0;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &textureWidth);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &textureHeight);
glBindTexture(GL_TEXTURE_2D, 0);
The image width and height passed to glTexImage2D are 1536 x 1536, but the values returned from glGetTexLevelParameter are 16384 x 256. In fact, any width and height that I pass to glTexImage2D results in an output of 16384 x 256. If I pass a width and height of 64 x 64, I still get back 16384 x 256.
I am using the same simple texture load/render code in another standalone test application and it works correctly all the time. However, I get these white textures when I use the code within this larger application. I have also verified that glGetError() returns 0.
I am assuming the containing application is setting some OpenGL state that is causing problems when we try to render our textures. Do you have any suggestions for things to check that could cause these white textures OR invalid texture dimensions?
Update
Both my test application that renders correctly and the integrated application that doesn't render correctly are running within a VM on Windows 7 with Accelerated 3D Graphics enabled. Here is the VM environment:
CentOS 7.0
OpenGL 2.1 Mesa 9.2.5
Did you check that you have a valid OpenGL context active when this code is called? The values you get back may simply be whatever was already in the variables, since they are not modified if glGetTexLevelParameter fails for some reason. Note that glGetError may return GL_NO_ERROR if there is no OpenGL context active.
To check whether there is an OpenGL context, use wglGetCurrentContext (Windows), glXGetCurrentContext (X11/GLX) or CGLGetCurrentContext (Mac OS X CGL) to query the active OpenGL context; if none is active, all of these functions return NULL.
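As a minimal sketch of that check on Windows (assuming <windows.h> and the WGL declarations plus <cstdio> are available):

#include <windows.h>
#include <cstdio>

// Call this right before the texture queries; if no context is current on this
// thread, every gl* call after it is effectively a silent no-op.
bool haveCurrentGLContext()
{
    if (wglGetCurrentContext() == NULL) {
        std::fprintf(stderr, "No OpenGL context is current on this thread!\n");
        return false;
    }
    return true;
}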
Just FYI: you should use GLint to retrieve integer values from OpenGL. The reason is that the OpenGL types have very specific sizes, which may differ from the primitive C types of the same name. For example, a C unsigned int may be anywhere from 16 to 64 bits in size, while an OpenGL GLuint is always fixed at 32 bits.
https://www.opengl.org/sdk/docs/man/docbook4/xhtml/glGetTexLevelParameter.xml
glGetTexLevelParameter returns the texture level parameters for the active texture unit.
Try glActiveTexture on all the units. See if you are getting default values.
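For example, a quick diagnostic loop along these lines (a sketch only; textureId is the handle from the question, and a valid context plus <cstdio> are assumed):

GLint maxUnits = 0;
glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &maxUnits);
for (GLint unit = 0; unit < maxUnits; ++unit)
{
    glActiveTexture(GL_TEXTURE0 + unit);
    glBindTexture(GL_TEXTURE_2D, textureId);
    GLint w = 0, h = 0;
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &w);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &h);
    std::printf("unit %d: level 0 reports %d x %d\n", (int)unit, (int)w, (int)h);
}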

Lots of big textures is slow. Best way to speed up?

I am writing a terrain engine that has a lot of large textures draped over a heightmap. These textures are generated by rendering a bunch of stuff and then using the glCopyTexSubImage2D command a few times.
All fine and well (on the speed and quality fronts), but when I make a lot of them (more than 45 textures of 1 Mpixel each) my framerate takes a nosedive from 60 to about 2. My first thought was that something (GPU or CPU) has to be pulling all million pixels per texture each time it is rendered, which would definitely slow things down, so I tried what I thought was the solution: implementing mipmapping.
So I pull all the RGB values into an array and pass it back into gluBuild2DMipmaps. (Is this not wasteful? I ask for some data and then give it right back. Is there a better way to do this with what I have (see below)?)
Now, the mid-distant textures look terrible and bland and I am no better on the speed front.
Is there some way to get more detail on the further-out textures while speeding up my rendering? Bear in mind that I am using freeglut and so am rather limited to OpenGL 2.
[EDIT: some code samples]
The generation of the texture:
//Only executes once as then the texture is defined.
if (TextureNumber == -1)
{
    glGenTextures(1, &TextureNumber);
    glBindTexture(GL_TEXTURE_2D, TextureNumber);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    // ... Other not directly related stuff
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, TEXTURE_SIZE, TEXTURE_SIZE, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
}

//The texture is built up a little bit at a time, over a number of calls.
// Do some rendering
// ...
glBindTexture(GL_TEXTURE_2D, TextureNumber);
//And copy it into the big texture
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, texX * _patch_size, texY * _patch_size, 0, 0, _patch_size, _patch_size);
Finally, this is run once:
unsigned char* dat = new unsigned char[TEXTURE_SIZE * TEXTURE_SIZE * 3];
glBindTexture(GL_TEXTURE_2D, TextureNumber);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGB, GL_UNSIGNED_BYTE, dat);
gluBuild2DMipmaps(GL_TEXTURE_2D, 3, TEXTURE_SIZE, TEXTURE_SIZE, GL_RGB, GL_UNSIGNED_BYTE, dat);
finishedTexture = true;
delete[] dat;
The rendering:
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glColor3f(1,1,1);
glVertexPointer( 3, GL_FLOAT, 0, VertexData);
glTexCoordPointer(2, GL_FLOAT, 0, TextureData);
glBindTexture(GL_TEXTURE_2D, TextureNumber);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glDrawElements(GL_TRIANGLES,          // mode
               numTri[detail],        // count, i.e. how many indices
               GL_UNSIGNED_INT,       // type of the index array
               TriangleData[detail]);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
The first way to speed up that code is to get rid of all the functions that are 20 years old and convert it to shaders. The fixed-function pipeline can be very suboptimal on modern hardware, and constantly sending data to the GPU is probably also killing the performance.
Bear in mind that I am using freeglut and so am rather limited to OpenGL 2.
No, that's not true. Freeglut is concerned mostly with window and context creation, and you can still use GLLoad or GLEW to get OpenGL 3.x or 4.x functions.
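For instance, with GLEW the loader setup is just a couple of calls after freeglut creates the window. A sketch, assuming GLEW is installed and linked:

#include <GL/glew.h>      // must be included before other GL headers
#include <GL/freeglut.h>

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitContextVersion(3, 3);    // request an OpenGL 3.3 context from freeglut
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH);
    glutCreateWindow("terrain");

    glewExperimental = GL_TRUE;      // needed for core profiles with some drivers
    if (glewInit() != GLEW_OK)
        return 1;                    // loading failed; no modern entry points

    // ... register callbacks and enter glutMainLoop() as usual ...
    return 0;
}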
A quick list of things I see:
Fixed-function pipeline state is used (glColor).
No VBO is used, combined with the deprecated glVertexPointer.
Perhaps an FBO should be used to fill the textures initially? (See the render-to-texture sketch below.)
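To make the FBO point concrete, rendering each patch directly into the terrain texture could look roughly like this. A sketch only, assuming core framebuffer objects are available (e.g. via GLEW); TextureNumber, texX, texY and _patch_size come from the question:

// One-time setup: attach the existing terrain texture to an FBO.
GLuint fbo = 0;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, TextureNumber, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
{
    // handle the error; the texture format may not be renderable on this driver
}

// Per patch: draw straight into the sub-region of the texture, replacing the
// render-then-glCopyTexSubImage2D round trip.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(texX * _patch_size, texY * _patch_size, _patch_size, _patch_size);
// ... render the patch here ...
glBindFramebuffer(GL_FRAMEBUFFER, 0);   // back to the default framebuffer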

Why is there just garbage data in texture layers beyond 2048?

I am trying to use a GL_TEXTURE_2D_ARRAY texture with up to 8192 layers. But all layers after the 2048th just contain garbage data (tested by mapping the individual layers onto a quad to visualize the texture).
Querying the maximum number of layers with
glGetIntegerv(GL_MAX_ARRAY_TEXTURE_LAYERS, &maxTexLayers);
returns 8192 for my graphics card (AMD 5770), and the same for an AMD 7850. My only other available graphics card is an NVIDIA 480, which supports just 2048 layers.
I use the following code to create the texture:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glGenTextures(1, &tex);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAX_LEVEL, 7);
glTexStorage3D(GL_TEXTURE_2D_ARRAY, 8, GL_RGB8, 128, 128, 8192);
//glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGB, 128, 128, 8192, 0, GL_RGB, GL_UNSIGNED_BYTE, nullptr);
std::vector<char> image = readImage(testImagePath);
for (unsigned int i = 0; i < 8192; ++i)
{
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, i, 128, 128, 1, GL_RGB, GL_UNSIGNED_BYTE, image.data());
}
GLuint tLoc = glGetUniformLocation (program, "texArray");
glProgramUniform1i (program, tLoc, 0);
(here https://mega.co.nz/#!FJ0gzIoJ!Kk0q_1xv9c7sCTi68mbKon1gDBUM1dgjrxoBJKTlj6U you can find a cut-down version of the program)
I am out of ideas:
Changing glTexStorage3D to glTexImage3D - no change
Playing with the base/max level - no change
MIN_FILTER to GL_LINEAR - no change
generating mipmaps (glGenerateMipmap) - no change
reducing the size of the layers to e.g. 4x4 - no change
reducing the number of layers to e.g. 4096 - no change
switching to an AMD 7850 - no change
enabling debug context - no errors
etc. and a lot of other stuff
So, it could be a driver bug with the driver reporting the wrong number for GL_MAX_ARRAY_TEXTURE_LAYERS, but maybe I missed something and one of you has an idea.
EDIT: I am aware that such a texture would use quite a lot of memory, and that even if my graphics card had that much available, OpenGL does not guarantee that I can allocate it. But I am getting no errors with the debug context enabled, in particular no GL_OUT_OF_MEMORY, and I also tried a layer size of 4x4, which would be just 512 KB.
Two things:
1) Wouldn't 8192 layers of 128x128 be over 500 MB of data? (Or 400 MB if it's RGB instead of RGBA.) It could be that OpenGL can't allocate that much memory, even if your card has that much, due to fragmentation or other issues.
2) Just because OpenGL says the max is 8192 (or larger) doesn't mean that you're guaranteed to be able to use that much in every case. For example, my driver claims that the card can handle a max texture size of 8192 on a side, but if I try to create a 4096x4096 image that's 32-bit floating-point RGBA, it fails, even though it's only 268 MB and I have a gig of VRAM.
Does glGetError() return any errors?
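One way to probe point 2 without guessing is a proxy-texture check: specify the same storage against GL_PROXY_TEXTURE_2D_ARRAY and ask whether the implementation would accept it. A sketch mirroring the question's allocation:

// Ask whether a 128x128, 8192-layer RGB8 array texture would be accepted,
// without actually allocating it.
glTexImage3D(GL_PROXY_TEXTURE_2D_ARRAY, 0, GL_RGB8,
             128, 128, 8192, 0,
             GL_RGB, GL_UNSIGNED_BYTE, NULL);

GLint probedWidth = 0;
glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D_ARRAY, 0, GL_TEXTURE_WIDTH, &probedWidth);
if (probedWidth == 0)
{
    // The implementation rejects a texture with these dimensions/format,
    // even though GL_MAX_ARRAY_TEXTURE_LAYERS alone suggested it would fit.
}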

how to manage memory with texture in opengl?

In my application I use glTexImage2D extensively. I copy part of an image and render it as a texture, and I do it frequently, on every mouse click. I pass the data as a byte array for rendering. Memory is being eaten up and swap memory is also being allocated. Is it a memory leak? Or is it because glTexImage2D holds references, or something else?
Edit:
//I allocate the memory once
GLuint texName;
texture_data = new GLubyte[width*height];
// Each time the user clicks, I repeat the following code (this code is in a callback)
// Before this code runs, texture_data is modified to reflect the changes
glGenTextures(3, &texname);
glBindTexture(GL_TEXTURE_2D, texname);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE,texture_data);
I hope the close requests and downvoting will stop now!
Assuming you're generating a new texture with glGenTextures every time you call glTexImage2D, you are wasting memory, and leaking it if you don't keep track of all the textures you generate. glTexImage2D takes the input data and stores it in video card memory. The texture name that you bind before calling glTexImage2D - the one you generate with glGenTextures - is a handle to that chunk of video card memory.
If your texture is large and you're allocating new memory to store more and more copies of it every time you use it, then you will quickly run out of memory. The solution is to call glTexImage2D once during your application's initialization and only call glBindTexture when you want to use it. If you want to change the texture itself when you click, only call glBindTexture and glTexImage2D. If your new image is the same size as the previous image, you can call glTexSubImage2D to tell OpenGL to overwrite the old image data instead of deleting it and uploading the new one.
UPDATE
In response to your new code, I'm updating my answer with a more specific one. You're dealing with OpenGL textures in the wrong way entirely. The output of glGenTextures is a GLuint[], not a String or char[]. For every texture you generate with glGenTextures, OpenGL gives you back a handle (as an unsigned integer) to a texture. This handle stores the state you give it with glTexParameteri, as well as a chunk of memory on the graphics card if you give it data with glTexImage[1/2/3]D. It's up to you to store the handle and send it new data when you want to update it. If you overwrite the handle or forget about it, the data still stays on the graphics card, but you can't access it. You're also telling OpenGL to generate 3 textures when you only need 1.
Seeing as texture_data is of a fixed size, you can update the texture with glTexSubImage2D instead of glTexImage2D. Here is your code modified to avoid the memory leak from this issue:
texture_data = new GLubyte[width*height*3](); // 3 bytes per pixel for GL_RGB
GLuint texname; //handle to a texture
glGenTextures(1, &texname); //Gen a new texture and store the handle in texname
//These settings stick with the texture that's bound. You only need to set them
//once.
glBindTexture(GL_TEXTURE_2D, texname);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
//allocate memory on the graphics card for the texture. It's fine if
//texture_data doesn't have any data in it, the texture will just appear black
//until you update it.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB,
GL_UNSIGNED_BYTE, texture_data);
...
//bind the texture again when you want to update it.
glBindTexture(GL_TEXTURE_2D, texname);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGB,
                GL_UNSIGNED_BYTE, texture_data);
...
//When you're done using the texture, delete it. This frees all of the graphics
//card memory associated with the texture and makes the name available for reuse.
//If you don't call this, the texture stays in graphics card memory until you
//close the application.
glDeleteTextures(1, &texname);