OpenGL new textures replacing old ones after deletion - C++

I've run into a bit of a confusing problem with OpenGL, it's rather simple but I've failed to find any directly related information.
What I'm trying to do
I'm creating several new textures every frame; right after creation I bind each one, use it for drawing, and then delete it.
The Problem
If I delete every texture right after it was used, the last one to be drawn replaces the previous ones (but their different geometry works as it should). If I batch my deletions until after all drawing has been done, it works as expected; but if I do any draw calls at all after deleting the textures, the texture used in the last draw call replaces the old ones (which could be some common permanent sprite texture).
Results from debugging
I've tried using glFlush(), which didn't seem to do anything at all. Not deleting the textures at all gives the correct behaviour, and not drawing anything between deleting the textures and calling SwapBuffers() also works.
Code
This is not what my code looks like, but this is what the relevant parts boil down to:
GLuint Tex1, Tex2, Tex3; // note: glGenTextures() expects GLuint, not int
glGenTextures(1, &Tex1);
glBindTexture(GL_TEXTURE_2D, Tex1);
// ... Fill Texture with data, set correct filtering etc.
glDrawElements(GL_TRIANGLES, ...); // Using Tex1
glGenTextures(1, &Tex2);
glBindTexture(GL_TEXTURE_2D, Tex2);
// ... Fill Texture with data, set correct filtering etc.
glDrawElements(GL_TRIANGLES, ...); // Using Tex2
// I delete some textures here.
glDeleteTextures(1, &Tex1);
glDeleteTextures(1, &Tex2);
// If I comment out this section, everything works correctly
// If I leave it in, this texture replaces Tex1 and Tex2, but
// the geometry is correct for each geometry batch.
glGenTextures(1, &Tex3);
glBindTexture(GL_TEXTURE_2D, Tex3);
// ... Fill Texture with data, set correct filtering etc.
glDrawElements(GL_TRIANGLES, ...); // Using Tex3
glDeleteTextures(1, &Tex3);
// ...
SwapBuffers();
I suspect this might have something to do with OpenGL buffering my draw calls, so that by the time they are actually processed the textures are already deleted? It doesn't really make sense to me though: why would drawing something else after deleting the previous textures cause this behaviour?
More context
The generated textures are text strings that may or may not change each frame. Right now I create a new texture for each string every frame, render with it, and discard it right after. The bitmap data is generated with Windows GDI.
I'm not really looking for advice on efficiency, ideally I want an answer that can quote the documentation on the expected/correct behaviour for rendering using temporary textures like this, as well as possible common gotchas with this approach.

The expected behavior is clear. You can delete the objects as soon as you are done using them. In your case, after you made the draw calls that use the textures, you can call glDeleteTextures() on those textures. No additional precautions are required from your side.
Under the hood, OpenGL will typically execute the draw calls asynchronously. So the texture will still be used after the draw call returns. But that's not your problem. The driver is responsible for tracking and managing the lifetime of objects to keep them around until they are not used anymore.
The clearest expression of this that I found is on page 28 of the OpenGL 4.5 spec:
If an object is deleted while it is currently in use by a GL context, its name is immediately marked as unused, and some types of objects are automatically unbound from binding points in the current context, as described in section 5.1.2. However, the actual underlying object is not deleted until it is no longer in use.
In your code, this means that the driver can't delete the textures until the GPU has completed the draw calls using them.
Why that doesn't work in your case is hard to tell. One possibility is always that something in your code unintentionally deletes the texture earlier than it should. With complex software architectures, that happens much more easily than you might think. For example, a really popular cause is that people wrap OpenGL objects in C++ classes, and let those C++ objects go out of scope while the underlying OpenGL object is still in use.
So you should definitely double check (for example by using debug breakpoints or logging) that no code that deletes textures is invoked at unexpected times.
The other option is a driver bug. While object lifetime management is not entirely trivial, it is so critical that it's hard to imagine it being broken for a very basic case. But it's certainly possible, and more or less likely depending on vendor and platform.
As a workaround, you could try not deleting the texture objects, and only specifying new data (using glTexImage2D()) for the same objects instead. If the texture size does not change, it would probably be more efficient to only replace the data with glTexSubImage2D() anyway.
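As a sketch of that reuse approach (not the asker's actual code; the names and the GL_BGRA upload format are assumptions, the latter being what GDI DIB sections typically hand you):
// Persistent texture that is reused across frames (names are illustrative).
GLuint stringTex = 0;
int stringTexW = 0, stringTexH = 0;

void drawStringTexture(const void* pixels, int w, int h)
{
    if (stringTex == 0)
        glGenTextures(1, &stringTex);
    glBindTexture(GL_TEXTURE_2D, stringTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    if (w != stringTexW || h != stringTexH) {
        // Size changed: reallocate the storage with glTexImage2D().
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                     GL_BGRA, GL_UNSIGNED_BYTE, pixels);
        stringTexW = w;
        stringTexH = h;
    } else {
        // Same size: just replace the contents, no reallocation.
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                        GL_BGRA, GL_UNSIGNED_BYTE, pixels);
    }

    // ... issue the glDrawElements() call using this texture, as before ...
    // Note: no glDeleteTextures() here; the object is kept for reuse.
}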

Related

OpenGL: Repeated use of transform feedback buffers overwrites already established textures

I have a working implementation of this technique for view frustum culling of instanced geometry. The gist of the technique is that we use a vertex shader to check whether the bounds of an object lie within the view frustum; if they do, we output the position of that object to a texture, using a transform feedback buffer and a geometry shader. During the actual rendering pass we can then use that texture, along with a query of how many positions we emitted, to fetch the relevant position data for the object we're rendering and the number of draws to specify in our call to glDrawElementsInstanced. One difference between what I do and what the article does is that I emit a full transformation matrix, rather than a simple position vector, to the texture, but I doubt that has any bearing on my problem.
The actual problem: Currently I have this set up so that, for each object type being rendered (i.e. tree, box, rock, whatever), the actual rendering pass follows immediately upon the frustum cull rendering pass. This works, and gives the intended results. What I want to do instead, however, is to go over all my draw commands and do all the frustum culling for the various objects first, and only thereafter do all the actual rendering, to avoid a bunch of unnecessary state changes (i.e. switching back and forth between shader programs). When I do this, however, I encounter the problem that previously established textures -- the ones I use for reading positions from during the actual rendering passes -- all seem to be overwritten by the latest call to the frustum culling function, meaning that all established textures seemingly contain only the position information from the last frustum cull call.
For example: I render, in order, 4 trees, 10 boxes and 3 rocks, and what I will see instead is a tree, a box, and a rock, at all the (three) positions where I would expect only the 3 rocks to be. I cannot for the life of me figure out why this is, because I quite clearly bind new buffers and textures to the TRANSFORM_FEEDBACK_BUFFER every time I call the function. Why are the previously used textures still receiving the new data from the latest call?
Code, in C, for the frustum culling function:
void fcullidraw(drawcommand *tar) {
    /* printf("Fculling %s\n", tar->res->name); */
    mesh *rmesh = &tar->res->amod->meshes[0];
    /* glDeleteTextures(1, &rmesh->ctex); */
    if (rmesh->ctbuf == 0)
        glGenBuffers(1, &rmesh->ctbuf);
    glBindBuffer(GL_TEXTURE_BUFFER, rmesh->ctbuf);
    glBufferData(GL_TEXTURE_BUFFER, sizeof(instancedata) * tar->nodraws, NULL, GL_DYNAMIC_COPY);
    if (rmesh->ctex == 0)
        glGenTextures(1, &rmesh->ctex);
    glBindTexture(GL_TEXTURE_BUFFER, rmesh->ctex);
    glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, rmesh->ctbuf);
    if (rmesh->cquery == 0)
        glGenQueries(1, &rmesh->cquery);
    checkactiveshader(tar->tar, findshader("icull"));
    glEnable(GL_RASTERIZER_DISCARD);
    glUniform1f(activeshader->radius, tar->res->amesh->bbox.radius);
    glUniform3fv(activeshader->extent, 1, (const GLfloat*)&tar->res->amesh->bbox.ext);
    glUniform3fv(activeshader->cp, 1, (const GLfloat*)&tar->res->amesh->bbox.cp);
    glBindVertexArray(tar->res->amod->meshes[0].vao);
    glBindBuffer(GL_ARRAY_BUFFER, tar->res->amod->meshes[0].posarray);
    glBufferData(GL_ARRAY_BUFFER, sizeof(mat4_t) * tar->nodraws, tar->posarray, GL_DYNAMIC_DRAW);
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, rmesh->ctbuf);
    glBeginTransformFeedback(GL_POINTS);
    glBeginQuery(GL_PRIMITIVES_GENERATED, rmesh->cquery);
    glDrawArrays(GL_POINTS, 0, tar->nodraws);
    glEndQuery(GL_PRIMITIVES_GENERATED);
    glEndTransformFeedback();
    glDisable(GL_RASTERIZER_DISCARD);
    glGetQueryObjectuiv(rmesh->cquery, GL_QUERY_RESULT, &rmesh->visibleinstances);
}
tar and rmesh obviously vary between calls to this function. Do note that I have left in a few commented-out lines here containing code to delete the buffers and textures between each rendering cycle, rather than simply overwriting them; using that code instead has no effect on the failure mode.
I'm stumped. I feel that the textures and buffers are well defined and clearly kept separate, so I do not understand how the textures from previous calls to fcullidraw are somehow still bound to, and being overwritten by, the transform feedback. That certainly seems to be what is happening, because the earlier objects will read in the entire transformation matrix of the rock quite neatly, with the "right" rotation, translation, and everything.
The article linked does do the operations in the order I want to do them -- i.e. first repeated frustum culls, and then repeated rendering -- and I'm not sure I see what I do differently. Might be some small and obvious thing, and I might be an idiot, but in that case I'd love to know why and how I am that.
EDIT: I pushed on and updated my implementation with a refinement of the original technique, suggested here, which gets rid of the writing-to-texture method altogether, in favor of simply writing to a buffer bound to the VAO, set to update once per rendered instance with a VertexAttribDivisor. This method looks a lot cleaner on the whole, and incidentally had the additional side effect of not exhibiting my original problem at all, as I'm no longer writing to and uploading textures. This is, thus, no longer a practical problem for me, but the answer to the theoretical question still eludes me, so if anyone has ideas I'm still all ears.
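For anyone curious what that buffer-plus-divisor refinement looks like, here is a hedged sketch (attribute locations 3 through 6 and the fields instbuf and nindices are illustrative, not taken from the code above). A mat4 attribute occupies four consecutive vec4 locations, so each column gets its own pointer and a divisor of 1:
glBindVertexArray(rmesh->vao);
glBindBuffer(GL_ARRAY_BUFFER, rmesh->instbuf); /* written by the cull pass */
for (int i = 0; i < 4; i++) {
    glEnableVertexAttribArray(3 + i);
    /* One vec4 column of the per-instance mat4 per attribute slot. */
    glVertexAttribPointer(3 + i, 4, GL_FLOAT, GL_FALSE, sizeof(mat4_t),
                          (const void *)(sizeof(float) * 4 * i));
    glVertexAttribDivisor(3 + i, 1); /* advance once per instance */
}
glDrawElementsInstanced(GL_TRIANGLES, rmesh->nindices, GL_UNSIGNED_INT,
                        0, rmesh->visibleinstances);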

OpenGL - Is glDrawBuffers modification stored in an FBO? No?

I am trying to create a framebuffer with 2 textures attached to it (Multiple Render Targets). Then in every time step, both textures are cleared and painted to, as in the following code. (Some parts are replaced with pseudo code to keep it short.)
Version 1
//beginning of the 1st time step
initialize(framebufferID12)
//^ I'm quite sure it is done correctly,
//^ Note: there is no glDrawBuffers() call
loop, do once every time step {
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, framebufferID12);
    //(#1#) a line will be added here in version 2 (see below) <------------
    glClearColor(0.5f, 0.0f, 0.5f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // paint a lot of objects here, using GLSL shaders (.frag, .vert)
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
}
All objects are painted correctly to both textures, but only the first texture (ATTACHMENT0) is cleared every frame, which is wrong.
Version 2
I try to insert a line of code ...
glDrawBuffers({ATTACHMENT0, ATTACHMENT1});
at (#1#), and it works as expected, i.e. both textures are cleared.
(image http://s13.postimg.org/66k9lr5av/gl_Draw_Buffer.jpg)
Version 3
From version 2, I move that glDrawBuffers() statement into the framebuffer initialization, like this:
initialize(int framebufferID12) {
    int nameFBO = glGenFramebuffersEXT();
    int nameTexture0 = glGenTextures();
    int nameTexture1 = glGenTextures();
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, nameFBO);
    glBindTexture(nameTexture0);
    glTexImage2D( .... ); glTexParameteri( ... );
    glFramebufferTexture2DEXT( ATTACHMENT0, nameTexture0);
    glBindTexture(nameTexture1);
    glTexImage2D( .... ); glTexParameteri( ... );
    glFramebufferTexture2DEXT( ATTACHMENT1, nameTexture1);
    glDrawBuffers({ATTACHMENT0, ATTACHMENT1}); //<--- moved here ---
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
    return nameFBO;
}
It no longer works (same symptom as version 1). Why?
The OpenGL manual says that "changes to context state will be stored in this object", so the state modification from glDrawBuffers() should be stored in "framebufferID12", right? Then why do I have to call it every time step (or every time I change the FBO)?
I may be misunderstanding some OpenGL concept; someone please enlighten me.
Edit 1: Thanks j-p. I agree that it makes sense, but shouldn't the state already be recorded in the FBO?
Edit 2 (accepted answer): Reto Koradi's answer is correct! I am using a not-so-standard library called LWJGL.
Yes, the draw buffers setting is part of the framebuffer state. If you look at for example the OpenGL 3.3 spec document, it is listed in table 6.23 on page 299, titled "Framebuffer (state per framebuffer object)".
The default value for FBOs is a single draw buffer, which is GL_COLOR_ATTACHMENT0. From the same spec, page 214:
For framebuffer objects, in the initial state the draw buffer for fragment color zero is COLOR_ATTACHMENT0. For both the default framebuffer and framebuffer objects, the initial state of draw buffers for fragment colors other than zero is NONE.
So it's expected that if you have more than one draw buffer, you need the explicit glDrawBuffers() call.
Now, why it doesn't seem to work for you if you make the glDrawBuffers() call as part of the FBO setup, that's somewhat mysterious. One thing I notice in your code is that you're using the EXT form of the FBO calls. I suspect that this might have something to do with your problem.
FBOs have been part of standard OpenGL since version 3.0. If there's any way for you to use OpenGL 3.0 or later, I would strongly recommend that you use the standard entry points. While extensions normally still work even after the functionality has become standard, I would always be skeptical about how they interact with other features. Particularly, there were multiple extensions for FBO functionality before 3.0, with different behavior. I wouldn't be surprised if some of them interact differently with other OpenGL calls compared to the standard FBO functionality.
So, try using the standard entry points (the ones without the EXT in their name). That will hopefully solve your problem.
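To make this concrete, a sketch of the setup with the standard entry points (tex0 and tex1 stand for the two already-created textures; this is not the asker's actual code):
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex0, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1,
                       GL_TEXTURE_2D, tex1, 0);
const GLenum bufs[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, bufs); // stored as per-FBO state of the bound framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// Binding fbo again later should restore the two-buffer setting,
// with no per-frame glDrawBuffers() call needed.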

Best way to convert OpenGL immediate mode rendering utility methods to using VBOs?

I've written for myself a small utility class containing useful methods for rendering lines, quads, cubes, etc. quickly and easily in OpenGL. Up until now, I've been using almost entirely immediate mode, so I could focus on learning other aspects of OpenGL. It seems prudent to switch over to using VBOs. However, I want to keep much of the same functionality I've been using, for instance my utility class. Is there a good method of converting these simple immediate mode calls to a versatile VBO system?
I am using LWJGL.
Having converted my own code from begin..end blocks and also taught others, this is what I recommend.
I'm assuming that your utility class is mostly static methods: draw a line from this point to that point, and so on.
First step is to have each individual drawing operation create a VBO for each attribute. Replace your glBegin..glEnd block with code that creates an array (actually a ByteBuffer) for each vertex attribute: coordinates, colors, tex coords, etc. After what used to be glEnd, copy the ByteBuffers to the VBOs with glBufferData. Then set up the attributes with chunks of glEnableClientState, glBindBuffer, glVertex|Color|whateverPointer calls. Call glDrawArrays to actually draw something, and finally restore client state and delete the VBOs.
Now, this is not good OpenGL code and is horribly inefficient and wasteful. But it should work, it's fairly straightforward to write, and you can change one method at a time.
And if you don't need to draw very much, well modern GPUs are so fast that maybe you won't care that it's inefficient.
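To make step one concrete, here is roughly what a line-drawing method could look like after the conversion, sketched in C++-style OpenGL (LWJGL's Java bindings follow the same call sequence, just with ByteBuffers; names are illustrative):
// Throwaway-VBO version of a drawLine() utility, per step one above.
// Uses the legacy client-state pointers the answer describes.
void drawLine(float x1, float y1, float z1,
              float x2, float y2, float z2)
{
    const GLfloat verts[] = { x1, y1, z1, x2, y2, z2 };

    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STREAM_DRAW);

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, 0); // offset 0 into the bound VBO

    glDrawArrays(GL_LINES, 0, 2);

    glDisableClientState(GL_VERTEX_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glDeleteBuffers(1, &vbo); // wasteful, but matches the incremental step
}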
Second step is to start re-using VBOs. Have your class create one VBO for each possible attribute at init time or first use. The drawing code still creates ByteBuffer data arrays and copies them over, but doesn't delete the VBOs.
Third step, if you want to move into OpenGL 4 and are using shaders, would be to replace glVertexPointer with glVertexAttribPointer(0, glColorPointer with glVertexAttribPointer(1, etc. You should also create a Vertex Array Object along with the VBOs at init time. (You'll still have to enable/disable attrib pointers individually depending on whether each draw operation needs colors, tex coords, etc.)
And the last step, which would require changes elsewhere to your program(s), would be to go for 3D "objects" rather than methods. Your utility class would no longer contain drawing methods. Instead you create a line, quad, or cube object and draw that. Each of these objects would (probably) have its own VBOs. This is more work, but really pays off in the common case where a lot of your 3D geometry doesn't change from frame to frame. But again, you can start with the more "wasteful" approach of replacing each method call to draw a line from P1 to P2 with something like l = new Line3D(P1, P2); l.draw().
Hope this helps.

How to render offscreen on OpenGL? [duplicate]

This question already has answers here:
How to use GLUT/OpenGL to render to a file?
(6 answers)
My aim is to render an OpenGL scene without a window, directly into a file. The scene may be larger than my screen resolution.
How can I do this?
I want to be able to set the render area to any size, for example 10000x10000, if that's possible.
It all starts with glReadPixels, which you will use to transfer the pixels stored in a specific buffer on the GPU to the main memory (RAM). As you will notice in the documentation, there is no argument to choose which buffer. As is usual with OpenGL, the current buffer to read from is a state, which you can set with glReadBuffer.
So a very basic offscreen rendering method would be something like the following. I use C++ pseudo code, so it may well contain errors, but it should make the general flow clear:
//Before swapping
std::vector<std::uint8_t> data(width*height*4);
glReadBuffer(GL_BACK);
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,&data[0]);
This will read the current back buffer (usually the buffer you're drawing to). You should call this before swapping the buffers. Note that you can also perfectly read the back buffer with the above method, clear it and draw something totally different before swapping it. Technically you can also read the front buffer, but this is often discouraged as theoretically implementations were allowed to make some optimizations that might make your front buffer contain rubbish.
There are a few drawbacks with this. First of all, we don't really do offscreen rendering, do we? We render to the screen buffers and read from those. We can emulate offscreen rendering by never swapping in the back buffer, but it doesn't feel right. Besides that, the front and back buffers are optimized to display pixels, not to read them back. That's where Framebuffer Objects come into play.
Essentially, an FBO lets you create a non-default framebuffer (like the FRONT and BACK buffers) that allows you to draw to a memory buffer instead of the screen buffers. In practice, you can either draw to a texture or to a renderbuffer. The first is optimal when you want to re-use the pixels in OpenGL itself as a texture (e.g. a naive "security camera" in a game), the latter if you just want to render/read-back. With this, the code above would become something like the following; again pseudo-code, so don't kill me if I mistyped something or forgot some statements.
//Somewhere at initialization
GLuint fbo, render_buf;
glGenFramebuffers(1,&fbo);
glGenRenderbuffers(1,&render_buf);
glBindRenderbuffer(GL_RENDERBUFFER, render_buf);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, render_buf);
//At deinit:
glDeleteFramebuffers(1,&fbo);
glDeleteRenderbuffers(1,&render_buf);
//Before drawing
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
//after drawing
std::vector<std::uint8_t> data(width*height*4);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,&data[0]);
// Return to onscreen rendering:
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
This is a simple example; in reality you likely also want storage for the depth (and stencil) buffer. You also might want to render to a texture, but I'll leave that as an exercise. In any case, you will now perform real offscreen rendering, and it might work faster than reading the back buffer.
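For completeness, a sketch of adding that depth storage to the FBO above (reusing fbo, width, and height from the previous snippet; same pseudo-code caveats apply):
GLuint depth_buf;
glGenRenderbuffers(1, &depth_buf);
glBindRenderbuffer(GL_RENDERBUFFER, depth_buf);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depth_buf);
// Use GL_DEPTH24_STENCIL8 with GL_DEPTH_STENCIL_ATTACHMENT if you
// also need a stencil buffer.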
Finally, you can use pixel buffer objects to make the read-back asynchronous. The problem is that glReadPixels blocks until the pixel data is completely transferred, which may stall your CPU. With PBOs the implementation may return immediately, as it controls the buffer anyway. It is only when you map the buffer that the pipeline will block. However, PBOs may be optimized to buffer the data solely in RAM, so this block could take a lot less time. The read pixels code would become something like this:
//Init:
GLuint pbo;
glGenBuffers(1,&pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, width*height*4, NULL, GL_DYNAMIC_READ);
//Deinit:
glDeleteBuffers(1,&pbo);
//Reading:
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,0); // 0 instead of a pointer, it is now an offset in the buffer.
//DO SOME OTHER STUFF (otherwise this is a waste of your time)
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo); //Might not be necessary...
pixel_data = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
The part in caps is essential. If you just issue a glReadPixels to a PBO, followed by a glMapBuffer of that PBO, you gained nothing but a lot of code. Sure the glReadPixels might return immediately, but now the glMapBuffer will stall because it has to safely map the data from the read buffer to the PBO and to a block of memory in main RAM.
Please also note that I use GL_BGRA everywhere; this is because many graphics cards internally use this as the optimal rendering format (or the GL_BGR version without alpha). It should be the fastest format for pixel transfers like this. I'll try to find the nvidia article I read about this a few months back.
When using OpenGL ES 2.0, GL_DRAW_FRAMEBUFFER might not be available, you should just use GL_FRAMEBUFFER in that case.
I'll assume that creating a dummy window (you don't render to it; it's just there because the API requires you to make one) that you create your main context into is an acceptable implementation strategy.
Here are your options:
Pixel buffers
A pixel buffer, or pbuffer (which isn't a pixel buffer object), is first and foremost an OpenGL context. Basically, you create a window as normal, then pick a pixel format from wglChoosePixelFormatARB (pbuffer formats must be gotten from here). Then, you call wglCreatePbufferARB, giving it your window's HDC and the pixel buffer format you want to use. Oh, and a width/height; you can query the implementation's maximum width/heights.
The default framebuffer for a pbuffer is not visible on the screen, and the max width/height is whatever the hardware wants to let you use. So you can render to it and use glReadPixels to read back from it.
You'll need to share your context with the given context if you have created objects in the window context. Otherwise, you can use the pbuffer context entirely separately. Just don't destroy the window context.
The advantage here is greater implementation support (though most drivers that don't support the alternatives are also old drivers for hardware that's no longer being supported. Or is Intel hardware).
The downsides are these. Pbuffers don't work with core OpenGL contexts. They may work for compatibility, but there is no way to give wglCreatePbufferARB information about OpenGL versions and profiles.
Framebuffer Objects
Framebuffer Objects are more "proper" offscreen rendertargets than pbuffers. FBOs are within a context, while pbuffers are about creating new contexts.
FBOs are just a container for images that you render to. The maximum dimensions that the implementation allows can be queried; you can assume it to be GL_MAX_VIEWPORT_DIMS (make sure an FBO is bound before checking this, as it changes based on whether an FBO is bound).
Since you're not sampling textures from these (you're just reading values back), you should use renderbuffers instead of textures. Their maximum size may be larger than those of textures.
The upside is the ease of use. Rather than have to deal with pixel formats and such, you just pick an appropriate image format for your glRenderbufferStorage call.
The only real downside is the narrower band of hardware that supports them. In general, anything that AMD or NVIDIA makes that they still support (right now, GeForce 6xxx or better [note the number of x's], and any Radeon HD card) will have access to ARB_framebuffer_object or OpenGL 3.0+ (where it's a core feature). Older drivers may only have EXT_framebuffer_object support (which has a few differences). Intel hardware is potluck; even if they claim 3.x or 4.x support, it may still fail due to driver bugs.
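The queries for the limits mentioned above look like this (sketch):
GLint dims[2] = { 0, 0 };
glGetIntegerv(GL_MAX_VIEWPORT_DIMS, dims);         // max viewport width/height
GLint rb_size = 0;
glGetIntegerv(GL_MAX_RENDERBUFFER_SIZE, &rb_size); // max renderbuffer dimension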
If you need to render something that exceeds the maximum FBO size of your GL implementation, libtr works pretty well:
The TR (Tile Rendering) library is an OpenGL utility library for doing tiled rendering. Tiled rendering is a technique for generating large images in pieces (tiles).
TR is memory efficient; arbitrarily large image files may be generated without allocating a full-sized image buffer in main memory.
The easiest way is to use something called Framebuffer Objects (FBO). You will still have to create a window to create an OpenGL context, though (but this window can be hidden).
The easiest way to fulfill your goal is to use an FBO to do off-screen rendering. And you don't need to render to a texture and then read the image back; just render to a renderbuffer and use glReadPixels. This link will be useful: see Framebuffer Object Examples.

OpenGL textures: why does order matter?

I am trying to set up a simple function which will make it a lot easier for me to texture-map geometry in OpenGL, but for some reason when I'm trying to make a skybox, I am getting a white box instead of the texture-mapped geometry. I think the problematic code lies within the following:
void MapTexture (char *File, int TextNum) {
    if (!TextureImage[TextNum]) {
        TextureImage[TextNum] = auxDIBImageLoad(File);
        glGenTextures(1, &texture[TextNum]);
        glBindTexture(GL_TEXTURE_2D, texture[TextNum]);
        glTexImage2D(GL_TEXTURE_2D, 0, 3, TextureImage[TextNum]->sizeX, TextureImage[TextNum]->sizeY, 0, GL_RGB, GL_UNSIGNED_BYTE, TextureImage[TextNum]->data);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    }
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, texture[TextNum]);
    //glTexImage2D(GL_TEXTURE_2D, 0, 3, TextureImage[TextNum]->sizeX, TextureImage[TextNum]->sizeY, 0, GL_RGB, GL_UNSIGNED_BYTE, TextureImage[TextNum]->data);
    //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
}
The big thing I don't understand is why glBindTexture() must come between glGenTextures() and glTexImage2D(). If I place it anywhere else, it screws everything up. What could be causing this problem? Sorry if it's something simple; I'm brand new to OpenGL.
Below is a screenshot of the whitebox I am talking about:
+++++++++++++++++++++++++++++++
EDIT
+++++++++++++++++++++++++++++++
After playing around with the code a bit more, I realized that if I added glTexImage2D() and glTexParameteri() after the last glBindTexture(), then all the textures load. Why is it that without these two lines most textures would load and yet a few would not, and why do I have to call glTexImage2D() every frame, but only for a few textures?
Yes, order is definitely important.
glGenTextures creates a texture name.
glBindTexture takes a texture name generated by glGenTextures, so it can't be called before glGenTextures.
glTexImage2D uploads data to the currently bound texture, so it can't be run before glBindTexture.
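Putting those three steps together in the required order (minimal sketch; width, height, and pixels are assumed to exist):
GLuint tex;
glGenTextures(1, &tex);               // 1. reserve a name
glBindTexture(GL_TEXTURE_2D, tex);    // 2. bind it, making it current
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, pixels); // 3. upload to the bound texture
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);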
The client-side interface to OpenGL is a Big Giant Squiggly State Machine. There are an enormous number of parameters and flags that you can change, and you have to be scrupulous to always leave OpenGL in the right state. This usually means popping matrices you push and restoring flags that you modify (at least in OpenGL 1.x).
OpenGL is a state machine, which means that you can pull its levers, turn its knobs, and it will keep those settings, until you change them actively.
However, it also manages its persistent data in objects. Such objects are something abstract, and must not be confused with objects seen on the screen!
To the outside, OpenGL identifies objects by their so-called name, a numerical ID. You create a (list of) name(s) – but not the object(s)! – with glGenTextures for texture objects, which are one such kind of OpenGL object.
To manipulate such an object, OpenGL must first be put into a state in which all following calls that manipulate objects of that type happen to one particular object. This is done with glBindTexture. After calling glBindTexture, all following calls that manipulate textures happen to that one texture object you've just bound. If the object didn't exist previously, it is created when a newly assigned object name is bound for the first time.
Now OpenGL uses that particular object.
glTexImage2D is just one of several functions that manipulate the data of the currently bound texture.
Otherwise your function points in the right direction. OpenGL has no real initialization phase; you just do things as you go along. And it makes sense to defer loading of data until you need it. But it also makes sense to have multiple iterations over your lists of objects before you actually draw a frame. One of the preparations should be to iterate over all objects (yours, not OpenGL's) to test whether the data is already loaded. If a significant amount of data is still missing, draw a loading screen instead, so that the user doesn't get the impression your program hangs. Maybe even carry out lengthy loading operations in a separate thread, but with OpenGL this requires some precautions.