I'm converting my OpenGL 2 program to OpenGL 3, one small step at a time. The current step is to get rid of immediate mode in favor of VBOs. I'm stumbling on the proper placement of all the VBO state management calls.
Let's assume I'm going to have a bunch of objects, each with its own VBOs for vertex and element data. Each object has a load function that sets up its VBOs and loads data into them; and it has a draw function that binds its VBOs and issues the appropriate draw command. To keep the objects from interfering with each other, my main loop should call PushClientAttrib(CLIENT_VERTEX_ARRAY_BIT) before either draw or load is called, correct? (And of course PopClientAttrib() after.)
When I push/pop that state to protect my load function, the object doesn't get drawn. It seems like there's some state I need to (re)set in my draw function, but I can't figure out what it could be.
Here's code to add a new object e to the scene, letting it call its load function:
gl.PushClientAttrib(gl.CLIENT_VERTEX_ARRAY_BIT)
e.SceneAdded()
gl.PopClientAttrib()
And here's how I call each object's draw function:
gl.PushClientAttrib(gl.CLIENT_VERTEX_ARRAY_BIT)
gl.PushMatrix()
p.Paint()
gl.PopMatrix()
gl.PopClientAttrib()
Here's the object's load function:
// Make the vertex & texture data known to GL.
gl.GenBuffers(1, &vboId)
gl.BindBuffer(gl.ARRAY_BUFFER, vboId)
gl.BufferData(gl.ARRAY_BUFFER,
	gl.Sizeiptr(unsafe.Sizeof(gldata[0])*uintptr(len(gldata))),
	gl.Pointer(&gldata[0].x), gl.STATIC_DRAW)
gl.VertexPointer(3, gl.DOUBLE, gl.Sizei(unsafe.Sizeof(gldata[0])),
	gl.Pointer(unsafe.Offsetof(gldata[0].x)))
gl.TexCoordPointer(2, gl.DOUBLE, gl.Sizei(unsafe.Sizeof(gldata[0])),
	gl.Pointer(unsafe.Offsetof(gldata[0].s)))
// Make the index data known to GL.
gl.GenBuffers(1, &iboId)
gl.BindBuffer(gl.ELEMENT_ARRAY_BUFFER, iboId)
gl.BufferData(gl.ELEMENT_ARRAY_BUFFER,
	gl.Sizeiptr(unsafe.Sizeof(sd.indices[0])*uintptr(len(sd.indices))),
	gl.Pointer(&sd.indices[0]), gl.STATIC_DRAW)
Finally, here's the object's draw function:
func draw() {
	gl.BindBuffer(gl.ARRAY_BUFFER, vboId)
	gl.BindBuffer(gl.ELEMENT_ARRAY_BUFFER, iboId)
	gl.EnableClientState(gl.VERTEX_ARRAY)
	gl.EnableClientState(gl.TEXTURE_COORD_ARRAY)
	gl.DrawElements(gl.TRIANGLES, indexcount, gl.UNSIGNED_SHORT, nil)
}
Just to be clear: if I move the two EnableClientState calls from draw to load, and if I don't wrap load in PushClientAttrib/PopClientAttrib, everything works fine.
I'm converting my OpenGL 2 program to OpenGL 3
Well then the following
To keep the objects from interfering with each other, my main loop should call PushClientAttrib(CLIENT_VERTEX_ARRAY_BIT) before either draw or load is called, correct? (And of course PopClientAttrib() after.)
is not a sane approach. glPushClientAttrib and glPopClientAttrib (like glPushAttrib and glPopAttrib) are deprecated, i.e. they should not be used in an OpenGL 3 program.
Also, OpenGL 3 vertex arrays don't use glVertexPointer; they use glVertexAttribPointer. Attribute pointer state can be managed (and I recommend doing so) in a Vertex Array Object, which you can bind and unbind.
BTW: when you wrote for OpenGL 2, why did you use immediate mode in the first place? Immediate mode has been discouraged ever since OpenGL 1.1, when vertex arrays were introduced, and its removal was already being considered for OpenGL 2.
The answer to the question I actually asked is that gl.VertexPointer() and friends need to be called from the object's draw function. My main loop pushes and pops all client vertex-array state, so of course anything set in load gets lost.
The question I implied is, "How can I convert my old, GL2 program to new, GL3 code?" That's answered very elegantly here.
my goal is to pack all mesh data into a C++ class, with the possibility of using an object of such a class with more than one GLSL shader program.
I'm stuck on this problem: if I understand it well, the way vertex data is passed from the CPU to the GPU in GLSL is based on Vertex Array Objects, which:
- are made of Vertex Buffer Objects, which store the raw vertex data, and
- Vertex Attribute Pointers, which tell the shader which input variable location to assign and the format of the data passed to the VBO above;
- additionally, one can make an Element Buffer Object to store indices;
- another extra element necessary to draw an object is a texture, which is made separately.
I was able to create all of that data and store only a VAO in my mesh class, passing just the VAO handle to my shader program to draw it. It works, but for my goal of using multiple shader programs, the VAP stands in my way. It is the only element in my class that relies on a property of a specific shader program (namely the locations of its input variables).
I wonder if I'm forced to remake all VAOs every time I switch shaders to draw something. I'm afraid the efficiency of my drawing code would suffer drastically.
So I'm asking whether I should forget about making a universal mesh class and instead make separate objects for each shader program I'm using in my application. I would rather not, as I want to avoid unnecessary copying of my mesh data. On the other hand, if this means my drawing routine will slow down because of remaking all those VAOs every millisecond during drawing, then I have to rethink the design of my code :(
EDIT: OK, I misunderstood: VAOs store bindings to other objects, not the data itself. But one thing still remains: to make a VAP I have to provide both the location of an input variable from a shader and the layout of the data in a mesh's VBO.
What I'm trying to avoid is making a separate object that stores that VAP and relies on both a mesh object and a shader object. I could solve my problem if all shader programs used the same location for the same thing, but at the moment I'm not sure that's the case.
additionally, one can make an Element Buffer Object to store indices
another extra element necessary to draw an object is a texture, which is made separately
Just to be clear, that's you being lazy and/or uninformed. OpenGL strongly encourages you to use a single buffer for both vertex and index data. Vulkan goes a step further by limiting the number of object names (vao, vbo, textures, shaders, everything) you can create and actively punishes you for not doing that.
As to the problem you're facing.. It would help if you used correct nomenclature, or at least full names. "VAP" is yet another sign of laziness that hides what your actual issue is.
I will mention that VAOs are separate from VBOs, and you can have multiple VAOs linked to the same VBO (VBOs are just buffers of raw data -- again, Vulkan goes a step further here: its buffers store literally everything, from vertex data and element indices to texture data and shader constants).
You can also have multiple programs use the same VAOs, you just bind whatever VAO you want active (glBindVertexArray) once, then use your programs and call your render functions as needed. Here's an example from one of my projects that uses the same terrain data to render both the shaded terrain and a grid on top of them:
chunk.VertexIndexBuffer.Bind();
// pass 1, render the terrain
terrainShader.ProgramUniform("selectedPrimitiveID", selectedPrimitiveId);
terrainShader.Use();
roadsTexture.Bind();
GL.DrawArrays(PrimitiveType.Triangles, 0, chunk.VertexIndexBuffer.VertexCount);
// pass 2, render the grid
gridShader.Use();
GL.DrawElements(BeginMode.Lines, chunk.VertexIndexBuffer.IndexCount, DrawElementsType.UnsignedShort, 0);
I have some old code that uses OpenGL 1.1 that is mysteriously segfaulting. One place where I might be going wrong is that I free() an array after providing it to glNormalPointer. Is that permitted, or does OpenGL require the memory at that pointer to stick around? I had been assuming that the data was copied.
double vertices[] = { ... };
double *normals = (double *)malloc(sizeof(vertices));
CalcNormals(vertices, normals, num_vertices); // Calculate the normal vectors.
GLuint my_display_list = glGenLists(1);
glNewList(my_display_list, GL_COMPILE);
glVertexPointer(3, GL_DOUBLE, 0, vertices);
glNormalPointer(GL_DOUBLE, 0, normals);
glDrawArrays(GL_TRIANGLES, 0, num_vertices);
glEndList();
free(normals); // Is this permitted?
// ...
glCallList(my_display_list);
EDIT 1: I noticed a few references that say that glNormalPointer and other "client state commands" cannot be included in a display list. I guess in this case, inclusion of glVertexPointer and glNormalPointer between glNewList and glEndList probably does no harm, but I might as well move them up, to occur before glNewList. It's the call to glDrawArrays that's really being recorded in the display list, right?
EDIT 2: I tried calling memset(normals, 0, sizeof(vertices)) to forcefully clear the normals buffer before freeing it, to make a use-after-free situation more obvious. Because the scene still draws properly, I conclude that the normals buffer that I allocate is not being used after the call to glDrawArrays.
I noticed a few references that say that glNormalPointer and other "client state commands" cannot be included in a display list.
No, it says that they aren't included in the display list. Which is true; they aren't.
But their effects can be.
The gl*Pointer commands retain the pointer well after the command executes. That is, the pointer isn't dereferenced yet; it only gets read from when you invoke a rendering command that uses that pointer as client state.
The behavior of glDraw* calls with client-side vertex arrays is defined in terms of a sequence of calls to glArrayElement within a glBegin/End pair, which itself is defined in terms of a series of call to glVertex/Normal/etc, using values read from the vertex arrays. Display lists record those glVertex/etc calls into the display list itself. Thus, while recording to a display list, when you render with client-side vertex arrays you will record the contents of the arrays into the display list.
You can free the client memory only if you don't try to read from it again.
I've run into a bit of a confusing problem with OpenGL, it's rather simple but I've failed to find any directly related information.
What I'm trying to do
I'm creating several new textures every frame, and right after creation I bind them, use them for drawing, and then delete them right after.
The Problem
If I delete each texture right after it was used, the last texture to be drawn replaces the previous ones (though their different geometry works as it should). If I batch my deletions until after all drawing has been done, it works as expected; but if I issue any draw calls at all after deleting the textures, the texture used in the last draw call replaces the old ones (which could be some common, permanent sprite texture).
Results from debugging
I've tried using glFlush(), which didn't seem to do anything at all, not deleting the textures at all gives the correct behaviour, and also not drawing anything at all between deleting the textures and calling SwapBuffers() works.
Code
This is not what my code looks like, but this is what the relevant parts boil down to:
GLuint Tex1, Tex2, Tex3;
glGenTextures(1, &Tex1);
glBindTexture(GL_TEXTURE_2D, Tex1);
// ... Fill Texture with data, set correct filtering etc.
glDrawElements(GL_TRIANGLES, ...); // Using Tex1
glGenTextures(1, &Tex2);
glBindTexture(GL_TEXTURE_2D, Tex2);
// ... Fill Texture with data, set correct filtering etc.
glDrawElements(GL_TRIANGLES, ...); // Using Tex2
// I delete some textures here.
glDeleteTextures(1, &Tex1);
glDeleteTextures(1, &Tex2);
// If I comment out this section, everything works correctly
// If I leave it in, this texture replaces Tex1 and Tex2, but
// the geometry is correct for each geometry batch.
glGenTextures(1, &Tex3);
glBindTexture(GL_TEXTURE_2D, Tex3);
// ... Fill Texture with data, set correct filtering etc.
glDrawElements(GL_TRIANGLES, ...); // Using Tex3
glDeleteTextures(1, &Tex3);
// ...
SwapBuffers();
I suspect this might have something to do with OpenGL buffering my draw calls, so that by the time they are actually processed the textures are already deleted? It doesn't really make sense to me, though: why would drawing something else after deleting the previous textures cause this behaviour?
More context
The generated textures are text strings, that may or may not change each frame, right now I create new textures for each string each frame and then render the texture and discard it right after. The bitmap data is generated with Windows GDI.
I'm not really looking for advice on efficiency, ideally I want an answer that can quote the documentation on the expected/correct behaviour for rendering using temporary textures like this, as well as possible common gotchas with this approach.
The expected behavior is clear. You can delete the objects as soon as you are done using them. In your case, after you made the draw calls that use the textures, you can call glDeleteTextures() on those textures. No additional precautions are required from your side.
Under the hood, OpenGL will typically execute the draw calls asynchronously, so the texture will still be in use after the draw call returns. But that's not your problem to manage: the driver is responsible for tracking the lifetime of objects and keeping them around until they are no longer used.
The clearest expression of this I found in the spec is on page 28 of the OpenGL 4.5 spec:
If an object is deleted while it is currently in use by a GL context, its name is immediately marked as unused, and some types of objects are automatically unbound from binding points in the current context, as described in section 5.1.2. However, the actual underlying object is not deleted until it is no longer in use.
In your code, this means that the driver can't delete the textures until the GPU completed the draw call using the texture.
Why that doesn't work in your case is hard to tell. One possibility is always that something in your code unintentionally deletes the texture earlier than it should be. With complex software architectures, that happens much more easily than you might think. For example, a really popular cause is that people wrap OpenGL objects in C++ classes, and let those C++ objects go out of scope while the underlying OpenGL object is still in use.
So you should definitely double check (for example by using debug breakpoints or logging) that no code that deletes textures is invoked at unexpected times.
The other option is a driver bug. While object lifetime management is not entirely trivial, it is so critical that it's hard to imagine it being broken for a very basic case. But it's certainly possible, and more or less likely depending on vendor and platform.
As a workaround, you could try not deleting the texture objects, and only specifying new data (using glTexImage2D()) for the same objects instead. If the texture size does not change, it would probably be more efficient to only replace the data with glTexSubImage2D() anyway.
I've written for myself a small utility class containing useful methods for rendering lines, quads, cubes, etc. quickly and easily in OpenGL. Up until now, I've been using almost entirely immediate mode, so I could focus on learning other aspects of OpenGL. It seems prudent to switch over to using VBOs. However, I want to keep much of the same functionality I've been using, for instance my utility class. Is there a good method of converting these simple immediate mode calls to a versatile VBO system?
I am using LWJGL.
Having converted my own code from begin..end blocks and also taught others, this is what I recommend.
I'm assuming that your utility class is mostly static methods, draw a line from this point to that point.
First step is to have each individual drawing operation create a VBO for each attribute. Replace your glBegin..glEnd block with code that creates an array (actually a ByteBuffer) for each vertex attribute: coordinates, colors, tex coords, etc. After what used to be glEnd, copy the ByteBuffers to the VBOs with glBufferData. Then set up the attributes with chunks of glEnableClientState, glBindBuffer, glVertex|Color|whateverPointer calls. Call glDrawArrays to actually draw something, and finally restore client state and delete the VBOs.
Now, this is not good OpenGL code and is horribly inefficient and wasteful. But it should work, it's fairly straightforward to write, and you can change one method at a time.
And if you don't need to draw very much, well modern GPUs are so fast that maybe you won't care that it's inefficient.
Second step is to start re-using VBOs. Have your class create one VBO for each possible attribute at init time or first use. The drawing code still creates ByteBuffer data arrays and copies them over, but doesn't delete the VBOs.
Third step, if you want to move into OpenGL 4 and are using shaders, would be to replace glVertexPointer with glVertexAttribPointer(0, glColorPointer with glVertexAttribPointer(1, etc. You should also create a Vertex Array Object along with the VBOs at init time. (You'll still have to enable/disable attrib pointers individually depending on whether each draw operation needs colors, tex coords, etc.)
And the last step, which would require changes elsewhere to your program(s), would be to go for 3D "objects" rather than methods. Your utility class would no longer contain drawing methods. Instead you create a line, quad, or cube object and draw that. Each of these objects would (probably) have its own VBOs. This is more work, but really pays off in the common case when a lot of your 3D geometry doesn't change from frame to frame. But again, you can start with the more "wasteful" approach of replacing each method call to draw a line from P1 to P2 with something like l = new Line3D(P1, P2); l.draw().
Hope this helps.
I am trying to set up a simple function which will make it a lot easier for me to texture-map geometry in OpenGL, but for some reason when I'm trying to make a skybox, I am getting a white box instead of the texture-mapped geometry. I think the problem lies within the following code:
void MapTexture (char *File, int TextNum) {
if (!TextureImage[TextNum]){
TextureImage[TextNum]=auxDIBImageLoad(File);
glGenTextures(1, &texture[TextNum]);
glBindTexture(GL_TEXTURE_2D, texture[TextNum]);
glTexImage2D(GL_TEXTURE_2D, 0, 3, TextureImage[TextNum]->sizeX, TextureImage[TextNum]->sizeY, 0, GL_RGB, GL_UNSIGNED_BYTE, TextureImage[TextNum]->data);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
}
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture[TextNum]);
//glTexImage2D(GL_TEXTURE_2D, 0, 3, TextureImage[TextNum]->sizeX, TextureImage[TextNum]->sizeY, 0, GL_RGB, GL_UNSIGNED_BYTE, TextureImage[TextNum]->data);
//glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
}
The big thing I don't understand is why glBindTexture() must come between glGenTextures() and glTexImage2D(). If I place it anywhere else, it screws everything up. What could be causing this? Sorry if it's something simple; I'm brand new to OpenGL.
Below is a screenshot of the whitebox I am talking about:
EDIT
After playing around with the code a bit more, I realized that if I add glTexImage2D() and glTexParameteri() after the last glBindTexture(), then all the textures load. Why is it that without those two lines most textures load yet a few do not, and why do I have to call glTexImage2D() every frame, but only for a few textures?
Yes, order is definitely important.
glGenTextures creates a texture name.
glBindTexture takes a texture name generated by glGenTextures, so it can't be run before glGenTextures.
glTexImage2D uploads data to the currently bound texture, so it can't be run before glBindTexture.
The client-side interface to OpenGL is a Big Giant Squiggly State Machine. There are an enormous number of parameters and flags that you can change, and you have to be scrupulous about always leaving OpenGL in the right state. This usually means popping the matrices you push and restoring the flags you modify (at least in OpenGL 1.x).
OpenGL is a state machine, which means that you can pull its levers, turn its knobs, and it will keep those settings, until you change them actively.
However, it also manages its persistent data in objects. Such objects are abstract things and must not be confused with the objects seen on the screen!
Now, to the outside, OpenGL identifies objects by their so-called name, a numerical ID. You create a (list of) name(s) -- but not the object(s)! -- with glGenTextures for texture objects, which are one kind of OpenGL object.
To manipulate such an object, OpenGL must first be put into a state in which all following calls that manipulate objects of that type happen to one particular object. This is done with glBindTexture. After calling glBindTexture, all following calls that manipulate textures happen to the one texture object you've just bound. If the object didn't previously exist, it is created the first time a newly assigned name is bound.
Now OpenGL uses that particular object.
glTexImage2D is just one of several functions that manipulate the data of the currently bound texture.
Otherwise your function points in the right direction. OpenGL has no real initialization phase; you just do things as you go along, and it makes sense to defer loading data until you need it. But it also makes sense to iterate over your lists of objects (your own objects now, not OpenGL's) before you actually draw a frame. One of those preparation passes should test whether each object's data is already loaded. If a significant amount of data is still missing, draw a loading screen instead, so the user doesn't get the impression your program hangs. You could even carry out lengthy loading operations in a separate thread, but with OpenGL this requires some precautions.