What is the meaning of glEnableClientState and glDisableClientState in OpenGL?
So far I've found that these functions are to enable or disable some client side capabilities.
Well, what exactly is the client or server here?
I am running my OpenGL program on a PC, so what is this referring to?
Why do we even need to disable certain capabilities? And, more intriguingly, why do these capabilities seem to be about arrays of some kind?
The whole picture is still very hazy to me.
The original terminology stems from X11, where the server is the actual graphics display system: a server program provides access to some kind of display device, and clients connect to that server to draw on the display device it provides.
glEnableClientState and glDisableClientState set state on the client-side part. Vertex Arrays used to be located in the client process's memory, so drawing with vertex arrays was a client-local operation.
Today we have Buffer Objects, which place the data in server memory, rendering the whole client-side terminology of vertex arrays counterintuitive. It would make sense to discard client states and enable/disable vertex arrays through the usual glEnable/glDisable functions, as we do with framebuffer objects and textures.
If you draw your graphics by passing buffers to OpenGL (glVertexPointer(), etc.) instead of making direct calls (glVertex3f()), you need to tell OpenGL which buffers to use.
So instead of calling glVertex and glNormal, you'd create buffers, bind them, and use glVertexPointer and glNormalPointer to point OpenGL at your data. Afterwards, a call to glDrawElements (or the like) will use those buffers to do the drawing. However, one other required step is to tell the OpenGL driver which arrays you actually want to use, which is where glEnableClientState() comes in.
This is all very hand-wavy. You need to read up on vertex buffer objects and try them out.
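To make it a bit less hand-wavy, here is a minimal sketch of the client-side vertex array path in legacy (compatibility profile) OpenGL; the triangle data is purely illustrative:

```c
/* Minimal sketch: legacy client-side vertex arrays (compatibility profile).
   The arrays live in ordinary application memory -- the "client state". */
GLfloat positions[] = { 0.0f, 0.0f, 0.0f,
                        1.0f, 0.0f, 0.0f,
                        0.0f, 1.0f, 0.0f };
GLfloat normals[]   = { 0.0f, 0.0f, 1.0f,
                        0.0f, 0.0f, 1.0f,
                        0.0f, 0.0f, 1.0f };

glEnableClientState(GL_VERTEX_ARRAY);   /* "I will supply positions as an array" */
glEnableClientState(GL_NORMAL_ARRAY);   /* "...and normals as an array" */
glVertexPointer(3, GL_FLOAT, 0, positions);
glNormalPointer(GL_FLOAT, 0, normals);

glDrawArrays(GL_TRIANGLES, 0, 3);       /* pulls vertices from the enabled arrays */

glDisableClientState(GL_NORMAL_ARRAY);  /* restore state when done */
glDisableClientState(GL_VERTEX_ARRAY);
```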
In OpenGL terminology, the client is your application, whereas the server is the graphics card (or the driver), I think. The only client-side capabilities are the vertex arrays, since these are stored in CPU memory and therefore on the client side; more specifically, they are controlled (allocated and freed) by your application, not by the driver.
Vertex buffer objects are a different story. They can be used as vertex arrays, but they are controlled by the driver, so the term "client state" doesn't make much sense anymore when working with buffers.
glEnableClientState and glDisableClientState are mainly used to manage Vertex Arrays and Vertex Buffer Objects.
Related
What does gl.glTexImage2D do? The docs say it "uploads texture data". But does this mean the whole image ends up in GPU memory? I'd like to use one large image file for texture mapping. Further: can I simply use a VBO for UV and position coordinates to draw the texture?
Right, I am using words the wrong way here. What I meant was using a 2D array of UV coordinates and a 2D grid of models to subsample a larger PNG image (in texture memory) onto individual tile models. My confusion lies in not knowing how fast these fetches are. Let's say I have a 5000x5000 pixel image. I load it as a texture. Then I create my own algorithm for fetching portions of it to draw. Where do I save bandwidth when drawing these tiles? If I implement an LOD algorithm to determine which tiles are close, which are far, and which are outside the camera frustum, how do I manage each of these tiles in memory? Loaded question, I know, but I am struggling to find the best implementation to get started. I am developing for mobile devices with OpenGL ES 2.0.
What exactly happens when you call glTexImage2D() is system dependent, and there's no way for you to know, unless you have developer tools that allow you to track GPU and memory usage.
The only thing guaranteed is that the data you pass to the call has been consumed by the time the call returns (since the API definition allows you to modify/free the data after the call), and that the data is accessible to the GPU when it's used for rendering. Beyond that, anything is fair game. Keep in mind that OpenGL is a very asynchronous API. When you make API calls, the corresponding work is mostly queued up for later execution by the GPU, and is generally not completed by the time the calls return. This can include calls that upload data.
Also, not all GPUs have "GPU memory". In fact, if you look at them by quantity, very few of them do. Mobile GPUs have caches, but mostly not VRAM in the sense of traditional discrete GPUs. How VRAM and caches are managed is highly system dependent.
With all the caveats above, and picturing a GPU that has VRAM: while it's possible for a driver to load the data into VRAM during the glTexImage2D() call, I would be surprised if that were commonly done. It just wouldn't make much sense to me. When a texture is loaded, you have no idea how soon it will be used for rendering. Since you don't know whether all textures will fit in VRAM (and they often will not), you might have to evict it from VRAM before it was ever used, which would obviously be very wasteful. As a general strategy, I think it is much more efficient to load the texture data into VRAM only when you have a draw call that uses it.
Things would be somewhat different if the driver could be very confident that all texture data will fit in VRAM. But with OpenGL, there's really no reasonable way to know this ahead of time. And things get even more complicated since at least on desktop computers, you can have multiple applications running at the same time, while VRAM is a shared resource.
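As an illustration of that queuing: desktop GL lets you stage an upload through a pixel unpack buffer, in which case glTexImage2D() only records an offset into the buffer and the actual copy can happen later. A rough sketch (the `pixels`, `w`, and `h` names are placeholders; note that plain OpenGL ES 2.0 has no PBOs):

```c
/* Sketch: staging a texture upload through a pixel unpack buffer
   (desktop GL; `pixels`, `w`, `h` are placeholders). */
GLuint pbo, tex;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBufferData(GL_PIXEL_UNPACK_BUFFER, w * h * 4, pixels, GL_STREAM_DRAW);

glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
/* With a buffer bound to GL_PIXEL_UNPACK_BUFFER, the last argument is an
   offset into that buffer, not a pointer, so the driver is free to defer
   the actual transfer. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, (void*)0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
```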
You are correct.
glTexImage2D() is the function that actually moves the texture data across to the GPU.
You will need to create the texture object first using glGenTextures() and then bind it using glBindTexture().
There is a good example of this process in the OpenGL Red Book.
You can then use this texture with a VBO. There are many ways to accomplish this, but interleaving your vertex coordinates, texture coordinates, and vertex normals, and then telling the GPU how to unpack them with calls to glVertexAttribPointer, is your best bet performance-wise; see the sketch below.
You are on the right track with VBOs; the old fixed-pipeline GL stuff is deprecated, so you should just learn VBOs from the outset.
This book is not 100% up to date, but it is complete and free, and should serve as a great place to start learning VBOs: OpenGL Book
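Here is a rough sketch of both steps: the texture upload, then an interleaved VBO unpacked with glVertexAttribPointer. The attribute locations (0, 1, 2) assume matching declarations in your shader, and `pixels`, `w`, `h`, and `verts` are placeholders:

```c
/* Sketch: texture creation/upload plus an interleaved VBO.
   `pixels`, `w`, `h`, and `verts` are placeholders. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);

/* Interleaved layout per vertex: 3 floats position, 3 floats normal, 2 floats UV. */
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

GLsizei stride = 8 * sizeof(GLfloat);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void*)0);                     /* position */
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, stride, (void*)(3 * sizeof(GLfloat))); /* normal   */
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, stride, (void*)(6 * sizeof(GLfloat))); /* UV       */
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);
```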
I'd like to lay out some things I think I've learned, but am unsure about:
VBOs are the way to go. They're created with glGenBuffers and glBufferData.
For maximum flexibility, it's best to pass generic vertex attributes to shaders with glVertexAttribPointer, rather than glVertex, glNormal, etc.
glDrawElements can be used with vertex buffers and an index buffer to efficiently render geometry with lots of shared vertices, such as a landscape mesh.
Assuming all of that is correct so far, here's my question. All of the tutorials I've read about modern OpenGL completely omit glEnableClientState. But the OpenGL man pages say that without glEnableClientState, glDrawElements will do nothing:
http://www.opengl.org/sdk/docs/man/xhtml/glDrawElements.xml
The key passage is: "If GL_VERTEX_ARRAY is not enabled, no geometric primitives are constructed."
This leads me to the following questions:
None of the tutorials use glEnableClientState before calling glDrawElements. Does this mean the man page is wrong or outdated?
GL_VERTEX_ARRAY would seem to be the thing you enable if you're going to use glVertexPointer, and likewise you'd use GL_NORMAL_ARRAY with glNormalPointer, and so on. But if I'm not using those functions, and am instead using generic vertex attributes with glVertexAttribPointer, then why would it be necessary to enable GL_VERTEX_ARRAY?
If GL_VERTEX_ARRAY is not enabled, no geometric primitives are constructed.
That's because the man page is wrong. The man page covers GL 2.1 (and it's still wrong for that), and for whatever reason, the people updating the man page refuse to update the older GL versions for bug fixes.
In GL 2.1, you must use either generic attribute index 0 or GL_VERTEX_ARRAY. In GL 3.1+, you don't need to use any specific attribute indices.
This is because, in GL versions before 3.1, all array rendering functions were defined in terms of calls to glArrayElement, which used immediate mode-based rendering. That means that you need something to provoke the vertex. Recall that, in immediate mode, calling glVertex*() not only sets the vertex position, it also causes the vertex to be sent with the other attributes. Calling glVertexAttrib*(0, ...) does the same thing. That's why older versions require you to use either attribute 0 or GL_VERTEX_ARRAY.
In GL 3.1+, once they took out immediate mode, they had to specify array rendering differently. And because of that, they didn't have to limit themselves to using attribute 0.
If you want API docs for how core GL 3.3 works, I suggest you look at the actual API docs for core GL 3.3. Though, to be honest, I'd just look at the spec if you want accurate information. Those docs have a lot of misinformation in them, and since they're not actually wiki pages, that information never gets corrected.
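For comparison, here's a sketch of the GL 3.1+ core-profile path, where no client state is needed and any attribute index works; `vbo`, `ibo`, `indexCount`, and the choice of location 3 are assumptions:

```c
/* Sketch: GL 3.1+ core profile -- generic attributes only, no
   glEnableClientState, and no requirement to use attribute 0. */
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

glBindBuffer(GL_ARRAY_BUFFER, vbo);          /* vbo holds the vertex data */
glEnableVertexAttribArray(3);                /* any index the shader declares */
glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);  /* ibo holds the index data */
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, (void*)0);
```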
Your first 3 points are correct. And to answer the second half of your question: do not use glEnableClientState in modern OpenGL. Start coding!
VBOs are the way to go. They're created with glGenBuffers and glBufferData.
VBOs are often used in high-performance applications. You might want to do something simpler first. Vertex arrays can be a good way to go to get started quickly. Because VBOs sit on top of vertex arrays anyway, you'll be using much of the same code when you switch to VBOs, and it might be good to run a test using vertex arrays or indexed vertex arrays first.
For maximum flexibility, it's best to pass generic vertex attributes to shaders with glVertexAttribPointer, rather than glVertex, glNormal, etc.
That's a good approach.
glDrawElements can be used with vertex buffers and an index buffer to efficiently render geometry with lots of shared vertices, such as a landscape mesh.
Perhaps. You have to be sure the vertices truly are shared, though. A vertex that is in the same location but has a different normal (e.g. for flat shading) isn't truly shared.
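To make the sharing point concrete, here's a small indexed-drawing sketch (legacy vertex arrays with client-side index data, purely illustrative): a quad drawn as two triangles, where vertices 0 and 2 are reused — which only works because every attribute of those vertices matches:

```c
/* Sketch: a quad as two indexed triangles; vertices 0 and 2 appear in
   both triangles, so they are stored (and transformed) only once. */
GLfloat quad[] = {
    0.0f, 0.0f, 0.0f,   /* vertex 0 */
    1.0f, 0.0f, 0.0f,   /* vertex 1 */
    1.0f, 1.0f, 0.0f,   /* vertex 2 */
    0.0f, 1.0f, 0.0f,   /* vertex 3 */
};
GLushort indices[] = { 0, 1, 2,   0, 2, 3 };

glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, quad);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, indices);
glDisableClientState(GL_VERTEX_ARRAY);
```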
I'm writing data into a 3D texture from within a fragment shader, and I need to asynchronously read back said data into system memory. The only means of asynchronously initiating the packing operation into the buffer object seems to be calling glReadPixels() with a NULL pointer. But this function insists on getting passed a rectangle defining the region to read back. Now I don't know if these parameters are ignored when using PBOs, but I assume not. In this case, I have no idea what to pass to this function in order to obtain the whole 3D texture.
Even if I have to read back individual slices (which would be kind of stupid, IMO), I still have no idea how to communicate to OpenGL which slice to read from. Am I missing something?
BTW, I could use individual 2D textures for every slice, but that would screw up (3D-)mipmapping if I'm not mistaken. I wanted to use the 3D mipmaps in order to efficiently find regions of interest in the resulting 3D texture.
P.S. Sorry for the sub-optimal tags, apparently no one ever asked about 3d textures before and since I'm not allowed to create new tags...
Who says that glReadPixels is the only way to read image data? Maybe in OpenGL ES it is, but if you're using ES, you should say so. The rest of this answer will be assuming you're talking about desktop GL.
If you have a texture, and you want to read its contents, you should use glGetTexImage. The switch that controls whether it reads into a buffer object or not is the same switch that controls it for glReadPixels: whether a buffer is bound to GL_PIXEL_PACK_BUFFER.
Note that glGetTexImage will retrieve the entire texture (for a given mipmap level).
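A sketch of how that looks for an asynchronous readback of a whole 3D texture level in desktop GL; `tex`, `width`, `height`, and `depth` are placeholders, and the 4 bytes per texel assumes GL_RGBA/GL_UNSIGNED_BYTE:

```c
/* Sketch: asynchronous readback of an entire 3D texture level through a
   pixel pack buffer (desktop GL). */
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, width * height * depth * 4,
             NULL, GL_STREAM_READ);

glBindTexture(GL_TEXTURE_3D, tex);
/* With a pack buffer bound, the last argument is an offset into the
   buffer, so the driver can defer the transfer. */
glGetTexImage(GL_TEXTURE_3D, 0, GL_RGBA, GL_UNSIGNED_BYTE, (void*)0);

/* ...later, when the data is actually needed: */
void *data = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
/* ...use data... */
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
```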
A bit of context :
I'm working on a GPU emulator (the NV2A if you want to know) at the push-buffer level, and I'm trying to implement the drawing using OpenGL. The GPU commands that I have to emulate contain separate pointers for each vertex component (so positions are in an entirely different memory address than fog coordinates, colors, texture coordinates, etc.)
Other data, like vertex component size, type and stride are also present in the push-buffer, but those are not really relevant to this question.
I've been reading about Vertex Array Objects, but as far as my tests go, the pointers you can set with glVertexAttribPointer should all be relative to a Vertex Buffer Object - something I would like to avoid, as I've already got a copy of the data in memory.
The question :
Is it possible in OpenGL to draw vertices using separate pointers (not managed by any OpenGL API) per vertex component? And what would the code look like, roughly?
PS: Since I'm emulating a GPU, I have to take vertex shader programs into account too. I haven't worked on these yet, so any suggestion on that is welcome too. TIA!
You don't need to use VBOs, glVertexAttribPointer takes a normal CPU-pointer if no VBO is bound (you can call glBindBuffer(GL_ARRAY_BUFFER, 0) to make sure). And yes, you can set up one address per attribute stream.
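A rough sketch of what that could look like (compatibility profile; a core profile rejects client-memory pointers). The pointers, strides, and attribute indices stand in for whatever your emulator pulls out of the push-buffer:

```c
/* Sketch: one separate client-memory pointer per vertex component.
   posPtr/colPtr/fogPtr and the strides are placeholders for the
   emulated push-buffer state. */
glBindBuffer(GL_ARRAY_BUFFER, 0);  /* ensure no VBO is bound */

glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, posStride, posPtr); /* positions  */
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, colStride, colPtr); /* colors     */
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 1, GL_FLOAT, GL_FALSE, fogStride, fogPtr); /* fog coords */

glDrawArrays(GL_TRIANGLES, 0, vertexCount);
```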
I'm writing an OpenGL program that draws into an auxiliary buffer; the content of the auxiliary buffer is then accumulated into the accumulation buffer before being GL_RETURN-ed to the back buffer (essentially to be composited to the screen). In short, I'm doing a sort of motion blur. The strange thing is that when I recompile and rerun my program, I still see the contents of the auxiliary/accumulation buffers from previous runs. This doesn't make sense. Am I misunderstanding something? Shouldn't OpenGL's state be completely reset when the program restarts?
I'm writing an SDL/OpenGL program on Gentoo Linux, nVidia drivers 195.36.31, on a GeForce Go 6150.
No - there's no reason for your GPU to ever clear its memory. It's your responsibility to clear out (or initialize) your textures before using them.
Actually, the OpenGL state is initialized to well-defined values.
However, the GL state consists of settings like the binary switches (glEnable), blending, depth test mode, and so on. Each of those has default settings, which are described in the OpenGL spec, and you can be sure that they will be enforced upon context creation.
The point is, the framebuffer (and texture data, vertex buffers, and so on) is NOT part of what is called "GL state". GL state "exists" in your driver. What is stored in GPU memory is a totally different thing, and it is uninitialized until you ask the driver (via GL calls) to initialize it. So it's completely possible to have the remains of a previous run in texture memory, or even in the framebuffer itself, if you don't clear or initialize it at startup.
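So the fix is simply to initialize those buffers yourself at startup. A sketch, assuming a context that actually has AUX and accumulation planes:

```c
/* Sketch: explicitly clearing the auxiliary and accumulation buffers at
   startup; assumes the context was created with AUX and accum planes. */
glDrawBuffer(GL_AUX0);                  /* target the auxiliary buffer */
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);

glClearAccum(0.0f, 0.0f, 0.0f, 0.0f);   /* then the accumulation buffer */
glClear(GL_ACCUM_BUFFER_BIT);

glDrawBuffer(GL_BACK);                  /* back to normal rendering */
```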