Questions about Display Lists in OpenGL SC 1.0.1

I have several questions about Display Lists in OpenGL SC 1.0.1:
If I understand correctly, a display list saves a sequence of OpenGL commands so they can be reused later at runtime.
Now, let's say I also include in this Display List an assignment that changes the value of a parameter declared outside the static sequence.
-> Would this parameter be updated during the generation of the Display List, or when the Display List is called at runtime?
I generate all my Display Lists only once, at initialization of the OpenGL context.
But, in my application, for several reasons, I call glClear at each cycle at runtime.
-> After a glClear command, do you think that all my Display Lists are deleted?
I ask because, when I do this, the graphical components that are generated through Display Lists are never drawn.

Display lists are created once, at the point where you call glNewList (and finished at glEndList). That list is then repeated verbatim when you call glCallList. I'm not sure what you mean by parameter exactly, but if you mean:
a) a C++ variable, then this will have no effect. The OpenGL calls in a display list are recorded as-is; whatever loop variables etc. were used to arrive at those calls are not recorded.
b) an OpenGL state variable (for example, a call to enable GL_LIGHTING), then this will be recorded (since the display list will capture the call to glEnable).
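As a minimal illustration of the difference (the variable names here are made up):
GLfloat offset = 1.0f;                // ordinary C++ variable
GLuint list = glGenLists(1);
glNewList(list, GL_COMPILE);
    glEnable(GL_LIGHTING);            // recorded: replayed on every glCallList
    glTranslatef(offset, 0.0f, 0.0f); // the value 1.0 is baked in, not the variable
glEndList();
offset = 5.0f;                        // has no effect on the list
glCallList(list);                     // still translates by 1.0 and enables lighting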
glClear has no effect on display lists. glClear simply clears the back buffer to black (or the colour you set via glClearColor). Obviously, once you've cleared all the pixel data you'll need to redraw it, so you'll need to set the projection matrix, set the correct transform matrix for your geometry, and call glCallList again to repeat the list of actions within that list.
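Roughly, the per-frame code would look something like this (a sketch; myList is a placeholder for your own list id, and the matrix setup depends on your scene):
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// ... camera / object transform here ...
glCallList(myList);                   // replays the recorded commands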
Having said all of that, I'd strongly advise steering well clear of display lists and moving to something nicer (e.g. VBO + VAO + GLSL shaders, or VBO + fixed-function pipeline; a minimal sketch of the latter follows the list below). Display lists have a large number of problems, including:
They are a nightmare for driver maintainers. As a result, graphics card support for them is a little bit of a mixed bag.
Not all GL API methods are valid to be called within a display list. Some methods (such as those that modify the client state - e.g. vertex array bindings) simply do not work in display lists. Others don't work simply because the driver implementors decided to not add support (e.g. NVidia will allow some GL3+ methods to be called within a display list, AMD only allows methods from OpenGL 1.5).
Updating a display list is painfully slow.
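For reference, the VBO + fixed-function route mentioned above could look roughly like this (OpenGL 1.5+; the triangle data is purely illustrative):
// One-time setup: upload vertex positions into a buffer object
GLfloat verts[] = { 0,0,0,  1,0,0,  1,1,0 };    // one triangle
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

// Per frame: point the fixed-function pipeline at the buffer and draw
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (void*)0);
glDrawArrays(GL_TRIANGLES, 0, 3);
glDisableClientState(GL_VERTEX_ARRAY);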

Related

VBO vs display list vs VA

I have a list of vertices that I plan to mutate and was hoping to get a little clarification on the differences among a VBO, display list, & VA -- I am trying to speed up the rendering within my application. Are VBOs and Display lists not options because I am not rendering static geometry?
Well, using anything other than VBOs (i.e., client-side memory pointers and/or display lists) isn't an option in Core contexts.
For dynamic data you can specify GL_STREAM_DRAW/GL_DYNAMIC_DRAW in your glBufferData() call's usage parameter and hope your GL implementation gets the hint.
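As a rough sketch of that pattern (vbo, maxVerts, numVerts and vertexData are placeholder names), allocate the buffer once with a dynamic usage hint and re-upload the changed vertices whenever they mutate:
// One-time: allocate a buffer sized for the worst case, with no initial data
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, maxVerts * 3 * sizeof(GLfloat), NULL, GL_DYNAMIC_DRAW);

// Whenever the vertices change: overwrite the contents in place
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferSubData(GL_ARRAY_BUFFER, 0, numVerts * 3 * sizeof(GLfloat), vertexData);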

Beating the state machine

I'm working on a plugin for a scripting language that allows the user to access the OpenGL 1.1 command set. On top of that, all functions of the scripting language's own gfx command set are transparently redirected to appropriate OpenGL calls. Normally, the user should use either the OpenGL command set or the scripting language's inbuilt gfx command set which basically contains just your typical 2D drawing commands like DrawLine(), DrawRectangle(), DrawPolygon(), etc.
Under certain conditions, however, the user might want to mix calls to the OpenGL and the inbuilt gfx command sets. This leads to the problem that my OpenGL implementations of inbuilt commands like DrawLine(), DrawRectangle(), DrawPolygon(), etc. have to be able to deal with whatever state the OpenGL state machine might currently be in.
Therefore, my idea was to first save all state information on the stack, then prepare a clean OpenGL context needed for my implementations of commands like DrawLine(), etc. and then restore the original state. E.g. something like this:
glPushAttrib(GL_ALL_ATTRIB_BITS);
glPushClientAttrib(GL_CLIENT_ALL_ATTRIB_BITS);
glPushMatrix();
// ... prepare OpenGL context for my needs ...  --> problem: see the second point below
// ... do drawing ...
glPopMatrix();
glPopClientAttrib();
glPopAttrib();
Doing it like this, however, leads to several problems:
glPushAttrib() doesn't push all attributes: e.g. pixel pack and unpack state, render mode state, and select and feedback state are not pushed. Also, extension states are not saved. Extension states are not important, as my plugin is not designed to support extensions. Saving and restoring the other information (pixel pack and unpack) could probably be implemented manually using glGet() (a sketch follows below).
Big problem: How should I prepare the OpenGL context after having saved all state information? I could save a copy of a "clean" state on the stack right after OpenGL's initialization and then try to pop this stack, but for this I'd need a function to move data inside the stack, i.e. a function to copy or move a saved state from the back to the top of the stack so that I can pop it. But I didn't see a function that can accomplish this...
It's probably going to be very slow, but this is something I could live with, because the user is not supposed to mix OpenGL and inbuilt gfx calls. If they do nevertheless, they will have to live with very poor performance.
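For the pieces glPushAttrib() misses, manual save/restore via glGet() could look like this (a sketch for part of the pixel store state only; the same pattern applies to the other missing groups):
// Save the pixel pack/unpack state that glPushAttrib() does not cover
GLint packAlign, unpackAlign, packRowLen, unpackRowLen;
glGetIntegerv(GL_PACK_ALIGNMENT,    &packAlign);
glGetIntegerv(GL_UNPACK_ALIGNMENT,  &unpackAlign);
glGetIntegerv(GL_PACK_ROW_LENGTH,   &packRowLen);
glGetIntegerv(GL_UNPACK_ROW_LENGTH, &unpackRowLen);

// ... draw with your own settings ...

// Restore
glPixelStorei(GL_PACK_ALIGNMENT,    packAlign);
glPixelStorei(GL_UNPACK_ALIGNMENT,  unpackAlign);
glPixelStorei(GL_PACK_ROW_LENGTH,   packRowLen);
glPixelStorei(GL_UNPACK_ROW_LENGTH, unpackRowLen);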
After these introductory considerations I'd finally like to present my question: Is it possible to "beat" the OpenGL state machine somehow? By "beating" I mean the following: Is it possible to completely save all current state information, then restore the default state and prepare it for my needs and do the drawing, and then finally restore the complete previous state again so that everything is exactly as it was before. For example, an OpenGL based version of the scripting language's DrawLine() command would do something like this then:
1. Save all current state information
2. Restore default state, set up a 2D projection matrix
3. Draw the line
4. Restore all saved state information so that the state is exactly the same as before
Is that possible somehow? It doesn't matter if it's very slow as long as it is 100% guaranteed to put the state into exactly the same state as it was before.
You can simply use different contexts, especially if you do not care about performance. Just keep a context for your internal gfx operations and another one the user might mess with, and bind the appropriate one to your window (and thread).
The way you describe it, it looks like you never want to share objects with the user's GL stuff, so simple "unshared" contexts will do fine. All you seem to want to share is the framebuffer - and the GL framebuffer (including back and front color buffers, depth buffer, stencil, etc.) is part of the drawable/window, not the context - so you get access to it with any context once you make that context current. Changing contexts mid-frame is not a problem.
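On Windows, for instance, that could look roughly like this (WGL shown purely as an illustration; GLX/EGL have equivalent calls, and hdc is the window's device context):
// Two unshared contexts for the same window (same HDC / pixel format)
HGLRC userCtx     = wglCreateContext(hdc);   // the context the script user plays with
HGLRC internalCtx = wglCreateContext(hdc);   // a context only the plugin's gfx commands touch

// Inside DrawLine() etc.: switch to the clean context, draw, switch back
wglMakeCurrent(hdc, internalCtx);
// ... set up a 2D projection and draw the line ...
wglMakeCurrent(hdc, userCtx);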

Display list does not work

I have some code like this:
glNewList(displist_boxy, GL_COMPILE);
for(int i=0; i<scene_max; i++)
{
DrawAABox( sceneGL[i].x,
sceneGL[i].y,
sceneGL[i].z,
10,10,10
);
}
glEndList();
DrawAABox draws an axis-aligned box made of 6 quads (with glBegin, glNormal, glVertex... glEnd).
It works in immediate mode, but when I try to build a display list as above and then call the list, it has no effect (no boxes are drawn). Should this work, or is it just not supposed to work? (I do not know much about it.)
A display list distills all OpenGL operations done inside the glNewList/glEndList block into a constant set of commands that are then executed when calling the display list (with glCallList). This means all the "dynamic" code involved in the list's creation is, well, compiled into the list. So when called, your boxes will use whatever positions sceneGL[i] had when you built the list. In fact you will only have a constant number of boxes, namely whatever number scene_max was when building the list. So if you do this in initialization code, where scene_max might be 0, nothing will be drawn.
Think about it, what could the driver possibly do when building this list? Just record all OpenGL commands called (and maybe convert them into some compressed and optimized format) for later submission or magically take the executed machine code (and its whole surrounding context) from your final executable when run and store this somehow to recall every operation you did between glNewList/glEndList (which wouldn't be much of a performance boost when compared to just executing it immediately, anyway)?
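In practice that means the list has to be (re)built after the scene data actually exists, and rebuilt whenever that data changes, for example:
// Build (or rebuild) the list only once sceneGL / scene_max hold real data
if (displist_boxy) glDeleteLists(displist_boxy, 1);   // throw away any old list
displist_boxy = glGenLists(1);
glNewList(displist_boxy, GL_COMPILE);
for (int i = 0; i < scene_max; i++)
    DrawAABox(sceneGL[i].x, sceneGL[i].y, sceneGL[i].z, 10, 10, 10);
glEndList();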
EDIT: As a side note, rather prefer the use of VBOs for pre-recording geometry. Compared to display lists they might lose some features, like state-change recording, but they give you others, like dynamic data updates. Likewise, how display lists work internally is totally up to the implementation and might not be faster than VBOs and the like. They're also deprecated (which might also speak for lousy/slow implementations on modern hardware, because drivers don't tend to optimize rarely used paths that well).
You seem to have missed generating an ID for the displist_boxy variable. Here's how it should look:
GLuint coAxis = glGenLists(1);
glNewList(coAxis, GL_COMPILE);
glBegin(GL_TRIANGLES);
...
glEnd();
glEndList();
Usage is as follows:
glCallList(coAxis);
To create a display list, the target OpenGL context must already be created and bound. If you happen to initialize the display list in code that's called before there is an OpenGL context, nothing will show up later.
Also what Christian Rau told you in https://stackoverflow.com/a/13192138/524368
On a side note: You should not be using display lists at all. They've been deprecated for about 10 years (OpenGL-2 was originally planned to do away with display lists) and OpenGL-3 followed through on it half a decade later.
Use VBOs and VAOs instead.

Renderer Efficiency

Ok, I have a renderer class which has all kinds of special functions called by the rest of the program:
DrawBoxFilled
DrawText
DrawLine
About 30 more...
Each of these functions calls glBegin/glEnd separately, which I know can be very inefficient (it's even deprecated). So anyway, I am planning a total rewrite of the renderer and I need to know the most efficient way to set up the functions so that when something calls one, it draws everything at once, or does whatever else it needs to do to run most efficiently. Thanks in advance :)
The efficient way to render is generally to use VBO's (vertex buffer objects) to store your vertex data, but that is only really meaningful if you are rendering (mostly) static data.
Without knowing more about what your application is supposed to render, it's hard to say how you should structure it. But ideally, you should never draw individual primitives, but rather draw the contents (a subset) of a vertex buffer.
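One common way to get there (a rough sketch; batchVbo is assumed to be a buffer object created at initialization) is to have the Draw* methods only append vertices to a CPU-side array, then upload and draw the whole batch once per frame:
#include <vector>

std::vector<GLfloat> batch;            // x,y pairs accumulated this frame

void DrawLine(float x0, float y0, float x1, float y1)
{
    batch.insert(batch.end(), { x0, y0, x1, y1 });   // no GL call here
}

void FlushBatch()                      // called once per frame
{
    glBindBuffer(GL_ARRAY_BUFFER, batchVbo);
    glBufferData(GL_ARRAY_BUFFER, batch.size() * sizeof(GLfloat),
                 batch.data(), GL_STREAM_DRAW);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, (void*)0);
    glDrawArrays(GL_LINES, 0, (GLsizei)batch.size() / 2);
    glDisableClientState(GL_VERTEX_ARRAY);
    batch.clear();
}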
The most efficient way is not to expose such low-level methods at all. Instead, what you want to do is build a scene graph, which is a data structure that contains a representation of the entire scene. You update the scene graph in your "update" method, then render the whole thing in one go in your "render" method.
Another, slightly different approach is to re-build the entire scene graph each frame. This has the advantage that once the scene graph is composed, it doesn't change. So you can call your "render" method on another thread while your "update" method is going through and constructing the scene for the next frame at the same time.
Many of the more advanced effects are simply not possible without a complete scene graph. You can't do shadow mapping, for instance (which requires you to render the scene multiple times from different angles), you can't do deferred rendering, and it also makes anything that relies on sorted draw order (e.g. alpha-blending) very difficult.
From your method names, it looks like you're working in 2D, so while shadow mapping is probably not high on your feature list, alpha-blending and deferred rendering might be.
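A very small sketch of what such a retained structure could look like (entirely illustrative; a real scene graph would carry materials, text data, and so on):
#include <vector>

struct Node {                       // one item in the scene graph
    float transform[16];            // local transform (column-major, as GL expects)
    std::vector<Node*> children;    // children inherit this transform
    virtual void Draw() const {}    // leaf types (box, text, line...) override this
};

// Render pass: walk the graph, pushing/popping transforms as we go
void Render(const Node& n)
{
    glPushMatrix();
    glMultMatrixf(n.transform);
    n.Draw();
    for (const Node* c : n.children)
        Render(*c);
    glPopMatrix();
}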

How can you draw primitives in OpenGL interactively?

I'm having a rough time trying to set up this behavior in my program.
Basically, I want it so that when the user presses the "a" key, a new sphere is displayed on the screen.
How can you do that?
I would probably do it by simply having some kind of data structure (array, linked list, whatever) holding the current "scene". Initially this is empty. Then when the event occurs, you create some kind of representation of the new desired geometry, and add that to the list.
On each frame, you clear the screen and go through the data structure, mapping each representation into a suitable set of OpenGL commands. This is really standard.
The data structure is often referred to as a scene graph, it is often in the form of a tree or graph, where geometry can have child-geometries and so on.
If you're using the GLUT library (which is pretty standard), you can take advantage of its automatic primitive generation functions, like glutSolidSphere. The GLUT API docs cover these in section 11, 'Geometric Object Rendering'.
As unwind suggested, your program could keep some sort of list, but of the parameters for each primitive rather than the actual geometry. In the case of the sphere, this would be position/radius/slices. You can then use the GLUT functions to easily draw the objects. Obviously this limits you to what GLUT can draw, but that's usually fine for simple cases.
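Putting those two suggestions together, a minimal GLUT-based sketch might look like this (structure and values are illustrative, not a drop-in solution):
#include <GL/glut.h>
#include <vector>

struct Sphere { float x, y, z, radius; };
std::vector<Sphere> spheres;                 // the "scene": just parameters

void keyboard(unsigned char key, int, int)
{
    if (key == 'a') {
        spheres.push_back({ 0.0f, 0.0f, -5.0f, 1.0f });  // add a new sphere
        glutPostRedisplay();                             // ask GLUT to redraw
    }
}

void display()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    for (const Sphere& s : spheres) {
        glPushMatrix();
        glTranslatef(s.x, s.y, s.z);
        glutSolidSphere(s.radius, 16, 16);               // radius, slices, stacks
        glPopMatrix();
    }
    glutSwapBuffers();
}

// In main(): glutKeyboardFunc(keyboard); glutDisplayFunc(display); glutMainLoop(); ...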
Without some more details of what environment you are using it's difficult to be specific, but here are a few pointers to things that can easily go wrong when setting up OpenGL.
Make sure you have the camera set up to look at the point where you are drawing the sphere. This can be surprisingly hard, and the simplest approach is to use gluLookAt from the OpenGL Utility Library (GLU). Make sure your near and far clipping planes are set to sensible values.
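A bare-bones projection and camera setup, for example (the numbers are arbitrary placeholders, and width/height are your window's dimensions):
// Projection: field of view, aspect ratio, near and far clipping planes
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0, (double)width / height, 0.1, 100.0);

// View: camera at (0,0,5) looking at the origin, with Y up
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0.0, 0.0, 5.0,   0.0, 0.0, 0.0,   0.0, 1.0, 0.0);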
Turn off backface culling, at least to start with. Sure, in production code backface culling gives you a quick performance gain, but it's remarkably easy to set up normals incorrectly on an object and not see it because you're looking at the invisible face.
Remember to call glFlush to make sure that all commands are executed. Drawing to the back buffer and then failing to swap buffers (glutSwapBuffers, or the platform equivalent) is also a common mistake.
Occasionally you can run into issues with buffer formats - although if you copy from sample code that works on your system this is less likely to be a problem.
Graphics coding tends to be quite straightforward to debug once you have the basic environment correct because the output is visual, but setting up the rendering environment on a new system can always be a bit tricky until you have that first cube or sphere rendered. I would recommend obtaining a sample or template and modifying that to start with rather than trying to set up the rendering window from scratch. Using GLUT to check out first drafts of OpenGL calls is good technique too.