Display lists vs. immediate rendering - OpenGL

I am trying to find out whether display lists give better FPS than immediate mode rendering. From what I found online, display lists are faster, but I also found some code online where immediate mode was faster.
Can anyone explain to me which one has better FPS and why?

Display lists will be much faster than immediate mode. They buffer the drawing commands sent to OpenGL and allow execution from the GPU itself. They are quite flexible in that they buffer quite a variety of commands; I believe you can even nest them. They are also quite easy to set up, so it wouldn't take long to benchmark the difference.
Display lists are, however, deprecated, so you should be looking at rendering with vertex buffer objects (VBOs) and glDraw*() unless it's just for fun.
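As a rough illustration of the two approaches (fixed-function OpenGL; the triangle is just placeholder geometry), compiling the same commands into a display list and timing both is the quickest way to see the difference on your own hardware:

    // Immediate mode: every vertex is pushed through the driver every frame.
    void drawImmediate() {
        glBegin(GL_TRIANGLES);
        glVertex3f(0.0f, 0.0f, 0.0f);
        glVertex3f(1.0f, 0.0f, 0.0f);
        glVertex3f(0.0f, 1.0f, 0.0f);
        glEnd();
    }

    // Display list: the same commands are recorded once...
    GLuint triangleList = 0;

    void buildList() {
        triangleList = glGenLists(1);
        glNewList(triangleList, GL_COMPILE);
        drawImmediate();              // recorded into the list, not executed
        glEndList();
    }

    // ...and replayed cheaply every frame afterwards.
    void drawList() {
        glCallList(triangleList);
    }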

Related

Increase Performance in OpenGL

I have a question about picking. I found a really neat way to do picking: render a single frame with lighting turned off and a unique colour for each object, then simply find the colour of the pixel under the mouse pointer. It works fine, and from what I understand it has also been used by others in the past for object picking. I thought rendering a single frame that way would not be noticeable, but it is. I should mention that I am using the JOGL bindings.
So I was wondering, what factors affect the performance in this case? I read in Apple's dev guide NOT to use glBegin and glEnd and to use buffers instead. Is that right? And am I right to think it should be possible to render a single frame without anyone noticing the flicker?
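For reference, the colour-picking pass described above typically looks roughly like this (shown in C-style OpenGL for brevity rather than JOGL; drawSceneForPicking() is a hypothetical function that draws object i with colour (i, 0, 0)):

    int pickObject(int mouseX, int mouseY) {
        glDisable(GL_LIGHTING);
        glDisable(GL_TEXTURE_2D);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        drawSceneForPicking();            // hypothetical: object i drawn as (i, 0, 0)

        GLint viewport[4];
        glGetIntegerv(GL_VIEWPORT, viewport);

        unsigned char pixel[3];
        // OpenGL's window origin is bottom-left, so flip the mouse y coordinate.
        glReadPixels(mouseX, viewport[3] - mouseY, 1, 1,
                     GL_RGB, GL_UNSIGNED_BYTE, pixel);

        glEnable(GL_LIGHTING);
        return pixel[0];                  // the red channel encodes the object index
    }

Since the picking pass is drawn only into the back buffer and never swapped to the screen, it should not be visible at all if double buffering is set up correctly.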

Convenient method for statically rendering 3d shapes into image files

My basic problem is to generate 2D renders of 3D objects, such as one could accomplish with OpenGL or DirectX. However, I have no interest in displaying the rendered objects on the screen, only in generating the shaded/textured/rotated images as bitmaps (not necessarily written to disk). This process is likely to be a problematic bottleneck in my design, so I would prefer to keep my solution as compact as possible (i.e., not waste time sending the image to the screen), and I would be most pleased if I could make use of hardware-accelerated rendering. Does anyone know of a convenient library or tool to help with this?
Right now I would prefer a C/C++ option; however, speed is what I'm going for, so I'm willing to deal with ASM or super-optimized anything else if it gets me what I want the fastest.
What you need is a technique called "render to texture". With OpenGL you can do it very easily. Take a look at the example here: http://www.songho.ca/opengl/gl_fbo.html#example
You can do this using "render to texture"
This process is likely to be a problematic bottleneck in my design, so I would prefer to keep my solution as compact as possible (i.e., not waste time sending the image to the screen)
If you want rendering to be fast, you want to use the GPU for this. Sending the image to the screen comes essentially for free with those things. But reading an image back into CPU memory is actually quite a bottleneck, though probably an unavoidable one in your case.
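As a rough sketch of the approach (along the lines of the songho example linked above): create an FBO with a colour texture and a depth renderbuffer, render into it, then read the pixels back. The calls are standard GL 3.0 / ARB_framebuffer_object; the GLEW loader and the drawScene() step are assumptions, and error checking is omitted:

    #include <GL/glew.h>        // or any other loader exposing the FBO entry points
    #include <vector>

    void drawScene();           // assumed: whatever renders your 3D objects

    // Assumes a current OpenGL context (it can belong to a hidden window).
    std::vector<unsigned char> renderToImage(int width, int height) {
        GLuint fbo, colorTex, depthRb;

        glGenTextures(1, &colorTex);
        glBindTexture(GL_TEXTURE_2D, colorTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);

        glGenRenderbuffers(1, &depthRb);
        glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, colorTex, 0);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                  GL_RENDERBUFFER, depthRb);

        glViewport(0, 0, width, height);
        drawScene();

        // Reading back to CPU memory is the expensive step mentioned above.
        std::vector<unsigned char> pixels(width * height * 4);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

        glBindFramebuffer(GL_FRAMEBUFFER, 0);   // back to the default framebuffer
        return pixels;
    }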

GLU tessellator for real-time tessellation?

I'm trying to make a vector drawing application using OpenGL which will allow the user to see the result in real time. The way I have it set up is with an edge flag callback so the GLU tessellator only outputs triangles, which I then pass to a VBO. I've tried to make all my algorithms as fast as possible, and this is not where my issue is. According to a few code profilers, my big slowdown occurs in the call to gluTessEndPolygon(), which is the function that builds the polygon. I have found that when the shape exceeds 100 input vertices, it gets really, really slow and basically destroys all the hard work I did to optimize everything else. What can I do? I provide a normal of (0,0,1). I also tried all the tips from the GL Red Book. Is there a way to make the tessellator tessellate quicker but with less precision?
Thanks
You might give poly2tri a try to see if it's any faster.
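In case it helps, here is a sketch of what swapping in poly2tri tends to look like, based on its usual API (p2t::CDT, Triangulate(), GetTriangles()); double-check the details against the version you actually pull in, and note that the caller owns the input points:

    #include <poly2tri/poly2tri.h>
    #include <utility>
    #include <vector>

    // Triangulate one closed outline and return flat (x, y) triangle vertices
    // ready to upload to the VBO mentioned in the question.
    std::vector<float> triangulateOutline(const std::vector<std::pair<double, double> >& outline) {
        std::vector<p2t::Point*> polyline;
        for (size_t i = 0; i < outline.size(); ++i)
            polyline.push_back(new p2t::Point(outline[i].first, outline[i].second));

        p2t::CDT cdt(polyline);            // outer contour; AddHole() handles holes
        cdt.Triangulate();

        std::vector<float> verts;
        std::vector<p2t::Triangle*> tris = cdt.GetTriangles();
        for (size_t t = 0; t < tris.size(); ++t)
            for (int i = 0; i < 3; ++i) {
                verts.push_back((float)tris[t]->GetPoint(i)->x);
                verts.push_back((float)tris[t]->GetPoint(i)->y);
            }

        for (size_t i = 0; i < polyline.size(); ++i)
            delete polyline[i];            // we own the points we allocated
        return verts;
    }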

How much does glCallList boost the performance?

I'm new to OpenGL. I wrote a program before getting to know about OpenGL display lists. By now it is pretty difficult to exploit them, as my code contains many non-'gl' lines everywhere in between. So I'm curious how much I've lost in performance.
It depends. For static geometry it can help quite a lot, but display lists are being deprecated in favor of VBOs (vertex buffer objects) and similar techniques that allow you to upload geometry data to the card once and then just re-issue draw calls that use that data.
Display lists (which were a nice idea in OpenGL 1.0) have turned out to be difficult to implement correctly in all corner cases, and are being phased out in favor of more straightforward approaches.
However, if all you cache is geometry data, then by all means, lists are OK. But VBOs are the future, so I'd suggest learning those.
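A minimal VBO sketch of the "upload once, draw many times" idea (legacy client-state style so it drops into fixed-function code; the vertex data is whatever your program already has):

    GLuint vbo = 0;
    GLsizei vertexCount = 0;

    // Upload the geometry to the card once.
    void uploadGeometry(const float* xyz, GLsizei count) {
        vertexCount = count;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, count * 3 * sizeof(float), xyz, GL_STATIC_DRAW);
    }

    // Re-issue a cheap draw call every frame that uses the data already on the card.
    void drawGeometry() {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, (const void*)0);
        glDrawArrays(GL_TRIANGLES, 0, vertexCount);
        glDisableClientState(GL_VERTEX_ARRAY);
    }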
Display lists (executed with glCallList) do boost performance, provided your user's graphics card has enough memory and supports display lists (most of them do).
There are several performance bottlenecks, one of which is the speed at which data can be pushed to the graphics card. When you call an OpenGL function, the work happens either on the CPU or on the GPU (the graphics card's processor). The GPU is optimized for graphics operations, but the CPU must first send its data over to the GPU, and that takes a lot of time.
If you use display lists, the data is sent to the GPU ahead of time where possible. Then, when you call the list, the CPU doesn't have to push the data to the GPU again.
So yes, a big performance boost if you use them.
Corner case:
If you plan to run your app remotely over GLX then display lists can be a huge win because "pushing to the card" involves a costly libgl->network->xserver->libgl trip.

How can you draw primitives in OpenGL interactively?

I'm having a rough time trying to set up this behavior in my program.
Basically, I want it so that when the user presses the "a" key, a new sphere is displayed on the screen.
How can you do that?
I would probably do it by simply having some kind of data structure (array, linked list, whatever) holding the current "scene". Initially it is empty. Then, when the event occurs, you create some kind of representation of the new desired geometry and add it to the list.
On each frame, you clear the screen and go through the data structure, mapping each representation into a suitable set of OpenGL commands. This is really standard.
The data structure is often referred to as a scene graph; it is often in the form of a tree or graph, where geometry can have child geometries and so on.
If you're using the GLUT library (which is pretty standard), you can take advantage of its automatic primitive generation functions, like glutSolidSphere. You can find the API docs here; take a look at section 11, 'Geometric Object Rendering'.
As unwind suggested, your program could keep some sort of list, but of the parameters for each primitive rather than the actual geometry. In the case of the sphere this would be position/radius/slices. You can then use the GLUT functions to easily draw the objects. Obviously this limits you to what GLUT can draw, but that's usually fine for simple cases.
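Putting the two suggestions together, a sketch might look like this (GLUT; the positions and slice/stack counts are made up for illustration):

    #include <GL/glut.h>
    #include <vector>

    struct Sphere { float x, y, z, radius; };
    std::vector<Sphere> spheres;              // the "scene": just parameters

    void keyboard(unsigned char key, int, int) {
        if (key == 'a') {
            // Add a new sphere description; the actual geometry comes from GLUT.
            Sphere s = { 0.0f, 0.0f, -5.0f - 0.5f * spheres.size(), 1.0f };
            spheres.push_back(s);
            glutPostRedisplay();              // ask GLUT to redraw the scene
        }
    }

    void display() {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        for (size_t i = 0; i < spheres.size(); ++i) {
            glPushMatrix();
            glTranslatef(spheres[i].x, spheres[i].y, spheres[i].z);
            glutSolidSphere(spheres[i].radius, 16, 16);   // radius, slices, stacks
            glPopMatrix();
        }
        glutSwapBuffers();
    }

Register these with glutKeyboardFunc and glutDisplayFunc in main, and create the window with GLUT_DOUBLE | GLUT_DEPTH so the buffer swap and depth test work as expected.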
Without some more details of what environment you are using it's difficult to be specific, but here are a few pointers to things that can easily go wrong when setting up OpenGL:
Make sure you have the camera set up to look at the point where you are drawing the sphere. This can be surprisingly hard, and the simplest approach is to use gluLookAt from the OpenGL Utility Library (GLU). Make sure your near and far clipping planes are set to sensible values.
Turn off backface culling, at least to start with. Sure, in production code backface culling gives you a quick performance gain, but it's remarkably easy to set up the normals incorrectly on an object and not see it because you're looking at the invisible face.
Remember to call glFlush to make sure that all commands are executed. Drawing to the back buffer and then failing to swap the buffers (glutSwapBuffers with GLUT) is also a common mistake.
Occasionally you can run into issues with buffer formats - although if you copy from sample code that works on your system this is less likely to be a problem.
Graphics coding tends to be quite straightforward to debug once you have the basic environment correct, because the output is visual, but setting up the rendering environment on a new system can always be a bit tricky until you have that first cube or sphere rendered. I would recommend obtaining a sample or template and modifying it rather than trying to set up the rendering window from scratch. Using GLUT to check out first drafts of OpenGL calls is a good technique too.
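To make those pointers concrete, here is a sketch of the setup calls involved (GLUT/GLU; the eye position and clip distances are arbitrary values for illustration):

    void reshape(int w, int h) {
        glViewport(0, 0, w, h);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        // Sensible near/far planes: too small or too large and nothing shows up.
        gluPerspective(45.0, (double)w / (h ? h : 1), 0.1, 100.0);
        glMatrixMode(GL_MODELVIEW);
    }

    void setupCamera() {
        glLoadIdentity();
        gluLookAt(0.0, 0.0, 5.0,    // eye position
                  0.0, 0.0, 0.0,    // look at the origin, where the object is drawn
                  0.0, 1.0, 0.0);   // up vector
        glDisable(GL_CULL_FACE);    // rule out inverted normals while debugging
    }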