I hear that GL_QUADS is going to be removed in OpenGL versions > 3.0. Why is that? Will my old programs not work in the future then? I have benchmarked, and GL_TRIANGLES and GL_QUADS show no difference in render speed (GL_QUADS might even be faster). So what's the point?
The point is that your GPU renders triangles, not quads. And it is pretty much trivial to construct a rectangle from two triangles, so the API doesn't really need to be burdened with the ability to render quads natively. OpenGL is going through a major trimming process, cutting a lot of functionality that made sense 15 years ago but no longer matches how the GPU works, or how the GPU is ever going to work. The fixed-function pipeline is gone from the latest versions too, I believe, because, once again, it's no longer necessary and it no longer matches how the GPU works (programmable shaders).
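To make that concrete, here is a minimal sketch of the same rectangle expressed as two triangles sharing a diagonal (error handling and the vertex-array/buffer setup are omitted; in a core profile these arrays would of course live in buffer objects):

    #include <GL/gl.h>   // GL 1.1 entry points are enough for this sketch

    // The four corners of the rectangle.
    GLfloat verts[] = {
        0.0f, 0.0f, 0.0f,   // 0: bottom-left
        1.0f, 0.0f, 0.0f,   // 1: bottom-right
        1.0f, 1.0f, 0.0f,   // 2: top-right
        0.0f, 1.0f, 0.0f,   // 3: top-left
    };

    // GL_QUADS would take the indices 0,1,2,3; as triangles we just pick a diagonal.
    GLuint indices[] = {
        0, 1, 2,   // first triangle
        0, 2, 3,   // second triangle
    };

    // ...after the usual vertex-array/buffer setup:
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, indices);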
The point is that the smaller and tighter the OpenGL API can be made, the easier it is for vendors to write robust, high-performance drivers, and the easier it is to learn to use the API correctly and efficiently.
A few years ago, practically anything in OpenGL could be done in 3-5 different ways, which put a lot of burden on the developer to figure out which implementation is the right one if you want optimal performance.
So they're trying to streamline the API.
People have already answered your question quite well. On top of their answers, one of the reasons GL_QUADS was deprecated is the undefined nature of quads.
For example, try to model a 2D square with the points (0,0,0), (1,0,0), (1,1,1), (0,1,0). This is a quad with one corner dragged up; it is impossible to draw a normal flat square this way. Depending on the driver, it will be split into two triangles one way or the other, and we can't control which. Such a shape MUST be modeled as two explicit triangles, because the three points of a triangle always lie on the same plane.
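As a small illustrative sketch, here is that quad split explicitly, so the diagonal is chosen by us rather than by the driver (drawing setup omitted):

    // Two explicit triangles sharing the diagonal (0,0,0)-(1,1,1).
    GLfloat tris[] = {
        0.0f, 0.0f, 0.0f,
        1.0f, 0.0f, 0.0f,
        1.0f, 1.0f, 1.0f,   // the corner that was dragged up

        0.0f, 0.0f, 0.0f,
        1.0f, 1.0f, 1.0f,
        0.0f, 1.0f, 0.0f,
    };
    // Splitting along the other diagonal, (1,0,0)-(0,1,0), gives a visibly
    // different surface - exactly the ambiguity GL_QUADS leaves to the driver.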
It isn't "going" to be anything. As with a lot of other functionality, GL_QUADS was deprecated in version 3.0 and removed in version 3.1. Obviously this is all irrelevant if you create a compatibility context.
Any answer that anyone might give for the reason for deprecating them would be sheer speculation.
I have a scene built in OpenGL. When my light is in the center of the room, the outside of the room is lit. Is there any easy way to make OpenGL stop the lighting at the vertices, or will it require complex calculations? Here are pictures of my rough, quick scene showing the lighting as it is at the time of asking this question.
Essentially, you want the walls of the room to cast a shadow; that's what it means for the exterior of the room not to be lit.
Shadowing in graphics is generally a pretty hard problem. There are a lot of good solutions and a lot of fast solutions, but not both: any one solution is going to be a tradeoff between the two. SIGGRAPH is full of all sorts of papers from Really Smart People trying to solve this problem.
If you want something quick and dirty, shadow mapping is not terribly difficult (at least the simple kind), but it is imprecise. You'll see artifacts along the intersections of your object and the walls, for one. For precision, stencil shadows will work, but you'll have hard-edged shadows.
One solution here would probably be to author the geometry so that objects don't protrude through the walls (or to separate them into inside/outside parts), and then render the interior and exterior with different lights.
You wouldn't have this problem if everything cast shadows on everything else.
If the lights are static, you might want to consider pre-calculating (baking) the lighting (together with shadows). You can do this in many 3D packages, and from a programming perspective this might be the simplest solution.
If you had a general solution for rendering real-time shadows from every light, that would also solve your problem, but that is a challenging task, and it might not be the best choice if you want to maintain good frame rates.
If you want to learn about real-time shadow rendering, I recommend looking at shadow maps - they are generally the solution used in most games today. Note that for a point light you would need to render the shadow map for 6 sides of a cube-map. For practical purposes you should really consider which lights really need to cast shadows, and make some kind of trade-off.
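To give a rough idea of what that involves, here is a hedged sketch of building a depth cube map for a point light, one face per pass. It assumes a GL 3+ context with an extension loader already initialized; `lightPosition`, `lookTowardFace` and `renderSceneDepthOnly` are placeholders, not a real API:

    const int SHADOW_SIZE = 1024;

    // Create the depth cube map (one depth image per face).
    GLuint depthCube, fbo;
    glGenTextures(1, &depthCube);
    glBindTexture(GL_TEXTURE_CUBE_MAP, depthCube);
    for (int face = 0; face < 6; ++face) {
        glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_DEPTH_COMPONENT24,
                     SHADOW_SIZE, SHADOW_SIZE, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
    }
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    // Depth-only framebuffer: no color attachment.
    GLuint fboHandle; // note: reusing 'fbo' declared above
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glDrawBuffer(GL_NONE);
    glReadBuffer(GL_NONE);

    // Render the scene's depth once for each of the six faces.
    glViewport(0, 0, SHADOW_SIZE, SHADOW_SIZE);
    for (int face = 0; face < 6; ++face) {
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                               GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, depthCube, 0);
        glClear(GL_DEPTH_BUFFER_BIT);
        // Placeholders: build a 90-degree-FOV view/projection looking down this
        // face's axis from the light, then draw the scene writing depth only.
        renderSceneDepthOnly(lookTowardFace(lightPosition, face));
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);   // back to the default framebuffer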
I recently read this list and I noticed that almost everything I studied from the OpenGL Red Book is considered deprecated.
I'm talking about pixel transfer operations, pixel drawings, accumulation buffer, Begin/End functions (!?), automatic mipmap generation and current raster position.
Why did they flag these features as deprecated? Will it be okay to still use them? What are the workarounds?
In my opinion it's for the better. This so-called immediate mode is indeed deprecated in OpenGL 3.0, mainly because its performance is not optimal.
In immediate mode you use calls like glBegin and glEnd, so the rendering of primitives depends on a stream of commands from the program; OpenGL can't advance until it gets the appropriate command from the CPU. Instead, you can use buffer objects to store all your vertices and data, and then tell OpenGL to render its primitives from that buffer with commands like glDrawArrays or glDrawElements, or even more specialized commands like glDrawElementsInstanced. While the GPU is busy executing those commands and drawing into the target framebuffer (basically a render target), the program can go off and issue other commands. This way both the CPU and the GPU are busy at the same time, and no time is wasted.
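As a rough, minimal sketch of that flow (assuming a GL 3+ context with an extension loader such as GLEW already initialized, and a shader program already bound):

    // Upload the data once, at load time.
    GLfloat vertices[] = {
        -0.5f, -0.5f, 0.0f,
         0.5f, -0.5f, 0.0f,
         0.0f,  0.5f, 0.0f,
    };

    GLuint vao, vbo;
    glGenVertexArrays(1, &vao);
    glGenBuffers(1, &vbo);

    glBindVertexArray(vao);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
    glEnableVertexAttribArray(0);

    // Then, every frame, a single cheap call kicks off the whole draw.
    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, 3);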
Not the best explanation ever, but my advice: try to learn this new rendering pipeline instead. It's superior to immediate mode by far. I recommend tutorials like:
http://www.arcsynthesis.org/gltut/index.html
http://www.opengl-tutorial.org/
http://ogldev.atspace.co.uk/
Literally try to forget what you know so far; immediate mode has long been deprecated and shouldn't be used anymore. Instead, focus on the new technology ;)
Edit: Excuse me if I wrote 'intermediate' instead of 'immediate' anywhere; I think it's actually called 'immediate', and I tend to mix them up.
Why did they flag these features as deprecated?
First, some terminology: most of these features aren't merely deprecated. In OpenGL 3.0, they are deprecated (meaning "may be removed in later versions"); in 3.1 and above, most of them are removed. The compatibility profile brings the removed features back, and while it is widely implemented on Windows and Linux, Apple's 3.2 implementation only provides the core profile.
As to the reasoning behind the removal, it depends on which feature you're talking about. We can really only speculate as to why the ARB removed any specific feature:
pixel transfer operations
Pixel transfer operations have not been removed. If you're talking about glDrawPixels, that is a pixel transfer operation, but it is only one of them, not all of them.
Speaking of which:
pixel drawings
Because it was a horrible idea to begin with. glDrawPixels is a performance trap; it sounds nice and neat, but it performs terribly, and because it's simple, people will try to use it.
Having something that is easy to do but terrible in performance encourages people to write terrible OpenGL applications.
accumulation buffer
Shaders can do this just fine. Better in fact; they have a lot more options than accumulation buffers cover.
Begin/End functions (!?),
It's another performance trap. Immediate mode rendering is terribly slow.
automatic mipmap generation
Because it was a terrible idea to begin with. Having OpenGL decide when to do a heavyweight operation like generate mipmaps of a texture is not a good idea. The much better idea the ARB had was to just let you say, "OK, OpenGL, generate some mipmaps for this texture right now."
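In current OpenGL that looks roughly like this (a sketch; `width`, `height` and `pixels` are assumed to come from your image loader):

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);      // upload the base level only
    glGenerateMipmap(GL_TEXTURE_2D);                      // "generate some mipmaps right now"
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);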
current raster position.
Another performance trap/bad idea.
Will it be okay to still use them?
That's up to you. NVIDIA has effectively pledged to support the compatibility profile in perpetuity, which means that AMD and Intel will probably have to as well. So that covers Windows and Linux.
On Mac OS X, Apple controls the GL implementations more rigidly, and they seem committed to not supporting the compatibility profile. However, they appear to have little interest in advancing OpenGL, since they stopped at 3.2; even Mountain Lion didn't update the OpenGL version.
What are the workarounds?
Stop using performance traps. Use buffer objects for your vertex data like everyone else. Use shaders. Use glGenerateMipmap.
I am reading through the OpenGL Superbible Fifth Edition and they discuss using stacks via their own class. That's all great but they mention that matrix stacks were deprecated. Why were they deprecated and what do people use instead of them?
The reason(s) are political, not technical, and date back to the early 2000s.
OpenGL 3 was the first ever version willing to break backwards compatibility. The designers wanted to create an API for the expert users, the game programmers and high end visualization coders who knew all about shaders and wrote their own matrix code. The intent was that the OpenGL 3 API should match the actual hardware quite closely. (Even in OpenGL 1/2, the matrix stack was usually implemented on the CPU side, not the GPU.)
From a game engine programmer point of view, this was better. And hey, if you have to develop a new game engine every couple of years anyway, what's the big deal about throwing away the old code?
The result of this design process is the OpenGL 3/4 core profile.
Once the "new generation" OpenGL was announced, all the not-so-expert coders in universities and companies realized they would be screwed. These are the people (like me) who teach 3D graphics or write utility programs for research or design. We don't need any more advanced lighting than plain ambient-diffuse-specular. We often have to mix code from different sources together, and that is only easy if everyone is using exactly the same matrix, lighting, and texturing conventions - like those supplied by OpenGL 2.
Also, I've heard but cannot verify, the big CAD/CAM companies realized that they'd be screwed as well. Throwing away two million lines of code from ten years of development is not an option when you've got paying (and well-paying: compare prices for Quadro vs GeForce, or FireGL vs Radeon) customers.
So both NVIDIA and ATI announced they'd support the old API for as long as they could.
The result of this pressure is the compatibility profiles. And the OpenGL ARB now seems to have realized that while they'd like everyone to switch to the core profile, it just isn't going to happen: read the extension spec for tessellation shaders in OpenGL 4 and you'll see it mentions that GL_PATCHES will work with glBegin.
The matrix stack (and the rest of the matrix functions) was deprecated only in the core profile. In the compatibility profile you can still use it.
From my point of view it was removed because most engines/frameworks have their own math code and their own uniform-based way of sending matrices to shaders.
For simple programs and tutorials, though, it is rather inconvenient to have to find and use something else.
I suggest using one of the following (a minimal glm sketch follows this list):
glm (http://glm.g-truc.net/)
very simple math lib (vsml)
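For instance, here is a minimal glm sketch of building a model-view-projection matrix and handing it to a shader; `program` and the uniform name `uMVP` are just placeholders:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>   // glm::perspective, glm::lookAt, glm::rotate
    #include <glm/gtc/type_ptr.hpp>           // glm::value_ptr

    // Build the matrices yourself instead of using glMatrixMode/glPushMatrix.
    float angle = 0.8f;   // example rotation, in radians
    glm::mat4 projection = glm::perspective(glm::radians(60.0f), 4.0f / 3.0f, 0.1f, 100.0f);
    glm::mat4 view       = glm::lookAt(glm::vec3(0, 0, 5),    // eye
                                       glm::vec3(0, 0, 0),    // target
                                       glm::vec3(0, 1, 0));   // up
    glm::mat4 model      = glm::rotate(glm::mat4(1.0f), angle, glm::vec3(0, 1, 0));
    glm::mat4 mvp        = projection * view * model;

    // Hand the result to your shader ('program' and "uMVP" are placeholders).
    glUniformMatrix4fv(glGetUniformLocation(program, "uMVP"), 1, GL_FALSE,
                       glm::value_ptr(mvp));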
Why were they deprecated
Because nobody actually used it in real-world OpenGL programs. Take a physics simulation, for example: you'd have every object's placement stored in the physics system as a 4×4 matrix anyway, so you'd just use that. The same goes for visible-object determination and animation systems. All of those need to implement the matrix math anyway, so having it in OpenGL is rather redundant; most of the time the already-existing matrices were simply passed to glLoadMatrix.
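In other words, the typical old-style pattern looked something like this (a sketch; `physicsBody.worldMatrix()` and `drawMesh()` are made-up stand-ins for whatever your engine provides):

    // The matrix already exists in the physics/animation system, so the GL
    // matrix stack only ever received a finished matrix.
    const GLfloat* world = physicsBody.worldMatrix();   // column-major 4x4, made-up accessor
    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(world);    // no glTranslate/glRotate/glScale math done by GL at all
    drawMesh();              // placeholder draw call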
and what do people use instead of them?
What they used before: Their animation systems, physics simulators, scene graphs, etc.
Well, the first and main reason, for me, is that with the rise of programmable shaders (mandatory from the third version of OpenGL onward), the built-in shader variables that were automatically fed from the GL_PROJECTION and GL_MODELVIEW stacks have been removed from the shading language, so the user has to define their own matrix uniforms to use in the shader. Since you have to send the matrices manually using the glUniform* functions, you don't really need the fixed matrix stacks anymore.
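For illustration, a minimal vertex shader with a user-defined matrix uniform might look like this (held here as a C++ string; the names `a_position` and `u_mvp` are just conventions, not built-ins):

    // Sketch: GLSL 3.30 vertex shader using a user-supplied uniform instead of
    // the removed gl_ModelViewProjectionMatrix built-in.
    const char* vertexShaderSrc = R"(#version 330 core
    layout(location = 0) in vec3 a_position;
    uniform mat4 u_mvp;   // set from the application with glUniformMatrix4fv
    void main()
    {
        gl_Position = u_mvp * vec4(a_position, 1.0);
    }
    )";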
How can I render a bunch of hand-drawn shapes in OpenGL 1.x? I know about instancing, but how is it possible in old OpenGL? Could I get examples of some sort? This is for a game; I'm expecting a thousand or so shapes, all of which will need to be updated every frame.
Assuming that (at least most of) the shapes remain unchanged from one frame to the next, so most of the update is just moving them around, you could at least consider building a display list for each shape, then rendering the display lists during an update.
The amount of good you'll get from this varies widely depending on the hardware (and possibly driver) in use though. Some hardware supports display lists directly, and gains a lot from it. With other hardware, you'll be hard put to find any difference at all.
The good points are that at worst this won't do any harm, and building/using display lists is pretty quick and easy. So, in the worst case you don't lose much, and in the best case you might gain quite a bit.
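A hedged sketch of that approach in OpenGL 1.x (`drawShapeImmediate`, `Shape` and `shapes` are placeholders for whatever your own code provides):

    // Build once: record one shape's drawing commands into a display list.
    GLuint shapeList = glGenLists(1);
    glNewList(shapeList, GL_COMPILE);
    drawShapeImmediate();               // placeholder: the glBegin/glEnd calls for this shape
    glEndList();

    // Every frame: replay the list once per instance, moved into place.
    for (const Shape& s : shapes) {     // 'shapes' holds the per-instance positions
        glPushMatrix();
        glTranslatef(s.x, s.y, 0.0f);
        glCallList(shapeList);
        glPopMatrix();
    }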
I'm just learning about them, and find it discouraging that they have been deprecated. Should I keep investing into learning them? Would I learn something useful for the current model?
I think, though I may be wrong, that since most high-performance graphics apps (mostly games) pretty much only used vertex buffers and the like (in order to squeeze every drop of performance out of the card), there was pressure to stop worrying about "frivolous" items such as display lists (and even good old glVertex calls). IMHO, this creates a huge barrier for people learning to write OpenGL code, and (for my own purposes) is a big impediment to whipping up some quick, legible, and reasonably well-performing code.
Note that these features were deprecated in 3.0, and actually removed in 3.1 (but still provided compatibility via an ARB extension). In OpenGL 3.2, they moved these features into a 'compatibility' profile that is optional for driver writers to implement.
So what does this mean? NVIDIA, at least, has vowed to continue support for the old-school compatibility mode for the foreseeable future; there is a large wealth of legacy code out there that they need to support. You can find the discussion of their support in a slideshow at:
http://www.slideshare.net/Mark_Kilgard/opengl-32-and-more
starting at about slide #32. I don't know ATI/AMD's stance on this, but I would assume that it would be similar.
So, while display lists are technically removed from the required portion of the OpenGL 3.2 standard, I think that you are safe using them for quite a while. Eventually, you may wish to learn the buffer/shader-centric interface to OpenGL, especially if your end-goal is envelope-pushing game writing, but it really is a lot less intuitive (no glRotate, even!), so I would recommend starting with good old OpenGL 2.x.
-matt
Display lists were removed because with OpenGL 3+ all vertex, texture and lighting data are stored on the graphics card, in what is called retained-mode rendering (the data is retained, allowing you to send a single command to the card to draw a mesh, rather than sending the vertex data to the card every frame). A major bottleneck in computer graphics is the data bandwidth between RAM and GPU RAM. By generating meshes once and retaining that data, we can transform them with homogeneous transform matrices and draw them easily. This effectively reduces the bottleneck, with the drawback of longer loading times.
Immediate mode (pre-3.0), however, uses massive amounts of graphics bandwidth to send vertex data every frame, pre-transformed, with recalculated normals, etc.
The problems with this approach are twofold:
1) Excessive bandwidth use, and too much GPU idle time.
2) Excessive use of CPU time for calculations that could be done in parallel on 100+ cores on the GPU.
The simple solution to this is retained mode.
With retained mode, display lists are no longer necessary. Hence their removal from the core profile.
Immediate mode is still very good for learning the theory of computer graphics (and it's loads of fun, to boot); it just suffers in terms of maximum possible performance.
VBOs and VAOs may be less intuitive at first, but in terms of speed they are far superior.
There are several easy-to-understand OpenGL 3.0 tutorials on the internet. Once you have OpenGL 2.0 down, you should consider moving on to 3.0+, as it allows you to build very fast 3D graphics applications.
While Matthew Hall has a good answer and covers most things, there are a few things I'll add.
If you look at what's been deprecated, you'll see it's a lot of client side and fixed functionality. So it's obvious that they're trying to move people away from client side centered code and have people do everything possible server side on the GPU instead.
When it comes to which context to use, well, that's up to you. Though if performance is a major concern, then 3.x is probably the way to go. I personally definitely want to learn OpenGL 3.x, but I doubt I'll be giving up 1.x/2.x; it's just so much easier to put together a quick app with what's available in a 1.x or 2.x context.
If you want a list of what's been deprecated, download the 3.0 specification and look under "The Deprecation Model"
A note from the future: the latest DirectX, Metal, and Vulkan APIs have command buffers and command queues, which let you record commands on the CPU and then send them to the GPU to be executed there. So perhaps display lists were not such an old-fashioned idea after all. In fact, compiling a display list is orthogonal to the use of shaders and VBOs, and display lists can improve performance further. I wonder if a Vulkan- or Metal-to-OpenGL translator could use display lists for command buffers...
Because VBOs (vertex buffer objects) are much more efficient and can do everything display lists can do. They're not really any more complex, either, just a little different. Unless you're already more familiar with the old style glBegin/glEnd stuff, you're probably best off learning about buffers from the get go.