OpenGL 3.0 and 3.1 have deprecated quite a few features I find essential. In particular, the use of the fixed-function pipeline as opposed to shaders.
Can anyone explain what's really the deal with that?
Why do they feel the need to deprecate such a useful feature, one that obviously everybody uses and that no sane hardware company is going to remove support for?
As you said, no hardware company will remove support for fixed-function shaders, because there are so many existing applications that use them. What they don't want to do, though, is figure out how to specify the interactions between FF shaders and every future extension they add. Those interactions are very complicated (partly because FF shaders are so complicated), which leads to bugs and inconsistent implementations between vendors -- both of which are bad for developers and end users.
So they're drawing a line: if you want to use FF shaders, you don't get any of the new functionality. If you want new functionality, you can't use FF shaders. This is very similar to what Microsoft did in D3D10: it added a whole bunch of new functionality, but at the same time completely removed fixed-function shaders. The belief is that the set of developers who need the new non-shader functionality but who don't also need programmable shaders is very small.
It should be clarified that a feature that is marked "deprecated" is not actually removed. For example, an OpenGL 3.0 context has all of the features - nothing is gone. Further, some vendors will ship drivers that can create 3.1 and 3.2 contexts using a compatibility profile, which will also enable the deprecated features. So, look closely at which vendor hardware you are going to support and ask about the ARB compatibility mode for old features. (There is also the "core" profile as of 3.2, which allows vendors to create a leaner, meaner driver if they wish to make such a thing.)
Note that current cards don't really have a fixed-function hardware section any more - they only run shaders. When you ask for FF behavior, the GL runtime is authoring shaders on your behalf.
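To make that concrete, the shader a driver authors for the fixed-function path looks roughly like the following. This is only an illustrative sketch, written with the legacy GLSL 1.20 built-ins (gl_Vertex, gl_ModelViewProjectionMatrix and friends), not actual driver output:

```glsl
// -- vertex shader: fixed-function transform, per-vertex color, one texture unit
#version 120
void main()
{
    gl_Position    = gl_ModelViewProjectionMatrix * gl_Vertex; // same as ftransform()
    gl_FrontColor  = gl_Color;
    gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
}

// -- fragment shader: modulate the texture by the interpolated color
#version 120
uniform sampler2D tex0;
void main()
{
    gl_FragColor = texture2D(tex0, gl_TexCoord[0].st) * gl_Color;
}
```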
"Why do they feel the need to deprecate such a useful feature, one that obviously everybody uses and that no sane hardware company is going to remove support for?"
I suppose then Apple must be insane, because Mac OS X 10.7 supports only 3.2 core. No compatibility profile support, no ARB_compatibility extension, nothing. You can either create a 2.1 context or a 3.2 core context.
However, if you want reasons:
For the sake of completeness: what Jesse Hall said. The ARB no longer has to consider the interaction between fixed function and new features. Integer math, array textures, and various other features are defined to not be usable with the fixed function pipeline. OpenGL has really improved over the last 3 years since GL 3.0 came out; the pace of the ARB's changes is quite substantial. Would that have been possible if they had to find a way to make all of those features interact with fixed function? And if they didn't have fixed function interactions, would you not then be complaining how you can't access new features from your old code? Which leads nicely into:
It serves as a strong indication of what one ought to be using. Even if the compatibility context is always available, you can look at core OpenGL to see how one ought to be approaching problem solving.
It makes the eventual desktop GL and GL ES unification much more reasonable. ES 2.0 threw out all of the old stuff and just adopted what you might think of as core GL 2.1. The ultimate goal will be to only have one OpenGL. To do that, you have to be able to rid the desktop GL of all of the cruft.
Fixed function shaders are quite easily replaced with standard GLSL shaders so it's difficult to see why logically they shouldn't be deprecated.
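For example, the everyday fixed-function case - transform by the modelview-projection matrix and interpolate a per-vertex colour - reduces to a shader pair along these lines (a minimal sketch; the uMvp/aPos/aColor names are illustrative, not anything mandated by GL):

```glsl
// -- vertex shader
#version 330 core
uniform mat4 uMvp;                    // replaces the modelview/projection stacks
layout(location = 0) in vec3 aPos;    // replaces glVertex*
layout(location = 1) in vec3 aColor;  // replaces glColor*
out vec3 vColor;
void main()
{
    gl_Position = uMvp * vec4(aPos, 1.0);
    vColor = aColor;
}

// -- fragment shader
#version 330 core
in vec3 vColor;
out vec4 fragColor;
void main()
{
    fragColor = vec4(vColor, 1.0);
}
```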
I'm less certain than you that they won't be dropped from much hardware in the foreseeable future, as OpenGL ES 2.0 doesn't support the FF pipeline (and so isn't backward compatible with OpenGL ES 1.x). It seems to me that much of the momentum behind OpenGL these days is coming from the widespread adoption of OpenGL ES on mobile platforms, and with FF functionality gone from there, there will be considerable pressure to move away from its use.
Indeed, I'd expect the leaner OpenGL ES implementation to replace standard OpenGL quite widely over the next few years, and FF functionality may disappear more because most hardware will implement OpenGL ES rather than because it's removed from hardware implementing the full OpenGL.
OpenGL allows for both a 'core' profile and a 'compatibility' profile, so on most systems you won't lose any kind of access to deprecated or removed functions.
But if you want to ensure compatibility, it is best to stick to the core stuff. You won't be guaranteed a compatibility profile (even if most hardware has one; as things stand you're more likely to encounter an out-of-date OpenGL than a core-only one). Also, OpenGL ES is now a subset of OpenGL: it is possible to write an OpenGL ES 2.x/3.x program and have it run under OpenGL 4.3 with almost no changes.
Game consoles like the PlayStations and the Nintendo ones shipped with their own graphics libraries rather than using OpenGL.
These were based on OpenGL but were stripped down in a similar way to ES (I don't think ES 2.0 was out then). The makers of those systems need to write their own graphics drivers and libraries, and asking a hardware vendor to write what is basically a whole load of legacy wrapping libraries is a bit much (all the fixed-function stuff would just end up being implemented in shaders at some stage, and it's likely that glBegin/glEnd would just be getting turned into a VBO automatically anyway).
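To illustrate that last point, here is the kind of translation involved - an immediate-mode triangle versus the same data stated explicitly in a buffer object. This is a rough sketch; in a core profile you would additionally need a VAO and a shader program bound:

```c
/* Immediate mode: the driver has to batch these calls into a buffer for you. */
glBegin(GL_TRIANGLES);
    glVertex2f(-0.5f, -0.5f);
    glVertex2f( 0.5f, -0.5f);
    glVertex2f( 0.0f,  0.5f);
glEnd();

/* Roughly the same triangle stated explicitly with a VBO (available since GL 1.5). */
static const GLfloat verts[] = { -0.5f, -0.5f,   0.5f, -0.5f,   0.0f, 0.5f };
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);                                  /* generic attribute 0 */
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (void *)0);
glDrawArrays(GL_TRIANGLES, 0, 3);
```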
I think it has also been important to ensure that developers are made aware of the current way they should be programming. For decades people have been taught the 'wrong' way to do things by default and vertex buffer objects have been taught as an extra.
As I understand it, OpenGL updates include extensions that were defined by vendors and make use of hardware features. I have a GeForce 520M (released in 2010), but it supports OpenGL 4.4 (released in 2014). Obviously it cannot support some hardware features that OpenGL 4.4 needs, and yet it supports OpenGL 4.4. How is support for new OpenGL versions provided on old video cards?
Why do you think that it does not support the HW features? It does. OpenGL is a rendering API specification, not a HW one. The fact that new features were added to GL does not mean that new HW is required to implement them.
On nvidia, all GPUs since the Fermi architecture support OpenGL 4.x as far as it has been specified. That does not guarantee that they will support everything a future GL 4.x version might introduce.
Currently, the GL major versions can be tied to some major HW generation. GL2.x is the really old stuff. GL 3.x is supported by nvidia since GeForce 8xxx (released in 2006), and fermi/kepler/maxwell support 4.x.
According to you, progress in video cards has stopped since the Fermi architecture. Why would vendors release new video cards if they can just update OpenGL to add new functions? I think that is untrue. Vendors try to find ways to accelerate video cards at the hardware level and add functions that use those capabilities. Or do I not understand it clearly? And so what does it mean that my video card supports OpenGL 4.x?
No, you have it backwards.
You could say that the progress in graphics APIs with respect to hardware functionality has stopped since Fermi. Every new generation of GPUs generally adds new HW features, but the features exposed by every new version of OpenGL do not necessarily require new HW. In fact, core GL often lags a generation or more behind HW capabilities in required functionality.
OpenGL still lacks support for some features introduced in Direct3D 11 and likewise Direct3D 11.x lacks support for some OpenGL 4.x features and neither API fully exposes the underlying hardware. This is because they are designed around supporting the most hardware possible rather than all of the features of one specific piece of hardware. AMD's solution to this "problem" was to introduce a whole new API (Mantle) that more closely follows the feature set of their Graphics Core Next-based architectures; more of a console approach to API design.
There may be optional ARB extensions introduced with the yearly release of new GL version specifications, but until they are promoted to core, GL does not require support for them.
Until things are standardized across all of the vendors who are part of the ARB, some features of GPU hardware are unusable in core GL. This is why GL has extensions in addition to core versions. A 720m GPU will support many new extensions that your 520m GPU does not, but at the same time they can both implement all of the required functionality from GL 4.4. There is no guarantee, however, that GL 4.5 will not introduce some new required feature that your 520m GPU is incapable of supporting or that NV decides is too much trouble to support.
Vendors sometimes do not bother writing support for features on older GPUs even though they can technically support them. It can become too much work to write and especially to maintain multiple implementations of a feature across several different versions of a product line. You see this sort of thing in all industries, not just GPUs. Sometimes open source solutions eventually fill-in the gap where the original vendor decided it was not worth the effort, but this may take many years.
I updated my graphics card driver to support OpenGL 4, expecting deprecated functions like glBegin to stop working. However, when I run a simple triangle program, glBegin still works like before. Is glBegin still supported by OpenGL 4, or did I miss some step in upgrading to OpenGL 4?
Simply using a driver that supports OpenGL 4.x does not mean that you will lose the functionality of earlier versions. Beginning with OpenGL 3.2, the concept of Core and Compatibility profiles was introduced, and this is where the separation between modern and deprecated actually comes into play.
In a Core profile, the things you mentioned such as glBegin are invalid. However, in a Compatibility profile, you can continue to mix-and-match deprecated parts of the API with new parts. The vast majority of new OpenGL features are not guaranteed to work in conjunction with the deprecated parts of the API, in large part because most new features are related to GLSL and the programmable pipeline in some way.
Now things get a little bit more complicated when you discuss a platform like Mac OS X. Beginning with OS X 10.7, Apple began supporting OpenGL 3.2. However, they designed their implementation in such a way that the ONLY way to access OpenGL 3.2 functionality was to get a Core profile. They continue to support a legacy OpenGL 2.1 implementation so that old software does not have to be re-written, but in order to take advantage of any OpenGL 3.2+ features on OS X you have to forfeit all deprecated functionality.
In fact, platforms are generally designed so that you actually have to do extra work during context creation in order to get a Core profile. Unless you specifically request Core, you will get Compatibility (or in the case of OS X, an implementation of OpenGL 2.1). It is a way of making the whole deprecation model as painless as possible for existing software.
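As a sketch of that extra work, here is how a Core profile is typically requested at context creation using GLFW 3 (GLFW is just one example of a windowing library; the WGL/GLX/CGL attribute equivalents follow the same pattern):

```c
#include <GLFW/glfw3.h>

int main(void)
{
    if (!glfwInit())
        return -1;

    /* Ask explicitly for a 3.2 Core profile; without these hints you get a
     * Compatibility context (or, on OS X, the legacy 2.1 implementation). */
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GLFW_TRUE);  /* required on OS X */

    GLFWwindow *window = glfwCreateWindow(640, 480, "core profile", NULL, NULL);
    if (!window) {                  /* e.g. the driver cannot provide 3.2 core */
        glfwTerminate();
        return -1;
    }
    glfwMakeContextCurrent(window);

    /* From here on, deprecated calls such as glBegin generate errors. */

    glfwTerminate();
    return 0;
}
```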
"deprecated" doesn't necessarily means that "it will not work", it means "you should not use it because the standard say so", the vendor is free to implement what it wants to sell with the hardware; and many brands still offer deprecated OpenGL contexts and functions in their own libraries.
While AMD follows the OpenGL specification very strictly, nVidia often works even when the specification is not followed. One example is that nVidia supports element indices (used in glDrawElements) in CPU memory, whereas AMD only supports element indices from an element array buffer.
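For reference, the spec-strict (core profile) path looks roughly like this - a sketch that assumes a VAO and the vertex buffers are already set up:

```c
/* Indices live in a GL_ELEMENT_ARRAY_BUFFER; the last argument of
 * glDrawElements is then a byte offset into that buffer, not a pointer. */
static const GLushort indices[] = { 0, 1, 2 };
GLuint ibo;
glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, (void *)0);

/* The lenient variant -- glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, indices)
 * with no element array buffer bound -- happens to work on nVidia but is
 * invalid in a core profile. */
```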
My question is: is there a way to enforce strict OpenGL behaviour using an nVidia driver? Currently I'm interested in a solution for a Windows/OpenGL 3.2/FreeGlut/GLEW setup.
Edit: If it is not possible to enforce strict behaviour in the driver itself - is there some OpenGL proxy that guarantees strict behaviour (such as GLIntercept)?
No vendor enforces the specification strictly. Be it AMD, nVidia, Intel, PowerVR, ... they all have their idiosyncrasies and you have to learn to live with them, sadly. That is one of the annoying things about having each vendor implement their own GLSL compiler, as opposed to Microsoft implementing the one and only HLSL compiler in D3D.
The ANGLE project tries to mitigate this to a certain extent by providing a single shader validator shared across many of the major web browsers, but it is an uphill battle and this only applies to WebGL for the most part. You will always have implementation differences when every vendor implements the entire API themselves.
Now that the Khronos Group has seriously taken on the task of establishing a set of conformance tests for desktop OpenGL, like they have for WebGL / OpenGL ES, things might start to get a little bit better. But forcing a driver to operate in a strict conformance mode is not really a standard thing - there may be #pragmas and such that hint the compiler to behave more strictly, but these are all vendor specific.
By the way, I realize this question has nothing to do with GLSL per se, but it was the best example I could give.
Unfortunately, the only way you can be sure that your OpenGL code will work on your target hardware is to test it. In theory, simply writing standard-compliant code should work everywhere, but sadly this isn't always the case.
What has changed that makes OpenGL different? I have heard of people not liking OpenGL since OpenGL 3.x, but what happened? I want to learn OpenGL but I don't know which one. I want great graphics with the newer versions, but what's so bad?
Generally, every major version of OpenGL is roughly equivalent to a hardware generation. Which means that, generally, if your card can run OpenGL 3.0, it can also run OpenGL 3.3 (if you have a sufficiently new driver).
OpenGL 2.x is the DX9-capable generation of hardware, OpenGL 3.x is the DX10, and OpenGL 4.x the DX11 generation of hardware. There is no 100% exact overlap, but this is the general thing.
OpenGL 1.x revolves around immediate mode, which is conceptually very easy to use, and a strictly fixed function pipeline. The entry barrier is very low, because there is hardly anything you have to learn, and hardly anything you can do wrong.
The downside is that you have considerably more library calls, and CPU-GPU parallelism is not optimal in this model. This does not matter so much on old hardware, but becomes more and more important to get the best performance out of newer hardware.
Beginning with OpenGL 1.5, and gradually more and more in 2.x, there is a slight paradigm shift away from immediate mode towards retained mode, i.e. using buffer objects, and a somewhat programmable pipeline. Vertex and fragment shaders are available, with varying feature sets and programmability.
Much of the functionality in these versions was implemented via (often vendor-specific) extensions, sometimes only half-way or in several distinct steps, and quite a few features had non-obvious restrictions or pitfalls for the casual programmer (e.g. register combiners, lack of branching, limits on instructions and dependent texture fetches, vertex texture fetch "support" that allowed zero fetches).
With OpenGL 3.0, fixed function was deprecated but still supported as a backwards-compatibility feature. Almost all of "modern OpenGL" is implemented as core functionality as of OpenGL 3.x, with clear requirements and guarantees, and with an (almost) fully programmable pipeline. The programming model is based entirely on using retained mode and shaders. Geometry shaders are available in addition to vertex and fragment shaders.
Version 3 has received a lot of negative critique, but in my opinion this is not entirely fair. The birth process was admittedly a PR fiasco, but what came out is not all bad. Compared with previous versions, OpenGL 3.x is bliss.
OpenGL 4.x has an additional tessellation shader stage which requires hardware features not present in OpenGL 3.x compatible hardware (although I daresay that's rather a marketing reason, not a technical one). There is also support for a new texture compression format that older hardware cannot handle.
Lastly, OpenGL 4.x introduces some API improvements that are irrespective of the underlying hardware. Those are also available under OpenGL 3.x as 100% identical core extensions.
All in all, my recommendation for everyone beginning to learn OpenGL is to start with version 3.3 right away (or 3.2 if you use Apple).
OpenGL 3.x compatible hardware is nearly omnipresent nowadays. There is no sane reason to assume anything older, and you save yourself a lot of pain. From an economic point of view, it does not make sense to support anything older. Entry-level GL4 cards are currently at around $30. Therefore, someone who cannot afford a GL3 card will not be able to pay for your software either (and it is twice as much work to maintain two code paths).
Also, you will eventually have no other choice but to use modern OpenGL, so if you start with 1.x/2.x you will have to unlearn and learn anew later.
On the other hand, diving right into version 4.x is possible, but I advise against it for the time being. Whatever is not dependent on hardware in the API is also available in 3.x, and tessellation (or compute shaders) is something that is usually not strictly necessary right away, and something you can always add later.
For an exact list of changes I suggest you download the specification documents of the latest of each OpenGL major version. At the end of each of these there are several appendices documenting the changes between versions in detail.
Many laptops with Intel integrated graphics designed before approximately a year ago do not do OpenGL 3. That includes some expensive business machines, e.g. the $1600 ThinkPad X201, still for sale on Amazon as of today (4/3/13) (although Lenovo has stopped making them).
OpenGL 3.1 removed the "fixed function pipeline". That means that writing vertex and fragment shaders is no longer optional: If you want to display anything, you must write them. This makes it harder for the beginner to write "hello world" in OpenGL.
The OpenGL Superbible Rev 5 does a good job of teaching you to use modern OpenGL without falling back on the fixed function pipeline. That's where I would start if I were learning OpenGL from scratch.
Their rev 4 still covers the fixed function pipeline if you want to start with a more "historical" approach.
I have been wanting to learn a 3d graphics language for some time now and I have finally decided to learn OpenGL.
However, I work on a Mac, and officially the highest version of OpenGL for Mac is 2.1, though it can support 3.3 unofficially, according to tests that I have done.
I would like to develop applications that would work on multiple platforms but what version would be the best to learn?
A good compromise between portability and still learning the "modern OpenGL way" is roughly "the OpenGL ES 2.0 subset of OpenGL 2.1" (see the shader sketch after the list below). That gives you portability to:
OSX, as you mention
Windows, obviously
Linux with open source drivers (for higher OpenGL versions and better performance you need the proprietary drivers, which you might prefer anyway, but some people like to avoid those)
Smartphone platforms like iOS and Android.
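A shader pair written to that common subset looks roughly like this - a sketch using attribute/varying declarations and texture2D(), which compile both as GLSL ES 1.00 (OpenGL ES 2.0 / WebGL) and as desktop GLSL 1.10/1.20 (OpenGL 2.1); the uMvp/aPos/aUv names are purely illustrative:

```glsl
// -- vertex shader
uniform mat4 uMvp;
attribute vec3 aPos;
attribute vec2 aUv;
varying vec2 vUv;
void main()
{
    vUv = aUv;
    gl_Position = uMvp * vec4(aPos, 1.0);
}

// -- fragment shader
#ifdef GL_ES
precision mediump float;   // required by ES, not part of desktop GLSL 1.10/1.20
#endif
uniform sampler2D uTex;
varying vec2 vUv;
void main()
{
    gl_FragColor = texture2D(uTex, vUv);
}
```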
OpenGL 1.x is even more portable (e.g. older iOS and Android releases support only OpenGL ES 1.x) but the classical fixed-function programming model is somewhat different than the modern one based on buffer objects and shaders, and use of immediate mode easily leads to performance issues when rendering lots of vertices. So probably not worth it, IMHO.
My recommendation would be to learn no less than version 3.2. If 3.3 is supported (even unofficially), go for that.
OpenGL 3.3 is already rather "last generation" than "bleeding edge". You have to search hard to find a card that does not support OpenGL 3.3, and you get 4.x capable cards in the $30 range.
Under version 2.x, you must go through a lot of pain to ensure that even the most basic functionality that you use every day is available, and you end up writing two or three code paths depending on what extension you must use and on what some limit is.
Under version 3.3, most features that you want to use every day are core (guaranteed standard), and most limits have a guaranteed minimum value that is enough for most things anyway. The features that are not core in 3.3 are few (and you won't die if you don't have them), and you can pretty much just plug them in optionally if they're there, and forget about them if they aren't.
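For example, the "plug them in optionally" pattern is just a runtime probe of the extension list - a sketch, assuming a 3.x context is current and a loader such as GLEW has been initialised; the anisotropic-filtering extension is only an arbitrary example:

```c
#include <string.h>

/* Returns 1 if the current GL 3.x context exposes the named extension. */
static int has_extension(const char *name)
{
    GLint i, count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (i = 0; i < count; ++i)
        if (strcmp((const char *)glGetStringi(GL_EXTENSIONS, i), name) == 0)
            return 1;
    return 0;
}

/* Usage: enable the feature if present, quietly skip it otherwise.
 *   if (has_extension("GL_EXT_texture_filter_anisotropic")) { ... }      */
```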
There is a huge change in paradigms between 2.1 and 3.3 (which you will have to re-learn later if you start with 2.x first!), and there are notable changes in GLSL between 3.1 and 3.2 which make writing shader code that works for both an ordeal, or impossible.
Upwards of version 3.2, everything is smooth. New features are available or they aren't... use them or don't... but you can in principle write one piece of code to run on all versions.
If your goal is maximum interoperability, I would rather take a look at WebGL, or its close relative, OpenGL ES. The concepts of OpenGL ES (at least in the 2.0 version) are quite close to those of OpenGL 4 (buffer-based data transfer, universal use of shaders, etc.).
I think that by learning 2.1 you would pick up some outdated concepts you would soon have to un-learn, like immediate mode, or rather the whole fixed-function pipeline, which was pruned in later versions.
You can safely start learning 3.x too, as you will learn the current concepts and features. Do not worry about the "officially supported" version.