Newest GLSL Spec with Least Changes? - glsl

What is the newest OpenGL GLSL specification that changes the language as little as possible, so that learning it won't be redundant when I move to a newer version that is also available now? In other words, I want my shaders to work on as much hardware as possible without learning a language that is already completely deprecated.

It depends on how you define "redundant".
If you're purely talking about the core/compatibility feature removal, that only ever happened once, in the transition from OpenGL 3.0 to 3.1 (in GLSL version terms, 1.30 to 1.40).
Every shading language version from 1.40 onward (up to the context's version) will be supported by any core profile implementation, and every version from 1.10 onward by any compatibility profile implementation.
If by "redundant", you mean that you don't want to have to learn new grammar to access language changes that don't affect new hardware (separate programs, explicit attribute and uniform specifications, etc, all of which have zero hardware dependencies), tough. Pick your version based on whatever minimum hardware you want to support and stick with it.

Related

Confused about OpenGL version

I updated my graphics card driver to support OpenGL 4, expecting that deprecated functions like glBegin would no longer work. However, when I run a simple triangle program, glBegin still works as before. Is glBegin still supported by OpenGL 4, or did I miss some step in upgrading to OpenGL 4?
Simply using a driver that supports OpenGL 4.x does not mean that you will lose the functionality of earlier versions. Beginning with OpenGL 3.2, the concept of Core and Compatibility profiles was introduced, and this is where the separation between modern and deprecated actually comes into play.
In a Core profile, the things you mentioned such as glBegin are invalid. However, in a Compatibility profile, you can continue to mix-and-match deprecated parts of the API with new parts. The vast majority of new OpenGL features are not guaranteed to work in conjunction with the deprecated parts of the API, in large part because most new features are related to GLSL and the programmable pipeline in some way.
Now things get a little bit more complicated when you discuss a platform like Mac OS X. Beginning with OS X 10.7, Apple began supporting OpenGL 3.2. However, they designed their implementation in such a way that the ONLY way to access OpenGL 3.2 functionality is to request a Core profile. They continue to support a legacy OpenGL 2.1 implementation so that old software does not have to be re-written, but in order to take advantage of any OpenGL 3.2+ features on OS X you have to forfeit all deprecated functionality.
In fact, platforms are generally designed so that you actually have to do extra work during context creation in order to get a Core profile. Unless you specifically request Core, you will get Compatibility (or in the case of OS X, an implementation of OpenGL 2.1). It is a way of making the whole deprecation model as painless as possible for existing software.
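To make that "extra work" concrete, here is a minimal sketch of requesting a Core profile at context creation. It uses GLFW purely as an example; the answer above does not mention any particular windowing library:

    #include <GLFW/glfw3.h>

    int main() {
        if (!glfwInit()) return -1;

        // Without these hints most platforms hand back a Compatibility
        // (or, on OS X, a legacy 2.1) context.
        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
        glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
        glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GLFW_TRUE);  // needed on OS X for 3.2+

        GLFWwindow* window = glfwCreateWindow(640, 480, "Core profile", nullptr, nullptr);
        if (!window) {              // e.g. the driver cannot provide a 3.2 Core context
            glfwTerminate();
            return -1;
        }
        glfwMakeContextCurrent(window);
        // In this context, calling glBegin would generate GL_INVALID_OPERATION.
        glfwTerminate();
        return 0;
    }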
"deprecated" doesn't necessarily means that "it will not work", it means "you should not use it because the standard say so", the vendor is free to implement what it wants to sell with the hardware; and many brands still offer deprecated OpenGL contexts and functions in their own libraries.

What Are The Changes To OpenGL From 1.x, 2.x, 3.x And 4.x?

What has changed that makes OpenGL different? I have heard of people not liking OpenGL since OpenGL 3.x, but what happened? I want to learn OpenGL, but I don't know which version. I want great graphics with the newer versions, but what's so bad about them?
Generally, every major version of OpenGL is roughly equivalent to a hardware generation. Which means that, generally, if you have a card that can run OpenGL 3.0, it can also run OpenGL 3.3 (given a sufficiently new driver).
OpenGL 2.x corresponds to the DX9-capable generation of hardware, OpenGL 3.x to the DX10 generation, and OpenGL 4.x to the DX11 generation. The overlap is not 100% exact, but that is the general picture.
OpenGL 1.x revolves around immediate mode, which is conceptually very easy to use, and a strictly fixed function pipeline. The entry barrier is very low, because there is hardly anything you have to learn, and hardly anything you can do wrong.
The downside is that you have considerably more library calls, and CPU-GPU parallelism is not optimal in this model. This does not matter so much on old hardware, but becomes more and more important to get the best performance out of newer hardware.
Beginning with OpenGL 1.5, and gradually more so in 2.x, there is a slight paradigm shift away from immediate mode towards retained mode, i.e. using buffer objects, and a somewhat programmable pipeline. Vertex and fragment shaders are available, with varying feature sets and degrees of programmability.
Much of the functionality in these versions was implemented via (often vendor-specific) extensions, sometimes only halfway or in several distinct steps, and more than a few features had non-obvious restrictions or pitfalls for the casual programmer (e.g. register combiners, lack of branching, limits on instruction counts and dependent texture fetches, vertex texture fetch "support" that allowed zero fetches).
With OpenGL 3.0, fixed function was deprecated but still supported as a backwards-compatibility feature. Almost all of "modern OpenGL" is implemented as core functionality as of OpenGL 3.x, with clear requirements and guarantees, and with an (almost) fully programmable pipeline. The programming model is based entirely on using retained mode and shaders. Geometry shaders are available in addition to vertex and fragment shaders.
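To make "retained mode plus shaders" concrete, here is a rough sketch of the modern drawing model (my own illustration; it assumes a 3.x core context is current, the GL function pointers are loaded, and program is an already compiled and linked GLSL program):

    // One-time setup: geometry lives in GPU buffer objects.
    GLuint vao = 0, vbo = 0;
    const float triangle[] = { -0.5f, -0.5f,   0.5f, -0.5f,   0.0f, 0.5f };

    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);

    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(triangle), triangle, GL_STATIC_DRAW);  // upload once

    glEnableVertexAttribArray(0);                                  // attribute 0 = 2D position
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, nullptr);

    // Per frame: no per-vertex calls, just bind state and draw from GPU memory.
    glUseProgram(program);
    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, 3);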
Version 3 has received a lot of negative critique, but in my opinion this is not entirely fair. The birth process was admittedly a PR fiasco, but what came out is not all bad. Compared with previous versions, OpenGL 3.x is bliss.
OpenGL 4.x has an additional tessellation shader stage, which requires hardware features not present in OpenGL 3.x compatible hardware (although I daresay that's rather a marketing reason than a technical one). There is also support for a new texture compression format that older hardware cannot handle.
Lastly, OpenGL 4.x introduces some API improvements that are irrespective of the underlying hardware. Those are also available under OpenGL 3.x as 100% identical core extensions.
All in all, my recommendation for everyone beginning to learn OpenGL is to start with version 3.3 right away (or 3.2 if you use Apple).
OpenGL 3.x compatible hardware is nearly omnipresent nowadays. There is no sane reason to assume anything older, and you save yourself a lot of pain. From an economic point of view, it does not make sense to support anything older. Entry-level GL4 cards currently cost around $30, so someone who cannot afford a GL3 card will not be able to pay for your software either (and maintaining two code paths is twice as much work, besides).
Also, you will eventually have no other choice but to use modern OpenGL, so if you start with 1.x/2.x you will have to unlearn and learn anew later.
On the other hand, diving right into version 4.x is possible, but I advise against it for the time being. Whatever is not dependent on hardware in the API is also available in 3.x, and tessellation (or compute shaders) is something that is usually not strictly needed right away, and something you can always add later.
For an exact list of changes I suggest you download the specification documents of the latest of each OpenGL major version. At the end of each of these there are several appendices documenting the changes between versions in detail.
The many laptops with Intel integrated graphics designed before approximately a year ago do not support OpenGL 3. That includes some expensive business machines, e.g., the $1600 ThinkPad X201, still for sale on Amazon as of today (4/3/13), although Lenovo has stopped making them.
OpenGL 3.1 removed the "fixed function pipeline". That means that writing vertex and fragment shaders is no longer optional: If you want to display anything, you must write them. This makes it harder for the beginner to write "hello world" in OpenGL.
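For a sense of scale, this is roughly the smallest shader pair such a "hello world" needs (a sketch of mine, written against GLSL 3.30; adjust the #version to your target):

    // Vertex shader: pass the 2D position through unchanged.
    const char* kMinimalVertex = R"(
    #version 330 core
    layout(location = 0) in vec2 aPosition;
    void main() { gl_Position = vec4(aPosition, 0.0, 1.0); }
    )";

    // Fragment shader: paint every covered pixel a constant color.
    const char* kMinimalFragment = R"(
    #version 330 core
    out vec4 fragColor;
    void main() { fragColor = vec4(1.0, 0.5, 0.2, 1.0); }
    )";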
The OpenGL Superbible Rev 5 does a good job of teaching you to use modern OpenGL without falling back on the fixed function pipeline. That's where I would start if I were learning OpenGL from scratch.
Their rev 4 still covers the fixed function pipeline if you want to start with a more "historical" approach.

Supporting both GL versions 3.1 and 4.x

Since I've been studying OpenGL, I've run into a bit of a dilemma: while my desktop has a good video card that supports GL 4.3 (an NVIDIA GTX 550 Ti), my laptop runs on integrated graphics (an Intel HD 3000, from an i5-2410M to be exact), which only supports version 3.1. I have no idea whether this chip is even capable of supporting OpenGL 3.2.
Either way, if it turns out that a driver update is all that's needed for Core i3/i5 processors to support OpenGL 3.2+, I'm still not sure where I should look or what exactly I need to pay attention to in order to support both 3.x and 4.x.
For example, AFAIK GLSL 1.40 doesn't support shaders that use the layout(location = x) in ... syntax. But does GLSL 1.50?
What about preprocessor directives in C/C++ as well as GLSL which I can use to distinguish between what features work with what? Or, is it more recommended (in either GLSL's or C/C++'s case) to just write completely separate header/source/shader files which take care of this?
Since I only plan on supporting 3.x through 4.x, I know this should be simpler than it otherwise could be.
Update
After a recent driver update on my laptop, it turns out that, as Nicol Bolas also stated, there currently is not (and likely won't ever be) any support for 3.2 on Intel 3000 cards. Therefore, to make this question simpler, I'd like to know what shading language features (be it in an answer or an external URL) I can and cannot implement on version 3.1 in comparison to 3.3 and above. Any OpenGL specific C/C++ features/functions which pertain to this would also be appreciated.
My goal is basically to know what I have to do to make the project I'm working on (which currently uses a GL 4.0 context) compatible with a 3.1 context.
If anything is still unclear I'll update accordingly. Thanks.
What I usually do is check the latest header at the OpenGL registry, glcorearb.h. It has a list of every function definition from 1.0 onwards, broken up by #defines labeling the version in which each was introduced.
I'm not 100% sure of where to get GLSL information, but they do keep the specifications for older releases, so you may be able to compare the latest to the older ones for that.
Check the spec and this for preprocessor directives and such.
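As a concrete (illustrative, not authoritative) sketch of both halves of the question: GLSL defines __VERSION__ and one macro per supported extension, so a single shader can adapt; on the C++ side the decision has to happen at run time, from the version the driver actually created. The names below are invented:

    // GLSL side: compiles as 1.40, upgrades itself if the extension is present.
    const char* kVertexShader = R"(
    #version 140
    #extension GL_ARB_explicit_attrib_location : enable

    #ifdef GL_ARB_explicit_attrib_location
    layout(location = 0) in vec3 vPos;   // explicit location available
    #else
    in vec3 vPos;                        // fall back to glBindAttribLocation in C++
    #endif

    uniform mat4 uMvp;
    void main() { gl_Position = uMvp * vec4(vPos, 1.0); }
    )";

    // C++ side: a compile-time #ifdef cannot know what the user's driver offers,
    // so query the context version at run time and branch on that.
    GLint major = 0, minor = 0;
    glGetIntegerv(GL_MAJOR_VERSION, &major);   // both queries exist since GL 3.0
    glGetIntegerv(GL_MINOR_VERSION, &minor);
    const bool hasGl33 = (major > 3) || (major == 3 && minor >= 3);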
Does this help?

ARB_shader_image_load_store - what version of OpenGL and what hardware is required?

I'm a bit confused about new OpenGL extensions, what hardware they require and what OpenGL version they require.
In particular it's about the ARB_shader_image_load_store now.
http://www.opengl.org/registry/specs/ARB/shader_image_load_store.txt
As I understand it, this is a feature of OpenGL 4.2, but in the extension's dependencies section it says:
This extension is written against the OpenGL 3.2 specification
(Compatibility Profile).
This extension is written against version 1.50 (revision 09) of the OpenGL
Shading Language Specification.
OpenGL 3.0 and GLSL 1.30 are required.
and further down stuff like
This extension interacts trivially with OpenGL 4.0 and ARB_sample_shading.
What do these things mean? What hardware and what OpenGL version are necessary to use such extensions?
What do these things mean?
Well, let's take them one by one.
Before we begin, some basic information. OpenGL specifications, whether core or extension, do not care about what hardware something runs on. They're not interested in that. They don't define hardware. You can't look at an extension spec and know a priori what hardware it will function on. If you want to find that information out, you're looking in the wrong place.
Furthermore, you have to understand something about extension specifications. An OpenGL extension is like a diff; you can't read it in isolation. An OpenGL extension is a document that modifies the OpenGL specification.
This extension is written against the OpenGL 3.2 specification
(Compatibility Profile).
This extension is written against version 1.50 (revision 09) of the OpenGL
Shading Language Specification.
A diff file is utterly useless unless you know exactly what file to patch it into, yes? That's the same thing with OpenGL. The extension specification will make references to section and paragraph numbers in the OpenGL specification. But... there are many versions of the OpenGL specification. Which one is it talking about?
Therefore, every extension must state which physical document it is referring to. So when this extension says, "Add a new subsection after Section 2.14.5, Samplers, p. 106", it means page 106, section 2.14.5 of the OpenGL 3.2 specification, compatibility profile.
Same goes for the GLSL language specification.
OpenGL 3.0 and GLSL 1.30 are required.
Now, just because an extension is written against a particular version does not mean that this is the minimum version where support for the extension is possible. An implementation could theoretically support it in an earlier version.
This statement says what the minimum version that can possibly support it is.
This is not a matter of hardware; it is a matter of language. The reason 3.0 is the minimum is because this extension refers to concepts that are simply not available in 2.1. Such as integer image formats and so forth. We'll talk a bit more about this in the next part.
This extension interacts with X.
The "interacts with" statement speaks to optional parts of the specification. What it means is that if "X" and this extension are both supported, then certain paragraphs in this specification also exist.
For example, ARB_shader_image_load_store states, "This extension interacts with ARB_separate_shader_objects". If you look towards the bottom, you will find a section titled "Dependencies on ARB_separate_shader_objects". That lists the specific language that changes when ARB_separate_shader_objects is available.
The "interacts trivially with X" statement simply means that the interaction is generally a "remove references to X" statement. For example, the section on ARB_tessellation_shader/4.0 dependencies state, "If OpenGL 4.0 and ARB_tessellation_shader are not supported, references to tessellation control and evaluation shaders should be removed."
The "trivially" language is just the extension's way of saying, "if X isn't supported, then obviously any references to the stuff X implements should be ignored."
The interaction with ARB_separate_shader_objects isn't trivial because it involves redefining how the early depth test works.
The "interacts with" is an alternative to the "are required" wording. The ARB could have simply written it against 4.1 and firmly stated that 4.1 is required. Then there wouldn't have been nearly as many "interacts with" clauses, since none of those things are optional.
However, the ARB wanted to allow for the possibility of hardware that could support GL 3.0 concepts but not others. For example, in the mobile space, shader_image_load_store support could come before tessellation_shaders. That's why this extension has a lot of "interacts with" clauses and a fairly low "required" GL version. Despite the fact that on desktops, you will not find any implementation of ARB_shader_image_load_store paired with a version number less than 4.0.
What hardware and what OpenGL version are necessary to use such extensions?
None of these documents will tell you that. ARB_shader_image_load_store could be available on any implementation version 3.0 or greater.
The easiest and simplest way to find out what hardware supports what extensions is to use the OpenGL Viewer. It has a pretty up-to-date database of this information.
Alternatively, you can use some common sense. ARB_separate_shader_objects allows you to mix and match programs on the fly. This is something D3D has been doing since Direct3D 8. Obviously hardware could do it since shaders came into being; OpenGL simply didn't let you. Until now.
Obviously ARB_separate_shader_objects is not hardware-based.
Similarly, ARB_shading_language_420pack contains many features that D3D has had since forever. Again, there's clearly nothing there that requires specialized hardware support.
ARB_tessellation_shader is obviously something that does require specialized hardware support. It introduces two new shader stages. ARB_shader_image_load_store is the same way: it introduces a fundamental new hardware ability. Now, it is certainly possible that earlier hardware could have done it. But that seems unlikely.
This isn't always the case for every extension. But it is mostly true.
The other thing you should know about is OpenGL version numbers. Since 3.0, the ARB has been good about keeping to a strict version numbering scheme.
Major versions represent fundamental hardware changes. 3.x to 4.x is directly equivalent to D3D10 to D3D11. Minor versions are either making the API nicer (see ARB_texture_storage, something we were long overdue for) or exposing previously unexposed hardware features for the same hardware level (ARB_shader_image_load_store could have been implemented on any 4.0 implementation, but the ARB just took until 4.2 to write the extension).
So if you have hardware that can run 3.0, it can also run 3.3; if it doesn't have drivers for it, then your driver maker isn't doing their job. Same goes for 4.0 to 4.2.
These statements tell you how to read an extension specification correctly. To read a specification you must know how the extension interacts with other OpenGL features. When it says the extension "interacts trivially with OpenGL 4.0", it means you must think of the core OpenGL 4.0 feature ARB_sample_shading and how this extension changes its behaviour.
Because it is in the OpenGL 4.2 core specification, ARB_shader_image_load_store should be supported on all Shader Model 5 / Direct3D 11 hardware (Radeon HD 5xxx and up, GeForce 400 series and up).
So you can check whether your OpenGL version is >= 4.2, or check for the presence of the ARB_shader_image_load_store extension.
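A minimal run-time check along those lines might look like this (a sketch of mine, assuming a 3.x+ context is current and the entry points are loaded):

    #include <cstring>

    bool HasImageLoadStore() {
        GLint major = 0, minor = 0;
        glGetIntegerv(GL_MAJOR_VERSION, &major);
        glGetIntegerv(GL_MINOR_VERSION, &minor);
        if (major > 4 || (major == 4 && minor >= 2))
            return true;                               // core since OpenGL 4.2

        GLint count = 0;
        glGetIntegerv(GL_NUM_EXTENSIONS, &count);      // core-profile way to enumerate extensions
        for (GLint i = 0; i < count; ++i) {
            const char* ext = reinterpret_cast<const char*>(
                glGetStringi(GL_EXTENSIONS, static_cast<GLuint>(i)));
            if (ext && std::strcmp(ext, "GL_ARB_shader_image_load_store") == 0)
                return true;                           // exposed as an extension on 4.0/4.1 drivers
        }
        return false;
    }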

OpenGL: What's the deal with deprecation?

OpenGL 3.0 and 3.1 have deprecated quite a few features I find essential. In particular, the use of the fixed-function pipeline instead of shaders.
Can anyone explain what's really the deal with that?
Why did they feel the need to deprecate such a useful feature, one that obviously everybody uses and that no sane hardware company is going to remove support for?
As you said, no hardware company will remove support for fixed-function shaders, because there are so many existing applications that use them. What they don't want to do, though, is figure out how to specify the interactions between FF shaders and every future extension they add. Those interactions are very complicated (partly because FF shaders are so complicated), which leads to bugs and inconsistent implementations between vendors -- both of which are bad for developers and end users.
So they're drawing a line: if you want to use FF shaders, you don't get any of the new functionality. If you want new functionality, you can't use FF shaders. This is very similar to what Microsoft did in D3D10: it added a whole bunch of new functionality, but at the same time completely removed fixed-function shaders. The belief is that the set of developers who need the new non-shader functionality but who don't also need programmable shaders is very small.
It should be clarified that a feature that is marked "deprecated" is not actually removed. For example, an OpenGL 3.0 context has all of the features - nothing is gone. Further, some vendors will ship drivers that can create 3.1 and 3.2 contexts using a compatibility profile which will also enable the deprecated features. So, look closely at what vendor hardware you are going to support and ask about the ARB compatibility mode for old features. (There is also the "core" profile as of 3.2, which allows vendors to create a more lean and mean driver if they wish to make such a thing)
Note that no current card really has a fixed-function hardware section any more; they only run shaders. When you ask for FF behavior, the GL runtime is authoring shaders on your behalf.
Why did they feel the need to deprecate such a useful feature, one that obviously everybody uses and that no sane hardware company is going to remove support for?
I suppose then Apple must be insane, because Mac OS X 10.7 supports only 3.2 core. No compatibility profile support, no ARB_compatibility extension, nothing. You can either create a 2.1 context or a 3.2 core context.
However, if you want reasons:
For the sake of completeness: what Jesse Hall said. The ARB no longer has to consider the interaction between fixed function and new features. Integer math, array textures, and various other features are defined to not be usable with the fixed function pipeline. OpenGL has really improved over the last 3 years since GL 3.0 came out; the pace of the ARB's changes is quite substantial. Would that have been possible if they had to find a way to make all of those features interact with fixed function? And if they didn't have fixed function interactions, would you not then be complaining how you can't access new features from your old code? Which leads nicely into:
It serves as a strong indication of what one ought to be using. Even if the compatibility context is always available, you can look at core OpenGL to see how one ought to be approaching problem solving.
It makes the eventual desktop GL and GL ES unification much more reasonable. ES 2.0 threw out all of the old stuff and just adopted what you might think of as core GL 2.1. The ultimate goal will be to only have one OpenGL. To do that, you have to be able to rid the desktop GL of all of the cruft.
Fixed-function shading is quite easily replaced with standard GLSL shaders (a sketch follows this answer), so it's difficult to see why, logically, it shouldn't be deprecated.
I'm less certain than you that it won't be dropped from much hardware in the foreseeable future, since OpenGL ES 2.0 doesn't support the FF pipeline (and so isn't backward compatible with OpenGL ES 1.x). It seems to me that much of the momentum behind OpenGL these days comes from the widespread adoption of OpenGL ES on mobile platforms, and with FF functionality gone from there, there will be considerable pressure to move away from its use.
Indeed, I'd expect the leaner OpenGL ES implementation to replace standard OpenGL quite widely over the next few years, and FF functionality may disappear more because most hardware will implement OpenGL ES than because it's removed from hardware implementing the full OpenGL.
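To back up the point in the first paragraph above that fixed-function shading is easily replaced, here is a sketch (mine, with invented names) of the kind of GLSL pair that stands in for the basic fixed-function path; the matrix uniform takes over the role of the old matrix stack, and the color attribute the role of glColor:

    const char* kFfReplacementVertex = R"(
    #version 330 core
    layout(location = 0) in vec3 aPosition;
    layout(location = 1) in vec4 aColor;
    uniform mat4 uModelViewProjection;   // replaces the glMatrixMode/glLoadMatrix state
    out vec4 vColor;
    void main() {
        vColor = aColor;                 // replaces per-vertex glColor state
        gl_Position = uModelViewProjection * vec4(aPosition, 1.0);
    }
    )";

    const char* kFfReplacementFragment = R"(
    #version 330 core
    in vec4 vColor;
    out vec4 fragColor;
    void main() { fragColor = vColor; }  // plain interpolated per-vertex color
    )";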
OpenGL allows for both a 'core' profile and a 'compatibility' profile, so on most systems you won't lose access to deprecated or removed functions.
But if you want to ensure compatibility, it is best to stick to the core stuff. You are not guaranteed a compatibility profile (even if most hardware has one, and at present you are more likely to encounter an out-of-date OpenGL than a core-only one). Also, OpenGL ES is now a subset of OpenGL: it is possible to write an OpenGL ES 2.x/3.x program and have it run on OpenGL 4.3 with almost no changes.
Game consoles like the PlayStation and Nintendo ones shipped with their own graphics libraries rather than using OpenGL.
They were based on OpenGL but were stripped down in a similar way to ES (I don't think ES 2.0 was out then). Those systems need their own graphics drivers and libraries to be written, and asking a hardware vendor to write what is basically a whole load of legacy wrapper libraries is a bit much (all the fixed-function stuff would just end up being implemented in shaders at some stage, and it's likely that glBegin/glEnd would just be turned into a VBO automatically anyway).
I think it has also been important to ensure that developers are made aware of the current way they should be programming. For decades people have been taught the 'wrong' way to do things by default, with vertex buffer objects taught as an extra.