Strict nVidia OpenGL support (OpenGL 3.2)

While AMD follows the OpenGL specification very strictly, nVidia often works even when the specification is not followed. One example is that nVidia supports element indices (used in glDrawElements) in client (CPU) memory, whereas AMD only supports element indices sourced from an element array buffer.
My question is: Is there a way to enforce strict OpenGL behaviour using a nVidia driver? Currently I'm interested in a solution for a Windows/OpenGL 3.2/FreeGlut/GLEW setup.
Edit: If it is not possible to enforce strict behaviour in the driver itself - is there some OpenGL proxy that guarantees strict behaviour (such as GLIntercept)?
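For illustration, here is a minimal sketch of the strictly conformant way to source indices - from a buffer object rather than client memory - assuming a GL 3.2 context with a vertex array object already bound:

    // Conformant path: indices live in a GL_ELEMENT_ARRAY_BUFFER.
    GLushort indices[] = { 0, 1, 2 };
    GLuint ibo;
    glGenBuffers(1, &ibo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

    // With a buffer bound, the last parameter is a byte offset into it.
    glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, (const void*)0);

    // The lenient path nVidia tolerates (no element array buffer bound) passes
    // the client-memory pointer directly, and fails on AMD:
    // glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, indices);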

No vendor enforces the specification strictly. Be it AMD, nVidia, Intel, PowerVR, ... they all have their idiosyncrasies and you have to learn to live with them, sadly. That is one of the annoying things about having each vendor implement their own GLSL compiler, as opposed to Microsoft implementing the one and only HLSL compiler in D3D.
The ANGLE project tries to mitigate this to a certain extent by providing a single shader validator shared across many of the major web browsers, but it is an uphill battle and this only applies to WebGL for the most part. You will always have implementation differences when every vendor implements the entire API themselves.
Now that the Khronos Group has seriously taken on the task of establishing a set of conformance tests for desktop OpenGL, like the ones it has for WebGL / OpenGL ES, things might start to get a little better. But forcing a driver to operate in a strict conformance mode is not really a standard thing - there may be #pragmas and such that hint the compiler to behave more strictly, but these are all vendor-specific.
By the way, I realize this question has nothing to do with GLSL per se, but it was the best example I could give.

Unfortunately, the only way you can be sure that your OpenGL code will work on your target hardware is to test it. In theory simply writing standard compliant code should work everywhere, but sadly this isn't always the case.

Related

Find out if hardware supports a specific OpenGL feature

How do I find out if a specific OpenGL feature is supported by hardware or not?
In my case I want to know if two-sided lighting is available in hardware.
An approach using OpenInventor would be fine as well.
In general, you don't.
If something is part of core OpenGL, then it should be implemented by the OpenGL implementation. Whether this happens "in hardware" or not is not something that you can detect.
For extension based features, you can obviously check for the presence of the extension. But otherwise, there's nothing you can do except better know the hardware your code is running on.
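For the extension case, a minimal sketch of a runtime check using the GL 3.x indexed query (GLEW users can test the equivalent GLEW_-prefixed boolean flags instead); the extension name below is just an example:

    #include <cstring>

    // Returns true if the current context advertises the named extension.
    bool hasExtension(const char* name)
    {
        GLint count = 0;
        glGetIntegerv(GL_NUM_EXTENSIONS, &count);
        for (GLint i = 0; i < count; ++i)
        {
            const char* ext = (const char*)glGetStringi(GL_EXTENSIONS, (GLuint)i);
            if (ext && std::strcmp(ext, name) == 0)
                return true;
        }
        return false;
    }

    // Usage: if (hasExtension("GL_EXT_texture_filter_anisotropic")) { ... }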

How to ensure backwards-compatibility of my Windows OpenGL application?

I have developed a program which makes use of many of OpenGL's aspects - ranging from rather new to deprecated functionality - and want to ensure that it works correctly on the great majority of machines, especially on ones with outdated graphics cards.
What is the best way to maximize the (backwards) compatibility of an OpenGL application?
How can I test my program for compatibility with older hardware without actually having a test machine with older hardware?
What ways are there to find the underlying causes of the issues which may be encountered during compatibility testing?
What is the best way to maximize the (backwards) compatibility of an OpenGL application?
Define "compatibility"? If you want an application to run on as much hardware as possible, then you basically have to give up on shaders entirely and stick to about GL 1.4. The main confounding issue here are Intel driver bugs; many pieces of older Intel hardware will claim support for GL 2.0 or 2.1, but they have innumerable failings in this support.
How can I test my program for compatibility with older hardware without actually having a test machine with older hardware?
You don't. Compatibility with old hardware is about more than just sticking to a standard. It's about making sure that your program doesn't encounter driver bugs. And the only way to do that is to actually test on the hardware of interest.
What ways are there to find the underlying causes of the issues which may be encountered during compatibility testing?
Test the same code on recent hardware. If it has the same failures, then the problem is likely in your code. If it works fine on recent hardware but fails on older stuff, then the problem is almost certainly a driver bug with old hardware drivers.
Develop a workaround.
Well, the best way to maximize backwards compatibility, and to get a powerful tool for tracking down a target machine's capabilities (imho), is to use something like GLEW: The OpenGL Extension Wrangler Library. It loads OpenGL version-specific functions for you, and you can test whether they are supported by the user's system (or, more precisely, by the video drivers).
This library is very simple to use, it is well documented, and you can google a lot of examples.
So if the target machine doesn't have some of the newer OpenGL functions, you load a module named "opengl_old.cpp" (for example), and if it lacks functionality that is already deprecated (like glBegin(), glEnd()), you'd better go on with "opengl_new.cpp".
Basically, most of the changes came with OpenGL 3.0 (and further with 3.3), with shaders introduced as the only non-deprecated graphics pipeline, so you can make two OpenGL modules in your program: one for OpenGL 1 & 2 and one for OpenGL 3 & 4. At least that is how I solved this problem in my own code.
To test some functionality, you can request a specific version of the OpenGL API to be loaded when creating the context.
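A minimal sketch of that, for the FreeGLUT/GLEW combination several of these answers assume - request a version up front, then branch on what GLEW reports:

    #include <GL/glew.h>
    #include <GL/freeglut.h>

    int main(int argc, char** argv)
    {
        glutInit(&argc, argv);
        glutInitContextVersion(3, 3);             // ask for a specific version
        glutInitContextProfile(GLUT_CORE_PROFILE);
        glutCreateWindow("version test");

        glewExperimental = GL_TRUE;               // needed for core profiles
        glewInit();

        if (GLEW_VERSION_3_3)
        {
            // load the shader-based module ("opengl_new.cpp" in this answer's terms)
        }
        else
        {
            // fall back to the fixed-function module ("opengl_old.cpp")
        }
        return 0;
    }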

ARB_shader_image_load_store - what version of OpenGL and what hardware is required?

I'm a bit confused about new OpenGL extensions, what hardware they require and what OpenGL version they require.
In particular it's about the ARB_shader_image_load_store now.
http://www.opengl.org/registry/specs/ARB/shader_image_load_store.txt
As I understand it, this is a feature of OpenGL 4.2, but in the extension's OpenGL dependencies section it's written:
This extension is written against the OpenGL 3.2 specification
(Compatibility Profile).
This extension is written against version 1.50 (revision 09) of the OpenGL
Shading Language Specification.
OpenGL 3.0 and GLSL 1.30 are required.
and further down stuff like
This extension interacts trivially with OpenGL 4.0 and ARB_sample_shading.
What do these things mean? What hardware and what OpenGL version are necessary to use such extensions?
What do these things mean?
Well, let's take them one by one.
Before we begin, some basic information. OpenGL specifications, whether core or extension, do not care about what hardware something runs on. They're not interested in that. They don't define hardware. You can't look at an extension spec and know a priori what hardware it will function on. If you want to find that information out, you're looking in the wrong place.
Furthermore, you have to understand something about extension specifications. An OpenGL extension is like a diff; you can't read it in isolation. An OpenGL extension is a document that modifies the OpenGL specification.
This extension is written against the OpenGL 3.2 specification
(Compatibility Profile).
This extension is written against version 1.50 (revision 09) of the OpenGL
Shading Language Specification.
A diff file is utterly useless unless you know exactly what file to patch it into, yes? That's the same thing with OpenGL. The extension specification will make references to section and paragraph numbers in the OpenGL specification. But... there are many versions of the OpenGL specification. Which one is it talking about?
Therefore, every extension must state which physical document it is referring to. So when this extension says, "Add a new subsection after Section 2.14.5, Samplers, p. 106", it means page 106, section 2.14.5 of the OpenGL 3.2 specification, compatibility profile.
Same goes for the GLSL language specification.
OpenGL 3.0 and GLSL 1.30 are required.
Now, just because an extension is written against a particular version does not mean that this is the minimum version where support for the extension is possible. An implementation could theoretically support it in an earlier version.
This "required" statement says what the minimum version that can possibly support the extension is.
This is not a matter of hardware; it is a matter of language. The reason 3.0 is the minimum is because this extension refers to concepts that are simply not available in 2.1. Such as integer image formats and so forth. We'll talk a bit more about this in the next part.
This extension interacts with X.
The "interacts with" statement speaks to optional parts of the specification. What it means is that if "X" and this extension are both supported, then certain paragraphs in this specification also exist.
For example, ARB_shader_image_load_store states, "This extension interacts with ARB_separate_shader_objects". If you look towards the bottom, you will find a section titled "Dependencies on ARB_separate_shader_objects". That lists the specific language that changes when ARB_separate_shader_objects is available.
The "interacts trivially with X" statement simply means that the interaction is generally a "remove references to X" statement. For example, the section on ARB_tessellation_shader/4.0 dependencies state, "If OpenGL 4.0 and ARB_tessellation_shader are not supported, references to tessellation control and evaluation shaders should be removed."
The "trivially" language is just the extension's way of saying, "if X isn't supported, then obviously any references to the stuff X implements should be ignored."
The interaction with ARB_separate_shader_objects isn't trivial because it involves redefining how the early depth test works.
The "interacts with" is an alternative to the "are required" wording. The ARB could have simply written it against 4.1 and firmly stated that 4.1 is required. Then there wouldn't have been nearly as many "interacts with" clauses, since none of those things are optional.
However, the ARB wanted to allow for the possibility of hardware that could support GL 3.0 concepts but not others. For example, in the mobile space, shader_image_load_store support could come before tessellation_shader support. That's why this extension has a lot of "interacts with" clauses and a fairly low "required" GL version - despite the fact that, on desktops, you will not find any implementation of ARB_shader_image_load_store paired with a version number less than 4.0.
What hardware and what OpenGL version are necessary to use such extensions?
None of these documents will tell you that. ARB_shader_image_load_store could be available on any implementation version 3.0 or greater.
The easiest and simplest way to find out what hardware supports what extensions is to use the OpenGL Extensions Viewer. It has a pretty up-to-date database of this information.
Alternatively, you can use some common sense. ARB_separate_shader_objects allows you to mix and match programs on the fly. This is something D3D has been doing since Direct3D 8. Obviously hardware could do it since shaders came into being; OpenGL simply didn't let you. Until now.
Obviously ARB_separate_shader_objects is not hardware-based.
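To show what the extension exposes, a minimal sketch of that mix-and-match model, assuming a context where ARB_separate_shader_objects (or GL 4.1) is available; the shader sources are elided:

    const char* vsSrc = "...";   // vertex shader source, elided
    const char* fsSrc = "...";   // fragment shader source, elided

    // Two independent programs, one per stage.
    GLuint vs = glCreateShaderProgramv(GL_VERTEX_SHADER,   1, &vsSrc);
    GLuint fs = glCreateShaderProgramv(GL_FRAGMENT_SHADER, 1, &fsSrc);

    // Attach them to a pipeline; stages can now be swapped without relinking.
    GLuint pipeline;
    glGenProgramPipelines(1, &pipeline);
    glUseProgramStages(pipeline, GL_VERTEX_SHADER_BIT,   vs);
    glUseProgramStages(pipeline, GL_FRAGMENT_SHADER_BIT, fs);
    glBindProgramPipeline(pipeline);

    // Later: replace just the fragment stage, leaving the vertex stage alone.
    // glUseProgramStages(pipeline, GL_FRAGMENT_SHADER_BIT, otherFs);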
Similarly, ARB_shading_language_420pack contains many features that D3D has had since forever. Again, there's clearly nothing there that requires specialized hardware support.
ARB_tessellation_shader is obviously something that does require specialized hardware support. It introduces two new shaders stages. ARB_shader_image_load_store is the same way: it introduces a fundamental new hardware ability. Now, it is certainly possible that earlier hardware could have done it. But that seems unlikely.
This isn't always the case for every extension. But it is mostly true.
The other thing you should know about is OpenGL version numbers. Since 3.0, the ARB has been good about keeping to a strict version numbering scheme.
Major versions represent fundamental hardware changes. 3.x to 4.x is directly equivalent to D3D10 to D3D11. Minor versions are either making the API nicer (see ARB_texture_storage, something we were long overdue for) or exposing previously unexposed hardware features for the same hardware level (ARB_shader_image_load_store could have been implemented on any 4.0 implementation, but the ARB just took until 4.2 to write the extension).
So if you have hardware that can run 3.0, it can also run 3.3; if it doesn't have drivers for it, then your driver maker isn't doing their job. Same goes for 4.0 to 4.2.
These statements tell you how to read an extension specification correctly. To read the specification you must know how the extension interacts with other OpenGL features. And when it says "this extension interacts trivially with OpenGL 4.0", it means you must think of the core OpenGL 4.0 feature ARB_sample_shading, and how this extension changes its behaviour.
Because it is in the OpenGL 4.2 core specification, ARB_shader_image_load_store should be supported on all Shader Model 5 / Direct3D 11 hardware (Radeon HD 5xxx and up, GeForce 400 series and up).
So you can check whether your OpenGL version is >= 4.2, or check for the presence of the ARB_shader_image_load_store extension.
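A minimal sketch of that check, assuming a context is already current (std::strcmp needs <cstring>):

    GLint major = 0, minor = 0, n = 0;
    glGetIntegerv(GL_MAJOR_VERSION, &major);
    glGetIntegerv(GL_MINOR_VERSION, &minor);

    // Version 4.2+ guarantees the feature; otherwise scan the extension list.
    bool available = (major > 4) || (major == 4 && minor >= 2);
    if (!available)
    {
        glGetIntegerv(GL_NUM_EXTENSIONS, &n);
        for (GLint i = 0; i < n && !available; ++i)
            available = std::strcmp(
                (const char*)glGetStringi(GL_EXTENSIONS, (GLuint)i),
                "GL_ARB_shader_image_load_store") == 0;
    }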

OpenGL: What's the deal with deprecation?

OpenGL 3.0 and 3.1 have deprecated quite a few features I find essential. In particular, the use of fixed function in shaders.
Can anyone explain what's really the deal with that?
Why do they find the need to deprecate such a useful feature that obviously everybody uses and that no sane hardware company is going to remove support for?
As you said, no hardware company will remove support for fixed-function shaders, because there are so many existing applications that use them. What they don't want to do, though, is figure out how to specify the interactions between FF shaders and every future extension they add. Those interactions are very complicated (partly because FF shaders are so complicated), which leads to bugs and inconsistent implementations between vendors -- both of which are bad for developers and end users.
So they're drawing a line: if you want to use FF shaders, you don't get any of the new functionality. If you want new functionality, you can't use FF shaders. This is very similar to what Microsoft did in D3D10: it added a whole bunch of new functionality, but at the same time completely removed fixed-function shaders. The belief is that the set of developers who need the new non-shader functionality but who don't also need programmable shaders is very small.
It should be clarified that a feature that is marked "deprecated" is not actually removed. For example, an OpenGL 3.0 context has all of the features - nothing is gone. Further, some vendors will ship drivers that can create 3.1 and 3.2 contexts using a compatibility profile which will also enable the deprecated features. So, look closely at what vendor hardware you are going to support and ask about the ARB compatibility mode for old features. (There is also the "core" profile as of 3.2, which allows vendors to create a more lean and mean driver if they wish to make such a thing)
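On Windows that choice is made at context creation; a minimal sketch using the WGL_ARB_create_context_profile attributes, assuming hdc is an existing device context and the wglCreateContextAttribsARB pointer has already been fetched via wglGetProcAddress:

    // Explicitly ask for a 3.2 *compatibility* profile.
    const int attribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 2,
        WGL_CONTEXT_PROFILE_MASK_ARB,  WGL_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB,
        0
    };
    HGLRC ctx = wglCreateContextAttribsARB(hdc, 0, attribs);
    // Swap in WGL_CONTEXT_CORE_PROFILE_BIT_ARB for the lean core profile instead.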
Note that no current card really has a fixed-function hardware section any more - they only run shaders. When you ask for FF behavior, the GL runtime is authoring shaders on your behalf.
Why do they find the need to deprecate such a useful feature that obviously everybody uses and that no sane hardware company is going to remove support for?
I suppose then Apple must be insane, because Mac OS X 10.7 supports only 3.2 core. No compatibility profile support, no ARB_compatibility extension, nothing. You can either create a 2.1 context or a 3.2 core context.
However, if you want reasons:
For the sake of completeness: what Jesse Hall said. The ARB no longer has to consider the interaction between fixed function and new features. Integer math, array textures, and various other features are defined to not be usable with the fixed function pipeline. OpenGL has really improved over the last 3 years since GL 3.0 came out; the pace of the ARB's changes is quite substantial. Would that have been possible if they had to find a way to make all of those features interact with fixed function? And if they didn't have fixed function interactions, would you not then be complaining how you can't access new features from your old code? Which leads nicely into:
It serves as a strong indication of what one ought to be using. Even if the compatibility context is always available, you can look at core OpenGL to see how one ought to be approaching problem solving.
It makes the eventual desktop GL and GL ES unification much more reasonable. ES 2.0 threw out all of the old stuff and just adopted what you might think of as core GL 2.1. The ultimate goal will be to only have one OpenGL. To do that, you have to be able to rid the desktop GL of all of the cruft.
Fixed-function shaders are quite easily replaced with standard GLSL shaders, so it's difficult to see why, logically, they shouldn't be deprecated.
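As a small illustration, the heart of the old fixed-function transform-and-color path is only a few lines of GLSL; a sketch using the compatibility-profile 1.20 built-ins:

    // What the glEnable-era transform/color plumbing boils down to in GLSL 1.20.
    const char* vsSrc =
        "#version 120\n"
        "void main() {\n"
        "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
        "    gl_FrontColor = gl_Color;\n"
        "}\n";
    const char* fsSrc =
        "#version 120\n"
        "void main() {\n"
        "    gl_FragColor = gl_Color;\n"
        "}\n";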
I'm less certain than you that they won't be dropped from much hardware in the foreseeable future, as OpenGL ES 2.0 doesn't support the FF pipeline (and so isn't backward compatible with OpenGL ES 1.x). It seems to me that much of the momentum behind OpenGL these days comes from the widespread adoption of OpenGL ES on mobile platforms, and with FF functionality gone from there, there will be considerable pressure to move away from its use.
Indeed, I'd expect the leaner OpenGL ES implementations to replace standard OpenGL quite widely over the next few years, and FF functionality may disappear more because most hardware will implement OpenGL ES, rather than because it's removed from hardware implementing the full OpenGL.
OpenGL allows for both a 'core' profile and a 'compatibility' profile, so on most systems you won't lose access to deprecated or removed functions.
But if you want to ensure compatibility, it is best to stick to the core stuff. You aren't guaranteed a compatibility profile (even if most hardware has one, and at the current state it's more likely you will encounter an out-of-date OpenGL than a core-only one). Also, OpenGL ES is now a subset of OpenGL: it is possible to write an OpenGL ES 2.x/3.x program and have it run on OpenGL 4.3 with almost no changes.
Game consoles like the PlayStations and the Nintendo ones shipped with their own graphics libraries rather than using OpenGL.
They were based on OpenGL but were stripped down in a similar way to ES (I don't think ES 2.0 was out then). Those systems need their own graphics drivers and libraries; asking a hardware vendor to write what is basically a whole load of legacy wrapping libraries is a bit much (all the fixed-function stuff would just end up being implemented in shaders at some stage, and it's likely that glBegin/glEnd would just be turned into a VBO automatically anyway).
I think it has also been important to ensure that developers are made aware of the current way they should be programming. For decades people have been taught the 'wrong' way to do things by default, with vertex buffer objects taught as an extra.
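To make that contrast concrete, a sketch of the same triangle submitted the taught-by-default immediate-mode way and the buffer-object way (the latter assumes a bound VAO/shader using attribute location 0):

    // Legacy immediate mode: attributes re-sent through the API every frame.
    glBegin(GL_TRIANGLES);
    glVertex2f(-0.5f, -0.5f);
    glVertex2f( 0.5f, -0.5f);
    glVertex2f( 0.0f,  0.5f);
    glEnd();

    // Buffer-object path: upload once, draw many times.
    const float verts[] = { -0.5f, -0.5f,  0.5f, -0.5f,  0.0f, 0.5f };
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (const void*)0);
    glEnableVertexAttribArray(0);
    glDrawArrays(GL_TRIANGLES, 0, 3);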

What are OpenGL extensions, and what are the benefits/tradeoffs of using them?

In relation to this question on Using OpenGL extensions, what's the purpose of these extension functions? Why would I want to use them? Further, are there any tradeoffs or gotchas associated with using them?
The OpenGL standard allows individual vendors to provide additional functionality through extensions as new technology is created. Extensions may introduce new functions and new constants, and may relax or remove restrictions on existing OpenGL functions.
Each vendor has an alphabetic abbreviation that is used in naming their new functions and constants. For example, NVIDIA's abbreviation (NV) is used in defining their proprietary function glCombinerParameterfvNV() and their constant GL_NORMAL_MAP_NV.
It may happen that more than one vendor agrees to implement the same extended functionality. In that case, the abbreviation EXT is used. It may further happen that the Architecture Review Board "blesses" the extension. It then becomes known as a standard extension, and the abbreviation ARB is used. The first ARB extension was GL_ARB_multitexture, introduced in version 1.2.1. Following the official extension promotion path, multitexturing is no longer an optionally implemented ARB extension, but has been a part of the OpenGL core API since version 1.3.
Before using an extension a program must first determine its availability, and then obtain pointers to any new functions the extension defines. The mechanism for doing this is platform-specific and libraries such as GLEW and GLEE exist to simplify the process.
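On Windows, for instance, that platform mechanism is wglGetProcAddress; a minimal sketch of resolving one extension function by hand - exactly the boilerplate that GLEW and GLee automate:

    #include <windows.h>
    #include <GL/gl.h>

    // Enum value from the ARB_multitexture spec (Windows' gl.h stops at 1.1).
    #define GL_TEXTURE0_ARB 0x84C0

    typedef void (APIENTRY *PFNGLACTIVETEXTUREARBPROC)(GLenum texture);

    void useMultitexture()
    {
        // Resolve the entry point; a GL context must be current.
        PFNGLACTIVETEXTUREARBPROC pglActiveTextureARB =
            (PFNGLACTIVETEXTUREARBPROC)wglGetProcAddress("glActiveTextureARB");
        if (pglActiveTextureARB)     // NULL means the driver doesn't export it
            pglActiveTextureARB(GL_TEXTURE0_ARB);
    }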
Extensions are, in general, a way for graphics card vendors to add new functionality to OpenGL without having to wait until the next revision of the OpenGL spec. There are different types of extensions:
Vendor extension - only one vendor provides a certain type of functionality.
Example: NV_vertex_program
Multivendor extension - multiple vendors have gotten together and agreed on the functionality.
Example: EXT_vertex_program
ARB extension - the OpenGL Architecture Review Board has blessed the extension. You have a reasonable expectation that this type of extension will be around for a while.
Example: ARB_vertex_program
Extensions don't have to go through all of these steps. Sometimes an extension is only ever implemented by one vendor, before hardware designs go a different way and the extension is abandoned. Other times, an extension might make it as far as ARB status before everyone decides there's a better way. (The ARB_vertex_program approach, for instance, was set aside in favor of the high-level shading language approach of ARB_vertex_shader when it came time to roll shaders into the core OpenGL spec.) Even ARB extensions don't last forever; I wouldn't write something today requiring ARB_matrix_palette, for instance.
All of that having been said, it's a very good idea to keep up to date on extensions, in particular the latest ARB and EXT extensions. In the past it has been true that some of the 'fast paths' through the hardware were only accessible via extensions. Likewise, if you want to know what all functionality a piece of hardware can do, there's no better place to look than in a vendor-specific extension.
If you're just getting started in OpenGL, I'd recommend investigating:
ARB_vertex_buffer_object (vertices)
ARB_vertex_shader / ARB_fragment_shader / ARB_shader_objects / GLSL spec (shaders)
More advanced:
ARB/EXT_framebuffer_object (off-screen rendering)
This is all functionality that's been rolled into core, but it can be good to see it in isolation so you can get a better feel for where its boundaries lie. (The core OpenGL spec seamlessly mixes the old with the new, so this can be pretty important if you want to stay on the fast path and avoid the legacy - and sometimes software-emulated - paths.)
Whatever you do, make sure you have appropriate checks for the extensions you decide to use, and fallbacks where necessary. Even though your card may have a given extension, there's no guarantee that the extension will be present on another vendor's card, or even on another operating system with the same card.
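With GLEW, such a check-plus-fallback amounts to one boolean test per extension; a minimal sketch (the fallback shown is just one possible strategy):

    if (GLEW_EXT_framebuffer_object)
    {
        // Fast path: render-to-texture via a framebuffer object.
    }
    else
    {
        // Fallback: render to the back buffer, then copy the result out
        // with glCopyTexSubImage2D.
    }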
OpenGL extensions are new features added to the OpenGL specification; they are added by the OpenGL standards body and by the various graphics card vendors. These are exposed to the programmer as new function calls or constants. Every new version of the OpenGL specification ships with newer functionality and (typically) includes all the previous functionality and extensions.
The real problem with OpenGL extensions exists only on Windows. Microsoft hasn't supported any extensions that have been released after OpenGL v1.1. The graphics card vendors overcome this by shipping their own version of this functionality through header files and libraries. However, using these can be a bit painful, as the question you linked to shows. But this problem has mostly gone away with the popularity of GLEW, which takes care of wrapping all this into an easy-to-use package.
If you do use a very recent OpenGL extension, be aware that it may not be supported on older graphics hardware. Other than this, there's no real disadvantage to using these extensions. Most of the extensions which become standard are pretty darn useful, and there's very little reason not to use them.