I have developed a program that makes use of many of OpenGL's features, ranging from fairly new to deprecated functionality, and I want to ensure that it works correctly on the great majority of machines, especially on ones with outdated graphics cards.
What is the best way to maximize the (backwards) compatibility of an OpenGL application?
How can I test my program for compatibility with older hardware without actually having a test machine with older hardware?
What ways are there to find the underlying causes of the issues which may be encountered during compatibility testing?
What is the best way to maximize the (backwards) compatibility of an OpenGL application?
Define "compatibility"? If you want an application to run on as much hardware as possible, then you basically have to give up on shaders entirely and stick to about GL 1.4. The main confounding issue here are Intel driver bugs; many pieces of older Intel hardware will claim support for GL 2.0 or 2.1, but they have innumerable failings in this support.
How can I test my program for compatibility with older hardware without actually having a test machine with older hardware?
You don't. Compatibility with old hardware is about more than just sticking to a standard. It's about making sure that your program doesn't encounter driver bugs. And the only way to do that is to actually test on the hardware of interest.
What ways are there to find the underlying causes of the issues which may be encountered during compatibility testing?
Test the same code on recent hardware. If it has the same failures, then the problem is likely in your code. If it works fine on recent hardware but fails on older stuff, then the problem is almost certainly a driver bug with old hardware drivers.
Develop a workaround.
Well, the best way to maximize backwards compatibility, and to get a powerful tool for tracking down a target machine's capabilities (imho), is to use something like GLEW: The OpenGL Extension Wrangler Library. It loads OpenGL version-specific functions for you, and you can test whether they are supported by the user's system (or, more correctly, by the video drivers).
The library is very simple to use, it is well documented, and you can google plenty of examples.
So if the target machine doesn't have some of the newer OpenGL functions, you load a module named, say, "opengl_old.cpp"; if it lacks functionality that is already deprecated (like glBegin() and glEnd()), you'd better go with "opengl_new.cpp".
Basically, the biggest changes came with OpenGL 3.0 (and further with 3.3), which introduced shaders as the only non-deprecated graphics pipeline, so you can make two OpenGL modules in your program: one for OpenGL 1 & 2 and one for OpenGL 3 & 4. At least that is how I solved this problem in my own code.
To test specific functionality, you can request a concrete version of the OpenGL API when creating the context.
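For illustration, a minimal sketch of that pattern with GLEW (the two init functions stand in for the "opengl_old.cpp"/"opengl_new.cpp" modules mentioned above and are made up for this example):

```cpp
#include <GL/glew.h>
#include <cstdio>

bool initModernRenderer(); // shader-based path ("opengl_new.cpp")
bool initLegacyRenderer(); // fixed-function path ("opengl_old.cpp")

// Call once, right after the OpenGL context has been created.
bool chooseRenderPath() {
    if (glewInit() != GLEW_OK)
        return false;

    if (GLEW_VERSION_3_3)   // driver exposes GL 3.3 or later
        return initModernRenderer();
    if (GLEW_VERSION_1_4)   // at least the legacy pipeline exists
        return initLegacyRenderer();

    std::fprintf(stderr, "No usable OpenGL version found\n");
    return false;
}
```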
Related
OpenGL Stenciling, separating ref from value written?
In the answer to this question, a vendor-specific extension GL_REPLACE_VALUE_AMD is able to do exactly what I'm struggling to do in OpenGL, but I'm worried it will limit which computers and platforms my program can run on, and I've had no luck researching where it would be unavailable.
My goal is for the program to run on any computer that supports OpenGL 2.0, without any functional differences between them. Should I compile a program that uses this extension, what computers/platforms in this set would no longer be able to run the program without problems, if any?
The fact that it's a vendor extension should be an immediate clue that there's a good chance that you'd be limiting yourself to that vendor's hardware. It's not a 100% guarantee; NV_texture_barrier has been implemented for years on pretty much anything that can run GL 3.3 or better.
Further research indicates that the extension was published in 2012. That suggests the extension would likely be implemented only by more recent, GL 4.x-capable hardware.
If you want more accurate information, there are databases of extension support that give a clearer picture. From these, we see that the extension is only implemented on AMD hardware. While it is available on AMD's GL 3.x-class hardware, it is not available on any of AMD's 2.x-class hardware.
So if your goal is to support GL 2.0 (why not 2.1?) as a baseline, then you can't use that extension.
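If you do ship code that may touch it, check for the extension string at runtime before enabling that path. A minimal sketch (GL_REPLACE_VALUE_AMD is defined by the AMD_stencil_operation_extended extension):

```cpp
#include <GL/gl.h>
#include <cstring>

// Crude substring test against the GL 2.x extension string.
// (A robust version should match whole tokens, not substrings.)
bool hasExtension(const char* name) {
    const char* exts =
        reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    return exts && std::strstr(exts, name) != nullptr;
}

// Usage, after context creation:
//   if (hasExtension("GL_AMD_stencil_operation_extended")) { /* use it */ }
```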
When I develop shader code on my machine I often find myself in the situation where the shader works perfectly on my machine, but on other graphic cards, drivers, operating systems, etc. it doesn't.
How to achieve compatibility of shaders?
I see a few approaches:
Test on many different systems. But which systems to choose? Testing with every card on every OS and every driver is not realistic. Maybe we can assume that the vendors care about backward compatibility? In that case, testing with old cards and drivers might be sufficient.
Ask the driver for a specific version and core profile. This helps a bit, but the drivers seem very lenient, allowing me to code things that aren't in the spec.
Code checkers that check the code for strict compatibility with a certain spec. There don't seem to be such tools around.
Don't bother and wait for bug reports from users. Yet the error messages generated by the drivers are rather poor, and the observed behaviour might be as uninsightful as a black screen.
I'm targeting Win/Linux/OSX platforms. Not consoles.
Khronos has released the OpenGL / OpenGL ES Reference Compiler, which can be used to validate a shader's source code. From the site:
"The primary purpose of the reference compiler is to identify shader portability issues."
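The reference compiler catches portability issues offline; at runtime you can at least surface whatever the driver's own GLSL compiler has to say instead of settling for a black screen. A minimal sketch:

```cpp
#include <GL/glew.h>
#include <cstdio>
#include <vector>

// Compile a shader and dump the driver's info log if it fails.
GLuint compileOrReport(GLenum type, const char* source) {
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &source, nullptr);
    glCompileShader(shader);

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (ok) return shader;

    GLint len = 0;
    glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &len);
    std::vector<char> log(len > 1 ? len : 1);
    glGetShaderInfoLog(shader, static_cast<GLsizei>(log.size()),
                       nullptr, log.data());
    std::fprintf(stderr, "Shader compile failed:\n%s\n", log.data());
    glDeleteShader(shader);
    return 0;
}
```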
As I understand it, GPU vendors define a standard interface to be used by OS developers to communicate with their specific drivers. So DirectX and OpenGL are just wrappers around that interface. When OS developers decide to create a new version of the graphics API, GPU vendors expand their interface (new routines are faster, and older ones are left in for compatibility), and OS developers use this new part of the interface.
So, when it is said that GPU vendors' support for DirectX is better than for OpenGL, does it simply mean that GPU vendors primarily take into account Microsoft's future plans for the DirectX API and adjust the future development of this interface to those needs? Or are there technical reasons behind this?
As I understand it, GPU vendors define a standard interface to be used by OS developers to communicate with their specific drivers. So DirectX and OpenGL are just wrappers around that interface.
No, not really. DirectX and OpenGL are just specifications that define APIs. But a specification is nothing more than a document, not software. The OpenGL API specification is controlled by Khronos; the DirectX API specification is controlled by Microsoft. Each OS then defines a so-called ABI (Application Binary Interface) that specifies which system-level APIs are supported by the OS (OpenGL and DirectX are system-level APIs) and what rules an actual implementation must adhere to when running on the OS in question.
The actual OpenGL or Direct3D implementation happens in the hardware's drivers (and in fact the hardware itself is part of the implementation as well).
When OS developers decide to create a new version of the graphics API, GPU vendors expand their interface
In fact it's the other way round: most graphics API specifications are laid out by the graphics hardware vendors. After all, they are closest to where the rubber hits the road. In the case of Khronos, the GPU makers are part of the controlling group. In the case of DirectX, the hardware makers submit drafts to Microsoft and review the changes and suggestions Microsoft makes. But in the end, each new API release reflects the common denominator of the capabilities of the next hardware generation in development.
So, when it is said that GPU vendors' support for DirectX is better than for OpenGL, does it simply mean that GPU vendors primarily take into account Microsoft's future plans for the DirectX API and adjust the future development of this interface to those needs?
No, it means that each GPU vendor implements its own version of OpenGL and the Direct3D backend, which is where all the magic happens. However, OpenGL puts a lot of emphasis on backward compatibility and ease of transition to newer functionality. Direct3D development, OTOH, is quick to cut ties with earlier versions. This also means that full-blown compatibility-profile OpenGL implementations are quite complex beasts. That's also the reason why recent versions of the OpenGL core profile did the (overdue) work of cutting down support for legacy features; this reduction of API complexity is quite a liberating thing for developers. If you develop purely for a core profile, it simplifies a lot of things; for example, you no longer have to worry about a plethora of internal state when writing a plugin.
Another factor is that for Direct3D there's exactly one shader compiler, which is not part of the driver infrastructure/implementation itself, but gets run at program build time. OpenGL implementations, however, must each implement their own GLSL shader compiler, which complicates things. IMHO the lack of a unified AST or intermediate shader code is one of the major shortcomings of OpenGL.
There is not a 1:1 correspondence between the graphics hardware abstraction and graphics APIs like OpenGL and Direct3D. WDDM, Windows Vista's driver model, defines things like common scheduling, memory management, etc. so that DirectX and OpenGL applications work interoperably, but very little of the design of DirectX, OpenGL or GPUs in general has to do with this. Think of it like the kernel: nobody creates a CPU specifically to run it, and you do not have to re-compile the kernel every time a new iteration of a processor architecture comes out that adds a new subset of instructions.
Application developers and IHVs (GPU vendors, as you call them) are the ones who primarily deal with changes to GPU architecture. It may appear that the operating system has more to do with the equation than it actually does, because Microsoft (more so) and Apple, who both maintain their own proprietary operating systems, are influential in the design of DirectX and OpenGL. These days OpenGL closely follows the development of commodity desktop GPU hardware, but this was not always the case: it carries baggage from the days of custom SGI workstations, and lots of things in compatibility profiles have not been hardware-native on desktop GPUs in decades. DirectX, on the other hand, has always followed desktop hardware. It used to be that if you wanted an indication of where desktop GPUs were headed, D3D was a good marker.
OpenGL is arguably more complicated than DirectX because until recently it never let go of anything, whereas DirectX radically redefined the API and stripped legacy support with every iteration. Both APIs have settled down in recent years, but D3D still maintains a bit of an edge considering it only has to be implemented on a single platform and Microsoft writes the one and only shader compiler. If anything, the shader compiler and minimal feature set (void of legacy baggage) in D3D is probably why you get the impression that vendors support it better.
With the emergence of AMD Mantle, the desktop picture might change again (think back to the days of 3Dfx and Glide)... it certainly goes to show that OS developers have very little to do with graphics API design. NV and AMD both have proprietary APIs on the PS3, GameCube/Wii/WiiU, and PS4 that they have to implement in addition to D3D and OpenGL on the desktop, so the overall picture is much broader than you think.
I've been thinking of making an additional wrapper for my project to use OpenGL rather than Allegro. I was not sure which OpenGL version to go for, since I know that some computers cannot run recent versions such as 4.4. Also, I need a version that compiles without problems on Linux, Windows, and Mac.
You'll want to look at what kinds of graphics cards will be available on your target systems and bear some details in mind:
OpenGL up to 1.5 can be completely emulated in software in real time on most systems. You don't necessarily need hardware support for good performance.
OpenGL 1.4 has universal support. Virtually all hardware supports it.
Mac OS X only supports up to OpenGL 2.0 and OpenGL 2.1, depending on OS version and hardware. Systems using GMA950 have only OGL1.4 support. Mac OS X Lion 10.7 supports OpenGL 3.2 Core profile on supported hardware.
On Linux, it's not unusual for users to specifically prefer open source drivers over the alternative "binary blobs," so bear in mind that the version of Mesa that most people have supports only up to about OpenGL 2.1 compatibility. Upcoming versions have support for OpenGL 3.x. Closed-source "binary blobs" will generally support the highest OpenGL version for the hardware, including up to OpenGL 4.2 Core.
When considering what hardware is available to your users, the Steam Hardware Survey may help. Note that most users have DirectX 9-compatible hardware, which is roughly feature-equivalent to OpenGL 2.0. Wikipedia's OpenGL article also specifies what hardware came with initial support for which versions.
If you use a library like GLEW or GLEE or any toolkit that depends on them or offers similar functionality (like SFML, or even Allegro since 4.3), then you'll not need to concern yourself with whether your code will compile. These toolkits will take care of the details of enabling extensions and providing all of the symbols you need.
Given all of this, I'd suggest targeting OpenGL 2.1 to get the widest audience possible with the best feature support.
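If you want to verify that 2.1 baseline at runtime, here's a sketch that needs no loader library (it assumes a current context):

```cpp
#include <GL/gl.h>
#include <cstdio>

// Desktop GL_VERSION strings begin with "<major>.<minor>".
bool atLeastGL21() {
    const char* v =
        reinterpret_cast<const char*>(glGetString(GL_VERSION));
    int major = 0, minor = 0;
    if (!v || std::sscanf(v, "%d.%d", &major, &minor) != 2)
        return false;  // no current context or unparseable string
    return major > 2 || (major == 2 && minor >= 1);
}
```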
Your safe bet is OpenGL 2.1, though it needs to be supported by the driver on your target system. OpenGL ES, used on several mobile platforms, is basically a simplified OpenGL 2, so even porting to those would be fairly easy. I highly recommend using libGlew as VJo said.
It's less about operating systems, and more about video card drivers.
I think 1.4 is the highest version which enjoys support across all consumer graphics systems: ATI (AMD), nVidia, and Intel IGP. Intel is definitely the limiting factor here: even when ATI or nVidia hardware lacks support, they release OpenGL 4.1 drivers which use software to emulate the missing features. Not so with Intel.
OpenGL is not a library you usually compile and ship yourself (unless you're a Linux distributor and are packaging X.Org/Mesa). Your program just dynamically links against libGL.so (Linux/BSD), opengl32.dll (Windows; on 64-bit systems it's also called opengl32.dll, but it's in fact a 64-bit DLL) or the OpenGL framework (Mac OS X). This gives your program access to the system's OpenGL installation. The version/profile you want to use has no influence on the library you link!
Then, after your program has been initialized, you can test which OpenGL version is available. If you want to use OpenGL 3 or 4 you'll have to jump through a few additional hoops on Windows to make full use of it, but normally some kind of wrapper helps you with context creation anyway, boiling it down to only a few lines.
Then in the program you can implement multiple code paths for the various versions. Usually lower-version OpenGL codepaths share a large subset with higher-version codepaths. I recommend writing new code in the highest version available, then adding additional code paths (often just substitutions which can be done with C preprocessor macros or similar) for lower versions until you reach the lowest common denominator of features you really need.
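A toy sketch of such a runtime dispatch (drawModern/drawLegacy are placeholders for your own codepaths):

```cpp
#include <GL/glew.h>

void drawModern(); // VBO/VAO + shader codepath
void drawLegacy(); // fixed-function / client-array codepath

void drawScene() {
    // Decided once at startup in real code; shown inline for brevity.
    if (GLEW_VERSION_3_0)
        drawModern();
    else
        drawLegacy();
}
```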
Then you need to use OpenGL 1.1 and load the needed (and supported) functions through wglGetProcAddress (on Windows) or glXGetProcAddress (on Linux).
Instead of using those two functions directly, you can use the GLEW library, which does that for you and is cross-platform.
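For reference, this is roughly what such manual loading looks like on Windows (a sketch; it requires a current context, and glGenBuffers is just an example entry point):

```cpp
#include <windows.h>
#include <GL/gl.h>
#include <GL/glext.h>   // PFNGL... function pointer typedefs

static PFNGLGENBUFFERSPROC pglGenBuffers = nullptr;

// Returns false if the driver doesn't expose the entry point.
bool loadGenBuffers() {
    pglGenBuffers = reinterpret_cast<PFNGLGENBUFFERSPROC>(
        wglGetProcAddress("glGenBuffers"));
    return pglGenBuffers != nullptr;
}
```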
I'm working on a C++ cross-platform OpenGL application (Windows, Linux and MacOS) and I am wondering if some of you could share some advice on porting a large application to OpenGL 3. The reason I am looking into OpenGL 3 is that I think we could benefit a lot from using the new "sync objects". Nvidia has supported such an extension since the GeForce 256 days (GL_NV_fence), but there seems to be no equivalent functionality on ATI hardware before OpenGL 3.0+...
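For reference, the sync-object API I mean (ARB_sync, core since OpenGL 3.2) looks roughly like this:

```cpp
#include <GL/glew.h>

// Block (up to one second) until the GPU has consumed all commands
// issued so far. Error handling omitted for brevity.
void waitForGpu() {
    GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
    GLenum r = glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT,
                                1000000000ull /* nanoseconds */);
    (void)r; // GL_ALREADY_SIGNALED, GL_CONDITION_SATISFIED,
             // GL_TIMEOUT_EXPIRED or GL_WAIT_FAILED
    glDeleteSync(fence);
}
```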
Our code makes quite heavy use of glut/freeglut, glu functions, OpenGL 2 extensions and CUDA (on supported hardware). The problem I am now facing is that "gl3.h" and "gl.h" are mutually incompatible (as stated in gl3.h). Do you guys know if there is a GL3 glut equivalent? Also, looking at the CUDA toolkit header files, it seems that GL-CUDA interoperability is only available when using older versions of OpenGL... (cuda_gl_interop.h includes gl.h...). Am I missing something?
Thanks a lot for your help.
The last update to glut was version 3.7, roughly 10 years ago. Taking that into account, I doubt that it'll ever support OpenGL 3.x (or 4.x).
The people working on OpenGlut seem to be considering the possibility of OpenGL 3.x support, but haven't done anything with it yet.
FLTK has a (partial) glut simulation, but it's partial enough that a program that "makes heavy use of glut" may not work with it in the first place. Since FLTK is in active development, I'd guess it'll eventually support OpenGL 3.x (or 4.x), but I don't believe it does yet, and it's open to question how soon it will.
Edit: As far as CUDA goes, the obvious (though certainly non-trivial) answer would be to use OpenCL instead. This is considerably more compatible both with hardware (e.g., with ATI/AMD boards) and with newer versions of OpenGL.
That leaves glu. Frankly, I don't think there is a clear or obvious answer to this. OpenGL is moving away from supporting things like glu, and is instead dropping even more of the vaguely glu-like functionality that used to be part of the core OpenGL spec (e.g., all the matrix manipulation primitives). Personally, I think this is a mistake, but whether it's good or bad, it's how things are. Unfortunately, glu is a bit like glut -- the last update to its spec was in 1998 and corresponds to OpenGL 1.2, which doesn't make a new update seem at all likely. I don't know of any really direct replacements for it either. There are certainly other graphics libraries that provide (at least some) similar capabilities, but all the ones I can think of would require substantial rewriting.