Which OpenGL version is most stable and currently used? - c++

I've been thinking of making an additional wrapper for my project to use OpenGL rather than Allegro. I wasn't sure which OpenGL version to go for, since I know that some computers cannot run recent versions such as 4.4. I also need a version that compiles without problems on Linux, Windows, and Mac.

You'll want to look at what kinds of graphics cards will be available on your target systems and bear some details in mind:
OpenGL up to 1.5 can be completely emulated in software in real time on most systems. You don't necessarily need hardware support for good performance.
OpenGL 1.4 has universal support. Virtually all hardware supports it.
Mac OS X only supports up to OpenGL 2.0 and OpenGL 2.1, depending on OS version and hardware. Systems using the GMA 950 have only OpenGL 1.4 support. Mac OS X Lion 10.7 supports the OpenGL 3.2 Core profile on supported hardware.
On Linux, it's not unusual for users to specifically prefer open source drivers over the alternative "binary blobs," so bear in mind that the version of Mesa that most people have supports only up to about OpenGL 2.1 compatibility. Upcoming versions have support for OpenGL 3.x. Closed-source "binary blobs" will generally support the highest OpenGL version for the hardware, including up to OpenGL 4.2 Core.
When considering what hardware is available to your users, the Steam Hardware Survey may help. Note that most users have DirectX 9-compatible hardware, which is roughly feature-equivalent to OpenGL 2.0. Wikipedia's OpenGL article also specifies what hardware came with initial support for which versions.
If you use a library like GLEW or GLEE or any toolkit that depends on them or offers similar functionality (like SFML, or even Allegro since 4.3), then you'll not need to concern yourself with whether your code will compile. These toolkits will take care of the details of enabling extensions and providing all of the symbols you need.
Given all of this, I'd suggest targeting OpenGL 2.1 to get the widest audience possible with the best feature support.
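As an illustration, here is a minimal sketch (assuming GLEW and an OpenGL context already created by your windowing toolkit) of verifying at startup that OpenGL 2.1 is actually available; the function name is just an example:

    #include <GL/glew.h>
    #include <cstdio>

    // Call after the context has been made current.
    bool haveOpenGL21()
    {
        if (glewInit() != GLEW_OK) {
            std::fprintf(stderr, "Failed to initialize GLEW\n");
            return false;
        }
        // GLEW fills in its GLEW_VERSION_* flags at init time.
        if (!GLEW_VERSION_2_1) {
            std::fprintf(stderr, "OpenGL 2.1 not supported, driver reports: %s\n",
                         reinterpret_cast<const char*>(glGetString(GL_VERSION)));
            return false;
        }
        return true;
    }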

Your safe bet is OpenGL 2.1, though it still needs to be supported by the driver on your target system. OpenGL ES, used on several mobile platforms, is basically a simplified OpenGL 2, so even porting to those would be fairly easy. I highly recommend using GLEW, as VJo said.

It's less about operating systems, and more about video card drivers.
I think 1.4 is the highest version which enjoys support on all consumer graphics systems: ATI (AMD), NVIDIA, and Intel IGPs. Intel is definitely the limiting factor here: even when ATI or NVIDIA hardware lacks native support, they release OpenGL 4.1 drivers which emulate the missing features in software. Not so with Intel.

OpenGL is not a library you usually compile and ship yourself (unless you're a Linux distributor and are packaging X.Org/Mesa). Your program just dynamically links against libGL.so (Linux/BSD), opengl32.dll (Windows; on 64-bit systems it is also called opengl32.dll, but it is in fact a 64-bit DLL) or the OpenGL framework (Mac OS X). This gives your program access to the system's OpenGL installation. The version/profile you want to use has no influence on the library you link!
Then, after your program has been initialized, you can test which OpenGL version is available. If you want to use OpenGL 3 or 4, you'll have to jump through a few additional hoops on Windows to make full use of it, but normally some kind of wrapper helps you with context creation anyway, boiling it down to only a few lines.
Then, in the program, you can implement multiple code paths for the various versions. Lower-version OpenGL code paths usually share a large subset with higher-version code paths. I recommend writing new code against the highest version available, then adding additional code paths (often just substitutions which can be done by C preprocessor macros or similar) for lower versions until you reach the lowest common denominator of features you really need.
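A rough sketch of the runtime version test that drives such code paths might look like this (assuming headers or a loader such as GLEW that expose the GL 3.0 enums; the dispatch itself is of course just an example):

    #include <GL/glew.h>
    #include <cstdio>

    // Query the context version; GL_MAJOR_VERSION/GL_MINOR_VERSION need a
    // 3.0+ context, so fall back to parsing the version string otherwise.
    void queryGLVersion(int& major, int& minor)
    {
        major = minor = 0;
        glGetIntegerv(GL_MAJOR_VERSION, &major);
        glGetIntegerv(GL_MINOR_VERSION, &minor);
        if (glGetError() != GL_NO_ERROR) {
            const char* s = reinterpret_cast<const char*>(glGetString(GL_VERSION));
            if (s) std::sscanf(s, "%d.%d", &major, &minor);
        }
    }

    void render(int major)
    {
        if (major >= 3) {
            // modern path: shaders, vertex array objects
        } else if (major == 2) {
            // GL 2.x path: shaders, client-side vertex arrays
        } else {
            // legacy fixed-function fallback
        }
    }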

Then you need to use OpenGL 1.1 and load the needed (and supported) functions through wglGetProcAddress (on Windows) or glXGetProcAddress (on Linux).
Instead of using those two functions directly, you can use the GLEW library, which does that for you and is cross-platform.
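For reference, manual loading on Windows boils down to something like the following sketch (assuming <GL/glext.h> for the function-pointer typedefs); GLEW essentially does this for every entry point:

    #include <windows.h>
    #include <GL/gl.h>
    #include <GL/glext.h>   // provides PFNGLGENBUFFERSPROC etc.

    PFNGLGENBUFFERSPROC pglGenBuffers = nullptr;

    // Must be called while an OpenGL context is current; wglGetProcAddress
    // returns NULL (or small sentinel values on some drivers) on failure.
    bool loadGenBuffers()
    {
        pglGenBuffers = reinterpret_cast<PFNGLGENBUFFERSPROC>(
            wglGetProcAddress("glGenBuffers"));
        return pglGenBuffers != nullptr;
    }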

Related

What is meant by an OpenGL ES implementation on an OS?

I started looking for docs to learn OpenGL ES and came across a lot of links. One of them explained it like this: "OpenGL needs to be supported by the vendors of graphics cards (like NVIDIA) and be implemented by the OS vendors (like Apple in its macOS and iOS), and finally, OpenGL gives us developers a unified API to work with".
What is meant by the following?
OpenGL needs to be supported by the vendors of graphics cards (like NVIDIA)
Is this something different from how normal code libraries are executed?
be implemented by the OS vendors (like Apple in its macOS and iOS)...
Is this OS-vendor specific?
If all the implementation is done by the vendors, what does OpenGL ES itself actually do?
I thought OpenGL ES was a library that needs to be installed on the required OS, and that we call it using the specific EGL APIs. Isn't that so?
finally, OpenGL gives us developers a unified API to work with
If the OS itself develops everything, why go for OpenGL ES?
Please explain, possibly with an example.
Despite what its name suggests, OpenGL is not a library. Or rather, it is, in the sense that some symbols for the functions you use need to be linked into your executable.
But OpenGL is a de facto standard, and most OSes ship a good part of the implementation in their kernel and driver stack. OS X and iOS provide OpenGL.framework and GLKit.framework to interface with OpenGL.
The Architecture Review Board (ARB) just publishes the specification of OpenGL. That is, it defines what each function should be named, which parameters it should accept, and what its desired behavior is. The graphics card vendors take this specification and implement the API. The function interfaces and high-level behavior are defined by the ARB, but the internal implementation of the API depends on the vendor's hardware. This OpenGL implementation ships as part of the driver. Drivers are generally split into an operating-system interface part and a hardware interface part, so the operating system also needs some support for the driver.

Can I limit OpenGL version to 4.3?

I have an OpenGL 4.5 capable GPU and I wish to test if my application runs on an OpenGL 4.3 capable system. Can I set my GPU to use OpenGL 4.3?
Can you forcibly limit your OpenGL implementation to a specific version? No. Implementations are permitted to give you any version which is 100% compatible with the one you asked for. And 4.5 is compatible with 4.3.
However, using the right OpenGL loading library, you can forcibly limit your header. Several libraries allow you to generate version-specific headers, which will provide APIs and enums for just that version and nothing else. And any extensions you wish to use.
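As an illustration of requesting a specific version at context creation, here is a sketch using GLFW (one possible toolkit, not the only option); note that the implementation is still free to hand back any version compatible with the request, such as 4.5:

    #include <GLFW/glfw3.h>
    #include <cstdio>

    int main()
    {
        if (!glfwInit()) return 1;

        // Ask for a 4.3 core context; the driver may still return 4.5.
        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
        glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

        GLFWwindow* win = glfwCreateWindow(640, 480, "GL 4.3 test", nullptr, nullptr);
        if (!win) { glfwTerminate(); return 1; }
        glfwMakeContextCurrent(win);

        std::printf("Got context: %s\n",
                    reinterpret_cast<const char*>(glGetString(GL_VERSION)));

        glfwDestroyWindow(win);
        glfwTerminate();
        return 0;
    }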

How is support for new OpenGL versions provided on old video cards?

As far as I know, OpenGL updates include extensions that were defined by vendors and use new hardware features. I have a GeForce 520M (released in 2010), but it supports OpenGL 4.4 (released in 2014). Obviously, it cannot support some hardware features needed for OpenGL 4.4, yet it still supports OpenGL 4.4. How is support for new OpenGL versions provided on old video cards?
Why do you think that it does not support the HW features? It does. OpenGL is a rendering API specification, not a HW one. The fact that new features were added to GL does not mean that new HW is required to implement them.
On NVIDIA, all GPUs since the Fermi architecture support OpenGL 4.x as far as it has been specified. That does not guarantee that they will support everything a future GL 4.x version might introduce.
Currently, the GL major versions can be tied to some major HW generation. GL 2.x is the really old stuff, GL 3.x has been supported by NVIDIA since the GeForce 8xxx series (released in 2006), and Fermi/Kepler/Maxwell support 4.x.
So according to you, progress in video cards has stopped since the Fermi architecture? Why would vendors release new video cards if they can just update OpenGL to add new functions? I don't think that is true. Vendors try to find ways to accelerate video cards at the hardware level and add functions that use it. Or do I not understand this correctly? And what does it actually mean that my video card supports OpenGL 4.x?
No, you have it backwards.
You could say that the progress in graphics APIs with respect to hardware functionality has stopped since Fermi. Every new generation of GPUs generally adds new HW features, but the features exposed by every new version of OpenGL do not necessarily require new HW. In fact, core GL often lags a generation or more behind HW capabilities in required functionality.
OpenGL still lacks support for some features introduced in Direct3D 11 and likewise Direct3D 11.x lacks support for some OpenGL 4.x features and neither API fully exposes the underlying hardware. This is because they are designed around supporting the most hardware possible rather than all of the features of one specific piece of hardware. AMD's solution to this "problem" was to introduce a whole new API (Mantle) that more closely follows the feature set of their Graphics Core Next-based architectures; more of a console approach to API design.
There may be optional ARB extensions introduced with the yearly release of new GL version specifications, but until they are promoted to core, GL does not require support for them.
Until things are standardized across all of the vendors who are part of the ARB, some features of GPU hardware are unusable in core GL. This is why GL has extensions in addition to core versions. A 720m GPU will support many new extensions that your 520m GPU does not, but at the same time they can both implement all of the required functionality from GL 4.4. There is no guarantee, however, that GL 4.5 will not introduce some new required feature that your 520m GPU is incapable of supporting or that NV decides is too much trouble to support.
Vendors sometimes do not bother writing support for features on older GPUs even though they can technically support them. It can become too much work to write and especially to maintain multiple implementations of a feature across several different versions of a product line. You see this sort of thing in all industries, not just GPUs. Sometimes open source solutions eventually fill in the gap where the original vendor decided it was not worth the effort, but this may take many years.
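In practice, this version-versus-extension distinction is what you test for at runtime. A minimal sketch (assuming a 3.0+ context and a loader such as GLEW having resolved glGetStringi; the extension name below is only an example):

    #include <GL/glew.h>
    #include <cstring>

    // True if the current context advertises the named extension.
    bool hasExtension(const char* name)
    {
        GLint count = 0;
        glGetIntegerv(GL_NUM_EXTENSIONS, &count);
        for (GLint i = 0; i < count; ++i) {
            const char* ext =
                reinterpret_cast<const char*>(glGetStringi(GL_EXTENSIONS, i));
            if (ext && std::strcmp(ext, name) == 0)
                return true;
        }
        return false;
    }

    // e.g. a newer GPU may report this even though both expose GL 4.4:
    // bool bindless = hasExtension("GL_ARB_bindless_texture");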

How does OpenGL know to use the hardware-specific driver? [duplicate]

We all know Windows provides a generic OpenGL driver, but some display card vendors provide a specific OpenGL driver. I am wondering how the OS chooses which driver to use.
There are WGL functions for getting pointers to the appropriate functions of newer OpenGL versions. Windows chooses the GPU arbitrarily when creating the context.
So when you, for example, request a 4.3 core context, Windows looks at the list of drivers, decides which one will handle this particular context, and then all wgl calls are made so that they load the appropriate functions from that particular driver's DLL.
The easiest platform to explain how this works is Microsoft Windows. They have an "Installable Client Driver" (ICD) model, where the Windows SDK provides the bare minimum OpenGL functionality (OpenGL 1.1) and an extensible window system interface (WGL).
When you create your context using the WGL API in Microsoft Windows, you can enumerate all of the pixel formats for a specific device. Some pixel formats will be hardware accelerated and some will use the fallback software OpenGL 1.1 reference implementation. If you get a hardware accelerated format, all of your OpenGL commands will be handled by the graphics card driver and you will have a set of extensions to both GL and WGL that are not available in the reference (1.1) implementation.
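A small Windows-only sketch of telling those two cases apart (assuming an HDC obtained from the target window; the Win32 calls are standard, but the function wrapping them is just illustrative):

    #include <windows.h>
    #include <cstdio>

    void listPixelFormats(HDC hdc)
    {
        PIXELFORMATDESCRIPTOR pfd = {};
        // DescribePixelFormat returns the number of available formats.
        int count = DescribePixelFormat(hdc, 1, sizeof(pfd), &pfd);
        for (int i = 1; i <= count; ++i) {
            DescribePixelFormat(hdc, i, sizeof(pfd), &pfd);
            // Generic format without generic acceleration means the
            // fallback software OpenGL 1.1 implementation.
            bool software = (pfd.dwFlags & PFD_GENERIC_FORMAT) &&
                            !(pfd.dwFlags & PFD_GENERIC_ACCELERATED);
            std::printf("format %d: %s\n", i, software ? "software" : "hardware");
        }
    }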
This is why so much of modern OpenGL has to be handled through extension loading on Microsoft Windows: at the lowest level you only get OpenGL 1.1, and for anything newer than that, you have to ask the driver for the address of the function you want to use.
Many platforms have a similar system with respect to hardware/software co-existence. Mac OS X is even trickier, because you can start out with a hardware accelerated OpenGL implementation but call the wrong function (one that cannot be implemented in hardware on the installed GPU) and it will throw you off the "fast path" and fallback to software.
The nice thing about Mac OS X, however, is that this fallback only applies to individual stages of the pipeline. It will go back to hardware acceleration for the next stage as long as everything can be implemented in hardware. The beauty is that in OS X, the software implementation of OpenGL is complete (e.g. a 3.2 context will support the entire OpenGL 3.2 feature set even if doing so requires partial/full software implementation). You do not lose functionality if the GPU does not support something, just a LOT of performance. This is in huge contrast to Windows, where none of modern OpenGL has a software fallback.

How to ensure backwards-compatibility of my Windows OpenGL application?

I have developed a program which makes use of many of OpenGL's aspects, ranging from rather new to deprecated functionality, and I want to ensure that it works correctly on the great majority of machines, especially on ones with outdated graphics cards.
What is the best way to maximize the (backwards)compatibility of an OpenGL application?
How can I test my program for compatibility with older hardware without actually having a test machine with older hardware?
What ways are there to find the underlying causes of the issues which may be encountered during compatibility testing?
What is the best way to maximize the (backwards)compatibility of an OpenGL application?
Define "compatibility"? If you want an application to run on as much hardware as possible, then you basically have to give up on shaders entirely and stick to about GL 1.4. The main confounding issue here are Intel driver bugs; many pieces of older Intel hardware will claim support for GL 2.0 or 2.1, but they have innumerable failings in this support.
How can I test my program for compatibility with older hardware without actually having a test machine with older hardware?
You don't. Compatibility with old hardware is about more than just sticking to a standard. It's about making sure that your program doesn't encounter driver bugs. And the only way to do that is to actually test on the hardware of interest.
What ways are there to find the underlying causes of the issues which may be encountered during compatibility testing?
Test the same code on recent hardware. If it has the same failures, then the problem is likely in your code. If it works fine on recent hardware but fails on older stuff, then the problem is almost certainly a driver bug with old hardware drivers.
Develop a workaround.
Well, the best way to maximize backwards compatibility and to get a powerful tool for tracking down the target machine's functionality (imho) is to use something like GLEW: The OpenGL Extension Wrangler Library. It will load OpenGL version-specific functions for you, and you can test whether they are supported by the user's system (or, more correctly, by the video drivers).
This library is very simple to use, it is well documented, and you can google a lot of examples.
So if the target machine doesn't have some new OpenGL functions, you load a module named, say, "opengl_old.cpp", or if it doesn't have some functionality which is already deprecated (like glBegin(), glEnd()), you'd better go with "opengl_new.cpp".
Basically, the biggest changes came in OpenGL 3.0 (and furthermore 3.3), with shaders introduced as the only non-deprecated graphics pipeline, so you can make two OpenGL modules in your program: one for OpenGL 1 & 2 and one for OpenGL 3 & 4. At least that's how I solved this problem in my own code.
To test some functionality, you can also specify a concrete version of the OpenGL API to be loaded when creating the context.
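A minimal sketch of that two-module dispatch, assuming GLEW has already been initialized (the module/path names are purely illustrative):

    #include <GL/glew.h>

    enum class RenderPath { Legacy, Modern };

    // GLEW_VERSION_3_3 is true only if the driver exposes GL 3.3 or later.
    RenderPath selectRenderPath()
    {
        if (GLEW_VERSION_3_3)
            return RenderPath::Modern;   // shader-only pipeline ("opengl_new.cpp")
        return RenderPath::Legacy;       // fixed-function path ("opengl_old.cpp")
    }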