How does OpenGL know to use the hardware-specific driver? [duplicate] - opengl

This question already has answers here:
How does OpenGL work at the lowest level? [closed]
(4 answers)
Closed 9 years ago.
We all know Windows provides a generic OpenGL driver, but some graphics card vendors provide their own specific OpenGL drivers. I am wondering how the OS chooses which driver to use.

There are WGL functions for getting pointers to the functions of newer OpenGL versions. Windows chooses the GPU arbitrarily when creating the context.
So when you request, for example, a 4.3 core context, Windows looks at the list of installed drivers, decides which one will handle that particular context, and then all wgl calls resolve the appropriate functions from that driver's DLL.

The easiest platform on which to explain how this works is Microsoft Windows. It has an "Installable Client Driver" (ICD) model, where the Windows SDK provides the bare minimum OpenGL functionality (OpenGL 1.1) and an extensible window-system interface (WGL).
When you create your context using the WGL API in Microsoft Windows, you can enumerate all of the pixel formats for a specific device. Some pixel formats will be hardware accelerated and some will use the fallback software OpenGL 1.1 reference implementation. If you get a hardware accelerated format, all of your OpenGL commands will be handled by the graphics card driver and you will have a set of extensions to both GL and WGL that are not available in the reference (1.1) implementation.
This is why so much of modern OpenGL has to be handled through extension loading on Microsoft Windows: at the lowest level you only get OpenGL 1.1, and for anything newer than that, you have to ask the driver for the address of the function you want to use.
Many platforms have a similar system with respect to hardware/software co-existence. Mac OS X is even trickier, because you can start out with a hardware-accelerated OpenGL implementation but call the wrong function (one that cannot be implemented in hardware on the installed GPU), and it will throw you off the "fast path" and fall back to software.
The nice thing about Mac OS X, however, is that this fallback only applies to individual stages of the pipeline. It will go back to hardware acceleration for the next stage as long as everything can be implemented in hardware. The beauty is that in OS X, the software implementation of OpenGL is complete (e.g. a 3.2 context will support the entire OpenGL 3.2 feature set even if doing so requires partial/full software implementation). You do not lose functionality if the GPU does not support something, just a LOT of performance. This is in huge contrast to Windows, where none of modern OpenGL has a software fallback.

Related

What is meant by an OpenGL ES implementation on an OS?

I started looking for docs to learn OpenGL ES and came across a lot of links. One of them explained it like this: "OpenGL need to be supported by the vendors of Graphics Cards (like NVidia) and be implemented by the OS's vendors (like Apple in his MacOS and iOS) and finally, the OpenGL give to us, developers, a unified API to work with".
What is meant by the following?
OpenGL need to be supported by the vendors of Graphics Cards (like NVidia)
Is this different from how normal code libraries are executed?
be implemented by the OS's vendors (like Apple in his MacOS and iOS)...
Is this specific to each OS vendor?
If all the implementation is done by the vendors, what does OpenGL ES itself actually do?
I was thinking OpenGL ES is a library that needs to be installed on the OS in question, and that we call it using the EGL APIs. Isn't that so?
finally the OpenGL give to us, developers, a unified API to work with
If the OS vendor develops everything itself, why go for OpenGL ES?
Please explain, possibly with an example.
Despite what its name suggests, OpenGL is not really a library. Or rather, it is one in the sense that some symbols for the functions you use need to be linked into your executable.
But OpenGL is a de facto standard library, and a good number of OSes have a good part of the implementation in their kernel or system libraries. OSX and iOS provide OpenGL.framework and GLKit.framework to interface with OpenGL.
The Architecture Review Board (ARB, now part of the Khronos Group) just publishes the specification of OpenGL. That is, it defines what each function should be named, which parameters it should accept, and what its behavior should be. The graphics card vendors take this specification and implement the API: the function interface and high-level behavior are defined by the ARB, but the internal implementation depends on the vendor's hardware. This OpenGL implementation then ships as part of the driver. Drivers are generally split into an operating-system interface part and a hardware interface part, so the operating system also needs some support for the driver.

OpenGL and Direct3D: From a programmer's perspective, where do they stand? [duplicate]

This question already has answers here:
How does OpenGL work at the lowest level? [closed]
(4 answers)
Closed 8 years ago.
I'm very new to graphics programming and trying to understand how graphics programming works. From what I've read so far, I'm still not clear about where APIs like OpenGL and Direct3D stand, or who actually implements them.
Drivers talk to hardware directly, so NVIDIA/AMD etc. write drivers, and someone else needs to implement these APIs on top of them? But I saw an "OpenGL driver" on Nvidia's website; does that mean OpenGL is actually a driver-level API that talks directly to the graphics hardware? So Nvidia/AMD implement these APIs?
I can understand that game engines etc. are written on top of OpenGL/Direct3D, but I couldn't figure out where exactly these APIs stand from a programmer's perspective.
Direct3D itself is wholly implemented by Microsoft. However, it specifies an even lower-level API (the DXGI DDI and the Direct3D DDI, respectively) that is then implemented by Nvidia/AMD as part of their device drivers. So basically, D3D is a shim between application code and driver code. Recent advances in graphics architecture have caused the intermediate layer provided by D3D to become thinner and thinner, so as to reduce CPU overhead.
OpenGL under Windows is implemented similarly: Microsoft provides the application-facing implementation of the OpenGL API, but forwards it to a device-driver implementation, the OpenGL Installable Client Driver.
The quality of OpenGL implementations under Windows is known to vary between vendors. For this reason, both Firefox and Chrome execute WebGL via ANGLE, which translates OpenGL API calls into Direct3D API calls, thus taking the more stable code path.

GPU DirectX VS OpenGL support

As I understand it, GPU vendors defined a standard interface to be used by OS developers to communicate with their specific drivers, so DirectX and OpenGL are just wrappers for that interface. When OS developers decide to create a new version of the graphics API, GPU vendors expand their interface (new routines are faster, and older ones are left in place for compatibility), and OS developers use this new part of the interface.
So, when it is said that GPU vendors' support for DirectX is better than their support for OpenGL, does it simply mean that GPU vendors primarily take into account Microsoft's future plans for the DirectX API and adjust the future development of this interface to those needs? Or are there technical reasons behind this?
As I understand it, GPU vendors defined a standard interface to be used by OS developers to communicate with their specific drivers, so DirectX and OpenGL are just wrappers for that interface.
No, not really. DirectX and OpenGL are just specifications that define APIs, and a specification is nothing more than a document, not software. The OpenGL API specification is controlled by Khronos; the DirectX API specification is controlled by Microsoft. Each OS then defines a so-called ABI (Application Binary Interface) that specifies which system-level APIs the OS supports (OpenGL and DirectX are system-level APIs) and what rules an actual implementation must adhere to when running on the OS in question.
The actual OpenGL or Direct3D implementation happens in the hardware's drivers (and in fact the hardware itself is part of the implementation as well).
When OS developers decide to create a new version of the graphics API, GPU vendors expand their interface
In fact it's the other way around: most graphics API specifications are laid out by the graphics hardware vendors. After all, they are closest to where the rubber meets the road. In the case of Khronos, the GPU makers are part of its controlling group. In the case of DirectX, the hardware makers submit drafts to Microsoft and review the changes and suggestions Microsoft makes. But in the end, each new API release reflects the common denominator of the capabilities of the next hardware generation in development.
So, when it is said that GPU vendors' support for DirectX is better than for OpenGL, does it simply mean that GPU vendors primarily take into account Microsoft's future plans of developing DirectX API structure and adjust future development of this interface to their needs?
No, it means that each GPU vendor implements its own version of OpenGL and its own Direct3D backend, which is where all the magic happens. However, OpenGL puts a lot of emphasis on backward compatibility and ease of transition to newer functionality, while Direct3D development is quick to cut ties with earlier versions. This also means that full-blown compatibility-profile OpenGL implementations are quite complex beasts. That's also the reason why recent OpenGL core profiles did (overdue) work in cutting down support for legacy features; this reduction of API complexity is quite a liberating thing for developers. If you develop purely for a core profile, it simplifies a lot of things; for example, you no longer have to worry about a plethora of internal state when writing a plugin.
Another factor is that for Direct3D there is exactly one shader compiler, which is not part of the driver infrastructure/implementation itself but runs at program build time. OpenGL implementations, however, must each ship their own GLSL shader compiler, which complicates things. IMHO the lack of a unified AST or intermediate shader representation is one of the major shortcomings of OpenGL.
There is not a 1:1 correspondence between the graphics hardware abstraction and graphics APIs like OpenGL and Direct3D. WDDM, Windows Vista's driver model, defines things like common scheduling and memory management so that DirectX and OpenGL applications work interoperably, but very little of the design of DirectX, OpenGL, or GPUs in general has to do with this. Think of it like the kernel: nobody creates a CPU specifically to run it, and you do not have to recompile the kernel every time a new iteration of a processor architecture adds a new subset of instructions.
Application developers and IHVs (GPU vendors, as you call them) are the ones who primarily deal with changes to GPU architecture. It may appear that the operating system has more to do with the equation than it actually does, because Microsoft (more so) and Apple, who both maintain their own proprietary operating systems, are influential in the design of DirectX and OpenGL. These days OpenGL closely follows the development of commodity desktop GPU hardware, but this was not always the case: it carries baggage from the days of custom SGI workstations, and many things in the compatibility profile have not been hardware-native on desktop GPUs in decades. DirectX, on the other hand, has always followed desktop hardware. It used to be that if you wanted an indication of where desktop GPUs were headed, D3D was a good marker.
OpenGL is arguably more complicated than DirectX because until recently it never let go of anything, whereas DirectX radically redefined the API and stripped legacy support with every iteration. Both APIs have settled down in recent years, but D3D still maintains a bit of an edge, considering it only has to be implemented on a single platform and Microsoft writes the one and only shader compiler. If anything, the single shader compiler and the minimal feature set (devoid of legacy baggage) in D3D are probably why you get the impression that vendors support it better.
With the emergence of AMD Mantle, the desktop picture might change again (think back to the days of 3Dfx and Glide)... it certainly goes to show that OS developers have very little to do with graphics API design. NV and AMD both have proprietary APIs on the PS3, GameCube/Wii/WiiU, and PS4 that they have to implement in addition to D3D and OpenGL on the desktop, so the overall picture is much broader than you think.

How does OpenGL differentiate between software and hardware implementations?

If I have a software implementation and also a graphics card that supports OpenGL, which of these does OpenGL use?
This is both a simple and a complicated question. The simple answer is that OpenGL neither knows nor cares. OpenGL is not a thing; it is a document. A specification. Implementations of OpenGL are things.
Which brings up the complicated part. How you talk to an OpenGL implementation depends on what platform you live on. MesaGL can be compiled as nothing more than a library you link to.
If you want hardware acceleration, then you have to deal with the OS, because the OS owns the GPU. Mesa as a driver is implemented through the GLX system. It hooks into X Windows, and X Windows' OpenGL context-creation functions can give you a context implemented by Mesa's software drivers, or by its hardware drivers. If you're using other drivers, they too hook into X Windows. These are all tied to X Windows "displays".
On Windows, it's much simpler. There is precisely one ICD driver. If it's installed, and you use a pixel format that it supports (i.e., something reasonable), then you get hardware-accelerated OpenGL through it. If it isn't, you get Microsoft's software implementation.

Which OpenGL version is most stable and currently used?

I've been thinking of making an additional wrapper for my project to use OpenGL rather than Allegro. I was not sure which OpenGL version to go for, since I know that some computers cannot run recent versions such as 4.4. Also, I need a version which compiles without problems on Linux, Windows, and Mac.
You'll want to look at what kinds of graphics cards will be available on your target systems and bear some details in mind:
OpenGL up to 1.5 can be completely emulated in software in real time on most systems. You don't necessarily need hardware support for good performance.
OpenGL 1.4 has universal support. Virtually all hardware supports it.
Mac OS X only supports up to OpenGL 2.0 or 2.1, depending on the OS version and hardware. Systems using the GMA950 have only OpenGL 1.4 support. Mac OS X Lion 10.7 supports the OpenGL 3.2 core profile on capable hardware.
On Linux, it's not unusual for users to specifically prefer open source drivers over the alternative "binary blobs," so bear in mind that the version of Mesa that most people have supports only up to about OpenGL 2.1 compatibility. Upcoming versions have support for OpenGL 3.x. Closed-source "binary blobs" will generally support the highest OpenGL version for the hardware, including up to OpenGL 4.2 Core.
When considering what hardware is available to your users, the Steam Hardware Survey may help. Note that most users have DirectX 9-compatible hardware, which is roughly feature-equivalent to OpenGL 2.0. Wikipedia's OpenGL article also specifies which hardware came with initial support for each version.
If you use a library like GLEW or GLee, or any toolkit that depends on them or offers similar functionality (like SFML, or even Allegro since 4.3), then you won't need to concern yourself with whether your code will compile. These toolkits take care of the details of enabling extensions and providing all of the symbols you need.
Given all of this, I'd suggest targeting OpenGL 2.1 to get the widest audience possible with the best feature support.
Your safest bet is OpenGL 2.1, though it needs to be supported by the driver on your target system. OpenGL ES, used on several mobile platforms, is basically a simplified OpenGL 2, so even porting to those platforms would be fairly easy. I highly recommend using GLEW, as VJo said.
It's less about operating systems, and more about video card drivers.
I think 1.4 is the highest version that enjoys support across all consumer graphics systems: ATI (AMD), NVIDIA, and Intel IGPs. Intel is definitely the limiting factor here: even when ATI or NVIDIA hardware lacks support for a feature, they release OpenGL 4.1 drivers which emulate the missing features in software. Not so with Intel.
OpenGL is not a library you usually compile and ship yourself (unless you're a Linux distributor packaging X.Org/Mesa). Your program just dynamically links against libGL.so (Linux/BSD), opengl32.dll (Windows; on 64-bit systems it's also called opengl32.dll, but it is in fact a 64-bit DLL), or the OpenGL framework (Mac OS X). This gives your program access to the system's OpenGL installation. The version/profile you want to use has no influence on the library you link against!
Then, after your program has been initialized, you can test which OpenGL version is available. If you want to use OpenGL 3 or 4, you'll have to jump through a few additional hoops on Windows to make full use of it, but normally some kind of wrapper helps you with context creation anyway, boiling it down to only a few lines.
Then, in the program, you can implement multiple code paths for the various versions. Usually lower-version OpenGL code paths share a large subset with higher-version code paths. I recommend writing new code against the highest version available, then adding additional code paths (often just substitutions that can be done with C preprocessor macros or similar) for lower versions until you reach the lowest common denominator of features you really need.
Then you need to use OpenGL 1.1 and load the needed (and supported) functions through wglGetProcAddress (on Windows) or glXGetProcAddress (on Linux).
Instead of using those two functions directly, you can use the GLEW library, which does that for you and is cross-platform.