As I understand it, GPU vendors define a standard interface that OS developers use to communicate with their specific drivers, so DirectX and OpenGL are just wrappers for that interface. When OS developers decide to create a new version of a graphics API, GPU vendors expand their interface (the new routines are faster, and the older ones are kept for compatibility) and OS developers use this new part of the interface.
So, when it is said that GPU vendors' support for DirectX is better than for OpenGL, does it simply mean that GPU vendors primarily take Microsoft's future plans for the DirectX API into account and adjust the future development of their interface to those plans? Or are there technical reasons behind this?
As I understand it, GPU vendors define a standard interface that OS developers use to communicate with their specific drivers, so DirectX and OpenGL are just wrappers for that interface.
No, not really. DirectX and OpenGL are just specifications that define APIs, and a specification is nothing more than a document, not software. The OpenGL API specification is controlled by Khronos; the DirectX API specification is controlled by Microsoft. Each OS then defines a so-called ABI (Application Binary Interface) that specifies which system-level APIs are supported by the OS (OpenGL and DirectX are system-level APIs) and what rules an actual implementation must adhere to when running on the OS in question.
The actual OpenGL or Direct3D implementation lives in the hardware's drivers (and in fact the hardware itself is part of the implementation as well).
When OS developers decide to create a new version of a graphics API, GPU vendors expand their interface
In fact it's the other way round: most graphics API specifications are laid out by the graphics hardware vendors. After all, they are closest to where the rubber hits the road. In the case of Khronos, the GPU makers are part of the group that controls Khronos. In the case of DirectX, the hardware makers submit drafts to Microsoft and review the changes and suggestions Microsoft makes. But in the end, each new API release reflects the common denominator of the capabilities of the next hardware generation in development.
So, when it is said that GPU vendors' support for DirectX is better than for OpenGL, does it simply mean that GPU vendors primarily take Microsoft's future plans for the DirectX API into account and adjust the future development of their interface to those plans?
No, it means that each GPU vendor implements its own version of OpenGL and the Direct3D backend, which is where all the magic happens. However, OpenGL puts a lot of emphasis on backward compatibility and ease of transition to newer functionality, whereas Direct3D development is quick to cut ties with earlier versions. This also means that full-blown compatibility-profile OpenGL implementations are quite complex beasts. That's also the reason why recent OpenGL core profile versions did the (overdue) work of cutting down support for legacy features; this reduction of API complexity is also quite liberating for developers. If you develop purely for a core profile, it simplifies a lot of things; for example, you no longer have to worry about a plethora of internal state when writing a plugin.
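To make "developing purely for a core profile" concrete, here is a minimal sketch of requesting a core-profile context. It assumes GLFW 3.x is used for window/context creation; the hint names are GLFW's, not part of OpenGL itself.

```cpp
// Sketch: request a 3.3 core-profile context with GLFW (GLFW 3.x assumed).
// The driver then only has to expose the slimmed-down core API, no legacy state.
#include <GLFW/glfw3.h>

int main() {
    if (!glfwInit()) return -1;

    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

    GLFWwindow* window = glfwCreateWindow(640, 480, "Core profile", nullptr, nullptr);
    if (!window) { glfwTerminate(); return -1; }   // e.g. driver cannot provide 3.3 core

    glfwMakeContextCurrent(window);
    // ... render loop ...
    glfwTerminate();
    return 0;
}
```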
Another factor is that for Direct3D there's exactly one shader compiler, which is not part of the driver infrastructure/implementation itself but is run at program build time. OpenGL implementations, however, must each implement their own GLSL compiler, which complicates things. IMHO the lack of a unified AST or intermediate shader representation is one of the major shortcomings of OpenGL.
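To see why that matters in practice: a GLSL shader is handed to the driver as plain source text at run time, so every vendor's driver ships its own compiler. A minimal sketch using standard GL 2.0+ calls, with error handling trimmed:

```cpp
// Sketch: GLSL is compiled by the driver at run time (standard GL 2.0+ calls).
// Whatever compiler quirks the vendor's driver has, you hit them here, on the user's machine.
GLuint compileShader(GLenum type, const char* source) {
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &source, nullptr);
    glCompileShader(shader);                       // the vendor-specific GLSL compiler runs now

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (ok != GL_TRUE) {
        char log[1024];
        glGetShaderInfoLog(shader, sizeof(log), nullptr, log);
        // log contents differ from driver to driver -- there is no single reference compiler
    }
    return shader;
}
```

By contrast, HLSL for Direct3D is typically compiled ahead of time (e.g. with Microsoft's fxc or D3DCompile) into bytecode, which the driver only has to translate.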
There is not a 1:1 correspondence between the graphics hardware abstraction and graphics APIs like OpenGL and Direct3D. WDDM, Windows Vista's driver model, defines things like common scheduling, memory management, etc., so that DirectX and OpenGL applications work interoperably, but very little of the design of DirectX, OpenGL, or GPUs in general has to do with this. Think of it like the kernel: nobody creates a CPU specifically to run it, and you do not have to re-compile the kernel every time a new iteration of a processor architecture comes out that adds a new subset of instructions.
Application developers and IHVs (GPU vendors, as you call them) are the ones who primarily deal with changes to GPU architecture. It may appear that the operating system has more to do with the equation than it actually does, because Microsoft (more so) and Apple, who both maintain their own proprietary operating systems, are influential in the design of DirectX and OpenGL. These days OpenGL closely follows the development of commodity desktop GPU hardware, but this was not always the case: it carries baggage from the days of custom SGI workstations, and many things in compatibility profiles have not been hardware-native on desktop GPUs in decades. DirectX, on the other hand, has always followed desktop hardware. It used to be that if you wanted an indication of where desktop GPUs were headed, D3D was a good marker.
OpenGL is arguably more complicated than DirectX because, until recently, it never let go of anything, whereas DirectX radically redefined the API and stripped legacy support with every iteration. Both APIs have settled down in recent years, but D3D still maintains a bit of an edge considering it only has to be implemented on a single platform and Microsoft writes the one and only shader compiler. If anything, the shader compiler and the minimal feature set (devoid of legacy baggage) in D3D are probably why you get the impression that vendors support it better.
With the emergence of AMD Mantle, the desktop picture might change again (think back to the days of 3Dfx and Glide)... it certainly goes to show that OS developers have very little to do with graphics API design. NV and AMD both have proprietary APIs on the PS3, GameCube/Wii/WiiU, and PS4 that they have to implement in addition to D3D and OpenGL on the desktop, so the overall picture is much broader than you think.
Related
I started looking for docs to learn OpenGL ES and came across a lot of links. One of them explained it like this: "OpenGL needs to be supported by the vendors of graphics cards (like NVIDIA) and be implemented by the OS vendors (like Apple in its macOS and iOS), and finally, OpenGL gives us developers a unified API to work with."
What is meant by:
OpenGL needs to be supported by the vendors of graphics cards (like NVIDIA)
Is this something different from how normal code libraries are executed?
be implemented by the OS vendors (like Apple in its macOS and iOS)...
Is this specific to the OS vendor?
If all the implementation is done by the vendors, what does OpenGL ES itself actually do?
I was thinking OpenGL ES is a library which needs to be installed on the OS in question, and that we call into it using the platform-specific EGL APIs. Isn't that right?
finally, OpenGL gives us developers a unified API to work with
If the OS itself develops everything, why go for OpenGL ES?
Please explain, possibly with an example.
Unlike its name suggests, OpenGL is not a library. Or rather, it is only in the sense that the symbols for the functions you use need to be linked into the executable.
But OpenGL is a de facto standard, and a good part of the implementation ships with the OS's driver stack. OS X and iOS provide OpenGL.framework and GLKit.framework to interface with OpenGL.
The Architecture Review Board (ARB) only provides the specification of OpenGL. That is, it defines what each function should be named, which parameters it should accept, and what its behavior should be. All the graphics card vendors take this specification and implement the API. The function interfaces and high-level behavior are defined by the ARB, but the internal implementation depends on the vendor's hardware. This OpenGL implementation then lives in the driver. Drivers are generally split into an operating-system interface part and a hardware interface part, so the operating system also needs some support for the driver.
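As a small illustration of spec vs. implementation (a sketch, assuming an OpenGL ES 2.0 context has already been created and made current via EGL): the header and the function names come from the Khronos specification, but the strings returned tell you whose implementation you actually ended up calling into.

```cpp
#include <GLES2/gl2.h>   // API declarations, straight from the Khronos spec
#include <cstdio>

// Assumes an EGL context is already current on this thread.
void printImplementationInfo() {
    // The function names and signatures are fixed by the spec; the values returned
    // come from whichever vendor implementation the OS/driver provides.
    printf("Vendor:   %s\n", reinterpret_cast<const char*>(glGetString(GL_VENDOR)));   // e.g. "Qualcomm", "ARM", "Apple"
    printf("Renderer: %s\n", reinterpret_cast<const char*>(glGetString(GL_RENDERER))); // the actual GPU / driver backend
    printf("Version:  %s\n", reinterpret_cast<const char*>(glGetString(GL_VERSION)));  // "OpenGL ES 2.0 ..." plus driver info
}
```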
I have seen many graphics applications which primarily support OpenGL. I have also noticed that many of these applications have a -d3d flag which will force them to use the DirectX API instead.
How exactly can a single graphics application use two different APIs but render the exact same result? Surely they would need to add code for both APIs, which is time consuming and a bit of a waste?
The topic of OpenGL vs. DirectX is touchy with many folks, so let me start by saying my intent is to not inflame either side of the "API wars".
If you are talking about general graphics software like CAD, it tends to have a large set of cross-platform targets to support. In these cases, developers use OpenGL for systems like the Mac and *nix as well as Windows.
While developers can use OpenGL on Windows, the 'default' support for OpenGL on Windows is pretty minimal: OpenGL 1.x software rendering only. If you want to use a more full-featured version of OpenGL, the user needs to have an OpenGL ICD installed. The quality of these OpenGL ICDs depends heavily on each hardware vendor keeping up with the standard, and in the past the OpenGL ICDs for Windows have been tuned for the very specific calling patterns of specific applications (read "Doom"). If your application didn't match those patterns, you would sometimes hit strange performance problems or other bugs.
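One practical consequence (a hedged sketch): if no vendor ICD is installed, context creation on Windows falls back to Microsoft's software implementation, which typically identifies itself as "GDI Generic" and reports version 1.1. Checking the renderer/version strings at startup is a common way to detect this and prompt the user to install or update their graphics driver.

```cpp
#include <windows.h>
#include <GL/gl.h>     // glGetString is part of GL 1.1, so it's always available
#include <cstring>

// Assumes a GL context has already been created and made current.
bool runningOnDefaultWindowsOpenGL() {
    const char* renderer = reinterpret_cast<const char*>(glGetString(GL_RENDERER));
    const char* version  = reinterpret_cast<const char*>(glGetString(GL_VERSION));
    // Microsoft's fallback software renderer typically identifies as "GDI Generic"
    // and reports a 1.1.x version -- i.e. no hardware ICD was loaded.
    return renderer && std::strstr(renderer, "GDI Generic") != nullptr
        && version  && version[0] == '1';
}
```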
There has also been a tendency in the video hardware industry for the OpenGL support on Windows 'consumer-grade' hardware to be a fairly minimal effort aimed at a few games, while the 'workstation-grade' hardware gets lots of developer resources focused on writing a high-quality OpenGL ICD. In this case, software aimed at workstations (again, read CAD) tends to offer both APIs to cater to users on both cheaper and more expensive systems. Conversely, for games it's unrealistic to require a $2000+ video card to play, so it's often much easier and more robust to use DirectX--unless you happen to be the one "must-have" OpenGL-only game on the market that the 'consumer-grade' drivers actually work well with.
Using DirectX, particularly DirectX 11 on Windows 7+, works well 'out of the box' for most systems across a wide array of devices from different vendors. Some software developers have found it to be far more consistent across different devices and use it as the default when they offer both OpenGL and DirectX. This is why many PC games are DirectX-only on Windows. However, if they are already doing an abstraction to support OpenGL for the Mac, they may well leave it as an option for users on Windows as well.
For games, abstraction is typical in the rendering engines. To get good performance for a AAA title and hit all the different platforms that matter to them, studios typically have to implement multiple versions of their renderer anyhow (Direct3D 11 for Windows Vista+, Direct3D 9 for Windows XP especially for emerging markets, a variant of Direct3D 9 for Xbox 360, a variant of Direct3D 11 for Xbox One, OpenGL for the Mac, specific renderers for PlayStation 3 and/or 4, and OpenGL ES for mobile games). This is usually achieved through an abstraction layer, plus dedicated coders for each platform who shepherd the game through to ship, taking advantage of whatever hardware-specific features matter to them. It is not an inexpensive solution, but many large publishers spread project risk by offering versions of their content across many different platforms and consider it worth the additional cost.
For indie and small developers, trying to support both OpenGL and DirectX can be a pretty big challenge. It's usually better for them to focus on making a great game for a single platform using a single, well-supported API on that platform. Then, if they are successful, they can port it to more platforms by doing the work needed to implement multiple rendering APIs -and- create custom content processing to tailor it to each platform. That said, indie games often don't focus on differentiating on graphics, so the feature and performance trade-offs of using an abstraction layer are less of a problem. It is, however, expensive to test your game across many different devices, let alone to do it twice to cover both OpenGL and DirectX.
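A sketch of what such an abstraction layer often looks like at its core. All names here are hypothetical, not taken from any particular engine: the game code talks to a narrow interface, and each target ships exactly one concrete backend behind it.

```cpp
// Hypothetical renderer abstraction -- the shape of the layer, not a real engine API.
#include <cstddef>
#include <memory>

struct MeshHandle { unsigned id; };

class Renderer {
public:
    virtual ~Renderer() = default;
    virtual MeshHandle createMesh(const float* vertices, std::size_t vertexCount) = 0;
    virtual void drawMesh(MeshHandle mesh) = 0;
    virtual void present() = 0;
};

// One concrete backend per platform/API, selected (or compiled in) per target:
class D3D11Renderer : public Renderer { /* wraps ID3D11Device / ID3D11DeviceContext */ };
class GLRenderer    : public Renderer { /* wraps a desktop OpenGL 3.x/4.x context   */ };
class GLESRenderer  : public Renderer { /* wraps OpenGL ES for mobile targets       */ };

// Game code only sees the interface, e.g. (factory function is hypothetical):
//   std::unique_ptr<Renderer> r = createRendererForCurrentPlatform();
//   MeshHandle mesh = r->createMesh(vertexData, vertexCount);
//   r->drawMesh(mesh);  r->present();
```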
As far as I know, OpenGL updates include extensions which were defined by vendors and which use hardware features. I have a GeForce 520M (released in 2010), but it supports OpenGL 4.4 (released in 2014). Obviously it cannot support some hardware features which are needed in OpenGL 4.4, yet it still supports OpenGL 4.4. How is support for new OpenGL versions provided on old video cards?
Why do you think that it does not support the HW features? It does. OpenGL is a rendering API specification, not a HW one. The fact that new features were added to GL does not mean that new HW is required to implement them.
On NVIDIA, all GPUs since the Fermi architecture support OpenGL 4.x as far as it has been specified so far. That does not guarantee that they will support everything a future GL 4.x version might introduce.
Currently, the GL major versions can be tied to major HW generations: GL 2.x is the really old stuff, GL 3.x has been supported by NVIDIA since the GeForce 8xxx series (released in 2006), and Fermi/Kepler/Maxwell support 4.x.
According to you, progress in video cards has stopped since the Fermi architecture. Why would vendors release new video cards if they can just update OpenGL to add new functions? I think that is untrue. Vendors try to find ways to accelerate video cards at the HW level and add functions that use it. Or do I not understand it clearly? And so what does it mean that my video card supports OpenGL 4.x?
No, you have it backwards.
You could say that the progress in graphics APIs with respect to hardware functionality has stopped since Fermi. Every new generation of GPUs generally adds new HW features, but the features exposed by every new version of OpenGL do not necessarily require new HW. In fact, core GL often lags a generation or more behind HW capabilities in required functionality.
OpenGL still lacks support for some features introduced in Direct3D 11 and likewise Direct3D 11.x lacks support for some OpenGL 4.x features and neither API fully exposes the underlying hardware. This is because they are designed around supporting the most hardware possible rather than all of the features of one specific piece of hardware. AMD's solution to this "problem" was to introduce a whole new API (Mantle) that more closely follows the feature set of their Graphics Core Next-based architectures; more of a console approach to API design.
There may be optional ARB extensions introduced alongside the yearly release of new GL version specifications, but until they are promoted to core, GL does not require support for them.
Until things are standardized across all of the vendors who are part of the ARB, some features of GPU hardware are unusable in core GL. This is why GL has extensions in addition to core versions. A 720m GPU will support many new extensions that your 520m GPU does not, but at the same time they can both implement all of the required functionality from GL 4.4. There is no guarantee, however, that GL 4.5 will not introduce some new required feature that your 520m GPU is incapable of supporting or that NV decides is too much trouble to support.
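Checking for a specific extension at run time is how an application opts into hardware features that core GL does not require. A minimal sketch using the GL 3.0+ indexed query; the extension name at the end is just an example, and a loader such as GLEW or GLAD is assumed to provide the declarations.

```cpp
#include <GL/glew.h>   // or any other loader that declares glGetStringi etc.
#include <cstring>

// Assumes a GL 3.0+ context is current and the loader has been initialized.
bool hasExtension(const char* name) {
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; ++i) {
        const char* ext = reinterpret_cast<const char*>(glGetStringi(GL_EXTENSIONS, i));
        if (ext && std::strcmp(ext, name) == 0) return true;
    }
    return false;
}

// e.g. a newer GPU might report this extension, an older one simply won't:
//   bool bindless = hasExtension("GL_ARB_bindless_texture");
```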
Vendors sometimes do not bother writing support for features on older GPUs even though they can technically support them. It can become too much work to write and especially to maintain multiple implementations of a feature across several different versions of a product line. You see this sort of thing in all industries, not just GPUs. Sometimes open source solutions eventually fill-in the gap where the original vendor decided it was not worth the effort, but this may take many years.
I have developed a program which makes use of many of OpenGL's features, ranging from rather new to deprecated functionality, and I want to ensure that it works correctly on the great majority of machines, especially on ones with outdated graphics cards.
What is the best way to maximize the (backwards) compatibility of an OpenGL application?
How can I test my program for compatibility with older hardware without actually having a test machine with older hardware?
What ways are there to find the underlying causes of the issues which may be encountered during compatibility testing?
What is the best way to maximize the (backwards) compatibility of an OpenGL application?
Define "compatibility"? If you want an application to run on as much hardware as possible, then you basically have to give up on shaders entirely and stick to about GL 1.4. The main confounding issue here is Intel driver bugs; many pieces of older Intel hardware will claim support for GL 2.0 or 2.1, but they have innumerable failings in that support.
How can I test my program for compatibility with older hardware without actually having a test machine with older hardware?
You don't. Compatibility with old hardware is about more than just sticking to a standard. It's about making sure that your program doesn't encounter driver bugs. And the only way to do that is to actually test on the hardware of interest.
What ways are there to find the underlying causes of the issues which may be encountered during compatibility testing?
Test the same code on recent hardware. If it has the same failures, then the problem is likely in your code. If it works fine on recent hardware but fails on older stuff, then the problem is almost certainly a driver bug with old hardware drivers.
Develop a workaround.
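In practice such a workaround usually ends up keyed off the vendor/renderer strings, which is crude but about the only handle you get. A sketch; the specific strings and the quirk flag are illustrative only, not real known bugs.

```cpp
#include <GL/glew.h>   // any GL header/loader works; GLEW used here for illustration
#include <string>

struct DriverQuirks {
    bool avoidSlowPathX = false;   // illustrative flag, not a documented driver bug
};

// Assumes a GL context is current. Keyed off GL_VENDOR / GL_RENDERER strings,
// which is the usual (if crude) way to enable per-driver workarounds.
DriverQuirks detectQuirks() {
    DriverQuirks q;
    std::string vendor   = reinterpret_cast<const char*>(glGetString(GL_VENDOR));
    std::string renderer = reinterpret_cast<const char*>(glGetString(GL_RENDERER));
    if (vendor.find("Intel") != std::string::npos &&
        renderer.find("945") != std::string::npos) {   // hypothetical old chipset match
        q.avoidSlowPathX = true;
    }
    return q;
}
```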
Well, the best way to maximize backwards compatibility and to get a powerful tool for tracking down a target machine's functionality (IMHO) is to use something like GLEW: The OpenGL Extension Wrangler Library. It will load the OpenGL version-specific functions for you, and you can test whether they are supported by the user's system (or, more correctly, by the video drivers).
This library is very simple to use, it is well documented, and you can google a lot of examples.
So if the target machine doesn't have some of the newer OpenGL functions, you load a module named, say, "opengl_old.cpp"; or if it does have functionality which is already deprecated (like glBegin(), glEnd()), you'd better go with "opengl_new.cpp".
Basically, the biggest changes came in OpenGL 3.0 (and even more so 3.3), with shaders introduced as the only non-deprecated graphics pipeline, so you can make two OpenGL modules in your program: one for OpenGL 1 & 2 and one for OpenGL 3 & 4. At least that is how I solved this problem in my own code.
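A small sketch of that approach with GLEW (it assumes a context is already current before glewInit() is called; the module/function split around it is up to you):

```cpp
#include <GL/glew.h>

// Call once after the context has been created and made current.
// Returns true if the modern (GL 3.3+, shader-based) render path can be used.
bool initAndPickRenderPath() {
    glewExperimental = GL_TRUE;          // commonly needed when using core profiles
    if (glewInit() != GLEW_OK) return false;

    if (GLEW_VERSION_3_3) {
        // load the "opengl_new" path: VAOs, GLSL shaders, no fixed function
        return true;
    }
    // fall back to the "opengl_old" path: GL 1.x/2.x fixed-function rendering
    return false;
}
```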
To test specific functionality, you can request a concrete version of the OpenGL API when creating the context.
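This uses the same mechanism as the core-profile sketch earlier, just asking for an older version to exercise the legacy code path (GLFW again assumed):

```cpp
// Ask for a GL 2.1 context to test the legacy path. Note that drivers may still
// hand back a newer compatibility context, so also check glGetString(GL_VERSION).
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 2);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 1);
GLFWwindow* testWindow = glfwCreateWindow(640, 480, "GL 2.1 test", nullptr, nullptr);
```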
What is the difference between OpenGL and Direct3D? Are they truly different implementations that accomplish the same things (like Java and Microsoft's CLR [.NET])?
They are very different graphics APIs, but it's fair to say they mostly accomplish the same thing. DirectX is probably the API of choice if you are developing a game under Windows (or a game for Xbox), and OpenGL is the choice if you want cross-platform support. Mac OS uses GL, as does the iPhone, for example, and many Windows games also support OpenGL.
Because OpenGL was developed over a long time and 'by committee', it comes with a lot of baggage - the API has some older options that aren't really relevant today. That's one of the reasons for OpenGL ES; it cuts out all the junk and makes for an easier target platform.
DirectX, on the other hand, is controlled by Microsoft, and as such it has a more 'modern' feel to it (it's based on COM components, so it's highly object oriented). MS often updates the API to match new hardware.
Sometimes you don't have the luxury of choice (the iPhone, for example, can't run DX). But often it just comes down to personal preference/experience. If you're a long-time graphics programmer, you tend to get pretty familiar with both...
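To illustrate the COM flavor, here is a trimmed sketch of D3D11 device creation; real code would check HRESULTs properly and Release() the interfaces when done. The point is that you obtain reference-counted interface objects and call methods on them, rather than driving a global state machine of free functions as in classic GL.

```cpp
#include <d3d11.h>
#pragma comment(lib, "d3d11.lib")

// Sketch only: error handling and Release() calls omitted for brevity.
bool createDevice() {
    ID3D11Device*        device  = nullptr;   // COM interface, reference counted
    ID3D11DeviceContext* context = nullptr;   // draw calls are methods on this object
    D3D_FEATURE_LEVEL    level   = {};

    HRESULT hr = D3D11CreateDevice(
        nullptr,                    // default adapter
        D3D_DRIVER_TYPE_HARDWARE,   // use the vendor's driver
        nullptr, 0,                 // no software rasterizer, no creation flags
        nullptr, 0,                 // accept the default feature levels
        D3D11_SDK_VERSION,
        &device, &level, &context);

    // Methods live on objects (device->CreateBuffer(...), context->Draw(...)),
    // unlike OpenGL's C-style global state machine.
    return SUCCEEDED(hr);
}
```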
Google is your friend in this case... there's a good Wikipedia article contrasting the two libraries:
Comparison of OpenGL and Direct3D
If memory serves, OpenGL was the open implementation before Direct3D came out. Direct3D was then based off of OpenGL... but quickly diverged and became its own distinct library.
UPDATE
Looks like my memory is shot...
Direct3D was developed independently of OpenGL.
Direct3D and OpenGL are different APIs used to accomplish the same thing.
The major difference between D3D and GL is that D3D is object oriented and GL is not.
D3D9 and GLES2 basically have the same features. In fact, if you plan on using OpenGL and do not need any GL3 or GL4 features, you should base all your code on the GLES2 API (all GLES2 features are in GL2, but not the other way around).
If possible you should always use D3D over GL on Windows, as multithreading and driver support are flaky. Take netbooks, for example: they support D3D's HLSL at Shader Model 2 but don't support the equivalent for GLSL; they only support a fixed pipeline for GL.