OpenGL versions and GPUs - what kind of compatibility is there?

I'm trying to understand how graphics card versions, OpenGL versions and the API headers work together. I've read up on the OpenGL 2.0/3.0/3.1 debacle on the OpenGL forums and elsewhere, but it's still not clear how this will work out for me as a developer (new to OpenGL). (By the way, I'm using NVIDIA as an example in this question because I'm buying one of their cards, but obviously I'd like the software I develop to be vendor-agnostic.)
First, there is the GPU, which needs to support an OpenGL version. But NVIDIA, for example, provides drivers for older video cards that support OpenGL 3. Does that mean these drivers implement certain functionality in software to emulate new functionality that isn't in the hardware?
Then there are the API headers. If I decide to write an application for OpenGL 3, should I wait until Microsoft releases an updated version of the Platform SDK with headers that support this version? How do you select which version of the API to use - through a preprocessor define in the code, or does updating to the latest Platform SDK simply upgrade you to whatever the latest version is (I can't imagine this last option, but you never know...)?
What about backward compatibility? If I write an application targeting OpenGL 1.2, will users who have installed the drivers for their card supporting OpenGL 3 still be able to run it, or should I test the card's features/supported version in my application? Reading http://developer.nvidia.com/object/opengl_3_driver.html seems to confirm that, at least for NVIDIA cards, applications written against 1.2 will continue to work, but this also implies that other vendors may stop supporting the 1.2 API. Basically, that could put me in a position in the future where the software wouldn't work with recent cards because they don't support 1.2 any more. But if I develop for OpenGL 3, or even 2, today, I may shut out users whose GPUs only support 1.2.
I don't need the fancy features in OpenGL - I hardly use any shading at all, the fixed pipeline works fine for me (my application is CAD-like). What is the best version to base a new application on, with the expectation that it will be a long-lived application with incremental updates over the years to come?
I'm probably forgetting other issues that are relevant in this context, any insights are much appreciated.

From my OpenGL experience, it seems that "targeting" a given version is just accessing the various extensions that have been added for that version. So the only reason you would "target" OpenGL version 3 is if you want to use some of the extensions that are new to version 3. If you don't really use version 3 extensions (if you're just doing basic OpenGL stuff), then you naturally aren't "targeting" version 3.
In Visual Studio, you will always link your application with opengl32.lib, and opengl32.lib doesn't change across OpenGL versions. Instead, OpenGL uses wglGetProcAddress() to access extensions/versions dynamically at run time rather than at compile time. If a given driver doesn't support an extension, wglGetProcAddress() will return NULL at run time when that extension's procedure is requested. So your code needs logic that handles the NULL return case: in the simplest scenario, you could just print an error saying "this feature isn't available, so this program will behave ...", or you can find an alternative way of doing the same thing that doesn't use the extension.
For the most part, you'll only get NULL back from wglGetProcAddress() if your application is running on old hardware/drivers that don't support the OpenGL version that added the extension you're looking for. However, in future years you'll want to keep abreast of what newer OpenGL versions decide to deprecate. I haven't read too much of the 3.1 spec, but apparently it introduces a deprecation model where older technology/extensions may be deprecated, which opens the door for newer hardware/drivers to no longer support deprecated extensions; in that case wglGetProcAddress() will again return NULL for those extensions. So if you put logic in place to handle the NULL return from wglGetProcAddress(), you should still be fine even for deprecated extensions. It just might become necessary to implement better alternatives, or to make newer extensions the default.
As far as the versioned API headers go, the changes to the headers are mostly just changes that allow access to the new functions returned by wglGetProcAddress(). So if you include the API header for version 2, you're good to go as long as you only need the extensions up to OpenGL 2. If you need functions/extensions that were added in version 3, you just replace your version 2 header with the version 3 header, which adds some additional function pointer typedefs associated with the new extensions, so that when you call wglGetProcAddress() you can cast the return value to the right function pointer type. Example:
// The function pointer typedef comes from the API headers (e.g. glext.h).
PFNGLGENQUERIESARBPROC glGenQueriesARB = NULL;
...
glGenQueriesARB = (PFNGLGENQUERIESARBPROC)wglGetProcAddress("glGenQueriesARB");
if (glGenQueriesARB == NULL)
    ; // driver doesn't expose it - report the problem or fall back
In the above example, the typedef for PFNGLGENQUERIESARBPROC is defined in the API headers. glGenQueriesARB came with the ARB_occlusion_query extension (the core glGenQueries arrived in OpenGL 1.5), so I'd need API headers at least that recent to get the definition of PFNGLGENQUERIESARBPROC. That's really all the headers do.
One more thing I want to mention about 3.1. Apparently 3.1 deprecates a lot of OpenGL functionality that my company has used pretty ubiquitously, including display lists, the glBegin/glEnd mechanism, and the GL_SELECT render mode. I don't know much about the details, but I don't see how they can do that without creating a new opengl32.lib to link with, because most of that functionality is embedded in opengl32.lib and not accessed through wglGetProcAddress(). Additionally, is Microsoft going to include that new opengl32.lib in their Visual Studio versions? I don't have answers to those questions, but I would think that, even though 3.1 deprecates it, this functionality is going to be around for a long time. If you keep linking with your current opengl32.lib, it should continue to work almost indefinitely, although you may lose hardware acceleration at some point.
The vast majority of OpenGL tutorials available on the web use the glBegin/glEnd method of drawing primitives. The same is true for GL_SELECT, although a lot of hardware no longer accelerates the GL_SELECT render mode. Even opengl.org's tutorials use the supposedly deprecated glBegin/glEnd methods, and I have yet to find a "getting started" tutorial that uses only 3.1 features and avoids the deprecated functionality (if someone knows of one, do link me to it). Anyway, while 3.1 seems to have thrown away a lot of the old in favor of the new, I think the old stuff will still be around for quite a while.
Long story short, here's my advice. If your OpenGL needs are simple, just use the basic OpenGL functionality. Vertex arrays are supported in versions 1.1 through 3.1, so if you're looking for maximum lifetime and you're starting fresh, that's probably what you should use. That said, my opinion is that glBegin/glEnd and display lists are still going to be around for a while even though they are deprecated in version 3, so if you want to use them, I wouldn't fret too much. I would, however, avoid GL_SELECT mode for picking in favor of an alternate method. Many of the hardware vendors have treated GL_SELECT as deprecated for years now, even though it only officially became deprecated with version 3. In our application we get lots of issues where it doesn't work on ATI cards and integrated GMA cards. We eventually implemented a picking method using occlusion queries instead (sketched below), which seems to fix the problem. So to do it right the first time, avoid GL_SELECT.
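For illustration, here is a minimal sketch of what such an occlusion-query picking pass might look like - assuming the ARB entry points were loaded via wglGetProcAddress() as above, and that Object, pickables and drawPickGeometry() are this application's own hypothetical types and helpers:

// Sketch: picking via ARB occlusion queries instead of GL_SELECT.
// Scissor rasterization down to the pixel under the cursor, then count the
// samples each candidate object would have rendered there.
GLuint query = 0;
glGenQueriesARB(1, &query);

glScissor(cursorX, cursorY, 1, 1);   // cursorX/cursorY: pick location (GL coords)
glEnable(GL_SCISSOR_TEST);

for (Object* obj : pickables) {      // pickables: hypothetical candidate list
    glBeginQueryARB(GL_SAMPLES_PASSED_ARB, query);
    drawPickGeometry(obj);           // hypothetical: draw just this one object
    glEndQueryARB(GL_SAMPLES_PASSED_ARB);

    // Reading the result immediately stalls the pipeline; acceptable for a click.
    GLuint samples = 0;
    glGetQueryObjectuivARB(query, GL_QUERY_RESULT_ARB, &samples);
    if (samples > 0) {
        // obj is visible at the cursor - record it as a pick candidate
    }
}

glDisable(GL_SCISSOR_TEST);
glDeleteQueriesARB(1, &query);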
Good Luck.

As far as drivers go, I think that in some cases missing functionality is implemented in software. The whole point of using OpenGL (aside from the acceleration) is to write to an API and not care HOW it's implemented.
In my experience, you don't declare OpenGL versions. The function calls between API versions don't overlap. Be aware of the spec: if, for example, you call a 2.0 method, your application's minimum version just became 2.0. I have display applications written to target OpenGL on stupid old Sun video cards, and they work just fine on brand-new NVIDIA 200-series cards.
I don't think there is any way to guarantee that an API won't change in the future, especially if you don't control it. Your application may stop working in 10 years when we're on OpenGL 6.4. Hopefully you've written a good enough application that customers will be willing to pay for an upgrade.

This fellow, Mark J. Kilgard, has been publishing NVIDIA source code and documents on OpenGL since the '90s, on behalf of one of the two biggest names in gaming hardware. From one of his documents:
... the notion that an OpenGL application is "wrong" to ever use immediate mode is overzealous. The OpenGL 3.0 specification has even gone so far as to mark immediate mode in OpenGL for "deprecation" (whatever that means!); such extremism is counter-productive and foolish. The right way to encourage good API usage isn't to try to deprecate or ban API usage, but rather educate developers about the right API usage for particular situations.

Related

Query highest supported OpenGL version before context initialization?

I want to know if there is a way (ideally cross-platform; failing that, POSIX-compliant, or at least working on Linux) to query the highest supported OpenGL version on the current system.
I would like to be able to create GLFW windows at the highest supported version, rather than blindly trying different versions until one allows valid context initialization.
EDIT:
Assume I am not creating a context. Imagine I want to replicate what glxinfo does. In other words, I want to be able to query the installed OpenGL version without EVER creating a context.
When you ask for version X.Y, you are not asking for version X.Y; you're asking for at least version X.Y. Implementations are permitted to give you the highest version which is backwards compatible with X.Y.
So if you ask for 3.3 core profile, you may well get 4.6 core profile. There's no point in the implementation returning a lower version.
So just write your code against a minimum OpenGL version, then ask for that. You'll get whatever the implementation gives you, and you can query what you got later.
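As a hedged sketch with GLFW 3 (names per the GLFW 3 API), asking for a 3.3 core minimum and then checking what the implementation actually handed back:

#include <GLFW/glfw3.h>
#include <cstdio>

int main() {
    if (!glfwInit()) return 1;
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);   // "at least 3.3"
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

    GLFWwindow* win = glfwCreateWindow(640, 480, "probe", nullptr, nullptr);
    if (!win) { glfwTerminate(); return 1; }         // 3.3 core isn't supported at all
    glfwMakeContextCurrent(win);

    // What did we actually get? Often higher than requested, e.g. 4.6.
    int major = glfwGetWindowAttrib(win, GLFW_CONTEXT_VERSION_MAJOR);
    int minor = glfwGetWindowAttrib(win, GLFW_CONTEXT_VERSION_MINOR);
    std::printf("got OpenGL %d.%d\n", major, minor);

    glfwDestroyWindow(win);
    glfwTerminate();
    return 0;
}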
Imagine I want to replicate what glxinfo does.
glxinfo does what it does by creating a context. Just look at its source code.
However, the GLX_MESA_query_renderer extension does allow asking about the context that would be created for a particular display/screen/renderer combination. Of course, that only works through Mesa; drivers that don't go through Mesa's system will not be visible through it.
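For illustration, a hedged sketch of querying it (names taken from the MESA_query_renderer spec; dpy is assumed to be an already-open X11 Display*):

#include <GL/glx.h>
#include <GL/glxext.h>

// The entry point is itself fetched like any other GLX extension function.
PFNGLXQUERYRENDERERINTEGERMESAPROC queryRenderer =
    (PFNGLXQUERYRENDERERINTEGERMESAPROC)
        glXGetProcAddress((const GLubyte*)"glXQueryRendererIntegerMESA");

unsigned int version[2] = {0, 0};
if (queryRenderer &&
    queryRenderer(dpy, /*screen*/0, /*renderer*/0,
                  GLX_RENDERER_OPENGL_CORE_PROFILE_VERSION_MESA, version)) {
    // version[0].version[1] is the highest core profile the renderer offers,
    // obtained without ever creating a context - but only on Mesa.
}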
Have you tried to create an OpenGL context without querying for a specific version?
When I use the native OpenGL functions (glXCreateContext() or wglCreateContext()) to create a new OpenGL context, they always create a context with the highest OpenGL version supported by my graphics card (4.6).
I don't know if GLFW has the same behavior...
ANSWER FOR QUESTION EDIT:
It's impossible to query any OpenGL information without creating a context, since you cannot call any glXXX() function without one. What you can do instead is create a dummy context (which is not shown to the user but lives somewhere in memory), query all the OpenGL information you want, and delete it when you are done. Don't worry about doing this: many, many pieces of software, libraries and game engines do it, including my own.
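For example, with GLFW the dummy context can be a hidden window - a minimal sketch, assuming glfwInit() has already succeeded:

// A hidden throwaway context, used only for querying, then torn down again.
glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);            // never shown to the user
GLFWwindow* dummy = glfwCreateWindow(1, 1, "", nullptr, nullptr);
glfwMakeContextCurrent(dummy);

const char* version  = (const char*)glGetString(GL_VERSION);   // e.g. "4.6.0 ..."
const char* vendor   = (const char*)glGetString(GL_VENDOR);
const char* renderer = (const char*)glGetString(GL_RENDERER);
// ... record whatever other glxinfo-style information you need ...

glfwDestroyWindow(dummy);                            // clean up the dummy context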

Can't figure out Shaders for DirectX11?

So, I have no idea how to use shaders. Coding them is easy, but not actually using them. MSDN is really useless to me - they have the worst tutorials out there. I am currently reading Frank Luna's Direct3D 11 book, and I am finally at the part where I actually get to draw stuff. Exciting, except for the fact that it doesn't work. His BoxDemo surely worked 3 years ago when the book was written, but now with all of the new DirectX changes - the DirectX SDK being folded into the Windows SDK, FX being deprecated, no more D3DX libraries - it's so frustrating. I went ahead and downgraded to the 2010 DirectX SDK just so I could actually follow a tutorial; almost every D3D tutorial out there uses the D3DX libraries.
Anyway... now to my question. Visual Studio has an option to create .hlsl files, but it can also make .fx files (if you just type .fx at the end of the file name, it creates a .fx file).
So, I could use the deprecated .fx way and learn how to use it easily with all of the tutorials teaching it - OR I can learn the new HLSL way, and have the hardest, most frustrating time trying to learn it with no tutorials.
I know they both use the HLSL language, but they are used in the program differently (CreateEffectFromMemory, CompileFromFile, etc.).
I kind of hope to learn the new way, but if I don't, that is fine. Although, I pretty much have an entire program using an .fx file - I'm sure it will work, but I just need help building and utilizing "Effects11.lib".
Sorry for the dragging-on post - in fact, I am sure I will not get any replies for a while, if I get any at all, due to the length. I am pretty frustrated because learning DirectX has put my programming career on a massive hiatus for the past month or two. Please and thank you for any help.
The file extension, .hlsl vs. .fx, is arbitrary - like the difference between .cpp and .cxx. Historically, .fx was used for HLSL source that included vertex shaders, pixel shaders, and the Effects (FX) techniques/passes, while .hlsl was used for files that didn't include the Effects (FX) techniques/passes - but that's just a convention. There's plenty of both out there.
What matters is how the file is compiled. If you use the fx_5_0 profile, then you need the Effects 11 runtime to actually use the result. This itself is really nothing special: it effectively just invokes the same compiler for each combination of compile statements you provided in the Effects (FX) techniques/passes and bundles the results up with some metadata. In fact, you can often invoke the FXC compiler on a .fx file containing techniques and passes using a stage-specific profile like vs_5_0 or ps_4_0, and it will compile the appropriate stage-specific shader if you get the parameters just right.
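For illustration, a hedged sketch of the stage-specific route using the Windows SDK's D3DCompileFromFile (the file name and entry point are hypothetical):

#include <d3dcompiler.h>
#pragma comment(lib, "d3dcompiler.lib")

// Compile just the vertex shader stage from the source file - no Effects runtime.
ID3DBlob* vsBlob = nullptr;
ID3DBlob* errors = nullptr;
HRESULT hr = D3DCompileFromFile(
    L"color.hlsl",                    // hypothetical shader source file
    nullptr, nullptr,                 // no macros, no #include handler
    "VSMain", "vs_5_0",               // entry point + stage-specific profile
    D3DCOMPILE_ENABLE_STRICTNESS, 0,
    &vsBlob, &errors);
if (FAILED(hr) && errors)
    OutputDebugStringA((const char*)errors->GetBufferPointer());
// On success, feed the bytecode to the device:
// device->CreateVertexShader(vsBlob->GetBufferPointer(), vsBlob->GetBufferSize(),
//                            nullptr, &vertexShader);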
RE: Effects11
The main issue with Effects 11 is that it requires the D3DCompile DLL at runtime, because it uses that to extract the metadata required to wire up the state and shaders. The D3DCompile DLL was not usable in deployed Windows Store apps on Windows 8.0 and Windows Phone 8.0 - only while developing the app. Thus, Effects 11 wasn't usable for those platforms.
This is no longer a technical issue for Windows Store apps on Windows 8.1 or Windows Phone 8.1, but compiler support for fx_5_0 is still deprecated, and it has a few issues that are fixed for the other profiles (vs_5_0, etc.). As such, it's up to you whether you want to use it, as long as you understand its limitations.
The latest version of Effects 11 is on CodePlex and I address the limitations there. There are some simple tutorials that use it, as well as a few samples. This version of Effects 11 actually doesn't need the legacy DirectX SDK at all.
In short, YMMV w.r.t. Effects 11, but you can still use it for Win32 desktop apps, Windows Store apps for Windows 8.1, Windows Phone 8.1 apps, and in theory Xbox One apps too.
RE: D3DX
I can understand the frustration, but it's a common issue with the book publishing industry: technical books are way behind the ball in terms of changes. DirectX 11 was introduced back in 2008. The transition to the Windows 8 SDK came in 2012, and most developers, much less book publishers, completely missed it. I have some notes on that book on my blog.
For a complete list of 'modern' alternatives, see Living without D3DX.
For Win32 desktop apps, you can continue to use the legacy DirectX SDK. The main thing to note is that with the Windows 8.x SDK that comes with VS 2012 and VS 2013, the include and lib path orders are reversed from what they were with VS 2010. See MSDN for details.
RE: Learning Direct3D 11
Have you looked at the DirectX Tool Kit?
Getting Started with Direct3D 11
Direct3D Feature Levels
HLSL, FXC, and D3DCompile
As for tutorials and samples, try DirectX SDK Samples Catalog.

Is the use of OpenGL restricted to creating graphics-intensive software?

OpenGL is great for creating UIs (especially in games) and it is highly portable. Is it unusual for an ordinary (not graphically intensive) application to use OpenGL for its UI? If so, why? Is it about performance or ease of use?
For example, an Apple developer can use the ready-to-use buttons, sliders, etc. provided by Apple; he can also create the UI using OpenGL. The second method makes the code more flexible and portable. Why don't people do this?
Does using OpenGL make sense if portability is our goal?
There's a lot more to UI than graphics.
As one example: fonts. Rendering Chinese, Japanese, Korean, Arabic or Thai in the right direction with the right strokes etc. is a TON of work. There are whole teams dedicated to that topic at Microsoft, Google, Apple, Adobe and other companies. If you go straight to GL, you're going to have to solve that problem yourself.
Another example: native controls. iOS users expect certain controls to work a certain way; Android users expect something different. For a game that's usually not a problem - games usually have a very unique UI - but apps are generally expected to stick to the conventions of their target platform, and users get upset when the controls don't match their native platform. Using GL for your UI means you won't get the native controls.
Similarly, text editing is a very platform-specific feature. Is it drag to select, right-click to select, hold to select? Which keys go which way, and how do they work? Is it Ctrl-V or ⌘-V? All of that is platform-dependent as well. You can use the native text editing controls and have the problem solved, or you can use GL and have to reproduce not only all the code to edit text but also make it work the right way on each platform in each configuration. Does your GL text editor handle Japanese keyboards? German keyboards? Does it handle Input Method Editors for CJK and other languages?
So, it's more a matter of the right tool for the right job. There are whole UI platforms written on top of GL; before Metal, OS X was probably one of them (or still is?). But if you skip those and build your UI directly on GL, you'll end up having to implement all those in-between pieces yourself.
Certainly GL might be the right way to go for certain non-game apps. Paper comes to mind as an app that could be 98% GL and therefore gain portability. On the other hand, Evernote is probably at the far other end: it needs to handle every language, different fonts, input for users with disabilities, etc. All of that is built into the OS, not GL.
Yes, what you suggest is possible. Just have a look at Blender. Blender implements its own UI using OpenGL, for the exact portability reasons you gave.
However, there's a lot more to user interfaces than just getting things drawn to the screen: event management, input handling, interoperability with other applications. All of that depends on functions that are not covered by OpenGL.
But there are cross-platform application framework libraries, like Qt, and the drawing code makes up only a small portion of what those frameworks do.
One problem you run into when using OpenGL for drawing the GUI, though, is that there's huge variation in the OpenGL profiles supported by the systems out there. It can vary from a mere OpenGL-1.1 software fallback on an old Windows XP machine, over OpenGL-1.4 on a Windows Vista machine with only the default drivers installed by Windows setup, up to OpenGL-4.5 on the same machine once the user installs the proper drivers. And the way you use OpenGL-4.5 is largely incompatible with OpenGL-1.4.
For a UI toolkit written with an OpenGL backend, this means you must implement at least three codepaths: an OpenGL-1.1 variant that uses the fixed-function pipeline and client-side vertex arrays, an OpenGL-3 compatibility profile path, and an OpenGL-4 core profile path. A startup version probe to select among them might look like the sketch below.
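A minimal sketch of such a probe - the backend factory names are hypothetical, and a current context is assumed to exist already:

// Pick one of the three codepaths from the context's version string.
const char* versionString = (const char*)glGetString(GL_VERSION); // e.g. "4.5.0 ..."
int major = 1, minor = 1;
std::sscanf(versionString, "%d.%d", &major, &minor);

if (major >= 4)
    backend = makeCoreProfileBackend();       // GL 4 core profile path
else if (major >= 3)
    backend = makeCompatibilityBackend();     // GL 3 compatibility path
else
    backend = makeFixedFunctionBackend();     // GL 1.x fixed-function fallback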
This is much more work than just using the OS-specific methods, which you have to use anyway to create the window and receive user input.

Why should cocos2d-iphone users avoid using the @2x file extension?

Cocos2d-iphone uses the -hd suffix for Retina images (and other assets). The cocos2d Retina guide speaks only vaguely of "some incompatibilities" regarding @2x:
Apple uses the "@2x" suffix, but cocos2d doesn't use that extension because of some incompatibilities. Instead, cocos2d has its own suffix: "-hd".
WARNING: It is NOT recommended to use the "@2x" suffix. Apple treats those images in a special way which might cause bugs in your cocos2d application.
Great. I feel well informed.
Through a two-year-old bug report regarding @2x, I got the link to a forum thread that supposedly explains the issues with @2x. However, it does not. The only hints I found in there are that there were iOS (4.0/4.1) bugs regarding @2x, which I suppose are no longer relevant. It's possible that I missed some crucial aspect (there was some talk about caching or repeated loading issues) - the thread is very long, after all.
I'd like to know: what specific issues might a cocos2d developer encounter when using the @2x suffix for images instead of -hd?
Please give concrete examples of things that might go wrong or actually will go wrong.
This seems to be the main reason from this link: http://www.cocos2d-iphone.org/forum/topic/12026
Specifically this post by riq:
I don't know if initWithContentsOfFile was fixed, but in 4.0 it was broken and it wasn't working with @2x, ~iphone extensions.
imageNamed caches all the loaded files so it consumes much more memory than initWithContentsOfFile
Also the @2x extension does something (I don't know exactly what) but it doesn't work OK with cocos2d.
Another good point: back when the iPhone 4 was first released with the Retina display, some users of cocos2d were surely still on an older version of it, so when Retina mode was used with a version of cocos2d that didn't support it, things were twice as large as they should have been. Again, this is now fixed for almost everyone, unless you are using a VERY early version of cocos2d.
Overall, it seems the main issue was with initWithContentsOfFile on iOS 4, but that has since been fixed: I use that exact API with cocos2d 2.0-rc2 in my app and I do not have any issues whatsoever. I use all the Apple-specified extensions for images and everything works jolly good! :)
It seems as if this has a historical background.
What still makes using -hd graphics worthwhile is that loading them doesn't rely on Apple functionality but is done in framework code instead. So -hd assets can be loaded for iPads in iPhone Simulator mode, making use of the higher-resolution pictures in 2x mode.
Other than that, I couldn't find any more reasons not to use @2x when I looked into this a week ago.
In case you want all the details, it is probably best to drop riq an email.

What is the best approach to using OpenGL on the web?

I wrote a program in C++/OpenGL (using the Dev-C++ IDE) for my Calculus 2 class. The teacher liked the program, and he asked me to somehow put it online so that instead of downloading the .exe, you can just run it in the web browser - kinda like Java applets run in the browser.
The question is:
How, if possible, can I display a C++/OpenGL program in a web browser?
I am thinking of moving to JOGL, which is a Java binding for OpenGL, but I'd rather stay in C++ since I am more familiar with it.
Also, is there any other better and easier 3D web-based API that I can consider?
There has been a lot of activity recently around WebGL. It is a JavaScript binding to native OpenGL ES 2.0 implementations, designed as an extension of the HTML5 canvas element.
It is supported by the nightly builds of Firefox, Safari, Chrome and Opera.
Have a look at these tutorials, based on the well known NeHe OpenGL tutorials.
Several projects based on WebGL are emerging, most notably scene graph APIs.
From Indie teams: SceneJS, GLGE, SpiderGL.
From Google: the team behind the O3D plugin is trying to implement a pure WebGL backend (source) for the project, so that no plugin will be necessary.
From W3C/Web3D: there is an ongoing discussion to include X3D as part of any HTML5 DOM tree, much like SVG in HTML4. The X3DOM project was born last year to support this idea. It now uses WebGL as its render backend and has been at version 1.0 since March 2010.
I'm almost sure that WebGL is the way to go in the near future. Mozilla/Google/Apple/Opera are promoting it, and if the technology works and there is sufficient customer/developer demand, maybe Microsoft will implement it in IE (let's hope there will be no "WebDX"!).
AFAIK, there are only 3 options:
Java. It includes the whole OpenGL stack.
Google's Native Client (NaCl), essentially a plugin that lets you run executable x86 code. Just compile it and call it from HTML. Highly experimental, and nobody will have it already installed. Not sure if it gives you access to OpenGL libraries.
Canvas:3D. Another very experimental project. This is an accelerated 3D API accessible from JavaScript. AFAICT, it's only on experimental builds of Firefox.
I'd go for Java, if at all.
OTOH, if it's mostly vector work (without lots of textures and illumination/shadows), you might make it work in SVG simply by projecting your vectors from 3D to 2D (see the sketch below). In that case, you can achieve cross-browser compatibility using SVGWeb, a simple JavaScript library that lets you transparently use either the browser's native SVG support or a Flash-based SVG renderer.
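For illustration, that projection step is essentially a perspective divide - a minimal sketch using a pinhole model of my own, not anything SVGWeb provides:

// Project a 3D point onto a 2D plane at the given focal length.
struct Vec3 { float x, y, z; };
struct Vec2 { float x, y; };

Vec2 project(const Vec3& p, float focalLength) {
    // Simple pinhole projection; assumes p.z > 0 (point in front of the camera).
    return { focalLength * p.x / p.z, focalLength * p.y / p.z };
}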
Do you really have the time to rewrite it? I thought students were meant to be too busy for non-essential assignment work.
But if you really want to do it, perhaps a preview of it running as a Flash movie is the easiest way. Then it's just a matter of doing that, and you could provide a download link to the real application for people who are interested.
Outside of Java, in-browser OpenGL is really in its infancy. Google has launched a really cool API and plugin for it, though. It's called O3D:
http://code.google.com/apis/o3d/
Article about the overall initiative:
http://www.macworld.com/article/142079/2009/08/webgl.html
It's not OpenGL, but the Web3D Consortium's X3D specification may be of interest.
Another solution is to use Emscripten (a source-to-source compiler).
Emscripten supports C/C++ and OpenGL and will translate the source into HTML/JavaScript.
To use Emscripten you will need to use SDL as a platform abstraction layer (for getting an OpenGL context as well as loading images); see the sketch below.
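A minimal sketch of what such an entry point might look like, assuming the SDL 1.2 API that Emscripten emulates (built with something like emcc main.cpp -o index.html):

#include <SDL/SDL.h>
#include <SDL/SDL_opengl.h>
#include <emscripten.h>

void frame() {
    glClear(GL_COLOR_BUFFER_BIT);
    // ... draw the scene here ...
    SDL_GL_SwapBuffers();
}

int main() {
    SDL_Init(SDL_INIT_VIDEO);
    SDL_SetVideoMode(640, 480, 32, SDL_OPENGL);  // backed by a WebGL context
    emscripten_set_main_loop(frame, 0, 1);       // the browser drives the loop
    return 0;
}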
Emscripten is currently being used in Unreal Engine and will also be used in the Unity 5 engine.
Read more about the project here:
https://github.com/kripken/emscripten
Two approaches:
Switch to Java. Your application will suffer a loss of performance as a trade-off for portability, but since Java is everywhere, this approach ensures that your code can be executed in most browsers.
Use ActiveX, which allows you to run native binary code on Microsoft Windows. This is not recommended in production because ActiveX is well known as a potential security hole, but since your lecturer is the one viewing it, security doesn't seem to be a big deal. This is applicable to the Microsoft platform (Windows + IE) only.