Recently I have had some problems with GLSL shader versions on different computers. I know every GPU can have different support for shaders, but I don't know how to write one shader that will work on all GPUs. If I write a shader on my PC (GPU: AMD HD 7770), I don't even have to specify the version, but on some older PCs, or on PCs with nVidia GPUs, the driver is stricter about the version, so I have to specify a version that the GPU supports.
Now here comes the real problem. If I specify e.g. version 330 on my PC, it works as it should, but on other PCs which should support version 330 it does not seem to work, so I have to rewrite it to make it work there. And if I switch back to my PC, which has the newer GPU, it doesn't work either.
Does anyone know how I have to write the shader so it can run on all GPUs?
Writing portable OpenGL code isn't as straightforward as you might like.
nVidia drivers are permissive. You can get away with a lot of things on nVidia drivers that you can't get away with on other systems.
It's easy to accidentally use extra features. For example, I wrote a program targeting the 3.2 core profile, but used GL_INT_2_10_10_10_REV as a vertex format. The GL_INT_2_10_10_10_REV symbol is defined in 3.2, but it's not allowed as a vertex format until 3.3, and you won't get any error messages for using it by accident.
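To illustrate, here is a hedged sketch of that mistake; the buffer name and attribute index are hypothetical, and the point is that a permissive 3.2 driver may accept this call without complaint even though the packed format is only legal from 3.3 onwards:

    // Hypothetical vertex setup. GL_INT_2_10_10_10_REV appears in the 3.2
    // headers, but using it as a vertex format is only valid from 3.3 on.
    glBindBuffer(GL_ARRAY_BUFFER, normalBuffer);   // normalBuffer: hypothetical VBO
    glVertexAttribPointer(
        2,                      // attribute index (hypothetical layout)
        4,                      // this packed format requires 4 components
        GL_INT_2_10_10_10_REV,  // silently accepted by permissive 3.2 drivers
        GL_TRUE,                // normalize the packed signed integers
        0, nullptr);            // tightly packed, at offset 0
    glEnableVertexAttribArray(2);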
Lots of people run old drivers. According to the Steam survey, in 2013, 38% of customers with OpenGL 3.x drivers didn't have 3.3 support, even though hardware which supports 3.0 should support 3.3.
You will always have to test. This is the unfortunate reality.
My recommendations are:
Always target the core profile.
Always specify shader language version.
Check the driver version and abort if it is too old (a minimal sketch follows this list).
If you can, use OpenGL headers/bindings that only expose symbols in the version you are targeting.
Get a copy of the spec for the target version, and use that as a reference instead of the OpenGL man pages.
Write your code so that it can also run on OpenGL ES, if that's feasible.
Test on different systems. One PC is probably not going to cut it. If you can dig up a second PC with a graphics card from a different vendor (don't forget Intel's integrated graphics), that would be better. You can probably get an OpenGL 3.x desktop for a couple hundred dollars, or if you want to save the money, ask to use a friend's computer for some quick testing. You could also buy a second video card (think under $40 for a low-end card with OpenGL 4.x support), just be careful when swapping them out.
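Here is the promised sketch for the version check. It assumes a context is already current and a function loader (e.g. glload, mentioned elsewhere in this thread) has been initialized; the required version 3.3 is just a placeholder:

    #include <cstdio>
    #include <cstdlib>

    // Abort unless the context gives us at least the version we target.
    // The GL_MAJOR_VERSION/GL_MINOR_VERSION queries exist from GL 3.0 on.
    void requireVersion(int wantMajor, int wantMinor) {
        GLint major = 0, minor = 0;
        glGetIntegerv(GL_MAJOR_VERSION, &major);
        glGetIntegerv(GL_MINOR_VERSION, &minor);
        if (major < wantMajor || (major == wantMajor && minor < wantMinor)) {
            std::fprintf(stderr, "OpenGL %d.%d required, got %d.%d -- aborting.\n",
                         wantMajor, wantMinor, major, minor);
            std::exit(EXIT_FAILURE);
        }
    }

    // Usage, right after context creation:
    // requireVersion(3, 3);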
The main reason that commercial games run on a variety of systems is that they have a QA budget. If you can afford a QA team, do it! If you don't have a QA team, then you're going to have to do both QA and development -- two jobs is more work, but that's the price you pay for quashing bugs.
I have an in-house application that uses the now-deprecated NVIDIA SceniX scene graph and Cg shaders. It works fine, and as it is in-house we can choose what hardware to run it on.
The shaders currently use the vp40/fp40 profiles (though I can change them to use later profiles like GLSLV/GLSLF). I am trying to confirm that the current crop of NVIDIA hardware STILL supports Cg shaders, i.e. if we purchase the latest OpenGL 4 GeForce or Quadro cards, will they still support the Cg profiles? I have asked on the NVIDIA forum but got no answer. Eventually we will have to upgrade to a new scene graph and GLSL, but I want to know what 'legacy' support there is for the Cg shaders.
Thanks
Yes, you're perfectly fine. In fact, the GLSL implementation in the NVIDIA drivers is actually an add-on to the Cg compiler. Even on latest-generation GPUs, the NVIDIA driver internally first translates GLSL to NV/ARB_program_… assembly (source code, in fact) and runs this through the assembler. It's unlikely NVIDIA is going to change that in the near future (although the introduction of SPIR-V may force their hand). And all the legacy OpenGL ARB/NV_program interfaces are still supported just fine as extensions (even up to the OpenGL-4 core profile).
I want to start coding in OpenGL and C++ and I have a couple of questions:
1. Should I use OpenGL 4.2, or rather 3.x? OpenGL 4.x runs on NVIDIA GTX 400+. Does that mean it is widely supported, or should I go for 3.x?
2. I found some headers and libraries in Windows SDK, but not all of them. Is there any place where I can find all libraries and headers for OpenGL? What I want to avoid is downloading old and different versions from all over the internet.
3. Does OpenGL cover input or is this part of the GDI+/WinAPI?
You should go for what you want to support. The major OpenGL version (3, 4) is typically used for identifying hardware generations, while the minor versions are really functional releases that should be independent of your hardware. For example: I now have 4.2 features on a GPU that predates 4.2 entirely. So you can count on up-to-date drivers supporting the latest minor version. But no matter how you try, your GPU won't gain new hardware functions through a driver update. Note that a tricky part is that not all GPUs are still supported: driver updates are no longer provided for them, so they can be stuck at some minor OpenGL version.
There are wrappers that do most things for you, but do not forget that you still need to link to the gl and possibly glu libraries, plus some platform-specific ones. I personally like the unofficial OpenGL SDK; glload is the library that you want. The other libraries in it are also quite useful.
OpenGL does not cover input; it is just for drawing. Also note that OpenGL does not create the default framebuffer for you. The default framebuffer is the thing you render to (usually in a window), and it is created by platform-specific functions; on Windows this is WGL.
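As a concrete illustration, here is a minimal sketch of getting a window plus default framebuffer and a versioned context. It uses GLFW purely as an example of such a platform layer (my choice, not the answer's); GLFW wraps WGL on Windows and GLX/EGL elsewhere:

    #include <GLFW/glfw3.h>

    int main() {
        if (!glfwInit()) return 1;
        // Ask the platform layer for a 3.3 core-profile context; GLFW
        // translates these hints into WGL/GLX/EGL-specific calls.
        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
        glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
        GLFWwindow* window = glfwCreateWindow(640, 480, "GL demo", nullptr, nullptr);
        if (!window) { glfwTerminate(); return 1; }
        glfwMakeContextCurrent(window);  // the default framebuffer now exists

        while (!glfwWindowShouldClose(window)) {
            glfwSwapBuffers(window);
            glfwPollEvents();
        }
        glfwTerminate();
        return 0;
    }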
I have an onboard graphics card which supports OpenGL 2.2. Can I run an OpenGL (let's say version 3.3) application on it by using some software, etc.?
OpenGL major versions somewhat refer to available hardware capabilities:
OpenGL-1: fixed function pipeline (DirectX 7 class HW)
OpenGL-2: programmable vertex and fragment shader support.(DirectX 9 class HW)
OpenGL-3: programmable geometry shader support (DirectX 10 class HW)
OpenGL-4: programmable tessellation shader support and a few other nice things (DirectX 11 class HW).
If your GPU supports OpenGL-2 only, then there is no way you could run an OpenGL-3 program on it, making use of all the bells and whistles. Your best bet is a software rasterizing implementation.
A few years ago, when shaders were something new, NVIDIA shipped their developer drivers with a software rasterizer that emulated the higher functionality, to kickstart shader development, so that there were actual applications to run on those new programmable GPUs.
Sure you can, you just have to disable those features. Whether this will work well depends greatly on the app.
The simplest method is to intercept all OpenGL calls, using some manner of DLL hooking, and filter them as necessary. When OGL3 features are used, return a "correct" answer (but don't do anything) or provide a null implementation for calls that aren't required.
If done properly, and the app isn't relying on the OGL3 features, this will run without those on your hardware.
If the app does require OGL3 stuff, results will be unreliable at best, and it may be unusable. It really depends on what exactly the app does and what it needs. Providing a null implementation of OGL3 will allow you to run it, but results are up in the air.
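A very rough sketch of the filtering idea, for one intercepted entry point (the names and the way the real pointer gets filled in are placeholders; an actual implementation would live in the hooking DLL and deal with calling conventions there):

    #include <cstdio>

    // Hypothetical intercepted GL 3.x entry point. The hooking layer would
    // resolve real_glBindVertexArray from the driver when it is available.
    typedef void (*PFNGLBINDVERTEXARRAY)(unsigned int array);
    static PFNGLBINDVERTEXARRAY real_glBindVertexArray = nullptr;

    void hooked_glBindVertexArray(unsigned int array) {
        if (real_glBindVertexArray) {
            real_glBindVertexArray(array);  // capable driver: pass through
        } else {
            // GL 2.x driver: give the app a "correct" answer but do nothing.
            std::fprintf(stderr, "ignoring glBindVertexArray(%u)\n", array);
        }
    }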
No. Well, not really. NVIDIA has some software emulation that might work, but other than that, no.
Your hardware simply can't do what GL 3.0+ asks of it.
also:
I have an onboard graphics card which supports OpenGL 2.2
There is no OpenGL 2.2. Perhaps you meant 2.1.
I currently run a machine that allows me to program in OpenGL 2.1. If I were to make a program, should I use the power of the current OpenGL versions like 3.x/4.x or use 2.1?
On a side question: How can I tell what's the highest version of OpenGL my computer can run?
On another side question: does only upgrading my video card allow me to program in upgraded versions of OpenGL?
OpenGL versions (for AMD and NVIDIA GPUs) roughly correspond to levels of hardware. 2.x OpenGL versions are for DX9-level hardware, 3.x represents DX10-level, and 4.x represents DX11-class hardware. So the version you pick restricts the hardware your code can run on.
In general, any AMD or NVIDIA GPU you can actually buy new from a store will be 3.x or better (more than likely 4.x). Even integrated GPUs from AMD, whether on the motherboard or in the CPU, are 3.x or better. I do some home development work on an HD 3300 motherboard GPU, and it works reasonably well.
Intel is a problem. Intel's OpenGL driver quality is pretty poor. Many old Intel machines can only support GL 1.4, which is pre-DX9 class functionality. They do support some higher-level extensions (shaders, but only vertex shaders, since they run them in software).
More recent Intel GPUs are a bit better, but their GL drivers are still rather buggy.
The above describes the situation for Windows. Linux is a bit fuzzier, because there are drivers from NVIDIA/AMD and open-source, community-written drivers. The latter are generally not as good, but they are improving, and they tend to target 3.x-class hardware.
The Mac OS X world is a bit different. OS X Lion (10.7), recently released, adds support for OpenGL 3.2 (sadly not 3.3, for some reason). Apple rigidly controls how OpenGL works on their platform, but hopefully they will be updating GL versions more frequently than they have been.
So on Macs, you really have two choices: 2.1 or 3.2. Note that Lion's 3.2 support only exposes core OpenGL functionality. See this page for details on what that means.
You cannot ask what the highest version is that your particular computer is capable of; there is simply the version you get when you create a context. In general, unless you specifically ask for a version (and even then, usually not), you will get the highest version your hardware and drivers can handle.
Oh, and yes: the OpenGL version is controlled by your video card's capabilities (and installed drivers).
The following advice assumes that you're developing a serious application that you intend for others to use. It isn't for little demo apps or whatever.
In general, I would advise against explicitly restricting your code to 4.x. While 4.x adoption increases every day (there are two hardware generations from both NVIDIA and AMD with 4.x support, a third will likely be out by year's end from AMD, and AMD is starting to embed 4.x-capable GPUs in their CPUs now), there is still a lot of 3.x hardware. 4.x doesn't buy you a whole lot, and you can easily add code paths to conditionally support 4.x features where they are available.
In order to use OpenGL 3.x you need a card that supports DirectX 10 and proper drivers that support it.
The advantage over DirectX is that you can also use OpenGL 3 and 4 on Windows XP; no need for Vista or 7.
Which version you should use depends on your audience. If your audience is gamers, go ahead and use 3. I wouldn't go 4-exclusive yet; DX11-class hardware is still rare.
For a first look at how gamers use their computers and what hardware they have, Steam is a good source:
http://store.steampowered.com/hwsurvey
You can determine the version by running:
const GLubyte* version = glGetString(GL_VERSION); // e.g. "3.3.0 <vendor-specific suffix>"
A good OpenGL3 Tutorial:
http://arcsynthesis.org/gltut/
The OpenGL 3.3 SDK Reference:
http://www.opengl.org/sdk/docs/man3/
Hope this helps a bit :).
Lots of embedded Intel graphics are limited to 1.4 or 1.5.
Mac OS X is stuck on 2.1, I hear.
All Radeon and GeForce cards can do 3+ (may need a driver update).
And you can program with any version, but if your hardware doesn't support it, you'll end up testing under a software renderer (slow!).
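If you want to notice that at runtime, one heuristic is to inspect the renderer string. The substrings below are an assumption based on renderer names I know of ("GDI Generic" is Windows' built-in software implementation, "llvmpipe" is Mesa's software rasterizer), so treat the list as illustrative rather than exhaustive:

    #include <cstdio>
    #include <cstring>

    // Assumes a current context. Warn if we ended up on a software renderer.
    void warnIfSoftwareRenderer() {
        const char* renderer = reinterpret_cast<const char*>(glGetString(GL_RENDERER));
        if (renderer && (std::strstr(renderer, "GDI Generic") ||
                         std::strstr(renderer, "llvmpipe") ||
                         std::strstr(renderer, "Software"))) {
            std::fprintf(stderr, "warning: software renderer in use: %s\n", renderer);
        }
    }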
On a side question: How can I tell what's the highest version of OpenGL my computer can run?
I'll answer the side question above.
I came across the tool below; it's really complete and lets me see every OpenGL version my system currently supports (from 1.0 up to whatever it actually supports), as well as the extensions available on my system, and not only ARB ones: it ranges from NV to ATI, OES, etc.
http://www.realtech-vr.com/glview/download.html
I'm working on a cross-platform desktop application with heavy 2D graphics. I use the OpenGL 2.0 specification because I need vertex shaders. I like the 3.2+ core API because of its simplicity and power, and I think that 3.2+ core could be the choice for the future. But I'm afraid that nowadays this functionality may not be available on some platforms (I mean old graphics cards and the lack (?) of modern Linux drivers). Maybe I should use an OpenGL ES 2.0-like API for easy future porting.
What's the state of affairs with 3.2+ core, cards, and Linux drivers?
Older Intel chips only support OpenGL 1.5. The later chips (since about two years ago) have 2.1, but that performs worse than 1.5. Sandy Bridge claims to support "OpenGL 3" without specifying whether it is capable of 3.3 (as Damon suggests), but Linux drivers only do 2.1 for now. All remotely recent Radeons and NVIDIA hardware with closed-source drivers support 3.3 (geometry shaders), and the 400-500 series support 4.1 (tessellation shaders).
Therefore, the versions you want to aim for are 1.5 (if you care about pre-Sandy-Bridge Intel crap), 2.1 (for pretty much all hardware), 3.3 (for decent hardware & closed-source drivers) or 4.1 (bleeding edge).
I have vertex and fragment shaders written with #version 120 and geometry shaders written in #version 330, to make fallback on old hardware easier.
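In C++ that setup can look like the sketch below; the shader bodies are trimmed-down placeholders of my own, and the point is only that each stage declares the lowest #version it actually needs:

    // Hypothetical shader sources: vertex (and fragment) stages stay at GLSL
    // 1.20 so old hardware can run them; only the geometry stage needs 3.30.
    const char* vertexSrc =
        "#version 120\n"
        "attribute vec4 position;\n"
        "void main() { gl_Position = position; }\n";

    const char* geometrySrc =
        "#version 330\n"
        "layout(points) in;\n"
        "layout(points, max_vertices = 1) out;\n"
        "void main() {\n"
        "    gl_Position = gl_in[0].gl_Position;\n"
        "    EmitVertex();\n"
        "    EndPrimitive();\n"
        "}\n";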
You can stay on OpenGL ES 2.0. Even though ES means Embedded, it's a good approach because it removes all the fixed functions (glBegin, etc.): you are using a subset of OpenGL 2.x. So if you write your software thinking only of OpenGL ES 2.0, it will be fast and work on the majority of hardware.
In reality, OpenGL ES 2.0 and desktop GL have some differences, but I don't think they involve anything you will actually use. If the extension GL_ARB_ES2_compatibility is supported, you have a "desktop" card that supports the complete embedded subset (four functions and some constants), and you can check for it as in the sketch below.
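A minimal sketch of that check on a GL 3.x context (my own illustration; it assumes a function loader is initialized, and uses the per-index extension query that is core since 3.0):

    #include <cstring>

    // Returns true if the desktop driver advertises the ES2 compatibility subset.
    bool hasES2Compatibility() {
        GLint count = 0;
        glGetIntegerv(GL_NUM_EXTENSIONS, &count);
        for (GLint i = 0; i < count; ++i) {
            const char* ext = reinterpret_cast<const char*>(
                glGetStringi(GL_EXTENSIONS, i));
            if (ext && std::strcmp(ext, "GL_ARB_ES2_compatibility") == 0)
                return true;
        }
        return false;
    }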
Now, the real question is: how many years of hardware do you want to support? There is still a lot of very old hardware out there with very poor GL support. Best would be to support the less-old ones (OpenGL 2.0 is already old). :)
I would personally go for OpenGL 3.3, optionally with a fallback for 3.2 plus extensions (which is basically the same). It is the most convenient way of using OpenGL 3.x, and widely supported.
Targeting 3.1 or 3.0 is not really worth it any more, except if you really want to run on Sandy Bridge (which, for some obscure reason, only supports 3.0 although the hardware is perfectly capable of doing 3.3). Also, 3.1 and 3.0 have very considerable changes in shader code, which in my opinion are a maintenance nightmare if you want to support many versions (no such problem with 3.2 and 3.3).
Every piece of hardware that supports 3.2 can also support 3.3; the only hindrance may be that the IHV doesn't provide a recent driver or a user is too lazy to update. Therefore you cannot assume "3.3 works everywhere". The older drivers will usually expose the same functionality via ARB extensions anyway, though.
Mac OS X doesn't support GL-3 contexts at the moment. This summer may change the situation, but I would recommend sticking with GL-2 plus extensions nevertheless.
Depends on your target market's average machine. Although to be honest, OpenGL 3.2+ is pretty ubiquitous these days.