I am having problems getting my GLSL shaders to work on both AMD and Nvidia hardware.
I am not looking for help fixing a particular shader, but how to generally avoid getting these problems. Is it possible to check if a shader will compile on AMD/Nvidia drivers without running the application on a machine with the respective hardware and actually trying it?
I know, in the end, testing is the only way to be sure, but during development I would like to at least avoid the obvious problems.
Everyone using GLSL must have these problems, so why can't I find a good way to fix them?
Is it possible to check if a shader will compile on AMD/Nvidia drivers without running the application on a machine with the respective hardware and actually trying it?
No. If you are going to be serious about developing applications, testing on a variety of hardware is the only reliable way to go about it. And if you're not going to be serious, then who cares.
Generally speaking, the easiest way to handle this for a small team is to avoid the problem altogether. Most driver incompatibilities come from attempting to do something unorthodox: passing arrays as output/input varying variables, passing matrices as attributes, using more recent driver features, etc. So... don't do that. Use only solid, safe constructs that have been around in GLSL for a long time and have almost certainly been exercised by real-world OpenGL applications.
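For instance, here is a hedged illustration of the first point, with GLSL embedded as C++ string literals the way it would be handed to glShaderSource (the variable names are made up):

    // Risky on some drivers: an array used as a varying between shader stages.
    const char* riskyVarying =
        "varying vec2 uv[4];\n";

    // Safer: spell the components out as individual varyings.
    const char* safeVaryings =
        "varying vec2 uv0;\n"
        "varying vec2 uv1;\n"
        "varying vec2 uv2;\n"
        "varying vec2 uv3;\n";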
Using Nvidia's NVEmulate and AMD's GPU ShaderAnalyzer could be an option.
AMD's GPU ShaderAnalyzer is a stand-alone GLSL/HLSL compiler. Nvidia's NVEmulate is a tool that emulates the features of different (better) Nvidia graphics cards in software. So if you have an Nvidia card, you can simply run your program to test it (possibly emulating another Nvidia card with NVEmulate) and use the ShaderAnalyzer to see if your shaders compile on AMD cards.
If your shader runs on AMD, it will most probably run on Nvidia. You can still test this with cgc (Nvidia's stand-alone Cg compiler, part of the Cg Toolkit), which compiles GLSL and Cg code to binary or cross-compiles it to HLSL. This is the compiler that Nvidia's drivers use for GLSL anyway.
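For example, an invocation along these lines (the file name is made up; check cgc -help for the exact flags shipped with your toolkit version) treats the input as GLSL and compiles it for a GeForce 8-class fragment profile:

    cgc -oglsl -profile gp4fp myshader.frag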
The bonus is that you can also see the binary/assembly code of your shader, which is very helpful for low-level optimizations.
What no tool can tell you is whether the shader works (as expected) on different hardware. I recently found out that some new AMD drivers don't handle default uniform values properly... without a warning or error message... but that is a different story.
So at some point you have to test your code on the target hardware.
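(For reference, the feature in question is an initializer in the uniform declaration itself; a minimal sketch, embedded as a C++ string literal:)

    // The GLSL feature mentioned above: a default value in the uniform
    // declaration itself (legal since GLSL 1.20). The driver should use this
    // value until the application overwrites it with glUniform*.
    const char* fragSrc =
        "#version 120\n"
        "uniform vec4 tint = vec4(1.0, 1.0, 1.0, 1.0);\n"
        "void main() { gl_FragColor = tint; }\n";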
I have been wanting to make a game in OpenGL and C++ for a while now, and I would love some explanation on how exactly it works and what it is.
Can computer graphics be made without OpenGL? Most of the tutorials I have seen online show how to use OpenGL for the most basic graphics drawing; is it possible to directly interface with your GPU?
How does OpenGL work on different CPUs and operating systems? As far as I know, languages like C++ must be recompiled if they are to be used on an ARM processor and the like; is this not the case for GPUs in general?
If you can indeed make graphics without OpenGL, does anybody still do this? How much work and effort does OpenGL save in general, and how complex are the systems that OpenGL facilitates for us?
Are there other libraries like OpenGL that are commonly used? If not, will new libraries eventually come and take its place, or is it perfect for the job and not going anywhere?
How exactly does it work and what is it?
OpenGL defines an interface (an API) that you as a programmer can use to develop graphics programs. The interface is provided to you in the form of header files that you include in your project. It is meant to be multi-platform, so that code that uses OpenGL can be compiled on different operating systems. The people who manage the OpenGL specification do not provide an implementation of the specified functionality; that is done by the OS and hardware vendors.
Can computer graphics be made without OpenGL?
Yeah, sure. You can, e.g., calculate the whole image manually in your program and then call some OS-specific function to put that image on the screen (like BitBlt on Windows).
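A minimal sketch of that approach on Windows (the function name is made up; it assumes you already have a valid window handle), filling a pixel buffer on the CPU and blitting it with GDI:

    #include <windows.h>
    #include <cstdint>
    #include <vector>

    // Software-render a gradient on the CPU and put it on the screen with GDI.
    // StretchDIBits is used here instead of BitBlt because it accepts a raw
    // memory pixel buffer directly.
    void PresentSoftwareFrame(HWND hwnd, int width, int height)
    {
        std::vector<uint32_t> pixels(width * height);
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x)
                pixels[y * width + x] =
                    ((x * 255 / width) << 16) | ((y * 255 / height) << 8);

        BITMAPINFO bmi = {};
        bmi.bmiHeader.biSize        = sizeof(bmi.bmiHeader);
        bmi.bmiHeader.biWidth       = width;
        bmi.bmiHeader.biHeight      = -height;  // negative height = top-down rows
        bmi.bmiHeader.biPlanes      = 1;
        bmi.bmiHeader.biBitCount    = 32;
        bmi.bmiHeader.biCompression = BI_RGB;

        HDC hdc = GetDC(hwnd);
        StretchDIBits(hdc, 0, 0, width, height, 0, 0, width, height,
                      pixels.data(), &bmi, DIB_RGB_COLORS, SRCCOPY);
        ReleaseDC(hwnd, hdc);
    }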
How does OpenGL work on different CPUs and operating systems?
Each OS will have its own implementation of the OpenGL specification, which will usually call into the hardware drivers. So let's say you have a machine with Windows and an Nvidia graphics card. If you run some program that calls glDrawElements, it will look like this:
your_program calls glDrawElements
which calls glDrawElements implementation written by people from Microsoft
which calls Nvidia drivers written by people from Nvidia
which operates the HW
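For concreteness, the call at the top of that chain is ordinary application code; everything below it happens inside the OS layer and the vendor driver:

    // The call at the top of the chain, as it appears in application code.
    // Assumes a VAO, shader program and element buffer are already bound.
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);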
If you can indeed make graphics without OpenGL, does anybody still do this?
Yeah, sure. Some people might want to implement their own rendering engine from the ground up (although that is a really hardcore thing to do).
Are there other libraries like OpenGL that are commonly used? If not, will new libraries eventually come and take its place, or is it perfect for the job and not going anywhere?
Sure. There is DirectX, which is maintained by Microsoft and targets only Windows platforms, and Vulkan, which can be seen as the successor to OpenGL.
I have seen that OpenCL is widely supported by CPU implementations as well as some GPU implementations.
For the cases where there is a GPU but no GPU implementation available, would it be feasible to implement OpenCL using OpenGL?
Maybe most operations would map quite well to GLSL fragment shaders or even compute shaders.
If this is feasible, where would one begin? Is there any 'minimal' CPU OpenCL implementation that one could start from?
For the cases where there is a GPU but no GPU implementation available, would it be feasible to implement OpenCL using OpenGL?
Possible: Yes, certainly. Every Turing complete machine can emulate any other Turing complete machine.
Feasible: OpenGL implementations' GLSL compilers are already prima donnas, each implementation's compiler behaving a little differently. OpenGL itself has tons of heuristics in it for selecting code paths. Shoehorning a makeshift OpenCL on top of OpenGL + GLSL would be an exercise in a lot of pain.
Required: Absolutely not. For every GPU that has the capabilities required to actually support OpenCL, the drivers support OpenCL anyway, so this is a moot non-issue.
OpenCL has certain very explicit requirements on the precision of floating-point operations. GLSL's requirements are far more lax. So even if you implemented the OpenCL API on top of OpenGL, you would never be able to get the same behavior.
Oh and in case you start thinking otherwise, Vulkan is no better in this regard.
When I develop shader code, I often find myself in the situation where the shader works perfectly on my machine, but on other graphics cards, drivers, operating systems, etc. it doesn't.
How can I achieve shader compatibility?
I see a few approaches:
Test on many different systems. But which systems to choose? Testing with every card on every OS and every driver is not realistic. Maybe we can assume that the vendors care about backward compatibility? In that case, testing with old cards and drivers might be sufficient.
Ask the driver for a specific version and core profile. This helps a bit, but the drivers seem very lenient, allowing me to code things that aren't in the spec.
Code checkers that check the code for strict compatibility with a certain spec. There don't seem to be such tools around.
Don't bother and wait for bug reports from users. Yet the error messages generated by the drivers are rather poor, and the observed behaviour might be as uninsightful as a black screen.
I'm targeting Win/Linux/OSX platforms. Not consoles.
Khronos has released the OpenGL / OpenGL ES Reference Compiler, which can be used to validate a shader's source code. From the site:
The primary purpose of the reference compiler is to identify shader portability issues.
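The reference compiler's stand-alone command-line tool is glslangValidator; a typical check (the file name here is made up) is simply:

    glslangValidator myshader.vert

Any portability diagnostics are printed to the console.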
I have developed a program which makes use of many of OpenGL's aspects, ranging from rather new to deprecated functionality, and I want to ensure that it works correctly on the great majority of machines, especially ones with outdated graphics cards.
What is the best way to maximize the (backwards)compatibility of an OpenGL application?
How can I test my program for compatibility with older hardware without actually having a test machine with older hardware?
What ways are there to find the underlying causes of the issues which may be encountered during compatibility testing?
What is the best way to maximize the (backwards)compatibility of an OpenGL application?
Define "compatibility"? If you want an application to run on as much hardware as possible, then you basically have to give up on shaders entirely and stick to about GL 1.4. The main confounding issue here are Intel driver bugs; many pieces of older Intel hardware will claim support for GL 2.0 or 2.1, but they have innumerable failings in this support.
How can I test my program for compatibility with older hardware without actually having a test machine with older hardware?
You don't. Compatibility with old hardware is about more than just sticking to a standard. It's about making sure that your program doesn't encounter driver bugs. And the only way to do that is to actually test on the hardware of interest.
What ways are there to find the underlying causes of the issues which may be encountered during compatibility testing?
Test the same code on recent hardware. If it has the same failures, then the problem is likely in your code. If it works fine on recent hardware but fails on older stuff, then the problem is almost certainly a driver bug with old hardware drivers.
Develop a workaround.
Well, the best way to maximize backwards compatibility, and a powerful tool for tracking down a target machine's functionality (imho), is to use something like GLEW: The OpenGL Extension Wrangler Library. It will load OpenGL version-specific functions for you, and you can test whether they are supported by the user's system (or, more correctly, by the video drivers).
This library is very simple to use, it is well documented, and you can google a lot of examples.
So if the target machine doesn't have some new OpenGL functions, you load a module named "opengl_old.cpp" (for example); or if it doesn't have some functionality that is already deprecated (like glBegin(), glEnd()), you'd better go on with "opengl_new.cpp".
Basically, the biggest changes came in OpenGL 3.0 (and further in 3.3), with shaders introduced as the only non-deprecated graphics pipeline, so you can make two OpenGL modules in your program: one for OpenGL 1 & 2 and one for OpenGL 3 & 4. At least that is how I solved this problem in my own code.
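A minimal sketch of that kind of version check with GLEW (the LoadModernPath/LoadLegacyPath hooks are made up to stand in for the two modules):

    #include <GL/glew.h>

    // Hypothetical hooks into the two modules described above.
    bool LoadModernPath();   // shader-based path ("opengl_new.cpp")
    bool LoadLegacyPath();   // fixed-function path ("opengl_old.cpp")

    // Call once after an OpenGL context has been created and made current.
    bool ChooseRenderPath()
    {
        if (glewInit() != GLEW_OK)
            return false;  // GLEW itself failed to initialize

        // GLEW_VERSION_3_3 is a flag GLEW fills in at runtime from the driver.
        return GLEW_VERSION_3_3 ? LoadModernPath() : LoadLegacyPath();
    }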
To test some functionality, you can specify a concrete version of the OpenGL API to be loaded when creating the context.
I read on the OpenGL Wiki that modern GPUs are only programmable using shaders:
Modern GPUs no longer support fixed function. Everything is done with shaders. In order to preserve compatibility, the GL driver generates a shader that simulates the fixed function. It is recommended that all new modern programs use shaders. New users need not learn fixed function related operations of GL such as glLight, glMaterial, glTexEnv and many others.
Does that mean that if we are not using shaders/GLSL in OpenGL, we don't access the GPU at all and do the computation using only the CPU?
No. It means that all fixed function stuff is automatically converted to shaders by the drivers.
Everything is done with shaders. In order to preserve compatibility, the GL driver generates a shader that simulates the fixed function.
These shaders still run on the GPU (as all shaders do). They are just automatically made for you.
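To make that concrete, here is a hypothetical sketch of the kind of fragment shader a driver might generate for plain one-texture GL_MODULATE texturing (embedded as a C++ string literal; illustrative only, not any driver's actual output):

    // A simplified version of what "the driver generates a shader for you"
    // could look like: interpolated vertex color multiplied by the texel.
    const char* generatedFragmentShader =
        "#version 120\n"
        "uniform sampler2D tex0;\n"
        "void main() {\n"
        "    gl_FragColor = gl_Color * texture2D(tex0, gl_TexCoord[0].st);\n"
        "}\n";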