How to test shaders against different shader model versions - OpenGL

I have many OpenGL shaders. We try to use as much different hardware as possible to evaluate the portability of our product. One of our customers recently ran into rendering issues; it seems the target machine only provides shader model 2.0, while all our development/build/test machines (even the oldest ones) run version 4.0. Everything else (OpenGL version, GLSL version, ...) seems identical.
I didn't find a way to downgrade the shader model version, since it is provided automatically by the graphics card driver.
Is there a way to manually install or select the OpenGL/GLSL/shader model version in use, for development/test purposes?
NOTE: the main targets are Windows XP SP2 and Windows 7 (32- and 64-bit), with both ATI and NVIDIA cards.

OpenGL does not have the concept of "shader models"; that's a Direct3D thing. It only has versions of GLSL: 1.10, 1.20, etc.
Every OpenGL version matches a specific GLSL version. GL 2.1 supports GLSL 1.20. GL 3.0 supports GLSL 1.30. For GL 3.3 and above, they stopped fooling around and just used the same version number, so GL 3.3 supports GLSL 3.30. So there's an odd version number gap between GLSL 1.50 (maps to GL 3.2) and GLSL 3.30.
Technically, OpenGL implementations are allowed to refuse to compile shader versions older than the one that matches the current GL version. As a practical matter, however, you can pretty much shove any GLSL shader into any OpenGL implementation, as long as the shader's version is less than or equal to the version the implementation supports. (This hasn't been tested on Mac OS X Lion's implementation of GL 3.2 core.)
There is one exception: core contexts. If you try to feed a shader that uses functionality removed from core through a core OpenGL context, it will complain.
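If you want to see that complaint (or its absence) programmatically rather than by eye, a minimal sketch is to compile the shader and read back the compile status and info log. This assumes a current context with GL 2.0+ function pointers already loaded (here via GLEW); try_compile is just an illustrative helper name, not part of any API.

    /* Sketch: ask the current implementation whether it accepts a given shader.
     * Assumes a context is current and GL 2.0+ entry points are loaded (e.g. GLEW). */
    #include <GL/glew.h>
    #include <stdio.h>

    static int try_compile(GLenum stage, const char *source)
    {
        GLuint shader = glCreateShader(stage);          /* e.g. GL_FRAGMENT_SHADER */
        glShaderSource(shader, 1, &source, NULL);
        glCompileShader(shader);

        GLint ok = GL_FALSE;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
        if (!ok) {
            char log[1024];
            glGetShaderInfoLog(shader, sizeof log, NULL, log);
            fprintf(stderr, "rejected: %s\n", log);     /* core contexts complain here */
        }
        glDeleteShader(shader);
        return ok == GL_TRUE;
    }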
There is no way to force OpenGL to give you a particular version. You can ask for one with wgl/glXCreateContextAttribs, but the implementation is allowed to give you any higher version, so long as that version is backwards compatible with what you asked for.
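For completeness, here is a Windows-only sketch of that request path via the WGL_ARB_create_context extension. It assumes hdc is a valid device context and a temporary legacy context is already current (wglGetProcAddress needs one); the function name create_context_with_version is made up for illustration. Whatever comes back should be checked with glGetString, since the driver may return any backwards-compatible higher version:

    /* Sketch: request a specific GL version; the attribute names come from
     * the WGL_ARB_create_context extension (declared in wglext.h). */
    #include <windows.h>
    #include <GL/gl.h>
    #include <GL/wglext.h>
    #include <stdio.h>

    HGLRC create_context_with_version(HDC hdc, int major, int minor)
    {
        PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
            (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB");
        if (!wglCreateContextAttribsARB)
            return NULL;    /* extension missing: only a legacy context is possible */

        const int attribs[] = {
            WGL_CONTEXT_MAJOR_VERSION_ARB, major,
            WGL_CONTEXT_MINOR_VERSION_ARB, minor,
            0
        };
        HGLRC ctx = wglCreateContextAttribsARB(hdc, NULL, attribs);
        if (ctx) {
            wglMakeCurrent(hdc, ctx);
            /* You may get a higher, backwards-compatible version than requested: */
            printf("context version: %s\n", (const char *)glGetString(GL_VERSION));
        }
        return ctx;
    }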

Related

Setting the OpenGL version to 2.0 appears to do nothing, as 3.3+ features still work

Someone was asking me for a test with OpenGL 2.x since they have hardware that supports only up to OpenGL 2.1.
I figured I'd try it out by setting the window hints in GLFW to use the major/minor version of 2 and 0.
Problem is, I'm still using #version 330 in my shaders, and it works. However, it would not let me use the GL version 2 hints while I (accidentally) still had the Core profile hint set. This seems to indicate that my version choice is doing something, just not what I expect.
I want to restrict myself to 2.1 to see whether my application would run, and if it doesn't, see what I can change to make it work. Problem is, I don't have any 2.1 hardware, since my computers are all from 2015 or later.
Is there a way I can emulate 2.1 (on Windows) somehow and have it crash/die if I try using features it doesn't support? Apparently the hints I'm using are not helping.
As far as I know, the major/minor version flags don't set the version of your OpenGL context but the required feature set. So if you set the flags to 3.3, for example, you will usually get a 4.5 or 4.6 context, as those versions are typically the latest OpenGL versions your GPU supports that are compatible with OpenGL 3.3. Getting an OpenGL 2.1 core context should be impossible, as the defining feature of the core profile is that it drops some OpenGL 1.0-2.1 functionality. So this isn't really surprising.
I think your best option here is to use headers that only contain OpenGL 2.1 functions. GLAD, for example, lets you specify exactly which version to generate headers for, as sketched below.
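A rough sketch of that combination: GLFW 3 hints asking for a 2.1 context, plus a check of what the driver actually handed back, with the headers/loader generated by GLAD for 2.1 only so that anything newer fails at compile time. The gladLoadGLLoader call below assumes GLAD 1.x's default C output; adjust to whatever loader you generated.

    /* Sketch: request 2.1, verify what was created. With GLAD headers generated
     * only for OpenGL 2.1, calls to newer functions simply do not compile. */
    #include <glad/glad.h>
    #include <GLFW/glfw3.h>
    #include <stdio.h>

    int main(void)
    {
        if (!glfwInit()) return 1;

        /* Minimum-version hints, not an upper bound: the driver may still
         * return e.g. a 4.6 compatibility context. */
        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 2);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 1);
        /* Do not ask for GLFW_OPENGL_CORE_PROFILE here: core profiles only
         * exist from 3.2 onwards, which is why the 2.0 + Core combination failed. */

        GLFWwindow *win = glfwCreateWindow(640, 480, "GL 2.1 test", NULL, NULL);
        if (!win) { glfwTerminate(); return 1; }
        glfwMakeContextCurrent(win);

        if (!gladLoadGLLoader((GLADloadproc)glfwGetProcAddress)) return 1;

        printf("context version: %s\n", (const char *)glGetString(GL_VERSION));

        glfwDestroyWindow(win);
        glfwTerminate();
        return 0;
    }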

Support for Cg profiles in modern hardware

I have an in-house application that uses the now-deprecated NVIDIA SceniX scene graph and Cg shaders. It works fine, and as it is in-house we can choose which hardware to run it on.
The shaders currently use the vp40/fp40 profiles (though I can change them to use later profiles like GLSLV/GLSLF). I am trying to confirm that the current crop of NVIDIA hardware still supports Cg shaders, i.e. if we purchase the latest OpenGL 4 GeForce or Quadro cards, will they still support the Cg profiles? I have asked on the NVIDIA forum but got no answer. Eventually we will have to upgrade to a new scene graph and GLSL, but I want to know what 'legacy' support there is for the Cg shaders.
Thanks
Yes, you're perfectly fine. In fact the GLSL implementation in the NVIDIA drivers is actually an add-on to the Cg compiler: even on latest-generation GPUs the NVIDIA driver internally first translates GLSL to NV/ARB_program_… assembly (source code, in fact) and runs this through the assembler. It's unlikely NVIDIA is going to change that in the near future (although the introduction of SPIR-V may force their hand). And all the legacy OpenGL ARB/NV_program interfaces are still supported just fine as extensions (even in an OpenGL 4 core profile).

ARB_draw_buffers available but not supported by shader engine

I'm trying to compile a fragment shader using:
#extension ARB_draw_buffers : require
but compilation fails with the following error:
extension 'ARB_draw_buffers' is not supported
However, when I check for the availability of this particular extension, either by calling glGetString(GL_EXTENSIONS) or using OpenGL Extension Viewer, I get positive results.
The OpenGL version is 3.1, and the graphics card is an Intel HD Graphics 3000.
What might be the cause of that?
Your driver in this scenario supports 3.1; it is not clear what OpenGL version you are targeting.
If you can establish OpenGL 3.0 as the minimum required version, you can write your shader using #version 130 and avoid the extension directive altogether.
The ARB extension mentioned in the question is only there for drivers that cannot implement all of the features required by OpenGL 3.0, but have the necessary hardware support for this one feature.
That was its intended purpose, but there do not appear to be many driver / hardware combinations in the wild that actually have this problem. You probably do not want the headache of writing code that supports them anyway ;)
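For reference, a sketch of the #version 130 route: declare the colour outputs yourself and bind them to draw buffers before linking. The shader string and variable names are made up for illustration, and GL 3.0 function pointers are assumed to be loaded already. (As an aside, if you do keep the directive, note that GLSL extension names normally carry the GL_ prefix, i.e. GL_ARB_draw_buffers.)

    /* Sketch: GLSL 1.30 fragment shader with two colour outputs, no extension
     * directive needed. Assumes GL 3.0 entry points are loaded (e.g. via GLEW). */
    #include <GL/glew.h>

    static const char *frag_src =
        "#version 130\n"
        "out vec4 color0;\n"
        "out vec4 color1;\n"
        "void main() {\n"
        "    color0 = vec4(1.0, 0.0, 0.0, 1.0);\n"
        "    color1 = vec4(0.0, 1.0, 0.0, 1.0);\n"
        "}\n";

    /* Before glLinkProgram, route each output variable to a draw buffer: */
    static void bind_outputs(GLuint program)
    {
        glBindFragDataLocation(program, 0, "color0");   /* -> draw buffer 0 */
        glBindFragDataLocation(program, 1, "color1");   /* -> draw buffer 1 */
    }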

Is it viable to replace GLSL with Cg?

http://http.developer.nvidia.com/Cg/TessellationControlShader.html
I have some questions regarding Cg.
What OpenGL version does Cg support? On their site they state:
Opengl Functionality Requirements
OpenGL 1.0
Which seems a little odd to me. To me this means that I need at least OpenGL 1.0 to use all the OpenGL features in Cg. So literally all newer OpenGL features are missing?
Also, the compute shader seems to be missing:
GeometryShader, PixelShader, TessellationEvaluationShader,
VertexShader, FragmentProgram, GeometryProgram,
TessellationControlProgram, TessellationEvaluationProgram,
VertexProgram
Is Cg now a viable alternative to replace GLSL 4.x? Can I write all shaders in Cg that I could write in GLSL 4.3?
Is Cg now a viable alternative to replace GLSL 4.x? Can I write all shaders in Cg that I could write in GLSL 4.3?
No. While some OpenGL 4.x features, such as tessellation, are exposed as of Cg 3.1, others are not.
Notable missing features in Cg 3.1 (and their OpenGL names) include:
compute shaders
atomic counters
shader-writeable storage blocks (shader storage blocks)
shader-writable textures (image load / store)
runtime shader function selection (shader subroutines)
In general, Cg tends to lag two or three years behind the latest OpenGL release.
Cg has been end-of-lifed by NVidia so it will not be developed going forward:
The Cg Toolkit is a legacy NVIDIA toolkit no longer under active development or support. Cg 3.1 is our last release and while we continue to make it available to developers, we do not recommend using it in new development projects because future hardware features may not be supported.
So I think the best answer would be No.
I'm pretty sure that OpenGL-1.0 is a typo. DirectX-11 is about the function level you get with OpenGL-4.0. Now look what key is right below the 4 on the numpad.
In fact, not a single NVIDIA GPU ever supported an OpenGL version as low as 1.0; OpenGL 1.0 dates back some 20 years.
Is Cg now a viable alternative to replace GLSL 4.x?
Well, I personally don't see a reason to use Cg, except if you want to support both OpenGL and DirectX with a common set of shaders. But why would you want cross-API compatibility? If you aim for portability, then OpenGL clearly wins over DirectX.
IMHO the main reason to keep using Cg is if you have to maintain a legacy product that already uses Cg. Remember that Cg was introduced long before OpenGL had a high-level shading language.
Can I write all shaders in Cg that I could write in GLSL 4.3?
Yes.

How do I know which version of OpenGL I am using?

I started writing programs in C (for now) using GLFW and OpenGL. The question I have is: how do I know which version of OpenGL my program will use? My laptop says that my video card supports OpenGL 3.3. Typing "glxinfo | grep -i opengl" returns:
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce 9600M GT/PCI/SSE2
OpenGL version string: 3.3.0 NVIDIA 285.05.09
OpenGL shading language version string: 3.30 NVIDIA via Cg compiler
OpenGL extensions:
So is OpenGL 3.3 automatically being used?
Just call glGetString(GL_VERSION) (once the context is initialized, of course) and print the result (which is actually the same thing glxinfo does, I suppose):
printf("%s\n", glGetString(GL_VERSION));
Your program should automatically use the highest version your hardware and driver support, which in your case seems to be 3.3. For creating a core-profile context for OpenGL 3+ (one from which deprecated functionality has been completely removed) you have to take special measures, but since version 2.7 GLFW has had a means of doing this: the glfwOpenWindowHint function. If you don't want to explicitly disallow deprecated functionality, you can just use the context given to you by GLFW's default context creation functions, which, as said, will support the highest version possible for your hardware and drivers.
Also keep in mind that to use OpenGL functionality above version 1.1 you need to retrieve the corresponding function pointers yourself, or use a library that handles this for you, such as GLEW.
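For example, here is a minimal sketch using GLFW 3 and GLEW (note the API has changed since the GLFW 2.7 functions mentioned above; glfwOpenWindowHint became glfwWindowHint and glfwOpenWindow became glfwCreateWindow):

    /* Sketch: create a default context, load function pointers with GLEW,
     * then print the GL and GLSL versions the driver actually gave us. */
    #include <GL/glew.h>
    #include <GLFW/glfw3.h>
    #include <stdio.h>

    int main(void)
    {
        if (!glfwInit()) return 1;

        GLFWwindow *win = glfwCreateWindow(640, 480, "version check", NULL, NULL);
        if (!win) { glfwTerminate(); return 1; }
        glfwMakeContextCurrent(win);

        if (glewInit() != GLEW_OK) { glfwTerminate(); return 1; }  /* loads GL > 1.1 entry points */

        printf("GL_VERSION  : %s\n", (const char *)glGetString(GL_VERSION));
        printf("GLSL version: %s\n", (const char *)glGetString(GL_SHADING_LANGUAGE_VERSION));

        glfwDestroyWindow(win);
        glfwTerminate();
        return 0;
    }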