Is it possible to use Cg shaders in WebGL?

I'm aware that I can use GLSL shaders in WebGL via attachShader and compileShader, as I did in OpenGL. My problem is: what about shaders written in Cg? On the desktop I need cgc to compile them; what's the corresponding tool in the WebGL world?

Usually you don't want to.
You could compile your shaders into GLSL with cgc (-profile glslv or -profile glslf), then load them anywhere you want. There is, however, a slight difference between desktop GLSL and ES GLSL (WebGL uses the ES specification), so the shader may need a few hints added at the beginning (such as precision mediump float;, which can easily be #ifdef'd).
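For example, a fragment shader that compiles under both desktop GLSL and WebGL's GLSL ES might start like this (a minimal sketch; uTexture and vTexCoord are made-up names):

#ifdef GL_ES
precision mediump float; // GLSL ES requires a default float precision in fragment shaders
#endif

varying vec2 vTexCoord;
uniform sampler2D uTexture;

void main()
{
    gl_FragColor = texture2D(uTexture, vTexCoord);
}

The GL_ES macro is predefined by ES compilers, so the same source also compiles unchanged as desktop GLSL 1.20.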
Of course, you can't use some Cg functionality this way: if a feature is missing in GLSL, cgc can do nothing about it, e.g. mapping uniforms to specific registers or assigning varyings to specific interpolators.

Related

Automatically compile OpenGL Shaders for Vulkan

Is there any way to automatically compile OpenGL shaders for Vulkan? The problem is with the uniforms.
'non-opaque uniforms outside a block' : not allowed when using GLSL for Vulkan
I have tried compiling for OpenGL, then decompiling with spirv-cross with --vulkan-semantics, but it still has non-opaque uniforms.
spirv-cross seems to only have tools for compiling Vulkan shaders for OpenGL.
[--glsl-emit-push-constant-as-ubo]
[--glsl-emit-ubo-as-plain-uniforms]
A shader meant for OpenGL consumption will not work on Vulkan. Even ignoring the difference in how they consider uniforms, they have very different resource models. Vulkan uses descriptor sets and binding points, with all resources using the same binding indices (set+binding). By contrast, OpenGL gives each kind of resource its own unique set of indices. So a GLSL shader meant for OpenGL consumption might assign a texture uniform and a uniform block to the same binding index. But you can't do that in a GLSL shader meant for Vulkan, not unless the two resources are in different descriptor sets.
If you want to share shaders, you're going to need to employ pre-processor trickery to make sure that the shader assigns resources (including how it apportions uniforms) for the specific target that the shader is being compiled for.
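A rough sketch of that trickery, showing only the resource declarations (glslang defines a VULKAN macro when targeting Vulkan; with other compilers you may have to pass such a define yourself, and uDiffuse/uTint are made-up names):

#if defined(VULKAN)
layout(set = 0, binding = 0) uniform sampler2D uDiffuse;
layout(set = 0, binding = 1) uniform Params { vec4 uTint; }; // non-opaque data must live in a block
#else
uniform sampler2D uDiffuse; // OpenGL: samplers and blocks have separate index spaces
uniform vec4 uTint;         // plain non-opaque uniform, legal only for OpenGL
#endif

Both branches expose the same names to the rest of the shader, so the code below the declarations can stay identical for both targets.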

Can GPU support and test shader code of an older version?

Say I want to test shader code written against an older version, GLSL 1.2.
The GPU in the machine can actually support GLSL 4.0 (according to the hardware specification).
Yes, you should be able to run shaders for a lower version.
Just make sure to identify the GLSL version the code is written against in the very first line of every shader source, e.g. #version 120.
The OpenGL context should also use the compatibility profile, since the core profile does not contain deprecated functionality. Creating the context in compatibility mode is usually the default when you don't explicitly request a core profile.
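A minimal fragment shader written against the older version might look like this (a sketch; vNormal is a made-up varying):

#version 120 // must be the very first line of the shader source
varying vec3 vNormal; // 'varying' exists in 1.20 but was removed from core profiles

void main()
{
    // visualize the normal; 1.20-era built-ins like gl_FragColor still work
    gl_FragColor = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0);
}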

OpenGL: "Fragment Shader not supported by HW" on old ATI card

In our OpenGL game we've got a shader link failure on an ATI Radeon x800 card. glGetProgramInfoLog reports:
Fragment shader(s) failed to link, vertex shader(s) linked.
Fragment Shader not supported by HW
Some googling suggests that we may be hitting an ALU instruction limit, due to a very long fragment shader. Any way to verify that?
I wasn't able to find detailed specs for the x800, nor any way to query the instruction limit at runtime. And even if I was able to query it, how do I determine the number of instructions of my shader?
There are several limits you may hit:
maximum shader length
maximum number of texture indirections (this is the limit most easily crossed; see the sketch after this list)
using unsupported features
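For instance, a dependent texture read like the following costs one indirection on that class of hardware (a sketch; the sampler names are made up):

uniform sampler2D uOffsetMap;
uniform sampler2D uColorMap;
varying vec2 vTexCoord;

void main()
{
    // the second fetch's coordinates depend on the first fetch's result:
    // that dependency is what the hardware counts as a texture indirection
    vec2 offset = texture2D(uOffsetMap, vTexCoord).st;
    gl_FragColor = texture2D(uColorMap, vTexCoord + offset);
}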
Technically the X800 is a shader model 2 GPU, which is about what GLSL 1.20 provides. When I started shader programming with a Radeon 9800 (and the X800 is technically just an upscaled 9800), I quickly abandoned the idea of doing it in GLSL; it was just too limited. And as so often when a computer has only limited resources and capabilities, the way out was assembly, in this case the assembly provided by the ARB_fragment_program extension.
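For a taste of that style, here is a minimal ARB_fragment_program that modulates a texture by the primary color (an illustrative sketch, unrelated to the failing shader):

!!ARBfp1.0
# one TEX and one MUL: counting instructions is trivial in this form
TEMP texel;
TEX texel, fragment.texcoord[0], texture[0], 2D;
MUL result.color, texel, fragment.color;
END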
GLview is a great tool for easily viewing all the limits and supported GL extensions of a GPU/driver combination. If I recall correctly, I previously used AMD's GPU ShaderAnalyzer, which lets you see the compiled assembly version of GLSL shaders. NVidia offers similar functionality with the nvemulate tool.
The X800 is very limited in shader power compared to current GPUs. You would probably have to cut back on shader complexity for this lower-end GPU anyway to achieve decent performance. If you already have your GLSL version running, simply choosing different, simpler fragment shaders for the X800 will probably be the most sensible approach.

How to get an uniform location of a Cg shader with OpenGL?

I've dabbled with basic shader programming before, using the GLSL way. Now I've come back to it, using Cg shaders. Following the tutorial on Josh Beam's website I achieved the desired functionality and was able to change my shader around; however, I couldn't manipulate its uniforms from the OpenGL side.
GLuint handle;
::glGenProgramsARB(1, &handle);
::glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, handle);
// upload the Cg-generated ARB assembly (glProgramStringARB takes a GLsizei length)
::glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB, (GLsizei)strlen(pSource), pSource);
while(true)
{
    ::glEnable(GL_FRAGMENT_PROGRAM_ARB);
    ::glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, handle);
    // render
    ::glDisable(GL_FRAGMENT_PROGRAM_ARB);
    // present backbuffer
}
This works perfectly. But if I call ::glGetUniformLocationARB(handle, pUniformName), I get a GL_INVALID_VALUE error. I tried this both after creating the shader and after binding it, with the same outcome. I have also tried ::glUseProgramObjectARB(handle), as various sources suggested, but this didn't work either, and it seems to belong to the GLSL way (Lighthouse3D tutorials).
I have double-checked that the uniform name is correct.
I have also found an approach (NeHe tutorials) that involves #include-ing Cg headers and calling Cg APIs. Is this not doable using only OpenGL APIs? (The reason is minimalism; I want to make this functionality part of a static library, and I'm all for minimising the number of compile/link dependencies.)
You are confusing different extensions.
The term "uniform" refers to GLSL global variables declared with the uniform keyword. ARB assembly shaders do not have uniforms. They have a similar concept, but they don't call them "uniforms". glGetUniformLocationARB is a GLSL function. The ARB assembly term for them is "program local parameter". These are naturally set by the glProgramLocalParameterARB series of functions.
Oh, and you should never use any GLSL functions that end in ARB. Ever.

Is there a better way than writing several different versions of your GLSL shaders for compatibility sake?

I'm starting to play with OpenGL and I'd like to avoid the fixed-function pipeline as much as possible, since the trend seems to be away from it. However, my graphics card is old and only supports up to OpenGL 2.1. Is there a way to write shaders for GLSL 1.20.8 and then run them without issue on OpenGL 3.0 and 4.0 cards? Perhaps something when requesting the OpenGL context where you can specify a version?
Use the #version directive. Declaring #version 120 on the first line tells newer drivers to compile the shader against GLSL 1.20 rules, and compatibility-profile contexts on 3.0/4.0 hardware will still accept it.
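For the context part of the question, windowing libraries do let you request a specific version; a sketch using GLFW (an assumption, any context-creation API has an equivalent):

#include <GLFW/glfw3.h>

int main()
{
    glfwInit();
    // request an OpenGL 2.1 context, whose native shading language is GLSL 1.20
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 2);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 1);
    GLFWwindow* window = glfwCreateWindow(640, 480, "GLSL 1.20", NULL, NULL);
    // ... render, then clean up
    glfwDestroyWindow(window);
    glfwTerminate();
}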