How do I solve a for-loop error in a WebGL shader (GLSL)?

When I use a uniform int variable as the bound of a for loop, it reports an error.
When I use a constant instead, it compiles without error.
How can I solve this?
// error INVALID_OPERATION
uniform int myLen;
for (int i = 0; i < myLen; i += 1)
// success
const int myNum = 10;
for (int i = 0; i < myNum; i += 1)

I am guessing you are targeting WebGL 1.
If we look at the specification for The OpenGL® ES Shading Language, version 1.00, which is what WebGL 1 uses, specifically the section "Appendix A: Limitations for ES 2.0" (OpenGL ES 2.0 is what WebGL 1 is based on), it says:
In general, control flow is limited to forward branching and to loops where the maximum number of
iterations can easily be determined at compile time.
[…]
for loops are supported but with the following restrictions:
[…]
The for statement has the form:
for ( init-declaration ; condition ; expression ) statement
[…]
condition has the form
loop_index relational_operator constant_expression
where relational_operator is one of: > >= < <= == or !=
Note the "constant_expression". This unfortunately means that you aren't allowed* to use a uniform variable for your loop bound, like you did.
I believe this is different in WebGL 2. You might want to try using that if it's an option.
* The GLSL ES spec does say "Within the GLSL ES specification, implementations are permitted to implement features beyond the minima described in this section, without the use of an extension." However, unfortunately WebGL's specification prohibits this:
A WebGL implementation must only accept shaders which conform to The OpenGL ES Shading Language, Version 1.00 [GLES20GLSL], and which do not exceed the minimum functionality mandated in Sections 4 and 5 of Appendix A
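A common workaround under these restrictions is to loop up to a compile-time constant upper bound and break out early once the uniform count has been reached. This is only a sketch: MAX_LEN is a hypothetical maximum you would have to choose yourself, the function name is just a placeholder, and myLen is assumed never to exceed MAX_LEN.
// WebGL 1 (GLSL ES 1.00) sketch: the loop bound is a constant expression,
// so Appendix A is satisfied; the uniform only controls the early exit.
const int MAX_LEN = 64;   // hypothetical compile-time maximum
uniform int myLen;        // assumed to satisfy myLen <= MAX_LEN

void accumulate() {
    for (int i = 0; i < MAX_LEN; i += 1) {
        if (i >= myLen) {
            break;        // stop once the real, uniform-driven length is reached
        }
        // ... loop body using i ...
    }
}
In WebGL 2 (GLSL ES 3.00, #version 300 es) the original uniform-bounded loop should compile as written, so this workaround is only needed if you have to stay on WebGL 1.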

Related

Boolean logic on my fragments takes a lot of VRAM, how can I avoid this?

I have a very simple piece of GLSL 330 code:
if (colorOut.r <= 1.0 && colorOut.r > 0.7)
{
    colorOut.r *= color_1.r;
}
I have over 40 comparisons like this.
However, this is creating a world of trouble for me. I've been told that AND, NOT, etc. take a lot of video memory, and I'm developing a plugin for After Effects whose users mostly don't have strong GPUs (I have done a survey and most of them use mobile versions of mid-range GPUs). So I thought I'd ask if there's a possible alternative to using AND, or even to if itself, because I've been told fragment shaders don't like if in the main branch at all.
Thanks.
For a multiplexing scenario like yours you can use branchless programming. You could, for example, use something like this, where the boolean comparisons are "approximated":
colorOut.r = mix(colorOut.r, colorOut.r * color_1.r,
                 clamp(pow(1.0 - colorOut.r, 20.0), 0.0, 1.0)
               * clamp(pow(colorOut.r - 0.7, 20.0), 0.0, 1.0));
Note that a ternary usually doesn't cause that many problems, and the following should be easy on resources, since it doesn't cause diverging branches:
colorOut.r = mix(colorOut.r, colorOut.r * color_1.r,
                 (colorOut.r <= 1.0 && colorOut.r > 0.7) ? 1.0 : 0.0);
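If you want the weight to track the original comparison exactly rather than approximately, step() also gives you a branchless 0/1 value. Here is a sketch reusing the colorOut and color_1 names from the question; step(edge, x) returns 1.0 when x >= edge and 0.0 otherwise, so the product below is 1.0 exactly when 0.7 < colorOut.r <= 1.0:
// 1.0 when colorOut.r <= 1.0
float leq1 = step(colorOut.r, 1.0);
// 1.0 when colorOut.r > 0.7
float gt07 = 1.0 - step(colorOut.r, 0.7);
colorOut.r = mix(colorOut.r, colorOut.r * color_1.r, leq1 * gt07);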

imageAtomicExchange won't compile

I'm trying to use two OpenGL images, one of which is sparse and the other used as a sort of page table, in which I keep track of the pages actually committed.
I have a simple little shader, which looks like this (main not included):
#version 450 core
#extension GL_ARB_shader_image_load_store : require
uniform float gridSize;
uniform float pageTableSize;
bool isPageInMemoryOrRequest(in ivec3 pos)
{
    bool returnValue = false;
    if ( 255u == imageAtomicExchange(pageTable, pos, 128u) )
    {
        returnValue = true;
    }
    return returnValue;
}
And my problem is that this won't compile. I keep getting this message:
Error C1115: unable to find compatible overloaded function "imageAtomicExchange(struct uimage3D1x8_bindless, ivec3, uint)"
I'm pretty sure I've never seen that _bindless part anywhere in the specs, and I'm not exactly sure how the compiler figures out that this is a bindless texture at compile time (or maybe they're all bindless in the latest drivers).
I've got a GTX660TI and I'm using the 352.86 drivers.
I'm wondering if anyone's had this sort of issue before and could tell me what might the problem be.
Thanks in advance.
According to the extension specification of ARB_shader_image_load_store (Section 8.X, Image Functions), there is only a very limited number of supported formats for atomic operations:
Atomic memory operations are supported on only a subset of all image variable types; the image must be either:
an image variable with signed integer components (iimage*) and a format qualifier of "r32i", or
an image variable with unsigned integer components (uimage*) and a format qualifier of "r32ui".
I assume from the error message that you have tried to use an r8ui format, which is not supported.
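If a 32-bit page table is acceptable, switching the image to r32ui should satisfy that requirement. Below is a sketch of how the declaration and the function could look; the original pageTable declaration isn't shown in the question, so the layout qualifier here is an assumption:
// Assumed declaration: atomics require r32ui (or r32i for iimage*).
layout(r32ui) uniform uimage3D pageTable;

bool isPageInMemoryOrRequest(in ivec3 pos)
{
    // imageAtomicExchange has an overload for r32ui uimage* variables,
    // so this call now resolves.
    return 255u == imageAtomicExchange(pageTable, pos, 128u);
}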

What is the limit on work item (shader instance) memory in WebGL?

I declare an array in my WebGL vertex shader:
attribute vec2 position;
void main() {
#define length 1024
float arr[length];
// use arr so that it doesn't get optimized away
This works, but if I increase length to 2048 then gl.drawArrays does nothing. There are no errors: the shaders compile, the program links and passes gl.validateProgram. I'm guessing that I tried to use too much memory on the stack. Is there a better, programmatic way to discover this limit? Am I doing something else wrong?
There are no errors: the shaders compile, the program links and passes gl.validateProgram.
As guaranteed by the spec!
Section 2.10: "Vertex Shaders", page 42:
A shader should not fail to compile, and a program object should not fail to
link due to lack of instruction space or lack of temporary variables.
The GLSL spec helpfully notes:
Appendix A, section 3: "Usage of Temporary Variables":
The maximum number of variables is defined by the conformance tests.
You can get your very own copy of the conformance tests for the low, low price of $14,000-$19,000.
However, you can at least detect this situation (Section 2.10, page 41):
It is not always possible to determine at link time if a program object actually will execute. Therefore validation is done when the first rendering command (DrawArrays or DrawElements) is issued, to determine if the currently active program object can be executed. If it cannot be executed then no fragments will be rendered, and the rendering command will generate the error INVALID_OPERATION.

OpenGL anisotropic filtering support, contradictory check results

When checking if anisotropic filtering is supported, I get contradictory results.
if (glewIsSupported("GL_EXT_texture_filter_anisotropic") || GLEW_EXT_texture_filter_anisotropic) {
    std::cout << "support anisotropic" << std::endl;
}
GLfloat max;
glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &max);
std::cout << max << std::endl;
The output for this section on my machine is:
16
So seemingly an anisotropic filtering of 16 is supported, but glewIsSupported as well as the glew extension string say the opposite.
Is checking for GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT enough and is the glew check wrong, or is something different going on?
Apparently there is a known bug in GLEW where glGetString(GL_EXTENSIONS) is used even in an OpenGL 3+ context, instead of glGetStringi, which replaced that way of querying extensions in OpenGL 3+.
So until patched, extension querying must be done manually.
A possible way to solve the chicken and egg problem is to call glGetString(GL_EXTENSIONS) and check glGetError() for GL_INVALID_ENUM. This should only be raised in case GL_EXTENSIONS is not available. If you encounter this error, try glGetStringi. Don't forget to check the errors here, too. GLEW doesn't (as of version 1.10 :/ ).

incorrect value from glGetProgramiv(program, GL_ACTIVE_UNIFORM_MAX_LENGTH, outParam)

I need the maximum length of a uniform name. I.e., given a program with uniforms test and myuniform in use, glGetProgramiv(program, GL_ACTIVE_UNIFORM_MAX_LENGTH, outParam) should output 10 (the length of "myuniform" plus the null terminator).
I have a very simple test shader set up with 1 defined uniform: uniform float time
glGetProgramiv(program, GL_ACTIVE_UNIFORM_MAX_LENGTH, outParam) returns 5, the length of "time" plus the null terminator. If I change "time" to something else, it returns the changed length (e.g., change it to "timer" and it returns 6).
glGetProgramiv with GL_ACTIVE_UNIFORMS tells me that there are 2 uniforms.
The second uniform that it is reporting is gl_ModelViewProjectionMatrix.
I am fine with it including gl_ModelViewProjectionMatrix in the list - I am using it in the shader, but this brings up a problem when combined with the other return value. Why doesn't glGetProgramiv return the length of "gl_ModelViewProjectionMatrix" if it is including it in the list? I need the full names of attributes and variables for my application, but since I am getting a max length of 5, glGetActiveUniform is returning a uniform name of "gl_M" which is not acceptable.
Either the max name length should include the MVP matrix, or the list of names should not. It does not make sense to include the name in the list but not in the max name length calculation.
Is this happening only for me? I could not find anything else about it using Google. I could abandon a query for max length and always use very big buffers, but I've seen some very long variable names before, so the buffers would have to be huge to guarantee no errors. That's not a real fix anyway.
This test is working correctly for attributes. I use gl_Vertex and have no other attributes. The system correctly reports 1 active attribute with a name length of 10 and a name of gl_Vertex. If I remove my time uniform entirely, leaving the MVP matrix as the only used uniform, the system reports 1 active uniform but a max name length of 0, so querying its name with the returned max length gets nothing.
For completeness, I include the code below. The code is in Java and uses JOGL to access the OpenGL bindings. To highlight the relevant areas, I have deleted lines not relevant to this issue, mostly the GUI-updating code, including the part that actually displays the values obtained here. I also deleted the part that gets the attributes, since that works fine, as stated above.
FYI for the C people who are wary of the Java-isms: think of a Buffer (IntBuffer, ByteBuffer, FloatBuffer) like a pointer, buffer.get() like buffer[n++], and buffer = IntBuffer.allocate(n) like a malloc(). I also use OpenGL in C and C++, so I can rewrite this in C if the GLSL gurus here prefer that.
Any suggestions?
// add options to panelShaderParameters
public void updateShaderParameters(GLAutoDrawable surface)
{
    GL2 gl = surface.getGL().getGL2();
    IntBuffer outParam = IntBuffer.allocate(1);
    int numParameters = 0,
        maxNameLength = 0;
    IntBuffer size = null,
              type = null;
    ByteBuffer name = null;

    gl.glGetProgramiv(shader.getName(), GL2.GL_ACTIVE_UNIFORMS, outParam);
    numParameters = outParam.get();

    outParam = IntBuffer.allocate(1);
    gl.glGetProgramiv(shader.getName(), GL2.GL_ACTIVE_UNIFORM_MAX_LENGTH, outParam);
    maxNameLength = outParam.get();

    for (int i = 0; i < numParameters; i += 1)
    {
        size = IntBuffer.allocate(1);
        type = IntBuffer.allocate(1);
        name = ByteBuffer.allocate(maxNameLength);
        gl.glGetActiveUniform(shader.getName(), i, maxNameLength, (IntBuffer) null, size, type, name);

        byte[] nameBuffer = new byte[maxNameLength];
        name.position(0);
        name.get(nameBuffer);
    }
}
You have found a driver bug. You can attempt to report it (this forum is a place where it may be seen), but there's nothing you can do to make it work correctly. So just work around it.
Rather than relying on a single max size, just ask each uniform in turn what its length is. In C/C++, you do this by passing NULL for the buffer to glGetActiveUniformName:
GLsizei length;
glGetActiveUniformName(program, index, 1000 /*large number*/, &length, NULL);
//Use length to allocate a buffer of the appropriate size.
I don't know how this would work using JOGL. Perhaps you should switch to LWJGL, which has a much more reasonable Java implementation of this function.
Also, stop storing the strings as byte arrays. Convert them into proper Java strings (another reason to use LWJGL).
We had exactly this problem, and I have seen it described here by others:
http://www.opengl.org/discussion_boards/showthread.php/179117-Driver-bug-causing-incorrect-glGetProgramiv-output
In my case, the problem arose after upgrading from the nvidia 295.40 driver to 304.64, on various GPU models. In the forum thread above the bug was reported for an Intel driver. Which driver are you using, Ludowijk?
I guess that perhaps built-in variables starting with "gl_" are not considered in the max name length computation, but even if that is true, this behavior should be considered a bug IMHO.
Thanks Nicol for the idea for the workaround.