In OpenGL ES 2.0 / GLSL, where do you need precision specifiers?

Does the variable you're stuffing values into dictate the precision you're working with on the right-hand side of the equals sign?
For example, is there any difference in meaning with the precision specifier here:
gl_FragColor = lowp vec4(1);
Here's another example:
lowp float floaty = 1. * 2.;
floaty = lowp 1. * lowp 2.;
And if you take some floats, and create a vector or matrix from them, will that vector or matrix take on the precision of the values you stuff it with, or will those values transform into another precision level?
I think optimizing this would best answer the question:
dot(gl_LightSource[0].position.xyz, gl_NormalMatrix * gl_Normal)
I mean, does it need to go this far, if you want it as fast as possible, or is some of it useless?
lowp dot(lowp gl_LightSource[0].position.xyz, lowp gl_NormalMatrix * lowp gl_Normal)
I know you can define the default precision for float, and that this supposedly is used for vectors and matrices afterwards. Assume, for the purpose of education, that we had defined this previously:
precision highp float;

You don't need precision specifiers on constants/literals, since those are compile-time evaluated at whatever precision they are being assigned to.
In vertex shaders, the following precisions are declared by default (§4.5.3, Default Precision Qualifiers):
precision highp float;
precision highp int;
precision lowp sampler2D;
precision lowp samplerCube;
And in fragment shaders you get:
precision mediump int;
precision lowp sampler2D;
precision lowp samplerCube;
This means that if you declare a float in a fragment shader, you have to say whether it is lowp or mediump, because there is no default float precision there. The default float/int precisions also extend to matrices/vectors.
highp is only supported in fragment shaders on systems where the GL_FRAGMENT_PRECISION_HIGH macro is defined as 1; on the rest you'll get a compiler error (§4.5.4, Available Precision Qualifiers).
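For illustration, a minimal fragment shader header that follows these rules might look like this (a sketch; the variable names are made up):

#ifdef GL_FRAGMENT_PRECISION_HIGH
precision highp float;   // use high precision where the hardware supports it
#else
precision mediump float; // otherwise fall back to medium precision
#endif

varying vec2 v_uv; // picks up the default float precision declared above

void main()
{
    lowp vec4 color = vec4(1.0); // a per-declaration qualifier overrides the default
    gl_FragColor = color;
}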
The rule for precision in an expression is that operands are converted automatically to the precision of the assignment/parameter they are bound to. So your dot would use the precision of the input types by default, and the additional lowp qualifiers are unnecessary (and syntactically invalid). If you want to down-cast a value to a lower precision, the only way to do it is to explicitly assign it to a variable of lower precision.
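Applied to the dot example from the question, a vertex shader sketch could look like this (the uniform/attribute names are illustrative stand-ins for the built-ins):

uniform highp vec3 u_lightPos;
uniform highp mat3 u_normalMatrix;
attribute highp vec3 a_normal;
attribute highp vec4 a_position;

varying lowp float v_diffuse; // lowp is usually enough for a clamped lighting term

void main()
{
    // dot() runs at the precision of its operands (highp here); the down-cast
    // to lowp happens at the assignment, not inside the call.
    v_diffuse = dot(u_lightPos, u_normalMatrix * a_normal);
    gl_Position = a_position;
}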
These answers are all from the Khronos GLSL ES spec, which you can find here (the relevant sections are §4.5.2 through §4.5.4): https://www.khronos.org/registry/OpenGL/specs/es/2.0/GLSL_ES_Specification_1.00.pdf
In more recent versions of the spec, those sections are 4.7.3 and 4.7.4. Nothing has changed, except that precision highp atomic_uint; was added as a default to both lists. See https://www.khronos.org/registry/OpenGL/specs/es/3.2/GLSL_ES_Specification_3.20.pdf

Related

What's the precision of built-in functions in shading language?

Precision conversions on mobile devices may cause a performance drop. I want to minimize conversions in our shaders (GLSL ES, HLSL and SPIR-V). I'm confused by the precision of built-in functions.
Consider the following code:
// #1
mediump float a, b;
mediump float c = max(a, b);
// #2
mediump float a, b;
float c = max(a, b);
// #3
mediump float a, b;
mediump float c = sin(max(a, b));
// #4
mediump vec2 uv;
uniform mediump sampler2D tex;
mediump vec4 c = texture2D(tex, uv);
What conversions would happen? If a built-in function's return type depends on its parameters, snippets #1, #3 and #4 should involve no conversion. Is that right?
I don't think you can always predict conversions, because shader compilers will optimise to use the lowest precision and fewest conversions where possible. But I suspect you have the wrong mindset and are micro-optimising something that doesn't matter much. With typical implementations, precision conversions should be very cheap. What you should be concerned about is unnecessarily using high precision for your inputs, outputs and other variables; that is where the bigger impact is.
The simplest and most optimal approach is to put precision mediump float; at the top of your shader, so precision defaults to mediump, then test on devices which support reduced precision (i.e. mobile GPUs) and see if anything breaks. You can then selectively add highp where necessary.
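As a rough sketch of that advice (names are illustrative), a GLSL ES fragment shader might start like this:

precision mediump float; // shader-wide default for float, vec* and mat*

uniform sampler2D u_tex; // samplers already default to lowp
varying vec2 v_uv;       // mediump, from the default above

void main()
{
    // Built-ins such as max() and sin() operate at the precision of their
    // operands, so nothing in this shader should require a conversion.
    gl_FragColor = texture2D(u_tex, v_uv) * sin(max(v_uv.x, v_uv.y));
}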

GLSL: will the compiler evaluate functions with constant arguments?

If I have the following code in a GLSL fragment shader:
float r = 0.386;
float a = 26.6;
float xd = r*cos(0.0174532924*(a+0));
float yd = r*sin(0.0174532924*(a+0));
float xe = r*cos(0.0174532924*(a+90));
float ye = r*sin(0.0174532924*(a+90));
is it a sane assumption that the compiler will evaluate those trigonometric functions once, instead of having them evaluated in every fragment invocation?
In this case, sadly, you can't know for sure, since compilation is done by the GPU driver. I would say it is implementation-dependent, since some compilers are better optimized than others.
However, as WearyWanderer said, you can hardcode the values or pass them in through uniforms/UBOs.
As you mentioned, you could calculate the values and assign them directly, but you want to leave the expressions in for documentation purposes; I assume the values will be the same in every execution of the shader code.
Uniform Variables are variables that you can calculate once, send to a shader, and are the same for every execution, unless you change the uniform variable at some point. For example:
float r = 0.386;
float a = 26.6;
float xd_val = r*cos(0.0174532924*(a+0));
GLint xd_id = glGetUniformLocation(pShaderProgram, "xd");
glUniform1f(xd_id, xd_val);
This calculates the value only once on the CPU and passes it to the shader program as a uniform variable; the shader then has access to the value in every execution without recalculating it, while the expression still lives in host code for the documentation you wanted.
Uniforms are commonly used for object-wide values, e.g. an alpha value, scene lights for the Phong shading model, etc.
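For completeness, the shader side would declare a matching uniform (a sketch; the name must match the string passed to glGetUniformLocation):

uniform float xd; // set once from the CPU via glUniform1f

void main()
{
    // xd is available in every invocation, with no per-fragment cos() call
    gl_FragColor = vec4(vec3(xd), 1.0);
}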

GLSL(330) modulo returns unexpected value

I am currently working with GLSL 330 and came across some odd behavior of the mod() function.
I'm working under Windows 8 with a Radeon HD 6470M. I cannot recreate this behavior on my desktop PC, which uses Windows 7 and a GeForce GTX 260.
Here is my test code:
float testvalf = -126;
vec2 testval = vec2(-126, -126);
float modtest1 = mod(testvalf, 63.0); //returns 63
float modtest2 = mod(testval.x, 63.0); //returns 63
float modtest3 = mod(-126, 63.0); //returns 0
Edit:
Here are some more test results, done after IceCool's suggestion below.
int y = 63;
int inttestval = -126;
ivec2 intvectest = ivec2(-126, -126);
float floattestval = -125.9;
float modtest4 = mod(inttestval, 63); //returns 63
vec2 modtest5 = mod(intvectest, 63); //returns vec2(63.0, 63.0)
float modtest6 = mod(intvectest.x, 63); //returns 63
float modtest7 = mod(floor(floattestval), 63); //returns 63
float modtest8 = mod(inttestval, y); //returns 63
float modtest9 = mod(-126, y); //returns 63
I updated my drivers and tested again, same results. Once again, not reproducible on the desktop.
According to the GLSL docs on mod the possible parameter combinations are (GenType, float) and (GenType, GenType) (no double, since we're < 4.0). Also the return type is forced to float but that shouldn't matter for this problem.
I don't know if you did it intentionally, but -126 is an int, not a float, and the code might not be doing what you expect.
By the way, about the modulo:
Notice that 2 different functions are called:
The first two lines:
float mod(float, float);
The last line:
int mod(int, float);
If I'm right, mod is calculated like:
genType mod(genType x, float y)
{
    return x - y * floor(x / y);
}
Now note that if x/y evaluates to exactly -2.0, mod returns 0, but if it evaluates to -2.00000001, then floor gives -3 and mod returns -126 - 63*(-3) = 63. That difference is entirely possible between int/float and float/float division.
So the reason might simply be that you are mixing ints and floats.
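As a quick way to see that edge case outside the shader, the same formula can be evaluated in plain C (a sketch, just to show the arithmetic):

#include <math.h>
#include <stdio.h>

/* The formula the GLSL spec gives for mod(x, y): x - y * floor(x / y). */
static float glsl_mod(float x, float y)
{
    return x - y * floorf(x / y);
}

int main(void)
{
    printf("%f\n", glsl_mod(-126.0f, 63.0f));     /* 0.0: x/y is exactly -2.0 */
    printf("%f\n", glsl_mod(-126.00001f, 63.0f)); /* ~63.0: floor gives -3 */
    return 0;
}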
I think I have found the answer.
One thing I've been wrong about: the keyword genType in the GLSL man pages doesn't mean a generic type, as in a C++ template.
genType is shorthand for float, vec2, vec3, and vec4 (see the mod documentation - Ctrl+F genType).
By the way, the genType naming convention is:
genType - floats
genDType - doubles
genIType - ints
genBType - bools
Which means that genType mod(genType, float) implies that there is no function like int mod(int, float).
All the code above has been calling float mod(float, float) (thankfully there is an implicit typecast for function parameters, so mod(int, int) compiles too, but float mod(float, float) is what actually gets called).
Just as a proof:
int x = mod(-126, 63);
Doesn't compile: error C7011: implicit cast from "float" to "int"
It fails only because the return type is float, so this version works:
float x = mod(-126, 63);
Therefore float mod(float, float) is called.
So we are back at the original problem:
float division is inaccurate
int to float cast is inaccurate
It shouldn't be a problem on most GPUs, as floats are considered equal if the difference between them is less than 10^-5 (this may vary with hardware, but it is the case for my GPU). So floor(-2.0000001) is -2. highp floats are far more accurate than this.
Therefore either you are not using highp floats (precision highp float; should fix it then), or your GPU has a stricter limit for float equality, or some of the functions are returning a less accurate value.
If all else fails try:
#extension BlackMagic : enable
Maybe some driver setting is forcing the default float precision to be mediump.
If this happens, all your declared variables will be mediump; however, literals typed into the code will still remain highp.
Consider this code:
precision mediump float;
float x = 0.4121551, y = 0.4121552;
x == y; // true
0.4121551 == 0.4121552; // false, as highp they still differ.
So mod(-126, 63.0) could still be precise enough to return the correct value, since it works with high-precision literals; but if you pass in a variable (as in all the other cases), which will only be mediump, the function won't have enough precision to calculate the correct value. And that is exactly what your tests show:
All the calls that take at least one variable are not precise enough.
The only call that takes two literal numbers returns the correct value.

Issue with glBindBufferRange() OpenGL 3.1

My vertex shader is:
uniform Block1
{
    vec4 offset_x1;
    vec4 offset_x2;
} block1;

out float value;
in vec4 position;

void main()
{
    value = block1.offset_x1.x + block1.offset_x2.x;
    gl_Position = position;
}
The code I am using to pass values is:
GLfloat color_values[8]; // contains valid values
glGenBuffers(1, &buffer_object);
glBindBuffer(GL_UNIFORM_BUFFER, buffer_object);
glBufferData(GL_UNIFORM_BUFFER, sizeof(color_values), color_values, GL_STATIC_DRAW);
glUniformBlockBinding(psId, blockIndex, 0);
glBindBufferRange(GL_UNIFORM_BUFFER, 0, buffer_object, 0, 16);
glBindBufferRange(GL_UNIFORM_BUFFER, 0, buffer_object, 16, 16);
What I am expecting is to pass 16 bytes for each vec4 uniform. I get a GL_INVALID_VALUE error for offset = 16, size = 16.
I am confused by the offset value. The spec says it is relative to "buffer_object".
There is an alignment restriction on UBO bind offsets: the offset passed to glBindBufferRange/glBindBufferBase must be a multiple of GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT. This alignment could be anything, so you have to query it at runtime before laying out your uniform buffers; you can't bake it into compile-time C++ logic.
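A sketch of that runtime query, rounding the second offset up to a legal value (error handling omitted; note the buffer must also be large enough to cover offset + size):

GLint align = 0;
glGetIntegerv(GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT, &align);

/* Round the desired offset (16) up to the next multiple of the alignment. */
GLintptr offset = (16 + align - 1) / align * align;
glBindBufferRange(GL_UNIFORM_BUFFER, 0, buffer_object, offset, 16);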
Speaking of querying things at runtime, your code is horribly broken in many other ways. You did not define a layout qualifier for your uniform block; therefore, the default is used: shared. And you cannot use the shared layout without querying the layout of each block's members from OpenGL. Ever.
If you had done a query, you would have quickly discovered that your uniform block is at least 32 bytes in size, not 16. And since you only provided 16 bytes in your range, undefined behavior (which includes the possibility of program termination) results.
If you want to be able to define C/C++ objects that map exactly to the uniform block definition, you need to use std140 layout and follow the rules of std140's layout in your C/C++ object.
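For this particular block, a std140 declaration would look like the following (with two vec4 members the std140 size is exactly 32 bytes, so a C struct holding two float[4] arrays matches it byte-for-byte):

layout(std140) uniform Block1
{
    vec4 offset_x1;
    vec4 offset_x2;
} block1;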

What keywords does GLSL introduce on top of C?

So we have in C:
auto if break int case long char register
continue return default short do sizeof
double static else struct entry switch extern
typedef float union for unsigned
goto while enum void const signed volatile
What new keywords does the OpenGL (ES) Shading Language provide?
I am new to GLSL and I want to create a syntax-highlighting utility for ease of use.
Do the math functions built into GLSL count as keywords?
New ones (excluding the ones you listed above) according to the latest spec document:
attribute uniform varying
layout
centroid flat smooth noperspective
patch sample
subroutine
in out inout
invariant
discard
mat2 mat3 mat4 dmat2 dmat3 dmat4
mat2x2 mat2x3 mat2x4 dmat2x2 dmat2x3 dmat2x4
mat3x2 mat3x3 mat3x4 dmat3x2 dmat3x3 dmat3x4
mat4x2 mat4x3 mat4x4 dmat4x2 dmat4x3 dmat4x4
vec2 vec3 vec4 ivec2 ivec3 ivec4 bvec2 bvec3 bvec4 dvec2 dvec3 dvec4
uvec2 uvec3 uvec4
lowp mediump highp precision
sampler1D sampler2D sampler3D samplerCube
sampler1DShadow sampler2DShadow samplerCubeShadow
sampler1DArray sampler2DArray
sampler1DArrayShadow sampler2DArrayShadow
isampler1D isampler2D isampler3D isamplerCube
isampler1DArray isampler2DArray
usampler1D usampler2D usampler3D usamplerCube
usampler1DArray usampler2DArray
sampler2DRect sampler2DRectShadow isampler2DRect usampler2DRect
samplerBuffer isamplerBuffer usamplerBuffer
sampler2DMS isampler2DMS usampler2DMS
sampler2DMSArray isampler2DMSArray usampler2DMSArray
samplerCubeArray samplerCubeArrayShadow isamplerCubeArray usamplerCubeArray
Reserved for future use (these will cause an error at the moment):
common partition active
asm
class union enum typedef template this packed
goto
inline noinline volatile public static extern external interface
long short half fixed unsigned superp
input output
hvec2 hvec3 hvec4 fvec2 fvec3 fvec4
sampler3DRect
filter
image1D image2D image3D imageCube
iimage1D iimage2D iimage3D iimageCube
uimage1D uimage2D uimage3D uimageCube
image1DArray image2DArray
iimage1DArray iimage2DArray uimage1DArray uimage2DArray
image1DShadow image2DShadow
image1DArrayShadow image2DArrayShadow
imageBuffer iimageBuffer uimageBuffer
sizeof cast
namespace using
row_major
You probably want to get the OpenGL ES GLSL language specification. §3.6 lists the keywords (plus a number of reserved words that aren't keywords, but you're not supposed to use anyway, so they probably merit some sort of color coding as well).
Edit: Oops, I grabbed the wrong link there. My apologies. The current specs are:
OpenGL 4.1 GLSL
OpenGL ES 2.0 GLSL
Besides the above answers, there are also reserved identifiers, e.g. in the GLSL ES 3.0 spec:
Identifiers starting with gl_ are reserved for use by OpenGL ES, and may not be declared in a shader as either a variable or a function. It is an error to redeclare a variable, including those starting “gl_”.
And other things beyond these are reserved:
In addition, all identifiers containing two consecutive underscores (__) are reserved for use by underlying software layers. Defining such a name in a shader does not itself result in an error, but may result in unintended behaviors that stem from having multiple definitions of the same name.
It is relevant for syntax coloring and correctness checking.
Similarly, macro names starting with GL_ are reserved, as are various identifiers containing __.
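For example, all of the following are invalid or risky in shader source:

float gl_myValue;    // error: the gl_ prefix is reserved for the implementation
float my__value;     // compiles, but reserved; may clash with underlying layers
#define GL_MY_FLAG 1 // error: macro names starting with GL_ are reserved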