GLSL: will compiler evaluate functions with constant arguments? - opengl

If I have the following code in a GLSL fragment shader:
float r = 0.386;
float a = 26.6;
float xd = r*cos(0.0174532924*(a+0));
float yd = r*sin(0.0174532924*(a+0));
float xe = r*cos(0.0174532924*(a+90));
float ye = r*sin(0.0174532924*(a+90));
is it a sane assumption that the compiler will evaluate those trigonometric functions at compile time, instead of having them evaluated in every fragment execution?

In this case, sadly, you can't know for sure, since compilation is done by the GPU vendor's driver. I would say it is implementation dependent, since some compilers optimize more aggressively than others.
However, as WearyWanderer said, you can hardcode the values or pass them in through uniforms/UBOs.

As you mentioned, you could calculate the values and assign them directly, but you want to leave the expressions in for documentation purposes. I assume the values will be the same in every execution of the shader code.
Uniform variables are variables that you calculate once, send to a shader, and that stay the same for every execution unless you change the uniform at some point. For example:
float r = 0.386;
float a = 26.6;
float xd_val = r*cos(0.0174532924*(a+0));   // computed once, on the CPU
GLint xd_id = glGetUniformLocation(pShaderProgram, "xd");   // returns GLint; -1 if "xd" is not found
glUniform1f(xd_id, xd_val);                 // upload the value to the shader program's "xd" uniform
This calculates the value only once on the CPU, passes it to the shader program as a uniform variable, and the shader then has access to the value in every execution without recalculating it, while the original expression stays in your code as the documentation you wanted.
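On the GLSL side, the shader then only needs to declare the uniform; a minimal sketch, assuming the shader source is embedded as a C++ string and keeping the original formula as a comment for documentation:
const char* fragmentSrc = R"GLSL(
    #version 330 core
    uniform float xd;   // = r*cos(0.0174532924*(a+0)), precomputed on the CPU
    // ... use xd directly, no per-fragment cos() call needed ...
)GLSL";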
Uniforms are commonly used for object-wide values, e.g. an alpha value, the scene lights for a Phong shading model, etc.

Related

Why does the value of GL_MAX_UNIFORM_BLOCK_SIZE change?

According to khronos.org,
GL_MAX_UNIFORM_BLOCK_SIZE refers to the maximum size in basic machine units of a uniform block. The value must be at least 16384.
I have a fragment shader, where I declared a uniform interface block and attached a uniform buffer object to it.
#version 460 core
layout(std140, binding=2) uniform primitives{
vec3 foo[3430];
};
...
If I query GL_MAX_UNIFORM_BLOCK_SIZE with:
GLuint info;
glGetUniformiv(shaderProgram.getShaderProgram_id(), GL_MAX_UNIFORM_BLOCK_SIZE, reinterpret_cast<GLint *>(&info));
cout << "GL_MAX_UNIFORM_BLOCK_SIZE: " << info << endl;
I get: GL_MAX_UNIFORM_BLOCK_SIZE: 22098. That seems plausible, but when I change the size of the array to 3000 (instead of 3430), I get GL_MAX_UNIFORM_BLOCK_SIZE: 21956.
As far as I know, GL_MAX_UNIFORM_BLOCK_SIZE should be a constant depending on my GPU. Then why does it change, when I modify the size of the array?
GL_MAX_UNIFORM_BLOCK_SIZE is properly queried with glGetIntegerv. It is a constant defined by the implementation which tells you the implementation-defined maximum. glGetUniform returns the value of a uniform in the given program. You probably got an OpenGL error of some kind, since GL_MAX_UNIFORM_BLOCK_SIZE is not a valid uniform location, and therefore your integer was never written to. So you're just reading uninitialized data.
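A minimal sketch of the correct query (reusing the cout/endl style from the question):
GLint maxBlockSize = 0;
glGetIntegerv(GL_MAX_UNIFORM_BLOCK_SIZE, &maxBlockSize);   // implementation-defined limit, at least 16384
cout << "GL_MAX_UNIFORM_BLOCK_SIZE: " << maxBlockSize << endl;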

glGetnUniformfv crashes for no reason?

I tried getting a uniform vec3 from the fragment shader back to the CPU using glGetnUniformfv. According to the documentation this should work fine. It also works when getting only a single float from the shader. But when used like this,
float f[3] = {0.0f};
glGetnUniformfv(program, glGetUniformLocation(program, name.c_str()), 3, f);
my program crashes. I checked the result of glGetUniformLocation and it was a valid location.
The third parameter to the glGetnUniform family of functions is not actually the number of entries in the array. It is the byte size of the array pointed to by f. Which, because f is an array rather than just a pointer to an array, would be sizeof(f).
Now, your implementation shouldn't have crashed, so there's probably something else going on there. But this is the problem in the code you've provided.
Unless you're using a context that actually supports OpenGL 4.5+, get the vec3 using "the old way" like this:
float f[3] = {0.0f};
glGetUniformfv(program, glGetUniformLocation(program, name.c_str()), f);
The new desktop-only glGetnUniform entry points exist only for extra safety, similar to strncpy vs strcpy.
Also, if you do use the glGetn variant, you should pass 12 instead of 3 for bufSize since it's a byte count.
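For illustration, a corrected call along those lines (sizeof(f) evaluates to 12 here):
float f[3] = {0.0f};
// bufSize is a byte count: pass sizeof(f) (12 bytes for three floats), not the element count 3
glGetnUniformfv(program, glGetUniformLocation(program, name.c_str()), sizeof(f), f);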

Having an unbound sampler inside a uniform branch

Let's say I have a pixel shader that sometimes needs to read from one sampler and sometimes needs to read from two different samplers, depending on a uniform variable:
layout (set = 0, binding = 0) uniform UBO {
    ....
    bool useSecondTexture;
} ubo;
...
void main() {
    vec3 value0 = texture(sampler1, pos).rgb;
    vec3 value2 = vec3(0,0,0);
    if(ubo.useSecondTexture) {
        value2 = texture(sampler2, pos).rgb;
    }
    value0 += value2;
}
Does the second sampler, sampler2, need to be bound to a valid texture even though it will not be read when useSecondTexture is false?
All of the vkCmdDraw and vkCmdDispatch commands have this Valid Usage statement:
Descriptors in each bound descriptor set, specified via vkCmdBindDescriptorSets, must be valid if they are statically used by the currently bound VkPipeline object, specified via vkCmdBindPipeline
Since sampler2 is statically used, you must have a valid descriptor for it or you'll get undefined behavior.
My guess is that on some implementations, it'll work as you expect. But drivers/hardware are allowed to require that all descriptors that might be used by a pipeline are valid, and requiring them to inspect the contents of memory buffers to determine if something might be used would be very expensive.
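If you need to satisfy that rule without a real second texture, a common workaround is to keep a small dummy texture bound. A rough sketch, assuming device, descriptorSet, dummySampler and dummyImageView already exist and that sampler2 lives at binding 1 (the names and the binding number are assumptions):
VkDescriptorImageInfo dummyInfo{};
dummyInfo.sampler     = dummySampler;        // assumed pre-created VkSampler
dummyInfo.imageView   = dummyImageView;      // assumed small (e.g. 1x1) VkImageView
dummyInfo.imageLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;

VkWriteDescriptorSet write{};
write.sType           = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
write.dstSet          = descriptorSet;       // the set that contains sampler2
write.dstBinding      = 1;                   // whatever binding sampler2 actually uses
write.descriptorCount = 1;
write.descriptorType  = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
write.pImageInfo      = &dummyInfo;

vkUpdateDescriptorSets(device, 1, &write, 0, nullptr);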

Issue with glBindBufferRange() OpenGL 3.1

My vertex shader is:
uniform Block1
{
    vec4 offset_x1;
    vec4 offset_x2;
} block1;
out float value;
in vec4 position;
void main()
{
    value = block1.offset_x1.x + block1.offset_x2.x;
    gl_Position = position;
}
The code I am using to pass the values is:
GLfloat color_values[8];// contains valid values
glGenBuffers(1,&buffer_object);
glBindBuffer(GL_UNIFORM_BUFFER,buffer_object);
glBufferData(GL_UNIFORM_BUFFER,sizeof(color_values),color_values,GL_STATIC_DRAW);
glUniformBlockBinding(psId,blockIndex,0);
glBindBufferRange(GL_UNIFORM_BUFFER,0,buffer_object,0,16);
glBindBufferRange(GL_UNIFORM_BUFFER,0,buffer_object,16,16);
Here what I am expecting is to pass 16 bytes for each vec4 uniform. I get a GL_INVALID_VALUE error for offset=16, size=16.
I am confused about the offset value. The spec says it is an offset into "buffer_object".
There is an alignment restriction for UBOs when binding. Any glBindBufferRange/Base's offset must be a multiple of GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT. This alignment could be anything, so you have to query it before building your array of uniform buffers. That means you can't do it directly in compile-time C++ logic; it has to be runtime logic.
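A minimal runtime sketch of that query (the offset rounding is illustrative):
GLint uboAlignment = 0;
glGetIntegerv(GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT, &uboAlignment);   // commonly 16, 64 or 256
// every offset passed to glBindBufferRange must be a multiple of uboAlignment
GLintptr desiredOffset = 16;
GLintptr alignedOffset = ((desiredOffset + uboAlignment - 1) / uboAlignment) * uboAlignment;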
Speaking of querying things at runtime, your code is horribly broken in other ways too. You did not define a layout qualifier for your uniform block; therefore, the default is used: shared. And you cannot use the shared layout without querying the layout of each block's members from OpenGL. Ever.
If you had done a query, you would have quickly discovered that your uniform block is at least 32 bytes in size, not 16. And since you only provided 16 bytes in your range, undefined behavior (which includes the possibility of program termination) results.
If you want to be able to define C/C++ objects that map exactly to the uniform block definition, you need to use std140 layout and follow the rules of std140's layout in your C/C++ object.
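For example, a sketch of how the block and a matching C++ struct could look under std140 (the GLSL side is shown as a comment; each vec4 member is 16 bytes, so the block is 32 bytes):
// GLSL: layout(std140) uniform Block1 { vec4 offset_x1; vec4 offset_x2; } block1;
struct Block1
{
    float offset_x1[4];   // 16 bytes
    float offset_x2[4];   // 16 bytes
};
static_assert(sizeof(Block1) == 32, "must match the std140 block size");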

HLSL DirectX9: Is there a getTime() function or similar?

I'm currently working on a project using C++ and DirectX9 and I'm looking into creating a light source which varies in colour as time goes on.
I know C++ has a timeGetTime() function, but I was wondering if anyone knows of a function in HLSL that will allow me to do this?
Regards.
Mike.
Use a shader constant in HLSL (see this introduction). Here is example HLSL code that uses timeInSeconds to modify the texture coordinate:
// HLSL
float4x4 view_proj_matrix;
float4x4 texture_matrix0;
// My time in seconds, passed in by CPU program
float timeInSeconds;

struct VS_OUTPUT
{
    float4 Pos : POSITION;
    float3 Pshade : TEXCOORD0;
};

VS_OUTPUT main (float4 vPosition : POSITION)
{
    VS_OUTPUT Out = (VS_OUTPUT) 0;
    // Transform position to clip space
    Out.Pos = mul (view_proj_matrix, vPosition);
    // Transform Pshade
    Out.Pshade = mul (texture_matrix0, vPosition);
    // Transform according to time
    Out.Pshade = MyFunctionOfTime( Out.Pshade, timeInSeconds );
    return Out;
}
And then in your rendering (CPU) code before you call Begin() on the effect you should call:
// C++
myLightSourceTime = GetTime(); // or your system's equivalent
m_pEffect->SetFloat("timeInSeconds", myLightSourceTime);
If you don't understand the concept of shader constants, have a quick read of the PDF. You can use any HLSL data type as a constant (e.g. bool, float, float4, float4x4 and friends).
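To tie this back to the colour-over-time question, here is a rough sketch (the lightColor constant name and the GetTime() helper are assumptions, not part of any existing API) of computing a time-varying colour on the CPU and handing it to the shader as another constant:
// C++ (D3DX effect framework), illustrative only
float t = GetTime();                              // seconds
D3DXVECTOR4 lightColor(
    0.5f + 0.5f * sinf(t),                        // red
    0.5f + 0.5f * sinf(t + 2.094f),               // green, 120 degrees out of phase
    0.5f + 0.5f * sinf(t + 4.189f),               // blue, 240 degrees out of phase
    1.0f);
m_pEffect->SetVector("lightColor", &lightColor);  // "lightColor" would be a float4 constant in the HLSL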
I am not familiar with HLSL, but I am with GLSL.
Shaders have no concept of 'time' or 'frames'. The vertex shader only "understands" the vertices it has to transform, and the pixel shader only "understands" the fragments it has to shade.
Your only option is to pass a variable to the shader program; in GLSL such a variable is called a 'uniform', but I am not sure of the HLSL equivalent.
I'm looking into creating a light source which varies in colour as time goes on.
There is no need to pass anything extra for that, though. You can directly set the light source's color (at least, you can in OpenGL). Simply change the light color in the rendering scene and the shader should pick it up from the built-in uniforms.
Nope. Shaders are essentially "one-way". The CPU can affect what's happening on the GPU (specify which shader program to run, upload textures and constants and such), but the GPU can not access anything on the CPU side of the fence. If the GPU (and your shader) needs a piece of data, it must be set by the CPU as a constant or written as a texture (or as part of the vertex data)
If you're using HLSL to write a shader for Unity, a time in seconds variable is exposed as _Time.