Do operations on uniform variables get cached in GLSL?

Let's say I have a very simple GLSL vertex shader:
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
in vec3 position;
in vec3 color;
out vec3 vertexColor;
void main()
{
mat4 mvp = projection * view * model;
vertexColor = color;
gl_Position = mvp * vec4(position, 1.0);
}
Is the variable mvp recalculated for every vertex, or is it precalculated and stored until the uniform variables it depends on change?

Could a particular implementation cache that value? Sure. Will it?
I would not assume so. Such caching would be difficult to implement, since it would require the driver to evaluate those expressions before each draw call, based on the current uniform state, and then upload the results to the shader.
If you have some state that is computed from multiple uniform values, and you believe that such computation would be a performance problem, you are the one who should compute it.
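For example, with GLM you might recompute the product on the CPU only when one of the matrices changes, and upload it as a single uniform. A minimal sketch (the uniform name mvp, the helper name uploadMVP, and the use of GLM are assumptions, not part of the question):
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>
// Assumes an OpenGL loader (e.g. glad) is included and a context is current.

// Recompute the product only when model/view/projection actually change,
// then upload it as a single uniform instead of three.
void uploadMVP(GLuint program, const glm::mat4& model,
               const glm::mat4& view, const glm::mat4& projection)
{
    glm::mat4 mvp = projection * view * model;
    glUseProgram(program); // the program must be bound before glUniform* calls
    GLint loc = glGetUniformLocation(program, "mvp"); // assumes "uniform mat4 mvp;" in the shader
    glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(mvp));
}
The vertex shader then reduces to gl_Position = mvp * vec4(position, 1.0); with no per-vertex matrix product.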


Instancing using more values than uniforms can store

I am fairly new to OpenGL and trying to achieve instancing using uniform arrays. However, the number of instances I am trying to draw is larger than the MAX_UNIFORM_LOCATIONS limit allows:
QOpenGLShader::link: error: count of uniform locations > MAX_UNIFORM_LOCATIONS(262148 > 98304)
error: Too many vertex shader default uniform block components
error: Too many vertex shader uniform components
What other ways are possible that will work with that large a number of objects? So far this is my shader code:
layout(location = 0) in vec4 vertex;
layout(location = 1) in vec3 normal;
out vec3 vert;
out vec3 vertNormal;
out vec3 color;
uniform mat4 projMatrix;
uniform mat4 camMatrix;
uniform mat4 worldMatrix;
uniform mat4 myMatrix;
uniform sampler2D sampler;
uniform vec3 positions[262144];
void main() {
vec3 t = vec3(positions[gl_InstanceID].x, positions[gl_InstanceID].y, positions[gl_InstanceID].z);
float val = 0;
mat4 wm = myMatrix * mat4(1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, t.x, t.y, t.z, 1) * worldMatrix;
color = vec3(0.4, 1.0, 0);
vert = vec3(wm * vertex);
vertNormal = mat3(transpose(inverse(wm))) * normal;
gl_Position = projMatrix * camMatrix * wm * vertex;
}
In case it matters, I am using QOpenGLExtraFunctions.
There are many alternatives for overcoming the limitations of uniform storage:
UBOs, for example, usually have a larger storage capacity than non-block uniforms. In your case, though, that probably won't work: storing 262,144 vec4s takes 4MB, more than most implementations allow a UBO to provide. What you need is effectively unbounded storage.
Instanced Arrays
Instanced arrays use the instanced rendering mechanism to automatically fetch vertex attributes based on the instance index. This requires a small change to your VAO setup.
Your shader would look like this:
layout(location = 0) in vec4 vertex;
layout(location = 1) in vec3 normal;
layout(location = 2) in vec3 position;
out vec3 vert;
out vec3 vertNormal;
out vec3 color;
uniform mat4 projMatrix;
uniform mat4 camMatrix;
uniform mat4 worldMatrix;
uniform mat4 myMatrix;
uniform sampler2D sampler;
void main() {
vec3 t = position;
/*etc*/
}
Here, the shader itself never uses gl_InstanceID. That happens automatically based on your VAO.
That setup code would have to include the following:
glBindBuffer(GL_ARRAY_BUFFER, buffer_containing_instance_data);
glEnableVertexAttribArray(2); // don't forget to enable the attribute
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), 0);
glVertexAttribDivisor(2, 1); // fetch a new value once per instance, not per vertex
This code assumes that the instance data is at the start of the buffer and is 3 floats-per-value (tightly packed). Since you're using vertex attributes, you can use the usual vertex attribute compression techniques on them.
The last call, to glVertexAttribDivisor, is what tells OpenGL to advance to the next value in the array once per instance, rather than based on the vertex's index.
Note that by using instanced arrays, you also gain the ability to use the baseInstance glDraw* calls. The baseInstance in OpenGL is only respected by instanced arrays; gl_InstanceID is never affected by it.
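For example, a draw call using instanced arrays with a base instance might look like this (a sketch; GL 4.2+ is assumed, and vao, vertexCount, instanceCount, and firstInstance are placeholder names):
// Renders instanceCount instances; the instanced attribute (divisor 1)
// starts fetching at element firstInstance instead of element 0.
glBindVertexArray(vao);
glDrawArraysInstancedBaseInstance(GL_TRIANGLES, 0, vertexCount, instanceCount, firstInstance);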
Buffer Textures
Buffer textures are linear, one-dimensional textures that get their data from a buffer object's storage.
Your shader logic would look like this:
layout(location = 0) in vec4 vertex;
layout(location = 1) in vec3 normal;
out vec3 vert;
out vec3 vertNormal;
out vec3 color;
uniform mat4 projMatrix;
uniform mat4 camMatrix;
uniform mat4 worldMatrix;
uniform mat4 myMatrix;
uniform sampler2D sampler;
uniform samplerBuffer positions;
void main() {
vec3 t = texelFetch(positions, gl_InstanceID).xyz;
/*etc*/
}
Buffer textures can only be accessed via the direct texel fetching functions like texelFetch.
Buffer textures in GL 4.x can use a few 3-channel formats, but earlier GL versions don't give you that option (not without an extension). So you may want to expand your data to four-channel values rather than three.
Another problem is that buffer textures have a maximum size limit, and the required minimum (GL_MAX_TEXTURE_BUFFER_SIZE) is only 65,536 texels, so the instanced-array method will probably be more reliable (it has no such restriction). In practice, though, non-Intel OpenGL implementations report a huge size limit for buffer textures.
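Creating and binding a buffer texture might look like this (a sketch; buf, tex, positionData, and instanceCount are placeholder names, and the data is assumed to be padded out to 4 floats per instance):
GLuint buf, tex;
glGenBuffers(1, &buf);
glBindBuffer(GL_TEXTURE_BUFFER, buf);
glBufferData(GL_TEXTURE_BUFFER, instanceCount * 4 * sizeof(GLfloat), positionData, GL_STATIC_DRAW);
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_BUFFER, tex);
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, buf); // interpret the buffer's contents as vec4 texels
// At draw time, bind tex to a texture unit and set the samplerBuffer uniform to that unit.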
SSBOs
Shader storage buffer objects are like UBOs, except that you can both read and write to them; that latter ability isn't important here. The main advantage is that their minimum required size is 16MB (and implementations generally report a limit on the order of available video memory), so size limits aren't a problem.
Your shader code would look like this:
layout(location = 0) in vec4 vertex;
layout(location = 1) in vec3 normal;
out vec3 vert;
out vec3 vertNormal;
out vec3 color;
uniform mat4 projMatrix;
uniform mat4 camMatrix;
uniform mat4 worldMatrix;
uniform mat4 myMatrix;
uniform sampler2D sampler;
buffer PositionSSBO
{
vec4 positions[];
};
void main() {
vec3 t = positions[gl_InstanceID].xyz;
/*etc*/
}
Note that we explicitly use a vec4 here. That's because you should never use vec3 in a buffer-backed interface block (i.e., a UBO or SSBO): the std140/std430 layout rules give a vec3 the alignment of a vec4, which is an endless source of mismatch bugs.
In code, SSBOs work much like UBOs. You bind them for use with glBindBufferRange.
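A sketch of that setup (GL 4.3+; binding index 0 and the names ssbo, program, positionData, and instanceCount are assumptions):
GLuint ssbo;
glGenBuffers(1, &ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
glBufferData(GL_SHADER_STORAGE_BUFFER, instanceCount * 4 * sizeof(GLfloat), positionData, GL_STATIC_DRAW);
// Associate the shader's PositionSSBO block with binding point 0, then bind the buffer there.
GLuint blockIndex = glGetProgramResourceIndex(program, GL_SHADER_STORAGE_BLOCK, "PositionSSBO");
glShaderStorageBlockBinding(program, blockIndex, 0);
glBindBufferRange(GL_SHADER_STORAGE_BUFFER, 0, ssbo, 0, instanceCount * 4 * sizeof(GLfloat));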

Why do I have to calculate the transpose of the inverse of the model matrix in order to calculate the normal for the reflection texture?

I am following this tutorial to create a skybox/cubemap with environmental mapping. I am having some trouble understanding a calculation in the vertex shader:
#version 330 core
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 normal;
out vec3 Normal;
out vec3 Position;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main()
{
gl_Position = projection * view * model * vec4(position, 1.0f);
Normal = mat3(transpose(inverse(model))) * normal;
Position = vec3(model * vec4(position, 1.0f));
}
Here, the author is calculating the Normal before passing it to the fragment shader to calculate the reflection texture:
Normal = mat3(transpose(inverse(model))) * normal;
My question is, what exactly does this calculation do? Why do you have to calculate the transpose of the inverse of the model matrix before multiplying it with the normal?
If you don't do this, non-uniform scaling will distort the normal. I think this page sums it up, with pictures, better than I could: http://www.lighthouse3d.com/tutorials/glsl-12-tutorial/the-normal-matrix/
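To see the distortion concretely, here is a small numeric check (a sketch using GLM; the vectors are chosen purely for illustration). Take a tangent t = (1, 1, 0) lying in the surface and a normal n = (1, -1, 0), which are perpendicular, and scale non-uniformly by (2, 1, 1):
#include <cstdio>
#include <glm/glm.hpp>

int main()
{
    glm::mat3 S(1.0f);
    S[0][0] = 2.0f; // non-uniform scale: x by 2, y and z unchanged
    glm::vec3 t(1.0f, 1.0f, 0.0f);  // tangent, lies in the surface
    glm::vec3 n(1.0f, -1.0f, 0.0f); // normal, dot(t, n) == 0
    glm::vec3 t2 = S * t;           // (2, 1, 0): tangents transform directly
    glm::vec3 nBad = S * n;         // (2, -1, 0)
    glm::vec3 nGood = glm::transpose(glm::inverse(S)) * n; // (0.5, -1, 0)
    std::printf("naive:   dot = %.1f\n", glm::dot(t2, nBad));  // 3.0: no longer perpendicular
    std::printf("correct: dot = %.1f\n", glm::dot(t2, nGood)); // 0.0: still a valid normal
    return 0;
}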

GLSL - Object disappears when multiplying by matrix

I've got this vertex shader:
#version 400
layout(location=0) in vec4 vertPosition;
layout(location=1) in vec2 vertUV;
layout(location=2) in vec3 vertNormal;
uniform mat4 MVP;
out vec3 fragPosition;
out vec2 fragUV;
out vec3 fragNormal;
void main(void){
gl_Position = MVP * vertPosition;
fragPosition = (MVP * vertPosition).rgb;
fragUV = vertUV;
fragNormal = (MVP * vec4(vertNormal,0)).rgb;
}
As soon as I multiply vertPosition by MVP, the object disappears, even if MVP is an identity matrix.
Here's the code where I'm setting MVP:
mat4 model = translate(mat4(), vec3(0,0,0));
mat4 view = translate(mat4(), vec3(0,0,-10));
mat4 proj = perspective(90.0f, (float)1280/720, 0.001f, 1000.0f);
mat4 MVP = model * view * proj;
GLint uniform_loc = glGetUniformLocation(this->_shaderID, "MVP");
glUniformMatrix4fv(uniform_loc, 1, GL_FALSE, value_ptr(matrix));
I tried different multiplication orders, and multiplying in the shader as well.
You've got the order in which you multiply MVP reversed. OpenGL (and GLM) use column-major conventions, where transformations compose right to left, hence it should be
mat4 MVP = proj * view * model;
A near clipping plane distance of 0.001 is far too small. Essentially you're compressing almost all of the depth buffer's resolution into a very small range. Make near as large as you can tolerate.
I don't get why you're selecting .rgb for fragPosition and fragNormal; .xyz would express the intent more clearly (the two swizzles are equivalent).
From the look of your example, you have forgotten to bind the shader program before setting the uniform. Try calling glUseProgram(this->program); (depending on where you've stored your program) before setting the matrix!
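Putting those fixes together, the setup code might look like this (a sketch; it assumes a recent GLM where perspective takes radians, and that this->_shaderID is the linked program):
mat4 model = translate(mat4(1.0f), vec3(0, 0, 0));
mat4 view = translate(mat4(1.0f), vec3(0, 0, -10));
mat4 proj = perspective(radians(90.0f), 1280.0f / 720.0f, 0.1f, 1000.0f); // larger near plane
mat4 MVP = proj * view * model; // projection on the left, model on the right
GLint uniform_loc = glGetUniformLocation(this->_shaderID, "MVP");
glUseProgram(this->_shaderID); // bind the program before setting its uniforms
glUniformMatrix4fv(uniform_loc, 1, GL_FALSE, value_ptr(MVP)); // upload MVP itself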

DirectX FVF-like GLSL Shaders

Could someone assist me or point me in the right direction to implement the basic FVFs from DirectX in GLSL code? I completely understand how to create a program, apply VBOs and all that, but I'm having great difficulty with the actual creation of the shaders. Namely:
transformed+lit (x,y,color,specular,tu,tv)
lit (x,y,z,color,specular,tu,tv)
unlit (x,y,z,nx,ny,nz,tu,tv) [material/lights]
With this, I'd be given enough to implement far more interesting shaders.
So, I'm not asking for a mechanism to deal with FVFs. I'm simply asking for the shader code, given the proper streams. I understand that the unlit and lit versions rely on passing in matrices, and I completely understand the concept. I am just having trouble finding shader examples showing these concepts.
Okay. If you're having trouble finding working shaders, here is an example (honestly, you can find one in any OpenGL book).
This shader program will use your object's world matrix and the camera's matrices to transform vertices, then map one texture to the pixels and light them with one directional light (according to the material properties and the light direction).
Vertex shader:
#version 330
// Vertex input layout (GLSL 330 core uses "in", not "attribute")
in vec3 inPosition;
in vec3 inNormal;
in vec4 inVertexCol;
in vec2 inTexcoord;
in vec3 inTangent;
in vec3 inBitangent;
// Outputs (these must match the fragment shader's inputs by name and type)
out vec3 position;
out vec3 normal;
out vec4 vertexColor;
out vec2 texcoord;
out vec3 tangent;
out vec3 bitangent;
// Uniform buffers
layout(std140)
uniform CameraBuffer
{
mat4 mtxView;
mat4 mtxProj;
vec3 cameraPosition;
};
layout(std140)
uniform ObjectBuffer
{
mat4 mtxWorld;
};
void main()
{
    // transform the position step by step: world -> view -> projection
    vec4 worldPos = mtxWorld * vec4(inPosition, 1.0);
    gl_Position = mtxProj * (mtxView * worldPos);
    // pass the world-space position and normal to the fragment shader for lighting
    position = worldPos.xyz;
    normal = mat3(mtxWorld) * inNormal; // fine for rotation/uniform scale; use the inverse-transpose for non-uniform scale
    // just pass through the rest
    tangent = inTangent;
    bitangent = inBitangent;
    texcoord = inTexcoord;
    vertexColor = inVertexCol;
}
And fragment shader:
#version 330
// Input
in vec3 position;
in vec3 normal;
in vec4 vertexColor;
in vec2 texcoord;
in vec3 tangent;
in vec3 bitangent;
// Output
out vec4 fragColor;
// Uniforms
uniform sampler2D sampler0;
layout(std140)
uniform CameraBuffer
{
mat4 mtxView;
mat4 mtxProj;
vec3 cameraPosition;
};
layout(std140)
uniform ObjectBuffer
{
mat4 mtxWorld;
};
layout(std140)
uniform LightBuffer
{
vec3 lightDirection;
};
struct Material
{
float Ka; // ambient quotient
float Kd; // diffuse quotient
float Ks; // specular quotient
float A; // shininess
};
layout(std140)
uniform MaterialBuffer
{
Material material;
};
// function to calculate pixel lighting
float Lit( Material material, vec3 pos, vec3 nor, vec3 lit, vec3 eye )
{
vec3 V = normalize( eye - pos );
vec3 R = reflect( lit, nor);
float Ia = material.Ka;
float Id = material.Kd * clamp( dot(nor, -lit), 0.0f, 1.0f );
float Is = material.Ks * pow( clamp(dot(R,V), 0.0f, 1.0f), material.A );
return Ia + Id + Is;
}
void main()
{
vec3 nnormal = normalize(normal);
vec3 ntangent = normalize(tangent);
vec3 nbitangent = normalize(bitangent);
vec4 outColor = texture(sampler0, texcoord); // texture mapping
outColor *= Lit( material, position, nnormal, lightDirection, cameraPosition ); // lighting
outColor.w = 1.0f;
fragColor = outColor;
}
If you don't want texturing, just don't sample the texture; set outColor to vertexColor instead.
If you don't need lighting, just remove the call to Lit().
Edit:
For 2D objects you can still use the same program, but much of its functionality will be redundant. You can strip out:
camera
light
material
all vertex attributes except inPosition and inTexcoord (and maybe inVertexCol, if you need per-vertex color), along with all code related to the unneeded attributes
inPosition can be vec2
you will need to pass an orthographic projection matrix instead of a perspective one
you can even strip out the matrices entirely and pass a vertex buffer with positions in pixels; see my answer here about transforming those pixel positions to screen-space positions (there is a small sketch right after this list). You can do it either in C/C++ code or in GLSL/HLSL.
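The transform itself is tiny. In GLSL it might look like this (a sketch; viewportSize is an assumed uniform holding the window size in pixels):
// Maps pixel coordinates (origin in the top-left corner) to clip space.
in vec2 inPosition; // position in pixels
uniform vec2 viewportSize;
void main()
{
    vec2 ndc = inPosition / viewportSize * 2.0 - 1.0;
    gl_Position = vec4(ndc.x, -ndc.y, 0.0, 1.0); // flip y: pixel y grows downward
}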
Hope it helps somehow.
Intro
You've not specified the OpenGL/GLSL version you're targeting, so I'll assume it is at least OpenGL 3.
One of the main advantages of the programmable pipeline, compared with the fixed-function pipeline, is fully customizable vertex input. I'm not sure it's a good idea to reintroduce a constraint like a fixed vertex format. For what? (You will find the modern approach in the "Another way" section of this post.)
But, if you really want to emulate fixed-function...
I think you'll need a vertex shader for each vertex format you have, or you'll have to generate the shaders on the fly, possibly for all of the shader stages. For example, for an (x, y, color, tu, tv) input you would have a vertex shader such as:
in vec2 inPosition;
in vec4 inCol;
in vec2 inTexcoord;
void main()
{
...
}
As OpenGL 3 has no fixed-function transforms, lighting, or materials, you must implement them yourself:
You must pass matrices for the transformations
For the lit shader you must pass additional variables, such as the light direction
For the material shader you must pass material properties as input
Typically, in a shader, you do this with uniforms or uniform blocks:
layout(std140)
uniform CameraBuffer
{
mat4 mtxView;
mat4 mtxProj;
vec3 cameraPosition;
};
layout(std140)
uniform ObjectBuffer
{
mat4 mtxWorld;
};
layout(std140)
uniform LightBuffer
{
vec3 lightDirection;
};
struct Material
{
float Ka;
float Kd;
float Ks;
float A;
};
layout(std140)
uniform MaterialBuffer
{
Material material;
};
You could probably combine all the shaders for the different formats, uniforms, etc. into one big ubershader with branching.
Another way
You can stick to the modern approach and simply let users declare whatever vertex format they want (i.e., the format they used in their shader). Just implement a concept similar to IDirect3DDevice9::CreateVertexDeclaration or ID3D11Device::CreateInputLayout: you will make use of glVertexAttribPointer() and, probably, VAOs. This way you can also abstract the vertex layout in an API-independent way.
The main ideas are (see the sketch after this list):
the user passes your function an array of structures that describes the format in an API-independent way (the struct can be similar to D3DVERTEXELEMENT9 or D3D11_INPUT_ELEMENT_DESC)
that function interprets the array's elements one by one and builds some internal info that describes the format in an API-specific way (such as IDirect3DVertexDeclaration9 for D3D9, ID3D11InputLayout for D3D11, or a custom struct or VAO for OpenGL)
when it's time to set the vertex format, you just use this info
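A minimal sketch of what the OpenGL side could look like (VertexElement and buildVAO are hypothetical names, not an existing API; an OpenGL loader header is assumed):
#include <cstdint>
#include <vector>

// API-independent description of one vertex element.
struct VertexElement {
    GLuint location;    // attribute location in the shader
    GLint components;   // 1..4
    GLenum type;        // e.g. GL_FLOAT
    GLsizei offset;     // byte offset of the element within one vertex
};

// Builds a VAO (the API-specific "internal info") from the description.
GLuint buildVAO(GLuint vbo, const std::vector<VertexElement>& elems, GLsizei stride)
{
    GLuint vao;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    for (const VertexElement& e : elems) {
        glEnableVertexAttribArray(e.location);
        glVertexAttribPointer(e.location, e.components, e.type, GL_FALSE, stride,
                              reinterpret_cast<const void*>(static_cast<std::uintptr_t>(e.offset)));
    }
    glBindVertexArray(0);
    return vao;
}
When it's time to draw with that format, glBindVertexArray(vao) is all you need.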
P.S. If you need ideas on how to properly implement lighting and materials in GLSL (I mean the algorithms), you'd be better off picking up a book or some online tutorials than asking here. Or just Google "GLSL lighting".
You may find these links interesting:
Good resources for learning modern OpenGL (3.0 or later)?
OpenGL documentation
Select Books on OpenGL and 3D Graphics Coding
Happy coding!

GLSL 330 Matrix-Computation Error {No compile error}

Edit:
Alright, got it now :D
Problem: I completely forgot that GLM uses column-major matrices. I just had to change GL_TRUE to GL_FALSE (the transpose argument of glUniformMatrix4fv) and everything works.
I am trying to combine my model matrix with my projection matrix, like so:
#version 330
layout(location = 0) in vec4 position;
layout(location = 1) in vec4 color;
uniform mat4 projectionMatrix; // these are the matrices from my C++ app
uniform mat4 modelMatrix; // I checked their values with a debugger that shows all active uniforms: they're right!
uniform mat4 testUni; // here I checked whether it works when I precompute projection * model in my source: it does
mat4 viewMatrix = mat4(1.0f);
noperspective out vec4 vertColor;
mat4 MVP = projectionMatrix * modelMatrix ; // should actually have the same value as testUni
void main()
{
gl_Position = testUni * position ; // well... :) works
gl_Position = MVP * position ; // well... :) doesn't work [just the perspective transform]
vertColor = position;
}
Move the statement
mat4 MVP = projectionMatrix * modelMatrix ; // should have the same value as testUni
into main(). Shader execution starts at main, and initializing a global variable from uniforms at global scope is not reliably supported across implementations. If you want to avoid the per-vertex computation, precompute the matrix on the CPU and supply it as a uniform.
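A minimal corrected version of the shader, with the product moved into main() (only the relevant lines; the unused viewMatrix and the testUni debugging uniform are dropped):
#version 330
layout(location = 0) in vec4 position;
layout(location = 1) in vec4 color;
uniform mat4 projectionMatrix;
uniform mat4 modelMatrix;
noperspective out vec4 vertColor;
void main()
{
    mat4 MVP = projectionMatrix * modelMatrix; // computed per vertex; cheaper to precompute on the CPU
    gl_Position = MVP * position;
    vertColor = position;
}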