Qt3D default uniforms and attributes - C++

I began learning to use shaders with QML, and I can't find any reference that lists the default uniform and attribute values passed to the shaders. In some examples we can see several of them, like vertexPosition or modelViewProjection (which is also passed as mvp), but there is no clear list of all the variables we can use.
After digging through the Qt source code, I found the default names of many variables:
uniform variables (found in renderview.cpp)
modelMatrix
viewMatrix
projectionMatrix
modelView
viewProjectionMatrix
modelViewProjection
mvp
inverseModelMatrix
inverseViewMatrix
inverseProjectionMatrix
inverseModelView
inverseViewProjectionMatrix
inverseModelViewProjection
modelNormalMatrix
modelViewNormal
viewportMatrix
inverseViewportMatrix
exposure
gamma
time
eyePosition
attributes (found in qattribute.cpp)
vertexPosition
vertexNormal
vertexColor
vertexTexCoord
vertexTangent
Is that all? These variables are largely sufficient for most of the shaders I am writing right now, but I just want to know whether I am missing something.

To confirm part of what #aRaMinet said:
Source: Qt Documentation
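For illustration, a minimal vertex shader that relies only on the default names listed in the question might look like this; this is just a sketch, and the exact GLSL version and qualifiers depend on your Qt3D material and target API:

#version 150 core

// default attributes provided by Qt3D (QAttribute default names)
in vec3 vertexPosition;
in vec3 vertexNormal;
in vec2 vertexTexCoord;

// default uniforms provided by the Qt3D render view
uniform mat4 modelViewProjection;
uniform mat3 modelViewNormal;

out vec3 normal;
out vec2 texCoord;

void main()
{
    normal = normalize(modelViewNormal * vertexNormal);
    texCoord = vertexTexCoord;
    gl_Position = modelViewProjection * vec4(vertexPosition, 1.0);
}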

Related

How to allow users' custom shaders in an OpenGL application

In software like Unity or Unreal, for example, how do they allow users to add their own custom shaders to an object?
Is this custom shader just a normal fragment shader or is it another kind of shader? And if it is just a fragment shader, how do they deal with the lights?
I'm not going to post the code here because it's big and would pollute the page, but I'm starting to learn from this tutorial series: https://github.com/opentk/LearnOpenTK/blob/master/Chapter2/6-MultipleLights/Shaders/lighting.frag (this is the shader from the last tutorial in the series). It says we should put the light types in functions, inside the fragment shader, to calculate the color of each fragment.
For example, this function to calculate a directional light, extracted from the code linked above:
vec3 CalcDirLight(DirLight light, vec3 normal, vec3 viewDir)
{
    vec3 lightDir = normalize(-light.direction);
    // diffuse shading
    float diff = max(dot(normal, lightDir), 0.0);
    // specular shading
    vec3 reflectDir = reflect(-lightDir, normal);
    float spec = pow(max(dot(viewDir, reflectDir), 0.0), material.shininess);
    // combine results
    vec3 ambient = light.ambient * vec3(texture(material.diffuse, TexCoords));
    vec3 diffuse = light.diffuse * diff * vec3(texture(material.diffuse, TexCoords));
    vec3 specular = light.specular * spec * vec3(texture(material.specular, TexCoords));
    return (ambient + diffuse + specular);
}
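In that tutorial's structure, main() then just accumulates the contribution of each light, roughly like this (a simplified sketch, assuming the dirLight/pointLights uniforms and the CalcPointLight helper that the tutorial's shader defines elsewhere):

void main()
{
    vec3 norm = normalize(Normal);
    vec3 viewDir = normalize(viewPos - FragPos);

    // start with the directional light, then add each point light
    vec3 result = CalcDirLight(dirLight, norm, viewDir);
    for (int i = 0; i < NR_POINT_LIGHTS; i++)
        result += CalcPointLight(pointLights[i], norm, FragPos, viewDir);

    FragColor = vec4(result, 1.0);
}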
But I've never seen people adding lights in their shaders in Unity, for example; people just add textures and mess with colors, unless they specifically want to mess with the lights.
Is there a way to make just one fragment shader that computes all the types of light, so the user could then apply another shader, just for the object's material, on top of that?
If you don't know how to answer but have some good reading material, or a place where I could learn more about OpenGL and GLSL, that would be of great value as well.
There are a couple of different ways to structure shader files, each with different pros and cons.
As individual programs. You make each file its own shader program. It is simple to add new programs, and it would allow your users to just write a program in GLSL, HLSL, or an engine's custom shader language. You will have to provide some way for the user to express what kind of data the program expects, unless you query it from the engine, but it might get complicated to make something generic enough.
Über Shader! Put all desired functionality in one shader and let the behavior be controlled by control flow or preprocessor macros such as #ifdef (see the sketch after this list). The user would then just have to write the main function, which the application adds to the rest of the shader. This lets the user use all the predefined variables and functions. The obvious downside is that it can become big and hard to handle, and small changes might break many shaders.
Micro Shaders. Each program contains a small piece of common functionality, and the application concatenates them into a working shader. The user just writes the main function and tells the program which functionality to add. The problem is that it is easy to get name conflicts unless you are careful, and it is harder to implement than the über shader.
Effect files. Provided by Microsoft’s effect framework or NVIDIA’s CgFX framework (deprecated).
Abstract Shade Trees. I don't actually know what this is, but it's supposed to be a solution.
You can also combine some of these techniques or invent your own solution based on your needs. Here are the solutions discussed (in section 2.3.3, Existing Solutions).
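As a rough illustration of the über-shader approach, the application can prepend #define lines to a shared shader source before compiling it. The structure and the names below are made up for the example:

#version 330 core
// The application injects lines such as "#define USE_DIR_LIGHT" right after
// the #version line before handing the source to glCompileShader.

in vec2 TexCoords;
in vec3 Normal;

out vec4 FragColor;

uniform sampler2D diffuseMap;

void main()
{
    vec3 color = texture(diffuseMap, TexCoords).rgb;

#ifdef USE_DIR_LIGHT
    // a hard-coded directional light, only compiled in when requested
    vec3 lightDir = normalize(vec3(-0.5, -1.0, -0.3));
    float diff = max(dot(normalize(Normal), -lightDir), 0.0);
    color *= 0.2 + 0.8 * diff;
#endif

    FragColor = vec4(color, 1.0);
}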

The difference between a color attribute and using gl_Color

Most GLSL shaders use an attribute for the color in the vertex shader, which is then forwarded as a varying to the fragment shader, like this:
attribute vec4 position;
attribute vec4 color;

uniform mat4 mvp;

varying vec4 destinationColor;

void main()
{
    destinationColor = color;
    gl_Position = mvp * position;
}
Setting the color can be done with glVertexAttribPointer() to pass one color per vertex, or with glVertexAttrib4fv() to pass a global color for all vertices. I am trying to understand the difference from the predefined variable gl_Color in the vertex shader (if there is any difference at all), i.e.
attribute vec4 position;

uniform mat4 mvp;

varying vec4 destinationColor;

void main()
{
    destinationColor = gl_Color;
    gl_Position = mvp * position;
}
and using glColorPointer() to pass one color per vertex, or glColor4fv() to use a global color for all vertices. To me the second shader looks better (= more efficient?), because it uses fewer attributes. But all tutorials and online resources use the first approach, so I wonder if I missed anything or if there is no difference at all.
What is better practice when writing GLSL shaders?
To me the second shader looks better (= more efficient?), because it uses fewer attributes.
It does not use fewer attributes. It just uses fewer explicit attribute declarations. All of the work needed to get that color value to OpenGL is still there. It's still being done. The hardware is still fetching data from a buffer object or getting it from the glColor context value or whatever.
You just don't see it in your shader's text. But just because you don't see it doesn't mean that it happens for free.
User-defined attributes are preferred for the following reasons:
User-defined attributes make it clear how many resources your shaders are using. If you want to know how many attributes you need to provide to a shader, just look at the global declarations. But with predefined attributes, you can't do this; you have to scan through the entire vertex shader for any gl_* names that name a predefined attribute.
User-defined attributes can do more things. If you want to pass integer values as integers to the vertex shader, you must use a user-defined attribute. If you need to pass a double-precision float to the vertex shader, again, a predefined attribute cannot help you.
Predefined attributes were removed from core OpenGL contexts. OSX, for example, does not allow the compatibility profile. You can still use OpenGL 2.1, but if you want to use any OpenGL version 3.2 or greater on OSX, you cannot use removed functionality. And the built-in vertex attributes were removed in OpenGL 3.1.
Predefined attributes were never a part of OpenGL ES 2.0+. So if you want to write shaders that can work in OpenGL ES, you again cannot use them.
So basically, there's no reason to use them these days.
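For comparison, here is roughly what the first shader from the question looks like with user-defined attributes in core-profile GLSL; a sketch in 3.30 syntax, where the layout locations are an arbitrary choice:

#version 330 core

layout(location = 0) in vec4 position;
layout(location = 1) in vec4 color;

uniform mat4 mvp;

out vec4 destinationColor;

void main()
{
    destinationColor = color;
    gl_Position = mvp * position;
}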
If I remember correctly, gl_Color is a deprecated remnant of the old-style API without VAO/VBO, using glBegin() ... glEnd(). If you go to the core profile there is no gl_Color anymore, so I assume you are using an old OpenGL version or the compatibility profile.
If you try to use gl_Color in a core profile (for example 4.00), you get:
0(35) : error C7616: global variable gl_Color is removed after version 140
This means gl_Color was removed after GLSL version 1.40.
It is not entirely a matter of performance, but rather of the change in graphics rendering software architecture, or the hierarchy of GL calls if you will.

Weird noise on rendered objects - OpenGL

To be more specific, here's the screenshot:
https://drive.google.com/file/d/0B_o-Ym0jhIqmY2JJNmhSeGpyanM/edit?usp=sharing
After debugging for about 3 days, I really have no idea. Those black lines and strange fractal black segments just drive me nuts. The geometries are rendered by forward rendering, blending layer by layer for each light I add.
My first guess was to install the newest graphics card driver (I'm using a GTX 660M), but that didn't solve it. Can VSync be a possible issue here? (I'm rendering in a window rather than in full-screen mode.) Or what is the most likely cause of this kind of trouble?
My code is like this:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
glDepthMask(false);
glDepthFunc(GL_EQUAL);
/*loop here*/
/*draw for each light I had*/
glDepthFunc(GL_LESS);
glDepthMask(true);
glDisable(GL_BLEND);
One thing I've noticed looking at your lighting vertex shader code:
void main()
{
    gl_Position = projectionMatrix * vec4(position, 1.0);
    texCoord0 = texCoord;
    normal0 = (normalMatrix * vec4(normal, 0)).xyz;
    modelViewPos0 = (modelViewMatrix * vec4(position, 1)).xyz;
}
You are applying the projection matrix directly to the vertex position, which I'm assuming is in object space.
Try setting it to:
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
And we can work from there.
This answer is slightly speculative, but based on the symptoms, and the code you posted, I suspect a precision problem. The rendering code you linked, looks like this in a shortened form:
useShader(FWD_AMBIENT);
part.render();

glDepthMask(GL_FALSE);
glDepthFunc(GL_EQUAL);

for (Light light : lights) {
    useShader(light.getShaderType());
    part.render();
}
So you're rendering the same thing multiple times, with different shaders, and rely on the resulting pixels to end up with the same depth value (depth comparison function is GL_EQUAL). This is not a safe assumption. Quote from the GLSL spec:
In this section, variance refers to the possibility of getting different values from the same expression in different programs. For example, say two vertex shaders, in different programs, each set gl_Position with the same expression in both shaders, and the input values into that expression are the same when both shaders run. It is possible, due to independent compilation of the two shaders, that the values assigned to gl_Position are not exactly the same when the two shaders run. In this example, this can cause problems with alignment of geometry in a multi-pass algorithm.
I copied the whole paragraph because the example they are using sounds like an exact description of what you are doing.
To prevent this from happening, you can declare your out variables as invariant. In each of your vertex shaders that you use for the multi-pass rendering, add this line:
invariant gl_Position;
This guarantees that the outputs are identical if all the inputs are the same. To meet this condition, you should also make sure that you pass exactly the same transformation matrix into both shaders, and of course use the same vertex coordinates.
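Applied to the vertex shader posted above (and with the modelViewMatrix fix from the other answer folded in), that would look roughly like this:

// redeclared as invariant so every pass computes bit-identical positions
invariant gl_Position;

void main()
{
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    texCoord0 = texCoord;
    normal0 = (normalMatrix * vec4(normal, 0)).xyz;
    modelViewPos0 = (modelViewMatrix * vec4(position, 1)).xyz;
}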

GLSL + OpenGL Moving away from state machine

I started moving one of my projects away from the fixed pipeline. To try things out, I wrote a shader that simply takes the OpenGL matrices and transforms the vertex with them, so that I could start calculating my own matrices once I knew that worked. I thought this would be a simple task, but even this will not work.
I started out with this shader for the normal fixed pipeline:
void main(void)
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0;
}
I then changed it to this:
uniform mat4 model_matrix;
uniform mat4 projection_matrix;

void main(void)
{
    gl_Position = model_matrix * projection_matrix * gl_Vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0;
}
I then retrieve the OpenGL matrices like this and pass them to the shader with this code:
[material.shader bindShader];

GLfloat modelmat[16];
GLfloat projectionmat[16];

glGetFloatv(GL_MODELVIEW_MATRIX, modelmat);
glGetFloatv(GL_PROJECTION_MATRIX, projectionmat);

glUniformMatrix4fv([material.shader getUniformLocation:"model_matrix"], 1, GL_FALSE, modelmat);
glUniformMatrix4fv([material.shader getUniformLocation:"projection_matrix"], 1, GL_FALSE, projectionmat);

// ... draw stuff
For some reason this does not draw anything (I am 95% positive those matrices are correct before I pass them, by the way). Any ideas?
The problem was that my order of matrix multiplication was wrong. I was not aware that matrix multiplication is not commutative.
The correct order should be:
projection * modelview * vertex
Thanks to ltjax and doug65536
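In shader terms, the fix is just swapping the multiplication order in the second shader above (a sketch, still using the deprecated built-ins from the question):

uniform mat4 model_matrix;      // modelview matrix read back via GL_MODELVIEW_MATRIX
uniform mat4 projection_matrix;

void main(void)
{
    // projection first, then modelview, then the vertex
    gl_Position = projection_matrix * model_matrix * gl_Vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0;
}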
For the matrix math, try using an external library, such as GLM. They also have some basic examples on how to create the necessary matrices and do the projection * view * model transform.
Use OpenGL 3.3's shading language. OpenGL 3.3 is roughly comparable to DirectX10, hardware-wise.
Don't use the deprecated functionality. Almost everything in your first void main example is deprecated. You must explicitly declare your inputs and outputs if you expect to use the high-performance code path of the drivers. Deprecated functionality is also far more likely to be full of driver bugs.
Use the newer, more explicit style of declaring inputs and outputs and set them in your code. It really isn't bad. I thought this would be ugly, but it actually was pretty easy (I wish I had just done it earlier); see the sketch below.
FYI, the last time I looked at a lowest common denominator for OpenGL (2012), it was OpenGL 3.3. Practically all video cards from AMD and NVidia that have any gaming capability will have OpenGL 3.3. And they have for a while, so any code you write now for OpenGL 3.3 will work on a typical low-end or better GPU.
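As a rough sketch of that explicit style (the names, attribute locations, and GLSL 3.30 version here are arbitrary choices, not taken from the question's code):

#version 330 core

layout(location = 0) in vec4 position;
layout(location = 1) in vec2 texCoord;

uniform mat4 projection_matrix;
uniform mat4 modelview_matrix;

out vec2 fragTexCoord;

void main()
{
    fragTexCoord = texCoord;
    gl_Position = projection_matrix * modelview_matrix * position;
}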

RenderMonkey - GLSL light

I am making a shader in which I am using a spot light. Before I make my own, I am trying some shaders that I've found on the Internet.
I found this GLSL code:
vec4 final_color =
    (gl_FrontLightModelProduct.sceneColor * gl_FrontMaterial.ambient) +
    (gl_LightSource[0].ambient * gl_FrontMaterial.ambient);
Does anyone know how I can do this in RenderMonkey? I know that I cannot use gl_LightSource[0]; how can I replace it?
In RenderMonkey you need to set up variables for the light properties which your shader will use, such as a vec4 for the light's ambient, diffuse, and specular colors, then some vec3 for the vector to the light / position of the light, etc.
Then you can mark these variables as artist variables, and you can edit them 'live' in the Artist Editor on the right.
It's a bit awkward, meaning that you either need to adjust your shaders so that they don't rely on the built-in gl_ constructs (so you don't need to edit a shader for it to run both in your program and in RenderMonkey), or you need to edit the shaders whenever you move between them. I prefer the former.
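A minimal sketch of the former approach, assuming you create uniforms such as sceneColor, lightAmbient and materialAmbient as RenderMonkey variables (the names are arbitrary):

// these replace gl_FrontLightModelProduct, gl_LightSource[0] and gl_FrontMaterial;
// bind them as (artist-editable) variables in RenderMonkey
uniform vec4 sceneColor;
uniform vec4 lightAmbient;
uniform vec4 materialAmbient;

void main()
{
    vec4 final_color = (sceneColor * materialAmbient) +
                       (lightAmbient * materialAmbient);
    gl_FragColor = final_color;
}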