In software like Unity or Unreal, for example, how do they allow users to add their own custom shaders to an object?
Is this custom shader just a normal fragment shader, or is it another kind of shader? And if it is just a fragment shader, how do they deal with the lights?
I'm not gonna post the code here because it's big and would pollute the page, but I'm starting to learn from here: https://github.com/opentk/LearnOpenTK/blob/master/Chapter2/6-MultipleLights/Shaders/lighting.frag (it's a series of tutorials; this is the shader from the last one), and they say we should put the light types in functions, inside the fragment shader, to calculate the colors of each fragment.
For example, this function to calculate a directional light, extracted from the code I sent above:
vec3 CalcDirLight(DirLight light, vec3 normal, vec3 viewDir)
{
    vec3 lightDir = normalize(-light.direction);
    // diffuse shading
    float diff = max(dot(normal, lightDir), 0.0);
    // specular shading
    vec3 reflectDir = reflect(-lightDir, normal);
    float spec = pow(max(dot(viewDir, reflectDir), 0.0), material.shininess);
    // combine results
    vec3 ambient = light.ambient * vec3(texture(material.diffuse, TexCoords));
    vec3 diffuse = light.diffuse * diff * vec3(texture(material.diffuse, TexCoords));
    vec3 specular = light.specular * spec * vec3(texture(material.specular, TexCoords));
    return (ambient + diffuse + specular);
}
But I've never seen people add lights in their shaders in Unity, for example; people just add textures and mess with colors, unless they really want to work with the lights specifically.
Is there a way to make just one fragment shader that computes all the types of light, so the user could then apply another shader, just for the object's material, on top of that?
If you don't know how to answer but have some good reading material, or a place where I could learn more about OpenGL and GLSL, that would be of great value as well.
There are a couple of different ways to structure shader files, each with different pros and cons.
As individual programs. You make each file its own shader program. It's simple to add new programs, and it would allow your users to just write a program in GLSL, HLSL, or an engine's custom shader language. You will have to provide some way for the user to express what kind of data the program expects, unless you query it from the engine, but it might get complicated to make something that's generic enough.
Über Shader! Put all desired functionality in one shader and let the behavior be controlled by control flow or preprocessor macros, such as #ifdef (a short sketch of this approach follows below this list). The user would then just have to write the main function, which the application adds to the rest of the shader. This lets the user use all the predefined variables and functions. The obvious downside is that it can get big and hard to handle, and small changes might break many shaders.
Micro Shaders. Each program contains a small piece of common functionality, and the application concatenates them all into a functioning shader. The user just writes the main function and tells the application which functionality to add. The problem is that it's easy to get name conflicts unless you're careful, and it's harder to implement than the über shader.
Effect files. Provided by Microsoft’s effect framework or NVIDIA’s CgFX framework (deprecated).
Abstract Shade Trees. I don't actually know what this is, but it's supposed to be a solution.
You can also combine some of these techniques or try to invent your own solution based on your needs. These are the solutions discussed in section 2.3.3, Existing Solutions.
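To make the über shader idea a bit more concrete, here is a minimal sketch of what such a file could look like; the macro and uniform names (USE_DIR_LIGHT, dirLightDir, albedoMap, and so on) are made up for illustration and not taken from any particular engine:
#version 330 core

// Engine-provided preamble: the engine defines USE_DIR_LIGHT (and sets the
// uniforms) before compiling, so one source file covers several variants.
in vec3 Normal;
in vec2 TexCoords;
out vec4 FragColor;

uniform sampler2D albedoMap;

#ifdef USE_DIR_LIGHT
uniform vec3 dirLightDir;
uniform vec3 dirLightColor;

vec3 CalcDirLight(vec3 normal, vec3 albedo)
{
    float diff = max(dot(normal, normalize(-dirLightDir)), 0.0);
    return dirLightColor * diff * albedo;
}
#endif

// --- the user-written main starts here, appended by the application ---
void main()
{
    vec3 albedo = texture(albedoMap, TexCoords).rgb;
    vec3 color = vec3(0.0);
#ifdef USE_DIR_LIGHT
    color += CalcDirLight(normalize(Normal), albedo);
#endif
    FragColor = vec4(color, 1.0);
}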
Related
I began learning to use shaders with QML, and I can't find any references that describe the default uniform and attribute values that are passed to the shaders. In some examples we can see several of them, like vertexPosition or modelViewProjection (which is also passed as mvp), but there is no clear list of all the variables we can use.
After investigating the Qt source code, I found the default names for many variables:
uniform variables (found in renderview.cpp)
modelMatrix
viewMatrix
projectionMatrix
modelView
viewProjectionMatrix
modelViewProjection
mvp
inverseModelMatrix
inverseViewMatrix
inverseProjectionMatrix
inverseModelView
inverseViewProjectionMatrix
inverseModelViewProjection
modelNormalMatrix
modelViewNormal
viewportMatrix
inverseViewportMatrix
exposure
gamma
time
eyePosition
attributes (found in qattribute.cpp)
vertexPosition
vertexNormal
vertexColor
vertexTexCoord
vertexTangent
Is that all? These variables are largely sufficient for most of the shaders I am writing right now, but I just want to know if I missed something.
To confirm part of what #aRaMinet said:
Source: Qt Documentation
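As a quick illustration, a minimal vertex shader that relies only on the default names listed above could look like this; it's just a sketch, and you should double-check the exact types (I believe modelNormalMatrix is a mat3) against the Qt documentation:
#version 330 core

// default attribute names provided by Qt 3D
in vec3 vertexPosition;
in vec3 vertexNormal;
in vec2 vertexTexCoord;

// default uniform names provided by Qt 3D
uniform mat4 modelViewProjection;
uniform mat3 modelNormalMatrix;

out vec3 worldNormal;
out vec2 texCoord;

void main()
{
    worldNormal = normalize(modelNormalMatrix * vertexNormal);
    texCoord = vertexTexCoord;
    gl_Position = modelViewProjection * vec4(vertexPosition, 1.0);
}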
I've been reading many articles on uniform if statements that deal with branching to change the behavior of large shaders ("uber shaders"). I started on an uber shader (OpenGL, LWJGL), but then I realized that the simple act of adding an if statement controlled by a uniform in the fragment shader, doing only simple calculations, decreased my FPS by 5 compared to separate shaders without uniform if statements. I haven't set any cap on my FPS; it's just refreshing as fast as possible. I'm about to add normal mapping and parallax mapping, and I can see two routes:
Uber vertex shader:
#version 400 core

layout(location = 0) in vec3 position;
layout(location = 1) in vec2 textureCoords;
layout(location = 2) in vec3 normal;

uniform mat4 MVPmatrix;
uniform float RenderFlag;

void main(void){
    if(RenderFlag == 0){
        // calculate out variables for normal mapping to the fragment shader
    }
    if(RenderFlag == 1){
        // calculate out variables for parallax mapping to the fragment shader
    }
    gl_Position = MVPmatrix * vec4(position, 1);
}
Uber fragment shader:
#version 400 core

in vec3 position;
in vec2 textureCoords;
in vec3 normal;
in vec3 ReflectDirR;

uniform samplerCube cube_texture;
uniform float RenderFlag;
// if set, either of the 2 render modes will have some reflection of the
// skybox added to it, like a reflective surface
uniform float reflectionFlag;

out vec4 out_Color;

void main(void){
    if(RenderFlag == 0){
        // display normal mapping
        if(reflectionFlag == 1){
            vec4 reflectColor = texture(cube_texture, ReflectDirR);
            // add reflection color to final color and output
        }
    }
    if(RenderFlag == 1){
        // display parallax mapping
        if(reflectionFlag == 1){
            vec4 reflectColor = texture(cube_texture, ReflectDirR);
            // add reflection color to final color and output
        }
    }
}
The benefit of this (for me) is simplicity in the flow, but it makes the overall program more complex, and I'm faced with ugly nested if statements. Also, if I wanted to completely avoid if statements I would need 4 separate shaders, one to handle each possible branch (normal without reflection, normal with reflection, parallax without reflection, parallax with reflection), just for one feature, reflection.
1: Does GLSL execute both branches and subsequent branches, calculate BOTH functions, and then output the correct one?
2: Instead of a uniform flag for the reflection, should I remove the if statement in favor of calculating the reflection color regardless and adding it to the final color, if it's a relatively small operation, with something like
finalColor = finalColor + reflectionColor * X
where X is a uniform variable: 0 if there is no reflection, some amount if there is.
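For reference, a sketch of what option 2 could look like in the fragment shader; diffuse_texture, reflectionAmount, and out_Color are illustrative names that are not in the code above:
#version 400 core

in vec2 textureCoords;
in vec3 ReflectDirR;

uniform sampler2D diffuse_texture;
uniform samplerCube cube_texture;
uniform float reflectionAmount; // 0.0 when reflection is off, otherwise the blend amount

out vec4 out_Color;

void main(void){
    vec4 finalColor = texture(diffuse_texture, textureCoords);

    // always sample the cube map; multiplying by 0.0 removes its contribution,
    // so there is no branch on a uniform here
    vec4 reflectColor = texture(cube_texture, ReflectDirR);
    finalColor = finalColor + reflectColor * reflectionAmount;

    out_Color = finalColor;
}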
Right off the bat, let me point out that GL4 has added subroutines, which are sort of a combination of both things you discussed. However, unless you are using a massive number of permutations of a single basic shader that gets swapped out multiple times during a frame (as you might if you had some dynamic material system in a forward rendering engine), subroutines really are not a performance win. I've put some time and effort into this in my own work and I get worthwhile improvements on one particular hardware/driver combination, and no appreciable change (good or bad) on most others.
Why did I bring up subroutines? Mostly because you're discussing what amounts to micro optimization, and subroutines are a really good example of why it doesn't pay to invest a whole lot of time thinking about that until the very end of development. If you're struggling to meet some performance number and you've crossed every high-level optimization strategy off the list, then you can worry about this stuff.
That said, it's almost impossible to answer how GLSL executes your shader. It's just a high-level language; the underlying hardware architectures have changed several times over since GLSL was created. The latest generation of hardware has actual branch predication and some pretty complicated threading engines that GLSL 1.10 class hardware never had, some of which is actually exposed directly through compute shaders now.
You could run the numbers to see which strategy works best on your hardware, but I think you'll find it's the old micro-optimization dilemma, and you may not even get enough of a measurable difference in performance to guess which approach to take. Keep in mind that "uber shaders" are attractive for multiple reasons (not all performance related), not the least of which is that you may have fewer and less complicated draw commands to batch. If there's no appreciable difference in performance, consider the design that's simpler and easier to implement and maintain instead.
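For reference, a GL4 subroutine version of the normal/parallax switch could be sketched roughly like this (the function and uniform names here are made up for illustration); on the application side you would pick the implementation per draw with glGetSubroutineIndex and glUniformSubroutinesuiv:
#version 400 core

in vec2 textureCoords;
out vec4 out_Color;

uniform sampler2D diffuse_texture;

// a subroutine type, and a subroutine uniform that selects the implementation
subroutine vec4 ShadeMode(vec2 uv);
subroutine uniform ShadeMode shadeMode;

subroutine(ShadeMode)
vec4 shadeNormalMapped(vec2 uv)
{
    // normal mapping path would go here
    return texture(diffuse_texture, uv);
}

subroutine(ShadeMode)
vec4 shadeParallaxMapped(vec2 uv)
{
    // parallax mapping path would go here
    return texture(diffuse_texture, uv);
}

void main(void)
{
    out_Color = shadeMode(textureCoords);
}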
The various examples of directional lights are all too varied to get a coherent picture of what's supposed to be happening; some examples use matrices with unexplained contents, and others just use the vertex normal and light direction.
I've been attempting to write a shader based on what made the most sense to me, but currently it leaves the scene either fully lit or fully unlit, depending on the light direction.
In my fragment shader:
float diffuseFactor = dot(vertexNormal, -lightDirection);
vec4 diffuseColor = lightColor * diffuseFactor;
fragColor = color * diffuseColor;
So am I way off? Do I need to pass in more information (e.g. a modelViewMatrix?) to achieve the desired result?
To be more specific, here's the screenshot:
https://drive.google.com/file/d/0B_o-Ym0jhIqmY2JJNmhSeGpyanM/edit?usp=sharing
After debugging for about 3 days, I really have no idea. Those black lines and strange fractal black segments just drive me nuts. The geometries are rendered by forward rendering, blending layer by layer for each light I add.
My first guess was to download the newest graphics card driver (I'm using a GTX 660M), but that didn't solve it. Can VSync be a possible issue here? (I'm rendering in a window rather than in full screen mode.) Or what is the most likely cause of this kind of trouble?
My code is like this:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
glDepthMask(false);
glDepthFunc(GL_EQUAL);
/*loop here*/
/*draw for each light I had*/
glDepthFunc(GL_LESS);
glDepthMask(true);
glDisable(GL_BLEND);
One thing I've noticed looking at your lighting vertex shader code:
void main()
{
    gl_Position = projectionMatrix * vec4(position, 1.0);
    texCoord0 = texCoord;
    normal0 = (normalMatrix * vec4(normal, 0)).xyz;
    modelViewPos0 = (modelViewMatrix * vec4(position, 1)).xyz;
}
You are applying the projection matrix directly to the vertex position, which I'm assuming is in object space.
Try setting it to:
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
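In other words, the whole vertex shader from above would look like this:
void main()
{
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    texCoord0 = texCoord;
    normal0 = (normalMatrix * vec4(normal, 0)).xyz;
    modelViewPos0 = (modelViewMatrix * vec4(position, 1)).xyz;
}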
And we can work from there.
This answer is slightly speculative, but based on the symptoms and the code you posted, I suspect a precision problem. The rendering code you linked looks like this in shortened form:
useShader(FWD_AMBIENT);
part.render();
glDepthMask(GL_FALSE);
glDepthFunc(GL_EQUAL);
for (Light light : lights) {
    useShader(light.getShaderType());
    part.render();
}
So you're rendering the same thing multiple times with different shaders, and relying on the resulting pixels ending up with the same depth value (the depth comparison function is GL_EQUAL). This is not a safe assumption. Quoting from the GLSL spec:
In this section, variance refers to the possibility of getting different values from the same expression in different programs. For example, say two vertex shaders, in different programs, each set gl_Position with the same expression in both shaders, and the input values into that expression are the same when both shaders run. It is possible, due to independent compilation of the two shaders, that the values assigned to gl_Position are not exactly the same when the two shaders run. In this example, this can cause problems with alignment of geometry in a multi-pass algorithm.
I copied the whole paragraph because the example they are using sounds like an exact description of what you are doing.
To prevent this from happening, you can declare your out variables as invariant. In each of your vertex shaders that you use for the multi-pass rendering, add this line:
invariant gl_Position;
This guarantees that the outputs are identical if all the inputs are the same. To meet this condition, you should also make sure that you pass exactly the same transformation matrix into both shaders, and of course use the same vertex coordinates.
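To show where it goes, here is a minimal sketch of one of those vertex shaders with the declaration in place at global scope (the version and variable names are illustrative; the same invariant line has to appear in every vertex shader used for these passes):
#version 330 core

invariant gl_Position; // repeat this declaration in every vertex shader of the multi-pass rendering

in vec3 position;

uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;

void main()
{
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}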
I am making a shader in which I am using a spot light. I am trying out some shaders that I've found on the Internet before I make my own.
I found this GLSL code:
vec4 final_color =
    (gl_FrontLightModelProduct.sceneColor * gl_FrontMaterial.ambient) +
    (gl_LightSource[0].ambient * gl_FrontMaterial.ambient);
Does anyone know how I can do this in RenderMonkey? I know that I cannot use gl_LightSource[0]; how can I do it?
In RenderMonkey you would need to set up variables for the light properties which your shader would use, such as a vec4 for the light's ambient, diffuse, and specular colors, and some vec3s for the vector to the light / position of the light, etc.
Then you can mark these variables as artist variables, and you can edit them 'live' in the Artist Editor on the right.
It's a bit awkward, meaning that you either need to adjust your shader so that you don't rely on the built-in gl_ constructs (so you don't need to edit the shader for it to run both in your program and in RenderMonkey), or you need to edit the shaders when you move between the two. I prefer the former; a sketch of that is below.
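Here is a sketch of what the snippet above could look like with user-defined variables instead of the gl_ built-ins; the uniform names are just examples of what you might create in RenderMonkey:
// variables created in RenderMonkey and exposed as artist variables
uniform vec4 sceneAmbient;    // stands in for gl_FrontLightModelProduct.sceneColor
uniform vec4 lightAmbient;    // stands in for gl_LightSource[0].ambient
uniform vec4 materialAmbient; // stands in for gl_FrontMaterial.ambient

void main()
{
    vec4 final_color =
        (sceneAmbient * materialAmbient) +
        (lightAmbient * materialAmbient);

    gl_FragColor = final_color;
}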