There is a scene with some objects and a terrain. When I rotate an object, the normals stay the same: the dark side of the object remains the dark side, no matter how the object is oriented. My specular lighting is working.
Vertex Shader:
uniform vec3 lightPos;
uniform sampler2D Texture;
varying vec2 TexCoord;
varying vec3 position;
varying vec3 vertex;
varying mat3 nMat;
varying mat3 vMatrix;
varying vec3 normal;
varying vec3 oneNormal;
varying vec3 lightPos2;
void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    position = vec3(gl_ModelViewMatrix * gl_Vertex);
    vMatrix = mat3(gl_ModelViewMatrix);
    lightPos2 = vec3(gl_ModelViewMatrix * vec4(lightPos, 1.0));
    vertex = vec3(gl_Vertex);
    nMat = gl_NormalMatrix;
    normal = gl_NormalMatrix * gl_Normal;
    oneNormal = gl_Normal;
    TexCoord = gl_MultiTexCoord0.xy;
}
Fragment Shader:
varying vec3 position;
varying vec3 normal;
uniform sampler2D Texture;
varying vec2 TexCoord;
uniform vec3 lightPos;
varying vec3 vertex;
uniform vec3 lambient;
uniform vec3 ldiffuse;
uniform vec3 lspecular;
uniform float shininess;
varying mat3 nMat;
varying mat3 vMatrix;
varying vec3 oneNormal;
varying vec3 lightPos2;
void main()
{
float dist=length(vertex-lightPos);
float att=1.0/(1.0+0.1*dist+0.01*dist*dist);
vec4 TexColor = texture2D(Texture, TexCoord);
vec3 ambient=TexColor.rgb*lambient; //the ambient light
//=== Diffuse ===//
vec3 surf2light=normalize(position-lightPos2);
float dcont=max(0.0,
dot( normalize(nMat*(-normal)), nMat*surf2light) );
vec3 diffuse=dcont*(TexColor.rgb*ldiffuse);
//=== Specular ===//
vec3 surf2view = normalize(lightPos-position);
surf2light=nMat*normalize(-vertex);
vec3 reflection=reflect(-surf2view,normalize(normal));
float scont=pow(max(0.0,dot(surf2light,reflection)),shininess);
vec3 specular=scont*lspecular;
gl_FragColor=vec4((ambient+diffuse+specular)*att,1.0);
}
You are transforming the normals. But at least for part of the lighting calculations, you're using a light source position that is also transformed into the same coordinate space. If you transform both the normals and the light source, the direction of the normals relative to the light source will be the same again.
Extracting the key lines from the vertex shader:
position = vec3(gl_ModelViewMatrix * gl_Vertex);
lightPos2 = vec3(gl_ModelViewMatrix * vec4(lightPos, 1.0));
normal = gl_NormalMatrix * gl_Normal;
gl_NormalMatrix corresponds to gl_ModelViewMatrix, with the necessary adjustments to transform direction vectors instead of points. Therefore, the modelview transformation has been applied to all of position, lightPos2 and normal.
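For reference, that adjustment can be written out by hand; a sketch only, not part of the original shaders (inverse() requires GLSL 1.40 or later):

// the normal matrix is the transpose of the inverse of the
// modelview matrix's upper-left 3x3 block
mat3 normalMatrix = transpose(inverse(mat3(gl_ModelViewMatrix)));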
Then, all three of these values (with interpolation applied) are passed into the fragment shader, where you have this calculation for the diffuse lighting term:
vec3 surf2light = normalize(position - lightPos2);
float dcont = max(0.0, dot(normalize(nMat * (-normal)), nMat * surf2light));
vec3 diffuse = dcont * (TexColor.rgb * ldiffuse);
Now, there are a couple of issues here:
You're applying nMat, the normal matrix, to both vectors again, even though they were already transformed. It does no harm, since a dot product is unchanged when the same rotation is applied to both vectors (for a rotation matrix R, (Ra)·(Rb) = aᵀRᵀRb = a·b because RᵀR = I), but it does not make sense.
All the values used (position, lightPos2, normal) were transformed by the modelview matrix. So their relative positions/directions are exactly the same as they were originally. The result of the calculation will be the same as it would be if it were applied to the corresponding untransformed vectors.
You need to decide which coordinate system you want to use for the lighting calculations. There are at least a couple of options. The easiest one for you is probably to use eye coordinate space. To do this, you can apply the view transformation to the light position before passing it into the shader, and then use this light position directly, without any additional transformations, in the shader code.
Since you say that the specular lighting works as desired, and you're using the uniform lightPos there, it might be as simple as using that variable instead of lightPos2 in the diffuse calculation.
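For illustration only, a sketch of that eye-space diffuse term, assuming a hypothetical lightPosEye uniform holding the light position already multiplied by the view matrix on the client side (position and normal are the eye-space varyings from the vertex shader):

uniform vec3 lightPosEye; // hypothetical: light position already in eye space

// in the fragment shader:
vec3 surf2light = normalize(lightPosEye - position); // surface -> light
float dcont = max(0.0, dot(normalize(normal), surf2light));
vec3 diffuse = dcont * (TexColor.rgb * ldiffuse);

This also drops the redundant nMat multiplications and the double negation from the original code.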
Related
I know that the same questions have been asked many times, but unfortunately I am unable to find the source of my problem.
With the help of tutorials I've written a small GLSL shader. Right now it can work with ambient light and load normals from a normal map. The issue is that the directional light seems to depend on my viewing angle.
Here are my shaders:
//Vertex Shader
#version 120
attribute vec3 position;
attribute vec2 texCoord;
attribute vec3 normal;
attribute vec3 tangent;
varying vec2 texCoord0;
varying mat3 tbnMatrix;
uniform mat4 transform;
void main(){
    gl_Position = transform * vec4(position, 1.0);
    texCoord0 = texCoord;
    vec3 n = normalize((transform * vec4(normal, 0.0)).xyz);
    vec3 t = normalize((transform * vec4(tangent, 0.0)).xyz);
    t = normalize(t - dot(t, n) * n);
    vec3 btTangent = cross(t, n);
    tbnMatrix = transpose(mat3(t, btTangent, n));
}
//Fragment Shader
#version 120
varying vec2 texCoord0;
varying mat3 tbnMatrix;
struct BaseLight{
    vec3 color;
    float intensity;
};
struct DirectionalLight{
    BaseLight base;
    vec3 direction;
};
uniform sampler2D diffuse;
uniform sampler2D normalMap;
uniform vec3 ambientLight;
uniform DirectionalLight directionalLight;
vec4 calcLight(BaseLight base, vec3 direction, vec3 normal){
    float diffuseFactor = dot(normal, normalize(direction));
    vec4 diffuseColor = vec4(0, 0, 0, 0);
    if(diffuseFactor > 0){
        diffuseColor = vec4(base.color, 1.0) * base.intensity * diffuseFactor;
    }
    return diffuseColor;
}
vec4 calcDirectionalLight(DirectionalLight directionalLight, vec3 normal){
    return calcLight(directionalLight.base, directionalLight.direction, normal);
}
void main(){
    vec3 normal = tbnMatrix * (255.0 / 128.0 * texture2D(normalMap, texCoord0).xyz - 255.0 / 256.0);
    vec4 totalLight = vec4(ambientLight, 0) + calcDirectionalLight(directionalLight, normal);
    gl_FragColor = texture2D(diffuse, texCoord0) * totalLight;
}
"transform" matrix that I send to the shader is summarily computed this way:
viewProjection=m_perspective* glm::lookAt(CameraPosition,CameraPosition+m_forward,m_up);
glm::mat4 transform = vievProjection * object_matrix;
"object_matrix" is a matrix that I get directly from physics engine.(I think it's the matrix that defines position and rotation of the object in the world space, correct me if I'm wrong.)
And I guess that "transform" matrix is computed correctly, since all the objects are drawn in the right positions. It looks like the problem is related to normals because if I set gl_FragColor = vec4(normal, 0) the color is also changing with camera rotation.
I would greatly appreciate if anyone could point me to my mistake
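One hedged observation on the code above: transform contains the view and projection matrices as well, so normals built from it rotate with the camera. A sketch of keeping the TBN vectors camera-independent, assuming a separate model-matrix uniform that is not present in the original code:

uniform mat4 model; // hypothetical: object_matrix uploaded on its own
vec3 n = normalize((model * vec4(normal, 0.0)).xyz);
vec3 t = normalize((model * vec4(tangent, 0.0)).xyz);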
So I am using a Blinn shader program on some of my models, one that is meant to use a DIRECTIONAL light as opposed to a point light. I started out with this original code, which uses a point light:
VERTEX:
varying vec3 Half;
varying vec3 Light;
varying vec3 Normal;
varying vec4 Ambient;
void main()
{
    // Vertex location in modelview coordinates
    vec3 P = vec3(gl_ModelViewMatrix * gl_Vertex);
    Light = vec3(gl_LightSource[0].position) - P;
    Normal = gl_NormalMatrix * gl_Normal;
    Half = gl_LightSource[0].halfVector.xyz;
    Ambient = gl_FrontMaterial.emission + gl_FrontLightProduct[0].ambient + gl_LightModel.ambient * gl_FrontMaterial.ambient;
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
FRAGMENT:
varying vec3 Half;
varying vec3 Light;
varying vec3 Normal;
varying vec4 Ambient;
uniform sampler2D tex;
vec4 blinn()
{
    vec3 N = normalize(Normal);
    vec3 L = normalize(Light);
    vec4 color = Ambient;
    float Id = dot(L, N);
    if (Id > 0.0)
    {
        color += Id * gl_FrontLightProduct[0].diffuse;
        vec3 H = normalize(Half);
        float Is = dot(H, L); // Specular is cosine of reflected and view vectors
        if (Is > 0.0) color += pow(Is, gl_FrontMaterial.shininess) * gl_FrontLightProduct[0].specular;
    }
    return color;
}
void main()
{
    gl_FragColor = blinn() * texture2D(tex, gl_TexCoord[0].xy);
}
However, as stated above, instead of a point light I want a directional light, such that no matter where in the scene the model is, the direction of the light is the same. So I make the following changes:
Instead of:
varying vec3 Light;
and
vec3 P = vec3(gl_ModelViewMatrix * gl_Vertex);
Light = vec3(gl_LightSource[0].position) - P;
I get rid of the above lines of code and instead in my fragment shader have:
uniform vec4 lightDir;
and
vec3 L = normalize(lightDir.xyz);
I pass the direction of the light as a uniform from outside my shader program, and this works well: the model is lit from a single direction no matter its location in the world! HOWEVER, the lighting now changes dramatically and unrealistically depending on the user's view, which makes sense, since I got rid of the "- P" in the light calculation from the original code. I've already tried adding that back (by moving lightDir to the vertex shader and passing it along again in a varying), and it just doesn't fix the problem. I'm afraid I just don't understand what is going on well enough to figure this out. I understand that the "- P" for the Light vec3 is necessary to make the specular/reflection work, but I don't know how to make it work for a directional light. How do I take the original code above and make it treat the light as directional as opposed to a point light?
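For illustration, a sketch of one common approach (not a confirmed fix): with a directional light the light vector is constant, but the Blinn half vector still needs the per-vertex view vector, so "- P" survives in the view term. This assumes lightDir is supplied already expressed in eye space, restores the Light varying, and carries the Ambient setup from the original vertex shader over unchanged:

uniform vec4 lightDir; // assumed: direction already expressed in eye space
varying vec3 Half;
varying vec3 Light;
varying vec3 Normal;
void main()
{
    vec3 P = vec3(gl_ModelViewMatrix * gl_Vertex);
    Light = normalize(lightDir.xyz); // constant direction, no "- P"
    vec3 V = normalize(-P);          // the view vector still depends on P
    Half = normalize(Light + V);     // per-vertex Blinn half vector
    Normal = gl_NormalMatrix * gl_Normal;
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}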
Could someone assist me or point me in the right direction to implement the basic FVFs from DirectX in GLSL code? I completely understand how to create a program, apply VBOs and all that, but I'm having great difficulty with the actual creation of the shaders. Namely:
transformed+lit (x,y,color,specular,tu,tv)
lit (x,y,z,color,specular,tu,tv)
unlit (x,y,z,nx,ny,nz,tu,tv) [material/lights]
With this, I'd be given enough to implement far more interesting shaders.
So, I'm not asking for a mechanism to deal with FVFs. I'm simply asking for the shader code, given the proper streams. I understand that the unlit and lit versions rely on passing in matrices, and I completely understand the concept. I am just having trouble finding shader examples showing these concepts.
Okay. If you have trouble finding working shaders, here is an example (honestly, you can find one in any OpenGL book).
This shader program will use your object's world matrix and the camera's matrices to transform vertices, then map one texture to the pixels and light them with one directional light, according to the material properties and light direction.
Vertex shader:
#version 330
// Vertex input layout (GLSL 330 uses in/out instead of attribute/varying)
in vec3 inPosition;
in vec3 inNormal;
in vec4 inVertexCol;
in vec2 inTexcoord;
in vec3 inTangent;
in vec3 inBitangent;
// Output (names must match the fragment shader's inputs)
out vec3 position;
out vec3 normal;
out vec4 vertexColor;
out vec2 texcoord;
out vec3 tangent;
out vec3 bitangent;
// Uniform buffers
layout(std140)
uniform CameraBuffer
{
    mat4 mtxView;
    mat4 mtxProj;
    vec3 cameraPosition;
};
layout(std140)
uniform ObjectBuffer
{
    mat4 mtxWorld;
};
void main()
{
    // transform position into world, view and clip space
    vec4 pos = mtxWorld * vec4(inPosition, 1.0);
    position = pos.xyz; // world-space position, needed for lighting
    pos = mtxView * pos;
    pos = mtxProj * pos;
    gl_Position = pos;
    // bring the basis vectors into world space as well
    // (assumes mtxWorld contains no non-uniform scaling)
    normal = mat3(mtxWorld) * inNormal;
    tangent = mat3(mtxWorld) * inTangent;
    bitangent = mat3(mtxWorld) * inBitangent;
    // just pass-through other stuff
    texcoord = inTexcoord;
    vertexColor = inVertexCol;
}
And fragment shader:
#version 330
// Input (matches the vertex shader outputs)
in vec3 position;
in vec3 normal;
in vec4 vertexColor;
in vec2 texcoord;
in vec3 tangent;
in vec3 bitangent;
// Output
out vec4 fragColor;
// Uniforms
uniform sampler2D sampler0;
layout(std140)
uniform CameraBuffer
{
    mat4 mtxView;
    mat4 mtxProj;
    vec3 cameraPosition;
};
layout(std140)
uniform ObjectBuffer
{
    mat4 mtxWorld;
};
layout(std140)
uniform LightBuffer
{
    vec3 lightDirection;
};
struct Material
{
    float Ka; // ambient quotient
    float Kd; // diffuse quotient
    float Ks; // specular quotient
    float A;  // shininess
};
layout(std140)
uniform MaterialBuffer
{
    Material material;
};
// function to calculate pixel lighting
float Lit( Material material, vec3 pos, vec3 nor, vec3 lit, vec3 eye )
{
    vec3 V = normalize( eye - pos );
    vec3 R = reflect( lit, nor );
    float Ia = material.Ka;
    float Id = material.Kd * clamp( dot(nor, -lit), 0.0f, 1.0f );
    float Is = material.Ks * pow( clamp( dot(R, V), 0.0f, 1.0f ), material.A );
    return Ia + Id + Is;
}
void main()
{
    vec3 nnormal = normalize(normal);
    vec3 ntangent = normalize(tangent);
    vec3 nbitangent = normalize(bitangent);
    vec4 outColor = texture(sampler0, texcoord); // texture mapping
    outColor *= Lit( material, position, nnormal, lightDirection, cameraPosition ); // lighting
    outColor.w = 1.0f;
    fragColor = outColor;
}
If you don't want texturing, just don't sample the texture; set outColor to vertexColor instead.
If you don't need lighting, just comment out the Lit() call.
Edit:
For 2D objects you can still use the same program, but much of the functionality will be redundant. You can strip out:
camera
light
material
all of the vertex attributes except inPosition and inTexcoord (maybe also inVertexCol, if you need vertices to have color), and all of the code related to the unneeded attributes
inPosition can be vec2
you will need to pass an orthographic projection matrix instead of a perspective one
you can even strip out the matrices and pass a vertex buffer with positions in pixels. See my answer here about how to transform those pixel positions to screen-space positions. You can do it either in C/C++ code or in GLSL/HLSL. (A sketch follows this list.)
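A minimal sketch of that last idea, with a hypothetical viewportSize uniform holding the viewport dimensions in pixels:

uniform vec2 viewportSize; // hypothetical: viewport size in pixels
attribute vec2 inPosition; // vertex position in pixels, origin at top-left
void main()
{
    vec2 ndc = inPosition / viewportSize * 2.0 - 1.0; // map to [-1, 1]
    gl_Position = vec4(ndc.x, -ndc.y, 0.0, 1.0);      // flip y: GL clip space has y up
}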
Hope it helps somehow.
Intro
You've not specified the OpenGL/GLSL version that you're targeting, so I'll assume it is at least OpenGL 3.
One of the main advantages of the programmable pipeline, compared with the fixed-function pipeline, is fully customizable vertex input. I'm not quite sure it is a good idea to introduce such constraints as a fixed vertex format. For what?.. (You will find the modern approach in the paragraph "Another way" of my post.)
But, if you really want to emulate fixed-function...
I think you'll need to have a vertex shader for each vertex format you have, or somehow generate vertex shaders on the fly. Or even do so for all of the shader stages.
For example, for an x, y, color, tu, tv input you will have a vertex shader such as:
attribute vec2 inPosition;
attribute vec4 inCol;
attribute vec2 inTexcoord;
void main()
{
    ...
}
As you don't have the transform, light, and material fixed-functionality in OpenGL 3, you must implement it yourself:
You must pass matrices for transformations
For the lit shader you must pass additional variables, such as light direction
For the material shader you must have materials as input
Typically, in shader, you do it with uniforms or uniform blocks:
layout(std140)
uniform CameraBuffer
{
    mat4 mtxView;
    mat4 mtxProj;
    vec3 cameraPosition;
};
layout(std140)
uniform ObjectBuffer
{
    mat4 mtxWorld;
};
layout(std140)
uniform LightBuffer
{
    vec3 lightDirection;
};
struct Material
{
    float Ka;
    float Kd;
    float Ks;
    float A;
};
layout(std140)
uniform MaterialBuffer
{
    Material material;
};
Probably, you can somehow combine all of the shaders with different formats, uniforms, etc. into one big ubershader with branching.
Another way
You can stick to the modern approach and just allow the user to declare the vertex format they want (the format they used in their shader). Just implement a concept similar to IDirect3DDevice9::CreateVertexDeclaration or ID3D11Device::CreateInputLayout: you will make use of glVertexAttribPointer() and, probably, VAOs. This way you can also abstract out the vertex layout in an API-independent way.
The main ideas are:
the user passes an array of structures that describes the format in an API-independent way to your function (this struct can be similar to D3DVERTEXELEMENT9 or D3D11_INPUT_ELEMENT_DESC)
that function interprets the array's elements one by one and builds some kind of internal info that describes the format in an API-specific way (such as IDirect3DVertexDeclaration9 for D3D9, ID3D11InputLayout for D3D11, or a custom struct or VAO for OpenGL)
when it's time to set the vertex format, you just use this info
P.S. If you need ideas on how to properly implement lights and materials in GLSL (I mean the algorithms here), you'd better pick up a book or online tutorials rather than asking here. Or just Google "GLSL lighting".
You may find these links interesting:
Good resources for learning modern OpenGL (3.0 or later)?
OpenGL documentation
Select Books on OpenGL and 3D Graphics Coding
Happy coding!
I have a point light in my scene. I thought it worked correctly until I tested it with the camera looking at the lit object from different angles, and found that the lit area moves on the mesh (in my case a simple plane). I'm using a typical ADS Phong lighting approach. I transform the light position into camera space on the client side, and then transform the interpolated vertex in the vertex shader with the modelview matrix.
My vertex shader looks like this:
#version 420
layout(location = 0) in vec4 position;
layout(location = 1) in vec2 uvs;
layout(location = 2) in vec3 normal;
uniform mat4 MVP_MATRIX;
uniform mat4 MODEL_VIEW_MATRIX;
uniform mat4 VIEW_MATRIX;
uniform mat3 NORMAL_MATRIX;
uniform vec4 DIFFUSE_COLOR;
//======= OUTS ============//
out smooth vec2 uvsOut;
out flat vec4 diffuseOut;
out vec3 Position;
out smooth vec3 Normal;
out gl_PerVertex
{
    vec4 gl_Position;
};
void main()
{
    uvsOut = uvs;
    diffuseOut = DIFFUSE_COLOR;
    Normal = normal;
    Position = vec3(MODEL_VIEW_MATRIX * position);
    gl_Position = MVP_MATRIX * position;
}
The fragment shader:
//==================== Uniforms ===============================
struct LightInfo{
    vec4 Lp; /// light position
    vec3 Li; /// light intensity
    vec3 Lc; /// light color
    int Lt;  /// light type
};
const int MAX_LIGHTS = 5;
uniform LightInfo lights[1];
// material props:
uniform vec3 KD;
uniform vec3 KA;
uniform vec3 KS;
uniform float SHININESS;
uniform int num_lights;
//// ADS lighting method:
vec3 pointlightType( int lightIndex, vec3 position, vec3 normal ) {
    vec3 n = normalize(normal);
    vec4 lMVPos = lights[0].Lp;
    vec3 s = normalize(vec3(lMVPos.xyz) - position); // surface to light
    vec3 v = normalize(vec3(-position));             // surface to viewer
    vec3 r = normalize(-reflect(s, n));
    vec3 h = normalize(v + s);
    float sDotN = max(0.0, dot(s, n));
    vec3 diff = KD * lights[0].Lc * sDotN;
    diff = clamp(diff, 0.0, 1.0);
    vec3 spec = vec3(0, 0, 0);
    if (sDotN > 0.0) {
        spec = KS * pow(max(0.0, dot(n, h)), SHININESS);
        spec = clamp(spec, 0.0, 1.0);
    }
    return lights[0].Li * (spec + diff);
}
I have studied a lot of tutorials, but none of them gives a thorough explanation of the whole process when it comes to transform spaces. I suspect it has something to do with the camera space I transform the light and vertex position into. In my case the view matrix is created with glm::lookAt(), which always negates the "eye" vector, so the view matrix in my shaders has a negated translation part. Is it supposed to be like that? Can someone give a detailed explanation of how it is done the right way in the programmable pipeline? My shaders are implemented based on the book "OpenGL 4.0 Shading Language Cookbook". The author also seems to use camera space. But it doesn't work right, unless that is the way it should work...
I just moved the calculations into world space. Now the point light stays on the spot. But how do I achieve the same using camera space?
I nailed down the bug, and it was a pretty stupid one. But it may be helpful to others who are too "math friendly". My light position in the shaders is defined as a vec3. On the client side it is represented as a vec4. I was effectively setting the .w component of the vec4 to zero each time before transforming it with the view matrix. Doing so, the light position vector wasn't getting transformed correctly, and all the light position problems in the shader stem from this. The solution is to keep the w component of the light position vector always equal to 1.
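To make the pitfall concrete, an illustrative snippet (lightPosWorld stands in for the actual client-side variable; VIEW_MATRIX is the uniform from the shaders above):

// with w == 0.0 the matrix's translation column is dropped, so the
// vector is treated as a direction; with w == 1.0 it is a point
vec4 asDirection = VIEW_MATRIX * vec4(lightPosWorld, 0.0); // wrong for a point light
vec4 asPoint     = VIEW_MATRIX * vec4(lightPosWorld, 1.0); // correct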
When I use my shaders I get the following results:
One problem is that the specular light is somewhat deformed and you can see the sphere's triangles; another is that I can see specular where I shouldn't (second image). One sphere's lighting is done in the vertex shader, the other's in the fragment shader.
Here is what my vertex light shader looks like:
Vertex:
// Material data.
uniform vec3 uAmbient;
uniform vec3 uDiffuse;
uniform vec3 uSpecular;
uniform float uSpecIntensity;
uniform float uTransparency;
uniform mat4 uWVP;
uniform mat3 uN;
uniform vec3 uSunPos;
uniform vec3 uEyePos;
attribute vec4 attrPos;
attribute vec3 attrNorm;
varying vec4 varColor;
void main(void)
{
    vec3 N = uN * attrNorm;
    vec3 L = normalize(uSunPos);
    vec3 H = normalize(L + uEyePos);
    float df = max(0.0, dot(N, L));
    float sf = max(0.0, dot(N, H));
    sf = pow(sf, uSpecIntensity);
    vec3 col = uAmbient + uDiffuse * df + uSpecular * sf;
    varColor = vec4(col, uTransparency);
    gl_Position = uWVP * attrPos;
}
Fragment:
varying vec4 varColor;
void main(void)
{
    //vec4 col = texture2D(texture_0, varTexCoords);
    //col.r += uLightDir.x;
    //col.rgb = vec3(pow(gl_FragCoord.z, 64));
    gl_FragColor = varColor;
}
It is possible that the data I supply from code is wrong.
uN is the world matrix (not inverted and not transposed, even though doing that didn't seem to make any difference).
uWVP is the world-view-projection matrix.
Any ideas as to where the problem might be would be appreciated.
[EDIT] Here are my light calculations done in the fragment shader:
Vertex shader file:
uniform mat4 uWVP;
uniform mat3 uN;
attribute vec4 attrPos;
attribute vec3 attrNorm;
varying vec3 varEyeNormal;
void main(void)
{
    varEyeNormal = uN * attrNorm;
    gl_Position = uWVP * attrPos;
}
Fragment shader file:
// Material data.
uniform vec3 uAmbient;
uniform vec3 uDiffuse;
uniform vec3 uSpecular;
uniform float uSpecIntensity;
uniform float uTransparency;
uniform vec3 uSunPos;
uniform vec3 uEyePos;
varying vec3 varEyeNormal;
void main(void)
{
    vec3 N = varEyeNormal;
    vec3 L = normalize(uSunPos);
    vec3 H = normalize(L + uEyePos);
    float df = max(0.0, dot(N, L));
    float sf = max(0.0, dot(N, H));
    sf = pow(sf, uSpecIntensity);
    vec3 col = uAmbient + uDiffuse * df + uSpecular * sf;
    gl_FragColor = vec4(col, uTransparency);
}
[EDIT2] As Joakim pointed out, I wasn't normalizing varEyeNormal in the fragment shader. After fixing that, the result is much better in the fragment-shader version. I also used the normalize function on uEyePos, so the specular no longer bleeds onto the dark side. Thanks for all the help.
Short answer: You need to normalize varEyeNormal in the fragment shader, instead of in the vertex shader.
Longer answer:
In order to get smooth lighting you need to calculate the normals per pixel instead of per vertex. Varyings computed in the vertex shader are linearly interpolated before being passed to the fragment shader, which works great as a shortcut in some cases but worse in others.
The reason you are seeing the triangle edges is that the interpolation between normals results in normals shorter than 1.0 at all pixels between the vertices.
To correct this you need to normalize the normals in the fragment shader instead of in the vertex shader.
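Using the variable names from the question's fragment shader, the change is one line:

// re-normalize the interpolated varying per fragment; between
// vertices it is shorter than unit length
vec3 N = normalize(varEyeNormal);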
The normal matrix should be the upper 3x3 of the transpose of the inverse of the modelview matrix, which is equivalent to the upper 3x3 of the modelview matrix if the modelview contains only rotations and translations (no scaling).
For more about the Normal matrix, see: http://www.lighthouse3d.com/tutorials/glsl-tutorial/the-normal-matrix/