How to use a directional light in a Blinn shader instead of a point light? - opengl

So I am using a Blinn shader program on some of my models, and I want it to use a DIRECTIONAL light as opposed to a point light. I started out with this original code, which uses a point light:
VERTEX:
varying vec3 Half;
varying vec3 Light;
varying vec3 Normal;
varying vec4 Ambient;
void main()
{
// Vertex location in modelview coordinates
vec3 P = vec3(gl_ModelViewMatrix * gl_Vertex);
Light = vec3(gl_LightSource[0].position) - P;
Normal = gl_NormalMatrix * gl_Normal;
Half = gl_LightSource[0].halfVector.xyz;
Ambient = gl_FrontMaterial.emission + gl_FrontLightProduct[0].ambient + gl_LightModel.ambient*gl_FrontMaterial.ambient;
gl_TexCoord[0] = gl_MultiTexCoord0;
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
FRAGMENT:
varying vec3 Half;
varying vec3 Light;
varying vec3 Normal;
varying vec4 Ambient;
uniform sampler2D tex;
vec4 blinn()
{
vec3 N = normalize(Normal);
vec3 L = normalize(Light);
vec4 color = Ambient;
float Id = dot(L,N);
if (Id>0.0)
{
color += Id*gl_FrontLightProduct[0].diffuse;
vec3 H = normalize(Half);
float Is = dot(H,N); // Blinn: half vector vs. normal approximates the cosine of reflected and view vectors
if (Is>0.0) color += pow(Is,gl_FrontMaterial.shininess)*gl_FrontLightProduct[0].specular;
}
return color;
}
void main()
{
gl_FragColor = blinn() * texture2D(tex,gl_TexCoord[0].xy);
}
However, as stated above, instead of a point light I want a directional light, such that no matter where in the scene the model is, the direction of the light is the same. So I make the following changes:
Instead of:
varying vec3 Light;
and
vec3 P = vec3(gl_ModelViewMatrix * gl_Vertex);
Light = vec3(gl_LightSource[0].position) - P;
I get rid of the above lines of code and instead in my fragment shader have:
uniform vec4 lightDir;
and
vec3 L = normalize(lightDir.xyz);
I pass the direction of the light in as a uniform from outside my shader program, and this works well: the model is lit from a single direction no matter its location in the world! HOWEVER, the lighting now changes dramatically and unrealistically depending on the user's view, which makes sense, since I got rid of the "- P" in the light calculation from the original code. I've already tried adding that back (by moving lightDir into the vertex shader and passing it along again in a varying), and it just doesn't fix the problem. I'm afraid I just don't understand what is going on well enough to figure this out. I understand that the "- P" in the Light vec3 is what makes the specular/reflection work, but I don't know how to make it work for a directional light. How do I take the original code above and make it treat the light as a directional light instead of a point light?
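For reference, here is a minimal sketch of the vertex shader with those changes applied (assuming, as in the question, that lightDir is the light's direction in eye space, supplied by the application). The key point is that the half vector can no longer be read from gl_LightSource[0].halfVector, which OpenGL only maintains for lights set through glLight*; it has to be recomputed per vertex from the constant light direction and the view vector:
VERTEX (sketch):
varying vec3 Half;
varying vec3 Light;
varying vec3 Normal;
varying vec4 Ambient;
uniform vec4 lightDir; // assumption: eye-space light direction set by the application
void main()
{
// Vertex location in modelview coordinates (still needed for the view vector)
vec3 P = vec3(gl_ModelViewMatrix * gl_Vertex);
// Directional light: the same direction for every vertex
Light = normalize(lightDir.xyz);
Normal = gl_NormalMatrix * gl_Normal;
// View vector from the vertex toward the eye, and the Blinn half vector
vec3 V = normalize(-P);
Half = normalize(Light + V);
Ambient = gl_FrontMaterial.emission + gl_FrontLightProduct[0].ambient + gl_LightModel.ambient*gl_FrontMaterial.ambient;
gl_TexCoord[0] = gl_MultiTexCoord0;
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
With this, the fragment shader from the original point-light version can stay unchanged; the view dependence of the specular highlight comes back through the recomputed half vector rather than through the "- P" term.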

Related

How to get the room (or screen) coordinates from inside a Gamemaker Studio 2 shader?

I'm mostly new to writing shaders, and this might not be a great place to start, but I'm trying to make a shader to make an object "sparkle." I won't get into the specifics on how it's supposed to look, but to make it I need a value that changes with the object's position in the room (or on the screen, as the camera is fixed). I've tried v_vTexcoord, in_Position, gl_Position, and others without the intended result. If I've used them wrong or missed something, I wouldn't be surprised, but any advice is helpful.
I don't think they'll be helpful but here's my vertex shader:
//it's mostly the same as the default
// Simple passthrough vertex shader
//
attribute vec3 in_Position; // (x,y,z)
//attribute vec3 in_Normal; // (x,y,z) unused in this shader.
attribute vec4 in_Colour; // (r,g,b,a)
attribute vec2 in_TextureCoord; // (u,v)
varying vec2 v_vTexcoord;
varying vec4 v_vColour;
varying vec3 v_inpos;
void main()
{
vec4 object_space_pos = vec4( in_Position.x, in_Position.y, in_Position.z, 1.0);
gl_Position = gm_Matrices[MATRIX_WORLD_VIEW_PROJECTION] * object_space_pos;
v_vColour = in_Colour;
v_vTexcoord = in_TextureCoord;
v_inpos = /*this is the variable that i'd like to set to the x,y*/;
}
and my fragment shader:
//
//
//
varying vec2 v_vTexcoord; //x is <1, >.5
varying vec4 v_vColour;
varying vec3 v_inpos;
uniform float pixelH; //unused, but left in for accuracy
uniform float pixelW; //see above
void main() //WHEN AN ERROR HAPPENS, THE SHADER JUST WON'T DO ANYTHING AT ALL.
{
vec2 offset = vec2 (pixelW, pixelH);
gl_FragColor = v_vColour * texture2D( gm_BaseTexture, v_vTexcoord );
/* i am planning on just testing different math until something works, but i can't
vec3 test = vec3 (v_inpos.x, v_inpos.x, v_inpos.x); //find the values i need to test it
test.x = mod(test.x,.08);
test.x = test.x - 4;
test.x = abs(test.x);
while (test.x > 1.0){
test.x = test.x/10;
}
test = vec3 (test.x, test.x, test.x);
gl_FragColor.a = test.x;
*/
//everything above doesn't cause an error when uncommented, i think
//if (v_inpos.x == 0.0){gl_FragColor.rgb = vec3 (1,0,0);}
//if (v_inpos.x > 1) {gl_FragColor.rgb = vec3 (0,1,0);}
//if (v_inpos.x < 1) {gl_FragColor.rgb = vec3 (0,0,1);}
}
If this question doesn't make sense, I'll try to clarify in the comments.
If you want to get a position in world space, then you have to transform the vertex coordinate from model space to world space.
This can be done by the model (world) matrix (gm_Matrices[MATRIX_WORLD]). See game-maker-studio-2 - Matrices.
e.g.:
vec4 object_space_pos = vec4(in_Position.xyz, 1.0);
vec4 world_space_pos = gm_Matrices[MATRIX_WORLD] * object_space_pos;
Now, the Cartesian world-space position can be obtained by:
(See also Swizzling)
vec3 pos = world_space_pos.xyz;
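Putting that into the question's vertex shader, the varying could be filled in like this (a sketch using the question's own names):
attribute vec3 in_Position; // (x,y,z)
attribute vec4 in_Colour; // (r,g,b,a)
attribute vec2 in_TextureCoord; // (u,v)
varying vec2 v_vTexcoord;
varying vec4 v_vColour;
varying vec3 v_inpos;
void main()
{
vec4 object_space_pos = vec4(in_Position.xyz, 1.0);
// World-space position; interpolated per fragment through v_inpos
vec4 world_space_pos = gm_Matrices[MATRIX_WORLD] * object_space_pos;
v_inpos = world_space_pos.xyz;
gl_Position = gm_Matrices[MATRIX_WORLD_VIEW_PROJECTION] * object_space_pos;
v_vColour = in_Colour;
v_vTexcoord = in_TextureCoord;
}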

Should lighting be done in view space or world space

I've been working on a bit of deferred rendering lately for my engine, and I pretty much have it working now. However, when I move the camera a bit, I notice some subtle shading on some surfaces; it is more apparent on other surfaces (I don't have any specular light). I've been doing my lighting calculations as well as my G-buffer rendering in view space, which makes the question arise: should I be doing my lighting in world space? I'm pretty sure the variations in light are coming from the normals being in view space. If it makes a difference, I am computing the view-space position from a depth map. I've read that doing the calculations in view space is fine, but with a bit of tinkering I can't figure out what's wrong, and might just resort to doing it in world space. If anyone is curious, here is my shader code:
Normal pass:
varying vec3 normal;
void main(void)
{
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
normal = (gl_NormalMatrix * gl_Normal) * 0.5 + 0.5;
}
Lighting pass:
uniform sampler2D positionMap;
uniform sampler2D normalMap;
uniform sampler2D albedoMap;
uniform mat4 iprojMat;
uniform int light;
uniform vec3 lightcolor;
varying vec2 texcoord;
void main()
{
//get all the G-buffer information
vec3 normal = ((texture2D(normalMap,texcoord)).rgb * 2.0 - 1.0);
vec3 color = (texture2D(albedoMap,texcoord)).rgb;
if (color == vec3(0,0,0))
discard;
float z = (texture2D(positionMap,texcoord)).r;
float x = texcoord.x * 2.0 - 1.0;
float y = (1.0-texcoord.y) * 2.0 - 1.0;
vec4 proj = vec4(x,y,z,1.0);
proj = proj*iprojMat;
vec3 position = proj.xyz/proj.w;
//start making the light happen
vec3 lightVec = (gl_LightSource[light].position.xyz - position);
vec3 diffuselight = lightcolor * max(dot(normal,normalize(lightVec)), 0.0);
diffuselight = clamp(diffuselight, 0.0, 1.0);
//calculate attenuation
float distance = length(lightVec);
float att = 1.0/((distance*distance)+distance);
gl_FragColor = vec4(diffuselight,1.0);
}
Any help or hints on this would be appreciated.
P.S. I have made a direct rendering shader as well that is also done in view space, and the same kind of thing happens, but not as noticeably.
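One common cause of this kind of view-dependent shading (an educated guess from the code shown, not a confirmed diagnosis) is writing normals into the G-buffer that were encoded per vertex and never renormalized after interpolation. A sketch of a normal pass that normalizes per fragment before encoding:
Normal pass, vertex shader:
varying vec3 normal;
void main(void)
{
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
normal = gl_NormalMatrix * gl_Normal; // pass through unencoded
}
Normal pass, fragment shader:
varying vec3 normal;
void main(void)
{
vec3 n = normalize(normal); // renormalize after interpolation
gl_FragColor = vec4(n * 0.5 + 0.5, 1.0);
}
Separately, note that proj = proj*iprojMat; multiplies the vector on the left, which GLSL treats as multiplication by the transposed matrix; if iprojMat is the inverse projection uploaded untransposed (column-major as usual), the usual form would be proj = iprojMat * proj;.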

OpenGL point light moving when camera rotates

I have a point light in my scene. I thought it worked correctly until I tested it with the camera looking at the lit object from different angles, and found that the lit area moves on the mesh (in my case a simple plane). I'm using a typical ADS Phong lighting approach. I transform the light position into camera space on the client side, and then transform the vertex with the model-view matrix in the vertex shader.
My vertex shader looks like this:
#version 420
layout(location = 0) in vec4 position;
layout(location = 1) in vec2 uvs;
layout(location = 2) in vec3 normal;
uniform mat4 MVP_MATRIX;
uniform mat4 MODEL_VIEW_MATRIX;
uniform mat4 VIEW_MATRIX;
uniform mat3 NORMAL_MATRIX;
uniform vec4 DIFFUSE_COLOR;
//======= OUTS ============//
out smooth vec2 uvsOut;
out flat vec4 diffuseOut;
out vec3 Position;
out smooth vec3 Normal;
out gl_PerVertex
{
vec4 gl_Position;
};
void main()
{
uvsOut = uvs;
diffuseOut = DIFFUSE_COLOR;
Normal = normal;
Position = vec3(MODEL_VIEW_MATRIX * position);
gl_Position = MVP_MATRIX * position;
}
The fragment shader :
//==================== Uniforms ===============================
struct LightInfo{
vec4 Lp;///light position
vec3 Li;///light intensity
vec3 Lc;///light color
int Lt;///light type
};
const int MAX_LIGHTS=5;
uniform LightInfo lights[1];
// material props:
uniform vec3 KD;
uniform vec3 KA;
uniform vec3 KS;
uniform float SHININESS;
uniform int num_lights;
////ADS lighting method :
vec3 pointlightType( int lightIndex,vec3 position , vec3 normal) {
vec3 n = normalize(normal);
vec4 lMVPos = lights[0].Lp ; //
vec3 s = normalize(vec3(lMVPos.xyz) - position); //surf to light
vec3 v = normalize(vec3(-position)); //
vec3 r = normalize(- reflect(s , n));
vec3 h = normalize(v+s);
float sDotN = max( 0.0 , dot(s, n) );
vec3 diff = KD * lights[0].Lc * sDotN ;
diff = clamp(diff ,0.0 ,1.0);
vec3 spec = vec3(0,0,0);
if (sDotN > 0.0) {
spec = KS * pow( max( 0.0 ,dot(n,h) ) , SHININESS);
spec = clamp(spec ,0.0 ,1.0);
}
return lights[0].Li * ( spec+diff);
}
I have studied a lot of tutorials, but none of them gives a thorough explanation of the whole process when it comes to transform spaces. I suspect it has something to do with the camera space I transform the light and vertex position into. In my case the view matrix is created with
glm::lookAt()
which always negates the "eye" vector, so the view matrix in my shaders has a negated translation part. Is it supposed to be like that? Can someone give a detailed explanation of how this is done the right way in the programmable pipeline? My shaders are implemented based on the book "OpenGL 4.0 Shading Language Cookbook". The author also seems to use camera space, but it doesn't work right, unless that is the way it should work...
I just moved the calculations into world space. Now the point light stays on the spot. But how do I achieve the same using camera space?
I nailed down the bug, and it was a pretty stupid one, but it may be helpful to others who aren't too "math friendly". My light position in the shaders is defined with a vec3, while on the client side it is represented with a vec4. I was effectively setting the .w component of that vec4 to zero each time before transforming it with the view matrix. Doing so, I believe, meant the light position vector wasn't getting transformed correctly, and from this all the light position problems in the shader stemmed. The solution is to keep the w component of the light position vector always equal to 1.
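For reference, a short sketch of why the w component matters here (this is general GLSL behavior, not code from the question; lightWorld is a placeholder name for the light's untransformed position):
// w == 1.0 marks a point: the view matrix's translation is applied.
vec4 lightView = VIEW_MATRIX * vec4(lightWorld, 1.0);
// w == 0.0 marks a direction: translation is dropped and only rotation/scale
// apply, so a point light transformed this way appears to move with the camera.
vec4 lightDirView = VIEW_MATRIX * vec4(lightWorld, 0.0);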

Tangent Space Normal Mapping - shader sanity check

I'm getting some pretty freaky results from my tangent space normal mapping shader :). In the scene I show here, the teapot and checkered walls are being shaded with my ordinary Phong-Blinn shader (obviously the teapot's backface culling gives it a slightly ephemeral look and feel :-) ). I've tried to add normal mapping to the sphere, with psychedelic results:
The light is coming from the right (just about visible as a black blob). The normal map I'm using on the sphere looks like this:
I'm using AssImp to process input models, so it's calculating tangents and bi-normals for each vertex automatically for me.
The pixel and vertex shaders are below. I'm not too sure what's going wrong, but it wouldn't surprise me if the tangent basis matrix is somehow wrong. I assume I have to compute things in eye space and then transform the eye and light vectors into tangent space, and that this is the correct way to go about it. Note that the light position comes into the shader already in view space.
// Vertex Shader
#version 420
// Uniform Buffer Structures
// Camera.
layout (std140) uniform Camera
{
mat4 Camera_Projection;
mat4 Camera_View;
};
// Matrices per model.
layout (std140) uniform Model
{
mat4 Model_ViewModelSpace;
mat4 Model_ViewModelSpaceInverseTranspose;
};
// Spotlight.
layout (std140) uniform OmniLight
{
float Light_Intensity;
vec3 Light_Position; // Already in view space.
vec4 Light_Ambient_Colour;
vec4 Light_Diffuse_Colour;
vec4 Light_Specular_Colour;
};
// Streams (per vertex)
layout(location = 0) in vec3 attrib_Position;
layout(location = 1) in vec3 attrib_Normal;
layout(location = 2) in vec3 attrib_Tangent;
layout(location = 3) in vec3 attrib_BiNormal;
layout(location = 4) in vec2 attrib_Texture;
// Output streams (per vertex)
out vec3 attrib_Fragment_Normal;
out vec4 attrib_Fragment_Position;
out vec3 attrib_Fragment_Light;
out vec3 attrib_Fragment_Eye;
// Shared.
out vec2 varying_TextureCoord;
// Main
void main()
{
// Compute normal.
attrib_Fragment_Normal = (Model_ViewModelSpaceInverseTranspose * vec4(attrib_Normal, 0.0)).xyz;
// Compute position.
vec4 position = Model_ViewModelSpace * vec4(attrib_Position, 1.0);
// Generate matrix for tangent basis.
mat3 tangentBasis = mat3( attrib_Tangent,
attrib_BiNormal,
attrib_Normal);
// Light vector.
attrib_Fragment_Light = tangentBasis * normalize(Light_Position - position.xyz);
// Eye vector.
attrib_Fragment_Eye = tangentBasis * normalize(-position.xyz);
// Return position.
gl_Position = Camera_Projection * position;
}
... and the pixel shader looks like this:
// Pixel Shader
#version 420
// Samplers
uniform sampler2D Map_Normal;
// Global Uniforms
// Material.
layout (std140) uniform Material
{
vec4 Material_Ambient_Colour;
vec4 Material_Diffuse_Colour;
vec4 Material_Specular_Colour;
vec4 Material_Emissive_Colour;
float Material_Shininess;
float Material_Strength;
};
// Spotlight.
layout (std140) uniform OmniLight
{
float Light_Intensity;
vec3 Light_Position;
vec4 Light_Ambient_Colour;
vec4 Light_Diffuse_Colour;
vec4 Light_Specular_Colour;
};
// Input streams (per vertex)
in vec3 attrib_Fragment_Normal;
in vec3 attrib_Fragment_Position;
in vec3 attrib_Fragment_Light;
in vec3 attrib_Fragment_Eye;
// Shared.
in vec2 varying_TextureCoord;
// Result
out vec4 Out_Colour;
// Main
void main(void)
{
// Compute normals.
vec3 N = normalize(texture(Map_Normal, varying_TextureCoord).xyz * 2.0 - 1.0);
vec3 L = normalize(attrib_Fragment_Light);
vec3 V = normalize(attrib_Fragment_Eye);
vec3 R = normalize(-reflect(L, N));
// Compute products.
float NdotL = max(0.0, dot(N, L));
float RdotV = max(0.0, dot(R, V));
// Compute final colours.
vec4 ambient = Light_Ambient_Colour * Material_Ambient_Colour;
vec4 diffuse = Light_Diffuse_Colour * Material_Diffuse_Colour * NdotL;
vec4 specular = Light_Specular_Colour * Material_Specular_Colour * (pow(RdotV, Material_Shininess) * Material_Strength);
// Final colour.
Out_Colour = ambient + diffuse + specular;
}
Edit: 3D Studio render of the scene (to show the UVs are OK on the sphere):
I think your shaders are okay, but your texture coordinates on the sphere are totally off. It's as if they got distorted towards the poles along the longitude.
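For completeness, since the comment above blames the UVs rather than the shaders: a common alternative formulation of the tangent basis (a sketch against the question's vertex shader, not a claim about this particular bug) transforms the tangent-frame vectors into view space first, then uses the transpose of the resulting matrix to carry the view-space light and eye vectors into tangent space:
// TBN built in view space; the transpose maps view space into tangent space
mat3 normalMat = mat3(Model_ViewModelSpaceInverseTranspose);
vec3 T = normalize(normalMat * attrib_Tangent);
vec3 B = normalize(normalMat * attrib_BiNormal);
vec3 N = normalize(normalMat * attrib_Normal);
mat3 viewToTangent = transpose(mat3(T, B, N)); // orthonormal, so transpose == inverse
attrib_Fragment_Light = viewToTangent * normalize(Light_Position - position.xyz);
attrib_Fragment_Eye = viewToTangent * normalize(-position.xyz);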

Phong-specular lighting in glsl (lwjgl)

I'm currently trying to get specular lighting working on a sphere using GLSL and the Phong model.
This is what my fragment shader looks like:
#version 120
uniform vec4 color;
uniform vec3 sunPosition;
uniform mat4 normalMatrix;
uniform mat4 modelViewMatrix;
uniform float shininess;
// uniform vec4 lightSpecular;
// uniform vec4 materialSpecular;
varying vec3 viewSpaceNormal;
varying vec3 viewSpacePosition;
vec4 calculateSpecular(vec3 l, vec3 n, vec3 v, vec4 specularLight, vec4 materialSpecular) {
vec3 r = -l + 2.0*dot(n, l)*n; // reflect l about n
return specularLight * materialSpecular * pow(max(0,dot(r, v)), shininess);
}
void main(){
vec3 normal = normalize(viewSpaceNormal);
vec3 viewSpacePosition = (modelViewMatrix * vec4(gl_FragCoord.x, gl_FragCoord.y, gl_FragCoord.z, 1.0)).xyz;
vec4 specular = calculateSpecular(sunPosition, normal, viewSpacePosition, vec4(0.3,0.3,0.3,0.3), vec4(0.3,0.3,0.3,0.3));
gl_FragColor = color+specular;
}
The sunPosition is not moving and is set to the value (2.0f, 3.0f, -1.0f).
The problem is that the image looks nothing like it should if the specular calculations were correct.
This is how it looks like:
http://i.imgur.com/Na2C6.png
The reason I don't have any ambient/emissive/diffuse lighting in this code is that I want to get the specular light part working first.
Thankful for any help!
Edit:
@Darcy Rayner
That indeed helped a lot, though it seems something is still not right...
The current code looks like this:
Vertex Shader:
viewSpacePosition = (modelViewMatrix*gl_Vertex).xyz;
viewSpaceSunPosition = (modelViewMatrix*vec4(sunPosition,1)).xyz;
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
viewSpaceNormal = (normalMatrix * vec4(gl_Position.xyz, 0.0)).xyz;
Fragment Shader:
vec4 calculateSpecular(vec3 l, vec3 n, vec3 v, vec4 specularLight, vec4 materialSpecular) {
vec3 r = -l + 2.0*dot(n, l)*n; // reflect l about n
return specularLight * materialSpecular * pow(max(0,dot(r, v)), shininess);
}
void main(){
vec3 normal = normalize(viewSpaceNormal);
vec3 viewSpacePosition = normalize(viewSpacePosition);
vec3 viewSpaceSunPosition = normalize(viewSpaceSunPosition);
vec4 specular = calculateSpecular(viewSpaceSunPosition, normal, viewSpacePosition, vec4(0.7,0.7,0.7,1.0), vec4(0.6,0.6,0.6,1.0));
gl_FragColor = color+specular;
}
And the sphere looks like this:
-->Picture-link<--
with the sun position: sunPosition = new Vector(12.0f, 15.0f, -1.0f);
Try not using gl_FragCoord, as it is stored in screen coordinates (and I don't think transforming it by the modelViewMatrix will get it back to view coordinates). The easiest thing to do is set viewSpacePosition in your vertex shader as:
// Change gl_Vertex to whatever attribute you are using.
viewSpacePosition = (modelViewMatrix * gl_Vertex).xyz;
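A fuller sketch of that vertex shader, assuming gl_Vertex/gl_Normal inputs and the uniforms from the question (and, as a further assumption, that the model matrix is identity, so modelViewMatrix also serves to bring the world-space sun into view space; otherwise use the view matrix alone for the sun):
#version 120
uniform mat4 modelViewMatrix;
uniform mat4 normalMatrix;
uniform vec3 sunPosition;
varying vec3 viewSpaceNormal;
varying vec3 viewSpacePosition;
varying vec3 viewSpaceSunPosition;
void main(){
// w = 0.0: transform the normal as a direction (no translation)
viewSpaceNormal = (normalMatrix * vec4(gl_Normal, 0.0)).xyz;
// w = 1.0: transform the sun position as a point into view space
viewSpaceSunPosition = (modelViewMatrix * vec4(sunPosition, 1.0)).xyz;
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
viewSpacePosition = (modelViewMatrix * gl_Vertex).xyz;
}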
This should get you viewSpacePosition in view coordinates, (ie. before projection is applied). You can then go ahead and normalise viewSpacePosition in the fragment shader. Not sure if you are storing the sun vector in world coordinates, but you will probably want to transform it into view space then normalise it as well. Give it a go and see what happens, these things tend to be very error prone.