I'm trying to implement shadow mapping in my game. The shaders you see below result in correctly drawn shadows for the game's map, but all the models walking around on the map are completely black.
Here's a screenshot:
I suspect the problem lies with the calculation of the world position. The gl_Vertex is not transformed in any way. Because the map is generated with absolute world coordinates, the transformation with the light matrix results in a correct relative position that can be used to perform the shadow mapping.
However, my 3D models are all very close to the origin. So if their coordinates are plainly transformed using the light matrix, they will most likely never be lit.
I'm wondering if this could be the case, and if so, how I could fix it.
Here's my vertex shader:
#version 120
uniform mat4x4 LightMatrixValue;
varying vec4 shadowMapPosition;
varying vec3 worldPos;
void main(void)
{
    vec4 modelPos = gl_Vertex;
    worldPos = modelPos.xyz / modelPos.w;
    vec4 lightPos = LightMatrixValue * modelPos;
    shadowMapPosition = 0.5 * (lightPos.xyzw + lightPos.wwww);
    gl_Position = ftransform();
}
And the fragment shader:
varying vec4 shadowMapPosition;
varying vec3 worldPos;
uniform sampler2D depthMap;
uniform vec4 LightPosition;
void main(void)
{
    vec4 textureColour = gl_Color;
    vec3 realShadowMapPosition = shadowMapPosition.xyz / shadowMapPosition.w;
    float depthSm = texture2D(depthMap, realShadowMapPosition.xy).r;
    if (depthSm < realShadowMapPosition.z - 0.0001)
    {
        textureColour = vec4(0, 0, 0, 1);
    }
    gl_FragColor = textureColour;
}
I'm writing this as an answer because it won't fit in a comment; I hope it solves your problem.
For rendering you have, on the one hand, your models with their model matrices to position each element, and on the other hand your view and projection matrices that transform the models into screen space.
To create your shadow map (the simplest approach, and I guess the one you have chosen), you render the scene from the point of view of the light source: you apply the view and projection matrix of the light source, which maps the x, y and z values into screen space. The x and y values give the position in the image, and the z value is used for the depth-buffer test and is what you write into your color buffer (which you later use as the shadow map).
For the final rendering you load this shadow map and the view and projection matrix of the light into your shader. To display the scene you apply the camera's view and projection matrix to the vertices; for the shadow-map lookup you apply the light's view and projection matrix to the vertex (just as you did when rendering the shadow map). When you apply the light's view and projection matrix to a vertex, you get the same mapping as in the shadow-map pass; now you only need to transform the x and y coordinates into texture coordinates, look up the stored z value, and compare it with the one you just calculated.
Transforming the model's world position into screen space (from the light's point of view)
This part is often done in the vertex or geometry shader:
shadowMapPosition = matLightViewProjection * modelWorldPos;
This is often done in the fragment shader:
shadowMapPosition = shadowMapPosition / shadowMapPosition.w;
// Subtract a small bias to prevent self-shadowing and moiré patterns
shadowMapPosition.z -= 0.0005;
// Look up the stored z value
float distanceFromLight = texture2D(depthMap, shadowMapPosition.xy).z;
Now just compare the distanceFromLight with the shadowMapPosition.z to see if the object is in shadow or not.
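Putting the pieces together, a minimal self-contained sketch of that lookup and comparison in the fragment shader (names such as shadowCoord, shadowFactor and textureColour are just illustrative; the * 0.5 + 0.5 remap assumes the light-space position is still in NDC, i.e. it was not already biased into [0, 1] in the vertex shader):

// Perspective divide, then remap from NDC [-1, 1] to [0, 1]:
// x/y address the shadow map, z is comparable to the stored depth
vec3 shadowCoord = 0.5 * (shadowMapPosition.xyz / shadowMapPosition.w) + 0.5;

// Depth stored in the shadow map at this fragment, as seen from the light
float distanceFromLight = texture2D(depthMap, shadowCoord.xy).z;

// In shadow if something closer to the light was rendered into the shadow map;
// the small bias prevents self-shadowing
float shadowFactor = (distanceFromLight < shadowCoord.z - 0.0005) ? 0.5 : 1.0;
gl_FragColor = vec4(textureColour.rgb * shadowFactor, textureColour.a);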
So in your second pass you do the same shadow-mapping steps again, except that instead of writing the calculated depth you compare it with the one you stored in the previous pass.
Related
I have very basic OpenGL knowledge, but I'm trying to replicate the shading effect that MeshLab's visualizer has.
If you load up a mesh in MeshLab, you'll realize that if a face is facing the camera, it is completely lit and as you rotate the model, the lighting changes as the face that faces the camera changes. I loaded a simple unit cube with 12 faces in MeshLab and captured these screenshots to make my point clear:
Model loaded up (notice how the face is completely gray):
Model slightly rotated (notice how the faces are a bit darker):
More rotation (notice how all faces are now darker):
Off the top of my head, I think the way it works is that it is somehow assigning colors per face in the shader. If the angle between the face normal and camera is zero, then the face is fully lit (according to the color of the face), otherwise it is lit proportional to the dot product between the normal vector and the camera vector.
I already have the code to draw meshes with shaders/VBO's. I can even assign per-vertex colors. However, I don't know how I can achieve a similar effect. As far as I know, fragment shaders work on vertices. A quick search revealed questions like this. But I got confused when the answers talked about duplicate vertices.
If it makes any difference, in my application I load *.ply files which contain vertex position, triangle indices and per-vertex colors.
Results after the answer by @DietrichEpp
I created the duplicate vertices array and used the following shaders to achieve the desired lighting effect. As can be seen in the posted screenshot, the similarity is uncanny :)
The vertex shader:
#version 330 core
uniform mat4 projection_matrix;
uniform mat4 model_matrix;
uniform mat4 view_matrix;
in vec3 in_position; // The vertex position
in vec3 in_normal; // The computed vertex normal
in vec4 in_color; // The vertex color
out vec4 color; // The vertex color (pass-through)
void main(void)
{
    gl_Position = projection_matrix * view_matrix * model_matrix * vec4(in_position, 1);

    // Compute the vertex's normal in camera space
    vec3 normal_cameraspace = normalize((view_matrix * model_matrix * vec4(in_normal, 0)).xyz);

    // Vector from the vertex (in camera space) to the camera (which is at the origin)
    vec3 cameraVector = normalize(vec3(0, 0, 0) - (view_matrix * model_matrix * vec4(in_position, 1)).xyz);

    // Compute the angle between the two vectors
    float cosTheta = clamp(dot(normal_cameraspace, cameraVector), 0, 1);

    // The coefficient will create a nice looking shining effect.
    // Also, we shouldn't modify the alpha channel value.
    color = vec4(0.3 * in_color.rgb + cosTheta * in_color.rgb, in_color.a);
}
The fragment shader:
#version 330 core
in vec4 color;
out vec4 out_frag_color;
void main(void)
{
    out_frag_color = color;
}
The uncanny results with the unit cube:
It looks like the effect is a simple lighting effect with per-face normals. There are a few different ways you can achieve per-face normals:
You can create a VBO with a normal attribute, and then duplicate vertex position data for faces which don't have the same normal. For example, a cube would have 24 vertexes instead of 8, because the "duplicates" would have different normals.
You can use a geometry shader which calculates a per-face normal (a sketch of this approach follows below).
You can use dFdx() and dFdy() in the fragment shader to approximate the normal.
I recommend the first approach, because it is simple. You can simply calculate the normals ahead of time in your program, and then use them to calculate the face colors in your vertex shader.
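For reference, if you do want the geometry-shader route (the second option), a minimal sketch could look roughly like this; the varyings v_pos_eye and v_color are assumed to come from your own vertex shader and are not part of any code above:

#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

in vec3 v_pos_eye[];   // eye-space position, passed from the vertex shader (assumed)
in vec4 v_color[];     // per-vertex color, passed from the vertex shader (assumed)

out vec4 g_color;
out vec3 g_normal;     // the same normal for all three vertices -> flat shading

void main(void)
{
    // Face normal from two edge vectors of the triangle, in eye space
    vec3 n = normalize(cross(v_pos_eye[1] - v_pos_eye[0],
                             v_pos_eye[2] - v_pos_eye[0]));
    for (int i = 0; i < 3; i++)
    {
        gl_Position = gl_in[i].gl_Position;   // clip-space position from the vertex shader
        g_color = v_color[i];
        g_normal = n;
        EmitVertex();
    }
    EndPrimitive();
}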
This is simple flat shading. Instead of using per-vertex normals, you can evaluate a per-face normal with this GLSL snippet:
// Screen-space derivatives of the fragment position span the triangle's plane
vec3 x = dFdx(FragPos);
vec3 y = dFdy(FragPos);
// Their cross product is perpendicular to the face
vec3 normal = cross(x, y);
vec3 norm = normalize(normal);
then apply some diffuse lighting using norm:
// diffuse light 1
vec3 lightDir1 = normalize(lightPos1 - FragPos);
float diff1 = max(dot(norm, lightDir1), 0.0);
vec3 diffuse = diff1 * diffColor1;
I have a circle in 3D space (red in the image) with normals (white).
This circle is drawn as a line strip.
The problem is: I need to draw only those pixels whose normals are directed towards the camera (angle between normal and camera vector is < 90), using discard in the fragment shader code. Like backface culling, but for lines.
The red part of the circle is what I need to draw and the black part is what I need to discard in the fragment shader.
A good example is the 3DS Max rotation gizmo, where the back sides of the lines are hidden:
So, in the fragment shader I have:
if (condition)
    discard;
Please help me come up with this condition. It would be good to consider both orthographic and perspective cameras.
Well, you already described your condition:
(angle between normal and camera vector is < 90)
You have to forward your normals to the fragment shader (don't forget to re-normalize them there, since the interpolation changes their length). You also need the viewing vector in the same space as your normals, so you might transform the normal to eye space, or work in world space, or even transform the view direction/camera location into object space. Since the condition angle(N,V) >= 90 degrees is the same as cos(angle(N,V)) <= 0 (assuming normalized vectors), you can simply use the dot product:
if (dot(N, V) <= 0)
    discard;
UPDATE:
As you pointed out in the comments, you have the "classical" GL matrices available. So it makes sense to do this transformation in eye space. In the vertex shader, you put
in vec4 vertex; // object space position
in vec3 normal; // object space normal direction
out vec3 normal_eyespace;
out vec3 vertex_eyespace;
uniform mat3 normalMatrix;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
void main()
{
    normal_eyespace = normalize(normalMatrix * normal);
    vec4 v = modelViewMatrix * vertex;
    vertex_eyespace = v.xyz;
    gl_Position = projectionMatrix * v;
}
and in the fragment shader, you can simply do
in vec3 normal_eyespace;
in vec3 vertex_eyespace;
void main()
{
    if (dot(normalize(normal_eyespace), normalize(-vertex_eyespace)) <= 0)
        discard;
    // ...
}
Note: this code assumes modern GLSL with in/out instead of attribute/varying qualifiers. I also assume no builtin attributes. But that code should be easily adaptable to older GL.
I implemented a fairly simple shadow map. I have a simple obj imported plane as ground and a bunch of trees.
I have a weird shadow on the plane which I think is the plane's self shadow. I am not sure what code to post. If it would help please tell me and I'll do so then.
First image, camera view of the scene. The weird textured lowpoly sphere is just for reference of the light position.
Second image, the depth texture stored in the framebuffer. I calculated the shadow coordinates from the light's perspective with it. Since I can't post more than 2 links, I'll leave this one out.
Third image, depth texture with a better view of the plane projecting the shadow from a different light position above the whole scene.
Later edit: the second picture, http://i41.tinypic.com/23h3wqf.jpg (the depth texture of the first picture).
I tried some fixes: adding glCullFace(GL_BACK) before drawing the ground in the first pass removes it from the depth texture, but the artifact still appears in the final render (like in the first picture, on the back part of the ground). I also tried adding glCullFace in the second pass, and the shadow still shows on the ground; I tried all combinations of front- and back-face culling. Could it be caused by the values in the orthographic projection?
Shadow fragment shader:
#version 330 core
layout(location = 0) out vec3 color;
in vec2 texcoord;
in vec4 ShadowCoord;
uniform sampler2D textura1;
uniform sampler2D textura2;
uniform sampler2D textura_depth;
uniform int has_alpha;
void main(){
    vec3 tex1 = texture(textura1, texcoord).xyz;
    vec3 tex2 = texture(textura2, texcoord).xyz;

    if (has_alpha > 0.5)
        if ((tex2.r < 0.1) && (tex2.g < 0.1) && (tex2.b < 0.1))
            discard;

    // Z value of the depth texture from pass 1
    float hartaDepth = texture(textura_depth, ShadowCoord.xy / ShadowCoord.w).z;

    float shadowValue = 1.0;
    if (hartaDepth < ShadowCoord.z - 0.005)
        shadowValue = 0.5;

    color = shadowValue * tex1;
}
VC++ 2010, OpenGL, GLSL, SDL
I am moving over to shaders and have run into a problem that originally occurred while working with the fixed-function OpenGL pipeline: the position of the light seems to point in whatever direction my camera faces. In the fixed-function pipeline it was just the specular highlight, which was fixable with:
glLightModelf(GL_LIGHT_MODEL_LOCAL_VIEWER, 1.0f);
Here are the two shaders:
Vertex
varying vec3 lightDir,normal;
void main()
{
    normal = normalize(gl_NormalMatrix * gl_Normal);
    lightDir = normalize(vec3(gl_LightSource[0].position));
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = ftransform();
}
Fragment
varying vec3 lightDir,normal;
uniform sampler2D tex;
void main()
{
    vec3 ct, cf;
    vec4 texel;
    float intensity, at, af;

    intensity = max(dot(lightDir, normalize(normal)), 0.0);

    cf = intensity * gl_FrontMaterial.diffuse.rgb + gl_FrontMaterial.ambient.rgb;
    af = gl_FrontMaterial.diffuse.a;

    texel = texture2D(tex, gl_TexCoord[0].st);
    ct = texel.rgb;
    at = texel.a;

    gl_FragColor = vec4(ct * cf, at * af);
}
Any help would be much appreciated!
The question is: What coordinate system (reference frame) do you want the lights to be in? Probably "the world".
OpenGL's fixed-function pipeline, however, has no notion of world coordinates, because it uses a modelview matrix, which transforms directly from model coordinates to eye (camera) coordinates. In order to have “fixed” lights, you could do one of these:
The classic OpenGL approach is to, every frame, set up the modelview matrix to be the view transform only (that is, be the coordinate system you want to specify your light positions in) and then use glLight to set the position (which is specified to apply the modelview matrix to the input).
Since you are using shaders, you could also have separate model and view matrices and have your shader apply both (rather than using ftransform) to vertices, but only the view matrix to lights. However, this means more per-vertex matrix operations and is probably not an especially good idea unless you are looking for clarity rather than performance.
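A minimal sketch of the second option, assuming you upload the light position in world space yourself (the uniforms modelMatrix, viewMatrix and lightPosWorld are hypothetical names you would define, not existing OpenGL state):

#version 120
uniform mat4 modelMatrix;     // object -> world (hypothetical uniform)
uniform mat4 viewMatrix;      // world -> eye (hypothetical uniform)
uniform vec3 lightPosWorld;   // light position, fixed in world space (hypothetical uniform)

varying vec3 lightDir, normal;

void main()
{
    mat4 modelView = viewMatrix * modelMatrix;
    vec4 eyePos = modelView * gl_Vertex;

    // Fine for rigid transforms; non-uniform scaling would need a proper normal matrix
    normal = normalize(mat3(modelView) * gl_Normal);

    // Apply only the view matrix to the light, so it stays fixed in the world
    vec3 lightEye = (viewMatrix * vec4(lightPosWorld, 1.0)).xyz;
    lightDir = normalize(lightEye - eyePos.xyz);

    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = gl_ProjectionMatrix * eyePos;
}

This mirrors what the fixed-function glLight path does internally: light positions are transformed by the modelview matrix that is current when you specify them.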
I want to adjust the colors depending on which xyz position they are in the world.
I tried this in my fragment shader:
varying vec4 verpos;
void main(){
    vec4 c;
    c.x = verpos.x;
    c.y = verpos.y;
    c.z = verpos.z;
    c.w = 1.0;
    gl_FragColor = c;
}
but it seems that the colors change depending on my camera angle/position. How do I make the coordinates independent of my camera position/angle?
Heres my vertex shader:
varying vec4 verpos;
void main(){
    gl_Position = ftransform();
    verpos = gl_ModelViewMatrix * gl_Vertex;
}
Edit 2: changed the title; I want world coordinates, not screen coordinates!
Edit 3: added my full code
In vertex shader you have gl_Vertex (or something else if you don't use fixed pipeline) which is the position of a vertex in model coordinates. Multiply the model matrix by gl_Vertex and you'll get the vertex position in world coordinates. Assign this to a varying variable, and then read its value in fragment shader and you'll get the position of the fragment in world coordinates.
Now the problem is that you don't necessarily have a separate model matrix if you use OpenGL's default modelview matrix, which is a combination of both the model and view matrices. I usually solve this by having two separate matrices instead of just one modelview matrix:
model matrix (maps model coordinates to world coordinates), and
view matrix (maps world coordinates to camera coordinates).
So just pass two different matrices to your vertex shader separately. You can do this by defining
uniform mat4 view_matrix;
uniform mat4 model_matrix;
at the beginning of your vertex shader. And then, instead of ftransform(), say:
gl_Position = gl_ProjectionMatrix * view_matrix * model_matrix * gl_Vertex;
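With that in place, the world position you want in the fragment shader comes from the model matrix alone; reusing your verpos varying, a minimal sketch of the vertex shader would be:

uniform mat4 view_matrix;
uniform mat4 model_matrix;
varying vec4 verpos;

void main()
{
    // World-space position: model matrix only, no camera transform
    verpos = model_matrix * gl_Vertex;
    gl_Position = gl_ProjectionMatrix * view_matrix * model_matrix * gl_Vertex;
}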
In the main program you must write values to both of these new matrices. First, to get the view matrix, do the camera transformations with glLoadIdentity(), glTranslate(), glRotate() or gluLookAt(), or whatever you prefer, as you normally would, but then call glGetFloatv(GL_MODELVIEW_MATRIX, &array); to read the matrix data into an array. Secondly, in a similar way, to get the model matrix, call glLoadIdentity(); again, do the object transformations with glTranslate(), glRotate(), glScale() etc., and finally call glGetFloatv(GL_MODELVIEW_MATRIX, &array); to get the matrix data out of OpenGL so you can send it to your vertex shader. Note especially that you need to call glLoadIdentity() before beginning to transform the object. Normally you would first transform the camera and then transform the object, which results in one matrix that serves both the view and model functions; because you are using separate matrices, you need to reset the matrix after the camera transformations with glLoadIdentity().
gl_FragCoord contains the window (pixel) coordinates, not world coordinates.
Or you could just divide the z coordinate by the w coordinate, which essentially undoes the perspective projection, giving you your original world coordinates.
i.e.
depth = gl_FragCoord.z / gl_FragCoord.w;
Of course, this will only work for non-clipped coordinates.
But who cares about clipped ones anyway?
You need to pass the World/Model matrix as a uniform to the vertex shader, and then multiply it by the vertex position and send it as a varying to the fragment shader:
/*Vertex Shader*/
layout (location = 0) in vec3 Position;

uniform mat4 World;
uniform mat4 WVP;

// World-space position passed to the fragment shader
out vec4 FragPos;

void main()
{
    FragPos = World * vec4(Position, 1.0);
    gl_Position = WVP * vec4(Position, 1.0);
}

/*Fragment Shader*/
layout (location = 0) out vec4 Color;
...
in vec4 FragPos;

void main()
{
    Color = FragPos;
}
The easiest way is to pass the world-position down from the vertex shader via a varying variable.
However, if you really must reconstruct it from gl_FragCoord, the only way is to invert all the steps that led to the gl_FragCoord value. Consult the OpenGL specs if you really have to do this, because a deep understanding of the transformations is necessary.
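For completeness, a rough sketch of that inversion in the fragment shader; invViewProjection and viewportSize are hypothetical uniforms you would have to supply yourself, and the default depth range [0, 1] is assumed:

uniform mat4 invViewProjection;   // inverse(projection * view), uploaded by the application (assumed)
uniform vec2 viewportSize;        // viewport width/height in pixels (assumed)

vec3 reconstructWorldPos()
{
    // Window coordinates back to normalized device coordinates in [-1, 1]
    vec3 ndc = vec3(gl_FragCoord.xy / viewportSize, gl_FragCoord.z) * 2.0 - 1.0;

    // Invert the projection and view transforms, then undo the perspective divide
    vec4 world = invViewProjection * vec4(ndc, 1.0);
    return world.xyz / world.w;
}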