In my scene, I have a few models rendered under a directional light. I currently have one of the models rotating on its own axis and translating, but the problem I'm running into is that the shadow on that model is not being projected properly. Only models that aren't rotating have shadows in the correct position. How would I go about updating the light so that it would project correctly?
For my general vertex shader:
gl_Position = MVP * vec4(Translation + (Rotate * vec4(Position, 1.0)).xyz, 1.0);
For my shadow vertex shader:
gl_Position = gWVP * vec4(Position, 1.0);
TexCoordOut = TexCoord;
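I suspect the shadow pass needs the same per-model motion; a minimal sketch of what I mean (assuming gWVP is the light's world-view-projection matrix and the same Translation and Rotate uniforms are bound during the shadow pass):

// sketch: apply the identical per-model transform in the shadow-map pass
gl_Position = gWVP * vec4(Translation + (Rotate * vec4(Position, 1.0)).xyz, 1.0);
TexCoordOut = TexCoord;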
In my constructor, I initialize the directional light as such:
m_directionalLight.Color = COLOR_DAY_CLEARBLUE; // Light color
m_directionalLight.AmbientIntensity = 0.1f;
m_directionalLight.DiffuseIntensity = 1.005f;
m_directionalLight.Direction = glm::vec3(-1.0f, 1.0, 0.0);
The resulting screenshots are as follows:
So the short of it is that I'm trying to switch from the old OpenGL glClipPlane function to gl_ClipDistance[0].
For glClipPlane I could intuitively do (pseudocode)
pushMatrix()
glRotatef(camera.xRot, 1,0,0)
glRotatef(camera.yRot + 180, 0,1,0)
glTranslate(x-camera.x, y-camera.y, z-camera.z)
glClipPlane(plane_equation)
popMatrix()
and this would translate the plane to the correct location, and face it in the right direction.
For the life of me I cannot get the plane to translate with GLSL - I've tried passing various model matrices, model/view matrices, and altering the plane equation, but no matter what I do the plane is attached to the "Camera" instead of being attached to the "object". As in, moving the camera also moves the portion of the object being clipped, which is less than ideal.
Here are some things I've tried in my vertex shader based on random google searches:
vec4 modelPos = ModelMat * vec4( Position, 1.0 );
gl_Position = ProjMat * ModelViewMat * vec4(Position, 1.0);
gl_ClipDistance[0] = dot(modelPos,uClipPlane);
or:
// vec4 modelPos = ModelMat * vec4( Position, 1.0 );
gl_Position = ProjMat * ModelViewMat * vec4(Position, 1.0);
gl_ClipDistance[0] = dot(gl_Position,uClipPlane);
or:
vec4 modelPos = ModelMat * vec4( Position, 1.0 );
gl_Position = ProjMat * ModelViewMat * vec4(Position, 1.0);
gl_ClipDistance[0] = dot(uClipPlane, ModelMat);
Is it just that I don't understand how to properly calculate the model matrix? Or is there some obvious plane-translation step that I'm missing that could solve my problem?
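For reference, my current understanding is that the position and the plane handed to the dot product must live in the same space, and that the old glClipPlane did this for me by baking the modelview matrix of the moment into the plane. A sketch of the two consistent pairings as I understand them (uClipPlaneWorld and uClipPlaneEye are hypothetical uniforms for a plane given in world space or eye space):

vec4 worldPos = ModelMat * vec4(Position, 1.0);
vec4 eyePos   = ModelViewMat * vec4(Position, 1.0);
gl_Position   = ProjMat * eyePos;

// option 1: plane defined in world space, compared against the world-space position
gl_ClipDistance[0] = dot(worldPos, uClipPlaneWorld);

// option 2: plane defined in eye space, compared against the eye-space position
// gl_ClipDistance[0] = dot(eyePos, uClipPlaneEye);

If the plane is authored in some other space, it would have to be brought into the chosen space first (on the CPU, using the inverse-transpose of the matrix between the two spaces).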
As you can tell from the title, I'm trying to create a mirror reflection while using deferred rendering and ambient occlusion. For ambient occlusion I'm specifically using the SSAO algorithm.
To create the mirror I use the basic idea of reflecting all the models to the other side of the mirror and then rendering only the parts visible through the mirror.
Using deferred rendering, I decided to do this during the creation of the gBuffer. In order to achieve correct lighting of the reflected objects, I made sure that the positions and normals of the reflected objects in the gBuffer are the same as those of their 'non-reflected' versions. That way, both the actual models and their mirror images receive the same lighting.
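For concreteness, the kind of reflection I mean is usually written as a matrix that is pre-multiplied onto each model matrix; a minimal sketch, assuming the mirror plane is n · x + d = 0 with n a unit normal (illustrative only, not necessarily identical to how my code builds it):

// sketch: reflection matrix about the plane n.x + d = 0 (n must be unit length);
// the four vec4s are the columns of the GLSL mat4 constructor
mat4 reflectionMatrix(vec3 n, float d)
{
    return mat4(vec4(1.0 - 2.0*n.x*n.x,      -2.0*n.y*n.x,      -2.0*n.z*n.x, 0.0),
                vec4(     -2.0*n.x*n.y, 1.0 - 2.0*n.y*n.y,      -2.0*n.z*n.y, 0.0),
                vec4(     -2.0*n.x*n.z,      -2.0*n.y*n.z, 1.0 - 2.0*n.z*n.z, 0.0),
                vec4(     -2.0*d*n.x,        -2.0*d*n.y,        -2.0*d*n.z,   1.0));
}
// usage: mat4 reflectedModel = reflectionMatrix(n, d) * model;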
My problem is now with the SSAO algorithm. It seems that the reflected objects are calculated to be highly occluded, and this results in black areas which you can see in the mirror:
I've noticed that these black areas appear only in places that are not in my view. Things that I can see without the mirror have no unexpected black spots on them.
Note that the data in the gBuffer are all in view space. So there must be a connection there. Maybe the random samples used during ssao or their normals are not calculated correctly.
So, this is the fragment shader for the ambient occlusion:
#version 330 core
// Assumed declarations (not shown in the original post): standard SSAO inputs
in vec2 TexCoords;
out vec4 occl;

uniform sampler2D gPosition;   // view-space positions
uniform sampler2D gNormal;     // view-space normals
uniform sampler2D texNoise;    // small random-rotation texture
uniform vec3 samples[64];      // hemisphere sample kernel
uniform mat4 projection;
uniform vec2 noiseScale;
uniform float radius;
uniform float bias;

void main()
{
    vec3 fragPos   = texture(gPosition, TexCoords).xyz;
    vec3 normal    = texture(gNormal, TexCoords).rgb;
    vec3 randomVec = texture(texNoise, TexCoords * noiseScale).xyz;

    // build a TBN basis that rotates the sample kernel around the normal
    vec3 tangent   = normalize(randomVec - normal * dot(randomVec, normal));
    vec3 bitangent = cross(normal, tangent);
    mat3 TBN       = mat3(tangent, bitangent, normal);

    float occlusion  = 0.0;
    float kernelSize = 64;
    for (int i = 0; i < kernelSize; ++i)
    {
        // get sample position
        vec3 sample = TBN * samples[i]; // from tangent space to view space
        sample = fragPos + sample * radius;

        vec4 offset = vec4(sample, 1.0);
        offset = projection * offset;            // from view space to clip space
        offset.xyz /= offset.w;                  // perspective divide
        offset.xyz = offset.xyz * 0.5 + 0.5;     // to [0, 1] texture coordinates

        float sampleDepth = texture(gPosition, offset.xy).z;
        float rangeCheck = smoothstep(0.0, 1.0, radius / abs(fragPos.z - sampleDepth));
        occlusion += (sampleDepth >= sample.z + bias ? 1.0 : 0.0) * rangeCheck;
    }

    occlusion = 1.0 - (occlusion / kernelSize);
    //FragColor = vec4(1,1,1,1);
    occl = vec4(occlusion, occlusion, occlusion, 1.0);
}
Any ideas as to why these black areas appear or suggestions to correct them?
I could just ignore the ambient occlusion in the reflection but I'm not happy with that.
Maybe, if the ambient occlusion shader used the positions and normals of the reflected objects, there would be no problem. But then I'd get into the trouble of saving more things in the buffer, so I gave up on that idea for now.
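Following that idea, here is a sketch of what I mean: if the mirror plane's view-space reflection matrix were available as a uniform (uMirrorReflect below is a hypothetical name), the SSAO shader could mirror the stored position and normal on the fly instead of needing extra gBuffer channels:

// sketch: reflect the stored view-space data across the mirror plane on the fly
vec3 reflPos    = (uMirrorReflect * vec4(fragPos, 1.0)).xyz;
vec3 reflNormal = normalize(mat3(uMirrorReflect) * normal);

The remaining difficulty would still be knowing which pixels belong to the mirror, so this is only a sketch.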
Let's say I start with a quad that just covers the entire screen space. I then put it through a projection matrix so that it appears as a trapezoid on the screen. There is a texture on this quad. As the base of the trapezoid is meant to be closer to the camera, OpenGL correctly renders the texture such that things in the texture appear bigger at the base of the trapezoid (as this is seemingly closer to the camera).
How does OpenGL know to render the texture itself in this perspective-based way rather than just stretching the sides of the texture into the trapezoid shape? Certainly it must be using the vertex z values, but how does it use those to map to textures in the fragment shader? In the fragment shader it feels like I am just working with x and y coordinates of textures with no z values being relevant.
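As far as I understand it, the answer is perspective-correct interpolation: the rasterizer does not interpolate the texture coordinate directly, it interpolates u/w and 1/w linearly in screen space (using each vertex's clip-space w) and divides per fragment, roughly

$$u = \frac{(1-t)\,u_0/w_0 + t\,u_1/w_1}{(1-t)/w_0 + t/w_1}$$

so if every vertex ends up with w = 1, the divide changes nothing and the mapping degenerates to a plain affine stretch.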
EDIT:
I tried using the information provided in the links in the comments. I am not sure if there is information I am missing related to my question specifically, or if I am doing something incorrectly.
What I am trying to do is make a pseudo-3D, SNES Mode 7-like projection (if you don't know what this is, it's OK; I explain further below what I'm trying to do).
Here's how it's coming out now.
As you can see, something funny is happening. You can clearly see that the quad is actually 2 triangles, and the black text area at the top should be straight, not crooked.
Whatever is happening, it's clear that the triangle on the left and the triangle on the right are rendering their textures differently. The z-values are not being changed. Based on the info in the links in the comments, I thought that I could simply move the top two vertices of my rectangular quad inward so that it became a trapezoid, and that this would act like a projection.
I know that a "normal" thing to do would be to use glm::lookAt for a view matrix and glm::perspective for a projection matrix, but these are a bit of a black box to me and I would rather find a more easy-to-understand way.
I may have already provided enough info for someone to answer, but just in case, here is my code:
Vertex Shader:
#version 330 core
layout (location = 0) in vec3 position;
layout (location = 2) in vec2 texCoord;

out vec2 TexCoord;

void main()
{
    // adjust vertex positions to make rectangle into trapezoid
    if (position.y < 0) {
        gl_Position = vec4(position.x * 2.0, position.y * 2.0, 0.0, 1.0);
    } else {
        gl_Position = vec4(position.x * 1.0, position.y * 2.0, 0.0, 1.0);
    }
    TexCoord = vec2(texCoord.x, 1.0 - texCoord.y);
}
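From what I've read so far, I believe the perspective look needs the w component to differ between the top and bottom vertices rather than just squeezing x; a sketch of what I think that would look like (same trapezoid as above, but now interpolated perspective-correctly, with the top edge acting as if it were twice as far away):

// sketch: express the trapezoid through w instead of scaling x directly
float w = (position.y < 0.0) ? 1.0 : 2.0;
gl_Position = vec4(position.x * 2.0,       // x/w: *2 at the bottom, *1 at the top
                   position.y * 2.0 * w,   // y/w: *2 on both edges, as before
                   0.0,
                   w);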
Fragment Shader:
#version 330 core
in vec2 TexCoord;
out vec4 color;

uniform sampler2D ourTexture1;
uniform mat3 textures_transform_mat_input;

mat3 TexCoord_to_mat3;
mat3 foo_mat3;

void main()
{
    TexCoord_to_mat3[0][0] = 1.0;
    TexCoord_to_mat3[1][1] = 1.0;
    TexCoord_to_mat3[2][2] = 1.0;
    TexCoord_to_mat3[0][2] = TexCoord.x;
    TexCoord_to_mat3[1][2] = TexCoord.y;

    foo_mat3 = TexCoord_to_mat3 * textures_transform_mat_input;
    vec2 foo = vec2(foo_mat3[0][2], foo_mat3[1][2]);
    vec2 bar = vec2(TexCoord.x, TexCoord.y);

    color = texture(ourTexture1, foo);
    vec2 center = vec2(0.5, 0.5);
}
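For reference, my understanding is that a 3x3 texture-coordinate transform is normally applied to a homogeneous (x, y, 1) vector rather than packed into a second matrix; a sketch of that pattern, assuming textures_transform_mat_input were built in the usual column-major convention with the translation in its last column:

// sketch: apply the 3x3 transform to a homogeneous texture coordinate
vec3 tc = textures_transform_mat_input * vec3(TexCoord, 1.0);
color = texture(ourTexture1, tc.xy / tc.z);   // the divide only matters for a projective transform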
Relevant code in main (note that I am using a C library, CGLM, which is like GLM; also, the "center" and "center undo" stuff is just to make sure rotation happens about the center rather than a corner):
if(!init_complete){
    glm_mat3_identity(textures_scale_mat);
    textures_scale_mat[0][0] = 1.0/ASPECT_RATIO / 3.0;
    textures_scale_mat[1][1] = 1.0/1.0 / 3.0;
}
mat3 center_mat;
center_mat[0][0] = 1.0;
center_mat[1][1] = 1.0;
center_mat[2][2] = 1.0;
center_mat[0][2] = -0.5;
center_mat[1][2] = -0.5;
mat3 center_undo_mat;
center_undo_mat[0][0] = 1.0;
center_undo_mat[1][1] = 1.0;
center_undo_mat[2][2] = 1.0;
center_undo_mat[0][2] = 0.5;
center_undo_mat[1][2] = 0.5;
glm_mat3_identity(textures_position_mat);
textures_position_mat[0][2] = player.y / 1.0;
textures_position_mat[1][2] = player.x / 1.0;
glm_mat3_identity(textures_orientation_mat);
textures_orientation_mat[0][0] = cos(player_rotation_radians);
textures_orientation_mat[0][1] = sin(player_rotation_radians);
textures_orientation_mat[1][0] = -sin(player_rotation_radians);
textures_orientation_mat[1][1] = cos(player_rotation_radians);
glm_mat3_identity(textures_transform_mat);
glm_mat3_mul(center_mat, textures_orientation_mat, textures_transform_mat);
glm_mat3_mul(textures_transform_mat, center_undo_mat, textures_transform_mat);
glm_mat3_mul(textures_transform_mat, textures_scale_mat, textures_transform_mat);
glm_mat3_mul(textures_transform_mat, textures_position_mat, textures_transform_mat);
glUniformMatrix3fv(glGetUniformLocation(shader_perspective, "textures_transform_mat_input"), 1, GL_FALSE, textures_transform_mat);
glBindTexture(GL_TEXTURE_2D, texture_mute_city);
glDrawArrays(GL_TRIANGLES, 0, 6);
I currently have 5 models displayed on screen, and here's what I'm trying to do. The following is my vertex shader for translating the models individually so that I can get them to move in different directions:
#version 330
layout (location = 0) Position;
uniform mat4 MVP;
uniform vec3 Translation;
uniform mat4 Rotate;
void main()
{
gl_Position = MVP * * Rotate * vec4(Position + Translation, 1.0); // Correct?
}
And to position/move my models individually within the render loop:
//MODEL ONE
glUniform3f(loc, 0.0f, 4.0f, 0.0f); // loc is "Translate"
glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(rotationMatrix)); // loc is "Rotate"
_model1.render();
Also, I do have a glm::mat4 rotateMatrix() that returns a rotation, but when I multiply it with the other matrices within the render loop, the whole scene (minus the camera) rotates to the set angle.
UPDATE
How would I be able to apply my rotation to the models independently of the world, on their own axes? The problem now is that the model rotates, but about 0,0,0 of the world and not its own position.
There are a couple of syntax errors in your vertex shader:
No type for the Position variable. Looks from the context like it should be a vec3.
Two * signs after MVP.
I assume those were just accidents while copying the code, and that you actually have a vertex shader that compiles.
To apply the rotation described by the Rotate matrix before the translation from the Translation vector, you should be able to simply change the order in the vertex shader:
vec4 rotatedVec = Rotate * vec4(Position, 1.0);
gl_Position = MVP * vec4(rotatedVec.xyz + Translation, 1.0);
The whole thing would look simpler if you defined Rotate as a 3x3 matrix, which is sufficient for a rotation.
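For example (a sketch, assuming the uniform is also changed to a mat3 on the application side):

// with Rotate declared as a mat3, rotation-then-translation reads directly
gl_Position = MVP * vec4(Rotate * Position + Translation, 1.0);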
I have been trying to implement deferred rendering for the past 2 weeks. I have finally come to the spot lighting pass part, using the stencil buffer and linearized depth. I hold 3 framebuffer textures: albedo, normal + depth (X, Y, Z, EyeViewLinearDepth), and a lighting texture. So I draw my light (sphere) and apply this fragment shader:
void main(void)
{
    vec2 texCoord = gl_FragCoord.xy * u_inverseScreenSize.xy;
    float linearDepth = texture2D(u_normalDepth, texCoord.st).a;

    // vector to far plane
    vec3 viewRay = vec3(v_vertex.xy * (-farClip/v_vertex.z), -farClip);
    // scale viewRay by linear depth to get view space position
    vec3 vertex = viewRay * linearDepth;

    vec3 normal = texture2D(u_normalDepth, texCoord.st).xyz*2.0 - 1.0;

    vec4 ambient  = vec4(0.0, 0.0, 0.0, 1.0);
    vec4 diffuse  = vec4(0.0, 0.0, 0.0, 1.0);
    vec4 specular = vec4(0.0, 0.0, 0.0, 1.0);

    vec3 lightDir = lightpos - vertex;
    vec3 R = normalize(reflect(lightDir, normal));
    vec3 V = normalize(vertex);

    float lambert = max(dot(normal, normalize(lightDir)), 0.0);

    if (lambert > 0.0) {
        float distance = length(lightDir);
        if (distance <= u_lightRadius) {
            //CLASSICAL LIGHTING COMPUTATION PART
        }
    }

    vec4 final_color = vec4(ambient + diffuse + specular);
    gl_FragColor = vec4(final_color.xyz, 1.0);
}
The variables you need to know: v_vertex is the eye-space position of the vertex (of the sphere), lightpos is the position/center of the light in eye space, and linearDepth is generated in the geometry pass, in eye space.
The problem is that the code fails this if check: if (distance <= u_lightRadius). The light is never computed unless I remove the distance check. I am sure that I pass these values correctly; the radius is 170.0, and the light position is only 40-50 units away from the model. There is definitely something wrong, but I can't find it somehow. I have tried many possibilities for the radius and the other variables.
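One thing I still want to rule out is a unit mismatch between the depth written in the geometry pass and the reconstruction above; as far as I understand, the two have to agree, roughly like this (a sketch, assuming the depth is stored normalized by the far plane; out_normalDepth is a hypothetical name):

// geometry pass (sketch): store eye-space depth normalized to [0, 1]
// out_normalDepth.a = -eyeSpacePosition.z / farClip;

// lighting pass: the reconstruction then lands back in eye-space units,
// the same units as lightpos, so length(lightpos - vertex) is comparable to u_lightRadius
vec3 viewRay = vec3(v_vertex.xy * (-farClip / v_vertex.z), -farClip);
vec3 vertex  = viewRay * linearDepth;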