The view matrix in OpenGL is usually built with the glm::lookAt() function, which takes 3 parameters: a position vector, a target vector and an up vector.
So why do these lines of code, used for point-light shadow mapping, define the last parameter (I mean the up vector) like this:
float aspect = (float)SHADOW_WIDTH/(float)SHADOW_HEIGHT;
float near = 1.0f;
float far = 25.0f;
// This is the projection matrix
glm::mat4 shadowProj = glm::perspective(glm::radians(90.0f), aspect, near, far);
// This is for view matrix
std::vector<glm::mat4> shadowTransforms;
shadowTransforms.push_back(shadowProj *
glm::lookAt(lightPos, lightPos + glm::vec3( 1.0, 0.0, 0.0), glm::vec3(0.0,-1.0, 0.0)));
shadowTransforms.push_back(shadowProj *
glm::lookAt(lightPos, lightPos + glm::vec3(-1.0, 0.0, 0.0), glm::vec3(0.0,-1.0, 0.0)));
shadowTransforms.push_back(shadowProj *
glm::lookAt(lightPos, lightPos + glm::vec3( 0.0, 1.0, 0.0), glm::vec3(0.0, 0.0, 1.0)));
shadowTransforms.push_back(shadowProj *
glm::lookAt(lightPos, lightPos + glm::vec3( 0.0,-1.0, 0.0), glm::vec3(0.0, 0.0,-1.0)));
shadowTransforms.push_back(shadowProj *
glm::lookAt(lightPos, lightPos + glm::vec3( 0.0, 0.0, 1.0), glm::vec3(0.0,-1.0, 0.0)));
shadowTransforms.push_back(shadowProj *
glm::lookAt(lightPos, lightPos + glm::vec3( 0.0, 0.0,-1.0), glm::vec3(0.0,-1.0, 0.0)));
Why not glm::vec3(0.0, 1.0, 0.0) for all six faces?
Why not glm::vec3(0.0, 1.0, 0.0) for all six faces?
Because that wouldn't make the slightest sense. If you define your view transform via a lookAt function, you specify the camera position and the viewing direction. What the 3D up vector actually defines is the angle of rotation around the view axis (so it really contributes only one degree of freedom).
Think of rotating a real camera into landscape or portrait orientation, or into any arbitrary rotation, to get the image you want.
Since the lookAt convention has always been that the up vector is some world-space vector which should be mapped to the upward axis of the resulting image, you get a problem if you point the up vector in the same direction as your viewing direction (or its negation). It is simply impossible to map one and the same vector both onto the viewing direction and onto the upward axis of the image, and such a pair does not describe any orientation at all.
So in the case of 2 of the 6 faces, the math would simply break down. In the case of the other 4 faces, you could technically use (0,1,0) for all of them. However, you usually use this sort of configuration to render to the 6 faces of a cube map texture, and for cube maps the orientation of each face is defined in the GL spec. So when rendering directly into a cube map, you must orient your camera accordingly, otherwise the individual faces of the cube map will simply be rotated wrongly and won't fit together at all.
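To see the degenerate case for yourself, here is a minimal sketch (not part of the original post) that asks glm::lookAt for the +Y face while passing (0, 1, 0) as the up vector; lookAt builds its right vector from cross(viewDir, up), which collapses to the zero vector when the two are parallel:
#include <cstdio>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
int main() {
    glm::vec3 lightPos(0.0f);
    // Up vector parallel to the viewing direction: cross(viewDir, up) is (0, 0, 0),
    // so normalizing it fills the matrix with NaNs.
    glm::mat4 brokenFace = glm::lookAt(lightPos, lightPos + glm::vec3(0.0, 1.0, 0.0),
                                       glm::vec3(0.0, 1.0, 0.0));
    std::printf("%f\n", brokenFace[0][0]); // prints nan on typical builds
    return 0;
}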
I am currently trying to apply a normal map in my shader but the shading in the final image is way off.
Surfaces that should be shaded are completely bright, surfaces that should be bright are completely shaded, and the top surface, which should have the same shade regardless of rotation about the y-axis, alternates between bright and dark.
After some trial and error I found out that I can get the correct shading by changing this
vec3 normal_viewspace = normal_matrix * normalize((normal_color.xyz * 2.0) - 1.0);
to this
vec3 normal_viewspace = normal_matrix * normalize(vec3(0.0, 0.0, 1.0));
Diffuse and specular lighting are now working correctly,
but obviously without the normal map applied. I honestly have no idea where exactly the error is originating. I am quite new to shader programming and was following this tutorial. Below are the shader sources, with all irrelevant parts cut.
Vertex shader:
#version 450
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 normal;
layout(location = 2) in vec3 tangent;
layout(location = 3) in vec3 bitangent;
layout(location = 4) in vec2 texture_coordinates;
layout(location = 0) out mat3 normal_matrix;
layout(location = 3) out vec2 texture_coordinates_out;
layout(location = 4) out vec4 vertex_position_viewspace;
layout(set = 0, binding = 0) uniform Matrices {
mat4 world;
mat4 view;
mat4 projection;
} uniforms;
void main() {
mat4 worldview = uniforms.view * uniforms.world;
normal_matrix = mat3(worldview) * mat3(normalize(tangent), normalize(bitangent), normalize(normal));
vec4 vertex_position_worldspace = uniforms.world * vec4(position, 1.0);
vertex_position_viewspace = uniforms.view * vertex_position_worldspace;
gl_Position = uniforms.projection * vertex_position_viewspace;
texture_coordinates_out = texture_coordinates;
}
Fragment shader:
#version 450
layout(location = 0) in mat3 normal_matrix;
layout(location = 3) in vec2 texture_coordinates;
layout(location = 4) in vec4 vertex_position_viewspace;
layout(location = 0) out vec4 fragment_color;
layout(set = 0, binding = 0) uniform Matrices {
mat4 world;
mat4 view;
mat4 projection;
} uniforms;
// ...
layout (set = 0, binding = 2) uniform sampler2D normal_map;
// ...
const vec4 LIGHT = vec4(1.25, 3.0, 3.0, 1.0);
void main() {
// ...
vec4 normal_color = texture(normal_map, texture_coordinates);
// ...
vec3 normal_viewspace = normal_matrix * normalize((normal_color.xyz * 2.0) - 1.0);
vec4 light_position_viewspace = uniforms.view * LIGHT;
vec3 light_direction_viewspace = normalize((light_position_viewspace - vertex_position_viewspace).xyz);
vec3 view_direction_viewspace = normalize(vertex_position_viewspace.xyz);
vec3 light_color_intensity = vec3(1.0, 1.0, 1.0) * 7.0;
float distance_from_light = distance(vertex_position_viewspace, light_position_viewspace);
float diffuse_strength = clamp(dot(normal_viewspace, light_direction_viewspace), 0.0, 1.0);
vec3 diffuse_light = (light_color_intensity * diffuse_strength) / (distance_from_light * distance_from_light);
// ...
fragment_color.rgb = (diffuse_color.rgb * diffuse_light);
fragment_color.a = diffuse_color.a;
}
There are some things I am a bit uncertain about. For example I noticed that in the tutorial the light is called lightPosition_worldSpace, making me think I need to multiply the light by the world matrix first, but doing so only makes my light rotate with the cube and still doesn't fix my lighting issue.
Any help or ideas on what i could be doing wrong would be greatly appreciated.
I'm the one who created the tutorial site you're referencing.
If possible, could you share a link to your normal map as well? The fact that you can fix the rendering issue by changing the line where the fragment's normal is calculated from the normal map, from this
vec3 normal_viewspace = normal_matrix * normalize((normal_color.xyz * 2.0) - 1.0);
to one where you hardcode a value like this
vec3 normal_viewspace = normal_matrix * normalize(vec3(0.0, 0.0, 1.0));
seems to indicate an issue with the normal map itself.
One way to verify this is to set your entire normal map image to the RGB value (128, 128, 255), which decodes to (roughly) the vec3(0.0, 0.0, 1.0) value you were using in your changed line. If the object then renders correctly, exactly as it did with the hardcoded value, that means you were using a bad normal map.
The normal map is just a texture/image that stores the directions of the normals of your object in "tangent space" (think of it as if you flattened your entire object out into a 2D surface, and the normals for each point of that surface were plotted on the map). For each pixel, the red channel represents the X-axis, the green channel represents the Y-axis, and the blue channel represents the Z-axis.
As colors, the values in a normal map range from (0, 0, 128) to (255, 255, 255) (for images where each color channel uses 8 bits/1 byte), which in GLSL corresponds to the range (0.0, 0.0, 0.5) to (1.0, 1.0, 1.0). Let's just work with the GLSL range for the sake of simplicity.
Looking at the actual possible values for normals, their range is (-1.0, -1.0, 0.0) to (1.0, 1.0, 1.0), because a normal direction can point either forwards or backwards along the X-axis or the Y-axis.
So when we have a color value of (0.0, 0.0, 0.5), we're actually talking about a normal direction vector (-1.0, -1.0, 0.0). Similarly, a color value of (0.5, 0.5, 0.5) means the normal direction vector (0.0, 0.0, 0.0), and a color value of (1.0, 1.0, 1.0) means a normal value of (1.0, 1.0, 1.0).
So the goal now becomes transforming the value from the normal map from the color value range ((0.0, 0.0, 0.5) to (1.0, 1.0, 1.0)) to the actual range for normals ((-1.0, -1.0, 0.0) to (1.0, 1.0, 1.0)).
If you multiply a value from a normal map by 2.0, you change the possible range of the value from (0.0, 0.0, 0.5) - (1.0, 1.0, 1.0) to (0.0, 0.0, 1.0) - (2.0, 2.0, 2.0). And then if you subtract 1.0 from the result, the range now changes from (0.0, 0.0, 1.0) - (2.0, 2.0, 2.0) to (-1.0, -1.0, 0.0) - (1.0, 1.0, 1.0), which is exactly the possible range of the normals of an object.
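To make that arithmetic concrete, here is the same decode step written out with glm (just a sketch; in your shader it is the (normal_color.xyz * 2.0) - 1.0 expression you already have):
#include <glm/glm.hpp>
glm::vec3 decodeNormal(glm::vec3 normal_color) {
    // e.g. the "flat" color (0.5, 0.5, 1.0) decodes to (0.0, 0.0, 1.0): a tangent-space
    // normal pointing straight out of the surface.
    return glm::normalize(normal_color * 2.0f - 1.0f);
}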
So you have to make sure that when you're creating your normal map, the range of the RGB color values is between (0, 0, 128) - (255, 255, 255).
Side note: As for why the range of the blue channel (Z-axis) in the normal map can only be between 128 and 255: a value less than 128 means a negative value on the Z-axis, which means the normal of the fragment is pointing into the surface, not out of it. Since a normal map is supposed to represent the values of the normals when the surface of the object is flattened out and facing towards you, a normal with a negative Z-axis value would mean that the surface at that point is actually facing away from you, which doesn't really make sense, hence why negative values are not allowed.
You could still try having the blue channel be a value less than 128 and see what interesting results pop out.
Also, with regard to the doubt you mentioned at the end and in the comments:
What does lightPosition_worldSpace mean?
lightPosition_worldSpace represents the coordinate at which the light is present relative to the center of the world (relative to the entire world you're rendering), hence the world-space suffix. You just need to multiply this position with your view matrix if you wish to know the position of the light in view-space (relative to your camera).
If you have a coordinate that is relative to the center of the object you're rendering, then you should multiply it with your model matrix (uniforms.world) to transform that coordinate from one that's relative to the center of your model to one that's relative to the center of the world. Since lightPosition_worldSpace is the position of the light already relative to the center of the world, you don't need to multiply it with the model matrix. This is why you saw the light moving with the cube when you tried to do so (the light was moved because its coordinates were treated as being placed relative to the cube itself).
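Here is a small sketch of the two options side by side (glm on the CPU, purely for illustration; world and view stand for the same matrices as your shader uniforms):
glm::vec4 lightPosition_worldSpace(1.25f, 3.0f, 3.0f, 1.0f);                   // the LIGHT constant from your fragment shader
glm::vec4 light_position_viewspace = view * lightPosition_worldSpace;          // correct: world space -> view space
glm::vec4 light_following_the_cube = view * world * lightPosition_worldSpace;  // applying the model matrix too makes the light move with the cube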
Your comment regarding confusion with the line vec3 view_direction_viewspace = normalize(vertex_position_viewspace.xyz - vec3(0.0, 0.0, 0.0));
This is bad on my part for not representing what vec3(0.0, 0.0, 0.0) is with a variable. This is supposed to represent the position of the camera in view-space. Since in view-space the camera is at the center, its coordinate is vec3(0.0, 0.0, 0.0).
As for why I'm doing
vec3 view_direction_viewspace = normalize(vertex_position_viewspace.xyz - vec3(0.0, 0.0, 0.0));
when
vec3 view_direction_viewspace = normalize(vertex_position_viewspace.xyz);
is simpler and basically the same thing: I had written it that way to make it more obvious what was happening (which it appears I failed to do).
Typically, when you have two coordinates and you want to find the direction from a source coordinate to a destination coordinate, you subtract the two coordinates to get their direction + magnitude. By normalizing that difference, you then get just the directional component, with the magnitude part removed. So the equation for finding a direction from a source coordinate to a destination coordinate becomes:
direction = normalize(destination coordinate - source coordinate)
view_direction_viewspace is supposed to represent the direction from the camera towards the fragment. To calculate this, we can just subtract the position of the camera (vec3(0.0, 0.0, 0.0)) from the position of the fragment (vertex_position_viewspace.xyz) and then run normalize(...) on the difference to get that result.
I've generally tried to maintain this consistency where when I'm calculating a direction using two coordinates I always have a destination and source coordinate explicitly written out, hence why you see the line vec3 view_direction_viewspace = normalize(vertex_position_viewspace.xyz - vec3(0.0, 0.0, 0.0)); in the fragment shader code.
I've updated the code by setting vec3(0.0, 0.0, 0.0) to a variable cameraPosition_viewSpace and using that to better clarify this intention.
Feel free to reach out through GitHub issues if you want to ask anything else or help improve the tutorial.
I haven't updated this post in a while because I have completely shifted away from using normal mapping (for now), but I still wanted to post an answer in case someone else runs into the same problem. I still can't be 100% sure, but I am fairly certain that this behavior was caused by the library I was using to load the normal map. Special thanks to sabarnac, who has been a huge help to me in solving this.
I am not quite sure what is missing, but I loaded a uniform matrix into a vertex shader and when the matrix was:
GLfloat translation[4][4] = {
{1.0, 0.0, 0.0, 0.0},
{0.0, 1.0, 0.0, 0.0},
{0.0, 0.0, 1.0, 0.0},
{0.0, 0.2, 0.0, 1.0}};
or so, I seemed to be able to translate vertices just fine, depending on which values I chose to change. However, when swapping this same uniform matrix to apply projection, the image would not appear. I tried several matrices, such as:
GLfloat frustum[4][4] = {
{((2.0*frusZNear)/(frusRight - frusLeft)), 0.0, 0.0, 0.0},
{0.0, ((2.0*frusZNear)/(frusTop - frusBottom)), 0.0 , 0.0},
{((frusRight + frusLeft)/(frusRight-frusLeft)), ((frusTop + frusBottom) / (frusTop - frusBottom)), (-(frusZFar + frusZNear)/(frusZFar - frusZNear)), (-1.0)},
{0.0, 0.0, ((-2.0*frusZFar*frusZNear)/(frusZFar-frusZNear)), 0.0}
};
and values, such as:
const GLfloat frusLeft = -3.0;
const GLfloat frusRight = 3.0;
const GLfloat frusBottom = -3.0;
const GLfloat frusTop = 3.0;
const GLfloat frusZNear = 5.0;
const GLfloat frusZFar = 10.0;
The vertex shader, which seemed to apply translation just fine:
gl_Position = frustum * vPosition;
Any help appreciated.
The code for calculating the perspective/frustum matrix looks correct to me. This sets up a perspective matrix that assumes that your eye point is at the origin, and you're looking down the negative z-axis. The near and far values specify the range of distances along the negative z-axis that are within the view volume.
Therefore, with near/far values of 5.0/10.0, the range of z-values that are within your view volume will be from -5.0 to -10.0.
If your geometry is currently drawn around the origin, use a translation by something like (0.0, 0.0, -7.0) as your view matrix. This needs to be applied before the projection matrix.
You can either combine the view and projection matrices, or pass them separately into your vertex shader. With a separate view matrix, containing the translation above, your shader code could then look like this:
uniform mat4 viewMat;
...
gl_Position = frustum * viewMat * vPosition;
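On the application side, such a view matrix could be built and uploaded like this (a sketch; program and the uniform name viewMat are assumptions, and the array uses the same layout as your matrices above so it can be passed with transpose set to GL_FALSE):
GLfloat viewMat[4][4] = {
{1.0, 0.0, 0.0, 0.0},
{0.0, 1.0, 0.0, 0.0},
{0.0, 0.0, 1.0, 0.0},
{0.0, 0.0, -7.0, 1.0}}; // translation by (0.0, 0.0, -7.0), same layout as your translation matrix
GLint viewMatLoc = glGetUniformLocation(program, "viewMat");
glUniformMatrix4fv(viewMatLoc, 1, GL_FALSE, &viewMat[0][0]);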
The first thing I see is that the Z near and far planes are chosen at 5 and 10. If your vertices do not lie between these planes you will not see anything.
The projection matrix will take everything inside that pyramid-like shape and translate and scale it into the unit volume from -1 to 1 in every dimension.
http://www.lighthouse3d.com/tutorials/view-frustum-culling/
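As a quick numeric check (a sketch using glm, which builds the same frustum matrix as the values in the question): a vertex left at the origin falls outside the clip volume, while one moved to z = -7, between the near and far planes, falls inside it.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
glm::mat4 frustum = glm::frustum(-3.0f, 3.0f, -3.0f, 3.0f, 5.0f, 10.0f);
glm::vec4 atOrigin   = frustum * glm::vec4(0.0f, 0.0f,  0.0f, 1.0f); // (0, 0, -20, 0): w is 0, clipped away
glm::vec4 translated = frustum * glm::vec4(0.0f, 0.0f, -7.0f, 1.0f); // (0, 0, 1, 7): z/w is about 0.14, inside [-1, 1]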
First of all, I'm sorry if the title is misleading but I'm not quite sure how to describe the issue, if it is an issue at all.
I'm very new to OpenGL, and I have just started to scratch the surface of GLSL following this tutorial.
The main part of the rendering function looks like this
GLfloat ambientLight[] = {0.5f, 0.5f, 0.5f, 1.0f};
glLightModelfv(GL_LIGHT_MODEL_AMBIENT, ambientLight);
//Add directed light
GLfloat lightColor1[] = {0.5f, 0.5f, 0.5f, 1.0f}; //Color (0.5, 0.2, 0.2)
//Coming from the direction (-1, 0.5, 0.5)
GLfloat lightPos1[] = { 40.0 * cos((float) elapsed_time / 500.0) , 40.0 * sin((float) elapsed_time / 500.0), -20.0f, 0.0f};
glLightfv(GL_LIGHT0, GL_DIFFUSE, lightColor1);
glLightfv(GL_LIGHT0, GL_POSITION, lightPos1);
glPushMatrix();
glTranslatef(0,0,-50);
glColor3f(1.0, 1.0, 1.0);
glRotatef( (float) elapsed_time / 100.0, 0.0,1.0,0.0 );
glUseProgram( shaderProg );
glutSolidTeapot( 10 );
glPopMatrix();
Where "shaderProg" is a shader program consisting of a vertex shader
varying vec3 normal;
void main(void)
{
normal = gl_Normal;
gl_Position = ftransform();
}
And a fragment shader
uniform vec3 lightDir;
varying vec3 normal;
void main() {
float intensity;
vec4 color;
intensity = dot(vec3(gl_LightSource[0].position), normalize(normal));
if (intensity > 0.95)
color = vec4(1.0,0.5,0.5,1.0);
else if (intensity > 0.5)
color = vec4(0.6,0.3,0.3,1.0);
else if (intensity > 0.25)
color = vec4(0.4,0.2,0.2,1.0);
else
color = vec4(0.2,0.1,0.1,1.0);
gl_FragColor = color;
}
I have two issues.
First is that according to the tutorial the uniform lightDir should be usable, yet I only get results with vec3(gl_LightSource[0].position). Is there any difference between the two?
The other problem is that the setup rotates the light around the teapot differently when using the shader program. Without the shader, the light orbits the teapot in the camera's XY plane. Yet, if the shader is used, the light moves in the camera's XZ plane. Have I made a mistake? Or have I forgotten some translation in the shaders?
Thanks in advance : )
First is that according to the tutorial the uniform lightDir should be usable, yet I only get results with vec3(gl_LightSource[0].position). Is there any difference between the two?
That tutorial uses lightDir as a uniform variable. You have to set that yourself via some glUniform call. Whether it is the same or not will depend on what exactly you set as the light position here. The lightDir as it is used here is the vector from the surface point you want to shade to the light source. The tutorial uses a directional light, so the light direction is the same everywhere in the scene and does not really depend on the position of the vertex/fragment. You can do the same with fixed-function lighting by setting the w component of the light position to 0. If you don't do that, the results will be very different.
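For example, the fixed-function equivalent of the tutorial's directional light would be something like this (a sketch; the direction (-1, 0.5, 0.5) is the one from the comment in your code, and w = 0.0 is what marks it as a direction rather than a position):
GLfloat lightDir1[] = { -1.0f, 0.5f, 0.5f, 0.0f };
glLightfv(GL_LIGHT0, GL_POSITION, lightDir1); // w == 0: treated as a direction, not a position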
A side note: the GLSL code in that tutorial unfortunately relies on lots of deprecated features. If you learn GLSL, I would really recommend that you learn the modern GL core profile.
lightDir is not a pre-defined uniform. The typical definition for a light direction vector is just a normalized vector to the light position in your shader, which you can easily calculate yourself by normalizing the position vector:
vec3 lightDir = normalize(gl_LightSource[0].position.xyz);
You could also pass it into the shader as a uniform you define yourself. For this approach, you would define the uniform in your fragment shader:
uniform vec3 lightDir;
and then get the uniform location with the glGetUniformLocation() call, and set a value with the glUniform3f() call. So once after linking the shader, you have this:
GLint lightDirLoc = glGetUniformLocation(shaderProg, "lightDir");
and then every time you want to change the light direction to (vx, vy, vz):
glUniform3f(lightDirLoc, vx, vy, vz);
For the second part of your question: The reason you get different behavior for the light position with the fixed pipeline compared to what you get with your own shader is that the fixed pipeline applies the current modelview matrix to the specified light position, which is not done in your shader.
As a number of others already suggested: If you learn OpenGL now, I strongly recommend that you skip the legacy features, which includes the fixed function light source parameters. In this case, you can simply use uniform variables you define yourself, as I already illustrated as an option for the lightDir variable above.
So, I'm trying to rotate a light around a stationary object in the center of my scene. I'm well aware that I will need to use the rotation matrix in order to make this transformation occur. However, I'm unsure of how to do it in code. I'm new to linear algebra, so any help with explanations along the way would help a lot.
Basically, I'm working with these two right now and I'm not sure of how to make the light circulate the object.
mat4 rotation = mat4(
vec4( cos(aTimer), 0.0, sin(aTimer), 0.0),
vec4( 0, 1.0, 0.0, 0.0),
vec4(-sin(aTimer), 0.0, cos(aTimer), 0.0),
vec4( 0.0, 0.0, 0.0, 1.0)
);
and this is how my light is set up:
float lightPosition[4] = {5.0, 5.0, 1.0, 0};
glLightfv(GL_LIGHT0, GL_POSITION, lightPosition);
The aTimer in this code is a constantly incrementing float.
Even though you want the light to rotate around your object, you must not use a rotation matrix for this purpose but a translation one.
The matrix you're handling is the model matrix. It defines the orientation, the position and the scale of your object.
The matrix you have here is a rotation matrix, so the orientation of the light will change, but not its position, which is what you actually want to change.
So there are two problems to fix here:
1. Define your matrix properly. Since you want a (circular) translation, I think this is the matrix you need:
mat4 rotation = mat4(
vec4( 1.0, 0.0, 0.0, 0.0),
vec4( 0.0, 1.0, 0.0, 0.0),
vec4( 0.0, 0.0, 1.0, 0.0),
vec4( cos(aTimer), sin(aTimer), 0.0, 1.0)
);
2. Define a good position vertex for your light. Since it's a single vertex and it's the job of the model matrix (above) to move the light, the 4D light vector should be:
float lightPosition[4] = {0.0f, 0.0f, 0.0f, 1.0f};
//In C, 0.0 is a double, you may have warnings at compilation for loss of precision, so use the suffix "f"
The fourth component must be one, since that is what makes translations possible.
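With the fixed-function glLightfv call from the question, one way to apply the same idea (a sketch, not part of the original answer) is to do the multiplication on the CPU: the matrix above applied to the origin position (0, 0, 0, 1) is simply (cos(aTimer), sin(aTimer), 0, 1), so the light circles the object in the XY plane.
float lightPosition[4] = {cosf(aTimer), sinf(aTimer), 0.0f, 1.0f}; // the matrix above applied to (0, 0, 0, 1); needs <math.h>
glLightfv(GL_LIGHT0, GL_POSITION, lightPosition);
Scale the cos/sin terms by a radius if you want the light to orbit further away from the object.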
You may find additional information here
Model matrix in 3D graphics / OpenGL
However, they are using column vectors. Judging from your rotation matrix, I believe you use row vectors, so the translation components are in the last row, not the last column, of the model matrix.
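Here is a small sketch of the difference (glm on the CPU, not part of the original answer; aTimer is the timer from the question). The mat4 above keeps its translation in the last constructor argument, which is the last column, so it only moves the point when the point is multiplied as a column vector:
#include <glm/glm.hpp>
#include <cmath>
glm::mat4 m(
glm::vec4(1.0f, 0.0f, 0.0f, 0.0f),
glm::vec4(0.0f, 1.0f, 0.0f, 0.0f),
glm::vec4(0.0f, 0.0f, 1.0f, 0.0f),
glm::vec4(cosf(aTimer), sinf(aTimer), 0.0f, 1.0f)); // translation in the last column
glm::vec4 p(0.0f, 0.0f, 0.0f, 1.0f);
glm::vec4 columnConvention = m * p; // (cos(aTimer), sin(aTimer), 0, 1): the point is moved
glm::vec4 rowConvention    = p * m; // (0, 0, 0, 1): with row vectors the translation would have to sit in the last row
So if your code multiplies with row vectors (v * M), put the translation components into the last row of the matrix instead.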