Shadow mapping for point light - opengl

The view matrix in OpenGL is usually built with the glm::lookAt() function, which takes 3 parameters: a position vector, a target vector, and an up vector.
So why do these lines of code, used for shadow mapping with a point light, define the last parameter (the up vector) like this:
float aspect = (float)SHADOW_WIDTH/(float)SHADOW_HEIGHT;
float near = 1.0f;
float far = 25.0f;
// This is the projection matrix
glm::mat4 shadowProj = glm::perspective(glm::radians(90.0f), aspect, near, far);
// This is for view matrix
std::vector<glm::mat4> shadowTransforms;
shadowTransforms.push_back(shadowProj *
glm::lookAt(lightPos, lightPos + glm::vec3( 1.0, 0.0, 0.0), glm::vec3(0.0,-1.0, 0.0)));
shadowTransforms.push_back(shadowProj *
glm::lookAt(lightPos, lightPos + glm::vec3(-1.0, 0.0, 0.0), glm::vec3(0.0,-1.0, 0.0)));
shadowTransforms.push_back(shadowProj *
glm::lookAt(lightPos, lightPos + glm::vec3( 0.0, 1.0, 0.0), glm::vec3(0.0, 0.0, 1.0)));
shadowTransforms.push_back(shadowProj *
glm::lookAt(lightPos, lightPos + glm::vec3( 0.0,-1.0, 0.0), glm::vec3(0.0, 0.0,-1.0)));
shadowTransforms.push_back(shadowProj *
glm::lookAt(lightPos, lightPos + glm::vec3( 0.0, 0.0, 1.0), glm::vec3(0.0,-1.0, 0.0)));
shadowTransforms.push_back(shadowProj *
glm::lookAt(lightPos, lightPos + glm::vec3( 0.0, 0.0,-1.0), glm::vec3(0.0,-1.0, 0.0)));
Why not glm::vec3(0.0, 1.0, 0.0) for all six faces?

Why not glm::vec3(0.0, 1.0, 0.0) for all six faces?
Because that wouldn't make the slightest sense. If you define your view transform via a lookAt function, you specify the camera position and the viewing direction. What the 3D up vector actually defines is the angle of rotation around the view axis (so it really contributes only one degree of freedom).
Think of rotating a real camera into landscape or portrait orientation, or into any arbitrary rotation, to get the image you want.
Since the lookAt convention has always been that the up vector is some world-space vector which should be mapped to the upward axis in the resulting image, you run into a problem if the up vector points in the same direction as the viewing direction (or its negation). It is simply impossible to map the same vector so that it points both along the viewing direction and upward in the resulting image, so such a pair does not describe any orientation at all.
So for 2 of the 6 faces, the math would simply break down. For the other 4 faces, you could technically use (0,1,0). However, this sort of configuration is usually used to render to the 6 faces of a cube map texture, and for cube maps the actual orientation of each face is defined in the GL spec. So when rendering directly to a cube map, you must orient your camera accordingly; otherwise the individual faces of the cube map will simply be rotated wrongly and won't fit together at all.

Related

Opengl Camera rotation around X

Working on an OpenGL project in Visual Studio, I'm trying to rotate the camera around the X and Y axes.
That's the math I should use.
I'm having trouble because I'm using glm::lookAt for the camera position and it takes glm::vec3 as arguments.
Can someone explain how I can implement this in OpenGL?
PS: I can't use quaternions.
The lookAt function should take three inputs:
vec3 cameraPosition
vec3 cameraLookAt
vec3 cameraUp
From my past experience, if you want to move the camera, first find the transform matrix of the movement, then apply that matrix to these three vectors; the three resulting vec3s are your new inputs to the lookAt function.
glm::vec3 newCameraPosition = glm::vec3(movementMat4 * glm::vec4(cameraPosition, 1.0));
// Same for cameraLookAt; for cameraUp use w = 0.0, since it is a direction
Another approach could be finding the inverse of the movement you want the camera to make and applying it to the whole scene, since moving the camera is equivalent to applying the inverse movement to the objects while keeping the camera still :)
Below, the camera is rotated around the y-axis (it orbits in the XZ plane):
const float radius = 10.0f;
float camX = sin(time) * radius;
float camZ = cos(time) * radius;
glm::vec3 cameraPos = glm::vec3(camX, 0.0, camZ);
glm::vec3 objectPos = glm::vec3(0.0, 0.0, 0.0);
glm::vec3 up = glm::vec3(0.0, 1.0, 0.0);
glm::mat4 view = glm::lookAt(cameraPos, objectPos, up);
Check out https://learnopengl.com/, it's a great site to learn!

How to rotate a 2D square in OpenGL?

I have a basic square. How can I rotate it?
let vertices = vec![
x, y, uv1.0, uv1.1, layer, // top left
// ---
x + w, y, uv2.0, uv2.1, layer, // top right
// ---
x + w, y + h, uv3.0, uv3.1, layer, // bottom right
// ---
x, y + h, uv4.0, uv4.1, layer, // bottom left
];
This is my orthographic projection matrix.
let c_str_vert = CString::new("MVP".as_bytes()).unwrap();
let modelLoc = gl::GetUniformLocation(shaderProgram, c_str_vert.as_ptr());
let model = cgmath::ortho(0.0, SCR_WIDTH as f32, SCR_HEIGHT as f32, 0.0, -1.0, 1.0);
gl::UniformMatrix4fv(modelLoc, 1, gl::FALSE, model.as_ptr());
#version 430 core
layout(location = 0) in vec2 position;
// ...
uniform mat4 MVP;
void main() {
gl_Position = MVP * vec4(position.x, position.y, 0.0, 1.0);
// ...
}
I have a lot of squares but I don't want to rotate them all.
The width and height are 100px; how can I turn my square to make it look like this?
I know that I can achieve this by using transformation matrices. I did it in web development while working on SVG, but I have no idea how I can get this working in OpenGL.
I would like to know how I can multiply a matrix and a vector in OpenGL.

I would like to know how I can multiply a matrix and a vector in OpenGL.
You are already multiplying a matrix with a vector:
gl_Position = MVP * vec4(position.x, position.y, 0.0, 1.0);
All you have to do is multiply your MVP matrix with your rotation matrix and then with your vector:
gl_Position = MVP * rotationMatrix * vec4(position.x, position.y, 0.0, 1.0);
The rotation matrix should also be a 4x4 matrix.
I tried the nalgebra and nalgebra-glm libraries; the cgmath API confuses me.
let mut ortho = glm::ortho(0.0, SCR_WIDTH as f32, SCR_HEIGHT as f32, 0.0, -1.0, 1.0);
ortho = glm::translate(&ortho, &glm::vec3(0.0, 50.0, 0.0));
let rad = 30.0 * (PI / 180.0); // degrees to radians
ortho = glm::rotate(&ortho, rad, &glm::vec3(0.0, 0.0, 1.0));
ortho = glm::translate(&ortho, &glm::vec3(0.0, -50.0, 0.0) );
Thank you for all the answers

Applying perspective with GLSL matrix

I am not quite sure what is missing, but I loaded a uniform matrix into a vertex shader and when the matrix was:
GLfloat translation[4][4] = {
{1.0, 0.0, 0.0, 0.0},
{0.0, 1.0, 0.0, 0.0},
{0.0, 0.0, 1.0, 0.0},
{0.0, 0.2, 0.0, 1.0}};
or so, I seemed to be able to translate vertices just fine, depending on which values I chose to change. However, when swapping this same uniform matrix to apply projection, the image would not appear. I tried several matrices, such as:
GLfloat frustum[4][4] = {
{((2.0*frusZNear)/(frusRight - frusLeft)), 0.0, 0.0, 0.0},
{0.0, ((2.0*frusZNear)/(frusTop - frusBottom)), 0.0 , 0.0},
{((frusRight + frusLeft)/(frusRight-frusLeft)), ((frusTop + frusBottom) / (frusTop - frusBottom)), (-(frusZFar + frusZNear)/(frusZFar - frusZNear)), (-1.0)},
{0.0, 0.0, ((-2.0*frusZFar*frusZNear)/(frusZFar-frusZNear)), 0.0}
};
and values, such as:
const GLfloat frusLeft = -3.0;
const GLfloat frusRight = 3.0;
const GLfloat frusBottom = -3.0;
const GLfloat frusTop = 3.0;
const GLfloat frusZNear = 5.0;
const GLfloat frusZFar = 10.0;
The vertex shader, which seemed to apply translation just fine:
gl_Position = frustum * vPosition;
Any help appreciated.
The code for calculating the perspective/frustum matrix looks correct to me. This sets up a perspective matrix that assumes that your eye point is at the origin, and you're looking down the negative z-axis. The near and far values specify the range of distances along the negative z-axis that are within the view volume.
Therefore, with near/far values of 5.0/10.0, the range of z-values that are within your view volume will be from -5.0 to -10.0.
If your geometry is currently drawn around the origin, use a translation by something like (0.0, 0.0, -7.0) as your view matrix. This needs to be applied before the projection matrix.
You can either combine the view and projection matrices, or pass them separately into your vertex shader. With a separate view matrix, containing the translation above, your shader code could then look like this:
uniform mat4 viewMat;
...
gl_Position = frustum * viewMat * vPosition;
First thing I see is that the Z near and far planes are chosen at 5 and 10. If your vertices do not lie between these planes, you will not see anything.
The projection matrix takes everything in the pyramid-like view volume and translates and scales it into the unit cube, -1 to 1 in every dimension.
http://www.lighthouse3d.com/tutorials/view-frustum-culling/

Rotating a light around a stationary object in openGL/glsl

So, I'm trying to rotate a light around a stationary object in the center of my scene. I'm well aware that I will need to use the rotation matrix in order to make this transformation occur. However, I'm unsure of how to do it in code. I'm new to linear algebra, so any help with explanations along the way would help a lot.
Basically, I'm working with these two right now and I'm not sure of how to make the light circulate the object.
mat4 rotation = mat4(
vec4( cos(aTimer), 0.0, sin(aTimer), 0.0),
vec4( 0, 1.0, 0.0, 0.0),
vec4(-sin(aTimer), 0.0, cos(aTimer), 0.0),
vec4( 0.0, 0.0, 0.0, 1.0)
);
and this is how my light is set up :
float lightPosition[4] = {5.0f, 5.0f, 1.0f, 0.0f};
glLightfv(GL_LIGHT0, GL_POSITION, lightPosition);
The aTimer in this code is a constantly incrementing float.
Even though you want the light to rotate around your object, you must not use a rotation matrix for this purpose but a translation one.
The matrix you're handling is the model matrix. It defines the orientation, the position, and the scale of your object.
The matrix you have here is a rotation matrix, so the orientation of the light will change, but not its position, which is what you actually want to change.
So there are two problems to fix here:
1. Define your matrix properly. Since you want a (circular) translation, this is the matrix you need:
mat4 rotation = mat4(
vec4( 1.0, 0.0, 0.0, 0.0),
vec4( 0.0, 1.0, 0.0, 0.0),
vec4( 0.0, 0.0, 1.0, 0.0),
vec4( cos(aTimer), sin(aTimer), 0.0, 1.0)
);
2. Define a good position vertex for your light. Since it's a single vertex and it's the job of the model matrix (above) to move the light, the light's 4D vector should be:
float lightPosition[4] = {0.0f, 0.0f, 0.0f, 1.0f};
// In C, 0.0 is a double; you may get compiler warnings about loss of precision, so use the "f" suffix
The fourth component must be one, since it is what makes translations possible.
You may find additional information here:
Model matrix in 3D graphics / OpenGL
However, they are using column vectors. Judging from your rotation matrix, I believe you are using row vectors, so the translation components go in the last row, not the last column, of the model matrix.

Transform light worldspace coordinates to eyespace coordinates

I am attempting to model a spotlight in a scene for an introduction to graphics class. The assignment specifies that I must do everything in modernGL, therefore I can't use anything from legacy.
I have been reading the OpenGL 4.0 Shading Language Cookbook for assistance with this, but I can't figure out how to get the eye-space coordinates for my lights. I know the position and direction that I want the lights to have in world space, and I have attempted to transform them to eye space with the following:
//mv = inverse(mv);
vec3 light = vec3(vec4(0.0, 15.0, 0.0, 1.0) * mv);
vec3 direction = vec3(vec4(0.0, -1.0, 0.0, 1.0) * mv);
Where mv is my modelview matrix generated by
glm::mat4 modelview_matrix = glm::lookAt(glm::vec3(window.camera.x, window.camera.y, window.camera.z), glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
without any transforms on it. As you can see, I have attempted to multiply by both the inverse of the modelview matrix and the modelview matrix itself to get the eye-space coordinates. I am positive that neither of these has worked, as the specular highlight on my object follows me as I move to the opposite side of the object (i.e., if I am looking at the side of the object facing away from the light, I shouldn't see a specular highlight).
I'm fairly certain that you want:
vec3 light = vec3(mv * vec4(0.0, 15.0, 0.0, 1.0));
vec3 direction = vec3(mv * vec4(0.0, -1.0, 0.0, 0.0));
// ^ Note the 0 in the w coordinate
Where you have not inverted your mv matrix. Yes, the order of multiplication does matter. The reason you leave a 0 in the w coordinate is that you don't want direction vectors to be translated, only scaled or rotated.