Bias matrix in shadow mapping - OpenGL

I'm looking at shadow mapping in OpenGL.
I see code like:
// This matrix transforms every coordinate (x, y, z):
// x = x * 0.5 + 0.5
// y = y * 0.5 + 0.5
// z = z * 0.5 + 0.5
// Moving from the unit cube [-1,1] to [0,1]
const GLdouble bias[16] = {
0.5, 0.0, 0.0, 0.0,
0.0, 0.5, 0.0, 0.0,
0.0, 0.0, 0.5, 0.0,
0.5, 0.5, 0.5, 1.0};
// Grab the modelview and projection matrices
glGetDoublev(GL_MODELVIEW_MATRIX, modelView);
glGetDoublev(GL_PROJECTION_MATRIX, projection);
glMatrixMode(GL_TEXTURE);
glActiveTextureARB(GL_TEXTURE7);
glLoadIdentity();
glLoadMatrixd(bias);
// Concatenating all matrices into one.
glMultMatrixd (projection);
glMultMatrixd (modelView);
// Go back to normal matrix mode
glMatrixMode(GL_MODELVIEW);
Now, if I rip out the bias matrix, the code does not work. Searching other shadow mapping code, I see the same bias matrix, without any explanation. Why do I want this bias to map x, y, z to 0.5 * x + 0.5, 0.5 * y + 0.5, and so on?
Thanks!

When you transform vertices inside the frustum with a standard modelview/projection matrix, the result you get is a vertex that, once the w-divide is done, lies in the [-1,1]x[-1,1]x[-1,1] cube. You want your texture coordinates to be in the [0,1]x[0,1] range, hence the remapping for x and y. It's the same kind of thing for z, assuming your depth range (glDepthRange) is [0,1], which is the default.
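If it helps to see it without the fixed-function matrix stack, here is a minimal sketch of the same concatenation using GLM (the lightView/lightProjection names are placeholders, not from the original code):
#include <glm/glm.hpp>

// Column-major, same layout as the GLdouble array above: maps [-1,1]
// NDC to [0,1] texture space on all three axes.
const glm::mat4 bias(
    0.5f, 0.0f, 0.0f, 0.0f,
    0.0f, 0.5f, 0.0f, 0.0f,
    0.0f, 0.0f, 0.5f, 0.0f,
    0.5f, 0.5f, 0.5f, 1.0f);

glm::mat4 lightView(1.0f);       // stands in for the modelview read back above
glm::mat4 lightProjection(1.0f); // stands in for the projection read back above

// Same order as the glMultMatrixd calls: bias * projection * modelview.
glm::mat4 depthBiasMVP = bias * lightProjection * lightView;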

Related

Shadow mapping for point light

The view matrix in OpenGL is usually built with the glm::lookAt() function, which takes 3 parameters: a position vector, a target vector, and an up vector.
So why do these lines of code, used for shadow mapping with a point light, define the last parameter (I mean the up vector) like this:
float aspect = (float)SHADOW_WIDTH/(float)SHADOW_HEIGHT;
float near = 1.0f;
float far = 25.0f;
// This is the projection matrix
glm::mat4 shadowProj = glm::perspective(glm::radians(90.0f), aspect, near, far);
// This is for view matrix
std::vector<glm::mat4> shadowTransforms;
shadowTransforms.push_back(shadowProj *
    glm::lookAt(lightPos, lightPos + glm::vec3( 1.0, 0.0, 0.0), glm::vec3(0.0,-1.0, 0.0)));
shadowTransforms.push_back(shadowProj *
    glm::lookAt(lightPos, lightPos + glm::vec3(-1.0, 0.0, 0.0), glm::vec3(0.0,-1.0, 0.0)));
shadowTransforms.push_back(shadowProj *
    glm::lookAt(lightPos, lightPos + glm::vec3( 0.0, 1.0, 0.0), glm::vec3(0.0, 0.0, 1.0)));
shadowTransforms.push_back(shadowProj *
    glm::lookAt(lightPos, lightPos + glm::vec3( 0.0,-1.0, 0.0), glm::vec3(0.0, 0.0,-1.0)));
shadowTransforms.push_back(shadowProj *
    glm::lookAt(lightPos, lightPos + glm::vec3( 0.0, 0.0, 1.0), glm::vec3(0.0,-1.0, 0.0)));
shadowTransforms.push_back(shadowProj *
    glm::lookAt(lightPos, lightPos + glm::vec3( 0.0, 0.0,-1.0), glm::vec3(0.0,-1.0, 0.0)));
Why not glm::vec3(0.0, 1.0, 0.0) for all six faces?
Why not glm::vec3(0.0, 1.0, 0.0) for all six faces?
Because that wouldn't make the slightest sense. If you define your view transform via a lookAt function, you specify the camera position and the viewing direction. What the 3D up vector actually defines is the angle of rotation around the view axis (so it really contributes only one degree of freedom).
Think of rotating a real camera into landscape or portrait orientation, or into any arbitrary rotation, to get the image you want.
Since the lookAt convention has always been that the up vector is some world-space vector which should be mapped to the upward axis in the resulting image, you will get a problem if you define the up vector in the same direction as your viewing direction (or its negated direction). It is simply impossible to map the same vector to both point into the viewing direction and upwards in the resulting image, and such a pair does not describe any orientation at all.
So in the case of 2 of the 6 faces, the math simply breaks down. In the case of the other 4 faces, you could technically use (0,1,0) for all of them. However, you usually use this sort of configuration to render to the 6 faces of a cube map texture, and for cube maps the actual orientation of each face is defined in the GL spec. So when rendering directly to a cube map, you must orient your camera accordingly; otherwise the individual faces will be rotated incorrectly and won't fit together at all.
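To make the degenerate case concrete, here is a small sketch (not from the original post): with an up vector parallel to the viewing direction, the right axis that lookAt derives via a cross product collapses to zero, so no camera basis exists.
#include <glm/glm.hpp>

glm::vec3 dir(0.0f, 1.0f, 0.0f); // looking along +Y
glm::vec3 up (0.0f, 1.0f, 0.0f); // up chosen parallel to the view direction

// lookAt internally builds the camera frame from something like this:
glm::vec3 right = glm::cross(dir, up); // (0, 0, 0) -- no valid basis

// Normalizing that zero vector yields NaNs, which is why the +Y and -Y
// faces above use (0, 0, 1) and (0, 0, -1) as their up vectors instead.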

How to rotate a 2D square in OpenGL?

I have a basic square. How can I rotate it?
let vertices = vec![
    x, y, uv1.0, uv1.1, layer, // top left
    // ---
    x + w, y, uv2.0, uv2.1, layer, // top right
    // ---
    x + w, y + h, uv3.0, uv3.1, layer, // bottom right
    // ---
    x, y + h, uv4.0, uv4.1, layer, // bottom left
];
This is my orthographic projection matrix.
let c_str_vert = CString::new("MVP".as_bytes()).unwrap();
let modelLoc = gl::GetUniformLocation(shaderProgram, c_str_vert.as_ptr());
let model = cgmath::ortho(0.0, SCR_WIDTH as f32, SCR_HEIGHT as f32, 0.0, -1.0, 1.0);
gl::UniformMatrix4fv(modelLoc, 1, gl::FALSE, model.as_ptr());
#version 430 core
layout(location = 0) in vec2 position;
// ...
uniform mat4 MVP;
void main() {
gl_Position = MVP * vec4(position.x, position.y, 0.0, 1.0);
// ...
}
I have a lot of squares but I don't want to rotate them all.
The width and height are 100px; how can I turn my square to make it look like this?
I know that I can achieve this with transformation matrices. I did it in web development while working with SVG, but I have no idea how to get this working in OpenGL.
I would like to know how I can multiply a matrix and a vector in OpenGL.
I would like to know how I can multiply a matrix and a vector in OpenGL.
You are already multiplying a matrix with a vector:
gl_Position = MVP * vec4(position.x, position.y, 0.0, 1.0);
All you have to do is multiply your MVP matrix with your rotation matrix and then with your vector:
gl_Position = MVP * rotationMatrix * vec4(position.x, position.y, 0.0, 1.0);
The rotation matrix should also be a 4x4 matrix.
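For example, a minimal sketch in C++ with GLM (nalgebra-glm mirrors this API closely); the screen size and angle are made-up values:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Assumed 800x600 window, standing in for SCR_WIDTH/SCR_HEIGHT.
glm::mat4 projection = glm::ortho(0.0f, 800.0f, 600.0f, 0.0f, -1.0f, 1.0f);

// Rotate 30 degrees around the Z axis (the axis pointing out of the
// screen in a 2D setup).
glm::mat4 rotation = glm::rotate(glm::mat4(1.0f), glm::radians(30.0f),
                                 glm::vec3(0.0f, 0.0f, 1.0f));

// Upload this as MVP for the squares that should rotate; the others
// keep the plain projection matrix.
glm::mat4 mvp = projection * rotation;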
I tried the nalgebra and nalgebra-glm libraries; the cgmath API confuses me.
let mut ortho = glm::ortho(0.0, SCR_WIDTH as f32, SCR_HEIGHT as f32, 0.0, -1.0, 1.0);
// Translate to the pivot, rotate, then translate back, so the square
// rotates around (0, 50) instead of around the origin.
ortho = glm::translate(&ortho, &glm::vec3(0.0, 50.0, 0.0));
let rad = 30.0 * (PI / 180.0); // degrees to radians
ortho = glm::rotate(&ortho, rad, &glm::vec3(0.0, 0.0, 1.0));
ortho = glm::translate(&ortho, &glm::vec3(0.0, -50.0, 0.0));
Thank you for all the answers.

How to change ortho matrix size in shader

I tried scaling the orthographic projection matrix, but it seems it doesn't scale its dimensions. I am using an orthographic projection for directional lighting, but if the main camera is above the ground I want my ortho matrix to cover a bigger range.
For example, if the camera is at zero, the ortho matrix is
glm::ortho(-10.0f, 10.0f, -10.0f, 10.0f, 0.01f, 4000.0f)
but if the camera goes to 400 in the y direction I want this matrix to be like
glm::ortho(-410.0f, 410.0f, -410.0f, 410.0f, 0.01f, 4000.0f)
However, I want to do this in the shader, with matrix multiplication or addition.
The orthographic projection matrix can be computed by:
r = right, l = left, b = bottom, t = top, n = near, f = far
x:  2/(r-l)       0             0             0
y:  0             2/(t-b)       0             0
z:  0             0             -2/(f-n)      0
t:  -(r+l)/(r-l)  -(t+b)/(t-b)  -(f+n)/(f-n)  1
In GLSL, for instance:
float l = -410.0f;
float r = 410.0f;
float b = -410.0f;
float t = 410.0f;
float n = 0.01f;
float f = 4000.0f;
mat4 projection = mat4(
vec4(2.0/(r-l), 0.0, 0.0, 0.0),
vec4(0.0, 2.0/(t-b), 0.0, 0.0),
vec4(0.0, 0.0, -2.0/(f-n), 0.0),
vec4(-(r+l)/(r-l), -(t+b)/(t-b), -(f+n)/(f-n), 1.0)
);
Furthermore, you can scale the orthographic projection matrix:
c++
glm::mat4 ortho = glm::ortho(-10.0f, 10.0f, -10.0f, 10.0f, 0.01f, 4000.0f);
float scale = 10.0f / 410.0f;
glm::mat4 scale_mat = glm::scale(glm::mat4(1.0f), glm::vec3(scale, scale, 1.0f));
glm::mat4 ortho_new = ortho * scale_mat;
glsl
float scale = 10.0 / 410.0;
mat4 scale_mat = mat4(
vec4(scale, 0.0, 0.0, 0.0),
vec4(0.0, scale, 0.0, 0.0),
vec4(0.0, 0.0, 1.0, 0.0),
vec4(0.0, 0.0, 0.0, 1.0)
);
mat4 ortho_new = ortho * scale_mat;
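To tie this back to the question: the half-extent grows from 10 to 410 as the camera rises from 0 to 400, so one possible scheme (an assumption about the intended behavior, not part of the original answer) is to derive the scale from the camera height on the CPU and rebuild the scaled matrix each frame:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

float cameraY = 400.0f;              // example camera height
float baseExtent = 10.0f;            // half-width at camera height 0
float extent = baseExtent + cameraY; // 10 -> 410 as y goes 0 -> 400
float scale = baseExtent / extent;   // 10/410 at y = 400

glm::mat4 ortho = glm::ortho(-10.0f, 10.0f, -10.0f, 10.0f, 0.01f, 4000.0f);
glm::mat4 ortho_scaled = ortho * glm::scale(glm::mat4(1.0f),
                                            glm::vec3(scale, scale, 1.0f));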

Applying perspective with GLSL matrix

I am not quite sure what is missing, but I loaded a uniform matrix into a vertex shader and when the matrix was:
GLfloat translation[4][4] = {
{1.0, 0.0, 0.0, 0.0},
{0.0, 1.0, 0.0, 0.0},
{0.0, 0.0, 1.0, 0.0},
{0.0, 0.2, 0.0, 1.0}};
or so, I seemed to be able to translate vertices just fine, depending on which values I chose to change. However, when swapping this same uniform matrix to apply projection, the image would not appear. I tried several matrices, such as:
GLfloat frustum[4][4] = {
{((2.0*frusZNear)/(frusRight - frusLeft)), 0.0, 0.0, 0.0},
{0.0, ((2.0*frusZNear)/(frusTop - frusBottom)), 0.0 , 0.0},
{((frusRight + frusLeft)/(frusRight-frusLeft)), ((frusTop + frusBottom) / (frusTop - frusBottom)), (-(frusZFar + frusZNear)/(frusZFar - frusZNear)), (-1.0)},
{0.0, 0.0, ((-2.0*frusZFar*frusZNear)/(frusZFar-frusZNear)), 0.0}
};
and values, such as:
const GLfloat frusLeft = -3.0;
const GLfloat frusRight = 3.0;
const GLfloat frusBottom = -3.0;
const GLfloat frusTop = 3.0;
const GLfloat frusZNear = 5.0;
const GLfloat frusZFar = 10.0;
The vertex shader, which seemed to apply translation just fine:
gl_Position = frustum * vPosition;
Any help appreciated.
The code for calculating the perspective/frustum matrix looks correct to me. This sets up a perspective matrix that assumes that your eye point is at the origin, and you're looking down the negative z-axis. The near and far values specify the range of distances along the negative z-axis that are within the view volume.
Therefore, with near/far values of 5.0/10.0, the range of z-values that are within your view volume will be from -5.0 to -10.0.
If your geometry is currently drawn around the origin, use a translation by something like (0.0, 0.0, -7.0) as your view matrix. This needs to be applied before the projection matrix.
You can either combine the view and projection matrices, or pass them separately into your vertex shader. With a separate view matrix, containing the translation above, your shader code could then look like this:
uniform mat4 viewMat;
...
gl_Position = frustum * viewMat * vPosition;
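On the application side, the matching setup might look like this (a sketch; program stands for your linked shader program, and frustum is the array from the question):
// View matrix: translate the scene 7 units down the negative z-axis so
// the geometry lands inside the 5..10 depth range of the frustum.
GLfloat viewMat[4][4] = {
    {1.0f, 0.0f, 0.0f, 0.0f},
    {0.0f, 1.0f, 0.0f, 0.0f},
    {0.0f, 0.0f, 1.0f, 0.0f},
    {0.0f, 0.0f, -7.0f, 1.0f}};

GLint frustumLoc = glGetUniformLocation(program, "frustum");
GLint viewLoc    = glGetUniformLocation(program, "viewMat");
glUniformMatrix4fv(frustumLoc, 1, GL_FALSE, &frustum[0][0]);
glUniformMatrix4fv(viewLoc, 1, GL_FALSE, &viewMat[0][0]);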
The first thing I see is that the z near and far planes are chosen at 5 and 10. If your vertices do not lie between these planes, you will not see anything.
The projection matrix takes everything in the pyramid-like shape (the view frustum) and translates and scales it into the unit volume, [-1,1] in every dimension.
http://www.lighthouse3d.com/tutorials/view-frustum-culling/

Rotating a light around a stationary object in openGL/glsl

So, I'm trying to rotate a light around a stationary object in the center of my scene. I'm well aware that I will need to use the rotation matrix in order to make this transformation occur. However, I'm unsure of how to do it in code. I'm new to linear algebra, so any help with explanations along the way would help a lot.
Basically, I'm working with these two right now and I'm not sure of how to make the light circulate the object.
mat4 rotation = mat4(
vec4( cos(aTimer), 0.0, sin(aTimer), 0.0),
vec4( 0, 1.0, 0.0, 0.0),
vec4(-sin(aTimer), 0.0, cos(aTimer), 0.0),
vec4( 0.0, 0.0, 0.0, 1.0)
);
and this is how my light is set up :
float lightPosition[4] = {5.0, 5.0, 1.0, 0};
glLightfv(GL_LIGHT0, GL_POSITION, lightPosition);
The aTimer in this code is a constantly incrementing float.
Even though you want the light to rotate around your object, you must not use a rotation matrix for this purpose but a translation one.
The matrix you're handling is the model matrix. It defines the orientation, the position and the scale of your object.
The matrix you have here is a rotation matrix, so the orientation of the light will change, but not its position, which is what you actually want to change.
So there are two problems to fix here:
1. Define your matrix properly. Since you want a (circular) translation, I think this is the matrix you need:
mat4 rotation = mat4(
vec4( 1.0, 0.0, 0.0, 0.0),
vec4( 0.0, 1.0, 0.0, 0.0),
vec4( 0.0, 0.0, 1.0, 0.0),
vec4( cos(aTimer), sin(aTimer), 0.0, 1.0)
);
2. Define a good position vertex for your light. Since it's a single vertex, and it's the job of the model matrix (above) to move the light, the 4D light vector should be:
float lightPosition[4] = {0.0f, 0.0f, 0.0f, 1.0f};
// In C, 0.0 is a double; you may get compiler warnings about loss of precision, so use the suffix "f".
The fourth component must be one, since it is what makes translations possible.
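A tiny illustration of that point (made-up values, using GLM): a translation matrix moves points with w = 1 but leaves vectors with w = 0 untouched.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 T = glm::translate(glm::mat4(1.0f), glm::vec3(2.0f, 0.0f, 0.0f));

glm::vec4 point     = T * glm::vec4(1.0f, 0.0f, 0.0f, 1.0f); // -> (3, 0, 0, 1)
glm::vec4 direction = T * glm::vec4(1.0f, 0.0f, 0.0f, 0.0f); // -> (1, 0, 0, 0)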
You may find additional information here
Model matrix in 3D graphics / OpenGL
However, they are using column vectors. Judging from your rotation matrix, I believe you are using row vectors, so the translation components are in the last row, not the last column, of the model matrix.
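Putting both fixes together, a minimal sketch (the orbit radius and height are assumptions for illustration): recompute the light position on a circle from aTimer each frame and pass it with w = 1. Note that GL_POSITION is transformed by whatever modelview matrix is current at the time of the call.
#include <cmath>
#include <GL/gl.h>

// Called once per frame; aTimer is the incrementing float from the question.
void updateLight(float aTimer)
{
    const float radius = 5.0f; // assumed orbit radius
    const float height = 5.0f; // assumed height above the object

    GLfloat lightPosition[4] = {
        radius * std::cos(aTimer), // x on the circle
        height,                    // constant y
        radius * std::sin(aTimer), // z on the circle
        1.0f                       // w = 1: a position, not a direction
    };
    glLightfv(GL_LIGHT0, GL_POSITION, lightPosition);
}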