Projection view matrix calculation for directional light shadow mapping - C++

In order to calculate the projection view matrix for a directional light, I take the vertices of my active camera's frustum, multiply them by the rotation of my directional light, and use these rotated vertices to calculate the extents of an orthographic projection matrix for the directional light.
Then I create the view matrix using the center of the light's frustum bounding box as the eye position, the light's direction as the forward vector, and the Y axis as the up vector.
I calculate the camera frustum vertices by transforming the 8 corners of the NDC cube (a box of size 2 centered at the origin) by the inverse of the camera's projection view matrix, as shown in the second code block below.
Everything works fine and the directional light's projection view matrix is correct, but I've encountered a big issue with this method.
Let's say that my camera is facing forward (0, 0, -1), positioned at the origin, with a zNear value of 1 and a zFar of 100. Only objects inside my camera frustum are rendered into the shadow map, so only objects with a Z position between -1 and -100.
The problem is: if my light has a direction which makes the light come from behind the camera, and there is an object, for example, with a Z position of 10 (so behind the camera but still in front of the light) that is tall enough to cast a shadow on the part of the scene visible from my camera, this object is not rendered into the shadow map because it's not included in my light frustum, and its shadow is wrongly missing.
In order to solve this problem I was thinking of using the scene bounding box to calculate the light's projection view matrix, but doing this would be pointless because the area rendered into the shadow map could be so large that numerous artifacts would become visible (shadow acne, etc.), so I discarded this solution.
How could I overcome this problem?
I've read this post, under the section 'Calculating a tight projection', to create my projection view matrix and, for clarity, this is my code:
Frustum* cameraFrustum = activeCamera->GetFrustum();
Vertex3f direction = GetDirection(); // z axis
Vertex3f perpVec1 = (direction ^ Vertex3f(0.0f, 0.0f, 1.0f)).Normalized(); // y axis
Vertex3f perpVec2 = (direction ^ perpVec1).Normalized(); // x axis
Matrix rotationMatrix;
rotationMatrix.m[0] = perpVec2.x; rotationMatrix.m[1] = perpVec1.x; rotationMatrix.m[2] = direction.x;
rotationMatrix.m[4] = perpVec2.y; rotationMatrix.m[5] = perpVec1.y; rotationMatrix.m[6] = direction.y;
rotationMatrix.m[8] = perpVec2.z; rotationMatrix.m[9] = perpVec1.z; rotationMatrix.m[10] = direction.z;
Vertex3f frustumVertices[8];
cameraFrustum->GetFrustumVertices(frustumVertices);
for (AInt i = 0; i < 8; i++)
    frustumVertices[i] = rotationMatrix * frustumVertices[i];
Vertex3f minV = frustumVertices[0], maxV = frustumVertices[0];
for (AInt i = 1; i < 8; i++)
{
    minV.x = min(minV.x, frustumVertices[i].x);
    minV.y = min(minV.y, frustumVertices[i].y);
    minV.z = min(minV.z, frustumVertices[i].z);
    maxV.x = max(maxV.x, frustumVertices[i].x);
    maxV.y = max(maxV.y, frustumVertices[i].y);
    maxV.z = max(maxV.z, frustumVertices[i].z);
}
Vertex3f extends = maxV - minV;
extends *= 0.5f;
Matrix viewMatrix = Matrix::MakeLookAt(cameraFrustum->GetBoundingBoxCenter(), direction, perpVec1);
Matrix projectionMatrix = Matrix::MakeOrtho(-extends.x, extends.x, -extends.y, extends.y, -extends.z, extends.z);
Matrix projectionViewMatrix = projectionMatrix * viewMatrix;
SceneObject::SetMatrix("ViewMatrix", viewMatrix);
SceneObject::SetMatrix("ProjectionMatrix", projectionMatrix);
SceneObject::SetMatrix("ProjectionViewMatrix", projectionViewMatrix);
And this is how I calculate the frustum and its bounding box:
Matrix inverseProjectionViewMatrix = projectionViewMatrix.Inversed();
_frustumVertices[0] = inverseProjectionViewMatrix * Vertex3f(-1.0f, 1.0f, -1.0f); // near top-left
_frustumVertices[1] = inverseProjectionViewMatrix * Vertex3f( 1.0f, 1.0f, -1.0f); // near top-right
_frustumVertices[2] = inverseProjectionViewMatrix * Vertex3f(-1.0f, -1.0f, -1.0f); // near bottom-left
_frustumVertices[3] = inverseProjectionViewMatrix * Vertex3f( 1.0f, -1.0f, -1.0f); // near bottom-right
_frustumVertices[4] = inverseProjectionViewMatrix * Vertex3f(-1.0f, 1.0f, 1.0f); // far top-left
_frustumVertices[5] = inverseProjectionViewMatrix * Vertex3f( 1.0f, 1.0f, 1.0f); // far top-right
_frustumVertices[6] = inverseProjectionViewMatrix * Vertex3f(-1.0f, -1.0f, 1.0f); // far bottom-left
_frustumVertices[7] = inverseProjectionViewMatrix * Vertex3f( 1.0f, -1.0f, 1.0f); // far bottom-right
_boundingBoxMin = _frustumVertices[0];
_boundingBoxMax = _frustumVertices[0];
for (AInt i = 1; i < 8; i++)
{
    _boundingBoxMin.x = min(_boundingBoxMin.x, _frustumVertices[i].x);
    _boundingBoxMin.y = min(_boundingBoxMin.y, _frustumVertices[i].y);
    _boundingBoxMin.z = min(_boundingBoxMin.z, _frustumVertices[i].z);
    _boundingBoxMax.x = max(_boundingBoxMax.x, _frustumVertices[i].x);
    _boundingBoxMax.y = max(_boundingBoxMax.y, _frustumVertices[i].y);
    _boundingBoxMax.z = max(_boundingBoxMax.z, _frustumVertices[i].z);
}
_boundingBoxCenter = Vertex3f((_boundingBoxMin.x + _boundingBoxMax.x) / 2.0f, (_boundingBoxMin.y + _boundingBoxMax.y) / 2.0f, (_boundingBoxMin.z + _boundingBoxMax.z) / 2.0f);

Related

glm::lookAt gives unexpected result for 2D axis

Basically I have a local space where y points down and x points right, like
+----------> +x
|
|
v
+y
and the center is at (320, 240), so the upper-left corner is (0,0). Some windowing systems use this convention.
So I have this code
auto const proj = glm::ortho(0.0f, 640.0f, 0.0f, 480.0f);
auto const view = glm::lookAt(
glm::vec3(320.0f, 240.0f, -1.0f), //eye position
glm::vec3(320.0f, 240.0f, 0.0f), //center
glm::vec3(0.0f, -1.0f, 0.0f) //up
);
I imagine I need to look at (320, 240, 0) from the negative z side towards positive z, with the "up" direction being negative y.
However, it doesn't seem to produce the right result:
auto const v = glm::vec2(320.0f, 240.0f);
auto const v2 = glm::vec2(0.0f, 0.0f);
auto const v3 = glm::vec2(640.0f, 480.0f);
auto const result = proj * view * glm::vec4(v, 0.0f, 1.0f); //expects: (0,0), gives (-1,-1)
auto const result2 = proj * view * glm::vec4(v2, 0.0f, 1.0f); //expects: (-1,1), gives (-2,0)
auto const result3 = proj * view * glm::vec4(v3, 0.0f, 1.0f); //expects: (1,-1), gives (0,-2)
That's not how view and projection matrices work. The view matrix tells you which point gets mapped to [0,0] in view space. It seems you are trying to map the center of the visible area to [0,0], but then you use a projection matrix which assumes that [0,0] is the top-left corner.
Since you first apply the view matrix, view * glm::vec4(v, 0.0f, 1.0f) gives [0,0]. Then the projection matrix gets applied, and since it defines [0,0] as the top-left corner, proj * [0,0] results in [-1,-1].
I'm not 100% sure what you want to achieve, but if you want to use the given projection matrix, then the view matrix has to transform the scene in such a way that the top-left point of the visible area gets mapped to [0,0].
You can also adjust the projection to use the range [-320, 320] (and [-240, 240] respectively) and keep mapping the center to [0,0] with the view matrix, as sketched below.
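A minimal sketch of that second option, assuming the same glm calls as in the question (the symmetric ortho keeps the window center at NDC [0,0], while the flipped up vector keeps +y pointing down on screen):
auto const proj = glm::ortho(-320.0f, 320.0f, -240.0f, 240.0f);
auto const view = glm::lookAt(
    glm::vec3(320.0f, 240.0f, -1.0f), // eye, just behind the window center
    glm::vec3(320.0f, 240.0f, 0.0f),  // center of the visible area
    glm::vec3(0.0f, -1.0f, 0.0f)      // up = -y, so +y points down on screen
);
// (320,240) -> NDC (0,0), (0,0) -> (-1,1), (640,480) -> (1,-1)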

How to center a directional light depth map on the camera position?

Right now I am working on shadow maps in my game engine. In the code below I compute the view-projection matrix for a directional light source. I have a fixed projection-box size (= 50), so right now it lights up a box from -50 to 50 in all directions, centered at the world origin. This works correctly, but I want it to follow the camera so that the camera's position is always the center of this box. How can I do this?
Matrix4x4 DirectionalLight::GetMatrix() const
{
    Vector3 position = Camera::GetPosition();
    float sizeLx = -this->ProjectionSize;
    float sizeRx = +this->ProjectionSize;
    float sizeLy = -this->ProjectionSize;
    float sizeRy = +this->ProjectionSize;
    float sizeLz = -this->ProjectionSize;
    float sizeRz = +this->ProjectionSize;
    Matrix4x4 OrthoProjection = MakeOrthographicMatrix(sizeLx, sizeRx, sizeLy, sizeRy, sizeLz, sizeRz);
    Matrix4x4 LightView = MakeViewMatrix(
        this->Direction,
        MakeVector3(0.0f, 0.0f, 0.0f),
        MakeVector3(0.0f, 1.0f, 0.0f)
    );
    return OrthoProjection * LightView;
}
I am using glm as my math library; most functions are thin aliases/wrappers: MakeOrthographicMatrix is glm::ortho, MakeViewMatrix is glm::lookAt.
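One common way to make the box follow the camera (a hedged sketch, assuming Vector3 supports operator+ and that MakeViewMatrix follows glm::lookAt's eye/center/up argument order) is to offset both the eye and the look-at target by the camera position, so the symmetric ortho volume stays centered on it:
Matrix4x4 DirectionalLight::GetMatrix() const
{
    // Sketch: the ortho box is unchanged, but the light view is anchored at the camera.
    Vector3 position = Camera::GetPosition();
    float size = this->ProjectionSize;
    Matrix4x4 OrthoProjection = MakeOrthographicMatrix(-size, +size, -size, +size, -size, +size);
    Matrix4x4 LightView = MakeViewMatrix(
        position + this->Direction,   // eye: the camera position, offset along the light direction
        position,                     // target: the camera position becomes the middle of the box
        MakeVector3(0.0f, 1.0f, 0.0f) // up
    );
    return OrthoProjection * LightView;
}
Snapping that center to shadow-map texel increments is a common refinement to avoid shimmering as the camera moves.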

Get cursor position on Z = 0 plane using glm::unproject()?

I'm trying to get the coordinates (x, y) of the grid (z = 0) using only the cursor coordinates. After a long search I found this way to do it using glm::unProject.
First I get the cursor coordinates using the callback:
void cursorCallback(GLFWwindow *window, double x, double y)
{
this->cursorCoordinate = glm::vec3(x, (this->windowHeight - y - 1.0f), 0.0f);
}
and then I convert these coordinates:
glm::vec3 cursorCoordinatesToWorldCoordinates()
{
glm::vec3 pointInitial = glm::unProject(
glm::vec3(this->cursorCoordinate.x, this->cursorCoordinate.y, 0.0),
this->modelMatrix * this->viewMatrix,
this->projectionMatrix,
this->viewPort
);
glm::vec3 pointFinal = glm::unProject(
glm::vec3(this->cursorCoordinate.x, this->cursorCoordinate.y, 1.0),
this->modelMatrix * this->viewMatrix,
this->projectionMatrix,
this->viewPort
);
glm::vec3 vectorDirector = pointFinal - pointInitial;
double lambda = (-pointInitial.y) / vectorDirector.y;
double x = pointInitial.x + lambda * vectorDirector.x;
double y = pointInitial.z + lambda * vectorDirector.z;
return glm::vec3(x, y, 0.0f);
}
I use an ArcBall camera to rotate the world around a specified axis, and this is how I generate the MVP matrices:
this->position = glm::vec3(0.0f, 10.0f, 5.0f);
this->up = glm::vec3(0.0f, 1.0f, 0.0f);
this->lookAt = glm::vec3(0.0f, 0.0f, 0.0f);
this->fieldView = 99.0f;
this->farDistance = 100.0f;
this->nearDistance = 0.1f;
this->modelMatrix = glm::mat4(1.0f);
this->viewMatrix = glm::lookAt(this->position, this->lookAt, this->up) * glm::rotate(glm::degrees(this->rotationAngle) * this->dragSpeed, this->rotationAxis);
this->projectionMatrix = glm::perspective(glm::radians(this->fieldView), 1.0f, this->nearDistance, this->farDistance);
But something is going wrong because I'm not getting the right results. In my application each grid square is 1 unit and the cube is rendered at position (0, 0, 0). With rotationAngle = 0, when I put the cursor at (0,0), (1,1), (2,2), (3,3), (4,4), (5,5) I get (0, 5.7), (0.8, 6.4), (1.6, 6.9), (2.4, 7.6), (3.2, 8.2), (4.2, 8.8) respectively. That's not what I expected.
Why is y off by roughly 6 units?
Also, is it necessary to rotate the result of cursorCoordinatesToWorldCoordinates based on rotationAngle?
--
What I have already checked:
Checked that the viewport matches glViewport - OK
Checked the OpenGL coordinates (Y is up, not Z) - OK
You want to intersect the ray from glm::vec3(this->cursorCoordinate.x, this->cursorCoordinate.y, 0.0) to glm::vec3(this->cursorCoordinate.x, this->cursorCoordinate.y, 1.0) with the grid in world space, rather than in model space (of the cuboid).
You have to skip this->modelMatrix:
glm::vec3 pointInitial = glm::unProject(
glm::vec3(this->cursorCoordinate.x, this->cursorCoordinate.y, 0.0),
this->viewMatrix,
this->projectionMatrix,
this->viewPort);
glm::vec3 pointFinal = glm::unProject(
glm::vec3(this->cursorCoordinate.x, this->cursorCoordinate.y, 1.0),
this->viewMatrix,
this->projectionMatrix,
this->viewPort);
In any case, this->modelMatrix * this->viewMatrix is incorrect. If you want to intersect the ray with an object in model space, then it has to be this->viewMatrix * this->modelMatrix. Matrix multiplication is not commutative.
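For reference, a sketch of that model-space variant (only the order of the product changes; the rest of the call stays the same):
// Hypothetical: unproject into the cuboid's model space instead of world space.
glm::vec3 pointInitialModel = glm::unProject(
    glm::vec3(this->cursorCoordinate.x, this->cursorCoordinate.y, 0.0),
    this->viewMatrix * this->modelMatrix, // view * model, not model * view
    this->projectionMatrix,
    this->viewPort);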

Matrix multiplication - How to make a planet spin on its own axis and orbit its parent?

I've got two planets, the sun and the earth. I want them to spin on their own axis and orbit their parent at the same time.
I can get these two behaviors to work individually, but I'm stumped as to how to combine them.
void planet::render() {
    mat4 axisPos = mat4(1.0f);
    axialRotation = rotate(axisPos, axisRotationSpeedConstant, vec3(0.0f, 1.0f, 0.0f));
    if (hostPlanet != NULL) {
        mat4 hostTransformPosition = mat4(1.0f);
        hostTransformPosition[3] = hostPlanet->getTransform()[3];
        orbitalSpeed += orbitalSpeedConstant;
        orbitRotation = rotate(hostTransformPosition, orbitalSpeed, vec3(0.0f, 1.0f, 0.0f));
        orbitRotation = translate(orbitRotation, vec3(distanceFromParent, 0.0f, 0.0f));
        // rotTransform will make them spin on their axis, but not orbit their parent planet
        mat4 rotTransform = transform * axialRotation;
        //transform *= rotTransform;
        // orbitRotation will make the planet orbit, but it won't spin on its own axis
        transform = orbitRotation;
    }
    else {
        transform *= axialRotation;
    }
    glUniform4fv(gColLoc, 1, &color[0]);
    glUniformMatrix4fv(gModelToWorldTransformLoc, 1, GL_FALSE, &getTransform()[0][0]);
    glDrawArrays(GL_LINES, 0, NUMVERTS);
};
Woohoo! As usual, asking the question led me to the answer. After the last line, knowing that transform[0] to transform[2] represent the rotation in the 4x4 matrix (with transform[3] representing the position in 3D space), I thought to replace the rotation from the previous matrix calculation with the current one.
Badabing, I got my answer.
transform = orbitRotation;
transform[0] = rotTransform[0];
transform[1] = rotTransform[1];
transform[2] = rotTransform[2];

From perspective to orthographic projections

I'm trying to change my camera projection from perspective to orthographic.
At the moment my code works fine with the perspective projection:
m_prespective = glm::perspective(70.0f, (float)DISPLAY_WIDTH / (float)DISPLAY_HEIGHT, 0.01f, 1000.0f);
m_position = glm::vec3(mesh.centre.x, mesh.centre.y, -mesh.radius);
m_forward = centre;
m_up = glm::vec3(0.0f, 1.0f, 0.0f);
return m_prespective * glm::lookAt(m_position, m_forward, m_up);
But as soon as I change it to an orthographic projection I can't see my mesh anymore.
m_ortho = glm::ortho(0.0f, (float)DISPLAY_WIDTH, (float)DISPLAY_HEIGHT,5.0f, 0.01f, 1000.0f);
m_position = glm::vec3(mesh.centre.x, mesh.centre.y, -mesh.radius);
m_forward = centre;
m_up = glm::vec3(0.0f, 1.0f, 0.0f);
return m_ortho * glm::lookAt(m_position, m_forward, m_up);
I don't understand what I'm doing wrong.
In the perspective projection, the term (float)DISPLAY_WIDTH / (float)DISPLAY_HEIGHT evaluates to the picture's aspect ratio. This number is going to be close to 1. The left and right clip plane distances at the near plane for the perspective projection are about ±aspect * near_distance (times the tangent of half the field of view). More interesting, though, is the left-right expanse at the viewing distance, which in your case is abs(m_position.z) = abs(mesh.radius).
Carrying this over to an orthographic projection, the left, right, top and bottom clip plane distances should be of the same order of magnitude; given that aspect is close to 1, the values for left, right, bottom and top should be close to abs(mesh.radius). The resolution of the display in pixels is totally irrelevant, except for the aspect ratio.
Furthermore, when using a perspective projection the value for near should be chosen as large as possible so that all desired geometry is still visible. Doing otherwise wastes precious depth buffer resolution.
float const view_distance = mesh.radius + 1;
float const aspect = (float)DISPLAY_WIDTH / (float)DISPLAY_HEIGHT;
switch( typeof_projection ){
case perspective:
    m_projection = glm::perspective(70.0f, aspect, 1.f, 1000.0f);
    break;
case ortho:
    m_projection = glm::ortho(
        -aspect * view_distance,
         aspect * view_distance,
        -view_distance,
         view_distance,
        -1000.0f, 1000.0f );
    break;
}
m_position = glm::vec3(mesh.centre.x, mesh.centre.y, -view_distance);
m_forward = centre;
m_up = glm::vec3(0.0f, 1.0f, 0.0f);
return m_projection * glm::lookAt(m_position, m_forward, m_up);