I have encountered a situation where passing a glm::vec3 to the glm::lookAt function appears to modify it.
The following code is about shadow frustum calculation in a C++ / OpenGL game engine. The problem arises in the glm::lookAt function, at the end.
void Shadows::updateFrustumBoundingBox()
{
// Here we convert the main camera frustum coordinates into light view space
std::array<glm::vec3,8> points = {
// Near plane points
lightView * glm::vec4(cameraPtr->ntl, 1.0),
lightView * glm::vec4(cameraPtr->ntr, 1.0),
lightView * glm::vec4(cameraPtr->nbl, 1.0),
lightView * glm::vec4(cameraPtr->nbr, 1.0),
// Far plane points
lightView * glm::vec4(cameraPtr->ftl, 1.0),
lightView * glm::vec4(cameraPtr->ftr, 1.0),
lightView * glm::vec4(cameraPtr->fbl, 1.0),
lightView * glm::vec4(cameraPtr->fbr, 1.0)};
// Here we find the shadow bounding box dimensions
bool first = true;
for (int i=0; i<8; ++i)
{
glm::vec3* point = &points[i];
if (first)
{
minX = point->x;
maxX = point->x;
minY = point->y;
maxY = point->y;
minZ = point->z;
maxZ = point->z;
first = false;
}
if (point->x > maxX)
maxX = point->x;
else if (point->x < minX)
minX = point->x;
if (point->y > maxY)
maxY = point->y;
else if (point->y < minY)
minY = point->y;
if (point->z > maxZ)
maxZ = point->z;
else if (point->z < minZ)
minZ = point->z;
}
frustumWidth = maxX - minX;
frustumHeight = maxY - minY;
frustumLength = maxZ - minZ;
// Here we find the bounding box center, in light view space
float x = (minX + maxX) / 2.0f;
float y = (minY + maxY) / 2.0f;
float z = (minZ + maxZ) / 2.0f;
glm::vec4 frustumCenter = glm::vec4(x, y, z, 1.0f);
// Here we convert the bounding box center into world space
glm::mat4 invertedLight = glm::inverse(lightView);
frustumCenter = invertedLight * frustumCenter;
// Here we define the light projection matrix (shadow frustum dimensions)
lightProjection = glm::ortho(
-frustumWidth/2.0f, // left
frustumWidth/2.0f, // right
-frustumHeight/2.0f, // down
frustumHeight/2.0f, // top
0.01f, // near
SHADOW_DISTANCE); // far
// Here we define the light view matrix (shadow frustum position and orientation)
lightDirection = glm::normalize(lightDirection);
target = glm::vec3(0.0f, 100.0f, 200.0f) + lightDirection;
lightView = glm::lookAt(
// Shadow box center
glm::vec3(0.0f, 100.0f, 200.0f), // THIS LINE
// glm::vec3(frustumCenter), // ALTERNATIVELY, THIS LINE. Here I convert it as a vec3 because it is a vec4
// Light orientation
target,
// Up vector
glm::vec3( 0.0f, 1.0f, 0.0f));
cout << "frustumCenter: " << frustumCenter.x << " " << frustumCenter.y << " " << frustumCenter.z << " " << frustumCenter.w << endl;
// Final matrix calculation
lightSpaceMatrix = lightProjection * lightView;
}
As is, the first glm::lookAt parameter is glm::vec3(0.0f, 100.0f, 200.0f), and it works correctly. The glm::vec4 frustumCenter variable isn't used by glm::lookAt, and outputs correct values each frame.
frustumCenter: 573.41 -93.2823 -133.848 1
But if I change the first glm::lookAt parameter to "glm::vec3(frustumCenter)":
frustumCenter: nan nan nan nan
How can it be?
"I have encountered a situation where passing a glm::vec3 to the glm::lookAt function appears to modify it."
I don't think so. You use frustumCenter to calculate lightView, but before you do that, you use lightView to calculate frustumCenter: frustumCenter = invertedLight * frustumCenter;
So my educated guess on what happens here is:
The lightView matrix is not properly initialized, or is initialized to a singular matrix (like all zeros). As such, its inverse is not defined, resulting in frustumCenter becoming all NaN, which in turn results in lightView becoming all NaN.
But if you do not use frustumCenter in the first iteration, lightView will be properly initialized, and frustumCenter will be calculated to a sane value in the next iteration.
Related
When the camera is moved around, why are my starting rays still stuck at the origin (0, 0, 0) even though the camera position has been updated?
It works fine if I start the program with my camera position at the default 0, 0, 0. But once I move my camera, for instance pan to the right, and click some more, the lines are still coming from 0, 0, 0 when they should be starting from wherever the camera is. Am I doing something terribly wrong? I've checked to make sure they're being updated in the main loop. I've used the code snippet below, referenced from:
picking in 3D with ray-tracing using NinevehGL or OpenGL i-phone
// 1. Get mouse coordinates then normalize
float x = (2.0f * lastX) / width - 1.0f;
float y = 1.0f - (2.0f * lastY) / height;
// 2. Move from clip space to world space
glm::mat4 inverseWorldMatrix = glm::inverse(proj * view);
glm::vec4 near_vec = glm::vec4(x, y, -1.0f, 1.0f);
glm::vec4 far_vec = glm::vec4(x, y, 1.0f, 1.0f);
glm::vec4 startRay = inverseWorldMatrix * near_vec;
glm::vec4 endRay = inverseWorldMatrix * far_vec;
// perspective divide
startRay /= startRay.w;
endRay /= endRay.w;
glm::vec3 direction = glm::vec3(endRay - startRay);
// start the ray points from the camera position
glm::vec3 startPos = glm::vec3(camera.GetPosition());
glm::vec3 endPos = glm::vec3(startPos + direction * someLength);
In the first screenshot I click some rays; in the 2nd I move my camera to the right and click some more, but the initial starting rays are still at 0, 0, 0. What I'm looking for is for the rays to come out wherever the camera position is, as in the 3rd image, i.e. the red rays. Sorry for the confusion: the red lines are supposed to shoot out and into the distance, not up.
// and these are my matrices
// projection
glm::mat4 proj = glm::perspective(glm::radians(camera.GetFov()), (float)width / height, 0.1f, 100.0f);
// view
glm::mat4 view = camera.GetViewMatrix(); // This returns glm::lookAt(this->Position, this->Position + this->Front, this->Up);
// model
glm::mat4 model = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, 0.0f));
It's hard to tell where in the code the problem lies. But I use this function for ray casting, which is adapted from code from scratch-a-pixel and learnopengl:
vec3 rayCast(double xpos, double ypos, mat4 projection, mat4 view) {
// converts a position from the 2d xpos, ypos to a normalized 3d direction
float x = (2.0f * xpos) / WIDTH - 1.0f;
float y = 1.0f - (2.0f * ypos) / HEIGHT;
float z = 1.0f;
vec3 ray_nds = vec3(x, y, z);
vec4 ray_clip = vec4(ray_nds.x, ray_nds.y, -1.0f, 1.0f);
// eye space to clip we would multiply by projection so
// clip space to eye space is the inverse projection
vec4 ray_eye = inverse(projection) * ray_clip;
// convert point to forwards
ray_eye = vec4(ray_eye.x, ray_eye.y, -1.0f, 0.0f);
// world space to eye space is usually multiply by view so
// eye space to world space is inverse view
vec4 inv_ray_wor = (inverse(view) * ray_eye);
vec3 ray_wor = vec3(inv_ray_wor.x, inv_ray_wor.y, inv_ray_wor.z);
ray_wor = normalize(ray_wor);
return ray_wor;
}
where you can draw your line with startPos = camera.Position and endPos = camera.Position + rayCast(...) * scalar_amount.
I did mouse picking with terrain for these lessons (but used c++)
https://www.youtube.com/watch?v=DLKN0jExRIM&index=29&listhLoLuZVfUksDP
http://antongerdelan.net/opengl/raycasting.html
The problem is that the position of the mouse does not correspond to the place where the ray intersects the terrain:
There's a large error vertically and a small one horizontally.
Do not mind the shadows; the normal map has not been corrected yet.
What can be wrong? My code:
void MousePicker::update() {
view = cam->getViewMatrix();
currentRay = calculateMouseRay();
if (intersectionInRange(0, RAY_RANGE, currentRay)) {
currentTerrainPoint = binarySearch(0, 0, RAY_RANGE, currentRay);
}
else {
currentTerrainPoint = vec3();
}
}
vec3 MousePicker::calculateMouseRay() {
glfwGetCursorPos(win, &mouseInfo.xPos, &mouseInfo.yPos);
vec2 normalizedCoords = getNormalizedCoords(mouseInfo.xPos, mouseInfo.yPos);
vec4 clipCoords = vec4(normalizedCoords.x, normalizedCoords.y, -1.0f, 1.0f);
vec4 eyeCoords = toEyeCoords(clipCoords);
vec3 worldRay = toWorldCoords(eyeCoords);
return worldRay;
}
vec2 MousePicker::getNormalizedCoords(double xPos, double yPos) {
GLint width, height;
glfwGetWindowSize(win, &width, &height);
//GLfloat x = (2.0 * xPos) / width - 1.0f;
GLfloat x = -((width - xPos) / width - 0.5f) * 2.0f;
//GLfloat y = 1.0f - (2.0f * yPos) / height;
GLfloat y = ((height - yPos) / height - 0.5f) * 2.0f;
//float z = 1.0f;
mouseInfo.normalizedCoords = vec2(x, y);
return vec2(x,y);
}
vec4 MousePicker::toEyeCoords(vec4 clipCoords) {
vec4 invertedProjection = inverse(projection) * clipCoords;
//vec4 eyeCoords = translate(invertedProjection, clipCoords);
mouseInfo.eyeCoords = vec4(invertedProjection.x, invertedProjection.y, -1.0f, 0.0f);
return vec4(invertedProjection.x, invertedProjection.y, -1.0f, 0.0f);
}
vec3 MousePicker::toWorldCoords(vec4 eyeCoords) {
vec3 rayWorld = vec3(inverse(view) * eyeCoords);
vec3 mouseRay = vec3(rayWorld.x, rayWorld.y, rayWorld.z);
rayWorld = normalize(rayWorld);
mouseInfo.worldRay = rayWorld;
return rayWorld;
}
//*********************************************************************************
vec3 MousePicker::getPointOnRay(vec3 ray, float distance) {
vec3 camPos = cam->getCameraPos();
vec3 start = vec3(camPos.x, camPos.y, camPos.z);
vec3 scaledRay = vec3(ray.x * distance, ray.y * distance, ray.z * distance);
return vec3(start + scaledRay);
}
vec3 MousePicker::binarySearch(int count, float start, float finish, vec3 ray) {
float half = start + ((finish - start) / 2.0f);
if (count >= RECURSION_COUNT) {
vec3 endPoint = getPointOnRay(ray, half);
//Terrain* ter = &getTerrain(endPoint.x, endPoint.z);
if (terrain != NULL) {
return endPoint;
}
else {
return vec3();
}
}
if (intersectionInRange(start, half, ray)) {
return binarySearch(count + 1, start, half, ray);
}
else {
return binarySearch(count + 1, half, finish, ray);
}
}
bool MousePicker::intersectionInRange(float start, float finish, vec3 ray) {
vec3 startPoint = getPointOnRay(ray, start);
vec3 endPoint = getPointOnRay(ray, finish);
if (!isUnderGround(startPoint) && isUnderGround(endPoint)) {
return true;
}
else {
return false;
}
}
bool MousePicker::isUnderGround(vec3 testPoint) {
//Terrain* ter = &getTerrain(testPoint.x, testPoint.z);
float height = 0;
if (terrain != NULL) {
height = terrain->getHeightPoint(testPoint.x, testPoint.z);
mouseInfo.height = height;
}
if (testPoint.y < height) {
return true;
}
else {
return false;
}
}
Terrain MousePicker::getTerrain(float worldX, float worldZ) {
return *terrain;
}
In perspective projection, a ray from the eye position through a point on the screen can be defined by 2 points. The first point is the eye (camera) position, which is (0, 0, 0) in view space. The second point has to be calculated from the position on the screen.
The screen position has to be converted to normalized device coordinates in range from (-1,-1) to (1,1).
w = width of the viewport
h = height of the viewport
x = X position of the mouse
y = Y position of the mouse
GLfloat ndc_x = 2.0 * x/w - 1.0;
GLfloat ndc_y = 1.0 - 2.0 * y/h; // invert Y axis
To calculate a point on the ray which goes through the camera position and through the point on the screen, the field of view and the aspect ratio of the perspective projection have to be known:
fov_y = vertical field of view angle in radians
aspect = w / h
GLfloat tanFov = tan( fov_y * 0.5 );
glm::vec3 ray_P = glm::vec3( ndc_x * aspect * tanFov, ndc_y * tanFov, -1.0 );
A ray from the camera position through a point on the screen can be defined by the following position (P0) and normalized direction (dir), in world space:
view = view matrix
glm::mat4 invView = glm::inverse( view );
glm::vec3 P0 = glm::vec3( invView * glm::vec4(0.0f, 0.0f, 0.0f, 1.0f) );
// = glm::vec3( invView[3][0], invView[3][1], invView[3][2] );
glm::vec3 dir = glm::normalize( glm::vec3( invView * glm::vec4(ray_P, 1.0f) ) - P0 );
In this case, the answers to the following questions will be interesting too:
How to recover view space position given view space depth value and ndc xy
Is it possble get which surface of cube will be click in OpenGL?
How to render depth linearly in modern OpenGL with gl_FragCoord.z in fragment shader?
GLSL spotlight projection volume
Applying to your code results in the following changes:
The Perspective Projection Matrix looks like this:
r = right, l = left, b = bottom, t = top, n = near, f = far
2*n/(r-l) 0 0 0
0 2*n/(t-b) 0 0
(r+l)/(r-l) (t+b)/(t-b) -(f+n)/(f-n) -1
0 0 -2*f*n/(f-n) 0
it follows:
aspect = w / h
tanFov = tan( fov_y * 0.5 );
p[0][0] = 2*n/(r-l) = 1.0 / (tanFov * aspect)
p[1][1] = 2*n/(t-b) = 1.0 / tanFov
Convert from screen (mouse) coordinates to normalized device coordinates:
vec2 MousePicker::getNormalizedCoords(double x, double y) {
GLint w, h;
glfwGetWindowSize(win, &w, &h);
GLfloat ndc_x = 2.0 * x/w - 1.0;
GLfloat ndc_y = 1.0 - 2.0 * y/h; // invert Y axis
mouseInfo.normalizedCoords = vec2(ndc_x, ndc_y);
return vec2(ndc_x, ndc_y);
}
Calculate a ray from the camera position through a point on the screen (mouse position) in world space:
vec3 MousePicker::calculateMouseRay( void ) {
glfwGetCursorPos(win, &mouseInfo.xPos, &mouseInfo.yPos);
vec2 normalizedCoords = getNormalizedCoords(mouseInfo.xPos, mouseInfo.yPos);
float ray_Px = normalizedCoords.x / projection[0][0]; // projection[0][0] == 1.0 / (tanFov * aspect)
float ray_Py = normalizedCoords.y / projection[1][1]; // projection[1][1] == 1.0 / tanFov
glm::vec3 ray_P = vec3( ray_Px, ray_Py, -1.0f );
vec3 camPos = cam->getCameraPos(); // == glm::vec3( invView[3][0], invView[3][1], invView[3][2] );
glm::mat4 invView = glm::inverse( view );
glm::vec3 P0 = camPos;
glm::vec3 dir = glm::normalize( glm::vec3( invView * glm::vec4( ray_P, 1.0f ) ) - P0 );
return dir;
}
I have a function which rotates the camera around the player by yaw and pitch angles.
void Camera::updateVectors() {
GLfloat radius = glm::length(center - position);
position.x = cos(glm::radians(this->yaw)) * cos(glm::radians(this->pitch));
position.y = sin(glm::radians(this->pitch));
position.z = sin(glm::radians(this->yaw)) * cos(glm::radians(this->pitch));
position *= radius;
this->front = glm::normalize(center - position);
this->right = glm::normalize(glm::cross(this->front, this->worldUp));
this->up = glm::normalize(glm::cross(this->right, this->front));
lookAt = glm::lookAt(this->position, this->position + this->front, this->up);
}
When I move the player, the camera should move with it, by adding a translation vector to both the center and position of the camera:
void Camera::Transform(glm::vec3& t) {
this->position += t;
this->center += t;
}
Before moving the player, the camera rotation works fine and the player movement also works fine, but once I try to rotate the camera after moving the player, it starts to change position unexpectedly.
After some debugging I noticed that the radius calculated in the first line, which is the distance between the center and position of the camera, is something like 49.888889 or 50.000079, while from the initialized values it should be exactly 50.0. This very small difference makes the result completely wrong.
So how can I deal with this float precision, or is there a bug in my code or calculations?
Edit:
Positioning the player depends on its yaw and pitch, and updates the center of the camera:
GLfloat velocity = this->movementSpeed * deltaTime;
if (direction == FORWARD) {
glm::vec3 t = glm::vec3(sin(glm::radians(yaw)), sin(glm::radians(pitch)), cos(glm::radians(yaw))) * velocity;
matrix = glm::translate(matrix, t);
for (GLuint i = 0; i < this->m_Entries.size(); i++) {
this->m_Entries[i].setModelMatrix(matrix);
}
glm::vec3 f(matrix[2][0], matrix[2][1], matrix[2][2]);
f *= velocity;
scene->getDefCamera()->Transform(f);
}
if (direction == BACKWARD) {
glm::vec3 t = glm::vec3(sin(glm::radians(yaw)), 0.0, cos(glm::radians(yaw))) * velocity;
matrix = glm::translate(matrix, -t);
for (GLuint i = 0; i < this->m_Entries.size(); i++) {
this->m_Entries[i].setModelMatrix(matrix);
}
glm::vec3 f(matrix[2][0], matrix[2][1], matrix[2][2]);
f *= velocity;
f = -f;
scene->getDefCamera()->Transform(f);
}
The main problem here is that you're rotating based on a position that is moving. But rotations are based on the origin of the coordinate system. So when you move the position, the rotation is still being done relative to the origin.
Instead of having Transform offset the position, it should only offset the center. Indeed, storing position makes no sense; you compute the camera's position based on its current center point, the radius, and the angles of rotation. The radius is a property that should be stored, not computed.
The solution is simply making the transformations on the camera view matrix instead of building it with the lookAt function.
First, initialize the camera:
void Camera::initCamera(glm::vec3& pos, glm::vec3& center, GLfloat yaw, GLfloat pitch) {
view = glm::translate(view, center-pos);
view = glm::rotate(view, glm::radians(yaw), glm::vec3(0.0, 1.0, 0.0));
view = glm::rotate(view, glm::radians(pitch), glm::vec3(1.0, 0.0, 0.0));
view = glm::translate(view, pos-center);
}
Then the rotation function:
void Camera::Rotate(GLfloat xoffset, GLfloat yoffset, glm::vec3& c) {
xoffset *= this->mouseSensitivity;
yoffset *= this->mouseSensitivity;
view = glm::translate(view, c );//c is the player position
view = glm::rotate(view, glm::radians(xoffset), glm::vec3(0.0, 1.0, 0.0));
view = glm::rotate(view, glm::radians(yoffset), glm::vec3(1.0, 0.0, 0.0));
view = glm::translate(view, - c);
}
And the camera move function:
void Camera::Translate(glm::vec3& t) {
view = glm::translate(view, -t);
}
And in the player class, when the player moves, it pushes the camera to move in its direction with this code:
void Mesh::Move(Move_Directions direction, GLfloat deltaTime) {
GLfloat velocity = 50.0f * this->movementSpeed * deltaTime;
if (direction == FORWARD) {
glm::vec3 t = glm::vec3(sin(glm::radians(yaw)), sin(glm::radians(pitch)), cos(glm::radians(yaw))) * velocity;
matrix = glm::translate(matrix, t);
for (GLuint i = 0; i < this->m_Entries.size(); i++) {
this->m_Entries[i].setModelMatrix(matrix);
}
glm::vec3 f(matrix[2][0], matrix[2][1], matrix[2][2]);
f *= velocity;
scene->getDefCamera()->Translate(f);
}
if (direction == BACKWARD) {
glm::vec3 t = glm::vec3(sin(glm::radians(yaw)), 0.0, cos(glm::radians(yaw))) * velocity;
matrix = glm::translate(matrix, -t);
for (GLuint i = 0; i < this->m_Entries.size(); i++) {
this->m_Entries[i].setModelMatrix(matrix);
}
glm::vec3 f(matrix[2][0], matrix[2][1], matrix[2][2]);
f *= velocity;
f = -f;
scene->getDefCamera()->Translate(f);
}
if (direction == RIGHT) {
matrix = glm::rotate(matrix, (GLfloat) -M_PI * deltaTime, glm::vec3(0.0, 1.0, 0.0));
for (GLuint i = 0; i < this->m_Entries.size(); i++) {
this->m_Entries[i].setModelMatrix(matrix);
}
}
if (direction == LEFT) {
matrix = glm::rotate(matrix, (GLfloat) M_PI * deltaTime, glm::vec3(0.0, 1.0, 0.0));
for (GLuint i = 0; i < this->m_Entries.size(); i++) {
this->m_Entries[i].setModelMatrix(matrix);
}
}
}
Thanks to everybody who helped.
I am currently trying to learn how cascaded shadow maps work so I've been trying to get one shadow map to fit to the view frustum without shimmering. I'm using a near/far plane of 1 to 10000 for my camera projection and this is the way I calculate the orthographic matrix for the light:
GLfloat far = -INFINITY;
GLfloat near = INFINITY;
//Multiply all the world space frustum corners with the view matrix of the light
Frustum cameraFrustum = CameraMan.getActiveCamera()->mFrustum;
lightViewMatrix = glm::lookAt((cameraFrustum.frustumCenter - glm::vec3(-0.447213620f, -0.89442790f, 0.0f)), cameraFrustum.frustumCenter, glm::vec3(0.0f, 0.0f, 1.0f));
glm::vec3 arr[8];
for (unsigned int i = 0; i < 8; ++i)
arr[i] = glm::vec3(lightViewMatrix * glm::vec4(cameraFrustum.frustumCorners[i], 1.0f));
glm::vec3 minO = glm::vec3(INFINITY, INFINITY, INFINITY);
glm::vec3 maxO = glm::vec3(-INFINITY, -INFINITY, -INFINITY);
for (auto& vec : arr)
{
minO = glm::min(minO, vec);
maxO = glm::max(maxO, vec);
}
far = maxO.z;
near = minO.z;
//Get the longest diagonal of the frustum, this along with texel sized increments is used to keep the shadows from shimmering
//far top right - near bottom left
glm::vec3 longestDiagonal = cameraFrustum.frustumCorners[0] - cameraFrustum.frustumCorners[6];
GLfloat lengthOfDiagonal = glm::length(longestDiagonal);
longestDiagonal = glm::vec3(lengthOfDiagonal);
glm::vec3 borderOffset = (longestDiagonal - (maxO - minO)) * glm::vec3(0.5f, 0.5f, 0.5f);
borderOffset *= glm::vec3(1.0f, 1.0f, 0.0f);
maxO += borderOffset;
minO -= borderOffset;
GLfloat worldUnitsPerTexel = lengthOfDiagonal / 1024.0f;
glm::vec3 vWorldUnitsPerTexel = glm::vec3(worldUnitsPerTexel, worldUnitsPerTexel, 0.0f);
minO /= vWorldUnitsPerTexel;
minO = glm::floor(minO);
minO *= vWorldUnitsPerTexel;
maxO /= vWorldUnitsPerTexel;
maxO = glm::floor(maxO);
maxO *= vWorldUnitsPerTexel;
lightOrthoMatrix = glm::ortho(minO.x, maxO.x, minO.y, maxO.y, near, far);
The use of the longest diagonal to offset the frustum seems to be working, as the shadow map doesn't seem to shrink/scale when looking around. However, using the texel-sized increments described by https://msdn.microsoft.com/en-us/library/windows/desktop/ee416324(v=vs.85).aspx has no effect whatsoever. I am using a pretty large scene for testing, which results in a low resolution on my shadow maps, but I wanted to get a stabilized shadow that fits a view frustum before I move on to splitting the frustum up. It's hard to tell from images, but the shimmering effect isn't reduced by the solution that Microsoft presented:
Ended up using this solution:
//Calculate the viewMatrix from the frustum center and light direction
Frustum cameraFrustum = CameraMan.getActiveCamera()->mFrustum;
glm::vec3 lightDirection = glm::normalize(glm::vec3(-0.447213620f, -0.89442790f, 0.0f));
lightViewMatrix = glm::lookAt((cameraFrustum.frustumCenter - lightDirection), cameraFrustum.frustumCenter, glm::vec3(0.0f, 1.0f, 0.0f));
//Get the longest radius in world space
GLfloat radius = glm::length(cameraFrustum.frustumCenter - cameraFrustum.frustumCorners[6]);
for (unsigned int i = 0; i < 8; ++i)
{
GLfloat distance = glm::length(cameraFrustum.frustumCorners[i] - cameraFrustum.frustumCenter);
radius = glm::max(radius, distance);
}
radius = std::ceil(radius);
//Create the AABB from the radius
glm::vec3 maxOrtho = cameraFrustum.frustumCenter + glm::vec3(radius);
glm::vec3 minOrtho = cameraFrustum.frustumCenter - glm::vec3(radius);
//Get the AABB in light view space
maxOrtho = glm::vec3(lightViewMatrix*glm::vec4(maxOrtho, 1.0f));
minOrtho = glm::vec3(lightViewMatrix*glm::vec4(minOrtho, 1.0f));
//Just checking when debugging to make sure the AABB is the same size
GLfloat lengthofTemp = glm::length(maxOrtho - minOrtho);
//Store the far and near planes
far = maxOrtho.z;
near = minOrtho.z;
lightOrthoMatrix = glm::ortho(minOrtho.x, maxOrtho.x, minOrtho.y, maxOrtho.y, near, far);
//For more accurate near and far planes, clip the scenes AABB with the orthographic frustum
//calculateNearAndFar();
// Create the rounding matrix, by projecting the world-space origin and determining
// the fractional offset in texel space
glm::mat4 shadowMatrix = lightOrthoMatrix * lightViewMatrix;
glm::vec4 shadowOrigin = glm::vec4(0.0f, 0.0f, 0.0f, 1.0f);
shadowOrigin = shadowMatrix * shadowOrigin;
GLfloat storedW = shadowOrigin.w;
shadowOrigin = shadowOrigin * 4096.0f / 2.0f;
glm::vec4 roundedOrigin = glm::round(shadowOrigin);
glm::vec4 roundOffset = roundedOrigin - shadowOrigin;
roundOffset = roundOffset * 2.0f / 4096.0f;
roundOffset.z = 0.0f;
roundOffset.w = 0.0f;
glm::mat4 shadowProj = lightOrthoMatrix;
shadowProj[3] += roundOffset;
lightOrthoMatrix = shadowProj;
I found this over at http://www.gamedev.net/topic/650743-improving-cascade-shadow/ . I basically switched to using a bounding sphere instead, and then constructed the rounding matrix as in that example. Works like a charm.
Currently I am learning 3D rendering theory with the book "Learning Modern 3D Graphics Programming" and am right now stuck on one of the "Further Study" activities in the review of chapter four, specifically the last activity.
The third activity was answered in this question; I understood it with no problem. However, this last activity asks me to do all that again, this time using only matrices.
I have a solution partially working, but it feels quite a hack to me, and probably not the correct way to do it.
My solution to the third question involved oscillating the 3D vector E's x, y, and z components within an arbitrary range, which produced a zooming-in-and-out cube (growing from the bottom-left, per OpenGL's origin point). I wanted to do this again using matrices, and it looked like this:
However, I get these results with matrices (ignoring the background color change):
Now to the code...
The matrix is a float[16] called theMatrix that represents a 4x4 matrix, with the data written in column-major order and everything but the following elements initialized to zero:
float fFrustumScale = 1.0f;
float fzNear = 1.0f;
float fzFar = 3.0f;
theMatrix[0] = fFrustumScale;
theMatrix[5] = fFrustumScale;
theMatrix[10] = (fzFar + fzNear) / (fzNear - fzFar);
theMatrix[14] = (2 * fzFar * fzNear) / (fzNear - fzFar);
theMatrix[11] = -1.0f;
Then the rest of the code stays the same as in the matrixPerspective tutorial lesson, until we get to the void display() function:
//Hacked-up variables pretending to be a single vector (E)
float x = 0.0f, y = 0.0f, z = -1.0f;
//variables used for the oscillating zoom-in-out
int counter = 0;
float increment = -0.005f;
int steps = 250;
void display()
{
glClearColor(0.15f, 0.15f, 0.2f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);
glUseProgram(theProgram);
//Oscillating values
while (counter <= steps)
{
x += increment;
y += increment;
z += increment;
counter++;
if (counter >= steps)
{
counter = 0;
increment *= -1.0f;
}
break;
}
//Introduce the new data to the array before sending as a 4x4 matrix to the shader
theMatrix[0] = -x * -z;
theMatrix[5] = -y * -z;
//Update the matrix with the new values after processing with E
glUniformMatrix4fv(perspectiveMatrixUniform, 1, GL_FALSE, theMatrix);
/*
cube rendering code omitted for simplification
*/
glutSwapBuffers();
glutPostRedisplay();
}
And here is the vertex shader code that uses the matrix:
#version 330
layout(location = 0) in vec4 position;
layout(location = 1) in vec4 color;
smooth out vec4 theColor;
uniform vec2 offset;
uniform mat4 perspectiveMatrix;
void main()
{
vec4 cameraPos = position + vec4(offset.x, offset.y, 0.0, 0.0);
gl_Position = perspectiveMatrix * cameraPos;
theColor = color;
}
What am I doing wrong, or what am I confusing? Thanks for taking the time to read all of this.
In OpenGL there are three major matrices that you need to be aware of:
The Model Matrix D: Maps vertices from an object's local coordinate system into the world's coordinate system.
The View Matrix V: Maps vertices from the world's coordinate system to the camera's coordinate system.
The Projection Matrix P: Maps (or more suitably projects) vertices from camera's space onto the screen.
Multiplied together, the model and the view matrices give us the so-called Model-View Matrix M, which maps the vertices from the object's local coordinates to the camera's coordinate system.
Altering specific elements of the model-view matrix results in certain affine transformations of the camera.
For example, the 3 matrix elements of the rightmost column are for the translation transformation. The diagonal elements are for the scaling transformation. Altering the elements of the upper-left 3x3 sub-matrix appropriately produces the rotation transformations about the camera's X, Y and Z axes.
The above transformations in C++ code are quite simple and are displayed below:
void translate(GLfloat const dx, GLfloat const dy, GLfloat dz, GLfloat *M)
{
M[12] = dx; M[13] = dy; M[14] = dz;
}
void scale(GLfloat const sx, GLfloat sy, GLfloat sz, GLfloat *M)
{
M[0] = sx; M[5] = sy; M[10] = sz;
}
void rotateX(GLfloat const radians, GLfloat *M)
{
M[5] = std::cosf(radians); M[6] = std::sinf(radians);
M[9] = -M[6]; M[10] = M[5];
}
void rotateY(GLfloat const radians, GLfloat *M)
{
M[0] = std::cosf(radians); M[2] = -std::sinf(radians);
M[8] = -M[2]; M[10] = M[0];
}
void rotateZ(GLfloat const radians, GLfloat *M)
{
M[0] = std::cosf(radians); M[1] = std::sinf(radians);
M[4] = -M[1]; M[5] = M[0];
}
Now you have to define the projection matrix P.
Orthographic projection:
// These parameters are lens properties.
// The "near" and "far" create the Depth of Field.
// The "left", "right", "bottom" and "top" represent the rectangle formed
// by the near area; this rectangle will also be the size of the visible area.
GLfloat near = 0.001, far = 100.0;
GLfloat left = 0.0, right = 320.0;
GLfloat bottom = 480.0, top = 0.0;
// First Column
P[0] = 2.0 / (right - left);
P[1] = 0.0;
P[2] = 0.0;
P[3] = 0.0;
// Second Column
P[4] = 0.0;
P[5] = 2.0 / (top - bottom);
P[6] = 0.0;
P[7] = 0.0;
// Third Column
P[8] = 0.0;
P[9] = 0.0;
P[10] = -2.0 / (far - near);
P[11] = 0.0;
// Fourth Column
P[12] = -(right + left) / (right - left);
P[13] = -(top + bottom) / (top - bottom);
P[14] = -(far + near) / (far - near);
P[15] = 1;
Perspective Projection:
// These parameters are about lens properties.
// The "near" and "far" create the Depth of Field.
// The "angleOfView", as the name suggests, is the angle of view.
// The "aspectRatio" is the cool thing about this matrix. OpenGL doesn't
// have any information about the screen you are rendering for, so the
// results could seem stretched. But this variable puts things onto the
// right path. The aspect ratio is your device screen (or desired area) width
// divided by its height. This will give you a number < 1.0 if the area
// has more vertical space and a number > 1.0 if the area has more horizontal
// space. An aspect ratio of 1.0 represents a square area.
GLfloat near = 0.001;
GLfloat far = 100.0;
GLfloat angleOfView = 0.25 * 3.1415;
GLfloat aspectRatio = 0.75;
// Some calculations before the formula.
GLfloat size = near * std::tanf(0.5 * angleOfView);
GLfloat left = -size;
GLfloat right = size;
GLfloat bottom = -size / aspectRatio;
GLfloat top = size / aspectRatio;
// First Column
P[0] = 2.0 * near / (right - left);
P[1] = 0.0;
P[2] = 0.0;
P[3] = 0.0;
// Second Column
P[4] = 0.0;
P[5] = 2.0 * near / (top - bottom);
P[6] = 0.0;
P[7] = 0.0;
// Third Column
P[8] = (right + left) / (right - left);
P[9] = (top + bottom) / (top - bottom);
P[10] = -(far + near) / (far - near);
P[11] = -1.0;
// Fourth Column
P[12] = 0.0;
P[13] = 0.0;
P[14] = -(2.0 * far * near) / (far - near);
P[15] = 0.0;
Then your shader will become:
#version 330
layout(location = 0) in vec4 position;
layout(location = 1) in vec4 color;
smooth out vec4 theColor;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
void main()
{
gl_Position = projectionMatrix * modelViewMatrix * position;
theColor = color;
}
Bibliography:
http://blog.db-in.com/cameras-on-opengl-es-2-x/
http://www.songho.ca/opengl/gl_transform.html