LookAt (glm) returns wrong translate z-value - c++

Hello, I have an issue with the return value of the glm lookAt function. When I execute in debug mode, at this point of the glm function
... Result[3][2] = dot(f, eye); ...
I get a wrong value in the translate z-position of the matrix. The value is -2, which tells me that the forward and eye vectors point in opposite directions. My eye, center and up vectors are eye(0,0,2), center(0,0,-1) and up(0,1,0). The camera coordinate vectors are f(0,0,-1), s(1,0,0) and u(0,1,0), and the vantage point the user looks at is (0,0,0). So the right view matrix should be this one:
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1
but I get this one:
1 -0 0 -0
-0 1 -0 -0
0 0 1 -2
0 0 0 1
My code is:
struct camera {
    vec3 position = vec3(0.0f);       // position of the camera
    vec3 view_direction = vec3(0.0f); // forward vector (orientation)
    vec3 side = vec3(0.0f);           // right vector (side)
    vec3 up = vec3(0.0f, 1.0f, 0.0f); // up vector
    float speed = 0.1f;
    float yaw = 0.0f;                 // y-rotation
    float cam_yaw_speed = 10.0f;      // 10 degrees per second
    float pitch = 0.0f;               // x-rotation
    float roll = 0.0f;
    ...
    // calculate the orientation vector (forward)
    vec3 getOrientation(vec3 vantage_point) {
        // calc the difference and normalize the resulting vector
        vec3 result = vantage_point - position;
        result = normalize(result);
        return result;
    }
    // calculate the lookat matrix from the given orientation (forward) and up vectors
    mat4 look_at_point(vec3 vantage_point) {
        view_direction = getOrientation(vantage_point);
        // calculate the lookat matrix
        return lookAt(position, position + view_direction, up);
    }
};
I have tried to figure out how to solve this problem, but I still have no idea. Can someone help me?
The main function where I execute the main_cam.look_at_point(vantage_point) function is shown below:
...
GLfloat points[] = {
     0.0f, 0.5f, 0.0f,
     0.5f, 0.0f, 0.0f,
    -0.5f, 0.0f, 0.0f };
float speed = 1.0f; // move at 1 unit per second
float last_position = 0.0f;
// init camera
main_cam.position = vec3(0.0f, 0.0f, 2.0f); // don't start at zero, or will be too close
main_cam.speed = 1.0f; // 1 unit per second
main_cam.cam_yaw_speed = 10.0f; // 10 degrees per second
vec3 vantage_point = vec3(0.0f, 0.0f, 0.0f);
mat4 T = translate(mat4(1.0), main_cam.position);
//mat4 R = rotate(mat4(), -main_cam.yaw, vec3(0.0, 1.0, 0.0));
mat4 R = main_cam.look_at_point(vantage_point);
mat4 view_matrix = R * T;
// input variables
float near = 0.1f; // clipping plane
float far = 100.0f; // clipping plane
float fov = 67.0f * ONE_DEG_IN_RAD; // convert 67 degrees to radians
float aspect = (float)g_gl_width / (float)g_gl_height; // aspect ratio
mat4 proj_matrix = perspective(fov, aspect, near, far);
use_shader_program(shader_program);
set_uniform_matrix4fv(shader_program, "view", 1, GL_FALSE, &view_matrix[0][0]);
set_uniform_matrix4fv(shader_program, "proj", 1, GL_FALSE, &proj_matrix[0][0]);
...
Testing with the rotate function of glm instead, the triangle is shown correctly:
[Image: triangle shown with the rotate function of glm]

I suspect that the problem is here:
mat4 view_matrix = R * T; // <---
The matrix returned by lookAt already does the translation.
Try manually applying the transformation to the (0,0,0) point that is inside your triangle. T will translate it to (0,0,2), but now it coincides with the camera, so R will send it back to (0,0,0). Now you get a division by zero in the projective divide.
So remove the multiplication by T:
mat4 view_matrix = R;
Now (0,0,0) will be mapped to (0,0,-2), which is in the direction camera is looking. (In camera space the center-of-projection is at (0,0,0) and the camera is looking towards the negative Z direction).
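To see this concretely, here is a minimal sketch (assuming the GLM headers; not from the original post) that applies the matrix returned by lookAt to the origin:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>
int main() {
    glm::mat4 V = glm::lookAt(glm::vec3(0.0f, 0.0f, 2.0f),  // eye
                              glm::vec3(0.0f),              // center
                              glm::vec3(0.0f, 1.0f, 0.0f)); // up
    // The translation is already baked into V:
    glm::vec4 p = V * glm::vec4(0.0f, 0.0f, 0.0f, 1.0f);
    std::printf("%.1f %.1f %.1f\n", p.x, p.y, p.z); // prints 0.0 0.0 -2.0
}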
EDIT: I want to point out that calculating view_direction from vantage_point and then feeding position + view_direction back into lookAt is a rather contrived way of achieving your goal. What you do in the getOrientation function is exactly what lookAt already does inside. Instead, you can get the view_direction from the result of lookAt:
mat4 look_at_point(vec3 vantage_point) {
    // calculate the lookat matrix
    mat4 M = lookAt(position, vantage_point, up);
    // GLM is column-major (M[col][row]); the third row of the rotation part stores -f
    view_direction = -vec3(M[0][2], M[1][2], M[2][2]);
    return M;
}
However, considering that ultimately you're trying to implement yaw/pitch/roll camera controls, you are better off not using lookAt at all.
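For example, a yaw/pitch camera can build its view matrix directly from the angles. This is only a sketch (not from the original post), assuming radians and the common convention view = Rx(pitch) * Ry(yaw) * T(-position); sign conventions vary by engine:
// Hypothetical helper, assuming the glm namespace as in the code above.
mat4 view_from_angles(const vec3& position, float yaw, float pitch) {
    mat4 R(1.0f);
    R = rotate(R, pitch, vec3(1.0f, 0.0f, 0.0f)); // x-rotation (pitch), radians
    R = rotate(R, yaw,   vec3(0.0f, 1.0f, 0.0f)); // y-rotation (yaw), radians
    // The view matrix is the inverse of the camera's world transform:
    return R * translate(mat4(1.0f), -position);
}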

Related

For mouse click ray casting a line, why aren't my starting rays updating to my camera position after I move my camera?

When the camera is moved around, why are my starting rays still stuck at the origin (0, 0, 0) even though the camera position has been updated?
It works fine if I start the program with my camera at the default position 0, 0, 0. But once I move my camera, for instance pan to the right, and click some more, the lines still come from 0 0 0 when they should start from wherever the camera is. Am I doing something terribly wrong? I've checked to make sure they're being updated in the main loop. I've used the code snippet below, referenced from:
picking in 3D with ray-tracing using NinevehGL or OpenGL i-phone
// 1. Get mouse coordinates then normalize
float x = (2.0f * lastX) / width - 1.0f;
float y = 1.0f - (2.0f * lastY) / height;
// 2. Move from clip space to world space
glm::mat4 inverseWorldMatrix = glm::inverse(proj * view);
glm::vec4 near_vec = glm::vec4(x, y, -1.0f, 1.0f);
glm::vec4 far_vec = glm::vec4(x, y, 1.0f, 1.0f);
glm::vec4 startRay = inverseWorldMatrix * near_vec;
glm::vec4 endRay = inverseWorldMatrix * far_vec;
// perspective divide
startRay /= startRay.w;
endRay /= endRay.w;
glm::vec3 direction = glm::vec3(endRay - startRay);
// start the ray points from the camera position
glm::vec3 startPos = glm::vec3(camera.GetPosition());
glm::vec3 endPos = glm::vec3(startPos + direction * someLength);
In the first screenshot I click some rays; in the second I move my camera to the right and click some more, but the initial starting rays are still at 0, 0, 0. What I'm looking for is for the rays to come out from wherever the camera position is, as in the third image, i.e. the red rays. (Sorry for the confusion: the red lines are supposed to shoot out into the distance, not up.)
// and these are my matrices
// projection
glm::mat4 proj = glm::perspective(glm::radians(camera.GetFov()), (float)width / height, 0.1f, 100.0f);
// view
glm::mat4 view = camera.GetViewMatrix(); // This returns glm::lookAt(this->Position, this->Position + this->Front, this->Up);
// model
glm::mat4 model = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, 0.0f));
It's hard to tell where in the code the problem lies. But I use this function for ray casting, adapted from code from scratch-a-pixel and learnopengl:
vec3 rayCast(double xpos, double ypos, mat4 projection, mat4 view) {
    // converts a position from the 2d xpos, ypos to a normalized 3d direction
    float x = (2.0f * xpos) / WIDTH - 1.0f;
    float y = 1.0f - (2.0f * ypos) / HEIGHT;
    float z = 1.0f;
    vec3 ray_nds = vec3(x, y, z);
    vec4 ray_clip = vec4(ray_nds.x, ray_nds.y, -1.0f, 1.0f);
    // eye space to clip we would multiply by projection so
    // clip space to eye space is the inverse projection
    vec4 ray_eye = inverse(projection) * ray_clip;
    // convert point to forwards
    ray_eye = vec4(ray_eye.x, ray_eye.y, -1.0f, 0.0f);
    // world space to eye space is usually multiply by view so
    // eye space to world space is inverse view
    vec4 inv_ray_wor = (inverse(view) * ray_eye);
    vec3 ray_wor = vec3(inv_ray_wor.x, inv_ray_wor.y, inv_ray_wor.z);
    ray_wor = normalize(ray_wor);
    return ray_wor;
}
where you can draw your line with startPos = camera.Position and endPos = camera.Position + rayCast(...) * scalar_amount.
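A minimal usage sketch, assuming the proj and view matrices from above and a mouse position (xpos, ypos); the scalar length is arbitrary:
vec3 rayDir   = rayCast(xpos, ypos, proj, view);
vec3 startPos = camera.GetPosition();
vec3 endPos   = startPos + rayDir * 100.0f; // any scalar_amount works
// then draw the line segment from startPos to endPos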

Directional Light Shadow Mapping Issues

So I've been trying to re-implement shadow mapping in my engine using directional lights, but I have to throw shade on my progress so far (see what I did there?).
I had it working in a previous commit a while back, but I refactored my engine and I'm trying to redo some of the shadow mapping. I wouldn't say I'm the best in terms of drawing shadows, so I thought I'd try and get some help.
Basically my issue seems to stem from the calculation of the light space matrix (it seems a lot of people have the same issue). Initially I had a hardcoded projection matrix and a simple view matrix for the light, like this:
void ZLight::UpdateLightspaceMatrix()
{
    // …
    if (type == ZLightType::Directional) {
        auto lightDir = glm::normalize(glm::eulerAngles(Orientation()));
        glm::mat4 lightV = glm::lookAt(lightDir, glm::vec3(0.f), WORLD_UP);
        glm::mat4 lightP = glm::ortho(-50.f, 50.f, -50.f, 50.f, -100.f, 100.f);
        lightspaceMatrix_ = lightP * lightV;
    }
    // …
}
This then gets passed unmodified as a shader uniform, which I multiply the vertex world-space positions by. A few months ago this was working, but with the recent refactor I did on the engine it no longer shows anything. The output to the shadow map looks like this:
And my scene isn't showing any shadows, at least not where it matters.
Aside from this, after hours of scouring posts and articles about how to implement a dynamic frustum for the light that will encompass the scene's contents at any given time, I also implemented a simple solution based on transforming the camera's frustum into light space: I take an NDC cube, transform it with the inverse camera VP matrix, and compute a bounding box from the result, which gets passed to glm::ortho to make the light's projection matrix.
void ZLight::UpdateLightspaceMatrix()
{
    static std::vector<glm::vec4> ndcCube = {
        glm::vec4{ -1.0f, -1.0f, -1.0f, 1.0f },
        glm::vec4{  1.0f, -1.0f, -1.0f, 1.0f },
        glm::vec4{ -1.0f,  1.0f, -1.0f, 1.0f },
        glm::vec4{  1.0f,  1.0f, -1.0f, 1.0f },
        glm::vec4{ -1.0f, -1.0f,  1.0f, 1.0f },
        glm::vec4{  1.0f, -1.0f,  1.0f, 1.0f },
        glm::vec4{ -1.0f,  1.0f,  1.0f, 1.0f },
        glm::vec4{  1.0f,  1.0f,  1.0f, 1.0f }
    };
    if (type == ZLightType::Directional) {
        auto activeCamera = Scene()->ActiveCamera();
        auto lightDir = normalize(glm::eulerAngles(Orientation()));
        glm::mat4 lightV = glm::lookAt(lightDir, glm::vec3(0.f), WORLD_UP);
        lightspaceRegion_ = ZAABBox();
        auto invVPMatrix = glm::inverse(activeCamera->ProjectionMatrix() * activeCamera->ViewMatrix());
        for (const auto& corner : ndcCube) {
            auto transformedCorner = lightV * invVPMatrix * corner;
            transformedCorner /= transformedCorner.w;
            lightspaceRegion_.minimum.x = glm::min(lightspaceRegion_.minimum.x, transformedCorner.x);
            lightspaceRegion_.minimum.y = glm::min(lightspaceRegion_.minimum.y, transformedCorner.y);
            lightspaceRegion_.minimum.z = glm::min(lightspaceRegion_.minimum.z, transformedCorner.z);
            lightspaceRegion_.maximum.x = glm::max(lightspaceRegion_.maximum.x, transformedCorner.x);
            lightspaceRegion_.maximum.y = glm::max(lightspaceRegion_.maximum.y, transformedCorner.y);
            lightspaceRegion_.maximum.z = glm::max(lightspaceRegion_.maximum.z, transformedCorner.z);
        }
        glm::mat4 lightP = glm::ortho(lightspaceRegion_.minimum.x, lightspaceRegion_.maximum.x,
                                      lightspaceRegion_.minimum.y, lightspaceRegion_.maximum.y,
                                      -lightspaceRegion_.maximum.z, -lightspaceRegion_.minimum.z);
        lightspaceMatrix_ = lightP * lightV;
    }
}
What results is the same output in my scene (no shadows anywhere) and the following shadow map
I've checked the light space matrix calculations over and over, and tried tweaking values dozens of times, including all manner of lightV matrices using the glm::lookAt function, but I never get the desired output.
For more reference, here's my shadow vertex shader
#version 450 core
#include "Shaders/common.glsl" //! #include "../common.glsl"
layout (location = 0) in vec3 position;
layout (location = 5) in ivec4 boneIDs;
layout (location = 6) in vec4 boneWeights;
layout (location = 7) in mat4 instanceM;
uniform mat4 P_lightSpace;
uniform mat4 M;
uniform mat4 Bones[MAX_BONES];
uniform bool rigged = false;
uniform bool instanced = false;
void main()
{
    vec4 pos = vec4(position, 1.0);
    if (rigged) {
        mat4 boneTransform = Bones[boneIDs[0]] * boneWeights[0];
        boneTransform += Bones[boneIDs[1]] * boneWeights[1];
        boneTransform += Bones[boneIDs[2]] * boneWeights[2];
        boneTransform += Bones[boneIDs[3]] * boneWeights[3];
        pos = boneTransform * pos;
    }
    gl_Position = P_lightSpace * (instanced ? instanceM : M) * pos;
}
my soft shadow implementation:
float PCFShadow(VertexOutput vout, sampler2D shadowMap) {
    vec3 projCoords = vout.FragPosLightSpace.xyz / vout.FragPosLightSpace.w;
    if (projCoords.z > 1.0)
        return 0.0;
    projCoords = projCoords * 0.5 + 0.5;
    // PCF
    float shadow = 0.0;
    float bias = max(0.05 * (1.0 - dot(vout.FragNormal, vout.FragPosLightSpace.xyz - vout.FragPos.xzy)), 0.005);
    for (int i = 0; i < 4; ++i) {
        float z = texture(shadowMap, projCoords.xy + poissonDisk[i]).r;
        shadow += z < (projCoords.z - bias) ? 1.0 : 0.0;
    }
    return shadow / 4;
}
...
float shadow = PCFShadow(vout, shadowSampler0);
vec3 color = (ambient + (1.0 - shadow) * (diffuse + specular)) + materials[materialIndex].emission;
FragColor = vec4(color, albd.a);
and my camera view and projection matrix getters
glm::mat4 ZCamera::ProjectionMatrix()
{
    glm::mat4 projectionMatrix(1.f);
    auto scene = Scene();
    if (!scene) return projectionMatrix;
    if (cameraType_ == ZCameraType::Orthographic)
    {
        float zoomInverse_ = 1.f / (2.f * zoom_);
        glm::vec2 resolution = scene->Domain()->Resolution();
        float left = -((float)resolution.x * zoomInverse_);
        float right = -left;
        float bottom = -((float)resolution.y * zoomInverse_);
        float top = -bottom;
        projectionMatrix = glm::ortho(left, right, bottom, top, -farClippingPlane_, farClippingPlane_);
    }
    else
    {
        projectionMatrix = glm::perspective(glm::radians(zoom_),
                                            (float)scene->Domain()->Aspect(),
                                            nearClippingPlane_, farClippingPlane_);
    }
    return projectionMatrix;
}
glm::mat4 ZCamera::ViewMatrix()
{
    return glm::lookAt(Position(), Position() + Front(), Up());
}
I've been trying all kinds of small changes but I still don't get correct shadows, and I don't know what I'm doing wrong here. The closest I've gotten is by scaling the lightspaceRegion_ bounds by a factor of 10 in the light space matrix calculations (only in X and Y), but the shadows are still nowhere near correct.
The camera near and far clipping planes are set to reasonable values (0.01 and 100.0, respectively), the camera zoom is 45.0 degrees, and scene->Domain()->Aspect() just returns the width/height aspect ratio of the framebuffer's resolution. My shadow map resolution is set to 2048x2048.
Any help here would be much appreciated. Let me know if I left out any important code or info.

Pixel-perfect projection matrix?

I'm trying to understand how far I should place the camera position in the lookAt function (or the object in the model matrix) to have pixel-perfect coordinates to pass to the vertex shader.
This is actually simple with orthographic projection matrices, but I fail to visualize how the math would work for perspective projection.
Here's the perspective matrix I'm using:
glm::mat4 projection = glm::perspective(45.0f, (float)SCR_WIDTH / (float)SCR_HEIGHT, 0.1f, 10000.0f);
vertex multiplication in the shader is as simple as:
gl_Position = projection * view * model * vec4(position.xy, 0.0f, 1.0);
I'm basically trying to show a quad on screen that needs to be rotated and show perspective effects (hence why I can't use orthographic projection), but I'd like to specify in pixel coordinates where and how big it should appear on screen.
Well, it can only have pixel coordinates in one "z-plane" if you want to use a trapezoid view frustum.
Basic Math
If you use a standard camera, the basic math for a camera at (0,0,0), with alpha being the vertical fov (45° in your case), is:
target_y = tan(alpha/2) * z_distance * ((pixel_y/height)*2 - 1)
target_x = tan(alpha/2) * z_distance * ((pixel_x/width)*2 - 1) * aspect_ratio
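As a sketch in code (a hypothetical helper, assuming alpha is in radians and a top-left pixel origin; flip signs to match your conventions):
#include <cmath>
void pixelToWorld(float pixel_x, float pixel_y, float width, float height,
                  float alpha, float z_distance,
                  float& target_x, float& target_y)
{
    float aspect_ratio = width / height;
    float half_tan = std::tan(alpha / 2.0f);
    // the two formulas above, verbatim
    target_y = half_tan * z_distance * ((pixel_y / height) * 2.0f - 1.0f);
    target_x = half_tan * z_distance * ((pixel_x / width) * 2.0f - 1.0f) * aspect_ratio;
}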
Reversing projection
As for the general case: you can "un-project" to find where a point in 3D, before all transforms, should be in order to end up on a specific point on screen.
Basically you need to undo the math.
gl_Position = projection * view * model * vec4(position.xy, 0.0f, 1.0);
So if you have your final position and want to revert it you do:
unprojection = model^-1 * view^-1 * projection^-1 * gl_Position // not actual glsl notation, '^-1' being the inverse
This is basically what functions like gluUnProject or glm::gtc::matrix_transform::unProject do.
But you should note that the final clip-space after you apply the projection matrix is typically [-1,-1,0] to [1,1,1], so if you want to enter pixel coordinates you can apply an additional matrix to transform into that space.
Something like:
screenToClip = [ 2/width, 0,        0, -1 ]
               [ 0,       2/height, 0, -1 ]
               [ 0,       0,        1,  0 ]
               [ 0,       0,        0,  1 ]
would transform [0,0,0,1] to [-1,-1,0,1] and [width,height,0,1] to [1,1,0,1]
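In GLM that matrix could be built like this (a sketch; note GLM is column-major, so indexing is m[col][row]):
// Hypothetical constructor for the screenToClip matrix above.
glm::mat4 makeScreenToClip(float width, float height)
{
    glm::mat4 m(1.0f);
    m[0][0] = 2.0f / width;  // scale x from [0,width] to [0,2]
    m[1][1] = 2.0f / height; // scale y from [0,height] to [0,2]
    m[3][0] = -1.0f;         // then shift x into [-1,1]
    m[3][1] = -1.0f;         // then shift y into [-1,1]
    return m;
}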
Also, you're probably best off trying some z-value like 0.5 to make sure that you're well within the view frustum and not clipping near the front or back.
You can achieve this effect with a 60 degree field of view. Basically you want to place the camera at a distance from the viewing plane such that the camera forms an equilateral triangle with center points at the top and bottom of the screen.
Here's some code to do that:
float fovy = 60.0f; // field of view - degrees
float aspect = nScreenWidth / nScreenHeight;
float zNearClip = 0.1f;
float zFarClip = nScreenHeight * 2.0f;
float degToRad = MF_PI / 180.0f;
float fH = tanf(fovy * degToRad / 2.0f) * zNearClip;
float fW = fH * aspect;
glFrustum(-fW, fW, -fH, fH, zNearClip, zFarClip);
float nCameraDistance = sqrtf(nScreenHeight * nScreenHeight - 0.25f * nScreenHeight * nScreenHeight);
glTranslatef(0, 0, -nCameraDistance);
You can also use a 90 degree fov. In that case the camera distance is 1/2 the height of the window; however, this has a lot of foreshortening.
In the 90 degree case, you could push the camera out by the full height, but then apply a 2x scaling to the x and y components (i.e. glScale(2,2,1)).
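In legacy GL calls, that 90-degree variant might look like this (a sketch, reusing the nScreenHeight variable from the snippet above):
glTranslatef(0, 0, -nScreenHeight); // camera distance = full window height
glScalef(2.0f, 2.0f, 1.0f);         // compensate with a 2x scale in x and y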
Here's an image of what I mean:
I'll extend PeterT's answer and leave here the practical code I used to find the world coordinates of one of the frustum's planes through unprojection.
This assumes a basic view matrix (camera pos at 0,0,0):
glm::mat4 projection = glm::perspective(45.0f, (float)SCR_WIDTH / (float)SCR_HEIGHT, 0.1f, 500.0f);
glm::mat4 projectionInv = glm::inverse(projection);
std::vector<glm::vec4> NDCCube;
NDCCube.push_back(glm::vec4(-1.0f, -1.0f, -1.0f, 1.0f));
NDCCube.push_back(glm::vec4( 1.0f, -1.0f, -1.0f, 1.0f));
NDCCube.push_back(glm::vec4( 1.0f, -1.0f,  1.0f, 1.0f));
NDCCube.push_back(glm::vec4(-1.0f, -1.0f,  1.0f, 1.0f));
NDCCube.push_back(glm::vec4(-1.0f,  1.0f, -1.0f, 1.0f));
NDCCube.push_back(glm::vec4( 1.0f,  1.0f, -1.0f, 1.0f));
NDCCube.push_back(glm::vec4( 1.0f,  1.0f,  1.0f, 1.0f));
NDCCube.push_back(glm::vec4(-1.0f,  1.0f,  1.0f, 1.0f));
std::vector<glm::vec3> frustumVertices;
for (int i = 0; i < 8; i++)
{
    // multiply by the inverse projection matrix to obtain the frustum vertex
    glm::vec4 tempvec = projectionInv * NDCCube.at(i);
    tempvec /= tempvec.w; // perspective divide
    frustumVertices.push_back(glm::vec3(tempvec.x, tempvec.y, tempvec.z));
}
Keep in mind these coordinates would not end up on screen if your perspective far distance is lower than the one I set in the projection matrix.
If you happen to know the world-coordinate width of "some item" that you want to display pixel-exact, this ends up being a bit of trivial trigonometry (works for both y FOV and x FOV):
S = width of the item in world coordinates
T = "pixel exact" size of the item (say, the width of its texture)
h = Z distance to the object
r = total screen resolution (width or height, depending on the FOV you want)
For the item to cover T of the r pixels, the frustum width a at distance h must satisfy:
a = 2 * h * tan(Phi / 2) = (r / T) * S
Theta = atan(2 * h / a)
Phi = 180 - 2 * Theta
where a is the base of the isosceles triangle the frustum forms at distance h, b = a / cos(Phi / 2) are its sides, Theta is each of the triangle's two equal angles, and Phi is the resulting FOV.
So the end code might look something like
float frustumWidth = (float(ScreenWidth) / TextureWidth) * InWorldItemWidth;
float theta = glm::degrees(atan((2 * zDistance) / frustumWidth));
float PixelPerfectFOV = 180 - 2 * theta;

Cascaded Shadow Map shimmering

I am currently trying to learn how cascaded shadow maps work, so I've been trying to get one shadow map to fit the view frustum without shimmering. I'm using a near/far plane of 1 to 10000 for my camera projection, and this is how I calculate the orthographic matrix for the light:
GLfloat far = -INFINITY;
GLfloat near = INFINITY;
// Multiply all the world space frustum corners with the view matrix of the light
Frustum cameraFrustum = CameraMan.getActiveCamera()->mFrustum;
lightViewMatrix = glm::lookAt((cameraFrustum.frustumCenter - glm::vec3(-0.447213620f, -0.89442790f, 0.0f)),
                              cameraFrustum.frustumCenter, glm::vec3(0.0f, 0.0f, 1.0f));
glm::vec3 arr[8];
for (unsigned int i = 0; i < 8; ++i)
    arr[i] = glm::vec3(lightViewMatrix * glm::vec4(cameraFrustum.frustumCorners[i], 1.0f));
glm::vec3 minO = glm::vec3(INFINITY, INFINITY, INFINITY);
glm::vec3 maxO = glm::vec3(-INFINITY, -INFINITY, -INFINITY);
for (auto& vec : arr)
{
    minO = glm::min(minO, vec);
    maxO = glm::max(maxO, vec);
}
far = maxO.z;
near = minO.z;
// Get the longest diagonal of the frustum; this along with texel-sized increments
// is used to keep the shadows from shimmering
// far top right - near bottom left
glm::vec3 longestDiagonal = cameraFrustum.frustumCorners[0] - cameraFrustum.frustumCorners[6];
GLfloat lengthOfDiagonal = glm::length(longestDiagonal);
longestDiagonal = glm::vec3(lengthOfDiagonal);
glm::vec3 borderOffset = (longestDiagonal - (maxO - minO)) * glm::vec3(0.5f, 0.5f, 0.5f);
borderOffset *= glm::vec3(1.0f, 1.0f, 0.0f);
maxO += borderOffset;
minO -= borderOffset;
GLfloat worldUnitsPerTexel = lengthOfDiagonal / 1024.0f;
glm::vec3 vWorldUnitsPerTexel = glm::vec3(worldUnitsPerTexel, worldUnitsPerTexel, 0.0f);
minO /= vWorldUnitsPerTexel;
minO = glm::floor(minO);
minO *= vWorldUnitsPerTexel;
maxO /= vWorldUnitsPerTexel;
maxO = glm::floor(maxO);
maxO *= vWorldUnitsPerTexel;
lightOrthoMatrix = glm::ortho(minO.x, maxO.x, minO.y, maxO.y, near, far);
The use of the longest diagonal to offset the frustum seems to be working, as the shadow map doesn't shrink/scale when looking around. However, using the texel-sized increments described by https://msdn.microsoft.com/en-us/library/windows/desktop/ee416324(v=vs.85).aspx has no effect whatsoever. I am using a pretty large scene for testing, which results in a low resolution on my shadow maps, but I wanted to get a stabilized shadow that fits the view frustum before I move on to splitting the frustum up. It's hard to tell from images, but the shimmering effect isn't reduced by the solution that Microsoft presented:
Ended up using this solution:
//Calculate the viewMatrix from the frustum center and light direction
Frustum cameraFrustum = CameraMan.getActiveCamera()->mFrustum;
glm::vec3 lightDirection = glm::normalize(glm::vec3(-0.447213620f, -0.89442790f, 0.0f));
lightViewMatrix = glm::lookAt((cameraFrustum.frustumCenter - lightDirection), cameraFrustum.frustumCenter, glm::vec3(0.0f, 1.0f, 0.0f));
//Get the longest radius in world space
GLfloat radius = glm::length(cameraFrustum.frustumCenter - cameraFrustum.frustumCorners[6]);
for (unsigned int i = 0; i < 8; ++i)
{
    GLfloat distance = glm::length(cameraFrustum.frustumCorners[i] - cameraFrustum.frustumCenter);
    radius = glm::max(radius, distance);
}
radius = std::ceil(radius);
//Create the AABB from the radius
glm::vec3 maxOrtho = cameraFrustum.frustumCenter + glm::vec3(radius);
glm::vec3 minOrtho = cameraFrustum.frustumCenter - glm::vec3(radius);
//Get the AABB in light view space
maxOrtho = glm::vec3(lightViewMatrix*glm::vec4(maxOrtho, 1.0f));
minOrtho = glm::vec3(lightViewMatrix*glm::vec4(minOrtho, 1.0f));
//Just checking when debugging to make sure the AABB is the same size
GLfloat lengthofTemp = glm::length(maxOrtho - minOrtho);
//Store the far and near planes
far = maxOrtho.z;
near = minOrtho.z;
lightOrthoMatrix = glm::ortho(minOrtho.x, maxOrtho.x, minOrtho.y, maxOrtho.y, near, far);
//For more accurate near and far planes, clip the scenes AABB with the orthographic frustum
//calculateNearAndFar();
// Create the rounding matrix, by projecting the world-space origin and determining
// the fractional offset in texel space
glm::mat4 shadowMatrix = lightOrthoMatrix * lightViewMatrix;
glm::vec4 shadowOrigin = glm::vec4(0.0f, 0.0f, 0.0f, 1.0f);
shadowOrigin = shadowMatrix * shadowOrigin;
GLfloat storedW = shadowOrigin.w;
shadowOrigin = shadowOrigin * 4096.0f / 2.0f;
glm::vec4 roundedOrigin = glm::round(shadowOrigin);
glm::vec4 roundOffset = roundedOrigin - shadowOrigin;
roundOffset = roundOffset * 2.0f / 4096.0f;
roundOffset.z = 0.0f;
roundOffset.w = 0.0f;
glm::mat4 shadowProj = lightOrthoMatrix;
shadowProj[3] += roundOffset;
lightOrthoMatrix = shadowProj;
I found this solution over at http://www.gamedev.net/topic/650743-improving-cascade-shadow/. I basically switched to using a bounding sphere instead and then constructed the rounding matrix as in that example. Works like a charm.

X Axis seems inverted in OpenTK

Edit: okay, I've now written the code completely intuitively and this is the result:
http://i.imgur.com/x5arJE9.jpg
The Cube is at 0,0,0
As you can see, the camera position is negative on the z axis, suggesting that I'm viewing along the positive z axis, which does not match up (fw is negative).
Also, the cube colors suggest that I'm on the positive z axis, looking in the negative direction, and the positive x axis is to the right (in model space).
The angles are calculated like this:
public virtual Vector3 Right
{
    get { return Vector3.Transform(Vector3.UnitX, Rotation); }
}
public virtual Vector3 Forward
{
    get { return Vector3.Transform(-Vector3.UnitZ, Rotation); }
}
public virtual Vector3 Up
{
    get { return Vector3.Transform(Vector3.UnitY, Rotation); }
}
Rotation is a Quaternion.
This is how the view and model matrices are created:
public virtual Matrix4 GetMatrix()
{
    Matrix4 translation = Matrix4.CreateTranslation(Position);
    Matrix4 rotation = Matrix4.CreateFromQuaternion(Rotation);
    return translation * rotation;
}
Projection:
private void SetupProjection()
{
    if (GameObject != null)
    {
        AspectRatio = GameObject.App.Window.Width / (float)GameObject.App.Window.Height;
        projectionMatrix = Matrix4.CreatePerspectiveFieldOfView((float)((Math.PI * Fov) / 180), AspectRatio, ZNear, ZFar);
    }
}
Matrix multiplication:
public Matrix4 GetModelViewProjectionMatrix(Transform model)
{
    return model.GetMatrix() * Transform.GetMatrix() * projectionMatrix;
}
Shader:
[Shader vertex]
#version 150 core
in vec3 pos;
in vec4 color;
uniform float _time;
uniform mat4 _modelViewProjection;
out vec4 vColor;
void main() {
    gl_Position = _modelViewProjection * vec4(pos, 1);
    vColor = color;
}
OpenTK matrices are transposed, thus the multiplication order.
Any idea why the axes / locations are all messed up?
End of edit. Original Post:
Have a look at this image: http://i.imgur.com/Cjjr8jz.jpg
As you can see, while the forward vector (of the camera) is positive on the z axis and the red cube is on the negative x axis,
float[] points = {
// position (3) Color (3)
-s, s, z, 1.0f, 0.0f, 0.0f, // Red point
s, s, z, 0.0f, 1.0f, 0.0f, // Green point
s, -s, z, 0.0f, 0.0f, 1.0f, // Blue point
-s, -s, z, 1.0f, 1.0f, 0.0f, // Yellow point
};
(cubes are created in the geometry shader around those points)
the camera x position seems to be inverted. In other words, if I increase the camera position along its local x axis, it will move to the left, and vice versa.
I pass the transformation matrix like this:
if (DefaultAttributeLocations.TryGetValue("modelViewProjectionMatrix", out loc))
{
    if (loc >= 0)
    {
        Matrix4 mvMatrix = Camera.GetMatrix() * projectionMatrix;
        GL.UniformMatrix4(loc, false, ref mvMatrix);
    }
}
The GetMatrix() method looks like this:
public virtual Matrix4 GetMatrix()
{
    Matrix4 translation = Matrix4.CreateTranslation(Position);
    Matrix4 rotation = Matrix4.CreateFromQuaternion(Rotation);
    return translation * rotation;
}
And the projection matrix:
private void SetupProjection()
{
    AspectRatio = Window.Width / (float)Window.Height;
    projectionMatrix = Matrix4.CreatePerspectiveFieldOfView((float)((Math.PI * Fov) / 180), AspectRatio, ZNear, ZFar);
}
I don't see what I'm doing wrong :/
It's a little hard to tell from the code, but I believe this is because in OpenGL, the default forward vector of the camera is negative along the Z axis - yours is positive, which means you're looking at the model from the back. That would be why the X coordinate seems inverted.
Although this question is a few years old, I'd still like to give my input.
The reason you're experiencing this bug is that OpenTK's matrices are row major. All this really means is that you have to do all matrix math in reverse. For example, the transformation matrix will be multiplied like so:
public static Matrix4 CreateTransformationMatrix(Vector3 position, Quaternion rotation, Vector3 scale)
{
    return Matrix4.CreateScale(scale) *
           Matrix4.CreateFromQuaternion(rotation) *
           Matrix4.CreateTranslation(position);
}
This goes for any matrix, so if you're using Vector3s instead of Quaternions for your rotation, it would look like this:
public static Matrix4 CreateTransformationMatrix(Vector3 position, Vector3 rotation, Vector3 scale)
{
    return Matrix4.CreateScale(scale) *
           Matrix4.CreateRotationZ(rotation.Z) *
           Matrix4.CreateRotationY(rotation.Y) *
           Matrix4.CreateRotationX(rotation.X) *
           Matrix4.CreateTranslation(position);
}
Note that your vertex shader will still be multiplied like this:
void main()
{
    gl_Position = projection * view * transform * vec4(position, 1.0f);
}
I hope this helps!