Directional Light Shadow Mapping Issues - C++

So I've been trying to re-implement shadow mapping in my engine using directional lights, but I have to throw shade on my progress so far (see what I did there?).
I had it working in a previous commit a while back, but I refactored my engine and am now trying to redo some of the shadow mapping. I wouldn't say I'm the best at drawing shadows, so I thought I'd try to get some help.
Basically my issue seems to stem from the calculation of the light space matrix (a lot of people seem to have the same issue). Initially I had a hardcoded projection matrix and a simple view matrix for the light, like this:
void ZLight::UpdateLightspaceMatrix()
{
    // …
    if (type == ZLightType::Directional) {
        auto lightDir = glm::normalize(glm::eulerAngles(Orientation()));
        glm::mat4 lightV = glm::lookAt(lightDir, glm::vec3(0.f), WORLD_UP);
        glm::mat4 lightP = glm::ortho(-50.f, 50.f, -50.f, 50.f, -100.f, 100.f);
        lightspaceMatrix_ = lightP * lightV;
    }
    // …
}
This then gets passed unmodified as a shader uniform, which I multiply the vertex world-space positions by. A few months ago this was working, but since the recent engine refactor it no longer shows anything. The output to the shadow map looks like this
And my scene isn't showing any shadows, at least not where it matters
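For reference, the matrix is uploaded with a plain glUniformMatrix4fv call; the program handle below is a placeholder for however the shadow shader is bound in my engine, but the uniform name matches the shader further down:

glUseProgram(shadowShaderProgram); // placeholder handle
GLint loc = glGetUniformLocation(shadowShaderProgram, "P_lightSpace");
glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(lightspaceMatrix_));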
Aside from this, after hours of scouring posts and articles about how to implement a dynamic frustum for the light that encompasses the scene's contents at any given time, I also implemented a simple solution: I transform the camera's frustum into light space by taking an NDC cube, transforming it with the inverse camera VP matrix, and computing a bounding box from the result, which then gets passed to glm::ortho to build the light's projection matrix:
void ZLight::UpdateLightspaceMatrix()
{
    static std::vector<glm::vec4> ndcCube = {
        glm::vec4{ -1.0f, -1.0f, -1.0f, 1.0f },
        glm::vec4{  1.0f, -1.0f, -1.0f, 1.0f },
        glm::vec4{ -1.0f,  1.0f, -1.0f, 1.0f },
        glm::vec4{  1.0f,  1.0f, -1.0f, 1.0f },
        glm::vec4{ -1.0f, -1.0f,  1.0f, 1.0f },
        glm::vec4{  1.0f, -1.0f,  1.0f, 1.0f },
        glm::vec4{ -1.0f,  1.0f,  1.0f, 1.0f },
        glm::vec4{  1.0f,  1.0f,  1.0f, 1.0f }
    };
    if (type == ZLightType::Directional) {
        auto activeCamera = Scene()->ActiveCamera();
        auto lightDir = glm::normalize(glm::eulerAngles(Orientation()));
        glm::mat4 lightV = glm::lookAt(lightDir, glm::vec3(0.f), WORLD_UP);
        lightspaceRegion_ = ZAABBox();
        for (const auto& corner : ndcCube) {
            auto invVPMatrix = glm::inverse(activeCamera->ProjectionMatrix() * activeCamera->ViewMatrix());
            auto transformedCorner = lightV * invVPMatrix * corner;
            transformedCorner /= transformedCorner.w;
            lightspaceRegion_.minimum.x = glm::min(lightspaceRegion_.minimum.x, transformedCorner.x);
            lightspaceRegion_.minimum.y = glm::min(lightspaceRegion_.minimum.y, transformedCorner.y);
            lightspaceRegion_.minimum.z = glm::min(lightspaceRegion_.minimum.z, transformedCorner.z);
            lightspaceRegion_.maximum.x = glm::max(lightspaceRegion_.maximum.x, transformedCorner.x);
            lightspaceRegion_.maximum.y = glm::max(lightspaceRegion_.maximum.y, transformedCorner.y);
            lightspaceRegion_.maximum.z = glm::max(lightspaceRegion_.maximum.z, transformedCorner.z);
        }
        glm::mat4 lightP = glm::ortho(lightspaceRegion_.minimum.x, lightspaceRegion_.maximum.x,
                                      lightspaceRegion_.minimum.y, lightspaceRegion_.maximum.y,
                                      -lightspaceRegion_.maximum.z, -lightspaceRegion_.minimum.z);
        lightspaceMatrix_ = lightP * lightV;
    }
}
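(One thing I noticed while posting this: invVPMatrix doesn't depend on the loop variable, so it can be hoisted out of the loop. That doesn't change the result, only clarity:)

glm::mat4 invVPMatrix = glm::inverse(activeCamera->ProjectionMatrix() * activeCamera->ViewMatrix());
for (const auto& corner : ndcCube) {
    glm::vec4 transformedCorner = lightV * invVPMatrix * corner;
    transformedCorner /= transformedCorner.w;
    // ... expand lightspaceRegion_ exactly as above ...
}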
The result is the same output in my scene (no shadows anywhere) and the following shadow map
I've checked the light space matrix calculations over and over, and tried tweaking values dozens of times, including all manner of lightV matrices using the glm::lookAt function, but I never get the desired output.
For more reference, here's my shadow vertex shader
#version 450 core
#include "Shaders/common.glsl" //! #include "../common.glsl"

layout (location = 0) in vec3 position;
layout (location = 5) in ivec4 boneIDs;
layout (location = 6) in vec4 boneWeights;
layout (location = 7) in mat4 instanceM;

uniform mat4 P_lightSpace;
uniform mat4 M;
uniform mat4 Bones[MAX_BONES];
uniform bool rigged = false;
uniform bool instanced = false;

void main()
{
    vec4 pos = vec4(position, 1.0);
    if (rigged) {
        // Blend the four bone transforms by their weights.
        mat4 boneTransform = Bones[boneIDs[0]] * boneWeights[0];
        boneTransform += Bones[boneIDs[1]] * boneWeights[1];
        boneTransform += Bones[boneIDs[2]] * boneWeights[2];
        boneTransform += Bones[boneIDs[3]] * boneWeights[3];
        pos = boneTransform * pos;
    }
    gl_Position = P_lightSpace * (instanced ? instanceM : M) * pos;
}
my soft shadow implementation
float PCFShadow(VertexOutput vout, sampler2D shadowMap) {
    vec3 projCoords = vout.FragPosLightSpace.xyz / vout.FragPosLightSpace.w;
    if (projCoords.z > 1.0)
        return 0.0;
    projCoords = projCoords * 0.5 + 0.5;

    // PCF
    float shadow = 0.0;
    float bias = max(0.05 * (1.0 - dot(vout.FragNormal, vout.FragPosLightSpace.xyz - vout.FragPos.xzy)), 0.005);
    for (int i = 0; i < 4; ++i) {
        float z = texture(shadowMap, projCoords.xy + poissonDisk[i]).r;
        shadow += z < (projCoords.z - bias) ? 1.0 : 0.0;
    }
    return shadow / 4.0;
}
...
...
float shadow = PCFShadow(vout, shadowSampler0);
vec3 color = (ambient + (1.0 - shadow) * (diffuse + specular)) + materials[materialIndex].emission;
FragColor = vec4(color, albd.a);
and my camera view and projection matrix getters
glm::mat4 ZCamera::ProjectionMatrix()
{
    glm::mat4 projectionMatrix(1.f);
    auto scene = Scene();
    if (!scene) return projectionMatrix;

    if (cameraType_ == ZCameraType::Orthographic)
    {
        float zoomInverse_ = 1.f / (2.f * zoom_);
        glm::vec2 resolution = scene->Domain()->Resolution();
        float left = -((float)resolution.x * zoomInverse_);
        float right = -left;
        float bottom = -((float)resolution.y * zoomInverse_);
        float top = -bottom;
        projectionMatrix = glm::ortho(left, right, bottom, top, -farClippingPlane_, farClippingPlane_);
    }
    else
    {
        projectionMatrix = glm::perspective(glm::radians(zoom_),
                                            (float)scene->Domain()->Aspect(),
                                            nearClippingPlane_, farClippingPlane_);
    }
    return projectionMatrix;
}

glm::mat4 ZCamera::ViewMatrix()
{
    return glm::lookAt(Position(), Position() + Front(), Up());
}
I've been trying all kinds of small changes but I still don't get correct shadows, and I don't know what I'm doing wrong here. The closest I've gotten is by scaling the lightspaceRegion_ bounds by a factor of 10 in the light space matrix calculations (only in X and Y), but the shadows are still nowhere near correct.
The camera near and far clipping planes are set to reasonable values (0.01 and 100.0, respectively), the camera zoom is 45.0 degrees, and scene->Domain()->Aspect() just returns the width/height aspect ratio of the framebuffer's resolution. My shadow map resolution is 2048x2048.
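In case it helps, a cheap CPU-side sanity check is to transform a point that should receive shadows by lightspaceMatrix_ and verify that it lands inside the light's NDC cube (worldPoint here is any such test point):

glm::vec4 p = lightspaceMatrix_ * glm::vec4(worldPoint, 1.0f);
p /= p.w; // w stays 1 for a pure orthographic projection, but divide anyway
bool visibleToLight =
    glm::all(glm::greaterThanEqual(glm::vec3(p), glm::vec3(-1.f))) &&
    glm::all(glm::lessThanEqual(glm::vec3(p), glm::vec3(1.f)));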
Any help here would be much appreciated. Let me know if I left out any important code or info.

Related

How to Rotate a Quad

I'm trying to make a quad rotate around its center. I am using glm::rotate() and setting the quad to rotate on the z axis. However, when I do this it gives a weird effect: the quad stretches and warps. It almost looks 3D, but since I am rotating it around the z axis that shouldn't happen, right?
Here's relevant code for context:
float rotation = 0.0f;
double prevTime = glfwGetTime();
while (!glfwWindowShouldClose(window))
{
    GLCall(glClearColor(0.0f, 0.0f, 0.0f, 1.0f));
    GLCall(glClear(GL_COLOR_BUFFER_BIT));
    updateInput(window);
    shader.Use();
    glUniform1f(xMov, x);
    glUniform1f(yMov, y);
    test.Bind();
    double crntTime = glfwGetTime();
    if (crntTime - prevTime >= 1 / 60) // note: 1 / 60 is integer division and evaluates to 0
    {
        rotation += 0.5f;
        prevTime = crntTime;
    }
    glm::mat4 model = glm::mat4(1.0f);
    model = glm::rotate(model, glm::radians(rotation), glm::vec3(0.0f, 0.0f, 1.0f));
    int modelLoc = glGetUniformLocation(shader.id, "model");
    glUniformMatrix4fv(modelLoc, 1, GL_FALSE, glm::value_ptr(model));
    vao.Bind();
    vBuffer1.Bind();
    iBuffer1.Bind();
    GLCall(glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0));
    glfwSwapBuffers(window);
    glfwPollEvents();
}
Shader:
#version 440 core
layout(location = 0) in vec3 aPos;
layout(location = 1) in vec3 aColor;
layout(location = 2) in vec2 aTex;

out vec3 color;
out vec2 texCoord;

uniform float xMove;
uniform float yMove;
uniform mat4 model;

void main()
{
    gl_Position = model * vec4(aPos.x + xMove, aPos.y + yMove, aPos.z, 1.0);
    color = aColor;
    texCoord = aTex;
}
Without seeing the graphical output it is hard to say.
Your first issue is that you are not rotating around the center. To rotate around the center, you must offset the quad so that its center is at (0, 0), then rotate, then offset it back to its original position. But you have this line:
gl_Position = model * vec4(aPos.x + xMove, aPos.y + yMove, aPos.z, 1.0);
Under the assumption that the quad was at the origin to begin with, you are rotating it around the point (-xMove, -yMove).
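A minimal sketch of that translate-rotate-translate pattern, with the xMove/yMove offsets folded into the model matrix instead of being added in the shader (quadCenter is a hypothetical pivot point; for a quad it would be the average of its corners):

glm::vec2 quadCenter(0.5f, 0.5f);                            // hypothetical pivot
glm::mat4 model(1.0f);
model = glm::translate(model, glm::vec3(x, y, 0.0f));        // final placement (replaces xMove/yMove)
model = glm::translate(model, glm::vec3(quadCenter, 0.0f));  // move the pivot back
model = glm::rotate(model, glm::radians(rotation), glm::vec3(0.0f, 0.0f, 1.0f));
model = glm::translate(model, glm::vec3(-quadCenter, 0.0f)); // move the pivot to the origin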

OpenGL vertex coordinates for perspective projection

I need to use a perspective transformation, but I can't understand how to define the model coordinates of a sprite. With an orthographic projection I can define the coordinates of each vertex in screen pixels, but with a perspective projection I can't.
Orthogonal projection:
glm::ortho<GLfloat>(0.0f, screen_width, screen_height, 0.0f, 1.0f, -1.0f));
Perspective:
glm::perspective(glm::radians(45.f), (float)screen_width / (float)screen_height, 0.1f, 100.f);
Vertex shader:
#version 330 core
layout (std140) uniform Matrices
{
    mat4 ProjectionMatrix;
    mat4 ViewMatrix;
    mat4 ModelMatrix;
};

layout (location = 0) in vec2 position;
layout (location = 1) in vec2 inTexCoords;

out vec2 TextureCoords;

void main()
{
    TextureCoords = inTexCoords;
    gl_Position = ProjectionMatrix * ViewMatrix * ModelMatrix * vec4(position, 1.f, 1.0);
}
For example:

vertices[1] = 0.f;
vertices[8] = 0.f;
vertices[12] = 0.f;
vertices[13] = 0.f;
for (GLuint i = 0; i < m_totSprites; ++i) {
    // Vertex pos
    vertices[0] = m_clips[i].w;
    vertices[4] = vertices[0];
    vertices[5] = m_clips[i].h;
    vertices[9] = vertices[5];
    // Texture pos
    vertices[2] = (m_clips[i].x + m_clips[i].w) / tw;
    vertices[3] = (m_clips[i].y + m_clips[i].h) / th;
    vertices[6] = (m_clips[i].x + m_clips[i].w) / tw;
    vertices[7] = m_clips[i].y / th;
    vertices[10] = m_clips[i].x / tw;
    vertices[11] = m_clips[i].y / th;
    vertices[14] = m_clips[i].x / tw;
    vertices[15] = (m_clips[i].y + m_clips[i].h) / th;
}
It works well with orthographic projection. How can I define vertex coordinates for perspective?
What is different about model coordinates between orthographic and perspective projection? Why is it easy to set vertex coordinates in pixel sizes in the first case, while in all the perspective examples they are normalized between -0.5 and 0.5? Is that necessary?
Initially I misunderstood the difference between orthographic and perspective projections. As I understand it now, for a perspective projection the vertices are initially defined in NDC-like units and are then moved, scaled, etc. with the model matrix. Pixel-perfect rendering is only possible at some constant depth or with an orthographic projection; it's not useful for 3D with a perspective projection.
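(To make the "constant depth" point concrete: for a symmetric perspective projection the visible height at camera-space depth z is 2 * z * tan(fovy / 2), so the world-space size of one pixel at that depth can be computed as below; a sketch, assuming the glm::perspective parameters shown above:)

#include <cmath>

// World-space size of one screen pixel at camera-space depth z, for a
// symmetric perspective projection like glm::perspective(fovy, aspect, near, far).
float worldUnitsPerPixel(float z, float fovyRadians, float screenHeightPx)
{
    const float visibleHeight = 2.0f * z * std::tan(fovyRadians * 0.5f);
    return visibleHeight / screenHeightPx;
}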
If you have a projection matrix you need a view matrix too; there's glm::lookAt() for example. I usually use this combo:

glm::lookAt(glm::vec3(-1.2484, 0.483, 1.84384), glm::vec3(-0.3801, -0.4183, -3.15), glm::vec3(0., 0.2, -00.))
glm::perspective(45., 1., 1.2, 300.)
glm::mat4(1.)

Lighting abnormal when calculating in tangent space. Probably something wrong with coordinate transformation matrix

I'm trying to calculate lighting in tangent space, but I just keep getting abnormal results. I was modifying the book's demo code, and I wonder if there may be something wrong with the transformation matrix I created.
I'm having trouble solving a problem in Introduction to 3D Game Programming with DirectX 11. I tried to use the TBN matrix

    | Tx  Ty  Tz |
    | Bx  By  Bz |
    | Nx  Ny  Nz |

as the book provides, but I found the light vector to be wrongly transformed to tangent space, and now I have no clue how to debug this shader.
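(Since T, B, and N form an orthonormal basis, multiplying a world-space vector by this matrix is the same as taking its dot product with each basis vector; a minimal C++/glm sketch of that math, just to pin down what the transform should do:)

#include <glm/glm.hpp>

// For an orthonormal basis T, B, N, world -> tangent space is three dot
// products (equivalently, multiplication by the transpose of TBN).
glm::vec3 worldToTangent(const glm::vec3& v,
                         const glm::vec3& T,
                         const glm::vec3& B,
                         const glm::vec3& N)
{
    return glm::vec3(glm::dot(v, T), glm::dot(v, B), glm::dot(v, N));
}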
Here is my Pixel Shader:
float4 PS1(VertexOut pin,
    uniform int gLightCount,
    uniform bool gUseTexure,
    uniform bool gAlphaClip,
    uniform bool gFogEnabled,
    uniform bool gReflectionEnabled) : SV_Target
{
    // Interpolating normal can unnormalize it, so normalize it.
    pin.NormalW = normalize(pin.NormalW);
    pin.TangentW = normalize(pin.TangentW);

    // The toEye vector is used in lighting.
    float3 toEye = gEyePosW - pin.PosW;

    // Cache the distance to the eye from this surface point.
    float distToEye = length(toEye);

    // Calculate normalMapSample
    float3 normalMapSample =
        normalize(SampledNormal2Normal(gNormalMap.Sample(samLinear, pin.Tex).rgb));

    // normalize toEye
    toEye = normalize(toEye);

    // Default to multiplicative identity.
    float4 texColor = float4(1, 1, 1, 1);
    if (gUseTexure)
    {
        // Sample texture.
        texColor = gDiffuseMap.Sample(samLinear, pin.Tex);
        if (gAlphaClip)
        {
            // Discard pixel if texture alpha < 0.1. Note that we do this
            // test as soon as possible so that we can potentially exit the shader
            // early, thereby skipping the rest of the shader code.
            clip(texColor.a - 0.1f);
        }
    }

    //
    // Lighting.
    //
    float4 litColor = texColor;
    if (gLightCount > 0)
    {
        // Start with a sum of zero.
        float4 ambient = float4(0.0f, 0.0f, 0.0f, 0.0f);
        float4 diffuse = float4(0.0f, 0.0f, 0.0f, 0.0f);
        float4 spec = float4(0.0f, 0.0f, 0.0f, 0.0f);

        // Sum the light contribution from each light source.
        [unroll]
        for (int i = 0; i < gLightCount; ++i)
        {
            float4 A, D, S;
            ComputeDirectionalLightInTangent(gMaterial, gDirLights[i],
                normalMapSample, World2TangentSpace(pin.NormalW, pin.TangentW, gTexTransform), toEye,
                A, D, S);
            ambient += A;
            diffuse += D;
            spec += S;
        }

        litColor = texColor * (ambient + diffuse) + spec;

        if (gReflectionEnabled)
        {
            float3 incident = -toEye;
            float3 reflectionVector = reflect(incident, normalMapSample);
            float4 reflectionColor = gCubeMap.Sample(samLinear, reflectionVector);
            litColor += gMaterial.Reflect * reflectionColor;
        }
    }

    //
    // Fogging
    //
    if (gFogEnabled)
    {
        float fogLerp = saturate((distToEye - gFogStart) / gFogRange);
        // Blend the fog color and the lit color.
        litColor = lerp(litColor, gFogColor, fogLerp);
    }

    // Common to take alpha from diffuse material and texture.
    litColor.a = gMaterial.Diffuse.a * texColor.a;
    return litColor;
}
And here are the functions SampledNormal2Normal, World2TangentSpace, and ComputeDirectionalLightInTangent:
float3 SampledNormal2Normal(float3 sampledNormal)
{
    // Map the sampled [0, 1] range back to [-1, 1].
    float3 normalT = 2.0f * sampledNormal - 1.0f;
    return normalT;
}

float3x3 World2TangentSpace(float3 unitNormalW, float3 tangentW, float4x4 texTransform)
{
    // Build orthonormal basis.
    float3 N = unitNormalW;
    float3 T = normalize(tangentW - dot(tangentW, N) * N);
    float3 B = cross(N, T);
    float3x3 TBN = float3x3(T, B, N);
    /*float3x3 invTBN = float3x3(T.x, T.y, T.z, B.x, B.y, B.z, N.x, N.y, N.z);
    return invTBN;*/

    float3 T_ = T - dot(N, T) * N;
    float3 B_ = B - dot(N, B) * N - (dot(T_, B) * T_) / dot(T_, T_);
    float3x3 invT_B_N = float3x3(T_.x, T_.y, T_.z, B_.x, B_.y, B_.z, N.x, N.y, N.z);
    return invT_B_N;
}
void ComputeDirectionalLightInTangent(Material mat, DirectionalLight L,
    float3 normalT, float3x3 toTS, float3 toEye,
    out float4 ambient,
    out float4 diffuse,
    out float4 spec)
{
    // Initialize outputs.
    ambient = float4(0.0f, 0.0f, 0.0f, 0.0f);
    diffuse = float4(0.0f, 0.0f, 0.0f, 0.0f);
    spec = float4(0.0f, 0.0f, 0.0f, 0.0f);

    // The light vector aims opposite the direction the light rays travel.
    float3 lightVec = -L.Direction;
    lightVec = mul(lightVec, toTS);
    lightVec = normalize(lightVec);

    // toEye to tangent space
    toEye = mul(toEye, toTS);
    toEye = normalize(toEye);

    // Add ambient term.
    ambient = mat.Ambient * L.Ambient;

    // Add diffuse and specular term, provided the surface is in
    // the line of sight of the light.
    float diffuseFactor = dot(lightVec, normalT);

    // Flatten to avoid dynamic branching.
    [flatten]
    if (diffuseFactor > 0.0f)
    {
        float3 v = reflect(-lightVec, normalT);
        float specFactor = pow(max(dot(v, toEye), 0.0f), mat.Specular.w);
        diffuse = diffuseFactor * mat.Diffuse * L.Diffuse;
        spec = specFactor * mat.Specular * L.Specular;
    }
}
The result I get seems to be much darker in most places and too bright in several highlight areas. I wonder if anyone can help me with my code or give me advice on how to debug an HLSL shader. My thousand thanks!

Billboard-like Representation For Spheres OpenGL

The world is made of spheres. Since drawing a sphere in OpenGL takes a lot of triangles, I thought it would be faster to use a point and radius to represent a sphere and then use billboarding in OpenGL to draw it. However, the current approach I took causes adjacent spheres not to touch when rotating the view.
Here's an example:
There are two spheres:
Sphere 1 Position (0, 0, -3) Radius (0.5)
Sphere 2 Position (-1, 0, -3) Radius (0.5)
The projection matrix is defined using:
glm::perspective(glm::radians(120.0f), 1.0f, 1.0f, 100.0f);
Image 1: When there is no rotation, it looks as expected.
Image 2: When there is rotation, the billboarding responds to the camera as expected, but the spheres no longer touch. If they were actually spheres next to each other, you would expect them to touch.
What I have tried:
I tried GL_POINTS; they didn't work as well because they didn't seem to handle the depth test correctly for me.
I tried a geometry shader that creates a square before and after the projection matrix is applied.
Here's the code I have now that created the images:
Vertex Shader
#version 460
layout(location = 0) in vec3 position;
layout(location = 1) in float radius;

out float radius_vs;

void main()
{
    gl_Position = vec4(position, 1.0);
    radius_vs = radius;
}
Geometry Shader
#version 460
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;
layout(location = 2) uniform mat4 view_mat;
layout(location = 3) uniform mat4 projection_mat;
in float radius_vs[];
out vec2 bounds;
void main()
{
vec3 x_dir = vec3(view_mat[0][0], view_mat[1][0], view_mat[2][0]) * radius_vs[0];
vec3 y_dir = vec3(view_mat[0][1], view_mat[1][1], view_mat[2][1]) * radius_vs[0];
mat4 fmat = projection_mat * view_mat;
gl_Position = fmat * vec4(gl_in[0].gl_Position.xyz - x_dir - y_dir, 1.0f);
bounds = vec2(-1.0f, -1.0f);
EmitVertex();
gl_Position = fmat * vec4(gl_in[0].gl_Position.xyz - x_dir + y_dir, 1.0f);
bounds = vec2(-1.0f, 1.0f);
EmitVertex();
gl_Position = fmat * vec4(gl_in[0].gl_Position.xyz + x_dir - y_dir, 1.0f);
bounds = vec2(1.0f, -1.0f);
EmitVertex();
gl_Position = fmat * vec4(gl_in[0].gl_Position.xyz + x_dir + y_dir, 1.0f);
bounds = vec2(1.0f, 1.0f);
EmitVertex();
EndPrimitive();
}
Fragment Shader
#version 460
out vec4 colorOut;
in vec2 bounds;
void main()
{
vec2 circCoord = bounds;
if (dot(circCoord, circCoord) > 1.0)
{
discard;
}
colorOut = vec4(1.0f, 1.0f, 0.0f, 1.0);
}
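(One hedged geometric note that may be relevant here: under perspective, a sphere's silhouette is larger than a camera-facing disc of radius r drawn through its center, so quads sized exactly to the radius can leave gaps where real spheres would touch. Covering the silhouette requires a half-size of r / sqrt(1 - (r/d)^2), where d is the camera-to-center distance; this is a common impostor correction. A sketch:)

#include <cmath>

// Half-size a camera-facing quad at the sphere's center needs in order to
// cover the sphere's perspective silhouette; d is the distance from the
// camera to the sphere center (assumes d > r).
float silhouetteHalfSize(float r, float d)
{
    return r / std::sqrt(1.0f - (r * r) / (d * d));
}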

OpenGL view matrix - wrong "camera" orientation and position

The symptom is that the "position of the camera" seems to be mirrored around the x axis (negative z instead of positive z), and the "orientation of the camera" is the opposite of what I expect. In other words, I have to rotate the camera by 180 degrees and move it forward to see any renderings.
In all the OpenGL camera tutorials I have seen, there was always a positive z coordinate for the camera position. Maybe there is only a single sign mistake in the code, but I do not see it. I am also posting the corresponding shader code. My objects are rendered at world coordinate z = 0.1.
The initialization of the camera instance is shown in the following lines
m_viewMatrix = math::Matrix4D::lookAt(m_cameraPosition, m_cameraPosition + m_cameraForward, m_cameraUp);
where
m_cameraForward(math::Vector3D(0.0f, 0.0f, -1.0f)),
m_cameraRight(math::Vector3D (1.0f, 0.0f, 0.0f)),
m_cameraUp(math::Vector3D(0.0f, 1.0f, 0.0f)),
m_cameraPosition(math::Vector3D(0.0f, 0.0f, 20.0f))
The result is a black screen. When I change the camera position to
m_cameraPosition(math::Vector3D(0.0f, 0.0f, -20.0f)
everything works fine.
The function lookAt is given by the following lines:
Matrix4D Matrix4D::lookAt(
    const Vector3D& f_cameraPosition_r,
    const Vector3D& f_targetPosition_r,
    const Vector3D& f_upDirection_r)
{
    const math::Vector3D l_forwardDirection = (f_targetPosition_r - f_cameraPosition_r).normalized();
    const math::Vector3D l_rightDirection = f_upDirection_r.cross(l_forwardDirection).normalized();
    const math::Vector3D l_upDirection = l_forwardDirection.cross(l_rightDirection); // is normalized
    return math::Matrix4D(
        l_rightDirection.x, l_rightDirection.y, l_rightDirection.z, l_rightDirection.dot(f_cameraPosition_r * (-1.0f)),
        l_upDirection.x, l_upDirection.y, l_upDirection.z, l_upDirection.dot(f_cameraPosition_r * (-1.0f)),
        l_forwardDirection.x, l_forwardDirection.y, l_forwardDirection.z, l_forwardDirection.dot(f_cameraPosition_r * (-1.0f)),
        0.0f, 0.0f, 0.0f, 1.0f
    );
}
The memory layout of Matrix4D is column major, as expected by OpenGL.
All other functions like dot and cross are unit tested.
vertex shader:
#version 430
in layout (location = 0) vec3 position;
in layout (location = 1) vec4 color;
in layout (location = 2) vec3 normal;

uniform mat4 pr_matrix;             // projection matrix
uniform mat4 vw_matrix = mat4(1.0); // view matrix <------
uniform mat4 ml_matrix = mat4(1.0); // model matrix

out vec4 colorOut;

void main()
{
    gl_Position = pr_matrix * vw_matrix * ml_matrix * vec4(position, 1.0);
    colorOut = color;
}
fragment shader:
#version 430
out vec4 color;
in vec4 colorOut;

void main()
{
    color = colorOut;
}
edit (added perspective matrix):
Matrix4D Matrix4D::perspectiveProjection(
    const float f_viewportWidth_f,
    const float f_viewportHeight_f,
    const float f_nearPlaneDistance_f,
    const float f_farPlaneDistance_f,
    const float f_radFieldOfViewY_f)
{
    const float l_aspectRatio_f = f_viewportWidth_f / f_viewportHeight_f;
    const float l_tanHalfFovy_f = tan(f_radFieldOfViewY_f * 0.5f);
    const float l_frustumLength = f_farPlaneDistance_f - f_nearPlaneDistance_f;
    const float l_scaleX = 1.0f / (l_aspectRatio_f * l_tanHalfFovy_f);
    const float l_scaleY = 1.0f / l_tanHalfFovy_f;
    const float l_scaleZ = -(f_farPlaneDistance_f + f_nearPlaneDistance_f) / l_frustumLength;
    const float l_value32 = -(2.0f * f_farPlaneDistance_f * f_nearPlaneDistance_f) / l_frustumLength;
    return Matrix4D(
        l_scaleX, +0.0f, +0.0f, +0.0f,
        +0.0f, l_scaleY, +0.0f, +0.0f,
        +0.0f, +0.0f, l_scaleZ, l_value32,
        +0.0f, +0.0f, -1.0f, +0.0f);
}
Your projection matrix follows the "classic" OpenGL conventions: the viewing direction is (0, 0, -1) in eye space (last row of the matrix).
However, your view matrix does not follow that convention: you must put the negated forward direction into the matrix (and use it when calculating the translation z component as well). In its current form, the view matrix just rotates so that the forward direction is mapped to +z.
Negating this of course means that you will use a right-handed coordinate system for world space (which is what classic GL did). If you don't want that, you can also just change the projection matrix to actually look at +z.
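To make that concrete, here is a minimal lookAt sketch following the classic convention, reusing the post's Vector3D/Matrix4D types and assuming the same row-ordered constructor and the normalized()/cross()/dot() helpers shown above:

Matrix4D Matrix4D::lookAt(
    const Vector3D& f_cameraPosition_r,
    const Vector3D& f_targetPosition_r,
    const Vector3D& f_upDirection_r)
{
    const math::Vector3D l_forward = (f_targetPosition_r - f_cameraPosition_r).normalized();
    // Note the cross order: forward x up yields the right axis of a right-handed basis.
    const math::Vector3D l_right = l_forward.cross(f_upDirection_r).normalized();
    const math::Vector3D l_up = l_right.cross(l_forward);
    return math::Matrix4D(
        l_right.x, l_right.y, l_right.z, -l_right.dot(f_cameraPosition_r),
        l_up.x, l_up.y, l_up.z, -l_up.dot(f_cameraPosition_r),
        -l_forward.x, -l_forward.y, -l_forward.z, l_forward.dot(f_cameraPosition_r),
        0.0f, 0.0f, 0.0f, 1.0f);
}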