D3D11: How to draw a simple pixel aligned line? - c++

I am trying to draw a line between two vertices with D3D11. I have some experience with D3D9 and D3D11, but drawing a line that starts in one given pixel and ends in another seems to be a problem in D3D11.
What I did:
I added 0.5f to the pixel coordinates of each vertex to fit the texel/pixel coordinate system (I read the Microsoft pages on the differences between the D3D9 and D3D11 coordinate systems):
f32 fOff = 0.5f;
ColoredVertex newVertices[2] =
{
{ D3DXVECTOR3(fStartX + fOff, fStartY + fOff,0), vecColorRGB },
{ D3DXVECTOR3(fEndX + fOff, fEndY + fOff,0), vecColorRGB }
};
Generated an orthographic projection matrix to fit the render target:
D3DXMatrixOrthoOffCenterLH(&MatrixOrthoProj,0.0f,(f32)uRTWidth,0.0f,(f32)uRTHeight,0.0f,1.0f);
D3DXMatrixTranspose(&cbConstant.m_matOrthoProjection,&MatrixOrthoProj);
Set RasterizerState, BlendState, Viewport, ...
Draw Vertices as D3D11_PRIMITIVE_TOPOLOGY_LINELIST
Problem:
The line seems to be one pixel too short. It starts in the given pixel coordinate and fits it perfectly. The direction of the line looks correct, but the pixel where I want the line to end is still not colored. It looks like the line is just one pixel too short...
Is there any tutorial explaining this problem, or has anybody had the same problem? As I remember, it wasn't as difficult in D3D9.
Please ask if you need further information.
Thanks, Stefan
EDIT: I found the rasterization rules for D3D10 (they should be the same for D3D11):
http://msdn.microsoft.com/en-us/library/cc627092%28v=vs.85%29.aspx#Line_1
I hope this will help me understand...

According to the rasterization rules (link in the question above), I might have found a solution that should work:
sort the vertices so that StartX < EndX and StartY < EndY
add (0.5, 0.5) to the start vertex (as I did before) to move the vertex to the center of the pixel
add (1.0, 1.0) to the end vertex to move the vertex to the lower-right corner
This is needed to tell the rasterizer that the last pixel of the line should be drawn.
f32 fXStartOff = 0.5f;
f32 fYStartOff = 0.5f;
f32 fXEndOff = 1.0f;
f32 fYEndOff = 1.0f;
ColoredVertex newVertices[2] =
{
{ D3DXVECTOR3((f32)fStartX + fXStartOff, (f32)fStartY + fYStartOff,0), vecColorRGB },
{ D3DXVECTOR3((f32)fEndX + fXEndOff , (f32)fEndY + fYEndOff,0), vecColorRGB }
};
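For reference, the per-axis rule above can be collapsed into one small helper. This is only a sketch under the same assumptions (integer pixel coordinates in, endpoints sorted per axis; the names are mine, not from the original code):

```cpp
#include <algorithm>
#include <utility>

// Offsets integer pixel coordinates for one axis so the rasterizer
// covers both the first and the last pixel of the line: the start
// lands on the pixel center (+0.5), the end on the far pixel corner
// (+1.0). Endpoints are sorted so start <= end first.
std::pair<float, float> lineAxisOffsets(int start, int end)
{
    if (start > end) std::swap(start, end);
    return { static_cast<float>(start) + 0.5f,
             static_cast<float>(end)   + 1.0f };
}
```

Note that the sorting can swap which endpoint carries which color, which you would have to handle separately when filling the vertex array.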
If you know a better solution, please let me know.

I don't know D3D11, but your problem sounds a lot like the D3DRS_LASTPIXEL render state from D3D9 - maybe there's an equivalent in D3D11 you need to look into.

I encountered the exact same issue and fixed it thanks to this discussion.
My vertices are stored in a D3D11_PRIMITIVE_TOPOLOGY_LINELIST vertex buffer.
Thanks for this useful post; it helped me fix this bug today.
It was REALLY trickier than I thought at the start.
Here are a few lines of my code.
// projection matrix code
float width = 1024.0f;
float height = 768.0f;
DirectX::XMMATRIX offsetedProj = DirectX::XMMatrixOrthographicRH(width, height, 0.0f, 10.0f);
DirectX::XMMATRIX proj = DirectX::XMMatrixMultiply(DirectX::XMMatrixTranslation(- width / 2, height / 2, 0), offsetedProj);
// view matrix code
// screen top left pixel is 0,0 and bottom right is 1023,767
DirectX::XMMATRIX viewMirrored = DirectX::XMMatrixLookAtRH(eye, at, up);
DirectX::XMMATRIX mirrorYZ = DirectX::XMMatrixScaling(1.0f, -1.0f, -1.0f);
DirectX::XMMATRIX view = DirectX::XMMatrixMultiply(mirrorYZ, viewMirrored);
// draw line code in my visual debug tool.
void TVisualDebug::DrawLine2D(int2 const& parStart,
int2 const& parEnd,
TColor parColorStart,
TColor parColorEnd,
float parDepth)
{
FLine2DsDirty = true;
// D3D11_PRIMITIVE_TOPOLOGY_LINELIST
float2 const startFloat(parStart.x() + 0.5f, parStart.y() + 0.5f);
float2 const endFloat(parEnd.x() + 0.5f, parEnd.y() + 0.5f);
float2 const diff = endFloat - startFloat;
// Returns the normalized difference, or float2(1.0f, 1.0f) if the distance between the points is zero, then multiplies the result by something a little bigger than 0.5f (0.5f alone is not enough).
float2 const diffNormalized = diff.normalized_replace_if_null(float2(1.0f, 1.0f)) * 0.501f;
size_t const currentIndex = FLine2Ds.size();
FLine2Ds.resize(currentIndex + 2);
render::vertex::TVertexColor* baseAddress = FLine2Ds.data() + currentIndex;
render::vertex::TVertexColor& v0 = baseAddress[0];
render::vertex::TVertexColor& v1 = baseAddress[1];
v0.FPosition = float3(startFloat.x(), startFloat.y(), parDepth);
v0.FColor = parColorStart;
v1.FPosition = float3(endFloat.x() + diffNormalized.x(), endFloat.y() + diffNormalized.y(), parDepth);
v1.FColor = parColorEnd;
}
I tested several DrawLine2D calls, and it seems to work well.
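The core of the fix above, pushing the end point slightly more than half a pixel further along the line's direction, can be isolated into a standalone helper. A sketch in plain C++ (my own names, mirroring the fallback behavior of normalized_replace_if_null in the code above):

```cpp
#include <cmath>

struct Point2 { float x, y; };

// Extends the line's end point by slightly more than half a pixel
// along the start->end direction, so the rasterizer fills the last
// pixel. Falls back to the (1, 1) diagonal for zero-length lines.
Point2 extendLineEnd(Point2 start, Point2 end, float push = 0.501f)
{
    float dx = end.x - start.x;
    float dy = end.y - start.y;
    float len = std::sqrt(dx * dx + dy * dy);
    if (len == 0.0f) { dx = 1.0f; dy = 1.0f; len = std::sqrt(2.0f); }
    return { end.x + dx / len * push, end.y + dy / len * push };
}
```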

Related

OpenGL: screen-to-world transformation and good use of glm::unProject

I have what I believed to be a basic need: from the 2D position of the mouse on the screen, I need to get the closest 3D point in the 3D world. This looks like a common ray-tracing problem (even if it's not exactly mine).
I googled and read a lot: the topic is messy, and lots of things unfortunately get intricate very quickly. My initial problem involves lots of 3D points that I do not know (meshes or point clouds from the internet), so it's impossible to know what result to expect! Thus, I decided to create simple shapes (triangle, quadrangle, cube) with points that I know (each coordinate of each point is 0.f or 0.5f in the local frame) and try to see if I can "recover" the 3D point positions from the mouse cursor when I move it on the screen.
Note: all coordinates of all points of all shapes are known values like 0.f or 0.5f. For example, with the triangle:
float vertices[] = {
-0.5f, -0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
0.0f, 0.5f, 0.0f
};
What I do
I have a 3D OpenGL renderer where I added a GUI to have controls on the rendered scene
Transformations: tx, ty, tz, rx, ry, rz are controls that allow changing the model matrix. In code:
// create transformations: model represents local to world transformation
model = glm::mat4(1.0f); // initialize matrix to identity matrix first
model = glm::translate(model, glm::vec3(tx, ty, tz));
model = glm::rotate(model, glm::radians(rx), glm::vec3(1.0f, 0.0f, 0.0f));
model = glm::rotate(model, glm::radians(ry), glm::vec3(0.0f, 1.0f, 0.0f));
model = glm::rotate(model, glm::radians(rz), glm::vec3(0.0f, 0.0f, 1.0f));
ourShader.setMat4("model", model);
model changes only the position of the shape in the world and has no connection with the position of the camera (that's what I understand from tutorials).
Camera: from here, I ended-up with a camera class that holds view and proj matrices. In code
// get view and projection from camera
view = cam.getViewMatrix();
ourShader.setMat4("view", view);
proj = cam.getProjMatrix((float)SCR_WIDTH, (float)SCR_HEIGHT, near, 100.f);
ourShader.setMat4("proj", proj);
The camera is a fly-like camera that can be moved when moving the mouse or using keyboard arrows and that does not act on model, but only on view and proj (that's what I understand from tutorials).
The shader then uses model, view and proj this way:
uniform mat4 model;
uniform mat4 view;
uniform mat4 proj;
void main()
{
// note that we read the multiplication from right to left
gl_Position = proj * view * model * vec4(aPos.x, aPos.y, aPos.z, 1.0);
Screen to world: as glm::unProject didn't always return the results I expected, I added a control to not use it (back-projecting by hand). In code, first I get the mouse cursor position frame3DPos following this:
// glfw: whenever the mouse moves, this callback is called
// -------------------------------------------------------
void mouseCursorCallback(GLFWwindow* window, double xposIn, double yposIn)
{
// screen to world transformation
xposScreen = xposIn;
yposScreen = yposIn;
int windowWidth = 0, windowHeight = 0; // size in screen coordinates.
glfwGetWindowSize(window, &windowWidth, &windowHeight);
int frameWidth = 0, frameHeight = 0; // size in pixel.
glfwGetFramebufferSize(window, &frameWidth, &frameHeight);
glm::vec2 frameWinRatio = glm::vec2(frameWidth, frameHeight) /
glm::vec2(windowWidth, windowHeight);
glm::vec2 screen2DPos = glm::vec2(xposScreen, yposScreen);
glm::vec2 frame2DPos = screen2DPos * frameWinRatio; // window / frame sizes may be different.
frame2DPos = frame2DPos + glm::vec2(0.5f, 0.5f); // shift to GL's center convention.
glm::vec3 frame3DPos = glm::vec3(0.0f, 0.0f, 0.0f);
frame3DPos.x = frame2DPos.x;
frame3DPos.y = frameHeight - 1.0f - frame2DPos.y; // GL's window origin is at the bottom left
frame3DPos.z = 0.f;
glReadPixels((GLint) frame3DPos.x, (GLint) frame3DPos.y, // CAUTION: cast to GLint.
1, 1, GL_DEPTH_COMPONENT,
GL_FLOAT, &zbufScreen); // CAUTION: GL_DOUBLE is NOT supported.
frame3DPos.z = zbufScreen; // z-buffer.
And then I can call glm::unProject or not (back-projecting by hand) according to the controls in the GUI:
glm::vec3 world3DPos = glm::vec3(0.0f, 0.0f, 0.0f);
if (screen2WorldUsingGLM) {
glm::vec4 viewport(0.0f, 0.0f, (float) frameWidth, (float) frameHeight);
world3DPos = glm::unProject(frame3DPos, view * model, proj, viewport);
} else {
glm::mat4 trans = proj * view * model;
glm::vec4 frame4DPos(frame3DPos, 1.f);
frame4DPos = glm::inverse(trans) * frame4DPos;
world3DPos.x = frame4DPos.x / frame4DPos.w;
world3DPos.y = frame4DPos.y / frame4DPos.w;
world3DPos.z = frame4DPos.z / frame4DPos.w;
}
Question: the glm::unProject doc says "Map the specified window coordinates (win.x, win.y, win.z) into object coordinates", but I am not sure I understand what object coordinates are. Do object coordinates refer to the local, world, view, or clip space described here?
Z-buffering is always enabled, whether the shape is 2D (triangle, quadrangle) or 3D (cube). In code:
glEnable(GL_DEPTH_TEST); // Enable z-buffer.
while (!glfwWindowShouldClose(window)) {
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // also clear the z-buffer
In picture I get
The camera is positioned at (0., 0., 0.) and looks "ahead" (front = -z as z-axis is positive from screen to me). The shape is positioned (using tx, ty, tz, rx, ry, rz) "in front of the camera" with tz = -5 (5 units following the front vector of the camera)
What I get
Triangle in initial setting
I get correct xpos and ypos in the world frame but an incorrect zpos = 0 (z-buffering is enabled). I expected zpos = -5 (as tz = -5).
Question: why is zpos incorrect?
If I do not use glm::unProject, I get outer-space results.
Question: why doesn't "back-projecting" by hand return results consistent with glm::unProject? Is this logical? Are they different operations? (I believed they should be equivalent, but they are obviously not.)
Triangle moved with translation
After a translation of about tx = 0.5, I still get the same coordinates (local frame), where I expected the previous coordinates translated along the x-axis. Not using glm::unProject returns outer-space results here too...
Question: why is the translation (applied by model, not view or proj) ignored?
Cube in initial setting
I get correct xpos, ypos, and zpos?!... So why does this not work the same way with the "2D" triangle (which is a "3D" one to me, so they should behave the same)?
Cube moved with translation
Translating along ty this time seems to have no effect (I still get the same coordinates, in the local frame).
Question: as with the triangle, why is the translation ignored?
What I'd like to get
The main question is: why is the model transformation ignored? If this is to be expected, I'd like to understand why.
If there's a way to recover the "true" position of the shape in the world (including model transformation) from the position of the mouse cursor, I'd like to understand how.
As I am new to OpenGL, I didn't get that "object coordinates" in the glm::unProject doc is another way of referring to local space. Solution: pass view*model to glm::unProject and apply model again, or pass just view to glm::unProject, as explained here: Screen Coordinates to World Coordinates.
This fixes all weird behaviors I observed.
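The fix can be sanity-checked with a toy, translation-only version of the matrices (plain C++, not glm; the names and numbers are mine, chosen to mirror the tz = -5 case from the question): unprojecting through view*model yields local/object coordinates, and applying model again recovers world coordinates.

```cpp
struct Vec3 { float x, y, z; };

// Translation-only stand-in for the model matrix: local -> world.
Vec3 applyModel(Vec3 local, Vec3 modelTranslation)
{
    return { local.x + modelTranslation.x,
             local.y + modelTranslation.y,
             local.z + modelTranslation.z };
}

// What glm::unProject(..., view * model, proj, viewport) effectively
// hands back: the model transform has been inverted too, so the result
// is a local/object-space point, not a world-space one.
Vec3 unprojectThroughModel(Vec3 worldHit, Vec3 modelTranslation)
{
    return { worldHit.x - modelTranslation.x,
             worldHit.y - modelTranslation.y,
             worldHit.z - modelTranslation.z };
}
```

With modelTranslation = (0, 0, -5), a world-space hit at z = -5 comes back with z = 0, which is exactly the "zpos = 0 instead of -5" symptom above: the point is correct, just expressed in local space.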

MVP matrix for DirectX11 using GLM

First, the part that does work, using DirectXMath, where data.model is an XMMATRIX:
static auto model_matrix = DirectX::XMMatrixIdentity();
static auto pos = DirectX::XMVectorSet(0.0f, 0.0f, -10.0f, 0.0f);
static auto focus = DirectX::XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f);
static auto up = DirectX::XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f);
static auto view_matrix = DirectX::XMMatrixLookAtLH(pos, focus, up);
static auto proj_matrix = DirectX::XMMatrixPerspectiveFovLH(glm::radians(45.0f), 16.0f / 9.0f, 0.1f, 10000.0f);
building the mvp:
data.model = model_matrix * view_matrix * proj_matrix;
data.model = DirectX::XMMatrixTranspose(data.model);
When I hand data.model over to my HLSL shader, everything works fine and I can change the pos vector to look at my cube from different angles. HLSL vertex shader:
cbuffer myCbuffer : register(b0) {
float4x4 mat;
}
float4 main(float3 pos : POSITION) : SV_POSITION
{
return mul(float4(pos, 1), mat);
}
Now when I try to make something similar using GLM (I changed the type of data.model to be a glm::mat4 ):
auto gl_m = glm::mat4(1.0f);
static auto gl_pos = glm::vec3(0.0f, 0.0f, -10.0f);
static auto gl_focus = glm::vec3(0.0f, 0.0f, 0.0f);
static auto gl_up = glm::vec3(0.0f, 1.0f, 0.0f);
auto gl_v = glm::lookAtLH(gl_pos, gl_focus, gl_up);
auto gl_p = glm::perspectiveFovLH_ZO(glm::radians(45.0f), 1280.0f, 720.0f, 0.1f, 10000.0f);
building the MVP:
data.model = gl_m * gl_v * gl_p;
Now when I pass this to data.model, the cube does get rendered, but the whole screen is filled black (my cube is black and the clear color is light blue, so I think it's rendering the cube but I'm really close to or inside it).
I don't know where to look to fix this. The projection matrix should be in the correct clip space since I'm using perspectiveFovLH_ZO; the ZO fixes the clip space to [0..1]. It could be how the HLSL float4x4 deals with the glm::mat4, but both are column-major I believe, so there should be no need to transpose.
It might have something to do with the rasterizer culling settings and the FrontCounterClockwise setting, but I'm fairly new to DirectX and don't know what it does exactly.
D3D11_RASTERIZER_DESC raster_desc = {0};
raster_desc.FillMode = D3D11_FILL_MODE::D3D11_FILL_SOLID;
raster_desc.CullMode = D3D11_CULL_MODE::D3D11_CULL_NONE;
raster_desc.FrontCounterClockwise = false;
d3device->CreateRasterizerState(&raster_desc, rasterize_state.GetAddressOf());
Any help is appreciated, let me know if I forgot anything.
Managed to fix it (seems like a bandaid fix but I'll work with it for now).
After I build the mvp I added:
gl_mvp[3][3] = -10.0f;
gl_mvp = glm::transpose(gl_mvp);
The -10 is the z coordinate of the camera position (the same one you pass as the first argument to glm::lookAtLH). I thought HLSL/DirectX matched GLM's column-major matrices, but apparently not; it needed the extra transpose call. I'm not sure why that is, or what the bottom-right element of the MVP is such that it has to match the camera's z; maybe someone with a better understanding of the math behind it can clarify.
Removed gl_mvp[3][3] = -10.0f; since it's a bad fix. Instead, I have this now:
changed both the lookAtLH and PerspectiveFovLH_ZO to their RH variants.
Also changed the order of building the MVP from M * V * P to P * V * M.
New code that seems to work well (even flying around using my Camera class):
auto gl_m = glm::mat4(1.0f);
auto gl_v = glm::lookAtRH(position, position + dir, {0, 1, 0});
auto gl_p = glm::perspectiveFovRH_ZO(glm::radians(FOV), 1280.0f, 720.0f, 0.1f, 10000.0f);
glm::mat4 gl_mvp = gl_p * gl_v * gl_m;
return glm::transpose(gl_mvp);
It's a bit different from the previous code because this is inside my Camera class, so position, dir, and FOV are variables I keep track of, but you get the idea. I passed this return result to my HLSL shader, and all seems well so far.
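The transpose is most likely needed because of the vector convention, not the storage order: HLSL's mul(v, M) treats v as a row vector, and v * M equals transpose(M) * v. A small plain C++ check of that identity (my own minimal 4x4 types, not DirectXMath or GLM):

```cpp
#include <array>

using Vec4 = std::array<float, 4>;
using Mat4 = std::array<std::array<float, 4>, 4>; // m[row][col]

// Column-vector product: M * v (what GLM-style math computes).
Vec4 mulMatVec(const Mat4& m, const Vec4& v)
{
    Vec4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            r[i] += m[i][j] * v[j];
    return r;
}

// Row-vector product: v * M (what HLSL's mul(float4, float4x4) computes).
Vec4 mulVecMat(const Vec4& v, const Mat4& m)
{
    Vec4 r{};
    for (int j = 0; j < 4; ++j)
        for (int i = 0; i < 4; ++i)
            r[j] += v[i] * m[i][j];
    return r;
}

Mat4 transpose(const Mat4& m)
{
    Mat4 t{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            t[j][i] = m[i][j];
    return t;
}
```

So feeding the shader transpose(P*V*M) and calling mul(pos, mat) yields the same clip position as (P*V*M) * pos would, which is why the extra transpose makes the GLM path match the XMMatrixTranspose path above.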

Unable to mouse pick a quad rendered in a framebuffer

Struggling to mouse pick a point/quad. I believe I am either using coordinates in the wrong space or perhaps not accounting for the framebuffer's position/size (it's a sub window of the main window).
Tried converting to various different coordinate spaces and inverting the model matrix too. Currently projecting a ray in world space (hopefully correctly) and trying to compare it to the point's (quad) location. The point is specified in local space, but the entity is rendered at the origin (0f, 0f, 0f) therefore I don't think it should be any different in world space?
To get the mouse ray in world space:
private fun calculateRay(): Vector3f {
val mousePosition = Mouse.getCursorPosition()
val ndc = toDevice(mousePosition)
val clip = Vector4f(ndc.x, ndc.y, -1f, 1f)
val eye = toEye(clip)
return toWorld(eye)
}
private fun toDevice(mousePosition: Vector2f): Vector2f {
mousePosition.x -= fbo.x // Correct thing to do?
mousePosition.y -= fbo.y
val x = (2f * mousePosition.x) / fboSize.x - 1
val y = (2f * mousePosition.y) / fboSize.y - 1
return Vector2f(x, y)
}
private fun toEye(clip: Vector4f): Vector4f {
val invertedProjection = Matrix4f(projectionMatrix).invert()
val eye = invertedProjection.transform(clip)
return Vector4f(eye.x, eye.y, -1f, 0f)
}
private fun toWorld(eye: Vector4f): Vector3f {
val viewMatrix = Maths.createViewMatrix(camera)
val invertedView = Matrix4f(viewMatrix).invert()
val world = invertedView.transform(eye)
return Vector3f(world.x, world.y, world.z).normalize()
}
When hovering over a point (11.25, -0.75), the ray coords are (0.32847548, 0.05527423). I tried normalising the point's position and it is still not a match.
Feel like I am missing/overlooking something or just manipulating the coordinate systems incorrectly. Any insight would be greatly appreciated, thank you.
EDIT with more information:
The vertices of the quad are:
(-0.5f, 0.5f, -0.5f, -0.5f, 0.5f, 0.5f, 0.5f, -0.5f)
Loading matrices to shader:
private fun loadMatrices(position: Vector3f, rotation: Float, scale: Float, viewMatrix: Matrix4f, currentRay: Vector3f) {
val modelMatrix = Matrix4f()
modelMatrix.translate(position)
modelMatrix.m00(viewMatrix.m00())
modelMatrix.m01(viewMatrix.m10())
modelMatrix.m02(viewMatrix.m20())
modelMatrix.m10(viewMatrix.m01())
modelMatrix.m11(viewMatrix.m11())
modelMatrix.m12(viewMatrix.m21())
modelMatrix.m20(viewMatrix.m02())
modelMatrix.m21(viewMatrix.m12())
modelMatrix.m22(viewMatrix.m22())
modelMatrix.rotate(Math.toRadians(rotation.toDouble()).toFloat(), Vector3f(0f, 0f, 1f))
modelMatrix.scale(scale)
shader.loadModelViewMatrix(viewMatrix.mul(modelMatrix))
shader.loadProjectionMatrix(projectionMatrix)
}
Calculating gl_Position in vertex shader:
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 0.0, 1.0);
EDIT 2: I altered my code after reading some more material based on Rabbid's comments. I'm not sure if I require the division by 2 in the viewport size (I have a Retina MacBook display).
mousePosition.sub(fboPosition)
val w = (fboSize.x / 2).toInt()
val h = (fboSize.y / 2).toInt()
val y = h - mousePosition.y
val viewMatrix = Maths.createViewMatrix(camera)
val origin = Vector3f()
val dir = Vector3f()
Matrix4f(projectionMatrix).mul(viewMatrix)
.unprojectRay(mousePosition.x, y, intArrayOf(0, 0, w, h), origin, dir)
The origin of window space is at the top left, (0,0). So if you already get (0,0) when the mouse is in the top left of the window, you have to skip:
mousePosition.x -= fbo.x // Correct thing to do?
mousePosition.y -= fbo.y
Since the bottom left of the framebuffer is (0,0), the y coordinate has to be flipped:
val y = 1 - (2f * mousePosition.y) / fboSize.y
When a Cartesian coordinate is transformed by an (inverse) projection matrix, the result is a homogeneous coordinate. You have to do a perspective divide to get a Cartesian coordinate in view space:
val eye = invertedProjection.transform(clip)
return Vector3f(eye.x/eye.w, eye.y/eye.w, eye.z/eye.w)

Mouse to world position without gluUnProject?

I'm trying to implement an editor-like placing mode with OpenGL.
When you click somewhere on the screen, an object gets placed at that position.
So far this is my code:
void RayCastMouse(double posx, double posy)
{
glm::fvec2 NDCCoords = glm::fvec2( (2.0*posx)/ float(SCR_WIDTH)-1.f, ((-2.0*posy) / float(SCR_HEIGHT)+1.f) );
glm::mat4 viewProjectionInverse = glm::inverse(projection * camera.GetViewMatrix());
glm::vec4 worldSpacePosition(NDCCoords.x, NDCCoords.y, 0.0f, 1.0f);
glm::vec4 worldRay = viewProjectionInverse*worldSpacePosition;
printf("X=%f / y=%f / z=%f\n", worldRay.x, worldRay.y, worldRay.z);
m_Boxes.emplace_back(worldRay.x, 0, worldRay.z);
}
The problem is that the object isn't placed at the correct position; the worldRay vector is used to translate the model matrix.
Can anyone please help with this? I will really appreciate it.
By the way, dividing the worldRay xyz components by worldRay.w places the object at the camera position.

Cascaded Shadow maps not quite right

Ok. So, I've been messing around with shadows in my game engine for the last week. I've mostly implemented cascaded shadow maps (CSM), but I'm having a bit of a problem with shadowing that I just can't seem to solve.
The only light in this scene is a directional light (sun), pointing {-0.1 -0.25 -0.65}. I calculate 4 sets of frustum bounds for the four splits of my CSMs with this code:
// each projection matrix calculated with same near plane, different far
Frustum make_worldFrustum(const glm::mat4& _invProjView) {
Frustum fr; glm::vec4 temp;
temp = _invProjView * glm::vec4(-1, -1, -1, 1);
fr.xyz = glm::vec3(temp) / temp.w;
temp = _invProjView * glm::vec4(-1, -1, 1, 1);
fr.xyZ = glm::vec3(temp) / temp.w;
...etc 6 more times for ndc cube
return fr;
}
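The eight corner evaluations follow one pattern per NDC cube corner, so the corners themselves can be generated in a loop. A minimal sketch of just the corner generation (plain C++; the multiply by _invProjView and the divide by w stay as in the snippet above):

```cpp
#include <array>
#include <vector>

// All eight corners of the NDC cube: every (x, y, z) combination of
// -1 / +1. Each corner, multiplied by the inverse proj*view matrix and
// divided by its resulting w, gives one world-space frustum corner.
std::vector<std::array<float, 3>> ndcCubeCorners()
{
    std::vector<std::array<float, 3>> corners;
    for (float x : {-1.0f, 1.0f})
        for (float y : {-1.0f, 1.0f})
            for (float z : {-1.0f, 1.0f})
                corners.push_back({x, y, z});
    return corners;
}
```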
For the light, I get a view matrix like this:
glm::mat4 viewMat = glm::lookAt(cam.pos, cam.pos + lightDir, {0,0,1});
I then create each ortho matrix from the bounds of each frustum:
lightMatVec.clear();
for (auto& frus : cam.frusVec) {
glm::vec3 arr[8] {
glm::vec3(viewMat * glm::vec4(frus.xyz, 1)),
glm::vec3(viewMat * glm::vec4(frus.xyZ, 1)),
etc...
};
glm::vec3 minO = {INFINITY, INFINITY, INFINITY};
glm::vec3 maxO = {-INFINITY, -INFINITY, -INFINITY};
for (auto& vec : arr) {
minO = glm::min(minO, vec);
maxO = glm::max(maxO, vec);
}
glm::mat4 projMat = glm::ortho(minO.x, maxO.x, minO.y, maxO.y, minO.z, maxO.z);
lightMatVec.push_back(projMat * viewMat);
}
I have a 4 layer TEXTURE_2D_ARRAY bound to 4 framebuffers that I draw the scene into with a very simple vertex shader (frag disabled or punchthrough alpha).
I then draw the final scene. The vertex shader outputs four shadow texcoords:
out vec3 slShadcrd[4];
// stuff
for (int i = 0; i < 4; i++) {
vec4 sc = WorldBlock.skylMatArr[i] * vec4(world_pos, 1);
slShadcrd[i] = sc.xyz / sc.w * 0.5f + 0.5f;
}
And a fragment shader, which determines the split to use with:
int csmIndex = 0;
for (uint i = 0u; i < CameraBlock.csmCnt; i++) {
if (-view_pos.z > CameraBlock.csmSplits[i]) csmIndex++;
else break;
}
And samples the shadow map array with this function:
float sample_shadow(vec3 _sc, int _csmIndex, sampler2DArrayShadow _tex) {
return texture(_tex, vec4(_sc.xy, _csmIndex, _sc.z)).r;
}
And, this is the scene I get (with each split slightly tinted and the 4 depth layers overlayed):
Great! Looks good.
But, if I turn the camera slightly to the right:
Then shadows start disappearing (and depending on the angle, appearing where they shouldn't be).
I have GL_DEPTH_CLAMP enabled, so that isn't the issue. I'm culling front faces, but turning that off doesn't make a difference to this issue.
What am I missing? I feel like it's an issue with one of my projections, but they all look right to me. Thanks!
EDIT:
All four of the light's frustums drawn. They are all there, but only z is changing relative to the camera (see comment below):
EDIT:
Probably more useful, this is how the frustums look when I only update them once, when the camera is at (0,0,0) and pointing forwards (0,1,0). Also I drew them with depth testing this time.
IMPORTANT EDIT:
It seems that this issue is directly related to the light's view matrix, currently:
glm::mat4 viewMat = glm::lookAt(cam.pos, cam.pos + lightDir, {0,0,1});
Changing the values for eye and target seems to affect the buggered shadows. But I don't know what I should actually be setting this to? Should be easy for someone with a better understanding than me :D
Solved it! It was indeed an issue with the light's view matrix! All I had to do was replace camPos with the centre point of each frustum! Meaning that each split's light matrix needed a different view matrix. So I just create each view matrix like this...
glm::mat4 viewMat = glm::lookAt(frusCentre, frusCentre+lightDir, {0,0,1});
And get frusCentre simply...
glm::vec3 calc_frusCentre(const Frustum& _frus) {
glm::vec3 min(INFINITY, INFINITY, INFINITY);
glm::vec3 max(-INFINITY, -INFINITY, -INFINITY);
for (auto& vec : {_frus.xyz, _frus.xyZ, _frus.xYz, _frus.xYZ,
_frus.Xyz, _frus.XyZ, _frus.XYz, _frus.XYZ}) {
min = glm::min(min, vec);
max = glm::max(max, vec);
}
return (min + max) / 2.f;
}
And bam! Everything works spectacularly!
EDIT (Last one!):
What I had was not quite right. The view matrix should actually be:
glm::lookAt(frusCentre-lightDir, frusCentre, {0,0,1});