OpenGL - reconstruct position from depth in VS

I am trying to reconstruct position from a depth texture in the vertex shader. Usually this is done in the pixel shader, but for some reason I need it in the VS to transform some geometry.
So here is my approach.
1) I calculate the view frustum corners in view space.
I use the NDC corners below as input. Those values are transformed via Inverse(view * proj) to put them into world space and then transformed via the view matrix.
//GL - Left Handed - need to "swap" front and back Z coordinate
MyMath::Vector4 cornersVector4[] =
{
//front
MyMath::Vector4(-1, -1, 1, 1), //A
MyMath::Vector4( 1, -1, 1, 1), //B
MyMath::Vector4( 1, 1, 1, 1), //C
MyMath::Vector4(-1, 1, 1, 1), //D
//back
MyMath::Vector4(-1, -1, -1, 1), //E
MyMath::Vector4( 1, -1, -1, 1), //F
MyMath::Vector4( 1, 1, -1, 1), //G
MyMath::Vector4(-1, 1, -1, 1), //H
};
If I print the debug output, it seems correct (the camera position is at distance zNear from the near plane and the far plane is far enough away).
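For reference, a minimal sketch of that corner transform, written with GLM as a stand-in for the MyMath types (note that with GLM's column-vector convention the inverse is of proj * view):
#include <glm/glm.hpp>
// Transforms the 8 NDC corners above into view space:
// NDC -> world (inverse of proj * view, plus perspective divide) -> view.
void frustumCornersInViewSpace(const glm::mat4& view, const glm::mat4& proj,
                               const glm::vec4 ndcCorners[8],
                               glm::vec3 outViewSpace[8])
{
    glm::mat4 invViewProj = glm::inverse(proj * view);
    for (int i = 0; i < 8; ++i)
    {
        glm::vec4 world = invViewProj * ndcCorners[i];
        world /= world.w;                          // perspective divide
        outViewSpace[i] = glm::vec3(view * world); // world -> view space
    }
}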
2) I pass these values to the shader.
3) In the shader I do this:
vec3 _cornerPos0 = cornerPos0.xyz * mat3(viewInv);
vec3 _cornerPos1 = cornerPos1.xyz * mat3(viewInv);
vec3 _cornerPos2 = cornerPos2.xyz * mat3(viewInv);
vec3 _cornerPos3 = cornerPos3.xyz * mat3(viewInv);
float x = (TEXCOORD1.x / 100.0); //TEXCOORD1.x = <0, 100>
float y = (TEXCOORD1.y / 100.0); //TEXCOORD1.y = <0, 100>
vec3 ray = mix(mix(_cornerPos0, _cornerPos1, x),
mix(_cornerPos2, _cornerPos3, x),
y);
float depth = texture2D(depthTexture, vec2(x, y)).r;
//depth is created in draw pass before with depth = vertexViewPos.z / farClipPlane;
vec3 reconstructed_posWS = camPos + (depth * ray);
But if I do this and translate my geometry from [0,0,0] to reconstructed_posWS, only part of the screen is covered. What can be incorrect?
PS: some calculations are redundant (transforming into a space and then back out again), but speed is not a concern at the moment.

Related

GLM LookAt with different coordinate system

I am using GLM to make a LookAt matrix. I use the normal OpenGL coordinate system, except that the Z axis points inwards, which is the opposite of the OpenGL standard. Thus, the LookAt function requires some changes:
glm::vec3 pos = glm::vec3(0, 0, -10); // equal to glm::vec3(0, 0, 10) in standard coords
glm::quat rot = glm::quat(0.991445, 0.130526, 0, 0); // 15 degrees rotation about the x axis
glm::vec3 resultPos = pos * glm::vec3(1, 1, -1); // flip Z axis
glm::vec3 resultLook = pos + (glm::conjugate(rot) * glm::vec3(0, 0, 1)) * glm::vec3(1, 1, -1); // rotate unit Z vec and then flip Z
glm::vec3 resultUp = (glm::conjugate(rot) * glm::vec3(0, 1, 0)) * glm::vec3(1, 1, -1); // same thing as resultLook but with unit Y vec
glm::mat4 lookAt = glm::lookAt(resultPos, resultLook, resultUp);
However, that is a lot of calculation for just flipping a single axis. What do I need to do to get a view matrix which has a flipped Z axis?
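For what it's worth, newer GLM versions ship left-handed variants of these helpers, which may remove most of the manual flipping; here is a minimal sketch, assuming glm::lookAtLH is available and that "+Z inwards" is the only difference from the standard setup (makeLeftHandedView is just an illustrative name, not part of GLM):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/quaternion.hpp>
// Builds a view matrix directly in the left-handed (+Z inwards) convention.
glm::mat4 makeLeftHandedView(const glm::vec3& pos, const glm::quat& rot)
{
    glm::vec3 forward = glm::conjugate(rot) * glm::vec3(0, 0, 1); // rotated unit Z
    glm::vec3 up      = glm::conjugate(rot) * glm::vec3(0, 1, 0); // rotated unit Y
    return glm::lookAtLH(pos, pos + forward, up);
}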

Incorrect render of a cube mesh in DirectX 11

I am practicing DirectX 11 following Frank Luna's book.
I have implemented a demo that renders a cube, but the result is not correct.
https://i.imgur.com/2uSkEiq.gif
As I hope you can see from the image (I apologize for the low quality), it seems like the camera is "trapped" inside the cube even when I move it away. There is also a camera frustum clipping problem.
I think the problem is therefore in the definition of the projection matrix.
Here is the cube vertices definition.
std::vector<Vertex> vertices =
{
{XMFLOAT3(-1, -1, -1), XMFLOAT4(1, 1, 1, 1)},
{XMFLOAT3(-1, +1, -1), XMFLOAT4(0, 0, 0, 1)},
{XMFLOAT3(+1, +1, -1), XMFLOAT4(1, 0, 0, 1)},
{XMFLOAT3(+1, -1, -1), XMFLOAT4(0, 1, 0, 1)},
{XMFLOAT3(-1, -1, +1), XMFLOAT4(0, 0, 1, 1)},
{XMFLOAT3(-1, +1, +1), XMFLOAT4(1, 1, 0, 1)},
{XMFLOAT3(+1, +1, +1), XMFLOAT4(0, 1, 1, 1)},
{XMFLOAT3(+1, -1, +1), XMFLOAT4(1, 0, 1, 1)},
};
Here is how I calculate the view and projection matrices.
void TestApp::OnResize()
{
D3DApp::OnResize();
mProj = XMMatrixPerspectiveFovLH(XM_PIDIV4, AspectRatio(), 1, 1000);
}
void TestApp::UpdateScene(float dt)
{
float x = mRadius * std::sin(mPhi) * std::cos(mTheta);
float y = mRadius * std::cos(mPhi);
float z = mRadius * std::sin(mPhi) * std::sin(mTheta);
XMVECTOR EyePosition = XMVectorSet(x, y, z, 1);
XMVECTOR FocusPosition = XMVectorZero();
XMVECTOR UpDirection = XMVectorSet(0, 1, 0, 0);
mView = XMMatrixLookAtLH(EyePosition, FocusPosition, UpDirection);
}
And here is how I update the camera position on mouse move.
glfwSetCursorPosCallback(mMainWindow, [](GLFWwindow* window, double xpos, double ypos)
{
TestApp* app = reinterpret_cast<TestApp*>(glfwGetWindowUserPointer(window));
if (glfwGetMouseButton(window, GLFW_MOUSE_BUTTON_LEFT) == GLFW_PRESS)
{
float dx = 0.25f * XMConvertToRadians(xpos - app->mLastMousePos.x);
float dy = 0.25f * XMConvertToRadians(ypos - app->mLastMousePos.y);
app->mTheta += dx;
app->mPhi += dy;
app->mPhi = std::clamp(app->mPhi, 0.1f, XM_PI - 0.1f);
}
else if (glfwGetMouseButton(window, GLFW_MOUSE_BUTTON_RIGHT) == GLFW_PRESS)
{
float dx = 0.05f * XMConvertToRadians(xpos - app->mLastMousePos.x);
float dy = 0.05f * XMConvertToRadians(ypos - app->mLastMousePos.y);
app->mRadius += (dx - dy);
app->mRadius = std::clamp(app->mRadius, 3.f, 15.f);
}
app->mLastMousePos = XMFLOAT2(xpos, ypos);
});
Thanks.
The root problem here was in how the constant buffer is updated from the CPU.
HLSL defaults to column-major matrix layout per the Microsoft Docs, while DirectXMath uses row-major matrices, so you have to transpose the matrices when updating the constant buffer.
Alternatively, you can declare the HLSL matrix with the row_major keyword, use #pragma pack_matrix(row_major), or compile with the /Zpr switch.
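A minimal sketch of the transpose approach, assuming a dynamic, CPU-writable constant buffer holding a single float4x4 (the names ObjectConstants, gWorldViewProj and UploadWorldViewProj are placeholders, not the demo's actual code):
#include <d3d11.h>
#include <DirectXMath.h>
#include <cstring>
// CPU-side mirror of a hypothetical: cbuffer cbPerObject { float4x4 gWorldViewProj; };
struct ObjectConstants
{
    DirectX::XMFLOAT4X4 gWorldViewProj;
};
// Transpose on the CPU so the row-major DirectXMath matrix matches HLSL's
// default column-major layout in the constant buffer.
void UploadWorldViewProj(ID3D11DeviceContext* context,
                         ID3D11Buffer* constantBuffer,
                         DirectX::FXMMATRIX world,
                         DirectX::CXMMATRIX view,
                         DirectX::CXMMATRIX proj)
{
    ObjectConstants cb;
    DirectX::XMStoreFloat4x4(&cb.gWorldViewProj,
                             DirectX::XMMatrixTranspose(world * view * proj));
    D3D11_MAPPED_SUBRESOURCE mapped = {};
    context->Map(constantBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
    std::memcpy(mapped.pData, &cb, sizeof(cb));
    context->Unmap(constantBuffer, 0);
}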

OpenGL ray tracing using inverse transformations

I have a pipeline that uses model, view and projection matrices to render a triangle mesh.
I am trying to implement a ray tracer that will pick out the object I'm clicking on by transforming the ray origin and direction by the inverses of those matrices.
When I just had a model (no view or projection) in the vertex shader I had
Vector4f ray_origin = model.inverse() * Vector4f(xworld, yworld, 0, 1);
Vector4f ray_direction = model.inverse() * Vector4f(0, 0, -1, 0);
and everything worked perfectly. However, I added a view and projection matrix and then changed the code to be
Vector4f ray_origin = model.inverse() * view.inverse() * projection.inverse() * Vector4f(xworld, yworld, 0, 1);
Vector4f ray_direction = model.inverse() * view.inverse() * projection.inverse() * Vector4f(0, 0, -1, 0);
and nothing is working anymore. What am I doing wrong?
If you use a perspective projection, then I recommend defining the ray by a point on the near plane and another one on the far plane, in normalized device space. The z coordinate of the near plane is -1 and the z coordinate of the far plane is 1. The x and y coordinates have to be the "click" position on the screen in the range [-1, 1]: the coordinate of the bottom left is (-1, -1) and the coordinate of the top right is (1, 1). The window or mouse coordinates can be mapped linearly to the NDC x and y coordinates:
float x_ndc = 2.0 * mouse_x/window_width - 1.0;
float y_ndc = 1.0 - 2.0 * mouse_y/window_height; // flipped
Vector4f p_near_ndc = Vector4f(x_ndc, y_ndc, -1, 1); // z near = -1
Vector4f p_far_ndc = Vector4f(x_ndc, y_ndc, 1, 1); // z far = 1
A point in normalized device space can be transformed to model space by the inverse projection matrix, then the inverse view matrix and finally the inverse model matrix:
Vector4f p_near_h = model.inverse() * view.inverse() * projection.inverse() * p_near_ndc;
Vector4f p_far_h = model.inverse() * view.inverse() * projection.inverse() * p_far_ndc;
After this, the points are homogeneous coordinates, which can be converted to Cartesian coordinates by a perspective divide:
Vector3f p0 = p_near_h.head<3>() / p_near_h.w();
Vector3f p1 = p_far_h.head<3>() / p_far_h.w();
The "ray" in model space, defined by point r and a normalized direction d finally is:
Vector3f r = p0;
Vector3f d = (p1 - p0).normalized()
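For reference, the pieces above can be collected into one helper (a sketch with the same Eigen types; the function name and the explicit window parameters are just for illustration):
#include <Eigen/Dense>
using Eigen::Matrix4f;
using Eigen::Vector3f;
using Eigen::Vector4f;
// Builds the picking ray in model space from a mouse position
// (window origin at the top left).
void mouseRayInModelSpace(float mouse_x, float mouse_y,
                          float window_width, float window_height,
                          const Matrix4f& model, const Matrix4f& view,
                          const Matrix4f& projection,
                          Vector3f& r, Vector3f& d)
{
    float x_ndc = 2.0f * mouse_x / window_width - 1.0f;
    float y_ndc = 1.0f - 2.0f * mouse_y / window_height; // flipped
    Vector4f p_near_ndc(x_ndc, y_ndc, -1.0f, 1.0f);
    Vector4f p_far_ndc (x_ndc, y_ndc,  1.0f, 1.0f);
    // Same as model.inverse() * view.inverse() * projection.inverse()
    Matrix4f inv = (projection * view * model).inverse();
    Vector4f p_near_h = inv * p_near_ndc;
    Vector4f p_far_h  = inv * p_far_ndc;
    Vector3f p0 = p_near_h.head<3>() / p_near_h.w();
    Vector3f p1 = p_far_h.head<3>()  / p_far_h.w();
    r = p0;                      // ray point
    d = (p1 - p0).normalized();  // ray direction
}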

Reflect camera in a plane

I have a camera, which is defined through an up vector, a position and a reference point (camera looks at this point). Furthermore I can calculate the view direction, of course.
Now I tried to reflect this camera in a plane (e.g. z = 0). My first attempt was to reflect every single vector in the plane with the corresponding reflection matrix, and it looked like this:
mat4 mReflection = mat4(1, 0, 0, 0,
0, 1, 0, 0,
0, 0, -1, 0,
0, 0, 0, 1);
up = mReflection * up;
position = mReflection * position;
lookAt = mReflection * lookAt;
But this didn't work very well and I don't know why. What is wrong with this method?

OpenTK matrix transformations

Here's the vertex shader:
uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;
void main(void)
{
gl_Position = projection * view * model * gl_Vertex;
gl_TexCoord[0] = gl_MultiTexCoord0;
}
My understanding is that, through the various transformations, model space is eventually turned into clip space, which is a box bounded by [-1, 1] on each axis and drawn directly to the viewport, i.e. something at (-1, 1, 0) is at the top left of the viewport. When I remove all matrix transforms from the shader,
gl_Position = gl_Vertex;
and pass in, as the model, a simple quad
public Vector3[] verts = new Vector3[] {
new Vector3(-1f, -1f, 0),
new Vector3(1f, -1f, 0),
new Vector3(1f, 1f, 0),
new Vector3(-1f, 1f, 0),
};
public Vector2[] coords = new Vector2[] {
new Vector2(0, 1f),
new Vector2(1f, 1f),
new Vector2(1f, 0f),
new Vector2(0f, 0f),
};
public uint[] indices = new uint[] {
0,1,2,
0,2,3,
};
I get the expected full screen image. When I apply the transformations, the image appears as
a small square in the centre of the screen, as you'd expect. The problem arises when I try to calculate the position of a vertex of the model in clip coordinates on the CPU:
public Vector4 testMult(Vector4 v, Matrix4 m)
{
return new Vector4(
m.M11 * v.X + m.M12 * v.Y + m.M13 * v.Z + m.M14 * v.W,
m.M21 * v.X + m.M22 * v.Y + m.M23 * v.Z + m.M24 * v.W,
m.M31 * v.X + m.M32 * v.Y + m.M33 * v.Z + m.M34 * v.W,
m.M41 * v.X + m.M42 * v.Y + m.M43 * v.Z + m.M44 * v.W);
}
Matrix4 test = (GlobalDrawer.projectionMatrix * GlobalDrawer.viewMatrix) * modelMatrix;
Vector4 testv = (new Vector4(1f, 1f, 0, 1));
Console.WriteLine("Test Input: " + testv);
Console.WriteLine("Test Output: " + Vector4.Transform(testv, test));
Vector4 testv2 = testMult(testv, test);
Console.WriteLine("Test Output: " + testv2);
Console.WriteLine("Test Output division: " + testv2 / testv2.W);
(The matrices passed in are identical to the ones passed to the shader)
The program then proceeds to give output outside of clip space, and the division by W leads to divisions by 0:
Test Input: (1, 1, 0, 1)
Test Output: (0.9053301, 1.207107, -2.031746, 0)
Test Output: (0.9053301, 1.207107, -1, 0)
Test Output division: (Infinity, Infinity, -Infinity, NaN)
The matrices are created as follows:
projectionMatrix = Matrix4.CreatePerspectiveFieldOfView((float)Math.PI / 4, window.Width / (float)window.Height, 1.0f, 64.0f);
projectionMatrix =
(1.81066, 0, 0, 0)
(0, 2.414213, 0, 0)
(0, 0, -1.031746, -1)
(0, 0, -2.031746, 0)
viewMatrix = Matrix4.LookAt(new Vector3(0,0,4), -Vector3.UnitZ, Vector3.UnitY);
viewMatrix =
(1, 0, 0, 0)
(0, 1, 0, 0)
(0, 0, 1, 0)
(0, 0, -4, 1)
modelMatrix =
(0.5, 0 , 0 , 0)
(0 , 0.5, 0 , 0)
(0 , 0 , 1 , 0)
(0 , 0 , 0 , 1)
So, the question is why; what am I doing wrong?
Edit (Adding real answer from comment)
Your OpenTK matrices are transposed by default: OpenTK appears to use row vectors instead of column vectors. Therefore you need to do the multiplication as (model * view * proj), not (proj * view * model). Either that, or transpose all the matrices before uploading them.
Actually clip space is not from -1 to 1, but rather from -W to W, where W is the fourth component of the clip space vector.
What you're probably thinking of is called normalized device coordinates, which range from -1 to 1 on each axis. You get this value by dividing the X, Y, and Z coordinates of the clip space vector by the clip space W component. This division is called perspective division.
This is done behind the scenes after you pass the clip space coordinate to gl_Position.
Your clip space W coordinate is 0, though, which doesn't seem correct to me.
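To make the divide concrete, here is a tiny numerical illustration (the w value below is made up, not taken from the output above):
#include <cstdio>
int main()
{
    // A well-formed clip space coordinate has w != 0 ...
    float clip[4] = { 0.9f, 1.2f, -2.0f, 4.0f };
    // ... and dividing x, y, z by w yields NDC values in [-1, 1].
    std::printf("ndc = (%f, %f, %f)\n",
                clip[0] / clip[3], clip[1] / clip[3], clip[2] / clip[3]);
    // With w == 0, as in the output above, the divide blows up, which is
    // why the CPU-side result prints Infinity/NaN.
    return 0;
}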
There's some more detail here: OpenGL FAQ : Transformations.