Incorrect render of a cube mesh in DirectX 11 - C++

I am practicing DirectX 11 following Frank Luna's book.
I have implemented a demo that renders a cube, but the result is not correct.
https://i.imgur.com/2uSkEiq.gif
As I hope you can see from the image (I apologize for the low quality), the camera seems to be "trapped" inside the cube even when I move it away, and there is also a frustum clipping problem.
I think the problem is therefore in the definition of the projection matrix.
Here is the definition of the cube vertices.
std::vector<Vertex> vertices =
{
    {XMFLOAT3(-1, -1, -1), XMFLOAT4(1, 1, 1, 1)},
    {XMFLOAT3(-1, +1, -1), XMFLOAT4(0, 0, 0, 1)},
    {XMFLOAT3(+1, +1, -1), XMFLOAT4(1, 0, 0, 1)},
    {XMFLOAT3(+1, -1, -1), XMFLOAT4(0, 1, 0, 1)},
    {XMFLOAT3(-1, -1, +1), XMFLOAT4(0, 0, 1, 1)},
    {XMFLOAT3(-1, +1, +1), XMFLOAT4(1, 1, 0, 1)},
    {XMFLOAT3(+1, +1, +1), XMFLOAT4(0, 1, 1, 1)},
    {XMFLOAT3(+1, -1, +1), XMFLOAT4(1, 0, 1, 1)},
};
Here is how I calculate the view and projection matrices.
void TestApp::OnResize()
{
    D3DApp::OnResize();

    mProj = XMMatrixPerspectiveFovLH(XM_PIDIV4, AspectRatio(), 1, 1000);
}
void TestApp::UpdateScene(float dt)
{
    float x = mRadius * std::sin(mPhi) * std::cos(mTheta);
    float y = mRadius * std::cos(mPhi);
    float z = mRadius * std::sin(mPhi) * std::sin(mTheta);

    XMVECTOR EyePosition = XMVectorSet(x, y, z, 1);
    XMVECTOR FocusPosition = XMVectorZero();
    XMVECTOR UpDirection = XMVectorSet(0, 1, 0, 0);

    mView = XMMatrixLookAtLH(EyePosition, FocusPosition, UpDirection);
}
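(For reference, this is the usual spherical-to-Cartesian orbit parametrization, with phi measured down from the +Y axis and theta around it:

x = r·sin(phi)·cos(theta), y = r·cos(phi), z = r·sin(phi)·sin(theta)

which is why the mouse handler below clamps phi to (0.1, pi - 0.1): it keeps the view direction from becoming parallel to the up vector at the poles, where sin(phi) = 0.)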
And here is how I update the camera position on mouse move.
glfwSetCursorPosCallback(mMainWindow, [](GLFWwindow* window, double xpos, double ypos)
{
    TestApp* app = reinterpret_cast<TestApp*>(glfwGetWindowUserPointer(window));

    if (glfwGetMouseButton(window, GLFW_MOUSE_BUTTON_LEFT) == GLFW_PRESS)
    {
        float dx = 0.25f * XMConvertToRadians(static_cast<float>(xpos) - app->mLastMousePos.x);
        float dy = 0.25f * XMConvertToRadians(static_cast<float>(ypos) - app->mLastMousePos.y);

        app->mTheta += dx;
        app->mPhi += dy;
        app->mPhi = std::clamp(app->mPhi, 0.1f, XM_PI - 0.1f);
    }
    else if (glfwGetMouseButton(window, GLFW_MOUSE_BUTTON_RIGHT) == GLFW_PRESS)
    {
        float dx = 0.05f * XMConvertToRadians(static_cast<float>(xpos) - app->mLastMousePos.x);
        float dy = 0.05f * XMConvertToRadians(static_cast<float>(ypos) - app->mLastMousePos.y);

        app->mRadius += (dx - dy);
        app->mRadius = std::clamp(app->mRadius, 3.f, 15.f);
    }

    app->mLastMousePos = XMFLOAT2(static_cast<float>(xpos), static_cast<float>(ypos));
});
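Note that the reinterpret_cast above relies on the user pointer having been set during initialization, presumably something like this (not shown in the question):

glfwSetWindowUserPointer(mMainWindow, this);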
Thanks.

The root problem turned out to be in how the constant buffer was updated from the CPU.
Per the Microsoft Docs, HLSL defaults to column-major matrix packing, while DirectXMath produces row-major matrices, so you have to transpose them when updating the constant buffer.
Alternatively, you can declare the HLSL matrix with the row_major keyword, use #pragma pack_matrix(row_major), or compile with the /Zpr switch.
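For example, here is a minimal sketch of the fix (cbPerObject, gWorldViewProj, mWorld, mObjectCB and mContext are illustrative names, not from the original code):

// DirectXMath produces row-major matrices; HLSL packs column-major by
// default, so transpose before copying into the constant buffer.
XMMATRIX worldViewProj = mWorld * mView * mProj;

cbPerObject cb;
XMStoreFloat4x4(&cb.gWorldViewProj, XMMatrixTranspose(worldViewProj));
mContext->UpdateSubresource(mObjectCB, 0, nullptr, &cb, 0, 0);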

Related

GLM LookAt with different coordinate system

I am using GLM to make a LookAt matrix. I use the normal OpenGL coordinate system, but with the Z axis going inwards, which is the opposite of the OpenGL standard. Thus, the LookAt function requires some changes:
glm::vec3 pos = glm::vec3(0, 0, -10); // equal to glm::vec3(0, 0, 10) in standard coords
glm::quat rot = glm::quat(0.991445, 0.130526, 0, 0); // 15 degrees rotation about the x axis
glm::vec3 resultPos = pos * glm::vec3(1, 1, -1); // flip Z axis
glm::vec3 resultLook = pos + (glm::conjugate(rot) * glm::vec3(0, 0, 1)) * glm::vec3(1, 1, -1); // rotate unit Z vec and then flip Z
glm::vec3 resultUp = (glm::conjugate(rot) * glm::vec3(0, 1, 0)) * glm::vec3(1, 1, -1); // same thing as resultLook but with unit Y vec
glm::mat4 lookAt = glm::lookAt(resultPos, resultLook, resultUp);
However, that is a lot of calculation for just flipping a single axis. What do I need to do to get a view matrix which has a flipped Z axis?
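One simpler route, sketched here as a suggestion rather than a verified answer: mirror the camera inputs into standard coordinates once, then fold the Z flip into the view matrix so world positions expressed in the flipped convention are converted automatically.

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// flipZ mirrors between the custom (Z inwards) and standard OpenGL coords.
glm::mat4 flipZ = glm::scale(glm::mat4(1.0f), glm::vec3(1, 1, -1));

glm::vec3 eye(0, 0, -10);  // given in the flipped-Z convention
glm::vec3 center(0, 0, 0);
glm::vec3 up(0, 1, 0);

// Mirror the camera inputs into standard coordinates for glm::lookAt...
glm::mat4 view = glm::lookAt(glm::vec3(flipZ * glm::vec4(eye, 1)),
                             glm::vec3(flipZ * glm::vec4(center, 1)),
                             glm::vec3(flipZ * glm::vec4(up, 0)));
// ...then append the mirror so incoming flipped-Z world positions are
// converted before the view transform is applied.
view = view * flipZ;

Keep in mind that a mirror flips triangle winding, so the front-face setting (glFrontFace) may need to change accordingly.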

Convert OpenGL coord system to custom and back

I have a task to draw 3D objects on the ground using OpenGL. I use an OpenGL left-handed coord system where the Y axis is up. But the 3D objects and the camera orbiting them should use a different coord system with the following properties:
XY plane is a ground plane;
Z-axis is up;
Y-axis is "North";
X-axis is "East";
The azimuth (or horizontal) angle is [0, 360] degrees;
The elevation (or vertical) angle is [0, 90] degrees from the XY plane.
The end user uses azimuth and elevation to rotate the camera around some center. So I made the following code to convert from spherical coordinates to a quaternion:
//polar: x - radius, y - horizontal angle, z - vertical
QQuaternion CoordinateSystemGround::fromPolarToQuat(const QVector3D& polar) const {
    QQuaternion dest = QQuaternion::fromEulerAngles({0, polar.y(), polar.z()});
    //convert user coord system back to OpenGL by rotation around X-axis
    QQuaternion orig = QQuaternion::fromAxisAndAngle(1, 0, 0, -90);
    return dest * orig;
}
and back:
QVector3D CoordinateSystemGround::fromQuatToPolar(const QQuaternion& q) const {
    //convert OpenGL coord system to destination by rotation around X-axis
    QQuaternion dest = QQuaternion::fromAxisAndAngle(1, 0, 0, 90);
    QQuaternion out = q * dest;
    QVector3D euler = out.toEulerAngles();

    float hor = euler.y();
    if (hor < 0.0f)
        hor += 360.0f;

    float ver = euler.z();
    if (ver > 90.0f)
        ver = 90.0f;
    else if (ver < 0.0f)
        ver = 0.0f;

    //x changes later
    return QVector3D(0, hor, ver);
}
But it doesn't work right. I suppose the fromPolarToQuat conversion has a mistake somewhere, but I can't see where.
It seems I found the solution. To get the polar angles from a quaternion:
QVector3D CoordinateSystemGround::fromQuatToPolar(const QQuaternion& q) const {
    QQuaternion coord1 = QQuaternion::fromAxisAndAngle(0, 1, 0, -180);
    QQuaternion coord2 = QQuaternion::fromAxisAndAngle(1, 0, 0, -90);
    QQuaternion out = coord1 * coord2 * q;
    QVector3D euler = out.toEulerAngles();

    float hor = euler.y();
    if (hor < 0.0f)
        hor += 360.0f;

    float ver = euler.x();
    return QVector3D(0, hor, -ver);
}
And back:
QQuaternion CoordinateSystemGround::fromPolarToQuat(const QVector3D& polar) const {
    QQuaternion dest = QQuaternion::fromEulerAngles({-polar.z(), polar.y(), 0});
    QQuaternion coord1 = QQuaternion::fromAxisAndAngle(1, 0, 0, 90);
    QQuaternion coord2 = QQuaternion::fromAxisAndAngle(0, 1, 0, 180);
    return coord1 * coord2 * dest;
}
I'm not sure it's an optimal solution, but it works as it should.
Edit:
After some research I found a few mistakes and made an optimized and, I hope, correct version of the conversion:
QVector3D CoordinateSystemGround::fromQuatToPolar(const QQuaternion& q) const {
    //back to the OpenGL coord system: just multiply by the inverted destination coord system
    QQuaternion dest = QQuaternion::fromAxes({-1, 0, 0}, {0, 0, 1}, {0, 1, 0}).inverted();
    QVector3D euler = (dest * q).toEulerAngles();

    float hor = euler.y();
    if (hor < 0.0f)
        hor += 360.0f;

    float ver = euler.x();
    return QVector3D(0, hor, -ver);
}
And back:
QQuaternion CoordinateSystemGround::fromPolarToQuat(const QVector3D& polar) const {
    //just the rotation as if we were in the OpenGL coord system
    QQuaternion orig = QQuaternion::fromEulerAngles({-polar.z(), polar.y(), 0});
    //and then multiply by the destination coord system
    QQuaternion dest = QQuaternion::fromAxes({-1, 0, 0}, {0, 0, 1}, {0, 1, 0});
    return dest * orig;
}
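A quick round-trip check of the final pair (illustrative only; it assumes CoordinateSystemGround is default-constructible):

#include <QQuaternion>
#include <QVector3D>
#include <QtGlobal>

// polar -> quaternion -> polar should reproduce azimuth and elevation
// (the radius in x is handled elsewhere).
CoordinateSystemGround ground;
QVector3D polar(0, 45.0f, 30.0f); // azimuth 45 deg, elevation 30 deg
QQuaternion q = ground.fromPolarToQuat(polar);
QVector3D back = ground.fromQuatToPolar(q);
Q_ASSERT(qAbs(back.y() - polar.y()) < 1e-3f);
Q_ASSERT(qAbs(back.z() - polar.z()) < 1e-3f);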

OpenGL - reconstruct position from depth in VS

I am trying to reconstruct position from a depth texture in the vertex shader. Usually this is done in the pixel shader, but for some reason I need it in the VS to transform some geometry.
So, my approach:
1) I calculate the view frustum corners in view space.
I use the NDC values below as input. They are transformed via Inverse(view * proj) to put them into world space and then transformed via the view matrix.
//GL - Left Handed - need to "swap" front and back Z coordinate
MyMath::Vector4 cornersVector4[] =
{
    //front
    MyMath::Vector4(-1, -1,  1, 1), //A
    MyMath::Vector4( 1, -1,  1, 1), //B
    MyMath::Vector4( 1,  1,  1, 1), //C
    MyMath::Vector4(-1,  1,  1, 1), //D
    //back
    MyMath::Vector4(-1, -1, -1, 1), //E
    MyMath::Vector4( 1, -1, -1, 1), //F
    MyMath::Vector4( 1,  1, -1, 1), //G
    MyMath::Vector4(-1,  1, -1, 1), //H
};
If I print debug output, it seems correct (the camera position is at distance zNear from the near plane, and the far plane is far enough away).
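For comparison, the same unprojection written with GLM (a sketch with my MyMath types swapped out; note the perspective divide after the inverse transform, which is easy to forget):

#include <glm/glm.hpp>

// One NDC frustum corner unprojected back into view space.
glm::vec3 cornerToViewSpace(const glm::mat4& view, const glm::mat4& proj,
                            const glm::vec4& cornerNDC)
{
    glm::vec4 ws = glm::inverse(proj * view) * cornerNDC; // NDC -> homogeneous world
    ws /= ws.w;                                           // perspective divide
    return glm::vec3(view * ws);                          // world -> view space
}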
2) Pass the values to the shader.
3) In the shader I do this:
vec3 _cornerPos0 = cornerPos0.xyz * mat3(viewInv);
vec3 _cornerPos1 = cornerPos1.xyz * mat3(viewInv);
vec3 _cornerPos2 = cornerPos2.xyz * mat3(viewInv);
vec3 _cornerPos3 = cornerPos3.xyz * mat3(viewInv);

float x = (TEXCOORD1.x / 100.0); //TEXCOORD1.x = <0, 100>
float y = (TEXCOORD1.y / 100.0); //TEXCOORD1.y = <0, 100>

vec3 ray = mix(mix(_cornerPos0, _cornerPos1, x),
               mix(_cornerPos2, _cornerPos3, x),
               y);

float depth = texture2D(depthTexture, vec2(x, y)).r; //texture2D returns a vec4; take one channel
//depth was written in the previous draw pass as depth = vertexViewPos.z / farClipPlane;
vec3 reconstructed_posWS = camPos + (depth * ray);
But if I do this and translate my geometry from [0,0,0] to reconstructed_posWS, only part of the screen is covered. What can be incorrect?
PS: some calculations are redundant (transforming into a space and straight back out of it), but speed is not a concern at the moment.

Combining yaw and pitch together

Why doesn't resultA equal resultB? I have no idea what I'm doing wrong. Can you give me an explanation?
float alpha = glm::radians(45.0f);

glm::mat4 xRot(glm::vec4(1, 0, 0, 0),
               glm::vec4(0,  glm::cos(alpha), glm::sin(alpha), 0),
               glm::vec4(0, -glm::sin(alpha), glm::cos(alpha), 0),
               glm::vec4(0, 0, 0, 1));

glm::mat4 yRot(glm::vec4(glm::cos(alpha), 0, -glm::sin(alpha), 0),
               glm::vec4(0, 1, 0, 0),
               glm::vec4(glm::sin(alpha), 0,  glm::cos(alpha), 0),
               glm::vec4(0, 0, 0, 1));

glm::vec4 vec(0, 0, -100, 1);
glm::vec4 resultA(0.0f);
glm::vec4 resultB(0.0f);

resultA = xRot * yRot * vec; //(-70.7107, 50, -50, 1)
resultB = yRot * xRot * vec; //(-50, 70.7107, -50, 1)
3D rotations don't commute in general, except in very special cases. Thus:
xRot * yRot != yRot * xRot
Essentially what you're doing with the above is proving the point :)
See here: http://en.wikipedia.org/wiki/Commutative_property
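For a concrete special case where the order does not matter: two rotations about the same axis commute. A small sketch in the same GLM style (values are illustrative):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Rotations about the *same* axis commute: both products equal one
// 75-degree rotation about X. Mixing X and Y, as above, does not.
glm::mat4 a = glm::rotate(glm::mat4(1.0f), glm::radians(30.0f), glm::vec3(1, 0, 0));
glm::mat4 b = glm::rotate(glm::mat4(1.0f), glm::radians(45.0f), glm::vec3(1, 0, 0));
glm::vec4 v(0, 0, -100, 1);
// a * b * v == b * a * v here, unlike xRot * yRot vs. yRot * xRot.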

gluLookAt alternative doesn't work

I'm trying to calculate a lookat matrix myself, instead of using gluLookAt().
My problem is that my matrix doesn't work; using the same parameters with gluLookAt does work, however.
My way of creating a lookat matrix:
Vector3 Eye, At, Up; //these should be parameters =)

Vector3 zaxis = At - Eye; zaxis.Normalize();
Vector3 xaxis = Vector3::Cross(Up, zaxis); xaxis.Normalize();
Vector3 yaxis = Vector3::Cross(zaxis, xaxis); yaxis.Normalize();

float r[16] =
{
    xaxis.x, yaxis.x, zaxis.x, 0,
    xaxis.y, yaxis.y, zaxis.y, 0,
    xaxis.z, yaxis.z, zaxis.z, 0,
    0, 0, 0, 1,
};
Matrix Rotation;
memcpy(Rotation.values, r, sizeof(r));

float t[16] =
{
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    -Eye.x, -Eye.y, -Eye.z, 1,
};
Matrix Translation;
memcpy(Translation.values, t, sizeof(t));

View = Rotation * Translation; // I tried reversing this as well (Translation * Rotation)
Now, when I try to use this matrix by calling glMultMatrixf, nothing shows up in my engine, while using the same eye, lookat and up values with gluLookAt works perfectly, as I said before.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glMultMatrixf(View);
The problem must be somewhere in the code I posted here; I know the problem is not in my Vector3/Matrix classes, because they work fine when creating a projection matrix.
I assume you have a right-handed coordinate system (it is the default in OpenGL).
Try the following code. I think you forgot to normalize Up; you also need the right-handed cross-product order and -zaxis in the matrix, which matches what gluLookAt builds.
Vector3 Eye, At, Up; //these should be parameters =)

Vector3 zaxis = At - Eye; zaxis.Normalize();
Up.Normalize();
Vector3 xaxis = Vector3::Cross(zaxis, Up); xaxis.Normalize();
Vector3 yaxis = Vector3::Cross(xaxis, zaxis); yaxis.Normalize();

float r[16] =
{
    xaxis.x, yaxis.x, -zaxis.x, 0,
    xaxis.y, yaxis.y, -zaxis.y, 0,
    xaxis.z, yaxis.z, -zaxis.z, 0,
    0, 0, 0, 1,
};
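For completeness, a sketch of how this rotation combines with the (unchanged) translation part from the question, assuming the same Matrix class and the usual column-vector convention:

Matrix Rotation;
memcpy(Rotation.values, r, sizeof(r));

float t[16] =
{
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    -Eye.x, -Eye.y, -Eye.z, 1,
};
Matrix Translation;
memcpy(Translation.values, t, sizeof(t));

// Translate the eye to the origin first, then rotate into camera axes.
View = Rotation * Translation;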