Related
I'm using a "Dymanic Batch Renderer System", and i have an "Object.cpp" that has a function, and when it's call it returns the data it needs for the Batch to render a "Quad" on screen (also i'm gonna mention that this is on a 3D space so the Z movement, Z scaling and the XY rotation exist).
And for the math calculations i'm using the GLM library.
The rendering works fine and the batch too, the problem is the movement. The rotation actually works the way i want it to work, but the movement is what i'm not satisfied because it moves in the "Local Space" of the object. Meaning that, for example, if i rotate an object inside a batch 90° on the Y Axis, the X movement becomes Z movement, and Z movement becomes X movement.
I've been trying to look for an answer and i couldn't find anything. I think the problem probably is from the "rotationMatrix" that allows the object to rotate correctly, but i don't know if there's an extra "function" i have to add to move the object in the "World Space" instead of the "Local Space", and if there is, i don't know what "function" can be.
Now i'm gonna put here the entire code of "Object.cpp" so you guys can see how it works.
Object::Object(glm::vec3 pos, glm::vec3 rot, glm::vec3 sca, int ObjId)
    : translationMatrix(glm::mat4(0)), rotationMatrix(glm::mat4(0))
{
    id = ObjId;
    position = pos;
    lastPosition = pos + glm::vec3(1.0f);
    scale = sca;
    rotation = rot;
    lastRotation = rot + glm::vec3(1.0f);
}
glm::mat4 One(1.0f);
Vertex* Object::UpdateObject(Vertex* target)
{
    if (lastPosition != position)
    {
        translationMatrix = glm::translate(glm::identity<glm::mat4>(), -position);
        lastPosition = position;
    }
    if (lastRotation != rotation)
    {
        glm::mat4 rotMatrixTemp(1.0f);
        rotMatrixTemp = glm::rotate(rotMatrixTemp, glm::radians(rotation.x), glm::vec3(1.0f, 0.0f, 0.0f));
        rotMatrixTemp = glm::rotate(rotMatrixTemp, glm::radians(rotation.y), glm::vec3(0.0f, 1.0f, 0.0f));
        rotMatrixTemp = glm::rotate(rotMatrixTemp, glm::radians(rotation.z + 180.0f), glm::vec3(0.0f, 0.0f, 1.0f));
        rotationMatrix = -translationMatrix * rotMatrixTemp * translationMatrix;
        lastRotation = rotation;
    }
    float x = 1.0f, y = 1.0f;
    if (flipX)
        x *= -1;
    if (flipY)
        y *= -1;
    // top-left corner
    target->position = rotationMatrix * glm::vec4(position.x - 0.5f * scale.x, position.y + 0.5f * scale.y, position.z, 1.0f);
    target->color = glm::vec4(1.0f, 1.0f, 1.0f, 1.0f);
    target->texcoord = glm::vec2(0.0f, y);
    target++;
    // bottom-left corner
    target->position = rotationMatrix * glm::vec4(position.x - 0.5f * scale.x, position.y - 0.5f * scale.y, position.z, 1.0f);
    target->color = glm::vec4(1.0f, 1.0f, 1.0f, 1.0f);
    target->texcoord = glm::vec2(0.0f, 0.0f);
    target++;
    // bottom-right corner
    target->position = rotationMatrix * glm::vec4(position.x + 0.5f * scale.x, position.y - 0.5f * scale.y, position.z, 1.0f);
    target->color = glm::vec4(1.0f, 1.0f, 1.0f, 1.0f);
    target->texcoord = glm::vec2(x, 0.0f);
    target++;
    // top-right corner
    target->position = rotationMatrix * glm::vec4(position.x + 0.5f * scale.x, position.y + 0.5f * scale.y, position.z, 1.0f);
    target->color = glm::vec4(1.0f, 1.0f, 1.0f, 1.0f);
    target->texcoord = glm::vec2(x, y);
    target++;
    return target;
}
So, to recap, what I'm trying to accomplish is moving these objects in "World Space" instead of "Local Space" (while also keeping the rotation in "Local Space", if possible, because otherwise the object's center will always be (0, 0, 0) instead of its own position).
To position an object in the world with a given orientation, you usually apply the rotation first and then the translation, unless you are indeed trying to rotate the object about the origin after translating it out. So either check that, or make sure you have a good reason for having the translationMatrix in
rotationMatrix = -translationMatrix * rotMatrixTemp * translationMatrix
Because it seems like you have translation logic both inside and outside the if blocks.
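A minimal sketch of that order, reusing the names from the question (the corner offsets here are assumptions: they are defined around the origin instead of around position, so the rotation pivots on the quad's own center):

    glm::mat4 model = glm::translate(glm::mat4(1.0f), position)   // world-space move
                    * rotMatrixTemp                                // local-space rotation
                    * glm::scale(glm::mat4(1.0f), scale);          // local size
    // e.g. the top-left corner of a unit quad centered on the object:
    target->position = model * glm::vec4(-0.5f, 0.5f, 0.0f, 1.0f);

With this, X movement stays X movement in world space no matter how the quad is rotated.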
I'm trying to calculate a tight ortho projection around the camera for better shadow mapping. I first calculate the camera frustum's 8 corner points in world space with basic trigonometry, using the camera's fov, position, right, forward, near, and far parameters as follows:
PerspectiveFrustum::PerspectiveFrustum(const Camera* camera)
{
    float height = tanf(camera->GetFov() / 2.0f) * camera->GetNear();
    float width = height * Screen::GetWidth() / Screen::GetHeight();
    glm::vec3 nearTop = camera->GetUp() * camera->GetNear() * height;
    glm::vec3 nearRight = camera->GetRight() * camera->GetNear() * width;
    glm::vec3 nearCenter = camera->GetEye() + camera->GetForward() * camera->GetNear();
    glm::vec3 farTop = camera->GetUp() * camera->GetFar() * height;
    glm::vec3 farRight = camera->GetRight() * camera->GetFar() * width;
    glm::vec3 farCenter = camera->GetEye() + camera->GetForward() * camera->GetFar();
    m_RightNearBottom = nearCenter + nearRight - nearTop;
    m_RightNearTop = nearCenter + nearRight + nearTop;
    m_LeftNearBottom = nearCenter - nearRight - nearTop;
    m_LeftNearTop = nearCenter - nearRight + nearTop;
    m_RightFarBottom = farCenter + nearRight - nearTop;
    m_RightFarTop = farCenter + nearRight + nearTop;
    m_LeftFarBottom = farCenter - nearRight - nearTop;
    m_LeftFarTop = farCenter - nearRight + nearTop;
}
Then I transform the frustum into light view space and compute the min and max points on each axis to build the bounding box for the ortho projection, as follows:
inline glm::mat4 GetView() const
{
    return glm::lookAt(m_Position, glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
}
glm::mat4 DirectionalLight::GetProjection(const Camera& camera) const
{
    PerspectiveFrustum frustum = camera.GetFrustum();
    glm::mat4 lightView = GetView();
    std::array<glm::vec3, 8> frustumToLightView
    {
        lightView * glm::vec4(frustum.m_RightNearBottom, 1.0f),
        lightView * glm::vec4(frustum.m_RightNearTop, 1.0f),
        lightView * glm::vec4(frustum.m_LeftNearBottom, 1.0f),
        lightView * glm::vec4(frustum.m_LeftNearTop, 1.0f),
        lightView * glm::vec4(frustum.m_RightFarBottom, 1.0f),
        lightView * glm::vec4(frustum.m_RightFarTop, 1.0f),
        lightView * glm::vec4(frustum.m_LeftFarBottom, 1.0f),
        lightView * glm::vec4(frustum.m_LeftFarTop, 1.0f)
    };
    glm::vec3 min{ INFINITY, INFINITY, INFINITY };
    glm::vec3 max{ -INFINITY, -INFINITY, -INFINITY };
    for (unsigned int i = 0; i < frustumToLightView.size(); i++)
    {
        if (frustumToLightView[i].x < min.x)
            min.x = frustumToLightView[i].x;
        if (frustumToLightView[i].y < min.y)
            min.y = frustumToLightView[i].y;
        if (frustumToLightView[i].z < min.z)
            min.z = frustumToLightView[i].z;
        if (frustumToLightView[i].x > max.x)
            max.x = frustumToLightView[i].x;
        if (frustumToLightView[i].y > max.y)
            max.y = frustumToLightView[i].y;
        if (frustumToLightView[i].z > max.z)
            max.z = frustumToLightView[i].z;
    }
    return glm::ortho(min.x, max.x, min.y, max.y, min.z, max.z);
}
Doing this gives me an empty shadow map, so something is clearly wrong and I haven't been doing this right. Can someone tell me what I'm doing wrong, and why?
EDIT:
As said, my calculations of the frustum were wrong, and I've changed them to the following:
PerspectiveFrustum::PerspectiveFrustum(const Camera* camera)
{
    float nearHalfHeight = tanf(camera->GetFov() / 2.0f) * camera->GetNear();
    float nearHalfWidth = nearHalfHeight * Screen::GetWidth() / Screen::GetHeight();
    float farHalfHeight = tanf(camera->GetFov() / 2.0f) * camera->GetFar();
    float farHalfWidth = farHalfHeight * Screen::GetWidth() / Screen::GetHeight();
    glm::vec3 nearCenter = camera->GetEye() + camera->GetForward() * camera->GetNear();
    glm::vec3 nearTop = camera->GetUp() * nearHalfHeight;
    glm::vec3 nearRight = camera->GetRight() * nearHalfWidth;
    glm::vec3 farCenter = camera->GetEye() + camera->GetForward() * camera->GetFar();
    glm::vec3 farTop = camera->GetUp() * farHalfHeight;
    glm::vec3 farRight = camera->GetRight() * farHalfWidth;
    m_RightNearBottom = nearCenter + nearRight - nearTop;
    m_RightNearTop = nearCenter + nearRight + nearTop;
    m_LeftNearBottom = nearCenter - nearRight - nearTop;
    m_LeftNearTop = nearCenter - nearRight + nearTop;
    m_RightFarBottom = farCenter + farRight - farTop;
    m_RightFarTop = farCenter + farRight + farTop;
    m_LeftFarBottom = farCenter - farRight - farTop;
    m_LeftFarTop = farCenter - farRight + farTop;
}
I also flipped the z coordinates when creating the ortho projection, as follows:
return glm::ortho(min.x, max.x, min.y, max.y, -min.z, -max.z);
Yet still nothing renders to the depth map. Any ideas?
Here are the captured results. As you can see, the quad in the top-left corner shows the shadow map, which is completely wrong, even drawing shadows on the objects themselves as a result:
https://gfycat.com/brightwealthybass
(The smearing of the shadow map values is just an artifact of the gif compressor I used; it doesn't really happen, so there's no problem of me not clearing the z-buffer of the FBO.)
EDIT 2:
OK, a few things: GetFov() returned degrees and not radians, so I changed that.
I also tried the transformation from NDC to world space with the following code:
glm::mat4 inverseProjectViewMatrix = glm::inverse(camera.GetProjection() * camera.GetView());
std::array<glm::vec4, 8> NDC =
{
    glm::vec4{-1.0f, -1.0f, -1.0f, 1.0f},
    glm::vec4{1.0f, -1.0f, -1.0f, 1.0f},
    glm::vec4{-1.0f, 1.0f, -1.0f, 1.0f},
    glm::vec4{1.0f, 1.0f, -1.0f, 1.0f},
    glm::vec4{-1.0f, -1.0f, 1.0f, 1.0f},
    glm::vec4{1.0f, -1.0f, 1.0f, 1.0f},
    glm::vec4{-1.0f, 1.0f, 1.0f, 1.0f},
    glm::vec4{1.0f, 1.0f, 1.0f, 1.0f},
};
for (size_t i = 0; i < NDC.size(); i++)
{
    NDC[i] = inverseProjectViewMatrix * NDC[i];
    NDC[i] /= NDC[i].w;
}
The far corners of the frustum are equal to my calculation, but the near corners are off, as if my calculated near corners were halved (for x and y only).
For example:
RIGHT TOP NEAR CORNER:
my calculation yields - {0.055, 0.041, 2.9}
inverse NDC yields - {0.11, 0.082, 2.8}
So I'm not sure where my calculation went wrong; maybe you could point it out?
Even with the inverted NDC coordinates, I tried to use them as follows:
glm::mat4 DirectionalLight::GetProjection(const Camera& camera) const
{
    glm::mat4 lightView = GetView();
    glm::mat4 inverseProjectViewMatrix = glm::inverse(camera.GetProjection() * camera.GetView());
    std::array<glm::vec4, 8> NDC =
    {
        glm::vec4{-1.0f, -1.0f, 0.0f, 1.0f},
        glm::vec4{1.0f, -1.0f, 0.0f, 1.0f},
        glm::vec4{-1.0f, 1.0f, 0.0f, 1.0f},
        glm::vec4{1.0f, 1.0f, 0.0f, 1.0f},
        glm::vec4{-1.0f, -1.0f, 1.0f, 1.0f},
        glm::vec4{1.0f, -1.0f, 1.0f, 1.0f},
        glm::vec4{-1.0f, 1.0f, 1.0f, 1.0f},
        glm::vec4{1.0f, 1.0f, 1.0f, 1.0f},
    };
    for (size_t i = 0; i < NDC.size(); i++)
    {
        NDC[i] = lightView * inverseProjectViewMatrix * NDC[i];
        NDC[i] /= NDC[i].w;
    }
    glm::vec3 min{ INFINITY, INFINITY, INFINITY };
    glm::vec3 max{ -INFINITY, -INFINITY, -INFINITY };
    for (unsigned int i = 0; i < NDC.size(); i++)
    {
        if (NDC[i].x < min.x)
            min.x = NDC[i].x;
        if (NDC[i].y < min.y)
            min.y = NDC[i].y;
        if (NDC[i].z < min.z)
            min.z = NDC[i].z;
        if (NDC[i].x > max.x)
            max.x = NDC[i].x;
        if (NDC[i].y > max.y)
            max.y = NDC[i].y;
        if (NDC[i].z > max.z)
            max.z = NDC[i].z;
    }
    return glm::ortho(min.x, max.x, min.y, max.y, min.z, max.z);
}
And I still got a bad result:
https://gfycat.com/negativemalealtiplanochinchillamouse
Let's start with your frustum calculation here:
float height = tanf(camera->GetFov() / 2.0f) * camera->GetNear();
[...]
glm::vec3 nearTop = camera->GetUp() * camera->GetNear() * height;
[...]
glm::vec3 farTop = camera->GetUp() * camera->GetFar() * height;
That's one too many GetNear factors in your multiplications. Conceptually, you could let height represent half of the frustum height at unit distance (I would still name it differently) without projecting it onto the near plane; then the rest of your formulas make more sense.
However, the whole approach is doubtful to begin with. To get the frustum corners in world space, you can simply unproject all 8 vertices of the [-1,1]^3 NDC cube. Since you want to transform them into your light space, you can even combine it all into a single matrix m = lightView * inverse(projection * view); just don't forget the perspective divide after multiplying the NDC cube vertices.
return glm::ortho(min.x, max.x, min.y, max.y, min.z, max.z);
Standard GL conventions use a view space where the camera looks into the negative z direction, but the zNear and zFar parameters are interpreted as distances along the viewing direction, so the actual viewing volume ranges from -zFar to -zNear in view space. You'll have to flip the signs of your z dimension to get the actual bounding box you're looking for.
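Putting both points together, a minimal GLM sketch (assuming proj, view and lightView are at hand; the names are illustrative, not the asker's exact code):

    glm::mat4 m = lightView * glm::inverse(proj * view);
    glm::vec3 mn(INFINITY), mx(-INFINITY);
    for (int x = -1; x <= 1; x += 2)
        for (int y = -1; y <= 1; y += 2)
            for (int z = -1; z <= 1; z += 2)
            {
                glm::vec4 p = m * glm::vec4((float)x, (float)y, (float)z, 1.0f);
                p /= p.w; // perspective divide
                mn = glm::min(mn, glm::vec3(p));
                mx = glm::max(mx, glm::vec3(p));
            }
    // Distances along the viewing direction are -z in view space,
    // so near/far come from the flipped and swapped z bounds:
    glm::mat4 lightProj = glm::ortho(mn.x, mx.x, mn.y, mx.y, -mx.z, -mn.z);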
I'm attempting to rotate a cube around an axis and it's definitely behaving incorrectly. I'm assuming the problem lies in my matrix rotation code, as everything else seems to be working. I can translate the model correctly along the x, y or z axis, as well as scale it. My camera view matrix is working as expected, and so is my projection matrix. If I remove the view matrix and/or the projection matrix implementations, the problem remains.
If you wish to see what result I'm getting, it's the exact same output as the gif shown on this stackoverflow post: Rotating a cube in modern opengl... looks strange
The cube appears to fold in on itself while rotating, then returns to normal after a full rotation, and seems to rotate fine for about 20 degrees until folding in on itself again, repeating. My issue is the same as in the linked post; however, my matrix class is not the same, so my problem, though the same, seemingly has a different solution.
Here's my stripped-down matrix declaration with the possibly relevant operators:
math.h
typedef struct matrix4x4
{
    //Elements stored in ROW MAJOR ORDER
    GLfloat matrix[16];
    void translate(Vector3f translation);
    void rotateX(GLfloat angle);
    void rotateY(GLfloat angle);
    void rotateZ(GLfloat angle);
    void rotate(Vector3f angles);
    void scale(Vector3f scales);
    void scale(GLfloat scale);
    inline matrix4x4& operator*=(const matrix4x4& rhs)
    {
        this->matrix[0] = this->matrix[0] * rhs.matrix[0] + this->matrix[1] * rhs.matrix[4] + this->matrix[2] * rhs.matrix[8] + this->matrix[3] * rhs.matrix[12];
        this->matrix[1] = this->matrix[0] * rhs.matrix[1] + this->matrix[1] * rhs.matrix[5] + this->matrix[2] * rhs.matrix[9] + this->matrix[3] * rhs.matrix[13];
        this->matrix[2] = this->matrix[0] * rhs.matrix[2] + this->matrix[1] * rhs.matrix[6] + this->matrix[2] * rhs.matrix[10] + this->matrix[3] * rhs.matrix[14];
        this->matrix[3] = this->matrix[0] * rhs.matrix[3] + this->matrix[1] * rhs.matrix[7] + this->matrix[2] * rhs.matrix[11] + this->matrix[3] * rhs.matrix[15];
        this->matrix[4] = this->matrix[4] * rhs.matrix[0] + this->matrix[5] * rhs.matrix[4] + this->matrix[6] * rhs.matrix[8] + this->matrix[7] * rhs.matrix[12];
        this->matrix[5] = this->matrix[4] * rhs.matrix[1] + this->matrix[5] * rhs.matrix[5] + this->matrix[6] * rhs.matrix[9] + this->matrix[7] * rhs.matrix[13];
        this->matrix[6] = this->matrix[4] * rhs.matrix[2] + this->matrix[5] * rhs.matrix[6] + this->matrix[6] * rhs.matrix[10] + this->matrix[7] * rhs.matrix[14];
        this->matrix[7] = this->matrix[4] * rhs.matrix[3] + this->matrix[5] * rhs.matrix[7] + this->matrix[6] * rhs.matrix[11] + this->matrix[7] * rhs.matrix[15];
        this->matrix[8] = this->matrix[8] * rhs.matrix[0] + this->matrix[9] * rhs.matrix[4] + this->matrix[10] * rhs.matrix[8] + this->matrix[11] * rhs.matrix[12];
        this->matrix[9] = this->matrix[8] * rhs.matrix[1] + this->matrix[9] * rhs.matrix[5] + this->matrix[10] * rhs.matrix[9] + this->matrix[11] * rhs.matrix[13];
        this->matrix[10] = this->matrix[8] * rhs.matrix[2] + this->matrix[9] * rhs.matrix[6] + this->matrix[10] * rhs.matrix[10] + this->matrix[11] * rhs.matrix[14];
        this->matrix[11] = this->matrix[8] * rhs.matrix[3] + this->matrix[9] * rhs.matrix[7] + this->matrix[10] * rhs.matrix[11] + this->matrix[11] * rhs.matrix[15];
        this->matrix[12] = this->matrix[12] * rhs.matrix[0] + this->matrix[13] * rhs.matrix[4] + this->matrix[14] * rhs.matrix[8] + this->matrix[15] * rhs.matrix[12];
        this->matrix[13] = this->matrix[12] * rhs.matrix[1] + this->matrix[13] * rhs.matrix[5] + this->matrix[14] * rhs.matrix[9] + this->matrix[15] * rhs.matrix[13];
        this->matrix[14] = this->matrix[12] * rhs.matrix[2] + this->matrix[13] * rhs.matrix[6] + this->matrix[14] * rhs.matrix[10] + this->matrix[15] * rhs.matrix[14];
        this->matrix[15] = this->matrix[12] * rhs.matrix[3] + this->matrix[13] * rhs.matrix[7] + this->matrix[14] * rhs.matrix[11] + this->matrix[15] * rhs.matrix[15];
        return *this;
    }
} matrix4x4;
matrix4x4 createTransformationMatrix(Vector3f translation, Vector3f rotation, Vector3f scale);
matrix4x4 createPerspectiveProjectionMatrix(GLfloat width, GLfloat height, GLfloat fov, GLfloat nearPlane, GLfloat farPlane);
matrix4x4 createViewMatrix(Vector3f cameraPosition, GLfloat cameraPitch, GLfloat cameraYaw, GLfloat cameraRoll);
and its relevant implementations:
math.cpp
matrix4x4::matrix4x4(GLfloat elements[])
{
    //Elements stored in ROW MAJOR ORDER
    for (unsigned int i = 0; i <= elementCount; i++)
    {
        matrix[i] = elements[i];
    }
}
void matrix4x4::setIdentity()
{
    std::fill(matrix, matrix + sizeof(matrix) / sizeof(GLfloat), 0.0f);
    matrix[0] = 1;
    matrix[5] = 1;
    matrix[10] = 1;
    matrix[15] = 1;
}
/*/////////////////////////////////////////////////////
math
/////////////////////////////////////////////////////*/
void matrix4x4::translate(Vector3f translation)
{
    GLfloat transformElements[16] =
    {
        1.0f, 0.0f, 0.0f, translation.x,
        0.0f, 1.0f, 0.0f, translation.y,
        0.0f, 0.0f, 1.0f, translation.z,
        0.0f, 0.0f, 0.0f, 1.0f
    };
    matrix4x4 transform = matrix4x4(transformElements);
    *this *= transform;
}
void matrix4x4::rotateX(GLfloat angle)
{
    angle = degreesToRadians(angle);
    GLfloat transformElements[16] =
    {
        1.0f, 0.0f, 0.0f, 0.0f,
        0.0f, std::cos(-angle), -std::sin(-angle), 0.0f,
        0.0f, std::sin(-angle), std::cos(-angle), 0.0f,
        0.0f, 0.0f, 0.0f, 1.0f
    };
    matrix4x4 transform = matrix4x4(transformElements);
    *this *= transform;
}
void matrix4x4::rotateY(GLfloat angle)
{
    angle = degreesToRadians(angle);
    GLfloat transformElements[16] =
    {
        std::cos(-angle), 0.0f, std::sin(-angle), 0.0f,
        0.0f, 1.0f, 0.0f, 0.0f,
        -std::sin(-angle), 0.0f, std::cos(-angle), 0.0f,
        0.0f, 0.0f, 0.0f, 1.0f
    };
    matrix4x4 transform = matrix4x4(transformElements);
    *this *= transform;
}
void matrix4x4::rotateZ(GLfloat angle)
{
    angle = degreesToRadians(angle);
    GLfloat transformElements[16] =
    {
        std::cos(-angle), -std::sin(-angle), 0.0f, 0.0f,
        std::sin(-angle), std::cos(-angle), 0.0f, 0.0f,
        0.0f, 0.0f, 1.0f, 0.0f,
        0.0f, 0.0f, 0.0f, 1.0f
    };
    matrix4x4 transform = matrix4x4(transformElements);
    *this *= transform;
}
void matrix4x4::rotate(Vector3f angles)
{
    matrix4x4 transform = matrix4x4();
    transform.setIdentity();
    transform.rotateX(angles.x);
    transform.rotateY(angles.y);
    transform.rotateZ(angles.z);
    *this *= transform;
}
void matrix4x4::scale(Vector3f scales)
{
    GLfloat transformElements[16] =
    {
        scales.x, 0.0f, 0.0f, 0.0f,
        0.0f, scales.y, 0.0f, 0.0f,
        0.0f, 0.0f, scales.z, 0.0f,
        0.0f, 0.0f, 0.0f, 1.0f
    };
    matrix4x4 transform = matrix4x4(transformElements);
    *this *= transform;
}
matrix4x4 createTransformationMatrix(Vector3f translation, Vector3f rotation, Vector3f scale)
{
    matrix4x4 transformationMatrix;
    transformationMatrix.setIdentity();
    //I've tried changing the order of these around, as well as only doing one
    //operation (skipping translate and scale, or everything but a single axis rotation)
    transformationMatrix.translate(translation);
    transformationMatrix.rotate(rotation);
    transformationMatrix.scale(scale);
    return transformationMatrix;
}
matrix4x4 createPerspectiveProjectionMatrix(GLfloat width, GLfloat height, GLfloat fov, GLfloat nearPlane, GLfloat farPlane)
{
    matrix4x4 projectionMatrix;
    projectionMatrix.setIdentity();
    GLfloat aspectRatio = width / height;
    projectionMatrix.matrix[0] = (1.0f / std::tan((degreesToRadians(fov)) / 2.0f) / aspectRatio);
    projectionMatrix.matrix[5] = 1.0f / std::tan((degreesToRadians(fov)) / 2.0f);
    projectionMatrix.matrix[10] = (farPlane + nearPlane) / (nearPlane - farPlane);
    projectionMatrix.matrix[11] = (2.0f * farPlane * nearPlane) / (nearPlane - farPlane);
    projectionMatrix.matrix[14] = -1.0f;
    return projectionMatrix;
}
I know my matrix/vector implementations are quick and dirty, but I'm just trying to get something set up. I plan to make the math methods (scale, translate, etc.) static methods that don't modify the matrix's contents, but instead accept a matrix as input and return a new one... but that's not the issue right now.
Here's my vertex shader
#version 330 core
//declare inputs
in vec3 position;
in vec2 textureCoords;
//declare output
out vec2 pass_textureCoords;
//uniforms
uniform mat4 transformationMatrix;
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
void main(void)
{
    //tell OpenGL where to render the vertex on screen
    gl_Position = projectionMatrix * viewMatrix * transformationMatrix * vec4(position.x, position.y, position.z, 1.0);
    pass_textureCoords = textureCoords;
}
My render method...
void Renderer::render(Entity entity, Shader* shader)
{
    ...
    RawModel* rawModel = texturedModel->getRawModel();
    glBindVertexArray(rawModel->getVaoID());
    ...
    matrix4x4 transformationMatrix = createTransformationMatrix(entity.getPosition(), entity.getRotation(), entity.getScale());
    shader->loadTransformationMatrix(transformationMatrix);
    ...
    glDrawElements(GL_TRIANGLES, rawModel->getVertexCount(), GL_UNSIGNED_INT, 0);
    ...
}
And finally, the relevant pieces from my main: the cube definition and so on.
//This is a simple cube
std::vector<GLfloat> vertices =
{
-0.5f,0.5f,-0.5f,
-0.5f,-0.5f,-0.5f,
0.5f,-0.5f,-0.5f,
0.5f,0.5f,-0.5f,
-0.5f,0.5f,0.5f,
-0.5f,-0.5f,0.5f,
0.5f,-0.5f,0.5f,
0.5f,0.5f,0.5f,
0.5f,0.5f,-0.5f,
0.5f,-0.5f,-0.5f,
0.5f,-0.5f,0.5f,
0.5f,0.5f,0.5f,
-0.5f,0.5f,-0.5f,
-0.5f,-0.5f,-0.5f,
-0.5f,-0.5f,0.5f,
-0.5f,0.5f,0.5f,
-0.5f,0.5f,0.5f,
-0.5f,0.5f,-0.5f,
0.5f,0.5f,-0.5f,
0.5f,0.5f,0.5f,
-0.5f,-0.5f,0.5f,
-0.5f,-0.5f,-0.5f,
0.5f,-0.5f,-0.5f,
0.5f,-0.5f,0.5f
};
std::vector<GLfloat> textureCoords =
{
...
};
std::vector<GLuint> indices =
{
0,1,3,
3,1,2,
4,5,7,
7,5,6,
8,9,11,
11,9,10,
12,13,15,
15,13,14,
16,17,19,
19,17,18,
20,21,23,
23,21,22
};
//parameters are (model, pos, rotation, scale)
Entity entity = Entity(&texturedModel, Vector3f(0.0f, 0.0f, -2.0f), Vector3f(0.0f, 0.0f, 0.0f), 1.0f);
//SHADER STUFF
Shader textureShader = Shader("uniformVarTextureShader");
textureShader.loadProjectionMatrix(display.getProjectionMatrix());
Camera cam;
//draw in wireframe mode
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
//glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
while (display.checkForClose() == 0)
{
    glfwPollEvents();
    //TO DO: update logic here
    //entity.varyPosition(+0.005f, 0.0f, -0.002f); //this works, as does scaling and camera movement
    //entity.varyRotation(0.25f, 0.18f, 0.0f);
    entity.setYRotation(entity.getYRotation() + 0.25f); //any sort of rotation operation ends up with the strange behavior
    //rendering commands here
    display.prepare();
    textureShader.bind();
    textureShader.loadViewMatrix(cam);
    display.render(entity, &textureShader);
    textureShader.stop();
    display.swapBuffers();
}
So, to recap: I'm not having any issues with translating, scaling, "camera movement", and the projection matrix appears to work as well. Any time I attempt to rotate, however, I get the exact same behavior as in the post linked above.
Final notes: I have depth testing enabled and clear the depth buffer each frame. I also pass GL_TRUE to transpose any matrix data I give to glUniformMatrix4fv. I've checked the locations of each of the uniforms and they are passing correctly; 0, 1 and 2 respectively. No -1.
I'm stumped; any help would be appreciated. I can post more code if need be, but I'm pretty sure this covers the entirety of where the problem most likely lies. Thanks again.
The major issue is the matrix multiplication operation.
Since you read from the matrix and write to it in place, some elements are already modified before you read them.
e.g. in the first line this->matrix[0] is written to:
this->matrix[0] = this->matrix[0] * rhs.matrix[0] + this->matrix[1] * rhs.matrix[4] + this->matrix[2] * rhs.matrix[8] + this->matrix[3] * rhs.matrix[12];
and in the second line this->matrix[0] is read again:
this->matrix[1] = this->matrix[0] * rhs.matrix[1] + this->matrix[1] * rhs.matrix[5] + this->matrix[2] * rhs.matrix[9] + this->matrix[3] * rhs.matrix[13];
Copy the matrix array to a local variable to solve the issue:
matrix4x4& operator*=(const matrix4x4& rhs)
{
matrix4x4 act( this->matrix );
this->matrix[0] = act.matrix[0] * rhs.matrix[0] + act.matrix[1] * rhs.matrix[4] + act.matrix[2] * rhs.matrix[8] + act.matrix[3] * rhs.matrix[12];
this->matrix[1] = act.matrix[0] * rhs.matrix[1] + act.matrix[1] * rhs.matrix[5] + act.matrix[2] * rhs.matrix[9] + act.matrix[3] * rhs.matrix[13];
....
return *this;
}
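For reference, a loop-based version of the same fix (a sketch equivalent to writing out all 16 terms, using the struct's row-major layout):

    matrix4x4& operator*=(const matrix4x4& rhs)
    {
        matrix4x4 act(this->matrix); // snapshot of the left operand
        for (int i = 0; i < 4; ++i)      // row
            for (int j = 0; j < 4; ++j)  // column
            {
                GLfloat sum = 0.0f;
                for (int k = 0; k < 4; ++k)
                    sum += act.matrix[i * 4 + k] * rhs.matrix[k * 4 + j];
                this->matrix[i * 4 + j] = sum;
            }
        return *this;
    }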
By the way, since you multiply the vector by the matrix from the right in the shader,
gl_Position = projectionMatrix * viewMatrix * transformationMatrix * vec4(position.x, position.y, position.z, 1.0);
the matrix has to be initialized in column-major order:
mat4 m44 = mat4(
    vec4( Xx, Xy, Xz, 0.0),
    vec4( Yx, Yy, Yz, 0.0),
    vec4( Zx, Zy, Zz, 0.0),
    vec4( Tx, Ty, Tz, 1.0) );
Note that your matrices are initialized in row-major order, e.g. matrix4x4::translate:
GLfloat transformElements[16] =
{
    1.0f, 0.0f, 0.0f, translation.x,
    0.0f, 1.0f, 0.0f, translation.y,
    0.0f, 0.0f, 1.0f, translation.z,
    0.0f, 0.0f, 0.0f, 1.0f
};
So you have to transpose the matrix when you set the uniform with glUniformMatrix4fv:
glUniformMatrix4fv( ..., ..., GL_TRUE, ... );
In order to determine whether the user clicked on any of my 3D objects, I'm trying to turn the screen coordinates of the click into a vector which I then use to check whether any of my triangles got hit. To do so I'm using the XMVector3Unproject method provided by DirectX, and I'm implementing everything in C++/CX.
The problem that I’m facing is that the vector that results from unprojecting the screen coordinates is not at all as I expect it to be. The below image illustrates this:
The cursor position at the time of the click (highlighted in yellow) is visible in the isometric view on the left. As soon as I click, the vector resulting from unprojecting appears behind the model, indicated in the images by the white line penetrating the model. So instead of originating at the cursor location and going into the screen in the isometric view, it appears at a completely different position.
When I move the mouse horizontally in the isometric view while clicking, and after that move it vertically and click, the below pattern appears. All lines in the two images represent vectors resulting from clicking. The model has been removed for better visibility.
As can be seen from the above image, all vectors seem to originate from the same location. If I change the view and repeat the process, the same pattern appears, but with a different origin of the vectors.
Here are the code snippets I use to come up with this. First of all, I receive the cursor position using the code below and pass it to my "SelectObject" method together with the width and height of the drawing area:
void Demo::OnPointerPressed(Object^ sender, PointerEventArgs^ e)
{
    Point currentPosition = e->CurrentPoint->Position;
    if (m_model->SelectObject(currentPosition.X, currentPosition.Y, m_renderTargetWidth, m_renderTargetHeight))
    {
        m_RefreshImage = true;
    }
}
The “SelectObject” method looks as follows:
bool Model::SelectObject(float screenX, float screenY, float screenWidth, float screenHeight)
{
    XMMATRIX projectionMatrix = XMLoadFloat4x4(&m_modelViewProjectionConstantBufferData->projection);
    XMMATRIX viewMatrix = XMLoadFloat4x4(&m_modelViewProjectionConstantBufferData->view);
    XMMATRIX modelMatrix = XMLoadFloat4x4(&m_modelViewProjectionConstantBufferData->model);
    XMVECTOR v = XMVector3Unproject(XMVectorSet(screenX, screenY, 5.0f, 0.0f),
                                    0.0f, 0.0f, screenWidth, screenHeight, 0.0f, 1.0f,
                                    projectionMatrix, viewMatrix, modelMatrix);
    XMVECTOR rayOrigin = XMVector3Unproject(XMVectorSet(screenX, screenY, 0.0f, 0.0f),
                                            0.0f, 0.0f, screenWidth, screenHeight, 0.0f, 1.0f,
                                            projectionMatrix, viewMatrix, modelMatrix);
    // Code to retrieve v0, v1 and v2 is omitted
    if (Intersects(rayOrigin, XMVector3Normalize(v - rayOrigin), v0, v1, v2, depth))
    {
        return true;
    }
    return false;
}
Eventually the calculated vector is used by the Intersects method of the DirectX::TriangleTests namespace to detect whether a triangle got hit. I've omitted that code in the above snippet because it is not relevant to this problem.
To render these images I use an orthographic projection matrix and a camera that can be rotated around both its local x- and y-axis which generates the view matrix. The world matrix always stays the same, i.e. it is simply an identity matrix.
The view matrix is calculated as follows (based on the example in Frank Luna’s book 3D Game Programming):
void Camera::SetViewMatrix()
{
    XMFLOAT3 cameraPosition;
    XMFLOAT3 cameraXAxis;
    XMFLOAT3 cameraYAxis;
    XMFLOAT3 cameraZAxis;
    XMFLOAT4X4 viewMatrix;
    // Keep camera's axes orthogonal to each other and of unit length.
    m_cameraZAxis = XMVector3Normalize(m_cameraZAxis);
    m_cameraYAxis = XMVector3Normalize(XMVector3Cross(m_cameraZAxis, m_cameraXAxis));
    // m_cameraYAxis and m_cameraZAxis are already normalized, so there is no need
    // to normalize the below cross product of the two.
    m_cameraXAxis = XMVector3Cross(m_cameraYAxis, m_cameraZAxis);
    // Fill in the view matrix entries.
    float x = -XMVectorGetX(XMVector3Dot(m_cameraPosition, m_cameraXAxis));
    float y = -XMVectorGetX(XMVector3Dot(m_cameraPosition, m_cameraYAxis));
    float z = -XMVectorGetX(XMVector3Dot(m_cameraPosition, m_cameraZAxis));
    XMStoreFloat3(&cameraPosition, m_cameraPosition);
    XMStoreFloat3(&cameraXAxis, m_cameraXAxis);
    XMStoreFloat3(&cameraYAxis, m_cameraYAxis);
    XMStoreFloat3(&cameraZAxis, m_cameraZAxis);
    viewMatrix(0, 0) = cameraXAxis.x;
    viewMatrix(1, 0) = cameraXAxis.y;
    viewMatrix(2, 0) = cameraXAxis.z;
    viewMatrix(3, 0) = x;
    viewMatrix(0, 1) = cameraYAxis.x;
    viewMatrix(1, 1) = cameraYAxis.y;
    viewMatrix(2, 1) = cameraYAxis.z;
    viewMatrix(3, 1) = y;
    viewMatrix(0, 2) = cameraZAxis.x;
    viewMatrix(1, 2) = cameraZAxis.y;
    viewMatrix(2, 2) = cameraZAxis.z;
    viewMatrix(3, 2) = z;
    viewMatrix(0, 3) = 0.0f;
    viewMatrix(1, 3) = 0.0f;
    viewMatrix(2, 3) = 0.0f;
    viewMatrix(3, 3) = 1.0f;
    m_modelViewProjectionConstantBufferData->view = viewMatrix;
}
It is influenced by two methods which rotate the camera around its x- and y-axis:
void Camera::ChangeCameraPitch(float angle)
{
    XMMATRIX rotationMatrix = XMMatrixRotationAxis(m_cameraXAxis, angle);
    m_cameraYAxis = XMVector3TransformNormal(m_cameraYAxis, rotationMatrix);
    m_cameraZAxis = XMVector3TransformNormal(m_cameraZAxis, rotationMatrix);
}
void Camera::ChangeCameraYaw(float angle)
{
    XMMATRIX rotationMatrix = XMMatrixRotationAxis(m_cameraYAxis, angle);
    m_cameraXAxis = XMVector3TransformNormal(m_cameraXAxis, rotationMatrix);
    m_cameraZAxis = XMVector3TransformNormal(m_cameraZAxis, rotationMatrix);
}
The world / model matrix and the projection matrix are calculated as follows:
void Model::SetProjectionMatrix(float width, float height, float nearZ, float farZ)
{
    XMMATRIX orthographicProjectionMatrix = XMMatrixOrthographicRH(width, height, nearZ, farZ);
    XMFLOAT4X4 orientation = XMFLOAT4X4
    (
        1.0f, 0.0f, 0.0f, 0.0f,
        0.0f, 1.0f, 0.0f, 0.0f,
        0.0f, 0.0f, 1.0f, 0.0f,
        0.0f, 0.0f, 0.0f, 1.0f
    );
    XMMATRIX orientationMatrix = XMLoadFloat4x4(&orientation);
    XMStoreFloat4x4(&m_modelViewProjectionConstantBufferData->projection, XMMatrixTranspose(orthographicProjectionMatrix * orientationMatrix));
}
void Model::SetModelMatrix()
{
    XMFLOAT4X4 orientation = XMFLOAT4X4
    (
        1.0f, 0.0f, 0.0f, 0.0f,
        0.0f, 1.0f, 0.0f, 0.0f,
        0.0f, 0.0f, 1.0f, 0.0f,
        0.0f, 0.0f, 0.0f, 1.0f
    );
    XMMATRIX orientationMatrix = XMLoadFloat4x4(&orientation);
    XMStoreFloat4x4(&m_modelViewProjectionConstantBufferData->model, XMMatrixTranspose(orientationMatrix));
}
Frankly speaking, I do not yet understand the problem I'm facing. I'd be grateful if anyone with deeper insight could give me some hints as to where I need to apply changes so that the vector calculated by the unprojection starts at the cursor position and moves into the screen.
Edit 1:
I assume it has to do with the fact that my camera is located at (0, 0, 0) in world coordinates. The camera rotates around its local x- and y-axis. From what I understand, the view matrix created by the camera builds the plane onto which the image is projected. If that is the case, it would explain why the ray is at a somewhat "unexpected" location.
My assumption is that I need to move the camera out of the center so that it is located outside of the object. However, if I simply modify the member variable m_cameraPosition of the camera, my model gets totally distorted.
Anyone out there able and willing to help?
Thanks for your hint, Kapil. I tried the XMMatrixLookAtRH method but could not change the camera's pitch/yaw with it, so I discarded that approach and came up with generating the matrix myself.
What resolved my problem was transposing the model, view and projection matrices using XMMatrixTranspose before passing them to XMVector3Unproject. So instead of having the code as follows:
XMMATRIX projectionMatrix = XMLoadFloat4x4(&m_modelViewProjectionConstantBufferData->projection);
XMMATRIX viewMatrix = XMLoadFloat4x4(&m_modelViewProjectionConstantBufferData->view);
XMMATRIX modelMatrix = XMLoadFloat4x4(&m_modelViewProjectionConstantBufferData->model);
XMVECTOR rayBegin = XMVector3Unproject(XMVectorSet(screenX, screenY, -m_boundingSphereRadius, 0.0f),
                                       0.0f, 0.0f, screenWidth, screenHeight, 0.0f, 1.0f,
                                       projectionMatrix, viewMatrix, modelMatrix);
it needs to be
XMMATRIX projectionMatrix = XMMatrixTranspose(XMLoadFloat4x4(&m_modelViewProjectionConstantBufferData->projection));
XMMATRIX viewMatrix = XMMatrixTranspose(XMLoadFloat4x4(&m_modelViewProjectionConstantBufferData->view));
XMMATRIX modelMatrix = XMMatrixTranspose(XMLoadFloat4x4(&m_modelViewProjectionConstantBufferData->model));
XMVECTOR rayBegin = XMVector3Unproject(XMVectorSet(screenX, screenY, -m_boundingSphereRadius, 0.0f),
                                       0.0f, 0.0f, screenWidth, screenHeight, 0.0f, 1.0f,
                                       projectionMatrix, viewMatrix, modelMatrix);
It's not entirely clear to me why I need to transpose the matrices before passing them to the unproject method. However, I suspect it is related to the issue I'm facing when I move my camera. That problem has already been described here on StackOverflow in this posting.
I did not manage to solve that problem yet. Simply transposing the view matrix does not resolve it. However, my main problem is solved and my model is finally clickable.
If anyone has anything to add and shine some light on why the matrices need to be transposed or why moving the camera distorts the model please go ahead and post comments or answers.
I used the XMMatrixLookAtRH API in the Model::SetViewMatrix() function to calculate the view matrix and got decent values for the v and rayOrigin vectors.
For example:
XMStoreFloat4x4(
    &m_modelViewProjectionConstantBufferData->view,
    XMMatrixLookAtRH(m_cameraPosition, XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f),
                     XMVectorSet(1.0f, 0.0f, 0.0f, 0.0f))
);
Though I haven't been able to visualize the output on screen, I checked the result by computing simple values in a console application, and the vector values seem to be correct. Please check in your application and confirm.
NOTE: You have to give focal point and up direction vector parameters to use the XMMatrixLookAtRH API instead of your current approach.
I am able to get equal values for the v and rayOrigin vectors using the XMMatrixLookAtRH method as well as your custom view matrix with this code, without needing matrix transpose operations:
#include <directxmath.h>
using namespace DirectX;
XMVECTOR m_cameraXAxis;
XMVECTOR m_cameraYAxis;
XMVECTOR m_cameraZAxis;
XMVECTOR m_cameraPosition;
XMMATRIX gView;
XMMATRIX gView2;
XMMATRIX gProj;
XMMATRIX gModel;
void SetViewMatrix()
{
    XMVECTOR lTarget = XMVectorSet(2.0f, 2.0f, 2.0f, 1.0f);
    m_cameraPosition = XMVectorSet(1.0f, 1.0f, 1.0f, 1.0f);
    m_cameraZAxis = XMVector3Normalize(XMVectorSubtract(m_cameraPosition, lTarget));
    m_cameraXAxis = XMVector3Normalize(XMVector3Cross(XMVectorSet(1.0f, -1.0f, -1.0f, 0.0f), m_cameraZAxis));
    XMFLOAT3 cameraPosition;
    XMFLOAT3 cameraXAxis;
    XMFLOAT3 cameraYAxis;
    XMFLOAT3 cameraZAxis;
    XMFLOAT4X4 viewMatrix;
    // Keep camera's axes orthogonal to each other and of unit length.
    m_cameraZAxis = XMVector3Normalize(m_cameraZAxis);
    m_cameraYAxis = XMVector3Normalize(XMVector3Cross(m_cameraZAxis, m_cameraXAxis));
    // m_cameraYAxis and m_cameraZAxis are already normalized, so there is no need
    // to normalize the below cross product of the two.
    m_cameraXAxis = XMVector3Cross(m_cameraYAxis, m_cameraZAxis);
    // Fill in the view matrix entries.
    float x = -XMVectorGetX(XMVector3Dot(m_cameraPosition, m_cameraXAxis));
    float y = -XMVectorGetX(XMVector3Dot(m_cameraPosition, m_cameraYAxis));
    float z = -XMVectorGetX(XMVector3Dot(m_cameraPosition, m_cameraZAxis));
    XMStoreFloat3(&cameraPosition, m_cameraPosition);
    XMStoreFloat3(&cameraXAxis, m_cameraXAxis);
    XMStoreFloat3(&cameraYAxis, m_cameraYAxis);
    XMStoreFloat3(&cameraZAxis, m_cameraZAxis);
    viewMatrix(0, 0) = cameraXAxis.x;
    viewMatrix(1, 0) = cameraXAxis.y;
    viewMatrix(2, 0) = cameraXAxis.z;
    viewMatrix(3, 0) = x;
    viewMatrix(0, 1) = cameraYAxis.x;
    viewMatrix(1, 1) = cameraYAxis.y;
    viewMatrix(2, 1) = cameraYAxis.z;
    viewMatrix(3, 1) = y;
    viewMatrix(0, 2) = cameraZAxis.x;
    viewMatrix(1, 2) = cameraZAxis.y;
    viewMatrix(2, 2) = cameraZAxis.z;
    viewMatrix(3, 2) = z;
    viewMatrix(0, 3) = 0.0f;
    viewMatrix(1, 3) = 0.0f;
    viewMatrix(2, 3) = 0.0f;
    viewMatrix(3, 3) = 1.0f;
    gView = XMLoadFloat4x4(&viewMatrix);
    gView2 = XMMatrixLookAtRH(m_cameraPosition, XMVectorSet(2.0f, 2.0f, 2.0f, 1.0f),
                              XMVectorSet(1.0f, -1.0f, -1.0f, 0.0f));
    //m_modelViewProjectionConstantBufferData->view = viewMatrix;
    printf("yo");
}
void SetProjectionMatrix(float width, float height, float nearZ, float farZ)
{
    XMMATRIX orthographicProjectionMatrix = XMMatrixOrthographicRH(width, height, nearZ, farZ);
    XMFLOAT4X4 orientation = XMFLOAT4X4
    (
        1.0f, 0.0f, 0.0f, 0.0f,
        0.0f, 1.0f, 0.0f, 0.0f,
        0.0f, 0.0f, 1.0f, 0.0f,
        0.0f, 0.0f, 0.0f, 1.0f
    );
    XMMATRIX orientationMatrix = XMLoadFloat4x4(&orientation);
    gProj = XMMatrixTranspose( XMMatrixMultiply(orthographicProjectionMatrix, orientationMatrix));
}
void SetModelMatrix()
{
    XMFLOAT4X4 orientation = XMFLOAT4X4
    (
        1.0f, 0.0f, 0.0f, 0.0f,
        0.0f, 1.0f, 0.0f, 0.0f,
        0.0f, 0.0f, 1.0f, 0.0f,
        0.0f, 0.0f, 0.0f, 1.0f
    );
    XMMATRIX orientationMatrix = XMMatrixTranspose( XMLoadFloat4x4(&orientation));
    gModel = orientationMatrix;
}
bool SelectObject(float screenX, float screenY, float screenWidth, float screenHeight)
{
    XMMATRIX projectionMatrix = gProj;
    XMMATRIX viewMatrix = gView;
    XMMATRIX modelMatrix = gModel;
    XMMATRIX viewMatrix2 = gView2;
    XMVECTOR v = XMVector3Unproject(XMVectorSet(screenX, screenY, 5.0f, 0.0f),
                                    0.0f, 0.0f, screenWidth, screenHeight, 0.0f, 1.0f,
                                    projectionMatrix, viewMatrix, modelMatrix);
    XMVECTOR rayOrigin = XMVector3Unproject(XMVectorSet(screenX, screenY, 0.0f, 0.0f),
                                            0.0f, 0.0f, screenWidth, screenHeight, 0.0f, 1.0f,
                                            projectionMatrix, viewMatrix, modelMatrix);
    // Code to retrieve v0, v1 and v2 is omitted
    auto diff = v - rayOrigin;
    auto diffNorm = XMVector3Normalize(diff);
    XMVECTOR v2 = XMVector3Unproject(XMVectorSet(screenX, screenY, 5.0f, 0.0f),
                                     0.0f, 0.0f, screenWidth, screenHeight, 0.0f, 1.0f,
                                     projectionMatrix, viewMatrix2, modelMatrix);
    XMVECTOR rayOrigin2 = XMVector3Unproject(XMVectorSet(screenX, screenY, 0.0f, 0.0f),
                                             0.0f, 0.0f, screenWidth, screenHeight, 0.0f, 1.0f,
                                             projectionMatrix, viewMatrix2, modelMatrix);
    auto diff2 = v2 - rayOrigin2;
    auto diffNorm2 = XMVector3Normalize(diff2);
    printf("hi");
    return true;
}
int main()
{
    SetViewMatrix();
    SetProjectionMatrix(1000, 1000, 0.0f, 1.0f);
    SetModelMatrix();
    SelectObject(500, 500, 1000, 1000);
    return 0;
}
Please check your application with this code and confirm. You'll see the code is the same as your earlier code; the only additions are initial values for the camera parameters, the calculation of a second view matrix in SetViewMatrix() using the XMMatrixLookAtRH method, and calculating the vectors using both view matrices in SelectObject().
No need to Transpose
I did not have to transpose any matrix. A transpose should not be required for the projection and model matrices because they are both diagonal matrices, and transposing them gives the same matrix. I don't think a transpose of the view matrix is required either: the formula for XMMatrixLookAtRH explained here provides the view matrix exactly like yours. Also, the sample project given here does not transpose its matrices while checking intersection. You can download and check the sample project.
Possible problem sources
1) Initialization: The only code I have not been able to see is your initialization of the m_cameraZAxis, m_cameraXAxis, nearZ, farZ parameters, etc. I have also not used your camera rotation functions. As you can see, I have initialized the camera using position, target and direction vectors. Do check whether your initial calculation of m_cameraZAxis accords with my sample code.
2) LH/RH look: Make sure there is no accidental mix-up of left-handed and right-handed looks anywhere in your code.
3) Check whether your rotation code (ChangeCameraPitch or ChangeCameraYaw) is accidentally creating camera axes that are not orthogonal. You are using the camera's Y-axis as input in ChangeCameraYaw and as output in ChangeCameraPitch, but the Y-axis is reset in SetViewMatrix by the cross product of the X and Z axes, so the earlier value of the Y-axis may get lost.
Good luck with your application! Do tell if you find a proper solution and root cause to your problem.
As mentioned, the issue was not fully resolved even though clicking now works. The issue with the distortion of the model when moving the camera, which I suspected was related, was still present. What I meant by "the model gets distorted" is visible in the following illustration:
The left image shows how the model looks when the camera is located at the center of the world, i.e. (0, 0, 0), while the right image shows what happens when I move the camera in the negative y-axis direction. As can be seen, the model widens at the bottom and gets smaller at the top, which is the same behavior described in the link I already provided above.
What I eventually did to resolve both issues is:
Transposed the matrices before passing them to XMVector3Unproject (already described above)
Transposed my view matrix by changing the code of the SetViewMatrix method (code below)
The SetViewMatrix method now looks as follows:
void Camera::SetViewMatrix()
{
    XMFLOAT3 cameraPosition;
    XMFLOAT3 cameraXAxis;
    XMFLOAT3 cameraYAxis;
    XMFLOAT3 cameraZAxis;
    XMFLOAT4X4 viewMatrix;
    // Keep camera's axes orthogonal to each other and of unit length.
    m_cameraZAxis = XMVector3Normalize(m_cameraZAxis);
    m_cameraYAxis = XMVector3Normalize(XMVector3Cross(m_cameraZAxis, m_cameraXAxis));
    // m_cameraYAxis and m_cameraZAxis are already normalized, so there is no need
    // to normalize the below cross product of the two.
    m_cameraXAxis = XMVector3Cross(m_cameraYAxis, m_cameraZAxis);
    // Fill in the view matrix entries.
    float x = -XMVectorGetX(XMVector3Dot(m_cameraPosition, m_cameraXAxis));
    float y = -XMVectorGetX(XMVector3Dot(m_cameraPosition, m_cameraYAxis));
    float z = -XMVectorGetX(XMVector3Dot(m_cameraPosition, m_cameraZAxis));
    //XMStoreFloat3(&cameraPosition, m_cameraPosition);
    XMStoreFloat3(&cameraXAxis, m_cameraXAxis);
    XMStoreFloat3(&cameraYAxis, m_cameraYAxis);
    XMStoreFloat3(&cameraZAxis, m_cameraZAxis);
    viewMatrix(0, 0) = cameraXAxis.x;
    viewMatrix(0, 1) = cameraXAxis.y;
    viewMatrix(0, 2) = cameraXAxis.z;
    viewMatrix(0, 3) = x;
    viewMatrix(1, 0) = cameraYAxis.x;
    viewMatrix(1, 1) = cameraYAxis.y;
    viewMatrix(1, 2) = cameraYAxis.z;
    viewMatrix(1, 3) = y;
    viewMatrix(2, 0) = cameraZAxis.x;
    viewMatrix(2, 1) = cameraZAxis.y;
    viewMatrix(2, 2) = cameraZAxis.z;
    viewMatrix(2, 3) = z;
    viewMatrix(3, 0) = 0.0f;
    viewMatrix(3, 1) = 0.0f;
    viewMatrix(3, 2) = 0.0f;
    viewMatrix(3, 3) = 1.0f;
    m_modelViewProjectionConstantBufferData->view = viewMatrix;
}
So I just exchanged the row and column coordinates. Note that I had to make sure my ChangeCameraYaw method gets called before my ChangeCameraPitch method, because otherwise the orientation of the model is not as I want it.
There is also another approach that could be used. Instead of exchanging the row and column coordinates in SetViewMatrix and transposing the view matrix before passing it to XMVector3Unproject, I could use the row_major keyword in the vertex shader together with the view matrix:
cbuffer ModelViewProjectionConstantBuffer : register(b0)
{
    matrix model;
    row_major matrix view;
    matrix projection;
};
I came across this idea in this blog post. The row_major keyword influences how the shader compiler interprets the matrix in memory. The same could also be achieved by changing the order of the vector-matrix multiplication in the vertex shader, i.e. using pos = mul(view, pos); instead of pos = mul(pos, view);
That's pretty much it. The two issues are indeed interconnected, but using what I posted in this question I was able to resolve both, so I'm accepting my own reply as the answer to this question. Hope it helps someone in the future.
I'm trying to understand how far I should place the camera position in the lookAt function (or the object in the model matrix) to get pixel-perfect coordinates to pass to the vertex shader.
This is actually simple with orthographic projection matrices, but I fail to visualize how the math would work for perspective projection.
Here's the perspective matrix I'm using:
glm::mat4 projection = glm::perspective(45.0f, (float)SCR_WIDTH / (float)SCR_HEIGHT, 0.1f, 10000.0f);
The vertex multiplication in the shader is as simple as:
gl_Position = projection * view * model * vec4(position.xy, 0.0f, 1.0);
I'm basically trying to show a quad on screen that needs to be rotated and show perspective effects (hence why I can't use an orthographic projection), but I'd like to specify, in pixel coordinates, where and how big it should appear on screen.
Well, it can only have pixel coordinates in one "z-plane" if you want to use a trapezoid view frustum.
Basic Math
If you use a standard camera at (0,0,0), with alpha being the vertical fov (45° in your case), the basic math is:
target_y = tan(alpha/2) * z_distance * ((pixel_y/height)*2 - 1)
target_x = tan(alpha/2) * z_distance * ((pixel_x/width)*2 - 1) * aspect_ratio
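As a quick sketch of those formulas in code (a hypothetical helper; assumes the camera sits at the origin looking down -z and alpha is the vertical FOV in radians):

    #include <cmath>
    #include <glm/glm.hpp>

    // World-space point at depth zDist that projects onto pixel (px, py).
    glm::vec3 pixelToWorld(float px, float py, float zDist, float alpha,
                           float width, float height)
    {
        float aspect  = width / height;
        float halfTan = std::tan(alpha / 2.0f);
        float x = halfTan * zDist * ((px / width)  * 2.0f - 1.0f) * aspect;
        float y = halfTan * zDist * ((py / height) * 2.0f - 1.0f);
        return glm::vec3(x, y, -zDist);
    }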
Reversing projection
As for the general case: you can "un-project" to find where a point in 3D, before all transforms, should be to end up on a specific screen point.
Basically you need to undo the math.
gl_Position = projection * view * model * vec4(position.xy, 0.0f, 1.0);
So if you have your final position and want to revert it, you do:
unprojection = model^-1 * view^-1 * projection^-1 * gl_Position  // not actual GLSL notation, '^-1' being the inverse
This is basically what functions like gluUnProject or glm::gtc::matrix_transform::unProject do.
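For example, with GLM that looks roughly like this (a sketch; px/py are GL window coordinates with the origin at the bottom-left, and view, model, projection, SCR_WIDTH and SCR_HEIGHT are assumed from the question):

    #include <glm/gtc/matrix_transform.hpp> // glm::unProject

    glm::vec4 viewport(0.0f, 0.0f, (float)SCR_WIDTH, (float)SCR_HEIGHT);
    // winZ = 0 unprojects onto the near plane, winZ = 1 onto the far plane.
    glm::vec3 nearPoint = glm::unProject(glm::vec3(px, py, 0.0f),
                                         view * model, projection, viewport);
    glm::vec3 farPoint  = glm::unProject(glm::vec3(px, py, 1.0f),
                                         view * model, projection, viewport);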
But you should note that the final clip-space after you apply the projection matrix is typically [-1,-1,0] to [1,1,1], so if you want to enter pixel coordinates you can apply an additional matrix to transform into that space.
Something like:
               [ 2/width,  0,        0, -1 ]
screenToClip = [ 0,        2/height, 0, -1 ]
               [ 0,        0,        1,  0 ]
               [ 0,        0,        0,  1 ]
would transform [0,0,0,1] to [-1,-1,0,1] and [width,height,0,1] to [1,1,0,1]
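In GLM that matrix could be built like this (note GLM's constructor takes values in column-major order, so it reads transposed compared to the rows above):

    glm::mat4 screenToClip(
        2.0f / width,  0.0f,          0.0f, 0.0f,  // column 0
        0.0f,          2.0f / height, 0.0f, 0.0f,  // column 1
        0.0f,          0.0f,          1.0f, 0.0f,  // column 2
       -1.0f,         -1.0f,          0.0f, 1.0f); // column 3 (translation)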
Also, you're probably best off trying some z-value like 0.5 to make sure that you're well within the view frustum and not clipping near the front or back.
You can achieve this effect with a 60-degree field of view. Basically, you want to place the camera at a distance from the viewing plane such that the camera forms an equilateral triangle with the center points of the top and bottom of the screen.
Here's some code to do that:
float fovY = 60.0f; // field of view - degrees
float aspect = nScreenWidth / nScreenHeight;
float zNearClip = 0.1f;
float zFarClip = nScreenHeight * 2.0f;
float degToRad = MF_PI / 180.0f;
float fH = tanf(fovY * degToRad / 2.0f) * zNearClip;
float fW = fH * aspect;
glFrustum(-fW, fW, -fH, fH, zNearClip, zFarClip);
float nCameraDistance = sqrtf(nScreenHeight * nScreenHeight - 0.25f * nScreenHeight * nScreenHeight);
glTranslatef(0, 0, -nCameraDistance);
You can also use a 90 degree fov. In that case the camera distance is 1/2 the height of the window. However, this has a lot of foreshortening.
In the 90-degree case, you could push the camera out by the full height, but then apply a 2x scaling to the x and y components (i.e. glScalef(2, 2, 1)).
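To sanity-check both cases: the camera distance is half the screen height divided by the tangent of half the FOV (a sketch reusing nScreenHeight and degToRad from the snippet above):

    float dist60 = 0.5f * nScreenHeight / tanf(30.0f * degToRad); // ~0.866 * height (equilateral)
    float dist90 = 0.5f * nScreenHeight / tanf(45.0f * degToRad); // exactly 0.5 * height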
Here's an image of what I mean:
I'll extend PeterT's answer and leave here the practical code I used to find the world coordinates of one of the frustum's planes through unprojection.
This assumes a basic view matrix (camera position at (0, 0, 0)):
glm::mat4 projectionInv(0);
glm::mat4 projection = glm::perspective(45.0f, (float)SCR_WIDTH / (float)SCR_HEIGHT, 0.1f, 500.0f);
projectionInv = glm::inverse(projection);
std::vector<glm::vec4> NDCCube;
NDCCube.push_back(glm::vec4(-1.0f, -1.0f, -1.0f, 1.0f));
NDCCube.push_back(glm::vec4(1.0f, -1.0f, -1.0f, 1.0f));
NDCCube.push_back(glm::vec4(1.0f, -1.0f, 1.0f, 1.0f));
NDCCube.push_back(glm::vec4(-1.0f, -1.0f, 1.0f, 1.0f));
NDCCube.push_back(glm::vec4(-1.0f, 1.0f, -1.0f, 1.0f));
NDCCube.push_back(glm::vec4(1.0f, 1.0f, -1.0f, 1.0f));
NDCCube.push_back(glm::vec4(1.0f, 1.0f, 1.0f, 1.0f));
NDCCube.push_back(glm::vec4(-1.0f, 1.0f, 1.0f, 1.0f));
std::vector<glm::vec3> frustumVertices;
for (int i = 0; i < 8; i++)
{
    // multiply by the projection matrix inverse to obtain the frustum vertex
    glm::vec4 tempvec = projectionInv * NDCCube.at(i);
    tempvec /= tempvec.w; // perspective divide
    frustumVertices.push_back(glm::vec3(tempvec));
}
Keep in mind these coordinates will not end up on screen if your perspective far distance is lower than the one I set in the projection matrix.
If you happen to know the world-coordinate width of "some item" that you want to display pixel-exact, this ends up being a bit of trivial trigonometry (works for both y FOV and x FOV):
S = width of the item in world coordinates
T = "pixel exact" size of the item (say, the width of the texture)
h = z distance to the object
r = total screen resolution (width or height, depending on the FOV you want)
a = 2 * h * tan(Phi / 2) = (r / T) * S
b = a / cos(Phi / 2)
Theta = atan(2 * h / a)
Phi = 180 - 2 * Theta
where b are the two equal sides of your triangle, a is its base, h is its height, Theta is each of the two equal angles of the isosceles triangle, and Phi is the resulting FOV.
So the end code might look something like:
float frustumWidth = (float(ScreenWidth) / TextureWidth) * InWorldItemWidth;
float theta = glm::degrees(atan((2 * zDistance) / frustumWidth));
float PixelPerfectFOV = 180 - 2 * theta;