Recently I've been struggling just to set up a correct perspective projection matrix and apply it to a simple triangle. Before I show any code, I have a small question about matrix order: do I multiply my view matrix by my projection matrix, or my projection matrix by my view matrix?
OK, now the code. I have tried many different versions of the perspective matrix without any good result.
Attempt 1:
static Matrix4x4<T> Perspective_S(const T &fovy, const T &aspectRatio, const T &zNear, const T &zFar)
{
    T range = tanf(fovy / 2.0f) * zNear;
    // range * aspectRatio + range * aspectRatio == right - left
    return Matrix4x4<T>((2.0f * zNear) / (range * aspectRatio + range * aspectRatio), 0.0f, 0.0f, 0.0f,
                        0.0f, zNear / range, 0.0f, 0.0f,
                        0.0f, 0.0f, -(zFar + zNear) / (zFar - zNear), -1.0f,
                        0.0f, 0.0f, -(2.0f * zFar * zNear) / (zFar - zNear), 0.0f);
}
Attempt 2:
static Matrix4x4<T> Perspective_S(const T &fovy, const T &aspectRatio, const T &zNear, const T &zFar)
{
    T f = 1.0f / tan(fovy / 2.0f);
    return Matrix4x4<T>(f / aspectRatio, 0.0f, 0.0f, 0.0f,
                        0.0f, f, 0.0f, 0.0f,
                        0.0f, 0.0f, (zFar + zNear) / (zNear - zFar), (2.0f * zFar * zNear) / (zNear - zFar),
                        0.0f, 0.0f, -1.0f, 0.0f);
}
Attempt 3:
static Matrix4x4<T> Frustum_S(const T &left, const T &right, const T &bottom, const T &top,
                              const T &zNear, const T &zFar)
{
    return Matrix4x4<T>(2.0f * zNear / (right - left), 0.0f, 0.0f, 0.0f,
                        0.0f, 2.0f * zNear / (top - bottom), 0.0f, 0.0f,
                        (right + left) / (right - left), (top + bottom) / (top - bottom), -(zFar + zNear) / (zFar - zNear), -1.0f,
                        0.0f, 0.0f, -2.0f * zFar * zNear / (zFar - zNear), 0.0f);
}
static Matrix4x4<T> Perspective_S(const T &fovy, const T &aspectRatio, const T &zNear, const T &zFar)
{
    T scale = tan(fovy / 2.0f) * zNear; // half angle: half the vertical extent at the near plane
    T r = aspectRatio * scale, l = -r;
    T t = scale, b = -t;
    return Frustum_S(l, r, b, t, zNear, zFar);
}
Attempt 4:
static void Perspective_S(Matrix4x4<T> &matrix, T fovyInDegrees, T aspectRatio, T znear, T zfar)
{
    // 360 rather than 180: converts degrees to radians and halves the angle in one step
    T ymax = znear * tanf(fovyInDegrees * 3.14159265358979323846 / 360.0);
    //ymin = -ymax;
    //xmin = -ymax * aspectRatio;
    T xmax = ymax * aspectRatio;
    Frustum_S(matrix, -xmax, xmax, -ymax, ymax, znear, zfar);
}
static void Frustum_S(Matrix4x4<T> &matrix, T left, T right, T bottom, T top,
                      T znear, T zfar)
{
    T temp = 2.0f * znear;
    T temp2 = right - left;
    T temp3 = top - bottom;
    T temp4 = zfar - znear;
    matrix = Matrix4x4<T>(temp / temp2, 0.0f, 0.0f, 0.0f,
                          0.0f, temp / temp3, 0.0f, 0.0f,
                          (right + left) / temp2, (top + bottom) / temp3, (-zfar - znear) / temp4, -1.0f,
                          0.0f, 0.0f, (-temp * zfar) / temp4, 0.0f);
}
Some of these functions look like the transpose of the matrices produced by my other tries. All of them were taken from tutorials; one even came from my previous post, and it's still not working...
Just in case you might think it's my LookAt code, here it is:
What I do in main.cpp:
matptr = (Matrix4x4f::LookAt_S(eye, center, up) *
Matrix4x4f::Perspective_S(M_PI / 3.0f, (float)window->getSize().x / (float)window->getSize().y, 0.001f, 1000.0f)).ToArray();
glUniformMatrix4fv(glGetUniformLocation(shaderProgram, "myMatrix"), 1, GL_FALSE, &matptr[0]);
My LookAt code:
static Matrix4x4<T> LookAt_S(const Vector3<T> &eye, const Vector3<T> &center, const Vector3<T> &up)
{
    Vector3<T> forward(center - eye);
    forward.Normalize();
    Vector3<T> side(forward.CrossProduct(up));
    side.Normalize();
    Vector3<T> u(side.CrossProduct(forward)); // don't overwrite the caller's up
    return Matrix4x4<T>(side.x, u.x, -forward.x, 0.0f,
                        side.y, u.y, -forward.y, 0.0f,
                        side.z, u.z, -forward.z, 0.0f,
                        // the translation row was missing; assumes a DotProduct method
                        -side.DotProduct(eye), -u.DotProduct(eye), forward.DotProduct(eye), 1.0f);
}
Related
I'm trying to rotate an object, a group of 4 vertices inside a batch renderer (a dynamic one, so it can update their vertices and indices at any time).
I'm currently using the Rodrigues rotation matrix, which I learned to use thanks to this StackExchange post.
It works really well, but the problem is that every object in the batch rotates around (0, 0, 0) instead of around its own position.
I can't find a solution online, so this is my first time asking a question here!
(Also, I'm using a library called GLM for the object's transformations.)
So here's the code. The UpdateObject method is called by the batch in a for loop (because there's a group of objects in there), so I don't think it's necessary to show the entire system, just "Object.cpp", which keeps all the objects' info (and contains the Rodrigues matrix function):
Object::Object(glm::vec3 pos, glm::vec3 rot, glm::vec3 sca)
{
position = pos;
scale = sca;
rotation = rot;
}
// Rodrigues' rotation matrix: R = cos(a)*I + (1 - cos(a))*k*k^T + sin(a)*K.
// The parameter already receives radians (see the glm::radians calls below),
// so no extra conversion factor is applied here.
glm::mat3 rodriguesMatrix(const double radians, const glm::vec3& axis) {
    // Outer product k*k^T (symmetric, so column- vs row-major doesn't matter)
    glm::mat3 v = glm::mat3(
        axis.x * axis.x, axis.x * axis.y, axis.x * axis.z,
        axis.x * axis.y, axis.y * axis.y, axis.y * axis.z,
        axis.x * axis.z, axis.y * axis.z, axis.z * axis.z
    );
    // Cross-product (skew) matrix K; note GLM constructors take columns, not rows
    glm::mat3 v2 = glm::mat3(
        0, axis.z, -axis.y,
        -axis.z, 0, axis.x,
        axis.y, -axis.x, 0
    );
    glm::mat3 cosMat(cos(radians)); // cos(a) * identity
    v *= (1 - cos(radians));
    v2 *= sin(radians);
    glm::mat3 rotation = cosMat + v + v2;
    return rotation;
}
Vertex* Object::UpdateObject(Vertex* target)
{
// Compose the three axis rotations instead of overwriting the matrix each time
glm::mat3 rotationMatrix =
    rodriguesMatrix(glm::radians(rotation.z), glm::vec3(0.f, 0.f, 1.f)) *
    rodriguesMatrix(glm::radians(rotation.y), glm::vec3(0.f, 1.f, 0.f)) *
    rodriguesMatrix(glm::radians(rotation.x), glm::vec3(1.f, 0.f, 0.f));
target->position = rotationMatrix * glm::vec3(position.x - 0.5f * scale.x, position.y + 0.5f * scale.y, position.z);
target->color = glm::vec3(1.0f, 0.2f, 0.2f);
target->texcoord = glm::vec2(0.0f, 1.0f);
target++;
target->position = rotationMatrix * glm::vec3(position.x - 0.5f * scale.x, position.y - 0.5f * scale.y, position.z);
target->color = glm::vec3(0.2f, 1.0f, 0.2f);
target->texcoord = glm::vec2(0.0f, 0.0f);
target++;
target->position = rotationMatrix * glm::vec3(position.x + 0.5f * scale.x, position.y - 0.5f * scale.y, position.z);
target->color = glm::vec3(0.2f, 0.2f, 1.0f);
target->texcoord = glm::vec2(1.0f, 0.0f);
target++;
target->position = rotationMatrix * glm::vec3(position.x + 0.5f * scale.x, position.y + 0.5f * scale.y, position.z);
target->color = glm::vec3(1.0f, 1.0f, 0.2f);
target->texcoord = glm::vec2(1.0f, 1.0f);
target++;
return target;
}
The Vertex is a struct of two "Vector3"s, one for the position and one for the color, plus a "Vector2" for the texture coordinates.
So that's the issue. If someone can help me or give me an answer, that'd be great :'D
Best regards, Nacho :D
This is a simple example of how to rotate a rectangle around a point using translation and rotation matrices. I hope it helps you:
#include <iostream>
#include <glm/glm.hpp>
#include <glm/ext.hpp>
void rotateRectangleAroundSomePoint(glm::vec3 vertices[4], float angle, glm::vec3 rotationCenter, glm::vec3 axis)
{
const glm::mat4 translationMatrix = glm::translate(glm::identity<glm::mat4>(), -rotationCenter);
const glm::mat4 rotationMatrix = glm::rotate(glm::identity<glm::mat4>(), angle, axis);
const glm::mat4 reverseTranslationMatrix = glm::translate(glm::identity<glm::mat4>(), rotationCenter);
for (size_t i = 0; i < 4; i++) {
vertices[i] = glm::vec3(
reverseTranslationMatrix * rotationMatrix * translationMatrix * glm::vec4(vertices[i], 1.0f));
}
}
int main()
{
glm::vec3 rectangleVertices[4] =
{
glm::vec3(1.0f, 1.0f, 0.0f),
glm::vec3(3.0f, 1.0f, 0.0f),
glm::vec3(3.0f, 2.0f, 0.0f),
glm::vec3(1.0f, 2.0f, 0.0f),
};
rotateRectangleAroundSomePoint(rectangleVertices,
glm::radians(90.0f),
glm::vec3(2.0f, 1.5f, 0.0),
glm::vec3(0.0f, 0.0f ,1.0f));
for (size_t i = 0; i < 4; i++) {
std::cout
<< rectangleVertices[i].x << " , "
<< rectangleVertices[i].y << " , "
<< rectangleVertices[i].z << std::endl;
}
return 0;
}
I'm trying to understand the OpenGL MVP matrices, and as an exercise I'd like to draw a rectangle filling my window using the matrices. I thought I would easily find a tutorial for that, but all those I found seem to just put seemingly arbitrary values into their MVP matrix setup.
Say my rectangle has these coordinates:
GLfloat vertices[] = {
-1.0f, 1.0f, 0.0f, // Top-left
1.0f, 1.0f, 0.0f, // Top-right
1.0f, -1.0f, 0.0f, // Bottom-right
-1.0f, -1.0f, 0.0f, // Bottom-left
};
Here are my 2 triangles:
GLuint elements[] = {
0, 1, 2,
2, 3, 0
};
If I draw the rectangle with identity MVP matrices, it fills the screen as expected. Now I want to use a frustum. Here are its settings:
float m_fov = 45.0f;
float m_width = 3840;
float m_height = 2160;
float m_zNear = 0.1f;
float m_zFar = 100.0f;
From this I can compute the width / height of my window at z-near & z-far:
float zNearHeight = tan(m_fov) * m_zNear * 2;
float zNearWidth = zNearHeight * m_width / m_height;
float zFarHeight = tan(m_fov) * m_zFar * 2;
float zFarWidth = zFarHeight * m_width / m_height;
Now I can create my view & projection matrices:
glm::mat4 projectionMatrix = glm::perspective(glm::radians(m_fov), m_width / m_height, m_zNear, m_zFar);
glm::mat4 viewMatrix = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -m_zNear));
I'd now expect this to make my rectangle fill the window:
glm::mat4 identity = glm::mat4(1.0f);
glm::mat4 rectangleModelMatrix = glm::scale(identity, glm::vec3(zNearWidth, zNearHeight, 1));
But doing so, my rectangle is way too big. What did I miss?
SOLUTION: as #Rabbid76 pointed out, the problem was the computation of my z-near size, which must be:
float m_zNearHeight = tan(glm::radians(m_fov) / 2.0f) * m_zNear * 2.0f;
float m_zNearWidth = m_zNearHeight * m_width / m_height;
Also, since zNearWidth and zNearHeight are full sizes, my object coordinates must span [-0.5, 0.5] rather than [-1, 1] before scaling. Thus my vertices must now be:
GLfloat vertices[] = {
-0.5f, 0.5f, 0.0f, // Top-left
0.5f, 0.5f, 0.0f, // Top-right
0.5f, -0.5f, 0.0f, // Bottom-right
-0.5f, -0.5f, 0.0f, // Bottom-left
};
The projected height of an object on a plane which is parallel to the xy plane of the view is
h' = h / (tan(m_fov / 2) * -z)
where h is the height of the object on the plane, -z is its depth and m_fov is the vertical field of view angle.
In your case m_fov is 45° and z is -0.1 (-m_zNear), thus 1 / (tan(m_fov / 2) * 0.1) is ~24.14.
Since the height of the quad is 2, the projected height of the quad is ~48.28.
To create a quad which fits exactly in the viewport, use a field of view angle of 90° and a distance to the object of 1, because tan(90° / 2) * 1 is 1. e.g.:
float m_fov = 90.0f;
glm::mat4 projectionMatrix = glm::perspective(glm::radians(m_fov), m_width / m_height, m_zNear, m_zFar);
glm::mat4 viewMatrix = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -1.0f));
If tan(m_fov / 2) * -z == 1, then an object extending from -1 at the bottom to 1 at the top fits the viewport exactly.
Because of the division by z, the projected size of an object on the viewport decreases inversely with its distance to the camera.
This is how the result looks:
and the same result using an Orthogonal matrix:
Any idea why using the projection matrix makes everything look weird?
My Perspective:
inline static Matrix<T> ProjectionPerspectiveOffCenterLH(const T left, const T right, const T bottom, const T top, const T zNear, const T zFar)
{
return Matrix<T>(
(2.0f * zNear) / (right-left), 0.0f, 0.0f, 0.0f,
0.0f, (2.0f * zNear) / (top-bottom), 0.0f, 0.0f,
(left+right)/(left-right), (top+bottom)/(bottom-top), zFar / (zFar - zNear), 1.0f,
0.0f, 0.0f, (zNear * zFar) / (zNear - zFar), 0.0f);
}
My Orthogonal:
inline static Matrix<T> ProjectionOrthogonalOffCenterLH(const T left, const T right, const T bottom, const T top, const T zNear, const T zFar)
{
T farNear = zFar - zNear;
return Matrix<T>(
2.0f / (right-left), 0.0f, 0.0f, 0.0f,
0.0f, 2.0f / (top-bottom), 0.0f, 0.0f,
0.0f, 0.0f, 1.0f / farNear, 0.0f,
(left + right) / (left - right), (top + bottom) / (bottom - top), -zNear / farNear, 1.0f);
}
Just found out why this happens:
In my perspective matrix the FOV was 0°. That's why it looks like that.
So it's better to use a perspective matrix parameterized directly by the FOV.
The following two methods are taken from the iOS GLKit framework:
GLK_INLINE GLKMatrix4 GLKMatrix4MakeOrtho(float left, float right,
float bottom, float top,
float nearZ, float farZ)
{
float ral = right + left;
float rsl = right - left;
float tab = top + bottom;
float tsb = top - bottom;
float fan = farZ + nearZ;
float fsn = farZ - nearZ;
GLKMatrix4 m = { 2.0f / rsl, 0.0f, 0.0f, 0.0f,
0.0f, 2.0f / tsb, 0.0f, 0.0f,
0.0f, 0.0f, -2.0f / fsn, 0.0f,
-ral / rsl, -tab / tsb, -fan / fsn, 1.0f };
return m;
}
GLK_INLINE GLKMatrix4 GLKMatrix4MakePerspective(float fovyRadians, float aspect, float nearZ, float farZ)
{
float cotan = 1.0f / tanf(fovyRadians / 2.0f);
GLKMatrix4 m = { cotan / aspect, 0.0f, 0.0f, 0.0f,
0.0f, cotan, 0.0f, 0.0f,
0.0f, 0.0f, (farZ + nearZ) / (nearZ - farZ), -1.0f,
0.0f, 0.0f, (2.0f * farZ * nearZ) / (nearZ - farZ), 0.0f };
return m;
}
I would like to smoothly move from the perspective view to the ortho view and vice versa. How should I calculate the correct parameters for the ortho matrix, given the perspective matrix and its parameters?
I wrote a shadow map shader for my graphics engine. I followed these tutorials:
Part 1 and the following part.
Unfortunately, the results I get are quite a bit off. Here are some screenshots. They show what my scene normally looks like, the scene with shadows enabled, and the contents of the shadow map (please ignore the white stuff in the center, that's just the duck's geometry).
This is how I compute the coordinates to sample the shadow map with in my fragment shader:
float calcShadowFactor(vec4 lightSpacePosition) {
vec3 projCoords = lightSpacePosition.xyz / lightSpacePosition.w;
vec2 uvCoords;
uvCoords.x = 0.5 * projCoords.x + 0.5;
uvCoords.y = 0.5 * projCoords.y + 0.5;
float z = 0.5 * projCoords.z + 0.5;
float depth = texture2D(shadowMapSampler, uvCoords).x;
if (depth < (z + 0.00001f))
return 0.0f;
else
return 1.0f;
}
The lightSpacePosition vector is computed by:
projectionMatrix * inverseLightTransformationMatrix
* modelTransformationMatrix * vertexPosition
The projection matrix is:
[1.0f / (tan(fieldOfView / 2) * (width / height)), 0.0f, 0.0f, 0.0f]
[0.0f, 1.0f / tan(fieldOfView / 2), 0.0f, 0.0f]
[0.0f, 0.0f, (-zNear - zFar) / (zNear - zFar), 2.0f * zFar * zNear / (zNear - zFar)]
[0.0f, 0.0f, 1.0f, 0.0f]
My shadow map seems to be okay and I made sure the rendering pass uses the same lightSpacePosition vector as my shadow map pass. But I can't figure out what is wrong.
Although I do not understand this entirely, I think I found the bug:
I needed to apply the bias matrix while the coordinates are still homogeneous, and only perform the perspective divide afterwards. My shadow coordinate computation now looks like this:
mat4 biasMatrix = mat4(
0.5f, 0.0f, 0.0f, 0.0f,
0.0f, 0.5f, 0.0f, 0.0f,
0.0f, 0.0f, 0.5f, 0.0f,
0.5f, 0.5f, 0.5f, 1.0f
);
vec4 shadowCoord0 = biasMatrix * light * vec4(vertexPosition, 1.0f);
shadowCoord = shadowCoord0.xyz / shadowCoord0.w;
where
light = projectionMatrix * inverseLightTransformationMatrix
* modelTransformationMatrix
Now the fragment shader's shadow factor computation is rather simple:
float shadowFactor = 1.0f;
if (texture(shadowMapSampler, shadowCoord.xy).z < shadowCoord.z - 0.0001f)
shadowFactor = 0.0f;