I'm trying to understand the OpenGL MVP matrices, and as an exercise I'd like to draw a rectangle filling my window, using the matrices. I thought I would easily find a tutorial for that, but all those I found seem to just put arbitrary values into their MVP matrix setup.
Say my rectangle has these coordinates:
GLfloat vertices[] = {
-1.0f, 1.0f, 0.0f, // Top-left
1.0f, 1.0f, 0.0f, // Top-right
1.0f, -1.0f, 0.0f, // Bottom-right
-1.0f, -1.0f, 0.0f, // Bottom-left
};
Here are my 2 triangles:
GLuint elements[] = {
0, 1, 2,
2, 3, 0
};
If I draw the rectangle with identity MVP matrices, it fills the screen as expected. Now I want to use a frustum. Here are its settings:
float m_fov = 45.0f;
float m_width = 3840;
float m_height = 2160;
float m_zNear = 0.1f;
float m_zFar = 100.0f;
From this I can compute the width / height of my window at z-near & z-far:
float zNearHeight = tan(m_fov) * m_zNear * 2;
float zNearWidth = zNearHeight * m_width / m_height;
float zFarHeight = tan(m_fov) * m_zFar * 2;
float zFarWidth = zFarHeight * m_width / m_height;
Now I can create my view & projection matrices:
glm::mat4 projectionMatrix = glm::perspective(glm::radians(m_fov), m_width / m_height, m_zNear, m_zFar);
glm::mat4 viewMatrix = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -m_zNear));
I'd now expect this to make my rectangle fill the window:
glm::mat4 identity = glm::mat4(1.0f);
glm::mat4 rectangleModelMatrix = glm::scale(identity, glm::vec3(zNearWidth, zNearHeight, 1));
But doing so, my rectangle is way too big. What did I miss?
SOLUTION: as @Rabbid76 pointed out, the problem was the computation of my z-near size, which must be:
float m_zNearHeight = tan(glm::radians(m_fov) / 2.0f) * m_zNear * 2.0f;
float m_zNearWidth = m_zNearHeight * m_width / m_height;
Also, I now need to specify my object coordinates in normalized view space ([-0.5, 0.5]) rather than device space ([-1, 1]). Thus my vertices must now be:
GLfloat vertices[] = {
-0.5f, 0.5f, 0.0f, // Top-left
0.5f, 0.5f, 0.0f, // Top-right
0.5f, -0.5f, 0.0f, // Bottom-right
-0.5f, -0.5f, 0.0f, // Bottom-left
};
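To check: with m_fov = 45° and m_zNear = 0.1, m_zNearHeight = tan(22.5°) * 0.1 * 2 ≈ 0.0828. Scaling the [-0.5, 0.5] quad by that makes it exactly as tall as the frustum at the near plane, so it fills the viewport.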
The projected height of an object on a plane which is parallel to the xy plane of the view is
h' = h / (tan(m_fov / 2) * -z)
where h is the height of the object on the plane, -z is the depth and m_fov is the field of view angle.
In your case m_fov is 45° and z is -0.1 (-m_zNear), thus 1 / (tan(m_fov / 2) * -z) is ~24.14.
Since the height of the quad is 2, the projected height of the quad is ~48.28.
To create a quad which fits exactly in the viewport, use a field of view angle of 90° and a distance to the object of 1, because tan(90° / 2) * 1 is 1. e.g.:
float m_fov = 90.0f;
glm::mat4 projectionMatrix = glm::perspective(glm::radians(m_fov), m_width / m_height, m_zNear, m_zFar);
glm::mat4 viewMatrix = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -1.0f));
If tan(m_fov / 2) * -z == 1, then an object with a bottom of -1 and a top of 1 exactly fits into the viewport.
Because of the division by z, the projected size of an object on the viewport decreases in proportion to the reciprocal of its distance to the camera.
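For a quick sanity check, the formula can be reproduced with glm directly. A minimal standalone snippet using the question's settings (fov 45°, aspect 3840/2160, object at z = -0.1):

#include <cmath>
#include <cstdio>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main()
{
    float fov = glm::radians(45.0f);
    glm::mat4 proj = glm::perspective(fov, 3840.0f / 2160.0f, 0.1f, 100.0f);

    // Project the quad's top edge (y = 1) at depth 0.1 and do the perspective divide.
    glm::vec4 top = proj * glm::vec4(0.0f, 1.0f, -0.1f, 1.0f);
    std::printf("projected y: %f\n", top.y / top.w);                        // ~24.14
    std::printf("formula:     %f\n", 1.0f / (std::tan(fov / 2.0f) * 0.1f)); // same value
    return 0;
}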
Related
I am trying to get the mouse cursor's position in world space from window space ([-1, 1] across the window width and height) using the view-projection matrix.
This is how I calculate my projection matrix:
static mat4 Perspective4x4(float FOV, float AspectRatio, float FarC, float NearC)
{
// Positive x is right
// Positive y is up
// Positive z is forward into the screen
float Cotangent = 1.0f / tanf((FOV)*0.5f);
float Depth = NearC - FarC;
float A = (-FarC - NearC) / Depth;
float B = 2.0f * FarC * NearC / Depth;
return
{
Cotangent/AspectRatio, 0.0f, 0.0f, 0.0f,
0.0f, Cotangent, 0.0f, 0.0f,
0.0f, 0.0f, -A, -B,
0.0f, 0.0f, 1.0f, 0.0f,
};
}
...
float WidthOverHeight = ...;
mat4 ProjectionMatrix = Perspective4x4(DegreesToRadians(90.0f), WidthOverHeight, 50.0f, 0.1f);
This is how I calculate my view matrix:
static mat4 Translate4x4(vec3 V)
{
return
{
1.0f, 0.0f, 0.0f, V.X,
0.0f, 1.0f, 0.0f, V.Y,
0.0f, 0.0f, 1.0f, V.Z,
0.0f, 0.0f, 0.0f, 1.0f
};
}
...
mat4 ViewMatrix = Translate4x4(-CameraPosition);
This is how I calculate the cursor position in world space:
mat4 ProjectionMatrix = ...;
mat4 ViewMatrix = ...;
vec2 CursorP = GetBilateralCursorPos(Input); // Values between -1 and 1
v4_f32 WorldSpacePNear = Inverse(ProjectionMatrix) * V4F32(CursorP, -1.0f, 1.0f);
WorldSpacePNear /= WorldSpacePNear.W;
WorldSpacePNear = Inverse(ViewMatrix) * WorldSpacePNear;
v4_f32 WorldSpacePFar = Inverse(ProjectionMatrix) * V4F32(CursorP, 1.0f, 1.0f);
WorldSpacePFar /= WorldSpacePFar.W;
WorldSpacePFar = Inverse(ViewMatrix) * WorldSpacePFar;
WorldSpacePFar.Z *= -1.0f;
WorldSpacePNear.Z *= -1.0f;
EDIT: I also tried dividing by W at the end but it doesn't work properly either.
This is how I send the matrices to OpenGL (legacy):
// I send them transposed because my matrices are row-major
mat4 ProjectionMatrix = Transpose(...);
mat4 ViewMatrix = Transpose(...);
glMatrixMode(GL_PROJECTION);
glLoadMatrixf(ProjectionMatrix.E);
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(ViewMatrix.E);
The resulting positions do not seem to take the view projection into account, because a change in the camera position offsets them, making them inaccurate.
NOTES:
vec4, vec2 and mat4 are custom float-based math types (they are what you would expect).
I know legacy OpenGL is deprecated; in fact I'm going to switch to modern OpenGL very soon, I just want to get this working.
I fixed the issue by calculating the cursor position like this:
mat4 ProjectionMatrix = ...;
mat4 ViewMatrix = ...;
// Z Distance from camera pos
float WorldDistanceFromCameraZ = 1.0f;
vec2 CursorP = GetBilateralCursorPos(Input); // Values between -1 and 1
vec4 ProbeZ = V4F32(World->Camera.P - WorldDistanceFromCameraZ*World->Camera.P.Z, 1.0f);
ProbeZ = (ProjectionMatrix*ViewMatrix) * ProbeZ;
vec4 ClipP = V4F32(CursorP.X*ProbeZ.W, CursorP.Y*ProbeZ.W, ProbeZ.Z, ProbeZ.W);
vec4 WorldP = Inverse(ProjectionMatrix*ViewMatrix) * ClipP;
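For comparison, the textbook unprojection keeps the whole inverse view-projection together and divides by w afterwards. A minimal sketch using glm instead of the custom row-major mat4 above, so it may need adapting to this engine's +z-forward convention:

#include <glm/glm.hpp>

// Unproject an NDC point (x, y and z all in [-1, 1]) back to world space:
// apply the full inverse view-projection first, divide by w last.
glm::vec3 UnprojectNdc(const glm::mat4& viewProjection, glm::vec2 ndc, float ndcZ)
{
    glm::vec4 p = glm::inverse(viewProjection) * glm::vec4(ndc, ndcZ, 1.0f);
    return glm::vec3(p) / p.w;
}

// Usage: a world-space ray through the cursor.
// glm::vec3 nearP = UnprojectNdc(Projection * View, CursorP, -1.0f);
// glm::vec3 farP  = UnprojectNdc(Projection * View, CursorP,  1.0f);
// glm::vec3 dir   = glm::normalize(farP - nearP);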
I am trying to create an Augmented Reality application for vehicle tracking. In the application I want to place a quad model to track the vehicle. For this purpose I draw a quad at a particular screen coordinate every time the texture is updated. The problem is, I am not able to place the quad model at the exact position on the texture.
The image below shows exactly what I want.
As in the above image, a green quad needs to be placed exactly behind the vehicle.
I have done it in the following way:
Viewport size ( 0, 0, 1280, 720 )
Render a Texture
Created a texture of size 1280x720 and rendered with the default modelViewProjection (identity matrix)
GLfloat vVertices[] = { -1.f, 1.f, 0.0f, // Position 0
0.0f, 0.0f, // TexCoord 0
-1.f, -1.f, 0.0f, // Position 1
0.0f, 1.0f, // TexCoord 1
1.f, -1.f, 0.0f, // Position 2
1.0f, 1.0f, // TexCoord 2
1.f, 1.f, 0.0f, // Position 3
1.0f, 0.0f // TexCoord 3
};
GLushort indices[] = { 0, 1, 2, 0, 2, 3 };
Drawing a Quad in 3D world
I have used the following model, view and projection matrices:
m_unWidth = 1280
m_unHeight = 720
float aspect = (float) m_unWidth / (float) m_unHeight;
Mperspective = glm::mat4( 1.0f );
Mperspective = glm::perspective( glm::radians(45.0f), aspect, 0.1f, 100.0f );
Mview = glm::mat4(1.0f);
Mview = glm::translate( Mview, glm::vec3(0.0f, 0.0f, -2.0f));
glm::mat4 Mmodel = glm::mat4(1.0f);
Mmodel = glm::translate( Mmodel, glm::vec3( POSITION.X, POSITION.Y, 0.0 ));
Mmodel = glm::rotate( Mmodel, glm::radians(-80.0f), glm::vec3( 1.0f, 0.0f, 0.0f ));
I have used a coordinate conversion function to map into [-1, 1].
NDCPoint ACCOverlay::ConvertToNDC( unsigned int unX_i, unsigned int unY_i ) {
const int width = m_unWidth;
const int height = m_unHeight;
float x = float(unX_i) * 2 / float(width) - 1;
float y = 1 - float(unY_i) * 2 / float(height);
NDCPoint point;
point.x = x;
point.y = y;
point.z = 1.0f;
return point;
}
Could you please explain how to map the quad model to the correct place on the texture with a perspective projection?
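One way to approach that mapping, sketched under the assumptions above (the quad's pivot sits on the world plane z = 0, which Mview places at view-space depth 2; fov 45°, aspect 1280/720). NDCToViewPlane is a hypothetical helper, not part of the original code:

#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Map an NDC point (e.g. from ConvertToNDC) onto the view-space plane at depth
// planeDist by scaling with the frustum extents at that depth.
glm::vec3 NDCToViewPlane(float xNdc, float yNdc, float fovyRad, float aspect, float planeDist)
{
    float halfH = std::tan(fovyRad / 2.0f) * planeDist; // frustum half-height at planeDist
    float halfW = halfH * aspect;                       // frustum half-width at planeDist
    return glm::vec3(xNdc * halfW, yNdc * halfH, -planeDist);
}

// Usage: place the quad so its pivot projects to the given pixel.
// NDCPoint p = ConvertToNDC(pixelX, pixelY);
// glm::vec3 viewPos = NDCToViewPlane(p.x, p.y, glm::radians(45.0f), aspect, 2.0f);
// // Mview translates by (0, 0, -2), so the world position is viewPos + (0, 0, 2):
// Mmodel = glm::translate(glm::mat4(1.0f), viewPos + glm::vec3(0.0f, 0.0f, 2.0f));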
I am trying to get a sphere to make contact with the floor. The sphere's radius is 1.0f, and I need to work out how to determine the distance between the sphere and the floor.
The sphere is placed here:
glm::mat4 mv_matrix_sphere =
glm::translate(glm::vec3(-2.0f, y, 0.0f)) *
glm::rotate(-t, glm::vec3(0.0f, 1.0f, 0.0f)) *
glm::rotate(-t, glm::vec3(1.0f, 0.0f, 0.0f)) *
glm::mat4(1.0f);
mySphere.mv_matrix = myGraphics.viewMatrix * mv_matrix_sphere;
mySphere.proj_matrix = myGraphics.proj_matrix;
where y = 20.0f.
The sphere will fall and land on the floor at:
myFloor.mv_matrix = myGraphics.viewMatrix *
glm::translate(glm::vec3(0.0f, 0.0f, 0.0f)) *
glm::scale(glm::vec3(1000.0f, 0.001f, 1000.0f)) *
glm::mat4(1.0f);
myFloor.proj_matrix = myGraphics.proj_matrix;
The function I need will work out when the sphere is 1.0f (its radius) from the floor, so that it collides with it instead of clipping through.
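Since myFloor's matrix only scales, the floor stays the world plane y = 0 (give or take the 0.001f thickness), so the distance from the sphere to the floor is simply the y coordinate of the sphere's center, and contact happens when it drops to one radius. A minimal sketch under that assumption:

// Distance from the sphere's center (at height centerY) to the floor plane y = 0.
// The sphere touches the floor when that distance reaches its radius.
bool SphereTouchesFloor(float centerY, float radius)
{
    return centerY - radius <= 0.0f;
}

// e.g. per frame, clamp the fall so the sphere rests instead of clipping through:
// y -= fallSpeed * deltaTime;
// if (SphereTouchesFloor(y, 1.0f)) y = 1.0f; // keep the center one radius above the plane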
I'm making a solar system in OpenGL and I want the planets to be able to orbit other planets as well as rotate around their own centers.
This is the code I'm currently using to make the planets orbit a specific point:
Model = glm::translate(Model, glm::vec3(-orbit_radius_, 0.0f, 0.0f));
Model = glm::rotate(Model, glm::radians(orbit_speed_) / 100.0f, glm::vec3(0.0f, 1.0f, 0.0f));
Model = glm::translate(Model, glm::vec3(orbit_radius_, 0.0f, 0.0f));
How would I combine this with a transformation that spins the object around itself?
I got it to work by just splitting the transformations and then combining them at the end:
rotate_ = glm::translate(rotate_, glm::vec3(-orbit_radius_, 0.0f, 0.0f));
rotate_ = glm::rotate(rotate_, glm::radians(orbit_speed_) / 100.0f, glm::vec3(0.0f, 1.0f, 0.0f));
rotate_ = glm::translate(rotate_, glm::vec3(orbit_radius_, 0.0f, 0.0f));
spin_ = glm::rotate(spin_, glm::radians(spin_speed_) / 100.0f, glm::vec3(0.0f, 1.0f, 0.0f));
final_ = rotate_ * spin_;
If you want to spin and rotate an object, then I recommend creating the object with its center at (0, 0, 0).
The self-spinning of the object has to be done first. Then translate and rotate the object:
Model = rotate * translate * spin
e.g.:
rot_angle += glm::radians(orbit_speed_) / 100.0f;
spin_angle += glm::radians(spin_speed_) / 100.0f;
glm::vec3 tvec = glm::vec3(orbit_radius_, 0.0f, 0.0f);
glm::vec3 axis = glm::vec3(0.0f, 1.0f, 0.0f);
glm::mat4 translate = glm::translate(glm::mat4(1.0f), tvec);
glm::mat4 rotate = glm::rotate(glm::mat4(1.0f), rot_angle, axis);
glm::mat4 spin = glm::rotate(glm::mat4(1.0f), spin_angle, axis);
Model = rotate * translate * spin;
With this solution rot_angle and spin_angle have to be incremented in every frame by a constant step.
If you don't want to increment the angles, then you have to store 2 matrices instead of the angles, one for the rotation and one for the spin:
glm::mat4 rotate(1.0f);
glm::mat4 spin(1.0f);
glm::vec3 tvec = glm::vec3(-orbit_radius_, 0.0f, 0.0f);
glm::vec3 axis = glm::vec3(0.0f, 1.0f, 0.0f);
float rot_angle = glm::radians(orbit_speed_) / 100.0f;
float spin_angle = glm::radians(spin_speed_) / 100.0f;
rotate = glm::translate(rotate, tvec);
rotate = glm::rotate(rotate, rot_angle, axis );
rotate = glm::translate(rotate, -tvec);
spin = glm::rotate(spin, spin_angle, axis);
Model = rotate * spin;
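If you want the hierarchy from the question (planets orbiting other planets), the same rotate * translate * spin pattern nests: apply a parent body's placement first. A sketch with hypothetical names, under the same assumptions as above:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// One body in the hierarchy: orbit the parent's center, then spin in place.
glm::mat4 bodyMatrix(const glm::mat4& parentPlacement, float orbitAngle,
                     float orbitRadius, float spinAngle)
{
    const glm::vec3 axis(0.0f, 1.0f, 0.0f);
    glm::mat4 rotate    = glm::rotate(glm::mat4(1.0f), orbitAngle, axis);
    glm::mat4 translate = glm::translate(glm::mat4(1.0f), glm::vec3(orbitRadius, 0.0f, 0.0f));
    glm::mat4 spin      = glm::rotate(glm::mat4(1.0f), spinAngle, axis);
    return parentPlacement * rotate * translate * spin;
}

// Usage: the moon orbits the planet's placement (orbit + translate without the
// planet's own spin, so the moon does not inherit the planet's rotation):
// glm::mat4 planetPlacement = bodyMatrix(glm::mat4(1.0f), planetOrbitAngle, 10.0f, 0.0f);
// glm::mat4 planetModel     = bodyMatrix(glm::mat4(1.0f), planetOrbitAngle, 10.0f, planetSpinAngle);
// glm::mat4 moonModel       = bodyMatrix(planetPlacement, moonOrbitAngle, 2.0f, moonSpinAngle);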
I wrote a shadow map shader for my graphics engine. I followed these tutorials:
Part 1 and the following part.
Unfortunately, the results I get are quite a bit off. Here are some screenshots. They show what my scene normally looks like, the scene with shadows enabled, and the content of the shadow map (please ignore the white stuff in the center, that's just the duck's geometry).
This is how I compute the coordinates to sample the shadow map with in my fragment shader:
float calcShadowFactor(vec4 lightSpacePosition) {
vec3 projCoords = lightSpacePosition.xyz / lightSpacePosition.w;
vec2 uvCoords;
uvCoords.x = 0.5 * projCoords.x + 0.5;
uvCoords.y = 0.5 * projCoords.y + 0.5;
float z = 0.5 * projCoords.z + 0.5;
float depth = texture2D(shadowMapSampler, uvCoords).x;
if (depth < (z + 0.00001f))
return 0.0f;
else
return 1.0f;
}
The lightSpacePosition vector is computed by:
projectionMatrix * inverseLightTransformationMatrix
* modelTransformationMatrix * vertexPosition
The projection matrix is:
[1.0f / (tan(fieldOfView / 2) * (width / height)), 0.0f, 0.0f, 0.0f]
[0.0f, 1.0f / tan(fieldOfView / 2), 0.0f, 0.0f]
[0.0f, 0.0f, (-zNear - zFar) / (zNear - zFar), 2.0f * zFar * zNear / (zNear - zFar)]
[0.0f, 0.0f, 1.0f, 0.0f]
My shadow map seems to be okay and I made sure the rendering pass uses the same lightSpacePosition vector as my shadow map pass. But I can't figure out what is wrong.
Although I do not understand this entirely, I think I found the bug:
I needed to apply the bias while still in clip space and only then do the perspective divide. My shadow coordinate computation now looks like this:
mat4 biasMatrix = mat4(
0.5f, 0.0f, 0.0f, 0.0f,
0.0f, 0.5f, 0.0f, 0.0f,
0.0f, 0.0f, 0.5f, 0.0f,
0.5f, 0.5f, 0.5f, 1.0f
);
vec4 shadowCoord0 = biasMatrix * light * vec4(vertexPosition, 1.0f);
shadowCoord = shadowCoord0.xyz / shadowCoord0.w;
where
light = projectionMatrix * inverseLightTransformationMatrix
* modelTransformationMatrix
Now the fragment shader's shadow factor computation is rather simple:
float shadowFactor = 1.0f;
if (texture(shadowMapSampler, shadowCoord.xy).z < shadowCoord.z - 0.0001f)
shadowFactor = 0.0f;
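For the record, the two formulations agree algebraically: the bias matrix scales x, y and z by 0.5 and adds 0.5 * w, leaving w untouched, so it commutes with the perspective divide. Worked out for the x component:

(biasMatrix * p).x / p.w = (0.5 * p.x + 0.5 * p.w) / p.w = 0.5 * (p.x / p.w) + 0.5

which is exactly the 0.5 * projCoords.x + 0.5 mapping of the first version (and likewise for y and z). So the fix probably came from doing the divide and the bias consistently in one place, rather than from the order of operations itself.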