Project Points on near plane using NDC space - c++

I have several pairs of points in world space; each pair has a different depth. I want to project those points onto the near plane of the view frustum, then recompute their new world positions.
note: I want to keep the perspective effect
To do so, I convert each point's location to NDC space. I think that all points in NDC space with the same z value lie on the same plane, parallel to the near plane. So if I set their z value to -1, they should lie on the near plane.
Now that I have those new NDC locations I need their world positions. Since I lost the w component by changing the depth, I need to recompute it.
I found this link: unproject ndc
which said that:
wclip * (inverse(mvp) * vec4(ndc.xyz, 1.0f)).w = 1.0f
wclip = 1.0f / (inverse(mvp) * vec4(ndc.xyz, 1.0f)).w
my full code:
glm::vec4 homogeneousClipSpaceLeft = mvp * leftAnchor;
glm::vec4 homogeneousClipSpaceRight = mvp * rightAnchor;
glm::vec3 ndc_left = homogeneousClipSpaceLeft.xyz() / homogeneousClipSpaceLeft.w;
glm::vec3 ndc_right = homogeneousClipSpaceRight.xyz() / homogeneousClipSpaceRight.w;
ndc_left.z = -1.0f;
ndc_right.z = -1.0f;
float clipWLeft = (1.0f / (inverseMVP * glm::vec4(ndc_left, 1.0f)).w);
float clipWRight = (1.0f / (inverseMVP * glm::vec4(ndc_right, 1.0f)).w);
glm::vec3 worldPositionLeft = glm::vec3(clipWLeft * inverseMVP * glm::vec4(ndc_left, 1.0f));
glm::vec3 worldPositionRight = glm::vec3(clipWRight * inverseMVP * glm::vec4(ndc_right, 1.0f));
It should work in theory, but I get weird results. I start with 2 points in world space:
left world position: -116.463 15.6386 -167.327
right world position: 271.014 15.6386 -167.327
left NDC position: -0.59719 0.0790622 -1
right NDC position: 0.722784 0.0790622 -1
final left position: 31.4092 -9.22973 1251.16
final right position: 31.6823 -9.22981 1251.17
mvp
4.83644 0 0 0
0 4.51071 0 0
0 0 -1.0002 -1
-284.584 41.706 1250.66 1252.41
Am I doing something wrong?
Would you recommend this way to project pairs of points onto the near plane while keeping the perspective effect?

If glm::vec3 ndc_left and glm::vec3 ndc_right are normalized device coordinates, then the following projects the coordinates onto the near plane in normalized device space:
ndc_left.z = -1.0f;
ndc_right.z = -1.0f;
If you want to get the model position of a point in normalized device space in Cartesian coordinates, then you have to transform the point by the inverse model view projection matrix and divide the x, y and z components by the w component of the result. Note that the transformation by inverseMVP gives a homogeneous coordinate:
glm::vec4 wlh = inverseMVP * glm::vec4(ndc_left, 1.0f);
glm::vec4 wrh = inverseMVP * glm::vec4(ndc_right, 1.0f);
glm::vec3 worldPositionLeft = glm::vec3( wlh.x, wlh.y, wlh.z ) / wlh.w;
glm::vec3 worldPositionRight = glm::vec3( wrh.x, wrh.y, wrh.z ) / wrh.w;
Note that the OpenGL Mathematics (GLM) library provides an operation for "unprojecting". See glm::unProject.
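For illustration, a minimal sketch of how glm::unProject could do the same job, assuming the MVP splits into a view and a projection part; the 1x1 viewport and the default [0, 1] window depth range are assumptions of this sketch:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::unProject

glm::vec3 ndcToWorld(const glm::vec3 &ndc, const glm::mat4 &view, const glm::mat4 &projection)
{
    // glm::unProject expects window coordinates, so map NDC [-1, 1]
    // to a 1x1 viewport with a [0, 1] depth range first
    glm::vec3 win = (ndc + 1.0f) / 2.0f;
    // passing the view matrix as the "model" argument yields world space
    return glm::unProject(win, view, projection, glm::vec4(0.0f, 0.0f, 1.0f, 1.0f));
}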

Related

Why is the camera view matrix not changing the position of the point

glm::vec3 Position(0, 0, 500);
glm::vec3 Front(0, 0, 1);
glm::vec3 Up(0, 1, 0);
glm::vec3 vPosition = glm::vec3(Position.x, Position.y, Position.z);
glm::vec3 vFront = glm::vec3(Front.x, Front.y, Front.z);
glm::vec3 vUp = glm::vec3(Up.x, Up.y, Up.z);
glm::mat4 view1 = glm::lookAt(vPosition, vPosition + vFront, vUp);
glm::mat4 projection1 = glm::perspective(glm::radians(45.0f), (float)1920 / (float)1080, 0.1f, 1000.0f);
glm::mat4 VPMatrix = projection1 * view1;
float testZ = 0.0f;
glm::vec3 modelVertices(-50.0f, 50.0f, testZ);
glm::vec4 finalPositionMin = VPMatrix * glm::vec4(modelVertices, 1.0);
Print() << finalPositionMin.x;
In the code, if I change the FOV value of the perspective, that affects the object drawn on screen: for smaller values the object size increases on the screen.
At a FOV value of 45 the finalPositionMin.x is -50.
At a FOV value of 25 the finalPositionMin.x is -126.
But if I move the camera closer to the object, that should also affect the object: the closer we come to the object, the more finalPositionMin.x should be affected.
Why is changing the Z position of the camera not affecting the finalPositionMin.x of the object?
To get a Cartesian coordinate, you need to divide the x, y, and z components by the w component. You have to print finalPositionMin.x/finalPositionMin.w.
Likely you are confusing "window" coordinates and "world" coordinates. In your example, finalPositionMin is not in world space, it is in clip space. To get a world space coordinate you need to multiply modelVertices by the model matrix, and nothing else.
Note that the view matrix (view1) transforms from world space to view space. The projection matrix (projection1) transforms from view space to clip space. With the perspective divide you can transform from clip space to normalized device coordinates.
To get window coordinates ("pixel" coordinates) you have to "project" the NDC onto the viewport (width, height is the size of the viewport):
x = width * (ndc.x+1)/2
y = height * (1-ndc.y)/2
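Applied to the snippet above, the whole chain looks like this (the 1920x1080 viewport size is taken from the question's glm::perspective call; that it matches the actual viewport is an assumption):
glm::vec4 clip = VPMatrix * glm::vec4(modelVertices, 1.0f); // clip space
glm::vec3 ndc = glm::vec3(clip) / clip.w;                   // perspective divide -> NDC
float window_x = 1920.0f * (ndc.x + 1.0f) / 2.0f;           // "project" onto the viewport
float window_y = 1080.0f * (1.0f - ndc.y) / 2.0f;           // window y points down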

Issue with Picking (custom unProject() function)

I'm currently working on an STL file viewer. It uses an arcball camera.
To provide more features in this viewer (which can handle more than one object) I would like to implement click selection. To achieve it, I have used picking (pseudo code I have used).
At this time, my code that checks for any 3D object between 2 points works. However the conversion of the mouse position to a correct pair of vectors is far from working:
glm::vec3 range = transform.GetPosition() + ( transform.GetFront() * 1000.0f);
// x and y are cursor position on the screen
glm::vec3 start = UnProject(x,y, transform.GetPosition().z);
glm::vec3 end = UnProject(x,y,range.z);
/*
The code which iterate over all objects in the scene and checks for collision
between my start / end and the object hitbox
*/
As you can see I have tried (maybe it is stupid) to set the z distance between my start and my end to 1000 * the Front vector of my camera. But it's not working: the vectors I get are incoherent.
For example, placing the camera at 0 0 0 with a front of 0 0 -1 gives me this set of vectors:
Start : 0.0000~ , 0.0000~ , 0.0000~
End : 0.0000~ , 0.0000~ , 0.0000~
which is (by my logic) incoherent; I would have expected something more like (Start : 0, 0, 0) (End : 0, 0, -1000).
I think there's an issue with my UnProject function:
glm::vec3 UnProject(float winX, float winY, float winZ)
{
    // Compute (projection x modelView) ^ -1:
    glm::mat4 modelView = GetViewMatrix() * glm::mat4(1.0f);
    glm::mat4 projection = GetProjectionMatrix(ScreenSize);
    const glm::mat4 m = glm::inverse(projection * modelView);
    // Need to invert Y since the screen Y-origin points down,
    // while the 3D Y-origin points up (this is an OpenGL only requirement):
    winY = ScreenSize.cy - winY;
    // Transformation to normalized coordinates between -1 and 1:
    glm::vec4 in;
    in.x = winX / ScreenSize.cx * 2.0 - 1.0;
    in.y = winY / ScreenSize.cy * 2.0 - 1.0;
    in.z = 2.0 * winZ - 1.0;
    in.w = 1.0;
    // To world coordinates:
    glm::vec4 out(m * in);
    if (out.w == 0.0) // Avoid a division by zero
    {
        return glm::vec3(0.0f);
    }
    out.w = 1.0 / out.w;
    return glm::vec3(out.x * out.w, out.y * out.w, out.z * out.w);
}
Since this function is a basic rewrite of the pseudo code (from here) and I'm far from good at mathematics, I don't really see what could go wrong...
PS: my view matrix (provided by GetViewMatrix()) is correct (since I use it to render my scene),
my projection matrix is also correct, and
the ScreenSize object carries my viewport size.
I have found what's wrong: the returned vec3 should be made by dividing each component by the w component instead of being multiplied by it. Here is the new UnProject function:
glm::vec3 UnProject2(float winX, float winY, float winZ)
{
    glm::mat4 View = GetViewMatrix() * glm::mat4(1.0f);
    glm::mat4 projection = GetProjectionMatrix(ScreenSize);
    glm::mat4 viewProjInv = glm::inverse(projection * View);
    winY = ScreenSize.cy - winY;
    glm::vec4 clickedPointOnScreen;
    clickedPointOnScreen.x = (winX / ScreenSize.cx) * 2.0f - 1.0f;
    clickedPointOnScreen.y = (winY / ScreenSize.cy) * 2.0f - 1.0f;
    clickedPointOnScreen.z = 2.0f * winZ - 1.0f;
    clickedPointOnScreen.w = 1.0f;
    glm::vec4 clickedPointOrigin = viewProjInv * clickedPointOnScreen;
    return glm::vec3(clickedPointOrigin) / clickedPointOrigin.w;
}
I also changed the way start and end are calculated:
glm::vec3 start = UnProject2(x,y,0.0f);
glm::vec3 end = UnProject2(x,y,1.0f);
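With start on the near plane (winZ = 0.0f) and end on the far plane (winZ = 1.0f), the picking ray itself then follows directly; this small addition is for completeness and is not part of the original fix:
// the ray starts on the near plane and points through the clicked pixel
glm::vec3 rayDir = glm::normalize(end - start);
// each object can now be tested against the segment from start to end
// (or against the ray defined by start and rayDir)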

OpenGL ray tracing using inverse transformations

I have a pipeline that uses model, view and projection matrices to render a triangle mesh.
I am trying to implement a ray tracer that will pick out the object I'm clicking on by projecting the ray origin and direction by the inverse of the transformations.
When I just had a model (no view or projection) in the vertex shader I had
Vector4f ray_origin = model.inverse() * Vector4f(xworld, yworld, 0, 1);
Vector4f ray_direction = model.inverse() * Vector4f(0, 0, -1, 0);
and everything worked perfectly. However, I added a view and projection matrix and then changed the code to be
Vector4f ray_origin = model.inverse() * view.inverse() * projection.inverse() * Vector4f(xworld, yworld, 0, 1);
Vector4f ray_direction = model.inverse() * view.inverse() * projection.inverse() * Vector4f(0, 0, -1, 0);
and nothing is working anymore. What am I doing wrong?
If you use perspective projection, then I recommend defining the ray by a point on the near plane and another one on the far plane, in normalized device space. The z coordinate of the near plane is -1 and the z coordinate of the far plane is 1. The x and y coordinates have to be the "click" position on the screen in the range [-1, 1]: the coordinate of the bottom left is (-1, -1) and the coordinate of the top right is (1, 1). The window or mouse coordinates can be mapped linearly to the NDC x and y coordinates:
float x_ndc = 2.0 * mouse_x/window_width - 1.0;
float y_ndc = 1.0 - 2.0 * mouse_y/window_height; // flipped
Vector4f p_near_ndc = Vector4f(x_ndc, y_ndc, -1, 1); // z near = -1
Vector4f p_far_ndc = Vector4f(x_ndc, y_ndc, 1, 1); // z far = 1
A point in normalized device space can be transformed to model space by the inverse projection matrix, then the inverse view matrix and finally the inverse model matrix:
Vector4f p_near_h = model.inverse() * view.inverse() * projection.inverse() * p_near_ndc;
Vector4f p_far_h = model.inverse() * view.inverse() * projection.inverse() * p_far_ndc;
After this the point is a homogeneous coordinate, which can be transformed by the perspective divide to a Cartesian coordinate:
Vector3f p0 = p_near_h.head<3>() / p_near_h.w();
Vector3f p1 = p_far_h.head<3>() / p_far_h.w();
The "ray" in model space, defined by point r and a normalized direction d finally is:
Vector3f r = p0;
Vector3f d = (p1 - p0).normalized()
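Collected into one helper, the steps above might look like this (a sketch assuming Eigen, to match the snippets; the function name makeRay is invented here):
#include <Eigen/Dense>

// Builds a model-space ray from a click at NDC coordinates (x_ndc, y_ndc).
void makeRay(const Eigen::Matrix4f &model, const Eigen::Matrix4f &view,
             const Eigen::Matrix4f &projection, float x_ndc, float y_ndc,
             Eigen::Vector3f &r, Eigen::Vector3f &d)
{
    // one combined inverse instead of three separate inversions
    Eigen::Matrix4f inv = (projection * view * model).inverse();
    Eigen::Vector4f p_near_h = inv * Eigen::Vector4f(x_ndc, y_ndc, -1.0f, 1.0f);
    Eigen::Vector4f p_far_h  = inv * Eigen::Vector4f(x_ndc, y_ndc,  1.0f, 1.0f);
    Eigen::Vector3f p0 = p_near_h.head<3>() / p_near_h.w();
    Eigen::Vector3f p1 = p_far_h.head<3>() / p_far_h.w();
    r = p0;
    d = (p1 - p0).normalized();
}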

Rotate Object around origin as it faces origin in OpenGL with GLM?

I'm trying to make a simple animation where an object rotates around the world origin in OpenGL using the GLM library. My idea is:
Send object to origin
Rotate it
Send back to original position
Make it look at what I want
Here's my implementation:
// Rotates object around point p
void rotate_about(float deltaTime, glm::vec3 p, bool ended) {
    glm::vec3 axis = glm::vec3(0, 1, 0); // rotation axis
    glm::mat4 scale_m = glm::scale(glm::mat4(1.0f), glm::vec3(scale, scale, scale)); // scale matrix
    glm::mat4 rotation = getMatrix(Right, Up, Front, Position); // builds rotation matrix
    rotation = glm::translate(rotation, p - Position);
    rotation = glm::rotate(rotation, ROTATION_SPEED * deltaTime, axis);
    rotation = glm::translate(rotation, Position - p);
    Matrix = rotation * scale_m;
    // look at point P
    Front = glm::normalize(p - start_Position);
    Right = glm::normalize(glm::cross(WorldUp, Front));
    Up = glm::normalize(glm::cross(Right, Front));
    if (ended == true) { // if last iteration of my animation: saves position
        Position.x = Matrix[3][0];
        Position.y = Matrix[3][1];
        Position.z = Matrix[3][2];
    }
}
getMatrix() simply returns a 4x4 matrix as:
| Right.x Right.y Right.z 0 |
| Up.x    Up.y    Up.z    0 |
| Front.x Front.y Front.z 0 |
| Pos.x   Pos.y   Pos.z   1 |
I'm using this image as reference:
As it is, my model simply disappears when I start the animation. If I remove the lines below "//look at point P" it rotates around the origin, but twitches every time my animation restarts. I'm guessing I'm losing or mixing information I shouldn't somewhere.
How can I store my models Front/Right/Up information so I can rebuild its matrix from scratch?
First edit: this is the effect I get when I don't try to make my model look at the point P, in this case the origin. When I do try, my model disappears. How can I make it look at where I want, and how can I get my model's new Front/Right/Up vectors after I finish rotating it?
This is the code I ran in the gif above
Operations like glm::translate() or glm::rotate() build a matrix from their parameters and multiply the input matrix by that new matrix.
This means that
rotation = glm::translate(rotation, Position - p );
can be expressed as (pseudo code):
rotation = rotation * translation(Position - p);
Note that the matrix multiplication has to be "read" from the left to the right. (See GLSL Programming/Vector and Matrix Operations.)
The operation translate * rotate causes a rotation around the origin of the object.
The operation rotate * translate causes a rotation around the origin of the world.
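In GLM terms, the difference is just the order of composition (a minimal illustration; pos, angle and axis are placeholder variables, not taken from the question):
// translate * rotate: the vertex is rotated first, then moved
// -> rotation around the object's own origin
glm::mat4 aroundObject = glm::translate(glm::mat4(1.0f), pos) * glm::rotate(glm::mat4(1.0f), angle, axis);
// rotate * translate: the vertex is moved first, then rotated
// -> rotation around the origin of the world
glm::mat4 aroundWorld = glm::rotate(glm::mat4(1.0f), angle, axis) * glm::translate(glm::mat4(1.0f), pos);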
The matrix glm::mat4 rotation (in the code of your question) is the current model matrix of your object.
It contains the position (translation) and the orientation of the object.
You want to rotate the object around the origin of the world.
To do so you have to create a matrix which contains the new rotation:
glm::mat4 new_rot = glm::rotate(glm::mat4(1.0f), ROTATION_SPEED * deltaTime, axis);
Then you can calculate the final matrix as follows:
Matrix = new_rot * rotation * scale_m;
If you want to rotate an object around a point p while the object always faces p, then all you need is the position of the object (start_position) and the rotation axis.
In your case the rotation axis is the up vector of the world.
glm::vec3 WorldUp( 0.0f, 1.0f, 0.0f );
glm::vec3 start_position = ...;
float scale = ...;
glm::vec3 p = ...;
Calculate the rotation matrix and the new (rotated) position:
glm::mat4 rotate = glm::rotate(glm::mat4(1.0f), ROTATION_SPEED * deltaTime, WorldUp);
glm::vec4 pos_rot_h = rotate * glm::vec4( start_position - p, 1.0f );
glm::vec3 pos_rot = glm::vec3( pos_rot_h ) + p;
Calculate the direction in which the object should "look":
glm::vec3 Front = glm::normalize(p - pos_rot);
You can use your function getMatrix to setup the current orientation matrix of the object:
glm::vec3 Right = glm::normalize(glm::cross(WorldUp, Front));
glm::mat4 pos_look = getMatrix(Right, WorldUp, Front, pos_rot);
Calculate the model matrix:
glm::mat4 scale_m = glm::scale(glm::mat4(1.0f), glm::vec3(scale));
Matrix = pos_look * scale_m;
The final code may look like this:
glm::mat4 getMatrix(const glm::vec3 &X, const glm::vec3 &Y, const glm::vec3 &Z, const glm::vec3 &T)
{
    return glm::mat4(
        glm::vec4( X, 0.0f ),
        glm::vec4( Y, 0.0f ),
        glm::vec4( Z, 0.0f ),
        glm::vec4( T, 1.0f ) );
}

void rotate_about(float deltaTime, glm::vec3 p, bool ended) {
    glm::mat4 rotate = glm::rotate(glm::mat4(1.0f), ROTATION_SPEED * deltaTime, WorldUp);
    glm::vec4 pos_rot_h = rotate * glm::vec4( start_position - p, 1.0f );
    glm::vec3 pos_rot = glm::vec3( pos_rot_h ) + p;
    glm::vec3 Front = glm::normalize(p - pos_rot);
    glm::vec3 Right = glm::normalize(glm::cross(WorldUp, Front));
    glm::mat4 pos_look = getMatrix(Right, WorldUp, Front, pos_rot);
    glm::mat4 scale_m = glm::scale(glm::mat4(1.0f), glm::vec3(scale));
    Matrix = pos_look * scale_m;
    if (ended == true)
        Position = glm::vec3(Matrix[3]);
}
SOLUTION:
The problem was in this part:
rotation = glm::translate(rotation, p - Position );
rotation = glm::rotate(rotation, ROTATION_SPEED * deltaTime, axis);
rotation = glm::translate(rotation, Position - p );
if (ended == true) { //if last iteration of my animation: saves position
    Position.x = Matrix[3][0];
    Position.y = Matrix[3][1];
    Position.z = Matrix[3][2];
}
Note that I was using the distance between the world origin and the model as the radius of the translation. However, after the animation ends I update the model's Position, which changes the result of p - Position, i.e., the orbit radius. When this happens the model "twitches", because it lost rotation information.
I solved it by using a different variable for the orbit radius, and applying the translation on the z-axis of the model. When the translation is applied on the x-axis, the model - which faces the camera initially - will end up sideways to the origin. However, applying the translation on the z-axis will end up with the model either facing or backwards to the origin, depending on the sign.
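A rough sketch of that fix (the variable orbit_radius, the accumulated angle and the exact composition are a reconstruction of the description, not the poster's code):
// computed once before the animation starts and never updated afterwards
float orbit_radius = glm::length(start_Position - p);
// each frame: accumulate the angle and rebuild the matrix from scratch
angle += ROTATION_SPEED * deltaTime;
glm::mat4 m = glm::translate(glm::mat4(1.0f), p);           // move to the orbit center
m = glm::rotate(m, angle, glm::vec3(0.0f, 1.0f, 0.0f));     // spin around the world up axis
m = glm::translate(m, glm::vec3(0.0f, 0.0f, orbit_radius)); // push out along the model's z-axis
Matrix = m * scale_m;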

ray casting from mouse with opengl 2

I'm attempting to do ray casting on mouse click with the eventual goal of finding the collision point with a plane. However I'm unable to create the ray. The world is rendered using a frustum and another matrix I'm using as a camera, in the order of frustum * camera * vertex_position. With the top left of the screen as 0,0 I'm able to get the X,Y of the click in pixels. I then use the below code to convert this to the ray:
float x = (2.0f * x_screen_position) / width - 1.0f;
float y = 1.0f - (2.0f * y_screen_position) / height;
Vector4 screen_click = Vector4 (x, y, 1.0f, 1.0f);
Vector4 ray_origin_world = get_camera_matrix() * screen_click;
Vector4 tmp = inverse(get_view_frustum()) * screen_click;
tmp = get_camera_matrix() * tmp;
Vector4 ray_direction = normalize(tmp);
view_frustum matrix:
Matrix4 view_frustum(float angle_of_view, float aspect_ratio, float z_near, float z_far) {
    return Matrix4(
        Vector4(1.0/tan(angle_of_view), 0.0, 0.0, 0.0),
        Vector4(0.0, aspect_ratio/tan(angle_of_view), 0.0, 0.0),
        Vector4(0.0, 0.0, (z_far+z_near)/(z_far-z_near), 1.0),
        Vector4(0.0, 0.0, -2.0*z_far*z_near/(z_far-z_near), 0.0)
    );
}
When the "camera" matrix is at 0,0,0 this gives the expected results however once I change to a fixed camera position in another location the results returned are not correct at all. The fixed "camera" matrix:
Matrix4(
    Vector4(1.0, 0.0, 0.0, 0.0),
    Vector4(0.0, 0.70710678118, -0.70710678118, 0.0),
    Vector4(0.0, 0.70710678118, 0.70710678118, 0.0),
    Vector4(0.0, 8.0, 20.0, 1.0)
);
Because many examples I have found online do not implement a camera in this way, I have been unable to find much information to help in this case. Can anyone offer any insight into this or point me in a better direction?
Vector4 tmp = inverse(get_view_frustum() * get_camera_matrix()) * screen_click; //take the inverse of the camera matrix as well
tmp /= tmp.w; //homogeneous coordinate "normalize" (different to typical normalization), needed with perspective projection or non-linear depth
Vector3 ray_direction = normalize(Vector3(tmp.x, tmp.y, tmp.z)); //make sure to normalize just the direction without w
[EDIT]
A more lengthy and similar post is here: https://stackoverflow.com/a/20143963/1888983
If you only have matrices, a start point and a point in the ray direction should be used. It's common to use points on the near and far plane for this (an advantage if you only want the ray to intersect things that are visible). That is,
(x, y, -1, 1) to (x, y, 1, 1)
These points are in normalized device coordinates (NDC, a -1 to 1 cube that is your viewing volume). All you need to do is move both points all the way to world space and normalize...
ndcPoint4 = /* from above */;
eyespacePoint4 = inverseProjectionMatrix * ndcPoint4;
worldSpacePoint4 = inverseCameraMatrix * eyespacePoint4;
worldSpacePoint3 = worldSpacePoint4.xyz / worldSpacePoint4.w;
//alternatively, with combined matrices
worldToClipMatrix = projectionMatrix * cameraMatrix; //called "clip" space before normalization
clipToWorldMatrix = inverse(worldToClipMatrix);
worldSpacePoint4 = clipToWorldMatrix * ndcPoint4;
worldSpacePoint3 = worldSpacePoint4.xyz / worldSpacePoint4.w;
//then for the ray, after transforming both start/end points
rayStart = worldPointOnNearPlane;
rayEnd = worldPointOnFarPlane;
rayDir = rayEnd - rayStart;
If you have the camera's world space position, you can drop either start or end point since all rays pass through the camera's origin.
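For reference, a compilable version of the pseudo code above (written with GLM purely for illustration; the question's own Matrix4/Vector4 types would work the same way):
#include <glm/glm.hpp>

// Builds a world-space picking ray from a click given in normalized device coordinates.
void pickingRay(const glm::mat4 &projection, const glm::mat4 &camera,
                float x_ndc, float y_ndc, glm::vec3 &rayStart, glm::vec3 &rayDir)
{
    glm::mat4 clipToWorld = glm::inverse(projection * camera); // inverse of world-to-clip
    glm::vec4 nearH = clipToWorld * glm::vec4(x_ndc, y_ndc, -1.0f, 1.0f); // on the near plane
    glm::vec4 farH  = clipToWorld * glm::vec4(x_ndc, y_ndc,  1.0f, 1.0f); // on the far plane
    rayStart = glm::vec3(nearH) / nearH.w;
    glm::vec3 rayEnd = glm::vec3(farH) / farH.w;
    rayDir = glm::normalize(rayEnd - rayStart);
}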