I have a pipeline that uses model, view and projection matrices to render a triangle mesh.
I am trying to implement a ray tracer that will pick out the object I'm clicking on by transforming the ray origin and direction by the inverses of those transformations.
When I just had a model (no view or projection) in the vertex shader I had
Vector4f ray_origin = model.inverse() * Vector4f(xworld, yworld, 0, 1);
Vector4f ray_direction = model.inverse() * Vector4f(0, 0, -1, 0);
and everything worked perfectly. However, I added a view and projection matrix and then changed the code to be
Vector4f ray_origin = model.inverse() * view.inverse() * projection.inverse() * Vector4f(xworld, yworld, 0, 1);
Vector4f ray_direction = model.inverse() * view.inverse() * projection.inverse() * Vector4f(0, 0, -1, 0);
and nothing is working anymore. What am I doing wrong?
If you use a perspective projection, then I recommend defining the ray by a point on the near plane and another one on the far plane, in normalized device space. The z coordinate of the near plane is -1 and the z coordinate of the far plane is 1. The x and y coordinates have to be the "click" position on the screen in the range [-1, 1]: the coordinate of the bottom left is (-1, -1) and the coordinate of the top right is (1, 1). Window or mouse coordinates can be mapped linearly to the NDC x and y coordinates:
float x_ndc = 2.0 * mouse_x/window_width - 1.0;
float y_ndc = 1.0 - 2.0 * mouse_y/window_height; // flipped
Vector4f p_near_ndc = Vector4f(x_ndc, y_ndc, -1, 1); // z near = -1
Vector4f p_far_ndc = Vector4f(x_ndc, y_ndc, 1, 1); // z far = 1
A point in normalized device space can be transformed to model space by the inverse projection matrix, then the inverse view matrix and finally the inverse model matrix:
Vector4f p_near_h = model.inverse() * view.inverse() * projection.inverse() * p_near_ndc;
Vector4f p_far_h = model.inverse() * view.inverse() * projection.inverse() * p_far_ndc;
After this, the point is a homogeneous coordinate, which can be converted to a Cartesian coordinate by the perspective divide:
Vector3f p0 = p_near_h.head<3>() / p_near_h.w();
Vector3f p1 = p_far_h.head<3>() / p_far_h.w();
The "ray" in model space, defined by point r and a normalized direction d finally is:
Vector3f r = p0;
Vector3f d = (p1 - p0).normalized();
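Putting the pieces together, a minimal end-to-end sketch (assuming Eigen dense matrices and the model, view and projection matrices from the question; the function name is hypothetical):

#include <Eigen/Dense>
using Eigen::Matrix4f;
using Eigen::Vector3f;
using Eigen::Vector4f;

// Returns the ray origin (r) and normalized direction (d) in model space.
void pickingRay(const Matrix4f &model, const Matrix4f &view,
                const Matrix4f &projection,
                float mouse_x, float mouse_y,
                float window_width, float window_height,
                Vector3f &r, Vector3f &d)
{
    // Map window coordinates to NDC; the y axis is flipped.
    float x_ndc = 2.0f * mouse_x / window_width - 1.0f;
    float y_ndc = 1.0f - 2.0f * mouse_y / window_height;

    // One point on the near plane (z = -1) and one on the far plane (z = 1).
    Matrix4f inv = (projection * view * model).inverse();
    Vector4f p_near_h = inv * Vector4f(x_ndc, y_ndc, -1.0f, 1.0f);
    Vector4f p_far_h  = inv * Vector4f(x_ndc, y_ndc,  1.0f, 1.0f);

    // Perspective divide to Cartesian coordinates.
    Vector3f p0 = p_near_h.head<3>() / p_near_h.w();
    Vector3f p1 = p_far_h.head<3>()  / p_far_h.w();

    r = p0;
    d = (p1 - p0).normalized();
}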
I want to determine the horizontal and vertical angles, from a camera's position to a world point, with respect to the camera's forward axis.
My linear algebra is a bit rusty, but given the camera's forward, up, and right vector, for example:
camForward = [0 0 1];
camUp = [0 1 0];
camRight = [1 0 0];
And the camera position and world point, for example:
camPosition = [1 2 3];
worldPoint = [5 6 4];
The sought-after angles should be determinable by first taking the difference of the positions:
delta = worldPoint-camPosition;
Then projecting it on the camera axes using the dot products:
deltaHorizontal = dot(delta,camRight);
deltaVertical = dot(delta,camUp);
deltaDepth = dot(delta,camForward);
And finally computing angles as:
angleHorizontal = atan(deltaHorizontal/deltaDepth);
angleVertical = atan(deltaVertical/deltaDepth);
In the example case, this yields ~76° for both angles, which seems reasonable; varying the positions and axes also seems to give reasonable results.
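For reference, a standalone check of the example numbers with GLM (an illustrative snippet, not part of the engine code):

#include <cmath>
#include <cstdio>
#include <glm/glm.hpp>

int main()
{
    glm::vec3 camForward(0, 0, 1), camUp(0, 1, 0), camRight(1, 0, 0);
    glm::vec3 camPosition(1, 2, 3), worldPoint(5, 6, 4);

    glm::vec3 delta = worldPoint - camPosition;           // (4, 4, 1)
    float deltaHorizontal = glm::dot(delta, camRight);    // 4
    float deltaVertical   = glm::dot(delta, camUp);       // 4
    float deltaDepth      = glm::dot(delta, camForward);  // 1

    // atan(4/1) ~ 1.3258 rad ~ 75.96 degrees for both angles
    float angleHorizontal = std::atan(deltaHorizontal / deltaDepth);
    float angleVertical   = std::atan(deltaVertical   / deltaDepth);
    std::printf("%f %f\n", glm::degrees(angleHorizontal), glm::degrees(angleVertical));
}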
Thus, if I am not getting the angles I expect, it should be because I am using an incorrect position and/or camera axes. It is worth noting that the 3D engine uses OpenGL and GLM.
I am fairly certain that the positions are correct, as moving around in the scene and inspecting the positions in relation to known reference points gives consistent and correct results, leading me to believe that I am using the wrong camera axes. To get the angles I am using (the equivalent of):
glm::vec3 worldPoint = glm::unProject( glm::vec3(windowX, windowY, windowZ), viewMatrix, projectionMatrix, glm::vec4(0,0,windowWidth,windowHeight));
glm::vec3 delta = glm::vec3(worldPoint.x, worldPoint.y, worldPoint.z);
float horizontalDistance = glm::dot(delta, cameraData->right);
float verticalDistance = glm::dot(delta, cameraData->up);
float depthDistance = glm::dot(delta, cameraData->forward);
float horizontalAngle = glm::atan(horizontalDistance/depthDistance);
float verticalAngle = glm::atan(verticalDistance/depthDistance);
Each frame, forward, up, and right are read from a view matrix, viewMatrix, which in turn is produced by converting a quaternion, Q, which holds the camera rotation controlled by the mouse:
void updateView(CameraData * cameraData, MouseData * mouseData, MouseParameters * mouseParameters){
    float deltaX = mouseData->currentX - mouseData->lastX;
    float deltaY = mouseData->currentY - mouseData->lastY;
    mouseData->lastX = mouseData->currentX;
    mouseData->lastY = mouseData->currentY;
    float pitch = mouseParameters->sensitivityY * deltaY;
    float yaw = mouseParameters->sensitivityX * deltaX;
    glm::quat pitch_Q = glm::quat(glm::vec3(pitch, 0.0f, 0.0f));
    glm::quat yaw_Q = glm::quat(glm::vec3(0.0f, yaw, 0.0f));
    cameraData->Q = pitch_Q * cameraData->Q * yaw_Q;
    cameraData->Q = glm::normalize(cameraData->Q);
    glm::mat4 rotation = glm::toMat4(cameraData->Q);
    glm::mat4 translation = glm::mat4(1.0f);
    translation = glm::translate(translation, -(cameraData->position));
    cameraData->viewMatrix = rotation * translation;
    cameraData->forward = (cameraData->viewMatrix)[2];
    cameraData->up = (cameraData->viewMatrix)[1];
    cameraData->right = (cameraData->viewMatrix)[0];
}
However, something goes wrong, and the correct angles are seemingly only produced while looking along, or perpendicular to, the world z-axis ([0 0 1]). Where am I mistaken?
I have several pairs of points in world space; each pair has a different depth. I want to project those points onto the near plane of the view frustum, then recompute their new world position.
Note: I want to keep the perspective effect.
To do so, I convert the points' locations to NDC space. I think that each pair of points in NDC space with the same z value lies on the same plane, perpendicular to the view direction. So if I set their z value to -1, they should lie on the near plane.
Now that I have those new NDC locations, I need their world positions. I lost the w component by changing the depth, so I need to recompute it.
I found this link: unproject ndc
which said that:
wclip * (inverse(mvp) * vec4(ndc.xyz, 1.0f)).w = 1.0f
wclip = 1.0f / (inverse(mvp) * vec4(ndc.xyz, 1.0f)).w
my full code:
glm::vec4 homogeneousClipSpaceLeft = mvp * leftAnchor;
glm::vec4 homogeneousClipSpaceRight = mvp * rightAnchor;
glm::vec3 ndc_left = homogeneousClipSpaceLeft.xyz() / homogeneousClipSpaceLeft.w;
glm::vec3 ndc_right = homogeneousClipSpaceRight.xyz() / homogeneousClipSpaceRight.w;
ndc_left.z = -1.0f;
ndc_right.z = -1.0f;
float clipWLeft = (1.0f / (inverseMVP * glm::vec4(ndc_left, 1.0f)).w);
float clipWRight = (1.0f / (inverseMVP * glm::vec4(ndc_right, 1.0f)).w);
glm::vec3 worldPositionLeft = glm::vec3(clipWLeft * inverseMVP * glm::vec4(ndc_left, 1.0f));
glm::vec3 worldPositionRight = glm::vec3(clipWRight * inverseMVP * glm::vec4(ndc_right, 1.0f));
It should work in theory, but I get weird results. I start with 2 points in world space:
left world position: -116.463 15.6386 -167.327
right world position: 271.014 15.6386 -167.327
left NDC position: -0.59719 0.0790622 -1
right NDC position: 0.722784 0.0790622 -1
final left position: 31.4092 -9.22973 1251.16
final right position: 31.6823 -9.22981 1251.17
mvp
4.83644 0 0 0
0 4.51071 0 0
0 0 -1.0002 -1
-284.584 41.706 1250.66 1252.41
Am I doing something wrong?
Would you recommend this way of projecting pairs of points onto the near plane, with perspective?
If glm::vec3 ndc_left and glm::vec3 ndc_right are normalized device coordinates, then the following projects the coordinates onto the near plane in normalized device space:
ndc_left.z = -1.0f;
ndc_right.z = -1.0f;
If you want to get the model position of a point in normalized device space, in Cartesian coordinates, then you have to transform the point by the inverse model view projection matrix and divide the x, y and z components by the w component of the result. Note, the transformation by inverseMVP gives a homogeneous coordinate:
glm::vec4 wlh = inverseMVP * glm::vec4(ndc_left, 1.0f);
glm::vec4 wrh = inverseMVP * glm::vec4(ndc_right, 1.0f);
glm::vec3 worldPositionLeft = glm::vec3( wlh.x, wlh.y, wlh.z ) / wlh.w;
glm::vec3 worldPositionRight = glm::vec3( wrh.x, wrh.y, wrh.z ) / wrh.w;
Note that the OpenGL Mathematics (GLM) library provides operations for "unproject". See glm::unProject.
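For completeness, a minimal sketch of the same unprojection using glm::unProject; the matrix and window-size parameters are placeholders for your own pipeline state:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::unProject

// Unproject a window-space click onto the near plane. glm::unProject expects
// window coordinates (winZ in [0, 1]) and performs the perspective divide itself.
glm::vec3 clickOnNearPlane(float winX, float winY,
                           const glm::mat4 &modelView, const glm::mat4 &projection,
                           float windowWidth, float windowHeight)
{
    glm::vec4 viewport(0.0f, 0.0f, windowWidth, windowHeight);
    glm::vec3 win(winX, windowHeight - winY, 0.0f); // flip y; winZ = 0 -> near plane
    return glm::unProject(win, modelView, projection, viewport);
}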
I am trying to use what many people seem to find a good way: I call gluUnProject twice with different z-values and then calculate the direction vector for the ray from the two resulting points.
I read this question and tried to use the structure there for my own code:
glGetFloat(GL_MODELVIEW_MATRIX, modelBuffer);
glGetFloat(GL_PROJECTION_MATRIX, projBuffer);
glGetInteger(GL_VIEWPORT, viewBuffer);
gluUnProject(mouseX, mouseY, 0.0f, modelBuffer, projBuffer, viewBuffer, startBuffer);
gluUnProject(mouseX, mouseY, 1.0f, modelBuffer, projBuffer, viewBuffer, endBuffer);
start = vecmath.vector(startBuffer.get(0), startBuffer.get(1), startBuffer.get(2));
end = vecmath.vector(endBuffer.get(0), endBuffer.get(1), endBuffer.get(2));
direction = vecmath.vector(end.x()-start.x(), end.y()-start.y(), end.z()-start.z());
But this only returns the Homogeneous Clip Coordinates (I believe), since they only range from -1 to 1 on every axis.
How to actually get coordinates from which I can create a ray?
EDIT: This is how I construct the matrices:
Matrix projectionMatrix = vecmath.perspectiveMatrix(60f, aspect, 0.1f, 100f);
//The matrix of the camera = viewMatrix
setTransformation(vecmath.lookatMatrix(eye, center, up));
//And every object sets a ModelMatrix in it's display method
Matrix modelMatrix = parentMatrix.mult(vecmath.translationMatrix(translation));
modelMatrix = modelMatrix.mult(vecmath.rotationMatrix(1, 0, 1, angle));
EDIT 2:
This is how the function looks right now:
private void calcMouseInWorldPosition(float mouseX, float mouseY, Matrix proj, Matrix view) {
    Vector start = vecmath.vector(0, 0, 0);
    Vector end = vecmath.vector(0, 0, 0);
    FloatBuffer modelBuffer = BufferUtils.createFloatBuffer(16);
    modelBuffer.put(view.asArray());
    modelBuffer.rewind();
    FloatBuffer projBuffer = BufferUtils.createFloatBuffer(16);
    projBuffer.put(proj.asArray());
    projBuffer.rewind();
    FloatBuffer startBuffer = BufferUtils.createFloatBuffer(16);
    FloatBuffer endBuffer = BufferUtils.createFloatBuffer(16);
    IntBuffer viewBuffer = BufferUtils.createIntBuffer(16);
    //The two calls for the projection and modelView matrix are disabled here,
    //as I use my own matrices in this case
    // glGetFloat(GL_MODELVIEW_MATRIX, modelBuffer);
    // glGetFloat(GL_PROJECTION_MATRIX, projBuffer);
    glGetInteger(GL_VIEWPORT, viewBuffer);
    //I know this is really ugly and bad, but I know that the height and width are always 600,
    //and this is just for testing purposes
    mouseY = 600 - mouseY;
    gluUnProject(mouseX, mouseY, 0.0f, modelBuffer, projBuffer, viewBuffer, startBuffer);
    gluUnProject(mouseX, mouseY, 1.0f, modelBuffer, projBuffer, viewBuffer, endBuffer);
    start = vecmath.vector(startBuffer.get(0), startBuffer.get(1), startBuffer.get(2));
    end = vecmath.vector(endBuffer.get(0), endBuffer.get(1), endBuffer.get(2));
    direction = vecmath.vector(end.x()-start.x(), end.y()-start.y(), end.z()-start.z());
}
I'm trying to use my own projection and view matrix, but this only seems to give weirder results.
With the glGet... stuff I get this for a click in the upper right corner:
start: (0.97333336, -0.98, -1.0)
end: (0.97333336, -0.98, 1.0)
When I use my own stuff I get this for the same position:
start: (-2.4399707, -0.55425626, -14.202201)
end: (-2.4399707, -0.55425626, -16.198204)
Now I actually need a modelView matrix instead of just the view matrix, but I don't know how I am supposed to get it, since it is altered and created anew in every display call of every object.
But is this really the problem? In this tutorial he says "Normally, to get into clip space from eye space we multiply the vector by a projection matrix. We can go backwards by multiplying by the inverse of this matrix." and in the next step he multiplies again by the inverse of the view matrix, so I thought this is what I should actually do?
EDIT 3:
Here I tried what user42813 suggested:
Matrix view = cam.getTransformation();
view = view.invertRigid();
mouseY = height - mouseY - 1;
//Here I use only these values, because the Z and W values would be 0
//following your suggestion, so there is no use adding them here
float tempX = view.get(0, 0) * mouseX + view.get(1, 0) * mouseY;
float tempY = view.get(0, 1) * mouseX + view.get(1, 1) * mouseY;
float tempZ = view.get(0, 2) * mouseX + view.get(1, 2) * mouseY;
origin = vecmath.vector(tempX, tempY, tempZ);
direction = cam.getDirection();
But now the direction and origin values are always the same:
origin: (-0.04557252, -0.0020000197, -0.9989586)
direction: (-0.04557252, -0.0020000197, -0.9989586)
OK, I finally managed to work this out; maybe this will help someone.
I found a formula for this and used it with the coordinates I was getting, which ranged from -1 to 1:
float tempX = (float) (start.x() * 0.1f * Math.tan(Math.PI * 60f / 360));
float tempY = (float) (start.y() * 0.1f * Math.tan(Math.PI * 60f / 360) * height / width);
float tempZ = -0.1f;
//create a new vector with these x, y, z
direction = vecmath.vector(tempX, tempY, tempZ);
//multiply this new vector with the INVERSED viewMatrix
direction = view.transformDirection(direction);
//set the origin to the position values of the matrix (the right column)
origin = view.getPosition();
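The same construction as a hedged C++/GLM sketch, using the standard vertical-FOV convention (which may differ slightly from the aspect scaling above); the parameter names are placeholders, and cameraMatrix is the inverse of the view matrix:

#include <cmath>
#include <glm/glm.hpp>

// Build a picking ray from NDC x/y in [-1, 1]: place the ray direction on the
// near plane in eye space, then rotate it into world space with the camera
// (inverse view) matrix. The ray origin is the camera position.
glm::vec3 rayDirectionFromNdc(float ndcX, float ndcY,
                              const glm::mat4 &cameraMatrix,
                              float fovYDegrees, float width, float height,
                              float zNear)
{
    float halfH = zNear * std::tan(glm::radians(fovYDegrees) * 0.5f);
    float halfW = halfH * width / height;

    // Direction in eye space; the camera looks down -z. w = 0 marks a direction,
    // so the camera translation does not affect it.
    glm::vec4 dirEye(ndcX * halfW, ndcY * halfH, -zNear, 0.0f);
    return glm::normalize(glm::vec3(cameraMatrix * dirEye));
}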
I don't really use deprecated OpenGL, but I would share my thoughts.
First, it would be helpful if you showed us how you build your view matrix.
Second, the view matrix you have is in the local space of the camera. Typically you would multiply your mouseX and (screenHeight - mouseY - 1) by the view matrix (I think by the inverse of that matrix, sorry, not sure!); then you will have the mouse coordinates in camera space. Then you add the forward vector to the vector created by the mouse, and you have it. It would look something like this:
float mouseCoord[] = { mouseX, screen_height - mouseY - 1, 0, 0 }; /* 0, 0 because we are multiplying by a 4x4 matrix */
mouseCoord = multiply( ViewMatrix /*Or: inverse(ViewMatrix)*/, mouseCoord );
float ray[] = add( mouseCoord, forwardVector );
I am trying to rotate a group of vectors I sampled to the normal of a triangle.
If this were correct, the randomly sampled hemisphere would line up with the triangle.
Currently I generate the samples around the z-axis and am attempting to rotate them all to the normal of the triangle,
but the result seems to be "just off":
glm::quat getQuat(glm::vec3 v1, glm::vec3 v2)
{
    glm::quat myQuat;
    float dot = glm::dot(v1, v2);
    if (dot != 1)
    {
        glm::vec3 aa = glm::normalize(glm::cross(v1, v2));
        float w = sqrt(glm::length(v1)*glm::length(v1) * glm::length(v2)*glm::length(v2)) + dot;
        myQuat.x = aa.x;
        myQuat.y = aa.y;
        myQuat.z = aa.z;
        myQuat.w = w;
    }
    return myQuat;
}
Which I pulled from the bottom of this page : http://lolengine.net/blog/2013/09/18/beautiful-maths-quaternion-from-vectors
Then I:
glm::vec3 zaxis = glm::normalize( glm::vec3(0, 0, 1) ); // hardcoded, but tests the original axis
glm::vec3 n1 = glm::normalize( glm::cross((p2 - p1), (p3 - p1)) ); //normal
glm::quat myQuat = glm::normalize(getQuat(zaxis, n1));
glm::mat4 rotmat = glm::toMat4(myQuat); //make a rotation matrix
glm::vec4 n3 = rotmat * glm::vec4(n2,1); // current vector I am trying to rotate
Construct a 4x4 transform matrix instead of quaternions.
Do not forget that OpenGL matrices are column-major, so for double m[16]:
the X axis vector is in m[ 0], m[ 1], m[ 2]
the Y axis vector is in m[ 4], m[ 5], m[ 6]
the Z axis vector is in m[ 8], m[ 9], m[10]
and the position is in m[12], m[13], m[14]
LCS means local coordinate system (your triangle, object or whatever) and GCS means global coordinate system (world or whatever).
All the X, Y, Z vectors should be normalized to unit vectors, otherwise scaling will occur.
construction
set the Z axis vector to your triangle normal
set the position (LCS origin) to the midpoint of your triangle (or the average of its vertexes)
now you just need the X and Y axes, which is easy
let X = any triangle vertex - triangle midpoint
or X = the subtraction of any 2 vertexes of the triangle
The only condition that must be met for X is that it must lie in the triangle plane.
Now let Y = X x Z; the cross product will create a vector perpendicular to X and Z (which also lies in the triangle plane).
Now put all this inside a matrix and load it into OpenGL as the ModelView matrix or whatever; see the sketch below.
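A minimal sketch of that construction with GLM (the triangle vertices a, b, c and the particular choice of in-plane X are assumptions for illustration):

#include <glm/glm.hpp>

// Build the local coordinate system of a triangle (a, b, c) as a column-major
// 4x4 matrix: X in the triangle plane, Z along the normal, origin at the midpoint.
glm::mat4 triangleBasis(const glm::vec3 &a, const glm::vec3 &b, const glm::vec3 &c)
{
    glm::vec3 z = glm::normalize(glm::cross(b - a, c - a)); // triangle normal
    glm::vec3 x = glm::normalize(b - a);                    // any in-plane direction
    glm::vec3 y = glm::cross(x, z);                         // Y = X x Z, also in-plane
    glm::vec3 o = (a + b + c) / 3.0f;                       // LCS origin: midpoint

    glm::mat4 m(1.0f);
    m[0] = glm::vec4(x, 0.0f); // m[ 0.. 2]: X axis
    m[1] = glm::vec4(y, 0.0f); // m[ 4.. 6]: Y axis
    m[2] = glm::vec4(z, 0.0f); // m[ 8..10]: Z axis
    m[3] = glm::vec4(o, 1.0f); // m[12..14]: position
    return m;
}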
I have a program in which I am tracking the user's position and setting the frustum (setting the camera at the user's position) to change the perspective of the scene as per the user's position. Until now, I had all four corners of the display screen at the same z, and I was able to set the asymmetric frustum and change the scene according to the user's perspective.
The current code looks something like the following:
void UserCam::begin(){
    saveGlobalMatrices();
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(_topLeftNear.x, _bottomRightNear.x, _bottomRightNear.y, _topLeftNear.y, _camZNear, _camZFar);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(_wcUserHead.x, _wcUserHead.y, _topLeftScreen.z, _wcUserHead.x, _wcUserHead.y, _topLeftScreen.z-1, 0, 1, 0);
}

void UserCam::end(){
    loadGlobalMatrices();
}

void UserCam::setupCam(){
    this->_topLeftScreen = _wcTopLeftScreen - _wcUserHead; //_wcTopLeftScreen, _wcBottomRightScreen and _wcUserHead are in the same frame of reference
    this->_bottomRightScreen = _wcBottomRightScreen - _wcUserHead;
    this->_topLeftNear = (_topLeftScreen / _topLeftScreen.z) * _camZNear;
    this->_bottomRightNear = (_bottomRightScreen / _bottomRightScreen.z) * _camZNear;
}
However, I want to be able to do the same with a display which is kept tilted to the user and/or does not have all its vertices at the same Z.
The above can be imagined as a sort of tilted window, the vertices of which would have the frustum defined from the user's position. How is such a frustum possible where the display does not have all the vertices at the same Z?
EDIT
There are three planes in the setup that I am considering. The middle one gives the correct asymmetric frustum, since all its vertices are at the same Z, whereas the left and right planes each have two vertices at a different Z. The vertices are as follows:
Plane1: TL : (-426.66, 0, 200), TR: (0, 0, 0), BL : (-426.66, 320.79, 200), BR : (0, 320.79, 0)
Plane2: TL : (0, 0, 0), TR: (426.66, 0, 0), BL : (0, 320.79, 0), BR: (426.66, 320.79, 0)
Plane3: TL: (426.66, 0, 0), TR: (853.32, 0, 200), BL : (426.66, 320.79, 0), BR : (853.32, 320.79, 200)
The idea in this setup is to transform it to a case where all corners have the same z-coordinate. Usually this is done with a view matrix and you get:
overall_transform = (projection) * (view * world)
or in the OpenGL wording
overall_transform = projection * modelview
If you don't want to tamper with the original modelview matrix, you should introduce another matrix in between:
overall_transform = (projection * adaption) * (view * world)
where adaption is a rotation matrix that maps the screen's corners to a plane with constant z-coordinate.
In order to find the correct parameters for projection you have to transform the screen with adaption.
Edit
We start with an arbitrary scene where the camera's position, direction and the screen is known. We consider that the model matrices are already there for each object:
We then need the view transformation V that aligns the camera with the origin. This matrix can easily be calculated with gluLookAt. The overall matrix is then T = V * M:
Up to this step the matrices are the same for all three screens. So this part should be in the modelview matrix. What we add now goes into the projection matrix because it differs per screen.
We need to apply a rotation R that aligns the screen perpendicular to the z-axis. The position of the camera must not change at this step because it represents the projection center. The overall transformation is now T = R * V * M.
In order to calculate the angle, we can use e.g. atan2:
dx = right.x - left.x
dz = right.z - left.z
angle = atan2(dz, dx)
It might be necessary to adapt this calculation slightly to your actual needs.
Now is the time to apply the actual perspective transform, which can be done with glFrustum.
We need to find the local edges of the screen. You could transform the screen coordinates with the current transform (R * V).
TL' = R * V * TL
BL' = R * V * BL
BR' = R * V * BR
Now all three coordinates should have the same z-coordinate. We can use these as follows:
common_z = TL'.z = BL'.z = BR'.z
glFrustum(TL'.x / common_z * z_near,
BR'.x / common_z * z_near,
BL'.y / common_z * z_near,
TL'.y / common_z * z_near,
z_near, z_far)
So overall T = glFrustum * R * V * M:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(...);
//any further model transforms
glMatrixMode(GL_PROJECTION);
glFrustum(...);
glRotate(...);
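Putting these steps together, a hedged sketch for one screen (the corner parameters are assumed to be the screen's corners already transformed by the view matrix V; as noted above, the sign of the rotation may need to be adapted to your setup):

#include <cmath>
#include <GL/gl.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// For one tilted screen: rotate its (view-space) corners so they share a z,
// then derive the glFrustum extents from the rotated corners.
void setupScreenProjection(glm::vec3 TL, glm::vec3 BL, glm::vec3 BR,
                           float z_near, float z_far)
{
    // Angle of the screen around the y-axis, from its bottom edge.
    float angle = std::atan2(BR.z - BL.z, BR.x - BL.x);

    // R aligns the screen perpendicular to the z-axis; the camera (the
    // projection center, at the origin) is unaffected by a pure rotation.
    glm::mat4 R = glm::rotate(glm::mat4(1.0f), angle, glm::vec3(0, 1, 0));
    glm::vec3 tl = glm::vec3(R * glm::vec4(TL, 1.0f));
    glm::vec3 bl = glm::vec3(R * glm::vec4(BL, 1.0f));
    glm::vec3 br = glm::vec3(R * glm::vec4(BR, 1.0f));

    float common_z = tl.z; // tl.z == bl.z == br.z up to rounding

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(tl.x / common_z * z_near, br.x / common_z * z_near,
              bl.y / common_z * z_near, tl.y / common_z * z_near,
              z_near, z_far);
    glRotatef(glm::degrees(angle), 0.0f, 1.0f, 0.0f); // post-multiply: P = F * R
}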