How to use OpenCV triangulatePoints - C++

I'm struggling to get the OpenCV triangulatePoints function to work. I'm using the function with point matches generated from optical flow. I'm using two consecutive frames/positions from a single moving camera.
Currently these are my steps:
The intrinsics are given and look like one would expect:
2.6551e+003  0.           1.0379e+003
0.           2.6608e+003  5.5033e+002
0.           0.           1.
I then compute the two extrinsic matrices ([R|t]) based on (highly accurate) GPS and camera position relative to the GPS. Note that the GPS data uses a cartesian coordinate system around The Netherlands which uses meters as units (so no weird lat/lng math is required). This yields the following matrices:
Next, I simply remove the bottom row of these matrices and multiply them with the intrinsic matrices to get the projection matrices:
projectionMat = intrinsics * extrinsics;
This results in:
My image points simply consist of all the pixel coordinates for the first set,
(0, 0)...(1080, 1920)
and all pixel coordinates + their computed optical flow for the second set.
(0 + flowY0, 0 + flowX0)...(1080 + flowYN, 1920 + flowXN)
To compute the 3D points, I feed the image points (in the format OpenCV expects) and projection matrices to the triangulatePoints function:
cv::triangulatePoints(projectionMat1, projectionMat2, imagePoints1, imagePoints2, outputPoints);
Finally, I convert the outputPoints from homogeneous coordinates by dividing them by their fourth coordinate (w) and removing this coordinate.
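For reference, this is roughly what that pipeline looks like in code. It's a minimal sketch, assuming the intrinsics K and the two 3x4 extrinsics Rt1/Rt2 are available as CV_64F cv::Mat; those names, and pts1/pts2, are placeholders rather than your actual variables:
#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>

// Sketch: triangulate matched points from two projection matrices.
// pts1/pts2 must be in (x, y) pixel order.
void triangulate(const cv::Mat & K, const cv::Mat & Rt1, const cv::Mat & Rt2,
                 const std::vector<cv::Point2f> & pts1,
                 const std::vector<cv::Point2f> & pts2,
                 cv::Mat & points3D)
{
    cv::Mat P1 = K * Rt1;   // 3x4 projection matrix, frame 1
    cv::Mat P2 = K * Rt2;   // 3x4 projection matrix, frame 2

    cv::Mat points4D;                                     // 4xN homogeneous output
    cv::triangulatePoints(P1, P2, pts1, pts2, points4D);

    // Divide by w and drop it (the manual conversion described above).
    cv::convertPointsFromHomogeneous(points4D.t(), points3D);   // Nx3 result
}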
What I end up with is some weird cone-shaped point cloud:
Now I've tried every combination of tweaks I could think of (inverting matrices, changing X/Y/Z order, inverting X/Y/Z axes, changing multiplication order...), but everything yields similarly strange results. The one thing that did give me better results was simply multiplying the optical flow values by 0.01. This results in the following point cloud:
This is still not perfect (areas far away from the camera look really curved), but much more like I would expect.
I'm wondering if anybody can spot something I'm doing wrong. Do my matrices look ok? Is the output I'm getting related to a certain problem?
What I'm quite certain of, is that it's not related to the GPS or optical flow, since I've tested multiple frames and they all yield the same type of output. I really think it has to do with the triangulation itself.

triangulatePoints() is for a stereo camera, not for a monocular camera!
In the OpenCV documentation, I read the following:
The function reconstructs 3-dimensional points (in homogeneous coordinates) by using their observations with a stereo camera. Projections matrices can be obtained from stereoRectify()
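For completeness, a rough sketch of how those projection matrices come out of a calibrated stereo pair (all of the inputs below are assumed to be known from a stereo calibration; the names are placeholders):
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>

// K1/K2, dist1/dist2: per-camera intrinsics and distortion coefficients;
// R/T: pose of camera 2 relative to camera 1; imageSize: calibration resolution.
void rectified_projections(const cv::Mat & K1, const cv::Mat & dist1,
                           const cv::Mat & K2, const cv::Mat & dist2,
                           cv::Size imageSize,
                           const cv::Mat & R, const cv::Mat & T,
                           cv::Mat & P1, cv::Mat & P2)
{
    cv::Mat R1, R2, Q;
    cv::stereoRectify(K1, dist1, K2, dist2, imageSize, R, T, R1, R2, P1, P2, Q);
    // P1 and P2 are 3x4 projection matrices in the rectified frame and can be
    // passed directly to cv::triangulatePoints.
}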

In my case I had to fix the convention used in the rotation matrices in order to calculate the projection matrix. Please make sure you are using this convention for both cameras:
rotationMatrix0 = rotation_by_Y_Matrix_Camera_Calibration(camera_roll)
                * rotation_by_X_Matrix_Camera_Calibration(camera_pitch)
                * rotation_by_Z_Matrix_Camera_Calibration(camera_yaw);
Mat3x3 Algebra::rotation_by_Y_Matrix_Camera_Calibration(double yaw)
{
    Mat3x3 matrix;
    matrix[2][2] = 1.0f;
    double sinA = sin(yaw), cosA = cos(yaw);
    matrix[0][0] = +cosA; matrix[0][1] = -sinA;
    matrix[1][0] = +sinA; matrix[1][1] = +cosA;
    return matrix;
}
Mat3x3 Algebra::rotation_by_X_Matrix_Camera_Calibration(double pitch)
{
    Mat3x3 matrix;
    matrix[1][1] = 1.0f;
    double sinA = sin(pitch), cosA = cos(pitch);
    matrix[0][0] = +cosA; matrix[0][2] = +sinA;
    matrix[2][0] = -sinA; matrix[2][2] = +cosA;
    return matrix;
}
Mat3x3 Algebra::rotation_by_Z_Matrix_Camera_Calibration(double roll)
{
    Mat3x3 matrix;
    matrix[0][0] = 1.0f;
    double sinA = sin(roll), cosA = cos(roll);
    matrix[1][1] = +cosA; matrix[1][2] = -sinA;
    matrix[2][1] = +sinA; matrix[2][2] = +cosA;
    return matrix;
}


Irrlicht: draw 2D image in 3D space based on four corner coordinates

I would like to create a function to position a free-floating 2D raster image in space with the Irrlicht engine. The inspiration for this is the function rgl::show2d in the R package rgl. An example implementation in R can be found here.
The input data should be limited to the path to the image and a table with the four corner coordinates of the respective plot rectangle.
My first, pretty primitive and ultimately unsuccessful approach to realize this with Irrlicht:
Create a cube:
ISceneNode * picturenode = scenemgr->addCubeSceneNode();
Flatten one side:
picturenode->setScale(vector3df(1, 0.001, 1));
Add image as texture:
picturenode->setMaterialTexture(0, driver->getTexture("path/to/image.png"));
Place flattened cube at the center position of the four corner coordinates. I just calculate the mean coordinates on all three axes with a small function position_calc().
vector3df position = position_calc(rcdf); picturenode->setPosition(position);
Determine the object rotation by calculating the normal of the plane defined by the four corner coordinates, normalizing the result and trying to somehow translate the resulting vector to rotation angles.
vector3df normal = normal_calc(rcdf);
vector3df angles = (normal.normalize()).getSphericalCoordinateAngles();
picturenode->setRotation(angles);
This solution doesn't produce the expected result. The rotation calculation is wrong. With this approach I'm also not able to scale the image correctly to its corner coordinates.
How can I fix my workflow? Or is there a much better way to achieve this with Irrlicht that I'm not aware of?
Edit: Thanks to #spug I believe I'm almost there. I tried to implement his method 2, because quaternions are already available in Irrlicht. Here's what I came up with to calculate the rotation:
#include <Rcpp.h>
#include <irrlicht.h>
#include <math.h>
using namespace Rcpp;

core::vector3df rotation_calc(DataFrame rcdf) {
    NumericVector x = rcdf["x"];
    NumericVector y = rcdf["y"];
    NumericVector z = rcdf["z"];
    // Z-axis
    core::vector3df zaxis(0, 0, 1);
    // resulting image's normal
    core::vector3df normal = normal_calc(rcdf);
    // calculate the rotation from the original image's normal (i.e. the Z-axis)
    // to the resulting image's normal => quaternion P.
    core::quaternion p;
    p.rotationFromTo(zaxis, normal);
    // take the midpoint of AB from the diagram in method 1, and rotate it with
    // the quaternion P => vector U.
    core::vector3df MAB(0, 0.5, 0);
    core::quaternion m(MAB.X, MAB.Y, MAB.Z, 0);
    core::quaternion rot = p * m * p.makeInverse();
    core::vector3df u(rot.X, rot.Y, rot.Z);
    // calculate the rotation from U to the midpoint of DE => quaternion Q
    core::vector3df MDE(
        (x(0) + x(1)) / 2,
        (y(0) + y(1)) / 2,
        (z(0) + z(1)) / 2
    );
    core::quaternion q;
    q.rotationFromTo(u, MDE);
    // multiply in the order Q * P, and convert to Euler angles
    core::quaternion f = q * p;
    core::vector3df euler;
    f.toEuler(euler);
    // to degrees
    core::vector3df degrees(
        euler.X * (180.0 / M_PI),
        euler.Y * (180.0 / M_PI),
        euler.Z * (180.0 / M_PI)
    );
    Rcout << "degrees: " << degrees.X << ", " << degrees.Y << ", " << degrees.Z << std::endl;
    return degrees;
}
The result is almost correct, but the rotation on one axis is wrong. Is there a way to fix this or is my implementation inherently flawed?
That's what the result looks like now. The points mark the expected corner points.
I've thought of two ways to do this; neither are very graceful - not helped by Irrlicht restricting us to spherical polars.
NB. the below assumes rcdf is centered at the origin; this is to make the rotation calculation a bit more straightforward. Easy to fix though:
Compute the center point (the translational offset) of rcdf
Subtract this from all the points of rcdf
Perform the procedures below
Add the offset back to the result points.
Pre-requisite: scaling
This is easy; simply calculate the ratios of width and height in your rcdf to your original image, then call setScaling.
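A small sketch of that step, assuming the four rcdf corners come in order around the rectangle (c0-c1 and c1-c2 being adjacent edges) and that the cube was created with Irrlicht's default size of 10 - both assumptions on my part, not part of the original code. The scaling call itself is ISceneNode::setScale, as used in the question:
#include <irrlicht.h>

// Hypothetical helper: scale the flattened cube so its edges match the rcdf rectangle.
void scale_to_corners(irr::scene::ISceneNode * picturenode,
                      const irr::core::vector3df & c0,
                      const irr::core::vector3df & c1,
                      const irr::core::vector3df & c2)
{
    const float cubeSize = 10.0f;               // default size of addCubeSceneNode()
    float width  = c0.getDistanceFrom(c1);      // length of one edge of the rectangle
    float height = c1.getDistanceFrom(c2);      // length of the adjacent edge
    picturenode->setScale(irr::core::vector3df(width / cubeSize, 0.001f, height / cubeSize));
}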
Method 1: matrix inversion
For this we need an external library which supports 3x3 matrices, since Irrlicht only has 4x4 (I believe).
We need to solve the matrix equation which rotates the image from X-Y to rcdf. For this we need 3 points in each frame of reference. Two of these we can immediately set to adjacent corners of the image; the third must point out of the plane of the image (since we need data in all three dimensions to form a complete basis) - so to calculate it, simply multiply the normal of each image by some offset constant (say 1).
(Note the points on the original image have been scaled)
The equation to solve is therefore:
(Using column notation). The Eigen library offers an implementation for 3x3 matrices and inverse.
Then convert this matrix to spherical polar angles: https://www.learnopencv.com/rotation-matrix-to-euler-angles/
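A possible sketch of that computation with Eigen, under the assumptions of the diagram: a and b are two adjacent corners of the scaled, origin-centered image, d and e the corresponding corners of rcdf, and nImage/nRcdf the two plane normals scaled by the offset constant. All of these names are placeholders:
#include <Eigen/Dense>

// Solve dst = R * src for the transform that maps the image basis to the rcdf basis,
// using column notation as described above.
Eigen::Matrix3d rotation_from_correspondences(const Eigen::Vector3d & a, const Eigen::Vector3d & b,
                                              const Eigen::Vector3d & nImage,
                                              const Eigen::Vector3d & d, const Eigen::Vector3d & e,
                                              const Eigen::Vector3d & nRcdf)
{
    Eigen::Matrix3d src, dst;
    src.col(0) = a;  src.col(1) = b;  src.col(2) = nImage;   // basis from the original image
    dst.col(0) = d;  dst.col(1) = e;  dst.col(2) = nRcdf;    // basis from rcdf
    return dst * src.inverse();                              // R = dst * src^-1
}
The result can then be converted to Euler angles as in the link above.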
Method 2:
To calculate the quaternion to rotate from direction vector A to B: Finding quaternion representing the rotation from one vector to another
Calculate the rotation from the original image's normal (i.e. the Z-axis) to rcdf's normal => quaternion P.
Take the midpoint of AB from the diagram in method 1, and rotate it with the quaternion P (http://www.geeks3d.com/20141201/how-to-rotate-a-vertex-by-a-quaternion-in-glsl/) => vector U.
Calculate the rotation from U to the midpoint of DE => quaternion Q
Multiply in the order Q * P, and convert to Euler angles: https://en.wikipedia.org/wiki/Conversion_between_quaternions_and_Euler_angles
(Not sure if Irrlicht has support for quaternions)

An inconsistency in my understanding of the GLM lookAt function

Firstly, if you would like an explanation of the GLM lookAt algorithm, please look at the answer provided on this question: https://stackoverflow.com/a/19740748/1525061
mat4x4 lookAt(vec3 const & eye, vec3 const & center, vec3 const & up)
{
    vec3 f = normalize(center - eye);
    vec3 u = normalize(up);
    vec3 s = normalize(cross(f, u));
    u = cross(s, f);

    mat4x4 Result(1);
    Result[0][0] = s.x;
    Result[1][0] = s.y;
    Result[2][0] = s.z;
    Result[0][1] = u.x;
    Result[1][1] = u.y;
    Result[2][1] = u.z;
    Result[0][2] =-f.x;
    Result[1][2] =-f.y;
    Result[2][2] =-f.z;
    Result[3][0] =-dot(s, eye);
    Result[3][1] =-dot(u, eye);
    Result[3][2] = dot(f, eye);
    return Result;
}
Now I'm going to tell you why I seem to be having a conceptual issue with this algorithm. There are two parts to this view matrix, the translation and the rotation. The translation does the correct inverse transformation, bringing the camera position to the origin, instead of the origin position to the camera. Similarly, you expect the rotation that the camera defines to be inverted before being put into this view matrix as well. I can't see that happening here; that's my issue.
Consider the forward vector; this is where your camera looks. Consequently, this forward vector needs to be mapped to the -Z axis, which is the forward direction used by OpenGL. The way this view matrix is supposed to work is by creating an orthonormal basis in the columns of the view matrix, so when you multiply a vertex on the right hand side of this matrix, you are essentially just converting its coordinates to those of different axes.
When I play the rotation that occurs as a result of this transformation in my mind, I see a rotation that is not the inverse rotation of the camera, like what's supposed to happen; rather, I see the non-inverse. That is, instead of finding the camera forward being mapped to the -Z axis, I find the -Z axis being mapped to the camera forward.
If you don't understand what I mean, consider a 2D example of the same type of thing that is happening here. Let's say the forward vector is (sqrt(2)/2, sqrt(2)/2), or sin/cos of 45 degrees, and let's also say a side vector for this 2D camera is sin/cos of -45 degrees. We want to map this forward vector to (0,1), the positive Y axis. The positive Y axis can be thought of as the analogy to the -Z axis in OpenGL space. Let's consider a vertex in the same direction as our forward vector, namely (1,1). By using the logic of GLM.lookAt, we should be able to map (1,1) to the Y axis by using a 2x2 matrix that consists of the forward vector in the first column and the side vector in the second column. This is equivalent to this calculation: http://www.wolframalpha.com/input/?i=%28sqr%282%29%2F2+%2C+sqr%282%29%2F2%29++1+%2B+%28sqr%282%29%2F2%2C+-sqr%282%29%2F2+%29+1.
Note that you don't get your (1,1) vertex mapped to the positive Y axis like you wanted; instead you have it mapped to the positive X axis. You might also consider what happened to a vertex that was on the positive Y axis if you applied this transformation. Sure enough, it is transformed to the forward vector.
Therefore it seems like something very fishy is going on with the GLM algorithm. However, I doubt this algorithm is incorrect since it is so popular. What am I missing?
Have a look at GLU source code in Mesa: http://cgit.freedesktop.org/mesa/glu/tree/src/libutil/project.c
First in the implementation of gluPerspective, notice the -1 is using the indices [2][3] and the -2 * zNear * zFar / (zFar - zNear) is using [3][2]. This implies that the indexing is [column][row].
Now in the implementation of gluLookAt, the first row is set to side, the next one to up and the final one to -forward. This gives you the rotation matrix which is post-multiplied by the translation that brings the eye to the origin.
GLM seems to be using the same [column][row] indexing (from the code). And the piece you just posted for lookAt is consistent with the more standard gluLookAt (including the translational part). So at least GLM and GLU agree.
Let's then derive the full construction step by step, writing C for the center position and E for the eye position.
1. Move the whole scene to put the eye position at the origin, i.e. apply a translation of -E.
2. Rotate the scene to align the axes of the camera with the standard (x, y, z) axes.
2.1 Compute a positive orthonormal basis for the camera:
f = normalize(C - E) (pointing towards the center)
s = normalize(f x u) (pointing to the right side of the eye)
u = s x f (pointing up)
with this, (s, u, -f) is a positive orthonormal basis for the camera.
2.2 Find the rotation matrix R that maps the (s, u, -f) axes to the standard ones (x, y, z). The inverse rotation matrix R^-1 does the opposite and aligns the standard axes to the camera ones, which by definition means that:
       ( sx  ux  -fx )
R^-1 = ( sy  uy  -fy )
       ( sz  uz  -fz )
Since R^-1 = R^T, we have:
    (  sx   sy   sz )
R = (  ux   uy   uz )
    ( -fx  -fy  -fz )
3. Combine the translation with the rotation. A point M is mapped by the "look at" transform to R (M - E) = R M - R E = R M + t (with t = -R E). So the final 4x4 transform matrix for "look at" is indeed:
    ( sx  sy  sz  tx )   ( sx  sy  sz  -s.E )
L = ( ux  uy  uz  ty ) = ( ux  uy  uz  -u.E )
    (-fx -fy -fz  tz )   (-fx -fy -fz   f.E )
    (  0   0   0   1 )   (  0   0   0    1  )
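A quick way to check this numerically is to push the camera's forward direction through a GLM lookAt matrix and verify that it lands on -z. A small sketch; the eye/center/up values are arbitrary:
#include <cstdio>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>   // glm::lookAt

int main()
{
    glm::vec3 eye(1.0f, 2.0f, 3.0f), center(4.0f, 0.0f, -2.0f), up(0.0f, 1.0f, 0.0f);
    glm::mat4 V = glm::lookAt(eye, center, up);

    // w = 0, so only the rotational part of V applies.
    glm::vec3 f = glm::normalize(center - eye);
    glm::vec4 mapped = V * glm::vec4(f, 0.0f);

    // Prints approximately 0 0 -1: the camera's forward axis is mapped to -z.
    std::printf("%f %f %f\n", mapped.x, mapped.y, mapped.z);
    return 0;
}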
So when you write:
That is, instead of finding the camera forward being mapped to the -Z
axis, I find the -Z axis being mapped to the camera forward.
it is very surprising, because by construction, the "look at" transform maps the camera forward axis to the -z axis. This "look at" transform should be thought of as moving the whole scene to align the camera with the standard origin/axes; that's really what it does.
Using your 2D example:
By using the logic of GLM.lookAt, we should be able to map (1,1) to the Y
axis by using a 2x2 matrix that consists of the forward vector in the
first column and the side vector in the second column.
That's the opposite: following the construction I described, you need a 2x2 matrix with the forward and side vectors as rows, not columns, to map (1, 1) and the other vector to the y and x axes. To use the definition of the matrix coefficients, you need the images of the standard basis vectors under your transform. This directly gives the columns of the matrix. But since what you are looking for is the opposite (mapping your vectors to the standard basis vectors), you have to invert the transformation (transpose, since it's a rotation). And your reference vectors then become rows and not columns.
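To make that concrete with the 2D numbers from the question: with f = (sqrt(2)/2, sqrt(2)/2) and s = (sqrt(2)/2, -sqrt(2)/2), build the matrix with s as the first row and f as the second row:
M = ( sqrt(2)/2  -sqrt(2)/2 )
    ( sqrt(2)/2   sqrt(2)/2 )
Then M * (1, 1)^T = (sqrt(2)/2 - sqrt(2)/2, sqrt(2)/2 + sqrt(2)/2)^T = (0, sqrt(2))^T, which lies on the positive Y axis, and likewise M maps s to (1, 0) and f to (0, 1) - exactly the behaviour described above.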
These might give some further insight into your fishy issue:
glm::lookAt vertical camera flips when z <= 0
The answer might be of interest to you?

OpenCV Computing Camera Position && Rotation

For a project I need to compute the real-world position and orientation of a camera with respect to a known object.
I have a set of photos, each displays a chessboard from different points of view.
Using CalibrateCamera and solvePnP I am able to reproject points in 2D, to get an AR thing.
So my situation is as such:
Intrinsic parameters are known
Distortion coefficients are known
Translation vector and rotation vector are known per photo.
I simply cannot figure out how to compute the position of the camera. My guess was:
invert translation vector (= t')
transform rotation vector to degrees (seems to be radians) and invert
use Rodrigues on rotation vector
compute RotationMatrix * t'
But the results are somehow totally off...
Basically I want to compute a ray for each pixel in world coordinates.
If more information on my problem is needed, I'd be glad to answer quickly.
I don't get it... somehow the rays are still off. This is my code, btw:
Mat image1CamPos = tvecs[0].clone(); //From calibrateCamera
Mat rot = rvecs[0].clone();          //From calibrateCamera
Rodrigues(rot, rot);
rot = rot.t();

//Position of Camera
Mat pos = rot * image1CamPos;

//Ray-Normal (( (double)mk[i][k].x) are known image-points)
float x = (( (double)mk[i][0].x) / fx) - (cx / fx);
float y = (( (double)mk[i][0].y) / fy) - (cy / fy);
float z = 1;

float mag = sqrt(x*x + y*y + z*z);
x /= mag;
y /= mag;
z /= mag;

Mat unit(3, 1, CV_64F);
unit.at<double>(0, 0) = x;
unit.at<double>(1, 0) = y;
unit.at<double>(2, 0) = z;

//Rotation of Ray
Mat ray = stof1 * unit;
But when plotting this, the rays are off :/
The translation t (3x1 vector) and rotation R (3x3 matrix) of an object with respect to the camera equals the coordinate transformation from object into camera space, which is given by:
v' = R * v + t
The inversion of the rotation matrix is simply the transposed:
R^-1 = R^T
Knowing this, you can easily resolve the transformation (first eq.) to v:
v = R^T * v' - R^T * t
This is the transformation from camera into object space, i.e., the position of the camera with respect to the object (rotation = R^T and translation = -R^T * t).
You can simply get a 4x4 homogeneous transformation matrix from this:
T = ( R^T  -R^T * t )
    (  0       1    )
If you now have any point in camera coordinates, you can transform it into object coordinates:
p' = T * (x, y, z, 1)^T
So, if you'd like to project a ray from a pixel with coordinates (a,b) (probably you will need to define the center of the image, i.e. the principal point as reported by CalibrateCamera, as (0,0)) -- let that pixel be P = (a,b)^T. Its 3D coordinates in camera space are then P_3D = (a,b,0)^T. Let's project a ray 100 pixels in the positive z-direction, i.e. to the point Q_3D = (a,b,100)^T. All you need to do is transform both 3D coordinates into the object coordinate system using the transformation matrix T, and you should be able to draw a line between both points in object space. However, make sure that you don't confuse units: CalibrateCamera will report pixel values while your object coordinate system might be defined in, e.g., cm or mm.
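A small sketch of that recipe in OpenCV terms. The names rvec, tvec, a, b are placeholders for the per-photo calibration outputs and the pixel coordinates; this just writes out the formula v = R^T * v' - R^T * t from above and is not a drop-in fix for the code in the question:
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>

// Compute the ray through pixel (a, b) in object coordinates.
// rvec/tvec: rotation and translation vectors for one photo (object -> camera).
void camera_ray_in_object_space(const cv::Mat & rvec, const cv::Mat & tvec,
                                double a, double b,
                                cv::Mat & p_obj, cv::Mat & q_obj)
{
    cv::Mat R;
    cv::Rodrigues(rvec, R);                 // 3x3 rotation matrix, object -> camera

    cv::Mat R_inv  = R.t();                 // camera -> object rotation (R^T)
    cv::Mat camPos = -(R_inv * tvec);       // camera position in object space (-R^T * t)

    // Two points of the ray in camera space: the pixel at z = 0 and 100 units along +z.
    cv::Mat p_cam = (cv::Mat_<double>(3, 1) << a, b, 0.0);
    cv::Mat q_cam = (cv::Mat_<double>(3, 1) << a, b, 100.0);

    // p_obj = R^T * p_cam - R^T * t
    p_obj = R_inv * p_cam + camPos;
    q_obj = R_inv * q_cam + camPos;
}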

Rotate a 3D- Point around another one

I have a function in my program which rotates a point (x_p, y_p, z_p) around another point (x_m, y_m, z_m) by the angles w_nx and w_ny.
The new coordinates are stored in global variables x_n, y_n, and z_n. Rotation around the y-axis (changing the value of w_nx, so that the y-values are not affected) is working correctly, but as soon as I do a rotation around the x- or z-axis (changing the value of w_ny) the coordinates aren't accurate any more. I commented the line I think the fault is in, but I can't figure out what's wrong with that code.
void rotate(float x_m, float y_m, float z_m, float x_p, float y_p, float z_p, float w_nx, float w_ny)
{
    float z_b = z_p - z_m;
    float x_b = x_p - x_m;
    float y_b = y_p - y_m;

    float length_ = sqrt((z_b*z_b)+(x_b*x_b)+(y_b*y_b));
    float w_bx = asin(z_b/sqrt((x_b*x_b)+(z_b*z_b))) + w_nx;
    float w_by = asin(x_b/sqrt((x_b*x_b)+(y_b*y_b))) + w_ny; //<- there must be that fault

    x_n = cos(w_bx)*sin(w_by)*length_+x_m;
    z_n = sin(w_bx)*sin(w_by)*length_+z_m;
    y_n = cos(w_by)*length_+y_m;
}
What the code almost does:
compute difference vector
convert vector into spherical coordinates
add w_nx and wn_y to the inclination and azimuth angle (see link for terminology)
convert modified spherical coordinates back into Cartesian coordinates
There are two problems:
the conversion is not correct, the computation you do is for two inclination vectors (one along the x axis, the other along the y axis)
even if computation were correct, transformation in spherical coordinates is not the same as rotating around two axis
Therefore in this case using matrix and vector math will help:
b = p - m
b = RotationMatrixAroundX(wn_x) * b
b = RotationMatrixAroundY(wn_y) * b
n = m + b
See the basic rotation matrices for the two rotation matrices used above.
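A sketch of that matrix-based version, written against the original function's signature (angles in radians). The rotation entries follow the standard basic rotation matrices, so this illustrates the recipe above rather than being a verified fix:
#include <cmath>

// Globals as in the original code.
float x_n, y_n, z_n;

// Rotate point p around point m, first about the x axis by w_nx,
// then about the y axis by w_ny.
void rotate(float x_m, float y_m, float z_m,
            float x_p, float y_p, float z_p,
            float w_nx, float w_ny)
{
    // b = p - m
    float bx = x_p - x_m, by = y_p - y_m, bz = z_p - z_m;

    // b = RotationMatrixAroundX(w_nx) * b
    float cx = cosf(w_nx), sx = sinf(w_nx);
    float bx1 = bx;
    float by1 = by * cx - bz * sx;
    float bz1 = by * sx + bz * cx;

    // b = RotationMatrixAroundY(w_ny) * b
    float cy = cosf(w_ny), sy = sinf(w_ny);
    float bx2 =  bx1 * cy + bz1 * sy;
    float by2 =  by1;
    float bz2 = -bx1 * sy + bz1 * cy;

    // n = m + b
    x_n = bx2 + x_m;
    y_n = by2 + y_m;
    z_n = bz2 + z_m;
}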
Try to use vector math. Decide in which order you rotate, first along x, then along y perhaps.
If you rotate along z-axis, [z' = z]
x' = x*cos a - y*sin a;
y' = x*sin a + y*cos a;
The same repeated for y-axis: [y'' = y']
x'' = x'*cos b - z' * sin b;
z'' = x'*sin b + z' * cos b;
Again rotating along x-axis: [x''' = x'']
y''' = y'' * cos c - z'' * sin c
z''' = y'' * sin c + z'' * cos c
And finally the question of rotating around some specific "point":
First, subtract the point from the coordinates, then apply the rotations and finally add the point back to the result.
The problem, as far as I can see, is a close relative of "gimbal lock". The angle w_ny can't be measured relative to the fixed xyz coordinate system, but to the coordinate system that is rotated by applying the angle w_nx.
As kakTuZ observed, your code converts point to spherical coordinates. There's nothing inherently wrong with that -- with longitude and latitude, one can reach all the places on Earth. And if one doesn't care about tilting the Earth's equatorial plane relative to its trajectory around the Sun, it's ok with me.
The result of not rotating the next reference axis along the first w_ny is that two points that are 1 km apart at the equator move closer to each other towards the poles, and at a latitude of 90 degrees they touch, even though the apparent purpose is to keep them 1 km apart wherever they are rotated.
If you want to transform coordinate systems rather than only points, you need 3 angles. But you are right - for transforming points, 2 angles are enough. For details ask Wikipedia...
But when you work with OpenGL you really should use OpenGL functions like glRotatef. These functions will be calculated on the GPU - not on the CPU like your function. The doc is here.
Like many others have said, you should use glRotatef to rotate it for rendering. For collision handling, you can obtain its world-space position by multiplying its position vector by the OpenGL ModelView matrix on top of the stack at the point of its rendering. Obtain that matrix with glGetFloatv, and then multiply it with either your own vector-matrix multiplication function, or use one of the many ones you can obtain easily online.
But, that would be a pain! Instead, look into using the GL feedback buffer. This buffer will simply store the points where the primitive would have been drawn instead of actually drawing the primitive, and then you can access them from there.
This is a good starting point.

3d coordinate from point and angles

I'm working on a simple OpenGL world - and so far I've got a bunch of cubes randomly placed about and it's pretty fun to go zooming about. However I'm ready to move on. I would like to drop blocks in front of my camera, but I'm having trouble with the 3D angles. I'm used to 2D stuff where to find an end point we simply do something along the lines of:
endy = y + (sin(theta)*power);
endx = x + (cos(theta)*power);
However when I add the third dimension I'm not sure what to do! It seems to me that the power of the second dimensional plane would be determined by the z axis's cos(theta)*power, but I'm not positive. If that is correct, it seems to me I'd do something like this:
endz = z + (sin(xtheta)*power);
power2 = cos(xtheta) * power;
endx = x + (cos(ytheta) * power2);
endy = y + (sin(ytheta) * power2);
(where xtheta is the up/down theta and ytheta is the left/right theta)
Am I even close to the right track here? How do I find an end point given a current point and two angles?
Working with Euler angles doesn't work so well in 3D environments; there are several issues and corner cases in which they simply don't work. And you actually don't even have to use them.
What you should do is exploit the fact that transformation matrices are nothing else than coordinate system bases written down in a comprehensible form. So you have your modelview matrix MV. This consists of a model space transformation, followed by a view transformation (column major matrices multiply right to left):
MV = V * M
So what we want to know is in which way the "camera" lies within the world. That is given to you by the inverse view matrix V^-1. You can of course invert the view matrix using the Gauss-Jordan method, but most of the time your view matrix will consist of a 3×3 rotation matrix with a translation vector column P added:
( R  P )
( 0  1 )
Recall that
(M * N)^-1 = N^-1 * M^-1
and also
(M * N)^T = M^T * N^T
so it seems there is some kind of relationship between transposition and inversion. Not all transposed matrices are their inverse, but there are some where the transpose of a matrix is its inverse: namely, the so-called orthonormal matrices. Rotations are orthonormal. So
R^-1 = R^T
Neat! This allows us to find the inverse of the view matrix as follows (I suggest you try to prove it as an exercise):
V    = ( R    P )
       ( 0    1 )

V^-1 = ( R^T -P )
       ( 0    1 )
So how does this help us to place a new object in the scene at a distance from the camera? Well, V is the transformation from world space into camera space, so V^-1 transforms from camera to world space. So given a point in camera space you can transform it back to world space. Say you wanted to place something at the center of the view in distance d. In camera space that would be the point (0, 0, -d, 1). Multiply that with V^-1:
V^-1 * (0, 0, -d, 1) = (R^T)_z * d - P
Which is exactly what you want. In your OpenGL program you somewhere have your view matrix V, probably not properly named yet, but anyway it is there. Say you use old OpenGL-1 and GLU's gluLookAt:
void display(void)
{
    /* setup viewport, clear, set projection, etc. */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(...);
    /* the modelview matrix now holds the View transform */
At this point we can extract the modelview matrix
    GLfloat view[16];
    glGetFloatv(GL_MODELVIEW_MATRIX, view);
Now view is in column major order. If we were to use it directly we could directly address the columns. But remember that the transpose is the inverse of a rotation, so we actually want the 3rd row vector. So let's assume you keep view around, so that in your event handler (outside display) you can do the following:
GLfloat z_row[3];
z_row[0] = view[2];
z_row[1] = view[6];
z_row[2] = view[10];
And we want the position
GLfloat * const p_column = &view[12];
Now we can calculate the new objects position at distance d:
GLfloat new_object_pos[3] = {
z_row[0]*d - p_column[0],
z_row[1]*d - p_column[1],
z_row[2]*d - p_column[2],
};
There you are. As you can see, nowhere did you have to work with angles or trigonometry; it's just straight linear algebra.
Well, I was close. After some testing, I found the correct formula for my implementation; it looks like this:
endy = cam.get_pos().y - (sin(toRad(180-cam.get_rot().x))*power1);
power2 = cos(toRad(180-cam.get_rot().x))*power1;
endx = cam.get_pos().x - (sin(toRad(180-cam.get_rot().y))*power2);
endz = cam.get_pos().z - (cos(toRad(180-cam.get_rot().y))*power2);
This takes my camera's position and rotational angles and gets the corresponding points. Works like a charm =]