Need rotation matrix for OpenGL 3D transformation - C++

The problem is I have two points in 3D space where y+ is up, x+ is to the right, and z+ is towards you. I want to orient a cylinder between them that is the length of the distance between both points, so that the centers of both its ends touch the two points. I got the cylinder to translate to the location at the center of the two points, and I need help coming up with a rotation matrix to apply to the cylinder so that it is oriented the correct way. My transformation matrix for the entire thing looks like this:
translate(center point) * rotateX(some X degrees) * rotateZ(some Z degrees)
The translation is applied last, so that I can get it to the correct orientation before I translate it.
Here is what I have so far for this:
mat4 getTransformation(vec3 point, vec3 parent)
{
    float deltaX = point.x - parent.x;
    float deltaY = point.y - parent.y;
    float deltaZ = point.z - parent.z;

    float yRotation = atan2f(deltaZ, deltaX) * (180.0 / M_PI);
    float xRotation = atan2f(deltaZ, deltaY) * (180.0 / M_PI);
    float zRotation = atan2f(deltaX, deltaY) * (-180.0 / M_PI);
    if (point.y < parent.y)
    {
        zRotation = atan2f(deltaX, deltaY) * (180.0 / M_PI);
    }

    vec3 center = vec3((point.x + parent.x) / 2.0,
                       (point.y + parent.y) / 2.0,
                       (point.z + parent.z) / 2.0);
    mat4 translation = Translate(center);
    return translation * RotateX(xRotation) * RotateZ(zRotation) * Scale(radius, 1, radius) * Scale(0.1, 0.1, 0.1);
}
I tried a solution given below, but it did not seem to work at all:
mat4 getTransformation(vec3 parent, vec3 point)
{
    // moves base of cylinder to origin and gives it unit scaling
    mat4 scaleFactor = Translate(0, 0.5, 0) * Scale(radius/2.0, 1/2.0, radius/2.0) * cylinderModel;
    float length = sqrtf(pow((point.x - parent.x), 2) + pow((point.y - parent.y), 2) + pow((point.z - parent.z), 2));
    vec3 direction = normalize(point - parent);
    float pitch = acos(direction.y);
    float yaw = atan2(direction.z, direction.x);
    return Translate(parent) * Scale(length, length, length) * RotateX(pitch) * RotateY(yaw) * scaleFactor;
}
After running the above code I get this:
Every black point is a point whose parent is the point that spawned it (the one before it). I want the branches to fit between the points. Basically I am trying to implement the space colonization algorithm for random tree generation. I got most of it, but I want to map the branches to it so it looks good. I can use GL_LINES just to make a generic connection, but if I get this working it will look so much prettier. The algorithm is explained here.
Here is an image of what I am trying to do (pardon my paint skills)

Well, there are infinitely many rotation matrices satisfying your constraints, but any one will do. Instead of trying to figure out a specific rotation, we're just going to write down the matrix directly. Say your cylinder, when no transformation is applied, has its axis along the Z axis. So you have to transform the local space Z axis toward the direction between those two points, i.e. z_t = normalize(p_1 - p_2), where normalize(a) = a / length(a).
Now we just need to make this a full 3-dimensional coordinate base. We start with an arbitrary vector that's not parallel to z_t. Say, one of (1,0,0) or (0,1,0) or (0,0,1); take the scalar product · (also called the inner or dot product) of each with z_t and use the vector for which the absolute value is the smallest. Let's call this vector u.
In pseudocode:
# Start with (1,0,0)
mindotabs = abs( z_t · (1,0,0) )
minvec = (1,0,0)
for u_ in (0,1,0), (0,0,1):
    dotabs = abs( z_t · u_ )
    if dotabs < mindotabs:
        mindotabs = dotabs
        minvec = u_
u = minvec
Then you orthogonalize that vector against z_t, yielding a local Y axis: y_t = normalize(u - (z_t · u) z_t).
Finally, create the X axis by taking the cross product: x_t = y_t × z_t (this order keeps the basis right-handed, i.e. a proper rotation with determinant +1).
To move the cylinder into place you combine that with a matching translation matrix.
Transformation matrices are effectively just the axes of the space you're "coming from" written down as seen from the other space. So the resulting matrix, which is the rotation matrix you're looking for, is simply the vectors x_t, y_t and z_t side by side as a matrix. OpenGL uses so-called homogeneous matrices, so you have to pad it to 4×4 form using a 0,0,0,1 bottommost row and rightmost column.
You can then load that into OpenGL: if you're using the fixed-function pipeline, apply the rotation with glMultMatrix; if you're using shaders, multiply it onto the matrix you eventually pass to glUniform.
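For concreteness, here is a minimal C++ sketch of this construction using GLM (the function name and the GLM dependency are my assumptions, not part of the original answer):

#include <cmath>
#include <glm/glm.hpp>

// A sketch of the construction above. Returns a rotation whose local
// Z axis points from p2 towards p1.
glm::mat4 cylinderRotation(glm::vec3 p1, glm::vec3 p2)
{
    glm::vec3 z_t = glm::normalize(p1 - p2);

    // Pick the cardinal axis least parallel to z_t (smallest |dot|).
    glm::vec3 u(1, 0, 0);
    float best = std::fabs(z_t.x);
    if (std::fabs(z_t.y) < best) { u = glm::vec3(0, 1, 0); best = std::fabs(z_t.y); }
    if (std::fabs(z_t.z) < best) { u = glm::vec3(0, 0, 1); }

    // Orthogonalize u against z_t (Gram-Schmidt) to get the local Y axis.
    glm::vec3 y_t = glm::normalize(u - glm::dot(z_t, u) * z_t);
    // y x z keeps the determinant +1, i.e. a proper rotation.
    glm::vec3 x_t = glm::cross(y_t, z_t);

    // The basis vectors become the columns of the homogeneous matrix.
    glm::mat4 m(1.0f);
    m[0] = glm::vec4(x_t, 0.0f);
    m[1] = glm::vec4(y_t, 0.0f);
    m[2] = glm::vec4(z_t, 0.0f);
    return m; // combine with a translation to move the cylinder into place
}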

Begin with a unit length cylinder which has one of its ends, which I call C1, at the origin (note that your image indicates that your cylinder has its center at the origin, but you can easily transform that to what I begin with). The other end, which I call C2, is then at (0,1,0).
I'll call your two points in world coordinates P1 and P2; we want to locate C1 at P1 and C2 at P2.
Start with translating the cylinder by P1, which locates C1 at P1.
Then scale the cylinder by distance(P1, P2), since it originally had length 1.
The remaining rotation can be computed using spherical coordinates. If you're not familiar with this type of coordinate system: it's like GPS coordinates: two angles; one around the pole axis (in your case the world's Y-axis) which we typically call yaw, the other one is a pitch angle (in your case the X axis in model space). These two angles can be computed by converting P2-P1 (i.e. the local offset of P2 with respect to P1) into spherical coordinates. First rotate the object with the pitch angle around X, then with yaw around Y.
Something like this will do it (pseudo-code):
Matrix getTransformation(Point P1, Point P2) {
    float length = distance(P1, P2);
    Point direction = normalize(P2 - P1);
    float pitch = acos(direction.y);
    float yaw = atan2(direction.z, direction.x);
    return translate(P1) * scaleY(length) * rotateX(pitch) * rotateY(yaw);
}
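For reference, here is how that pseudo-code might look in GLM. Two details are easy to get wrong and may explain why the version above "did not work": with column vectors the non-uniform scale must be the rightmost factor (applied first, while the cylinder is still axis-aligned), and the yaw that matches pitch = acos(direction.y) is atan2(direction.x, direction.z), i.e. measured from +Z. This is my own corrected sketch, not the original answer's code:

#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Assumes a unit cylinder along +Y with its base C1 at the origin.
glm::mat4 getTransformation(glm::vec3 P1, glm::vec3 P2)
{
    float length = glm::distance(P1, P2);
    glm::vec3 d  = glm::normalize(P2 - P1);

    float pitch = acosf(d.y);        // angle away from +Y
    float yaw   = atan2f(d.x, d.z);  // heading around Y, measured from +Z

    glm::mat4 m(1.0f);
    m = glm::translate(m, P1);                        // applied last
    m = glm::rotate(m, yaw,   glm::vec3(0, 1, 0));
    m = glm::rotate(m, pitch, glm::vec3(1, 0, 0));
    m = glm::scale(m, glm::vec3(1.0f, length, 1.0f)); // applied first
    return m;
}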

Call the axis of the cylinder A. The second rotation (about X) can't change the angle between A and X, so we have to get that angle right with the first rotation (about Z).
Call the destination vector (the one between the two points) B. Take -acos(BX/BY), and that's the angle of the first rotation.
Take B again, ignore the X component, and look at its projection in the (Y, Z) plane. Take acos(BZ/BY), and that's the angle of the second rotation.


placing objects perpendicularly on the surface of a sphere that has a wavy surface

So I have a sphere. It rotates around a given axis and changes its surface by a sin * cos function.
I also have a bunch of tracticoids at fixed points on the sphere. These objects follow the sphere while moving (including the rotation and the change of the surface). But I can't figure out how to make them always perpendicular to the sphere. I have the points where the tracticoid connects to the surface of the sphere and its normal vector. The tracticoids are originally oriented along the z axis. So I tried to align its axis with the given normal vector, but I just can't make it work.
This is where I calculate the M transformation matrix and its inverse:
virtual void SetModelingTransform(mat4& M, mat4& Minv, vec3 n) {
    M = ScaleMatrix(scale) * RotationMatrix(rotationAngle, rotationAxis) * TranslateMatrix(translation);
    Minv = TranslateMatrix(-translation) * RotationMatrix(-rotationAngle, rotationAxis) * ScaleMatrix(vec3(1 / scale.x, 1 / scale.y, 1 / scale.z));
}
In my draw function I set the values for the transformation.
_M and _Minv are the matrices of the sphere, so the tracticoids follow the sphere, but when I tried to use a rotation matrix, the tracticoids started moving on the surface of the sphere.
_n is the normal vector that the tracticoid should follow.
void Draw(RenderState state, float t, mat4 _M, mat4 _Minv, vec3 _n) {
    SetModelingTransform(M, Minv, _n);
    if (!sphere) {
        state.M = M * _M * RotationMatrix(_n.z, _n);
        state.Minv = Minv * _Minv * RotationMatrix(-_n.z, _n);
    }
    else {
        state.M = M;
        state.Minv = Minv;
    }
    ...
}
You said your sphere has an axis of rotation, so you should have a vector a aligned with this axis.
Let P = P(t) be the point on the sphere at which your object is positioned. You should also have a vector n = n(t) perpendicular to the surface of the sphere at point P=P(t) for each time-moment t. All vectors are interpreted as column-vectors, i.e. 3 x 1 matrices.
Then, form the matrix
U[][1] = cross(a, n(t)) / norm(cross(a, n(t)))
U[][3] = n(t) / norm(n(t))
U[][2] = cross(U[][3], U[][1])
where for each j = 1, 2, 3, U[][j] is a 3 x 1 column vector. Then
U(t) = [ U[][1], U[][2], U[][3] ]
is a 3 x 3 orthogonal matrix (i.e. it is a 3D rotation around the origin)
For each moment of time t calculate the matrix
M(t) = U(t) * U(0)^T
where ^T is the matrix transposition.
The final transformation that rotates your object from its original position to its position at time t should be
X(t) = P(t) + M(t)*(X - P(0))
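As a sketch in GLM (my own transcription of the formulas above; it assumes a and n(t) are never parallel, and the function names are mine):

#include <glm/glm.hpp>

// Build the orthonormal frame U described above from the spin axis a
// and the surface normal n. Assumes a and n are not parallel.
glm::mat3 makeU(glm::vec3 a, glm::vec3 n)
{
    glm::vec3 u1 = glm::normalize(glm::cross(a, n)); // U[][1]
    glm::vec3 u3 = glm::normalize(n);                // U[][3]
    glm::vec3 u2 = glm::cross(u3, u1);               // U[][2]
    return glm::mat3(u1, u2, u3);                    // columns of U
}

// Rotation M(t) = U(t) * U(0)^T taking the frame at t = 0 to time t.
// Apply it as X(t) = P(t) + M(t) * (X - P(0)).
glm::mat3 makeM(glm::vec3 a, glm::vec3 n0, glm::vec3 nt)
{
    return makeU(a, nt) * glm::transpose(makeU(a, n0));
}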
I'm not sure if I got your explanations, but here I go.
You have a sphere with a wavy surface. This means that each point on the surface changes its distance to the center of the sphere, like a piece of wood on a wave in the sea changes its distance to the bottom of the sea at that position.
We can tell that the radius R of the sphere is variable at each point and time.
Now you have a tracticoid (what's a tracticoid?). I'll take it as some object floating on the wave, and following the sphere movements.
Then it seems you're asking how to make the tracticoid follow both the wavy surface and the sphere's movements.
Well, if we define each movement ("transformation") by a 4x4 matrix, it all reduces to combining those matrices in the proper order.
There are some good OpenGL tutorials that teach you about transformations, and how to combine them. See, for example, learnopengl.com.
For your case, there are several transformations to use.
The sphere spins. You need a rotation matrix, let's call it MSR (matrix sphere rotation), and an axis of rotation, ASR. If the sphere also translates, then an MST is needed too.
The surface waves, with some function f(lat, long, time) which calculates for those parameters the (signed) increment of the radius. So Ri = R + f(la, lo, ti).
For the tracticoid, I guess you have some triangles that define a tracticoid. I also guess those triangles are expressed in a "local" coordinate system whose origin is the center of the tracticoid. Your issue comes when you have to position and rotate the tracticoid, right?
You have two options. The first is to rotate the tracticoid to make it aim perpendicular to the sphere and then translate it to follow the sphere rotation. While mathematically correct, I find this option somewhat complicated.
The better option is to make the tracticoid rotate and translate exactly as the sphere, as if both shared the same origin, the center of the sphere, and then translate it to its current position.
The first part is quite easy: the matrix that defines such a transformation is M = MST * MSR, if you use the typical OpenGL axis convention; otherwise you need to swap their order. This M is the common part for all objects (sphere & tracticoids).
The second part requires a vector Vn that defines the point on the surface, relative to the center of the sphere. You should be able to calculate it from the latitude, the longitude, and the R obtained by f() above, plus half the size of the tracticoid (the distance from its center to the point where it touches the wave). Use the components of Vn to build a translation matrix MTT.
And now, just get the resultant transformation to use with every vertex of the tracticoid: Mt = MTT * M = MTT * MST * MSR
To render the scene you need two other matrices, for the camera (MV) and for the projection (MP). While Mt is per tracticoid, MV and MP are the same for all objects, including the sphere itself.
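As a small GLM sketch of that final composition (matrix names follow the answer; GLM uses the column-vector convention, so the rightmost factor is applied first):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Sketch of Mt = MTT * MST * MSR as described above.
glm::mat4 tracticoidTransform(const glm::mat4& MSR,  // sphere rotation
                              const glm::mat4& MST,  // sphere translation
                              glm::vec3 Vn)          // center -> surface point
{
    glm::mat4 MTT = glm::translate(glm::mat4(1.0f), Vn);
    return MTT * MST * MSR; // apply to every vertex of the tracticoid
}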

How to compute quaternion from max and min coordinate values of the Cube?

I am facing a problem in creating the orientation of a cube (the bounding box of detected objects in ROS). I know the max and min coordinate values (xmin, xmax, ymin, ymax, zmin, zmax), so I can easily find the vertices of the object's bounding box (cube), which are:
[xmin ymin zmin;
xmax ymin zmin;
xmax ymax zmin;
xmin ymax zmin;
xmin ymin zmax;
xmax ymin zmax;
xmax ymax zmax;
xmin ymax zmax]
Now how can I create the quaternion from these vertices in order to get the orientation of the bounding box? I know that a quaternion is a set of 4 numbers, [x y z w], which represents rotations the following way:
// RotationAngle is in radians
x = RotationAxis.x * sin(RotationAngle / 2)
y = RotationAxis.y * sin(RotationAngle / 2)
z = RotationAxis.z * sin(RotationAngle / 2)
w = cos(RotationAngle / 2)
How do I get the RotationAxis and RotationAngle when I know the vertices of the object (in my case a cube or 3D rectangle)?
Thanks
(I don't know what ROS is, but here's an abstract discussion of your question.)
First of all, a "rotation" starts from a "from" state and ends at a "to" state. You know your "to" state, but you'll have to specify the "from" state. Let's assume your box starts at a default state, with one corner at (0, 0, 0) and another at (1, 1, 1).
This default box state, like your final one, is an "axis-aligned" box. Now, the box might have rotated 90, 180, or 270 degrees, but if the different faces of the box are not distinct from each other, this might not matter. This scenario will have a 0-degree rotation, and the quaternion representing this rotation is trivially calculated to be (0, 0, 0, 1). You'll still need scale and translation (which are also trivial to compute) to get from that default state to your BB, but no rotation.
Now, if the faces are actually different, then we'll indeed have a rotation. Let's call it an "axis-aligned rotation" or AAR, which can take a default box to any one of 24 different states. Think of it like this: the axis of rotation can be any of the 6 basis vectors in either direction (+x, -x, +y, -y, +z, -z), and the angle can be 0, 90, 180, or 270 degrees (6 * 4 is 24!)
Each case, when you think about it like that, completely defines a rotation quaternion which is trivial to construct. The problem, then, becomes finding which of the 24 rotations we have.
I can think of two mental models for this: either pick one "front" face and one "top" face for your box and find out where they ended up in the "to" state (6 places for "front" and 4 places for "top"), or pick an "origin" vertex and a "neighbor" vertex and find where they ended up (8 places for "origin" and 3 places for "neighbor"). In either case, you would probably have a 24-entry table of pre-calculated quaternions, which you choose from based on your "rotation".
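If you prefer generating the table instead of typing it out, here is a hedged sketch using GLM quaternions. Note that this naive 6 x 4 enumeration contains duplicates (the identity appears once per axis, and opposite axes with complementary angles coincide), so deduplicate if you need exactly 24 distinct entries:

#include <vector>
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// Enumerate axis-aligned rotation quaternions: 6 signed basis axes
// times the 4 right angles 0, 90, 180, 270 degrees.
std::vector<glm::quat> axisAlignedRotations()
{
    const glm::vec3 axes[6] = {
        { 1, 0, 0}, {-1, 0, 0},
        { 0, 1, 0}, { 0,-1, 0},
        { 0, 0, 1}, { 0, 0,-1}
    };
    std::vector<glm::quat> rotations;
    for (const glm::vec3& axis : axes)
        for (int k = 0; k < 4; ++k) // k * 90 degrees
            rotations.push_back(glm::angleAxis(glm::radians(90.0f * k), axis));
    return rotations; // contains duplicates; deduplicate for 24 distinct
}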
The way you created your bounding box by using the minimum and maximum values leads to a bounding box aligned to the x, y, and z axes. Thus, it doesn't have any rotation and its quaternion would be [0, 0, 0, 1].
Getting the rotation matrix of randomly oriented cubes or rectangles is not complicated if you know which vertex relates to which corner of the cube/rectangle. Then you can just use two perpendicular edges as the x and y axes in the object's coordinate frame. After normalizing these vectors you can create the z axis in the object frame by taking the cross product of the x and y vectors. The columns of the 3x3 rotation matrix are these three vectors:
    | x_0 y_0 z_0 |
R = | x_1 y_1 z_1 |
    | x_2 y_2 z_2 |
You can search the internet for resources to get the rotation axis and quaternion from the rotation matrix.
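For example, the standard matrix-to-quaternion conversion looks like this for the well-conditioned case (this sketch only covers the trace > 0 branch; full implementations handle three more branches when the trace is small or negative):

#include <cmath>

// Convert a 3x3 rotation matrix R (axes in its columns, as built
// above) to a quaternion [x y z w]. Valid when trace(R) > 0.
void matrixToQuaternion(const double R[3][3],
                        double& x, double& y, double& z, double& w)
{
    double trace = R[0][0] + R[1][1] + R[2][2];
    double s = 0.5 / std::sqrt(trace + 1.0);
    w = 0.25 / s;
    x = (R[2][1] - R[1][2]) * s;
    y = (R[0][2] - R[2][0]) * s;
    z = (R[1][0] - R[0][1]) * s;
}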
For general objects, you can use the same method if you have the vertices of points on two perpendicular axes in the object coordinate frame. If you don't know the corresponding point in the object frame for your vertices, it gets more complicated. If your vertices uniformly cover your object (e.g., vertices from 3D cameras or laser scans), you can use Principal Component Analysis to find the principal axes of the vertices. These axes form the columns of the rotation matrix (make sure the principal axes form a right-handed frame, i.e., the determinant of the rotation matrix is +1).
If these methods also don't apply to you, you could search for pose estimation methods in the context of point clouds (e.g.: Point Cloud Library PCL, Open3D).
Code (from the tutorial) for the extraction of the OBB:
pcl::MomentOfInertiaEstimation <pcl::PointXYZ> feature_extractor;
feature_extractor.setInputCloud (cloud);
feature_extractor.compute ();
pcl::PointXYZ min_point_OBB;
pcl::PointXYZ max_point_OBB;
pcl::PointXYZ position_OBB;
Eigen::Matrix3f rotational_matrix_OBB;
Eigen::Vector3f major_vector, middle_vector, minor_vector;
Eigen::Vector3f mass_center;
feature_extractor.getOBB (min_point_OBB, max_point_OBB, position_OBB, rotational_matrix_OBB);
feature_extractor.getEigenVectors (major_vector, middle_vector, minor_vector);
feature_extractor.getMassCenter (mass_center);
So to get the final OBB coordinates:
Eigen::Vector3f position (position_OBB.x, position_OBB.y, position_OBB.z);
Eigen::Vector3f p1 (min_point_OBB.x, min_point_OBB.y, min_point_OBB.z);
Eigen::Vector3f p2 (min_point_OBB.x, min_point_OBB.y, max_point_OBB.z);
Eigen::Vector3f p3 (max_point_OBB.x, min_point_OBB.y, max_point_OBB.z);
Eigen::Vector3f p4 (max_point_OBB.x, min_point_OBB.y, min_point_OBB.z);
Eigen::Vector3f p5 (min_point_OBB.x, max_point_OBB.y, min_point_OBB.z);
Eigen::Vector3f p6 (min_point_OBB.x, max_point_OBB.y, max_point_OBB.z);
Eigen::Vector3f p7 (max_point_OBB.x, max_point_OBB.y, max_point_OBB.z);
Eigen::Vector3f p8 (max_point_OBB.x, max_point_OBB.y, min_point_OBB.z);
p1 = rotational_matrix_OBB * p1 + position;
p2 = rotational_matrix_OBB * p2 + position;
p3 = rotational_matrix_OBB * p3 + position;
p4 = rotational_matrix_OBB * p4 + position;
p5 = rotational_matrix_OBB * p5 + position;
p6 = rotational_matrix_OBB * p6 + position;
p7 = rotational_matrix_OBB * p7 + position;
p8 = rotational_matrix_OBB * p8 + position;
But how do you get rotational_matrix_OBB? (It is one of the outputs of feature_extractor.getOBB above.)

How to rotate model to follow path

I have a spaceship model that I want to move along a circular path. I want the nose of the ship to always point in the direction it is moving in.
Here is the code I have to move it in a circle right now:
glm::mat4 m = glm::mat4(1.0f);

// time
long value_ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                    std::chrono::time_point_cast<std::chrono::milliseconds>(
                        std::chrono::high_resolution_clock::now())
                    .time_since_epoch())
                    .count();

// translate
m = glm::translate(m, translate);
m = glm::translate(m, glm::vec3(-50, 0, -20));
m = glm::scale(m, glm::vec3(0.025f, 0.025f, 0.025f));
m = glm::translate(m, glm::vec3(1800, 0, 3000));

float speed = .002;
float x = 100 * cos(value_ms * speed); // + 1800;
float y = 0;
float z = 100 * sin(value_ms * speed); // + 3000;
m = glm::translate(m, glm::vec3(x, y, z));
How would I move it so the nose always points ahead? I tried doing glm::rotate with the rotation axis set as x or y or z but I cannot get it to work properly.
First see Understanding 4x4 homogeneous transform matrices, as I am using terminology and stuff from there...
It's usual to use an object's transform matrix for its navigation purposes, and not the other way around... So you should have a transform matrix M for your spaceship that represents its position and orientation in [GCS] (global coordinate system). On top of that, another matrix M0 is sometimes multiplied in to align your spaceship mesh to the first matrix (you know, some meshes are not centered around (0,0,0) nor axis-aligned...).
Now when you are moving your object you just do local transformations on M, so moving forward is just translating M's origin position by a multiple of the forward basis vector. The same goes for sliding to the sides (just use a different basis vector), with the result that the object is always aligned the way it is supposed to be (with respect to its movement). The same goes for turns. So going in a circle is just moving forward and turning at constant speeds per time iteration step (timer).
You are doing this backwards: first you compute position and orientation, and then you try to construct operations resulting in a matrix that would do the same... In such a case it is much, much easier to construct the matrix M directly instead of deriving the transformations that would create it... So what you need is:
origin position
3 perpendicular (most likely unit) basis vectors
So the origin is your x,y,z position. Two basis vectors can be obtained from the circle: forward is the tangent (or position - last_position), and the vector towards the circle center can be used as right (or left). The third vector can be obtained by cross product, so let's assume:
+X axis is right
+Y axis is up
+Z axis is forward
you got:
r=100.0
a=speed*t
pos = (r*cos(a),0.0,r*sin(a))
center = (0.0,0.0,0.0)
so:
Z = (cos(a-0.5*M_PI),0.0,sin(a-0.5*M_PI))
X = (cos(a),0.0,sin(a)) - center
Y = cross(X,Z)
O = pos
normalize:
X /= length(X)
Y /= length(Y)
Z /= length(Z)
So now just feed your X,Y,Z,O to your matrix (depending on the conventions you use like multiplication order, direct/inverse matrix, row-major or column-major matrices ...)
so for example like this:
double M[16]=
{
X[0],X[1],X[2],0.0,
Y[0],Y[1],Y[2],0.0,
Z[0],Z[1],Z[2],0.0,
O[0],O[1],O[2],1.0,
};
or:
double M[16]=
{
X[0],Y[0],Z[0],O[0],
X[1],Y[1],Z[1],O[1],
X[2],Y[2],Z[2],O[2],
0.0 ,0.0 ,0.0 ,1.0,
};
And that is all... The matrix might be transposed, inverted, etc. based on the conventions you use. Sorry, I do not use GLM, but the syntax should be very similar... the matrix feeding might be even simpler if rows or columns are loadable from a vector...
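Since the question uses GLM, here is a possible transcription of the first (column-major) variant. GLM's mat4 is column-major and its columns are assignable from vectors, so the "feeding" is indeed simple:

#include <glm/glm.hpp>

// Build the ship's model matrix directly from its basis vectors and
// origin, as described above.
glm::mat4 shipMatrix(glm::vec3 X, glm::vec3 Y, glm::vec3 Z, glm::vec3 O)
{
    glm::mat4 m(1.0f);
    m[0] = glm::vec4(X, 0.0f); // right
    m[1] = glm::vec4(Y, 0.0f); // up
    m[2] = glm::vec4(Z, 0.0f); // forward
    m[3] = glm::vec4(O, 1.0f); // origin
    return m;
}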

Rotating a matrix in the direction of a vector?

I have a player in the shape of a sphere that can move around freely in the directions x and z.
The player's current speed is stored in a vector that is added to the player's position on every frame:
m_position += m_speed;
I also have a rotation matrix that I'd like to rotate in the direction that the player's moving in (imagine how a ball would rotate if it rolled on the floor).
Here's a short video to help visualise the problem: http://imgur.com/YrTG2al
Notice in the video when I start moving up and down (Z) as opposed to left and right (X) the rotation axis no longer matches the player's movement.
Code used to produce the results:
glm::vec3 UP = glm::vec3(0, 1, 0);
float rollSpeed = fabs(m_Speed.x + m_Speed.z);
if (rollSpeed > 0.0f) {
    m_RotationMatrix = glm::rotate(m_RotationMatrix, rollSpeed, glm::cross(UP, glm::normalize(m_Speed)));
}
Thankful for help
Your rollSpeed computation is wrong -- e.g., if the signs of m_Speed.x and m_Speed.z are different, they will partially cancel. You need the norm of the speed in the plane:
float rollSpeed = sqrt(m_Speed.x * m_Speed.x + m_Speed.z * m_Speed.z);
To be more general about it, you can re-use your cross product instead. That way, your math is less likely to get out of sync -- something like:
glm::vec3 rollAxis = glm::cross(UP, m_Speed);
float rollSpeed = glm::length(rollAxis);
if (rollSpeed > 0.0f) {
    // rotate around the normalized axis; skip when the ball is not moving
    m_RotationMatrix = glm::rotate(m_RotationMatrix, rollSpeed, rollAxis / rollSpeed);
}
rollSpeed should be the magnitude of the speed vector.
float rollSpeed = glm::length(m_Speed);
The matrix transform expects an angle. The angle of rotation will depend on the size of your ball. Say its radius is r; then the angle (in radians) you need is
angle = rollSpeed/r;
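Combining this with the roll axis from the previous answer gives something like the following sketch (m_Speed, m_RotationMatrix, UP, and the radius r are the question's and answers' own variables):

float dist = glm::length(m_Speed);   // distance rolled this frame
if (dist > 0.0f) {
    float angle = dist / r;          // radians, for ball radius r
    glm::vec3 axis = glm::normalize(glm::cross(UP, m_Speed));
    m_RotationMatrix = glm::rotate(m_RotationMatrix, angle, axis);
}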
If I understood correctly, you need a rotation matrix that works for any axis direction (x, y, z).
I think you should write a rotate() method per axis (x, y, z), and check which axis your direction vector points along (direction.x, direction.y, or direction.z) so the rotation matrix knows where the direction vector is pointing.

OpenCV Computing Camera Position && Rotation

For a project I need to compute the real-world position and orientation of a camera with respect to a known object.
I have a set of photos, each displaying a chessboard from a different point of view.
Using calibrateCamera and solvePnP I am able to reproject points in 2D, to get an AR thing.
So my situation is as such:
Intrinsic parameters are known
Distortion coefficients are known
Translation vector and rotation vector are known per photo.
I simply cannot figure out how to compute the position of the camera. My guess was:
invert the translation vector (= t')
transform the rotation vector to degrees (it seems to be in radians) and invert
use Rodrigues on the rotation vector
compute RotationMatrix * t'
But the results are somehow totally off...
Basically I want to compute a ray for each pixel in world coordinates.
If more information on my problem is needed, I'd be glad to answer quickly.
I don't get it... somehow the rays are still off. This is my code, btw:
Mat image1CamPos = tvecs[0].clone(); //From calibrateCamera
Mat rot = rvecs[0].clone(); //From calibrateCamera
Rodrigues(rot, rot);
rot = rot.t();
//Position of Camera
Mat pos = rot * image1CamPos;
//Ray-Normal (( (double)mk[i][k].x) are known image-points)
float x = (( (double)mk[i][0].x) / fx) - (cx / fx);
float y = (( (double)mk[i][0].y) / fy) - (cy / fy);
float z = 1;
float mag = sqrt(x*x + y*y + z*z);
x /= mag;
y /= mag;
z /= mag;
Mat unit(3, 1, CV_64F);
unit.at<double>(0, 0) = x;
unit.at<double>(1, 0) = y;
unit.at<double>(2, 0) = z;
//Rotation of Ray
Mat ray = stof1 * unit;
But when plotting this, the rays are off :/
The translation t (3x1 vector) and rotation R (3x3 matrix) of an object with respect to the camera equals the coordinate transformation from object into camera space, which is given by:
v' = R * v + t
The inversion of the rotation matrix is simply the transposed:
R^-1 = R^T
Knowing this, you can easily resolve the transformation (first eq.) to v:
v = R^T * v' - R^T * t
This is the transformation from camera into object space, i.e., the position of the camera with respect to the object (rotation = R^T and translation = -R^T * t).
You can simply get a 4x4 homogeneous transformation matrix from this:
T = ( R^T   -R^T * t )
    (  0        1    )
If you now have any point in camera coordinates, you can transform it into object coordinates:
p' = T * (x, y, z, 1)^T
So, if you'd like to project a ray from a pixel with coordinates (a,b) (you will probably need to define the center of the image, i.e. the principal point as reported by calibrateCamera, as (0,0)) -- let that pixel be P = (a,b)^T. Its 3D coordinates in camera space are then P_3D = (a,b,0)^T. Let's project a ray 100 pixels in the positive z-direction, i.e. to the point Q_3D = (a,b,100)^T. All you need to do is transform both 3D coordinates into the object coordinate system using the transformation matrix T, and you should be able to draw a line between both points in object space. However, make sure that you don't confuse units: calibrateCamera will report pixel values while your object coordinate system might be defined in, e.g., cm or mm.
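In OpenCV code, the camera-pose part of this boils down to the following sketch (rvec and tvec being the per-photo outputs of calibrateCamera or solvePnP, e.g. rvecs[0] and tvecs[0] from the question):

#include <opencv2/calib3d.hpp>

// Camera pose in object space from the object-to-camera rvec/tvec.
cv::Mat R;
cv::Rodrigues(rvec, R);          // 3x1 rotation vector -> 3x3 rotation matrix
cv::Mat R_inv = R.t();           // inverse rotation: R^-1 == R^T
cv::Mat camPos = -R_inv * tvec;  // camera position: -R^T * t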