I have a number of point clouds taken from a Kinect-like instrument that is mounted on a tripod and then rotated. How do I determine the rotation axis accurately? I'm using C++ with PCL and Eigen.
I can match the point clouds together with ICP and run a global registration (SLAM or ELCH) to get a combined point cloud, but for a number of reasons I would like to determine the axis accurately and force the registrations to respect this rotation.
One related issue is the origin of my instrument. I can measure the distance to the rotation axis from the external dimensions of the device fairly accurately, but I don't know exactly where the origin is in relation to the extremities of the device. Solving this issue could help me locate the origin too.
There are two methods that I'm considering.
One is to take the transformation matrices of the registered point clouds and extract the translation vectors, which represent the locations where each transformation would project the internal origin. To the set of points acquired this way I could fit a circle: the centre would give a vector from the origin to the rotation axis, and the normal of the circle would give the direction of the axis.
The other option is to determine the rotation axis directly from any single rotation matrix, but the vector to the rotation axis obtained this way seems volatile.
Any better solutions or insights on the issue?
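For illustration of the first idea, here is a minimal Eigen sketch (my own; the function name and the use of Matrix4d are assumptions) of collecting the translation parts of the registered transforms. These are the points a circle fit would then be run on:
#include <Eigen/Dense>
#include <vector>
// The translation column of each registered 4x4 transform is the position of the
// device origin of that scan, expressed in the frame of the reference scan.
std::vector<Eigen::Vector3d> originTrack(const std::vector<Eigen::Matrix4d>& transforms)
{
    std::vector<Eigen::Vector3d> origins;
    origins.reserve(transforms.size());
    for (const auto& T : transforms)
        origins.push_back(T.block<3, 1>(0, 3));   // extract the translation part
    return origins;
}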
You need to calculate the major axis of the tensor of inertia: https://en.m.wikipedia.org/wiki/Moment_of_inertia.
All points can be considered to have the same mass. Then you can use the Steiner (parallel axis) approach.
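For illustration, a rough Eigen sketch of that computation (my own; all names are made up). Whether a principal axis of the point cloud's inertia tensor actually coincides with the instrument's rotation axis depends on the scene, so treat this as showing the calculation only:
#include <Eigen/Dense>
#include <vector>
// Treat every point as unit mass, build the inertia tensor about the centroid
// (the parallel-axis / Steiner shift) and take its eigenvectors (principal axes).
Eigen::Matrix3d principalAxes(const std::vector<Eigen::Vector3d>& pts)
{
    Eigen::Vector3d c = Eigen::Vector3d::Zero();
    for (const auto& p : pts) c += p;
    c /= static_cast<double>(pts.size());

    Eigen::Matrix3d I = Eigen::Matrix3d::Zero();
    for (const auto& p : pts) {
        Eigen::Vector3d r = p - c;
        I += r.squaredNorm() * Eigen::Matrix3d::Identity() - r * r.transpose();
    }

    Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> es(I);
    // Columns are the principal axes; eigenvalues are sorted ascending, so column 0
    // has the smallest moment (the "long" axis of the points) and column 2 the largest.
    return es.eigenvectors();
}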
THIS IS NOT AN ANSWER TO THE ORIGINAL QUESTION BUT A CLARIFICATION OF A DISCUSSION ABOUT DEFINING THE ROTATION AXIS BETWEEN TWO POSES.
Deepfreeze, here is an Octave script to demonstrate what we discussed in the chat. I know it might be bad practice to post an answer without actually presenting one, but I hope this gives you insight into what I was trying to explain about the relationship between the translation vector and the point on the rotation axis (t_i in your notation).
rotation_axis_unit = [0,0,1]' % parallel to z axis
angle = 1 /180 *pi() % one degree
% a rotation matrix of one degree rotation
R = [cos(angle) -sin(angle) 0 ; sin(angle) cos(angle) 0 ; 0 0 1 ];
% the point around which to rotate
axis_point = [-10,0,0]' % a point
% just a point used to demonstrate that all points form a circular path
test_point = [10,5,0]';
%containers for plotting
path_origin = zeros(360,3);
path_test_point = zeros(360,3);
path_v = zeros(360,3);
origin = [0,0,0]';
% updating rotation matrix
R_i = R;
M1 = [R, R * -axis_point + axis_point; [0,0,0,1]];
% go around a full circle
for i=1:360
% v = the last column of M. Created from axis_point.
% -axis_point is the vector from axis_point to origin which is being rotated
% then a correction is applied to center it around the axis point
v = R_i * -axis_point + axis_point;
% building 4x4 transformation matrix
M = [R_i, v;[0,0,0,1]];
% M could also be built M_i = M1 * M_i, rotating the previous M by one degree around axis_point
% rotating the test point and saving it
test_point_i = M * [test_point;1];
path_test_point(i,:) = test_point_i(1:3,1)';
% saving the translation part of M
path_v(i,:) = v';
% rotating origin point and saving it
origin_i = M * [origin;1];
path_origin(i,:) = origin_i(1:3,1)';
R_i = R * R_i ;
end
figure(1)
% plot the test point path, just to show it forms a circular path centered at axis_point
scatter3(path_test_point(:,1), path_test_point(:,2), path_test_point(:,3), 5,'r')
hold on
% plotting origin path, circular, center at axis_point
scatter3(path_origin(:,1), path_origin(:,2), path_origin(:,3), 7,'r')
hold on
% plotting translation vectors, identical to origin path, (if invisible rotate so that you are watching it from z axis direction)
scatter3(path_v(:,1), path_v(:,2), path_v(:,3), 1, 'black');
hold on
% plots for visual analysis
scatter3(0,0,0,5,'b') % origin
hold on
scatter3(axis_point(1), axis_point(2), axis_point(3), 5, 'g') % axis point, center of the circles
hold on
scatter3(test_point(1), test_point(2), test_point(3), 5, 'black') % test point origin
hold off
% what does this demonstrate?
% it shows the relationship between a 4x4
% transformation matrix and axis-angle notation plus a point on the rotation axis
% M = [ R_i, v; [0,0,0,1] ] = [ R_i, R_i * -c + c; [0,0,0,1] ]
%
% where c equals axis_point ( = perpendicular vector from origin to the axis of rotation )
% pay attention to path_v and path_origin
% they are identical
% path_v was extracted from the 4x4 transformation matrix M
% and path_origin was created by rotating the origin point by M
% --> v = R_i * -c + c
% Notice that since M describes a rotation by angle alpha around an
% axis that goes through c,
% and its translation vector lies on a circle whose center
% is on the rotation axis with radius equal to the distance from that
% center to the origin ->
%
% M * M will describe a rotation of 2 x alpha angles around the same axis
% Therefore you can easily create more points that lie on that circle
% by multiplying M by itself and extracting the translation vector
%
% c can then be solved by normal circle fit algorithms.
%------------------------------------------------------------
% CAUTION !!!
% this applies perfectly when the transformation matrices have been created so
% that the translation is perfectly orthogonal to the rotation axis
% in real world matrices the translation will not be orthogonal
% therefore the points will not travel on a circular path but on a helix and this needs to be
% dealt with when solving the center of rotation.
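Back in the asker's C++/Eigen setting, a hedged sketch (my own; all names are made up) of how the axis and the point c could be recovered from the translation parts t_i of the registered transforms: a plane fit gives the axis direction, and a Kasa least-squares circle fit in that plane gives a point on the axis. Projecting into the plane first also suppresses the axial drift (helix) mentioned in the caution above.
#include <Eigen/Dense>
#include <vector>

struct AxisFit {
    Eigen::Vector3d direction;  // unit vector along the rotation axis
    Eigen::Vector3d point;      // a point on the axis (the fitted circle centre)
};

AxisFit fitRotationAxis(const std::vector<Eigen::Vector3d>& t)
{
    const int n = static_cast<int>(t.size());
    Eigen::Vector3d mean = Eigen::Vector3d::Zero();
    for (const auto& p : t) mean += p;
    mean /= n;

    // Plane fit: the axis direction is the direction of least spread of the t_i.
    Eigen::MatrixXd centred(n, 3);
    for (int i = 0; i < n; ++i) centred.row(i) = (t[i] - mean).transpose();
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(centred, Eigen::ComputeFullV);
    Eigen::Vector3d axis = svd.matrixV().col(2);   // smallest singular value -> plane normal
    Eigen::Vector3d u = svd.matrixV().col(0);      // in-plane basis
    Eigen::Vector3d v = svd.matrixV().col(1);

    // Kasa circle fit in the (u,v) plane: x^2 + y^2 = 2ax + 2by + c, linear in (a,b,c).
    Eigen::MatrixXd A(n, 3);
    Eigen::VectorXd b(n);
    for (int i = 0; i < n; ++i) {
        double x = (t[i] - mean).dot(u);
        double y = (t[i] - mean).dot(v);
        A.row(i) << 2.0 * x, 2.0 * y, 1.0;
        b(i) = x * x + y * y;
    }
    Eigen::Vector3d sol = A.colPivHouseholderQr().solve(b);

    return { axis, mean + sol(0) * u + sol(1) * v };
}
The Kasa fit is linear, so it needs no initial guess; a non-linear refinement can follow if more accuracy is needed.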
An option is to place a chessboard at a distance of ~1 m. Use the Kinect camera to take images for different rotations in which the whole chessboard is still visible. Fit the chessboard using OpenCV.
The goal is to find the xyz coordinates of the chessboard for the different orientations. Use your camera API functions to determine the xyz coordinates of the chessboard, or do the following (a rough sketch follows the list of steps):
Determine camera intrinsics of camera 1 and 2. (use both color and IR images for kinect).
Determine camera extrinsics (camera 2 [R,t] wrt camera 1)
Use the values to calculate projection matrices
Use the projection matrices to triangulate the points of the chessboard to get coordinates in [X,Y,Z] wrt camera1 coordinate system.
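A rough OpenCV sketch of those steps (my own illustration, not the answer's code; the function name and the 9x6 board size are assumptions, and K1, K2, R, t are assumed to be the calibrated intrinsics/extrinsics as CV_64F matrices):
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Point3f> chessboardXYZ(const cv::Mat& img1, const cv::Mat& img2,
                                       const cv::Mat& K1, const cv::Mat& K2,
                                       const cv::Mat& R, const cv::Mat& t)
{
    const cv::Size board(9, 6);                    // inner-corner count of the chessboard
    std::vector<cv::Point2f> c1, c2;
    if (!cv::findChessboardCorners(img1, board, c1) ||
        !cv::findChessboardCorners(img2, board, c2))
        return {};

    // P1 = K1 [I | 0],  P2 = K2 [R | t]  (camera 2 pose w.r.t. camera 1)
    cv::Mat P1 = K1 * cv::Mat::eye(3, 4, CV_64F);
    cv::Mat Rt;
    cv::hconcat(R, t, Rt);
    cv::Mat P2 = K2 * Rt;

    cv::Mat pts4d;
    cv::triangulatePoints(P1, P2, c1, c2, pts4d);  // 4xN homogeneous points
    pts4d.convertTo(pts4d, CV_64F);

    std::vector<cv::Point3f> xyz;                  // XYZ in the camera-1 coordinate system
    for (int i = 0; i < pts4d.cols; ++i) {
        double w = pts4d.at<double>(3, i);
        xyz.emplace_back(static_cast<float>(pts4d.at<double>(0, i) / w),
                         static_cast<float>(pts4d.at<double>(1, i) / w),
                         static_cast<float>(pts4d.at<double>(2, i) / w));
    }
    return xyz;
}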
Each group of chessboard points is called [x_i]. Now we can write the equation (the model used in the code below):
x_1 = R(axis, angle) * (x_0 - t) + t
where R is the rotation about the unknown axis and t is a point on that axis.
Update:
This equation can be solved with a non-linear solver; I used Ceres Solver.
#include "ceres/ceres.h"
#include "ceres/rotation.h"
#include "glog/logging.h"
#include "opencv2/opencv.hpp"
#include "csv.h"
#include "Eigen/Eigen"
using ceres::AutoDiffCostFunction;
using ceres::CostFunction;
using ceres::Problem;
using ceres::Solver;
using ceres::Solve;
struct AxisRotationError {
AxisRotationError(double observed_x0, double observed_y0, double observed_z0, double observed_x1, double observed_y1, double observed_z1)
: observed_x0(observed_x0), observed_y0(observed_y0), observed_z0(observed_z0), observed_x1(observed_x1), observed_y1(observed_y1), observed_z1(observed_z1) {}
template <typename T>
bool operator()(const T* const axis, const T* const angle, const T* const trans, T* residuals) const {
//bool operator()(const T* const axis, const T* const trans, T* residuals) const {
// Normalize axis
T a[3];
T k = axis[0] * axis[0] + axis[1] * axis[1] + axis[2] * axis[2];
a[0] = axis[0] / sqrt(k);
a[1] = axis[1] / sqrt(k);
a[2] = axis[2] / sqrt(k);
// Define quaternion from axis and angle. Convert angle to radians
T pi = T(3.14159265359);
//T angle[1] = {T(10.0)};
T quaternion[4] = { cos((angle[0]*pi / 180.0) / 2.0),
a[0] * sin((angle[0] * pi / 180.0) / 2.0),
a[1] * sin((angle[0] * pi / 180.0) / 2.0),
a[2] * sin((angle[0] * pi / 180.0) / 2.0) };
// Define transformation
T t[3] = { trans[0], trans[1], trans[2] };
// Calculate predicted positions
T observedPoint0[3] = { T(observed_x0), T(observed_y0), T(observed_z0)};
T point[3]; point[0] = observedPoint0[0] - t[0]; point[1] = observedPoint0[1] - t[1]; point[2] = observedPoint0[2] - t[2];
T rotatedPoint[3];
ceres::QuaternionRotatePoint(quaternion, point, rotatedPoint);
T predicted_x = rotatedPoint[0] + t[0];
T predicted_y = rotatedPoint[1] + t[1];
T predicted_z = rotatedPoint[2] + t[2];
// The error is the difference between the predicted and observed position.
residuals[0] = predicted_x - T(observed_x1);
residuals[1] = predicted_y - T(observed_y1);
residuals[2] = predicted_z - T(observed_z1);
return true;
}
// Factory to hide the construction of the CostFunction object from
// the client code.
static ceres::CostFunction* Create(const double observed_x0, const double observed_y0, const double observed_z0,
const double observed_x1, const double observed_y1, const double observed_z1) {
// Define AutoDiffCostFunction. <AxisRotationError, #residuals, #dim axis, #dim angle, #dim trans
return (new ceres::AutoDiffCostFunction<AxisRotationError, 3, 3, 1,3>(
new AxisRotationError(observed_x0, observed_y0, observed_z0, observed_x1, observed_y1, observed_z1)));
}
double observed_x0;
double observed_y0;
double observed_z0;
double observed_x1;
double observed_y1;
double observed_z1;
};
int main(int argc, char** argv) {
google::InitGoogleLogging(argv[0]);
// Load points.csv into cv::Mat's
// 216 rows with (x0, y0, z0, x1, y1, z1)
// [x1,y1,z1] = R* [x0-tx,y0-ty,z0-tz] + [tx,ty,tz]
// The xyz coordinates are points on a chessboard, where the chessboard
// is rotated for 4x. Each chessboard has 54 xyz points. So 4x 54,
// gives the 216 rows of observations.
// The chessboard is located at [0,0,1], as the camera_0 is located
// at [-0.1,0,0], the t should become [0.1,0,1.0].
// The chessboard is rotated around axis [0.0,1.0,0.0]
io::CSVReader<6> in("points.csv");
float x0, y0, z0, x1, y1, z1;
// The observations
cv::Mat x_0(216, 3, CV_32F);
cv::Mat x_1(216, 3, CV_32F);
for (int rowNr = 0; rowNr < 216; rowNr++){
if (in.read_row(x0, y0, z0, x1, y1, z1))
{
x_0.at<float>(rowNr, 0) = x0;
x_0.at<float>(rowNr, 1) = y0;
x_0.at<float>(rowNr, 2) = z0;
x_1.at<float>(rowNr, 0) = x1;
x_1.at<float>(rowNr, 1) = y1;
x_1.at<float>(rowNr, 2) = z1;
}
}
std::cout << x_0(cv::Rect(0, 0, 2, 5)) << std::endl;
// The variable to solve for with its initial value. It will be
// mutated in place by the solver.
int numObservations = 216;
double axis[3] = { 0.0, 1.0, 0.0 };
double* pAxis; pAxis = axis;
double angles[4] = { 10.0, 10.0, 10.0, 10.0 };
double* pAngles; pAngles = angles;
double t[3] = { 0.0, 0.0, 1.0,};
double* pT; pT = t;
bool FLAGS_robustify = true;
// Build the problem.
Problem problem;
// Set up the only cost function (also known as residual). This uses
// auto-differentiation to obtain the derivative (jacobian).
for (int i = 0; i < numObservations; ++i) {
ceres::CostFunction* cost_function =
AxisRotationError::Create(
x_0.at<float>(i, 0), x_0.at<float>(i, 1), x_0.at<float>(i, 2),
x_1.at<float>(i, 0), x_1.at<float>(i, 1), x_1.at<float>(i, 2));
//std::cout << "pAngles: " << pAngles[i / 54] << ", " << i / 54 << std::endl;
ceres::LossFunction* loss_function = FLAGS_robustify ? new ceres::HuberLoss(0.001) : NULL;
//ceres::LossFunction* loss_function = FLAGS_robustify ? new ceres::CauchyLoss(0.002) : NULL;
problem.AddResidualBlock(cost_function, loss_function, pAxis, &pAngles[i/54], pT);
//problem.AddResidualBlock(cost_function, loss_function, pAxis, pT);
}
// Run the solver!
ceres::Solver::Options options;
options.linear_solver_type = ceres::DENSE_SCHUR;
//options.linear_solver_type = ceres::DENSE_QR;
options.minimizer_progress_to_stdout = true;
options.trust_region_strategy_type = ceres::LEVENBERG_MARQUARDT;
options.num_threads = 4;
options.use_nonmonotonic_steps = false;
ceres::Solver::Summary summary;
ceres::Solve(options, &problem, &summary);
//std::cout << summary.FullReport() << "\n";
std::cout << summary.BriefReport() << "\n";
// Normalize axis
double k = axis[0] * axis[0] + axis[1] * axis[1] + axis[2] * axis[2];
axis[0] = axis[0] / sqrt(k);
axis[1] = axis[1] / sqrt(k);
axis[2] = axis[2] / sqrt(k);
// Plot results
std::cout << "axis: [ " << axis[0] << "," << axis[1] << "," << axis[2] << " ]" << std::endl;
std::cout << "t: [ " << t[0] << "," << t[1] << "," << t[2] << " ]" << std::endl;
std::cout << "angles: [ " << angles[0] << "," << angles[1] << "," << angles[2] << "," << angles[3] << " ]" << std::endl;
return 0;
}
The result I've got:
iter cost cost_change |gradient| |step| tr_ratio tr_radius ls_iter iter_time total_time
0 3.632073e-003 0.00e+000 3.76e-002 0.00e+000 0.00e+000 1.00e+004 0 4.30e-004 7.57e-004
1 3.787837e-005 3.59e-003 2.10e-003 1.17e-001 1.92e+000 3.00e+004 1 7.43e-004 8.55e-003
2 3.756202e-005 3.16e-007 1.73e-003 5.49e-001 1.61e-001 2.29e+004 1 5.35e-004 1.13e-002
3 3.589147e-005 1.67e-006 2.90e-004 9.19e-002 9.77e-001 6.87e+004 1 5.96e-004 1.46e-002
4 3.584281e-005 4.87e-008 1.38e-005 2.70e-002 1.00e+000 2.06e+005 1 4.99e-004 1.73e-002
5 3.584268e-005 1.35e-010 1.02e-007 1.63e-003 1.01e+000 6.18e+005 1 6.32e-004 2.01e-002
Ceres Solver Report: Iterations: 6, Initial cost: 3.632073e-003, Final cost: 3.584268e-005, Termination: CONVERGENCE
axis: [ 0.00119037,0.999908,-0.0134817 ]
t: [ 0.0993185,-0.0080394,1.00236 ]
angles: [ 9.90614,9.94415,9.93216,10.1119 ]
The angles come out nicely at around 10 degrees. These can even be fixed in my case, as I know the rotation very accurately from my rotation stage. There is a small difference in t and the axis. This is caused by inaccuracies in my virtual stereo-camera simulation: my chessboard squares are not exactly square and the dimensions are also a little off.
My simulation scripts, images, results: blender_simulation.zip
What I am trying to do:
Make an empty 3D image (.dcm in this case) with image direction as
[1,0,0;
0,1,0;
0,0,1].
In this image, I insert an oblique trajectory, which essentially represents a cuboid. Now I wish to insert a hollow hemisphere in this cuboid (a cuboid of all-white pixels with a constant value; the hemisphere can be anything distinguishable), so that it is aligned along the axis of the trajectory.
What I am getting
So I used the general formula for a sphere:
x = x0 + r*cos(theta)*sin(alpha)
y = y0 + r*sin(theta)*sin(alpha)
z = z0 + r*cos(alpha)
where, 0 <= theta <= 2 * pi, 0 <= alpha <= pi / 2, for hemisphere.
What I tried in order to achieve this
So first I thought to just get the rotation matrix between the image coordinate system and the trajectory coordinate system and multiply all points on the sphere by it. This didn't give the desired result, as the rotated sphere was scaled and translated. I don't understand why this was happening, as I checked the points myself.
Then I thought: why not make a hemisphere out of a sphere, cut by a plane parallel to the y,z plane of the trajectory coordinate system? For this, I calculated the angle between the x, y and z axes of the image and those of the trajectory. Then I started to generate hemisphere coordinates for theta_rotated and alpha_rotated. This didn't work either: instead of a hemisphere, I was getting a rather weird sphere.
This is without any transformations
This is with the angle transformation (second try)
For reference,
The trajectory coordinate system :
[-0.4744, -0.0358506, -0.8553;
-0.7049, 0.613244, 0.3892;
-0.5273, -0.787537, 0.342;];
which gives angles:
x_axis angle 2.06508 pi
y_axis angle 2.2319 pi
z_axis angle 1.22175 pi
Code to generate the cuboid
// z_vector and y_vector are assumed to be defined elsewhere (e.g. as members or globals)
void getTrajectoryPoints(std::vector<Vector3d> &trajectoryPoints, Vector3d &target, Vector3d &tangent){
double distanceFromTarget = 10;
int targetShift = 4;
target -= z_vector;
target -= (tangent * targetShift);
Vector3d vector_x = -tangent;
y_vector = z_vector.cross(vector_x);
target -= y_vector;
Vector3d start = target - vector_x * distanceFromTarget;
std::cout << "target = " << target << "start = " << start << std::endl;
std::cout << "x " << vector_x << " y " << y_vector << " z " << z_vector << std::endl;
double height = 0.4;
while (height <= 1.6)
{
double width = 0.4;
while (width <= 1.6){
distanceFromTarget = 10;
while (distanceFromTarget >= 0){
Vector3d point = target + tangent * distanceFromTarget;
//std::cout << (point + (z_vector*height) - (y_vector * width)) << std::endl;
trajectoryPoints.push_back(point + (z_vector * height) + (y_vector * width));
distanceFromTarget -= 0.09;
}
width += 0.09;
}
height += 0.09;
}
}
The height and width are incremented with respect to the voxel spacing.
Do you guys know how to achieve this, and what am I doing wrong? Kindly let me know if you need any other info.
EDIT 1
After the answer from #Dzenan, I tried the following:
target = { -14.0783, -109.8260, -136.2490 }, tangent = { 0.4744, 0.7049, 0.5273 };
typedef itk::Euler3DTransform<double> TransformType;
TransformType::Pointer transform = TransformType::New();
double centerForTransformation[3];
const double pi = std::acos(-1);
try{
transform->SetRotation(2.0658*pi, 1.22175*pi, 2.2319*pi);
// transform->SetMatrix(transformMatrix);
}
catch (itk::ExceptionObject &excp){
std::cout << "Exception caught ! " << excp << std::endl;
transform->SetIdentity();
}
transform->SetCenter(centerForTransformation);
Then I loop over all the points in the hemisphere and transform them using:
point = transform->TransformPoint(point);
Although I'd prefer to pass the matrix equal to the trajectory coordinate system (mentioned above), that matrix isn't orthogonal and ITK won't accept it. It must be said that I used the same matrix for resampling this image and extracting the cuboid, and that was fine. Hence, I found the angles between x_image and x_trajectory, y_image and y_trajectory, and z_image and z_trajectory, and used SetRotation instead, which gives me the following (still incorrect) result:
EDIT 2
I tried to get the sphere coordinates without actually using polar coordinates. Following a discussion with #jodag, this is what I came up with:
Vector3d center = { -14.0783, -109.8260, -136.2490 };
height = 0.4;
while (height <= 1.6)
{
double width = 0.4;
while (width <= 1.6){
distanceFromTarget = 5;
while (distanceFromTarget >= 0){
// Make sure the point lies along the cuboid direction vectors
Vector3d point = center + tangent * distanceFromTarget + (z_vector * height) + (y_vector * width);
double x = std::sqrt((point[0] - center[0]) * (point[0] - center[0]) + (point[1] - center[1]) * (point[1] - center[1]) + (point[2] - center[2]) * (point[2] - center[2]));
if ((x <= 0.5) && (point[2] >= -136.2490 ))
orientation.push_back(point);
distanceFromTarget -= 0.09;
}
width += 0.09;
}
height += 0.09;
}
But this doesn't seem to work either.
This is the output
I'm a little confused about your first plot because it appears that the points being displayed are not defined in image coordinates. The example I'm posting below assumes that voxels must be part of the image coordinate system.
The code below transforms the voxel coordinates of the image space into the trajectory space by using an inverse transformation. It then rasterises a 2x2x2 cube centred around (0,0,0) and a radius-0.9 hemisphere sliced along the xy plane.
Rather than continuing a long discussion in the comments I've decided to post this. Please comment if you're looking for something different.
% define trajectory coordinate matrix
R = [-0.4744, -0.0358506, -0.8553;
-0.7049, 0.613244, 0.3892;
-0.5273, -0.787537, 0.342]
% initialize 50x50x50 3d image
[x,y,z] = meshgrid(linspace(-2,2,50));
sz = size(x);
x = reshape(x,1,[]);
y = reshape(y,1,[]);
z = reshape(z,1,[]);
r = ones(size(x));
g = ones(size(x));
b = ones(size(x));
blue = [0,0,1];
green = [0,1,0];
% transform image coordinates to trajectory coordinates
vtraj = R\[x;y;z];
xtraj = vtraj(1,:);
ytraj = vtraj(2,:);
ztraj = vtraj(3,:);
% rasterize 2x2x2 cube in trajectory coordinates
idx = (xtraj <= 1 & xtraj >= -1 & ytraj <= 1 & ytraj >= -1 & ztraj <= 1 & ztraj >= -1);
r(idx) = blue(1);
g(idx) = blue(2);
b(idx) = blue(3);
% rasterize radius 0.9 hemisphere in trajectory coordinates
idx = (sqrt(xtraj.^2 + ytraj.^2 + ztraj.^2) <= 0.9) & (ztraj >= 0);
r(idx) = green(1);
g(idx) = green(2);
b(idx) = green(3);
% plot just the blue and green voxels
green_idx = (r == green(1) & g == green(2) & b == green(3));
blue_idx = (r == blue(1) & g == blue(2) & b == blue(3));
figure(1); clf(1);
plot3(x(green_idx),y(green_idx),z(green_idx),' *g')
hold('on');
plot3(x(blue_idx),y(blue_idx),z(blue_idx),' *b')
axis([1,100,1,100,1,100]);
axis('equal');
axis('vis3d');
You can generate your hemisphere in some physical space, then transform it (translate and rotate) using e.g. RigidTransform's TransformPoint method. Then use the TransformPhysicalPointToIndex method of itk::Image. Finally, use the SetPixel method to change the intensity. Using this approach you will have to control the resolution of your hemisphere to fully cover all the voxels in the image.
An alternative approach is to construct a new image in which you create your hemisphere, then use the resample filter to create a transformed version of the hemisphere in an arbitrary image.
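For illustration, a minimal sketch of the first of the two approaches above (my own code, not Dzenan's; the ImageType alias, function name and sampling step are assumptions): sample the hemisphere in its own physical frame, push every sample through the rigid transform, and set the voxel it lands in.
#include "itkImage.h"
#include "itkEuler3DTransform.h"
#include <cmath>

using ImageType = itk::Image<short, 3>;

void paintHemisphere(ImageType::Pointer image,
                     itk::Euler3DTransform<double>::Pointer transform,
                     double radius, short value)
{
    const double pi = std::acos(-1.0);
    // Angular step chosen so the arc length between samples is about half a voxel,
    // so the surface is covered without holes.
    const double step = 0.5 * image->GetSpacing()[0] / radius;

    for (double alpha = 0.0; alpha <= 0.5 * pi; alpha += step)
        for (double theta = 0.0; theta < 2.0 * pi; theta += step)
        {
            itk::Point<double, 3> p;
            p[0] = radius * std::cos(theta) * std::sin(alpha);
            p[1] = radius * std::sin(theta) * std::sin(alpha);
            p[2] = radius * std::cos(alpha);

            p = transform->TransformPoint(p);                  // rotate + translate into the trajectory pose

            ImageType::IndexType idx;
            if (image->TransformPhysicalPointToIndex(p, idx))  // true if the point falls inside the image
                image->SetPixel(idx, value);
        }
}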
I have image A and I want to get the bird's-eye view of image A. So I used the getPerspectiveTransform method to get the transform matrix. The output is a 3x3 matrix. See my code. In my case I want to know the scale factor of the 3x3 matrix. I have looked at the OpenCV documentation, but I cannot find details of the transform matrix and I don't know how to get the scale. I have also read some papers; they say we can get scaling, shearing and rotation from a11, a12, a21, a22. See the picture. So how can I get the scale factor? Can you give me some advice? And can you explain the getPerspectiveTransform output matrix? Thank you!
Points[0] = Point2f(..., ...);
Points[1] = Point2f(..., ...);
Points[2] = Point2f(..., ...);
Points[3] = Point2f(..., ...);
dst[0] = Point2f(..., ...);
dst[1] = Point2f(..., ...);
dst[2] = Point2f(..., ...);
dst[3] = Point2f(..., ...);
Mat trans = getPerspectiveTransform(Points, dst); // I want to know the scale of trans
warpPerspective(A, B, trans, A.size());
When I change the camera position, the trapezium size and position change. Currently we map it to a rectangle of known width/height, but I think that for a camera at a different height the rectangle size should change, because mapping to a rectangle of the same size gives a different level of detail. That's why I want to know the scale from the 3x3 transform matrix. For example, if trapezium 1 and trapezium 2 have transform scales s1 and s2, we can set rectangle1(width, height) = s2/s1 * rectangle2(width, height).
Ok, here you go:
H is the homography
H = T*R*S*L with
T = [1,0,tx; 0,1,ty; 0,0,1]
R = [cos(a),sin(a),0; -sin(a),cos(a),0; 0,0,1]
S = [sx,shear,0; 0,sy,0; 0,0,1]
L = [1,0,0; 0,1,0; lx,ly,1]
where tx/ty is the translation, a is the rotation angle, sx/sy is the scale, shear is the shearing factor, and lx/ly are the perspective foreshortening parameters.
You want to compute sx and sy if I understood right.
Now if lx and ly were both 0, it would be easy to compute sx and sy: decompose the upper-left 2x2 part of H with a QR decomposition, giving Q*R, where Q is an orthogonal matrix (= rotation matrix) and R is an upper triangular matrix ([sx, shear; 0, sy]).
h1 h2 h3
h4 h5 h6
0 0 1
=> Q*R = [h1,h2; h4,h5]
But lx and ly spoil that easy way. So you have to find out how the upper-left part of the matrix would look without the influence of lx and ly.
If your whole homography is:
h1 h2 h3
h4 h5 h6
h7 h8 1
then you'll have:
Q*R =
h1-(h7*h3) h2-(h8*h3)
h4-(h7*h6) h5-(h8*h6)
So if you compute Q and R from this matrix, you can compute rotation, scale and shear easily.
I've tested this with a small C++ program:
double scaleX = (rand()%200) / 100.0;
double scaleY = (rand()%200) / 100.0;
double shear = (rand()%100) / 100.0;
double rotation = CV_PI*(rand()%360)/180.0;
double transX = rand()%100 - 50;
double transY = rand()%100 - 50;
double perspectiveX = (rand()%100) / 1000.0;
double perspectiveY = (rand()%100) / 1000.0;
std::cout << "scale: " << "(" << scaleX << "," << scaleY << ")" << "\n";
std::cout << "shear: " << shear << "\n";
std::cout << "rotation: " << rotation*180/CV_PI << " degrees" << "\n";
std::cout << "translation: " << "(" << transX << "," << transY << ")" << std::endl;
cv::Mat ScaleShearMat = (cv::Mat_<double>(3,3) << scaleX, shear, 0, 0, scaleY, 0, 0, 0, 1);
cv::Mat RotationMat = (cv::Mat_<double>(3,3) << cos(rotation), sin(rotation), 0, -sin(rotation), cos(rotation), 0, 0, 0, 1);
cv::Mat TranslationMat = (cv::Mat_<double>(3,3) << 1, 0, transX, 0, 1, transY, 0, 0, 1);
cv::Mat PerspectiveMat = (cv::Mat_<double>(3,3) << 1, 0, 0, 0, 1, 0, perspectiveX, perspectiveY, 1);
cv::Mat HomographyMatWithoutPerspective = TranslationMat * RotationMat * ScaleShearMat;
cv::Mat HomographyMat = HomographyMatWithoutPerspective * PerspectiveMat;
std::cout << "Homography:\n" << HomographyMat << std::endl;
cv::Mat DecomposedRotaScaleShear(2,2,CV_64FC1);
DecomposedRotaScaleShear.at<double>(0,0) = HomographyMat.at<double>(0,0) - (HomographyMat.at<double>(2,0)*HomographyMat.at<double>(0,2));
DecomposedRotaScaleShear.at<double>(0,1) = HomographyMat.at<double>(0,1) - (HomographyMat.at<double>(2,1)*HomographyMat.at<double>(0,2));
DecomposedRotaScaleShear.at<double>(1,0) = HomographyMat.at<double>(1,0) - (HomographyMat.at<double>(2,0)*HomographyMat.at<double>(1,2));
DecomposedRotaScaleShear.at<double>(1,1) = HomographyMat.at<double>(1,1) - (HomographyMat.at<double>(2,1)*HomographyMat.at<double>(1,2));
std::cout << "Decomposed submat: \n" << DecomposedRotaScaleShear << std::endl;
Now you can test the result by using the QR matrix decomposition of http://www.bluebit.gr/matrix-calculator/
First you can try setting perspectiveX and perspectiveY to zero. You'll see that you can decompose the upper-left part of the matrix into the input values of rotation angle, shear and scale.
But if you don't set perspectiveX and perspectiveY to zero, you can take the "DecomposedRotaScaleShear" matrix and decompose it into Q*R.
You'll get a result page with
Q:
a a
-a a
here you can compute acos(a) to get the angle
R:
sx shear
0 sy
here you can read sx and sy directly.
Hope this helps and I hope there is no error ;)
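As a small illustration of the QR step (my own addition, not part of the answer's test program): the 2x2 compensated matrix can be decomposed directly with a single Givens rotation, using the same R = [cos(a), sin(a); -sin(a), cos(a)] convention as above and assuming positive scales. Here the matrix is built from known values so the recovered numbers can be checked:
#include <cmath>
#include <iostream>

int main()
{
    // Build a test matrix M = R_rot * ScaleShear from known values.
    const double pi = 3.14159265358979;
    const double angle = 30.0 * pi / 180.0, sx = 1.5, sy = 0.7, shear = 0.25;
    const double m00 =  std::cos(angle) * sx;
    const double m01 =  std::cos(angle) * shear + std::sin(angle) * sy;
    const double m10 = -std::sin(angle) * sx;
    const double m11 = -std::sin(angle) * shear + std::cos(angle) * sy;

    // In practice m00..m11 would be the entries of DecomposedRotaScaleShear.
    // Choose a so that Q^T * M is upper triangular, then read off scale and shear.
    const double a        = std::atan2(-m10, m00);
    const double sxOut    = std::cos(a) * m00 - std::sin(a) * m10;
    const double shearOut = std::cos(a) * m01 - std::sin(a) * m11;
    const double syOut    = std::sin(a) * m01 + std::cos(a) * m11;

    std::cout << "angle: " << a * 180.0 / pi << " degrees\n"
              << "sx: " << sxOut << "  sy: " << syOut << "  shear: " << shearOut << "\n";
    return 0;
}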
I have to draw an arrow. I have a head point and a tail point, and now I need to draw a triangular arrow cap: a triangle whose side length is 5. How can I find the coordinates of the end points of the triangle? One thought is that we have an angle of 45 degrees, so we could rotate the vector by 45 degrees to obtain them.
int x1=arrowStart.X;
int y1=arrowStart.Y;
int x2=arrowend.X;
int y2=arrowend.Y;
PointF arrowPoint=arrowend;
double arrowlength=sqrt(pow(x1-x2,2)+pow(y1-y2,2));
int ArrowMultiplier=1;
double arrowangle=atan2(y1-y2,x1-x2);
double pointx,pointy;
if(x1>x2)
{
pointx=x1 - (cos(arrowangle) * (arrowlength-3 * ArrowMultiplier ));
}
else
{
pointx = cos(arrowangle) * (arrowlength-3 * ArrowMultiplier ) + x1;
}
if (y1 > y2)
{
pointy = y1 - (sin(arrowangle) * (arrowlength -3 * ArrowMultiplier));
}
else
{
pointy = (sin(arrowangle) * (arrowlength-3 * ArrowMultiplier )) + y1;
}
PointF arrowPointBack(pointx,pointy);
double angleB = atan2((3 * ArrowMultiplier), (arrowlength - (3 * ArrowMultiplier)));
double angleC = (3.14) * (90 - (arrowangle * (180 /3.14)) - (angleB * (180 / 3.14))) / 180;
double secondaryLength = (3 * ArrowMultiplier)/sin(angleB);
if (x1 > x2)
{
pointx = x1 - (sin(angleC) * secondaryLength);
}
else
{
pointx = (sin(angleC) * secondaryLength) + x1;
}
if (y1 > y2)
{
pointy = y1 - (cos(angleC) * secondaryLength);
}
else
{
pointy = (cos(angleC) * secondaryLength) + y1;
}
PointF arrowPointLeft((float)pointx, (float)pointy);
angleC = arrowangle - angleB;
if (x1 > x2)
{
pointx = x1 - (cos(angleC) * secondaryLength);
}
else
{
pointx = (cos(angleC) * secondaryLength) +x1;
}
if (y1 > y2)
{
pointy =y1 - (sin(angleC) * secondaryLength);
}
else
{
pointy = (sin(angleC) * secondaryLength) + y1;
}
PointF arrowPointRight((float)pointx,(float)pointy);
PointF arrowPoints[4];
arrowPoints[0] = arrowPoint;
arrowPoints[1] = arrowPointLeft;
//arrowPoints[2] = arrowPointBack;
arrowPoints[2] = arrowPointRight;
arrowPoints[3] = arrowPoint;
Right, I suppose I should break it down for you:
First, you need to calculate the angle that the arrow sits at. This can be achieved with the inverse tangent function:
atan2(diff_y, diff_x)
where diff_y and diff_x are the differences between the y and x values, respectively, of your two end-points.
You can then add the desired angle of the arrow-head to this angle and use sin and cos to calculate the x and y values of the first of the extra points of the arrow-head.
new_x = head_x - 5 * cos (angle + pi/4)
new_y = head_y + 5 * sin (angle + pi/4)
For the other point, you do the same, but subtracting the angle offset instead:
new_x = head_x - 5 * cos (angle - pi/4)
new_y = head_y + 5 * sin (angle - pi/4)
You then have all the points you need.
I did this for fun (sue me, I was bored) and came up with this:
#include <math.h>
#include <utility>
#include <iostream>
const double arrow_head_length = 3;
const double PI = 3.14159265;
const double arrow_head_angle = PI/6;
//returns the angle between two points, with coordinate1 describing the centre of the circle, with the angle progressing clockwise
double angle_between_points( std::pair<double,double> coordinate1, std::pair<double,double> coordinate2)
{
return atan2(coordinate2.second - coordinate1.second, coordinate1.first - coordinate2.first);
}
//calculate the position of a new point [displacement] away from an original point at an angle of [angle]
std::pair<double,double> displacement_angle_offset(std::pair<double,double> coordinate_base, double displacement, double angle)
{
return std::make_pair
(
coordinate_base.first - displacement * cos(angle),
coordinate_base.second + displacement * sin(angle)
);
}
int main()
{
std::pair<double,double> arrow_tail( 0, 0);
std::pair<double,double> arrow_head( 15,-15);
//find the angle of the arrow
double angle = angle_between_points(arrow_head, arrow_tail);
//calculate the new positions
std::pair<double,double> head_point_1 = displacement_angle_offset(arrow_head, arrow_head_length, angle + arrow_head_angle);
std::pair<double,double> head_point_2 = displacement_angle_offset(arrow_head, arrow_head_length, angle - arrow_head_angle);
//output the points in order: tail->head->point1->point2->head so if you follow them it draws the arrow
std::cout << arrow_tail.first << ',' << arrow_tail.second << '\n'
<< arrow_head.first << ',' << arrow_head.second << '\n'
<< head_point_1.first << ',' << head_point_1.second << '\n'
<< head_point_2.first << ',' << head_point_2.second << '\n'
<< arrow_head.first << ',' << arrow_head.second << std::endl;
}
The output can be saved as a .csv and loaded into Excel, for example, where you can use it to draw a connected scatter graph that will form the shape of the arrow.
If this is homework, then before you do anything with it, make sure you know exactly how it works. That includes knowing the answers to questions like:
when calculating the angle, why does the code do point2_y-point1_y but point1_x-point2_x?
what direction is angle 0?
why does the angle increase going clockwise and not anti-clockwise?
why are there 5 outputs when only 4 points are needed?
what is the significance of PI/6 in the code? It isn't == 45 degrees. Why would this angle be better?
Also note that this question and answer will now pop up in a google search.
Working example: http://ideone.com/D4IwOy
You can paste the output into any graphing tool (such as this one) or save it as a .csv and open it in Excel or the spreadsheet of your choice and plot a scatter graph to see the arrow coordinates. Note that it (annoyingly) doesn't keep the x and y scales equal, so it will stretch arrows like this one:
3,7
24,15
21.0381,15.4768
22.1061,12.6734
24,15
What is the best way to draw a variable width line without using glLineWidth?
Just draw a rectangle?
Various parallel lines?
None of the above?
You can draw two triangles:
// Draws a line between (x1,y1) - (x2,y2) with a start thickness of t1 and
// end thickness t2.
void DrawLine(float x1, float y1, float x2, float y2, float t1, float t2)
{
float angle = atan2(y2 - y1, x2 - x1);
float t2sina1 = t1 / 2 * sin(angle);
float t2cosa1 = t1 / 2 * cos(angle);
float t2sina2 = t2 / 2 * sin(angle);
float t2cosa2 = t2 / 2 * cos(angle);
glBegin(GL_TRIANGLES);
glVertex2f(x1 + t2sina1, y1 - t2cosa1);
glVertex2f(x2 + t2sina2, y2 - t2cosa2);
glVertex2f(x2 - t2sina2, y2 + t2cosa2);
glVertex2f(x2 - t2sina2, y2 + t2cosa2);
glVertex2f(x1 - t2sina1, y1 + t2cosa1);
glVertex2f(x1 + t2sina1, y1 - t2cosa1);
glEnd();
}
Ok, how about this: (Ozgar)
A
/ \
/ \
. p1 \
/ \
/ D
B - .p2
- - - C
So AB is width1 and CD is width2.
Then,
// find line between p1 and p2
Vector p1p2 = p2 - p1 ;
// find a perpendicular
Vector perp = p1p2.perpendicular().normalize()
// Walk from p1 to A
Vector A = p1 + perp*(width1/2)
Vector B = p1 - perp*(width1/2)
Vector C = p2 - perp*(width2/2)
Vector D = p2 + perp*(width2/2)
// wind triangles
Triangle( A, B, D )
Triangle( B, D, C )
Note there's potentially a CW/CCW winding problem with this algorithm -- if perp is computed as (-y, x) in the above diagram then it will be CCW winding, if (y, -x) then it will be a CW winding.
I've had to do the same thing earlier today.
For creating a line that spans (x1,y1) -> (x2,y2) of a given width, a very easy method is to transform a simple unit-sized square spanning (0., -0.5) -> (1., 0.5) using:
glTranslatef(...) to move it to your desired (x1,y1) location;
glRotatef(...) to angle it to the right orientation: use angle = atan2(y2-y1, x2-x1);
glScalef(...) to scale it to the right length and desired width: use length = sqrt( (x2-x1)^2 + (y2-y1)^2 ) or any other low-complexity approximation.
(Issue the calls in this order: since the last call issued is applied to the vertices first, the unit square is scaled, then rotated, then translated.)
The unit square is very simply created from a two-triangle GL_TRIANGLE_STRIP, which turns into your solid line after the above transformations.
The burden here is placed primarily on OpenGL (and your graphics hardware) rather than on your application code. The procedure above is turned very easily into a generic function by surrounding it with glPushMatrix() and glPopMatrix() calls.
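For illustration, a minimal sketch of that approach (my own; the function name is an assumption and the GL header path varies by platform) using legacy fixed-function OpenGL:
#include <cmath>
#include <GL/gl.h>

void drawThickLine(float x1, float y1, float x2, float y2, float width)
{
    const float length = std::sqrt((x2 - x1) * (x2 - x1) + (y2 - y1) * (y2 - y1));
    const float angle  = std::atan2(y2 - y1, x2 - x1) * 180.0f / 3.14159265f;  // glRotatef wants degrees

    glPushMatrix();
    glTranslatef(x1, y1, 0.0f);          // move to the line start
    glRotatef(angle, 0.0f, 0.0f, 1.0f);  // orient along the line
    glScalef(length, width, 1.0f);       // stretch the unit square to length x width

    // unit square spanning (0,-0.5) -> (1,0.5) as a two-triangle strip
    glBegin(GL_TRIANGLE_STRIP);
    glVertex2f(0.0f, -0.5f);
    glVertex2f(1.0f, -0.5f);
    glVertex2f(0.0f,  0.5f);
    glVertex2f(1.0f,  0.5f);
    glEnd();

    glPopMatrix();
}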
For those coming here looking for a good solution to this: the following code is written using LWJGL, but can easily be adapted to any implementation of OpenGL.
import java.awt.Color;
import org.lwjgl.opengl.GL11;
import org.lwjgl.util.vector.Vector2f;
public static void DrawThickLine(int startScreenX, int startScreenY, int endScreenX, int endScreenY, Color color, float alpha, float width) {
Vector2f start = new Vector2f(startScreenX, startScreenY);
Vector2f end = new Vector2f(endScreenX, endScreenY);
float dx = startScreenX - endScreenX;
float dy = startScreenY - endScreenY;
Vector2f rightSide = new Vector2f(dy, -dx);
if (rightSide.length() > 0) {
rightSide.normalise();
rightSide.scale(width / 2);
}
Vector2f leftSide = new Vector2f(-dy, dx);
if (leftSide.length() > 0) {
leftSide.normalise();
leftSide.scale(width / 2);
}
Vector2f one = new Vector2f();
Vector2f.add(leftSide, start, one);
Vector2f two = new Vector2f();
Vector2f.add(rightSide, start, two);
Vector2f three = new Vector2f();
Vector2f.add(rightSide, end, three);
Vector2f four = new Vector2f();
Vector2f.add(leftSide, end, four);
GL11.glBegin(GL11.GL_QUADS);
GL11.glColor4f(color.getRed() / 255f, color.getGreen() / 255f, color.getBlue() / 255f, alpha);
GL11.glVertex3f(one.x, one.y, 0);
GL11.glVertex3f(two.x, two.y, 0);
GL11.glVertex3f(three.x, three.y, 0);
GL11.glVertex3f(four.x, four.y, 0);
GL11.glColor4f(1, 1, 1, 1);
GL11.glEnd();
}
Assume your original points are (x1,y1) -> (x2,y2). Use the points (x1-width/2, y1), (x1+width/2, y1), (x2-width/2, y2), (x2+width/2, y2) to construct a rectangle and then use quads/tris to draw it. This is the simple naive way. Note that for large line widths you'll get odd endpoint behaviour. What you really want to do then is a smarter parallel-line calculation (which shouldn't be that bad) using vector math; dot/cross products and vector projection come to mind.
A rectangle (i.e. GL_QUAD or two GL_TRIANGLES) sounds like your best bet; I can't think of any other way.
Another way to do this, if you happen to be writing a software rasterizer, is to use barycentric coordinates in your pixel-colouring stage and colour pixels when one of their barycentric coordinates is near 0. The more of an allowance you make, the thicker the lines will be.
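For illustration, a minimal sketch of that barycentric test (my own; names are made up, and the thickness allowance is expressed in barycentric units rather than pixels):
#include <algorithm>
#include <cmath>

struct Vec2 { float x, y; };

// Twice the signed area of triangle (a,b,c); ratios of these give barycentric coordinates.
static float edge(const Vec2& a, const Vec2& b, const Vec2& c)
{
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// True if pixel p should be coloured as part of the triangle's outline.
bool onTriangleEdge(const Vec2& p, const Vec2& a, const Vec2& b, const Vec2& c,
                    float allowance)   // larger allowance -> thicker lines
{
    const float area = edge(a, b, c);
    if (area == 0.0f) return false;                 // degenerate triangle
    const float w0 = edge(b, c, p) / area;          // barycentric coordinates of p
    const float w1 = edge(c, a, p) / area;
    const float w2 = edge(a, b, p) / area;
    const bool inside = w0 >= 0.0f && w1 >= 0.0f && w2 >= 0.0f;
    return inside && std::min({w0, w1, w2}) < allowance;   // near an edge when one coordinate ~ 0
}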