I have a number of point clouds taken from a Kinect-like instrument that is mounted on a tripod and then rotated. How do I determine the rotation axis accurately? I'm using C++ with PCL and Eigen.
I can match the point clouds together with ICP and run a global registration (SLAM or ELCH) to get a combined point cloud, but for a number of reasons I would like to determine the axis accurately and force the registrations to respect this rotation.
One issue related to this problem is the origin of my instrument. I can measure the distance to the rotation axis from the external dimensions of the device fairly accurately, but I don't know exactly where the origin is in relation to the extremities of the device. Solving this issue could help me locate the origin too.
There are two methods that I'm considering.
One is to take the transformation matrices of the registered point clouds and extract the translation vectors, which represent the locations to which each transformation would move the internal origin. I could then try to fit a circle to the set of points acquired this way; the center would give a vector from the origin to the rotation axis, and the normal of the circle would give the direction of the axis.
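For reference, a minimal Eigen sketch of this first idea (the registered poses are assumed to be available as Eigen::Isometry3d; the algebraic Kasa circle fit and all names are purely illustrative):

#include <Eigen/Dense>
#include <vector>

struct AxisFit { Eigen::Vector3d point_on_axis; Eigen::Vector3d direction; };

// Fit the rotation axis from the translation parts of the registered poses:
// an SVD plane fit gives the axis direction, then an algebraic circle fit
// in that plane gives a point on the axis.
AxisFit fitAxisFromTranslations(const std::vector<Eigen::Vector3d>& t)
{
    Eigen::Vector3d centroid = Eigen::Vector3d::Zero();
    for (const auto& p : t) centroid += p;
    centroid /= static_cast<double>(t.size());

    Eigen::MatrixXd centered(t.size(), 3);
    for (std::size_t i = 0; i < t.size(); ++i)
        centered.row(i) = (t[i] - centroid).transpose();

    Eigen::JacobiSVD<Eigen::MatrixXd> svd(centered, Eigen::ComputeThinV);
    const Eigen::Vector3d u = svd.matrixV().col(0);      // in-plane basis
    const Eigen::Vector3d w = svd.matrixV().col(1);
    const Eigen::Vector3d normal = svd.matrixV().col(2); // plane normal = axis direction

    // Kasa circle fit in plane coordinates: x^2 + y^2 = 2*a*x + 2*b*y + c
    Eigen::MatrixXd A(t.size(), 3);
    Eigen::VectorXd rhs(t.size());
    for (std::size_t i = 0; i < t.size(); ++i) {
        const Eigen::Vector3d d = t[i] - centroid;
        const double x = d.dot(u), y = d.dot(w);
        A.row(i) << 2.0 * x, 2.0 * y, 1.0;
        rhs(i) = x * x + y * y;
    }
    const Eigen::Vector3d sol = A.colPivHouseholderQr().solve(rhs);

    AxisFit fit;
    fit.point_on_axis = centroid + sol(0) * u + sol(1) * w; // circle centre in 3D
    fit.direction = normal.normalized();
    return fit;
}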
The other option is to determine the rotation axis directly from any single rotation matrix, but the vector to the rotation axis seems volatile.
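A rough sketch of the second option, again with Eigen: the axis direction comes from the angle-axis form of R, and a point on the axis from solving (I - R) c = t, which is rank-deficient along the axis; that near-singularity is presumably why the estimate feels volatile, especially for small rotation steps.

#include <Eigen/Dense>

// Axis direction and a point on the axis from a single relative transform T;
// for a pure rotation about an axis through c, the translation is t = (I - R) * c.
void axisFromSingleTransform(const Eigen::Isometry3d& T,
                             Eigen::Vector3d& axis, Eigen::Vector3d& point)
{
    const Eigen::Matrix3d R = T.rotation();
    const Eigen::Vector3d t = T.translation();

    axis = Eigen::AngleAxisd(R).axis();

    // (I - R) is singular in the axis direction, so use an SVD-based
    // least-squares solve and treat the near-zero singular value as zero.
    Eigen::JacobiSVD<Eigen::Matrix3d> svd(Eigen::Matrix3d::Identity() - R,
                                          Eigen::ComputeFullU | Eigen::ComputeFullV);
    svd.setThreshold(1e-6);
    point = svd.solve(t); // minimum-norm solution: the point on the axis closest to the origin
}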
Any better solutions or insights on the issue?
You need to calculate the major axis of the tensor of inertia: https://en.m.wikipedia.org/wiki/Moment_of_inertia.
All points can be considered to have the same mass. Then you can use the Steiner (parallel axis theorem) approach.
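For what it's worth, a minimal Eigen sketch of the suggested computation, treating every point as unit mass (the names are illustrative):

#include <Eigen/Dense>
#include <vector>

// Inertia tensor of equal-mass points about their centre of mass;
// the eigenvectors of that tensor are the principal axes.
Eigen::Matrix3d principalAxes(const std::vector<Eigen::Vector3d>& pts)
{
    Eigen::Vector3d com = Eigen::Vector3d::Zero();
    for (const auto& p : pts) com += p;
    com /= static_cast<double>(pts.size());

    Eigen::Matrix3d I = Eigen::Matrix3d::Zero();
    for (const auto& p : pts) {
        const Eigen::Vector3d r = p - com;
        I += r.squaredNorm() * Eigen::Matrix3d::Identity() - r * r.transpose();
    }

    Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> es(I);
    return es.eigenvectors(); // columns = principal axes, sorted by increasing eigenvalue
}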
THIS IS NOT AN ANSWER TO THE ORIGINAL QUESTION BUT CLARIFICATION TO A DISCUSSION ABOUT DEFINING THE ROTATION AXIS BETWEEN TWO POSES
Deepfreeze, here is an Octave script to demonstrate what we discussed in the chat. I know it might be bad practice to post an answer without actually presenting one, but I hope this will give you some insight into what I was trying to explain about the relationship between the translation vector and the point on the rotation axis (t_i in your notation).
rotation_axis_unit = [0,0,1]' % parallel to z axis
angle = 1 /180 *pi() % one degree
% a rotation matrix for a one degree rotation
R = [cos(angle) -sin(angle) 0 ; sin(angle) cos(angle) 0 ; 0 0 1 ];
% the point around which to rotate
axis_point = [-10,0,0]' % a point
% just a point used to demonstrate that all points form a circular path
test_point = [10,5,0]';
%containers for plotting
path_origin = zeros(360,3);
path_test_point = zeros(360,3);
path_v = zeros(360,3);
origin = [0,0,0]';
% updating rotation matrix
R_i = R;
M1 = [R, R * -axis_point + axis_point; [0,0,0,1]];
% go around a full circle
for i=1:360
% v = the last column of M. Created from axis_point.
% -axis_point is the vector from axis_point to origin which is being rotated
% then a correction is applied to center it around the axis point
v = R_i * -axis_point + axis_point;
% building 4x4 transformation matrix
M = [R_i, v;[0,0,0,1]];
% M could also be built M_i = M1 * M_i, rotating the previous M by one degree around axis_point
% rotating the test point and saving it
test_point_i = M * [test_point;1];
path_test_point(i,:) = test_point_i(1:3,1)';
% saving the translation part of M
path_v(i,:) = v';
% rotating origin point and saving it
origin_i = M * [origin;1];
path_origin(i,:) = origin_i(1:3,1)';
R_i = R * R_i ;
end
figure(1)
% plot test point path, just to show it forms a circular path, center and axis_point
scatter3(path_test_point(:,1), path_test_point(:,2), path_test_point(:,3), 5,'r')
hold on
% plotting origin path, circular, center at axis_point
scatter3(path_origin(:,1), path_origin(:,2), path_origin(:,3), 7,'r')
hold on
% plotting translation vectors, identical to origin path, (if invisible rotate so that you are watching it from z axis direction)
scatter3(path_v(:,1), path_v(:,2), path_v(:,3), 1, 'black');
hold on
% plots for visual analysis
scatter3(0,0,0,5,'b') % origin
hold on
scatter3(axis_point(1), axis_point(2), axis_point(3), 5, 'g') % axis point, center of the circles
hold on
scatter3(test_point(1), test_point(2), test_point(3), 5, 'black') % test point origin
hold off
% what does this demonstrate?
% it shows the relationship between a 4x4
% transformation matrix and axis-angle notation plus a point on the rotation axis
% M = [ R_i, v; [0,0,0,1]] = [ R_i, R_i * -c + c; [0,0,0,1]]
%
% where c equals axis_point ( = perpendicular vector from origin to the axis of rotation )
% pay attention to path_v and path_origin
% they are identical
% path_v was extracted from the 4x4 transformation matrix M
% and path_origin was created by rotating the origin point by M
% --> v = R_i * -c + c
% Notice that since M describes a rotation of alpha angles around an
% axis that goes through c
% and its translation vector lies on a circle whose center
% is on the rotation axis and whose radius is the distance from that
% point to origin ->
%
% M * M will describe a rotation of 2 x alpha angles around the same axis
% Therefore you can easily create more points that lie on that circle
% by multiplying M by itself and extracting the translation vector
%
% c can then be solved by normal circle fit algorithms.
%------------------------------------------------------------
% CAUTION !!!
% this applies perfectly when the transformation matrices have been created so
% that the translation is perfectly orthogonal to the rotation axis
% in real world matrices the translation will not be orthogonal
% therefore the points will not travel on a circular path but on a helix and this needs to be
% dealt with when solving the center of rotation.
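In C++ with Eigen (the setting of the question), the trick described in the comments above might look roughly like this; an untested, illustrative helper:

#include <Eigen/Dense>
#include <vector>

// Collect the translation parts of successive powers of the relative transform M.
// As demonstrated above, they lie on a circle centred at the axis point
// (up to the helix caveat), so a circle fit then recovers that point.
std::vector<Eigen::Vector3d> translationsFromPowers(const Eigen::Isometry3d& M, int n)
{
    std::vector<Eigen::Vector3d> t;
    Eigen::Isometry3d Mi = M;
    for (int i = 0; i < n; ++i) {
        t.push_back(Mi.translation());
        Mi = M * Mi; // one more step of the same rotation around the same axis
    }
    return t;
}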
An option is to place a chessboard at ~1 m. Use the Kinect camera to take images at different rotations where the whole chessboard is still visible. Fit the chessboard using OpenCV.
The goal is to find the xyz coordinates of the chessboard for the different orientations. Use your camera API functions to determine the xyz coordinates of the chessboard, or do the following:
Determine the camera intrinsics of camera 1 and camera 2 (use both the color and IR images for the Kinect).
Determine camera extrinsics (camera 2 [R,t] wrt camera 1)
Use the values to calculate projection matrices
Use the projection matrices to triangulate the points of the chessboard to get coordinates in [X,Y,Z] wrt camera1 coordinate system.
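A rough OpenCV sketch of the last two steps (the calibration results K1, D1, K2, D2, R, t are assumed to come from something like cv::stereoCalibrate; the function name and layout are mine):

#include <opencv2/opencv.hpp>
#include <vector>

// Triangulate the chessboard corners seen by two calibrated cameras into
// the coordinate system of camera 1.
cv::Mat triangulateChessboard(const cv::Mat& img1, const cv::Mat& img2,
                              const cv::Mat& K1, const cv::Mat& D1,
                              const cv::Mat& K2, const cv::Mat& D2,
                              const cv::Mat& R, const cv::Mat& t,
                              cv::Size patternSize)
{
    std::vector<cv::Point2f> c1, c2;
    cv::findChessboardCorners(img1, patternSize, c1);
    cv::findChessboardCorners(img2, patternSize, c2);

    // Undistort to normalized image coordinates so the projection matrices
    // below can be written without the intrinsics.
    std::vector<cv::Point2f> n1, n2;
    cv::undistortPoints(c1, n1, K1, D1);
    cv::undistortPoints(c2, n2, K2, D2);

    // Camera 1: [I | 0], camera 2: [R | t]
    cv::Mat P1 = cv::Mat::eye(3, 4, CV_64F);
    cv::Mat P2(3, 4, CV_64F);
    R.copyTo(P2(cv::Rect(0, 0, 3, 3)));
    t.copyTo(P2(cv::Rect(3, 0, 1, 3)));

    cv::Mat pts4d, pts3d;
    cv::triangulatePoints(P1, P2, n1, n2, pts4d);       // homogeneous result
    cv::convertPointsFromHomogeneous(pts4d.t(), pts3d); // [X,Y,Z] wrt camera 1
    return pts3d;
}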
Each group of chessboard points is called [x_i]. Now we can write the equation:
x_1 = R(axis, angle) * (x_0 - t) + t
where R(axis, angle) is the rotation around the unknown axis and t is a point on that axis.
Update:
This equation can be solved with a non-linear solver; I used Ceres Solver.
#include "ceres/ceres.h"
#include "ceres/rotation.h"
#include "glog/logging.h"
#include "opencv2/opencv.hpp"
#include "csv.h"
#include "Eigen/Eigen"
using ceres::AutoDiffCostFunction;
using ceres::CostFunction;
using ceres::Problem;
using ceres::Solver;
using ceres::Solve;
struct AxisRotationError {
AxisRotationError(double observed_x0, double observed_y0, double observed_z0, double observed_x1, double observed_y1, double observed_z1)
: observed_x0(observed_x0), observed_y0(observed_y0), observed_z0(observed_z0), observed_x1(observed_x1), observed_y1(observed_y1), observed_z1(observed_z1) {}
template <typename T>
bool operator()(const T* const axis, const T* const angle, const T* const trans, T* residuals) const {
//bool operator()(const T* const axis, const T* const trans, T* residuals) const {
// Normalize axis
T a[3];
T k = axis[0] * axis[0] + axis[1] * axis[1] + axis[2] * axis[2];
a[0] = axis[0] / sqrt(k);
a[1] = axis[1] / sqrt(k);
a[2] = axis[2] / sqrt(k);
// Define quaternion from axis and angle. Convert angle to radians
T pi = T(3.14159265359);
//T angle[1] = {T(10.0)};
T quaternion[4] = { cos((angle[0]*pi / 180.0) / 2.0),
a[0] * sin((angle[0] * pi / 180.0) / 2.0),
a[1] * sin((angle[0] * pi / 180.0) / 2.0),
a[2] * sin((angle[0] * pi / 180.0) / 2.0) };
// Define transformation
T t[3] = { trans[0], trans[1], trans[2] };
// Calculate predicted positions
T observedPoint0[3] = { T(observed_x0), T(observed_y0), T(observed_z0)};
T point[3]; point[0] = observedPoint0[0] - t[0]; point[1] = observedPoint0[1] - t[1]; point[2] = observedPoint0[2] - t[2];
T rotatedPoint[3];
ceres::QuaternionRotatePoint(quaternion, point, rotatedPoint);
T predicted_x = rotatedPoint[0] + t[0];
T predicted_y = rotatedPoint[1] + t[1];
T predicted_z = rotatedPoint[2] + t[2];
// The error is the difference between the predicted and observed position.
residuals[0] = predicted_x - T(observed_x1);
residuals[1] = predicted_y - T(observed_y1);
residuals[2] = predicted_z - T(observed_z1);
return true;
}
// Factory to hide the construction of the CostFunction object from
// the client code.
static ceres::CostFunction* Create(const double observed_x0, const double observed_y0, const double observed_z0,
const double observed_x1, const double observed_y1, const double observed_z1) {
// Define AutoDiffCostFunction: <AxisRotationError, #residuals, #dim axis, #dim angle, #dim trans>
return (new ceres::AutoDiffCostFunction<AxisRotationError, 3, 3, 1, 3>(
new AxisRotationError(observed_x0, observed_y0, observed_z0, observed_x1, observed_y1, observed_z1)));
}
double observed_x0;
double observed_y0;
double observed_z0;
double observed_x1;
double observed_y1;
double observed_z1;
};
int main(int argc, char** argv) {
google::InitGoogleLogging(argv[0]);
// Load points.csv into cv::Mat's
// 216 rows with (x0, y0, z0, x1, y1, z1)
// [x1,y1,z1] = R* [x0-tx,y0-ty,z0-tz] + [tx,ty,tz]
// The xyz coordinates are points on a chessboard, where the chessboard
// is rotated for 4x. Each chessboard has 54 xyz points. So 4x 54,
// gives the 216 rows of observations.
// The chessboard is located at [0,0,1], as the camera_0 is located
// at [-0.1,0,0], the t should become [0.1,0,1.0].
// The chessboard is rotated around axis [0.0,1.0,0.0]
io::CSVReader<6> in("points.csv");
float x0, y0, z0, x1, y1, z1;
// The observations
cv::Mat x_0(216, 3, CV_32F);
cv::Mat x_1(216, 3, CV_32F);
for (int rowNr = 0; rowNr < 216; rowNr++){
if (in.read_row(x0, y0, z0, x1, y1, z1))
{
x_0.at<float>(rowNr, 0) = x0;
x_0.at<float>(rowNr, 1) = y0;
x_0.at<float>(rowNr, 2) = z0;
x_1.at<float>(rowNr, 0) = x1;
x_1.at<float>(rowNr, 1) = y1;
x_1.at<float>(rowNr, 2) = z1;
}
}
std::cout << x_0(cv::Rect(0, 0, 2, 5)) << std::endl;
// The variable to solve for with its initial value. It will be
// mutated in place by the solver.
int numObservations = 216;
double axis[3] = { 0.0, 1.0, 0.0 };
double* pAxis; pAxis = axis;
double angles[4] = { 10.0, 10.0, 10.0, 10.0 };
double* pAngles; pAngles = angles;
double t[3] = { 0.0, 0.0, 1.0,};
double* pT; pT = t;
bool FLAGS_robustify = true;
// Build the problem.
Problem problem;
// Set up the only cost function (also known as residual). This uses
// auto-differentiation to obtain the derivative (jacobian).
for (int i = 0; i < numObservations; ++i) {
ceres::CostFunction* cost_function =
AxisRotationError::Create(
x_0.at<float>(i, 0), x_0.at<float>(i, 1), x_0.at<float>(i, 2),
x_1.at<float>(i, 0), x_1.at<float>(i, 1), x_1.at<float>(i, 2));
//std::cout << "pAngles: " << pAngles[i / 54] << ", " << i / 54 << std::endl;
ceres::LossFunction* loss_function = FLAGS_robustify ? new ceres::HuberLoss(0.001) : NULL;
//ceres::LossFunction* loss_function = FLAGS_robustify ? new ceres::CauchyLoss(0.002) : NULL;
problem.AddResidualBlock(cost_function, loss_function, pAxis, &pAngles[i/54], pT);
//problem.AddResidualBlock(cost_function, loss_function, pAxis, pT);
}
// Run the solver!
ceres::Solver::Options options;
options.linear_solver_type = ceres::DENSE_SCHUR;
//options.linear_solver_type = ceres::DENSE_QR;
options.minimizer_progress_to_stdout = true;
options.trust_region_strategy_type = ceres::LEVENBERG_MARQUARDT;
options.num_threads = 4;
options.use_nonmonotonic_steps = false;
ceres::Solver::Summary summary;
ceres::Solve(options, &problem, &summary);
//std::cout << summary.FullReport() << "\n";
std::cout << summary.BriefReport() << "\n";
// Normalize axis
double k = axis[0] * axis[0] + axis[1] * axis[1] + axis[2] * axis[2];
axis[0] = axis[0] / sqrt(k);
axis[1] = axis[1] / sqrt(k);
axis[2] = axis[2] / sqrt(k);
// Plot results
std::cout << "axis: [ " << axis[0] << "," << axis[1] << "," << axis[2] << " ]" << std::endl;
std::cout << "t: [ " << t[0] << "," << t[1] << "," << t[2] << " ]" << std::endl;
std::cout << "angles: [ " << angles[0] << "," << angles[1] << "," << angles[2] << "," << angles[3] << " ]" << std::endl;
return 0;
}
The result I've got:
iter cost cost_change |gradient| |step| tr_ratio tr_radius ls_iter iter_time total_time
0 3.632073e-003 0.00e+000 3.76e-002 0.00e+000 0.00e+000 1.00e+004 0 4.30e-004 7.57e-004
1 3.787837e-005 3.59e-003 2.10e-003 1.17e-001 1.92e+000 3.00e+004 1 7.43e-004 8.55e-003
2 3.756202e-005 3.16e-007 1.73e-003 5.49e-001 1.61e-001 2.29e+004 1 5.35e-004 1.13e-002
3 3.589147e-005 1.67e-006 2.90e-004 9.19e-002 9.77e-001 6.87e+004 1 5.96e-004 1.46e-002
4 3.584281e-005 4.87e-008 1.38e-005 2.70e-002 1.00e+000 2.06e+005 1 4.99e-004 1.73e-002
5 3.584268e-005 1.35e-010 1.02e-007 1.63e-003 1.01e+000 6.18e+005 1 6.32e-004 2.01e-002
Ceres Solver Report: Iterations: 6, Initial cost: 3.632073e-003, Final cost: 3.584268e-005, Termination: CONVERGENCE
axis: [ 0.00119037,0.999908,-0.0134817 ]
t: [ 0.0993185,-0.0080394,1.00236 ]
angles: [ 9.90614,9.94415,9.93216,10.1119 ]
The angles come out nicely at about 10 degrees. These can even be fixed for my case, as I know the rotation very accurately from my rotation stage. There is a small difference in the t and axis. This is caused by inaccuracies in my virtual stereo camera simulation. My chessboard squares are not exactly square and the dimensions are also a little off....
My simulation scripts, images, results: blender_simulation.zip
I found several posts about this subject, but none of the solutions use OpenCV.
I am wondering if OpenCV has any function or class that could help with this?
I have a 4x4 affine transformation in OpenCV and I am looking to find the rotation and translation, assuming that the scaling is 1 and there is no other transformation in the matrix.
Is there any function in OpenCV to help find these parameters?
The problem you're facing is known as a matrix decomposition problem.
You can retrieve your desired matrices following these steps:
Compute the scaling factors as the magnitudes of the first three basis vectors (columns or rows) of the matrix
Divide the first three basis vectors by these values (thus normalizing them)
The upper-left 3x3 part of the matrix now represents the rotation (you can use this as is, or convert it to quaternion form)
The translation is the fourth basis vector of the matrix (in homogeneous coordinates - it'll be the first three elements that you're interested in)
In your case, since your scaling factor is 1, you can skip the first two steps.
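For illustration, a minimal OpenCV sketch of the last two steps for a 4x4 CV_64F matrix (the helper name is mine):

#include <opencv2/opencv.hpp>

// Split a 4x4 rigid transform into its rotation and translation parts.
void splitTransform(const cv::Mat& M, cv::Mat& R, cv::Mat& t)
{
    R = M(cv::Rect(0, 0, 3, 3)).clone(); // upper-left 3x3 block = rotation
    t = M(cv::Rect(3, 0, 1, 3)).clone(); // first three entries of the last column = translation
    // cv::Rodrigues(R, rvec) would then give a 3x1 rotation vector whose
    // direction is the rotation axis and whose norm is the angle in radians.
}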
To retrieve the rotation axis and angle (in radians) from the rotation matrix, I suggest porting the following Java algorithm to OpenCV (source: http://www.euclideanspace.com/maths/geometry/rotations/conversions/matrixToAngle/).
/**
This requires a pure rotation matrix 'm' as input.
*/
public axisAngle toAxisAngle(matrix m) {
double angle,x,y,z; // variables for result
double epsilon = 0.01; // margin to allow for rounding errors
double epsilon2 = 0.1; // margin to distinguish between 0 and 180 degrees
// optional check that input is pure rotation, 'isRotationMatrix' is defined at:
// http://www.euclideanspace.com/maths/algebra/matrix/orthogonal/rotation/
assert isRotationMatrix(m) : "not valid rotation matrix" ;// for debugging
if ((Math.abs(m[0][1]-m[1][0])< epsilon)
&& (Math.abs(m[0][2]-m[2][0])< epsilon)
&& (Math.abs(m[1][2]-m[2][1])< epsilon)) {
// singularity found
// first check for identity matrix which must have +1 for all terms
// in the leading diagonal and zero in other terms
if ((Math.abs(m[0][1]+m[1][0]) < epsilon2)
&& (Math.abs(m[0][2]+m[2][0]) < epsilon2)
&& (Math.abs(m[1][2]+m[2][1]) < epsilon2)
&& (Math.abs(m[0][0]+m[1][1]+m[2][2]-3) < epsilon2)) {
// this singularity is identity matrix so angle = 0
return new axisAngle(0,1,0,0); // zero angle, arbitrary axis
}
// otherwise this singularity is angle = 180
angle = Math.PI;
double xx = (m[0][0]+1)/2;
double yy = (m[1][1]+1)/2;
double zz = (m[2][2]+1)/2;
double xy = (m[0][1]+m[1][0])/4;
double xz = (m[0][2]+m[2][0])/4;
double yz = (m[1][2]+m[2][1])/4;
if ((xx > yy) && (xx > zz)) { // m[0][0] is the largest diagonal term
if (xx< epsilon) {
x = 0;
y = 0.7071;
z = 0.7071;
} else {
x = Math.sqrt(xx);
y = xy/x;
z = xz/x;
}
} else if (yy > zz) { // m[1][1] is the largest diagonal term
if (yy< epsilon) {
x = 0.7071;
y = 0;
z = 0.7071;
} else {
y = Math.sqrt(yy);
x = xy/y;
z = yz/y;
}
} else { // m[2][2] is the largest diagonal term so base result on this
if (zz< epsilon) {
x = 0.7071;
y = 0.7071;
z = 0;
} else {
z = Math.sqrt(zz);
x = xz/z;
y = yz/z;
}
}
return new axisAngle(angle,x,y,z); // return 180 deg rotation
}
// as we have reached here there are no singularities so we can handle normally
double s = Math.sqrt((m[2][1] - m[1][2])*(m[2][1] - m[1][2])
+(m[0][2] - m[2][0])*(m[0][2] - m[2][0])
+(m[1][0] - m[0][1])*(m[1][0] - m[0][1])); // used to normalise
if (Math.abs(s) < 0.001) s=1;
// prevent divide by zero, should not happen if matrix is orthogonal and should be
// caught by singularity test above, but I've left it in just in case
angle = Math.acos(( m[0][0] + m[1][1] + m[2][2] - 1)/2);
x = (m[2][1] - m[1][2])/s;
y = (m[0][2] - m[2][0])/s;
z = (m[1][0] - m[0][1])/s;
return new axisAngle(angle,x,y,z);
}
I'm trying to rotate one point around a central point by an angle - standard problem. I've seen lots of posts about this but I can't get my implementation to work:
void Point::Rotate(const Point Pivot, const float Angle)
{
if (Angle == 0)
return;
float s = sin(Angle);
float c = cos(Angle);
x -= Pivot.x;
y -= Pivot.y;
x = (x * c) - (y * s) + Pivot.x;
y = (x * s) + (y * c) + Pivot.y;
}
This is my code, the logic of which I've gleaned from numerous sources, for example here and here.
As far as I'm aware, it should work. However, when I apply it to rotating, for example, the point (0, 100) by 90 degrees (Pi/2 is given to the function) around (0, 0), the rotated point ends up at (-100, -100); 100 px below where it should be.
When trying to draw a circle (36 points) - it creates a vague heart shape. It looks like a graph I saw that I think was in polar coordinates - do I need to convert my point to Cartesian or something?
Can anyone spot anything wrong with my code?
Edit: Sorry, this function is a member function of a Point class - x and y are the member variables :/
You're almost there, but you're modifying x in the next-to-last line, meaning that the value of that coordinate fed into the y calculation is incorrect!
Instead, use temporary variables for the new x and y and then add the Pivot coordinates on afterwards:
double nx = (x * c) - (y * s);
double ny = (x * s) + (y * c);
x = nx + Pivot.x;
y = ny + Pivot.y;
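Putting it together, the member function would look something like this (assuming the same Point members as in the question, and sin/cos from <cmath>):

void Point::Rotate(const Point Pivot, const float Angle)
{
    if (Angle == 0)
        return;
    const float s = sin(Angle);
    const float c = cos(Angle);
    const float dx = x - Pivot.x; // work on copies so the y formula
    const float dy = y - Pivot.y; // still sees the original x
    x = (dx * c) - (dy * s) + Pivot.x;
    y = (dx * s) + (dy * c) + Pivot.y;
}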
I have been testing collision between two circles using the method:
Circle A at (x1, y1) with Radius A; circle B at (x2, y2) with Radius B
x' = (x1 - x2) * (x1 - x2)
y' = (y1 - y2) * (y1 - y2)
distance = x' + y'
square root of distance - (Radius A + Radius B)
and if the resulting answer is a negative number it is intersecting.
I have used this method in a test but it doesn't seem to be very accurate at all.
bool circle::intersects(circle & test)
{
Vector temp;
temp.setX(centre.getX() - test.centre.getX());
temp.setY(centre.getY() - test.centre.getY());
float distance;
float temp2;
float xt;
xt = temp.getX();
temp2 = xt * xt;
temp.setX(temp2);
xt = temp.getY();
temp2 = xt * xt;
temp.setY(temp2);
xt = temp.getX() + temp.getY();
distance = sqrt(xt);
xt = radius + test.radius;
if( distance - xt < test.radius)
{
return true;
}
else return false;
}
This is the function using this method; maybe I'm wrong here. I just wondered what other methods I could use. I know the separating axis theorem is better, but I wouldn't know where to start.
if( distance - xt < test.radius)
{
return true;
}
distance - xt evaluates to the gap between the edges of the two disks. That gap can be smaller than test.radius even though no collision is going on.
The solution:
if(distance <= (radius + test.radius) )
return true;
where distance is the distance between the centres.
Given: xt = radius + test.radius;
The correct test is: if( distance < xt)
Here is an attempt to re-write the body for you (no compiler at hand, so there may be errors):
bool circle::intersects(circle & test)
{
float x = this->centre.getX() - test.centre.getX();
float y = this->centre.getY() - test.centre.getY();
float distance = sqrt(x*x+y*y);
return distance < (this->radius + test.radius);
}
Based on Richard's solution, but comparing the squared distance. This reduces computation errors and computation time.
bool circle::intersects(circle & test)
{
float x = this->centre.getX() - test.centre.getX();
float y = this->centre.getY() - test.centre.getY();
float distance2 = x * x + y * y;
float intersect_distance2 = (this->radius + test.radius) * (this->radius + test.radius);
return distance2 <= intersect_distance2;
}
Use Pythagoras' theorem to compute the distance between the centres.
That is a straight line.
If they have collided then that distance is shorter than the sum of the two radii.
I have a hexagon grid:
with template type coordinates T. How can I calculate the distance between two hexagons?
For example:
dist((3,3), (5,5)) = 3
dist((1,2), (1,4)) = 2
First apply the transform (y, x) |-> (u, v) = (x, y + floor(x / 2)).
Now the facial adjacency looks like
0 1 2 3
0*-*-*-*
|\|\|\|
1*-*-*-*
|\|\|\|
2*-*-*-*
Let the points be (u1, v1) and (u2, v2). Let du = u2 - u1 and dv = v2 - v1. The distance is
if du and dv have the same sign: max(|du|, |dv|), by using the diagonals
if du and dv have different signs: |du| + |dv|, because the diagonals are unproductive
In Python:
def dist(p1, p2):
y1, x1 = p1
y2, x2 = p2
du = x2 - x1
dv = (y2 + x2 // 2) - (y1 + x1 // 2)
return max(abs(du), abs(dv)) if ((du >= 0 and dv >= 0) or (du < 0 and dv < 0)) else abs(du) + abs(dv)
Posting here after I saw a blog post of mine had gotten referral traffic from another answer here. That answer got voted down, rightly so, because it was incorrect; but it was a mischaracterization of the solution put forth in my post.
Your 'squiggly' axis - in terms of your x coordinate being displaced every other row - is going to cause you all sorts of headaches with trying to determine distances or doing pathfinding later on, if this is for a game of some sort. Hexagon grids lend themselves to three axes naturally, and a 'squared off' grid of hexagons will optimally have some negative coordinates, which allows for simpler math around distances.
Here's a grid with (x,y) mapped out, with x increasing to the lower right, and y increasing upwards.
By straightening things out, the third axis becomes obvious.
The neat thing about this, is that the three coordinates become interlinked - the sum of all three coordinates will always be 0.
With such a consistent coordinate system, the atomic distance between any two hexes is the largest change between the three coordinates, or:
d = max( abs(x1 - x2), abs(y1 - y2), abs( (-x1 + -y1) - (-x2 + -y2) ) )
Pretty straightforward. But you must fix your grid first!
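A sketch of that distance in C++, assuming the straightened grid where each hex stores (x, y) and the third coordinate is implicitly z = -x - y:

#include <algorithm>
#include <cstdlib>

int hexDistanceCube(int x1, int y1, int x2, int y2)
{
    const int z1 = -x1 - y1; // implicit third coordinate
    const int z2 = -x2 - y2;
    return std::max({ std::abs(x1 - x2), std::abs(y1 - y2), std::abs(z1 - z2) });
}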
The correct explicit formula for the distance, with your coordinate system, is given by:
d((x1,y1),(x2,y2)) = max( abs(x1 - x2),
abs((y1 + floor(x1/2)) - (y2 + floor(x2/2)))
)
Here is what I did:
Taking one cell as the center (it is easy to see if you choose 0,0), the cells at distance dY form a big hexagon (with “radius” dY). One vertex of this hexagon is (dY2, dY). If dX <= dY2, the path is a zig-zag to the rim of the big hexagon, with a distance of dY. If not, the path is the “diagonal” to the vertex, plus a vertical path from the vertex to the second cell, which adds dX - dY2 cells.
Maybe easier to understand; let:
dX = abs(x1 - x2);
dY = abs(y1 - y2);
dY2= floor((abs(y1 - y2) + (y1+1)%2 ) / 2);
Then:
d = d((x1,y1),(x2,y2))
= dX < dY2 ? dY : dY + dX-dY2 + y1%2 * dY%2
First, you need to transform your coordinates to a "mathematical" coordinate system. Every two columns you shift your coordinates by 1 unit in the y-direction. The "mathematical" coordinates (s, t) can be calculated from your coordinates (u, v) as follows:
s = u + floor(v/2)
t = v
If you call one side of your hexagons a, the basis vectors of your coordinate system are (0, -sqrt(3)a) and (3a/2, sqrt(3)a/2). To find the minimum distance between your points, you need to calculate the Manhattan distance in your coordinate system, which is given by |s1-s2|+|t1-t2| where s and t are the coordinates in your system.
The Manhattan distance only covers walking in the direction of your basis vectors, so it only covers walking like that: |/ but not walking like that: |\. For that you need to transform your vectors into another coordinate system with basis vectors (0, -sqrt(3)a) and (3a/2, -sqrt(3)a/2). The coordinates in this system are given by s'=s-t and t'=t, so the Manhattan distance in this coordinate system is given by |s1'-s2'|+|t1'-t2'|.
The distance you are looking for is the minimum of the two calculated Manhattan distances. Your code would look like this:
#include <algorithm> // std::min
#include <cstdlib>   // abs

struct point
{
int u;
int v;
};
int dist(point const & p, point const & q)
{
int const ps = p.u + (p.v / 2); // integer division!
int const pt = p.v;
int const qs = q.u + (q.v / 2);
int const qt = q.v;
int const dist1 = abs(ps - qs) + abs(pt - qt);
int const dist2 = abs((ps - pt) - (qs - qt)) + abs(pt - qt);
return std::min(dist1, dist2);
}
(odd-r)(without z, only x,y)
I saw some problems with the implementations above. Sorry, I didn't check them all, but maybe my solution will be helpful for someone, even if it is not an optimized one.
The main idea is to go diagonally and then horizontally. But for that we need to note:
1) For example, we have 0;3 (x1=0; y1=3) and to go to y2=6 we can get within 6 steps to each point (0-6; 6),
so: 0 is the left_border, 6 is the right_border
2) Calculate some offsets
#include <iostream>
#include <cmath>
int main()
{
//while(true){
int x1,y1,x2,y2;
std::cin>>x1>>y1;
std::cin>>x2>>y2;
int diff_y=y2-y1; //only up-> bottom no need abs
int left_x,right_x;
int path;
if( y1>y2 ) { // if Down->Up then swap
int temp_y=y1;
y1=y2;
y2=temp_y;
//
int temp_x=x1;
x1=x2;
x2=temp_x;
} // so now we have Up->Down
// Note that it's odd-r horizontal layout
//OF - Offset Line (y%2==1)
//NOF -Not Offset Line (y%2==0)
if( y1%2==1 && y2%2==0 ){ //OF ->NOF
left_x = x1 - ( (y2 - y1 + 1)/2 -1 ); //UP->DOWN no need abs
right_x = x1 + (y2 - y1 + 1)/2; //UP->DOWN no need abs
}
else if( y1%2==0 && y2%2==1 ){ // NOF->OF
left_x = x1 - (y2 - y1 + 1)/2; //UP->DOWN no need abs
right_x = x1 + ( (y2 - y1 + 1)/2 -1 ); //UP->DOWN no need abs
}
else{
left_x = x1 - (y2 - y1 + 1)/2; //UP->DOWN no need abs
right_x = x1 + (y2 - y1 + 1)/2; //UP->DOWN no need abs
}
/////////////////////////////////////////////////////////////
if( x2>=left_x && x2<=right_x ){
path = y2 - y1;
}
else {
int min_1 = std::abs( left_x - x2 );
int min_2 = std::abs( right_x - x2 );
path = y2 - y1 + std::min(min_1, min_2);
}
std::cout<<"Path: "<<path<<"\n\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\n";
//}
return 0;
}
I believe the answer you seek is:
d((x1,y1),(x2,y2))=max(abs(x1-x2),abs(y1-y2));
You can find a good explanation of hexagonal grid coordinate systems and distance calculations here:
http://keekerdc.com/2011/03/hexagon-grids-coordinate-systems-and-distance-calculations/