Boost Transform Producing Inf Return Values - C++

What I am trying to do is convert two points from spherical coordinates to geographic coordinates, so that I can then make use of the Vincenty distance function to accurately measure the distance between the two points on a unit sphere.
The following code fails to transform a pair of spherical points into a pair of geographic ones, returning inf values for the elements of p1_g and p2_g.
Any suggestion of what I am doing wrong is much appreciated.
#include <Eigen/Dense>
#include <boost/geometry.hpp>

using Eigen::VectorXd;
namespace bg = boost::geometry;

VectorXd p1(2);
VectorXd p2(2);
p1 << -2.35619, 0.955317;
p2 << 1.47275, 2.53697;

typedef bg::srs::spheroid<double> SpheroidType;
SpheroidType spheroid(1.0, 1.0);

typedef bg::strategy::distance::vincenty<SpheroidType> VincentyStrategy;
VincentyStrategy vincenty(spheroid);

bg::model::point<double, 2, bg::cs::spherical<bg::radian>> p1_s(p1(0), p1(1));
bg::model::point<double, 2, bg::cs::spherical<bg::radian>> p2_s(p2(0), p2(1));
bg::model::point<double, 2, bg::cs::geographic<bg::radian>> p1_g;
bg::model::point<double, 2, bg::cs::geographic<bg::radian>> p2_g;

bg::transform(p1_s, p1_g, vincenty);
bg::transform(p2_s, p2_g, vincenty);

auto dist = bg::distance(p1_g, p2_g, vincenty);

You appear to be confused about spheres and spheroids.
A sphere is effectively a perfectly round ball, while a spheroid is a sphere that has been squashed (or stretched) along an axis; see: https://en.wikipedia.org/wiki/Spheroid.
A spheroid is also known as an ellipsoid of revolution. The best-known spheroid is the WGS-84 spheroid used by GPS systems.
Distances between points on a sphere can be calculated relatively simply using the haversine equation, while distances between points on a spheroid require more complicated methods such as Vincenty's or (the more accurate) Karney's algorithms.
To calculate distances on a unit sphere, simply use the Boost haversine strategy with radius 1.0; the result is the central angle in radians, which you can multiply by a radius to convert the distance to your desired units. Boost.Geometry's non-Cartesian distance example shows this being done with coordinates in degrees.
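A minimal sketch on the question's data, assuming the second coordinate is a polar angle measured from the +z axis (it must then be converted to latitude, since Boost's spherical_equatorial coordinate system expects longitude/latitude):
#include <boost/geometry.hpp>
#include <cmath>
#include <iostream>

namespace bg = boost::geometry;

int main() {
    const double half_pi = std::acos(0.0);

    // Longitude/latitude in radians; latitude = pi/2 - polar angle.
    using SphPoint = bg::model::point<double, 2, bg::cs::spherical_equatorial<bg::radian>>;
    SphPoint p1(-2.35619, half_pi - 0.955317);
    SphPoint p2(1.47275,  half_pi - 2.53697);

    // Haversine with radius 1.0: the result is the great-circle distance on
    // the unit sphere, i.e. the central angle in radians. Multiply by R for
    // a sphere of radius R.
    bg::strategy::distance::haversine<double> haversine(1.0);
    std::cout << bg::distance(p1, p2, haversine) << "\n";
}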

Related

OpenCV undistortPoints and triangulatePoints give odd results (stereo)

I'm trying to get 3D coordinates of several points in space, but I'm getting odd results from both undistortPoints() and triangulatePoints().
Since both cameras have different resolutions, I've calibrated them separately and got RMS errors of 0.34 and 0.43, then used stereoCalibrate() to get more matrices, got an RMS of 0.708, and then used stereoRectify() to get the remaining matrices. With that in hand I started working on the gathered coordinates, but I get weird results.
For example, the input is (935, 262), and the undistortPoints() output is (1228.709125, 342.79841) for one point; for another it's (934, 176) and (1227.9016, 292.4686) respectively. That's weird, because both of these points are very close to the middle of the frame, where distortions are smallest. I didn't expect it to move them by 300 pixels.
When passed to triangulatePoints(), the results get even stranger. I've measured the distance between three points in real life (with a ruler), and calculated the distance between pixels on each picture. Because the points were on a fairly flat plane this time, these two lengths (pixel and real) matched, in that |AB|/|BC| was around 4/9 in both cases. However, triangulatePoints() gives me results that are way off, with |AB|/|BC| being 3/2 or 4/2.
This is my code:
double pointsBok[2] = { bokList[j].toFloat() + xBok/2, bokList[j+1].toFloat() + yBok/2 };
cv::Mat imgPointsBokProper = cv::Mat(1, 1, CV_64FC2, pointsBok);

double pointsTyl[2] = { tylList[j].toFloat() + xTyl/2, tylList[j+1].toFloat() + yTyl/2 };
//cv::Mat imgPointsTyl = cv::Mat(2, 1, CV_64FC1, pointsTyl);
cv::Mat imgPointsTylProper = cv::Mat(1, 1, CV_64FC2, pointsTyl);

cv::undistortPoints(imgPointsBokProper, imgPointsBokProper, intrinsicOne, distCoeffsOne, R1, P1);
cv::undistortPoints(imgPointsTylProper, imgPointsTylProper, intrinsicTwo, distCoeffsTwo, R2, P2);

cv::triangulatePoints(P1, P2, imgWutBok, imgWutTyl, point4D);

double wResult = point4D.at<double>(3, 0);
double realX = point4D.at<double>(0, 0) / wResult;
double realY = point4D.at<double>(1, 0) / wResult;
double realZ = point4D.at<double>(2, 0) / wResult;
The angles between points are sometimes roughly right, but usually not:
`7.16816 168.389 4.44275` vs `5.85232 170.422 3.72561` (degrees)
`8.44743 166.835 4.71715` vs `12.4064 158.132 9.46158`
`9.34182 165.388 5.26994` vs `19.0785 150.883 10.0389`
I've tried using undistort() on the entire frame, but got results just as odd. The distance between points B and C should be pretty much unchanged at all times, and yet this is what I get, frame by frame:
7502.42
4876.46
3230.13
2740.67
2239.95
[Image: pixel distance (bottom) vs. real distance (top) - should be very similar]
[Image: angle]
Also, shouldn't both undistortPoints() and undistort() give the same results (another set of videos here)?
The function cv::undistortPoints does undistortion and reprojection in one go. It performs the following sequence of operations:
1. undo the camera projection (multiplication with the inverse of the camera matrix)
2. apply the distortion model in reverse to undo the distortion
3. rotate by the provided rotation matrix R1/R2
4. project the points to an image using the provided projection matrix P1/P2
If you pass the matrices R1, P1 (resp. R2, P2) from cv::stereoRectify(), the input points will be undistorted and rectified. Rectification means that the images are transformed so that corresponding points have the same y-coordinate. There is no unique solution for image rectification, since you can apply any translation or scaling to both images without changing the alignment of corresponding points.
That being said, cv::stereoRectify() can shift the center of projection quite a bit (e.g. 300 pixels). If you want pure undistortion, you can pass an identity matrix (instead of R1) and the original camera matrix K (instead of P1). This should lead to pixel coordinates similar to the original ones.
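For example, a hedged sketch reusing intrinsicOne and distCoeffsOne from the question's code, with the question's first test point:
#include <opencv2/opencv.hpp>
#include <vector>

// Pure undistortion: identity instead of R1, original camera matrix instead of P1.
std::vector<cv::Point2f> src = { cv::Point2f(935.f, 262.f) };
std::vector<cv::Point2f> dst;
cv::undistortPoints(src, dst, intrinsicOne, distCoeffsOne,
                    cv::Mat::eye(3, 3, CV_64F), intrinsicOne);
// dst[0] should now stay close to (935, 262), since the point lies near the
// image center where distortion is small.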

Ray tracing texture implementation for spheres

I'm trying to implement textures for spheres in my ray tracer. I managed to get something working, but I am unsure about its correctness. Below is the code for getting the texture coordinates. For now, the texture is random and is generated at runtime.
virtual void GetTextureCoord(Vect hitPoint, int hres, int vres, int& x, int& y) {
    float theta = acos(hitPoint.getVectY());
    float phi = atan2(hitPoint.getVectX(), hitPoint.getVectZ());
    if (phi < 0.0) {
        phi += TWO_PI;
    }

    float u = phi * INV_TWO_PI;
    float v = 1 - theta * INV_PI;

    y = (int) ((hres - 1) * u);
    x = (int) ((vres - 1) * v);
}
This is how the spheres look now:
I had to normalize the coordinates of the hit point to get the spheres to look like that. Otherwise they would look like:
Was normalising the hit point coordinates the right approach, or is something else broken in my code? Thank you!
Instead of normalising the hit point, I tried translating it to the world origin (as if the sphere center was there) and obtained the following result:
I'm using a 256x256 resolution texture by the way.
It's unclear what you mean by "normalizing" the hit point, since nothing in the code you posted normalizes it, but you mentioned that your hit point is in world space.
Also, you didn't say what texture mapping you're trying to implement, but I assume you want your U and V texture coordinates to represent latitude and longitude on the sphere's surface.
Your first problem is that converting Cartesian to spherical coordinates requires that the sphere is centered at the origin in the Cartesian space, which isn't true in world space. If the hit point is in world space, you have to subtract the sphere's world-space center point to get the effective hit point in local coordinates. (You figured this part out already and updated the question with a new image.)
Your second problem is that the way you're calculating theta requires the sphere to have a radius of 1, which isn't true even after you move the sphere's center to the origin. Remember your trigonometry: the argument to acos is the ratio of a triangle's side to its hypotenuse, and is always in the range [-1, +1]. In this case your Y-coordinate is the side, and the sphere's radius is the hypotenuse. So you have to divide by the sphere's radius when calling acos. It's also a good idea to clamp the value to the [-1, +1] range in case floating-point rounding error puts it slightly outside.
(In principle you'd also have to divide the X and Z coordinates by the radius, but you're only using those for an inverse tangent, and dividing them both by the radius won't change their quotient and thus won't change phi.)
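Putting both fixes together, a sketch of the corrected function (the center and radius parameters are additions; Vect, TWO_PI, INV_TWO_PI and INV_PI are assumed from your code, and your u/v-to-x/y pairing is kept as-is):
virtual void GetTextureCoord(Vect hitPoint, Vect center, float radius,
                             int hres, int vres, int& x, int& y) {
    // Move the hit point into the sphere's local frame.
    float lx = hitPoint.getVectX() - center.getVectX();
    float ly = hitPoint.getVectY() - center.getVectY();
    float lz = hitPoint.getVectZ() - center.getVectZ();

    // Divide the side by the hypotenuse (the radius), clamping against
    // floating-point rounding error before calling acos.
    float cosTheta = ly / radius;
    if (cosTheta >  1.0f) cosTheta =  1.0f;
    if (cosTheta < -1.0f) cosTheta = -1.0f;

    float theta = acos(cosTheta);
    float phi = atan2(lx, lz);   // the radius cancels in the quotient
    if (phi < 0.0) {
        phi += TWO_PI;
    }

    float u = phi * INV_TWO_PI;
    float v = 1 - theta * INV_PI;

    y = (int) ((hres - 1) * u);
    x = (int) ((vres - 1) * v);
}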
Right now your sphere intersection and texture-coordinate functions are operating in world space, but you'll probably find it useful later to implement transformation matrices, which let you transform things from one coordinate space to another. Then you can change your sphere functions to operate in a local coordinate space where the center is the origin and the radius is 1, and give each object an associated transformation matrix that maps the local coordinate space to the world coordinate space. This will simplify your ray/sphere intersection code, and let you remove the origin subtraction and radius division from GetTextureCoord (since they're always (0, 0, 0) and 1 respectively).
To intersect a ray with an object, you'd use the object's transformation matrix to transform the ray into the object's local coordinate space, do the intersection (and compute texture coordinates) there, and then transform the result (e.g. hit point and surface normal) back to world space.
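For illustration, a hypothetical sketch of that flow (the Ray/Hit types, the worldToLocal/localToWorld matrices, and their method names are assumptions, not code from the question):
// Transform the ray into the sphere's local space.
Ray localRay;
localRay.origin    = sphere.worldToLocal.transformPoint(ray.origin);
localRay.direction = sphere.worldToLocal.transformDirection(ray.direction);

// Intersect against the canonical unit sphere at the origin; texture
// coordinates can be computed here with no center/radius bookkeeping.
Hit hit = IntersectUnitSphere(localRay);

// Map the results back to world space (normals need the inverse transpose,
// assumed to be wrapped in transformNormal here).
hit.point  = sphere.localToWorld.transformPoint(hit.point);
hit.normal = sphere.localToWorld.transformNormal(hit.normal);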

transforming projection matrices computed from trifocal tensor to estimate 3D points

I am using this legacy code: http://fossies.org/dox/opencv-2.4.8/trifocal_8cpp_source.html
for estimating 3D points from corresponding 2D points across 3 different views. The problem I faced is the same as the one stated here: http://opencv-users.1802565.n2.nabble.com/trifocal-tensor-icvComputeProjectMatrices6Points-icvComputeProjectMatricesNPoints-td2423108.html
I could compute the projection matrices successfully using icvComputeProjectMatrices6Points. I used 6 sets of corresponding points from the 3 views. The results are shown below:
projMatr1 P1 =
[-0.22742541, 0.054754492, 0.30500898, -0.60233182;
-0.14346679, 0.034095913, 0.33134204, -0.59825808;
-4.4949986e-05, 9.9166318e-06, 7.106331e-05, -0.00014547621]
projMatr2 P2 =
[-0.17060626, -0.0076031247, 0.42357284, -0.7917347;
-0.028817834, -0.0015948272, 0.2217239, -0.33850163;
-3.3046148e-05, -1.3680664e-06, 0.0001002633, -0.00019192585]
projMatr3 P3 =
[-0.033748217, 0.099119112, -0.4576003, 0.75215244;
-0.001807699, 0.0035084449, -0.24180284, 0.39423448;
-1.1765103e-05, 2.9554356e-05, -0.00013438619, 0.00025332544]
Furthermore, I computed 3D points using icvReconstructPointsFor3View. The six points, in homogeneous coordinates (one per column), are the following:
4D points =
[-0.4999997, -0.26867214, -1, 2.88633e-07, 1.7766099e-07, -1.1447386e-07;
-0.49999994, -0.28693244, 3.2249036e-06, 1, 7.5971762e-08, 2.1956141e-07;
-0.50000024, -0.72402155, 1.6873783e-07, -6.8603946e-08, -1, 5.8393886e-07;
-0.50000012, -0.56681377, 1.202426e-07, -4.1603233e-08, -2.3659911e-07, 1]
The actual 3D points, however, are as follows:
- { ID:1,X:500.000000, Y:800.000000, Z:3000.000000}
- { ID:2,X:500.000000, Y:800.000000, Z:4000.000000}
- { ID:3,X:1500.000000, Y:800.000000, Z:4000.000000}
- { ID:4,X:1500.000000, Y:800.000000, Z:3000.000000}
- { ID:5,X:500.000000, Y:1800.000000, Z:3000.000000}
- { ID:6,X:500.000000, Y:1800.000000, Z:4000.000000}
My question is now: how do I transform P1, P2 and P3 into a form that allows a meaningful triangulation? I need to compute the correct 3D points using the trifocal tensor.
The trifocal tensor won't help you, because like the fundamental matrix, it only enables projective reconstruction of the scene and camera poses. If X0_j and P0_i are the true 3D points and camera matrices, this means that the reconstructed points Xp_j = inv(H).X0_j and camera matrices Pp_i = P0_i.H are only defined up to a common 4x4 matrix H, which is unknown.
In order to obtain a metric reconstruction, you need to know the calibration matrices of your cameras. Whether you know these matrices (e.g. if you use virtual cameras for image rendering) or you estimated them using camera calibration (see OpenCV calibration tutorials), you can find a method to obtain a metric reconstruction in §7.4.5 of "Geometry, constraints and computation of the trifocal tensor", by C.Ressl (PDF).
Note that even when using this method, you cannot recover the true scale of the 3D reconstruction, unless you have some additional knowledge (such as the actual distance between two fixed 3D points).
Sketch of the algorithm:
Inputs: the three camera matrices P1, P2, P3 (projective world coordinates, with the coordinate system chosen so that P1=[I|0]), the associated calibration matrices K1, K2, K3 and one point correspondence x1, x2, x3.
Outputs: the three camera matrices P1_E, P2_E, P3_E (metric reconstruction).
Set P1_E=K1.[I|0]
Compute the fundamental matrices F21, F31. Denoting P2=[A|a] and P3=[B|b], you have F21=[a]x.A and F31=[b]x.B (see table 9.1 in [HZ00]), where for a 3x1 vector e, [e]x = [0,-e_3,e_2; e_3,0,-e_1; -e_2,e_1,0]. (A code sketch of this step follows the reference below.)
Compute the essential matrices E21 = K2'.F21.K1 and E31 = K3'.F31.K1
For i = 2,3, do the following
i. Compute the SVD Ei1=U.S.V'. If det(U)<0 set U=-U. If det(V)<0 set V=-V.
ii. Define W=[0,-1,0;1,0,0;0,0,1], Ri=U.W.V' and ti = third column of U
iii. Define M=[Ri'.ti]x, X1=M.inv(K1).x1 and Xi=M.Ri'.inv(Ki).xi
iv. If X1_3.Xi_3<0, set Ri=U.W'.V' and recompute M and X1
v. If X1_3<0 set ti = -ti
vi. Define Pi_E=Ki.[Ri|ti]
Do the following to retrieve the correct scale for t3 (consistent with the fact that ||t2||=1):
i. Define p2=R2'.inv(K2).x2 and p3=R3'.inv(K3).x3
ii. Define M=[p2]x
iii. Compute the scale s=(p3'.M.R2'.t2)/(p3'.M.R3'.t3)
iv. Set t3=t3*s
End of the algorithm: the camera matrices P1_E, P2_E, P3_E are valid up to an isotropic scaling of the scene and a change of 3D coordinate system (hence it is a metric reconstruction).
[HZ00] "Multiple view geometry in computer vision" , by R.Hartley and A.Zisserman, 2000.
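As a concrete illustration of step 2 (the sketch referenced above), here is a hedged OpenCV version; the function names are mine, and only the matrix bookkeeping of F21 = [a]x.A with P1 = [I|0] is shown:
#include <opencv2/core.hpp>

cv::Matx33d skew(const cv::Vec3d& e) {
    // [e]x = [0,-e_3,e_2; e_3,0,-e_1; -e_2,e_1,0]
    return { 0,    -e[2],  e[1],
             e[2],  0,    -e[0],
            -e[1],  e[0],  0 };
}

// With the world frame chosen so that P1 = [I|0] and P2 = [A|a],
// the fundamental matrix is F21 = [a]x . A (table 9.1 in [HZ00]).
cv::Matx33d fundamentalFromP(const cv::Matx34d& P2) {
    cv::Matx33d A;
    cv::Vec3d a;
    for (int r = 0; r < 3; ++r) {
        for (int c = 0; c < 3; ++c) A(r, c) = P2(r, c);
        a[r] = P2(r, 3);
    }
    return skew(a) * A;
}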

How to compute the center of a polygon in 2D and 3D space

Consider a simple convex polygon in 2D Cartesian space, given as a list of vertex coordinates sorted in counter-clockwise orientation, like this: [[x0, y0], ..., [xn, yn]]. How could you compute the center of the polygon (the point inside the polygon that is equidistant from all vertices)?
Also consider a second case where the polygon is placed in 3D Cartesian space and its normal vector is not parallel to any of the Cartesian axes. How could you compute the center, without rotating the polygon?
I can read C/C++, Fortran, MATLAB and Python, however any pseudo-code is also well appreciated.
EDIT
I now realise that my question was not well-posed. I am sorry for that. It appears that what I was looking for is the centroid of the polygon (i.e. the point on which a cardboard cut-out would balance while assuming uniform density and a uniform gravity field).
Your definition of center doesn't make sense in general.
To see this, just draw three non-aligned points on a plane and compute the one and only circle that passes through all three points. Clearly your center of the triangle must be the center of this circle.
Now draw a fourth point that doesn't lie on the circle and form the four-sided polygon. What is the center? There is no point in the plane that is equidistant from all vertices.
Note also that even in the case of triangles, using the point equidistant from the vertices can give you points outside and far away from the polygon, and it is also numerically unstable: given any ε>0 and M>0, you can always build a triangle in which moving a specific vertex by a distance of less than ε moves the center by a distance greater than M.
Commonly used "centers" that are simple to compute are the average of all vertices, the average of the boundary, the center of mass or even just the center of the axis-aligned bounding box. All of them can however fall outside the polygon if the polygon is not convex, but in your case they may work.
The simplest reasonable one (because it doesn't depend on the coordinate system) is the barycenter of the vertices (code in Python):
xc = sum(x for (x, y) in points) / len(points)
yc = sum(y for (x, y) in points) / len(points)
A bad property of it is that just splitting one side of the polygon by adding a vertex gives you a different center (in other words, it depends on the vertices, not on the set of points bounded by the polygon). The simplest one that depends only on the polygon is IMO the barycenter of the boundary:
sx = sy = sL = 0
for i in range(len(points)):   # counts from 0 to len(points)-1
    x0, y0 = points[i - 1]     # in Python points[-1] is the last element of points
    x1, y1 = points[i]
    L = ((x1 - x0)**2 + (y1 - y0)**2) ** 0.5
    sx += (x0 + x1) / 2 * L
    sy += (y0 + y1) / 2 * L
    sL += L
xc = sx / sL
yc = sy / sL
For both of them the extension to 3d is trivial... just add z using the same formulas.
In the case of a general polygon (not necessarily convex, not necessarily simply connected), a "center" that I have found useful, but that is not trivial to compute, is the (or an) inner point that is at maximum distance from the boundary (in other words, a "most inner" point).
In that case I resorted to using a discrete (bitmap) representation and a Gaussian distance transform.
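If what you are after is the balance point from your EDIT (the center of mass mentioned above), it can be computed with the standard shoelace formula. A minimal C++ sketch, assuming a simple (non-self-intersecting) polygon with vertices in order:
#include <utility>
#include <vector>

std::pair<double, double> areaCentroid(const std::vector<std::pair<double, double>>& pts) {
    double a = 0.0, cx = 0.0, cy = 0.0;
    for (std::size_t i = 0, n = pts.size(); i < n; ++i) {
        const auto& [x0, y0] = pts[(i + n - 1) % n];   // previous vertex (wraps around)
        const auto& [x1, y1] = pts[i];
        double cross = x0 * y1 - x1 * y0;   // twice the signed area of the edge triangle
        a  += cross;                        // a = 2 * polygon area
        cx += (x0 + x1) * cross;
        cy += (y0 + y1) * cross;
    }
    return { cx / (3.0 * a), cy / (3.0 * a) };   // centroid = sum / (6 * area)
}
For the 3D case, you would first express the vertices in 2D coordinates within the polygon's plane, compute the centroid there, and map the result back; no rotation of the polygon is needed.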
First of all, for a polygon the centroid does not generally lie equidistant from the vertices; in most cases it does not. That being said, you can find the centroid of the vertices simply by taking the mean of your x coordinates and the mean of your y coordinates. In MATLAB: centroidx = mean(xcoords) and centroidy = mean(ycoords) are the coordinates of the centroid. See this if you really need more.

Line-circle intersection test in a 3D world?

I have a 3D world with several 2D circles lying on the ground, facing the sky.
How can I check whether a line will intersect one of those circles from top to bottom?
I tried searching, but all I found is this kind of intersection test:
http://mathworld.wolfram.com/Circle-LineIntersection.html
but it's not what I need; here is an image of what I mean:
http://imageshack.us/m/192/8343/linecircleintersect.png
If you are in a coordinate system where the ground is given by z = c for some constant c, then you can simply calculate the x, y coordinates of the line at z = c. Then, for a circle with center (x0, y0) and radius R, you would simply check whether
(x - x0)^2 + (y - y0)^2 <= R^2.
If this is true, the line intersects the circle.
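A minimal sketch of that test (the line is assumed to be given by two points A and B):
struct Vec3 { double x, y, z; };

// Ground plane z = c; circle center (x0, y0) on that plane, radius R.
bool lineHitsCircle(Vec3 A, Vec3 B, double c, double x0, double y0, double R) {
    double dz = B.z - A.z;
    if (dz == 0.0) return false;      // line parallel to the ground plane
    double t = (c - A.z) / dz;        // parameter where the line crosses z = c
    double x = A.x + t * (B.x - A.x);
    double y = A.y + t * (B.y - A.y);
    double dx = x - x0, dy = y - y0;
    return dx * dx + dy * dy <= R * R;
}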
In a 3D setting you are concerned first not with the circle itself, but with the plane the circle lies on. Then you can find the point of intersection between the ray (line) and that plane (disk).
I like to use homogeneous coordinates for point, planes and lines and I hope you are familiar with vector dot · and cross products ×. Here is the method:
Plane (disk) is defined by a point vector r=[rx,ry,rz] and a normal direction vector n=[nx,ny,nz]. Together they form a plane W=[W1,W2]=[n,-r·n].
Line (ray) is defined by two point vectors r_A=[rAx,rAy,rAz] and r_B=[rBx,rBy,rBz]. Together they form the line L=[L1,L2]=[r_B-r_A, r_A×r_B]
The intersecting point is defined by P=[P1,P2]=[L2×W1+W2*L1, -L1·W1], or expanded out as
P=[ (r_A×r_B)×n - (r·n)*(r_B-r_A), -(r_B-r_A)·n ]
The coordinates for the point are found by r_P = P1/P2 where P1 has three elements and P2 is scalar.
Once you have the coordinates, you check the distance to the center of the circle with d = sqrt((r_P - r)·(r_P - r)) and test d <= R, where R is the radius of the circle. Note the difference in notation between a scalar multiplication * and a dot product ·.
If you know for sure that the circles lie on the ground (r=[0,0,0]) and face up (n=[0,0,1]) then you can make a lot of simplifications to the above general case.
[ref: Plucker Coordinates]
Update:
When using the ground (with +Z up) as the plane where the circles lie, use r=[rx,ry,0] and n=[0,0,1], and the above intersection point simplifies to
r_P = [ rAz*rBx - rAx*rBz, rAz*rBy - rAy*rBz, 0 ] / (rAz - rBz)
of which you can check the distance to the circle center.
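For completeness, here is a hedged sketch of the general-plane test above (the tiny vector helpers are placeholders, not a particular library):
struct V3 { double x, y, z; };
V3 add(V3 a, V3 b)       { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
V3 sub(V3 a, V3 b)       { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
V3 scale(V3 a, double s) { return { a.x * s, a.y * s, a.z * s }; }
V3 cross(V3 a, V3 b)     { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
double dot(V3 a, V3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Disk with center r, unit normal n, radius R; line through points rA and rB.
bool lineHitsDisk(V3 rA, V3 rB, V3 r, V3 n, double R) {
    V3 d = sub(rB, rA);            // L1: line direction
    V3 m = cross(rA, rB);          // L2: line moment
    double w2 = -dot(r, n);        // W2: plane offset, W = [n, -r.n]
    double p2 = -dot(d, n);        // P2 = -L1.W1
    if (p2 == 0.0) return false;   // line parallel to the plane
    V3 p1 = add(cross(m, n), scale(d, w2));   // P1 = L2 x W1 + W2 * L1
    V3 rP = scale(p1, 1.0 / p2);   // intersection point r_P = P1 / P2
    V3 e = sub(rP, r);
    return dot(e, e) <= R * R;     // compare d^2 with R^2 (avoids the sqrt)
}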