I have a polyline that I need to offset by a constant distance. Imagine a polyline representing the centre line of a highway; I need to offset/parallel this centre line by 50 units (to the left) and -50 units (to the right) to create lanes.
What function can I use to perform this offset/parallel translation? I believe I should use a MatrixXd or ArrayXd to store the polyline points, but maybe there is a better object for storing them? Should I use the method transpose() to achieve my parallel operation? Note the polyline points are 2D, not 3D.
That really depends on what else you're going to be doing with the points. You can also use a Matrix2Xd or MatrixX2d if you want to fix the number of rows or columns. I don't know the specifics of offsetting a polyline, but if you just want to add a constant vector to each point, you can do a rowwise or colwise add:
#include <iostream>
#include <Eigen/Core>

using namespace Eigen;

int main()
{
    MatrixXd mat(5, 2);
    VectorXd vec(2);
    vec << 10., 20.;
    mat.setRandom();
    std::cout << mat << "\n\n";
    mat.rowwise() += vec.transpose();
    std::cout << mat << "\n\n";
    return 0;
}
You have to calculate the first derivative (tangent) at each point of your polyline; only then does a parallel shift with respect to that tangent make sense. The offset direction at a point is the normal, i.e. the tangent rotated by 90 degrees.
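For illustration, here is a minimal sketch of that idea with Eigen, storing one point per column of a Matrix2Xd as suggested above. The per-vertex normal is approximated by averaging the two adjacent segment normals; offsetPolyline and the sample coordinates are purely illustrative, and a production-quality offset would also need mitered joins and self-intersection handling.

#include <iostream>
#include <Eigen/Core>

// Offset a 2D polyline (one point per column) by `dist` units to its left;
// pass a negative distance for the right side. Each vertex moves along the
// normalized average of its adjacent segment normals, which approximates
// the true parallel curve for gentle corners.
Eigen::Matrix2Xd offsetPolyline(const Eigen::Matrix2Xd& pts, double dist)
{
    const Eigen::Index n = pts.cols();
    if (n < 2) return pts;
    Eigen::Matrix2Xd out(2, n);

    // Left-hand unit normal of each segment: the tangent rotated by +90 degrees.
    Eigen::Matrix2Xd segNormal(2, n - 1);
    for (Eigen::Index i = 0; i + 1 < n; ++i) {
        Eigen::Vector2d d = (pts.col(i + 1) - pts.col(i)).normalized();
        segNormal.col(i) = Eigen::Vector2d(-d.y(), d.x());
    }
    for (Eigen::Index i = 0; i < n; ++i) {
        Eigen::Vector2d nrm;
        if (i == 0)          nrm = segNormal.col(0);
        else if (i == n - 1) nrm = segNormal.col(n - 2);
        else                 nrm = (segNormal.col(i - 1) + segNormal.col(i)).normalized();
        out.col(i) = pts.col(i) + dist * nrm;
    }
    return out;
}

int main()
{
    Eigen::Matrix2Xd centre(2, 4);
    centre << 0, 100, 200, 300,   // x coordinates
              0,   0,  50,  50;   // y coordinates
    std::cout << "left lane edge:\n"  << offsetPolyline(centre,  50.0) << "\n\n";
    std::cout << "right lane edge:\n" << offsetPolyline(centre, -50.0) << "\n";
    return 0;
}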
My Problem
I would like to calculate the exact volume of the intersection of polygon meshes. Unfortunately, the result is wrong.
I think it has something to do with me choosing the wrong options.
I have a STEP/OFF file (both work). I move a smaller cylinder into a bigger cylinder.
Then I calculate the intersection. I do not use a point map.
If I calculate the volume of the first three intersections, CGAL tells me their volume is zero, but it is not.
Why the result is wrong
I know this because:
1. I described the problem analytically and solved the integral with Matlab: the volume is not 0.
2. I solved the problem using FreeCAD: the volume is the same as in Matlab.
3. I computed the volume with CGAL: the result does not match 1 and 2.
4. I wrote the results of all the intersections into files, and the files concerning the problem are not empty. With a mesh viewer (gmsh or MeshLab) I can confirm height, width and length. The meshes clearly intersect, so the volume cannot be 0.
What I have done
I have read this:
The Exact Computation Paradigm
Robustness and Precision Issues in Geometric Computation
FAQ: I am using double (or float or ...) as my number type and get assertion failures or incorrect output. Is this a bug in CGAL?
I did not understand how these three apply to my situation.
I am using the Exact_predicates_exact_constructions_kernel and I defined CGAL_DONT_USE_LAZY_KERNEL. I have also tried the other kernels, with CGAL_DONT_USE_LAZY_KERNEL left undefined; the result does not change.
I do not use the same output and input variable for the intersection as in Polygon_mesh_processing/corefinement_consecutive_bool_op.cpp, so I do not use a point map for the result.
If needed I will supply the entire example, but I think I did something wrong, and the includes and the way I calculate the volume should suffice.
// original example from corefinement_parallel_union_meshes.cpp
#include <CGAL/Exact_predicates_exact_constructions_kernel_with_sqrt.h>
//#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Polygon_mesh_processing/transform.h>
#include <CGAL/Polygon_mesh_processing/intersection.h>
#include <CGAL/Named_function_parameters.h>
#include <CGAL/boost/graph/named_params_helper.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/aff_transformation_tags.h>
#include <CGAL/Polygon_mesh_processing/triangulate_faces.h>
#include <CGAL/Polygon_mesh_processing/corefinement.h>
#include <CGAL/Polygon_mesh_processing/repair.h>
#include <CGAL/Polygon_mesh_processing/IO/polygon_mesh_io.h>
#include <CGAL/Aff_transformation_3.h>
#include <iostream>
#include <fstream> // for writing to a file
#include <cassert>
#include <typeinfo>

//#define CGAL_DONT_USE_LAZY_KERNEL

/*
From the documentation:
The corefinement operation (which is also internally used in the three Boolean
operations) will correctly change the topology of the input surface mesh if the
point type used in the point property maps of the input meshes is from a CGAL
kernel with exact predicates. If that kernel does not have exact constructions,
the embedding of the output surface mesh might have self-intersections. In case
of consecutive operations, it is thus recommended to use a point property map
with points from a kernel with exact predicates and exact constructions (such
as CGAL::Exact_predicates_exact_constructions_kernel). In practice, this means
that with exact predicates and inexact constructions, edges will be split at
each intersection with a triangle, but the position of the intersection point
might create self-intersections due to the limited precision of floating-point
numbers.
*/
typedef CGAL::Exact_predicates_exact_constructions_kernel Kernel; // see the quoted text above
typedef Kernel::Point_3 Point_3;
typedef CGAL::Surface_mesh<Kernel::Point_3> Mesh;

#define CGAL_DONT_USE_LAZY_KERNEL

namespace PMP = CGAL::Polygon_mesh_processing;

// Z_INIT, Y_INIT, DRMAX and ND are project constants defined elsewhere.
bool simulateDrillingRadial(Mesh cutter, Mesh rotor, Mesh &out, double step) {
    bool validIntersection = false;
    CGAL::Aff_transformation_3<Kernel> trans(CGAL::Translation(),
                                             Kernel::Vector_3(0, step, Z_INIT));
    PMP::transform(trans, cutter);
    assert(!PMP::does_self_intersect(rotor));
    assert(!PMP::does_self_intersect(cutter));
    assert(PMP::does_bound_a_volume(cutter));
    assert(PMP::does_bound_a_volume(rotor));
    validIntersection = PMP::corefine_and_compute_intersection(cutter, rotor, out);
#ifndef NDEBUG
    CGAL::IO::write_polygon_mesh("union" + std::to_string(step) + ".off", out,
                                 CGAL::parameters::stream_precision(17));
    assert(validIntersection);
    std::cout << "Cutter Volume: " << PMP::volume(cutter) << std::endl;
    std::cout << "Rotor Volume: " << PMP::volume(rotor) << std::endl;
    std::cout << "Out Volume: " << CGAL::exact(PMP::volume(out)) << std::endl;
    std::cout << "Numb. of Step: " << step << std::endl;
    std::cout << "Drilling depth: " << Y_INIT - DRMAX * 1.0 / ND * step
              << std::endl;
#endif
    return validIntersection;
}
The rest of the code:
int main(int argc, char **argv) {
    bool validRead = false;
    Mesh cutter, rotor;
    Mesh out[ND];
    Kernel::Point_3 centers[ND];
    //Kernel::FT volume[ND];
    double steps[40] = { 40.0000, 39.9981, 39.9926, 39.9833, 39.9703, 39.9535,
                         39.9330, 39.9086, 39.8804, 39.8482, 39.8121, 39.7719, 39.7276,
                         39.6791, 39.6262, 39.5689, 39.5070, 39.4404, 39.3689, 39.2922,
                         39.2103, 39.1227, 39.0293, 38.9297, 38.8235, 38.7102, 38.5893,
                         38.4601, 38.3219, 38.1736, 38.0140, 37.8415, 37.6542, 37.4490,
                         37.2221, 36.9671, 36.6740, 36.3232, 35.8661, 35.0000 };
    const std::string cutterFile = CGAL::data_file_path("Cutter.off");
    const std::string rotorFile = CGAL::data_file_path("Rotor.off");
    validRead = (!PMP::IO::read_polygon_mesh(cutterFile, cutter)
                 || !PMP::IO::read_polygon_mesh(rotorFile, rotor));
    assert(!validRead); // both meshes must load successfully
    PMP::triangulate_faces(cutter);
    PMP::triangulate_faces(rotor);
    PMP::transform(rotAroundX(M_PI / 2), cutter); // rotAroundX and writeToCSV are defined elsewhere
    for (int i = 0; i < ND; i++) {
        simulateDrillingRadial(cutter, rotor, out[i], steps[i] + 10);
    }
    writeToCSV("tmp.csv", ND, out, steps);
    return 0;
}
Change the output format of your mesh from *.off to *.stl, then open the intersection mesh in software like Autodesk Netfabb, which can detect and repair errors in the meshes it loads. I think there's a high chance the functions you're using generate buggy meshes. Possible bugs include holes, duplicate triangles, and self-intersections. Strictly speaking, such meshes do not unambiguously define a solid body, so they have no volume.
If you confirm that's the problem, there are two ways to fix it:
1. Replace or fix the intersection algorithm so it produces watertight meshes with no self-intersections or duplicate triangles. Maybe the Nef polyhedra from the same library will help; see the sketch after this list.
2. Replace the volume computation algorithm with one that is tolerant to the bugs in your intersection meshes.
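For the first option, here is a rough, untested sketch of what the Nef-polyhedron route could look like, assuming both inputs are closed triangle meshes stored as CGAL::Polyhedron_3 (nefIntersection is an illustrative name, not a CGAL function):

#include <CGAL/Exact_predicates_exact_constructions_kernel.h>
#include <CGAL/Polyhedron_3.h>
#include <CGAL/Nef_polyhedron_3.h>

typedef CGAL::Exact_predicates_exact_constructions_kernel Kernel;
typedef CGAL::Polyhedron_3<Kernel> Polyhedron;
typedef CGAL::Nef_polyhedron_3<Kernel> Nef;

// Intersect two closed polyhedra via Nef polyhedra. Nef Boolean operations
// are exact and always produce a valid (possibly empty) result, at the cost
// of being much slower than corefinement. The inputs are taken by value
// because the Nef constructor requires non-const polyhedra.
bool nefIntersection(Polyhedron a, Polyhedron b, Polyhedron& out)
{
    Nef na(a), nb(b);      // conversion requires closed 2-manifold inputs
    Nef ni = na * nb;      // regularized Boolean intersection
    if (!ni.is_simple())   // must be a 2-manifold to convert back
        return false;
    ni.convert_to_polyhedron(out);
    return true;
}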
I realize the answer is rather vague. The reason is that the problem is very hard to solve well: very smart people have published research papers on it over several decades, and some companies even sell commercial libraries just for reliable Boolean operations on 3D meshes.
I am trying to fill a vector with a specific distribution of nonuniform screen points. These points represent x and y positions on the screen. At some point I am going to draw all of these points on the screen, and they should be unevenly distributed, denser at the center. Basically, the frequency of points should increase as you get closer to the center, with one side of the screen a mirror of the other (mirrored over the center of the screen).
I was thinking about using some formula (like y = cos(x) between -pi/2 and pi/2) where the resulting y would equal the frequency of points in that area of the screen (with -pi/2 the leftmost side of the screen, and so on), but I got stuck on how to apply something like this when creating the points to put into the vector. Note: there is a specific number of points that must be generated.
If the above approach can't work, maybe a cheat would be to steadily reduce some step size between each point, but I don't know how I would ensure that the specific number of points reaches the center.
Ex.
// This is a member function inside a class PointList,
// where we fill a member variable `list` (a vector) with nonuniform data.
void PointList::FillListNonUniform(const int numPoints, const int numPerPoint)
{
    double step = 2;
    double decelerator = 0.01;
    // Do half the screen, then duplicate with the sign reversed
    // so both sides of the screen mirror each other.
    for (int i = 0; i < numPoints / 2; i++)
    {
        Eigen::Vector2d newData(step, 0);
        for (int j = 0; j < numPerPoint; j++)
        {
            list.push_back(newData);
        }
        decelerator += 0.01;
        step -= 0.05 + decelerator;
    }
    // Do whatever I need to, to mirror the points ...
}
Literally any help would be appreciated. I have briefly looked into std::normal_distribution, but it appears to rely on randomness, so I am unsure whether it would be a good fit for what I am trying to do.
You can use something called rejection sampling. The idea is that you have a function of some parameters (in your case two parameters, x and y) which represents the probability density function. In your 2D case you then generate an (x, y) pair along with a variable p representing a probability. If the probability density function is larger at those coordinates (i.e. f(x, y) > p), the sample is added; otherwise a new pair is generated. You can implement this like:
#include <functional>
#include <vector>
#include <utility>
#include <random>

std::vector<std::pair<double,double>> getDist(std::size_t num){
    std::random_device rd{};
    std::mt19937 gen{rd()};
    auto pdf = [] (double x, double y) {
        return /* some probability density function */;
    };
    std::vector<std::pair<double,double>> ret;
    double x, y, p;
    while(ret.size() < num){
        // Scale the raw generator output into the ranges wanted for x, y and p.
        x = (double)gen() / SOME_CONST_FOR_X;
        y = (double)gen() / SOME_CONST_FOR_Y;
        p = (double)gen() / SOME_CONST_FOR_P;
        if(pdf(x, y) > p) ret.push_back({x, y});
    }
    return ret;
}
This is a very crude draft but should give an idea of how this might work.
Another option (if you want a normal distribution) is std::normal_distribution. The example from the reference page can be adapted like so:
#include <random>
#include <vector>
#include <utility>

// x_center, x_std, y_center and y_std are the desired means and standard
// deviations; define them to fit your screen.
std::vector<std::pair<double,double>> getDist(std::size_t num){
    std::random_device rd{};
    std::mt19937 gen{rd()};
    std::normal_distribution<> d_x{x_center, x_std};
    std::normal_distribution<> d_y{y_center, y_std};
    std::vector<std::pair<double,double>> ret;
    while(ret.size() < num){
        ret.push_back({d_x(gen), d_y(gen)});
    }
    return ret;
}
There are various ways to approach this, depending on the exact distribution you want. Generally speaking, if you have a distribution function f(x) that gives the probability of a point at a given distance from the center, you can integrate it to get the cumulative distribution function F(x). If the CDF can be inverted, you can use the inverse CDF to map a uniform variable to distances from the center, such that you get the desired distribution. Not every function is easily inverted, though.
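As a concrete example, the cos(x) profile from the question can be inverted in closed form: on [-pi/2, pi/2] the normalized CDF is F(x) = (sin(x) + 1) / 2, so its inverse is x = asin(2u - 1). A small sketch follows (the function name and screenWidth parameter are illustrative); feeding in evenly spaced u values instead of random ones makes the output deterministic, gives exactly num points, and is mirror-symmetric about the center:

#include <cmath>
#include <vector>

// Produce `num` x positions in [0, screenWidth] distributed like cos(x)
// on [-pi/2, pi/2]: densest at the centre of the screen, sparser at the
// edges, symmetric about the centre.
std::vector<double> cosDistributedX(int num, double screenWidth)
{
    const double PI = 3.14159265358979323846;
    std::vector<double> xs;
    xs.reserve(num);
    for (int i = 0; i < num; ++i) {
        double u = (i + 0.5) / num;            // evenly spaced in (0, 1)
        double x = std::asin(2.0 * u - 1.0);   // inverse CDF, in [-pi/2, pi/2]
        xs.push_back((x / PI + 0.5) * screenWidth);  // map to screen coordinates
    }
    return xs;
}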
Another option is to fake it a little: for example, make a loop that goes from 0 to the maximum distance from the center, and for each distance use the probability function to get the expected number of points at that distance. Then add exactly that many points at randomly chosen angles, as in the sketch below. This is quite fast and the result might just be good enough.
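A sketch of that approach (every name and the linear falloff density are placeholders; because of rounding, the total count is only approximately num):

#include <cmath>
#include <random>
#include <vector>
#include <utility>

// Walk outward from the center (cx, cy) in unit steps and, at each radius,
// place the expected number of points at random angles.
std::vector<std::pair<double,double>> radialDist(int num, double maxR,
                                                 double cx, double cy)
{
    const double PI = 3.14159265358979323846;
    auto falloff = [maxR](double r) { return 1.0 - r / maxR; }; // placeholder pdf

    // Normalizing constant so the expected counts sum to roughly num.
    double total = 0.0;
    for (double r = 0.0; r < maxR; r += 1.0) total += falloff(r);

    std::mt19937 gen{std::random_device{}()};
    std::uniform_real_distribution<double> angle(0.0, 2.0 * PI);

    std::vector<std::pair<double,double>> pts;
    for (double r = 0.0; r < maxR; r += 1.0) {
        int count = static_cast<int>(std::lround(num * falloff(r) / total));
        for (int i = 0; i < count; ++i) {
            double a = angle(gen);
            pts.push_back({cx + r * std::cos(a), cy + r * std::sin(a)});
        }
    }
    return pts;
}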
Rejection sampling, as mentioned by Lala5th, is another option, giving you the desired distribution but potentially taking a long time if large areas of the screen have a very low probability. A way to ensure it finishes in bounded time is to not loop until num points have been added, but to loop over every pixel and add that pixel's coordinates if pdf(x,y) > p. The drawback is that you won't get exactly num points.
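That bounded-time variant might look like the following sketch (the cosine density just mirrors the question's idea, and the 0.05 scale factor controlling the expected point count is an arbitrary choice):

#include <cmath>
#include <random>
#include <vector>
#include <utility>

// One pass over the screen: keep each pixel with probability pdf(x, y).
// Runtime is bounded by width * height, but the number of accepted points
// is random rather than a fixed num.
std::vector<std::pair<int,int>> getDistBounded(int width, int height)
{
    const double PI = 3.14159265358979323846;
    std::mt19937 gen{std::random_device{}()};
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    std::vector<std::pair<int,int>> ret;
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            // Density in [0, 0.05]: densest at the centre column, zero at
            // the left/right edges; 0.05 scales the expected point count.
            double p = 0.05 * std::cos((static_cast<double>(x) / width - 0.5) * PI);
            if (p > uni(gen)) ret.push_back({x, y});
        }
    }
    return ret;
}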
I asked the following question here and got a good solution, but it has turned out to be too slow (it takes 200-300 ms with a 640x480 image). Now I would like to consider how it can be optimized.
Problem:
Given two polygons (always trapezoids that are parallel to the X axis), I would like to calculate some measure of how well they match up. Overlapping area alone is not sufficient, because if one polygon has excess area, that somehow needs to count against it. Optimally, I would like to know what percentage of the area covered by both polygons is common to both. See the image for an example of what is desired.
A working (but slow) solution:
- Draw polygon one on an empty image (cv::fillConvexPoly)
- Draw polygon two on an empty image (cv::fillConvexPoly)
- Perform a bitwise and to create an image of all the pixels that overlapped
- Count all nonzero pixels --> overlapped Pixels
- Invert the first image and repeat with non-inverted second --> excessive pixels
- Invert the second image and repeat with non-inverted first --> more excessive pixels
- Divide the 'overlapped pixels' by the sum of the overlapped and 'excessive pixels'
As you can see, the current solution is computationally intensive because it evaluates/operates on every single pixel of an image roughly 12 times. I would prefer a solution that calculates this area without the tedious construction and evaluation of several images.
Existing code:
#define POLYGONSCALING 0.05

typedef std::array<cv::Point, 4> Polygon;

float PercentMatch( const Polygon& polygon,
                    const cv::Mat& optimalmat )
{
    //Create blank mats
    cv::Mat polygonmat{ cv::Mat(optimalmat.rows, optimalmat.cols, CV_8UC1, cv::Scalar(0)) };
    cv::Mat resultmat{ cv::Mat(optimalmat.rows, optimalmat.cols, CV_8UC1, cv::Scalar(0)) };

    //Draw polygon
    cv::Point cvpointarray[4];
    for (int i = 0; i < 4; i++) {
        cvpointarray[i] = cv::Point(POLYGONSCALING * polygon[i].x,
                                    POLYGONSCALING * polygon[i].y);
    }
    cv::fillConvexPoly( polygonmat, cvpointarray, 4, cv::Scalar(255) );

    //Find overlapped pixels
    cv::bitwise_and(polygonmat, optimalmat, resultmat);
    int overlappedpixels { countNonZero(resultmat) };

    //Find excessive pixels
    cv::bitwise_not(optimalmat, resultmat);
    cv::bitwise_and(polygonmat, resultmat, resultmat);
    int excessivepixels { countNonZero(resultmat) };
    cv::bitwise_not(polygonmat, resultmat);
    cv::bitwise_and(optimalmat, resultmat, resultmat);
    excessivepixels += countNonZero(resultmat);

    return (100.0 * overlappedpixels) / (overlappedpixels + excessivepixels);
}
Currently the only performance improvements I've devised are drawing the 'optimalmat' outside the function so it isn't redrawn (it gets compared to many other polygons), and adding a POLYGONSCALING factor to shrink the polygons, trading resolution for performance. It is still too slow.
I may have misunderstood what you want, but I think you should be able to do it faster like this:
Fill your first trapezoid with 1's on a background of zeroes.
Fill your second trapezoid with 2's on a background of zeroes.
Add the two Mats together.
Now each pixel must be 0, 1, 2 or 3. Create an array with 4 elements and, in a single pass over all pixels with no if statements, simply increment the array element indexed by each pixel's value.
The total in element 0 of the array is the number of pixels where neither trapezoid is present, elements 1 and 2 count the pixels where only trapezoid 1 or 2 is present, and element 3 counts the overlap.
Also, try benchmarking the flood-fills of the two trapezoids; if they are a significant proportion of the time, maybe have a second thread fill the second trapezoid.
Benchmark
I wrote some code to try out the above theory, and with a 640x480 image it takes:
181 microseconds to draw the first polygon
84 microseconds to draw the second polygon
481 microseconds to calculate the overlap
So the total time is 740 microseconds on my iMac.
You could draw the second polygon in parallel with the first, but the thread creation and joining time is around 20 microseconds, so you would only save 60 microseconds there which is 8% or so - probably not worth it.
Most of the code is timing and debug:
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
#include <chrono>
using namespace cv;
using namespace std;
const int COLS=640;
const int ROWS=480;
typedef std::chrono::high_resolution_clock hrclock;
hrclock::time_point t1,t2;
std::chrono::nanoseconds elapsed;
int e;
int
main(int argc,char*argv[]){
Mat canvas1(ROWS,COLS,CV_8UC1,Scalar(0));
Mat canvas2(ROWS,COLS,CV_8UC1,Scalar(0));
Mat sum(ROWS,COLS,CV_8UC1,Scalar(0));
//Draw polygons on canvases
Point vertices1[4]={Point(10,10),Point(400,10),Point(400,460),Point(10,460)};
Point vertices2[4]={Point(300,50),Point(600,50),Point(600,400),Point(300,400)};
t1 = hrclock::now();
// FilConvexPoly takes around 150 microseconds here
fillConvexPoly(canvas1,vertices1,4,cv::Scalar(1));
t2 = hrclock::now();
elapsed = t2-t1;
e=elapsed.count();
cout << "fillConvexPoly: " << e << "ns" << std::endl;
imwrite("canvas1.png",canvas1);
t1 = hrclock::now();
// FilConvexPoly takes around 80 microseconds here
fillConvexPoly(canvas2,vertices2,4,cv::Scalar(2));
t2 = hrclock::now();
elapsed = t2-t1;
e=elapsed.count();
cout << "fillConvexPoly: " << e << "ns" << std::endl;
imwrite("canvas2.png",canvas2);
sum=canvas1+canvas2;
imwrite("sum.png",sum);
long totals[4]={0,0,0,0};
uchar* p1=sum.data;
t1 = hrclock::now();
for(int j=0;j<ROWS;j++){
uchar* data= sum.ptr<uchar>(j);
for(int i=0;i<COLS;i++) {
totals[data[i]]++;
}
}
t2 = hrclock::now();
elapsed = t2-t1;
e=elapsed.count();
cout << "Count overlap: " << e << "ns" << std::endl;
for(int i=0;i<4;i++){
cout << "totals[" << i << "]=" << totals[i] << std::endl;
}
}
Sample run
fillConvexPoly: 181338ns
fillConvexPoly: 84759ns
Count overlap: 481830ns
totals[0]=60659
totals[1]=140890
totals[2]=70200
totals[3]=35451
Verified using ImageMagick as follows:
identify -verbose sum.png | grep -A4 Histogram:
Histogram:
60659: ( 0, 0, 0) #000000 gray(0)
140890: ( 1, 1, 1) #010101 gray(1)
70200: ( 2, 2, 2) #020202 gray(2)
35451: ( 3, 3, 3) #030303 gray(3)
I'm making a little app to analyze geometry. In one part of my program, I use an algorithm that requires a convex object as input. Luckily, all my objects are initially convex, but some only barely so (see image).
After I apply some transformations, my algorithm fails (it produces "infinitely" long polygons, etc.), and I think this is because of rounding errors as in the image: the top vertex of the cylinder gets "pushed in" slightly by rounding errors (very exaggerated in the image) and the shape is no longer convex.
So my question is: does anyone know of a method to "slightly convexify" an object? Here's one method I tried to implement, but it didn't seem to work (or I implemented it wrong):
1. Average all vertices together to create a vertex C inside the convex shape.
2. Let d[v] be the distance from C to vertex v.
3. Scale each vertex v from the center C with the scale factor 1 / (1+d[v] * CONVEXIFICATION_FACTOR)
Thanks!! I have CGAL and Boost installed so I can use any of those library functions (and I already do).
You can certainly make the object convex by computing its convex hull. That will "convexify" anything, but if you're sure your input has departed only slightly from being convex, it shouldn't be a problem.
CGAL appears to have an implementation of 3D Quickhull in it, which would be the first thing to try. See http://doc.cgal.org/latest/Convex_hull_3/ for docs and some example programs. (I'm not sufficiently familiar with CGAL to want to reproduce any examples and claim they're correct.)
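For orientation only, here is a minimal sketch of the call, adapted from the linked documentation (untested; the cube points are arbitrary sample input, and the "barely convex" shapes from the question would simply have slightly perturbed coordinates):

#include <iostream>
#include <vector>
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Polyhedron_3.h>
#include <CGAL/convex_hull_3.h>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef K::Point_3 Point_3;
typedef CGAL::Polyhedron_3<K> Polyhedron;

int main()
{
    // The eight corners of a unit cube as sample input.
    std::vector<Point_3> pts = {
        Point_3(0, 0, 0), Point_3(1, 0, 0), Point_3(0, 1, 0), Point_3(0, 0, 1),
        Point_3(1, 1, 0), Point_3(1, 0, 1), Point_3(0, 1, 1), Point_3(1, 1, 1)
    };
    Polyhedron hull;
    CGAL::convex_hull_3(pts.begin(), pts.end(), hull);
    std::cout << "hull has " << hull.size_of_vertices() << " vertices\n";
    return 0;
}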
In the end I discovered the root of the problem: the convex hull contained lots of triangles, whereas my input shapes were often cube-shaped, so each quadrilateral region appeared as 2 triangles with extremely similar plane equations, which caused some sort of problem in the algorithm I was using.
I solved it by "detriangulating" the polyhedra using the code below. If anyone can spot any improvements or problems, let me know!
#include <algorithm>
#include <cmath>
#include <vector>

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Polyhedron_3.h>
#include <CGAL/convex_hull_traits_3.h>
#include <CGAL/convex_hull_3.h>

using namespace std;

// The original post omitted the kernel and polyhedron typedefs; a standard
// inexact kernel is assumed here.
typedef CGAL::Exact_predicates_inexact_constructions_kernel Kernel;
typedef Kernel::Point_3 Point;
typedef Kernel::Vector_3 Vector;
typedef Kernel::Plane_3 Plane;
typedef Kernel::Aff_transformation_3 Transformation;
typedef CGAL::Polyhedron_3<Kernel> Polyhedron;

// Builds a facet's plane equation from three of its vertices.
struct Plane_from_facet {
    Polyhedron::Plane_3 operator()(Polyhedron::Facet& f) {
        Polyhedron::Halfedge_handle h = f.halfedge();
        return Polyhedron::Plane_3(h->vertex()->point(),
                                   h->next()->vertex()->point(),
                                   h->opposite()->vertex()->point());
    }
};

// Scale-invariant measure of how far apart two plane equations are: each
// plane is rescaled by the other's largest coefficient before the squared
// coefficient differences are summed.
inline static double planeDistance(Plane &p, Plane &q) {
    double sc1 = max(abs(p.a()),
                     max(abs(p.b()),
                         max(abs(p.c()),
                             abs(p.d()))));
    double sc2 = max(abs(q.a()),
                     max(abs(q.b()),
                         max(abs(q.c()),
                             abs(q.d()))));
    Plane r(p.a() * sc2,
            p.b() * sc2,
            p.c() * sc2,
            p.d() * sc2);
    Plane s(q.a() * sc1,
            q.b() * sc1,
            q.c() * sc1,
            q.d() * sc1);
    return ((r.a() - s.a()) * (r.a() - s.a()) +
            (r.b() - s.b()) * (r.b() - s.b()) +
            (r.c() - s.c()) * (r.c() - s.c()) +
            (r.d() - s.d()) * (r.d() - s.d())) / (sc1 * sc2);
}

// Merges adjacent facets whose planes are nearly identical, turning the
// triangulated hull back into one with quadrilateral (or larger) facets.
static void detriangulatePolyhedron(Polyhedron &poly) {
    vector<Polyhedron::Halfedge_handle> toJoin;
    for (auto edge = poly.edges_begin(); edge != poly.edges_end(); edge++) {
        auto f1 = edge->facet();
        auto f2 = edge->opposite()->facet();
        if (planeDistance(f1->plane(), f2->plane()) < 1E-5) {
            toJoin.push_back(edge);
        }
    }
    for (auto edge = toJoin.begin(); edge != toJoin.end(); edge++) {
        poly.join_facet(*edge);
    }
}
...
Polyhedron convexHull;
CGAL::convex_hull_3(shape.begin(),
                    shape.end(),
                    convexHull);
transform(convexHull.facets_begin(),
          convexHull.facets_end(),
          convexHull.planes_begin(),
          Plane_from_facet());
detriangulatePolyhedron(convexHull);

// A std::vector rather than the original variable-length array,
// which is not standard C++.
vector<Plane> bounds;
bounds.reserve(convexHull.size_of_facets());
for (auto facet = convexHull.facets_begin(); facet != convexHull.facets_end(); facet++) {
    bounds.push_back(facet->plane());
}
...
This gave the desired result.
We are doing a project on vehicle counting (using OpenCV). We now have to find the Euclidean distance from the centroid of the object in one frame to its centroid in the adjacent frame. In our project we have gotten as far as finding the centroid.
I am going to assume the camera has not moved between captures, so that you do not have to worry about registration.
You should have two cv::Point objects representing the two acquired centroids. The Euclidean distance can then be calculated as follows:
#include <opencv2/core.hpp>
#include <cmath>
#include <iostream>

using namespace cv;
using namespace std;

double euclideanDist(Point2d p, Point2d q)
{
    Point2d diff = p - q;
    return std::sqrt(diff.x * diff.x + diff.y * diff.y);
}

int main(int /*argc*/, char** /*argv*/)
{
    Point2d centroid1(0.0, 0.0);
    Point2d centroid2(3.0, 4.0);
    cout << euclideanDist(centroid1, centroid2) << endl;
    return 0;
}
This outputs 5 (the 3-4-5 triangle).
Hope that helps!
If p and q hold integer coordinates (plain cv::Point), make sure to cast the (diff.x*diff.x + diff.y*diff.y) term to double or float. That way you get an accurate Euclidean distance and avoid integer overflow for large coordinates.
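For example, a variant of the helper above for integer points might look like this sketch (casting each component before the multiplication also protects against int overflow):

#include <opencv2/core.hpp>
#include <cmath>

double euclideanDistInt(cv::Point p, cv::Point q)
{
    cv::Point diff = p - q;
    // Cast before multiplying so large coordinates cannot overflow int.
    double dx = static_cast<double>(diff.x);
    double dy = static_cast<double>(diff.y);
    return std::sqrt(dx * dx + dy * dy);
}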