Hey, I have been given a problem: I have a piece of grid paper of arbitrary size and have to build a distance matrix using only the coordinates of each grid point on the page.
I'm thinking the best approach would be something like the Floyd-Warshall or Dijkstra shortest-path algorithms, but I don't know how to adapt them to coordinate distances, as all the documentation assumes a pre-determined distance matrix. So any help would be grand.
The distance matrix simply contains, for each point, the distances to all other points.
Basically, you just have to calculate the distances using an appropriate metric. If you want the "normal" (Euclidean) distance, it's sqrt((x1-x2)^2 + (y1-y2)^2), where (x, y) are the coordinates of a point in mm or inches. If you want the distance on the paper following only the grid lines (the Manhattan distance), it's |x1-x2| + |y1-y2|.
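For example, with NumPy you could build both matrices directly from the coordinate list; a minimal sketch (the 3x4 grid is only a placeholder):

import numpy as np

# grid points as an (N, 2) array of (x, y) coordinates
points = np.array([(x, y) for y in range(3) for x in range(4)], dtype=float)

diff = points[:, None, :] - points[None, :, :]   # (N, N, 2) pairwise differences
euclidean = np.sqrt((diff ** 2).sum(axis=-1))    # sqrt((x1-x2)^2 + (y1-y2)^2)
manhattan = np.abs(diff).sum(axis=-1)            # |x1-x2| + |y1-y2|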
Graph algorithms would be overkill unless you have walls on the paper.
Let's define a curve as a set of 2D points which can be computed to arbitrary precision. For instance, this is a curve:
A set of N intersecting curves is given (N can be arbitrarily large), like in the following image:
How do I find the perimeter of the connected area (a bounding box is given if necessary) which is delimited by the set of curves, i.e., in the example above, the red curve? Note that the perimeter can be concave and has no obvious parametrization.
A starting point on the red curve can be given.
However, I am interested in efficient ideas for building up a generic algorithm...
I am coding in C++ and I can use any open-source library to help with this.
I do not know if this problem has a name or if there is a ready-made solution, in case please let me know and I will edit the title and the tags.
Additional notes:
The solution is unique as in the region of interest there is only a single connected area which is free from any curve, but of course I can only compute a finite number of curves.
The curves are originally parametrized (and then affine transformations are applied), so I can add as many points as I want. I can compute distances and lengths and walk along the curves. Intersections are also feasible. Basically, any geometric operation that can be built up from point coordinates is acceptable.
I have found that a similar problem is encountered when "cutting" gears, e.g. https://scialert.net/fulltext/?doi=jas.2014.362.367, but I still do not see how to solve it in a decently efficient way.
If the curves are given in order, you can find the pairwise intersections between successive curves. Depending on their nature, an analytical or numerical solution will do.
Then a first approximation of the envelope is the polyline through these points.
Another approximation can be obtained by drawing the common tangent to successive curves, and by intersecting those tangents pairwise. The common tangent problem is more difficult, anyway.
If the equations of the curves are known in terms of a single parameter, you can find the envelope curve by solving a differential equation, obtained by eliminating the parameter between the implicit equation of the curve and this equation differentiated wrt the parameter. You can integrate this equation numerically.
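As a small illustration of the eliminate-the-parameter step, here is a SymPy sketch on a made-up family of lines x*cos(t) + y*sin(t) = 1, whose envelope is the unit circle; solving F = 0 together with dF/dt = 0 gives the envelope point for each value of the parameter:

import sympy as sp

x, y, t = sp.symbols('x y t')
F = x * sp.cos(t) + y * sp.sin(t) - 1   # implicit equation of the family
dF = sp.diff(F, t)                      # the same equation differentiated w.r.t. the parameter

# solve F = 0 and dF/dt = 0 simultaneously to get the envelope point for each t
sol = sp.solve([F, dF], [x, y], dict=True)[0]
print(sp.simplify(sol[x]), sp.simplify(sol[y]))  # cos(t), sin(t): the unit circle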
When I run into such problems (the math is not enough or is terribly tricky), I decompose each curve into segments.
Then I search for segment-segment intersections, for example a segment in curve Ci against all of the segments in curve Cj. You can even replace a segment with its bounding box and do a box-box intersection for a quick discard, focusing on those boxes that do intersect.
This gives a rough approximation of the curve-curve intersections.
Apart from intersections, you can search for max/min coordinates, also approximated with segments or boxes.
Once you get a decent approximation, you can refine it by reducing the length/size of the segments and boxes and repeating the intersection (or max/min) checks.
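A rough Python sketch of the quick-discard plus exact test (boxes_overlap and segment_intersection are just illustrative helper names; each segment is a pair of (x, y) endpoints):

def boxes_overlap(a, b, eps=1e-12):
    # do the axis-aligned bounding boxes of two segments overlap?
    (ax1, ay1), (ax2, ay2) = a
    (bx1, by1), (bx2, by2) = b
    return (min(ax1, ax2) <= max(bx1, bx2) + eps and min(bx1, bx2) <= max(ax1, ax2) + eps and
            min(ay1, ay2) <= max(by1, by2) + eps and min(by1, by2) <= max(ay1, ay2) + eps)

def segment_intersection(p, q):
    # solve p0 + s*(p1 - p0) = q0 + t*(q1 - q0) for s, t in [0, 1]
    (x1, y1), (x2, y2) = p
    (x3, y3), (x4, y4) = q
    d = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    if abs(d) < 1e-12:
        return None                      # parallel or degenerate
    s = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / d
    t = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / d
    if 0 <= s <= 1 and 0 <= t <= 1:
        return (x1 + s * (x2 - x1), y1 + s * (y2 - y1))
    return None

def curve_intersections(segments_i, segments_j):
    # only run the exact test on segment pairs whose boxes intersect
    hits = []
    for a in segments_i:
        for b in segments_j:
            if boxes_overlap(a, b):
                p = segment_intersection(a, b)
                if p is not None:
                    hits.append(p)
    return hits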
You can get an approximate solution using a grid. First, find a bounding box for the curves, then grid the inside of the bounding box and search over the cells to find the specified area. Finally, use the number of cells along the perimeter to approximate its value (as the size of the cells is known).
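A tiny sketch of the cell-counting step, assuming inside is a 2D boolean NumPy array marking the gridded cells that belong to the free area:

import numpy as np

def approx_perimeter(inside, cell_size):
    # each inside cell contributes one edge of length cell_size per outside neighbour
    padded = np.pad(inside, 1, constant_values=False)
    edges = 0
    for axis in (0, 1):
        for shift in (1, -1):
            neighbour = np.roll(padded, shift, axis=axis)
            edges += np.count_nonzero(padded & ~neighbour)
    return edges * cell_size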
I have a binary image/contour containing four human beings, and I want to detect/count all of them. Since there are occlusions, I think it is best to find the heads/maxima in the contour of the humans; that way they can be counted.
I am able to get the global maximum / topmost point (in calculus terms), but I want to get all the local maxima.
The code for finding the topmost point is as suggested by Adrian in his blog post, i.e.:
# index of the contour point with the smallest y-value, i.e. the topmost pixel
topmost = tuple(biggest_contour[biggest_contour[:, :, 1].argmin()][0])
Can anyone please suggest how to get all the local maxima, instead of just the topmost location?
Here is a sample of my image:
The definition of "local maximum" can be tricky to pin down, but if you start with a simple method you'll develop an intuition to look further. Even if there are methods available on the web to do this work for you, it's worth implementing a few basic techniques yourself before you go googling.
One simple method I've used in the past goes something like this:
Find the contours as arrays/lists/containers of (x,y) coordinates.
At each element N (a pixel) in the list, get the pixels at N - D and N + D; that is, the pixel D ahead of the current pixel and the pixel D behind it
Calculate the point-to-point distance
Calculate the distance along the contour from N-D to N+D
Calculate (distanceAlongContour)/(point-to-point distance)
...
There are numerous other ways to do this, but this is quick to implement from scratch, and I think a reasonable starting point: Compare the "geodesic" distance and the Euclidean distance.
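A minimal sketch of that comparison, assuming contour is an OpenCV-style array of shape (N, 1, 2); D and the ratio threshold are parameters you would have to tune:

import numpy as np

def curved_points(contour, D=15, ratio_thresh=1.3):
    pts = contour.reshape(-1, 2).astype(float)
    n = len(pts)
    # length of each step along the (closed) contour
    seg = np.linalg.norm(np.roll(pts, -1, axis=0) - pts, axis=1)
    candidates = []
    for i in range(n):
        a, b = pts[(i - D) % n], pts[(i + D) % n]
        chord = np.linalg.norm(b - a)                          # point-to-point distance
        arc = sum(seg[(i - D + k) % n] for k in range(2 * D))  # distance along the contour
        if chord > 1e-9 and arc / chord > ratio_thresh:
            candidates.append(i)                               # strongly curved, e.g. a head tip
    return candidates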
A few other possibilities:
Do a bunch of curve fits to chunks of pixels from the contour. (Lots of details to investigate here.)
Use Ramer-Douglas-Peucker to render the outlines as polygons, then choose parameters to ensure those polygons are appropriately simplified. (Second time I've mentioned R-D-P today; it's handy.) Check for vertices with angles that deviate much from 180 degrees (see the sketch after this list).
Try a corner detector. Crude, but easy to implement.
Implement an edge follower that moves from one pixel to the next in the contour list, and calculate some kind of "inertia" as the pixel shifts direction. This wouldn't be useful on a pixel-by-pixel basis, but you could compare, say, pixels N-1,N,N+1 to pixels N+1,N+2,N+3. Or just calculate the angle between them.
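For the Ramer-Douglas-Peucker option above, a brief sketch using OpenCV's approxPolyDP; epsilon and the angle cut-off are made-up values you would need to tune:

import cv2
import numpy as np

def sharp_vertices(contour, epsilon=5.0, max_angle_deg=150.0):
    poly = cv2.approxPolyDP(contour, epsilon, True).reshape(-1, 2).astype(float)
    out = []
    for i in range(len(poly)):
        p0, p1, p2 = poly[i - 1], poly[i], poly[(i + 1) % len(poly)]
        v1, v2 = p0 - p1, p2 - p1
        cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
        angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
        if angle < max_angle_deg:        # vertex deviates noticeably from a straight 180 degrees
            out.append(tuple(p1))
    return out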
I have some 3D points that roughly, but clearly, form a segment of a circle. I now have to determine the circle that best fits all the points. I think there has to be some sort of least-squares best fit, but I can't figure out how to start.
The points are sorted the way they would be situated on the circle. I also have an estimated curvature at each point.
I need the radius and the plane of the circle.
I have to work in C/C++ or use an external script.
You could use a Principal Component Analysis (PCA) to map your coordinates from three dimensions down to two dimensions.
Compute the PCA and project your data onto the first two principal components. You can then use any 2D algorithm to find the centre of the circle and its radius. Once these have been found/fitted, you can project the centre back into 3D coordinates.
Since your data is noisy, there will still be some data in the third dimension you squeezed out, but bear in mind that the PCA chooses this dimension so as to minimize the amount of data lost, i.e. by maximizing the amount of data that is represented in the first two components, so you should be safe.
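A minimal NumPy sketch of that pipeline (SVD for the PCA, then an algebraic Kasa-style least-squares circle fit in the projected plane), assuming pts is an (N, 3) array; the same steps carry over to C++:

import numpy as np

def fit_circle_3d(pts):
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)   # rows of vt are the principal axes
    uv = (pts - centroid) @ vt[:2].T           # project onto the first two components

    # algebraic least-squares circle fit in 2D: x^2 + y^2 + a*x + b*y + c = 0
    A = np.column_stack([uv, np.ones(len(uv))])
    rhs = -(uv ** 2).sum(axis=1)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    center2d = np.array([-a / 2.0, -b / 2.0])
    radius = np.sqrt(center2d @ center2d - c)

    center3d = centroid + center2d @ vt[:2]    # lift the centre back into 3D
    normal = vt[2]                             # plane normal = least-significant axis
    return center3d, radius, normal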
A good algorithm for such data fitting is RANSAC (Random sample consensus). You can find a good description in the link so this is just a short outline of the important parts:
In your special case the model would be the 3D circle. To build it up, pick three random non-collinear points from your set, compute the hyperplane they are embedded in (cross product), project the random points onto the plane and then apply the usual 2D circle fitting. With this you get the circle centre, the radius and the hyperplane equation. Now it's easy to check the support given by each of the remaining points. The support may be expressed as a distance from the circle that consists of two parts: the orthogonal distance from the plane and the distance from the circle boundary inside the plane.
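A compressed Python sketch of that loop (the same structure translates to C++), assuming pts is an (N, 3) NumPy array; the tolerance and iteration count are made-up defaults:

import numpy as np

def circle_through_3_points(p0, p1, p2):
    u = p1 - p0
    n = np.cross(u, p2 - p0)                    # plane normal from the cross product
    if np.linalg.norm(n) < 1e-12:
        return None                             # collinear sample, no circle
    n = n / np.linalg.norm(n)
    u = u / np.linalg.norm(u)
    v = np.cross(n, u)                          # in-plane basis (u, v)
    bx = (p1 - p0) @ u
    cx, cy = (p2 - p0) @ u, (p2 - p0) @ v
    x = bx / 2.0                                # circumcenter in (u, v) coordinates
    y = (cx * cx + cy * cy - bx * cx) / (2.0 * cy)
    return p0 + x * u + y * v, float(np.hypot(x, y)), n

def ransac_circle(pts, iters=500, tol=0.05, seed=0):
    rng = np.random.default_rng(seed)
    best, best_support = None, -1
    for _ in range(iters):
        model = circle_through_3_points(*pts[rng.choice(len(pts), 3, replace=False)])
        if model is None:
            continue
        center, radius, n = model
        d_plane = (pts - center) @ n                      # orthogonal distance to the plane
        in_plane = pts - center - np.outer(d_plane, n)
        d_circle = np.abs(np.linalg.norm(in_plane, axis=1) - radius)
        support = np.count_nonzero(np.hypot(d_plane, d_circle) < tol)
        if support > best_support:
            best, best_support = model, support
    return best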
Edit:
The reason why I would prefer RANSAC over ordinary least squares (LS) is its superior stability in the case of heavy outliers. The following image shows an example comparison of LS vs. RANSAC: while the ideal model line is recovered by RANSAC, the dashed line is what LS produces.
The arguably easiest approach is called least-squares curve fitting.
You may want to check the math,
or look at similar questions, such as polynomial least squares for image curve fitting
However I'd rather use a library for doing it.
I'm currently researching a way to implement smoothing of bone-vertex weights (skin weights for joint deformations) and coming up empty on methods that use geodesic (surface) distances between vertices within a parametric distance set by the user.
So far, someone has mentioned the possible use of Dijkstra's Algorithm for getting approximate geodesic distances - but it has limitations over certain types of mesh topology.
The only paper that I found specifically on this issue (so-called "Bone-vertex weight smoothing") uses Laplacian Smoothing of weights on a skinned mesh, but it only considers the one-ring neighboring vertices to each vertex which does not satisfy my need to include vertices up to a distance (shortest geodesic distance):
L(Wi) = 1/m * Sum(j from 0 to m-1)(Wj - Wi)
where the vertices j are the one-ring neighbors of vertex i, m is the number of neighboring vertices, and W is the weight on a vertex.
What I am envisioning is a modified Laplacian Smoothing wherein all of the vertices found to be within the parametric distance are used but the distance needs to be a factor also. Maybe just multiply the weight influence by the parametric distance minus the distance between the current vertex and the one being used in the sum. Something like this, maybe:
Wmj = Wj * (maxDistance - Dji)
L(Wi) = 1/m * Sum(j from 0 to m-1)(Wmj - Wi)
so that the influence of the smoothing by Wj is reduced (falls off) with its vertex distance (Dji). Of course, vertices at maxDistance will have no influence and might need to be excluded from m.
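In rough Python-like pseudocode, assuming a hypothetical geo_neighbors(i, maxDistance) helper that returns (j, Dji) pairs within the geodesic radius, the idea would be something like:

def smooth_weights(weights, geo_neighbors, max_distance):
    new_weights = list(weights)
    for i, wi in enumerate(weights):
        # only vertices strictly inside the radius contribute (those at maxDistance fall out)
        neighbors = [(j, d) for j, d in geo_neighbors(i, max_distance) if d < max_distance]
        if not neighbors:
            continue
        # Wmj = Wj * (maxDistance - Dji), then L(Wi) = 1/m * Sum(Wmj - Wi)
        lap = sum(weights[j] * (max_distance - d) - wi for j, d in neighbors) / len(neighbors)
        new_weights[i] = wi + lap          # apply one smoothing step
    return new_weights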
Would this work?
The first thought that came to my mind was projection. Start by getting the line representing the Euclidean distance between your start point and end point (going through the mesh), then project that onto the mesh. But I realized that won't work in certain situations. For the benefit of others, one such situation is if the start point is on one side of a deep pit and the target is on the opposite side; the shortest distance would then be around the rim rather than straight through. This may still be adequate for you, depending on the types of meshes you are working with, so I can elaborate a more complete approach along these lines if it is good enough for you.
So then my thought was to subdivide and then use search. I would use adaptive subdivision, i.e. split edges until all edges are shorter than some threshold. From that point you can use Dijkstra's, A*, or any number of other search methods. This gets around the problem of skinny triangles, because edges will be subdivided until they are small, so there will be no long, skinny edges.
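A minimal sketch of the search step, assuming edges maps each vertex index of the subdivided mesh to a list of (neighbor, edge_length) pairs:

import heapq

def approx_geodesic_distances(edges, source):
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist.get(v, float('inf')):
            continue                       # stale heap entry
        for w, length in edges[v]:
            nd = d + length
            if nd < dist.get(w, float('inf')):
                dist[w] = nd
                heapq.heappush(heap, (nd, w))
    return dist                            # shortest edge-path distance to every reachable vertex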
I know the distance to various points on a plane, as it is being viewed from an angle. I want to find the equation for this plane from just that information (5 to 15 different points, as many as necessary).
I will later use the equation for the plane to estimate what the distance to the plane should be at different points; in order to prove that it is roughly flat.
Unfortunately, a google search doesn't bring much up. :(
If you indeed know distances and not coordinates, then it is an ill-posed problem: there are infinitely many planes that have points at any given set of distances from the origin.
This is easy to verify. Let's take the shortest distance D0 from the set of given distances {D0..DN-1} and construct a plane with normal vector {D0,0,0} (a vector of length D0 along the x-axis). For each of the remaining lengths we now have an infinite number of points that lie in this plane (forming in-plane circles around the point (D0,0,0)). Moreover, we can rotate all vectors by an arbitrary angle and get a new plane.
Here is a simple picture in 2D (distances to a line; it's simpler to draw ;) ).
As we can see, there are TWO points on the line for each distance D1..DN-1 > D0 - one is shown for each of D1 and D2, and the other two points for these distances would be placed in the 4th quadrant (+x, -y). Moreover, we can rotate our line around the origin by an arbitrary angle and still satisfy the given distances.
I'm going to skip over the process of finding the best fit plane, it's been handled in some other answers, and talk about something else.
"Prove" takes us into statistical inference. The way this is done is you make a formal hypothesis "the surface is flat" and then see if the data supports rejecting this hypothesis at some confidence level.
So you can wind up saying "I'm not even 1% sure that the surface isn't flat" -- but you can't ever prove that it's flat.
Geometry? Sounds like a job for math.SE! What form will the equation take? Will it be a plane?
I will assume you want an accurate solution.
Find the absolute positions with geometry
Make a best fit regression line in C++ in 2 of the 3 dimensions.