I have a closed polygon and I would like to fully cover it with a set of K circles of different radii such that the area covered by the circles but lying outside the polygon is minimal. This seems like an ideal candidate for linear programming. Does anybody know a standard formulation or algorithm for this problem?
You could have a look at the Smallest-circle problem article, which is equivalent to your problem with K = 1.
That Wikipedia page says that a linear-time algorithm exists; however, the algorithm described in the cited paper of Nimrod Megiddo is complicated.
So my feeling is that you might be able to state your problem as a linear program, but finding the best algorithm will be far from obvious.
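For the K = 1 case, Welzl's randomized algorithm is much simpler to implement than Megiddo's deterministic linear-time method and still runs in expected linear time. Here is a minimal C++ sketch (the structure and function names are my own, not from any particular library):

```cpp
#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

struct Pt { double x, y; };
struct Circle { Pt c; double r; };   // center and radius; r < 0 means "empty"

static bool inside(const Circle& D, const Pt& p) {
    return std::hypot(p.x - D.c.x, p.y - D.c.y) <= D.r + 1e-9;
}

// Circle with segment ab as diameter.
static Circle fromTwo(const Pt& a, const Pt& b) {
    Pt c{(a.x + b.x) / 2, (a.y + b.y) / 2};
    return {c, std::hypot(a.x - c.x, a.y - c.y)};
}

// Circumcircle of three points (falls back to two if nearly collinear).
static Circle fromThree(const Pt& a, const Pt& b, const Pt& c) {
    double d = 2 * (a.x * (b.y - c.y) + b.x * (c.y - a.y) + c.x * (a.y - b.y));
    if (std::fabs(d) < 1e-12) return fromTwo(a, b);
    double a2 = a.x*a.x + a.y*a.y, b2 = b.x*b.x + b.y*b.y, c2 = c.x*c.x + c.y*c.y;
    Pt u{(a2*(b.y - c.y) + b2*(c.y - a.y) + c2*(a.y - b.y)) / d,
         (a2*(c.x - b.x) + b2*(a.x - c.x) + c2*(b.x - a.x)) / d};
    return {u, std::hypot(a.x - u.x, a.y - u.y)};
}

// Incremental Welzl: grow the circle whenever a point falls outside it.
Circle smallestEnclosingCircle(std::vector<Pt> pts) {
    std::shuffle(pts.begin(), pts.end(), std::mt19937{42});
    Circle D{{0, 0}, -1};
    for (std::size_t i = 0; i < pts.size(); ++i) {
        if (inside(D, pts[i])) continue;
        D = {pts[i], 0};
        for (std::size_t j = 0; j < i; ++j) {
            if (inside(D, pts[j])) continue;
            D = fromTwo(pts[i], pts[j]);
            for (std::size_t k = 0; k < j; ++k)
                if (!inside(D, pts[k])) D = fromThree(pts[i], pts[j], pts[k]);
        }
    }
    return D;
}
```

The initial shuffle is what makes the expected running time linear regardless of input order.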
How is it possible to formulate a linear programming (LP) problem so that its convex hull is integral? Are there any general techniques for doing this?
In the sense of a formulation, a linear program yields a polyhedron with (in general) fractional extreme points. If you want to solve exactly this problem, there is nothing to change or manipulate in the polyhedron.
If you have a (mixed) integer linear program (MIP), you may be interested in the description of the convex hull of its integer points. In general, this can be used for a fast solution process, since you can solve its linear relaxation without performing a branch and bound process afterwards.
This means the linear relaxation of the MIP gives a polyhedron which contains this convex hull - and which need not itself have integer extreme points. In many cases you want to tighten this formulation towards the convex hull of the integer points, which is what the usual solvers do (e.g. by adding inequalities).
The aim is always to obtain a formulation of said convex hull. However, finding this formulation is generally NP-hard (so there are no known general techniques to obtain it easily). In particular, this means that the size of such a formulation (i.e., the number of inequalities) can be exponential.
There are algorithms to compute convex hulls of integer points (or of general polyhedra), but they are neither simple nor fast. Software that can help you there includes Porta and Polymake.
There are properties describing when polyhedra/formulations are integral. One of these goes by the name of total (dual) unimodularity. Formulating your problem in such a way, or identifying this property, is not easy, and I am not aware of any structural approaches for doing so.
I hope this helps :)
Kind regards,
Martin
To add a bit to Martin's answer above (I think this is too long for a comment):
There is a general procedure that I know of, called the Chvátal-Gomory procedure, which ultimately describes the convex hull by adding Gomory cuts. This is very interesting theoretically; however, there is a well-known example where this procedure takes n steps (a parameter in the LP) for a problem with two variables and two constraints, i.e. the number of cuts added cannot be bounded by the size of the problem.
Totally unimodular (TU) matrices are common in problems arising in graph theory, but this is certainly not a "general" method: you can convince yourself from the definition alone that the coefficients of the matrix A must be 0, 1 or -1 in a TU matrix, which is usually not the case in an ILP.
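To illustrate how restrictive this is: a matrix is totally unimodular exactly when every square submatrix has determinant in {-1, 0, 1} (the 1x1 submatrices already force every entry into {-1, 0, 1}). A brute-force check, feasible only for tiny matrices since it enumerates every square submatrix, could look like this (function names are mine):

```cpp
#include <algorithm>
#include <vector>

using Matrix = std::vector<std::vector<int>>;

// Determinant by Laplace expansion -- fine for the tiny submatrices below.
static long long det(const Matrix& m) {
    int n = (int)m.size();
    if (n == 1) return m[0][0];
    long long d = 0;
    for (int c = 0; c < n; ++c) {
        Matrix m2;                             // minor with row 0 / column c removed
        for (int r = 1; r < n; ++r) {
            std::vector<int> row;
            for (int cc = 0; cc < n; ++cc)
                if (cc != c) row.push_back(m[r][cc]);
            m2.push_back(row);
        }
        d += (c % 2 ? -1 : 1) * (long long)m[0][c] * det(m2);
    }
    return d;
}

// Enumerate all k-subsets of rows and columns via bitmasks and test each
// submatrix determinant.
bool isTotallyUnimodular(const Matrix& A) {
    int R = (int)A.size(), C = (int)A[0].size();
    int n = std::min(R, C);
    for (int k = 1; k <= n; ++k)
        for (int rm = 0; rm < (1 << R); ++rm) {
            if (__builtin_popcount(rm) != k) continue;
            for (int cm = 0; cm < (1 << C); ++cm) {
                if (__builtin_popcount(cm) != k) continue;
                Matrix sub;
                for (int r = 0; r < R; ++r) {
                    if (!(rm >> r & 1)) continue;
                    std::vector<int> row;
                    for (int c = 0; c < C; ++c)
                        if (cm >> c & 1) row.push_back(A[r][c]);
                    sub.push_back(row);
                }
                long long d = det(sub);
                if (d < -1 || d > 1) return false;
            }
        }
    return true;
}
```

For example, the incidence matrix of a bipartite graph passes this test, while a matrix containing a 2x2 submatrix with determinant 2 fails it.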
Of course, since solving an LP is polynomial and solving an ILP is NP-complete, one cannot expect that there is a general efficient method to do what you expect, since that would almost reduce ILP to LP!
But if you are studying a particular problem, especially one with a simple structure, it could be one of the "special cases" where one of the two methods above is effective.
I can provide further references at the end of the week if you are interested.
How would I go about checking whether a triangular polygon is present within a square area? (I.e., picture a grid of squares overlaid on a group of 2D polygons.)
Or even better, how can I determine the percentage of one of these squares that is occupied by a given polygon (if at all)?
I've used DirectX before but can't seem to find the right combination of functions in its documentation - though it feels like something from ray tracing might be relevant.
I use C++ and can use DirectX if helpful.
Thanks for any suggestions or ideas. :)
You might consider the clipper library for doing generic 2D polygon clipping, area computation, intersection testing, etc. It is fairly compact and easy to deal with, and has decent examples of how to use it.
It is an implementation of the Vatti clipping algorithm and will handle many odd edge cases (which may be overkill for you).
There are a few ways to do this and it's essentially a clipping problem.
One way is to use the Cohen–Sutherland algorithm: http://en.wikipedia.org/wiki/Cohen%E2%80%93Sutherland
You would run the algorithm three times (once for each triangle edge).
You can then find the percentage of area occupied by calculating area(clipped_triangle) / area(square_region).
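If you want the clipped polygon itself (Cohen-Sutherland clips individual segments), the closely related Sutherland-Hodgman algorithm clips the triangle against each half-plane of an axis-aligned square, and the shoelace formula then gives the area. A sketch under those assumptions (all names are mine):

```cpp
#include <cmath>
#include <vector>

struct P2 { double x, y; };

// Shoelace formula for the (unsigned) area of a simple polygon.
double polyArea(const std::vector<P2>& poly) {
    double s = 0;
    for (std::size_t i = 0; i < poly.size(); ++i) {
        const P2& a = poly[i];
        const P2& b = poly[(i + 1) % poly.size()];
        s += a.x * b.y - b.x * a.y;
    }
    return std::fabs(s) / 2;
}

// One Sutherland-Hodgman pass: keep(p) tests the half-plane, cross(a, b)
// returns where segment ab meets the half-plane boundary.
template <class Keep, class Cross>
static std::vector<P2> clipHalfPlane(const std::vector<P2>& poly, Keep keep, Cross cross) {
    std::vector<P2> out;
    for (std::size_t i = 0; i < poly.size(); ++i) {
        P2 a = poly[i], b = poly[(i + 1) % poly.size()];
        bool ka = keep(a), kb = keep(b);
        if (ka) out.push_back(a);
        if (ka != kb) out.push_back(cross(a, b));
    }
    return out;
}

// Fraction of the square [x0,x1] x [y0,y1] covered by a convex polygon.
double coveredFraction(std::vector<P2> poly, double x0, double y0, double x1, double y1) {
    auto lerpX = [](double x) { return [x](P2 a, P2 b) {
        double t = (x - a.x) / (b.x - a.x); return P2{x, a.y + t * (b.y - a.y)}; }; };
    auto lerpY = [](double y) { return [y](P2 a, P2 b) {
        double t = (y - a.y) / (b.y - a.y); return P2{a.x + t * (b.x - a.x), y}; }; };
    poly = clipHalfPlane(poly, [=](P2 p){ return p.x >= x0; }, lerpX(x0));
    if (!poly.empty()) poly = clipHalfPlane(poly, [=](P2 p){ return p.x <= x1; }, lerpX(x1));
    if (!poly.empty()) poly = clipHalfPlane(poly, [=](P2 p){ return p.y >= y0; }, lerpY(y0));
    if (!poly.empty()) poly = clipHalfPlane(poly, [=](P2 p){ return p.y <= y1; }, lerpY(y1));
    if (poly.size() < 3) return 0;
    return polyArea(poly) / ((x1 - x0) * (y1 - y0));
}
```

For example, the triangle (0,0), (1,0), (0,1) covers half of the unit square, so the returned fraction is 0.5.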
So I have an iterative closest point (ICP) algorithm that has been written and will fit a model to a point cloud. As a quick tutorial for those not in the know: ICP is a simple algorithm that fits points to a model, ultimately providing a homogeneous transformation matrix between the model and the points.
Here is a quick picture tutorial.
Step 1: Find the closest point in the model set to each point in your data set:
Step 2: Using a bunch of fun maths (sometimes based on gradient descent or SVD), pull the clouds closer together and repeat until a pose is formed:
Now that bit is simple and working. What I would like help with is:
How do I tell if the pose that I have is a good one?
So currently I have two ideas, but they are kind of hacky:
How many points are used by the ICP algorithm. I.e., if I am fitting to almost no points, I assume that the pose will be bad:
But what if the pose is actually good? It could be, even with few points. I don't want to reject good poses:
So what we see here is that few points can still produce a very good pose if they are in the right place.
The other metric I investigated was the ratio of supplied points to used points. Here's an example:
We exclude points that are too far away because they will be outliers. This means we need a good starting position for the ICP to work, but I am OK with that. In the above example the assurance check will say NO, this is a bad pose, and it would be right, because the ratio of points used to points supplied is:
2/11 < SOME_THRESHOLD
So that's good, but it will fail in the case shown above where the triangle is upside down: it will say the upside-down triangle is good because all of the points are used by ICP.
You don't need to be an expert on ICP to answer this question; I am looking for good ideas. Using knowledge of the points, how can we classify whether a pose solution is good or not?
Using both of these checks together in tandem is a reasonable suggestion, but it feels like a weak solution to me; just thresholding them is crude.
What are some good ideas for how to do this?
PS. If you want to add some code, please go for it. I am working in C++.
PPS. Someone please help me with tagging this question; I am not sure where it should fall.
One possible approach might be comparing poses by their shapes and their orientation.
Shape comparison can be done with the Hausdorff distance up to isometry; that is, poses have the same shape if
d(I(actual_pose), calculated_pose) < d_threshold
where d_threshold should be found from experiments. As isometric modifications I of X I would consider rotations by different angles - that seems to be sufficient in this case.
If poses have the same shape, we should compare their orientation. To compare orientation we could use a somewhat simplified Freksa model. For each pose we should calculate the values
{x_y min, x_y max, x_z min, x_z max, y_z min, y_z max}
and then make sure that each difference between corresponding values for the poses does not exceed another_threshold, derived from experiments as well.
Hopefully this makes some sense, or at least you can draw something useful for your purpose from this.
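For completeness, the symmetric Hausdorff distance between two finite point sets can be computed by brute force in O(n*m). A minimal sketch (names are mine, 3D points assumed):

```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

struct Pt3 { double x, y, z; };

static double dist(const Pt3& a, const Pt3& b) {
    return std::sqrt((a.x-b.x)*(a.x-b.x) + (a.y-b.y)*(a.y-b.y) + (a.z-b.z)*(a.z-b.z));
}

// Directed Hausdorff distance: max over a in A of (min over b in B of |a-b|).
static double directedHausdorff(const std::vector<Pt3>& A, const std::vector<Pt3>& B) {
    double h = 0;
    for (const Pt3& a : A) {
        double best = std::numeric_limits<double>::infinity();
        for (const Pt3& b : B) best = std::min(best, dist(a, b));
        h = std::max(h, best);
    }
    return h;
}

// Symmetric Hausdorff distance between the two poses' point sets.
double hausdorff(const std::vector<Pt3>& A, const std::vector<Pt3>& B) {
    return std::max(directedHausdorff(A, B), directedHausdorff(B, A));
}
```

To compare shapes up to rotation as suggested above, you would evaluate this over a set of candidate rotations of one pose and take the minimum.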
ICP attempts to minimize the distance between your point cloud and a model, yes? Wouldn't it make the most sense to evaluate it based on what that distance actually is after execution?
I'm assuming it tries to minimize the sum of squared distances between each point you try to fit and the closest model point. So if you want a metric for quality, why not just normalize that sum by dividing by the number of points it fits? Yes, outliers will disrupt it somewhat, but they're also going to disrupt your fit somewhat.
It seems like any calculation you can come up with that provides more insight than whatever ICP is minimizing would be more useful incorporated into the algorithm itself, so it can minimize that too. =)
Update
I think I didn't quite understand the algorithm. It seems that it iteratively selects a subset of points, transforms them to minimize error, and then repeats those two steps? In that case your ideal solution selects as many points as possible while keeping the error as small as possible.
You said combining the two terms seemed like a weak solution, but it sounds to me like an exact description of what you want, and it captures the two major features of the algorithm (yes?). Evaluating something like error + B * (1 - selected / total) is spiritually similar to how regularization is used to address the overfitting problem with gradient descent (and similar) ML algorithms. Selecting a good value for B would take some experimentation.
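A sketch of such a combined score (I penalize the unused fraction of points so that lower is uniformly better; the function name and the weight B are placeholders to be tuned experimentally):

```cpp
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

// Combined fitness for an ICP result: normalized residual error plus a
// penalty for using only a small fraction of the supplied points.
// residuals  - per-point distances of the points ICP actually kept
// totalPoints - number of points supplied to ICP
// B          - trade-off weight between the two terms (tune by experiment)
double icpScore(const std::vector<double>& residuals,
                std::size_t totalPoints,
                double B) {
    if (residuals.empty() || totalPoints == 0)
        return std::numeric_limits<double>::infinity();   // no fit at all
    double sumSq = 0;
    for (double r : residuals) sumSq += r * r;
    double rms = std::sqrt(sumSq / residuals.size());     // normalized error
    double unusedFraction = 1.0 - double(residuals.size()) / totalPoints;
    return rms + B * unusedFraction;                      // lower is better
}
```

A pose that fits few points perfectly and a pose that fits many points loosely both get a nonzero score, which is exactly the trade-off the two hand-rolled checks were trying to capture separately.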
Looking at your examples, it seems that one of the things that determines whether the match is good or not, is the quality of the points. Could you use/calculate a weighting factor in calculating your metric?
For example, you could weight down points which are collinear / coplanar, or spatially close, as they probably define the same feature. That would perhaps allow your upside-down triangle to be rejected (as its points are in a line, and that's not a great indicator of the overall pose), while the corner case would be OK, as its points roughly define the hull.
Alternatively, maybe the weighting should be on how distributed the points are around the pose, again trying to ensure you have good coverage, rather than matching small indistinct features.
Possible Duplicate:
A simple algorithm for polygon intersection
I'm looking for an outline of how to quickly calculate the intersection of two arbitrarily oriented quadrilaterals (no preset corner-angle or side-length constraints). I am not looking to simply check whether they intersect; I wish to get the points making up the resulting intersection region. I know that in general polygon intersection isn't a trivial problem, and there are libraries available that do a good job.
But since in this special case I'm only concerned with four-sided shapes, I was wondering if there is a quick method I could use without pulling an entire additional library into my application.
So far all I've thought of is:
Run 'point in polygon' on both shapes with respect to each other
Intersect each edge of each polygon with each other
Do the above two steps definitively get me all the points that make up the resulting intersection region? Is there a better method to use?
Also it would be nice if I could get the correct ordering of the points that make up the resulting region. It's not mandatory -- if you are aware of any clever/quick ways of doing this bit (convex hull?) I'd appreciate any suggestions.
You didn't state whether the two quadrilaterals are convex or not; if they are, you could use a regular convex-polygon intersection algorithm such as http://www.iro.umontreal.ca/~plante/compGeom/algorithm.html
From what I can gather, it doesn't require any exotic data structures or operations, so it shouldn't be difficult to implement.
Intersection of convex polygons is relatively easy. Google it, there's a lot of resources both on SO and elsewhere.
Not all quadrilaterals are convex, though. The intersection of two non-convex quadrilaterals can consist of several disconnected polygons, and having just their points will give you very little; but if that's what you need, go ahead and intersect each pair of edges. It will be much easier and faster than any general method.
Even for convex shapes, the dumb brute-force method may be faster. You have to do some testing to find out what works best for you.
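The edge-pair intersection step mentioned above can be sketched with the standard parametric segment test (names are mine; collinear overlaps are simply skipped here):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct V2 { double x, y; };

// Writes the intersection point to `out` and returns true if segments
// ab and cd properly intersect; parallel/collinear pairs return false.
bool segmentIntersect(V2 a, V2 b, V2 c, V2 d, V2& out) {
    double rx = b.x - a.x, ry = b.y - a.y;
    double sx = d.x - c.x, sy = d.y - c.y;
    double denom = rx * sy - ry * sx;
    if (std::fabs(denom) < 1e-12) return false;
    double t = ((c.x - a.x) * sy - (c.y - a.y) * sx) / denom;
    double u = ((c.x - a.x) * ry - (c.y - a.y) * rx) / denom;
    if (t < 0 || t > 1 || u < 0 || u > 1) return false;
    out = {a.x + t * rx, a.y + t * ry};
    return true;
}

// All intersection points between the edges of two quadrilaterals,
// each given as its 4 vertices in order.
std::vector<V2> edgeIntersections(const std::vector<V2>& q1, const std::vector<V2>& q2) {
    std::vector<V2> pts;
    for (std::size_t i = 0; i < q1.size(); ++i)
        for (std::size_t j = 0; j < q2.size(); ++j) {
            V2 p;
            if (segmentIntersect(q1[i], q1[(i + 1) % q1.size()],
                                 q2[j], q2[(j + 1) % q2.size()], p))
                pts.push_back(p);
        }
    return pts;
}
```

Combined with the point-in-polygon tests from the question (to collect the corners of each quad that lie inside the other), this gives all vertices of the intersection region; for convex inputs, sorting them by angle around their centroid orders them.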
I've got a point in a 2D image - for example the red dot in the given picture - and a set of n points (the blue dots) (x1,y1)...(xn,yn), and I want to find the point nearest to (x0,y0) in a way better than trying all points; ideally the best possible solution. I would appreciate it if you could share any similar class you have.
There are many approaches to this, the most common probably being using some form of space partitioning to speed up the search so that it is not O(n). For details, see Nearest neighbor search on Wikipedia.
Most solutions we could suggest would depend on a bit more knowledge, but I am going to go out on a limb and say that unless you already know you are short on time - i.e. there are tens of thousands of blue dots, or you have to do thousands of these lookups in a short time - a linear search will serve you well enough.
Don't bother calculating the actual distance: save yourself the square root and use the squared distance for the comparison.
Most other methods use more complex data structures to sort the points with respect to their geometric arrangement, but they are a lot harder to implement.
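A linear search along those lines, comparing squared distances so no square root is ever taken (names are mine):

```cpp
#include <cstddef>
#include <limits>
#include <vector>

struct Dot { double x, y; };

// Linear scan for the nearest point: squared distance is monotone in the
// true distance, so it is enough for comparison.
std::size_t nearestIndex(const Dot& query, const std::vector<Dot>& pts) {
    std::size_t best = 0;
    double bestD2 = std::numeric_limits<double>::infinity();
    for (std::size_t i = 0; i < pts.size(); ++i) {
        double dx = pts[i].x - query.x, dy = pts[i].y - query.y;
        double d2 = dx * dx + dy * dy;
        if (d2 < bestD2) { bestD2 = d2; best = i; }
    }
    return best;
}
```

If this ever becomes the bottleneck, a k-d tree or grid (the space-partitioning structures mentioned above) drops the per-query cost below O(n).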