OpenCV groupRectangles - getting grouped and ungrouped rectangles - c++

I'm using OpenCV and want to group together rectangles that have significant overlap. I've tried using groupRectangles for this, which takes a group threshold argument. With a threshold of 0 it doesn't do any grouping at all, and with a threshold of 1 it only returns rectangles that were formed from at least 2 input rectangles. For example, given the rectangles on the left in the image below you end up with the 2 rectangles on the right:
What I'd like to end up with is 3 rectangles. The 2 on the right in the image above, plus the rectangle in the top right of the image to the left that doesn't overlap with any other rectangles. What's the best way to achieve this?

The solution I ended up going with was to duplicate all of the initial rectangles before calling groupRectangles. That way every input rectangle is guaranteed to be grouped with at least one other rectangle, and will appear in the output:
int size = rects.size();
for( int i = 0; i < size; i++ )
{
    rects.push_back(Rect(rects[i]));
}
groupRectangles(rects, 1, 0.2);

A little late to the party, but the "duplicating" solution did not work properly for me. I also had another problem: merged rectangles would overlap each other and would need to be merged in turn.
So I came up with an overkill solution (it may require a C++14 compiler). Here's a usage example:
std::vector<cv::Rect> rectangles, test1, test2, test3;
rectangles.push_back(cv::Rect(cv::Point(5, 5), cv::Point(15, 15)));
rectangles.push_back(cv::Rect(cv::Point(14, 14), cv::Point(26, 26)));
rectangles.push_back(cv::Rect(cv::Point(24, 24), cv::Point(36, 36)));
rectangles.push_back(cv::Rect(cv::Point(37, 20), cv::Point(40, 40)));
rectangles.push_back(cv::Rect(cv::Point(20, 37), cv::Point(40, 40)));
test1 = rectangles;
test2 = rectangles;
test3 = rectangles;
//Output format: {Rect(x, y, width, height), ...}
//Merge once
mergeRectangles(test1);
//Output rectangles: test1 = {Rect(5, 5, 31, 31), Rect(20, 20, 20, 20)}
//Merge until there are no rectangles to merge
mergeRectangles(test2, true);
//Output rectangles: test2 = {Rect(5, 5, 35, 35)}
//Override default merge (intersection) function to merge all rectangles
mergeRectangles(test3, false, [](const cv::Rect& r1, const cv::Rect& r2) {
    return true;
});
//Output rectangles: test3 = {Rect(5, 5, 35, 35)}
Function:
#include <algorithm>
#include <functional>
#include <vector>
#include <opencv2/opencv.hpp>

void mergeRectangles(std::vector<cv::Rect>& rectangles, bool recursiveMerge = false, std::function<bool(const cv::Rect& r1, const cv::Rect& r2)> mergeFn = nullptr) {
    static auto defaultFn = [](const cv::Rect& r1, const cv::Rect& r2) {
        return (r1.x < (r2.x + r2.width) && (r1.x + r1.width) > r2.x && r1.y < (r2.y + r2.height) && (r1.y + r1.height) > r2.y);
    };
    static auto innerMerger = [](std::vector<cv::Rect>& rectangles, std::function<bool(const cv::Rect& r1, const cv::Rect& r2)>& mergeFn) {
        std::vector<std::vector<std::vector<cv::Rect>::const_iterator>> groups;
        std::vector<cv::Rect> mergedRectangles;
        bool merged = false;
        // Not static: it captures the locals above by reference, and a static
        // lambda would keep dangling references to the first call's locals.
        auto findIterator = [&](std::vector<cv::Rect>::const_iterator iteratorToFind) {
            for (auto groupIterator = groups.begin(); groupIterator != groups.end(); ++groupIterator) {
                auto foundIterator = std::find(groupIterator->begin(), groupIterator->end(), iteratorToFind);
                if (foundIterator != groupIterator->end()) {
                    return groupIterator;
                }
            }
            return groups.end();
        };
        for (auto rect1_iterator = rectangles.begin(); rect1_iterator != rectangles.end(); ++rect1_iterator) {
            auto groupIterator = findIterator(rect1_iterator);
            if (groupIterator == groups.end()) {
                groups.push_back({rect1_iterator});
                groupIterator = groups.end() - 1;
            }
            for (auto rect2_iterator = rect1_iterator + 1; rect2_iterator != rectangles.end(); ++rect2_iterator) {
                if (mergeFn(*rect1_iterator, *rect2_iterator)) {
                    groupIterator->push_back(rect2_iterator);
                    merged = true;
                }
            }
        }
        for (auto groupIterator = groups.begin(); groupIterator != groups.end(); ++groupIterator) {
            auto groupElement = groupIterator->begin();
            int x1 = (*groupElement)->x;
            int x2 = (*groupElement)->x + (*groupElement)->width;
            int y1 = (*groupElement)->y;
            int y2 = (*groupElement)->y + (*groupElement)->height;
            while (++groupElement != groupIterator->end()) {
                if (x1 > (*groupElement)->x)
                    x1 = (*groupElement)->x;
                if (x2 < (*groupElement)->x + (*groupElement)->width)
                    x2 = (*groupElement)->x + (*groupElement)->width;
                if (y1 > (*groupElement)->y)
                    y1 = (*groupElement)->y;
                if (y2 < (*groupElement)->y + (*groupElement)->height)
                    y2 = (*groupElement)->y + (*groupElement)->height;
            }
            mergedRectangles.push_back(cv::Rect(cv::Point(x1, y1), cv::Point(x2, y2)));
        }
        rectangles = mergedRectangles;
        return merged;
    };
    if (!mergeFn)
        mergeFn = defaultFn;
    while (innerMerger(rectangles, mergeFn) && recursiveMerge);
}

By checking out groupRectangles() in opencv-3.3.0 source code:
if( groupThreshold <= 0 || rectList.empty() )
{
    // ......
    return;
}
I saw that if groupThreshold is set to less than or equal to 0, the function just returns without doing any grouping.
On the other hand, the following code removes all rectangles which don't have more than groupThreshold similar rectangles.
// filter out rectangles which don't have enough similar rectangles
if( n1 <= groupThreshold )
    continue;
That explains why with groupThreshold=1 only rectangles with at least 2 overlapping rectangles appear in the output.
One possible solution could be to modify the source code shown above (replacing n1 <= groupThreshold with n1 < groupThreshold) and re-compile OpenCV.
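For reference, this is how the patched filter would read in context (a sketch against the opencv-3.3.0 sources; I believe groupRectangles() lives in modules/objdetect/src/cascadedetect.cpp, but verify against your own checkout):
// filter out rectangles which don't have enough similar rectangles
if( n1 < groupThreshold )   // was: n1 <= groupThreshold, so singletons now survive
    continue;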

Related

Outline of pixels after detecting object (without convex hull)

The idea is to use grabcut (OpenCV) to detect the image inside a rectangle and create a geometry with Direct2D.
My test image is this:
Performing the grab cut results in this image:
The idea is to outline it. I could use an opacity brush to exclude it from the background, but I want to use a geometric brush so I can append/widen/combine geometries on it like all other selections in my editor (polygon, lasso, rectangle, etc.).
If I apply the convex hull algorithm to the points, I get this:
Which of course is not desired for my case. How do I outline the image?
After getting the image from the grabcut, I keep the points based on luminance:
DWORD* pixels = ...
for (UINT y = 0; y < he; y++)
{
    for (UINT x = 0; x < wi; x++)
    {
        DWORD& col = pixels[y * wi + x];
        auto lumthis = lum(col);
        if (lumthis > Lum_Threshold)
        {
            points.push_back({x, y});
        }
    }
}
Then I sort the points on Y and X:
std::sort(points.begin(), points.end(), [](D2D1_POINT_2F p1, D2D1_POINT_2F p2) -> bool
{
    if (p1.y < p2.y)
        return true;
    if ((int)p1.y == (int)p2.y && p1.x < p2.x)
        return true;
    return false;
});
Then, traversing the point array from top Y to bottom Y, I create "groups" (sections) for each scanline:
struct SECTION
{
    float left = 0, right = 0;
};
auto findgaps = [](D2D1_POINT_2F* p, size_t n) -> std::vector<SECTION>
{
    std::vector<SECTION> j;
    SECTION* jj = 0;
    for (size_t i = 0; i < n; i++)
    {
        if (i == 0)
        {
            SECTION jp;
            jp.left = p[i].x;
            jp.right = p[i].x;
            j.push_back(jp);
            jj = &j[j.size() - 1];
            continue;
        }
        if ((p[i].x - jj->right) < 1.5f)
        {
            jj->right = p[i].x;
        }
        else
        {
            SECTION jp;
            jp.left = p[i].x;
            jp.right = p[i].x;
            j.push_back(jp);
            jj = &j[j.size() - 1];
        }
    }
    return j;
};
I'm stuck at this point. I know that from an arbitrary set of points many polygons are possible, but in my case the points have defined what's "left" and what's "right". How would I proceed from here?
For anyone interested, the solution is OpenCV contours. Working example here.
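For reference, a minimal sketch of that contour approach (assuming the grabcut result is available as a grayscale image; the file names are placeholders):
#include <opencv2/opencv.hpp>
int main()
{
    // Hypothetical input: the grabcut result as a single-channel image.
    cv::Mat gray = cv::imread("grabcut_result.png", cv::IMREAD_GRAYSCALE);
    // Threshold on luminance, as in the question.
    cv::Mat mask = gray > 128;
    // Retrieve only the external outline of each blob; each contour is the
    // ordered list of points of one outline (no convex hull involved).
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    cv::Mat outlined;
    cv::cvtColor(gray, outlined, cv::COLOR_GRAY2BGR);
    cv::drawContours(outlined, contours, -1, cv::Scalar(0, 255, 0), 2);
    cv::imwrite("outlined.png", outlined);
    return 0;
}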

How to merge bounding boxes with groupRectangle?

I have an image with bounding boxes like so:
I want to merge overlapping bounding boxes.
I tried: cv::groupRectangles(detected, 1, 0.8)
My expectation was that I get a single box for each cluster.
But I got this:
As you can see, the problem is, there is no box for the dartboard in the middle and for the right one.
How do I resolve this? I would preferably like to use the OpenCV api rather than coding my own merging algorithm.
I see that it eliminates regions bounded by exactly one box. I want it to not do that.
I have tried tweaking the parameters randomly but I've gotten much worse results. I would love some guidance in the right direction.
How to define overlapping rectangles?
We need a way to define when two rectangles overlap. We can use the & intersection operator to find the intersection of the two rectangles, and check that it's not empty:
bool overlap(const cv::Rect& lhs, const cv::Rect& rhs) {
    return (lhs & rhs).area() > 0;
}
If we want to ignore small intersections, we can use a threshold over the intersection area:
bool overlap(const cv::Rect& lhs, const cv::Rect& rhs, int th) {
    return (lhs & rhs).area() > th;
}
But now the threshold depends on the dimensions of the rectangles. We can use the "Intersection over Union" (IoU) metric, which is in the range [0, 1], and apply a threshold in that interval:
bool overlap(const cv::Rect& lhs, const cv::Rect& rhs, double th) {
    double i = static_cast<double>((lhs & rhs).area());
    double u = static_cast<double>((lhs | rhs).area());
    double iou = i / u;
    return iou > th;
}
This works well in general, but may show unexpected results if the two rectangles have a very different size. Another approach could be to check if the first rectangle intersects with the second one for most of its area, and vice versa:
bool overlap(const cv::Rect& lhs, const cv::Rect& rhs, double th) {
    double i = static_cast<double>((lhs & rhs).area());
    double ratio_intersection_over_lhs_area = i / static_cast<double>(lhs.area());
    double ratio_intersection_over_rhs_area = i / static_cast<double>(rhs.area());
    return (ratio_intersection_over_lhs_area > th) || (ratio_intersection_over_rhs_area > th);
}
Ok, now we have a few ways to define when two rectangles overlap. Pick one.
How to find overlapping rectangles?
We can cluster the rectangles with cv::partition with a predicate that puts overlapping rectangles in the same cluster. This will put in the same cluster even two rectangles that do not directly overlap each other, but are linked by one or more overlapping rectangles. The output of this function is a vector of clusters, where each cluster consists of a vector of rectangles:
std::vector<std::vector<cv::Rect>> cluster_rects(const std::vector<cv::Rect>& rects, const double th)
{
    std::vector<int> labels;
    int n_labels = cv::partition(rects, labels, [th](const cv::Rect& lhs, const cv::Rect& rhs) {
        double i = static_cast<double>((lhs & rhs).area());
        double ratio_intersection_over_lhs_area = i / static_cast<double>(lhs.area());
        double ratio_intersection_over_rhs_area = i / static_cast<double>(rhs.area());
        return (ratio_intersection_over_lhs_area > th) || (ratio_intersection_over_rhs_area > th);
    });
    std::vector<std::vector<cv::Rect>> clusters(n_labels);
    for (size_t i = 0; i < rects.size(); ++i) {
        clusters[labels[i]].push_back(rects[i]);
    }
    return clusters;
}
For example, from the rectangles in this image:
we obtain these clusters (with a threshold of 0.2). Note that:
in the top left cluster the three rectangles do not overlap with each other
the rectangle on the top right is in its own cluster, because it doesn't intersect enough with the other rectangles.
How to find a rectangle that represents a cluster?
Well, that's really application dependent. It can be the union of all rectangles:
cv::Rect union_of_rects(const std::vector<cv::Rect>& cluster)
{
    cv::Rect one;
    if (!cluster.empty())
    {
        one = cluster[0];
        for (const auto& r : cluster) { one |= r; }
    }
    return one;
}
Or it can be the maximum inscribed rectangle (code below):
Or something else. For example, if you have a score associated with each rectangle (e.g. it's a detection with a confidence), you can sort each cluster by score and take only the first one. This is an example of non-maxima suppression (NMS), where you keep only the highest-scoring rectangle for each cluster (not shown in the images in this answer).
Pick one.
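For completeness, here is a sketch of that NMS variant. It assumes a scores vector parallel to rects, and reuses the labels and n_labels produced by the same cv::partition call as in cluster_rects (these names are illustrative, not part of the code above):
std::vector<cv::Rect> nms_per_cluster(const std::vector<cv::Rect>& rects,
                                      const std::vector<float>& scores,
                                      const std::vector<int>& labels,
                                      int n_labels)
{
    // For each cluster, remember the index of its highest-scoring rectangle.
    std::vector<int> best(n_labels, -1);
    for (size_t i = 0; i < rects.size(); ++i) {
        int l = labels[i];
        if (best[l] < 0 || scores[i] > scores[best[l]])
            best[l] = static_cast<int>(i);
    }
    std::vector<cv::Rect> result;
    for (int idx : best)
        if (idx >= 0) result.push_back(rects[idx]);
    return result;
}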
Below is the working code I used for creating these images. Please play with it :)
#include <opencv2/opencv.hpp>

std::vector<cv::Rect> create_some_rects()
{
    std::vector<cv::Rect> rects
    {
        {20, 20, 20, 40},
        {30, 40, 40, 40},
        {50, 46, 30, 40},
        {100, 120, 30, 40},
        {110, 130, 36, 20},
        {104, 124, 50, 30},
        {200, 80, 40, 50},
        {220, 90, 50, 30},
        {240, 84, 30, 70},
        {260, 60, 20, 30},
    };
    return rects;
}

void draw_rects(cv::Mat3b& img, const std::vector<cv::Rect>& rects)
{
    for (const auto& r : rects) {
        cv::Scalar random_color(rand() & 255, rand() & 255, rand() & 255);
        cv::rectangle(img, r, random_color);
    }
}

void draw_rects(cv::Mat3b& img, const std::vector<cv::Rect>& rects, const cv::Scalar& color)
{
    for (const auto& r : rects) {
        cv::rectangle(img, r, color);
    }
}

void draw_clusters(cv::Mat3b& img, const std::vector<std::vector<cv::Rect>>& clusters)
{
    for (const auto& cluster : clusters) {
        cv::Scalar random_color(rand() & 255, rand() & 255, rand() & 255);
        draw_rects(img, cluster, random_color);
    }
}

std::vector<std::vector<cv::Rect>> cluster_rects(const std::vector<cv::Rect>& rects, const double th)
{
    std::vector<int> labels;
    int n_labels = cv::partition(rects, labels, [th](const cv::Rect& lhs, const cv::Rect& rhs) {
        double i = static_cast<double>((lhs & rhs).area());
        double ratio_intersection_over_lhs_area = i / static_cast<double>(lhs.area());
        double ratio_intersection_over_rhs_area = i / static_cast<double>(rhs.area());
        return (ratio_intersection_over_lhs_area > th) || (ratio_intersection_over_rhs_area > th);
    });
    std::vector<std::vector<cv::Rect>> clusters(n_labels);
    for (size_t i = 0; i < rects.size(); ++i) {
        clusters[labels[i]].push_back(rects[i]);
    }
    return clusters;
}

cv::Rect union_of_rects(const std::vector<cv::Rect>& cluster)
{
    cv::Rect one;
    if (!cluster.empty())
    {
        one = cluster[0];
        for (const auto& r : cluster) { one |= r; }
    }
    return one;
}

// https://stackoverflow.com/a/30418912/5008845
// https://stackoverflow.com/a/34905215/5008845
cv::Rect findMaxRect(const cv::Mat1b& src)
{
    cv::Mat1f W(src.rows, src.cols, float(0));
    cv::Mat1f H(src.rows, src.cols, float(0));
    cv::Rect maxRect(0, 0, 0, 0);
    float maxArea = 0.f;
    for (int r = 0; r < src.rows; ++r)
    {
        for (int c = 0; c < src.cols; ++c)
        {
            if (src(r, c) == 0)
            {
                H(r, c) = 1.f + ((r > 0) ? H(r - 1, c) : 0);
                W(r, c) = 1.f + ((c > 0) ? W(r, c - 1) : 0);
            }
            float minw = W(r, c);
            for (int h = 0; h < H(r, c); ++h)
            {
                minw = std::min(minw, W(r - h, c));
                float area = (h + 1) * minw;
                if (area > maxArea)
                {
                    maxArea = area;
                    maxRect = cv::Rect(cv::Point(c - minw + 1, r - h), cv::Point(c + 1, r + 1));
                }
            }
        }
    }
    return maxRect;
}

cv::Rect largest_inscribed_of_rects(const std::vector<cv::Rect>& cluster)
{
    cv::Rect roi = union_of_rects(cluster);
    cv::Mat1b mask(roi.height, roi.width, uchar(255));
    for (const auto& r : cluster) {
        cv::rectangle(mask, r - roi.tl(), cv::Scalar(0), cv::FILLED);
    }
    cv::Rect largest_rect = findMaxRect(mask);
    largest_rect += roi.tl();
    return largest_rect;
}

std::vector<cv::Rect> find_one_for_cluster(const std::vector<std::vector<cv::Rect>>& clusters)
{
    std::vector<cv::Rect> one_for_cluster;
    for (const auto& cluster : clusters) {
        //cv::Rect one = union_of_rects(cluster);
        cv::Rect one = largest_inscribed_of_rects(cluster);
        one_for_cluster.push_back(one);
    }
    return one_for_cluster;
}

int main()
{
    cv::Mat3b img(200, 300, cv::Vec3b(0, 0, 0));
    std::vector<cv::Rect> rects = create_some_rects();

    cv::Mat3b initial_rects_img = img.clone();
    draw_rects(initial_rects_img, rects, cv::Scalar(127, 127, 127));

    std::vector<std::vector<cv::Rect>> clusters = cluster_rects(rects, 0.2);
    cv::Mat3b clustered_rects_img = initial_rects_img.clone();
    draw_clusters(clustered_rects_img, clusters);

    std::vector<cv::Rect> single_rects = find_one_for_cluster(clusters);
    cv::Mat3b single_rects_img = initial_rects_img.clone();
    draw_rects(single_rects_img, single_rects);

    return 0;
}
Unfortunately, you cannot fine-tune groupRectangles(). The second parameter for your example should be 0, though: with 1, every singular rectangle has to be merged with another one somewhere to survive.
You could first grow the small rectangles and stay with a conservative threshold parameter if you want a better clustering of the small ones. Not an optimal solution, though.
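A minimal sketch of that growing step (pad is a made-up tuning parameter, and detected is the vector of boxes from the question):
// Inflate each rectangle by a fixed padding before grouping, so that nearby
// boxes start to overlap and survive the conservative threshold.
int pad = 5;
for (auto& r : detected) {
    r -= cv::Point(pad, pad);          // shift the top-left corner outwards
    r += cv::Size(2 * pad, 2 * pad);   // enlarge width and height
}
cv::groupRectangles(detected, 1, 0.8);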
If you want to cluster based on an overlap condition, I would suggest writing your own simple algorithm for that. groupRectangles() simply does not do that: it finds rectangles similar in size and position; it does not accumulate rectangles that form a cluster.
You could fill a mask cv::Mat1b mask(image.size(), uchar(0)); with the rectangles and then use cv::connectedComponents() to find merged regions. Filling is trivial: loop over all rectangles and call mask(rect).setTo(255);. If the overlap is not always reliable, you could use cv::dilate() to grow rectangles in the mask before the connected-components step.
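A sketch of that mask-based approach (image is assumed to be your input frame; connectedComponentsWithStats directly yields a bounding box per merged region):
cv::Mat1b mask(image.size(), uchar(0));
for (const auto& rect : detected)
    mask(rect).setTo(255);
// Optionally bridge small gaps between almost-touching rectangles:
// cv::dilate(mask, mask, cv::Mat(), cv::Point(-1, -1), 2);
cv::Mat labels, stats, centroids;
int n = cv::connectedComponentsWithStats(mask, labels, stats, centroids);
std::vector<cv::Rect> merged;
for (int i = 1; i < n; ++i) { // label 0 is the background
    merged.emplace_back(stats.at<int>(i, cv::CC_STAT_LEFT),
                        stats.at<int>(i, cv::CC_STAT_TOP),
                        stats.at<int>(i, cv::CC_STAT_WIDTH),
                        stats.at<int>(i, cv::CC_STAT_HEIGHT));
}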
You could test all rectangles for overlaps and associate them accordingly. For a huge number of rectangles, I suggest a disjoint-set/union-find data structure for efficiency.
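And a minimal union-find sketch for that last suggestion (the pairwise testing is still O(N^2), but the set operations are nearly constant-time thanks to path compression):
struct DSU {
    std::vector<int> parent;
    explicit DSU(int n) : parent(n) { for (int i = 0; i < n; ++i) parent[i] = i; }
    int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }
    void unite(int a, int b) { parent[find(a)] = find(b); }
};

DSU dsu(static_cast<int>(detected.size()));
for (size_t i = 0; i < detected.size(); ++i)
    for (size_t j = i + 1; j < detected.size(); ++j)
        if ((detected[i] & detected[j]).area() > 0) // any overlap predicate works here
            dsu.unite(static_cast<int>(i), static_cast<int>(j));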

Multiple convex shape corner connection

I have an immutable array structure holding convex shapes as in the image above (they may vary in size and count, but they are always convex and never overlapping). What I want to do is connect the corners between them that can be connected without overlapping any edge, as in the image below, where the blue lines represent the connections.
The data I have available are data structures holding the corner positions in the convex shapes, represented as a Vector structure similar to the following:
class Vector2
{
public:
    float x, y;
};
The convex shape structure looks something like this:
class ConvexShape
{
public:
    std::vector<Vector2> edges;
};
What I want to return from the function is an std::vector of a structure similar to the following:
class LinkedVector2 : public Vector2
{
public:
    std::vector<LinkedVector2*> links;
};
So each linked vector is supposed to have a pointer to each other linked vector it is connected to.
The final function will thereby have this format:
std::vector<LinkedVector2>* generateLinks(const std::vector<ConvexShape>& shapes)
{
    std::vector<LinkedVector2>* links{ new std::vector<LinkedVector2>{} };
    // Create a linked vector for each shape's corner.
    // Calculate links.
    return links;
}
All of these links I then want to save for use in a later function which connects two points to the already linked shapes, along the lines of this:
The function should not alter the already existing connections and should look something like this:
// Argument 'links' will contain the previously generated links.
std::vector<LinkedVector2>* connectPoints(const Vector2& a, const Vector2& b, const std::vector<LinkedVector2>& links)
{
    std::vector<LinkedVector2>* connections{ new std::vector<LinkedVector2>{} };
    // Add old links to 'connections'.
    // Connect the new links to the old.
    // Add the new links to 'connections'.
    return connections;
}
Could someone help me with how this could be done?
Here is a description of an algorithm with an example implementation to get you going.
Step 1
Preprocess every edge of the two shapes (s0 and s1) and extract the following information:
Distances from every edge in one shape to the vertices in the other
An ordered set of the vertices in one shape facing towards the other
Finding the distances is an exhaustive task (O(|V(s0)| * |V(s1)|)), but it is also very cheap (line-point distance) and embarrassingly parallelisable. The facing vertices are found using the distances from above:
Start with the first vertex of the first shape for which the other shape is completely outside of its two adjacent edges (i.e. for either adjacent edge there exist outside values in its distances).
Since the facing set is a unique sequential set of vertices for convex polygons, continue adding vertices...
...until you reach a vertex where all vertices from the other shape lie inside of its adjacent edges.
Doing this for both sides results in two sequences of facing vertices, one per shape (the green dots per shape):
Step 2
To connect the two facing sets, a scanline approach can be used:
In the ordered set of facing vertices, the first vertex from one shape is always in line of sight of the last vertex from the other shape (first and last in case the shapes are oriented the same way). Starting from there, we search sequentially through the facing set, applying the angle criteria from above both to the query vertex from the first shape and to the candidate vertex from the other shape, to initialise our loop.
Looping sequentially over the facing vertices of the first shape, remove vertices that have broken line of sight (red line) and add vertices that came within line of sight (green line).
Step 3
Connecting the two outside points with the shapes is equivalent to finding the facing set of one shape in step 1, except that instead of another shape there are now only the individual outside points.
I've implemented steps 1 and 2 in the following little browser demo as a proof of concept:
Click on the canvas and drag to move the camera
Click inside a shape and drag to move the shape
(function(canvas) {
    function v2(x, y) { return { x: x, y: y }; }
    function v2mul(lhs, rhs) { lhs.x *= rhs.x; lhs.y *= rhs.y; }
    function v2subed(lhs, rhs) { return v2(lhs.x - rhs.x, lhs.y - rhs.y); }
    function v2dot(lhs, rhs) { return lhs.x * rhs.x + lhs.y * rhs.y; }
    function v2normalized(v) { var len = Math.sqrt(v2dot(v, v)); if(len < 1e-7) len = 1; return v2(v.x / len, v.y / len); }
    function v2perped(v) { return v2(-v.y, v.x); }

    // Line from origin o : v2 and direction d : v2
    function Line(o, d) {
        this.o = o;
        this.d = d;
    }
    // Signed distance to a point v : v2, in units of direction this.d
    Line.prototype.distance = function(v) {
        var o = v2subed(v, this.o);
        var d = v2perped(this.d);
        return v2dot(o, d);
    };

    // A polygon is made up of a sequence of points (arguments[i] : v2)
    function Polygon() {
        this.positions = [].slice.call(arguments);
    }
    // Transform polygon to new base [bx, by] and translation t
    Polygon.prototype.transform = function(bx, by, t) {
        this.positions.forEach(function(v) {
            var x = bx.x * v.x + by.x * v.y + t.x;
            var y = bx.y * v.x + by.y * v.y + t.y;
            v.x = x;
            v.y = y;
        });
    };
    // Naive point inside polygon test for polygon picking
    Polygon.prototype.isInside = function(v) {
        if(this.positions.length < 3)
            return false;
        var o0 = this.positions[this.positions.length - 1];
        for(var i = 0, imax = this.positions.length; i < imax; ++i) {
            var o1 = this.positions[i];
            var line = new Line(o0, v2normalized(v2subed(o1, o0)));
            if(line.distance(v) <= 0)
                return false;
            o0 = o1;
        }
        return true;
    };

    // A camera positioned at eye : v2
    function Camera(eye) {
        this.eye = eye;
    }
    // Prepare temporaries for screen conversions
    Camera.prototype.prepare = function(w, h) {
        this.screen = {
            off: v2(w / 2, h / 2),
        };
    };
    Camera.prototype.toScreenX = function(x) { return x + this.screen.off.x - this.eye.x; }
    Camera.prototype.toScreenY = function(y) { return this.screen.off.y - y + this.eye.y; }
    Camera.prototype.fromScreenX = function(x) { return x - this.screen.off.x + this.eye.x; }
    Camera.prototype.fromScreenY = function(y) { return this.screen.off.y - y + this.eye.y; }
    Camera.prototype.toScreen = function(v) { return v2(this.toScreenX(v.x), this.toScreenY(v.y)); };
    Camera.prototype.fromScreen = function(v) { return v2(this.fromScreenX(v.x), this.fromScreenY(v.y)); }

    // Compute the distances of the line through e0 in p0 to each vertex in p1
    // #post e0.distances.length === p1.positions.length
    function computeEdge(e0, p0, p1) {
        var line = new Line(p0.positions[e0.start], v2normalized(v2subed(p0.positions[e0.end], p0.positions[e0.start])));
        var distances = [];
        p1.positions.forEach(function(v) { distances.push(line.distance(v)); });
        e0.line = line;
        e0.distances = distances;
        return e0;
    }
    // Find vertices in a convex polygon p0 that face p1
    // #pre edges.length === p0.positions.length
    function computeFacing(edges, p0, p1) {
        var facing = [];
        var count0 = p0.positions.length;
        var count1 = p1.positions.length;
        function isFacingVertex(i0) {
            var e0 = edges[(i0 + count0 - 1) % count0];
            var e1 = edges[i0];
            for(var i1 = 0; i1 < count1; ++i1)
                if(e0.distances[i1] < 0 || e1.distances[i1] < 0)
                    return true;
            return false;
        }
        // Find the first vertex in the facing set of two non-intersecting, convex polygons
        for(var i0 = 0; i0 < count0; ++i0) {
            // For the first chance facing vertex
            if(isFacingVertex(i0)) {
                if(i0 === 0) {
                    // Search backwards here, s.t. we can complete the loop in one sitting
                    var iStart = count0;
                    for(; iStart > 1 && isFacingVertex(iStart - 1); --iStart);
                    while(iStart < count0)
                        facing.push(iStart++);
                }
                facing.push(i0++);
                // In a convex polygon the (single) set of facing vertices is sequential
                while(i0 < count0 && isFacingVertex(i0))
                    facing.push(i0++);
                break;
            }
        }
        return facing;
    }
    // Preprocesses the convex polygon p0 building the edges and facing lists
    function preprocessPolygon(p0, p1) {
        var result = {
            edges: [],
            facing: null,
        };
        for(var i = 0, imax = p0.positions.length; i < imax; ++i)
            result.edges.push(computeEdge({ start: i, end: (i + 1) % imax }, p0, p1));
        result.facing = computeFacing(result.edges, p0, p1);
        return result;
    }

    // Scanline-approach to find all line of sight connections between the facing vertices of two preprocessed convex polygons p0 : Polygon and p1 : Polygon
    // Output is prep.connections where prep.connections[i] : { v0, v1 } describes an unobstructed line of sight edge between vertex index v0 in p0 and v1 in p1
    function computeConnections(prep, p0, p1) {
        var connections = [];
        var facing1count = prep.p1.facing.length;
        // For oriented polygons the first facing vertex in p0 must surely face the last facing vertex in p1
        var facing1begin = facing1count - 1, facing1end = facing1count;
        prep.p0.facing.forEach(function(v0) {
            function isConnectingVertex(v1) {
                // Is v1 outside of adjacent edge-lines from v0?
                var count0 = prep.p0.edges.length;
                var ep = prep.p0.edges[(v0 + count0 - 1) % count0];
                var en = prep.p0.edges[v0];
                if(!(ep.distances[v1] < 0 || en.distances[v1] < 0)) return false;
                // Is v0 outside of adjacent edge-lines from v1?
                var count1 = prep.p1.edges.length;
                ep = prep.p1.edges[(v1 + count1 - 1) % count1];
                en = prep.p1.edges[v1];
                return ep.distances[v0] < 0 || en.distances[v0] < 0;
            }
            // Throw away vertices that are no longer facing the current vertex
            for(; facing1end > 0 && !isConnectingVertex(prep.p1.facing[facing1end - 1]); --facing1end);
            // Add newly facing vertices
            for(; facing1begin > 0 && isConnectingVertex(prep.p1.facing[facing1begin - 1]); --facing1begin);
            // Generate the connections in facing range
            for(var facing1 = facing1begin; facing1 < facing1end; ++facing1)
                connections.push({ v0: v0, v1: prep.p1.facing[facing1] });
        });
        prep.connections = connections;
    }

    function process(prep, p0, p1) {
        delete prep.p0;
        delete prep.p1;
        delete prep.connections;
        prep.p0 = preprocessPolygon(p0, p1);
        prep.p1 = preprocessPolygon(p1, p0);
        computeConnections(prep, p0, p1);
    }

    var polygons = null;
    var prep = null;
    var camera = null;
    var ui = null;
    function reset() {
        polygons = [
            new Polygon(v2(25, -75), v2(50, -175), v2(140, -225), v2(255, -200), v2(195, -65), v2(140, -40)),
            new Polygon(v2(400, -100), v2(295, -70), v2(260, -80), v2(310, -220), v2(425, -230)),
        ];
        // Scale to a fitting size and move to center
        var bx = v2(0.5, 0), by = v2(0, 0.5), off = v2(-120, 70);
        polygons[0].transform(bx, by, off);
        polygons[1].transform(bx, by, off);
        prep = {};
        camera = new Camera(v2(0, 0));
        ui = { pickedPolygon: -1 };
        update();
        draw();
    }
    function update() {
        // Reprocess polygons
        process(prep, polygons[0], polygons[1]);
    }
    function draw() {
        var g = canvas.getContext("2d");
        var w = canvas.width;
        var h = canvas.height;
        camera.prepare(w, h);
        g.fillStyle = "linen";
        g.fillRect(0, 0, w, h);
        var iPick = 0;
        polygons.forEach(function(polygon) {
            var highlight = iPick++ === ui.pickedPolygon;
            var positions = polygon.positions;
            if(positions.length > 2) {
                g.beginPath();
                g.lineWidth = highlight ? 2 : 1;
                g.strokeStyle = "black";
                var pLast = camera.toScreen(positions[positions.length - 1]);
                g.moveTo(pLast.x, pLast.y);
                positions.forEach(function(pos) {
                    var pScreen = camera.toScreen(pos);
                    g.lineTo(pScreen.x, pScreen.y);
                });
                g.stroke();
            }
        });
        prep.connections.forEach(function(connection) {
            var v0 = camera.toScreen(polygons[0].positions[connection.v0]);
            var v1 = camera.toScreen(polygons[1].positions[connection.v1]);
            g.beginPath();
            g.lineWidth = 2;
            g.strokeStyle = "cyan";
            g.moveTo(v0.x, v0.y);
            g.lineTo(v1.x, v1.y);
            g.stroke();
        });
    }

    (function(c) {
        reset();
        var dragStartPos = null, dragLastPos = null;
        var pickedPolygon = null;
        var cameraStartPos = v2(0, 0);
        function toScreen(client) {
            var rect = c.getBoundingClientRect();
            return v2(client.x - rect.left, client.y - rect.top);
        }
        function startDragging(x, y) {
            dragStartPos = v2(x, y);
            dragLastPos = v2(x, y);
            if(pickedPolygon !== null) {
                // Nothing to prepare
            } else {
                cameraStartPos.x = camera.eye.x;
                cameraStartPos.y = camera.eye.y;
            }
        }
        function continueDragging(x, y) {
            if(pickedPolygon !== null) {
                var dx = x - dragLastPos.x, dy = -(y - dragLastPos.y);
                pickedPolygon.transform(v2(1, 0), v2(0, 1), v2(dx, dy));
                update();
            } else {
                var dx = -(x - dragStartPos.x), dy = y - dragStartPos.y;
                camera.eye.x = cameraStartPos.x + dx;
                camera.eye.y = cameraStartPos.y + dy;
            }
            dragLastPos.x = x;
            dragLastPos.y = y;
        }
        function stopDragging() {
            dragStartPos = null;
            dragLastPos = null;
            if(pickedPolygon !== null) {
                // Nothing to do here...
            } else {
                cameraStartPos.x = 0;
                cameraStartPos.y = 0;
            }
        }
        c.onmousemove = function(e) {
            if(dragStartPos !== null)
                continueDragging(e.clientX, e.clientY);
            else {
                pickedPolygon = null;
                var iPick = 0;
                var cursorPos = camera.fromScreen(toScreen(v2(e.clientX, e.clientY)));
                for(var imax = polygons.length; iPick < imax; ++iPick) {
                    if(polygons[iPick].isInside(cursorPos)) {
                        pickedPolygon = polygons[iPick];
                        break;
                    }
                }
                ui.pickedPolygon = pickedPolygon !== null ? iPick : -1;
            }
            draw();
        };
        c.onmouseleave = function(e) {
            if(dragStartPos !== null)
                stopDragging();
            pickedPolygon = null;
            ui.pickedPolygon = -1;
            draw();
        };
        c.onmousedown = function(e) {
            if(e.button === 0)
                startDragging(e.clientX, e.clientY);
            draw();
        };
        c.onmouseup = function(e) {
            if(e.button === 0 && dragStartPos !== null)
                stopDragging();
            draw();
        };
    })(canvas);
})(document.getElementById("screen"));
<canvas id="screen" width="300" height="300"></canvas>

Get a single line representation for multiple close by lines clustered together in opencv

I detected lines in an image and drew them in a separate image file in OpenCV C++ using the HoughLinesP method. The following is a part of that resulting image. There are actually hundreds of small and thin lines which together form a big single line.
But I want a few single lines that represent all those lines. Closer lines should be merged together to form a single line. For example, the above set of lines should be represented by just 3 separate lines, as below.
The expected output is as above. How can I accomplish this task?
Progress so far, based on akarsakov's answer:
(The separate line classes produced are drawn in different colors.) Note that this result is from the original complete image I am working on, not the sample section used in the question.
If you don't know the number of lines in the image, you can use the cv::partition function to split the lines into equivalence groups.
I suggest the following procedure:
Split your lines using cv::partition. You need to specify a good predicate function. It really depends on the lines you extract from the image, but I think it should check the following conditions:
The angle between the lines should be quite small (less than 3 degrees, for example). Use the dot product to calculate the angle's cosine.
The distance between the centers of the segments should be less than half of the maximum length of the two segments.
For example, it can be implemented as follows:
bool isEqual(const Vec4i& _l1, const Vec4i& _l2)
{
    Vec4i l1(_l1), l2(_l2);
    float length1 = sqrtf((l1[2] - l1[0])*(l1[2] - l1[0]) + (l1[3] - l1[1])*(l1[3] - l1[1]));
    float length2 = sqrtf((l2[2] - l2[0])*(l2[2] - l2[0]) + (l2[3] - l2[1])*(l2[3] - l2[1]));
    float product = (l1[2] - l1[0])*(l2[2] - l2[0]) + (l1[3] - l1[1])*(l2[3] - l2[1]);
    if (fabs(product / (length1 * length2)) < cos(CV_PI / 30))
        return false;
    float mx1 = (l1[0] + l1[2]) * 0.5f;
    float mx2 = (l2[0] + l2[2]) * 0.5f;
    float my1 = (l1[1] + l1[3]) * 0.5f;
    float my2 = (l2[1] + l2[3]) * 0.5f;
    float dist = sqrtf((mx1 - mx2)*(mx1 - mx2) + (my1 - my2)*(my1 - my2));
    if (dist > std::max(length1, length2) * 0.5f)
        return false;
    return true;
}
Suppose you have your lines in vector<Vec4i> lines;. Then you should call cv::partition as follows:
vector<Vec4i> lines;
std::vector<int> labels;
int numberOfLines = cv::partition(lines, labels, isEqual);
You only need to call cv::partition once, and it will clusterize all lines. The vector labels will store, for each line, the label of the cluster it belongs to. See the documentation for cv::partition.
After you get all the groups of lines you should merge them. I suggest calculating the average angle of all lines in a group and estimating the "border" points. For example, if the angle is zero (i.e. all lines are almost horizontal) these would be the left-most and right-most points. It remains only to draw a line between these points.
I noticed that all lines in your examples are horizontal or vertical. In that case you can calculate a point which is the average of all segment centers and the "border" points, and then just draw a horizontal or vertical line limited by the "border" points through the center point; a sketch of this reduction is below.
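A sketch of that reduction for the axis-aligned case (group is one cluster produced by cv::partition, lines are Vec4i in x1,y1,x2,y2 form; this is my illustration, not tested on the question's image):
cv::Vec4i mergeGroup(const std::vector<cv::Vec4i>& group, bool horizontal)
{
    int lo = INT_MAX, hi = INT_MIN;
    float centerSum = 0.f;
    for (const auto& l : group) {
        if (horizontal) {
            lo = std::min({lo, l[0], l[2]});        // left-most "border" point
            hi = std::max({hi, l[0], l[2]});        // right-most "border" point
            centerSum += (l[1] + l[3]) * 0.5f;      // average of segment centers
        } else {
            lo = std::min({lo, l[1], l[3]});
            hi = std::max({hi, l[1], l[3]});
            centerSum += (l[0] + l[2]) * 0.5f;
        }
    }
    int center = cvRound(centerSum / group.size());
    return horizontal ? cv::Vec4i(lo, center, hi, center)
                      : cv::Vec4i(center, lo, center, hi);
}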
Please note that cv::partition takes O(N^2) time, so if you process a huge number of lines it may take a lot of time.
I hope it will help. I used such approach for similar task.
First off, I want to note that your original image is at a slight angle, so your expected output seems just a bit off to me. I'm assuming you are okay with lines that are not 100% vertical in your output because they are slightly off in your input.
Mat image;
Mat mask = image > 125; // Convert to binary image

// Combine similar lines
int size = 3;
Mat element = getStructuringElement( MORPH_ELLIPSE, Size( 2*size + 1, 2*size+1 ), Point( size, size ) );
morphologyEx( mask, mask, MORPH_CLOSE, element );
So far this yields this image:
These lines are not at 90 degree angles because the original image is not.
You can also choose to close the gap between the lines with:
Mat out = Mat::zeros(mask.size(), mask.type());
vector<Vec4i> lines;
HoughLinesP(mask, lines, 1, CV_PI/2, 50, 50, 75);
for( size_t i = 0; i < lines.size(); i++ )
{
    Vec4i l = lines[i];
    line( out, Point(l[0], l[1]), Point(l[2], l[3]), Scalar(255), 5, CV_AA);
}
If these lines are too fat, I've had success thinning them with:
size = 15;
Mat eroded;
cv::Mat erodeElement = getStructuringElement( MORPH_ELLIPSE, cv::Size( size, size ) );
erode( mask, eroded, erodeElement );
Here is a refinement built upon @akarsakov's answer.
A basic issue with
Distance between centers of segments should be less than half of maximum length of two segments.
is that parallel long lines that are visually far apart might end up in the same equivalence class (as demonstrated in OP's edit).
Therefore, the approach that I found to work reasonably well for me is:
Construct a window (bounding rectangle) around line1.
Consider line1 and line2 equivalent if line2's angle is close enough to line1's and at least one endpoint of line2 is inside line1's bounding rectangle.
Often a long linear feature in the image that is quite weak will end up recognized (HoughP, LSD) as a set of line segments with considerable gaps between them. To alleviate this, our bounding rectangle is constructed around the line extended in both directions, where the extension is defined as a fraction of the original line length.
bool extendedBoundingRectangleLineEquivalence(const Vec4i& _l1, const Vec4i& _l2, float extensionLengthFraction, float maxAngleDiff, float boundingRectangleThickness){
    Vec4i l1(_l1), l2(_l2);
    // extend lines by a fraction of their length
    float len1 = sqrtf((l1[2] - l1[0])*(l1[2] - l1[0]) + (l1[3] - l1[1])*(l1[3] - l1[1]));
    float len2 = sqrtf((l2[2] - l2[0])*(l2[2] - l2[0]) + (l2[3] - l2[1])*(l2[3] - l2[1]));
    Vec4i el1 = extendedLine(l1, len1 * extensionLengthFraction);
    Vec4i el2 = extendedLine(l2, len2 * extensionLengthFraction);
    // reject lines that have a wide difference in angles
    float a1 = atan(linearParameters(el1)[0]);
    float a2 = atan(linearParameters(el2)[0]);
    if(fabs(a1 - a2) > maxAngleDiff * M_PI / 180.0){
        return false;
    }
    // calculate window around the extended line:
    // at least one point of the other line needs to be inside the extended bounding rectangle
    std::vector<Point2i> lineBoundingContour = boundingRectangleContour(el1, boundingRectangleThickness/2);
    return
        pointPolygonTest(lineBoundingContour, cv::Point(el2[0], el2[1]), false) == 1 ||
        pointPolygonTest(lineBoundingContour, cv::Point(el2[2], el2[3]), false) == 1;
}
where linearParameters, extendedLine, and boundingRectangleContour are the following:
Vec2d linearParameters(Vec4i line){
    Mat a = (Mat_<double>(2, 2) <<
                line[0], 1,
                line[2], 1);
    Mat y = (Mat_<double>(2, 1) <<
                line[1],
                line[3]);
    Vec2d mc; solve(a, y, mc);
    return mc;
}

Vec4i extendedLine(Vec4i line, double d){
    // oriented left-to-right
    Vec4d _line = line[2] - line[0] < 0 ? Vec4d(line[2], line[3], line[0], line[1]) : Vec4d(line[0], line[1], line[2], line[3]);
    double m = linearParameters(_line)[0];
    // solution of the Pythagorean theorem and m = yd/xd
    double xd = sqrt(d * d / (m * m + 1));
    double yd = xd * m;
    return Vec4d(_line[0] - xd, _line[1] - yd, _line[2] + xd, _line[3] + yd);
}

std::vector<Point2i> boundingRectangleContour(Vec4i line, float d){
    // finds coordinates of perpendicular lines with length d at both line points
    // https://math.stackexchange.com/a/2043065/183923
    Vec2f mc = linearParameters(line);
    float m = mc[0];
    float factor = sqrtf(
        (d * d) / (1 + (1 / (m * m)))
    );
    float x3, y3, x4, y4, x5, y5, x6, y6;
    // special case (vertical perpendicular line) when -1/m -> -infinity
    if(m == 0){
        x3 = line[0]; y3 = line[1] + d;
        x4 = line[0]; y4 = line[1] - d;
        x5 = line[2]; y5 = line[3] + d;
        x6 = line[2]; y6 = line[3] - d;
    } else {
        // slope of the perpendicular lines
        float m_per = - 1/m;
        // y1 = m_per * x1 + c_per
        float c_per1 = line[1] - m_per * line[0];
        float c_per2 = line[3] - m_per * line[2];
        // coordinates of the perpendicular lines
        x3 = line[0] + factor; y3 = m_per * x3 + c_per1;
        x4 = line[0] - factor; y4 = m_per * x4 + c_per1;
        x5 = line[2] + factor; y5 = m_per * x5 + c_per2;
        x6 = line[2] - factor; y6 = m_per * x6 + c_per2;
    }
    return std::vector<Point2i> {
        Point2i(x3, y3),
        Point2i(x4, y4),
        Point2i(x6, y6),
        Point2i(x5, y5)
    };
}
To partition, call:
std::vector<int> labels;
int equivalenceClassesCount = cv::partition(linesWithoutSmall, labels, [](const Vec4i l1, const Vec4i l2){
    return extendedBoundingRectangleLineEquivalence(
        l1, l2,
        // line extension length - as fraction of original line length
        0.2,
        // maximum allowed angle difference for lines to be considered in same equivalence class
        2.0,
        // thickness of bounding rectangle around each line
        10);
});
Now, to reduce each equivalence class to a single line, we build a point cloud out of it and find a line fit:
// fit line to each equivalence class point cloud
std::vector<Vec4i> reducedLines = std::accumulate(pointClouds.begin(), pointClouds.end(), std::vector<Vec4i>{}, [](std::vector<Vec4i> target, const std::vector<Point2i>& _pointCloud){
    std::vector<Point2i> pointCloud = _pointCloud;
    // lineParams: [vx, vy, x0, y0]: (normalized vector, point on our contour)
    // (x, y) = (x0, y0) + t*(vx, vy), t -> (-inf; inf)
    Vec4f lineParams; fitLine(pointCloud, lineParams, CV_DIST_L2, 0, 0.01, 0.01);
    // derive the bounding xs of point cloud
    decltype(pointCloud)::iterator minXP, maxXP;
    std::tie(minXP, maxXP) = std::minmax_element(pointCloud.begin(), pointCloud.end(), [](const Point2i& p1, const Point2i& p2){ return p1.x < p2.x; });
    // derive y coords of fitted line
    float m = lineParams[1] / lineParams[0];
    int y1 = ((minXP->x - lineParams[2]) * m) + lineParams[3];
    int y2 = ((maxXP->x - lineParams[2]) * m) + lineParams[3];
    target.push_back(Vec4i(minXP->x, y1, maxXP->x, y2));
    return target;
});
Demonstration:
Detected partitioned line (with small lines filtered out):
Reduced:
Demonstration code:
int main(int argc, const char* argv[]){
    if(argc < 2){
        std::cout << "img filepath should be present in args" << std::endl;
    }
    Mat image = imread(argv[1]);
    Mat smallerImage; resize(image, smallerImage, cv::Size(), 0.5, 0.5, INTER_CUBIC);
    Mat target = smallerImage.clone();

    namedWindow("Detected Lines", WINDOW_NORMAL);
    namedWindow("Reduced Lines", WINDOW_NORMAL);
    Mat detectedLinesImg = Mat::zeros(target.rows, target.cols, CV_8UC3);
    Mat reducedLinesImg = detectedLinesImg.clone();

    // detect lines in any reasonable way
    Mat grayscale; cvtColor(target, grayscale, CV_BGRA2GRAY);
    Ptr<LineSegmentDetector> detector = createLineSegmentDetector(LSD_REFINE_NONE);
    std::vector<Vec4i> lines; detector->detect(grayscale, lines);

    // remove small lines
    std::vector<Vec4i> linesWithoutSmall;
    std::copy_if (lines.begin(), lines.end(), std::back_inserter(linesWithoutSmall), [](Vec4f line){
        float length = sqrtf((line[2] - line[0]) * (line[2] - line[0])
                           + (line[3] - line[1]) * (line[3] - line[1]));
        return length > 30;
    });

    std::cout << "Detected: " << linesWithoutSmall.size() << std::endl;

    // partition via our partitioning function
    std::vector<int> labels;
    int equivalenceClassesCount = cv::partition(linesWithoutSmall, labels, [](const Vec4i l1, const Vec4i l2){
        return extendedBoundingRectangleLineEquivalence(
            l1, l2,
            // line extension length - as fraction of original line length
            0.2,
            // maximum allowed angle difference for lines to be considered in same equivalence class
            2.0,
            // thickness of bounding rectangle around each line
            10);
    });

    std::cout << "Equivalence classes: " << equivalenceClassesCount << std::endl;

    // grab a random colour for each equivalence class
    RNG rng(215526);
    std::vector<Scalar> colors(equivalenceClassesCount);
    for (int i = 0; i < equivalenceClassesCount; i++){
        colors[i] = Scalar(rng.uniform(30, 255), rng.uniform(30, 255), rng.uniform(30, 255));
    }

    // draw original detected lines
    for (int i = 0; i < linesWithoutSmall.size(); i++){
        Vec4i& detectedLine = linesWithoutSmall[i];
        line(detectedLinesImg,
             cv::Point(detectedLine[0], detectedLine[1]),
             cv::Point(detectedLine[2], detectedLine[3]), colors[labels[i]], 2);
    }

    // build point clouds out of each equivalence class
    std::vector<std::vector<Point2i>> pointClouds(equivalenceClassesCount);
    for (int i = 0; i < linesWithoutSmall.size(); i++){
        Vec4i& detectedLine = linesWithoutSmall[i];
        pointClouds[labels[i]].push_back(Point2i(detectedLine[0], detectedLine[1]));
        pointClouds[labels[i]].push_back(Point2i(detectedLine[2], detectedLine[3]));
    }

    // fit line to each equivalence class point cloud
    std::vector<Vec4i> reducedLines = std::accumulate(pointClouds.begin(), pointClouds.end(), std::vector<Vec4i>{}, [](std::vector<Vec4i> target, const std::vector<Point2i>& _pointCloud){
        std::vector<Point2i> pointCloud = _pointCloud;
        // lineParams: [vx, vy, x0, y0]: (normalized vector, point on our contour)
        // (x, y) = (x0, y0) + t*(vx, vy), t -> (-inf; inf)
        Vec4f lineParams; fitLine(pointCloud, lineParams, CV_DIST_L2, 0, 0.01, 0.01);
        // derive the bounding xs of point cloud
        decltype(pointCloud)::iterator minXP, maxXP;
        std::tie(minXP, maxXP) = std::minmax_element(pointCloud.begin(), pointCloud.end(), [](const Point2i& p1, const Point2i& p2){ return p1.x < p2.x; });
        // derive y coords of fitted line
        float m = lineParams[1] / lineParams[0];
        int y1 = ((minXP->x - lineParams[2]) * m) + lineParams[3];
        int y2 = ((maxXP->x - lineParams[2]) * m) + lineParams[3];
        target.push_back(Vec4i(minXP->x, y1, maxXP->x, y2));
        return target;
    });

    for(Vec4i reduced: reducedLines){
        line(reducedLinesImg, Point(reduced[0], reduced[1]), Point(reduced[2], reduced[3]), Scalar(255, 255, 255), 2);
    }

    imshow("Detected Lines", detectedLinesImg);
    imshow("Reduced Lines", reducedLinesImg);
    waitKey();
    return 0;
}
I would recommend that you use HoughLines from OpenCV.
void HoughLines(InputArray image, OutputArray lines, double rho, double theta, int threshold, double srn=0, double stn=0 )
You can adjust with rho and theta the possible orientation and position of the lines you want to observe.
In your case, theta = 90° would be fine (only vertical and horizontal lines).
After this, you can get unique line equations with Plücker coordinates. And from there you could apply K-means with 3 centers, which should fit approximately your 3 lines in the second image.
PS: I will see if I can test the whole process with your image.
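To illustrate the clustering step, here is a sketch of K-means over the HoughLines output (it clusters the raw (rho, theta) pairs; in practice rho and theta live on very different scales and should be normalized first):
std::vector<cv::Vec2f> houghLines; // filled by cv::HoughLines(...)
cv::Mat samples(static_cast<int>(houghLines.size()), 2, CV_32F);
for (int i = 0; i < samples.rows; ++i) {
    samples.at<float>(i, 0) = houghLines[i][0]; // rho
    samples.at<float>(i, 1) = houghLines[i][1]; // theta
}
cv::Mat bestLabels, centers;
cv::kmeans(samples, 3, bestLabels,
           cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 10, 1.0),
           3, cv::KMEANS_PP_CENTERS, centers);
// Each row of `centers` is now the (rho, theta) of one representative line.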
You can merge multiple close lines into a single line by clustering lines using rho and theta and finally taking the average of rho and theta.
void contourLines(vector<cv::Vec2f> lines, const float rho_threshold, const float theta_threshold, vector< cv::Vec2f > &combinedLines)
{
    vector< vector<int> > combineIndex(lines.size());
    for (int i = 0; i < lines.size(); i++)
    {
        int index = i;
        for (int j = i; j < lines.size(); j++)
        {
            float distanceI = lines[i][0], distanceJ = lines[j][0];
            float slopeI = lines[i][1], slopeJ = lines[j][1];
            float disDiff = abs(distanceI - distanceJ);
            float slopeDiff = abs(slopeI - slopeJ);
            if (slopeDiff < theta_threshold && disDiff < rho_threshold)
            {
                bool isCombined = false;
                for (int w = 0; w < i; w++)
                {
                    for (int u = 0; u < combineIndex[w].size(); u++)
                    {
                        if (combineIndex[w][u] == j)
                        {
                            isCombined = true;
                            break;
                        }
                        if (combineIndex[w][u] == i)
                            index = w;
                    }
                    if (isCombined)
                        break;
                }
                if (!isCombined)
                    combineIndex[index].push_back(j);
            }
        }
    }

    for (int i = 0; i < combineIndex.size(); i++)
    {
        if (combineIndex[i].size() == 0)
            continue;
        cv::Vec2f line_temp(0, 0);
        for (int j = 0; j < combineIndex[i].size(); j++) {
            line_temp[0] += lines[combineIndex[i][j]][0];
            line_temp[1] += lines[combineIndex[i][j]][1];
        }
        line_temp[0] /= combineIndex[i].size();
        line_temp[1] /= combineIndex[i].size();
        combinedLines.push_back(line_temp);
    }
}
Function call (you can tune houghThreshold, rho_threshold, and theta_threshold as per your application):
HoughLines(edge, lines_t, 1, CV_PI / 180, houghThreshold, 0, 0);
float rho_threshold = 15;
float theta_threshold = 3 * DEGREES_TO_RADIANS;
vector< cv::Vec2f > lines;
contourLines(lines_t, rho_threshold, theta_threshold, lines);
@C_Raj made a good point: for lines like this, i.e., most likely extracted from table/form-like images, you should make full use of the fact that many of the line segments captured by the Hough transform from the same lines have very similar ρ and θ.
After clustering these line segments based on their ρ and θ, you can apply 2D line fitting to obtain an estimate of the true lines in the image.
There is a paper describing this idea; it makes further assumptions about the lines in a page.
HTH.

Algorithm for edge intersection?

Given a polygon P whose vertices I have in order, and a rectangle R with 4 vertices, how could I do this:
If any edge of P (the line segment between adjacent vertices) intersects an edge of R, then return TRUE, otherwise return FALSE.
Thanks
What you want is a quick way to determine if a line-segment intersects an axis-aligned rectangle. Then just check each line segment in the edge list against the rectangle. You can do the following:
1) Project the line onto the X-axis, resulting in an interval Lx.
2) Project the rectangle onto the X-axis, resulting in an interval Rx.
3) If Lx and Rx do not intersect, the line and rectangle do not intersect.
[Repeat for the Y-axis]:
4) Project the line onto the Y-axis, resulting in an interval Ly.
5) Project the rectangle onto the Y-axis, resulting in an interval Ry.
6) If Ly and Ry do not intersect, the line and rectangle do not intersect.
7) ...
8) They intersect.
Note that if we reach step 7, the shapes cannot be separated by an axis-aligned line. The thing to determine now is whether the line is fully outside the rectangle. We can determine this by checking that all the corner points on the rectangle are on the same side of the line. If they are, the line and rectangle are not intersecting.
The idea behind 1-3 and 4-6 comes from the separating axis theorem; if we cannot find a separating axis, they must be intersecting. All these cases must be tested before we can conclude they are intersecting.
Here's the matching code:
#include <iostream>
#include <utility>
#include <vector>
typedef double number; // number type
struct point
{
number x;
number y;
};
point make_point(number pX, number pY)
{
point r = {pX, pY};
return r;
}
typedef std::pair<number, number> interval; // start, end
typedef std::pair<point, point> segment; // start, end
typedef std::pair<point, point> rectangle; // top-left, bottom-right
namespace classification
{
enum type
{
positive = 1,
same = 0,
negative = -1
};
}
classification::type classify_point(const point& pPoint,
const segment& pSegment)
{
// implicit line equation
number x = (pSegment.first.y - pSegment.second.y) * pPoint.x +
(pSegment.second.x - pSegment.first.x) * pPoint.y +
(pSegment.first.x * pSegment.second.y -
pSegment.second.x * pSegment.first.y);
// careful with floating point types, should use approximation
if (x == 0)
{
return classification::same;
}
else
{
return (x > 0) ? classification::positive :classification::negative;
}
}
bool number_interval(number pX, const interval& pInterval)
{
if (pInterval.first < pInterval.second)
{
return pX > pInterval.first && pX < pInterval.second;
}
else
{
return pX > pInterval.second && pX < pInterval.first;
}
}
bool inteveral_interval(const interval& pFirst, const interval& pSecond)
{
return number_interval(pFirst.first, pSecond) ||
number_interval(pFirst.second, pSecond) ||
number_interval(pSecond.first, pFirst) ||
number_interval(pSecond.second, pFirst);
}
bool segment_rectangle(const segment& pSegment, const rectangle& pRectangle)
{
// project onto x (discard y values)
interval segmentX =
std::make_pair(pSegment.first.x, pSegment.second.x);
interval rectangleX =
std::make_pair(pRectangle.first.x, pRectangle.second.x);
if (!inteveral_interval(segmentX, rectangleX))
return false;
// project onto y (discard x values)
interval segmentY =
std::make_pair(pSegment.first.y, pSegment.second.y);
interval rectangleY =
std::make_pair(pRectangle.first.y, pRectangle.second.y);
if (!inteveral_interval(segmentY, rectangleY))
return false;
// test rectangle location
point p0 = make_point(pRectangle.first.x, pRectangle.first.y);
point p1 = make_point(pRectangle.second.x, pRectangle.first.y);
point p2 = make_point(pRectangle.second.x, pRectangle.second.y);
point p3 = make_point(pRectangle.first.x, pRectangle.second.y);
classification::type c0 = classify_point(p0, pSegment);
classification::type c1 = classify_point(p1, pSegment);
classification::type c2 = classify_point(p2, pSegment);
classification::type c3 = classify_point(p3, pSegment);
// test they all classify the same
return !((c0 == c1) && (c1 == c2) && (c2 == c3));
}
int main(void)
{
rectangle r = std::make_pair(make_point(1, 1), make_point(5, 5));
segment s0 = std::make_pair(make_point(0, 3), make_point(2, -3));
segment s1 = std::make_pair(make_point(0, 0), make_point(3, 0));
segment s2 = std::make_pair(make_point(3, 0), make_point(3, 6));
segment s3 = std::make_pair(make_point(2, 3), make_point(9, 8));
std::cout << std::boolalpha;
std::cout << segment_rectangle(s0, r) << std::endl;
std::cout << segment_rectangle(s1, r) << std::endl;
std::cout << segment_rectangle(s2, r) << std::endl;
std::cout << segment_rectangle(s3, r) << std::endl;
}
Hope that makes sense.
I think your problem is equivalent to convex polygon intersection, in which case this might help. See also: How do I determine if two convex polygons intersect?
Untested, obviously, but in rough pseudocode:
// test two points against an edge
function intersects ( side, lower, upper, pt1Perp, pt1Par, pt2Perp, pt2Par )
{
    if ( ( pt1Perp < side and pt2Perp > side ) or ( pt1Perp > side and pt2Perp < side ) )
    {
        intersection = pt1Par + (side - pt1Perp) * (pt2Par - pt1Par) / (pt2Perp - pt1Perp);
        return (intersection >= lower and intersection <= upper);
    }
    else
    {
        return false;
    }
}
// left, right, bottom, top are the bounds of R
for pt1, pt2 adjacent in P // don't forget to do last,first
{
    if ( intersects ( left, bottom, top, pt1.x, pt1.y, pt2.x, pt2.y )
      or intersects ( right, bottom, top, pt1.x, pt1.y, pt2.x, pt2.y )
      or intersects ( top, left, right, pt1.y, pt1.x, pt2.y, pt2.x )
      or intersects ( bottom, left, right, pt1.y, pt1.x, pt2.y, pt2.x ) )
    {
        return true;
    }
}
Basically, if two adjacent P vertices are on opposite sides of one of R's edges, check whether the intersection point falls in range.
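For reference, a direct C++ translation of that pseudocode (an untested sketch of the same logic):
#include <vector>

struct Pt { double x, y; };

// One side of R: `side` is its coordinate on the perpendicular axis,
// [lower, upper] is its extent along the parallel axis.
bool side_hit(double side, double lower, double upper,
              double p1Perp, double p1Par, double p2Perp, double p2Par)
{
    if ((p1Perp < side && p2Perp > side) || (p1Perp > side && p2Perp < side)) {
        double par = p1Par + (side - p1Perp) * (p2Par - p1Par) / (p2Perp - p1Perp);
        return par >= lower && par <= upper;
    }
    return false;
}

// True if any edge of polygon P crosses the axis-aligned rectangle
// [left, right] x [bottom, top].
bool polygon_hits_rect(const std::vector<Pt>& P,
                       double left, double right, double bottom, double top)
{
    for (size_t i = 0, n = P.size(); i < n; ++i) {
        const Pt& a = P[i];
        const Pt& b = P[(i + 1) % n]; // wraps around: last,first
        if (side_hit(left,   bottom, top,   a.x, a.y, b.x, b.y) ||
            side_hit(right,  bottom, top,   a.x, a.y, b.x, b.y) ||
            side_hit(top,    left,   right, a.y, a.x, b.y, b.x) ||
            side_hit(bottom, left,   right, a.y, a.x, b.y, b.x))
            return true;
    }
    return false;
}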
Just FYI, geometrictools is a great resource for such things (especially the Math section)