I'd like to detect a custom "multiple-bar pattern" in an image.
The pattern looks like this: a group of parallel black bars with the same width but different heights, see this image:
The pattern may or may not be present in the image, but if it is, I'd like to get its position.
Note: The color of the pattern is black in every case.
Note: The size of the pattern is unknown, so it could be big or could be super small.
Note: The pattern bar count is a fixed number. It will be the same (in this case 7) for every occurrence.
An image could look like this:
And after running the search algorithm, this should happen:
Any help would be very appreciated. Thanks a million in advance, Tempi.
Note: the code I've got so far (not working):
Mat myImage; // this is the mat of the photo you can see above
Mat algorithmImage;
myImage.copyTo(algorithmImage);
cvtColor(algorithmImage, algorithmImage, CV_RGB2HSV);
cv::Mat mask; // inRange needs an output Mat as its last argument, not a number
cv::inRange(algorithmImage, cv::Scalar(0, 0, 0, 0), cv::Scalar(180, 255, 30, 0), mask);
Mat canny;
vector<vector<Point>> contours;
vector<Vec4i> hierarchy;
Canny( mask, canny, 3, 6, 3 );
findContours( canny, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0) );
for( int i = 0; i<contours.size(); i++ ) {
// ??
}
// Returns true if the two segments are nearly parallel and their midpoints
// are close, i.e. they probably describe the same physical line.
bool isLineAlreadyFound(const Vec4i& _l1, const Vec4i& _l2) {
    Vec4i l1(_l1), l2(_l2);
    float length1 = sqrtf((l1[2] - l1[0])*(l1[2] - l1[0]) + (l1[3] - l1[1])*(l1[3] - l1[1]));
    float length2 = sqrtf((l2[2] - l2[0])*(l2[2] - l2[0]) + (l2[3] - l2[1])*(l2[3] - l2[1]));
    // product / (length1 * length2) is the cosine of the angle between the segments.
    float product = (l1[2] - l1[0])*(l2[2] - l2[0]) + (l1[3] - l1[1])*(l2[3] - l2[1]);
    // Reject if the angle between them exceeds 6 degrees (CV_PI / 30).
    if (fabs(product / (length1 * length2)) < cos(CV_PI / 30))
        return false;
    // Midpoints of both segments.
    float mx1 = (l1[0] + l1[2]) * 0.5f;
    float mx2 = (l2[0] + l2[2]) * 0.5f;
    float my1 = (l1[1] + l1[3]) * 0.5f;
    float my2 = (l2[1] + l2[3]) * 0.5f;
    // Reject if the midpoints are further apart than half the longer segment.
    float dist = sqrtf((mx1 - mx2)*(mx1 - mx2) + (my1 - my2)*(my1 - my2));
    if (dist > std::max(length1, length2) * 0.5f)
        return false;
    return true;
}
Here's how I would approach your problem.
Since your pattern is totally black, maybe you can benefit from that and just apply a threshold. Try a very low threshold (if it's a perfect black, use 1 as the threshold).
After the above step, findContours should already do a good job of detecting the shapes. But what you'll need is a way to recognize your pattern. Since your patterns may show up at different scales and/or orientations, you will need a scale/orientation-invariant descriptor, i.e. a shape descriptor. You can look at https://en.wikipedia.org/wiki/Image_moment for Hu moments.
That covers only one bar of the pattern. You should complete your search by designing a feature specific to the whole pattern out of the features extracted from each bar.
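For illustration, here's a minimal C++ sketch of the threshold + contours + Hu moments idea (the threshold value, the degenerate-contour filter, and the function name are all assumptions to adapt):

#include <opencv2/opencv.hpp>

// Hedged sketch: isolate near-black pixels, find contours, and compute
// a scale/rotation-invariant Hu-moment descriptor for each candidate bar.
std::vector<cv::Point2d> findBarCandidates(const cv::Mat& bgr)
{
    cv::Mat gray, mask;
    cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, mask, 1, 255, cv::THRESH_BINARY_INV); // keep only (near-)black pixels

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<cv::Point2d> centers;
    for (const auto& c : contours)
    {
        cv::Moments m = cv::moments(c);
        if (m.m00 < 1.0) continue;  // skip degenerate contours
        double hu[7];
        cv::HuMoments(m, hu);       // compare hu[] against a reference bar's descriptor
        centers.emplace_back(m.m10 / m.m00, m.m01 / m.m00);
    }
    return centers;
}

Once the individual bars are recognized, the 7 centers should be roughly collinear and evenly spaced, which is the pattern-level feature mentioned above.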
The trick here is that you have 14 parallel edges. Those are impossible to miss. As noted in the other answer, threshold to isolate them. Directly after that, run the edge detector. This turns the black/white bar boundaries into clean lines, while the rest of the image might produce only a few scattered pixels. Visualize this to see for yourself.
Next, run the Hough Transform from OpenCV. This gives you lines in the format {offset, direction}. Even if a few scattered background pixels would coincidentally line up, they wouldn't form 14 lines with the same direction.
You can calculate the maximum offset difference to find the scale of the bar pattern, and double-check the relative offsets to check individual bar widths and spacings. Remember, the 14 lines are edges, so you need to pair them.
At this point you've identified the direction and scale of the bar pattern. It might be a bit surprising to realize you haven't actually identified the bar positions yet. The reason for this order is that we tackle the hard problem first. We now go back to the output of the edge detection and divide all the edges there into three categories: the parallel edges, the rounded end caps connecting pairs of parallel edges (follow the contours), and random background pixels. You might need a MORPH_CLOSE operation to close gaps in the contours.
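A rough C++ sketch of the threshold → Canny → Hough chain described above (all parameter values are guesses to tune, the angle binning is deliberately crude, and 14 assumes the 7-bar pattern):

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <iostream>
#include <map>

// Hedged sketch: isolate the dark bars, detect edges, then look for a
// direction shared by (2 * barCount) = 14 co-directional Hough lines.
void findBarPattern(const cv::Mat& gray)
{
    cv::Mat mask, edges;
    cv::threshold(gray, mask, 30, 255, cv::THRESH_BINARY_INV); // assumed threshold
    cv::Canny(mask, edges, 50, 150);

    std::vector<cv::Vec2f> lines; // each entry is {rho (offset), theta (direction)}
    cv::HoughLines(edges, lines, 1, CV_PI / 180, 50);

    // Bin the lines by angle (1-degree bins); 14 bar edges pile up in one bin.
    std::map<int, std::vector<float>> rhosByAngle;
    for (const auto& l : lines)
        rhosByAngle[cvRound(l[1] * 180.0 / CV_PI)].push_back(l[0]);

    for (auto& kv : rhosByAngle)
    {
        if (kv.second.size() < 14) continue;  // need all 14 parallel edge lines
        std::sort(kv.second.begin(), kv.second.end());
        float patternWidth = kv.second.back() - kv.second.front(); // scale of the group
        std::cout << "candidate direction: " << kv.first << " deg, width: "
                  << patternWidth << " px" << std::endl;
        // Next: pair up consecutive offsets into bars and check widths/spacings.
    }
}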
I have a problem with filtering some contours by the colors inside them. I want to remove all contours that have black pixels inside and keep only the contours with white pixels (see the pictures below).
Code to create the contours list. I've used the RETR_TREE contour retrieval mode with CHAIN_APPROX_SIMPLE point selection to avoid storing a lot of points per contour.
cv::cvtColor(src_img, gray_img, cv::COLOR_BGR2GRAY);
cv::threshold(gray_img, bin_img, minRGB, maxRGB, cv::THRESH_BINARY_INV);
std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours(bin_img, contours, hierarchy, cv::RETR_TREE, cv::CHAIN_APPROX_SIMPLE);
Then, using these contours, I've built closed paths and displayed them on the screen.
An input image:
My current results:
What I need: to fill only the contours that have white content.
I've tried to shrink each contour inward by 1 pixel and check whether all the pixels inside are dark, but it doesn't work as I expected. See the code below.
double scaleX = (double(src_img.cols) - 2) / double(src_img.cols);
double scaleY = (double(src_img.rows) - 2) / double(src_img.rows);
for (int i = 0; i < contours.size(); i++) {
std::vector<cv::Point> contour = contours[i];
cv::Moments M = cv::moments(contour);
int cx = int(M.m10 / M.m00);
int cy = int(M.m01 / M.m00);
std::vector<cv::Point> scaledContour(contour.size());
for (int j = 0; j < contour.size(); j++) {
cv::Point point = contour[j];
point = cv::Point(point.x - cx, point.y - cy);
point = cv::Point(point.x * scaleX, point.y * scaleY);
point = cv::Point(point.x + cx, point.y + cy);
scaledContour[j] = point;
}
contours[i] = scaledContour;
}
I will be very grateful if you help with any ideas or solutions, thank you very much!
Hopefully one thing is clear: when finding contours, the objects in the image should be white and the background black, which you have achieved by using THRESH_BINARY_INV.
So we are essentially trying to find white lines, not black ones. I am not providing code since I work in Python, but I'll list out how it can be done (there is a C++ sketch of these steps after the list).
Create a black array of the size of the input image. Let's call it mask.
After finding the contours, draw them on mask in white (i.e. 255) with thickness=-1. This means we are essentially filling the contour.
Now we need to remove the boundary of the contour, so the only portion left is the part inside the contour. This can be achieved by drawing the contour on mask again, this time in black with a thickness of 1.
Perform bitwise_and between the image and mask. Only areas that have white inside the contour will be left.
Now you just need to check whether the output is completely black or not. If it is not, you don't need to fill that contour, as it contains something inside it.
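A minimal C++ sketch of those steps, reusing the names from the question's snippet (an untested outline; the 128 brightness cutoff is an assumption):

// For each contour: fill it on a mask, erase its boundary, then test whether
// the grayscale image still has any bright pixels inside.
for (size_t i = 0; i < contours.size(); i++)
{
    cv::Mat mask = cv::Mat::zeros(gray_img.size(), CV_8UC1);
    cv::drawContours(mask, contours, (int)i, cv::Scalar(255), -1); // fill interior
    cv::drawContours(mask, contours, (int)i, cv::Scalar(0), 1);    // remove boundary
    cv::Mat inside = cv::Mat::zeros(gray_img.size(), gray_img.type());
    cv::bitwise_and(gray_img, gray_img, inside, mask);
    if (cv::countNonZero(inside > 128) == 0) // nothing white inside -> fill it
        cv::drawContours(src_img, contours, (int)i, cv::Scalar(255, 255, 255), -1);
}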
EDIT
Ohh, I didn't realize that your images would have 600 contours; yes, that will take a lot of time, and I don't know why I didn't think of using the hierarchy before.
You can keep using RETR_TREE; the hierarchy values are [next, previous, first_child, parent]. So we just need to check whether first_child == -1 - that would mean there are no contours inside, and you can fill it.
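In C++ that check could look like this (a sketch, assuming contours and hierarchy come from the findContours call above):

for (size_t i = 0; i < contours.size(); i++)
{
    // hierarchy[i] = [next, previous, first_child, parent]
    if (hierarchy[i][2] == -1) // no child contours -> nothing inside, safe to fill
        cv::drawContours(src_img, contours, (int)i, cv::Scalar(255, 255, 255), -1);
}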
I've changed the mode to RETR_CCOMP and added filtering by hierarchy[contour index][3] != -1 (meaning the contour has a parent), and my problem was solved.
Thank you!
Is it possible to get the expanded or contracted version of a contour?
For example, in the image below, I have used cv::findContours() and cv::drawContours() on a binary image to get the contours:
I would like to draw another contour that has a custom pixel distance from the original contour, like these:
Apart from erosion, which I think might not be a good idea since it seems hard to control the pixel distance with it, I have no idea how to solve this problem. May I know what the correct direction would be?
Using cv::erode with a small kernel and multiple iterations may be enough for your needs, even if it's not exact.
C++ code:
cv::Mat img = ...;
int iterations = 10;
cv::erode(img, img,
cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3,3)),
cv::Point(-1,-1),
iterations);
Demo:
# img is the image containing the original black contour
for form in [cv.MORPH_RECT, cv.MORPH_CROSS]:
eroded = cv.erode(img, cv.getStructuringElement(form, (3,3)), iterations=10)
contours, hierarchy = cv.findContours(~eroded, cv.RETR_LIST, cv.CHAIN_APPROX_SIMPLE)
vis = cv.cvtColor(img, cv.COLOR_GRAY2BGR)
cv.drawContours(vis, contours, 0, (0,0,255))
cv.drawContours(vis, contours, 1, (255,0,0))
show_image(vis)
10 iterations with cv.MORPH_RECT with a 3x3 kernel:
10 iterations with cv.MORPH_CROSS with a 3x3 kernel:
You can change the offset by adjusting the number of iterations.
A much more accurate approach would be to use cv::distanceTransform to find all pixels that lie approximately 10px away from the contour:
dist = cv.distanceTransform(img, cv.DIST_L2, cv.DIST_MASK_PRECISE)
ring = cv.inRange(dist, 9.5, 10.5) # take all pixels at distance between 9.5px and 10.5px
show_image(ring)
contours, hierarchy = cv.findContours(ring, cv.RETR_LIST, cv.CHAIN_APPROX_SIMPLE)
vis = cv.cvtColor(img, cv.COLOR_GRAY2BGR)
cv.drawContours(vis, contours, 0, (0,0,255))
cv.drawContours(vis, contours, 2, (255,0,0))
show_image(vis)
You'll get two contours, one on each side of the original contour. Use findContours with RETR_EXTERNAL to recover only the outer contour; to also recover the inner contour, use RETR_LIST.
I think the solution can be simpler, without dilation and new contours.
For each contour, find the mass center: cv::moments(contours[i]) gives cv::Point2f mc(mu.m10 / mu.m00, mu.m01 / mu.m00);
For each point of the contour: shift by the mass center, multiply by a coefficient k, then shift back: pt_new = k * (pt - mc) + mc;
But the coefficient k must be individual for each point. I will calculate it a little later...
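A small C++ sketch of that idea with a single fixed k (not the per-point coefficient mentioned above; k < 1 shrinks the contour, k > 1 grows it):

// Scale a contour around its center of mass by factor k.
std::vector<cv::Point> scaleContour(const std::vector<cv::Point>& contour, float k)
{
    cv::Moments mu = cv::moments(contour);
    cv::Point2f mc((float)(mu.m10 / mu.m00), (float)(mu.m01 / mu.m00));
    std::vector<cv::Point> result;
    result.reserve(contour.size());
    for (const cv::Point& pt : contour)
    {
        cv::Point2f p = k * (cv::Point2f(pt) - mc) + mc; // shift, scale, shift back
        result.push_back(cv::Point(cvRound(p.x), cvRound(p.y)));
    }
    return result;
}

Note that for non-convex shapes this does not keep a constant pixel distance from the original contour, which is presumably why a per-point coefficient is suggested.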
I am working on a small OpenCV project to detect lines of a certain colour from a mobile phone camera.
In short would like to:
Transform the input image into an image of a certain colour (e.g. Red from a specific upper and lower range)
Apply Hough line transformation to the resulting image so that it detects only lines of that specific colour
Superimpose on the original image the lines detected
Those are the functions that I'd like to use, but I'm not quite sure how to fill in the missing bits.
This is the processImage function, called from a smartphone app when processing images from an instance of CvVideoCamera:
- (void)processImage:(Mat&)image;
{
cv::Mat orig_image = image.clone();
cv::Mat red_image = ??
// Apply the Hough transformation to detect lines between a minimum and a maximum length (I was thinking of using the CV_HOUGH_PROBABILISTIC method...)
// Comment.. see below..
I am unable to understand the documentation here, as the C++ method signature does not have a method field.
vector<Vec2f> lines;
From the official documentation:
C++: void HoughLines(InputArray image, OutputArray lines, double rho, double theta, int threshold, double srn=0, double stn=0 )
HoughLines(dst, lines, 1, CV_PI/180, 100, 0, 0 );
Taken from sample code; I haven't properly understood how it works (e.g. what is theta used for? How does giving a different angle affect line detection?)
for( size_t i = 0; i < lines.size(); i++ )
{
Here I should only consider lines above a certain size (no idea how).
}
Here I should then add the resulting lines to the original image (no idea how) so that they can be shown on the screen.
Any help would be greatly appreciated.
You can use HSV color space to extract color tone information.
Here's some code with comments, if there are any questions feel free to ask:
int main(int argc, char* argv[])
{
cv::Mat input = cv::imread("C:/StackOverflow/Input/coloredLines.png");
// convert to HSV color space
cv::Mat hsvImage;
cv::cvtColor(input, hsvImage, CV_BGR2HSV);
// split the channels
std::vector<cv::Mat> hsvChannels;
cv::split(hsvImage, hsvChannels);
// hue channels tells you the color tone, if saturation and value aren't too low.
// red color is a special case, because the hue space is circular and red is exactly at the beginning/end of the circle.
// in literature, hue space goes from 0 to 360 degrees, but OpenCV rescales the range to 0 up to 180, because 360 does not fit in a single byte. Alternatively there is another mode where 0..360 is rescaled to 0..255 but this isn't as common.
int hueValue = 0; // red color
int hueRange = 15; // how much deviation from the desired color we want to include in the result. If you increase this value, red color detection would pick up some orange values, too.
int minSaturation = 50; // I'm not sure which value is good here...
int minValue = 50; // not sure whether 50 is a good min value here...
cv::Mat hueImage = hsvChannels[0]; // [hue, saturation, value]
// is the color within the lower hue range?
cv::Mat hueMask;
cv::inRange(hueImage, hueValue - hueRange, hueValue + hueRange, hueMask);
// if the desired color is near the border of the hue space, check the other side too:
// TODO: this won't work if "hueValue + hueRange > 180" - maybe use two different if-cases instead... with int lowerHueValue = hueValue - 180
if (hueValue - hueRange < 0 || hueValue + hueRange > 180)
{
cv::Mat hueMaskUpper;
int upperHueValue = hueValue + 180; // in reality this would be + 360 instead
cv::inRange(hueImage, upperHueValue - hueRange, upperHueValue + hueRange, hueMaskUpper);
// add this mask to the other one
hueMask = hueMask | hueMaskUpper;
}
// now we have to filter out all the pixels where saturation and value do not fit the limits:
cv::Mat saturationMask = hsvChannels[1] > minSaturation;
cv::Mat valueMask = hsvChannels[2] > minValue;
hueMask = (hueMask & saturationMask) & valueMask;
cv::imshow("desired color", hueMask);
// now perform the line detection
std::vector<cv::Vec4i> lines;
cv::HoughLinesP(hueMask, lines, 1, CV_PI / 360, 50, 50, 10);
// draw the result as big green lines:
for (unsigned int i = 0; i < lines.size(); ++i)
{
cv::line(input, cv::Point(lines[i][0], lines[i][1]), cv::Point(lines[i][2], lines[i][3]), cv::Scalar(0, 255, 0), 5);
}
cv::imwrite("C:/StackOverflow/Output/coloredLines_mask.png", hueMask);
cv::imwrite("C:/StackOverflow/Output/coloredLines_detection.png", input);
cv::imshow("input", input);
cv::waitKey(0);
return 0;
}
using this input image:
Will extract this "red" color (adjust hueValue and hueRange to detect different colors):
and HoughLinesP detects those lines from the mask (should work with HoughLines similarly):
Here's another set of images with non-lines too...
About your different questions:
There are two functions, HoughLines and HoughLinesP. HoughLines does not give you a line length, but you can compute it in post-processing by checking which pixels of the edge mask (the HoughLines input) correspond to the extracted line.
parameters:
image - the edge image (should be clear?)
lines - lines given by angle and position; no length or anything else, they are interpreted as infinitely long
rho - the accumulator's distance resolution. The bigger it is, the more robust the detection should be against slightly distorted lines, but the less accurate the extracted lines' position/angle
theta - the angle resolution: the smaller it is, the more different line orientations can be detected. If your line's orientation does not fit the angle steps, the line might not be detected. For example, CV_PI/180 detects in 1° resolution; if your line has a 33.5° orientation, it might be missed
threshold - the bigger it is, the fewer false positives, but you might miss some lines
I'm not extremely sure about all the parameters; maybe you'll have to look at the literature on Hough line detection, or someone else can add some hints here.
If you instead use cv::HoughLinesP, line segments with start and end points will be detected, which is easier to interpret, and you can compute the line length from cv::norm(cv::Point(lines[i][0], lines[i][1]) - cv::Point(lines[i][2], lines[i][3]))
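For the "only consider lines above a certain size" part of the question, a hedged snippet reusing lines and input from the code above (minLength is an assumption to tune):

// Keep only the HoughLinesP segments longer than minLength.
const double minLength = 50.0;
for (size_t i = 0; i < lines.size(); ++i)
{
    cv::Point p1(lines[i][0], lines[i][1]);
    cv::Point p2(lines[i][2], lines[i][3]);
    if (cv::norm(p1 - p2) >= minLength)
        cv::line(input, p1, p2, cv::Scalar(0, 255, 0), 5); // superimpose on the original
}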
I will not show code, just the steps with some tricks.
Assume that you want to detect road lanes (which are lines with a white or light yellow color and some specific properties).
Original image (I added some extra lines as noise)
Step 1: Remove the parts of the image that don't need to be processed, to save CPU (simple but useful)
Step 2: Convert to gray image
Step 3: Threshold
Using a threshold matched to the color of your line, the line becomes white and everything else becomes black
Step 4: Using Contours to find the bound of objects
Step 5: Use fitLine, with the contours from the previous step as input, to get the equations of the lines
fitLine returns a point (x0, y0) on the line and a direction vector v = (a, b)
Step 6: With the line equations you can draw any line you want; a sketch of steps 4-6 follows
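A hedged C++ sketch of steps 4-6 (bin is the thresholded image from step 3 and img the image to draw on; both names and the 1000-pixel extension are assumptions):

// Fit a line to each contour and draw it across the image.
std::vector<std::vector<cv::Point>> contours;
cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
for (const auto& c : contours)
{
    cv::Vec4f lineParams; // (a, b, x0, y0): direction vector (a, b), point (x0, y0)
    cv::fitLine(c, lineParams, cv::DIST_L2, 0, 0.01, 0.01);
    float a = lineParams[0], b = lineParams[1];
    float x0 = lineParams[2], y0 = lineParams[3];
    // Extend the fitted line far in both directions for drawing.
    cv::Point p1(cvRound(x0 - 1000 * a), cvRound(y0 - 1000 * b));
    cv::Point p2(cvRound(x0 + 1000 * a), cvRound(y0 + 1000 * b));
    cv::line(img, p1, p2, cv::Scalar(0, 0, 255), 2);
}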
I am trying to find triangles (blue contours) and trapezoids (yellow contours) in real time. In general it's okay.
But there are some problems. First, false positives: triangles become trapezoids and vice versa, and I don't know how to solve this.
Second, "noise". I tried to check the area of the figures, but the noise can have the same area, so that didn't help much. The noise depends on the thresholding parameters: cv::adaptiveThreshold does not help at all, it adds even more noise (and it is SLOW), and erode and dilate can't fix it in a proper way.
And here is my code.
cv::Mat detect(cv::Mat imageRGB)
{
//RGB -> GRAY
cv::Mat imageGray;
cv::cvtColor(imageRGB, imageGray, CV_BGR2GRAY);
//Bluring it
cv::Mat image;
cv::GaussianBlur(imageGray, image, cv::Size(5,5), 2);
//Thresholding
cv::threshold(image, image, 100, 255, CV_THRESH_BINARY_INV);
//SLOW and NOISE
//cv::adaptiveThreshold(image, image, 255.0, CV_ADAPTIVE_THRESH_GAUSSIAN_C, CV_THRESH_BINARY, 21, 0);
//Calculating canny params.
cv::Scalar mu;
cv::Scalar sigma;
cv::meanStdDev(image, mu, sigma);
cv::Mat imageCanny;
cv::Canny(image,
imageCanny,
mu.val[0] + sigma.val[0],
mu.val[0] - sigma.val[0]);
//Detecting conturs.
std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours(imageCanny, contours, hierarchy,CV_RETR_TREE, CV_CHAIN_APPROX_NONE);
//Hierarchy is not needed here so clear it.
hierarchy.clear();
for (std::size_t i = 0; i < contours.size(); i++)
{
//fitEllipse needs at least 5 points.
if (contours.at(i).size() < 5)
{
continue;
}
//Skip small contours.
if (std::fabs(cv::contourArea(contours.at(i))) < 800.0)
{
continue;
}
//Calculating RotatedRect from contours, NOT from the hull,
//because fitEllipse needs at least 5 points.
cv::RotatedRect bEllipse = cv::fitEllipse(contours.at(i));
//Finds the convex hull of a point set.
std::vector<cv::Point> hull;
cv::convexHull(contours.at(i), hull, true);
//Approximate it, so we'll get 3 points for triangles
//and 4 points for trapezoids.
cv::approxPolyDP(hull, hull, 15, true);
//The contour must be convex.
if (!cv::isContourConvex(hull))
{
continue;
}
//Triangle
if (hull.size() == 3)
{
cv::drawContours(imageRGB, contours, i, cv::Scalar(255, 0, 0), 2);
cv::circle(imageRGB, bEllipse.center, 3, cv::Scalar(0, 255, 0), 2);
}
//trapezoid
if (hull.size() == 4)
{
cv::drawContours(imageRGB, contours, i, cv::Scalar(0, 255, 255), 2);
cv::circle(imageRGB, bEllipse.center, 3, cv::Scalar(0, 0, 255), 2);
}
}
return imageRGB;
}
So... in general, all these problems are caused by wrong thresholding parameters. How can I calculate them properly (automatically, of course)? And how can I prevent the false positives? (Sorry for my English.)
Thresholding - I think you should try Otsu binarization - here is some theory and a nice picture, and here is the documentation. This kind of thresholding finds the two dominant intensity populations in the image and picks the threshold that best separates them.
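In OpenCV this is just an extra flag on the threshold call you already have (the 100 is then ignored, since Otsu computes the threshold itself):

cv::threshold(image, image, 100, 255, CV_THRESH_BINARY_INV | CV_THRESH_OTSU);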
Alternatively, consider using the HSV color space; it might make it easier to distinguish black and white regions from the rest. Another idea is to use the inRange function (in RGB or in HSV color space - it should work in both situations) - you need to find 2 ranges (one for black regions and one for white) and search only within those regions (using the inRange function) - look at this post.
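A hedged example of the inRange idea in HSV (the Scalar bounds are guesses to tune):

// Masks for (near-)black and (near-)white regions; the bounds are assumptions.
cv::Mat hsv, blackMask, whiteMask;
cv::cvtColor(imageRGB, hsv, CV_BGR2HSV);
cv::inRange(hsv, cv::Scalar(0, 0, 0), cv::Scalar(180, 255, 60), blackMask);   // low value = dark
cv::inRange(hsv, cv::Scalar(0, 0, 200), cv::Scalar(180, 40, 255), whiteMask); // low saturation, high value = white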
Another way to accomplish this task might be to use some library for blob extraction, like this one, or the blob extractor that is part of OpenCV.
Distinguishing a triangle from a trapezoid - I see 2 basic ways to improve your solution here (a sketch of both follows this list):
in the line cv::approxPolyDP(hull, hull, 15, true); make the third parameter (15 in this situation) not a constant value, but some fraction of the contour area or length. It definitely should adapt to the contour size; it can't just be a constant value. It's hard to say how to calculate it without some testing - try starting with 1-5% of the contour length (I would start with length, but this is just my guess) and see whether this value is fine, too big, or too small, checking other values if needed. Unfortunately there is no other way, but finding this formula manually shouldn't take very long.
when you have 4 or 5 points, calculate the equations of the lines that join consecutive points (point 1 with point 2, point 2 with point 3, etc.; don't forget the line between the first and last point), then check whether any 2 of those lines are parallel (or at least close to parallel - the angle between them is close to 0 degrees). If you find any parallel lines, the contour is a trapezoid; otherwise it's a triangle.
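A hedged C++ sketch of both suggestions (the 2% epsilon and the 0.05 parallelism tolerance are assumptions to tune):

// 1) Make the approxPolyDP epsilon proportional to the contour perimeter:
double epsilon = 0.02 * cv::arcLength(hull, true); // 2% of the perimeter
cv::approxPolyDP(hull, hull, epsilon, true);

// 2) For a 4-point polygon, test whether either pair of opposite sides is
//    (nearly) parallel via the cross product of their direction vectors.
bool hasParallelSides = false;
if (hull.size() == 4)
{
    for (int i = 0; i < 2 && !hasParallelSides; i++)
    {
        cv::Point2f d1 = hull[(i + 1) % 4] - hull[i];           // side i
        cv::Point2f d2 = hull[(i + 3) % 4] - hull[(i + 2) % 4]; // opposite side
        double cross = d1.x * d2.y - d1.y * d2.x;
        // |cross| = |d1||d2||sin(angle)|, so this tolerance is roughly sin(3 deg).
        if (std::fabs(cross) < 0.05 * cv::norm(d1) * cv::norm(d2))
            hasParallelSides = true;
    }
}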
I'm attempting to work with a depth sensor to add positional tracking to the Oculus Rift dev kit. However, I'm having trouble with the sequence of operations producing a usable result.
I'm starting with a 16 bit depth image, where the values sort of (but not really) correspond to millimeters. Undefined values in the image have already been set to 0.
First I'm eliminating everything outside a certain near and far distance by updating a mask image to exclude them.
cv::Mat result = cv::Mat::zeros(depthImage.size(), CV_8UC3);
cv::Mat depthMask;
depthImage.convertTo(depthMask, CV_8U);
for_each_pixel<DepthImagePixel, uint8_t>(depthImage, depthMask,
[&](DepthImagePixel & depthPixel, uint8_t & maskPixel){
if (!maskPixel) {
return;
}
static const uint16_t depthMax = 1200;
static const uint16_t depthMin = 200;
if (depthPixel < depthMin || depthPixel > depthMax) {
maskPixel = 0;
}
});
Next, since the feature I want is likely to be closer to the camera than the overall scene average, I update the mask again to exclude anything that isn't within a certain range of the mean value:
const float depthAverage = cv::mean(depthImage, depthMask)[0];
const uint16_t depthMax = depthAverage * 1.0;
const uint16_t depthMin = depthAverage * 0.75;
for_each_pixel<DepthImagePixel, uint8_t>(depthImage, depthMask,
[&](DepthImagePixel & depthPixel, uint8_t & maskPixel){
if (!maskPixel) {
return;
}
if (depthPixel < depthMin || depthPixel > depthMax) {
maskPixel = 0;
}
});
Finally, I zero out everything that's not in the mask, and scale the remaining values to between 10 & 255 before converting the image format to 8 bit
cv::Mat outsideMask;
cv::bitwise_not(depthMask, outsideMask);
// Zero out outside the mask
cv::subtract(depthImage, depthImage, depthImage, outsideMask);
// Within the mask, normalize to the range + X
cv::subtract(depthImage, depthMin, depthImage, depthMask);
double minVal, maxVal;
minMaxLoc(depthImage, &minVal, &maxVal);
float range = depthMax - depthMin;
float scale = (((float)(UINT8_MAX - 10) / range));
depthImage *= scale;
cv::add(depthImage, 10, depthImage, depthMask);
depthImage.convertTo(depthImage, CV_8U);
The results looks like this:
I'm pretty happy with this section of the code, since it produces pretty clear visual features.
I'm then applying a couple of smoothing operations to get rid of the ridiculous amount of noise from the depth camera:
cv::medianBlur(depthImage, depthImage, 9);
cv::Mat blurred;
cv::bilateralFilter(depthImage, blurred, 5, 250, 250);
depthImage = blurred;
cv::Mat result = cv::Mat::zeros(depthImage.size(), CV_8UC3);
cv::insertChannel(depthImage, result, 0);
Again, the features look pretty clear visually, but I wonder if they couldn't be sharpened somehow:
Next I'm using canny for edge detection:
cv::Mat canny_output;
{
cv::Canny(depthImage, canny_output, 20, 80, 3, true);
cv::insertChannel(canny_output, result, 1);
}
The lines I'm looking for are there, but not well represented towards the corners:
Finally I'm using probabilistic Hough to identify lines:
std::vector<cv::Vec4i> lines;
cv::HoughLinesP(canny_output, lines, pixelRes, degreeRes * CV_PI / 180, hughThreshold, hughMinLength, hughMaxGap);
for (size_t i = 0; i < lines.size(); i++)
{
cv::Vec4i l = lines[i];
glm::vec2 a(l[0], l[1]);
glm::vec2 b(l[2], l[3]);
float length = glm::length(a - b); // segment length, e.g. for filtering short lines
cv::line(result, cv::Point(l[0], l[1]), cv::Point(l[2], l[3]), cv::Scalar(0, 0, 255), 3, CV_AA);
}
This results in this image
At this point I feel like I've gone off the rails, because I can't find a good set of parameters for Hough to produce a reasonable number of candidate lines in which to search for my shape, and I'm not sure if I should be fiddling with Hough or looking at improving the outputs of the prior steps.
Is there a good way of objectively validating my results at each stage, as opposed to just fiddling with the input values until I think it 'looks good'? Is there a better approach to finding the rectangle given the starting image (and given that it won't necessarily be oriented in a particular direction)?
Very cool project!
Though, I feel like your approach does not use all the info you could get from the depth map (e.g. 3D points, normals, etc.), which would help a lot.
The Point Cloud Library (PCL), which is a C++ library dedicated to the processing of RGB-D data, has a tutorial on plane segmentation using RANSAC which could inspire you. You might not want to use PCL in your program, due to the numerous dependencies, however as it is open-source, you can find the algorithm implementation on Github (PCL SAC segmentation). However, RANSAC might be slow and produce unwanted results depending on the scene.
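For reference, the core of that PCL tutorial looks roughly like this (a sketch under the assumption that cloud has already been filled from your depth image; the 0.01 distance threshold is a placeholder):

#include <pcl/point_types.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/sample_consensus/method_types.h>
#include <pcl/sample_consensus/model_types.h>
#include <pcl/segmentation/sac_segmentation.h>

// RANSAC plane segmentation, following the PCL planar-segmentation tutorial.
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
// ... fill cloud from the depth image ...
pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients);
pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
pcl::SACSegmentation<pcl::PointXYZ> seg;
seg.setModelType(pcl::SACMODEL_PLANE);
seg.setMethodType(pcl::SAC_RANSAC);
seg.setDistanceThreshold(0.01); // in the cloud's units; tune for your sensor noise
seg.setInputCloud(cloud);
seg.segment(*inliers, *coefficients); // plane: ax + by + cz + d = 0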
You could also try the approach presented in "Real-Time Plane Segmentation using RGB-D Cameras" by Holz, Holzer, Rusu and Behnke, 2011 (PDF), which suggests fast normal estimation using integral images, followed by plane detection by clustering the normals.