Approximating lines to floor plan contours - C++

I am using OpenCV with C++. 1) I want to approximate the contours detected by findContours using only horizontal or vertical lines, not curves, as in floor plans. Can you suggest a method for this?
2) Is there a way to remove smaller contours such as tree borders, so that the process can be automated for every image? Simply removing the smaller areas found by findContours() can also eliminate walls with smaller dimensions.
http://property.magicbricks.com/microsite/buy/provident-welworth/floor-plan.html

On what sort of image do you run findContours? I assume you followed this example:
findContour example
If not, please clarify.
However, why not first find all horizontal and vertical edges with the corresponding filters? Afterwards you can still find contours with the findContours function. Or you can use the Hough transform, also available in OpenCV (HoughLines); among the resulting Hough lines you can easily eliminate the smaller line segments.
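A minimal sketch of that combination, assuming a grayscale floor-plan image; the file name, angle tolerance, and minimum segment length are placeholders to tune:

    #include <opencv2/opencv.hpp>
    #include <cmath>
    #include <vector>

    int main()
    {
        // hypothetical input; replace with your floor-plan image
        cv::Mat img = cv::imread("floorplan.png", cv::IMREAD_GRAYSCALE);
        if (img.empty()) return 1;

        cv::Mat edges;
        cv::Canny(img, edges, 50, 150);

        // probabilistic Hough transform; the thresholds are guesses to be tuned
        std::vector<cv::Vec4i> lines;
        cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 50,
                        30 /*minLineLength*/, 5 /*maxLineGap*/);

        // keep only segments that are long enough and roughly horizontal or vertical
        cv::Mat out = cv::Mat::zeros(img.size(), CV_8UC1);
        const double minLen = 40.0;     // assumed minimum wall length in pixels
        const double tolDeg = 5.0;      // tolerance around 0 and 90 degrees
        for (const cv::Vec4i& l : lines)
        {
            double dx = l[2] - l[0], dy = l[3] - l[1];
            double len = std::hypot(dx, dy);
            double ang = std::abs(std::atan2(dy, dx)) * 180.0 / CV_PI;   // 0..180
            bool horizontal = ang < tolDeg || ang > 180.0 - tolDeg;
            bool vertical   = std::abs(ang - 90.0) < tolDeg;
            if (len >= minLen && (horizontal || vertical))
                cv::line(out, cv::Point(l[0], l[1]), cv::Point(l[2], l[3]),
                         cv::Scalar(255), 1);
        }

        // 'out' now contains only axis-aligned segments; findContours can be run on it
        cv::imwrite("axis_aligned_lines.png", out);
        return 0;
    }

Filtering by segment length also addresses question 2) to some extent, since short tree-border fragments rarely survive the minimum-length check.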
For 2), what do you mean by tree borders? Do you mean the contours of a tree in an image? It would be very helpful if you could provide an example image.
Cheers

Related

Detecting Rectangular Shapes in edge image with OpenCV

I want to detect multiple (similar) rectangular objects in an image that have a lot of substructure within them. My current plan is to use Gaussian blur, morphology and edge detection (Canny). After edge detection I get this (with very low threshold parameters):
What I want in the end is the outline of the larger rectangles. See:
Currently I try to get this by using HoughLines followed by findContours. For this to work, I need to fiddle a lot with the threshold parameters for Canny and HoughLines.
When I get it right for one image, the parameters most likely will not work for the next one (e.g. the edges in the previous image were less dominant, leading to too many lines detected by the Hough transform). Another problem is that one side of the outer edges is sometimes no more dominant than the inner structures.
I tried stronger blurring or morphology, but at some point this blurred away the small gap between the rectangles.
Is there another way to extract the bigger rectangles from the edge image (preferably with OpenCV)?
Getting the 4 corner points would be enough.
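For illustration, one possible direction given only the edge image (a sketch, not a tuned solution): close the gaps morphologically, keep only external contours above an area threshold, and take the corners of the minimum-area rotated rectangle of each. The kernel size and the 1%-of-image area threshold are assumptions.

    #include <opencv2/opencv.hpp>
    #include <vector>

    // returns the 4 corner points of each sufficiently large outer shape
    std::vector<std::vector<cv::Point2f>> outerRectangles(const cv::Mat& edges)
    {
        // close small gaps so the outer border becomes one connected component
        cv::Mat closed;
        cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5));
        cv::morphologyEx(edges, closed, cv::MORPH_CLOSE, kernel);

        // external contours only: the substructure inside the rectangles is ignored
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(closed, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

        std::vector<std::vector<cv::Point2f>> corners;
        const double minArea = 0.01 * edges.total();   // assumed: at least 1% of the image
        for (const auto& c : contours)
        {
            if (cv::contourArea(c) < minArea) continue;
            cv::RotatedRect rr = cv::minAreaRect(c);
            cv::Point2f pts[4];
            rr.points(pts);                            // the 4 corner points
            corners.emplace_back(pts, pts + 4);
        }
        return corners;
    }

Because only external contours are kept, the inner structure no longer competes with the outer border, which sidesteps part of the parameter-tuning problem.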

Hough transform and plate localization

I'm trying to use the Hough transform during the number plate localization process. I have seen some articles and ideas about finding rectangles with it, but almost every example was quite simple: one rectangle per image, usually a game card or a TV. When I try to implement that in my system, it does not work well. I usually find more than 3000 lines, and far more intersections. I'm using the Canny edge filter. I tested it with different parameters (for both the Canny filter and the HoughLinesP function) and always got a very large number of points. Is it possible to find the plate when there is a lot of environment information in the image? Or are there other options to achieve good results? I would appreciate any answers and ideas. Some code samples in OpenCV would be very useful too.
Detecting many line segments is typical for the Hough transform. For example, the letters on the plate may contain straight line segments, as may the surroundings of the plate (a car?) and so on.
So you should try utilizing more context information in your plate detection, such as:
The background color of the plate (is it white? black? yellow? are your image data in color?). If so, try filtering for that color.
What size is a typical plate in the image? Is it always roughly the same size? Then you can filter the detected Hough segments by their length. Also look for sets of collinear line segments, which might be parts of a single but broken line.
What orientation do the plates have? Parallel to the main image axes? Or can they be rotated or even warped by perspective? In the first, axis-parallel case, restrict yourself to Hough lines with orientations of 0° or 90° (see the sketch after this answer).
Have you applied contrast normalization to the original image? What do the Canny edge images look like: are they already suited to finding plates? Can you see the plates in the edge images, or are they hidden among too many edgels or split apart too much? What about the thresholds of the Canny detector?
Finally, have you googled for papers about plate-finding algorithms?
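A minimal sketch of the color and orientation filters above, assuming a whitish plate background; the HSV bounds, Canny thresholds, and Hough parameters are rough guesses:

    #include <opencv2/opencv.hpp>
    #include <vector>

    int main()
    {
        cv::Mat bgr = cv::imread("car.jpg");               // hypothetical input
        if (bgr.empty()) return 1;

        // color filter: keep only whitish regions as plate candidates
        cv::Mat hsv, mask;
        cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
        cv::inRange(hsv, cv::Scalar(0, 0, 180), cv::Scalar(180, 60, 255), mask);
        cv::dilate(mask, mask,
                   cv::getStructuringElement(cv::MORPH_RECT, cv::Size(7, 7)));

        // edges restricted to those candidate regions
        cv::Mat gray, edges;
        cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);
        cv::Canny(gray, edges, 80, 160);
        cv::bitwise_and(edges, mask, edges);

        // the Hough transform now sees far fewer irrelevant segments
        std::vector<cv::Vec4i> lines;
        cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 40,
                        25 /*minLineLength*/, 5 /*maxLineGap*/);

        // further filtering by segment length and 0/90 degree orientation goes here
        return 0;
    }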

Extending a contour in OpenCv

I have several contours that consist of several black regions in my image. Directly adjacent to these black regions are some brighter regions that do not belong to my contours. I want to add these brighter regions to my black regions and thereby extend my contours in OpenCV.
Is there a convenient way to extend a contour? I thought about looking at the intensity change in my gradient image created with cv::Sobel and extending until the gradient changes again, i.e. the pixel intensity goes back to the regions of the image that are neither black nor bright.
Thanks!
Here are example images. The first picture shows the raw image, the second the contour extracted using Canny & findContours, and the last the Sobel gradient intensity image of the same area.
I want to include the bright boundaries in the first image in the contour.
Update: I have now used some morphological operations on the Sobel gradients and added a contour around them (see image below). The next step could be to find the adjacent pairs of purple & red contours, but it seems very much like a waste of processing time to actually have to search for directly adjacent contours. Any better ideas?
Update 2: My solution for now is to search for morphed gradient (red) contours in a bounding box around my (purple) contours and pick the one with the correct orientation & size. This works for gradient contours where the morphological operation closes the "rise" and "fall" gradient areas, as in Figure 3. But it is still a poor solution for cases where the lit area is wider than in the image above. Any ideas are still very much appreciated, thanks!
What you're trying to do is find two different features and merge them. It's not terribly difficult but you have to use multiple copies of the image to make it happen.
Make one copy, and threshold it for the dark portion
Make another copy and threshold it for the light portion
Merge both thresholded images into a new image
Apply a morphological operation like opening or closing (depending on how you threshold); this will connect nearby components
Find contours in the resultant image
Use those contours on your original image. This works because all the images are the same size and are all based on the original.
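A minimal sketch of those steps, assuming a single-channel input; the two threshold values and the kernel size are placeholders:

    #include <opencv2/opencv.hpp>
    #include <vector>

    std::vector<std::vector<cv::Point>> mergedContours(const cv::Mat& gray)
    {
        cv::Mat dark, bright, merged;

        // 1) dark regions: everything below a low gray value
        cv::threshold(gray, dark, 60, 255, cv::THRESH_BINARY_INV);

        // 2) bright regions: everything above a high gray value
        cv::threshold(gray, bright, 200, 255, cv::THRESH_BINARY);

        // 3) merge both thresholded masks
        cv::bitwise_or(dark, bright, merged);

        // 4) closing connects dark and bright components that are close together
        cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));
        cv::morphologyEx(merged, merged, cv::MORPH_CLOSE, kernel);

        // 5) contours of the combined regions; coordinates match the original image
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(merged, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        return contours;
    }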

What is the simplest *correct* method to detect rectangles in an image?

I am trying to think of the best method to detect rectangles in an image.
My initial thought is to use the Hough transform for lines and then select combinations of lines where two lines are intersected at both their lower and upper portions by the same two other lines, but this is not sufficient.
Would using a corner detector along with the Hough transform work?
Check out /samples/c/squares.c in your OpenCV distribution. This example provides a square detector, and it should be a pretty good start.
My answer here also applies.
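For reference, a minimal sketch in the spirit of that sample (the C++ counterpart ships as samples/cpp/squares.cpp): approximate each contour with a polygon and keep convex quadrilaterals whose corners are close to right angles. The Canny thresholds and the minimum-area cut-off are assumptions.

    #include <opencv2/opencv.hpp>
    #include <cmath>
    #include <vector>

    // cosine of the angle at p1 formed by the directions towards p0 and p2
    static double angleCos(cv::Point p0, cv::Point p1, cv::Point p2)
    {
        double dx1 = p0.x - p1.x, dy1 = p0.y - p1.y;
        double dx2 = p2.x - p1.x, dy2 = p2.y - p1.y;
        return (dx1 * dx2 + dy1 * dy2) /
               std::sqrt((dx1 * dx1 + dy1 * dy1) * (dx2 * dx2 + dy2 * dy2) + 1e-10);
    }

    std::vector<std::vector<cv::Point>> findRectangles(const cv::Mat& gray)
    {
        cv::Mat bin;
        cv::Canny(gray, bin, 50, 150);
        cv::dilate(bin, bin, cv::Mat());          // close small gaps in the edges

        std::vector<std::vector<cv::Point>> contours, rects;
        cv::findContours(bin, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

        for (const auto& c : contours)
        {
            std::vector<cv::Point> approx;
            cv::approxPolyDP(c, approx, 0.02 * cv::arcLength(c, true), true);

            if (approx.size() != 4 || std::fabs(cv::contourArea(approx)) < 1000 ||
                !cv::isContourConvex(approx))
                continue;

            double maxCos = 0;
            for (int i = 0; i < 4; ++i)
                maxCos = std::max(maxCos, std::fabs(
                    angleCos(approx[i], approx[(i + 1) % 4], approx[(i + 2) % 4])));

            if (maxCos < 0.3)                     // all corners close to 90 degrees
                rects.push_back(approx);
        }
        return rects;
    }

The cosine test is what distinguishes rectangles from arbitrary convex quadrilaterals.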
I don't think that currently there exists a simple and robust method to detect rectangles in an image. You have to deal with many problems such as the rectangles not being exactly rectangular but only approximately, partial occlusions, lighting changes, etc.
One possible direction is to do a segmentation of the image and then check how close each segment is to being a rectangle. Since you can't trust your segmentation algorithm, you can run it multiple times with different parameters.
Another direction is to try to parametrically fit a rectangle to the image such that the image gradient magnitude along the contour will be maximized.
If you choose to work on a parametric approach, notice that while the trivial way to parameterize a rectangle is by the locations of its four corners (8 parameters), there are other representations that require fewer parameters, e.g. center, width, height, and rotation angle (5 parameters).
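As a rough sketch of the scoring half of such a parametric fit, using the 5-parameter representation above and assuming a precomputed CV_32F gradient-magnitude image (the actual search over the parameters, e.g. a coarse grid followed by local refinement, is left out):

    #include <opencv2/opencv.hpp>
    #include <algorithm>
    #include <cmath>

    // score a candidate rectangle (center, size, angle in degrees) by the mean
    // gradient magnitude sampled along its boundary; a fit maximizes this score
    double rectangleScore(const cv::Mat& gradMag,
                          float cx, float cy, float w, float h, float angleDeg)
    {
        cv::RotatedRect rr(cv::Point2f(cx, cy), cv::Size2f(w, h), angleDeg);
        cv::Point2f corners[4];
        rr.points(corners);

        double sum = 0.0;
        int count = 0;
        for (int e = 0; e < 4; ++e)                  // walk each of the 4 edges
        {
            cv::Point2f a = corners[e], b = corners[(e + 1) % 4];
            int steps = std::max(2, (int)std::ceil(std::hypot(b.x - a.x, b.y - a.y)));
            for (int s = 0; s < steps; ++s)
            {
                cv::Point2f p = a + (b - a) * (float(s) / steps);
                int x = cvRound(p.x), y = cvRound(p.y);
                if (x < 0 || y < 0 || x >= gradMag.cols || y >= gradMag.rows)
                    continue;
                sum += gradMag.at<float>(y, x);
                ++count;
            }
        }
        return count ? sum / count : 0.0;
    }

    // gradMag would typically be built once beforehand, e.g.:
    //   cv::Sobel(gray, gx, CV_32F, 1, 0);
    //   cv::Sobel(gray, gy, CV_32F, 0, 1);
    //   cv::magnitude(gx, gy, gradMag);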
There is an extension of Hough that can be useful.
http://en.wikipedia.org/wiki/Generalised_Hough_transform

How to find contours in an image in OpenCV?

I need to find all contours in an image. I know the whole findContours() and drawContours() thing, but it's the Canny edge detector that I am having trouble with. To use findContours, you either need to use Canny edge detection or threshold the image. I cannot threshold the image because this would result in several edges getting blurred out ("merging" of the edges), so I decided to use Canny edge detection.
However, instead of getting perfect edges, I get a variety of lines with gaps in them, which prevents me from getting good contours. For example, instead of getting the edges of a square, I get 4 separate lines separated by small gaps, resulting in 4 contours instead of one.
I tried dilating, opening, closing, Gaussian blurring and basically every morphological operator, but none of these do the job. Some do not merge the lines, while others merge the lines with irrelevant lines too. So I was wondering: does anyone have a solution for getting actual contours from Canny edge detection, or, failing that, any alternative way to get all the contours from an image?
Make blobs first, then the contours come with them. :)
http://code.google.com/p/cvblob/
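If you would rather stay within core OpenCV instead of adding cvblob as a dependency, a similar blob-first approach can be sketched with connectedComponentsWithStats (OpenCV 3 and later); the gap-closing kernel and the minimum blob area are guesses:

    #include <opencv2/opencv.hpp>
    #include <vector>

    std::vector<std::vector<cv::Point>> blobContours(const cv::Mat& gray)
    {
        cv::Mat edges;
        cv::Canny(gray, edges, 50, 150);

        // close the small gaps Canny leaves so each object becomes one blob
        cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(7, 7));
        cv::morphologyEx(edges, edges, cv::MORPH_CLOSE, kernel);

        // label connected components ("blobs") and drop the tiny ones
        cv::Mat labels, stats, centroids;
        int n = cv::connectedComponentsWithStats(edges, labels, stats, centroids, 8);

        std::vector<std::vector<cv::Point>> contours;
        for (int i = 1; i < n; ++i)                  // label 0 is the background
        {
            if (stats.at<int>(i, cv::CC_STAT_AREA) < 100) continue;  // assumed minimum
            cv::Mat blob = (labels == i);            // 8-bit mask of one blob
            std::vector<std::vector<cv::Point>> c;
            cv::findContours(blob, c, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
            contours.insert(contours.end(), c.begin(), c.end());
        }
        return contours;
    }

Whether the closing actually bridges the gaps still depends on the kernel size relative to the gap width, so this does not remove the tuning problem; it only moves it to one place.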