How can I ignore small imperfections in a mostly rectangular contour when creating its bounding rectangle? - c++

I am using this image as an example:
What I want is to identify a bounding rectangle that's close to the original rectangle part, ignoring most of the imperfections outside of it.
What I get right now is this (contour and bounding rectangle created from that contour):
How can I, more or less, ignore that small area when creating my bounding rectangle, so that it's not included in it?
In other words: what I need is a bounding rectangle that's as close as possible to the original "rectangle part".

Related

Grouping different scale bounding boxes

I've created an OpenCV application for human detection on images.
I run my algorithm on the same image over different scales, and when detections are made, I end up with information about the bounding box position and the scale at which it was found. Then I want to transform that rectangle to the original scale, given that position and size will vary.
I've tried to wrap my head around this and gotten nowhere. This should be rather simple, but at the moment I am clueless.
Help anyone?
OK, got the answer elsewhere:
"What you should do is store the scale where you are at for each detection. Then transforming should be rather easy right. Imagine you have the following.
X and Y coordinates (center of bounding box) at scale 1/2 of the original. This means that you should multiply with the inverse of the scale to get the location in the original, which would be 2X, 2Y (again for the bounxing box center).
So first transform the center of the bounding box, than calculate the width and height of your bounding box in the original, again by multiplying with the inverse. Then from the center, your box will be +-width_double/2 and +-height_double/2."
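A rough sketch of that answer, assuming the detection comes back as a cv::Rect and you stored the scale factor s it was found at (the helper name is mine, not from the post):

#include <opencv2/core.hpp>

// Map a box detected at scale s (e.g. 0.5) back to original-image coordinates.
cv::Rect toOriginalScale(const cv::Rect& det, double s)
{
    // Center of the box in the scaled image.
    double cx = det.x + det.width  / 2.0;
    double cy = det.y + det.height / 2.0;

    // Multiply by the inverse of the scale (1/s) to get back to the original.
    double cxOrig = cx / s;
    double cyOrig = cy / s;
    double wOrig  = det.width  / s;
    double hOrig  = det.height / s;

    // Rebuild the rectangle around the transformed center.
    return cv::Rect(cvRound(cxOrig - wOrig / 2.0),
                    cvRound(cyOrig - hOrig / 2.0),
                    cvRound(wOrig), cvRound(hOrig));
}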

OpenCV - suitable implementation for Thin-Plate Spline Warping

I have the shape of a face together with the reconstruction of that face, and I want to warp the corresponding image of the initial shape accordingly.
Basically, I want to move the points from the original shape to the positions indicated by the reconstruction of the face. I have tried to do this using thin plate spline warping, specifically this implementation of it: http://ipwithopencv.blogspot.ro/2010/01/thin-plate-spline-example.html.
However, it's not working as I would want. I want to keep the corners of the image fixed and move only the corresponding points which define the face. I can illustrate this with 2 pictures. In the first picture I have the shape of the original face with the reconstructed shape.
Here I have the picture which I want to modify and the picture that results from using the code from the link mentioned above. The green points mark the original face points and the blue points mark their new positions, where I want to reposition them and stretch my face.
All I want is just to move the green points to the blue points so that it looks deformed. Do you know of any method to do this which you have tested?
Getting the corners fixed is quite easy. Just add four additional correspondences for the four image corners; since a thin-plate spline interpolates its control points exactly, mapping each corner onto itself keeps it in place. In terms of your example:
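// Pin the four image corners: each corner maps onto itself, so the warp leaves it fixed.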
iP.push_back(cv::Point(0, 0));
iiP.push_back(cv::Point(0, 0));
iP.push_back(cv::Point(0, height-1));
iiP.push_back(cv::Point(0, height-1));
iP.push_back(cv::Point(width-1, 0));
iiP.push_back(cv::Point(width-1, 0));
iP.push_back(cv::Point(width-1, height-1));
iiP.push_back(cv::Point(width-1, height-1));
where, of course, width is the image width and height is the image height.

Extracting an object from a low contrast background

I need to extract an object from an image where the background is almost flat...
Consider for example a book on a big white desktop. I need to get the coordinates of the 4 corners of the book to extract an ROI.
Which technique using OpenCV would you suggest? I was thinking of using k-means, but I can't know the color of the background a priori (also, the colors inside the object can vary).
If your background is really low contrast, why not try a flood fill from the image borders? You can then obtain a bounding box or bounding rect afterwards.
Another option is to apply a Hough transform and take the intersections of the outermost lines as the corners. That is, if your object is rectangular.
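A minimal sketch of the flood-fill idea (the tolerance value and the helper name are assumptions on my part, not from the answer):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// Flood-fill the near-uniform background from the image border, then bound
// whatever the fill did not reach.
cv::Rect objectBoundingRect(cv::Mat img, int tol = 10)
{
    // The mask must be 2 pixels larger than the image for cv::floodFill.
    cv::Mat mask = cv::Mat::zeros(img.rows + 2, img.cols + 2, CV_8UC1);
    int flags = 4 | cv::FLOODFILL_MASK_ONLY | (255 << 8);

    // Seed the fill from every border pixel; loDiff/upDiff absorb the small
    // variations of an "almost flat" background.
    for (int x = 0; x < img.cols; ++x)
    {
        cv::floodFill(img, mask, cv::Point(x, 0), 0, 0,
                      cv::Scalar::all(tol), cv::Scalar::all(tol), flags);
        cv::floodFill(img, mask, cv::Point(x, img.rows - 1), 0, 0,
                      cv::Scalar::all(tol), cv::Scalar::all(tol), flags);
    }
    for (int y = 0; y < img.rows; ++y)
    {
        cv::floodFill(img, mask, cv::Point(0, y), 0, 0,
                      cv::Scalar::all(tol), cv::Scalar::all(tol), flags);
        cv::floodFill(img, mask, cv::Point(img.cols - 1, y), 0, 0,
                      cv::Scalar::all(tol), cv::Scalar::all(tol), flags);
    }

    // Pixels the fill never reached belong to the object.
    cv::Mat object = mask(cv::Rect(1, 1, img.cols, img.rows)) == 0;
    std::vector<cv::Point> pts;
    cv::findNonZero(object, pts);
    return pts.empty() ? cv::Rect() : cv::boundingRect(pts);
}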

How to remove part of image containing text in opencv

What method can I use in opencv to remove the black section containing text at the bottom of the image?
Any help appreciated
If Blender's suggestion does not work in your case, you can:
1. Threshold the image so that all pixels higher than 0 become 255.
2. Find contours.
3. Calculate a bounding rectangle for each contour.
4. Take the largest bounding rectangle as the region to keep (the other ones are just the letters). You can then simply use that rectangle as an ROI, as in the sketch below.
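A minimal sketch of those four steps, assuming a grayscale input (the function name and the ROI comment are mine):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

cv::Rect largestContentRect(const cv::Mat& gray)
{
    // 1. Everything brighter than 0 becomes 255.
    cv::Mat bin;
    cv::threshold(gray, bin, 0, 255, cv::THRESH_BINARY);

    // 2. Find the (external) contours.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    // 3. + 4. Bounding rectangle per contour; the largest one is the region to keep.
    cv::Rect best;
    for (const auto& c : contours)
    {
        cv::Rect r = cv::boundingRect(c);
        if (r.area() > best.area())
            best = r;
    }
    return best;   // use it as an ROI: cv::Mat roi = image(best);
}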
Good luck,

Are there algorithms for computing the bounding rects of sprites drawn on a monochrome background?

Imagine a plain rectangular bitmap of, say, 1024x768 pixels filled with white. There are a few (non-overlapping) sprites drawn onto the bitmap: circles, squares and triangles.
Is there an algorithm (possibly even a C++ implementation) which, given the bitmap and the color which is the background color (white, in the above example), yields a list containing the smallest bounding rectangles for each of the sprites?
Here's a sample: on the left side you can see a sample bitmap which my code is given (together with the information that the 'background' is white). On the right side you can see the same image together with the bounding rectangles of the four shapes (in red); the algorithm I'm looking for computes the geometry of these rectangles.
Some painting programs have a similar feature for selecting shapes: they can even compute seemingly arbitrary bounding polygons. Instead of dragging a selection rectangle manually, you can click the 'background' (what's background and what's not is determined by some threshold) and then the tool automatically computes the shape of the object drawn onto the background. I need something like this, except that I'm perfectly fine if I just have the rectangular bounding areas for objects.
I became aware of OpenCV; it appears to be relevant (it seems to be a library which includes every graphics algorithm I can think of - and then some), but in the vast amount of information I couldn't find my way to the algorithm I'm thinking of. I would be surprised if OpenCV couldn't do this, but I fear you've got to have a PhD to use it. :-)
Here is a great article on the subject:
http://softsurfer.com/Archive/algorithm_0107/algorithm_0107.htm
I think that PhD is not required here :)
These are my first thoughts, none complicated, except for the edge detection:
For each square,
    if it's not white
        mark it as "found"
        if you haven't found one next to it already
            add it to the points list
for each point in the points list
    use basic edge detection to find the outline
    keep track of the bounds while doing so
    add the bounds to the shapes list
remove duplicates from the shapes list (this can happen for concave shapes)
I just realized this will consider white "holes" (like the one in the leftmost circle in your sample) to be their own shapes. If the first "loop" is a flood fill, it doesn't have this problem, but it will be much slower and take much more memory.
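For what it's worth, here is a hedged OpenCV sketch of the whole task (not an implementation of the pseudocode above): threshold away the known background colour, then take the bounding rectangle of each external contour. The function name and the tolerance parameter are mine.

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

std::vector<cv::Rect> spriteBounds(const cv::Mat& bgr, const cv::Scalar& background,
                                   double tol = 10.0)
{
    // Mark every pixel that differs noticeably from the background colour.
    cv::Mat diff, gray, mask;
    cv::absdiff(bgr, background, diff);
    cv::cvtColor(diff, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, mask, tol, 255, cv::THRESH_BINARY);

    // One external contour per sprite; holes inside a sprite are ignored.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<cv::Rect> rects;
    for (const auto& c : contours)
        rects.push_back(cv::boundingRect(c));
    return rects;
}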
The basic edge detection I was thinking of was simple:
given eight cardinal directions left, downleft, etc...
given two relative directions cw(direction-1) and ccw(direction+1)
starting with a point "begin"
set bounds to that point
find a direction d where begin+d is not white and begin+cw(d) is white
set current to begin+d
do
    if current is outside of bounds, increase bounds
    set d = cw(d)
    while (cur+d is white or cur+ccw(d) is not white)
        d = ccw(d)
    cur = cur + d
while (cur != begin)
http://ideone.com/
There are quite a few edge cases not considered here: what if begin is a single point, what if the trace runs to the edge of the picture, what if the start point is only 1 px wide but has blobs on two sides, and probably others... But the basic algorithm isn't that complicated.