Why does OpenCV find the contours' border points disordered and incomplete? - C++

I want to write a program that grades an answer sheet using OpenCV in C++.
But when I use cvFindContours(), the contours' border points are not completely found.
I mean, I don't get closed objects. (I use cvDilate and cvErode without knowing their real functionality, but dilating drops some contours and eroding adds some extra, unwanted ones.)
The bigger problem is that I want to find a center dot in each contour (to compare the locations of the answers with predefined left and bottom sidebars), but some contours are not symmetric, so the center dot is not exactly in the middle.
Look at the black picture: on the left, in the second contour, some of the bottom points are detected but the top ones are not.
cvCvtColor(pic, blackpic, CV_BGR2GRAY);
cvResize(blackpic, src0);
cvSmooth(src0, src0, CV_GAUSSIAN, 3, 3);
cvThreshold(src0, src, 140, 255, CV_THRESH_BINARY);
// Find contours. Note: with the method argument omitted, cvFindContours
// defaults to CV_CHAIN_APPROX_SIMPLE, which stores only the endpoints of
// straight segments; pass CV_CHAIN_APPROX_NONE to keep every border point.
CvMemStorage* st = cvCreateMemStorage();
CvSeq* first_contour = NULL;
cvFindContours(src, st, &first_contour, sizeof(CvContour), CV_RETR_LIST);
vector<vector<CvPoint> > cont;
vector<CvPoint> dot;
for (CvSeq* s = first_contour; s != NULL; s = s->h_next)
    if (s->total > C_MIN_SIZE && cvContourArea(s) > C_MIN_AREA)
    {
        cont.push_back(vector<CvPoint>()); // convert seq to vector
        CvPoint c = cvPoint(0, 0);
        for (int i = 0; i < s->total; i++)
        {
            CvPoint* p = CV_GET_SEQ_ELEM(CvPoint, s, i);
            cont.back().push_back(*p);
            CV_IMAGE_ELEM(test, uchar, p->y, p->x) = 255; // draw each contour point
            c.x += p->x; // accumulate coordinates to average them below
            c.y += p->y;
        }
        c.x /= s->total; // integer division already floors the average
        c.y /= s->total;
        dot.push_back(c);
    }
Although I am using C++, I use IplImage* and CvPoint (the OpenCV C structures) instead of cv::Mat and cv::Point (the C++ structures); if possible, please don't use Mat and the C++ API.
I don't understand why, when I draw the contours with cvDrawContours(), they are drawn completely, but when I iterate over the contours' points myself and draw them point by point, it seems that most of them were not detected!

For your first question, you can find thousands of links on the internet that explain erode, dilate, and all the other basic image processing pillars; take your time reading the documentation and don't skip steps.
Second question:
I am not sure what you expect from the contour center. Do you think you will get a point exactly at the center of those ellipses? No, that will never happen, and if it did, congratulations on a big leap in image processing history!!
What I suggest is a simple manipulation that resolves your issue:
find the contours (exactly like you are doing now)
calculate the center of each contour
when you compare the answers with the centers' locations, don't compare using a == b, because that never happens!! Instead, compare distances against a threshold to make your software robust.
Example:
bool correct = false;
CvPoint answer, center;
// Euclidean distance -- note that ^ is XOR in C++, not exponentiation,
// so the squares are written as plain multiplications
double distance = sqrt((double)(answer.x - center.x) * (answer.x - center.x) +
                       (double)(answer.y - center.y) * (answer.y - center.y));
// judge using this distance
if (distance <= 5) // choose the threshold as you want; 5 is just an example
{
    correct = true; // correct answer :)
}
Good luck

Related

Finding 2D rotation between two sets of points

I am working on a project and I am stuck on one thing. I have two sets of points extracted from contours of the same object, rotated in 2D, and I need to find the best rotation transformation (or just the angle of rotation) between those point sets.
What I did is rescale one of the contours so that both contours have the same size, and I also made the two contours share the same center of mass. Then I choose 3 random points from the first set and 3 from the second set (a kind of RANSAC in which I draw random points N times), and I need to find the rotation around their center of mass from one set to the other. I tried to use the Kabsch algorithm, but I'm not sure I'm implementing it correctly because it is not working properly.
Here is my code for Kabsch:
// P, Q - sets of contour points (Nx2 CV_32F matrices)
cv::Mat Pt;
transpose(P, Pt);
cv::Mat H = Pt * Q;
cv::Mat Ht;
transpose(H, Ht);
cv::Mat invH = H.inv();
cv::Mat HtH = Ht * H;
cv::Mat sqrtH(HtH.size(), CV_32F);
for (int i = 0; i < 2; i++)
    for (int j = 0; j < 2; j++)
    {
        // NOTE: this is an element-wise square root, which is not the
        // matrix square root that the Kabsch formulation requires
        sqrtH.at<float>(j, i) = sqrt(HtH.at<float>(j, i));
    }
// Final transform
cv::Mat R = sqrtH * invH;
I would like to at least get the angle of rotation between the two sets of points. When I use my code I get strange results and transformations that mess up my point sets.
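For reference (my addition, not part of the original question): the standard Kabsch computation replaces the element-wise square root above with an SVD of the cross-covariance matrix. A minimal sketch, assuming P and Q are Nx2 CV_32F matrices already centered on their common center of mass:
cv::Mat H = P.t() * Q;                      // 2x2 cross-covariance matrix
cv::SVD svd(H);                             // H = U * W * Vt
double d = cv::determinant(svd.vt.t() * svd.u.t());
cv::Mat D = (cv::Mat_<float>(2, 2) << 1.f, 0.f,
                                      0.f, (d > 0) ? 1.f : -1.f);
cv::Mat R = svd.vt.t() * D * svd.u.t();     // optimal rotation, reflection-safe
double angle = std::atan2(R.at<float>(1, 0), R.at<float>(0, 0)); // in radians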

OpenCV C++ how to know the number of contours per row for sorting?

I have a binary image:
In this image I can easily sort the contours that I found from top to bottom and from left to right using the overloaded std::sort.
I first sort from top to bottom via:
sort(contours.begin(), contours.end(), top_to_bottom_contour_sorter());
Then I sort from left to right by:
for (size_t i = 0; i < contours.size(); i += no_of_contours_horizontally)
{
    sort(contours.begin() + i, contours.begin() + i + no_of_contours_horizontally,
         left_to_right_contour_sorter);
}
Where top_to_bottom_contour_sorter and left_to_right_contour_sorter are separate functions that I pass to std::sort, and no_of_contours_horizontally is three (3) for the first image.
However, this only works if I know the number of contours per row. If the image has a varying number of contours horizontally, as in this image (contours_sample), the program fails. I could hard-code the indices at which the number of contours changes, but that would limit the program to a specific input instead of keeping it flexible. I am thinking of creating rects or lines to overlay on the image and counting the contours inside each, so I can obtain the number of horizontal contours. If there is a more elegant solution, I would appreciate it.
Here are my sorting functions
bool top_to_bottom_contour_sorter(const std::vector<Point>& lhs, const std::vector<Point>& rhs)
{
    Rect rectLhs = boundingRect(Mat(lhs));
    Rect rectRhs = boundingRect(Mat(rhs));
    return rectLhs.y < rectRhs.y;
}

bool left_to_right_contour_sorter(const std::vector<Point>& lhs, const std::vector<Point>& rhs)
{
    Rect rectLhs = boundingRect(Mat(lhs));
    Rect rectRhs = boundingRect(Mat(rhs));
    return rectLhs.x < rectRhs.x;
}
EDIT
Here are my current outputs and desired output for each image.
Using the first image and my current working code.
Current_Output
My desired output for the second image.
Desired_Output
I guess your only problem was that you didn't respect equality for one of the coordinates!?
Here we go:
// Custom sorter.
bool sortContour(const std::vector<cv::Point>& a, const std::vector<cv::Point>& b)
{
    cv::Rect rectA = cv::boundingRect(a);
    cv::Rect rectB = cv::boundingRect(b);
    if (rectA.y == rectB.y)
        return (rectA.x < rectB.x);
    return (rectA.y < rectB.y);
}
int main()
{
    // Load image.
    cv::Mat image = cv::imread("contours.jpg", cv::IMREAD_GRAYSCALE);

    // There are some artifacts in the JPG...
    cv::threshold(image, image, 128, 255, cv::THRESH_BINARY);

    // Find contours.
    std::vector<std::vector<cv::Point>> contours;
    std::vector<cv::Vec4i> hierarchy;
    cv::findContours(image, contours, hierarchy, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);

    // Output unsorted contours.
    cv::Mat imageUnsorted = image.clone();
    for (size_t i = 0; i < contours.size(); i++)
    {
        cv::Rect rect = cv::boundingRect(contours[i]);
        cv::putText(imageUnsorted, std::to_string(i), cv::Point(rect.x - 10, rect.y - 10),
                    cv::FONT_HERSHEY_COMPLEX, 0.5, cv::Scalar(255));
    }
    cv::imwrite("unsorted.png", imageUnsorted);

    // Sort using custom sorter.
    std::sort(contours.begin(), contours.end(), sortContour);

    // Output sorted contours.
    cv::Mat imageSorted = image.clone();
    for (size_t i = 0; i < contours.size(); i++)
    {
        cv::Rect rect = cv::boundingRect(contours[i]);
        cv::putText(imageSorted, std::to_string(i), cv::Point(rect.x - 10, rect.y - 10),
                    cv::FONT_HERSHEY_COMPLEX, 0.5, cv::Scalar(255));
    }
    cv::imwrite("sorted.png", imageSorted);
}
The unsorted contours:
The sorted contours:
As you can see, one could also just reverse the original order, since cv::findContours simply traverses in the opposite direction(s). ;-)
One big caveat: If the scan (or however you obtain the surveys) is even slightly rotated counterclockwise, this routine will fail. Therefore, the angle of the whole scan (or...) should be checked beforehand.
A simple practical solution is to sort by
y*100 + x
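A sketch of that simple key (my own illustration, assuming the rows are spaced widely enough that the weighted y term always dominates the within-row y jitter):
// Sort key: y*100 + x, using each contour's bounding-box corner as (x, y).
bool gridSorter(const std::vector<cv::Point>& a, const std::vector<cv::Point>& b)
{
    cv::Rect ra = cv::boundingRect(a);
    cv::Rect rb = cv::boundingRect(b);
    return (ra.y * 100 + ra.x) < (rb.y * 100 + rb.x);
}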
Something more sophisticated that will work also in case of rotated input is
Pick the minimum distance between two blobs
Let's call the vector connecting these two dots (dx, dy)
Sort based on (x*dx + y*dy)*100 + (x*dy - y*dx)
The output will be in a "grid" order (may be one that you want or one rotated by 90 degrees, but with rotated input the problem is ill-posed, you should choose between the two using some rule).
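A sketch of that rotation-aware key (again my own illustration; (dx, dy) is assumed to be the normalized vector between the two closest blob centers):
// The first term orders blobs along the row direction (dx, dy);
// the second term orders them along the perpendicular (column) direction.
double gridKey(const cv::Point2d& center, double dx, double dy)
{
    return (center.x * dx + center.y * dy) * 100.0
         + (center.x * dy - center.y * dx);
}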

Hausdorff Distance Object Detection

I have been struggling to implement the outlining algorithm described here and here.
The general idea of the paper is to determine the Hausdorff distance of binary images and use it to find the template image within a test image.
For template matching, it is recommended to construct image pyramids along with sliding windows, which you slide over your test image for detection. I was able to do both of these as well.
I am stuck on how to move forward from here. Do I slide my template over the test image at different pyramid layers? Or is it the test image over the template? And with regard to the sliding windows, are they meant to be ROIs of the test or the template image?
In a nutshell, I have the pieces of the puzzle but no idea which direction to take to solve it.
// Directed distance from image to tempImage.
// Note: despite the variable name, this accumulates the squared minimum
// distances (a sum-of-minima variant) instead of taking their maximum,
// as the classical directed Hausdorff distance would.
int distance(vector<Point> const& image, vector<Point> const& tempImage)
{
    int maxDistance = 0;
    for (Point imagePoint : image)
    {
        int minDistance = numeric_limits<int>::max();
        for (Point tempPoint : tempImage)
        {
            Point diff = imagePoint - tempPoint;
            int length = (diff.x * diff.x) + (diff.y * diff.y);
            if (length < minDistance) minDistance = length;
            if (length == 0) break;
        }
        maxDistance += minDistance;
    }
    return maxDistance;
}
double hausdorffDistance(vector<Point> const& image, vector<Point> const& tempImage)
{
    double maxDistImage = distance(image, tempImage);
    double maxDistTemp = distance(tempImage, image);
    return sqrt(max(maxDistImage, maxDistTemp));
}
vector<Mat> buildPyramids(Mat& frame)
{
    vector<Mat> pyramids;
    int count = 6;
    Mat prevFrame = frame, nextFrame;
    while (count > 0)
    {
        resize(prevFrame, nextFrame, Size(), .85, .85);
        prevFrame = nextFrame;
        pyramids.push_back(nextFrame);
        --count;
    }
    return pyramids;
}
vector<Rect> slidingWindows(Mat& image, int stepSize, int width, int height)
{
    vector<Rect> windows;
    for (int row = 0; row < image.rows; row += stepSize)
    {
        if ((row + height) > image.rows) break;
        for (int col = 0; col < image.cols; col += stepSize)
        {
            if ((col + width) > image.cols) break;
            windows.push_back(Rect(col, row, width, height));
        }
    }
    return windows;
}
Edit I: More analysis on my solution can be found here
This is a bi-directional task.
Forward Direction
1. Translation
For each contour, calculate its moments, which give you its centroid. Then translate each point in that contour by the centroid, i.e. contour.point[i] = contour.point[i] - centroid. This moves all of the contour points so that the center of mass sits at the origin.
PS: You need to keep track of each contour's centroid because it will be used in the next section
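A minimal sketch of this translation step (my own helper, not from the original answer):
// Move a contour's centroid to the origin; return the centroid so the
// translation can be undone later in the backward direction.
cv::Point2f translateToOrigin(std::vector<cv::Point2f>& contour)
{
    cv::Moments m = cv::moments(contour);
    cv::Point2f centroid((float)(m.m10 / m.m00), (float)(m.m01 / m.m00));
    for (cv::Point2f& p : contour)
        p -= centroid;
    return centroid;
}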
2. Rotation
With the newly translated points, calculate their rotated rect. This gives you the angle of rotation. Based on this angle, calculate the new angle you want to rotate the contour by; this answer would be helpful.
After attaining the new angle, calculate the rotation matrix. Remember that your center here will be the origin i.e. (0, 0). I did not take scaling into account (that's where the pyramids come into play) when calculating the rotation matrix hence I passed 1.
PS: You need to keep track of each contour's produced matrix because it will be used in the next section
Using this matrix, you can go ahead and rotate each point in the contour by it as shown here*.
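A sketch of these two rotation sub-steps under the answer's assumptions (points already centered at the origin, scale fixed at 1; how you derive the target angle from the rotated rect is up to you):
cv::RotatedRect rr = cv::minAreaRect(contour);  // contour: std::vector<cv::Point2f>
double angle = rr.angle;                        // derive your target angle from this
cv::Mat M = cv::getRotationMatrix2D(cv::Point2f(0.f, 0.f), angle, 1.0);
std::vector<cv::Point2f> rotated;
cv::transform(contour, rotated, M);             // applies the 2x3 affine to every point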
Once all of this is done, you can go ahead and calculate the Hausdorff distance and find contours which pass your set threshold.
Back Direction
Everything done in the first section, has to be undone in order for us to draw the valid contours onto our camera feed.
1. Rotation
Recall that each detected contour produced a rotation matrix. You want to undo the rotation of the valid contours. Just perform the same rotation but using the inverse matrix.
For each valid contour and corresponding matrix
inverse_matrix = matrix[i].inv(cv2.DECOMP_SVD)
Use * to rotate the points but with inverse_matrix as parameter
PS: When calculating the inverse, a plain inversion fails if the matrix is not square. DECOMP_SVD produces a (pseudo-)inverse even if the original matrix is non-square.
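In C++ (the language of this question), the equivalent call would presumably be:
// DECOMP_SVD computes a pseudo-inverse, so this also works for the
// non-square 2x3 rotation matrix produced above.
cv::Mat inverse_matrix = matrix.inv(cv::DECOMP_SVD);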
2. Translation
With the valid contours' points rotated back, you just have to undo the previously performed translation. Instead of subtracting, just add the moment to each point.
You can now go ahead and draw these contours to your camera feed.
Scaling
This is where image pyramids come into play.
All you have to do is resize your template image by a fixed ratio up to your desired number of times (called layers). The tutorial found here does a good job of explaining how to do this in OpenCV.
It goes without saying that the resize ratio you choose and the number of layers will play a huge role in how robust your program is.
Put it all together
Template Image Operations
Create a pyramid consisting of n layers
For each layer in n
Find contours
Translate the contour points
Rotate the contour points
This operation should only be performed once; only the resulting rotated points need to be stored.
Camera Feed Operations
Assumptions
Let the rotated contours of the template image at each level be stored in templ_contours. So if I say templ_contours[0], this is going to give me the rotated contours at pyramid level 0.
Let the image's translated contours, rotated contours, and moments be stored in transCont, rotCont, and moment respectively.
image_contours = find contours

for each contour detected in image
    moment = calculate moment
    for each point in image_contours
        transCont.thisPoint = forward_translate(image_contours.thisPoint)
        rotCont.thisPoint   = forward_rotate(transCont.thisPoint)

for each contour_layer in templ_contours
    for each contour in rotCont
        calculate Hausdorff distance
        valid_contours = contours passing the distance threshold

for each point in valid_contours
    valid_point = backward_rotate(valid_point)
for each point in valid_contours
    valid_point = backward_translate(valid_point)

drawContours(valid_contours, image)

How do I find if the object ball I track crosses a line that I have drawn?

I am using C++ with OpenCV 3.0 to create a basic form of SimulCam.
I am currently stuck on finding a way to check when the tracked ball has crossed/intersected a line that I have drawn on the output window.
The ball is being tracked using contours, and ultimately I would like to work out the exact frame number at which this intersection happens.
But first, I would like to understand how to perform the check that the ball has crossed/intersected the drawn line.
Scene with ball moving towards line
I have the contours for the object; I would like to understand how to perform the intersection check.
Code for finding contours and Object Tracking:
findContours(resizedThresh, contourVector, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, cvPoint(0, 0));
contourVector.resize(contourVector.size());
line(resizedF_Fast, Point(300, 0), Point(300, 360), Scalar(255), 2, 8);
for (size_t i = 0; i < contourVector.size(); i++) {
    approxPolyDP(Mat(contourVector[i]), contourVector[i], 0.01 * arcLength(contourVector[i], true), true);
    double area = contourArea(contourVector[i]);
    if (contourVector[i].size() > 5 && (area > 200)) {
        ++circlesC;
        drawContours(resizedF_Fast, contourVector, i, Scalar(255, 255, 255), 2, CV_AA, hierarchy, abs(1));
        searchForMovement(resizedThresh, resizedF_Fast);
    }
}
I have done some other research and have been looking into using LineIterator, but I'm not entirely sure about it.
Apologies for the potentially crude code; I'm a novice. Any help would be greatly appreciated.
My first approach would be to fit a circle to your contour points and then compute the distance between the line and the circle center with the dot product. Maybe like this (I haven't tried it out):
Point2d Pc;                      // circle center (fitted to the contour)
Point2d L0(300, 0);              // line start point
Point2d L1(300, 360);            // line end point
Point2d v = L1 - L0;             // direction of the line
Point2d w = Pc - L0;             // from line start to the circle center
double c1 = w.dot(v);
double c2 = v.dot(v);
double b = c1 / c2;              // projection parameter along the line
Point2d Pb = L0 + b * v;         // closest point on the line to Pc
double distance = sqrt((Pc - Pb).dot(Pc - Pb)); // point-to-line distance
Then you check whether this distance minus the circle radius is less than or equal to zero.
But due to the perspective transformation of your camera, the ball becomes an ellipse, so this assumption becomes less accurate.
If you need a more accurate solution, you have to check every contour point and take the minimum distance (a sketch follows after the link below).
This link shows some code and further explanations.
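A minimal sketch of that point-wise check (my own illustration, specialized to the vertical line x = 300 drawn above):
// Minimum distance from any contour point to the vertical line x = 300.
double minDist = std::numeric_limits<double>::max();
for (const cv::Point& p : contourVector[i])
    minDist = std::min(minDist, std::abs(p.x - 300.0));
bool touches = (minDist == 0.0); // the contour reaches the line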
I finally worked through this; I'll post the general idea here.
For each frame, calculate the object contours.
Each contour has an x and y coordinate stored.
Use a LineIterator (e.g. lineIt) to cycle through all the values of a line.
// lineIt is assumed to have been created over the drawn line, e.g.:
// cv::LineIterator lineIt(resizedF_Fast, Point(300, 0), Point(300, 360));
if (xpos_contour < lineIt.pos().x) {
    // Object is to the left of the line
}
else if (xpos_contour > lineIt.pos().x) {
    // Object is to the right of the line
}
Bear in mind that the input video I'm using was filmed top-down, so only the x coordinate mattered.

How to ignore/remove contours that touch the image boundaries

I have the following code to detect contours in an image using cvThreshold and cvFindContours:
CvMemStorage* storage = cvCreateMemStorage(0);
CvSeq* contours = 0;
cvThreshold( processedImage, processedImage, thresh1, 255, CV_THRESH_BINARY );
nContours = cvFindContours(processedImage, storage, &contours, sizeof(CvContour), CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE, cvPoint(0,0) );
I would like to extend this code to filter/ignore/remove any contours that touch the image boundaries, but I am unsure how to go about it. Should I filter the thresholded image, or can I filter the contours afterwards? I hope somebody knows an elegant solution, since surprisingly I could not come up with one by googling.
Update 2021-11-25
updates code example
fixes bugs with image borders
adds more images
adds Github repo with CMake support to build example app
Full out-of-the-box example can be found here:
C++ application with CMake
General info
I am using OpenCV 3.0.0
Using cv::findContours actually alters the input image, so make sure that you either work on a separate copy specifically for this function or do not use the image any further at all
Update 2019-03-07: "Since opencv 3.2 source image is not modified by this function." (see corresponding OpenCV documentation)
General solution
All you need to know about a contour is whether any of its points touches the image border. This information can easily be extracted by one of the following two procedures:
Check each point of your contour for its location. If it lies on the image border (x = 0 or x = width - 1 or y = 0 or y = height - 1), simply ignore it (see the sketch right after this list).
Create a bounding box around the contour. If the bounding box lies along the image border, you know the contour does, too.
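A sketch of the first, point-wise procedure (my own illustration; the code below implements the second one):
bool contourTouchesImageBorderPointwise(const std::vector<cv::Point>& contour,
                                        const cv::Size& imageSize)
{
    for (const cv::Point& p : contour)
    {
        if (p.x == 0 || p.y == 0 ||
            p.x == imageSize.width - 1 || p.y == imageSize.height - 1)
            return true;
    }
    return false;
}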
Code for the second solution (CMake):
cmake_minimum_required(VERSION 2.8)
project(SolutionName)
find_package(OpenCV REQUIRED)
set(TARGETNAME "ProjectName")
add_executable(${TARGETNAME} ./src/main.cpp)
include_directories(${CMAKE_CURRENT_BINARY_DIR} ${OpenCV_INCLUDE_DIRS} ${OpenCV2_INCLUDE_DIR})
target_link_libraries(${TARGETNAME} ${OpenCV_LIBS})
Code for the second solution (C++):
bool contourTouchesImageBorder(const std::vector<cv::Point>& contour, const cv::Size& imageSize)
{
    cv::Rect bb = cv::boundingRect(contour);

    bool retval = false;

    int xMin, xMax, yMin, yMax;
    xMin = 0;
    yMin = 0;
    xMax = imageSize.width - 1;
    yMax = imageSize.height - 1;

    // Use less/greater comparisons to potentially support contours outside of
    // image coordinates, possible future workarounds with cv::copyMakeBorder where
    // contour coordinates may be shifted, and just to be safe.
    // However, note that bounding boxes of size 1 will have their start point
    // included (of course) but also their width/height values set to 1,
    // although they should not contain 2 pixels.
    // That is why we have to subtract 1 from the "search grid".
    int bbxEnd = bb.x + bb.width - 1;
    int bbyEnd = bb.y + bb.height - 1;

    if (bb.x <= xMin ||
        bb.y <= yMin ||
        bbxEnd >= xMax ||
        bbyEnd >= yMax)
    {
        retval = true;
    }

    return retval;
}
Call it via:
...
cv::Size imageSize = processedImage.size();

for (auto c : contours)
{
    if (contourTouchesImageBorder(c, imageSize))
    {
        // Do your thing...
        int asdf = 0;
    }
}
...
Full C++ example:
void testContourBorderCheck()
{
    std::vector<std::string> filenames =
    {
        "0_single_pixel_top_left.png",
        "1_left_no_touch.png",
        "1_left_touch.png",
        "2_right_no_touch.png",
        "2_right_touch.png",
        "3_top_no_touch.png",
        "3_top_touch.png",
        "4_bot_no_touch.png",
        "4_bot_touch.png"
    };

    // Load example image
    //std::string path = "C:/Temp/!Testdata/ContourBorderDetection/test_1/";
    std::string path = "../Testdata/ContourBorderDetection/test_1/";

    for (size_t i = 0; i < filenames.size(); ++i)
    {
        //std::string filename = "circle3BorderDistance0.png";
        std::string filename = filenames.at(i);
        std::string fqn = path + filename;
        cv::Mat img = cv::imread(fqn, cv::IMREAD_GRAYSCALE);
        cv::Mat processedImage;
        img.copyTo(processedImage);

        // Create copy for contour extraction since cv::findContours alters the input image
        cv::Mat workingCopyForContourExtraction;
        processedImage.copyTo(workingCopyForContourExtraction);

        // Extract contours
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(workingCopyForContourExtraction, contours, cv::RetrievalModes::RETR_EXTERNAL, cv::ContourApproximationModes::CHAIN_APPROX_SIMPLE);

        // Prepare image for contour drawing
        cv::Mat drawing;
        processedImage.copyTo(drawing);
        cv::cvtColor(drawing, drawing, cv::COLOR_GRAY2BGR);

        // Draw contours
        cv::drawContours(drawing, contours, -1, cv::Scalar(255, 255, 0), 1);

        //cv::imwrite(path + "processedImage.png", processedImage);
        //cv::imwrite(path + "workingCopyForContourExtraction.png", workingCopyForContourExtraction);
        //cv::imwrite(path + "drawing.png", drawing);

        const auto imageSize = img.size();
        bool liesOnBorder = contourTouchesImageBorder(contours.at(0), imageSize);
        std::cout << filename << " lies on border: " << liesOnBorder;
        std::cout << std::endl;
        std::cout << std::endl;

        cv::imshow("processedImage", processedImage);
        cv::imshow("workingCopyForContourExtraction", workingCopyForContourExtraction);
        cv::imshow("drawing", drawing);
        cv::waitKey();

        //cv::Size imageSize = workingCopyForContourExtraction.size();
        for (auto c : contours)
        {
            if (contourTouchesImageBorder(c, imageSize))
            {
                // Do your thing...
                int asdf = 0;
            }
        }
    }
}

int main(int argc, char** argv)
{
    testContourBorderCheck();
    return 0;
}
int main(int argc, char** argv)
{
testContourBorderCheck();
return 0;
}
Problem with contour detection near image borders
OpenCV seems to have a problem with correctly finding contours near image borders.
For both objects, the detected contour is the same (see images). However, in image 2 the detected contour is not correct, since part of the object lies along x = 0, but the contour lies at x = 1.
This seems like a bug to me.
There is an open issue regarding this here: https://github.com/opencv/opencv/pull/7516
There also seems to be a workaround with cv::copyMakeBorder (https://github.com/opencv/opencv/issues/4374), however it seems a bit complicated.
If you can be a bit patient, I'd recommend waiting for the release of OpenCV 3.2 which should happen within the next 1-2 months.
New example images:
Single pixel top left, objects left, right, top, bottom, each touching and not touching (1px distance)
Example images
Object touching image border
Object not touching image border
Contour for object touching image border
Contour for object not touching image border
Although this question is about C++, the same issue affects OpenCV in Python. A solution to the OpenCV '0-pixel' border issue in Python (which can likely be adapted to C++ as well) is to pad the image with 1 pixel on each border, call OpenCV with the padded image, and then remove the border afterwards. Something like:
img2 = np.pad(img.copy(), ((1, 1), (1, 1), (0, 0)), 'edge')
# call openCV with img2; it will set all the border pixels in our new pad to 0
# now get rid of our border
img = img2[1:-1, 1:-1, :]
# img is back to its original dimensions, and contours can now lie at the edge of the image
If anyone needs this in MATLAB, here is the function.
function [touch] = componentTouchesImageBorder(C, im_row_max, im_col_max)
    % C is a bwconncomp instance
    touch = 0;
    S = regionprops(C, 'PixelList');
    % PixelList columns are [x y], i.e. [column row]
    c_col_max = max(S.PixelList(:,1));
    c_col_min = min(S.PixelList(:,1));
    c_row_max = max(S.PixelList(:,2));
    c_row_min = min(S.PixelList(:,2));
    if (c_row_max == im_row_max || c_row_min == 1 || c_col_max == im_col_max || c_col_min == 1)
        touch = 1;
    end
end