I tried to calculate a contour's perimeter using arcLength. The contour is read from a file into a Mat, which is a black-and-white picture of the contour only.
However, when I pass this Mat into the function, it throws an error:
Assertion failed (curve.checkVector(2) >= 0 && (curve.depth() == CV_32F || curve.depth() == CV_32S)) in arcLength
I have figured out that the actual cause is that curve.checkVector(2) returns -1. Although I have read the documentation for this method, I still do not understand how to fix the error.
Here is the test image with corner points (1,1), (1,21), (21,21), (21,1)
The contour should be (from the OpenCV docs):
Input vector of 2D points, stored in std::vector or Mat.
not a black-and-white image.
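For a quick sanity check (a minimal sketch, building the contour directly from the corner points of the test square given above):

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    // The test square's corners, in contour order
    std::vector<cv::Point> square{ {1, 1}, {1, 21}, {21, 21}, {21, 1} };

    // Closed polygon with four sides of length 20 -> perimeter 80
    double perimeter = cv::arcLength(square, true);
    std::cout << perimeter << std::endl; // 80
    return 0;
}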
You can compute the perimeter in different ways. The most robust is to use findContours to retrieve only the external contour (RETR_EXTERNAL) and call arcLength on that contour.
A few examples:
#include <opencv2/opencv.hpp>
#include <vector>
using namespace std;
using namespace cv;
int main()
{
Mat1b img = imread("path_to_image", IMREAD_GRAYSCALE);
// Method 1: length of unsorted points
// NOTE: doesn't work! findNonZero returns points in row-major scan order,
// not in contour order, so arcLength connects unrelated points.
vector<Point> points;
findNonZero(img, points);
double len1 = arcLength(points, true);
// 848.78
// Method 2: length of the external contour
vector<vector<Point>> contours;
findContours(img.clone(), contours, RETR_EXTERNAL, CHAIN_APPROX_NONE); // Retrieve only external contour
double len2 = arcLength(contours[0], true);
// 80
// Method 3: length of convex hull of contour
// NOTE: convex hull based methods work reliably only for convex shapes.
vector<Point> hull1;
convexHull(contours[0], hull1);
double len3 = arcLength(hull1, true);
// 80
// Method 4: length of convex hull of unsorted points
// NOTE: convex hull based methods work reliably only for convex shapes.
vector<Point> hull2;
convexHull(points, hull2);
double len4 = arcLength(hull2, true);
// 80
// Method 5: number of points in the contour
// NOTE: this simply counts the number of points in the contour.
// It works only if:
// 1) findContours was used with the option CHAIN_APPROX_NONE,
// 2) the contour is thin (has a thickness of 1 pixel).
double len5 = contours[0].size();
// 80
return 0;
}
I have a shape that I want to extract contours from (I need to get the number of contours right: two), but in the hierarchy I get 4 or more contours instead of two. I just can't see why; it seems obvious, there is no noise, and I used dilation and erosion beforehand.
I tried changing all the parameters, and nothing worked. I also tried with an image of a white square, and it didn't work either. Here is my code for that:
Mat I = imread("test.png", CV_LOAD_IMAGE_GRAYSCALE);
Mat B;
I.convertTo(B, CV_8U);
findContours(B, contour_vec, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_NONE);
Why is the contour so disconnected? What can I do to get 2 contours in the hierarchy?
In your image there are 5 contours: 2 external contours, 2 internal contours, and 1 in the top right.
You can tell internal and external contours apart by checking whether they are oriented CW or CCW. You can do this with contourArea and the oriented flag:
oriented – Oriented area flag. If it is true, the function returns a signed area value, depending on the contour orientation (clockwise or counter-clockwise). Using this feature you can determine orientation of a contour by taking the sign of an area. By default, the parameter is false, which means that the absolute value is returned.
So, drawing external contours in red and internal ones in green, you get:
You can then store only the external contours (see externalContours in the code below):
#include <opencv2/opencv.hpp>
#include <vector>
using namespace std;
using namespace cv;
int main()
{
// Load grayscale image
Mat1b B = imread("path_to_image", IMREAD_GRAYSCALE);
// Find contours
vector<vector<Point>> contours;
findContours(B.clone(), contours, RETR_TREE, CHAIN_APPROX_NONE);
// Create output image
Mat3b out;
cvtColor(B, out, COLOR_GRAY2BGR);
vector<vector<Point>> externalContours;
for (size_t i=0; i<contours.size(); ++i)
{
// Find orientation: CW or CCW
double area = contourArea(contours[i], true);
if (area >= 0)
{
// Internal contours
drawContours(out, contours, i, Scalar(0, 255, 0));
}
else
{
// External contours
drawContours(out, contours, i, Scalar(0, 0, 255));
// Save external contours
externalContours.push_back(contours[i]);
}
}
imshow("Out", out);
waitKey();
return 0;
}
Please remember that findContours corrupts the input image (the second image you're showing is garbage); just pass a clone of the image to findContours to avoid corrupting the original. (Note: since OpenCV 3.2, findContours no longer modifies its input.)
I'm using OpenCV 2.4.13.
I'm trying to find the perimeter of a connected component. I was thinking of using connectedComponentsWithStats, but it doesn't return the perimeter, only the area, width, etc.
There is a method to find the area from a contour, but not the opposite (for a single component, I mean, not the entire image).
The method arcLength doesn't work either, because I have all the points of the component, not only the contour.
I know there is a brute-force way to find it, by iterating through each pixel of the component and checking whether it has neighbors that aren't in the same component. But I'd like a function that costs less.
Otherwise, if you know a way to link a component to the contours found by findContours, that suits me as well.
Thanks
Adding to @Miki's answer, this is a faster way to find the perimeter of a connected component:
//getting the connected components with statistics
cv::Mat1i labels, stats;
cv::Mat centroids;
int lab = connectedComponentsWithStats(img, labels, stats, centroids);
for (int i = 1; i < lab; ++i)
{
//Rectangle around the connected component
cv::Rect rect(stats(i, cv::CC_STAT_LEFT), stats(i, cv::CC_STAT_TOP), stats(i, cv::CC_STAT_WIDTH), stats(i, cv::CC_STAT_HEIGHT));
// Get the mask for the i-th component
Mat1b mask_i = labels(rect) == i;
// Compute the contour
vector<vector<Point>> contours;
findContours(mask_i, contours, RETR_EXTERNAL, CHAIN_APPROX_NONE);
if (contours.empty())
continue;
//Finding the perimeter
double perimeter = contours[0].size();
//you can use this as well for measuring perimeter
//double perimeter = arcLength(contours[0], true);
}
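The speed-up over running findContours on a full-size mask (as in the answer below) comes from cropping to each component's bounding rectangle first, so findContours only scans a small ROI per component.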
The easiest thing is probably to use findContours.
You can compute the contour of the i-th component computed by connectedComponents(WithStats), so they are aligned with your labels. Using CHAIN_APPROX_NONE you'll get all the points in the contour, so the size() of the vector is already a measure of the perimeter. You can optionally use arcLength(...) to get a more accurate result:
Mat1i labels;
int n_labels = connectedComponents(img, labels);
for (int i = 1; i < n_labels; ++i)
{
// Get the mask for the i-th component
Mat1b mask_i = labels == i;
// Compute the contour
vector<vector<Point>> contours;
findContours(mask_i.clone(), contours, RETR_EXTERNAL, CHAIN_APPROX_NONE);
if (!contours.empty())
{
// The first contour (and probably the only one)
// is the one you're looking for
// Compute the perimeter
double perimeter_i = contours[0].size();
}
}
I'm playing around with OpenCV and I want to know how you would build a simple version of a perspective-transform program. I have an image of a parallelogram, and each of its corners consists of a pixel with a specific color that appears nowhere else in the image. I want to iterate through all pixels, find these 4 pixels, and then use them as corner points for a new image in order to warp the perspective of the original image. In the end I should have a zoomed-in square.
Point2f src[4]; // Is this the right datatype to use here?
int lineNumber = 0;
// iterating through the pixels
for (int y = 0; y < image.rows; y++)
{
    for (int x = 0; x < image.cols; x++)
    {
        Vec3b color = image.at<Vec3b>(Point(x, y));
        if (color.val[1] == 245 && color.val[2] == 111 && color.val[0] == 10) {
            src[lineNumber] = Point2f(x, y); // "this pixel"
            lineNumber++;
        }
    }
}
/* I also need to get the dst points for getPerspectiveTransform
and afterwards warpPerspective. How do I get those? Take the other
points, check the biggest distance somehow and use it as the max length to calculate
the rest? */
How should I use OpenCV to solve this problem? (I just guess I'm not doing it the "normal and clever" way.) Also, how do I do the next step, which would be using more than one pixel as a "marker" and calculating the average point in the middle of those points? Is there something more efficient than running through each pixel?
Something like this basically:
Starting from an image with colored circles as markers, like:
Note that this is a png image, i.e. one with loss-less compression that preserves the actual colors. If you use a lossy compression like jpeg, the colors will change a little, and you cannot segment them with an exact match, as done here.
You need to find the center of each marker.
Segment the (known) color, using inRange
Find all connected components with the given color, with findContours
Find the largest blob, here done with std::max_element with a lambda, and std::distance to recover its index. You could use a plain for loop instead.
Find the center of mass of the largest blob, here done with moments. Again, a simple loop would also work.
Add the center to your source vertices.
Your destination vertices are just the four corners of the destination image.
You can then use getPerspectiveTransform and warpPerspective to find and apply the warping.
The resulting image is:
Code:
#include <opencv2/opencv.hpp>
#include <vector>
#include <algorithm>
using namespace std;
using namespace cv;
int main()
{
// Load image
Mat3b img = imread("path_to_image");
// Create a black output image
Mat3b out(300,300,Vec3b(0,0,0));
// The color of your markers, in order
vector<Scalar> colors{ Scalar(0, 0, 255), Scalar(0, 255, 0), Scalar(255, 0, 0), Scalar(0, 255, 255) }; // red, green, blue, yellow
vector<Point2f> src_vertices(colors.size());
vector<Point2f> dst_vertices = { Point2f(0, 0), Point2f(0, out.rows - 1), Point2f(out.cols - 1, out.rows - 1), Point2f(out.cols - 1, 0) };
for (int idx_color = 0; idx_color < colors.size(); ++idx_color)
{
// Detect color
Mat1b mask;
inRange(img, colors[idx_color], colors[idx_color], mask);
// Find connected components
vector<vector<Point>> contours;
findContours(mask, contours, RETR_EXTERNAL, CHAIN_APPROX_NONE);
// Find largest
int idx_largest = distance(contours.begin(), max_element(contours.begin(), contours.end(), [](const vector<Point>& lhs, const vector<Point>& rhs) {
return lhs.size() < rhs.size();
}));
// Find centroid of largest component
Moments m = moments(contours[idx_largest]);
Point2f center(m.m10 / m.m00, m.m01 / m.m00);
// Found marker center, add to source vertices
src_vertices[idx_color] = center;
}
// Find transformation
Mat M = getPerspectiveTransform(src_vertices, dst_vertices);
// Apply transformation
warpPerspective(img, out, M, out.size());
imshow("Image", img);
imshow("Warped", out);
waitKey();
return 0;
}
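Note that the order of the entries in colors must match the order of dst_vertices: getPerspectiveTransform maps the i-th source vertex to the i-th destination vertex, so a mismatched order will produce a flipped or twisted warp.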
I have a contour-finder program based on OpenCV. Now I'm trying to get the number of corners in each found contour using the Harris corner detector. My problem is passing a single contour (one element of contours) to the detector:
............................
std::vector<std::vector<cv::Point>> contours;
...........................
for (int i = 0; i < contours.size(); i++) {
    if (!contours[i].empty()) {
        Harris.detect(cv::Mat(contours[i])); // here the program crashes because the dimensions don't fit ????
        Harris.getCorners(approx, 0.4);
        std::cout << "size \n" << approx.size() << std::endl;
    }
}
.........................
UPDATE
I checked the code again, and the program crashes in this part of the Harris class:
void HarrisDetector::detect(const cv::Mat& image) {
// Harris computation
cv::cornerHarris(image, cornerStrength, // here the program crashes
neighbourhood,// neighborhood size
aperture, // aperture size
k); // Harris parameter
// internal threshold computation
double minStrength; // not used
cv::minMaxLoc(cornerStrength,&minStrength,&maxStrength);
//local maxima detection
cv::Mat dilated; // temporary image
cv::dilate(cornerStrength,dilated,cv::Mat());
cv::compare(cornerStrength,dilated,localMax,cv::CMP_EQ);
}
Any idea?
You can use the method argument of cv::findContours to apply some approximation (e.g. CHAIN_APPROX_SIMPLE), and then use contours[i].size() to get the number of corners.
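For instance, a minimal sketch of this approach (the file name "shapes.png" and the 0.02 epsilon factor are illustrative placeholders, not values from the question):

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>
using namespace std;
using namespace cv;

int main()
{
    // Hypothetical binary input image containing polygonal shapes
    Mat1b img = imread("shapes.png", IMREAD_GRAYSCALE);

    // CHAIN_APPROX_SIMPLE compresses straight segments to their end points,
    // so for polygonal shapes the stored points are close to the corners
    vector<vector<Point>> contours;
    findContours(img.clone(), contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);

    for (size_t i = 0; i < contours.size(); ++i)
    {
        // approxPolyDP makes the corner count more robust to jagged edges
        vector<Point> approx;
        approxPolyDP(contours[i], approx, 0.02 * arcLength(contours[i], true), true);
        cout << "Contour " << i << ": " << approx.size() << " corners" << endl;
    }
    return 0;
}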
My work is based on images with an array of dots (Fig. 1), and the final result is shown in Fig. 4. I will explain my work step by step.
Fig. 1 Original image
Step 1: Detect the edge of every object, including the dots and a "ring" that I want to delete for better performance. The result of edge detection is shown in Fig. 2. I used the Canny edge detector, but it didn't work well with some light-gray dots. My first question is: how can I close the contours of the dots and reduce other noise as much as possible?
Fig. 2 Edge detection
Step 2: Dilate every object. I didn't find a good way to fill the holes, so I dilate the objects directly. As shown in Fig. 3, the holes seem to be enlarged too much, and so does other noise. My second question is: how can I fill or dilate the holes so that they become filled circles of the same/similar size?
Fig. 3 Dilation
Step 3: Find and draw the mass center of every dot. As shown in Fig. 4, due to the coarse image processing there are still marks left by the "ring", and some dots are shown as two white pixels. The desired result should show only the dots, with one white pixel per dot.
Fig. 4: Mass centers
Here is my code for these 3 steps. Can anyone help me improve it?
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <stdlib.h>
#include <stdio.h>
#include <cv.h>
#include <highgui.h>
using namespace std;
using namespace cv;
// Global variables
Mat src, edge, dilation;
int dilation_size = 2;
// Function header
void thresh_callback(int, void*);
int main(int argc, char** argv)
{
IplImage* img = cvLoadImage("c:\\dot1.bmp", 0); // dot1.bmp = Fig. 1
// Perform canny edge detection
cvCanny(img, img, 33, 100, 3);
// IplImage to Mat
Mat imgMat(img);
src = img;
namedWindow("Step 1: Edge", CV_WINDOW_AUTOSIZE);
imshow("Step 1: Edge", src);
// Apply the dilation operation
Mat element = getStructuringElement(2, Size(2 * dilation_size + 1, 2 * dilation_size + 1),
Point(dilation_size, dilation_size)); // dilation_type = MORPH_ELLIPSE
dilate(src, dilation, element);
// imwrite("c:\\dot1_dilate.bmp", dilation);
namedWindow("Step 2: Dilation", CV_WINDOW_AUTOSIZE);
imshow("Step 2: Dilation", dilation);
thresh_callback( 0, 0 );
waitKey(0);
return 0;
}
/* function thresh_callback */
void thresh_callback(int, void*)
{
vector<vector<Point>> contours;
vector<Vec4i> hierarchy;
// Find contours
findContours(dilation, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));
// Get the moments
vector<Moments> mu(contours.size());
for(int i = 0; i < contours.size(); i++) {
mu[i] = moments(contours[i], false);
}
// Get the mass centers
vector<Point2f> mc(contours.size());
for(int i = 0; i < contours.size(); i++) {
mc[i] = Point2f(mu[i].m10/mu[i].m00 , mu[i].m01/mu[i].m00);
}
// Draw mass centers
Mat drawing = Mat::zeros(dilation.size(), CV_8UC1);
for( int i = 0; i< contours.size(); i++ ) {
Scalar color = Scalar(255, 255, 255);
line(drawing, mc[i], mc[i], color, 1, 8, 0);
}
namedWindow("Step 3: Mass Centers", CV_WINDOW_AUTOSIZE);
imshow("Step 3: Mass Centers", drawing);
}
There are a few things you can do to improve your results. To reduce noise in the image, you can apply a median blur before applying the Canny operator. This is a common de-noising technique. Also, try to avoid using the C API and IplImage.
cv::Mat img = cv::imread("c:\\dot1.bmp", 0); // dot1.bmp = Fig. 1
cv::medianBlur(img, img, 7);
// Perform canny edge detection
cv::Canny(img, img, 33, 100);
This significantly reduces the amount of noise in your edge image:
To better retain the original sizes of your dots, you can perform a few iterations of morphological closing with a smaller kernel rather than dilation. This will also reduce joining of the dots with the circle:
// This replaces the call to dilate()
cv::morphologyEx(src, dilation, cv::MORPH_CLOSE, cv::noArray(), cv::Point(-1, -1), 2);
This performs two iterations of closing with the default 3x3 kernel, which is selected by passing cv::noArray().
The result is cleaner, and the dots are completely filled:
Leaving the rest of your pipeline unmodified gives the final result. There are still a few spurious mass centers from the circle, but considerably fewer than the original method:
If you wanted to remove the circle from the results entirely, you could try using cv::HoughCircles() and adjusting the parameters until you get a good result. This might be difficult because the entire circle is not visible in the image, only segments, but I recommend you experiment with it. If you do detect the innermost circle, you can use it as a mask to filter out the external mass centers.
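A rough sketch of that idea (every HoughCircles parameter and the 5-pixel tolerance are placeholders to tune for your image; gray is assumed to be the blurred grayscale image before Canny, and mc the mass centers from the question's thresh_callback):

// Fragment; needs <cmath> in addition to the question's headers
std::vector<cv::Vec3f> circles;
cv::HoughCircles(gray, circles, cv::HOUGH_GRADIENT,
                 1,         // accumulator resolution (same as the input image)
                 50,        // minimum distance between detected circle centers
                 100,       // upper Canny threshold
                 30,        // accumulator threshold; lower finds more (false) circles
                 100, 300); // min/max radius, tune to your ring

// Keep only mass centers that do not lie on a detected circle
std::vector<cv::Point2f> filtered;
for (const cv::Point2f& p : mc)
{
    bool on_ring = false;
    for (const cv::Vec3f& c : circles)
    {
        float dx = p.x - c[0], dy = p.y - c[1];
        float dist_to_circle = std::fabs(std::sqrt(dx * dx + dy * dy) - c[2]);
        if (dist_to_circle < 5.0f) // tolerance in pixels, a placeholder
            on_ring = true;
    }
    if (!on_ring)
        filtered.push_back(p);
}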
How to close the contours of the dots? Use drawContours with the filled drawing option (CV_FILLED, or thickness = -1).
How to reduce noise? Use one of the blurring (low-pass filtering) methods.
How to get a similar size? Use erosion after dilation, i.e. morphological closing.
How to get one dot per circle, and output without the outer ring? Find the average of all contour areas, erase the contours that differ greatly from this value, and output the remaining centers (a minimal sketch of this idea follows below).
Aurelius already mentioned most of these, but since this problem is quite interesting, I will probably try to post a complete solution when I have enough time. Good luck.
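A minimal sketch of the area-filtering idea from the last point above (the 1.5 tolerance factor is an arbitrary placeholder; contours is assumed to come from findContours, as in the question's thresh_callback):

// Compute the mean contour area
double mean_area = 0.0;
for (size_t i = 0; i < contours.size(); ++i)
    mean_area += contourArea(contours[i]);
if (!contours.empty())
    mean_area /= contours.size();

// Keep only contours whose area is close to the mean; ring fragments
// and small noise blobs differ strongly from it and are discarded
vector<vector<Point>> dots;
for (size_t i = 0; i < contours.size(); ++i)
{
    double area = contourArea(contours[i]);
    if (area > mean_area / 1.5 && area < mean_area * 1.5) // placeholder tolerance
        dots.push_back(contours[i]);
}
// Then compute the mass centers of 'dots' only, as in the question's code.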