C++. OpenCV. Count number of contours

I need to get the number of contours (only closed/looped contours) from my image. For this purpose I use the cv::connectedComponents function. As the documentation says:
returns N, the total number of labels [0, N-1] where 0 represents the background label
So to get the real number of contours I just need to decrement the returned value (to subtract the background label). This method works fine for most images I need to process (they are actually AutoCAD files). However, I've got one image which is processed incorrectly. The returned value for this image is 4, yet we can see that there are 4 circles in the image plus the background, so the returned value should be 5.
Here is the image I got the problem with:
Here is the code I use:
void run_test()
{
    cv::Mat img, img_edge, labels;
    img = cv::imread("G:\\test.jpg", cv::IMREAD_GRAYSCALE);
    cv::threshold(img, img_edge, 128, 255, cv::THRESH_BINARY);
    // res = total number of labels, including the background label 0
    int res = cv::connectedComponents(img_edge, labels, 8, CV_16U);
    // Number of contours = res - 1 (subtract the background label)
    int num_contours = res - 1;
}
So I've got two questions: why is the returned value for this image 4 (and not 5), and is using connectedComponents the correct way to get the number of contours?

Related

Opencv - How to get number of vertical lines present in image (count of lines)

First, I integrated the OpenCV framework into Xcode. All the OpenCV code is in Objective-C and I am using it from Swift via a bridging header. I am new to the OpenCV framework and am trying to get the count of vertical lines in an image.
Here is my code:
First, I am converting the image to grayscale:
+ (UIImage *)convertToGrayscale:(UIImage *)image {
    cv::Mat mat;
    UIImageToMat(image, mat);
    cv::Mat gray;
    cv::cvtColor(mat, gray, CV_RGB2GRAY);
    UIImage *grayscale = MatToUIImage(gray);
    return grayscale;
}
Then I am detecting edges so I can find the gray lines:
+ (UIImage *)detectEdgesInRGBImage:(UIImage *)image {
    cv::Mat mat;
    UIImageToMat(image, mat);
    // Prepare the image for findContours
    cv::threshold(mat, mat, 128, 255, CV_THRESH_BINARY);
    // Find the contours. Use the contourOutput Mat so the original image doesn't get overwritten
    std::vector<std::vector<cv::Point> > contours;
    cv::Mat contourOutput = mat.clone();
    cv::findContours( contourOutput, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE );
    NSLog(@"Count => %lu", contours.size());
    // For blur
    /*cv::GaussianBlur(mat, gray, cv::Size(11, 11), 0); */
    UIImage *grayscale = MatToUIImage(mat);
    return grayscale;
}
Both of these functions are written in Objective-C.
Here I am calling both functions from Swift:
override func viewDidLoad() {
    super.viewDidLoad()
    let img = UIImage(named: "imagenamed")
    let img1 = Wrapper.convert(toGrayscale: img)
    self.capturedImageView.image = Wrapper.detectEdges(inRGBImage: img1)
}
I have been working on this for some days and found some useful documents (reference links):
OpenCV - how to count objects in photo?
How to count number of lines (Hough Trasnform) in OpenCV
OPENCV Documents
https://docs.opencv.org/2.4/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?#findcontours
Basically, I understand that first we need to convert the image to black and white, and then, using cvtColor, threshold and findContours, we can find the colors or lines.
I am attaching the image whose vertical lines I want to count.
Original Image
Output Image that I am getting
I got a line count of 10.
I am not able to get an accurate count here.
Please guide me on this. Thank you!
Since you want to detect the number of vertical lines, there is a very simple approach I can suggest. You already got a clear output and I used this output in my code. Here are the steps before the code:
1. Preprocess the input image to get the lines clearly
2. Scan each row until you reach a pixel whose value is higher than 100 (the threshold value I chose)
3. Then increase the line counter for that row
4. Continue along the row until you reach a pixel whose value is lower than 100
5. Repeat from step 2 until the end of the row, and do this for every row of the image
6. At the end, take the most frequent element in the array of per-row line counts. This number will be the number of vertical lines. (A small sketch for this step is shown after the code below.)
Note: If the steps are difficult to understand, think of it this way:
"I am checking the first row. I find a pixel which is higher than
100; this is a line edge starting, so I increase the counter for this
row. I search along this row until I get a pixel smaller than 100, and then
search again for a pixel bigger than 100. When the row is finished, I assign the
line count for this row to a big array. I do this for the whole image. At the
end, since some lines look like two lines at the top and some
noise can occur, you should take the most repeated element in the big
array as the number of lines."
Here is the code part in C++:
#include <vector>
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    cv::Mat img = cv::imread("/ur/img/dir/img.jpg", cv::IMREAD_GRAYSCALE);
    std::vector<int> numberOfVerticalLinesForEachRow;

    // Crop a ROI: the top 200 rows, excluding the rightmost 10 columns
    cv::Rect r(0, 0, img.cols - 10, 200);
    img = img(r);

    bool blackCheck = 1;
    for (int i = 0; i < img.rows; i++)
    {
        int numberOfLines = 0;
        for (int j = 0; j < img.cols; j++)
        {
            // Rising edge: a bright pixel reached while in the "dark" state starts a new line
            if ((int)img.at<uchar>(cv::Point(j, i)) > 100 && blackCheck)
            {
                numberOfLines++;
                blackCheck = 0;
            }
            // Back to dark: ready to detect the next line
            if ((int)img.at<uchar>(cv::Point(j, i)) < 100)
                blackCheck = 1;
        }
        numberOfVerticalLinesForEachRow.push_back(numberOfLines);
    }

    // In this part you need a simple algorithm to check the most repeated element
    for (int k : numberOfVerticalLinesForEachRow)
        std::cout << k << std::endl;

    cv::namedWindow("WinWin", 0);
    cv::imshow("WinWin", img);
    cv::waitKey(0);

    return 0;
}
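The "simple algorithm to check the most repeated element" mentioned in the comment is not spelled out in the code above; a minimal sketch of it, assuming the numberOfVerticalLinesForEachRow vector built in the loop, could look like this:

#include <map>
#include <vector>

// Sketch: return the most frequent value in the per-row line counts.
int mostRepeatedElement(const std::vector<int>& counts)
{
    std::map<int, int> frequency;
    for (int c : counts)
        frequency[c]++;

    int bestValue = 0;
    int bestCount = 0;
    for (const auto& kv : frequency)
    {
        if (kv.second > bestCount)
        {
            bestCount = kv.second;
            bestValue = kv.first;
        }
    }
    return bestValue;
}

Calling mostRepeatedElement(numberOfVerticalLinesForEachRow) after the row loop would then give the estimated number of vertical lines.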
Here's another possible approach. It relies mainly on the cv::thinning function from the extended image processing module to reduce the lines to a width of 1 pixel. We can then crop a ROI from this image and count the number of transitions from 255 (white) to 0 (black). These are the steps:
Threshold the image using Otsu's method
Apply some morphology to clean up the binary image
Get the skeleton of the image
Crop a ROI from the center of the image
Count the number of jumps from 255 to 0
This is the code; be sure to include the extended image processing module (ximgproc) and also link it when compiling:
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/ximgproc.hpp> // The extended image processing module
// Read Image:
std::string imagePath = "D://opencvImages//";
cv::Mat inputImage = cv::imread( imagePath+"IN2Xh.png" );
// Convert BGR to Grayscale:
cv::cvtColor( inputImage, inputImage, cv::COLOR_BGR2GRAY );
// Get binary image via Otsu:
cv::threshold( inputImage, inputImage, 0, 255, cv::THRESH_OTSU );
The above snippet produces the following image:
Note that there's a little bit of noise due to the thresholding. Let's try to remove those isolated blobs of white pixels by applying some morphology, maybe an opening, which is an erosion followed by a dilation. The structuring elements and iterations, though, are not the same, and these were found by experimentation. I wanted to remove the majority of the isolated blobs without modifying the original image too much:
// Apply Morphology. Erosion + Dilation:
// Set rectangular structuring element of size 3 x 3:
cv::Mat SE = cv::getStructuringElement( cv::MORPH_RECT, cv::Size(3, 3) );
// Set the iterations:
int morphoIterations = 1;
cv::morphologyEx( inputImage, inputImage, cv::MORPH_ERODE, SE, cv::Point(-1,-1), morphoIterations);
// Set rectangular structuring element of size 5 x 5:
SE = cv::getStructuringElement( cv::MORPH_RECT, cv::Size(5, 5) );
// Set the iterations:
morphoIterations = 2;
cv::morphologyEx( inputImage, inputImage, cv::MORPH_DILATE, SE, cv::Point(-1,-1), morphoIterations);
This combination of structuring elements and iterations yields the following filtered image:
It's looking alright. Now comes the main idea of the algorithm. If we compute the skeleton of this image, we "normalize" all the lines to a width of 1 pixel, which is very handy, because we can then reduce the image to a single row and count the number of jumps. Since the lines are "normalized", we get rid of possible overlaps between lines. Now, skeletonized images sometimes produce artifacts near the borders of the image. These artifacts resemble thickened anchors at the first and last rows of the image. To prevent these artifacts we can extend the borders prior to computing the skeleton:
// Extend borders to avoid skeleton artifacts, extend 5 pixels in all directions:
cv::copyMakeBorder( inputImage, inputImage, 5, 5, 5, 5, cv::BORDER_CONSTANT, 0 );
// Get the skeleton:
cv::Mat imageSkelton;
cv::ximgproc::thinning( inputImage, imageSkelton );
This is the skeleton obtained:
Nice. Before we count jumps, though, we must observe that the lines are skewed. If we reduce this image directly to one row, some overlap could indeed happen between two lines that are too skewed. To prevent this, I crop a middle section of the skeleton image and count transitions there. Let's crop the image:
// Crop middle ROI:
cv::Rect linesRoi;
linesRoi.x = 0;
linesRoi.y = 0.5 * imageSkelton.rows;
linesRoi.width = imageSkelton.cols;
linesRoi.height = 1;
cv::Mat imageROI = imageSkelton( linesRoi );
This would be the new ROI, which is just the middle row of the skeleton image:
Let me prepare a BGR copy of this just to draw some results:
// BGR version of the Grayscale ROI:
cv::Mat colorROI;
cv::cvtColor( imageROI, colorROI, cv::COLOR_GRAY2BGR );
Ok, let's loop through the image and count the transitions between 255 and 0. That happens when we look at the value of the current pixel and compare it with the value obtained one iteration earlier: the current pixel must be 0 and the past pixel 255. There's more than one way to loop through a cv::Mat in C++. I prefer to use cv::MatIterator_ iterators and pointer arithmetic:
// Set the loop variables (the ROI is single-channel, so iterate with uchar):
cv::MatIterator_<uchar> it, end;
uchar pastPixel = 0;
int jumpsCounter = 0;
int i = 0;

// Loop thru image ROI and count 255-0 jumps:
for (it = imageROI.begin<uchar>(), end = imageROI.end<uchar>(); it != end; ++it) {
    // Get current pixel
    uchar currentPixel = *it;
    // Compare it with past pixel:
    if ( (currentPixel == 0) && (pastPixel == 255) ){
        // We have a jump:
        jumpsCounter++;
        // Draw the point on the BGR version of the image:
        cv::line( colorROI, cv::Point(i, 0), cv::Point(i, 0), cv::Scalar(0, 0, 255), 1 );
    }
    // current pixel is now past pixel:
    pastPixel = currentPixel;
    i++;
}
// Show image and print number of jumps found:
cv::namedWindow( "Jumps Found", cv::WINDOW_NORMAL );
cv::imshow( "Jumps Found", colorROI );
cv::waitKey( 0 );
std::cout<<"Jumps Found: "<<jumpsCounter<<std::endl;
The points where the jumps were found are drawn in red, and the number of total jumps printed is:
Jumps Found: 9

Select part of a cv::Mat based on non-zero pixels of another Mat?

I am trying to update part of a Mat based on another Mat. For example, I want to select a part of img that is not zero in mask and add a constant value to it. When I try this:
Mat mask = imread("some grayscale image with a white area in a black background", IMREAD_GRAYSCALE);
Mat img = Mat::zeros(mask.rows, mask.cols, CV_8UC1);
Mat bnry, locations;
threshold(mask, bnry, 100, 255, THRESH_BINARY);
findNonZero(bnry, locations);
img(locations) += 5;
I get this error:
Error: Assertion failed ((int)ranges.size() == d)
img and mask have the same size.
How can I select an area of an image based on another image (mask)?
Many OpenCV functions support a mask argument by default. In other words, you don't need to find the non-zero locations and do the addition based on them; you can simply use the cv::add function, which accepts a mask as an argument:
cv::add(img, 10, img, mask); // 10 is an arbitrary constant value
And about your code:
img(locations) += 5;
As far as I know, OpenCV does not have an overloaded operator+ like this (indexing a Mat with a list of point locations) that you could use.
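For completeness, a minimal self-contained sketch of the masked addition (the file name and constant are placeholders, not from your code):

#include <opencv2/opencv.hpp>

int main()
{
    // "mask.png" stands in for a grayscale image with a white area on a black background.
    cv::Mat mask = cv::imread("mask.png", cv::IMREAD_GRAYSCALE);
    cv::Mat img = cv::Mat::zeros(mask.rows, mask.cols, CV_8UC1);

    // Binarize the mask: non-zero pixels mark the region to update.
    cv::Mat bnry;
    cv::threshold(mask, bnry, 100, 255, cv::THRESH_BINARY);

    // Add a constant only where the mask is non-zero; no findNonZero needed.
    cv::add(img, cv::Scalar(5), img, bnry);
    return 0;
}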

How to display PGM image using OpenCV

I'm trying to load and display a .PGM image using OpenCV(2.4.0) for C++.
void open(char* location, int flag, int windowFlag)
{
    Mat image = imread(location, flag);
    namedWindow("Image window", windowFlag);
    imshow("Image window", image);
    waitKey(0);
}
I'm calling open like this:
open("./img_00245_c1.pgm", IMREAD_UNCHANGED, CV_WINDOW_AUTOSIZE);
The problem is that the image shown when the window is opened is darker than if I'm opening the file with IrfanView.
Also if I'm trying to write this image to another file like this:
Mat imgWrite;
imgWrite = image;
imwrite("newImage.pgm", imgWrite)
I get a different file content than the original, and IrfanView displays it the same way my function displays it with imshow.
Is there a different flag in imread for .PGM files such that I can get the original file to be displayed and saved?
EDIT: Image pgm file
EDIT 2: I noticed that IrfanView normalizes the image to a maximum pixel value of 255.
In order to see the image clearly using OpenCV, I should also normalize the image when loading it into a Mat. Is this possible directly with OpenCV functions, without iterating through pixels and modifying their values?
The problem is not in the way data are loaded, but in the way they are displayed.
Your image is a CV_16UC1, and both imshow and imwrite normalize the values from the original range [0, 65535] to the range [0, 255] to fit the range of the type CV_8U.
Since your PGM image has max_value of 4096:
P2
1176 640 // width height
4096 // max_value
it should be normalized from range [0, 4096] instead of [0, 65535].
You can do this with:
Mat img = imread("path_to_image", IMREAD_UNCHANGED);
img.convertTo(img, CV_8U, 255.0 / 4096.0);
imshow("Image", img);
waitKey();
Please note that the range of values in your image doesn't actually correspond to [0, 4096], but:
double minv, maxv;
minMaxLoc(img, &minv, &maxv);
// minv = 198
// maxv = 2414
So the straightforward normalization in [0,255] like:
normalize(img, img, 0, 255, NORM_MINMAX);
img.convertTo(img, CV_8U);
won't work, as it will produce an image brighter than it should be.
This means that to properly show your image you need to know the max_value (here 4096). If it changes every time, you can retrieve it by parsing the .pgm file (a small parsing sketch is shown below).
Again, it's just a problem with visualization. Data are correct.
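If the max_value has to be recovered programmatically, a minimal sketch for reading it from the PGM header could look like the following (readPgmMaxValue is a hypothetical helper, and error handling is kept to a minimum):

#include <fstream>
#include <string>

// Sketch: read the max_value field from a PGM header (P2 or P5).
// The header layout is: magic number, width, height, max_value,
// possibly with '#' comment lines in between.
int readPgmMaxValue(const std::string& path)
{
    std::ifstream file(path, std::ios::binary);
    std::string token;
    int fields[3] = { 0, 0, 0 }; // width, height, max_value
    int found = 0;

    file >> token; // magic number, e.g. "P2" or "P5"
    while (found < 3 && file >> token)
    {
        if (token[0] == '#') // comment: skip the rest of the line
        {
            std::getline(file, token);
            continue;
        }
        fields[found++] = std::stoi(token);
    }
    return fields[2];
}

You could then replace the hard-coded 4096 with something like img.convertTo(img, CV_8U, 255.0 / readPgmMaxValue("path_to_image"));.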

OpenCV equivalent for thresholding in MATLAB

I want to implement this MATLAB statement in OpenCV C++:
bwImgLabeled(bwImgLabeled > 0) = 1;
As far as I understand from the OpenCV docs, http://docs.opencv.org/modules/imgproc/doc/miscellaneous_transformations.html?highlight=threshold#threshold,
I need to do:
cv::threshold(dst, dst, 0, 1, CV_THRESH_BINARY);
Am I correct here?
Yes, you are correct. What the MATLAB code does is search for any pixels that are non-zero and set them to 1.
Recall the definition of cv::threshold:
double threshold(InputArray src, OutputArray dst,
double thresh, double maxval, int type)
So the first two inputs are the source and destination images, where in your case you want to take the destination image and mutate it to contain the final image. thresh = 0 and maxval = 1, with type=CV_THRESH_BINARY. Recall that when using CV_THRESH_BINARY, the following relationship holds:
dst(x,y) = maxval if src(x,y) > thresh, and 0 otherwise (source: opencv.org)
Therefore, if you specify thresh to be 0 and maxval to be 1, you are effectively doing what the MATLAB code is doing. Any pixels that are greater than thresh=0, which are essentially the non-zero pixels, have their intensities set to 1. I'm assuming you want the input and output images to be floating-point, so make sure the image is of a compatible type, such as CV_32FC1 or CV_32FC3, and so on.
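Putting it together, a minimal sketch of the equivalent call might look like this (the variable name bwImgLabeled is taken from your MATLAB statement, and the conversion to CV_32FC1 follows the floating-point note above; it's one possible setup, not the only one):

#include <opencv2/opencv.hpp>

// Sketch: replicate bwImgLabeled(bwImgLabeled > 0) = 1;
void binarizeLabels(cv::Mat& bwImgLabeled)
{
    // Make sure the image is floating point so that maxval = 1 is meaningful.
    bwImgLabeled.convertTo(bwImgLabeled, CV_32FC1);
    // Every pixel > 0 becomes 1, everything else becomes 0.
    cv::threshold(bwImgLabeled, bwImgLabeled, 0, 1, cv::THRESH_BINARY);
}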

Count the black pixels using OpenCV

I'm working in OpenCV 2.4.0 and C++.
I'm trying to do an exercise that says I should load an RGB image, convert it to grayscale and save the new image. The next step is to turn the grayscale image into a binary image and store that image. This much I have working.
My problem is in counting the number of black pixels in the binary image.
So far I've searched the web and looked in the book. The method I've found that seems the most useful is:
int TotalNumberOfPixels = width * height;
int ZeroPixels = TotalNumberOfPixels - cvCountNonZero(cv_image);
But I don't know how to store these values and use them with cvCountNonZero(). When I pass the image I want to count to this function, I get an error.
int main()
{
    Mat rgbImage, grayImage, resizedImage, bwImage, result;
    rgbImage = imread("C:/MeBGR.jpg");
    cvtColor(rgbImage, grayImage, CV_RGB2GRAY);
    resize(grayImage, resizedImage, Size(grayImage.cols/3, grayImage.rows/4),
           0, 0, INTER_LINEAR);
    imwrite("C:/Jakob/Gray_Image.jpg", resizedImage);
    bwImage = imread("C:/Jakob/Gray_Image.jpg");
    threshold(bwImage, bwImage, 120, 255, CV_THRESH_BINARY);
    imwrite("C:/Jakob/Binary_Image.jpg", bwImage);
    imshow("Original", rgbImage);
    imshow("Resized", resizedImage);
    imshow("Resized Binary", bwImage);
    waitKey(0);
    return 0;
}
So far this code is very basic but it does what it's supposed to for now. Some adjustments will be made later to clean it up :)
You can use countNonZero to count the number of pixels that are not black (>0) in an image. If you want to count the number of black (==0) pixels, you need to subtract the number of pixels that are not black from the number of pixels in the image (the image width * height).
This code should work:
int TotalNumberOfPixels = bwImage.rows * bwImage.cols;
int ZeroPixels = TotalNumberOfPixels - countNonZero(bwImage);
cout<<"The number of pixels that are zero is "<<ZeroPixels<<endl;