How to count all non-zero pixels within each polygon area efficiently? - C++

I want to count all the white pixels within every polygon area efficiently.
Given this pre-processing:
// some code for reading the gray image
// cv::Mat gray = cv::imread("gray.jpg", cv::IMREAD_GRAYSCALE);
// given polygons
// vector< vector<cv::Point> > polygons;
cv::Mat cropped;
cv::Mat mask = cv::Mat::zeros(gray.size(), CV_8UC1);
cv::fillPoly(mask, polygons, cv::Scalar(255));
cv::bitwise_and(gray, gray, cropped, mask);
cv::Mat binary;
cv::threshold(cropped, binary, 20, 255, cv::THRESH_BINARY);
So far we have an image with multiple polygon areas (say we have 3 areas) containing white (value 255) pixels. Then, after some operations, we expect to get a vector like:
// some efficient operations
// ...
vector<int> pixelNums;
The size of pixelNums should be the same as the number of polygons, which is 3 here. And if we print them, we may get output like this (the values basically depend on the pre-processing):
index: 0; value: 120
index: 1; value: 1389
index: 2; value: 0
Here is my thought: count the pixels within every polygon area with the help of cv::countNonZero, called inside a loop. But I don't think calling it in a loop is an efficient way, is it?
vector<int> pixelNums;
for(auto polygon : polygons)
{
    vector< vector<cv::Point> > temp_polygons;
    temp_polygons.push_back(polygon);

    cv::Mat cropped;
    cv::Mat mask = cv::Mat::zeros(gray.size(), CV_8UC1);
    cv::fillPoly(mask, temp_polygons, cv::Scalar(255));
    cv::bitwise_and(gray, gray, cropped, mask);

    cv::Mat binary;
    cv::threshold(cropped, binary, 20, 255, cv::THRESH_BINARY);
    pixelNums.push_back(cv::countNonZero(binary));
}
If you have a better way, please kindly answer this post. By "better" I mean consuming as little time as possible, in a CPU-only environment.

There are some minor improvements that can be done, but all of them combined should provide a decent speedup.
Compute the threshold only once
Make most operations on smaller images, using the bounding box of your polygon to get the region of interest
Avoid unneeded copies in the for loop, use const auto&
Example code:
#include <vector>
#include <opencv2/opencv.hpp>

int main()
{
    // Your image
    cv::Mat1b gray = cv::imread("path/to/image", cv::IMREAD_GRAYSCALE);

    // Your polygons
    std::vector<std::vector<cv::Point>> polygons
    {
        { {15,120}, {45,200}, {160,160}, {140, 60} },
        { {10,10}, {15,30}, {50,25}, {40, 15} },
        // etc...
    };

    // Compute the threshold just once
    cv::Mat1b thresholded = gray > 20;

    std::vector<int> pixelNums;
    for (const auto& polygon : polygons)
    {
        // Get bbox of polygon
        cv::Rect bbox = cv::boundingRect(polygon);

        // Make a new (small) mask
        cv::Mat1b mask(bbox.height, bbox.width, uchar(0));
        cv::fillPoly(mask, std::vector<std::vector<cv::Point>>{polygon}, cv::Scalar(255), 8, 0, -bbox.tl());

        // Get crop
        cv::Mat1b cropped = thresholded(bbox) & mask;

        // Compute the number of white pixels only on the crop
        pixelNums.push_back(cv::countNonZero(cropped));
    }

    return 0;
}
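Since "better" here explicitly means wall-clock time on the CPU, it's worth measuring both versions on your own images. A minimal, self-contained timing sketch using OpenCV's tick counter (the random image is only a synthetic stand-in for your data):

#include <iostream>
#include <opencv2/opencv.hpp>

int main()
{
    // Synthetic stand-in for your gray image, just to have something to time
    cv::Mat1b gray(1000, 1000);
    cv::randu(gray, cv::Scalar(0), cv::Scalar(256));

    int64 t0 = cv::getTickCount();

    // ... put the version to measure here, e.g. the threshold-once
    // + bounding-box loop from the answer above ...
    cv::Mat1b thresholded = gray > 20;

    double seconds = (cv::getTickCount() - t0) / cv::getTickFrequency();
    std::cout << "elapsed: " << seconds << " s" << std::endl;
    return 0;
}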

Related

Closing gaps in objects of an OpenCV mask

I've computed the following mask of an image containing corn:
The problem is that now I have black parts inside the seeds. Is there a way to get rid of them while making sure that the seeds remain disconnected from each other?
My ultimate goal is to count the seeds in the picture using the watershed algorithm. I observed that when the seeds touch each other it introduces an imprecision into the algorithm, so I tried to introduce gaps between seeds by using Canny and subtracting the borders from the mask, which gives the result above.
My code so far:
auto img = GetAreaOfInterest(orig, bt); // <- just a Gauss blur with OTSU thresh
{
    cv::Mat gray, edges;
    cv::cvtColor(orig, gray, cv::COLOR_BGR2GRAY);
    // tried bilateralFilter here, but things only got worse
    cv::Canny(gray, edges, 40, 160);
    cv::dilate(edges, edges,
               cv::getStructuringElement(cv::MORPH_ELLIPSE, {5, 5}),
               {-1, -1}, 1);
    img.setTo(0, edges != 0);
}
cv::Mat background, foreground, unknown, markers;
cv::dilate(img, background,
           cv::getStructuringElement(cv::MORPH_ELLIPSE, {3,3}));

/* Get area for which we are sure it's the foreground */
cv::distanceTransform(img, img, cv::DIST_L1, 3, CV_8U);
{
    double max;
    cv::minMaxLoc(img, nullptr, &max);
    cv::threshold(img, img, 0.46 * max, 255, cv::THRESH_BINARY);
}
img.convertTo(foreground, CV_8UC3);
show_resized("distance", foreground, 0.5);

/* Mark unknown areas */
cv::subtract(background, foreground, unknown);

/* Set up markers */
cv::connectedComponents(foreground, markers);
markers = markers + 1;
markers.setTo(0, unknown == 255);

/* Expand markers */
cv::watershed(orig, markers);
// Count objects
original image:
the most problematic image:
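One common way to remove the black holes inside the seed blobs without reconnecting neighbouring seeds is to refill each external contour, instead of applying a large morphological closing (which could merge touching seeds). A minimal sketch of that idea, not from the original post (the path is a placeholder):

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    // Binary mask with holes inside the blobs (path is a placeholder)
    cv::Mat1b mask = cv::imread("path_to_mask", cv::IMREAD_GRAYSCALE);

    // Take only the external contour of each blob...
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    // ...and redraw it filled: holes disappear, separate blobs stay separate
    cv::Mat1b filled(mask.rows, mask.cols, uchar(0));
    cv::drawContours(filled, contours, -1, cv::Scalar(255), cv::FILLED);

    cv::imshow("filled", filled);
    cv::waitKey();
    return 0;
}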

How to detect the circles in the image using OpenCV 3 in C++

How can I detect the circles and count them in this image? I'm new to OpenCV and C++. Can anyone help with this issue? I tried Hough circles, but it didn't work.
The skeletonized binary image is as follows.
Starting from this image (I removed the border):
You can follow this approach:
1) Use findContours to get the contours.
2) Keep only internal contours. You can do that by checking the sign of the area returned by contourArea(..., true). You'll get the 2 internal contours:
3) Now that you have the two contours, you can find a circle with minEnclosingCircle (in blue), or fit an ellipse with fitEllipse (in red):
Here is the full code for reference:
#include <opencv2/opencv.hpp>
#include <vector>

using namespace std;
using namespace cv;

int main()
{
    Mat1b img = imread("path_to_image", IMREAD_GRAYSCALE);

    // Get contours
    vector<vector<Point>> contours;
    findContours(img, contours, RETR_TREE, CHAIN_APPROX_NONE);

    // Create output image
    Mat3b out;
    cvtColor(img, out, COLOR_GRAY2BGR);
    Mat3b outContours = out.clone();

    // Get internal contours
    vector<vector<Point>> internalContours;
    for (size_t i = 0; i < contours.size(); ++i) {
        // Find orientation: CW or CCW
        double area = contourArea(contours[i], true);
        if (area >= 0) {
            // Internal contour
            internalContours.push_back(contours[i]);
            // Draw with different color
            drawContours(outContours, contours, int(i), Scalar(rand() & 255, rand() & 255, rand() & 255));
        }
    }

    // Get circles
    for (const auto& cnt : internalContours) {
        Point2f center;
        float radius;
        minEnclosingCircle(cnt, center, radius);
        // Draw circle in blue
        circle(out, center, int(radius), Scalar(255, 0, 0));
    }

    // Get ellipses
    for (const auto& cnt : internalContours) {
        RotatedRect rect = fitEllipse(cnt);
        // Draw ellipse in red
        ellipse(out, rect, Scalar(0, 0, 255), 2);
    }

    imshow("Out", out);
    waitKey();
    return 0;
}
First of all you have to find all the contours in your image (see the function cv::findContours).
Then you have to analyse these contours (check them for accordance with your requirements).
P.S. The figure in the picture is definitely not a circle, so I can't say exactly how you have to check the received contours.
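As one example of such a check (this is an addition, not part of the original answer): a common heuristic is the circularity score 4*pi*area/perimeter^2, which is 1 for a perfect circle and smaller for elongated or irregular shapes. A minimal sketch; the 0.8 threshold is a guess to tune on your data:

#include <iostream>
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat1b img = cv::imread("path_to_image", cv::IMREAD_GRAYSCALE);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(img, contours, cv::RETR_LIST, cv::CHAIN_APPROX_NONE);

    int circleCount = 0;
    for (const auto& cnt : contours)
    {
        double area = cv::contourArea(cnt);
        double perimeter = cv::arcLength(cnt, true);
        if (perimeter <= 0) continue;

        // 1.0 for a perfect circle, smaller for elongated/irregular shapes
        double circularity = 4.0 * CV_PI * area / (perimeter * perimeter);
        if (circularity > 0.8) // threshold is a guess; tune for your data
            ++circleCount;
    }

    std::cout << "circle-like contours: " << circleCount << std::endl;
    return 0;
}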

Watershed boundaries closely surround one area

I am trying to compute an average of two blobs in OpenCV. To achieve that I was planning to use the watershed algorithm on the image, preprocessed in the following way:
cv::Mat common, diff, processed, result;
cv::bitwise_and(blob1, blob2, common); // calc common area of the two blobs
cv::absdiff(blob1, blob2, diff);       // calc area where they differ
// idea here is that the highest intensity will be in the middle of the differing area
cv::distanceTransform(diff, processed, cv::DIST_L2, 3);
cv::normalize(processed, processed, 0, 255, cv::NORM_MINMAX, CV_8U); // convert floats to bytes

cv::Mat watershedMarkers, watershedOutline;
common.convertTo(watershedMarkers, CV_32S, 1. / 255, 1); // change background to label 1, common area to label 2
watershedMarkers.setTo(0, processed); // set 0 (unknown) for area where blobs differ
cv::cvtColor(processed, processed, cv::COLOR_GRAY2RGB); // watershed wants 3 channels
cv::watershed(processed, watershedMarkers);
cv::rectangle(watershedMarkers, cv::Rect(0, 0, watershedMarkers.cols, watershedMarkers.rows), 1); // remove the outline

// draw the boundary in red (for debugging)
watershedMarkers.convertTo(watershedOutline, CV_16S);
cv::threshold(watershedOutline, watershedOutline, 0, 255, cv::THRESH_BINARY_INV);
watershedOutline.convertTo(watershedOutline, CV_8U);
processed.setTo(cv::Scalar(CV_RGB(255, 0, 0)), watershedOutline);

// convert computed labels back to mask (blob), less relevant but shows my ultimate goal
watershedMarkers.convertTo(watershedMarkers, CV_8U);
cv::threshold(watershedMarkers, watershedMarkers, 1, 0, cv::THRESH_TOZERO_INV);
cv::bitwise_not(watershedMarkers * 255, result);
My problem with the results is that the calculated boundary is (almost) always adjacent to the area common to both blobs. Here are the pictures:
Input markers (black = 0, gray = 1, white = 2)
Watershed input image (distance transform result) with resulting outline drawn in red:
I would expect the boundary to go along the maximum-intensity region of the input (that is, along the middle of the differing area). Instead (as you can see) it mostly goes around the area marked as 2, partly shifted to touch the background (marked as 1). Am I doing something wrong here, or did I misunderstand how watershed works?
Starting from this image:
You can get the correct result simply by passing an all-zero image to the watershed algorithm. The "basin" is then equally filled with "water" starting from each "side" (just remember to remove the outer border, which watershed sets to -1 by default):
Code:
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main()
{
    Mat1b img = imread("path_to_image", IMREAD_GRAYSCALE);

    Mat1i markers(img.rows, img.cols, int(0));
    markers.setTo(1, img == 128);
    markers.setTo(2, img == 255);

    Mat3b image(markers.rows, markers.cols, Vec3b(0,0,0));

    markers.convertTo(markers, CV_32S);
    watershed(image, markers);

    Mat3b result;
    cvtColor(img, result, COLOR_GRAY2BGR);
    result.setTo(Scalar(0, 0, 255), markers == -1);

    imshow("Result", result);
    waitKey();
    return 0;
}

Find 4 specific corner pixels and use them with warp perspective

I'm playing around with OpenCV and I want to know how you would build a simple version of a perspective transform program. I have an image of a parallelogram, and each of its corners consists of a pixel with a specific color that appears nowhere else in the image. I want to iterate through all pixels and find these 4 pixels. Then I want to use them as corner points in a new image in order to warp the perspective of the original image. In the end I should have a zoomed-in square.
Point2f src[4]; // Is this the right datatype to use here?
int lineNumber = 0;

// iterating through the pixels
for(int y = 0; y < image.rows; y++)
{
    for(int x = 0; x < image.cols; x++)
    {
        Vec3b colour = image.at<Vec3b>(Point(x, y));
        if(colour.val[1] == 245 && colour.val[2] == 111 && colour.val[0] == 10) {
            src[lineNumber] = Point2f(x, y); // is this how to store the pixel?
            lineNumber++;
        }
    }
}
/* I also need to get the dst points for getPerspectiveTransform
   and afterwards warpPerspective, how do I get those? Take the other
   points, check the biggest distance somehow and use it as the maxlength to calculate
   the rest? */
How should you use OpenCV in order to solve this problem? (I just guess I'm not doing it the "normal and clever" way.) Also, how do I do the next step, which would be using more than one pixel as a "marker" and calculating the average point in the middle of multiple points? Is there anything more efficient than running through each pixel?
Something like this basically:
Starting from an image with colored circles as markers, like:
Note that this is a png image, i.e. with a lossless compression which preserves the actual colors. If you use a lossy compression like jpeg, the colors will change a little and you cannot segment them with an exact match, as done here.
You need to find the center of each marker.
Segment the (known) color, using inRange
Find all connected components with the given color, with findContours
Find the largest blob, here done with max_element with a lambda function, and distance. You can use a for loop for this.
Find the center of mass of the largest blob, here done with moments. You can also use a loop here, if you prefer.
Add the center to your source vertices.
Your destination vertices are just the four corners of the destination image.
You can then use getPerspectiveTransform and warpPerspective to find and apply the warping.
The resulting image is:
Code:
#include <opencv2/opencv.hpp>
#include <vector>
#include <algorithm>

using namespace std;
using namespace cv;

int main()
{
    // Load image
    Mat3b img = imread("path_to_image");

    // Create a black output image
    Mat3b out(300, 300, Vec3b(0, 0, 0));

    // The color of your markers, in order
    vector<Scalar> colors{ Scalar(0, 0, 255), Scalar(0, 255, 0), Scalar(255, 0, 0), Scalar(0, 255, 255) }; // red, green, blue, yellow

    vector<Point2f> src_vertices(colors.size());
    vector<Point2f> dst_vertices = { Point2f(0, 0), Point2f(0, out.rows - 1), Point2f(out.cols - 1, out.rows - 1), Point2f(out.cols - 1, 0) };

    for (size_t idx_color = 0; idx_color < colors.size(); ++idx_color)
    {
        // Detect color
        Mat1b mask;
        inRange(img, colors[idx_color], colors[idx_color], mask);

        // Find connected components
        vector<vector<Point>> contours;
        findContours(mask, contours, RETR_EXTERNAL, CHAIN_APPROX_NONE);

        // Find largest
        int idx_largest = distance(contours.begin(), max_element(contours.begin(), contours.end(), [](const vector<Point>& lhs, const vector<Point>& rhs) {
            return lhs.size() < rhs.size();
        }));

        // Find centroid of largest component
        Moments m = moments(contours[idx_largest]);
        Point2f center(m.m10 / m.m00, m.m01 / m.m00);

        // Found marker center, add to source vertices
        src_vertices[idx_color] = center;
    }

    // Find transformation
    Mat M = getPerspectiveTransform(src_vertices, dst_vertices);

    // Apply transformation
    warpPerspective(img, out, M, out.size());

    imshow("Image", img);
    imshow("Warped", out);
    waitKey();
    return 0;
}

How to get contours between objects with OpenCV watershed?

I use OpenCV Watershed with my image:
#include "opencv2/opencv.hpp"
#include <string>
using namespace cv;
using namespace std;
class WatershedSegmenter{
private:
cv::Mat markers;
public:
void setMarkers(cv::Mat& markerImage)
{
markerImage.convertTo(markers, CV_32S);
}
cv::Mat process(cv::Mat &image)
{
cv::watershed(image, markers);
markers.convertTo(markers,CV_8U);
return markers;
}
};
int main(int argc, char* argv[])
{
cv::Mat image = cv::imread("d:\\projekty\\OpenCV\\trainData\\base01.jpg"); //http://i.imgur.com/sEWFHfY.jpg
cv::Mat blank(image.size(),CV_8U,cv::Scalar(0xFF));
cv::Mat dest;
imshow("originalimage", image);
// Create markers image
cv::Mat markers(image.size(),CV_8U,cv::Scalar(-1));
//Rect(topleftcornerX, topleftcornerY, width, height);
//top rectangle
markers(Rect(0,0,image.cols, 5)) = Scalar::all(1);
//bottom rectangle
markers(Rect(0,image.rows-5,image.cols, 5)) = Scalar::all(1);
//left rectangle
markers(Rect(0,0,5,image.rows)) = Scalar::all(1);
//right rectangle
markers(Rect(image.cols-5,0,5,image.rows)) = Scalar::all(1);
//centre rectangle
int centreW = image.cols/4;
int centreH = image.rows/4;
markers(Rect((image.cols/2)-(centreW/2),(image.rows/2)-(centreH/2), centreW, centreH)) = Scalar::all(2);
markers.convertTo(markers,CV_BGR2GRAY);
imshow("markers", markers);
//Create watershed segmentation object
WatershedSegmenter segmenter;
segmenter.setMarkers(markers);
cv::Mat wshedMask = segmenter.process(image);
cv::Mat mask;
convertScaleAbs(wshedMask, mask, 1, 0);
double thresh = threshold(mask, mask, 1, 255, THRESH_BINARY);
bitwise_and(image, image, dest, mask);
dest.convertTo(dest,CV_8U);
imshow("final_result", dest);
cv::waitKey(0);
return 0;
}
But this gives me only an individual mask. I also tried to create markers as two points; the result was only one mask. Is it possible with OpenCV to separate cells (objects) with contours as in the example http://biodynamics.ucsd.edu/ir/ ?
If not, is it possible to create as a result a mask with values 1 for the first object, 2 for the second, ..., 99 for the 99th?
After performing
cv::watershed(image, markers);
the markers image will be -1 at the boundaries of the regions, 1 in the region corresponding to the seed that was labelled 1, 2 in the region corresponding to the seed that was labelled 2, and so on. So you can do something like this:
cv::Mat region1 = markers==1;
I use the following approach for extracting object contours after watershed segmentation. The watershed output is a single markers image containing a segment code for each pixel. I create a binary mask image for each single object segment from the markers image; that can be done in one iteration over all pixels of the markers image. For the core for loop, see the OpenCV example https://github.com/Itseez/opencv/blob/master/samples/cpp/watershed.cpp. I store all the object masks in a vector<Mat>. Then I run findContours on every such mask -> the contour of each object. See http://docs.opencv.org/doc/tutorials/imgproc/shapedescriptors/find_contours/find_contours.html. You just don't need to use the Canny edge detector, as the mask images are already binary. A minimal sketch of that per-label extraction follows below.
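A sketch of the described approach, under the assumption that image (CV_8UC3) and a seeded markers matrix (CV_32S) already exist, and that labels 1..numLabels were used for the seeds (the function name and parameters are mine, not from the original answer):

#include <opencv2/opencv.hpp>
#include <vector>

// Build a binary mask per watershed label and extract each object's contours.
std::vector<std::vector<cv::Point>> contoursPerObject(const cv::Mat& image, cv::Mat& markers, int numLabels)
{
    cv::watershed(image, markers);

    std::vector<std::vector<cv::Point>> objectContours;
    for (int label = 1; label <= numLabels; ++label)
    {
        // 255 where the pixel belongs to this segment, 0 elsewhere
        cv::Mat1b mask = (markers == label);

        // The mask is already binary, so no Canny is needed before findContours
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        for (auto& c : contours)
            objectContours.push_back(std::move(c));
    }
    return objectContours;
}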