When you retrieve contours from an image, you should get 2 contours per blob - one inner and one outer. Consider the circle below - since the circle is a line with a pixel width larger than one, you should be able to find two contours in the image - one from the inner part of the circle and one from the outer part.
Using OpenCV, I want to retrieve the INNER contours. However, when I use findContours(), I only seem to be getting the outer contours. How would I retrieve the inner contours of a blob using OpenCV?
I am using the C++ API, not the C API, so please only suggest functions from the C++ API (i.e. findContours() rather than cvFindContours()).
Thanks.
I ran this code on your image and it returned an inner and outer contour.
#include <iostream>
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
int main(int argc, const char * argv[]) {
    cv::Mat image = cv::imread("../../so8449378.jpg");
    if (!image.data) {
        std::cout << "Image file not found\n";
        return 1;
    }

    //Prepare the image for findContours
    cv::cvtColor(image, image, CV_BGR2GRAY);
    cv::threshold(image, image, 128, 255, CV_THRESH_BINARY);

    //Find the contours. Use the contourOutput Mat so the original image doesn't get overwritten
    std::vector<std::vector<cv::Point> > contours;
    cv::Mat contourOutput = image.clone();
    cv::findContours(contourOutput, contours, CV_RETR_LIST, CV_CHAIN_APPROX_NONE);

    //Draw the contours
    cv::Mat contourImage(image.size(), CV_8UC3, cv::Scalar(0,0,0));
    cv::Scalar colors[3];
    colors[0] = cv::Scalar(255, 0, 0);
    colors[1] = cv::Scalar(0, 255, 0);
    colors[2] = cv::Scalar(0, 0, 255);
    for (size_t idx = 0; idx < contours.size(); idx++) {
        cv::drawContours(contourImage, contours, idx, colors[idx % 3]);
    }

    cv::imshow("Input Image", image);
    cv::moveWindow("Input Image", 0, 0);
    cv::imshow("Contours", contourImage);
    cv::moveWindow("Contours", 200, 0);
    cv::waitKey(0);

    return 0;
}
Here are the contours it found:
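If you specifically need to tell inner contours from outer ones rather than just draw them all, one option (not used in the code above) is to ask findContours for the hierarchy and use CV_RETR_CCOMP, where hole contours get a parent entry. A minimal sketch, assuming the same thresholded image as above (the variable names are illustrative):

// Keep only the inner (hole) contours via the hierarchy.
// With CV_RETR_CCOMP, hierarchy[i][3] >= 0 means contour i is a hole
// inside another contour.
std::vector<std::vector<cv::Point> > allContours;
std::vector<cv::Vec4i> hierarchy;
cv::Mat hierarchyInput = image.clone();
cv::findContours(hierarchyInput, allContours, hierarchy,
                 CV_RETR_CCOMP, CV_CHAIN_APPROX_NONE);

std::vector<std::vector<cv::Point> > innerContours;
for (size_t i = 0; i < allContours.size(); i++) {
    if (hierarchy[i][3] >= 0) {   // has a parent, so it is an inner contour
        innerContours.push_back(allContours[i]);
    }
}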
I think what Farhad is asking is to crop the contour from the original image.
To do that you will need to find the contour as explained above, then use a mask to get the inside from the original, and then crop the result into an image with the same size as the contour.
findContours stores each contour in a separate vector. In the code above all contours are drawn; to show only the inner one, draw just that contour, using idx to select which contour is drawn.
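A minimal sketch of the mask-and-crop step described above, assuming image is the original image, contours comes from findContours, and contourIdx is the index of the contour you want (the names are illustrative):

// Build a filled mask from the chosen contour, copy the original
// pixels through it, then crop to the contour's bounding box.
cv::Mat mask = cv::Mat::zeros(image.size(), CV_8UC1);
cv::drawContours(mask, contours, contourIdx, cv::Scalar(255), CV_FILLED);

cv::Mat masked;
image.copyTo(masked, mask);                    // keep only the pixels inside the contour

cv::Rect box = cv::boundingRect(contours[contourIdx]);
cv::Mat cropped = masked(box).clone();         // result is the same size as the contour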
How can I detect the circles and count them in this image? I'm new to OpenCV and C++. Can anyone help with this? I tried the Hough circle transform, but it didn't work.
The skeletonized binary image is as follows.
Starting from this image (I removed the border):
You can follow this approach:
1) Use findContour to get the contours.
2) Keep only the internal contours. You can do that by checking the sign of the area returned by contourArea(..., true). You'll get the 2 internal contours:
3) Now that you have the two contours, you can find a circle with minEnclosingCircle (in blue), or fit an ellipse with fitEllipse (in red):
Here is the full code for reference:
#include <opencv2/opencv.hpp>
#include <vector>
using namespace std;
using namespace cv;
int main()
{
    Mat1b img = imread("path_to_image", IMREAD_GRAYSCALE);

    // Get contours
    vector<vector<Point>> contours;
    findContours(img, contours, RETR_TREE, CHAIN_APPROX_NONE);

    // Create output image
    Mat3b out;
    cvtColor(img, out, COLOR_GRAY2BGR);
    Mat3b outContours = out.clone();

    // Get internal contours
    vector<vector<Point>> internalContours;
    for (size_t i = 0; i < contours.size(); ++i) {
        // Find orientation: CW or CCW
        double area = contourArea(contours[i], true);
        if (area >= 0) {
            // Internal contour
            internalContours.push_back(contours[i]);

            // Draw with different color
            drawContours(outContours, contours, i, Scalar(rand() & 255, rand() & 255, rand() & 255));
        }
    }

    // Get circles
    for (const auto& cnt : internalContours) {
        Point2f center;
        float radius;
        minEnclosingCircle(cnt, center, radius);

        // Draw circle in blue
        circle(out, center, radius, Scalar(255, 0, 0));
    }

    // Get ellipses
    for (const auto& cnt : internalContours) {
        RotatedRect rect = fitEllipse(cnt);

        // Draw ellipse in red
        ellipse(out, rect, Scalar(0, 0, 255), 2);
    }

    imshow("Out", out);
    waitKey();

    return 0;
}
First of all you have to find all the contours in your image (see the function cv::findContours).
Then you have to analyse these contours and check them against your requirements.
P.S. The figure in the picture is definitely not a circle, so I can't say exactly how you should check the contours you receive.
My goal is to find the biggest contour in a captured webcam frame, then, once it is found, determine its size and decide whether it should be rejected or accepted.
Just to explain the objective of this project: I currently work for a hygiene products manufacturer. We have, in total, 6 workers who are responsible for sorting defective soap bars out of the production line. To free this workforce for other activities, I am trying to write an algorithm to "replace" their eyes.
I've tried several methods along the way (findContours, SimpleBlobDetector, Canny, object tracking), but the problem I've been facing is that I can't seem to find a way to reliably find the biggest object in a webcam image, measure its size and then decide whether to discard or accept it.
Below is my newest code to find the biggest contour in a webcam stream:
#include <iostream>
#include "opencv2/highgui/highgui.hpp"
#include "opencv/cv.h"
#include "opencv2\imgproc\imgproc.hpp"
using namespace cv;
using namespace std;
int main(int argc, const char** argv)
{
    Mat src;
    Mat imgGrayScale;
    Mat imgCanny;
    Mat imgBlurred;

    /// Load source image
    VideoCapture capWebcam(0);
    if (capWebcam.isOpened() == false)
    {
        cout << "Could not open the webcam!" << endl;
        return(0);
    }

    while (capWebcam.isOpened())
    {
        bool blnframe = capWebcam.read(src);
        if (!blnframe || src.empty())
        {
            cout << "Error! Frame not read!\n";
            break;
        }

        int largest_area = 0;
        int largest_contour_index = 0;
        Rect bounding_rect;
        Mat thr(src.rows, src.cols, CV_8UC1);
        Mat dst(src.rows, src.cols, CV_8UC1, Scalar::all(0));

        cvtColor(src, imgGrayScale, CV_BGR2GRAY);                // Convert to gray
        GaussianBlur(imgGrayScale, imgBlurred, Size(5, 5), 1.8);
        Canny(imgBlurred, imgCanny, 45, 90);                     // Detect edges with Canny

        vector<vector<Point>> contours;                          // Vector for storing contours
        vector<Vec4i> hierarchy;
        findContours(imgCanny, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE); // Find the contours in the image

        for (int i = 0; i < contours.size(); i++)                // iterate through each contour
        {
            double a = contourArea(contours[i], false);          // Find the area of the contour
            if (a > largest_area)
            {
                largest_area = a;
                largest_contour_index = i;                       // Store the index of the largest contour
                bounding_rect = boundingRect(contours[i]);       // Find the bounding rectangle of the biggest contour
            }
        }

        Scalar color(255, 255, 255);
        drawContours(dst, contours, largest_contour_index, color, CV_FILLED, 8, hierarchy); // Draw the largest contour using the stored index
        rectangle(src, bounding_rect, Scalar(0, 255, 0), 1, 8, 0);

        imshow("src", src);
        imshow("largest Contour", dst);
        waitKey(27);
    }
    return(0);
}
And here are the result windows the program generates, and the image of the object I want to detect and sort.
Thank you all in advance for any clues on how to achieve my goal.
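For the accept/reject decision itself, one possible follow-up to the loop above (not an existing part of the code) is to compare largest_area against limits calibrated for a good soap bar; minArea and maxArea below are hypothetical placeholders:

// Hypothetical size check, to be placed after the contour loop.
// minArea and maxArea would have to be calibrated against known-good
// soap bars at your camera distance; the values below are placeholders.
const double minArea = 5000.0;
const double maxArea = 20000.0;

if (largest_area >= minArea && largest_area <= maxArea)
    putText(src, "ACCEPTED", Point(10, 30), FONT_HERSHEY_SIMPLEX, 1.0, Scalar(0, 255, 0), 2);
else
    putText(src, "REJECTED", Point(10, 30), FONT_HERSHEY_SIMPLEX, 1.0, Scalar(0, 0, 255), 2);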
I have a shape that I want to extract contours from (I need to get the number of contours right: two), but in the hierarchy I get 4 or more contours instead of two. I just can't see why; the shape is clear and there is no noise, and I applied dilation and erosion beforehand.
I tried changing all the parameters, and nothing helped. I also tried with an image of a white square and it didn't work either. Here is my code for that:
Mat I = imread("test.png", CV_LOAD_IMAGE_GRAYSCALE);
I.convertTo(B, CV_8U);
findContours(B, contour_vec, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_NONE);
Why is the contour so disconnected? What do I have to do to get 2 contours in the hierarchy?
In your image there are 5 contours: 2 external contours, 2 internal contours and 1 on the top right.
You can tell internal and external contours apart by checking whether they are oriented CW or CCW. You can do this with contourArea with the oriented flag:
oriented – Oriented area flag. If it is true, the function returns a signed area value, depending on the contour orientation (clockwise or counter-clockwise). Using this feature you can determine orientation of a contour by taking the sign of an area. By default, the parameter is false, which means that the absolute value is returned.
So, drawing external contours in red, and internal in green, you get:
You can then store only external contours (see externalContours) in the code below:
#include <opencv2/opencv.hpp>
#include <vector>
using namespace std;
using namespace cv;
int main()
{
    // Load grayscale image
    Mat1b B = imread("path_to_image", IMREAD_GRAYSCALE);

    // Find contours
    vector<vector<Point>> contours;
    findContours(B.clone(), contours, RETR_TREE, CHAIN_APPROX_NONE);

    // Create output image
    Mat3b out;
    cvtColor(B, out, COLOR_GRAY2BGR);

    vector<vector<Point>> externalContours;
    for (size_t i = 0; i < contours.size(); ++i)
    {
        // Find orientation: CW or CCW
        double area = contourArea(contours[i], true);
        if (area >= 0)
        {
            // Internal contour
            drawContours(out, contours, i, Scalar(0, 255, 0));
        }
        else
        {
            // External contour
            drawContours(out, contours, i, Scalar(0, 0, 255));

            // Save external contours
            externalContours.push_back(contours[i]);
        }
    }

    imshow("Out", out);
    waitKey();

    return 0;
}
Please remember that findContours modifies (corrupts) the input image; that is why the second image you're showing looks like garbage. Just pass a clone of the image to findContours to avoid corrupting the original.
I'm trying to count objects in an image. I'm using a photo of logs, and I use some steps to get a binary image.
This is my code:
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>
#include <opencv2/features2d/features2d.hpp>
using namespace cv;
using namespace std;
int main(int argc, char *argv[])
{
    // Load image
    Mat img = imread("kayu.jpg", CV_LOAD_IMAGE_COLOR);
    if (img.empty())
        return -1;
    //namedWindow("kayu", CV_WINDOW_AUTOSIZE);
    imshow("kayu", img);

    // Convert to grayscale and threshold to binary
    Mat bw;
    cvtColor(img, bw, CV_BGR2GRAY);
    imshow("bw1", bw);
    threshold(bw, bw, 40, 255, CV_THRESH_BINARY);
    imshow("bw", bw);

    // Distance transform & normalization
    Mat dist;
    distanceTransform(bw, dist, CV_DIST_L2, 3);
    normalize(dist, dist, 0, 2., NORM_MINMAX);
    imshow("dist", dist);

    // Threshold to draw lines
    threshold(dist, dist, .5, 1., CV_THRESH_BINARY);
    imshow("dist2", dist);
    //dist = bw;

    // Erosion and dilation
    Mat dilation, erotion, element;
    int dilation_type = MORPH_ELLIPSE;
    int dilation_size = 17;
    element = getStructuringElement(dilation_type, Size(2 * dilation_size + 1, 2 * dilation_size + 1), Point(dilation_size, dilation_size));
    erode(dist, erotion, element);
    int erotionCount = 0;
    for (int i = 0; i < erotionCount; i++) {
        erode(erotion, erotion, element);
    }
    imshow("erotion", erotion);
    dilate(erotion, dilation, element);
    imshow("dilation", dilation);

    waitKey(0);
    return 0;
}
As you can see, I use erosion and dilation to get better circular log shapes. My problem is that I'm stuck at counting the objects. I tried SimpleBlobDetector but got nothing, because when I convert the result of the "dilation" step to CV_8U, the white objects disappear. I also got an error when I used findContours(); it said something about the number of channels of the image. I can't show the error here, because there were too many steps and I already deleted that code.
By the way, at the end I have a 1-channel image.
Can I just use it for counting, or do I have to convert it, and what is the best method to do that?
Two simple steps:
Find contours for the binarized image.
Get the count of the contours.
Code:
int count_trees(const cv::Mat& bin_image) {
    cv::Mat img;
    if (bin_image.channels() > 1) {
        cv::cvtColor(bin_image, img, cv::COLOR_BGR2GRAY);
    }
    else {
        img = bin_image.clone();
    }

    if (img.type() != CV_8UC1) {
        img *= 255.f; // This could be stupid, but I do not have an environment to try it
        img.convertTo(img, CV_8UC1);
    }

    std::vector<std::vector<cv::Point>> contours;
    std::vector<cv::Vec4i> hierarchy;
    cv::findContours(img, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

    return static_cast<int>(contours.size());
}
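A possible way to call it, assuming the binarized result from the question has been saved to disk ("dilation_result.png" is a hypothetical file name):

#include <opencv2/opencv.hpp>
#include <iostream>

// count_trees(...) is the function defined above.
int main() {
    cv::Mat bin = cv::imread("dilation_result.png", cv::IMREAD_GRAYSCALE);
    if (bin.empty())
        return -1;

    std::cout << "Objects counted: " << count_trees(bin) << std::endl;
    return 0;
}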
I have the same problem, here's an idea I'm about to implement.
1) Represent your image as an array of integers; 0 = black, 1 = white.
2) set N = 2;
3) Scan your image, pixel-by-pixel. Whenever you find a white pixel, activate a flood-fill algorithm, starting at the pixel just found; paint the region with the value of N++;
4) Repeat step 3 until you reach the last pixel. (N - 2) is the number of regions found.
This method depends on the shape of the objects; mine are more chaotic than yours (wish me luck..). I'll make use of a recursive flood-fill recipe found somewhere (maybe Rosetta Code).
This solution also makes it easy to compute the size of each region.
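A minimal sketch of that scan-and-flood-fill idea using OpenCV's floodFill on an 8-bit binary image (objects white = 255, background 0; the file name is illustrative):

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // Load and (re)binarize the image: background 0, objects 255.
    cv::Mat1b bin = cv::imread("binary.png", cv::IMREAD_GRAYSCALE);
    if (bin.empty())
        return -1;
    cv::threshold(bin, bin, 128, 255, cv::THRESH_BINARY);

    int regions = 0;
    for (int y = 0; y < bin.rows; ++y) {
        for (int x = 0; x < bin.cols; ++x) {
            if (bin(y, x) == 255) {
                // Paint the whole connected region so it is not counted again.
                cv::floodFill(bin, cv::Point(x, y), cv::Scalar(100));
                ++regions;
            }
        }
    }

    std::cout << "Regions found: " << regions << std::endl;
    return 0;
}

In OpenCV 3 and later, cv::connectedComponents can do this labeling in a single call.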
Try applying this to the image from the code you deleted:
// count
int count = 0;
for (int i = 0; i < contours.size(); i = hierarchy[i][0]) // iterate over the contours at the top level, following the 'next' links
{
    Rect r = boundingRect(contours[i]);
    if (hierarchy[i][2] < 0) { // the contour has no child, so count it
        rectangle(canny_output, Point(r.x, r.y), Point(r.x + r.width, r.y + r.height), Scalar(20, 50, 255), 3, 8, 0);
        count++;
    }
}
cout << "Number of contours = " << count << endl;
imshow("src", src);
imshow("contour", dst);
waitKey(0);
I have a question which I am unable to resolve. I am taking the difference of two images using OpenCV and getting the output in a separate Mat. The difference method used is absdiff. Here is the code.
char imgName[15];
Mat img1 = imread(image_path1, COLOR_BGR2GRAY);
Mat img2 = imread(image_path2, COLOR_BGR2GRAY);
/*cvtColor(img1, img1, CV_BGR2GRAY);
cvtColor(img2, img2, CV_BGR2GRAY);*/
cv::Mat diffImage;
cv::absdiff(img2, img1, diffImage);
cv::Mat foregroundMask = cv::Mat::zeros(diffImage.rows, diffImage.cols, CV_8UC1);

float threshold = 30.0f;
float dist;

for (int j = 0; j < diffImage.rows; ++j)
{
    for (int i = 0; i < diffImage.cols; ++i)
    {
        cv::Vec3b pix = diffImage.at<cv::Vec3b>(j, i);

        dist = (pix[0]*pix[0] + pix[1]*pix[1] + pix[2]*pix[2]);
        dist = sqrt(dist);

        if (dist > threshold)
        {
            foregroundMask.at<unsigned char>(j, i) = 255;
        }
    }
}
sprintf(imgName,"D:/outputer/d.jpg");
imwrite(imgName, diffImage);
I want to bound the difference region with a rectangle. findContours draws too many contours, but I only need one particular portion. My diff image is
I want to draw a single rectangle around all five dials.
Please point me in the right direction.
Regards,
I would search for the highest i index that gives a non-black pixel; that's the right border.
The lowest non-black i is the left border. Similarly for j, which gives the top and bottom borders.
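A minimal sketch of that scan as a small helper, assuming a single-channel mask such as the foregroundMask built in the question (the function name is illustrative):

#include <opencv2/opencv.hpp>
#include <algorithm>

// Returns the tight bounding box of all non-zero pixels in a
// single-channel mask, or an empty Rect if the mask is all black.
cv::Rect scanBoundingBox(const cv::Mat& mask) {
    int left = mask.cols, right = -1, top = mask.rows, bottom = -1;
    for (int j = 0; j < mask.rows; ++j) {
        for (int i = 0; i < mask.cols; ++i) {
            if (mask.at<unsigned char>(j, i) > 0) {
                left   = std::min(left, i);
                right  = std::max(right, i);
                top    = std::min(top, j);
                bottom = std::max(bottom, j);
            }
        }
    }
    if (right < 0)
        return cv::Rect();
    return cv::Rect(left, top, right - left + 1, bottom - top + 1);
}

You can then pass the returned Rect to cv::rectangle to draw the single box around all five dials.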
You can:
binarize the image with a threshold. Background will be 0.
Use findNonZero to retrieve all points that are not 0, i.e. all foreground points.
use boundingRect on the retrieved points.
Result:
Code:
#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

int main()
{
    // Load image (grayscale)
    Mat1b img = imread("path_to_image", IMREAD_GRAYSCALE);

    // Binarize image
    Mat1b bin = img > 70;

    // Find non-black points
    std::vector<Point> points;
    findNonZero(bin, points);

    // Get bounding rect
    Rect box = boundingRect(points);

    // Draw (in color)
    Mat3b out;
    cvtColor(img, out, COLOR_GRAY2BGR);
    rectangle(out, box, Scalar(0, 255, 0), 3);

    // Show
    imshow("Result", out);
    waitKey();

    return 0;
}
Find the contours; findContours will output a set of contours as std::vector<std::vector<cv::Point>>. Let us call it contours:
std::vector<cv::Point> all_points;
size_t points_count{0};

for (const auto& contour : contours) {
    points_count += contour.size();
    all_points.reserve(points_count);
    std::copy(contour.begin(), contour.end(),
              std::back_inserter(all_points));
}

auto bounding_rectangle = cv::boundingRect(all_points);
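If you then want to visualize or crop with it, a possible follow-up, assuming image is the 3-channel Mat you extracted the contours from (the name is illustrative):

// Draw the combined bounding rectangle on a copy of the image...
cv::Mat vis = image.clone();
cv::rectangle(vis, bounding_rectangle, cv::Scalar(0, 255, 0), 2);

// ...or crop that region out of the original image.
cv::Mat roi = image(bounding_rectangle).clone();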