I'm comparatively new to OpenCV. I was wondering if it is possible to get the histogram of a contour (which can be a perfect rectangle or irregular in shape) found by findContours.
Thanks in advance.
Edit:
This is exactly what I'm trying to achieve. I want to analyse the area inside the contour to detect defects (by analyzing the histogram of that area?) and declare the piece defective or good. Images attached.
Good sample. (The detected contour is outlined in gray.)
Defective sample. (Defect around the top left corner.)
You may be misusing the histogram.
A contour of an image is a binary-valued, colorless matrix that represents boundaries, not the grayscale levels of pixels.
A histogram, meanwhile, is a tool for analyzing how the grayscale values of pixels are distributed across your 2D image.
So why profile the histogram of a binary-valued matrix? It will not help you analyze the image; the histogram is not the right tool for contour analysis.
All you would get is a two-bar histogram, because the contour matrix contains only binary values. That is not helpful for analysis.
Here's another way, using morphological operations.
#include <string>
#include <iostream>
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
cv::Mat make_element(int morph_size, int elem_type)
{
cv::Size sz{2*morph_size+1, 2*morph_size+1};
cv::Point pt{morph_size, morph_size};
cv::Mat element{getStructuringElement(elem_type, sz, pt)};
return element;
}
int main(int argc, char **argv)
{
if (argc < 2) {
std::cerr << "Usage: " << argv[0] << " <image>" << std::endl;
return -1;
}
std::string fn{argv[1]};
cv::Mat src{cv::imread(fn)}, dst, mask[3];
if (!src.data) {
std::cerr << "No image data :(" << std::endl;
return -1;
}
// Clean out noise
cv::Mat elem1{make_element(5, cv::MORPH_RECT)};
cv::morphologyEx(src, dst, cv::MORPH_OPEN, elem1);
// Close the hole, then XOR with original
cv::Mat elem2{make_element(45, cv::MORPH_ELLIPSE)};
cv::morphologyEx(dst, dst, cv::MORPH_CLOSE, elem2);
cv::bitwise_xor(src, dst, dst);
// Clean out noise (again)
cv::Mat elem3{make_element(1, cv::MORPH_RECT)};
cv::morphologyEx(dst, dst, cv::MORPH_OPEN, elem3);
// Mark the hole
cv::split(dst, mask);
cv::bitwise_xor(src, dst, dst, mask[0]);
// Overlay
cv::split(dst, mask);
cv::Mat empty = cv::Mat::zeros(dst.size(), CV_8UC1); // zero-filled; Mat(size, type) leaves the memory uninitialized
std::vector<cv::Mat> v{empty, empty, mask[0]};
cv::merge(v, dst);
cv::bitwise_or(src, dst, dst);
cv::namedWindow("Defect (ESC to quit)", cv::WINDOW_NORMAL);
cv::startWindowThread();
cv::imshow("Defect (ESC to quit)", dst);
while (true) {
int k = cv::waitKey(100) & 0xff;
if (k == 27) {
break;
}
}
cv::destroyAllWindows();
return 0;
}
Some additional reading:
Shapiro/Stockman, Finding gear defects, Chapter 3
OpenCV morphology tutorial
Related
I'm trying to use OpenCV's FAST corner detection algorithm to get an outline of an image of a ball (not my final project; I'm using it as a simple example). For some reason, it only works on a third of the input Mat, and stretches the keypoints across the image. I'm not sure what could be going wrong here to make the FAST algorithm not apply to the entire Mat.
Code:
void featureDetection(const Mat& imgIn, std::vector<KeyPoint>& pointsOut) {
int fast_threshold = 20;
bool nonmaxSuppression = true;
FAST(imgIn, pointsOut, fast_threshold, nonmaxSuppression);
}
int main(int argc, char** argv) {
Mat out = imread("ball.jpg", IMREAD_COLOR);
// Detect features
std::vector<KeyPoint> keypoints;
featureDetection(out.clone(), keypoints);
Mat out2 = out.clone();
// Draw features (Normal, missing right side)
for(KeyPoint p : keypoints) {
drawMarker(out, Point(p.pt.x / 3, p.pt.y), Scalar(0, 255, 0));
}
imwrite("out.jpg", out, std::vector<int>(0));
// Draw features (Stretched)
for(KeyPoint p : keypoints) {
drawMarker(out2, Point(p.pt.x, p.pt.y), Scalar(127, 0, 255));
}
imwrite("out2.jpg", out2, std::vector<int>(0));
}
Input image
Output 1 (keypoint.x multiplied by a factor of 1/3, but missing right side)
Output 2 (Coordinates untouched)
I'm using OpenCV 4.5.4 on MinGW.
Most keypoint detectors expect grayscale images as input.
If you interpret the memory of a BGR image as grayscale, you will have 3 times the number of pixels per row. The y axis is still OK if the algorithm uses the width offset per row, which most algorithms do (because this is useful when subimaging or padding is used).
I don't know whether it is a bug or a feature that FAST doesn't check the number of channels and doesn't throw an exception if the wrong number of channels is given.
You can convert the image to grayscale with cv::cvtColor using the flag cv::COLOR_BGR2GRAY.
I need help with my project. I read a color image (the source image) from disk, and my task is to apply blur to this image only where the Canny function detects edges. Detecting the edges works without problems, as you can see in the attached images (top left corner image: Edge Image).
I applied the 4 steps from these related questions:
this and this.
Steps 1-3 are probably correct, as you can see in the attached image. The first image shows the detected edges, the second shows the previous image dilated, and the third shows the second image blurred with the source image copied onto it. But in the last step I want to copy this image into the final (source) image, so that the edges detected by Canny end up blurred. When I use the copyTo function from the OpenCV library, however, the result does not have blurred edges, as you can see in the Result picture (bottom right corner image). Can you please help me figure out what I am doing wrong?
#include <cstdlib>
#include <iostream>
#include <QCoreApplication>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
using namespace cv;
Mat src, src_gray;
Mat detected_edges;
Mat blurred;
int edgeTresh = 1;
int lowThreshold;
int const max_lowThreshold = 100;
int ratio = 3;
int kernel_size = 3;
const char* window_name = "Edge Image";
const char* window_name2 = "Dilated";
const char* window_name3 = "Blurred";
const char* window_name4 = "Result";
void CannyThreshold(int, void*)
{
//reducing noise
blur(src_gray, detected_edges, Size(3,3));
//Canny function for detection of edges
Canny(detected_edges,detected_edges, lowThreshold,lowThreshold*ratio, kernel_size);
//show detected edges in source image
imshow(window_name, detected_edges);
//4 steps from Stack Overflow
dilate(detected_edges, blurred, Mat()); //1
imshow(window_name2, blurred);
src.copyTo(blurred,blurred); //2
blur(blurred, blurred ,Size(10,10)); //3
imshow(window_name3, blurred);
//here can by a problem when I copy image from step 3 to source image with detected_edges mask.
blurred.copyTo(src,detected_edges); //4
imshow(window_name4, src); //final image
}
int main(int argc, char *argv[])
{
//reading image
src = cv::imread("/home/ja/FCS02/FCS02_3/imageReading/drevo.png");
if(!src.data)
return -1;
//convert to gray
cvtColor(src,src_gray,CV_BGR2GRAY);
//windows for showing each step image
namedWindow(window_name,CV_WINDOW_NORMAL);
namedWindow(window_name2,CV_WINDOW_NORMAL);
namedWindow(window_name3,CV_WINDOW_NORMAL);
namedWindow(window_name4,CV_WINDOW_NORMAL);
//trackbar
createTrackbar("Min Threshold:",window_name, &lowThreshold, max_lowThreshold,CannyThreshold);
//detection of edges
CannyThreshold(0,0);
cv::waitKey(300000);
return EXIT_SUCCESS;
}
Source Image where I want to blur only edges
Results of my code
This image shows what I want
Big thanks to everybody for your help and advice.
When you copy the blurred edges back into your original image, you are using the wrong mask. detected_edges contains the output of the Canny detector (only some sparse pixels). The non-zero pixels of the mask indicate which pixels of the source image can be copied to the destination. The image blurred contains only the blurred edges, and the rest of its pixels are black, so I think you can directly use it as the mask for the copy.
blurred.copyTo(src, blurred); //4
Keep in mind that the mask needs to be of type CV_8U. It seems that in your example this is the case. If not, you can use the following code to create a mask that is set only where the pixels in blurred are non-zero.
blurred.copyTo(src, (blurred != 0)); //4
I am performing feature detection in a video/live stream/image using OpenCV C++. The lighting condition varies in different parts of the video, leading to some parts getting ignored while transforming the RGB images to binary images.
The lighting condition in a particular portion of the video also changes over the course of the video. I tried the 'Histogram equalization' function, but it didn't help.
I got a working solution in MATLAB in the following link:
http://in.mathworks.com/help/images/examples/correcting-nonuniform-illumination.html
However, most of the functions used in the above link aren't available in OpenCV.
Can you suggest the alternative of this MATLAB code in OpenCV C++?
OpenCV has the adaptive threshold paradigm available in the framework: http://docs.opencv.org/modules/imgproc/doc/miscellaneous_transformations.html#adaptivethreshold
The function prototype looks like:
void adaptiveThreshold(InputArray src, OutputArray dst,
double maxValue, int adaptiveMethod,
int thresholdType, int blockSize, double C);
The first two parameters are the input image and a place to store the output thresholded image. maxValue is the value assigned to an output pixel should it pass the criteria, adaptiveMethod is the method to use for adaptive thresholding, thresholdType is the type of thresholding you want to perform (more later), blockSize is the size of the windows to examine (more later), and C is a constant to subtract from each window mean. I've never really needed to tune this and I usually set it to 0.
The default method for adaptiveThreshold is to analyze blockSize x blockSize windows, calculate the mean intensity within each window, then subtract C. If the pixel at the centre of the window is above this value, the corresponding location in the output image is set to maxValue; otherwise it is set to 0. This should combat the non-uniform illumination issue: instead of applying a single global threshold to the image, you are thresholding over local pixel neighbourhoods.
You can read the documentation on the other methods for the other parameters, but to get you started, you can do something like this:
// Include libraries
#include <cv.h>
#include <highgui.h>
// For convenience
using namespace cv;
// Example function to adaptive threshold an image
void threshold()
{
// Load in an image - Change "image.jpg" to whatever your image is called
Mat image;
image = imread("image.jpg", 1);
// Convert image to grayscale and show the image
// Wait for user key before continuing
Mat gray_image;
cvtColor(image, gray_image, CV_BGR2GRAY);
namedWindow("Gray image", CV_WINDOW_AUTOSIZE);
imshow("Gray image", gray_image);
waitKey(0);
// Adaptive threshold the image
int maxValue = 255;
int blockSize = 25;
int C = 0;
adaptiveThreshold(gray_image, gray_image, maxValue,
CV_ADAPTIVE_THRESH_MEAN_C, CV_THRESH_BINARY,
blockSize, C);
// Show the thresholded image
// Wait for user key before continuing
namedWindow("Thresholded image", CV_WINDOW_AUTOSIZE);
imshow("Thresholded image", gray_image);
waitKey(0);
}
// Main function - Run the threshold function
int main( int argc, const char** argv )
{
threshold();
}
adaptiveThreshold should be your first choice.
But here I report the "translation" from Matlab to OpenCV, so you can easily port your code. As you see, most of the functions are available both in Matlab and OpenCV.
#include <opencv2/opencv.hpp>
using namespace cv;
int main()
{
// Step 1: Read Image
Mat1b img = imread("path_to_image", IMREAD_GRAYSCALE);
// Step 2: Use Morphological Opening to Estimate the Background
Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(15,15));
Mat1b background;
morphologyEx(img, background, MORPH_OPEN, kernel);
// Step 3: Subtract the Background Image from the Original Image
Mat1b img2;
absdiff(img, background, img2);
// Step 4: Increase the Image Contrast
// Not needed here; the equivalent would be cv::equalizeHist
// Step 5(1): Threshold the Image
Mat1b bw;
threshold(img2, bw, 50, 255, THRESH_BINARY);
// Step 6: Identify Objects in the Image
vector<vector<Point>> contours;
findContours(bw.clone(), contours, CV_RETR_LIST, CV_CHAIN_APPROX_NONE);
for(int i=0; i<contours.size(); ++i)
{
// Step 5(2): bwareaopen
if(contours[i].size() > 50)
{
// Step 7: Examine One Object
Mat1b object(bw.size(), uchar(0));
drawContours(object, contours, i, Scalar(255), CV_FILLED);
imshow("Single Object", object);
waitKey();
}
}
return 0;
}
I am trying to smooth the edges of an output image using the OpenCV framework, following the steps below. Steps taken from here: https://stackoverflow.com/a/17175381/790842
int lowThreshold = 10.0;
int ratio = 3;
int kernel_size = 3;
Mat src_gray,detected_edges,dst,blurred;
/// Convert the image to grayscale
cvtColor( result, src_gray, CV_BGR2GRAY );
/// Reduce noise with a kernel 3x3
cv::blur( src_gray, detected_edges, cv::Size(5,5) );
/// Canny detector
cv::Canny( detected_edges, detected_edges, lowThreshold, lowThreshold*ratio, kernel_size );
//Works fine upto here I am getting perfect edge mask
cv::dilate(detected_edges, blurred, result);
//I get Assertion failed (src.channels() == 1 && func != 0) in countNonZero ERROR while doing dilate
result.copyTo(blurred, blurred);
cv::blur(blurred, blurred, cv::Size(3.0,3.0));
blurred.copyTo(result, detected_edges);
UIImage *image = [UIImageCVMatConverter UIImageFromCVMat:result];
I would like to know whether I am going the right way, or what I am missing.
Thanks for any suggestions and help.
Updated:
I have got an image like the one below from the GrabCut algorithm; now I want to apply edge smoothing to it, because as you can see the edges are not smooth.
Do you want to get something like this?
If yes, then here is the code:
#include <iostream>
#include <vector>
#include <string>
#include <fstream>
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;
int main(int argc, char **argv)
{
cv::namedWindow("result");
Mat img=imread("TestImg.png");
Mat whole_image=imread("D:\\ImagesForTest\\lena.jpg");
whole_image.convertTo(whole_image,CV_32FC3,1.0/255.0);
cv::resize(whole_image,whole_image,img.size());
img.convertTo(img,CV_32FC3,1.0/255.0);
Mat bg=Mat(img.size(),CV_32FC3);
bg=Scalar(1.0,1.0,1.0);
// Prepare mask
Mat mask;
Mat img_gray;
cv::cvtColor(img,img_gray,cv::COLOR_BGR2GRAY);
img_gray.convertTo(mask,CV_32FC1);
threshold(1.0-mask,mask,0.9,1.0,cv::THRESH_BINARY_INV);
cv::GaussianBlur(mask,mask,Size(21,21),11.0);
imshow("result",mask);
cv::waitKey(0);
// Reget the image fragment with smoothed mask
Mat res;
vector<Mat> ch_img(3);
vector<Mat> ch_bg(3);
cv::split(whole_image,ch_img);
cv::split(bg,ch_bg);
ch_img[0]=ch_img[0].mul(mask)+ch_bg[0].mul(1.0-mask);
ch_img[1]=ch_img[1].mul(mask)+ch_bg[1].mul(1.0-mask);
ch_img[2]=ch_img[2].mul(mask)+ch_bg[2].mul(1.0-mask);
cv::merge(ch_img,res);
cv::merge(ch_bg,bg);
imshow("result",res);
cv::waitKey(0);
cv::destroyAllWindows();
}
And I think this link will be interesting for you too: Poisson Blending
I have followed the following steps to smooth the edges of the Foreground I got from GrabCut.
Create a binary image from the mask I got from GrabCut.
Find the contour of the binary image.
Create an Edge Mask by drawing the contour points. It gives the boundary edges of the Foreground image I got from GrabCut.
Then follow the steps defined in https://stackoverflow.com/a/17175381/790842
Hello peeps. I have developed a piece of software that draws the contours of an input image. Now I want to take this to the next level and draw a bounding box around objects of interest, i.e. a person. I looked at the boundingRect() function but I am struggling to understand it. Maybe there are other functions or algorithms to draw a bounding box?
Here is the code of my program:
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <opencv/ml.h>
#include <opencv/cxcore.h>
#include <iostream>
#include <string>
#include <opencv2/core/core.hpp> // Basic OpenCV structures (cv::Mat)
#include <opencv2/highgui/highgui.hpp> // Video write
using namespace cv;
using namespace std;
Mat image; Mat image_gray; Mat image_gray2; Mat threshold_output;
int thresh=100, max_thresh=255;
int main(int argc, char** argv) {
//Load Image
image =imread("C:/Users/Tomazi/Pictures/Opencv/tomazi.bmp");
//Convert Image to gray & blur it
cvtColor( image,
image_gray,
CV_BGR2GRAY );
blur( image_gray,
image_gray2,
Size(3,3) );
//Threshold Gray&Blur Image
threshold(image_gray2,
threshold_output,
thresh,
max_thresh,
THRESH_BINARY);
//2D Container
vector<vector<Point>> contours;
//Find contour points (input image, storage, retrieval mode, approximation method, offset)
findContours(threshold_output,
contours, // a vector of contours
CV_RETR_EXTERNAL,// retrieve the external contours
CV_CHAIN_APPROX_NONE,
Point(0, 0)); // all pixels of each contours
// Draw black contours on a white image
Mat result(threshold_output.size(),CV_8U,Scalar(255));
drawContours(result,contours,
-1, // draw all contours
Scalar(0), // in black
2); // with a thickness of 2
//Create Window
const char* DisplayWindow = "Source";
namedWindow(DisplayWindow, CV_WINDOW_AUTOSIZE);
imshow(DisplayWindow, result);
waitKey(5000);
return 1;
}
Can anyone suggest a solution? Perhaps direct me to some sources, tutorials, etc. Reading the OpenCV documentation and looking at the boundingRect() function, I still don't understand. Help please :)
But you can also easily compute the bounding box yourself and then draw them using the rectangle function:
int maxX = 0, minX = image.cols, maxY=0, minY = image.rows;
for(int i=0; i<contours.size(); i++)
for(int j=0; j<contours[i].size(); j++)
{
Point p = contours[i][j];
maxX = max(maxX, p.x);
minX = min(minX, p.x);
maxY = max(maxY, p.y);
minY = min(minY, p.y);
}
rectangle( result, Point(minX,minY), Point(maxX, maxY), Scalar(0) );
This link was not helpful?
I think it demonstrates how to take the contour object and make it a polygon approximation, plus how to draw the bounding rectangle around it.
It seems to be one of the basic OpenCV demos.
I've talked about the bounding box technique in these posts:
How to detect Text Area from image?
Contours opencv : How to eliminate small contours in a binary image
OpenCv 2.3 C - How to isolate object inside image (simple C++ demo)
I think that the last one can probably help you understand how the standard technique works. What OpenCV offers is an easier approach.