I am trying to detect a defect on a bottle's body. Strangely, the detected circles land on white and light-gray blobs, not only on the black ones, even though I specified black.
While browsing the net I found this topic by an expert who confirms that the color filter is not working properly here; this is the link (you can see it highlighted in red):
https://www.learnopencv.com/blob-detection-using-opencv-python-c/
Here is the relevant chunk of my code:
params.filterByArea = true;
params.minArea = 32;
params.maxArea = 60;
params.filterByColor = true;
params.blobColor = 0;
params.filterByConvexity = true;
params.minConvexity = 0.4;
threshold(src_gray, dst, threshold_value, max_BINARY_value, threshold_type);
imwrite("C:\\Documents\\Output testing\\output.jpg", dst);
Ptr<SimpleBlobDetector>detector = SimpleBlobDetector::create(params);
std::vector<KeyPoint> keypoints;
detector->detect(dst, keypoints); // keypoints vector to store the coordinates of the defects, as well as other parameters like size, etc.
//detector.detect(defect_inv, keypoints);
Mat blob_bottle;
drawKeypoints(dst, keypoints, blob_bottle, Scalar(0, 0, 255), DrawMatchesFlags::DRAW_RICH_KEYPOINTS); // drawing a red circle around the defect on the original masked corrected image
imwrite("C:\\Documents\\Output testing\\BlobTest.jpg", blob_bottle);
This is the output I'm getting: https://imgur.com/a/Xq5DyvT . You can see a dark hole; that's the one I am supposed to detect. But changing the filter parameters does not behave in practice the way it should in theory.
Any help? I also cannot find clear documentation on the blob detector.
I have a simple task for OpenCV's SimpleBlobDetector:
cv::SimpleBlobDetector::Params params;
cv::Ptr<cv::SimpleBlobDetector> detector = cv::SimpleBlobDetector::create(params);
std::vector<cv::KeyPoint> keypoints;
detector->detect(crop, keypoints);
drawKeypoints(crop, keypoints, crop, cv::Scalar(0, 0, 255), cv::DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
cv::imshow("crop", crop);
cv::waitKey(0);
It is not detecting half of the blobs in my image.
Please see the picture below.
I tried adding parameters and varying them; at no point has it ever detected every single blob.
Blob detection is a simple and straightforward algorithm that I would expect to be thoroughly polished in any image-processing API. Is this not the case with OpenCV?
//params.minThreshold = 0;
//params.maxThreshold = 255;
//params.filterByArea = true;
//params.minArea = 1000;
//params.maxArea = 5000;
//params.filterByCircularity = true;
//params.minCircularity = 0.4;
//params.filterByConvexity = true;
//params.minConvexity = 0.87;
//params.filterByInertia = true;
//params.minInertiaRatio = 0.71;
I'm using either OpenCV 3.3 or 3.2; I can't seem to find the version number in the sources.
I'm not sure whether this properly answers my question, but I ended up writing my own blob detection; it appears that OpenCV's SimpleBlobDetector is not so simple.
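The custom detector itself isn't shown here, but as a rough alternative sketch (assuming crop is a BGR image with dark blobs on a light background; the threshold value and minimum area are placeholders), a plain "find every blob" pass can be done with cv::connectedComponentsWithStats, available in OpenCV 3.x:
cv::Mat gray, binary, labels, stats, centroids;
cv::cvtColor(crop, gray, cv::COLOR_BGR2GRAY);
cv::threshold(gray, binary, 128, 255, cv::THRESH_BINARY_INV); // dark blobs become white
int n = cv::connectedComponentsWithStats(binary, labels, stats, centroids);
for (int i = 1; i < n; ++i) // label 0 is the background
{
    int area = stats.at<int>(i, cv::CC_STAT_AREA);
    if (area < 50) continue; // skip tiny specks
    cv::Rect box(stats.at<int>(i, cv::CC_STAT_LEFT),
                 stats.at<int>(i, cv::CC_STAT_TOP),
                 stats.at<int>(i, cv::CC_STAT_WIDTH),
                 stats.at<int>(i, cv::CC_STAT_HEIGHT));
    cv::rectangle(crop, box, cv::Scalar(0, 0, 255), 2); // draw a red box around every blob
}
Unlike SimpleBlobDetector, this keeps every connected component and leaves the filtering entirely up to you.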
I need to detect all whole and half notes in the given image and print every detected note into a new image. But it seems that the code does not detect the half notes; it only detects the whole notes.
This is the source code I have:
#include "opencv2/opencv.hpp"
using namespace cv;
using namespace std;
int main(int argc, char** argv)
{
// Read image
Mat im = imread("beethoven_ode_to_joy.jpg", IMREAD_GRAYSCALE);
// Setup SimpleBlobDetector parameters.
SimpleBlobDetector::Params params;
// Change thresholds
params.minThreshold = 10;
params.maxThreshold = 200;
// Filter by Area.
params.filterByArea = true;
params.minArea = 25;
// Filter by Circularity
params.filterByCircularity = true;
params.minCircularity = 0.1;
// Filter by Convexity
params.filterByConvexity = true;
params.minConvexity = 0.87;
// Filter by Inertia
params.filterByInertia = true;
params.minInertiaRatio = 0.01;
// Storage for blobs
vector<KeyPoint> keypoints;
#if CV_MAJOR_VERSION < 3 // If you are using OpenCV 2
// Set up detector with params
SimpleBlobDetector detector(params);
// Detect blobs
detector.detect(im, keypoints);
#else
// Set up detector with params
Ptr<SimpleBlobDetector> detector = SimpleBlobDetector::create(params);
// Detect blobs
detector->detect(im, keypoints);
#endif
// Draw detected blobs as red circles.
// DrawMatchesFlags::DRAW_RICH_KEYPOINTS flag ensures
// the size of the circle corresponds to the size of blob
Mat im_with_keypoints;
drawKeypoints(im, keypoints, im_with_keypoints, Scalar(0, 0, 255), DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
// Show blobs
imshow("keypoints", im_with_keypoints);
waitKey(0);
}
Actually, I don't have OpenCV at hand right now, but I tried something quick in MATLAB to solve this. First, notice in this image that the note heads are darker than the staves; looking more closely, the centers of the note heads have a value of 0. So I suggest converting your RGB image to grayscale and then applying a threshold: keep the pixels whose value equals 0 and discard the rest. The result is shown in this image. Then I think you can apply some morphological operations like dilation, because the detected note heads will be a little smaller than the originals. If you want to eliminate the upper part of the notes (I mean the stems), you can detect that part with the Hough line transform; OpenCV has functions for this (HoughLines or HoughLinesP). After detecting the stems you can delete them, or skip this step if you prefer. After all that, you can find the circular objects in the image with the Hough transform: the HoughCircles function performs this task in OpenCV, and in MATLAB it is a little easier with the imfindcircles function. Finally, you can draw the found circles with the circle function in OpenCV or the viscircles function in MATLAB. The result is here.
Note that I didn't apply the morphological operations to improve the size of the note heads, and I didn't apply the Hough line transform to detect and erase the stems. If you apply them, I think you will get a better result.
This algorithm is only a suggestion; you may find a better one by trying some other operations.
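For reference, a minimal OpenCV C++ sketch of that threshold-plus-Hough idea might look like the following (the stem-removal step is skipped, and the threshold value, radius range and accumulator threshold are guesses that would need tuning for the actual score; OpenCV 3.x constants are assumed):
#include "opencv2/opencv.hpp"
using namespace cv;
int main()
{
    // Same file name as the question's code; adjust to your own score image.
    Mat gray = imread("beethoven_ode_to_joy.jpg", IMREAD_GRAYSCALE);
    if (gray.empty()) return -1;
    // Keep only the (near-)black pixels that form the note heads.
    Mat noteHeads;
    threshold(gray, noteHeads, 10, 255, THRESH_BINARY_INV);
    // Grow the heads slightly, since thresholding tends to shrink them.
    dilate(noteHeads, noteHeads, Mat(), Point(-1, -1), 1);
    // Look for circular objects; the radius range is a guess for this score.
    std::vector<Vec3f> circles;
    HoughCircles(noteHeads, circles, HOUGH_GRADIENT, 1,
                 10,   // minimum distance between detected centers
                 100,  // Canny high threshold used internally
                 8,    // accumulator threshold: lower finds more circles
                 3, 10); // min/max radius in pixels
    // Draw the found circles in red on a color copy of the input.
    Mat output;
    cvtColor(gray, output, COLOR_GRAY2BGR);
    for (const Vec3f& c : circles)
        circle(output, Point(cvRound(c[0]), cvRound(c[1])), cvRound(c[2]), Scalar(0, 0, 255), 2);
    imshow("note heads", output);
    waitKey(0);
    return 0;
}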
I'm fairly new to OpenCV, and very excited to learn more. I've been toying with the idea of outlining edges and shapes.
I've come across this code (running on an iOS device), which uses Canny. I'd like to be able to render this in color, and circle each shape. Can someone point me in the right direction?
Thanks!
// convert the incoming BGRA frame to a single-channel grayscale image
IplImage *grayImage = cvCreateImage(cvGetSize(iplImage), IPL_DEPTH_8U, 1);
cvCvtColor(iplImage, grayImage, CV_BGRA2GRAY);
cvReleaseImage(&iplImage);
// smooth with a 3x3 box blur to reduce noise before edge detection
IplImage* img_blur = cvCreateImage( cvGetSize( grayImage ), grayImage->depth, 1);
cvSmooth(grayImage, img_blur, CV_BLUR, 3, 0, 0, 0);
cvReleaseImage(&grayImage);
// run Canny edge detection, then invert so edges appear black on white
IplImage* img_canny = cvCreateImage( cvGetSize( img_blur ), img_blur->depth, 1);
cvCanny( img_blur, img_canny, 10, 100, 3 );
cvReleaseImage(&img_blur);
cvNot(img_canny, img_canny);
An example might be these burger patties: OpenCV would detect each patty and outline it.
Original Image:
Color information is often handled by converting to the HSV color space, which represents "color" (hue) directly instead of splitting it into R/G/B components; this makes it easier to handle the same color at different brightness levels, etc.
If you convert your image to HSV you'll get this:
cv::Mat hsv;
cv::cvtColor(input,hsv,CV_BGR2HSV);
std::vector<cv::Mat> channels;
cv::split(hsv, channels);
cv::Mat H = channels[0];
cv::Mat S = channels[1];
cv::Mat V = channels[2];
Hue channel:
Saturation channel:
Value channel:
Typically, the hue channel is the first one to look at if you are interested in segmenting "color" (e.g. all red objects). One problem is that hue is a circular/angular value, which means the highest values are very similar to the lowest values; this causes the bright artifacts at the borders of the patties. To overcome this for a particular value, you can shift the whole hue space. Shifted by 50° you'll get something like this instead:
cv::Mat shiftedH = H.clone();
int shift = 25; // in openCV hue values go from 0 to 180 (so have to be doubled to get to 0 .. 360) because of byte range from 0 to 255
for(int j=0; j<shiftedH.rows; ++j)
for(int i=0; i<shiftedH.cols; ++i)
{
shiftedH.at<unsigned char>(j,i) = (shiftedH.at<unsigned char>(j,i) + shift)%180;
}
Now you can use simple Canny edge detection to find edges in the hue channel:
cv::Mat cannyH;
cv::Canny(shiftedH, cannyH, 100, 50);
You can see that the regions are a little bigger than the real patties; that might be because of the tiny reflections on the ground around the patties, but I'm not sure about that. Maybe it's just JPEG compression artifacts ;)
If you instead use the saturation channel to extract edges, you'll end up with something like this:
cv::Mat cannyS;
cv::Canny(S, cannyS, 200, 100);
where the contours aren't completely closed. Maybe you can combine hue and saturation during preprocessing to extract edges in the hue channel, but only where the saturation is high enough.
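One way to do that combination, as a minimal sketch: threshold the saturation channel and use it as a mask, so only hue edges in sufficiently saturated regions survive (the threshold of 60 is just an assumption):
cv::Mat saturationMask;
cv::threshold(S, saturationMask, 60, 255, cv::THRESH_BINARY); // keep only well-saturated pixels
cv::Mat cannyCombined;
cannyH.copyTo(cannyCombined, saturationMask); // hue edges in low-saturation areas are dropped
The combined edge image can then be fed into findContours exactly like cannyH below.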
At this stage you have edges. Note that edges aren't contours yet; if you extract contours directly from the edges they might not be closed, separated, etc.:
// extract contours of the canny image:
std::vector<std::vector<cv::Point> > contoursH;
std::vector<cv::Vec4i> hierarchyH;
cv::findContours(cannyH,contoursH, hierarchyH, CV_RETR_TREE , CV_CHAIN_APPROX_SIMPLE);
// draw the contours to a copy of the input image:
cv::Mat outputH = input.clone();
for( int i = 0; i< contoursH.size(); i++ )
{
cv::drawContours( outputH, contoursH, i, cv::Scalar(0,0,255), 2, 8, hierarchyH, 0);
}
You can remove those small contours by checking cv::contourArea(contoursH[i]) > someThreshold before drawing. But do you see how the two patties on the left are connected? Here comes the hardest part... use some heuristics to "improve" your result.
cv::dilate(cannyH, cannyH, cv::Mat());
cv::dilate(cannyH, cannyH, cv::Mat());
cv::dilate(cannyH, cannyH, cv::Mat());
Dilation before contour extraction will "close" the gaps between different objects but increase the object size too.
If you extract contours from that, it will look like this:
If you instead keep only the "inner" contours, it is exactly what you want:
cv::Mat outputH = input.clone();
for( int i = 0; i< contoursH.size(); i++ )
{
if(cv::contourArea(contoursH[i]) < 20) continue; // ignore contours that are too small to be a patty
if(hierarchyH[i][3] < 0) continue; // ignore "outer" contours
cv::drawContours( outputH, contoursH, i, cv::Scalar(0,0,255), 2, 8, hierarchyH, 0);
}
Mind that the dilation and inner-contour trick is a little fuzzy, so it might not work for different images. If the initial edges are placed better around the object border, it might (1) not be necessary to do the dilation and inner-contour step at all, and (2) if it is still necessary, the dilation will make the object smaller in this scenario (which luckily is great for the given sample image).
EDIT: Some important information about HSV: the hue channel assigns every pixel a color of the spectrum, even if the saturation is very low (= gray/white) or the value (brightness) is very low, so it is often desirable to threshold the saturation and value channels as well when looking for a specific color! This might be much easier and much more stable to handle than the dilation I've used in my code.
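As an illustration of that combined thresholding, cv::inRange on the HSV image does it in one call; the concrete ranges below are assumptions, and for red hues a second range near 180 may be needed because hue wraps around:
// select pixels of a specific hue while requiring a minimum saturation and
// value, so gray/white and very dark pixels are excluded
cv::Mat colorMask;
cv::inRange(hsv,
            cv::Scalar(0, 80, 60),    // lower bound: hue, saturation, value
            cv::Scalar(15, 255, 255), // upper bound
            colorMask);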
I've perused this site for an explanation but to no avail...hopefully someone knows the answer.
I'm using simpleBlobDetector to track some blobs. I would like to specify a mask via the detect method, but for some reason the mask doesn't seem to work - my keypoints show up for the whole image. Here are some snippets of my code:
Mat currFrame;
Mat mask;
Mat roi;
cv::Ptr<cv::FeatureDetector> blob_detector = new cv::SimpleBlobDetector(params);//custom set of params I've left out for legibility
blob_detector->create("SimpleBlob");
vector<cv::KeyPoint> myblob;
while(true)
{
captured >> currFrame; // get a new frame from camera >> is grab and retrieve in one go, note grab does not allow frame to be modified but edges can be
// do nothing if frame is empty
if(currFrame.empty())
{
break;
}
/******************** make mask***********************/
mask = Mat::zeros(currFrame.size(),CV_8U);
roi = Mat(mask,Rect(400,400,400,400));
roi = 255;
/******************** image cleanup with some filters*/
GaussianBlur(currFrame,currFrame, Size(5,5), 1.5, 1.5);
cv::medianBlur(currFrame,currFrame,3);
blob_detector->detect(fgMaskMOG,myblob,mask);//fgMaskMOG is currFrame after some filtering and background subtraction
cv::drawKeypoints(fgMaskMOG,myblob,fgMaskMOG,Scalar::all(-1), DrawMatchesFlags::DRAW_RICH_KEYPOINTS );
imshow("mogForeground", fgMaskMOG);
imshow("original", currFrame);
imshow("mask",mask);
if(waitKey(1) != -1)
break;
}
The thing is, I confirmed that my mask is correctly made by using SurfFeatureDetector as described here (OpenCV: how to use mask parameter for feature point detection (SURF)). If anyone can see what's wrong with my mask, I'd really appreciate the help. Sorry about the messy code!
I had the same issue and couldn't find the solution, so I solved it by checking the mask myself:
// detect on the whole image, then keep only the keypoints inside the mask
blob_detector->detect(img, keypoints);
std::vector<cv::KeyPoint> keypoints_in_range;
for (cv::KeyPoint &kp : keypoints)
    if (mask.at<uchar>(kp.pt) > 0) // mask is CV_8U, so read it as unsigned
        keypoints_in_range.push_back(kp);
I found this code in OpenCV 2.4.8:
void SimpleBlobDetector::detectImpl(const cv::Mat& image, std::vector<cv::KeyPoint>& keypoints, const cv::Mat&) const
{
//TODO: support mask
keypoints.clear();
Mat grayscaleImage;
which means that this option is not supported yet.
The solution of filtering keypoints is not ideal, because it is time-consuming (you have to detect blobs in the whole image).
A better workaround is to cut out a ROI before detection and shift each KeyPoint back afterwards:
// detect only inside the region of interest...
int x = 500;
int y = 200;
int width = 700;
int height = 700;
Mat roi = frame(Rect(x, y, width, height));
blob_detector.detect(roi, keypoints);
// ...then shift each keypoint back into full-frame coordinates
for (KeyPoint &kp : keypoints)
{
    kp.pt.x += x;
    kp.pt.y += y;
}
drawKeypoints(frame, keypoints, frame, Scalar::all(-1), DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
Making the usual blob tracker with OpenCV and cvBlobsLib, I've come across this problem and it seems no one else has had it, which makes me sad. I get the RGB/BGR frame, choose the color to isolate, threshold it into b/w, find the blobs and add a bounding rectangle to each blob. But when I display the final image, the box is stretched along the x-axis: when the object is on the left, the box is close to it (although around 2.5 times larger), and as the object moves to the right the box moves faster (farther and farther from the object) until it reaches the right end of the window when the object isn't even halfway across. This doesn't happen on the y-axis, where everything is fine.
It's not a problem with rectangles; it happens when I use fillBlob as well, the blob shape comes out stretched and misaligned. It's also not related to image capturing, since I've tried with a Kinect (OpenNI), a webcam and even a single image (imread()), and I verified that every ImageGenerator, Mat and IplImage used was 640x480, 8-bit depth, for which I used AUTOSIZE for the namedWindow (enlarging to a fullscreen window doesn't help either). Showing the BGR frame and the thresholded image gives no problems, they both fit into the window, but the detected blobs seem to belong to a different resolution space when I merge them with the original image.
Here's the code; not much has changed from the usual examples found online everywhere:
//[...]
namedWindow("Color Image", CV_WINDOW_AUTOSIZE);
namedWindow("Color Tracking", CV_WINDOW_AUTOSIZE);
//[...] I already got the two cv::Mat I need, imgBGR and imgTresh
CBlobResult blobs;
CBlob *currentBlob;
Point pt1, pt2;
Rect rect;
//had to do Mat to IplImage conversion, since cvBlobsLib doesn't like mats
IplImage iplTresh = imgTresh;
IplImage iplBGR = imgBGR;
blobs = CBlobResult(&iplTresh, NULL, 0);
blobs.Filter(blobs, B_EXCLUDE, CBlobGetArea(), B_LESS, 100);
int nBlobs = blobs.GetNumBlobs();
for (int i = 0; i < nBlobs; i++)
{
currentBlob = blobs.GetBlob(i);
rect = currentBlob->GetBoundingBox();
pt1.x = rect.x;
pt1.y = rect.y;
pt2.x = rect.x + rect.width;
pt2.y = rect.y + rect.height;
cvRectangle(&iplBGR, pt1, pt2, cvScalar(255, 255, 255, 0), 3, 8, 0);
}
//[...]
imshow("Color Image", imgBGR);
imshow("Color Tracking", imgTresh);
The "[...]" is code that shouldn't have nothing to do with this issue, but if you need further info on how I handled the images, let me know and I'll post it.
Based on the fact that the way I capture the image doesn't change anything, that BGR frame and B/W image are well shown, and that after getting blobs any way of displaying them gives the same (wrong) result, the problem must be something between CBlobResult() and matrix2ipl conversion, but I don't really know how to find it out.
Oh god, I spent ages writing up the whole problem and the next day I found the answer almost by accident. When I created the B/W matrix for thresholding, I didn't make it single-channel; I copied the BGR matrix type, ending up with a threshold image with 3 channels and therefore a widthStep 3 times the frame width. Resolved by creating cv::Mat imgTresh with CV_8UC1 as its type.
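For completeness, a minimal sketch of that fix (the color-isolation step is omitted and 128 is just a placeholder threshold value):
// build the thresholded image as a true single-channel (CV_8UC1) matrix
// instead of reusing the 3-channel BGR type
cv::Mat imgGray;
cv::cvtColor(imgBGR, imgGray, CV_BGR2GRAY);
cv::Mat imgTresh(imgBGR.size(), CV_8UC1);
cv::threshold(imgGray, imgTresh, 128, 255, CV_THRESH_BINARY);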