I have a simple task for OpenCV SimpleBlobDetector
cv::SimpleBlobDetector::Params params;
cv::Ptr<cv::SimpleBlobDetector> detector = cv::SimpleBlobDetector::create(params);
std::vector<cv::KeyPoint> keypoints;
detector->detect(crop, keypoints);
drawKeypoints(crop, keypoints, crop, cv::Scalar(0, 0, 255), cv::DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
cv::imshow("crop", crop);
cv::waitKey(0);
It is not detecting half of the blobs in my image.
Please see the picture below.
I tried adding parameters and varying them; at no point has it ever detected every single blob.
Blob detection is a simple and straightforward algorithm that I would expect to be thoroughly polished in every image processing API. Is this not the case with OpenCV?
//params.minThreshold = 0;
//params.maxThreshold = 255;
//params.filterByArea = true;
//params.minArea = 1000;
//params.maxArea = 5000;
//params.filterByCircularity = true;
//params.minCircularity = 0.4;
//params.filterByConvexity = true;
//params.minConvexity = 0.87;
//params.filterByInertia = true;
//params.minInertiaRatio = 0.71;
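For completeness, here is a minimal sketch of the opposite extreme I also tried: thresholds widened and every shape filter disabled, so only the detector's internal thresholding decides what counts as a blob (the values are illustrative, not tuned to my image):
cv::SimpleBlobDetector::Params params;
params.minThreshold = 10;   // scan a wide range of binarization thresholds
params.maxThreshold = 220;
params.thresholdStep = 10;
params.filterByArea = false;        // disable every shape filter
params.filterByCircularity = false;
params.filterByConvexity = false;
params.filterByInertia = false;
params.filterByColor = false;
cv::Ptr<cv::SimpleBlobDetector> detector = cv::SimpleBlobDetector::create(params);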
I'm using either OpenCV 3.3 or 3.2; I can't seem to find the version number in the sources.
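One way to check (a minimal sketch using the CV_VERSION macro, which to my knowledge is defined in opencv2/core/version.hpp in both versions):
#include <opencv2/core/version.hpp>
#include <iostream>
int main()
{
    std::cout << "OpenCV version: " << CV_VERSION << std::endl; // e.g. "3.3.0"
    return 0;
}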
I'm not sure this properly answers my question, but I had to write my own blob detection; it appears that OpenCV's SimpleBlobDetector is not so simple.
Related
Using OpenCV 4.2.0 in C++ (VS 2019), I created a project that performs face detection on a given image. I used OpenCV's DNN face detector, which uses the res10_300x300_ssd_iter_140000_fp16.caffemodel model to detect faces. Below is the code of that function:
// variables used in the function
const double inScaleFactor = 1.0;
const cv::Scalar meanVal = cv::Scalar(104.0, 177.0, 123.0);
const size_t inWidth = 300;
const size_t inHeight = 300;
std::vector<FaceDetectionResult> namespace_name::FaceDetection::detectFaceByOpenCVDNN(std::string filename, FaceDetectionModel model)
{
cv::dnn::Net net;
cv::Mat frame = cv::imread(filename);
cv::Mat inputBlob;
std::vector<FaceDetectionResult> vec;
if (frame.empty())
throw std::runtime_error("provided image file is not found or unable to open.");
int frameHeight = frame.rows;
int frameWidth = frame.cols;
if (model == FaceDetectionModel::CAFFE)
{
net = cv::dnn::readNetFromCaffe(caffeConfigFile, caffeWeightFile);
inputBlob = cv::dnn::blobFromImage(frame, inScaleFactor, cv::Size(inWidth, inHeight), meanVal, false, false);
}
else
{
net = cv::dnn::readNetFromTensorflow(tensorflowWeightFile, tensorflowConfigFile);
inputBlob = cv::dnn::blobFromImage(frame, inScaleFactor, cv::Size(inWidth, inHeight), meanVal, true, false);
}
net.setInput(inputBlob, "data");
cv::Mat detection = net.forward("detection_out");
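// assumed output layout: a 1x1xNx7 blob where each detection row is [imageId, classId, confidence, x1, y1, x2, y2]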
cv::Mat detectionMat(detection.size[2], detection.size[3], CV_32F, detection.ptr<float>());
for (int i = 0; i < detectionMat.rows; i++)
{
if (detectionMat.at<float>(i, 2) >= 0.5)
{
FaceDetectionResult res;
res.faceDetected = true;
res.confidence = detectionMat.at<float>(i, 2);
res.x1 = static_cast<int>(detectionMat.at<float>(i, 3) * frameWidth);
res.y1 = static_cast<int>(detectionMat.at<float>(i, 4) * frameHeight);
res.x2 = static_cast<int>(detectionMat.at<float>(i, 5) * frameWidth);
res.y2 = static_cast<int>(detectionMat.at<float>(i, 6) * frameHeight);
vec.push_back(res);
}
#ifdef aDEBUG
else
{
cout << detectionMat.at<float>(i, 2) << endl;
}
#endif
}
return vec;
}
In the above code, after face detection I store the confidence and coordinates of each detected face in the custom class FaceDetectionResult, which is a simple class with the required bool, int, and float members.
The function detects faces in the given image, but while playing with it I am comparing it against dlib's HOG+SVM face detector, so first I run face detection with dlib and then pass the same image path to this function.
I found some images where dlib can easily find faces but OpenCV doesn't find a single one; for example, look at the image below:
As you can see, HOG+SVM detected 46 faces in approximately 3 seconds. If I pass the same image to the function above, OpenCV does not detect a single face in it. Why? Do I need any enhancements to the code above? I am not saying the function never detects faces; it does, but for some images (like the one above) it does not.
For reference, I used this Python program to detect faces using dlib: https://pastebin.com/9rt9reNY
After a deep search, unfortunately I couldn't find a good explanation for this problem. The reason I tried cropping the image is that I assumed there might be a limit on the maximum number of detected faces. It is also not about occlusion.
I tried some example images containing more than roughly 20 faces and the results were the same, but when I cropped those images (decreasing the number of faces), the program was able to find the faces. It is also not about the resolution (size) of the image, because the images I tried had different sizes.
I also changed and tried all the parameters (iteration number, confidence threshold, etc.), but the result still wasn't the desired one.
My assumption but not the answer:
The program doesn't find the faces if the image includes more than some maximum number of them (approximately 20).
As a workaround, we can divide the source image into two parts, find the rectangles for each, and then paste the rectangles back onto the source image, as in the sketch below.
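A rough sketch of that workaround (untested; detectOnMat is a hypothetical variant of the posted function that takes a cv::Mat instead of a filename, and overlap handling at the seam is left out):
cv::Mat frame = cv::imread(filename);
int halfW = frame.cols / 2;
cv::Mat left = frame(cv::Rect(0, 0, halfW, frame.rows));
cv::Mat right = frame(cv::Rect(halfW, 0, frame.cols - halfW, frame.rows));
std::vector<FaceDetectionResult> results = detectOnMat(left);       // hypothetical helper
std::vector<FaceDetectionResult> rightResults = detectOnMat(right); // hypothetical helper
for (auto& r : rightResults)
{
    r.x1 += halfW; // shift the right half's boxes back into source-image coordinates
    r.x2 += halfW;
    results.push_back(r);
}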
Note: after digging deeply on the internet, I couldn't find a topic related to this problem. I am also curious about the main cause of this issue, so any help will be appreciated. This post only includes my experiences and assumptions.
Change this line:
inputBlob = cv::dnn::blobFromImage(frame, inScaleFactor, cv::Size(inWidth, inHeight), meanVal, false, false);
to this line:
inputBlob = cv::dnn::blobFromImage(frame, inScaleFactor, cv::Size(frameWidth, frameHeight), meanVal, false, false);
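If I understand the suggestion correctly, the idea is to build the input blob at the frame's own resolution instead of downscaling everything to 300x300, so the many small faces in a crowded photo stay large enough for the network to pick up; since the model was trained at 300x300, this presumably trades inference speed (and possibly some accuracy) for recall on such images.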
I am trying to detect a defect on a bottle's body. Strangely, the circles were located on the white and light-dark circles, not only on the black ones, even though I specified black.
While browsing the net I saw this topic covered by an expert, who confirmed on his side that the color feature is not working properly here. Here is the link (you can see it highlighted in red):
https://www.learnopencv.com/blob-detection-using-opencv-python-c/
Here is the relevant chunk of my code:
params.filterByArea = true;
params.minArea = 32;
params.maxArea = 60;
params.filterByColor = true;
params.blobColor = 0;
params.filterByConvexity = true;
params.minConvexity = 0.4;
threshold(src_gray, dst, threshold_value, max_BINARY_value, threshold_type);
imwrite("C:\\Documents\\Output testing\\output.jpg", dst);
Ptr<SimpleBlobDetector>detector = SimpleBlobDetector::create(params);
std::vector<KeyPoint> keypoints;
detector->detect(dst, keypoints); // keypoints vector to store the coordinates of the defects, as well as other parameters like size,etc..
//detector.detect(defect_inv, keypoints);
Mat blob_bottle;
drawKeypoints(dst, keypoints, blob_bottle, Scalar(0, 0, 255), DrawMatchesFlags::DRAW_RICH_KEYPOINTS); // drawing a red circle around the defect on the original masked corrected image
imwrite("C:\\Documents\\Output testing\\BlobTest.jpg", blob_bottle);
This is the output I'm getting: https://imgur.com/a/Xq5DyvT . You can see a dark hole; that's the one I am supposed to detect. But changing the features, which should work in theory, does not produce the corresponding result in practice.
Any help? I also cannot find clear documentation on the blob topic.
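One thing worth double-checking (an assumption on my part, since the thresholded image isn't shown): filterByColor compares the intensity at the blob's center, so if the threshold step turns the dark defect white, blobColor = 0 will reject it. A minimal polarity sketch:
// If the defect ends up white after thresholding, either invert the threshold...
threshold(src_gray, dst, threshold_value, max_BINARY_value, THRESH_BINARY_INV); // dark defect -> white on black
// ...or flip the color filter to look for bright blobs instead:
params.blobColor = 255;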
I'm using OpenCV 3.1 to do some blob detection using SimpleBlobDetector, but I'm having no luck, and no tutorial has been able to solve this. My environment is Xcode on x64.
I'm starting out with this image:
Then I'm turning it into greyscale:
Finally I turn it into a binary image and doing the blob detection on this:
I've included "iostream" and "opencv2/opencv.hpp".
using namespace cv;
using namespace std;
Mat img_rgb;
Mat img_gray;
Mat img_keypoints;
Ptr<SimpleBlobDetector> detector = SimpleBlobDetector::create();
vector<KeyPoint> keypoints;
img_rgb = imread("summertriangle.jpg");
//Convert to greyscale
cvtColor(img_rgb, img_gray, CV_RGB2GRAY);
imshow("Grey Scale", img_gray);
// Start by creating the matrix that will allocate the new image
Mat img_bw(img_gray.size(), img_gray.type());
// Apply threshold to convert into a binary image and save to the new matrix
threshold(img_gray, img_bw, 100, 255, THRESH_BINARY);
// Extract coordinates of blobs at their centroids, save to the keypoints variable.
detector->detect(img_bw, keypoints);
cout << "The size of keypoints vector is: " << keypoints.size();
The keypoints vector is always empty. Nothing I've tried works.
So I solved this; I did not read the fine print in the docs. Thanks Dai for the heads-up on the Params, which made me give the docs a closer look.
Default values of parameters are tuned to extract dark circular blobs.
I simply had to do this when creating the SimpleBlobDetector object:
SimpleBlobDetector::Params params;
params.filterByArea = true;
params.minArea = 1;
params.maxArea = 1000;
params.filterByColor = true;
params.blobColor = 255;
Ptr<SimpleBlobDetector> detector = SimpleBlobDetector::create(params);
This did it.
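In hindsight this makes sense: THRESH_BINARY leaves the blobs white on a black background, while (per the quoted docs) the defaults are tuned for dark blobs, so filterByColor with blobColor = 255 flips the detector to match the image.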
I need to detect all whole and half notes in the given image and print all detected notes into a new image. But it seems that the code does not detect the half notes; it only detects the whole notes.
This is the source code I have:
#include "opencv2/opencv.hpp"
using namespace cv;
using namespace std;
int main(int argc, char** argv)
{
// Read image
Mat im = imread("beethoven_ode_to_joy.jpg", IMREAD_GRAYSCALE);
// Setup SimpleBlobDetector parameters.
SimpleBlobDetector::Params params;
// Change thresholds
params.minThreshold = 10;
params.maxThreshold = 200;
// Filter by Area.
params.filterByArea = true;
params.minArea = 25;
// Filter by Circularity
params.filterByCircularity = true;
params.minCircularity = 0.1;
// Filter by Convexity
params.filterByConvexity = true;
params.minConvexity = 0.87;
// Filter by Inertia
params.filterByInertia = true;
params.minInertiaRatio = 0.01;
// Storage for blobs
vector<KeyPoint> keypoints;
#if CV_MAJOR_VERSION < 3 // If you are using OpenCV 2
// Set up detector with params
SimpleBlobDetector detector(params);
// Detect blobs
detector.detect(im, keypoints);
#else
// Set up detector with params
Ptr<SimpleBlobDetector> detector = SimpleBlobDetector::create(params);
// Detect blobs
detector->detect(im, keypoints);
#endif
// Draw detected blobs as red circles.
// DrawMatchesFlags::DRAW_RICH_KEYPOINTS flag ensures
// the size of the circle corresponds to the size of blob
Mat im_with_keypoints;
drawKeypoints(im, keypoints, im_with_keypoints, Scalar(0, 0, 255), DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
// Show blobs
imshow("keypoints", im_with_keypoints);
waitKey(0);
}
Actually, I don't have OpenCV right now, but I tried something to solve this in MATLAB in a short time. First, in this image you will notice that the note heads are darker than the staves. Looking closer, you can see that the centers of the notes have a value of 0 in this image. I suggest converting your RGB image to grayscale and then applying thresholding: keep the pixels whose value equals 0 and discard the rest. The result is shown in this image. Then I think you can apply some morphological operations such as dilation, because the detected note heads will be a little smaller than the originals. If you want to eliminate the upper part of the notes (I mean the stem part), you can detect it with a Hough line transform; OpenCV has functions for this (HoughLines or HoughLinesP). After detection you can delete that part, or skip this step if you prefer. After all that, you can find circular objects in the image with a Hough transform; the HoughCircles function performs this task in OpenCV, and in MATLAB it is a little easier with the imfindcircles function. Finally, you can draw the found circles with the circle function in OpenCV or the viscircles function in MATLAB. The result is here.
Notice that I didn't apply morphological operations to improve the size of the note heads, and I didn't apply a Hough line transform to detect and erase the stem parts. If you apply them, I think you will get a better result.
This algorithm is only a suggestion; you may find a better one by trying other operations.
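For anyone who wants to try this in OpenCV rather than MATLAB, here is a rough C++ sketch of the same pipeline; the threshold value, kernel size, and HoughCircles parameters are guesses that would need tuning per image:
#include "opencv2/opencv.hpp"
using namespace cv;
int main()
{
    Mat gray = imread("beethoven_ode_to_joy.jpg", IMREAD_GRAYSCALE);
    // keep only the near-black note heads (inverted so heads become white)
    Mat bw;
    threshold(gray, bw, 10, 255, THRESH_BINARY_INV);
    // dilate to restore head size lost to thresholding
    dilate(bw, bw, getStructuringElement(MORPH_ELLIPSE, Size(3, 3)));
    // find circular objects; all parameters here are rough guesses
    std::vector<Vec3f> circles;
    HoughCircles(bw, circles, HOUGH_GRADIENT, 1, 10, 100, 10, 3, 10);
    // draw the found circles in red on the original image
    Mat out;
    cvtColor(gray, out, COLOR_GRAY2BGR);
    for (const Vec3f& c : circles)
        circle(out, Point(cvRound(c[0]), cvRound(c[1])), cvRound(c[2]), Scalar(0, 0, 255), 2);
    imshow("note heads", out);
    waitKey(0);
    return 0;
}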
First, some background.
I have written a C++ function that detects an area of a certain color in an RGB image using OpenCV. The function is used to isolate a small colored area using the FeatureDetector SimpleBlobDetector.
The problem is that this function is used in a cross-platform project. On my OS X 10.8 machine using OpenCV in Xcode this works flawlessly. However, when I try to run the same piece of code on Windows using OpenCV in Visual Studio, the code crashes whenever I use:
blobDetector.detect(imgThresh, keypoints)
with an error such as this:
OpenCV Error: Assertion failed (dims <= 2 && data && (unsigned)i0 < (unsigned)size.p[0] && (unsigned)(i1*DataType<_Tp>::channels) < (unsigned)(size.p[1]*channels()) && ((((sizeof(size_t)<<28)|0x8442211) >> ((DataType<_Tp>::depth) & ((1 << 3) - 1))*4) & 15) == elemSize1()) in unknown function, file C:\slave\builds\WinInstallerMegaPack\src\opencv\modules\core\include\opencv2/core/mat.hpp, line 545
This is the only piece of OpenCV code that has given me problems so far. I tried several solutions, like the ones suggested in "Using FeatureDetector in OpenCV gives access violation" and "Access violation reading in FeatureDetector OpenCV 2.4.5", but to no avail.
A workaround for my problem was to add a threshold() call just before the call to .detect(), which appears to make it work. However, I don't like this solution, as it forces me to do something I shouldn't have to (as far as I know), and because for some reason it is not necessary on my Mac.
Question
Can anyone explain why the following line:
threshold(imgThresh, imgThresh, 100, 255, 0);
is necessary on Windows, but not on OS X, just before the call to .detect() in the following code?
Full code snippet:
#include "ColorDetector.h"
using namespace cv;
using namespace std;
Mat ColorDetection(Mat img, Scalar colorMin, Scalar colorMax, double alpha, int beta)
{
initModule_features2d();
initModule_nonfree();
//Define matrices
Mat contrast_img = constrastImage(img, alpha, beta);
Mat imgThresh;
Mat blob;
//Threshold based on color ranges (Blue/Green/Red scalars)
inRange(contrast_img, colorMin, colorMax, imgThresh); //BGR range
//Apply Blur effect to make blobs more coherent
GaussianBlur(imgThresh, imgThresh, Size(3,3), 0);
//Set SimpleBlobDetector parameters
SimpleBlobDetector::Params params;
params.filterByArea = false;
params.filterByCircularity = false;
params.filterByConvexity = false;
params.filterByInertia = false;
params.filterByColor = true;
params.blobColor = 255;
params.minArea = 100.0f;
params.maxArea = 500.0f;
SimpleBlobDetector blobDetector(params);
blobDetector.create("SimpleBlob");
//Vector to store keypoints (center points for a blob)
vector<KeyPoint> keypoints;
//Try blob detection
threshold(imgThresh, imgThresh, 100, 255, 0);
blobDetector.detect(imgThresh, keypoints);
//Draw resulting keypoints
drawKeypoints(img, keypoints, blob, CV_RGB(255,255,0), DrawMatchesFlags::DEFAULT);
return blob;
}
Try using it this way:
Ptr<SimpleBlobDetector> sbd = SimpleBlobDetector::create(params);
vector<cv::KeyPoint> keypoints;
sbd->detect(imgThresh, keypoints);
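For context (my understanding, worth verifying against the exact version you build against): in OpenCV 3.x the SimpleBlobDetector constructor is no longer public, so instances must come from the static create() factory shown above. In the original snippet, blobDetector.create("SimpleBlob") is the old 2.4-style Algorithm factory call; it returns a new detector rather than configuring the one on the stack, so switching to the Ptr returned by create() also removes that dead call.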