Why does the OpenCV DNN-based (Caffe) face detector fail to find faces? - C++

Using OpenCV 4.2.0 in C++ (VS 2019), I created a project that performs face detection on a given image. I used OpenCV's DNN face detector, which uses the res10_300x300_ssd_iter_140000_fp16.caffemodel model to detect faces. Below is the code of that function:
// variables used in the function
const double inScaleFactor = 1.0;
const cv::Scalar meanVal = cv::Scalar(104.0, 177.0, 123.0);
const size_t inWidth = 300;
const size_t inHeight = 300;

std::vector<FaceDetectionResult> namespace_name::FaceDetection::detectFaceByOpenCVDNN(std::string filename, FaceDetectionModel model)
{
    cv::dnn::Net net;
    cv::Mat frame = cv::imread(filename);
    cv::Mat inputBlob;
    std::vector<FaceDetectionResult> vec;

    if (frame.empty())
        throw std::runtime_error("provided image file is not found or unable to open.");

    int frameHeight = frame.rows;
    int frameWidth = frame.cols;

    if (model == FaceDetectionModel::CAFFE)
    {
        net = cv::dnn::readNetFromCaffe(caffeConfigFile, caffeWeightFile);
        inputBlob = cv::dnn::blobFromImage(frame, inScaleFactor, cv::Size(inWidth, inHeight), meanVal, false, false);
    }
    else
    {
        net = cv::dnn::readNetFromTensorflow(tensorflowWeightFile, tensorflowConfigFile);
        inputBlob = cv::dnn::blobFromImage(frame, inScaleFactor, cv::Size(inWidth, inHeight), meanVal, true, false);
    }

    net.setInput(inputBlob, "data");
    cv::Mat detection = net.forward("detection_out");
    cv::Mat detectionMat(detection.size[2], detection.size[3], CV_32F, detection.ptr<float>());

    for (int i = 0; i < detectionMat.rows; i++)
    {
        if (detectionMat.at<float>(i, 2) >= 0.5)
        {
            FaceDetectionResult res;
            res.faceDetected = true;
            res.confidence = detectionMat.at<float>(i, 2);
            res.x1 = static_cast<int>(detectionMat.at<float>(i, 3) * frameWidth);
            res.y1 = static_cast<int>(detectionMat.at<float>(i, 4) * frameHeight);
            res.x2 = static_cast<int>(detectionMat.at<float>(i, 5) * frameWidth);
            res.y2 = static_cast<int>(detectionMat.at<float>(i, 6) * frameHeight);
            vec.push_back(res);
        }
#ifdef aDEBUG
        else
        {
            cout << detectionMat.at<float>(i, 2) << endl;
        }
#endif
    }
    return vec;
}
In the above code, after face detection I assign the confidence and coordinates of each detected face to a custom class FaceDetectionResult, which is a simple class with bool, int, and float members as required.
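For illustration, such a class might look roughly like this (a minimal sketch; only the members used in the function above are shown, and the default values are assumptions):

struct FaceDetectionResult
{
    bool faceDetected = false;  // true when a detection passed the confidence threshold
    float confidence = 0.0f;    // detection score reported by the network
    int x1 = 0, y1 = 0;         // top-left corner of the face box, in pixels
    int x2 = 0, y2 = 0;         // bottom-right corner of the face box, in pixels
};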
The function detects faces in the given image, but while experimenting with it I am comparing it against dlib's HOG+SVM face detector, so first I run face detection with dlib and then pass the same image path to this function.
I found some images where dlib can easily find faces but OpenCV does not find a single one; for example, look at the image below:
As you can see, HOG+SVM detected 46 faces in approximately 3 seconds. If I pass the same image to the function above, OpenCV does not detect a single face. Why? Do I need any enhancements to the above code? I am not saying the function never detects faces, it does, but for some images (like the one above) it does not.
For reference:
I used this Python program to detect faces using dlib: https://pastebin.com/9rt9reNY

After a deep search, unfortunately I couldn't find a good explanation for this problem. The reason I tried cropping the image is that I assumed there might be a maximum limit on the number of detected faces. It is also not about occlusion.
I tried some example images that include more than roughly 20 faces and the results were the same, but when I cropped those images (decreasing the number of faces), the program was able to find the faces. It is also not about the resolution (size) of the image, because the images I tried had different sizes.
I also changed and tried all the parameters (iteration number, confidence threshold, etc.), but the result still wasn't the desired one.
My assumption, but not the answer:
The program does not find the faces if the image includes more than a certain maximum number of them (approximately 20).
As a solution for this question, we can divide the source image into 2 parts, find the rectangles for each one, and then paste them back into the source image.
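A rough sketch of that idea, assuming a hypothetical helper detectFaceOnMat that runs the blob/forward/threshold steps of the function above on a single cv::Mat and returns boxes in that Mat's local coordinates:

// Hypothetical helper (not part of the original code): runs the DNN detection
// from the function above on one cv::Mat.
std::vector<FaceDetectionResult> detectFaceOnMat(const cv::Mat& img);

std::vector<FaceDetectionResult> detectFaceInHalves(const cv::Mat& frame)
{
    std::vector<FaceDetectionResult> all;
    int halfWidth = frame.cols / 2;
    cv::Rect halves[2] = { cv::Rect(0, 0, halfWidth, frame.rows),
                           cv::Rect(halfWidth, 0, frame.cols - halfWidth, frame.rows) };
    for (const cv::Rect& roi : halves)
    {
        std::vector<FaceDetectionResult> part = detectFaceOnMat(frame(roi).clone());
        for (FaceDetectionResult& r : part)
        {
            r.x1 += roi.x;  // shift the boxes back into source-image coordinates
            r.x2 += roi.x;
            all.push_back(r);
        }
    }
    return all;
}

Note that faces straddling the split line may be cut in half or missed entirely, so in practice overlapping tiles are usually preferred.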
Note: After digging deeply on the internet, I couldn't find a topic related to this problem. I am also curious about the main reason causing this issue, so any help will be appreciated. This post only includes my experiences and assumptions.

change this line:
inputBlob = cv::dnn::blobFromImage(frame, inScaleFactor, cv::Size(inWidth, inHeight), meanVal, false, false);
to this line:
inputBlob = cv::dnn::blobFromImage(frame, inScaleFactor, cv::Size(frameWidth, frameHeight), meanVal, false, false);
Passing the original frame size instead of the fixed 300x300 input keeps small faces from disappearing when the image is downscaled, which matches the observation above that cropping the image (making the faces larger relative to the network input) lets them be detected again.

Related

OpenCV DNN face detection in UWP/C++: bad results

I'm using OpenCV and Caffe to perform face detection on some images I receive from a stream. First, I tried it with Python:
prototxt_file = 'deploy.prototxt'
weights_file = 'res10_300x300_ssd_iter_140000.caffemodel'
dnn = cv2.dnn.readNetFromCaffe(prototxt_file, weights_file)

for image in images:
    blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)), 1.0, (300, 300),
                                 (104.0, 177.0, 123.0))
    dnn.setInput(blob)
    detections = dnn.forward()
    for i in range(0, detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        box = detections[0, 0, i, 3:7]
        if confidence > 0.5:
            # Do something
            pass
This works quite well. Now, I want to do the same within a C++ Windows UWP App, so I compiled OpenCV from source for UWP (tried with versions 3.4.1 and 4.3.0). After going through this example I tried the following:
std::string caffeConfigFilePath = "deploy.prototxt";
std::string caffeWeightFilePath = "res10_300x300_ssd_iter_140000.caffemodel";
net = cv::dnn::readNetFromCaffe(caffeConfigFilePath, caffeWeightFilePath);

for (auto& image : images)
{
    cv::Mat imageResized, imageBlob;
    std::vector<cv::Mat> outs;
    cv::resize(image, imageResized, cv::Size(300, 300));
    cv::dnn::blobFromImage(imageResized, imageBlob, 1, cv::Size(300, 300),
                           (104.0, 177.0, 123.0));
    net.setInput(imageBlob, "data");
    net.forward(outs, "detection_out");
    CV_Assert(outs.size() > 0);
    for (size_t k = 0; k < outs.size(); k++)
    {
        float* data = (float*)outs[k].data;
        for (size_t i = 0; i < outs[k].total(); i += 7)
        {
            float confidence = data[i + 2];
            if (confidence > 0.5)
            {
                //Do something
            }
        }
    }
}
This gives me very bad results. I get a lot of detections with a confidence of 1.0, covering the entire image. The face itself, however, is not detected. So I thought I might be reading the output wrong. I also tried the code posted with this question, but the results are the same. I checked everything I could think of (input images in the right format, model correctly loaded, etc.) but could not identify the error.
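For reference, the detection_out blob of this SSD model is a 1x1xNx7 tensor; below is a minimal sketch of reading it the same way as the code in the first question above (it mirrors that code and is not a confirmed fix for the UWP issue):

cv::Mat detection = net.forward("detection_out");
// Each of the N rows is [image_id, label, confidence, x1, y1, x2, y2], coordinates normalized to [0, 1]
cv::Mat detectionMat(detection.size[2], detection.size[3], CV_32F, detection.ptr<float>());
for (int i = 0; i < detectionMat.rows; i++)
{
    float confidence = detectionMat.at<float>(i, 2);
    if (confidence > 0.5f)
    {
        int x1 = static_cast<int>(detectionMat.at<float>(i, 3) * image.cols);
        int y1 = static_cast<int>(detectionMat.at<float>(i, 4) * image.rows);
        int x2 = static_cast<int>(detectionMat.at<float>(i, 5) * image.cols);
        int y2 = static_cast<int>(detectionMat.at<float>(i, 6) * image.rows);
        // Do something with the box
    }
}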
Since the DNN module is usually not included in an OpenCV UWP build (I had to comment out some lines in the CMakeLists.txt, but then it compiled without errors), can it be that using it is just not possible from a UWP app? What else could be the reason the code is working in Python, but almost identical code is not working in C++?

OpenCV SimpleBlobDetector does not find all blobs. C++, VS2015

I have a simple task for OpenCV's SimpleBlobDetector:
cv::SimpleBlobDetector::Params params;
cv::Ptr<cv::SimpleBlobDetector> detector = cv::SimpleBlobDetector::create(params);
std::vector<cv::KeyPoint> keypoints;
detector->detect(crop, keypoints);
drawKeypoints(crop, keypoints, crop, cv::Scalar(0, 0, 255), cv::DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
cv::imshow("crop", crop);
cv::waitKey(0);
It is not detecting half of the blobs in my image.
Please see the picture below.
I tried adding parameters and varying them; at no point has it ever detected every single blob.
Blob detection is a simple and straightforward algorithm that should be completely refined in every image processing API. Is this not the case with OpenCV?
//params.minThreshold = 0;
//params.maxThreshold = 255;
//params.filterByArea = true;
//params.minArea = 1000;
//params.maxArea = 5000;
//params.filterByCircularity = true;
//params.minCircularity = 0.4;
//params.filterByConvexity = true;
//params.minConvexity = 0.87;
//params.filterByInertia = true;
//params.minInertiaRatio = 0.71;
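One thing worth checking, since the image and thresholds are not shown here (so this is an assumption, not a diagnosis): SimpleBlobDetector's default parameters look for dark blobs (blobColor = 0) and also filter by area and convexity, any of which can silently drop blobs. A minimal sketch that widens the threshold sweep, targets bright blobs, and disables the other filters while debugging:

cv::SimpleBlobDetector::Params params;
params.minThreshold = 10;           // sweep a wide range of binarization thresholds
params.maxThreshold = 220;
params.thresholdStep = 10;
params.filterByColor = true;
params.blobColor = 255;             // look for bright blobs; the default (0) targets dark blobs
params.filterByArea = false;        // disable the remaining filters while debugging
params.filterByCircularity = false;
params.filterByConvexity = false;
params.filterByInertia = false;

cv::Ptr<cv::SimpleBlobDetector> detector = cv::SimpleBlobDetector::create(params);
std::vector<cv::KeyPoint> keypoints;
detector->detect(crop, keypoints);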
I'm using either OpenCV 3.3 or 3.2; I can't seem to find the version number in the sources.
I'm not sure if this properly answers my question, but I had to write my own blob detection; it appears that OpenCV's SimpleBlobDetector is not so simple.

How to ignore/remove contours that touch the image boundaries

I have the following code to detect contours in an image using cvThreshold and cvFindContours:
CvMemStorage* storage = cvCreateMemStorage(0);
CvSeq* contours = 0;
cvThreshold( processedImage, processedImage, thresh1, 255, CV_THRESH_BINARY );
nContours = cvFindContours(processedImage, storage, &contours, sizeof(CvContour), CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE, cvPoint(0,0) );
I would like to somehow extend this code to filter/ignore/remove any contours that touch the image boundaries. However I am unsure how to go about this. Should I filter the threshold image or can I filter the contours afterwards? Hope somebody knows an elegant solution, since surprisingly I could not come up with a solution by googling.
Update 2021-11-25
updates code example
fixes bugs with image borders
adds more images
adds Github repo with CMake support to build example app
Full out-of-the-box example can be found here:
C++ application with CMake
General info
I am using OpenCV 3.0.0
Using cv::findContours actually alters the input image, so make sure that you work either on a separate copy specifically for this function or do not further use the image at all
Update 2019-03-07: "Since opencv 3.2 source image is not modified by this function." (see corresponding OpenCV documentation)
General solution
All you need to know of a contour is if any of its points touches the image border. This info can be extracted easily by one of the following two procedures:
Check each point of your contour regarding its location. If it lies at the image border (x = 0 or x = width - 1 or y = 0 or y = height - 1), simply ignore it. (A sketch of this check is given below, after the second option.)
Create a bounding box around the contour. If the bounding box lies along the image border, you know the contour does, too.
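A minimal sketch of the first (point-by-point) check, assuming the contour is given as a std::vector<cv::Point>:

bool contourTouchesImageBorderPointwise(const std::vector<cv::Point>& contour, const cv::Size& imageSize)
{
    for (const cv::Point& p : contour)
    {
        if (p.x == 0 || p.y == 0 || p.x == imageSize.width - 1 || p.y == imageSize.height - 1)
            return true;  // at least one contour point lies on the image border
    }
    return false;
}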
Code for the second solution (CMake):
cmake_minimum_required(VERSION 2.8)
project(SolutionName)
find_package(OpenCV REQUIRED)
set(TARGETNAME "ProjectName")
add_executable(${TARGETNAME} ./src/main.cpp)
include_directories(${CMAKE_CURRENT_BINARY_DIR} ${OpenCV_INCLUDE_DIRS} ${OpenCV2_INCLUDE_DIR})
target_link_libraries(${TARGETNAME} ${OpenCV_LIBS})
Code for the second solution (C++):
bool contourTouchesImageBorder(const std::vector<cv::Point>& contour, const cv::Size& imageSize)
{
    cv::Rect bb = cv::boundingRect(contour);
    bool retval = false;

    int xMin, xMax, yMin, yMax;
    xMin = 0;
    yMin = 0;
    xMax = imageSize.width - 1;
    yMax = imageSize.height - 1;

    // Use less/greater comparisons to potentially support contours outside of
    // image coordinates, possible future workarounds with cv::copyMakeBorder where
    // contour coordinates may be shifted, and just to be safe.
    // However, note that bounding boxes of size 1 will have their start point
    // included (of course) but also their width/height values set to 1,
    // yet they should not contain 2 pixels.
    // This is why we have to -1 the "search grid".
    int bbxEnd = bb.x + bb.width - 1;
    int bbyEnd = bb.y + bb.height - 1;

    if (bb.x <= xMin ||
        bb.y <= yMin ||
        bbxEnd >= xMax ||
        bbyEnd >= yMax)
    {
        retval = true;
    }
    return retval;
}
Call it via:
...
cv::Size imageSize = processedImage.size();
for (auto c : contours)
{
    if (contourTouchesImageBorder(c, imageSize))
    {
        // Do your thing...
        int asdf = 0;
    }
}
...
Full C++ example:
void testContourBorderCheck()
{
    std::vector<std::string> filenames =
    {
        "0_single_pixel_top_left.png",
        "1_left_no_touch.png",
        "1_left_touch.png",
        "2_right_no_touch.png",
        "2_right_touch.png",
        "3_top_no_touch.png",
        "3_top_touch.png",
        "4_bot_no_touch.png",
        "4_bot_touch.png"
    };

    // Load example image
    //std::string path = "C:/Temp/!Testdata/ContourBorderDetection/test_1/";
    std::string path = "../Testdata/ContourBorderDetection/test_1/";

    for (int i = 0; i < filenames.size(); ++i)
    {
        //std::string filename = "circle3BorderDistance0.png";
        std::string filename = filenames.at(i);
        std::string fqn = path + filename;
        cv::Mat img = cv::imread(fqn, cv::IMREAD_GRAYSCALE);
        cv::Mat processedImage;
        img.copyTo(processedImage);

        // Create copy for contour extraction since cv::findContours alters the input image
        cv::Mat workingCopyForContourExtraction;
        processedImage.copyTo(workingCopyForContourExtraction);
        std::vector<std::vector<cv::Point>> contours;

        // Extract contours
        cv::findContours(workingCopyForContourExtraction, contours, cv::RetrievalModes::RETR_EXTERNAL, cv::ContourApproximationModes::CHAIN_APPROX_SIMPLE);

        // Prepare image for contour drawing
        cv::Mat drawing;
        processedImage.copyTo(drawing);
        cv::cvtColor(drawing, drawing, cv::COLOR_GRAY2BGR);

        // Draw contours
        cv::drawContours(drawing, contours, -1, cv::Scalar(255, 255, 0), 1);

        //cv::imwrite(path + "processedImage.png", processedImage);
        //cv::imwrite(path + "workingCopyForContourExtraction.png", workingCopyForContourExtraction);
        //cv::imwrite(path + "drawing.png", drawing);

        const auto imageSize = img.size();
        bool liesOnBorder = contourTouchesImageBorder(contours.at(0), imageSize);
        // std::cout << "lies on border: " << std::to_string(liesOnBorder);
        std::cout << filename << " lies on border: "
            << liesOnBorder;
        std::cout << std::endl;
        std::cout << std::endl;

        cv::imshow("processedImage", processedImage);
        cv::imshow("workingCopyForContourExtraction", workingCopyForContourExtraction);
        cv::imshow("drawing", drawing);
        cv::waitKey();

        //cv::Size imageSize = workingCopyForContourExtraction.size();
        for (auto c : contours)
        {
            if (contourTouchesImageBorder(c, imageSize))
            {
                // Do your thing...
                int asdf = 0;
            }
        }
    }
}
int main(int argc, char** argv)
{
    testContourBorderCheck();
    return 0;
}
Problem with contour detection near image borders
OpenCV seems to have a problem with correctly finding contours near image borders.
For both objects, the detected contour is the same (see images). However, in image 2 the detected contour is not correct since a part of the object lies along x = 0, but the contour lies in x = 1.
This seems like a bug to me.
There is an open issue regarding this here: https://github.com/opencv/opencv/pull/7516
There also seems to be a workaround with cv::copyMakeBorder (https://github.com/opencv/opencv/issues/4374), however it seems a bit complicated.
If you can be a bit patient, I'd recommend waiting for the release of OpenCV 3.2 which should happen within the next 1-2 months.
New example images:
Single pixel top left, objects left, right, top, bottom, each touching and not touching (1px distance)
Example images
Object touching image border
Object not touching image border
Contour for object touching image border
Contour for object not touching image border
Although this question is in C++, the same issue affects OpenCV in Python. A solution to the OpenCV '0-pixel' border issue in Python (which can likely be used in C++ as well) is to pad the image with 1 pixel on each border, then call OpenCV with the padded image, and then remove the border afterwards. Something like:
img2 = np.pad(img.copy(), ((1,1), (1,1), (0,0)), 'edge')
# call openCV with img2, it will set all the border pixels in our new pad with 0
# now get rid of our border
img = img2[1:-1,1:-1,:]
# img is now set to the original dimensions, and the contours can be at the edge of the image
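A roughly equivalent C++ sketch of the same padding workaround, using cv::copyMakeBorder (an illustration, assuming img is already a binary image suitable for cv::findContours):

cv::Mat padded;
cv::copyMakeBorder(img, padded, 1, 1, 1, 1, cv::BORDER_REPLICATE);

std::vector<std::vector<cv::Point>> contours;
cv::findContours(padded, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

// Shift the contour points back into the coordinate frame of the original image
for (auto& contour : contours)
    for (auto& p : contour)
        p -= cv::Point(1, 1);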
If anyone needs this in MATLAB, here is the function.
function [touch] = componentTouchesImageBorder(C,im_row_max,im_col_max)
%C is a bwconncomp instance
touch=0;
S = regionprops(C,'PixelList');
c_row_max = max(S.PixelList(:,1));
c_row_min = min(S.PixelList(:,1));
c_col_max = max(S.PixelList(:,2));
c_col_min = min(S.PixelList(:,2));
if (c_row_max==im_row_max || c_row_min == 1 || c_col_max == im_col_max || c_col_min == 1)
touch = 1;
end
end

How to generate a valid point cloud representation of a pair of stereo images using OpenCV 3.0 StereoSGBM and PCL

I have recently started working with OpenCV 3.0 and my goal is to capture a pair of stereo images from a set of stereo cameras, create a proper disparity map, convert the disparity map to a 3D point cloud and finally show the resulting point cloud in a point-cloud viewer using PCL.
I have already performed the camera calibration and the resulting calibration RMS is 0.4.
You can find my image pair (Left Image and Right Image) in the links below. I am using StereoSGBM in order to create the disparity image. I am also using track-bars to adjust the StereoSGBM function parameters in order to obtain a better disparity image. Unfortunately I can't post my disparity image since I am new to StackOverflow and don't have enough reputation to post more than two image links!
After getting the disparity image ("disp" in the code below), I use the reprojectImageTo3D() function to convert the disparity image information to XYZ 3D coordinate, and then I convert the results into an array of "pcl::PointXYZRGB" points so they can be shown in a PCL point cloud viewer. After performing the required conversion, what I get as a point cloud is a silly pyramid shape point-cloud which does not make any sense. I have already read and tried all of the suggested methods in the following links:
1- http://blog.martinperis.com/2012/01/3d-reconstruction-with-opencv-and-point.html
2- http://stackoverflow.com/questions/13463476/opencv-stereorectifyuncalibrated-to-3d-point-cloud
3- http://stackoverflow.com/questions/22418846/reprojectimageto3d-in-opencv
and none of them worked!
Below I provided the conversion portion of my code, it would be greatly appreciated if you could tell me what I am missing:
pcl::PointCloud<pcl::PointXYZRGB>::Ptr pointcloud(new pcl::PointCloud<pcl::PointXYZRGB>());
Mat xyz;
reprojectImageTo3D(disp, xyz, Q, false, CV_32F);
pointcloud->width = static_cast<uint32_t>(disp.cols);
pointcloud->height = static_cast<uint32_t>(disp.rows);
pointcloud->is_dense = false;
pcl::PointXYZRGB point;
for (int i = 0; i < disp.rows; ++i)
{
    uchar* rgb_ptr = Frame_RGBRight.ptr<uchar>(i);
    uchar* disp_ptr = disp.ptr<uchar>(i);
    double* xyz_ptr = xyz.ptr<double>(i);
    for (int j = 0; j < disp.cols; ++j)
    {
        uchar d = disp_ptr[j];
        if (d == 0) continue;
        Point3f p = xyz.at<Point3f>(i, j);
        point.z = p.z; // I have also tried p.z/16
        point.x = p.x;
        point.y = p.y;
        point.b = rgb_ptr[3 * j];
        point.g = rgb_ptr[3 * j + 1];
        point.r = rgb_ptr[3 * j + 2];
        pointcloud->points.push_back(point);
    }
}
viewer.showCloud(pointcloud);
After doing some work and some research I found my answer, and I am sharing it here so other readers can use it.
Nothing was wrong with the conversion algorithm from the disparity image to 3D XYZ (and eventually to a point cloud). The problem was the distance of the objects (that I was taking pictures of) to the cameras and the amount of information that was available for the StereoBM or StereoSGBM algorithms to detect similarities between the two images (image pair). In order to get a proper 3D point cloud it is required to have a good disparity image, and in order to have a good disparity image (assuming you have performed good calibration) make sure of the following:
1- There should be enough detectable and distinguishable common features available between the two frames (right and left frames). The reason is that the StereoBM and StereoSGBM algorithms look for common features between the two frames, and they can easily be fooled by similar-looking things in the two frames which may not necessarily belong to the same objects. I personally think these two matching algorithms have lots of room for improvement. So beware of what you are looking at with your cameras.
2- Objects of interest (the ones whose 3D point cloud model you want) should be within a certain distance of your cameras. The larger the baseline is (the baseline is the distance between the two cameras), the further away your objects of interest (targets) can be (see the small sketch below).
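For intuition, the pinhole relation behind this trade-off for a rectified pair is depth = focal_length_px * baseline / disparity, so a larger baseline keeps the disparity measurable at larger distances. A tiny illustration (the numbers are made up):

// Depth (in the same unit as the baseline) from disparity, for a rectified stereo pair
double depthFromDisparity(double focalLengthPx, double baselineMeters, double disparityPx)
{
    return focalLengthPx * baselineMeters / disparityPx;  // e.g. 700 px * 0.1 m / 14 px = 5 m
}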
A noisy and distorted disparity image never generates a good 3D point cloud. One thing you can do to improve your disparity images is to use track-bars in your application so you can adjust the StereoBM or StereoSGBM parameters until you see good results (a clear and smooth disparity image). The code below is a small and simple example of how to generate track-bars (I wrote it as simply as possible). Use as required:
int PreFilterType = 0, PreFilterCap = 0, MinDisparity = 0, UniqnessRatio = 0, TextureThreshold = 0,
    SpeckleRange = 0, SADWindowSize = 5, SpackleWindowSize = 0, numDisparities = 0, numDisparities2 = 0, PreFilterSize = 5;

Ptr<StereoBM> sbm = StereoBM::create(numDisparities, SADWindowSize);

while (1)
{
    sbm->setPreFilterType(PreFilterType);
    sbm->setPreFilterSize(PreFilterSize);
    sbm->setPreFilterCap(PreFilterCap + 1);
    sbm->setMinDisparity(MinDisparity - 100);
    sbm->setTextureThreshold(TextureThreshold * 0.0001);
    sbm->setSpeckleRange(SpeckleRange);
    sbm->setSpeckleWindowSize(SpackleWindowSize);
    sbm->setUniquenessRatio(0.01 * UniqnessRatio);
    sbm->setSmallerBlockSize(15);
    sbm->setDisp12MaxDiff(32);

    namedWindow("Track Bar Window", CV_WINDOW_NORMAL);
    cvCreateTrackbar("Pre Filter Type", "Track Bar Window", &PreFilterType, 1, 0);
    cvCreateTrackbar("Pre Filter Size", "Track Bar Window", &PreFilterSize, 100);
    cvCreateTrackbar("Pre Filter Cap", "Track Bar Window", &PreFilterCap, 61);
    cvCreateTrackbar("Minimum Disparity", "Track Bar Window", &MinDisparity, 200);
    cvCreateTrackbar("Uniqueness Ratio", "Track Bar Window", &UniqnessRatio, 2500);
    cvCreateTrackbar("Texture Threshold", "Track Bar Window", &TextureThreshold, 10000);
    cvCreateTrackbar("Speckle Range", "Track Bar Window", &SpeckleRange, 500);
    cvCreateTrackbar("Block Size", "Track Bar Window", &SADWindowSize, 100);
    cvCreateTrackbar("Speckle Window Size", "Track Bar Window", &SpackleWindowSize, 200);
    cvCreateTrackbar("Number of Disparities", "Track Bar Window", &numDisparities, 500);

    if (PreFilterSize % 2 == 0)
    {
        PreFilterSize = PreFilterSize + 1;
    }
    if (PreFilterSize < 5)
    {
        PreFilterSize = 5;
    }
    if (SADWindowSize % 2 == 0)
    {
        SADWindowSize = SADWindowSize + 1;
    }
    if (SADWindowSize < 5)
    {
        SADWindowSize = 5;
    }
    if (numDisparities % 16 != 0)
    {
        numDisparities = numDisparities + (16 - numDisparities % 16);
    }
}
If you are not getting proper results and a smooth disparity image, don't get disappointed. Try using the OpenCV sample images (the ones with an orange desk lamp in them) with your algorithm to make sure you have the correct pipeline, and then try taking pictures from different distances and play with the StereoBM/StereoSGBM parameters until you can get something useful. I used my own face for this purpose and, since I had a very small baseline, I came very close to my cameras (here is a link to my 3D face point-cloud picture, and hey, don't you dare laugh!). I was very happy to see myself in 3D point-cloud form after a week of struggling. I have never been this happy to see myself before! ;)

cv::SimpleBlobDetector detect() produce access violation exception in Visual Studio 2010

First some background
I have written a C++ function that detects an area of a certain color in an RGB image using OpenCV. The function is used to isolate a small colored area using the FeatureDetector: SimpleBlobDetector.
The problem I have is that this function is used in a cross-platform project. On my OSX 10.8 machine, using OpenCV in Xcode, this works flawlessly. However, when I try to run the same piece of code on Windows using OpenCV in Visual Studio, the code crashes whenever I use:
blobDetector.detect(imgThresh, keypoints)
with an error such as this:
OpenCV Error: Assertion failed (dims <= 2 && data && (unsigned)i0 < (unsigned)size.p[0] && (unsigned)(i1*DataType<_Tp>::channels) < (unsigned)(size.p[1]*channels()) && ((((sizeof(size_t)<<28)|0x8442211) >> ((DataType<_Tp>::depth) & ((1 << 3) - 1))*4) & 15) == elemSize1()) in unknown function, file C:\slave\builds\WinInstallerMegaPack\src\opencv\modules\core\include\opencv2/core/mat.hpp, line 545
This is the only piece of OpenCV code that has given me problems so far. I tried several solutions, like the ones suggested here: Using FeatureDetector in OpenCV gives access violation and Access violation reading in FeatureDetector OpenCV 2.4.5. But to no avail.
A partial solution to my problem was to add a threshold() call just before my call to .detect(), which appears to make it work. However, I don't like this solution, as it forces me to do something I shouldn't have to (as far as I know) and because it is not necessary on my Mac for some reason.
Question
Can anyone explain why the following line:
threshold(imgThresh, imgThresh, 100, 255, 0);
is necessary on Windows, but not on OSX, just before the call to .detect() in the following code?
Full code snippet:
#include "ColorDetector.h"
using namespace cv;
using namespace std;
Mat ColorDetection(Mat img, Scalar colorMin, Scalar colorMax, double alpha, int beta)
{
    initModule_features2d();
    initModule_nonfree();

    //Define matrices
    Mat contrast_img = constrastImage(img, alpha, beta);
    Mat imgThresh;
    Mat blob;

    //Threshold based on color ranges (Blue/Green/Red scalars)
    inRange(contrast_img, colorMin, colorMax, imgThresh); //BGR range

    //Apply Blur effect to make blobs more coherent
    GaussianBlur(imgThresh, imgThresh, Size(3,3), 0);

    //Set SimpleBlobDetector parameters
    SimpleBlobDetector::Params params;
    params.filterByArea = false;
    params.filterByCircularity = false;
    params.filterByConvexity = false;
    params.filterByInertia = false;
    params.filterByColor = true;
    params.blobColor = 255;
    params.minArea = 100.0f;
    params.maxArea = 500.0f;

    SimpleBlobDetector blobDetector(params);
    blobDetector.create("SimpleBlob");

    //Vector to store keypoints (center points for a blob)
    vector<KeyPoint> keypoints;

    //Try blob detection
    threshold(imgThresh, imgThresh, 100, 255, 0);
    blobDetector.detect(imgThresh, keypoints);

    //Draw resulting keypoints
    drawKeypoints(img, keypoints, blob, CV_RGB(255,255,0), DrawMatchesFlags::DEFAULT);
    return blob;
}
Try using it this way:
Ptr<SimpleBlobDetector> sbd = SimpleBlobDetector::create(params);
vector<cv::KeyPoint> keypoints;
sbd->detect(imgThresh, keypoints);