I'm using OpenCV 3.1 to do some blob detection with SimpleBlobDetector, but I'm having no luck and no tutorial has been able to solve this. My environment is Xcode on x64.
I'm starting out with this image:
Then I'm turning it into greyscale:
Finally, I turn it into a binary image and do the blob detection on that:
I've included <iostream> and <opencv2/opencv.hpp>.
using namespace cv;
using namespace std;
Mat img_rgb;
Mat img_gray;
Mat img_keypoints;
Ptr<SimpleBlobDetector> detector = SimpleBlobDetector::create();
vector<KeyPoint> keypoints;
img_rgb = imread("summertriangle.jpg");
//Convert to greyscale
cvtColor(img_rgb, img_gray, COLOR_BGR2GRAY); // imread loads BGR, so convert from BGR
imshow("Grey Scale", img_gray);
// Start by creating the matrix that will allocate the new image
Mat img_bw(img_gray.size(), img_gray.type());
// Apply threshold to convert into a binary image and save to the new matrix
threshold(img_gray, img_bw, 100, 255, THRESH_BINARY);
// Extract coordinates of blobs at their centroids, save to the keypoints variable.
detector->detect(img_bw, keypoints);
cout << "The size of keypoints vector is: " << keypoints.size() << endl;
The keypoints vector is always empty. Nothing I've tried works.
So I solved this; I hadn't read the fine print in the docs. Thanks, Dai, for the heads-up on the Params; it made me give the docs a closer look.
From the docs: "Default values of parameters are tuned to extract dark circular blobs."
I had to simply do this when creating the SimpleBlobDetector object:
SimpleBlobDetector::Params params;
params.filterByArea = true;
params.minArea = 1;
params.maxArea = 1000;
params.filterByColor = true;
params.blobColor = 255;
Ptr<SimpleBlobDetector> detector = SimpleBlobDetector::create(params);
This did it.
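For completeness, a minimal sketch that puts the fix together with the original pipeline (same file name and threshold as above; a sketch rather than tested code):
#include <iostream>
#include "opencv2/opencv.hpp"
using namespace cv;
using namespace std;
int main()
{
    Mat img_gray = imread("summertriangle.jpg", IMREAD_GRAYSCALE);
    // Binarise: bright regions become white blobs on a black background
    Mat img_bw;
    threshold(img_gray, img_bw, 100, 255, THRESH_BINARY);
    // Tell the detector to look for light blobs instead of the default dark ones
    SimpleBlobDetector::Params params;
    params.filterByArea = true;
    params.minArea = 1;
    params.maxArea = 1000;
    params.filterByColor = true;
    params.blobColor = 255;
    Ptr<SimpleBlobDetector> detector = SimpleBlobDetector::create(params);
    vector<KeyPoint> keypoints;
    detector->detect(img_bw, keypoints);
    cout << "The size of keypoints vector is: " << keypoints.size() << endl;
    return 0;
}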
I have a simple task for OpenCV SimpleBlobDetector
cv::SimpleBlobDetector::Params params;
cv::Ptr<cv::SimpleBlobDetector> detector = cv::SimpleBlobDetector::create(params);
std::vector<cv::KeyPoint> keypoints;
detector->detect(crop, keypoints);
drawKeypoints(crop, keypoints, crop, cv::Scalar(0, 0, 255), cv::DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
cv::imshow("crop", crop);
cv::waitKey(0);
It is not detecting half of the blobs in my image.
Please see the picture below.
I tried adding parameters and varying them; at no point has it ever detected every single blob.
Blob detection is a simple and straightforward algorithm that ought to be well refined in any image-processing API. Is this not the case with OpenCV?
//params.minThreshold = 0;
//params.maxThreshold = 255;
//params.filterByArea = true;
//params.minArea = 1000;
//params.maxArea = 5000;
//params.filterByCircularity = true;
//params.minCircularity = 0.4;
//params.filterByConvexity = true;
//params.minConvexity = 0.87;
//params.filterByInertia = true;
//params.minInertiaRatio = 0.71;
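For reference, a sketch of some additional parameters that can hide blobs (the values here are guesses, not from the post): SimpleBlobDetector thresholds the image at several levels and merges nearby candidates, so even with the shape filters disabled, blobs can be dropped by the repeatability and minimum-distance settings.
cv::SimpleBlobDetector::Params params;
params.minThreshold = 10;          // first threshold level that is tried
params.maxThreshold = 220;         // last threshold level
params.thresholdStep = 10;         // spacing between the threshold levels
params.minRepeatability = 2;       // a blob must survive at least this many levels
params.minDistBetweenBlobs = 10;   // centres closer than this are merged into one blob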
I'm using either OpenCV 3.3 or 3.2; I can't seem to find the version number in the sources.
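For what it's worth, one way to check which version you are actually building against is to print the CV_VERSION macro (a small standalone check, not part of the project code):
#include <iostream>
#include <opencv2/core/version.hpp>
int main()
{
    // CV_VERSION expands to the full version string, e.g. "3.3.0"
    std::cout << "OpenCV version: " << CV_VERSION << std::endl;
    return 0;
}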
I'm not sure if this properly answers my question, but I ended up writing my own blob detection; it appears that OpenCV's SimpleBlobDetector is not so simple.
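For anyone landing here, a minimal sketch (not the poster's actual code) of the kind of hand-rolled alternative that works on a binary image, using connectedComponentsWithStats:
#include <opencv2/opencv.hpp>
#include <vector>
// Returns the centroids of connected white regions with an area of at least minArea
std::vector<cv::Point2d> detectBlobs(const cv::Mat& binary, int minArea)
{
    cv::Mat labels, stats, centroids;
    int n = cv::connectedComponentsWithStats(binary, labels, stats, centroids);
    std::vector<cv::Point2d> centers;
    for (int i = 1; i < n; ++i) // label 0 is the background
    {
        if (stats.at<int>(i, cv::CC_STAT_AREA) >= minArea)
            centers.emplace_back(centroids.at<double>(i, 0),
                                 centroids.at<double>(i, 1));
    }
    return centers;
}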
I am using C++ and OpenCV in combination with ROS. I use live images from my camera (an Intel RealSense R200). I get depth and RGB images from the camera, and in my C++ code I want to use these images to get odometry data and build a trajectory out of it.
I am trying to use the "cv::rgbd::Odometry::compute" function for the odometry, but I always get false as the return value (the "isSuccess" value in the code is always 0), and I don't know which part I am doing wrong.
I read my images from the camera using ROS, and in the Callback function I first convert all images to grayscale and then use the Surf function to detect the features. Then I want to use "compute" to get the transformation between the current and previous frame.
As far as I understood, "Rt" and "initRt" are outputs of the function, so it should be enough to construct them with the correct size.
Can anyone see the problem? Am I missing anything?
boost::shared_ptr<rgbd::Odometry> odom;
Mat Rt = Mat(4,4, CV_64FC1);
Mat initRt = Mat(4,4, CV_64FC1);
Mat prevFtrM; //mask Matrix of previous image
Mat currFtrM; //mask Matrix of current image
Mat tempFtrM;
Mat imgprev;// previous depth image
Mat imgcurr;// current depth image
Mat imgprevC;// previous colored image
Mat imgcurrC;// current colored image
void Surf(Mat img) // detect features of the img and fill currFtrM
{
int minHessian = 400;
Ptr<SURF> detector = SURF::create( minHessian );
vector<KeyPoint> keypoints_1;
currFtrM = Mat::zeros(img.size(), CV_8U); // type of mask is CV_8U
Mat roi(currFtrM, cv::Rect(0,0,img.size().width,img.size().height));
roi = Scalar(255, 255, 255);
detector->detect( img, keypoints_1, currFtrM );
Mat img_keypoints_1;
drawKeypoints( img, keypoints_1, img_keypoints_1, Scalar::all(-1), DrawMatchesFlags::DEFAULT );
//-- Show detected (drawn) keypoints
imshow("Keypoints 1", img_keypoints_1 );
}
void Callback(const sensor_msgs::ImageConstPtr& clr, const sensor_msgs::ImageConstPtr& dpt)
{
if(!imgcurr.data || !imgcurrC.data) // first frame
{
// depth image
imgcurr = cv_bridge::toCvShare(dpt, sensor_msgs::image_encodings::TYPE_32FC1)->image;
// colored image
imgcurrC = cv_bridge::toCvShare(clr, "bgr8")->image;
cvtColor(imgcurrC, imgcurrC, COLOR_BGR2GRAY);
//find features in the image
Surf(imgcurrC);
prevFtrM = currFtrM;
//scale color image to size of depth image
resize(imgcurrC,imgcurrC, imgcurr.size());
return;
}
odom = boost::make_shared<rgbd::RgbdOdometry>(imgcurrC, Odometry::DEFAULT_MIN_DEPTH(), Odometry::DEFAULT_MAX_DEPTH(), Odometry::DEFAULT_MAX_DEPTH_DIFF(), std::vector< int >(), std::vector< float >(), Odometry::DEFAULT_MAX_POINTS_PART(), Odometry::RIGID_BODY_MOTION);
// depth image
imgprev = imgcurr;
imgcurr = cv_bridge::toCvShare(dpt, sensor_msgs::image_encodings::TYPE_32FC1)->image;
// colored image
imgprevC = imgcurrC;
imgcurrC = cv_bridge::toCvShare(clr, "bgr8")->image;
cvtColor(imgcurrC, imgcurrC, COLOR_BGR2GRAY);
//scale color image to size of depth image
resize(imgcurrC,imgcurrC, imgcurr.size());
cv::imshow("Color resized", imgcurrC);
tempFtrM = currFtrM;
//detect new features in imgcurrC and save in a vector<Point2f>
Surf( imgcurrC);
prevFtrM = tempFtrM;
//set camera matrix (intrinsics)
float vals[] = {619.137635, 0., 304.793791, 0., 625.407449, 223.984030, 0., 0., 1.};
const Mat cameraMatrix = Mat(3, 3, CV_32FC1, vals);
odom->setCameraMatrix(cameraMatrix);
bool isSuccess = odom->compute( imgprevC, imgprev, prevFtrM, imgcurrC, imgcurr, currFtrM, Rt, initRt );
if(isSuccess)
cout << "isSuccess " << isSuccess << endl;
}
Update: I calibrated my camera and replaced the camera matrix with real values.
A bit late, but it could still be useful for someone.
It seems to me that you are missing the extrinsic calibration in the calculation: in my experiments, the R200 has a translation component between the RGB and depth cameras that you are not taking into account.
Furthermore, looking at the camera parameters, depth and RGB have different intrinsics, and the color frame has a MODIFIED_BROWN_CONRADY lens distortion (though it is minimal). Are you undistorting for that?
Of course, I could be wrong if you already do all those steps and save the registered RGB and depth to files.
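For the undistortion part, a minimal sketch of what that could look like with cv::undistort, reusing the intrinsics from the question (the distortion coefficients below are placeholders you would have to fill in from your calibration):
cv::Mat K = (cv::Mat_<double>(3, 3) << 619.137635, 0., 304.793791,
                                       0., 625.407449, 223.984030,
                                       0., 0., 1.);
// placeholder coefficients: take the real ones from the RGB camera calibration
cv::Mat distCoeffs = (cv::Mat_<double>(1, 5) << 0.0, 0.0, 0.0, 0.0, 0.0);
cv::Mat colorUndistorted;
cv::undistort(imgcurrC, colorUndistorted, K, distCoeffs);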
I need to detect all the whole and half notes in the given image and draw all the detected notes into a new image. But it seems that the code only detects the whole notes, not the half notes.
This is the source code I have:
#include "opencv2/opencv.hpp"
using namespace cv;
using namespace std;
int main(int argc, char** argv)
{
// Read image
Mat im = imread("beethoven_ode_to_joy.jpg", IMREAD_GRAYSCALE);
// Setup SimpleBlobDetector parameters.
SimpleBlobDetector::Params params;
// Change thresholds
params.minThreshold = 10;
params.maxThreshold = 200;
// Filter by Area.
params.filterByArea = true;
params.minArea = 25;
// Filter by Circularity
params.filterByCircularity = true;
params.minCircularity = 0.1;
// Filter by Convexity
params.filterByConvexity = true;
params.minConvexity = 0.87;
// Filter by Inertia
params.filterByInertia = true;
params.minInertiaRatio = 0.01;
// Storage for blobs
vector<KeyPoint> keypoints;
#if CV_MAJOR_VERSION < 3 // If you are using OpenCV 2
// Set up detector with params
SimpleBlobDetector detector(params);
// Detect blobs
detector.detect(im, keypoints);
#else
// Set up detector with params
Ptr<SimpleBlobDetector> detector = SimpleBlobDetector::create(params);
// Detect blobs
detector->detect(im, keypoints);
#endif
// Draw detected blobs as red circles.
// DrawMatchesFlags::DRAW_RICH_KEYPOINTS flag ensures
// the size of the circle corresponds to the size of blob
Mat im_with_keypoints;
drawKeypoints(im, keypoints, im_with_keypoints, Scalar(0, 0, 255), DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
// Show blobs
imshow("keypoints", im_with_keypoints);
waitKey(0);
}
Actually, I don't have OpenCV available right now, but I tried something in MATLAB to solve this quickly. First, you will notice in this image that the note heads are darker than the staves; looking more closely, the centers of the note heads have a value of 0. I suggest converting your RGB image to grayscale and then applying a threshold: keep the pixels whose value is equal to 0 and discard the rest. The result is shown in this image. Then I think you can apply some morphological operations such as dilation, because the detected note heads will be a little smaller than the originals. If you want to eliminate the upper parts of the notes (I mean the stems), you can detect them with a Hough line transform; OpenCV has functions for this (HoughLines or HoughLinesP). After detecting them you can delete that part, or skip this step if you don't need it. Finally, you can find circular objects in the image with a Hough transform: the HoughCircles function performs this task in OpenCV, and in MATLAB it is a little easier with the imfindcircles function. You can then draw the found circles with the circle function in OpenCV or the viscircles function in MATLAB. The result is here.
Notice that I didn't apply morphological operations to improve the size of the note heads, and I didn't apply the Hough line transform to detect and erase the stems. If you apply them, I think you will get a better result.
This algorithm is only a suggestion; you may find a better one by trying some other operations.
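For reference, a rough C++ sketch of the suggested pipeline (threshold, dilation, Hough circles); the threshold value and the Hough parameters are guesses and would need tuning for the actual sheet:
#include "opencv2/opencv.hpp"
using namespace cv;
using namespace std;
int main()
{
    Mat gray = imread("beethoven_ode_to_joy.jpg", IMREAD_GRAYSCALE);
    // Keep only the near-black pixels (the note heads are darker than the staves)
    Mat bw;
    threshold(gray, bw, 10, 255, THRESH_BINARY_INV);
    // Grow the note heads a little, since they come out slightly smaller
    dilate(bw, bw, getStructuringElement(MORPH_ELLIPSE, Size(3, 3)));
    // Look for small circles; the radius range depends on the image scale
    vector<Vec3f> circles;
    HoughCircles(bw, circles, HOUGH_GRADIENT, 1, 10, 100, 15, 3, 10);
    // Draw the detected note heads in red on a colour copy
    Mat out;
    cvtColor(gray, out, COLOR_GRAY2BGR);
    for (const Vec3f& c : circles)
        circle(out, Point(cvRound(c[0]), cvRound(c[1])), cvRound(c[2]), Scalar(0, 0, 255), 2);
    imshow("note heads", out);
    waitKey(0);
    return 0;
}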
I've perused this site for an explanation but to no avail... hopefully someone knows the answer.
I'm using simpleBlobDetector to track some blobs. I would like to specify a mask via the detect method, but for some reason the mask doesn't seem to work - my keypoints show up for the whole image. Here are some snippets of my code:
Mat currFrame;
Mat mask;
Mat roi;
cv::Ptr<cv::FeatureDetector> blob_detector = new cv::SimpleBlobDetector(params);//custom set of params I've left out for legibility
blob_detector->create("SimpleBlob");
vector<cv::KeyPoint> myblob;
while(true)
{
captured >> currFrame; // get a new frame from camera >> is grab and retrieve in one go, note grab does not allow frame to be modified but edges can be
// do nothing if frame is empty
if(currFrame.empty())
{
break;
}
/******************** make mask***********************/
mask = Mat::zeros(currFrame.size(),CV_8U);
roi = Mat(mask,Rect(400,400,400,400));
roi = 255;
/******************** image cleanup with some filters*/
GaussianBlur(currFrame,currFrame, Size(5,5), 1.5, 1.5);
cv::medianBlur(currFrame,currFrame,3);
blob_detector->detect(fgMaskMOG,myblob,mask);//fgMaskMOG is currFrame after some filtering and background subtraction
cv::drawKeypoints(fgMaskMOG,myblob,fgMaskMOG,Scalar::all(-1), DrawMatchesFlags::DRAW_RICH_KEYPOINTS );
imshow("mogForeground", fgMaskMOG);
imshow("original", currFrame);
imshow("mask",mask);
if(waitKey(1) != -1)
break;
}
The thing is, I confirmed that my mask is correctly made by using SurfFeatureDetector as described here (OpenCV: howto use mask parameter for feature point detection (SURF)). If anyone can see what's wrong with my mask, I'd really appreciate the help. Sorry about the messy code!
I had the same issue and couldn't find the solution, so I solved it by checking the mask myself:
blob_detector->detect(img, keypoints);
std::vector<cv::KeyPoint> keypoints_in_range;
for (const cv::KeyPoint &kp : keypoints)
    if (mask.at<uchar>(kp.pt) > 0)  // read the CV_8U mask as uchar, not (signed) char
        keypoints_in_range.push_back(kp);
I found this code in OpenCV 2.4.8:
void SimpleBlobDetector::detectImpl(const cv::Mat& image, std::vector<cv::KeyPoint>& keypoints, const cv::Mat&) const
{
//TODO: support mask
keypoints.clear();
Mat grayscaleImage;
which means that this option is not supported yet.
The solution of filtering the keypoints afterwards is not ideal, because it is time-consuming (you have to detect blobs in the whole image).
A better workaround is to crop the ROI before detection and shift each KeyPoint afterwards:
int x = 500;
int y = 200;
int width = 700;
int height = 700;
Mat roi = frame(Rect(x,y,width,height));
blob_detector.detect(roi, keypoints);
for (KeyPoint &kp : keypoints)
{
kp.pt.x +=x;
kp.pt.y +=y;
}
drawKeypoints(frame, keypoints, frame,Scalar::all(-1), DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
First, some background:
I have written a C++ function that detects an area of a certain color in an RGB image using OpenCV. The function is used to isolate a small colored area using the FeatureDetector SimpleBlobDetector.
The problem I have is that this function is used in a cross-platform project. On my OS X 10.8 machine using OpenCV in Xcode this works flawlessly. However, when I try to run the same piece of code on Windows using OpenCV in Visual Studio, it crashes whenever I use:
blobDetector.detect(imgThresh, keypoints)
with an error such as this:
OpenCV Error: Assertion failed (dims <= 2 && data && (unsigned)i0 < (unsigned)size.p[0] && (unsigned)(i1*DataType<_Tp>::channels) < (unsigned)(size.p[1]*channels()) && ((((sizeof(size_t)<<28)|0x8442211) >> ((DataType<_Tp>::depth) & ((1 << 3) - 1))*4) & 15) == elemSize1()) in unknown function, file C:\slave\builds\WinInstallerMegaPack\src\opencv\modules\core\include\opencv2/core/mat.hpp, line 545
This is the only piece of OpenCV code that has given me problems so far. I tried several solutions, like the ones suggested in "Using FeatureDetector in OpenCV gives access violation" and "Access violation reading in FeatureDetector OpenCV 2.4.5", but to no avail.
A partial solution was to add a threshold() call just before my call to .detect(), which appears to make it work. However, I don't like this solution, as it forces me to do something that (as far as I know) shouldn't be necessary, and it is not needed on my Mac for some reason.
Question
Can anyone explain why the following line:
threshold(imgThresh, imgThresh, 100, 255, 0);
is necessary on Windows, but not on OSX, just before the call to .detect() in the following code?
Full code snippet:
#include "ColorDetector.h"
using namespace cv;
using namespace std;
Mat ColorDetection(Mat img, Scalar colorMin, Scalar colorMax, double alpha, int beta)
{
initModule_features2d();
initModule_nonfree();
//Define matrices
Mat contrast_img = constrastImage(img, alpha, beta);
Mat imgThresh;
Mat blob;
//Threshold based on color ranges (Blue/Green/Red scalars)
inRange(contrast_img, colorMin, colorMax, imgThresh); //BGR range
//Apply Blur effect to make blobs more coherent
GaussianBlur(imgThresh, imgThresh, Size(3,3), 0);
//Set SimpleBlobDetector parameters
SimpleBlobDetector::Params params;
params.filterByArea = false;
params.filterByCircularity = false;
params.filterByConvexity = false;
params.filterByInertia = false;
params.filterByColor = true;
params.blobColor = 255;
params.minArea = 100.0f;
params.maxArea = 500.0f;
SimpleBlobDetector blobDetector(params);
blobDetector.create("SimpleBlob");
//Vector to store keypoints (center points for a blob)
vector<KeyPoint> keypoints;
//Try blob detection
threshold(imgThresh, imgThresh, 100, 255, 0);
blobDetector.detect(imgThresh, keypoints);
//Draw resulting keypoints
drawKeypoints(img, keypoints, blob, CV_RGB(255,255,0), DrawMatchesFlags::DEFAULT);
return blob;
}
Try using it this way:
Ptr<SimpleBlobDetector> sbd = SimpleBlobDetector::create(params);
vector<cv::KeyPoint> keypoints;
sbd->detect(imgThresh, keypoints);