OpenCV 2.4.9 SimpleBlobDetector mask does not work - C++

I've perused this site for an explanation, but to no avail... hopefully someone knows the answer.
I'm using SimpleBlobDetector to track some blobs. I would like to specify a mask via the detect method, but for some reason the mask doesn't seem to work: my keypoints show up for the whole image. Here are some snippets of my code:
Mat currFrame;
Mat mask;
Mat roi;
cv::Ptr<cv::FeatureDetector> blob_detector = new cv::SimpleBlobDetector(params);//custom set of params I've left out for legibility
blob_detector->create("SimpleBlob");
vector<cv::KeyPoint> myblob;
while(true)
{
    captured >> currFrame; // get a new frame from the camera; >> is grab and retrieve in one go (note: grab alone does not allow the frame to be modified)
    // do nothing if the frame is empty
    if(currFrame.empty())
    {
        break;
    }
    /******************** make mask ***********************/
    mask = Mat::zeros(currFrame.size(), CV_8U);
    roi = Mat(mask, Rect(400,400,400,400));
    roi = 255;
    /******************** image cleanup with some filters */
    GaussianBlur(currFrame, currFrame, Size(5,5), 1.5, 1.5);
    cv::medianBlur(currFrame, currFrame, 3);
    blob_detector->detect(fgMaskMOG, myblob, mask); // fgMaskMOG is currFrame after some filtering and background subtraction
    cv::drawKeypoints(fgMaskMOG, myblob, fgMaskMOG, Scalar::all(-1), DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
    imshow("mogForeground", fgMaskMOG);
    imshow("original", currFrame);
    imshow("mask", mask);
    if(waitKey(1) != -1)
        break;
}
The thing is, I confirmed that my mask is correctly made by using SurfFeatureDetector, as described here (OpenCV: howto use mask parameter for feature point detection (SURF)). If anyone can see what's wrong with my mask, I'd really appreciate the help. Sorry about the messy code!

I had the same issue and couldn't find a solution, so I solved it by checking the mask myself:
blob_detector->detect(img, keypoints);
std::vector<cv::KeyPoint> keypoints_in_range;
for (cv::KeyPoint &kp : keypoints)
    if (mask.at<uchar>(kp.pt) > 0) // use at<uchar>, not at<char>: a mask value of 255 reads as -1 through a signed char
        keypoints_in_range.push_back(kp);

I found this code in OpenCV 2.4.8:
void SimpleBlobDetector::detectImpl(const cv::Mat& image, std::vector<cv::KeyPoint>& keypoints, const cv::Mat&) const
{
    //TODO: support mask
    keypoints.clear();
    Mat grayscaleImage;
which means that this option is not supported yet.
The solution of filtering the keypoints afterwards is not great, because it is time-consuming (you still have to detect blobs in the whole image).
A better workaround is to cut out the ROI before detection and shift each KeyPoint back afterwards:
int x = 500;
int y = 200;
int width = 700;
int height = 700;
Mat roi = frame(Rect(x,y,width,height));
blob_detector.detect(roi, keypoints);
for (KeyPoint &kp : keypoints)
{
    kp.pt.x += x;
    kp.pt.y += y;
}
drawKeypoints(frame, keypoints, frame,Scalar::all(-1), DrawMatchesFlags::DRAW_RICH_KEYPOINTS);

Related

Remove the black space surrounding the image using OpenCV C++

Good day! I'm using the imwrite command to save the image below after cropping it in OpenCV (C++), but it seems to have included the surrounding black portion in the write. All I want to save is the cropped part. Please help.
Here's my code
Mat mask, draft, res;
int nPixels;
char c = 0;
while(c != 'q') {
    imshow("SAMPLE", img);
    if(!roi.isSet())
        roi.set("SAMPLE");
    if (roi.isSet()) {
        roi.createMask(img.size());
        mask = roi.getMask();
        res = mask & img.clone();
        imwrite("masked.png", res);
        imshow("draft", res);
    }
    c = waitKey(1);
}
Here is an example of how to crop an image and save the cropped image (see the comment from api55). Maybe that helps you.
cv::Mat img = cv::imread("Path/To/Image/image.png", cv::IMREAD_GRAYSCALE);
if(img.empty())
    return -1;
cv::Rect roi(0, 0, 100, 100); // define roi here as x0, y0, width, height
cv::Mat croppedImg(img, roi);
cv::imwrite("Path/To/Save/Location/croppedImage.png", croppedImg);

SimpleBlobDetector in OpenCV does not detect anything

I'm using OpenCV 3.1 to do some blob detection using SimpleBlobDetector, but I'm having no luck and no tutorial has been able to solve this. My environment is Xcode on x64.
I'm starting out with this image:
Then I'm turning it into greyscale:
Finally I turn it into a binary image and do the blob detection on that:
I've included "iostream" and "opencv2/opencv.hpp".
using namespace cv;
using namespace std;
Mat img_rgb;
Mat img_gray;
Mat img_keypoints;
Ptr<SimpleBlobDetector> detector = SimpleBlobDetector::create();
vector<KeyPoint> keypoints;
img_rgb = imread("summertriangle.jpg");
//Convert to greyscale
cvtColor(img_rgb, img_gray, CV_RGB2GRAY);
imshow("Grey Scale", img_gray);
// Start by creating the matrix that will allocate the new image
Mat img_bw(img_gray.size(), img_gray.type());
// Apply threshold to convert into a binary image and save to the new matrix
threshold(img_gray, img_bw, 100, 255, THRESH_BINARY);
// Extract coordinates of blobs at their centroids, save to the keypoints variable.
detector->detect(img_bw, keypoints);
cout << "The size of keypoints vector is: " << keypoints.size();
The keypoints vector is always empty. Nothing I've tried works.
So I solved this; I had not read the fine print in the docs. Thanks Dai for the heads-up on the Params, which made me give the docs a closer look:
Default values of parameters are tuned to extract dark circular blobs.
I had to simply do this when creating the SimpleBlobDetector object:
SimpleBlobDetector::Params params;
params.filterByArea = true;
params.minArea = 1;
params.maxArea = 1000;
params.filterByColor = true;
params.blobColor = 255;
Ptr<SimpleBlobDetector> detector = SimpleBlobDetector::create(params);
This did it.

Detecting all note heads whether whole notes or half notes

I need to detect all whole and half notes in the given image and draw all detected notes into a new image. But it seems that the code does not detect the half notes; it only detects the whole notes.
This is the source code I have
#include "opencv2/opencv.hpp"
using namespace cv;
using namespace std;
int main(int argc, char** argv)
{
    // Read image
    Mat im = imread("beethoven_ode_to_joy.jpg", IMREAD_GRAYSCALE);
    // Setup SimpleBlobDetector parameters.
    SimpleBlobDetector::Params params;
    // Change thresholds
    params.minThreshold = 10;
    params.maxThreshold = 200;
    // Filter by Area.
    params.filterByArea = true;
    params.minArea = 25;
    // Filter by Circularity
    params.filterByCircularity = true;
    params.minCircularity = 0.1;
    // Filter by Convexity
    params.filterByConvexity = true;
    params.minConvexity = 0.87;
    // Filter by Inertia
    params.filterByInertia = true;
    params.minInertiaRatio = 0.01;
    // Storage for blobs
    vector<KeyPoint> keypoints;
#if CV_MAJOR_VERSION < 3 // If you are using OpenCV 2
    // Set up detector with params
    SimpleBlobDetector detector(params);
    // Detect blobs
    detector.detect(im, keypoints);
#else
    // Set up detector with params
    Ptr<SimpleBlobDetector> detector = SimpleBlobDetector::create(params);
    // Detect blobs
    detector->detect(im, keypoints);
#endif
    // Draw detected blobs as red circles.
    // DrawMatchesFlags::DRAW_RICH_KEYPOINTS ensures
    // the size of the circle corresponds to the size of the blob
    Mat im_with_keypoints;
    drawKeypoints(im, keypoints, im_with_keypoints, Scalar(0, 0, 255), DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
    // Show blobs
    imshow("keypoints", im_with_keypoints);
    waitKey(0);
    return 0;
}
Actually, I don't have OpenCV at hand right now, but I tried something in MATLAB on short notice that should translate directly. First, you will notice in this image that the note heads are darker than the staves; looking closer, the centers of the note heads have a value of 0. So I suggest you convert your RGB image to grayscale and then apply thresholding: keep the pixels whose value equals 0 and discard the rest. The result is shown in this image. Then I think you can apply some morphological operations like dilation, because the detected note heads will be a little smaller than the originals. If you want to eliminate the upper parts of the notes (I mean the stems), you can detect them with the Hough line transform; OpenCV has functions for this (HoughLines or HoughLinesP). After detection you can erase those parts, or skip this step if you prefer. Finally, you can find the circular objects in the image with the Hough transform: the HoughCircles function performs this task in OpenCV (in MATLAB it is a little easier with imfindcircles). You can draw the found circles with the circle function in OpenCV or the viscircles function in MATLAB. The result is shown here.
Note that I didn't apply the morphological operations to improve the size of the note heads, nor the Hough line transform to detect and erase the stems; if you apply them, I think you will get a better result.
This algorithm is only a suggestion; you may find a better one by trying other operations.
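For reference, here is a minimal OpenCV C++ sketch of the suggested pipeline (threshold the dark pixels, dilate, then HoughCircles). The filename and every numeric parameter are assumptions that would need tuning against the real score image:
#include <opencv2/opencv.hpp>
using namespace cv;
int main()
{
    // "score.jpg" is a placeholder filename
    Mat gray = imread("score.jpg", IMREAD_GRAYSCALE);
    if (gray.empty())
        return -1;
    // Keep only the (near-)black pixels: note-head centers have value 0
    Mat bw;
    threshold(gray, bw, 10, 255, THRESH_BINARY_INV);
    // Dilate to grow the note heads back toward their original size
    dilate(bw, bw, getStructuringElement(MORPH_ELLIPSE, Size(3, 3)));
    // Find circular objects; dp, minDist, thresholds and radius range are rough guesses
    std::vector<Vec3f> circles;
    HoughCircles(bw, circles, HOUGH_GRADIENT, 1, 10, 100, 15, 3, 10);
    // Draw the found circles in red on a colour copy of the input
    Mat vis;
    cvtColor(gray, vis, COLOR_GRAY2BGR);
    for (const Vec3f &c : circles)
        circle(vis, Point(cvRound(c[0]), cvRound(c[1])), cvRound(c[2]), Scalar(0, 0, 255), 2);
    imshow("note heads", vis);
    waitKey(0);
    return 0;
}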

OpenCV keep background transparent during warpAffine

I create a Bird-View-Image with the warpPerspective()-function like this:
warpPerspective(frame, result, H, result.size(), CV_WARP_INVERSE_MAP, BORDER_TRANSPARENT);
The result looks very good and also the border is transparent:
Bird-View-Image
Now I want to put this image on top of another image "out". I try doing this with the function warpAffine like this:
warpAffine(result, out, M, out.size(), CV_INTER_LINEAR, BORDER_TRANSPARENT);
I also converted "out" to a four channel image with alpha channel according to a question which was already asked on stackoverflow:
Convert Image
This is the code: cvtColor(out, out, CV_BGR2BGRA);
I expected to see the chessboard but not the gray background. But in fact, my result looks like this:
Result Image
What am I doing wrong? Am I forgetting something? Is there another way to solve my problem? Any help is appreciated :)
I hope there is a better way, but here is something you could do:
Do the warpAffine normally (without the transparency)
Find the contour that encloses the warped image
Use this contour to create a mask (white inside the warped image, black at the borders)
Use this mask to copy the warped image into the other image
Sample code:
// load images
cv::Mat image2 = cv::imread("lena.png");
cv::Mat image = cv::imread("IKnowOpencv.jpg");
cv::resize(image, image, image2.size());
// perform warp perspective
std::vector<cv::Point2f> prev;
prev.push_back(cv::Point2f(-30,-60));
prev.push_back(cv::Point2f(image.cols+50,-50));
prev.push_back(cv::Point2f(image.cols+100,image.rows+50));
prev.push_back(cv::Point2f(-50,image.rows+50 ));
std::vector<cv::Point2f> post;
post.push_back(cv::Point2f(0,0));
post.push_back(cv::Point2f(image.cols-1,0));
post.push_back(cv::Point2f(image.cols-1,image.rows-1));
post.push_back(cv::Point2f(0,image.rows-1));
cv::Mat homography = cv::findHomography(prev, post);
cv::Mat imageWarped;
cv::warpPerspective(image, imageWarped, homography, image.size());
// find external contour and create mask
std::vector<std::vector<cv::Point> > contours;
cv::Mat imageWarpedCloned = imageWarped.clone(); // clone the image because findContours will modify it
cv::cvtColor(imageWarpedCloned, imageWarpedCloned, CV_BGR2GRAY); //only if the image is BGR
cv::findContours (imageWarpedCloned, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
// create mask
cv::Mat mask = cv::Mat::zeros(image.size(), CV_8U);
cv::drawContours(mask, contours, 0, cv::Scalar(255), -1);
// copy warped image into image2 using the mask
cv::erode(mask, mask, cv::Mat()); // erode a bit to avoid border artefacts
imageWarped.copyTo(image2, mask); // copy the image using the mask
//show images
cv::imshow("imageWarpedCloned", imageWarpedCloned);
cv::imshow("warped", imageWarped);
cv::imshow("image2", image2);
cv::waitKey();
One of the easiest ways to approach this (not necessarily the most efficient) is to warp the image twice, but set the OpenCV constant boundary value to different values each time (i.e. zero the first time and 255 the second time). These constant values should be chosen towards the minimum and maximum values in the image.
Then it is easy to find a binary mask where the two warp values are close to equal.
More importantly, you can also create a transparency effect through simple algebra like the following:
new_image = np.float32((warp_const_255 - warp_const_0) * preferred_bkg_img) / 255.0 + np.float32(warp_const_0)
The main reason I prefer this method is that OpenCV seems to interpolate smoothly down (or up) to the constant value at the image edges. A fully binary mask will pick up these dark or light fringe areas as artifacts. The above method acts more like true transparency and blends properly with the preferred background.
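The formula above is NumPy; since the rest of this thread is C++, a rough translation of the double-warp masking idea could look like the following sketch, where frame, M and background are assumed to already exist and the tolerance of 8 grey levels is an arbitrary guess:
// Warp twice with different constant border values (0 and 255)
cv::Mat warp0, warp255;
cv::warpAffine(frame, warp0, M, frame.size(), cv::INTER_LINEAR,
               cv::BORDER_CONSTANT, cv::Scalar::all(0));
cv::warpAffine(frame, warp255, M, frame.size(), cv::INTER_LINEAR,
               cv::BORDER_CONSTANT, cv::Scalar::all(255));
// Pixels where both warps (nearly) agree came from the source image;
// pixels where they differ were produced by the border fill
cv::Mat diff, mask;
cv::absdiff(warp0, warp255, diff);
cv::cvtColor(diff, diff, cv::COLOR_BGR2GRAY); // assuming a BGR input
cv::threshold(diff, mask, 8, 255, cv::THRESH_BINARY_INV);
// Composite only the valid warped pixels onto the background image
cv::Mat out = background.clone();
warp0.copyTo(out, mask);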
Here's a small test program that warps with transparent "border", then copies the warped image to a solid background.
int main()
{
    cv::Mat input = cv::imread("../inputData/Lenna.png");
    cv::Mat transparentInput, transparentWarped;
    cv::cvtColor(input, transparentInput, CV_BGR2BGRA);
    //transparentInput = input.clone();
    // create sample transformation mat
    cv::Mat M = cv::Mat::eye(2,3, CV_64FC1);
    // as a sample, just scale down and translate a little:
    M.at<double>(0,0) = 0.3;
    M.at<double>(0,2) = 100;
    M.at<double>(1,1) = 0.3;
    M.at<double>(1,2) = 100;
    // warp to same size with transparent border:
    cv::warpAffine(transparentInput, transparentWarped, M, transparentInput.size(), CV_INTER_LINEAR, cv::BORDER_TRANSPARENT);
    // NOW: merge image with background, here I use the original image as background:
    cv::Mat background = input;
    // create output buffer with same size as input
    cv::Mat outputImage = input.clone();
    for(int j=0; j<transparentWarped.rows; ++j)
        for(int i=0; i<transparentWarped.cols; ++i)
        {
            cv::Scalar pixWarped = transparentWarped.at<cv::Vec4b>(j,i);
            cv::Scalar pixBackground = background.at<cv::Vec3b>(j,i);
            float transparency = pixWarped[3] / 255.0f; // alpha value: 0 (0.0f) = fully transparent, 255 (1.0f) = fully solid
            outputImage.at<cv::Vec3b>(j,i)[0] = transparency * pixWarped[0] + (1.0f-transparency)*pixBackground[0];
            outputImage.at<cv::Vec3b>(j,i)[1] = transparency * pixWarped[1] + (1.0f-transparency)*pixBackground[1];
            outputImage.at<cv::Vec3b>(j,i)[2] = transparency * pixWarped[2] + (1.0f-transparency)*pixBackground[2];
        }
    cv::imshow("warped", outputImage);
    cv::imshow("input", input);
    cv::imwrite("../outputData/TransparentWarped.png", outputImage);
    cv::waitKey(0);
    return 0;
}
I use this as input:
and get this output:
which looks like the alpha channel isn't set to zero by warpAffine but to something like 205...
But in general, this is the way I would do it (unoptimized).
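A possible explanation for the ~205 alpha: BORDER_TRANSPARENT does not write zeros into the border region, it leaves the destination pixels untouched, so an uninitialized destination buffer shows through. Pre-filling the destination with fully transparent pixels before the warp should therefore help; a small, untested sketch:
// Pre-fill the destination with fully transparent black (alpha = 0);
// BORDER_TRANSPARENT then leaves these pixels unmodified during the warp
cv::Mat transparentWarped(transparentInput.size(), CV_8UC4, cv::Scalar(0, 0, 0, 0));
cv::warpAffine(transparentInput, transparentWarped, M, transparentWarped.size(),
               cv::INTER_LINEAR, cv::BORDER_TRANSPARENT);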

Detection of objects in nonuniform illumination in OpenCV C++

I am performing feature detection in a video/live stream/image using OpenCV C++. The lighting condition varies in different parts of the video, leading to some parts getting ignored while transforming the RGB images to binary images.
The lighting condition in a particular portion of the video also changes over the course of the video. I tried the 'Histogram equalization' function, but it didn't help.
I got a working solution in MATLAB in the following link:
http://in.mathworks.com/help/images/examples/correcting-nonuniform-illumination.html
However, most of the functions used in the above link aren't available in OpenCV.
Can you suggest the alternative of this MATLAB code in OpenCV C++?
OpenCV has the adaptive threshold paradigm available in the framework: http://docs.opencv.org/modules/imgproc/doc/miscellaneous_transformations.html#adaptivethreshold
The function prototype looks like:
void adaptiveThreshold(InputArray src, OutputArray dst,
                       double maxValue, int adaptiveMethod,
                       int thresholdType, int blockSize, double C);
The first two parameters are the input image and a place to store the output thresholded image. maxValue is the value assigned to an output pixel should it pass the criteria, adaptiveMethod is the method to use for adaptive thresholding, thresholdType is the type of thresholding you want to perform (more later), blockSize is the size of the windows to examine (more later), and C is a constant to subtract from each window. I've never really needed to change this and usually set it to 0.
The default method for adaptiveThreshold is to analyze blockSize x blockSize windows and calculate the mean intensity within each window, subtracted by C. If the centre pixel of a window is above this mean, the corresponding location in the output image is set to maxValue; otherwise it is set to 0. This should combat the non-uniform illumination issue: instead of applying a single global threshold to the image, you perform the thresholding over local pixel neighbourhoods.
You can read the documentation on the other methods and parameters, but to get you started, you can do something like this:
// Include libraries
#include <cv.h>
#include <highgui.h>
// For convenience
using namespace cv;
// Example function to adaptive threshold an image
void threshold()
{
    // Load in an image - Change "image.jpg" to whatever your image is called
    Mat image;
    image = imread("image.jpg", 1);
    // Convert image to grayscale and show the image
    // Wait for user key before continuing
    Mat gray_image;
    cvtColor(image, gray_image, CV_BGR2GRAY);
    namedWindow("Gray image", CV_WINDOW_AUTOSIZE);
    imshow("Gray image", gray_image);
    waitKey(0);
    // Adaptive threshold the image
    int maxValue = 255;
    int blockSize = 25;
    int C = 0;
    adaptiveThreshold(gray_image, gray_image, maxValue,
                      CV_ADAPTIVE_THRESH_MEAN_C, CV_THRESH_BINARY,
                      blockSize, C);
    // Show the thresholded image
    // Wait for user key before continuing
    namedWindow("Thresholded image", CV_WINDOW_AUTOSIZE);
    imshow("Thresholded image", gray_image);
    waitKey(0);
}
// Main function - Run the threshold function
int main( int argc, const char** argv )
{
    threshold();
    return 0;
}
adaptiveThreshold should be your first choice.
But here I report the "translation" from MATLAB to OpenCV, so you can easily port your code. As you can see, most of the functions are available in both MATLAB and OpenCV.
#include <opencv2/opencv.hpp>
using namespace cv;
int main()
{
    // Step 1: Read Image
    Mat1b img = imread("path_to_image", IMREAD_GRAYSCALE);
    // Step 2: Use Morphological Opening to Estimate the Background
    Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(15,15));
    Mat1b background;
    morphologyEx(img, background, MORPH_OPEN, kernel);
    // Step 3: Subtract the Background Image from the Original Image
    Mat1b img2;
    absdiff(img, background, img2);
    // Step 4: Increase the Image Contrast
    // Not needed here; the equivalent would be cv::equalizeHist
    // Step 5(1): Threshold the Image
    Mat1b bw;
    threshold(img2, bw, 50, 255, THRESH_BINARY);
    // Step 6: Identify Objects in the Image
    vector<vector<Point>> contours;
    findContours(bw.clone(), contours, CV_RETR_LIST, CV_CHAIN_APPROX_NONE);
    for(int i=0; i<contours.size(); ++i)
    {
        // Step 5(2): bwareaopen (drop small components)
        if(contours[i].size() > 50)
        {
            // Step 7: Examine One Object
            Mat1b object(bw.size(), uchar(0));
            drawContours(object, contours, i, Scalar(255), CV_FILLED);
            imshow("Single Object", object);
            waitKey();
        }
    }
    return 0;
}