I'm trying to implement FAST feature detection and descriptor computation using OpenCV 3.1 in C++.
My code:
Ptr<cv::FastFeatureDetector> fast = cv::FastFeatureDetector::create();
fast->detectAndCompute(img1, Mat(), keypoints1, desc);
But when I call detectAndCompute, I get an error. After debugging, I saw that in the source file (features2d.cpp) this must throw an error:
//[In source file features2d.cpp]
/* Detects keypoints and computes the descriptors */
void Feature2D::detectAndCompute( InputArray, InputArray,
                                  std::vector<KeyPoint>&,
                                  OutputArray,
                                  bool )
{
    CV_Error(Error::StsNotImplemented, "");
}
Why is this not implemented? And is there another way for me to use FAST?
You can also create a generic FeatureDetector pointer in OpenCV and use it (note that this string-based create API comes from the OpenCV 2.4 interface):
cv::Ptr<cv::FeatureDetector> detectorPFast= FeatureDetector::create("PyramidFAST");
std::vector<KeyPoint> keypointsPFast1;
detectorPFast->detect( src, keypointsPFast1 );
FAST is only a feature detector, and has no descriptors to compute. So, you simply need to call:
fast->detect(img1, keypoints1);
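FAST has no descriptor of its own, so if you also need descriptors you can pair the FAST keypoints with a separate descriptor extractor. A minimal sketch, assuming ORB descriptors are acceptable for your use case (img1, keypoints1 and desc are the variables from the question):
Ptr<cv::FastFeatureDetector> fast = cv::FastFeatureDetector::create();
std::vector<cv::KeyPoint> keypoints1;
fast->detect(img1, keypoints1);            // detection only, no descriptors

Ptr<cv::ORB> orb = cv::ORB::create();
cv::Mat desc;
orb->compute(img1, keypoints1, desc);      // descriptors computed for the FAST keypoints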
Related
I am a beginner in OpenCV feature detection and matching. I am trying to write a simple method to detect features and display them on an image. Here is the method that I have written:
void FeatureDetection::detectSimpleFeature(Mat image){
    Mat gray_image, output_image;
    cvtColor(image, gray_image, CV_BGR2GRAY);

    // Detect keypoints using a simple detector
    Ptr<Feature2D> detectors;
    vector<KeyPoint> keypoint_1;
    detectors->detect(image, keypoint_1);

    drawKeypoints(gray_image, keypoint_1, output_image, Scalar::all(-1), DrawMatchesFlags::DEFAULT);
    namedWindow("Keypoints Detection");
    imshow("Keypoints Detection", output_image);
    waitKey(0);
}
There is no compile-time error in this function, but at runtime the program crashes. Can anyone please help?
I am also looking for specific detector types like SURF, SIFT etc., but could not find them in my downloaded and built library. Please suggest!
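A likely cause of the crash is that detectors is declared as an empty Ptr<Feature2D> and never initialized, so detectors->detect(...) dereferences a null pointer. A minimal sketch of the fix, assuming ORB as the concrete detector (SIFT and SURF live in the separate opencv_contrib xfeatures2d module, which may be why they are missing from your build):
// Create a concrete detector instead of leaving the Ptr empty
Ptr<Feature2D> detector = ORB::create();
vector<KeyPoint> keypoint_1;
detector->detect(image, keypoint_1);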
I am playing around with an open source OpenCV application. With the provided image sets it works great, but when I attempt to pass it a live camera stream, or even recorded frames from that camera stream, it crashes. I assume that this has to do with the cv::Mat type, differing image channels, or some conversion that I am not doing.
The provided dataset is grey-scale, 8 bit, and so are my images.
The application expects grayscale (CV_8U).
My question is:
Given one of the (working) provided images, and one of my recorded (not working) images, what is the best way to compare them using OpenCV, to find out what difference might be causing my crashes?
Thank you.
I have tried:
Commenting out this code (which gave assertion errors):
if (mImGray.channels() == 3)
{
    cvtColor(mImGray, mImGray, CV_BGR2GRAY);
    cvtColor(imGrayRight, imGrayRight, CV_BGR2GRAY);
}
else if (mImGray.channels() == 4)
{
    cvtColor(mImGray, mImGray, CV_BGRA2GRAY);
    cvtColor(imGrayRight, imGrayRight, CV_BGRA2GRAY);
}
And replacing it with:
cv::Mat TempL;
mImGray.convertTo(TempL, CV_8U);
cvtColor(TempL, mImGray, CV_BayerGR2BGR);
cvtColor(mImGray, mImGray, CV_BGR2GRAY);
And the program crashes with no error...
You can try this code:
if (mImGray.depth() != CV_8U)
    mImGray.convertTo(mImGray, CV_8U);

if (mImGray.channels() == 3)
{
    cvtColor(mImGray, mImGray, COLOR_BGR2GRAY);
}
Or you can define a new Mat with the create function and use that.
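To address the comparison part of the question, a minimal diagnostic sketch that prints the properties most likely to differ between a working dataset image and a recorded frame (printMatInfo is an illustrative helper name):
#include <opencv2/opencv.hpp>
#include <iostream>

// Print the properties that most often differ between "working" and "crashing" inputs
void printMatInfo(const std::string& name, const cv::Mat& m)
{
    std::cout << name
              << ": size=" << m.cols << "x" << m.rows
              << " channels=" << m.channels()
              << " depth=" << m.depth()          // 0 means CV_8U
              << " type=" << m.type()            // 0 means CV_8UC1
              << " continuous=" << m.isContinuous()
              << std::endl;
}

// Usage (file names are illustrative):
// printMatInfo("provided", cv::imread("dataset_frame.png", cv::IMREAD_UNCHANGED));
// printMatInfo("recorded", cv::imread("my_frame.png", cv::IMREAD_UNCHANGED));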
Hey, I have a circular image that I want to convert to cartesian coordinates in OpenCV.
I've successfully done it in MATLAB, however I want to do it in OpenCV.
After some digging on the internet, I figured out there are actually functions called logPolar, polarToCart and cartToPolar. However, the official OpenCV documentation lacks information on how to use them. Since I don't really understand the parameters those functions take, I couldn't really use them.
So could someone give me (actually I think a lot of people are looking for it) an appropriate example of how to use those functions, please?
Just in case, I am sharing my sample image too.
Thanks in advance.
If you're using OpenCV 3, you probably want linearPolar:
Note that for both versions you need separate src and dst images (it does not work in place):
#include "opencv2/opencv.hpp" // needs imgproc, imgcodecs & highgui
Mat src = imread("my.png", 0); // read a grayscale img
Mat dst; // empty.
linearPolar(src,dst, Point(src.cols/2,src.rows/2), 120, INTER_CUBIC );
imshow("linear", dst);
waitKey();
or logPolar:
logPolar(src,dst,Point(src.cols/2,src.rows/2),40,INTER_CUBIC );
[edit:]
If you're still using OpenCV 2.4, you can only use the arcane C API functions and need IplImage conversions (not recommended):
Mat src=...;
Mat dst(src.size(), src.type()); // yes, you need to preallocate here
IplImage ipsrc = src; // new header, points to the same pixels
IplImage ipdst = dst;
cvLogPolar( &ipsrc, &ipdst, cvPoint2D32f(src.cols/2,src.rows/2), 40, CV_INTER_CUBIC);
// result is in dst, no need to release ipdst (and please don't do so.)
(polarToCart and cartToPolar work on point coords, not images)
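For completeness, a minimal sketch of cartToPolar/polarToCart on coordinate arrays rather than images (the sample values are purely illustrative):
Mat x = (Mat_<float>(1, 3) << 1.f, 0.f, -1.f);
Mat y = (Mat_<float>(1, 3) << 0.f, 1.f,  1.f);
Mat magnitude, angle;
cartToPolar(x, y, magnitude, angle, true);   // last argument: angles in degrees
// polarToCart(magnitude, angle, x, y, true) performs the inverse mapping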
I have succeeded in tracking moving objects in a video.
However, I want to decide whether an object is a person or not. I have tried the HOGDescriptor in OpenCV. HOGDescriptor has two methods for detecting people: HOGDescriptor::detect and HOGDescriptor::detectMultiScale. OpenCV's "sources\samples\cpp\peopledetect.cpp" demonstrates how to use HOGDescriptor::detectMultiScale, which searches the image at different scales and is very slow.
In my case, I have already tracked the objects inside rectangles. I think using HOGDescriptor::detect on just the inside of a rectangle will be much faster. But the OpenCV documentation only covers gpu::HOGDescriptor::detect (and I still can't work out how to use it) and omits HOGDescriptor::detect. I want to use HOGDescriptor::detect.
Could anyone provide me with some c++ code snippet demonstrating the usage of HOGDescriptor::detect?
thanks.
Since you already have a list of objects, you can call the HOGDescriptor::detect method for all objects and check the output foundLocations array. If it is not empty, the object was classified as a person. The only catch is that HOG works with 64x128 windows by default, so you need to rescale your objects:
std::vector<cv::Rect> movingObjects = ...;
cv::HOGDescriptor hog;
hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());
std::vector<cv::Point> foundLocations;

for (size_t i = 0; i < movingObjects.size(); ++i)
{
    cv::Mat roi = image(movingObjects[i]);
    cv::Mat window;
    cv::resize(roi, window, cv::Size(64, 128));
    hog.detect(window, foundLocations);
    if (!foundLocations.empty())
    {
        // movingObjects[i] is a person
    }
}
If you don't build OpenCV (via CMake) with CUDA enabled, calling gpu::HOGDescriptor::detect will be equivalent to calling HOGDescriptor::detect. No GPU is used.
Also, for code, you can use:
gpu::HOGDescriptor hog;  // detect is a member function, so an instance is needed
hog.setSVMDetector(gpu::HOGDescriptor::getDefaultPeopleDetector());

GpuMat img;              // upload your image/ROI here
vector<Point> found_locations;
hog.detect(img, found_locations);
if (!found_locations.empty())
{
    // img contains/is a real person
}
Edit:
However I want to decide if an object is person or not.
I don't think you need this step. HOGDescriptor::detect itself is used to detect people, so you don't need to verify them further; they are supposed to be people according to your setup. On the other hand, you can adjust its hit threshold to control the detection quality.
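For reference, a minimal sketch of passing a hit threshold to HOGDescriptor::detect; window is the 64x128 patch from the snippet above, and the threshold value is only illustrative and would need tuning:
cv::HOGDescriptor hog;
hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());

std::vector<cv::Point> foundLocations;
double hitThreshold = 0.5;   // illustrative; higher values mean stricter detections
hog.detect(window, foundLocations, hitThreshold);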
I just wondered whether using a SurfFeatureDetector to detect keypoints and then a SurfDescriptorExtractor to extract the SURF descriptors (see the code below, as described here) wouldn't extract the descriptors twice.
SurfFeatureDetector detector( minHessian );
std::vector<KeyPoint> keypoints;
detector.detect( img, keypoints ); //detecting keypoints, extracting descriptors without returning them
SurfDescriptorExtractor extractor;
Mat descriptors;
extractor.compute( img, keypoints, descriptors ); // extracting descriptors a second time
The OpenCV documentation says those two classes are wrappers for the SURF() class.
The SURF::operator() is overloaded, one version taking just a keypoint vector, the other one additionally taking a vector for the descriptors.
What intrigues me... both then call the cvExtractSURF() function, which seems to extract the descriptors, no matter what... (I did not dive too deep into the C code as I find it hard to understand, so maybe I'm wrong)
But this would mean that the SurfFeatureDetector would extract descriptors without returning them. Using the SurfDescriptorExtractor in the next step just does it a second time, which seems very inefficient to me. But am I right?
You can be assured that the detector does not actually compute the descriptors. The key statement to look at is line 687 of surf.cpp: if( !descriptors ) continue; Descriptors are not computed during detection, the way it should be. This kind of architecture is most likely due to the fact that the SURF code was "added" to OpenCV after it was designed/developed to work on its own.
As background: note that detectors and descriptor extractors are different things. You first "detect" points using SurfFeatureDetector, and then local descriptors are extracted around them (using SurfDescriptorExtractor). The snippet you have is a good guide.
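For comparison, in OpenCV 3 the detect-then-describe step can be done in a single call, since SURF lives in the opencv_contrib xfeatures2d module and implements detectAndCompute. A minimal sketch, assuming opencv_contrib is installed (minHessian and img as in the snippet above):
#include "opencv2/xfeatures2d.hpp"
using cv::xfeatures2d::SURF;

cv::Ptr<SURF> surf = SURF::create(minHessian);
std::vector<cv::KeyPoint> keypoints;
cv::Mat descriptors;
// Detection and description happen in one pass here
surf->detectAndCompute(img, cv::noArray(), keypoints, descriptors);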