Problems with opencv stereoRectifyUncalibrated - c++

I've been trying to rectify and build the disparity map for a pair of images using OpenCV's stereoRectifyUncalibrated, but I'm not getting very good results. My code is:
template<class T>
T convertNumber(string& number)
{
    istringstream ss(number);
    T t;
    ss >> t;
    return t;
}

void readPoints(vector<Point2f>& points, string filename)
{
    fstream filest(filename.c_str(), ios::in);
    string line;
    assert(filest.is_open());
    // Each line holds "x y"; read until the stream runs out.
    while (getline(filest, line))
    {
        int posEsp = line.find_first_of(' ');
        string posX = line.substr(0, posEsp);
        string posY = line.substr(posEsp + 1, line.size() - posEsp);
        float X = convertNumber<float>(posX);
        float Y = convertNumber<float>(posY);
        points.push_back(Point2f(X, Y));
    }
    filest.close();
}
void drawKeypointSequence(Mat lFrame, Mat rFrame, vector<KeyPoint>& lKeyp, vector<KeyPoint>& rKeyp)
{
    namedWindow("prevFrame", WINDOW_AUTOSIZE);
    namedWindow("currFrame", WINDOW_AUTOSIZE);
    moveWindow("prevFrame", 0, 300);
    moveWindow("currFrame", 650, 300);
    Mat rFrameAux;
    rFrame.copyTo(rFrameAux);
    Mat lFrameAux;
    lFrame.copyTo(lFrameAux);
    int size = rKeyp.size();
    for (int i = 0; i < size; i++)
    {
        vector<KeyPoint> drawRightKeyp;
        vector<KeyPoint> drawLeftKeyp;
        drawRightKeyp.push_back(rKeyp[i]);
        drawLeftKeyp.push_back(lKeyp[i]);
        cout << rKeyp[i].pt << " <<<>>> " << lKeyp[i].pt << endl;
        drawKeypoints(rFrameAux, drawRightKeyp, rFrameAux, Scalar::all(255), DrawMatchesFlags::DRAW_OVER_OUTIMG);
        drawKeypoints(lFrameAux, drawLeftKeyp, lFrameAux, Scalar::all(255), DrawMatchesFlags::DRAW_OVER_OUTIMG);
        imshow("currFrame", rFrameAux);
        imshow("prevFrame", lFrameAux);
        waitKey(0);
    }
    imwrite("RightKeypFrame.jpg", rFrameAux);
    imwrite("LeftKeypFrame.jpg", lFrameAux);
}
int main(int argc, char* argv[])
{
    StereoBM stereo(StereoBM::BASIC_PRESET, 16*5, 21);
    double ndisp = 16*4;
    assert(argc == 5);
    string rightImgFilename(argv[1]);    // Right image (current frame)
    string leftImgFilename(argv[2]);     // Left image (previous frame)
    string rightPointsFilename(argv[3]); // Right image points file
    string leftPointsFilename(argv[4]);  // Left image points file

    Mat rightFrame = imread(rightImgFilename.c_str(), 0);
    Mat leftFrame = imread(leftImgFilename.c_str(), 0);

    vector<Point2f> rightPoints;
    vector<Point2f> leftPoints;
    vector<KeyPoint> rightKeyp;
    vector<KeyPoint> leftKeyp;
    readPoints(rightPoints, rightPointsFilename);
    readPoints(leftPoints, leftPointsFilename);
    assert(rightPoints.size() == leftPoints.size());
    KeyPoint::convert(rightPoints, rightKeyp);
    KeyPoint::convert(leftPoints, leftKeyp);

    // Draw the keypoints one by one, to check the consistency of the matching
    drawKeypointSequence(leftFrame, rightFrame, leftKeyp, rightKeyp);

    Mat fundMatrix = findFundamentalMat(leftPoints, rightPoints, CV_FM_8POINT);
    Mat homRight;
    Mat homLeft;
    Mat disp16 = Mat(rightFrame.rows, leftFrame.cols, CV_16S);
    Mat disp8 = Mat(rightFrame.rows, leftFrame.cols, CV_8UC1);

    stereoRectifyUncalibrated(leftPoints, rightPoints, fundMatrix, rightFrame.size(), homLeft, homRight);
    warpPerspective(rightFrame, rightFrame, homRight, rightFrame.size());
    warpPerspective(leftFrame, leftFrame, homLeft, leftFrame.size());

    namedWindow("currFrame", WINDOW_AUTOSIZE);
    namedWindow("prevFrame", WINDOW_AUTOSIZE);
    moveWindow("currFrame", 650, 300);
    moveWindow("prevFrame", 0, 300);
    imshow("currFrame", rightFrame);
    imshow("prevFrame", leftFrame);
    imwrite("RectfRight.jpg", rightFrame);
    imwrite("RectfLeft.jpg", leftFrame);
    waitKey(0);

    stereo(rightFrame, leftFrame, disp16, CV_16S);
    disp16.convertTo(disp8, CV_8UC1, 255/ndisp);
    FileStorage file("disp_map.xml", FileStorage::WRITE);
    file << "disparity" << disp8;
    file.release();
    imshow("disparity", disp8);
    imwrite("disparity.jpg", disp8);
    moveWindow("disparity", 0, 0);
    waitKey(0);
}
drawKeypointSequence is the way I visually check the consistency of the points I have for both images: by drawing each pair of keypoints in sequence, I can be sure that keypoint i on image A matches keypoint i on image B.
I've also tried playing with the ndisp parameter, but it didn't help much.
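For reference, these are the kinds of knobs StereoBM exposes in OpenCV 2.4 through its state field; a sketch with purely illustrative values, not a recommendation:
// Illustrative StereoBM tuning (OpenCV 2.4); values are examples only.
stereo.state->preFilterCap = 31;
stereo.state->minDisparity = 0;
stereo.state->textureThreshold = 10;   // reject low-texture regions
stereo.state->uniquenessRatio = 15;    // reject ambiguous matches
stereo.state->speckleWindowSize = 100; // suppress small disparity speckles
stereo.state->speckleRange = 32;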
I tried it for the following pair of images:
LeftImage
RightImage
got the following rectified pair:
RectifiedLeft
RectifiedRight
and finally, the following disparity map
DisparityMap
Which, as you can see, is quite bad. I've also tried the same pair of images with the following stereoRectifyUncalibrated example: http://programmingexamples.net/wiki/OpenCV/WishList/StereoRectifyUncalibrated and with SBM_Sample.cpp from the OpenCV tutorial code samples to build the disparity map, and got a very similar result.
I'm using OpenCV 2.4.
Thanks in advance!

Besides possible calibration problems, your images clearly lack some texture for the stereo block matching to work.
This algorithm will see many ambiguities and produce overly large disparities on flat (non-textured) areas.
Note however that the keypoints seem to match well, so even if the rectification output seems weird it is probably correct.
You can test your code against standard images from the Middlebury stereo page for sanity checks.

I would suggest doing a stereo calibration with a chessboard: take multiple pictures of a chessboard and run OpenCV's stereo_calib.cpp sample on them. I say that because you are using stereoRectifyUncalibrated: while the algorithm does not need to know the intrinsic parameters of the cameras, it heavily depends on the epipolar geometry. Therefore, if the camera lenses have significant distortion, it is better to correct it before computing the fundamental matrix and calling this function. For example, the distortion coefficients can be estimated for each head of the stereo camera separately using calibrateCamera(). Then the images can be corrected using undistort(), or just the point coordinates can be corrected with undistortPoints().
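A minimal sketch of that last variant, assuming cameraMatrixL/R and distCoeffsL/R were estimated beforehand with calibrateCamera() (all names here are illustrative):
// Correct lens distortion on the matched points before estimating F.
vector<Point2f> leftUndist, rightUndist;
undistortPoints(leftPoints, leftUndist, cameraMatrixL, distCoeffsL,
                noArray(), cameraMatrixL);   // passing P = cameraMatrix keeps pixel coordinates
undistortPoints(rightPoints, rightUndist, cameraMatrixR, distCoeffsR,
                noArray(), cameraMatrixR);
Mat F = findFundamentalMat(leftUndist, rightUndist, CV_FM_8POINT);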

Related

I want to get high quality feature points only

I'm currently working on real-time feature matching using OpenCV 3.4.0 and C++ in Qt Creator.
My code matches features between the first frame captured by the webcam and the current frame from the webcam.
Mat frame1, frame2, img1, img2, img1_gray, img2_gray;
int n = 0;
VideoCapture cap1(0);
namedWindow("Video Capture1", WINDOW_NORMAL);
namedWindow("Reference img", WINDOW_NORMAL);
namedWindow("matches1", WINDOW_NORMAL);
moveWindow("Video Capture1", 50, 0);
moveWindow("Reference img", 50, 100);
moveWindow("matches1", 100, 100);
while ((char)waitKey(1) != 'q') {
    // raw image saved in frame
    cap1 >> frame1;
    n = n + 1;
    if (n == 1) {
        imwrite("frame1.jpg", frame1);
        cout << "First frame saved as 'frame1'!!" << endl;
    }
    if (frame1.empty())
        break;
    imshow("Video Capture1", frame1);
    img1 = imread("frame1.jpg");
    img2 = frame1;
    cvtColor(img1, img1_gray, cv::COLOR_BGR2GRAY);
    cvtColor(img2, img2_gray, cv::COLOR_BGR2GRAY);
    imshow("Reference img", img1);
    // detecting keypoints
    int minHessian = 400;
    Ptr<Feature2D> detector = xfeatures2d::SurfFeatureDetector::create(minHessian); // pass the threshold, it was unused before
    vector<KeyPoint> keypoints1, keypoints2;
    detector->detect(img1_gray, keypoints1);
    detector->detect(img2_gray, keypoints2);
    // computing descriptors
    Ptr<DescriptorExtractor> extractor = xfeatures2d::SurfFeatureDetector::create();
    Mat descriptors1, descriptors2;
    extractor->compute(img1_gray, keypoints1, descriptors1);
    extractor->compute(img2_gray, keypoints2, descriptors2);
    // matching descriptors
    BFMatcher matcher(NORM_L2);
    vector<DMatch> matches;
    matcher.match(descriptors1, descriptors2, matches);
    // drawing the results
    Mat img_matches;
    drawMatches(img1, keypoints1, img2, keypoints2, matches, img_matches);
    imshow("matches1", img_matches);
}
But the code returns so many matched points that I cannot distinguish which one matches which.
So, are there any methods to get only high-quality matched points?
And how can I get each matched point's pixel coordinates in Qt Creator, just like in MATLAB?
So, are there any methods to get only high-quality matched points?
I bet there are a lot of different methods. I am using, for example, a symmetry test: matches from img1 to img2 also have to exist when matching from img2 to img1. I am using the test from Improve matching of feature points with OpenCV; multiple other tests are shown there.
void symmetryTest(const std::vector<cv::DMatch>& matches1, const std::vector<cv::DMatch>& matches2, std::vector<cv::DMatch>& symMatches)
{
    symMatches.clear();
    for (vector<DMatch>::const_iterator matchIterator1 = matches1.begin(); matchIterator1 != matches1.end(); ++matchIterator1)
    {
        for (vector<DMatch>::const_iterator matchIterator2 = matches2.begin(); matchIterator2 != matches2.end(); ++matchIterator2)
        {
            // Keep a match only if it is mutual in both matching directions.
            if ((*matchIterator1).queryIdx == (*matchIterator2).trainIdx && (*matchIterator2).queryIdx == (*matchIterator1).trainIdx)
            {
                symMatches.push_back(DMatch((*matchIterator1).queryIdx, (*matchIterator1).trainIdx, (*matchIterator1).distance));
                break;
            }
        }
    }
}
Like András Kovács says in the related answer, you can also calculate a fundamental matrix with RANSAC to eliminate outliers, using cv::findFundamentalMat.
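A minimal sketch of that RANSAC step (the mask output of cv::findFundamentalMat marks the inliers; the names follow the code below):
// Keep only matches consistent with the RANSAC-estimated epipolar geometry.
vector<Point2f> pts1, pts2;
for (const DMatch& m : sym_matches) {
    pts1.push_back(keypoints_1[m.queryIdx].pt);
    pts2.push_back(keypoints_2[m.trainIdx].pt);
}
vector<uchar> inlierMask;
Mat F = findFundamentalMat(pts1, pts2, FM_RANSAC, 3.0, 0.99, inlierMask);
vector<DMatch> ransacMatches;
for (size_t i = 0; i < inlierMask.size(); i++)
    if (inlierMask[i])
        ransacMatches.push_back(sym_matches[i]);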
And how can I get each matched point's pixel coordinates in Qt Creator, just like in MATLAB?
I hope I understood correctly that you want the point coordinates of homologous points that match. I extract the coordinates of the points after the symmetryTest.
The coordinates are inside the keypoints.
for (size_t rows = 0; rows < sym_matches.size(); rows++) {
    float x1 = keypoints_1[sym_matches[rows].queryIdx].pt.x;
    float y1 = keypoints_1[sym_matches[rows].queryIdx].pt.y;
    float x2 = keypoints_2[sym_matches[rows].trainIdx].pt.x;
    float y2 = keypoints_2[sym_matches[rows].trainIdx].pt.y;
    // Push the coordinates into a vector, e.g. std::vector<cv::Point2f>
}
You can do the same with your matches, keypoints1 and keypoints2.

findChessboardCorners gives unexpected results

Here is my code
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <opencv2/opencv.hpp>
#include <iostream>
//#include <vector>

using namespace cv;
using namespace std;

int main()
{
    VideoCapture cap = VideoCapture(0);
    int successes = 0;
    int numBoards = 0;
    int numCornersHor = 6;
    int numCornersVer = 4;
    int numSquares = (numCornersHor - 1) * (numCornersVer - 1);
    Size board_sz = Size(numCornersHor, numCornersVer);
    vector<Point2f> corners;
    for (;;)
    {
        Mat img;
        cap >> img;
        if (img.empty()) break; // end of video stream; check before using the frame
        Mat gray;
        cvtColor(img, gray, CV_BGR2GRAY); // webcam frames are BGR, not RGB
        imshow("this is you, smile! :)", gray);
        if (waitKey(1) == 27) break; // stop capturing by pressing ESC
        bool found = findChessboardCorners(gray, board_sz, corners, CALIB_CB_ADAPTIVE_THRESH);
        if (found)
        {
            cout << corners.size() << "\n";
            cornerSubPix(gray, corners, Size(11, 11), Size(-1, -1), TermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 30, 0.1));
            drawChessboardCorners(gray, board_sz, corners, found);
        }
    }
    cap.release();
    waitKey();
    return 0;
}
The code captures frames from a webcam. If a chessboard is detected, the total number of found corners is printed (I added that because I was not getting the same output as the tutorial code and wanted to find where the bug is).
The output:
First you should follow some ground rules:
Do not use loose sheets of paper -> print/glue the chessboard onto a flat plate
Print it with a big white border to improve detection
The chessboard has to be completely inside the image (not as in your example)
Take several images with different positions of your chessboard
Second, you can't draw the coloured corner markers into an 8-bit grayscale image; use an 8-bit colour image instead (see the sketch below).
And if I count correctly (counting inner corners), your chessboard has the size (8,6).
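A minimal sketch of that, reusing gray, board_sz, corners and found from your code:
// Draw the detected pattern into a colour copy so the coloured overlay is visible.
Mat colorView;
cvtColor(gray, colorView, CV_GRAY2BGR);   // 3 channels for coloured markers
drawChessboardCorners(colorView, board_sz, corners, found);
imshow("corners", colorView);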
I had the same problem: the number of corners was HUGE. After some searching I found this solution here.
For some reason the findChessboardCorners function resizes the corners vector. I tried the solution above; it worked well for the output corners, but I still have an "assertion failed" problem with the cornerSubPix function.

Convert Image Color from Grayscale to RGB OpenCV C++

Basically I am trying to convert the output image below to colour (RGB). The image this code currently outputs is grayscale, but for my application I would like it to be output in colour. Please let me know where I should convert the image.
Also, the code below is C++ and it uses a function from OpenCV. Please keep in mind that I am using a wrapper to use this code in my iPhone application.
cv::Mat CVCircles::detectedCirclesInImage(cv::Mat img, double dp, double minDist, double param1, double param2, int min_radius, int max_radius) {
    //(cv::Mat img, double minDist, int min_radius, int max_radius)
    if (img.empty())
    {
        cout << "can not open image " << endl;
        return img;
    }
    Mat cimg;
    medianBlur(img, img, 5);
    cvtColor(img, cimg, CV_GRAY2RGB);
    vector<Vec3f> circles;
    HoughCircles( img                // InputArray
                , circles            // OutputArray
                , CV_HOUGH_GRADIENT  // int method
                , 1 //dp             // double dp=1        1 ... 20
                , minDist            // double minDist=10  log 1 ... 1000
                , 100 //param1       // double param1=100
                , 30 //param2        // double param2=30   10 ... 50
                , min_radius         // int minRadius=1    1 ... 500
                , max_radius         // int maxRadius=30   1 ... 500
                );
    for (size_t i = 0; i < circles.size(); i++)
    {
        Vec3i c = circles[i];
        circle(cimg, Point(c[0], c[1]), c[2], Scalar(255, 0, 0), 3, CV_AA);
        circle(cimg, Point(c[0], c[1]), 2, Scalar(0, 255, 0), 3, CV_AA);
    }
    return cimg;
}
This is currently set up to expect a grayscale image as input. I think that you are asking how to adapt it to accept a colour input image and return a colour output image. You don't need to change much:
cv::Mat CVCircles::detectedCirclesInImage(cv::Mat img, double dp, double minDist, double param1, double param2, int min_radius, int max_radius) {
    if (img.empty())
    {
        cout << "can not open image " << endl;
        return img;
    }
    Mat cimg;
    if (img.type() == CV_8UC1) {
        // input image is grayscale
        cvtColor(img, cimg, CV_GRAY2RGB);
    } else {
        // input image is colour: keep a colour copy for output,
        // and convert the working image to gray for HoughCircles
        cimg = img;
        cvtColor(img, img, CV_RGB2GRAY);
    }
the rest stays as is.
If your input image is colour, you are converting it to gray for processing by HoughCircles, and applying the found circles to the original colour image for output.
The cvtColor routine will simply copy the gray value into each of the three elements R, G and B for each pixel. In other words, if a pixel's gray value is 26, then the new image will have R = 26, G = 26, B = 26.
The image will still LOOK grayscale even though it contains all 3 colour components; all you have essentially done is triple the space needed to store the same image.
If you actually want colour to appear in the image (when you view it), it is truly impossible to go from grayscale back to the ORIGINAL colours. There are, however, ways of pseudo-colouring or false-colouring the image:
http://en.wikipedia.org/wiki/False_color
http://blog.martinperis.com/2011/09/opencv-pseudocolor-and-chroma-depth.html
http://podeplace.blogspot.com/2012/11/opencv-pseudocolors.html
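For example, OpenCV ships ready-made colour maps via applyColorMap (found in the contrib module in 2.4 and in imgproc from 3.0 on); a minimal sketch:
// False-colour a grayscale image with a built-in colour map.
Mat gray = imread("input.jpg", 0);   // illustrative filename
Mat pseudo;
applyColorMap(gray, pseudo, COLORMAP_JET);
imshow("pseudocolor", pseudo);
waitKey(0);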
The code you have pasted already returns a coloured image.
You are already doing cvtColor(img, cimg, CV_GRAY2RGB), and I don't see cimg getting converted back to grayscale anywhere! To verify it, try displaying it before returning from this function:
imshow("c",cimg);
waitKey(0);
return cimg;
You can draw the circles on the input colour image.
Check the documentation given on the OpenCV site:
http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/hough_circle/hough_circle.html

Calibrate single camera using OpenCV 2.3.1 and C++

I'm trying to calibrate a webcam using OpenCV 2.3.1 and Visual Studio 2010 (C++ console app). I'm using this class:
class CameraCalibrator {
private:
    std::vector<std::vector<cv::Point3f>> objectPoints;
    std::vector<std::vector<cv::Point2f>> imagePoints;
    // Square length
    float squareLenght;
    // output matrices
    cv::Mat cameraMatrix; // intrinsic
    cv::Mat distCoeffs;
    // flag to specify how calibration is done
    int flag;
    // used in image undistortion
    cv::Mat map1, map2;
    bool mustInitUndistort;
public:
    CameraCalibrator(): flag(0), squareLenght(36.0), mustInitUndistort(true) {};

    int addChessboardPoints(const std::vector<std::string>& filelist, cv::Size& boardSize) {
        std::vector<std::string>::const_iterator itImg;
        std::vector<cv::Point2f> imageCorners;
        std::vector<cv::Point3f> objectCorners;
        // initialize the chessboard corners in the chessboard reference frame
        // 3D scene points
        for (int i = 0; i < boardSize.height; i++) {
            for (int j = 0; j < boardSize.width; j++) {
                objectCorners.push_back(cv::Point3f(float(i)*squareLenght, float(j)*squareLenght, 0.0f));
            }
        }
        // 2D image points:
        cv::Mat image; // to contain chessboard image
        int successes = 0;
        //cv::namedWindow("Chess");
        for (itImg = filelist.begin(); itImg != filelist.end(); itImg++) {
            image = cv::imread(*itImg, 0);
            bool found = cv::findChessboardCorners(image, boardSize, imageCorners);
            //cv::drawChessboardCorners(image, boardSize, imageCorners, found);
            //cv::imshow("Chess", image);
            //cv::waitKey(1000);
            cv::cornerSubPix(image, imageCorners, cv::Size(5,5), cv::Size(-1,-1),
                cv::TermCriteria(cv::TermCriteria::MAX_ITER + cv::TermCriteria::EPS, 30, 0.1));
            // if we have a good board, add it to our data
            if (imageCorners.size() == boardSize.area()) {
                addPoints(imageCorners, objectCorners);
                successes++;
            }
        }
        return successes;
    }

    void addPoints(const std::vector<cv::Point2f>& imageCorners, const std::vector<cv::Point3f>& objectCorners) {
        // 2D image points from one view
        imagePoints.push_back(imageCorners);
        // corresponding 3D scene points
        objectPoints.push_back(objectCorners);
    }

    double calibrate(cv::Size& imageSize) {
        mustInitUndistort = true;
        std::vector<cv::Mat> rvecs, tvecs;
        return cv::calibrateCamera(objectPoints, // the 3D points
                                   imagePoints,
                                   imageSize,
                                   cameraMatrix, // output camera matrix
                                   distCoeffs,
                                   rvecs, tvecs,
                                   flag);
    }

    void remap(const cv::Mat& image, cv::Mat& undistorted) {
        std::cout << cameraMatrix;
        if (mustInitUndistort) { // called once per calibration
            cv::initUndistortRectifyMap(
                cameraMatrix,
                distCoeffs,
                cv::Mat(),
                cameraMatrix,
                image.size(),
                CV_32FC1,
                map1, map2);
            mustInitUndistort = false;
        }
        // apply mapping functions
        cv::remap(image, undistorted, map1, map2, cv::INTER_LINEAR);
    }
};
I'm using 10 chessboard images (supposing that's enough for calibration) with resolution 640x480. The main function looks like this:
int main() {
    CameraCalibrator calibrateCam;
    std::vector<std::string> filelist;
    filelist.push_back("img10.jpg");
    filelist.push_back("img09.jpg");
    filelist.push_back("img08.jpg");
    filelist.push_back("img07.jpg");
    filelist.push_back("img06.jpg");
    filelist.push_back("img05.jpg");
    filelist.push_back("img04.jpg");
    filelist.push_back("img03.jpg");
    filelist.push_back("img02.jpg");
    filelist.push_back("img01.jpg");
    cv::Size boardSize(8,6);
    double calibrateError;
    int success;
    success = calibrateCam.addChessboardPoints(filelist, boardSize);
    std::cout << "Success:" << success << std::endl;
    cv::Size imageSize;
    cv::Mat inputImage, outputImage;
    inputImage = cv::imread("img10.jpg", 0);
    outputImage = inputImage.clone();
    imageSize = inputImage.size();
    calibrateError = calibrateCam.calibrate(imageSize);
    std::cout << "Calibration error:" << calibrateError << std::endl;
    calibrateCam.remap(inputImage, outputImage);
    cv::namedWindow("Original");
    cv::imshow("Original", inputImage);
    cv::namedWindow("Undistorted");
    cv::imshow("Undistorted", outputImage);
    cv::waitKey();
    return 0;
}
Everything runs without errors. cameraMatrix looks like this (approximately):
685.65 0 365.14
0 686.38 206.98
0 0 1
Calibration error is 0.310157, which is acceptable.
But when I use remap, output image looks even worse than original. Here is the sample:
Original image:
Undistorted image:
So, the question is, am I doing something wrong in process of calibration? Is 10 different chessboard images enough for calibration? Do you have any suggestions?
The camera matrix doesn't undistort the lens; those 4 values are simply the focal length (in H and V) and the image centre (in X and Y).
There is another 4- or 5-value row matrix (distCoeffs in your code) which contains the lens mapping - see Karl's answer for example code.
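Undistortion uses both of them together; a minimal sketch (names are illustrative):
// The distortion coefficients, not the camera matrix alone, straighten the image.
// cameraMatrix and distCoeffs as returned by calibrateCamera().
Mat undistorted;
undistort(distortedImage, undistorted, cameraMatrix, distCoeffs);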
The calibration is done with a numerical optimization that has a pretty shallow slope near the solution. Also, the function being minimized is very nonlinear. So, my guess is that your 10 images aren't enough. I calibrate cameras with very wide-angle lenses (i.e. very distorted images), and I try to get like 50 or 60 images.
I try to get images with the chessboard at 3 or 4 positions along each edge of the image, plus some in the middle, with multiple orientations relative to the camera and at 3 different distances (super close, typical, and as far as you can get and still resolve the checkerboard).
Getting the chessboard near the corners is very important. Your example images do not have the chessboard very near the corner of the image. It's those points that constrain the calibration to do the right thing in the very distorted parts of the image (the corners).

OpenCV Having issues with cv::FAST

I'm trying to use the OpenCV FAST algorithm to detect corners in a video feed. The method call and set-up seem pretty straightforward, yet I'm running into a few problems. When I try to use this code
while (run)
{
    clock_t begin, end;
    img = cvQueryFrame(capture);
    key = cvWaitKey(10);
    cvShowImage("stream", img);
    // cv::FAST variables
    int threshold = 9;
    vector<KeyPoint> keypoints;
    if (key == 'a') {
        //begin = clock();
        Mat mat(tempImg);
        FAST(mat, keypoints, threshold, true);
        //end = clock();
        //cout << "\n TIME FOR CALCULATION: " << double(diffClock(begin,end)) << "\n";
    }
I get this error:
OpenCV Error: Assertion failed (image.data && image.type() == CV_8U) in unknown
function, file ........\ocv\opencv\src\cvaux\cvfast.cpp, line 6039
So I figured it's a problem with the depth of the image, but when I add this:
IplImage* tempImg = cvCreateImage(Size(img->width,img->height),8,1);
cvCvtColor(img,tempImg,CV_8U);
I get:
OpenCV Error: Bad number of channels (Incorrect number of channels for this conv
ersion code) in unknown function, file ........\ocv\opencv\src\cv\cvcolor.cpp
, line 2238
I've tried using a Mat instead of an IplImage to capture, but I keep getting the same kind of errors.
Any suggestions or help?
Thanks in advance.
Here is the entire file, just to make it easier for anyone:
#include "cv.h"
#include "cvaux.hpp"
#include "highgui.h"
#include <time.h>
#include <iostream>
double diffClock(clock_t begin, clock_t end);
using namespace std;
using namespace cv;
int main(int argc, char** argv)
{
//Create Mat img for camera capture
IplImage* img;
bool run = true;
CvCapture* capture= 0;
capture = cvCaptureFromCAM(-1);
int key =0;
cvNamedWindow("stream", 1);
while(run)
{
clock_t begin,end;
img = cvQueryFrame(capture);
key = cvWaitKey(10);
cvShowImage("stream",img);
//Cv::FAST variables
int threshold=9;
vector<KeyPoint> keypoints;
if(key=='a'){
//begin = clock();
IplImage* tempImg = cvCreateImage(Size(img->width,img->height),8,1);
cvCvtColor(img,tempImg,CV_8U);
Mat mat(img);
FAST(mat,keypoints,threshold,true);
//end = clock();
//cout << "\n TIME FOR CALCULATION: " << double(diffClock(begin,end)) << "\n" ;
}
else if(key=='x'){
run= false;
}
}
cvDestroyWindow( "stream" );
return 0;
}
Whenever you have a problem using the OpenCV API, go check the tests/examples available in the source code: fast.cpp
This practice is extremely useful and educational. Now, if you take a look at that code, you will notice that the image gets converted to grayscale before calling cv::FAST() on it:
Mat mat(tempImg);
Mat gray;
cvtColor(mat, gray, CV_BGR2GRAY);
FAST(gray,keypoints,threshold,true);
Seems pretty straightforward, indeed.
You need to change this
cvCvtColor(img,tempImg,CV_8U);
to
cvCvtColor(img,tempImg,CV_BGR2GRAY);
You can read this
Good Luck
I started getting the same message with code that had worked previously, and I was certain my Mat was U8 grayscale. It turned out that one of the images I was trying to process was no longer there. So in my case it was a misleading error message.
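A cheap guard against that; a minimal sketch (the path is illustrative):
// Fail early with a clear message instead of hitting the assertion inside FAST.
Mat img = imread("frame.png", 0);   // 0 = load as grayscale
if (img.empty()) {
    cerr << "Could not read frame.png" << endl;
    return -1;
}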
Take a look at this sample code. The code you are using looks like quite outdated OpenCV; in this sample you will find how feature detectors should be used now.
The sample is generic over several feature detectors (including FAST), which is why it looks a bit more complicated.
http://code.opencv.org/projects/opencv/repository/entry/branches/2.4/opencv/samples/cpp/matching_to_many_images.cpp
You will also find more samples in the parent directory.
Please follow the code below to get your desired result. As an example I am considering an image only, but you can apply the same idea to video frames.
Mat img = imread("IMG.jpg", IMREAD_UNCHANGED);
if( img.empty())
{
cout << "File not available for reading"<<endl;
return -1;
}
Mat grayImage;
if(img.channels() >2){
cvtColor( img, grayImage, CV_BGR2GRAY ); // converting color to gray image
}
else{
grayImage = img;
}
double sigma = 1;
GaussianBlur(grayImage, grayImage, Size(), sigma, sigma); // applying gaussian blur to remove some noise,if present
int thresholdCorner = 40;
vector<KeyPoint> keypointsCorners;
FAST(grayImage,keypointsCorners,thresholdCorner,true); // applying FAST key point detector
if(keypointsCorners.size() > 0){
cout << keypointsCorners.size() << endl;
}
// Drawing a circle around corners
for( int i = 0; i < keypointsCorners.size(); i++ )
{
circle( grayImage, keypointsCorners.at(i).pt, 5, Scalar(0), 2, 8, 0 );
}
cv::namedWindow("Display Image");
cv::imshow("Display Image", grayImage);
cvWaitKey(0);
cvDestroyWindow( "Display Image" );