Stitching video with fast playback of frames - C++

I'm attempting to stitch two videos together by matching their keypoints and finding the homography between the overlapping regions. I have successfully got this to work with two still images.
For the video case, I load the two separate video files, loop over the frames, and copy each frame into the matrices cap1frame and cap2frame for the respective video.
Then I send each pair of frames to the stitching function, which matches the keypoints, computes the homography between the two frames, stitches them, and displays the resulting image (matching based on the OpenCV example).
The stitching is successful; however, it results in very slow playback of the video and some graphical anomalies on the side of the frame, as seen in the photo.
I'm wondering how I can make this more efficient so the video plays back at full speed.
int main(int argc, char** argv){
    // Create a VideoCapture object for each input file
    VideoCapture cap1("left.mov");
    VideoCapture cap2("right.mov");

    // Check if the videos opened successfully
    if(!cap1.isOpened() || !cap2.isOpened()){
        cout << "Error opening video stream or file" << endl;
        return -1;
    }

    // Loop over the frames
    for (;;){
        Mat cap1frame;
        Mat cap2frame;
        cap1 >> cap1frame;
        cap2 >> cap2frame;

        // If either frame is empty, break immediately
        if (cap1frame.empty() || cap2frame.empty())
            break;

        // Send each pair of frames to the stitch function, then display the result
        imshow("Result", Stitching(cap1frame, cap2frame));
        if(waitKey(30) >= 0) break;
    }
    return 0;
}

I was able to resolve my issue by pre-calculating the homography using just the first frame of each video, so that the matching function is only called once.
I then looped through the rest of the video and applied the pre-calculated homography to warp each frame so the two videos could be stitched together. This warping step was initially inside my stitching function.
At that point playback was still really slow when calling imshow. But when I exported the resulting video instead, it played back correctly once the right fps was set in the VideoWriter object. I suspect I just needed to adjust the waitKey delay to match the source fps when displaying with imshow, but I'm not sure about that.
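If pacing imshow is indeed the remaining issue, a minimal sketch of that idea (my guess, not verified against this exact code) is to derive the waitKey delay from the fps reported by VideoCapture instead of hard-coding 30 ms:

// Sketch: pace imshow playback using the source fps. CV_CAP_PROP_FPS can
// report 0 for some containers, hence the ~30 fps fallback.
double fps = cap1.get(CV_CAP_PROP_FPS);
int delayMs = (fps > 0) ? cvRound(1000.0 / fps) : 33;
// then, inside the frame loop:
if (waitKey(delayMs) >= 0) break;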
I've got my full code below:
#include <stdio.h>
#include <iostream>
#include "opencv2/core.hpp"
#include "opencv2/features2d.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/calib3d.hpp"
#include "opencv2/xfeatures2d.hpp"
#include <opencv2/xfeatures2d/nonfree.hpp>
#include <opencv2/xfeatures2d/cuda.hpp>
#include <opencv2/opencv.hpp>
#include <vector>

using namespace cv;
using namespace std;
//Computes the homography from the two images passed in, by matching keypoints.
Mat Stitching(Mat image1, Mat image2){
    Mat I_1 = image1;
    Mat I_2 = image2;

    //based on https://docs.opencv.org/3.3.0/d7/dff/tutorial_feature_homography.html
    cv::Ptr<Feature2D> f2d = xfeatures2d::SIFT::create();

    // Step 1: Detect the keypoints
    std::vector<KeyPoint> keypoints_1, keypoints_2;
    f2d->detect( I_1, keypoints_1 );
    f2d->detect( I_2, keypoints_2 );

    // Step 2: Calculate descriptors (feature vectors)
    Mat descriptors_1, descriptors_2;
    f2d->compute( I_1, keypoints_1, descriptors_1 );
    f2d->compute( I_2, keypoints_2, descriptors_2 );

    // Step 3: Match descriptor vectors using BFMatcher
    BFMatcher matcher;
    std::vector< DMatch > matches;
    matcher.match( descriptors_1, descriptors_2, matches );

    // Keep only the best matches, sorted by distance between descriptors
    Mat index;
    int nbMatch = int(matches.size());
    Mat tab(nbMatch, 1, CV_32F);
    for (int i = 0; i < nbMatch; i++)
        tab.at<float>(i, 0) = matches[i].distance;
    sortIdx(tab, index, SORT_EVERY_COLUMN + SORT_ASCENDING);
    vector<DMatch> bestMatches;
    // clamp to the number of available matches in case fewer than 200 were found
    for (int i = 0; i < min(200, nbMatch); i++)
        bestMatches.push_back(matches[index.at<int>(i, 0)]);

    // The 1st image is the destination image and the 2nd image is the source image
    std::vector<Point2f> dst_pts;    //1st
    std::vector<Point2f> source_pts; //2nd
    for (vector<DMatch>::iterator it = bestMatches.begin(); it != bestMatches.end(); ++it) {
        //-- Get the keypoints from the good matches
        dst_pts.push_back( keypoints_1[ it->queryIdx ].pt );
        source_pts.push_back( keypoints_2[ it->trainIdx ].pt );
    }

    Mat H_12 = findHomography( source_pts, dst_pts, CV_RANSAC );
    return H_12;
}
int main(int argc, char** argv){
    //Mats to hold the first frame of each video, passed to the Stitching function.
    Mat I1, h_I1;
    Mat I2, h_I2;

    // Create a VideoCapture object for each input file
    VideoCapture cap1("left.mov");
    VideoCapture cap2("right.mov");
    cap1.set(CV_CAP_PROP_BUFFERSIZE, 10);
    cap2.set(CV_CAP_PROP_BUFFERSIZE, 10);

    //Check if the videos opened successfully
    if(!cap1.isOpened() || !cap2.isOpened()){
        cout << "Error opening video stream or file" << endl;
        return -1;
    }

    //Read the first frame of each video for the Stitching function
    if (cap1.read(I1)){
        h_I1 = I1;
    }
    if (cap2.read(I2)){
        h_I2 = I2;
    }

    //Compute the homography once, from the first pair of frames only.
    Mat homography = Stitching(h_I1, h_I2);
    std::cout << homography << '\n';

    //Create the VideoWriter object with the output codec, fps and size.
    VideoWriter video("video/output.avi", CV_FOURCC('M','J','P','G'), 30, Size(1280,720));

    //Loop through the frames of both videos.
    for (;;){
        Mat cap1frame;
        Mat cap2frame;
        cap1 >> cap1frame;
        cap2 >> cap2frame;

        // If either frame is empty, break immediately
        if (cap1frame.empty() || cap2frame.empty())
            break;

        //Warp the second video's frame so it lines up with the first one.
        //The size is the final video size.
        Mat warpImage2;
        warpPerspective(cap2frame, warpImage2, homography, Size(1280,720), INTER_CUBIC);

        //final is the canvas both videos are composited onto.
        Mat final(Size(1280,720), CV_8UC3);

        //ROIs select the relevant area of the canvas for each video.
        Mat roi1(final, Rect(0, 0, cap1frame.cols, cap1frame.rows));
        Mat roi2(final, Rect(0, 0, warpImage2.cols, warpImage2.rows));

        //Copy the images onto the ROIs, which share memory with the final canvas.
        warpImage2.copyTo(roi2);
        cap1frame.copyTo(roi1);

        //Write the composited frame to the output video.
        video.write(final);
        //imshow ("Result", final);
        if(waitKey(30) >= 0) break;
    }
    video.release();
    return 0;
}
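One further note on speed (a suggestion of mine, not benchmarked on this code): warpPerspective with INTER_CUBIC is considerably more expensive than the default INTER_LINEAR, so if the per-frame cost is still too high it may be worth trading a little interpolation quality for speed:

// Sketch: cheaper interpolation for the per-frame warp.
warpPerspective(cap2frame, warpImage2, homography, Size(1280,720), INTER_LINEAR);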

Related

OpenCV match image from camera with same image does not produce 100% matching

My goal is to match an image captured from a camera against some stored models and find the closest one. However, I think I am missing something.
This is what I'm doing: first I get a frame from the camera, select a portion of it, extract keypoints and compute descriptors using SURF, and store them in an xml file (I also store the image as model.png). This is my model.
Then I take another frame (a few seconds later), select the same portion, compute its descriptors, and match them against the previously stored ones.
The result is not close to 100% (I use the ratio between good matches and the number of keypoints), as I would expect.
For comparison, if I load model.png, compute its descriptors and match against the stored descriptors, I get 100% matching (more or less), which is reasonable.
This is my code:
#include <iostream>
#include "opencv2/opencv.hpp"
#include "opencv2/nonfree/nonfree.hpp"

using namespace std;

std::vector<cv::KeyPoint> detectKeypoints(cv::Mat image, int hessianTh, int nOctaves, int nOctaveLayers, bool extended, bool upright) {
    std::vector<cv::KeyPoint> keypoints;
    cv::SurfFeatureDetector detector(hessianTh, nOctaves, nOctaveLayers, extended, upright);
    detector.detect(image, keypoints);
    return keypoints;
}

cv::Mat computeDescriptors(cv::Mat image, std::vector<cv::KeyPoint> keypoints, int hessianTh, int nOctaves, int nOctaveLayers, bool extended, bool upright) {
    cv::SurfDescriptorExtractor extractor(hessianTh, nOctaves, nOctaveLayers, extended, upright);
    cv::Mat imageDescriptors;
    extractor.compute(image, keypoints, imageDescriptors);
    return imageDescriptors;
}

int main(int argc, char *argv[]) {
    cv::VideoCapture cap(0);
    cap.set(CV_CAP_PROP_FRAME_WIDTH, 2304);
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, 1536);
    cv::Mat frame; // frame must be declared before capturing into it
    cap >> frame;
    cv::Rect selection(939, 482, 1063-939, 640-482);
    cv::Mat roi = frame(selection).clone();
    //cv::Mat roi = cv::imread("model.png");
    cv::cvtColor(roi, roi, CV_BGR2GRAY);
    cv::equalizeHist(roi, roi);
    if (std::stoi(argv[1]) == 1)
    {
        // store the model: keypoints, descriptors and the image itself
        std::vector<cv::KeyPoint> keypoints = detectKeypoints(roi, 400, 4, 2, true, false);
        cv::FileStorage fs("model.xml", cv::FileStorage::WRITE);
        cv::write(fs, "keypoints", keypoints);
        cv::write(fs, "descriptors", computeDescriptors(roi, keypoints, 400, 4, 2, true, false));
        fs.release();
        cv::imwrite("model.png", roi);
    }
    else
    {
        // load the stored model
        cv::FileStorage fs("model.xml", cv::FileStorage::READ);
        std::vector<cv::KeyPoint> modelkeypoints;
        cv::Mat modeldescriptor;
        cv::FileNode filenode = fs["keypoints"];
        cv::read(filenode, modelkeypoints);
        filenode = fs["descriptors"];
        cv::read(filenode, modeldescriptor);
        fs.release();

        std::vector<cv::KeyPoint> roikeypoints = detectKeypoints(roi, 400, 4, 2, true, false);
        cv::Mat roidescriptor = computeDescriptors(roi, roikeypoints, 400, 4, 2, true, false);

        std::vector<std::vector<cv::DMatch>> matches;
        cv::BFMatcher matcher(cv::NORM_L2);
        if (roikeypoints.size() < modelkeypoints.size())
            matcher.knnMatch(roidescriptor, modeldescriptor, matches, 2); // Find the two nearest matches
        else
            matcher.knnMatch(modeldescriptor, roidescriptor, matches, 2);

        // Lowe's ratio test to keep only good matches
        vector<cv::DMatch> good_matches;
        for (size_t i = 0; i < matches.size(); ++i)
        {
            const float ratio = 0.7;
            if (matches[i][0].distance < ratio * matches[i][1].distance)
                good_matches.push_back(matches[i][0]);
        }

        cv::Mat matching;
        cv::Mat model = cv::imread("model.png");
        if (roikeypoints.size() < modelkeypoints.size())
            cv::drawMatches(roi, roikeypoints, model, modelkeypoints, good_matches, matching);
        else
            cv::drawMatches(model, modelkeypoints, roi, roikeypoints, good_matches, matching);
        cv::imwrite("matches.png", matching);

        float result = static_cast<float>(good_matches.size()) / static_cast<float>(roikeypoints.size());
        std::cout << result << std::endl;
    }
    return 0;
}
Any suggestion will be appreciated; this is driving me crazy.
This is expected: the small change between the two frames is the reason you don't get 100% matches. On the same image, though, the SURF features will be at exactly the same points and the computed descriptors will be identical, which is why the self-match is close to 100%. So tune your method for your camera: plot the distance between features when they are supposed to be identical, then set a threshold on the distance such that most (maybe 95%) of those matches are accepted. This way you will have a low false-match rate while still keeping a large rate of true matches.
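A rough sketch of that tuning step (my illustration, not from the answer; good_matches is the vector from the code above, and std::sort needs <algorithm>): collect the descriptor distances from matches between frames that should be identical, sort them, and take the 95th percentile as the acceptance threshold.

// Sketch: choose a distance threshold that accepts ~95% of matches taken
// from frames of the same, unchanged scene.
std::vector<float> dists;
for (size_t i = 0; i < good_matches.size(); ++i)
    dists.push_back(good_matches[i].distance);
std::sort(dists.begin(), dists.end());
float acceptThreshold = dists.empty() ? 0.0f
    : dists[static_cast<size_t>(0.95 * (dists.size() - 1))];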

findChessboardCorners gives unexpected results

Here is my code
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main()
{
    VideoCapture cap = VideoCapture(0);
    int successes = 0;
    int numBoards = 0;
    int numCornersHor = 6;
    int numCornersVer = 4;
    int numSquares = (numCornersHor - 1) * (numCornersVer - 1);
    Size board_sz = Size(numCornersHor, numCornersVer);
    vector<Point2f> corners;

    for (;;)
    {
        Mat img;
        cap >> img;
        if (img.empty()) break; // end of video stream; check before converting

        Mat gray;
        cvtColor(img, gray, CV_BGR2GRAY); // frames from VideoCapture are BGR

        imshow("this is you, smile! :)", gray);
        if (waitKey(1) == 27) break; // stop capturing by pressing ESC

        bool found = findChessboardCorners(gray, board_sz, corners, CALIB_CB_ADAPTIVE_THRESH);
        if (found)
        {
            cout << corners.size() << "\n";
            cornerSubPix(gray, corners, Size(11, 11), Size(-1, -1), TermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 30, 0.1));
            drawChessboardCorners(gray, board_sz, corners, found);
        }
    }
    cap.release();
    waitKey();
    return 0;
}
The code is capturing frames from a webcam. If a chessboard is detected, the total number of found corners is printed out (I did it because I was not getting the same output as in the tutorial code and I wanted to find where the bug is).
The output: [screenshot omitted]
First you should follow some ground rules:
Do not use loose paper -> print/glue the chessboard onto a flat plate
Print it with a big white border to improve detection
The chessboard has to be completely inside the image (not as in your example)
Take several images with different positions of the chessboard
Second, you can't draw the colored corner markers into an 8-bit grayscale image; use an 8-bit color image instead (a quick sketch of this follows below).
And if I count correctly (counting inner corners), your chessboard has the size (8,6).
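A minimal sketch of that drawing fix (my illustration, not part of the answer above): keep detecting on the grayscale image, but draw the markers on a color image.

// Sketch: detect on gray, draw on a color copy so the markers are visible.
Mat display;
cvtColor(gray, display, CV_GRAY2BGR); // or simply draw on the original img
drawChessboardCorners(display, board_sz, corners, found);
imshow("corners", display);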
I have the same problem; the number of corners is HUGE. After some searching I found this solution Here.
For some reason the findChessboardCorners function resizes the corners vector. I tried the solution above and it worked well for the output corners, but I still have an assertion-failed problem with the cornerSubPix function.

How to close camera (OpenCV Beaglebone)

Good Day,
I am trying to figure out how to close the camera on a BeagleBone in OpenCV. I have tried numerous commands such as release(&camera), but none of them exist and the camera stays on when I don't want it to.
VideoCapture capture(0);
capture.set(CV_CAP_PROP_FRAME_WIDTH, 320);
capture.set(CV_CAP_PROP_FRAME_HEIGHT, 240);
if(!capture.isOpened()){
    cout << "Failed to connect to the camera." << endl;
}
Mat frame, edges, cont;
while(1){
    cout << sending << endl;
    if(sending){
        for(int i=0; i<frames; i++){
            capture >> frame;
            if(frame.empty()){
                cout << "Failed to capture an image" << endl;
                return 0;
            }
            cvtColor(frame, edges, CV_BGR2GRAY);
The code is something like this. At the end of the for loop I want to close the camera, but of course it still stays open.
The camera will be deinitialized automatically in the VideoCapture destructor.
Check this example from the OpenCV docs:
int main(int, char**)
{
    VideoCapture cap(0); // open the default camera
    if(!cap.isOpened())  // check if we succeeded
        return -1;

    Mat edges;
    namedWindow("edges", 1);
    for(;;)
    {
        Mat frame;
        cap >> frame; // get a new frame from camera
        cvtColor(frame, edges, CV_BGR2GRAY);
        GaussianBlur(edges, edges, Size(7,7), 1.5, 1.5);
        Canny(edges, edges, 0, 30, 3);
        imshow("edges", edges);
        if(waitKey(30) >= 0) break;
    }
    // the camera will be deinitialized automatically in VideoCapture destructor
    return 0;
}
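If you want the camera off before the VideoCapture object goes out of scope, there is also an explicit release() you can call (a brief sketch, not part of the quoted example):

// Sketch: explicitly close the camera instead of waiting for the destructor.
cap.release();           // the device is closed immediately
// cap.isOpened() returns false from this point on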
Also, from the OpenCV documentation:
cvQueryFrame
Grabs and returns a frame from camera or file
IplImage* cvQueryFrame( CvCapture* capture );
capture - video capturing structure.
The function cvQueryFrame grabs a frame from a camera or video file, decompresses it and returns it. This function is just a combination of cvGrabFrame and cvRetrieveFrame in one call. The returned image should not be released or modified by the user.
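For that legacy C interface, the matching teardown call is cvReleaseCapture (again only a sketch, for completeness):

// Sketch: closing the camera with the old C API.
CvCapture* capture = cvCaptureFromCAM(0);
IplImage* frame = cvQueryFrame(capture); // owned by capture; do not release it
cvReleaseCapture(&capture);              // closes the camera and frees the structure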
Also check: http://derekmolloy.ie/beaglebone/beaglebone-video-capture-and-image-processing-on-embedded-linux-using-opencv/
I hope this works for you. Best of luck.

Problems with opencv stereoRectifyUncalibrated

I've been trying to rectify a pair of images and build the disparity map using OpenCV's stereoRectifyUncalibrated, but I'm not getting very good results. My code is:
template<class T>
T convertNumber(string& number)
{
    istringstream ss(number);
    T t;
    ss >> t;
    return t;
}

void readPoints(vector<Point2f>& points, string filename)
{
    fstream filest(filename.c_str(), ios::in);
    string line;
    assert(filest.is_open()); // comparing a stream to NULL is not valid C++
    getline(filest, line);
    do{
        int posEsp = line.find_first_of(' ');
        string posX = line.substr(0, posEsp);
        string posY = line.substr(posEsp+1, line.size() - posEsp);
        float X = convertNumber<float>(posX);
        float Y = convertNumber<float>(posY);
        Point2f pnt = Point2f(X, Y);
        points.push_back(pnt);
        getline(filest, line);
    } while(!filest.eof());
    filest.close();
}

void drawKeypointSequence(Mat lFrame, Mat rFrame, vector<KeyPoint>& lKeyp, vector<KeyPoint>& rKeyp)
{
    namedWindow("prevFrame", WINDOW_AUTOSIZE);
    namedWindow("currFrame", WINDOW_AUTOSIZE);
    moveWindow("prevFrame", 0, 300);
    moveWindow("currFrame", 650, 300);
    Mat rFrameAux;
    rFrame.copyTo(rFrameAux);
    Mat lFrameAux;
    lFrame.copyTo(lFrameAux);
    int size = rKeyp.size();
    for(int i=0; i<size; i++)
    {
        vector<KeyPoint> drawRightKeyp;
        vector<KeyPoint> drawleftKeyp;
        drawRightKeyp.push_back(rKeyp[i]);
        drawleftKeyp.push_back(lKeyp[i]);
        cout << rKeyp[i].pt << " <<<>>> " << lKeyp[i].pt << endl;
        drawKeypoints(rFrameAux, drawRightKeyp, rFrameAux, Scalar::all(255), DrawMatchesFlags::DRAW_OVER_OUTIMG);
        drawKeypoints(lFrameAux, drawleftKeyp, lFrameAux, Scalar::all(255), DrawMatchesFlags::DRAW_OVER_OUTIMG);
        imshow("currFrame", rFrameAux);
        imshow("prevFrame", lFrameAux);
        waitKey(0);
    }
    imwrite("RightKeypFrame.jpg", rFrameAux);
    imwrite("LeftKeypFrame.jpg", lFrameAux);
}

int main(int argc, char* argv[])
{
    StereoBM stereo(StereoBM::BASIC_PRESET, 16*5, 21);
    double ndisp = 16*4;
    assert(argc == 5);
    string rightImgFilename(argv[1]);    // Right image (current frame)
    string leftImgFilename(argv[2]);     // Left image (previous frame)
    string rightPointsFilename(argv[3]); // Right image points file
    string leftPointsFilename(argv[4]);  // Left image points file

    Mat rightFrame = imread(rightImgFilename.c_str(), 0);
    Mat leftFrame = imread(leftImgFilename.c_str(), 0);

    vector<Point2f> rightPoints;
    vector<Point2f> leftPoints;
    vector<KeyPoint> rightKeyp;
    vector<KeyPoint> leftKeyp;
    readPoints(rightPoints, rightPointsFilename);
    readPoints(leftPoints, leftPointsFilename);
    assert(rightPoints.size() == leftPoints.size());
    KeyPoint::convert(rightPoints, rightKeyp);
    KeyPoint::convert(leftPoints, leftKeyp);

    // Draw the keypoints sequentially, to test the consistency of the matching
    drawKeypointSequence(leftFrame, rightFrame, leftKeyp, rightKeyp);

    Mat fundMatrix = findFundamentalMat(leftPoints, rightPoints, CV_FM_8POINT);
    Mat homRight;
    Mat homLeft;
    Mat disp16 = Mat(rightFrame.rows, leftFrame.cols, CV_16S);
    Mat disp8 = Mat(rightFrame.rows, leftFrame.cols, CV_8UC1);

    stereoRectifyUncalibrated(leftPoints, rightPoints, fundMatrix, rightFrame.size(), homLeft, homRight);
    warpPerspective(rightFrame, rightFrame, homRight, rightFrame.size());
    warpPerspective(leftFrame, leftFrame, homLeft, leftFrame.size());

    namedWindow("currFrame", WINDOW_AUTOSIZE);
    namedWindow("prevFrame", WINDOW_AUTOSIZE);
    moveWindow("currFrame", 650, 300);
    moveWindow("prevFrame", 0, 300);
    imshow("currFrame", rightFrame);
    imshow("prevFrame", leftFrame);
    imwrite("RectfRight.jpg", rightFrame);
    imwrite("RectfLeft.jpg", leftFrame);
    waitKey(0);

    stereo(rightFrame, leftFrame, disp16, CV_16S);
    disp16.convertTo(disp8, CV_8UC1, 255/ndisp);
    FileStorage file("disp_map.xml", FileStorage::WRITE);
    file << "disparity" << disp8;
    file.release();
    imshow("disparity", disp8);
    imwrite("disparity.jpg", disp8);
    moveWindow("disparity", 0, 0);
    waitKey(0);
}
drawKeypointSequence is how I visually check the consistency of the points I have for both images. By drawing each keypoint in sequence, I can be sure that keypoint i on image A corresponds to keypoint i on image B.
I've also tried playing with the ndisp parameter, but it didn't help much.
I tried it for the following pair of images:
LeftImage
RightImage
got the following rectified pair:
RectifiedLeft
RectifiedRight
and finally, the following disparity map
DisparityMap
As you can see, it is quite bad. I've also tried the same pair of images with the following stereoRectifyUncalibrated example: http://programmingexamples.net/wiki/OpenCV/WishList/StereoRectifyUncalibrated and the SBM_Sample.cpp from the OpenCV tutorial code samples to build the disparity map, and got a very similar result.
I'm using OpenCV 2.4.
Thanks in advance!
Besides possible calibration problems, your images clearly lack the texture needed for stereo block matching to work.
The algorithm will see many ambiguities and too-large disparities on flat (non-textured) parts.
Note, however, that the keypoints seem to match well, so even if the rectification output looks weird, it is probably correct.
You can test your code against standard images from the Middlebury stereo page for sanity checks.
I would suggest doing a stereo calibration with a chessboard, or taking multiple pictures of a chessboard and using stereocalibrate.cpp on your computer. I say this because you are using stereoRectifyUncalibrated: while that algorithm does not need to know the intrinsic parameters of the cameras, it depends heavily on the epipolar geometry. Therefore, if the camera lenses have significant distortion, it is better to correct it before computing the fundamental matrix and calling this function. For example, distortion coefficients can be estimated for each head of the stereo camera separately using calibrateCamera(). Then the images can be corrected using undistort(), or just the point coordinates can be corrected with undistortPoints().
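A rough sketch of that undistortion step (cameraMatrixLeft, distCoeffsLeft, etc. are illustrative names; they would come from a separate calibrateCamera() run):

// Sketch: undistort both views before estimating the fundamental matrix.
Mat undistLeft, undistRight;
undistort(leftFrame, undistLeft, cameraMatrixLeft, distCoeffsLeft);
undistort(rightFrame, undistRight, cameraMatrixRight, distCoeffsRight);
// ...then rerun findFundamentalMat / stereoRectifyUncalibrated on these images.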

OpenCV Having issues with cv::FAST

I'm trying to use the OpenCV FAST algorithm to detect corners in a video feed. The method call and set-up seem pretty straightforward, yet I'm running into a few problems. When I try to use this code
while(run)
{
    clock_t begin, end;
    img = cvQueryFrame(capture);
    key = cvWaitKey(10);
    cvShowImage("stream", img);
    //cv::FAST variables
    int threshold = 9;
    vector<KeyPoint> keypoints;
    if(key == 'a'){
        //begin = clock();
        Mat mat(tempImg);
        FAST(mat, keypoints, threshold, true);
        //end = clock();
        //cout << "\n TIME FOR CALCULATION: " << double(diffClock(begin,end)) << "\n";
    }
I get this error:
OpenCV Error: Assertion failed (image.data && image.type() == CV_8U) in unknown
function, file ........\ocv\opencv\src\cvaux\cvfast.cpp, line 6039
So I figured it's a problem with the depth of the image, so I added this:
IplImage* tempImg = cvCreateImage(Size(img->width,img->height),8,1);
cvCvtColor(img,tempImg,CV_8U);
I get:
OpenCV Error: Bad number of channels (Incorrect number of channels for this conversion code) in unknown function, file ........\ocv\opencv\src\cv\cvcolor.cpp, line 2238
I've tried using a Mat instead of an IplImage to capture, but I keep getting the same kinds of errors.
Any suggestions or help?
Thanks in advance.
Here is the entire file, to make things easier for anyone:
#include "cv.h"
#include "cvaux.hpp"
#include "highgui.h"
#include <time.h>
#include <iostream>
double diffClock(clock_t begin, clock_t end);
using namespace std;
using namespace cv;
int main(int argc, char** argv)
{
//Create Mat img for camera capture
IplImage* img;
bool run = true;
CvCapture* capture= 0;
capture = cvCaptureFromCAM(-1);
int key =0;
cvNamedWindow("stream", 1);
while(run)
{
clock_t begin,end;
img = cvQueryFrame(capture);
key = cvWaitKey(10);
cvShowImage("stream",img);
//Cv::FAST variables
int threshold=9;
vector<KeyPoint> keypoints;
if(key=='a'){
//begin = clock();
IplImage* tempImg = cvCreateImage(Size(img->width,img->height),8,1);
cvCvtColor(img,tempImg,CV_8U);
Mat mat(img);
FAST(mat,keypoints,threshold,true);
//end = clock();
//cout << "\n TIME FOR CALCULATION: " << double(diffClock(begin,end)) << "\n" ;
}
else if(key=='x'){
run= false;
}
}
cvDestroyWindow( "stream" );
return 0;
}
Whenever you have a problem using the OpenCV API, go check the tests/examples available in the source code: fast.cpp
This practice is extremely useful and educational. Now, if you take a look at that code, you will notice that the image gets converted to grayscale before calling cv::FAST() on it:
Mat mat(tempImg);
Mat gray;
cvtColor(mat, gray, CV_BGR2GRAY);
FAST(gray,keypoints,threshold,true);
Seems pretty straightforward, indeed.
You need to change this
cvCvtColor(img, tempImg, CV_8U);
to
cvCvtColor(img, tempImg, CV_BGR2GRAY);
You can read this
Good luck
I started getting the same message with code that had worked previously, and I was certain my Mat was 8-bit grayscale. It turned out that one of the images I was trying to process was no longer there, so in my case it was a misleading error message.
Take a look at this sample code. The code you are using looks like quite outdated OpenCV; in this sample you will find how feature detectors should be used now.
The sample is generic over several feature detectors (including FAST), which is why it looks a bit more complicated.
http://code.opencv.org/projects/opencv/repository/entry/branches/2.4/opencv/samples/cpp/matching_to_many_images.cpp
You will also find more samples in the parent directory.
Please follow the code below to get your desired result. As an example I am considering a single image only, but you can use the same idea for video frames:
Mat img = imread("IMG.jpg", IMREAD_UNCHANGED);
if( img.empty())
{
    cout << "File not available for reading" << endl;
    return -1;
}

Mat grayImage;
if(img.channels() > 2){
    cvtColor( img, grayImage, CV_BGR2GRAY ); // converting color to gray image
}
else{
    grayImage = img;
}

double sigma = 1;
GaussianBlur(grayImage, grayImage, Size(), sigma, sigma); // applying Gaussian blur to remove some noise, if present

int thresholdCorner = 40;
vector<KeyPoint> keypointsCorners;
FAST(grayImage, keypointsCorners, thresholdCorner, true); // applying the FAST keypoint detector
if(keypointsCorners.size() > 0){
    cout << keypointsCorners.size() << endl;
}

// Drawing a circle around each corner
for( size_t i = 0; i < keypointsCorners.size(); i++ )
{
    circle( grayImage, keypointsCorners.at(i).pt, 5, Scalar(0), 2, 8, 0 );
}

cv::namedWindow("Display Image");
cv::imshow("Display Image", grayImage);
cv::waitKey(0);
cv::destroyWindow("Display Image");