OpenCV: Having issues with cv::FAST - C++

I'm trying to use the OpenCV FAST algorithm to detect corners from a video feed. The method call and set-up seem pretty straightforward, yet I'm running into a few problems. When I try to use this code:
while(run)
{
    clock_t begin, end;
    img = cvQueryFrame(capture);
    key = cvWaitKey(10);
    cvShowImage("stream", img);

    // cv::FAST variables
    int threshold = 9;
    vector<KeyPoint> keypoints;

    if(key == 'a'){
        //begin = clock();
        Mat mat(tempImg);
        FAST(mat, keypoints, threshold, true);
        //end = clock();
        //cout << "\n TIME FOR CALCULATION: " << double(diffClock(begin,end)) << "\n" ;
    }
I get this error:
OpenCV Error: Assertion failed (image.data && image.type() == CV_8U) in unknown
function, file ........\ocv\opencv\src\cvaux\cvfast.cpp, line 6039
So I figured it's a problem with the depth of the image, but when I add this:
IplImage* tempImg = cvCreateImage(Size(img->width,img->height),8,1);
cvCvtColor(img,tempImg,CV_8U);
I get:
OpenCV Error: Bad number of channels (Incorrect number of channels for this conv
ersion code) in unknown function, file ........\ocv\opencv\src\cv\cvcolor.cpp
, line 2238
I've tried using a Mat instead of an IplImage to capture, but I keep getting the same kind of errors.
Any suggestions or help?
Thanks in advance.
The entire file just to make it easier for anyone:
#include "cv.h"
#include "cvaux.hpp"
#include "highgui.h"
#include <time.h>
#include <iostream>
double diffClock(clock_t begin, clock_t end);
using namespace std;
using namespace cv;
int main(int argc, char** argv)
{
//Create Mat img for camera capture
IplImage* img;
bool run = true;
CvCapture* capture= 0;
capture = cvCaptureFromCAM(-1);
int key =0;
cvNamedWindow("stream", 1);
while(run)
{
clock_t begin,end;
img = cvQueryFrame(capture);
key = cvWaitKey(10);
cvShowImage("stream",img);
//Cv::FAST variables
int threshold=9;
vector<KeyPoint> keypoints;
if(key=='a'){
//begin = clock();
IplImage* tempImg = cvCreateImage(Size(img->width,img->height),8,1);
cvCvtColor(img,tempImg,CV_8U);
Mat mat(img);
FAST(mat,keypoints,threshold,true);
//end = clock();
//cout << "\n TIME FOR CALCULATION: " << double(diffClock(begin,end)) << "\n" ;
}
else if(key=='x'){
run= false;
}
}
cvDestroyWindow( "stream" );
return 0;
}

Whenever you have a problem using the OpenCV API, go check the tests/examples available in the source code: fast.cpp.
This practice is extremely useful and educational. Now, if you take a look at that code, you will notice that the image gets converted to grayscale before calling cv::FAST() on it:
Mat mat(tempImg);
Mat gray;
cvtColor(mat, gray, CV_BGR2GRAY);
FAST(gray,keypoints,threshold,true);
Seems pretty straightforward, indeed.
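For completeness, here is a minimal sketch of how the whole capture loop could look using only the C++ API (assuming OpenCV 2.x; VideoCapture, cvtColor, FAST and drawKeypoints are the standard calls, the window name and key bindings are just examples):

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

using namespace cv;
using namespace std;

int main()
{
    VideoCapture capture(-1);              // open the default camera
    if (!capture.isOpened()) return -1;

    namedWindow("stream");
    Mat frame, gray;
    vector<KeyPoint> keypoints;

    for (;;)
    {
        capture >> frame;                  // grab a BGR frame
        if (frame.empty()) break;

        int key = waitKey(10);
        if (key == 'x') break;

        if (key == 'a')
        {
            cvtColor(frame, gray, CV_BGR2GRAY);   // FAST needs a single-channel 8-bit image
            FAST(gray, keypoints, 9, true);       // threshold 9, non-max suppression on
            drawKeypoints(frame, keypoints, frame, Scalar(0, 255, 0));
            cout << keypoints.size() << " keypoints" << endl;
        }
        imshow("stream", frame);
    }
    return 0;
}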

You need to change this:
cvCvtColor(img,tempImg,CV_8U);
to:
cvCvtColor(img,tempImg,CV_BGR2GRAY);
You can read this
Good Luck
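For reference, a minimal sketch of the corrected IplImage-based path (the third argument of cvCvtColor must be a conversion code such as CV_BGR2GRAY, not a depth constant; this only illustrates the fix, reusing the question's variable names and using-directives):

// img is assumed to be the 3-channel BGR frame returned by cvQueryFrame().
void detectFastCorners(IplImage* img, vector<KeyPoint>& keypoints, int threshold)
{
    IplImage* tempImg = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1); // 8-bit, 1 channel
    cvCvtColor(img, tempImg, CV_BGR2GRAY);   // conversion code, not CV_8U

    Mat mat(tempImg);                        // wraps tempImg's data, no copy
    FAST(mat, keypoints, threshold, true);

    cvReleaseImage(&tempImg);                // keypoints no longer need the pixel data
}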

I started getting the same message with code that had worked previously, and I was certain my Mat was U8 grayscale. It turned out that one of the images I was trying to process was no longer there. So in my case it was a misleading error message.
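If you hit the same assertion, it can be worth guarding against that case explicitly. A minimal sketch (the file name is just an example):

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    // "frame.png" is only a placeholder file name.
    cv::Mat img = cv::imread("frame.png", CV_LOAD_IMAGE_GRAYSCALE);
    if (img.empty())
    {
        std::cerr << "could not read frame.png" << std::endl;  // missing or unreadable file
        return -1;
    }

    std::vector<cv::KeyPoint> keypoints;
    cv::FAST(img, keypoints, 9, true);   // safe: img is a non-empty 8-bit single-channel image
    return 0;
}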

Take a look at this sample code. The code you are using looks like quite outdated OpenCV; in this sample you will find how feature detectors should be used now.
The sample is generic for several feature detectors (including FAST), which is why it looks a bit more complicated.
http://code.opencv.org/projects/opencv/repository/entry/branches/2.4/opencv/samples/cpp/matching_to_many_images.cpp
You will also find more samples in the parent directory.
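As a rough idea of that interface, here is a minimal sketch using the generic detector factory (this assumes the OpenCV 2.4 API; the image path is just an example):

#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>
#include <vector>

using namespace cv;

int main()
{
    Mat img = imread("scene.jpg", CV_LOAD_IMAGE_GRAYSCALE);  // detectors expect 8-bit images
    if (img.empty()) return -1;

    // The same code works for "FAST", "ORB", "SURF", ... as long as the module is available.
    Ptr<FeatureDetector> detector = FeatureDetector::create("FAST");
    std::vector<KeyPoint> keypoints;
    detector->detect(img, keypoints);

    std::cout << "detected " << keypoints.size() << " keypoints" << std::endl;
    return 0;
}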

Please follow the code below to get your desired result. To keep the example simple I am considering a single image only, but you can use the same idea for video frames.
Mat img = imread("IMG.jpg", IMREAD_UNCHANGED);
if (img.empty())
{
    cout << "File not available for reading" << endl;
    return -1;
}

Mat grayImage;
if (img.channels() > 2){
    cvtColor( img, grayImage, CV_BGR2GRAY ); // converting color to gray image
}
else{
    grayImage = img;
}

double sigma = 1;
GaussianBlur(grayImage, grayImage, Size(), sigma, sigma); // applying Gaussian blur to remove some noise, if present

int thresholdCorner = 40;
vector<KeyPoint> keypointsCorners;
FAST(grayImage, keypointsCorners, thresholdCorner, true); // applying FAST key point detector
if (keypointsCorners.size() > 0){
    cout << keypointsCorners.size() << endl;
}

// Drawing a circle around corners
for (size_t i = 0; i < keypointsCorners.size(); i++)
{
    circle( grayImage, keypointsCorners.at(i).pt, 5, Scalar(0), 2, 8, 0 );
}

cv::namedWindow("Display Image");
cv::imshow("Display Image", grayImage);
cv::waitKey(0);
cv::destroyWindow( "Display Image" );

Related

Stitching Video with fast playback of frames

I'm attempting to stitch two videos together by matching their key points and finding the homography between the overlapping video frames. I have successfully got this to work with two different images.
For the video, I have loaded the two separate video files, looped over the frames and copied them to the blank matrices cap1frame and cap2frame for each video.
Then I send each frame from each video to the stitching function, which matches the keypoints, finds the homography between the two frames, stitches them and displays the resultant image (matching based on the OpenCV example).
The stitching is successful; however, it results in very slow playback of the video and some graphical anomalies on the side of the frame, as seen in the photo.
I'm wondering how I can make this more efficient to get fast video playback.
int main(int argc, char** argv){
    // Create a VideoCapture object and open the input file
    VideoCapture cap1("left.mov");
    VideoCapture cap2("right.mov");

    // Check if camera opened successfully
    if(!cap1.isOpened() || !cap2.isOpened()){
        cout << "Error opening video stream or file" << endl;
        return -1;
    }

    //Trying to loop frames
    for (;;){
        Mat cap1frame;
        Mat cap2frame;
        cap1 >> cap1frame;
        cap2 >> cap2frame;

        // If the frame is empty, break immediately
        if (cap1frame.empty() || cap2frame.empty())
            break;

        //sending each frame from each video to the stitch function then displaying
        imshow( "Result", Stitching(cap1frame, cap2frame));
        if(waitKey(30) >= 0) break;
        //destroyWindow("Stitching");
        // waitKey(0);
    }
    return 0;
}
I was able to resolve my issue by pre-calculating the homography with just the first frame of video, so that the function is only called once.
I then looped through the rest of the video to apply the warping of the video frames so they could be stitched together based on the pre-calculated homography. This bit was initially inside my stitching function.
I still had an issue at this point with playback being really slow when calling imshow. I decided to export the resultant video instead, and this worked once the correct fps was set in the VideoWriter object. I wonder if I just needed to adjust the playback fps for imshow, but I'm not sure about that.
I've got my full code below:
#include <stdio.h>
#include <iostream>
#include "opencv2/core.hpp"
#include "opencv2/features2d.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/calib3d.hpp"
#include "opencv2/xfeatures2d.hpp"
#include <opencv2/xfeatures2d/nonfree.hpp>
#include <opencv2/xfeatures2d/cuda.hpp>
#include <opencv2/opencv.hpp>
#include <vector>

using namespace cv;
using namespace std;

//To get homography from images passed in. Matching points in the images.
Mat Stitching(Mat image1, Mat image2){
    Mat I_1 = image1;
    Mat I_2 = image2;

    //based on https://docs.opencv.org/3.3.0/d7/dff/tutorial_feature_homography.html
    cv::Ptr<Feature2D> f2d = xfeatures2d::SIFT::create();

    // Step 1: Detect the keypoints:
    std::vector<KeyPoint> keypoints_1, keypoints_2;
    f2d->detect( I_1, keypoints_1 );
    f2d->detect( I_2, keypoints_2 );

    // Step 2: Calculate descriptors (feature vectors)
    Mat descriptors_1, descriptors_2;
    f2d->compute( I_1, keypoints_1, descriptors_1 );
    f2d->compute( I_2, keypoints_2, descriptors_2 );

    // Step 3: Matching descriptor vectors using BFMatcher
    BFMatcher matcher;
    std::vector< DMatch > matches;
    matcher.match( descriptors_1, descriptors_2, matches );

    // Keep best matches only to have a nice drawing.
    // We sort distances between descriptor matches.
    Mat index;
    int nbMatch = int(matches.size());
    Mat tab(nbMatch, 1, CV_32F);
    for (int i = 0; i < nbMatch; i++)
        tab.at<float>(i, 0) = matches[i].distance;
    sortIdx(tab, index, SORT_EVERY_COLUMN + SORT_ASCENDING);
    vector<DMatch> bestMatches;
    for (int i = 0; i < 200; i++)
        bestMatches.push_back(matches[index.at<int>(i, 0)]);

    // 1st image is the destination image and the 2nd image is the src image
    std::vector<Point2f> dst_pts;    //1st
    std::vector<Point2f> source_pts; //2nd
    for (vector<DMatch>::iterator it = bestMatches.begin(); it != bestMatches.end(); ++it) {
        //cout << it->queryIdx << "\t" << it->trainIdx << "\t" << it->distance << "\n";
        //-- Get the keypoints from the good matches
        dst_pts.push_back( keypoints_1[ it->queryIdx ].pt );
        source_pts.push_back( keypoints_2[ it->trainIdx ].pt );
    }

    Mat H_12 = findHomography( source_pts, dst_pts, CV_RANSAC );
    return H_12;
}

int main(int argc, char** argv){
    //Mats to get the first frame of video and pass to Stitching function.
    Mat I1, h_I1;
    Mat I2, h_I2;

    // Create a VideoCapture object and open the input file
    VideoCapture cap1("left.mov");
    VideoCapture cap2("right.mov");
    cap1.set(CV_CAP_PROP_BUFFERSIZE, 10);
    cap2.set(CV_CAP_PROP_BUFFERSIZE, 10);

    //Check if camera opened successfully
    if(!cap1.isOpened() || !cap2.isOpened()){
        cout << "Error opening video stream or file" << endl;
        return -1;
    }

    //passing first frame to Stitching function
    if (cap1.read(I1)){
        h_I1 = I1;
    }
    if (cap2.read(I2)){
        h_I2 = I2;
    }
    Mat homography;
    //passing here.
    homography = Stitching(h_I1, h_I2);
    std::cout << homography << '\n';

    //creating VideoWriter object with defined values.
    VideoWriter video("video/output.avi", CV_FOURCC('M','J','P','G'), 30, Size(1280,720));

    //Looping through frames of both videos.
    for (;;){
        Mat cap1frame;
        Mat cap2frame;
        cap1 >> cap1frame;
        cap2 >> cap2frame;

        // If the frame is empty, break immediately
        if (cap1frame.empty() || cap2frame.empty())
            break;

        Mat warpImage2;
        //warping the second video cap2frame so it matches with the first one.
        //size is defined as the final video size
        warpPerspective(cap2frame, warpImage2, homography, Size(1280,720), INTER_CUBIC);

        //final is the final canvas where both videos will be warped onto.
        Mat final (Size(1280,720), CV_8UC3);
        //Mat final(Size(cap1frame.cols*2 + cap1frame.cols, cap1frame.rows*2),CV_8UC3);

        //Using ROIs to get the relevant areas of each video.
        Mat roi1(final, Rect(0, 0, cap1frame.cols, cap1frame.rows));
        Mat roi2(final, Rect(0, 0, warpImage2.cols, warpImage2.rows));

        //warping images on to the canvases which are linked with the final canvas.
        warpImage2.copyTo(roi2);
        cap1frame.copyTo(roi1);

        //writing to video.
        video.write(final);
        //imshow ("Result", final);
        if(waitKey(30) >= 0) break;
    }
    video.release();
    return 0;
}

What is causing the "Camera dropped frame!" error when running detectMultiScale using OpenCV?

I have a function that gets called from main in a for loop that searches for faces from a video feed. The code runs perfectly in the first run through, but on the second loop it outputs many "Camera dropped frame!" errors to the console and no longer updates the video feed.
I have found the line that causes the errors; it is the one that contains the detectMultiScale function. The full function is here:
void findInFrame(Mat inputFrame)
{
    vector<Rect> faces;
    Mat grayFrame;
    cvtColor(inputFrame, grayFrame, COLOR_BGR2GRAY);
    faceClassifier.detectMultiScale( grayFrame, faces);

    for(int i = 0; i < faces.size(); i++)
    {
        Point center( faces[i].x + faces[i].width*0.5, faces[i].y + faces[i].height*0.5 );
        ellipse(inputFrame, center, Size( faces[i].width*0.5, faces[i].height*0.5), 0, 0, 360, Scalar( 255, 0, 255 ), 4, 8, 0 );
        Mat faceROI = grayFrame(faces[i]);
    }
    imshow("frame", inputFrame);
}
The line that throws the error is:
faceClassifier.detectMultiScale( grayFrame, faces);
Every frame after the first causes the errors. How can I fix this?
Main is here:
#include <iostream>
#include <unistd.h>
#include <opencv2/core.hpp>
#include <opencv2/opencv.hpp>

using namespace std;
using namespace cv;

string faceHaar = "/usr/local/share/OpenCV/haarcascades/haarcascade_frontalface_alt.xml";
string eyesHaar = "/usr/local/share/OpenCV/haarcascades/haarcascade_eye.xml";
CascadeClassifier faceClassifier;

void findInFrame(Mat inputFrame);

int main(int argc, const char * argv[])
{
    VideoCapture cam(0);
    Mat frame;

    if(!faceClassifier.load(faceHaar))
    {
        cout << "Error loading face cascade" << endl;
        return -1;
    }

    for(;;)
    {
        cam >> frame;
        if(!frame.empty())
        {
            findInFrame(frame);
            usleep(1000);
        }
        else
        {
            cout << "frame empty" << endl;
        }
    }
    return 0;
}
Try specifying the parameters a bit more; I feel like it's just taking too long to process your matches.
faceClassifier.detectMultiScale(grayFrame, faces, 1.3, 3, 0|CV_HAAR_SCALE_IMAGE, Size(20, 30));
Here Size(20, 30) is roughly the size your detector was trained at, 1.3 is the scale factor and 3 is how many nearest neighbours are needed for a match.
Aside from that, dropping frames isn't a huge issue, but you could well be doing something wrong elsewhere in your code, for example where you grab your new frame.
I would also consider changing the function to void findInFrame(Mat &inputFrame) and calling imshow in your main loop, not in the function. Note that &inputFrame isn't really a conventional pointer and doesn't require you to change how you reference inputFrame in the function.
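A rough sketch of that restructuring might look like this (same function and variable names as in the question; the detectMultiScale parameters mirror the suggestion above and the rectangle drawing is a simplification):

// Detection only; drawing happens in place, display is left to the caller.
void findInFrame(Mat &inputFrame)
{
    vector<Rect> faces;
    Mat grayFrame;
    cvtColor(inputFrame, grayFrame, COLOR_BGR2GRAY);
    faceClassifier.detectMultiScale(grayFrame, faces, 1.3, 3, 0|CV_HAAR_SCALE_IMAGE, Size(20, 30));

    for (size_t i = 0; i < faces.size(); i++)
        rectangle(inputFrame, faces[i], Scalar(255, 0, 255), 2);
}

// In main():
//     cam >> frame;
//     if (!frame.empty())
//     {
//         findInFrame(frame);
//         imshow("frame", frame);    // show the annotated frame from the main loop
//         if (waitKey(10) >= 0) break;
//     }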

Image Manipulation-Outline

Given below is the code that I am using to find the difference between 2 images.
#include "cv.h"
#include "highgui.h"
#include <stdio.h>
#include<iostream>
int main()
{
char a,b;
cv::Mat frame;
cv::Mat frame2;
VideoCapture cap(0);
if(!cap.isOpened())
{
cout<<"Camera is not connected"<<endl;
getchar();
exit(0);
}
Mat edges;
namedWindow("Camera Feed",1);
cout<<"Ready for background?(y/Y)"<<endl;
cin>>a;
if(a=='y'||a=='Y')
{
cap>>frame;
cv::cvtColor(frame,frame,CV_RGB2GRAY);
cv::GaussianBlur(frame,frame,cv::Size(51,51),2.00,0,BORDER_DEFAULT);
}
cv::waitKey(5000);
cout<<"Ready for foreground?(y/Y)"<<endl;
cin>>b;
if(b=='y'||b=='Y')
{
cap>>frame2;
cv::cvtColor(frame2,frame2,CV_RGB2GRAY);
cv::GaussianBlur(frame2,frame2,cv::Size(51,51),2.00,0,BORDER_DEFAULT);
}
cv::Mat frame3;
cv::absdiff(frame,frame2,frame3);
imwrite("img_bw.jpg",frame3);
return 0;
}
The output is something like this.
I wanted to know if there is any way I can draw something like an outline around the body. Thanks.
I just tried the following method.
First I dilated the grayscale image, then applied adaptive thresholding on it.
Then I found contours in the image and, on the assumption that the body will be the biggest blob in the image, drew the outline of the biggest blob.
import cv2
import numpy as np

img = cv2.imread('sofqn.jpg')
gray = cv2.imread('sofqn.jpg', 0)

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (10, 10))
gray = cv2.dilate(gray, kernel)
thresh = cv2.adaptiveThreshold(gray, 255, 0, 1, 11, 2)

cont, hier = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

# keep the biggest contour, assuming the body is the largest blob
max_area = -1
best_cnt = None
for cnt in cont:
    area = cv2.contourArea(cnt)
    if area > max_area:
        max_area = area
        best_cnt = cnt

cv2.drawContours(img, [best_cnt], 0, [0, 255, 0], 2)
Below is the result:

Problems with opencv stereoRectifyUncalibrated

I've been trying to rectify and build the disparity mapping for a pair of images using OpenCV's stereoRectifyUncalibrated, but I'm not getting very good results. My code is:
template<class T>
T convertNumber(string& number)
{
    istringstream ss(number);
    T t;
    ss >> t;
    return t;
}

void readPoints(vector<Point2f>& points, string filename)
{
    fstream filest(filename.c_str(), ios::in);
    string line;

    assert(filest != NULL);
    getline(filest, line);
    do{
        int posEsp = line.find_first_of(' ');
        string posX = line.substr(0, posEsp);
        string posY = line.substr(posEsp+1, line.size() - posEsp);

        float X = convertNumber<float>(posX);
        float Y = convertNumber<float>(posY);

        Point2f pnt = Point2f(X, Y);
        points.push_back(pnt);
        getline(filest, line);
    }while(!filest.eof());
    filest.close();
}

void drawKeypointSequence(Mat lFrame, Mat rFrame, vector<KeyPoint>& lKeyp, vector<KeyPoint>& rKeyp)
{
    namedWindow("prevFrame", WINDOW_AUTOSIZE);
    namedWindow("currFrame", WINDOW_AUTOSIZE);
    moveWindow("prevFrame", 0, 300);
    moveWindow("currFrame", 650, 300);

    Mat rFrameAux;
    rFrame.copyTo(rFrameAux);
    Mat lFrameAux;
    lFrame.copyTo(lFrameAux);

    int size = rKeyp.size();
    for(int i=0; i<size; i++)
    {
        vector<KeyPoint> drawRightKeyp;
        vector<KeyPoint> drawleftKeyp;

        drawRightKeyp.push_back(rKeyp[i]);
        drawleftKeyp.push_back(lKeyp[i]);

        cout << rKeyp[i].pt << " <<<>>> " << lKeyp[i].pt << endl;

        drawKeypoints(rFrameAux, drawRightKeyp, rFrameAux, Scalar::all(255), DrawMatchesFlags::DRAW_OVER_OUTIMG);
        drawKeypoints(lFrameAux, drawleftKeyp, lFrameAux, Scalar::all(255), DrawMatchesFlags::DRAW_OVER_OUTIMG);

        imshow("currFrame", rFrameAux);
        imshow("prevFrame", lFrameAux);

        waitKey(0);
    }
    imwrite("RightKeypFrame.jpg", rFrameAux);
    imwrite("LeftKeypFrame.jpg", lFrameAux);
}

int main(int argc, char* argv[])
{
    StereoBM stereo(StereoBM::BASIC_PRESET, 16*5, 21);
    double ndisp = 16*4;

    assert(argc == 5);
    string rightImgFilename(argv[1]);    // Right image (current frame)
    string leftImgFilename(argv[2]);     // Left image (previous frame)
    string rightPointsFilename(argv[3]); // Right image points file
    string leftPointsFilename(argv[4]);  // Left image points file

    Mat rightFrame = imread(rightImgFilename.c_str(), 0);
    Mat leftFrame = imread(leftImgFilename.c_str(), 0);

    vector<Point2f> rightPoints;
    vector<Point2f> leftPoints;
    vector<KeyPoint> rightKeyp;
    vector<KeyPoint> leftKeyp;

    readPoints(rightPoints, rightPointsFilename);
    readPoints(leftPoints, leftPointsFilename);
    assert(rightPoints.size() == leftPoints.size());
    KeyPoint::convert(rightPoints, rightKeyp);
    KeyPoint::convert(leftPoints, leftKeyp);

    // Draw the keypoints sequentially, to check the consistency of the matching
    drawKeypointSequence(leftFrame, rightFrame, leftKeyp, rightKeyp);

    Mat fundMatrix = findFundamentalMat(leftPoints, rightPoints, CV_FM_8POINT);
    Mat homRight;
    Mat homLeft;
    Mat disp16 = Mat(rightFrame.rows, leftFrame.cols, CV_16S);
    Mat disp8 = Mat(rightFrame.rows, leftFrame.cols, CV_8UC1);

    stereoRectifyUncalibrated(leftPoints, rightPoints, fundMatrix, rightFrame.size(), homLeft, homRight);

    warpPerspective(rightFrame, rightFrame, homRight, rightFrame.size());
    warpPerspective(leftFrame, leftFrame, homLeft, leftFrame.size());

    namedWindow("currFrame", WINDOW_AUTOSIZE);
    namedWindow("prevFrame", WINDOW_AUTOSIZE);
    moveWindow("currFrame", 650, 300);
    moveWindow("prevFrame", 0, 300);

    imshow("currFrame", rightFrame);
    imshow("prevFrame", leftFrame);

    imwrite("RectfRight.jpg", rightFrame);
    imwrite("RectfLeft.jpg", leftFrame);

    waitKey(0);

    stereo(rightFrame, leftFrame, disp16, CV_16S);
    disp16.convertTo(disp8, CV_8UC1, 255/ndisp);

    FileStorage file("disp_map.xml", FileStorage::WRITE);
    file << "disparity" << disp8;
    file.release();

    imshow("disparity", disp8);
    imwrite("disparity.jpg", disp8);
    moveWindow("disparity", 0, 0);
    waitKey(0);
}
drawKeypointSequence is the way I visually check the consistency of the points I have for both images. By drawing each of their keypoints in sequence, I can be sure that keypoint i on image A is keypoint i on image B.
I've also tried playing with the ndisp parameter, but it didn't help much.
I tried it for the following pair of images:
LeftImage
RightImage
got the following rectified pair:
RectifiedLeft
RectifiedRight
and finally, the following disparity map
DisparityMap
Which, as you can see, is quite bad. I've also tried the same pair of images with the following stereoRectifyUncalibrated example: http://programmingexamples.net/wiki/OpenCV/WishList/StereoRectifyUncalibrated and the SBM_Sample.cpp from opencv tutorial code samples to build the disparity map, and got a very similar result.
I'm using OpenCV 2.4.
Thanks in advance!
Besides possible calibration problems, your images clearly lack the texture needed for stereo block matching to work.
The algorithm will see many ambiguities and too-large disparities on flat (non-textured) parts.
Note however that the keypoints seem to match well, so even if the rectification output looks weird it is probably correct.
You can test your code against standard images from the Middlebury stereo page for sanity checks.
I would suggest doing a stereo calibration: take multiple pictures of a chessboard and use stereocalibrate.cpp on your computer. I am saying that because you are using stereoRectifyUncalibrated; while that algorithm does not need to know the intrinsic parameters of the cameras, it heavily depends on the epipolar geometry. Therefore, if the camera lenses have significant distortion, it would be better to correct it before computing the fundamental matrix and calling this function. For example, distortion coefficients can be estimated for each head of the stereo camera separately by using calibrateCamera(). Then the images can be corrected using undistort(), or just the point coordinates can be corrected with undistortPoints().
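As a rough sketch of that last suggestion (leftK/rightK and leftDist/rightDist are assumed to come from a prior calibrateCamera() run for each camera; the other names follow the question):

// Undistort the matched point coordinates before estimating the fundamental matrix.
// Passing the camera matrix as the last argument keeps the output in pixel coordinates.
vector<Point2f> leftUndist, rightUndist;
undistortPoints(leftPoints,  leftUndist,  leftK,  leftDist,  noArray(), leftK);
undistortPoints(rightPoints, rightUndist, rightK, rightDist, noArray(), rightK);

Mat fundMatrix = findFundamentalMat(leftUndist, rightUndist, CV_FM_RANSAC);
Mat homLeft, homRight;
stereoRectifyUncalibrated(leftUndist, rightUndist, fundMatrix,
                          rightFrame.size(), homLeft, homRight);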

Camera calibration with opencv (Assertion failed fault)

I am trying to get camera calibration parameters by using the OpenCV camera calibration functions. I have a video which includes a checkerboard in different positions, and I am trying to find the calibration parameters and the points in the video. But I couldn't pass the calibration phase yet. I can find the corners of the checkerboard and show them in an OpenCV window, but when it comes to the line:
calibrateCamera(objectPoints,imagePoints..............)
it throws an exception and stops.
I get the following error: OpenCV Error: Assertion failed (nimages > 0 && nimages == (int)imagePoints1.total() && ...)
Here is my code:
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "C:/opencv/include/opencv/cv.h"
#include <iostream>
#include <math.h>
using namespace cv;
using namespace std;
std::vector<cv::Point2f> imageCorners;
std::vector<cv::Point3f> objectCorners;
std::vector<std::vector<cv::Point3f>> objectPoints;
std::vector<std::vector<cv::Point2f>> imagePoints;
void addPoints(const std::vector<cv::Point2f>&imageCorners, const std::vector<cv::Point3f>& objectCorners)
{
// 2D image points from one view
imagePoints.push_back(imageCorners);
// corresponding 3D scene points
objectPoints.push_back(objectCorners);
}
int main()
{
int key;
cv::Mat image;
cv::Mat gray_image;
VideoCapture cap("here goes path of the file");
if (!cap.isOpened()) // check if we succeeded
cout<<"failed";
else
cout<<"success";
cvNamedWindow( "video",0);
cv::Size boardSize(8,6);
// output Matrices
cv::Mat cameraMatrix;
std::vector<cv::Mat> rvecs, tvecs;
cv::Mat distCoeffs;
for (int i=0; i<boardSize.height; i++)
{
for (int j=0; j<boardSize.width; j++)
{
objectCorners.push_back(cv::Point3f(i, j, 0.0f));
}
}
int frame=1;
int corner_count=0;
while(1)
{
if(cap.read(image))
{
frame++;
if(frame%20==0)
{
if(waitKey(30) >= 0) break;
bool found = cv::findChessboardCorners(image, boardSize, imageCorners);
cvtColor( image, gray_image, CV_RGB2GRAY );
addPoints(imageCorners, objectCorners);
//bool found = cv::findChessboardCorners(image,boardSize, imageCorners);
cv::drawChessboardCorners(gray_image,boardSize, imageCorners,found);
imshow( "video", gray_image );
}
}
else
break;
}
int flag=0;
std::string text="";
for (int i=1; i<imagePoints.size();i++)
{
std::stringstream out;
out << imagePoints[i];
text=out.str();
cout<<text<<endl;
}
calibrateCamera(objectPoints,imagePoints,gray_image.size(), cameraMatrix, distCoeffs, rvecs, tvecs, flag);
return 0;
}
Print the size of all your std::vectors; I suspect you are passing an empty vector to that function.
EDIT:
I've shared some instructions in this answer on how to do camera calibration. Those references include working source code. You'll probably have to do a small adaptation on those programs so they work with video instead.
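As a rough sketch of that check, only store a view when the chessboard was actually found, and print the vector sizes before calibrating (this keeps objectPoints and imagePoints the same length; names follow the question):

bool found = cv::findChessboardCorners(image, boardSize, imageCorners);
if (found && (int)imageCorners.size() == boardSize.area())
{
    addPoints(imageCorners, objectCorners);   // only store complete, successful detections
}

// Before calling calibrateCamera(): both outer vectors must be non-empty and equal in size.
std::cout << "views: " << imagePoints.size()
          << "  object sets: " << objectPoints.size() << std::endl;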
you should look at this:
http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#bool findChessboardCorners(InputArray image, Size patternSize, OutputArray corners, int flags)
It says your source chessboard view must be an 8-bit grayscale or color image.
So you must use this:
bool found = cv::findChessboardCorners(gray_image, boardSize, imageCorners);
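Putting it together, the detection part of the loop could look roughly like this (the cornerSubPix refinement is optional but common; names follow the question):

cvtColor(image, gray_image, CV_RGB2GRAY);          // convert first...
bool found = cv::findChessboardCorners(gray_image, boardSize, imageCorners); // ...then detect on the gray image

if (found)
{
    // optional sub-pixel refinement of the detected corners
    cv::cornerSubPix(gray_image, imageCorners, cv::Size(11, 11), cv::Size(-1, -1),
                     cv::TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));
    addPoints(imageCorners, objectCorners);
}

cv::drawChessboardCorners(gray_image, boardSize, imageCorners, found);
imshow("video", gray_image);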