OpenCV 2.4.12_2 calcOpticalFlowPyrLK() error - C++

I'm trying to use OpenCV's calcOpticalFlowPyrLK() function and I can't seem to get past this error:
OpenCV Error: Assertion failed (mtype == type0 || (CV_MAT_CN(mtype) == CV_MAT_CN(type0) && ((1 << type0) & fixedDepthMask) != 0)) in create, file /tmp/opencv20160107-21708-lubvml/opencv-2.4.12/modules/core/src/matrix.cpp, line 1486
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: /tmp/opencv20160107-21708-lubvml/opencv-2.4.12/modules/core/src/matrix.cpp:1486: error: (-215) mtype == type0 || (CV_MAT_CN(mtype) == CV_MAT_CN(type0) && ((1 << type0) & fixedDepthMask) != 0) in function create
I checked the input parameters and both images are:
Size:[1280 x 720]
Depth:(CV_8U)
Here is the code:
Mat frame;
Mat back;
Mat fore;
Mat temp;
Mat prevImage;
Mat currImage;
// for floodfill
Point seed = Point(0,0);
VideoCapture cap("./Sequence2/Sequence_03_%03d.jpg");
// Background subtraction
BackgroundSubtractorMOG2 bg;
bg.set("nmixtures",5);
bg.set("detectShadows", true);
bg.set("fTau", 0.5);
//
bool foundpoints = false;
vector<uchar> status, err;
// Blob Detector
SimpleBlobDetector::Params params;
//params.filterByColor = true;
//params.blobColor = 255; // use if bitwise not statement used
// Filter by Area.
params.filterByArea = true; //size
params.minArea = 25;
params.maxArea = 300;
params.filterByCircularity = true; // circle or not
params.minCircularity = 0.15;
params.filterByConvexity = true; // closed or not
params.minConvexity = 0.92;
params.filterByInertia = true; // elongated or not
params.minInertiaRatio = 0.40;
SimpleBlobDetector detector(params);
//
std::vector<std::vector<Point> > contours;
std::vector<KeyPoint> keypoints;
std::vector<Point2f> pKeypoints, prevKeypoints;
namedWindow("Video");
//namedWindow("Background");
for (int i = 0; i < 623; i++) {
    file << "./Output3/image" << i << ".jpg";
    cap >> frame;
    // CONVERT TO GRAY
    cvtColor(frame, temp, CV_RGB2GRAY);
    //adaptiveThreshold(frame,temp,1,ADAPTIVE_THRESH_MEAN_C,THRESH_BINARY,3,1);
    bg.operator ()(temp,fore);
    //bg.getBackgroundImage(back);
    // THRESHOLD THE IMAGE
    threshold(fore,fore,80,150,THRESH_TOZERO);
    erode(fore,fore,cv::Mat());
    dilate(fore,fore,cv::Mat());
    findContours(fore,contours,CV_RETR_EXTERNAL,CV_CHAIN_APPROX_NONE);
    // REMOVE SOME NOISE
    cv::floodFill(fore, seed,(255));
    erode(fore,fore,cv::Mat());
    dilate(fore,fore,cv::Mat());
    //bitwise_not(fore,fore);
    erode(fore,fore,cv::Mat());
    dilate(fore,fore,cv::Mat());
    detector.detect(fore, keypoints);
    //std::cout << keypoints.size() << "\n";
    if (keypoints.size() > 0){
        if (foundpoints == false)
            foundpoints = true;
        else{
            cv::Size winsize = fore.size();
            KeyPoint::convert(keypoints,pKeypoints);
            // std::cout << pKeypoints.size() << " , " << prevKeypoints.size() << " , " << prevImage.depth() << " , "<< fore.depth();
            calcOpticalFlowPyrLK(prevImage,fore,prevKeypoints,pKeypoints,status,err,winsize);
            prevImage.pop_back();
        }
        prevImage.push_back(fore);
        KeyPoint::convert(keypoints,prevKeypoints);
    }
    imshow("Frame",fore); // frame
    // imshow("Background",back);
    if(cv::waitKey(30) >= 0) break;
    imwrite(file.str(), fore);
    file.str("");
}

Change vector<uchar> err to Mat err (or to vector<float> err). calcOpticalFlowPyrLK writes the per-point error as a floating-point array, so a fixed-type vector<uchar> output triggers the create() assertion; status can stay a vector<uchar>.
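For reference, a minimal self-contained sketch of the fix (hypothetical synthetic frames and a single made-up feature point, just to show the output types; a plain 21x21 search window is used here rather than the full frame size):

#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/video/tracking.hpp>

int main()
{
    // Two same-size 8-bit single-channel frames, as in the question.
    cv::Mat prevImage(720, 1280, CV_8UC1, cv::Scalar(0));
    cv::Mat fore(720, 1280, CV_8UC1, cv::Scalar(0));
    cv::circle(prevImage, cv::Point(100, 100), 5, cv::Scalar(255), -1);
    cv::circle(fore,      cv::Point(103, 102), 5, cv::Scalar(255), -1);

    std::vector<cv::Point2f> prevKeypoints(1, cv::Point2f(100.f, 100.f));
    std::vector<cv::Point2f> pKeypoints;

    std::vector<uchar> status; // per-point "tracked" flag is CV_8U, so vector<uchar> is fine here
    cv::Mat err;               // per-point error is written as CV_32F, so use Mat (or vector<float>)

    cv::calcOpticalFlowPyrLK(prevImage, fore, prevKeypoints, pKeypoints,
                             status, err, cv::Size(21, 21));
    return 0;
}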


Video frame returning !_src.empty() in function 'cvtColor' error [duplicate]

I am trying to convert frames from a video to Tensors as the video is playing. This is my code:
#include <iostream>
#include "src/VideoProcessing.h"
#include <opencv2/opencv.hpp>
#include <opencv2/videoio.hpp>
typedef cv::Point3_<float> Pixel;
const uint WIDTH = 224;
const uint HEIGHT = 224;
const uint CHANNEL = 3;
const uint OUTDIM = 128;
void normalize(Pixel &pixel){
pixel.x = (pixel.x / 255.0 - 0.5) * 2.0;
pixel.y = (pixel.y / 255.0 - 0.5) * 2.0;
pixel.z = (pixel.z / 255.0 - 0.5) * 2.0;
}
int main() {
int fps = VideoProcessing::getFPS("trainer.mp4");
unsigned long size = VideoProcessing::getSize("trainer.mp4");
cv::VideoCapture cap("trainer.mp4");
//Check if input video exists
if(!cap.isOpened()){
std::cout<<"Error opening video stream or file"<<std::endl;
return -1;
}
//Create a window to show input video
cv::namedWindow("input video", cv::WINDOW_NORMAL);
//Keep playing video until video is completed
while(true){
cv::Mat frame;
frame.convertTo(frame, CV_32FC3);
cv::cvtColor(frame, frame, cv::COLOR_BGR2RGB); // convert to float; BGR -> RGB
// normalize to -1 & 1
auto* pixel = frame.ptr<Pixel>(0,0);
const Pixel* endPixel = pixel + frame.cols * frame.rows;
for (; pixel != endPixel; pixel++){normalize(*pixel);}
// resize image as model input
cv::resize(frame, frame, cv::Size(WIDTH, HEIGHT));
//Capture frame by frame
bool success = cap.read(frame);
//If frame is empty then break the loop
if (!success){
std::cout << "Found the end of the video" << std::endl;
break;
}
//Show the current frame
cv::imshow("input video", frame);
if (cv::waitKey(10) == 27){
std::cout << "Esc key is pressed by user. Stopping the video" << std::endl;
break;
}
}
//Close window after input video is completed
cap.release();
//Destroy all the opened windows
cv::destroyAllWindows();
std::cout << "Video file FPS: " << fps << std::endl;
std::cout << "Video file size: " << size << std::endl;
return 0;
}
My goal (down the road) is to run inference on each frame to get landmarks. However, at this stage, I see this error:
terminate called after throwing an instance of 'cv::Exception'
what(): OpenCV(4.1.0) /home/onur/opencv/modules/imgproc/src/color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cvtColor'
Aborted (core dumped)
Where am I going wrong?
You have to read the frame before performing any conversion.
Move this part:
//Capture frame by frame
bool success = cap.read(frame);
//If frame is empty then break the loop
if (!success){
std::cout << "Found the end of the video" << std::endl;
break;
}
so that it comes just after:
cv::Mat frame;
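Here is a condensed sketch of the reordered loop, assuming the same "trainer.mp4" input; the VideoProcessing FPS/size helpers from the question are left out:

#include <iostream>
#include <opencv2/opencv.hpp>

typedef cv::Point3_<float> Pixel;

void normalize(Pixel &pixel) {
    pixel.x = (pixel.x / 255.0 - 0.5) * 2.0;
    pixel.y = (pixel.y / 255.0 - 0.5) * 2.0;
    pixel.z = (pixel.z / 255.0 - 0.5) * 2.0;
}

int main() {
    cv::VideoCapture cap("trainer.mp4");
    if (!cap.isOpened()) {
        std::cout << "Error opening video stream or file" << std::endl;
        return -1;
    }
    cv::namedWindow("input video", cv::WINDOW_NORMAL);
    while (true) {
        cv::Mat frame;
        // Read first -- only a successfully read, non-empty frame may be converted.
        if (!cap.read(frame)) {
            std::cout << "Found the end of the video" << std::endl;
            break;
        }
        frame.convertTo(frame, CV_32FC3);              // convert to float
        cv::cvtColor(frame, frame, cv::COLOR_BGR2RGB); // BGR -> RGB
        auto* pixel = frame.ptr<Pixel>(0, 0);          // normalize to -1 & 1
        const Pixel* endPixel = pixel + frame.cols * frame.rows;
        for (; pixel != endPixel; pixel++) { normalize(*pixel); }
        cv::resize(frame, frame, cv::Size(224, 224));  // resize to the model input size
        // imshow maps float images from [0, 1], so this preview is only a rough check.
        cv::imshow("input video", frame);
        if (cv::waitKey(10) == 27) break;
    }
    cap.release();
    cv::destroyAllWindows();
    return 0;
}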

OpenCV face recognition tutorial code not working with OpenCV 3.0

I am trying to use the code given in the tutorial, but it is not working. I am sure all the dependencies are there, yet the build gives me errors saying that some members and functions cannot be found.
Here is the code:
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/face.hpp"
#include <iostream>
#include <fstream>
#include <sstream>
using namespace cv;
using namespace std;
static Mat norm_0_255(InputArray _src){
Mat src = _src.getMat();
Mat dst;
switch(src.channels()){
case 1:
cv::normalize(_src, dst, 0, 255, NORM_MINMAX, CV_8UC1);
break;
case 3:
cv::normalize(_src, dst, 0, 255, NORM_MINMAX, CV_8UC3);
break;
default:
src.copyTo(dst);
break;
}
return dst;
}
static void read_csv(const string& filename, vector<Mat>& images, vector<int>& labels, char separator = ';'){
std::ifstream file(filename.c_str(), ifstream::in);
if(!file){
string error_message = "No valid input file was given, please check the given filename.";
CV_Error(CV_StsBadArg, error_message);
}
string line, path, classlabel;
while(getline(file, line)){
stringstream liness(line);
getline(liness, path, separator);
getline(liness, classlabel);
if(!path.empty() && !classlabel.empty()){
images.push_back(imread(path, 0));
labels.push_back(atoi(classlabel.c_str()));
}
}
}
int main(int argc, const char *argv[]){
if(argc < 2){
cout << "usage: " << argv[0] << " <csv.ext> <output_folder> " << endl;
exit(0);
}
string output_folder = ".";
if(argc == 3){
output_folder = string(argv[2]);
}
string fn_csv = string(argv[1]);
vector<Mat> images;
vector<int> labels;
try{
read_csv(fn_csv, images, labels);
}catch(cv::Exception& e){
cerr << "Error opening file \"" << fn_csv << "\". Reason: " << e.msg << endl;
exit(1);
}
if(images.size() <= 1){
string error_message = "This demo needs at least 2 images to work. please add more images to your data set!";
CV_Error(CV_StsError, error_message);
}
int height = images[0].rows;
Mat testSample = images[images.size() - 1];
int testLabel = labels[labels.size() - 1];
images.pop_back();
labels.pop_back();
Ptr<cv::face::FaceRecognizer> model = cv::face::createEigenFaceRecognizer();
model->train(images, labels);
int predictedLabel = model->predict(testSample);
string result_message = format("Predicted class = %d / Actual class = %d ", predictedLabel, testLabel);
cout << result_message << endl;
Mat eigenvalues = model->getMat("eigenvalues");
Mat W = model->getMat("eigenvectors");
Mat mean = model->getMat("mean");
if(argc == 2){
imshow("mean", norm_0_255(mean.reshape(1, images[0].rows)));
}else{
imwrite(format("%s/mean.png", output_folder.c_str()), norm_0_255(mean.reshape(1, images[0].rows)));
}
// Display or save the Eigenfaces:
for (int i = 0; i < min(10, W.cols); i++) {
string msg = format("Eigenvalue #%d = %.5f", i, eigenvalues.at<double>(i));
cout << msg << endl;
// get eigenvector #i
Mat ev = W.col(i).clone();
// Reshape to original size & normalize to [0...255] for imshow.
Mat grayscale = norm_0_255(ev.reshape(1, height));
// Show the image & apply a Jet colormap for better sensing.
Mat cgrayscale;
applyColorMap(grayscale, cgrayscale, COLORMAP_JET);
// Display or save:
if(argc == 2) {
imshow(format("eigenface_%d", i), cgrayscale);
} else {
imwrite(format("%s/eigenface_%d.png", output_folder.c_str(), i), norm_0_255(cgrayscale));
}
}
// Display or save the image reconstruction at some predefined steps:
for(int num_components = min(W.cols, 10); num_components < min(W.cols, 300); num_components+=15) {
// slice the eigenvectors from the model
Mat evs = Mat(W, Range::all(), Range(0, num_components));
Mat projection = subspaceProject(evs, mean, images[0].reshape(1,1));
Mat reconstruction = subspaceReconstruct(evs, mean, projection);
// Normalize the result:
reconstruction = norm_0_255(reconstruction.reshape(1, images[0].rows));
// Display or save:
if(argc == 2) {
imshow(format("eigenface_reconstruction_%d", num_components), reconstruction);
} else {
imwrite(format("%s/eigenface_reconstruction_%d.png", output_folder.c_str(), num_components), reconstruction);
}
}
// Display if we are not writing to an output folder:
if(argc == 2) {
waitKey(0);
}
return 0;
}
CMakeLists.txt file:
cmake_minimum_required(VERSION 2.8)
project(faceReco)
find_package(OpenCV REQUIRED)
add_executable(faceReco faceReco.cpp)
target_link_libraries(faceReco ${OpenCV_LIBS})
Errors:
/home/abdulaziz/workspace/OpenCV_Projects/FaceReco/faceReco.cpp: In function ‘int main(int, const char**)’:
/home/abdulaziz/workspace/OpenCV_Projects/FaceReco/faceReco.cpp:110:27: error: ‘class cv::face::FaceRecognizer’ has no member named ‘getMat’
Mat eigenvalues = model->getMat("eigenvalues");
^
/home/abdulaziz/workspace/OpenCV_Projects/FaceReco/faceReco.cpp:111:17: error: ‘class cv::face::FaceRecognizer’ has no member named ‘getMat’
Mat W = model->getMat("eigenvectors");
^
/home/abdulaziz/workspace/OpenCV_Projects/FaceReco/faceReco.cpp:112:20: error: ‘class cv::face::FaceRecognizer’ has no member named ‘getMat’
Mat mean = model->getMat("mean");
^
/home/abdulaziz/workspace/OpenCV_Projects/FaceReco/faceReco.cpp:130:46: error: ‘COLORMAP_JET’ was not declared in this scope
applyColorMap(grayscale, cgrayscale, COLORMAP_JET);
^
/home/abdulaziz/workspace/OpenCV_Projects/FaceReco/faceReco.cpp:130:58: error: ‘applyColorMap’ was not declared in this scope
applyColorMap(grayscale, cgrayscale, COLORMAP_JET);
^
/home/abdulaziz/workspace/OpenCV_Projects/FaceReco/faceReco.cpp:143:75: error: ‘subspaceProject’ was not declared in this scope
Mat projection = subspaceProject(evs, mean, images[0].reshape(1,1));
^
/home/abdulaziz/workspace/OpenCV_Projects/FaceReco/faceReco.cpp:144:71: error: ‘subspaceReconstruct’ was not declared in this scope
Mat reconstruction = subspaceReconstruct(evs, mean, projection);
^
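For orientation, here is a hedged sketch of how those call sites typically change with OpenCV 3.x and the opencv_contrib face module, as far as I can tell: the model statistics are exposed through typed getters rather than getMat(), applyColorMap()/COLORMAP_JET live in imgproc, and the subspace helpers are static members of cv::LDA. It is a fragment meant to illustrate the 3.x API, not the tutorial's exact code:

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>  // applyColorMap, COLORMAP_JET in 3.x
#include <opencv2/face.hpp>     // opencv_contrib face module

using namespace cv;

// "model" is assumed to be a trained Eigenfaces model, e.g.
// Ptr<face::BasicFaceRecognizer> model = face::createEigenFaceRecognizer();
static void inspectModel(const Ptr<face::BasicFaceRecognizer>& model, const Mat& firstImage)
{
    // OpenCV 3.x replaces model->getMat("...") with dedicated getters.
    Mat eigenvalues = model->getEigenValues();
    Mat W           = model->getEigenVectors();
    Mat mean        = model->getMean();

    Mat ev = W.col(0).clone();                       // first eigenvector
    Mat grayscale, colored;
    normalize(ev.reshape(1, firstImage.rows), grayscale, 0, 255, NORM_MINMAX, CV_8UC1);
    applyColorMap(grayscale, colored, COLORMAP_JET); // declared in imgproc in 3.x

    // subspaceProject/subspaceReconstruct became static members of cv::LDA.
    Mat projection     = LDA::subspaceProject(W, mean, firstImage.reshape(1, 1));
    Mat reconstruction = LDA::subspaceReconstruct(W, mean, projection);
    (void)eigenvalues; (void)colored; (void)reconstruction;
}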

HOGDescriptor OpenCV dsize.area() assertion failed

I'm trying to train an SVM and use it inside HOGDescriptor from OpenCV.
The XML file was successfully generated and loaded by HOGDescriptor, but when I try to detect an object, the following assertion fails:
OpenCV Error: Assertion failed (dsize.area() || (inv_scale_x > 0 &&
inv_scale_y > 0)) in resize, file
/build/buildd/opencv-2.4.8+dfsg1/modules/imgproc/src/imgwarp.cpp, line
1825 terminate called after throwing an instance of
'tbb::captured_exception' what():
/build/buildd/opencv-2.4.8+dfsg1/modules/imgproc/src/imgwarp.cpp:1825:
error: (-215) dsize.area() || (inv_scale_x > 0 && inv_scale_y > 0) in
function resize
To implement the SVM trainer I used hints from "using OpenCV and SVM with images".
The generated XML file is about 144 KB. For positive and negative samples I used images of size 64x128 (2000 positive and 2000 negative).
Parameters for the SVM trainer:
CvSVMParams svmParams;
svmParams.svm_type = CvSVM::C_SVC;
svmParams.kernel_type = CvSVM::LINEAR;
svmParams.term_crit = cvTermCriteria( CV_TERMCRIT_ITER, 10000, 1e-6 );
Code for detection:
int main()
{
HOGDescriptor hog();
if(!hog.load("/home/bin/hogdescriptor.xml"))
{
std::cout << "Failed to load file!" << std::endl;
return -1;
}
VideoCapture cap(0);
if(!cap.isOpened())
{
std::cout << "Error opening camera!" << std::endl;
return 1;
}
Mat testImage;
while ((cvWaitKey(30) & 255) != 27)
{
cap >> testImage;
detectTest(hog, testImage);
imshow("HOG custom detection", testImage);
}
return EXIT_SUCCESS;
}
void showDetections(const vector<Rect>& found, Mat& imageData) {
for (const Rect& rect : found)
{
Point rectPoint1;
rectPoint1.x = rect.x;
rectPoint1.y = rect.y;
Point rectPoint2;
rectPoint2.x = rect.x + rect.width;
rectPoint2.y = rect.y + rect.height;
std::cout << "detection x: " << rect.x << ", y: " << rect.y << std::endl;
rectangle(imageData, rectPoint1, rectPoint2, Scalar(0, 255, 0));
}
}
void detectTest(const HOGDescriptor& hog, Mat& imageData)
{
std::cout << "Trying to detect" << std::endl;
vector<Rect> found;
int groupThreshold = 2;
Size padding(Size(32, 32));
Size winStride(Size(8, 8));
double hitThreshold = 0.; // tolerance
hog.detectMultiScale(imageData, found, hitThreshold, winStride, padding, 1.05, groupThreshold);
// hog.detectMultiScale(imageData, found);
std::cout << "Trying to show detections" << std::endl;
showDetections(found, imageData);
}
XML:
<?xml version="1.0"?>
<opencv_storage>
<my_svm type_id="opencv-ml-svm">
<svm_type>C_SVC</svm_type>
<kernel><type>LINEAR</type></kernel>
<C>1.</C>
<term_criteria><epsilon>2.2204460492503131e-16</epsilon>
<iterations>10000</iterations></term_criteria>
<var_all>8192</var_all>
<var_count>8192</var_count>
<class_count>2</class_count>
<class_labels type_id="opencv-matrix">
<rows>1</rows>
<cols>2</cols>
<dt>i</dt>
<data>
-1 1</data></class_labels>
<sv_total>1</sv_total>
<support_vectors>
<_>
-9.25376153e-05 -9.25376153e-05 -9.25376153e-05 -9.25376153e-05 ...and many, many...</_></support_vectors>
<decision_functions>
<_>
<sv_count>1</sv_count>
<rho>-1.</rho>
<alpha>
1.</alpha>
<index>
0</index></_></decision_functions></my_svm>
</opencv_storage>
Can someone explain this assertion to me, or perhaps provide a solution for this problem? I have spent almost 3 days trying to fix it without any success... Thanks in advance!
This is the closest I got... still trying to use this XML:
private static void buscar_hog_svm() {
if (clasificador == null) {
clasificador = new CvSVM();
clasificador.load(path_vectores);
}
Mat img_gray = new Mat();
//gray
Imgproc.cvtColor(imag, img_gray, Imgproc.COLOR_BGR2GRAY);
//Extract HogFeature
hog = new HOGDescriptor(
_winSize //new Size(32, 16)
, _blockSize, _blockStride, _cellSize, _nbins);
MatOfFloat descriptorsValues = new MatOfFloat();
MatOfPoint locations = new MatOfPoint();
hog.compute(img_gray,
descriptorsValues,
_winSize,
_padding, locations);
Mat fm = descriptorsValues;
System.out.println("tamano fm: " + fm.size());
//Classification whether data is positive or negative
float result = clasificador.predict(fm);
System.out.println("resultado= " + result);
}
If you have more clues, please share.
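For comparison, here is a minimal sanity-check sketch that is known to work with detectMultiScale: the built-in people detector wired in through setSVMDetector. It deliberately does not use the custom SVM from the question; the point is only to verify that the detection loop itself is sound (non-empty frames, a properly constructed HOGDescriptor, and detector coefficients set via setSVMDetector) before suspecting the training:

#include <iostream>
#include <vector>
#include <opencv2/opencv.hpp>

using namespace cv;

int main()
{
    // Default 64x128 window and default block/cell layout.
    HOGDescriptor hog; // note: "HOGDescriptor hog();" would declare a function, not an object
    hog.setSVMDetector(HOGDescriptor::getDefaultPeopleDetector());

    VideoCapture cap(0);
    if (!cap.isOpened())
    {
        std::cout << "Error opening camera!" << std::endl;
        return 1;
    }

    Mat frame;
    while ((waitKey(30) & 255) != 27)
    {
        cap >> frame;
        if (frame.empty())
            continue; // never hand an empty frame to detectMultiScale
        std::vector<Rect> found;
        hog.detectMultiScale(frame, found, 0.0, Size(8, 8), Size(32, 32), 1.05, 2);
        for (size_t i = 0; i < found.size(); ++i)
            rectangle(frame, found[i], Scalar(0, 255, 0));
        imshow("HOG default people detector", frame);
    }
    return 0;
}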

OpenCV calibrateCamera assertion failed

For a few days I have been fighting with camera calibration using the chessboard example. Everything goes fine (corners are found and displayed, then fed into arrays) until I call the final function, calibrateCamera. Then I get an assertion error:
OpenCV Error: Assertion failed (nimages > 0) in calibrateCamera, file /home/ig/Downloads/opencv-2.4.8/modules/calib3d/src/calibration.cpp, line 3415
Here is the (fairly classic) code:
#include <iostream>
#include <fstream>
#include <vector>
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/imgproc/imgproc.hpp"
using namespace cv;
using namespace std;
int main(int argc, char* argv[])
{
VideoCapture captR(0); // open the video camera no. 0 (RIGHT)
if (!captR.isOpened()) // if not success, exit program
{
cout << "Cannot open the video cam 0" << endl;
return -1;
}
namedWindow("MyVideo (RIGHT)",CV_WINDOW_AUTOSIZE); //create a window called "MyVideo"
namedWindow("Grayscale",CV_WINDOW_AUTOSIZE); //create a window called "Grayscale"
int a = 0; // Frame counter
int numCornersHor = 7; // Chessboard dimensions
int numCornersVer = 5;
int numSquares = numCornersHor * numCornersVer;
Size boardSize = Size(numCornersHor, numCornersVer);
Mat frameR;
// Mat frameL;
Mat gray_frame;
vector<Point3f> obj;
vector<Point2f> corners; // output vectors of image points
for (int i=0; i<boardSize.height; i++) {
for (int j=0; j<boardSize.width; j++) {
obj.push_back(Point3f(i, j, 0.0f));
}
}
while (1){
int key = waitKey(30);
bool bCaptSuccessR = captR.read(frameR); // read a new frame from video
if (!bCaptSuccessR) //if capture not succeded, break loop
{
cout << "Cannot read a frame from video stream" << endl;
break;
}
vector<vector<Point3f> > object_points;
vector<vector<Point2f> > image_points;
// make grayscale frame version for conerSubPix
cvtColor(frameR, gray_frame, CV_BGR2GRAY);
// Get the chessboard corners
bool found = findChessboardCorners(frameR, boardSize, corners);
if (found) {
// Increase accuracy by subpixels
cornerSubPix(gray_frame, corners, Size(11, 11), Size(-1, -1), TermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 30, 0.1));
drawChessboardCorners(gray_frame, boardSize, corners, found);
imshow("Grayscale", gray_frame);
////////////////////////////////////////////
if(key==32){ // Save good found by pressing [space]
image_points.push_back(corners);
object_points.push_back(obj);
cout << "Captured good calibration image, No " << a << endl;
cout << "corners: " << corners << endl;
//cout << "obj: " << obj << endl;
a++;
}
}
imshow("MyVideo (RIGHT)", frameR); //show right webcam frame in "MyVideo" window
if (key == 27) { //wait for 'esc' key press for 30ms. If 'esc' key is pressed, break loop
cout << "esc key is pressed by user" << endl;
break;
}
if (key == 115){ // If 'S' key pressed begin calibration
//////////// BEGIN CALIBRATION ////////////////////////
cout << "Callibration started..." << endl;
Mat cameraMatrix = Mat(3, 3, CV_64F);
cameraMatrix.at<double>(0,0) = 1.0;
Mat distCoeffs;
distCoeffs = Mat::zeros(8, 1, CV_64F);
vector<Mat> rvecs;
vector<Mat> tvecs;
Size imageSize = frameR.size();
calibrateCamera(object_points, image_points, imageSize, cameraMatrix, distCoeffs, rvecs, tvecs);
cout << "Callibration ended." << endl;
}//callibration
}
captR.release();
return 0;
}
And here is the relevant excerpt from the OpenCV source file, with line numbers:
3400 double cv::calibrateCamera( InputArrayOfArrays _objectPoints,
3401 InputArrayOfArrays _imagePoints,
3402 Size imageSize, InputOutputArray _cameraMatrix, InputOutputArray _distCoeffs,
3403 OutputArrayOfArrays _rvecs, OutputArrayOfArrays _tvecs, int flags, TermCriteria criteria )
3404 {
3405 int rtype = CV_64F;
3406 Mat cameraMatrix = _cameraMatrix.getMat();
3407 cameraMatrix = prepareCameraMatrix(cameraMatrix, rtype);
3408 Mat distCoeffs = _distCoeffs.getMat();
3409 distCoeffs = prepareDistCoeffs(distCoeffs, rtype);
3410 if( !(flags & CALIB_RATIONAL_MODEL) )
3411 distCoeffs = distCoeffs.rows == 1 ? distCoeffs.colRange(0, 5) : distCoeffs.rowRange(0, 5);
3412
3413 int i;
3414 size_t nimages = _objectPoints.total();
3415 CV_Assert( nimages > 0 );
3416 Mat objPt, imgPt, npoints, rvecM((int)nimages, 3, CV_64FC1), tvecM((int)nimages, 3, CV_64FC1);
3417 collectCalibrationData( _objectPoints, _imagePoints, noArray(),
3418 objPt, imgPt, 0, npoints );
3419 CvMat c_objPt = objPt, c_imgPt = imgPt, c_npoints = npoints;
3420 CvMat c_cameraMatrix = cameraMatrix, c_distCoeffs = distCoeffs;
3421 CvMat c_rvecM = rvecM, c_tvecM = tvecM;
3422
3423 double reprojErr = cvCalibrateCamera2(&c_objPt, &c_imgPt, &c_npoints, imageSize,
3424 &c_cameraMatrix, &c_distCoeffs, &c_rvecM,
3425 &c_tvecM, flags, criteria );
3426
3427 bool rvecs_needed = _rvecs.needed(), tvecs_needed = _tvecs.needed();
3428
3429 if( rvecs_needed )
3430 _rvecs.create((int)nimages, 1, CV_64FC3);
3431 if( tvecs_needed )
3432 _tvecs.create((int)nimages, 1, CV_64FC3);
3433
Using Linux Ubuntu 12.04, OpenCV 2.4.8, gcc 4.6.3, Eclipse, ...
You are declaring object_points and image_points inside the while loop, but you probably want to put those declarations outside of the loop. Otherwise, the lists are (effectively) cleared after each iteration.
The sequence is: the user presses the first key ([space]) so that a detected checkerboard is saved; the board is written to object_points and image_points. Then the user presses the second key ('s') to calibrate the camera. But processing of the first key ends with that loop iteration, object_points and image_points go out of scope (and are destroyed), processing of the second key starts in a fresh iteration, and calibrateCamera is called with an empty object_points array and fails.
You should also check that object_points is not empty before calling calibrateCamera.
You may also find the full sample code in the samples/cpp/tutorial_code/calib3d/camera_calibration/ folder of the OpenCV source tree.
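Here is a condensed sketch of that structure applied to the question's code: the point containers are declared once before the loop, [space] saves a detected board, and 's' only runs calibrateCamera when at least one view has been stored:

#include <iostream>
#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/calib3d/calib3d.hpp>

using namespace cv;
using namespace std;

int main()
{
    VideoCapture captR(0);
    if (!captR.isOpened()) { cout << "Cannot open the video cam 0" << endl; return -1; }

    Size boardSize(7, 5);
    vector<Point3f> obj;
    for (int i = 0; i < boardSize.height; i++)
        for (int j = 0; j < boardSize.width; j++)
            obj.push_back(Point3f(i, j, 0.0f));

    // Declared once, outside the loop, so saved views survive between iterations.
    vector<vector<Point3f> > object_points;
    vector<vector<Point2f> > image_points;

    Mat frameR, gray_frame;
    vector<Point2f> corners;

    while (true)
    {
        int key = waitKey(30);
        if (!captR.read(frameR)) break;

        cvtColor(frameR, gray_frame, CV_BGR2GRAY);
        bool found = findChessboardCorners(frameR, boardSize, corners);
        if (found && key == 32) // [space] saves the current view
        {
            cornerSubPix(gray_frame, corners, Size(11, 11), Size(-1, -1),
                         TermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 30, 0.1));
            image_points.push_back(corners);
            object_points.push_back(obj);
            cout << "Captured good calibration image, No " << object_points.size() << endl;
        }
        imshow("MyVideo (RIGHT)", frameR);

        if (key == 27) break;                      // [esc] quits
        if (key == 115 && !object_points.empty())  // 's' calibrates only when views exist
        {
            Mat cameraMatrix = Mat::eye(3, 3, CV_64F);
            Mat distCoeffs = Mat::zeros(8, 1, CV_64F);
            vector<Mat> rvecs, tvecs;
            calibrateCamera(object_points, image_points, frameR.size(),
                            cameraMatrix, distCoeffs, rvecs, tvecs);
            cout << "Calibration ended." << endl;
        }
    }
    captR.release();
    return 0;
}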

Create a header file (.h) from a cpp file [closed]

I need to create a header file, mainCalib.h, from the mainCalib.cpp file.
The mainCalib.cpp file contains the OpenCV camera calibration sample,
so that in the end I can run the program from the main.cpp file.
This is the mainCalib.cpp file:
#include <iostream>
#include <sstream>
#include <time.h>
#include <stdio.h>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/highgui/highgui.hpp>
#ifndef _CRT_SECURE_NO_WARNINGS
# define _CRT_SECURE_NO_WARNINGS
#endif
#include "mainCalib.h"
using namespace cv;
using namespace std;
void help()
{
cout << "This is a camera calibration sample." << endl
<< "Usage: calibration configurationFile" << endl
<< "Near the sample file you'll find the configuration file, which has detailed help of "
"how to edit it. It may be any OpenCV supported file format XML/YAML." << endl;
}
class Settings
{
public:
Settings() : goodInput(false) {}
enum Pattern { NOT_EXISTING, CHESSBOARD, CIRCLES_GRID, ASYMMETRIC_CIRCLES_GRID };
enum InputType {INVALID, CAMERA, VIDEO_FILE, IMAGE_LIST};
void write(FileStorage& fs) const //Write serialization for this class
{
fs << "{" << "BoardSize_Width" << boardSize.width
<< "BoardSize_Height" << boardSize.height
<< "Square_Size" << squareSize
<< "Calibrate_Pattern" << patternToUse
<< "Calibrate_NrOfFrameToUse" << nrFrames
<< "Calibrate_FixAspectRatio" << aspectRatio
<< "Calibrate_AssumeZeroTangentialDistortion" << calibZeroTangentDist
<< "Calibrate_FixPrincipalPointAtTheCenter" << calibFixPrincipalPoint
<< "Write_DetectedFeaturePoints" << bwritePoints
<< "Write_extrinsicParameters" << bwriteExtrinsics
<< "Write_outputFileName" << outputFileName
<< "Show_UndistortedImage" << showUndistorsed
<< "Input_FlipAroundHorizontalAxis" << flipVertical
<< "Input_Delay" << delay
<< "Input" << input
<< "}";
}
void read(const FileNode& node) //Read serialization for this class
{
node["BoardSize_Width" ] >> boardSize.width;
node["BoardSize_Height"] >> boardSize.height;
node["Calibrate_Pattern"] >> patternToUse;
node["Square_Size"] >> squareSize;
node["Calibrate_NrOfFrameToUse"] >> nrFrames;
node["Calibrate_FixAspectRatio"] >> aspectRatio;
node["Write_DetectedFeaturePoints"] >> bwritePoints;
node["Write_extrinsicParameters"] >> bwriteExtrinsics;
node["Write_outputFileName"] >> outputFileName;
node["Calibrate_AssumeZeroTangentialDistortion"] >> calibZeroTangentDist;
node["Calibrate_FixPrincipalPointAtTheCenter"] >> calibFixPrincipalPoint;
node["Input_FlipAroundHorizontalAxis"] >> flipVertical;
node["Show_UndistortedImage"] >> showUndistorsed;
node["Input"] >> input;
node["Input_Delay"] >> delay;
interprate();
}
void interprate()
{
goodInput = true;
if (boardSize.width <= 0 || boardSize.height <= 0)
{
cerr << "Invalid Board size: " << boardSize.width << " " << boardSize.height << endl;
goodInput = false;
}
if (squareSize <= 10e-6)
{
cerr << "Invalid square size " << squareSize << endl;
goodInput = false;
}
if (nrFrames <= 0)
{
cerr << "Invalid number of frames " << nrFrames << endl;
goodInput = false;
}
if (input.empty()) // Check for valid input
inputType = INVALID;
else
{
if (input[0] >= '0' && input[0] <= '9')
{
stringstream ss(input);
ss >> cameraID;
inputType = CAMERA;
}
else
{
if (readStringList(input, imageList))
{
inputType = IMAGE_LIST;
nrFrames = (nrFrames < (int)imageList.size()) ? nrFrames : (int)imageList.size();
}
else
inputType = VIDEO_FILE;
}
if (inputType == CAMERA)
inputCapture.open(cameraID);
if (inputType == VIDEO_FILE)
inputCapture.open(input);
if (inputType != IMAGE_LIST && !inputCapture.isOpened())
inputType = INVALID;
}
if (inputType == INVALID)
{
cerr << " Inexistent input: " << input;
goodInput = false;
}
flag = 0;
if(calibFixPrincipalPoint) flag |= CV_CALIB_FIX_PRINCIPAL_POINT;
if(calibZeroTangentDist) flag |= CV_CALIB_ZERO_TANGENT_DIST;
if(aspectRatio) flag |= CV_CALIB_FIX_ASPECT_RATIO;
calibrationPattern = NOT_EXISTING;
if (!patternToUse.compare("CHESSBOARD")) calibrationPattern = CHESSBOARD;
if (!patternToUse.compare("CIRCLES_GRID")) calibrationPattern = CIRCLES_GRID;
if (!patternToUse.compare("ASYMMETRIC_CIRCLES_GRID")) calibrationPattern = ASYMMETRIC_CIRCLES_GRID;
if (calibrationPattern == NOT_EXISTING)
{
cerr << " Inexistent camera calibration mode: " << patternToUse << endl;
goodInput = false;
}
atImageList = 0;
}
Mat nextImage()
{
Mat result;
if( inputCapture.isOpened() )
{
Mat view0;
inputCapture >> view0;
view0.copyTo(result);
}
else if( atImageList < (int)imageList.size() )
result = imread(imageList[atImageList++], CV_LOAD_IMAGE_COLOR);
return result;
}
static bool readStringList( const string& filename, vector<string>& l )
{
l.clear();
FileStorage fs(filename, FileStorage::READ);
if( !fs.isOpened() )
return false;
FileNode n = fs.getFirstTopLevelNode();
if( n.type() != FileNode::SEQ )
return false;
FileNodeIterator it = n.begin(), it_end = n.end();
for( ; it != it_end; ++it )
l.push_back((string)*it);
return true;
}
public:
Size boardSize; // The size of the board -> Number of items by width and height
Pattern calibrationPattern;// One of the Chessboard, circles, or asymmetric circle pattern
float squareSize; // The size of a square in your defined unit (point, millimeter,etc).
int nrFrames; // The number of frames to use from the input for calibration
float aspectRatio; // The aspect ratio
int delay; // In case of a video input
bool bwritePoints; // Write detected feature points
bool bwriteExtrinsics; // Write extrinsic parameters
bool calibZeroTangentDist; // Assume zero tangential distortion
bool calibFixPrincipalPoint;// Fix the principal point at the center
bool flipVertical; // Flip the captured images around the horizontal axis
string outputFileName; // The name of the file where to write
bool showUndistorsed; // Show undistorted images after calibration
string input; // The input ->
int cameraID;
vector<string> imageList;
int atImageList;
VideoCapture inputCapture;
InputType inputType;
bool goodInput;
int flag;
private:
string patternToUse;
};
static void read(const FileNode& node, Settings& x, const Settings& default_value = Settings())
{
if(node.empty())
x = default_value;
else
x.read(node);
}
enum { DETECTION = 0, CAPTURING = 1, CALIBRATED = 2 };
bool runCalibrationAndSave(Settings& s, Size imageSize, Mat& cameraMatrix, Mat& distCoeffs,
vector<vector<Point2f> > imagePoints );
int mainn(int argc, char* argv[])
{
help();
Settings s;
const string inputSettingsFile = argc > 1 ? argv[1] : "D:\\e+v\\Projekte\\Calibration_Test1\\Calibration_Test1\\in_VID5.xml";
FileStorage fs(inputSettingsFile, FileStorage::READ); // Read the settings
if (!fs.isOpened())
{
cout << "Could not open the configuration file: \"" << inputSettingsFile << "\"" << endl;
return -1;
}
fs["Settings"] >> s;
fs.release(); // close Settings file
if (!s.goodInput)
{
cout << "Invalid input detected. Application stopping. " << endl;
return -1;
}
vector<vector<Point2f> > imagePoints;
Mat cameraMatrix, distCoeffs;
Size imageSize;
int mode = s.inputType == Settings::IMAGE_LIST ? CAPTURING : DETECTION;
clock_t prevTimestamp = 0;
const Scalar RED(0,0,255), GREEN(0,255,0);
const char ESC_KEY = 27;
for(int i = 0;;++i)
{
Mat view;
bool blinkOutput = false;
view = s.nextImage();
//----- If no more image, or got enough, then stop calibration and show result -------------
if( mode == CAPTURING && imagePoints.size() >= (unsigned)s.nrFrames )
{
if( runCalibrationAndSave(s, imageSize, cameraMatrix, distCoeffs, imagePoints))
mode = CALIBRATED;
else
mode = DETECTION;
}
if(view.empty()) // If no more images then run calibration, save and stop loop.
{
if( imagePoints.size() > 0 )
runCalibrationAndSave(s, imageSize, cameraMatrix, distCoeffs, imagePoints);
break;
}
imageSize = view.size(); // Format input image.
if( s.flipVertical ) flip( view, view, 0 );
vector<Point2f> pointBuf;
bool found;
switch( s.calibrationPattern ) // Find feature points on the input format
{
case Settings::CHESSBOARD:
found = findChessboardCorners( view, s.boardSize, pointBuf,
CV_CALIB_CB_ADAPTIVE_THRESH | CV_CALIB_CB_FAST_CHECK | CV_CALIB_CB_NORMALIZE_IMAGE);
break;
case Settings::CIRCLES_GRID:
found = findCirclesGrid( view, s.boardSize, pointBuf );
break;
case Settings::ASYMMETRIC_CIRCLES_GRID:
found = findCirclesGrid( view, s.boardSize, pointBuf, CALIB_CB_ASYMMETRIC_GRID );
break;
default:
found = false;
break;
}
if ( found) // If done with success,
{
// improve the found corners' coordinate accuracy for chessboard
if( s.calibrationPattern == Settings::CHESSBOARD)
{
Mat viewGray;
cvtColor(view, viewGray, COLOR_BGR2GRAY);
cornerSubPix( viewGray, pointBuf, Size(11,11),
Size(-1,-1), TermCriteria( CV_TERMCRIT_EPS+CV_TERMCRIT_ITER, 30, 0.1 ));
}
if( mode == CAPTURING && // For camera only take new samples after delay time
(!s.inputCapture.isOpened() || clock() - prevTimestamp > s.delay*1e-3*CLOCKS_PER_SEC) )
{
imagePoints.push_back(pointBuf);
prevTimestamp = clock();
blinkOutput = s.inputCapture.isOpened();
}
// Draw the corners.
drawChessboardCorners( view, s.boardSize, Mat(pointBuf), found );
}
//----------------------------- Output Text ------------------------------------------------
string msg = (mode == CAPTURING) ? "100/100" :
mode == CALIBRATED ? "Calibrated" : "Press 'g' to start";
int baseLine = 0;
Size textSize = getTextSize(msg, 1, 1, 1, &baseLine);
Point textOrigin(view.cols - 2*textSize.width - 10, view.rows - 2*baseLine - 10);
if( mode == CAPTURING )
{
if(s.showUndistorsed)
msg = format( "%d/%d Undist", (int)imagePoints.size(), s.nrFrames );
else
msg = format( "%d/%d", (int)imagePoints.size(), s.nrFrames );
}
putText( view, msg, textOrigin, 1, 1, mode == CALIBRATED ? GREEN : RED);
if( blinkOutput )
bitwise_not(view, view);
//------------------------- Video capture output undistorted ------------------------------
if( mode == CALIBRATED && s.showUndistorsed )
{
Mat temp = view.clone();
undistort(temp, view, cameraMatrix, distCoeffs);
}
//------------------------------ Show image and check for input commands -------------------
imshow("Image View", view);
char key = (char)waitKey(s.inputCapture.isOpened() ? 50 : s.delay);
if( key == ESC_KEY )
break;
if( key == 'u' && mode == CALIBRATED )
s.showUndistorsed = !s.showUndistorsed;
if( s.inputCapture.isOpened() && key == 'g' )
{
mode = CAPTURING;
imagePoints.clear();
}
}
// -----------------------Show the undistorted image for the image list ------------------------
if( s.inputType == Settings::IMAGE_LIST && s.showUndistorsed )
{
Mat view, rview, map1, map2;
initUndistortRectifyMap(cameraMatrix, distCoeffs, Mat(),
getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, imageSize, 1, imageSize, 0),
imageSize, CV_16SC2, map1, map2);
for(int i = 0; i < (int)s.imageList.size(); i++ )
{
view = imread(s.imageList[i], 1);
if(view.empty())
continue;
remap(view, rview, map1, map2, INTER_LINEAR);
imshow("Image View", rview);
char c = (char)waitKey();
if( c == ESC_KEY || c == 'q' || c == 'Q' )
break;
}
}
return 0;
}
static double computeReprojectionErrors( const vector<vector<Point3f> >& objectPoints,
const vector<vector<Point2f> >& imagePoints,
const vector<Mat>& rvecs, const vector<Mat>& tvecs,
const Mat& cameraMatrix , const Mat& distCoeffs,
vector<float>& perViewErrors)
{
vector<Point2f> imagePoints2;
int i, totalPoints = 0;
double totalErr = 0, err;
perViewErrors.resize(objectPoints.size());
for( i = 0; i < (int)objectPoints.size(); ++i )
{
projectPoints( Mat(objectPoints[i]), rvecs[i], tvecs[i], cameraMatrix,
distCoeffs, imagePoints2);
err = norm(Mat(imagePoints[i]), Mat(imagePoints2), CV_L2);
int n = (int)objectPoints[i].size();
perViewErrors[i] = (float) std::sqrt(err*err/n);
totalErr += err*err;
totalPoints += n;
}
return std::sqrt(totalErr/totalPoints);
}
static void calcBoardCornerPositions(Size boardSize, float squareSize, vector<Point3f>& corners,
Settings::Pattern patternType /*= Settings::CHESSBOARD*/)
{
corners.clear();
switch(patternType)
{
case Settings::CHESSBOARD:
case Settings::CIRCLES_GRID:
for( int i = 0; i < boardSize.height; ++i )
for( int j = 0; j < boardSize.width; ++j )
corners.push_back(Point3f(float( j*squareSize ), float( i*squareSize ), 0));
break;
case Settings::ASYMMETRIC_CIRCLES_GRID:
for( int i = 0; i < boardSize.height; i++ )
for( int j = 0; j < boardSize.width; j++ )
corners.push_back(Point3f(float((2*j + i % 2)*squareSize), float(i*squareSize), 0));
break;
default:
break;
}
}
static bool runCalibration( Settings& s, Size& imageSize, Mat& cameraMatrix, Mat& distCoeffs,
vector<vector<Point2f> > imagePoints, vector<Mat>& rvecs, vector<Mat>& tvecs,
vector<float>& reprojErrs, double& totalAvgErr)
{
cameraMatrix = Mat::eye(3, 3, CV_64F);
if( s.flag & CV_CALIB_FIX_ASPECT_RATIO )
cameraMatrix.at<double>(0,0) = 1.0;
distCoeffs = Mat::zeros(8, 1, CV_64F);
vector<vector<Point3f> > objectPoints(1);
calcBoardCornerPositions(s.boardSize, s.squareSize, objectPoints[0], s.calibrationPattern);
objectPoints.resize(imagePoints.size(),objectPoints[0]);
//Find intrinsic and extrinsic camera parameters
double rms = calibrateCamera(objectPoints, imagePoints, imageSize, cameraMatrix,
distCoeffs, rvecs, tvecs, s.flag|CV_CALIB_FIX_K4|CV_CALIB_FIX_K5);
cout << "Re-projection error reported by calibrateCamera: "<< rms << endl;
bool ok = checkRange(cameraMatrix) && checkRange(distCoeffs);
totalAvgErr = computeReprojectionErrors(objectPoints, imagePoints,
rvecs, tvecs, cameraMatrix, distCoeffs, reprojErrs);
return ok;
}
// Print camera parameters to the output file
static void saveCameraParams( Settings& s, Size& imageSize, Mat& cameraMatrix, Mat& distCoeffs,
const vector<Mat>& rvecs, const vector<Mat>& tvecs,
const vector<float>& reprojErrs, const vector<vector<Point2f> >& imagePoints,
double totalAvgErr )
{
FileStorage fs( s.outputFileName, FileStorage::WRITE );
time_t tm;
time( &tm );
struct tm *t2 = localtime( &tm );
char buf[1024];
strftime( buf, sizeof(buf)-1, "%c", t2 );
fs << "calibration_Time" << buf;
if( !rvecs.empty() || !reprojErrs.empty() )
fs << "nrOfFrames" << (int)std::max(rvecs.size(), reprojErrs.size());
fs << "image_Width" << imageSize.width;
fs << "image_Height" << imageSize.height;
fs << "board_Width" << s.boardSize.width;
fs << "board_Height" << s.boardSize.height;
fs << "square_Size" << s.squareSize;
if( s.flag & CV_CALIB_FIX_ASPECT_RATIO )
fs << "FixAspectRatio" << s.aspectRatio;
if( s.flag )
{
sprintf( buf, "flags: %s%s%s%s",
s.flag & CV_CALIB_USE_INTRINSIC_GUESS ? " +use_intrinsic_guess" : "",
s.flag & CV_CALIB_FIX_ASPECT_RATIO ? " +fix_aspectRatio" : "",
s.flag & CV_CALIB_FIX_PRINCIPAL_POINT ? " +fix_principal_point" : "",
s.flag & CV_CALIB_ZERO_TANGENT_DIST ? " +zero_tangent_dist" : "" );
cvWriteComment( *fs, buf, 0 );
}
fs << "flagValue" << s.flag;
fs << "Camera_Matrix" << cameraMatrix;
fs << "Distortion_Coefficients" << distCoeffs;
fs << "Avg_Reprojection_Error" << totalAvgErr;
if( !reprojErrs.empty() )
fs << "Per_View_Reprojection_Errors" << Mat(reprojErrs);
if( !rvecs.empty() && !tvecs.empty() )
{
CV_Assert(rvecs[0].type() == tvecs[0].type());
Mat bigmat((int)rvecs.size(), 6, rvecs[0].type());
for( int i = 0; i < (int)rvecs.size(); i++ )
{
Mat r = bigmat(Range(i, i+1), Range(0,3));
Mat t = bigmat(Range(i, i+1), Range(3,6));
CV_Assert(rvecs[i].rows == 3 && rvecs[i].cols == 1);
CV_Assert(tvecs[i].rows == 3 && tvecs[i].cols == 1);
//*.t() is MatExpr (not Mat) so we can use assignment operator
r = rvecs[i].t();
t = tvecs[i].t();
}
cvWriteComment( *fs, "a set of 6-tuples (rotation vector + translation vector) for each view", 0 );
fs << "Extrinsic_Parameters" << bigmat;
}
if( !imagePoints.empty() )
{
Mat imagePtMat((int)imagePoints.size(), (int)imagePoints[0].size(), CV_32FC2);
for( int i = 0; i < (int)imagePoints.size(); i++ )
{
Mat r = imagePtMat.row(i).reshape(2, imagePtMat.cols);
Mat imgpti(imagePoints[i]);
imgpti.copyTo(r);
}
fs << "Image_points" << imagePtMat;
}
}
bool runCalibrationAndSave(Settings& s, Size imageSize, Mat& cameraMatrix, Mat& distCoeffs,vector<vector<Point2f> > imagePoints )
{
vector<Mat> rvecs, tvecs;
vector<float> reprojErrs;
double totalAvgErr = 0;
bool ok = runCalibration(s,imageSize, cameraMatrix, distCoeffs, imagePoints, rvecs, tvecs,
reprojErrs, totalAvgErr);
cout << (ok ? "Calibration succeeded" : "Calibration failed")
<< ". avg re projection error = " << totalAvgErr ;
if( ok )
saveCameraParams( s, imageSize, cameraMatrix, distCoeffs, rvecs ,tvecs, reprojErrs,
imagePoints, totalAvgErr);
return ok;
}
And here is the main.cpp file:
#include<iostream>
#include"mainCalib.h"
using namespace std;
int main()
{
return 0;
}
So, can anyone please help me create the mainCalib.h file from the classes and functions in mainCalib.cpp?
I tried the following, but I get many errors:
mainCalib.h file:
void help();
class Settings;
static void read(const FileNode& , Settings& , const Settings& );
bool runCalibrationAndSave (Settings& s, Size imageSize, Mat& cameraMatrix, Mat& distCoeffs, vector<vector<Point2f>> imagePoints);
int mainn(int argc, char* argv[])
static double computeReprojectionErrors( const vector<vector<Point3f> >& objectPoints,const vector<vector<Point2f> >& imagePoints,const vector<Mat>& rvecs, const vector<Mat>& tvecs,const Mat& cameraMatrix , const Mat& distCoeffs,vector<float>& perViewErrors);
static void calcBoardCornerPositions(cv::Size , float , vector<Point3f>& ,
Settings::Pattern /*= Settings::CHESSBOARD*/);
static bool runCalibration( Settings& s, Size& , Mat& , Mat& ,vector<vector<Point2f> > , vector<Mat>& , vector<Mat>& ,vector<float>& , double& );
static void saveCameraParams( Settings& , cv::Size& , Mat& , Mat& ,
const vector<Mat>& , const vector<Mat>& ,
const vector<float>& , const vector<vector<Point2f> >& ,
double );
First, read about include guards.
Then put the actual class definition in the header file, and put the class member function implementations in the source file.
A short example:
Header file
#ifndef HEADER_FILE_H
#define HEADER_FILE_H

class MyClass
{
public:
    MyClass(int val);
    ~MyClass();
    void some_function();

private:
    int value;
};

#endif // HEADER_FILE_H
Source file
#include <iostream>
#include "header_file.h"

MyClass::MyClass(int val)
    : value(val)
{
}

MyClass::~MyClass()
{
}

void MyClass::some_function()
{
    std::cout << "value is " << value << '\n';
}
Main source file
#include "header_file.h"
int main()
{
MyClass my_object(5);
my_object.some_function();
}
Also note that you can't have a static function at file scope if you expect it to be called from outside that source file: making a function static restricts its linkage to the translation unit it is defined in.
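Applied to the question's files, a minimal mainCalib.h along those lines could look like the sketch below. This is only an illustration, under the assumption that main.cpp just needs help(), mainn(), and runCalibrationAndSave(); the Settings class and the static helpers then stay private to mainCalib.cpp, consistent with the note about static functions above:

#ifndef MAINCALIB_H
#define MAINCALIB_H

#include <vector>
#include <opencv2/core/core.hpp>

// Settings is defined in mainCalib.cpp; a forward declaration is enough for the
// reference parameter below. If main.cpp ever needs to construct a Settings object,
// the full class definition has to move into this header instead.
class Settings;

void help();

bool runCalibrationAndSave(Settings& s, cv::Size imageSize, cv::Mat& cameraMatrix,
                           cv::Mat& distCoeffs,
                           std::vector<std::vector<cv::Point2f> > imagePoints);

int mainn(int argc, char* argv[]);

#endif // MAINCALIB_H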