I am trying to get camera calibration parameters using OpenCV's camera calibration functions. I have a video that includes a chessboard in different positions, and I am trying to find the calibration parameters and the chessboard points in it. But I haven't gotten past the calibration phase yet. I can find the corners of the chessboard and show them in an OpenCV window, but when it comes to the line:
calibrateCamera(objectPoints, imagePoints, ...)
it throws an exception and stops.
I get the following error: OpenCV Error: Assertion failed (nimages > 0 && nimages == (int)imagePoints1.total() && ...)
Here is my code:
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "C:/opencv/include/opencv/cv.h"
#include <iostream>
#include <math.h>
using namespace cv;
using namespace std;
std::vector<cv::Point2f> imageCorners;
std::vector<cv::Point3f> objectCorners;
std::vector<std::vector<cv::Point3f>> objectPoints;
std::vector<std::vector<cv::Point2f>> imagePoints;
void addPoints(const std::vector<cv::Point2f>&imageCorners, const std::vector<cv::Point3f>& objectCorners)
{
// 2D image points from one view
imagePoints.push_back(imageCorners);
// corresponding 3D scene points
objectPoints.push_back(objectCorners);
}
int main()
{
int key;
cv::Mat image;
cv::Mat gray_image;
VideoCapture cap("here goes path of the file");
if (!cap.isOpened()) // check if we succeeded
cout<<"failed";
else
cout<<"success";
cvNamedWindow( "video",0);
cv::Size boardSize(8,6);
// output Matrices
cv::Mat cameraMatrix;
std::vector<cv::Mat> rvecs, tvecs;
cv::Mat distCoeffs;
for (int i=0; i<boardSize.height; i++)
{
for (int j=0; j<boardSize.width; j++)
{
objectCorners.push_back(cv::Point3f(i, j, 0.0f));
}
}
int frame=1;
int corner_count=0;
while(1)
{
if(cap.read(image))
{
frame++;
if(frame%20==0)
{
if(waitKey(30) >= 0) break;
bool found = cv::findChessboardCorners(image, boardSize, imageCorners);
cvtColor( image, gray_image, CV_RGB2GRAY );
addPoints(imageCorners, objectCorners);
//bool found = cv::findChessboardCorners(image,boardSize, imageCorners);
cv::drawChessboardCorners(gray_image,boardSize, imageCorners,found);
imshow( "video", gray_image );
}
}
else
break;
}
int flag=0;
std::string text="";
for (int i=1; i<imagePoints.size();i++)
{
std::stringstream out;
out << imagePoints[i];
text=out.str();
cout<<text<<endl;
}
calibrateCamera(objectPoints,imagePoints,gray_image.size(), cameraMatrix, distCoeffs, rvecs, tvecs, flag);
return 0;
}
Print the size of all your std::vectors; I suspect you are passing an empty vector to that function.
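For instance, a minimal guard like the following (a sketch built on the question's own found, imageCorners, and boardSize) keeps failed detections out of the calibration input and prints the counts before calibrating:

// Sketch: only store a view when the full board was detected.
if (found && (int)imageCorners.size() == boardSize.area())
    addPoints(imageCorners, objectCorners);

// Later, just before calibrateCamera(), report the counts:
cout << "stored views: " << imagePoints.size()
     << ", object sets: " << objectPoints.size() << endl;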
EDIT:
I've shared some instructions in this answer on how to do camera calibration. Those references include working source code. You'll probably have to make small adaptations to those programs so they work with video instead.
You should look at this:
http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#bool findChessboardCorners(InputArray image, Size patternSize, OutputArray corners, int flags)
It says the source chessboard view must be an 8-bit grayscale or color image.
So you should use this:
bool found = cv::findChessboardCorners(gray_image, boardSize, imageCorners);
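In the question's loop that means converting first and detecting on the converted image, roughly like this (a sketch reusing the question's variable names):

// Sketch: convert to grayscale first, then detect on the gray image.
cvtColor(image, gray_image, CV_RGB2GRAY);
bool found = cv::findChessboardCorners(gray_image, boardSize, imageCorners);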
Here is my code:
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include<opencv2/opencv.hpp>
#include<iostream>
//#include<vector>
using namespace cv;
using namespace std;
int main()
{
VideoCapture cap = VideoCapture(0);
int successes = 0;
int numBoards = 0;
int numCornersHor = 6;
int numCornersVer = 4;
int numSquares = (numCornersHor - 1) * (numCornersVer - 1);
Size board_sz = Size(numCornersHor, numCornersVer);
vector<Point2f> corners;
for (;;)
{
Mat img;
cap >> img;
Mat gray;
cvtColor(img, gray, CV_RGB2GRAY);
if (img.empty()) break; // end of video stream
imshow("this is you, smile! :)", gray);
if (waitKey(1) == 27) break; // stop capturing by pressing ESC
bool found = findChessboardCorners(gray, board_sz, corners, CALIB_CB_ADAPTIVE_THRESH);
if (found == 1)
{
cout << corners.size()<<"\n";
cornerSubPix(gray, corners, Size(11, 11), Size(-1, -1), TermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 30, 0.1));
drawChessboardCorners(gray, board_sz, corners, found);
}
}
cap.release();
waitKey();
return 0;
}
The code captures frames from a webcam. If a chessboard is detected, the total number of found corners is printed (I added this because I was not getting the same output as the tutorial code and wanted to find where the bug is).
The output:
First, you should follow some ground rules:
Do not use loose papers -> print/glue the chessboard onto a flat plate
Print it with a big white border to improve detection
The chessboard has to be completely inside the image (not as in your example)
Take several images with different positions of the chessboard
Second, you can't draw your corners into an 8-bit grayscale image; use an 8-bit color image instead.
And if I count correctly (counting inner corners), your chessboard has size (8,6).
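Applied to the code above, that might look like this (a sketch against the question's variables):

// Sketch: detect on the grayscale image, but draw into the color frame
// so the colored corner markers are visible.
bool found = findChessboardCorners(gray, board_sz, corners, CALIB_CB_ADAPTIVE_THRESH);
drawChessboardCorners(img, board_sz, corners, found);
imshow("this is you, smile! :)", img);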
I had the same problem: the number of corners was HUGE. After some searching I found this solution here.
For some reason the findChessboardCorners function resizes the corners vector. I tried the solution above; it worked well for the output corners, but I still had an assertion-failed problem with the cornerSubPix function.
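One guard that may help (a sketch, not a confirmed fix): only call cornerSubPix when the pattern was fully found and the corners vector is non-empty:

// Sketch: skip refinement when detection failed or returned no corners,
// so cornerSubPix never receives an empty vector.
if (found && !corners.empty())
{
    cornerSubPix(gray, corners, Size(11, 11), Size(-1, -1),
                 TermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 30, 0.1));
}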
I tried to extract SIFT key points. It works fine for a sample image I downloaded (height 400 px, width 247 px, horizontal and vertical resolution 300 dpi). The below image shows the extracted points.
Then I tried to apply the same code to an image that was taken and edited by me (height 443 px, width 541 px, horizontal and vertical resolution 72 dpi).
To create the above image I rotated the original image, removed its background, and resized it using Photoshop; but for that image my code doesn't extract features like in the first image.
See the result:
It extracts only very few points. I expect a result as in the first case.
For the second image, when I use the original image without any edits, the program gives points as in the first case.
Here is the simple code I have used:
#include<opencv\cv.h>
#include<opencv\highgui.h>
#include<opencv2\nonfree\nonfree.hpp>
using namespace cv;
int main(){
Mat src, descriptors,dest;
vector<KeyPoint> keypoints;
src = imread(". . .");
cvtColor(src, src, CV_BGR2GRAY);
SIFT sift;
sift(src, src, keypoints, descriptors, false);
drawKeypoints(src, keypoints, dest);
imshow("Sift", dest);
cvWaitKey(0);
return 0;
}
What am I doing wrong here? What do I need to do to get a result like in the first case for my own image after resizing?
Thank you!
Try setting the nfeatures parameter in the SIFT constructor (other parameters may also need adjustment).
Here is the constructor definition from the reference:
SIFT::SIFT(int nfeatures=0, int nOctaveLayers=3, double contrastThreshold=0.04, double edgeThreshold=10, double sigma=1.6)
Your code will be:
#include <opencv\cv.h>
#include <opencv\highgui.h>
#include <opencv2\nonfree\nonfree.hpp>

using namespace cv;
using namespace std;

int main(){
    Mat src, descriptors, dest;
    vector<KeyPoint> keypoints;
    src = imread("D:\\ImagesForTest\\leaf.jpg");
    cvtColor(src, src, CV_BGR2GRAY);
    SIFT sift(2000, 3, 0.004);
    sift(src, src, keypoints, descriptors, false);
    drawKeypoints(src, keypoints, dest);
    imshow("Sift", dest);
    cvWaitKey(0);
    return 0;
}
The result:
Dense sampling example:
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>
#include "opencv2/nonfree/nonfree.hpp"
int main(int argc, char* argv[])
{
cv::initModule_nonfree();
cv::namedWindow("result");
cv::Mat bgr_img = cv::imread("D:\\ImagesForTest\\lena.jpg");
if (bgr_img.empty())
{
exit(EXIT_FAILURE);
}
cv::Mat gray_img;
cv::cvtColor(bgr_img, gray_img, cv::COLOR_BGR2GRAY);
cv::normalize(gray_img, gray_img, 0, 255, cv::NORM_MINMAX);
cv::DenseFeatureDetector detector(12.0f, 1, 0.1f, 10);
std::vector<cv::KeyPoint> keypoints;
detector.detect(gray_img, keypoints);
std::vector<cv::KeyPoint>::iterator itk;
for (itk = keypoints.begin(); itk != keypoints.end(); ++itk)
{
std::cout << itk->pt << std::endl;
cv::circle(bgr_img, itk->pt, itk->size, cv::Scalar(0,255,255), 1, CV_AA);
cv::circle(bgr_img, itk->pt, 1, cv::Scalar(0,255,0), -1);
}
cv::Ptr<cv::DescriptorExtractor> descriptorExtractor = cv::DescriptorExtractor::create("SURF");
cv::Mat descriptors;
descriptorExtractor->compute( gray_img, keypoints, descriptors);
// SIFT returns large negative values when it goes off the edge of the image.
descriptors.setTo(0, descriptors<0);
imshow("result",bgr_img);
cv::waitKey();
return 0;
}
The result:
The following code is used to calculate the normalized gradient at all pixels of an image. But when imshow is used on the calculated gradient, instead of showing the gradient for the provided image, it shows the gradient of the provided image 4 times (side by side).
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <opencv2/core/core.hpp>
#include<iostream>
#include<math.h>
using namespace cv;
using namespace std;
Mat mat2gray(const Mat& src)
{
Mat dst;
normalize(src, dst, 0.0, 1.0, NORM_MINMAX);
return dst;
}
Mat setImage(Mat srcImage){
//GaussianBlur(srcImage,srcImage,Size(3,3),0.5,0.5);
Mat avgImage = Mat::zeros(srcImage.rows,srcImage.cols,CV_32F);
Mat gradient = Mat::zeros(srcImage.rows,srcImage.cols,CV_32F);
Mat norMagnitude = Mat::zeros(srcImage.rows,srcImage.cols,CV_32F);
Mat orientation = Mat::zeros(srcImage.rows,srcImage.cols,CV_32F);
//Mat_<uchar> srcImagetemp = srcImage;
float dx,dy;
for(int i=0;i<srcImage.rows-1;i++){
for(int j=0;j<srcImage.cols-1;j++){
dx=srcImage.at<float>(i,j+1)-srcImage.at<float>(i,j);
dy=srcImage.at<float>(i+1,j)-srcImage.at<float>(i,j);
gradient.at<float>(i,j)=sqrt(dx*dx+dy*dy);
orientation.at<float>(i,j)=atan2(dy,dx);
//cout<<gradient.at<float>(i,j)<<endl;
}
}
GaussianBlur(gradient,avgImage,Size(7,7),3,3);
for(int i=0;i<srcImage.rows;i++){
for(int j=0;j<srcImage.cols;j++){
norMagnitude.at<float>(i,j)=gradient.at<float>(i,j)/max(avgImage.at<float>(i,j),float(4));
//cout<<norMagnitude.at<float>(i,j)<<endl;
}
}
imshow("b",(gradient));
waitKey();
return norMagnitude;
}
int main(int argc,char **argv){
Mat image=imread(argv[1]);
cvtColor( image,image, CV_BGR2GRAY );
Mat newImage=setImage(image);
imshow("a",(newImage));
waitKey();
}
Your incoming source image is of type CV_8UC1, and yet you read it as floats:
dx=srcImage.at<float>(i,j+1)-srcImage.at<float>(i,j);
dy=srcImage.at<float>(i+1,j)-srcImage.at<float>(i,j);
If running under the debugger, this should have triggered an assertion failure, which would have highlighted the problem.
Try changing those lines to use unsigned char as follows:
dx=(float)(srcImage.at<unsigned char>(i,j+1)-srcImage.at<unsigned char>(i,j));
dy=(float)(srcImage.at<unsigned char>(i+1,j)-srcImage.at<unsigned char>(i,j));
I am doing face detection from video, so I wrote a small piece of code to detect faces.
#include<opencv2/objdetect/objdetect.hpp>
#include<opencv2/highgui/highgui.hpp>
#include<opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include <stdio.h>
#include<cv.h>
using namespace std;
using namespace cv;
CvCapture *capture=cvCaptureFromFile("foot.mp4");
double min_face_size=30;
double max_face_size=400;
Mat detectFace(Mat src);
int main( )
{
namedWindow( "window1", 1 );
while(1)
{
Mat frame,frame1;
frame1=cvQueryFrame(capture);;
frame=detectFace(frame1);
imshow( "window1", frame );
if(waitKey(1) == 'c') break;
}
waitKey(0);
return 0;
}
Mat detectFace(Mat image)
{
CascadeClassifier face_cascade( "haarcascade_frontalface_alt2.xml" );
CvPoint ul,lr;
std::vector<Rect> faces;
face_cascade.detectMultiScale( image, faces, 1.1, 2, 0|CV_HAAR_SCALE_IMAGE, Size(min_face_size, min_face_size),Size(max_face_size, max_face_size) );
for( int i = 0; i < faces.size(); i++ )
{
min_face_size = faces[i].width*0.8;
max_face_size = faces[i].width*1.2;
ul.x=faces[i].x;
ul.y=faces[i].y;
lr.x=faces[i].x + faces[i].width;
lr.y=faces[i].y + faces[i].height;
rectangle(image,ul,lr,CV_RGB(1,255,0),3,8,0);
}
return image;
}
I took a video for face detection which contains both small and large faces. My problem is that with my code it detects only small faces, and it also shows some unwanted detections.
I need to detect both small and large faces in the video. How should I do this?
Is there any problem with the scaling factor?
Please help me understand this problem.
Try increasing 'double max_face_size', which controls the largest face size you want to detect.
You can also increase the '2' in the parameters of 'detectMultiScale()' (the minNeighbors argument), which controls the quality of the detected faces.
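A sketch of that (using the question's variables; the exact bounds and minNeighbors value are assumptions, not tested values): keep the size limits fixed instead of shrinking them around the previous detection, so large faces stay inside the search range:

// Sketch: fixed size bounds so both small and large faces are searched on
// every frame, plus a higher minNeighbors to suppress spurious detections.
face_cascade.detectMultiScale(image, faces, 1.1, 4, 0 | CV_HAAR_SCALE_IMAGE,
                              Size(30, 30), Size(400, 400));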
I'm trying to use the OpenCV FAST algorithm to detect corners in a video feed. The method call and set-up seem pretty straightforward, yet I'm running into a few problems. When I try to use this code:
while(run)
{
    clock_t begin, end;
    img = cvQueryFrame(capture);
    key = cvWaitKey(10);
    cvShowImage("stream", img);
    //cv::FAST variables
    int threshold = 9;
    vector<KeyPoint> keypoints;
    if(key == 'a'){
        //begin = clock();
        Mat mat(tempImg);
        FAST(mat, keypoints, threshold, true);
        //end = clock();
        //cout << "\n TIME FOR CALCULATION: " << double(diffClock(begin, end)) << "\n";
    }
I get this error:
OpenCV Error: Assertion failed (image.data && image.type() == CV_8U) in unknown function, file ..\..\..\..\ocv\opencv\src\cvaux\cvfast.cpp, line 6039
So I figured it's a problem with the depth of the image, so when I add this:
IplImage* tempImg = cvCreateImage(Size(img->width,img->height),8,1);
cvCvtColor(img,tempImg,CV_8U);
I get:
OpenCV Error: Bad number of channels (Incorrect number of channels for this conversion code) in unknown function, file ..\..\..\..\ocv\opencv\src\cv\cvcolor.cpp, line 2238
I've tried using a Mat instead of an IplImage to capture, but I keep getting the same kinds of errors.
Any suggestions or help?
Thanks in advance.
The entire file, just to make it easier for anyone:
#include "cv.h"
#include "cvaux.hpp"
#include "highgui.h"
#include <time.h>
#include <iostream>
double diffClock(clock_t begin, clock_t end);
using namespace std;
using namespace cv;
int main(int argc, char** argv)
{
//Create Mat img for camera capture
IplImage* img;
bool run = true;
CvCapture* capture= 0;
capture = cvCaptureFromCAM(-1);
int key =0;
cvNamedWindow("stream", 1);
while(run)
{
clock_t begin,end;
img = cvQueryFrame(capture);
key = cvWaitKey(10);
cvShowImage("stream",img);
//Cv::FAST variables
int threshold=9;
vector<KeyPoint> keypoints;
if(key=='a'){
//begin = clock();
IplImage* tempImg = cvCreateImage(Size(img->width,img->height),8,1);
cvCvtColor(img,tempImg,CV_8U);
Mat mat(img);
FAST(mat,keypoints,threshold,true);
//end = clock();
//cout << "\n TIME FOR CALCULATION: " << double(diffClock(begin,end)) << "\n" ;
}
else if(key=='x'){
run= false;
}
}
cvDestroyWindow( "stream" );
return 0;
}
Whenever you have a problem using the OpenCV API, go check the tests/examples available in the source code: fast.cpp
This practice is extremely useful and educational. Now, if you take a look at that code, you will notice that the image gets converted to grayscale before calling cv::FAST() on it:
Mat mat(tempImg);
Mat gray;
cvtColor(mat, gray, CV_BGR2GRAY);
FAST(gray,keypoints,threshold,true);
Seems pretty straightforward, indeed.
You need to change this:
cvCvtColor(img,tempImg,CV_8U);
to:
cvCvtColor(img,tempImg,CV_BGR2GRAY);
You can read this.
Good luck!
I started getting the same message with code that had worked previously, and I was certain my Mat was U8 grayscale. It turned out that one of the images I was trying to process was no longer there. So in my case it was a misleading error message.
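A simple guard catches that case early (a sketch; the path variable is hypothetical):

// Sketch: verify the image actually loaded before handing it to FAST.
cv::Mat img = cv::imread(path, cv::IMREAD_GRAYSCALE);
if (img.empty()) {
    std::cerr << "could not read image: " << path << std::endl;
    return -1;
}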
Take a look at this sample code. The code you are using looks like quite outdated OpenCV; in this sample you will find how feature detectors should be used now.
The sample is generic over several feature detectors (including FAST), which is why it looks a bit more complicated.
http://code.opencv.org/projects/opencv/repository/entry/branches/2.4/opencv/samples/cpp/matching_to_many_images.cpp
You will also find more samples in the parent directory.
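The core of that newer interface boils down to something like this (a sketch of the 2.4-style factory API; grayImage is an assumed 8-bit grayscale cv::Mat):

// Sketch: detectors are created by name and share a common detect() call.
cv::Ptr<cv::FeatureDetector> detector = cv::FeatureDetector::create("FAST");
std::vector<cv::KeyPoint> keypoints;
detector->detect(grayImage, keypoints);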
Please follow the code below to get your desired result. As an example I am considering a single image only, but you can simply use the same idea for video frames.
Mat img = imread("IMG.jpg", IMREAD_UNCHANGED);
if (img.empty())
{
    cout << "File not available for reading" << endl;
    return -1;
}
Mat grayImage;
if (img.channels() > 2){
    cvtColor(img, grayImage, CV_BGR2GRAY); // converting color to gray image
}
else{
    grayImage = img;
}
double sigma = 1;
GaussianBlur(grayImage, grayImage, Size(), sigma, sigma); // applying gaussian blur to remove some noise, if present
int thresholdCorner = 40;
vector<KeyPoint> keypointsCorners;
FAST(grayImage, keypointsCorners, thresholdCorner, true); // applying FAST key point detector
if (keypointsCorners.size() > 0){
    cout << keypointsCorners.size() << endl;
}
// Drawing a circle around corners
for (int i = 0; i < keypointsCorners.size(); i++)
{
    circle(grayImage, keypointsCorners.at(i).pt, 5, Scalar(0), 2, 8, 0);
}
cv::namedWindow("Display Image");
cv::imshow("Display Image", grayImage);
cvWaitKey(0);
cvDestroyWindow("Display Image");