OpenCV camera calibration, Assertion failed (ni > 0 && ni == ni1) - C++

I am trying to perform camera calibration using OpenCV. With the example data from the OpenCV samples this works fine, but my own source data generates an error:
Assertion failed (ni > 0 && ni == ni1) ... collectCalibrationData ... calibration.cpp line 3193
objectPoints and imagePoints are the same size, and changing the other inputs does not affect the error. Can anyone help me deal with it? The code follows:
int main()
{
    // initial operations, declarations etc.
    int *Asize; float *APoints;
    float AAPoints[] = {/* My Array */};
    int AAsize[] = {4, 54, 2}; // my array size
    Asize = AAsize; APoints = AAPoints; // will be called as a dll, that is why this odd assignment
    int page = *(Asize); int row = *(Asize + 1); int col = *(Asize + 2);

    Point3f pointBuf;
    vector<vector<Point3f>> imagePoints;
    vector<Point3f> vectBuf;
    Size boardSize;
    boardSize.height = 6; boardSize.width = 9;
    Mat cameraMatrix, distCoeffs;
    float squareSize = 50;

    // change 1d array to vector<vector<Point3f>>
    for (int i = 0; i < page; i++)
    {
        for (int j = 0; j < row; j++)
        {
            pointBuf.x = *(APoints + (i*(row*col)) + j*2);
            pointBuf.y = *(APoints + (i*(row*col)) + j*2 + 1);
            pointBuf.z = 0;
            vectBuf.push_back(pointBuf);
        }
        imagePoints.push_back(vectBuf);
        vectBuf.clear();
    }

    // create objectPoints vector<vector<Point3f>>
    vector<vector<Point3f>> objectPoints;
    vectBuf.clear();
    for( int i = 0; i < boardSize.height; ++i )
    {
        for( int j = 0; j < boardSize.width; ++j )
            vectBuf.push_back(Point3f(float( j*squareSize ), float( i*squareSize ), 0));
    }
    objectPoints.resize(imagePoints.size(), vectBuf);

    // initialize starting variables for calibration
    cameraMatrix = Mat::eye(3, 3, CV_64F);
    distCoeffs = Mat::zeros(1, 1, CV_64F);
    cameraMatrix.at<double>(0,0) = 1.0;
    vector<Mat> rvecs, tvecs;
    Size imageSize; imageSize.width = 2040; imageSize.height = 2040;

    double rms = calibrateCamera(objectPoints, imagePoints, imageSize,
                                 cameraMatrix, distCoeffs, rvecs, tvecs);
}

The trouble is with the use of cv::calibrateCamera(): it requires std::vector<std::vector<cv::Point3f>> as the first argument and std::vector<std::vector<cv::Point2f>> (not Point3f) as the second.
e.g.:
cv::Size imageSize(2040, 2040);
cv::Mat cameraMatrix, distCoeffs;
std::vector<cv::Mat> rvecs, tvecs;
std::vector<std::vector<cv::Point2f> > imagePoints;
std::vector<std::vector<cv::Point3f> > objectPoints;
// Fill imagePoints and objectPoints
...
cv::calibrateCamera(objectPoints, imagePoints, imageSize, cameraMatrix, distCoeffs, rvecs, tvecs);
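For the code in the question above, that means the first conversion loop should produce Point2f entries for the image points, not Point3f. A minimal sketch of that conversion, assuming the same flat-array layout as in the question (page blocks of row*col floats, with col == 2 for the x,y pairs):
// Sketch only: build the 2-D image points from the flat float array.
vector<vector<Point2f> > imagePoints;
for (int i = 0; i < page; i++)
{
    vector<Point2f> viewBuf;
    for (int j = 0; j < row; j++)
    {
        float x = *(APoints + i*(row*col) + j*2);
        float y = *(APoints + i*(row*col) + j*2 + 1);
        viewBuf.push_back(Point2f(x, y));
    }
    imagePoints.push_back(viewBuf);
}
// imagePoints can now be passed as the second argument to calibrateCamera.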

I ran into the same issue. In my case, there was a bug converting vector<vector<Point3f>> to InputArrayOfArrays.
You can run the following sample code to check it on your system:
#include <iostream>
#include <opencv2/opencv.hpp>

using namespace std;
using namespace cv;

int main( void )
{
    vector<Point3f> objectPoints_tmp;
    objectPoints_tmp.reserve(54);
    for( int i = 0; i < 9; ++i )
        for( int j = 0; j < 6; ++j )
            objectPoints_tmp.push_back(Point3f(j*50, i*50, 0.0f));

    vector<vector<Point3f> > objectPoints(7, objectPoints_tmp);

    InputArrayOfArrays OBJPOINT = objectPoints;
    std::cout << (int) OBJPOINT.total() << std::endl;
    for( int i = 0; i < 7; ++i )
        std::cout << OBJPOINT.getMat(i).checkVector(3, CV_32F) << std::endl;

    return 0;
}
Normally, it should print out
7
54
54
54
54
54
54
54
In my failing case, it prints out random numbers.
I suspect the prebuilt OpenCV library did not match my system (Win7 x64 + VS2012). After rebuilding OpenCV myself, it works now. You may google the keywords "Win7x64 VS2012 OpenCV CMake TBB".

Related

OpenCV Error: Assertion failed (nimages > 0) in cv::calibrateCamera

I am having some problems using the calibrateCamera method from OpenCV. I am running OpenCV on Windows x64 with Visual Studio 2013.
The problem comes when I call cv::calibrateCamera: it throws the exception "Assertion failed (nimages > 0)".
Here's a sample of the code:
vector< vector< Vec3f > > _object_points;
vector< vector< Vec2f > > _image_points;

_hor_corners = 9;
_ver_corners = 6;
_squareSize = 20; // let's say 20 mm
_patternsize = Size(_hor_corners, _ver_corners);

Mat corners;
vector< Vec2f > imgP;
vector< Vec3f > obj;

for (int i = 0; i < _ver_corners; i++)      // create the 3d vector
    for (int j = 0; j < _hor_corners; j++)
        obj.push_back(Point3f((float)j * _squareSize, (float)i * _squareSize, 0));

for (int i = 0; i < numImages; i++)
{
    image = calibImages[i];
    // ... code for detecting corners
    if (corners.rows == _hor_corners*_ver_corners)
    {
        _image_points.push_back(imgP); // fill arrays
        _object_points.push_back(obj);
        cout << imgP.size() << "___" << _image_points.size() << "___"
             << obj.size() << "___" << _object_points.size() << endl; // here everything is okay
        imgP.clear();
    }
}

cv::calibrateCamera(_object_points, _image_points, image.size(), intrinsic,
                    distCoeffs, rvecs, tvecs, 0); // throws the exception here
And here is where the assertion is thrown:
double cv::calibrateCamera( InputArrayOfArrays _objectPoints,
                            InputArrayOfArrays _imagePoints,
                            Size imageSize, InputOutputArray _cameraMatrix, InputOutputArray _distCoeffs,
                            OutputArrayOfArrays _rvecs, OutputArrayOfArrays _tvecs, int flags, TermCriteria criteria )
{
    int rtype = CV_64F;
    Mat cameraMatrix = _cameraMatrix.getMat();
    cameraMatrix = prepareCameraMatrix(cameraMatrix, rtype);
    Mat distCoeffs = _distCoeffs.getMat();
    distCoeffs = prepareDistCoeffs(distCoeffs, rtype);
    if( !(flags & CALIB_RATIONAL_MODEL) &&
        (!(flags & CALIB_THIN_PRISM_MODEL)) &&
        (!(flags & CALIB_TILTED_MODEL)))
        distCoeffs = distCoeffs.rows == 1 ? distCoeffs.colRange(0, 5) : distCoeffs.rowRange(0, 5);

    int nimages = int(_objectPoints.total());
    CV_Assert( nimages > 0 );
    Mat objPt, imgPt, npoints, rvecM, tvecM;
I did some searching around the internet but couldn't find any logical answer to this problem. The vectors are filled correctly and have the same length. Any idea of what may be happening here?
Thank you very much.

OpenCV Error: Camera Calibration: Assertion failed in matrix_wrap.cpp

The Error:
OpenCV(3.4.1) Error: Assertion failed (i < 0) in cv::debug_build_guard::_InputArray::getMat_,
file C:\build\master_winpack-build-win64-vc15\opencv\modules\core\src\matrix_wrap.cpp, line 50
The Code inside "matrix_wrap.cpp" that triggers the Error:
if( k == STD_VECTOR )
{
    CV_Assert( i < 0 );
    int t = CV_MAT_TYPE(flags);
    const std::vector<uchar>& v = *(const std::vector<uchar>*)obj;
    return !v.empty() ? Mat(size(), t, (void*)&v[0]) : Mat();
}
Entire Code:
#include <iostream>
#include <opencv2/opencv.hpp>

int main(int argc, char *argv[])
{
    cv::Mat img = cv::imread(argv[1], CV_LOAD_IMAGE_GRAYSCALE);
    cv::Size imageSize = cv::Size(img.size[0], img.size[1]);
    cv::Mat cameraMatrix, distCoeffs;
    double squareSize = 30;
    cv::Size boardSize = cv::Size(6, 11);

    std::vector<cv::Point2f> imagePoints;
    std::vector<cv::Point3f> objectPoints;
    for (int i = 0; i < boardSize.height; i++) {
        for (int j = 0; j < boardSize.width; j++) {
            objectPoints.push_back(
                cv::Point3f(float(j * squareSize), float(i * squareSize), 0));
        }
    }

    std::vector<cv::Mat> rvecs, tvecs;
    bool found = false;
    if (img.size[0] > 1)
    {
        bool found = findChessboardCorners(img, boardSize, imagePoints, cv::CALIB_CB_ADAPTIVE_THRESH);
        if (found)
        {
            drawChessboardCorners(img, boardSize, imagePoints, found);
        }
        objectPoints.resize(imagePoints.size(), objectPoints[0]);

        double rms = calibrateCamera(objectPoints, imagePoints, imageSize,
                                     cameraMatrix, distCoeffs, rvecs, tvecs);

        cv::namedWindow("My Image");
        cv::imshow("My Image", img);
        cv::waitKey(1000);
    }
    return 0;
}
I want to do a basic camera calibration with a straight chessboard.
Everything works fine until the call to calibrateCamera, and I can't figure out why this error shows up.
The sizes of objectPoints and imagePoints are the same.
Thank you in advance.
P.S.:
I'm new to this forum and also to OpenCV ;)
As per this OpenCV source, lines 3139 and 3142:
CV_Error(CV_StsUnsupportedFormat, "objectPoints should contain vector of vectors of points of type Point3f");
CV_Error(CV_StsUnsupportedFormat, "imagePoints1 should contain vector of vectors of points of type Point2f");
The types for objectPoints and imagePoints should be std::vector<std::vector<Point{2|3}f>>.
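A minimal fix for the code above is therefore to wrap the two single-view vectors in an outer vector before the call (a sketch, untested against this exact setup):
// Wrap each single view in an outer vector, as calibrateCamera expects.
std::vector<std::vector<cv::Point2f> > imagePointsVec(1, imagePoints);
std::vector<std::vector<cv::Point3f> > objectPointsVec(1, objectPoints);
double rms = cv::calibrateCamera(objectPointsVec, imagePointsVec, imageSize,
                                 cameraMatrix, distCoeffs, rvecs, tvecs);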

OpenCV 3.0 - SVM + HOG gives same wrong responses for half-person detection

I'm working on a detector for half bodies, in order to improve the performance of a normal people detector. I know there are more ways to deal with occlusion, but this is what I was asked to do in my final degree project. My problem is that I'm not getting good performance; moreover, I'm getting a kind of pattern in which 4 rectangles representing the detections are shown in almost the same positions, not even covering a half body.
I have a set of 414 images of top-half bodies cropped by myself, used as positive samples, and 8520 negative images, all of them sized 64x64. I extracted the HOG descriptors as follows:
int i;
string imgname, index;
HOGDescriptor hog (Size(64,64), Size(16,16), Size(8,8), Size(8,8), 9, 1, -1,
                   HOGDescriptor::L2Hys, 0.2, false, HOGDescriptor::DEFAULT_NLEVELS, false);
vector<float> pos_rec_descript;
vector<Point> locations;
size_t SizeDesc;
SizeDesc = hog.getDescriptorSize();

FileStorage fpd ("Pos_Descriptors.yml", FileStorage::WRITE);
for (i = 1; i < 415; i++) { // 414 images in ./pos_rec3
    stringstream a;
    a << i;
    imgname = "./pos_rec3/img" + a.str();
    imgname += ".png";
    Mat img = imread(imgname, CV_LOAD_IMAGE_COLOR);
    hog.compute(img, pos_rec_descript, Size (16,16), Size (0,0), locations);
    fpd << "Descriptores" + a.str() << pos_rec_descript;
}
fpd.release();
And I did the same with the negative samples.
Then I trained an SVM as follows:
#define POS 414
#define NEG 8520
#define TOTAL 8934
#define DESCRIPT 1764

float trainingData[TOTAL][DESCRIPT];
int labels[TOTAL];
fstream doc;

void set_labels(){
    int i;
    for (i = 0; i < TOTAL; i++){
        if (i < POS) {
            labels[i] = 1;
        }
        else{
            labels[i] = -1;
        }
    }
    return;
}

int main(int, char**)
{
    FileStorage fsv ("supvec.yml", FileStorage::WRITE);
    FileStorage ftd ("TrainData.yml", FileStorage::WRITE);
    //FileStorage flm ("Labels.yml", FileStorage::WRITE);
    FileStorage fpd ("../HOG_descriptors/Pos_Descriptors.yml", FileStorage::READ);
    FileStorage fnd ("../HOG_descriptors_neg/Neg_Descriptors.yml", FileStorage::READ);
    set_labels();

    // Set up training data
    vector <float> pos_D, neg_D, train_D;
    int k = 0;

    for (int i = 1; i < POS+1; i++) {
        stringstream a;
        a << i;
        fpd["Descriptores" + a.str()] >> pos_D;
        for (int j = 0; j < pos_D.size(); j++){
            train_D.push_back(pos_D[j]);
        }
    }
    fpd.release();

    for (int i = 1; i < NEG+1; i++) {
        stringstream a;
        a << i;
        fnd["Descriptores" + a.str()] >> neg_D;
        for (int j = 0; j < neg_D.size(); j++){
            train_D.push_back(neg_D[j]);
        }
    }
    fnd.release();

    for (int i = 0; i < TOTAL; i++){
        for (int j = 0; j < DESCRIPT; j++){
            trainingData[i][j] = train_D[k];
            k++;
        }
    }

    Mat trainingDataMat(TOTAL, DESCRIPT, CV_32FC1, trainingData);
    //memcpy(trainingDataMat.data, train_D.data(), train_D.size()*sizeof(float));
    Mat labelsMat(TOTAL, 1, CV_32SC1, labels);
    //ftd << "trainingDataMat" << trainingDataMat;
    //flm << "labelsMat" << labelsMat;

    // Train the SVM
    Ptr<SVM> svm = SVM::create();
    svm->setType(SVM::C_SVC);
    svm->setKernel(SVM::LINEAR);
    svm->setTermCriteria(TermCriteria(TermCriteria::MAX_ITER, 100, 1e-6));

    /*Ptr<TrainData> autoTrainData = TrainData::create(trainingDataMat, ROW_SAMPLE, labelsMat);
    ParamGrid Cgrid = SVM::getDefaultGrid(SVM::C);
    ParamGrid gammaGrid = SVM::getDefaultGrid(SVM::GAMMA);
    ParamGrid pGrid = SVM::getDefaultGrid(SVM::P);
    pGrid.logStep = 1;
    ParamGrid nuGrid = SVM::getDefaultGrid(SVM::NU);
    nuGrid.logStep = 1;
    ParamGrid coeffGrid = SVM::getDefaultGrid(SVM::COEF);
    coeffGrid.logStep = 1;
    ParamGrid degreeGrid = SVM::getDefaultGrid(SVM::DEGREE);
    degreeGrid.logStep = 1; */

    cout << "Training..." << endl;
    //svm->trainAuto(autoTrainData, 10, Cgrid, gammaGrid, pGrid, nuGrid, coeffGrid, degreeGrid, false);
    svm->train(trainingDataMat, ROW_SAMPLE, labelsMat);
    svm->save("SVM3_WS16_P0_LINEAR.yml");
I've tried both LINEAR and RBF kernels (that's why you can see a commented-out autotrain part of the code that I used to swap between SVM types), but neither of them seems to work. In fact they give nearly the same responses, which makes me think that the training phase or the detection phase (code below) may be ruining the whole project.
This is how I load the SVM into the HOG detector and try it on images:
using namespace cv;
using namespace std;
using namespace cv::ml;

// static void help()
// {
//     printf(
//         "\nDemonstrate the use of the HoG descriptor using\n"
//         "  HOGDescriptor::hog.setSVMDetector(HOGDescriptor::getDefaultPeopleDetector());\n"
//         "Usage:\n"
//         "./peopledetect (<image_filename> | <image_list>.txt)\n\n");
// }

void get_svm_detector(const Ptr<SVM>& svm, vector< float > & hog_detector );

void get_svm_detector(const Ptr<SVM>& svm, vector< float > & hog_detector )
{
    // get the support vectors
    Mat sv = svm->getSupportVectors();
    const int sv_total = sv.rows;
    // get the decision function
    Mat alpha, svidx;
    double rho = svm->getDecisionFunction(0, alpha, svidx);

    CV_Assert( alpha.total() == 1 && svidx.total() == 1 && sv_total == 1 );
    CV_Assert( (alpha.type() == CV_64F && alpha.at<double>(0) == 1.) ||
               (alpha.type() == CV_32F && alpha.at<float>(0) == 1.f) );
    CV_Assert( sv.type() == CV_32F );

    hog_detector.clear();
    hog_detector.resize(sv.cols + 1);
    memcpy(&hog_detector[0], sv.ptr(), sv.cols*sizeof(hog_detector[0]));
    hog_detector[sv.cols] = (float)-rho;
}

int main(int argc, char** argv)
{
    Mat img;
    FILE* f = 0;
    char _filename[1024];

    if( argc == 1 )
    {
        printf("Usage: peopledetect (People_imgs | People_imgs.txt)\n");
        return 0;
    }
    img = imread(argv[1]);

    if( img.data )
    {
        strcpy(_filename, argv[1]);
    }
    else
    {
        f = fopen(argv[1], "rt");
        if(!f)
        {
            fprintf( stderr, "ERROR: the specified file could not be loaded\n");
            return -1;
        }
    }

    // Load SVM
    Ptr<SVM> svm = SVM::create();
    svm = cv::Algorithm::load<ml::SVM>("../SVM_Train/SVM3_WS16_P0_LINEAR.yml");

    HOGDescriptor hog (Size(64,64), Size(16,16), Size(8,8), Size(8,8), 9, 1, -1,
                       HOGDescriptor::L2Hys, 0.2, false, HOGDescriptor::DEFAULT_NLEVELS, false);
    vector <float> hog_detector;
    get_svm_detector (svm, hog_detector);
    hog.setSVMDetector(hog_detector);

    namedWindow("people detector", 1);

    for(;;)
    {
        char* filename = _filename;
        if(f)
        {
            if(!fgets(filename, (int)sizeof(_filename)-2, f))
                break;
            //while(*filename && isspace(*filename))
            //    ++filename;
            if(filename[0] == '#')
                continue;
            int l = (int)strlen(filename);
            while(l > 0 && isspace(filename[l-1]))
                --l;
            filename[l] = '\0';
            img = imread(filename);
        }
        printf("%s:\n", filename);
        if(!img.data)
            continue;

        fflush(stdout);
        vector<Rect> found, found_filtered, searchLocations;
        vector<double> found_weights;
        double t = (double)getTickCount();
        // run the detector with default parameters. to get a higher hit-rate
        // (and more false alarms, respectively), decrease the hitThreshold and
        // groupThreshold (set groupThreshold to 0 to turn off the grouping completely).
        hog.detectMultiScale(img, found, found_weights, 0, Size(16,16), Size(0,0), 1.01, 2);
        //hog.detect(img, found, 0, Size(16,16), Size(0,0), searchLocations);
        t = (double)getTickCount() - t;
        printf("\tdetection time = %gms\n", t*1000./cv::getTickFrequency());

        size_t i, j;
        for( i = 0; i < found.size(); i++ )
        {
            Rect r = found[i];
            for( j = 0; j < found.size(); j++ )
                if( j != i && (r & found[j]) == r)
                    break;
            if( j == found.size() )
                found_filtered.push_back(r);
        }
        for( i = 0; i < found_filtered.size(); i++ )
        {
            Rect r = found_filtered[i];
            // the HOG detector returns slightly larger rectangles than the real objects,
            // so we slightly shrink the rectangles to get a nicer output.
            r.x += cvRound(r.width*0.1);
            r.width = cvRound(r.width*0.7);
            r.y += cvRound(r.height*0.07);
            r.height = cvRound(r.height*0.7);
            rectangle(img, r.tl(), r.br(), cv::Scalar(0,255,0), 2);
            imshow("people detector", img);
            waitKey(0);
        }
        //imshow("people detector", img);
        //string imgname = "./Responses/Win_Stride16_4.png";
        //imwrite(imgname, img);

        int c = waitKey(0) & 255;
        if( c == 'q' || c == 'Q' || !f)
            break;
    }
    if(f)
        fclose(f);
    return 0;
}
I have checked all the dimensions of the descriptors, and every Mat seems fine. But when I use detectMultiScale, it shows things like this:
Image 1: strangely, it is missing lots of detections
Image 2: here I realized there was a kind of pattern with these 4 rects
My problem is that no matter what I change (descriptors, kernel, winStride and padding in detectMultiScale), the responses are always very similar, and nothing indicates that there is a correct detection there.
I'm not very sure about how I'm giving the support vectors to HOG, but it is the only way I found to do it (found in one of the posts on StackOverflow).
If any of you has any idea of what is going on here, and why the responses do not change from one configuration to another, I would be greatly thankful. This code has been giving me headaches for weeks now. I've been changing parameters on functions and on HOG, changing kernels, and trying different sets of images, but nothing seems to produce a meaningful change in the final result.

How to use findChessboardCorners and calibrateCamera?

I want to use findChessboardCorners with calibrateCamera, but I encounter errors when calling calibrateCamera, and the output is not very helpful:
OpenCV Error: Assertion failed (i < 0) in cv::_OutputArray::create
static const Size patterSize(8, 6);
auto image = imread("x.jpg", IMREAD_GRAYSCALE);
Mat corners;
auto found = findChessboardCorners(image, patterSize, corners);

// constructing objectPoints
vector<vector<Vec3f>> objectPoints;
objectPoints.push_back(vector<Vec3f>());
for (int row = 0; row < patterSize.height; row++) {
    for (int col = 0; col < patterSize.width; col++) {
        objectPoints.at(0).push_back(Vec3f(row, col, 0));
    }
}

vector<vector<Vec2f>> imagePoints;
imagePoints.push_back(vector<Vec2f>());
for (int row = 0; row < patterSize.height; row++) {
    for (int col = 0; col < patterSize.width; col++) {
        int num = row*patterSize.width + col;
        imagePoints.at(0).push_back(corners.at<Vec2f>(num, 0));
    }
}

Mat cameraMatrix, distMatrix, rvecs, tvecs;
calibrateCamera(objectPoints, imagePoints, Size(image.size().width, image.size().height),
                cameraMatrix, distMatrix, rvecs, tvecs);
The declaration of rvecs and tvecs should be vector<Mat> rather than Mat, and then everything will be fine. For more information, see 3calibration.cpp.
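In other words, a sketch of just the changed lines, keeping the rest of the code above as it is:
// One rotation and one translation vector per calibration view,
// so rvecs and tvecs must be vectors of Mat rather than a single Mat.
Mat cameraMatrix, distMatrix;
vector<Mat> rvecs, tvecs;
calibrateCamera(objectPoints, imagePoints,
                Size(image.size().width, image.size().height),
                cameraMatrix, distMatrix, rvecs, tvecs);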

from float array to mat , concatenate blocks of image

I have an 800x800 image which is broken down into 16 blocks of 200x200 (you can see the previous post here). These blocks are stored as vector<Mat> subImages;
I want to use float pointers on them, so I am doing:
float *pdata = (float*)( subImages[ idxSubImage ].data );
1) Now I want to be able to get the same images/blocks back, going from the float array to Mat data.
int Idx = 0;
pdata = (float*)( subImages[ Idx ].data );
namedWindow( "Display window", WINDOW_AUTOSIZE );
for( int i = 0; i < OriginalImgSize.height - 4; i += 200 )
{
    for( int j = 0; j < OriginalImgSize.width - 4; j += 200, Idx++ )
    {
        Mat mf( i, j, CV_32F, pdata + 200 );
        imshow( "Display window", mf );
        waitKey(0);
    }
}
So the problem is that I am receiving an
OpenCV Error: Assertion failed
in imshow.
2) How can I recombine all the blocks to obtain the original 800x800 image?
I tried something like:
int Idx = 0;
pdata = (float*)( subImages[ Idx ].data );
Mat big( 800, 800, CV_32F );
for( int i = 0; i < OriginalImgSize.height - 4; i += 200 )
{
    for( int j = 0; j < OriginalImgSize.width - 4; j += 200, Idx++ )
    {
        Mat mf( i, j, CV_32F, pdata + 200 );
        Rect roi( j, i, 200, 200 );
        mf.copyTo( big(roi) );
    }
}
imwrite( "testing", big );
This gives me:
OpenCV Error: Assertion failed (!fixedSize()) in release
at mf.copyTo( big(roi) );.
First, you need to know where your subimages go in the big image. To do this, you can save the rect of each subimage in a vector<Rect> smallImageRois;
Then you can use pointers (keep in mind that the subimages are not continuous), or simply use copyTo into the correct place.
Have a look:
#include <opencv2/opencv.hpp>
#include <vector>

using namespace std;
using namespace cv;

int main()
{
    Mat3b img = imread("path_to_image");
    resize(img, img, Size(800, 800));

    Mat grayImg;
    cvtColor(img, grayImg, COLOR_BGR2GRAY);
    grayImg.convertTo(grayImg, CV_32F);

    int N = 4;
    if (((grayImg.rows % N) != 0) || ((grayImg.cols % N) != 0))
    {
        // Error
        return -1;
    }

    Size graySize = grayImg.size();
    Size smallSize(grayImg.cols / N, grayImg.rows / N);

    vector<Mat> smallImages;
    vector<Rect> smallImageRois;
    for (int i = 0; i < graySize.height; i += smallSize.height)
    {
        for (int j = 0; j < graySize.width; j += smallSize.width)
        {
            Rect rect = Rect(j, i, smallSize.width, smallSize.height);
            smallImages.push_back(grayImg(rect));
            smallImageRois.push_back(rect);
        }
    }

    // Option 1. Using a pointer to the subimage data.
    Mat big1(800, 800, CV_32F);
    int big1step = big1.step1();
    float* pbig1 = big1.ptr<float>(0);
    for (int idx = 0; idx < smallImages.size(); ++idx)
    {
        float* pdata = (float*)smallImages[idx].data;
        int step = smallImages[idx].step1();
        Rect roi = smallImageRois[idx];
        for (int i = 0; i < smallSize.height; ++i)
        {
            for (int j = 0; j < smallSize.width; ++j)
            {
                pbig1[(roi.y + i) * big1step + (roi.x + j)] = pdata[i * step + j];
            }
        }
    }

    // Option 2. Using copyTo.
    Mat big2(800, 800, CV_32F);
    for (int idx = 0; idx < smallImages.size(); ++idx)
    {
        smallImages[idx].copyTo(big2(smallImageRois[idx]));
    }

    return 0;
}
For concatenating the sub-images into a single squared image, you can use the following function:
// Important: all patches should have exactly the same size
Mat concatPatches(vector<Mat> &patches) {
    assert(patches.size() > 0);

    // make it square
    const int patch_width  = patches[0].cols;
    const int patch_height = patches[0].rows;
    const int patch_stride = ceil(sqrt(patches.size()));

    Mat image = Mat::zeros(patch_stride * patch_height, patch_stride * patch_width, patches[0].type());

    for (size_t i = 0, iend = patches.size(); i < iend; i++) {
        Mat &patch = patches[i];
        const int offset_x = (i % patch_stride) * patch_width;
        const int offset_y = (i / patch_stride) * patch_height;

        // copy the patch to the output image
        patch.copyTo(image(Rect(offset_x, offset_y, patch_width, patch_height)));
    }

    return image;
}
It takes a vector of sub-images (or patches, as I refer to them) and concatenates them into a squared image. Example usage:
vector<Mat> patches;
vector<Scalar> colours = { Scalar(255, 0, 0), Scalar(0, 255, 0), Scalar(0, 0, 255) };

// fill the vector with circles of different colours
for (int i = 0; i < 16; i++) {
    Mat patch = Mat::zeros(100, 100, CV_32FC3);
    circle(patch, Point(50, 50), 40, colours[i % 3], -1);
    patches.push_back(patch);
}

Mat img = concatPatches(patches);
imshow("img", img);
waitKey();
This will produce a 4x4 grid image of the coloured circles.
Print the values of i and j before creating Mat mf and I believe you will soon find the error.
Hint 1: i and j will be 0 the first time, so the Mat constructor is given 0 rows and 0 columns.
Hint 2: Use copyTo() with a ROI, like:
cv::Rect roi(0,0,200,200);
src.copyTo(dst(roi));
Edit:
Hint 3: Try not to do such pointer fiddling; you will get into trouble, especially if you ignore the step (as you seem to do).
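Putting the hints together, a minimal sketch of the recombination loop (assuming the 16 blocks are stored row by row in subImages, as in the question):
// Fixed 200x200 ROIs and copyTo; no manual pointer arithmetic needed,
// so the non-continuous step of each subimage is handled for us.
Mat big( 800, 800, CV_32F );
int Idx = 0;
for( int i = 0; i < 800; i += 200 )
{
    for( int j = 0; j < 800; j += 200, Idx++ )
    {
        subImages[ Idx ].copyTo( big( Rect( j, i, 200, 200 ) ) );
    }
}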