Thanks for your help...
I wrote some code a while ago that successfully detects cars in a moving video of traffic. So let's consider the output of that code, and the eventual input of this code, to be 150x200 images of vehicles.
What I am trying to implement is an SVM that takes those vehicle images and classifies them as sedans or SUVs. (Assume there are only sedans and SUVs.)
The following code was implemented by closely following the information on this link: https://docs.opencv.org/3.0-beta/doc/tutorials/ml/introduction_to_svm/introduction_to_svm.html
and this link: using OpenCV and SVM with images
Be aware that the syntax in those links is slightly outdated for the SVM implementation in the newer OpenCV version I have.
//Used to read multiple files from folder
stringstream ss;
string name = "Vehicle_";
string type = ".jpg";
int num_train_images = 29; //29 images will be used to train the SVM
int image_area = 150 * 200;
Mat training_mat(num_train_images, image_area, CV_32FC1); // Creates a 29-row by 30000-column matrix; each 150x200 image occupies one row
//Converts 29 2D images into a really long row per image
for (int file_count = 1; file_count < (num_train_images + 1); file_count++)
{
ss << name << file_count << type; //'Vehicle_1.jpg' ... 'Vehicle_2.jpg' ... etc ...
string filename = ss.str();
ss.str("");
Mat training_img = imread(filename, 0); //Reads each training image as grayscale, so the single-channel uchar indexing below is valid
int ii = 0; //Index into the flattened 30000-element row
for (int i = 0; i < training_img.rows; i++)
{
for (int j = 0; j < training_img.cols; j++)
{
training_mat.at<float>(file_count - 1, ii) = training_img.at<uchar>(i, j); //Fills the training_mat with the read image
ii++;
}
}
}
//Labels provide the supervised-learning part of the SVM: 1 means the training image is an SUV, -1 means it is a sedan.
float labels[29] = { 1, 1, -1, -1, 1, -1, -1, -1, -1, -1, 1, -1, -1, -1, -1, -1, -1, 1, 1, 1, -1, -1, -1, -1, 1, 1, 1, -1, 1 };
//Place the labels into a 29-row by 1-column matrix.
Mat labels_mat(num_train_images, 1, CV_32FC1, labels);
cout << "Beginning Training..." << endl;
//Set SVM Parameters (not sure about these values)
Ptr<SVM> svm = SVM::create();
svm->setType(SVM::C_SVC);
svm->setKernel(SVM::LINEAR);
svm->setTermCriteria(TermCriteria(TermCriteria::MAX_ITER, 100, 1e-6));
svm->train(training_mat, ROW_SAMPLE, labels_mat);
cout << "End Training" << endl;
waitKey(0);
Mat test_image(1, image_area, CV_32FC1); //Creates a 1 x 30000 matrix to house the test image.
Mat SUV_image = imread("SUV_1.jpg", 0); //Reads the test image as grayscale
int jj = 0;
for (int i = 0; i < SUV_image.rows; i++)
{
for (int j = 0; j < SUV_image.cols; j++)
{
test_image.at<float>(0, jj) = SUV_image.at<uchar>(i, j); //Fills the test_image matrix
jj++;
}
}
//Should return 1 if it's an SUV, or -1 if it's a sedan
float response = svm->predict(test_image);
cout << "Prediction: " << response << endl;
waitKey(0);
So what I do here is take the training images and transform each 150 by 200 image into a single 1 x 30,000 row of training_mat.
labels_mat is the supervised-learning part: it tells the SVM whether each training image is an SUV or a sedan.
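As an aside, a more compact (hedged) equivalent of the nested flattening loops above, assuming the images are read as single-channel grayscale, would be to reshape each image into one row and convert it to float in a single step:
Mat row;
training_img.reshape(1, 1).convertTo(row, CV_32F); //1 x 30000 float row
row.copyTo(training_mat.row(file_count - 1));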
The code builds fine, but unfortunately right when it gets to svm->train it fails, and I get an abort error that says: "OpenCV Error: Bad argument (in the case of classification problem the responses must be categorical; either specify varType when creating TrainData, or pass integer responses) in cv::ml::SVMImpl::train"
I'm not quite sure what this means; it could be something wrong with my parameters. A friend suggested I may need to extract features from the images before feeding them into the SVM, but I'm not sure whether that's necessary.
Thanks
This issue was solved by changing labels_mat to the integer type CV_32S. Unfortunately, a new issue remains: svm->predict(test_image) returns a large value that is neither -1 nor 1.
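For reference, a minimal sketch of the label fix described above: the labels are stored as integers in a CV_32SC1 matrix (same 29 labels as before).
int labels[29] = { 1, 1, -1, -1, 1, -1, -1, -1, -1, -1, 1, -1, -1, -1, -1, -1, -1, 1, 1, 1, -1, -1, -1, -1, 1, 1, 1, -1, 1 };
Mat labels_mat(num_train_images, 1, CV_32SC1, labels); //CV_32SC1 (integer), not CV_32FC1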
Related
I am playing with OpenCV and SVM to make a classifier to predict facial expression. I have no problem classifying the test dataset, but when I try to predict a new image, I get this:
OpenCV Error: Assertion failed (samples.cols == var_count && samples.type() == CV_32F) in cv::ml::SVMImpl::predict
The error is pretty clear: I have a different number of columns, though the type is the same.
I do not know how to achieve that, because I have a matrix of dimensions 1 x number_of_features, but number_of_features is not the same as in the trained and tested samples. How can I extract the same number of features from another image? Am I missing something?
To train classifier I did:
Detect face and save ROI;
Sift to extract features;
kmeans to cluster them;
bag of words to get the same number of features for each image;
pca to reduce;
train on the training dataset;
predict on the test dataset;
On the new image I did the same thing.
I tried to resize the new image to the same size, but nothing changed; same error (and a different number of columns, i.e. features). The vectors are of the same type (CV_32F).
[EDIT 1] Let's try to be more specific.
After successfully training my classifier, I save the SVM model in this way
svmClassifier->save(baseDatabasePath);
Then I load it when I need to do real-time prediction, in this way
cv::Ptr<cv::ml::SVM> svmClassifier;
svmClassifier = cv::ml::StatModel::load<ml::SVM>(path);
Then loop,
while (true)
{
getOneImage();
cv::Mat feature = extractFeaturesFromSingleImage();
float labelPredicted = svmClassifier->predict(feature);
cout << "Label predicted is: " << labelPredicted << endl;
}
But predict returns the error. The feature dimension is 1x66, for example. As you can see below, the model expects 140 features:
<?xml version="1.0"?>
<opencv_storage>
<opencv_ml_svm>
<format>3</format>
<svmType>C_SVC</svmType>
<kernel>
<type>RBF</type>
<gamma>5.0625000000000009e-01</gamma></kernel>
<C>1.2500000000000000e+01</C>
<term_criteria><epsilon>1.1920928955078125e-07</epsilon>
<iterations>1000</iterations></term_criteria>
<var_count>140</var_count>
<class_count>7</class_count>
<class_labels type_id="opencv-matrix">
<rows>7</rows>
<cols>1</cols>
<dt>i</dt>
<data>
0 1 2 3 4 5 6</data></class_labels>
<sv_total>172</sv_total>
I do not know how to get 140 features when SIFT, FAST, or SURF only give me around 60. What am I missing?
EDIT 2: To be more formal: how can I bring my real-time sample to the same dimension as the training and test datasets?
EDIT 3:
Extract features with SIFT and push them onto a vector of Mat.
std::vector<cv::Mat> featuresVector;
for (int i = 0; i < numberImages; ++i)
{
cv::Mat face = cv::imread(facePath, CV_LOAD_IMAGE_GRAYSCALE);
cv::Mat featuresExtracted = runExtractFeature(face, featuresExtractionAlgorithm);
featuresVector.push_back(featuresExtracted);
}
Get total features extracted from all images.
int numberFeatures = 0;
for (int i = 0; i < featuresVector.size(); ++i)
{
numberFeatures += featuresVector[i].rows;
}
Prepare a mat to cluster features (I tried to follow this example)
cv::Mat featuresData = cv::Mat::zeros(numberFeatures, featuresVector[0].cols, CV_32FC1);
int currentIndex = 0;
for (int i = 0; i < featuresVector.size(); ++i)
{
featuresVector[i].copyTo(featuresData.rowRange(currentIndex, currentIndex + featuresVector[i].rows));
currentIndex += featuresVector[i].rows;
}
Perform clustering (I do not know how well these parameters suit my case, but I think they are OK for now).
cv::Mat labels;
cv::Mat centers;
int binSize = 1000;
kmeans(featuresData, binSize, labels, cv::TermCriteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 100, 1.0), 3, KMEANS_PP_CENTERS, centers);
Prepare a Mat to build the bag-of-words histograms.
currentIndex = 0; // reset before this loop; otherwise labels.at<int>(currentIndex + j) below would read past the end of labels
cv::Mat featuresDataHist = cv::Mat::zeros(numberImages, binSize, CV_32FC1);
for (int i = 0; i < numberImages; ++i)
{
cv::Mat feature = cv::Mat::zeros(1, binSize, CV_32FC1);
int numberImageFeatures = featuresVector[i].rows;
for (int j = 0; j < numberImageFeatures; ++j)
{
int bin = labels.at<int>(currentIndex + j);
feature.at<float>(0, bin) += 1;
}
cv::normalize(feature, feature);
feature.copyTo(featuresDataHist.row(i));
currentIndex += featuresVector[i].rows;
}
PCA to try to reduce dimension.
cv::PCA pca(featuresDataHist, cv::Mat(), CV_PCA_DATA_AS_ROW, 50/*0.90*/);
cv::Mat feature;
for (int i = 0; i < numberImages; ++i)
{
feature = pca.project(featuresDataHist.row(i));
}
I'm learning about SVMs, so I'm making a sample program that trains an SVM to detect whether a symbol is present in an image. All the images are black and white (the symbols are black on a white background). I have 12 training images: 6 positives (with the symbol) and 6 negatives (without it). I use Hu moments to get a descriptor for every image and then construct the training matrix from those descriptors. I also have a labels matrix, which contains a label for each image: 1 if it is positive and 0 if it is negative. But I am getting an error (something like a segmentation fault) at the line where I train the SVM. Here is my code:
using namespace cv;
using namespace std;
int main(int argc, char* argv[])
{
//arrays where the labels and the features will be stored
float labels[12] ;
float trainingData[12][7] ;
Moments moment;
double hu[7];
//===============extracting the descriptors for each positive image=========
for ( int i = 0; i <= 5; i++){
//the images are called t0.png ... t5.png and are in the folder train
std::string path("train/t");
path += std::to_string(i);
path += ".png";
Mat input = imread(path, 0); //read the images
bitwise_not(input, input); //invert black and white
Mat BinaryInput;
threshold(input, BinaryInput, 100, 255, cv::THRESH_BINARY); //apply threshold
moment = moments(BinaryInput, true); //calculate the moments of the current image
HuMoments(moment, hu); //calculate the hu moments (this will be our descriptor)
//setting the row i of the training data as the hu moments
for (int j = 0; j <= 6; j++){
trainingData[i][j] = (float)hu[j];
}
labels[i] = 1; //label=1 because is a positive image
}
//===============extracting the descriptors for each negative image=========
for (int i = 0; i <= 5; i++){
//the images are called tn0.png ... tn5.png and are in the folder train
std::string path("train/tn");
path += std::to_string(i);
path += ".png";
Mat input = imread(path, 0); //read the images
bitwise_not(input, input); //invert black and white
Mat BinaryInput;
threshold(input, BinaryInput, 100, 255, cv::THRESH_BINARY); //apply threshold
moment = moments(BinaryInput, true); //calculate the moments of the current image
HuMoments(moment, hu); //calculate the hu moments (this will be our descriptor)
for (int j = 0; j <= 6; j++){
trainingData[i + 6][j] = (float)hu[j];
}
labels[i + 6] = 0; //label=0 because is a negative image
}
//===========================training the SVM================
//we convert the labels and trainingData matrices to Mat objects
Mat labelsMat(12, 1, CV_32FC1, labels);
Mat trainingDataMat(12, 7, CV_32FC1, trainingData);
//create the SVM
Ptr<ml::SVM> svm = ml::SVM::create();
//set the parameters of the SVM
svm->setType(ml::SVM::C_SVC);
svm->setKernel(ml::SVM::LINEAR);
CvTermCriteria criteria = cvTermCriteria(CV_TERMCRIT_ITER, 100, 1e-6);
svm->setTermCriteria(criteria);
//Train the SVM !!!!!HERE OCCURS THE ERROR!!!!!!
svm->train(trainingDataMat, ml::ROW_SAMPLE, labelsMat);
//Testing the SVM...
Mat test = imread("train/t1.png", 0); //this should be a positive test
bitwise_not(test, test);
Mat testBin;
threshold(test, testBin, 100, 255, cv::THRESH_BINARY);
Moments momentP = moments(testBin, true); //calculate the moments of the test image
double huP[7];
HuMoments(momentP, huP);
Mat testMat(1, 7, CV_32FC1, huP); //setting the hu moments to the test matrix
double resp = svm->predict(testMat); //prediction of the SVM
printf("%f", resp); //Response
getchar();
}
I know that the program runs fine until that line because I printed labelsMat and trainingDataMat and the values inside them are OK. Even in the console I can see that the program runs fine until that exact line executes; the console then shows this message:
OpenCV error: Bad argument (in the case of classification problem the responses must be categorical; either specify varType when creating TrainData, or pass integer responses)
I don't really know what this means. Any idea what could be causing the problem? If you need any other details, please tell me.
EDIT
For future readers:
the problem was in the way I defined the labels array as an array of float and labelsMat as a Mat of CV_32FC1. The array that contains the labels needs to hold integers, so I changed:
float labels[12];
to
int labels[12];
and also changed
Mat labelsMat(12, 1, CV_32FC1, labels);
to
Mat labelsMat(12, 1, CV_32SC1, labels);
and that solved the error. Thank you
Try changing:
Mat labelsMat(12, 1, CV_32FC1, labels);
to
Mat labelsMat(12, 1, CV_32SC1, labels);
From: http://answers.opencv.org/question/63715/svm-java-opencv-3/
If that doesn't work, hopefully one of these posts will help you:
Opencv 3.0 SVM train classification issues
OpenCV SVM Training Data
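The error message also mentions a second route: specifying varType when creating a TrainData object. As a hedged sketch only (the integer-label fix above is the simpler option), assuming the OpenCV 3.x ml API and the trainingDataMat/labelsMat from the question, that could look like this:
//Mark every feature column as ordered and the response column as categorical
Mat varType(1, trainingDataMat.cols + 1, CV_8U, Scalar(ml::VAR_ORDERED));
varType.at<uchar>(0, trainingDataMat.cols) = ml::VAR_CATEGORICAL;
Ptr<ml::TrainData> td = ml::TrainData::create(trainingDataMat, ml::ROW_SAMPLE, labelsMat, noArray(), noArray(), noArray(), varType);
svm->train(td);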
I am working on fungus detection using SVM. I have no clue why I am getting this error during training of classifier.
error: (-209) Response array must contain as many
elements as the total number of samples in function cvPreprocessCategoricalResponses
Mat classes;//(PosSamples+NagSamples, 1, CV_32FC1);
Mat trainingData;//(PosSample+NagSample, imgWidth*imgHeight,CV_32FC1 );
cv::Mat trainingImages;
vector<int> trainingLabels;
for (int pimageNum = 0; pimageNum < 359; pimageNum++)
{
// reading Positive Samples
trainingImages.push_back(posImage);
trainingLabels.push_back(1);
}
for (int nimageNum = 0; nimageNum < 171; nimageNum++)
{
// reading Negative Samples
trainingImages.push_back(nagImage);
trainingLabels.push_back(0);
}
Mat(trainingImages).copyTo(trainingData);
trainingData.convertTo(trainingData, CV_32FC1);
Mat(trainingLabels).copyTo(classes);
FileStorage fs0("D:\\classifier.yml", FileStorage::WRITE);
fs0 << "TrainingData" << trainingData;
fs0 << "classes" << classes;
fs0.release();
CvSVMParams SVM_params;
SVM_params.svm_type = CvSVM::C_SVC;
SVM_params.kernel_type = CvSVM::LINEAR;
SVM_params.degree = 0;
SVM_params.gamma = 1;
SVM_params.coef0 = 0;
SVM_params.C = 1;
SVM_params.nu = 0;
SVM_params.p = 0;
SVM_params.term_crit = cvTermCriteria(CV_TERMCRIT_ITER, 1000, 0.01);
//Train SVM
CvSVM svmClassifier(trainingData, classes, Mat(), Mat(), SVM_params);
///////////////// image size is 50x50 /////////////////
In Classifier.yml file.
TrainingData: !!opencv-matrix
rows: 26500
cols: 50
classes: !!opencv-matrix
rows: 530
cols: 1
Each row (not each image) is a sample. So you have 26500 rows of samples, but only 530 rows in classes. That is due to the fact that your images are 50 pixels high: 50 * 530 = 26500.
Usually you compute some sort of feature on your images to use in the SVM. If you want to use your original images directly, you should do one of the following:
linearize / resize your images, so that each image is 1x2500. You'll obtain 530 training samples and 530 labels (see the sketch below).
replicate your labels 50 times for each image. You'll obtain 26500 training samples and 26500 labels.
It's up to you to decide if your whole image is a feature (case 1), or each row of your image is a feature (case 2).
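A minimal sketch of the first option (hedged, since the actual image-reading code is omitted in the question): flatten each 50x50 image into a single row before pushing it, so each row of trainingData corresponds to exactly one image and one label.
// Inside the positive-sample loop: one flattened 1x2500 float row per 50x50 image
Mat sample = posImage.reshape(1, 1);   // 1 row, 50*50 = 2500 columns
sample.convertTo(sample, CV_32FC1);    // the SVM expects float samples
trainingImages.push_back(sample);
trainingLabels.push_back(1);           // one label per image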
The complete error:
OpenCV Error: Assertion failed (nimages > 0 && nimages ==
(int)imagePoints1.total() && (!imgPtMat2 || nimages ==
(int)imagePoints2.total())) in collectCalibrationData, file C:\OpenCV
\sources\modules\calib3d\src\calibration.cpp, line 3164
The code:
cv::VideoCapture kalibrowanyPlik; //the video
cv::Mat frame;
cv::Mat testTwo; //undistorted
cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) << 2673.579, 0, 1310.689, 0, 2673.579, 914.941, 0, 0, 1);
cv::Mat distortMat = (cv::Mat_<double>(1, 4) << -0.208143, 0.235290, 0.001005, 0.001339);
cv::Mat intrinsicMatrix = (cv::Mat_<double>(3, 3) << 1, 0, 0, 0, 1, 0, 0, 0, 1);
cv::Mat distortCoeffs = cv::Mat::zeros(8, 1, CV_64F);
//there are two sets for testing purposes. Values for the first two came from GML camera calibration app.
std::vector<cv::Mat> rvecs;
std::vector<cv::Mat> tvecs;
std::vector<std::vector<cv::Point2f> > imagePoints;
std::vector<std::vector<cv::Point3f> > objectPoints;
kalibrowanyPlik.open("625.avi");
//cv::namedWindow("Distorted", CV_WINDOW_AUTOSIZE); //gotta see things
//cv::namedWindow("Undistorted", CV_WINDOW_AUTOSIZE);
int maxFrames = kalibrowanyPlik.get(CV_CAP_PROP_FRAME_COUNT);
int success = 0; //so we can do the calibration only after we've got a bunch
for(int i=0; i<maxFrames-1; i++) {
kalibrowanyPlik.read(frame);
std::vector<cv::Point2f> corners; //creating these here so they're effectively reset each time
std::vector<cv::Point3f> objectCorners;
int sizeX = kalibrowanyPlik.get(CV_CAP_PROP_FRAME_WIDTH); //imageSize
int sizeY = kalibrowanyPlik.get(CV_CAP_PROP_FRAME_HEIGHT);
cv::cvtColor(frame, frame, CV_BGR2GRAY); //must be gray
cv::Size patternsize(9,6); //interior number of corners
bool patternfound = cv::findChessboardCorners(frame, patternsize, corners, cv::CALIB_CB_ADAPTIVE_THRESH + cv::CALIB_CB_NORMALIZE_IMAGE + cv::CALIB_CB_FAST_CHECK); //finding them corners
if(patternfound == false) { //gotta know
qDebug() << "failure";
}
if(patternfound) {
qDebug() << "success!";
std::vector<cv::Point3f> objectCorners; //low priority issue - if I don't do this here, it becomes empty. Not sure why.
for(int y=0; y<6; ++y) {
for(int x=0; x<9; ++x) {
objectCorners.push_back(cv::Point3f(x*28,y*28,0)); //filling the array
}
}
cv::cornerSubPix(frame, corners, cv::Size(11, 11), cv::Size(-1, -1),
cv::TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));
cv::cvtColor(frame, frame, CV_GRAY2BGR); //I don't want gray lines
imagePoints.push_back(corners); //filling array of arrays with pixel coord array
objectPoints.push_back(objectCorners); //filling array of arrays with real life coord array, or rather copies of the same thing over and over
cout << corners << endl << objectCorners;
cout << endl << objectCorners.size() << "___" << objectPoints.size() << "___" << corners.size() << "___" << imagePoints.size() << endl;
cv::drawChessboardCorners(frame, patternsize, cv::Mat(corners), patternfound); //drawing.
if(success > 5) {
double rms = cv::calibrateCamera(objectPoints, corners, cv::Size(sizeX, sizeY), intrinsicMatrix, distortCoeffs, rvecs, tvecs, cv::CALIB_USE_INTRINSIC_GUESS);
//error - caused by passing CORNERS instead of IMAGEPOINTS. Also, imageSize is 640x480, and I've set the central point to 1310... etc
cout << endl << intrinsicMatrix << endl << distortCoeffs << endl;
cout << "\nrms - " << rms << endl;
}
success = success + 1;
//cv::imshow("Distorted", frame);
//cv::imshow("Undistorted", testTwo);
}
}
I've done a little bit of reading (this was an especially informative read), including over a dozen threads made here on StackOverflow, and all I found is that this error is produced either by uneven imagePoints and objectPoints or by them being partially null, empty, or zero (plus links to tutorials that don't help). None of that is the case - the output from the .size() check is:
54___7___54___7
for objectCorners (real-life coords), objectPoints (number of arrays inserted), and the same for corners (pixel coords) and imagePoints. They're not empty either; the output is:
(...)
277.6792, 208.92903;
241.83429, 208.93048;
206.99866, 208.84637;
(...)
84, 56, 0;
112, 56, 0;
140, 56, 0;
168, 56, 0;
(...)
A sample frame:
I know it's a mess, but so far I'm trying to complete the code rather than get an accurate reading.
Each one has exactly 54 lines of that. Does anyone have any ideas on what is causing the error? I'm using OpenCV 2.4.8 and Qt Creator 5.4 on Windows 7.
First of all, corners and imagePoints have to be switched, as you have already noticed.
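A hedged sketch of what the corrected call might look like, using the variables from the question (the accumulated imagePoints rather than the last frame's corners, and the actual capture size):
double rms = cv::calibrateCamera(objectPoints, imagePoints, // imagePoints, not corners
    cv::Size(sizeX, sizeY), // the real capture size
    intrinsicMatrix, distortCoeffs, rvecs, tvecs,
    cv::CALIB_USE_INTRINSIC_GUESS); // with this flag, intrinsicMatrix should hold a plausible initial guess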
In most cases (if not all), 25 or fewer views are enough to get a good result. A focal length around 633 is not weird; it means the focal length is 633 * sensor size. The CCD or CMOS size must be somewhere in the instructions that came with your camera. Find it, multiply by 633, and the result is your focal length.
One suggestion to reduce the number of images used: use images taken from different viewpoints. 10 images from 10 different viewpoints bring a much better result than 100 images from the same (or nearby) viewpoints. That is one of the reasons why video is not a good input. I guess that with your code, all the images passed to calibrateCamera may be from nearby viewpoints; if so, the calibration accuracy degrades.
I am extremely new to computer vision and the OpenCV library.
I've done some googling around to try to find how to make a new image from a vector of Point2fs and haven't found any examples that work. I've seen vector<Point> to Mat but when I use those examples I always get errors.
I'm working from this example and any help would be appreciated.
Code: I pass in occludedSquare.
resize(occludedSquare, occludedSquare, Size(0, 0), 0.5, 0.5);
Mat occludedSquare8u;
cvtColor(occludedSquare, occludedSquare8u, CV_BGR2GRAY);
//convert to a binary image: pixel values above the threshold turn white, otherwise black
Mat thresh;
threshold(occludedSquare8u, thresh, 170.0, 255.0, THRESH_BINARY);
GaussianBlur(thresh, thresh, Size(7, 7), 2.0, 2.0);
//Do edge detection
Mat edges;
Canny(thresh, edges, 45.0, 160.0, 3);
//Do straight line detection
vector<Vec2f> lines;
HoughLines( edges, lines, 1.5, CV_PI/180, 50, 0, 0 );
//imshow("thresholded", edges);
cout << "Detected " << lines.size() << " lines." << endl;
// compute the intersection from the lines detected...
vector<Point2f> intersections;
for( size_t i = 0; i < lines.size(); i++ )
{
for(size_t j = 0; j < lines.size(); j++)
{
Vec2f line1 = lines[i];
Vec2f line2 = lines[j];
if(acceptLinePair(line1, line2, CV_PI / 32))
{
Point2f intersection = computeIntersect(line1, line2);
intersections.push_back(intersection);
}
}
}
if(intersections.size() > 0)
{
vector<Point2f>::iterator i;
for(i = intersections.begin(); i != intersections.end(); ++i)
{
cout << "Intersection is " << i->x << ", " << i->y << endl;
circle(occludedSquare8u, *i, 1, Scalar(0, 255, 0), 3);
}
}
//Make new matrix bounded by the intersections
...
imshow("localized", localized);
Should be as simple as
std::vector<cv::Point2f> points;
cv::Mat image(points);
//or
cv::Mat image = cv::Mat(points);
The probable confusion is that a cv::Mat is an image (width * height * number of channels), but it is also a mathematical matrix (rows * columns * other dimensions).
If you make a Mat from a vector of 'n' 2D points it will create a 2-column by 'n'-row matrix. You are passing this to a function which expects an image.
If you just have a scattered set of 2D points and want to display them as an image you need to make an empty cv::Mat of large enough size (whatever your maximum x,y point is) and then draw the dots using the drawing functions http://docs.opencv.org/doc/tutorials/core/basic_geometric_drawing/basic_geometric_drawing.html
If you just want to set the pixel values at those point coordinates search SO for opencv setting pixel values, there are lots of answers
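A hedged illustration of that drawing approach, assuming the intersections vector from the question and a canvas sized to the largest coordinate:
// Sketch: draw scattered 2D points onto an empty single-channel image
float maxX = 0, maxY = 0;
for (const cv::Point2f& p : intersections)
{
    maxX = std::max(maxX, p.x);
    maxY = std::max(maxY, p.y);
}
cv::Mat canvas = cv::Mat::zeros((int)maxY + 1, (int)maxX + 1, CV_8UC1);
for (const cv::Point2f& p : intersections)
    cv::circle(canvas, p, 2, cv::Scalar(255), -1); // small filled white dot at each point
cv::imshow("points", canvas);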
Martin's answer is right, but IMO it depends on how the image cv::Mat is used further down the line. I had some issues and Haofeng's comment helped me fix them. Here is my attempt to explain it in detail:
Let's say the code looks like this:
std::vector<cv::Point2f> points = {cv::Point2f(1.0, 2.0), cv::Point2f(3.0, 4.0), cv::Point2f(5.0, 6.0), cv::Point2f(7.0, 8.0), cv::Point2f(9.0, 10.0)};
cv::Mat image(points); // or cv::Mat image = cv::Mat(points)
std::cout << image << std::endl;
This will print:
[1, 2;
3, 4;
5, 6;
7, 8;
9, 10]
So, at first glance, this looks perfectly correct and as expected: for the five 2D points in the given vector, we got a cv::Mat with 5 rows and 2 columns, right? However, that's not the case here!
If further properties are inspected:
std::cout << image.rows << std::endl; // 5
std::cout << image.cols << std::endl; // 1
std::cout << image.channels() << std::endl; // 2
it can be seen that the above cv::Mat has 5 rows, 1 column, and 2 channels. Depending on the pipeline, we may not want that. Most of the time, we want a matrix with 5 rows, 2 columns, and just 1 channel.
To fix this problem, all we need to do is reshape the matrix:
cv::Mat image(points).reshape(1);
In the above code, 1 is for 1 channel. Check out OpenCV reshape() documentation for further information.
If this matrix is printed out, it will look the same as the previous one. However, that's not the whole picture (metaphorically!). The new matrix has 5 rows, 2 columns, and 1 channel.
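A quick check (hedged, using the same points vector as above) makes the difference visible:
cv::Mat reshaped = cv::Mat(points).reshape(1);
std::cout << reshaped.rows << std::endl;       // 5
std::cout << reshaped.cols << std::endl;       // 2
std::cout << reshaped.channels() << std::endl; // 1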
I wish OpenCV had different ways of printing out these two similar yet different matrices (from the OpenCV data structure point of view)!