OpenCV 4.5.5 how to use findNearest - C++

I am using OpenCV 4.5.5 and trying to perform a nearest-neighbor search using the 'findNearest' method provided by ml::KNearest. The classifier was previously trained and saved to disk, and I load it before attempting the search.
I checked that it is properly saved and loaded, and that the types are always CV_32F, so I don't think the issue is there. The dimensions of the test samples are the same as the ones used in training. At runtime I get the following error:
kdtree.cpp:279: error: (-215:Assertion failed) vecmat.isContinuous() && vecmat.type() == CV_32F && vecmat.total() == (size_t)points.cols in function 'findNearest'
I get the same error if I try the method 'predict'.
Also, why does the documentation say that 'samples' (the input) should be a matrix of dimensions <number_of_samples> * k? I would expect 'k' times the number of samples for the results, but not for the input.
Here is the relevant code:
void EigenClassifier::train(const cv::Mat& samples, const cv::Mat& labels)
{
    auto prototypes = prepare_samples(samples);
    std::cout << "EigenClassifier: Building KD-Tree..." << std::endl;
    m_knearest->train(prototypes, cv::ml::ROW_SAMPLE, labels);
    std::cout << "Is trained: " << m_knearest->isTrained() << std::endl;
}

cv::Mat EigenClassifier::find_nearest(const cv::Mat& test_samples, int k, cv::Mat& neighbor_responses, cv::Mat& distances)
{
    cv::Mat result;
    auto reduced_samples = prepare_samples(test_samples);
    std::cout << "Reduced samples size: " << reduced_samples.size << " type: " << reduced_samples.type() << std::endl;
    m_knearest->findNearest(reduced_samples, k, result, neighbor_responses, distances);
    //m_knearest->predict(reduced_samples.row(0));
    return result;
}
Do I need to initialize 'result', 'neighbor_responses' and 'distances' to the expected output sizes beforehand?
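The output Mats should not need pre-sizing; OpenCV (re)allocates output arrays inside the call. The assertion itself is more telling: the KD-tree backend requires each query to be a continuous CV_32F matrix whose total element count equals the number of columns in the training data. The documentation's <number_of_samples> * k is most plausibly rows by feature dimensionality, i.e. that k is the width of one sample rather than the neighbor count, which would answer the question above. Below is a minimal sketch of a query that satisfies all three checks, assuming prepare_samples returns CV_32F rows of the same width as the training prototypes:

    // Sketch: satisfy each term of the failed assertion for a single query row.
    cv::Mat query = reduced_samples.row(0).clone(); // clone() guarantees isContinuous()
    CV_Assert(query.type() == CV_32F);              // the assertion's type check
    // query.total() must equal the number of columns used at training time.
    cv::Mat result, responses, dists;
    m_knearest->findNearest(query, k, result, responses, dists);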


How to get not normalized MNIST dataset PyTorch C++

I'm trying to follow this C++ PyTorch example, but I need to load the MNIST dataset with its standard values, between 0 and 255. I removed the application of the Normalize() method, but I keep getting values between 0 and 1. What am I doing wrong?
My code is:
#include <torch/torch.h>
#include <iostream>

using namespace std;

int main(int argc, char* argv[]) {
    const int64_t batch_size = 1;
    torch::Device device(torch::kCPU); // device was undefined in the snippet; CPU assumed
    // MNIST Dataset
    auto train_dataset = torch::data::datasets::MNIST("./mnist")
        .map(torch::data::transforms::Stack<>());
    // Number of samples in the training set
    auto num_train_samples = train_dataset.size().value();
    cout << "Number of training samples: " << num_train_samples << endl;
    // Data loaders
    auto train_loader = torch::data::make_data_loader<torch::data::samplers::RandomSampler>(
        std::move(train_dataset), batch_size);
    for (auto& batch : *train_loader) {
        auto data = batch.data.view({batch_size, -1}).to(device);
        auto record = data[0].clone();
        cout << "Max value: " << record.max() << endl;
        cout << "Min value: " << record.min() << endl;
        break;
    }
}
The MNIST dataset I downloaded is the original one, from the site.
Thank you in advance for your help.
I have looked at the source file, and it appears that the PyTorch MNIST dataset class performs the division by 255 itself, so it only ever returns tensors within the [0, 1] range. You will have to multiply the batches by 255 yourself.
The Normalize() transform was not the culprit; it is used to change the mean and variance of your data.
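As a minimal sketch of that rescaling, reusing the loop from the question (train_loader and batch_size as defined there):

    for (auto& batch : *train_loader) {
        // Multiply by 255 to undo the dataset's built-in [0,1] scaling.
        auto data = batch.data.view({batch_size, -1}) * 255.0;
        cout << "Max value: " << data[0].max() << endl; // ~255 for a typical digit image
        break;
    }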

OpenCV - FAST+BRIEF: How to draw keypoints with DrawMatchesFlags::DRAW_RICH_KEYPOINTS?

I have to implement a feature detector using FAST+BRIEF (which is the manual implementation of ORB if I understand correctly).
So, this is the code I have so far:
printf("Calculating FAST+BRIEF features...\n");
Ptr<FastFeatureDetector> FASTdetector = FastFeatureDetector::create();
Ptr<BriefDescriptorExtractor> BRIEFdescriptor = BriefDescriptorExtractor::create();
std::vector<cv::KeyPoint> FASTkeypoints_1, FASTkeypoints_2, FASTkeypoints_3;
Mat BRIEFdescriptors_1, BRIEFdescriptors_2, BRIEFdescriptors_3;
FASTdetector->detect(left08, FASTkeypoints_1);
FASTdetector->detect(right08, FASTkeypoints_2);
FASTdetector->detect(left10, FASTkeypoints_3);
BRIEFdescriptor->compute(left08, FASTkeypoints_1, BRIEFdescriptors_1);
BRIEFdescriptor->compute(right08, FASTkeypoints_2, BRIEFdescriptors_2);
BRIEFdescriptor->compute(left10, FASTkeypoints_3, BRIEFdescriptors_3);
Mat FAST_left08, FAST_right08, FAST_left10;
drawKeypoints(left08, FASTkeypoints_1, FAST_left08, FASTBRIEFfeatcol_YELLOW, DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
imwrite("../Results/FASTBRIEF_left08.png", FAST_left08);
drawKeypoints(right08, FASTkeypoints_2, FAST_right08, FASTBRIEFfeatcol_YELLOW, DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
imwrite("../Results/FASTBRIEF_right08.png", FAST_right08);
drawKeypoints(left10, FASTkeypoints_3, FAST_left10, FASTBRIEFfeatcol_YELLOW, DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
imwrite("../Results/FASTBRIEF_left10.png", FAST_left10);
printf("FAST+BRIEF done. \n");
The code so far works perfectly fine; however, I don't get rich keypoints, only standard ones. If I understand correctly, this is because I need to somehow get the descriptor information to the keypoints first, right?
I have done the same implementation with SIFT, SURF and ORB before, but there I used the detectAndCompute function directly, which gives me keypoints that I can draw with the DrawMatchesFlags::DRAW_RICH_KEYPOINTS flag.
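For reference, this is roughly what that one-call variant looked like with ORB (a sketch from memory; left08 as loaded in the snippet above):

    Ptr<ORB> orb = ORB::create();
    std::vector<cv::KeyPoint> ORBkeypoints;
    Mat ORBdescriptors;
    // ORB fills in size and angle per keypoint, so DRAW_RICH_KEYPOINTS shows varying circles.
    orb->detectAndCompute(left08, noArray(), ORBkeypoints, ORBdescriptors);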
I have to implement a feature detector using FAST+BRIEF (which is the manual implementation of ORB if I understand correctly).
Yes, that is correct.
If I understand correctly, this is because I need to somehow get the descriptor information to the keypoints first, right?
No. Keypoint detection and description are separate steps. You can use SIFT, FAST, the Harris detector, SURF, etc. just to detect keypoints first. Then there are different methods to describe the detected keypoints (e.g. a 128-dimensional float vector for SIFT) and match them afterwards.
A keypoint in OpenCV is described by several attributes: angle, size, octave and so on (see https://docs.opencv.org/3.4.2/d2/d29/classcv_1_1KeyPoint.html).
For SIFT, every KeyPoint attribute is filled with a meaningful value that DRAW_RICH_KEYPOINTS can later visualize. For FAST, only default values are assigned to these attributes, so the keypoints can still be drawn with that flag, but the size, octave and angle do not vary. Thus, every drawn KeyPoint looks the same.
Here is a small code sample as proof (I only use the ->detect functions):
#include <iostream>
#include <algorithm>
#include <cstdlib>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/xfeatures2d/nonfree.hpp>

int main(int argc, char** argv)
{
    // Load image
    cv::Mat img = cv::imread("MT189.jpg", cv::IMREAD_GRAYSCALE);
    if (!img.data) {
        std::cout << "Error reading image" << std::endl;
        return EXIT_FAILURE;
    }
    cv::Mat output;

    // Detect FAST keypoints
    std::vector<cv::KeyPoint> keypoints_fast, keypoints_sift;
    cv::Ptr<cv::FastFeatureDetector> fast = cv::FastFeatureDetector::create();
    fast->detect(img, keypoints_fast);
    for (size_t i = 0; i < std::min<size_t>(100, keypoints_fast.size()); ++i) {
        std::cout << "FAST Keypoint #:" << i;
        std::cout << " Size " << keypoints_fast[i].size << " Angle " << keypoints_fast[i].angle
                  << " Response " << keypoints_fast[i].response << " Octave " << keypoints_fast[i].octave << std::endl;
    }

    // Detect SIFT keypoints
    cv::Ptr<cv::xfeatures2d::SiftFeatureDetector> sift = cv::xfeatures2d::SiftFeatureDetector::create();
    sift->detect(img, keypoints_sift);
    for (size_t i = 0; i < std::min<size_t>(100, keypoints_sift.size()); ++i) {
        std::cout << "SIFT Keypoint #:" << i;
        std::cout << " Size " << keypoints_sift[i].size << " Angle " << keypoints_sift[i].angle
                  << " Response " << keypoints_sift[i].response << " Octave " << keypoints_sift[i].octave << std::endl;
    }

    // Draw SIFT keypoints with their real size and orientation
    cv::drawKeypoints(img, keypoints_sift, output, cv::Scalar::all(-1), cv::DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
    cv::imshow("Output", output);
    cv::waitKey(0);
    return EXIT_SUCCESS;
}

OpenCV - RTrees algorithm: adding weight to classes

I am using OpenCV's implementation of the Random Forest algorithm (i.e. RTrees) and am facing a small problem when setting parameters.
I have 5 classes and 3 variables, and I want to weight the classes because the sample sizes vary a lot between classes.
I took a look at the documentation here and here, and it seems that the priors array is the solution, but when I try to give it 5 weights (for my 5 classes) it gives me the following error:
OpenCV Error: One of arguments' values is out of range (Every class weight should be positive) in CvDTreeTrainData::set_data, file /home/sguinard/dev/opencv-2.4.13/modules/ml/src/tree.cpp, line 644
terminate called after throwing an instance of 'cv::Exception'
what(): /home/sguinard/dev/opencv-2.4.13/modules/ml/src/tree.cpp:644: error: (-211) Every class weight should be positive in function CvDTreeTrainData::set_data
If I understand correctly, it's due to the fact that the priors array has 5 elements; when I give it only 3 elements (my number of variables), everything works.
According to the documentation, this array should be used to weight classes, but it actually seems to be used to weight variables...
So, does anyone know how to add weight to classes in OpenCV's RTrees algorithm? (I'm working with OpenCV 2.4.13 in C++.)
Thanks in advance!
Here is my code:
cv::Mat RandomForest(cv::Mat train_data, cv::Mat response_data, cv::Mat sample_data, int size, int size_predict, float weights[5])
{
    #undef CV_TERMCRIT_ITER
    #define CV_TERMCRIT_ITER 10
    #define ATTRIBUTES_PER_SAMPLE 3
    cv::RandomTrees RFTree;
    float priors[] = {1, 1, 1};
    CvRTParams RFParams = CvRTParams(25,  // max depth
        500,      // min sample count
        0,        // regression accuracy: N/A here
        false,    // compute surrogate split, no missing data
        5,        // max number of categories (use sub-optimal algorithm for larger numbers)
        //priors
        weights,  // the array of priors (use weights or priors)
        true,     //false, // calculate variable importance
        2,        // number of variables randomly selected at a node and used to find the best split(s)
        100,      // max number of trees in the forest
        0.01f,    // forest accuracy
        CV_TERMCRIT_ITER | CV_TERMCRIT_EPS  // termination criteria
    );
    cv::Mat varIdx = cv::Mat();
    cv::Mat vartype(train_data.cols + 1, 1, CV_8U);
    vartype.setTo(cv::Scalar::all(CV_VAR_NUMERICAL));
    vartype.at<uchar>(ATTRIBUTES_PER_SAMPLE, 0) = CV_VAR_CATEGORICAL;
    cv::Mat sampleIdx = cv::Mat();
    cv::Mat missingdatamask = cv::Mat();
    // Clamp out-of-range values in the training data
    for (int i = 0; i != train_data.rows; ++i)
    {
        for (int j = 0; j != train_data.cols; ++j)
        {
            if (train_data.at<float>(i,j) < 0
                || train_data.at<float>(i,j) > 10000
                || !float(train_data.at<float>(i,j)))
            { train_data.at<float>(i,j) = 0; }
        }
    }
    // Training
    std::cout << "Training ....." << std::flush;
    bool train = RFTree.train(train_data,
                              CV_ROW_SAMPLE,   // tflag
                              response_data,   // responses
                              varIdx,
                              sampleIdx,
                              vartype,
                              missingdatamask,
                              RFParams);
    if (train) { std::cout << " Done" << std::endl; }
    else { std::cout << " Failed" << std::endl; return cv::Mat(); }
    std::cout << "Variable Importance : " << std::endl;
    cv::Mat VI = RFTree.getVarImportance();
    for (int i = 0; i != VI.cols; ++i) { std::cout << VI.at<float>(i) << " - " << std::flush; }
    std::cout << std::endl;
    std::cout << "Predicting ....." << std::flush;
    cv::Mat predict(1, sample_data.rows, CV_32F);
    float max = 0;
    for (int i = 0; i != sample_data.rows; ++i)
    {
        predict.at<float>(i) = RFTree.predict(sample_data.row(i));
        if (predict.at<float>(i) > max) { max = predict.at<float>(i); /*std::cout << predict.at<float>(i) << "-" << std::flush;*/ }
    }
    // Personal sanity check due to an error I got (every prediction was 0)
    if (max == 0) { std::cout << " Failed ... Max value = 0" << std::endl; return cv::Mat(); }
    std::cout << " Done ... Max value = " << max << std::endl;
    return predict;
}
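One thing worth double-checking, since the error text says every class weight must be positive: that all five entries reaching CvRTParams are strictly greater than zero. A sketch of such a guard (the weight values are purely illustrative, not a confirmed fix):

    float class_weights[5] = {1.f, 2.f, 1.f, 4.f, 1.f}; // illustrative values, one per class
    for (int c = 0; c != 5; ++c)
        CV_Assert(class_weights[c] > 0.f); // the condition set_data appears to enforce
    // Pass class_weights where the function above passes 'weights'.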

I'm trying to convert pixel data to an OpenCV Mat object

I have raw pixel data that I want to display via the OpenCV cvShowImage() function.
I have the following code:
#include <opencv2/highgui/highgui.hpp>
// pdata is the raw pixel data as 3 uchars per pixel
static char bitmap[640*480*3];
memcpy(bitmap,pdata,640*480*3);
cv::Mat mat(480,640,CV_8UC3,bitmap);
std::cout << mat.flags << ", "
<< mat.dims << ", "
<< mat.rows << ", "
<< mat.cols << std::endl;
cvShowImage("result",&mat);
Which outputs:
1124024336, 2, 480, 640
to the console, but fails to display the image with cvShowImage(), instead throwing an exception with the message:
OpenCV Error: Bad flag (parameter or structure field) (Unrecognized or unsupported array type) in cvGetMat
I suspect the problem is in the way I create the mat object, but I am having a very hard time finding any more specific information on how I am supposed to do that.
I don't think CV_8UC3 is enough of a description for it to render the array of data. Doesn't it have to know whether the data is RGB or YUY2, etc.? How do I set that?
Try cv::imshow("result", mat) instead of mixing the old C and new C++ APIs. I expect passing a cv::Mat* where a CvArr* is expected is the source of the problem.
So, something like this:
#include <opencv2/highgui/highgui.hpp>
// pdata is the raw pixel data as 3 uchars per pixel
static char bitmap[640*480*3];
memcpy(bitmap,pdata,640*480*3);
cv::Mat mat(480,640,CV_8UC3,bitmap);
std::cout << mat.flags << ", "
<< mat.dims << ", "
<< mat.rows << ", "
<< mat.cols << std::endl;
cv::imshow("result", mat);

OpenCV findHomography assertion failed error

I'm trying to build the sample program brief_match_test.cpp that comes with OpenCV, but I keep getting this error from the cv::findHomography() function when I run the program:
OpenCV Error: Assertion failed (mtype == type0 || (CV_MAT_CN(mtype) == CV_MAT_CN(type0) && ((1 << type0) & fixedDepthMask) != 0)) in create, file /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_graphics_opencv/opencv/work/OpenCV-2.4.3/modules/core/src/matrix.cpp, line 1421
libc++abi.dylib: terminate called throwing an exception
findHomography ... Abort trap: 6
I'm compiling it like this:
g++ `pkg-config --cflags opencv` `pkg-config --libs opencv` brief_match_test.cpp -o brief_match_test
I've added some stuff to the program to show the keypoints that the FAST algorithm finds, but haven't touched the section dealing with homography. I'll include my modified example just in case I did screw something up:
/*
 * matching_test.cpp
 *
 * Created on: Oct 17, 2010
 * Author: ethan
 */
#include "opencv2/core/core.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <vector>
#include <iostream>

using namespace cv;
using namespace std;

//Copy (x,y) location of descriptor matches found from KeyPoint data structures into Point2f vectors
static void matches2points(const vector<DMatch>& matches, const vector<KeyPoint>& kpts_train,
                           const vector<KeyPoint>& kpts_query, vector<Point2f>& pts_train, vector<Point2f>& pts_query)
{
    pts_train.clear();
    pts_query.clear();
    pts_train.reserve(matches.size());
    pts_query.reserve(matches.size());
    for (size_t i = 0; i < matches.size(); i++)
    {
        const DMatch& match = matches[i];
        pts_query.push_back(kpts_query[match.queryIdx].pt);
        pts_train.push_back(kpts_train[match.trainIdx].pt);
    }
}

static double match(const vector<KeyPoint>& /*kpts_train*/, const vector<KeyPoint>& /*kpts_query*/, DescriptorMatcher& matcher,
                    const Mat& train, const Mat& query, vector<DMatch>& matches)
{
    double t = (double)getTickCount();
    matcher.match(query, train, matches); //Using features2d
    return ((double)getTickCount() - t) / getTickFrequency();
}

static void help()
{
    cout << "This program shows how to use BRIEF descriptor to match points in features2d" << endl <<
        "It takes in two images, finds keypoints and matches them displaying matches and final homography warped results" << endl <<
        "Usage: " << endl <<
        "image1 image2 " << endl <<
        "Example: " << endl <<
        "box.png box_in_scene.png " << endl;
}

const char* keys =
{
    "{1| |box.png |the first image}"
    "{2| |box_in_scene.png|the second image}"
};

int main(int argc, const char ** argv)
{
    Mat outimg;
    help();
    CommandLineParser parser(argc, argv, keys);
    string im1_name = parser.get<string>("1");
    string im2_name = parser.get<string>("2");
    Mat im1 = imread(im1_name, CV_LOAD_IMAGE_GRAYSCALE);
    Mat im2 = imread(im2_name, CV_LOAD_IMAGE_GRAYSCALE);
    if (im1.empty() || im2.empty())
    {
        cout << "could not open one of the images..." << endl;
        cout << "the cmd parameters have next current value: " << endl;
        parser.printParams();
        return 1;
    }
    double t = (double)getTickCount();
    FastFeatureDetector detector(15);
    BriefDescriptorExtractor extractor(32); //this is really 32 x 8 bits since the descriptors are binary values packed into bytes
    vector<KeyPoint> kpts_1, kpts_2;
    detector.detect(im1, kpts_1);
    detector.detect(im2, kpts_2);
    t = ((double)getTickCount() - t) / getTickFrequency();
    cout << "found " << kpts_1.size() << " keypoints in " << im1_name << endl << "found " << kpts_2.size()
         << " keypoints in " << im2_name << endl << "took " << t << " seconds." << endl;
    drawKeypoints(im1, kpts_1, outimg, 200);
    imshow("Keypoints - Image1", outimg);
    drawKeypoints(im2, kpts_2, outimg, 200);
    imshow("Keypoints - Image2", outimg);
    Mat desc_1, desc_2;
    cout << "computing descriptors..." << endl;
    t = (double)getTickCount();
    extractor.compute(im1, kpts_1, desc_1);
    extractor.compute(im2, kpts_2, desc_2);
    t = ((double)getTickCount() - t) / getTickFrequency();
    cout << "done computing descriptors... took " << t << " seconds" << endl;
    //Do matching using features2d
    cout << "matching with BruteForceMatcher<Hamming>" << endl;
    BFMatcher matcher_popcount(NORM_HAMMING);
    vector<DMatch> matches_popcount;
    double pop_time = match(kpts_1, kpts_2, matcher_popcount, desc_1, desc_2, matches_popcount);
    cout << "done BruteForceMatcher<Hamming> matching. took " << pop_time << " seconds" << endl;
    vector<Point2f> mpts_1, mpts_2;
    cout << "matches2points ... ";
    matches2points(matches_popcount, kpts_1, kpts_2, mpts_1, mpts_2); //Extract a list of the (x,y) location of the matches
    cout << "done" << endl;
    vector<char> outlier_mask;
    cout << "findHomography ... ";
    Mat H = findHomography(mpts_2, mpts_1, RANSAC, 1, outlier_mask);
    cout << "done" << endl;
    cout << "drawMatches ... ";
    drawMatches(im2, kpts_2, im1, kpts_1, matches_popcount, outimg, Scalar::all(-1), Scalar::all(-1), outlier_mask);
    cout << "done" << endl;
    imshow("matches - popcount - outliers removed", outimg);
    Mat warped;
    Mat diff;
    warpPerspective(im2, warped, H, im1.size());
    imshow("warped", warped);
    absdiff(im1, warped, diff);
    imshow("diff", diff);
    waitKey();
    return 0;
}
I don't know for sure, so I'm really answering this just because no one else has so far and it's been 10 hours since you asked the question.
My first thought is that you don't have enough point pairs. A homography requires at least 4 pairs, otherwise a unique solution cannot be found. You may want to make sure that you only call findHomography if the number of matches is at least 4.
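A sketch of that guard, using the variable names from your main():

    if (matches_popcount.size() >= 4) // a homography needs at least 4 point pairs
    {
        Mat H = findHomography(mpts_2, mpts_1, RANSAC, 1, outlier_mask);
        // ... proceed with warping and drawing ...
    }
    else
    {
        cout << "not enough matches to estimate a homography" << endl;
    }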
Alternatively, the questions here and here are about the same failed assertion (caused by calling different functions than yours, though). I'm guessing OpenCV does some form of dynamic type checking or templating such that a type mismatch error that ought to occur at compile time ends up being a run-time error in the form of a failed assertion.
All this to say, maybe you should convert mpts_1 and mpts_2 to cv::Mat before passing them to findHomography.
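That conversion is a one-liner, since cv::Mat has a constructor that wraps a std::vector<Point2f> without copying:

    Mat pts_1(mpts_1), pts_2(mpts_2); // Mat headers over the existing vectors
    Mat H = findHomography(pts_2, pts_1, RANSAC, 1, outlier_mask);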
It's an internal OpenCV types problem: findHomography() wants vector<unsigned char> as its last parameter, while drawMatches() requires vector<char> as its last one.
I think this page explains a lot about brief_match_test.cpp and the ways to correct it.
You can do it like this:
vector<char> outlier_mask;
Mat outlier(outlier_mask);
Mat H = findHomography(mpts_2, mpts_1, RANSAC, 1, outlier);