OpenCV 3 KNN implementation - C++

As you may know, many things changed in OpenCV 3. In previous versions of OpenCV I used to do it this way:
Mat trainData(classes * samples, ImageSize, CV_32FC1);
Mat trainClasses(classes * samples, 1, CV_32FC1);
KNNLearning(&trainData, &trainClasses); //learning function
KNearest knearest(trainData, trainClasses); //creating
//loading input image
Mat input = imread("input.jpg");
//digital recognition
learningTest(input, knearest);//test
I also found an example of how to figure it out, but I still get errors in the create function:
Ptr<KNearest> knearestKdt = KNearest::create(ml::KNearest::Params(10, true, INT_MAX, ml::KNearest::KDTREE));
knearestKdt->train(trainData, ml::ROW_SAMPLE, trainLabels);
knearestKdt->findNearest(testData, 4, bestLabels);
Can you please provide information on how to properly rewrite this KNearest code for OpenCV 3?

The API has changed once again since @aperture-laboratories' answer. I hope they keep the documentation up to date when they release new features or changes in the future.
A working example is as follows:
using namespace cv::ml;

//Be sure to change number_of_... to fit your data!
Mat matTrainFeatures(0, number_of_train_elements, CV_32F);
Mat matSample(0, number_of_sample_elements, CV_32F);
Mat matTrainLabels(0, number_of_train_elements, CV_32F);
Mat matSampleLabels(0, number_of_sample_elements, CV_32F);
Mat matResults(0, 0, CV_32F);

//etcetera code for loading data into Mat variables suppressed

Ptr<TrainData> trainingData;
Ptr<KNearest> kclassifier = KNearest::create();

trainingData = TrainData::create(matTrainFeatures,
                                 SampleTypes::ROW_SAMPLE, matTrainLabels);

kclassifier->setIsClassifier(true);
kclassifier->setAlgorithmType(KNearest::Types::BRUTE_FORCE);
kclassifier->setDefaultK(1);

kclassifier->train(trainingData);
kclassifier->findNearest(matSample, kclassifier->getDefaultK(), matResults);

//Just checking the settings
cout << "Training data: " << endl
     << "getNSamples\t" << trainingData->getNSamples() << endl
     << "getSamples\n" << trainingData->getSamples() << endl
     << endl;

cout << "Classifier: " << endl
     << "kclassifier->getDefaultK(): " << kclassifier->getDefaultK() << endl
     << "kclassifier->getIsClassifier(): " << kclassifier->getIsClassifier() << endl
     << "kclassifier->getAlgorithmType(): " << kclassifier->getAlgorithmType() << endl
     << endl;

//confirming sample order
cout << "matSample: " << endl
     << matSample << endl
     << endl;

//displaying the results
cout << "matResults: " << endl
     << matResults << endl
     << endl;

//etcetera ending for main function

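For reference, an earlier variant (apparently written against the 3.0-beta API, where KNearest::Params still existed before being replaced by the setter methods shown above) looked like this: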
KNearest::Params params;
params.defaultK = 5;
params.isclassifier = true;

//////// Train and find with knearest
Ptr<TrainData> knn;
knn = TrainData::create(AmatOfFeatures, ROW_SAMPLE, AmatOfLabels);
Ptr<KNearest> knn1;
knn1 = StatModel::train<KNearest>(knn, params);
knn1->findNearest(AmatOfFeaturesToTest, 4, ResultMatOfNearestNeighbours);
/////////////////
The names of these functions should help you find them in the documentation.
However, the documentation may be a little confusing until it is fully updated, so the best way to do exactly what you want is to build a small toy example and proceed by trial and error.
This is a working example, pasted straight out of my own code. Hope that helps.
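To complement the snippets above, which suppress the data-loading code, here is a self-contained toy sketch (the synthetic data and variable values are my own, purely for illustration) that compiles against the OpenCV 3.x API:
#include <opencv2/opencv.hpp>
#include <opencv2/ml.hpp>
#include <iostream>
using namespace cv;
using namespace cv::ml;

int main()
{
    // Two 1-D classes: values near 0 get label 0, values near 10 get label 1
    float samples[] = { 0.f, 1.f, 2.f, 9.f, 10.f, 11.f };
    float labels[]  = { 0.f, 0.f, 0.f, 1.f,  1.f,  1.f };
    Mat matTrainFeatures(6, 1, CV_32F, samples);
    Mat matTrainLabels(6, 1, CV_32F, labels);

    Ptr<KNearest> kclassifier = KNearest::create();
    kclassifier->setIsClassifier(true);
    kclassifier->setAlgorithmType(KNearest::Types::BRUTE_FORCE);
    kclassifier->setDefaultK(3);
    kclassifier->train(TrainData::create(matTrainFeatures, ROW_SAMPLE, matTrainLabels));

    // Classify one unseen sample; expect label 1 (nearest neighbours are 9, 10, 11)
    Mat matSample = (Mat_<float>(1, 1) << 9.5f);
    Mat matResults;
    kclassifier->findNearest(matSample, kclassifier->getDefaultK(), matResults);
    std::cout << "predicted label: " << matResults.at<float>(0, 0) << std::endl;
    return 0;
}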

Related

cvtColor ignores dstCn argument OpenCV

I am trying to create a program which imports an RGB image and converts it to grayscale. I would like the output image to consist of 3 channels. To achieve that, I use the cv::cvtColor function with the dstCn parameter set to 3:
cv::Mat mat = cv::imread("lena.bmp");
std::cout << CV_MAT_CN(mat.type()) << "\n"; // prints "3", OK
cv::cvtColor(mat, mat, cv::COLOR_BGR2GRAY, 3);
std::cout << CV_MAT_CN(mat.type()) << "\n"; // prints "1" regardless of dstCn
but it looks like dstCn isn't taken into account, and the output array has only 1 channel.
The OpenCV documentation says:
dstCn - number of channels in the destination image; if the parameter is 0, the number of the channels is derived automatically from src and code.
It's a very basic case and I am aware there are plenty of workarounds, but I would like to know whether it is a bug or my incomprehension.
The answer can be found in the OpenCV source code. Let's have a look at the cvtColor function in the modules/imgproc/src/color.cpp file. There is a very long switch-case, so I will only post the most interesting part here:
void cvtColor( InputArray _src, OutputArray _dst, int code, int dcn )
{
    ...
    switch( code )
    {
        ...
        case COLOR_BGR2GRAY: case COLOR_BGRA2GRAY:
        case COLOR_RGB2GRAY: case COLOR_RGBA2GRAY:
            cvtColorBGR2Gray(_src, _dst, swapBlue(code));
            break;
    }
}
The code from my question uses COLOR_BGR2GRAY. Nothing special is done before the switch statement, and invoking swapBlue does not do anything interesting either. We can see that this case completely ignores dcn (aka dstCn). So it seems to be fully intentional, and my idea was wrong from the start.
I have also found a similar post on the OpenCV forum where Doomb0t pointed out that:
the concept of greyscale is that you have one channel describing the intensity on a gradual scale between black and white. So, it is not clear why would you need a 3 channels greyscale image (...)
Yes, grayscale is one channel, and what you ask doesn't make sense at first sight. However, there could be a legitimate reason: you may want the gray data copied into three channels in one operation so you can then manipulate each of the copies while they are kept in the same container.
swapBlue is there because the default channel order in OpenCV is BGR.
BTW, you can also read the image directly as grayscale and merge it into a new 3-channel image:
cv::Mat bw = cv::imread("lena.bmp", 0); // 0 = load as grayscale
cv::Mat bw3; // will become CV_8UC3 (3 channels, 8-bit unsigned) after merge
std::vector<cv::Mat> ch;
for (int i = 0; i < 3; i++) ch.push_back(bw); // three copies of the same channel
cv::merge(ch, bw3);
(Maybe there's a shorter way, I don't know.)
More examples with merge
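As an aside (my addition, not part of the original answer), there is in fact a shorter way: cv::cvtColor can replicate a single gray channel into three channels directly:
cv::Mat bw = cv::imread("lena.bmp", 0);
cv::Mat bw3;
cv::cvtColor(bw, bw3, cv::COLOR_GRAY2BGR); // duplicates the gray channel into B, G and R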

Unable to stitch images via OpenCV in C++

I need to stitch a few images using OpenCV in C++, so I wrote the following code:
#include <opencv2/opencv.hpp>
#include <opencv2/stitching.hpp>
#include <cstdio>
#include <vector>
int main()
{
std::vector<cv::Mat> vImg;
cv::Mat rImg;
vImg.push_back(cv::imread("./stitching_img/S1.png"));
vImg.push_back(cv::imread("./stitching_img/S2.png"));
vImg.push_back(cv::imread("./stitching_img/S3.png"));
cv::Stitcher stitcher = cv::Stitcher::createDefault();
unsigned long AAtime = 0, BBtime = 0;
AAtime = cv::getTickCount();
cv::Stitcher::Status status = stitcher.stitch(vImg, rImg);
BBtime = cv::getTickCount();
printf("%.2lf sec \n", (BBtime - AAtime) / cv::getTickFrequency());
if (cv::Stitcher::OK == status)
cv::imshow("Stitching Result", rImg);
else
std::printf("Stitching fail.");
cv::waitKey(0);
}
Unfortunately, it always says "Stitching fail" on the following files -- http://imgur.com/a/32ZNS while it works on these files -- http://imgur.com/a/ve5sY
What am I doing wrong? How can I fix it?
Thanks in advance.
cv::Stitcher works by finding common features in the separate images and using those to figure out where the images fit together. In your samples where the stitching works you can find a lot of overlap: the blue roof, the features of the buildings across the road, etc.
In the set where it fails for you, there is no overlap, so the algorithm can't figure out how to fit them together. It seems like you can 'stitch' these images by just putting them next to each other. For this you can use hconcat, as described in this answer: https://stackoverflow.com/a/20079134/1737727
There is a very simple way of displaying two images side by side: the hconcat function provided by OpenCV.
Mat image1, image2;
hconcat(image1, image2, image1); // syntax: hconcat(source1, source2, destination)
This function can also be used to copy a set of columns from an image to another image.
Mat image;
Mat columns=image.colRange(20,30);
hconcat(image,columns,image);
vconcat is the analogous function for stitching images together vertically.
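As an illustration, here is a sketch of the question's program rewritten around hconcat (this assumes the three images share the same height and pixel type, which hconcat requires):
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    std::vector<cv::Mat> vImg;
    vImg.push_back(cv::imread("./stitching_img/S1.png"));
    vImg.push_back(cv::imread("./stitching_img/S2.png"));
    vImg.push_back(cv::imread("./stitching_img/S3.png"));

    cv::Mat rImg;
    cv::hconcat(vImg, rImg); // every input must have identical rows and type
    cv::imshow("Concatenated", rImg);
    cv::waitKey(0);
    return 0;
}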

Using custom kernel in OpenCV filter2D - causing crash ... convolution how?

Thought I'd try my hand at a little (auto)correlation/convolution today in OpenCV and make my own 2D filter kernel.
Following OpenCV's 2D Filter Tutorial, I discovered that making your own kernels for OpenCV's filter2D might not be that hard. However, I'm getting unhandled exceptions when I try to use one.
Code with comments relating to the issue here:
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <stdlib.h>
#include <stdio.h>
#include <iostream>
using namespace cv;
using namespace std;
int main(int argc, char** argv) {
//Loading the source image
Mat src;
src = imread( "1.png" );
//Output image of the same size and the same number of channels as src.
Mat dst;
//Mat dst = src.clone(); //didn't help...
//desired depth of the destination image
//negative so dst will be the same as src.depth()
int ddepth = -1;
//the convolution kernel, a single-channel floating point matrix:
Mat kernel = imread( "kernel.png" );
kernel.convertTo(kernel, CV_32F); //<<not working
//normalize(kernel, kernel, 1.0, 0.0, 4, -1, noArray()); //doesn't help
//cout << kernel.size() << endl; // ... gives 11, 11
//however, the example from tutorial that does work:
//kernel = Mat::ones( 11, 11, CV_32F )/ (float)(11*11);
//default value (-1,-1) here means that the anchor is at the kernel center.
Point anchor = Point(-1,-1);
//value added to the filtered pixels before storing them in dst.
double delta = 0;
//alright, let's do this...
filter2D(src, dst, ddepth , kernel, anchor, delta, BORDER_DEFAULT );
imshow("Source", src); //<< unhandled exception here
imshow("Kernel", kernel);
imshow("Destination", dst);
waitKey(1000000);
return 0;
}
As you can see, using the tutorial's kernel works fine, but my image crashes the program. I've tried changing the bit depth, normalizing, checking sizes, and commenting out lots of blocks to see where it fails, but I haven't cracked it yet.
The image is '1.png':
And the kernel I want is 'kernel.png':
I'm trying to see if I can get a hotspot in dst at the point where the eye catchlight is (the kernel I've chosen is the catchlight). I know there are other ways to do this, but I'm interested to see how effective convolving the catchlight over itself is. (I think that's called autocorrelation?)
Direct questions:
why the crash?
is the crash indicating a fundamental conceptual mistake?
or (hopefully) is it just some (silly) fault in the code?
Thanks in advance for any help :)
You should post the assertion error; that would help people answer you rather than guess at why it crashes. Anyway, I have posted below the likely error and the solution for convolution with filter2D.
Error 1:
OpenCV Error: Assertion failed (src.channels() == 1 && func != 0) in cv::countNonZero,
file C:\builds\2_4_PackSlave-win32-vc12-shared\opencv\modules\core\src\stat.cpp, line 549
Solution: your input image and the kernel should both be grayscale. You can use the flag 0 in imread (e.g. cv::imread("kernel.png", 0) reads the image as grayscale). If you want to apply different kernels to different channels, split the image into separate color planes using split() and process them individually.
I don't see anything other than the above error that could crash it. Kernel sizes should be odd numbers, and your kernel image is 11x11, which is fine. If it still crashes, kindly provide more information so that we can help you out.
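To make the fix concrete, here is a minimal sketch (file names are taken from the question; the kernel normalization is my assumption, added so the filtered values stay in range):
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
using namespace cv;

int main()
{
    Mat src = imread("1.png", 0);         // 0 = load as single-channel grayscale
    Mat kernel = imread("kernel.png", 0); // the kernel image, also grayscale

    // filter2D expects a floating-point kernel
    kernel.convertTo(kernel, CV_32F, 1.0 / 255.0);
    kernel = kernel / sum(kernel)[0];     // normalize so the response does not saturate

    Mat dst;
    filter2D(src, dst, -1, kernel, Point(-1, -1), 0, BORDER_DEFAULT);
    imshow("Destination", dst);
    waitKey(0);
    return 0;
}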

Cannot access IplImage data

I have been trying to compare the data contents of my IplImage objects.
I have the following:
IplImage img1 = IplImage(cv::imread("C:\\TestIm\\barrier_snapshot1.png"),
CV_LOAD_IMAGE_GRAYSCALE);
for (int i=0; i < img1.widthStep * img1.height; i++) {
cout << img1.imageData[i] << endl;
}
But when I try to print it, it causes an exception that I cannot even catch to print the message and see what I am doing wrong. My image is grayscale, and I believe it is okay if I don't use cvCreateImage()? I know it will be something stupid or related to array access that I cannot seem to get easily from the IplImage documentation.
WHY AM I BOTHERING TO MIX C AND C++ CODE IN MY DESIGN?
Unfortunately, I have no choice! I am working on a project to improve motion detection applications. My legacy application source code uses BOOST and OpenCV heavily. In particular, it uses IplImage* (I hate it; it makes life difficult and causes memory leaks) to store things like image masks. I understand that if I keep an IplImage* around in the long run I will get dangling references and access violations, so I save a copy of the actual content pointed to by the IplImage*. To exemplify:
// getLongHistory() returns IplImage*
IplImage history_long = *(motionHistory.getLongHistory());
There are 6 mask images in total, all made using IplImage*. At this moment, I am condemning the programmer who decided to do it with IplImage*. The problem arises when I try to load those mask images, which I do like this:
// Passing pointer to the address of the mask stored (alive in the memory)
motionHistory.setLongHistory(&(matcher.getCurrentSceneObject().getLongHistory()));
I believe I am having a problem with deep and shallow copies of IplImage objects. I suspect that saving from IplImage* into cv::Mat and loading from cv::Mat back into IplImage* would reduce the burden, since the high-level functions probably do SOMETHING underneath so that the data and ROI are copied accordingly. But, as a newbie, I can only assume. Please help!
UPDATE
In my code I was doing this in the past:
/* I store all my mask images in a vector of pairs made of <int, IplImage>
* __MASK_LONG__ etc. are predefined integers
* getMaskLong() etc. methods return IplImage* to the respective mask images.
*/
myImages.clear(); // To make sure that I have no extra stuff
myImages.push_back(std::make_pair<int, IplImage>(_MASK_LONG_, maskHistory.getMaskLong()));
myImages.push_back(std::make_pair<int, IplImage>(_MASK_SHORT_, maskHistory.getMaskShort()));
However, after getting suggestions and doing some basic R&A, I am now doing this to prevent shallow copying:
myImages.clear();
myImages.push_back(std::make_pair<int, IplImage>(_MASK_LONG_, *cvCloneImage(maskHistory.getMaskLong())));
myImages.push_back(std::make_pair<int, IplImage>(_MASK_SHORT_, *cvCloneImage(maskHistory.getMaskShort())));
I can confirm that this works, as I can see the latest mask images getting loaded in the OpenCV window! And I am now pretty certain how IMPORTANT it is to do deep copies in a task like this. So thanks for putting me on the right track. But now I have the problem which I had in mind while implementing those changes - memory allocation failure. This is the message I encountered:
OpenCV Error: Insufficient memory (Failed to allocate 3686404 bytes) in OutOfMemoryError,
file /home/naresh/OpenCV-2.4.0/modules/core/src/alloc.cpp, line 52
If I understand C/C++ deeply enough: firstly, I am committing the crime of mixing them together (I HAVE NO CHOICE!! IT IS A LEGACY APPLICATION!). Secondly, there may be a mismatch, i.e. an incorrect set of calls to malloc/free around alloc.cpp (where the error is raised). Or it may just be that the heap is corrupted or full. Am I being stupid?
Don't mix the C interface of OpenCV with the C++ interface.
Ideally, you would solve the problem by using exclusively the C++ interface, like the following:
cv::Mat gray = cv::imread("C:\\TestIm\\barrier_snapshot1.png", CV_LOAD_IMAGE_GRAYSCALE);
cv::Mat_<uchar>::iterator it = gray.begin<uchar>();
cv::Mat_<uchar>::iterator end = gray.end<uchar>();
for (; it != end; ++it)
{
cout << (int)*it << endl; // cast to int: a raw uchar would print as a character
}
Or:
cv::Mat gray = cv::imread("C:\\TestIm\\barrier_snapshot1.png", CV_LOAD_IMAGE_GRAYSCALE);
for (int i = 0; i < gray.cols; i++)
{
for (int j = 0; j < gray.rows; j++)
{
cout << (int)gray.at<uchar>(j, i) << endl; // cv::Mat has no operator[]; use at<uchar>(row, col)
}
}
Yes, cv::imread() can also load an input image as grayscale. But if you really need to stick with the C interface, then drop cv::imread() and use cvLoadImage() instead. There are several posts explaining how to do this, use the search box.
If you decide to continue to mix the interfaces (please don't), check this thread since it explains how to convert IplImage* to cv::Mat.
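If you do have to cross that boundary, a minimal sketch of the conversion (using the file name from the question) could look like this:
// Load with the C interface, then copy into a cv::Mat via cvarrToMat.
IplImage* legacy = cvLoadImage("C:\\TestIm\\barrier_snapshot1.png", CV_LOAD_IMAGE_GRAYSCALE);
cv::Mat mat = cv::cvarrToMat(legacy, true); // true = deep copy, so mat owns its data
cvReleaseImage(&legacy);                    // safe: mat no longer references legacy's buffer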

How to use cvAvgSdv in OpenCV + VS 2010

I read in this thread - Get most accurate image using OpenCV - that I can use the variance to measure which of the input images is the sharpest. I can't seem to find a tutorial for this. I am very new to OpenCV. Right now, my code scans images from a folder and stores them in a std::vector:
for (int ct = 0; ct < images.size() ; ct++) {
//should i put the cvAvgSdv function here?
waitKey(0);
}
Thank you for any help!
Update: I called this function:
cvAvgSdv(images[ct],&scalar_mean,&std_dev);
and it gave me an error:
No suitable conversion function from cv::Mat to const cvArr * exists.
Can I use the function without converting the Mat to an IplImage? If not, what's the easiest way to convert the Mat?
Yes, that is where the call goes.
You should calculate it like this:
CvScalar mean, std_dev;
cvAvgSdv(img,&mean,&std_dev,NULL);
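Note that cvAvgSdv belongs to the C interface and expects a CvArr*, which is exactly why the conversion error in the question appears when a cv::Mat is passed. With cv::Mat it is easier to use the C++ equivalent, cv::meanStdDev; here is a minimal sketch, assuming the images vector from the question:
cv::Scalar mean, std_dev;
for (size_t ct = 0; ct < images.size(); ct++) {
    cv::meanStdDev(images[ct], mean, std_dev); // works directly on cv::Mat
    double variance = std_dev[0] * std_dev[0]; // variance of the first channel
    std::cout << "image " << ct << " variance: " << variance << std::endl;
}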