Is there a simple way to save a KNN classifier in OpenCV by using the C++ API?
I have tried to save a KNN classifier as described here, after wrapping the CvKNearest class inside another class.
It saves to disk successfully, but when I load it back, calling the predict method gives me a segmentation fault (core dumped).
My wrapper class is as follows:
class KNNWrapper
{
    CvKNearest knn;

public:
    bool train(Mat& traindata, Mat& trainclasses)
    {
        // ...
    }

    void test(Mat& testdata, Mat& testclasses)
    {
        // ...
    }
};
I've heard that the Boost Serialization library is more robust and safer. Can anyone point me to proper resources showing how to get this done with Boost?
@tisch is totally right, and I'd like to correct myself: CvKNearest doesn't override the load and save functions of CvStatModel.
But since CvKNearest doesn't compute a model, there's no internal state to store. Of course, you will want to store the training and test cv::Mat data you have passed. You can use the FileStorage class for this; a great description and tutorial is given at:
http://docs.opencv.org/modules/core/doc/xml_yaml_persistence.html
If you want to offer the same API as the other statistical models in OpenCV (which makes sense), I suggest subclassing CvKNearest and offering save and load functions, which serialize the training/test data and deserialize it using FileStorage.
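For illustration, here is a minimal sketch of that idea. The class name KNNPersist and its member names are hypothetical, and this is an untested sketch against the old C-based API, not a definitive implementation:

class KNNPersist : public CvKNearest
{
    cv::Mat train_data_;  // copy of the samples passed to train()
    cv::Mat responses_;   // copy of the class labels

public:
    bool train_and_remember(const cv::Mat& samples, const cv::Mat& responses)
    {
        train_data_ = samples.clone();
        responses_ = responses.clone();
        return CvKNearest::train(train_data_, responses_);
    }

    void save_data(const char* filename) const
    {
        cv::FileStorage fs(filename, cv::FileStorage::WRITE);
        fs << "train_data" << train_data_;
        fs << "responses" << responses_;
    }

    bool load_data(const char* filename)
    {
        cv::FileStorage fs(filename, cv::FileStorage::READ);
        if (!fs.isOpened())
            return false;
        fs["train_data"] >> train_data_;
        fs["responses"] >> responses_;
        // k-NN keeps no trained model, so rebuild it from the restored data.
        return CvKNearest::train(train_data_, responses_);
    }
};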
Related
I am trying to use ITK's OtsuMultipleThresholdsImageFilter in a project, but I get no output.
My aim is to make a simple interface between OpenCV and ITK.
To convert my data from OpenCV's Mat container to itk::Image, I use ITK's bridge to OpenCV, and I could check that the data is properly passed to ITK.
I am even able to display it thanks to QuickView.
But when I set up the filter, inspired by this example, the object returned by the GetThresholds() method is empty.
Here is the code I wrote:
typedef itk::Image<uchar,2> image_type;
typedef itk::OtsuMultipleThresholdsImageFilter<image_type, image_type> filter_type;
image_type::Pointer img = itk::OpenCVImageBridge::CVMatToITKImage<image_type>(src);
image_type::SizeType size = img->GetLargestPossibleRegion().GetSize();
filter_type::Pointer filter = filter_type::New();
filter->SetInput(img);
filter->SetNumberOfHistogramBins(256);
filter->SetNumberOfThresholds(K);
filter_type::ThresholdVectorType tmp = filter->GetThresholds();
std::cout<<"CHECK: "<<tmp.size()<<std::endl;
src is an OpenCV Mat of type CV_8UC1.
A fundamental concept in ITK is its pipeline architecture: you must connect the inputs and outputs and then update the pipeline.
You have connected the pipeline, but you have not executed it. You must call filter->Update().
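Applied to the code from the question, that means updating the filter before reading the thresholds:

filter->SetInput(img);
filter->SetNumberOfHistogramBins(256);
filter->SetNumberOfThresholds(K);
filter->Update();  // execute the pipeline; without this, GetThresholds() stays empty

filter_type::ThresholdVectorType thresholds = filter->GetThresholds();
std::cout << "CHECK: " << thresholds.size() << std::endl;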
Please read the ITK Software Guide to understand the fundamentals of ITK:
https://itk.org/ItkSoftwareGuide.pdf
I'm trying to load a model trained in Python into C++ and classify some data from a CSV. I found this tutorial:
https://medium.com/@hamedmp/exporting-trained-tensorflow-models-to-c-the-right-way-cf24b609d183#.3bmbyvby0
That led me to this piece of example code:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/label_image/main.cc
This looks very promising. However, the data I want to load is in a CSV, not an image file, so I'm trying to rewrite the ReadTensorFromImageFile function. I was able to find a DecodeCSV class, but it's a little different from the DecodePNG and DecodeJpeg classes in the example code, and I end up with an OutputList instead of an Output. Using the [] operator on the list seems to crash my program. If anyone happens to know how to deal with this, it would be greatly appreciated. Here are the relevant changes to the code:
// inside ReadTensorFromText
Output image_reader;
std::initializer_list<Input>* x = new std::initializer_list<Input>;
::tensorflow::ops::InputList defaults = ::tensorflow::ops::InputList(*x);
OutputList image_read_list;
image_read_list = DecodeCSV(root.WithOpName("csv_reader"), file_reader, defaults).output;
// Now cast the image data to float so we can do normal math on it.
// image_read_list.at(0) crashes the executable.
auto float_caster =
    Cast(root.WithOpName("float_caster"), image_read_list.at(0), tensorflow::DT_FLOAT);
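For what it's worth, a hedged guess at the crash: DecodeCSV derives its number of outputs from the record_defaults you pass, so an empty InputList yields an empty OutputList, and .at(0) then throws std::out_of_range, which can look like a crash. A minimal sketch, assuming (purely for illustration) a CSV with three float columns:

// Sketch only: one default per CSV column; the column count and types
// here are assumptions, not taken from the original code.
auto defaults = ::tensorflow::ops::InputList(
    {::tensorflow::Input(0.0f), ::tensorflow::Input(0.0f), ::tensorflow::Input(0.0f)});
auto csv = DecodeCSV(root.WithOpName("csv_reader"), file_reader, defaults);
// csv.output now holds one Output per column.
auto float_caster =
    Cast(root.WithOpName("float_caster"), csv.output[0], tensorflow::DT_FLOAT);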
I am trying to transform a vtkPolyData object using vtkTransform.
However, the tutorials I found use the pipeline, for example: http://www.vtk.org/Wiki/VTK/Examples/Cxx/Filters/TransformPolyData
But I am using VTK 6.1, which has removed the GetOutputPort method for stand-alone data objects, as mentioned here:
http://www.vtk.org/Wiki/VTK/VTK_6_Migration/Replacement_of_SetInput
I have tried to replace the line:
transformFilter->SetInputConnection()
with
transformFilter->SetInputData(polydata_object);
Unfortunately, the data is not read properly (perhaps because the pipeline is not set up correctly?).
Do you know how to correctly transform a stand-alone vtkPolyData without using the pipeline in VTK 6?
Thank you!
GetOutputPort was never a method on a data object. It was always a method on vtkAlgorithm, and it is still present on vtkAlgorithm (and subclasses). Where is polydata_object coming from? If it's the output of a reader, you have two options:
// Update the reader to ensure it executes and reads the data.
reader->UpdatePipeline();

// Now you can get access to the data object.
vtkSmartPointer<vtkPolyData> data =
    vtkPolyData::SafeDownCast(reader->GetOutputDataObject(0));

// Pass that to the transform filter.
transformFilter->SetInputData(data.GetPointer());
transformFilter->Update();
The second option is to simply connect the pipeline:
transformFilter->SetInputConnection(reader->GetOutputPort());
The key, when not using the pipeline, is to ensure that the data is updated/read before passing it to the transform filter.
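Putting it together, a minimal sketch of the non-pipeline variant (the translation values are placeholders):

vtkSmartPointer<vtkTransform> transform = vtkSmartPointer<vtkTransform>::New();
transform->Translate(1.0, 0.0, 0.0);  // placeholder transform

vtkSmartPointer<vtkTransformPolyDataFilter> transformFilter =
    vtkSmartPointer<vtkTransformPolyDataFilter>::New();
transformFilter->SetTransform(transform);
transformFilter->SetInputData(polydata_object);  // stand-alone data, no upstream algorithm
transformFilter->Update();

vtkPolyData* transformed = transformFilter->GetOutput();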
I'm trying to implement a random tree classifier using OpenCV. I succeeded in implementing it, and it works.
Then I decided to separate the training part from the classification part.
The idea is to save the trained forest and load it back when you want to classify something.
I tried two different ways:
using the write and read methods of the superclass CvStatModel
using the save and load methods of the superclass CvStatModel
But the results differ from, and are worse than, those of the older implementation that did not save the trees to file.
The following code is the implementation of the second option.
To store the classifiers:
for (unsigned i = 0; i < scenes.size(); ++i) {
    char class_fname[50];
    char output[100];
    sprintf(class_fname, "class_%d.xml", i);
    sprintf(output, "class_%d", i);
    //classifiers[i]->save(class_fname, output);
    classifiers[i]->save(class_fname);
}
To load them back:
for (unsigned int i = 0; i < CLUSTERING_N_CENTERS; i++) {
    char class_fname[50];
    char output[100];
    sprintf(class_fname, "class_%d.xml", i);
    sprintf(output, "class_%d", i);
    classifiers[i] = new CvRTrees();
    //classifiers[i]->load(class_fname, output);
    classifiers[i]->load(class_fname);
}
I'm using OpenCV 2.4.6.
Does anyone have suggestions on this code?
It was an error due to a file mix-up on my side!
So the persistence is working!
I'll leave the post up as a sample in case someone needs to implement this!
I'm currently trying to figure out how to use the Generic Image Library included in Boost. Right now, I just want to use the library to store pixel data and use its image IO to write PNGs. However, I'm having trouble understanding how to set up the image object.
The hpp says
image(const point_t& dimensions,
std::size_t alignment=1) : _memory(0), _align(alignment) {
allocate_and_default_construct(dimensions);
}
but I cannot find any references to point_t except a typedef from view_t::point_t to point_t.
Also, the tutorial shipped with GIL seems to cover only writing filters and generic algorithms, so each example function takes a source image view, from which it gets the dimensions.
Am I going about this the wrong way? Or is there something I've missed completely?
Thanks in advance
Edit: I don't know if anyone cares or has read this, but for the record: I just used Boost's interleaved image function to create a PNG. It's not exactly the same solution, but it works for my application.
It sounds like you solved your problem in the meantime, but just for the record, here are some pointers to information about your problem:
First of all, you may have missed the second constructor of boost::gil::image, which offers explicit access to the horizontal and vertical dimensions without the need for point_t:
image(x_coord_t width, y_coord_t height,
std::size_t alignment=0,
const Alloc alloc_in = Alloc()) : _memory(0), _align_in_bytes(alignment), _alloc(alloc_in) {
allocate_and_default_construct(point_t(width,height));
}
point_t will most likely refer to the point2 class template defined in boost/gil/utilities.hpp.
In general, you should check the complete documentation of Boost GIL for any questions not covered by the tutorial. For a deeper understanding of the library, it is absolutely necessary to get familiar with the Design Guide and the Doxygen documentation.
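To round this off, here is a minimal sketch of constructing an image directly and writing it as a PNG. It assumes an older Boost where the png_io extension is available (and linking against libpng); the dimensions and fill color are placeholders:

#include <boost/gil/gil_all.hpp>
#include <boost/gil/extension/io/png_io.hpp>

int main()
{
    // Either constructor works; the two-argument one avoids point_t entirely.
    boost::gil::rgb8_image_t img(boost::gil::rgb8_image_t::point_t(640, 480));
    // boost::gil::rgb8_image_t img(640, 480);

    // Fill with a solid color so the output is visibly non-empty.
    boost::gil::fill_pixels(boost::gil::view(img), boost::gil::rgb8_pixel_t(255, 0, 0));

    // Write the image view to disk as a PNG.
    boost::gil::png_write_view("out.png", boost::gil::const_view(img));
    return 0;
}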