My name is titiri, and I'm happy that I found the waffles library for classification. I think waffles is a good library for machine learning algorithms.
I have a question about the waffles library.
After training a model, I want to print the prediction for an instance.
My code is:
GMatrix Instance(1,8); // the instance has 8 real attributes, and
double out;            // the value in attribute 'class' is nominal
Instance[0][0]=6;
Instance[0][1]=148;
Instance[0][2]=72;
Instance[0][3]=35;
Instance[0][4]=0;
Instance[0][5]=33.6;
Instance[0][6]=0.62;
Instance[0][7]=50;
modell->predict(Instance[0],&out);
cout<<&out;
This code does not work correctly and does not print anything.
Please help me!
What do I need to do to predict the class of an instance and then print that class?
Does the 'predict' method perform well for classifying a single instance,
or is there a better method for this task?
thanks,
Be happy and win
I suspect the reason your code does not print anything is because you forgot the endl. (This is what Joachim Pileborg mentioned in his comment.) Note also that cout << &out prints the address of out, not the predicted value; you want cout << out.
If you are using Visual Studio, you may want to add a breakpoint at the end of your code (maybe on the return statement) because in certain modes it can close your application before you get to see the output, which can make it seem as if nothing happened.
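In other words, the minimal fix to the printing part of your snippet (it is the same thing the full example below does) is:
modell->predict(Instance[0], &out); // 'out' receives the predicted class
cout << out << endl;                // print the value, not its address, and flush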
Example
What follows is a full example that works fine for me. It includes your instance. It loads a K-nearest neighbors learner from 2blobs_knn.json and then evaluates your instance on it. You can replace that file name with the name of any trained supervised model generated by the waffles tools.
With the model I used, the program prints "1" and exits.
If you want to use the exact model that I tested my code with (in case your method of constructing your learner is the problem) see the section after the example code.
#include <GClasses/GMatrix.h>
#include <GClasses/GHolders.h>
#include <GClasses/GRand.h>
#include <GClasses/GLearner.h>
#include <GClasses/GDom.h>
#include <iostream>
#include <cassert>
using namespace GClasses;
using std::cout; using std::endl;
int main(int argc, char *argv[])
{
    // Load my trained learner from a file named 2blobs_knn.json and put
    // it in hModel, which is a shared-pointer class.
    GLearnerLoader ll(GRand::global());
    GDom dom;
    dom.loadJson("2blobs_knn.json");
    Holder<GSupervisedLearner> hModel(ll.loadSupervisedLearner(dom.root()));
    assert(hModel.get() != NULL);

    // Here is your code
    GMatrix Instance(1,8); // Instance has 8 real attributes and one row
    double out;            // The value in attribute 'class' is nominal
    Instance[0][0] = 6;
    Instance[0][1] = 148;
    Instance[0][2] = 72;
    Instance[0][3] = 35;
    Instance[0][4] = 0;
    Instance[0][5] = 33.6;
    Instance[0][6] = 0.62;
    Instance[0][7] = 50;
    hModel.get()->predict(Instance[0], &out);
    cout << out << endl;
    return 0;
}
How the learner I used in the example was constructed
To get the learner, I used Matlab (Octave is the free imitator) to generate a CSV file in which class 0 was an 8-dimensional spherical unit Gaussian centered at (0,0,0,0,0,0,0,0) and class 1 had the same distribution but centered at (2,2,2,2,2,2,2,2):
m=[[randn(200,8);randn(200,8)+2], [repmat(0,200,1);repmat(1,200,1)]];
csvwrite('2blobs.csv',m)
Then, I took that CSV, converted it to ARFF using
waffles_transform import 2blobs.csv > 2blobs.arff
Next, I changed the last attribute from @ATTRIBUTE attr8 real to
@ATTRIBUTE class {0,1} in a text editor so it would be nominal.
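For reference, the edited header region of 2blobs.arff then looks roughly like this (the attribute names are whatever waffles_transform generated; only the last attribute line is edited by hand):
@RELATION 2blobs
@ATTRIBUTE attr0 real
% ... attr1 through attr7, also real ...
@ATTRIBUTE class {0,1}
@DATA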
Finally, I trained the model with
waffles_learn train 2blobs.arff knn -neighbors 10 > 2blobs_knn.json
Full disclosure: I asked this same question on the PyTorch forums a few days ago and got no reply, so this is technically a repost, but I believe it's still a good question, because I've been unable to find an answer anywhere online. Here goes:
Can you show an example of using register_module with a custom module?
The only examples I’ve found online are registering linear layers or convolutional layers as the submodules.
I tried to write my own module and register it with another module and I couldn’t get it to work.
My IDE is telling me no instance of overloaded function "MyModel::register_module" matches the argument list -- argument types are: (const char [14], TreeEmbedding)
(TreeEmbedding is the name of another struct I made which extends torch::nn::Module.)
Am I missing something? An example of this would be very helpful.
Edit: Additional context follows below.
I have a header file "model.h" which contains the following:
struct TreeEmbedding : torch::nn::Module {
    TreeEmbedding();
    torch::Tensor forward(Graph tree);
};

struct MyModel : torch::nn::Module {
    size_t embeddingSize;
    TreeEmbedding treeEmbedding;

    MyModel(size_t embeddingSize=10);
    torch::Tensor forward(std::vector<Graph> clauses, std::vector<Graph> contexts);
};
I also have a cpp file "model.cpp" which contains the following:
MyModel::MyModel(size_t embeddingSize) :
    embeddingSize(embeddingSize)
{
    treeEmbedding = register_module("treeEmbedding", TreeEmbedding{});
}
This setup still produces the same error as above. The code in the documentation does work (using built-in components like linear layers), but using a custom module does not. After tracking down torch::nn::Linear, it looks as though it is a ModuleHolder (whatever that is...).
Thanks,
Jack
I will accept a better answer if anyone can provide more details, but just in case anyone's wondering, I thought I would put up the little information I was able to find:
register_module takes a string as its first argument, and its second argument can be either a ModuleHolder (I don't know what this is...) or a shared_ptr to your module. So here's my example:
treeEmbedding = register_module<TreeEmbedding>("treeEmbedding", make_shared<TreeEmbedding>());
This seemed to work for me so far.
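One caveat I ran into: because register_module returns a std::shared_ptr, the member variable itself has to be declared as a shared_ptr rather than a plain TreeEmbedding. Here is a minimal sketch of the whole pattern as I understand it (TreeEmbedding is reduced to a single linear layer here, purely for illustration):
#include <torch/torch.h>
#include <memory>

struct TreeEmbedding : torch::nn::Module {
    TreeEmbedding() {
        // Register a built-in submodule inside the custom module.
        fc = register_module("fc", torch::nn::Linear(10, 10));
    }
    torch::Tensor forward(torch::Tensor x) { return fc->forward(x); }
    torch::nn::Linear fc{nullptr};
};

struct MyModel : torch::nn::Module {
    MyModel() {
        // The member must be a shared_ptr so it can hold the pointer
        // that register_module returns.
        treeEmbedding = register_module("treeEmbedding",
                                        std::make_shared<TreeEmbedding>());
    }
    std::shared_ptr<TreeEmbedding> treeEmbedding;
};
Judging from how torch::nn::Linear is defined, the alternative would be to name the implementation TreeEmbeddingImpl and wrap it with the TORCH_MODULE(TreeEmbedding) macro, which generates the ModuleHolder, but I haven't tried that myself.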
I have a functioning tf.estimator pipeline built in TF 1, but now I have decided to move to TF 2.0, and I have problems at the end of my pipeline, when I want to save the model in the .pb format.
I'm using this high level estimator export_saved_model method:
https://www.tensorflow.org/api_docs/python/tf/estimator/BoostedTreesRegressor#export_saved_model
I have two numeric features, 'age' and 'time_spent'
They're defined using tf.feature_column as such:
age = tf.feature_column.numeric_column('age')
time_spent = tf.feature_column.numeric_column('time_spent')
features = [age,time_spent]
After the model has been trained, I turn the list of features into a dict using the method tf.feature_column.make_parse_example_spec() and feed it to another method, build_parsing_serving_input_receiver_fn(), exactly as outlined on TensorFlow's webpage, https://www.tensorflow.org/guide/saved_model, under estimators.
columns_dict = tf.feature_column.make_parse_example_spec(features)
input_receiver_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(columns_dict)
model.export_saved_model(export_dir,input_receiver_fn)
I then inspect the output using the CLI tool:
saved_model_cli show --dir mydir --all
This results in the following:
[screenshot of the saved_model_cli output: a single serving signature whose only input is a string tensor named "inputs"]
Somehow TensorFlow squashes my two useful numeric features into one useless string input called "inputs".
In TF 1 this could be circumvented by creating a custom input_receiver_fn() using tf.placeholder, and I'd get the correct output with two distinct numeric features. But tf.placeholder doesn't exist in TF 2, so that route is gone.
Sorry about the raging, but TensorFlow is horribly documented, and I'm working with high-level APIs here; this should just work out of the box, but no.
I'd really appreciate any help :)
TensorFlow squashes my two useful numeric features into one useless
string input called "inputs"
is not exactly true: the exported model expects a serialized tf.Example proto. So you can wrap your age and time_spent into two features, which will look like:
features {
  feature {
    key: "age"
    value {
      float_list {
        value: 10.2
      }
    }
  }
  feature {
    key: "time_spent"
    value {
      float_list {
        value: 40.3
      }
    }
  }
}
You can then call your regress function with the serialized string.
I'm looking for methods to export a STEP file into separate STL files, and to extract relevant information from each of the parts (such as position in the whole model, rotation [if any], colors, and material [if possible]), so I can use those parts in another program that needs STL inputs.
So far I have been trying to work with OpenCascade, but I'm a complete newbie there and haven't made proper progress. The code I've been working with so far is shown below (it's just a sample I found in the examples, but I don't really understand the outputs).
#include <iostream>
#include <STEPControl_Reader.hxx>
//#include <STEPCAFControl_Reader.hxx>
#include <Interface_Static.hxx>
#include <string>

int main(){
    STEPControl_Reader CAFReader;
    IFSelect_ReturnStatus Status = CAFReader.ReadFile("/home/User/Geometry/Module.step");
    Standard_Integer ic = Interface_Static::SetIVal("read.precision.mode", 1);
    Standard_Real rp = Interface_Static::SetRVal("read.precision.val", 0.0001);
    std::cout << ic << std::endl;
    std::cout << rp << std::endl;
    return 0;
}
What I would really need is to export separate .stl files (see my rough attempt after the list below) and, ideally, vector(s) containing (for each of the parts):
Position
Rotation
Color
Material
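For the STL part, I think the next steps would look something like this sketch (transfer the root shapes, mesh each one, and write it out with StlAPI_Writer), but I haven't verified it, and it does not recover any of the metadata above:
#include <STEPControl_Reader.hxx>
#include <TopoDS_Shape.hxx>
#include <BRepMesh_IncrementalMesh.hxx>
#include <StlAPI_Writer.hxx>
#include <string>

int main(){
    STEPControl_Reader reader;
    if (reader.ReadFile("/home/User/Geometry/Module.step") != IFSelect_RetDone)
        return 1;
    reader.TransferRoots(); // translate all root entities into shapes
    for (Standard_Integer i = 1; i <= reader.NbShapes(); ++i) {
        TopoDS_Shape shape = reader.Shape(i);
        BRepMesh_IncrementalMesh mesh(shape, 0.1); // triangulate, 0.1 deflection
        StlAPI_Writer writer;
        std::string name = "part_" + std::to_string(i) + ".stl";
        writer.Write(shape, name.c_str()); // one STL per root shape
    }
    return 0;
}
For position, rotation, color, and material I believe the XCAF layer is needed (STEPCAFControl_Reader together with the XCAFDoc_ShapeTool and XCAFDoc_ColorTool document tools) rather than the plain STEPControl_Reader, but I haven't gotten that far yet.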
Any inputs would be highly appreciated :)
Thank you in advance.
I'm trying to implement a random tree classifier using OpenCV. I succeeded in implementing it, and it is working.
Then I decided to separate the training part from the classification part.
The idea is to save the trained forest and load it back when you want to classify something.
I tried two different ways:
using the write and read methods of the superclass CvStatModel
using the save and load methods of the superclass CvStatModel
But the results are different from, and worse than, the older implementation that did not save the trees to a file.
The following code is the implementation of the 2nd option.
To store the classifiers:
for (unsigned i=0; i<scenes.size(); ++i) {
    char class_fname[50];
    char output[100];
    sprintf(class_fname,"class_%d.xml",i);
    sprintf(output,"class_%d",i);
    //classifiers[i]->save(class_fname,output);
    classifiers[i]->save(class_fname);
}
To load them back:
for (unsigned int i = 0; i<CLUSTERING_N_CENTERS; i++){
    char class_fname[50];
    char output[100];
    sprintf(class_fname,"class_%d.xml",i);
    sprintf(output,"class_%d",i);
    classifiers[i] = new CvRTrees();
    //classifiers[i]->load(class_fname,output);
    classifiers[i]->load(class_fname);
}
I'm using OpenCV 2.4.6.
Does anyone have suggestions on this code?
It was an error due to a file mistake!
So the persistence is working!
But I'll leave the post up as a sample in case someone needs to implement it!
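For anyone using this as a sample, here is a quick round-trip sanity check (the toy data and file name are made up for illustration); if persistence works, the two forests agree on every sample:
#include <opencv2/core/core.hpp>
#include <opencv2/ml/ml.hpp>
#include <cstdio>

int main(){
    // Toy data: 10 samples with 2 features each, and float responses.
    cv::Mat data(10, 2, CV_32F), responses(10, 1, CV_32F);
    cv::randu(data, cv::Scalar(0), cv::Scalar(1));
    for (int r = 0; r < 10; ++r)
        responses.at<float>(r, 0) = data.at<float>(r, 0) > 0.5f ? 1.f : 0.f;

    CvRTrees forest;
    forest.train(data, CV_ROW_SAMPLE, responses);
    forest.save("forest.xml");

    CvRTrees loaded;
    loaded.load("forest.xml");

    // Compare predictions of the original and the reloaded forest.
    for (int r = 0; r < 10; ++r)
        printf("%f %f\n", forest.predict(data.row(r)),
                          loaded.predict(data.row(r)));
    return 0;
}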
This is a weird question in that I'm not sure where to start looking.
First of all, I haven't done any C++ programming for the last 10 years, so it could be me that's forgotten a few things. Secondly, the IDE I'm using is Eclipse-based (which I've never used) and customized for Samsung bada mobile development (it kicks off an emulator for debugging purposes).
I'm posting my code samples as images because the StackOverflow WYSIWYG editor seems to have a problem parsing C++.
[EDIT] Due to complaints I've edited my question to remove the images. Hope that helps :)
I have the following header file...
#include <FApp.h>
#include <FBase.h>
#include <FGraphics.h>
#include <FSystem.h>
#include <FMedia.h>
using namespace Osp::Media;
using namespace Osp::Graphics;
class NineAcross :
    public Osp::App::Application,
    public Osp::System::IScreenEventListener
{
public:
    static Osp::App::Application* CreateInstance(void);

public:
    NineAcross();
    ~NineAcross();

public:
    bool OnAppInitializing(Osp::App::AppRegistry& appRegistry);

private:
    Image *_problematicDecoder;
};
...and the following cpp file...
#include "NineAcross.h"
using namespace Osp::App;
using namespace Osp::Base;
using namespace Osp::System;
using namespace Osp::Graphics;
using namespace Osp::Media;
NineAcross::NineAcross()
{
}

NineAcross::~NineAcross()
{
}

Application* NineAcross::CreateInstance(void)
{
    // Create the instance through the constructor.
    return new NineAcross();
}

bool NineAcross::OnAppInitializing(AppRegistry& appRegistry)
{
    Image *workingDecoder;
    workingDecoder->Construct();
    _problematicDecoder->Construct();
    return true;
}
Now, in my cpp file, if I comment out the line that reads _problematicDecoder->Construct();, I'm able to set a breakpoint and happily step over the call to Construct() on workingDecoder. However, as soon as I uncomment that line, I end up with the IDE telling me...
"No source available for "Osp::Media::Image::Construct()"
In other words, why can I NOT debug this code when I reference Image *image from a header file?
Any ideas?
Thanks :-)
This usually means you're stepping through code whose source you do not possess.
I assume that Osp::Media::Image is a class supplied by Samsung (or similar) for which you do not have the cpp file. This means the debugger can't show you the current code line while you're inside a function of Osp::Media::Image.
Alternatively, there's a good chance you do have all of the source code for this class, but Eclipse doesn't know where it is. In this case you can add the correct directories under the Debug Configurations window.
Ok, problem solved.
The idea is to first new up an instance of Image, like so...
_decoder = new Osp::Media::Image();
And then do _decoder->Construct().
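In context, the fixed initialization looks like this:
bool NineAcross::OnAppInitializing(AppRegistry& appRegistry)
{
    // Allocate the Image object before calling methods through the pointer.
    _problematicDecoder = new Osp::Media::Image();
    _problematicDecoder->Construct();
    return true;
}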
Funnily enough, this seems blatantly obvious to me now, coming from the C# world, though why the code I posted for workingDecoder appeared to work is still somewhat mysterious to me (my best guess is that calling Construct() through an uninitialized pointer is undefined behavior that just happened not to crash). The fact that the sample projects preloaded with the bada IDE don't seem to make a call to new() leads me to believe that perhaps those samples are outdated or out of sync.
Either that or I really AM wildly out of the C++ loop.
Anyway thanks so much for the effort guys.
Appreciated :)