Multilayer Perceptron: only one value is predicted for all records - Weka

I am using Weka for machine learning.
I would like to predict different behaviors using a multilayer perceptron. First I apply min-max normalization and shuffle the order of the data (Randomize).
I did this for the whole dataset in Weka (not programmed in Java; the code given here is only an example of how it would look for the training data).
Then I split the data: 60% training data, 20% cross-validation data, and 20% test data.
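For reference, a minimal sketch of how such a split could also be done in code rather than in the Weka GUI, using the Instances(Instances, int, int) copy constructor and assuming the data has already been randomized (the helper name and the rounding are my own):
    // Hypothetical helper: split an already-shuffled dataset into
    // 60% train / 20% validation / 20% test by position.
    static Instances[] split602020(Instances data) {
        int n = data.numInstances();
        int nTrain = (int) Math.round(n * 0.6);
        int nValid = (int) Math.round(n * 0.2);
        return new Instances[] {
            new Instances(data, 0, nTrain),                           // train
            new Instances(data, nTrain, nValid),                      // validation
            new Instances(data, nTrain + nValid, n - nTrain - nValid) // test
        };
    }
After that I create the multilayer perceptron model: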
// Imports and class declaration added for completeness.
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.FileWriter;
import java.util.Arrays;
import java.util.Random;

import weka.classifiers.Evaluation;
import weka.classifiers.functions.MultilayerPerceptron;
import weka.core.Instances;
import weka.filters.Filter;
import weka.filters.unsupervised.attribute.Normalize;
import weka.filters.unsupervised.instance.Randomize;

public class MlpPrediction {

    public static void main(String[] args) throws Exception {
        String filepath = "...Training60%.arff";
        FileReader trainreader = new FileReader(filepath);
        Instances train = new Instances(trainreader);
        train.setClassIndex(train.numAttributes() - 1);

        /**
         * Min-max normalization of the attributes in the training data
         * to values between 0 and 1.
         */
        Normalize normalize = new Normalize();
        normalize.setInputFormat(train);
        Instances normalizedData = Filter.useFilter(train, normalize);
        FileWriter fwriter1 = new FileWriter("...OutputJavaNormalize.arff");
        fwriter1.write(normalizedData.toString());
        fwriter1.close();
        System.out.println("Done");

        /**
         * Shuffles the order of the given instances (the normalized data)
         * at random.
         */
        Randomize randomize = new Randomize();
        randomize.setInputFormat(normalizedData);
        Instances randomizedData = Filter.useFilter(normalizedData, randomize);
        FileWriter fwriter2 = new FileWriter("...OutputJavaRandomize.arff");
        fwriter2.write(randomizedData.toString());
        fwriter2.close();
        System.out.println("Done");
Then I create the multilayer perceptron model and do the cross-validation:
        /**
         * MultilayerPerceptron model
         */
        MultilayerPerceptron mlp = new MultilayerPerceptron();
        // Set the training parameters
        mlp.setLearningRate(0.1);
        mlp.setMomentum(0.2);
        mlp.setTrainingTime(2000);
        mlp.setSeed(1);
        mlp.setValidationThreshold(20);
        mlp.setHiddenLayers("9");
        mlp.buildClassifier(randomizedData);
        weka.core.SerializationHelper.write(".../MLPa753", mlp);
        System.out.println("Model created");

        Instances datapredict = new Instances(new BufferedReader(new FileReader(
                "...CrossValid_20%.arff")));
        datapredict.setClassIndex(datapredict.numAttributes() - 1);
        Evaluation eval = new Evaluation(randomizedData);
        eval.crossValidateModel(mlp, datapredict, 5, new Random(1));
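Note that crossValidateModel builds fresh copies of the classifier on folds of the dataset you pass it; it does not evaluate the mlp trained above. A minimal sketch of evaluating the already-trained model on the held-out 20% set instead (the holdout name is my own):
        // Sketch: evaluate the trained model directly on the validation set,
        // since crossValidateModel retrains the classifier on each fold.
        Evaluation holdout = new Evaluation(randomizedData);
        holdout.evaluateModel(mlp, datapredict);
        System.out.println(holdout.toSummaryString());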
After that I load the test data, predict the class value and probability for each record, and save the results.
        // Evaluation/prediction of unlabeled data (20% of the whole dataset)
        Instances datapredict1 = new Instances(new BufferedReader(new FileReader(
                "D:...TestSet_20%.arff")));
        datapredict1.setClassIndex(datapredict1.numAttributes() - 1);
        Instances predicteddata1 = new Instances(datapredict1);
        FileWriter fwriter11 = new FileWriter(".../output.arff");
        for (int i1 = 0; i1 < datapredict1.numInstances(); i1++) {
            double clsLabel1 = mlp.classifyInstance(datapredict1.instance(i1));
            predicteddata1.instance(i1).setClassValue(clsLabel1);
            // Write the test instance together with its predicted label.
            String s = datapredict1.instance(i1) + "," + clsLabel1;
            fwriter11.write(s + "\n");
            System.out.println(s);
        }
        fwriter11.close();
        System.out.println(eval.toClassDetailsString());
        System.out.println(eval.toMatrixString());
        System.out.println(eval.toSummaryString()); // summary of the cross-validation
        System.out.println(Arrays.toString(mlp.getOptions()));
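Since the question also asks about the predicted probability: classifyInstance only returns the label. A minimal sketch (my own addition) of also printing the class distribution for each test instance, which helps to check whether the model is simply defaulting to the majority class:
        // Sketch: distributionForInstance returns one probability per class value.
        for (int i = 0; i < datapredict1.numInstances(); i++) {
            double[] dist = mlp.distributionForInstance(datapredict1.instance(i));
            System.out.println(datapredict1.instance(i) + " -> " + Arrays.toString(dist));
        }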
    }
}
When I look at the confusion matrix [screenshot omitted], the model looks quite OK. The evaluation overview [screenshot omitted] looks OK too.
But in the output file where the predictions are stored, "Value1" is always predicted for all records. What is the reason for this? How can I change this?

Related

How to use IsolationForest in Weka?

I am trying to use IsolationForest in Weka, but I cannot find an easy example that shows how to use it. Who can help me? Thanks in advance.
import weka.classifiers.misc.IsolationForest;

public class Test2 {
    public static void main(String[] args) {
        IsolationForest isolationForest = new IsolationForest();
        .....................................................
    }
}
I strongly suggest you study the implementation of IsolationForest a little.
The following code works by loading a CSV file whose first column is the class. (Note: a single class value will produce only (1 - anomaly score); if the class is binary you will get the anomaly score too; otherwise it just returns an error.) Note that I skip the second column (which in my case is a UUID that is not needed for anomaly detection).
    private static void findOutlier(File in, File out) throws Exception {
        CSVLoader loader = new CSVLoader();
        loader.setSource(new File(in.getAbsolutePath()));
        Instances data = loader.getDataSet();

        // Set the class attribute if the data format does not provide this
        // information (the XRFF format, for example, saves it as well).
        if (data.classIndex() == -1)
            data.setClassIndex(0);

        // Remove the second attribute (in my case a UUID that is not
        // needed for anomaly detection).
        String[] options = new String[2];
        options[0] = "-R"; // "range"
        options[1] = "2";  // second attribute
        Remove remove = new Remove();  // new instance of filter
        remove.setOptions(options);    // set options
        remove.setInputFormat(data);   // inform filter about dataset **AFTER** setting options
        Instances newData = Filter.useFilter(data, remove); // apply filter

        IsolationForest isolationForest = new IsolationForest();
        isolationForest.buildClassifier(newData);
        // System.out.println(isolationForest);

        FileWriter fw = new FileWriter(out);
        // Write the header: all non-class attribute names, then the score columns.
        final Enumeration<Attribute> attributeEnumeration = data.enumerateAttributes();
        while (attributeEnumeration.hasMoreElements()) {
            fw.write(attributeEnumeration.nextElement().name());
            fw.write(",");
        }
        fw.write("(1 - anomaly score),anomaly score\n");

        for (int i = 0; i < data.size(); ++i) {
            // Score the filtered instance (the model was built on newData),
            // but write out the original row alongside its scores.
            final double[] distributionForInstance = isolationForest.distributionForInstance(newData.get(i));
            fw.write(data.get(i) + "," + distributionForInstance[0] + "," + (1 - distributionForInstance[0]));
            fw.write("\n");
        }
        fw.flush();
        fw.close();
    }
The previous function will append the anomaly values as the last columns of the CSV. Please note I'm using a single class, so to get the corresponding anomaly score I compute 1 - distributionForInstance[0]; otherwise you can simply use distributionForInstance[1].
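In other words, a small sketch of that distinction (continuing the function above, assuming a nominal class attribute):
            // How the anomaly score is read off the distribution, depending on
            // whether the class attribute has one value or two.
            double[] dist = isolationForest.distributionForInstance(newData.get(i));
            double anomalyScore;
            if (newData.classAttribute().numValues() == 1) {
                anomalyScore = 1 - dist[0]; // single class value: dist[0] is (1 - anomaly score)
            } else {
                anomalyScore = dist[1];     // binary class: dist[1] is the anomaly score directly
            }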
A sample input.csv for getting (1-anomaly score):
Class,ignore, feature_0, feature_1, feature_2
A,1,21,31,31
A,2,41,61,81
A,3,61,37,34
A sample input.csv for getting (1-anomaly score) and anomaly score:
Class,ignore, feature_0, feature_1, feature_2
A,1,21,31,31
B,2,41,61,81
A,3,61,37,34
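And a hypothetical call, just to show how the helper above would be used (the file names are placeholders):
    public static void main(String[] args) throws Exception {
        findOutlier(new File("input.csv"), new File("scored.csv"));
    }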

TensorFlow Lite model always gives the same output no matter the input

My goal is to run a Keras model I have made on my ESP32 microcontroller. I have all the libraries working correctly.
I have created a Keras model in Google Colab that appears to work fine when I give it random test data within Colab. The model has two input features and four different outputs (a multiple-output regression model).
However, when I export the model and load it into my C++ application on the ESP32, it does not matter what the inputs are; it always predicts the same output.
I have based my code on this example for loading and running the model in C++: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/magic_wand/main_functions.cc
And this is my version of the code:
namespace {
tflite::ErrorReporter* error_reporter = nullptr;
const tflite::Model* model = nullptr;
tflite::MicroInterpreter* interpreter = nullptr;
TfLiteTensor* input = nullptr;
TfLiteTensor* output = nullptr;
int inference_count = 0;

// Create an area of memory to use for input, output, and intermediate arrays.
// Finding the minimum value for your model may require some trial and error.
constexpr int kTensorArenaSize = 2 * 2048;
uint8_t tensor_arena[kTensorArenaSize];
}  // namespace

static void setup() {
  static tflite::MicroErrorReporter micro_error_reporter;
  error_reporter = &micro_error_reporter;

  model = tflite::GetModel(venti_model);
  if (model->version() != TFLITE_SCHEMA_VERSION) {
    error_reporter->Report(
        "Model provided is schema version %d not equal "
        "to supported version %d.",
        model->version(), TFLITE_SCHEMA_VERSION);
    return;
  }

  // This pulls in all the operation implementations we need.
  // NOLINTNEXTLINE(runtime-global-variables)
  static tflite::ops::micro::AllOpsResolver resolver;

  // Build an interpreter to run the model with.
  static tflite::MicroInterpreter static_interpreter(
      model, resolver, tensor_arena, kTensorArenaSize, error_reporter);
  interpreter = &static_interpreter;

  // Allocate memory from the tensor_arena for the model's tensors.
  TfLiteStatus allocate_status = interpreter->AllocateTensors();
  if (allocate_status != kTfLiteOk) {
    error_reporter->Report("AllocateTensors() failed");
    return;
  }

  // Obtain pointers to the model's input and output tensors.
  input = interpreter->input(0);
  ESP_LOGI("TENSOR SETUP", "input size = %d", input->dims->size);
  ESP_LOGI("TENSOR SETUP", "input size in bytes = %d", input->bytes);
  ESP_LOGI("TENSOR SETUP", "Is input float32? = %s", (input->type == kTfLiteFloat32) ? "true" : "false");
  ESP_LOGI("TENSOR SETUP", "Input data dimensions = %d", input->dims->data[1]);

  output = interpreter->output(0);
  ESP_LOGI("TENSOR SETUP", "output size = %d", output->dims->size);
  ESP_LOGI("TENSOR SETUP", "output size in bytes = %d", output->bytes);
  ESP_LOGI("TENSOR SETUP", "Is output float32? = %s", (output->type == kTfLiteFloat32) ? "true" : "false");
  ESP_LOGI("TENSOR SETUP", "Output data dimensions = %d", output->dims->data[1]);
}
// Must start false so that setup() runs on the first call of the task.
static bool setupDone = false;

static void the_ai_algorithm_task() {
  /* On the first run of the task, initialize the AI model. */
  if (setupDone == false) {
    setup();
    setupDone = true;
  }

  /* Load the input data, i.e. deltaT1 and deltaT2. */
  // int i = 0;
  input->data.f[0] = 2.0; /* Different values don't change the output. */
  input->data.f[1] = 3.2;

  // Run inference, and report any error.
  TfLiteStatus invoke_status = interpreter->Invoke();
  if (invoke_status != kTfLiteOk) {
    error_reporter->Report("Invoke failed");
    // return;
  }

  /* Retrieve the outputs: fan, AC, vent 1, vent 2. */
  double fan = output->data.f[0];
  double ac = output->data.f[1];
  double vent1 = output->data.f[2];
  double vent2 = output->data.f[3];

  ESP_LOGI("TENSOR SETUP", "fan = %lf", fan);
  ESP_LOGI("TENSOR SETUP", "ac = %lf", ac);
  ESP_LOGI("TENSOR SETUP", "vent1 = %lf", vent1);
  ESP_LOGI("TENSOR SETUP", "vent2 = %lf", vent2);
}
The model seems to load OK, as the dimensions and sizes are correct. But the output is always the same four values:
fan = 0.0087
ac = 0.54
vent1 = 0.73
vent2 = 0.32
Any idea what could be going wrong? Is it something about my model, or am I just not using the model correctly in my C++ application?
Could you refer to the "Test the model" section here - https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/hello_world/train/train_hello_world_model.ipynb#scrollTo=f86dWOyZKmN9 and verify if the TFLite model is producing the correct results?
You can find the issue by testing 1) the TF model (which you have done already), 2) the TFLite model, and then 3) the TFLite Micro model (the C source file).
You also need to verify that the inputs passed to the model are of the same type and distribution. For example, if your TF model was trained on images in the range 0-255, then you need to pass the same to the TFLite and TFLite Micro models. If, instead, you trained the model using preprocessed data (0-255 normalized to 0-1 during training), then you need to do the same and preprocess the data for the TFLite and TFLite Micro models.
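To illustrate the preprocessing point, here is a small language-agnostic sketch of the idea (shown in Java like the other examples in this thread; the min/max constants are hypothetical values that would have been saved from training): whatever scaling was fitted on the training set must be replayed, with the same constants, on every input at inference time.
    // Sketch: reuse the training-set min/max when scaling inference inputs.
    // TRAIN_MIN / TRAIN_MAX are hypothetical constants saved from training.
    static final float[] TRAIN_MIN = { 0.0f, -5.0f };
    static final float[] TRAIN_MAX = { 255.0f, 5.0f };

    static float[] preprocess(float[] raw) {
        float[] scaled = new float[raw.length];
        for (int i = 0; i < raw.length; i++) {
            scaled[i] = (raw[i] - TRAIN_MIN[i]) / (TRAIN_MAX[i] - TRAIN_MIN[i]);
        }
        return scaled; // feed these values into the model's input tensor
    }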
I have found the issue and the answer.
It was not the C++ code, it was the model. Originally, I made my model with three hidden layers of 64, 20, and 8 units (I am new to ML, so I was only playing with random values), and it was giving me the issue.
To solve it I just changed the hidden layers to 32, 16, and 8, and the C++ code output the right values.

A C++ program that loads over 1 billion records into memory?

I have a whole year's worth of option prices in a database (along with other bits of information such as strike price, volatility, etc.) and would like to load it into memory for my program to process (hash maps; the program is written in C++). The trouble is that it takes over 10 hours to load into memory at startup (from a local database).
Can someone please suggest how I can overcome this issue? I have workarounds where I only load the portions that I need, but it would be great if people could share ideas for the problem. Would loading the data from shared memory at each startup help?
Here is the code so far; in short, it queries the database and loads the data into containers:
const std::string m_url = "mysql://localhost/test2?user=root&password=";
URL_T optionURL = URL_new(m_url.c_str());
ConnectionPool_T optionPool = ConnectionPool_new(optionURL);
ConnectionPool_setInitialConnections(optionPool, 1);
ConnectionPool_setMaxConnections(optionPool, 1);
ConnectionPool_start(optionPool);
Connection_T con = ConnectionPool_getConnection(optionPool);

// One query for the list of underlyings, then one prepared query per
// underlying for its option details.
ResultSet_T result = Connection_executeQuery(con,
    "SELECT DISTINCT(underlying) FROM Options");
PreparedStatement_T prepareStatement = Connection_prepareStatement(con,
    "SELECT underlying,underlyingPrice,expiry,strike,issueDate,type,delta,volatility "
    "FROM Options o,RiskFreeRate r WHERE underlying = ? AND o.issueDate = r.date");

while (ResultSet_next(result))
{
    const std::string symbol = ResultSet_getString(result, 1);
    PreparedStatement_setString(prepareStatement, 1, symbol.c_str());
    ResultSet_T resultDetail = PreparedStatement_executeQuery(prepareStatement);
    while (ResultSet_next(resultDetail))
    {
        float strike = ResultSet_getDouble(resultDetail, 4);
        date expiry = from_string(ResultSet_getString(resultDetail, 3));
        std::string issueDate = ResultSet_getString(resultDetail, 5);
        float underlyingPrice = ResultSet_getDouble(resultDetail, 2);
        float riskFreeRate = 4; // tmp hack
        float volatility = ResultSet_getDouble(resultDetail, 8);

        // Note: these two lookups copy the map and vector by value and
        // write them back below; taking references would avoid the copies.
        OptionDateMap optionMap = m_optionMap[symbol];
        OptionVec optionVec = optionMap[issueDate];
        optionVec.push_back(boost::shared_ptr<WallStreet::FixedIncome::Option::Option>(
            new WallStreet::FixedIncome::Option::Option(
                strike, expiry, underlyingPrice, riskFreeRate, volatility)));
        optionMap[issueDate] = optionVec;
        m_optionMap[symbol] = optionMap;
    }
}

Use pre-computed model for text classification in Weka

I have a task of sentiment analysis. I have tweets (labelled as negative or positive) as training data. I created a model out of them using StringToWordVector and NaiveBayesMultinomial.
code:
try {
    TextDirectoryLoader loader = new TextDirectoryLoader();
    loader.setDirectory(new File("./train/"));
    Instances dataRaw = loader.getDataSet();
    System.out.println(loader.getStructure());

    StringToWordVector filter = new StringToWordVector();
    filter.setInputFormat(dataRaw);
    Instances dataFiltered = Filter.useFilter(dataRaw, filter);
    System.out.println("\n\nFiltered data:\n\n" + dataFiltered);

    // Train the multinomial naive Bayes classifier and output the model.
    NaiveBayesMultinomial classifier = new NaiveBayesMultinomial();
    classifier.buildClassifier(dataFiltered);
    //System.out.println("\n\nClassifier model:\n\n" + classifier);

    // Save the model.
    weka.core.SerializationHelper.write("./model/naviebayesmodel/", classifier);
} catch (Exception ex) {
    ex.printStackTrace();
}
Now I want to test this model on new tweets. I am unable to work out the testing part of the classifier. I tried the following code, but no instances are captured. How can I use the existing model to test new tweets?
Code:
try {
    Classifier cls = (Classifier) weka.core.SerializationHelper.read("./model/naviebayesmodel");
    //Instances ins = (Instances) weka.core.SerializationHelper.read("./model/naviebayesmodel");
    //System.out.println(ins);
    TextDirectoryLoader loader = new TextDirectoryLoader();
    loader.setDirectory(new File("./test/-1/"));
    Instances dataRaw = loader.getDataSet();
    //String data = "hello, I am your test case. This is a great clasifier :) !!";
    StringToWordVector filter = new StringToWordVector();
    filter.setInputFormat(dataRaw);
    //Instances unlabeled = new Instances(new BufferedReader(new FileReader("./test/test.txt")));
    Instances dataFiltered = Filter.useFilter(dataRaw, filter);
    dataRaw.setClassIndex(dataRaw.numAttributes() - 1);
    //Instances dataFiltered = Filter.useFilter(unlabeled, filter);
    for (int i = 0; i < dataRaw.numInstances(); i++) {
        double clsLabel = cls.classifyInstance(dataRaw.instance(i));
        System.out.println(clsLabel);
    }
    //System.out.println(dataRaw.numInstances());
} catch (Exception ex) {
    ex.printStackTrace();
}
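For what it's worth, a minimal sketch of one standard way to keep train and test data compatible: wrap the filter and the classifier in a weka.classifiers.meta.FilteredClassifier, so the word vector built on the training tweets is reused for every test tweet. The directory names follow the question; the test directory is assumed to contain the same class subdirectories as the training one.
import java.io.File;
import weka.classifiers.bayes.NaiveBayesMultinomial;
import weka.classifiers.meta.FilteredClassifier;
import weka.core.Instances;
import weka.core.converters.TextDirectoryLoader;
import weka.filters.unsupervised.attribute.StringToWordVector;

public class TestModelSketch {
    public static void main(String[] args) throws Exception {
        // Load the raw training tweets; the class is the subdirectory name.
        TextDirectoryLoader trainLoader = new TextDirectoryLoader();
        trainLoader.setDirectory(new File("./train/"));
        Instances trainRaw = trainLoader.getDataSet();
        trainRaw.setClassIndex(trainRaw.numAttributes() - 1);

        // The FilteredClassifier trains StringToWordVector and the classifier
        // together, and applies the same dictionary at prediction time.
        FilteredClassifier fc = new FilteredClassifier();
        fc.setFilter(new StringToWordVector());
        fc.setClassifier(new NaiveBayesMultinomial());
        fc.buildClassifier(trainRaw);

        // Classify raw (unfiltered) test tweets directly.
        TextDirectoryLoader testLoader = new TextDirectoryLoader();
        testLoader.setDirectory(new File("./test/"));
        Instances testRaw = testLoader.getDataSet();
        testRaw.setClassIndex(testRaw.numAttributes() - 1);

        for (int i = 0; i < testRaw.numInstances(); i++) {
            System.out.println(fc.classifyInstance(testRaw.instance(i)));
        }
    }
}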

Skip feature when classifying, but show feature in output

I've created a dataset which contains +/- 13000 rows with +/- 50 features. I know how to output every classification result: prediction and actual, but I would like to be able to output some sort of ID with those results. So i've added a ID column to my dataset but I don't know how disregard the ID when classifying while still being able to output the ID with every prediction result. I do know how to select features to output with every prediction.
Use FilteredClassifier. See this and this.
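A minimal sketch of that idea (file name, attribute position, and the choice of J48 are placeholders; a numeric ID in the first column is assumed): the Remove filter inside the FilteredClassifier hides the ID from the learner, while the unfiltered instances keep the ID available for output.
import java.io.BufferedReader;
import java.io.FileReader;
import weka.classifiers.meta.FilteredClassifier;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.filters.unsupervised.attribute.Remove;

public class IdAwarePredictions {
    public static void main(String[] args) throws Exception {
        Instances data = new Instances(new BufferedReader(new FileReader("data.arff")));
        data.setClassIndex(data.numAttributes() - 1);

        // Hide attribute 1 (the ID) from the learner only.
        Remove ignoreId = new Remove();
        ignoreId.setAttributeIndices("1"); // 1-based index of the ID attribute

        FilteredClassifier fc = new FilteredClassifier();
        fc.setFilter(ignoreId);
        fc.setClassifier(new J48());
        fc.buildClassifier(data);

        for (int i = 0; i < data.numInstances(); i++) {
            double pred = fc.classifyInstance(data.instance(i));
            // The unfiltered instance still carries the ID in column 0.
            System.out.println(data.instance(i).value(0) + " -> "
                    + data.classAttribute().value((int) pred));
        }
    }
}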
Let's say the following are the attributes in bbcsport.arff that you want to remove, listed line by line in a file attributes.txt:
serena
serve
service
sets
striking
tennis
tiebreak
tournaments
wimbledon
..
Here is how you may include or exclude the attributes by passing true or false (mutually exclusive) to remove.setInvertSelection(...):
BufferedReader datafile = new BufferedReader(new FileReader("bbcsport.arff"));
BufferedReader attrfile = new BufferedReader(new FileReader("attributes.txt"));
Instances data = new Instances(datafile);

// Collect the indices of the attributes whose names are listed in attributes.txt.
List<Integer> myList = new ArrayList<Integer>();
String line;
while ((line = attrfile.readLine()) != null) {
    for (int n = 0; n < data.numAttributes(); n++) {
        if (data.attribute(n).name().equalsIgnoreCase(line)) {
            if (!myList.contains(n))
                myList.add(n);
        }
    }
}
int[] attrs = myList.stream().mapToInt(i -> i).toArray();

Remove remove = new Remove();
remove.setAttributeIndicesArray(attrs);
remove.setInvertSelection(false); // false = remove these attributes; true = keep only these
remove.setInputFormat(data);      // init filter
Instances filtered = Filter.useFilter(data, remove);
'filtered' now has the final attributes.
My blog: http://ojaslabs.com/include-exclude-attributes-in-weka