Weka Model to Prediction

I have generated source code for two different models from the same training set using a classifier in Weka.
Is there any way I can combine these two models generated from the same Weka classifier?
Any suggestions or solutions would be of great help.
Thank you.

There are multiple ways to combine two or more classifiers into a single classifier. Weka offers schemes such as Vote and Stacking (in the weka.classifiers.meta package). Boosting and bagging schemes (such as AdaBoost) are technically also ways of combining multiple classifiers into one, but they typically revolve around a single type of base classifier and help you train it more intensively.
When using Vote you include multiple classifiers, each of which assigns a class to an instance; Vote itself then assigns the class that was predicted most often. Stacking instead trains a meta-classifier on the outputs of the included classifiers; in effect, it learns how to interpret multiple classifications.
You can set this up in the Weka Explorer (Classify tab) by selecting your pre-built models under preBuiltClassifiers, or in your code with setPreBuiltClassifiers.
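Weka itself is Java, but to make it concrete, here is a minimal sketch of a Vote setup using the python-weka-wrapper3 package (assuming it and a JVM are installed; the ARFF file name and the two base classifiers are placeholders you would swap for your own):

# Sketch: combining two classifiers with Weka's Vote meta-scheme via
# python-weka-wrapper3. "train.arff" and the base classifiers are placeholders.
import weka.core.jvm as jvm
from weka.core.converters import Loader
from weka.classifiers import Classifier

jvm.start()

loader = Loader(classname="weka.core.converters.ArffLoader")
data = loader.load_file("train.arff")
data.class_is_last()  # last attribute is the class

# Vote over two base classifiers; each -B option adds one base scheme.
vote = Classifier(
    classname="weka.classifiers.meta.Vote",
    options=["-B", "weka.classifiers.trees.J48",
             "-B", "weka.classifiers.bayes.NaiveBayes"])
vote.build_classifier(data)

print(vote)
jvm.stop()

If you already have serialized models on disk, the preBuiltClassifiers option mentioned above lets Vote load those instead of training fresh base classifiers.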

Related

Getting low accuracy on two fields after labelling with the Form Recognizer Custom Label tool

I need help with recognition of two particular fields: credit date and credit type. I am getting low accuracy after labelling (training ~30%) and even lower on the test set (~10%).
I am using the Custom Label API after labelling, tagging and training.
I think this is because these two fields appear at different places relative to the other fields, due to the varying number of entries in different receipts.
Is there anything I can do to improve the accuracy of these fields?
The Cognitive Services Form Recognizer service has added support for new features: multiple form models (model compose), language expansion, a pre-built business cards model, selection marks and more are now available in the Form Recognizer v2.1 release.
The Form Recognizer sample labeling tool has been updated to support the new release functionality; see the quickstart for getting started with custom training with labels.
Please find the snapshot of the JSON output for the image you are trying.
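If the model-compose feature fits your scenario, a rough sketch with the azure-ai-formrecognizer Python SDK could look like the following (endpoint, key and model IDs are placeholders, and I am assuming SDK version 3.1+, where composed models are exposed through begin_create_composed_model):

# Sketch: composing two custom models trained with labels into one model.
# Endpoint, key and model IDs are placeholders; verify against the SDK docs.
from azure.ai.formrecognizer import FormTrainingClient
from azure.core.credentials import AzureKeyCredential

client = FormTrainingClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"))

# Both models must have been trained with labels.
poller = client.begin_create_composed_model(
    model_ids=["<model-id-1>", "<model-id-2>"],
    model_name="receipts-composed")
composed_model = poller.result()
print(composed_model.model_id)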

Can I include Pyomo.dae into a larger network problem

Basically, I want to optimize a process that includes a heater, a flash unit and a PFR. I've defined constraints for all the units, but don't quite understand how to incorporate the PFR dae model into the overall process model and solve for them together.
I would appreciate some explanation of the architecture of pyomo and how it can combine these models.
Thank you.
If you have the models for those units, formulate the mathematical model first: define the decision variables and constraints for each unit, then build your Pyomo model from them.
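To make the architecture point concrete: the pyomo.dae components (ContinuousSet, DerivativeVar) can sit on the same ConcreteModel as the algebraic heater and flash constraints, and after applying a discretization transformation the whole flowsheet is solved as one NLP. A minimal sketch with placeholder physics (the heater balance, rate expression and objective are made up, and Ipopt is assumed to be installed):

# Sketch: embedding a pyomo.dae PFR model in a larger flowsheet model.
# The heater/flash parts and the rate expression are placeholders; the point
# is that dae components coexist with ordinary constraints in one model.
from pyomo.environ import (ConcreteModel, Var, Constraint, Objective,
                           TransformationFactory, SolverFactory,
                           NonNegativeReals, minimize)
from pyomo.dae import ContinuousSet, DerivativeVar

m = ConcreteModel()

# --- heater (algebraic placeholder) ---
m.T_feed = Var(within=NonNegativeReals, initialize=300.0)
m.T_heater_out = Var(within=NonNegativeReals, initialize=350.0)
m.Q_heater = Var(within=NonNegativeReals, initialize=0.0)
m.heater_balance = Constraint(expr=m.T_heater_out == m.T_feed + 0.1 * m.Q_heater)

# --- PFR as a pyomo.dae model over (dimensionless) reactor length ---
m.z = ContinuousSet(bounds=(0, 1))
m.c = Var(m.z, within=NonNegativeReals)     # concentration profile
m.dcdz = DerivativeVar(m.c, wrt=m.z)

def _rate(m, z):
    # first-order consumption; rate tied to heater outlet T (placeholder)
    return m.dcdz[z] == -0.05 * m.T_heater_out / 300.0 * m.c[z]
m.pfr_ode = Constraint(m.z, rule=_rate)

m.inlet = Constraint(expr=m.c[0] == 1.0)    # PFR inlet linked to upstream units

# --- discretize the dae part, then solve the whole flowsheet at once ---
TransformationFactory('dae.finite_difference').apply_to(m, nfe=20, scheme='BACKWARD')

m.obj = Objective(expr=m.Q_heater + 100.0 * m.c[1], sense=minimize)
SolverFactory('ipopt').solve(m, tee=True)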

How to train ML .Net model in runtime

Is there any way to train an ML.NET model at runtime from user input?
I've created a text classification model, trained it locally, deployed it, and now my users are using it.
Needed workflow:
Text is categorized and the category is displayed to the user; the user can accept it or select another of the predefined categories, and this feedback should then be used to train the model again.
Thanks!
What you are describing seems like online learning.
ML.NET doesn't have any true 'online' models (by which I mean, models that can adapt to new data example by example and instantaneously refresh): all ML.NET algorithms are 'batch' trainers, that require a (typically large) corpus of training data to produce a model.
If your situation allows, you could aggregate the users' responses as 'additional training data', and re-train the model periodically using this data (in addition to the older data, possibly down-sampled or otherwise decayed).
As @Jon pointed out, a slight modification of the above mechanism is to 'incrementally train an existing model on a new batch of data'. This is still a batch method, but it can reduce the retraining time.
Of ML.NET's multiclass trainers, only LbfgsMaximumEntropyMulticlassTrainer supports this mode (see documentation).
It might be tempting to take this approach to the limit and 'retrain' the model on each 'batch' of one example. Unless you really, really know what you are doing, I would advise against it: more likely than not, such a training regime will overfit rapidly and disastrously.
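ML.NET itself is C#, so purely to illustrate the shape of that aggregate-and-retrain loop, here is a sketch in Python using scikit-learn as a stand-in (the file names and the feedback store are hypothetical; the ML.NET equivalent would be a scheduled job that re-runs your training pipeline on the combined data):

# Sketch of the "aggregate feedback, retrain periodically" pattern only.
# scikit-learn stands in for the ML.NET pipeline; files are hypothetical.
import pandas as pd
import joblib
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def retrain(original_csv="train.csv", feedback_csv="user_feedback.csv",
            model_path="text_classifier.joblib"):
    """Re-fit the classifier on the original data plus accumulated user feedback."""
    original = pd.read_csv(original_csv)    # columns: text, category
    feedback = pd.read_csv(feedback_csv)    # user-confirmed labels collected at runtime
    data = pd.concat([original, feedback], ignore_index=True)

    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(data["text"], data["category"])
    joblib.dump(model, model_path)          # swap the new model in for serving
    return model

# Run this on a schedule (e.g. nightly), not on every single user response.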

add new vocabulary to existing Doc2vec model

I already have a Doc2Vec model, trained with my training data.
Now, after a while, I want to use Doc2Vec on my test data. I want to add my test data's vocabulary to my existing model's vocabulary. How can I do this?
I mean, how can I update my vocabulary?
Here is my model:
from gensim.models.doc2vec import Doc2Vec
model = Doc2Vec.load('my_model.Doc2vec')
Words that weren't present for training mean nothing to Doc2Vec, so quite commonly, they're just ignored when encountered in later texts.
It would only make sense to add new words to a model if you could also do more training, including those new words, to somehow integrate them with the existing model.
But, while such continued incremental training is theoretically possible, it also requires a lot of murky choices of how much training should be done, at what alpha learning rates, and to what extent older examples should also be retrained to maintain model consistency. There's little published work suggesting working rules-of-thumb, and doing it blindly could just as likely worsen the model's performance as improve it.
(Also, while the parent class for Doc2Vec, Word2Vec, offers an experimental update=True option on its build_vocab() step for later vocabulary-expansion, it wasn't designed or tested with Doc2Vec in mind, and there's an open issue where trying to use it causes memory-fault crashes: https://github.com/RaRe-Technologies/gensim/issues/1019.)
Note that since Doc2Vec is an unsupervised method for creating features from text, if your ultimate task is using Doc2Vec features for classification, it can sometimes be sensible to include your 'test' texts (without class labeling) in the Doc2Vec training set, so that it learns their words and the (unsupervised) relations to other words. The separate supervised classifier would then only be trained on non-test items, and their known labels.
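In code, the two practical options look roughly like this with gensim (the file name is yours; the corpus variables are placeholders):

# Sketch: the two usual options for handling test texts with gensim Doc2Vec.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Option 1: keep the existing model and just infer vectors for new texts.
# Words that were not in the training vocabulary are silently ignored.
model = Doc2Vec.load('my_model.Doc2vec')
new_doc_words = ['tokens', 'of', 'an', 'unseen', 'document']
vector = model.infer_vector(new_doc_words)

# Option 2: if the task allows, include the (unlabeled) test texts in the
# Doc2Vec training corpus from the start, so their words are learned.
all_texts = [['some', 'tokens'], ['more', 'tokens']]  # placeholder: train + test token lists
train_corpus = [TaggedDocument(words=doc, tags=[i])
                for i, doc in enumerate(all_texts)]
model2 = Doc2Vec(vector_size=50, min_count=1, epochs=20)
model2.build_vocab(train_corpus)
model2.train(train_corpus, total_examples=model2.corpus_count, epochs=model2.epochs)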

Object detection using python

I am working on my college project, for which I have to recognize different hand gestures. Can anyone tell me how I can learn this kind of image recognition quickly using Python?
If you're using Python, I think you would be better off using TensorFlow.
Check https://github.com/tensorflow/models/tree/master/object_detection.
The instructions are easy to follow, and they provide a convenient script for retraining a detection model.
If you need to train a model on custom data, you have to prepare an image dataset annotated with bounding boxes.
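If you just want to see a pre-trained detector run before committing to the retraining workflow, something along these lines works with a TF2 detection model from TensorFlow Hub (the model handle, image path and output keys are assumptions you should check against the model's page):

# Sketch: running a pre-trained TF2 object detector from TensorFlow Hub.
# The model handle, image path and output keys are assumptions to verify.
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
from PIL import Image

detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")

image = np.array(Image.open("hand.jpg").convert("RGB"))
input_tensor = tf.convert_to_tensor(image, dtype=tf.uint8)[tf.newaxis, ...]

result = detector(input_tensor)
boxes = result["detection_boxes"][0].numpy()    # normalized [ymin, xmin, ymax, xmax]
scores = result["detection_scores"][0].numpy()
classes = result["detection_classes"][0].numpy()

for box, score, cls in zip(boxes, scores, classes):
    if score > 0.5:
        print(cls, score, box)

For your own hand-gesture classes you would still go through the retraining route above, with images annotated with bounding boxes.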