I need to know when the right time is to do discretization in Weka. I have a data set, and I need to create training and testing samples from it. Should I do the discretization of the numerical attributes before the sampling or after the sampling?
This should be obvious.
As long as you get the same result independent of the split performed, you can do it afterwards. But what is the benefit of that? Just do the preprocessing first.
If you discretize by rounding - e.g. float to integer - then you should be fine, because the result is unaffected by the split. But if you discretize e.g. by quantiles, it should be obvious that you can screw up badly, because you will discretize the different parts differently!
Let's say you discretize data into two different values:
Input data   Type   Output value
0.9          good   1.05
1.0          good   1.05
1.1          good   1.05
1.2          good   1.05
---
2.1          good   2.20
2.3          good   2.20
2.2          good   2.20
--- SPLIT HERE ---
1.1          bad    1.20
1.2          bad    1.20
1.3          bad    1.20
---
1.9          bad    2.00
2.0          bad    2.00
2.1          bad    2.00
See, both "good" and "bad" were discretized into two discrete values by using the average of each cluster of values. But because the averages for "good" and "bad" differ, the resulting attribute clearly exposes the true class membership. The task of detecting "bad" has become substantially easier.
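Here is a minimal sketch of the same pitfall, using scikit-learn's KBinsDiscretizer as a stand-in for Weka's Discretize filter (the behaviour is analogous): fitting a quantile discretizer separately on each split produces different bin edges, so identical raw values can end up in different bins.

# A minimal sketch of the pitfall, assuming scikit-learn is available.
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer

rng = np.random.default_rng(0)
values = rng.normal(loc=1.5, scale=0.5, size=200).reshape(-1, 1)

# Correct: fit the quantile discretizer once on all the data, then split.
full = KBinsDiscretizer(n_bins=2, encode="ordinal", strategy="quantile").fit(values)

# Wrong: fit a separate discretizer on each split.
part_a = KBinsDiscretizer(n_bins=2, encode="ordinal", strategy="quantile").fit(values[:100])
part_b = KBinsDiscretizer(n_bins=2, encode="ordinal", strategy="quantile").fit(values[100:])

print("bin edges, full data:", full.bin_edges_[0])
print("bin edges, split A  :", part_a.bin_edges_[0])
print("bin edges, split B  :", part_b.bin_edges_[0])
# The per-split edges differ, so the same raw value can land in different bins
# depending on which split it happened to fall into.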
Do not perform separate preprocessing, ever.
I am currently working on a project which makes use of YOLOv5. I have trained my model for 100 epochs on a dataset of 5000+ images and got a good mAP of 0.95. My question came up when I tried to detect objects using detect.py with the trained weights: I got fairly low confidence in detection, about 0.30 to 0.70, on certain objects. Should I train my model for more epochs to improve the confidence? And does high mAP not result in high confidence?
I am comparing models for the detection of objects for maritime Search and Rescue (SAR) purposes. From the models that I used, I got the best results for the improved version of YOLOv3 for small object detection and for FASTER RCNN.
For YOLOv3 I got the best mAP@50, but for Faster RCNN I got better results on all other metrics (precision, recall, F1 score). Now I am wondering how to read this and which model is really better in this case?
I would like to add that there are only two classes in the dataset: small and large objects. We chose this solution because distinguishing between the classes is not as important to us as detecting any object of human origin.
However, small objects don't mean small GT bounding boxes. These are objects that actually have a small area - less than 2 square meters (e.g. people, buoys). Large objects are objects with a larger area (boats, ships, canoes, etc.).
Here are the results per category:
And two sample images from the dataset (with YOLOv3 detections):
The mAP for object detection is the average of the AP calculated over all the classes. mAP@0.5 means the mAP calculated at an IoU threshold of 0.5.
The general definition of Average Precision (AP) is the area under the precision-recall curve.
The precision-recall curve is obtained by plotting the model's precision and recall as a function of the model's confidence threshold.
Precision measures how accurate your predictions are, i.e. the percentage of your predictions that are correct. Recall measures how well you find all the positives. The F1 score is the harmonic mean (HM) of precision and recall.
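For a concrete picture, here is a small sketch (assuming scikit-learn, with made-up labels and confidence scores rather than your data) of how the precision-recall curve is traced out by sweeping the confidence threshold, and how AP is the area under it:

# Hypothetical labels and confidence scores, for illustration only.
import numpy as np
from sklearn.metrics import precision_recall_curve, average_precision_score

y_true  = np.array([1, 1, 0, 1, 0, 1, 0, 0])                   # 1 = correct detection
y_score = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2])   # model confidences

precision, recall, thresholds = precision_recall_curve(y_true, y_score)
ap = average_precision_score(y_true, y_score)   # area under the precision-recall curve

print("precision:", np.round(precision, 2))
print("recall   :", np.round(recall, 2))
print("AP       :", round(ap, 3))
# Precision = TP / (TP + FP), Recall = TP / (TP + FN),
# F1 = 2 * Precision * Recall / (Precision + Recall).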
To answer your questions now.
How to read it and which model is really better in this case?
The mAP is a good measure of the sensitivity of the neural network. A good mAP indicates a model that is stable and consistent across different confidence thresholds. In your case, the Faster RCNN results indicate that its precision-recall curve is worse than YOLOv3's, which means that Faster RCNN has either very poor recall at higher confidence thresholds or very poor precision at lower confidence thresholds compared to YOLOv3 (especially for small objects).
Precision, recall and F1 score are computed at a given confidence threshold. I'm assuming you're running the models with the default confidence threshold (possibly 0.25). So the higher precision, recall and F1 score of Faster RCNN indicate that, at that confidence threshold, it is better than YOLOv3 in terms of all three metrics.
What metric should be more important?
In general, to analyse which model performs better, I would suggest using a validation set (the data set used to tune hyper-parameters) and a test set (the data set used to assess the performance of a fully trained model).
Note: FP = false positive, FN = false negative.
On the validation set:
Use mAP to select the best-performing model (the one that is most stable and consistent) out of all the trained weights across iterations/epochs. Use mAP to decide whether the model should be trained/tuned further or not.
Check the class-level AP values to ensure the model is stable and performs well across the classes.
Depending on your use case/application, if you're completely tolerant of FNs and highly intolerant of FPs, then use precision to train/tune the model accordingly.
Depending on your use case/application, if you're completely tolerant of FPs and highly intolerant of FNs, then use recall to train/tune the model accordingly.
On the test set:
If you're neutral towards FPs and FNs, use the F1 score to evaluate the best-performing model.
If FPs are not acceptable to you (without caring much about FNs), pick the model with higher precision.
If FNs are not acceptable to you (without caring much about FPs), pick the model with higher recall.
Once you decide which metric to use, try out multiple confidence thresholds (for example 0.25, 0.35 and 0.5) for a given model to see at which confidence threshold the selected metric works in your favour, and to understand the acceptable trade-off ranges (say you want a precision of at least 80% with some decent recall). Once the confidence threshold is decided, use it across the different models to find the best-performing one.
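As a rough illustration of that threshold sweep, here is a hedged sketch with hypothetical detections; it assumes each detection has already been matched to ground truth at IoU 0.5, so we already know whether it is a TP or an FP:

# Hypothetical detections: confidence scores plus whether each one matched a
# ground-truth box (TP) or not (FP); num_gt is the total number of GT objects.
import numpy as np

scores = np.array([0.95, 0.90, 0.80, 0.60, 0.55, 0.40, 0.30, 0.20])
is_tp  = np.array([True, True, True, False, True, False, True, False])
num_gt = 6

for thr in (0.25, 0.35, 0.5):
    keep = scores >= thr
    tp = int(np.sum(is_tp & keep))
    fp = int(np.sum(~is_tp & keep))
    fn = num_gt - tp
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    print(f"threshold={thr:.2f}  precision={precision:.2f}  recall={recall:.2f}  F1={f1:.2f}")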
I have used the below hyper-parameters to train the model.
rcf.set_hyperparameters(
    num_samples_per_tree=200,
    num_trees=250,
    feature_dim=1,
    eval_metrics=["accuracy", "precision_recall_fscore"])
Is there any good way to choose the num_samples_per_tree and num_trees parameters?
What are the best values for both num_samples_per_tree and num_trees?
There are natural interpretations for these two hyper-parameters that can help you determine good starting approximations for HPO:
num_samples_per_tree -- the reciprocal of this value approximates the density of anomalies in your data set/stream. For example, if you set this to 200 then the assumption is that approximately 0.5% of the data is anomalous. Try exploring your dataset to make an educated estimate.
num_trees -- the more trees in your RCF model, the less noise in the scores. That is, if more trees report that the input inference point is an anomaly, then the point is much more likely to be an anomaly than if only a few trees suggest so.
The total number of points sampled from the input dataset is equal to num_samples_per_tree * num_trees. You should make sure that the input training set is at least this size.
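As a back-of-the-envelope check, you can turn those two interpretations into a quick sanity check before training; this is plain Python with hypothetical numbers, not SageMaker API code:

# Rough sizing check based on the two interpretations above (hypothetical numbers).
num_samples_per_tree = 200   # implies an assumed anomaly rate of ~1/200 = 0.5%
num_trees = 250              # more trees -> less noisy anomaly scores

assumed_anomaly_rate = 1 / num_samples_per_tree
min_training_points = num_samples_per_tree * num_trees   # total points sampled by the forest

print(f"assumed anomaly rate : {assumed_anomaly_rate:.2%}")
print(f"training set should contain at least {min_training_points} points")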
(Disclosure - I helped create SageMaker Random Cut Forest)
I have been using sklearn to learn on some data. This is a binary classification task and I am using an RBF kernel. My data set is quite unbalanced (80:20) and I'm using only 120 samples, with roughly 10 features (I've been experimenting with a few less). Since I set class_weight="auto", the accuracy I've calculated from a cross-validated (10 folds) grid search has dropped dramatically. Why?
I will include a couple of validation accuracy heatmaps to demonstrate the difference.
NOTE: the top heatmap is from before class_weight was changed to "auto".
Accuracy is not the best metric to use when dealing with an unbalanced dataset. Let's say you have 99 positive examples and 1 negative example; if you predict all outputs to be positive, you still get 99% accuracy, even though you have misclassified the only negative example. You might have gotten high accuracy in the first case because your predictions fall on the side that has the larger number of samples.
When you set class_weight="auto", it takes the imbalance into consideration, and hence your predictions might have moved towards the centre; you can cross-check this by plotting histograms of the predictions.
My suggestion is: don't use accuracy as the performance metric; use something like the F1 score or AUC.
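A minimal sketch of that point, assuming scikit-learn: on an 80:20 problem, a classifier that always predicts the majority class still reports 80% accuracy while its F1 score for the minority class is 0.

# Toy 80:20 imbalanced labels; the "classifier" always predicts the majority class.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

y_true = np.array([0] * 80 + [1] * 20)   # 80:20 imbalance, 1 = minority class
y_pred = np.zeros(100, dtype=int)        # always predict the majority class

print("accuracy           :", accuracy_score(y_true, y_pred))              # 0.80
print("F1 (minority class):", f1_score(y_true, y_pred, zero_division=0))   # 0.0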
Hi, I am using WEKA to analyze some data, but I am having trouble working out how to calculate the total accuracy from the output.
The partial output is below:
Detailed Accuracy By Class
               TP Rate  FP Rate  Precision  Recall  F-Measure  ROC Area  Class
               0.85     0.415    0.794      0.85    0.821      0.762     tested_negative
               0.585    0.15     0.676      0.585   0.627      0.762     tested_positive
Weighted Avg.  0.758    0.323    0.753      0.758   0.754      0.762
From the above, what will the total accuracy be?
What's your confusion matrix from WEKA output?
In the general case, it is necessary to know it in order to calculate the accuracy.
And yes, I think "total accuracy" in this case means accuracy in the usual sense:
accuracy = (TP + TN) / (TP + TN + FP + FN)
(from http://en.wikipedia.org/wiki/Accuracy_and_precision)
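For illustration, here is a tiny sketch of that calculation in Python, with a hypothetical 2x2 confusion matrix roughly consistent with the per-class TP rates shown above (substitute the matrix WEKA prints for your run):

# Hypothetical confusion matrix (rows = actual class, columns = predicted class).
confusion = [
    [425, 75],   # actual tested_negative: [predicted negative, predicted positive]
    [111, 157],  # actual tested_positive: [predicted negative, predicted positive]
]

correct = confusion[0][0] + confusion[1][1]        # diagonal = correctly classified
total = sum(sum(row) for row in confusion)
print("total accuracy =", correct / total)         # ~0.758 for these numbers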
You can see the number of correctly classified instances reported in the summary part (a little bit above the part where it reports the accuracy by class). In front of this part you can see a number (which indicates the number of instances) and a percentage (which is the accuracy).