I do understand principal component analysis. I know how to do it and what it actually does. I have applied PCA and my best result turned out to be two components. I understand that each of my inputs now contributes partially to each component. What I do not understand is how to feed the result of PCA (in my case 2 components) into a machine learning model.
How do we input them?
For example, when I want to run a NN on my features, I can just navigate to where they are stored and import them, but my PCA analysis has been run in SPSS and all it shows me is the contribution of my features to each component.
What should I import to my NN model?
PCA is a method of feature extraction, which is used to avoid the problem of collinearity. For example, if several variables are highly correlated because "they measure the same thing", then PCA can extract a measure of "that thing" (technically: a component), which is called a score. Your data set of, say, 100 measured variables may reduce to, say, 10 significant components. Then you can use the scores your test persons have achieved on those 10 components to do, for example, a multiple regression, a cluster analysis or a discriminant analysis. This will give more valid results than performing the analysis directly on the 100 variables.
So the procedure is to sort the eigenvalues (and eigenvectors) by size, identify the number of significant components q (e.g., by scree plot), set up the projection matrix F (with the eigenvectors corresponding to the q largest eigenvalues as columns) and multiply it with the data matrix D. This gives you the score matrix C (of dimension n times q, with n the number of test persons), which you can use as input for whatever method you want to use next.
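In practice you rarely do this by hand; if you can export your raw data from SPSS, any PCA routine will give you the score matrix directly, and that score matrix is what you feed to the NN. As a rough sketch of the whole pipeline in Python/scikit-learn terms (the file names and model settings here are made up for illustration, not taken from your setup):

```python
# Minimal sketch (scikit-learn): compute PCA scores and feed them to a model.
# "features.csv" and "labels.csv" are placeholder file names.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X = np.loadtxt("features.csv", delimiter=",")   # n rows (cases) x m columns (original variables)
y = np.loadtxt("labels.csv", delimiter=",")     # n class labels

X_std = StandardScaler().fit_transform(X)       # PCA is usually run on standardized variables

pca = PCA(n_components=2)                       # keep the 2 significant components
scores = pca.fit_transform(X_std)               # score matrix C: shape (n, 2) -- this is the NN input

# pca.components_ holds the loadings (what SPSS reports per variable);
# the scores, not the loadings, are what the model is trained on.
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000, random_state=0)
clf.fit(scores, y)
```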
The sum of the incorrect classifications (see tree) over all rules is 2097 (i.e., 895+700+428+74), but the confusion matrix gives 2121 (i.e., 1999+122). Can someone explain the discrepancy? How come the numbers are different?
Weka's output of the classifier's model description contains two sections:
Error on training data
Stratified cross-validation
The first one just evaluates the trained classifier on the training data itself, whereas the second one does cross-validation, distributing the instances of each class evenly across the folds. So stratified cross-validation is supposed to give a better picture of the classifier's performance than simple cross-validation.
I think the confusion matrix you have posted here is from the stratified cross-validation, and hence the number of misclassified instances shown in the tree (which must come from the evaluation on the training data) is different.
The decision tree output is described very nicely at https://weka.wikispaces.com/Primer#classifiers. There, too, the misclassified examples shown in the tree differ from those that can be seen in the confusion matrix under the stratified cross-validation section.
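To see the two numbers come apart concretely, here is a small sketch of the same effect in scikit-learn terms (Weka does the equivalent internally; the data set below is synthetic, not yours):

```python
# Sketch: the same classifier evaluated (a) on its own training data and
# (b) by stratified 10-fold cross-validation -- the two error counts differ,
# just like the tree counts vs. the cross-validation confusion matrix in Weka.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import confusion_matrix

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.8, 0.2], random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0)

# (a) "Error on training data": fit on everything, predict the same data
tree.fit(X, y)
train_errors = (tree.predict(X) != y).sum()

# (b) "Stratified cross-validation": 10 separate fits, each fold predicted
# by a model that never saw it
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
cv_pred = cross_val_predict(tree, X, y, cv=cv)
cv_errors = (cv_pred != y).sum()

print("training-data errors:", train_errors)   # typically lower
print("cross-validation errors:", cv_errors)   # typically higher
print(confusion_matrix(y, cv_pred))
```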
Hope I am correct.
[Screenshot: J48 Weka output]
Hi,
I have a problem with my model in Weka (J48, cross-validation): many instances are classified wrongly when it comes to the second class. Is there any way to improve this, or not really? I'm not an expert in Weka. Thank you in advance. My output is above.
With NaiveBayes it looks better, but the TP rate is still below 0.5 for the second class.
[Screenshot: NaiveBayes Weka output]
It is hard to reproduce your example with the given information. However, the solution is probably to turn your classifier into a cost-sensitive classifier:
https://weka.wikispaces.com/CostSensitiveClassifier
What it does is assign a higher cost to misclassifications of a certain class. In your case this would be the "True" class.
You can also simulate such an algorithm by oversampling your positive examples. That is, if you have n positive examples, you sample k*n positive examples, while keeping your negative examples as they are. You could also simply duplicate your positive examples.
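For illustration, here is roughly what both remedies look like in scikit-learn terms (the class weights below play a similar role to Weka's CostSensitiveClassifier; the data is synthetic):

```python
# Sketch of the two remedies: (1) penalize errors on the rare class more heavily,
# (2) oversample the positive examples. Synthetic data, illustrative settings.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)

# 1) Cost-sensitive: misclassifying the rare ("True") class costs 5x as much
cost_sensitive = DecisionTreeClassifier(class_weight={0: 1, 1: 5}, random_state=0)
cost_sensitive.fit(X, y)

# 2) Oversampling: repeat each positive example k times, keep negatives as they are
k = 5
pos, neg = X[y == 1], X[y == 0]
X_over = np.vstack([neg, np.repeat(pos, k, axis=0)])
y_over = np.concatenate([np.zeros(len(neg)), np.ones(len(pos) * k)])
DecisionTreeClassifier(random_state=0).fit(X_over, y_over)
```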
I am a bit confused as to the difference between the 10-fold cross-validation available in Weka and traditional 10-fold cross-validation. I understand the concept of k-fold cross-validation, but from what I have read, 10-fold cross-validation in Weka is a little different.
In Weka, FIRST a model is built on ALL the data. Only then is 10-fold cross-validation carried out. In traditional 10-fold cross-validation no model is built beforehand; 10 models are built, one with each iteration (please correct me if I'm wrong!). But if this is the case, what on earth does Weka do during 10-fold cross-validation? Does it again build a model for each of the ten iterations, or does it use the previously assembled model? Thanks!
As far as I know, the cross-validation in Weka (and the other evaluation methods) is only used to estimate the generalisation error. That is, the (implicit) assumption is that you want to use the learned model with data that you didn't give to Weka (also called a "validation set"). Hence the model that you get is trained on the entire data.
During the cross-validation, Weka trains and evaluates a number of different models (10 in your case) to estimate how well the learned model generalises. You don't actually see these models -- they are only used internally. The model that is shown to you is not itself evaluated by the cross-validation.
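Conceptually it goes like this (sketched in scikit-learn purely for illustration; Weka performs the same two steps internally):

```python
# Sketch: k throw-away models give the error estimate, then one final model
# is trained on all the data -- that final model is the one you get back.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(random_state=0)

# Step 1: 10-fold cross-validation -> 10 internal models, used only for the estimate
scores = cross_val_score(clf, X, y, cv=10)
print("estimated generalisation accuracy:", scores.mean())

# Step 2: the reported model is trained on the entire data set
final_model = clf.fit(X, y)
```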
I'm using Weka and would like to perform regression with random forests. Specifically, I have a dataset:
Feature1,Feature2,...,FeatureN,Class
1.0,X,...,1.4,Good
1.2,Y,...,1.5,Good
1.2,F,...,1.6,Bad
1.1,R,...,1.5,Great
0.9,J,...,1.1,Horrible
0.5,K,...,1.5,Terrific
.
.
.
Rather than learning to predict the most likely class, I want to learn the probability distribution over the classes for a given feature vector. My intuition is that using just the RandomForest model in Weka would not be appropriate, since it would be attempting to minimize its absolute error (maximum likelihood) rather than its squared error (conditional probability distribution). Is that intuition right? Is there a better model to be using if I want to perform regression rather than classification?
Edit: I'm now thinking that it may not actually be a problem. Presumably, classifiers are learning the conditional probability P(Class | Feature1,...,FeatureN), and the resulting classification is just finding the c in Class that maximizes that probability distribution. Therefore, a RandomForest classifier should be able to give me the conditional probability distribution. I just had to think about it some more. If that's wrong, please correct me.
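For what it's worth, this is easy to check in scikit-learn terms (synthetic data below, just to illustrate asking the classifier for the full distribution rather than the arg-max class; Weka exposes something analogous for its classifiers):

```python
# Sketch: a random forest classifier can return the whole conditional
# distribution P(class | features), not only the most likely class.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_classes=3, n_informative=5, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

print(rf.predict(X[:1]))        # the single most likely class
print(rf.predict_proba(X[:1]))  # the full distribution over all classes
```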
If you want to predict the probabilities for each class explicitly, you need different input data. That is, you would need to replace the value being predicted. Instead of one data set with the class label, you would need n data sets (for n different labels) with aggregated data for each unique feature vector. Your data would look something like this:
Feature1,...,Good
1.0,...,0.5
0.3,...,1.0
and
Feature1,...,Bad
1.0,...,0.8
0.3,...,0.1
and so on. You would need to learn one model for each class and run them separately on any data to be classified. That is, for each label you learn a model to predict a number that is the probability of being in that class, given a feature vector.
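A rough sketch of that idea in Python (pandas + scikit-learn; the column names, file name and the choice of a random-forest regressor are just assumptions for illustration):

```python
# Sketch: aggregate the data per unique feature vector, then fit one regressor
# per class whose target is the observed probability of that class.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

df = pd.read_csv("data.csv")                       # columns: Feature1..FeatureN, Class
features = [c for c in df.columns if c != "Class"]

# P(class | feature vector) estimated as the relative frequency per unique vector
probs = (
    pd.crosstab([df[f] for f in features], df["Class"], normalize="index")
    .reset_index()
)

# One-hot encode any categorical features before fitting
X_enc = pd.get_dummies(probs[features])

models = {}
for label in df["Class"].unique():
    reg = RandomForestRegressor(n_estimators=100, random_state=0)
    reg.fit(X_enc, probs[label])                   # target: probability of this label
    models[label] = reg

# To score a new feature vector, encode it the same way, run every per-class
# model and read off each predicted probability.
```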
If you don't need the probabilities to be predicted explicitly, have a look at the Bayesian classifiers in Weka, which make use of probabilities in the models that they learn.