How to get the average precision for a segmentation model - evaluation

I want to evaluate my segmentation model (DeepLabv3+). It has 3 classes, including the background class. I couldn't find any code for computing average precision for segmentation models. Can anyone help? (I used MATLAB.)
I used the evaluateDetectionPrecision function, but it didn't work because it is meant for object detection models. I also tried to compute the average precision from the confusion matrix, but none of this worked.
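In case it helps: if by average precision you mean the mean of the per-class precisions taken from the pixel-level confusion matrix (rather than the area under a precision-recall curve), the computation itself is simple. A minimal sketch in Python/NumPy with a made-up 3x3 confusion matrix (the question uses MATLAB, but the arithmetic is the same in any language):

import numpy as np

# Hypothetical pixel-level confusion matrix: rows = true class,
# columns = predicted class (background, class 1, class 2).
conf = np.array([[5000, 120,  80],
                 [ 200, 900,  50],
                 [ 150,  60, 700]])

true_positives = np.diag(conf)
predicted_per_class = conf.sum(axis=0)               # column sums
precision_per_class = true_positives / predicted_per_class

print("per-class precision:", precision_per_class)
print("mean (average) precision:", precision_per_class.mean())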

Related

OpenCV Random Decision Forest: How to get posterior probability

I did research on multiple websites, but I couldn't find any solution.
Here's the problem:
I am implementing a pixel-wise classification using RTrees from OpenCV. I need the posterior probability for each class. I tried to get it via cv::ml::StatModel::predict(), but the output matrix only contains the predicted value. Is there another way to get the posterior probability from RTrees?
PS: I'm still quite new to Machine Learning, so please forgive my lack of knowledge ^^"
Instead of using cv::ml::StatModel::predict, you could refer to the cv::ml::RTrees::getVotes member function. This way, in the case of classification, you get the number of trees which voted for each class for a given sample. By dividing these vote counts by the forest size you get an approximation of the posterior probabilities.
The getVotes function should be called instead of predict, like this:
cv::Mat samples = [one or multiple samples (their feature vectors), one per row];
cv::Mat votes;
classifier.getVotes(samples, votes, 0);
// pass 0 here unless you want to set RTrees prediction flags
Be aware that the votes matrix has one more row than the number of samples: the first row enumerates the class labels (in ascending order, if I remember the OpenCV source code correctly).
This answer is up to date as of OpenCV 3.4.1.
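For completeness, a minimal sketch of the same idea through the Python bindings of recent OpenCV versions (the tiny synthetic training set and the test sample are only for illustration):

import cv2
import numpy as np

# Tiny synthetic 2-class training set: one feature vector per row.
rng = np.random.RandomState(0)
features = np.vstack([rng.randn(50, 2) - 2, rng.randn(50, 2) + 2]).astype(np.float32)
labels = np.hstack([np.zeros(50), np.ones(50)]).astype(np.int32)

rtrees = cv2.ml.RTrees_create()
rtrees.train(features, cv2.ml.ROW_SAMPLE, labels)

sample = np.array([[1.5, 1.5]], dtype=np.float32)
votes = rtrees.getVotes(sample, 0)

# Row 0 lists the class labels; each following row holds the per-class vote
# counts for one sample. Dividing by the forest size approximates posteriors.
class_labels = votes[0]
vote_counts = votes[1].astype(np.float64)
posteriors = vote_counts / vote_counts.sum()
for lbl, p in zip(class_labels, posteriors):
    print("class", int(lbl), "approx. posterior", p)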

OpenCV C++: Recognize number

This should be easy. I'm working on a Sudoku solver and I am trying to figure out how to tell which number I am looking at.
I am able to isolate the number as seen above. I just can't get any image recognition to work. I've tried KNearest and something called Tesseract, but to no avail. Any help?
For easy tasks like this, I would not recommend using something like Tesseract. Just think of a simple trick. For example, threshold the image, count the black pixels, and see what the count is for each digit. Of course this method will fail to distinguish digits such as 6 and 9, so you may cut the digit image into two halves, count each half separately, compare, and so on.
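A minimal sketch of that pixel-counting trick in Python/OpenCV ("digit.png" is a placeholder for your cropped digit image; the reference counts you compare against would come from your own known digits):

import cv2

img = cv2.imread("digit.png", cv2.IMREAD_GRAYSCALE)

# Threshold so the digit strokes become white (255), assuming a dark digit
# on a light background.
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

top_half = binary[: binary.shape[0] // 2, :]
bottom_half = binary[binary.shape[0] // 2 :, :]

# Count stroke pixels overall and per half; comparing these counts against
# counts collected from known reference digits is the "trick" suggested above.
total = cv2.countNonZero(binary)
top = cv2.countNonZero(top_half)
bottom = cv2.countNonZero(bottom_half)
print(total, top, bottom)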

What does the class_weight parameter do in scikit-learn SGD?

I am a frequent user of scikit-learn, and I want some insight into the class_weight parameter with SGD.
I was able to trace the code down to this function call:
plain_sgd(coef, intercept, est.loss_function,
penalty_type, alpha, C, est.l1_ratio,
dataset, n_iter, int(est.fit_intercept),
int(est.verbose), int(est.shuffle), est.random_state,
pos_weight, neg_weight,
learning_rate_type, est.eta0,
est.power_t, est.t_, intercept_decay)
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/stochastic_gradient.py
After this it goes into sgd_fast, and I am not very good with Cython. Can you give me some clarity on these questions?
I have a class imbalance in the dev set, where the positive class has around 15k samples and the negative class around 36k. Will class_weight resolve this problem, or would undersampling be a better idea? I am getting better numbers, but it's hard to explain why.
If yes, how does it actually do it? I mean, is it applied as a penalization on the features, or is it a weight in the optimization (loss) function? How can I explain this to a layman?
class_weight can indeed help increase the ROC AUC or F1 score of a classification model trained on imbalanced data.
You can try class_weight="auto" to select weights that are inversely proportional to class frequencies. You can also pass your own weights as a Python dictionary with class labels as keys and weights as values.
Tuning the weights can be achieved via grid search with cross-validation.
Internally this is done by deriving sample_weight from the class_weight (depending on the class label of each sample). Sample weights are then used to scale the contribution of individual samples to the loss function used to train the linear classification model with Stochastic Gradient Descent.
The feature penalization is controlled independently via the penalty and alpha hyperparameters; sample_weight / class_weight have no impact on it.
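A minimal sketch of both suggestions with scikit-learn on synthetic data (note that in current scikit-learn versions the "auto" option is spelled "balanced"):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic imbalanced binary problem, standing in for the 15k/36k dev set.
X, y = make_classification(n_samples=5000, weights=[0.7, 0.3], random_state=0)

# Weights inversely proportional to class frequencies.
clf = SGDClassifier(class_weight="balanced", random_state=0)
clf.fit(X, y)

# Tuning explicit class weights with cross-validated grid search.
grid = GridSearchCV(
    SGDClassifier(random_state=0),
    param_grid={"class_weight": [{0: 1, 1: w} for w in (1, 2, 3, 5)]},
    scoring="f1",
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_)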

Estimate color distribution with Gaussian mixture model

I am trying to use a two-component Gaussian mixture with the EM algorithm to estimate the color distribution of a video frame. For that, I want to use two separate peaks in the color distribution as the two Gaussian means to facilitate the EM calculation. I have several difficulties implementing this in OpenCV.
My first question is: how can I determine the two peaks? I've searched for peak estimation in OpenCV, but still couldn't find any separate function. So I am going to determine two regions, then find their maximum values as the peaks. Is this the right way?
My second question is: how do I fit a Gaussian mixture model with EM in OpenCV? As far as I know, the cv::EM::predict function can give me the index of the most probable mixture component, but I have difficulties training the EM model. I've searched and found some other code, but finding the correct parameters is too difficult for me. Could someone provide example code for this? Thank you in advance.
#ederman, try {OpenCV library location}\opencv\samples\cpp\em.cpp instead of the web link. I think the sample code in the link is out of date now. I have successfully compiled the sample code in OpenCV 2.3.1. It shouldn't be a problem for 2.4.2.
Good luck:)
My first question is: how can I determine the two peaks?
I would iterate through the range of possible sample values and test where EM.predict(sample)[0] peaks.
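In case the em.cpp sample is hard to find, here is a minimal sketch of training a two-component GMM on pixel colors, written against the Python bindings of the same cv::ml::EM API ("frame.png" is a placeholder file name):

import cv2
import numpy as np

frame = cv2.imread("frame.png")
samples = frame.reshape(-1, 3).astype(np.float32)   # one BGR pixel per row

em = cv2.ml.EM_create()
em.setClustersNumber(2)        # the two color modes discussed above
em.trainEM(samples)            # plain EM; initial parameters come from k-means

print("estimated means (the two peak colors):")
print(em.getMeans())

# Log-likelihood and index of the most probable component for one pixel,
# plus the per-component probabilities.
retval, probs = em.predict2(samples[:1])
print("most probable component:", int(retval[1]), "probabilities:", probs.ravel())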

Getting the Gaussian model weight

I am wondering how to get the weight of a single model from a Gaussian mixture of several models in OpenCV.
For instance, if the number of models in the GMM is five, I want to get the weight of the second or third model.
Could someone help me with this? Put another way, I want to know which model is the most used one among these five.
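If the mixture was fitted with cv::ml::EM (as in the previous question), the mixture weights are exposed directly via getWeights. A minimal sketch, assuming em is an already-trained model with five components (for example trained as in the sketch further up):

import numpy as np

weights = em.getWeights().ravel()   # one mixture weight per component
print("weight of the second model:", weights[1])
print("weight of the third model:", weights[2])
print("most used model:", int(np.argmax(weights)))  # component with the largest weight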