With respect to semantic segmentation, it seems to me that there are multiple options for the final pixel-wise labeling, such as softmax, sigmoid, logistic regression, or other classical classification methods.
However, for the softmax approach, we need to ensure that the output map produced by the network architecture has multiple channels, and that the number of channels matches the number of classes. For instance, for a two-class problem (mask vs. non-mask), we would use two channels. Is this right?
Moreover, each channel in the output map can then be treated as a probability map for a given class. Is this understanding correct?
Yes to both questions. The goal of the softmax function is to transform the scores into probabilities so that you can maximize the probability of the true label.
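To make that concrete, here is a rough NumPy sketch (the shapes and names are just illustrative assumptions): the softmax is applied across the channel axis, so each channel of the result can be read as a per-pixel probability map and the channels sum to 1 at every pixel.

# Minimal sketch: softmax over the channel axis of a hypothetical
# (num_classes, H, W) network output for a two-class segmentation task.
import numpy as np

def softmax_over_channels(logits):
    """logits: array of shape (num_classes, H, W) -> probabilities of the same shape."""
    # Subtract the per-pixel max for numerical stability.
    shifted = logits - logits.max(axis=0, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=0, keepdims=True)

logits = np.random.randn(2, 4, 4)           # 2 channels: mask / non-mask
probs = softmax_over_channels(logits)       # each channel is a probability map
labels = probs.argmax(axis=0)               # per-pixel predicted class
assert np.allclose(probs.sum(axis=0), 1.0)  # probabilities sum to 1 at every pixel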
Is it valid to run a PCA on data composed of proportions? For example, I have data on the proportion of various food items in the diet of different species. Can I run a PCA on this type of data, or should I transform the data or do something else beforehand?
I had a similar question. You should search for "compositional data analysis". There are transformations you can apply to proportions in order to analyze them with multivariate techniques such as PCA. You can also find "robust" PCA algorithms to run your analysis in R. Let us know if you find an appropriate solution to your specific problem.
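As a rough Python sketch of one standard compositional transformation, the centered log-ratio (clr), followed by an ordinary PCA; the toy data and variable names are just illustrative, and it assumes strictly positive proportions (zeros need special handling, e.g. a small pseudo-count).

# Sketch: clr transform, then PCA on the transformed data.
import numpy as np
from sklearn.decomposition import PCA

def clr(X):
    """X: (n_samples, k) rows of proportions summing to 1, all entries > 0."""
    logX = np.log(X)
    # Subtract each row's mean log, i.e. the log of the geometric mean.
    return logX - logX.mean(axis=1, keepdims=True)

# Toy diet data: 3 species, proportions of 4 food items.
X = np.array([[0.5, 0.2, 0.2, 0.1],
              [0.1, 0.6, 0.2, 0.1],
              [0.3, 0.3, 0.3, 0.1]])

scores = PCA(n_components=2).fit_transform(clr(X))
print(scores)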
I don't think so.
PCA will give you "impossible" answers. You might get principal components with values that proportions can't have, like negative values or values greater than 1. How would you interpret this component?
In technical terms, the support of your data is a subset of the support of PCA. Say you have $k$ classes. Then:
the support for PCA vectors is $\mathbb{R}^k$
the support for your proportion vectors is the $k$-dimensional simplex. By simplex I mean the set of vectors $p$ of length $k$ such that:
$0 \le p_i \le 1$ for $i = 1, \ldots, k$
$\sum_{i=1}^k p_i = 1$
One way around this would be a one-to-one mapping between the $k$-simplex and all of $\mathbb{R}^k$. If such a mapping existed, you could map your proportions to $\mathbb{R}^k$, do PCA there, and then map the PCA vectors back to the simplex.
But I'm not sure the simplex is a self-contained linear space. If you add two elements of the simplex, you don't get an element of the simplex :/
A better approach, I think, is clustering, e.g. with Gaussian mixtures or spectral clustering. This is related to PCA. But a nice property of clustering is that you can express any element of your data as a "convex combination" of the clusters. If you analyze your proportion data and find clusters, they (unlike PCA vectors) will lie within the simplex, and any mixture of them will, too.
I also recommend looking into non-negative matrix factorization (NMF). This is like PCA but, as the name suggests, it avoids negative entries in both the components and the basis vectors. It's very useful for inferring structure in strictly positive data, like proportions. But NMF does not give you a basis for the simplex.
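As a rough illustration (toy data, not a recipe), scikit-learn's NMF factorizes a non-negative matrix X into W @ H with both factors non-negative, so the recovered parts stay non-negative even though, as noted above, they are not constrained to lie in the simplex.

# Sketch: NMF on a small matrix of proportions.
import numpy as np
from sklearn.decomposition import NMF

X = np.array([[0.5, 0.2, 0.2, 0.1],
              [0.1, 0.6, 0.2, 0.1],
              [0.3, 0.3, 0.3, 0.1]])

model = NMF(n_components=2, init="nndsvda", max_iter=1000)
W = model.fit_transform(X)   # per-sample weights (non-negative)
H = model.components_        # non-negative "parts"
print(np.round(W @ H, 3))    # approximate reconstruction of X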
In CNNs, the filters are usually small spatially, e.g. 3x3 or 5x5. Could the sizes instead be comparable to the image size? One reason for small filters is to reduce the number of parameters to be learnt. Apart from this, are there any other key reasons? For example, do people want to detect edges first?
You have answered part of the question yourself. Another reason is that most useful features may be found in more than one place in an image, so it makes sense to slide a single kernel all over the image in the hope of extracting that feature in different parts of the image with the same kernel. If you used a big kernel, the features could be interleaved within it and not detected distinctly.
In addition to your own answer, the reduction in computational cost is a key point. Since we use the same kernel for different sets of pixels in an image, the same weights are shared across these pixel sets as we convolve over them. And since the number of weights is much smaller than in a fully connected layer, we have fewer weights to back-propagate through.
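To put rough numbers on the parameter argument (the input size and channel counts below are just an illustrative assumption):

# Back-of-the-envelope comparison for a hypothetical 224x224 RGB input
# producing 64 output feature maps.
H = W = 224
in_ch, out_ch = 3, 64

conv3x3_params = out_ch * (in_ch * 3 * 3 + 1)          # weights shared across all positions
fully_connected = (H * W * in_ch) * (H * W * out_ch)   # one weight per input-output pixel pair

print(f"3x3 conv layer:        {conv3x3_params:,} parameters")   # ~1.8 thousand
print(f"fully connected layer: {fully_connected:,} parameters")  # hundreds of billions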
I have a binary classification problem I am solving with an SVM. The classes are unbalanced in the training data. I now need posterior probability outputs, not just a binary score. I tried to use Platt scaling with both Weka's SMO and LibSVM. For both of these implementations I get results that, in terms of F1-measure for the minority class, are worse than when I generated only binary results.
Do you know of a way to transform SVM binary results to probabilities which preserves the following rule:
"prob >= 0.5 if and only if decision value >= 0".
Meaning that the label each sample gets is the same whether we use binary classification or probabilities.
SVMs can be configured to output class membership probabilities. You should check your toolkit's documentation to learn how to enable this.
For example, in scikit-learn:
"When the constructor option probability is set to True, class membership probability estimates (from the methods predict_proba and predict_log_proba) are enabled."
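A minimal sketch of that option (toy data, names are illustrative). Note also that the scikit-learn documentation warns that these cross-validated, Platt-scaled probabilities may be inconsistent with the sign of decision_function in some cases, which is exactly the rule you want to preserve, so check this on your own data.

# Sketch: enabling probability estimates on an SVC.
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, weights=[0.8, 0.2], random_state=0)

clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X, y)
proba = clf.predict_proba(X[:5])       # Platt-scaled class membership probabilities
scores = clf.decision_function(X[:5])  # raw SVM decision values, for comparison
print(proba)
print(scores)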
I am a frequent user of scikit-learn, and I want some insight into the class_weight parameter with SGD.
I was able to trace the code down to the function call
plain_sgd(coef, intercept, est.loss_function,
penalty_type, alpha, C, est.l1_ratio,
dataset, n_iter, int(est.fit_intercept),
int(est.verbose), int(est.shuffle), est.random_state,
pos_weight, neg_weight,
learning_rate_type, est.eta0,
est.power_t, est.t_, intercept_decay)
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/stochastic_gradient.py
After this it goes into sgd_fast, and I am not very good with Cython. Can you give some clarity on these questions?
I have a class imbalance in the dev set: the positive class has around 15k samples and the negative class around 36k. Will class_weight resolve this problem, or would undersampling be a better idea? I am getting better numbers, but it's hard to explain.
If yes, how does it actually do it? I mean, is it applied to the feature penalization, or is it a weight in the optimization function? How can I explain this to a layman?
class_weight can indeed help increase the ROC AUC or F1-score of a classification model trained on imbalanced data.
You can try class_weight="auto" to select weights that are inversely proportional to the class frequencies. You can also pass your own weights as a Python dictionary with class labels as keys and weights as values.
Tuning the weights can be achieved via grid search with cross-validation.
Internally this is done by deriving sample_weight from the class_weight (depending on the class label of each sample). Sample weights are then used to scale the contribution of individual samples to the loss function used to train the linear classification model with Stochastic Gradient Descent.
The feature penalization is controlled independently via the penalty and alpha hyperparameters. sample_weight / class_weight have no impact on it.
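A minimal sketch of both options on toy data (note that in recent scikit-learn versions the string option is spelled "balanced" rather than "auto"; the dictionary values below are just illustrative, roughly the inverse of the 15k/36k ratio from the question):

# Sketch: class_weight with SGDClassifier.
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

# roughly the imbalance from the question: ~30% positive vs ~70% negative
X, y = make_classification(n_samples=5000, weights=[0.7, 0.3], random_state=0)

# weights inversely proportional to class frequencies, derived automatically
clf_balanced = SGDClassifier(class_weight="balanced", random_state=0).fit(X, y)

# or pass explicit per-class weights as a dictionary {label: weight}
clf_custom = SGDClassifier(class_weight={0: 1.0, 1: 2.4}, random_state=0).fit(X, y)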
At the moment I am using OpenCV's KNN implementation to classify images. It currently classifies images as P, S, or rectangle, and does so correctly. However, if I feed it an image of noise, it will still attempt to classify it as one of the 3 classes I stated earlier. To get it to classify noise as noise, should I train the KNN with a separate 'noise' category, or is there some kind of accuracy rating I can use?
The way to do it is to use the dists variable returned by the knn_nearest function. It gives you the distance between your vector and the K nearest training vectors; the further the distance, the less they have in common with the test data.
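A rough sketch of that idea using OpenCV's current cv2.ml Python API (the function names differ slightly from the older knn_nearest; the toy feature vectors and the threshold value are placeholders you would tune on your own data):

# Sketch: reject a sample as noise when its neighbour distances are large.
import cv2
import numpy as np

train = np.random.rand(30, 16).astype(np.float32)              # toy feature vectors
labels = np.random.randint(0, 3, (30, 1)).astype(np.float32)   # 3 classes: P, S, rectangle

knn = cv2.ml.KNearest_create()
knn.train(train, cv2.ml.ROW_SAMPLE, labels)

sample = np.random.rand(1, 16).astype(np.float32)
_, results, _, dists = knn.findNearest(sample, 3)

NOISE_THRESHOLD = 2.0   # hypothetical cut-off on the neighbour distances
if dists.mean() > NOISE_THRESHOLD:
    print("rejected as noise")
else:
    print("predicted class:", int(results[0, 0]))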
Yes, but I wouldn't advise it. If you have a classifier which is good at distinguishing between oranges and apples, you shouldn't try making it recognize "not a fruit". First, because you can feed wrong inputs to almost anything; second, because it will lower its original performance; and third, because you would need noise to have a pattern. How do you define noise?