When estimating the risk ratio for a variable with multiple categories, I have rarely seen people use floating confidence intervals, which are on an absolute scale; instead, most report the conventional confidence interval, which is on a relative scale. How can I calculate floating confidence intervals in Stata? Thanks!
I am comparing models for the detection of objects for maritime Search and Rescue (SAR) purposes. Of the models I tried, I got the best results with an improved version of YOLOv3 for small object detection and with Faster R-CNN.
For YOLOv3 I got the best mAP@0.5, but Faster R-CNN was better on all the other metrics (precision, recall, F1 score). Now I am wondering how to interpret this and which model is really better in this case?
I would like to add that there are only two classes in the dataset: small and large objects. We chose this split because distinguishing between the classes is not as important to us as detecting any object of human origin.
However, "small objects" does not mean small ground-truth (GT) bounding boxes. These are objects that actually have a small area of less than 2 square meters (e.g. people, buoys). Large objects are objects with a larger area (boats, ships, canoes, etc.).
Here are the results per category:
And two sample images from the dataset (with YOLOv3 detections):
The mAP for object detection is the average of the AP calculated over all the classes. mAP@0.5 means the mAP calculated at an IoU threshold of 0.5.
The general definition of Average Precision (AP) is the area under the precision-recall curve.
The precision-recall curve is obtained by plotting the model's precision and recall as a function of the model's confidence threshold.
Precision measures how accurate your predictions are, i.e. the percentage of your predictions that are correct. Recall measures how well you find all the positives. The F1 score is the harmonic mean (HM) of precision and recall.
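To make the definitions concrete, here is a minimal Python sketch (my own illustration, not the evaluation code of either framework) that computes the three threshold-dependent metrics from raw detection counts:

```python
# Minimal sketch: precision, recall and F1 from detection counts at one
# fixed confidence threshold (TP/FP/FN counts are assumed to be given).
def precision_recall_f1(tp: int, fp: int, fn: int):
    precision = tp / (tp + fp) if tp + fp else 0.0  # fraction of predictions that are correct
    recall = tp / (tp + fn) if tp + fn else 0.0     # fraction of ground-truth objects found
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)           # harmonic mean of precision and recall
    return precision, recall, f1

# Example: 80 correct detections, 20 false alarms, 40 missed objects.
print(precision_recall_f1(tp=80, fp=20, fn=40))  # (0.8, 0.666..., 0.727...)
```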
To answer your questions now.
How to read it and which model is really better in this case?
The mAP is a good measure of the sensitivity of the neural network, so a good mAP indicates a model that is stable and consistent across different confidence thresholds. In your case, the Faster R-CNN results indicate that its precision-recall curve is worse than that of YOLOv3, which means that Faster R-CNN has either very poor recall at higher confidence thresholds or very poor precision at lower confidence thresholds compared to YOLOv3 (especially for small objects).
Precision, recall and F1 score are computed at a given confidence threshold. I'm assuming you're running the models at the default confidence threshold (possibly 0.25). So the higher precision, recall and F1 score of Faster R-CNN indicate that, at that particular threshold, it is better than YOLOv3 on all three metrics.
What metric should be more important?
In general, to analyse which model performs better, I would suggest using both a validation set (the data set used to tune hyper-parameters) and a test set (the data set used to assess the performance of the fully trained model).
Note: FP = False Positive, FN = False Negative
On validation set:
Use mAP to select the best performing model (the one that is most stable and consistent) out of all the trained weights across iterations/epochs. Use mAP to decide whether the model should be trained/tuned further or not.
Check the class-level AP values to ensure the model is stable and good across all classes.
If, for your use case/application, you are completely tolerant of FNs but highly intolerant of FPs, then use precision to train/tune the model accordingly.
If, for your use case/application, you are completely tolerant of FPs but highly intolerant of FNs, then use recall to train/tune the model accordingly.
On test set:
If you're neutral towards FPs and FNs, then use the F1 score to pick the best performing model.
If FPs are not acceptable to you (without caring much about FNs), pick the model with the higher precision.
If FNs are not acceptable to you (without caring much about FPs), pick the model with the higher recall.
Once you have decided which metric to use, try out multiple confidence thresholds (say, for example, 0.25, 0.35 and 0.5) for a given model to see at which threshold value your chosen metric works in your favour, and to understand the acceptable trade-off range (say you want a precision of at least 80% together with a decent recall), as in the sketch below. Once the confidence threshold is decided, apply it across the different models to find the best performing one.
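A hypothetical Python sketch of that sweep (the `evaluate` callback and the numbers are placeholders for your own evaluation code):

```python
# Hypothetical sketch of the confidence-threshold sweep described above:
# keep the threshold that satisfies a precision floor while maximising recall.
# `evaluate(model, data, conf)` is assumed to return (precision, recall).
def pick_threshold(model, data, evaluate, thresholds=(0.25, 0.35, 0.5),
                   min_precision=0.80):
    best = None
    for conf in thresholds:
        precision, recall = evaluate(model, data, conf)
        if precision >= min_precision and (best is None or recall > best[2]):
            best = (conf, precision, recall)
    return best  # (threshold, precision, recall), or None if the floor is never met
```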
I have a dataset with multiple variables, each of them heavily concentrated around zero so that it forms a high peak. The kurtosis of each variable is more than 100.
What I want to estimate is the probability density at any given value, assuming it belongs to the dataset. The most accessible distribution function I have found so far is the multivariate Gaussian distribution. However, since my dataset does not have a normal shape, I am worried that estimating the probability density with this function would be inaccurate.
Does anyone have any good suggestions on which function to use for this purpose?
You are repeating a common incorrect interpretation of kurtosis, namely "peakedness," which contributes to the confusion about which distribution to use.
Kurtosis does not measure "peakedness" at all. You can have a distribution with a perfectly flat peak, a V-shaped peak, a trimodal peak, a wavy peak, or any shape of peak whatsoever, that has infinite kurtosis. And you can have a distribution with an infinitely high peak that has negative (excess) kurtosis.
Instead, kurtosis is a measure of the tails (outlier potential) of the distribution, not the peak. The only reason people think that there is a "high peak" when there is high kurtosis is that the outliers stretch the horizontal scale of the histogram, making the data appear concentrated in a narrow vertical strip. But if you zoom in on the bulk of the data in that strip, the peak can have any shape whatsoever. Further, if you compare the height of your histogram of standardized data with the height of a corresponding standard normal, either can be higher, no matter what your data show. The "height" mythology was debunked around 1945 by Kaplansky.
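A quick numerical illustration of that point (my own example, not from the references below): a sample whose bulk is perfectly flat but contains rare extreme values has enormous sample kurtosis, while a sharply peaked Laplace sample does not.

```python
# Kurtosis tracks tails/outliers, not peak shape (illustrative simulation).
import numpy as np
from scipy.stats import kurtosis  # excess kurtosis by default

rng = np.random.default_rng(0)
flat_with_outliers = np.concatenate([rng.uniform(-1, 1, 99_900),   # flat-topped bulk
                                     rng.normal(0, 100, 100)])     # rare, far-out values
sharp_laplace = rng.laplace(0, 1, 100_000)                         # sharply peaked at zero

print(kurtosis(flat_with_outliers))  # very large (in the thousands), despite the flat peak
print(kurtosis(sharp_laplace))       # about 3, despite the sharp peak
```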
For your data, you do not need a "peaked" distribution. Instead, you need a distribution that allows extreme values such as the ones you have observed. Examples include mixture distributions, lognormal distributions, t distributions with small degrees of freedom, or multivariate versions of these, if that is what you need.
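As a concrete starting point, here is a small Python sketch (my own illustration, assuming a univariate fit is a useful first step before moving to a multivariate version) comparing a normal fit with a heavy-tailed Student-t fit by their maximised log-likelihoods:

```python
# Compare a normal fit and a Student-t fit on heavy-tailed data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = stats.t.rvs(df=3, size=5000, random_state=rng)  # stand-in for one of your variables

norm_params = stats.norm.fit(x)   # (loc, scale)
t_params = stats.t.fit(x)         # (df, loc, scale)

ll_norm = stats.norm.logpdf(x, *norm_params).sum()
ll_t = stats.t.logpdf(x, *t_params).sum()
print(f"normal: {ll_norm:.1f}, Student-t: {ll_t:.1f}")  # the t fit wins by a wide margin

# The fitted t density can then be evaluated at any new value:
print(stats.t.pdf(0.0, *t_params))
```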
References:
Westfall, P.H. (2014). Kurtosis as Peakedness, 1905 – 2014. R.I.P. The American Statistician, 68, 191–195.
(A simplified discussion of the above paper is given in the talk section of the Wikipedia entry on kurtosis.)
After applying a Fourier transform to a signal, the energy of a single sine wave is often spread out across multiple bins (aka smearing). Have a look at the right side of the image below for an illustration:
I want to extract a list of peak frequencies. Just finding the highest bin is easy, but after that, smearing becomes a problem.
I would like to have a heuristic which tells me whether the magnitude of a specific bin could simply be the result of smearing, or whether there has to be another peak frequency in order to explain the signal. (It is better to miss some peaks than to report false positives.)
My naive approach would be to just calculate a few thousand examples and take the maximum of these to get an envelope curve so that any smearing is likely below that envelope.
But is there a better way to do this?
The FFT of any rectangularly windowed, pure, unmodulated sinusoid is a sampled sinc function. This sinc function, sin(πx)/(πx), is zero in every bin except one (the bin at the peak magnitude) only when the frequency of the input sinusoid completes exactly an integer number of periods within the FFT window.
For any other frequency, which will not fall at the exact center of an FFT bin, if you know that exact frequency and magnitude (which won't appear in any single FFT bin), you can calculate all the other bins by sampling the sinc function.
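A small numpy sketch of that idea (my own illustration): for a rectangular window, the finite-length DFT bins follow the closed-form periodic-sinc (Dirichlet kernel) expression, which is what the sinc above becomes for an N-point FFT, so every bin of an off-center tone can be predicted from its exact frequency:

```python
# Leakage ("smearing") of a complex sinusoid whose frequency falls between bins,
# compared with the closed-form prediction from its exact frequency.
import numpy as np

N = 64
f0 = 10.3                       # frequency in bins; non-integer, so it leaks
n = np.arange(N)
x = np.exp(2j * np.pi * f0 * n / N)
X = np.fft.fft(x)

k = np.arange(N)
d = f0 - k                      # distance of each bin from the true frequency
# Geometric sum of exp(2*pi*i*d*n/N) gives |X[k]| = |sin(pi*d)| / |sin(pi*d/N)|
predicted = np.abs(np.sin(np.pi * d)) / np.abs(np.sin(np.pi * d / N))

print(np.allclose(np.abs(X), predicted))  # True: one off-center tone explains every bin
```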
And, of course, if the input sinusoid isn't perfectly pure, but modulated in any way (amplitude, frequency or phase), this modulation will produce various sidebands in the FFT result as well.
We (my colleague and I) were given a device which sends us, every second, a large amount of discrete integer data (intensities) that tend to have a Gaussian shape. These pseudo-Gaussians flow in one by one, and we are supposed to pick the largest intensity from the center of each Gaussian as fast as possible. Moreover, the data contain noise, so we cannot assume that each Gaussian can be separated into two monotone parts; that is, we cannot rely on the simple rule that once the data start to decline, we have found the maximum.
My colleague came up with an idea:
introduce an intensity threshold to separate the Gaussians from each other
sum the intensities of each Gaussian to estimate its area, and then estimate its height from that area
But the question is: how can I quickly estimate the height of this pseudo-Gaussian from its area?
UPDATE:
To be more clear, the intensities I get represent "function values" of a Gaussian, or better, they represent the heights of histogram bins.
You could use a moving-average filter, and when its output starts to decline, take the maximum value in that window as your height. As long as the noise in the signal is fairly low in amplitude and high in frequency, that should work reasonably well. You can always combine it with thresholding if required. The people on the DSP site will probably have much better ideas though, so I'd ask there.
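A minimal Python sketch of that idea (my own illustration; the window length and threshold are made-up values you would tune for your device):

```python
# Smooth the stream with a moving average; when the smoothed signal turns from
# rising to falling while above a threshold, report the largest raw sample nearby.
import numpy as np

def pick_peaks(samples, window=15, threshold=100.0):
    samples = np.asarray(samples, dtype=float)
    smooth = np.convolve(samples, np.ones(window) / window, mode="same")
    peaks = []
    for i in range(1, len(smooth) - 1):
        is_local_max = smooth[i] > smooth[i - 1] and smooth[i] > smooth[i + 1]
        if is_local_max and smooth[i] > threshold:
            lo, hi = max(0, i - window), min(len(samples), i + window)
            peaks.append((i, samples[lo:hi].max()))  # height taken from the raw data
    return peaks
```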
I have a set of values which follow an exponential distribution. Now I want to calculate the rate parameter alpha. Can anybody help me with how to calculate it (I am using C++ to code it)?
If you know these values are from an exponential distribution, then the maximum likelihood estimate of λ (lambda, not alpha) is the reciprocal of the sample mean, i.e. λ = n / (x1 + x2 + … + xn), because the mean of the exponential distribution is 1/λ. This is a statistical calculation, since you are trying to estimate a parameter from observations.
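A one-line sketch of that estimate (shown in Python for brevity; it translates directly into a simple loop or std::accumulate call in C++):

```python
# Maximum likelihood estimate of the exponential rate: lambda_hat = n / sum(values).
def exponential_rate_mle(values):
    return len(values) / sum(values)

print(exponential_rate_mle([0.5, 1.2, 0.3, 2.0, 0.8]))  # ≈ 1.04
```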