Plot ROC curve for ANN and SVM - C++

I'm using an ANN and an SVM for classification of 6 classes. Both work well, but I'd like to measure the accuracy of the classifiers using ROC curves.
I can easily get the confusion matrix for each classifier, but I don't know which parameter I should vary to get more points and actually plot the ROC curves.
Could someone help me, please?
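For context, the usual recipe (also described in the answers further down) is to take the classifier's continuous output per class (the ANN output activation or the SVM decision value), treat each class one-vs-rest, and sweep a threshold over that score; every threshold gives one (FPR, TPR) point. A minimal plain C++ sketch with made-up scores and labels, no OpenCV types assumed:

    #include <algorithm>
    #include <cstdio>
    #include <functional>
    #include <vector>

    // One ROC point for a given decision threshold.
    struct RocPoint { double fpr, tpr; };

    // scores[i] : continuous classifier output for the positive class (one-vs-rest)
    // labels[i] : true label, true = positive class
    std::vector<RocPoint> rocCurve(const std::vector<double>& scores,
                                   const std::vector<bool>& labels)
    {
        // Use each observed score as a candidate threshold, from high to low,
        // so the curve runs from (0,0) towards (1,1).
        std::vector<double> thresholds = scores;
        std::sort(thresholds.begin(), thresholds.end(), std::greater<double>());

        const double P = std::count(labels.begin(), labels.end(), true);
        const double N = labels.size() - P;

        std::vector<RocPoint> curve;
        for (double t : thresholds) {
            double tp = 0, fp = 0;
            for (size_t i = 0; i < scores.size(); ++i)
                if (scores[i] >= t) (labels[i] ? tp : fp) += 1;
            curve.push_back({ fp / N, tp / P });   // (FPR, TPR) point
        }
        return curve;
    }

    int main() {
        // Hypothetical scores for one class (one-vs-rest) and its ground truth.
        std::vector<double> scores = { 0.9, 0.8, 0.65, 0.6, 0.4, 0.3, 0.2 };
        std::vector<bool>   labels = { true, true, false, true, false, false, false };

        for (const RocPoint& p : rocCurve(scores, labels))
            std::printf("FPR=%.2f TPR=%.2f\n", p.fpr, p.tpr);
    }

For 6 classes this is simply repeated once per class, giving 6 one-vs-rest ROC curves.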

Related

Dlib SVM predict and distance

I am using this example code: the FHOG Object Detector from Dlib. Is it possible to get the prediction and the distance (confidence / probability of the class) of a sample? I want to generate a ROC curve from the distances of the samples.
(Like CvSVM::predict in OpenCV.)
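If I remember the dlib API correctly (please check against your dlib version and adjust the includes if needed), object_detector has an operator() overload that also returns the detection confidence, i.e. how far the detection score is above the detector's threshold; that value can serve as the "distance" for an ROC sweep. A rough sketch, with the model file and image path as placeholders:

    #include <dlib/image_processing.h>
    #include <dlib/image_transforms.h>
    #include <dlib/image_io.h>
    #include <iostream>
    #include <vector>
    #include <utility>

    int main()
    {
        using namespace dlib;

        // Detector type used by the fhog_object_detector example.
        typedef scan_fhog_pyramid<pyramid_down<6> > image_scanner_type;
        object_detector<image_scanner_type> detector;
        deserialize("detector.svm") >> detector;      // placeholder model file

        array2d<unsigned char> img;
        load_image(img, "sample.jpg");                // placeholder test image

        // Overload that also reports the detection confidence (the amount by
        // which the detection score exceeds the detector's threshold).
        std::vector<std::pair<double, rectangle> > dets;
        detector(img, dets, /*adjust_threshold=*/ -1.0);  // negative value: also return weaker detections

        for (size_t i = 0; i < dets.size(); ++i)
            std::cout << "confidence: " << dets[i].first
                      << "  box: " << dets[i].second << std::endl;
    }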

Why accuracy of CNN using Euclidean distance is 1

I am a super noob in computer vision and ML. I was watching this video tutorial (https://www.youtube.com/watch?v=hAeos2TocJ8) and the professor said that the accuracy of the nearest neighbor classifier on the training data, when using the Euclidean distance, is 0. Can someone please explain why? I really appreciate your help!
You heard it wrong. In the video, at 17:10, he actually says that the accuracy of the nearest neighbor classifier on the training data, when using the Euclidean distance, is 100%, not 0. Since the training data itself is used for testing, the image under test is also present in the training set, so the minimum Euclidean distance between the test image and the training data is always 0: the nearest neighbor of each test image is the image itself, and its label always matches.
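To see why, here is a tiny plain C++ sketch with made-up 2-D points: when the queries are the training samples themselves, each query's nearest neighbor under the Euclidean distance is the query itself at distance 0, so every prediction is correct.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Sample { double x, y; int label; };

    // 1-nearest-neighbour classification under Euclidean distance.
    int predict1NN(const std::vector<Sample>& train, double x, double y)
    {
        int best = -1;
        double bestDist = 1e300;
        for (size_t i = 0; i < train.size(); ++i) {
            double d = std::hypot(train[i].x - x, train[i].y - y);
            if (d < bestDist) { bestDist = d; best = train[i].label; }
        }
        return best;
    }

    int main()
    {
        // Made-up training set.
        std::vector<Sample> train = {
            {0.0, 0.0, 0}, {1.0, 0.5, 0}, {5.0, 5.0, 1}, {6.0, 4.5, 1}
        };

        // Evaluate on the training set itself: every query is its own
        // nearest neighbour (distance 0), so accuracy is 100%.
        int correct = 0;
        for (const Sample& s : train)
            if (predict1NN(train, s.x, s.y) == s.label) ++correct;

        std::printf("training accuracy: %d/%zu\n", correct, train.size());
    }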

What is an ROC curve?

Can someone be kind enough to explain what an ROC curve actually represents with respect to tracking in a test sequence please? An example of an ROC curve is shown in the figure below.
The comments to the original question contain some very useful links for understanding ROC curves in general and the discrimination threshold in question. Here is an attempt to explain the reference (Ref. 1) used by the OP, with further information specific to the problem of detecting pedestrians.
How the ROC curves are obtained in (Ref. 1), and what the discrimination threshold is in this case:
1. Motion filters and appearance filters ("f_i" in eq. (2), p. 156) are evaluated using the "integral image" of various temporal/spatial difference images from the video sequences.
2. Using all these filters, the learning algorithm builds the best classifier (C in eq. (1), p. 156) separating positive examples (e.g., pedestrians) from negative examples (e.g., a selection of non-pedestrian examples). The classifier C is a thresholded sum of features F, as given in eq. (1); a feature F is a filter "f_i" thresholded by a feature threshold "t_i".
3. The thresholds involved (i.e., the filter thresholds "t_i" and the classifier threshold "Theta") are computed during AdaBoost training, which chooses the features with the lowest weighted error on the training examples.
4. As in (Ref. 2), a cascade of such classifiers is used to make the detector extremely efficient. During training, each stage of the cascade (a boosted classifier) is trained using 2250 positive examples (example in Fig. 5, p. 158) and 2250 negative examples.
5. The final cascade detector is run over validation sequences to obtain the true positive rate and the false positive rate. This is done by comparing the output of the cascade detector (presence or absence of a pedestrian) to the ground truth (presence or absence of a pedestrian in the same region, based on ground-truth annotation or manual review of the video sequence). For one set of threshold values for the entire cascade ("t_i" and "Theta" over all classifiers in the cascade), one true positive rate and one false positive rate are obtained; this gives one point on the ROC curve. A simple MATLAB example for measuring the true positive rate and false positive rate from a given set of classifier outputs and ground truth can be found here: http://www.mathworks.com/matlabcentral/fileexchange/21212-confusion-matrix---matching-matrix-along-with-precision--sensitivity--specificity-and-model-accuracy (a plain C++ sketch of the same computation follows this list).
6. So each point on the ROC curve depends on the thresholds chosen for all the cascade layers (hence the discrimination threshold is not a single number in this case). By adjusting these thresholds, one at a time, and repeating step 5, the true positive rate and false positive rate change, giving further points on the ROC curve.
7. This process is repeated for both the dynamic and the static detector to obtain the two ROC curves in the figure.
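For completeness, here is a minimal plain C++ counterpart of the MATLAB example linked in step 5. The boolean arrays are made up: detected[i] is the cascade output for window i and truth[i] the corresponding ground truth.

    #include <cstdio>
    #include <vector>

    // True positive rate and false positive rate from detector outputs
    // and ground truth for the same set of regions/windows.
    void tprFpr(const std::vector<bool>& detected,
                const std::vector<bool>& truth,
                double& tpr, double& fpr)
    {
        int tp = 0, fp = 0, tn = 0, fn = 0;
        for (size_t i = 0; i < detected.size(); ++i) {
            if (truth[i])  detected[i] ? ++tp : ++fn;
            else           detected[i] ? ++fp : ++tn;
        }
        tpr = double(tp) / (tp + fn);   // sensitivity / recall
        fpr = double(fp) / (fp + tn);   // 1 - specificity
    }

    int main()
    {
        // Hypothetical detector outputs and ground truth labels.
        std::vector<bool> detected = { true, true, false, true, false, false };
        std::vector<bool> truth    = { true, false, false, true, true, false };

        double tpr, fpr;
        tprFpr(detected, truth, tpr, fpr);
        std::printf("TPR = %.2f, FPR = %.2f  (one ROC point)\n", tpr, fpr);
    }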
More generally, ROC curves can be used to compare the performance of classifiers in distinguishing between classes, for example pedestrian versus non-pedestrian input samples. The area under the ROC curve (AUC) is used as a measure of the classifier's ability to distinguish between the classes.
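As a small illustration of that last sentence, the AUC can be approximated from a set of ROC points with the trapezoidal rule; a plain C++ sketch over made-up (FPR, TPR) points:

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    struct RocPoint { double fpr, tpr; };

    // Trapezoidal-rule approximation of the area under the ROC curve.
    double auc(std::vector<RocPoint> pts)
    {
        // Make sure the curve starts at (0,0) and ends at (1,1).
        pts.push_back({0.0, 0.0});
        pts.push_back({1.0, 1.0});
        std::sort(pts.begin(), pts.end(),
                  [](const RocPoint& a, const RocPoint& b) { return a.fpr < b.fpr; });

        double area = 0.0;
        for (size_t i = 1; i < pts.size(); ++i)
            area += (pts[i].fpr - pts[i - 1].fpr) * (pts[i].tpr + pts[i - 1].tpr) / 2.0;
        return area;
    }

    int main()
    {
        // Hypothetical ROC points; 0.5 would be chance level, 1.0 a perfect classifier.
        std::vector<RocPoint> pts = { {0.1, 0.6}, {0.2, 0.8}, {0.4, 0.9}, {0.7, 0.95} };
        std::printf("AUC ~ %.3f\n", auc(pts));
    }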
Quotes are from:
(Ref. 1) P. Viola, M. J. Jones, and D. Snow, "Detecting Pedestrians Using Patterns of Motion and Appearance", International Journal of Computer Vision 63(2), 153-161, 2005. [online: as of April 2015] http://lear.inrialpes.fr/people/triggs/student/vj/viola-ijcv05.pdf
(Ref. 2) P. Viola and M. J. Jones, "Rapid object detection using a boosted cascade of simple features", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2001. More information at: Viola-Jones Algorithm - "Sum of Pixels"?

Detect eye iris in a binary image

I am developing an eye tracker application in Emgu CV. To track eyes I need to detect the iris accurately, so I used Hough circles, but in some cases this fails because the shape of the iris is not a perfect circle. So I decided to convert the eye image to binary and detect the iris there.
To convert it to binary I used:
    grayframeright_1 = grayframeright_1.ThresholdBinary(new Gray(threshold_value), new Gray(220));
and the result is
Now, how can I detect the iris in the above binary image? Can I run a blob detector to detect the iris?
Please help me figure this out; your help will be highly appreciated. I am running out of time for my deadline.
Providing a code sample would be useful.
Thanks in advance
You can try erosion. I used it in an image processing class at university to find the visual center of airplanes in the sky, and it worked surprisingly well.
Erosion is a fairly simple operator that is also used in broader practice, for example in blob detection, which you already mentioned.
Eroding removes the border pixels of an irregular shape, so that just before the shape completely vanishes only a few pixels are left. The geometric center of those pixels is c, the visual center of the irregular shape. Starting from c, draw a circle of radius r that is completely inscribed in the irregular shape. The circle at c with radius r is an approximation of the iris. Or at least so the story goes.
When I say erosion I mean this: example of erosion
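A rough OpenCV C++ sketch of that idea (the question uses Emgu CV, whose Erode/Moments wrappers map onto the same calls; the file names are placeholders, and using the distance transform to pick the inscribed radius r is my own suggestion, not part of the original recipe):

    #include <opencv2/opencv.hpp>

    int main()
    {
        // Binary eye image: iris pixels assumed white (255), background black (0).
        cv::Mat mask = cv::imread("eye_binary.png", cv::IMREAD_GRAYSCALE);
        cv::threshold(mask, mask, 127, 255, cv::THRESH_BINARY);

        cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(3, 3));

        // Erode repeatedly; keep the last non-empty result.
        cv::Mat prev = mask.clone(), eroded;
        while (true) {
            cv::erode(prev, eroded, kernel);
            if (cv::countNonZero(eroded) == 0) break;   // next step would erase the shape
            prev = eroded.clone();
        }

        // Centroid of the surviving pixels = visual center c.
        cv::Moments m = cv::moments(prev, /*binaryImage=*/true);
        cv::Point c(cvRound(m.m10 / m.m00), cvRound(m.m01 / m.m00));

        // Radius of the largest circle centered at c that fits inside the original
        // shape: the distance from c to the nearest background pixel.
        cv::Mat dist;
        cv::distanceTransform(mask, dist, cv::DIST_L2, 3);
        float r = dist.at<float>(c);

        // Draw the approximated iris for inspection.
        cv::Mat vis;
        cv::cvtColor(mask, vis, cv::COLOR_GRAY2BGR);
        cv::circle(vis, c, cvRound(r), cv::Scalar(0, 0, 255), 2);
        cv::imwrite("iris_estimate.png", vis);
    }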
This was just my personal idea based on university work, I've never done this in the industry.
Maybe you should look at a more serious approach to the problem which does not use erosion but wavelets: Iris recognition
I'm very curious. If you try this could you please share your results/findings? A quick comment would suffice!

How does Weka draw an ROC curve for KNN (IBk)?

I have read a lot about ROC curves before posting.
Still, I don't understand how Weka draws its ROC curves: I can't find the threshold that is varied to generate the points on the curve. Thanks.
Weka uses a class called ThresholdCurve, built from the predictions collected by the Evaluation class.
This class produces all the points of the ROC curve.
What the threshold is varied over depends on the classifier used: essentially, it is swept over the class probability estimates the classifier outputs (for IBk, the proportion of the k nearest neighbours voting for the class). See here for more details.