I'm using the OpenCV SVM and I want to know whether it is possible to find out which training image is closest to my input image, instead of just the class the image belongs to.
Do you mean which training image? With an SVM, the short answer is no. The SVM builds a model used for classification, and that model doesn't contain all the training images, just the support vectors. Maybe a nearest-neighbour (NN) approach is better suited to your problem.
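To illustrate the nearest-neighbour suggestion: you can keep the same feature vectors you would have fed to the SVM and simply return the training image with the smallest distance to the query. A minimal NumPy sketch (the feature vectors and file names below are invented for illustration):

```python
import numpy as np

def closest_training_image(query, train_features, train_names):
    """Return the name of the training image whose feature vector is
    nearest (Euclidean distance) to the query feature vector."""
    dists = np.linalg.norm(train_features - query, axis=1)
    return train_names[int(np.argmin(dists))]

# Toy example: three 4-dimensional feature vectors.
train_features = np.array([[0.0, 0.0, 0.0, 0.0],
                           [1.0, 1.0, 1.0, 1.0],
                           [5.0, 5.0, 5.0, 5.0]])
train_names = ["img_a.png", "img_b.png", "img_c.png"]

print(closest_training_image(np.array([0.9, 1.1, 1.0, 0.8]),
                             train_features, train_names))  # img_b.png
```

The same idea scales to real image features; for large databases you would replace the brute-force distance computation with a k-d tree or FLANN index.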
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
I have seen these two lines of code in many online forums, but I don't understand where the SVM vector comes from, i.e. what training data was used to train this SVM, and can I find that data and the source code anywhere?
Also, why does the SVM vector have a length of 3781 for a 64x128 image?
Some insight into this would be really helpful.
Thanks
Here you are using a pre-trained people detector as the SVM. You can read about it in the documentation. I don't know exactly how it was trained (the algorithms and parameters), but according to this answer it was trained on the Daimler Pedestrian Detection Dataset.
cv2.HOGDescriptor_getDefaultPeopleDetector() returns an array of size 3781. Those are the coefficients the SVM uses to classify people: 3780 weights, one per HOG feature of the 64x128 detection window, plus one bias term. It has nothing to do with the particular input image you are using.
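The 3781 figure falls straight out of the default HOG parameters (64x128 window, 16x16 blocks of 2x2 cells, 8-pixel block stride, 9 orientation bins) plus the one bias term:

```python
win_w, win_h = 64, 128      # detection window size
block_w, block_h = 16, 16   # block size
stride = 8                  # block stride
cells_per_block = 4         # 2x2 cells of 8x8 pixels each
bins = 9                    # orientation bins per cell histogram

blocks_x = (win_w - block_w) // stride + 1   # 7 block positions across
blocks_y = (win_h - block_h) // stride + 1   # 15 block positions down
hog_len = blocks_x * blocks_y * cells_per_block * bins  # 7*15*4*9 = 3780
detector_len = hog_len + 1  # +1 for the SVM bias (rho)
print(detector_len)  # 3781
```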
Most importantly, you can train an SVM yourself to detect another object and use it as the SVM detector. Check this answer for more.
I have written an object classification program using BoW clustering and SVM classification. The program runs successfully. Now that I can classify the objects, I want to track them in real time by drawing a bounding rectangle/circle around them. I have researched and come up with the following ideas.
1) Use homography with the training images from the train data directory. The problem with this approach is that the train image should be exactly the same as the test image. Since I'm not detecting specific objects, the test images are closely related to the train images but not necessarily an exact match. In homography we find a known object in a test scene. Please correct me if I am wrong about homography.
2) Use feature tracking. I'm planning to extract the SIFT features in the test images that are similar to those in the train images and then track them by drawing a bounding rectangle/circle. But the issue here is: how do I know which features come from the object and which come from the background? Is there any member function in the SVM class which can return the key points or region of interest used to classify the object?
Thank you
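On the feature-tracking idea: one rough way to separate object features from background is to keep only the test-image keypoints whose descriptors match a training image well (e.g. survive a ratio test), then bound those points. A minimal sketch of the bounding step, with invented match coordinates:

```python
import numpy as np

def bounding_rect(points, margin=5):
    """Axis-aligned bounding rectangle (x, y, w, h) around matched
    keypoint coordinates, padded by a small margin."""
    pts = np.asarray(points)
    x0, y0 = pts.min(axis=0) - margin
    x1, y1 = pts.max(axis=0) + margin
    return int(x0), int(y0), int(x1 - x0), int(y1 - y0)

# Pretend these are (x, y) positions of SIFT matches that passed a ratio test.
matches = [(120, 80), (140, 95), (131, 110), (155, 102)]
print(bounding_rect(matches))  # (115, 75, 45, 40)
```

The resulting rectangle can be drawn with cv2.rectangle; outlier matches from the background would need to be filtered first (ratio test, or RANSAC if the object is rigid).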
I'm trying to implement an image classification program that uses BoW to build the visual vocabulary and an SVM to classify. I have done the BoW part and I'm stuck on the SVM training part.
This is what I have coded so far.
1) Extract (detect and describe) features from the training images (I have 4 classes of images)
2) Build an unclustered matrix of all features of the training images
3) Use k-means clustering to cluster the data into bags (BoW)
4) Save the visual vocabulary in YML format for access by the SVM predictor
5) Build a BoW descriptor extractor combining SIFT and FLANN to extract features from the test image
6) Build the SVM classifier
Now my question is how to train the SVM classifier before I proceed with the actual prediction.
According to the OpenCV 3.1 (not 2.4) class reference for SVM,
we can train the classifier using the trainAuto() member function. In order to create the data for training, we have to use one of the following member functions.
Having said that, I prefer to use the TrainData::create() function to create the training data. What are the arguments of TrainData::create()? How do I build these arguments for my case, where the training images are all in the same directory, separated into folders? What do the arguments layout, responses, varIdx, sampleIdx, sampleWeights and varType mean? Sorry if I'm asking a basic question; I'm new to OpenCV and I can't find sufficient help for OpenCV 3.1 on the internet. All the material I find relates to OpenCV 2.4, and 3.1 is a big overhaul compared to the former version. Furthermore, the documentation for 3.1 is in beta and does not help much either. Thank you. Any help is appreciated.
I'm working on a project (using OpenCV) where I need to accomplish the following:
Train a classifier so that it can detect people in a thermal image.
I decided to use OpenCV and classify with HOG and SVM.
So far, I have gotten to the point where I can:
Load several images, positive and negative samples (about 1000)
Extract the HOG features for each image
Store the features with their labels
Train the SVM
Get the SVM settings (alpha and bias) and set them as the HOG descriptor's SVM detector
Run testing
The testing results are horrible, even worse than with the original
hog.setSVMDetector(HOGDescriptor::getDefaultPeopleDetector());
I think I'm computing the HOG features wrong, because I compute them for the whole image, but I need them computed on the part of the image where the person is. So I guess I have to crop the images where the person is, resize the crops to some window size, train the SVM to classify those windows, and THEN pass it to the HOG descriptor.
When I test images directly on the trained SVM, I observe almost 100% false positives. I guess this is caused by the problem I described earlier.
I'm open to any ideas.
Regards,
hh
After obtaining the image dataset, a feature database is constructed for all images: for each image, a vector based on the mean and standard deviation of the RGB and HSV colour models over a portion of the image. How can I use an SVM to retrieve related images from the database once a query image is given?
Also, how can I use unsupervised learning for the above problem?
Assuming the dataset images are unlabeled, applying an SVM would require a way of knowing their class labels, since an SVM is a form of supervised learning: it learns from labeled examples how to assign class labels to new data. You would need another method for generating class labels, such as unsupervised clustering, so this approach does not seem suitable if you only have feature vectors and no class labels.
A neural network allows for unsupervised learning with unlabeled data, but it is a rather complex approach and an active subject of academic research. You may want to consider a simpler machine learning method such as k-Nearest Neighbors, which returns the k training samples closest to the query in your feature space. The algorithm is simple to implement and is found in many machine learning libraries; for example, in Python you can use scikit-learn.
I am unsure what type of images you are working with, but you might also want to explore using feature detector algorithms such as SIFT rather than just pixel intensities.
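The k-Nearest-Neighbors suggestion can be sketched with scikit-learn; the feature vectors and file names below are invented stand-ins for the mean/sd colour features:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Stand-in database: one mean/sd colour feature vector per image.
db_features = np.array([[0.2, 0.1, 0.9, 0.4],
                        [0.8, 0.7, 0.1, 0.3],
                        [0.25, 0.15, 0.85, 0.5]])
db_names = ["beach.jpg", "forest.jpg", "sea.jpg"]

# Index the database, then retrieve the 2 images nearest to the query.
nn = NearestNeighbors(n_neighbors=2).fit(db_features)
query = np.array([[0.22, 0.12, 0.88, 0.45]])
dists, idx = nn.kneighbors(query)
print([db_names[i] for i in idx[0]])  # ['beach.jpg', 'sea.jpg']
```

No labels are needed: retrieval is purely by distance in the feature space, which is exactly the unlabeled setting described in the question.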