I'm new to machine learning and I'm doing a project using the OpenCV open source library. I have extracted features from different images and evaluated them; now I want to classify the objects in those images using an SVM, but I don't know how to proceed. By the way, I used three different feature extractors: SIFT, SURF and the FAST detector (with their descriptors).
Can you give me a guide and some examples for classifying more than 5 types of objects against a background, such as coffee cups, Coca-Cola cans, basketballs, etc.?
I'm doing my project in C++ on Ubuntu.
With the information provided, all I can give you is the following list:
a category-level classification tutorial from the CVML 2011 summer school. It includes code (unfortunately for you, in MATLAB) which can help you understand the concepts behind it.
the paper "A Practical Guide to Support Vector Classification", which clearly explains how to prepare the data and how to train and test an SVM
and of course the OpenCV documentation on SVM training (a minimal sketch of the basic workflow is shown below).
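To give a rough idea of what the OpenCV side looks like, here is a minimal sketch of the basic SVM workflow with the 3.x ml module. The toy descriptors, the labels and the file name svm_model.yml are only placeholders; in practice each row would be a fixed-length descriptor built from your SIFT/SURF features.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/ml.hpp>
#include <iostream>
using namespace cv;
using namespace cv::ml;

int main()
{
    // Toy data: 4 samples with 2-dimensional "descriptors" and 2 classes.
    // In practice each row is a fixed-length descriptor (e.g. a bag-of-words
    // histogram built from your SIFT/SURF features).
    float data[4][2] = { {1.0f, 1.0f}, {1.2f, 0.9f}, {5.0f, 5.0f}, {5.1f, 4.8f} };
    int   lab[4]     = { 0, 0, 1, 1 };                  // e.g. 0 = cup, 1 = ball
    Mat features(4, 2, CV_32F, data);
    Mat labels(4, 1, CV_32S, lab);

    Ptr<SVM> svm = SVM::create();
    svm->setType(SVM::C_SVC);                           // classification
    svm->setKernel(SVM::RBF);                           // kernel choice is problem dependent
    svm->setTermCriteria(TermCriteria(TermCriteria::MAX_ITER + TermCriteria::EPS, 1000, 1e-6));
    svm->train(features, ROW_SAMPLE, labels);
    svm->save("svm_model.yml");                         // placeholder file name

    // Classify a new descriptor of the same length and type as the training rows.
    Mat query = (Mat_<float>(1, 2) << 5.0f, 5.0f);
    float predicted = svm->predict(query);              // returns the class label
    std::cout << "predicted class: " << predicted << std::endl;
    return 0;
}
```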
As already pointed out by @jillesdewit, you should try to be more specific.
I'm trying to implement an image classification program that uses BoW to create the visual vocabulary and an SVM for classification. I have done the BoW part and I'm stuck on the SVM training part.
This is what I have coded so far (a rough sketch of these steps is shown right after the list):
1) Extract (detect and describe) features from the training images (I have 4 classes of images).
2) Build an unclustered matrix of all the features from the training images.
3) Use k-means clustering to cluster the data into bags (BoW).
4) Save the visual vocabulary in YML format so it can be accessed by the SVM predictor.
5) Build a BoW descriptor extractor with SIFT and FLANN to extract features from the test image.
6) Build the SVM classifier.
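For reference, here is a rough sketch of how those steps map onto OpenCV's built-in BoW helpers (BOWKMeansTrainer and BOWImgDescriptorExtractor). It assumes SIFT from the opencv_contrib xfeatures2d module in OpenCV 3.x; the image paths, the vocabulary size of 100 and the file name vocabulary.yml are placeholders.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/xfeatures2d.hpp>
#include <vector>
#include <string>
using namespace cv;

int main()
{
    std::vector<std::string> trainImages = { "img1.jpg", "img2.jpg" };  // your training set

    Ptr<Feature2D> sift = xfeatures2d::SIFT::create();

    // Steps 1-2: detect/describe features and stack all descriptors (unclustered matrix).
    BOWKMeansTrainer bowTrainer(100);                   // vocabulary size = 100 clusters
    for (const auto& path : trainImages)
    {
        Mat img = imread(path, IMREAD_GRAYSCALE);
        std::vector<KeyPoint> kp;
        Mat desc;
        sift->detectAndCompute(img, noArray(), kp, desc);
        if (!desc.empty()) bowTrainer.add(desc);
    }

    // Step 3: k-means clustering -> visual vocabulary.
    Mat vocabulary = bowTrainer.cluster();

    // Step 4: save the vocabulary for the SVM stage.
    FileStorage fs("vocabulary.yml", FileStorage::WRITE);
    fs << "vocabulary" << vocabulary;
    fs.release();

    // Step 5: BoW descriptor extractor (SIFT + FLANN) for any image, e.g. a test image.
    Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("FlannBased");
    BOWImgDescriptorExtractor bowDE(sift, matcher);
    bowDE.setVocabulary(vocabulary);

    Mat test = imread("test.jpg", IMREAD_GRAYSCALE);
    std::vector<KeyPoint> kp;
    sift->detect(test, kp);
    Mat bowDescriptor;                                  // 1 x vocabularySize histogram
    bowDE.compute(test, kp, bowDescriptor);             // this row is what the SVM consumes
    return 0;
}
```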
Now my question is how to train the SVM classifier before I proceed with the actual prediction.
According to the OpenCV 3.1 (not 2.4) class reference for SVM,
we can train the classifier using the trainAuto() member function. In order to create the data for training, we have to use one of the following member functions.
Having said that, I prefer to use the TrainData::create() function to create the training data. What are the arguments of TrainData::create()? How do I build them for my case, where the training images are all in the same directory, separated into folders by class? What do the arguments layout, responses, varIdx, sampleIdx, sampleWeights and varType mean? Sorry if I'm asking a basic question; I'm new to OpenCV and I can't find sufficient help for OpenCV 3.1 on the internet. All the material I find relates to OpenCV 2.4, and OpenCV 3.1 is a big overhaul compared to the former version. Furthermore, the documentation for 3.1 is at a beta stage and does not help much either. Thank you, any help is appreciated.
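For illustration, here is a schematic sketch of how the TrainData::create() arguments fit together: one matrix of samples, a layout flag and a matrix of responses, with every optional argument left as noArray(). The sample and response matrices below are placeholders (it will not run until they are filled from the BoW stage).

```cpp
#include <opencv2/core.hpp>
#include <opencv2/ml.hpp>
using namespace cv;
using namespace cv::ml;

int main()
{
    Mat samples;    // N x vocabularySize, CV_32F  (fill with your stacked BoW descriptors)
    Mat responses;  // N x 1, CV_32S               (0, 1, 2, 3 for the four classes)

    // layout       : ROW_SAMPLE means each training sample is one row of "samples"
    //                (COL_SAMPLE would mean one column per sample).
    // responses    : the label (or regression target) for each sample.
    // varIdx       : optional subset of feature columns to use; noArray() = use all.
    // sampleIdx    : optional subset of rows to train on;       noArray() = use all.
    // sampleWeights: optional per-sample weights;               noArray() = equal weights.
    // varType      : optional per-variable type (ordered/categorical); noArray() lets
    //                OpenCV infer it from the response type.
    Ptr<TrainData> tdata = TrainData::create(samples, ROW_SAMPLE, responses,
                                             noArray(), noArray(),
                                             noArray(), noArray());

    Ptr<SVM> svm = SVM::create();
    svm->setType(SVM::C_SVC);
    svm->setKernel(SVM::RBF);
    svm->trainAuto(tdata);              // cross-validates C, gamma, etc. automatically
    svm->save("svm_classifier.yml");    // placeholder file name
    return 0;
}
```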
I am currently working on a hand recognition system. I have been able to detect the hand and draw a contour for it. Now I have to extract features from the hand region. What is the best feature extraction method that I can use?
I was thinking of using Local Binary Patterns (LBP), but since I am new to computer vision I don't know how to use them.
Perhaps you should look at the histogram of oriented gradients (HOG), which can be considered a more general version of LBP. You can collect multiple images of hands; by extracting HOG features from each image and using an SVM or neural-network classifier, you can learn a statistical model of hand poses. This will help in recognizing an unseen hand. Also look at the current literature on deep learning.
A C++ implementation of HOG is available in the VLFeat library [1], which can be called from OpenCV. HOG can also be computed with OpenCV itself [2]; a minimal sketch is shown after the references.
[1] http://www.vlfeat.org/overview/hog.html
[2] http://goo.gl/8jTetR
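Here is a small sketch of extracting HOG features with OpenCV's HOGDescriptor and feeding them to an SVM. The 64x128 window, the file names and the labels are placeholders you would replace with your own hand crops.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/objdetect.hpp>   // HOGDescriptor
#include <opencv2/ml.hpp>
#include <vector>
#include <string>
using namespace cv;

int main()
{
    // winSize, blockSize, blockStride, cellSize, nbins
    HOGDescriptor hog(Size(64, 128), Size(16, 16), Size(8, 8), Size(8, 8), 9);

    std::vector<std::string> files  = { "hand1.png", "not_hand1.png" };  // your data
    std::vector<int>         labels = { 1, 0 };                          // 1 = hand

    Mat features, responses;
    for (size_t i = 0; i < files.size(); ++i)
    {
        Mat img = imread(files[i], IMREAD_GRAYSCALE);
        resize(img, img, hog.winSize);                // every sample must match the window

        std::vector<float> desc;
        hog.compute(img, desc);                       // one HOG vector per image

        features.push_back(Mat(desc).reshape(1, 1));  // append as a row (CV_32F)
        responses.push_back(labels[i]);               // one label per row (CV_32S)
    }

    Ptr<ml::SVM> svm = ml::SVM::create();
    svm->setKernel(ml::SVM::LINEAR);                  // linear kernels pair well with HOG
    svm->train(features, ml::ROW_SAMPLE, responses);
    svm->save("hand_hog_svm.yml");                    // placeholder file name
    return 0;
}
```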
I intend to calculate Haar-like features of input images, and then classify those features using an SVM.
My question is: is there some library (C++ or MATLAB) for calculating the Haar-like features of an image that I can use?
By the way, I know about the opencv_traincascade.exe application from OpenCV, but I wonder whether there is separate code just for calculating Haar-like features in OpenCV.
I've found the source code of opencv_traincascade.exe and opencv_haartraining.exe. It is in the directory ".\sources\apps\".
The code that calculates the Haar-like features of an image is in the class CvHaarEvaluator in haarfeatures.cpp, but I can't find any explanation of its members.
As far as I know, CvHaarEvaluator is used once in CvCascadeClassifier.cpp, and the latter is then used once in traincascade.cpp, but I also can't find any explanation of traincascade.cpp.
Since it seems it will take me a lot of time to understand this source code, I've decided to implement a simple version myself.
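For anyone attempting the same, a simple Haar-like feature can be computed directly from an integral image. The sketch below evaluates a two-rectangle "edge" feature (left half minus right half) at an arbitrary window; the file name and window coordinates are only illustrative.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <iostream>
using namespace cv;

// Sum of pixels inside rect r, using an integral image ii of size (rows+1) x (cols+1), CV_32S.
static int rectSum(const Mat& ii, Rect r)
{
    return ii.at<int>(r.y, r.x)
         + ii.at<int>(r.y + r.height, r.x + r.width)
         - ii.at<int>(r.y, r.x + r.width)
         - ii.at<int>(r.y + r.height, r.x);
}

int main()
{
    Mat img = imread("input.png", IMREAD_GRAYSCALE);   // placeholder file name
    Mat ii;
    integral(img, ii, CV_32S);                         // (rows+1) x (cols+1) integral image

    // Two-rectangle horizontal edge feature over a 24x24 window at (x, y).
    int x = 10, y = 10, w = 24, h = 24;
    Rect left (x,         y, w / 2, h);
    Rect right(x + w / 2, y, w / 2, h);
    int feature = rectSum(ii, left) - rectSum(ii, right);

    std::cout << "haar-like edge response: " << feature << std::endl;
    return 0;
}
```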
Anyway, if anybody finds an explanation or example of how to use CvHaarEvaluator, please tell me. Thanks!
I am trying to develop an automatic (or semi-automatic) image annotator for my final-year project with OpenCV. I have been studying many OpenCV resources and have come across cascade classification for training and detection. I understood that part and also tried the face detection tutorial provided with OpenCV. So now I know how to train a classifier and detect objects.
However, I still cannot understand how I can annotate the objects present in the image.
For example, the system will show that this is an object, but I want the system to show that it is a ball. How can I accomplish that?
Thanks in advance.
One binary classifier (detector) can separate objects into two classes:
positive - the object type the classifier was trained for,
and negative - all others.
If you need to detect several distinct classes you should use one detector for each class, or you can train a multi-class classifier (a "one vs. all" scheme, for example), but that usually runs slower and with lower accuracy (because a detector works best when it searches for similar objects). You can also take a look at convolutional networks (by Yann LeCun).
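As an illustration of the one-detector-per-class idea, here is a hypothetical sketch with OpenCV SVMs: one binary SVM per class, with the most confident detector winning at prediction time. The feature layout and the use of the raw SVM response are assumptions; in particular, the sign convention of the raw output should be checked on a validation set.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/ml.hpp>
#include <vector>
#include <limits>
using namespace cv;
using namespace cv::ml;

// features: one row per sample (CV_32F); labels: class index 0..numClasses-1 (CV_32S).
std::vector<Ptr<SVM>> trainOneVsAll(const Mat& features, const Mat& labels, int numClasses)
{
    std::vector<Ptr<SVM>> detectors;
    for (int c = 0; c < numClasses; ++c)
    {
        // Relabel: +1 for the current class, -1 for everything else.
        Mat binLabels(labels.rows, 1, CV_32S);
        for (int i = 0; i < labels.rows; ++i)
            binLabels.at<int>(i) = (labels.at<int>(i) == c) ? 1 : -1;

        Ptr<SVM> svm = SVM::create();
        svm->setType(SVM::C_SVC);
        svm->setKernel(SVM::LINEAR);
        svm->train(features, ROW_SAMPLE, binLabels);
        detectors.push_back(svm);
    }
    return detectors;
}

// Returns the index of the class whose detector gives the strongest raw response.
// Caveat: the sign of the raw SVM output depends on the label encoding, so verify
// it on a validation set (and flip the score if necessary).
int predictOneVsAll(const std::vector<Ptr<SVM>>& detectors, const Mat& sample)
{
    int best = -1;
    float bestScore = -std::numeric_limits<float>::max();
    for (int c = 0; c < (int)detectors.size(); ++c)
    {
        float score = detectors[c]->predict(sample, noArray(), StatModel::RAW_OUTPUT);
        if (score > bestScore) { bestScore = score; best = c; }
    }
    return best;
}
```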
This is a very hard task. I suggest simplifying it by using the latent SVM detector and limiting yourself to the models it supplies:
http://docs.opencv.org/modules/objdetect/doc/latent_svm.html
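For orientation, here is a brief (untested) sketch of how the OpenCV 2.4 LatentSvmDetector is typically used with the pre-trained model files from that page; the model file names and the test image are placeholders.

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/objdetect/objdetect.hpp>
#include <iostream>
#include <vector>
#include <string>
using namespace cv;

int main()
{
    // Placeholder paths to the pre-trained latent SVM model files.
    std::vector<std::string> models = { "person.xml", "car.xml" };
    LatentSvmDetector detector(models);
    if (detector.empty()) { std::cerr << "could not load models" << std::endl; return 1; }

    Mat image = imread("scene.jpg");                    // placeholder test image
    std::vector<LatentSvmDetector::ObjectDetection> detections;
    detector.detect(image, detections, 0.5f);           // 0.5 = overlap threshold

    for (size_t i = 0; i < detections.size(); ++i)
        std::cout << detector.getClassNames()[detections[i].classID]
                  << " score=" << detections[i].score
                  << " at (" << detections[i].rect.x << "," << detections[i].rect.y << ")"
                  << std::endl;
    return 0;
}
```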
What is the best method for traffic sign detection and recognition?
I have reviewed the popular traffic sign detection methods in the recent literature, but I don't know which approach is best.
I would like to use color-based and shape-based detection methods.
I work on image processing using OpenCV in Visual Studio (C++).
Try this one:
https://sites.google.com/site/mcvibot2011sep/
Check dlib. The file examples/train_object_detector.cpp* has some details on how this can be achieved. It uses a feature description technique called Histogram of Oriented Gradients (HOG).
Check the following links for a starting point:
Detecting Road Signs in Mapillary Images with dlib C++
Dlib 18.6 released: Make your own object detector!
* Note: don't just use the examples to train your detector! Read the files as a guide/tutorial. These example programs assume that you are trying to detect faces and make some adjustments based on that (such as adding left-right mirrored copies of the training images since faces are symmetric, which can be disastrous for some signs).
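As a rough guide, the training part boils down to something like the following condensed sketch, adapted from that dlib example; the dataset file signs.xml (in dlib's imglab format), the 80x80 window and the C value are assumptions you would adjust for your signs.

```cpp
#include <dlib/svm_threaded.h>
#include <dlib/image_processing.h>
#include <dlib/data_io.h>
#include <vector>
using namespace dlib;

int main()
{
    // Images and their ground-truth boxes, annotated with dlib's imglab tool.
    dlib::array<array2d<unsigned char>> images;
    std::vector<std::vector<rectangle>> boxes;
    load_image_dataset(images, boxes, "signs.xml");     // placeholder dataset file

    typedef scan_fhog_pyramid<pyramid_down<6>> image_scanner_type;
    image_scanner_type scanner;
    scanner.set_detection_window_size(80, 80);          // pick a size that fits your signs

    structural_object_detection_trainer<image_scanner_type> trainer(scanner);
    trainer.set_num_threads(4);
    trainer.set_c(1);                                   // regularization, tune per dataset
    trainer.be_verbose();
    trainer.set_epsilon(0.01);

    // Note: no left-right flipping here, since many signs are not symmetric.
    object_detector<image_scanner_type> detector = trainer.train(images, boxes);
    serialize("sign_detector.svm") << detector;
    return 0;
}
```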
Edit: Check my implementation of a traffic sign detector and classifier using dlib:
https://github.com/fabioperez/transito-cv