I am trying to develop an automatic (or semi-automatic) image annotator for my final-year project with OpenCV. I have been studying many OpenCV resources and have come across cascade classification for training and detection. I understood that part and also tried the Face Detection tutorial provided with OpenCV, so I now know how to train and detect objects.
However, I still cannot understand how I can annotate the objects present in an image.
For example, the system will show that this is an object, but I want the system to show that it is a ball. How can I accomplish that?
Thanks in advance.
A binary classifier (detector) can separate objects into two classes:
positive - the object type the classifier was trained for,
and negative - everything else.
If you need to detect several distinct classes, you should use one detector per class, or you can train a multiclass classifier (a "one vs. all" scheme, for example), but that usually runs slower and with lower accuracy (because a detector performs better when it only searches for similar objects). You can also take a look at convolutional networks (by Yann LeCun).
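As a rough illustration of the one-detector-per-class idea (and of how the annotation itself can simply be the name of the class each detector was trained for), here is a minimal OpenCV/Python sketch. The cascade file names and image paths are placeholders for whatever you have trained yourself, not anything that ships with OpenCV.

```python
import cv2

# hypothetical cascades, one per object class you have trained yourself
cascades = {
    "ball":   cv2.CascadeClassifier("ball_cascade.xml"),
    "bottle": cv2.CascadeClassifier("bottle_cascade.xml"),
}

img = cv2.imread("scene.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for label, cascade in cascades.items():
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
        # the "annotation" is just the name of the class this detector belongs to
        cv2.putText(img, label, (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX,
                    0.6, (0, 255, 0), 2)

cv2.imwrite("annotated.jpg", img)
```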
This is a very hard task. I suggest simplifying it by using the latent SVM detector and limiting yourself to the models it supplies:
http://docs.opencv.org/modules/objdetect/doc/latent_svm.html
I'm looking for a way to check whether a given logo appears in a screenshot of a webpage. Basically, I need to find a small predefined image within a larger image that may or may not contain it. A match could be at a different scale or with somewhat different colors, and I also need to judge how similar an occurrence is. I need some pointers on what to look at; I've never worked with computer vision before.
The simplest (though not trivial) way to do it is a normal CNN trained on an augmented dataset of the logos.
To keep the answer short: just build a CNN in TensorFlow and train the model on lots of logo images, each labelled with its class. It's a fairly standard task, and even a not-very-crafty CNN should be able to get the job done; a rough sketch follows the reference below.
CNN: Convolutional Neural Network
Reference: https://etasr.com/index.php/ETASR/article/view/3919
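A rough sketch of such a CNN in TensorFlow/Keras. The input size (64x64 RGB crops), the number of logo classes, and the training arrays are placeholder assumptions, not taken from the reference above.

```python
import tensorflow as tf

NUM_LOGO_CLASSES = 10   # placeholder: one class per logo (plus perhaps a "no logo" class)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_LOGO_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_images: N x 64 x 64 x 3 array of (augmented) logo crops,
# train_labels: N integer class ids -- both assumed to exist
# model.fit(train_images, train_labels, epochs=20, validation_split=0.1)
```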
Given two images, e.g. two cats, is there a library that includes a "quick and dirty" way of telling by how much the two images differ in translation and rotation? Image registration is a big field, and every application I run into seems to be tailored to medical scans and usually has domain-specific caps on the transformation ranges. The tool I need should take two images as input and return an angle of rotation and a translation vector, maybe even a confidence metric; it's that simple. (Most algorithms out there are heavy-duty and focus on minute details for alignment; the tool I'm looking for need not be as exact.)
If it doesn't need to be very precise, you can probably tweak the code from PyImageSearch to better suit your application.
If you know that the two images you are going to compare do contain the same object (i.e., there is no additional object-recognition problem that comes before this step), then you can try using the ORB detector to find good keypoints and then estimate the homography using ViSP.
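In the same spirit, here is a rough OpenCV-only sketch (no ViSP) that matches ORB keypoints and fits a similarity transform with cv2.estimateAffinePartial2D (OpenCV 3.2+). The file names are placeholders, and the printed angle, translation and inlier ratio are only as good as the matches.

```python
import math
import cv2
import numpy as np

img1 = cv2.imread("a.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("b.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:100]

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# similarity transform: rotation + translation + uniform scale
M, inliers = cv2.estimateAffinePartial2D(pts1, pts2, method=cv2.RANSAC)

angle = math.degrees(math.atan2(M[1, 0], M[0, 0]))
tx, ty = M[0, 2], M[1, 2]
confidence = inliers.sum() / len(inliers)   # crude confidence: RANSAC inlier ratio

print(f"rotation: {angle:.1f} deg, translation: ({tx:.1f}, {ty:.1f}), "
      f"inlier ratio: {confidence:.2f}")
```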
My intention is to build a classifier that correctly classifies the image ROI against the templates that I have manually extracted.
Here is what I have done.
My first step was to understand what should be done to achieve the above.
Through research on the net, I realized I would need to create representation vectors (of the template), so I have used Bag of Words to create the vocabulary.
I have ported Roy's project to OpenCV 3.1 and also used his food database. On inspecting the database, I realised that some of the images contain multiple class types. I tried to crop the images so that each training image contains only one class of item, but the images are now of different sizes.
I have tried to run this code, but the result is very disappointing: it always predicts the same class.
Questions I have:
Is my processing of the training images wrong? I read around, and some posts suggest that the image size must be constant, or at least the aspect ratio. I am confused by this. Are there tools available for resizing samples?
The size of the sample images does not matter, since Roy's algorithm uses local descriptors extracted around points of interest.
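To illustrate why image size is irrelevant here, a minimal vocabulary-building sketch: descriptors are computed locally around keypoints, so images of any size contribute to the same pool. It assumes a SIFT implementation is available (cv2.SIFT_create in OpenCV 4.4+, or cv2.xfeatures2d.SIFT_create with opencv-contrib in 3.x) and uses placeholder image paths.

```python
import cv2
import numpy as np

train_image_paths = ["food1.jpg", "food2.jpg"]   # placeholder paths, any sizes

sift = cv2.SIFT_create()
bow_trainer = cv2.BOWKMeansTrainer(200)          # build a 200-word visual vocabulary

for path in train_image_paths:
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    keypoints, descriptors = sift.detectAndCompute(img, None)
    if descriptors is not None:
        # each descriptor describes a local patch, independent of the image size
        bow_trainer.add(np.float32(descriptors))

vocabulary = bow_trainer.cluster()               # K x 128 matrix of visual words
```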
In its basic form, an SVM is a binary linear classifier, so you need to train a separate SVM for each class. Each one says whether a sample belongs to that class or to the rest - the so-called one-vs-rest scheme.
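A minimal sketch of one-vs-rest training with OpenCV's SVM, assuming you already have one BoW histogram per training image; the variable names are placeholders.

```python
import numpy as np
import cv2

# bow_histograms: N x K float32 matrix (one BoW histogram per training image)
# labels: length-N numpy array of integer class ids (0 .. num_classes-1)
def train_one_vs_rest(bow_histograms, labels, num_classes):
    classifiers = []
    for c in range(num_classes):
        svm = cv2.ml.SVM_create()
        svm.setType(cv2.ml.SVM_C_SVC)
        svm.setKernel(cv2.ml.SVM_LINEAR)
        # positive samples = this class, negative samples = all other classes
        binary_labels = np.where(labels == c, 1, -1).astype(np.int32)
        svm.train(bow_histograms.astype(np.float32), cv2.ml.ROW_SAMPLE, binary_labels)
        classifiers.append(svm)
    return classifiers

def classify(classifiers, histogram):
    sample = histogram.reshape(1, -1).astype(np.float32)
    # each one-vs-rest SVM answers "this class or not"; in practice you would
    # break ties using the decision-function value rather than the hard label
    votes = [int(svm.predict(sample)[1][0, 0]) for svm in classifiers]
    return votes.index(1) if 1 in votes else None
```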
I have written an object classification program using BoW clustering and SVM classification algorithms. The program runs successfully. Now that I can classify the objects, I want to track them in real time by drawing a bounding rectangle/circle around them. I have researched and come up with the following ideas.
1) Use homography with the train-set images from the train data directory. The problem with this approach is that the train image should be almost exactly the same as the test image. Since I'm not detecting specific objects, the test images are closely related to the train images but not necessarily an exact match. In homography we find a known object in a test scene. Please correct me if I am wrong about homography.
2) Use feature tracking. I'm planning to extract the features computed by SIFT in the test images that are similar to the train images and then track them by drawing a bounding rectangle/circle. But the issue here is: how do I know which features belong to the object and which belong to the environment? Is there any member function in the SVM class that can return the keypoints or region of interest used to classify the object?
Thank you
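For reference, a minimal OpenCV sketch of the homography-based localization described in idea (1): match keypoints between a train image and the test scene, fit a homography with RANSAC, and project the train image's corners to get a bounding box. It uses ORB instead of SIFT so it runs without the contrib module, the file names are placeholders, and it only works when the scene really contains something similar enough to the train image.

```python
import cv2
import numpy as np

train = cv2.imread("train_object.jpg", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("test_scene.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(1000)
kp_t, des_t = orb.detectAndCompute(train, None)
kp_s, des_s = orb.detectAndCompute(scene, None)

matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_t, des_s)
matches = sorted(matches, key=lambda m: m.distance)[:50]

src = np.float32([kp_t[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_s[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC also tells you which matches are inliers (mask), i.e. likely on the object
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# project the train image corners into the scene and draw the bounding box
h, w = train.shape
corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
box = cv2.perspectiveTransform(corners, H)

scene_color = cv2.cvtColor(scene, cv2.COLOR_GRAY2BGR)
cv2.polylines(scene_color, [np.int32(box)], True, (0, 255, 0), 3)
cv2.imwrite("localized.jpg", scene_color)
```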
Edit: I didn't make this clear; this is for the possible future development of an application.
I am looking into individual facial recognition for an application, but an essential part of this seems to be a fairly large training set of images for each individual to be recognized.
Is it important for the images to be taken at different times in different environments, or could several images captured over a few seconds with a handheld camera possibly provide the necessary variations for a good training set?
(This isn't for human facial recognition, by the way, so existing tools and databases won't really help too much. I'm aware that 2D image recognition can not necessarily be applied to all species; let's just assume that it does work in my use case.)
This paper may answer some of your questions:
http://uran.donetsk.ua/~masters/2011/frt/dyrul/library/article8.pdf
From the pattern classification point of view, a usual problem in face recognition is having a plethora of classes and only a few, possibly only one, training sample(s) per class. For this reason, more sophisticated classifiers are not needed but a nearest-neighbour classifier is used.
While I'm not an expert on the subject, it appears to be a common problem to have only one image per person as a training sample and one that has been solved with at least some level of accuracy in controlled lighting/positional situations.
To specifically answer your question, a training set that had multiple images of each person with little or no variation ("several images captured over a few seconds with a handheld camera"), would not be as valuable as one that had more variation (e.g. different facial expressions, lighting, backgrounds).
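As a concrete illustration of the nearest-neighbour approach mentioned above, here is a minimal sketch with a deliberately crude face representation (resized, intensity-normalized grayscale pixels). The file paths, image size, and representation are all placeholder assumptions; in a real system you would substitute a proper face descriptor.

```python
import cv2
import numpy as np

def face_vector(path, size=(64, 64)):
    """Crude face representation: resized, intensity-normalized grayscale pixels."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, size).astype(np.float32)
    return (img - img.mean()) / (img.std() + 1e-6)

# gallery: one (or a few) labelled reference image(s) per individual -- placeholder paths
gallery = {
    "individual_a": face_vector("a_ref.jpg"),
    "individual_b": face_vector("b_ref.jpg"),
}

def recognize(path):
    probe = face_vector(path)
    # nearest neighbour by Euclidean distance over the pixel vectors
    return min(gallery.items(), key=lambda kv: np.linalg.norm(probe - kv[1]))[0]

print(recognize("unknown.jpg"))
```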