Using CRFsuite (C++) on images - c++

I have been implementing object localization and segmentation of images.
I need to classify different objects in an image and segment the image accordingly.
I implemented a research paper whose pipeline is as follows:
Make superpixels of the image (done using the SLIC algorithm).
Classify each superpixel using some annotated images.
I am using a Support Vector Machine classifier to classify every superpixel.
Up to here my code works correctly. The next part is improving the segmentation using a Conditional Random Field (CRF), which is a graphical model for segmenting the image. But I have no idea how to implement this CRF model. I need to build a graph between the superpixels, which I have done.
There is a CRFsuite library in C++, linked below, but I do not understand how to use it for images: what will be my training data for the CRF model, and which functions of the library should I call?
Now I would like to ask how I should go about applying CRFsuite in C++ to this graph of superpixels.
I went through this, but I am unable to understand how to proceed:
http://www.chokkan.org/software/crfsuite/
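For reference, here is a minimal sketch of the CRFsuite C++ API (the Trainer/Tagger classes from crfsuite.hpp), under one important assumption: CRFsuite implements linear-chain CRFs (the "crf1d" model), so the superpixel graph has to be linearized into sequences (e.g. scan-line order); for a true graph over superpixels, a grid/dense CRF implementation may be a better fit. The feature names and values below are purely illustrative:

    // Minimal CRFsuite C++ sketch: train on sequences of superpixels,
    // where each superpixel carries feature attributes such as SVM scores.
    #include <crfsuite.hpp>
    #include <string>

    int main()
    {
        CRFSuite::Trainer trainer;
        trainer.select("lbfgs", "crf1d");   // L-BFGS training, linear-chain CRF

        // One training "sequence": superpixels in linearized (scan-line) order.
        CRFSuite::ItemSequence xseq;
        CRFSuite::StringList   yseq;

        // Per superpixel: emit feature attributes; names/values are illustrative.
        CRFSuite::Item item;
        item.push_back(CRFSuite::Attribute("svm_score_car", 0.82));
        item.push_back(CRFSuite::Attribute("mean_color_r", 0.35));
        xseq.push_back(item);
        yseq.push_back("car");              // ground-truth label of this superpixel

        trainer.append(xseq, yseq, 0);      // add the labeled sequence
        trainer.train("superpixel.model", -1);

        // Tagging the superpixel sequence of a new image:
        CRFSuite::Tagger tagger;
        tagger.open("superpixel.model");
        CRFSuite::StringList labels = tagger.tag(xseq);
        return 0;
    }

So the training data would be one labeled sequence per training image, where each element is a superpixel described by its features (e.g. the SVM class scores) together with its ground-truth class.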

Related

How to find logos on a website screenshot

I'm looking for a way to check whether a given logo appears on a screenshot of a webpage. So basically, I need to be able to find a small predefined image within a larger image that may or may not contain it. A match could be at a different scale or with somewhat different colors, and I need to judge the similarity of an occurrence as well. I need some pointers on what to look at; I've never worked with computer vision before.
The simplest (yet not simplistic) way to do it is an ordinary CNN trained on an augmented dataset of the logos.
To keep the answer short: just build a CNN in TensorFlow and train the model on a large number of logo images, with a label on each training image. It's a simple task, and even a not-very-crafty CNN should be able to get the work done.
CNN: Convolutional Neural Network
Reference : https://etasr.com/index.php/ETASR/article/view/3919
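As a side note, a classical baseline for the exact task asked about (find a small predefined image in a larger one, with some scale variation) is multi-scale template matching in OpenCV. This is not the CNN approach from the answer above, just a sketch of a simpler alternative; the file names, scale range, and the 0.8 acceptance threshold are all illustrative and need tuning:

    // Hedged baseline: multi-scale template matching with OpenCV.
    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::Mat screenshot = cv::imread("screenshot.png");
        cv::Mat logo       = cv::imread("logo.png");

        double bestScore = -1.0;
        cv::Rect bestBox;

        // Try the logo at several scales; 0.5x to 2x is an arbitrary choice.
        for (double scale = 0.5; scale <= 2.0; scale += 0.1) {
            cv::Mat scaled;
            cv::resize(logo, scaled, cv::Size(), scale, scale);
            if (scaled.cols > screenshot.cols || scaled.rows > screenshot.rows)
                continue;

            cv::Mat result;
            cv::matchTemplate(screenshot, scaled, result, cv::TM_CCOEFF_NORMED);

            double maxVal; cv::Point maxLoc;
            cv::minMaxLoc(result, nullptr, &maxVal, nullptr, &maxLoc);
            if (maxVal > bestScore) {
                bestScore = maxVal;
                bestBox   = cv::Rect(maxLoc, scaled.size());
            }
        }

        // 0.8 is an illustrative threshold; tune it for your tolerance
        // to color/scale differences.
        if (bestScore > 0.8)
            cv::rectangle(screenshot, bestBox, cv::Scalar(0, 255, 0), 2);
        cv::imwrite("match.png", screenshot);
        return 0;
    }

Note that plain template matching is not robust to large color or scale changes, which is why the CNN route scales better to varied logos.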

Classification with SVM using a vocabulary built from Bag of Words

My intention is to build a classifier that correctly classifies image ROIs against the templates that I have manually extracted.
Here is what I have done.
My first step was to understand what should be done to achieve the above.
Through research on the net, I realized I would need to create representation vectors (of the templates). Hence I used Bag of Words to create the vocabulary.
I ported Roy's project to OpenCV 3.1 and also used his food database. On inspecting his database, I realized that some of the images contain multiple class types. I tried to crop the images so that each training image contains only one class of item, but the images are now of different sizes.
I tried to run this code. The result is very disappointing: it always points to one class.
Questions I have:
Is my processing of the training images wrong? I have read around, and some posts suggest the image size must be constant, or at least the aspect ratio. I am confused by this. Are there tools available for resizing samples?
The size of the sample images does not matter, since Roy's algorithm uses local descriptors extracted around points of interest.
An SVM is a binary classifier, so you need to train a separate SVM for each class. Each one says whether a sample belongs to that class or to the rest: the so-called one-vs-rest scheme, sketched below.
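A hedged sketch of that one-vs-rest setup with the cv::ml module of OpenCV 3.x, assuming samples holds one BoW histogram per row (CV_32F) and classIds holds the integer class of each row; note that the sign convention of the raw SVM output has varied between OpenCV versions, so verify it on a validation set:

    // Sketch of one-vs-rest SVM training on BoW histograms (OpenCV 3.x).
    #include <opencv2/opencv.hpp>
    #include <opencv2/ml.hpp>
    #include <cfloat>
    #include <vector>

    std::vector<cv::Ptr<cv::ml::SVM>>
    trainOneVsRest(const cv::Mat& samples, const std::vector<int>& classIds,
                   int numClasses)
    {
        std::vector<cv::Ptr<cv::ml::SVM>> svms;
        for (int c = 0; c < numClasses; ++c) {
            // Relabel: +1 for "this class", -1 for "the rest".
            cv::Mat labels(samples.rows, 1, CV_32S);
            for (int i = 0; i < samples.rows; ++i)
                labels.at<int>(i) = (classIds[i] == c) ? 1 : -1;

            cv::Ptr<cv::ml::SVM> svm = cv::ml::SVM::create();
            svm->setType(cv::ml::SVM::C_SVC);
            svm->setKernel(cv::ml::SVM::LINEAR);
            svm->train(cv::ml::TrainData::create(samples, cv::ml::ROW_SAMPLE, labels));
            svms.push_back(svm);
        }
        return svms;
    }

    // At test time, pick the class whose SVM gives the strongest vote.
    int classify(const std::vector<cv::Ptr<cv::ml::SVM>>& svms, const cv::Mat& hist)
    {
        int best = -1; float bestScore = -FLT_MAX;
        for (size_t c = 0; c < svms.size(); ++c) {
            cv::Mat out;
            svms[c]->predict(hist, out, cv::ml::StatModel::RAW_OUTPUT);
            // The sign of the raw decision value may need flipping
            // depending on the OpenCV version; verify on held-out data.
            float score = -out.at<float>(0);
            if (score > bestScore) { bestScore = score; best = (int)c; }
        }
        return best;
    }

If every test sample lands in one class, also check that the training labels actually vary; a constant-label training set reproduces exactly the symptom described above.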

Using OpenCV to touch and select object

I'm using the OpenCV framework in iOS (Xcode, Objective-C). Is there a way to process the image feed from the video camera, let the user touch an object on the screen, and then use some OpenCV functionality to highlight it?
Here is, graphically, what I mean. The first image shows an example of what the user might see in the video feed:
Then, when they tap on the iPad on the screen, I want to use OpenCV feature/object detection to process the area they've tapped and highlight it. It would look something like this if they tapped the iPad:
Any ideas on how this would be achievable with OpenCV in Objective-C?
I can see quite easily how we could achieve this using trained templates of the iPad and matching them with OpenCV algorithms, but I want to make it dynamic, so users can touch anything on the screen and we'll take it from there.
Explanation: why we should use the segmentation approach
As I understand it, the task you are trying to solve is segmentation of objects, regardless of their identity.
The object recognition approach is one way to do it, but it has two major downsides:
It requires you to train an object classifier and to collect a dataset containing a respectable number of examples of the objects you would like to recognize. If you choose a classifier that is already trained, it won't necessarily work on every type of object you would like to detect.
Most object recognition solutions find a bounding box around the recognized object, but they don't perform a complete segmentation of it. The segmentation part requires extra effort.
Therefore, I believe that the best approach for your case is to use an image segmentation algorithm. More precisely, we'll use the GrabCut segmentation algorithm.
The GrabCut algorithm
This is an iterative algorithm with two stages:
In the initial stage, the user specifies a bounding box around the object.
Given this bounding box, the algorithm estimates the color distributions of the foreground (the object) and the background using GMMs, followed by a graph-cut optimization to find the optimal boundary between foreground and background.
In the next stage, the user may correct the segmentation if needed by supplying scribbles on the foreground and the background. The algorithm updates the model accordingly and performs a new segmentation based on the updated information.
Using this approach has pros and cons.
The pros:
The segmentation algorithm is easy to use, since OpenCV already implements it.
It enables the user to fix segmentation errors if needed.
It doesn't rely on collecting a dataset and training a classifier.
The main con is that you will need an extra source of information from the user besides a single tap on the screen: a bounding box around the object, and in some cases additional scribbles to correct the segmentation.
Code
Luckily, there is an implementation of this algorithm in OpenCV. The Itseez repository contains a simple, easy-to-use sample of OpenCV's GrabCut algorithm, which can be found here: https://github.com/Itseez/opencv/blob/master/samples/cpp/grabcut.cpp
Application usage:
The application receives the path to an image file as a command-line argument. It renders the image on screen, and the user is required to supply an initial bounding rect.
The user can press 'n' to perform the segmentation for the current iteration, or press 'r' to revert the operation.
After choosing a rect, the segmentation is calculated. If the user wants to correct it, they may add foreground or background scribbles by pressing Shift+left-click and Ctrl+left-click, respectively.
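If you prefer to call the algorithm directly instead of adapting the sample, a minimal sketch of the underlying OpenCV call looks like this (the file name and the rectangle are illustrative):

    // Minimal GrabCut call, initialized from a user-supplied rectangle.
    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::Mat image = cv::imread("frame.png");
        cv::Rect rect(50, 50, 200, 200);      // illustrative user rectangle

        cv::Mat mask, bgdModel, fgdModel;
        cv::grabCut(image, mask, rect, bgdModel, fgdModel,
                    5, cv::GC_INIT_WITH_RECT); // 5 iterations, init from rect

        // Keep pixels labeled (probably) foreground.
        cv::Mat fg = (mask == cv::GC_FGD) | (mask == cv::GC_PR_FGD);
        cv::Mat segmented;
        image.copyTo(segmented, fg);
        cv::imwrite("segmented.png", segmented);
        return 0;
    }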
Examples
Segmenting the iPod:
Segmenting the pen:
You can also do it by training a classifier on iPad images using OpenCV's Haar cascades, and then detecting the iPad in a given frame.
Then, based on the coordinates of the touch, check whether that point overlaps the detected iPad region. If it does, draw a bounding box on the detected object; from there you can proceed to process the detected iPad image.
Repeat the above procedure for each of the objects you want to detect.
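A hedged sketch of that idea, assuming you have already trained a cascade (the XML file name and the touch coordinates are illustrative):

    // Haar-cascade detection plus a touch-overlap check.
    #include <opencv2/opencv.hpp>
    #include <vector>

    int main()
    {
        cv::CascadeClassifier cascade;
        if (!cascade.load("ipad_cascade.xml"))   // illustrative model file
            return 1;

        cv::Mat frame = cv::imread("frame.png");
        cv::Mat gray;
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

        std::vector<cv::Rect> detections;
        cascade.detectMultiScale(gray, detections, 1.1, 3);

        cv::Point touch(320, 240);                // illustrative touch coordinates
        for (const cv::Rect& r : detections) {
            if (r.contains(touch))                // did the tap land on a detection?
                cv::rectangle(frame, r, cv::Scalar(0, 255, 0), 2);
        }
        cv::imwrite("highlighted.png", frame);
        return 0;
    }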
The task you are trying to solve is called "object proposal". It does not yet work very accurately, and these results are very new.
These two articles give you a good overview of methods for this:
https://pdollar.wordpress.com/2013/12/10/a-seismic-shift-in-object-detection/
https://pdollar.wordpress.com/2013/12/22/generating-object-proposals/
For state-of-the-art results, look for the latest CVPR papers on object proposals. Quite often they have code available to test.

Real-time object tracking in OpenCV

I have written an object classification program using BoW clustering and SVM classification algorithms. The program runs successfully. Now that I can classify the objects, I want to track them in real time by drawing a bounding rectangle/circle around them. I have done some research and came up with the following ideas.
1) Use homography, using the train-set images from the train data directory. The problem with this approach is that the train image has to be exactly the same as the test image. Since I'm not detecting specific objects, the test images are closely related to the train images but not an exact match. In homography we find a known object in a test scene. Please correct me if I am wrong about homography.
2) Use feature tracking. I'm planning to extract the features computed by SIFT in the test images which are similar to the train images, and then track them by drawing a bounding rectangle/circle. But the issue here is: how do I know which features are from the object and which are from the environment? Is there any member function in the SVM class which can return the key points or region of interest used to classify the object?
Thank you
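Regarding idea (1): RANSAC-based homography estimation tolerates a fair share of wrong matches, so the train image does not need to be pixel-identical to the test image, but it does assume a roughly rigid, planar object. A minimal sketch of SIFT matching plus homography (in OpenCV >= 4.4 SIFT lives in the core features2d module; older builds need cv::xfeatures2d::SIFT::create() from opencv_contrib; file names are illustrative):

    // SIFT matching + homography to draw a bounding box around a known object.
    #include <opencv2/opencv.hpp>
    #include <vector>

    int main()
    {
        cv::Mat object = cv::imread("train_object.png", cv::IMREAD_GRAYSCALE);
        cv::Mat scene  = cv::imread("test_scene.png",  cv::IMREAD_GRAYSCALE);

        cv::Ptr<cv::SIFT> sift = cv::SIFT::create();
        std::vector<cv::KeyPoint> kObj, kScn;
        cv::Mat dObj, dScn;
        sift->detectAndCompute(object, cv::noArray(), kObj, dObj);
        sift->detectAndCompute(scene,  cv::noArray(), kScn, dScn);

        // Ratio-test matching (Lowe's 0.75 threshold is conventional).
        cv::BFMatcher matcher(cv::NORM_L2);
        std::vector<std::vector<cv::DMatch>> knn;
        matcher.knnMatch(dObj, dScn, knn, 2);

        std::vector<cv::Point2f> src, dst;
        for (const auto& m : knn) {
            if (m.size() == 2 && m[0].distance < 0.75f * m[1].distance) {
                src.push_back(kObj[m[0].queryIdx].pt);
                dst.push_back(kScn[m[0].trainIdx].pt);
            }
        }
        if (src.size() < 4) return 1;         // homography needs >= 4 points

        cv::Mat H = cv::findHomography(src, dst, cv::RANSAC);
        if (H.empty()) return 1;

        // Project the object's corners into the scene and draw the box.
        std::vector<cv::Point2f> corners = {
            {0, 0}, {(float)object.cols, 0},
            {(float)object.cols, (float)object.rows}, {0, (float)object.rows}};
        std::vector<cv::Point2f> projected;
        cv::perspectiveTransform(corners, projected, H);

        cv::Mat vis = cv::imread("test_scene.png");
        for (int i = 0; i < 4; ++i)
            cv::line(vis, projected[i], projected[(i + 1) % 4],
                     cv::Scalar(0, 255, 0), 2);
        cv::imwrite("tracked.png", vis);
        return 0;
    }

As for idea (2): the inlier matches surviving the RANSAC step are, in effect, the "object" features, which partially answers the object-vs-environment question; the SVM itself does not expose which key points contributed to a decision.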

OpenCV to identify objects from a training video set and then test them against another video

I have been tasked to use OpenCV and C++ to:
Read a set of videos to create a set of images for learning.
Classify objects seen in the videos.
Label the images.
Test against a series of test videos to check that objects are identified as expected; draw a rectangle around them and label it.
I am new to OpenCV, but happy to program in C++ as soon as an approach is formed. I am also planning to write my own functions at a later stage.
I need your help in forming the right solution approach, as I have to identify household objects (cup, soft toy, phone, camera, keyboard) from a stream of video and then test on another stream of video. The original video has depth information as well, but I am not sure how to use it to my benefit.
Read about Support Vector Machines (SVM), feature extraction (e.g. SIFT/SURF), SVM training and SVM testing. For drawing the rectangle, read about findContours() and drawContours() in OpenCV.
Approach:
Detect objects (e.g. car/plane etc.) and store the points of their contours.
Extract some features of the object using SIFT/SURF.
Based on the extracted features, classify the object using the SVM (the input to the SVM will be the extracted features).
If the SVM says "yes, it is a car", draw a rectangle around it using the points of its contour which you stored in the first step, as sketched below.
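A hedged sketch of that last step, assuming the contour was stored in step 1 and the SVM decision is already available; the function and variable names are illustrative:

    // Draw a labeled rectangle around a contour once the SVM accepts it.
    #include <opencv2/opencv.hpp>
    #include <vector>

    void drawIfAccepted(cv::Mat& frame,
                        const std::vector<cv::Point>& contour,
                        bool svmSaysCar)
    {
        if (svmSaysCar) {
            cv::Rect box = cv::boundingRect(contour);    // tight box around contour
            cv::rectangle(frame, box, cv::Scalar(0, 0, 255), 2);
            cv::putText(frame, "car", box.tl(), cv::FONT_HERSHEY_SIMPLEX,
                        0.8, cv::Scalar(0, 0, 255), 2);  // label the detection
        }
    }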