Down Sampling Technique for LibSVM - Weka

I have unbalanced training data that I am going to use to train an SVM classifier. I have tried several techniques for handling unbalanced data, such as cost-sensitive learning and sampling. For the sampling approach I need upsampling and downsampling methods other than random sampling. What techniques can be used for upsampling and downsampling? I am using Weka and LibSVM for classification.

For upsampling, the most frequently used method is SMOTE. Here are some useful URLs:
SMOTE description - http://www.cs.cmu.edu/afs/cs/project/jair/pub/volume16/chawla02a-html/node6.html
Weka SMOTE filter - http://weka.sourceforge.net/doc.packages/SMOTE/weka/filters/supervised/instance/SMOTE.html
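
If you can step outside Weka for the sampling step itself, the following is a minimal sketch, assuming the Python imbalanced-learn package (not Weka), of the two directions: SMOTE to upsample the minority class and random undersampling to shrink the majority class. Within Weka, the SMOTE filter linked above covers upsampling, and a supervised instance filter such as SpreadSubsample (if present in your Weka version) is a common choice for downsampling.

# Minimal sketch, assuming the Python imbalanced-learn package (not Weka),
# of SMOTE upsampling and random undersampling on toy unbalanced data.
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler

rng = np.random.RandomState(0)
X = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(2, 1, (10, 5))])   # 100 majority vs 10 minority samples
y = np.array([0] * 100 + [1] * 10)

# Upsample the minority class with SMOTE (k_neighbors must be smaller than the minority count).
X_up, y_up = SMOTE(k_neighbors=5, random_state=0).fit_resample(X, y)

# Downsample the majority class to match the minority class size.
X_down, y_down = RandomUnderSampler(random_state=0).fit_resample(X, y)

print(np.bincount(y_up), np.bincount(y_down))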

Related

"Incremental / Decremental" SVM Algorithm - That can be trained on additional data

I am training an SVM model (RBF kernel) on ~5000 samples; I tuned the model properly and used it to make predictions.
Now I have 1000 more samples which can also be used for training.
My question is: do I have to rebuild the model on all 6000 samples, or is there a way to add training data to my existing SVM model?
Note: the dataset I am using is actually quite large, and building the model again from scratch would not be a good idea.
This is called an "incremental" or "online" SVM algorithm. Currently it is supported mostly for the linear kernel in various libraries (see the sketch after these links):
Liblinear Incremental. Paper
Gert Cauwenberghs
Chris Diehl
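
As an illustration of the online, linear case (not the RBF model in the question), here is a minimal sketch assuming scikit-learn: SGDClassifier with a hinge loss optimizes a linear-SVM objective and can be updated with new batches through partial_fit instead of being retrained from scratch.

# Minimal sketch of online, linear-SVM-style training with scikit-learn.
# SGDClassifier with hinge loss behaves like a linear SVM and supports
# incremental updates via partial_fit; it does not cover the RBF kernel.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.RandomState(0)
X_old = rng.normal(size=(5000, 20))             # the original ~5000 samples
y_old = (X_old[:, 0] > 0).astype(int)
X_new = rng.normal(size=(1000, 20))             # the 1000 additional samples
y_new = (X_new[:, 0] > 0).astype(int)

clf = SGDClassifier(loss="hinge", alpha=1e-4, random_state=0)

# Initial training; the full set of classes must be declared on the first call.
clf.partial_fit(X_old, y_old, classes=np.array([0, 1]))

# Later, update the same model with the new samples instead of refitting on all 6000.
clf.partial_fit(X_new, y_new)

print(clf.score(X_new, y_new))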

Visualize the learned filter of each CNN layer

Can anyone tell me how to visualize the learned filters of each CNN layer?
The following answers tell me how to visualize the learned filters of the first CNN layer only, but not those of the other CNN layers.
1) You can just recover the filters and use Matlab's functions to display them as images. For example after loading a pretrained net from http://www.vlfeat.org/matconvnet/pretrained/ :
imshow( net.layers{1}.filters(:, :, 3, 1), [] ) ;
2) You may find the VLFeat function vl_imarraysc useful to display several filters. http://www.vlfeat.org/matlab/vl_imarraysc.html
For visualizing filters in intermediate layers, there are several techniques:
(1) Show one or three channels at a time as grayscale or RGB (see the sketch after this list). This is not very informative on its own, since the filters of ResNet and VGG are small (3x3).
(2) Turn off other units and backpropagate only this unit to the input space. You can see a pattern that reflects what this unit cares about. Many papers use similar techniques, e.g., Zeiler, Matthew D., and Rob Fergus. "Visualizing and understanding convolutional networks." European Conference on Computer Vision. 2014.
(3) Find input patches that maximally activate this unit and see what they are.
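
As a concrete illustration of technique (1), here is a minimal sketch assuming PyTorch/torchvision and matplotlib rather than MatConvNet: it takes the first convolutional layer of a pretrained VGG-16 and shows each 3x3 filter as a small RGB image. The same idea applies to deeper layers, but as noted above the individual channels there are rarely informative on their own.

# Minimal sketch of technique (1): display first-layer filters as RGB images.
# Assumes torchvision and matplotlib; uses a pretrained VGG-16 instead of MatConvNet.
import matplotlib.pyplot as plt
import torchvision

model = torchvision.models.vgg16(pretrained=True)
filters = model.features[0].weight.detach().numpy()    # shape (64, 3, 3, 3)

fig, axes = plt.subplots(8, 8, figsize=(8, 8))
for ax, f in zip(axes.flat, filters):
    f = f.transpose(1, 2, 0)                            # (C, H, W) -> (H, W, C)
    f = (f - f.min()) / (f.max() - f.min() + 1e-8)      # rescale each filter to [0, 1]
    ax.imshow(f)
    ax.axis("off")
plt.show()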

How can I apply an SVM or deep neural network for image retrieval

After obtaining the image dataset, a feature database is constructed for all images; each feature is a vector based on the mean and standard deviation of the RGB and HSV color models for a portion of the image. How can I use an SVM to retrieve related images from the database once a query image is given?
Also, how can unsupervised learning be used for the above problem?
Assuming the query images are unlabeled, applying an SVM would require known class labels for the dataset images, since an SVM is a form of supervised learning: it learns from labeled data and then predicts class labels for unlabeled data. You would need another method for generating class labels, such as unsupervised learning, so this approach does not seem applicable if you only have feature vectors and no class labels.
A neural network allows for unsupervised learning with unlabeled data, but it is a rather complex approach and an active subject of academic research. You may want to consider a simpler machine learning approach such as k-Nearest Neighbors, which gives you the k training samples closest to the query in your feature space. This algorithm is simple to implement and is found in many machine learning libraries; for example, in Python you can use scikit-learn.
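
As a sketch of that suggestion, assuming scikit-learn and that every image has already been reduced to a fixed-length feature vector (for example the RGB/HSV mean and standard deviation features described in the question), NearestNeighbors returns the k most similar database images for a query vector:

# Minimal sketch of image retrieval with k-nearest neighbors in scikit-learn.
# Assumes each image is represented by a fixed-length feature vector.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.RandomState(0)
database_features = rng.rand(500, 12)     # 500 database images, 12-dim feature vectors
query_feature = rng.rand(1, 12)           # feature vector of the query image

nn = NearestNeighbors(n_neighbors=5, metric="euclidean")
nn.fit(database_features)

distances, indices = nn.kneighbors(query_feature)
print(indices[0])                         # indices of the 5 most similar database images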
I am unsure what type of images you are working with, but you might also want to explore using feature detector algorithms such as SIFT rather than just pixel intensities.

SVM training C++ OpenCV

I was under the impression that the training data given to train an SVM consisted of image features, but after reading this post again, the training_mat given to the SVM in the example is just the img_mat flattened to one dimension.
So my question is: when training an SVM, do you give it whole images in their entirety, row by row, or do you detect and extract the features and then flatten a Mat of those into one dimension?
You can extract features, or you can use the pixel intensity values themselves as the features. In this example, they have done the latter. In that case you end up with a very high number of features, many of which may not be useful. This makes convergence of the SVM training more difficult, but it is still possible. In my personal experience, SVM works better if you extract a smaller number of "good" features that best describe your data. However, in recent years it has been shown that state-of-the-art estimators such as deep neural networks (when used instead of an SVM) can perform very well using only the pixel intensity values as features. This has eliminated the need for hand-crafted feature extraction in the methods that have led to state-of-the-art results on public datasets (such as ImageNet).
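
To make the "pixel intensities as features" variant concrete, here is a minimal sketch in Python with scikit-learn (rather than the OpenCV C++ API from the question): each image is flattened into one row of the training matrix, which is what the linked example does with training_mat.

# Minimal sketch, in Python/scikit-learn rather than OpenCV C++, of using raw
# pixel intensities as features: each image becomes one flattened row.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.RandomState(0)
images = rng.randint(0, 256, size=(200, 32, 32), dtype=np.uint8)   # 200 toy grayscale images
labels = (images.mean(axis=(1, 2)) > 127).astype(int)              # toy binary labels

# Equivalent of flattening each img_mat into one row of training_mat.
training_mat = images.reshape(len(images), -1).astype(np.float32) / 255.0

clf = LinearSVC(C=1.0)
clf.fit(training_mat, labels)
print(clf.predict(training_mat[:5]))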

Object recognition using LDA and ORB with different-sized training images

I'm trying to build a lightweight object recognition system using ORB for feature extraction and LDA for classification, but I'm running into an issue due to the varying size of the extracted features.
These are my steps:
Extract keypoints using ORB.
Extract trainable features in the image by grouping the keypoints.
(Example of what's being extracted: http://imgur.com/gaQWk)
Train the recognizer with the extracted features. (This is where problems arise)
Classify objects in an image from the wild.
If I attempt to create a generalized matrix using cv::gemm, I get an exception due to the varying sizes. My first thought was just to normalize all the images by resizing them, but this causes a lot of accuracy issues when objects have similar small features.
Is there any solution to this? Is LDA an appropriate method for this? I know it's commonly used with facial recognition algorithms such as fisherfaces.
LDA requires fixed length features, as do most optimization and machine learning methods. You could resize the image patches to be a fixed size, but that is probably not going to be a good feature. Normally people use a scale invariant feature such as SIFT. You also might try a color histogram, or some variation of edge detection and spatial histogram binning such as a GIST vector.
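
For instance, a color histogram yields a vector of the same length no matter how large the image patch is; the following is a minimal sketch assuming OpenCV's Python bindings.

# Minimal sketch of a fixed-length color-histogram feature with OpenCV in Python:
# regardless of the patch size, the descriptor always has bins**3 entries.
import cv2
import numpy as np

def color_histogram(image_bgr, bins=8):
    # 3-D histogram over the B, G, R channels, flattened and L1-normalized.
    hist = cv2.calcHist([image_bgr], [0, 1, 2], None,
                        [bins, bins, bins], [0, 256, 0, 256, 0, 256])
    hist = hist.flatten()
    return hist / (hist.sum() + 1e-8)

# Two patches of different sizes still yield descriptors of identical length.
small_patch = np.random.randint(0, 256, (40, 60, 3), dtype=np.uint8)
large_patch = np.random.randint(0, 256, (200, 300, 3), dtype=np.uint8)
print(color_histogram(small_patch).shape, color_histogram(large_patch).shape)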
It's hard to say if LDA is an appropriate method for this without knowing what you hope to accomplish. You might also look into using SVM, some form of boosting, or just plain nearest neighbor with a large training set.