"Incremental / Decremental" SVM Algorithm - That can be trained on additional data - python-2.7

I am training an SVM model (RBF kernel) on ~5000 samples. I tuned the model properly and used it to make predictions.
Now I have 1000 more samples that can also be used for training.
My question is: do I have to retrain the model on all 6000 samples, or is there a way to add training data to my existing SVM model?
Note: the dataset I am using is quite large, so retraining from scratch is not practical.

This is called an "incremental" or "online" SVM algorithm. It is currently supported mostly for the linear kernel in various libraries:
Liblinear Incremental. Paper
Gert Cauwenberghs
Chris Diehl
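To sketch what the incremental setting looks like in practice, here is a minimal example using scikit-learn's SGDClassifier, which trains a linear SVM (hinge loss) with partial_fit; the data is synthetic and purely illustrative:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.RandomState(0)

# Initial batch of ~5000 samples (synthetic here, for illustration only)
X_old = rng.randn(5000, 10)
y_old = (X_old[:, 0] > 0).astype(int)

# SGDClassifier with hinge loss is a linear SVM trained online
clf = SGDClassifier(loss="hinge", random_state=0)
clf.partial_fit(X_old, y_old, classes=np.array([0, 1]))

# Later: 1000 new samples arrive; update the model without retraining from scratch
X_new = rng.randn(1000, 10)
y_new = (X_new[:, 0] > 0).astype(int)
clf.partial_fit(X_new, y_new)

acc = clf.score(X_new, y_new)
```

Note this only covers the linear-kernel case; the RBF kernel from the question is exactly what most of these libraries do not support incrementally.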

Related

How does image resolution affect the result and accuracy in Keras?

I'm using Keras (with the TensorFlow backend) for an image classification project. I have a total of almost 40,000 high-resolution (1920x1080) images that I use as training input data. Training takes about 45 minutes, and this is becoming a problem, so I was thinking I might be able to speed things up by lowering the resolution of the image files. Looking at the code (I didn't write it myself), it seems all images are resized to 30x30 pixels anyway before processing.
I have two general questions about this.
Is it reasonable to expect this to improve the training speed?
Would resizing the input image files affect the accuracy of the image classification?
1. Yes, it will improve training speed: the spatial dimensions are one of the most important factors in a model's computational cost.
2. It will certainly affect accuracy, but how much depends on many other aspects, such as which objects you are classifying and which dataset you are working with.
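As a rough back-of-the-envelope illustration of point 1, the cost of a convolution layer scales with the spatial dimensions. Assuming a hypothetical first layer (3x3 convolution, 3 input channels, 32 filters, "same" padding; these parameters are made up for the example), the multiply-accumulate count drops by a factor of over 2000 going from 1920x1080 down to 30x30:

```python
# Multiply-accumulates for one "same"-padded conv layer: one k*k*in_ch dot
# product per output channel at every spatial position.
def conv_macs(h, w, in_ch=3, out_ch=32, k=3):
    return h * w * in_ch * out_ch * k * k

hi_res = conv_macs(1080, 1920)
low_res = conv_macs(30, 30)
ratio = hi_res / low_res  # (1080*1920) / (30*30) = 2304
```

So the per-layer cost shrinks by the same factor as the pixel count, which is why resizing early is such a common speed-up.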

What dataset has the getDefaultPeopleDetector() SVM been trained on?

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
I have seen these two lines of code in many online forums, but I don't understand where the SVM vector comes from, i.e. what training data was used to train this SVM, and can I find that data and source code anywhere?
And also why does the SVM vector have a length of 3781 for a 64x128 image?
Some insight into this would be really helpful.
Thanks
Here you are using a pre-trained people detector as the SVM. You can read about it in the docs. I don't know exactly how it was trained (the algorithms, the parameters), but according to this answer, it was trained on the Daimler Pedestrian Detection Dataset.
cv2.HOGDescriptor_getDefaultPeopleDetector() returns an array of size 3781. Those are the coefficients the SVM uses to classify people; they have nothing to do with the input image you are using.
And most importantly, you can train an SVM yourself to detect another object and use it as the SVM detector. Check this answer for more.
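As for why the length is exactly 3781: with the default HOG parameters (8x8-pixel cells, 2x2-cell blocks with an 8-pixel block stride, 9 orientation bins), a 64x128 window yields 3780 features, and the extra element is the SVM bias term. A quick arithmetic check:

```python
# HOG feature count for the default 64x128 people-detection window
cells_x = 64 // 8           # 8 cells across
cells_y = 128 // 8          # 16 cells down
blocks_x = cells_x - 1      # 2x2-cell blocks at 8-px stride -> 7 positions
blocks_y = cells_y - 1      # -> 15 positions
bins = 9
features = blocks_x * blocks_y * 2 * 2 * bins  # 7 * 15 * 36 = 3780
detector_len = features + 1                    # plus the SVM bias -> 3781
```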

Is it possible to train an SVM or Random Forest on the final layer feature of a Convolutional Neural Network using Keras?

I have designed a Convolutional Neural Network in Keras for image classification with several convolution/max-pooling layers, one densely connected hidden layer and softmax activation on the final layer. I want to replace softmax with an SVM or Random Forest in the final layer to see if that yields a better accuracy. Is there any way to do it in Keras?
To get (a kind of) SVM, simply use a hinge loss instead of log loss. Putting in a Random Forest does not make sense, as you need a differentiable model to be part of a neural net (unless all you want to do is train the network, later chop off its final part, and use it as a feature extractor that feeds into an RF; but this is not a valid approach in general).
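For reference, the hinge loss the answer refers to looks like this in plain NumPy (in Keras you would pass loss="hinge" to model.compile and use a linear activation on the final layer instead of softmax); the scores below are made-up numbers, purely for illustration:

```python
import numpy as np

# Hinge loss, as used by a linear SVM: labels must be in {-1, +1}
def hinge_loss(y_true, y_pred):
    return np.maximum(0.0, 1.0 - y_true * y_pred).mean()

y_true = np.array([1, -1, 1, -1])
scores = np.array([0.8, -2.0, 1.5, 0.3])  # raw (linear) model outputs

loss = hinge_loss(y_true, scores)
# Only the first and last samples are inside the margin or misclassified,
# contributing 0.2 and 1.3; the mean over 4 samples is 0.375.
```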

How can I apply an SVM or deep neural network for image retrieval?

After obtaining the image dataset, a feature database is constructed for all images: each feature is a vector based on the mean and standard deviation of the RGB and HSV color models for a portion of the image. How can I use an SVM to retrieve related images from the database once a query image is given?
Also, how can unsupervised learning be used for the above problem?
Assuming the query images are unlabeled, applying an SVM would require a way of knowing the labels for the dataset images, since an SVM is a form of supervised learning that seeks to correctly determine class labels for unlabeled data. You would need another method for generating class labels, such as unsupervised learning, so this approach does not seem relevant if you only have feature vectors and no class labels.
A neural network allows for unsupervised learning with unlabeled data, but is a rather complex approach and is the subject of academic research. You may want to consider a simpler machine learning approach such as k-Nearest Neighbors, which lets you obtain the k training samples closest to the query in your feature space. This algorithm is simple to implement and is found in many machine learning libraries; for example, in Python you can use scikit-learn.
I am unsure what type of images you are working with, but you might also want to explore using feature detector algorithms such as SIFT rather than just pixel intensities.
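A minimal sketch of the k-Nearest-Neighbors retrieval idea with scikit-learn, assuming a database of 12-dimensional color-statistics vectors like the ones described in the question (the data here is random, purely illustrative):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.RandomState(42)

# Pretend feature database: 100 images, each described by a 12-dim vector
# (e.g. mean/sd of the RGB and HSV channels, as in the question)
db_features = rng.rand(100, 12)

# Index the database for nearest-neighbor lookup
nn = NearestNeighbors(n_neighbors=5, metric="euclidean")
nn.fit(db_features)

# Describe the query image with the same feature extractor, then retrieve
query = rng.rand(1, 12)
distances, indices = nn.kneighbors(query)
# indices[0] holds the 5 most similar database images, nearest first
```

No labels are needed: retrieval here is purely a similarity search in the feature space.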

SVM training C++ OpenCV

I was under the impression that the training data given to train an SVM consisted of image features, but after reading this post again, the training_mat given to the SVM in the example is just img_mat flattened to one dimension.
So my question is: when training an SVM, do you give it whole images in their entirety, row by row, or do you detect and extract features and then flatten a Mat of those into one dimension?
You can extract features, or you can use the pixel intensity values themselves as the features. In this example, they have done the latter. In that case you end up with a very high number of features, many of which may not be useful. This makes convergence of the SVM training more difficult, but it is still possible. In my experience, an SVM works better if you extract a smaller number of "good" features that best describe your data. However, in recent years it has been shown that state-of-the-art estimators like deep neural networks (when used instead of an SVM) can perform very well using only pixel intensity values as features. This has eliminated the need for feature extraction in the methods that have led to state-of-the-art results on public datasets (like ImageNet).
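To make the two options concrete, here is a small NumPy sketch (synthetic data, illustrative only; the original question uses C++, but the idea is identical) contrasting raw pixel intensities with a tiny hand-crafted feature vector. Either matrix, with one row per image, is a valid training matrix for an SVM:

```python
import numpy as np

rng = np.random.RandomState(0)

# 20 fake grayscale "images", 16x16 each, with two class labels
images = rng.rand(20, 16, 16)
labels = np.repeat([0, 1], 10)

# Option 1: raw pixel intensities as features, each image flattened row by row
pixel_features = images.reshape(len(images), -1)   # shape (20, 256)

# Option 2: a small hand-crafted feature vector per image
# (here just mean and standard deviation, purely for illustration)
stat_features = np.column_stack([images.mean(axis=(1, 2)),
                                 images.std(axis=(1, 2))])  # shape (20, 2)
```

In the OpenCV example from the post, training_mat corresponds to Option 1: each row of the matrix is one flattened image.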