HOG in OpenCV for classification of entire images - c++

I understand that Histograms of Gradients in OpenCV are typically used on image patches in order to detect and classify objects in an image.
However, I would like to use HOG to build a feature vector that can be used to classify an entire image. Using the following:
std::vector<float> temp_FV_out;
cv::HOGDescriptor hog;
hog.compute(img_in, temp_FV_out);
gives very long feature vectors, each of a different length, because the length depends on the image size: larger images contain more 64 x 128 windows, and each window contributes to the length of the feature vector.
How can I get OpenCV to give a short feature vector (about 5-20 bins) from each image, where the length of the feature vector remains constant regardless of the image's size? I would rather not use bag of words to build a dictionary of HOG 'words'.

The first step is to normalize the image size: choose the smallest size you want to process and resize the rest to this base size. You can also establish a small default size (100x100, for example). You may need to crop the images if they do not all have the same aspect ratio.
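A minimal sketch of that first step, assuming C++/OpenCV as in the question (the 100x100 base size and the HOG geometry below are illustrative choices, not prescribed values): resize every image to the same base size and make the detection window cover the whole image, so hog.compute() always returns a vector of the same length.

#include <opencv2/opencv.hpp>
#include <vector>

std::vector<float> computeFixedLengthHOG(const cv::Mat& img_in)
{
    cv::Mat resized;
    cv::resize(img_in, resized, cv::Size(100, 100)); // same base size for every image

    // winSize == image size, so there is exactly one detection window per image
    cv::HOGDescriptor hog(
        cv::Size(100, 100),  // winSize
        cv::Size(20, 20),    // blockSize
        cv::Size(10, 10),    // blockStride
        cv::Size(10, 10),    // cellSize
        9);                  // nbins

    std::vector<float> temp_FV_out;
    hog.compute(resized, temp_FV_out);
    return temp_FV_out;      // same length (2916 here) for every input image
}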
Next, you can select a subset of features from your vector using algorithms such as PCA, decision trees, or AdaBoost, which can help you extract the most significant values from your data.
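If PCA is the route taken, cv::PCA can do the reduction directly; a hedged sketch, assuming the fixed-length HOG vectors have been stacked one per row into a CV_32F matrix called features:

#include <opencv2/opencv.hpp>

// Project each row of 'features' onto its first numComponents principal
// components, e.g. the 5-20 values asked for in the question.
cv::Mat reduceWithPCA(const cv::Mat& features, int numComponents = 10)
{
    cv::PCA pca(features, cv::Mat(), cv::PCA::DATA_AS_ROW, numComponents);
    cv::Mat reduced;
    pca.project(features, reduced);   // rows x numComponents
    return reduced;
}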

Related

Why are the filters in CNNs for image recognition tasks always chosen to be extremely localized?

In CNNs, the filters are usually small spatially, e.g. 3x3 or 5x5. Could the sizes be comparable to the image size? One reason for keeping them small is to reduce the number of parameters to be learnt. Apart from this, are there any other key reasons? For example, do people want to detect edges first?
That answers one part of the question. Another reason is that most of these useful features may be found in more than one place in an image, so it makes sense to slide a single kernel all over the image, in the hope of extracting the same feature in different parts of the image with the same kernel. If you use a big kernel, the features could be interleaved and not cleanly detected.
In addition to your own answer, the reduction in computational cost is a key point. Since we use the same kernel for different sets of pixels in an image, the same weights are shared across these pixel sets as we convolve over them. And since the number of weights is smaller than in a fully connected layer, there are fewer weights to back-propagate.
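As a small illustration of the weight-sharing point (an OpenCV sketch, not from the answers above; the horizontal edge kernel and file names are arbitrary examples): a single 3x3 kernel, i.e. nine shared weights, is slid over the entire image, so the same weights can respond to the feature wherever it appears.

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
    if (img.empty()) return -1;

    // One 3x3 kernel = 9 shared weights, regardless of the image size.
    cv::Mat kernel = (cv::Mat_<float>(3, 3) << -1, -1, -1,
                                                0,  0,  0,
                                                1,  1,  1);
    cv::Mat response, response8u;
    cv::filter2D(img, response, CV_32F, kernel);   // same kernel applied at every position
    cv::convertScaleAbs(response, response8u);     // back to 8-bit for saving
    cv::imwrite("response.png", response8u);
    return 0;
}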

Perform multi-scale training (yolov2)

I am wondering how the multi-scale training in YOLOv2 works.
In the paper, it is stated that:
The original YOLO uses an input resolution of 448 × 448. With the addition of anchor boxes we changed the resolution to 416 × 416. However, since our model only uses convolutional and pooling layers it can be resized on the fly. We want YOLOv2 to be robust to running on images of different sizes so we train this into the model. Instead of fixing the input image size we change the network every few iterations. Every 10 batches our network randomly chooses a new image dimension size. Since our model downsamples by a factor of 32, we pull from the following multiples of 32: {320, 352, ..., 608}. Thus the smallest option is 320 × 320 and the largest is 608 × 608. We resize the network to that dimension and continue training.
I don't get how a network with only convolutional and pooling layers can accept inputs of different resolutions. From my experience of building neural networks, if you change the resolution of the input to a different scale, the number of parameters of the network changes, that is, the structure of the network changes.
So, how does YOLOv2 change this on the fly?
I read the configuration file for yolov2, but all I got was a random=1 statement...
If you only have convolutional layers, the number of weights does not change with the size of the 2D (spatial) part of the layers (it would change if you also resized the number of channels, though).
For example (an imagined network): if you have 224x224x3 input images and a 3x3x64 convolutional layer, you will have 64 different 3*3*3 convolutional filter kernels = 1728 weights. This value does not depend on the size of the image at all, since a kernel is applied at each position of the image independently. This is the most important property of convolution and convolutional layers, the reason why CNNs can go so deep, and why in Faster R-CNN you can simply crop regions out of your feature map.
If there were any fully connected layers or similar, it would not work this way, since there a bigger 2D layer dimension would lead to more connections and more weights.
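The arithmetic from the example above, spelled out (the fully connected comparison uses an assumed dense layer mapping the whole input to a same-sized 64-channel output, just to show the scale of the difference):

#include <cstdio>

int main()
{
    const long long H = 224, W = 224, Cin = 3;   // input: 224 x 224 x 3
    const long long K = 3, Cout = 64;            // 64 convolutional filters of 3x3

    long long convWeights  = K * K * Cin * Cout;              // 3*3*3*64 = 1728, independent of H and W
    long long denseWeights = (H * W * Cin) * (H * W * Cout);  // grows with the image size

    std::printf("convolutional layer weights: %lld\n", convWeights);
    std::printf("fully connected equivalent:  %lld\n", denseWeights);
    return 0;
}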
In YOLOv2, there is one thing that might still look like it doesn't fit. For example, if you double the image size in each dimension, you end up with twice as many features in each dimension right before the final 1x1xN filter: if your grid was 7x7 for the original network size, the resized network might have a 14x14 grid. But then you simply get 14x14 * B*(5+C) regression results, which works just fine.
In YOLO, if you are only using convolutional layers, the size of the output grid changes with the input size.
For example, for an input size of:
320x320, the output size is 10x10
608x608, the output size is 19x19
You then calculate the loss on these with respect to the ground-truth grid, which is adjusted in the same way.
Thus you can back-propagate the loss without adding any more parameters.
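A toy check of those numbers (the downsampling factor of 32 is from the YOLOv2 paper quoted above; everything else is illustrative):

#include <cstdio>

int main()
{
    const int stride = 32;                    // YOLOv2's total downsampling factor
    const int sizes[] = {320, 416, 608};      // multiples of 32 used during training
    for (int s : sizes)
        std::printf("input %dx%d -> output grid %dx%d\n", s, s, s / stride, s / stride);
    return 0;
}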
Refer yolov1 paper for the loss function:
Loss Function from the paper
Thus, in theory, you only need to adjust this function, which depends on the grid size and not on any model parameters, and you should be good to go.
Paper Link: https://arxiv.org/pdf/1506.02640.pdf
In his video explanation, the author mentions the same.
Time: 14:53
Video Link

opencv clahe parameters explanation

I would like a proper explanation of the CLAHE parameters,
i.e. clipLimit and tileGridSize:
how the clipLimit value affects the contrast of the image, and what factors (such as image resolution or object sizes) should be considered when selecting tileGridSize.
Thanks in advance
This question is from a long time ago, but I was searching for the answer and saw it, and then I found some links which may help. Obviously, most of the information below comes from different sites.
AHE is a computer image processing technique used to improve contrast in images. It differs from ordinary histogram equalization in the respect that the adaptive method computes several histograms, each corresponding to a distinct section of the image, and uses them to redistribute the lightness values of the image. It is therefore suitable for improving the local contrast and enhancing the definitions of edges in each region of an image.
However, AHE has a tendency to over-amplify noise in relatively homogeneous regions of an image. A variant of adaptive histogram equalization called contrast limited adaptive histogram equalization (CLAHE) prevents this by limiting the amplification.
CLAHE limits the amplification by clipping the histogram at a predefined value (called the clip limit).
tileGridSize refers to the size of the grid for histogram equalization: the input image is divided into equally sized rectangular tiles, and tileGridSize defines the number of tiles per row and column.
This is the OpenCV documentation for its available functions:
https://docs.opencv.org/master/d6/db6/classcv_1_1CLAHE.html
and these links are also useful:
https://en.wikipedia.org/wiki/Adaptive_histogram_equalization#Contrast_Limited_AHE
http://www.cs.utah.edu/~sujin/courses/reports/cs6640/project2/clahe.html
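A minimal C++ usage sketch of the two parameters (the clipLimit of 2.0 and the 8x8 tile grid are just commonly used starting values, and the file names are placeholders):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat gray = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
    if (gray.empty()) return -1;

    // clipLimit: histogram clipping threshold (higher -> stronger contrast, more noise amplification)
    // tileGridSize: number of tiles per row and column the image is divided into
    cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE(2.0, cv::Size(8, 8));
    cv::Mat enhanced;
    clahe->apply(gray, enhanced);

    cv::imwrite("output.png", enhanced);
    return 0;
}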
clipLimit is the threshold value for contrast limiting.
tileGridSize defines the number of tiles in row and column.
More Information

Haarcascade operates on 348x288 images only?

I am using OpenCV and C++. Which face detector algorithm should I use if I have 348x288 face images? In the paper on Haar cascades, http://www.vision.caltech.edu/html-files/EE148-2005-Spring/pprs/viola04ijcv.pdf, it is said that the detector operates on 348x288 pixel images. Does that mean I cannot use a Haar cascade to detect the faces in my images?
It can be used for your images as long as you setup the correct parameters for CascadeClassifier::detectMultiScale(), especially the following three:
scaleFactor – Parameter specifying how much the image size is reduced at each image scale.
minSize – Minimum possible object size. Objects smaller than that are ignored.
maxSize – Maximum possible object size. Objects bigger than that are ignored.
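A hedged sketch of setting those parameters explicitly (the cascade file, image file and the concrete values below are assumptions for illustration):

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    cv::CascadeClassifier faceCascade;
    if (!faceCascade.load("haarcascade_frontalface_default.xml")) return -1;

    cv::Mat img = cv::imread("face.png", cv::IMREAD_GRAYSCALE);
    if (img.empty()) return -1;
    cv::equalizeHist(img, img);

    std::vector<cv::Rect> faces;
    faceCascade.detectMultiScale(
        img, faces,
        1.1,                 // scaleFactor: shrink the image by 10% at each scale step
        3,                   // minNeighbors
        0,                   // flags (ignored by the current API)
        cv::Size(30, 30),    // minSize: detections smaller than this are ignored
        cv::Size(288, 288)); // maxSize: detections bigger than this are ignored

    std::cout << "found " << faces.size() << " faces\n";
    return 0;
}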

cvSVM training produces poor results for HOGDescriptor

My objective is to train an SVM and get support vectors which I can plug into OpenCV's HOGDescriptor for object detection.
I have gathered ~4000 positives and ~15000 negatives, and I train using the SVM provided by OpenCV. The results give me too many false positives (up to 20 per image). I clip out the false positives and add them to the pool of negatives to retrain, and at times I end up with even more false positives! I have tried adjusting the L2HysThreshold of my HOGDescriptor upwards to 300 without significant improvement. Is my pool of positives and negatives large enough?
The SVM training is also much faster than expected. I have tried with feature vector sizes of 2916 and 12996, using grayscale images and colour images on separate tries. SVM training has never taken longer than 20 minutes. I use auto_train. I am new to machine learning, but from what I hear, training with a dataset as large as mine should take at least a day, no?
I believe cvSVM is not doing much learning and, according to http://opencv-users.1802565.n2.nabble.com/training-a-HOG-descriptor-td6363437.html, it is not suited for this purpose. Does anyone with experience with cvSVM have more input on this?
I am considering using SVMLight http://svmlight.joachims.org/ but it looks like there isn't a way to visualize the SVM hyperplane. What are my options?
I use OpenCV 2.4.3 and have tried the following setups for the HOGDescriptor:
hog.winSize = cv::Size(100,100);
hog.cellSize = cv::Size(5,5);
hog.blockSize = cv::Size(10,10);
hog.blockStride = cv::Size(5,5); //12996 feature vector
hog.winSize = cv::Size(100,100);
hog.cellSize = cv::Size(10,10);
hog.blockSize = cv::Size(20,20);
hog.blockStride = cv::Size(10,10); //2916 feature vector
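For reference, the descriptor lengths implied by these two setups can be checked with HOGDescriptor::getDescriptorSize(); a quick sketch (the 12996 and 2916 values match the comments above):

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // setup 1: 5x5 cells, 10x10 blocks, 5x5 stride
    cv::HOGDescriptor hog1(cv::Size(100, 100), cv::Size(10, 10),
                           cv::Size(5, 5), cv::Size(5, 5), 9);
    // setup 2: 10x10 cells, 20x20 blocks, 10x10 stride
    cv::HOGDescriptor hog2(cv::Size(100, 100), cv::Size(20, 20),
                           cv::Size(10, 10), cv::Size(10, 10), 9);

    std::cout << hog1.getDescriptorSize() << "\n";   // 12996
    std::cout << hog2.getDescriptorSize() << "\n";   // 2916
    return 0;
}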
Your first descriptor dimension is way too large to be of any use. To form a reliable SVM hyperplane, you need at least as many positive and negative samples as your descriptor has dimensions, because ideally you need separating information in every dimension of the hyperplane.
The number of positive and negative samples should be more or less the same unless you provide your SVM trainer with a bias parameter (may not be available in cvSVM).
There is no guarantee that HOG is a good descriptor for the type of problem you are trying to solve. Can you visually confirm that the object you are trying to detect has a distinct shape with similar orientation in all samples? A single type of flower for example may have a unique shape, however many types of flowers together don't have the same unique shape. A bamboo has a unique shape but may not be distinguishable from other objects easily, or may not have the same orientation in all sample images.
cvSVM is normally not the tool used to train SVMs for the OpenCV HOG. Use the binary form of SVMLight (not free for commercial purposes) or libSVM (OK for commercial purposes). Calculate HOGs for all samples using your C++/OpenCV code and write them to a text file in the correct input format for SVMLight/libSVM. Use either program to train a model with a linear kernel and the optimal C; find the optimal C by searching for the best accuracy while changing C in a loop. Then calculate the detector vector (an N+1 dimensional vector, where N is the dimension of your descriptor): take all the support vectors, multiply each one by its alpha value, and sum the resulting alpha-weighted support vectors dimension by dimension to obtain an N-dimensional vector. As the last element, append -b, where b is the hyperplane bias (you can find it in the model file produced by SVMLight/libSVM training). Feed this N+1 dimensional detector to HOGDescriptor::setSVMDetector() and use HOGDescriptor::detect() or HOGDescriptor::detectMultiScale() for detection.
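A hedged sketch of the detector-assembly step described above, assuming the support vectors, alpha values and the bias b have already been parsed out of the SVMLight/libSVM model file (the parsing itself, and the window geometry in the usage comment, are not shown and are assumptions):

#include <opencv2/opencv.hpp>
#include <vector>

std::vector<float> buildHogDetector(
    const std::vector<std::vector<float>>& supportVectors, // each of dimension N
    const std::vector<float>& alphas,                       // one alpha per support vector
    float bias)                                             // hyperplane bias b from the model file
{
    const size_t N = supportVectors.front().size();
    std::vector<float> detector(N + 1, 0.0f);

    // w = sum_i alpha_i * sv_i (valid for a linear kernel only)
    for (size_t i = 0; i < supportVectors.size(); ++i)
        for (size_t d = 0; d < N; ++d)
            detector[d] += alphas[i] * supportVectors[i][d];

    detector[N] = -bias;   // append -b as the last element, as described above
    return detector;
}

// Usage (the window geometry must match the one used to compute the training HOGs):
// cv::HOGDescriptor hog(cv::Size(100,100), cv::Size(20,20), cv::Size(10,10),
//                       cv::Size(10,10), 9);
// hog.setSVMDetector(buildHogDetector(supportVectors, alphas, bias));
// std::vector<cv::Rect> found;
// hog.detectMultiScale(image, found);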
I have had successful results using SVMLight to learn SVM models on training data computed with OpenCV, but I haven't used cvSVM, so I can't compare.
The hogDraw function from http://vision.ucsd.edu/~pdollar/toolbox/doc/index.html will visualise your descriptor.