Training an SVM using Hu moments - C++

I'm learning about SVMs, so I'm writing a sample program that trains an SVM to detect whether a symbol is in an image or not. All the images are black and white (the symbols are black and the background white). I have 12 training images: 6 positives (with the symbol) and 6 negatives (without it). I'm using Hu moments to get the descriptor of every image, and then I construct the training matrix from those descriptors. I also have a labels matrix, which contains one label per image: 1 if it is positive and 0 if it is negative. But I'm getting an error (something like a segmentation fault) at the line where I train the SVM. Here is my code:
#include <opencv2/opencv.hpp>
#include <string>
using namespace cv;
using namespace std;
int main(int argc, char* argv[])
{
//arrays where the labels and the features will be stored
float labels[12] ;
float trainingData[12][7] ;
Moments moment;
double hu[7];
//===============extracting the descriptors for each positive image=========
for ( int i = 0; i <= 5; i++){
//the images are called t0.png ... t5.png and are in the folder train
std::string path("train/t");
path += std::to_string(i);
path += ".png";
Mat input = imread(path, 0); //read the images
bitwise_not(input, input); //invert black and white
Mat BinaryInput;
threshold(input, BinaryInput, 100, 255, cv::THRESH_BINARY); //apply threshold
moment = moments(BinaryInput, true); //calculate the moments of the current image
HuMoments(moment, hu); //calculate the hu moments (this will be our descriptor)
//setting the row i of the training data as the hu moments
for (int j = 0; j <= 6; j++){
trainingData[i][j] = (float)hu[j];
}
labels[i] = 1; //label=1 because it is a positive image
}
//===============extracting the descriptors for each negative image=========
for (int i = 0; i <= 5; i++){
//the images are called tn0.png ... tn5.png and are in the folder train
std::string path("train/tn");
path += std::to_string(i);
path += ".png";
Mat input = imread(path, 0); //read the images
bitwise_not(input, input); //invert black and white
Mat BinaryInput;
threshold(input, BinaryInput, 100, 255, cv::THRESH_BINARY); //apply threshold
moment = moments(BinaryInput, true); //calculate the moments of the current image
HuMoments(moment, hu); //calculate the hu moments (this will be our descriptor)
for (int j = 0; j <= 6; j++){
trainingData[i + 6][j] = (float)hu[j];
}
labels[i + 6] = 0; //label=0 because it is a negative image
}
//===========================training the SVM================
//we convert the labels and trainingData matrixes to Mat objects
Mat labelsMat(12, 1, CV_32FC1, labels);
Mat trainingDataMat(12, 7, CV_32FC1, trainingData);
//create the SVM
Ptr<ml::SVM> svm = ml::SVM::create();
//set the parameters of the SVM
svm->setType(ml::SVM::C_SVC);
svm->setKernel(ml::SVM::LINEAR);
CvTermCriteria criteria = cvTermCriteria(CV_TERMCRIT_ITER, 100, 1e-6);
svm->setTermCriteria(criteria);
//Train the SVM !!!!!HERE OCCURS THE ERROR!!!!!!
svm->train(trainingDataMat, ml::ROW_SAMPLE, labelsMat);
//Testing the SVM...
Mat test = imread("train/t1.png", 0); //this should be a positive test
bitwise_not(test, test);
Mat testBin;
threshold(test, testBin, 100, 255, cv::THRESH_BINARY);
Moments momentP = moments(testBin, true); //calculate the moments of the test image
double huP[7];
HuMoments(momentP, huP);
Mat testMat(1, 7, CV_32FC1, huP); //setting the hu moments to the test matrix
double resp = svm->predict(testMat); //prediction of the SVM
printf("%f", resp); //Response
getchar();
}
I know that the program runs fine until that line, because I printed labelsMat and trainingDataMat and the values inside them are OK. Even in the console I can see that the program runs fine until that exact line executes. The console then shows this message:
OpenCV Error: Bad argument (in the case of classification problem the responses must be categorical; either specify varType when creating TrainData, or pass integer responses)
I don't really know what this means. Any idea what could be causing the problem? If you need any other details, please tell me.
EDIT
For future readers:
The problem was in the way I defined the labels array as an array of float, and labelsMat as a Mat of type CV_32FC1. The array that contains the labels needs to hold integers, so I changed:
float labels[12];
to
int labels[12];
and also changed
Mat labelsMat(12, 1, CV_32FC1, labels);
to
Mat labelsMat(12, 1, CV_32SC1, labels);
and that solved the error. Thank you
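For reference, here is a minimal sketch of the corrected training setup (same data layout as above, OpenCV 3 ml module):
int labels[12]; // filled with 1 for positives, 0 for negatives, as above
float trainingData[12][7]; // filled with the Hu moments, as above
Mat labelsMat(12, 1, CV_32SC1, labels); // integer responses -> categorical
Mat trainingDataMat(12, 7, CV_32FC1, trainingData); // float feature rows
Ptr<ml::SVM> svm = ml::SVM::create();
svm->setType(ml::SVM::C_SVC);
svm->setKernel(ml::SVM::LINEAR);
svm->setTermCriteria(TermCriteria(TermCriteria::MAX_ITER, 100, 1e-6));
svm->train(trainingDataMat, ml::ROW_SAMPLE, labelsMat); // no longer throws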

Try changing:
Mat labelsMat(12, 1, CV_32FC1, labels);
to
Mat labelsMat(12, 1, CV_32SC1, labels);
From: http://answers.opencv.org/question/63715/svm-java-opencv-3/
If that doesn't work, hopefully one of these posts will help you:
Opencv 3.0 SVM train classification issues
OpenCV SVM Training Data

Related

opencv cornerSubPix Exception while converting python code to c++

I am trying to port this response to C++ but I am not able to get past this cryptic exception (see image below). I am not sure what the limiting factor is. I imagine it is the image color format or the corners parameter, but nothing seems to be working. If it is related to converting the color format, please provide a small code snippet.
The Python code provided by Anubhav Singh works great, however I would like to develop in C++. Any help would be greatly appreciated.
I am using OpenCV 4.2.0.
void CornerDetection(){
std::string image_path = samples::findFile("../wing.png");
Mat img = imread(image_path);
Mat greyMat;
Mat dst;
cv::cvtColor(img, greyMat, COLOR_BGR2GRAY);
threshold(greyMat, greyMat, 0, 255, THRESH_BINARY | THRESH_OTSU);
cornerHarris(greyMat, dst, 9, 5, 0.04);
dilate(dst, dst, Mat()); //default 3x3 kernel
Mat img_thresh;
threshold(dst, img_thresh, 0.32 * 255, 255, 0);
img_thresh.convertTo(img_thresh, CV_8UC1);
Mat labels = Mat();
Mat stats = Mat();
Mat centroids = Mat();
cv::connectedComponentsWithStats(img_thresh, labels, stats, centroids, 8, CV_32S);
TermCriteria criteria = TermCriteria(TermCriteria::EPS + TermCriteria::MAX_ITER, 30, 0.001);
std::vector<Point2f> corners = std::vector<Point2f>();
Size winSize = Size(5, 5);
Size zeroZone = Size(-1, -1);
cornerSubPix(greyMat, corners, winSize, zeroZone, criteria);
for (int i = 0; i < corners.size(); i++)
{
circle(img, Point(corners[i].x, corners[i].y), 5, Scalar(0, 255, 0), 2);
}
imshow("img", img);
waitKey();
destroyAllWindows();
}
The solution was to iterate over the centroids to build the corners vector before passing the corners variable to the cornerSubPix(...) function.
std::vector<Point2f> corners = std::vector<Point2f>();
for (int i = 0; i < centroids.rows; i++)
{
double x = centroids.at<double>(i, 0);
double y = centroids.at<double>(i, 1);
corners.push_back(Point2f(x, y));
}
The output of the solution is still not exactly what the Python output is; regardless, it fixed this question in case anyone else runs across this issue.
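For anyone who wants the corrected flow in one place, here is a condensed sketch. One extra detail worth knowing: row 0 of centroids is the background component, so it is usually skipped, and the exception in the question appears to come from passing an empty corners vector to cornerSubPix:
Mat labels, stats, centroids;
connectedComponentsWithStats(img_thresh, labels, stats, centroids, 8, CV_32S);
std::vector<Point2f> corners;
for (int i = 1; i < centroids.rows; i++) { //start at 1 to skip the background centroid
corners.push_back(Point2f((float)centroids.at<double>(i, 0), (float)centroids.at<double>(i, 1)));
}
if (!corners.empty()) { //cornerSubPix asserts on an empty input vector
cornerSubPix(greyMat, corners, Size(5, 5), Size(-1, -1), TermCriteria(TermCriteria::EPS + TermCriteria::MAX_ITER, 30, 0.001));
}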

Matrix assignment value error in OpenCV C++ with mat.at<uchar>(i,j)

I am learning image processing with OpenCV in C++. To implement a basic down-sampling algorithm I need to work at the pixel level, to remove rows and columns. However, when I assign values with mat.at<>(i,j), other values get assigned instead - things like 1e-38.
Here is the code :
Mat src, dst;
src = imread("diw3.jpg", CV_32F);//src is a 479x359 grayscale image
//dst will contain src low-pass-filtered; I checked by displaying it, and it works fine
Mat kernel;
kernel = Mat::ones(3, 3, CV_32F) / (float)(9);
filter2D(src, dst, -1, kernel, Point(-1, -1), 0, BORDER_DEFAULT);
// Now I try to remove half the rows/columns result is stored in downsampled
Mat downsampled = Mat::zeros(240, 180, CV_32F);
for (int i =0; i<downsampled.rows; i ++){
for (int j=0; j<downsampled.cols; j ++){
downsampled.at<uchar>(i,j) = dst.at<uchar>(2*i,2*j);
}
}
Since I read here (OpenCV outputing odd pixel values) that for cout I needed to cast, I wrote downsampled.at<uchar>(i,j) = (int) before dst.at<uchar>, but that does not work either.
The second argument to cv::imread is cv::ImreadModes, so the line:
src = imread("diw3.jpg", CV_32F);
is not correct; it should probably be:
cv::Mat src_8u = imread("diw3.jpg", cv::IMREAD_GRAYSCALE);
src_8u.convertTo(src, CV_32FC1);
which will read the image as 8-bit grayscale image, and will convert it to floating point values.
The loop should look something like this:
Mat downsampled = Mat::zeros(240, 180, CV_32FC1);
for (int i = 0; i < downsampled.rows; i++) {
for (int j = 0; j < downsampled.cols; j++) {
downsampled.at<float>(i,j) = dst.at<float>(2*i,2*j);
}
}
note that the argument to cv::Mat::zeros is CV_32FC1 (1 channel, with 32-bit floating values), so Mat::at<float> method should be used.
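As a rule of thumb, the template argument of at<>() has to match the element type of the Mat exactly; a small sketch of the correspondence:
Mat a(2, 2, CV_8UC1); // 8-bit unsigned -> a.at<uchar>(i, j)
Mat b(2, 2, CV_32FC1); // 32-bit float -> b.at<float>(i, j)
Mat c(2, 2, CV_64FC1); // 64-bit double -> c.at<double>(i, j)
Mat d(2, 2, CV_8UC3); // 3 channels of bytes -> d.at<Vec3b>(i, j)
In a release build, accessing b with b.at<uchar>(i, j) reinterprets the raw float bytes instead of converting them (debug builds may assert), which is exactly where odd values like 1e-38 come from.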

OpenCV and C++ - Shape and road signs detection

I have to write a program that detects 3 types of road signs (speed limit, no parking and warnings). I know how to detect a circle using HoughCircles, but I have several images and the parameters for HoughCircles are different for each image. Is there a general way to detect circles without changing parameters for each image?
Moreover, I need to detect triangles (warning signs), so I'm searching for a general shape detector. Do you have any suggestions/code that can help me with this task?
Finally, to detect the number on speed limit signs, I thought of using SIFT and comparing the image against some templates in order to identify the number on the sign. Could this be a good approach?
Thank you for the answer!
I know this is a pretty old question, but I went through the same problem, so now I will show you how I solved it.
The following images show some of the most accurate results produced by the OpenCV program.
In the following images the detected street signs are circled with three different colors that distinguish the three kinds of street signs (warning, no parking, speed limit).
Red for warning signs
Blue for no parking signs
Fuchsia for speed limit signs
The speed limit value is written in green above the speed limit signs
[example result images not included]
As you can see, the program performs quite well: it is able to detect and distinguish the three kinds of signs and to recognize the speed limit value in the case of speed limit signs. Everything is done without producing too many false positives when, for instance, the image contains signs that do not belong to one of the three categories.
In order to achieve this result the software computes the detection in three main steps.
The first step involves a color-based approach where the red objects in the image are detected and their regions are extracted for analysis. This step is particularly useful for preventing false positives, because only a small part of the image is processed.
The second step works with a machine learning algorithm: in particular, we use a cascade classifier to compute the detection. This operation first requires training the classifiers and, at a later stage, using them to detect the signs.
In the last step the speed limit values inside the speed limit signs are read, again with a machine learning algorithm, but this time using the k-nearest neighbors algorithm.
Now we are going to see in detail each step.
COLOR BASED STEP
Since the street signs are always circled by a red frame, we can afford to take out and analyze only the regions where the red objects are detected.
In order to select the red objects, we consider all the ranges of the red color: even if this may produce some false positives, they will be easily discarded in the next steps.
inRange(image, Scalar(0, 70, 50), Scalar(10, 255, 255), mask1);
inRange(image, Scalar(170, 70, 50), Scalar(180, 255, 255), mask2);
In the image below we can see an example of the red objects detected with this method.
After having found the red pixels, we can group them into regions using a clustering algorithm; I use the method
cv::partition(points, labels, predicate)
After the execution of this method we can save all the points of the same cluster in a vector (one for each cluster) and extract the bounding boxes which represent the regions to be analyzed in the next step.
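A condensed sketch of this clustering step (the complete version appears in the CODE section at the end of this answer):
vector<Point> pts;
findNonZero(mask, pts); //mask = mask1 | mask2 from the inRange calls above
vector<int> labels;
int th2 = 2 * 2; //squared radius tolerance
int n_labels = partition(pts, labels, [th2](const Point& a, const Point& b) {
return (a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y) < th2;
});
vector<vector<Point>> clusters(n_labels);
for (size_t i = 0; i < pts.size(); ++i)
clusters[labels[i]].push_back(pts[i]);
vector<Rect> boxes;
for (const auto& c : clusters)
if (c.size() > 500) { //keep only sizeable red regions
boxes.push_back(boundingRect(c)); //region to analyze in the next step
}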
HAAR CASCADE CLASSIFIERS FOR SIGNS DETECTION
This is the real detection step, where the street signs are detected. In order to train a cascade classifier, the first step consists in building a dataset of positive and negative images. Now I explain how I built my own datasets of images.
The first thing to note is that we need to train three different Haar cascades in order to distinguish between the three kinds of signs that we have to detect, hence we must repeat the following steps for each of the three kinds of signs.
We need two datasets: one for the positive samples (which must be a set of images that contains the road signs that we are going to detect) and another one for the negative samples which can be any kind of image without street signs.
After collecting a set of 100 images for the positive samples and a set of 200 images for the negatives in two different folders, we need to write two text files:
signs.info, which contains a list of lines like the one below, one for each positive sample in the positive folder.
pos/image_name.png 1 0 0 50 45
Here, the numbers after the name represent, respectively, the number of street signs in the image, the coordinates of the upper left corner of the street sign, its height and its width.
bg.txt, which contains a list of file names like the one below, one for each image in the negative folder.
neg/street15.png
With the command line below we generate the .vec file, which contains all the information that the software retrieves from the positive samples.
opencv_createsamples -info signs.info -num 100 -w 50 -h 50 -vec signs.vec
Afterwards we train the cascade classifier with the following command:
opencv_traincascade -data data -vec signs.vec -bg bg.txt -numPos 60 -numNeg 200 -numStages 15 -w 50 -h 50 -featureType LBP
where the number of stages indicates the number of classifiers that will be generated in order to build the cascade.
At the end of this process we gain a file cascade.xml which will be used from the CascadeClassifier program in order to detect the objects in the image.
Now that we have trained our algorithm, we can declare a CascadeClassifier for each kind of street sign, then we detect the signs in the image through
detectMultiScale(image, objects)
This method creates a Rect around each object that has been detected.
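A minimal sketch of that call, assuming the cascade.xml produced by the training step above (region is a placeholder for one of the red ROIs from step one):
CascadeClassifier cascade;
if (!cascade.load("data/cascade.xml")) { /* handle the load failure */ }
Mat gray;
cvtColor(region, gray, COLOR_BGR2GRAY); //region = one red ROI from step one
equalizeHist(gray, gray);
std::vector<Rect> objects;
cascade.detectMultiScale(gray, objects, 1.1, 3, 0, Size(30, 30));
for (size_t k = 0; k < objects.size(); k++)
rectangle(region, objects[k], Scalar(0, 255, 0), 2);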
It is important to note that, exactly like every machine learning algorithm, in order to perform well we need a large number of samples in the dataset. The dataset that I have built is not extremely large, thus in some situations it is not able to detect all the signs. This mostly happens when a small part of the street sign is not visible in the image, like in the warning sign below:
I have expanded my dataset up to the point where I have obtained a fairly accurate result without too many errors.
SPEED LIMIT VALUE DETECTION
As for the street sign detection, here I also used a machine learning algorithm, but with a different approach. After some work, I realized that an OCR (Tesseract) solution does not perform well, so I decided to build my own OCR software.
For the machine learning algorithm I took the image below as training data which contains some speed limit values:
The amount of training data is small. But, since in speed limit signs all letters have the same font, it is not a huge problem.
To prepare the data for training, I made a small code in OpenCV. It does the following things:
It loads the image on the left;
It selects the digits (by contour finding and applying constraints on the area and height of the letters, to avoid false detections).
It draws a bounding rectangle around one letter and waits for a key to be pressed manually. This time the user himself presses the digit key corresponding to the letter in the box.
Once the corresponding digit key is pressed, it saves the 100 pixel values in an array and the corresponding manually entered digit in another array.
Eventually it saves both arrays in separate txt files.
Following the manual digit classification, all the digits in the training data (train.png) are manually labeled, and the image will look like the one below.
Now we enter into training and testing part.
For training we do as follows:
Load the txt files we already saved earlier
Create an instance of the classifier that we are going to use (KNearest)
Then we use the KNearest.train function to train the data
Now the detection:
We load the image with the speed limit sign detected
Process the image as before and extract each digit using contour methods
Draw a bounding box around it, then resize it to 10x10, and store its pixel values in an array as done earlier.
Then we use the KNearest.find_nearest() function to find the nearest item to the one we gave.
And it recognizes the correct digit.
I tested this little OCR on many images, and just with this small dataset I have obtained an accuracy of about 90%.
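Stripped of the file handling, the train/findNearest round trip boils down to a few lines; samples, responses and digitCrop10x10 are placeholders here (samples holds one 1x100 float row per labeled digit, responses the matching digit values), and the full version is in the getDigits function below:
Ptr<ml::KNearest> knn = ml::KNearest::create();
knn->train(samples, ml::ROW_SAMPLE, responses); //train on the labeled digit rows
Mat query;
digitCrop10x10.reshape(1, 1).convertTo(query, CV_32FC1); //flatten the 10x10 crop
Mat bestLabels;
float digit = knn->findNearest(query, 4, bestLabels); //k = 4 nearest neighbors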
CODE
Below I post all my OpenCV C++ code in a single class; following my instructions you should be able to achieve my result.
#include "opencv2/objdetect/objdetect.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <cmath>
#include <stdlib.h>
#include "opencv2/core/core.hpp"
#include "opencv2/highgui.hpp"
#include <string.h>
#include <opencv2/ml/ml.hpp>
using namespace std;
using namespace cv;
std::vector<cv::Rect> getRedObjects(cv::Mat image);
vector<Mat> detectAndDisplaySpeedLimit( Mat frame );
vector<Mat> detectAndDisplayNoParking( Mat frame );
vector<Mat> detectAndDisplayWarning( Mat frame );
void trainDigitClassifier();
string getDigits(Mat image);
vector<Mat> loadAllImage();
int getSpeedLimit(string speed);
//path of the haar cascade files
String no_parking_signs_cascade = "/Users/giuliopettenuzzo/Desktop/cascade_classifiers/no_parking_cascade.xml";
String speed_signs_cascade = "/Users/giuliopettenuzzo/Desktop/cascade_classifiers/speed_limit_cascade.xml";
String warning_signs_cascade = "/Users/giuliopettenuzzo/Desktop/cascade_classifiers/warning_cascade.xml";
CascadeClassifier speed_limit_cascade;
CascadeClassifier no_parking_cascade;
CascadeClassifier warning_cascade;
int main(int argc, char** argv)
{
//train the classifier for digit recognition; this requires manual training, read the report for more details
trainDigitClassifier();
cv::Mat sceneImage;
vector<Mat> allImages = loadAllImage();
for(int i = 0;i<allImages.size();i++){
sceneImage = allImages[i];
//load the haar cascade files
if( !speed_limit_cascade.load( speed_signs_cascade ) ){ printf("--(!)Error loading\n"); return -1; };
if( !no_parking_cascade.load( no_parking_signs_cascade ) ){ printf("--(!)Error loading\n"); return -1; };
if( !warning_cascade.load( warning_signs_cascade ) ){ printf("--(!)Error loading\n"); return -1; };
Mat scene = sceneImage.clone();
//detect the red objects
std::vector<cv::Rect> allObj = getRedObjects(scene);
//use the three cascade classifier for each object detected by the getRedObjects() method
for(int j = 0;j<allObj.size();j++){
Mat img = sceneImage(Rect(allObj[j]));
vector<Mat> warningVec = detectAndDisplayWarning(img);
if(warningVec.size()>0){
Rect box = allObj[j];
}
vector<Mat> noParkVec = detectAndDisplayNoParking(img);
if(noParkVec.size()>0){
Rect box = allObj[j];
}
vector<Mat> speedLimitVec = detectAndDisplaySpeedLimit(img);
if(speedLimitVec.size()>0){
Rect box = allObj[j];
for(int i = 0; i<speedLimitVec.size();i++){
//get the speed limit and sketch it in the image
int digit = getSpeedLimit(getDigits(speedLimitVec[i]));
if(digit > 0){
Point point = box.tl();
point.y = point.y + 30;
cv::putText(sceneImage,
"SPEED LIMIT " + to_string(digit),
point,
cv::FONT_HERSHEY_COMPLEX_SMALL,
0.7,
cv::Scalar(0,255,0),
1,
cv::LINE_AA); //anti-aliased line type
}
}
}
}
imshow("currentobj",sceneImage);
waitKey(0);
}
}
/*
* detect the red object in the image given in the param,
* return a vector containing all the Rect of the red objects
*/
std::vector<cv::Rect> getRedObjects(cv::Mat image)
{
Mat3b res = image.clone();
std::vector<cv::Rect> result;
cvtColor(image, image, COLOR_BGR2HSV);
Mat1b mask1, mask2;
//ranges of red color
inRange(image, Scalar(0, 70, 50), Scalar(10, 255, 255), mask1);
inRange(image, Scalar(170, 70, 50), Scalar(180, 255, 255), mask2);
Mat1b mask = mask1 | mask2;
vector<Point> pts;
findNonZero(mask, pts); //coordinates of every red pixel
for (size_t i = 0; i < pts.size(); i++ ) {
cout << "Red#" << i << ": " << pts[i].x << ", " << pts[i].y << endl;
}
int th_distance = 2; // radius tolerance
// Apply partition
// All pixels within the radius tolerance distance will belong to the same class (same label)
vector<int> labels;
// With lambda function (require C++11)
int th2 = th_distance * th_distance;
int n_labels = partition(pts, labels, [th2](const Point& lhs, const Point& rhs) {
return ((lhs.x - rhs.x)*(lhs.x - rhs.x) + (lhs.y - rhs.y)*(lhs.y - rhs.y)) < th2;
});
// You can save all points in the same class in a vector (one for each class), just like findContours
vector<vector<Point>> contours(n_labels);
for (int i = 0; i < pts.size(); ++i){
contours[labels[i]].push_back(pts[i]);
}
// Get bounding boxes
vector<Rect> boxes;
for (int i = 0; i < contours.size(); ++i)
{
Rect box = boundingRect(contours[i]);
if(contours[i].size()>500){//prima era 1000
boxes.push_back(box);
Rect enlarged_box = box + Size(100,100);
enlarged_box -= Point(30,30);
if(enlarged_box.x<0){
enlarged_box.x = 0;
}
if(enlarged_box.y<0){
enlarged_box.y = 0;
}
if(enlarged_box.height + enlarged_box.y > res.rows){
enlarged_box.height = res.rows - enlarged_box.y;
}
if(enlarged_box.width + enlarged_box.x > res.cols){
enlarged_box.width = res.cols - enlarged_box.x;
}
Mat img = res(Rect(enlarged_box));
result.push_back(enlarged_box);
}
}
if(boxes.empty()){
return result; //no red region was large enough
}
Rect largest_box = *max_element(boxes.begin(), boxes.end(), [](const Rect& lhs, const Rect& rhs) {
return lhs.area() < rhs.area();
});
//draw the rects in case you want to see them
for(size_t j=0;j<boxes.size();j++){
if(boxes[j].area() > largest_box.area()/3){
rectangle(res, boxes[j], Scalar(0, 0, 255));
Rect enlarged_box = boxes[j] + Size(20,20);
enlarged_box -= Point(10,10);
rectangle(res, enlarged_box, Scalar(0, 255, 0));
}
}
rectangle(res, largest_box, Scalar(0, 0, 255));
Rect enlarged_box = largest_box + Size(20,20);
enlarged_box -= Point(10,10);
rectangle(res, enlarged_box, Scalar(0, 255, 0));
return result;
}
/*
* code for detect the speed limit sign , it draws a circle around the speed limit signs
*/
vector<Mat> detectAndDisplaySpeedLimit( Mat frame )
{
std::vector<Rect> signs;
vector<Mat> result;
Mat frame_gray;
cvtColor( frame, frame_gray, COLOR_BGR2GRAY );
//normalizes the brightness and increases the contrast of the image
equalizeHist( frame_gray, frame_gray );
//-- Detect signs
speed_limit_cascade.detectMultiScale( frame_gray, signs, 1.1, 3, 0|CASCADE_SCALE_IMAGE, Size(30, 30) );
cout << speed_limit_cascade.getFeatureType();
for( size_t i = 0; i < signs.size(); i++ )
{
Point center( signs[i].x + signs[i].width*0.5, signs[i].y + signs[i].height*0.5 );
ellipse( frame, center, Size( signs[i].width*0.5, signs[i].height*0.5), 0, 0, 360, Scalar( 255, 0, 255 ), 4, 8, 0 );
Mat resultImage = frame(Rect(center.x - signs[i].width*0.5,center.y - signs[i].height*0.5,signs[i].width,signs[i].height));
result.push_back(resultImage);
}
return result;
}
/*
* code for detect the warning sign , it draws a circle around the warning signs
*/
vector<Mat> detectAndDisplayWarning( Mat frame )
{
std::vector<Rect> signs;
vector<Mat> result;
Mat frame_gray;
cvtColor( frame, frame_gray, COLOR_BGR2GRAY );
equalizeHist( frame_gray, frame_gray );
//-- Detect signs
warning_cascade.detectMultiScale( frame_gray, signs, 1.1, 3, 0|CASCADE_SCALE_IMAGE, Size(30, 30) );
cout << warning_cascade.getFeatureType();
Rect previus;
for( size_t i = 0; i < signs.size(); i++ )
{
Point center( signs[i].x + signs[i].width*0.5, signs[i].y + signs[i].height*0.5 );
Rect newRect = Rect(center.x - signs[i].width*0.5,center.y - signs[i].height*0.5,signs[i].width,signs[i].height);
if((previus & newRect).area()>0){
previus = newRect;
}else{
ellipse( frame, center, Size( signs[i].width*0.5, signs[i].height*0.5), 0, 0, 360, Scalar( 0, 0, 255 ), 4, 8, 0 );
Mat resultImage = frame(newRect);
result.push_back(resultImage);
previus = newRect;
}
}
return result;
}
/*
* code for detect the no parking sign , it draws a circle around the no parking signs
*/
vector<Mat> detectAndDisplayNoParking( Mat frame )
{
std::vector<Rect> signs;
vector<Mat> result;
Mat frame_gray;
cvtColor( frame, frame_gray, COLOR_BGR2GRAY );
equalizeHist( frame_gray, frame_gray );
//-- Detect signs
no_parking_cascade.detectMultiScale( frame_gray, signs, 1.1, 3, 0|CASCADE_SCALE_IMAGE, Size(30, 30) );
cout << no_parking_cascade.getFeatureType();
Rect previus;
for( size_t i = 0; i < signs.size(); i++ )
{
Point center( signs[i].x + signs[i].width*0.5, signs[i].y + signs[i].height*0.5 );
Rect newRect = Rect(center.x - signs[i].width*0.5,center.y - signs[i].height*0.5,signs[i].width,signs[i].height);
if((previus & newRect).area()>0){
previus = newRect;
}else{
ellipse( frame, center, Size( signs[i].width*0.5, signs[i].height*0.5), 0, 0, 360, Scalar( 255, 0, 0 ), 4, 8, 0 );
Mat resultImage = frame(newRect);
result.push_back(resultImage);
previus = newRect;
}
}
return result;
}
/*
* train the classifier for digit recognition; this only needs to be done once, since this method saves the result to files that
* can be reused in later executions
* in order to train, the user must manually enter the digit corresponding to the box that the program shows; press space if the red box is just a point (false positive)
*/
void trainDigitClassifier(){
Mat thr,gray,con;
Mat src=imread("/Users/giuliopettenuzzo/Desktop/all_numbers.png",1);
cvtColor(src,gray,COLOR_BGR2GRAY);
threshold(gray,thr,125,255,THRESH_BINARY_INV); //Threshold to find contour
imshow("ci",thr);
waitKey(0);
thr.copyTo(con);
// Create sample and label data
vector< vector <Point> > contours; // Vector for storing contour
vector< Vec4i > hierarchy;
Mat sample;
Mat response_array;
findContours( con, contours, hierarchy, RETR_CCOMP, CHAIN_APPROX_SIMPLE ); //Find contours
for( int i = 0; i >= 0 && i < (int)contours.size(); i = hierarchy[i][0] ) // iterate through the first-hierarchy-level contours
{
Rect r= boundingRect(contours[i]); //Find bounding rect for each contour
rectangle(src,Point(r.x,r.y), Point(r.x+r.width,r.y+r.height), Scalar(0,0,255),2,8,0);
Mat ROI = thr(r); //Crop the image
Mat tmp1, tmp2;
resize(ROI,tmp1, Size(10,10), 0,0,INTER_LINEAR ); //resize to 10X10
tmp1.convertTo(tmp2,CV_32FC1); //convert to float
imshow("src",src);
int c=waitKey(0); // Read the corresponding label for the contour from the keyboard
c-=0x30; // Convert ASCII to integer value
response_array.push_back(c); // Store label to a mat
rectangle(src,Point(r.x,r.y), Point(r.x+r.width,r.y+r.height), Scalar(0,255,0),2,8,0);
sample.push_back(tmp2.reshape(1,1)); // Store sample data
}
// Store the data to file
Mat response,tmp;
tmp=response_array.reshape(1,1); //make continuous
tmp.convertTo(response,CV_32FC1); // Convert to float
FileStorage Data("TrainingData.yml",FileStorage::WRITE); // Store the sample data in a file
Data << "data" << sample;
Data.release();
FileStorage Label("LabelData.yml",FileStorage::WRITE); // Store the label data in a file
Label << "label" << response;
Label.release();
cout<<"Training and Label data created successfully....!! "<<endl;
imshow("src",src);
waitKey(0);
}
/*
* get digit from the image given in param, using the classifier trained before
*/
string getDigits(Mat image)
{
Mat thr1,gray1,con1;
Mat src1 = image.clone();
cvtColor(src1,gray1,COLOR_BGR2GRAY);
threshold(gray1,thr1,125,255,THRESH_BINARY_INV); // Threshold to create input
thr1.copyTo(con1);
// Read stored sample and label for training
Mat sample1;
Mat response1,tmp1;
FileStorage Data1("TrainingData.yml",FileStorage::READ); // Read training data to a Mat
Data1["data"] >> sample1;
Data1.release();
FileStorage Label1("LabelData.yml",FileStorage::READ); // Read label data to a Mat
Label1["label"] >> response1;
Label1.release();
Ptr<ml::KNearest> knn(ml::KNearest::create());
knn->train(sample1, ml::ROW_SAMPLE,response1); // Train with sample and responses
cout<<"Training compleated.....!!"<<endl;
vector< vector <Point> > contours1; // Vector for storing contour
vector< Vec4i > hierarchy1;
//Create input sample by contour finding and cropping
findContours( con1, contours1, hierarchy1, RETR_CCOMP, CHAIN_APPROX_SIMPLE );
Mat dst1(src1.rows,src1.cols,CV_8UC3,Scalar::all(0));
string result;
for( int i = 0; i >= 0 && i < (int)contours1.size(); i = hierarchy1[i][0] ) // iterate through each contour of the first hierarchy level
{
Rect r= boundingRect(contours1[i]);
Mat ROI = thr1(r);
Mat tmp1, tmp2;
resize(ROI,tmp1, Size(10,10), 0,0,INTER_LINEAR );
tmp1.convertTo(tmp2,CV_32FC1);
Mat bestLabels;
float p=knn -> findNearest(tmp2.reshape(1,1),4, bestLabels);
char name[4];
sprintf(name,"%d",(int)p);
cout << "num = " << (int)p;
result = result + to_string((int)p);
putText( dst1,name,Point(r.x,r.y+r.height) ,0,1, Scalar(0, 255, 0), 2, 8 );
}
imwrite("dest.jpg",dst1);
return result ;
}
/*
* from the digits detected, it returns a speed limit if it is detected correctly, -1 otherwise
*/
int getSpeedLimit(string numbers){
if ((numbers.find("30") != std::string::npos) || (numbers.find("03") != std::string::npos)) {
return 30;
}
if ((numbers.find("50") != std::string::npos) || (numbers.find("05") != std::string::npos)) {
return 50;
}
if ((numbers.find("80") != std::string::npos) || (numbers.find("08") != std::string::npos)) {
return 80;
}
if ((numbers.find("70") != std::string::npos) || (numbers.find("07") != std::string::npos)) {
return 70;
}
if ((numbers.find("90") != std::string::npos) || (numbers.find("09") != std::string::npos)) {
return 90;
}
if ((numbers.find("100") != std::string::npos) || (numbers.find("001") != std::string::npos)) {
return 100;
}
if ((numbers.find("130") != std::string::npos) || (numbers.find("031") != std::string::npos)) {
return 130;
}
return -1;
}
/*
* load all the images from the folder whose path is hard coded below
*/
vector<Mat> loadAllImage(){
vector<cv::String> fn;
glob("/Users/giuliopettenuzzo/Desktop/T1/dataset/*.jpg", fn, false);
vector<Mat> images;
size_t count = fn.size(); //number of jpg files in the images folder
for (size_t i=0; i<count; i++)
images.push_back(imread(fn[i]));
return images;
}
Maybe you should try implementing the RANSAC algorithm. If you are using color images, it might be a good idea (if you are in Europe) to use the red channel only, since the speed limits are surrounded by a red circle (or a thin white one, I think).
For that you need to filter the image to get the edges (Canny filter).
Here are some useful links:
OpenCV detect partial circle with noise
https://hal.archives-ouvertes.fr/hal-00982526/document
Finally, for the number detection I think your approach is OK. Another approach is to use something like the Viola-Jones algorithm to detect the signs, with pretrained existing models... It's up to you!
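For illustration, a quick sketch of the red-channel + Canny preprocessing suggested above (scene.jpg is a placeholder path):
Mat bgr = imread("scene.jpg");
vector<Mat> channels;
split(bgr, channels); //channels[2] is the red channel in BGR order
Mat edges;
Canny(channels[2], edges, 100, 200); //edge map to feed a circle fit / RANSAC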

SVM Multiclass Image classification with probability outputs

I am trying to classify an image using the support vector machines (SVM) of OpenCV, and I get this error during SVM training:
OpenCV Error: Bad argument (train data must be floating-point matrix) in cvCheckTrainData, file ......\modules\ml\src\inner_functions.cpp, line 857
So far I have converted the data to rows of floating-point type, but I still get the error. I would also like to predict and save the final classes of the image. Kindly advise me; my code is below.
Is there a multiclass SVM classification code example somewhere that I can follow?
Thanks.
int main()
{
//Read image;
register int iii, jjj;
Mat image = imread("E:\\DATA\\Dummy\\Input\\ImageFeatures.jpg");
const int rows = image.rows, cols = image.cols, bands = image.channels();
Mat reshapedImage;
reshapedImage = image.reshape(bands, image.rows * image.cols);
reshapedImage.convertTo(reshapedImage,CV_32FC1);
// Set up labels
Mat imagelabels = imread("E:\\DATA\\Dummy\\Input\\TrainingSet.bmp", CV_8UC1);
Mat labels;
labels = imagelabels.reshape(1, rows * cols);
labels.convertTo(labels, CV_32FC1);
// Set up SVM's parameters
CvSVMParams params;
params.svm_type = CvSVM::C_SVC;
params.kernel_type = CvSVM::RBF; //RBF
params.term_crit = cvTermCriteria(CV_TERMCRIT_ITER, 100, 1e-6);
params.C = 10;
params.gamma = 0.9;
// Train the SVM
CvSVM SVM;
SVM.train(reshapedImage, labels, Mat(), Mat(), params);
//Vec3b green(0,255,0), blue (255,0,0), red(0,0,255), gray(230,230,230);
float response = SVM.predict(reshapedImage);
Mat solution;
for (iii = 0; iii < image.rows; iii++)
for (jjj = 0; jjj < image.cols; jjj++)
{
float response = SVM.predict(reshapedImage);
solution.at<float>(iii,jjj) = response;
}
// Show the training data
imwrite("result.bmp", solution); // save the image
solution.convertTo(solution,CV_8UC1);
imshow("SVM Simple Example", solution); // show it to the user
waitKey(0);
}
In my case I have a 1000 by 1000 pixel image and a corresponding fully labelled 1000 by 1000 pixel image with five classes (labels from 0 to 4). What is the best way to deal with this scenario?
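A note for readers hitting the same error: the reshape with bands channels above leaves a 3-channel CV_32FC3 matrix even after convertTo, which appears to be what trips the "must be floating-point matrix" check; the samples Mat must be single-channel CV_32FC1. With the newer C++ ml::SVM API (OpenCV 3+), classification labels must also be a CV_32SC1 integer matrix, the same fix as in the first question on this page. A rough multiclass sketch under those assumptions:
Mat samples = image.reshape(1, image.rows * image.cols); //one row of B,G,R per pixel, single channel
samples.convertTo(samples, CV_32FC1); //float features
Mat labels = imagelabels.reshape(1, image.rows * image.cols);
labels.convertTo(labels, CV_32SC1); //integer class ids 0..4
Ptr<ml::SVM> svm = ml::SVM::create();
svm->setType(ml::SVM::C_SVC);
svm->setKernel(ml::SVM::RBF);
svm->setC(10);
svm->setGamma(0.9);
svm->train(samples, ml::ROW_SAMPLE, labels);
Mat responses;
svm->predict(samples, responses); //one float response per pixel row
Mat classMap = responses.reshape(1, image.rows); //back to image shape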

PCA + SVM using C++ Syntax in OpenCV 2.2

I'm having problems getting PCA and Eigenfaces working using the latest C++ syntax with the Mat and PCA classes. The older C syntax took an array of IplImage* as a parameter to perform its processing, and the current API only takes a Mat that is formatted by column or row. I took the row approach, using the reshape function to fit my image's matrix into a single row. I eventually want to take this data and then use the SVM algorithm to perform detection, but when I do that all my data is just a stream of 0s. Can someone please help me out? What am I doing wrong? Thanks!
I saw this question and it's somewhat related, but I'm not sure what the solution is.
This is basically what I have:
vector<Mat> images; //This variable will be loaded with a set of images to perform PCA on.
Mat values(images.size(), 1, CV_32SC1); //Values are the corresponding values to each of my images.
int nEigens = images.size() - 1; //Number of Eigen Vectors.
//Load the images into a Matrix
Mat desc_mat(images.size(), images[0].rows * images[0].cols, CV_32FC1);
for (int i=0; i<images.size(); i++) {
desc_mat.row(i) = images[i].reshape(1, 1);
}
Mat average;
PCA pca(desc_mat, average, CV_PCA_DATA_AS_ROW, nEigens);
Mat data(desc_mat.rows, nEigens, CV_32FC1); //This Mat will contain all the Eigenfaces that will be used later with SVM for detection
//Project the images onto the PCA subspace
for(int i=0; i<images.size(); i++) {
Mat projectedMat(1, nEigens, CV_32FC1);
pca.project(desc_mat.row(i), projectedMat);
data.row(i) = projectedMat.row(0);
}
CvMat d1 = (CvMat)data;
CvMat d2 = (CvMat)values;
CvSVM svm;
svm.train(&d1, &d2);
svm.save("svmdata.xml");
What etarion said is correct.
To copy into a column or row, you always have to write it as:
Mat B = mat.col(i);
A.copyTo(B); // A is the source Mat being copied into column i
The following program shows how to perform a PCA in OpenCV. It'll show the mean image and the first three Eigenfaces. The images I used in there are available from http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html:
#include "cv.h"
#include "highgui.h"
using namespace std;
using namespace cv;
Mat normalize(const Mat& src) {
Mat srcnorm;
normalize(src, srcnorm, 0, 255, NORM_MINMAX, CV_8UC1);
return srcnorm;
}
int main(int argc, char *argv[]) {
vector<Mat> db;
// load greyscale images (these are from http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html)
db.push_back(imread("s1/1.pgm",0));
db.push_back(imread("s1/2.pgm",0));
db.push_back(imread("s1/3.pgm",0));
db.push_back(imread("s2/1.pgm",0));
db.push_back(imread("s2/2.pgm",0));
db.push_back(imread("s2/3.pgm",0));
db.push_back(imread("s3/1.pgm",0));
db.push_back(imread("s3/2.pgm",0));
db.push_back(imread("s3/3.pgm",0));
db.push_back(imread("s4/1.pgm",0));
db.push_back(imread("s4/2.pgm",0));
db.push_back(imread("s4/3.pgm",0));
int total = db[0].rows * db[0].cols;
// build matrix (column)
Mat mat(total, db.size(), CV_32FC1);
for(int i = 0; i < db.size(); i++) {
Mat X = mat.col(i);
db[i].reshape(1, total).col(0).convertTo(X, CV_32FC1, 1/255.);
}
// Change to the number of principal components you want:
int numPrincipalComponents = 12;
// Do the PCA:
PCA pca(mat, Mat(), CV_PCA_DATA_AS_COL, numPrincipalComponents);
// Create the Windows:
namedWindow("avg", 1);
namedWindow("pc1", 1);
namedWindow("pc2", 1);
namedWindow("pc3", 1);
// Mean face:
imshow("avg", pca.mean.reshape(1, db[0].rows));
// First three eigenfaces:
imshow("pc1", normalize(pca.eigenvectors.row(0)).reshape(1, db[0].rows));
imshow("pc2", normalize(pca.eigenvectors.row(1)).reshape(1, db[0].rows));
imshow("pc3", normalize(pca.eigenvectors.row(2)).reshape(1, db[0].rows));
// Show the windows:
waitKey(0);
}
and if you want to build the matrix by row (like in your original question above) use this instead:
// build matrix
Mat mat(db.size(), total, CV_32FC1);
for(int i = 0; i < db.size(); i++) {
Mat X = mat.row(i);
db[i].reshape(1, 1).row(0).convertTo(X, CV_32FC1, 1/255.);
}
and set the flag in the PCA to:
CV_PCA_DATA_AS_ROW
Regarding machine learning. I wrote a document on machine learning with the OpenCV C++ API that has examples for most of the classifiers, including Support Vector Machines. Maybe you can get some inspiration there: http://www.bytefish.de/pdf/machinelearning.pdf.
data.row(i) = projectedMat.row(0);
This will not work. operator= is a shallow copy, meaning no data is actually copied. Use
cv::Mat sample = data.row(i); // also a shallow copy, points to old data!
projectedMat.row(0).copyTo(sample);
The same also for:
desc_mat.row(i) = images[i].reshape(1, 1);
I would suggest looking at the newly checked-in tests in svn head:
modules/core/test/test_mat.cpp
available online here: https://code.ros.org/svn/opencv/trunk/opencv/modules/core/test/test_mat.cpp
It has examples for PCA in the old C and the new C++ APIs.
Hope that helps!