Serializing Face Features for recognition in DLIB - C++

I am writing dlib code to do face recognition on a 1-to-1 basis.
I followed the code sample in the dlib examples and did the following:
std::vector<matrix<rgb_pixel>> faces;
for (auto face : detector(img1))
{
    auto shape = sp(img1, face);
    matrix<rgb_pixel> face_chip;
    extract_image_chip(img1, get_face_chip_details(shape, 150, 0.25), face_chip);
    faces.push_back(move(face_chip));
}
This is for the first image; then I did the same for the second image:
for (auto face : detector(img2))
{
    auto shape = sp(img2, face);
    matrix<rgb_pixel> face_chip;
    extract_image_chip(img2, get_face_chip_details(shape, 150, 0.25), face_chip);
    faces.push_back(move(face_chip));
}
Then I continue as per the linked sample:
std::vector<matrix<float, 0, 1>> face_descriptors = net(faces);
std::vector<sample_pair> edges;
for (size_t i = 0; i < face_descriptors.size(); ++i)
{
    for (size_t j = i; j < face_descriptors.size(); ++j)
    {
        if (length(face_descriptors[i] - face_descriptors[j]) < threshold)
            edges.push_back(sample_pair(i, j));
    }
}
std::vector<unsigned long> labels;
const int num_clusters = chinese_whispers(edges, labels);
//etc
And now comes my question. img1 is an image already available to the code, read when I need to match a specific person; i.e. to match personX, img1 is read using:
load_image(img1, "personX.jpg");
Instead of keeping the image saved, I was trying to save the features and load them, to reduce the time spent on feature extraction. So I moved the first for loop into a different (enrollment-like) function and made it something like this:
std::vector<matrix<rgb_pixel>> faces;
for (auto face : detector(img1))
{
    auto shape = sp(img1, face);
    matrix<rgb_pixel> face_chip;
    extract_image_chip(img1, get_face_chip_details(shape, 150, 0.25), face_chip);
    serialize("personX.dat") << face_chip;
}
Then at recognition time, instead of the loop, I used:
matrix<rgb_pixel> face_chip;
deserialize("personX.dat") >> face_chip;
faces.push_back(move(face_chip));
The rest of the code, from the extraction on img2 onward, remained the same. The code compiled, but during execution, when I reach the recognition step, I end up with the following error:
**************************** FATAL ERROR DETECTED ****************************
Error detected at line 216.
Error detected in file /usr/local/include/dlib/dnn/input.h.
Error detected in function void dlib::input_rgb_image_sized<NR, NC>::to_tensor(forward_iterator, forward_iterator, dlib::resizable_tensor&) const [with forward_iterator = __gnu_cxx::__normal_iterator<dlib::matrix<dlib::rgb_pixel>*, std::vector<dlib::matrix<dlib::rgb_pixel> > >; long unsigned int NR = 150ul; long unsigned int NC = 150ul].
Failing expression was i->nr()==NR && i->nc()==NC input_rgb_image_sized::to_tensor()
All input images must have 150 rows and 150 columns, but we got one with 0 rows and 0 columns.
Is there something wrong with the serialization / de-serialization? Or should I write the features to a file with another method?
Code for the full function:
try
{
    load_image(img1, check_image);
}
catch (...)
{
    cout << "Name: " << uname << " doesn't exist" << endl;
    return;
}

QElapsedTimer timer;
timer.start();
dlib::assign_image(img2, dlib::cv_image<bgr_pixel>(colorImage));
std::vector<matrix<rgb_pixel>> faces;
for (auto face : detector(img1))
{
    auto shape = sp(img1, face);
    matrix<rgb_pixel> face_chip;
    extract_image_chip(img1, get_face_chip_details(shape, 150, 0.25), face_chip);
    faces.push_back(move(face_chip));
    // serialize("out.dat") << face_chip; // used when I don't need to read the image
}
// matrix<rgb_pixel> face_chip;          // used when I don't need to read the image
// deserialize("out.dat") >> face_chip;  // used when I don't need to read the image
// faces.push_back(move(face_chip));     // used when I don't need to read the image
cout << "Time to extract features for enrolled image: " << timer.elapsed() << endl;
timer.restart();
for (auto face : detector(img2))
{
    auto shape = sp(img2, face);
    matrix<rgb_pixel> face_chip;
    extract_image_chip(img2, get_face_chip_details(shape, 150, 0.25), face_chip);
    faces.push_back(move(face_chip));
}
cout << "Time to extract features for new image: " << timer.elapsed() << endl;
timer.restart();
if (faces.size() < 2)
{
    cout << "No Face" << endl;
}
else
{
    std::vector<matrix<float, 0, 1>> face_descriptors = net(faces);
    std::vector<sample_pair> edges;
    for (size_t i = 0; i < face_descriptors.size(); ++i)
    {
        for (size_t j = i; j < face_descriptors.size(); ++j)
        {
            if (length(face_descriptors[i] - face_descriptors[j]) < threshold)
                edges.push_back(sample_pair(i, j));
        }
    }
    std::vector<unsigned long> labels;
    const int num_clusters = chinese_whispers(edges, labels);
    if (num_clusters == 1)
    {
        cout << "Recognized" << endl;
    }
    else
    {
        cout << "Faces don't match";
    }
}
cout << "Needed time is: " << timer.elapsed() << " ms" << endl;

Instead of serializing the matrix, I serialized the output vector (faces):
serialize("personX.dat")<<faces;
Then, when doing the recognition, I deserialized the .dat file and used the resulting vector:
std::vector<matrix<rgb_pixel>> faces;
deserialize("personX.dat") >> faces;
for (auto face : detector(img2))
{
    auto shape = sp(img2, face);
    matrix<rgb_pixel> face_chip;
    extract_image_chip(img2, get_face_chip_details(shape, 150, 0.25), face_chip);
    faces.push_back(move(face_chip));
}
I continued as described in the question.
I don't know if this is the best way to do it... but it worked.
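For what it's worth, dlib's serialize()/deserialize() also work on the descriptor vectors themselves, so the enrollment step could store the 128-D outputs of net() and skip both chip extraction and the network pass for personX at match time. A minimal sketch, assuming the usual detector, sp, and net objects from the dlib face recognition example (the file name personX_descriptors.dat is just illustrative):

// Enrollment: extract chips for personX and store the 128-D descriptors.
std::vector<matrix<rgb_pixel>> chips;
for (auto face : detector(img1))
{
    auto shape = sp(img1, face);
    matrix<rgb_pixel> chip;
    extract_image_chip(img1, get_face_chip_details(shape, 150, 0.25), chip);
    chips.push_back(std::move(chip));
}
std::vector<matrix<float, 0, 1>> descriptors = net(chips);
serialize("personX_descriptors.dat") << descriptors;

// Recognition: load the stored descriptors instead of recomputing them.
std::vector<matrix<float, 0, 1>> enrolled;
deserialize("personX_descriptors.dat") >> enrolled;

The loaded vectors can then be compared against the new image's descriptors with length(enrolled[i] - face_descriptors[j]) < threshold, exactly as in the question.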

Related

Visual Studio: C2027 Use of 'undefined-type' error when using concurrent_vector<matrix<rgb_pixel>> in an async function

I created some async functions that use the DLIB library, using std::future, so that these functions can be called directly from my socket server. But Visual Studio 2019 comes up with compile errors.
Problematic Code from 'scan_fhog_pyramid.h'
if (feats.size() > 1)
{
    typedef typename image_traits<image_type>::pixel_type pixel_type;
    array2d<pixel_type> temp1, temp2;
    pyr(img, temp1);
    fe(temp1, feats[1], cell_size, filter_rows_padding, filter_cols_padding);
    swap(temp1, temp2);
    for (unsigned long i = 2; i < feats.size(); ++i)
    {
        pyr(temp2, temp1);
        fe(temp1, feats[i], cell_size, filter_rows_padding, filter_cols_padding);
        swap(temp1, temp2);
    }
}
My async function
std::future<Mat> getProcessedFrame_task = std::async([&]()
{
    VideoCapture vid_cap(0, CAP_DSHOW);
    Mat frame;
    if (!vid_cap.isOpened())
    {
        cout << "Video capture device could not be initialised." << endl;
        cout << "Empty vector will be returned." << endl;
        return Mat();
    }
    // Grab frame from video capture device
    vid_cap >> frame;
    // Convert OpenCV frame to dlib frame
    dlib::cv_image<rgb_pixel> dlib_frame(frame);
    // Vector to store cropped face images
    concurrent_vector<matrix<rgb_pixel>> croped_faces;
    // Vector to store X Y coordinates
    concurrent_vector<pair<int, int>> xy_coords;
    // Vector to store width-height data
    concurrent_vector<pair<int, int>> width_height;
    // Run the face detector on the dlib frame
    auto detected_faces = this->f_detector(croped_faces);
    // Process all detected faces
    dlib::parallel_for(0, detected_faces.size(), [&](long i)
    {
        // Get individual face from detected faces
        auto face = detected_faces[i];
        // Get the shape details from the face
        packaged_task<dlib::full_object_detection()> sd_task(bind(sp, dlib_frame, face));
        auto sd_task_res = sd_task.get_future();
        sd_task();
        // Extract face image from frame
        matrix<rgb_pixel> face_img;
        extract_image_chip(dlib_frame, get_face_chip_details(sd_task_res.get(), 150, 0.25), face_img);
        croped_faces.push_back(face_img);
        // Add coordinates to vector
        xy_coords.push_back(pair<int, int>(face.left(), face.top()));
        // Add width-height to vector
        width_height.push_back(pair<int, int>(face.width(), face.height()));
    });
    // Vector to store face descriptors
    std::vector<matrix<float, 0, 1>> face_descriptors;
    // Only run if faces are found
    if (croped_faces.size() > 0)
    {
        // Get face descriptors for all cropped faces.
        face_descriptors = net(croped_faces);
        // Process each face descriptor
        dlib::parallel_for(0, face_descriptors.size(), [&](long i)
        {
            // Get individual descriptor
            auto f_descriptor = face_descriptors[i];
            // Get string label for face descriptor
            string face_name = m_trainer.Get_Face_Label(label_faceDescriptor, f_descriptor);
            // TODO: Add overlays to the OpenCV frame
            // Add time stamp to frame
            putText(frame, this->Get_Current_Date_Time(),
                    Point(50, vid_cap.get(4) - 10),
                    FONT_HERSHEY_COMPLEX,
                    1.0,
                    CV_RGB(0, 255, 0),
                    2);
            // Add face name to frame
            putText(frame,                                                   // target image
                    face_name,                                               // text
                    cv::Point(xy_coords[i].first, xy_coords[i].second + 25), // top-left position
                    cv::FONT_HERSHEY_DUPLEX,
                    1.0,
                    CV_RGB(0, 255, 0),                                       // font colour
                    2);
        });
    }
    return frame;
});
return getProcessedFrame_task.get();
C2027 : undefined type 'dlib::image_traits<image_type>'
C4430: missing type specifier
C2146: Syntax error missing ';' before identifier 'pixel_type'
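Looking at the snippet, the template errors most likely come from this call, which hands the detector the still-empty output vector instead of the frame:

auto detected_faces = this->f_detector(croped_faces);

That deduces image_type as concurrent_vector<matrix<rgb_pixel>>, for which dlib::image_traits has no specialization, matching the C2027 "undefined type" inside scan_fhog_pyramid.h. A sketch of the presumably intended call:

auto detected_faces = this->f_detector(dlib_frame); // detect on the dlib frame itself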

How do you search for images that have a non-white background using C++?

I wrote a program that uses the OpenCV and boost::filesystem libraries, and the program crops images to fit the object in the image. (Photoshop has already been used to replace most of the backgrounds with white.) However, I have thousands and thousands of pictures that I need to sort through. I already know how to use the filesystem library and have no issue traversing the system's directories. However, how do I detect images that have a non-white background (missed in the Photoshop process)? This incorrect crop is formatted to have a margin and a 1:1 aspect ratio, but it still has an odd grayish background. The image should end up looking like this correct crop. So, how do I determine whether an image has a background like the incorrect crop?
Could you try the code below?
(To test the code, you should create a directory c:/cropping and some subdirectories in it, and put some images in the directories you created.)
Hope it will be helpful.
#include "opencv2/imgproc.hpp"
#include "opencv2/highgui.hpp"
#include <iostream>

using namespace cv;
using namespace std;

vector<Rect> divideHW(Mat src, int dim, double threshold1, double threshold2)
{
    Mat gray, reduced, canny;
    if (src.channels() == 1)
    {
        gray = src;
    }
    if (src.channels() == 3)
    {
        Laplacian(src, gray, CV_8UC1);
        cvtColor(gray, gray, COLOR_BGR2GRAY);
        imshow("sobel", gray);
    }
    reduce(gray, reduced, dim, REDUCE_AVG);
    Canny(reduced, canny, threshold1, threshold2);
    vector<Point> pts;
    findNonZero(canny, pts);
    vector<Rect> rects;
    Rect rect(0, 0, gray.cols, gray.rows);
    if (!pts.size())
    {
        rects.push_back(rect);
    }
    int ref_x = 0;
    int ref_y = 0;
    for (size_t i = 0; i < pts.size(); i++)
    {
        if (dim)
        {
            rect.height = pts[i].y - ref_y;
            rects.push_back(rect);
            rect.y = pts[i].y;
            ref_y = rect.y;
            if (i == pts.size() - 1)
            {
                rect.height = gray.rows - pts[i].y;
                rects.push_back(rect);
            }
        }
        else
        {
            rect.width = pts[i].x - ref_x;
            rects.push_back(rect);
            rect.x = pts[i].x;
            ref_x = rect.x;
            if (i == pts.size() - 1)
            {
                rect.width = gray.cols - pts[i].x;
                rects.push_back(rect);
            }
        }
    }
    return rects;
}

int main(int argc, char** argv)
{
    int wait_time = 0; // set this value > 0 for not waiting
    vector<String> filenames;
    String folder = "c:/cropping/*.*"; // you can change this value or set it by argv[1]
    glob(folder, filenames, true);
    for (size_t i = 0; i < filenames.size(); ++i)
    {
        Mat src = imread(filenames[i]);
        if (src.data)
        {
            vector<Rect> rects = divideHW(src, 0, 0, 0);
            if (rects.size() < 3) continue;
            Rect border;
            border.x = rects[0].width;
            border.width = src.cols - rects[rects.size() - 1].width - border.x;
            rects = divideHW(src, 1, 0, 20);
            if (rects.size() < 3) continue;
            border.y = rects[0].height;
            border.height = src.rows - rects[rects.size() - 1].height - border.y;
            Mat cropped = src(border).clone();
            src(border).setTo(Scalar(255, 255, 255));
            Scalar _mean = mean(src);
            int mean_total = _mean[0] + _mean[1] + _mean[2];
            if (mean_total > 763)
            {
                imwrite(filenames[i] + ".jpg", cropped);
                imshow("cropped", cropped);
                waitKey(wait_time);
            }
        }
    }
    return 0;
}
I think you can compute the gradient of an ROI of your image (all rows in columns 10 to 15, for example).
Then you compute the energy of your gradient (the sum over all pixels of the gradient image).
If the energy is very low, you have a uniform background (you can't know the background colour with this algorithm); otherwise you have a textured background.
This is a first approach. You can find all the functions required to do that in OpenCV.
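A minimal sketch of this first approach (img is assumed to be the loaded image; the column range and energy threshold are illustrative, not tuned values):

cv::Mat roi = img(cv::Rect(10, 0, 5, img.rows)); // all rows, columns 10..14
cv::Mat gray, grad;
cv::cvtColor(roi, gray, cv::COLOR_BGR2GRAY);
cv::Sobel(gray, grad, CV_32F, 1, 0);             // horizontal gradient of the strip
double energy = cv::norm(grad, cv::NORM_L1);     // energy = sum of |gradient|
bool uniformBackground = (energy < 1000.0);      // low energy => uniform background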
A second approach:
If you are sure that your background is white, you can take the ROI of the first approach, then iterate over all pixels and check their colour. If there are more than "n" pixels with a colour different from (255,255,255), you can mark your image as "non-white background".
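A sketch of that check (the strip position and the pixel budget n are placeholders to tune; assumes a 3-channel 8-bit image):

bool nonWhiteBackground(const cv::Mat& src, int n = 100)
{
    cv::Mat strip = src(cv::Rect(10, 0, 5, src.rows)); // same strip as the first approach
    int bad = 0;
    for (int y = 0; y < strip.rows; ++y)
        for (int x = 0; x < strip.cols; ++x)
        {
            cv::Vec3b p = strip.at<cv::Vec3b>(y, x);
            if (p[0] != 255 || p[1] != 255 || p[2] != 255)
                ++bad; // pixel differs from pure white
        }
    return bad > n;
}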

OpenCV and C++ Fisherface algorithm always predicts wrong, even with exactly the same image as input, and reconstructs a black image

As the title says, I'm having a problem when working with OpenCV in C++, using a Windows Forms interface only to send input for the actions of the program. Basically, I save the treated images in a folder and recover them without any problem; I even checked about a hundred times whether there was any problem when recovering the images, and no, there is nothing wrong there. So I started testing the output image that the program was giving, and I realized that the reconstructed image taken from the FaceRecognizer was completely black. I tried to change the declarations but nothing changed. What is funny is that this same algorithm works in another project that I created as a test, where there is no interface, only the console, so the program runs in the main function. In this test project, when I reconstruct the image from the FaceRecognizer, it returns a normal face image and recognizes the correct face. But in the new project with the interface, I use the same image as input and it recognizes everything wrong and also reconstructs a black image. I think something is wrong with the FaceRecognizer model, but I have no idea what it is exactly. Can anyone help me?
Here is the code to recover the images from the folder:
if (traiModel)
{
    // read all folders of images and give each one a numeric id
    // read the images in each folder and add them to a vector of images
    // read the folder name and give it to all its images as the label info
    Preprocessed_Faces.clear();
    faceLabels.clear();
    Preprocessed_Faces_Names.clear();
    std::string folder = "TrainingFolder\\";
    std::vector<cv::string> foldernames;
    foldernames = get_all_files_names_within_folder(folder);
    std::vector<int> labels;
    for (int f = 0; f < foldernames.size(); f++)
    {
        std::string thisfoldername = folder + foldernames[f];
        std::vector<cv::string> filenames;
        cv::glob(thisfoldername, filenames);
        Preprocessed_Faces_Names.push_back(foldernames[f]);
        labels.push_back(f + 1);
        for (int fn = 0; fn < filenames.size(); fn++)
        {
            Preprocessed_Faces.push_back(cv::imread(filenames[fn]));
            //std::cout << filenames[fn] << std::endl;
            faceLabels.push_back(f + 1);
        }
    }
    cv::imwrite("Traintest.PNG", Preprocessed_Faces[0]);
    std::map<int, std::string> map1;
    for (int i = 0; i < Preprocessed_Faces_Names.size(); i++)
    {
        map1.insert(std::pair<int, std::string>(labels[i], Preprocessed_Faces_Names[i]));
        std::cout << Preprocessed_Faces_Names[i] << std::endl;
    }
    model->setLabelsInfo(map1);
    model->train(Preprocessed_Faces, faceLabels);
    traiModel = false;
}
Here is the code to identify the face; it first tries to reconstruct the face and then identifies it:
if (identif)
{
    // identify the current face by looking in the database
    // Prediction validation
    // Get some required data from the FaceRecognizer model.
    cv::Mat eigenvectors = model->get<cv::Mat>("eigenvectors");
    cv::Mat averageFaceRow = model->get<cv::Mat>("mean");
    // Project the input image onto the eigenspace.
    cv::Mat projection = cv::subspaceProject(eigenvectors, averageFaceRow, filtered.reshape(1, 1));
    // Generate the reconstructed face back from the eigenspace.
    cv::Mat reconstructionRow = cv::subspaceReconstruct(eigenvectors, averageFaceRow, projection);
    // Make it a rectangular shaped image instead of a single row.
    cv::Mat reconstructionMat = reconstructionRow.reshape(1, filtered.rows);
    // Convert the floating-point pixels to regular 8-bit uchar.
    cv::Mat reconstructedFace = cv::Mat(reconstructionMat.size(), CV_8U);
    reconstructionMat.convertTo(reconstructedFace, CV_8U, 1, 0);
    cv::imwrite("Teste.PNG", filtered);
    cv::imwrite("Teste2.PNG", reconstructedFace);
    int identity = model->predict(filtered);
    double similarity = getSimilarity(filtered, reconstructedFace);
    if (similarity > .7f)
    {
        //identity = -1; // -1 means that the face is not registered in the trainer
    }
    std::cout << "This is: " << identity << " and: " << model->getLabelInfo(identity) << std::endl;
    identif = false;
}
Here is the code where the complete face identification loop runs, along with the declaration of the FaceRecognizer; the two blocks above are contained in it:
void RecognitionAlgo::Running()
{
    // Create the cascade classifier object used for the face detection
    cv::CascadeClassifier face_cascade;
    // Use the haarcascade_frontalface_alt.xml library
    if (!face_cascade.load("haarcascade_frontalface_alt.xml"))
    {
        Log("Error at face Cascade Load!");
    }
    // Set up image files used in the capture process
    cv::Mat captureFrame;
    cv::Mat grayscaleFrame;
    cv::Mat shrinkFrame;
    // Create a vector to store the faces found
    std::vector<cv::Rect> faces;
    std::vector<cv::Mat> Preprocessed_Faces;
    std::vector<cv::string> Preprocessed_Faces_Names;
    cv::string myname;
    std::vector<int> faceLabels;
    // Init the face recognizer
    cv::initModule_contrib();
    std::string facerecAlgorithm = "FaceRecognizer.Fisherfaces";
    cv::Ptr<cv::FaceRecognizer> model;
    // Use OpenCV's new FaceRecognizer from the "contrib" module
    model = cv::Algorithm::create<cv::FaceRecognizer>(facerecAlgorithm);
    if (model.empty())
    {
        std::cerr << "ERROR: FaceRecognizer" << std::endl;
    }
    try
    {
        //model->load("TrainedModel.xml");
    }
    catch (const std::exception&)
    {
    }
    // Create a loop to capture and find faces
    while (true)
    {
        // Capture a new image frame
        if (swCamera)
        {
            captureDevice >> captureFrame;
        }
        else
        {
            captureFrame = cv::imread("face3.PNG");
        }
        // Shrink the captured image, convert to grayscale and equalize
        cv::cvtColor(captureFrame, grayscaleFrame, CV_BGR2GRAY);
        cv::equalizeHist(grayscaleFrame, grayscaleFrame);
        shrinkFrame = ShrinkingImage(grayscaleFrame);
        // Find faces on the shrunk image (because it's faster) and store them in the vector array
        face_cascade.detectMultiScale(shrinkFrame, faces, 1.1, 4, CV_HAAR_FIND_BIGGEST_OBJECT | CV_HAAR_SCALE_IMAGE, cv::Size(30, 30));
        if (faces.size() > 0)
            faces = EnlargeResults(captureFrame, faces);
        // Draw a rectangle for all found faces in the vector array on the original image
        for (int i = 0; i < faces.size(); i++)
        {
            cv::Point pt1(faces[i].x + faces[i].width, faces[i].y + faces[i].height);
            cv::Point pt2(faces[i].x, faces[i].y);
            cv::Mat theFace = grayscaleFrame(faces[i]);
            // try to treat the face by identifying the eyes; if eye detection fails, it returns the face
            cv::Mat filtered = TreatmentForFace(theFace);
            // Collecting faces and learning from them.
            if (starTraining && TrainName != "")
            {
                if (colFace)
                {
                    Preprocessed_Faces.push_back(filtered);
                    if (myname == "")
                    {
                        myname = TrainName;
                    }
                    colFace = false;
                }
            }
            else
            {
                if (!starTraining && Preprocessed_Faces.size() > 0)
                {
                    // create the person folder
                    std::string command = "mkdir ";
                    std::string foldercom = "TrainingFolder\\" + myname;
                    command += foldercom;
                    system(command.c_str());
                    // create a string to access the recently created folder
                    std::string foldername = foldercom.substr(0, foldercom.size() - (myname.size() + 1));
                    foldername.append("/");
                    foldername.append(myname);
                    foldername.append("/");
                    foldername.append(myname);
                    // save the collected faces in the folder
                    for (int i = 0; i < Preprocessed_Faces.size(); i++)
                    {
                        std::ostringstream oss;
                        oss << i;
                        cv::imwrite(foldername + oss.str() + ".PNG", Preprocessed_Faces[i]);
                    }
                    myname = "";
                    Preprocessed_Faces.clear();
                }
            }
            if (traiModel)
            {
                // read all folders of images and give each one a numeric id
                // read the images in each folder and add them to a vector of images
                // read the folder name and give it to all its images as the label info
                Preprocessed_Faces.clear();
                faceLabels.clear();
                Preprocessed_Faces_Names.clear();
                std::string folder = "TrainingFolder\\";
                std::vector<cv::string> foldernames;
                foldernames = get_all_files_names_within_folder(folder);
                std::vector<int> labels;
                for (int f = 0; f < foldernames.size(); f++)
                {
                    std::string thisfoldername = folder + foldernames[f];
                    std::vector<cv::string> filenames;
                    cv::glob(thisfoldername, filenames);
                    Preprocessed_Faces_Names.push_back(foldernames[f]);
                    labels.push_back(f + 1);
                    for (int fn = 0; fn < filenames.size(); fn++)
                    {
                        Preprocessed_Faces.push_back(cv::imread(filenames[fn]));
                        //std::cout << filenames[fn] << std::endl;
                        faceLabels.push_back(f + 1);
                    }
                }
                cv::imwrite("Traintest.PNG", Preprocessed_Faces[0]);
                std::map<int, std::string> map1;
                for (int i = 0; i < Preprocessed_Faces_Names.size(); i++)
                {
                    map1.insert(std::pair<int, std::string>(labels[i], Preprocessed_Faces_Names[i]));
                    std::cout << Preprocessed_Faces_Names[i] << std::endl;
                }
                model->setLabelsInfo(map1);
                model->train(Preprocessed_Faces, faceLabels);
                traiModel = false;
            }
            if (identif)
            {
                // identify the current face by looking in the database
                // Prediction validation
                // Get some required data from the FaceRecognizer model.
                cv::Mat eigenvectors = model->get<cv::Mat>("eigenvectors");
                cv::Mat averageFaceRow = model->get<cv::Mat>("mean");
                // Project the input image onto the eigenspace.
                cv::Mat projection = cv::subspaceProject(eigenvectors, averageFaceRow, filtered.reshape(1, 1));
                // Generate the reconstructed face back from the eigenspace.
                cv::Mat reconstructionRow = cv::subspaceReconstruct(eigenvectors, averageFaceRow, projection);
                // Make it a rectangular shaped image instead of a single row.
                cv::Mat reconstructionMat = reconstructionRow.reshape(1, filtered.rows);
                // Convert the floating-point pixels to regular 8-bit uchar.
                cv::Mat reconstructedFace = cv::Mat(reconstructionMat.size(), CV_8U);
                reconstructionMat.convertTo(reconstructedFace, CV_8U, 1, 0);
                cv::imwrite("Teste.PNG", filtered);
                cv::imwrite("Teste2.PNG", reconstructedFace);
                int identity = model->predict(filtered);
                double similarity = getSimilarity(filtered, reconstructedFace);
                if (similarity > .7f)
                {
                    //identity = -1; // -1 means that the face is not registered in the trainer
                }
                std::cout << "This is: " << identity << " and: " << model->getLabelInfo(identity) << std::endl;
                identif = false;
            }
        }
        // Print the output
        cv::resize(captureFrame, captureFrame, cv::Size(800, 600));
        cv::imshow("outputCapture", captureFrame);
        // pause for 33ms
        cv::waitKey(33);
    }
}
The OpenCV FaceRecognizer is implemented to be used with grayscale images. The problem is where you are reading the images from disk using cv::imread():
cv::imread(filenames[fn]);
Even though you might have grayscale images saved, imread() loads 3-channel images by default. Specify CV_LOAD_IMAGE_GRAYSCALE as shown below:
cv::imread(filenames[fn], CV_LOAD_IMAGE_GRAYSCALE);
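If some faces are already held in memory as colour Mats elsewhere in the pipeline, the equivalent in-memory conversion is a cvtColor() pass (a sketch; colorFace is a placeholder name):

cv::Mat gray;
cv::cvtColor(colorFace, gray, CV_BGR2GRAY); // colorFace: hypothetical 3-channel input
Preprocessed_Faces.push_back(gray);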

How to run findContours() on meanShiftSegmentation() output?

I'm trying to rewrite my very slow naive segmentation using floodFill to something faster. I ruled out meanShiftFiltering a year ago because of the difficulty in labelling the colours and then finding their contours.
The current version of opencv seems to have a fast new function that labels segments using mean shift: gpu::meanShiftSegmentation(). It produces images like the following:
(source: ekran.org)
So this looks to me pretty close to being able to generate contours. How can I run findContours() to generate segments?
It seems to me this would be done by extracting the labelled colours from the image, and then testing which pixel values in the image match each label colour, to make a boolean image suitable for findContours(). This is what I have done in the following (but it's a bit slow, and it strikes me that there should be a better way):
Mat image = imread("test.png");
...
// gpu operations on image resulting in gpuOpen
...
// Mean shift
TermCriteria iterations = TermCriteria(CV_TERMCRIT_ITER, 2, 0);
gpu::meanShiftSegmentation(gpuOpen, segments, 10, 20, 300, iterations);
// convert to greyscale (HSV image)
vector<Mat> channels;
split(segments, channels);
// get labels from histogram of image.
int size = 256;
labels = Mat(256, 1, CV_32SC1);
calcHist(&channels.at(2), 1, 0, Mat(), labels, 1, &size, 0);
// Loop through hist bins
for (int i = 0; i < 256; i++) {
    float count = labels.at<float>(i);
    // Does this bin represent a label in the image?
    if (count > 0) {
        // find areas of the image that match this label and run findContours on the result.
        Mat label = Mat(channels.at(2).rows, channels.at(2).cols, CV_8UC1, Scalar::all(i)); // image filled with label colour.
        Mat boolImage = (channels.at(2) == label); // which pixels in the labelled image are identical to this label?
        vector<vector<Point>> labelContours;
        findContours(boolImage, labelContours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
        // Loop through contours.
        for (int idx = 0; idx < labelContours.size(); idx++) {
            // get bounds for this contour.
            bounds = boundingRect(labelContours[idx]);
            // create ROI for bounds to extract this region
            Mat patchROI = image(bounds);
            Mat maskROI = boolImage(bounds);
        }
    }
}
Is this the best approach, or is there a better way to get the label colours? It seems it would be logical for meanShiftSegmentation() to provide this information (a vector of colour values, or a vector of masks for each label, etc.).
Thank you.
The following is another way of doing this without throwing away the colour information in the meanShiftSegmentation() results. I did not compare the two for performance.
// Loop through the whole image, pixel by pixel, and use the colour to index an array of bools indicating presence.
vector<Scalar> colours;
vector<Scalar>::iterator colourIter;
vector< vector< vector<bool> > > colourSpace;
vector< vector< vector<bool> > >::iterator colourSpaceBIter;
vector< vector<bool> >::iterator colourSpaceGIter;
vector<bool>::iterator colourSpaceRIter;
// Initialize 3D vector
colourSpace.resize(256);
for (int i = 0; i < 256; i++) {
    colourSpace[i].resize(256);
    for (int j = 0; j < 256; j++) {
        colourSpace[i][j].resize(256);
    }
}
// Loop through pixels in the image (should be fastish; look into LUTs for faster)
uchar r, g, b;
for (int i = 0; i < segments.rows; i++)
{
    Vec3b* pixel = segments.ptr<Vec3b>(i); // point to first pixel in row
    for (int j = 0; j < segments.cols; j++)
    {
        b = pixel[j][0];
        g = pixel[j][1];
        r = pixel[j][2];
        colourSpace[b][g][r] = true; // this colour is in the image.
        //cout << "BGR: " << int(b) << " " << int(g) << " " << int(r) << endl;
    }
}
// Get all the unique colours from colourSpace
// loop through colourSpace
int bi = 0;
for (colourSpaceBIter = colourSpace.begin(); colourSpaceBIter != colourSpace.end(); colourSpaceBIter++) {
    int gi = 0;
    for (colourSpaceGIter = colourSpaceBIter->begin(); colourSpaceGIter != colourSpaceBIter->end(); colourSpaceGIter++) {
        int ri = 0;
        for (colourSpaceRIter = colourSpaceGIter->begin(); colourSpaceRIter != colourSpaceGIter->end(); colourSpaceRIter++) {
            if (*colourSpaceRIter)
                colours.push_back(Scalar(bi, gi, ri));
            ri++;
        }
        gi++;
    }
    bi++;
}
// For each colour
int segmentCount = 0;
for (colourIter = colours.begin(); colourIter != colours.end(); colourIter++) {
    Mat label = Mat(segments.rows, segments.cols, CV_8UC3, *colourIter); // image filled with label colour.
    Mat boolImage = Mat(segments.rows, segments.cols, CV_8UC3);
    inRange(segments, *colourIter, *colourIter, boolImage); // which pixels in the labelled image are identical to this label?
    vector<vector<Point> > labelContours;
    findContours(boolImage, labelContours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
    // Loop through contours.
    for (int idx = 0; idx < labelContours.size(); idx++) {
        // get bounds for this contour.
        Rect bounds = boundingRect(labelContours[idx]);
        float area = contourArea(labelContours[idx]);
        // Draw this contour on a new blank image
        Mat maskImage = Mat::zeros(boolImage.rows, boolImage.cols, boolImage.type());
        drawContours(maskImage, labelContours, idx, Scalar(255, 255, 255), CV_FILLED);
        Mat patchROI = frame(bounds);
        Mat maskROI = maskImage(bounds);
    }
    segmentCount++;
}
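A lighter-weight variant of the colour-collection step (a sketch only; I have not compared it for speed) is to pack each BGR triple into an int and collect the unique values with a std::set, instead of allocating the 256x256x256 boolean cube:

#include <set>
// Collect the unique colours of the segmented image.
std::set<int> unique_colours;
for (int i = 0; i < segments.rows; i++)
{
    const Vec3b* pixel = segments.ptr<Vec3b>(i);
    for (int j = 0; j < segments.cols; j++)
        unique_colours.insert((pixel[j][0] << 16) | (pixel[j][1] << 8) | pixel[j][2]);
}
// Each key unpacks back to a Scalar(b, g, r) for the inRange() call above.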

Getting error using SVM with SURF

Below is my code, which runs fine, but after a long processing time it shows me a runtime error:
// Initialize constant values
const int nb_cars = files.size();
const int not_cars = files_no.size();
const int num_img = nb_cars + not_cars; // Get the number of images
// Initialize your training set.
cv::Mat training_mat(num_img, dictionarySize, CV_32FC1);
cv::Mat labels(0, 1, CV_32FC1);
std::vector<string> all_names;
all_names.assign(files.begin(), files.end());
all_names.insert(all_names.end(), files_no.begin(), files_no.end());
// Load images and add them to the training set
int count = 0;
vector<string>::const_iterator i;
string Dir;
for (i = all_names.begin(); i != all_names.end(); ++i)
{
    Dir = ((count < files.size()) ? YourImagesDirectory : YourImagesDirectory_2);
    Mat row_img = cv::imread(Dir + *i, 0);
    detector.detect(row_img, keypoints);
    RetainBestKeypoints(keypoints, 20); // retain top key points
    extractor->compute(row_img, keypoints, descriptors_1);
    //uncluster.push_back(descriptors_1);
    descriptors.reshape(1, 1);
    bow.add(descriptors_1);
    ++count;
}
int count_2 = 0;
vector<string>::const_iterator k;
Mat vocabulary = bow.cluster();
dextract.setVocabulary(vocabulary);
for (k = all_names.begin(); k != all_names.end(); ++k)
{
    Dir = ((count_2 < files.size()) ? YourImagesDirectory : YourImagesDirectory_2);
    row_img = cv::imread(Dir + *k, 0);
    detector.detect(row_img, keypoints);
    RetainBestKeypoints(keypoints, 20);
    dextract.compute(row_img, keypoints, descriptors_1);
    descriptors_1.reshape(1, 1);
    training_mat.push_back(descriptors_1);
    labels.at<float>(count_2, 0) = (count_2 < nb_cars) ? 1 : -1;
    ++count_2;
}
Error:
OpenCv Error : Formats of input argument do not match() in unknown function , file ..\..\..\src\opencv\modules\core\src\matrix.cpp, line 652
I reshaped descriptor_1 into a row in the second loop for the SVM, but that did not solve the error.
I think you are trying to cluster with fewer features than the number of classes.
You can take more images, or more than 10 descriptors from each image.
As I found out after three days, my error is in the labeling; I got the error when labeling the images. And yes, the answer above is also relevant in that using too few images can cause this error, but in my case that was not the reason. When I started checking line by line, the error starts from here:
labels.at<float>(count_2, 0) = (count_2 < nb_cars) ? 1 : -1;
because of the line:
Mat labels(0, 1, CV_32FC1);
instead of:
Mat labels(num_img, 1, CV_32FC1);
and I should use:
my_img.convertTo(training_mat.row(count_2), CV_32FC1);
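Worth noting, too, that cv::Mat::reshape() returns a new header rather than modifying the matrix in place, so the reshape calls in the question above discard their result. A minimal sketch combining the fixes, reusing the question's variable names (the sizes are assumptions carried over from the question):

// Pre-size the label matrix so labels.at<float>(count_2, 0) is a valid write.
cv::Mat labels(num_img, 1, CV_32FC1);
// ... inside the second loop ...
descriptors_1 = descriptors_1.reshape(1, 1);                  // keep the reshaped header
descriptors_1.convertTo(training_mat.row(count_2), CV_32FC1); // write into the row
labels.at<float>(count_2, 0) = (count_2 < nb_cars) ? 1.f : -1.f;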