Below is my code, which runs fine, but after a long processing run it shows me a runtime error.
// Initialize constant values
const int nb_cars = files.size();
const int not_cars = files_no.size();
const int num_img = nb_cars + not_cars; // Get the number of images
// Initialize your training set.
cv::Mat training_mat(num_img,dictionarySize,CV_32FC1);
cv::Mat labels(0,1,CV_32FC1);
std::vector<string> all_names;
all_names.assign(files.begin(),files.end());
all_names.insert(all_names.end(), files_no.begin(), files_no.end());
// Load image and add them to the training set
int count = 0;
vector<string>::const_iterator i;
string Dir;
for (i = all_names.begin(); i != all_names.end(); ++i)
{
    Dir = ((count < files.size()) ? YourImagesDirectory : YourImagesDirectory_2);
    Mat row_img = cv::imread(Dir + *i, 0);
    detector.detect(row_img, keypoints);
    RetainBestKeypoints(keypoints, 20); // retain top 20 keypoints
    extractor->compute(row_img, keypoints, descriptors_1);
    //uncluster.push_back(descriptors_1);
    descriptors.reshape(1,1);
    bow.add(descriptors_1);
    ++count;
}
int count_2=0;
vector<string>::const_iterator k;
Mat vocabulary = bow.cluster();
dextract.setVocabulary(vocabulary);
for (k = all_names.begin(); k != all_names.end(); ++k)
{
    Dir = ((count_2 < files.size()) ? YourImagesDirectory : YourImagesDirectory_2);
    row_img = cv::imread(Dir + *k, 0);
    detector.detect(row_img, keypoints);
    RetainBestKeypoints(keypoints, 20);
    dextract.compute(row_img, keypoints, descriptors_1);
    descriptors_1.reshape(1,1);
    training_mat.push_back(descriptors_1);
    labels.at<float>(count_2, 0) = (count_2 < nb_cars) ? 1 : -1;
    ++count_2;
}
Error :
OpenCv Error : Formats of input argument do not match() in unknown function , file ..\..\..\src\opencv\modules\core\src\matrix.cpp, line 652
I reshape descriptors_1 into a single row in the second loop for the SVM, but the error is not resolved.
I think you are trying to cluster with fewer features than the number of clusters.
You can take more images, or more than 10 descriptors from each image.
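For reference, RetainBestKeypoints in the question is not a stock OpenCV function; a minimal sketch of how it might be implemented (an assumption on my part), using cv::KeyPointsFilter from features2d:
#include <opencv2/features2d/features2d.hpp>

// Sketch: keep only the n keypoints with the strongest detector response.
void RetainBestKeypoints(std::vector<cv::KeyPoint>& keypoints, int n)
{
    cv::KeyPointsFilter::retainBest(keypoints, n);
}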
After three days I found out that my error is in the labeling: when I label the images I get the error. Yes, the answer above is also relevant, since using too few images can also cause the error, but in my case that was not the reason. When I started checking line by line, the error starts here:
labels.at< float >(count_2, 0) = (count_2<nb_cars)?1:-1;
This happens because of the line:
Mat labels(0,1,CV_32FC1);
which should instead be:
Mat labels(num_img,1,CV_32FC1);
and I should use
my_img.convertTo( training_mat.row(count_2), CV_32FC1 );
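Putting both fixes together, the corrected second loop would look roughly like this (a sketch assembled from the fragments above, reusing the question's variable names):
cv::Mat labels(num_img, 1, CV_32FC1); // pre-sized, so .at<float>() stays in bounds
int count_2 = 0;
for (k = all_names.begin(); k != all_names.end(); ++k)
{
    Dir = ((count_2 < files.size()) ? YourImagesDirectory : YourImagesDirectory_2);
    Mat row_img = cv::imread(Dir + *k, 0);
    detector.detect(row_img, keypoints);
    RetainBestKeypoints(keypoints, 20);
    dextract.compute(row_img, keypoints, descriptors_1);
    Mat my_img = descriptors_1.reshape(1, 1); // reshape() returns a new Mat; assign it
    my_img.convertTo(training_mat.row(count_2), CV_32FC1);
    labels.at<float>(count_2, 0) = (count_2 < nb_cars) ? 1.0f : -1.0f;
    ++count_2;
}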
Related
I want to use FER+ Emotion Recognition in my project:
string modelPath = "../data/model.onnx";
Mat frame = imread("../data/Smile-Mood.jpg");
Mat gray;
cvtColor(frame, gray, COLOR_BGR2GRAY);
float scale = 1.0;
int inHeight = 64;
int inWidth = 64;
bool swapRB = false;
//Read and initialize network
cv::dnn::Net net = cv::dnn::readNetFromONNX(modelPath);
Mat blob;
//Create a 4D blob from a frame
cv::dnn::blobFromImage(gray, blob, scale, Size(inWidth, inHeight), swapRB, false);
//Set input blob
net.setInput(blob);
//Make forward pass
Mat prob = net.forward();
I get this error on the last line:
Unhandled exception at 0x00007FFDA25AA799 in FERPlusDNNOpenCV.exe: Microsoft C++ exception: cv::Exception at memory location 0x00000063839BE050. occurred
OpenCV(4.1.0) Error: Assertion failed (ngroups > 0 && inpCn % ngroups == 0 && outCn % ngroups == 0) in cv::dnn::ConvolutionLayerImpl::getMemoryShapes
How can I fix that?
UPDATE ACCORDING TO @Micka's COMMENT:
Mat dstImg = Mat(64, 64, CV_32FC1);
resize(gray, dstImg, Size(64, 64));
std::vector<float> array;
if (dstImg.isContinuous())
    array.assign(dstImg.data, dstImg.data + dstImg.total());
else {
    for (int i = 0; i < dstImg.rows; ++i) {
        array.insert(array.end(), dstImg.ptr<float>(i), dstImg.ptr<float>(i) + dstImg.cols);
    }
}
//Set input blob
net.setInput(array);
//Make forward pass
Mat prob = net.forward();
I've got this error: vector subscript out of range.
Should I use another function instead of setInput?
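For comparison, a minimal sketch of feeding a single-channel 64x64 image to the network as a proper 4D blob (assuming the FER+ model expects a 1x1x64x64 CV_32F input):
Mat dstImg;
resize(gray, dstImg, Size(64, 64));        // dstImg inherits CV_8U from gray
dstImg.convertTo(dstImg, CV_32F);          // convert to float before blobbing
Mat blob = cv::dnn::blobFromImage(dstImg); // yields a 1x1x64x64 blob
net.setInput(blob);
Mat prob = net.forward();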
I am writing dlib code to do face recognition on a 1-to-1 basis.
I followed the code sample in the dlib examples and did the following:
std::vector<matrix<rgb_pixel>> faces;
for (auto face : detector(img1))
{
    auto shape = sp(img1, face);
    matrix<rgb_pixel> face_chip;
    extract_image_chip(img1, get_face_chip_details(shape, 150, 0.25), face_chip);
    faces.push_back(move(face_chip));
}
This is for the first image; then I did the same for the second image:
for (auto face : detector(img2))
{
    auto shape = sp(img2, face);
    matrix<rgb_pixel> face_chip;
    extract_image_chip(img2, get_face_chip_details(shape, 150, 0.25), face_chip);
    faces.push_back(move(face_chip));
}
Then I continue as per the mentioned link:
std::vector<matrix<float, 0, 1>> face_descriptors = net(faces);
std::vector<sample_pair> edges;
for (size_t i = 0; i < face_descriptors.size(); ++i)
{
    for (size_t j = i; j < face_descriptors.size(); ++j)
    {
        if (length(face_descriptors[i] - face_descriptors[j]) < threshold)
            edges.push_back(sample_pair(i, j));
    }
}
std::vector<unsigned long> labels;
const int num_clusters = chinese_whispers(edges, labels);
//etc
And now comes my question. img1 is an image already available to the code, read when I need to match a specific person (i.e. if I want to match personX, img1 is read using
load_image(img1, "personX.jpg");
Instead of having the image saved, I was trying to save the features and load them, to reduce the time spent on feature extraction. So what I did is move the first for loop into a different, enrollment-like function and make it something like this:
std::vector<matrix<rgb_pixel>> faces;
for (auto face : detector(img1))
{
    auto shape = sp(img1, face);
    matrix<rgb_pixel> face_chip;
    extract_image_chip(img1, get_face_chip_details(shape, 150, 0.25), face_chip);
    serialize("personX.dat") << face_chip;
}
Then at recognition time, instead of the loop, I used
matrix<rgb_pixel> face_chip;
deserialize("personX.dat")>>face_chip;
faces.push_back(move(face_chip));
The rest of the code from the extraction of img2 onward remained the same. The code compiled, but during execution, when I reach the recognition, I end up with the following error:
**************************** FATAL ERROR DETECTED ****************************
Error detected at line 216.
Error detected in file /usr/local/include/dlib/dnn/input.h.
Error detected in function void dlib::input_rgb_image_sized<NR, NC>::to_tensor(forward_iterator, forward_iterator, dlib::resizable_tensor&) const [with forward_iterator = __gnu_cxx::__normal_iterator<dlib::matrix<dlib::rgb_pixel>*, std::vector<dlib::matrix<dlib::rgb_pixel> > >; long unsigned int NR = 150ul; long unsigned int NC = 150ul].
Failing expression was i->nr()==NR && i->nc()==NC input_rgb_image_sized::to_tensor()
All input images must have 150 rows and 150 columns, but we got one with 0 rows and 0 columns.
Is there something wrong with the serialization/deserialization, or should I write the features to a file using another method?
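One way to sanity-check the round trip before pushing the chip into faces (a debugging sketch, not from the original post):
matrix<rgb_pixel> face_chip;
deserialize("personX.dat") >> face_chip;
// to_tensor() requires 150x150 chips, so verify the size right after loading
DLIB_CASSERT(face_chip.nr() == 150 && face_chip.nc() == 150,
             "deserialized chip is " << face_chip.nr() << "x" << face_chip.nc());
faces.push_back(move(face_chip));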
Code for the full function:
try
{
    load_image(img1, check_image);
}
catch (...)
{
    cout << "Name: " << uname << " doesn't exist" << endl;
    return;
}
else
{
    QElapsedTimer timer;
    timer.start();
    dlib::assign_image(img2, dlib::cv_image<bgr_pixel>(colorImage));
    std::vector<matrix<rgb_pixel>> faces;
    for (auto face : detector(img1))
    {
        auto shape = sp(img1, face);
        matrix<rgb_pixel> face_chip;
        extract_image_chip(img1, get_face_chip_details(shape, 150, 0.25), face_chip);
        faces.push_back(move(face_chip));
        // serialize("out.dat") << face_chip; // used when I don't need to read the image
    }
    // matrix<rgb_pixel> face_chip;          // used when I don't need to read the image
    // deserialize("out.dat") >> face_chip;  // used when I don't need to read the image
    // faces.push_back(move(face_chip));     // used when I don't need to read the image
    cout << "Time to extract features for enrolled image: " << timer.elapsed() << endl;
    timer.restart();
    for (auto face : detector(img2))
    {
        auto shape = sp(img2, face);
        matrix<rgb_pixel> face_chip;
        extract_image_chip(img2, get_face_chip_details(shape, 150, 0.25), face_chip);
        faces.push_back(move(face_chip));
    }
    cout << "Time to extract features for new image: " << timer.elapsed() << endl;
    timer.restart();
    if (faces.size() < 2)
    {
        cout << "No Face" << endl;
    }
    else
    {
        std::vector<matrix<float, 0, 1>> face_descriptors = net(faces);
        std::vector<sample_pair> edges;
        for (size_t i = 0; i < face_descriptors.size(); ++i)
        {
            for (size_t j = i; j < face_descriptors.size(); ++j)
            {
                if (length(face_descriptors[i] - face_descriptors[j]) < threshold)
                    edges.push_back(sample_pair(i, j));
            }
        }
        std::vector<unsigned long> labels;
        const int num_clusters = chinese_whispers(edges, labels);
        if (num_clusters == 1)
        {
            cout << "Recognized" << endl;
        }
        else
        {
            cout << "Faces don't match";
        }
    }
    cout << "Needed time is: " << timer.elapsed() << " ms" << endl;
}
Instead of serializing the matrix, I serialized the output vector (faces):
serialize("personX.dat")<<faces;
Then, when doing the recognition, I deserialized the .dat file and used the resulting vector:
std::vector<matrix<rgb_pixel>> faces;
deserialize("out.dat") >> faces;
for (auto face : detector(img2))
{
    auto shape = sp(img2, face);
    matrix<rgb_pixel> face_chip;
    extract_image_chip(img2, get_face_chip_details(shape, 150, 0.25), face_chip);
    faces.push_back(move(face_chip));
}
And I continued as mentioned in the question.
I don't know if this is the best way to do it... but it worked.
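A further step in the same direction (my suggestion, not part of the original answer): since net(faces) is the expensive call, serializing the computed descriptor rather than the image chips skips the network entirely at match time. The file name here is illustrative:
// enrollment: compute and store the descriptor once
std::vector<matrix<float, 0, 1>> descs = net(faces);
serialize("personX_desc.dat") << descs[0];

// recognition: load it back instead of recomputing it
matrix<float, 0, 1> enrolled;
deserialize("personX_desc.dat") >> enrolled;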
I have code which gets ORB keypoints and descriptors of images and then trains a bag of words using BOWKMeansTrainer. The purpose of this code is to cluster photos using k-means. I don't have any problem using ORB.
Here is my problem:
I saw that OpenCV provides a VGG descriptor; you can find it HERE. I want to use the VGG descriptor instead of ORB. Based on the documentation, VGG doesn't have any function to detect keypoints, so I used ORB to detect keypoints and then used those keypoints to compute VGG descriptors:
Ptr<ORB> detector = ORB::create();
Ptr<cv::xfeatures2d::VGG> detectorVGG = cv::xfeatures2d::VGG::create(VGG::VGG_120, 1.4f, true, true, 0.75f, true);
detector->detect(grayFrame, kp);
detectorVGG->compute(grayFrame, kp, descriptors);
This also works, and I can train the bag of words and cluster, similar to the way I did with ORB. However, when I want to test the bag of words, I get this error:
error: (-215) _queryDescriptors.type() == trainDescType in function knnMatchImpl Aborttrap: 6
I tried to change the type of the query descriptor Mat and the descriptor Mat, but I cannot solve it. In the best case, the error changes to this:
(-215) (type == CV_8U && dtype == CV_32S) || dtype == CV_32F in function batchDistance
I am not sure if the problem is a small mistake in the types, or whether it is because of using ORB keypoints and VGG descriptors at the same time.
Part of my code is here:
vector<Mat> makeDescriptor(vector<Mat> frames)
{
    int frameSize = frames.size();
    vector<Mat> characteristics;
    for (int i = 0; i < frameSize; i++)
    {
        vector<KeyPoint> kp;
        Mat descriptors;
        Ptr<cv::xfeatures2d::VGG> detectorVGG = cv::xfeatures2d::VGG::create();
        Ptr<ORB> detectorORB = ORB::create();
        Mat frame = frames[i];
        if (frame.empty()) {
            cout << "Frame empty" << "\n";
            continue;
        }
        Mat grayFrame;
        try
        {
            cvtColor(frame, grayFrame, CV_RGB2GRAY);
        }
        catch (Exception& e)
        {
            continue;
        }
        detectorORB->detect(grayFrame, kp);
        detectorVGG->compute(grayFrame, kp, descriptors);
        characteristics.push_back(descriptors);
    }
    return characteristics;
}

Mat makeDict(vector<Mat> characteristics, int _nCluster)
{
    BOWKMeansTrainer bow(_nCluster);
    for (int j = 0; j < characteristics.size(); j++)
    {
        Mat descr = characteristics[j];
        if (!descr.empty())
        {
            bow.add(descr);
        }
    }
    Mat voc = bow.cluster();
    return voc;
}

BOWImgDescriptorExtractor makeBow(Mat dict)
{
    Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce-Hamming");
    Ptr<DescriptorExtractor> extractor = VGG::create();
    BOWImgDescriptorExtractor bowDE(extractor, matcher);
    Mat voc;
    dict.convertTo(voc, CV_32F);
    bowDE.setVocabulary(voc);
    return bowDE;
}

void testBow(BOWImgDescriptorExtractor bowDE, vector<Mat> frames)
{
    Mat features;
    vector<vector<KeyPoint>> keypoints = makeORBKeyPoints(frames);
    for (int i = 0; i < frames.size(); i++)
    {
        Mat bowDesc;
        if (!keypoints[i].empty())
        {
            cout << "inside if" << '\n';
            Mat frame;
            frames[i].convertTo(frame, CV_8U);
            bowDE.compute(frame, keypoints[i], bowDesc); //This Line
            bowDesc.convertTo(bowDesc, CV_32F);
        }
        features.push_back(bowDesc);
    }
}
The runtime error is happening at this line:
bowDE.compute(frame, keypoints[i], bowDesc); //This Line
I would appreciate it if someone could help me find a solution.
Thanks,
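One thing worth checking (a guess based on the assertion text, not a confirmed fix): VGG produces CV_32F descriptors, while the "BruteForce-Hamming" matcher created in makeBow only handles binary CV_8U descriptors such as ORB's. Swapping in a float-friendly matcher would at least remove that type mismatch:
// Hamming distance applies to binary descriptors only; use L2 for VGG's float output
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce"); // or "FlannBased"
Ptr<DescriptorExtractor> extractor = VGG::create();
BOWImgDescriptorExtractor bowDE(extractor, matcher);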
I'm trying to save my SURF descriptor Mats (of different images) into a file for later use, using the code below:
int offline(int nb) { // creates descriptors image
    Mat img;
    int nfeatures = 25;
    int nOctaveLayers = 2;
    double contrastThreshold = 0.04;
    double edgeThreshold = 10;
    double sigma = 1.6;
    SurfFeatureDetector surfDetector = SURF(edgeThreshold, 2, 2, true, false);
    vector<KeyPoint> keypoints;
    SurfDescriptorExtractor surfExtractor;
    Mat imgDescriptors;
    for (int i = 2; i <= nb; i++) {
        img = imread("images/" + to_string(i) + ".jpg", CV_LOAD_IMAGE_GRAYSCALE);
        if (!img.data)
        {
            return -1;
        }
        surfDetector.detect(img, keypoints);
        surfExtractor.compute(img, keypoints, imgDescriptors);
        imwrite(to_string(i) + ".jgp", imgDescriptors); // impossible to save
        //imshow("lol", imgDescriptors); // impossible to show
    }
    return 0;
}
I'm getting this exception: OpenCV Error: Unspecified error (could not find a writer for the specified extension) in cv::imwrite_, file C:\builds\2_4_PackSlave-win64-vc12-shared\opencv\modules\highgui\src\loadsave.cpp, line 275. [RESOLVED by adding the extension + ".jpg"]
So I thought I should try to show the image instead, and I get another error:
Any clue about this? (SECOND EXCEPTION)
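For what it's worth, a descriptor matrix holds CV_32F feature data rather than an 8-bit image, so even with the extension fixed, JPEG encoding is likely to reject it. A sketch of persisting the Mat with cv::FileStorage instead (my suggestion, not from the original post):
// write the float descriptor matrix to YAML for later reuse
FileStorage fs("descriptors_" + to_string(i) + ".yml", FileStorage::WRITE);
fs << "descriptors" << imgDescriptors;
fs.release();

// read it back later
Mat loaded;
FileStorage fsIn("descriptors_2.yml", FileStorage::READ);
fsIn["descriptors"] >> loaded;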
I am labeling my images, but this part of my code gives me a runtime error where I assign the labels; when I remove this line, the code runs fine:
int count_2 = 0;
cv::Mat training_mat(num_img, dictionarySize, CV_32FC1);
cv::Mat labels(0, 1, CV_32FC1);
for (k = all_names.begin(); k != all_names.end(); ++k)
{
    Dir = ((count_2 < files.size()) ? YourImagesDirectory : YourImagesDirectory_2);
    Mat row_img_2 = cv::imread(Dir + *k, 0);
    detector.detect(row_img_2, keypoints);
    RetainBestKeypoints(keypoints, 20);
    dextract.compute(row_img_2, keypoints, descriptors_1);
    Mat my_img = descriptors_1.reshape(1,1);
    my_img.convertTo(training_mat.row(count_2), CV_32FC1);
    //training_mat.push_back(descriptors_1);
    ***Here is the error***
    //labels.at<float>(count_2, 0) = (count_2 < nb_face) ? 1 : -1; // 1 for face, -1 otherwise
    ++count_2;
}
Above is the part of my code where I want to give 1 to the directory which contains positive images and -1 to the directory which contains negative images; nb_face is the files.size() of the positive images.
You need to give a size to the labels matrix when you create it.
Otherwise you get an error, because you access the labels matrix using out-of-bounds indices.
So try to change
cv::Mat labels(0,1,CV_32FC1);
to
cv::Mat labels(num_img,1,CV_32FC1);
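Alternatively (a sketch of an equivalent approach), you can keep the zero-row construction and grow the matrix one row per label with cv::Mat::push_back:
cv::Mat labels(0, 1, CV_32FC1);
// inside the loop, instead of labels.at<float>(count_2, 0) = ...:
labels.push_back((count_2 < nb_face) ? 1.0f : -1.0f);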