Why do my labels give me an error - C++

I am labeling my images, but this part of my code gives me a runtime error where I assign the labels. When I remove that line, the code runs fine:
int count_2 = 0;
cv::Mat training_mat(num_img, dictionarySize, CV_32FC1);
cv::Mat labels(0, 1, CV_32FC1);
for (k = all_names.begin(); k != all_names.end(); ++k)
{
    Dir = ((count_2 < files.size()) ? YourImagesDirectory : YourImagesDirectory_2);
    Mat row_img_2 = cv::imread(Dir + *k, 0);
    detector.detect(row_img_2, keypoints);
    RetainBestKeypoints(keypoints, 20);
    dextract.compute(row_img_2, keypoints, descriptors_1);
    Mat my_img = descriptors_1.reshape(1, 1);
    my_img.convertTo(training_mat.row(count_2), CV_32FC1);
    //training_mat.push_back(descriptors_1);

    // ***Here is the error***
    //labels.at<float>(count_2, 0) = (count_2 < nb_face) ? 1 : -1; // 1 for face, -1 otherwise
    ++count_2;
}
Above is the part of my code. I want to assign 1 to images from the directory of positive images and -1 to images from the directory of negative images; nb_face is files.size() for the positive images.

You need to give the label vector a size when you create it.
Otherwise you get an error because you access the label vector with out-of-bounds indices.
So try to change
cv::Mat labels(0,1,CV_32FC1);
to
cv::Mat labels(num_img,1,CV_32FC1);
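As a minimal, self-contained sketch of the idea (the counts 10 and 6 below are placeholder values standing in for num_img and nb_face from the question):
#include <opencv2/core/core.hpp>

int main()
{
    const int num_img = 10;  // total number of images (placeholder value)
    const int nb_face = 6;   // number of positive (face) images (placeholder value)

    // Allocate one row per image up front, so every at<float>(i, 0) access is in bounds.
    cv::Mat labels(num_img, 1, CV_32FC1);
    for (int i = 0; i < num_img; ++i)
        labels.at<float>(i, 0) = (i < nb_face) ? 1.0f : -1.0f; // 1 for face, -1 otherwise

    return 0;
}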

Related

Kmeans Assertion error: my Mat isn't empty though

My kmeans() function is failing with the error:
OpenCV Error: Assertion failed (N >= K) in cv::kmeans, file C:\builds\master_PackSlave-win32-vc12-shared\opencv\modules\core\src\kmeans.cpp, line 231
What's my error? 'N >= K' must mean it's checking whether the Mat's length (rows*cols) is >= the number of clusters, which mine is (I think). My Mat has 1 row with around 80k columns. Is the Mat I am passing as the first parameter to kmeans empty of pixel/voxel data? I have confirmed that this parameter is a 'collapsed' image (1 row, 80k columns), so it's not quite empty, but perhaps it's all black pixels and that's the error?
Mat image = imread("images/jp.png", CV_32F); // The Jurassic Park movie logo
cvtColor(image, image, CV_BGR2RGB);
Mat collapsedImage = image.reshape(1, 1);
collapsedImage.convertTo(collapsedImage, CV_32F);
int clusterCount = 2;
Mat labels, centres;
// Assertion error thrown here
kmeans(collapsedImage, clusterCount, labels,
TermCriteria(TermCriteria::EPS + TermCriteria::COUNT, 10, 1.0),
3, KMEANS_PP_CENTERS, centres);
imshow("flat", collapsedImage); // shows long flat with all black pixels
imshow("image", image);
OK, it looks like the problem was my reshaping. It should be:
Mat collapsedImage = image.reshape(1, image.rows * image.cols);
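Putting the fix in context, a minimal sketch of the corrected pipeline might look like this (same image path as in the question; each pixel becomes one row with three float columns, so N = rows*cols easily satisfies N >= K for K = 2):
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;

int main()
{
    Mat image = imread("images/jp.png");                      // 8-bit, 3-channel BGR
    Mat samples = image.reshape(1, image.rows * image.cols);  // one row per pixel, 3 columns
    samples.convertTo(samples, CV_32F);                       // kmeans requires CV_32F data

    Mat labels, centres;
    kmeans(samples, 2, labels,
           TermCriteria(TermCriteria::EPS + TermCriteria::COUNT, 10, 1.0),
           3, KMEANS_PP_CENTERS, centres);
    return 0;
}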

my application crashes at KNearest::find_nearest

I want to implement an OCR feature.
I have collected some samples and I want to use K-Nearest to implement it.
So I use the code below to load the data and initialize KNearest:
KNearest* knn = new KNearest;
Mat mData, mClass;
for (int i = 0; i <= 9; ++i)
{
    Mat mImage = imread( FILENAME ); // the filename format is '%d.bmp'; each file is a 15x15 image
    Mat mFloat;
    if (mImage.empty()) break; // stop if the file doesn't exist
    mImage.convertTo(mFloat, CV_32FC1);
    mData.push_back(mFloat.reshape(1, 1));
    mClass.push_back( '0' + i );
}
knn->train(mData, mClass);
Then I call this code to find the best match:
for (vector<Mat>::iterator it = charset.begin(); it != charset.end(); ++it)
{
    Mat mFloat;
    it->convertTo(mFloat, CV_32FC1); // *it is a 15x15 gray image
    float result = knn->find_nearest(mFloat.reshape(1, 1), knn->get_max_k());
}
But my application crashes at find_nearest.
Could anyone help me?
I seem to have found the problem...
The query samples in charset are gray images converted with cvtColor, but the training images I load are not. After I add
cvtColor(mImage, mImage, COLOR_BGR2GRAY);
between
if (mImage.empty()) break;
and
mImage.convertTo(mFloat, CV_32FC1);
find_nearest() returns a value and my application works fine.
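For reference, a minimal sketch of the training loop with that one-line fix in place (same names as in the question; FILENAME remains the asker's placeholder for the '%d.bmp' sample path):
KNearest* knn = new KNearest;
Mat mData, mClass;
for (int i = 0; i <= 9; ++i)
{
    Mat mImage = imread( FILENAME );          // 15x15 sample image, loaded as 3-channel BGR
    if (mImage.empty()) break;                // stop when a file is missing
    cvtColor(mImage, mImage, COLOR_BGR2GRAY); // single channel, so rows match the gray queries
    Mat mFloat;
    mImage.convertTo(mFloat, CV_32FC1);
    mData.push_back(mFloat.reshape(1, 1));    // one 1x225 row per sample
    mClass.push_back( '0' + i );              // class label for digit i
}
knn->train(mData, mClass);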

OpenCV: Error copying one image to another

I am trying to copy one image to another pixel by pixel (I know there are sophisticated methods available; I am trying to solve another problem, and the answer to this will be useful).
This is my code:
int main()
{
    Mat Img;
    Img = imread("../../../stereo_images/left01.jpg");
    Mat copyImg = Mat::zeros(Img.size(), CV_8U);
    for (int i = 0; i < Img.rows; i++) {
        for (int j = 0; j < Img.cols; j++) {
            copyImg.at<uchar>(j,i) = Img.at<uchar>(j,i);
        }
    }
    namedWindow("Image", CV_WINDOW_AUTOSIZE);
    imshow("Image", Img);
    namedWindow("copyImage", CV_WINDOW_AUTOSIZE);
    imshow("copyImage", copyImg);
    waitKey(0);
    return 0;
}
When I run this code in Visual Studio I get the following error:
OpenCV Error: Assertion failed (dims <= 2 && data && (unsigned)i0 < (unsigned)size.p[0] && (unsigned)(i1*DataType<_Tp>::channels) < (unsigned)(size.p[1]*channels()) && ((((sizeof(size_t)<<28)|0x8442211) >> ((DataType<_Tp>::depth) & ((1 << 3) - 1))*4) & 15) == elemSize1()) in cv::Mat::at, file c:\opencv\opencv-2.4.9\opencv\build\include\opencv2\core\mat.hpp, line 537
I know for a fact that Img's type is CV_8U. Why does this happen?
Thanks!
// will read in a 3-channel BGR image, no matter what the file contains
Img = imread("../../../stereo_images/left01.jpg");
To make it read a grayscale image, use:
Img = imread("../../../stereo_images/left01.jpg", CV_LOAD_IMAGE_GRAYSCALE);
Then you don't need to copy pixel by pixel (and you should avoid it anyway); just use:
Mat im2 = Img.clone();
If you do per-pixel loops, watch out to get the indices right. It's a row-col world here, not x,y, so in your case it should be:
copyImg.at<uchar>(i,j) = Img.at<uchar>(i,j);
I know for a fact that Img's type is CV_8U.
But CV_8U is just the image depth (8-bit unsigned). The type also specifies the number of channels, which is usually three: one for blue, one for green and one for red, in that order by default in OpenCV. The type would then be CV_8UC3 (3 channels). imread will convert even a black-and-white image to a 3-channel image by default; imread(filename, CV_LOAD_IMAGE_GRAYSCALE) will load a 1-channel image (CV_8UC1) instead. But if you're not sure, the easiest solution is
Mat copyImg = Mat::zeros(Img.size(), Img.type());
To access the array elements you have to know the size of each element. Using .at<uchar>() on a 3-channel image will only access the first channel, because each pixel holds 3*8 bits. So on a 3-channel image you have to use
copyImg.at<Vec3b>(i,j) = Img.at<Vec3b>(i,j);
where Vec3b is a cv::Vec<uchar, 3>. You should also note that the first argument of at<>(,) is the index along dimension 0, which is the rows, and the second argument is the columns. In other words, in classic 2D x-y chart order you access a pixel with .at<>(y,x) == .at<>(Point(x,y)).
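Putting that together, a minimal sketch of the per-pixel copy for the default 3-channel case (same path as in the question) would be:
Mat Img = imread("../../../stereo_images/left01.jpg");   // CV_8UC3 by default
Mat copyImg = Mat::zeros(Img.size(), Img.type());        // same depth and channel count
for (int i = 0; i < Img.rows; i++)         // i indexes rows (y)
    for (int j = 0; j < Img.cols; j++)     // j indexes columns (x)
        copyImg.at<Vec3b>(i, j) = Img.at<Vec3b>(i, j);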
Your problem is with this line:
copyImg.at<uchar>(j,i) = Img.at<uchar>(j,i);
It should be:
copyImg.at<uchar>(i,j) = Img.at<uchar>(i,j);
Note that if you want to copy an image you can simply do this:
Mat copyImg = Img.clone();

Getting error using SVM with SURF

Below is my code. It runs fine, but after a long processing time it gives me the runtime error shown below:
// Initialize constant values
const int nb_cars = files.size();
const int not_cars = files_no.size();
const int num_img = nb_cars + not_cars; // Get the number of images

// Initialize your training set.
cv::Mat training_mat(num_img, dictionarySize, CV_32FC1);
cv::Mat labels(0, 1, CV_32FC1);
std::vector<string> all_names;
all_names.assign(files.begin(), files.end());
all_names.insert(all_names.end(), files_no.begin(), files_no.end());

// Load images and add them to the training set
int count = 0;
vector<string>::const_iterator i;
string Dir;
for (i = all_names.begin(); i != all_names.end(); ++i)
{
    Dir = ((count < files.size()) ? YourImagesDirectory : YourImagesDirectory_2);
    Mat row_img = cv::imread(Dir + *i, 0);
    detector.detect(row_img, keypoints);
    RetainBestKeypoints(keypoints, 20); // retain top 20 keypoints
    extractor->compute(row_img, keypoints, descriptors_1);
    //uncluster.push_back(descriptors_1);
    descriptors.reshape(1, 1);
    bow.add(descriptors_1);
    ++count;
}

int count_2 = 0;
vector<string>::const_iterator k;
Mat vocabulary = bow.cluster();
dextract.setVocabulary(vocabulary);
for (k = all_names.begin(); k != all_names.end(); ++k)
{
    Dir = ((count_2 < files.size()) ? YourImagesDirectory : YourImagesDirectory_2);
    row_img = cv::imread(Dir + *k, 0);
    detector.detect(row_img, keypoints);
    RetainBestKeypoints(keypoints, 20);
    dextract.compute(row_img, keypoints, descriptors_1);
    descriptors_1.reshape(1, 1);
    training_mat.push_back(descriptors_1);
    labels.at<float>(count_2, 0) = (count_2 < nb_cars) ? 1 : -1;
    ++count_2;
}
Error:
OpenCV Error: Formats of input arguments do not match () in unknown function, file ..\..\..\src\opencv\modules\core\src\matrix.cpp, line 652
I reshape descriptors_1 into a single row in the second loop for the SVM, but that does not solve the error.
I think you are trying to cluster with fewer features than the number of clusters.
You could take more images, or more than 10 descriptors from each image.
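If it helps, a quick sanity check along those lines (just a sketch, assuming bow is the cv::BOWKMeansTrainer from the code above and dictionarySize is its cluster count) could be:
// BOWTrainer::descriptorsCount() reports how many descriptors were add()-ed;
// k-means needs at least as many samples (descriptors) as clusters.
CV_Assert(bow.descriptorsCount() >= dictionarySize);
Mat vocabulary = bow.cluster();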
After three days I found out that my error is in the labeling: I get the error where I label the images. The answer above is also relevant, since using too few images can cause this error as well, but in my case that was not the reason. When I checked the code line by line, the error starts here:
labels.at<float>(count_2, 0) = (count_2 < nb_cars) ? 1 : -1;
because of the line:
Mat labels(0, 1, CV_32FC1);
which should instead be:
Mat labels(num_img, 1, CV_32FC1);
and, instead of push_back, I should write each descriptor into its row with:
my_img.convertTo( training_mat.row(count_2), CV_32FC1 );
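Assembled, a sketch of the corrected second loop (same variables as in the code above, with my_img holding the reshaped BOW descriptor) looks like this:
cv::Mat labels(num_img, 1, CV_32FC1);   // one label row per image, allocated up front
int count_2 = 0;
for (k = all_names.begin(); k != all_names.end(); ++k)
{
    Dir = ((count_2 < files.size()) ? YourImagesDirectory : YourImagesDirectory_2);
    row_img = cv::imread(Dir + *k, 0);
    detector.detect(row_img, keypoints);
    RetainBestKeypoints(keypoints, 20);
    dextract.compute(row_img, keypoints, descriptors_1);
    Mat my_img = descriptors_1.reshape(1, 1);
    my_img.convertTo(training_mat.row(count_2), CV_32FC1); // fill the pre-allocated row
    labels.at<float>(count_2, 0) = (count_2 < nb_cars) ? 1.0f : -1.0f; // 1 = car, -1 = not car
    ++count_2;
}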

OpenCV running kmeans algorithm on an image

I am trying to run kmeans on a 3-channel color image, but every time I try to run the function it crashes with the following error:
OpenCV Error: Assertion failed (data.dims <= 2 && type == CV_32F && K > 0) in unknown function, file ..\..\..\OpenCV-2.3.0\modules\core\src\matrix.cpp, line 2271
I've included the code below with some comments to help specify what is being passed in. Any help is greatly appreciated.
// Load in an image
// Depth: 8, Channels: 3
IplImage* iplImage = cvLoadImage("C:/TestImages/rainbox_box.jpg");
// Create a matrix header for the image
cv::Mat mImage = cv::Mat(iplImage);
// Create a single-channel image for the labels we need
IplImage* iplLabels = cvCreateImage(cvGetSize(iplImage), iplImage->depth, 1);
// Convert the image to grayscale
cvCvtColor(iplImage, iplLabels, CV_RGB2GRAY);
// Create the matrix for the labels
cv::Mat mLabels = cv::Mat(iplLabels);
// Create the labels
int rows = mLabels.total();
int cols = 1;
cv::Mat list(rows, cols, mLabels.type());
uchar* src;
uchar* dest = list.ptr(0);
for (int i = 0; i < mLabels.size().height; i++)
{
    src = mLabels.ptr(i);
    memcpy(dest, src, mLabels.step);
    dest += mLabels.step;
}
list.convertTo(list, CV_32F);
// Run the algorithm
cv::Mat labellist(list.size(), CV_8UC1);
cv::Mat centers(6, 1, mImage.type());
cv::TermCriteria termcrit(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 10, 1.0);
kmeans(mImage, 6, labellist, termcrit, 3, cv::KMEANS_PP_CENTERS, centers);
The error says it all: Assertion failed (data.dims <= 2 && type == CV_32F && K > 0)
These are very simple rules to understand; the function will only work if:
mImage.depth() is CV_32F,
mImage.dims is <= 2,
and K > 0 (in this case you define K as 6).
From what you stated in the question, it seems that
IplImage* iplImage = cvLoadImage("C:/TestImages/rainbox_box.jpg");
is loading the image as IPL_DEPTH_8U by default and not IPL_DEPTH_32F. This means that mImage is also IPL_DEPTH_8U, which is why your code is not working.
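A minimal sketch of the missing preparation step, keeping the names from the question, would be to reshape the colour image into an N x 3 sample matrix and convert it to CV_32F before calling kmeans:
// One row per pixel, three float columns (B, G, R) - now data.dims == 2 and the type is CV_32F.
cv::Mat samples = mImage.reshape(1, mImage.rows * mImage.cols);
samples.convertTo(samples, CV_32F);

cv::Mat labellist, centers;
cv::TermCriteria termcrit(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 10, 1.0);
cv::kmeans(samples, 6, labellist, termcrit, 3, cv::KMEANS_PP_CENTERS, centers);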