This piece of code:
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
using namespace std;

int main() {
    Mat input_img = imread("abcdef.png", CV_8UC1); // Image of size 1000*800
    Moments moment = moments(input_img, false);
    double humm[7];
    HuMoments(moment, humm);
    for (int i = 0; i < 7; i++)
        cout << humm[i] << endl;
}
prints out:
0.000789284
1.24093e-07
2.37587e-15
1.48852e-15
-3.19408e-31
4.09704e-20
-2.78098e-30
which looks wrong. Hu's invariant moments shouldn't be that small. I vaguely remember reading somewhere that the first moment is usually > 100, the second > 60 ...
Did I miss something?
I am reading a bitmap file from disk and, after some manipulation, writing a copy back to disk, along with a copy of the original file. The bitmaps are relatively small, with a resolution of 31 x 31 pixels.
What I see is that with a resolution of 30 x 30 pixels, cv::imwrite correctly writes out the files; however, with a resolution of 31 x 31 pixels, cv::imwrite just gets stuck and never returns. The same directories are used in both cases.
<...>
image = cv::imread(imageName, IMREAD_GRAYSCALE); // Read the file
if( image.empty() ) // Check for invalid input
{
cout << "Could not open or find the image" << std::endl ;
return -1;
}
Mat image_flip (width,height,CV_8U);
int8_t pixel_8b;
for (int i=0; i< width; i++){
for (int j=0; j < height; j++){
pixel_8b= image.at<int8_t>(i,j);
image_flip.at<int8_t>(width-i,j) = pixel_8b;
}
}
cout << "Writing files" << endl;
result=cv::imwrite("./output_flip.bmp", image_flip);
cout << result << endl;
return 0;
In the good case, the file output_flip.bmp is written to disk and result is displayed. In the bad case of being stuck, the last thing I see is "Writing files" and then nothing more. I can switch back and forth between the good and the bad case just by resizing the input image.
Any ideas how to solve that issue?
As already discussed in the comments, you didn't provide a minimal, reproducible example (MRE). So I derived the following MRE from your code, because I wanted to point out several things (and wondered how your code could work at all):
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat image = cv::imread("path/to/your/image.png", cv::IMREAD_GRAYSCALE);
    // cv::resize(image, image, cv::Size(30, 30));
    cv::Mat image_flip(image.size().height, image.size().width, CV_8U);
    for (int i = 0; i < image.size().width; i++)
    {
        for (int j = 0; j < image.size().height; j++)
        {
            const uint8_t pixel_8b = image.at<uint8_t>(j, i);              // .at(row, col), i.e. .at(y, x)
            image_flip.at<uint8_t>(j, image.size().width - 1 - i) = pixel_8b;  // mirror horizontally
        }
    }
    std::cout << "Writing files" << std::endl;
    const bool result = cv::imwrite("./output_flip.bmp", image_flip);
    std::cout << result << std::endl;
    return 0;
}
For a single-channel, 8-bit image (CV_8U), use uint8_t when accessing single pixels.
When using .at, please note that the syntax is .at(y, x). For square images, it might be equal, but in general it's a common source of errors.
Accessing .at(j, width - i) MUST fail for i = 0 if width = image.size().width, since the last valid index of image is width - 1 (see the short illustration after this list).
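To illustrate both points, here is a tiny, hypothetical example (the 2 x 3 image is made up purely for demonstration):
cv::Mat img = cv::Mat::zeros(2, 3, CV_8U);   // 2 rows (height = 2), 3 columns (width = 3)
img.at<uint8_t>(1, 2) = 255;                 // .at(row, col), i.e. .at(y, x)
const int width = img.size().width;          // 3
// img.at<uint8_t>(0, width) would be out of bounds: valid column indices are 0 .. width - 1
const uint8_t last = img.at<uint8_t>(0, width - 1);  // last valid column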
After correcting these issues, I could run your code without problems for larger images as well as for images resized to 30 x 30 or 31 x 31. So please have a look and see whether you can resolve your issue(s) by modifying your code accordingly.
(I'm aware that the actual issue as stated in the question (hanging imwrite) is not addressed at all in my answer, but as I said, I couldn't even run the provided code in the first place...)
Hope that helps!
int main(){
Mat cmp, Ref, Diff;
cmp = imread("image1.tif", CV_LOAD_IMAGE_UNCHANGED);
Ref = imread("image2.tif", CV_LOAD_IMAGE_UNCHANGED);
ShiftChk(cmp, Ref);
absdiff(cmp, Ref, Diff);
imshow("difference image", Diff);
waitKey(0);
double min, max;
minMaxLoc(Diff, &min, &max);
Point min_loc, max_loc;
minMaxLoc(Diff, &min, &max, &min_loc, &max_loc);
Size sz = Diff.size();
cout << "max val : " << max << endl;//5
cout << "max val: " << max_loc << endl; //[26,38]
vector<vector<double>>test;
for (int i = 0; i < Diff.cols; i++) {
for (int j = 0; j < Diff.rows; j++) {
Point difference = Diff.at<uchar>(26, 38) - Diff.at<uchar>(j, i);
double dist = sqrt(difference.x*difference.x + difference.y*difference.y);
test.push_back(dist);
}
}
}
I am trying to find the Euclidean distance between a single point in an image and all other pixels. The distance values are to be stored in the vector test, but it's showing an error. I also don't know whether the logic I have used is correct to give the right answer (Euclidean distance). Can anyone help me out? Thanks in advance.
Error message is:
error C2664:
'void std::vector<std::vector<double,std::allocator<_Ty>>,std::allocator<std::vector<_Ty,std::allocator<_Ty>>>>::push_back(const std::vector<_Ty,std::allocator<_Ty>> &)' :
cannot convert argument 1 from 'double' to 'std::vector<double,std::allocator<_Ty>> &&'
There are two major issues:
You're appending the values to the test vector incorrectly. You need either to create an intermediate vector and push_back it to test (as shown in 0X0nosugar's answer), or, better, to initialize your vector with the correct dimensions and put each value in the right place.
vector<vector<double>> test(Diff.rows, vector<double>(Diff.cols));
for (int i = 0; i < Diff.rows; i++) {
for (int j = 0; j < Diff.cols; j++) {
test[i][j] = ...
}
}
As shown in the snippet above, it's better (and faster) to scan by rows, because OpenCV stores images row-wise.
You are not computing the distance between two points. You are in fact taking the difference of the values at two given points and creating a Point object out of it (which makes no sense). Also, you can avoid computing the Euclidean distance explicitly by using cv::norm:
test[i][j] = norm(Point(38, 26) - Point(j, i)); // Pay attention to i,j order!
Putting it all together:
Point ref(38, 26);
vector<vector<double>> test(Diff.rows, vector<double>(Diff.cols));
for (int i = 0; i < Diff.rows; i++) {
for (int j = 0; j < Diff.cols; j++) {
test[i][j] = norm(ref - Point(j,i));
}
}
For the following code, here is a bit of context.
Mat img0; // 1280x960 grayscale
--
timer.start();
for (int i = 0; i < img0.rows; i++)
{
vector<double> v;
uchar* p = img0.ptr<uchar>(i);
for (int j = 0; j < img0.cols; ++j)
{
v.push_back(p[j]);
}
}
cout << "Single thread " << timer.end() << endl;
and
timer.start();
concurrency::parallel_for(0, img0.rows, [&img0](int i) {
vector<double> v;
uchar* p = img0.ptr<uchar>(i);
for (int j = 0; j < img0.cols; ++j)
{
v.push_back(p[j]);
}
});
cout << "Multi thread " << timer.end() << endl;
The result:
Single thread 0.0458856
Multi thread 0.0329856
The speedup is hardly noticeable.
My processor is Intel i5 3.10 GHz
RAM 8 GB DDR3
EDIT
I also tried a slightly different approach.
vector<Mat> imgs = split(img0, 2,1); // `split` is my custom function that, in this case, splits `img0` into two images, its left and right half
--
timer.start();
concurrency::parallel_for(0, (int)imgs.size(), [imgs](int i) {
Mat img = imgs[i];
vector<double> v;
for (int row = 0; row < img.rows; row++)
{
uchar* p = img.ptr<uchar>(row);
for (int col = 0; col < img.cols; ++col)
{
v.push_back(p[col]);
}
}
});
cout << " Multi thread Sectored " << timer.end() << endl;
And I get a much better result:
Multi thread Sectored 0.0232881
So, it looks like I was creating 960 threads or something when I ran
parallel_for(0, img0.rows, ...
And that didn't work well.
(I must add that Kenney's comment is correct. Do not put too much weight on the specific numbers I stated here; when measuring small intervals such as these, there is high variance. But in general, what I wrote in the edit about splitting the image in half improved performance compared to the old approach.)
I think your problem is that you are limited by memory bandwidth. Your second snippet is basically reading from the whole of the image, and that has got to come out of main memory into cache. (Or out of L2 cache into L1 cache).
You need to arrange your code so that all four cores are working on the same bit of memory at once (I presume you are not actually trying to optimize this code - it is just a simple example).
Edit: Insert crucial "not" in last parenthetical remark.
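To make that concrete, here is a rough sketch of what I mean (just an illustration, assuming the same img0 as in the question plus <ppl.h> and <algorithm>; the chunk count of 4 is only an example): instead of launching one tiny task per row, give each core one contiguous block of rows to stream through:
const int nChunks = 4;  // e.g. one block per physical core
const int rowsPerChunk = (img0.rows + nChunks - 1) / nChunks;
concurrency::parallel_for(0, nChunks, [&img0, rowsPerChunk](int c) {
    const int rowBegin = c * rowsPerChunk;
    const int rowEnd = std::min(img0.rows, rowBegin + rowsPerChunk);
    vector<double> v;
    v.reserve(static_cast<size_t>(rowEnd - rowBegin) * img0.cols);  // avoid repeated reallocation
    for (int row = rowBegin; row < rowEnd; ++row)
    {
        uchar* p = img0.ptr<uchar>(row);  // each task reads a contiguous block of rows
        for (int col = 0; col < img0.cols; ++col)
            v.push_back(p[col]);
    }
});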
I have tested the following two snippets and they give different results. The second one is right. I do not understand why that is, and I wonder if there is a bug in OpenCV?
The result matrix f_sub is different in the two examples.
1)
Mat f = Mat::zeros(96,112,CV_8UC1);
randu(f,0,255);
Mat f_sub = f(cv::Rect(17,14,78,68));
//mat2File("f.mm",f,1);
//mat2File("f_sub.mm",f_sub,1);
exit(0);
2)
Mat f = Mat::zeros(96,112,CV_8UC1);
randu(f,0,255);
Mat f_sub = f(cv::Rect(17,14,78,68)).clone();
//mat2File("f.mm",f,1);
//mat2File("f_sub.mm",f_sub,1);
exit(0);
mat2File just prints a Mat into a file:
void mat2File(string filename, Mat M, int y)
{
ofstream fout(filename.c_str());
//fout << M.rows<<" "<<M.cols<<endl;
uchar *M_ptr = (uchar*)M.ptr();
for(size_t i=0; i<M.rows; i++)
{
fout<<endl;
for(size_t j=0; j<M.cols; j++)
{
fout<< (size_t)M_ptr[i*M.cols+j]<<" ";
}
}
}
mat2File seems to be the culprit.
M_ptr[i*M.cols+j] is incorrect for non-continuous matrices, because the pitch between matrix rows is greater than M.cols. You'd better use M.at<uchar>(y, x) to access Mat pixels.
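For example, here is a minimal sketch of mat2File along those lines (same signature; the unused y parameter is kept only so existing calls keep compiling):
void mat2File(string filename, Mat M, int y)
{
    ofstream fout(filename.c_str());
    for (int i = 0; i < M.rows; i++)
    {
        fout << endl;
        for (int j = 0; j < M.cols; j++)
        {
            // .at<uchar>(row, col) accounts for M.step, so non-continuous submatrix views print correctly
            fout << (size_t)M.at<uchar>(i, j) << " ";
        }
    }
}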
I'm trying to create my own SVM classifier, and I'm stuck at the beginning. I have two sets of data, positive and negative images, and I want to create a training matrix to pass into the SVM. So I read this fundamental post and tried to do as it says:
char* path_positive, path_negative;
int numPos, numNeg;
int imageWidth=130;
int imageHeight=160;
numPos= 176;
numNeg= 735;
path_positive= "C:\\SVM\\positive\\";
path_negative= "C:\\SVM\negative\\";
Mat classes(numPos+numNeg, 1, CV_32FC1);
Mat trainingMat(numPos+numNeg, imageWidth*imageHeight, CV_32FC1 );
vector<int> trainingLabels;
for(int file_num=0; file_num < numPos; file_num++)
{
stringstream ss(stringstream::in | stringstream::out);
ss << path_positive << file_num << ".jpg";
Mat img = imread(ss.str(), CV_LOAD_IMAGE_GRAYSCALE);
int ii = 0; // Current column in training_mat
for (int i = 0; i<img.rows; i++) {
for (int j = 0; j < img.cols; j++) {
trainingMat.at<float>(file_num,ii++) = trainingMat.at<uchar>(i,j);
}
}
trainingLabels.push_back(1);
}
But when I run this code I get an "Assertion failed" error.
I know that I made some stupid mistake, because I'm very new to OpenCV.
Thanks for any help.
EDIT: The error occurs on this line:
trainingMat.at<float>(file_num, ii++) = trainingMat.at<uchar>(i, j);