I've seen linked questions but I can't understand why MATLAB and OpenCV give different results.
MATLAB Code
>> A = [6 4 23 -3; 9 -10 4 11; 2 8 -5 1]
A =
6 4 23 -3
9 -10 4 11
2 8 -5 1
>> Col_step_1 = std(A, 0, 1)
Col_step_1 =
3.5119 9.4516 14.2945 7.2111
>> Col_final = std(Col_step_1)
Col_final =
4.5081
Using OpenCV and this function:
double getColWiseStd(cv::Mat in)
{
    CV_Assert( in.type() == CV_64F );
    cv::Mat meanValue, stdValue, m2, std2;
    cv::Mat colSTD(1, in.cols, CV_64F);
    cv::Mat colMEAN(1, in.cols, CV_64F);
    for (int i = 0; i < in.cols; i++)
    {
        cv::meanStdDev(in.col(i), meanValue, stdValue);
        colSTD.at<double>(i) = stdValue.at<double>(0);
        colMEAN.at<double>(i) = meanValue.at<double>(0);
    }
    std::cout << "\nCOLstd:\n" << colSTD << std::endl;
    cv::meanStdDev(colSTD, m2, std2);
    std::cout << "\nCOLstd_f:\n" << std2 << std::endl;
    return std2.at<double>(0,0);
}
Applied to the same matrix yields the following:
Matrix:
[6, 4, 23, -3;
9, -10, 4, 11;
2, 8, -5, 1]
COLstd:
[2.867441755680876, 7.71722460186015, 11.67142760000773, 5.887840577551898]
COLstd_f:
[3.187726614989861]
I'm pretty sure that the OpenCV and MATLAB std functions are both correct, so I can't find what I'm doing wrong. Am I missing a type conversion? Something else?
The standard deviation you're calculating in OpenCV is normalised by the number of observations (N), whereas the one you're calculating in MATLAB is normalised by N-1, which is MATLAB's default and is known as Bessel's correction. Hence the difference.
You can normalise by N in MATLAB by selecting the second input argument as 1:
Col_step_1 = std(A, 1, 1);
Col_final = std(Col_step_1, 1);
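Alternatively, if you'd rather match MATLAB's default on the OpenCV side, you can rescale the N-normalised standard deviation that cv::meanStdDev returns by sqrt(N/(N-1)). A minimal sketch of how the loop body in getColWiseStd could be adapted (this needs <cmath> and assumes each column has in.rows observations):
cv::meanStdDev(in.col(i), meanValue, stdValue);
int N = in.rows; // number of observations per column
// Bessel's correction: convert the N-normalised std to N-1 normalisation
colSTD.at<double>(i) = stdValue.at<double>(0) * std::sqrt((double)N / (N - 1));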
I'm new to OpenCV (in C++) and image processing. Given a grayscale image, I want to replace the value of each pixel with the average grayscale value of its 3x3 neighborhood.
First of all, I open the image:
Mat img = imread(samples::findFile(argv[1]), IMREAD_GRAYSCALE);
// Example of image
[4 3 9 1,
2 9 8 0,
3 5 2 1,
7 5 8 3]
In order to get the average of the 3x3 neighborhood at the corner pixels (top left, top right, bottom left and bottom right) as well, I pad the image with a constant 1-pixel border on each side:
Mat imgPadding;
copyMakeBorder(img, imgPadding, 1,1,1,1, BORDER_CONSTANT, Scalar(0));
// Padding example
[0 0 0 0 0 0,
0 4 3 9 1 0,
0 2 9 8 0 0,
0 3 5 2 1 0,
0 7 5 8 3 0,
0 0 0 0 0 0]
Now I've got some trouble with the output image. I have tried various approaches, but none of them brought me to a solution. I tried the following, using the mean() function to get the average grayscale value of the 3x3 submatrix at (i, j), extracted with Rect(). The loop starts at the first non-padding pixel and ends at the last non-padding pixel.
Mat imgAvg = Mat::zeros(img.rows, img.cols, img.type());
// initialization of the output Mat object with same input size and type
for (int i = 1; i < imgAvg.rows; i++)
    for (int j = 1; j < imgAvg.cols; j++)
        imgAvg.at<Scalar>(Point(j - 1, i - 1)) = mean(imgPadding(Rect(j - 1, i - 1, 3, 3)));
but I got this runtime error
main: malloc.c:2379: sysmalloc: Assertion `(old_top == initial_top (av) && old_size == 0) || ((unsigned long) (old_size) >= MINSIZE && prev_inuse (old_top) && ((unsigned long) old_end & (pagesize - 1)) == 0)' failed.
I also tried randomly reducing the range:
for (int i = 1; i < imgAvg.rows - 35; i++)
    for (int j = 1; j < imgAvg.cols - 35; j++)
        imgAvg.at<Scalar>(Point(j - 1, i - 1)) = mean(imgPadding(Rect(j - 1, i - 1, 3, 3)));
and I got this weird output: screenshot
Thanks in advance!
EDIT:
Thank you all for the answers, I didn't know yet the blur() function.
In this way I import the image and simply call the blur function
Mat img = imread(samples::findFile(argv[1]), IMREAD_GRAYSCALE);
Mat imgAvg = Mat::zeros(img.rows, img.cols, img.type());
blur(img, imgAvg, Size(3, 3));
But since I'm still a beginner, and I think the purpose of the exercise assigned to me was to write the code by hand, I also tried this working solution:
for (int i = 1; i <= imgAvg.rows; i++)
    for (int j = 1; j <= imgAvg.cols; j++)
        imgAvg.at<uint8_t>(Point(j - 1, i - 1)) = mean(imgPadding(Rect(j - 1, i - 1, 3, 3)))[0];
Result of the algorithm (identical for both solutions)
Just apply a smoothing filter to the image; the blur function in the imgproc module should accomplish what you need. A good example is in the documentation: https://docs.opencv.org/3.4/dc/dd3/tutorial_gausian_median_blur_bilateral_filter.html
In this case, the arguments you need are the source image (src), a destination image (dst), and the kernel size (ksize), which is 3x3 here:
Mat src = ...; // your input image
Mat dst = Mat::zeros( src.size(), src.type() );
blur( src, dst, Size( 3, 3 ) );
Smoothing manually will not be as performant, and is more prone to error. Your crash, for instance, comes from imgAvg.at<Scalar>(...): Scalar holds four doubles (32 bytes), so assigning through at<Scalar> on a single-channel 8-bit Mat writes far beyond each one-byte element and corrupts the heap, which is exactly what the malloc assertion is complaining about.
Good luck!
What you want to do is called "box filtering" in image processing. In OpenCV you do:
cv::blur(src_img,
         dest_img,        // same shape and type as src, cannot be src
         cv::Size(3, 3)); // use a kernel of size 3x3
The default padding is to reflect the border pixel, which won't skew the image statistics. See the documentation if you prefer a different border mode.
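For instance (a minimal sketch on my part, assuming you want the same zero padding as the copyMakeBorder call in the question), the border mode is the last argument:
// Same 3x3 box filter, but padding the border with zeros
// (for filtering functions, BORDER_CONSTANT pads with 0).
cv::blur(src_img, dest_img, cv::Size(3, 3), cv::Point(-1, -1), cv::BORDER_CONSTANT);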
I would like to perform the following operation (currently implemented in Matlab) using cv::Mat variables.
I have matrix mask:
mask =
1 0 0
1 0 1
then matrix M:
M =
1
2
3
4
5
6
3
and samples = M(mask,:)
samples =
1
2
6
My question is, how can I perform the same operation like, M(mask,:), with OpenCV?
To my knowledge, the closest function to this in OpenCV is copyTo, which takes a matrix and a mask as inputs. But that function keeps the original structure of your matrix, as you can test yourself.
I think there is no problem with using a for loop in OpenCV (in C++) because it's fast, so I propose a loop like the code below.
Mat M = (Mat_<uchar>(2,3) << 1, 2, 3, 4, 5, 6); // create M
cout << M << endl;
Mat mask = (Mat_<bool>(2,3) << 1, 0, 0, 1, 0, 1); // create mask
cout << mask << endl;
Mat samples;
for (int i = 0; i < M.total(); i++)
{
    if (mask.at<uchar>(i))
        samples.push_back(M.at<uchar>(i));
}
cout << samples << endl;
The code above prints the following output:
[ 1, 2, 3;
4, 5, 6]
[ 1, 0, 0;
1, 0, 1]
[ 1;
4;
6]
Note that this gives [1; 4; 6] rather than MATLAB's [1; 2; 6]: the loop linearizes the mask in OpenCV's row-major order, while MATLAB's logical indexing linearizes it column-major. If you use copyTo instead, your output will be like below:
[1 0 0
4 0 6];
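If you need to reproduce MATLAB's [1; 2; 6] exactly, a sketch (my addition, reusing M, mask and the uchar types from above) is to walk the mask column by column, the way MATLAB linearizes it:
Mat samplesColMajor;
for (int j = 0; j < mask.cols; j++)     // column-major order, like MATLAB
    for (int i = 0; i < mask.rows; i++)
        if (mask.at<uchar>(i, j))
            samplesColMajor.push_back(M.at<uchar>(j * mask.rows + i)); // k-th element of the MATLAB column vector
cout << samplesColMajor << endl; // [1; 2; 6]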
I am very new to C++ and OpenCV but more familiar with Matlab. I have a task that I need to move to C++ for faster processing. So I would like to ask for your suggestions on an image processing problem. I have 10 images in a folder, and I was able to read them all using dirent.h, like in this, storing each frame with frames[count] = rawImage in a while loop:
int count = 0;
std::vector<cv::Mat> frames;
frames.resize(10);
while ((_dirent = readdir(directory)) != NULL)
{
    std::string fileName = inputDirectory + "\\" + std::string(_dirent->d_name);
    cv::Mat rawImage = cv::imread(fileName.c_str(), CV_LOAD_IMAGE_GRAYSCALE);
    if (rawImage.empty())
        continue; // skip the "." and ".." entries that readdir also returns
    frames[count] = rawImage; // insert rawImage into frames (the original images)
    count++;
}
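As an aside (my assumption: this needs OpenCV 3 or newer, whereas the CV_LOAD_IMAGE_GRAYSCALE constant suggests 2.x), cv::glob can replace the dirent.h machinery entirely; the file pattern below is illustrative:
// Collect matching file names (sorted), then load each one.
std::vector<cv::String> fileNames;
cv::glob(inputDirectory + "\\*.png", fileNames);
std::vector<cv::Mat> frames;
for (size_t k = 0; k < fileNames.size(); k++)
    frames.push_back(cv::imread(fileNames[k], cv::IMREAD_GRAYSCALE));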
Now I want to access each frame and do a calculation, similar to Matlab, to get another matrix A such that A = frames(:,:,1) + 2*frames(:,:,2). How can I do that?
Since frames is a std::vector<cv::Mat>, you should be able to access each Mat this way:
// suppose you want the nth matrix
cv::Mat frame_n = frames[n];
Now, if you want to do the calculation you said on the first two Mats, then:
cv::Mat A = frames[0] + 2 * frames[1];
Example:
// mat1 = [[1 1 1]
// [2 2 2]
// [3 3 3]]
cv::Mat mat1 = (cv::Mat_<double>(3, 3) << 1, 1, 1, 2, 2, 2, 3, 3, 3);
cv::Mat mat2 = mat1 * 2; // multiplication matrix x scalar
// just to look like your case
std::vector<cv::Mat> frames;
frames.push_back(mat1);
frames.push_back(mat2);
cv::Mat A = frames[0] + 2 * frames[1]; // your calculation works
// A = [[ 5 5 5]
// [10 10 10]
// [15 15 15]]
You can always read the list of acceptable expressions.
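One caveat worth adding (my note, not part of the original answer): since the frames were loaded as 8-bit grayscale, frames[0] + 2 * frames[1] saturates at 255. A minimal sketch that avoids the clipping by converting to a wider type first:
// Convert to 32-bit float before the arithmetic, so values above 255
// are not clipped by the saturate_cast applied to 8-bit matrices.
cv::Mat f0, f1;
frames[0].convertTo(f0, CV_32F);
frames[1].convertTo(f1, CV_32F);
cv::Mat A = f0 + 2 * f1; // exact result, no saturation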
I've tested my algorithm in MATLAB, and it's done.
Now I'm writing the port to C++ with OpenCV 2.4.5.
The problem is that the inverse Fourier transform methods of the two platforms, OpenCV and MATLAB, give different results.
So I have tested them with a simple matrix.
Here are the test results.
The subject matrix is a 3-by-3 2-D matrix:
1 2 3
4 5 6
7 8 9
-MATLAB-
test = [ 1, 2, 3;
4, 5, 6;
7, 8, 9];
ifft2(test)
result
5.0000 + 0.0000i -0.5000 - 0.2887i -0.5000 + 0.2887i
-1.5000 - 0.8660i 0.0000 + 0.0000i 0.0000 + 0.0000i
-1.5000 + 0.8660i 0.0000 + 0.0000i 0.0000 + 0.0000i
-OPENCV-
Note: the elements are the same values.
Mat a = Mat::zeros(3, 3, CV_64FC1);
Mat b = Mat::zeros(3, 3, CV_64FC1);
a.at<double>(0,0) = 1;
a.at<double>(0,1) = 2;
a.at<double>(0,2) = 3;
a.at<double>(1,0) = 4;
a.at<double>(1,1) = 5;
a.at<double>(1,2) = 6;
a.at<double>(2,0) = 7;
a.at<double>(2,1) = 8;
a.at<double>(2,2) = 9;
idft(a, b, DFT_SCALE, 0);
result
4.33333 -4.13077 2.79743
-2.10313 -0.103134 -2.83518
-0.563533 2.16852 1.43647
I still haven't found the solution. Even this couldn't give me one.
EDIT: The problem has been solved. I was passing CV_64FC1 to idft() as the input and CV_64FC2 as the output. The two matrices must have the same depth: both the input and the output have to be CV_64FC2. With the flags DFT_COMPLEX_OUTPUT + DFT_SCALE, idft() behaves the same as MATLAB's ifft2.
-SOLVED-
Mat input = Mat::zeros(3, 3, CV_64FC2);  // fill the real channel with the test values
Mat output = Mat::zeros(3, 3, CV_64FC2);
idft(input, output, DFT_COMPLEX_OUTPUT + DFT_SCALE, 0);
I believe you need cv::DFT_COMPLEX_OUTPUT+cv::DFT_SCALE since the input to idft clearly results in a complex-valued matrix.
Also, I think you'll need a 2-channel array for the output (type CV_64FC2), similarly for the input. As with any multi-channel image in OpenCV, you then access elements with the appropriate vector type (e.g. for doubles, .at<cv::Vec2d>(i,j), where the Vec2d stores the real and imaginary components at location i,j).
I think if you use 2 channel input matrices (CV_64FC2) you should use
a.at<Vec2d>(0,0)[0] = 1; // Re - part
a.at<Vec2d>(0,0)[1] = 0; // Im - part
instead of:
a.at<double>(0,0) = 1;
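Putting the two answers together, a minimal self-contained sketch (my reconstruction of the asker's 3-by-3 test) would be the following; it should print the same values as MATLAB's ifft2(test):
#include <opencv2/core/core.hpp>
#include <iostream>

int main()
{
    // Two-channel (complex) input: real parts 1..9, imaginary parts 0.
    cv::Mat input = cv::Mat::zeros(3, 3, CV_64FC2);
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            input.at<cv::Vec2d>(i, j)[0] = 3 * i + j + 1; // Re part
    cv::Mat output;
    cv::idft(input, output, cv::DFT_COMPLEX_OUTPUT + cv::DFT_SCALE);
    std::cout << output << std::endl; // matches ifft2(test) in MATLAB
    return 0;
}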
I want to calculate the angles of the gradients of a depth map and group them into directions (8 sectors).
But my function finds only the first 3 directions.
cv::Mat calcAngles(cv::Mat dimg) // dimg is the depth map
{
    const int directions_num = 8;  // number of directions
    const int degree_grade = 360;
    int range_coeff = 255 / (directions_num + 1); // just for visualization
    cv::Mat x_edge, y_edge, full_edge, angles;
    dimg.copyTo(x_edge);
    dimg.copyTo(y_edge);
    dimg.copyTo(full_edge);
    // compute gradients
    Sobel( dimg, x_edge, CV_8U, 1, 0, 5, 1, 19, 4 );
    Sobel( dimg, y_edge, CV_8U, 0, 1, 5, 1, 19, 4 );
    Sobel( dimg, full_edge, CV_8U, 1, 1, 5, 1, 19, 4 );
    float freq[directions_num + 1]; // to collect the directions' frequencies
    memset(freq, 0, sizeof(freq));
    angles = cv::Mat::zeros(dimg.rows, dimg.cols, CV_8U); // store directions here
    for (int i = 0; i < angles.rows; i++)
    {
        for (int j = 0; j < angles.cols; j++)
        {
            // fastAtan2 returns values from 0 to 360, if I'm not mistaken.
            // I want to group the angles into directions_num sectors; the
            // first "direction" (zero) is reserved for zero values in the
            // depth map (a zero there marks a bad pixel).
            angles.at<uchar>(i, j) = (((int)cv::fastAtan2(y_edge.at<uchar>(i, j), x_edge.at<uchar>(i, j))) / (degree_grade / directions_num) + 1)
                                     * (dimg.at<uchar>(i, j) ? 1 : 0);
            freq[angles.at<uchar>(i, j)] += 1;
        }
    }
    for (int i = 0; i < directions_num + 1; i++)
    {
        printf("%2.2f\t", freq[i]);
    }
    printf("\n");
    angles *= range_coeff; // for visualization
    return angles;
}
Output from one of the frames:
47359.00 15018.00 8199.00 6224.00 0.00 0.00 0.00 0.00 0.00
(the first value is the count of "zero pixels"; the rest are the numbers of gradients falling in each sector, but only 3 of them are non-zero)
Visualization
Is there a way out, or is this result actually OK?
PS: Sorry for my writing mistakes; English is not my native language.
You used the CV_8U type for the Sobel output. That is an unsigned 8-bit integer, so it can only store non-negative values, and all negative gradient responses get clipped. With both inputs non-negative, fastAtan2 can only return angles between 0 and 90 degrees, which is why only the first few sectors are populated. Change the type to CV_16S and use short when accessing the elements:
cv::Sobel(dimg, x_edge, CV_16S, 1, 0, 5, 1, 19, 4);
cv::Sobel(dimg, y_edge, CV_16S, 0, 1, 5, 1, 19, 4);
cv::fastAtan2(y_edge.at<short>(i, j), x_edge.at<short>(i, j))
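Alternatively (a sketch of mine, not part of the original answer), you can let OpenCV compute all the angles in one call with cv::phase, which takes floating-point gradient images:
// Gradients in 32-bit float (a signed float type needs no delta offset),
// then cv::phase computes every angle at once; 'true' requests degrees.
cv::Mat x_edge, y_edge, angle;
cv::Sobel(dimg, x_edge, CV_32F, 1, 0, 5);
cv::Sobel(dimg, y_edge, CV_32F, 0, 1, 5);
cv::phase(x_edge, y_edge, angle, true); // angle is CV_32F, 0..360 degrees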