The results of OpenCV idft() and MATLAB ifft2 do not match - C++

So I'm testing my algorithm in MATLAB and it's done.
Now I'm coding the port to C++ with OpenCV 2.4.5.
The problem is the inverse Fourier transform methods of the two platforms, OpenCV and MATLAB.
So I have tested with a simple matrix.
Here are the test results.
The subject matrix is 3-by-3, 2-D:
1 2 3
4 5 6
7 8 9
-MATLAB-
test = [1, 2, 3;
        4, 5, 6;
        7, 8, 9];
ifft2(test)
result:
5.0000 + 0.0000i -0.5000 - 0.2887i -0.5000 + 0.2887i
-1.5000 - 0.8660i 0.0000 + 0.0000i 0.0000 + 0.0000i
-1.5000 + 0.8660i 0.0000 + 0.0000i 0.0000 + 0.0000i
-OPENCV-
Note: the elements are the same values.
Mat a = Mat::zeros(3, 3, CV_64FC1);
Mat b = Mat::zeros(3, 3, CV_64FC1);
a.at<double>(0,0) = 1;
a.at<double>(0,1) = 2;
a.at<double>(0,2) = 3;
a.at<double>(1,0) = 4;
a.at<double>(1,1) = 5;
a.at<double>(1,2) = 6;
a.at<double>(2,0) = 7;
a.at<double>(2,1) = 8;
a.at<double>(2,2) = 9;
idft(a, b, DFT_SCALE, 0);
result:
4.33333 -4.13077 2.79743
-2.10313 -0.103134 -2.83518
-0.563533 2.16852 1.43647
I still haven't found the solution. Even this didn't give me one.
EDIT: The problem has been solved. I was passing CV_64FC1 to idft() as the input and CV_64FC2 as the output. The two matrices must have the same depth: both input and output have to be CV_64FC2. And the flag combination DFT_COMPLEX_OUTPUT | DFT_SCALE behaves the same as MATLAB's ifft2.
-SOLVED-
Mat input = Mat::zeros(3, 3, CV_64FC2);
Mat output = Mat::zeros(3, 3, CV_64FC2);
idft(input, output, DFT_COMPLEX_OUTPUT | DFT_SCALE, 0);

I believe you need cv::DFT_COMPLEX_OUTPUT | cv::DFT_SCALE, since the input to idft clearly results in a complex-valued matrix.
Also, I think you'll need a 2-channel array for the output (type CV_64FC2), similarly for the input. As with any multi-channel image in OpenCV, you then access elements with the appropriate vector type (e.g. for doubles, .at<cv::Vec2d>(i,j), where the Vec2d stores the real and imaginary components at location i,j).

I think if you use 2-channel input matrices (CV_64FC2) you should use
a.at<Vec2d>(0,0)[0] = 1; // real part
a.at<Vec2d>(0,0)[1] = 0; // imaginary part
instead of:
a.at<double>(0,0) = 1;
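
Putting both answers together, a minimal end-to-end sketch (assuming OpenCV 2.4.x, as in the question) that should reproduce MATLAB's ifft2(test):

#include <iostream>
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    // Build the 3x3 test matrix as a 2-channel (complex) array:
    // channel 0 holds the real part, channel 1 the imaginary part.
    Mat input = Mat::zeros(3, 3, CV_64FC2);
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            input.at<Vec2d>(i, j)[0] = 3 * i + j + 1; // 1..9, Im stays 0

    Mat output;
    idft(input, output, DFT_COMPLEX_OUTPUT | DFT_SCALE);

    // Each element of output is a Vec2d holding (real, imaginary).
    std::cout << output << std::endl;
    return 0;
}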

Related

Scatter/Gather like Numpy in ArrayFire

I want to scatter and gather elements from an array X at specific indices along one axis.
So given an array of indices idx, I want to select the idx(0)th element along the 0th column, the idx(1)th element along the 1st column, etc.
In Numpy, the following statement:
X = np.array([[1, 2, 3], [4, 5, 6]])
print(X[[0, 1, 1], range(3)])
prints [1, 5, 6].
Furthermore, I can do this process in reverse:
Y = np.zeros((2, 3))
Y[[0, 1, 1], range(3)] = [1, 5, 6]
print(Y)
This will print
[[1. 0. 0.]
[0. 5. 6.]]
However, when I try to replicate this behavior in ArrayFire:
float elements[] = {1, 2, 3, 4, 5, 6};
af::array X = af::array(3, 2, elements);
int idx_elements[] = {0, 1, 1};
af::array idx = af::array(3, idx_elements);
af::print("", X(af::span, idx));
I get an array of shape [3, 3, 1, 1] with the elements
1.0000 4.0000 4.0000
2.0000 5.0000 5.0000
3.0000 6.0000 6.0000
So how can I achieve the desired numpy-like behavior for scattering and gathering elements in ArrayFire?
To perform the gather operation on a matrix, I can extract the diagonal of the resulting matrix (sketched below), but that may not work in the multidimensional case, and it doesn't work in the other (scatter) direction.
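A sketch of that diagonal workaround with af::diag, for this 2-D gather case only:

// Sketch: gather X at (i, idx(i)) by taking the main diagonal of the
// Cartesian-product result; it doesn't generalise and doesn't scatter.
af::array gathered = af::diag(X(af::span, idx)); // [1, 5, 6]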
X
[3 2 1 1]
1.0000 4.0000
2.0000 5.0000
3.0000 6.0000
idx
[3 1 1 1]
0
1
1
ArrayFire does a Cartesian product when af::array indices are involved; hence the output.
Because of that, the indices looked up are:
Col\Row     0        1        1     <- columns from the idx array
   0      (0, 0)  (0, 1)  (0, 1)
   1      (1, 0)  (1, 1)  (1, 1)
   2      (2, 0)  (2, 1)  (2, 1)
   ^
   rows from the sequence (af::span)
Thus, the output of X(af::span, idx) is a 3x3 matrix.
To gather elements based on coordinates, you need a different function:
approx2. Note that this function takes its indices as floating-point arrays only.
float idx_elements[] = {0, 1, 1}; // changed the idx to floats
af::array colIdx = af::array(3, idx_elements);
af::array rowIdx = af::iota(3); // same effect as span
af::array out = approx2(X, rowIdx, colIdx);
af_print(out);
// out
// [3 1 1 1]
// 1.0000
// 5.0000
// 6.0000
To set the values at given indices, you have to flatten the array, for the very reason
that array::operator() performs a Cartesian product when an af::array is involved.
af::array A = af::constant(0, 3, 2); // same size as X
af::array B = af::flat(A); // flatten the array, this involves meta data modification only
B(rowIdx + 3 * colIdx) = out; // use row & col indices to fetch linear indices
// rowIdx + 3 * colIdx
// [3 1 1 1]
// 0.0000
// 4.0000
// 5.0000
B = moddims(B, A.dims()); // reset the dimensions to original A dims
af_print(B);
// B
// [3 2 1 1]
// 1.0000 0.0000
// 0.0000 5.0000
// 0.0000 6.0000
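
The same linear-index arithmetic also covers the gather direction without approx2; a sketch, assuming the rowIdx and colIdx arrays above:

// Sketch: gather through linear indices into the flattened array,
// mirroring the scatter above (rowIdx + 3 * colIdx = [0, 4, 5]).
af::array gathered = af::flat(X)(rowIdx + 3 * colIdx);
af_print(gathered); // [3 1 1 1]: 1.0000 5.0000 6.0000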
You can find more details in our indexing tutorial.

OpenCV col-wise standard deviation result vs MATLAB

I've seen linked questions but I can't understand why MATLAB and OpenCV give different results.
MATLAB Code
>> A = [6 4 23 -3; 9 -10 4 11; 2 8 -5 1]
A =
6 4 23 -3
9 -10 4 11
2 8 -5 1
>> Col_step_1 = std(A, 0, 1)
Col_step_1 =
3.5119 9.4516 14.2945 7.2111
>> Col_final = std(Col_step_1)
Col_final =
4.5081
Using OpenCV and this function:
double getColWiseStd(cv::Mat A)
{
    CV_Assert( A.type() == CV_64F );
    cv::Mat meanValue, stdValue, m2, std2;
    cv::Mat colSTD(1, A.cols, CV_64F);
    cv::Mat colMEAN(1, A.cols, CV_64F);
    for (int i = 0; i < A.cols; i++)
    {
        cv::meanStdDev(A.col(i), meanValue, stdValue);
        colSTD.at<double>(i) = stdValue.at<double>(0);
        colMEAN.at<double>(i) = meanValue.at<double>(0);
    }
    std::cout << "\nCOLstd:\n" << colSTD << std::endl;
    cv::meanStdDev(colSTD, m2, std2);
    std::cout << "\nCOLstd_f:\n" << std2 << std::endl;
    return std2.at<double>(0,0);
}
Applied to the same matrix yields the following:
Matrix:
[6, 4, 23, -3;
9, -10, 4, 11;
2, 8, -5, 1]
COLstd:
[2.867441755680876, 7.71722460186015, 11.67142760000773, 5.887840577551898]
COLstd_f:
[3.187726614989861]
I'm pretty sure that the OpenCV and MATLAB std functions are correct, and thus I can't find what I'm doing wrong. Am I missing a type conversion? Something else?
The standard deviation you're calculating in OpenCV is normalised by the number of observations (N), whereas MATLAB normalises by N-1 by default (known as Bessel's correction). Hence the difference.
You can normalise by N in MATLAB by passing 1 as the second input argument:
Col_step_1 = std(A, 1, 1);
Col_final = std(Col_step_1, 1);
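
Conversely, if you want OpenCV to match MATLAB's default, you can apply Bessel's correction to meanStdDev's output yourself. A minimal sketch (the sampleStd helper is illustrative, not an OpenCV API):

#include <cmath>
#include <opencv2/opencv.hpp>

// Sketch: cv::meanStdDev normalises by N; multiplying by
// sqrt(N / (N - 1)) converts to the N-1 (sample) normalisation
// that MATLAB's std uses by default.
double sampleStd(const cv::Mat& values)
{
    cv::Mat mean, stddev;
    cv::meanStdDev(values, mean, stddev);
    double N = static_cast<double>(values.total());
    return stddev.at<double>(0) * std::sqrt(N / (N - 1.0));
}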

Improper Translation Matrix from SVD of Essential Matrix for 3D reconstruction using 2 Images

I am trying to build a 3D model from 2 images taken with the same camera, using OpenCV with C++. I followed this method, but I am still not able to find the mistake in the R and T computation.
Image 1: background removed to eliminate mismatches.
Image 2: translated only in the X direction w.r.t. Image 1, background removed to eliminate mismatches.
I found the intrinsic camera matrix (K) using the MATLAB toolbox:
K = [ 3058.8     0    -500
         0    3057.3   488
         0       0       1 ]
All matched keypoints (found using SIFT and brute-force matching, with mismatches eliminated) were aligned with respect to the image centre as follows:
obj_points.push_back(Point2f(keypoints1[symMatches[i].queryIdx].pt.x - image1.cols / 2, -1 * (keypoints1[symMatches[i].queryIdx].pt.y - image1.rows / 2)));
scene_points.push_back(Point2f(keypoints2[symMatches[i].trainIdx].pt.x - image1.cols / 2, -1 * (keypoints2[symMatches[i].trainIdx].pt.y - image1.rows / 2)));
From the point correspondences, I found the fundamental matrix using RANSAC in OpenCV:
Fundamental Matrix:
[ 0        0       -0.0014
  0        0        0.0028
  0.00149 -0.00572   1     ]
The essential matrix was obtained using:
E = (camera_Intrinsic.t())*f*camera_Intrinsic;
E obtained:
[  0.0094  36.290    1.507
 -37.2245  -0.6073  14.71
  -1.3578 -23.545   -0.442 ]
SVD of E:
E.convertTo(E, CV_32F);
Mat W = (Mat_<float>(3, 3) << 0, -1, 0, 1, 0, 0, 0, 0, 1);
Mat Z = (Mat_<float>(3, 3) << 0, 1, 0, -1, 0, 0, 0, 0, 0);
SVD decomp = SVD(E);
Mat U = decomp.u;
Mat Lambda = decomp.w;
Mat Vt = decomp.vt;
New essential matrix, enforcing the (1, 1, 0) singular-value structure required by the epipolar constraint:
Mat diag = (Mat_<float>(3, 3) << 1, 0, 0, 0, 1, 0, 0, 0, 0);
Mat new_E = U*diag*Vt;
SVD new_decomp = SVD(new_E);
Mat new_U = new_decomp.u;
Mat new_Lambda = new_decomp.w;
Mat new_Vt = new_decomp.vt;
Rotation from SVD:
Mat R1 = new_U*W*new_Vt;
Mat R2 = new_U*W.t()*new_Vt;
Translation from SVD:
Mat T1 = (Mat_<float>(3, 1) << new_U.at<float>(0, 2), new_U.at<float>(1, 2), new_U.at<float>(2, 2));
Mat T2 = -1 * T1;
The R matrices I got were:
R1:
[ -0.58 -0.042 0.813
-0.020 -0.9975 -0.066
0.81 -0.054 0.578]
R2:
[ 0.98 0.0002 0.81
-0.02 -0.99 -0.066
0.81 -0.054 0.57 ]
Translation vectors:
T1:
[0.543
-0.030
0.838]
T2:
[-0.543
0.03
-0.83]
Please clarify wherever there is a mistake.
These 4 combinations of P2 = [R|T] with P1 = [I|0] all give incorrect triangulated models.
Also, I think the T obtained is incorrect, as there was supposed to be only an x shift and no z shift.
When I tried with image1 = image2, I got T = [0, 0, 1]. What is the meaning of Tz = 1, when there is no z shift at all since both images are the same?
And should I be aligning my keypoint coordinates with the image centre, or with the principal point obtained from calibration?
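
For reference, the usual way to choose among the four candidate (R, T) combinations is a cheirality check: triangulate the matches with each candidate and keep the pair for which points land in front of both cameras. A minimal sketch of that check (a hypothetical helper, not from the question; it assumes CV_64F matrices, so the CV_32F R and T above would need convertTo(CV_64F) first):

#include <opencv2/opencv.hpp>
#include <vector>

// Hypothetical helper: triangulate one correspondence with P1 = K[I|0]
// and P2 = K[R|T], then test for positive depth in both cameras.
static bool inFrontOfBothCameras(const cv::Mat& K, const cv::Mat& R,
                                 const cv::Mat& T, const cv::Point2f& p1,
                                 const cv::Point2f& p2)
{
    cv::Mat P1 = K * cv::Mat::eye(3, 4, CV_64F);   // P1 = K[I|0]
    cv::Mat Rt;
    cv::hconcat(R, T, Rt);                         // [R|T]
    cv::Mat P2 = K * Rt;                           // P2 = K[R|T]

    std::vector<cv::Point2f> v1(1, p1), v2(1, p2);
    cv::Mat pts4D;
    cv::triangulatePoints(P1, P2, v1, v2, pts4D);  // 4x1 homogeneous point

    cv::Mat X;
    pts4D.col(0).convertTo(X, CV_64F);
    X /= X.at<double>(3);                          // dehomogenise
    double z1 = X.at<double>(2);                   // depth in camera 1
    cv::Mat Xc2 = R * X.rowRange(0, 3) + T;        // same point in camera 2
    return z1 > 0 && Xc2.at<double>(2) > 0;
}

Running this over the inliers for each of {R1, R2} x {T1, T2} and keeping the combination that passes for the most points is the standard selection rule; OpenCV 3.0+ wraps the whole procedure in cv::recoverPose. As for Tz = 1 on identical images: T from the SVD (a column of the orthogonal matrix new_U) is a unit vector recovered only up to scale, and when the true translation is zero, E is near zero and the recovered direction is essentially arbitrary.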

Matrix masking operation in OpenCV(C++) and in Matlab

I would like to perform the following operation (currently written in MATLAB) using cv::Mat variables.
I have matrix mask:
mask =
1 0 0
1 0 1
then matrix M:
M =
1
2
3
4
5
6
3
and samples = M(mask,:)
samples =
1
2
6
My question is: how can I perform the same operation, M(mask,:), with OpenCV?
To my knowledge, the closest function to this in OpenCV is copyTo, which takes a matrix and a mask as inputs, but that function keeps the original structure of your matrix; you can test it.
I think there is no problem using a for loop in OpenCV (in C++) because it's fast. I propose the loop below:
Mat M = (Mat_<uchar>(2,3) << 1, 2, 3, 4, 5, 6);    // create M
cout << M << endl;
Mat mask = (Mat_<bool>(2,3) << 1, 0, 0, 1, 0, 1);  // create mask
cout << mask << endl;
Mat samples;
for (int i = 0; i < (int)M.total(); i++)
{
    if (mask.at<uchar>(i))
        samples.push_back(M.at<uchar>(i));
}
cout << samples << endl;
The code above produces the following output:
[ 1, 2, 3;
4, 5, 6]
[ 1, 0, 0;
1, 0, 1]
[ 1;
4;
6]
Using copyTo instead, the output will look like this:
[1, 0, 0;
 4, 0, 6]
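
A minimal sketch of that copyTo variant, using the M and mask defined above:

// Sketch: copyTo with a mask keeps the original layout and leaves
// unmasked positions at their initial value (zeros here), which is
// why it does not reproduce MATLAB's M(mask,:).
Mat maskedCopy = Mat::zeros(M.size(), M.type());
M.copyTo(maskedCopy, mask);
cout << maskedCopy << endl; // [1, 0, 0; 4, 0, 6]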

C++ OpenCV linear algebra on multiple images?

I am very new to C++ and OpenCV but more familiar with MATLAB. I have a task that I need to move to C++ for faster processing, so I would like to ask for your suggestions on an image processing problem. I have 10 images in a folder; I was able to read them all using dirent.h like in this, and I extract each frame by assigning frames[count] = rawImage in a while loop:
int count = 0;
std::vector<cv::Mat> frames;
frames.resize(10);
while ((_dirent = readdir(directory)) != NULL)
{
    std::string fileName = inputDirectory + "\\" + std::string(_dirent->d_name);
    cv::Mat rawImage = cv::imread(fileName.c_str(), CV_LOAD_IMAGE_GRAYSCALE);
    frames[count] = rawImage; // insert rawImage into frames (the original images)
    count++;
}
Now I want to access each frame and do a MATLAB-like calculation to get another matrix A, such that A = frames(:,:,1) + 2*frames(:,:,2). How can I do that?
Since frames is a std::vector<cv::Mat>, you should be able to access each Mat this way:
// suppose you want the nth matrix
cv::Mat frame_n = frames[n];
Now, if you want to do the calculation you described on the first two Mats, then:
cv::Mat A = frames[0] + 2 * frames[1];
Example:
// mat1 = [[1 1 1]
// [2 2 2]
// [3 3 3]]
cv::Mat mat1 = (cv::Mat_<double>(3, 3) << 1, 1, 1, 2, 2, 2, 3, 3, 3);
cv::Mat mat2 = mat1 * 2; // matrix-by-scalar multiplication
// just to look like your case
std::vector<cv::Mat> frames;
frames.push_back(mat1);
frames.push_back(mat2);
cv::Mat A = frames[0] + 2 * frames[1]; // your calculation works
// A = [[ 5 5 5]
// [10 10 10]
// [15 15 15]]
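
One caveat worth adding (an observation, not part of the answer above): with CV_LOAD_IMAGE_GRAYSCALE the frames are CV_8U, and 8-bit arithmetic saturates at 255, so frames[0] + 2 * frames[1] can silently clip. A sketch that accumulates a weighted sum over all 10 frames in a wider type first:

// Sketch: weighted sum over every frame, converted to CV_64F so the
// 8-bit inputs don't saturate. The weights (i + 1) are just an example.
cv::Mat acc = cv::Mat::zeros(frames[0].size(), CV_64F);
for (size_t i = 0; i < frames.size(); ++i)
{
    cv::Mat f;
    frames[i].convertTo(f, CV_64F);
    acc += static_cast<double>(i + 1) * f;
}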
You can always read the list of acceptable expressions.