fftshift C++ implementation for OpenCV - c++

I have already looked at this question:
fftshift/ifftshift C/C++ source code
I'm trying to implement fftshift from MATLAB.
This is the code from the MATLAB function for a 1D array:
numDims = ndims(x);
idx = cell(1, numDims);
for k = 1:numDims
m = size(x, k);
p = ceil(m/2);
idx{k} = [p+1:m 1:p];
end
y = x(idx{:});
My C++/OpenCV code is below. What fftshift basically does is swap the values around a certain pivot.
I can't quite understand how OpenCV lays out a matrix of complex numbers; the documentation at http://docs.opencv.org/modules/core/doc/operations_on_arrays.html#dft calls the format CCS (complex-conjugate-symmetrical).
So I thought it would be easier to split the complex numbers into real and imaginary planes, swap those, and then merge back into one matrix.
std::vector<float> distanceF(f.size());
//ff = fftshift(ff);
cv::Mat ff;
cv::dft(distanceF, ff, cv::DFT_COMPLEX_OUTPUT);
//Make place for both the complex and the real values
cv::Mat planes[] = {cv::Mat::zeros(distanceF.size(),1, CV_32F), cv::Mat::zeros(distanceF.size(),1, CV_32F)};
cv::split(ff, planes); // planes[0] = Re(DFT(I)), planes[1] = Im(DFT(I))
int numDims = ff.dims;
for (int i = 0; i < numDims; i++)
{
int m = ff.rows;
int p = ceil(m/2);
// ... this is where I intended to swap the values around the pivot p, but I got stuck
}
My problem is that, because the input to the DFT is a vector<float>, I can't seem to create the planes Mats needed to split the complex numbers.
Can you think of a better way to swap the values inside the cv::Mat data structure?

OK, this thread may be out of date in the meantime, but perhaps it helps other users. Take a look at the samples:
opencv/samples/cpp/dft.cpp (lines 66-80)
int cx = mag.cols/2;
int cy = mag.rows/2;
// rearrange the quadrants of Fourier image
// so that the origin is at the image center
Mat tmp;
Mat q0(mag, Rect(0, 0, cx, cy));
Mat q1(mag, Rect(cx, 0, cx, cy));
Mat q2(mag, Rect(0, cy, cx, cy));
Mat q3(mag, Rect(cx, cy, cx, cy));
q0.copyTo(tmp);
q3.copyTo(q0);
tmp.copyTo(q3);
q1.copyTo(tmp);
q2.copyTo(q1);
tmp.copyTo(q2);
I think that's a short and clean way for different dimensions.
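For context, mag in that sample is the log-scaled magnitude image. A minimal sketch of how it is typically prepared before the quadrant swap, following the same dft.cpp sample (complexI is assumed to hold the 2-channel DFT output, as in that sample):
Mat planes[2], mag;
split(complexI, planes); // planes[0] = Re, planes[1] = Im
magnitude(planes[0], planes[1], mag); // per-pixel magnitude
mag += Scalar::all(1);
log(mag, mag); // switch to log scale for display
mag = mag(Rect(0, 0, mag.cols & -2, mag.rows & -2)); // force even size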

I know this is quite an old thread, but I found it today while looking for a solution to shift the FFT result. Maybe the little function I wrote with the help of this site and other sources could be helpful for future readers who end up here too.
bool FftShift(const Mat& src, Mat& dst)
{
if(src.empty()) return true; // error: nothing to shift
const int h=src.rows, w=src.cols; // height and width of src-image
const int qh=h>>1, qw=w>>1; // height and width of the quadrants
Mat qTL(src, Rect( 0, 0, qw, qh)); // define the quadrants in respect to
Mat qTR(src, Rect(w-qw, 0, qw, qh)); // the outer dimensions of the matrix.
Mat qBL(src, Rect( 0, h-qh, qw, qh)); // thus, with odd sizes, the center
Mat qBR(src, Rect(w-qw, h-qh, qw, qh)); // line(s) get(s) omitted.
Mat tmp;
hconcat(qBR, qBL, dst); // build destination matrix with switched
hconcat(qTR, qTL, tmp); // quadrants 0 & 2 and 1 & 3 from source
vconcat(dst, tmp, dst);
return false;
}

How about using adjustROI and copyTo instead of .at()? It would certainly be more efficient.
Something along the lines of (for your 1D case):
Mat shifted(ff.size(),ff.type());
int pivot = (ff.cols + 1) / 2; // ceil(cols/2), matching the MATLAB code
ff(Range::all(),Range(pivot, ff.cols)).copyTo(shifted(Range::all(),Range(0, ff.cols - pivot)));
ff(Range::all(),Range(0, pivot)).copyTo(shifted(Range::all(),Range(ff.cols - pivot, ff.cols)));
For the 2D case, two more copies are needed and the row ranges modified as well; see the sketch below.
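A rough sketch of that 2D extension (my addition, not the answerer's exact code), using the same ceil(size/2) pivots:
Mat shifted(ff.size(), ff.type());
int px = (ff.cols + 1) / 2, py = (ff.rows + 1) / 2;
ff(Range(py, ff.rows), Range(px, ff.cols)).copyTo(shifted(Range(0, ff.rows - py), Range(0, ff.cols - px))); // bottom-right -> top-left
ff(Range(0, py), Range(0, px)).copyTo(shifted(Range(ff.rows - py, ff.rows), Range(ff.cols - px, ff.cols))); // top-left -> bottom-right
ff(Range(py, ff.rows), Range(0, px)).copyTo(shifted(Range(0, ff.rows - py), Range(ff.cols - px, ff.cols))); // bottom-left -> top-right
ff(Range(0, py), Range(px, ff.cols)).copyTo(shifted(Range(ff.rows - py, ff.rows), Range(0, ff.cols - px))); // top-right -> bottom-left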

I have been implementing it myself based on this post. I used Fabian's implementation, which works fine, but there is a problem when there is an odd number of rows or columns: the shift is then not correct.
You then need to pad your matrix and afterwards get rid of the extra row or column.
bool flag_row = false;
bool flag_col = false;
if( (inputMatrix.rows % 2)>0)
{
cv::Mat row = cv::Mat::zeros(1,inputMatrix.cols, CV_64F);
inputMatrix.push_back(row);
flag_row =true;
}
if( (inputMatrix.cols % 2)>0)
{
cv::Mat col = cv::Mat::zeros(1,inputMatrix.rows, CV_64F);
cv::Mat tmp;
inputMatrix.copyTo(tmp);
tmp=tmp.t();
tmp.push_back(col);
tmp=tmp.t();
tmp.copyTo(inputMatrix);
flag_col = true;
}
int cx = inputMatrix.cols/2;
int cy = inputMatrix.rows/2;
cv::Mat outputMatrix;
inputMatrix.copyTo(outputMatrix);
// rearrange the quadrants of Fourier image
// so that the origin is at the image center
cv::Mat tmp;
cv::Mat q0(outputMatrix, cv::Rect(0, 0, cx, cy));
cv::Mat q1(outputMatrix, cv::Rect(cx, 0, cx, cy));
cv::Mat q2(outputMatrix, cv::Rect(0, cy, cx, cy));
cv::Mat q3(outputMatrix, cv::Rect(cx, cy, cx, cy));
q0.copyTo(tmp);
q3.copyTo(q0);
tmp.copyTo(q3);
q1.copyTo(tmp);
q2.copyTo(q1);
tmp.copyTo(q2);
int row = inputMatrix.rows;
int col = inputMatrix.cols;
if(flag_row)
{
outputMatrix = Tools::removerow(outputMatrix,row/2-1);
}
if(flag_col)
{
outputMatrix = Tools::removecol(outputMatrix,col/2-1);
}
return outputMatrix;
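Tools::removerow and Tools::removecol are the answerer's own helpers. A minimal sketch of what they might look like (hypothetical, not the original implementation):
cv::Mat removerow(const cv::Mat& m, int row) // hypothetical helper
{
cv::Mat out;
if (row > 0) out.push_back(m.rowRange(0, row)); // rows above the removed one
if (row + 1 < m.rows) out.push_back(m.rowRange(row + 1, m.rows)); // rows below
return out;
}
cv::Mat removecol(const cv::Mat& m, int col) // hypothetical helper
{
cv::Mat t = removerow(m.t(), col); // reuse removerow on the transpose
return t.t();
}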

Here is what I do (quick and dirty, can be optimized):
// taken from the opencv DFT example (see opencv/samples/cpp/dft.cpp within opencv v440 sourcecode package)
cv::Mat fftshift(const cv::Mat& mat){
// create copy to not mess up the original matrix (ret is only a "window" over the provided matrix)
cv::Mat cpy;
mat.copyTo(cpy);
// crop the spectrum, if it has an odd number of rows or columns
cv::Mat ret = cpy(cv::Rect(0, 0, cpy.cols & -2, cpy.rows & -2));
// rearrange the quadrants of Fourier image so that the origin is at the image center
int cx = ret.cols/2;
int cy = ret.rows/2;
cv::Mat q0(ret, cv::Rect(0, 0, cx, cy)); // Top-Left - Create a ROI per quadrant
cv::Mat q1(ret, cv::Rect(cx, 0, cx, cy)); // Top-Right
cv::Mat q2(ret, cv::Rect(0, cy, cx, cy)); // Bottom-Left
cv::Mat q3(ret, cv::Rect(cx, cy, cx, cy)); // Bottom-Right
cv::Mat tmp; // swap quadrants (Top-Left with Bottom-Right)
q0.copyTo(tmp);
q3.copyTo(q0);
tmp.copyTo(q3);
q1.copyTo(tmp); // swap quadrant (Top-Right with Bottom-Left)
q2.copyTo(q1);
tmp.copyTo(q2);
return ret;
}
// reverse the swapping of fftshift. (-> reverse the quadrant swapping)
cv::Mat ifftshift(const cv::Mat& mat){
// create copy to not mess up the original matrix (ret is only a "window" over the provided matrix)
cv::Mat cpy;
mat.copyTo(cpy);
// crop the spectrum, if it has an odd number of rows or columns
cv::Mat ret = cpy(cv::Rect(0, 0, cpy.cols & -2, cpy.rows & -2));
// rearrange the quadrants of Fourier image so that the origin is at the image center
int cx = ret.cols/2;
int cy = ret.rows/2;
cv::Mat q0(ret, cv::Rect(0, 0, cx, cy)); // Top-Left - Create a ROI per quadrant
cv::Mat q1(ret, cv::Rect(cx, 0, cx, cy)); // Top-Right
cv::Mat q2(ret, cv::Rect(0, cy, cx, cy)); // Bottom-Left
cv::Mat q3(ret, cv::Rect(cx, cy, cx, cy)); // Bottom-Right
cv::Mat tmp; // swap quadrants (Bottom-Right with Top-Left)
q3.copyTo(tmp);
q0.copyTo(q3);
tmp.copyTo(q0);
q2.copyTo(tmp); // swap quadrant (Bottom-Left with Top-Right)
q1.copyTo(q2);
tmp.copyTo(q1);
return ret;
}

None of the implementations in the earlier answers work correctly for odd-sized images.
fftshift moves the origin from the top-left to the center (at size/2).
ifftshift moves the origin from the center to the top-left.
These two actions are identical for even sizes, but differ for odd sizes.
For an odd size, fftshift swaps the first (size+1)/2 pixels with the remaining size/2 pixels, which moves the pixel at index 0 to size/2. ifftshift does the reverse, swapping the first size/2 pixels with the remaining (size+1)/2 pixels. This code is the most simple implementation of both these actions that I can come up with. (Note that (size+1)/2 == size/2 if size is even.)
bool forward = true; // true for fftshift, false for ifftshift
cv::Mat img = ...; // the image to process
// input sizes
int sx = img.cols;
int sy = img.rows;
// size of top-left quadrant
int cx = forward ? (sx + 1) / 2 : sx / 2;
int cy = forward ? (sy + 1) / 2 : sy / 2;
// split the quadrants
cv::Mat top_left(img, cv::Rect(0, 0, cx, cy));
cv::Mat top_right(img, cv::Rect(cx, 0, sx - cx, cy));
cv::Mat bottom_left(img, cv::Rect(0, cy, cx, sy - cy));
cv::Mat bottom_right(img, cv::Rect(cx, cy, sx - cx, sy - cy));
// merge the quadrants in right order
cv::Mat tmp1, tmp2;
cv::hconcat(bottom_right, bottom_left, tmp1);
cv::hconcat(top_right, top_left, tmp2);
cv::vconcat(tmp1, tmp2, img);
This code makes a copy of the full image twice, but it is easy and quick to implement. A more performant implementation would swap values in-place. This answer has correct code to do so on a single line; it would have to be applied to each column and each row of the image.
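For example, here is a minimal sketch (my own, not taken from the linked answer) of an in-place 1D shift using std::rotate; applying it to every row and then to every column gives the full 2D shift:
#include <algorithm>
void FftShiftRowInPlace(cv::Mat& row, bool forward) // hypothetical helper for a single CV_32F row
{
CV_Assert(row.rows == 1 && row.type() == CV_32F);
float* p = row.ptr<float>(0);
int n = row.cols;
int half = forward ? (n + 1) / 2 : n / 2; // fftshift vs ifftshift pivot
std::rotate(p, p + half, p + n); // left-rotate so index `half` lands at index 0
}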

This is for future reference:
It has been tested and is bit-accurate for the 1D case.
cv::Mat ff;
cv::dft(distanceF, ff, cv::DFT_ROWS|cv::DFT_COMPLEX_OUTPUT);
//Make place for both the complex and the real values
cv::Mat planes[] = {cv::Mat::zeros(distanceF.size(),1, CV_32F), cv::Mat::zeros(distanceF.size(),1, CV_32F)};
cv::split(ff, planes); // planes[0] = Re(DFT(I)), planes[1] = Im(DFT(I))
int m = planes[0].cols;
int pivot = ceil(m/2); // note: m/2 is integer division, so for odd m this is floor(m/2), not MATLAB's ceil(m/2)
//duplicate FFT results with Complex conjugate in order to get exact matlab results
for (int i = pivot + 1, k = pivot; i < planes[1].cols; i++, k--)
{
planes[1].at<float>(i) = planes[1].at<float>(k) * -1;
planes[0].at<float>(i) = planes[0].at<float>(k);
}
//TODO maybe we need to see what happens for even and odd sizes ??
float im = planes[1].at<float>(0);
float re = planes[0].at<float>(0);
for (int i = 0; i < pivot; i++)
{
//IM
planes[1].at<float>(i) = planes[1].at<float>(pivot + i +1);
planes[1].at<float>(pivot +i +1) = planes[1].at<float>(i +1);
//Real
planes[0].at<float>(i) = planes[0].at<float>(pivot + i +1);
planes[0].at<float>(pivot +i +1) = planes[0].at<float>(i +1);
}
planes[1].at<float>(pivot) = im;
planes[0].at<float>(pivot) = re;

In MATLAB's implementation, the core is these two lines:
idx{k} = [p+1:m 1:p];
y = x(idx{:});
The first builds the shifted index order relative to the original; the second assigns the output array according to that index order. Therefore, if you want to re-write MATLAB's implementation without in-place swapping, you need to allocate a new array and copy elements according to the index mapping, as sketched below.
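A minimal sketch of that approach in OpenCV terms (a hypothetical 1D helper of my own):
cv::Mat fftshift1D(const cv::Mat& x) // x is a single row; hypothetical helper
{
int m = x.cols;
int p = (m + 1) / 2; // MATLAB's ceil(m/2)
cv::Mat y(x.size(), x.type());
for (int i = 0; i < m; ++i)
x.col((i + p) % m).copyTo(y.col(i)); // y(i) = x(idx(i)), idx = [p+1:m 1:p]
return y;
}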


How to use fourier transform pair on image using opencv correctly?

Firstly, I utilize the putText function to create a zero-filled image:
std::string text("Mengranlin");
int rows = 222;
int cols = 112;
double textSize = 1.5;
int textWidth = 2;
int num = 255;
cv::Mat zero_filled_img = cv::Mat::zeros(cols, rows, CV_32F);
putText(zero_filled_img, text,
cv::Point(zero_filled_img.cols * 0.5,
zero_filled_img.rows * 0.3),
cv::FONT_HERSHEY_PLAIN, textSize, cv::Scalar(num, num, num), textWidth);
cv::Mat zero_filled_img2;
flip(zero_filled_img, zero_filled_img2, -1);
zero_filled_img += zero_filled_img2;
transpose(zero_filled_img, zero_filled_img);
flip(zero_filled_img, zero_filled_img, 1);
Here is the image:
Secondly, I apply the inverse Fourier transform to the image:
int m = getOptimalDFTSize(rows);
int n = getOptimalDFTSize(cols);
cv::Mat dst;
copyMakeBorder(zero_filled_img, dst, 0, m - rows, 0, n - cols, BORDER_CONSTANT, Scalar::all(0));
cv::Mat planes[] = { cv::Mat_<float>(dst),
cv::Mat::zeros(dst.size(), CV_32F) };
cv::Mat complex;
cv::merge(planes,2, complex);
idft(complex, complex);
split(complex, planes);
magnitude(planes[0], planes[1], planes[0]);
Thirdly, I apply the Fourier transform to the result of the inverse Fourier transform:
cv::merge(planes2, 2, complex);
dft(complex, complex);
split(complex, planes2);
magnitude(planes2[0], planes2[1], planes2[0]);
cv::Mat result = planes2[0];
Finally, I save the image:
result += 1;
log(result, result);
result = result(cv::Rect(0, 0, cols, rows));
int cx = result.cols / 2;
int cy = result.rows / 2;
cv::Mat temp;
cv::Mat q0(result, cv::Rect(0, 0, cx, cy));
cv::Mat q1(result, cv::Rect(cx, 0, cx, cy));
cv::Mat q2(result, cv::Rect(0, cy, cx, cy));
cv::Mat q3(result, cv::Rect(cx, cy, cx, cy));
q0.copyTo(temp);
q3.copyTo(q0);
temp.copyTo(q3);
q1.copyTo(temp);
q2.copyTo(q1);
temp.copyTo(q2);
imwrite("./image/log_result.jpg", result);
Here is the image:
Although the "Mengnalin" can be found from the image, that is very weak. And then, I save the normalization of the result, but I found nothing:
normalize(result, result);
imwrite("./image/normalize_result.jpg", result);
result *= 255;
imwrite("./image/normalize_result255.jpg", result);
Here is the normalization image:
Here is the normalization image x 255:
The same experiment succeeds in MATLAB. I want to know where the error is.
Below is the complete code that I ran:
std::string text("Mengranlin");
int rows = 222;
int cols = 112;
double textSize = 1.5;
int textWidth = 2;
int num = 255;
cv::Mat zero_filled_img = cv::Mat::zeros(cols, rows, CV_32F);
putText(zero_filled_img, text, cv::Point(zero_filled_img.cols * 0.5, zero_filled_img.rows * 0.3),
cv::FONT_HERSHEY_PLAIN, textSize, cv::Scalar(num, num, num), textWidth);
cv::Mat zero_filled_img2;
flip(zero_filled_img, zero_filled_img2, -1);
zero_filled_img += zero_filled_img2;
transpose(zero_filled_img, zero_filled_img);
flip(zero_filled_img, zero_filled_img, 1);
cv::Mat de = cv::Mat_<uchar>(zero_filled_img);
cv::imwrite("./image/zero_filled_img.jpg", zero_filled_img);
//idft
int m = getOptimalDFTSize(rows);
int n = getOptimalDFTSize(cols);
cv::Mat dst;
copyMakeBorder(zero_filled_img, dst, 0, m - rows, 0, n - cols, BORDER_CONSTANT, Scalar::all(0));
cv::Mat planes[] = { cv::Mat_<float>(dst), cv::Mat::zeros(dst.size(), CV_32F) };
cv::Mat complex;
cv::merge(planes,2, complex);
idft(complex, complex);
split(complex, planes);
magnitude(planes[0], planes[1], planes[0]);
cv::Mat freq = planes[0];
freq = freq(cv::Rect(0, 0, cols, rows));
normalize(freq, freq, 0, 1, CV_MINMAX);
//dft
cv::Mat planes2[] = {planes[0], planes[1]};
cv::merge(planes2, 2, complex);
dft(complex, complex);
split(complex, planes2);
magnitude(planes2[0], planes2[1], planes2[0]);
cv::Mat result = planes2[0];
//float min_v, max_v; min_max(result, min_v, max_v);
imwrite("./image/img.jpg", result);
result += 1;
imwrite("./image/img_plus_zero.jpg", result);
log(result, result);
result = result(cv::Rect(0, 0, cols, rows));
//float min_v1, max_v1; min_max(result, min_v1, max_v1);
imwrite("./image/log_img.jpg", result);
int cx = result.cols / 2;
int cy = result.rows / 2;
cv::Mat temp;
cv::Mat q0(result, cv::Rect(0, 0, cx, cy));
cv::Mat q1(result, cv::Rect(cx, 0, cx, cy));
cv::Mat q2(result, cv::Rect(0, cy, cx, cy));
cv::Mat q3(result, cv::Rect(cx, cy, cx, cy));
q0.copyTo(temp);
q3.copyTo(q0);
temp.copyTo(q3);
q1.copyTo(temp);
q2.copyTo(q1);
temp.copyTo(q2);
normalize(result, result);
imwrite("./image/normalize_img.jpg", result);
result *= 255;
imwrite("./image/normalize_img255.jpg", result);
Your code splits the output of idft into planes[0] (real component) and planes[1] (imaginary component), then computes the magnitude and writes it to planes[0]:
idft(complex, complex);
split(complex, planes);
magnitude(planes[0], planes[1], planes[0]);
Next, you merge planes[0] and planes[1] as the real and imaginary parts of a complex-valued image, and compute the dft:
cv::Mat planes2[] = {planes[0], planes[1]};
cv::merge(planes2, 2, complex);
dft(complex, complex);
But because planes[0] doesn't contain the real part of the output of idft any more, but its magnitude, dft will not perform the inverse calculation that idft did.
You can fix this easily. Instead of:
magnitude(planes[0], planes[1], planes[0]);
cv::Mat freq = planes[0];
Do:
cv::Mat freq;
magnitude(planes[0], planes[1], freq);
You can significantly simplify your code. Try the following code (zero_filled_img is the input image computed earlier):
// DFT
cv::Mat complex;
dft(zero_filled_img, complex, DFT_COMPLEX_OUTPUT);
// IDFT
cv::Mat result;
idft(complex, result, DFT_REAL_OUTPUT);
imwrite("./image/img.jpg", result);
result should be equal to zero_filled_img within numerical accuracy.
The DFT_COMPLEX_OUTPUT flag forces the creation of a full, complex-valued DFT, even though the input array is real-valued. Likewise, DFT_REAL_OUTPUT causes any imaginary output components to be dropped; this is equivalent to computing the complex IDFT and then taking only the real part.
I have reversed the DFT and IDFT to be conceptually correct (though it is perfectly fine to reverse these two operations). DFT_COMPLEX_OUTPUT only works with the forward transform and DFT_REAL_OUTPUT only works with the inverse transform, so the code above will not work (I believe) if you use these two operations in the order you attempted in your own code.
The code above also doesn't bother with padding to a favourable size. Doing so might reduce computation time, but for such a small image it will not matter at all.
Note also that taking the magnitude of the output of the inverse transform (the second transform you apply) is OK in your case, but not in general. This second transform is expected to produce a real-valued output (since the input to the first one was real-valued). Any imaginary component should be 0 within numerical precision. Thus, the real component of the complex output should be kept. If you take the magnitude, you obtain the absolute value of the real component, meaning that any negative values in the original input will become positive values in the final output. In the case of the example images, all pixels are non-negative, but this is not necessarily true. Do the correct thing and take the real component rather than the magnitude.
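As a minimal sketch of that last point, reusing the question's variable names: after the second (forward) transform, keep the real plane instead of computing the magnitude:
split(complex, planes2);
cv::Mat result = planes2[0]; // real component; planes2[1] should be ~0 within numerical precision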

Performing convolution in frequency domain manually but getting wrong output image in CPP/Opencv

I followed the following steps:
1. Calculated the dft of the image
2. Calculated the dft of the kernel (but first padded it to the size of the image)
3. Multiplied the real and imaginary parts of both dfts individually
4. Calculated the inverse dft
I tried to display the images at each intermediate step, but the final image comes out almost black except at the corners.
Image fourier transform output after multiplication and its inverse dft output
input image
#include <iostream>
#include <stdlib.h>
#include <opencv2/opencv.hpp>
#include <stdio.h>
int r=100;
#define SIGMA_CLIP 6.0f
using namespace cv;
using namespace std;
void updateResult(Mat complex)
{
Mat work;
idft(complex, work);
Mat planes[] = {Mat::zeros(complex.size(), CV_32F), Mat::zeros(complex.size(), CV_32F)};
split(work, planes); // planes[0] = Re(DFT(I)), planes[1] = Im(DFT(I))
magnitude(planes[0], planes[1], work); // === sqrt(Re(DFT(I))^2 + Im(DFT(I))^2)
normalize(work, work, 0, 1, NORM_MINMAX);
imshow("result", work);
}
void shift(Mat magI) {
// crop if it has an odd number of rows or columns
magI = magI(Rect(0, 0, magI.cols & -2, magI.rows & -2));
int cx = magI.cols/2;
int cy = magI.rows/2;
Mat q0(magI, Rect(0, 0, cx, cy)); // Top-Left - Create a ROI per quadrant
Mat q1(magI, Rect(cx, 0, cx, cy)); // Top-Right
Mat q2(magI, Rect(0, cy, cx, cy)); // Bottom-Left
Mat q3(magI, Rect(cx, cy, cx, cy)); // Bottom-Right
Mat tmp; // swap quadrants (Top-Left with Bottom-Right)
q0.copyTo(tmp);
q3.copyTo(q0);
tmp.copyTo(q3);
q1.copyTo(tmp); // swap quadrant (Top-Right with Bottom-Left)
q2.copyTo(q1);
tmp.copyTo(q2);
}
Mat updateMag(Mat complex )
{
Mat magI;
Mat planes[] = {Mat::zeros(complex.size(), CV_32F), Mat::zeros(complex.size(), CV_32F)};
split(complex, planes); // planes[0] = Re(DFT(I)), planes[1] = Im(DFT(I))
magnitude(planes[0], planes[1], magI); // sqrt(Re(DFT(I))^2 + Im(DFT(I))^2)
// switch to logarithmic scale: log(1 + magnitude)
magI += Scalar::all(1);
log(magI, magI);
shift(magI);
normalize(magI, magI, 1, 0, NORM_INF); // Transform the matrix with float values into a
return magI; // viewable image form (float between values 0 and 1).
//imshow("spectrum", magI);
}
Mat createGausFilterMask(Size imsize, int radius) {
// call openCV gaussian kernel generator
double sigma = (r/SIGMA_CLIP+0.5f);
Mat kernelX = getGaussianKernel(2*radius+1, sigma, CV_32F);
Mat kernelY = getGaussianKernel(2*radius+1, sigma, CV_32F);
// create 2D Gaussian
Mat kernel = kernelX * kernelY.t();
int w = imsize.width-kernel.cols;
int h = imsize.height-kernel.rows;
int r = w/2;
int l = imsize.width-kernel.cols -r;
int b = h/2;
int t = imsize.height-kernel.rows -b;
Mat ret;
copyMakeBorder(kernel,ret,t,b,l,r,BORDER_CONSTANT,Scalar::all(0));
return ret;
}
//code reference https://docs.opencv.org/2.4/doc/tutorials/core/discrete_fourier_transform/discrete_fourier_transform.html
int main( int argc, char** argv )
{
String file;
file = "lena.png";
Mat image = imread(file, CV_LOAD_IMAGE_GRAYSCALE);
Mat padded;
int m = getOptimalDFTSize( image.rows );
int n = getOptimalDFTSize( image.cols );
copyMakeBorder(image, padded, 0, m - image.rows, 0, n -image.cols, BORDER_CONSTANT, Scalar::all(0));//expand input image to optimal size , on the border add zero values
Mat planes[] = {Mat_<float>(padded), Mat::zeros(padded.size(), CV_32F)};
Mat complexI;
merge(planes, 2, complexI);
dft(complexI, complexI); //computing dft
split(complexI, planes); //image converted to complex and real dft here
Mat mask = createGausFilterMask(padded.size(),r ); // Forming the gaussian filter
Mat mplane[] = {Mat_<float>(mask), Mat::zeros(mask.size(), CV_32F)};
Mat kernelcomplex;
merge(mplane, 2, kernelcomplex);
dft(kernelcomplex, kernelcomplex);
split(kernelcomplex, mplane);// splitting the dft of kernel to real and complex
mplane[1]=mplane[0]; //overwriting imaginary values with real values of kernel dft
Mat kernel_spec;
merge(mplane, 2, kernel_spec);
mulSpectrums(complexI, kernel_spec, complexI, DFT_ROWS);
Mat magI=updateMag(complexI);
namedWindow( "image fourier", CV_WINDOW_AUTOSIZE );
imshow("spectrum magnitude", magI);
updateResult(complexI); //converting to viewable form, computing idft
waitKey(0);
return 0;
}
Which step is going wrong? Or am I missing some concept?
I edited the code with Cris's help and it now works perfectly.
There are two immediately apparent issues:
The Gaussian is real-valued and symmetric. Its Fourier transform should be too. If the DFT of your kernel has a non-zero imaginary component, you're doing something wrong.
Likely, what you are doing wrong is that your kernel has its origin in the middle of the image, rather than at the top-left sample. This is the same issue as in this other question. The solution is to use the equivalent of MATLAB's ifftshift, an implementation of which is shown in the OpenCV documentation ("step 6, Crop and rearrange").
To apply the convolution, you need to multiply the two DFTs together, not the real parts and imaginary parts of the DFTs. Multiplying two complex numbers a+ib and c+id results in ac-bd+iad+ibc, not ac+ibd.
But since the DFT of your kernel should be real-valued only, you can simply multiply the real component of the kernel with both the real and imaginary components of the image: (a+ib)c = ac+ibc.
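A minimal sketch of that simplification, using the question's variable names (complexI holds the image DFT, mplane[0] the real-valued kernel DFT): scale both channels of the image spectrum by the kernel's real spectrum, instead of copying the real part into the imaginary channel:
Mat planesC[2];
split(complexI, planesC); // planesC[0] = a (Re), planesC[1] = b (Im)
multiply(planesC[0], mplane[0], planesC[0]); // a*c
multiply(planesC[1], mplane[0], planesC[1]); // b*c
merge(planesC, 2, complexI); // (a+ib)*c = ac + ibc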
It seems very roundabout what you are doing with the complex-valued images. Why not let OpenCV handle all of that for you? You can probably* just do something like this:
Mat image = imread(file, CV_LOAD_IMAGE_GRAYSCALE);
// Expand input image to optimal size, on the border add zero values
Mat padded;
int m = getOptimalDFTSize(image.rows);
int n = getOptimalDFTSize(image.cols);
copyMakeBorder(image, padded, 0, m - image.rows, 0, n -image.cols, BORDER_CONSTANT, Scalar::all(0));
// Computing DFT
Mat DFTimage;
dft(padded, DFTimage);
// Forming the Gaussian filter
Mat kernel = createGausFilterMask(padded.size(), r);
shift(kernel);
Mat DFTkernel;
dft(kernel, DFTkernel);
// Convolution
mulSpectrums(DFTimage, DFTkernel, DFTimage, DFT_ROWS);
// Display Fourier-domain result
Mat magI = updateMag(DFTimage);
imshow("spectrum magnitude", magI);
// IDFT
Mat work;
idft(DFTimage, work); // <- NOTE! Don't inverse transform the log-transformed magnitude image!
Note that the Fourier-Domain result is actually a special representation of the complex-conjugate symmetric DFT, intended to save space and computations. To compute the full complex output, add the DFT_COMPLEX_OUTPUT to the call to dft, and DFT_REAL_OUTPUT to the call to idft (this latter then assumes symmetry, and produces a real-valued output, saving you the hassle of computing the magnitude).
* I say probably because I haven't compiled any of this... If there's something wrong, please let me know, or edit the answer and fix it.
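For completeness, a minimal sketch of the full-complex variant described in the note above (equally untested, same assumptions as the code before it):
Mat DFTimage;
dft(padded, DFTimage, DFT_COMPLEX_OUTPUT); // full 2-channel complex spectrum
// ... filter with mulSpectrums as above ...
Mat result;
idft(DFTimage, result, DFT_REAL_OUTPUT | DFT_SCALE); // directly produces a real-valued image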

Wiener deconvolution using OpenCV

I have developed a way to estimate the point spread function of a motion blur; now I'd like to use the PSF to perform deconvolution. I decided to use the Wiener method.
cv::Mat deconvolution(cv::Mat input, cv::Mat kernel){
cv::Mat Fin, Fkern, padded_kern, Fdeblur,out;
cv::normalize(kernel,kernel);
cv::dft(input,Fin,cv::DFT_COMPLEX_OUTPUT);
cv::copyMakeBorder(kernel,padded_kern,0,Fin.rows-kernel.rows,0,Fin.cols-kernel.cols,cv::BORDER_CONSTANT,cv::Scalar::all(0));
cv::dft(padded_kern,Fkern,cv::DFT_COMPLEX_OUTPUT);
cv::mulSpectrums(Fin,Fkern,Fdeblur,0,true);
cv::dft(Fdeblur,out,cv::DFT_INVERSE|cv::DFT_REAL_OUTPUT);
cv::normalize(out,out,0, 1, CV_MINMAX);
return out;
}
However, even after setting the last option of cv::mulSpectrums(Fin, Fkern, Fdeblur, 0, true) to true, I still seem to be performing a normal convolution. Shouldn't the last true option mean that I am multiplying by the conjugate and therefore dividing by the kernel?
Just implemented this filter:
#include <windows.h>
#include <iostream>
#include <fstream>
#include <vector>
#include <stdio.h>
#include <algorithm>
#include <iterator>
#include "opencv2/opencv.hpp"
using namespace std;
using namespace cv;
void Recomb(Mat &src, Mat &dst)
{
int cx = src.cols >> 1;
int cy = src.rows >> 1;
Mat tmp;
tmp.create(src.size(), src.type());
src(Rect(0, 0, cx, cy)).copyTo(tmp(Rect(cx, cy, cx, cy)));
src(Rect(cx, cy, cx, cy)).copyTo(tmp(Rect(0, 0, cx, cy)));
src(Rect(cx, 0, cx, cy)).copyTo(tmp(Rect(0, cy, cx, cy)));
src(Rect(0, cy, cx, cy)).copyTo(tmp(Rect(cx, 0, cx, cy)));
dst = tmp;
}
void convolveDFT(Mat& A, Mat& B, Mat& C)
{
// reallocate the output array if needed
C.create(abs(A.rows - B.rows) + 1, abs(A.cols - B.cols) + 1, A.type());
Size dftSize;
// compute the size of DFT transform
dftSize.width = getOptimalDFTSize(A.cols + B.cols - 1);
dftSize.height = getOptimalDFTSize(A.rows + B.rows - 1);
// allocate temporary buffers and initialize them with 0's
Mat tempA(dftSize, A.type(), Scalar::all(0));
Mat tempB(dftSize, B.type(), Scalar::all(0));
// copy A and B to the top-left corners of tempA and tempB, respectively
Mat roiA(tempA, Rect(0, 0, A.cols, A.rows));
A.copyTo(roiA);
Mat roiB(tempB, Rect(0, 0, B.cols, B.rows));
B.copyTo(roiB);
// now transform the padded A & B in-place;
// use "nonzeroRows" hint for faster processing
dft(tempA, tempA, 0, A.rows);
dft(tempB, tempB, 0, B.rows);
// multiply the spectrums;
// the function handles packed spectrum representations well
mulSpectrums(tempA, tempB, tempA, 0);
// transform the product back from the frequency domain.
// Even though all the result rows will be non-zero,
// you need only the first C.rows of them, and thus you
// pass nonzeroRows == C.rows
dft(tempA, tempA, DFT_INVERSE + DFT_SCALE);
// now copy the result back to C.
C = tempA(Rect((dftSize.width - A.cols) / 2, (dftSize.height - A.rows) / 2, A.cols, A.rows)).clone();
// all the temporary buffers will be deallocated automatically
}
//----------------------------------------------------------
// Compute Re and Im planes of FFT from Image
//----------------------------------------------------------
void ForwardFFT(Mat &Src, Mat *FImg)
{
int M = getOptimalDFTSize(Src.rows);
int N = getOptimalDFTSize(Src.cols);
Mat padded;
copyMakeBorder(Src, padded, 0, M - Src.rows, 0, N - Src.cols, BORDER_CONSTANT, Scalar::all(0));
Mat planes[] = { Mat_<double>(padded), Mat::zeros(padded.size(), CV_64FC1) };
Mat complexImg;
merge(planes, 2, complexImg);
dft(complexImg, complexImg);
split(complexImg, planes);
// crop result
planes[0] = planes[0](Rect(0, 0, Src.cols, Src.rows));
planes[1] = planes[1](Rect(0, 0, Src.cols, Src.rows));
FImg[0] = planes[0].clone();
FImg[1] = planes[1].clone();
}
//----------------------------------------------------------
// Compute image from Re and Im parts of FFT
//----------------------------------------------------------
void InverseFFT(Mat *FImg, Mat &Dst)
{
Mat complexImg;
merge(FImg, 2, complexImg);
dft(complexImg, complexImg, DFT_INVERSE + DFT_SCALE);
split(complexImg, FImg);
Dst = FImg[0];
}
//----------------------------------------------------------
// wiener Filter
//----------------------------------------------------------
void wienerFilter(Mat &src, Mat &dst, Mat &_h, double k)
{
//---------------------------------------------------
// Small epsilon to avoid division by zero
//---------------------------------------------------
const double eps = 1E-8;
//---------------------------------------------------
int ImgW = src.size().width;
int ImgH = src.size().height;
//--------------------------------------------------
Mat Yf[2];
ForwardFFT(src, Yf);
//--------------------------------------------------
Mat h = Mat::zeros(ImgH, ImgW, CV_64FC1);
int padx = h.cols - _h.cols;
int pady = h.rows - _h.rows;
copyMakeBorder(_h, h, pady / 2, pady - pady / 2, padx / 2, padx - padx / 2, BORDER_CONSTANT, Scalar::all(0));
Mat Hf[2];
ForwardFFT(h, Hf);
//--------------------------------------------------
Mat Fu[2];
Fu[0] = Mat::zeros(ImgH, ImgW, CV_64FC1);
Fu[1] = Mat::zeros(ImgH, ImgW, CV_64FC1);
complex<double> a;
complex<double> b;
complex<double> c;
double Hf_Re;
double Hf_Im;
double Phf;
double hfz;
double hz;
double A;
for (int i = 0; i < h.rows; i++)
{
for (int j = 0; j < h.cols; j++)
{
Hf_Re = Hf[0].at<double>(i, j);
Hf_Im = Hf[1].at<double>(i, j);
Phf = Hf_Re*Hf_Re + Hf_Im*Hf_Im;
hfz = (Phf < eps)*eps;
hz = (h.at<double>(i, j) > 0);
A = Phf / (Phf + hz + k);
a = complex<double>(Yf[0].at<double>(i, j), Yf[1].at<double>(i, j));
b = complex<double>(Hf_Re + hfz, Hf_Im + hfz);
c = a / b; // Deconvolution :) other work to avoid division by zero
Fu[0].at<double>(i, j) = (c.real()*A);
Fu[1].at<double>(i, j) = (c.imag()*A);
}
}
InverseFFT(Fu, dst);
Recomb(dst, dst);
}
// ---------------------------------
//
// ---------------------------------
int main(int argc, char** argv)
{
namedWindow("Image");
namedWindow("Kernel");
namedWindow("Result");
Mat Img = imread("F:\\ImagesForTest\\lena.jpg", 0); // Source image
Img.convertTo(Img, CV_32FC1, 1.0 / 255.0);
Mat kernel = imread("F:\\ImagesForTest\\Point.jpg", 0); // PSF
//resize(kernel, kernel, Size(), 0.5, 0.5);
kernel.convertTo(kernel, CV_32FC1, 1.0 / 255.0);
float kernel_sum = cv::sum(kernel)[0];
kernel /= kernel_sum;
int width = Img.cols;
int height = Img.rows;
Mat resim;
convolveDFT(Img, kernel, resim);
Mat resim2;
kernel.convertTo(kernel, CV_64FC1);
// Apply filter
wienerFilter(resim, resim2, kernel, 0.01);
imshow("Результат фильтрации", resim2);
imshow("Kernel", kernel * 255);
imshow("Image", Img);
imshow("Result", resim);
cvWaitKey(0);
}
Results look like this (as you can see, it is not a 100% restoration):

OpenCV : homomorphic filter

I want to use a homomorphic filter to work on underwater images. I tried to code it using code found on the internet, but I always get a black image... I tried to normalize my result, but it didn't work.
Here are my functions:
void HomomorphicFilter::butterworth_homomorphic_filter(Mat &dft_Filter, int D, int n, float high_h_v_TB, float low_h_v_TB)
{
Mat single(dft_Filter.rows, dft_Filter.cols, CV_32F);
Point centre = Point(dft_Filter.rows/2, dft_Filter.cols/2);
double radius;
float upper = (high_h_v_TB * 0.01);
float lower = (low_h_v_TB * 0.01);
//create essentially create a butterworth highpass filter
//with additional scaling and offset
for(int i = 0; i < dft_Filter.rows; i++)
{
for(int j = 0; j < dft_Filter.cols; j++)
{
radius = (double) sqrt(pow((i - centre.x), 2.0) + pow((double) (j - centre.y), 2.0));
single.at<float>(i,j) =((upper - lower) * (1/(1 + pow((double) (D/radius), (double) (2*n))))) + lower;
}
}
//normalize(single, single, 0, 1, CV_MINMAX);
//Apply filter
mulSpectrums( dft_Filter, single, dft_Filter, 0);
}
void HomomorphicFilter::Shifting_DFT(Mat &fImage)
{
//For visualization purposes we may also rearrange the quadrants of the result, so that the origin (0,0), corresponds to the image center.
Mat tmp, q0, q1, q2, q3;
/*First crop the image, if it has an odd number of rows or columns.
Operator & bit to bit by -2 (two's complement : -2 = 111111111....10) to eliminate the first bit 2^0 (In case of odd number on row or col, we take the even number in below)*/
fImage = fImage(Rect(0, 0, fImage.cols & -2, fImage.rows & -2));
int cx = fImage.cols/2;
int cy = fImage.rows/2;
/*Rearrange the quadrants of Fourier image so that the origin is at the image center*/
q0 = fImage(Rect(0, 0, cx, cy));
q1 = fImage(Rect(cx, 0, cx, cy));
q2 = fImage(Rect(0, cy, cx, cy));
q3 = fImage(Rect(cx, cy, cx, cy));
/*We reverse each quadrant of the frame with its other quadrant diagonally opposite*/
/*We reverse q0 and q3*/
q0.copyTo(tmp);
q3.copyTo(q0);
tmp.copyTo(q3);
/*We reverse q1 and q2*/
q1.copyTo(tmp);
q2.copyTo(q1);
tmp.copyTo(q2);
}
void HomomorphicFilter::Fourier_Transform(Mat frame_bw, Mat &image_phase, Mat &image_mag)
{
Mat frame_log;
frame_bw.convertTo(frame_log, CV_32F);
/*Take the natural log of the input (compute log(1 + Mag)*/
frame_log += 1;
log( frame_log, frame_log); // log(1 + Mag)
/*2. Expand the image to an optimal size
The performance of the DFT depends of the image size. It tends to be the fastest for image sizes that are multiple of 2, 3 or 5.
We can use the copyMakeBorder() function to expand the borders of an image.*/
Mat padded;
int M = getOptimalDFTSize(frame_log.rows);
int N = getOptimalDFTSize(frame_log.cols);
copyMakeBorder(frame_log, padded, 0, M - frame_log.rows, 0, N - frame_log.cols, BORDER_CONSTANT, Scalar::all(0));
/*Make place for both the complex and real values
The result of the DFT is a complex. Then the result is 2 images (Imaginary + Real), and the frequency domains range is much larger than the spatial one. Therefore we need to store in float !
That's why we will convert our input image "padded" to float and expand it to another channel to hold the complex values.
Planes is an arrow of 2 matrix (planes[0] = Real part, planes[1] = Imaginary part)*/
Mat image_planes[] = {Mat_<float>(padded), Mat::zeros(padded.size(), CV_32F)};
Mat image_complex;
/*Creates one multichannel array out of several single-channel ones.*/
merge(image_planes, 2, image_complex);
/*Make the DFT
The result of thee DFT is a complex image : "image_complex"*/
dft(image_complex, image_complex);
/***************************/
//Create spectrum magnitude//
/***************************/
/*Transform the real and complex values to magnitude
NB: We separe Real part to Imaginary part*/
split(image_complex, image_planes);
//Starting with this part we have the real part of the image in planes[0] and the imaginary in planes[1]
phase(image_planes[0], image_planes[1], image_phase);
magnitude(image_planes[0], image_planes[1], image_mag);
}
void HomomorphicFilter::Inv_Fourier_Transform(Mat image_phase, Mat image_mag, Mat &inverseTransform)
{
/*Calculates x and y coordinates of 2D vectors from their magnitude and angle.*/
Mat result_planes[2];
polarToCart(image_mag, image_phase, result_planes[0], result_planes[1]);
/*Creates one multichannel array out of several single-channel ones.*/
Mat result_complex;
merge(result_planes, 2, result_complex);
/*Make the IDFT*/
dft(result_complex, inverseTransform, DFT_INVERSE|DFT_REAL_OUTPUT);
/*Take the exponential*/
exp(inverseTransform, inverseTransform);
}
And here is my main code:
/**************************/
/****Homomorphic filter****/
/**************************/
/**********************************************/
//Getting the frequency and magnitude of image//
/**********************************************/
Mat image_phase, image_mag;
HomomorphicFilter().Fourier_Transform(frame_bw, image_phase, image_mag);
/******************/
//Shifting the DFT//
/******************/
HomomorphicFilter().Shifting_DFT(image_mag);
/********************************/
//Butterworth homomorphic filter//
/********************************/
int high_h_v_TB = 101;
int low_h_v_TB = 99;
int D = 10;// radius of band pass filter parameter
int order = 2;// order of band pass filter parameter
HomomorphicFilter().butterworth_homomorphic_filter(image_mag, D, order, high_h_v_TB, low_h_v_TB);
/******************/
//Shifting the DFT//
/******************/
HomomorphicFilter().Shifting_DFT(image_mag);
/*******************************/
//Inv Discret Fourier Transform//
/*******************************/
Mat inverseTransform;
HomomorphicFilter().Inv_Fourier_Transform(image_phase, image_mag, inverseTransform);
imshow("Result", inverseTransform);
If someone can explain my mistakes, I would appreciate it a lot :). Thank you, and sorry for my poor English.
EDIT: Now I have something, but it's not perfect... I modified two things in my code.
I applied log(mag + 1) after dft and not on the input image.
I removed exp() after idft.
Here are the results (I can post only 2 links...):
my input image :
final result :
After having read several threads, I find similar results for my Butterworth filter and for my magnitude after DFT/shifting.
Unfortunately, my final result isn't very good. Why do I have so much "noise"?
I used this method to balance illumination when the camera changed and the image came out dark.
I tried filtering the image in the frequency domain via FFT; it works, but takes too much time (2750x3680 RGB image), so I do it in the spatial domain instead.
Here is my code:
//IplImage *imgSrcI=cvLoadImage("E:\\lean.jpg",-1);
Mat imgSrcM(imgSrc,true);
Mat imgDstM;
Mat imgGray;
Mat imgHls;
vector<Mat> vHls;
Mat imgTemp1=Mat::zeros(imgSrcM.size(),CV_64FC1);
Mat imgTemp2=Mat::zeros(imgSrcM.size(),CV_64FC1);
if(imgSrcM.channels()==1)
{
imgGray=imgSrcM.clone();
}
else if (imgSrcM.channels()==3)
{
cvtColor(imgSrcM, imgHls, CV_BGR2HLS);
split(imgHls, vHls);
imgGray=vHls.at(1);
}
else
{
return -1;
}
imgGray.convertTo(imgTemp1,CV_64FC1);
imgTemp1=imgTemp1+0.0001;
log(imgTemp1,imgTemp1);
GaussianBlur(imgTemp1, imgTemp2, Size(21, 21), 0.1, 0.1, BORDER_DEFAULT); // imgTemp2 is the low-pass filtered result
imgTemp1 = (imgTemp1 - imgTemp2); // imgTemp1 is the high-pass part: log image minus low-pass
addWeighted(imgTemp2, 0.7, imgTemp1, 1.4, 1, imgTemp1, -1); // imgTemp1 now suppresses low frequencies and boosts high frequencies
exp(imgTemp1,imgTemp1);
normalize(imgTemp1,imgTemp1,0,1,NORM_MINMAX);
imgTemp1=imgTemp1*255;
imgTemp1.convertTo(imgGray, CV_8UC1);
//imwrite("E:\\leanImgGray.jpg",imgGray);
if (imgSrcM.channels()==3)
{
vHls.at(1)=imgGray;
merge(vHls,imgHls);
cvtColor(imgHls, imgDstM, CV_HLS2BGR);
}
else if (imgSrcM.channels()==1)
{
imgDstM=imgGray.clone();
}
cvCopy(&(IplImage)imgDstM,imgDst);
//cvShowImage("jpg",imgDst);
return 0;
I took your code, corrected it in a few places, and got decent results as the homomorphic filter output.
Here are the corrections that I made.
1)
Instead of working just on image_mag, work on the full output of the FFT.
2)
Your filter values of high_h_v_TB = 101 and low_h_v_TB = 99 had virtually no filtering effect.
Here are the values I used.
int high_h_v_TB = 100;
int low_h_v_TB = 20;
int D = 10;// radius of band pass filter parameter
int order = 4;
Here is my main code
//float_img == grayscale image in 0-1 scale
Mat log_img;
log(float_img, log_img);
Mat fft_phase, fft_mag;
Mat fft_complex;
HomomorphicFilter::Fourier_Transform(log_img, fft_complex);
HomomorphicFilter::ShiftFFT(fft_complex);
int high_h_v_TB = 100;
int low_h_v_TB = 30;
int D = 10;// radius of band pass filter parameter
int order = 4;
//get a butterworth filter of same image size as the input image
//dont call mulSpectrums yet, just get the filter of correct size
Mat butterWorthFreqDomain;
HomomorphicFilter::ButterworthFilter(fft_complex.size(), butterWorthFreqDomain, D, order, high_h_v_TB, low_h_v_TB);
//this should match fft_complex in size and type
//and is what we will be using for 'mulSpectrums' call
Mat butterworth_complex;
//make two channels to match fft_complex
Mat butterworth_channels[] = {Mat_<float>(butterWorthFreqDomain), Mat::zeros(butterWorthFreqDomain.size(), CV_32F)};
merge(butterworth_channels, 2, butterworth_complex);
//do mulSpectrums on the full fft
mulSpectrums(fft_complex, butterworth_complex, fft_complex, 0);
//shift back the output
HomomorphicFilter::ShiftFFT(fft_complex);
Mat log_img_out;
HomomorphicFilter::Inv_Fourier_Transform(fft_complex, log_img_out);
Mat float_img_out;
exp(log_img_out, float_img_out);
//float_img_out is gray in 0-1 range
Here is my output.

DFT to spatial domain in OpenCV is not working

I have created the DFT of an image, and after some adjustment with filters I want to convert it back to the real image, but every time I do that it gives me the wrong result... it seems like it's not converting back.
ForierTransform and createGaussianHighPassFilter are my own functions; the rest of the code I am using for the inversion back to the real image is below.
Mat fft = ForierTransform(HeightPadded,WidthPadded);
Mat ghpf = createGaussianHighPassFilter(Size(WidthPadded, HeightPadded), db);
Mat res;
cv::multiply(fft,ghpf,res);
imshow("fftXhighpass1", res);
idft(res,res,DFT_INVERSE,res.rows);
cv::Mat croped = res(cv::Rect(0, 0, img.cols,img.rows));
//res.convertTo(res,CV_32S);
imshow("fftXhighpass", res);
Even if I don't apply the filter, I am unable to reverse the DFT result...
Here is my DFT code; I could not find any sample that reverses a DFT back to a normal image.
Mat ForierTransform(int M,int N)
{
Mat img = imread("thumb1-small-test.jpg", CV_LOAD_IMAGE_GRAYSCALE);
Mat padded;
copyMakeBorder(img, padded, 0, M - img.rows, 0, N - img.cols, BORDER_CONSTANT, Scalar::all(0));
Mat planes[] = {Mat_<float>(padded), Mat::zeros(padded.size(), CV_32F)};
Mat complexImg;
merge(planes, 2, complexImg);
dft(complexImg, complexImg);
split(complexImg, planes);
magnitude(planes[0], planes[1], planes[0]);
Mat mag = planes[0];
mag += Scalar::all(1);
log(mag, mag);
// crop the spectrum, if it has an odd number of rows or columns
mag = mag(Rect(0, 0, mag.cols & -2, mag.rows & -2));
normalize(mag, mag, 0, 1, CV_MINMAX);
return mag;
}
Kindly help.
[EDIT: After I found the solution with the help of mevatron, below is the correct code]
Mat ForierTransform(int M,int N)
{
Mat img = imread("thumb1-small-test.jpg", CV_LOAD_IMAGE_GRAYSCALE);
Mat padded;
copyMakeBorder(img, padded, 0, M - img.rows, 0, N - img.cols, BORDER_CONSTANT, Scalar::all(0));
Mat planes[] = {Mat_<float>(padded), Mat::zeros(padded.size(), CV_32F)};
Mat complexImg;
merge(planes, 2, complexImg);
dft(complexImg, complexImg);
return complexImg;
}
Mat img = imread("thumb1-small-test.jpg",CV_LOAD_IMAGE_GRAYSCALE);
int WidthPadded=0,HeightPadded=0;
WidthPadded=img.cols*2;
HeightPadded=img.rows*2;
int M = getOptimalDFTSize( img.rows );
//Create a Gaussian Highpass filter 5% the height of the Fourier transform
double db = 0.05 * HeightPadded;
Mat fft = ForierTransform(HeightPadded,WidthPadded);
Mat ghpf = createGaussianHighPassFilter(Size(WidthPadded, HeightPadded), db);
Mat res;
cv::mulSpectrums(fft,ghpf,res,DFT_COMPLEX_OUTPUT);
idft(res,res,DFT_COMPLEX_OUTPUT,img.rows);
Mat padded;
copyMakeBorder(img, padded, 0, img.rows, 0, img.cols, BORDER_CONSTANT, Scalar::all(0));
Mat planes[] = {Mat_<float>(padded), Mat::zeros(padded.size(), CV_32F)};
split(res, planes);
magnitude(planes[0], planes[1], planes[0]);
Mat mag = planes[0];
mag += Scalar::all(1);
log(mag, mag);
// crop the spectrum, if it has an odd number of rows or columns
mag = mag(Rect(0, 0, mag.cols & -2, mag.rows & -2));
int cx = mag.cols/2;
int cy = mag.rows/2;
normalize(mag, mag, 1, 0, CV_MINMAX);
cv::Mat croped = mag(cv::Rect(cx, cy, img.cols,img.rows));
cv::threshold(croped , croped , 0.56, 1, cv::THRESH_BINARY);
imshow("fftPLUShpf", mag);
imshow("cropedBinary", croped);
It is now able to display the ridges and valleys of the finger, and can be further optimized with respect to the threshold as well.
I see a few problems going on here.
First, you need to use the mulSpectrums function to convolve two FFTs, and not multiply.
Second, the createGaussianHighPassFilter is only outputting a single channel non-complex filter. You'll probably need to just set the complex channel to Mat::zeros like you did for your input image.
Third, don't convert the output of the FFT to log-magnitude spectrum. It will not combine correctly with the filter, and you won't get the same thing when performing the inverse. So, just return complexImg right after the DFT is executed. Log-magnitude spectrum is useful for a human to look at the data, but not for what you are trying to do.
Finally, make sure you pay attention to the difference between the full-complex output of dft and the Complex Conjugate Symmetric (CCS) packed output. Intel has a good page on how this data is formatted here. In your case, for simplicity I would keep everything in full-complex mode to make your life easier.
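As a minimal sketch of staying in full-complex mode (my own illustration, reusing the question's variable names; the filter must be made 2-channel to match the complex spectrum):
Mat fft = ForierTransform(HeightPadded, WidthPadded); // the corrected version returning the full complex DFT
Mat filterPlanes[] = {ghpf, Mat::zeros(ghpf.size(), CV_32F)}; // zero imaginary channel
Mat ghpfComplex;
merge(filterPlanes, 2, ghpfComplex);
Mat res;
mulSpectrums(fft, ghpfComplex, res, 0); // complex multiplication, not element-wise multiply
idft(res, res, DFT_SCALE); // still 2-channel complex; take the real part or magnitude afterwards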
Hope that helps!