I have created the DFT of an image, and after some adjustment with filters I want to convert it back to the real image, but every time I do that it gives me the wrong result; it seems it's not converting back.
ForierTransform and createGaussianHighPassFilter are my own functions; the rest of the code I am using for the inversion back to the real image is below.
Mat fft = ForierTransform(HeightPadded,WidthPadded);
Mat ghpf = createGaussianHighPassFilter(Size(WidthPadded, HeightPadded), db);
Mat res;
cv::multiply(fft,ghpf,res);
imshow("fftXhighpass1", res);
idft(res,res,DFT_INVERSE,res.rows);
cv::Mat croped = res(cv::Rect(0, 0, img.cols,img.rows));
//res.convertTo(res,CV_32S);
imshow("fftXhighpass", res);
Even if I don't apply the filter, I am unable to reverse the DFT result.
Here is my DFT code; I could not find any sample showing how to reverse a DFT back to a normal image.
Mat ForierTransform(int M,int N)
{
Mat img = imread("thumb1-small-test.jpg", CV_LOAD_IMAGE_GRAYSCALE);
Mat padded;
copyMakeBorder(img, padded, 0, M - img.rows, 0, N - img.cols, BORDER_CONSTANT, Scalar::all(0));
Mat planes[] = {Mat_<float>(padded), Mat::zeros(padded.size(), CV_32F)};
Mat complexImg;
merge(planes, 2, complexImg);
dft(complexImg, complexImg);
split(complexImg, planes);
magnitude(planes[0], planes[1], planes[0]);
Mat mag = planes[0];
mag += Scalar::all(1);
log(mag, mag);
// crop the spectrum, if it has an odd number of rows or columns
mag = mag(Rect(0, 0, mag.cols & -2, mag.rows & -2));
normalize(mag, mag, 0, 1, CV_MINMAX);
return mag;
}
Kindly help.
[EDIT: I found the solution with the help of mevatron; below is the corrected code]
Mat ForierTransform(int M,int N)
{
Mat img = imread("thumb1-small-test.jpg", CV_LOAD_IMAGE_GRAYSCALE);
Mat padded;
copyMakeBorder(img, padded, 0, M - img.rows, 0, N - img.cols, BORDER_CONSTANT, Scalar::all(0));
Mat planes[] = {Mat_<float>(padded), Mat::zeros(padded.size(), CV_32F)};
Mat complexImg;
merge(planes, 2, complexImg);
dft(complexImg, complexImg);
return complexImg;
}
Mat img = imread("thumb1-small-test.jpg",CV_LOAD_IMAGE_GRAYSCALE);
int WidthPadded=0,HeightPadded=0;
WidthPadded=img.cols*2;
HeightPadded=img.rows*2;
int M = getOptimalDFTSize( img.rows );
//Create a Gaussian Highpass filter 5% the height of the Fourier transform
double db = 0.05 * HeightPadded;
Mat fft = ForierTransform(HeightPadded,WidthPadded);
Mat ghpf = createGaussianHighPassFilter(Size(WidthPadded, HeightPadded), db);
Mat res;
cv::mulSpectrums(fft,ghpf,res,DFT_COMPLEX_OUTPUT);
idft(res,res,DFT_COMPLEX_OUTPUT,img.rows);
Mat padded;
copyMakeBorder(img, padded, 0, img.rows, 0, img.cols, BORDER_CONSTANT, Scalar::all(0));
Mat planes[] = {Mat_<float>(padded), Mat::zeros(padded.size(), CV_32F)};
split(res, planes);
magnitude(planes[0], planes[1], planes[0]);
Mat mag = planes[0];
mag += Scalar::all(1);
log(mag, mag);
// crop the spectrum, if it has an odd number of rows or columns
mag = mag(Rect(0, 0, mag.cols & -2, mag.rows & -2));
int cx = mag.cols/2;
int cy = mag.rows/2;
normalize(mag, mag, 1, 0, CV_MINMAX);
cv::Mat croped = mag(cv::Rect(cx, cy, img.cols,img.rows));
cv::threshold(croped , croped , 0.56, 1, cv::THRESH_BINARY);
imshow("fftPLUShpf", mag);
imshow("cropedBinary", croped);
It can now display the ridges and valleys of the finger, and it can be optimized further with respect to the threshold as well.
I see a few problems going on here.
First, you need to use the mulSpectrums function to multiply two FFTs (a per-element complex product), not cv::multiply.
Second, the createGaussianHighPassFilter is only outputting a single channel non-complex filter. You'll probably need to just set the complex channel to Mat::zeros like you did for your input image.
Third, don't convert the output of the FFT to log-magnitude spectrum. It will not combine correctly with the filter, and you won't get the same thing when performing the inverse. So, just return complexImg right after the DFT is executed. Log-magnitude spectrum is useful for a human to look at the data, but not for what you are trying to do.
Finally, make sure you pay attention to the difference between the full-complex output of dft and the Complex Conjugate Symmetric (CCS) packed output. Intel has a good page on how this data is formatted here. In your case, for simplicity, I would keep everything in full-complex mode to make your life easier.
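Putting those points together, a minimal sketch of the filtering round trip might look like this (untested; it assumes your ForierTransform returns the raw complex DFT and createGaussianHighPassFilter returns a single-channel CV_32F mask):
Mat fft = ForierTransform(HeightPadded, WidthPadded);
// Promote the real-valued filter to two channels (real, imaginary)
// so its layout matches the full-complex DFT of the image.
Mat filter = createGaussianHighPassFilter(Size(WidthPadded, HeightPadded), db);
Mat filterPlanes[] = {filter, Mat::zeros(filter.size(), CV_32F)};
Mat complexFilter;
merge(filterPlanes, 2, complexFilter);
// Per-element complex multiplication in the frequency domain.
Mat res;
mulSpectrums(fft, complexFilter, res, 0);
// Inverse transform; the result is still a two-channel complex image.
idft(res, res, DFT_SCALE | DFT_COMPLEX_OUTPUT);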
Hope that helps!
Related
I am trying to detect the blur rate of face images with the code below.
cv::Mat greyMat;
cv::Mat laplacianImage;
cv::Mat imageClone = LapMat.clone();
cv::resize(imageClone, imageClone, cv::Size(150, 150), 0, 0, cv::INTER_CUBIC);
cv::cvtColor(imageClone, greyMat, CV_BGR2GRAY);
Laplacian(greyMat, laplacianImage, CV_64F);
cv::Scalar mean, stddev; // 0:1st channel, 1:2nd channel and 2:3rd channel
meanStdDev(laplacianImage, mean, stddev, cv::Mat());
double variance = stddev.val[0] * stddev.val[0];
cv::Mat M = (cv::Mat_<double>(3, 1) << -1, 2, -1);
cv::Mat G = cv::getGaussianKernel(3, -1, CV_64F);
cv::Mat Lx;
cv::sepFilter2D(LapMat, Lx, CV_64F, M, G);
cv::Mat Ly;
cv::sepFilter2D(LapMat, Ly, CV_64F, G, M);
cv::Mat FM = cv::abs(Lx) + cv::abs(Ly);
double focusMeasure = cv::mean(FM).val[0];
return focusMeasure;
It sometimes gives poor results, as in the attached picture.
Is there a best-practice way to detect blurry faces?
I attached an example image that scores high with the above code, which is wrong.
Best
I'm not sure how you are interpreting your results. To measure blur, you usually take the output of a blur detector (a number) and compare it against a threshold value, then determine whether the input is, in fact, blurry or not. I don't see such a comparison in your code.
There are several ways to measure "blurriness", or rather, sharpness. Let's take a look at one. It involves computing the variance of the Laplacian and then comparing it to an expected value. This is the code:
//read the image and convert it to grayscale:
cv::Mat inputImage = cv::imread( "dog.png" );
cv::Mat gray;
cv::cvtColor( inputImage, gray, cv::COLOR_RGB2GRAY );
//Cool, let's compute the laplacian of the gray image:
cv::Mat laplacianImage;
cv::Laplacian( gray, laplacianImage, CV_64F );
//Prepare to compute the mean and standard deviation of the laplacian:
cv::Scalar mean, stddev;
cv::meanStdDev( laplacianImage, mean, stddev, cv::Mat() );
//Let’s compute the variance:
double variance = stddev.val[0] * stddev.val[0];
Up until this point, we've effectively calculated the variance of the Laplacian, but we still need to compare against a threshold:
double blurThreshold = 300;
if ( variance <= blurThreshold ) {
std::cout<<"Input image is blurry!"<<std::endl;
} else {
std::cout<<"Input image is sharp"<<std::endl;
}
Let's check out the results. These are my test images. I've printed the variance value in the lower-left corner of the images. The threshold value is 300; blue text means the value is within limits, red text means it is below.
First, I use the putText function to draw text on a zero-filled image:
std::string text("Mengranlin");
int rows = 222;
int cols = 112;
double textSize = 1.5;
int textWidth = 2;
int num = 255;
cv::Mat zero_filled_img = cv::Mat::zeros(cols, rows, CV_32F);
putText(zero_filled_img, text,
cv::Point(zero_filled_img.cols * 0.5,
zero_filled_img.rows * 0.3),
cv::FONT_HERSHEY_PLAIN, textSize, cv::Scalar(num, num, num), textWidth);
cv::Mat zero_filled_img2;
flip(zero_filled_img, zero_filled_img2, -1);
zero_filled_img += zero_filled_img2;
transpose(zero_filled_img, zero_filled_img);
flip(zero_filled_img, zero_filled_img, 1);
Here is the image:
Second, I apply the inverse Fourier transform to the image:
int m = getOptimalDFTSize(rows);
int n = getOptimalDFTSize(cols);
cv::Mat dst;
copyMakeBorder(zero_filled_img, dst, 0, m - rows, 0, n - cols, BORDER_CONSTANT, Scalar::all(0));
cv::Mat planes[] = { cv::Mat_<float>(dst),
cv::Mat::zeros(dst.size(), CV_32F) };
cv::Mat complex;
cv::merge(planes,2, complex);
idft(complex, complex);
split(complex, planes);
magnitude(planes[0], planes[1], planes[0]);
Third, I apply the Fourier transform to the result of the inverse Fourier transform:
cv::merge(planes2, 2, complex);
dft(complex, complex);
split(complex, planes2);
magnitude(planes2[0], planes2[1], planes2[0]);
cv::Mat result = planes2[0];
Finally, I save the image:
result += 1;
log(result, result);
result = result(cv::Rect(0, 0, cols, rows));
int cx = result.cols / 2;
int cy = result.rows / 2;
cv::Mat temp;
cv::Mat q0(result, cv::Rect(0, 0, cx, cy));
cv::Mat q1(result, cv::Rect(cx, 0, cx, cy));
cv::Mat q2(result, cv::Rect(0, cy, cx, cy));
cv::Mat q3(result, cv::Rect(cx, cy, cx, cy));
q0.copyTo(temp);
q3.copyTo(q0);
temp.copyTo(q3);
q1.copyTo(temp);
q2.copyTo(q1);
temp.copyTo(q2);
imwrite("./image/log_result.jpg", result);
Here is the image:
Although the "Mengnalin" can be found from the image, that is very weak. And then, I save the normalization of the result, but I found nothing:
normalize(result, result);
imwrite("./image/normalize_result.jpg", result);
result *= 255;
imwrite("./image/normalize_result255.jpg", result);
Here is the normalization image:
Here is the normalization image x 255:
The experiment succeeds when using MATLAB, so I want to know where the error is.
Below is the complete code that I ran:
std::string text("Mengranlin");
int rows = 222;
int cols = 112;
double textSize = 1.5;
int textWidth = 2;
int num = 255;
cv::Mat zero_filled_img = cv::Mat::zeros(cols, rows, CV_32F);
putText(zero_filled_img, text, cv::Point(zero_filled_img.cols * 0.5, zero_filled_img.rows * 0.3),
cv::FONT_HERSHEY_PLAIN, textSize, cv::Scalar(num, num, num), textWidth);
cv::Mat zero_filled_img2;
flip(zero_filled_img, zero_filled_img2, -1);
zero_filled_img += zero_filled_img2;
transpose(zero_filled_img, zero_filled_img);
flip(zero_filled_img, zero_filled_img, 1);
cv::Mat de = cv::Mat_<uchar>(zero_filled_img);
cv::imwrite("./image/zero_filled_img.jpg", zero_filled_img);
//idft
int m = getOptimalDFTSize(rows);
int n = getOptimalDFTSize(cols);
cv::Mat dst;
copyMakeBorder(zero_filled_img, dst, 0, m - rows, 0, n - cols, BORDER_CONSTANT, Scalar::all(0));
cv::Mat planes[] = { cv::Mat_<float>(dst), cv::Mat::zeros(dst.size(), CV_32F) };
cv::Mat complex;
cv::merge(planes,2, complex);
idft(complex, complex);
split(complex, planes);
magnitude(planes[0], planes[1], planes[0]);
cv::Mat freq = planes[0];
freq = freq(cv::Rect(0, 0, cols, rows));
normalize(freq, freq, 0, 1, CV_MINMAX);
//dft
cv::Mat planes2[] = {planes[0], planes[1]};
cv::merge(planes2, 2, complex);
dft(complex, complex);
split(complex, planes2);
magnitude(planes2[0], planes2[1], planes2[0]);
cv::Mat result = planes2[0];
//float min_v, max_v; min_max(result, min_v, max_v);
imwrite("./image/img.jpg", result);
result += 1;
imwrite("./image/img_plus_zero.jpg", result);
log(result, result);
result = result(cv::Rect(0, 0, cols, rows));
//float min_v1, max_v1; min_max(result, min_v1, max_v1);
imwrite("./image/log_img.jpg", result);
int cx = result.cols / 2;
int cy = result.rows / 2;
cv::Mat temp;
cv::Mat q0(result, cv::Rect(0, 0, cx, cy));
cv::Mat q1(result, cv::Rect(cx, 0, cx, cy));
cv::Mat q2(result, cv::Rect(0, cy, cx, cy));
cv::Mat q3(result, cv::Rect(cx, cy, cx, cy));
q0.copyTo(temp);
q3.copyTo(q0);
temp.copyTo(q3);
q1.copyTo(temp);
q2.copyTo(q1);
temp.copyTo(q2);
normalize(result, result);
imwrite("./image/normalize_img.jpg", result);
result *= 255;
imwrite("./image/normalize_img255.jpg", result);
Your code splits the output of idft into planes[0] (real component) and planes[1] (imaginary component), then computes the magnitude and writes it to planes[0]:
idft(complex, complex);
split(complex, planes);
magnitude(planes[0], planes[1], planes[0]);
Next, you merge planes[0] and planes[1] as the real and imaginary parts of a complex-valued image, and compute the dft:
cv::Mat planes2[] = {planes[0], planes[1]};
cv::merge(planes2, 2, complex);
dft(complex, complex);
But because planes[0] doesn't contain the real part of the output of idft any more, but its magnitude, dft will not perform the inverse calculation that idft did.
You can fix this easily. Instead of:
magnitude(planes[0], planes[1], planes[0]);
cv::Mat freq = planes[0];
Do:
cv::Mat freq;
magnitude(planes[0], planes[1], freq);
You can significantly simplify your code. Try the following code (zero_filled_img is the input image computed earlier):
// DFT
cv::Mat complex;
dft(zero_filled_img, complex, DFT_COMPLEX_OUTPUT);
// IDFT
cv::Mat result;
idft(complex, result, DFT_REAL_OUTPUT);
imwrite("./image/img.jpg", result);
result should be equal to zero_filled_img within numerical accuracy.
The DFT_COMPLEX_OUTPUT flag forces the creation of a full, complex-valued DFT, even though the input array is real-valued. Likewise, DFT_REAL_OUTPUT causes any imaginary output components to be dropped; this is equivalent to computing the complex IDFT and then taking only the real part.
I have reversed the DFT and IDFT to be conceptually correct (though it is perfectly fine to reverse these two operations). DFT_COMPLEX_OUTPUT only works with the forward transform and DFT_REAL_OUTPUT only works with the inverse transform, so the code above will not work (I believe) if you use these two operations in the order you attempted in your own code.
The code above also doesn't bother with padding to a favourable size. Doing so might reduce computation time, but for such a small image it will not matter at all.
Note also that taking the magnitude of the output of the inverse transform (the second transform you apply) is OK in your case, but not in general. This second transform is expected to produce a real-valued output (since the input to the first one was real-valued). Any imaginary component should be 0 within numerical precision. Thus, the real component of the complex output should be kept. If you take the magnitude, you obtain the absolute value of the real component, meaning that any negative values in the original input will become positive values in the final output. In the case of the example images, all pixels are non-negative, but this is not necessarily true. Do the correct thing and take the real component rather than the magnitude.
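For instance, a minimal sketch (my addition, untested) of keeping the real component after the second transform, instead of its magnitude, could be:
// Keep the real part of the complex output; the imaginary part should be
// ~0 here, and negative input values survive (unlike with magnitude()).
cv::Mat parts[2];
cv::split(complex, parts);          // parts[0] = Re, parts[1] = Im
cv::Mat result = parts[0].clone();  // the real component is the answer
Equivalently, calling idft with the DFT_REAL_OUTPUT flag, as in the simplified code above, produces the real component directly.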
I followed these steps:
1. Calculated the DFT of the image
2. Calculated the DFT of the kernel (but first padded it to the size of the image)
3. Multiplied the real and imaginary parts of both DFTs individually
4. Calculated the inverse DFT
I tried to display the images at each intermediate step, but the final image comes out almost black except in the corners.
Image: Fourier transform output after multiplication, and its inverse DFT output
Input image
#include <iostream>
#include <stdlib.h>
#include <opencv2/opencv.hpp>
#include <stdio.h>
int r=100;
#define SIGMA_CLIP 6.0f
using namespace cv;
using namespace std;
void updateResult(Mat complex)
{
Mat work;
idft(complex, work);
Mat planes[] = {Mat::zeros(complex.size(), CV_32F), Mat::zeros(complex.size(), CV_32F)};
split(work, planes); // planes[0] = Re(DFT(I)), planes[1] = Im(DFT(I))
magnitude(planes[0], planes[1], work); // === sqrt(Re(DFT(I))^2 + Im(DFT(I))^2)
normalize(work, work, 0, 1, NORM_MINMAX);
imshow("result", work);
}
void shift(Mat magI) {
// crop if it has an odd number of rows or columns
magI = magI(Rect(0, 0, magI.cols & -2, magI.rows & -2));
int cx = magI.cols/2;
int cy = magI.rows/2;
Mat q0(magI, Rect(0, 0, cx, cy)); // Top-Left - Create a ROI per quadrant
Mat q1(magI, Rect(cx, 0, cx, cy)); // Top-Right
Mat q2(magI, Rect(0, cy, cx, cy)); // Bottom-Left
Mat q3(magI, Rect(cx, cy, cx, cy)); // Bottom-Right
Mat tmp; // swap quadrants (Top-Left with Bottom-Right)
q0.copyTo(tmp);
q3.copyTo(q0);
tmp.copyTo(q3);
q1.copyTo(tmp); // swap quadrant (Top-Right with Bottom-Left)
q2.copyTo(q1);
tmp.copyTo(q2);
}
Mat updateMag(Mat complex )
{
Mat magI;
Mat planes[] = {Mat::zeros(complex.size(), CV_32F), Mat::zeros(complex.size(), CV_32F)};
split(complex, planes); // planes[0] = Re(DFT(I)), planes[1] = Im(DFT(I))
magnitude(planes[0], planes[1], magI); // sqrt(Re(DFT(I))^2 + Im(DFT(I))^2)
// switch to logarithmic scale: log(1 + magnitude)
magI += Scalar::all(1);
log(magI, magI);
shift(magI);
normalize(magI, magI, 1, 0, NORM_INF); // Transform the matrix with float values into a viewable image form (floats between 0 and 1)
return magI;
//imshow("spectrum", magI);
}
Mat createGausFilterMask(Size imsize, int radius) {
// call openCV gaussian kernel generator
double sigma = (r/SIGMA_CLIP+0.5f);
Mat kernelX = getGaussianKernel(2*radius+1, sigma, CV_32F);
Mat kernelY = getGaussianKernel(2*radius+1, sigma, CV_32F);
// create 2d gaus
Mat kernel = kernelX * kernelY.t();
int w = imsize.width-kernel.cols;
int h = imsize.height-kernel.rows;
int r = w/2;
int l = imsize.width-kernel.cols -r;
int b = h/2;
int t = imsize.height-kernel.rows -b;
Mat ret;
copyMakeBorder(kernel,ret,t,b,l,r,BORDER_CONSTANT,Scalar::all(0));
return ret;
}
//code reference https://docs.opencv.org/2.4/doc/tutorials/core/discrete_fourier_transform/discrete_fourier_transform.html
int main( int argc, char** argv )
{
String file;
file = "lena.png";
Mat image = imread(file, CV_LOAD_IMAGE_GRAYSCALE);
Mat padded;
int m = getOptimalDFTSize( image.rows );
int n = getOptimalDFTSize( image.cols );
copyMakeBorder(image, padded, 0, m - image.rows, 0, n - image.cols, BORDER_CONSTANT, Scalar::all(0)); // expand input image to optimal size; pad the border with zero values
Mat planes[] = {Mat_<float>(padded), Mat::zeros(padded.size(), CV_32F)};
Mat complexI;
merge(planes, 2, complexI);
dft(complexI, complexI); //computing dft
split(complexI, planes); //image converted to complex and real dft here
Mat mask = createGausFilterMask(padded.size(),r ); // Forming the gaussian filter
Mat mplane[] = {Mat_<float>(mask), Mat::zeros(mask.size(), CV_32F)};
Mat kernelcomplex;
merge(mplane, 2, kernelcomplex);
dft(kernelcomplex, kernelcomplex);
split(kernelcomplex, mplane);// splitting the dft of kernel to real and complex
mplane[1]=mplane[0]; //overwriting imaginary values with real values of kernel dft
Mat kernel_spec;
merge(mplane, 2, kernel_spec);
mulSpectrums(complexI, kernel_spec, complexI, DFT_ROWS);
Mat magI=updateMag(complexI);
namedWindow( "image fourier", CV_WINDOW_AUTOSIZE );
imshow("spectrum magnitude", magI);
updateResult(complexI); //converting to viewable form, computing idft
waitKey(0);
return 0;
}
Which step is going wrong? Or am I missing some concept?
I edited the code with the help of Cris, and it now works perfectly.
There are two immediately apparent issues:
The Gaussian is real-valued and symmetric. Its Fourier transform should be too. If the DFT of your kernel has a non-zero imaginary component, you're doing something wrong.
Likely, what you are doing wrong is that your kernel has its origin in the middle of the image, rather than at the top-left sample. This is the same issue as in this other question. The solution is to use the equivalent of MATLAB's ifftshift, an implementation of which is shown in the OpenCV documentation ("step 6, Crop and rearrange").
To apply the convolution, you need to multiply the two DFTs together, not the real parts and imaginary parts of the DFTs. Multiplying two complex numbers a+ib and c+id results in (ac-bd) + i(ad+bc), not ac+ibd.
But since the DFT of your kernel should be real-valued only, you can simply multiply the real component of the kernel with both the real and imaginary components of the image: (a+ib)c = ac+ibc.
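To illustrate, here is a hypothetical per-pixel complex product over two 2-channel float Mats (my addition; in practice mulSpectrums does exactly this for you):
// Sketch: element-wise complex product of two CV_32FC2 images storing
// (Re, Im) pairs; (a+ib)(c+id) = (ac - bd) + i(ad + bc).
cv::Mat complexMultiply(const cv::Mat& A, const cv::Mat& B) {
    cv::Mat C(A.size(), CV_32FC2);
    for (int y = 0; y < A.rows; ++y) {
        for (int x = 0; x < A.cols; ++x) {
            cv::Vec2f p = A.at<cv::Vec2f>(y, x);  // (a, b)
            cv::Vec2f q = B.at<cv::Vec2f>(y, x);  // (c, d)
            C.at<cv::Vec2f>(y, x) = cv::Vec2f(p[0]*q[0] - p[1]*q[1],
                                              p[0]*q[1] + p[1]*q[0]);
        }
    }
    return C;
}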
It seems very roundabout what you are doing with the complex-valued images. Why not let OpenCV handle all of that for you? You can probably* just do something like this:
Mat image = imread(file, CV_LOAD_IMAGE_GRAYSCALE);
// Expand input image to optimal size, on the border add zero values
Mat padded;
int m = getOptimalDFTSize(image.rows);
int n = getOptimalDFTSize(image.cols);
copyMakeBorder(image, padded, 0, m - image.rows, 0, n -image.cols, BORDER_CONSTANT, Scalar::all(0));
// Computing DFT
Mat DFTimage;
dft(padded, DFTimage);
// Forming the Gaussian filter
Mat kernel = createGausFilterMask(padded.size(), r);
shift(kernel);
Mat DFTkernel;
dft(kernel, DFTkernel);
// Convolution
mulSpectrums(DFTimage, DFTkernel, DFTimage, DFT_ROWS);
// Display Fourier-domain result
Mat magI = updateMag(DFTimage);
imshow("spectrum magnitude", magI);
// IDFT
Mat work;
idft(DFTimage, work); // <- NOTE! Don't inverse transform the log-transformed magnitude image!
Note that the Fourier-Domain result is actually a special representation of the complex-conjugate symmetric DFT, intended to save space and computations. To compute the full complex output, add the DFT_COMPLEX_OUTPUT to the call to dft, and DFT_REAL_OUTPUT to the call to idft (this latter then assumes symmetry, and produces a real-valued output, saving you the hassle of computing the magnitude).
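For example (a sketch under the same untested caveat as the code above), with those flags the transforms would be:
// Full-complex forward transforms, so both spectra share the same layout,
// and a real-valued inverse transform that skips the magnitude step.
Mat DFTimage, DFTkernel;
dft(padded, DFTimage, DFT_COMPLEX_OUTPUT);
dft(kernel, DFTkernel, DFT_COMPLEX_OUTPUT);
mulSpectrums(DFTimage, DFTkernel, DFTimage, 0); // flags = 0: one 2D spectrum
Mat result;
idft(DFTimage, result, DFT_REAL_OUTPUT | DFT_SCALE); // assumes symmetric spectrum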
* I say probably because I haven't compiled any of this... If there's something wrong, please let me know, or edit the answer and fix it.
I was looking at this tutorial, and it said, "You can make a symmetric face by averaging a face and its mirror reflection," and there was an example of Obama's face being made symmetrical. I tried doing the same with OpenCV and C++, but these are the results I'm getting using the following code:
Mat3b getMean(const vector<Mat3b>& images) {
Mat m(images[0].rows, images[0].cols, CV_64FC3); // Create a 0 initialized image to use as accumulator
m.setTo(Scalar(0, 0, 0, 0)); //set all image elements to 0
Mat temp; // Use a temp image to hold the conversion of each input image to CV_64FC3
for (int i = 0; i < images.size(); ++i) { //loop through the images
images[i].convertTo(temp, CV_64FC3); // Convert the input images to CV_64FC3...
m += temp; //...so you can accumulate
}
m.convertTo(m, CV_8U, 1. / images.size()); // Convert back to CV_8UC3 type, applying the division to get the actual mean
return m;
}
int main() {
Mat img1 = imread("E:/barack-obama.jpg"), img2, img4;
resize(img1, img1, Size(0.4 * img1.cols, 0.4 * img1.rows), 1, 1, INTER_LINEAR);
flip(img1, img2, +1);
vector<Mat3b> imgs;
imgs.push_back(img1);
imgs.push_back(img2);
Mat3b img3 = getMean(imgs); // Compute the mean
//img3 = (img1 + img2)*0.5;
double alpha = 0.5, beta;
beta = (1.0 - alpha);
addWeighted(img1, alpha, img2, beta, 0.0, img4);
imshow("Original", img1);
imshow("getMean", img3);
imshow("AddWeighted", img4);
waitKey(0);
}
How can I apply a notch filter on an image spectrum using OpenCV 2.4 and C++? I want to calculate the DFT of an image, suppress certain frequencies and calculate inverse dft. Can anyone show me some sample code how to apply a notch filter in frequecy domain?
EDIT:
Here is what I tried, but the quadrants of the frequency spectrum are not in order, so the origin of the spectrum is not the center of the image. That makes it difficult for me to identify the frequencies to suppress. When swapping quadrants so that the origin is the center, the inverse DFT shows wrong results. Can anyone show me how to do an inverse DFT with swapped quadrants?
I don't understand the number of columns in the frequency images filter1 and filter2 (see code). If I use filter1.cols as the limit for u in the for loop, I don't reach the right border of the images. filter1 and filter2 seem to have approx. 5000 columns, but the source image has a resolution of 1280x1024 (grayscale). Any thoughts on that?
Any further comments about my code?
Mat img;
img=imread(filename,CV_LOAD_IMAGE_GRAYSCALE);
int M = getOptimalDFTSize( img.rows );
int N = getOptimalDFTSize( img.cols );
Mat padded;
copyMakeBorder(img, padded, 0, M - img.rows, 0, N - img.cols, BORDER_CONSTANT, Scalar::all(0));
Mat planes[] = {Mat_<float>(padded), Mat::zeros(padded.size(), CV_32F)};
Mat complexImg;
merge(planes, 2, complexImg);
dft(complexImg, complexImg,cv::DFT_SCALE|cv::DFT_COMPLEX_OUTPUT);
split(complexImg, planes);
Mat filter1;
planes[0].copyTo(filter1);
Mat filter2;
planes[1].copyTo(filter2);
for( int i = 0; i < filter1.rows; ++i)
{
for(int u=7;u<15;++u)
{
filter1.at<uchar>(i,u)=0;
filter2.at<uchar>(i,u)=0;
    }
}
Mat inverse[] = {filter1,filter2};
Mat filterspec;
merge(inverse, 2, filterspec);
cv::Mat inverseTransform;
cv::dft(filterspec, inverseTransform,cv::DFT_INVERSE|cv::DFT_REAL_OUTPUT);
cv::Mat finalImage;
inverseTransform.convertTo(finalImage, CV_8U);