I am attempting to convert a greyscale image to and from the frequency domain using the Fourier transform in OpenCV. However, the resulting image is very distorted even though I made no changes to the image while in the frequency domain. Could anyone help me with this? I've found several other questions explaining this, such as the links below, and I have followed them exactly, but the result always ends up like this.
Inverse fourier transformation in OpenCV
https://coderedirect.com/questions/165340/inverse-fourier-transformation-in-opencv
//Make grayscale image
cvtColor(src, gray_in, COLOR_BGR2GRAY);
gray_in.convertTo(gray_in, CV_32FC1);
//Create complex output variable
//From https://docs.opencv.org/4.x/d8/d01/tutorial_discrete_fourier_transform.html
Mat planes[] = { Mat_<float>(gray_in), Mat::zeros(gray_in.size(), CV_32F) };
Mat complexI;
merge(planes, 2, complexI);
//Transform
dft(gray_in, complexI, DFT_COMPLEX_OUTPUT);
//Compute inverse transform
dft(complexI, tgt, DFT_SCALE | DFT_INVERSE | DFT_REAL_OUTPUT);
//Save file
tgt.convertTo(tgt, CV_32FC2);
imwrite(outfile, tgt);
//Display image
namedWindow(windowName);
imshow(windowName, tgt);
waitKey(0);
destroyWindow(windowName);
Related
While reading the image with IMREAD_COLOR, the 'dft' function throws an error.
The dft function works just fine when reading an image with IMREAD_GRAYSCALE, but I want to read the image with IMREAD_COLOR.
main function
const char* filename = "face.jpg";
Mat I = imread(filename, IMREAD_COLOR);
if(I.empty()) return 0;
Mat padded;
I.convertTo(padded, CV_32F);
Mat fft;
Mat planes[2];
dft(padded, fft, DFT_SCALE|DFT_COMPLEX_OUTPUT);
Mat fftBlur = fft.clone();
fftBlur *= 0.5;
split(fftBlur, planes);
Mat ph, mag;
mag.zeros(planes[0].rows, planes[0].cols, CV_32F);
ph.zeros(planes[0].rows, planes[0].cols, CV_32F);
cartToPolar(planes[0], planes[1], mag, ph);
merge(planes, 2, fftBlur);
//inverse
Mat invfft;
dft(fftBlur, invfft, DFT_INVERSE|DFT_REAL_OUTPUT);
Mat result;
invfft.convertTo(result, CV_8U);
Mat image;
cvtColor(result, image, COLOR_GRAY2RGB);
imshow("Output", result);
imshow("Image", image);
waitKey();
The message you receive is an assertion: it tells you that the dft function only takes single-precision floating-point images with one or two channels (CV_32FC1, CV_32FC2; the letter C at the end of the flag means channels) or double-precision floating-point images with one or two channels (CV_64FC1, CV_64FC2).
The two-channel case is actually the representation of a complex image in OpenCV's data storage.
If you want, you can split your image into a std::vector<cv::Mat> where each element represents one channel using cv::split, apply the DFT to each channel, do whatever processing you want on it, and recreate a multichannel image with cv::merge.
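A minimal sketch of that per-channel approach (assuming the input is a 3-channel BGR image; the function name and variables are only illustrative):

#include <opencv2/opencv.hpp>
#include <vector>

// Apply the DFT to each channel of a multichannel image separately,
// then merge the processed channels back into one image.
cv::Mat perChannelDft(const cv::Mat& src)   // src: e.g. a BGR image
{
    cv::Mat srcFloat;
    src.convertTo(srcFloat, CV_32F);

    std::vector<cv::Mat> channels;
    cv::split(srcFloat, channels);          // one single-channel Mat per channel

    std::vector<cv::Mat> processed;
    for (const cv::Mat& ch : channels)
    {
        cv::Mat spectrum, restored;
        cv::dft(ch, spectrum, cv::DFT_COMPLEX_OUTPUT);   // forward transform
        // ... frequency-domain processing on 'spectrum' goes here ...
        cv::dft(spectrum, restored, cv::DFT_INVERSE | cv::DFT_SCALE | cv::DFT_REAL_OUTPUT);
        processed.push_back(restored);
    }

    cv::Mat merged;
    cv::merge(processed, merged);           // recreate the multichannel image
    return merged;
}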
From Learning OpenCV (about dft function):
The input array must be of floating-point type and may be single- or double-channel. In the single-channel case, the entries are assumed to be real numbers, and the output will be packed in a special space-saving format called complex conjugate symmetrical.
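As a quick illustration of the two output layouts (a sketch; gray32f stands in for any single-channel CV_32F image):

cv::Mat gray32f = cv::Mat::zeros(256, 256, CV_32F);      // stand-in for your converted image
cv::Mat packed, fullComplex;
cv::dft(gray32f, packed);                                // single-channel, CCS-packed spectrum
cv::dft(gray32f, fullComplex, cv::DFT_COMPLEX_OUTPUT);   // two-channel (real/imaginary) spectrum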
The same question is mentioned here in terms of MATLAB image processing.
You can check out the cv::split function if you want to separate the channels of your initial image.
Instead of OpenCV's normal dft, I'd like to use cuda::dft. As a start I tried performing a forward and inverse transform, but the result doesn't look anything like the input. Here's a minimal example using an OpenCV example image:
// Load 8bit test image (https://raw.githubusercontent.com/opencv/opencv/master/samples/data/basketball1.png)
Mat testImg;
testImg = imread("basketball1.png", CV_LOAD_IMAGE_GRAYSCALE);
// Convert input to complex float image
Mat_<float> imgReal;
testImg.convertTo(imgReal, CV_32F, 1.0/255.0);
Mat imgImag = Mat(imgReal.rows, imgReal.cols, CV_32F, float(0));
vector<Mat> channels;
channels.push_back(imgReal);
channels.push_back(imgImag);
Mat imgComplex;
merge(channels,imgComplex);
imshow("Img real", imgReal);
waitKey(0);
//Perform a Fourier transform
cuda::GpuMat imgGpu, fftGpu;
imgGpu.upload(imgComplex);
cuda::dft(imgGpu, fftGpu, imgGpu.size());
//Performs an inverse Fourier transform
cuda::GpuMat propGpu, convFftGpu;
cuda::dft(fftGpu, propGpu, imgGpu.size(), DFT_REAL_OUTPUT | DFT_SCALE);
Mat output(propGpu);
output.convertTo(output, CV_8U, 255, 0);
imshow("Output", output);
waitKey(0);
I played with the flags, but the output never looks anything like the input. Using the above code I get as output:
While it should look like this:
I found the answer here. Apparently, when starting with a complex input image, it's not possible to use the flag DFT_REAL_OUTPUT.
Either you do the forward transform with a one-channel float input and then get that same type back from the inverse transform, or you start with a two-channel complex input image and get that type as output. The upside to using a complex input image is that the forward transform delivers the full-sized complex field to work with, e.g. to perform a convolution (see the linked answer for details).
I'll add that in order to obtain an 8-bit image from the inverse transform, compute the magnitude yourself like so:
Mat output(propGpu);
Mat planes[2];
split(output,planes);
Mat mag;
magnitude(planes[0],planes[1],mag);
mag.convertTo(mag, CV_8U, 255, 0);
To go into the Fourier domain using the OpenCV CUDA FFT and back into the spatial domain, you can simply follow the example below (to learn more, you can refer to the cuFFT documentation, on which the OpenCV CUDA FFT source code is based).
Mat test_im;
test_im = imread("frame.png", IMREAD_GRAYSCALE);
// Convert input to real-valued type (CV_64F for double precision)
Mat im_real;
test_im.convertTo(im_real, CV_32F, 1.0/255.0);
imshow("Input Image", im_real);
waitKey(0);
// Perform The Fourier Transform
cuda::GpuMat in_im_gpu, fft_im;
in_im_gpu.upload(im_real);
cuda::dft(in_im_gpu, fft_im, in_im_gpu.size(), 0);
// Performs an inverse Fourier transform
cuda::GpuMat ifft_im_gpu;
//! int odd_size = in_im_gpu.size().width % 2;
//! cv::Size dest_size((fft_im.size().width-1)*2 + (odd_size ? 1 : 0), fft_im.size().height);
cv::Size dest_size = in_im_gpu.size();
int flag = (DFT_SCALE + DFT_REAL_OUTPUT) | DFT_INVERSE;
cuda::dft(fft_im, ifft_im_gpu, dest_size, flag);
Mat ifft_im(ifft_im_gpu);
ifft_im.convertTo(ifft_im, CV_8U, 255, 0);
imshow("Inverse FFT", ifft_im);
waitKey(0);
I am using C++ and OpenCV in combination with ROS. I use live images from my camera (Intel RealSense R200). I get depth and RGB images from my camera. In my C++ code I want to use these images to get odometry data and build a trajectory out of it.
I am trying to use the "cv::rgbd::Odometry::compute" function for the odometry, but I always get false as the return value (the "isSuccess" value in the code is always 0). But I don't know which part I am doing wrong.
I read my images from the camera using ROS and then, in the callback function, I first convert all images to grayscale and then use the SURF function to detect features. Then I want to use "compute" to get the transformation between the current and previous frame.
As far as I understood, "Rt" and "initRt" are outputs of the function, so it is enough to construct them with the correct size.
Can anyone see the problem? Am I missing anything?
boost::shared_ptr<rgbd::Odometry> odom;
Mat Rt = Mat(4,4, CV_64FC1);
Mat initRt = Mat(4,4, CV_64FC1);
Mat prevFtrM; //mask Matrix of previous image
Mat currFtrM; //mask Matrix of current image
Mat tempFtrM;
Mat imgprev;// previous depth image
Mat imgcurr;// current depth image
Mat imgprevC;// previous colored image
Mat imgcurrC;// current colored image
void Surf(Mat img) // detect features of the img and fill currFtrM
{
int minHessian = 400;
Ptr<SURF> detector = SURF::create( minHessian );
vector<KeyPoint> keypoints_1;
currFtrM = Mat::zeros(img.size(), CV_8U); // type of mask is CV_8U
Mat roi(currFtrM, cv::Rect(0,0,img.size().width,img.size().height));
roi = Scalar(255, 255, 255);
detector->detect( img, keypoints_1, currFtrM );
Mat img_keypoints_1;
drawKeypoints( img, keypoints_1, img_keypoints_1, Scalar::all(-1), DrawMatchesFlags::DEFAULT );
//-- Show detected (drawn) keypoints
imshow("Keypoints 1", img_keypoints_1 );
}
void Callback(const sensor_msgs::ImageConstPtr& clr, const sensor_msgs::ImageConstPtr& dpt)
{
if(!imgcurr.data || !imgcurrC.data) // first frame
{
// depth image
imgcurr = cv_bridge::toCvShare(dpt, sensor_msgs::image_encodings::TYPE_32FC1)->image;
// colored image
imgcurrC = cv_bridge::toCvShare(clr, "bgr8")->image;
cvtColor(imgcurrC, imgcurrC, COLOR_BGR2GRAY);
//find features in the image
Surf(imgcurrC);
prevFtrM = currFtrM;
//scale color image to size of depth image
resize(imgcurrC,imgcurrC, imgcurr.size());
return;
}
odom = boost::make_shared<rgbd::RgbdOdometry>(imgcurrC, Odometry::DEFAULT_MIN_DEPTH(), Odometry::DEFAULT_MAX_DEPTH(), Odometry::DEFAULT_MAX_DEPTH_DIFF(), std::vector< int >(), std::vector< float >(), Odometry::DEFAULT_MAX_POINTS_PART(), Odometry::RIGID_BODY_MOTION);
// depth image
imgprev = imgcurr;
imgcurr = cv_bridge::toCvShare(dpt, sensor_msgs::image_encodings::TYPE_32FC1)->image;
// colored image
imgprevC = imgcurrC;
imgcurrC = cv_bridge::toCvShare(clr, "bgr8")->image;
cvtColor(imgcurrC, imgcurrC, COLOR_BGR2GRAY);
//scale color image to size of depth image
resize(imgcurrC,imgcurrC, imgcurr.size());
cv::imshow("Color resized", imgcurrC);
tempFtrM = currFtrM;
//detect new features in imgcurrC and save in a vector<Point2f>
Surf( imgcurrC);
prevFtrM = tempFtrM;
//set camera matrix (intrinsics from calibration)
float vals[] = {619.137635, 0., 304.793791, 0., 625.407449, 223.984030, 0., 0., 1.};
const Mat cameraMatrix = Mat(3, 3, CV_32FC1, vals);
odom->setCameraMatrix(cameraMatrix);
bool isSuccess = odom->compute( imgprevC, imgprev, prevFtrM, imgcurrC, imgcurr, currFtrM, Rt, initRt );
if(isSuccess)
cout << "isSuccess " << isSuccess << endl;
}
Update: I calibrated my camera and replaced the camera matrix with real values.
A bit late, but it could still be useful for someone.
It seems to me that you are missing the extrinsic calibration from the calculation: in my experiments, the R200 has a translation component between the RGB and depth cameras that you are not taking into account.
Furthermore, looking at the camera parameters, depth and RGB have different intrinsics and the color frame has a MODIFIED_BROWN_CONRADY lens distortion (though this is minimal); are you undistorting that?
Obviously, I could be wrong if you already do all those steps and save the registered RGB and depth to files.
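A minimal sketch of the undistortion step mentioned above, reusing the intrinsics from the question's camera matrix; the distortion coefficients below are placeholders, not the R200's actual values:

#include <opencv2/opencv.hpp>

// Undistort the color frame before feeding it to the odometry.
cv::Mat undistortColor(const cv::Mat& color)
{
    // Intrinsics taken from the question's camera matrix.
    cv::Mat K = (cv::Mat_<double>(3, 3) << 619.137635, 0., 304.793791,
                                            0., 625.407449, 223.984030,
                                            0., 0., 1.);
    // Placeholder Brown-Conrady coefficients (k1, k2, p1, p2, k3); use your calibration values.
    cv::Mat dist = (cv::Mat_<double>(1, 5) << 0.1, -0.25, 0.001, 0.001, 0.05);
    cv::Mat undistorted;
    cv::undistort(color, undistorted, K, dist);
    return undistorted;
}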
I am new to image processing and I need to calculate the strength of the edges present in an image. Assume a situation where you have an image and you add a blur effect to that image. The edge strengths of these two images are different. I need to calculate that edge strength for both images separately.
So far I have the Canny edge detection of the image using the code below.
Mat src1;
src1 = imread("D.PNG", CV_LOAD_IMAGE_COLOR);
namedWindow("Original image", CV_WINDOW_AUTOSIZE);
imshow("Original image", src1);
Mat gray, edge, draw;
cvtColor(src1, gray, CV_BGR2GRAY);
Canny(gray, edge, 50, 150, 3);
edge.convertTo(draw, CV_8U);
namedWindow("image", CV_WINDOW_AUTOSIZE);
imshow("image", draw);
waitKey(0);
return 0;
Is there any method to calculate the strength of this edge image?
mean will give you the mean value of your image. If you're using Canny as above, you can do:
Scalar pixelMean = mean(draw);
To get the mean of only the edge pixels, you would use the image as the mask as well:
Scalar edgeMean = mean(draw, draw);
Unfortunately, since Canny sets all edge pixels to 255, that mean will always be 255. If edge strength is the measure you're looking for, you'll probably want to use Sobel (after a Gaussian blur) and calculate the gradients to get the relative edge strengths, as in the sketch below.
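For example, a minimal sketch of that Sobel-based measure (the blur kernel size is an arbitrary choice):

#include <opencv2/opencv.hpp>

// Estimate overall edge strength as the mean gradient magnitude.
double edgeStrength(const cv::Mat& gray)
{
    cv::Mat blurred, gx, gy, mag;
    cv::GaussianBlur(gray, blurred, cv::Size(3, 3), 0);   // suppress noise first
    cv::Sobel(blurred, gx, CV_32F, 1, 0);                 // horizontal gradient
    cv::Sobel(blurred, gy, CV_32F, 0, 1);                 // vertical gradient
    cv::magnitude(gx, gy, mag);                           // per-pixel gradient magnitude
    return cv::mean(mag)[0];                              // average edge strength over the image
}

A blurred copy of the same image should then give a noticeably lower value than the original.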
How can I apply a notch filter on an image spectrum using OpenCV 2.4 and C++? I want to calculate the DFT of an image, suppress certain frequencies and calculate the inverse DFT. Can anyone show me some sample code for applying a notch filter in the frequency domain?
EDIT:
Here is what I tried, but the quadrants of the frequency spectrum are not in order, so the origin of the spectrum is not the center of the image. That makes it difficult for me to identify the frequencies to suppress. When swapping quadrants so that the origin is the center, the inverse DFT shows wrong results. Can anyone show me how to do the inverse DFT with swapped quadrants?
I don't understand the number of columns in the frequency images filter1 and filter2 (see code). If I use filter1.cols as the upper bound for u in the for loop, I don't access the right border of the images. filter1 and filter2 seem to have approx. 5000 columns, but the source image has a resolution of 1280x1024 (grayscale). Any thoughts on that?
Any further comments about my code?
Mat img;
img=imread(filename,CV_LOAD_IMAGE_GRAYSCALE);
int M = getOptimalDFTSize( img.rows );
int N = getOptimalDFTSize( img.cols );
Mat padded;
copyMakeBorder(img, padded, 0, M - img.rows, 0, N - img.cols, BORDER_CONSTANT, Scalar::all(0));
Mat planes[] = {Mat_<float>(padded), Mat::zeros(padded.size(), CV_32F)};
Mat complexImg;
merge(planes, 2, complexImg);
dft(complexImg, complexImg,cv::DFT_SCALE|cv::DFT_COMPLEX_OUTPUT);
split(complexImg, planes);
Mat filter1;
planes[0].copyTo(filter1);
Mat filter2;
planes[1].copyTo(filter2);
for( int i = 0; i < filter1.rows; ++i)
{
for(int u=7;u<15;++u)
{
filter1.at<uchar>(i,u)=0;
filter2.at<uchar>(i,u)=0;
}
}
Mat inverse[] = {filter1,filter2};
Mat filterspec;
merge(inverse, 2, filterspec);
cv::Mat inverseTransform;
cv::dft(filterspec, inverseTransform,cv::DFT_INVERSE|cv::DFT_REAL_OUTPUT);
cv::Mat finalImage;
inverseTransform.convertTo(finalImage, CV_8U);
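For reference, the quadrant rearrangement mentioned above is usually done as in the sketch below (the standard swap from the OpenCV DFT tutorial); applying the same swap again before calling dft with DFT_INVERSE restores the original layout, assuming even image dimensions:

// Rearrange the quadrants of a spectrum so that the origin is at the image center.
// Calling it a second time undoes the shift (for even-sized images).
void shiftDFT(cv::Mat& spectrum)
{
    int cx = spectrum.cols / 2;
    int cy = spectrum.rows / 2;
    cv::Mat q0(spectrum, cv::Rect(0, 0, cx, cy));    // top-left
    cv::Mat q1(spectrum, cv::Rect(cx, 0, cx, cy));   // top-right
    cv::Mat q2(spectrum, cv::Rect(0, cy, cx, cy));   // bottom-left
    cv::Mat q3(spectrum, cv::Rect(cx, cy, cx, cy));  // bottom-right

    cv::Mat tmp;
    q0.copyTo(tmp); q3.copyTo(q0); tmp.copyTo(q3);   // swap top-left with bottom-right
    q1.copyTo(tmp); q2.copyTo(q1); tmp.copyTo(q2);   // swap top-right with bottom-left
}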