I have a weird problem: my average gradient magnitude differs depending on whether I use a mask or create a new Mat of just the small ROI. I'll explain the two ways I do this and the two different results I get. Shouldn't I get the same average gradient magnitude either way?
Scenario: Image A is my source/original image of a landscape. I want the average gradient magnitude in region A, with corners (10,100), (100,100), (100,150), (10,150).
Technique 1:
- Create an ROI Mat that contains just region A, so its dimensions are 90 × 50.
- Perform cv::Sobel(), cv::magnitude() then cv::meanStdDev()
- My average gradient magnitude result is 11.34.
Technique 2:
- Create a mask Mat with the same dimensions as Image A, white where region A is. Then create a new Mat that shows only that region of Image A, with the rest black - hopefully this makes sense.
- Perform cv::Sobel(), cv::magnitude(), then cv::meanStdDev() with the mask.
- My average gradient magnitude result is 43.76.
Why the different result?
Below is my code:
static Mat backupSrc;
static Mat curSrc;
// Technique 1
void inspectRegion(const Point& strt, const Point& end) {
curSrc = Mat(backupSrc.size(), CV_8UC3);
cvtColor(backupSrc, curSrc, CV_GRAY2RGB);
Rect region = Rect(strt, end);
Mat regionImg = Mat(curSrc, region);
// Calculate the average gradient magnitude/strength across the image
Mat dX, dY, mag;
Sobel(regionImg, dX, CV_32F, 1, 0);
Sobel(regionImg, dY, CV_32F, 0, 1);
magnitude(dX, dY, mag);
Scalar sMMean, sMStdDev;
meanStdDev(mag, sMMean, sMStdDev);
double magnitudeMean = sMMean[0];
double magnitudeStdDev = sMStdDev[0];
rectangle(curSrc, region, { 0 }, 1);
printf("[Gradient Magnitude Mean: %.3f, Gradient Magnitude Std Dev: %.3f]\n", magnitudeMean, magnitudeStdDev);
}
// Technique 2
void inspectRegion(const std::vector<Point>& pnts) {
curSrc = Mat(backupSrc.size(), CV_8UC3);
cvtColor(backupSrc, curSrc, CV_GRAY2RGB);
std::vector<std::vector<Point>> cPnts;
cPnts.push_back(pnts);
Mat mask = Mat::zeros(curSrc.rows, curSrc.cols, CV_8UC1);
fillPoly(mask, cPnts, { 255 });
Mat regionImg;
curSrc.copyTo(regionImg, mask);
// Calculate the average gradient magnitude/strength across the image
Mat dX, dY, mag;
Sobel(regionImg, dX, CV_32F, 1, 0);
Sobel(regionImg, dY, CV_32F, 0, 1);
magnitude(dX, dY, mag);
Scalar sMMean, sMStdDev;
meanStdDev(mag, sMMean, sMStdDev, mask);
double magnitudeMean = sMMean[0];
double magnitudeStdDev = sMStdDev[0];
polylines(curSrc, pnts, true, { 255 }, 3);
printf("[Gradient Magnitude Mean: %.3f, Gradient Magnitude Std Dev: %.3f]\n", magnitudeMean, magnitudeStdDev);
}
In Technique 2 the gradients around the borders of your region will be very high and will corrupt the calculation.
Consider dilating the mask you use for copying before computing the gradients, so that this spike lies outside the non-dilated mask that you pass into the meanStdDev function.
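A minimal sketch of that idea, reusing mask and curSrc from Technique 2 (the 3-iteration dilation is an arbitrary amount, just enough to push the Sobel border response outside the measured region):
Mat dilatedMask;
dilate(mask, dilatedMask, Mat(), Point(-1, -1), 3); // grow the white region by ~3 px
Mat regionImg;
curSrc.copyTo(regionImg, dilatedMask); // copy with the dilated mask, so the artificial black edge moves outward
Mat dX, dY, mag;
Sobel(regionImg, dX, CV_32F, 1, 0);
Sobel(regionImg, dY, CV_32F, 0, 1);
magnitude(dX, dY, mag);
Scalar sMMean, sMStdDev;
meanStdDev(mag, sMMean, sMStdDev, mask); // but measure statistics only inside the original mask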
Related
I am trying to detect the blur rate of face images with the code below.
cv::Mat greyMat;
cv::Mat laplacianImage;
cv::Mat imageClone = LapMat.clone();
cv::resize(imageClone, imageClone, cv::Size(150, 150), 0, 0, cv::INTER_CUBIC);
cv::cvtColor(imageClone, greyMat, CV_BGR2GRAY);
Laplacian(greyMat, laplacianImage, CV_64F);
cv::Scalar mean, stddev; // 0:1st channel, 1:2nd channel and 2:3rd channel
meanStdDev(laplacianImage, mean, stddev, cv::Mat());
double variance = stddev.val[0] * stddev.val[0];
cv::Mat M = (cv::Mat_<double>(3, 1) << -1, 2, -1);
cv::Mat G = cv::getGaussianKernel(3, -1, CV_64F);
cv::Mat Lx;
cv::sepFilter2D(LapMat, Lx, CV_64F, M, G);
cv::Mat Ly;
cv::sepFilter2D(LapMat, Ly, CV_64F, G, M);
cv::Mat FM = cv::abs(Lx) + cv::abs(Ly);
double focusMeasure = cv::mean(FM).val[0];
return focusMeasure;
It sometimes gives poor results, as in the attached picture.
Is there a best-practice way to detect blurry faces?
I attached an example image that scores high with the above code, which is wrong.
Best
I'm not sure how you are interpreting your results. To measure blur, you usually take the output of the Blur Detector (a number) and compare it against a threshold value, then determine whether the input is, in fact, blurry or not. I don't see such a comparison in your code.
There are several ways to measure "blurriness", or rather, sharpness. Let's take a look at one. It involves computing the variance of the Laplacian and then comparing it to an expected value. This is the code:
//read the image and convert it to grayscale:
cv::Mat inputImage = cv::imread( "dog.png" );
cv::Mat gray;
cv::cvtColor( inputImage, gray, cv::COLOR_BGR2GRAY ); // imread loads BGR
//Cool, let's compute the laplacian of the gray image:
cv::Mat laplacianImage;
cv::Laplacian( gray, laplacianImage, CV_64F );
//Prepare to compute the mean and standard deviation of the laplacian:
cv::Scalar mean, stddev;
cv::meanStdDev( laplacianImage, mean, stddev, cv::Mat() );
//Let’s compute the variance:
double variance = stddev.val[0] * stddev.val[0];
Up until this point, we've effectively calculated the variance of the Laplacian, but we still need to compare against a threshold:
double blurThreshold = 300;
if ( variance <= blurThreshold ) {
std::cout<<"Input image is blurry!"<<std::endl;
} else {
std::cout<<"Input image is sharp"<<std::endl;
}
Let’s check out the results. These are my test images. I've printed the variance value in the lower-left corner of the images. The threshold value is 300, blue text is within limits, red text is below.
I followed these steps:
1. Calculated dft of image
2. Calculated dft of kernel (but 1st padded it to size of image)
3. Multiplied real and imaginary parts of both dft individually
4. Calculated inverse dft
I tried to display the images at each intermediate step, but the final image comes out almost black except at the corners.
(Images: the Fourier-transform output after multiplication, its inverse DFT output, and the input image.)
#include <iostream>
#include <stdlib.h>
#include <opencv2/opencv.hpp>
#include <stdio.h>
int r=100;
#define SIGMA_CLIP 6.0f
using namespace cv;
using namespace std;
void updateResult(Mat complex)
{
Mat work;
idft(complex, work);
Mat planes[] = {Mat::zeros(complex.size(), CV_32F), Mat::zeros(complex.size(), CV_32F)};
split(work, planes); // planes[0] = Re(DFT(I)), planes[1] = Im(DFT(I))
magnitude(planes[0], planes[1], work); // === sqrt(Re(DFT(I))^2 + Im(DFT(I))^2)
normalize(work, work, 0, 1, NORM_MINMAX);
imshow("result", work);
}
void shift(Mat magI) {
// crop if it has an odd number of rows or columns
magI = magI(Rect(0, 0, magI.cols & -2, magI.rows & -2));
int cx = magI.cols/2;
int cy = magI.rows/2;
Mat q0(magI, Rect(0, 0, cx, cy)); // Top-Left - Create a ROI per quadrant
Mat q1(magI, Rect(cx, 0, cx, cy)); // Top-Right
Mat q2(magI, Rect(0, cy, cx, cy)); // Bottom-Left
Mat q3(magI, Rect(cx, cy, cx, cy)); // Bottom-Right
Mat tmp; // swap quadrants (Top-Left with Bottom-Right)
q0.copyTo(tmp);
q3.copyTo(q0);
tmp.copyTo(q3);
q1.copyTo(tmp); // swap quadrant (Top-Right with Bottom-Left)
q2.copyTo(q1);
tmp.copyTo(q2);
}
Mat updateMag(Mat complex )
{
Mat magI;
Mat planes[] = {Mat::zeros(complex.size(), CV_32F), Mat::zeros(complex.size(), CV_32F)};
split(complex, planes); // planes[0] = Re(DFT(I)), planes[1] = Im(DFT(I))
magnitude(planes[0], planes[1], magI); // sqrt(Re(DFT(I))^2 + Im(DFT(I))^2)
// switch to logarithmic scale: log(1 + magnitude)
magI += Scalar::all(1);
log(magI, magI);
shift(magI);
normalize(magI, magI, 1, 0, NORM_INF); // Transform the matrix with float values into a
return magI; // viewable image form (float between values 0 and 1).
//imshow("spectrum", magI);
}
Mat createGausFilterMask(Size imsize, int radius) {
// call openCV gaussian kernel generator
double sigma = (r/SIGMA_CLIP+0.5f);
Mat kernelX = getGaussianKernel(2*radius+1, sigma, CV_32F);
Mat kernelY = getGaussianKernel(2*radius+1, sigma, CV_32F);
// create 2d gaus
Mat kernel = kernelX * kernelY.t();
int w = imsize.width-kernel.cols;
int h = imsize.height-kernel.rows;
int r = w/2;
int l = imsize.width-kernel.cols -r;
int b = h/2;
int t = imsize.height-kernel.rows -b;
Mat ret;
copyMakeBorder(kernel,ret,t,b,l,r,BORDER_CONSTANT,Scalar::all(0));
return ret;
}
//code reference https://docs.opencv.org/2.4/doc/tutorials/core/discrete_fourier_transform/discrete_fourier_transform.html
int main( int argc, char** argv )
{
String file;
file = "lena.png";
Mat image = imread(file, CV_LOAD_IMAGE_GRAYSCALE);
Mat padded;
int m = getOptimalDFTSize( image.rows );
int n = getOptimalDFTSize( image.cols );
copyMakeBorder(image, padded, 0, m - image.rows, 0, n -image.cols, BORDER_CONSTANT, Scalar::all(0));//expand input image to optimal size , on the border add zero values
Mat planes[] = {Mat_<float>(padded), Mat::zeros(padded.size(), CV_32F)};
Mat complexI;
merge(planes, 2, complexI);
dft(complexI, complexI); //computing dft
split(complexI, planes); //image converted to complex and real dft here
Mat mask = createGausFilterMask(padded.size(),r ); // Forming the gaussian filter
Mat mplane[] = {Mat_<float>(mask), Mat::zeros(mask.size(), CV_32F)};
Mat kernelcomplex;
merge(mplane, 2, kernelcomplex);
dft(kernelcomplex, kernelcomplex);
split(kernelcomplex, mplane);// splitting the dft of kernel to real and complex
mplane[1]=mplane[0]; //overwriting imaginary values with real values of kernel dft
Mat kernel_spec;
merge(mplane, 2, kernel_spec);
mulSpectrums(complexI, kernel_spec, complexI, DFT_ROWS);
Mat magI=updateMag(complexI);
namedWindow( "image fourier", CV_WINDOW_AUTOSIZE );
imshow("spectrum magnitude", magI);
updateResult(complexI); //converting to viewable form, computing idft
waitKey(0);
return 0;
}
Which step is going wrong? Or am I missing some concept?
Edit: I revised the code with Cris's help and it now works perfectly.
There are two immediately apparent issues:
The Gaussian is real-valued and symmetric. Its Fourier transform should be too. If the DFT of your kernel has a non-zero imaginary component, you're doing something wrong.
Likely, what you are doing wrong is that your kernel has its origin in the middle of the image, rather than at the top-left sample. This is the same issue as in this other question. The solution is to use the equivalent of MATLAB's ifftshift, an implementation of which is shown in the OpenCV documentation ("step 6, Crop and rearrange").
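As a quick sanity check (a sketch reusing the question's kernelcomplex and mplane variables): after the DFT of a correctly shifted kernel, the imaginary plane should be essentially zero everywhere.
split(kernelcomplex, mplane);
double minIm, maxIm;
minMaxLoc(mplane[1], &minIm, &maxIm);
printf("imaginary part of kernel DFT: min %g, max %g\n", minIm, maxIm); // expect values near 0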
To apply the convolution, you need to multiply the two DFTs together, not the real parts and imaginary parts of the DFTs separately. Multiplying two complex numbers a+ib and c+id gives (ac-bd) + i(ad+bc), not ac + i(bd).
But since the DFT of your kernel should be real-valued only, you can simply multiply the real component of the kernel with both the real and imaginary components of the image: (a+ib)c = ac+ibc.
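In terms of the question's variables, that per-element multiplication could look like the following sketch, which scales both channels of the image spectrum by the real plane of the kernel spectrum instead of overwriting the imaginary plane:
split(complexI, planes);              // planes[0] = Re(image DFT), planes[1] = Im(image DFT)
planes[0] = planes[0].mul(mplane[0]); // a*c
planes[1] = planes[1].mul(mplane[0]); // b*c
merge(planes, 2, complexI);           // (a+ib)*c = ac + ibc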
It seems very roundabout what you are doing with the complex-valued images. Why not let OpenCV handle all of that for you? You can probably* just do something like this:
Mat image = imread(file, CV_LOAD_IMAGE_GRAYSCALE);
// Expand input image to optimal size, on the border add zero values
Mat padded;
int m = getOptimalDFTSize(image.rows);
int n = getOptimalDFTSize(image.cols);
copyMakeBorder(image, padded, 0, m - image.rows, 0, n -image.cols, BORDER_CONSTANT, Scalar::all(0));
// Computing DFT (dft() needs floating-point input, so convert first)
Mat floatPadded;
padded.convertTo(floatPadded, CV_32F);
Mat DFTimage;
dft(floatPadded, DFTimage);
// Forming the Gaussian filter
Mat kernel = createGausFilterMask(padded.size(), r);
shift(kernel);
Mat DFTkernel;
dft(kernel, DFTkernel);
// Convolution
mulSpectrums(DFTimage, DFTkernel, DFTimage, 0); // 0: treat inputs as full 2D spectra, not independent rows
// Display Fourier-domain result
Mat magI = updateMag(DFTimage);
imshow("spectrum magnitude", magI);
// IDFT
Mat work;
idft(DFTimage, work); // <- NOTE! Don't inverse transform the log-transformed magnitude image!
Note that the Fourier-Domain result is actually a special representation of the complex-conjugate symmetric DFT, intended to save space and computations. To compute the full complex output, add the DFT_COMPLEX_OUTPUT to the call to dft, and DFT_REAL_OUTPUT to the call to idft (this latter then assumes symmetry, and produces a real-valued output, saving you the hassle of computing the magnitude).
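For example, a minimal sketch of those flags (DFT_SCALE is added so the inverse transform is normalized; floatPadded is the float-converted input from above):
dft(floatPadded, DFTimage, DFT_COMPLEX_OUTPUT);    // full two-channel complex spectrum
// ... filter in the Fourier domain as above ...
idft(DFTimage, work, DFT_REAL_OUTPUT | DFT_SCALE); // real-valued, properly scaled output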
* I say probably because I haven't compiled any of this... If there's something wrong, please let me know, or edit the answer and fix it.
I am writing my thesis, and one part of the task is to interpolate between images to create intermediate images. The work has to be done in C++ using OpenCV 2.4.13.
The best solution I've found so far is computing optical flow and remapping. But this solution has two problems that I am unable to solve on my own:
There are pixels that should go out of view (bottom of image for example), but they do not.
Some pixels do not move, creating a distorted result (upper right part of the couch)
What has made the flow&remap approach better:
Equalizing the intensity. This I'm allowed to do. You can check the result by comparing the couch shape (centre of the remapped image and the original).
Reducing the size of the image. This I'm NOT allowed to do, as I need output of the same size. Is there a way to rescale the optical flow result to get the bigger remapped image?
Other approaches tried and failed:
cuda::interpolateFrames. Creates incredible ghosting.
blending images with cv::addWeighted. Even worse ghosting.
Below is the code I am using at the moment. And images: dropbox link with input and result images
int main(){
cv::Mat second, second_gray, cutout, cutout_gray, flow_n;
second = cv::imread( "/home/zuze/Desktop/forstack/second_L.jpg", 1 );
cutout = cv::imread("/home/zuze/Desktop/forstack/cutout_L.png", 1);
cvtColor(second, second_gray, CV_BGR2GRAY);
cvtColor(cutout, cutout_gray, CV_RGB2GRAY );
///----------COMPUTE OPTICAL FLOW AND REMAP -----------///
cv::calcOpticalFlowFarneback( second_gray, cutout_gray, flow_n, 0.5, 3, 15, 3, 5, 1.2, 0 );
cv::Mat remap_n; //looks like it's drunk.
createNewFrame(remap_n, flow_n, 1, second, cutout );
cv::Mat cflow_n;
cflow_n = cutout_gray;
cvtColor(cflow_n, cflow_n, CV_GRAY2BGR);
drawOptFlowMap(flow_n, cflow_n, 10, CV_RGB(0,255,0));
///--------EQUALIZE INTENSITY, COMPUTE OPTICAL FLOW AND REMAP ----///
cv::Mat cutout_eq, second_eq;
cutout_eq= equalizeIntensity(cutout);
second_eq= equalizeIntensity(second);
cv::Mat flow_eq, cutout_eq_gray, second_eq_gray, cflow_eq;
cvtColor( cutout_eq, cutout_eq_gray, CV_RGB2GRAY );
cvtColor( second_eq, second_eq_gray, CV_RGB2GRAY );
cv::calcOpticalFlowFarneback( second_eq_gray, cutout_eq_gray, flow_eq, 0.5, 3, 15, 3, 5, 1.2, 0 );
cv::Mat remap_eq;
createNewFrame(remap_eq, flow_eq, 1, second, cutout_eq );
cflow_eq = cutout_eq_gray;
cvtColor(cflow_eq, cflow_eq, CV_GRAY2BGR);
drawOptFlowMap(flow_eq, cflow_eq, 10, CV_RGB(0,255,0));
cv::imshow("remap_n", remap_n);
cv::imshow("remap_eq", remap_eq);
cv::imshow("cflow_eq", cflow_eq);
cv::imshow("cflow_n", cflow_n);
cv::imshow("sec_eq", second_eq);
cv::imshow("cutout_eq", cutout_eq);
cv::imshow("cutout", cutout);
cv::imshow("second", second);
cv::waitKey();
return 0;
}
Function for remapping, to be used for intermediate image creation:
void createNewFrame(cv::Mat & frame, const cv::Mat & flow, float shift, cv::Mat & prev, cv::Mat &next){
cv::Mat mapX(flow.size(), CV_32FC1);
cv::Mat mapY(flow.size(), CV_32FC1);
cv::Mat newFrame;
for (int y = 0; y < mapX.rows; y++){
for (int x = 0; x < mapX.cols; x++){
cv::Point2f f = flow.at<cv::Point2f>(y, x);
mapX.at<float>(y, x) = x + f.x*shift;
mapY.at<float>(y, x) = y + f.y*shift;
}
}
remap(next, newFrame, mapX, mapY, cv::INTER_LANCZOS4);
frame = newFrame;
cv::waitKey();
}
Function to display optical flow in vector form:
void drawOptFlowMap (const cv::Mat& flow, cv::Mat& cflowmap, int step, const cv::Scalar& color) {
cv::Point2f sum; //zz
std::vector<float> all_angles;
int count=0; //zz
float angle, sum_angle=0; //zz
for(int y = 0; y < cflowmap.rows; y += step)
for(int x = 0; x < cflowmap.cols; x += step)
{
const cv::Point2f& fxy = flow.at< cv::Point2f>(y, x);
if((fxy.x != fxy.x)||(fxy.y != fxy.y)){ //zz, for SimpleFlow
//std::cout<<"meh"; //do nothing
}
else{
line(cflowmap, cv::Point(x,y), cv::Point(cvRound(x+fxy.x), cvRound(y+fxy.y)),color);
circle(cflowmap, cv::Point(cvRound(x+fxy.x), cvRound(y+fxy.y)), 1, color, -1);
sum +=fxy;//zz
angle = atan2(fxy.y,fxy.x);
sum_angle +=angle;
all_angles.push_back(angle*180/M_PI);
count++; //zz
}
}
}
Function to equalize intensity of images, for better results:
cv::Mat equalizeIntensity(const cv::Mat& inputImage){
if(inputImage.channels() >= 3){
cv::Mat ycrcb;
cvtColor(inputImage,ycrcb,CV_BGR2YCrCb);
std::vector<cv::Mat> channels;
cv::split(ycrcb,channels);
cv::equalizeHist(channels[0], channels[0]);
cv::Mat result;
cv::merge(channels,ycrcb);
cvtColor(ycrcb,result,CV_YCrCb2BGR);
return result;
}
return cv::Mat();
}
So to recap, my questions:
Is it possible to resize the Farneback optical flow to apply it to a 2× bigger image?
How do I deal with pixels that go out of view, like at the bottom of my images (the brown wooden part should disappear)?
How do I deal with the distortion created where optical flow wasn't computed for some pixels while many surrounding pixels have motion? (Upper right of the couch; the lion figurine has a ghost hand in the remapped image.)
With OpenCV's Farneback optical flow, you will only get a rough estimation of pixel displacement, hence the distortions that appear in the result images.
I don't think optical flow is the way to go for what you are trying to achieve, IMHO. Instead, I'd recommend having a look at image/pixel registration, for instance here: http://docs.opencv.org/trunk/db/d61/group__reg.html
Image/pixel registration is the science of matching the pixels of two images. It is an area of active research; this complex, non-trivial problem has not yet been accurately solved.
I want to use a homomorphic filter on underwater images. I tried to code it from examples found on the internet, but I always get a black image... I tried to normalize my result, but that didn't work.
Here are my functions:
void HomomorphicFilter::butterworth_homomorphic_filter(Mat &dft_Filter, int D, int n, float high_h_v_TB, float low_h_v_TB)
{
Mat single(dft_Filter.rows, dft_Filter.cols, CV_32F);
Point centre = Point(dft_Filter.rows/2, dft_Filter.cols/2);
double radius;
float upper = (high_h_v_TB * 0.01);
float lower = (low_h_v_TB * 0.01);
//essentially create a Butterworth highpass filter
//with additional scaling and offset
for(int i = 0; i < dft_Filter.rows; i++)
{
for(int j = 0; j < dft_Filter.cols; j++)
{
radius = (double) sqrt(pow((i - centre.x), 2.0) + pow((double) (j - centre.y), 2.0));
single.at<float>(i,j) =((upper - lower) * (1/(1 + pow((double) (D/radius), (double) (2*n))))) + lower;
}
}
//normalize(single, single, 0, 1, CV_MINMAX);
//Apply filter
mulSpectrums( dft_Filter, single, dft_Filter, 0);
}
void HomomorphicFilter::Shifting_DFT(Mat &fImage)
{
//For visualization purposes we may also rearrange the quadrants of the result, so that the origin (0,0), corresponds to the image center.
Mat tmp, q0, q1, q2, q3;
/*First crop the image if it has an odd number of rows or columns.
ANDing with -2 (two's complement: -2 = 111...10) clears bit 2^0, so an odd row or column count is rounded down to the even number below.*/
fImage = fImage(Rect(0, 0, fImage.cols & -2, fImage.rows & -2));
int cx = fImage.cols/2;
int cy = fImage.rows/2;
/*Rearrange the quadrants of Fourier image so that the origin is at the image center*/
q0 = fImage(Rect(0, 0, cx, cy));
q1 = fImage(Rect(cx, 0, cx, cy));
q2 = fImage(Rect(0, cy, cx, cy));
q3 = fImage(Rect(cx, cy, cx, cy));
/*We reverse each quadrant of the frame with its other quadrant diagonally opposite*/
/*We reverse q0 and q3*/
q0.copyTo(tmp);
q3.copyTo(q0);
tmp.copyTo(q3);
/*We reverse q1 and q2*/
q1.copyTo(tmp);
q2.copyTo(q1);
tmp.copyTo(q2);
}
void HomomorphicFilter::Fourier_Transform(Mat frame_bw, Mat &image_phase, Mat &image_mag)
{
Mat frame_log;
frame_bw.convertTo(frame_log, CV_32F);
/*Take the natural log of the input (compute log(1 + Mag))*/
frame_log += 1;
log( frame_log, frame_log); // log(1 + Mag)
/*2. Expand the image to an optimal size.
The performance of the DFT depends on the image size. It tends to be fastest for image sizes that are multiples of 2, 3 or 5.
We can use the copyMakeBorder() function to expand the borders of an image.*/
Mat padded;
int M = getOptimalDFTSize(frame_log.rows);
int N = getOptimalDFTSize(frame_log.cols);
copyMakeBorder(frame_log, padded, 0, M - frame_log.rows, 0, N - frame_log.cols, BORDER_CONSTANT, Scalar::all(0));
/*Make place for both the complex and real values.
The result of the DFT is complex, i.e. two images (real + imaginary), and the frequency domain's range is much larger than the spatial one, so we need to store it in float!
That's why we convert our input image "padded" to float and expand it with another channel to hold the complex values.
planes is an array of 2 matrices (planes[0] = real part, planes[1] = imaginary part)*/
Mat image_planes[] = {Mat_<float>(padded), Mat::zeros(padded.size(), CV_32F)};
Mat image_complex;
/*Creates one multichannel array out of several single-channel ones.*/
merge(image_planes, 2, image_complex);
/*Make the DFT.
The result of the DFT is a complex image: "image_complex"*/
dft(image_complex, image_complex);
/***************************/
//Create spectrum magnitude//
/***************************/
/*Transform the real and complex values to magnitude.
NB: we separate the real part from the imaginary part*/
split(image_complex, image_planes);
//Starting with this part we have the real part of the image in planes[0] and the imaginary in planes[1]
phase(image_planes[0], image_planes[1], image_phase);
magnitude(image_planes[0], image_planes[1], image_mag);
}
void HomomorphicFilter::Inv_Fourier_Transform(Mat image_phase, Mat image_mag, Mat &inverseTransform)
{
/*Calculates x and y coordinates of 2D vectors from their magnitude and angle.*/
Mat result_planes[2];
polarToCart(image_mag, image_phase, result_planes[0], result_planes[1]);
/*Creates one multichannel array out of several single-channel ones.*/
Mat result_complex;
merge(result_planes, 2, result_complex);
/*Make the IDFT*/
dft(result_complex, inverseTransform, DFT_INVERSE|DFT_REAL_OUTPUT);
/*Take the exponential*/
exp(inverseTransform, inverseTransform);
}
And here is my main code:
/**************************/
/****Homomorphic filter****/
/**************************/
/**********************************************/
//Getting the phase and magnitude of the image//
/**********************************************/
Mat image_phase, image_mag;
HomomorphicFilter().Fourier_Transform(frame_bw, image_phase, image_mag);
/******************/
//Shifting the DFT//
/******************/
HomomorphicFilter().Shifting_DFT(image_mag);
/********************************/
//Butterworth homomorphic filter//
/********************************/
int high_h_v_TB = 101;
int low_h_v_TB = 99;
int D = 10;// radius of band pass filter parameter
int order = 2;// order of band pass filter parameter
HomomorphicFilter().butterworth_homomorphic_filter(image_mag, D, order, high_h_v_TB, low_h_v_TB);
/******************/
//Shifting the DFT//
/******************/
HomomorphicFilter().Shifting_DFT(image_mag);
/*******************************/
//Inv Discret Fourier Transform//
/*******************************/
Mat inverseTransform;
HomomorphicFilter().Inv_Fourier_Transform(image_phase, image_mag, inverseTransform);
imshow("Result", inverseTransform);
If someone can explain my mistakes, I would appreciate it a lot :). Thank you, and sorry for my poor English.
EDIT: Now I have something, but it's not perfect... I modified two things in my code.
I applied log(mag + 1) after dft and not on the input image.
I removed exp() after idft.
Here are the results (I can post only 2 links...):
my input image :
final result :
After looking at several topics, I find similar results for my Butterworth filter and for my magnitude after DFT/shifting.
Unfortunately, my final result isn't very good. Why do I have so much "noise"?
I used this method to balance illumination when the camera changed and the image became dark.
I tried filtering the image in the frequency domain with an FFT. It works, but it takes too much time on a 2750×3680 RGB image, so I do it in the spatial domain instead.
Here is my code:
//IplImage *imgSrcI=cvLoadImage("E:\\lean.jpg",-1);
Mat imgSrcM(imgSrc,true);
Mat imgDstM;
Mat imgGray;
Mat imgHls;
vector<Mat> vHls;
Mat imgTemp1=Mat::zeros(imgSrcM.size(),CV_64FC1);
Mat imgTemp2=Mat::zeros(imgSrcM.size(),CV_64FC1);
if(imgSrcM.channels()==1)
{
imgGray=imgSrcM.clone();
}
else if (imgSrcM.channels()==3)
{
cvtColor(imgSrcM, imgHls, CV_BGR2HLS);
split(imgHls, vHls);
imgGray=vHls.at(1);
}
else
{
return -1;
}
imgGray.convertTo(imgTemp1,CV_64FC1);
imgTemp1=imgTemp1+0.0001;
log(imgTemp1,imgTemp1);
GaussianBlur(imgTemp1, imgTemp2, Size(21, 21), 0.1, 0.1, BORDER_DEFAULT);// imgTemp2 is the low-pass filtered result
imgTemp1 = (imgTemp1 - imgTemp2);// imgTemp1 is the high-pass part: log image minus low-pass
addWeighted(imgTemp2, 0.7, imgTemp1, 1.4, 1, imgTemp1, -1);// imgTemp1 now has low frequencies suppressed and high frequencies boosted
exp(imgTemp1,imgTemp1);
normalize(imgTemp1,imgTemp1,0,1,NORM_MINMAX);
imgTemp1=imgTemp1*255;
imgTemp1.convertTo(imgGray, CV_8UC1);
//imwrite("E:\\leanImgGray.jpg",imgGray);
if (imgSrcM.channels()==3)
{
vHls.at(1)=imgGray;
merge(vHls,imgHls);
cvtColor(imgHls, imgDstM, CV_HLS2BGR);
}
else if (imgSrcM.channels()==1)
{
imgDstM=imgGray.clone();
}
cvCopy(&(IplImage)imgDstM,imgDst);
//cvShowImage("jpg",imgDst);
return 0;
I took your code, corrected it in a few places, and got decent results as the homomorphic filter output.
Here are the corrections that I made.
1) Instead of working just on the image_mag, work on the full output of the FFT.
2) Your filter values of high_h_v_TB = 101 and low_h_v_TB = 99 had virtually no filtering effect.
Here are the values I used.
int high_h_v_TB = 100;
int low_h_v_TB = 20;
int D = 10;// radius of band pass filter parameter
int order = 4;
Here is my main code:
//float_img == grayscale image in 0-1 scale
Mat log_img;
log(float_img, log_img);
Mat fft_phase, fft_mag;
Mat fft_complex;
HomomorphicFilter::Fourier_Transform(log_img, fft_complex);
HomomorphicFilter::ShiftFFT(fft_complex);
int high_h_v_TB = 100;
int low_h_v_TB = 30;
int D = 10;// radius of band pass filter parameter
int order = 4;
//get a butterworth filter of same image size as the input image
//dont call mulSpectrums yet, just get the filter of correct size
Mat butterWorthFreqDomain;
HomomorphicFilter::ButterworthFilter(fft_complex.size(), butterWorthFreqDomain, D, order, high_h_v_TB, low_h_v_TB);
//this should match fft_complex in size and type
//and is what we will be using for 'mulSpectrums' call
Mat butterworth_complex;
//make two channels to match fft_complex
Mat butterworth_channels[] = {Mat_<float>(butterWorthFreqDomain), Mat::zeros(butterWorthFreqDomain.size(), CV_32F)}; // real channel = the filter itself
merge(butterworth_channels, 2, butterworth_complex);
//do mulSpectrums on the full fft
mulSpectrums(fft_complex, butterworth_complex, fft_complex, 0);
//shift back the output
HomomorphicFilter::ShiftFFT(fft_complex);
Mat log_img_out;
HomomorphicFilter::Inv_Fourier_Transform(fft_complex, log_img_out);
Mat float_img_out;
exp(log_img_out, float_img_out);
//float_img_out is gray in 0-1 range
Here is my output.
I want to use OpenCV's Canny edge detector, such as is outlined in this question. For example:
cv::Canny(image,contours,10,350);
However, I wish not only to get the final thresholded image out, but also the detected edge angle at each pixel. Is this possible in OpenCV?
Canny doesn't give you this directly.
However, you can calculate the angle from the Sobel derivatives, which Canny() uses internally.
A sketch (the original pseudocode made into compilable C++; contours is the CV_8U edge map produced by Canny):
cv::Mat contours, dx, dy;
cv::Canny(image, contours, 10, 350);
cv::Sobel(image, dx, CV_64F, 1, 0, 3, 1, 0, cv::BORDER_REPLICATE);
cv::Sobel(image, dy, CV_64F, 0, 1, 3, 1, 0, cv::BORDER_REPLICATE);
cv::Mat angle = cv::Mat::zeros(image.size(), CV_64F);
for (int i = 0; i < contours.rows; i++)
    for (int j = 0; j < contours.cols; j++)
        if (contours.at<uchar>(i, j) > 0) // only at detected edge pixels
            angle.at<double>(i, j) = std::atan2(dy.at<double>(i, j), dx.at<double>(i, j));
Instead of using a for loop, you can pass the dx and dy gradients to the phase() function, which returns an image of angle directions; then pass that to the applyColorMap() function and mask it with the edges, so the background is black.
Here is the workflow:
Get the angles
Mat angles;
phase(dx, dy, angles, true);
The true argument indicates that the angles are returned in degrees.
Change the range of angles to 0-255 so you can convert to CV_8U without data loss
angles = angles / 360 * 255;
Note that angles is still of type CV_64F, since dx and dy come from the Sobel function.
Convert to CV_8U
angles.convertTo(angles, CV_8U);
Apply a color map of your choice
applyColorMap(angles, angles, COLORMAP_HSV);
In this case I chose the HSV colormap. See this for more info: https://www.learnopencv.com/applycolormap-for-pseudocoloring-in-opencv-c-python/
Apply the edges mask so the background is black
Mat colored;
angles.copyTo(colored, contours);
Finally, display the image :D
imshow("Colored angles", colored);
In case your source is a video or webcam, you must additionally clear the colored image before applying the edge mask, to prevent accumulation:
colored.release();
angles.copyTo(colored, contours);
Full code here:
Mat angles, colored;
phase(dx, dy, angles, true);
angles = angles / 360 * 255;
angles.convertTo(angles, CV_8U);
applyColorMap(angles, angles, COLORMAP_HSV);
colored.release();
angles.copyTo(colored, contours);
imshow("Colored angles", colored);