Shouldn't GaussianBlur be symmetric? - c++

I expected a Gaussian Blur operation to be symmetric, but using the OpenCV 2.4.11 GaussianBlur I am getting differences.
Here's an example. I apply a GaussianBlur to an image, and to a flipped version of the image. I've separately verified the flip operation doesn't change the image pixel values (not shown). When I flip the blurred image back, I expected it to be the same as the blur of the original, but the diff shows a lot of small differences (between 0.0 and 6.103515625e-005). I know that's small, but it has a knock-on effect in my subsequent processing.
The Gaussian Kernel is symmetric, so the result should be the same. Is this simply a rounding error in the implementation?
#include <opencv2/opencv.hpp>

int main(int, char **)
{
    // e.g. 2008_005541.jpg from the VOC2012 dataset
    char const * const filename = "...";
    float const sig_diff = 1.24899971f;

    cv::Mat image = cv::imread(filename, cv::IMREAD_GRAYSCALE);

    // Blur the original image
    cv::Mat gray_fpt;
    image.convertTo(gray_fpt, cv::DataType<float>::type, 1, 0);
    GaussianBlur(gray_fpt, gray_fpt, cv::Size(), sig_diff, sig_diff);

    // Blur the horizontally flipped image, then flip the result back
    cv::Mat mirror;
    flip(image, mirror, 1);
    cv::Mat mirror_gray_fpt;
    mirror.convertTo(mirror_gray_fpt, cv::DataType<float>::type, 1, 0);
    GaussianBlur(mirror_gray_fpt, mirror_gray_fpt, cv::Size(), sig_diff, sig_diff);
    flip(mirror_gray_fpt, mirror_gray_fpt, 1);

    // Compare the two results
    cv::Mat diff = abs(gray_fpt - mirror_gray_fpt);
    double minval, maxval;
    minMaxLoc(diff, &minval, &maxval);
    // minval = 0.0;
    // maxval = 6.103515625e-005;

    // easier to visualise the differences with this:
    normalize(diff, diff, 0.0, 1.0, cv::NORM_MINMAX, CV_32FC1);
    return 0;
}
EDIT: I changed the type from cv::DataType<float>::type to cv::DataType<double>::type and now the max error is 1.1368683772161603e-013, so rounding seems to be the problem.
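Concretely, the change described in this EDIT amounts to converting to CV_64F before blurring (a sketch against the same main() as above; only the relevant lines are shown):

// Work in double precision instead of float (and likewise for mirror_gray_fpt):
image.convertTo(gray_fpt, cv::DataType<double>::type, 1, 0);
GaussianBlur(gray_fpt, gray_fpt, cv::Size(), sig_diff, sig_diff);
// With this change the maximum absolute difference drops to ~1.14e-13.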

Changing the code above to call gaussian_blur (below) instead of GaussianBlur produces no differences in the example images I've tested so far.
From this it appears that if the Gaussian operation works in double precision internally, the single-precision output shows no error. That seems like a nice solution to my problem.
// Perform the Gaussian blur in double precision and convert back.
// Note: SIFT_FIXPT_SCALE is a constant taken from OpenCV's SIFT source
// (1 in the floating-point code path); for any other value, the conversion
// back would need to divide by it rather than multiply.
void gaussian_blur(
    cv::Mat const &src, cv::Mat &dst,
    cv::Size ksize, double sigmaX, double sigmaY=0,
    int borderType=cv::BORDER_DEFAULT)
{
    cv::Mat src_dp;
    src.convertTo(src_dp, cv::DataType<double>::type, SIFT_FIXPT_SCALE, 0);

    cv::Mat dst_dp;
    GaussianBlur(src_dp, dst_dp, ksize, sigmaX, sigmaY, borderType);

    dst_dp.convertTo(dst, src.type(), SIFT_FIXPT_SCALE, 0);
}
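For illustration, the wrapper is a drop-in replacement for the direct calls in the example above (same variables as in main()):

gaussian_blur(gray_fpt, gray_fpt, cv::Size(), sig_diff, sig_diff);
gaussian_blur(mirror_gray_fpt, mirror_gray_fpt, cv::Size(), sig_diff, sig_diff);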

Related

How to detect Blur rate of a face effectively in c++?

I am trying to detect the blur rate of face images with the code below.
// LapMat is the input face image (assumed BGR)
cv::Mat greyMat;
cv::Mat laplacianImage;
cv::Mat imageClone = LapMat.clone();
cv::resize(imageClone, imageClone, cv::Size(150, 150), 0, 0, cv::INTER_CUBIC);
cv::cvtColor(imageClone, greyMat, CV_BGR2GRAY);

// Variance of the Laplacian (computed but not used below)
Laplacian(greyMat, laplacianImage, CV_64F);
cv::Scalar mean, stddev; // 0:1st channel, 1:2nd channel and 2:3rd channel
meanStdDev(laplacianImage, mean, stddev, cv::Mat());
double variance = stddev.val[0] * stddev.val[0];

// Modified-Laplacian focus measure
cv::Mat M = (cv::Mat_<double>(3, 1) << -1, 2, -1);
cv::Mat G = cv::getGaussianKernel(3, -1, CV_64F);
cv::Mat Lx;
cv::sepFilter2D(LapMat, Lx, CV_64F, M, G);
cv::Mat Ly;
cv::sepFilter2D(LapMat, Ly, CV_64F, G, M);
cv::Mat FM = cv::abs(Lx) + cv::abs(Ly);
double focusMeasure = cv::mean(FM).val[0];
return focusMeasure;
It sometimes gives poor results, as in the attached picture.
Is there a best-practice way to detect blurry faces?
I attached an example image that scores high with the above code, which is a false positive.
Best
I'm not sure how you are interpreting your results. To measure blur, you usually take the output of the blur detector (a number) and compare it against a threshold value, then determine whether the input is, in fact, blurry or not. I don't see such a comparison in your code.
There are several ways to measure "blurriness", or rather, sharpness. Let's take a look at one. It involves computing the variance of the Laplacian and then comparing it to an expected value. This is the code:
//read the image and convert it to grayscale:
cv::Mat inputImage = cv::imread( "dog.png" );
cv::Mat gray;
cv::cvtColor( inputImage, gray, cv::COLOR_RGB2GRAY );
//Cool, let's compute the laplacian of the gray image:
cv::Mat laplacianImage;
cv::Laplacian( gray, laplacianImage, CV_64F );
//Prepare to compute the mean and standard deviation of the laplacian:
cv::Scalar mean, stddev;
cv::meanStdDev( laplacianImage, mean, stddev, cv::Mat() );
//Let’s compute the variance:
double variance = stddev.val[0] * stddev.val[0];
Up until this point, we've effectively calculated the variance of the Laplacian, but we still need to compare against a threshold:
double blurThreshold = 300;
if ( variance <= blurThreshold ) {
    std::cout << "Input image is blurry!" << std::endl;
} else {
    std::cout << "Input image is sharp" << std::endl;
}
Let’s check out the results. These are my test images. I've printed the variance value in the lower-left corner of the images. The threshold value is 300, blue text is within limits, red text is below.

Calculate Mean: different result for masked image vs ROI

I have a weird problem where my average gradient magnitude result is different if I use a mask as opposed to creating a new Mat of just that small ROI. I'll explain the two different ways I do this and the two different average gradient magnitude results I get. Shouldn't I get the same average gradient magnitude in both cases?
Scenario: Image A is my source/original image of a landscape. I want to get the average gradient magnitude in the region A (10,100), (100,100), (100,150), (10,150).
Technique 1:
- Create a ROI Mat that just shows region A. So its dimensions are 90 by 50.
- Perform cv::Sobel(), cv::magnitude() then cv::meanStdDev()
- My average gradient magnitude result is 11.34.
Technique 2:
- Create a new Mat that is a mask. It has the same dimensions as Image A, with a white area where Region A is. Then create a new Mat that shows just that region of Image A, with the rest of the Mat black - hopefully this makes sense.
- Perform cv::Sobel(), cv::magnitude() (but use the mask) then cv::meanStdDev()
- My average gradient magnitude result is 43.76.
Why the different result?
Below is my code:
static Mat backupSrc;
static Mat curSrc;

// Technique 1
void inspectRegion(const Point& strt, const Point& end) {
    curSrc = Mat(backupSrc.size(), CV_8UC3);
    cvtColor(backupSrc, curSrc, CV_GRAY2RGB);
    Rect region = Rect(strt, end);
    Mat regionImg = Mat(curSrc, region);

    // Calculate the average gradient magnitude/strength across the image
    Mat dX, dY, mag;
    Sobel(regionImg, dX, CV_32F, 1, 0);
    Sobel(regionImg, dY, CV_32F, 0, 1);
    magnitude(dX, dY, mag);

    Scalar sMMean, sMStdDev;
    meanStdDev(mag, sMMean, sMStdDev);
    double magnitudeMean = sMMean[0];
    double magnitudeStdDev = sMStdDev[0];

    rectangle(curSrc, region, { 0 }, 1);
    printf("[Gradient Magnitude Mean: %.3f, Gradient Magnitude Std Dev: %.3f]\n", magnitudeMean, magnitudeStdDev);
}

// Technique 2
void inspectRegion(const std::vector<Point>& pnts) {
    curSrc = Mat(backupSrc.size(), CV_8UC3);
    cvtColor(backupSrc, curSrc, CV_GRAY2RGB);

    std::vector<std::vector<Point>> cPnts;
    cPnts.push_back(pnts);
    Mat mask = Mat::zeros(curSrc.rows, curSrc.cols, CV_8UC1);
    fillPoly(mask, cPnts, { 255 });

    Mat regionImg;
    curSrc.copyTo(regionImg, mask);

    // Calculate the average gradient magnitude/strength across the image
    Mat dX, dY, mag;
    Sobel(regionImg, dX, CV_32F, 1, 0);
    Sobel(regionImg, dY, CV_32F, 0, 1);
    magnitude(dX, dY, mag);

    Scalar sMMean, sMStdDev;
    meanStdDev(mag, sMMean, sMStdDev, mask);
    double magnitudeMean = sMMean[0];
    double magnitudeStdDev = sMStdDev[0];

    polylines(curSrc, pnts, true, { 255 }, 3);
    printf("[Gradient Magnitude Mean: %.3f, Gradient Magnitude Std Dev: %.3f]\n", magnitudeMean, magnitudeStdDev);
}
In Technique 2 the gradients around the borders of your rectangle will be very high and will corrupt the calculation.
Consider dilating your mask before computing the gradients, so that this spike falls outside the non-dilated mask that you pass to the meanStdDev function.
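A minimal sketch of that suggestion, assuming the same variables as in Technique 2 above (the structuring element and iteration count are arbitrary choices):

// Grow the mask by a few pixels so the artificial black border moves outward
cv::Mat dilatedMask;
cv::dilate(mask, dilatedMask, cv::Mat(), cv::Point(-1, -1), 3);

// Copy the region using the enlarged mask, then compute the gradients as before
cv::Mat regionImg;
curSrc.copyTo(regionImg, dilatedMask);
cv::Mat dX, dY, mag;
cv::Sobel(regionImg, dX, CV_32F, 1, 0);
cv::Sobel(regionImg, dY, CV_32F, 0, 1);
cv::magnitude(dX, dY, mag);

// Measure the statistics only inside the original (non-dilated) mask
cv::Scalar sMMean, sMStdDev;
cv::meanStdDev(mag, sMMean, sMStdDev, mask);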

Farneback optical flow - dealing with pixels out of view, pixels with wrong flow result, different size image

I am writing my thesis and one part of the task is to interpolate between images to create intermediate images. The work has to be done in C++ using OpenCV 2.4.13.
The best solution I've found so far is computing optical flow and remapping. But this solution has two problems that I am unable to solve on my own:
There are pixels that should go out of view (bottom of image for example), but they do not.
Some pixels do not move, creating a distorted result (upper right part of the couch)
What has made the flow&remap approach better:
Equalizing the intensity. This I'm allowed to do. You can check the result by comparing the couch shape (centre of the remapped image vs the original).
Reducing the size of the image. This I'm NOT allowed to do, as I need the same size output. Is there a way to rescale the optical flow result to get the bigger remapped image?
Other approaches tried and failed:
cuda::interpolateFrames. Creates incredible ghosting.
blending images with cv::addWeighted. Even worse ghosting.
Below is the code I am using at the moment. And images: dropbox link with input and result images
int main(){
    cv::Mat second, second_gray, cutout, cutout_gray, flow_n;
    second = cv::imread( "/home/zuze/Desktop/forstack/second_L.jpg", 1 );
    cutout = cv::imread("/home/zuze/Desktop/forstack/cutout_L.png", 1);
    cvtColor(second, second_gray, CV_BGR2GRAY);
    cvtColor(cutout, cutout_gray, CV_RGB2GRAY );

    ///----------COMPUTE OPTICAL FLOW AND REMAP -----------///
    cv::calcOpticalFlowFarneback( second_gray, cutout_gray, flow_n, 0.5, 3, 15, 3, 5, 1.2, 0 );
    cv::Mat remap_n; //looks like it's drunk.
    createNewFrame(remap_n, flow_n, 1, second, cutout );
    cv::Mat cflow_n;
    cflow_n = cutout_gray;
    cvtColor(cflow_n, cflow_n, CV_GRAY2BGR);
    drawOptFlowMap(flow_n, cflow_n, 10, CV_RGB(0,255,0));

    ///--------EQUALIZE INTENSITY, COMPUTE OPTICAL FLOW AND REMAP ----///
    cv::Mat cutout_eq, second_eq;
    cutout_eq = equalizeIntensity(cutout);
    second_eq = equalizeIntensity(second);
    cv::Mat flow_eq, cutout_eq_gray, second_eq_gray, cflow_eq;
    cvtColor( cutout_eq, cutout_eq_gray, CV_RGB2GRAY );
    cvtColor( second_eq, second_eq_gray, CV_RGB2GRAY );
    cv::calcOpticalFlowFarneback( second_eq_gray, cutout_eq_gray, flow_eq, 0.5, 3, 15, 3, 5, 1.2, 0 );
    cv::Mat remap_eq;
    createNewFrame(remap_eq, flow_eq, 1, second, cutout_eq );
    cflow_eq = cutout_eq_gray;
    cvtColor(cflow_eq, cflow_eq, CV_GRAY2BGR);
    drawOptFlowMap(flow_eq, cflow_eq, 10, CV_RGB(0,255,0));

    cv::imshow("remap_n", remap_n);
    cv::imshow("remap_eq", remap_eq);
    cv::imshow("cflow_eq", cflow_eq);
    cv::imshow("cflow_n", cflow_n);
    cv::imshow("sec_eq", second_eq);
    cv::imshow("cutout_eq", cutout_eq);
    cv::imshow("cutout", cutout);
    cv::imshow("second", second);
    cv::waitKey();
    return 0;
}
Function for remapping, to be used for intermediate image creation:
void createNewFrame(cv::Mat & frame, const cv::Mat & flow, float shift, cv::Mat & prev, cv::Mat &next){
    cv::Mat mapX(flow.size(), CV_32FC1);
    cv::Mat mapY(flow.size(), CV_32FC1);
    cv::Mat newFrame;
    for (int y = 0; y < mapX.rows; y++){
        for (int x = 0; x < mapX.cols; x++){
            cv::Point2f f = flow.at<cv::Point2f>(y, x);
            mapX.at<float>(y, x) = x + f.x*shift;
            mapY.at<float>(y, x) = y + f.y*shift;
        }
    }
    remap(next, newFrame, mapX, mapY, cv::INTER_LANCZOS4);
    frame = newFrame;
    cv::waitKey();
}
Function to display optical flow in vector form:
void drawOptFlowMap (const cv::Mat& flow, cv::Mat& cflowmap, int step, const cv::Scalar& color) {
    cv::Point2f sum; //zz
    std::vector<float> all_angles;
    int count=0; //zz
    float angle, sum_angle=0; //zz
    for(int y = 0; y < cflowmap.rows; y += step)
        for(int x = 0; x < cflowmap.cols; x += step)
        {
            const cv::Point2f& fxy = flow.at< cv::Point2f>(y, x);
            if((fxy.x != fxy.x)||(fxy.y != fxy.y)){ //zz, for SimpleFlow
                //std::cout<<"meh"; //do nothing
            }
            else{
                line(cflowmap, cv::Point(x,y), cv::Point(cvRound(x+fxy.x), cvRound(y+fxy.y)), color);
                circle(cflowmap, cv::Point(cvRound(x+fxy.x), cvRound(y+fxy.y)), 1, color, -1);
                sum += fxy; //zz
                angle = atan2(fxy.y, fxy.x);
                sum_angle += angle;
                all_angles.push_back(angle*180/M_PI);
                count++; //zz
            }
        }
}
Function to equalize intensity of images, for better results:
cv::Mat equalizeIntensity(const cv::Mat& inputImage){
    if(inputImage.channels() >= 3){
        cv::Mat ycrcb;
        cvtColor(inputImage, ycrcb, CV_BGR2YCrCb);
        std::vector<cv::Mat> channels;
        cv::split(ycrcb, channels);
        cv::equalizeHist(channels[0], channels[0]);
        cv::Mat result;
        cv::merge(channels, ycrcb);
        cvtColor(ycrcb, result, CV_YCrCb2BGR);
        return result;
    }
    return cv::Mat();
}
So to recap, my questions:
Is it possible to resize the Farneback optical flow result to apply it to a 2x bigger image?
How do I deal with pixels that go out of view, like at the bottom of my images (the brown wooden part should disappear)?
How do I deal with the distortion that is created because optical flow wasn't computed for those pixels, while many pixels around them have motion? (couch upper right, and the lion figurine has a ghost hand in the remapped image)
With OpenCV's Farneback optical flow, you will only get a rough estimation of pixel displacement, hence the distortions that appear in the result images.
I don't think optical flow is the way to go for what you are trying to achieve, IMHO. Instead I'd recommend having a look at image/pixel registration, for instance here: http://docs.opencv.org/trunk/db/d61/group__reg.html
Image/pixel registration is the science of matching pixels of two images. Active research is ongoing on this complex, non-trivial subject, which is not yet accurately solved.

Magnification of high intensities using openCV

I have an image which has areas of high intensities and I would like to magnify those intensities. I accomplished this in Matlab by converting an integer array in (0,255) to floating point in (0,1), then squaring each value and finally multiplying by 255 and converting back to integer.
How would something like this be done in OpenCV? Is there a way to access the elements one by one? Even so, I suppose that would be inefficient, and I wonder whether there are OpenCV methods that are vectorized or otherwise optimized to accomplish this.
Given an input grayscale image:
the result of your algorithm is:
You can:
convert and scale with convertTo.
square each pixel with element-wise multiplication mul, or use pow to raise to an arbitrary number.
This is the simple code:
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat img = imread("path_to_image", IMREAD_GRAYSCALE);
    imshow("Original", img);

    // converting to float in (0,1)
    img.convertTo(img, CV_32F, 1.0 / 255.0);

    // power with an arbitrary number. Use 2 to square
    pow(img, 2, img);

    // multiplying by 255 and back to integer
    img.convertTo(img, CV_8U, 255.0);

    imshow("Result", img);
    waitKey();
    return 0;
}
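For completeness, the mul variant mentioned above would look like this (a sketch, applied to the same CV_32F img before converting back to CV_8U):

// Element-wise squaring with Mat::mul instead of cv::pow
img = img.mul(img);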

Understanding OpenCV's undistort function

I'm looking to undistort an image using the distortion coefficients that I've computed for my camera, without changing the camera matrix. This is exactly what undistort() does, but I wanted to draw the output to a larger canvas image.
When I tried this:
Mat drawtransform = getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, size, 1.0, size * 2);
undistort(inputimage, undistorted, cameraMatrix, distCoeffs, drawtransform);
It still wrote out the same sized image, but only the top left quarter of the scaled-up-by-two undistorted result. Like the documentation says, undistort writes into a target image of the same size.
It's pretty obvious that I can just go copy out and reimplement a slightly tweaked version of undistort() but I am having some trouble understanding what it is doing. Here's the source:
void cv::undistort( InputArray _src, OutputArray _dst, InputArray _cameraMatrix,
                    InputArray _distCoeffs, InputArray _newCameraMatrix )
{
    Mat src = _src.getMat(), cameraMatrix = _cameraMatrix.getMat();
    Mat distCoeffs = _distCoeffs.getMat(), newCameraMatrix = _newCameraMatrix.getMat();

    _dst.create( src.size(), src.type() );
    Mat dst = _dst.getMat();

    CV_Assert( dst.data != src.data );

    int stripe_size0 = std::min(std::max(1, (1 << 12) / std::max(src.cols, 1)), src.rows);
    Mat map1(stripe_size0, src.cols, CV_16SC2), map2(stripe_size0, src.cols, CV_16UC1);

    Mat_<double> A, Ar, I = Mat_<double>::eye(3,3);

    cameraMatrix.convertTo(A, CV_64F);
    if( distCoeffs.data )
        distCoeffs = Mat_<double>(distCoeffs);
    else
    {
        distCoeffs.create(5, 1, CV_64F);
        distCoeffs = 0.;
    }

    if( newCameraMatrix.data )
        newCameraMatrix.convertTo(Ar, CV_64F);
    else
        A.copyTo(Ar);

    double v0 = Ar(1, 2);
    for( int y = 0; y < src.rows; y += stripe_size0 )
    {
        int stripe_size = std::min( stripe_size0, src.rows - y );
        Ar(1, 2) = v0 - y;
        Mat map1_part = map1.rowRange(0, stripe_size),
            map2_part = map2.rowRange(0, stripe_size),
            dst_part = dst.rowRange(y, y + stripe_size);

        initUndistortRectifyMap( A, distCoeffs, I, Ar, Size(src.cols, stripe_size),
                                 map1_part.type(), map1_part, map2_part );
        remap( src, dst_part, map1_part, map2_part, INTER_LINEAR, BORDER_CONSTANT );
    }
}
About half of the lines here are for sanity checking and initializing input parameters. What I'm confused about is what's going on with map1 and map2. These names are sadly less descriptive than most. I must be missing some explanation, maybe it's tucked away in some introduction page, or under the doc for another function.
map1 is a two-channel signed short integer matrix and map2 is an unsigned short integer matrix; both have min(max(1, 4096/width), height) rows and width columns. The question is, why? What will these maps contain? What is the significance and purpose of this striping? What is the significance and purpose of the strange dimensions of the stripes?
Use initUndistortRectifyMap to obtain the transformation to the scale you desire, then apply its output (the two matrices you mention) to remap.
The first map is used to transform the x coordinate at each pixel position; the second is used to transform the y coordinate.
You might want to read the description for the function remap. The map represents the pixel X,Y location in the source image for every pixel in the destination image. Map1_part is every X location in the source, and Map2_part is every Y location in the source.
Without reading into it much, the striping could be a method of speeding up the transformation process.
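A minimal sketch of that approach for a canvas twice the size, assuming cameraMatrix, distCoeffs, size and inputimage from the question (the interpolation and border modes are arbitrary choices):

// Build undistortion maps for an output canvas twice the input size
cv::Size bigSize(size.width * 2, size.height * 2);
cv::Mat newK = cv::getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, size, 1.0, bigSize);
cv::Mat map1, map2;
cv::initUndistortRectifyMap(cameraMatrix, distCoeffs, cv::Mat(), newK, bigSize,
                            CV_16SC2, map1, map2);

// Apply the maps; the destination takes the size of the maps, not of the source
cv::Mat undistortedLarge;
cv::remap(inputimage, undistortedLarge, map1, map2, cv::INTER_LINEAR, cv::BORDER_CONSTANT);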
EDIT:
Also, if you are just looking to scale your image to a larger dimension, you could simply resize the output image.
double scaleX = 2.0;
double scaleY = 2.0;
cv::Mat undistortedScaled;
cv::resize(undistorted, undistortedScaled, cv::Size(0,0), scaleX, scaleY);