Image Gaussian convolution in the Fourier domain: works when it shouldn't - C++

The problem is that I can't fully understand the principles of convolution in the frequency domain.
I have an image of size 256x256 which I want to convolve with a 3x3 Gaussian matrix. Its coefficients are (1/16, 1/8, 1/4):
PlainImage<float> FourierRunner::getGaussMask(int sz)
{
    PlainImage<float> G(3,3);
    *G.at(0, 0) = 1.0/16; *G.at(0, 1) = 1.0/8; *G.at(0, 2) = 1.0/16;
    *G.at(1, 0) = 1.0/8;  *G.at(1, 1) = 1.0/4; *G.at(1, 2) = 1.0/8;
    *G.at(2, 0) = 1.0/16; *G.at(2, 1) = 1.0/8; *G.at(2, 2) = 1.0/16;
    return G;
}
To get the FFT of both the image and the filter kernel, I zero-pad them. sz_common stands for the extended size. The image and kernel are moved to the center of the h and g ComplexImages respectively, so they are zero-padded on the right, left, bottom and top.
I've read that the size should be sz_common >= sz + gsz - 1 because of the circular-convolution property: otherwise the filter can wrap around and change image values at the boundaries.
But it doesn't work: I get adequate results only when sz_common = sz; when sz_common = sz + gsz - 1 or sz_common = 2*sz, the convolved image I get after the IFFT is 2-3 times smaller! Why?
I'm also confused that the filter matrix values have to be multiplied by 256, like the pixel values: other questions on SO contain Matlab code without such normalization. As in the previous case, without this multiplication it works badly: I get a black image. Why?
// fft_in is the shifted Fourier image with its center at [sz/2; sz/2]
void FourierRunner::convolveImage(ComplexImage& fft_in)
{
    int sz = 256; // equal to fft_in.width()
    // Get original complex image (backward fft_in)
    ComplexImage original_complex = fft_in;
    fft2d_backward(fft_in, original_complex);
    int gsz = 3;
    PlainImage<float> filter = getGaussMask(gsz);
    ComplexImage filter_complex = ComplexImage::fromFloat(filter);
    int sz_common = pow2ceil(sz); // should be sz+gsz-1 ???
    ComplexImage h = ComplexImage::zeros(sz_common, sz_common);
    ComplexImage g = ComplexImage::zeros(sz_common, sz_common);
    copyImageToCenter(h, original_complex);
    copyImageToCenter(g, filter_complex);
    LOOP_2D(sz_common, sz_common) g.setPoint(x, y, g.at(x, y) * 256);
    fft2d_forward(g, g);
    fft2d_forward(h, h);
    fft2d_fft_shift(g);
    // CONVOLVE
    LOOP_2D(sz_common, sz_common) h.setPoint(x, y, h.at(x, y) * g.at(x, y));
    copyImageToCenter(fft_in, h);
    fft2d_backward(fft_in, fft_in);
    fft2d_fft_shift(fft_in);
    // TEST DIFFERENCE BTW DOMAINS
    PlainImage<float> frequency_res(sz, sz);
    writeComplexToPlainImage(fft_in, frequency_res);
    fft2d_forward(fft_in, fft_in);
}
I tried zero-padding the image at the right and bottom, so that the smaller image is copied to the start of the bigger one, but that doesn't work either.
I wrote a convolution in the spatial domain to compare results; the frequency-domain blur results are almost the same as in the spatial domain (average per-pixel error of 5), but only when sz_common = sz.
So, could you explain the zero-padding and normalization phenomena in this case? Thanks in advance.

Convolution in the spatial domain is equivalent to multiplication in the Fourier domain.
This is true for continuous functions which are defined everywhere.
In practice, however, we have discrete signals and convolution kernels, which require more careful handling.
If you have an image of size M x N and a kernel of size MM x NN and you apply the DFT (the FFT is an efficient way to calculate the DFT) to them, you'll get functions of size M x N and MM x NN respectively.
Moreover, the multiplication-equivalence theorem above requires multiplying the same frequencies with each other.
Since in practice the kernel is much smaller than the image, it is usually zero-padded to the size of the image.
Now, by applying the DFT you'll get two matrices of the same M x N size and will be able to multiply them.
Yet this will be equivalent to the circular convolution between the image and the kernel.
To apply the linear convolution you should make them both of size (M + MM - 1) x (N + NN - 1).
Usually this is done by applying a "replicate" boundary condition to the image and zero-padding the kernel.
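As a concrete illustration (not the code from the question, and assuming OpenCV is available instead of the question's PlainImage/ComplexImage classes), a minimal sketch of linear convolution via the DFT could look like this. Both arrays are padded to at least (M + MM - 1) x (N + NN - 1), both are placed at the top-left of the padded buffers (no centering, no fft-shift of the kernel), and no extra scaling by 256 is needed:
#include <opencv2/opencv.hpp>   // cv::dft, cv::idft, cv::mulSpectrums, cv::getOptimalDFTSize

// img and kernel are single-channel CV_32F; returns the "same size" blurred image
cv::Mat dftConvolve(const cv::Mat& img, const cv::Mat& kernel)
{
    int outRows = img.rows + kernel.rows - 1;   // linear-convolution size
    int outCols = img.cols + kernel.cols - 1;
    cv::Size dftSize(cv::getOptimalDFTSize(outCols), cv::getOptimalDFTSize(outRows));

    cv::Mat imgPad = cv::Mat::zeros(dftSize, CV_32F);
    cv::Mat kerPad = cv::Mat::zeros(dftSize, CV_32F);
    img.copyTo(imgPad(cv::Rect(0, 0, img.cols, img.rows)));          // top-left, not centered
    kernel.copyTo(kerPad(cv::Rect(0, 0, kernel.cols, kernel.rows)));

    cv::Mat imgSpec, kerSpec, prod;
    cv::dft(imgPad, imgSpec, 0);
    cv::dft(kerPad, kerSpec, 0);
    cv::mulSpectrums(imgSpec, kerSpec, prod, 0);                     // element-wise product
    cv::idft(prod, prod, cv::DFT_SCALE | cv::DFT_REAL_OUTPUT);       // DFT_SCALE divides by the array size

    // crop the valid region; the result is offset by half the kernel size
    return prod(cv::Rect(kernel.cols / 2, kernel.rows / 2, img.cols, img.rows)).clone();
}
The DFT_SCALE flag handles the 1/(width*height) normalization of the inverse transform, which is most likely what the ad-hoc multiplication by 256 in the question is compensating for.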
Enjoy...

Related

2D FFT what to do after converting both matrix into FFT-ed form?

Assume that I have 2 matrices: image and filter, with sizes MxM and NxN.
My regular convolution looks like this and produces an output matrix of size (M-N+1)x(M-N+1). Basically it places the top-left corner of the filter on a pixel, convolves, then assigns the sum to that pixel:
for (int i = 0; i <= M - N; i++)
    for (int j = 0; j <= M - N; j++)
    {
        float sum = 0;
        for (int u = 0; u < N; u++)
            for (int v = 0; v < N; v++)
                sum += image[i+u][j+v] * filter[u][v];
        output[i][j] = sum;
    }
Next, to perform FFT:
Apply zero-padding to both the image and the filter on the right and bottom (that is, add zero columns to the right and zero rows to the bottom). Now both have size (M+N)x(M+N); the original image is at
image[0..M-1][0..M-1].
(Do the same for both matrices.) Calculate the FFT of each row into a new matrix, then calculate the FFT of each column of that new matrix.
Now I have 2 matrices, imageFreq and filterFreq, both of size (M+N)x(M+N), which are the FFT-ed forms of the image and the filter.
But how can I get the convolution values that I need (as described in the sample code) from them?
Convolution between A and B using the FFT is done by per-element multiplication in the frequency domain, so in 1D it is something like this:
convert A, B by FFT
Assuming the sizes of A[N], B[M] are N and M, first zero-pad to a common size Q which is a power of 2 and at least M+N in size, and then apply the FFT:
Q = exp2(ceil(log2(M+N)));
zeropad(A,Q);
zeropad(B,Q);
a = FFT(A);
b = FFT(B);
convolve
in the frequency domain use just element-wise multiplication:
for (i=0;i<Q;i++) a[i]*=b[i];
reconstruct result
simply apply IFFT (inverse of FFT)...
AB = IFFT(a); // crop to first N (real) elements
and use only the first N elements (unless the algorithm used needs more; depends on what you are doing...)
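For reference, here is a small self-contained C++ sketch of those three steps (zero-pad to a power of two, FFT both, multiply element-wise, inverse FFT). The FFT below is a plain recursive radix-2 Cooley-Tukey, not a production implementation; in real code you would call FFTW, KissFFT or similar. It keeps the full N+M-1 linear-convolution result; crop further if you only need the first N samples:
#include <cmath>
#include <complex>
#include <vector>

using cd = std::complex<double>;

// radix-2 Cooley-Tukey; invert=true computes the inverse transform (including the 1/Q scaling)
void fft(std::vector<cd>& a, bool invert)
{
    size_t n = a.size();
    if (n == 1) return;
    std::vector<cd> even(n / 2), odd(n / 2);
    for (size_t i = 0; i < n / 2; i++) { even[i] = a[2 * i]; odd[i] = a[2 * i + 1]; }
    fft(even, invert);
    fft(odd, invert);
    double ang = 2.0 * M_PI / n * (invert ? -1.0 : 1.0);
    for (size_t k = 0; k < n / 2; k++) {
        cd t = std::polar(1.0, ang * k) * odd[k];
        a[k] = even[k] + t;
        a[k + n / 2] = even[k] - t;
        if (invert) { a[k] /= 2; a[k + n / 2] /= 2; }   // accumulates to 1/Q overall
    }
}

std::vector<double> convolveFFT(const std::vector<double>& A, const std::vector<double>& B)
{
    size_t Q = 1;
    while (Q < A.size() + B.size()) Q <<= 1;            // Q = exp2(ceil(log2(N+M)))
    std::vector<cd> a(A.begin(), A.end()), b(B.begin(), B.end());
    a.resize(Q); b.resize(Q);                           // zero-pad to the common size Q
    fft(a, false); fft(b, false);
    for (size_t i = 0; i < Q; i++) a[i] *= b[i];        // element-wise multiplication
    fft(a, true);                                       // inverse FFT
    std::vector<double> AB(A.size() + B.size() - 1);    // full linear convolution
    for (size_t i = 0; i < AB.size(); i++) AB[i] = a[i].real();
    return AB;
}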
For 2D you can either convolve directly in 2D (using 2 nested for loops) or convolve each axis separately. Beware that convolving the axes separately also requires normalizing the result by some constant (which depends on the dimensionality, resolution and kernel used).
So when put together (also assuming the same resolutions NxN and MxM), first zero-pad to (QxQ) and then:
Q = exp2(ceil(log2(M+N)));
zeropad(A,Q,Q);
zeropad(B,Q,Q);
a = FFT(A);
b = FFT(B);
for (i=0;i<Q;i++)
for (j=0;j<Q;j++) a[i][j]*=b[i][j];
AB = IFFT(a); // crop to first NxN (real) elements
And again crop AB to NxN size (unless ...). For more info see:
How to compute Discrete Fourier Transform?
and all the sublinks there... Also, here at the end is a 1D convolution example using NTT (it's a special form of FFT) to compute bignum multiplication:
Modular arithmetics and NTT (finite field DFT) optimizations
Also, if you want a real result then just use the real parts of the result (ignore the imaginary part).

Given camera matrices, how to find point correspondances using OpenCV?

I'm following this tutorial, which uses Features2D + Homography. If I have a known camera matrix for each image, how can I optimize the result? I tried some images, but it didn't work well.
//Edit
After reading some materials, I think I should rectify the two images first. But the rectification is not perfect, so a vertical line on image 1 generally corresponds to a vertical band on image 2. Are there any good algorithms?
I'm not sure if I understand your problem. Do you want to find corresponding points between the images, or do you want to improve the correctness of your matches by use of the camera intrinsics?
In principle, in order to use camera geometry for finding matches, you would need the fundamental or essential matrix, depending on whether you know the camera intrinsics (i.e. a calibrated camera). That means you would need an estimate for the relative rotation and translation of the camera. Then, by computing the epipolar lines corresponding to the features found in one image, you would need to search along those lines in the second image to find the best match. However, I think it would be better to simply rely on automatic feature matching. Given the fundamental/essential matrix, you could try your luck with correctMatches, which will move the correspondences such that the reprojection error is minimised.
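For what it's worth, a rough usage sketch of cv::correctMatches could look like the following (the variable names are placeholders: F is the fundamental matrix and pts1/pts2 are the matched point locations; check the exact input layout and depth your OpenCV version expects):
// refine matched points so they satisfy the epipolar constraint exactly
std::vector<cv::Point2d> pts1, pts2;            // filled from your DMatch list
cv::Mat p1 = cv::Mat(pts1).reshape(2, 1);       // 1xN, 2-channel layout
cv::Mat p2 = cv::Mat(pts2).reshape(2, 1);
cv::Mat newP1, newP2;
cv::correctMatches(F, p1, p2, newP1, newP2);    // F: 3x3 fundamental matrix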
Tips for better matches
To increase the stability and saliency of automatic matches, it usually pays to
Adjust the parameters of the feature detector
Try different detection algorithms
Perform a ratio test to filter out those keypoints which have a very similar second-best match and are therefore unstable. This is done like this:
Mat descriptors_1, descriptors_2; // obtained from feature detector
BFMatcher matcher;
vector<DMatch> matches;
matcher = BFMatcher(NORM_L2, false); // norm depends on feature detector
vector<vector<DMatch>> match_candidates;
const float ratio = 0.8; // or something
matcher.knnMatch(descriptors_1, descriptors_2, match_candidates, 2);
for (int i = 0; i < match_candidates.size(); i++)
{
    if (match_candidates[i][0].distance < ratio * match_candidates[i][1].distance)
        matches.push_back(match_candidates[i][0]);
}
A more involved way of filtering would be to compute the reprojection error for each keypoint in the first frame. This means computing the corresponding epipolar line in the second image and then checking how far its supposed matching point is from that line. Throwing away those points whose distance exceeds some threshold would remove the matches which are incompatible with the epipolar geometry (which I assume would be known). Computing the error can be done like this (I honestly do not remember where I took this code from and I may have modified it a bit; also the SO editor is buggy when code is inside lists, sorry for the bad formatting):
double computeReprojectionError(vector<Point2f>& imgpts1, vector<Point2f>& imgpts2, Mat& inlier_mask, const Mat& F)
{
    double err = 0;
    vector<Vec3f> lines[2];
    int npt = sum(inlier_mask)[0];

    // strip outliers so validation is constrained to the correspondences
    // which were used to estimate F
    vector<Point2f> imgpts1_copy(npt),
                    imgpts2_copy(npt);
    int c = 0;
    for (int k = 0; k < inlier_mask.size().height; k++)
    {
        if (inlier_mask.at<uchar>(k, 0) == 1)
        {
            imgpts1_copy[c] = imgpts1[k];
            imgpts2_copy[c] = imgpts2[k];
            c++;
        }
    }

    Mat imgpt[2] = { Mat(imgpts1_copy), Mat(imgpts2_copy) };
    computeCorrespondEpilines(imgpt[0], 1, F, lines[0]);
    computeCorrespondEpilines(imgpt[1], 2, F, lines[1]);
    for (int j = 0; j < npt; j++)
    {
        // error is computed as the distance between a point u_l = (x,y) and the epipolar line
        // of its corresponding point u_r in the second image plus the reverse, so
        // err_ij = d(u_l, F^T * u_r) + d(u_r, F * u_l)
        Point2f u_l = imgpts1_copy[j], // for the purpose of this function, we imagine imgpts1 to be the "left" image and imgpts2 the "right" one. Doesn't make a difference
                u_r = imgpts2_copy[j];
        float a2 = lines[1][j][0], // epipolar line
              b2 = lines[1][j][1],
              c2 = lines[1][j][2];
        float norm_factor2 = sqrt(pow(a2, 2) + pow(b2, 2));
        float a1 = lines[0][j][0],
              b1 = lines[0][j][1],
              c1 = lines[0][j][2];
        float norm_factor1 = sqrt(pow(a1, 2) + pow(b1, 2));

        double errij =
            fabs(u_l.x * a2 + u_l.y * b2 + c2) / norm_factor2 +
            fabs(u_r.x * a1 + u_r.y * b1 + c1) / norm_factor1; // distance of (x,y) to line (a,b,c) is |ax + by + c| / sqrt(a^2 + b^2)
        err += errij; // at this point, apply threshold and mark bad matches
    }

    return err / npt;
}
The point is: grab the fundamental matrix, use it to compute epilines for all the points and then compute the distance (the lines are given in a parametric form, so you need to do some algebra to get the distance). This is somewhat similar in outcome to what findFundamentalMat with the RANSAC method does. It returns a mask wherein for each match there is either a 1, meaning that it was used to estimate the matrix, or a 0 if it was thrown out. But estimating the fundamental matrix like this will probably be less accurate than using chessboards.
EDIT: Looks like oarfish beat me to it, but I'll leave this here.
The fundamental matrix (F) defines a mapping from a point in the left image to a line in the right image on which the corresponding point must lie, assuming perfect calibration. This is the epipolar line, i.e. the line through the point in the left image and the two epipoles of the stereo camera pair. For references, see these lecture notes and this chapter of the HZ book.
Given a set of point correspondences in the left and right images, (p_L, p_R), from SURF (or any other feature matcher), and given F, the constraint from the epipolar geometry of the stereo pair says that p_R should lie on the epipolar line projected by p_L onto the right image, i.e. p_R^T * F * p_L = 0.
In practice, calibration errors from noise as well as erroneous feature matches lead to a non-zero value.
However, using this idea, you can then perform outlier removal by rejecting those feature matches for which this equation is greater than a certain threshold value, i.e. reject (p_L, p_R) if and only if |p_R^T * F * p_L| > threshold.
When selecting this threshold, keep in mind that it is the distance in image space of a point from an epipolar line that you are willing to tolerate, which in some sense is your epipolar error tolerance.
Degenerate case: To visually imagine what this means, let us assume that the stereo pair differs only in a pure X-translation. Then the epipolar lines are horizontal. This means that you can connect the feature-matched point pairs by a line and reject those pairs whose line slope is not close to zero. The equation above is a generalization of this idea to arbitrary stereo rotation and translation, which is accounted for by the matrix F.
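For concreteness, a rough sketch of this rejection rule, assuming F and the matched points pts_left/pts_right are already available (the names and the 3-pixel threshold are illustrative):
// compute the epipolar line in the right image for every left point,
// then keep only the matches whose right point is close to its line
std::vector<cv::Vec3f> linesR;
cv::computeCorrespondEpilines(pts_left, 1, F, linesR);   // whichImage = 1: lines in image 2
std::vector<cv::DMatch> kept;
const double thresh = 3.0;                               // epipolar error tolerance, in pixels
for (size_t i = 0; i < matches.size(); ++i) {
    const cv::Vec3f& l = linesR[i];                      // line a*x + b*y + c = 0
    double dist = std::fabs(l[0] * pts_right[i].x + l[1] * pts_right[i].y + l[2])
                  / std::sqrt(l[0] * l[0] + l[1] * l[1]);
    if (dist < thresh)
        kept.push_back(matches[i]);
}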
Your specific images: It looks like your feature matches are sparse. I suggest instead to use a dense feature matching approach so that after outlier removal, you are still left with a sufficient number of good-quality matches. I'm not sure which dense feature matcher is already implemented in OpenCV, but I suggest starting here.
Given your pictures, you are trying to do stereo matching.
This page will be helpful. The rectification you want can be done using stereoCalibrate and then stereoRectify.
The documentation shows an example of the resulting rectified pair.
In order to find the fundamental matrix, you need correct correspondences, but in order to get good correspondences, you need a good estimate of the fundamental matrix. This might sound like an impossible chicken-and-egg problem, but there are well-established methods to do this; RANSAC.
It randomly selects a small set of correspondences, uses those to calculate a fundamental matrix (using the 7- or 8-point algorithm) and then tests how many of the other correspondences comply with this matrix (using the method described by scribbleink for measuring the distance between a point and its epipolar line). It keeps testing new combinations of correspondences for a certain number of iterations and selects the one with the most inliers.
This is already implemented in OpenCV as cv::findFundamentalMat (http://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#findfundamentalmat). Select the method CV_FM_RANSAC to use RANSAC to remove bad correspondences. It will output a list of all the inlier correspondences.
The requirement for this is that the points do not all lie on the same plane.
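A minimal sketch of that call (pts1/pts2 are the matched keypoint locations, e.g. from the ratio test above; the threshold and confidence values are just common defaults):
std::vector<uchar> inlier_mask;
cv::Mat F = cv::findFundamentalMat(pts1, pts2, CV_FM_RANSAC,
                                   3.0,           // max distance to the epipolar line, in pixels
                                   0.99,          // RANSAC confidence
                                   inlier_mask);  // 1 = inlier, 0 = outlier
// keep only the inlier correspondences
std::vector<cv::Point2f> in1, in2;
for (size_t i = 0; i < inlier_mask.size(); ++i)
    if (inlier_mask[i]) { in1.push_back(pts1[i]); in2.push_back(pts2[i]); }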

Calculating scale, rotation and translation from Homography matrix

I am trying to calculate the scale, rotation and translation between two consecutive frames of a video. So basically I matched keypoints and then used the OpenCV function findHomography() to calculate the homography matrix.
homography = findHomography(feature1 , feature2 , CV_RANSAC); //feature1 and feature2 are matched keypoints
My question is: how can I use this matrix to calculate the scale, rotation and translation?
Can anyone provide me the code or explanation as to how to do it?
If you can use OpenCV 3.0, this decomposition method is available:
http://docs.opencv.org/3.0-beta/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#decomposehomographymat
The right answer is to use the homography as it is defined, dst = H ⋅ src, and explore what it does to small segments around a particular point.
Translation
Given a single point, for translation do
T = dst - (H ⋅ src)
Rotation
Given two points p1 and p2:
p1' = H ⋅ p1
p2' = H ⋅ p2
Now just calculate the angle between the vectors p1 p2 and p1' p2'.
Scale
You can use the same trick but now just compare the lengths: |p1 p2| and |p1' p2'|.
To be fair, use another segment orthogonal to the first and average the results. You will see that there is no constant scale factor or translation; they depend on the src location.
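A small sketch of that segment trick, assuming OpenCV types (p is the point of interest and H the 3x3 homography; the unit-length segment is illustrative):
// map a short segment through H and compare it with the original
std::vector<cv::Point2f> src = { p, cv::Point2f(p.x + 1.0f, p.y) };
std::vector<cv::Point2f> dst;
cv::perspectiveTransform(src, dst, H);            // applies H including the perspective division

cv::Point2f v0 = src[1] - src[0];                 // original segment
cv::Point2f v1 = dst[1] - dst[0];                 // segment after the homography
double angle = std::atan2(v1.y, v1.x) - std::atan2(v0.y, v0.x);     // local rotation
double scale = std::sqrt(v1.x * v1.x + v1.y * v1.y)
             / std::sqrt(v0.x * v0.x + v0.y * v0.y);                // local scale
cv::Point2f t = dst[0] - src[0];                                    // local translation
Repeating this with an orthogonal segment and averaging, as suggested above, makes the estimate less dependent on the direction chosen.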
Given Homography matrix H:
    |H_00, H_01, H_02|
H = |H_10, H_11, H_12|
    |H_20, H_21, H_22|
Assumptions:
H_20 = H_21 = 0 and normalized to H_22 = 1 to obtain 8 DOF.
The translations along the x and y axes are read directly from H:
tx = H_02
ty = H_12
The 2x2 submatrix in the top-left corner is decomposed to calculate shear, scaling and rotation. An easy and quick decomposition method is explained here; a sketch of one common variant is given after the note below.
Note: this method assumes an invertible matrix.
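As an illustration only (one possible parametrisation, rotation * scale * shear with the column-vector convention dst = H * src; it assumes H is a CV_64F matrix already normalised so that H_22 = 1 and H_20 = H_21 = 0):
double a = H.at<double>(0, 0), b = H.at<double>(0, 1);
double c = H.at<double>(1, 0), d = H.at<double>(1, 1);

double tx    = H.at<double>(0, 2);            // translation, as above
double ty    = H.at<double>(1, 2);
double theta = std::atan2(c, a);              // rotation angle
double sx    = std::sqrt(a * a + c * c);      // scale along x
double sy    = (a * d - b * c) / sx;          // scale along y (determinant / sx)
double shear = (a * b + c * d) / (sx * sx);   // shear factor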
Since I had to struggle for a couple of days to create my homography transformation function, I'm going to put it here for the benefit of everyone.
Here you can see the main loop where every input position is multiplied by the homography matrix h. Then the result is used to copy the pixel from the original position to the destination position.
for (tempIn[0] = 0; tempIn[0] < stride; tempIn[0]++)
{
    for (tempIn[1] = 0; tempIn[1] < rows; tempIn[1]++)
    {
        double w = h[6] * tempIn[0] + h[7] * tempIn[1] + 1; // very important!
        //H_20 = H_21 = 0 and normalized to H_22 = 1 to obtain 8 DOF. <-- this is wrong
        tempOut[0] = ((h[0] * tempIn[0]) + (h[1] * tempIn[1]) + h[2]) / w;
        tempOut[1] = ((h[3] * tempIn[0]) + (h[4] * tempIn[1]) + h[5]) / w;
        if (tempOut[1] < destSize && tempOut[0] < destSize && tempOut[0] >= 0 && tempOut[1] >= 0)
            dest_[destStride * tempOut[1] + tempOut[0]] = src_[stride * tempIn[1] + tempIn[0]];
    }
}
After such a process an image with some kind of grid will be produced. Some kind of filter is needed to remove the grid. In my code I have used a simple linear filter.
Note: Only the central part of the original image is really required for producing a correct image. Some rows and columns can be safely discarded.
For estimating the three-dimensional rotation and translation induced by a homography, there exist multiple approaches. One of them provides closed formulas for decomposing the homography, but they are very complex. Also, the solutions are never unique.
Luckily, OpenCV 3 already implements this decomposition (decomposeHomographyMat). Given a homography and a correctly scaled intrinsics matrix, the function provides a set of four possible rotations and translations.
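A rough usage sketch (OpenCV >= 3.0; H is the homography and K the camera intrinsics matrix, both assumed to be available):
std::vector<cv::Mat> rotations, translations, normals;
int nSolutions = cv::decomposeHomographyMat(H, K, rotations, translations, normals);
for (int i = 0; i < nSolutions; ++i) {
    // rotations[i]:    3x3 rotation matrix
    // translations[i]: 3x1 translation, defined up to scale
    // normals[i]:      normal of the observed plane in the first camera frame
}
// the physically impossible candidates still have to be pruned, e.g. by checking
// that reconstructed points end up in front of both cameras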
The question seems to be about 2D parameters. The homography matrix captures perspective distortion. If the application does not create much perspective distortion, one can approximate the real-world transformation using an affine transformation matrix (that uses only scale, rotation, translation and no shearing/flipping). The following link will give an idea about decomposing an affine transformation into different parameters.
https://math.stackexchange.com/questions/612006/decomposing-an-affine-transformation

How do you implement a calculated Gaussian kernel?

I am struggling to apply a calculated Gaussian kernel in order to return a blurred image.
My current code that calculates the kernel is below:
const int m = 5;
const int n = 5;
double sigma = std;
Mat Gauss;
double kernel[m][n];
for (int x = 0; x < m; ++x)
    for (int y = 0; y < n; ++y)
    {
        kernel[x][y] = (1 / (sigma * (sqrt(2 * M_PI))))
                       * exp(-0.5 * (std::pow((x - avg) / sigma, 2.0)
                             + pow((y - avg) / sigma, 2.0)) / (2 * M_PI * sigma * sigma));
    }
However, I can't figure out how to apply this to the image in a way that I am returned a blurred image.
I would appreciate it if anyone could give me some pointers in a way that I can apply this to an image.
I was thinking of using a for loop to replace the pixels of the original image but I could not properly implement this idea.
Thank you for your time.
It sounds like you want to compute a convolution of the original image with a Gaussian kernel, something like this:
blurred[x][y] = Integral (kernel[s][t] * original[x-s][y-t]) ds dt
There are a number of techniques for that:
Direct convolution: go through the grid and compute the above integral at each point. This works well for kernels with very small support, on the order of 5 grid points in each direction, but for kernels with larger support it becomes too slow. For Gaussian kernels a rule of thumb for truncating support is about 3*sigma, so it's not unreasonable to do direct convolution with sigma under 2 grid points (a minimal sketch is given after this list).
Fast Fourier Transform (FFT): this works reasonably fast for any kernel. Therefore the FFT became the standard way to compute the convolution of nearly anything with nearly anything. Direct convolution beats the FFT only for kernels with very small support.
Analytical: integrals of some kernels have analytical expressions. In particular, the integral of a Gaussian is the Erf function and, at least on Unix systems, it's available as a function call. Moreover, on some hardware (such as GPUs) Erf is implemented in hardware. In some rare (but important) cases of coarse bi-level images one can replace convolution with a Gaussian by a loop of Erf function calls.
For most computational systems your best bet would be to go with the FFT: it's fast and it's flexible enough to handle any kernels and images correctly.
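A minimal sketch of option 1, direct convolution with your precomputed kernel (it assumes the image is available as a single-channel CV_32F cv::Mat named image and that m, n and kernel[m][n] are the values from the question; it also renormalises by the summed weights, so the kernel does not have to be pre-normalised and the borders are handled gracefully):
cv::Mat blurred = cv::Mat::zeros(image.size(), CV_32F);
int half_m = m / 2, half_n = n / 2;
for (int y = 0; y < image.rows; ++y)
    for (int x = 0; x < image.cols; ++x)
    {
        double sum = 0.0, wsum = 0.0;
        for (int s = 0; s < m; ++s)
            for (int t = 0; t < n; ++t)
            {
                int yy = y + s - half_m, xx = x + t - half_n;
                if (yy < 0 || yy >= image.rows || xx < 0 || xx >= image.cols)
                    continue;                        // simple border handling: skip outside pixels
                sum  += kernel[s][t] * image.at<float>(yy, xx);
                wsum += kernel[s][t];
            }
        blurred.at<float>(y, x) = (float)(sum / wsum);
    }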

Mapping colors to an interval

I'm porting a piece of MATLAB code to C/C++ and I need to map many RGB colors in a graph to an integer interval.
Let [-1;1] be the interval in which a function can take its values. I need to map -1 and any number below it to one color, +1 and any number above it to another color, and any number between -1 and +1 to an intermediate color between the two boundary colors. Obviously the numbers are infinite, so I'm not worried about how many colors I'm going to map, but it would be great if I could map at least 40-50 colors.
I thought of subdividing the [-1;1] interval into X sub-intervals and mapping each one to an RGB color, but this sounds like a terribly boring and long job.
Is there any other way to achieve this? And if there isn't, how should I do this in C/C++?
If performance isn't an issue, then I would do something similar to what High Performance Mark suggested, except maybe do it in HSV color space: Peg the S and V values at maximum and vary the H value linearly over a particular range:
s = 1.0; v = 1.0;
if(x <= -1){h = h_min;}
else if(x >= 1){h = h_max;}
else {h = h_min + (h_max - h_min)*0.5*(x + 1.0);}
// then convert h, s, v back to r, g, b - see the wikipedia link
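For that last step, a minimal HSV-to-RGB conversion could look like the sketch below (standard piecewise formula; it assumes h is in degrees in [0, 360) and s, v are in [0, 1], and returns 8-bit channel values):
#include <cmath>

void hsvToRgb(double h, double s, double v, int& r, int& g, int& b)
{
    double c  = v * s;                                            // chroma
    double hp = h / 60.0;
    double x  = c * (1.0 - std::fabs(std::fmod(hp, 2.0) - 1.0));
    double r1 = 0, g1 = 0, b1 = 0;
    if      (hp < 1) { r1 = c; g1 = x; }
    else if (hp < 2) { r1 = x; g1 = c; }
    else if (hp < 3) { g1 = c; b1 = x; }
    else if (hp < 4) { g1 = x; b1 = c; }
    else if (hp < 5) { r1 = x; b1 = c; }
    else             { r1 = c; b1 = x; }
    double m = v - c;                                             // lift to match the value (brightness)
    r = (int)((r1 + m) * 255);
    g = (int)((g1 + m) * 255);
    b = (int)((b1 + m) * 255);
}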
If performance is an issue (e.g., you're trying to process video in real-time or something), then calculate the rgb values ahead of time and load them from a file as an array. Then simply map the value of x to an index:
int r, g, b;
int R[NUM_COLORS];
int G[NUM_COLORS];
int B[NUM_COLORS];
// load R, G, B from a file, or define them in a header file, etc
// map x from [-1, 1] to an index in [0, NUM_COLORS-1]
int i = (int)(0.5 * (x + 1.0) * NUM_COLORS);
i = MIN(NUM_COLORS - 1, i);
i = i < 0 ? 0 : i;   // clamp inputs below -1
r = R[i]; g = G[i]; b = B[i];
Here's a poor solution. Define a function which takes an input, x, which is a float (or double) and returns a triplet of integers each in the range 0-255. This triplet is, of course, a specification of an RGB color.
The function has 3 pieces:
if x<=-1 f[x] = {0,0,0}
if x>= 1 f[x] = {255,255,255}
if -1<x<1 f[x] = {floor(((x + 1)/2)*255),floor(((x + 1)/2)*255),floor(((x + 1)/2)*255)}
I'm not very good at writing C++, so I'll leave this as pseudocode; you shouldn't have too much trouble turning it into valid code.
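For completeness, a direct C++ translation of that pseudocode might look like this (the function name is just illustrative):
#include <array>
#include <cmath>

std::array<int, 3> grayRamp(double x)            // returns {r, g, b}
{
    if (x <= -1.0) return {0, 0, 0};
    if (x >=  1.0) return {255, 255, 255};
    int v = (int)std::floor(((x + 1.0) / 2.0) * 255.0);
    return {v, v, v};
}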
The reason it isn't a terribly good function is that there isn't a natural color gradient between the values that this plots through RGB color space. I mean, this is likely to produce a sequence of colors which is at odds with most people's expectations of how colors should change. If you are one of those people, I invite you to modify the function as you see fit.
For all of this I blame RGB color space; it is ill-suited to this sort of easy computation of 'neighbouring' colors.