Automatic separation of two images that have been multiplied together - c++

I am searching for an algorithm or C++/Matlab library that can be used to separate two images multiplied together. A visual example of this problem is given below.
Image 1 can be anything (such as a relatively complicated scene). Image 2 is very simple and can be generated mathematically. Image 2 always has a similar morphology (i.e. a downward trend). By multiplying Image 1 by Image 2 (using point-by-point multiplication), we get a transformed image.
Given only the transformed image, I would like to estimate Image 1 or Image 2. Is there an algorithm that can do this?
Here are the Matlab code and images:
load('trans.mat');
imageA = imread('room.jpg');
imageB = abs(response); % loaded from MAT file
[m,n] = size(imageA);
image1 = rgb2gray( imresize(im2double(imageA), [m n]) );
image2 = imresize(im2double(imageB), [m n]);
figure; imagesc(image1); colormap gray; title('Image 1 of Room')
colorbar
figure; imagesc(image2); colormap gray; title('Image 2 of Response')
colorbar
% This is image1 and image2 multiplied together (point-by-point)
trans = image1 .* image2;
figure; imagesc(trans); colormap gray; title('Transformed Image')
colorbar
UPDATE
There are a number of ways to approach this problem. Here are the results of my experiments. Thank you to all who responded to my question!
1. Low-pass filtering of image
As noted by duskwuff, applying a low-pass filter to the transformed image returns an approximation of Image 2. In this case, the low-pass filter is Gaussian. As the results show, the low-pass filter makes it possible to identify the multiplicative noise in the image.
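For reference, here is a minimal NumPy/SciPy sketch of this approach (my own illustration, not the exact code I ran; trans is the transformed image as a float array, and sigma is a hand-tuned value):
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_by_lowpass(trans, sigma=15.0):
    # a strong Gaussian low-pass keeps only the smooth component,
    # which approximates Image 2
    image2_est = gaussian_filter(trans, sigma)
    # inverting the point-by-point multiplication approximates Image 1
    image1_est = trans / (image2_est + np.finfo(float).eps)
    return image1_est, image2_est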
2. Homomorphic Filtering
As suggested by EitenT, I examined homomorphic filtering. Once I knew the name of this type of image filtering, I managed to find a number of references that should be useful in solving similar problems.
S. P. Banks, Signal processing, image processing, and pattern recognition. New York: Prentice Hall, 1990.
A. Oppenheim, R. Schafer, and T. Stockham, Jr., “Nonlinear filtering of multiplied and convolved signals,” IEEE Transactions on Audio and Electroacoustics, vol. 16, no. 3, pp. 437–466, Sep. 1968.
P. Campisi and K. Egiazarian, Eds., Blind Image Deconvolution: Theory and Applications. Boca Raton: CRC Press, 2007.
Chapter 5 of the Blind Image Deconvolution book is particularly good and contains many references to homomorphic filtering. This is perhaps the most general approach, and it should work well in many different applications.
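The core homomorphic trick is only a few lines. Here is a minimal NumPy sketch of the idea (my own illustration, not code from these references):
import numpy as np
from scipy.ndimage import gaussian_filter

def homomorphic_separate(trans, sigma=15.0):
    # the log turns the point-by-point product into a sum:
    # log|trans| = log|image1| + log|image2|
    log_trans = np.log(np.abs(trans) + np.finfo(float).eps)
    # attribute the smooth (low-frequency) additive component to Image 2
    log_image2 = gaussian_filter(log_trans, sigma)
    # the high-frequency remainder belongs to Image 1; exp maps back
    return np.exp(log_trans - log_image2), np.exp(log_image2)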
3. Optimization using fminsearch
As suggested by Serg, I used an objective function with fminsearch. Since I know the mathematical model of the noise, I was able to use it as input to an optimization algorithm. This approach is entirely problem-specific and may not be useful in all situations.
Here is a reconstruction of Image 2:
Here is a reconstruction of Image 1, formed by dividing by the reconstruction of Image 2:
Here is the image containing the noise:
Source code
Here is the source code for my problem. As shown by the code, this is a very specific application, and will not work well in all situations.
N = 1001;
q = zeros(N, 1);
q(1:200) = 55;
q(201:300) = 120;
q(301:400) = 70;
q(401:600) = 40;
q(601:800) = 100;
q(801:1001) = 70;
dt = 0.0042;
fs = 1 / dt;
wSize = 101;
Glim = 20;
ginv = 0;
[R, ~, ~] = get_response(N, q, dt, wSize, Glim, ginv);
rows = wSize;
cols = N;
cut_val = 200;
figure; imagesc(abs(R)); title('Matrix output of algorithm')
colorbar
figure;
imagesc(abs(R)); title('abs(response)')
figure;
imagesc(imag(R)); title('imag(response)')
imageA = imread('room.jpg');
% images should be of the same size
[m,n] = size(R);
image1 = rgb2gray( imresize(im2double(imageA), [m n]) );
% here is the multiplication (with the image in complex space)
trans = ((image1.*1i)) .* (R(end:-1:1, :));
figure;
imagesc(abs(trans)); colormap(gray);
% take the imaginary part of the response
imagLogR = imag(log(trans));
% The beginning and end points are not usable
Mderiv = zeros(rows, cols-2);
for k = 1:rows
    val = deriv_3pt(imagLogR(k,:), dt);
    val(val > cut_val) = 0;
    Mderiv(k,:) = val(1:end-1);
end
% This is the derivative of the imaginary part of R
% d/dtau(imag((log(R)))
% Do we need to remove spurious values from the matrix?
figure;
imagesc(abs(log(Mderiv)));
disp('Running iteration');
% Apply curve-fitting to get back the values
% by cycling over the cols
q0 = 10;
q1 = 500;
NN = cols - 2;
qout = zeros(NN, 1);
for k = 1:NN
    data = Mderiv(:,k);
    qout(k) = fminbnd(@(q) curve_fit_to_get_q(q, dt, rows, data), q0, q1);
end
figure; plot(q); title('q value input as vector');
ylim([0 200]); xlim([0 1001])
figure;
plot(qout); title('Reconstructed q')
ylim([0 200]); xlim([0 1001])
% make the vector the same size as the other
qout2 = [qout(1); qout; qout(end)];
% get the reconstructed response
[RR, ~, ~] = get_response(N, qout2, dt, wSize, Glim, ginv);
RR = RR(end:-1:1,:);
figure; imagesc(abs(RR)); colormap gray
title('Reconstructed Image 2')
colorbar;
% here is the reconstructed image of the room
% NOTE the division in the imagesc function
check0 = image1 .* abs(R(end:-1:1, :));
figure; imagesc(check0./abs(RR)); colormap gray
title('Reconstructed Image 1')
colorbar;
figure; imagesc(check0); colormap gray
title('Original image with noise pattern')
colorbar;
function [response, L, inte] = get_response(N, Q, dt, wSize, Glim, ginv)
fs = 1 / dt;
Npad = wSize - 1;
N1 = wSize + Npad;
N2 = floor(N1 / 2 + 1);
f = (fs/2)*linspace(0,1,N2);
omega = 2 * pi .* f';
omegah = 2 * pi * f(end);
sigma2 = exp(-(0.23*Glim + 1.63));
sign = 1;
if (ginv == 1)
    sign = -1;
end
ratio = omega ./ omegah;
rs_r = zeros(N2, 1);
rs_i = zeros(N2, 1);
termr = zeros(N2, 1);
termi = zeros(N2, 1);
termr_sub1 = zeros(N2, 1);
termi_sub1 = zeros(N2, 1);
response = zeros(N2, N);
L = zeros(N2, N);
inte = zeros(N2, N);
% cycle over cols of matrix
for ti = 1:N
    term0 = omega ./ (2 .* Q(ti));
    gamma = 1 / (pi * Q(ti));
    % calculate for the real part
    if (ti == 1)
        Lambda = ones(N2, 1);
        termr_sub1(1) = 0;
        termr_sub1(2:end) = term0(2:end) .* (ratio(2:end).^-gamma);
    else
        termr(1) = 0;
        termr(2:end) = term0(2:end) .* (ratio(2:end).^-gamma);
        rs_r = rs_r - dt.*(termr + termr_sub1);
        termr_sub1 = termr;
        Beta = exp( -1 .* -0.5 .* rs_r );
        Lambda = (Beta + sigma2) ./ (Beta.^2 + sigma2); % vector
    end
    % calculate for the complex part
    if (ginv == 1)
        termi(1) = 0;
        termi(2:end) = (ratio(2:end).^(sign .* gamma) - 1) .* omega(2:end);
    else
        termi = (ratio.^(sign .* gamma) - 1) .* omega;
    end
    rs_i = rs_i - dt.*(termi + termi_sub1);
    termi_sub1 = termi;
    integrand = exp( 1i .* -0.5 .* rs_i );
    L(:,ti) = Lambda;
    inte(:,ti) = integrand;
    if (ginv == 1)
        response(:,ti) = Lambda .* integrand;
    else
        response(:,ti) = (1 ./ Lambda) .* integrand;
    end
end % ti loop
function sse = curve_fit_to_get_q(q, dt, rows, data)
% q = trial q value
% dt = timestep
% rows = number of rows
% data = actual dataset
fs = 1 / dt;
N2 = rows;
f = (fs/2)*linspace(0,1,N2); % vector for frequency along cols
omega = 2 * pi .* f';
omegah = 2 * pi * f(end);
ratio = omega ./ omegah;
gamma = 1 / (pi * q);
% calculate for the complex part
termi = ((ratio.^(gamma)) - 1) .* omega;
% for now, just reverse termi
termi = termi(end:-1:1);
%
% Do non-linear curve-fitting
% termi is a column-vector with the generated noise pattern
% data is the log-transformed image
% sse is the value that is returned to fminbnd
Error_Vector = termi - data;
sse = sum(Error_Vector.^2);
function output = deriv_3pt(x, dt)
N = length(x);
N0 = N - 1;
output = zeros(N0, 1);
denom = 2 * dt;
for k = 2:N0
    output(k - 1) = (x(k+1) - x(k-1)) / denom;
end

This is going to be a difficult, unreliable process, as you're fundamentally trying to extract information (the separation of the two images) which has been destroyed. Bringing it back perfectly is impossible; the best you can do is guess.
If the second image is always going to be relatively "smooth", you may be able to reconstruct it (or, at least, an approximation of it) by applying a strong low-pass filter to the transformed image. With that in hand, you can invert the multiplication, or equivalently use a complementary high-pass filter to get the first image. It won't be quite the same, but it'll at least be something.

I would try constrained optimization (fmincon in Matlab).
If you understand the source / nature of the second image, you can probably define a multivariate function that generates similar noise patterns. The objective function can be the correlation between the generated noise image and the transformed image.
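A sketch of that suggestion in SciPy (my illustration: minimize_scalar plays the role of fmincon, and the one-parameter decay model is a made-up stand-in for the real mathematical model of the second image):
import numpy as np
from scipy.optimize import minimize_scalar

def make_noise(tau, shape):
    # hypothetical one-parameter generator (exponential decay down the rows),
    # standing in for the real model of the second image
    r = np.arange(shape[0], dtype=float)[:, None]
    return np.exp(-r / tau) * np.ones(shape)

def fit_noise_model(observed):
    # maximize correlation between the generated noise image and the
    # observed (transformed) image by minimizing its negative;
    # the bounded search plays the role of the constrained optimizer
    def neg_corr(tau):
        model = make_noise(tau, observed.shape)
        return -np.corrcoef(model.ravel(), observed.ravel())[0, 1]
    res = minimize_scalar(neg_corr, bounds=(1.0, 500.0), method='bounded')
    return make_noise(res.x, observed.shape)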

Related

opencv how to get middle line of a contour

I have an image from which I get the contour using findContours. This produces something that looks like the following (showing the "inner and outer contour").
Is there a way for me to get the "midpoint" of these two contours? I.e. some kind of polyline that fits exactly in between the two lines seen in the image, such that the distance from any point on the resultant line to the top contour is the same as the distance from it to the bottom contour?
More complicated example would be something as follows:
Please note that it doesn't matter too much what happens at intersections, as long as nothing traces back on itself, so the result of the more complicated example would need multiple lines.
There is a way to get the "midpoint" of the two contours, but I don't think there is an existing OpenCV solution.
You may use the following stages:
Convert image to Grayscale, and apply binary threshold.
You may use cvtColor(... COLOR_BGR2GRAY) and threshold(...) OpenCV functions.
Fill the pixels outside the area between the lines with white color.
You may use floodFill OpenCV function.
Apply "distance transform" to the binary image.
You may use distanceTransform OpenCV function.
Use CV_DIST_L2 for euclidean distance.
Apply Dijkstra's algorithm to find the shortest path between the left-most and right-most nodes.
Representing the "distance transform" result (an image) as a weighted graph and applying Dijkstra's algorithm is the most challenging stage.
I implemented the solution in MATLAB.
The MATLAB implementation is used as a "proof of concept".
I know you were expecting a C++ implementation, but that requires a lot of work.
The MATLAB implementation uses the im2graph function, which I downloaded from here.
Here is the MATLAB implementation:
origI = imread('two_contours.png'); % Read input image
I = rgb2gray(origI); % Convert RGB to Grayscale.
BW = imbinarize(I); % Convert from Grayscale to binary image.
% Fill pixels outside the area between the lines.
BW2 = imfill(BW, ([1, size(I,2)/2; size(I,1), size(I,2)/2]));
% Apply "distance transform" (compute the Euclidean distance from the closest white pixel)
D = bwdist(BW2);
% Mark all pixels outside the area between the lines with zero.
D(BW2 == 1) = 0;
figure;imshow(D, []);impixelinfo % Display D matrix as image
[M, N] = size(D);
% Find starting point and end point - assume we need to find a path from left side to right side.
x0 = 1;
[~, y0] = max(D(:, x0));
x1 = N;
[~, y1] = max(D(:, x1));
% https://www.mathworks.com/matlabcentral/fileexchange/46088-dijkstra-lowest-cost-for-images
StartNode = y0;
EndNode = M*N - (M-y1-1);
conn = 8;%4 or 8 - connected neighborhood for linking pixels
% Use 100 - D, because graphshortestpath searches for minimum weight (and we are looking for maximum weight path).
CostMat = 100 - D;
G = im2graph(CostMat, conn);
%Find "shortest" path from StartNode to EndNode
[dist, path, pred] = graphshortestpath(G, StartNode, EndNode);
% Mark white path in image J image
J = origI;R = J(:,:,1);G = J(:,:,2);B = J(:,:,3);
R(path) = 255;G(path) = 255;B(path) = 255;
J = cat(3, R, G, B);
figure;imshow(J);impixelinfo % Display J image
Result:
D - Result of distance transform:
J - Original image with "path" marked with white color:
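For reference, here is a rough Python/OpenCV transcription of the same stages (my untested sketch: skimage's route_through_array, a minimum-cost path search, stands in for graphshortestpath, and the file name, threshold polarity and seed points are assumptions):
import cv2
import numpy as np
from skimage.graph import route_through_array

img = cv2.imread('two_contours.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
h, w = bw.shape
# fill the regions outside the area between the lines with white
mask = np.zeros((h + 2, w + 2), np.uint8)
cv2.floodFill(bw, mask, (w // 2, 0), 255)
cv2.floodFill(bw, mask, (w // 2, h - 1), 255)
# distance from every remaining (black) pixel to the closest white pixel
D = cv2.distanceTransform(cv2.bitwise_not(bw), cv2.DIST_L2, 5)
# high distance is desirable, so turn it into a cost to be minimized
cost = D.max() - D
y0 = int(np.argmax(D[:, 0]))    # start: ridge point on the left edge
y1 = int(np.argmax(D[:, -1]))   # end: ridge point on the right edge
path, _ = route_through_array(cost, (y0, 0), (y1, w - 1), fully_connected=True)
for r, c in path:
    img[r, c] = (255, 255, 255)  # mark the midline in white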
Update:
For the new example you can define three paths.
The solution becomes more complicated.
The example is not generalized to solve all cases.
There must be a simpler solution; I just can't think of one.
tmpI = imread('three_contours.png'); % Read input image
origI = permute(tmpI, [2, 1, 3]); % Transpose image
I = rgb2gray(origI); % Convert RGB to Grayscale.
BW = imbinarize(I); % Convert from Grayscale to binary image.
% Fill pixels outside the area between the lines.
%BW2 = imfill(BW, ([1, size(I,2)/2; size(I,1), size(I,2)/2]));
BW2 = imfill(BW, ([1, 1; size(I,1), size(I,2); size(I,2)/2, 1]));
% Apply "distance transform" (compute the Euclidean distance from the closest white pixel)
D = bwdist(BW2);
% Mark all pixels outside the area between the lines with zero.
D(BW2 == 1) = 0;
figure;imshow(D, []);impixelinfo % Display D matrix as image
[M, N] = size(D);
% Find starting point and end point - assume we need to find a path from left side to right side.
x0 = 1;
[~, y0a] = max(D(1:M/2, x0));
% Y coordinate of second point
[~, y0b] = max(D(M/2:M, x0));
y0b = y0b + M/2;
x1 = N;
[~, y1] = max(D(:, x1));
% https://www.mathworks.com/matlabcentral/fileexchange/46088-dijkstra-lowest-cost-for-images
StartNodeA = y0a;
StartNodeB = y0b;
EndNode = M*N - (M-y1-1);
conn = 8;%4 or 8 - connected neighborhood for linking pixels
% Use 1000 - D, because graphshortestpath searches for minimum weight (and we are looking for the maximum weight path).
D(D==0) = -10000; % Increase the "cost" where D is zero
CostMat = 1000 - D;
G = im2graph(CostMat, conn);
%Find "shortest" path from StartNode to EndNode
[dist, pathA, pred] = graphshortestpath(G, StartNodeA, EndNode);
[dist, pathB, pred] = graphshortestpath(G, StartNodeB, EndNode);
[dist, pathC, pred] = graphshortestpath(G, StartNodeA, StartNodeB);
% Mark white path in image J image
J = origI;R = J(:,:,1);G = J(:,:,2);B = J(:,:,3);
R(pathA) = 255;
G(pathB) = 255;
B(pathC) = 255;
J = cat(3, R, G, B);
J = permute(J, [2, 1, 3]); % Transpose image
figure;imshow(J);impixelinfo % Display J image
Three lines:

how to make complex shapes using a swarm of dots (like a chair, a rocket and many more) using pygame and numpy

I am working on a project on swarm algorithms, and I am trying to make complex shapes using swarm consensus. However, the mathematics to achieve that seems quite difficult for me.
I have been able to make shapes like stars, circles and triangles, but making other complex shapes seems much harder. It would be very helpful to get an idea of how to use numpy arrays to build these complex shapes using swarms.
import math
import pickle
import numpy as np
import pygame

# general function to reset radian angle to [-pi, pi)
def reset_radian(radian):
    while radian >= math.pi:
        radian = radian - 2*math.pi
    while radian < -math.pi:
        radian = radian + 2*math.pi
    return radian

# general function to calculate next position node along a heading direction
def cal_next_node(node_poses, index_curr, heading_angle, rep_times):
    for _ in range(rep_times):
        index_next = index_curr + 1
        x = node_poses[index_curr][0] + 1.0*math.cos(heading_angle)
        y = node_poses[index_curr][1] + 1.0*math.sin(heading_angle)
        node_poses[index_next] = np.array([x, y])
        index_curr = index_next
    return index_next
##### script to generate star #####
filename = 'star'
swarm_size = 30
node_poses = np.zeros((swarm_size, 2))
outer_angle = 2*math.pi / 5.0
devia_right = outer_angle
devia_left = 2*outer_angle
# first node is at bottom left corner
heading_angle = outer_angle / 2.0 # current heading
heading_dir = 0 # current heading direction: 0 for left, 1 for right
seg_count = 0 # current segment count
for i in range(1, swarm_size):
    node_poses[i] = (node_poses[i-1] +
        np.array([math.cos(heading_angle), math.sin(heading_angle)]))
    seg_count = seg_count + 1
    if seg_count == 3:
        seg_count = 0
        if heading_dir == 0:
            heading_angle = reset_radian(heading_angle - devia_right)
            heading_dir = 1
        else:
            heading_angle = reset_radian(heading_angle + devia_left)
            heading_dir = 0
print(node_poses)
with open(filename, 'wb') as f:  # pickle needs a binary-mode file
    pickle.dump(node_poses, f)
pygame.init()
# find the right world and screen sizes
x_max, y_max = np.max(node_poses, axis=0)
x_min, y_min = np.min(node_poses, axis=0)
pixel_per_length = 30
world_size = (x_max - x_min + 2.0, y_max - y_min + 2.0)
screen_size = (int(world_size[0])*pixel_per_length, int(world_size[1])*pixel_per_length)
# convert node poses in the world to disp poses on screen
def cal_disp_poses():
    poses_temp = np.zeros((swarm_size, 2))
    # shift the loop to the middle of the world
    middle = np.array([(x_max+x_min)/2.0, (y_max+y_min)/2.0])
    for i in range(swarm_size):
        poses_temp[i] = (node_poses[i] - middle +
            np.array([world_size[0]/2.0, world_size[1]/2.0]))
    # convert to display coordinates
    poses_temp[:,0] = poses_temp[:,0] / world_size[0]
    poses_temp[:,0] = poses_temp[:,0] * screen_size[0]
    poses_temp[:,1] = poses_temp[:,1] / world_size[1]
    poses_temp[:,1] = 1.0 - poses_temp[:,1]
    poses_temp[:,1] = poses_temp[:,1] * screen_size[1]
    return poses_temp.astype(int)
disp_poses = cal_disp_poses()
# draw the loop shape on pygame window
color_white = (255,255,255)
color_black = (0,0,0)
screen = pygame.display.set_mode(screen_size)
screen.fill(color_white)
for i in range(swarm_size):
    pygame.draw.circle(screen, color_black, disp_poses[i], 5, 0)
for i in range(swarm_size-1):
    pygame.draw.line(screen, color_black, disp_poses[i], disp_poses[i+1], 2)
pygame.draw.line(screen, color_black, disp_poses[0], disp_poses[swarm_size-1], 2)
pygame.display.update()
Your method for drawing takes huge advantage of the symmetries in the shapes you are drawing. More complex shapes will have fewer symmetries, so your method will require a lot of tedious work to get them drawn the way you drew the star. Without symmetry you may be better served by writing each individual line 'command' in a list and following that list. For example, drawing the number 4 starting from the bottom (assuming 0 degrees points to the right):
angles = [90,225,0]
distances = [20,15,12]
Then with a similar program to what you have, you can start drawing dots in a line at 90 degrees for 20 dots, then 225 degrees for 15 dots etc... Then by adding to these two lists you can build up a very complicated shape without relying on symmetry.
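A minimal sketch of this list-driven approach (my illustration, reusing the numbers above):
import math
import numpy as np

angles = [90, 225, 0]      # heading of each stroke, in degrees
distances = [20, 15, 12]   # number of dots in each stroke

poses = [(0.0, 0.0)]       # start at the bottom of the "4"
for angle, steps in zip(angles, distances):
    rad = math.radians(angle)
    for _ in range(steps):
        x, y = poses[-1]
        poses.append((x + math.cos(rad), y + math.sin(rad)))
node_poses = np.array(poses)  # feed this to the same display code as the star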

Transform images with bezier curves

I'm using this article: nonlingr as a source for understanding non-linear transformations. In the section GLYPHS ALONG A PATH it explains how to use a parametric curve to transform an image. I'm trying to apply a cubic Bezier to an image, but I have been unsuccessful. This is my code:
OUT.aloc(IN.width(), IN.height());
//get the control points...
wVector p0(values[vindex], values[vindex+1], 1);
wVector p1(values[vindex+2], values[vindex+3], 1);
wVector p2(values[vindex+4], values[vindex+5], 1);
wVector p3(values[vindex+6], values[vindex+7], 1);
//this is to calculate t based on x
double trange = 1 / (OUT.width()-1);
//curve coefficients
double A = (-p0[0] + 3*p1[0] - 3*p2[0] + p3[0]);
double B = (3*p0[0] - 6*p1[0] + 3*p2[0]);
double C = (-3*p0[0] + 3*p1[0]);
double D = p0[0];
double E = (-p0[1] + 3*p1[1] - 3*p2[1] + p3[1]);
double F = (3*p0[1] - 6*p1[1] + 3*p2[1]);
double G = (-3*p0[1] + 3*p1[1]);
double H = p0[1];
//apply the transformation
for(long i = 0; i < OUT.height(); i++){
    for(long j = 0; j < OUT.width(); j++){
        // t = x / width
        double t = trange * j;
        // apply the article's formulas
        double x_path_d = 3*t*t*A + 2*t*B + C;
        double y_path_d = 3*t*t*E + 2*t*F + G;
        double angle = 3.14159265/2.0 + std::atan(y_path_d / x_path_d);
        mapped_point.Set((t*t*t)*A + (t*t)*B + t*C + D + i*std::cos(angle),
                         (t*t*t)*E + (t*t)*F + t*G + H + i*std::sin(angle),
                         1);
        // test if the point is inside the image
        if(mapped_point[0] < 0 ||
           mapped_point[0] >= OUT.width() ||
           mapped_point[1] < 0 ||
           mapped_point[1] >= IN.height())
            continue;
        OUT.setPixel(
            long(mapped_point[0]),
            long(mapped_point[1]),
            IN.getPixel(j, i));
    }
}
Applying this code to a 300x196 RGB image, all I get is a black screen, no matter what control points I use. It is hard to find information about this kind of transformation: searching for parametric curves, all I find is how to draw them, not how to apply them to images. Can someone help me transform an image with a Bezier curve?
IMHO, applying a curve to an image sounds like using a LUT. You check the value of the curve for each possible image value and then replace the image value with the one on the curve. That is, create a look-up table for each possible value in the image (e.g. 0, 1, ..., 255 for an 8-bit grayscale image): a 2x256 matrix whose first column holds the values 0 to 255 and whose second column holds the corresponding value of the curve.
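A minimal NumPy sketch of such a LUT for an 8-bit grayscale image (my illustration; the gamma curve is only a placeholder for whatever curve you evaluate):
import numpy as np

def apply_curve_lut(img, curve):
    # build a 256-entry LUT by evaluating the curve at each possible value
    values = np.arange(256, dtype=np.float64)
    lut = np.clip(np.round(curve(values)), 0, 255).astype(np.uint8)
    return lut[img]  # each pixel value indexes its replacement

# placeholder curve (a gamma of 0.5), standing in for the curve being evaluated
# out = apply_curve_lut(img, lambda v: 255.0 * (v / 255.0) ** 0.5)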

ordfilt2: Find requires variable sizing

I want to generate C++ code from the following Matlab function (Harris corner detection) that detects corners in an image. My constraint is that I have to generate a static library in C++ without variable-sizing support.
So, I have disabled variable-size support in the settings and selected the target platform as an unspecified 32-bit processor.
This way I'll be able to use it in Vivado HLS for an FPGA project.
However, when I generate the code, the line containing the ordfilt2 function throws the error that FIND requires variable sizing.
Please help me if there is a workaround to this problem. I have seen a similar question posted here: Matlab error "Find requires variable sizing". But I am not sure how it applies to my case. Thanks.
Here's the code:
function [cim] = harris(im , thresh)
dx = [-1 0 1; -1 0 1; -1 0 1]; % Derivative masks
dy = dx';
Ix = conv2(im, dx, 'same'); % Image derivatives
Iy = conv2(im, dy, 'same');
% Generate Gaussian filter of size 6*sigma (+/- 3sigma) and of
% minimum size 1x1.
sigma = 1.5;
g = fspecial('gaussian',max(1,fix(6*sigma)), sigma);
Ix2 = conv2(Ix.^2, g, 'same'); % Smoothed squared image derivatives
Iy2 = conv2(Iy.^2, g, 'same');
Ixy = conv2(Ix.*Iy, g, 'same');
cim = (Ix2.*Iy2 - Ixy.^2)./(Ix2 + Iy2 + eps); % Harris corner measure
% Extract local maxima by performing a grey scale morphological
% dilation and then finding points in the corner strength image that
% match the dilated image and are also greater than the threshold.
radius = 1.5;
sze = 2*radius+1; % Size of mask.
mx = ordfilt2(cim,sze^2,ones(sze)); % Grey-scale dilate.
cim = (cim==mx)&(cim>thresh); % Find maxima.
end
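Not a fix for the codegen error, but it may help to see what the offending line computes. With a mask of all ones and order sze^2, ordfilt2 simply selects the largest of the sze*sze values in each window, i.e. a fixed-size sliding maximum (grey-scale dilation). A NumPy sketch of that equivalence (my illustration; cim and thresh are as in the function above):
import numpy as np
from scipy.ndimage import maximum_filter

def local_maxima(cim, thresh, sze=4):
    # equivalent of mx = ordfilt2(cim, sze^2, ones(sze)): the largest of the
    # sze*sze values in each window (a sliding maximum / grey-scale dilation).
    # Note: ordfilt2 zero-pads the borders; maximum_filter reflects by default.
    mx = maximum_filter(cim, size=sze)
    return (cim == mx) & (cim > thresh)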

create 2D LoG kernel in openCV like fspecial in Matlab

My question is not how to filter an image using the laplacian of gaussian (basically using filter2D with the relevant kernel etc.).
What I want to know is how I generate the NxN kernel.
I'll give an example showing how I generated a [Winsize x WinSize] Gaussian kernel in openCV.
In Matlab:
gaussianKernel = fspecial('gaussian', WinSize, sigma);
In openCV:
cv::Mat gaussianKernel = cv::getGaussianKernel(WinSize, sigma, CV_64F);
cv::mulTransposed(gaussianKernel,gaussianKernel,false);
Where sigma and WinSize are predefined.
I want to do the same for a Laplacian of Gaussian.
In Matlab:
LoGKernel = fspecial('log', WinSize, sigma);
How do I get the exact kernel in openCV (exact up to negligible numerical differences)?
I'm working on a specific application where I need the actual kernel values, and simply finding another way of implementing LoG filtering by approximating it with a Difference of Gaussians is not what I'm after.
Thanks!
You can generate it manually, using the formula
LoG(x,y) = (-1/(pi*sigma^4)) * (1 - (x^2+y^2)/(2*sigma^2)) * e^(-(x^2+y^2)/(2*sigma^2))
http://homepages.inf.ed.ac.uk/rbf/HIPR2/log.htm
cv::Mat kernel(WinSize,WinSize,CV_64F);
int rows = kernel.rows;
int cols = kernel.cols;
double halfSize = (double) WinSize / 2.0;
for (size_t i = 0; i < rows; i++)
    for (size_t j = 0; j < cols; j++)
    {
        double x = (double)j - halfSize;
        double y = (double)i - halfSize;
        kernel.at<double>(j,i) = (1.0 /(M_PI*pow(sigma,4))) * (1 - (x*x+y*y)/(sigma*sigma))* (pow(2.718281828, - (x*x + y*y) / 2*sigma*sigma));
    }
If the function above is not OK, you can simply rewrite the Matlab version of fspecial:
case 'log' % Laplacian of Gaussian
    % first calculate Gaussian
    siz = (p2-1)/2;
    std2 = p3^2;
    [x,y] = meshgrid(-siz(2):siz(2),-siz(1):siz(1));
    arg = -(x.*x + y.*y)/(2*std2);
    h = exp(arg);
    h(h<eps*max(h(:))) = 0;
    sumh = sum(h(:));
    if sumh ~= 0,
        h = h/sumh;
    end;
    % now calculate Laplacian
    h1 = h.*(x.*x + y.*y - 2*std2)/(std2^2);
    h = h1 - sum(h1(:))/prod(p2); % make the filter sum to zero
I want to thank old-ufo for nudging me in the correct direction.
I was hoping I wouldn't have to reinvent the wheel by doing a quick Matlab-->OpenCV conversion, but I guess this is the best quick solution I have.
NOTE - I did this for square kernels only (it is easy to modify otherwise, but I have no need for that so...).
Maybe this can be written in a more elegant form, but it was a quick job so I could carry on with more pressing matters.
From main function:
int WinSize(7); int sigma(1); // can be changed to other odd-sized WinSize and different sigma values
cv::Mat h = fspecialLoG(WinSize,sigma);
And the actual function is:
// return NxN (square kernel) of Laplacian of Gaussian as is returned by Matlab's: fspecial('log', WinSize, sigma)
cv::Mat fspecialLoG(int WinSize, double sigma){
    // I wrote this only for square kernels as I have no need for kernels that aren't square
    cv::Mat xx (WinSize,WinSize,CV_64F);
    for (int i=0;i<WinSize;i++){
        for (int j=0;j<WinSize;j++){
            xx.at<double>(j,i) = (i-(WinSize-1)/2)*(i-(WinSize-1)/2);
        }
    }
    cv::Mat yy;
    cv::transpose(xx,yy);
    cv::Mat arg = -(xx+yy)/(2*pow(sigma,2));
    cv::Mat h (WinSize,WinSize,CV_64F);
    for (int i=0;i<WinSize;i++){
        for (int j=0;j<WinSize;j++){
            h.at<double>(j,i) = pow(exp(1),(arg.at<double>(j,i)));
        }
    }
    double minimalVal, maximalVal;
    minMaxLoc(h, &minimalVal, &maximalVal);
    cv::Mat tempMask = (h>DBL_EPSILON*maximalVal)/255;
    tempMask.convertTo(tempMask,h.type());
    cv::multiply(tempMask,h,h);
    if (cv::sum(h)[0]!=0){ h = h/cv::sum(h)[0]; }
    cv::Mat h1 = (xx + yy - 2*pow(sigma,2))/pow(sigma,4);
    cv::multiply(h,h1,h1);
    h = h1 - cv::sum(h1)[0]/(WinSize*WinSize);
    return h;
}
There is some difference between your function and the matlab version:
http://br1.einfach.org/tmp/log-matlab-vs-opencv.png.
Above is matlab fspecial('log', 31, 6) and below is the result of your function with the same parameters. Somehow the hat is more 'bent' - is this intended and what is the effect of this in later processing?
I can create a kernel very similar to the matlab one with these functions, which just directly reflect the LoG formula:
float LoG(int x, int y, float sigma) {
    float xy = (pow(x, 2) + pow(y, 2)) / (2 * pow(sigma, 2));
    return -1.0 / (M_PI * pow(sigma, 4)) * (1.0 - xy) * exp(-xy);
}

static Mat LOGkernel(int size, float sigma) {
    Mat kernel(size, size, CV_32F);
    int halfsize = size / 2;
    for (int x = -halfsize; x <= halfsize; ++x) {
        for (int y = -halfsize; y <= halfsize; ++y) {
            kernel.at<float>(x+halfsize, y+halfsize) = LoG(x, y, sigma);
        }
    }
    return kernel;
}
Here's a NumPy version that is directly translated from the fspecial function in MATLAB.
import numpy as np
import sys
def get_log_kernel(siz, std):
    x = y = np.linspace(-siz, siz, 2*siz+1)
    x, y = np.meshgrid(x, y)
    arg = -(x**2 + y**2) / (2*std**2)
    h = np.exp(arg)
    h[h < sys.float_info.epsilon * h.max()] = 0
    h = h/h.sum() if h.sum() != 0 else h
    h1 = h*(x**2 + y**2 - 2*std**2) / (std**4)
    return h1 - h1.mean()
The code below is the exact equivalent to fspecial('log', p2, p3):
def fspecial_log(p2, std):
    siz = int((p2-1)/2)
    x = y = np.linspace(-siz, siz, 2*siz+1)
    x, y = np.meshgrid(x, y)
    arg = -(x**2 + y**2) / (2*std**2)
    h = np.exp(arg)
    h[h < sys.float_info.epsilon * h.max()] = 0
    h = h/h.sum() if h.sum() != 0 else h
    h1 = h*(x**2 + y**2 - 2*std**2) / (std**4)
    return h1 - h1.mean()
I wrote an exact implementation of Matlab's fspecial function in OpenCV.
Function:
Mat C_fspecial_LOG(double* kernel_size, double sigma)
{
    double size[2] = { (kernel_size[0]-1)/2 , (kernel_size[1]-1)/2 };
    double std = sigma;
    const double eps = 2.2204e-16;
    cv::Mat kernel(kernel_size[0], kernel_size[1], CV_64FC1, 0.0);
    int row = 0, col = 0;
    for (double y = -size[0]; y <= size[0]; ++y, ++row)
    {
        col = 0;
        for (double x = -size[1]; x <= size[1]; ++x, ++col)
        {
            kernel.at<double>(row,col) = exp( -( pow(x,2) + pow(y,2) ) / (2*pow(std,2)) );
        }
    }
    double MaxValue;
    cv::minMaxLoc(kernel, nullptr, &MaxValue, nullptr, nullptr);
    Mat condition = ~(kernel < eps*MaxValue)/255;
    condition.convertTo(condition, CV_64FC1);
    kernel = kernel.mul(condition);
    cv::Scalar SUM = cv::sum(kernel);
    if (SUM[0] != 0)
    {
        kernel /= SUM[0];
    }
    return kernel;
}
Usage of this function:
double kernel_size[2] = {4,4}; // kernel size set to 4x4
double sigma = 2.1;
Mat kernel = C_fspecial_LOG(kernel_size,sigma);
Compare the OpenCV result with Matlab:
OpenCV result:
[0.04918466596701741, 0.06170341496034986, 0.06170341496034986, 0.04918466596701741;
0.06170341496034986, 0.07740850411228289, 0.07740850411228289, 0.06170341496034986;
0.06170341496034986, 0.07740850411228289, 0.07740850411228289, 0.06170341496034986;
0.04918466596701741, 0.06170341496034986, 0.06170341496034986, 0.04918466596701741]
Matlab result for fspecial('gaussian', 4, 2.1):
0.0492 0.0617 0.0617 0.0492
0.0617 0.0774 0.0774 0.0617
0.0617 0.0774 0.0774 0.0617
0.0492 0.0617 0.0617 0.0492
Just for the sake of reference, here is a Python implementation which creates the LoG filter kernel to detect blobs of a pre-defined radius in pixels.
import numpy as np

def create_log_filter_kernel(r_in_px: float):
    """
    Creates a LoG filter-kernel to detect blobs of a given radius r_in_px.
    \[
    LoG(x,y) = \frac{-1}{\pi\sigma^4}\left(1 - \frac{x^2 + y^2}{2\sigma^2}\right)e^{\frac{-(x^2+y^2)}{2\sigma^2}}
    \]
    Look for maxima if blob is black, minima if blob is white.
    :param r_in_px:
    :return: filter kernel
    """
    # sigma from radius: LoG has zero-crossing at $1 - \frac{x^2 + y^2}{2\sigma^2} = 0$
    # i.e. $r^2 = 2\sigma^2$ and thus $\sigma = r / \sqrt{2}$
    sigma = r_in_px/np.sqrt(2)
    # ksize such that filter covers $3\sigma$
    ksize = int(np.round(sigma*3))*2 + 1
    # setup filter
    xgv = np.arange(0, ksize) - ksize / 2
    ygv = np.arange(0, ksize) - ksize / 2
    x, y = np.meshgrid(xgv, ygv)
    kernel = -1 / (np.pi * sigma**4) * (1 - (x**2 + y**2) / (2*sigma**2)) * np.exp(-(x**2 + y**2) / (2 * sigma**2))
    # normalize to sum zero (does not change zero crossing, I tried it out for r < 100)
    kernel -= np.sum(kernel) / ksize**2
    # this is important: normalize such that positive/negative parts are comparable over different scales
    kernel /= np.sum(kernel[kernel > 0])
    return kernel
return kernel