Representation of Fourier series depends on tabulation points - c++

Well, I had a task to create a function that computes the Fourier series of some mathematical function, so I found all the formulas, but the main problem is that when I change the number of tabulation points on the interval used to draw the series, I get a very strange artifact:
This is the Fourier series of sin(x) on the interval (-3.14; 3.14) with 100 points for tabulation
And this is the same function on the same interval but with 100000 points for tabulation
Code for the Fourier series coefficients:
void fourieSeriesDecompose(std::function<double(double)> func, double period, long int iterations, double *&aParams, double *&bParams){
    aParams = new double[iterations];
    aParams[0] = integrateRiemans(func, 0, period, 1000);
    for(int i = 1; i < iterations; i++){
        auto sineFunc = [&](double x) -> double { return 2 * (func(x) * cos((2 * x * i * M_PI) / period)); };
        aParams[i] = integrateRiemans(sineFunc, -period / 2, period / 2, 1000) / period;
    }
    bParams = new double[iterations];
    for(int i = 1; i < iterations; i++){
        auto sineFunc = [&](double x) -> double { return 2 * (func(x) * sin(2 * (x * (i + 1) * M_PI) / period)); };
        bParams[i] = integrateRiemans(sineFunc, -period / 2, period / 2, 1000) / period;
    }
}
This is the code I use to reconstruct the function from the computed coefficients:
double fourieSeriesCompose(double x, double period, long iterations, double *aParams, double *bParams){
    double y = aParams[0];
    for(int i = 1; i < iterations; i++){
        y += sqrt(aParams[i] * aParams[i] + bParams[i] * bParams[i]) * cos((2 * i * x * M_PI) / period - atan(bParams[i] / aParams[i]));
    }
    return y;
}
And the driver code:
double period = M_PI * 2;
auto startFunc = [](double x) -> double{ return sin(x); };
fourieSeriesDecompose(*startFunc, period, 1000, aCoeficients, bCoeficients);
auto readyFunc = [&](double x) -> double{ return fourieSeriesCompose(x, period, 1000, aCoeficients, bCoeficients); };
tabulateFunc(readyFunc);
scaleFunc();
//Draw methods after this

see:
How to compute Discrete Fourier Transform?
So if I deciphered it correctly, aParams and bParams represent the real and imaginary parts of the result, so the angles in the sin and cos must be the same, but yours are different! You have this:
auto sineFunc = [&](double x) -> double { return 2*(func(x)*cos((2* x* i *M_PI)/period)); };
auto sineFunc = [&](double x) -> double { return 2*(func(x)*sin( 2*(x*(i+1)*M_PI)/period)); };
As you can see, it is not the same angle. Also, what is period? You've got iterations! If it is the period of the function you want to transform, then it should be applied to the function and not to the kernel... Also, what does integrateRiemans do? Is it the nested for loop that integrates the Fourier transform? Btw, I hope that func is real-valued, otherwise the integration/summation needs both real and imaginary parts, not just one...
So what you should do is:
create a (complex) table of the func(x) data on the interval you want, with iterations samples
so a for loop where x = x0+i*(x1-x0)/(iterations-1) and x0,x1 is the range over which you want to sample func. Let's call it f[i]:
for (i=0;i<iterations;i++) f[i]=func(x0+i*(x1-x0)/(iterations-1));
Fourier transform it
something like this:
for (i=0;i<iterations;i++) a[i]=b[i]=0;
for (j=0;j<iterations;j++)
    for (i=0;i<iterations;i++)
    {
        a[j]+=f[i]*cos(-2.0*M_PI*i*j/iterations);
        b[j]+=f[i]*sin(-2.0*M_PI*i*j/iterations);
    }
now a[], b[] should hold your slow DFT result... Beware of integer arithmetic: depending on the compiler, you might need to cast some operands to double to avoid integer truncation.
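A minimal, self-contained sketch of the two steps above (tabulate, then the slow DFT); the names slowDFT, f, a, b and the sample count are my own choices, not the original poster's code:
#include <cmath>
#include <vector>

void slowDFT(const std::vector<double> &f, std::vector<double> &a, std::vector<double> &b) {
    const std::size_t n = f.size();
    a.assign(n, 0.0);
    b.assign(n, 0.0);
    for (std::size_t j = 0; j < n; j++)
        for (std::size_t i = 0; i < n; i++) {
            const double ang = -2.0 * M_PI * double(i) * double(j) / double(n); // same angle for both parts
            a[j] += f[i] * std::cos(ang);
            b[j] += f[i] * std::sin(ang);
        }
}

int main() {
    const std::size_t n = 100;            // number of samples (iterations)
    const double x0 = -M_PI, x1 = M_PI;   // sampling range
    std::vector<double> f(n), a, b;
    for (std::size_t i = 0; i < n; i++)   // step 1: tabulate func on [x0, x1]
        f[i] = std::sin(x0 + double(i) * (x1 - x0) / double(n - 1));
    slowDFT(f, a, b);                     // step 2: a[], b[] now hold the real/imaginary parts
    return 0;
}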

Related

Efficient floating point scaling in C++

I'm working on my fast (and accurate) sin implementation in C++, and I have a problem with efficiently scaling the angle into the ±pi/2 range.
My sin function for ±pi/2 using a Taylor series is the following
(Note: FLOAT is a macro expanded to float or double just for the benchmark)
/**
 * Sin for 'small' angles, accurate on [-pi/2, pi/2], fairly accurate on [-pi, pi]
 */
// To switch between float and double
#define FLOAT float

FLOAT
my_sin_small(FLOAT x)
{
    constexpr FLOAT C1 = 1. / (7. * 6. * 5. * 4. * 3. * 2.);
    constexpr FLOAT C2 = -1. / (5. * 4. * 3. * 2.);
    constexpr FLOAT C3 = 1. / (3. * 2.);
    constexpr FLOAT C4 = -1.;
    // Correction for sin(pi/2) = 1, due to the ignored Taylor terms
    constexpr FLOAT corr = -1. / 0.9998431013994987;
    const FLOAT x2 = x * x;
    return corr * x * (x2 * (x2 * (x2 * C1 + C2) + C3) + C4);
}
So far so good... The problem comes when I try to scale an arbitrary angle into the +-pi/2 range. My current solution is:
FLOAT
my_sin(FLOAT x)
{
    constexpr FLOAT pi = 3.141592653589793238462;
    constexpr FLOAT rpi = 1 / pi;
    // convert to +-pi/2 range
    int n = std::nearbyint(x * rpi);
    FLOAT xbar = (n * pi - x) * (2 * (n & 1) - 1);
    // (2 * (n % 2) - 1) is a sign correction (see below)
    return my_sin_small(xbar);
};
I made a benchmark, and I'm losing a lot of time on the ±pi/2 scaling.
Tricking with int(angle/pi + 0.5) is a no-go since it is limited to int precision, it also requires +/- branching, and I try to avoid branches...
What should I try to improve the performance of this scaling? I'm out of ideas.
Benchmark results for float (in the benchmark the angle can fall outside the validity range of my_sin_small, but for the bench I don't care about that):
Benchmark results for double:
Sign correction for xbar in my_sin():
Algorithm accuracy compared to Python's sin() function:
Candidate improvements
Convert the radians x to rotations by dividing by 2*pi.
Retain only the fraction, so we have an angle in (-1.0 ... 1.0). This simplifies the OP's modulo step to a simple "drop the whole number" step. Going forward with different angle units simply involves a coefficient-set change; there is no need to scale back to radians.
For positive values, subtract 0.5 so we have (-0.5 ... 0.5), and then flip the sign. This centers the possible values about 0.0 and makes for better convergence of the approximating polynomial compared to the math sine function. For negative values, see below.
Call my_sin_small1(), which uses this (-0.5 ... 0.5) rotations range rather than [-pi ... +pi] radians.
In my_sin_small1(), fold the constants together to drop the corr * step.
Rather than using the truncated Taylor series, use a more optimal coefficient set. IMO, this will provide better answers, especially near +/-pi.
Notes: no int to/from float conversion code. With more analysis, it is possible to get a better set of coefficients that bring my_sin(+/-pi) closer to 0.0. This is just a quick set of code to demo fewer FP steps and good potential results.
C-like code for the OP to port to C++:
FLOAT my_sin_small1(FLOAT x) {
    static const FLOAT A1 = -5.64744881E+01;
    static const FLOAT A2 = +7.81017968E+01;
    static const FLOAT A3 = -4.11145353E+01;
    static const FLOAT A4 = +6.27923581E+00;
    const FLOAT x2 = x * x;
    return x * (x2 * (x2 * (x2 * A1 + A2) + A3) + A4);
}

FLOAT my_sin1(FLOAT x) {
    static const FLOAT pi = 3.141592653589793238462;
    static const FLOAT pi2i = 1 / (pi * 2);
    x *= pi2i;
    FLOAT xfraction = 0.5f - (x - truncf(x));
    return my_sin_small1(xfraction);
}
For negative values, use -my_sin1(-x) or similar code to flip the sign, or add 0.5 instead of subtracting 0.5 in the step above.
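A minimal sketch of that sign handling (my own illustration, assuming my_sin1() from the code above is in scope):
FLOAT my_sin1_signed(FLOAT x) {
    // flip the sign for negative inputs, as suggested above
    return (x < 0) ? -my_sin1(-x) : my_sin1(x);
}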
Test
#include <math.h>
#include <stdio.h>

int main(void) {
    for (int d = 0; d <= 360; d += 20) {
        FLOAT x = d / 180.0 * M_PI;
        FLOAT y = my_sin1(x);
        printf("%12.6f %11.8f %11.8f\n", x, sin(x), y);
    }
}
Output
0.000000 0.00000000 -0.00022483
0.349066 0.34202013 0.34221691
0.698132 0.64278759 0.64255589
1.047198 0.86602542 0.86590189
1.396263 0.98480775 0.98496443
1.745329 0.98480775 0.98501128
2.094395 0.86602537 0.86603642
2.443461 0.64278762 0.64260530
2.792527 0.34202022 0.34183803
3.141593 -0.00000009 0.00000000
3.490659 -0.34202016 -0.34183764
3.839724 -0.64278757 -0.64260519
4.188790 -0.86602546 -0.86603653
4.537856 -0.98480776 -0.98501128
4.886922 -0.98480776 -0.98496443
5.235988 -0.86602545 -0.86590189
5.585053 -0.64278773 -0.64255613
5.934119 -0.34202036 -0.34221727
6.283185 0.00000017 -0.00022483
The alternate code below makes for better results near 0.0, yet might cost a tad more time. The OP seems more inclined toward speed.
FLOAT xfraction = 0.5f - (x - truncf(x));
// vs.
FLOAT xfraction = x - truncf(x);
if (xfraction >= 0.5f) xfraction -= 1.0f;
[Edit]
Below is a better coefficient set with about 10% reduced error; a sketch using it follows the list.
-56.0833765f
77.92947047f
-41.0936875f
6.278635918f
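Dropped into my_sin_small1() above, this improved set would look like the following (my own arrangement of the posted coefficients; everything else is unchanged):
FLOAT my_sin_small1b(FLOAT x) {
    // improved coefficient set from the edit above
    static const FLOAT A1 = -56.0833765f;
    static const FLOAT A2 = +77.92947047f;
    static const FLOAT A3 = -41.0936875f;
    static const FLOAT A4 = +6.278635918f;
    const FLOAT x2 = x * x;
    return x * (x2 * (x2 * (x2 * A1 + A2) + A3) + A4);
}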
Yet another approach:
Spend more time (code) to reduce the range to ±pi/4 (±45 degrees); then it is possible to use only 3 or 2 terms of a polynomial that is like the usual Taylor series.
float sin_quick_small(float x) {
    const float x2 = x * x;
#if 0
    // max error about 7e-7
    static const FLOAT A2 = +0.00811656036940792f;
    static const FLOAT A3 = -0.166597759850666f;
    static const FLOAT A4 = +0.999994132743861f;
    return x * (x2 * (x2 * A2 + A3) + A4);
#else
    // max error about 0.00016
    static const FLOAT A3 = -0.160343346851626f;
    static const FLOAT A4 = +0.999031566686144f;
    return x * (x2 * A3 + A4);
#endif
}

float cos_quick_small(float x) {
    return cosf(x); // TBD code.
}

float sin_quick(float x) {
    if (x < 0.0) {
        return -sin_quick(-x);
    }
    int quo;
    float x90 = remquof(fabsf(x), 3.141592653589793238462f / 2, &quo);
    switch (quo % 4) {
        case 0:
            return sin_quick_small(x90);
        case 1:
            return cos_quick_small(x90);
        case 2:
            return sin_quick_small(-x90);
        case 3:
            return -cos_quick_small(x90);
    }
    return 0.0;
}
int main() {
    float max_x = 0.0;
    float max_error = 0.0;
    for (int d = -45; d <= 45; d += 1) {
        FLOAT x = d / 180.0 * M_PI;
        FLOAT y = sin_quick(x);
        double err = fabs(y - sin(x));
        if (err > max_error) {
            max_x = x;
            max_error = err;
        }
        printf("%12.6f %11.8f %11.8f err:%11.8f\n", x, sin(x), y, err);
    }
    printf("x:%.6f err:%.6f\n", max_x, max_error);
    return 0;
}

Memory leaks in a simple Rcpp function

I am developing a package in R that I would like to convert to Rcpp for better performance. I'm new to Rcpp (and to C++ in general). My problem is that the Rcpp function I've written works fine if I run it many times with one set of arguments, but if I try to loop it over many combinations of arguments, it springs memory leaks and causes the R session to abort.
Here is the code in R, which holds up well to any test I throw at it:
raw_noise <- function(timesteps, mu, sigma, phi) {
  delta <- mu * (1 - phi)
  variance <- sigma^2 * (1 - phi^2)
  noise <- vector(mode = "double", length = timesteps)
  noise[1] <- c(rnorm(1, mu, sigma))
  for (i in (1:(timesteps - 1))) {
    noise[i + 1] <- delta + phi * noise[i] + rnorm(1, 0, sqrt(variance))
  }
  return(noise)
}
Here is the code in Rcpp, using three Rcpp sugar functions (pow, sqrt, rnorm):
NumericVector raw_noise(int timesteps, double mu, double sigma, double phi) {
  double delta = mu * (1 - phi);
  double variance = pow(sigma, 2.0) * (1 - pow(phi, 2.0));
  NumericVector noise(timesteps);
  noise[0] = R::rnorm(mu, sigma);
  for(int i = 0; i < timesteps; ++i) {
    noise[i+1] = delta + phi*noise[i] + R::rnorm(0, sqrt(variance));
  }
  return noise;
}
What really confuses me is that this code runs without problems:
library(purrr)
rerun(10000, raw_noise(timesteps = 30, mu = 0.5, sigma = 0.2, phi = 0.3))
But when I run this code:
test_loop <- function(timesteps, mu, sigma, phi, replicates) {
  params <- cross_df(list(timesteps = timesteps, phi = phi, mu = mu, sigma = sigma))
  for (i in 1:nrow(params)) {
    print(params[i,])
    pmap(params[i,], raw_noise)
  }
}
library(purrr)
test_loop(timesteps=c(5, 6, 7, 8, 9, 10), mu=c(0.2, 0.5), sigma=c(0.2, 0.5),
phi=c(0, 0.1))
More often than not, the R session aborts and RStudio crashes altogether. But sometimes I manage to catch this error message before the R session aborts:
Error in match(x, table, nomatch = 0L) : GC encountered a node
(0x10db7af50) with an unknown SEXP type: NEWSXP at memory.c:1692
As I understand it, NEWSXP is an exotic object type in R that doesn't come up very often. What's happening looks to me like a memory leak, but I'm not at all sure how to fix it. Like I said, I'm new to Rcpp and C++ generally so I'd appreciate any nudges in the right direction.
You have an out of bounds error:
for(int i = 0; i < timesteps; ++i)
causes
noise[i+1]
to exceed the defined range since C++ indices start at 0 and not 1.
For example, indices 0 to timesteps - 1 cover a length of timesteps and are thus okay,
but
0 to timesteps would cover a length of timesteps + 1.
This can be seen if you change noise[i+1] to noise(i+1), which performs a bounds check on the requested index.
Error in raw_noise(100, 2, 3, 0.2) :
Index out of bounds: [index=100; extent=100].
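As a minimal illustration of the two access styles (my own example, assuming Rcpp is available and the file is compiled with Rcpp::sourceCpp()):
#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
double read_past_end(NumericVector v) {
    // v[v.size()] would silently read past the end (undefined behaviour);
    // v(v.size()) stops with an "Index out of bounds" error instead.
    return v(v.size());
}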
To address this, make the following change:
NumericVector raw_noise(int timesteps, double mu, double sigma, double phi) {
  double delta = mu * (1 - phi);
  double variance = pow(sigma, 2.0) * (1 - pow(phi, 2.0));
  NumericVector noise(timesteps);
  noise[0] = R::rnorm(mu, sigma);
  // change here
  for(int i = 0; i < timesteps - 1; ++i) { // 1 less time step
    noise[i+1] = delta + phi*noise[i] + R::rnorm(0, sqrt(variance));
  }
  return noise;
}

Fast approximate float division

On modern processors, float division is a good order of magnitude slower than float multiplication (when measured by reciprocal throughput).
I'm wondering if there are any algorithms out there for computing a fast approximation to x/y, given certain assumptions and tolerance levels. For example, if you assume that 0<x<y, and are willing to accept any output that is within 10% of the true value, are there algorithms faster than the built-in FDIV operation?
I hope that this helps, because this is probably as close as you're going to get to what you are looking for.
__inline__ double __attribute__((const)) divide( double y, double x ) {
    // calculates y/x
    union {
        double dbl;
        unsigned long long ull;
    } u;
    u.dbl = x;                                                      // x = x
    u.ull = ( 0xbfcdd6a18f6a6f52ULL - u.ull ) >> (unsigned char)1;  // pow( x, -0.5 )
    u.dbl *= u.dbl;                     // pow( pow(x,-0.5), 2 ) = pow( x, -1 ) = 1.0/x
    return u.dbl * y;                   // (1.0/x) * y = y/x
}
See also:
Another post about reciprocal approximation.
The Wikipedia page.
FDIV is usually much slower than FMUL simply because it can't be pipelined like multiplication and requires multiple clock cycles for the hardware's iterative convergence process.
The easiest way is to recognize that division is nothing more than multiplication of the dividend y by the inverse of the divisor x. The not-so-straightforward part is remembering that a float value is x = m * 2^e and its inverse is x^-1 = (1/m)*2^(-e) = (2/m)*2^(-e-1) = p * 2^q, and approximating this new mantissa as p = 2/m ≈ 3 - m, for 1 <= m < 2. This gives a rough piecewise-linear approximation of the inverse function; however, we can do a lot better by using an iterative Newton root-finding method to improve that approximation.
Let w = f(x) = 1/x; the inverse of this function is found by solving for x in terms of w, i.e. x = f^(-1)(w) = 1/w. To improve the output with the root-finding method, we must first create a function whose zero reflects the desired output, i.e. g(w) = 1/w - x, with d/dw(g(w)) = -1/w^2.
w[n+1]= w[n] - g(w[n])/g'(w[n]) = w[n] + w[n]^2 * (1/w[n] - x) = w[n] * (2 - x*w[n])
w[n+1] = w[n] * (2 - x*w[n]), when w[n]=1/x, w[n+1]=1/x*(2-x*1/x)=1/x
These components then combine to give the final piece of code:
#include <stdint.h>

float inv_fast(float x) {
    union { float f; int i; } v;
    float w, sx;

    sx = (x < 0) ? -1 : 1;
    x = sx * x;

    v.i = (int)(0x7EF127EA - *(uint32_t *)&x);
    w = x * v.f;

    // Efficient iterative approximation improvement in Horner polynomial form.
    v.f = v.f * (2 - w);                                                                        // Single iteration, Err = -3.36e-3 * 2^(-flr(log2(x)))
    // v.f = v.f * ( 4 + w * (-6 + w * (4 - w)));                                               // Second iteration, Err = -1.13e-5 * 2^(-flr(log2(x)))
    // v.f = v.f * (8 + w * (-28 + w * (56 + w * (-70 + w *(56 + w * (-28 + w * (8 - w)))))));  // Third iteration, Err = +-6.8e-8 * 2^(-flr(log2(x)))

    return v.f * sx;
}
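A quick accuracy check (my own test harness, not part of the answer; it assumes inv_fast() above is in scope) that compares y/x against y * inv_fast(x):
#include <math.h>
#include <stdio.h>

int main(void) {
    float max_rel_err = 0.0f;
    for (float x = 0.1f; x < 100.0f; x += 0.1f) {
        const float y = 1.0f;                      // relative error does not depend on y
        const float approx = y * inv_fast(x);
        const float exact = y / x;
        const float rel = fabsf(approx - exact) / fabsf(exact);
        if (rel > max_rel_err) max_rel_err = rel;
    }
    printf("max relative error: %g\n", max_rel_err);
    return 0;
}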

create 2D LoG kernel in openCV like fspecial in Matlab

My question is not how to filter an image using the laplacian of gaussian (basically using filter2D with the relevant kernel etc.).
What I want to know is how I generate the NxN kernel.
I'll give an example showing how I generated a [Winsize x WinSize] Gaussian kernel in openCV.
In Matlab:
gaussianKernel = fspecial('gaussian', WinSize, sigma);
In openCV:
cv::Mat gaussianKernel = cv::getGaussianKernel(WinSize, sigma, CV_64F);
cv::mulTransposed(gaussianKernel,gaussianKernel,false);
Where sigma and WinSize are predefined.
I want to do the same for a Laplacian of Gaussian.
In Matlab:
LoGKernel = fspecial('log', WinSize, sigma);
How do I get the exact kernel in openCV (exact up to negligible numerical differences)?
I'm working on a specific application where I need the actual kernel values, and simply finding another way of implementing LoG filtering by approximating it with a Difference of Gaussians is not what I'm after.
Thanks!
You can generate it manually, using the formula
LoG(x,y) = -(1/(pi*sigma^4)) * (1 - (x^2+y^2)/(2*sigma^2)) * e^(-(x^2+y^2)/(2*sigma^2))
http://homepages.inf.ed.ac.uk/rbf/HIPR2/log.htm
cv::Mat kernel(WinSize,WinSize,CV_64F);
int rows = kernel.rows;
int cols = kernel.cols;
double halfSize = (double) WinSize / 2.0;
for (size_t i=0; i<rows; i++)
    for (size_t j=0; j<cols; j++)
    {
        double x = (double)j - halfSize;
        double y = (double)i - halfSize;
        kernel.at<double>(j,i) = (1.0 /(M_PI*pow(sigma,4))) * (1 - (x*x+y*y)/(sigma*sigma))* (pow(2.718281828, - (x*x + y*y) / 2*sigma*sigma));
    }
If the function above is not OK, you can simply rewrite the Matlab version of fspecial:
case 'log' % Laplacian of Gaussian
    % first calculate Gaussian
    siz   = (p2-1)/2;
    std2  = p3^2;
    [x,y] = meshgrid(-siz(2):siz(2),-siz(1):siz(1));
    arg   = -(x.*x + y.*y)/(2*std2);
    h     = exp(arg);
    h(h<eps*max(h(:))) = 0;
    sumh = sum(h(:));
    if sumh ~= 0,
        h = h/sumh;
    end;
    % now calculate Laplacian
    h1 = h.*(x.*x + y.*y - 2*std2)/(std2^2);
    h  = h1 - sum(h1(:))/prod(p2); % make the filter sum to zero
I want to thank old-ufo for nudging me in the correct direction.
I was hoping I wouldn't have to reinvent the wheel with a quick Matlab-to-OpenCV conversion, but I guess this is the best quick solution I have.
NOTE - I did this for square kernels only (easy to modify otherwise, but I have no need for that, so...).
Maybe this can be written in a more elegant form, but it is a quick job I did so I can carry on with more pressing matters.
From main function:
int WinSize(7); int sigma(1); // can be changed to other odd-sized WinSize and different sigma values
cv::Mat h = fspecialLoG(WinSize,sigma);
And the actual function is:
// return NxN (square kernel) of Laplacian of Gaussian as is returned by Matlab's: fspecial(Winsize,sigma)
cv::Mat fspecialLoG(int WinSize, double sigma){
    // I wrote this only for square kernels as I have no need for kernels that aren't square
    cv::Mat xx (WinSize,WinSize,CV_64F);
    for (int i=0;i<WinSize;i++){
        for (int j=0;j<WinSize;j++){
            xx.at<double>(j,i) = (i-(WinSize-1)/2)*(i-(WinSize-1)/2);
        }
    }
    cv::Mat yy;
    cv::transpose(xx,yy);
    cv::Mat arg = -(xx+yy)/(2*pow(sigma,2));
    cv::Mat h (WinSize,WinSize,CV_64F);
    for (int i=0;i<WinSize;i++){
        for (int j=0;j<WinSize;j++){
            h.at<double>(j,i) = pow(exp(1),(arg.at<double>(j,i)));
        }
    }
    double minimalVal, maximalVal;
    minMaxLoc(h, &minimalVal, &maximalVal);
    cv::Mat tempMask = (h>DBL_EPSILON*maximalVal)/255;
    tempMask.convertTo(tempMask,h.type());
    cv::multiply(tempMask,h,h);

    if (cv::sum(h)[0]!=0){ h = h/cv::sum(h)[0]; }

    cv::Mat h1 = (xx+yy-2*pow(sigma,2))/pow(sigma,4);
    cv::multiply(h,h1,h1);

    h = h1 - cv::sum(h1)[0]/(WinSize*WinSize);
    return h;
}
There is some difference between your function and the Matlab version:
http://br1.einfach.org/tmp/log-matlab-vs-opencv.png
Above is Matlab's fspecial('log', 31, 6) and below is the result of your function with the same parameters. Somehow the hat is more 'bent' - is this intended, and what is the effect of this in later processing?
I can create a kernel very similar to the Matlab one with these functions, which just directly reflect the LoG formula:
float LoG(int x, int y, float sigma) {
    float xy = (pow(x, 2) + pow(y, 2)) / (2 * pow(sigma, 2));
    return -1.0 / (M_PI * pow(sigma, 4)) * (1.0 - xy) * exp(-xy);
}

static Mat LOGkernel(int size, float sigma) {
    Mat kernel(size, size, CV_32F);
    int halfsize = size / 2;
    for (int x = -halfsize; x <= halfsize; ++x) {
        for (int y = -halfsize; y <= halfsize; ++y) {
            kernel.at<float>(x+halfsize,y+halfsize) = LoG(x, y, sigma);
        }
    }
    return kernel;
}
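A minimal usage sketch (my own example; `img` is an assumed, already-loaded grayscale cv::Mat): build the kernel and convolve with cv::filter2D.
// build a 31x31 LoG kernel with sigma = 6 and filter the image with it
cv::Mat logKernel = LOGkernel(31, 6.0f);
cv::Mat response;
cv::filter2D(img, response, CV_32F, logKernel);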
Here's a NumPy version that is directly translated from the fspecial function in MATLAB.
import numpy as np
import sys

def get_log_kernel(siz, std):
    x = y = np.linspace(-siz, siz, 2*siz+1)
    x, y = np.meshgrid(x, y)
    arg = -(x**2 + y**2) / (2*std**2)
    h = np.exp(arg)
    h[h < sys.float_info.epsilon * h.max()] = 0
    h = h/h.sum() if h.sum() != 0 else h
    h1 = h*(x**2 + y**2 - 2*std**2) / (std**4)
    return h1 - h1.mean()
The code below is the exact equivalent to fspecial('log', p2, p3):
def fspecial_log(p2, std):
    siz = int((p2-1)/2)
    x = y = np.linspace(-siz, siz, 2*siz+1)
    x, y = np.meshgrid(x, y)
    arg = -(x**2 + y**2) / (2*std**2)
    h = np.exp(arg)
    h[h < sys.float_info.epsilon * h.max()] = 0
    h = h/h.sum() if h.sum() != 0 else h
    h1 = h*(x**2 + y**2 - 2*std**2) / (std**4)
    return h1 - h1.mean()
I wrote an exact implementation of Matlab's fspecial function in OpenCV.
The function:
Mat C_fspecial_LOG(double* kernel_size, double sigma)
{
    double size[2] = { (kernel_size[0]-1)/2 , (kernel_size[1]-1)/2 };
    double std = sigma;
    const double eps = 2.2204e-16;
    cv::Mat kernel(kernel_size[0], kernel_size[1], CV_64FC1, 0.0);
    int row = 0, col = 0;
    for (double y = -size[0]; y <= size[0]; ++y, ++row)
    {
        col = 0;
        for (double x = -size[1]; x <= size[1]; ++x, ++col)
        {
            kernel.at<double>(row,col) = exp( -( pow(x,2) + pow(y,2) ) / (2*pow(std,2)) );
        }
    }

    double MaxValue;
    cv::minMaxLoc(kernel, nullptr, &MaxValue, nullptr, nullptr);
    Mat condition = ~(kernel < eps*MaxValue)/255;
    condition.convertTo(condition, CV_64FC1);
    kernel = kernel.mul(condition);

    cv::Scalar SUM = cv::sum(kernel);
    if(SUM[0] != 0)
    {
        kernel /= SUM[0];
    }
    return kernel;
}
Usage of this function:
double kernel_size[2] = {4,4}; // kernel size set to 4x4
double sigma = 2.1;
Mat kernel = C_fspecial_LOG(kernel_size,sigma);
Compare the OpenCV result with Matlab:
OpenCV result:
[0.04918466596701741, 0.06170341496034986, 0.06170341496034986, 0.04918466596701741;
0.06170341496034986, 0.07740850411228289, 0.07740850411228289, 0.06170341496034986;
0.06170341496034986, 0.07740850411228289, 0.07740850411228289, 0.06170341496034986;
0.04918466596701741, 0.06170341496034986, 0.06170341496034986, 0.04918466596701741]
Matlab result for fspecial('gaussian', 4, 2.1) :
0.0492 0.0617 0.0617 0.0492
0.0617 0.0774 0.0774 0.0617
0.0617 0.0774 0.0774 0.0617
0.0492 0.0617 0.0617 0.0492
Just for the sake of reference, here is a Python implementation which creates the LoG filter kernel to detect blobs of a pre-defined radius in pixels.
def create_log_filter_kernel(r_in_px: float):
    """
    Creates a LoG filter-kernel to detect blobs of a given radius r_in_px.
    \[
        LoG(x,y) = \frac{-1}{\pi\sigma^4}\left(1 - \frac{x^2 + y^2}{2\sigma^2}\right)e^{\frac{-(x^2+y^2)}{2\sigma^2}}
    \]
    Look for maxima if blob is black, minima if blob is white.
    :param r_in_px:
    :return: filter kernel
    """
    # sigma from radius: LoG has zero-crossing at $1 - \frac{x^2 + y^2}{2\sigma^2} = 0$
    # i.e. $r^2 = 2\sigma^2$ and thus $\sigma = r / \sqrt{2}$
    sigma = r_in_px/np.sqrt(2)
    # ksize such that filter covers $3\sigma$
    ksize = int(np.round(sigma*3))*2 + 1
    # setup filter
    xgv = np.arange(0, ksize) - ksize / 2
    ygv = np.arange(0, ksize) - ksize / 2
    x, y = np.meshgrid(xgv, ygv)
    kernel = -1 / (np.pi * sigma**4) * (1 - (x**2 + y**2) / (2*sigma**2)) * np.exp(-(x**2 + y**2) / (2 * sigma**2))
    # normalize to sum zero (does not change zero crossing, I tried it out for r < 100)
    kernel -= np.sum(kernel) / ksize**2
    # this is important: normalize such that positive/negative parts are comparable over different scales
    kernel /= np.sum(kernel[kernel > 0])
    return kernel

Recursively create a sine wave given a single sine wave value and the period

I am trying to write a .oct function for Octave that, given a single sine wave value between -1 and 1 and the sine wave period, returns a sine wave vector of period length, with the last value in the vector being the given sine wave value. My code so far is:
#include <octave/oct.h>
#include <octave/dColVector.h>
#include <math.h>

#define PI 3.14159265

DEFUN_DLD (sinewave_recreate, args, , "args(0) sinewave value, args(1) is period")
{
    octave_value_list retval;

    double sinewave_value = args(0).double_value ();
    double period = args(1).double_value ();

    ColumnVector output_sinewave(period);
    double degrees_inc = 360 / period;
    double output_sinewave_degrees;

    output_sinewave_degrees = asin( sinewave_value ) * 180 / PI;
    output_sinewave(period-1) = sin( output_sinewave_degrees * PI / 180 );

    for (octave_idx_type ii (1); ii < period; ii++) // Start the loop
    {
        output_sinewave_degrees = output_sinewave_degrees - degrees_inc;
        if ( output_sinewave_degrees < 0 )
        {
            output_sinewave_degrees += 360 ;
        }
        output_sinewave( period-1-ii ) = sin( output_sinewave_degrees * PI / 180 );
    }

    retval(0) = output_sinewave;
    return retval;
}
but it is giving patchy results. By this I mean that it sometimes recreates the sine wave quite accurately and other times it is way off. I have determined this simply by creating a given sine wave, taking the last value in time and plugging this into the function to recreate the sine wave backwards through time, and then comparing plots of the two. Obviously I am doing something wrong, but I can't seem to identify what.
Let's start with some trigonometric identities:
sin(x)^2 + cos(x)^2 == 1
sin(x+y) == sin(x)*cos(y) + sin(y)*cos(x)
cos(x+y) == cos(x)*cos(y) - sin(x)*sin(y)
Given the sine and cosine at a point x, we can exactly calculate the values after a step of size d, after precalculating sd = sin(d) and cd = cos(d):
sin(x+d) = sin(x)*cd + cos(x)*sd
cos(x+d) = cos(x)*cd - sin(x)*sd
Given the initial sine value, you can calculate the initial cosine value:
cos(x) = sqrt(1 - sin(x)^2)
Note that there are two possible solutions, corresponding to the two possible square-root values. Also note that all the angles in these identities are in radians, and d needs to be negative if you're going back through the wave.
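Here is a minimal sketch of that recurrence (my own illustration, not the poster's code): given the last sample and a chosen sign for the cosine, it walks backwards through one period.
#include <cmath>
#include <vector>

std::vector<double> recreate_sine(double last_value, int period, bool cos_positive = true) {
    double s = last_value;
    double c = std::sqrt(1.0 - s * s) * (cos_positive ? 1.0 : -1.0); // two possible solutions
    const double d  = -2.0 * M_PI / period;   // negative step: going back through the wave
    const double sd = std::sin(d), cd = std::cos(d);
    std::vector<double> out(period);
    out[period - 1] = s;
    for (int i = 1; i < period; ++i) {
        const double s_new = s * cd + c * sd; // sin(x+d)
        const double c_new = c * cd - s * sd; // cos(x+d)
        s = s_new;
        c = c_new;
        out[period - 1 - i] = s;
    }
    return out;
}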
Mike's note that there are two possible solutions for cos(x) made me realise that I would need to resolve the phase ambiguity of the sine wave. My second, successful attempt at this function is:
#include <octave/oct.h>
#include <octave/dColVector.h>
#include <math.h>

#define PI 3.14159265

DEFUN_DLD (sinewave_recreate_3, args, , "args(0) sinewave value, args(1) is period, args(2) is the phase")
{
    octave_value_list retval;

    double sinewave_value = args(0).double_value ();
    double period = args(1).double_value ();
    double phase = args(2).double_value ();

    ColumnVector output_sinewave(period);

    double X0 = asin(sinewave_value);
    if (sinewave_value < 0 && phase > 180 && phase < 270)
    {
        X0 = PI + (0 - X0);
    }
    if (sinewave_value < 0 && phase >= 270)
    {
        X0 = X0 + 2 * PI;
    }
    if (sinewave_value > 0 && phase > 90)
    {
        X0 = PI - X0;
    }
    if (sinewave_value > 0 && phase < 0)
    {
        X0 = X0 + PI / 2;
    }

    double dx = PI / 180 * (360/period);
    for (octave_idx_type ii (0); ii < period; ii++) // Start the loop
    {
        output_sinewave(period-1-ii) = sin(X0 - dx * ii);
    }

    retval(0) = output_sinewave;
    return retval;
}
Thanks are also due to Keynslug.
There is a simple formula. Here is an example in Python:
import math
import numpy as np

# We are supposing the step is equal to 1 degree
T = math.radians(1.0)

PrevBeforePrevValue = np.sin(math.radians(49.0)) # y(t-2)
PrevValue = np.sin(math.radians(50.0)) # y(t-1)

ValueNowRecursiveFormula = ((2.0*(4.0-T*T))/(4.0+T*T))*PrevValue - PrevBeforePrevValue
print("From RECURSIVE formula - " + str(ValueNowRecursiveFormula))
The details can be found here:
http://howtodoit.com.ua/en/on-the-way-of-developing-recursive-sinewave-generator/
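For context, a minimal sketch (my own illustration, not from the linked page) of using that recursion to generate a whole vector, y[n] = k*y[n-1] - y[n-2] with k = 2*(4 - T*T)/(4 + T*T):
#include <cmath>
#include <vector>

std::vector<double> recursive_sine(double y_prev2, double y_prev1, double step_rad, int n) {
    const double T = step_rad;
    const double k = 2.0 * (4.0 - T * T) / (4.0 + T * T); // approximates 2*cos(T)
    std::vector<double> y(n);
    y[0] = y_prev2; // y(t-2) seed
    y[1] = y_prev1; // y(t-1) seed
    for (int i = 2; i < n; ++i)
        y[i] = k * y[i - 1] - y[i - 2];
    return y;
}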
You might try an easier way to go about it.
Just recall that if
y = sin(x)
then the first derivative of y is
dy/dx = cos(x)
So at every step of the computation you add to the current value of y a delta equal to
dy = cos(x) * dx
But that might cut your accuracy down as a side effect. You could try it and see. HTH.
It seems that a slightly improved update tends to be more accurate:
dy = cos(x + dx/2) * dx
Take a look at this.
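A minimal sketch of that derivative-stepping idea (my own illustration): advance y by dy = cos(x + dx/2) * dx at each step.
#include <cmath>
#include <vector>

std::vector<double> sine_by_stepping(double x0, double dx, int n) {
    std::vector<double> y(n);
    y[0] = std::sin(x0);
    for (int i = 1; i < n; ++i) {
        const double x = x0 + (i - 1) * dx;
        y[i] = y[i - 1] + std::cos(x + dx / 2.0) * dx; // midpoint-slope update
    }
    return y;
}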