How to calculate Gaussian-weighted Circular Window? - c++

I have a matrix with values filled in every field. The size is e.g. 15x15 (225 fields). Now I want to calculate a weight for every field based on its distance from the centre field of the matrix: the larger the distance, the less that pixel should be weighted in the calculation. This should look like a circle around the centre field. Here is an example image:
The small rectangle is the centre field. The weighting should be a Gaussian-weighted circular window with a sigma of 1.5. How could I get this done? My thought was something like the following, where every weight is filled into a matrix of the same size for the calculation afterwards.
expf = 1.f/(2.f * 1.5 * 1.5);
[...]
W[k] = (i*i + j*j) * expf;
Where i and j are the distances from the centre pixel (e.g. for the first iteration i = -7, j = -7).
To me this solution seemed fine, but the values I get are always very small, e.g.:
W[0]: 3.48362e-10
W[1]: 6.26123e-09
W[2]: 7.21553e-08
W[3]: 5.3316e-07
W[4]: 2.52596e-06
W[5]: 7.67319e-06
W[6]: 1.49453e-05
[...]
W[40]: 0.000523195
W[41]: 0.000110432
W[42]: 1.49453e-05
W[43]: 1.29687e-06
W[44]: 7.21553e-08
W[45]: 5.3316e-07
W[46]: 9.58266e-06
W[47]: 0.000110432
W[48]: 0.000815988
[...]
W[85]: 0.055638
W[86]: 0.0117436
W[87]: 0.00158933
W[88]: 0.000137913
[...]
W[149]: 7.67319e-06
W[150]: 2.52596e-06
W[151]: 4.53999e-05
W[152]: 0.000523195
W[153]: 0.00386592
Could it be that the calculation of the weights is wrong?

The PDF of a multivariate normal distribution is
(2 pi)^(-k/2) |Σ|^(-1/2) exp(-0.5 (x - μ)' Σ^(-1) (x - μ))
For your case, this translates to
double weight(int i, int j, double var) {
    return 1 / (2 * M_PI) * std::exp(-0.5 * (i * i + j * j) / var / var);
}
where i and j are measured from the centre (0, 0), and var plays the role of the standard deviation σ (the exponent divides by var * var, i.e. by the variance).
Note:
This is the PDF. If you want the value to be 1 at the center, use weight(i, j, var) / weight(0, 0, var). Otherwise, you will indeed get small numbers.
The decay is specified by var - lower values will show larger decay.
The following code prints
$ g++ --std=c++11 gs.cpp && ./a.out
1
0.884706
1
4.78512e-06
for example
#include <cmath>
#include <iostream>

double weight(int i, int j, double var) {
    return 1 / (2 * M_PI) * std::exp(-0.5 * (i * i + j * j) / var / var);
}

int main() {
    {
        const double f = weight(0, 0, 20);
        std::cout << weight(0, 0, 20) / f << std::endl;
        std::cout << weight(-7, -7, 20) / f << std::endl;
    }
    {
        const double f = weight(0, 0, 2);
        std::cout << weight(0, 0, 2) / f << std::endl;
        std::cout << weight(-7, -7, 2) / f << std::endl;
    }
}
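To tie this back to the question: with sigma = 1.5 and a 15x15 window you can build the whole weight matrix in one go. A minimal sketch (the function name and the centre normalization, so the middle weight is exactly 1, are my choices, not part of the answer above):
#include <cmath>
#include <vector>

// Build a (2*radius+1) x (2*radius+1) Gaussian window, normalized so that the
// centre weight is exp(0) = 1. sigma is the standard deviation (1.5 in the question).
std::vector<double> gaussianWindow(int radius, double sigma) {
    const int size = 2 * radius + 1;
    std::vector<double> W(size * size);
    const double scale = 1.0 / (2.0 * sigma * sigma);
    int k = 0;
    for (int i = -radius; i <= radius; ++i)
        for (int j = -radius; j <= radius; ++j)
            W[k++] = std::exp(-(i * i + j * j) * scale);
    return W;
}

// usage: std::vector<double> W = gaussianWindow(7, 1.5); // 15x15 window, W[112] == 1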

Related

Representation of Fourier series depends on tabulation points

Well, I had a task to create a function that builds the Fourier series of some mathematical function. I found all the formulas, but the main problem is that when I change the number of tabulation points on the interval used to draw the series, I get a very strange artifact:
This is the Fourier series of sin(x) on the interval (-3.14; 3.14) with 100 points for tabulation.
And this is the same function on the same interval but with 100000 points for tabulation.
Code for the Fourier series coefficients:
void fourieSeriesDecompose(std::function<double(double)> func, double period, long int iterations, double *&aParams, double *&bParams){
    aParams = new double[iterations];
    aParams[0] = integrateRiemans(func, 0, period, 1000);
    for(int i = 1; i < iterations; i++){
        auto sineFunc = [&](double x) -> double { return 2 * (func(x) * cos((2 * x * i * M_PI) / period)); };
        aParams[i] = integrateRiemans(sineFunc, -period / 2, period / 2, 1000) / period;
    }
    bParams = new double[iterations];
    for(int i = 1; i < iterations; i++){
        auto sineFunc = [&](double x) -> double { return 2 * (func(x) * sin(2 * (x * (i + 1) * M_PI) / period)); };
        bParams[i] = integrateRiemans(sineFunc, -period / 2, period / 2, 1000) / period;
    }
}
This is the code I use to reproduce the function from the computed coefficients:
double fourieSeriesCompose(double x, double period, long iterations, double *aParams, double *bParams){
    double y = aParams[0];
    for(int i = 1; i < iterations; i++){
        y += sqrt(aParams[i] * aParams[i] + bParams[i] * bParams[i]) * cos((2 * i * x * M_PI) / period - atan(bParams[i] / aParams[i]));
    }
    return y;
}
And the runner code
double period = M_PI * 2;
auto startFunc = [](double x) -> double{ return sin(x); };
fourieSeriesDecompose(*startFunc, period, 1000, aCoeficients, bCoeficients);
auto readyFunc = [&](double x) -> double{ return fourieSeriesCompose(x, period, 1000, aCoeficients, bCoeficients); };
tabulateFunc(readyFunc);
scaleFunc();
//Draw methods after this
see:
How to compute Discrete Fourier Transform?
So if I deciphered it correctly, aParams and bParams represent the real and imaginary parts of the result. Then the angles in sin and cos must be the same, but yours are different! You got this:
auto sineFunc = [&](double x) -> double { return 2*(func(x)*cos((2* x* i *M_PI)/period));
auto sineFunc = [&](double x) -> double { return 2*(func(x)*sin( 2*(x*(i+1)*M_PI)/period));
As you can see, it's not the same angle. Also, what is period? You've got iterations! If it is the period of the function you want to transform, then it should be applied to the function and not to the kernel ... Also, what does integrateRiemans do? Is it the nested for loop that integrates the Fourier transform? Btw. I hope func is real-domain, otherwise the integration/summation needs both the real and the imaginary part, not just one ...
So what you should do is:
create a (complex) table of the func(x) data on the interval you want, with iterations samples
so a for loop where x = x0+i*(x1-x0)/(iterations-1) and x0,x1 is the range over which you want to sample func. Let's call it f[i]:
for (i=0;i<iterations;i++) f[i]=func(x0+i*(x1-x0)/(iterations-1));
Fourier transform it
something like this:
for (i=0;i<iterations;i++) a[i]=b[i]=0;
for (j=0;j<iterations;j++)
 for (i=0;i<iterations;i++)
  {
   a[j]+=f[i]*cos(-2.0*M_PI*i*j/iterations);
   b[j]+=f[i]*sin(-2.0*M_PI*i*j/iterations);
  }
now a[],b[] should hold your slow DFT result ... beware integer rounding ... depending on compiler you might need to cast some stuff to double to avoid integer rounding.
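For reference, a compact C++ rendering of that slow DFT plus the matching reconstruction (names are mine; integrateRiemans and the original tabulation are replaced by a plain sample vector f):
#include <cmath>
#include <vector>

// Slow DFT of real samples f[0..N-1] into cosine/sine coefficients a[], b[]. O(N^2).
void slowDFT(const std::vector<double>& f, std::vector<double>& a, std::vector<double>& b) {
    const int N = static_cast<int>(f.size());
    a.assign(N, 0.0);
    b.assign(N, 0.0);
    for (int j = 0; j < N; ++j)
        for (int i = 0; i < N; ++i) {
            a[j] += f[i] * std::cos(-2.0 * M_PI * i * j / N);
            b[j] += f[i] * std::sin(-2.0 * M_PI * i * j / N);
        }
}

// Reconstruct sample i from the coefficients (inverse of the sums above, scaled by 1/N).
double reconstruct(const std::vector<double>& a, const std::vector<double>& b, int i) {
    const int N = static_cast<int>(a.size());
    double y = 0.0;
    for (int j = 0; j < N; ++j)
        y += (a[j] * std::cos(2.0 * M_PI * i * j / N) - b[j] * std::sin(2.0 * M_PI * i * j / N)) / N;
    return y;
}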

how to wrap radians between -pi and pi with mod? [duplicate]

I'm looking for some nice C code that will accomplish effectively:
while (deltaPhase >= M_PI) deltaPhase -= M_TWOPI;
while (deltaPhase < -M_PI) deltaPhase += M_TWOPI;
What are my options?
Edit Apr 19, 2013:
Modulo function updated to handle boundary cases as noted by aka.nice and arr_sea:
static const double _PI= 3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348;
static const double _TWO_PI= 6.2831853071795864769252867665590057683943387987502116419498891846156328125724179972560696;
// Floating-point modulo
// The result (the remainder) has the same sign as the divisor.
// Similar to matlab's mod(); Not similar to fmod() - Mod(-3,4)= 1 fmod(-3,4)= -3
template<typename T>
T Mod(T x, T y)
{
    static_assert(!std::numeric_limits<T>::is_exact, "Mod: floating-point type expected");

    if (0. == y)
        return x;

    double m = x - y * floor(x/y);

    // handle boundary cases resulting from floating-point cut-off:
    if (y > 0)              // modulo range: [0..y)
    {
        if (m >= y)         // Mod(-1e-16, 360.): m= 360.
            return 0;

        if (m < 0)
        {
            if (y+m == y)
                return 0;   // just in case...
            else
                return y+m; // Mod(106.81415022205296, _TWO_PI): m= -1.421e-14
        }
    }
    else                    // modulo range: (y..0]
    {
        if (m <= y)         // Mod(1e-16, -360.): m= -360.
            return 0;

        if (m > 0)
        {
            if (y+m == y)
                return 0;   // just in case...
            else
                return y+m; // Mod(-106.81415022205296, -_TWO_PI): m= 1.421e-14
        }
    }

    return m;
}
// wrap [rad] angle to [-PI..PI)
inline double WrapPosNegPI(double fAng)
{
    return Mod(fAng + _PI, _TWO_PI) - _PI;
}

// wrap [rad] angle to [0..TWO_PI)
inline double WrapTwoPI(double fAng)
{
    return Mod(fAng, _TWO_PI);
}

// wrap [deg] angle to [-180..180)
inline double WrapPosNeg180(double fAng)
{
    return Mod(fAng + 180., 360.) - 180.;
}

// wrap [deg] angle to [0..360)
inline double Wrap360(double fAng)
{
    return Mod(fAng, 360.);
}
One-liner constant-time solution:
Okay, it's a two-liner if you count the second function for [min,max) form, but close enough — you could merge them together anyways.
/* change to `float/fmodf` or `long double/fmodl` or `int/%` as appropriate */

/* wrap x -> [0,max) */
double wrapMax(double x, double max)
{
    /* integer math: `(max + x % max) % max` */
    return fmod(max + fmod(x, max), max);
}

/* wrap x -> [min,max) */
double wrapMinMax(double x, double min, double max)
{
    return min + wrapMax(x - min, max - min);
}
Then you can simply use deltaPhase = wrapMinMax(deltaPhase, -M_PI, +M_PI).
The solution is constant-time, meaning that the time it takes does not depend on how far your value is from [-PI,+PI) — for better or for worse.
Verification:
Now, I don't expect you to take my word for it, so here are some examples, including boundary conditions. I'm using integers for clarity, but it works much the same with fmod() and floats:
Positive x:
wrapMax(3, 5) == 3: (5 + 3 % 5) % 5 == (5 + 3) % 5 == 8 % 5 == 3
wrapMax(6, 5) == 1: (5 + 6 % 5) % 5 == (5 + 1) % 5 == 6 % 5 == 1
Negative x:
Note: These assume that integer modulo copies left-hand sign; if not, you get the above ("Positive") case.
wrapMax(-3, 5) == 2: (5 + (-3) % 5) % 5 == (5 - 3) % 5 == 2 % 5 == 2
wrapMax(-6, 5) == 4: (5 + (-6) % 5) % 5 == (5 - 1) % 5 == 4 % 5 == 4
Boundaries:
wrapMax(0, 5) == 0: (5 + 0 % 5) % 5 == (5 + 0) % 5 == 5 % 5 == 0
wrapMax(5, 5) == 0: (5 + 5 % 5) % 5 == (5 + 0) % 5== 5 % 5 == 0
wrapMax(-5, 5) == 0: (5 + (-5) % 5) % 5 == (5 + 0) % 5 == 5 % 5 == 0
Note: Possibly -0 instead of +0 for floating-point.
The wrapMinMax function works much the same: wrapping x to [min,max) is the same as wrapping x - min to [0,max-min), and then (re-)adding min to the result.
I don't know what would happen with a negative max, but feel free to check that yourself!
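If you want to reproduce the verification above in actual code, a small check (assuming the wrapMax/wrapMinMax definitions from this answer are in the same file) could be:
#include <cmath>
#include <cstdio>

int main() {
    // mirrors the integer examples above; expects 3 1 2 4 0 0 0
    std::printf("%g %g %g %g %g %g %g\n",
                wrapMax(3, 5), wrapMax(6, 5), wrapMax(-3, 5), wrapMax(-6, 5),
                wrapMax(0, 5), wrapMax(5, 5), wrapMax(-5, 5));
    // and the actual use case; prints a value very close to 0
    std::printf("%g\n", wrapMinMax(4 * M_PI, -M_PI, +M_PI));
    return 0;
}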
If ever your input angle can reach arbitrarily high values, and if continuity matters, you can also try
atan2(sin(x),cos(x))
This will preserve continuity of sin(x) and cos(x) better than modulo for high values of x, especially in single precision (float).
Indeed, exact_value_of_pi - double_precision_approximation ~= 1.22e-16
On the other hand, most library/hardware use a high precision approximation of PI for applying the modulo when evaluating trigonometric functions (though x86 family is known to use a rather poor one).
Result might be in [-pi,pi], you'll have to check the exact bounds.
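In C++ that idea amounts to a one-line helper (the name is mine), at the cost of one sin, one cos and one atan2 per call:
#include <cmath>

// Wrap x to roughly [-pi, pi] via the trigonometric round trip described above.
inline double wrapAtan2(double x) {
    return std::atan2(std::sin(x), std::cos(x));
}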
Personally, I would prevent any angle from reaching several revolutions by wrapping systematically, and stick to an fmod-based solution like the Boost one.
There is also the fmod function in math.h, but the sign causes trouble, so a subsequent operation is needed to make the result fit in the proper range (like you already do with the while's). For big values of deltaPhase this is probably faster than subtracting/adding M_TWOPI hundreds of times.
deltaPhase = fmod(deltaPhase, M_TWOPI);
EDIT:
I didn't try it intensively but I think you can use fmod this way by handling positive and negative values differently:
if (deltaPhase > 0)
    deltaPhase = fmod(deltaPhase + M_PI, 2.0*M_PI) - M_PI;
else
    deltaPhase = fmod(deltaPhase - M_PI, 2.0*M_PI) + M_PI;
The computational time is constant (unlike the while solution which gets slower as the absolute value of deltaPhase increases)
I would do this:
double wrap(double x) {
return x-2*M_PI*floor(x/(2*M_PI)+0.5);
}
There will be significant numerical errors. The best solution to the numerical errors is to store your phase scaled by 1/PI or by 1/(2*PI) and depending on what you are doing store them as fixed point.
Instead of working in radians, use angles scaled by 1/(2π) and use modf, floor etc. Convert back to radians to use library functions.
This also has the effect that rotating ten thousand and a half revolutions is the same as rotating half then ten thousand revolutions, which is not guaranteed if your angles are in radians, as you have an exact representation in the floating point value rather than summing approximate representations:
#include <iostream>
#include <cmath>

float wrap_rads ( float r )
{
    while ( r > M_PI ) {
        r -= 2 * M_PI;
    }
    while ( r <= -M_PI ) {
        r += 2 * M_PI;
    }
    return r;
}

float wrap_grads ( float r )
{
    float i;
    r = modff ( r, &i );
    if ( r > 0.5 ) r -= 1;
    if ( r <= -0.5 ) r += 1;
    return r;
}

int main ()
{
    for ( int rotations = 1; rotations < 100000; rotations *= 10 ) {
        {
            float pi = ( float ) M_PI;
            float two_pi = 2 * pi;
            float a = pi;
            a += rotations * two_pi;
            std::cout << rotations << " and a half rotations in radians " << a << " => " << wrap_rads ( a ) / two_pi << '\n' ;
        }
        {
            float pi = ( float ) 0.5;
            float two_pi = 2 * pi;
            float a = pi;
            a += rotations * two_pi;
            std::cout << rotations << " and a half rotations in grads " << a << " => " << wrap_grads ( a ) / two_pi << '\n' ;
        }
        std::cout << '\n';
    }
}
Here is a version for other people finding this question that can use C++ with Boost:
#include <boost/math/constants/constants.hpp>
#include <boost/math/special_functions/sign.hpp>
template<typename T>
inline T normalizeRadiansPiToMinusPi(T rad)
{
// copy the sign of the value in radians to the value of pi
T signedPI = boost::math::copysign(boost::math::constants::pi<T>(),rad);
// set the value of rad to the appropriate signed value between pi and -pi
rad = fmod(rad+signedPI,(2*boost::math::constants::pi<T>())) - signedPI;
return rad;
}
C++11 version, no Boost dependency:
#include <cmath>
// Bring the 'difference' between two angles into [-pi; pi].
template <typename T>
T normalizeRadiansPiToMinusPi(T rad) {
// Copy the sign of the value in radians to the value of pi.
T signed_pi = std::copysign(M_PI,rad);
// Set the value of difference to the appropriate signed value between pi and -pi.
rad = std::fmod(rad + signed_pi,(2 * M_PI)) - signed_pi;
return rad;
}
I encountered this question when searching for how to wrap a floating-point value (or a double) between two arbitrary numbers. It didn't answer my case specifically, so I worked out my own solution, which can be seen below. It takes a given value and wraps it between lowerBound and upperBound, where upperBound meets lowerBound exactly, so that they are equivalent (i.e. 360 degrees == 0 degrees, so 360 wraps to 0).
Hopefully this answer is helpful to others stumbling across this question looking for a more generic bounding solution.
double boundBetween(double val, double lowerBound, double upperBound){
if(lowerBound > upperBound){std::swap(lowerBound, upperBound);}
val-=lowerBound; //adjust to 0
double rangeSize = upperBound - lowerBound;
if(rangeSize == 0){return upperBound;} //avoid dividing by 0
return val - (rangeSize * std::floor(val/rangeSize)) + lowerBound;
}
A related question for integers is available here:
Clean, efficient algorithm for wrapping integers in C++
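For completeness, the usual constant-time integer variant (hypothetical helper names, not taken from the linked question) looks like this:
// Wrap an integer x into [0, n) for n > 0, regardless of the sign of x.
int wrapInt(int x, int n) {
    return ((x % n) + n) % n;
}

// Wrap into [lo, hi), e.g. wrapIntRange(370, -180, 180) == 10.
int wrapIntRange(int x, int lo, int hi) {
    return lo + wrapInt(x - lo, hi - lo);
}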
A two-liner, non-iterative, tested solution for normalizing arbitrary angles to [-π, π):
double normalizeAngle(double angle)
{
double a = fmod(angle + M_PI, 2 * M_PI);
return a >= 0 ? (a - M_PI) : (a + M_PI);
}
Similarly, for [0, 2π):
double normalizeAngle(double angle)
{
double a = fmod(angle, 2 * M_PI);
return a >= 0 ? a : (a + 2 * M_PI);
}
In the case where fmod() is implemented through truncated division and has the same sign as the dividend, it can be taken advantage of to solve the general problem thusly:
For the case of (-PI, PI]:
if (x > 0) x = x - 2PI * ceil(x/2PI) #Shift to the negative regime
return fmod(x - PI, 2PI) + PI
And for the case of [-PI, PI):
if (x < 0) x = x - 2PI * floor(x/2PI) #Shift to the positive regime
return fmod(x + PI, 2PI) - PI
[Note that this is pseudocode; my original was written in Tcl, and I didn't want to torture everyone with that. I needed the first case, so had to figure this out.]
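A direct C++ transcription of that pseudocode might read as follows (function names are mine; it relies on C's fmod() keeping the sign of the dividend):
#include <cmath>

// Wrap x to (-PI, PI].
double wrapToPiHalfOpenLow(double x) {
    if (x > 0) x -= 2 * M_PI * std::ceil(x / (2 * M_PI));   // shift to the negative regime
    return std::fmod(x - M_PI, 2 * M_PI) + M_PI;
}

// Wrap x to [-PI, PI).
double wrapToPiHalfOpenHigh(double x) {
    if (x < 0) x -= 2 * M_PI * std::floor(x / (2 * M_PI));  // shift to the positive regime
    return std::fmod(x + M_PI, 2 * M_PI) - M_PI;
}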
deltaPhase -= floor(deltaPhase/M_TWOPI)*M_TWOPI;
The way you suggested is best. It is fastest for small deflections. If angles in your program are constantly being deflected into the proper range, then you should only rarely run into big out-of-range values. Therefore paying the cost of complicated modular arithmetic code every round seems wasteful. Comparisons are cheap compared to modular arithmetic (http://embeddedgurus.com/stack-overflow/2011/02/efficient-c-tip-13-use-the-modulus-operator-with-caution/).
In C99:
float unwindRadians( float radians )
{
    const bool radiansNeedUnwinding = radians < -M_PI || M_PI <= radians;
    if ( radiansNeedUnwinding )
    {
        if ( signbit( radians ) )
        {
            radians = -fmodf( -radians + M_PI, 2.f * M_PI ) + M_PI;
        }
        else
        {
            radians = fmodf( radians + M_PI, 2.f * M_PI ) - M_PI;
        }
    }
    return radians;
}
If linking against glibc's libm (including newlib's implementation) you can access
__ieee754_rem_pio2f() and __ieee754_rem_pio2() private functions:
extern __int32_t __ieee754_rem_pio2f (float, float*);

float wrapToPI(float xf){
    const float p[4] = {0, M_PI_2, M_PI, -M_PI_2};
    float yf[2];
    int q;
    int qmod4;

    q = __ieee754_rem_pio2f(xf, yf);

    /* xf = q * M_PI_2 + yf[0] + yf[1]
     * yf[1] << yf[0], not sure if it could be ignored */

    qmod4 = q % 4;

    if (qmod4 == 2)
        /* (yf[0] > 0) defines interval (-pi,pi] */
        return ( (yf[0] > 0) ? -p[2] : p[2] ) + yf[0] + yf[1];
    else
        return p[qmod4] + yf[0] + yf[1];
}
Edit: Just realised that you need to link to libm.a, I couldn't find the symbols declared in libm.so
I have used (in python):
def WrapAngle(Wrapped, UnWrapped):
    TWOPI = math.pi * 2
    TWOPIINV = 1.0 / TWOPI
    return UnWrapped + round((Wrapped - UnWrapped) * TWOPIINV) * TWOPI
c-code equivalent:
#define TWOPI 6.28318531
double WrapAngle(const double dWrapped, const double dUnWrapped)
{
    const double TWOPIINV = 1.0 / TWOPI;
    return dUnWrapped + round((dWrapped - dUnWrapped) * TWOPIINV) * TWOPI;
}
Notice that this brings the angle into the wrapped domain within +/- 2pi, so for the +/- pi domain you need to handle that afterwards, for example:
if angle > pi:
    angle -= 2*math.pi

How to determine the axis of rotation from a set of point clouds

I have a number of point clouds taken from a Kinect-like instrument that is mounted on a tripod and then rotated. How do I determine the rotation axis accurately? I'm using C++ with PCL and Eigen.
I can match the point clouds together with ICP and run a global registration (SLAM or ELCH) to get a combined point cloud, but for a number of reasons I would like to be able to determine the axis accurately and force the registrations to respect this rotation.
One issue related to this problem is the origin of my instrument. I can measure the distance to the rotation axis from the external dimensions of the device fairly accurately, but I don't know exactly where the origin is in relation to the extremities of the device. Solving this problem could help me locate the origin too.
There are two methods that I'm considering.
One is to take the transformation matrices of the registered point clouds and extract the translation vectors, which represent the locations to which the transformation would project the internal origin for the current position. To the set of points acquired this way I could try to fit a circle; the centre point would represent a vector from the origin to the rotation axis, and the normal direction of the circle would be the direction of the axis.
The other option is to determine the rotation axis directly from any single rotation matrix, but the vector to the rotation axis seems volatile.
Any better solutions or insights on the issue?
You need to calculate the major axis of the tensor of inertia. https://en.m.wikipedia.org/wiki/Moment_of_inertia.
All points can be considered to have the same mass. Then you can use the Steiner (parallel axis) approach.
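A rough Eigen sketch of that idea (unit mass per point, inertia tensor about the centroid); which eigenvector corresponds to the rotation axis depends on the shape of the scanned scene, so treat this as something to verify on your data:
#include <Eigen/Dense>
#include <vector>

// Inertia tensor of a point cloud with unit mass per point, taken about the centroid.
Eigen::Matrix3d inertiaTensor(const std::vector<Eigen::Vector3d>& pts, Eigen::Vector3d& centroid) {
    centroid = Eigen::Vector3d::Zero();
    for (const auto& p : pts) centroid += p;
    centroid /= static_cast<double>(pts.size());

    Eigen::Matrix3d I = Eigen::Matrix3d::Zero();
    for (const auto& p : pts) {
        const Eigen::Vector3d r = p - centroid;
        I += r.squaredNorm() * Eigen::Matrix3d::Identity() - r * r.transpose();
    }
    return I;
}

// usage:
//   Eigen::Vector3d c;
//   Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> es(inertiaTensor(points, c));
//   Eigen::Vector3d axisCandidate = es.eigenvectors().col(0); // eigenvector of the smallest eigenvalue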
THIS IS NOT AN ANSWER TO THE ORIGINAL QUESTION BUT CLARIFICATION TO A DISCUSSION ABOUT DEFINING THE ROTATION AXIS BETWEEN TWO POSES
Deepfreeze, here is an Octave script to demonstrate what we discussed in the chat. I know it might be bad practice to post an answer without presenting one, but I hope this will give you an insight into what I was trying to explain about the relationship between the translation vector and the point on the rotation axis (t_i in your notation).
rotation_axis_unit = [0,0,1]' % parallel to z axis
angle = 1 /180 *pi() % one degree
% a rotation matrix of one degree rotation
R = [cos(angle) -sin(angle) 0 ; sin(angle) cos(angle) 0 ; 0 0 1 ];
% the point around which to rotate
axis_point = [-10,0,0]' % a point
% just a point used to demonstrate that all points form a circular path
test_point = [10,5,0]';
%containers for plotting
path_origin = zeros(360,3);
path_test_point = zeros(360,3);
path_v = zeros(360,3);
origin = [0,0,0]';
% updating rotation matrix
R_i = R;
M1 = [R,R*-axis_point.+axis_point;[0,0,0,1]];
% go around a full circle
for i=1:360
% v = the last column of M. Created from axis_point.
% -axis_point is the vector from axis_point to origin which is being rotated
% then a correction is applied to center it around the axis point
v = R_i * -axis_point .+ axis_point;
% building 4x4 transformation matrix
M = [R_i, v;[0,0,0,1]];
% M could also be built M_i = M1 * M_i, rotating the previous M by one degree around axis_point
% rotating the test point and saving it
test_point_i = M * [test_point;1];
path_test_point(i,:) = test_point_i(1:3,1)';
% saving the translation part of M
path_v(i,:) = v';
% rotating origin point and saving it
origin_i = M * [origin;1];
path_origin(i,:) = origin_i(1:3,1)';
R_i = R * R_i ;
end
figure(1)
% plot test point path, just to show it forms a circular path, center and axis_point
scatter3(path_test_point(:,1), path_test_point(:,2), path_test_point(:,3), 5,'r')
hold on
% plotting origin path, circular, center at axis_point
scatter3(path_origin(:,1), path_origin(:,2), path_origin(:,3), 7,'r')
hold on
% plotting translation vectors, identical to origin path, (if invisible rotate so that you are watching it from z axis direction)
scatter3(path_v(:,1), path_v(:,2), path_v(:,3), 1, 'black');
hold on
% plots for visual analysis
scatter3(0,0,0,5,'b') % origin
hold on
scatter3(axis_point(1), axis_point(2), axis_point(3), 5, 'g') % axis point, center of the circles
hold on
scatter3(test_point(1), test_point(2), test_point(3), 5, 'black') % test point origin
hold off
% what does this demonstrate?
% it shows the relationship between a 4x4
% transformation matrix and axis_angle notation plus point on the rotation axis
% M = [ R, v,; [0,0,0,1]] = [ R_i , R_i * -c + c; [ 0, 0, 0, 1] ]
%
% where c equals axis_point ( = perpendicular vector from origin to the axis of rotation )
% pay attention to path_v and path_origin
% they are identical
% path_v was extracted from the 4x4 transformation matrix M
% and path_origin was created by rotating the origin point by M
% --> v = R_i * -c + c
% Notice that since M describes a rotation of alpha angles around an
% axis that goes through c
% and its translation vector lies on a circle whose center
% is at the rotation_axis and radius is the distance from that
% point to origin ->
%
% M * M will describe a rotation of 2 x alpha angles around the same axis
% Therefore you can easily create more points that lay on that circle
% by multiplying M by itself and extracting the translation vector
%
% c can then be solved by normal circle fit algorithms.
%------------------------------------------------------------
% CAUTION !!!
% this applies perfectly when the transformation matrices have been created so
% that the translation is perfectly orthogonal to the rotation axis
% in real world matrices the translation will not be orthogonal
% therefore the points will not travel on a circular path but on a helix and this needs to be
% dealt with when solving the center of rotation.
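If you go down that road in C++, a small Eigen sketch (names are mine) for the first half of the job, estimating the axis direction as the normal of the best-fit plane through the translation vectors, could look like this; fitting the circle centre c in that plane is still a separate step, and the helix caveat above still applies:
#include <Eigen/Dense>
#include <vector>

// Estimate the rotation-axis direction from the translation vectors t_i of the
// registered poses: the t_i should lie (roughly) on a circle, so the normal of the
// best-fit plane through them is the axis direction.
Eigen::Vector3d axisDirectionFromTranslations(const std::vector<Eigen::Vector3d>& t) {
    Eigen::Vector3d mean = Eigen::Vector3d::Zero();
    for (const auto& v : t) mean += v;
    mean /= static_cast<double>(t.size());

    Eigen::MatrixXd A(t.size(), 3);
    for (size_t i = 0; i < t.size(); ++i) A.row(i) = (t[i] - mean).transpose();

    // The right singular vector belonging to the smallest singular value is the plane normal.
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(A, Eigen::ComputeThinV);
    return svd.matrixV().col(2);
}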
An option is to place a chessboard at ~1 m. Use the Kinect camera to take images for different rotations, where the whole chessboard is still visible. Fit the chessboard using OpenCV.
The goal is to find the xyz coordinates of the chessboard for the different orientations. Use your camera API functions to determine the xyz coordinates of the chessboard, or do the following:
Determine camera intrinsics of camera 1 and 2. (use both color and IR images for kinect).
Determine camera extrinsics (camera 2 [R,t] wrt camera 1)
Use the values to calculate projection matrices
Use the projection matrices to triangulate the points of the chessboard to get coordinates in [X,Y,Z] wrt camera1 coordinate system.
Each group of chessboard points is called [x_i]. Now we can write the equation x_1 = R * (x_0 - t) + t, with R the rotation around the axis and t a point on that axis.
Update:
This equation can be solved with a non-linear solver, I used ceres-solver.
#include "ceres/ceres.h"
#include "ceres/rotation.h"
#include "glog/logging.h"
#include "opencv2/opencv.hpp"
#include "csv.h"
#include "Eigen/Eigen"
using ceres::AutoDiffCostFunction;
using ceres::CostFunction;
using ceres::Problem;
using ceres::Solver;
using ceres::Solve;
struct AxisRotationError {
AxisRotationError(double observed_x0, double observed_y0, double observed_z0, double observed_x1, double observed_y1, double observed_z1)
: observed_x0(observed_x0), observed_y0(observed_y0), observed_z0(observed_z0), observed_x1(observed_x1), observed_y1(observed_y1), observed_z1(observed_z1) {}
template <typename T>
bool operator()(const T* const axis, const T* const angle, const T* const trans, T* residuals) const {
//bool operator()(const T* const axis, const T* const trans, T* residuals) const {
// Normalize axis
T a[3];
T k = axis[0] * axis[0] + axis[1] * axis[1] + axis[2] * axis[2];
a[0] = axis[0] / sqrt(k);
a[1] = axis[1] / sqrt(k);
a[2] = axis[2] / sqrt(k);
// Define quaternion from axis and angle. Convert angle to radians
T pi = T(3.14159265359);
//T angle[1] = {T(10.0)};
T quaternion[4] = { cos((angle[0]*pi / 180.0) / 2.0),
a[0] * sin((angle[0] * pi / 180.0) / 2.0),
a[1] * sin((angle[0] * pi / 180.0) / 2.0),
a[2] * sin((angle[0] * pi / 180.0) / 2.0) };
// Define transformation
T t[3] = { trans[0], trans[1], trans[2] };
// Calculate predicted positions
T observedPoint0[3] = { T(observed_x0), T(observed_y0), T(observed_z0)};
T point[3]; point[0] = observedPoint0[0] - t[0]; point[1] = observedPoint0[1] - t[1]; point[2] = observedPoint0[2] - t[2];
T rotatedPoint[3];
ceres::QuaternionRotatePoint(quaternion, point, rotatedPoint);
T predicted_x = rotatedPoint[0] + t[0];
T predicted_y = rotatedPoint[1] + t[1];
T predicted_z = rotatedPoint[2] + t[2];
// The error is the difference between the predicted and observed position.
residuals[0] = predicted_x - T(observed_x1);
residuals[1] = predicted_y - T(observed_y1);
residuals[2] = predicted_z - T(observed_z1);
return true;
}
// Factory to hide the construction of the CostFunction object from
// the client code.
static ceres::CostFunction* Create(const double observed_x0, const double observed_y0, const double observed_z0,
const double observed_x1, const double observed_y1, const double observed_z1) {
// Define AutoDiffCostFunction. <AxisRotationError, #residuals, #dim axis, #dim angle, #dim trans
return (new ceres::AutoDiffCostFunction<AxisRotationError, 3, 3, 1,3>(
new AxisRotationError(observed_x0, observed_y0, observed_z0, observed_x1, observed_y1, observed_z1)));
}
double observed_x0;
double observed_y0;
double observed_z0;
double observed_x1;
double observed_y1;
double observed_z1;
};
int main(int argc, char** argv) {
google::InitGoogleLogging(argv[0]);
// Load points.csv into cv::Mat's
// 216 rows with (x0, y0, z0, x1, y1, z1)
// [x1,y1,z1] = R* [x0-tx,y0-ty,z0-tz] + [tx,ty,tz]
// The xyz coordinates are points on a chessboard, where the chessboard
// is rotated for 4x. Each chessboard has 54 xyz points. So 4x 54,
// gives the 216 rows of observations.
// The chessboard is located at [0,0,1], as the camera_0 is located
// at [-0.1,0,0], the t should become [0.1,0,1.0].
// The chessboard is rotated around axis [0.0,1.0,0.0]
io::CSVReader<6> in("points.csv");
float x0, y0, z0, x1, y1, z1;
// The observations
cv::Mat x_0(216, 3, CV_32F);
cv::Mat x_1(216, 3, CV_32F);
for (int rowNr = 0; rowNr < 216; rowNr++){
if (in.read_row(x0, y0, z0, x1, y1, z1))
{
x_0.at<float>(rowNr, 0) = x0;
x_0.at<float>(rowNr, 1) = y0;
x_0.at<float>(rowNr, 2) = z0;
x_1.at<float>(rowNr, 0) = x1;
x_1.at<float>(rowNr, 1) = y1;
x_1.at<float>(rowNr, 2) = z1;
}
}
std::cout << x_0(cv::Rect(0, 0, 2, 5)) << std::endl;
// The variable to solve for with its initial value. It will be
// mutated in place by the solver.
int numObservations = 216;
double axis[3] = { 0.0, 1.0, 0.0 };
double* pAxis; pAxis = axis;
double angles[4] = { 10.0, 10.0, 10.0, 10.0 };
double* pAngles; pAngles = angles;
double t[3] = { 0.0, 0.0, 1.0,};
double* pT; pT = t;
bool FLAGS_robustify = true;
// Build the problem.
Problem problem;
// Set up the only cost function (also known as residual). This uses
// auto-differentiation to obtain the derivative (jacobian).
for (int i = 0; i < numObservations; ++i) {
ceres::CostFunction* cost_function =
AxisRotationError::Create(
x_0.at<float>(i, 0), x_0.at<float>(i, 1), x_0.at<float>(i, 2),
x_1.at<float>(i, 0), x_1.at<float>(i, 1), x_1.at<float>(i, 2));
//std::cout << "pAngles: " << pAngles[i / 54] << ", " << i / 54 << std::endl;
ceres::LossFunction* loss_function = FLAGS_robustify ? new ceres::HuberLoss(0.001) : NULL;
//ceres::LossFunction* loss_function = FLAGS_robustify ? new ceres::CauchyLoss(0.002) : NULL;
problem.AddResidualBlock(cost_function, loss_function, pAxis, &pAngles[i/54], pT);
//problem.AddResidualBlock(cost_function, loss_function, pAxis, pT);
}
// Run the solver!
ceres::Solver::Options options;
options.linear_solver_type = ceres::DENSE_SCHUR;
//options.linear_solver_type = ceres::DENSE_QR;
options.minimizer_progress_to_stdout = true;
options.trust_region_strategy_type = ceres::LEVENBERG_MARQUARDT;
options.num_threads = 4;
options.use_nonmonotonic_steps = false;
ceres::Solver::Summary summary;
ceres::Solve(options, &problem, &summary);
//std::cout << summary.FullReport() << "\n";
std::cout << summary.BriefReport() << "\n";
// Normalize axis
double k = axis[0] * axis[0] + axis[1] * axis[1] + axis[2] * axis[2];
axis[0] = axis[0] / sqrt(k);
axis[1] = axis[1] / sqrt(k);
axis[2] = axis[2] / sqrt(k);
// Plot results
std::cout << "axis: [ " << axis[0] << "," << axis[1] << "," << axis[2] << " ]" << std::endl;
std::cout << "t: [ " << t[0] << "," << t[1] << "," << t[2] << " ]" << std::endl;
std::cout << "angles: [ " << angles[0] << "," << angles[1] << "," << angles[2] << "," << angles[3] << " ]" << std::endl;
return 0;
}
The result I've got:
iter cost cost_change |gradient| |step| tr_ratio tr_radius ls_iter iter_time total_time
0 3.632073e-003 0.00e+000 3.76e-002 0.00e+000 0.00e+000 1.00e+004 0 4.30e-004 7.57e-004
1 3.787837e-005 3.59e-003 2.10e-003 1.17e-001 1.92e+000 3.00e+004 1 7.43e-004 8.55e-003
2 3.756202e-005 3.16e-007 1.73e-003 5.49e-001 1.61e-001 2.29e+004 1 5.35e-004 1.13e-002
3 3.589147e-005 1.67e-006 2.90e-004 9.19e-002 9.77e-001 6.87e+004 1 5.96e-004 1.46e-002
4 3.584281e-005 4.87e-008 1.38e-005 2.70e-002 1.00e+000 2.06e+005 1 4.99e-004 1.73e-002
5 3.584268e-005 1.35e-010 1.02e-007 1.63e-003 1.01e+000 6.18e+005 1 6.32e-004 2.01e-002
Ceres Solver Report: Iterations: 6, Initial cost: 3.632073e-003, Final cost: 3.584268e-005, Termination: CONVERGENCE
axis: [ 0.00119037,0.999908,-0.0134817 ]
t: [ 0.0993185,-0.0080394,1.00236 ]
angles: [ 9.90614,9.94415,9.93216,10.1119 ]
The resulting angles are quite nicely 10 degrees. These can even be held fixed in my case, as I know the rotation very accurately from my rotation stage. There is a small difference in the t and axis. This is caused by inaccuracies in my virtual stereo camera simulation. My chessboard squares are not exactly square and the dimensions are also a little off....
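Since the stage angles are known very accurately, they could also be held constant in the problem above instead of being optimized; something along these lines (my addition for illustration, placed before the ceres::Solve call):
// Keep the four per-view angles fixed at their measured values and
// only optimize the axis direction and the translation.
for (int v = 0; v < 4; ++v) {
    problem.SetParameterBlockConstant(&pAngles[v]);
}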
My simulation scripts, images, results: blender_simulation.zip

create 2D LoG kernel in openCV like fspecial in Matlab

My question is not how to filter an image using the Laplacian of Gaussian (basically using filter2D with the relevant kernel etc.).
What I want to know is how to generate the NxN kernel.
I'll give an example showing how I generated a [WinSize x WinSize] Gaussian kernel in OpenCV.
In Matlab:
gaussianKernel = fspecial('gaussian', WinSize, sigma);
In openCV:
cv::Mat gaussianKernel = cv::getGaussianKernel(WinSize, sigma, CV_64F);
cv::mulTransposed(gaussianKernel,gaussianKernel,false);
Where sigma and WinSize are predefined.
I want to do the same for a Laplacian of Gaussian.
In Matlab:
LoGKernel = fspecial('log', WinSize, sigma);
How do I get the exact kernel in OpenCV (exact up to negligible numerical differences)?
I'm working on a specific application where I need the actual kernel values, and simply finding another way of implementing LoG filtering by approximating it with a Difference of Gaussians is not what I'm after.
Thanks!
You can generate it manually, using the formula
LoG(x,y) = (-1/(pi*sigma^4)) * (1 - (x^2+y^2)/(2*sigma^2)) * e^(-(x^2+y^2)/(2*sigma^2))
http://homepages.inf.ed.ac.uk/rbf/HIPR2/log.htm
cv::Mat kernel(WinSize, WinSize, CV_64F);
int rows = kernel.rows;
int cols = kernel.cols;
double halfSize = (double)(WinSize - 1) / 2.0; // centre of the kernel
for (int i = 0; i < rows; i++)
    for (int j = 0; j < cols; j++)
    {
        double x = (double)j - halfSize;
        double y = (double)i - halfSize;
        kernel.at<double>(i,j) = (-1.0 / (M_PI*pow(sigma,4))) * (1 - (x*x + y*y)/(2*sigma*sigma)) * exp(-(x*x + y*y) / (2*sigma*sigma));
    }
If the function above is not OK, you can simply rewrite the Matlab version of fspecial:
case 'log' % Laplacian of Gaussian
% first calculate Gaussian
siz = (p2-1)/2;
std2 = p3^2;
[x,y] = meshgrid(-siz(2):siz(2),-siz(1):siz(1));
arg = -(x.*x + y.*y)/(2*std2);
h = exp(arg);
h(h<eps*max(h(:))) = 0;
sumh = sum(h(:));
if sumh ~= 0,
h = h/sumh;
end;
% now calculate Laplacian
h1 = h.*(x.*x + y.*y - 2*std2)/(std2^2);
h = h1 - sum(h1(:))/prod(p2); % make the filter sum to zero
I want to thank old-ufo for nudging me in the correct direction.
I was hoping I wouldn't have to reinvent the wheel by doing a quick Matlab-->OpenCV conversion, but I guess this is the best option I have for a quick solution.
NOTE - I did this for square kernels only (easy to modify otherwise, but I have no need for that so...).
Maybe this can be written in a more elegant form, but it is a quick job I did so I can carry on with more pressing matters.
From main function:
int WinSize(7); int sigma(1); // can be changed to other odd-sized WinSize and different sigma values
cv::Mat h = fspecialLoG(WinSize,sigma);
And the actual function is:
// return NxN (square kernel) Laplacian of Gaussian, as returned by Matlab's: fspecial('log', WinSize, sigma)
cv::Mat fspecialLoG(int WinSize, double sigma){
// I wrote this only for square kernels as I have no need for kernels that aren't square
cv::Mat xx (WinSize,WinSize,CV_64F);
for (int i=0;i<WinSize;i++){
for (int j=0;j<WinSize;j++){
xx.at<double>(j,i) = (i-(WinSize-1)/2)*(i-(WinSize-1)/2);
}
}
cv::Mat yy;
cv::transpose(xx,yy);
cv::Mat arg = -(xx+yy)/(2*pow(sigma,2));
cv::Mat h (WinSize,WinSize,CV_64F);
for (int i=0;i<WinSize;i++){
for (int j=0;j<WinSize;j++){
h.at<double>(j,i) = pow(exp(1),(arg.at<double>(j,i)));
}
}
double minimalVal, maximalVal;
minMaxLoc(h, &minimalVal, &maximalVal);
cv::Mat tempMask = (h>DBL_EPSILON*maximalVal)/255;
tempMask.convertTo(tempMask,h.type());
cv::multiply(tempMask,h,h);
if (cv::sum(h)[0]!=0){h=h/cv::sum(h)[0];}
cv::Mat h1 = (xx + yy - 2*pow(sigma,2)) / pow(sigma,4);
cv::multiply(h,h1,h1);
h = h1 - cv::sum(h1)[0]/(WinSize*WinSize);
return h;
}
There is some difference between your function and the matlab version:
http://br1.einfach.org/tmp/log-matlab-vs-opencv.png.
Above is matlab fspecial('log', 31, 6) and below is the result of your function with the same parameters. Somehow the hat is more 'bent' - is this intended and what is the effect of this in later processing?
I can create a kernel very similar to the matlab one with these functions, which just directly reflect the LoG formula:
float LoG(int x, int y, float sigma) {
    float xy = (pow(x, 2) + pow(y, 2)) / (2 * pow(sigma, 2));
    return -1.0 / (M_PI * pow(sigma, 4)) * (1.0 - xy) * exp(-xy);
}

static Mat LOGkernel(int size, float sigma) {
    Mat kernel(size, size, CV_32F);
    int halfsize = size / 2;
    for (int x = -halfsize; x <= halfsize; ++x) {
        for (int y = -halfsize; y <= halfsize; ++y) {
            kernel.at<float>(x+halfsize, y+halfsize) = LoG(x, y, sigma);
        }
    }
    return kernel;
}
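Typical usage, if the goal is to filter with the kernel afterwards (image is assumed to be a grayscale cv::Mat; the size/sigma values are just the ones from the comparison above):
// Build the kernel and convolve; a CV_32F destination keeps the negative lobe intact.
cv::Mat kernel = LOGkernel(31, 6.0f);
cv::Mat response;
cv::filter2D(image, response, CV_32F, kernel);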
Here's a NumPy version that is directly translated from the fspecial function in MATLAB.
import numpy as np
import sys
def get_log_kernel(siz, std):
    x = y = np.linspace(-siz, siz, 2*siz+1)
    x, y = np.meshgrid(x, y)
    arg = -(x**2 + y**2) / (2*std**2)
    h = np.exp(arg)
    h[h < sys.float_info.epsilon * h.max()] = 0
    h = h/h.sum() if h.sum() != 0 else h
    h1 = h*(x**2 + y**2 - 2*std**2) / (std**4)
    return h1 - h1.mean()
The code below is the exact equivalent to fspecial('log', p2, p3):
def fspecial_log(p2, std):
    siz = int((p2-1)/2)
    x = y = np.linspace(-siz, siz, 2*siz+1)
    x, y = np.meshgrid(x, y)
    arg = -(x**2 + y**2) / (2*std**2)
    h = np.exp(arg)
    h[h < sys.float_info.epsilon * h.max()] = 0
    h = h/h.sum() if h.sum() != 0 else h
    h1 = h*(x**2 + y**2 - 2*std**2) / (std**4)
    return h1 - h1.mean()
I wrote an exact implementation of Matlab's fspecial function in OpenCV
function:
Mat C_fspecial_LOG(double* kernel_size,double sigma)
{
double size[2]={ (kernel_size[0]-1)/2 , (kernel_size[1]-1)/2};
double std = sigma;
const double eps = 2.2204e-16;
cv::Mat kernel(kernel_size[0],kernel_size[1],CV_64FC1,0.0);
int row=0,col=0;
for (double y = -size[0]; y <= size[0]; ++y,++row)
{
col=0;
for (double x = -size[1]; x <= size[1]; ++x,++col)
{
kernel.at<double>(row,col)=exp( -( pow(x,2) + pow(y,2) ) /(2*pow(std,2)));
}
}
double MaxValue;
cv::minMaxLoc(kernel,nullptr,&MaxValue,nullptr,nullptr);
Mat condition=~(kernel < eps*MaxValue)/255;
condition.convertTo(condition,CV_64FC1);
kernel = kernel.mul(condition);
cv::Scalar SUM = cv::sum(kernel);
if(SUM[0]!=0)
{
kernel /= SUM[0];
}
return kernel;
}
usage of this function :
double kernel_size[2] = {4,4}; // kernel size set to 4x4
double sigma = 2.1;
Mat kernel = C_fspecial_LOG(kernel_size,sigma);
compare OpenCV result with Matlab:
opencv result:
[0.04918466596701741, 0.06170341496034986, 0.06170341496034986, 0.04918466596701741;
0.06170341496034986, 0.07740850411228289, 0.07740850411228289, 0.06170341496034986;
0.06170341496034986, 0.07740850411228289, 0.07740850411228289, 0.06170341496034986;
0.04918466596701741, 0.06170341496034986, 0.06170341496034986, 0.04918466596701741]
Matlab result for fspecial('gaussian', 4, 2.1) :
0.0492 0.0617 0.0617 0.0492
0.0617 0.0774 0.0774 0.0617
0.0617 0.0774 0.0774 0.0617
0.0492 0.0617 0.0617 0.0492
Just for the sake of reference, here is a Python implementation which creates the LoG filter kernel to detect blobs of a pre-defined radius in pixels.
def create_log_filter_kernel(r_in_px: float):
    """
    Creates a LoG filter-kernel to detect blobs of a given radius r_in_px.
    \[
    LoG(x,y) = \frac{-1}{\pi\sigma^4}\left(1 - \frac{x^2 + y^2}{2\sigma^2}\right)e^{\frac{-(x^2+y^2)}{2\sigma^2}}
    \]
    Look for maxima if blob is black, minima if blob is white.
    :param r_in_px:
    :return: filter kernel
    """
    # sigma from radius: LoG has zero-crossing at $1 - \frac{x^2 + y^2}{2\sigma^2} = 0$
    # i.e. $r^2 = 2\sigma^2$ and thus $\sigma = r / \sqrt{2}$
    sigma = r_in_px/np.sqrt(2)
    # ksize such that filter covers $3\sigma$
    ksize = int(np.round(sigma*3))*2 + 1
    # setup filter
    xgv = np.arange(0, ksize) - ksize / 2
    ygv = np.arange(0, ksize) - ksize / 2
    x, y = np.meshgrid(xgv, ygv)
    kernel = -1 / (np.pi * sigma**4) * (1 - (x**2 + y**2) / (2*sigma**2)) * np.exp(-(x**2 + y**2) / (2 * sigma**2))
    # normalize to sum zero (does not change zero crossing, I tried it out for r < 100)
    kernel -= np.sum(kernel) / ksize**2
    # this is important: normalize such that positive/negative parts are comparable over different scales
    kernel /= np.sum(kernel[kernel>0])
    return kernel

Waypoint generator between 2 points on 2d map

I'm now trying to generate straight sync points along my 2D map path.
In other words, I want to split the distance between point A on the map (for example X: 301, Y: 679) and point B (X: 360, Y: 630), emitting a point every time 8 units of distance are passed.
Each distance of 8 units is calculated by sqrt(pow(a_x - b_x, 2) + pow(a_y - b_y, 2)). I want to obtain the next coordinates on the map, something like a_x + distance and b_y + distance.
I tried to do it but it didn't work; the x axis doesn't change properly.
Here is my code :
float base_x = active->getX();
float base_y = active->getY();
float destx = incoming_packet.get()->getFloat(4);
float desty = incoming_packet.get()->getFloat(8);
float distance = active->distance(destx, desty); // calculated by sqrt(pow(current character x pos - destx, 2) + pow(current character y pos - desty, 2))
float delta_X = active->vDistance(base_x, destx); // calculated by sqrt(pow(base_x - destx, 2))
float delta_Y = active->vDistance(base_y, desty); // calculated by sqrt(pow(base_y - desty, 2))
float cosa = delta_X / distance;
float sina = delta_Y / distance;
int mapx = 1;
int mapy = 1;
if(distance > 8)///active sync
{
for(float dist = 8; dist < distance;dist+=8)
{
base_x += mapx * (cosa * 8);
base_y += mapy * (sina * 8);
BOOST_LOG_TRIVIAL(debug) << "[ACTIVESYNC]NEXT SYNC ACK X : " << base_x << " Y : " << base_y;
}
}
What am I doing wrong here?
"cosa" (and cosb) are apparently dimensionless. (i.e. meters / meters)
"mapx" (and "mapy") are also dimensionless.
Note that in your for loop, base_x, base_y describe a point on the map.
And the 2 interesting computations in that loop
base_x += mapx * (cosa * 8);
base_y += mapy * (sina * 8);
become meaningless by attempting to add dimensionless numbers to a point. It might be ok to multiply by dimensionless numbers, but it is unreasonable to add dimensionless numbers to map points.
I suggest cosa and sina be changed to represent the distance (i.e. meters) for each step.
float cosa = delta_X / 8; // size of steps in x direction
float sina = delta_Y / 8; // size of steps in y direction
Now the for loop can add 8 steps of cosa and sina appropriately to describe the path way points, and cosa and sina both have appropriate dimensions for the next computation.
The for loop can simplify to :
for(int step = 0; step < 8; step += 1)
{
base_x += (mapx * cosa);
base_y += (mapy * sina);
// remove or adapt the following
std::cout << std::setw(10) << std::left << (step+1) << std::setw(10)
<< base_x << std::setw(10) << base_y << std::endl;
// you were using:
//BOOST_LOG_TRIVIAL(debug) << "[ACTIVESYNC]NEXT SYNC ACK X : "
// << base_x << " Y : " << base_y;
}
My dummy code outputs:
base x/y 301 679
dest x/y 360 630
delta x/y 59 -49
step_x = 7.375
step_y = -6.125
step base_x base_y
0 301 679
1 308.375 672.875
2 315.75 666.75
3 323.125 660.625
4 330.5 654.5
5 337.875 648.375
6 345.25 642.25
7 352.625 636.125
8 360 630
Do these waypoints look more like what you are looking for?
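For reference, a self-contained sketch that steps every 8 distance units along the segment, as in the original question (rather than a fixed number of 8 steps), might look like this; the function name and the printf output are my own choices:
#include <cmath>
#include <cstdio>

// Emit waypoints from (ax, ay) towards (bx, by), one every 'step' distance units.
// Using signed deltas keeps the direction correct on both axes.
void generateWaypoints(float ax, float ay, float bx, float by, float step) {
    const float dx = bx - ax;
    const float dy = by - ay;
    const float distance = std::sqrt(dx * dx + dy * dy);
    if (distance <= step) return;
    const float cosa = dx / distance;
    const float sina = dy / distance;
    float x = ax, y = ay;
    for (float d = step; d < distance; d += step) {
        x += cosa * step;
        y += sina * step;
        std::printf("NEXT SYNC ACK X : %.3f Y : %.3f\n", x, y);
    }
}

// usage: generateWaypoints(301.f, 679.f, 360.f, 630.f, 8.f);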