Coefficient-wise custom functions in Eigen - C++

I have a do_magic method which takes a double and adds 42 to it. I'd like to apply this method to each coefficient of an Eigen::Matrix or Eigen::Array (meaning I wouldn't mind if it were only possible with one of the two types).
Is this possible?
Like this:
Eigen::MatrixXd m(2, 2);
m << 1,2,1,2;
m.applyCoefficientWise(do_magic);
// m is now 43, 44, 43, 44

You can use unaryExpr, though this returns a new expression rather than allowing you to modify the elements in place.
Copying the example out of the documentation:
#include <iostream>
#include <functional> // for std::ptr_fun (deprecated in C++11, removed in C++17)
#include <Eigen/Dense>
using namespace Eigen;
using namespace std;

double ramp(double x)
{
    if (x > 0)
        return x;
    else
        return 0;
}

int main(int, char**)
{
    Matrix4d m1 = Matrix4d::Random();
    cout << m1 << endl << "becomes: " << endl << m1.unaryExpr(ptr_fun(ramp)) << endl;
    return 0;
}
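Note that std::ptr_fun was deprecated in C++11 and removed in C++17. With any reasonably recent compiler you can pass a lambda (or a plain function pointer) instead, and assign the expression back to the matrix to get the in-place effect asked about. A minimal sketch, assuming the question's do_magic:

#include <iostream>
#include <Eigen/Dense>

double do_magic(double x) { return x + 42.0; } // the question's hypothetical method

int main()
{
    Eigen::MatrixXd m(2, 2);
    m << 1, 2, 1, 2;
    // unaryExpr returns a lazy expression; assigning it back to m is
    // alias-safe here because each coefficient depends only on itself.
    m = m.unaryExpr([](double x) { return do_magic(x); });
    std::cout << m << std::endl; // prints: 43 44
                                 //         43 44
    return 0;
}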

Create Eigen::Ref from std::vector

It is easy to copy data between e.g. Eigen::VectorXd and std::vector<double> or std::vector<Eigen::Vector3d>, for example
std::vector<Eigen::Vector3d> vec1(10, {0, 0, 0});
Eigen::VectorXd vec2(30);
vec2 = Eigen::VectorXd::Map(vec1[0].data(), 3 * vec1.size());
(see e.g. https://stackoverflow.com/a/26094708/4069571 or https://stackoverflow.com/a/21560121/4069571)
Also, it is possible to create an Eigen::Ref<VectorXd> from a Matrix block/column/... for example like
MatrixXd mat(10,10);
Eigen::Ref<VectorXd> vec = mat.col(0);
The Question
Is it possible to create an Eigen::Ref<VectorXd> from a std::vector<double> or even std::vector<Eigen::Vector3d> without first copying the data?
I tried, and it actually works as I describe in my comment, by first mapping and then wrapping it as an Eigen::Ref object. Shown here through a Google Test.
#include <iostream>
#include <vector>
#include <Eigen/Dense>
#include <gtest/gtest.h>

void processVector(Eigen::Ref<Eigen::VectorXd> refVec) {
    size_t size = refVec.size();
    ASSERT_TRUE(10 == size);
    std::cout << "Sum before change: " << refVec.sum() << std::endl; // output is 50 = 10 * 5.0
    refVec(0) = 10.0; // for a sum of 55
    std::cout << "Sum after change: " << refVec.sum() << std::endl;
}

TEST(testEigenRef, onStdVector) {
    std::vector<double> v10(10, 5.0);
    Eigen::Map<Eigen::VectorXd> mPtr(&v10[0], 10);
    processVector(mPtr);
    // confirm that no copy is made and the std::vector is changed as well
    std::cout << "Std vec[0]: " << v10[0] << std::endl; // output is 10.0
}
I made it a bit more elaborate after the 2nd edit. Now I have my Google Test unit test for Eigen::Ref (thank you). Hope this helps.
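For the std::vector<Eigen::Vector3d> case from the question, the same map-then-wrap trick appears to extend by flattening the data, since std::vector stores its elements contiguously and fixed-size Eigen vectors contain no padding. A sketch along the lines of the test above (my own extension, so treat it with the appropriate care):

std::vector<Eigen::Vector3d> pts(10, Eigen::Vector3d::Constant(5.0));
// View the 10 Vector3d as one flat 30-element vector, without copying.
Eigen::Map<Eigen::VectorXd> flat(pts[0].data(), 3 * pts.size());
Eigen::Ref<Eigen::VectorXd> ref = flat; // binds to the Map, still no copy
ref(0) = 10.0;                          // writes through to pts[0](0)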

Gas Particle simulation Collision calculation C++

This is, I feel, a rather complicated problem; I hope I can fit it into a small enough space to make it understandable. I'm presently writing code to
simulate ideal gas particles inside a box. I'm calculating whether two particles will collide, having calculated the time taken for them to reach their closest point (using an example where they have a head-on collision).
In this section of code I need to find whether two given particles will collide at all, before then calculating at what time and how they collide, etc.
Thus for my two particles:
Main.cpp
Vector vp1(0,0,0);
Vector vv1(1,0,0);
Vector vp2(12,0,0);
Vector vv2(-1,0,0);
Particle Particle1(1, vp1, vv1);
Particle Particle2(1, vp2, vv2);
Particle1.timeToCollision(Particle2);
Within my program I define a particle to be:
Header File
class Particle {
private:
    Vector p;      // position
    Vector v;      // velocity
    double radius; // radius
public:
    Particle();
    Particle(double r, const Vector Vecp, const Vector Vecv);
    void setPosition(Vector);
    void setVelocity(Vector);
    Vector getPosition() const;
    Vector getVelocity() const;
    double getRadius() const;
    void move(double t);
    double timeToCollision(const Particle particle);
    void collideParticles(Particle);
    ~Particle();
};
Vector is another class that in short gives x, y, z values. It also contains multiple functions for manipulating these.
And here is the part that I need help with, within the .cpp (ignore the couts of "start" and of single letters; they are simple checks of where my code exits, for tests):
Given the equations:

tca = -(r0 . v) / |v|^2

b^2 = |r0|^2 - (r0 . v)^2 / |v|^2

I have already written code to do the dot product and modulus for me, and:

s = v * tca

where s is the distance travelled in time tca.
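As a sanity check with the head-on example above (my own arithmetic): r0 = r2 - r1 = (12, 0, 0) and v = v2 - v1 = (-2, 0, 0), so tca = -(r0 . v) / |v|^2 = 24 / 4 = 6 and b^2 = |r0|^2 - (r0 . v)^2 / |v|^2 = 144 - 576 / 4 = 0. Two spheres of radius r collide once their centres come within 2r of each other, hence the comparison against (2r)^2 = 4r^2; since 0 < 4, these particles do collide, with closest approach at t = 6.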
double Particle::timeToCollision(const Particle particle){
    Vector r2 = particle.getPosition();
    Vector r1 = p;
    Vector v2 = particle.getVelocity();
    Vector v1 = v;
    Vector r0 = r2 - r1;
    Vector v = v2 - v1;
    double modv;
    double tca;
    double result = 0;
    double bsqr;
    modv = getVelocity().modulus();
    cout << "start" << endl;
    if (modv < 0.0000001) {
        cout << "a" << endl;
        result = FLT_MAX;
    } else {
        cout << "b" << endl;
        tca = ((--r0).dot(v)) / v.modulusSqr();
        // -- is an overridden operator that gives the negation ( eg (2, 3, 4) to (-2, -3, -4) )
        if (tca < 0) {
            cout << "c" << endl;
            result = FLT_MAX;
        } else {
            cout << "d" << endl;
            Vector s(v.GetX(), v.GetY(), v.GetZ());
            s.Scale(tca);
            cout << getVelocity().GetX() << endl;
            cout << getVelocity().GetY() << endl;
            cout << getVelocity().GetZ() << endl;
            double radsqr = radius * radius;
            double bx = (r0.GetX() * r0.GetX() - (((r0).dot(v)) * ((r0).dot(v)) / v.modulusSqr()));
            double by = (r0.GetY() * r0.GetY() - (((r0).dot(v)) * ((r0).dot(v)) / v.modulusSqr()));
            double bz = (r0.GetZ() * r0.GetZ() - (((r0).dot(v)) * ((r0).dot(v)) / v.modulusSqr()));
            if (bsqr < 4 * radsqr) {
                cout << "e" << endl;
                result = FLT_MAX;
            } else {
            }
            cout << "tca: " << tca << endl;
        }
    }
    cout << "fin" << endl;
    return result;
}
I have equations for calculating several aspects; tca refers to the time of closest approach.
As written in the code, I need to check whether b^2 > 4r^2. I have made some attempts and written out the X, Y and Z components of b, but I'm getting rubbish answers.
I just need help establishing whether I've already made mistakes, or the sort of direction I should be heading in.
All my code prior to this works as expected and I've written multiple tests for each to check.
Please let me know in a comment if there's any information you feel I've left out.
Any help greatly appreciated.
You had several mistakes in your code. You never set result to a value different from 0 or FLT_MAX. You also never calculate bsqr. And I guess the collision happens if bsqr < 4r^2, not the other way round (well, I do not understand why 4r^2 instead of r^2, but okay). Since you hide your Vector implementation, I used a common vector library. I also recommend not hand-rolling this stuff anyway; take a look at Armadillo or Eigen.
Here you go with a try in Eigen.
#include <algorithm>
#include <cmath>
#include <iostream>
#include <limits>
#include <type_traits>
#include "Eigen/Dense"

struct Particle {
    double radius;
    Eigen::Vector3d p;
    Eigen::Vector3d v;
};

template <class FloatingPoint>
std::enable_if_t<std::is_floating_point<FloatingPoint>::value, bool>
almost_equal(FloatingPoint x, FloatingPoint y, unsigned ulp = 1)
{
    FloatingPoint max = std::max(std::abs(x), std::abs(y));
    return std::abs(x - y) <= std::numeric_limits<FloatingPoint>::epsilon() * max * ulp;
}

double timeToCollision(const Particle& left, const Particle& right){
    Eigen::Vector3d r0 = right.p - left.p;
    Eigen::Vector3d v = right.v - left.v;
    double result = std::numeric_limits<double>::infinity();
    double vv = v.dot(v);
    if (!almost_equal(vv, 0.)) {
        double tca = (-r0).dot(v) / vv;
        if (tca >= 0) {
            Eigen::Vector3d s = tca * v;
            double bb = r0.dot(r0) - s.dot(s);
            double radius = std::max(left.radius, right.radius);
            if (bb < 4 * radius * radius)
                result = tca;
        }
    }
    return result;
}

int main()
{
    Eigen::Vector3d vp1 {0, 0, 0};
    Eigen::Vector3d vv1 {1, 0, 0};
    Eigen::Vector3d vp2 {12, 0, 0};
    Eigen::Vector3d vv2 {-1, 0, 0};
    Particle p1 {1, vp1, vv1};
    Particle p2 {1, vp2, vv2};
    std::cout << timeToCollision(p1, p2) << '\n';
}
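For the question's example particles this prints 6: r0 = (12, 0, 0) and v = (-2, 0, 0) give vv = 4, tca = 24 / 4 = 6 and bb = 144 - 144 = 0 < 4, matching the hand calculation in the question above. Note that the value returned is the time of closest approach of the centres, not the (slightly earlier) instant at which the surfaces first touch.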
My apologies for a very poorly worded question that was too long and bulky to make much sense of. Luckily I have found my own answer to be much easier than initially anticipated.
double Particle::timeToCollision(const Particle particle){
    Vector r2 = particle.getPosition();
    Vector r1 = p;
    Vector v2 = particle.getVelocity();
    Vector v1 = v;
    Vector r0 = r2 - r1;
    Vector v = v2 - v1;
    double modv;
    double tca = ((--r0).dot(v)) / v.modulusSqr();
    double bsqr;
    double result = 0;
    double rColTestx = r0.GetX() + v.GetX() * tca;
    double rColTesty = r0.GetY() + v.GetY() * tca;
    double rColTestz = r0.GetZ() + v.GetZ() * tca;
    Vector rtColTest(rColTestx, rColTesty, rColTestz);
    modv = getVelocity().modulus();
    cout << "start " << endl;
    if (modv < 0.0000001) {
        cout << "a" << endl;
        result = FLT_MAX;
    } else {
        cout << "b" << endl;
        if (tca < 0) {
            cout << "c" << endl;
            result = FLT_MAX;
        } else {
            cout << "d" << endl;
            Vector s(v.GetX(), v.GetY(), v.GetZ());
            s.Scale(tca);
            cout << getVelocity().GetX() << endl;
            cout << getVelocity().GetY() << endl;
            cout << getVelocity().GetZ() << endl;
            double radsqr = radius * radius;
            bsqr = rtColTest.modulusSqr();
            if (bsqr < 4 * radsqr) {
                cout << "e" << endl;
                cout << "collision occurs" << endl;
                result = FLT_MAX;
            } else {
                cout << "collision does not occur" << endl;
            }
        }
    }
    cout << "fin" << endl;
    return result;
}
Sorry it's a large section of code. Also, FLT_MAX is from the cfloat library; I didn't state this in my question. I found this to work for several examples I calculated on paper to check.
To be clear, the return result and result = 0 were arbitrary. I later edited it to return the time, but for this part I didn't need or want that.

Inverting matrices mod-26 with Eigen C++ library

I'm trying to write a program to crack a Hill cipher of arbitrary dimensions (MxM) in C++. Part of the process requires me to calculate the mod-26 inverse of a matrix.
For example, the modular inverse of the 2x2 array
14 3
11 0
is
0 19
9 24
I have a function that can accomplish this for 2x2 arrays only, which is not sufficient. I know that calculating inverses on larger-dimension arrays is difficult, so I'm using the Eigen C++ library. However, the Eigen inverse() function gives me this as the inverse of the above matrix:
0.000 0.091
0.333 -0.424
How can I calculate the modular 26 inverse that I need for a matrix of any dimensions with Eigen?
Try this:
#include <algorithm>
#include <cmath>
#include <cstdlib>
#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;
using namespace std;

// Reduce into the range [0, 26), even for negative inputs
// (C++'s % keeps the sign of the dividend).
int mod_26(int x)
{
    return ((x % 26) + 26) % 26;
}

int inverse_mod_26(int d)
{
    // We're not going to use the Euclidean Alg. or
    // even Fermat's Little Theorem, but brute force
    int base = 26, inv = 1;
    while ((inv < base) &&
           (((d * ++inv) % 26) != 1)) {}
    return inv;
}

int main(int argc, char **argv)
{
    Matrix2d m, minv;
    int inv_factor;
    m << 14, 3, 11, 0; // the matrix from the question
    double mdet = m.determinant();
    // mdet * m.inverse() is the adjugate, whose entries are integers
    minv = mdet * m.inverse();
    transform(&minv.data()[0], &minv.data()[4], &minv.data()[0],
              [](double d) { return mod_26(static_cast<int>(round(d))); });
    if (mod_26(static_cast<int>(round(mdet))) == 1)
    {
        // no further modification needed
    }
    else
    {
        inv_factor = inverse_mod_26(mod_26(static_cast<int>(round((m * minv)(0, 0)))));
        if (inv_factor == 26)
        {
            cerr << "No inverse exists!" << endl;
            return EXIT_FAILURE;
        }
        transform(&minv.data()[0], &minv.data()[4], &minv.data()[0],
                  [=](double d) { return mod_26(static_cast<int>(d) * inv_factor); });
    }
    cout << "m = " << endl << m << endl;
    cout << "minv = " << endl << minv << endl;
    // the product below is congruent to the identity mod 26
    cout << "(m * minv) = " << endl << m * minv << endl;
    return 0;
}
This is the 2x2 case, for base 26, but it can easily be modified. The algorithm relies on modifying the normal matrix inverse, and can easily be explained, if you wish. If your original matrix has a determinant (in the normal sense) that is not relatively prime to 26, i.e., if GCD(det(m), 26) != 1, then it will not have an inverse.
Tip: to avoid this problem, and the else clause above, pad your dictionary with three arbitrary characters, bringing the size to 29, which is prime and trivially satisfies the GCD property above.
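For completeness, here is one way the same idea might generalize to MxM (my own sketch, not part of the answer above; the helper names are made up). It recovers the integer adjugate as det(m) * m.inverse(), reduces it mod 26, and multiplies by the modular inverse of the determinant. Beware that the round() calls are only exact while the adjugate entries fit exactly in a double, so this is for modest matrix sizes only:

#include <cmath>
#include <iostream>
#include <Eigen/Dense>

// Reduce into [0, 26), even for negative inputs.
int mod_26(int x) { return ((x % 26) + 26) % 26; }

// Brute-force scalar inverse mod 26; returns 26 if none exists.
int scalar_inverse_mod_26(int d)
{
    for (int inv = 1; inv < 26; ++inv)
        if (mod_26(d * inv) == 1) return inv;
    return 26;
}

// Mod-26 inverse of an integer-valued MxM matrix.
// Returns false if gcd(det(m), 26) != 1, i.e. no inverse exists.
bool matrix_inverse_mod_26(const Eigen::MatrixXd& m, Eigen::MatrixXi& result)
{
    double det = m.determinant();
    int det_inv = scalar_inverse_mod_26(mod_26(static_cast<int>(std::llround(det))));
    if (det_inv == 26) return false;
    // det * m.inverse() is the adjugate, whose entries are integers.
    Eigen::MatrixXi adj = (det * m.inverse()).array().round().cast<int>().matrix();
    result = adj.unaryExpr([=](int v) { return mod_26(v * det_inv); });
    return true;
}

int main()
{
    Eigen::MatrixXd m(2, 2);
    m << 14, 3, 11, 0;
    Eigen::MatrixXi minv;
    if (matrix_inverse_mod_26(m, minv))
        std::cout << minv << std::endl; // expected: 0 19 / 9 24
    return 0;
}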

Pairwise differences between two matrices in Eigen

In MATLAB/Octave, pairwise distances between matrices, as required for e.g. k-means, are calculated by one function call (see cvKmeans.m) to distFunc(Codebook, X), with two matrices of dimensions KxD as arguments.
In Eigen this can be done for a matrix and one vector by using broadcasting, as explained on eigen.tuxfamily.org:
(m.colwise() - v).colwise().squaredNorm().minCoeff(&index);
However, in this case v is not just a vector, but a matrix. What's the equivalent oneliner in Eigen to calculate such pairwise (Euclidean) distances across all entries between two matrices?
I think the appropriate solution is to abstract this functionality into a function. That function may well be templated; and it may well use a loop - the loop will be really short, after all. Many matrix operations are implemented using loops - that's not a problem.
For example, given your example of...
MatrixXd p0(2, 4);
p0 << 1, 23, 6, 9,
      3, 11, 7, 2;
MatrixXd p1(2, 2);
p1 << 2, 20,
      3, 10;
then we can construct a matrix D such that D(i,j) = |p0(i) - p1(j)|^2
MatrixXd D(p0.cols(), p1.cols());
for (int i = 0; i < p1.cols(); i++)
    D.col(i) = (p0.colwise() - p1.col(i)).colwise().squaredNorm().transpose();
I think this is fine - we can use some broadcasting to avoid 2 levels of nesting: we iterate over p1's points, but not over p0's points, nor over their dimensions.
However, you can make a oneliner if you observe that |p0(i) - p1(j)|^2 = |p0(i)|^2 + |p1(j)|^2 - 2 p0(i)^T p1(j). In particular, the last component is just matrix multiplication, so D = -2 p0^T p1 + ...
The blank left to be filled is composed of a component that only depends on the row; and a component that only depends on the column: these can be expressed using rowwise and columnwise operations.
The final "oneliner" is then:
D = ( (p0.transpose() * p1 * -2
).colwise() + p0.colwise().squaredNorm().transpose()
).rowwise() + p1.colwise().squaredNorm();
You could also replace the rowwise/colwise trickery with an (outer) product with a 1 vector.
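For completeness, that variant looks something like this (my own formulation, same p0, p1 and D as above):

// Same result as the oneliner, but using explicit outer products with
// ones-vectors in place of the rowwise/colwise broadcasting.
D = p0.transpose() * p1 * -2
    + p0.colwise().squaredNorm().transpose() * RowVectorXd::Ones(p1.cols())
    + VectorXd::Ones(p0.cols()) * p1.colwise().squaredNorm();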
All of these result in the following (squared) distances:
1 410
505 10
32 205
50 185
You'd have to benchmark which is fastest, but I wouldn't be surprised to see the loop win, and I expect that's more readable too.
Eigen is more of a headache than I thought at first sight.
There is no reshape() functionality, for example (and conservativeResize is something else).
It also seems (I'd like to be corrected) that Map does not just offer a view on the data; assignments to temporary variables seem to be required.
The minCoeff function after the colwise operator cannot return both a minimum element and an index to that element.
It is unclear to me whether replicate actually allocates duplicates of the data. The reason behind broadcasting is that this should not be required.
// assuming typedefs along these lines:
typedef Eigen::MatrixXd matrix_t;
typedef Eigen::VectorXd column_vector_t;

matrix_t data(2, 4);
matrix_t means(2, 2);
// data points
data << 1, 23, 6, 9,
        3, 11, 7, 2;
// means
means << 2, 20,
         3, 10;
std::cout << "Data: " << std::endl;
std::cout << data.replicate(2, 1) << std::endl;
column_vector_t temp1(4);
temp1 = Eigen::Map<column_vector_t>(means.data(), 4);
std::cout << "Means: " << std::endl;
std::cout << temp1.replicate(1, 4) << std::endl;
matrix_t temp2(4, 4);
temp2 = (data.replicate(2, 1) - temp1.replicate(1, 4));
std::cout << "Differences: " << std::endl;
std::cout << temp2 << std::endl;
matrix_t temp3(2, 8);
temp3 = Eigen::Map<matrix_t>(temp2.data(), 2, 8);
std::cout << "Remap to 2xF: " << std::endl;
std::cout << temp3 << std::endl;
matrix_t temp4(1, 8);
temp4 = temp3.colwise().squaredNorm();
std::cout << "Squared norm: " << std::endl;
std::cout << temp4 << std::endl; //.minCoeff(&index);
matrix_t temp5(2, 4);
temp5 = Eigen::Map<matrix_t>(temp4.data(), 2, 4);
std::cout << "Squared norm result, the distances: " << std::endl;
std::cout << temp5.transpose() << std::endl;
// matrix_t::Index x, y;
std::cout << "Cannot get the indices: " << std::endl;
std::cout << temp5.transpose().colwise().minCoeff() << std::endl; // .minCoeff(&x,&y);
This is not a nice oneliner and seems overkill just to compare every column in data with every column in means and return a matrix with their differences. However, the versatility of Eigen does not seem to be such that this can be written much more concisely.

How to use the Eigen unsupported levenberg marquardt implementation?

I'm trying to minimize the following sample function:
F(x) = f[0]^2(x[0],...,x[n-1]) + ... + f[m-1]^2(x[0],...,x[n-1])
A normal way to minimize such a function could be the Levenberg-Marquardt algorithm.
I would like to perform this minimization in C++ and have done some initial tests
with Eigen that resulted in the expected solution.
My question is the following:
I'm used to optimization in Python using e.g. scipy.optimize.fmin_powell. Here
the input parameters are (func, x0, args=(), xtol=0.0001, ftol=0.0001, maxiter=None, maxfun=None, full_output=0, disp=1, retall=0, callback=None, direc=None).
So I can define a func(x0), give the x0 vector and start optimizing. If needed I can change
the optimization parameters.
Now the Eigen Levenberg-Marquardt algorithm works in a different way. I need to define a function
vector (why?). Furthermore, I can't manage to set the optimization parameters.
According to:
http://eigen.tuxfamily.org/dox/unsupported/classEigen_1_1LevenbergMarquardt.html
I should be able to use the setEpsilon() and other set functions.
But when I have the following code:
my_functor functor;
Eigen::NumericalDiff<my_functor> numDiff(functor);
Eigen::LevenbergMarquardt<Eigen::NumericalDiff<my_functor>,double> lm(numDiff);
lm.setEpsilon(); //doesn't exist!
So I have 2 questions:
Why is a function vector needed and why wouldn't a function scalar be enough?
References where I've searched for an answer:
http://www.ultimatepp.org/reference$Eigen_demo$en-us.html
http://www.alglib.net/optimization/levenbergmarquardt.php
How do I set the optimization parameters using the set functions?
So I believe I've found the answers.
1) The function is able to work as a function vector and as a function scalar.
If there are m solvable parameters, a Jacobian matrix of m x m needs to be created or numerically calculated. In order to do the matrix-vector multiplication J(x).transpose() * f(x), the function vector f(x) should have m items. These can be the m different functions, but we can also give f1 the complete function and make the other items 0.
2) The parameters can be set and read using lm.parameters.maxfev = 2000;
Both answers have been tested in the following example code:
#include <cmath>
#include <iostream>
#include <Eigen/Dense>
#include <unsupported/Eigen/NonLinearOptimization>
#include <unsupported/Eigen/NumericalDiff>

// Generic functor
template<typename _Scalar, int NX = Eigen::Dynamic, int NY = Eigen::Dynamic>
struct Functor
{
    typedef _Scalar Scalar;
    enum {
        InputsAtCompileTime = NX,
        ValuesAtCompileTime = NY
    };
    typedef Eigen::Matrix<Scalar,InputsAtCompileTime,1> InputType;
    typedef Eigen::Matrix<Scalar,ValuesAtCompileTime,1> ValueType;
    typedef Eigen::Matrix<Scalar,ValuesAtCompileTime,InputsAtCompileTime> JacobianType;

    int m_inputs, m_values;

    Functor() : m_inputs(InputsAtCompileTime), m_values(ValuesAtCompileTime) {}
    Functor(int inputs, int values) : m_inputs(inputs), m_values(values) {}

    int inputs() const { return m_inputs; }
    int values() const { return m_values; }
};

struct my_functor : Functor<double>
{
    my_functor(void): Functor<double>(2,2) {}
    int operator()(const Eigen::VectorXd &x, Eigen::VectorXd &fvec) const
    {
        // Implement y = 10*(x0+3)^2 + (x1-5)^2
        fvec(0) = 10.0*pow(x(0)+3.0,2) + pow(x(1)-5.0,2);
        fvec(1) = 0;
        return 0;
    }
};

int main(int argc, char *argv[])
{
    Eigen::VectorXd x(2);
    x(0) = 2.0;
    x(1) = 3.0;
    std::cout << "x: " << x << std::endl;

    my_functor functor;
    Eigen::NumericalDiff<my_functor> numDiff(functor);
    Eigen::LevenbergMarquardt<Eigen::NumericalDiff<my_functor>,double> lm(numDiff);
    lm.parameters.maxfev = 2000;
    lm.parameters.xtol = 1.0e-10;
    std::cout << lm.parameters.maxfev << std::endl;

    int ret = lm.minimize(x);
    std::cout << lm.iter << std::endl;
    std::cout << ret << std::endl;

    std::cout << "x that minimizes the function: " << x << std::endl;
    std::cout << "press [ENTER] to continue " << std::endl;
    std::cin.get();
    return 0;
}
This answer is an extension of two existing answers:
1) I adapted the source code provided by @Deepfreeze to include additional comments and two different test functions.
2) I use the suggestion from @user3361661 to rewrite the objective function in the correct form. As he suggested, it reduced the iteration count on my first test problem from 67 to 4.
#include <iostream>
#include <Eigen/Dense>
#include <unsupported/Eigen/NonLinearOptimization>
#include <unsupported/Eigen/NumericalDiff>

/***********************************************************************************************/

// Generic functor
// See http://eigen.tuxfamily.org/index.php?title=Functors
// C++ version of a function pointer that stores meta-data about the function
template<typename _Scalar, int NX = Eigen::Dynamic, int NY = Eigen::Dynamic>
struct Functor
{
    // Information that tells the caller the numeric type (eg. double) and size (input / output dim)
    typedef _Scalar Scalar;
    enum { // Required by numerical differentiation module
        InputsAtCompileTime = NX,
        ValuesAtCompileTime = NY
    };

    // Tell the caller the matrix sizes associated with the input, output, and jacobian
    typedef Eigen::Matrix<Scalar,InputsAtCompileTime,1> InputType;
    typedef Eigen::Matrix<Scalar,ValuesAtCompileTime,1> ValueType;
    typedef Eigen::Matrix<Scalar,ValuesAtCompileTime,InputsAtCompileTime> JacobianType;

    // Local copy of the number of inputs
    int m_inputs, m_values;

    // Two constructors:
    Functor() : m_inputs(InputsAtCompileTime), m_values(ValuesAtCompileTime) {}
    Functor(int inputs, int values) : m_inputs(inputs), m_values(values) {}

    // Get methods for users to determine function input and output dimensions
    int inputs() const { return m_inputs; }
    int values() const { return m_values; }
};

/***********************************************************************************************/

// https://en.wikipedia.org/wiki/Test_functions_for_optimization
// Booth Function
// Implement f(x,y) = (x + 2*y - 7)^2 + (2*x + y - 5)^2
struct BoothFunctor : Functor<double>
{
    // Simple constructor
    BoothFunctor(): Functor<double>(2,2) {}

    // Implementation of the objective function
    int operator()(const Eigen::VectorXd &z, Eigen::VectorXd &fvec) const {
        double x = z(0);   double y = z(1);
        /*
         * Evaluate the Booth function.
         * Important: LevenbergMarquardt is designed to work with objective functions that are a sum
         * of squared terms. The algorithm takes this into account: do not do it yourself.
         * In other words: objFun = sum(fvec(i)^2)
         */
        fvec(0) = x + 2*y - 7;
        fvec(1) = 2*x + y - 5;
        return 0;
    }
};

/***********************************************************************************************/

// https://en.wikipedia.org/wiki/Test_functions_for_optimization
// Himmelblau's Function
// Implement f(x,y) = (x^2 + y - 11)^2 + (x + y^2 - 7)^2
struct HimmelblauFunctor : Functor<double>
{
    // Simple constructor
    HimmelblauFunctor(): Functor<double>(2,2) {}

    // Implementation of the objective function
    int operator()(const Eigen::VectorXd &z, Eigen::VectorXd &fvec) const {
        double x = z(0);   double y = z(1);
        /*
         * Evaluate Himmelblau's function.
         * Important: LevenbergMarquardt is designed to work with objective functions that are a sum
         * of squared terms. The algorithm takes this into account: do not do it yourself.
         * In other words: objFun = sum(fvec(i)^2)
         */
        fvec(0) = x * x + y - 11;
        fvec(1) = x + y * y - 7;
        return 0;
    }
};

/***********************************************************************************************/

void testBoothFun() {
    std::cout << "Testing the Booth function..." << std::endl;
    Eigen::VectorXd zInit(2); zInit << 1.87, 2.032;
    std::cout << "zInit: " << zInit.transpose() << std::endl;
    Eigen::VectorXd zSoln(2); zSoln << 1.0, 3.0;
    std::cout << "zSoln: " << zSoln.transpose() << std::endl;

    BoothFunctor functor;
    Eigen::NumericalDiff<BoothFunctor> numDiff(functor);
    Eigen::LevenbergMarquardt<Eigen::NumericalDiff<BoothFunctor>,double> lm(numDiff);
    lm.parameters.maxfev = 1000;
    lm.parameters.xtol = 1.0e-10;
    std::cout << "max fun eval: " << lm.parameters.maxfev << std::endl;
    std::cout << "x tol: " << lm.parameters.xtol << std::endl;

    Eigen::VectorXd z = zInit;
    int ret = lm.minimize(z);
    std::cout << "iter count: " << lm.iter << std::endl;
    std::cout << "return status: " << ret << std::endl;
    std::cout << "zSolver: " << z.transpose() << std::endl;
    std::cout << "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~" << std::endl;
}

/***********************************************************************************************/

void testHimmelblauFun() {
    std::cout << "Testing the Himmelblau function..." << std::endl;
    // Eigen::VectorXd zInit(2); zInit << 0.0, 0.0;  // soln 1
    // Eigen::VectorXd zInit(2); zInit << -1, 1;  // soln 2
    // Eigen::VectorXd zInit(2); zInit << -1, -1;  // soln 3
    Eigen::VectorXd zInit(2); zInit << 1, -1;  // soln 4
    std::cout << "zInit: " << zInit.transpose() << std::endl;
    std::cout << "soln 1: [3.0, 2.0]" << std::endl;
    std::cout << "soln 2: [-2.805118, 3.131312]" << std::endl;
    std::cout << "soln 3: [-3.77931, -3.28316]" << std::endl;
    std::cout << "soln 4: [3.584428, -1.848126]" << std::endl;

    HimmelblauFunctor functor;
    Eigen::NumericalDiff<HimmelblauFunctor> numDiff(functor);
    Eigen::LevenbergMarquardt<Eigen::NumericalDiff<HimmelblauFunctor>,double> lm(numDiff);
    lm.parameters.maxfev = 1000;
    lm.parameters.xtol = 1.0e-10;
    std::cout << "max fun eval: " << lm.parameters.maxfev << std::endl;
    std::cout << "x tol: " << lm.parameters.xtol << std::endl;

    Eigen::VectorXd z = zInit;
    int ret = lm.minimize(z);
    std::cout << "iter count: " << lm.iter << std::endl;
    std::cout << "return status: " << ret << std::endl;
    std::cout << "zSolver: " << z.transpose() << std::endl;
    std::cout << "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~" << std::endl;
}

/***********************************************************************************************/

int main(int argc, char *argv[])
{
    std::cout << "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~" << std::endl;
    testBoothFun();
    testHimmelblauFun();
    return 0;
}
The output at the command line from running this test script is:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Testing the Booth function...
zInit: 1.87 2.032
zSoln: 1 3
max fun eval: 1000
x tol: 1e-10
iter count: 4
return status: 2
zSolver: 1 3
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Testing the Himmelblau function...
zInit: 1 -1
soln 1: [3.0, 2.0]
soln 2: [-2.805118, 3.131312]
soln 3: [-3.77931, -3.28316]
soln 4: [3.584428, -1.848126]
max fun eval: 1000
x tol: 1e-10
iter count: 8
return status: 2
zSolver: 3.58443 -1.84813
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
As an alternative you may simply create a new functor like this,
struct my_functor_w_df : Eigen::NumericalDiff<my_functor> {};
and then initialize the LevenbergMarquardt instance like this,
my_functor_w_df functor;
Eigen::LevenbergMarquardt<my_functor_w_df> lm(functor);
Personally, I find this approach a bit cleaner.
It seems that the function is more general:
Let's say you have an m-parameter model.
You have n observations to which you want to fit the m-parameter model in a least-squares sense.
The Jacobian, if provided, will be n times m.
You will need to supply n error values in the fvec.
Also, there is no need to square the f-values because it is implicitly assumed that the overall error function is made up of the sum of squares of the fvec components.
So, if you follow these guidelines and change the code to:
fvec(0) = sqrt(10.0)*(x(0)+3.0);
fvec(1) = x(1)-5.0;
It will converge in a ridiculously small number of iterations - like less than 5. I also tried it on a more complex example - the Hahn1 benchmark at http://www.itl.nist.gov/div898/strd/nls/data/hahn1.shtml with m=7 parameters and n=236 observations and it converges to the known right solution in only 11 iterations with the numerically computed Jacobian.