How to integrate std::valarray<double> with gsl? - c++

I am relatively new to C++, but I have some (scarce) coding and numerical experience.
I know this kind of question gets posted every now and then: how do you integrate an array? In MATLAB you can wrap your array in a function (I forget exactly how, but I know I have done it before) and send it to the built-in integrators, so my question is how to do the same thing in C++.
I have this integral:
I = ∫ A(z) * sin(q*z) dz
q is just a double constant, z is the integration variable, and A(z) is an array (I'll call it actualfunction from now on) that has the same number of points as the z axis in my code. The integration boundaries are z[0] and z[nz-1].
I calculated this integral with the trapezium rule, and for a z axis of 5000 points it takes 0.06 s. My problem is that this calculation occurs roughly 300 * 30 * 20 times (I have 3 nested for loops), so the 0.06 s quickly grows into about 3 hours of simulation. This integration is the entire bottleneck of my code (I could obviously speed things up by reducing the number of z points, but that's not the point).
I know that library functions are usually much better than user-written ones. I also know that I can't use something as simple as Simpson's rule, because the integrand is highly oscillatory, and I want to avoid writing my own implementation of some complicated numerical algorithm.
GSL needs a function of the form:
double f(double x, void *params)
and I could probably use QAWO adaptive integration from GSL, but how do I write a function in that form that turns my array into a function?
I am thinking of something like:
double F(double z, void *params)
{
    std::valarray<double> actualfunction = *(std::valarray<double> *) params;
    double dz = *(double *) params;     // Pretty sure this is wrong
    unsigned int actual_index = z / dz; // crazy assumption (my z[0] was 0)
    return actualfunction[actual_index];
}
Is something like this even possible? I doubt that the numerical algorithm will sample at the same spatial steps that actualfunction was computed on, so should I somehow interpolate actualfunction first?
Is there something better than GSL?

#include <memory>   // std::addressof
#include <utility>  // std::forward, std::move

template<class F>
struct func_ptr_helper {
  F f;
  void* pvoid() { return std::addressof(f); }

  template<class R, class...Args>
  using before_ptr = R(*)(void*, Args...);
  template<class R, class...Args>
  using after_ptr = R(*)(Args..., void*);

  template<class R, class...Args>
  static before_ptr<R, Args...> before_func() {
    return [](void* p, Args... args)->R {
      return (*static_cast<F*>(p))(std::forward<Args>(args)...);
    };
  }
  template<class R, class...Args>
  static after_ptr<R, Args...> after_func() {
    return [](Args... args, void* p)->R {
      return (*static_cast<F*>(p))(std::forward<Args>(args)...);
    };
  }
};

template<class F>
func_ptr_helper<F> lambda_to_pfunc( F f ){ return {std::move(f)}; }
use:
auto f = lambda_to_pfunc([&actualfunction, &dz](double z) {
  unsigned int actual_index = z / dz; // crazy assumption (my z[0] was 0)
  return actualfunction[actual_index];
});
then
void* pvoid = f.pvoid();
double (*pfun)(double, void*) = f.after_func<double, double>();
and you can pass pfun and pvoid through.
Apologies for any typos.
The idea is we write a lambda that does what we want. Then lambda_to_pfunc wraps it up so we can pass it as a void* and function pointer to C style APIs.
You'll have to properly manage the lifetime of everything of course.
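To connect this back to GSL itself: rather than indexing the raw array, you can interpolate the sampled A(z) with gsl_spline and hand the result to the QAWO routine, which supplies the oscillatory sin(q*z) weight internally. Below is a minimal, untested sketch of that wiring; z_grid, a_grid and nz stand in for your arrays, and the tolerances, workspace size and QAWO table size are arbitrary placeholders.
#include <cstddef>                 // std::size_t
#include <gsl/gsl_errno.h>
#include <gsl/gsl_integration.h>
#include <gsl/gsl_spline.h>

struct InterpParams {
    gsl_spline*       spline;
    gsl_interp_accel* acc;
};

// Integrand without the oscillatory factor; QAWO multiplies by sin(q*z) itself.
static double interpolated_A(double z, void* params)
{
    InterpParams* p = static_cast<InterpParams*>(params);
    return gsl_spline_eval(p->spline, z, p->acc);
}

double integrate_oscillatory(const double* z_grid, const double* a_grid,
                             std::size_t nz, double q)
{
    InterpParams p;
    p.acc    = gsl_interp_accel_alloc();
    p.spline = gsl_spline_alloc(gsl_interp_cspline, nz);
    gsl_spline_init(p.spline, z_grid, a_grid, nz);

    gsl_function F;
    F.function = &interpolated_A;
    F.params   = &p;

    const double a = z_grid[0], b = z_grid[nz - 1];
    gsl_integration_workspace*  ws    = gsl_integration_workspace_alloc(1000);
    gsl_integration_qawo_table* table =
        gsl_integration_qawo_table_alloc(q, b - a, GSL_INTEG_SINE, 32);

    double result = 0.0, abserr = 0.0;
    gsl_integration_qawo(&F, a, 1e-8, 1e-8, 1000, ws, table, &result, &abserr);

    gsl_integration_qawo_table_free(table);
    gsl_integration_workspace_free(ws);
    gsl_spline_free(p.spline);
    gsl_interp_accel_free(p.acc);
    return result;
}
Building the spline once per array and reusing it keeps the per-integral cost low, and letting QAWO own the sin(q*z) factor is generally more robust for highly oscillatory integrands than sampling the full product on a fixed grid.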

Related

Passing math expression to function

I need to write a function in C++ to calculate integrals. I am using Simpson's rule to calculate the value of a given integral. I know how to do the calculation; I don't have any problem with the math. What I need to know is how I can pass a whole expression to the function, to make my program flexible.
I have 4 f(x) functions for which I should make calculations. For example:
f(x)=2e^x
f(x)=x^3e
etc.
I see two options.
1) I can write a separate function for each f(x):
double function1() {
    ...
    // calculations for 2e^x
    ...
    return result;
}
double function2() {
    ...
    // calculations for x^3e
    ...
    return result;
}
This way is easy and fast to write, but the code is not flexible at all. I would need to write a new function for every new f(x).
I would like to have one function to which I can pass the selected f(x).
2) The option I prefer is to make some kind of expression interpreter. I thought about putting the parts of the expression into a std::vector and then doing the calculation cell by cell.
I've already seen the idea of parsing a string into an expression, but I think in the end it would be almost the same as the vector idea. I could be wrong.
What is the best way to make my code flexible and easy to use for users (not programmers)?
Suppose you have a function that takes two expressions and returns the sum of their results. You can pass the expressions to the function using lambda expressions, which have been supported since C++11, as follows:
template<typename Func, typename Func2>
int calculate(const Func &lambda_expr1, int param1, const Func2 &lambda_expr2, int param2)
{
    return lambda_expr1(param1) + lambda_expr2(param2);
}

int main()
{
    // case 1
    auto f1 = [](int p) { return p*p; };   // expression 1
    auto f2 = [](int p) { return p*p*p; }; // expression 2
    int result = calculate(f1, 3, f2, 4);
    // result = 73

    // case 2
    result = calculate([](int p) { return p*p/2; }, 4, [](int p) { return p*p*p/3; }, 3);
    // result = 17
}
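To tie this back to the integration itself: the same idea lets you write a single Simpson's-rule routine and pass it any f(x). Here is a minimal sketch; the names simpson, a, b and n are introduced here for illustration, not taken from the question.
#include <cmath>
#include <iostream>

// Composite Simpson's rule for any callable f on [a, b] with n subintervals
// (n must be even). A sketch only, not tuned for accuracy or performance.
template<typename Func>
double simpson(const Func &f, double a, double b, int n)
{
    const double h = (b - a) / n;
    double sum = f(a) + f(b);
    for (int i = 1; i < n; ++i)
        sum += f(a + i * h) * (i % 2 ? 4.0 : 2.0);
    return sum * h / 3.0;
}

int main()
{
    // f(x) = 2e^x, one of the example functions from the question
    std::cout << simpson([](double x) { return 2.0 * std::exp(x); }, 0.0, 1.0, 100) << "\n";
}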

Pass #define content as parameter

I have a long algorithm that should process instructions described by more than one #define, in order to drastically reduce my source code. For example:
#define LongFunction(x, y, alg) return alg(x, y)
#define Alg1(x, y) ((x)+(y))
#define Alg2(x, y) ((x)^((x)-(y)))
And all I need to do is
LongFunction(x, y, Alg1);
LongFunction(x, y, Alg2);
I'd rather not pass a function as a parameter, because LongFunction is full of loops and I want the code to be as fast as possible. How can I accomplish this smartly?
There are many ways to parameterize code on a function.
Using macros might seem simple, but macros don't respect scopes, and there are problems with parameter substitution and side-effects, so they're Evil™.
In C++11 and later the most natural alternative is to use std::function and lambdas, like this:
#include <functional> // std::function
#include <math.h>     // pow
using std::function;

auto long_function(
    double const x,
    double const y,
    function<auto(double, double) -> double> alg
    )
    -> double
{
    // Whatever.
    return alg( x, y );  // Combined with earlier results.
}

auto alg1(double const x, double const y)
    -> double
{ return x + y; }

auto alg2(double const x, double const y)
    -> double
{ return pow( x, x - y ); }

#include <iostream>
using namespace std;
auto main() -> int
{
    cout << long_function( 3, 5, alg1 ) << endl;
}
Regarding “fast as possible”: with a modern compiler the macro code is not likely to be faster. But since this is important to you, do measure. Only measurements, for a release build and in the typical execution environment, can tell you what's fastest and whether the speed matters to the end user.
Historically, and formally, you could use the inline specifier to hint to the compiler that it should inline calls to a function in the generated machine code. Modern compilers are likely to ignore inline for this purpose (it has another, more guaranteed meaning with respect to the ODR), but it probably won't hurt to apply it. Again, it's important to measure, and note that results can vary between compilers.
One alternative to the above is to pass a simple function pointer. That might be faster than std::function, but it is less general. In the other direction, you can templatize on a type with a member function, which gives the compiler more information and more opportunity to inline, at the cost of not being able to, e.g., select an operation from an array at runtime. I believe that when you measure, if this is important enough, you'll find that templatization yields the fastest code, or at least code as fast as the above.
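As a minimal sketch of the function-pointer alternative just mentioned (my own illustration, reusing alg1 from the std::function example above):
// Same idea as above, but taking a plain function pointer instead of std::function.
auto long_function_ptr(
    double const x,
    double const y,
    double (*alg)(double, double)
    )
    -> double
{
    // Whatever.
    return alg( x, y );  // Combined with earlier results.
}

// Usage:  long_function_ptr( 3, 5, alg1 );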
Example of templatizing on a type that provides the operation:
#include <math.h> // pow

template< class Op >
auto long_function( double const x, double const y )
    -> double
{
    // Whatever.
    return Op()( x, y );  // Combined with earlier results.
}

struct Alg1
{
    auto operator()(double const x, double const y)
        -> double
    { return x + y; }
};

struct Alg2
{
    auto operator()(double const x, double const y)
        -> double
    { return pow( x, x - y ); }
};

#include <iostream>
using namespace std;
auto main() -> int
{
    cout << long_function<Alg1>( 3, 5 ) << endl;
}
By the way, note that ^ is not an exponentiation operator in C++ (it is in e.g. Visual Basic). In C and C++ it's a bitlevel XOR operator. In the code above I've assumed that you really meant exponentiation, and used the pow function from <math.h>.
If, instead, you really meant bit-level XOR, then the arguments would need to be integers (preferably unsigned integers), which in turn suggests that you want the argument types of long_function to depend on the argument types of the specified operation. That's a thornier issue, involving overloading or templating, or both. If that's what you really want then please do elaborate.

Optimal way to choose less or greater operator before loop

I have two arrays comprising x,y values for y=f(x). I would like to provide a function that finds the value of x that corresponds to either the min or max sampled value of y.
What is an efficient way to select proper comparison operator before looping over the values in the arrays?
For example, I would like to do something like the following:
double FindExtremum(const double* x, const double* y,
                    const unsigned int n, const bool isMin) {
    static std::less<double> lt;
    static std::greater<double> gt;
    std::binary_function<double,double,bool>& IsBeyond = isMin ? lt : gt; // does not work
    double xm(*x), ym(*y);
    for (unsigned int i=0; i<n; ++i, ++x, ++y) {
        if (IsBeyond(*y, ym)) {
            ym = *y;
            xm = *x;
        }
    }
    return xm;
}
Unfortunately, the base class std::binary_function does not define a virtual operator().
Will a compiler like g++ 4.8 be able to optimize the most straightforward implementation?
double FindExtremum(const double* x, const double* y,
                    const unsigned int n, const bool isMin) {
    double xm(*x), ym(*y);
    for (unsigned int i=0; i<n; ++i, ++x, ++y) {
        if (( isMin && (*y<ym)) ||
            (!isMin && (*y>ym))) {
            ym = *y;
            xm = *x;
        }
    }
    return xm;
}
Is there another way to arrange things to make it easy for the compiler to optimize? Is there a well known algorithm for doing this?
I would prefer to avoid using a templated function, if possible.
You would need to pass the comparison functor as a templated function parameter, e.g.
template <typename Compare>
double FindExtremum(const double* x, const double* y,
                    const unsigned int n, Compare compare) {
    double xm(*x), ym(*y);
    for (unsigned int i=0; i<n; ++i, ++x, ++y) {
        if (compare(*y, ym)) {
            ym = *y;
            xm = *x;
        }
    }
    return xm;
}
Then if you need runtime choice, write something like this:
if (isMin) {
    FindExtremum(x, y, n, std::less<double>());
} else {
    FindExtremum(x, y, n, std::greater<double>());
}
Avoiding a templated function is not really possible in this case. The best performing code will be one that embeds the comparison operation directly in the loop, avoiding a function call - you can either write a template or write two copies of this function. A templated function is clearly the better solution.
For ultimate efficiency, make the comparison operator or the comparison operator choice a template parameter, and don't forget to measure.
When striving for utmost micro-efficiency, doing virtual calls is not in the direction of the goal.
That said, this is most likely a case of premature optimization, which Donald Knuth described thusly:
“Premature optimization is the root of all evil”
(I omitted his reservations, it sounds more forceful that way. :-) )
Instead of engaging in micro-optimization frenzy, which gains you little if anything, and wastes your time, I recommend more productively trying to make the code as clear and provably correct as possible. For example, use std::vector instead of raw arrays and separately passed sizes. And, for example, don't call the boolean comparison operator compare, as recommended in another answer, since that's the conventional name for tri-valued compare (e.g. as in std::string::compare).
Some questions arise here. First, I think you're overcomplicating the situation. For example, it would be easier to have two functions, one that calculates the min and another that calculates the max, and then call either of them depending on the value of isMin.
Moreover, note that in each iteration you test whether isMin is true or not (at least in the "optimized" code you show last), and that comparison could have been done just once.
Now, if isMin can be deduced in any way at compile time, you can use a template class that selects the correct implementation for each case, without any run-time overhead (not tested, written from memory):
template<bool isMin>
class ExtremeFinder
{
public:
    static double FindExtreme(const double* x, const double* y,
                              const unsigned int n)
    {
        // Version that calculates when isMin is false
    }
};

template<>
class ExtremeFinder<true>
{
public:
    static double FindExtreme(const double* x, const double* y,
                              const unsigned int n)
    {
        // Version that calculates when isMin is true
    }
};
and call it as ExtremeFinder<test_to_know_isMin>::FindExtreme(...);, or, if you cannot decide it at compile time, you can always do:
if (isMin_should_be_true)
    ExtremeFinder<true>::FindExtreme(...);
else
    ExtremeFinder<false>::FindExtreme(...);
If you have two complementary criteria, e.g. < and >=, you can pass a bool less argument and use XOR in the loop:
if (less ^ (a >= b))
I don't know about the performance, but it is easy to write.
Or, with the non-exhaustive pair < and >:
if ((a != b) && (less ^ (a > b)))
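Applied to the FindExtremum from the question, the trick could look like this minimal sketch (my own illustration of the idea above, untested):
double FindExtremum(const double* x, const double* y,
                    const unsigned int n, const bool isMin) {
    double xm(*x), ym(*y);
    for (unsigned int i = 0; i < n; ++i, ++x, ++y) {
        // isMin true  -> accept when *y < ym
        // isMin false -> accept when *y > ym
        if ((*y != ym) && (isMin ^ (*y > ym))) {
            ym = *y;
            xm = *x;
        }
    }
    return xm;
}
Whether this beats the two-branch version is exactly the kind of thing that needs measuring.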

C++: newton raphson and overloading functions

I wrote a simple implementation of the Newton-Raphson root-finding algorithm, which takes an initial guess init, a unary function f and the tolerance tol as arguments, as shown below:
#include <cmath> // fabs

double fp_x(double (*f)(double), double x); // forward declaration

bool newton_raphson(double& init,
                    double (*f)(double),
                    double tol){
    const int max_iter = 10000;
    double next_x, soln = init;
    int i = 0;
    while(++i < max_iter){
        next_x = soln - f(soln)/fp_x(f, soln);
        if(fabs(next_x - soln) < tol){
            init = next_x;
            return true;
        }
        soln = next_x;
    }
    return false;
}

// Numerical derivative by central differences.
double fp_x(double (*f)(double),
            double x){
    const double h = 0.000001;
    return (f(x + h) - f(x - h))/2.0/h;
}
My question is: although this works perfectly fine for unary functions, I would like to change the implementation so that it works for functions f that have more than one parameter, but all except one parameter have constant values. To clarify: if I have a function f(x) = 3x + 2 as shown below
double f(double x){
return (3*x + 2);
}
Then my implementation works. However, I would also like it to work for functions with any number of arguments, where only the first argument is variable. So, if I have a function f(x,y) = 3x + 2y
double f(double x, double y){
return (3*x + 2*y);
}
I would like to find the root of f(x,2) or f(x,3) using the same routine, and so on for n arguments, not just one or two (please ignore the fact that the functions in my example are simple linear functions; it is just an example). Is there any way to implement this for a varying number of arguments, or do I have to write an implementation for every case?
Thanks,
NAX
NOTE
As you can tell by now, this question isn't really about Newton-Raphson; it is just easier to use it as an example for the actual question, which is how to have a single implementation for functions with different numbers of arguments.
UPDATE
A few answers below use std::bind and std::function to solve the problem, and they actually address my question better than the selected answer. However, they are C++11 library facilities (which, don't get me wrong, I strongly urge every C++ programmer to go ahead and learn), and at the time of this writing I was facing some problems using them: Eclipse Juno with g++ 4.7 (which is C++11 compliant) still somehow failed to recognize std::function, so I decided to stick with the checked answer below, which also works nicely.
I think you're asking for variadic functions:
A variadic function – a function declared with a parameter list ending
with ellipsis (...) – can accept a varying number of arguments of
differing types. Variadic functions are flexible, but they are also
hazardous. The compiler can't verify that a given call to a variadic
function passes an appropriate number of arguments or that those
arguments have appropriate types. Consequently, a runtime call to a
variadic function that passes inappropriate arguments yields undefined
behavior. Such undefined behavior could be exploited to run arbitrary
code.
From here:
https://www.securecoding.cert.org/confluence/display/cplusplus/DCL31-CPP.+Do+not+define+variadic+functions
However, as quoted above, there are a number of problems with them.
Most notably, the compiler cannot check the number or types of the arguments at compile time!
However, if you are interested in implementing one, here's an article with a nice example:
http://www.informit.com/guides/content.aspx?g=cplusplus&seqNum=138
UPDATE:
IMO you're better off defining functions that take structure or object arguments (i.e. a general function object), and writing functions that work on those arguments explicitly.
The other option is to do some compile-time reflection, which would be useful but is too much trouble for an example like this. Besides, "reflection" in C++ isn't "true" reflection, but rather a bad and incomplete approximation of it.
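A minimal sketch of that function-object suggestion (the struct name and its member are mine, purely illustrative): the extra parameter y is stored in the object, so the call operator is unary and can be handed to a Newton-Raphson routine that accepts a function object.
// Hypothetical function object wrapping f(x, y) = 3x + 2y with y fixed.
struct FOfXWithFixedY {
    double y; // the "constant" second argument
    double operator()(double x) const {
        return 3 * x + 2 * y;
    }
};

// Usage sketch:
//   FOfXWithFixedY f{2.0};   // behaves like f(x, 2)
//   double fx = f(1.5);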
For what you're trying to do here, what you're looking for is std::bind (or, if you're dealing with a C++03 compiler, std::bind1st and std::bind2nd).
These will let you "bind" values to the other parameters, leaving you with a function (technically, a function object) that only requires a single parameter.
What you'd ideally like would be something like this:
double f(double x, double y) {
    return 3*x + 2*y;
}

double init = 1.0;
newton_raphson(init, std::bind2nd(f, 3), 1e-4);
Unfortunately, in real use, it's not quite that simple -- to work with std::bind2nd, you can't use an actual function; you need to use a function object instead, and it has to derive from std::binary_function.
std::bind is quite a bit more flexible, so that's what you almost certainly want to use instead (if at all possible).
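Since the newton_raphson in the question takes a raw double(*)(double), a bind expression or a capturing lambda won't convert to that pointer type. One common fix, sketched below under the assumption that you can edit newton_raphson, is to template it on the callable type so plain functions, lambdas and bind expressions all work:
#include <cmath>

// Same algorithm as in the question, with the derivative folded in;
// F can be any callable taking and returning a double.
template<typename F>
bool newton_raphson(double& init, F f, double tol) {
    const int max_iter = 10000;
    const double h = 1e-6;
    double soln = init;
    for (int i = 0; i < max_iter; ++i) {
        double deriv  = (f(soln + h) - f(soln - h)) / (2.0 * h);
        double next_x = soln - f(soln) / deriv;
        if (std::fabs(next_x - soln) < tol) {
            init = next_x;
            return true;
        }
        soln = next_x;
    }
    return false;
}

// Usage sketch: fix y = 2 with a lambda.
//   double x0 = 0.0;
//   newton_raphson(x0, [](double x) { return 3*x + 2*2; }, 1e-6);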
I used your question as a way to force myself to learn C++11 variadic templates; here is a working example.
template< typename... Ts >
double f( Ts... Vs ) {
    double array[] = { Vs... };
    int numArg = sizeof...( Vs );
    switch (numArg) {
    case 1:
        return 3 * array[0] + 2;
    case 2:
        return 3 * array[0] + 2 * array[1];
    case 3:
        return 3 * array[0] + 2 * array[1] + 1 * array[2];
    // ... more cases ...
    default:
        return 0.0;
    }
}
template< typename... Ts >
double newton_raphson( double &init, double tol,
                       double (*func)( Ts... Vs ), Ts... Vs ) {
    return func( Vs... );
}
You can call it like
newton_raphson( init, 1.0, f, 1.0, 2.0, 3.0, 4.0, 5.0 );
You can use std::bind and std::function. The type std::function<double(double)> represents a callable that takes a double and returns a double. Similarly, std::function<double(int,int)> is for a callable taking two ints and returning a double.
#include <cmath>      // fabs
#include <functional>

double fp_x(std::function<double(double)> f, double x); // defined below

bool newton_raphson(double& init,
                    const std::function<double(double)> &f,
                    double tol){
    const int max_iter = 10000;
    double next_x, soln = init;
    int i = 0;
    while(++i < max_iter){
        next_x = soln - f(soln)/fp_x(f, soln);
        if(fabs(next_x - soln) < tol){
            init = next_x;
            return true;
        }
        soln = next_x;
    }
    return false;
}

double myfunction(double x, double y){
    return (3*x + 2*y);
}

double fp_x(std::function<double(double)> f, double x) {
    ...
}

...
double d = 1.0;
// Here we set y = 2.5 and tell bind that the 1st parameter is left unbound.
// If we instead wanted to set x = 2.5 and leave y unbound, we would use
// std::bind(&myfunction, 2.5, std::placeholders::_1).
newton_raphson(d, std::bind(&myfunction, std::placeholders::_1, 2.5), 1e-6);
...

Possible to get type of Eigen::MatrixBase<T> &

Having an Eigen::MatrixBase<T>& data, is there any way to find out whether it is a float or a double matrix?
I need to create a new complex matrix of the same size and type as the MatrixBase.
If it is a MatrixXf then I need to create a MatrixXcf, and if it is a MatrixXd I need a MatrixXcd.
template <typename A>
int dowork(const Eigen::MatrixBase<A>& data)
That's pretty simple, just use the A::RealScalar typedef to build your complex type:
template <typename A>
int dowork(const Eigen::MatrixBase<A>& data) {
    typedef Eigen::Matrix<std::complex<typename A::RealScalar>,
                          Eigen::Dynamic, Eigen::Dynamic> MatCplx;
    ...
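As a small sketch of how that typedef might be used (my own addition; MatCplx as defined above, and .cast<>() is Eigen's standard scalar-type conversion):
#include <complex>
#include <Eigen/Dense>

template <typename A>
int dowork(const Eigen::MatrixBase<A>& data) {
    typedef std::complex<typename A::RealScalar> Cplx;
    typedef Eigen::Matrix<Cplx, Eigen::Dynamic, Eigen::Dynamic> MatCplx;

    // Complex matrix with the same dimensions as data, initialised to zero.
    MatCplx result = MatCplx::Zero(data.rows(), data.cols());

    // Or copy data into the real part by converting the scalar type.
    MatCplx converted = data.template cast<Cplx>();

    return 0;
}

// With a MatrixXf argument this gives complex<float> matrices,
// with a MatrixXd argument complex<double> matrices.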
I'm not 100 percent sure I understand the question being asked, but I think you are asking to have another matrix of the same type allocated after some condition is met? Do you want this new matrix to disallow data types that don't match?
If not: because you are using template classes you have a lot of freedom and can just use generic template data types. Also look into representing the matrix in a vector format for ease of use, maybe even a sparse format, like so:
// Local variables: a is the row index, b is the column index.
int a = 0;
int b = 0;

// Iterate through the matrix column by column and store the
// non-zero entries (row, column, value) in the queue SparseFormat.
for (b = 0; b < sizeM; ++b) {
    for (a = 0; a < sizeN; ++a) {
        if (Matrix[a][b] != 0) {
            SparseFormat.push(a);
            SparseFormat.push(b);
            SparseFormat.push(Matrix[a][b]);
        }
    }
}
Sorry if I totally didn't understand your question. It has been a long night :P