I want to solve a system of coupled differential equations using boost::numeric::odeint::runge_kutta4. It is a 3D lattice system, so it would be natural (and convenient) for me to work with 3D arrays. Is there a way for runge_kutta4 to work with user-defined data structures or boost::multi_array?
In principle this is possible. odeint provides a mechanism for using custom data structures - algebras and operations. Have a look here. Either you use one of the existing algebras and adapt your data structure to work with it, or you implement your own algebra and instantiate the Runge-Kutta stepper with it.
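If writing a custom algebra feels like too much machinery, one pragmatic alternative is a minimal sketch like the one below, which assumes the lattice can simply be flattened into one contiguous buffer (lattice_rhs and idx are made-up placeholders): store the lattice in a std::vector, index it explicitly, and runge_kutta4 works with the default algebra.

```cpp
#include <boost/numeric/odeint.hpp>
#include <vector>

namespace odeint = boost::numeric::odeint;

// Flatten the NX x NY x NZ lattice into one contiguous vector so that
// odeint's default algebra works without any customisation.
using state_type = std::vector<double>;

const std::size_t NX = 8, NY = 8, NZ = 8;

// Map lattice site (i, j, k) to its flat index; a real coupled RHS would
// use this to address neighbouring sites.
inline std::size_t idx(std::size_t i, std::size_t j, std::size_t k)
{
    return (i * NY + j) * NZ + k;
}

// Toy right-hand side: independent exponential decay at every lattice site.
void lattice_rhs(const state_type &x, state_type &dxdt, double /*t*/)
{
    for (std::size_t n = 0; n < x.size(); ++n)
        dxdt[n] = -0.1 * x[n];
}

int main()
{
    state_type x(NX * NY * NZ, 1.0);

    odeint::runge_kutta4<state_type> stepper;
    odeint::integrate_const(stepper, lattice_rhs, x, 0.0, 10.0, 0.01);

    return 0;
}
```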
You might also want to have a look at a library like Eigen, MTL4, Boost.uBLAS, or Armadillo. They might have data types for higher-order tensors. For example, Eigen works very well with odeint.
I want to do all kinds of matrix operations, like finding a sub-matrix and its index, and multiplication, using vectors, but I can't find the proper syntax for that on the internet. Please let me know the correct syntax for transitioning from primitive data types to vectors.
I suggest that you have a look at linear algebra libraries such as Eigen, Armadillo, or LAPACK.
For Armadillo you can find an example here on how to transfer data from your structure to theirs.
Transforming or copying will take some time. Nevertheless, you will have a hard time creating a linear algebra library on your own that outperforms one of these libraries. Moreover, if you design your data structure accordingly, you don't have to pay for this overhead.
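To make the "no overhead if the data structure is designed accordingly" point concrete, here is a minimal sketch (assuming Eigen and contiguous column-major storage) that wraps existing memory with Eigen::Map instead of copying, then takes a sub-matrix and a product:

```cpp
#include <Eigen/Dense>
#include <iostream>
#include <vector>

int main()
{
    // Existing data in a plain std::vector (column-major order assumed).
    std::vector<double> raw = {1, 4, 2, 5, 3, 6};

    // View it as a 2x3 matrix without copying.
    Eigen::Map<Eigen::MatrixXd> A(raw.data(), 2, 3);

    Eigen::MatrixXd B = Eigen::MatrixXd::Random(3, 2);

    Eigen::MatrixXd C   = A * B;               // matrix product
    Eigen::MatrixXd sub = A.block(0, 1, 2, 2); // 2x2 sub-matrix starting at row 0, column 1

    std::cout << "C =\n" << C << "\n\nsub =\n" << sub << "\n";
    return 0;
}
```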
I am used to Eigen for almost all of my linear algebra work.
Recently, I discovered that Boost also provides a C++ template library for basic linear algebra (Boost.uBLAS). This got me wondering whether I could base all my work on Boost alone, since it is already a major dependency of my code.
A closer look at both didn't really give me a clearer distinction between them:
Boost::uBLAS :
uBLAS provides templated C++ classes for dense, unit and sparse vectors, dense, identity, triangular, banded, symmetric, hermitian and sparse matrices. Views into vectors and matrices can be constructed via ranges, slices, adaptor classes and indirect arrays. The library covers the usual basic linear algebra operations on vectors and matrices: reductions like different norms, addition and subtraction of vectors and matrices and multiplication with a scalar, inner and outer products of vectors, matrix vector and matrix matrix products and triangular solver.
...
Eigen :
It supports all matrix sizes, from small fixed-size matrices to arbitrarily large dense matrices, and even sparse matrices.
It supports all standard numeric types, including std::complex, integers, and is easily extensible to custom numeric types.
It supports various matrix decompositions and geometry features.
Its ecosystem of unsupported modules provides many specialized features such as non-linear optimization, matrix functions, a polynomial solver, FFT, and much more.
...
Does anyone have a better idea of their key differences, and on what basis we can choose between them?
I'm rewriting a substantial project from boost::uBLAS to Eigen. This is production code in a commercial environment. I was the one who chose uBLAS back in 2006 and now recommended the change to Eigen.
uBLAS results in very little actual vectorization performed by the compiler. I can look at the assembly output of big source files, compiled for the amd64 architecture with SSE, using the float type, and not find a single ***ps instruction (addps, mulps, subps: 4-way packed single-precision floating-point instructions), only ***ss instructions (addss, ...: scalar single-precision).
With Eigen, the library is written to make sure that vector instructions result.
Eigen is very feature-complete. It has lots of matrix factorizations and solvers. In boost::uBLAS, the LU factorization is an undocumented add-on, a piece of contributed code. Eigen has additions for 3D geometry, such as rotations and quaternions, which uBLAS does not.
uBLAS is slightly more complete on the most basic operations. Eigen lacks some things, such as projection (indexing a matrix using another matrix), while uBLAS has it. For features that both have, Eigen is more terse, resulting in expressions that are easier to read.
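To illustrate the terseness point, here is a small side-by-side sketch (just basic products and sums, not a benchmark; the fill values are placeholders):

```cpp
#include <Eigen/Dense>
#include <boost/numeric/ublas/matrix.hpp>

int main()
{
    // Eigen: expressions read like the underlying math.
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(3, 3);
    Eigen::MatrixXd B = Eigen::MatrixXd::Random(3, 3);
    Eigen::MatrixXd C = A * B + 2.0 * A;

    // uBLAS: products go through prod(), which gets verbose once expressions nest.
    namespace ub = boost::numeric::ublas;
    ub::matrix<double> U = ub::identity_matrix<double>(3);
    ub::matrix<double> V = ub::scalar_matrix<double>(3, 3, 2.0);
    ub::matrix<double> W = ub::prod(U, V) + 2.0 * U;

    (void)C;
    (void)W;
    return 0;
}
```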
Then, uBLAS is completely stale. I can't understand how anyone considers it in 2016/2017. Read the FAQ:
Q: Should I use uBLAS for new projects?
A: At the time of writing (09/2012) there are a lot of good matrix libraries available, e.g., MTL4, armadillo, eigen. uBLAS offers a stable, well tested set of vector and matrix classes, the typical operations for linear algebra and solvers for triangular systems of equations. uBLAS offers dense, structured and sparse matrices - all using similar interfaces. And finally uBLAS offers good (but not outstanding) performance. On the other side, the last major improvement of uBLAS was in 2008 and no significant change was committed since 2009. So one should ask himself some questions to aid the decision:
Availability? uBLAS is part of boost and thus available in many environments.
Easy to use? uBLAS is easy to use for simple things, but needs decent C++ knowledge when you leave the path.
Performance? There are faster alternatives.
Cutting edge? uBLAS is more than 10 years old and missed all new stuff from C++11.
I just did a timing comparison between Boost and Eigen for fairly trivial matrix computations. These results, limited as they are, seem to suggest that Boost is a much better alternative.
I had an FEM code which does the pre-processing parts (setting up the element matrices and stitching them together). So naturally, this would involve a lot of memory allocations.
I wrote identical pieces of code with Boost and Eigen in C++ (gcc 5.4.0, Ubuntu 16.04, Intel i3 quad core, 2.40 GHz, 4 GB RAM), ran them separately for varying node sizes (N), and measured time with a Linux command-line utility.
As far as I'm concerned, I have decided to proceed with my code in Boost.
Choose Eigen if you care about performance and the performance gain introduced by expression templates, and choose uBLAS if you only want to learn expression templates.
http://eigen.tuxfamily.org/index.php?title=Benchmark
I am interested in solving a sparse complex linear system Ax=b, where A is a square matrix of complex numbers and b is a vector of complex numbers.
If possible, I would like such a library to be templated (for ease of installation and use),
something in the spirit of Eigen.
I checked out Eigen, but it does not look like it supports solving linear equations with complex sparse matrices (although one can create complex matrices and do elementary operations on them).
Another trick someone suggested to me is to work around this by solving an extended real-valued system of twice the dimension, using the fact that (A1 + iA2)(x1 + ix2) = (b1 + ib2),
but I would prefer some simple black box that gets the job done.
Any suggestions?
Transferring it to a real-valued system of twice the dimension might be the most immediate way: expanding (A1 + iA2)(x1 + ix2) = (b1 + ib2) gives the block system [A1 -A2; A2 A1] [x1; x2] = [b1; b2]. You could write an adapter that encapsulates the transformation logic. You might also try this one: http://trilinos.sandia.gov/packages/docs/r4.0/packages/komplex/doc/html/
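A minimal sketch of such an adapter, assuming Eigen's sparse module with its SparseLU solver (any sparse direct or iterative solver would work the same way); solve_complex_as_real is a made-up helper name:

```cpp
#include <Eigen/Sparse>
#include <Eigen/SparseLU>
#include <vector>

using SpMat = Eigen::SparseMatrix<double>;

// Solve (A1 + i*A2)(x1 + i*x2) = (b1 + i*b2) via the equivalent real block system
//   [ A1 -A2 ] [x1]   [b1]
//   [ A2  A1 ] [x2] = [b2]
// The first n entries of the result are Re(x), the last n entries Im(x).
Eigen::VectorXd solve_complex_as_real(const SpMat& A1, const SpMat& A2,
                                      const Eigen::VectorXd& b1,
                                      const Eigen::VectorXd& b2)
{
    const int n = A1.rows();
    std::vector<Eigen::Triplet<double>> trips;

    for (int k = 0; k < A1.outerSize(); ++k)
        for (SpMat::InnerIterator it(A1, k); it; ++it) {
            trips.emplace_back(it.row(),     it.col(),     it.value()); // top-left A1
            trips.emplace_back(it.row() + n, it.col() + n, it.value()); // bottom-right A1
        }
    for (int k = 0; k < A2.outerSize(); ++k)
        for (SpMat::InnerIterator it(A2, k); it; ++it) {
            trips.emplace_back(it.row(),     it.col() + n, -it.value()); // top-right -A2
            trips.emplace_back(it.row() + n, it.col(),      it.value()); // bottom-left A2
        }

    SpMat B(2 * n, 2 * n);
    B.setFromTriplets(trips.begin(), trips.end());

    Eigen::VectorXd rhs(2 * n);
    rhs << b1, b2;

    Eigen::SparseLU<SpMat> solver;
    solver.compute(B);
    return solver.solve(rhs);
}
```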
I'd like to ask about mathematical operations on arrays. I am mainly interested in carrying out operations such as:
vector operations:
C=A+B
C=A*B
where A and B are arrays (or vectors), and
matrix products:
D=E*F;
where D[m][n], E[m][p], F[p][n];
Could anyone tell me what is the most efficient way to manipulate large quantities of numbers? Is it only possible by looping through the elements of an array or is there another way? Can vectors be used and how?
The C++ spec does not define mathematical constructs like the ones you describe, but the language certainly provides all the features necessary to implement them. There are a lot of libraries out there, so it's just up to you to choose one that fits your requirements.
Searching through stack overflow questions might give you an idea of where to start identifying those requirements if you don't know them already.
What's a good C++ library for matrix operations
Looking for an elegant and efficient C++ matrix library
High Performance Math Library for Vector And Matrix Calculations
Matrix Data Type in C++
Best C++ Matrix Library for sparse unitary matrices
Check out Armadillo; it provides lots of matrix functionality in a C++ interface, and it supports LAPACK, which is what MATLAB uses for linear algebra calculations.
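For the operations in the question, an Armadillo version could look roughly like this (a minimal sketch; the sizes and random fills are placeholders):

```cpp
#include <armadillo>

int main()
{
    using namespace arma;

    // Vectors: element-wise addition/multiplication and the inner product.
    vec A = randu<vec>(5);
    vec B = randu<vec>(5);

    vec    C1 = A + B;      // element-wise addition
    vec    C2 = A % B;      // element-wise multiplication
    double d  = dot(A, B);  // inner product

    // Matrices: D[m][n] = E[m][p] * F[p][n]
    mat E = randu<mat>(4, 3);
    mat F = randu<mat>(3, 6);
    mat D = E * F;          // matrix product, D is 4x6

    D.print("D =");
    C1.print("C1 =");
    C2.print("C2 =");
    (void)d;
    return 0;
}
```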
C++ does not come with any "number aggregate" handling functionality out of the box, with the possible exception of std::valarray. (Compiler vendors could make valarray use vectorized operations, but generally speaking they don't.)
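For completeness, here is what std::valarray gives you out of the box (element-wise operations only; there is no built-in matrix product):

```cpp
#include <valarray>
#include <iostream>

int main()
{
    std::valarray<double> A = {1.0, 2.0, 3.0, 4.0};
    std::valarray<double> B = {4.0, 3.0, 2.0, 1.0};

    std::valarray<double> C = A + B;  // element-wise addition
    std::valarray<double> P = A * B;  // element-wise multiplication (not a dot product)
    double dot = (A * B).sum();       // inner product via the sum of products

    std::cout << "dot = " << dot << "\n";  // prints dot = 20
    (void)C;
    (void)P;
    return 0;
}
```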
I am currently working on a C++-based library for large, sparse linear algebra problems (yes, I know many such libraries exist, but I'm rolling my own mostly to learn about iterative solvers, sparse storage containers, etc.).
I am to the point where I am using my solvers within other programming projects of mine, and would like to test the solvers against problems that are not my own. Primarily, I am looking to test against symmetric sparse systems that are positive definite. I have found several sources for such system matrices such as:
Matrix Market
UF Sparse Matrix Collection
That being said, I have not yet found any sources of good test matrices that include the entire system (system matrix and RHS). This would be great to have in order to check results. Any tips on where I can find such full systems, or alternatively, what I might do to generate a "good" RHS for the system matrices I can get online? I am currently just filling a matrix with random values, or all ones, but suspect that this is not necessarily the best way.
I would suggest using a right-hand-side vector obtained from a predefined 'goal' solution x:
b = A*x
Then you have a goal solution, x, and the solution returned by the solver.
This means you can compare the error (the difference between the goal and the resulting solution) as well as the residual (A*x - b, evaluated with the solver's solution).
Note that for careful evaluation of an iterative solver you'll also need to consider what to use for the initial guess.
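As a concrete example of this "manufactured solution" approach, here is a sketch with Eigen's ConjugateGradient standing in for your own solver and a 1D Laplacian as the SPD test matrix (both are just illustrative choices):

```cpp
#include <Eigen/Sparse>
#include <Eigen/IterativeLinearSolvers>
#include <iostream>
#include <vector>

int main()
{
    const int n = 100;

    // A simple symmetric positive definite test matrix: the 1D Laplacian
    // (2 on the diagonal, -1 on the off-diagonals).
    Eigen::SparseMatrix<double> A(n, n);
    std::vector<Eigen::Triplet<double>> trips;
    for (int i = 0; i < n; ++i) {
        trips.emplace_back(i, i, 2.0);
        if (i + 1 < n) {
            trips.emplace_back(i, i + 1, -1.0);
            trips.emplace_back(i + 1, i, -1.0);
        }
    }
    A.setFromTriplets(trips.begin(), trips.end());

    // Pick a known goal solution and manufacture the right-hand side from it.
    Eigen::VectorXd x_goal = Eigen::VectorXd::Random(n);
    Eigen::VectorXd b = A * x_goal;

    // Solve and compare against the goal.
    Eigen::ConjugateGradient<Eigen::SparseMatrix<double>> cg;
    cg.compute(A);
    Eigen::VectorXd x = cg.solve(b);

    std::cout << "error    = " << (x - x_goal).norm() << "\n"
              << "residual = " << (A * x - b).norm()  << "\n";
    return 0;
}
```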
The online collections of matrices primarily contain the left-hand-side matrix, but some do include right-hand sides, and some have solution vectors too:
http://www.cise.ufl.edu/research/sparse/matrices/rhs.txt
By the way, for the UF sparse matrix collection I'd suggest this link instead:
http://www.cise.ufl.edu/research/sparse/matrices/
I haven't used it yet (I'm about to), but GiNaC seems like the best thing I've found for C++. It is the library used behind Maple for CAS; I don't know what its performance is like.
http://www.ginac.de/
It would help to specify which kind of problems you are solving, since different problems will require different RHS vectors to be of any use in checking validity.
What I'd suggest is to get some example code from projects like DUNE Numerics (which I'm working with right now), FEniCS, or deal.II, which already use solvers on such systems. Generally they have functionality to write a matrix out to a file of some kind (DUNE Numerics, for example, can output matrices and RHS vectors in MATLAB-compatible files).
You can then feed these to your solvers, and afterwards use the libraries' functionality again to create output data (DUNE Numerics, for instance, uses the VTK format). That way, you'll get to analyse the data with powerful tools.
You may have to learn a little about compiling and using those libraries, but it is not much, and I believe the functionality you'll get is worth the time invested.
I guess even a single well-defined and reasonably complex problem should be good enough for testing your libraries - well, actually two: one for Ax = b problems and another for Ax = cBx (generalized eigenvalue problems).