I'm looking for the best choice of C++ mathematics library to make it easier to reproduce some operations from LabView blocks.
I need to implement many complex mathematical operations in C++, such as linear regression, peak detection, differentiation of a graph, and other things along those lines.
I found there are a lot of libraries for it: http://en.wikipedia.org/wiki/List_of_numerical_libraries#C_and_C.2B.2B
Which library is the best choice for my tasks?
(Currently I'm thinking about Boost uBLAS, but I've never worked with it before, so maybe this choice is wrong.)
Note that there isn't much more to boost uBLAS than basic linear algebra; and even if you consider the larger boost "Math and Numerics" section, it can hardly be considered a complete scientific computing package.
GSL is very good in that it's quite comprehensive. However, it's very much a 'C' library, so you need to be prepared to work with raw pointers to array data and function-pointer callbacks rather than higher-level classes.
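For example, an ordinary least-squares fit with GSL looks roughly like this (just a sketch; the data arrays are made up):

    #include <stdio.h>
    #include <gsl/gsl_fit.h>

    int main(void)
    {
        /* raw C arrays, as GSL expects */
        const double x[5] = {1.0, 2.0, 3.0, 4.0, 5.0};
        const double y[5] = {2.1, 3.9, 6.2, 8.0, 9.8};
        double c0, c1, cov00, cov01, cov11, sumsq;

        /* fit y = c0 + c1*x; the strides are 1 because the data is contiguous */
        gsl_fit_linear(x, 1, y, 1, 5, &c0, &c1, &cov00, &cov01, &cov11, &sumsq);

        printf("intercept = %g, slope = %g\n", c0, c1);
        return 0;
    }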
(Personally, these days I find myself using Python/Numpy/Scipy as much as possible; the scope of Scipy is truly incredible and Numpy arrays are fantastically easy to work with; if there were a LabView/Python/Scipy integration that met your other requirements, it'd be the first thing I'd be looking at.)
I've been wanting to write my own multithreaded realtime raytracer in C++, but I don't want to implement all the vector and matrix logic that comes with it. I figured I'd do some research to find a good library for this, but I haven't had much success...
It's important that the implementation is fast, and preferably that it comes with some friendly licensing. I've read that boost has basic algebra, but I couldn't find anything on how good it was regarding its speed.
For the rest, Google gave me Armadillo, which claims to be very fast, and compares itself to certain other libraries that I haven't heard of.
Then I got Seldon, which also claims to be efficient and convenient, although I couldn't find out where exactly they are on the scale.
Lastly I read about Eigen, which I've also seen mentioned on StackOverflow while searching.
In the CG lecture at my university, they use HLSL for the algebra (making the students implement/optimise parts of the raytracer), which got me thinking whether or not I could use GLSL for this. Again, I have no idea what option is most efficient, or what the general consensus is on algebra libraries. I was hoping SO could help me out here, so I can get started with some real development :)
PS: I tried linking to sites, but I don't have enough rep yet
I'd recommend writing your own routines. When I wrote my raytracer, I found that most of the algebra used the same small collection of methods. Basically all you need is a vector class that supports addition, subtraction, etc. And from there all you really need is Dot and Cross.
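Something along these lines covered almost everything I needed (just a sketch, not tuned for speed):

    struct Vec3 {
        float x, y, z;

        Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
        Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
        Vec3 operator*(float s)       const { return {x * s, y * s, z * s}; }
    };

    // the two workhorses of any raytracer
    inline float dot(const Vec3& a, const Vec3& b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    inline Vec3 cross(const Vec3& a, const Vec3& b) {
        return {a.y * b.z - a.z * b.y,
                a.z * b.x - a.x * b.z,
                a.x * b.y - a.y * b.x};
    }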
And to be honest, using GLSL isn't going to give you much more than that anyway (it only supports dot, cross and simple vector math; everything else must be hand coded). I'd also recommend prototyping in C++ and then moving to CUDA afterwards. It's rather difficult to debug GPU code, so get it working on the CPU first, then recode it a bit to work in CUDA.
In reality raytracers are fairly simple. It's making them fast that is hard. It's the acceleration structures that are going to take most of your time and optimization. At least they did for me.
You should take a look at http://ompf.org/forum/
This forum covers realtime raytracing, mostly in C++. It will give you pointers and sample source.
Most of the time, as every cycle counts, people do not rely on external vector math libs: optimizations depend on the compiler you're using, inlining, whether or not you use SSE (or the like), etc.
I recommend "IlmBase" that is part of the OpenEXR package. It's well-written C++, developed by ILM, and widely used by people who professionally write and use graphics software.
For my projects I used glm; maybe it would also suit you.
Note that libraries such as boost::ublas or Seldon probably won't suit you, because they're BLAS-oriented (and I assume you're looking for a good 3D-oriented linear algebra library).
Also, the dxmath DirectX library is pretty good, although sometimes hard to use because of its C-compatible style.
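For reference, basic glm usage looks something like this (a small sketch):

    #include <glm/glm.hpp>
    #include <cstdio>

    int main() {
        glm::vec3 a(1.0f, 0.0f, 0.0f);
        glm::vec3 b(0.0f, 1.0f, 0.0f);

        float d     = glm::dot(a, b);         // dot product
        glm::vec3 c = glm::cross(a, b);       // cross product
        glm::vec3 n = glm::normalize(a + b);  // normalised sum

        std::printf("dot = %f, cross = (%f, %f, %f), n.x = %f\n", d, c.x, c.y, c.z, n.x);
    }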
You might look at the source code for POVRAY.
I just found out about the Boost Phoenix library (hidden in the Spirit project), and as a fan of the functional-programming style (but still an amateur, with some small experience in Haskell and Scheme) I wanted to play around with this library to learn about reasonable applications for it.
Besides the increase in expressiveness and clarity of code written in FP style, I'm especially interested in lazy evaluation for speeding up computations at low cost.
A small and simple example would be the following:
There is some kind of routing problem (like the TSP) which uses a Euclidean distance matrix. We assume that some of the values of the distance matrix are never used, and some are used very often (so it isn't a good idea to compute them on the fly for every call). Now it seems reasonable to have a lazy data structure holding the distance values. How would that be possible with Phoenix? (Ignoring the fact that it would easily be done without FP-style programming at all.) Reading the official documentation of Phoenix didn't let me understand enough to answer that.
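(For reference, this is roughly what I mean, written as plain C++ memoisation instead of Phoenix; the Point type and the sentinel value are just made up for illustration:)

    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Point { double x, y; };

    class LazyDistanceMatrix {
    public:
        explicit LazyDistanceMatrix(const std::vector<Point>& pts)
            : points(pts),
              cache(pts.size() * pts.size(), -1.0) {}  // -1 marks "not computed yet"

        // the Euclidean distance is computed only on first access, then reused
        double operator()(std::size_t i, std::size_t j) {
            double& slot = cache[i * points.size() + j];
            if (slot < 0.0) {
                const double dx = points[i].x - points[j].x;
                const double dy = points[i].y - points[j].y;
                slot = std::sqrt(dx * dx + dy * dy);
            }
            return slot;
        }

    private:
        std::vector<Point> points;
        std::vector<double> cache;
    };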
Is it possible at all? (In Haskell, for example, the ability to create thunks which guarantee that the value can be computed later is at the core of the language.)
What does using a vector with all the lazy functions defined in Phoenix mean? Naive as I am, I tried to fill two matrices (vectors of vectors) with random values, one with the normal push_back, the other with boost::phoenix::push_back, and tried to read out only a small number of values from these matrices and store them in a container for printing. The lazy one was always empty. Am I using Phoenix in the wrong way, or should this be possible? Or did I misunderstand the function of the containers/algorithms in Phoenix? A small clue for the latter is the existence of a special list data structure in the FC++ library, which influenced Phoenix.
Additionally:
What are you using phoenix for?
Do you know some good resources regarding Phoenix? (tutorials, blog entries...)
Thanks for all your input!
As requested, my comment (with additions and small modifications) as an answer...
I know your position exactly. I too played around with Phoenix (although I didn't dig in very deeply; it was mostly a byproduct of reading the Boost::Spirit tutorial) a while ago, relatively soon after catching the functional bug and learning basic Haskell, and I didn't get anything working :( This is, by the way, in sync with my general experience with dark template magic: very easy to misunderstand, screw up, and get punched in the face with totally unexpected behaviour or incomprehensible error messages.
I'd advise you to stay away from Phoenix for a long time. I like FP too, but FP in C++ is even uglier than mutability in Haskell (they'd be head to head, but C++ is already ugly and Haskell is, at least according to Larry Wall, the most beautiful language ever ;) ). Learn and use FP, and when you're good at it and forced to use C++, use Phoenix. But for learning, a library that bolts a wholly different paradigm onto an already complex language (i.e. FP in C++) is not advisable.
I frequently use the STL containers but have never used the STL algorithms that are to be used with the STL containers.
One benefit of using the STL algorithms is that they remove explicit loops, reducing the logical complexity of the code. There are other benefits that I won't list here.
I have never seen C++ code that uses the STL algorithms. From sample code within web page articles to open source projects, I haven't seen their use.
Are they used more frequently than it seems?
Short answer: Always.
Long answer: Always. That's what they are there for. They're optimized for use with STL containers, and they're faster, clearer, and more idiomatic than anything you can write yourself. The only situation you should consider rolling your own is if you can articulate a very specific, mission-critical need that the STL algorithms don't satisfy.
Edited to add: (Okay, so not really really always, but if you have to ask whether you should use STL, the answer is "yes".)
You've gotten a number of answers already, but I can't really agree with any of them. A few come fairly close to the mark, but fail to mention the crucial point (IMO, of course).
At least to me, the crucial point is quite simple: you should use the standard algorithms when they help clarify the code you're writing.
It's really that simple. In some cases, what you're doing would require an arcane invocation using std::bind1st and std::mem_fun_ref (or something on that order) that's extremely dense and opaque, where a for loop would be almost trivially simple and straightforward. In such a case, go ahead and use the for loop.
If there is no standard algorithm that does what you want, take some care and look again -- you'll often find you have missed something that really will do what you want (one place that's often missed: the algorithms in <numeric> are often useful for non-numeric purposes). Having looked a couple of times and confirmed that there's really no standard algorithm to do what you want, instead of writing that for loop (or whatever) inline, consider writing a generic algorithm to do what you need done. If you're using it in one place, there's a pretty good chance you can use it in two or three more, at which point it can be a big win in clarity.
Writing generic algorithms isn't all that hard -- in fact, it's often almost no extra work compared to writing a loop inline, so even if you can only use it twice you're already saving a little bit of work, even if you ignore the improvement in the code's readability and clarity.
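To give one concrete flavour of the <numeric> point above: std::accumulate works just as well for building a string as for summing numbers (a small sketch, using a C++11 lambda for brevity):

    #include <iostream>
    #include <numeric>
    #include <string>
    #include <vector>

    int main() {
        std::vector<std::string> words = {"the", "quick", "brown", "fox"};

        // a "non-numeric" use of <numeric>: join the words with spaces
        std::string joined = std::accumulate(
            words.begin() + 1, words.end(), words.front(),
            [](const std::string& acc, const std::string& w) { return acc + " " + w; });

        std::cout << joined << '\n';  // prints: the quick brown fox
    }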
STL algorithms should be used whenever they fit what you need to do. Which is almost all the time.
When should the STL algorithms be used instead of using your own?
When you value your time and sanity and have more fun things to do than reinventing the wheel again and again.
You need to write your own algorithms when the project demands it and there are no acceptable alternatives to writing the code yourself; when you have identified an STL algorithm as a bottleneck (using a profiler, of course); when you face some restriction the STL doesn't conform to; or when adapting an STL algorithm to the task would take longer than writing it from scratch (I had to use a twisted version of binary search a few times...). The STL is not perfect and isn't fit for everything, but when you can, you should use it. When someone has already done all the work for you, there is frequently no reason to do the same thing again.
I write performance critical applications. These are the kinds of things that need to process millions of pieces of information in as fast a time as possible. I wouldn't be able to do some of the things that I do now if it weren't for STL. Use them always.
There are many good algorithms besides simple ones like std::for_each.
There are lots of non-trivial and very useful algorithms, for example:
Sorting and binary search: std::sort, std::upper_bound, std::lower_bound, std::binary_search
Min/max and partitioning: std::max, std::min, std::partition, std::min_element, std::max_element
Searching: std::find, std::find_first_of, etc.
And many others.
Algorithms like std::transform are much more useful with C++0x lambda expressions, or with things like boost::lambda or boost::bind.
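For example (a small sketch using a C++0x lambda):

    #include <algorithm>
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> values = {1, 2, 3, 4, 5};
        std::vector<int> squares(values.size());

        // square every element; the lambda replaces a hand-written functor
        std::transform(values.begin(), values.end(), squares.begin(),
                       [](int x) { return x * x; });

        for (int s : squares)
            std::cout << s << ' ';  // 1 4 9 16 25
        std::cout << '\n';
    }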
If I had to write something due this afternoon, and I knew how to do it using hand-made loops but would need to figure out how to do it with STL algorithms, I would write it using hand-made loops.
Having said that, I would work to make the STL algorithms a reliable part of my toolkit, for reasons articulated in the other answers.
--
One reason you might not see it in code is that a lot of code is either legacy code or written by legacy programmers. We had about 20 years of C++ programming before the STL came out, and at that point we had a community of programmers who knew how to do things the old way and had not yet learned the STL way. This will likely remain for a generation.
Bear in mind that the STL algorithms cover a lot of bases, but most C++ developers will probably end up coding something that does something equivalent to std::find(), std::find_if() and std::max() almost every day of their working lives (if they're not using the STL versions already). By using the STL versions you separate the algorithm from both the logical flow of your code and from the data representation.
For other less commonly used STL algorithms such as std::merge() or std::lower_bound() these are hugely useful routines (the first for merging two sorted containers, the second for working out where to insert an item in a container to keep it ordered). If you were to try to implement them yourself then it would probably take a few attempts (the algorithms aren't complicated, but you'd probably get off-by-one errors or the like).
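For instance, the usual sorted-insert idiom with std::lower_bound is a one-liner (a sketch):

    #include <algorithm>
    #include <vector>

    // insert 'value' into the already-sorted vector 'v', keeping it sorted
    void sorted_insert(std::vector<int>& v, int value) {
        v.insert(std::lower_bound(v.begin(), v.end(), value), value);
    }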
I myself use them every day of my professional career. Some legacy codebases that predate a stable STL may not use it as extensively, but if there's a newer project that is intentionally avoiding it I would be inclined to think it was by a part-time hacker who was still labouring under the mid-90's assumption that templates are slow and therefore to be avoided.
The only time I don't use STL algorithms is when the cross-platform implementation differences affect the outcome of my program. This has only happened in one or two rare cases (on the PlayStation 3). Although the interface of the STL is standardized across platforms, the implementation is not.
Also, in certain extremely high performance applications (think: video games, video game servers) we replaced some STL structures with our own to eke out a bit more efficiency.
However, the vast majority of the time using STL is the way to go. And in my other (non-video game) jobs, I used the STL exclusively.
The main problem with STL algorithms until now was that, even though the algorithm call itself is clearer, defining the functors you'd need to pass to it would make your code longer and more complex, due to the way the language forced you to do it. C++0x is expected to change that, with its support for lambda expressions.
I've been using the STL heavily for the past six years, and although I tried to use STL algorithms everywhere I could, in most instances it would make my code more obscure, so I went back to a simple loop. Now with C++0x it's the opposite: the code almost always looks simpler with them.
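A small before/after sketch of what I mean (the predicate is made up):

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Before C++0x: a separate functor just to say "greater than n"
    struct GreaterThan {
        int n;
        explicit GreaterThan(int n_) : n(n_) {}
        bool operator()(int x) const { return x > n; }
    };

    std::ptrdiff_t count_old(const std::vector<int>& v, int n) {
        return std::count_if(v.begin(), v.end(), GreaterThan(n));
    }

    // With C++0x lambdas: the predicate sits right where it is used
    std::ptrdiff_t count_new(const std::vector<int>& v, int n) {
        return std::count_if(v.begin(), v.end(), [n](int x) { return x > n; });
    }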
The problem is that, for now, C++0x support is still limited to a few compilers, not least because the standard is not completely finished yet. So we will probably have to wait a few years to really see widespread use of STL algorithms in production code.
I would not use STL in two cases:
When the STL is not designed for your task. The STL is nearly the best for general purposes. However, for specific applications the STL may not always be the best. For example, in one of my programs I need a huge hash table, while the STL/TR1 hash map equivalent takes too much memory.
When you are learning algorithms. I am one of the few who enjoy reinventing the wheel and learn a lot in the process. For that program, I reimplemented a hash table. It really took me a lot of time, but in the end all the effort paid off. I have learned many things that greatly benefit my future career as a programmer.
When you think you can code it better than a really clever coder who spent weeks researching and testing and trying to cope with every conceivable set of inputs.
For most Earthlings the answer is never!
I want to answer the "when not to use STL" case, with a clear example.
(as a challenge to all of you, show me that you can solve this with any STL algorithm until C++17)
Convert a vector of integers std::vector<int> into a vector of std::pair<int,int>, i.e.:
Convert
std::vector<int> values = {1,2,3,4,5,6,7,8,9,10};
to
std::vector<std::pair<int,int>> values = { {1,2}, {3,4} , {5,6}, {7,8} ,{9,10} };
And guess what? It's impossible with any STL algorithm until C++17.
See the complete discussion on solving this problem here:
How can I convert std::vector<T> to a vector of pairs std::vector<std::pair<T,T>> using an STL algorithm?
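For contrast, a plain loop does it trivially (a sketch, assuming an even number of elements):

    #include <cstddef>
    #include <utility>
    #include <vector>

    std::vector<std::pair<int, int>> to_pairs(const std::vector<int>& v) {
        std::vector<std::pair<int, int>> out;
        out.reserve(v.size() / 2);
        for (std::size_t i = 0; i + 1 < v.size(); i += 2)
            out.push_back({v[i], v[i + 1]});
        return out;
    }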
So to answer your question: use an STL algorithm only when it perfectly fits your problem. Don't hack an STL algorithm to make it fit your problem.
Are they used more frequently than it seems?
I've never seen them used, except in books. Maybe they're used in the implementation of the STL itself. Maybe they'll be used more because they're easier to use (see, for example, lambda functions and expressions), or even become obsolete (see, for example, the range-based for loop), in the next version of C++.
I would need some basic vector mathematics constructs in an application. Dot product, cross product. Finding the intersection of lines, that kind of stuff.
I can do this myself (in fact, I already have), but isn't there a "standard" to use, so that bugs and possible optimizations would not be on me?
Boost does not have it. Their mathematics part is about statistical functions, as far as I was able to see.
Addendum:
Boost 1.37 indeed seems to have this. They also gracefully introduce a number of other solutions in the field, and explain why they still went and did their own. I like that.
Re-check that ol' good friend of C++ programmers called Boost. It has a linear algebra package that may well suit your needs.
I've not tested it, but the C++ Eigen library is becoming increasingly popular these days. According to them, they are on par with the fastest libraries out there, and their API looks quite neat to me.
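For what it's worth, the Eigen API for the basics looks like this (a sketch; I haven't benchmarked it):

    #include <Eigen/Dense>
    #include <iostream>

    int main() {
        Eigen::Vector3d a(1.0, 0.0, 0.0);
        Eigen::Vector3d b(0.0, 1.0, 0.0);

        double d          = a.dot(b);    // dot product
        Eigen::Vector3d c = a.cross(b);  // cross product

        std::cout << "dot = " << d << "\ncross:\n" << c << "\n";
    }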
Armadillo
Armadillo employs a delayed evaluation approach to combine several operations into one and reduce (or eliminate) the need for temporaries. Where applicable, the order of operations is optimised. Delayed evaluation and optimisation are achieved through recursive templates and template meta-programming.

While chained operations such as addition, subtraction and multiplication (matrix and element-wise) are the primary targets for speed-up opportunities, other operations, such as manipulation of submatrices, can also be optimised. Care was taken to maintain efficiency for both "small" and "big" matrices.
I would stay away from using Numerical Recipes (NRC) code for anything other than learning the concepts.
I think what you are looking for is Blitz++
Check www.netlib.org, which is maintained by Oak Ridge National Lab and the University of Tennessee. You can search for numerical packages there. There's also Numerical Recipes in C++, which has code that goes with it, but the C++ version of the book is somewhat expensive and I've heard the code described as "terrible." The C and FORTRAN versions are free, and the associated code is quite good.
There is a nice vector library for 3D graphics in the Prophecy SDK:
Check out http://www.twilight3d.com/downloads.html
For linear algebra, try JAMA/TNT. That would cover dot products (plus matrix factoring and other stuff). As far as vector cross products go (really valid only for 3D; otherwise I think you get into tensors), I'm not sure.
For an extremely lightweight (single .h file) library, check out CImg. It's geared towards image processing, but has no problem handling vectors.