Standardize 2D/3D Vector / Coordinate Class - c++

Question
This is something that's bugging me for some time now, but I couldn't find a definitive answer for it:
Is anyone aware of a proposal to introduce a standard 2D and/or 3D Vector (a struct with x,y and z members) to the STL?
If not, is there a realistic way to get such a class into the next version of the standard - short of writing a complete and perfectly written proposal myself?
And, are there any good reasons (aside from no one having the time) why this hasn't already been done?
I'm definitely willing to contribute, but I believe I lack the experience to produce something of high enough quality to get accepted (I'm not a professional programmer).
Reasoning / Background
By now I've seen dozens of libraries and frameworks (be it for graphics, physics, math, navigation, sensor fusion ...) which all basically implement their own version of
struct Vector2d {
    double x, y;
    // ...
};
/* ...
 * operator overloads
 */
and/or its 3D equivalent - not to mention all the occasions, where I implemented one myself before I took the time to do a proper, reusable version.
Obviously, this is not something difficult and I'm not worrying about suboptimal implementations, but every time I want to combine two libraries or reuse code from a different project, I have to take care of converting one version into the other (either by casting or, if possible, text replacement).
Now that the committee strives to significantly extend the standard library for c++17 (especially with a 2D graphics framework), I really would like to have a common 2D vector baked into all interfaces from the start, so I can just write e.g.:
drawLine(transformCoordinates(trackedObject1.estimatePos(), params),
         transformCoordinates(trackedObject2.estimatePos(), params));
rather than
MyOwnVec2D t1{trackedObject1.estimatePosX(), trackedObject1.estimatePosY()};
MyOwnVec2D t2{trackedObject2.estimatePosX(), trackedObject2.estimatePosY()};
t1 = transformCoordinates(t1, params);
t2 = transformCoordinates(t2, params);
drawLine(t1.x, t1.y, t2.x, t2.y);
The example might be a little exaggerated, but I think it shows my point.
I'm aware of std::valarray, which already goes in the right direction, as it allows standard operations like addition and multiplication, but it carries far too much weight if all you need are two or three coordinates. I think a valarray with a fixed size and no dynamic memory allocation (e.g. based on std::array) would be an acceptable solution, especially as it would come with a trivial iterator implementation, but I personally would prefer a class with x, y (and z) members.
Remark: I'm sorry if this topic has already been discussed (and I would be surprised if it hasn't), but every time I search for 2D vectors I get results talking about something like std::vector<std::vector<T>> or how to implement a certain transformation, but nothing on the topic of standardization.

are there any good reasons (aside from no one having the time) why this hasn't already been done?
There's essentially no reason to.
Forming a type that contains two or three elements is utterly trivial, and all the operations can be trivially defined too. Furthermore, the C++ standard library is not intended to be a general-purpose mathematical toolsuite: it makes sense to use specialised third-party libraries for that if you are serious about mathematical types and constructs beyond the functions and operators that you can throw together in half an hour.
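For illustration, a minimal sketch of the sort of half-hour type and operations meant here (the names and the operator set are arbitrary):
#include <cmath>

// A deliberately trivial 2D vector: two public members and a handful of
// free-function operators, which is roughly what every library reinvents.
struct Vec2 {
    double x, y;
};

constexpr Vec2 operator+(Vec2 a, Vec2 b) { return {a.x + b.x, a.y + b.y}; }
constexpr Vec2 operator-(Vec2 a, Vec2 b) { return {a.x - b.x, a.y - b.y}; }
constexpr Vec2 operator*(double s, Vec2 v) { return {s * v.x, s * v.y}; }
constexpr double dot(Vec2 a, Vec2 b)      { return a.x * b.x + a.y * b.y; }
inline double length(Vec2 v)              { return std::sqrt(dot(v, v)); }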
And we do not standardise things that do not need to be standardised.
If C++ were to gain some kind of standardised 3D graphics API then I can see this changing but not until then. And hopefully C++ will never gain any kind of standardised 3D graphics API, because that is not what it is for.
However, if you feel strongly about it, you can start a conversation on std-discussion where all the experts (and some assuredly non-experts) live; sometimes such conversations lead to the formation of proposals, and it needn't necessarily be you who ends up writing it.

In case someone else has an interest in it, I wanted to point out that the July 2014 version of "A Proposal to Add 2D Graphics Rendering and Display to C++" includes a 2D point class / struct (my question was based on the initial draft from January 2014). So maybe there will be at least a simple standard 2D-Vector in c++1z.

Related

C++ choice of pass by value vs pass by reference for POD math structure classes for high performance applications considering cache coherency

For many high performance applications, such as game engines or financial software, considerations of cache coherency, memory layout, and cache misses are crucial for maintaining smooth performance. As the C++ standard has evolved, especially with the introduction of move semantics and C++14, it has become less clear where to draw the line between pass by value and pass by reference for mathematical POD-based classes.
Consider the common POD Vector3 class:
class Vector3
{
public:
    float32 x;
    float32 y;
    float32 z;
    // Implementation functions below (all non-virtual)...
};
This is the most commonly used math structure in game development. It is a non-virtual, 12-byte class, even on 64-bit, since we are explicitly using IEEE float32, which uses 4 bytes per float. My question is as follows: what is the general best-practice guideline for deciding whether to pass POD mathematical classes by value or by reference in high performance applications?
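To make the two options concrete, the signatures being compared look roughly like this (function names are hypothetical):
// Pass by value: the 12-byte Vector3 is copied, often entirely in registers,
// and the callee's copy cannot alias anything else.
Vector3 ScaleByValue(Vector3 v, float s);

// Pass by const reference: only an address is passed, but reads go through
// that indirection unless the call is inlined.
Vector3 ScaleByRef(const Vector3& v, float s);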
Some things for consideration when answering this question:
It is safe to assume the default constructor does not initialize any values
It is safe to assume no arrays beyond 1D are used for any POD math structures
Clearly most people pass 4-8 byte POD constants by value, so there doesn't seem to be much debate there
What happens when this Vector is a class member variable vs a local variable on the stack? If pass by reference is used, then it would use the memory address of the variable on the class vs a memory address of something local on the stack. Does this use-case matter? Could this difference where PBR is used result in more cache misses?
What about the case where SIMD is used or not used?
What about move semantic compiler optimizations? I have noticed that when switching to C++14, the compiler will often use move semantics when chained function calls pass the same vector by value, especially when it is const. I observed this by perusing the assembly breakdown.
When using pass by value and pass by reference with these math structures, does const make much of an impact on compiler optimizations? See the point above.
Given the above, what is a good guideline for when to use pass by value vs. pass by reference with modern C++ compilers (C++14 and above) to minimize cache misses and promote cache coherency? At what point might someone say a POD math structure is too large for pass by value, such as a 4x4 affine transform matrix, which is 64 bytes in size assuming use of float32? Does it matter for this decision whether the vector (or any small POD math structure) is declared on the stack rather than referenced as a member variable?
I am hoping someone can provide some analysis and insight into where a good modern guideline for best practices can be established for the above situation. I believe the line has become blurrier as to when to use PBV vs. PBR for POD classes as the C++ standard has evolved, especially in regard to minimizing cache misses.
I see the question title is about the choice of pass-by-value vs. pass-by-reference, though it sounds like what you are after more broadly is the best practice for efficiently passing around 3D vectors and other common PODs. Passing data is fundamental and intertwined with programming paradigm, so there isn't a consensus on the best way to do it. Besides performance, there are considerations to weigh like code readability, flexibility, and portability when deciding which approach to favor in a given application.
That said, in recent years, "data-oriented design" has become a popular alternative to object-oriented programming, especially in video game development. The essential idea is to think about the program in terms of data it needs to process, and how all that data can be organized in memory for good cache locality and computation performance. There was a great talk about it at CppCon 2014: "Data-Oriented Design and C++" by Mike Acton.
With your Vector3 example, for instance, it is often the case that a program has not just one but many 3D vectors that are all processed the same way, say, all undergoing the same geometric transformation. Data-oriented design suggests it is then a good idea to lay the vectors out contiguously in memory and transform them all together in a batch operation. This improves caching and creates opportunities to leverage SIMD instructions. You could implement this example with the Eigen C++ linear algebra library. The vectors can be represented using an Eigen::Matrix<float, 3, Eigen::Dynamic> of shape 3xN to store N vectors, then manipulated using Eigen's SIMD-accelerated operations.
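A rough sketch of that batch layout (assuming Eigen 3, with a rotation standing in for the geometric transform):
#include <Eigen/Dense>

int main() {
    const int N = 1000;

    // N vectors stored contiguously as the columns of a 3xN matrix.
    Eigen::Matrix<float, 3, Eigen::Dynamic> points(3, N);
    points.setRandom();

    // One geometric transform (here a rotation about Z) applied to all
    // vectors in a single batch; Eigen vectorizes the product internally.
    Eigen::Matrix3f rot = Eigen::AngleAxisf(0.5f, Eigen::Vector3f::UnitZ()).toRotationMatrix();
    points = rot * points;
}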

Why is the second parameter of std::assoc_laguerre an unsigned int?

In C++17, a lot of special functions were added to the standard library. One of them is the associated Laguerre polynomials. The second argument requires an unsigned int, but the mathematical definition is valid for real numbers as well. Is there any reason that it's limited to nonnegative integers? Is it simply due to the fact that binomial(n,k) is easier/faster/simpler to calculate when n and k are both positive integers?
Walter E. Brown, the father of special functions in C++, never explicitly answered your Laguerre question as far as I know. Nevertheless, when one reads what Brown did write, a likely motive becomes clear:
Many of the proposed Special Functions have definitions over some or all of the complex plane as well as over some or all of the real numbers. Further, some of these functions can produce complex results, even over real-valued arguments. The present proposal restricts itself by considering only real-valued arguments and (correspondingly) real-valued results.
Our investigation of the alternative led us to realize that the complex landscape for the Special Functions is figuratively dotted with land mines. In coming to our recommendation, we gave weight to the statement from a respected colleague that “Several Ph.D. dissertations would [or could] result from efforts to implement this set of functions over the complex domain.” This led us to take the position that there is insufficient prior art in this area to serve as a basis for standardization, and that such standardization would be therefore premature....
Of course, you are asking about real numbers rather than complex, so I cannot prove that the reason is the same, but Abramowitz and Stegun (upon whose handbook Brown's proposal was chiefly based) provide extra support for some special functions of integer order. Curiously, in chapter 13 of my copy of Abramowitz and Stegun, I see no such extra support, so you have a point, don't you? Maybe Brown was merely being cautious, not wishing to push too much into the C++ standard library all at once, but there does not immediately seem to me to be any obvious reason why floating-point arguments should not have been supported in this case.
Not that I would presume to second-guess Brown.
As you likely know, you can probably use chapter 13 of Abramowitz and Stegun to get the effect you want without too much trouble. Maybe Brown himself would have said just this: you can get the effect you want without too much trouble. To be sure, one would need to ask Brown himself.
For information, Brown's proposal, earlier linked, explicitly refers the assoc_laguerre of C++ to Abramowitz and Stegun, sect. 13.6.9.
In summary, at the time C++ was first getting special-function support, caution was heeded. The library did not want to advance too far, too fast. It seems fairly probable that this factor chiefly accounts for the lack you note.
I know for a fact that there was great concern over real and perceived implementability. Also, new library features have to be justified. Features have to be useful to more than one community. Integer-order assoc_laguerre is the type physicists most often use. A signed-integer or real order was probably thought too abstract. As it was, the special math functions barely made it into C++17. I actually think part of the reason they got in was the perception that 17 was a light release and folks wanted to bolster it.
As we all know, there is no reason for the order parameter to be unsigned, or even integral for that matter. The underlying implementation that I put in libstdc++ (gcc) has a real order alpha. Real-order assoc_laguerre is useful in quadrature rules, for example.
I might recommend making overloads for real order to the committee in addition to relaxing the unsigned integer to be signed.
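For reference, real (non-integer) order is straightforward to compute from the standard three-term recurrence; a minimal sketch (not the libstdc++ implementation, and with no error handling):
// Generalized Laguerre polynomial L_n^alpha(x) for real alpha, using
// (k+1) L_{k+1}^a = (2k+1+a-x) L_k^a - (k+a) L_{k-1}^a.
double assoc_laguerre_real(unsigned n, double alpha, double x)
{
    if (n == 0) return 1.0;
    double Lkm1 = 1.0;              // L_0
    double Lk   = 1.0 + alpha - x;  // L_1
    for (unsigned k = 1; k < n; ++k) {
        const double Lkp1 = ((2.0 * k + 1.0 + alpha - x) * Lk - (k + alpha) * Lkm1) / (k + 1.0);
        Lkm1 = Lk;
        Lk   = Lkp1;
    }
    return Lk;
}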

Why separate entry points for left-handed and right-handed matrices instead of a handedness flag or conversion function?

The DirectX Math API for matrix calculation contains separate functions for generating left-handed vs. right-handed matrices (e.g. XMMatrixLookAtLH vs. XMMatrixLookAtRH, alongside XMMatrixPerspectiveLH vs. XMMatrixPerspectiveRH).
I don't exactly understand the full difference between the two coordinate systems (especially as they apply to traditional DirectX and OpenGL) but why is the API structured like this, as opposed to combining the entry points and providing, say, an enum indicating handedness or an extra generic function that converts a matrix intended for a right handed system to a left handed one (or vice versa)? Is it simply that both operations need to be fast (i.e. you could provide those options, but they would be too slow for any practical purposes and thus not worth supporting) or is there something fundamental about these matrix functions that requires the LH and RH variations to be entirely separate endpoints?
EDIT: To clarify: While I do appreciate answers expounding on why the API design decisions were made, my primary curiosity is whether a parameterization or after-the-fact conversion function can be implemented correctly or efficiently when you consider the math and the implementation (i.e. if the two halves can't really share code, that would be inefficient).
The DirectXMath project has a long history to it. I started working on it back in 2008 when it was "xboxmath" for the Xbox 360 focused on VMX128 with no SSE/SSE2 optimizations. Much of the initial API surface area has been preserved since then as I've tried to maintain support for existing clients as I moved from xboxmath to xnamath then xnamath to DirectXMath, including this "LH" vs. "RH" as two distinct functions.
There is a practical reason for this design: a single application is only going to use one or the other, not both. Having a potential run-time check of a parameter to pick something that is fixed and known is not that useful.
Another practical reason is to minimize branching in the code. Most of the DirectXMath functions are straight-line code that avoids all branching, using element selects instead. This was originally motivated by the fact that the Xbox 360 was an extremely fast in-order processor, but didn't have a particularly advanced branch predictor.
Generally the choice of viewing coordinate system is a matter of historical comfort: OpenGL has long preferred column-major, right-handed viewing systems. DirectX has historically used row-major, left-handed viewing coordinates. XNA Game Studio chose to go with row-major, right-handed viewing coordinates.
With the modern programmable GPU pipeline, there is actually no requirement to use one or the other as long as you are consistent: the DirectX API can support either LH or RH. Most DirectX samples, including anything written using DXUT, use left-handed viewing coordinates. Most Windows Store samples and .NET-based systems generally stick with the XNA Game Studio convention.
Because of all this, DirectXMath supports row-major matrices and leaves it up to the developer to use right-handed vs. left-handed viewing coordinates. The SimpleMath wrapper for DirectXMath is intended to feel natural to those coming from C#'s XNA Game Studio math library, so it assumes right-handed.
In response to #galpo1n: "DirectXMath is quite ancient now too, not a good example of C++; it is good enough for what it does, but most projects would rewrite their math library."
xboxmath's tradition is to use C callable functions because in the early days of Xbox 360 there were still plenty of developers who preferred C over C++. That has become far less important over time as C++ compilers have matured and developer tastes have changed. In the transition from XNAMath to DirectXMath, I made the library C++ only (i.e. no C) and took advantage of things like stdint.h, C++ namespaces, and I made use of templates and specializations to improve the implementation of permutes and shuffle operations for the SSE/SSE2 instruction set.
The C++ language use of DirectXMath has also tracked the Visual C++ compiler support. DirectXMath 3.08 uses =default, and the upcoming 3.09 release uses constexpr. At its core, it remains a basically C-style interface by design. Really, the best way to think of it is that each DirectXMath function is a 'meta-intrinsic'. They are all inline, and really you want the compiler to stitch a bunch of them together into one codepath for maximum efficiency. While more recent compilers have gotten better at optimizing C++ code patterns, even the old ones (think Visual C++ .NET 2002 era) did pretty well with C code.
The original API implemented VMX128 and "no-intrinsics" codepaths. XNAMath implemented VMX128, no-intrinsics, and SSE/SSE2 for x86 and x64. DirectXMath no longer supports VMX128, but added ARM-NEON, and I've since added optional codepaths for SSE3, SSE4.1, AVX, and AVX2. As such, this C-style API has proven to map well to SIMD intrinsics across a variety of processor families.
SimpleMath is where I decided to put C++ type conversion behaviors to hide the verbosity around loading and storing data. This is less efficient, as the programmer may not realize they are actually doing something expensive, which is why I kept it out of the base library. If you avoid SimpleMath and stick with DirectXMath, you will be writing more verbose code, but in return you know when you are doing something that's potentially performance-impacting, whereas with C++ implicit conversions and constructors you can end up spilling to memory through temporaries when you didn't expect to. If I had put this in the base library, performance-sensitive programmers couldn't easily opt out. It's all a matter of trade-offs.
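To illustrate the explicit style being described, a trivial example using the public DirectXMath load/store functions:
#include <DirectXMath.h>
using namespace DirectX;

// The storage type (XMFLOAT3) is converted to the SIMD register type (XMVECTOR)
// and back in plain sight, so the cost of moving data in and out of registers
// is visible in the calling code rather than hidden by implicit conversions.
XMFLOAT3 AddPoints(const XMFLOAT3& a, const XMFLOAT3& b)
{
    XMVECTOR va = XMLoadFloat3(&a);
    XMVECTOR vb = XMLoadFloat3(&b);
    XMVECTOR vr = XMVectorAdd(va, vb);

    XMFLOAT3 result;
    XMStoreFloat3(&result, vr);
    return result;
}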
UPDATE: If you really need a parameterized version, you could do something simple and let the compiler take care of optimizing it:
inline XMMATRIX XM_CALLCONV XMMatrixPerspectiveFov(float FovAngleY, float AspectRatio, float NearZ, float FarZ, bool rhcoords)
{
    if (rhcoords)
    {
        return XMMatrixPerspectiveFovRH(FovAngleY, AspectRatio, NearZ, FarZ);
    }
    else
    {
        return XMMatrixPerspectiveFovLH(FovAngleY, AspectRatio, NearZ, FarZ);
    }
}
The library is all inline, so you basically have the source. Each case is a little different, so you can optimize each parameterized version individually. Some of the functions have full expansions of codepaths, some just have one. In most cases it just comes down to a negative on the Z.
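To illustrate "a negative on the Z", here is a sketch based on the classic D3DX-style perspective formulas (plain arrays, not the DirectXMath implementation); the LH and RH variants differ only in the sign of the z-facing terms:
#include <cmath>

struct Mat4 { float m[4][4]; };

inline Mat4 PerspectiveFov(float fovY, float aspect, float zn, float zf, bool rh)
{
    const float yScale = 1.0f / std::tan(fovY * 0.5f);
    const float xScale = yScale / aspect;
    const float zSign  = rh ? -1.0f : 1.0f;   // the whole LH/RH difference

    Mat4 r = {};
    r.m[0][0] = xScale;
    r.m[1][1] = yScale;
    r.m[2][2] = zSign * zf / (zf - zn);
    r.m[2][3] = zSign;                        // w receives +z (LH) or -z (RH)
    r.m[3][2] = -zn * zf / (zf - zn);
    return r;
}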
Designing an API like this is quite opinion-based. In an engine, the time you spend computing a matrix whose only effective difference is the handedness is negligible. They could have used a set of flags instead of name and code duplication without any real major issue.
DirectXMath is quite ancient now too, not a good example of C++; it is good enough for what it does, but most projects would rewrite their math library.
With modern GPUs and shaders, the handedness is a pure fashion choice, as long as your pipeline is consistent (or performs conversions where required) from the modeling tool to the render engine. It is worth noting that in addition to handedness, you often have to deal with a Y-up or Z-up convention.
The easy way to understand handedness is to form a frame with your fingers (thumb is X, index is Y, middle is Z). If you do that with both hands and try to align two of the fingers, the difference is obvious: the third axis is inverted. That's all :)

Understanding and using the Boost Phoenix Library with a focus on lazy evaluation

I just found out about the Boost Phoenix library (hidden in the Spirit project), and as a fan of the functional programming style (but still an amateur, with some small experience in Haskell and Scheme) I wanted to play around with it to learn about reasonable applications of this library.
Besides the increased expressiveness and clarity of code written in FP style, I'm especially interested in lazy evaluation for speeding up computations at low cost.
A small and simple example would be the following:
there is some kind of routing problem (like the TSP) which uses a Euclidean distance matrix. We assume that some of the values of the distance matrix are never used, and some are used very often (so it isn't a good idea to compute them on the fly for every call). Now it seems reasonable to have a lazy data structure holding the distance values. How would that be possible with Phoenix? (Ignoring the fact that it could easily be done without FP-style programming at all.) Reading the official Phoenix documentation didn't give me enough understanding to answer that.
Is it possible at all? (In Haskell, for example, the ability to create thunks which guarantee that the value can be computed later is at the core of the language.)
What does it mean to use a vector with all the lazy functions defined in Phoenix? Naive as I am, I tried to fill two matrices (vectors of vectors) with random values, one with the normal push_back, the other with boost::phoenix::push_back, and tried to read out only a small number of values from these matrices and store them in a container for printing. The lazy one was always empty. Am I using Phoenix in the wrong way, or should it be possible? Or did I misunderstand the function of the containers/algorithms in Phoenix? A small clue for the latter is the existence of a special list data structure in the FC++ library, which influenced Phoenix.
Additionally:
What are you using Phoenix for?
Do you know of some good resources regarding Phoenix (tutorials, blog entries, ...)?
Thanks for all your input!
As requested, my comment (with additions and small modifications) as an answer...
I know your position exactly. I too played around with Phoenix a while ago (although I didn't dig in very deeply; it was mostly a byproduct of reading the Boost::Spirit tutorial), relatively soon after catching the functional bug and learning basic Haskell - and I didn't get anything working :( This is, by the way, in sync with my general experience with dark template magic: very easy to misunderstand, screw up, and get punched in the face by totally unexpected behaviour or incomprehensible error messages.
I'd advise you to stay away from Phoenix for a long time. I like FP too, but FP in C++ is even uglier than mutability in Haskell (they'd be head to head, but C++ is already ugly and Haskell is, at least according to Larry Wall, the most beautiful language ever ;) ). Learn and use FP, and when you're good at it and forced to use C++, use Phoenix. But for learning, a library that bolts a wholly different paradigm onto an already complex language (i.e. FP in C++) is not advisable.
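For what it's worth, the lazy distance matrix from the question can be sketched in plain C++ without Phoenix at all, by caching each entry on first access (Point is a hypothetical type here, and C++17's std::optional is used for brevity):
#include <cmath>
#include <optional>
#include <vector>

struct Point { double x, y; };   // hypothetical point type for illustration

// Lazily evaluated Euclidean distance matrix: an entry is computed on first
// access and cached, so entries that are never requested are never computed.
class LazyDistanceMatrix {
public:
    explicit LazyDistanceMatrix(std::vector<Point> pts)
        : points_(std::move(pts)),
          cache_(points_.size(), std::vector<std::optional<double>>(points_.size())) {}

    double operator()(std::size_t i, std::size_t j) {
        if (!cache_[i][j]) {
            const double dx = points_[i].x - points_[j].x;
            const double dy = points_[i].y - points_[j].y;
            cache_[i][j] = std::sqrt(dx * dx + dy * dy);
        }
        return *cache_[i][j];
    }

private:
    std::vector<Point> points_;
    std::vector<std::vector<std::optional<double>>> cache_;
};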

What are the most widely used C++ vector/matrix math/linear algebra libraries, and their cost and benefit tradeoffs? [closed]

It seems that many projects slowly come upon a need to do matrix math, and fall into the trap of first building some vector classes and slowly adding in functionality until they get caught building a half-assed custom linear algebra library, and depending on it.
I'd like to avoid that while not building in a dependence on some tangentially related library (e.g. OpenCV, OpenSceneGraph).
What are the commonly used matrix math/linear algebra libraries out there, and why would one decide to use one over another? Are there any that should be advised against for some reason? I am specifically using this in a geometric/time context (2, 3, 4 dimensions), but I may be using higher-dimensional data in the future.
I'm looking for differences with respect to any of: API, speed, memory use, breadth/completeness, narrowness/specificness, extensibility, and/or maturity/stability.
Update
I ended up using Eigen3 which I am extremely happy with.
There are quite a few projects that have settled on the Generic Graphics Toolkit for this. The GMTL in there is nice - it's quite small, very functional, and has been used widely enough to be very reliable. OpenSG, VRJuggler, and other projects have all switched to using this instead of their own hand-rolled vector/matrix math.
I've found it quite nice - it does everything via templates, so it's very flexible, and very fast.
Edit:
After the comments discussion, and edits, I thought I'd throw out some more information about the benefits and downsides to specific implementations, and why you might choose one over the other, given your situation.
GMTL -
Benefits: Simple API, specifically designed for graphics engines. Includes many primitive types geared towards rendering (such as planes, AABBs, quaternions with multiple interpolation methods, etc.) that aren't in any other packages. Very low memory overhead, quite fast, easy to use.
Downsides: API is very focused specifically on rendering and graphics. Doesn't include general purpose (NxM) matrices, matrix decomposition and solving, etc, since these are outside the realm of traditional graphics/geometry applications.
Eigen -
Benefits: Clean API, fairly easy to use. Includes a Geometry module with quaternions and geometric transforms. Low memory overhead. Full, highly performant solving of large NxN matrices and other general purpose mathematical routines.
Downsides: May be a bit larger scope than you are wanting (?). Fewer geometric/rendering specific routines when compared to GMTL (ie: Euler angle definitions, etc).
IMSL -
Benefits: Very complete numeric library. Very, very fast (supposedly the fastest solver). By far the largest, most complete mathematical API. Commercially supported, mature, and stable.
Downsides: Cost - not inexpensive. Very few geometric/rendering specific methods, so you'll need to roll your own on top of their linear algebra classes.
NT2 -
Benefits: Provides syntax that is more familiar if you're used to MATLAB. Provides full decomposition and solving for large matrices, etc.
Downsides: Mathematical, not rendering focused. Probably not as performant as Eigen.
LAPACK -
Benefits: Very stable, proven algorithms. Been around for a long time. Complete matrix solving, etc. Many options for obscure mathematics.
Downsides: Not as highly performant in some cases. Ported from Fortran, with odd API for usage.
Personally, for me, it comes down to a single question: how are you planning to use this? If your focus is just on rendering and graphics, I like the Generic Graphics Toolkit, since it performs well and supports many useful rendering operations out of the box without you having to implement your own. If you need general-purpose matrix solving (i.e. SVD or LU decomposition of large matrices), I'd go with Eigen, since it handles that, provides some geometric operations, and is very performant with large matrix solutions. You may need to write more of your own graphics/geometric operations (on top of their matrices/vectors), but that's not horrible.
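For a sense of what that general-purpose solving looks like in Eigen, a small sketch (assuming Eigen 3):
#include <Eigen/Dense>

int main() {
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(100, 100);
    Eigen::VectorXd b = Eigen::VectorXd::Random(100);

    // LU decomposition and solve of a dense linear system A x = b.
    Eigen::VectorXd x = A.partialPivLu().solve(b);

    // Singular value decomposition with thin U and V factors.
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(A, Eigen::ComputeThinU | Eigen::ComputeThinV);
    Eigen::VectorXd sv = svd.singularValues();

    (void)x; (void)sv;
}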
So I'm a pretty critical person, and figure if I'm going to invest in a library, I'd better know what I'm getting myself into. I figure it's better to go heavy on the criticism and light on the flattery when scrutinizing; what's wrong with it has many more implications for the future than what's right. So I'm going to go overboard here a little bit to provide the kind of answer that would have helped me and I hope will help others who may journey down this path. Keep in mind that this is based on what little reviewing/testing I've done with these libs. Oh and I stole some of the positive description from Reed.
I'll mention up top that I went with GMTL despite its idiosyncrasies, because the Eigen2 unsafeness was too big of a downside. But I've recently learned that the next release of Eigen2 will contain defines that shut off the alignment code and make it safe. So I may switch over.
Update: I've switched to Eigen3. Despite its idiosyncrasies, its scope and elegance are too hard to ignore, and the optimizations which make it unsafe can be turned off with a define.
Eigen2/Eigen3
Benefits: MPL2 license (earlier versions were LGPL). Clean, well-designed API, fairly easy to use. Seems to be well maintained with a vibrant community. Low memory overhead. High performance. Made for general linear algebra, but good geometric functionality is available as well. All-header library, no linking required.
Idiosyncrasies/downsides: (some or all of these can be avoided by defines that are available in the current development branch, Eigen3)
Unsafe performance optimizations mean the rules must be followed carefully. Failure to follow the rules causes crashes.
you simply cannot safely pass by value
use of Eigen types as members requires special allocator customization (or you crash)
use with STL container types and possibly other templates requires special allocation customization (or you will crash)
certain compilers need special care to prevent crashes on function calls (GCC on Windows)
GMTL
Benefits: LGPL. Fairly simple API, specifically designed for graphics engines. Includes many primitive types geared towards rendering (such as planes, AABBs, quaternions with multiple interpolation methods, etc.) that aren't in any other packages. Very low memory overhead, quite fast, easy to use. All header-based, no linking necessary.
Idiosyncrasies/downsides:
API is quirky
what might be myVec.x() in another lib is only available via myVec[0] (a readability problem)
an array or std::vector of points may cause you to do something like pointsList[0][0] to access the x component of the first point
in a naive attempt at optimization, cross(vec,vec) was removed and replaced with makeCross(vec,vec,vec), even though the compiler eliminates unnecessary temporaries anyway
normal math operations don't return normal types unless you shut off some optimization features, e.g. vec1 - vec2 does not return a normal vector, so length( vecA - vecB ) fails even though vecC = vecA - vecB works. You must wrap it like: length( Vec( vecA - vecB ) )
operations on vectors are provided by external functions rather than members. This may require you to use the scope resolution operator everywhere, since common symbol names may collide
you have to do
length( makeCross( vecA, vecB ) )
or
gmtl::length( gmtl::makeCross( vecA, vecB ) )
where otherwise you might try
vecA.cross( vecB ).length()
not well maintained
still claimed to be "beta"
documentation is missing basic info, like which headers are needed to use normal functionality
Vec.h does not contain operations for vectors; VecOps.h contains some, and others are in Generate.h, for example (cross(vec&,vec&,vec&) is in VecOps.h, [make]cross(vec&,vec&) in Generate.h)
immature/unstable API; still changing.
For example, "cross" moved from "VecOps.h" to "Generate.h", and then the name was changed to "makeCross". Documentation examples fail because they still refer to old versions of functions that no longer exist.
NT2
Can't tell because they seem to be more interested in the fractal image header of their web page than the content. Looks more like an academic project than a serious software project.
Latest release over 2 years ago.
Apparently no documentation in English, though supposedly there is something in French somewhere.
Can't find a trace of a community around the project.
LAPACK & BLAS
Benefits: Old and mature.
Downsides:
old as dinosaurs with really crappy APIs
For what it's worth, I've tried both Eigen and Armadillo. Below is a brief evaluation.
Eigen
Advantages:
1. Completely self-contained -- no dependence on external BLAS or LAPACK.
2. Documentation decent.
3. Purportedly fast, although I haven't put it to the test.
Disadvantage:
The QR algorithm returns just a single matrix, with the R matrix embedded in the upper triangle. No idea where the rest of the matrix comes from, and no Q matrix can be accessed.
Armadillo
Advantages:
1. Wide range of decompositions and other functions (including QR).
2. Reasonably fast (uses expression templates), but again, I haven't really pushed it to high dimensions.
Disadvantages:
1. Depends on external BLAS and/or LAPACK for matrix decompositions.
2. Documentation is lacking IMHO (including the specifics wrt LAPACK, other than changing a #define statement).
It would be nice if an open-source library were available that is self-contained and straightforward to use. I have run into this same issue for 10 years, and it gets frustrating. At one point, I used GSL for C and wrote C++ wrappers around it, but with modern C++ -- especially using the advantages of expression templates -- we shouldn't have to mess with C in the 21st century. Just my tuppence ha'penny.
If you are looking for high performance matrix/linear algebra/optimization on Intel processors, I'd look at Intel's MKL library.
MKL is carefully optimized for fast run-time performance - much of it based on the very mature BLAS/LAPACK Fortran standards. And its performance scales with the number of cores available. Hands-free scalability with available cores is the future of computing, and I wouldn't use any math library for a new project that doesn't support multi-core processors.
Very briefly, it includes:
Basic vector-vector, vector-matrix, and matrix-matrix operations
Matrix factorization (LU decomposition, Hermitian, sparse)
Least squares fitting and eigenvalue problems
Sparse linear system solvers
Non-linear least squares solver (trust regions)
Plus signal processing routines such as FFT and convolution
Very fast random number generators (Mersenne Twister)
Much more...
A downside is that the MKL API can be quite complex depending on the routines that you need. You could also take a look at their IPP (Integrated Performance Primitives) library which is geared toward high performance image processing operations, but is nevertheless quite broad.
Paul
CenterSpace Software, .NET math libraries, centerspace.net
What about GLM?
It's based on the OpenGL Shading Language (GLSL) specification and released under the MIT license.
Clearly aimed at graphics programmers
I've heard good things about Eigen and NT2, but haven't personally used either. There's also Boost.UBLAS, which I believe is getting a bit long in the tooth. The developers of NT2 are building the next version with the intention of getting it into Boost, so that might count for something.
My linear algebra needs don't extend beyond the 4x4 matrix case, so I can't comment on advanced functionality; I'm just pointing out some options.
I'm new to this topic, so I can't say a whole lot, but BLAS is pretty much the standard in scientific computing. BLAS is actually an API standard, which has many implementations. I'm honestly not sure which implementations are most popular or why.
If you want to also be able to do common linear algebra operations (solving systems, least squares regression, decomposition, etc.) look into LAPACK.
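For a flavor of what calling LAPACK looks like, here is a minimal sketch of a linear solve through the LAPACKE C interface (assuming a LAPACK/LAPACKE installation; the flat-array, in/out-parameter style is part of what the earlier "odd API" comment refers to):
#include <lapacke.h>
#include <cstdio>

int main() {
    // Solve A x = b for a 3x3 system; LAPACKE_dgesv performs an LU
    // factorization and overwrites b with the solution.
    double A[9] = { 3, 1, 2,
                    1, 5, 3,
                    2, 3, 6 };      // row-major 3x3
    double b[3] = { 10, 15, 20 };
    lapack_int ipiv[3];

    lapack_int info = LAPACKE_dgesv(LAPACK_ROW_MAJOR, 3, 1, A, 3, ipiv, b, 1);
    if (info == 0)
        std::printf("x = %f %f %f\n", b[0], b[1], b[2]);
    return (int)info;
}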
I'll add a vote for Eigen: I ported a lot of code (3D geometry, linear algebra and differential equations) from different libraries to this one, improving both performance and code readability in almost all cases.
One advantage that wasn't mentioned: it's very easy to use SSE with Eigen, which significantly improves performance of 2D-3D operations (where everything can be padded to 128 bits).
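A small sketch of that padding idea (assuming Eigen 3): storing a 3D point as an Eigen::Vector4f with a zero fourth component lets elementwise operations fill a whole 128-bit register, which a fixed-size Vector3f cannot.
#include <Eigen/Dense>

int main() {
    // 3D points padded to four floats; the fourth component stays zero.
    Eigen::Vector4f a(1.0f, 2.0f, 3.0f, 0.0f);
    Eigen::Vector4f b(4.0f, 5.0f, 6.0f, 0.0f);

    Eigen::Vector4f sum = a + b;                 // maps to a single SSE add
    float dot3 = a.head<3>().dot(b.head<3>());   // ignore the padding component

    (void)sum; (void)dot3;
}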
Okay, I think I know what you're looking for. It appears that GGT is a pretty good solution, as Reed Copsey suggested.
Personally, we rolled our own little library, because we deal with rational points a lot - lots of rational NURBS and Beziers.
It turns out that most 3D graphics libraries do computations with projective points that have no basis in projective math, because that's what gets you the answer you want. We ended up using Grassmann points, which have a solid theoretical underpinning and decreased the number of point types. Grassmann points are basically the same computations people are using now, with the benefit of a robust theory. Most importantly, it makes things clearer in our minds, so we have fewer bugs. Ron Goldman wrote a paper on Grassmann points in computer graphics called "On the Algebraic and Geometric Foundations of Computer Graphics".
Not directly related to your question, but an interesting read.
FLENS
http://flens.sf.net
It also implements a lot of LAPACK functions.
I found this library quite simple and functional (http://kirillsprograms.com/top_Vectors.php). These are bare-bones vectors implemented via C++ templates. No fancy stuff - just what you need to do with vectors (add, subtract, multiply, dot, etc.).