Gecode vs. Z3 for Constrained Randomization (C++)

I'm looking for a C++-based alternative to the SystemVerilog language.
While I doubt anything out there can match the simplicity and flexibility of the SystemVerilog constraint language, I have settled on using either Z3 or Gecode for what I'm working on, primarily because they're both under the MIT license.
What I'm looking for is:
Support for variable-sized bit vectors AND bit-vector arithmetic/logic operations. For example:
bit_vector a<30>;
bit_vector b<30>;
constraint {
a == (b << 2);
a == (b * 2);
b < a;
}
The problem with Gecode, as far as I can tell, is that it does not provide bit vectors right out of the box. However, its programming model seems a bit simpler, and it does provide a means for one to create their own types of variables. So I could perhaps create some kind of wrapper around the C++ bitset, similar to how IntVar wraps 32-bit integers. However, that would lack the ability to express multiplication-based constraints, since C++ bitsets don't support such operations.
Z3 does provide bit vectors right out of the box, but I'm not sure how it would handle trying to set constraints on, for example, 128-bit vectors. I'm also unsure how I can specify that I want to produce a variety of randomized variables that satisfy a constraint when possible. With Gecode, it's much clearer given how thorough its documentation is.
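For what it's worth, here is a minimal sketch of the example above using Z3's C++ API (z3++.h). One caveat: as written, the pseudocode's a == (b * 2) conflicts with a == (b << 2) for nonzero b (a left shift by 2 is multiplication by 4), so this sketch uses b * 4 to keep the system satisfiable; everything else mirrors the pseudocode:

#include <z3++.h>
#include <iostream>

int main() {
    z3::context c;
    z3::expr a = c.bv_const("a", 30);      // 30-bit vector
    z3::expr b = c.bv_const("b", 30);

    z3::solver s(c);
    s.add(a == z3::shl(b, 2));             // a == (b << 2)
    s.add(a == b * 4);                     // multiplication form of the same constraint
    s.add(z3::ult(b, a));                  // unsigned b < a

    if (s.check() == z3::sat) {
        z3::model m = s.get_model();
        std::cout << "a = " << m.eval(a) << ", b = " << m.eval(b) << "\n";
    }
}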
A simple constraint programming model, close or similar to SystemVerilog. For example, a language where I only need to type (x == y + z) instead of something like EQ(x, y + z). As far as I can tell, both APIs provide such a simple programming model.
A means of performing constrained randomization, for the sake of producing random stimulus. That is, I can provide some random seed that, depending on the constraints, results in an answer that may differ from the previous answer, similar to how SystemVerilog randomize calls may produce new random results. Gecode seems to support the use of random seeds. With Z3, it's much less clear.
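On the Z3 side, the usual workaround (a sketch, not true distribution-aware sampling) is to set the solver's random seed and re-solve while blocking the models already seen. Whether the seed actually changes the answers depends on the tactics in use:

#include <z3++.h>
#include <iostream>

int main() {
    z3::context c;
    z3::expr a = c.bv_const("a", 30);
    z3::expr b = c.bv_const("b", 30);

    z3::solver s(c);
    s.add(a == z3::shl(b, 2) && z3::ult(b, a));

    z3::params p(c);
    p.set("random_seed", 42u);             // vary per run for different walks
    s.set(p);

    // Enumerate up to five distinct solutions by blocking each model found.
    for (int i = 0; i < 5 && s.check() == z3::sat; ++i) {
        z3::model m = s.get_model();
        std::cout << "a = " << m.eval(a) << ", b = " << m.eval(b) << "\n";
        s.add(a != m.eval(a) || b != m.eval(b));   // forbid this exact pair
    }
}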
Support for weighted distributions. Gecode appears to support this via weighted sets. I imagine I can establish a relationship between certain conditions and Boolean variables, and then add weights to those variables. Z3 appears to be more flexible in that you can assign weights to expressions via the Optimize class.
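And a sketch of the weighted-expression idea with z3::optimize. Note that MaxSMT maximizes the total weight of satisfied soft constraints, which biases the single answer; it is not the same thing as SystemVerilog's dist sampling semantics:

#include <z3++.h>
#include <iostream>

int main() {
    z3::context c;
    z3::expr b = c.bv_const("b", 30);

    z3::optimize opt(c);
    opt.add(z3::ult(b, c.bv_val(1000, 30)));          // hard constraint
    opt.add_soft(b == c.bv_val(0, 30), 1);            // weight 1
    opt.add_soft(z3::ugt(b, c.bv_val(100, 30)), 10);  // weight 10: preferred

    if (opt.check() == z3::sat)
        std::cout << "b = " << opt.get_model().eval(b) << "\n";
}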
At the moment, I'm undecided, because Z3 lacks in documentation what Gecode lacks in out-of-the-box variable types. I'm wondering if anyone has prior experience using either tool to achieve what SystemVerilog can do. I'd also like to hear suggestions for any other API under a flexible license.

While z3 (or any SMT solver) can handle all of these, getting a nice sampling of satisfying assignments would be rather difficult to control. SMT solvers are optimized for just giving you a model, and they don't offer much control over how you sample the solution space.
Incidentally, this is an active research area in SMT solving. Here's a paper that appeared only 6 weeks ago on this very topic: https://ieeexplore.ieee.org/document/8894251
So, I'd say if support for "good sampling" is your primary motivation, using an SMT solver is probably not the best choice. If your goal is to find satisfying assignments for bit-vectors expressed conveniently (there are high-level APIs in any language you can imagine these days), then z3 would be an extremely fine choice.
From your description, good sampling sounds like the primary motivation though, and for that SMT solvers are probably not that great. At least not for the time being.

Fast gradient-descent implementation in a C++ library?

I'm looking to run a gradient descent optimization to minimize the cost of an instantiation of variables. My program is very computationally expensive, so I'm looking for a popular library with a fast implementation of GD. What is the recommended library/reference?
GSL is a great (and free) library that already implements common functions of mathematical and scientific interest.
You can peruse through the entire reference manual online. Poking around, this starts to look interesting, but I think we'd need to know more about the problem.
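To make the GSL route concrete, here is a condensed sketch of its multidimensional conjugate-gradient minimizer on a simple quadratic bowl; the callback signatures are the ones the gsl_multimin fdf API expects:

#include <gsl/gsl_errno.h>
#include <gsl/gsl_multimin.h>
#include <cstdio>

// f(x, y) = (x - 1)^2 + 10*(y - 2)^2, minimum at (1, 2)
double my_f(const gsl_vector *v, void *) {
    double x = gsl_vector_get(v, 0), y = gsl_vector_get(v, 1);
    return (x - 1) * (x - 1) + 10 * (y - 2) * (y - 2);
}

void my_df(const gsl_vector *v, void *, gsl_vector *g) {
    gsl_vector_set(g, 0, 2 * (gsl_vector_get(v, 0) - 1));
    gsl_vector_set(g, 1, 20 * (gsl_vector_get(v, 1) - 2));
}

void my_fdf(const gsl_vector *v, void *p, double *f, gsl_vector *g) {
    *f = my_f(v, p);
    my_df(v, p, g);
}

int main() {
    gsl_multimin_function_fdf func;
    func.n = 2; func.f = my_f; func.df = my_df; func.fdf = my_fdf;
    func.params = nullptr;

    gsl_vector *x = gsl_vector_alloc(2);       // starting point (5, 7)
    gsl_vector_set(x, 0, 5.0);
    gsl_vector_set(x, 1, 7.0);

    gsl_multimin_fdfminimizer *s = gsl_multimin_fdfminimizer_alloc(
        gsl_multimin_fdfminimizer_conjugate_fr, 2);
    gsl_multimin_fdfminimizer_set(s, &func, x, 0.01, 1e-4);

    int status, iter = 0;
    do {
        status = gsl_multimin_fdfminimizer_iterate(s);
        if (status) break;                     // no further progress possible
        status = gsl_multimin_test_gradient(s->gradient, 1e-6);
    } while (status == GSL_CONTINUE && ++iter < 100);

    std::printf("minimum near (%g, %g)\n",
                gsl_vector_get(s->x, 0), gsl_vector_get(s->x, 1));

    gsl_multimin_fdfminimizer_free(s);
    gsl_vector_free(x);
}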
It sounds like you're fairly new to minimization methods. Whenever I need to learn a new set of numeric methods, I usually look in Numerical Recipes. It's a book that provides a nice overview of the most common methods in the field, their tradeoffs, and (importantly) where to look in the literature for more information. It's usually not where I stop, but it's often a helpful starting point.
For example, if your function is costly, then your goal is to minimize the number of evaluations needed to converge. If you have analytical expressions for the gradient, then a gradient-based method will probably work to your advantage, assuming that the function and its gradient are well-behaved (lack singularities) in the domain of interest.
If you don't have analytical gradients, then you're almost always better off using an approach like downhill simplex that only evaluates the function (not its gradients). Numerical gradients are expensive.
Also note that all of these approaches will converge to local minima, so they're fairly sensitive to the point at which you initially start the optimizer. Global optimization is a totally different beast.
As a final thought, almost all of the code you can find for minimization will be reasonably efficient. The real cost of minimization is in the cost function. You should spend time profiling and optimizing your cost function, and select an algorithm that will minimize the number of times you need to call it (methods like downhill simplex, conjugate gradient, and BFGS all shine on different kinds of problems).
In terms of actual code, you can find a lot of nice routines at NETLIB, in addition to the other libraries that have been mentioned. Most of the routines are in FORTRAN 77, but not all; to convert them to C, f2c is quite useful.
One of the best-respected libraries for this kind of optimization work is the NAG libraries. These are used all over the world in universities and industry. They're available for C / FORTRAN. They're very non-free, and contain a lot more than just minimisation functions - a lot of general numerical mathematics is covered.
Anyway I suspect this library is overkill for what you need. But here are the parts pertaining to minimisation: Local Minimisation and Global Minimization.
Try CPLEX, which is available for free for students.

Equations Equality test (in C++ or with Unix tools) (algebra functions isomorphism) [closed]

I am looking for a C++ open-source library (or just an open-source Unix tool) to do an equality test on equations.
Equations can be built at runtime as AST trees, strings, or another format.
Equations will mostly be simple algebra ones, with some assumptions about unknown functions. The domain will be integer arithmetic (no floating-point issues, as the related issues are well known - thanks #hardmath for stressing it; I had assumed it was known).
Example: Input might contain a function phi, with assumptions about it (in most cases) phi(x,y)=phi(y,x), and try to solve:
equality_test( phi( (a+1)*(a+1), a+b ) = phi( b+a, a*a + 2*a + 1 ) )
It can be a fuzzy or any other kind of equality test - what I mean is that it does not have to always succeed (it may return "false" even if the equations are equal).
If there would be a problem with supporting assumptions like the above about the phi function, I can handle that, so simple linear-algebra equation equality testers are welcome as well.
Could you recommend some C/C++ programming libraries or Unix tools (open-source)?
If possible, could you attach some example of how such an equality test might look in the given library/tool?
P.S. If such an equality_test could (in case of success) return the isomorphism - what I mean is a kind of "mapping" - between the two given equations, that would be highly welcome. But tools without such capabilities are highly welcome as well.
P.S. By "fuzzy tester" I mean that in internals equation solver will be "fuzzy" in terms of looking for "isomorphism" of two functions, not in terms of testing against random inputs - I could implement this, for sure, but I try to find something with better precision.
P.P.S. There is another issue, and a reason why I need a better-performing solution than brute-force "all inputs testing". The above equation is a simplified form of my internal problem, where I do not have a mapping between the variables in the equations. That is, I have eq1=phi( (a+1)*(a+1) , a+b ) and eq2=phi( l+k, k*k + 2*k + 1 ), and I have to find out that a==k and b==l. But this sub-problem I can handle with a brute-force approach (even with the asymptotic complexity of that approach), since there are just a few variables, say 8. So I would need to do this equality_test for each possible mapping. If there is a tool that does that whole job, I would be highly thankful, and could contribute to such a project. But I don't require such functionality; simply equality_test() will be enough, and I can handle the rest easily.
To sum it up:
equality_test() is only one of many subproblems I have to solve, so computational complexity matters.
it does not have to be 100% reliable, but a higher likelihood than just testing the equations with a few random inputs and a variable mapping is highly welcome :).
output of "yes" or "no" (any additional information might be useful in the future, but at this stage I need "yes"/"no")
Your topic is one of automated theorem proving, for which a number of free/open source software packages have been developed. Many of these are meant for proof verification, but what you ask for is proof searching.
Dealing with the abstract topic of equations would be the theories mathematicians call varieties. These theories have nice properties with respect to the existence and regularity of their models.
It is possible you have in mind equations that deal specifically with real numbers or other system, which would add some axioms to the theory in which a proof is sought.
If in principle an algorithm exists to determine whether or not a logical statement can be proven in a theory, that theory is called decidable. For example, the theory of real closed fields is decidable, as Tarski showed in 1951. However a practical implementation of such an algorithm is lacking and perhaps impossible.
Here are a few open source packages that might be worth learning something about to guide your design and development:
Tac: A generic and adaptable interactive theorem prover
Prover9: An automated theorem prover for first-order and equational logic
E(quational) Theorem Prover
I am not sure about any library, but how about you do it yourself by generating a random set of inputs and substituting them into both equations to be compared? This would give you an almost-correct result, provided you generate a considerable amount of random data.
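A sketch of that suggestion in C++, assuming both sides can be evaluated as callables over unsigned machine integers (wrap-around arithmetic keeps polynomial identities exact modulo 2^64):

#include <cstdint>
#include <random>

// Randomized equality test: evaluate both expressions at random points.
// Agreement on many samples suggests (but never proves) equality;
// a single disagreement proves inequality.
template <typename F, typename G>
bool probably_equal(F lhs, G rhs, int trials = 1000) {
    std::mt19937_64 rng(12345);                    // fixed seed: reproducible
    for (int i = 0; i < trials; ++i) {
        std::uint64_t a = rng(), b = rng();
        if (lhs(a, b) != rhs(a, b)) return false;  // found a counterexample
    }
    return true;
}

// Usage, mirroring the question's example with a symmetric phi(u, v) = u + v:
// bool eq = probably_equal(
//     [](std::uint64_t a, std::uint64_t b) { return (a + 1) * (a + 1) + (a + b); },
//     [](std::uint64_t a, std::uint64_t b) { return (b + a) + (a * a + 2 * a + 1); });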
Edit:
Also you can try http://www.wolframalpha.com/
with
(x+1)*(y+1) equals x+y+xy+2
and
(x+1)*(y+1) equals x+y+xy+1
I think you can get pretty far with using Reverse Polish Notation.
Write out your equation using RPN
Apply transformations to bring all expressions to the same form, e.g. *A+BC --> +*AB*AC (which is the RPN equivalent of A*(B+C) --> A*B+A*C), ^*BCA --> *^BA^CA (i.e. (B*C)^A --> B^A * C^A)
"Sort" the arguments of symmetric binary operator so that "lighter" operations appear on one side (e.g. A*B + C --> C + A*B)
You will have a problem with dummy variables, for example summation indices. There is no other way, I think, but to try every combination of matching them on both sides of the equation.
In general, the problem is very complicated.
You can try a hack, though: use an optimizing compiler (C,Fortran) and compile both sides of the equation to optimized machine code and compare the outputs. It may work, or may not.
The open-source (GPL) project Maxima has a tool similar to Wolfram Alpha's equals tool:
(a+b+c)+(x+y)**2 equals (x**2+b+c+a+2*x*y+y**2)
It is is(equal()), which tests such formulas:
(%i1) is(equal( (a+b+c)+(x+y)**2 , (x**2+b+c+a+2*x*y+y**2) ));
(%o1) true
For this purpose it uses the rational simplifier, ratsimp, to simplify the difference of the two equations. When the difference of two equations simplifies to zero, we know they are equal for all possible values:
(%i2) ratsimp( ((a+b+c)+(x+y)**2) - ((x**2+b+c+a+2*x*y+y**2)) );
(%o2) 0
This answer just shows a direction (like the other answers do). If you know of something similar that can be used as part of a C++ Unix program, as a programming library, or a similar tool with good C/C++ bindings, please post a new answer.

Understanding and using the Boost Phoenix Library with a focus on lazy evaluation

I just found out about the Boost Phoenix library (hidden in the Spirit project), and as a fan of the functional-programming style (but still an amateur, with some small experience in Haskell and Scheme) I wanted to play around with this library to learn about reasonable applications of it.
Besides the increase in expressiveness and clarity of code in the FP style, I'm especially interested in lazy evaluation for speeding up computations at low cost.
A small and simple example would be the following:
There is some kind of routing problem (like the TSP) which uses a Euclidean distance matrix. We assume that some of the values of the distance matrix are never used, and some are used very often (so it isn't a good idea to compute them on the fly for every call). Now it seems reasonable to have a lazy data structure holding the distance values. How would that be possible with Phoenix? (Ignoring the fact that it could easily be done without FP-style programming at all.) Reading the official documentation of Phoenix didn't let me understand enough to answer that.
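For contrast, a plain-C++ memoization sketch of such a lazy distance matrix, with no Phoenix involved (Point and the cache layout are made up for illustration):

#include <cmath>
#include <cstddef>
#include <optional>
#include <vector>

struct Point { double x, y; };

// Lazily evaluated distance matrix: each entry is computed on first
// access and cached, so pairs that are never requested cost nothing.
class LazyDistanceMatrix {
    std::vector<Point> pts;
    mutable std::vector<std::vector<std::optional<double>>> cache;
public:
    explicit LazyDistanceMatrix(std::vector<Point> p)
        : pts(std::move(p)),
          cache(pts.size(), std::vector<std::optional<double>>(pts.size())) {}

    double operator()(std::size_t i, std::size_t j) const {
        if (!cache[i][j]) {                        // first access: compute
            double dx = pts[i].x - pts[j].x;
            double dy = pts[i].y - pts[j].y;
            cache[i][j] = std::sqrt(dx * dx + dy * dy);
        }
        return *cache[i][j];
    }
};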
Is it possible at all? (In Haskell, for example, the ability to create thunks that guarantee a value can be computed later is at the core of the language.)
What does using a vector with all the lazy functions defined in Phoenix mean? Naive as I am, I tried to fill two matrices (vector<vector<double>>) with random values, one with the normal push_back, the other with boost::phoenix::push_back, and tried to read out only a small number of values from these matrices and store them in a container for printing. The lazy one was always empty. Am I using Phoenix in a wrong way, or should it be possible? Or did I misunderstand the function of the containers/algorithms in Phoenix? A small clue for the latter is the existence of a special list data structure in the FC++ library, which influenced Phoenix.
Additionally:
What are you using phoenix for?
Do you know of some good resources regarding Phoenix? (tutorials, blog entries, ...)
Thanks for all your input!
As requested, my comment (with additions and small modifications) as an answer...
I know your position exactly. I too played around with Phoenix (although I didn't dig in very deeply; it was mostly a byproduct of reading the Boost::Spirit tutorial) a while ago, relatively soon after catching the functional bug and learning basic Haskell - and I didn't get anything working :( This is, btw, in sync with my general experience with dark template magic: very easy to misunderstand, screw up, and get punched in the face by totally unexpected behaviour or incomprehensible error messages.
I'd advise you to stay away from Phoenix for a long time. I like FP too, but FP in C++ is even uglier than mutability in Haskell (they'd be head to head, but C++ is already ugly and Haskell is, at least according to Larry Wall, the most beautiful language ever ;) ). Learn and use FP, and when you're good at it and forced to use C++, use Phoenix. But for learning, a library that bolts a wholly different paradigm onto an already complex language (i.e. FP in C++) is not advisable.

What are the most widely used C++ vector/matrix math/linear algebra libraries, and their cost and benefit tradeoffs? [closed]

It seems that many projects slowly come upon a need to do matrix math, and fall into the trap of first building some vector classes and slowly adding in functionality until they get caught building a half-assed custom linear algebra library, and depending on it.
I'd like to avoid that while not building in a dependence on some tangentially related library (e.g. OpenCV, OpenSceneGraph).
What are the commonly used matrix math/linear algebra libraries out there, and why would you decide to use one over another? Are there any that would be advised against using for some reason? I am specifically using this in a geometric/time context (2, 3, 4 dimensions), but may be using higher-dimensional data in the future.
I'm looking for differences with respect to any of: API, speed, memory use, breadth/completeness, narrowness/specificness, extensibility, and/or maturity/stability.
Update
I ended up using Eigen3 which I am extremely happy with.
There are quite a few projects that have settled on the Generic Graphics Toolkit for this. The GMTL in there is nice - it's quite small, very functional, and has been used widely enough to be very reliable. OpenSG, VRJuggler, and other projects have all switched to using this instead of their own hand-rolled vector/matrix math.
I've found it quite nice - it does everything via templates, so it's very flexible, and very fast.
Edit:
After the comments discussion, and edits, I thought I'd throw out some more information about the benefits and downsides to specific implementations, and why you might choose one over the other, given your situation.
GMTL -
Benefits: Simple API, specifically designed for graphics engines. Includes many primitive types geared towards rendering (such as planes, AABBs, quaternions with multiple interpolation methods, etc.) that aren't in any other packages. Very low memory overhead, quite fast, easy to use.
Downsides: API is very focused specifically on rendering and graphics. Doesn't include general purpose (NxM) matrices, matrix decomposition and solving, etc, since these are outside the realm of traditional graphics/geometry applications.
Eigen -
Benefits: Clean API, fairly easy to use. Includes a Geometry module with quaternions and geometric transforms. Low memory overhead. Full, highly performant solving of large NxN matrices and other general purpose mathematical routines.
Downsides: May be a bit larger scope than you are wanting (?). Fewer geometric/rendering specific routines when compared to GMTL (ie: Euler angle definitions, etc).
IMSL -
Benefits: Very complete numeric library. Very, very fast (supposedly the fastest solver). By far the largest, most complete mathematical API. Commercially supported, mature, and stable.
Downsides: Cost - not inexpensive. Very few geometric/rendering specific methods, so you'll need to roll your own on top of their linear algebra classes.
NT2 -
Benefits: Provides syntax that is more familiar if you're used to MATLAB. Provides full decomposition and solving for large matrices, etc.
Downsides: Mathematical, not rendering focused. Probably not as performant as Eigen.
LAPACK -
Benefits: Very stable, proven algorithms. Been around for a long time. Complete matrix solving, etc. Many options for obscure mathematics.
Downsides: Not as highly performant in some cases. Ported from Fortran, with odd API for usage.
Personally, for me, it comes down to a single question - how are you planning to use this? If your focus is just on rendering and graphics, I like Generic Graphics Toolkit, since it performs well and supports many useful rendering operations out of the box without having to implement your own. If you need general-purpose matrix solving (ie: SVD or LU decomposition of large matrices), I'd go with Eigen, since it handles that, provides some geometric operations, and is very performant with large matrix solutions. You may need to write more of your own graphics/geometric operations (on top of their matrices/vectors), but that's not horrible.
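To give a flavor of that general-purpose side, a small Eigen sketch of the LU and SVD solving mentioned above:

#include <Eigen/Dense>
#include <iostream>

int main() {
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(100, 100);
    Eigen::VectorXd b = Eigen::VectorXd::Random(100);

    // LU decomposition (partial pivoting): solve A x = b
    Eigen::VectorXd x = A.partialPivLu().solve(b);

    // SVD: also handles least-squares / rank-deficient systems
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(
        A, Eigen::ComputeThinU | Eigen::ComputeThinV);
    Eigen::VectorXd x_ls = svd.solve(b);

    std::cout << "LU residual: " << (A * x - b).norm() << "\n";
}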
So I'm a pretty critical person, and figure if I'm going to invest in a library, I'd better know what I'm getting myself into. I figure it's better to go heavy on the criticism and light on the flattery when scrutinizing; what's wrong with it has many more implications for the future than what's right. So I'm going to go overboard here a little bit to provide the kind of answer that would have helped me and I hope will help others who may journey down this path. Keep in mind that this is based on what little reviewing/testing I've done with these libs. Oh and I stole some of the positive description from Reed.
I'll mention up top that I went with GMTL despite its idiosyncrasies, because the Eigen2 unsafeness was too big of a downside. But I've recently learned that the next release of Eigen2 will contain defines that shut off the alignment code and make it safe. So I may switch over.
Update: I've switched to Eigen3. Despite its idiosyncrasies, its scope and elegance are too hard to ignore, and the optimizations that make it unsafe can be turned off with a define.
Eigen2/Eigen3
Benefits: LGPL (Eigen2) / MPL2 (Eigen3). Clean, well-designed API, fairly easy to use. Seems to be well maintained, with a vibrant community. Low memory overhead. High performance. Made for general linear algebra, but good geometric functionality is available as well. All-header lib, no linking required.
Idiosyncrasies/downsides: (some or all of these can be avoided by defines that are available in the current development branch, Eigen3)
Unsafe performance optimizations result in needing to carefully follow the rules. Failure to follow the rules causes crashes.
you simply cannot safely pass Eigen types by value
use of Eigen types as members requires special allocator customization (or you crash)
use with STL container types (and possibly other templates) requires special allocation customization (or you will crash)
certain compilers need special care to prevent crashes on function calls (GCC on Windows)
GMTL
Benefits: LGPL, fairly simple API, specifically designed for graphics engines. Includes many primitive types geared towards rendering (such as planes, AABBs, quaternions with multiple interpolation methods, etc.) that aren't in any other packages. Very low memory overhead, quite fast, easy to use. All header-based, no linking necessary.
Idiosyncrasies/downsides:
API is quirky
what might be myVec.x() in another lib is only available via myVec[0] (a readability problem)
an array or stl::vector of points may force you to do something like pointsList[0][0] to access the x component of the first point
in a naive attempt at optimization, cross(vec,vec) was removed and replaced with makeCross(vec,vec,vec), even though the compiler eliminates unnecessary temporaries anyway
normal math operations don't return normal types unless you shut off some optimization features, e.g.: vec1 - vec2 does not return a normal vector, so length( vecA - vecB ) fails even though vecC = vecA - vecB works. You must wrap it like: length( Vec( vecA - vecB ) )
operations on vectors are provided by external functions rather than members. This may require you to use scope resolution everywhere, since common symbol names may collide: you have to write
length( makeCross( vecA, vecB ) )
or
gmtl::length( gmtl::makeCross( vecA, vecB ) )
where otherwise you might try
vecA.cross( vecB ).length()
not well maintained
still claimed as "beta"
documentation missing basic info, like which headers are needed to use normal functionality
Vec.h does not contain operations for Vectors; VecOps.h contains some, others are in Generate.h, for example: cross(vec&,vec&,vec&) is in VecOps.h, [make]cross(vec&,vec&) in Generate.h
immature/unstable API; still changing. For example, "cross" moved from "VecOps.h" to "Generate.h", and then the name was changed to "makeCross". Documentation examples fail because they still refer to old versions of functions that no longer exist.
NT2
Can't tell, because they seem more interested in the fractal image header of their web page than in the content. Looks more like an academic project than a serious software project.
Latest release over 2 years ago.
Apparently no documentation in English, though supposedly there is something in French somewhere.
Can't find a trace of a community around the project.
LAPACK & BLAS
Benefits: Old and mature.
Downsides:
old as dinosaurs with really crappy APIs
For what it's worth, I've tried both Eigen and Armadillo. Below is a brief evaluation.
Eigen
Advantages:
1. Completely self-contained -- no dependence on external BLAS or LAPACK.
2. Documentation decent.
3. Purportedly fast, although I haven't put it to the test.
Disadvantage:
The QR algorithm returns just a single matrix, with the R matrix embedded in the upper triangle. No idea where the rest of the matrix comes from, and no Q matrix can be accessed.
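For what it's worth, newer Eigen versions do expose both factors from the packed Householder storage; a sketch worth checking against whatever version you are on:

#include <Eigen/Dense>

// Unpack Q and R from Eigen's packed Householder QR representation.
void qr_factors(const Eigen::MatrixXd& A,
                Eigen::MatrixXd& Q, Eigen::MatrixXd& R) {
    Eigen::HouseholderQR<Eigen::MatrixXd> qr(A);
    Q = qr.householderQ();    // accumulates the Householder reflectors
    R = qr.matrixQR().triangularView<Eigen::Upper>();
}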
Armadillo
Advantages:
1. Wide range of decompositions and other functions (including QR).
2. Reasonably fast (uses expression templates), but again, I haven't really pushed it to high dimensions.
Disadvantages:
1. Depends on external BLAS and/or LAPACK for matrix decompositions.
2. Documentation is lacking IMHO (including the specifics wrt LAPACK, other than changing a #define statement).
Would be nice if an open-source library were available that is self-contained and straightforward to use. I have run into this same issue for 10 years, and it gets frustrating. At one point, I used GSL for C and wrote C++ wrappers around it, but with modern C++ - especially using the advantages of expression templates - we shouldn't have to mess with C in the 21st century. Just my tuppence ha'penny.
If you are looking for high performance matrix/linear algebra/optimization on Intel processors, I'd look at Intel's MKL library.
MKL is carefully optimized for fast run-time performance - much of it based on the very mature BLAS/LAPACK Fortran standards. And its performance scales with the number of cores available. Hands-free scalability with available cores is the future of computing, and I wouldn't use any math library for a new project that doesn't support multi-core processors.
Very briefly, it includes:
Basic vector-vector, vector-matrix, and matrix-matrix operations
Matrix factorization (LU decomposition, Hermitian, sparse)
Least squares fitting and eigenvalue problems
Sparse linear system solvers
Non-linear least squares solver (trust regions)
Plus signal processing routines such as FFT and convolution
Very fast random number generators (Mersenne Twister)
Much more...
A downside is that the MKL API can be quite complex, depending on the routines that you need. You could also take a look at their IPP (Integrated Performance Primitives) library, which is geared toward high-performance image processing operations but is nevertheless quite broad.
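As a small taste of that API, a BLAS-level matrix multiply through MKL's CBLAS interface (cblas_dgemm is the standard CBLAS signature):

#include <mkl.h>
#include <cstdio>

int main() {
    // C = alpha*A*B + beta*C for 2x2 row-major matrices
    double A[4] = {1, 2, 3, 4};
    double B[4] = {5, 6, 7, 8};
    double C[4] = {0, 0, 0, 0};
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                2, 2, 2,       // m, n, k
                1.0, A, 2,     // alpha, A, lda
                B, 2,          // B, ldb
                0.0, C, 2);    // beta, C, ldc
    std::printf("%g %g\n%g %g\n", C[0], C[1], C[2], C[3]);
}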
Paul
CenterSpace Software, .NET math libraries: centerspace.net
What about GLM?
It's based on the OpenGL Shading Language (GLSL) specification and released under the MIT license.
Clearly aimed at graphics programmers
I've heard good things about Eigen and NT2, but haven't personally used either. There's also Boost.UBLAS, which I believe is getting a bit long in the tooth. The developers of NT2 are building the next version with the intention of getting it into Boost, so that might count for something.
My linear algebra needs don't extend beyond the 4x4 matrix case, so I can't comment on advanced functionality; I'm just pointing out some options.
I'm new to this topic, so I can't say a whole lot, but BLAS is pretty much the standard in scientific computing. BLAS is actually an API standard, which has many implementations. I'm honestly not sure which implementations are most popular or why.
If you want to also be able to do common linear algebra operations (solving systems, least squares regression, decomposition, etc.) look into LAPACK.
I'll add a vote for Eigen: I ported a lot of code (3D geometry, linear algebra, and differential equations) from different libraries to this one, improving both performance and code readability in almost all cases.
One advantage that wasn't mentioned: it's very easy to use SSE with Eigen, which significantly improves performance of 2D-3D operations (where everything can be padded to 128 bits).
Okay, I think I know what you're looking for. It appears that GGT is a pretty good solution, as Reed Copsey suggested.
Personally, we rolled our own little library, because we deal with rational points a lot - lots of rational NURBS and Beziers.
It turns out that most 3D graphics libraries do computations with projective points that have no basis in projective math, because that's what gets you the answer you want. We ended up using Grassmann points, which have a solid theoretical underpinning and decreased the number of point types. Grassmann points are basically the same computations people are using now, with the benefit of a robust theory. Most importantly, it makes things clearer in our minds, so we have fewer bugs. Ron Goldman wrote a paper on Grassmann points in computer graphics called "On the Algebraic and Geometric Foundations of Computer Graphics".
Not directly related to your question, but an interesting read.
FLENS
http://flens.sf.net
It also implements a lot of LAPACK functions.
I found this library quite simple and functional (http://kirillsprograms.com/top_Vectors.php). These are bare-bones vectors implemented via C++ templates. No fancy stuff - just what you need to do with vectors (add, subtract, multiply, dot, etc.).