I have frequently used the ExprTk library to further process large output files generated with Mathematica (containing mathematical expressions) in C++.
Until now, I have exclusively used this library to process expressions that yield values of type double, for which the library works flawlessly after defining the types
typedef exprtk::symbol_table<double> symbol_table_t;
typedef exprtk::expression<double> expression_t;
typedef exprtk::parser<double> parser_t;
and storing "everything" in a struct
struct readInExpression
{
    double a, b;
    symbol_table_t symbol_table;
    expression_t expression;
};
Reading in a text file that contains the variables a and b as well as e.g. the user-defined function
double my_function(double a, double b) {
    return a + b;
}
can be achieved by means of
void readInFromFile(readInExpression* f, parser_t* p) {
    std::string file = "xyz.txt";
    std::ifstream ifs(file);
    std::string content( (std::istreambuf_iterator<char>(ifs) ),
                         (std::istreambuf_iterator<char>() ) );
    f->symbol_table.add_variable("a",f->a);
    f->symbol_table.add_variable("b",f->b);
    f->symbol_table.add_function("my_function",my_function);
    f->expression.register_symbol_table(f->symbol_table);
    p->compile(content,f->expression);
}
One may then evaluate the read-in expression for arbitrary values of a and b by using
double evaluateFile(readInExpression* f, double a, double b) {
    f->a = a;
    f->b = b;
    return f->expression.value();
}
Recently, I ran into problems when trying to process text files that contain complex numbers and functions that return complex values of the type std::complex<double>. More specifically, I have a .txt file that contains expressions of the form
2*m*A0(m*m) + Complex(1.0,2.0)*B0(M*M,0.0,m*m)
where A0(a) and B0(a,b,c) are the scalar loop integrals that arise from the Passarino-Veltman reduction of (tensor) loop integrals in high-energy physics.
These can be evaluated numerically in C++ using LoopTools; note that they take complex values for certain values of a, b, and c. Simply replacing double with std::complex<double> in the typedefs above throws tons of errors when compiling. I am not sure whether the ExprTk library can handle complex numbers at all -- I know that it cannot deal with custom classes, but from what I understand, it should be able to handle native datatypes. (As I found here, ExprTk can at least deal with vectors, but given the complexity of the expressions I need to process, I do not think it is feasible to rewrite everything in terms of vectors, in particular because algebra with complex numbers differs from algebra with vectors.) Note that I also cannot split the expressions into real and imaginary parts, because I have to evaluate the expressions for many different values of the variables.
Although I have dealt with complex numbers and the functions A0(a) and B0(a,b,c) in text files before, I handled them by simply including the .txt files in the C++ source via #include "xyz.txt" inside a corresponding function. That approach, however, seems impossible given the size of the text files at hand (the compiler throws an error if I try to do so).
Does anybody know if and how ExprTk can deal with complex numbers? (An MWE would be highly appreciated.) If that is not the case, can anyone suggest a different math parser that is user-friendly, can deal with complex numbers of the type std::complex<double>, and at the same time allows defining custom functions that themselves return such complex values?
An MWE:
/************/
/* Includes */
/************/
#include <iostream> // Input and Output on the console
#include <fstream> // Input and Output to files
#include <string> // In order to work with strings -- needed to loop through the Mathematica output files
#include "exprtk.hpp" // Parser to evaluate string expressions as mathematical/arithmetic input
#include <math.h> // Simple Math Stuff
#include <gsl/gsl_math.h> // GSL math stuff
#include <complex> // Complex Numbers
/**********/
/* Parser */
/**********/
// Type definitions for the parser
typedef exprtk::symbol_table<double> symbol_table_t; // (%)
typedef exprtk::expression<double> expression_t; // (%)
typedef exprtk::parser<double> parser_t; // (%)
/* This struct is used to store certain information of the Mathematica files in
order to later evaluate them for different variables with the parser library. */
struct readInExpression
{
    double a, b; // (%)
    symbol_table_t symbol_table;
    // Instantiate expression
    expression_t expression;
};
/* Global variable where the read-in file/parser is stored. */
readInExpression file;
parser_t parser;
/*******************/
/* Custom function */
/*******************/
double my_function(double a, double b) {
    return a + b;
}
/***********************************/
/* Converting Mathematica Notation */
/***********************************/
/* Mathematica prints complex numbers as Complex(x,y), so we need a function to convert to C++ standard. */
std::complex<double> Complex(double a, double b) { // (%)
    std::complex<double> c(a,b);
    return c;
}
/************************************/
/* Processing the Mathematica Files */
/************************************/
double evaluateFileDoubleValuedInclude(double a, double b) {
    return
    #include "xyz.txt"
    ;
}

std::complex<double> evaluateFileComplexValuedInclude(double a, double b) {
    return
    #include "xyzC.txt"
    ;
}
void readInFromFile(readInExpression* f, parser_t* p) {
    std::string file = "xyz.txt"; // (%)
    std::ifstream ifs(file);
    std::string content( (std::istreambuf_iterator<char>(ifs) ),
                         (std::istreambuf_iterator<char>() ) );
    // Register variables with the symbol_table
    f->symbol_table.add_variable("a",f->a);
    f->symbol_table.add_variable("b",f->b);
    // Add custom functions to the evaluation list (see definition above)
    f->symbol_table.add_function("my_function",my_function); // (%)
    // f->symbol_table.add_function("Complex",Complex); // (%)
    // Register the symbol_table with the instantiated expression
    f->expression.register_symbol_table(f->symbol_table);
    // Compile the expression with the instantiated parser
    p->compile(content,f->expression);
}
std::complex<double> evaluateFile(readInExpression* f, double a, double b) { // (%)
    // Set the values of the struct to the input values
    f->a = a;
    f->b = b;
    // Evaluate the expression for the values set above
    return f->expression.value();
}
int main() {
    exprtk::symbol_table<std::complex<double> > st1; // Works
    exprtk::expression<std::complex<double> > e1;    // Works
    // exprtk::parser<std::complex<double> > p1;     // Throws an error
    double a = 2.0;
    double b = 3.0;
    std::cout << "Evaluating the text file containing only double-valued functions via the #include method: \n"
              << evaluateFileDoubleValuedInclude(a,b) << "\n \n";
    std::cout << "Evaluating the text file containing complex-valued functions via the #include method: \n"
              << evaluateFileComplexValuedInclude(a,b) << "\n \n";
    readInFromFile(&file,&parser);
    std::cout << "Evaluating either the double-valued or the complex-valued file [see the necessary changes tagged with (%)]:\n"
              << evaluateFile(&file,a,b) << "\n";
    return 0;
}
xyz.txt
a + b * my_function(a,b)
xyzC.txt
2.0*Complex(a,b) + 3.0*a
To get the MWE to work, put the exprtk.hpp file in the same folder where you compile.
Note that the return type of the evaluateFile(...) function can be/is std::complex<double>, even though only double-valued types are returned. Lines tagged with // (%) are subject to change when trying out the complex-valued file xyzC.txt.
Instantiating exprtk::parser<std::complex<double> > throws (among others)
./exprtk.hpp:1587:10: error: no matching function for call to 'abs_impl'
exprtk_define_unary_function(abs )
while all the other required types do not seem to complain about std::complex<double>.
I actually know next to nothing about ExprTk (just what I have read in its documentation and a bit of its code -- EDIT: now somewhat more of its code), but it seems to me unlikely that you'll be able to accomplish what you want without doing some major surgery to the package.
The basis of its API, as you demonstrate in your question, are template objects specialised on a single datatype. The documentation says that that type "…can be any floating point type. This includes… any custom type conforming to an interface comptaible (sic) with the standard floating point type." Unfortunately, it doesn't clarify what they consider the interface of the standard floating point type to be; if it includes every standard library function which could take a floating point argument, it's a very big interface indeed. However, the distribution includes the adaptor used to create a compatible interface for the MPFR package which gives some kind of idea what is necessary.
But the issue here is that I suspect you don't want an evaluator which can only handle complex numbers. It seems to me that you want to be able to work with both real and complex numbers. For example, there are expressions which are unambiguous for real numbers and somewhat arbitrary for complex numbers; these include a < b and max(a, b), neither of which are implemented for complex types in C++. However, they are quite commonly used in real-valued expressions, so just eliminating them from the evaluation language would seem a bit arbitrary. [Note 1] ExprTK does assume that the numeric type is ordered for some purposes, including its incorrect (imho) delta equality operator, and those comparisons are responsible for a lot of the error messages which you are receiving.
The ExprTK header does correctly figure out that std::complex<double> is a complex type, and tags it as such. However, no implementation is provided for any standard function called on complex numbers, even though C++ includes implementations for many of them. This absence is basically what triggers the rest of the errors, including the one you mention for abs, which is an example of a math function which C++ does implement for complex numbers. [Note 2]
That doesn't mean that there is no solution; just that the solution is going to involve a certain amount of work. So a first step might be to fill in the implementations for complex types, as per the MPFR adaptor linked to above (although not everything goes through the _impl methods, as noted above with respect to ordered comparison operators).
One option would be to write your own "real or complex" datatype; in a simple implementation, you could use a discriminated union like:
template<typename T>
class RealOrComplex {
public:
    RealOrComplex(T a = 0)
        : is_complex_(false) { value_.real = a; }
    RealOrComplex(std::complex<T> a)
        : is_complex_(true) { value_.cmplx = a; }
    // Operator implementations, omitted
private:
    bool is_complex_;
    union Value {
        Value() : real(0) {}   // std::complex has a non-trivial default constructor,
                               // so the union needs a constructor of its own
        T real;
        std::complex<T> cmplx;
    } value_;
};
A simpler, but potentially more problematic, approach would be to let a real number simply be a complex number whose imaginary part is 0, and then write shims for any missing standard library math functions. But you might well need to shim a large part of the LoopTools library as well.
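To give a rough idea of the kind of shims involved if everything were carried around as std::complex<double> (the names and the ordering convention below are purely illustrative, not anything ExprTk itself expects):
#include <cmath>
#include <complex>

using cplx = std::complex<double>;

// Arbitrary, Matlab-like choice: order complex values by their real parts.
inline bool less_than(const cplx& a, const cplx& b) {
    return a.real() < b.real();
}

inline cplx max_shim(const cplx& a, const cplx& b) {
    return less_than(a, b) ? b : a;
}

// floor applied component-wise, since no standard overload exists for complex.
inline cplx floor_shim(const cplx& a) {
    return cplx(std::floor(a.real()), std::floor(a.imag()));
}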
So while all that is doable, actually doing it is, I'm afraid, too much work for an SO answer.
Since there is some indication in the ExprTK source that the author is aware of the existence of complex numbers, you might want to contact them directly to ask about the possibility of a future implementation.
Notes
It seems that Mathematica throws an error if these operations are attempted on complex arguments. On the other hand, Matlab makes an arbitrary choice: ordered comparison looks only at the real part, but maximum is (inconsistently) handled by converting to polar coordinates and then comparing component-wise.
Forcing standard interfaces to be explicitly configured seems to me to be a curious implementation choice. Surely it would have been better to allow standard functions to be the default implementation, which would also avoid unnecessary shims like the explicit reimplementation of log1p and other standard math functions. (Perhaps these correspond to known inadequacies in certain math libraries, though.)
... can anyone here suggest a different math parser that is user friendly and can deal with complex numbers of the type std::complex<double>, at the same time allowing to define custom functions that themselves return such complex values?
I came across only one math parser that handled complex numbers, Foreval:
https://sourceforge.net/projects/foreval/
It is implemented as a DLL.
You can connect real and complex variables of type "double" and "extended" to Foreval in any form by passing the addresses of the variables.
It has built-in standard functions for complex variables, and you can also connect external functions that take complex arguments.
However, the complex type used for values passed to and returned from functions is Foreval's own internal type, which differs from std::complex<double>.
There are examples for GCC in the source.
There are two disadvantages:
There is only a 32-bit version of Foreval.dll (it can only be used from a 32-bit program).
It is available only for Windows.
But there are also advantages:
Foreval.dll is a compiler that generates machine code (fast calculations).
It supports the floating-point type "extended" as well as "double", for both real and complex numbers.
Related
[ edit: changed meters/yards to foo/bar; this isn't about converting meters to yards. ]
What's the best way to attach a type to a scalar such as a double? The typical use-case is units-of-measure (but I'm not looking for an actual implementation, boost has one).
This would appear to be as simple as:
template <typename T>
struct Double final
{
    typedef T type;
    double value;
};

namespace tags
{
    struct foo final {};
    struct bar final {};
}

constexpr double FOOS_TO_BARS_ = 3.141592654;

inline Double<tags::bar> to_bars(const Double<tags::foo>& foos)
{
    return Double<tags::bar> { foos.value * FOOS_TO_BARS_ };
}

static void test(double value)
{
    using namespace tags;
    const Double<foo> value_in_foos{ value };
    const Double<bar> value_in_bars = to_bars(value_in_foos);
}
Is that really the case? Or are there hidden complexities or other important considerations to this approach?
This would seem far, far superior to
inline double foos_to_bars(double foos)
{
    return foos * FOOS_TO_BARS_;
}
while adding hardly any complexity or overhead.
I'd go with a ratio-based approach, much like std::chrono. (Howard Hinnant shows it in his CppCon 2016 talk about <chrono>.)
#include <ratio>   // std::ratio, std::ratio_divide, std::kilo

template<typename Ratio = std::ratio<1>, typename T = double>
struct Distance
{
    using ratio = Ratio;
    T value;
};

template<typename To, typename From>
To distance_cast(From f)
{
    using r = std::ratio_divide<typename To::ratio, typename From::ratio>;
    return To{ f.value * r::den / r::num };
}

using yard      = Distance<std::ratio<10936133, 10000000>>;
using meter     = Distance<>;
using kilometer = Distance<std::kilo>;
using foot      = Distance<std::ratio<3048, 10000>>;
This is a naive implementation and probably could be improved a lot (at the very least by allowing implicit casts where they're safe), but it's a proof of concept and it's trivially extensible.
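For example, a short usage sketch on top of the definitions above:
#include <iostream>

int main()
{
    kilometer km{1.5};
    meter m = distance_cast<meter>(km);   // 1500 m, ratio math resolved at compile time
    foot  f = distance_cast<foot>(m);     // ~4921.26 ft
    std::cout << m.value << " m = " << f.value << " ft\n";
}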
Pros:
meter m = yard{10} is either a compile time error or a safe implicit conversion,
pretty type names, you'd have to work against the solution very hard to make an invalid conversion
simple to use
Cons:
Possible integer overflows/precision problems (may be alleviated by quality of implementation?)
may be non-trivial to implement correctly
Firstly, yes, I think the way you have suggested is quite reasonable, though whether it is to be preferred would depend on the context.
Your approach has the advantage that you can define conversions that are not just simple multiplications (for example, Celsius and Fahrenheit).
Your method does, however, create different types, which leads to a need to create conversions; this can be good or bad depending on the use.
(I appreciate that your yards and metres were just an example; I'll use them just as an example too.)
If I'm writing code that deals with lengths, (most of) the logic is going to be the same whatever the units. While I could make the function that contains that logic a template so it can take different units, there is still a reasonable use case where data is needed from two different sources and is supplied in two different units. In that situation I'd rather be dealing with one Length class than with a class per unit; these lengths could either hold their conversion information, or just use one fixed unit with conversions done at the input/output stages.
On the other hand, when we have different types for different measurements, e.g. length, area, temperature, not having default conversions between these types is a good thing. And it's good that I can't accidentally add a length to a temperature.
(Of course multiplication of types is different.)
In my opinion, your approach is over-designed to the point that hard-to-spot bugs have crept in. Even at this point, the syntactic complexity you have introduced has allowed your conversion to become inaccurate: you are off from the 8th significant figure.
The standard conversion is that 1 inch is 25.4 mm, which means that one yard is exactly 0.9144 m.
Neither this nor its reciprocal can be represented exactly in IEEE754 binary floating point.
If I were you I'd define
constexpr double METERS_IN_YARDS = 0.9144;
constexpr double YARDS_IN_METERS = 1.0 / 0.9144;
to keep the bugs away, and work in double precision floating point arithmetic the old-fashioned way.
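For instance, the old-fashioned route is then just plain arithmetic on doubles (the helper names here are only a suggestion):
inline double yards_to_meters(double yards) { return yards * METERS_IN_YARDS; }
inline double meters_to_yards(double meters) { return meters * YARDS_IN_METERS; }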
I am building a C++ program to verify a mathematical conjecture for up to 100 billion iterations. In order to test such high numbers, I cannot use a C++ int, so I am using the NTL library, using the type ZZ as my number type.
My algorithm looks like this:
ZZ generateNthSeq(ZZ n)
{
    return floor(n * sqrt(2));
}
I have the two libraries being imported:
#include <cmath>
#include <NTL/ZZ.h>
But obviously this cannot compile because I get the error:
$ g++ deepness*.cpp
deepness.cpp: In function ‘NTL::ZZ generateNthSeq(NTL::ZZ)’:
deepness.cpp:41: error: no matching function for call to ‘floor(NTL::ZZ)’
/usr/include/bits/mathcalls.h:185: note: candidates are: double floor(double)
/usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../include/c++/4.4.7/cmath:262: note: long double std::floor(long double)
/usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../include/c++/4.4.7/cmath:258: note: float std::floor(float)
Stating that the floor mathematical operation cannot accept a ZZ class type. But I need the numbers to be pretty big. How can I accomplish what I want to do, which is to floor the function, while using the NTL library?
Note that it doesn't really make sense to apply floor to an integral type (well, it does, it's just a no-op). What you should be really worried about is the fact that your code is apparently passing something of type ZZ into floor!
That is, what can n * sqrt(2) possibly mean here?
Also, before even writing that, I'd've checked the documentation to see if integer * floating point actually exists in the library -- usually for that to be useful at all, you need arbitrary precision floating types available.
Checking through the headers, there is only one multiplication operator:
ZZ operator*(const ZZ& a, const ZZ& b);
and there is a conversion constructor:
explicit ZZ(long a); // promotion constructor
I can't figure out how your code is even compiling. Maybe you're using a different version of the library than I'm looking at, and the conversion constructor is implicit, and your double is getting "promoted" to a ZZ. This is surely not what you want, since promoting sqrt(2) to a ZZ is simply going to give you the integer 1.
You either need to:
look into whether or not NTL has arbitrary precision floating point capabilities
switch to a library that does have arbitrary precision floating point capabilities
convert your calculation to pure integer arithmetic
That last one is fairly easy here: you want
return SqrRoot(sqr(n) * 2); // sqr(n) will be a bit more efficient than `n * n`
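This works because floor(n * sqrt(2)) equals floor(sqrt(2 * n * n)) for n >= 0, so the whole computation stays in integer arithmetic. A minimal sketch of the resulting function (assuming n is non-negative):
#include <NTL/ZZ.h>

using namespace NTL;

// floor(n * sqrt(2)) without ever leaving integer arithmetic:
// for n >= 0, floor(n * sqrt(2)) == floor(sqrt(2 * n^2)).
ZZ generateNthSeq(const ZZ& n)
{
    return SqrRoot(sqr(n) * 2);   // SqrRoot is NTL's integer square root (floor)
}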
I found this example at cppreference.com, and it seems to be the de facto example used throughout StackOverflow:
template<int N>
struct S {
    int a[N];
};
Surely, non-type templatization has more value than this example. What other optimizations does this syntax enable? Why was it created?
I am curious, because I have code that is dependent on the version of a separate library that is installed. I am working in an embedded environment, so optimization is important, but I would like to have readable code as well. That being said, I would like to use this style of templating to handle version differences (examples below). First, am I thinking of this correctly, and MOST IMPORTANTLY does it provide a benefit or drawback over using a #ifdef statement?
Attempt 1:
template<int VERSION = 500>
void print (char *s);

template<int VERSION>
void print (char *s) {
    std::cout << "ERROR! Unsupported version: " << VERSION << "!" << std::endl;
}

template<>
void print<500> (char *s) {
    // print using 500 syntax
}

template<>
void print<600> (char *s) {
    // print using 600 syntax
}
Or, since the template parameter is a compile-time constant, could a compiler treat the other branches of the if statement as dead code with syntax like the following?
Attempt 2:
template<int VERSION = 500>
void print (char *s) {
    if (VERSION == 500) {
        // print using 500 syntax
    } else if (VERSION == 600) {
        // print using 600 syntax
    } else {
        std::cout << "ERROR! Unsupported version: " << VERSION << "!" << std::endl;
    }
}
Would either attempt produce output comparable in size to this?
void print (char *s) {
#if VERSION == 500
    // print using 500 syntax
#elif VERSION == 600
    // print using 600 syntax
#else
    std::cout << "ERROR! Unsupported version: " << VERSION << "!" << std::endl;
#endif
}
If you can't tell I'm somewhat mystified by all this, and the deeper the explanation the better as far as I'm concerned.
Compilers find dead code elimination easy. That is the case where you have a chain of ifs depending (only) on a template parameter's value or type. All branches must contain valid code, but when compiled and optimized the dead branches evaporate.
A classic example is a per pixel operation written with template parameters that control details of code flow. The body can be full of branches, yet the compiled output branchless.
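A heavily simplified sketch of that pattern (the blend formula is just for illustration):
// The branch on UseAlpha is resolved at compile time; each instantiation
// compiles to straight-line code with the dead branch removed.
template<bool UseAlpha>
inline unsigned char blend(unsigned char src, unsigned char dst, unsigned char alpha)
{
    if (UseAlpha)
        return static_cast<unsigned char>((src * alpha + dst * (255 - alpha)) / 255);
    else
        return src;
}

// blend<true> and blend<false> end up as two separate, branch-free functions.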
Similar techniques can be used to unroll loops (say scanline loops). Care must be taken to understand the code size multiplication that can result: especially if your compiler lacks ICF (aka comdat folding) such as the gold gcc linker and msvc (among others) have.
Fancier things can also be done, like manual jump tables.
You can do pure compile-time type checks with no runtime behaviour at all: stuff like dimensional analysis, or distinguishing between points and vectors in n-space.
Enums can be used to name types or switches. Pointers to functions to enable efficient inlining. Pointers to data to allow 'global' state that is mockable, or siloable, or decoupled from implementation. Pointers to strings to allow efficient readable names in code. Lists of integral values for myriads of purposes, like the indexes trick to unpack tuples. Complex operations on static data, like compile time sorting of data in multiple indexes, or checking integrity of static data with complex invariants.
I am sure I missed some.
An obvious optimization is when using an integer, the compiler has a constant rather than a variable:
int foo(size_t); // definition not visible
// vs
template<size_t N>
size_t foo() {return N*N;}
With the template, there's nothing to compute at runtime, and the result may be used as a constant, which can aid other optimizations. You can take this example further by declaring it constexpr, as 5gon12eder mentioned below.
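For instance, a minimal sketch of the constexpr variant, where the result is checked entirely by the compiler:
#include <cstddef>

template<std::size_t N>
constexpr std::size_t foo() { return N * N; }

static_assert(foo<12>() == 144, "evaluated entirely at compile time");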
Next example:
int foo(double, size_t); // definition not visible
// vs
template<size_t N>
double foo(double p) {
    double r(p);
    for (size_t i(0); i < N; ++i) {
        r *= p;
    }
    return r;
}
Ok. Now the number of iterations of the loop is known. The loop may be unrolled/optimized accordingly, which can be good for size, speed, and eliminating branches.
Also, basing off your example, std::array<> exists. std::array<> can be much better than std::vector<> in some contexts, because std::vector<> uses heap allocations and non-local memory.
There's also the possibility that some specializations will have different implementations. You can separate those and (potentially) reduce other referenced definitions.
Of course, templates<> can also work against you through unnecessary duplication in your programs.
templates<> also require longer symbol names.
Getting back to your version example: Yes, it's certainly possible that if VERSION is known at compilation, the code which is never executed can be deleted and you may also be able to reduce referenced functions. The primary difference will be that void print (char *s) will have a shorter name than the template (whose symbol name includes all template parameters). For one function, that's counting bytes. For complex programs with many functions and templates, that cost can go up quickly.
There is an enormous range of potential applications of non-typename template parameters. In his book The C++ Programming Language, Stroustrup gives an interesting example that sketches out a type-safe zero-overhead framework for dealing with physical quantities. Basically, the idea is that he writes a template that accepts integers denoting the powers of fundamental physical quantities such as length or mass and then defines arithmetic on them. In the resulting framework, you can add speed with speed or divide distance by time but you cannot add mass to time. Have a look at Boost.Units for an industry-strength implementation of this idea.
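A bare-bones sketch of that idea, with the exponents of mass, length, and time as non-type parameters (an illustration only, not Stroustrup's or Boost.Units' actual code):
template<int M, int L, int T>          // exponents of mass, length, time
struct Quantity {
    double value;
};

// Only quantities with identical dimensions can be added.
template<int M, int L, int T>
Quantity<M, L, T> operator+(Quantity<M, L, T> a, Quantity<M, L, T> b) {
    return {a.value + b.value};
}

// Multiplication adds the exponents (division would subtract them).
template<int M1, int L1, int T1, int M2, int L2, int T2>
Quantity<M1 + M2, L1 + L2, T1 + T2> operator*(Quantity<M1, L1, T1> a,
                                              Quantity<M2, L2, T2> b) {
    return {a.value * b.value};
}

using Mass   = Quantity<1, 0, 0>;
using Length = Quantity<0, 1, 0>;
using Time   = Quantity<0, 0, 1>;

// Length{2.0} + Length{3.0} compiles; Length{2.0} + Time{1.0} does not.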
For your second question: any reasonable compiler should be able to produce exactly the same machine code for
#define FOO
#ifdef FOO
do_foo();
#else
do_bar();
#endif
and
#define FOO_P 1
if (FOO_P)
do_foo();
else
do_bar();
except that the second version is much more readable and the compiler can catch errors in both branches simultaneously. Using a template is a third way to generate the same code but I doubt that it will improve readability.
Apparently std::stoi does not accept strings representing integers in exponential notation, like "1e3" (= 1000). Is there an easy way to parse such a string into an integer? One would think that since this notation works in C++ source code, the standard library has a way to parse this.
You can use stod (see docs) to do this, by parsing it as a double first. Be wary of precision issues when casting back though...
#include <iostream>   // std::cout
#include <string>     // std::string, std::stod

int main () {
    std::string text ("1e3");
    std::string::size_type sz;   // alias of size_t
    double result = std::stod(text, &sz);
    std::cout << "The result is " << (int)result << std::endl;   // outputs 1000
    return 0;
}
One would think that since this notation works in C++ source code, the standard library has a way to parse this.
The library and the compiler are unrelated. The reason this syntax works in C++ is that the language allows you to assign expressions of type double to integer variables:
int n = 1E3;
assigns a double expression (i.e. a numeric literal of type double) to an integer variable.
Knowing what's going on here you should be able to easily identify the function in the Standard C++ Library that does what you need.
You can read it as a double using standard streams, for example
double d;
std::cin >> d; //will read scientific notation properly
and then cast it to an int, but obviously double can represent far more values than int, so be careful about that.
Accepting exponential notation in std::stoi would overflow too often, and integer overflow in C++ is undefined behaviour.
You need to build your own parser, where you can tailor the edge cases to your specific requirements.
I'd be inclined not to go down the std::stod route, since a cast from double to int is undefined behaviour if the integral part of the double cannot be represented by the int.
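If one does go through a double anyway, the undefined behaviour can be avoided by range-checking before the cast. A minimal sketch (the function name, rounding policy, and use of exceptions are assumptions to adapt to your own requirements):
#include <cmath>
#include <cstddef>
#include <limits>
#include <stdexcept>
#include <string>

int stoi_exponential(const std::string& s)
{
    std::size_t pos = 0;
    double d = std::stod(s, &pos);           // accepts "1e3", "2.5E2", ...
    if (pos != s.size())
        throw std::invalid_argument("trailing characters in: " + s);

    double r = std::round(d);                // policy choice: round to nearest
    if (r < static_cast<double>(std::numeric_limits<int>::min()) ||
        r > static_cast<double>(std::numeric_limits<int>::max()))
        throw std::out_of_range("value does not fit in an int: " + s);

    return static_cast<int>(r);
}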
I was trying some hashing algorithms with stdext::hash_value() on VS2010 and realized this:
#include <iostream>
#include <xhash>

using namespace std;

int main()
{
#ifdef _WIN64
    std::cout << "x64" << std::endl;
#else
    std::cout << "x32" << std::endl;
#endif
    std::cout << stdext::hash_value(345.34533) << std::endl;
    std::cout << stdext::hash_value(345.566) << std::endl;
    return 0;
}

// Output is:
// x64
// 3735928758
// 3735928758
I tried some other pairs of double values that have the same integer part but different fractional parts, like 1.234 vs 1.568. The hash values were always the same. So I took a look at the source of hash_value() and saw
#define _HASH_SEED (size_t)0xdeadbeef

template<class _Kty> inline
size_t hash_value(const _Kty& _Keyval)
{   // hash _Keyval to size_t value one-to-one
    return ((size_t)_Keyval ^ _HASH_SEED);
}
_Keyval is cast to size_t, which does not make sense to me. The function simply ignores the fractional part of the double. What is the logic behind this implementation? Is it a bug or a feature?
stdext::hash_value isn't a hash function. It's the input to a hash function; you specialize it for your type so that it can be used as a key for the stdext hash classes. There doesn't seem to be any documentation for it, however. The actual hash function is stdext::hash_compare.
But because there is no default specialization of hash_value it uses the convert-to-int method which ignores the fractional part.
There is an almost-identical bug for the standard std::tr1::hash/std::hash function up until vc10:
http://connect.microsoft.com/VisualStudio/feedback/details/361851/std-tr1-hash-float-or-hash-double-is-poorly-implemented
in vc10 std::hash gets a double specialization that hashes the bits. I guess stdext is considered obsolete now so there's no fix for it even in vc10.
The function is written to work with any type of data. It makes no assumption about the size and hence is inefficient for certain types. You can override this behavior for doubles to make it more efficient via a template specialization
template<>
size_t hash_value<double>(const double& key) {
    return // Some awesome double hashing algorithm
}
Putting this definition above your main function will cause calls like stdext::hash_value(345.566) to bind to this definition rather than to the generic one.
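One possible way to fill in that placeholder is to hash the bit pattern of the double rather than its truncated integer value; this is only a sketch (it ignores -0.0 and NaN handling), not what stdext itself does:
#include <cstdint>
#include <cstring>

template<>
size_t hash_value<double>(const double& key) {
    std::uint64_t bits;
    std::memcpy(&bits, &key, sizeof bits);   // reinterpret the 8 bytes of the double
    bits ^= bits >> 33;                      // cheap bit mixing so that nearby values
    bits *= 0xff51afd7ed558ccdULL;           // do not collide in the low bits
    bits ^= bits >> 33;
    return static_cast<size_t>(bits);
}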
That's old code - doesn't seem very good.
You should probably try std::hash instead.
This is, apparently, an attempt to provide a generic hash function for integers (although I don't see what the xor adds). It quite clearly won't work for most other types, including floating point.
Providing a good hash function for a floating point value is difficult; if I were trying to make a generic hash, I'd probably start by testing for 0, NaN and Inf, and returning predefined hashes for those (or rejecting NaN completely as it is not a valid hashable value), and then simply using a standard string hash on the underlying bytes. This will at least make hashing compatible with the == operator. But the problems of precision mean that == itself may not be what is needed. Nor <, in the case of std::map, since std::map uses < to define an equality relationship, and depending on the source of the floats or doubles, such an equality relationship might not be appropriate for a hash table.
At any rate, don't expect to find a standard hash function for the floating point types.
VC10 contains the C++0x Standard hashing mechanisms, so there's no need to use the stdext ones anymore, and std::hash does contain a mechanism for double which performs a bitwise conversion and then hashes. That code that you have for stdext is just a fallback mechanism, and it's not really intended for use with floating-point types. I guess it's a design oversight.
template<>
class hash<double>
    : public unary_function<double, size_t>
{   // hash functor
public:
    typedef double _Kty;
    typedef _ULonglong _Inttype; // use first 2*32 bits

    size_t operator()(const _Kty& _Keyval) const
    {   // hash _Keyval to size_t value by pseudorandomizing transform
        _Inttype _Bits = *(_Inttype *)&_Keyval;
        return (hash<_Inttype>()(
            (_Bits & (_ULLONG_MAX >> 1)) == 0 ? 0 : _Bits));
    }
};
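For comparison, with the C++11 mechanism the two values from the question hash differently, since it is the bit pattern that gets hashed; a small sketch:
#include <functional>
#include <iostream>

int main()
{
    std::hash<double> h;
    std::cout << h(345.34533) << '\n'
              << h(345.566)   << '\n';   // distinct values with a bit-pattern-based hash
    return 0;
}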