difference between hashing on x86 or x64 - c++

I want to implement a hashmap in my code, so I decided to go with MurmurHash3.
I currently only deliver my programs compiled for x86 and have tried to keep the code general so I've never had trouble running the programs on x64.
Now I've looked at the header files of MurmurHash and the library offers the following functions:
MurmurHash3_x86_32
MurmurHash3_x86_64
MurmurHash3_x86_128
MurmurHash3_x64_32
MurmurHash3_x64_64
MurmurHash3_x64_128
Does this mean I have to use the x64 functions and provide a x64 executable to be able to use this hash library on x64 systems? Or can I simply use the x86 version, and just encounter poorer performance?
Am I correct in thinking that the _32 _64 _128 bit versions only mean that more bit versions offer better distribution?

Edit: Changed everything after looking at the murmurhash3 documentation.
First, the _x86 variants are portable hash algorithms. The _32/_64/_128 indicates the width of the hash in bits. Generally _32 should be fine as long as your hash table has fewer than 2^32 buckets.
The _x64 variants are an entirely different family of hash algorithms. All the _x64 variants are based on the _x64_128 implementation - a 128-bit hash. They then throw away part of the hash to get the _32 and _64 bit sizes. This may or may not be faster than the _x86 variant - the documentation claims some impressive speedups, though. Note, however, that it's very likely to get different hash values than the x86 variant.
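For illustration, here is a minimal sketch of calling both families, assuming the function signatures from the reference (smhasher) MurmurHash3.h; the function names hash_key_32/hash_key_128 and the seed value are mine, not part of the library:

#include <cstdint>
#include <cstring>
#include "MurmurHash3.h"   // header from the reference implementation (assumed)

uint32_t hash_key_32(const char* key)
{
    uint32_t out = 0;
    // _x86_32 is portable: the same bytes in give the same hash out, whether
    // this is built as a 32-bit or a 64-bit executable.
    MurmurHash3_x86_32(key, static_cast<int>(std::strlen(key)), /*seed=*/42, &out);
    return out;
}

void hash_key_128(const char* key, uint64_t out[2])
{
    // _x64_128 produces different values for the same input, but is typically
    // faster when the program runs on a 64-bit CPU.
    MurmurHash3_x64_128(key, static_cast<int>(std::strlen(key)), /*seed=*/42, out);
}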

x86 indicates that the algorithm is optimized for 32-bit platforms. This means it operates on 32-bit unsigned integers.
x64 is then optimized for 64-bit platforms, operating on 64-bit unsigned integers.
Also, the results between the two are not compatible. The hash values for the same input will be different depending on whether you use MurmurHash3_x86_128 or MurmurHash3_x64_128, for example.
Does this mean I have to use the x64 functions and provide a x64 executable to be able to use this hash library on x64 systems? Or can I simply use the x86 version, and just encounter poorer performance?
64-bit hash functions can be compiled for 32-bit systems but will end up being quite slow because the compiler splits computations into two parts. If 32-bit support is important, you should use an x86-optimized function, not an x64-optimized one. On x64 systems 32-bit code runs fine, although I would consider that an under-utilization: x64-optimized algorithms are much more efficient on 64-bit CPUs.
Am I correct in thinking that the _32 _64 _128 bit versions only mean that more bit versions offer better distribution?
I suppose the answer is yes, if by distribution you mean "less likely to cause collisions". Each additional bit used in a hash dramatically increases the number of possible outcomes. A 4-bit hash has 16 possible values, while a 64-bit hash provides about 18 quintillion (and 128 bits about 340 undecillion!). 256 bits provide so many that it is often enough for cryptographic security purposes.
Something else to be aware of: modern hash functions often make use of newer CPU instruction sets such as CRC32, AES-NI, and SSE2/SIMD, where the function takes advantage of specific CPU features/instructions to achieve better performance on supported hardware. This can greatly speed up hashing on CPUs that support these features.
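As a hedged illustration of that last point, here is a small sketch that hashes a buffer with the SSE4.2 CRC32 instruction through the _mm_crc32_u8 intrinsic. It assumes a CPU with SSE4.2 and a build with -msse4.2 (GCC/Clang), so support should be checked at runtime before using it; the function name crc32_hash is mine:

#include <cstddef>
#include <cstdint>
#include <nmmintrin.h>   // SSE4.2 intrinsics

uint32_t crc32_hash(const uint8_t* data, std::size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;              // conventional CRC starting value
    for (std::size_t i = 0; i < len; ++i)
        crc = _mm_crc32_u8(crc, data[i]);    // one hardware-accelerated step per byte
    return ~crc;
}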

Related

Cross Platform Floating Point Consistency

I'm developing a cross-platform game which plays over a network using a lockstep model. As a brief overview, this means that only inputs are communicated, and all game logic is simulated on each client's computer. Therefore, consistency and determinism are very important.
I'm compiling the Windows version on MinGW32, which uses GCC 4.8.1, and on Linux I'm compiling using GCC 4.8.2.
What struck me recently was that, when my Linux version connected to my Windows version, the program would diverge, or de-sync, instantly, even though the same code was compiled on both machines! Turns out the problem was that the Linux build was being compiled as 64-bit, whereas the Windows version was 32-bit.
After compiling a 32-bit Linux version, I was relieved to find that the problem was resolved. However, it got me thinking and researching floating-point determinism.
This is what I've gathered:
A program will be generally consistent if it's:
run on the same architecture
compiled using the same compiler
So if I assume, targeting a PC market, that everyone has an x86 processor, then that solves requirement one. However, the second requirement seems a little silly.
MinGW, GCC, and Clang (Windows, Linux, Mac, respectively) are all different compilers that are based on or compatible with GCC. Does this mean it's impossible to achieve cross-platform determinism? Or is that only applicable to Visual C++ vs GCC?
As well, do the optimization flags -O1 or -O2 affect this determinism? Would it be safer to leave them off?
In the end, I have three questions to ask:
1) Is cross-platform determinism possible when using MinGW, GCC, and Clang for compilers?
2) What flags should be set across these compilers to ensure the most consistency between operating systems / CPUs?
3) Floating point accuracy isn't that important for me -- what's important is that the results are consistent. Is there any method of reducing floating point numbers to a lower precision (like 3-4 decimal places) to ensure that the tiny rounding errors across systems become non-existent? (Every implementation I've tried to write so far has failed)
Edit: I've done some cross-platform experiments.
Using floating-point values for velocity and position, I kept a Linux Intel laptop and a Windows AMD desktop computer in sync for up to 15 decimal places of the float values. Both systems are, however, x86_64. The test was simple though -- it was just moving entities around over a network, trying to determine any visible error.
Would it make sense to assume that the same results would hold if a x86 computer were to connect to a x86_64 computer? (32 bit vs 64 bit Operating System)
Cross-platform and cross-compiler consistency is of course possible. Anything is possible given enough knowledge and time! But it might be very hard, or very time-consuming, or indeed impractical.
Here are the problems I can foresee, in no particular order:
Remember that even an extremely small error of plus-or-minus 1/10^15 can blow up and become significant (multiply a number carrying that error by one billion, and you now have a plus-or-minus 0.000001 error, which might be significant). These errors can accumulate over time, over many frames, until you have a desynchronized simulation. Or they can manifest when you compare values (even naively using "epsilons" in floating-point comparisons might not help; it only displaces or delays the manifestation.)
The above problem is not unique to distributed deterministic simulations (like yours). It touches on the issue of "numerical stability", which is a difficult and often neglected subject.
Different compiler optimization switches and different floating-point behavior switches might lead the compiler to generate slightly different sequences of CPU instructions for the same statements. Obviously, these must be the same across compilations (using the exact same compilers), or the generated code must be rigorously compared and verified.
32-bit and 64-bit programs (note: I'm saying programs, not CPUs) will probably exhibit slightly different floating-point behaviors. By default, 32-bit programs cannot rely on anything more advanced than the x87 instruction set (no SSE, SSE2, AVX, etc.) unless you specify otherwise on the compiler command line (or use intrinsics/inline assembly in your code). On the other hand, a 64-bit program is guaranteed to run on a CPU with SSE2 support, so the compiler will use those instructions by default (again, unless overridden by the user). While x87 and SSE2 float datatypes and the operations on them are similar, they are - AFAIK - not identical, which will lead to inconsistencies in the simulation if one program uses one instruction set and another program uses the other.
The x87 instruction set includes a "control word" register, which contains flags that control some aspects of floating-point operations (e.g. the exact rounding behavior). This is a runtime thing: your program can do one set of calculations, then change this register, then do the exact same calculations and get a different result. Obviously, this register must be checked and kept identical on the different machines. It is possible for the compiler (or the libraries you use in your program) to generate code that changes these flags at runtime inconsistently across the programs.
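As a hedged sketch of one way to guard against part of that, each peer could pin the rounding mode at startup using the standard <cfenv> facilities (this covers the rounding mode only; the precision-control bits of the x87 control word are not exposed by standard C++, and the function name here is mine):

#include <cfenv>
#include <cstdio>

void enforce_round_to_nearest()
{
    if (std::fegetround() != FE_TONEAREST)
    {
        std::fesetround(FE_TONEAREST);   // force the common default on this machine
        std::printf("rounding mode was not round-to-nearest; reset it\n");
    }
}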
Again, in case of the x87 instruction set, Intel and AMD have historically implemented things a little differently. For example, one vendor's CPU might internally do some calculations using more bits (and therefore arrive at a more accurate result) than the other, which means that if you happen to run on two different CPUs (both x86) from two different vendors, the results of simple calculations might not be the same. I don't know how and under what circumstances these higher-accuracy calculations are enabled, and whether they happen under normal operating conditions or you have to ask for them specifically, but I do know these discrepancies exist.
Random numbers, and generating them consistently and deterministically across programs, have nothing to do with floating-point consistency. They're important and a source of many bugs, but in the end they're just a few more bits of state that you have to keep synced.
And here are a couple of techniques that might help:
Some projects use "fixed-point" numbers and fixed-point arithmetic to avoid rounding errors and general unpredictability of floating-point numbers. Read the Wikipedia article for more information and external links.
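To give a flavour of the idea, here is a minimal, hedged sketch of a 16.16 fixed-point type: all arithmetic is plain integer arithmetic, so results are bit-identical across compilers and CPUs (the type name is mine, and the right shift in the multiply assumes the usual arithmetic shift for negative values):

#include <cstdint>

struct Fixed16
{
    int32_t raw;   // stored as value * 65536

    static Fixed16 fromInt(int32_t v) { return Fixed16{ v * 65536 }; }

    friend Fixed16 operator+(Fixed16 a, Fixed16 b) { return Fixed16{ a.raw + b.raw }; }
    friend Fixed16 operator-(Fixed16 a, Fixed16 b) { return Fixed16{ a.raw - b.raw }; }
    friend Fixed16 operator*(Fixed16 a, Fixed16 b)
    {
        // widen to 64 bits so the intermediate product cannot overflow
        return Fixed16{ static_cast<int32_t>((static_cast<int64_t>(a.raw) * b.raw) >> 16) };
    }
};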
In one of my own projects, during development, I used to hash all the relevant state (including a lot of floating-point numbers) in all the instances of the game and send the hash across the network each frame, to make sure not even one bit of that state differed between machines. This also helped with debugging: instead of trusting my eyes to see when and where inconsistencies existed (which wouldn't tell me where they originated anyway), I would know the instant some part of the state of the game on one machine started diverging from the others, and know exactly what it was (if the hash check failed, I would stop the simulation and start comparing the whole state).
This feature was implemented in that codebase from the beginning, and was used only during the development process to help with debugging (because it had performance and memory costs.)
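A hedged sketch of that idea, using FNV-1a only because it is simple to show (any hash all peers agree on would do), with a hypothetical EntityState struct standing in for the real game state:

#include <cstddef>
#include <cstdint>

uint64_t fnv1a(const void* data, std::size_t len, uint64_t h)
{
    const unsigned char* p = static_cast<const unsigned char*>(data);
    for (std::size_t i = 0; i < len; ++i)
    {
        h ^= p[i];                     // fold in each byte of the raw state
        h *= 1099511628211ull;         // FNV-1a prime
    }
    return h;
}

struct EntityState { float x, y, vx, vy; };   // hypothetical per-entity state, no padding

uint64_t hash_frame_state(const EntityState* entities, std::size_t count)
{
    uint64_t h = 14695981039346656037ull;     // FNV offset basis
    for (std::size_t i = 0; i < count; ++i)
        h = fnv1a(&entities[i], sizeof(EntityState), h);   // hash the exact bit patterns
    return h;   // send this across the network each frame and compare against the peers
}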
Update (in answer to first comment below): As I said in point 1, and others have said in other answers, that doesn't guarantee anything. If you do that, you might decrease the probability and frequency of an inconsistency occurring, but the likelihood doesn't become zero. If you don't analyze what's happening in your code and the possible sources of problems carefully and systematically, it is still possible to run into errors no matter how much you "round off" your numbers.
For example, if you have two numbers (e.g. as results of two calculations that were supposed to produce identical results) that are 1.111499999 and 1.111500001 and you round them to three decimal places, they become 1.111 and 1.112 respectively. The original numbers' difference was only 2E-9, but it has now become 1E-3. In fact, you have increased your error 500'000 times. And still they are not equal even with the rounding. You've exacerbated the problem.
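That example is easy to reproduce; here is a small hedged snippet doing exactly that rounding:

#include <cmath>
#include <cstdio>

int main()
{
    double a = 1.111499999;
    double b = 1.111500001;

    double ra = std::round(a * 1000.0) / 1000.0;   // 1.111
    double rb = std::round(b * 1000.0) / 1000.0;   // 1.112

    std::printf("before rounding: difference = %g\n", b - a);    // about 2e-9
    std::printf("after  rounding: difference = %g\n", rb - ra);  // about 1e-3
    return 0;
}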
True, this doesn't happen much, and the examples I gave are two unlucky numbers to get in this situation, but it is still possible to find yourself with these kinds of numbers. And when you do, you're in trouble. The only sure-fire solution, even if you use fixed-point arithmetic or whatever, is to do rigorous and systematic mathematical analysis of all your possible problem areas and prove that they will remain consistent across programs.
Short of that, for us mere mortals, you need to have a water-tight way to monitor the situation and find exactly when and how the slightest discrepancies occur, to be able to solve the problem after the fact (instead of relying on your eyes to see problems in game animation or object movement or physical behavior.)
No, not in practice. For example, sin() might come from a library or from a compiler intrinsic, and differ in rounding. Sure, that's only one bit, but that's already out of sync. And that one bit error may add up over time, so even an imprecise comparison may not be sufficient.
N/A
You can't reduce FP precision for a given type, and I don't even see how it would help you. You'd turn the occasional 1E-6 difference into an occasional 1E-4 difference.
Next to your concerns on determinism, I have another remark: if you are worried about calculation consistency on a distributed system, you may have a design issue.
You could think about your application as a bunch of nodes, each responsible for its own calculations. If information about another node is needed, it should be sent to you by that node.
1.)
In principle, cross-platform, cross-OS, cross-hardware consistency is possible, but in practice it's a pain.
In general your results will depend on which OS you use, which compiler, and which hardware you use. Change any one of those and your results might change. You have to test all changes.
I use Qt Creator and qmake (cmake is probably better but qmake works for me) and test my code in MSVC on Windows, GCC on Linux, and MinGW-w64 on Windows. I test both 32-bit and 64-bit. This has to be done whenever code changes.
2.) and 3.)
In terms of floating point, some compilers will use x87 instead of SSE in 32-bit mode. See this as an example of the consequences when that happens: Why a number crunching program starts running much slower when diverges into NaNs? All 64-bit systems have SSE, so I think most compilers use SSE/AVX by default in 64-bit mode; otherwise, e.g. in 32-bit mode, you might need to force SSE with something like -mfpmath=sse and -msse2.
But if you want a more compatible version of GCC on Windows, then I would use MinGW-w64 for 32-bit (aka MinGW-w32) or MinGW-w64 for 64-bit. This is not the same thing as MinGW (aka mingw32); the projects have diverged. MinGW depends on MSVCRT (the MSVC C runtime library) and MinGW-w64 does not. The Qt project has a pretty good description of MinGW-w64 and its installation: http://qt-project.org/wiki/MinGW-64-bit
You might also want to consider writing a CPU dispatcher (see: cpu dispatcher for visual studio for AVX and SSE).

Filesize difference when cross compiling

I am writing a small application in C++ that runs on my host machine (Linux x86) and on a target machine (ARM).
The problem I have is that on the host machine my binary is about 700 KB in size, but on the target machine it is about 7 MB.
I am using the same compile switches for both platforms. My first thought was that a library on the target machine got linked statically, but I checked both binaries with objdump and both use the same dynamically linked libraries.
So can anyone give me a hint as to how I can figure out why there is such a huge difference in size?
While different computer architectures can theoretically require completely different amounts of executable code for the same program, a factor of 10 is not really expected among modern architectures. ARM and x86 may be different, but they are still designed in the same universe where memory and bandwidth are not something to waste, leading CPU designers to try to keep the executable code as tight as possible.
I would, therefore, look at the following possibilities, in order of probability:
Symbol stripping: if one of the two binaries has been stripped of its symbols, then it would be significantly smaller, especially if it was compiled with debugging information. You might want to try to strip both binaries and see what happens.
Static linking: I have occasionally encountered build systems for embedded targets that would prefer static linking over using shared libraries. Examining the library dependencies of each binary would probably detect this.
Additional enabled code: The larger binary may have additional code enabled because e.g. the build system found an additional optional library or because the target platform requires specific handling.
Still, a factor of 10 is probably too much for this, unless the smaller binary is missing a lot of functionality or the larger one has linked in some optional library statically.
Different compiler configuration: You should not only look at the compiler options that you supply, but also at the defaults the compiler uses for each target. For example, if the compiler has significantly higher inlining or loop-unrolling limits on one architecture, the resulting executable could balloon noticeably.
First, there is no reason to expect the same code compiled for different architectures to have any particular size relationship. You can easily have A be larger than B, then change one line of code, and then B is larger than A.
Second, the "binaries" you are talking about are, I am guessing, ELF files, which contain the actual binary code plus anywhere from a little to a lot of overhead. The overhead can vary between architectures and other such factors.
Bottom line if you are compiling the same code for two architectures/platforms or with different compilers or compile options for the same architecture there is no reason to expect the file sizes to have any relationship to each other.
Different architectures can have completely different ways to handle the same thing. For example loading immediate value on CISC (e.g. x86) architecture is usually one instruction, while on RISC (e.g. ppc, arm) it usually is more than one instruction, the actual number needed being dependent on the value. For example if the instruction set only allows 16bit immediate values, you may need up to 7 instructions to load a 64bit value (loading by 16bits and shifting in between the loads). Hence the code is inherently different.
One reason not mentioned so far, but relevant to ARM/x86 comparisons, is floating-point emulation. All x86 chips today come with native FP support (and x86-64 even with SIMD FP support via SSE), but ARM CPUs often lack an FP unit. That in turn means even a simple FP addition has to be turned into a long sequence of integer operations on exponents and mantissas.

GCC __builtin_ functions

Are the following functions executed in a single clock cycle?
__builtin_popcount
__builtin_ctz
__builtin_clz
Also, what is the number of clock cycles for the ll (64-bit) versions of the same?
Are they portable? Why or why not?
Do these functions execute in a single clock-cycle?
Not necessarily. On architectures where they can be implemented with a single instruction, they will typically be the fastest way to compute that function (but still not necessarily a single clock cycle). On architectures where they cannot be implemented as a single instruction, their performance is less certain.
On my processor (a Core 2 Duo), __builtin_ctz and __builtin_clz can be implemented with a single instruction (Bit Scan Forward and Bit Scan Reverse). However, __builtin_popcount cannot be implemented with a single instruction on my processor. For __builtin_popcount, gcc 4.7.2 calls a library function, while clang 3.1 generates an inline instruction sequence (implementing this bit twiddling hack). Clearly, the performance of those two implementations will not be the same.
Are they portable?
They are not portable across compilers. They originated with GCC (as far as I know), and are also implemented in some other compilers such as Clang.
Compilers that do support these functions may provide them for multiple architectures, but implementation quality (performance) is likely to vary.
__builtin functions like this are used to access specific machine instructions in a somewhat easier way than using inline assembly. If you need the highest performance and are willing either to sacrifice portability or to provide an alternate implementation for compilers or platforms where these functions are not available, then it makes sense to use them. If optimal low-level performance is your goal, you should also check the assembly output of the compiler, to determine whether it really is generating the instruction that you expect it to use.
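As a hedged sketch of that trade-off, a small wrapper can use the builtin where the compiler provides it and fall back to plain C++ elsewhere (the wrapper name is mine):

#include <cstdint>

inline int popcount32(uint32_t x)
{
#if defined(__GNUC__) || defined(__clang__)
    return __builtin_popcount(x);          // may compile to a single POPCNT instruction
#else
    int count = 0;
    while (x)                              // Kernighan's trick: clear the lowest set bit
    {
        x &= x - 1;
        ++count;
    }
    return count;
#endif
}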
You can get a first idea of what your compiler does with them by compiling with -O3 -march=native -S and looking at the assembler code. There you can check whether the builtin resolves to just one assembler instruction. Even if so, that is no guarantee that it executes in one cycle. To know the real cost, you'd have to measure.

Taking advantage of SSE and other CPU extensions

There are a couple of places in my code base where the same operation is repeated a very large number of times over a large data set. In some cases it takes a considerable time to process these.
I believe that using SSE to implement these loops should improve their performance significantly, especially where many operations are carried out on the same set of data, so once the data is read into the cache initially there shouldn't be any cache misses to stall it. However, I'm not sure how to go about this.
Is there a compiler- and OS-independent way of writing code to take advantage of SSE instructions? I like the VC++ intrinsics, which include SSE operations, but I haven't found any cross-compiler solutions.
I still need to support some CPUs that have either no or limited SSE support (e.g. Intel Celeron). Is there some way to avoid having to make different versions of the program, like having some kind of "run time linker" that links in either the basic or SSE-optimised code based on the CPU running it when the process is started?
What about other CPU extensions? Looking at the instruction sets of various Intel and AMD CPUs shows there are a few of them.
For your second point there are several solutions as long as you can separate out the differences into different functions:
plain old C function pointers
dynamic linking (which generally relies on C function pointers)
if you're using C++, having different classes that represent the support for different architectures and using virtual functions can help immensely with this.
Note that because you'd be relying on indirect function calls, the functions that abstract the different operations generally need to represent somewhat higher level functionality or you may lose whatever gains you get from the optimized instruction in the call overhead (in other words don't abstract the individual SSE operations - abstract the work you're doing).
Here's an example using function pointers:
typedef int (*scale_func_ptr)( int scalar, int* pData, int count );

int non_sse_scale( int scalar, int* pData, int count )
{
    // do whatever work needs done, without SSE so it'll work on older CPUs
    return 0;
}

int sse_scale( int scalar, int* pData, int count )
{
    // equivalent code, but uses SSE
    return 0;
}

// at initialization
scale_func_ptr scale_func = non_sse_scale;
if (useSSE) {
    scale_func = sse_scale;
}

// now, when you want to do the work:
scale_func( 12, theData_ptr, 512 );  // calls the routine tailored to SSE if the CPU
                                     // supports it, otherwise the non-SSE version
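How useSSE gets set is up to you; as a hedged sketch, on GCC/Clang (GCC 4.8+) the __builtin_cpu_supports extension can do the runtime check (MSVC users would query __cpuid instead; the detect_sse2 name is mine):

bool detect_sse2()
{
#if defined(__GNUC__) || defined(__clang__)
    return __builtin_cpu_supports("sse2") != 0;   // asks the CPU, not the compile flags
#else
    return false;   // conservative fallback when no detection is available here
#endif
}

// at initialization, continuing the snippet above:
//   bool useSSE = detect_sse2();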
Good reading on the subject: Stop the instruction set war
Short overview: Sorry, it is not possible to solve your problem in a simple and maximally compatible (Intel vs. AMD) way.
The SSE intrinsics work with Visual C++, GCC, and the Intel compiler. There is no problem using them these days.
Note that you should always keep a version of your code that does not use SSE and constantly check it against your SSE implementation.
This helps not only with debugging; it is also useful if you want to support CPUs or architectures that don't support your required SSE versions.
In answer to your comment:
So effectively, as long as I don't try to actually execute code containing unsupported instructions I'm fine, and I could get away with an "if(sse2Supported){...}else{...}" type switch?
Depends. It's fine for SSE instructions to exist in the binary as long as they're not executed. The CPU has no problem with that.
However, if you enable SSE support in the compiler, it will most likely swap a number of "normal" instructions for their SSE equivalents (scalar floating-point ops, for example), so even chunks of your regular non-SSE code will blow up on a CPU that doesn't support it.
So what you'll most likely have to do is compile one or two files separately, with SSE enabled, and let them contain all your SSE routines. Then link them with the rest of the app, which is compiled without SSE support.
Rather than hand-coding an alternative SSE implementation to your scalar code, I strongly suggest you have a look at OpenCL. It is a vendor-neutral, portable, cross-platform system for computationally intensive applications (and is highly buzzword-compliant!). You can write your algorithm in a subset of C99 designed for vectorised operations, which is much easier than hand-coding SSE. And best of all, OpenCL will generate the best implementation at runtime, to execute either on the GPU or on the CPU. So basically you get the SSE code written for you.
There are a couple of places in my code base where the same operation is repeated a very large number of times over a large data set. In some cases it takes a considerable time to process these.
Your application sounds like just the kind of problem that OpenCL is designed to address. Writing alternative functions in SSE would certainly improve the execution speed, but it is a great deal of work to write and debug.
Is there a compiler- and OS-independent way of writing code to take advantage of SSE instructions? I like the VC++ intrinsics, which include SSE operations, but I haven't found any cross-compiler solutions.
Yes. The SSE intrinsics have been essentially standardised by Intel, so the same functions work the same between Windows, Linux and Mac (specifically with Visual C++ and GNU g++).
I still need to support some CPUs that have either no or limited SSE support (e.g. Intel Celeron). Is there some way to avoid having to make different versions of the program, like having some kind of "run time linker" that links in either the basic or SSE-optimised code based on the CPU running it when the process is started?
You could do that (eg. using dlopen()) but it is a very complex solution. Much simpler would be (in C) to define a function interface and call the appropriate version of the optimised function via function pointer, or in C++ to use different implementation classes, depending on the CPU detected.
With OpenCL it is not necessary to do this, as the code is generated at runtime for the given architecture.
What about other CPU extensions? Looking at the instruction sets of various Intel and AMD CPUs shows there are a few of them.
Within the SSE instruction set, there are many flavours. It can be quite difficult to code the same algorithm in different subsets of SSE when certain instructions are not present. I suggest (at least to begin with) that you choose a minimum supported level, such as SSE2, and fall back to the scalar implementation on older machines.
This is also an ideal situation for unit/regression testing, which is very important to ensure your different implementations produce the same results. Have a test suite of input data and known-good output data, and run the same data through both versions of the processing function. You may need a precision test for passing (i.e. the difference between the result and the correct answer is below some epsilon, 1e-6 for example). This will greatly aid in debugging, and if you build high-resolution timing into your testing framework, you can compare the performance improvements at the same time.
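A hedged sketch of such a comparison helper (the function and variable names are placeholders, not from any particular library):

#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

bool results_match(const std::vector<float>& scalar_out,
                   const std::vector<float>& sse_out,
                   float epsilon = 1e-6f)
{
    if (scalar_out.size() != sse_out.size())
        return false;
    for (std::size_t i = 0; i < scalar_out.size(); ++i)
        if (std::fabs(scalar_out[i] - sse_out[i]) > epsilon)
        {
            std::printf("mismatch at %zu: %g vs %g\n", i, scalar_out[i], sse_out[i]);
            return false;
        }
    return true;
}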

long long implementation in 32 bit machine

As per the C99 standard, the size of long long should be a minimum of 64 bits. How is this implemented on a 32-bit machine (e.g. addition or multiplication of two long longs)? Also, what is the equivalent of long long in C++?
The equivalent in C++ is long long as well. It's not required by the standard, but most compilers support it because it's so useful.
How is it implemented? Most computer architectures already have built-in support for multi-word additions and subtractions. They don't do 64-bit additions directly but use the carry flag and a special add-instruction to build a 64-bit add from two 32-bit adds.
The same extension exists for subtraction as well (the carry is called borrow in these cases).
Longword multiplications and divisions can be built from smaller multiplications without the help of carry-flags. Sometimes simply doing the operations bit by bit is faster though.
There are architectures that don't have any flags at all (some DSP chips and simple micros). On these architectures the overflow has to be detected with logic operations. Multi-word arithmetic tends to be slow on these machines.
On the IA-32 architecture, 64-bit integers are implemented using two 32-bit registers (eax and edx).
There are platform-specific equivalents for C++, and you can use the stdint.h header where available (Boost provides you with one).
As everyone has stated, a 64-bit integer is typically implemented by simply using two 32-bit integers together. Clever code generation then tracks the carry and/or borrow bits to detect overflow and adjust accordingly.
This of course makes such arithmetic more costly in terms of code space and execution time, than the same code compiled for an architecture with native support for 64-bit operations.
If you care about bit-sizes, you should use
#include <stdint.h>
int32_t n;
and friends. This works for C++ as well.
64-bit numbers on 32-bit machines are implemented as you think, by 4 extra bytes. You could therefore implement your own 64-bit datatype by doing something like this:
struct my_64bit_integer {
uint32_t low;
uint32_t high;
};
You would of course have to implement mathematical operators yourself.
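For illustration, here is a hedged sketch of what addition could look like for that struct, propagating the carry from the low word into the high word by hand (struct repeated so the snippet stands alone; the function name is mine):

#include <stdint.h>

struct my_64bit_integer {
    uint32_t low;
    uint32_t high;
};

my_64bit_integer add_u64(my_64bit_integer a, my_64bit_integer b)
{
    my_64bit_integer r;
    r.low  = a.low + b.low;                                // 32-bit add, may wrap around
    r.high = a.high + b.high + (r.low < a.low ? 1u : 0u);  // wrap-around implies a carry
    return r;
}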
There is an int64_t in the stdint.h that comes with my GCC version, and in Microsoft Visual C++ you have an __int64 type as well.
The next C++ standard (due 2009, or maybe 2010) is slated to include the "long long" type. As mentioned earlier, it's already in common use.
The implementation is up to the compiler writers, although computers have always supported multiple precision operations. Some languages, like Python and Common Lisp, require support for indefinite-precision integers. Long ago, I wrote 64-bit multiplication and division routines for a computer (the Z80) that could manage 16-bit addition and subtraction, with no hardware multiplication at all.
Probably the easiest way to see how an operation is implemented on your particular compiler is to write a code sample and examine the assembler output, which is available from all the major compilers I've worked with.