Choosing between 32- and 64-bit intrinsic CRC on an Intel CPU - C++

I need to calculate a CRC in order to form a hash function on an Intel machine, and came up with the following two intrinsic functions:
_mm_crc32_u32
_mm_crc32_u64
In my project I am dealing with 32-bit variables, and my dilemma is between shifting and ORing each pair of variables (thus creating a 64-bit variable) and then using the 64-bit CRC, or running the 32-bit CRC on each of the two 32-bit variables.
I can't find anywhere the number of cycles that each of these functions takes, and from the Intel function specifications it is unclear which one is preferable.
The same dilemma also applies to the 16-bit version of the CRC function:
_mm_crc32_u16
I tried checking it by taking the time before and after the CRC call, but the results were pretty much the same, so I need a more sophisticated way of measuring it.
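For concreteness, here is a minimal sketch of the two variants being compared (assuming SSE4.2 is available and enabled, e.g. with -msse4.2; the function names are just for illustration):

#include <nmmintrin.h>  // SSE4.2 CRC32 intrinsics
#include <cstdint>

// Variant 1: two 32-bit CRC steps, one per input value.
uint32_t crc_two_u32(uint32_t crc, uint32_t a, uint32_t b) {
    crc = _mm_crc32_u32(crc, a);
    return _mm_crc32_u32(crc, b);
}

// Variant 2: pack both values into 64 bits, then one 64-bit CRC step.
uint32_t crc_one_u64(uint32_t crc, uint32_t a, uint32_t b) {
    uint64_t packed = ((uint64_t)a << 32) | b;
    return (uint32_t)_mm_crc32_u64(crc, packed);
}

Note that the two variants generally produce different hash values; the question here is only which one runs faster.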

Don't use CRC for hash values. It's not the same kind of thing.
Use the murmurhash for classic computer science hashing needs (that is, not huge cryptographic strength hashes). That also has implementations for different widths.
I don't understand what you mean: you have two 32-bit values and want a hash of that? That might be sensible or might not, depending on why. Can you clarify what you are trying to accomplish?

Related

Why and how are C++ bitfields non-portable?

I've come across many comments on various questions regarding bitfields asserting that bitfields are non-portable, but I've never been able to find a source explaining precisely why.
At face value, I would have presumed all bit fields merely compile to variations of the same bit-shifting code, but evidently there must be more to it than that, or there would not be such vehement dislike for them.
So my question is: what is it that makes bit fields non-portable?
Bit fields are non-portable in the same sense as integers are non-portable. You can use integers to write a portable program, but you cannot expect to send a binary representation of int as is to a remote machine and expect it to interpret the data correctly.
This is because (1) word lengths of processors differ, and because of that the sizes of integer types differ (byte length can differ too, but that is rare these days outside embedded systems), and (2) byte endianness differs across processors.
These problems are easy to overcome. Native endianness can easily be converted to an agreed-upon endianness (big endian is the de facto standard for network communication), the size can be inspected at compile time, and fixed-width integer types are available these days. Therefore integers can be used to communicate across a network, as long as these details are taken care of.
Bit fields build upon regular integer types, so they have the same problems with endianness and integer sizes. But they have even more implementation-specified behaviour:
Everything about the actual allocation details of bit fields within the class object
For example, on some platforms, bit fields don't straddle bytes, on others they do
Also, on some platforms, bit fields are packed left-to-right, on others right-to-left
Whether char, short, int, long, and long long bit fields are signed or unsigned (when not declared so explicitly).
Unlike endianness, it is not trivial to convert "everything about the actual allocation details" to a canonical form.
Also, while endianness is cpu architecture specific, the bit field details are specific to the compiler implementer. So, bit fields are not portable for communication even between separate processes within the same computer, unless we can guarantee that they were compiled using the same (or binary compatible) compiler.
TL;DR bit fields are not a portable way to communicate between computers. Integers aren't either, but their non-portability is easy to work around.
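To make this concrete, here is a minimal sketch using nothing beyond standard C++ (the struct and value choices are arbitrary; the point is that the printed byte pattern is implementation-defined):

#include <cstdio>
#include <cstring>

// Packing order, byte straddling, and padding of these bit fields are all
// implementation-defined, so the raw bytes differ between compilers/ABIs.
struct Flags {
    unsigned a : 3;
    unsigned b : 5;
    unsigned c : 9;
};

int main() {
    Flags f{};
    f.a = 5; f.b = 17; f.c = 300;
    unsigned char bytes[sizeof f];
    std::memcpy(bytes, &f, sizeof f);
    // The byte pattern printed here is not safe to send to another machine,
    // or even to a process built with a different (incompatible) compiler.
    for (unsigned char byte : bytes)
        std::printf("%02x ", byte);
    std::printf("\n");
}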
Bit fields are non-portable in the sense that the ordering of the bits is unspecified. So the bit at index 0 with one compiler could very well be the last bit with another compiler.
This prevents the use of bit fields in applications like toggling bits in memory-mapped hardware registers.
But you will see hardware vendors use bit fields in the code they release (like Microchip, for instance). Usually it's because they also release the compiler with it or target a single compiler. In Microchip's case, for instance, the licence for their source code mandates the use of their own compiler (for 8-bit low-end devices).
The link pointed to by #Pharap contains an extract of the (C++14) standard related to this unspecified ordering: is-there-a-portable-alternative-to-c-bitfields

Recommendable test for software integer multiplication?

I've coded a number of integer multiplication routines for Atmel's AVR architecture. I found it useful, if unconvincing, to follow a simple pattern for the multiplier (and a similar one for the multiplicand): start at zero and step by one in every byte (in addition to eventual carries).
There seems to be quite a bit about testing hardware multiplier implementations, but:
What can be recommended for testing software implementations of integer multiplication? Exhaustive testing gets out of hand - if not at, then beyond 16×16 bit.
Most approaches use generate & test:
generate test operands
The safest is to use all combinations of operands. But with bignums, or with little computing power or memory, this is not possible or practical. In that case, the usual approach is to use test cases that will (most likely) stress the tested algorithm (like propagating carries, or being near safe limits) and use only those. Another option is to use pseudo-random operands and hope for a representative test sample, or to combine all of these approaches.
compute multiplication
Just apply the tested algorithm to the generated input data.
assess the results
You need to compare the algorithm's (multiplication) result to something. Usually you use a different algorithm for the same operation to compare with, or use inverse functions (like c=a*b; if (a!=c/b) ...). The problem with this is that you cannot distinguish between an error in the tested algorithm and an error in the reference, unless you compare against something known to be 100% correct, or check more than just one operation. There is also the possibility of having a precomputed result table (computed on a different, known bug-free platform) and comparing against that.
The AVR tag makes this a tough question, so some hints:
many MCUs have a lot of flash program memory that can be used to store precomputed values to compare against
sometimes running in an emulator is faster/more comfortable, but with the risk of missing some HW-related issues.
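As a sketch of the generate & test idea on a host machine, where the native multiply serves as the reference (my_mul16x16 is a hypothetical stand-in for the routine under test):

#include <cstdint>
#include <cstdio>

// Hypothetical 16x16 -> 32 bit software multiply under test;
// replace the body with the routine being verified.
uint32_t my_mul16x16(uint16_t a, uint16_t b) {
    return (uint32_t)a * b;
}

bool test_mul16x16() {
    // Edge operands chosen to stress carry propagation and safe limits.
    const uint16_t edge[] = {0, 1, 2, 0x00FF, 0x0100, 0x7FFF, 0x8000, 0xFFFE, 0xFFFF};
    for (uint16_t a : edge)
        for (uint16_t b : edge)
            if (my_mul16x16(a, b) != (uint32_t)a * b) {
                std::printf("FAIL: %u * %u\n", (unsigned)a, (unsigned)b);
                return false;
            }
    // Pseudo-random operands (simple LCG) as an additional sample.
    uint32_t seed = 0x12345678;
    for (int i = 0; i < 100000; ++i) {
        seed = seed * 1664525u + 1013904223u;
        uint16_t a = (uint16_t)(seed >> 16), b = (uint16_t)seed;
        if (my_mul16x16(a, b) != (uint32_t)a * b) {
            std::printf("FAIL: %u * %u\n", (unsigned)a, (unsigned)b);
            return false;
        }
    }
    return true;
}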

Float to SInt32

I have a series of C++ signal-processing classes which use 32-bit floats as their primary sample datatype. For example, all the oscillator classes return floats for every sample that's requested. This is the same for all the classes; all sample calculations are in floating point.
I am porting these classes to iOS, and for performance reasons I want to operate in 8.24 fixed point to get the most out of the processor; word has it there are major performance advantages on iOS to crunching integers instead of floats. I'm currently doing all the calculations in floats, then converting to SInt32 at the final stage before output, which means every sample has to be converted at that final stage.
Do I simply change the datatype used inside my classes from Float to SInt32, so my oscillators and filters etc. calculate in fixed point by passing SInt32s around internally instead of floats?
Is it really this simple, or do I have to completely rewrite all the different algorithms?
Is there any other voodoo I need to understand before taking on this mission?
Many thanks to anyone who finds the time to comment on this; it's much appreciated.
It's mostly a myth. Floating-point performance used to be slow if you compiled for armv6 in Thumb mode; this is not an issue in armv7, which supports Thumb 2 (I'll avoid further discussion of armv6, which is no longer supported in Xcode). You also want to avoid using doubles, since floats can use the faster NEON (a.k.a. Advanced SIMD) unit; this is easy to do accidentally, so try enabling -Wshorten.
I also doubt you'll get significantly better performance doing an 8.24 multiply, especially over making use of the NEON unit. Changing float to int/int32_t/SInt32 will also not automatically do the necessary shifts for an 8.24 multiply.
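For illustration, a minimal sketch of what an 8.24 multiply involves (the 64-bit intermediate and the right shift are exactly what a plain type swap would not give you; the helper names are made up):

#include <cstdint>

// 8.24 fixed point: 8 integer bits, 24 fractional bits, stored in int32_t.
using Fixed824 = int32_t;

constexpr Fixed824 to_fixed(float x)    { return (Fixed824)(x * (1 << 24)); }
constexpr float    to_float(Fixed824 x) { return (float)x / (1 << 24); }

// Multiplying two 8.24 values produces a 16.48 result in 64 bits;
// shifting right by 24 scales it back to 8.24 (assumes the usual
// arithmetic right shift for negative values).
inline Fixed824 fixed_mul(Fixed824 a, Fixed824 b) {
    return (Fixed824)(((int64_t)a * b) >> 24);
}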
If you know that converting floats to ints is the slow bit, consider using some of the functions in Accelerate.framework, namely vDSP_vfix16() or vDSP_vfixr16().

What is faster? two ints or __int64?

What is faster: (Performance)
__int64 x,y;
x=y;
or
int x,y,a,b;
x=a;
y=b;
?
Or are they equal?
__int64 is a non-standard compiler extension, so whilst it may or may not be faster, you don't want to use it if you want cross platform code. Instead, you should consider using #include <cstdint> and using uint64_t etc. These derive from the C99 standard which provides stdint.h and inttypes.h for fixed width integer arithmetic.
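For example, a sketch of the portable fixed-width spelling:

#include <cstdint>

std::uint64_t x = 0;  // exactly 64 bits wherever the type is provided
std::int32_t  a = 0;  // exactly 32 bits, instead of compiler-specific __int64/int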
In terms of performance, it depends on the system you are on - x86_64 for example should not see any performance difference in adding 32- and 64- bit integers, since the add instruction can handle 32 or 64 bit registers.
However, if you're running code on a 32-bit platform or compiling for a 32-bit architecture, adding 64-bit integers will actually require two 32-bit register additions (with a carry), as opposed to one. So if you don't need the extra space, it would be wasteful to allocate it.
I have no idea if compilers can or do optimise the types down to a smaller size if necessary. I expect not, but I'm no compiler engineer.
I hate these sort of questions.
1) If you don't know how to measure for yourself then you almost certainly don't need to know the answer.
2) On modern processors it's very hard to predict how fast something will be based on single instructions; it's far more important to understand your program's cache usage and overall performance than to worry about optimising silly little snippets of code that use an assignment. Let the compiler worry about that and spend your time improving the algorithm used or other things that will have a much bigger impact.
So in short, you probably can't tell and it probably doesn't matter. It's a silly question.
The compiler's optimizer will remove all the code in your example, so in that way there is no difference. I think you want to know whether it is faster to move data 32 bits at a time or 64 bits at a time. If your data is aligned to 8 bytes and you are on a 64-bit machine, then it should be faster to move data 8 bytes at a time. There are several caveats to that, however. You may find that your compiler is already doing this optimization for you (you would have to look at the emitted assembly code to be sure), in which case you would see no difference. Also, consider using memcpy instead of rolling your own if you are moving a lot of data. If you are considering casting an array of 32-bit ints to 64-bit in order to copy faster or do some other operation faster (i.e. in half the number of instructions), be sure to Google for the strict aliasing rule.
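A short sketch of the safe route mentioned above (the function and buffer names are illustrative):

#include <cstdint>
#include <cstring>

// Casting src/dst to int64_t* and copying two elements at a time would
// break the strict aliasing rule. memcpy is well defined, and compilers
// typically expand it into the widest moves available for the target.
void copy_samples(const int32_t* src, int32_t* dst, std::size_t count) {
    std::memcpy(dst, src, count * sizeof *src);
}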
__int64 should be faster on most platforms, but be careful: some architectures require 8-byte alignment for this to take effect, and some would even crash your app if it's not aligned.

What do SSE instructions optimize in practice, and how does the compiler enable and use them?

SSE and/or 3DNow! have vector instructions, but what do they optimize in practice? Are 8-bit characters treated 4 by 4 instead of 1 by 1, for example? Are there optimisations for some arithmetical operations? Does the word size have any effect (16 bits, 32 bits, 64 bits)?
Do all compilers use them when they are available?
Does one really have to understand assembly to use SSE instructions? Does knowing about electronics and gate logic help in understanding this?
Background: SSE has both vector and scalar instructions. 3DNow! is dead.
It is uncommon for any compiler to extract a meaningful benefit from vectorization without the programmer's help. With programming effort and experimentation, one can often approach the speed of pure assembly, without actually mentioning any specific vector instructions. See your compiler's vector programming guide for details.
There are a couple of portability tradeoffs involved. If you code for GCC's vectorizer, you might be able to work with non-Intel architectures such as PowerPC and ARM, but not with other compilers. If you use Intel intrinsics to make your C code more like assembly, then you can use other compilers but not other architectures.
Electronics knowledge will not help you. Learning the available instructions will.
In the general case, you can't rely on compilers to use vectorized instructions at all. Some do (Intel's C++ compiler does a reasonable job of it in many simple cases, and GCC attempts to do so too, with mixed success).
But the idea is simply to apply the same operation to four 32-bit words (or two 64-bit values in some cases).
So instead of the traditional add instruction, which adds together the values from two different 32-bit wide registers, you can use a vectorized add, which uses special 128-bit wide registers containing four 32-bit values and adds them together as a single operation.
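A minimal sketch of that idea using SSE2 intrinsics (the function and array names are illustrative; assumes n is a multiple of 4):

#include <emmintrin.h>  // SSE2 intrinsics

// Adds four packed 32-bit integers per iteration instead of one at a time.
// loadu/storeu are used so the pointers do not need to be 16-byte aligned.
void add_vectors(const int* a, const int* b, int* out, int n) {
    for (int i = 0; i < n; i += 4) {
        __m128i va = _mm_loadu_si128((const __m128i*)(a + i));
        __m128i vb = _mm_loadu_si128((const __m128i*)(b + i));
        _mm_storeu_si128((__m128i*)(out + i), _mm_add_epi32(va, vb));
    }
}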
Duplicate of other questions:
Using SSE instructions
In short, SSE is short for Streaming SIMD Extensions, where SIMD = Single Instruction, Multiple Data. This is useful for performing a single mathematical or logical operation on many values at once, as is typically done for matrix or vector math operations.
The compiler can target this instruction set as part of its optimizations (research your /O options), however you typically have to restructure code and either code SSE manually or use a library like Intel Performance Primitives to really take advantage of it.
If you know what you are doing, you might get a huge performance boost. See for example here, where this guy improved the performance of his algorithm by a factor of 6.