How to use the SSE instruction set on a C6678 DSP?

SSE can only be used on x86/x64 CPUs. I have a problem using the SPEEXDSP library on a TI C6678. I've never used SSE instructions before; I've tried many approaches and can't get the code to work on the DSP.
Is it possible to convert the SSE intrinsics to normal C++ code? How would I do that?
Looking forward to your reply.
Example:
#include <emmintrin.h>   /* SSE/SSE2 intrinsics; spx_uint32_t comes from the speexdsp headers */

static inline double interpolate_product_double(const float *a, const float *b,
      unsigned int len, const spx_uint32_t oversample, float *frac)
{
   unsigned int i;
   double ret;
   __m128d sum;
   __m128d sum1 = _mm_setzero_pd();
   __m128d sum2 = _mm_setzero_pd();
   __m128 f = _mm_loadu_ps(frac);
   __m128d f1 = _mm_cvtps_pd(f);
   __m128d f2 = _mm_cvtps_pd(_mm_movehl_ps(f, f));
   __m128 t;
   for (i = 0; i < len; i += 2)
   {
      /* multiply a[i] with four consecutive filter taps at once */
      t = _mm_mul_ps(_mm_load1_ps(a + i), _mm_loadu_ps(b + i * oversample));
      sum1 = _mm_add_pd(sum1, _mm_cvtps_pd(t));
      sum2 = _mm_add_pd(sum2, _mm_cvtps_pd(_mm_movehl_ps(t, t)));
      t = _mm_mul_ps(_mm_load1_ps(a + i + 1), _mm_loadu_ps(b + (i + 1) * oversample));
      sum1 = _mm_add_pd(sum1, _mm_cvtps_pd(t));
      sum2 = _mm_add_pd(sum2, _mm_cvtps_pd(_mm_movehl_ps(t, t)));
   }
   /* weight the partial sums by frac[] and reduce to a single double */
   sum1 = _mm_mul_pd(f1, sum1);
   sum2 = _mm_mul_pd(f2, sum2);
   sum = _mm_add_pd(sum1, sum2);
   sum = _mm_add_sd(sum, _mm_unpackhi_pd(sum, sum));
   _mm_store_sd(&ret, sum);
   return ret;
}

Yes, you can use SIMD Everywhere (SIMDe). It provides portable implementations of many intrinsics, including all of the ones in your code. Full disclosure: I am the lead developer.
Edit: replying to phuclv here since it's a bit long for a comment.
SIMDe doesn't currently use the c6x intrinsics to implement functions like we often do for NEON, AltiVec/VSX, WASM SIMD, etc. There is nothing preventing it, and patches are very much welcome, but they're not there yet.
However, every function in SIMDe has fallback implementations all the way back to standard C. Usually things don't get that far, though; even discounting the architecture-specific implementations mentioned above, if the compiler supports them the operations are also implemented using GNU C vector extensions, and even the portable fallbacks are annotated with OpenMP SIMD directives. Conversion functions use compiler built-ins like __builtin_convertvector, and functions that require shuffling data around use __builtin_shuffle / __builtin_shufflevector.
Basically, SIMDe goes to great lengths to get the compiler to vectorize the code whenever possible, even if SIMDe doesn't actually know how to do it. The functions above are all pretty straightforward; I don't know enough about c6x SIMD to know what kind of operations are supported in hardware, but GCC and clang (which the TI compilers are based on) generally do a very good job with all the information SIMDe gives them. Honestly, the thing I'm most worried about here is whether the c6x supports double-precision floating point in SIMD (which the code above uses)... there is a pretty good chance it only supports single-precision floats.

Is it possible to modify SSE instructions to normal C++ instructions?
There's no such thing as "C++ instructions", because C++ is a high-level language with only statements and no instructions. But yes, it's possible to convert SSE intrinsics to C++ expressions, because each intrinsic simply performs multiple operations in parallel.
SSE is just one SIMD instruction set, so convert it to the corresponding SIMD operations on the target architecture. In your case the TI C6678 does have SIMD support:
The C64x+ and C674x DSPs support 2-way SIMD operations for 16-bit data and 4-way SIMD operations for 8-bit data. On the C66x DSP, the vector processing capability is improved by extending the width of the SIMD instructions. C66x DSPs can execute instructions that operate on 128-bit vectors.
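For example, a plain-C++ equivalent of the routine in your question could look roughly like this (an untested sketch that keeps the speexdsp spx_uint32_t type; intermediate rounding may differ slightly from the SSE version because the order of the additions changes):
static inline double interpolate_product_double_scalar(const float *a, const float *b,
      unsigned int len, const spx_uint32_t oversample, const float *frac)
{
   double accum[4] = { 0.0, 0.0, 0.0, 0.0 };
   for (unsigned int i = 0; i < len; i++)
   {
      /* each a[i] is multiplied with four consecutive filter taps,
         mirroring the 4-wide _mm_mul_ps in the intrinsic version */
      accum[0] += (double)(a[i] * b[i * oversample + 0]);
      accum[1] += (double)(a[i] * b[i * oversample + 1]);
      accum[2] += (double)(a[i] * b[i * oversample + 2]);
      accum[3] += (double)(a[i] * b[i * oversample + 3]);
   }
   /* weight the partial sums by frac[], as the _mm_mul_pd/_mm_add_pd tail does */
   return frac[0] * accum[0] + frac[1] * accum[1]
        + frac[2] * accum[2] + frac[3] * accum[3];
}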

The C66x architecture indeed supports a number of SIMD instructions, somewhat comparable to Intel's SSE.
You need to be aware of the register sets of both architectures and compare the available instructions.
For example, _mm_add_ps performs four simultaneous additions of single-precision floats, packed four at a time in SSE registers. The DSP has a similar DADDSP instruction that performs only two such additions, so you will need to translate one _mm_add_ps into two DADDSPs.
Read the manuals (both instruction sets are documented online), understand what the instructions do, and find the equivalences. In case of a dead end, you can always fall back to good old scalar operations, like C[0] = A[0] + B[0]; C[1] = A[1] + B[1];
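For instance, a scalar fallback for _mm_add_ps might look like this (a minimal sketch; add4 is an invented name, and on the C66x the compiler may well map pairs of these additions onto DADDSP by itself):
void add4(const float *A, const float *B, float *C)
{
    /* SSE:   _mm_storeu_ps(C, _mm_add_ps(_mm_loadu_ps(A), _mm_loadu_ps(B)));
       Scalar fallback below: four independent single-precision additions. */
    C[0] = A[0] + B[0];
    C[1] = A[1] + B[1];
    C[2] = A[2] + B[2];
    C[3] = A[3] + B[3];
}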

Related

Do we need vectorization in C++ or are for loops already fast enough?

In Matlab we use vectorization to speed up code. For example, here are two ways of performing the same calculation:
% Loop
tic
i = 0;
for t = 0:.01:1e5
    i = i + 1;
    y(i) = sin(t);
end
toc

% Vectorization
tic
t = 0:.01:1e5;
y = sin(t);
toc
The results are:
Elapsed time is 1.278207 seconds. % For loop
Elapsed time is 0.099234 seconds. % Vectorization
So the vectorized code is almost 13 times faster. Actually, if we run it again we get:
Elapsed time is 0.200800 seconds. % For loop
Elapsed time is 0.103183 seconds. % Vectorization
The vectorized code is now only 2 times as fast instead of 13 times as fast. So it appears we get a huge speedup on the first run of the code, but on future runs the speedup is not as great since Matlab appears to know that the for loop hasn't changed and is optimizing for it. In any case the vectorized code is still twice as fast as the for loop code.
Now I have started using C++ and I am wondering about vectorization in this language. Do we need to vectorize for loops in C++ or are they already fast enough? Maybe the compiler automatically vectorizes them? Actually, I don't know if Matlab-style vectorization is even a concept in C++; maybe it's just needed in Matlab because it is an interpreted language. How would you write the above function in C++ to make it as efficient as possible?
Do we need vectorization in C++
Vectorisation is not always necessary, but it can make some programs faster.
C++ compilers support auto-vectorisation, although if you need vectorisation you might not be able to rely on that optimisation, because not every loop can be vectorised automatically.
are [loops] already fast enough?
Depends on the loop, the target CPU, the compiler and its options, and crucially: How fast does it need to be.
Some things that you could do to potentially achieve vectorisation in standard C++:
Enable compiler optimisations that perform auto vectorisation. (See the manual of your compiler)
Specify a target CPU that has vector operations in their instruction set. (See the manual of your compiler)
Use standard algorithms with std::execution::par_unseq or std::execution::unseq (see the sketch after this list).
Ensure that the data being operated on is sufficiently aligned for SIMD instructions. You can use alignas. See the manual of the target CPU for what alignment you need.
Ensure that the optimiser knows as much as possible by using link time optimisation.
Partially unroll your loops. A limitation of this is that you hard-code the amount of parallelism:
for (int i = 0; i < count; i += 4) {
    operation(i + 0);
    operation(i + 1);
    operation(i + 2);
    operation(i + 3);
}
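As a rough illustration of the execution-policy item above (names and the float element type are arbitrary here; whether this actually vectorises depends on the standard-library implementation and compiler flags):
#include <algorithm>
#include <execution>
#include <vector>

void add_arrays(std::vector<float> &dst, const std::vector<float> &src)
{
    // requests unsequenced (vectorised) execution; std::execution::unseq is C++20,
    // std::execution::par_unseq is the C++17 policy mentioned in the list above
    std::transform(std::execution::unseq,
                   dst.begin(), dst.end(), src.begin(), dst.begin(),
                   [](float a, float b) { return a + b; });
}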
Outside of standard, portable C++, there are implementation specific ways:
Some compilers provide language extensions for writing explicitly vectorised programs. These are portable across different CPUs, but not to compilers that don't implement the extension.
using v4si = int __attribute__ ((vector_size (16)));
v4si a, b, c;
a = b + 1; /* a = b + {1,1,1,1}; */
a = 2 * b; /* a = {2,2,2,2} * b; */
Some compilers provide "builtin" functions to invoke specific CPU instructions which can be used to invoke SIMD vector instructions. Using these is not portable across incompatible CPUs.
Some compilers support the OpenMP API, which has #pragma omp simd.
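For example, the OpenMP directive might be used like this (a minimal sketch; compile with -fopenmp or -fopenmp-simd on GCC/Clang; the function name is invented):
void scale_add(float *y, const float *x, float a, int n)
{
    // asks the compiler to vectorise this loop even when its own
    // cost model would not choose to
    #pragma omp simd
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}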

SSE slower than FPU?

I have a large piece of code, part of whose body contains this piece of code:
result = (nx * m_Lx + ny * m_Ly + m_Lz) / sqrt(nx * nx + ny * ny + 1);
which I have vectorized as follows (everything is already a float):
__m128 r = _mm_mul_ps(_mm_set_ps(ny, nx, ny, nx),
                      _mm_set_ps(ny, nx, m_Ly, m_Lx));
__declspec(align(16)) int asInt[4] = {
    _mm_extract_ps(r, 0), _mm_extract_ps(r, 1),
    _mm_extract_ps(r, 2), _mm_extract_ps(r, 3)
};
float (&res)[4] = reinterpret_cast<float (&)[4]>(asInt);
result = (res[0] + res[1] + m_Lz) / sqrt(res[2] + res[3] + 1);
The result is correct; however, my benchmarking shows that the vectorized version is slower:
The non-vectorized version takes 3750 ms
The vectorized version takes 4050 ms
Setting result to 0 directly (and removing this part of the code entirely) reduces the entire process to 2500 ms
Given that the vectorized version only contains one set of SSE multiplications (instead of four individual FPU multiplications), why is it slower? Is the FPU indeed faster than SSE, or is there a confounding variable here?
(I'm on a mobile Core i5.)
You are spending a lot of time moving scalar values to/from SSE registers with _mm_set_ps and _mm_extract_ps - this is generating a lot of instructions, the execution time of which will far outweigh any benefit from using _mm_mul_ps. Take a look at the generated assembly output to see how much code is being generated in addition to the single MULPS instruction.
To vectorize this properly you need to use 128-bit SSE loads and stores (_mm_load_ps/_mm_store_ps) and then use SSE shuffle instructions to move elements around within registers where needed (see the sketch below).
One further point to note - modern CPUs such as Core i5, Core i7, have two scalar FPUs and can issue 2 floating point multiplies per clock. The potential benefit from SSE for single precision floating point is therefore only 2x at best. It's easy to lose most/all of this 2x benefit if you have excessive "housekeeping" instructions, as is the case here.
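A rough sketch of that "proper" vectorization, assuming the surrounding loop actually iterates over contiguous, 16-byte-aligned float arrays nx[], ny[] and out[] (names invented here; the question only shows one scalar iteration):
#include <xmmintrin.h>

void compute(const float *nx, const float *ny, float *out, int n,
             float m_Lx, float m_Ly, float m_Lz)
{
    const __m128 vlx = _mm_set1_ps(m_Lx);
    const __m128 vly = _mm_set1_ps(m_Ly);
    const __m128 vlz = _mm_set1_ps(m_Lz);
    const __m128 one = _mm_set1_ps(1.0f);

    for (int i = 0; i + 4 <= n; i += 4)
    {
        __m128 vnx = _mm_load_ps(nx + i);   // four nx values at once (aligned load)
        __m128 vny = _mm_load_ps(ny + i);
        __m128 num = _mm_add_ps(_mm_add_ps(_mm_mul_ps(vnx, vlx),
                                           _mm_mul_ps(vny, vly)), vlz);
        __m128 den = _mm_sqrt_ps(_mm_add_ps(_mm_add_ps(_mm_mul_ps(vnx, vnx),
                                                       _mm_mul_ps(vny, vny)), one));
        _mm_store_ps(out + i, _mm_div_ps(num, den));  // four results per iteration
    }
}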
There are several problems:
You will not see much benefit from using SSE instructions for such an operation, because SSE instructions are meant for parallel operations (that is, multiplying several values at the same time). What you did is a misuse of SSE.
Do not set the values one by one; use a pointer to the first value of an array. But here your values are not in an array.
Do not extract and copy the values back out. That is also a misuse of SSE; the result is supposed to stay in an array (or a register).
My take would be that when using the FPU the processor has time to compute the first multiplication while loading the next values, whereas SSE has to load all the values first.

C++ Adding 2 arrays together quickly

Given the arrays:
int canvas[10][10];
int addon[10][10];
Where all the values range from 0 - 100, what is the fastest way in C++ to add those two arrays so each cell in canvas equals itself plus the corresponding cell value in addon?
I.e., I want to achieve something like:
canvas += addon;
So if canvas[0][0] == 3 and addon[0][0] == 2, then afterwards canvas[0][0] == 5.
Speed is essential here as I am writing a very simple program to brute force a knapsack type problem and there will be tens of millions of combinations.
And as a small extra question (thanks if you can help!) what would be the fastest way of checking if any of the values in canvas exceed 100? Loops are slow!
Here is an SSE4 implementation that should perform pretty well on Nehalem (Core i7):
#include <limits.h>
#include <emmintrin.h>
#include <smmintrin.h>
static inline int canvas_add(int canvas[10][10], int addon[10][10])
{
    __m128i * cp = (__m128i *)&canvas[0][0];
    const __m128i * ap = (__m128i *)&addon[0][0];
    const __m128i vlimit = _mm_set1_epi32(100);
    __m128i vmax = _mm_set1_epi32(INT_MIN);
    __m128i vcmp;
    int cmp;
    int i;

    for (i = 0; i < 10 * 10; i += 4)
    {
        __m128i vc = _mm_loadu_si128(cp);
        __m128i va = _mm_loadu_si128(ap);
        vc = _mm_add_epi32(vc, va);
        vmax = _mm_max_epi32(vmax, vc);     // SSE4 *
        _mm_storeu_si128(cp, vc);
        cp++;
        ap++;
    }
    vcmp = _mm_cmpgt_epi32(vmax, vlimit);   // SSE2
    cmp = _mm_testz_si128(vcmp, vcmp);      // SSE4 *
    return cmp == 0;                        // non-zero if any element exceeds 100
}
Compile with gcc -msse4.1 ... or equivalent for your particular development environment.
For older CPUs without SSE4 (and with much more expensive misaligned loads/stores) you'll need to (a) use a suitable combination of SSE2/SSE3 intrinsics to replace the SSE4 operations (marked with an * above) and ideally (b) make sure your data is 16-byte aligned and use aligned loads/stores (_mm_load_si128/_mm_store_si128) in place of _mm_loadu_si128/_mm_storeu_si128.
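For illustration, the two SSE4.1 operations could be emulated with SSE2-only intrinsics roughly like this (a sketch, not tested; the helper names are invented):
#include <emmintrin.h>

static inline __m128i max_epi32_sse2(__m128i a, __m128i b)
{
    __m128i gt = _mm_cmpgt_epi32(a, b);            // all-ones in lanes where a > b
    return _mm_or_si128(_mm_and_si128(gt, a),      // take a in those lanes
                        _mm_andnot_si128(gt, b));  // take b elsewhere
}

static inline int any_lane_set_sse2(__m128i v)
{
    return _mm_movemask_epi8(v) != 0;              // replaces the _mm_testz_si128 test
}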
You can't do anything faster than loops in just C++. You would need to use some platform-specific vector instructions. That is, you would need to go down to the assembly language level. However, there are some C++ libraries that try to do this for you, so you can write at a high level and have the library take care of doing the low-level SIMD work that is appropriate for whatever architecture you are targeting with your compiler.
MacSTL is a library that you might want to look at. It was originally a Macintosh specific library, but it is cross platform now. See their home page for more info.
The best you're going to do in standard C or C++ is to recast that as a one-dimensional array of 100 numbers and add them in a loop. (Single subscripts will use a bit less processing than double ones, unless the compiler can optimize it out. The only way you're going to know how much of an effect there is, if there is one, is to test.)
You could certainly create a class where the addition would be one simple C++ instruction (canvas += addon;), but that wouldn't speed anything up. All that would happen is that the simple C++ instruction would expand into the loop above.
You would need to get into lower-level processing in order to speed that up. There are additional instructions on many modern CPUs to do such processing that you might be able to use. You might be able to run something like this on a GPU using something like Cuda. You could try making the operation parallel and running on several cores, but on such a small instance you'll have to know how caching works on your CPU.
The alternatives are to improve your algorithm (on a knapsack-type problem, you might be able to use dynamic programming in some way - without more information from you, we can't tell you), or to accept the performance. Tens of millions of operations on a 10 by 10 array turn into hundreds of billions of operations on numbers, and that's not as intimidating as it used to be. Of course, I don't know your usage scenario or performance requirements.
Two parts: first, consider your two-dimensional array [10][10] as a single array [100]. The layout rules of C++ should allow this. Second, check your compiler for intrinsic functions implementing some form of SIMD instructions, such as Intel's SSE. For example Microsoft supplies a set. I believe SSE has some instructions for checking against a maximum value, and even clamping to the maximum if you want.
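A minimal sketch of that flattening idea (add_and_check is an invented name; it also folds in the "exceeds 100" check from the question so the compiler can auto-vectorise a single simple loop):
bool add_and_check(int canvas[10][10], const int addon[10][10])
{
    int *c = &canvas[0][0];          // treat the 10x10 arrays as flat arrays of 100 ints
    const int *a = &addon[0][0];
    bool over = false;
    for (int i = 0; i < 100; ++i)
    {
        c[i] += a[i];
        over |= (c[i] > 100);        // track whether any cell exceeds 100
    }
    return over;
}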
Here is an alternative.
If you are 100% certain that all your values are between 0 and 100, you could change your type from int to uint8_t. Then you could add four of them at a time through a uint32_t without worrying about overflow (100 + 100 = 200 still fits in a byte, so no carry can spill into the neighbouring byte).
That is ...
#include <stdint.h>

uint8_t array1[10][10];
uint8_t array2[10][10];
uint8_t dest[10][10];

uint32_t *pArr1 = (uint32_t *) &array1[0][0];
uint32_t *pArr2 = (uint32_t *) &array2[0][0];
uint32_t *pDest = (uint32_t *) &dest[0][0];

int i;
for (i = 0; i < sizeof (dest) / sizeof (uint32_t); i++) {
    pDest[i] = pArr1[i] + pArr2[i];
}
It may not be the most elegant, but it could help keep you from going to architecture specific code. Additionally, if you were to do this, I would strongly recommend you comment what you are doing and why.
You should check out CUDA. This kind of problem is right up CUDA's street. Recommend the Programming Massively Parallel Processors book.
However, this does require CUDA capable hardware, and CUDA takes a bit of effort to get setup in your development environment, so it would depend how important this really is!
Good luck!

Using SSE instructions

I have a loop written in C++ which is executed for each element of a big integer array. Inside the loop, I mask some bits of the integer and then find the min and max values. I heard that if I use SSE instructions for these operations it will run much faster compared to a normal loop written using bitwise AND and if-else conditions. My question is: should I go for these SSE instructions? Also, what happens if my code runs on a different processor? Will it still work, or are these instructions processor specific?
SSE instructions are processor specific. You can look up which processor supports which SSE version on wikipedia.
Whether SSE code will be faster depends on many factors. The first is, of course, whether the problem is memory-bound or CPU-bound. If the memory bus is the bottleneck, SSE will not help much. Try simplifying your integer calculations; if that makes the code faster, it's probably CPU-bound, and you have a good chance of speeding it up.
Be aware that writing SIMD-code is a lot harder than writing C++-code, and that the resulting code is much harder to change. Always keep the C++ code up to date, you'll want it as a comment and to check the correctness of your assembler code.
Think about using a library like the IPP, that implements common low-level SIMD operations optimized for various processors.
SIMD, of which SSE is an example, allows you to do the same operation on multiple chunks of data. So, you won't get any advantage to using SSE as a straight replacement for the integer operations, you will only get advantages if you can do the operations on multiple data items at once. This involves loading some data values that are contiguous in memory, doing the required processing and then stepping to the next set of values in the array.
Problems:
1 If the code path is dependent on the data being processed, SIMD becomes much harder to implement. For example:
a = array [index];
a &= mask;
a >>= shift;
if (a < somevalue)
{
    a += 2;
    array [index] = a;
}
++index;
is not easy to do as SIMD:
a1 = array [index] a2 = array [index+1] a3 = array [index+2] a4 = array [index+3]
a1 &= mask a2 &= mask a3 &= mask a4 &= mask
a1 >>= shift a2 >>= shift a3 >>= shift a4 >>= shift
if (a1<somevalue) if (a2<somevalue) if (a3<somevalue) if (a4<somevalue)
// help! can't conditionally perform this on each column, all columns must do the same thing
index += 4
2 If the data is not contiguous then loading it into the SIMD registers is cumbersome
3 The code is processor specific. SSE is only on IA32 (Intel/AMD), and not all IA32 CPUs support SSE.
You need to analyse the algorithm and the data to see if it can be SSE'd and that requires knowing how SSE works. There's plenty of documentation on Intel's website.
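As a hedged sketch of how the branch in problem 1 can still be expressed with SSE2, by turning the per-element "if" into a compare mask and a blend (function and parameter names are invented for illustration):
#include <emmintrin.h>

void masked_add(int *array, int count, int mask, int shift, int somevalue)
{
    const __m128i vmask  = _mm_set1_epi32(mask);
    const __m128i vlimit = _mm_set1_epi32(somevalue);
    const __m128i vtwo   = _mm_set1_epi32(2);
    const __m128i vshift = _mm_cvtsi32_si128(shift);

    for (int index = 0; index + 4 <= count; index += 4)
    {
        __m128i a = _mm_loadu_si128((const __m128i *)(array + index));
        __m128i t = _mm_sra_epi32(_mm_and_si128(a, vmask), vshift);
        __m128i cond = _mm_cmplt_epi32(t, vlimit);      // all-ones where t < somevalue
        __m128i updated = _mm_add_epi32(t, vtwo);       // the "a += 2" branch
        // keep the original value where the condition failed, store t+2 where it passed
        __m128i result = _mm_or_si128(_mm_and_si128(cond, updated),
                                      _mm_andnot_si128(cond, a));
        _mm_storeu_si128((__m128i *)(array + index), result);
    }
}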
This kind of problem is a perfect example of where a good low level profiler is essential. (Something like VTune) It can give you a much more informed idea of where your hotspots lie.
My guess, from what you describe, is that your hotspot will probably be branch prediction failures resulting from the min/max calculations using if/else. SIMD intrinsics would let you use the min/max instructions directly; however, it might be worth first trying a branchless min/max calculation instead. This might achieve most of the gains with less pain.
Something like this:
inline int minimum(int a, int b)
{
    int mask = (a - b) >> 31;            // all ones if a < b (assumes a - b does not overflow)
    return ((a & mask) | (b & ~mask));   // picks a when a < b, otherwise b
}
If you use SSE instructions, you're obviously limited to processors that support these.
That means x86; SSE dates back to the Pentium III (introduced in 1999), so it has been around a long time.
SSE2, which as far as I can recall is the one that offers integer operations, is somewhat more recent (it arrived with the Pentium 4; the first AMD Athlon processors didn't support it).
In any case, you have two options for using these instructions. Either write the entire block of code in assembly (probably a bad idea. That makes it virtually impossible for the compiler to optimize your code, and it's very hard for a human to write efficient assembler).
Alternatively, use the intrinsics available with your compiler (if memory serves, they're usually defined in xmmintrin.h)
But again, the performance may not improve. SSE code poses additional requirements of the data it processes. Mainly, the one to keep in mind is that data must be aligned on 128-bit boundaries. There should also be few or no dependencies between the values loaded into the same register (a 128 bit SSE register can hold 4 ints. Adding the first and the second one together is not optimal. But adding all four ints to the corresponding 4 ints in another register will be fast)
It may be tempting to use a library that wraps all the low-level SSE fiddling, but that might also ruin any potential performance benefit.
I don't know how good SSE's integer operation support is, so that may also be a factor that can limit performance. SSE is mainly targeted at speeding up floating point operations.
If you intend to use Microsoft Visual C++, you should read this:
http://www.codeproject.com/KB/recipes/sseintro.aspx
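To illustrate the alignment and packed-integer-add points above, a minimal sketch (SSE2, invented names; alignas requires C++11, and aligned loads/stores fault on misaligned data):
#include <emmintrin.h>

alignas(16) int a[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
alignas(16) int b[8] = { 8, 7, 6, 5, 4, 3, 2, 1 };
alignas(16) int c[8];

void add_all()
{
    for (int i = 0; i < 8; i += 4)
    {
        __m128i va = _mm_load_si128((const __m128i *)(a + i));        // aligned 128-bit load
        __m128i vb = _mm_load_si128((const __m128i *)(b + i));
        _mm_store_si128((__m128i *)(c + i), _mm_add_epi32(va, vb));   // four int additions at once
    }
}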
We have implemented some image processing code, similar to what you describe but on a byte array, in SSE. The speedup compared to C code is considerable: depending on the exact algorithm, more than a factor of 4, even with respect to the Intel compiler. However, as you already mentioned, you have the following drawbacks:
Portability. The code will run on every Intel-like CPU, so also AMD, but not on other CPUs. That is not a problem for us because we control the target hardware. Switching compilers and even to a 64 bit OS can also be a problem.
You have a steep learning curve, but I found that after you grasp the principles writing new algorithms is not that hard.
Maintainability. Most C or C++ programmers have no knowledge of assembly/SSE.
My advice to you will be to go for it only if you really need the performance improvement, and you can't find a function for your problem in a library like the intel IPP, and if you can live with the portability issues.
I can tell from my experience that SSE brings a huge (4x and up) speedup over a plain C version of the code (no inline asm, no intrinsics used), but hand-optimized assembler can beat compiler-generated assembly if the compiler can't figure out what the programmer intended (believe me, compilers don't cover all possible code combinations, and they never will).
Also, the compiler cannot always lay out the data so that it runs at the fastest possible speed.
But you need a lot of experience to beat the Intel compiler (if that is even possible).
SSE instructions were originally just on Intel chips, but recently (since Athlon?) AMD supports them as well, so if you do code against the SSE instruction set, you should be portable to most x86 procs.
That being said, it may not be worth your time to learn SSE coding unless you're already familiar with assembler on x86's - an easier option might be to check your compiler docs and see if there are options to allow the compiler to autogenerate SSE code for you. Some compilers do very well vectorizing loops in this way. (You're probably not surprised to hear that the Intel compilers do a good job of this :)
Write code that helps the compiler understand what you are doing. GCC will understand and optimize SSE code such as this:
union Vector4f
{
// Easy constructor, defaulted to black/0 vector
Vector4f(float a = 0, float b = 0, float c = 0, float d = 1.0f):
X(a), Y(b), Z(c), W(d) { }
// Cast operator, for []
inline operator float* ()
{
return (float*)this;
}
// Const cast operator, for const []
inline operator const float* () const
{
return (const float*)this;
}
// ---------------------------------------- //
inline Vector4f& operator += (const Vector4f &v)
{
for(int i=0; i<4; ++i)
(*this)[i] += v[i];
return *this;
}
inline Vector4f& operator += (float t)
{
for(int i=0; i<4; ++i)
(*this)[i] += t;
return *this;
}
// Vertex / Vector
// Lower case xyzw components
struct {
float x, y, z;
float w;
};
// Upper case XYZW components
struct {
float X, Y, Z;
float W;
};
};
Just don't forget to have -msse -msse2 on your build parameters!
Although it is true that SSE is specific to some processors (SSE may be relatively safe, SSE2 much less in my experience), you can detect the CPU at runtime, and load the code dynamically depending on the target CPU.
SIMD intrinsics (such as SSE2) can speed this sort of thing up but take expertise to use correctly. They are very sensitive to alignment and pipeline latency; careless use can make performance even worse than it would have been without them. You'll get a much easier and more immediate speedup from simply using cache prefetching to make sure all your ints are in L1 in time for you to operate on them.
Unless your function needs a throughput of better than 100,000,000 integers per second, SIMD probably isn't worth the trouble for you.
Just to add briefly to what has been said before about different SSE versions being available on different CPUs: This can be checked by looking at the respective feature flags returned by the CPUID instruction (see e.g. Intel's documentation for details).
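For example, with GCC or Clang the check might look like this (a sketch using the <cpuid.h> helper; MSVC has a __cpuid intrinsic in <intrin.h> instead):
#include <cpuid.h>

bool cpu_has_sse2()
{
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))   // leaf 1: processor info and feature bits
        return false;
    return (edx & (1u << 26)) != 0;                // EDX bit 26 = SSE2, bit 25 = SSE
}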
Have a look at inline assembler for C/C++, here is a DDJ article. Unless you are 100% certain your program will run on a compatible platform you should follow the recommendations many have given here.
I agree with the previous posters. Benefits can be quite large but to get it may require a lot of work. Intel documentation on these instructions is over 4K pages. You may want to check out EasySSE (c++ wrappers library over intrinsics + examples) free from Ocali Inc.
I assume my affiliation with this EasySSE is clear.
I don't recommend doing this yourself unless you're fairly proficient with assembly. Using SSE will, more than likely, require careful reorganization of your data, as Skizz points out, and the benefit is often questionable at best.
It would probably be much better for you to write very small loops and keep your data very tightly organized and just rely on the compiler doing this for you. Both the Intel C Compiler and GCC (since 4.1) can auto-vectorize your code, and will probably do a better job than you. (Just add -ftree-vectorize to your CXXFLAGS.)
Edit: Another thing I should mention is that several compilers support assembly intrinsics, which would probably, IMO, be easier to use than the asm() or __asm{} syntax.

Fastest way to see how many bytes are equal between fixed length arrays

I have 2 arrays of 16 elements (chars) that I need to "compare" and see how many elements are equal between the two.
This routine is going to be used millions of times (a usual run is about 60 or 70 million times), so I need it to be as fast as possible. I'm working on C++ (C++Builder 2007, for the record)
Right now, I have a simple:
matches += array1[0] == array2[0];
repeated 16 times (profiling shows this to be about 30% faster than doing it with a for loop)
Is there any other way that could work faster?
Some data about the environment and the data itself:
I'm using C++Builder, which doesn't have any speed optimizations to take into account. I will try eventually with another compiler, but right now I'm stuck with this one.
The data will be different most of the times. 100% equal data is usually very very rare (maybe less than 1%)
UPDATE: This answer has been modified to make my comments match the source code provided below.
There is an optimization available if you have the capability to use SSE2 and popcnt instructions.
16 bytes happens to fit nicely in an SSE register. Using c++ and assembly/intrinsics, load the two 16 byte arrays into xmm registers, and cmp them. This generates a bitmask representing the true/false condition of the compare. You then use a movmsk instruction to load a bit representation of the bitmask into an x86 register; this then becomes a bit field where you can count all the 1's to determine how many true values you had. A hardware popcnt instruction can be a fast way to count all the 1's in a register.
This requires knowledge of assembly/intrinsics and SSE in particular. You should be able to find web resources for both.
If you run this code on a machine that does not support either SSE2 or popcnt, you must then iterate through the arrays and count the differences with your unrolled loop approach.
Good luck
Edit:
Since you indicated you did not know assembly, here's some sample code to illustrate my answer:
#include "stdafx.h"
#include <iostream>
#include "intrin.h"
inline unsigned cmpArray16( char (&arr1)[16], char (&arr2)[16] )
{
__m128i first = _mm_loadu_si128( reinterpret_cast<__m128i*>( &arr1 ) );
__m128i second = _mm_loadu_si128( reinterpret_cast<__m128i*>( &arr2 ) );
return _mm_movemask_epi8( _mm_cmpeq_epi8( first, second ) );
}
int _tmain( int argc, _TCHAR* argv[] )
{
unsigned count = 0;
char arr1[16] = { 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0 };
char arr2[16] = { 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0 };
count = __popcnt( cmpArray16( arr1, arr2 ) );
std::cout << "The number of equivalent bytes = " << count << std::endl;
return 0;
}
Some notes: This function uses SSE2 instructions and a popcnt instruction introduced in the Phenom processor (that's the machine that I use). I believe the most recent Intel processors with SSE4 also have popcnt. This function does not check for instruction support with CPUID; the function is undefined if used on a processor that does not have SSE2 or popcnt (you will probably get an invalid opcode instruction). That detection code is a separate thread.
I have not timed this code; the reason I think it's faster is because it compares 16 bytes at a time, branchless. You should modify this to fit your environment, and time it yourself to see if it works for you. I wrote and tested this on VS2008 SP1.
SSE prefers data that is aligned on a natural 16-byte boundary; if you can guarantee that then you should get additional speed improvements, and you can change the _mm_loadu_si128 instructions to _mm_load_si128, which requires alignment.
The key is to do the comparisons using the largest register your CPU supports, then fallback to bytes if necessary.
The below code demonstrates with using 4-byte integers, but if you are running on a SIMD architecture (any modern Intel or AMD chip) you could compare both arrays in one instruction before falling back to an integer-based loop. Most compilers these days have intrinsic support for 128-bit types so will NOT require ASM.
(Note that for the SIMD comparisons your arrays would have to be 16-byte aligned, and some processors (e.g. MIPS) would require the arrays to be 4-byte aligned for the int-based comparisons.)
E.g.
int* array1 = (int*)byteArray[0];
int* array2 = (int*)byteArray[1];

int same = 0;
for (int i = 0; i < 4; i++)
{
    // test as an int
    if (array1[i] == array2[i])
    {
        same += 4;
    }
    else
    {
        // test individual bytes
        char* bytes1 = (char*)(array1 + i);
        char* bytes2 = (char*)(array2 + i);
        for (int j = 0; j < 4; j++)
        {
            same += (bytes1[j] == bytes2[j]);
        }
    }
}
I can't remember exactly what the MSVC compiler supports for SIMD, but you could do something like:
// depending on the compiler you may have to load the words via an intrinsic
__m128 qw1 = *(__m128*)byteArray[0];
__m128 qw2 = *(__m128*)byteArray[1];

// again, depending on the compiler the comparison may have to be done via an intrinsic
if (qw1 == qw2)
{
    same = 16;
}
else
{
    // do int/byte testing
}
If you have the ability to control the location of the arrays, putting one right after the other in memory for instance, it might cause them to be loaded to the CPU's cache on the first access.
It depends on the CPU and its cache structure and will vary from one machine to another.
You can read about memory hierarchy and caches in Hennessy & Patterson's Computer Architecture: A Quantitative Approach
If you need absolute lowest footprint, I'd go with assembly code. I haven't done this in a while but I'll bet MMX (or more likely SSE2/3) have instructions that can enable you to do exactly that in very few instructions.
If matches are the common case then try loading the values as 32-bit ints instead of 16-bit ones, so you can compare two in one go (and count a hit as 2 matches).
If the two 32 bit values are not the same then you will have to test them separately (AND out the top and bottom 16 bit values).
The code will be more complex, but should be faster.
If you are targeting a 64-bit system you could do the same trick with 64 bit ints, and if you really want to push the limit then look at dropping into assembler and using the various vector based instructions which would let you work with 128 bits at once.
Magical compiler options will vary the time greatly. In particular making it generate SSE vectorization will likely get you a huge speedup.
Does this have to be platform independent, or will this code always run on the same type of CPU? If you restrict yourself to modern x86 CPUs, you may be able to use MMX instructions, which should allow you to operate on an array of 8 bytes in one clock tick. AFAIK, gcc allows you to embed assembly in your C code, and the Intel's compiler (icc) supports intrinsics, which are wrappers that allow you to call specific assembly instructions directly. Other SIMD instruction sets, such as SSE, may also be useful for this.
Is there any connection between the values in the arrays? Are some bytes more likely to be the same then others? Might there be some intrinsic order in the values? Then you could optimize for the most probable case.
If you explain what the data actually represents then there might be a totally different way to represent the data in memory that would make this type of brute force compare unnecessary. Care to elaborate on what the data actually represents??
Is it faster as one statement?
matches += (array1[0] == array2[0]) + (array1[1] == array2[1]) + ...;
If writing that 16 times is faster than a simple loop, then your compiler either sucks or you don't have optimization turned on.
Short answer: there's no faster way, unless you do vector operations on parallel hardware.
Try using pointers instead of arrays:
p1 = &array1[0];
p2 = &array2[0];
match += (*p1++ == *p2++);
// copy 15 times.
Of course you must measure this against other approaches to see which is fastest.
And are you sure that this routine is a bottleneck in your processing? Do you actually speed up the performance of your application as a whole by optimizing this? Again, only measurement will tell.
Is there any way you can modify the way the arrays are stored? Comparing 1 byte at a time is extremely slow considering you are probably using a 32-bit compiler. Instead if you stored your 16 bytes in 4 integers (32-bit) or 2 longs (64-bit), you would only need to perform 4 or 2 comparisons respectively.
The question to ask yourself is how much is the cost of storing the data as 4-integer or 2-long arrays. How often do you need to access the data, etc.
There's always the good old x86 REPNE CMPS instruction.
One extra possible optimization: if you are expecting that most of the time the arrays are identical then it might be slightly faster to do a memcmp() as the first step, setting '16' as the answer if the test returns true. If course if you are not expecting the arrays to be identical very often that would only slow things down.
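A minimal sketch of that memcmp fast path (untested; the array size is fixed at 16 as in the question, and the function name is invented):
#include <cstring>

int count_equal_bytes_fast(const char (&a)[16], const char (&b)[16])
{
    if (std::memcmp(a, b, 16) == 0)
        return 16;                      // identical arrays: skip the per-byte loop
    int same = 0;
    for (int j = 0; j < 16; ++j)
        same += (a[j] == b[j]);         // fall back to counting byte by byte
    return same;
}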