I'm trying matrix multiplication using MPI and would like to ask for some help to understand one issue. The machine has 6 cores, 32 KB L1 cache, 256 KB L2 cache and 15 MB L3 cache. The multiplication goes like this:
vector<vector<double>> mult_mpi(vector<vector<double>> m,
                                vector<vector<double>> n) {
    int rows = m.size();
    int size = n.size();
    vector<vector<double>> r(rows, vector<double>(size));
    for (int i = 0; i < rows; ++i)
        for (int k = 0; k < size; ++k)
            for (int j = 0; j < size; ++j)
                r[i][j] += m[i][k] * n[k][j];
    return r;
}
I have the same for int:
vector<vector<int>> mult_mpi(vector<vector<int>> m, vector<vector<int>> n);
Then I made some plots; different line colors indicate the number of nodes.
The following plot shows time spent to multiply two int matrices:
And the following plot shows time spent to multiply two double matrices:
Why do I get the same times for 4 and 6 nodes in the double case? Am I running into a memory bandwidth limit?
I tried multiple times in the last hour with the same result. I also checked the machine load with top, and as far as I can tell I'm alone there.
Are you sure that you aren't timing the allocation of 4K vector<>'s...?
vector<vector<>> is not a suitable type for squeezing out optimal performance. Matrix multiplication is one of the best operations regarding scalability and "computation density" with respect to memory accesses: the number of operations scales as O(N^3) while the amount of data scales as O(N^2).
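As an illustration (a minimal sketch, not part of the original answer), a flat row-major buffer keeps all elements in one contiguous allocation instead of one allocation per row:

#include <vector>

// Minimal sketch: a flat row-major matrix instead of vector<vector<double>>.
// One contiguous allocation; rows are adjacent in memory, which is friendlier
// to the caches and to handing the buffer to MPI or BLAS routines.
struct Matrix {
    int rows, cols;
    std::vector<double> data;                       // rows * cols elements
    Matrix(int r, int c) : rows(r), cols(c), data(r * c, 0.0) {}
    double&       operator()(int i, int j)       { return data[i * cols + j]; }
    const double& operator()(int i, int j) const { return data[i * cols + j]; }
};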
Matrix multiplication is in fact used to benchmark the TOP500 fastest systems on earth: HPL stands for "High Performance Linpack", Linpack being a reference implementation for linear algebra. Guess what... the operation used in the benchmark is DGEMM, that is "Double precision GEneral Matrix-Matrix multiply".
DGEMM is the name of the operation in the BLAS library, the de-facto standard for linear algebra. Today there are many locally optimized BLAS libraries, both commercial (Intel MKL, IBM ESSL, ...) and open source (ATLAS), but all of them use the same original BLAS interface (originally Fortran, now C too). (NOTE: the original reference implementation is not optimized.)
On top of BLAS there are also the LAPACK libraries: linear system solvers, eigensystems, ... There are optimized LAPACK libraries too, but usually 90% of the performance is obtained simply by using an optimized BLAS library.
One powerful MPI-based parallel library that I know well (not the only one... HPL is another) is ScaLAPACK. It contains PBLAS (parallel BLAS), and within it an optimized, parallel version of DGEMM, among other things.
ScaLAPACK comes with the SLUG (the ScaLAPACK Users' Guide), where you can find an excellent explanation of the block-cyclic distribution, the data distribution strategy used to squeeze optimal performance out of linear algebra problems arranged on parallel systems.
To obtain optimal performance however you will need to link your MPI executable with a locally optimized BLAS library. Or write your own, but you are not alone, so don't reinvent the wheel.
Local optimization is obtained by accessing matrices not by row, nor by column, but by block, with the block size tuned to optimize the usage of the caches and/or of the TLB. (I just remembered libgoto, another BLAS library, which optimized to minimize TLB misses and, some time ago, reached and surpassed Intel MKL on some systems.) Find more information, for example, in the ATLAS paper.
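As a rough illustration of block access (a minimal sketch, assuming square N x N row-major matrices in flat arrays, C zero-initialized, and a tile size BS tuned to the cache; not the code of any particular BLAS):

void matmul_blocked(const double* A, const double* B, double* C,
                    int N, int BS)
{
    // Walk the matrices tile by tile so each BS x BS block of A, B and C
    // stays resident in cache while it is being used.
    for (int ii = 0; ii < N; ii += BS)
        for (int kk = 0; kk < N; kk += BS)
            for (int jj = 0; jj < N; jj += BS)
                for (int i = ii; i < ii + BS && i < N; ++i)
                    for (int k = kk; k < kk + BS && k < N; ++k) {
                        double aik = A[i * N + k];
                        for (int j = jj; j < jj + BS && j < N; ++j)
                            C[i * N + j] += aik * B[k * N + j];
                    }
}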
In any case, if you really want to... I would start by analyzing how the other wheels have been forged before trying to make your own ;)
I want to put entries (many, many entries) into a sparse matrix provided by the Armadillo library.
If I do it like below (B is an Na x N^2 matrix):
sp_cx_mat BB(Na*Na, N*N*Na);
for (int i = 0; i < Na; i++) {
    BB(span(i*N, (i + 1)*N - 1), span(i*N*N, (i + 1)*N*N - 1)) = B;
}
it takes a lot of time. (N and Na are about 64, or larger, e.g. around 128.)
Is there any efficient way to make this faster, including kernel or optimisation settings? I'm already using the OpenBLAS kernel and the AVX optimisation setting.
There is overhead in dealing with sparse matrices. They are designed only for very large matrices where the vast majority of the elements (90%+) are zero. The internal representation is essentially a compressed version of the matrix.
If the total number of elements (zero and non-zero) fits into available memory, using a normal dense matrix might be faster. Dense matrices do not have the overhead of compression.
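A minimal sketch of the dense alternative, keeping the dimensions and the block-placement loop from the question and only swapping the matrix type (assuming the full dense matrix fits in memory):

#include <armadillo>

arma::cx_mat build_dense(const arma::cx_mat& B, int N, int Na)
{
    // Same placement loop as in the question, but on a dense matrix:
    // no compressed-storage bookkeeping on every block assignment.
    arma::cx_mat BB(Na * Na, N * N * Na, arma::fill::zeros);
    for (int i = 0; i < Na; ++i)
        BB(arma::span(i * N, (i + 1) * N - 1),
           arma::span(i * N * N, (i + 1) * N * N - 1)) = B;
    return BB;
}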
Sometimes it's simply easier to install more memory or run your algorithms on a bigger machine.
I have a high-performance dynamic program in C++ whose results are placed in an M × N table, which is roughly on the order of 2000 rows × 30000 columns.
Each entry (r, c) depends on a few of the rows in a few other columns in the table.
The most obvious way to parallelize the computation of row r across P processors is to statically partition the data: i.e., have processor p compute the entries (r, p + k P) for all valid k.
However, the entries for different columns take somewhat different amounts of time to compute (e.g., one might take five times as long as another), so I would like to partition the data dynamically to achieve better performance, avoiding the stalling of CPUs that finish early by having them instead steal work from CPUs that are still catching up.
One way to approach this is to keep an atomic global counter that specifies the number of columns already computed, and to increase it each time a CPU needs more work.
However, this forces all CPUs to access the same global counter after computing every entry in the table -- i.e., it serializes the program to some extent. Since computing each entry is more or less a quick process, this is somewhat undesirable.
So, my question is:
Is there a way to perform this dynamic partitioning in a more scalable fashion (i.e. without having to access a single global counter after computing every entry)?
I assume you're using a second array for the new values. If so, it sounds like the looping constructs from either TBB or Cilk Plus would do. Both use work stealing to apportion the work among the available processors, and when one processor runs out of work it will steal work from processors that still have work available. This evens out the "chunkiness" of the data.
To use Cilk, you'll need a compiler that supports Cilk Plus. Both GCC 4.9 and the Intel compiler support it. Typically you'd write something like:
cilk_for (int x = 0; x < XMAX; x++) {
    for (int y = 0; y < YMAX; y++) {
        perform-the-calculation;
    }
}
TBB's constructs are similar.
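For example, a minimal TBB sketch (compute_entry is a hypothetical stand-in for your per-entry work; TBB's scheduler splits the range and steals sub-ranges between threads, so slower columns get rebalanced automatically):

#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>

void compute_entry(int r, int c);   // hypothetical per-entry work, assumed to exist

void compute_row(int r, int num_cols)
{
    // TBB splits [0, num_cols) into sub-ranges and steals them between
    // threads, so columns that take longer are rebalanced automatically.
    tbb::parallel_for(tbb::blocked_range<int>(0, num_cols),
        [=](const tbb::blocked_range<int>& range) {
            for (int c = range.begin(); c != range.end(); ++c)
                compute_entry(r, c);
        });
}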
Another approach is to "tile" the calculation in a cache-oblivious way. That is, you implement your own divide-and-conquer algorithm that will break the calculation into chunks that will be cache-efficient. See http://www.1024cores.net/home/parallel-computing/cache-oblivious-algorithms/implementation for more information on cache-oblivious algorithms.
Barry
I have a key algorithm in which most of its runtime is spent on calculating a dense matrix product:
A*A'*Y, where: A is an m-by-n matrix,
A' is its conjugate transpose,
Y is an m-by-k matrix
Typical characteristics:
- k is much smaller than both m or n (k is typically < 10)
- m in the range [500, 2000]
- n in the range [100, 1000]
Based on these dimensions, according to the lessons of the matrix chain multiplication problem, it's clear that it's optimal in a number-of-operations sense to structure the computation as A*(A'*Y). My current implementation does that, and the performance boost from just forcing that associativity to the expression is noticeable.
My application is written in C++ for the x86_64 platform. I'm using the Eigen linear algebra library, with Intel's Math Kernel Library as a backend. Eigen is able to use IMKL's BLAS interface to perform the multiplication, and the boost from moving from Eigen's native SSE2 implementation to Intel's optimized, AVX-based implementation on my Sandy Bridge machine is also significant.
However, the expression A * (A.adjoint() * Y) (written in Eigen parlance) gets decomposed into two general matrix-matrix products (calls to the xGEMM BLAS routine), with a temporary matrix created in between. I'm wondering if, by going to a specialized implementation for evaluating the entire expression at once, I can arrive at an implementation that is faster than the generic one that I have now. A couple observations that lead me to believe this are:
Using the typical dimensions described above, the input matrix A usually won't fit in cache. Therefore, the specific memory access pattern used to calculate the three-matrix product would be key. Obviously, avoiding the creation of a temporary matrix for the partial product would also be advantageous.
A and its conjugate transpose obviously have a very related structure that could possibly be exploited to improve the memory access pattern for the overall expression.
Are there any standard techniques for implementing this sort of expression in a cache-friendly way? Most optimization techniques that I've found for matrix multiplication are for the standard A*B case, not larger expressions. I'm comfortable with the micro-optimization aspects of the problem, such as translating into the appropriate SIMD instruction sets, but I'm looking for any references out there for breaking this structure down in the most memory-friendly manner possible.
Edit: Based on the responses that have come in thus far, I think I was a bit unclear above. The fact that I'm using C++/Eigen is really just an implementation detail from my perspective on this problem. Eigen does a great job of implementing expression templates, but evaluating this type of problem as a simple expression just isn't supported (only products of 2 general dense matrices are).
At a higher level than how the expressions would be evaluated by a compiler, I'm looking for a more efficient mathematical breakdown of the composite multiplication operation, with a bent toward avoiding unneeded redundant memory accesses due to the common structure of A and its conjugate transpose. The result would likely be difficult to implement efficiently in pure Eigen, so I would likely just implement it in a specialized routine with SIMD intrinsics.
This is not a full answer (yet - and I'm not sure it will become one).
Let's think of the math first a little. Since matrix multiplication is associative we can either do
(A*A')*Y or A*(A'*Y).
Floating point operations for (A*A')*Y
2*m*n*m + 2*m*m*k //the twos come from addition and multiplication
Floating point operations for A*(A'*Y)
2*m*n*k + 2*m*n*k = 4*m*n*k
Since k is much smaller than m and n it's clear why the second case is much faster.
But by symmetry we could in principle reduce the number of calculations for A*A' by a factor of two (though this might not be easy to do with SIMD), so we could reduce the number of floating point operations of (A*A')*Y to
m*n*m + 2*m*m*k.
We know that both m and n are larger than k. Let's replace both m and n with a new variable z and find out where cases one and two are equal:
z*z*z + 2*z*z*k = 4*z*z*k //now simplify
z = 2*k.
So as long as m and n are both more than twice k, the second case will have fewer floating point operations. In your case m and n are both more than 100 and k is less than 10, so case two uses far fewer floating point operations.
In terms of efficient code: if the code is optimized for efficient use of the cache (as MKL and Eigen are), then large dense matrix multiplication is computation bound, not memory bound, so you don't have to worry about the cache. MKL is faster than Eigen since MKL uses AVX (and maybe FMA3 now?).
I don't think you will be able to do this more efficiently than you're already doing using the second case and MKL (through Eigen). Enable OpenMP to get maximum FLOPS.
You should calculate the efficiency by comparing FLOPS to the peak FLOPS of your processor. Assuming you have a Sandy Bridge/Ivy Bridge processor, the peak SP FLOPS is
frequency * number of physical cores * 8 (8-wide AVX SP) * 2 (addition + multiplication)
For double precision divide by two. If you have Haswell and MKL uses FMA, then double the peak FLOPS. To get the frequency right, you have to use the turbo boost value for all cores (it's lower than for a single core). You can look this up if you have not overclocked your system, or use CPU-Z on Windows or Powertop on Linux if you have an overclocked system.
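For example, with hypothetical numbers (a 4-core Sandy Bridge whose all-core turbo is 3.0 GHz, not necessarily the asker's machine):

peak SP FLOPS = 3.0e9 * 4 * 8 * 2 = 192 GFLOP/s
peak DP FLOPS = 192 / 2 = 96 GFLOP/s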
Use a temporary matrix to compute A'*Y, but make sure you tell Eigen that there's no aliasing going on: temp.noalias() = A.adjoint()*Y. Then compute your result, once again telling Eigen that the objects aren't aliased: result.noalias() = A*temp.
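A minimal sketch of that two-step evaluation with noalias(), assuming complex double matrices (any Eigen dense type works the same way):

#include <Eigen/Dense>

Eigen::MatrixXcd apply(const Eigen::MatrixXcd& A, const Eigen::MatrixXcd& Y)
{
    Eigen::MatrixXcd temp(A.cols(), Y.cols());
    temp.noalias() = A.adjoint() * Y;     // A' * Y, skip the aliasing check
    Eigen::MatrixXcd result(A.rows(), Y.cols());
    result.noalias() = A * temp;          // A * (A' * Y)
    return result;
}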
There would be redundant computation only if you performed (A*A')*Y, since in this case (A*A') is symmetric and only half of the computation is required. However, as you noticed, it is still much faster to perform A*(A'*Y), in which case there are no redundant computations. I confirm that the cost of creating the temporary is completely negligible here.
I guess that performing the following
result = A * (A.adjoint() * Y);
will be the same as doing this:
temp = A.adjoint() * Y;
result = A * temp;
If your matrix Y fits in the cache, you can probably take advantage of doing it like this:
result = A * (Y.adjoint() * A).adjoint()
or, if the previous notation is not allowed, like this:
temp = Y.adjoint() * A
result = A * temp.adjoint();
Then you don't need to compute the adjoint of matrix A or store a temporary adjoint matrix for A, which would be much more expensive than the one for Y.
If your matrix Y fits in the cache, it should be much faster to run a loop over the columns of A for the first multiplication and then over the rows of A for the second multiplication (keeping Y.adjoint() in the cache for the first multiplication and temp.adjoint() for the second), but I guess that internally Eigen is already taking care of these things.
I need to run a matrix-vector multiplication 240000 times per second. The matrix is 5x5 and is always the same, whereas the vector changes at each iteration. The data type is float. I was thinking of using some SSE (or similar) instructions.
I am concerned that the number of arithmetic operations is too small compared to the number of memory operations involved. Do you think I can get some tangible (e.g. > 20%) improvement?
Do I need the Intel compiler to do it?
Can you point out some references?
The Eigen C++ template library for vectors, matrices, ... has both optimised code for small fixed-size matrices (as well as dynamically sized ones) and optimised code that uses SSE optimisations, so you should give it a try.
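A minimal sketch with Eigen's fixed-size types (assuming the 5x5 float matrix and single-precision vectors from the question); fixed-size matrices live on the stack and the product is vectorized where the instruction set allows:

#include <Eigen/Dense>

using Mat5f = Eigen::Matrix<float, 5, 5>;
using Vec5f = Eigen::Matrix<float, 5, 1>;

Vec5f apply(const Mat5f& M, const Vec5f& x)
{
    return M * x;   // fixed sizes let Eigen unroll and vectorize this product
}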
In principle the speedup could be 4 times with SSE (8 times with AVX). Let me explain.
Let's call your fixed 5x5 matrix M and define the components of a 5D vector as (x,y,z,w,t). Now form a 5x4 matrix U from the first four vectors.
U =
xxxx
yyyy
zzzz
wwww
tttt
Next, do the matrix product MU = V. The matrix V contains the product of M and the first four vectors. The only problem is that for SSE we need to read in the rows of U, but in memory U is stored as xyzwtxyzwtxyzwtxyzwt, so we have to transpose it to xxxxyyyyzzzzwwwwtttt. This can be done with shuffles/blends in SSE. Once we have this format, the matrix product is very efficient.
Instead of taking O(5x5x4) operations with scalar code it only takes O(5x5) operations, i.e. a 4x speedup. With AVX the matrix U will be 5x8, so instead of taking O(5x5x8) operations it only takes O(5x5), i.e. an 8x speedup.
The matrix V, however, will be in xxxxyyyyzzzzwwwwtttt format so depending on the application it might have to be transposed to xyzwtxyzwtxyzwtxyzwt format.
Repeat this for the next four vectors (8 for AVX) and so forth until done.
If you have control over the vectors, for example if your application generates the vectors on the fly, then you can generate them in xxxxyyyyzzzzwwwwtttt format and avoid the transpose of the array. In that case you should get a 4x speedup with SSE and an 8x with AVX. If you combine this with threading, e.g. OpenMP, your speedup should be close to 16x (assuming four physical cores) with SSE. I think that's the best you can do with SSE.
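A minimal sketch of the kernel described above (assuming the vectors are already stored in xxxxyyyyzzzzwwwwtttt blocks of four and M is the fixed 5x5 matrix in row-major order; this is an illustration, not tuned code):

#include <immintrin.h>

// Computes V = M * U for one block of four vectors.
// U and V each hold 5 rows of 4 floats: xxxx yyyy zzzz wwww tttt.
void mult_block_sse(const float M[5][5], const float* U, float* V)
{
    __m128 u[5];
    for (int j = 0; j < 5; ++j)
        u[j] = _mm_loadu_ps(U + 4 * j);            // j-th components of the 4 vectors

    for (int i = 0; i < 5; ++i) {
        __m128 acc = _mm_setzero_ps();
        for (int j = 0; j < 5; ++j) {
            __m128 mij = _mm_set1_ps(M[i][j]);     // broadcast M(i,j)
            acc = _mm_add_ps(acc, _mm_mul_ps(mij, u[j]));
        }
        _mm_storeu_ps(V + 4 * i, acc);             // i-th components of the 4 results
    }
}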
Edit: Due to instruction level parallelism (ILP) you can get another factor of 2 in speedup, so the speedup with SSE could be 32x with four cores (64x with AVX), and again another factor of 2 with Haswell due to FMA3.
I would suggest using Intel IPP and abstracting yourself away from the dependency on specific techniques.
If you're using GCC, note that the -O3 option will enable auto-vectorization, which will automatically generate SSE or AVX instructions in many cases. In general, if you just write it as a simple for-loop, GCC will vectorize it. See http://gcc.gnu.org/projects/tree-ssa/vectorization.html for more information.
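For instance, a plain batched loop of this kind (a sketch, assuming n vectors stored contiguously as x[5*v + j] and results as y[5*v + i]; M is the fixed 5x5 matrix) is the sort of code the auto-vectorizer targets:

// M is the fixed 5x5 matrix; x holds n input vectors back to back,
// y receives the n result vectors.
void matvec5_batch(const float M[5][5], const float* x, float* y, int n)
{
    for (int v = 0; v < n; ++v)
        for (int i = 0; i < 5; ++i) {
            float sum = 0.0f;
            for (int j = 0; j < 5; ++j)
                sum += M[i][j] * x[5 * v + j];
            y[5 * v + i] = sum;
        }
}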
This should be easy, especially when you're on Core 2 or later: you need five _mm_dp_ps, one _mm_mul_ps, two _mm_add_ps, one ordinary multiplication, plus some shuffles, loads and stores (and if the matrix is fixed, you can keep most of it in SSE registers, if you don't need them for anything else).
As for memory bandwidth: we're talking about 2.4 megabytes of vectors, when memory bandwidths are in single-digit gigabytes per second.
What is known about the vector? Since the matrix is fixed, AND if there is a limited amount of values that the vector can take, then I'd suggest that you pre-compute the calculations and access them using a table look-up.
The classic optimization technique to trade memory for cycles...
I would recommend having a look at an optimised BLAS library, such as the Intel MKL or the AMD ACML. Based on your description I would assume that you'd be after the SGEMV level 2 matrix-vector routine, to do y = A*x style operations.
If you really want to implement something yourself, using the (available) SSE..SSE4 and AVX instruction sets can offer significant performance improvements in some cases, although this is exactly what a good BLAS library will be doing. You also need to think a lot about cache-friendly data access patterns.
I don't know if this is applicable in your case, but can you operate on "chunks" of vectors at a time? Rather than repeatedly doing a y = A*x style operation, can you operate on blocks of [y1 y2 ... yn] = A * [x1 x2 ... xn]? If so, you could use an optimised matrix-matrix routine, such as SGEMM. Due to the data access patterns this may be significantly more efficient than repeated calls to SGEMV. If it were me, I would try to go down this path...
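A minimal sketch of that batching with the CBLAS interface (assuming column-major storage, A is the 5x5 matrix, and X holds n vectors as its columns):

#include <cblas.h>

// Y = A * X, where A is 5x5 and X, Y are 5 x n (each column is one vector).
void apply_batch(const float* A, const float* X, float* Y, int n)
{
    cblas_sgemm(CblasColMajor, CblasNoTrans, CblasNoTrans,
                5, n, 5,        // M, N, K
                1.0f, A, 5,     // alpha, A, lda
                X, 5,           // B (the batched vectors), ldb
                0.0f, Y, 5);    // beta, C, ldc
}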
Hope this helps.
If you know the vectors in advance (e.g., doing all 240k at once), you'd get a better speedup by parallelising the loop than by going to SSE. If you've already taken that step, or you don't know them all at once, SSE could be a big benefit.
If the memory is contiguous, then don't worry too much about the memory operations. If you've got a linked list or something then you're in trouble, but it should be able to keep up without too much problem.
5x5 is a funny size, but you could do at least 4 flops in one SSE instruction and try to cut your arithmetic overheads. You don't need the Intel compiler, but it might be better; I've heard legends about how much better it is with arithmetic code. Visual Studio has intrinsics for dealing with SSE2, and I think up to SSE4 depending on what you need. Of course, you'd have to roll it yourself. Grabbing a library might be the smart move here.
Out of curiosity I decided to benchmark my own matrix multiplication function against the BLAS implementation... I was, to say the least, surprised at the result:
Custom Implementation, 10 trials of
1000x1000 matrix multiplication:
Took: 15.76542 seconds.
BLAS Implementation, 10 trials of
1000x1000 matrix multiplication:
Took: 1.32432 seconds.
This is using single precision floating point numbers.
My Implementation:
template<class ValT>
void mmult(const ValT* A, int ADim1, int ADim2, const ValT* B, int BDim1, int BDim2, ValT* C)
{
    if ( ADim2!=BDim1 )
        throw std::runtime_error("Error sizes off");
    memset((void*)C,0,sizeof(ValT)*ADim1*BDim2);
    int cc2,cc1,cr1;
    for ( cc2=0 ; cc2<BDim2 ; ++cc2 )
        for ( cc1=0 ; cc1<ADim2 ; ++cc1 )
            for ( cr1=0 ; cr1<ADim1 ; ++cr1 )
                C[cc2*ADim2+cr1] += A[cc1*ADim1+cr1]*B[cc2*BDim1+cc1];
}
I have two questions:
A matrix-matrix multiplication, say nxm * mxn, requires n*n*m multiplications, so in the case above that's 1000^3 or 1e9 operations. How is it possible on my 2.6 GHz processor for BLAS to do 10*1e9 operations in 1.32 seconds? Even if multiplications were a single operation and nothing else were being done, it should take ~4 seconds.
Why is my implementation so much slower?
A good starting point is the great book The Science of Programming Matrix Computations by Robert A. van de Geijn and Enrique S. Quintana-Ortí. They provide a free download version.
BLAS is divided into three levels:
Level 1 defines a set of linear algebra functions that operate on vectors only. These functions benefit from vectorization (e.g. from using SSE).
Level 2 functions are matrix-vector operations, e.g. some matrix-vector product. These functions could be implemented in terms of Level 1 functions. However, you can boost the performance of these functions if you can provide a dedicated implementation that makes use of some multiprocessor architecture with shared memory.
Level 3 functions are operations like the matrix-matrix product. Again you could implement them in terms of Level2 functions. But Level3 functions perform O(N^3) operations on O(N^2) data. So if your platform has a cache hierarchy then you can boost performance if you provide a dedicated implementation that is cache optimized/cache friendly. This is nicely described in the book. The main boost of Level3 functions comes from cache optimization. This boost significantly exceeds the second boost from parallelism and other hardware optimizations.
By the way, most (or even all) of the high performance BLAS implementations are NOT implemented in Fortran. ATLAS is implemented in C. GotoBLAS/OpenBLAS is implemented in C and its performance critical parts in Assembler. Only the reference implementation of BLAS is implemented in Fortran. However, all these BLAS implementations provide a Fortran interface such that it can be linked against LAPACK (LAPACK gains all its performance from BLAS).
Optimized compilers play a minor role in this respect (and for GotoBLAS/OpenBLAS the compiler does not matter at all).
IMHO no BLAS implementation uses algorithms like the Coppersmith–Winograd algorithm or the Strassen algorithm. The likely reasons are:
Maybe it's not possible to provide a cache-optimized implementation of these algorithms (i.e. you would lose more than you would win)
These algorithms are not numerically stable. As BLAS is the computational kernel of LAPACK, this is a no-go.
Although these algorithms have a nice time complexity on paper, the Big O notation hides a large constant, so it only starts to become viable for extremely large matrices.
Edit/Update:
The new and groundbreaking papers for this topic are the BLIS papers. They are exceptionally well written. For my lecture "Software Basics for High Performance Computing" I implemented the matrix-matrix product following their papers. Actually I implemented several variants of the matrix-matrix product. The simplest variant is entirely written in plain C and has less than 450 lines of code. All the other variants merely optimize the loops:
for (l=0; l<MR*NR; ++l) {
    AB[l] = 0;
}
for (l=0; l<kc; ++l) {
    for (j=0; j<NR; ++j) {
        for (i=0; i<MR; ++i) {
            AB[i+j*MR] += A[i]*B[j];
        }
    }
    A += MR;
    B += NR;
}
The overall performance of the matrix-matrix product only depends on these loops. About 99.9% of the time is spent here. In the other variants I used intrinsics and assembler code to improve the performance. You can see the tutorial going through all the variants here:
ulmBLAS: Tutorial on GEMM (Matrix-Matrix Product)
Together with the BLIS papers it becomes fairly easy to understand how libraries like Intel MKL can gain such performance, and why it does not matter whether you use row- or column-major storage!
The final benchmarks are here (we called our project ulmBLAS):
Benchmarks for ulmBLAS, BLIS, MKL, openBLAS and Eigen
Another Edit/Update:
I also wrote some tutorial on how BLAS gets used for numerical linear algebra problems like solving a system of linear equations:
High Performance LU Factorization
(This LU factorization is for example used by Matlab for solving a system of linear equations.)
I hope to find time to extend the tutorial to describe and demonstrate how to realise a highly scalable parallel implementation of the LU factorization like in PLASMA.
Ok, here you go: Coding a Cache Optimized Parallel LU Factorization
P.S.: I also did some experiments on improving the performance of uBLAS. It actually is pretty simple to boost (yeah, play on words :) ) the performance of uBLAS:
Experiments on uBLAS.
Here is a similar project with BLAZE:
Experiments on BLAZE.
So first of all BLAS is just an interface of about 50 functions. There are many competing implementations of the interface.
Firstly I will mention things that are largely unrelated:
Fortran vs C, makes no difference
Advanced matrix algorithms such as Strassen: implementations don't use them as they don't help in practice
Most implementations break each operation into small-dimension matrix or vector operations in the more or less obvious way. For example, a large 1000x1000 matrix multiplication may be broken into a sequence of 50x50 matrix multiplications.
These fixed-size small-dimension operations (called kernels) are hardcoded in CPU-specific assembly code using several CPU features of their target:
SIMD-style instructions
Instruction Level Parallelism
Cache-awareness
Furthermore these kernels can be executed in parallel with respect to each other using multiple threads (CPU cores), in the typical map-reduce design pattern.
Take a look at ATLAS which is the most commonly used open source BLAS implementation. It has many different competing kernels, and during the ATLAS library build process it runs a competition among them (some are even parameterized, so the same kernel can have different settings). It tries different configurations and then selects the best for the particular target system.
(Tip: That is why, if you are using ATLAS, you are better off building and tuning the library by hand for your particular machine than using a prebuilt one.)
First, there are more efficient algorithms for matrix multiplication than the one you're using.
Second, your CPU can do much more than one instruction at a time.
Your CPU executes 3-4 instructions per cycle, and if the SIMD units are used, each instruction processes 4 floats or 2 doubles. (of course this figure isn't accurate either, as the CPU can typically only process one SIMD instruction per cycle)
Third, your code is far from optimal:
You're using raw pointers, which means that the compiler has to assume they may alias. There are compiler-specific keywords or flags you can specify to tell the compiler that they don't alias (see the sketch after this list). Alternatively, you should use other types than raw pointers, which take care of the problem.
You're thrashing the cache by performing a naive traversal of each row/column of the input matrices. You can use blocking to perform as much work as possible on a smaller block of the matrix, which fits in the CPU cache, before moving on to the next block.
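As an illustration of the aliasing point (a minimal sketch using the compiler-specific __restrict keyword accepted by GCC, Clang and MSVC; the traversal order is also cache-friendlier than the original):

// The __restrict qualifiers promise the compiler that A, B and C never
// overlap, so it may keep values in registers and vectorize freely.
// C is assumed to be zero-initialized; matrices are n x n, row-major.
void mmult_restrict(const float* __restrict A,
                    const float* __restrict B,
                    float*       __restrict C,
                    int n)
{
    for (int i = 0; i < n; ++i)
        for (int k = 0; k < n; ++k) {
            float aik = A[i * n + k];
            for (int j = 0; j < n; ++j)
                C[i * n + j] += aik * B[k * n + j];
        }
}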
For purely numerical tasks, Fortran is pretty much unbeatable, and C++ takes a lot of coaxing to get up to a similar speed. It can be done, and there are a few libraries demonstrating it (typically using expression templates), but it's not trivial, and it doesn't just happen.
I don't know specifically about the BLAS implementation, but there are more efficient algorithms for matrix multiplication that have better than O(n^3) complexity. A well-known one is the Strassen algorithm.
Most arguments to the second question -- assembler, splitting into blocks etc. (but not the less-than-N^3 algorithms, they are really overdeveloped) -- play a role. But the low speed of your algorithm is caused essentially by the matrix size and the unfortunate arrangement of the three nested loops. Your matrices are so large that they do not fit at once in cache memory. You can rearrange the loops such that as much as possible is done on a row in cache, dramatically reducing cache refreshes (BTW, splitting into small blocks has an analogous effect, best if the loops over the blocks are arranged similarly). A model implementation for square matrices follows. On my computer its time consumption was about 1:10 compared to the standard implementation (like yours).
In other words: never program a matrix multiplication along the "row times column" scheme that we learned in school.
After having rearranged the loops, more improvements are obtained by unrolling loops, assembler code etc.
void vector(int m, double ** a, double ** b, double ** c) {
    int i, j, k;
    for (i=0; i<m; i++) {
        double * ci = c[i];
        for (k=0; k<m; k++) ci[k] = 0.;
        for (j=0; j<m; j++) {
            double aij = a[i][j];
            double * bj = b[j];
            for (k=0; k<m; k++) ci[k] += aij*bj[k];
        }
    }
}
One more remark: this implementation is even better on my computer than replacing everything with the BLAS routine cblas_dgemm (try it on your computer!). But calling dgemm_ of the Fortran library directly is much faster (1:4). I think this routine is in fact not Fortran but assembler code (I do not know what is in the library; I don't have the sources). It is totally unclear to me why cblas_dgemm is not as fast, since to my knowledge it is merely a wrapper for dgemm_.
This is a realistic speedup. For an example of what can be done with SIMD assembler over C++ code, see some example iPhone matrix functions - these were over 8x faster than the C version, and aren't even "optimized" assembly - there's no pipelining yet and there are unnecessary stack operations.
Also your code is not "restrict correct" - how does the compiler know that when it modifies C, it isn't modifying A and B?
With respect to the original matrix-multiply code, memory references are the main cause of bad performance for most operations. Memory runs 100-1000 times slower than cache.
Most of the speedup comes from applying loop optimization techniques to this triple-loop function. Two main loop optimization techniques are used: unrolling and blocking. For unrolling, we unroll the two outermost loops and block for data reuse in cache. Outer-loop unrolling helps optimize data access temporally by reducing the number of memory references to the same data at different times during the entire operation. Blocking the loop index at a specific number helps with retaining the data in cache. You can choose to optimize for the L2 or L3 cache.
https://en.wikipedia.org/wiki/Loop_nest_optimization
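As a rough illustration of outer-loop unrolling (a minimal sketch for C += A*B on square N x N row-major matrices, N assumed even; unrolling the i loop by two means each loaded element of B is reused for two rows of A):

void matmul_unroll2(const double* A, const double* B, double* C, int N)
{
    for (int i = 0; i < N; i += 2)          // outer loop unrolled by 2
        for (int k = 0; k < N; ++k) {
            double a0 = A[i * N + k];
            double a1 = A[(i + 1) * N + k];
            for (int j = 0; j < N; ++j) {
                double b = B[k * N + j];    // loaded once, used twice
                C[i * N + j]       += a0 * b;
                C[(i + 1) * N + j] += a1 * b;
            }
        }
}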
For many reasons.
First, Fortran compilers are highly optimized, and the language allows them to be as such. C and C++ are very loose in terms of array handling (e.g. the case of pointers referring to the same memory area). This means that the compiler cannot know in advance what to do, and is forced to create generic code. In Fortran, your cases are more streamlined, and the compiler has better control of what happens, allowing it to optimize more (e.g. using registers).
Another thing is that Fortran stores data column-wise, while C stores data row-wise. I haven't checked your code, but be careful how you perform the product. In C you must scan row-wise: this way you scan your array along contiguous memory, reducing cache misses. Cache misses are the first source of inefficiency.
Third, it depends on the BLAS implementation you are using. Some implementations might be written in assembler, and optimized for the specific processor you are using. The netlib version is written in Fortran 77.
Also, you are doing a lot of operations, most of them repeated and redundant. All those multiplications to obtain the index are detrimental to performance. I don't really know how this is done in BLAS, but there are a lot of tricks to prevent expensive operations.
For example, you could rework your code this way
template<class ValT>
void mmult(const ValT* A, int ADim1, int ADim2, const ValT* B, int BDim1, int BDim2, ValT* C)
{
    if ( ADim2!=BDim1 ) throw std::runtime_error("Error sizes off");
    memset((void*)C,0,sizeof(ValT)*ADim1*BDim2);
    int cc2,cc1,cr1, a1,a2,a3;
    for ( cc2=0 ; cc2<BDim2 ; ++cc2 ) {
        a1 = cc2*ADim2;
        a3 = cc2*BDim1;
        for ( cc1=0 ; cc1<ADim2 ; ++cc1 ) {
            a2 = cc1*ADim1;
            ValT b = B[a3+cc1];
            for ( cr1=0 ; cr1<ADim1 ; ++cr1 ) {
                C[a1+cr1] += A[a2+cr1]*b;
            }
        }
    }
}
Try it, I am sure you will save something.
On your #1 question, the reason is that matrix multiplication scales as O(n^3) if you use a trivial algorithm. There are algorithms that scale much better.