CUDA Thrust slow when operating on large vectors on my machine - C++

I'm a CUDA beginner reading through some Thrust tutorials. I wrote a simple (but terribly organized) piece of code to try to measure the speed-up from Thrust (is this idea correct?). I add two vectors of 10,000,000 ints into a third vector, once with plain arrays on the CPU and once with device_vectors on the GPU.
Here is the thing:
#include <iostream>
#include <cstdio>   // getchar
#include <ctime>    // clock
#include "cuda.h"
#include "cuda_runtime.h"
#include "device_launch_parameters.h"
#include <thrust/device_vector.h>
#include <thrust/host_vector.h>
#include <thrust/transform.h>
#include <thrust/functional.h>

#define N 10000000

int main(void)
{
    float time_cpu;
    float time_gpu;
    int *a = new int[N];
    int *b = new int[N];
    int *c = new int[N];
    for (int i = 0; i < N; i++)
    {
        a[i] = i;
        b[i] = i * i;
    }

    // CPU: time the plain array addition with clock()
    clock_t start_cpu, stop_cpu;
    start_cpu = clock();
    for (int i = 0; i < N; i++)
    {
        c[i] = a[i] + b[i];
    }
    stop_cpu = clock();
    time_cpu = (double)(stop_cpu - start_cpu) / CLOCKS_PER_SEC * 1000;
    std::cout << "Time to generate (CPU):" << time_cpu << std::endl;

    // GPU: device vectors, initialized element by element from host code
    thrust::device_vector<int> X(N);
    thrust::device_vector<int> Y(N);
    thrust::device_vector<int> Z(N);
    for (int i = 0; i < N; i++)
    {
        X[i] = i;
        Y[i] = i * i;
    }

    // GPU: time the thrust::transform with CUDA events
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start, 0);
    thrust::transform(X.begin(), X.end(),
                      Y.begin(),
                      Z.begin(),
                      thrust::plus<int>());
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);
    float elapsedTime;
    cudaEventElapsedTime(&elapsedTime, start, stop);
    std::cout << "Time to generate (thrust):" << elapsedTime << std::endl;
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    getchar();
    return 0;
}
The CPU results appear really fast, but the GPU runs REALLY slowly on my machine (i5-2320, 4 GB RAM, GTX 560 Ti): the CPU time is about 26 while the GPU time is around 30! Did I just use Thrust wrongly with some stupid error in my code, or is there a deeper reason?
As a C++ rookie, I checked my code over and over and still got a slower time on the GPU with Thrust, so I ran some experiments to compare five different ways of doing the vector add.
I use the Windows API QueryPerformanceFrequency()/QueryPerformanceCounter() as a unified time-measurement method.
Each of the experiments looks like this:
LARGE_INTEGER large_interger;
double f, c1, c2;
QueryPerformanceFrequency(&large_interger);
f = large_interger.QuadPart;
QueryPerformanceCounter(&large_interger);
c1 = large_interger.QuadPart;
for (int j = 0; j < 10; j++)
{
    for (int i = 0; i < N; i++) // CPU array adding
    {
        c[i] = a[i] + b[i];
    }
}
QueryPerformanceCounter(&large_interger);
c2 = large_interger.QuadPart;
printf("Time to generate (CPU array adding) %lf ms\n", (c2 - c1) * 1000 / f);
and here is my simple __global__ function for GPU array adding:
__global__ void add(int *a, int *b, int *c)
{
    int tid = threadIdx.x + blockIdx.x * blockDim.x;
    while (tid < N)
    {
        c[tid] = a[tid] + b[tid];
        tid += blockDim.x * gridDim.x;
    }
}
and the function is called as:
for (int j = 0; j < 10; j++)
{
    add<<<(N + 127) / 128, 128>>>(dev_a, dev_b, dev_c); // GPU array adding
}
I add vectors a[N] and b[N] into vector c[N] in a loop of 10 iterations, using:
add array on CPU
add std::vector on CPU
add thrust::host_vector on CPU
add thrust::device_vector on GPU
add array on GPU
With N=10000000 I get these results:
CPU array adding 268.992968ms
CPU std::vector adding 1908.013595ms
CPU Thrust::host_vector adding 10776.456803ms
GPU Thrust::device_vector adding 297.156610ms
GPU array adding 5.210573ms
And this confused me. I'm not familiar with the implementation of the template libraries: does performance really differ this much between containers and raw data structures?

Most of the execution time is being spent in your loop that is initializing X[i] and Y[i]. While this is legal, it's a very slow way to initialize large device vectors: each element assignment to a device_vector from host code triggers a separate transfer to the device. It would be better to create host vectors, initialize them, then copy those to the device. As a test, modify your code like this (right after the loop where you are initializing the device vectors X[i] and Y[i]):
} // this is your line of code
std::cout<< "Starting GPU run" <<std::endl; //add this line
cudaEvent_t start, stop; //this is your line of code
You will then see that the GPU timing results appear almost immediately after that added line prints out. So all of the time you're waiting is spent in initializing those device vectors directly from host code.
When I run this on my laptop, I get a CPU time of about 40 and a GPU time of about 5, so the GPU is running about 8 times faster than the CPU for the sections of code you are actually timing.
If you create X and Y as host vectors, and then create analogous d_X and d_Y device vectors, the overall execution time will be shorter, like so:
thrust::host_vector<int> X(N);
thrust::host_vector<int> Y(N);
thrust::device_vector<int> Z(N);
for (int i = 0; i < N; i++)
{
    X[i] = i;
    Y[i] = i * i;
}
thrust::device_vector<int> d_X = X;
thrust::device_vector<int> d_Y = Y;
and change your transform call to:
thrust::transform(d_X.begin(), d_X.end(),
d_Y.begin(),
Z.begin(),
thrust::plus<int>());
OK, so you've now indicated that the CPU measurement is faster than the GPU measurement. Sorry, I jumped to conclusions. My laptop is an HP laptop with a 2.6GHz Core i7 and a Quadro 1000M GPU, running CentOS 6.2 Linux. A few comments: if you're running any heavy display tasks on your GPU, that can detract from performance. Also, when benchmarking these things it's common practice to use the same mechanism for both measurements; you can use cudaEvents for both if you want, since it can time CPU code the same as GPU code. It's also common practice with thrust to do an untimed warm-up run, then repeat the test for the measurement, and likewise to run the test 10 times or more in a loop and divide to get an average. In my case, I can tell the clock() measurement is pretty coarse because successive runs give me 30, 40 or 50, whereas on the GPU measurement I get something like 5.18256. Some of these things may help, but I can't say exactly why your results and mine differ so much (on the GPU side).
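To illustrate that advice, here is a minimal sketch (mine, not part of the original code) of an untimed warm-up followed by an averaged, event-timed loop, assuming the d_X, d_Y and Z device vectors from the snippet above:
// Hedged sketch: untimed warm-up, then average over several timed runs.
thrust::transform(d_X.begin(), d_X.end(), d_Y.begin(), Z.begin(),
                  thrust::plus<int>());              // warm-up, not timed

const int runs = 10;
cudaEvent_t start, stop;
cudaEventCreate(&start);
cudaEventCreate(&stop);
cudaEventRecord(start, 0);
for (int r = 0; r < runs; r++)
{
    thrust::transform(d_X.begin(), d_X.end(), d_Y.begin(), Z.begin(),
                      thrust::plus<int>());
}
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);
float ms = 0.0f;
cudaEventElapsedTime(&ms, start, stop);
std::cout << "average time per transform: " << ms / runs << " ms" << std::endl;
cudaEventDestroy(start);
cudaEventDestroy(stop);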
OK, I did another experiment. The compiler makes a big difference on the CPU side. I compiled with the -O3 switch and the CPU time dropped to 0. Then I converted the CPU timing measurement from the clock() method to cudaEvents, and I got a CPU measured time of 12.4 (with -O3 optimization) and still 5.1 on the GPU side.
Your mileage will vary based on timing method and which compiler you are using on the CPU side.

First, Y[i]=i*i; does not fit in an integer for 10M elements. A 32-bit int holds roughly 2e9, and your code needs values up to about 1e14.
Second, it looks like the timing of transform is correct and should be faster than the CPU, regardless of which library you're using. Robert's suggestion to initialize vectors on CPU and then transfer to GPU is a good one for this case.
Third, since we can't do the integer multiply, below is some simpler CUDA library code (using ArrayFire, which I work on) to do something similar with floats, for your benchmarking:
int n = 10e6;
array x = array(seq(n));
array y = x * x;
timer t = timer::tic();
array z = x + y;
af::eval(z); af::sync();
printf("elapsed seconds: %g\n", timer::toc( t));
Good luck!

I ran a similar test recently using CUDA Thrust on my Quadro 1000M. I used thrust::sort_by_key as a benchmark to test its performance, and the result is too good to convince my boss: it takes 100+ ms to sort 512 MB of pairs.
For your problem, I am confused about 2 things.
(1) Why do you multiply time_cpu by 1000? Without the 1000, it is already in seconds.
time_cpu=(double)(stop_cpu-start_cpu)/CLOCKS_PER_SEC*1000;
(2) And, by mentioning 26, 30, 40, do you mean seconds or ms? cudaEvent reports elapsed time in ms, not s.

Related

How can I make it faster in C++11 with std::vector?

I have cv::Mat Mat_A and cv::Mat Mat_B, both of which are (800000 x 512) floats,
and the code below looks slow.
int rows = Mat_B.rows;
cv::Mat Mat_A = cv::repeat(img, rows, 1);   // tile img so it matches Mat_B's size
Mat_A = Mat_A - Mat_B;
cv::pow(Mat_A, 2, Mat_A);
cv::reduce(Mat_A, Mat_A, 1, CV_REDUCE_SUM);
double dis;
cv::Point point;
cv::minMaxLoc(Mat_A, &dis, 0, &point, 0);
How can I do this with std::vector?
I think it should be faster.
On my 2.4 GHz MacBook Pro it took 4 seconds, which is very slow.
I don't think you should use std::vector to do these operations. Image-processing (CV, a.k.a. computer vision) algorithms tend to be quite computationally heavy because there is so much data to deal with. The OpenCV 2.0 C++ API is highly optimized for this kind of operation. For example, cv::Mat has a header, and whenever a cv::Mat is copied with copy assignment or the copy constructor, only the header and a pointer to the data are copied; reference counting keeps track of the instances. So memory management is done for you, and that's a good thing.
https://docs.opencv.org/2.4/doc/tutorials/core/mat_the_basic_image_container/mat_the_basic_image_container.html
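As a small illustration of that header-copy behaviour (my own sketch, not from the linked tutorial):
// Hedged sketch: copy assignment shares data, clone() duplicates it.
#include <opencv2/core/core.hpp>
#include <iostream>

int main()
{
    cv::Mat a = cv::Mat::ones(3, 3, CV_32F); // one allocation
    cv::Mat b = a;          // copies only the header; b shares a's data
    cv::Mat c = a.clone();  // deep copy: new buffer, data actually duplicated

    b.at<float>(0, 0) = 5.f;
    std::cout << a.at<float>(0, 0) << "\n";  // 5 -- a sees b's write
    std::cout << c.at<float>(0, 0) << "\n";  // 1 -- c has its own data
    return 0;
}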
You could try to compile without debug symbols, i.e. release vs. debug. You can also try to compile with optimization flags, e.g. -O3 for gcc, which should reduce the size of your binary and speed up runtime operations. Maybe it will make a difference.
https://www.rapidtables.com/code/linux/gcc/gcc-o.html
Another thing you could try is to give your process a higher priority, i.e. the higher the priority, the less the process yields the CPU. Again, that might not make a lot of difference; it all depends on the other processes and their priorities, etc.
https://superuser.com/questions/42817/is-there-any-way-to-set-the-priority-of-a-process-in-mac-os-x
I hope that helps a bit.
Well, your thinking is wrong.
Why your program is slow:
Your CPU has to loop through a lot of numbers and do the calculations, which makes the computational complexity high. That's why it's slow. Your program's speed is proportional to the size of Mat A and B. You can check this point by reducing/increasing the size of Mat A and B.
Can we accelerate it with std::vector?
Sorry, but no. Using std::vector will not reduce the computational complexity. OpenCV's matrix arithmetic is about as good as it gets; rewriting it by hand will only lead to slower code.
How to accelerate the calculation: you need to enable the acceleration options for OpenCV.
You can see them at: https://github.com/opencv/opencv/wiki/CPU-optimizations-build-options . Intel provides the MKL library to accelerate matrix calculation; you could try it first.
Personally, the easiest approach is to use the GPU, but your machine doesn't have one, so that's out of scope here.
You keep iterating over the data again and again to do independent operations on it.
Something like the following iterates over the data only once.
// assumes Mat_B and img are cv::Mat
// (needs <limits> for std::numeric_limits and the OpenCV core header)
using px_t = float; // you mentioned float, so I'll assume both img and Mat_B use floats
int rows = Mat_B.rows;
cv::Mat output(1, rows, Mat_B.type());
auto output_ptr = output.ptr<px_t>(0);
auto img_ptr = img.ptr<px_t>(0);
int min_idx = 0;
int max_idx = 0;
px_t min_ele = std::numeric_limits<px_t>::max();
px_t max_ele = std::numeric_limits<px_t>::lowest();
for (int i = 0; i < rows; ++i)
{
    output_ptr[i] = 0;
    auto mat_row = Mat_B.ptr<px_t>(i);
    for (int j = 0; j < Mat_B.cols; ++j)
    {
        output_ptr[i] += (img_ptr[j] - mat_row[j]) * (img_ptr[j] - mat_row[j]);
    }
    if (output_ptr[i] < min_ele)
    {
        min_idx = i;
        min_ele = output_ptr[i];
    }
    if (output_ptr[i] > max_ele)
    {
        max_idx = i;
        max_ele = output_ptr[i];
    }
}
While I am also not sure whether it is faster, you can do this, assuming Mat_B contains uchar:
std::vector<uchar> array_B(Mat_B.rows * Mat_B.cols);
if (Mat_B.isContinuous())
    array_B.assign(Mat_B.data, Mat_B.data + Mat_B.total());

Multi Threading Performance in Multiplication of 2 Arrays / Images - Intel IPP

I'm using Intel IPP for the multiplication of 2 images (arrays).
I'm using Intel IPP 8.2, which comes with Intel Composer 2015 Update 6.
I created a simple function to multiply two large images (the whole project is attached, see below).
I wanted to see the gains from using the Intel IPP multi-threaded library.
Here is the simple project (I also attached the complete project from Visual Studio):
#include "ippi.h"
#include "ippcore.h"
#include "ipps.h"
#include "ippcv.h"
#include "ippcc.h"
#include "ippvm.h"
#include <ctime>
#include <iostream>
using namespace std;
const int height = 6000;
const int width = 6000;
Ipp32f mInput_image [1 * width * height];
Ipp32f mOutput_image[1 * width * height] = {0};
int main()
{
    IppiSize size = {width, height};

    double start = clock();
    for (int i = 0; i < 200; i++)
        ippiMul_32f_C1R(mInput_image, 6000 * 4, mInput_image, 6000 * 4, mOutput_image, 6000 * 4, size);
    double end = clock();

    double duration = (end - start) / static_cast<double>(CLOCKS_PER_SEC);
    cout << duration << endl;
    cin.get();
    return 0;
}
I compiled this project once using Intel IPP Single Threaded and once using Intel IPP Multi Threaded.
I tried different array sizes, and in all of them the multi-threaded version yields no gains (sometimes it is even slower).
I wonder: how come there is no gain from multi-threading in this task?
I know Intel IPP uses AVX, and I thought maybe the task becomes memory-bound?
I tried another approach, using OpenMP manually to build a multi-threaded version on top of the Intel IPP single-threaded implementation.
This is the code:
#include "ippi.h"
#include "ippcore.h"
#include "ipps.h"
#include "ippcv.h"
#include "ippcc.h"
#include "ippvm.h"
#include <ctime>
#include <iostream>
using namespace std;
#include <omp.h>
const int height = 5000;
const int width = 5000;
Ipp32f mInput_image [1 * width * height];
Ipp32f mOutput_image[1 * width * height] = {0};
int main()
{
    IppiSize size = {width, height};

    double start = clock();

    IppiSize blockSize = {width, height / 4};
    const int NUM_BLOCK = 4;
    omp_set_num_threads(NUM_BLOCK);

    Ipp32f* in;
    Ipp32f* out;

    // ippiMul_32f_C1R(mInput_image, width * 4, mInput_image, width * 4, mOutput_image, width * 4, size);

    #pragma omp parallel \
        shared(mInput_image, mOutput_image, blockSize) \
        private(in, out)
    {
        int id   = omp_get_thread_num();
        int step = blockSize.width * blockSize.height * id;
        in  = mInput_image  + step;
        out = mOutput_image + step;
        ippiMul_32f_C1R(in, width * 4, in, width * 4, out, width * 4, blockSize);
    }

    double end = clock();

    double duration = (end - start) / static_cast<double>(CLOCKS_PER_SEC);
    cout << duration << endl;
    cin.get();
    return 0;
}
The results were the same: again, no performance gain.
Is there a way to benefit from multi-threading in this kind of task?
How can I validate whether a task has become memory-bound and hence there is no benefit in parallelizing it?
Is there any benefit to parallelizing the task of multiplying 2 arrays on a CPU with AVX?
The computer I tried it on is based on a Core i7 4770K (Haswell).
Here is a link to the Project in Visual Studio 2013.
Thank You.
Your images occupy 200 MB in total (2 x 5000 x 5000 x 4 bytes). Each block therefore consists of 50 MB of data. This is more than six times the size of your CPU's L3 cache (see here). Each AVX vector multiplication operates on 256 bits of data, which is half a cache line, i.e. it consumes one cache line per vector instruction (half a cache line for each argument). A vectorised multiplication on Haswell has a latency of 5 cycles and the FPU can retire two such instructions per cycle (see here). The memory bus of the i7-4770K is rated at 25.6 GB/s (theoretical maximum!), or no more than 430 million cache lines per second. The nominal speed of the CPU is 3.5 GHz; the AVX part is clocked a bit lower, say at 3.1 GHz. At that speed, fully feeding the AVX engine would require an order of magnitude more cache lines per second than the memory bus can deliver.
Under those conditions, a single thread of vectorised code almost fully saturates the memory bus of your CPU. Adding a second thread might result in a very slight improvement; adding further threads only results in contention and added overhead. The only way to speed up such a calculation is to increase the memory bandwidth (see the back-of-the-envelope check after this list):
run on a NUMA system with more memory controllers and therefore higher aggregate memory bandwidth, e.g. a multisocket server board;
switch to a different architecture with much higher memory bandwidth, e.g. Intel Xeon Phi or a GPGPU.
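The arithmetic behind this can be spelled out in a small back-of-the-envelope program (my own sketch, using only the figures quoted above):
// Hedged sketch: rough arithmetic only, using the numbers from the answer above.
#include <cstdio>

int main()
{
    const double bus_bw        = 25.6e9;      // bytes/s, theoretical memory bandwidth
    const double bytes_per_mul = 3 * 4.0;     // two float loads + one float store
    const double pixels        = 5000.0 * 5000.0;

    // How many full-image multiplies per second the memory bus can sustain at best.
    double images_per_s_mem = bus_bw / (bytes_per_mul * pixels);

    // What a single AVX thread could compute if memory were free:
    // 8 floats per 256-bit multiply, 2 multiplies retired per cycle, ~3.1 GHz.
    double muls_per_s_avx   = 8.0 * 2.0 * 3.1e9;
    double images_per_s_avx = muls_per_s_avx / pixels;

    printf("memory bound : ~%.1f image multiplies/s\n", images_per_s_mem);  // ~85
    printf("compute bound: ~%.0f image multiplies/s\n", images_per_s_avx);  // ~2000
    // The compute limit is well over an order of magnitude above the memory
    // limit, so one thread already saturates the bus and extra threads can't help.
    return 0;
}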
From some researching on my own, it looks like your total CPU cache is around 8MB. 6000*4/4 (6000 floats split into blocks of 4) is 6MB. Multiply this by 2 (in and out), and you're outside of the cache.
I haven't tested this, but increasing the number of blocks should increase the performance. Try 8 to start with (your CPU supports hyperthreading up to 8 virtual cores).
Currently, each of the different processes spawned by OpenMP is having cache conflicts and having to (re)load from main memory. Reducing the size of the blocks can help with this. Having distinct caches would effectively increase the size of your cache, but it seems that's not an option.
If you're just doing this as a proof of principle, you may want to test this by running it on your graphics card. Although, that can be even harder to implement properly.
If you run with hyperthreading enabled, you should try the OpenMP version of IPP with 1 thread per core and set OMP_PLACES=cores if IPP doesn't do it automatically. If you use the Cilk-threaded IPP, try varying cilk_workers.
You might try a test case large enough to span multiple 4 KB pages. Then additional factors come into play. Ideally, IPP will put the threads to work on different pages. On Linux (or Mac?) transparent huge pages should kick in. On Windows, the Haswell CPUs introduced hardware page prefetch, which should reduce but not eliminate the importance of THP.

OpenMP - Poor performance when solving system of linear equations

I am trying to use OpenMP to parallelize a simple c++ code that solves a system of linear equations by Gauss elimination.
The relevant part of my code is:
#include <iostream>
#include <time.h>
using namespace std;
#define nl "\n"
void LinearSolve(double **& M, double *& V, const int N, bool parallel, int threads){
    //...
    for (int i = 0; i < N; i++){
        #pragma omp parallel for num_threads(threads) if(parallel)
        for (int j = i + 1; j < N; j++){
            double aux, * Mi = M[i], * Mj = M[j];
            aux = Mj[i] / Mi[i];
            Mj[i] = 0;
            for (int k = i + 1; k < N; k++) {
                Mj[k] -= Mi[k] * aux;
            }
            V[j] -= V[i] * aux;
        }
    }
    //...
}
class Time {
clock_t startC, endC;
time_t startT, endT;
public:
void start() {startC=clock(); time (&startT);};
void end() {endC=clock(); time (&endT);};
double timedifCPU() {return(double(endC-startC)/CLOCKS_PER_SEC);};
int timedif() {return(int(difftime (endT,startT)));};
};
int main (){
    Time t;
    double ** M, * V;
    int N = 5000;
    cout << "number of equations " << N << nl << nl;
    M = new double * [N];
    V = new double [N];
    for (int i = 0; i < N; i++){
        M[i] = new double [N];
    }
    for (int m = 1; m <= 16; m = 2 * m){
        cout << m << " threads" << nl;
        for (int i = 0; i < N; i++){
            V[i] = i + 1.5 * i * i;
            for (int j = 0; j < N; j++){
                M[i][j] = (j + 2.3) / (i - 0.2) + (i + 2) / (j + 3); // some function to get a regular matrix
            }
        }
        t.start();
        LinearSolve(M, V, N, m != 1, m);
        t.end();
        cout << "time " << t.timedif() << ", CPU time " << t.timedifCPU() << nl << nl;
    }
}
Since the code is extremely simple, I would expect the time to be inversely proportional to the number of threads. However, the typical result I get is (the code is compiled with gcc on Linux):
number of equations 5000
1 threads
time 217, CPU time 215.89
2 threads
time 125, CPU time 245.18
4 threads
time 80, CPU time 302.72
8 threads
time 67, CPU time 458.55
16 threads
time 55, CPU time 634.41
There is a decrease in time, but much less than I would like, and the CPU time mysteriously grows.
I suspect the problem is in memory sharing, but I have been unable to identify it. Access to row M[j] should not be a problem, since each thread writes to a different row of the matrix. There could be a problem in reading from row M[i], so I also tried to make a separate copy of this row for each thread by replacing the parallel loop with:
#pragma omp parallel num_threads(threads) if(parallel)
{
    double Mi[N];
    for (int j = i; j < N; j++) Mi[j] = M[i][j];
    #pragma omp for
    for (int j = i + 1; j < N; j++){
        double aux, * Mj = M[j];
        aux = Mj[i] / Mi[i];
        Mj[i] = 0;
        for (int k = i + 1; k < N; k++) {
            Mj[k] -= Mi[k] * aux;
        }
        V[j] -= V[i] * aux;
    }
}
Unfortunately it does not help at all.
I would very much appreciate any help.
Your problem is excessive OpenMP synchronization.
Having the #omp parallel inside the first loop means that each iteration of the outer loop comes with the whole synchronization overhead.
Take a look at the top-most chart on the image here (more detail can be found on the Allinea MAP OpenMP profiler introduction). The top line is application activity - and dark gray means "OpenMP synchronization" and green means "doing compute".
You can see a lot of dark gray in the right hand side of that top graph/chart - that is when the 16 threads are running. You're spending a lot of time synchronizing.
I also see a lot of time being spent in memory access (more than in compute), so it's probably this that is making what should be a balanced workload highly unbalanced and causing the synchronization delay.
As the other respondent suggested - it's worth reading other literature for ideas here.
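If it helps, here is a minimal sketch (mine, untested on your data) of one common restructuring: hoisting the parallel region out of the outer loop so the thread team is created only once, while the implicit barrier at the end of each worksharing loop still keeps the elimination steps ordered:
// Hedged sketch: one parallel region for the whole elimination instead of
// one per outer iteration. The implicit barrier at the end of "omp for"
// guarantees row i is fully eliminated before any thread starts row i+1.
void LinearSolveHoisted(double **& M, double *& V, const int N, int threads)
{
    #pragma omp parallel num_threads(threads)
    for (int i = 0; i < N; i++) {
        double *Mi = M[i];               // each thread keeps its own pointer
        #pragma omp for
        for (int j = i + 1; j < N; j++) {
            double *Mj  = M[j];
            double aux  = Mj[i] / Mi[i];
            Mj[i] = 0;
            for (int k = i + 1; k < N; k++)
                Mj[k] -= Mi[k] * aux;
            V[j] -= V[i] * aux;
        }                                // implicit barrier here
    }
}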
I think the underlying problem may be that traditional Gaussian elimination may not be suitable for parallelization.
Gaussian elimination is a process whereby each subsequent step relies on the result of the previous one, i.e. each iteration of your linear-solve loop depends on the results of the previous iteration, so it must be done serially. Try searching the literature for "parallel row reduction algorithms".
Also, glancing at your code, it looks like you will have a race condition.

Using loop to measure execution time of cuFFT, the relationship between time and how many loops is not linear

Basically I want to measure the time cost of the cuFFT function by putting the cuFFT execution function in a for loop. Here is the code I used the first time (this is the simple example from the Nvidia website for CUDA):
By the way, my CPU is an Intel i7-3630QM at 2.40 GHz and my GPU is an Nvidia NVS 5200M. The platform I used is Visual Studio 2012 with CUDA 5.5; the operating system is Windows 7, 64-bit.
#include "cuda_runtime.h"
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <math.h>
#include <time.h>
#include <cufft.h>
#include <cuda.h>
#include <cuda_runtime_api.h>
#define NX 1024
#define NY 1024
int main(int argc, char** argv) {
    int i;
    int Iter;
    cufftHandle plan;      // a plan containing all information needed for the Fourier transform
    cufftComplex *data1;   // storage for the complex input/output of the Fourier transform

    // Prepare the NX*NY 2D Fourier transform by allocating the input values in GPU memory
    cudaMalloc((void**)&data1, sizeof(cufftComplex) * NX * NY);
    // Prepare the 2D Fourier transform (NX*NY); type C2C means complex to complex
    cufftPlan2d(&plan, NX, NY, CUFFT_C2C);

    Iter = 1000;
    clock_t begin, end;
    double cost;
    begin = clock();
    for (i = 0; i < Iter; i++) {
        // Execute the FFT in place on data1; CUFFT_FORWARD means FFT, not inverse FFT
        cufftExecC2C(plan, data1, data1, CUFFT_FORWARD);
    }
    end = clock();
    cost = (double)(end - begin) / CLOCKS_PER_SEC;
    printf("%lf seconds\n", cost);

    cufftDestroy(plan);
    cudaFree(data1);
    return 0;
}
This program normally reports about 0.030 s. If I change the value of Iter (the number of loop iterations) to 1100, the result becomes 0.033 s, and with Iter = 1200 it is 0.036 s, which seems linear.
This stays true until Iter reaches about 1500: the time for Iter = 1500 is 0.195 s, and for Iter = 1600 it is 0.431 s.
I don't understand why the time cost behaves like this; can anyone help me?
Thank you in advance.
Modify your code as follows:
cudaDeviceSynchronize(); // add this line
end = clock();
And I believe you'll get sane results.
The CUFFT functions are asynchronous so they can support streamed overlap of copy and compute. That means they return before the underlying GPU operation is complete. So your for-loop is in effect queueing up a large number of transforms to be performed one after the other. But they are not necessarily finished by the time you complete your timing. The complex behavior you are observing I believe is related to exceeding an internal queue depth of the number of kernel launches that can be queued up, before additional requests must wait for queue slots to open up. But that isn't the central issue.
The central issue is that your timing method is flawed. This is just another example of the dangers inherent in using host-based timing methods to time asynchronous GPU activity.
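For reference, a hedged sketch (not part of the original answer) of timing the same loop with CUDA events instead of clock(); events are recorded in the GPU stream, so they measure the queued work itself rather than just the host-side launch cost. It reuses plan, data1 and Iter from the question's code:
// Hedged sketch: event-based timing of the cuFFT loop from the question.
cudaEvent_t start, stop;
cudaEventCreate(&start);
cudaEventCreate(&stop);
cudaEventRecord(start, 0);
for (int i = 0; i < Iter; i++) {
    cufftExecC2C(plan, data1, data1, CUFFT_FORWARD);
}
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);   // wait until all queued FFTs have actually finished
float ms = 0.0f;
cudaEventElapsedTime(&ms, start, stop);
printf("%f ms for %d FFTs\n", ms, Iter);
cudaEventDestroy(start);
cudaEventDestroy(stop);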

FFTW vs Matlab FFT

I posted this on MATLAB Central but didn't get any responses, so I figured I'd repost here.
I recently wrote a simple routine in MATLAB that uses an FFT in a for-loop; the FFT dominates the calculations. I wrote the same routine in mex just for experimentation purposes, calling the FFTW 3.3 library. It turns out that the MATLAB routine runs faster than the mex routine for very large arrays (about twice as fast). The mex routine uses wisdom and performs the same FFT calculations. I also know MATLAB uses FFTW, but is it possible their version is slightly more optimized? I even used the FFTW_EXHAUSTIVE flag and it is still about twice as slow for large arrays as the MATLAB counterpart. Furthermore, I ensured the MATLAB I used was single-threaded with the "-singleCompThread" flag, and the mex file I used was not in debug mode. Just curious whether this is expected, or whether there are some optimizations MATLAB is using under the hood that I don't know about. Thanks.
Here's the mex portion:
void class_cg_toeplitz::analysis() {
// This method computes CG iterations using FFTs
// Check for wisdom
if(fftw_import_wisdom_from_filename("cd.wis") == 0) {
mexPrintf("wisdom not loaded.\n");
} else {
mexPrintf("wisdom loaded.\n");
}
// Set FFTW Plan - use interleaved FFTW
fftw_plan plan_forward_d_buffer;
fftw_plan plan_forward_A_vec;
fftw_plan plan_backward_Ad_buffer;
fftw_complex *A_vec_fft;
fftw_complex *d_buffer_fft;
A_vec_fft = fftw_alloc_complex(n);
d_buffer_fft = fftw_alloc_complex(n);
// CREATE MASTER PLAN - Do this on an empty vector as creating a plan
// with FFTW_MEASURE will erase the contents;
// Use d_buffer
// This is somewhat dangerous because Ad_buffer is a vector; but it does not
// get resized so &Ad_buffer[0] should work
plan_forward_d_buffer = fftw_plan_dft_r2c_1d(d_buffer.size(),&d_buffer[0],d_buffer_fft,FFTW_EXHAUSTIVE);
plan_forward_A_vec = fftw_plan_dft_r2c_1d(A_vec.height,A_vec.value,A_vec_fft,FFTW_WISDOM_ONLY);
// A_vec_fft.*d_buffer_fft will overwrite d_buffer_fft
plan_backward_Ad_buffer = fftw_plan_dft_c2r_1d(Ad_buffer.size(),d_buffer_fft,&Ad_buffer[0],FFTW_EXHAUSTIVE);
// Get A_vec_fft
fftw_execute(plan_forward_A_vec);
// Find initial direction - this is the initial residual
for (int i=0;i<n;i++) {
d_buffer[i] = b.value[i];
r_buffer[i] = b.value[i];
}
// Start CG iterations
norm_ro = norm(r_buffer);
double fft_reduction = (double)Ad_buffer.size(); // Must divide by size of vector because inverse FFT does not do this
while (norm(r_buffer)/norm_ro > relativeresidual_cutoff) {
// Find Ad - use fft
fftw_execute(plan_forward_d_buffer);
// Get A_vec_fft.*fft(d) - A_vec_fft is only real, but d_buffer_fft
// has complex elements; Overwrite d_buffer_fft
for (int i=0;i<n;i++) {
d_buffer_fft[i][0] = d_buffer_fft[i][0]*A_vec_fft[i][0]/fft_reduction;
d_buffer_fft[i][1] = d_buffer_fft[i][1]*A_vec_fft[i][0]/fft_reduction;
}
fftw_execute(plan_backward_Ad_buffer);
// Calculate r'*r
rtr_buffer = 0;
for (int i=0;i<n;i++) {
rtr_buffer = rtr_buffer + r_buffer[i]*r_buffer[i];
}
// Calculate alpha
alpha = 0;
for (int i=0;i<n;i++) {
alpha = alpha + d_buffer[i]*Ad_buffer[i];
}
alpha = rtr_buffer/alpha;
// Calculate new x
for (int i=0;i<n;i++) {
x[i] = x[i] + alpha*d_buffer[i];
}
// Calculate new residual
for (int i=0;i<n;i++) {
r_buffer[i] = r_buffer[i] - alpha*Ad_buffer[i];
}
// Calculate beta
beta = 0;
for (int i=0;i<n;i++) {
beta = beta + r_buffer[i]*r_buffer[i];
}
beta = beta/rtr_buffer;
// Calculate new direction vector
for (int i=0;i<n;i++) {
d_buffer[i] = r_buffer[i] + beta*d_buffer[i];
}
*total_counter = *total_counter+1;
if(*total_counter >= iteration_cutoff) {
// Set total_counter to -1, this indicates failure
*total_counter = -1;
break;
}
}
// Store Wisdom
fftw_export_wisdom_to_filename("cd.wis");
// Free fft alloc'd memory and plans
fftw_destroy_plan(plan_forward_d_buffer);
fftw_destroy_plan(plan_forward_A_vec);
fftw_destroy_plan(plan_backward_Ad_buffer);
fftw_free(A_vec_fft);
fftw_free(d_buffer_fft);
};
Here's the matlab portion:
% Take FFT of A_vec.
A_vec_fft = fft(A_vec); % Take fft once
% Find initial direction - this is the initial residual
x = zeros(n,1); % search direction
r = zeros(n,1); % residual
d = zeros(n+(n-2),1); % search direction; pad to allow FFT
for i = 1:n
d(i) = b(i);
r(i) = b(i);
end
% Enter CG iterations
total_counter = 0;
rtr_buffer = 0;
alpha = 0;
beta = 0;
Ad_buffer = zeros(n+(n-2),1); % This holds the product of A*d - calculate this once per iteration and using FFT; only 1:n is used
norm_ro = norm(r);
while(norm(r)/norm_ro > 10^-6)
% Find Ad - use fft
Ad_buffer = ifft(A_vec_fft.*fft(d));
% Calculate rtr_buffer
rtr_buffer = r'*r;
% Calculate alpha
alpha = rtr_buffer/(d(1:n)'*Ad_buffer(1:n));
% Calculate new x
x = x + alpha*d(1:n);
% Calculate new residual
r = r - alpha*Ad_buffer(1:n);
% Calculate beta
beta = r'*r/(rtr_buffer);
% Calculate new direction vector
d(1:n) = r + beta*d(1:n);
% Update counter
total_counter = total_counter+1;
end
In terms of time, for N = 50000 and b = 1:n it takes about 10.5 seconds with mex and 4.4 seconds with matlab. I'm using R2011b. Thanks
A few observations rather than a definite answer since I do not know any of the specifics of the MATLAB FFT implementation:
Based on the code you have, I can see two explanations for the speed difference:
the speed difference is explained by differences in levels of optimization of the FFT
the while loop in MATLAB is executed a significantly smaller number of times
I will assume you already looked into the second issue and that the numbers of iterations are comparable. (If they aren't, it is most likely due to some accuracy issue and worth further investigation.)
Now, regarding FFT speed comparison:
Yes, the theory is that FFTW is faster than other high-level FFT implementations, but that is only relevant as long as you compare apples to apples: here you are comparing implementations at a level further down, the assembly level, where not only the selection of the algorithm but its actual optimization for a specific processor, by software developers with varying skills, comes into play.
I have optimized or reviewed optimized FFTs in assembly on many processors over the years (I was in the benchmarking industry), and great algorithms are only part of the story. There are considerations that are very specific to the architecture you are coding for (accounting for latencies, scheduling of instructions, optimization of register usage, arrangement of data in memory, accounting for branch taken/not-taken latencies, etc.) that make as much difference as the selection of the algorithm.
With N=500000, we are also talking about large memory buffers: yet another door for more optimizations that can quickly get pretty specific to the platform you run your code on: how well you manage to avoid cache misses won't be dictated by the algorithm so much as by how the data flows and what optimizations a software developer may have used to bring data in and out of memory efficiently.
Though I do not know the details of the MATLAB FFT implementation, I am pretty sure that an army of DSP engineers has been (and is still) honing on its optimization as it is key to so many designs. This could very well mean that MATLAB had the right combination of developers to produce a much faster FFT.
This is classic performance gain thanks to low-level and architecture-specific optimization.
Matlab uses FFT from the Intel MKL (Math Kernel Library) binary (mkl.dll). These are routines optimized (at the assembly level) by Intel for Intel processors. Even on AMD CPUs they seem to give nice performance boosts.
FFTW is a normal C library that is not as heavily optimized; hence the performance gain from using MKL.
I have found the following comment on the MathWorks website [1]:
Note on large powers of 2: For FFT dimensions that are powers of
2, between 2^14 and 2^22, MATLAB software uses special preloaded
information in its internal database to optimize the FFT computation.
No tuning is performed when the dimension of the FFT is a power of 2,
unless you clear the database using the command fftw('wisdom', []).
Although it relates to powers of 2, it may hint that MATLAB employs its own 'special wisdom' when using FFTW for certain (large) array sizes. Consider: 2^16 = 65536.
[1] R2013b Documentation available from http://www.mathworks.de/de/help/matlab/ref/fftw.html (accessed on 29 Oct 2013)
EDIT: #wakjah 's reply to this answer is accurate: FFTW does support split real and imaginary memory storage via its Guru interface. My claim about hacking is thus not accurate but can very well apply if FFTW's Guru interface is not used - which is the case by default, so beware still!
First, sorry for being a year late. I'm not convinced that the speed increase you see comes from MKL or other optimizations. There is something quite fundamentally different between FFTW and Matlab, and that is how complex data is stored in memory.
In Matlab, the real and imaginary parts of a complex vector X are separate arrays Xre[i] and Xim[i] (linear in memory, efficient when operating on either of them separately).
In FFTW, the real and imaginary parts are interlaced as double[2] by default, i.e. X[i][0] is the real part, and X[i][1] is the imaginary part.
Thus, to use the FFTW library in mex files one cannot use the Matlab array directly, but must allocate new memory first, then pack the input from Matlab into FFTW format, and then unpack the output from FFTW into Matlab format. i.e.
X = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * N);
Y = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * N);
then
for (size_t i=0; i<N; ++i) {
    X[i][0] = Xre[i];   // pack Matlab's separate real part into FFTW's interleaved format
    X[i][1] = Xim[i];   // ...and the imaginary part
}
then
for (size_t i=0; i<N; ++i) {
    Yre[i] = Y[i][0];   // unpack the interleaved FFTW output back into Matlab's split format
    Yim[i] = Y[i][1];
}
Hence, this requires 2x memory allocations + 4x reads + 4x writes -- all of size N. This does take a toll speed-wise on large problems.
I have a hunch that Mathworks may have hacked the FFTW3 code to allow it to read input vectors directly in the Matlab format, which avoids all of the above.
In this scenario, one can only allocate X and use X for Y to run FFTW in-place (as fftw_plan_*(N, X, X, ...) instead of fftw_plan_*(N, X, Y, ...)), since it'll be copied to the Yre and Yim Matlab vector, unless the application requires/benefits from keeping X and Y separate.
EDIT: Looking at the memory consumption in real time when running Matlab's fft2() and my code based on the fftw3 library, it shows that Matlab allocates only one additional complex array (the output), whereas my code needs two such arrays (the *fftw_complex buffer plus the Matlab output). An in-place conversion between the Matlab and fftw formats is not possible because Matlab's real and imaginary arrays are not consecutive in memory. This suggests that Mathworks hacked the fftw3 library to read/write the data using the Matlab format.
One other optimization for multiple calls, is to allocate persistently (using mexMakeMemoryPersistent()). I'm not sure if the Matlab implementation does this as well.
Cheers.
p.s. As a side note, the Matlab complex data storage format is more efficient for operating on the real or imaginary vectors separately; with FFTW's interleaved format you would have to step through memory with a stride of 2.