I'm trying to implement a multichannel integral image algorithm, but it's too slow (8 seconds for 200 images (640x480) on a Core 2 Quad). I expect it to reach 1 second for 200 images.
This is the profiling result (over 200 images, n_bins=8):
How can I optimize *ps = *psu + s?
Plain text version of code
Start by checking your compiler settings: are they set for maximum performance?
Then, depending on the architecture, calculation of an integral image has several bottlenecks:
1) The computation itself: some low-cost CPUs can't do integer math with good performance. No solution.
2) The data flow is not optimal. The solution is to provide optimal data flows (the right number of sequential read and write streams). For example, you can process 2 rows simultaneously (see the sketch after this list).
3) The data dependency of the algorithm. On modern CPUs this can be the biggest problem. The solution is to change the processing algorithm, for example calculating odd/even pixels without a dependency (more calculations, less dependency).
4) Processing can be done using a GPU.
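For illustration, a minimal sketch of the two-rows-at-a-time idea applied to the horizontal (row-sum) pass; the names (img, rowsum, w, h) are hypothetical, and it assumes an even height and row-major 8-bit pixels:
// Two independent read streams and two independent write streams per iteration.
for (int y = 0; y < h; y += 2)
{
    const uint8_t *src0 = &img[y * w];
    const uint8_t *src1 = &img[(y + 1) * w];
    uint32_t *dst0 = &rowsum[y * w];
    uint32_t *dst1 = &rowsum[(y + 1) * w];
    uint32_t s0 = 0, s1 = 0;
    for (int x = 0; x < w; x++)
    {
        s0 += src0[x]; dst0[x] = s0; // row y
        s1 += src1[x]; dst1[x] = s1; // row y+1
    }
}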
I have trouble believing that profile result. In this code
16 for (int x = 1; x < w + 1; x++, pg++, ps += n_bins, psu += n_bins) {
17 s += *pg;
18 *ps = *psu + s;
19 }
it says the lion's share of time is on line 18, very little on 17, and next to nothing on line 16.
Yet it is also doing a comparison, two increments, and three adds on every iteration.
Cache misses might explain it, but there's no harm in double-checking, which I do with this technique.
Regardless, the loop could be unrolled, for example:
int x = w;
while (x >= 4) { // main loop: four pixels per iteration
    s += pg[0];
    ps[n_bins*0] = psu[n_bins*0] + s;
    s += pg[1];
    ps[n_bins*1] = psu[n_bins*1] + s;
    s += pg[2];
    ps[n_bins*2] = psu[n_bins*2] + s;
    s += pg[3];
    ps[n_bins*3] = psu[n_bins*3] + s;
    x -= 4;
    pg += 4;
    ps += n_bins*4;
    psu += n_bins*4;
}
for (; --x >= 0;) { // remainder: up to three leftover pixels
    s += *pg;
    *ps = *psu + s;
    pg++;
    ps += n_bins;
    psu += n_bins;
}
If n_bins happens to be a compile-time constant, this could enable the compiler to do some more optimizing of the code in the while loop.
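One hypothetical way to arrange that is to lift the loop into a function template, so the compiler sees the bin count as a constant and can fold the address arithmetic (the element types here are guesses, since the question doesn't show the declarations):
template <int NBins>
void rowPass(const uint8_t *pg, uint32_t *ps, const uint32_t *psu, int w)
{
    uint32_t s = 0;
    for (int x = 0; x < w; x++, pg++, ps += NBins, psu += NBins)
    {
        s += *pg;        // running sum along the row
        *ps = *psu + s;  // add the row above
    }
}
A caller would then dispatch on the runtime value, e.g. if (n_bins == 8) rowPass<8>(pg, ps, psu, w);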
You probably don't compute integral images just for the sake of computing integral images.
I imagine two situations:
1) you use the integral images on every pixel to compute a box filter or similar.
2) you use them at a much smaller number of places.
In case 1), the computation of the integral images will probably not be the bottleneck in your application.
In case 2), you should wonder whether it is worth computing the entire integral images.
This said, parallelization with four threads is also an option. The easiest approach is to let every thread compute every fourth image.
You can also split every image in four, but then you will be penalized both by the need to synchronize the threads and by the fact that prefix sums are constrained by a data dependency. (You can split the image in four and compute separate integral images, but after this step you will need to add a constant to three of the quarter images.)
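A minimal sketch of the every-fourth-image split, assuming a hypothetical Image type and per-image routine computeIntegralImage (neither appears in the question):
#include <thread>
#include <vector>

void computeAll(std::vector<Image>& images)
{
    const int nThreads = 4;
    std::vector<std::thread> pool;
    for (int t = 0; t < nThreads; t++)
        pool.emplace_back([&images, t] {
            // thread t handles images t, t+4, t+8, ...
            for (size_t i = t; i < images.size(); i += 4)
                computeIntegralImage(images[i]);
        });
    for (auto& th : pool)
        th.join();
}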
I have a while loop that continues as long as an energy variable (type double) has not converged to below a certain threshold. One of the variables needed to calculate this energy is an Armadillo matrix of doubles, named f_mo. In the while loop, f_mo updates iteratively, so I calculate f_mo at the beginning of each loop as:
arma::mat f_mo = h_core_mo; // h_core_mo is an Armadillo matrix of doubles
for (size_t p = 0; p < n_mo; p++) {          // n_mo is of type size_t
    for (size_t q = 0; q < n_mo; q++) {
        double sum = 0.0;
        for (size_t i = 0; i < n_occ; i++) { // n_occ is of type size_t
            //f_mo(p,q) += 2.0*g_mat_full_qp1_qp1_mo(p*n_mo + q, i*n_mo + i) - g_mat_full_qp1_qp1_mo(p*n_mo + i, i*n_mo + q); // all g_mat_ are Armadillo matrices of doubles
            sum += 2.0*g_mat_full_qp1_qp1_mo(p*n_mo + q, i*n_mo + i) - g_mat_full_qp1_qp1_mo(p*n_mo + i, i*n_mo + q);
        }
        for (size_t i2 = 0; i2 < n_occ2; i2++) { // n_occ2 is of type size_t
            //f_mo(p,q) -= 1.0*g_mat_full_qp1_qp2_mo(p*n_mo + q, i2*n_mo2 + i2);
            sum -= 1.0*g_mat_full_qp1_qp2_mo(p*n_mo + q, i2*n_mo2 + i2);
        }
        f_mo(p,q) += sum;
    }
}
But say I replace the sum (which I add at the end to f_mo(p,q)) with direct addition to f_mo(p,q) (the commented-out code). The output f_mo matrices are identical to machine precision. Nothing about the code should change. The only variables affected in the loop are sum and f_mo. And YET, the code converges to a different energy and in a vastly different number of while-loop iterations. I am at a loss as to the cause of the difference. When I run the same code 2, 3, 4, 5 times, I get the same result every time. When I recompile with no optimization, I get the same issue. When I run on a different computer (controlling for environment), I yet again get a discrepancy in the number of while-loop iterations despite identical f_mo, and the total number of iterations for each method (sum += and f_mo(p,q) +=) differs.
It is worth noting that the point at which the two codes' outputs differ is always g_mat_full_qp1_qp2_mo, which is recalculated later in the while loop. HOWEVER, every variable going into the calculation of g_mat_full_qp1_qp2_mo is identical between the two codes. This leads me to think there is something more profound about C++ that I do not understand. I welcome any ideas as to how you would proceed in debugging this behavior (I am all but certain it is not a typical bug, and I've controlled for environment and optimization).
I'm going to assume this is a Hartree-Fock, or some other kind of electronic structure calculation, where you are adding the two-electron integrals to the core Hamiltonian, and apply some domain knowledge.
Part of that assumption is that the individual elements of the two-electron integrals are very small, in particular compared to the core Hamiltonian. Hence, as 1201ProgramAlarm mentions in their comment, the order of addition will matter. You will get a more accurate result if you add the smaller numbers together first, to avoid losing precision when adding two numbers many orders of magnitude apart. Because you iterate this process until the Fock matrix f_mo has tightly converged, you eventually converge to the same value.
In order to add up the numbers in a more accurate order, and hopefully converge faster, most electronic structure programs have a separate routine to calculate the two-electron integrals, and then add them to the core Hamiltonian, which is what you are doing, element by element, in your example code.
Presentation on numerical computing.
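To see concretely why the order of addition matters, here is a tiny demonstration (not from the original answer) that double-precision addition is not associative when magnitudes differ:
#include <cstdio>

int main()
{
    double big = 1.0e16, small = 1.0;
    double a = (big + small) + small;  // each small term is absorbed and lost
    double b = big + (small + small);  // small terms combined first survive
    std::printf("%.1f\n%.1f\n", a, b); // prints two different values
    return 0;
}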
Here's the basic algorithm to count the number of distinct values in a buffer:
unsigned getCount(const uint8_t data[16])
{
    uint8_t pop[256] = { 0 };
    unsigned count = 0;
    for (int i = 0; i < 16; ++i)
    {
        uint8_t b = data[i];
        if (0 == pop[b])
            count++;
        pop[b]++;
    }
    return count;
}
Can this be done somehow in NEON efficiently by loading into a q-reg and doing some bit magic? Alternatively, can I efficiently say that data has all elements identical, or contains only two distinct values, or more than two?
For example, using vminv_u8 and vmaxv_u8 I can find the min and max elements, and if they are equal I know that data has identical elements. If not, then I can vceq_u8 with the min value and vceq_u8 with the max value, then vorr_u8 these results and check that I have all 1s in the result. Basically, in NEON it can be done this way. Any ideas how to make it better?
unsigned getCountNeon(const uint8_t data[16])
{
    uint8x16_t s = vld1q_u8(data);
    uint8x16_t smin = vdupq_n_u8(vminvq_u8(s));
    uint8x16_t smax = vdupq_n_u8(vmaxvq_u8(s));
    uint8x16_t res = vdupq_n_u8(1);
    uint8x16_t one = vdupq_n_u8(1);
    for (int i = 0; i < 14; ++i) // this obviously needs to be unrolled
    {
        s = vbslq_u8(vceqq_u8(s, smax), smin, s); // replace max with min
        uint8x16_t smax1 = vdupq_n_u8(vmaxvq_u8(s));
        res = vaddq_u8(res, vaddq_u8(vceqq_u8(smax1, smax), one)); // +1 whenever a new max appears
        smax = smax1;
    }
    res = vaddq_u8(res, vaddq_u8(vceqq_u8(smax, smin), one));
    return vgetq_lane_u8(res, 0);
}
With some optimizations and improvements, perhaps a 16-byte block can be processed in 32-48 NEON instructions. Can this be done better in ARM? Unlikely.
Some background on why I ask this question: as I work on an algorithm I'm trying different approaches to processing the data, and I'm not sure yet what exactly I'll use in the end. Information that might be of use:
count of distinct elements per 16-byte block
value that repeats most per 16-byte block
average per block
median per block
speed of light?.. that's a joke, it cannot be computed in neon from 16-byte block :)
So, I'm trying stuff, and before I use any approach I want to see if that approach can be well optimized. For example, average per block will basically run at memcpy speed on arm64.
If you're expecting a lot of duplicates, and can efficiently get a horizontal min with vminv_u8, this might be better than scalar. Or not, maybe NEON->ARM stalls for the loop condition kill it. >.< But it should be possible to mitigate that with unrolling (and saving some info in registers to figure out how far you overshot).
// Originally sketched as pseudo-code; written out here with AArch64 NEON
// intrinsics (vminvq_u8 / vmaxvq_u8 are the horizontal min/max assumed above).
// High latency could be ok, but stalling isn't.
int count_dups(uint8x16_t v)
{
    int dups = (0xFF == vmaxvq_u8(v)); // count=1 if any elements are 0xFF to start
    uint8_t hmin = vminvq_u8(v);
    while (hmin != 0xFF) {
        uint8x16_t min_bcast = vdupq_n_u8(hmin); // broadcast the minimum
        uint8x16_t matches = vceqq_u8(v, min_bcast);
        v = vorrq_u8(v, matches);                // min and its dups become 0xFF
        hmin = vminvq_u8(v);                     // NEON->integer for the loop condition
        dups++;
    }
    return dups;
}
This turns unique values into 0xFF, one set of duplicates at a time.
The loop-carried dep chain through v / hmin stays in vector registers; it's only the loop branch that needs NEON->integer.
Minimizing / hiding NEON->integer/ARM penalties
Unroll by 8 with no branches on hmin, leaving the results in 8 NEON registers. Then transfer those 8 values; back-to-back transfers of multiple NEON registers to ARM only incur one total stall (of 14 cycles on whatever hardware Jake tested). Out-of-order execution could also hide some of the penalty for this stall. Then check those 8 integer registers with a fully-unrolled integer loop.
Tune the unroll factor to be large enough that you usually don't need another round of SIMD operations for most input vectors. If almost all of your vectors have at most 5 unique values, then unroll by 5 instead of 8.
Instead of transferring multiple hmin results to integer, count them in NEON. If you can use ARM32 NEON partial-register tricks to put multiple hmin values in the same vector for free, it's only a bit more work to shuffle 8 of them into one vector and compare for not-equal to 0xFF. Then horizontally add that compare result to get a -count.
Or if you have values from different input vectors in different elements of a single vector, you can use vertical operations to add results for multiple input vectors at once without needing horizontal ops.
There's almost certainly room to optimize this, but I don't know ARM that well, or ARM performance details. NEON's hard to use for anything conditional because of the big performance penalty for NEON->integer, totally unlike x86. Glibc has a NEON memchr with NEON->integer in the loop, but I don't know whether it's actually used, or whether it's faster than scalar.
Speeding up repeated calls to the scalar ARM version:
Zeroing the 256-byte buffer every time would be expensive, but we don't need to do that. Use a sequence number to avoid needing to reset:
Before every new set of elements: ++seq;
For each element in the set:
sum += (histogram[i] == seq);
histogram[i] = seq; // no data dependency on the load result, unlike ++
You might make the histogram an array of uint16_t or uint32_t to avoid needing to re-zero if a uint8_t seq wraps. But then it takes more cache footprint, so maybe just re-zeroing every 254 sequence numbers makes the most sense.
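A concrete (untested) sketch of that idea with a uint8_t sequence number, re-zeroing on wrap as suggested; it returns the distinct count like getCount above, by counting the already-seen elements:
#include <stdint.h>
#include <string.h>

unsigned getCountSeq(const uint8_t data[16], uint8_t histogram[256], uint8_t *seq)
{
    if (++*seq == 0) {                  // seq wrapped: stale entries could alias it
        memset(histogram, 0, 256);
        *seq = 1;
    }
    unsigned dups = 0;
    for (int i = 0; i < 16; ++i) {
        uint8_t b = data[i];
        dups += (histogram[b] == *seq); // seen earlier in this block?
        histogram[b] = *seq;            // store: no data dependency on the load result
    }
    return 16 - dups;                   // number of distinct values
}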
I am the author of an open source scientific code called vampire ( http://github.com/richard-evans/vampire ), and being compute-intensive means any improvement in code performance can significantly increase the amount of research that can be done. Typical runtimes of this code can be hundreds of core hours, so I am always looking for ways to improve the performance-critical sections. However, I have become a bit stuck on the following relatively innocuous-looking piece of code, which makes up around 40% of the runtime:
for (int atom = start_index; atom < end_index; atom++){
    register double Hx = 0.0;
    register double Hy = 0.0;
    register double Hz = 0.0;
    const int start = atoms::neighbour_list_start_index[atom];
    const int end = atoms::neighbour_list_end_index[atom] + 1;
    for (int nn = start; nn < end; nn++){
        const int natom = atoms::neighbour_list_array[nn];
        const double Jij = atoms::i_exchange_list[atoms::neighbour_interaction_type_array[nn]].Jij;
        Hx -= Jij * atoms::x_spin_array[natom];
        Hy -= Jij * atoms::y_spin_array[natom];
        Hz -= Jij * atoms::z_spin_array[natom];
    }
    atoms::x_total_spin_field_array[atom] += Hx;
    atoms::y_total_spin_field_array[atom] += Hy;
    atoms::z_total_spin_field_array[atom] += Hz;
}
The high-level overview of the function and variables of this code is as follows: there is a 1D array of a physical vector (split into three 1D arrays, one per component x, y, z, for memory caching purposes: atoms::x_spin_array, etc.) called 'spin'. Each of these spins interacts with some other spins, and all the interactions are stored as a 1D neighbour list (atoms::neighbour_list_array). The relevant list of interactions for each atom is determined by start and end indices into the neighbour list array, held in two separate arrays. At the end of the calculation each atomic spin has an effective field which is the vector sum of the interactions.
Given the small amount of code and the sizable fraction of the runtime it occupies, I have done my best, but I feel there must be a way to optimize this further; as a physicist rather than a computer scientist, maybe I am missing something?
You've got a constant stream of multiplies, subtracts and adds on contiguous data. That seems like an ideal use of SSE. If it's memory-bandwidth limited, then try OpenCL/CUDA instead.
Try using this library if you aren't familiar with all the low-level instructions.
That inner loop could then potentially be restructured significantly, maybe leading to speed-ups.
If the x, y, z components are indeed linked lists, doing x[i], y[i] and z[i] will cause the lists to be traversed multiple times, giving (n^2)/2 iterations. Using vectors will make this an O(1) operation.
You mention that the three coordinates are split out for memory caching purposes, but this will affect the Level 1 and Level 2 cache locality as you are accessing 3 different areas in memory. The linked list is also impacting your cache locality.
Using something like:
struct vector3d {
double x;
double y;
double z;
};
std::vector<vector3d> spin;
std::vector<vector3d> total_spin;
This should improve the cache locality, as the x, y and z values are adjacent in memory and the spins occupy a linear block of memory.
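With that layout, the inner loop from the question might look like the following (a sketch: spin and total_spin mirror the vectors declared above; the other names are the question's own):
vector3d H = {0.0, 0.0, 0.0};
for (int nn = start; nn < end; nn++) {
    const int natom = atoms::neighbour_list_array[nn];
    const double Jij = atoms::i_exchange_list[atoms::neighbour_interaction_type_array[nn]].Jij;
    H.x -= Jij * spin[natom].x; // x, y and z now share a cache line
    H.y -= Jij * spin[natom].y;
    H.z -= Jij * spin[natom].z;
}
total_spin[atom].x += H.x;
total_spin[atom].y += H.y;
total_spin[atom].z += H.z;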
I feel the following suggestions can help you optimize the code a bit, if not completely:
Use initialization over assignment wherever possible.
Prefer pre-increment over post-increment for better speed (believe me, it does make a difference).
Apart from that I think the code is just fine. There are pros and cons to each data structure... you've got to live with them.
Happy Coding!
I am trying to construct a summed area table for later use in an adaptive thresholding routine. Since this code is going to be used in time critical software, I am trying to squeeze as many cycles as possible out of it.
For performance, the table stores an unsigned integer for every pixel.
When I attach my profiler, I am showing that my largest performance bottleneck occurs when performing the x-pass.
The simple math expression for the computation is:
sat_[y * width + x] = sat_[y * width + x - 1] + buff_[y * width + x]
where the running sum resets at every new y position.
In this case, sat_ is a 1-D pointer of unsigned integers representing the SAT, and buff_ is an 8-bit unsigned monochrome buffer.
My implementation looks like the following:
uint *pSat = sat_;
const uint8_t *pBuff = buff_; // unsigned 8-bit source (a plain char could sign-extend values > 127)
for (size_t y = 0; y < height; ++y, pSat += width, pBuff += width)
{
    uint curr = 0;
    for (uint x = 0; x < width; x += 4) // assumes width is a multiple of 4
    {
        pSat[x + 0] = curr += pBuff[x + 0];
        pSat[x + 1] = curr += pBuff[x + 1];
        pSat[x + 2] = curr += pBuff[x + 2];
        pSat[x + 3] = curr += pBuff[x + 3];
    }
}
The loop is unrolled manually because my compiler (VC11) didn't do it for me. The problem I have is that the entire segmentation routine is spending an extraordinary amount of time just running through that loop, and I am wondering if anyone has any thoughts on what might speed it up. I have access to all of the SSE instruction sets, and AVX, on any machine this routine will run on, so if there is something there, that would be extremely useful.
Also, once I squeeze out the last cycles, I then plan on extending this to multi-core, but I want to get the single thread computation as tight as possible before I make the model more complex.
You have a dependency chain running along each row; each result depends on the previous one. So you cannot vectorise/parallelise in that direction.
But, it sounds like each row is independent of all the others, so you can vectorise/parallelise by computing multiple rows simultaneously. You'd need to transpose your arrays in order to allow the vector instructions to access neighbouring elements in memory.*
However, that creates a problem. Walking along rows would now be absolutely terrible from a cache point of view (every iteration would be a cache miss). The way to solve this is to interchange the loop order.
Note, though, that each element is read precisely once. And you're doing very little computation per element. So you'll basically be limited by main-memory bandwidth well before you hit 100% CPU usage.
* This restriction may be lifted in AVX2, I'm not sure...
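For example, with the data rearranged so that the same column of four consecutive rows sits contiguously, an SSE2 sketch of the multi-row pass could look like this (hypothetical layout and names; pixels assumed pre-widened to 32 bits):
#include <emmintrin.h>
#include <stdint.h>

// in[x*4 + r] holds row r (r = 0..3) at column x; out[] receives the running sums.
void rowPrefixSums4(const uint32_t *in, uint32_t *out, int width)
{
    __m128i run = _mm_setzero_si128(); // four independent running sums
    for (int x = 0; x < width; ++x)
    {
        __m128i px = _mm_loadu_si128((const __m128i *)(in + x * 4));
        run = _mm_add_epi32(run, px);  // no dependency between the four lanes
        _mm_storeu_si128((__m128i *)(out + x * 4), run);
    }
}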
Algorithmically, I don't think there is anything you can do to optimize this further. Even though you didn't use the term OLAP cube in your description, you are basically just building an OLAP cube. The code you have is the standard approach to building an OLAP cube.
If you give details about the hardware you're working with, there might be some optimizations available. For example, there is a GPU programming approach that may or may not be faster. Note: Another post on this thread mentioned that parallelization is not possible. This isn't necessarily true... Your algorithm can't be implemented in parallel, but there are algorithms that maintain data-level parallelism, which could be exploited with a GPU approach.
I have the following tight loop that makes up the serial bottleneck of my code. Ideally I would parallelize the function that calls this, but that is not possible.
// n is about 60
for (int k = 0; k < n; k++)
{
    double fone = z[k*n + i + 1];
    double fzer = z[k*n + i];
    z[k*n + i + 1] = s*fzer + c*fone;
    z[k*n + i]     = c*fzer - s*fone;
}
Are there any optimizations that can be made such as vectorization or some evil inline that can help this code?
I am looking into finding eigensolutions of tridiagonal matrices. http://www.cimat.mx/~posada/OptDoglegGraph/DocLogisticDogleg/projects/adjustedrecipes/tqli.cpp.html
Short answer: Change the memory layout of your matrix from row-major order to column-major order.
Long answer:
It seems you are accessing the (i)th and (i+1)th columns of a matrix stored in row-major order - probably a big matrix that doesn't fit into CPU cache as a whole. Basically, on every loop iteration the CPU has to wait for RAM (on the order of a hundred cycles). After a few iterations, theoretically, the address prediction should kick in and the CPU should speculatively load the data items even before the loop accesses them. That should help with RAM latency. But that still leaves the problem that the code uses the memory bus inefficiently: CPU and memory never exchange single bytes, only cache lines (64 bytes on current processors). Of every 64-byte cache line loaded and stored, your code only touches 16 bytes (a quarter).
Transposing the matrix and accessing it in native major order would increase memory bus utilization four-fold. Since that is probably the bottle-neck of your code, you can expect a speedup of about the same order.
Whether it is worth it, depends on the rest of your algorithm. Other parts may of course suffer because of the changed memory layout.
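For illustration, a sketch of the loop after such a transposition, where z_t is a hypothetical column-major copy with z_t[i*n + k] == z[k*n + i]; both columns are now walked sequentially through memory:
double *col0 = &z_t[i * n];       // former column i
double *col1 = &z_t[(i + 1) * n]; // former column i+1
for (int k = 0; k < n; k++)
{
    double fzer = col0[k];
    double fone = col1[k];
    col1[k] = s*fzer + c*fone;
    col0[k] = c*fzer - s*fone;
}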
I take it you are rotating something (or rather, lots of things, by the same angle (s being a sin, c being a cos))?
Counting backwards is always good fun and cuts out a comparison against a variable on each iteration, and it should work here. Making the counter the index might save a bit of time also (it cuts out a bit of arithmetic, as others have said).
for (int k = (n-1)*n + i; k >= 0; k -= n)
{
    double fone = z[k+1];
    double fzer = z[k];
    z[k+1] = s*fzer + c*fone;
    z[k]   = c*fzer - s*fone;
}
Nothing dramatic here, but it looks tidier if nothing else.
As a first move I'd cache pointers in this loop:
// n is about 60
double *cur_z = &z[0*n + i];
for (int k = 0; k < n; k++)
{
    double fone = *(cur_z + 1);
    double fzer = *cur_z;
    *(cur_z + 1) = s*fzer + c*fone;
    *cur_z       = c*fzer - s*fone;
    cur_z += n; // advance down the column pair
}
Second, I think it's better to make a templatized version of this function. As a result, you can get a good performance benefit if your matrix holds integer values (since FPU operations are slower).
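A sketch of that templatized idea (the function name and signature are invented for illustration; an integer instantiation would also need fixed-point s and c):
template <typename T>
void rotate_columns(T *z, int n, int i, T s, T c)
{
    T *cur_z = &z[i];
    for (int k = 0; k < n; k++)
    {
        T fone = cur_z[1];
        T fzer = cur_z[0];
        cur_z[1] = s*fzer + c*fone;
        cur_z[0] = c*fzer - s*fone;
        cur_z += n; // step to the next row, same column pair
    }
}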