What is a "cache-friendly" code? - c++

What is the difference between "cache unfriendly code" and the "cache friendly" code?
How can I make sure I write cache-efficient code?

Preliminaries
On modern computers, only the lowest level memory structures (the registers) can move data around in single clock cycles. However, registers are very expensive and most computer cores have fewer than a few dozen registers. At the other end of the memory spectrum (DRAM), the memory is very cheap (i.e. literally millions of times cheaper) but takes hundreds of cycles after a request to receive the data. To bridge this gap between super fast and expensive and super slow and cheap are the cache memories, named L1, L2, L3 in decreasing speed and cost. The idea is that most of the executing code will be hitting a small set of variables often, and the rest (a much larger set of variables) infrequently. If the processor can't find the data in L1 cache, then it looks in L2 cache. If not there, then L3 cache, and if not there, main memory. Each of these "misses" is expensive in time.
(The analogy: cache memory is to system memory as system memory is to hard disk storage. Hard disk storage is super cheap but very slow.)
Caching is one of the main methods to reduce the impact of latency. To paraphrase Herb Sutter (cf. links below): increasing bandwidth is easy, but we can't buy our way out of latency.
Data is always retrieved through the memory hierarchy (smallest == fastest to slowest). A cache hit/miss usually refers to a hit/miss in the highest level of cache in the CPU -- by highest level I mean the largest == slowest. The cache hit rate is crucial for performance since every cache miss results in fetching data from RAM (or worse ...) which takes a lot of time (hundreds of cycles for RAM, tens of millions of cycles for HDD). In comparison, reading data from the (highest level) cache typically takes only a handful of cycles.
In modern computer architectures, the performance bottleneck is leaving the CPU die (e.g. accessing RAM or higher). This will only get worse over time. The increase in processor frequency is currently no longer relevant to increase performance. The problem is memory access. Hardware design efforts in CPUs therefore currently focus heavily on optimizing caches, prefetching, pipelines and concurrency. For instance, modern CPUs spend around 85% of die on caches and up to 99% for storing/moving data!
There is quite a lot to be said on the subject. Here are a few great references about caches, memory hierarchies and proper programming:
Agner Fog's page. In his excellent documents, you can find detailed examples covering languages ranging from assembly to C++.
If you are into videos, I strongly recommend to have a look at Herb Sutter's talk on machine architecture (youtube) (specifically check 12:00 and onwards!).
Slides about memory optimization by Christer Ericson (director of technology at Sony)
LWN.net's article "What every programmer should know about memory"
Main concepts for cache-friendly code
A very important aspect of cache-friendly code is all about the principle of locality, the goal of which is to place related data close in memory to allow efficient caching. In terms of the CPU cache, it's important to be aware of cache lines to understand how this works: How do cache lines work?
The following particular aspects are of high importance to optimize caching:
Temporal locality: when a given memory location was accessed, it is likely that the same location is accessed again in the near future. Ideally, this information will still be cached at that point.
Spatial locality: this refers to placing related data close to each other. Caching happens on many levels, not just in the CPU. For example, when you read from RAM, typically a larger chunk of memory is fetched than what was specifically asked for because very often the program will require that data soon. HDD caches follow the same line of thought. Specifically for CPU caches, the notion of cache lines is important.
Use appropriate c++ containers
A simple example of cache-friendly versus cache-unfriendly is c++'s std::vector versus std::list. Elements of a std::vector are stored in contiguous memory, and as such accessing them is much more cache-friendly than accessing elements in a std::list, which stores its content all over the place. This is due to spatial locality.
A very nice illustration of this is given by Bjarne Stroustrup in this youtube clip (thanks to @Mohammad Ali Baydoun for the link!).
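To make the contrast concrete, here is a minimal sketch (not from the original answer; sizes and labels are illustrative) that sums the same values stored in a std::vector and a std::list and times both traversals. Once the data no longer fits in cache, the contiguous vector typically wins by a large factor:

#include <chrono>
#include <cstddef>
#include <iostream>
#include <list>
#include <numeric>
#include <vector>

// Sum a container and report how long the traversal took.
template <typename Container>
long long timed_sum(const char* label, const Container& c)
{
    auto start = std::chrono::steady_clock::now();
    long long sum = std::accumulate(c.begin(), c.end(), 0LL);
    auto stop = std::chrono::steady_clock::now();
    std::cout << label << ": "
              << std::chrono::duration<double, std::milli>(stop - start).count()
              << " ms\n";
    return sum;
}

int main()
{
    const std::size_t n = 10'000'000;      // large enough that the data does not fit in cache
    std::vector<int> v(n, 1);
    std::list<int>   l(v.begin(), v.end());

    long long total = 0;
    total += timed_sum("vector (contiguous)   ", v);   // sequential, prefetcher-friendly
    total += timed_sum("list   (pointer chase)", l);   // one potential cache miss per node
    std::cout << total << "\n";                        // keep the sums observable
}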
Don't neglect the cache in data structure and algorithm design
Whenever possible, try to adapt your data structures and order of computations in a way that allows maximum use of the cache. A common technique in this regard is cache blocking (Archive.org version), which is of extreme importance in high-performance computing (cfr. for example ATLAS).
Know and exploit the implicit structure of data
Another simple example, which many people in the field sometimes forget is column-major (ex. fortran,matlab) vs. row-major ordering (ex. c,c++) for storing two dimensional arrays. For example, consider the following matrix:
1 2
3 4
In row-major ordering, this is stored in memory as 1 2 3 4; in column-major ordering, this would be stored as 1 3 2 4. It is easy to see that implementations which do not exploit this ordering will quickly run into (easily avoidable!) cache issues. Unfortunately, I see stuff like this very often in my domain (machine learning). @MatteoItalia showed this example in more detail in his answer.
When fetching a certain element of a matrix from memory, elements near it will be fetched as well and stored in a cache line. If the ordering is exploited, this will result in fewer memory accesses (because the next few values which are needed for subsequent computations are already in a cache line).
For simplicity, assume the cache comprises a single cache line which can contain 2 matrix elements and that when a given element is fetched from memory, the next one is too. Say we want to take the sum over all elements in the example 2x2 matrix above (let's call it M):
Exploiting the ordering (e.g. changing column index first in c++):
M[0][0] (memory) + M[0][1] (cached) + M[1][0] (memory) + M[1][1] (cached)
= 1 + 2 + 3 + 4
--> 2 cache hits, 2 memory accesses
Not exploiting the ordering (e.g. changing row index first in c++):
M[0][0] (memory) + M[1][0] (memory) + M[0][1] (memory) + M[1][1] (memory)
= 1 + 3 + 2 + 4
--> 0 cache hits, 4 memory accesses
In this simple example, exploiting the ordering approximately doubles execution speed (since memory access requires many more cycles than computing the sums). In practice, the performance difference can be much larger.
Avoid unpredictable branches
Modern architectures feature pipelines and compilers are becoming very good at reordering code to minimize delays due to memory access. When your critical code contains (unpredictable) branches, it is hard or impossible to prefetch data. This will indirectly lead to more cache misses.
This is explained very well here (thanks to @0x90 for the link): Why is processing a sorted array faster than processing an unsorted array?
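As a rough sketch of the effect behind that question (timing code omitted for brevity; the threshold 128 and the array size are illustrative): the same branchy loop is run once on random data and once on sorted data, and on the sorted data the branch becomes almost perfectly predictable:

#include <algorithm>
#include <cstdlib>
#include <vector>

// A data-dependent branch: unpredictable on random data, predictable once sorted.
long long sum_of_large(const std::vector<int>& data)
{
    long long sum = 0;
    for (int x : data)
        if (x >= 128)
            sum += x;
    return sum;
}

int main()
{
    std::vector<int> data(32 * 1024 * 1024);
    for (int& x : data)
        x = std::rand() % 256;

    long long a = sum_of_large(data);          // random order: many mispredictions
    std::sort(data.begin(), data.end());
    long long b = sum_of_large(data);          // sorted: same work, far fewer mispredictions
    return static_cast<int>((a - b) & 0xff);   // use the results so they are not optimized away
}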
Avoid virtual functions
In the context of c++, virtual methods represent a controversial issue with regard to cache misses (a general consensus exists that they should be avoided when possible in terms of performance). Virtual functions can induce cache misses during look up, but this only happens if the specific function is not called often (otherwise it would likely be cached), so this is regarded as a non-issue by some. For reference about this issue, check out: What is the performance cost of having a virtual method in a C++ class?
Common problems
A common problem in modern architectures with multiprocessor caches is called false sharing. This occurs when different processors (or cores) each use their own, separate data, but that data happens to reside in the same cache line. Every write by one processor invalidates the line in the other processors' caches, so the line is transferred back and forth again and again. Effectively, different threads make each other wait by inducing cache misses in this situation.
See also (thanks to @Matt for the link): How and when to align to cache line size?
An extreme symptom of poor caching in RAM memory (which is probably not what you mean in this context) is so-called thrashing. This occurs when the process continuously generates page faults (i.e. accesses memory on pages that are not resident in physical memory), which require disk access.

In addition to @Marc Claesen's answer, I think that an instructive classic example of cache-unfriendly code is code that scans a C bidimensional array (e.g. a bitmap image) column-wise instead of row-wise.
Elements that are adjacent in a row are also adjacent in memory, thus accessing them in sequence means accessing them in ascending memory order; this is cache-friendly, since the cache tends to prefetch contiguous blocks of memory.
Instead, accessing such elements column-wise is cache-unfriendly, since elements on the same column are distant in memory from each other (in particular, their distance is equal to the size of the row), so when you use this access pattern you are jumping around in memory, potentially wasting the effort of the cache of retrieving the elements nearby in memory.
And all that it takes to ruin the performance is to go from
// Cache-friendly version - processes pixels which are adjacent in memory
for(unsigned int y=0; y<height; ++y)
{
    for(unsigned int x=0; x<width; ++x)
    {
        ... image[y][x] ...
    }
}
to
// Cache-unfriendly version - jumps around in memory for no good reason
for(unsigned int x=0; x<width; ++x)
{
    for(unsigned int y=0; y<height; ++y)
    {
        ... image[y][x] ...
    }
}
This effect can be quite dramatic (several orders of magnitude in speed) in systems with small caches and/or when working with big arrays (e.g. 10+ megapixel, 24 bpp images on current machines); for this reason, if you have to do many vertical scans, it's often better to rotate the image by 90 degrees first and perform the various analyses afterwards, limiting the cache-unfriendly code to just the rotation.

Optimizing cache usage largely comes down to two factors.
Locality of Reference
The first factor (to which others have already alluded) is locality of reference. Locality of reference really has two dimensions though: space and time.
Spatial
The spatial dimension also comes down to two things: first, we want to pack our information densely, so more information will fit in that limited memory. This means (for example) that you need a major improvement in computational complexity to justify data structures based on small nodes joined by pointers.
Second, we want information that will be processed together also located together. A typical cache works in "lines", which means when you access some information, other information at nearby addresses will be loaded into the cache with the part we touched. For example, when I touch one byte, the cache might load 128 or 256 bytes near that one. To take advantage of that, you generally want the data arranged to maximize the likelihood that you'll also use that other data that was loaded at the same time.
For just a really trivial example, this can mean that a linear search can be much more competitive with a binary search than you'd expect. Once you've loaded one item from a cache line, using the rest of the data in that cache line is almost free. A binary search becomes noticeably faster only when the data is large enough that the binary search reduces the number of cache lines you access.
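A hedged sketch of the two searches being compared (function names are illustrative); the crossover point where std::lower_bound starts to win depends on element size, cache-line size and hardware, so it has to be measured:

#include <algorithm>
#include <cstddef>
#include <vector>

// Linear scan: touches a handful of consecutive cache lines, which the
// prefetcher likes; competitive for small sorted arrays.
int linear_search(const std::vector<int>& sorted, int key)
{
    for (std::size_t i = 0; i < sorted.size(); ++i)
        if (sorted[i] >= key)
            return static_cast<int>(i);
    return -1;
}

// Binary search: far fewer comparisons, but each probe may land on a
// different cache line; wins once the array is large enough.
int binary_search_index(const std::vector<int>& sorted, int key)
{
    auto it = std::lower_bound(sorted.begin(), sorted.end(), key);
    return it == sorted.end() ? -1 : static_cast<int>(it - sorted.begin());
}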
Time
The time dimension means that when you do some operations on some data, you want (as much as possible) to do all the operations on that data at once.
Since you've tagged this as C++, I'll point to a classic example of a relatively cache-unfriendly design: std::valarray. valarray overloads most arithmetic operators, so I can (for example) say a = b + c + d; (where a, b, c and d are all valarrays) to do element-wise addition of those arrays.
The problem with this is that it walks through one pair of inputs, puts results in a temporary, walks through another pair of inputs, and so on. With a lot of data, the result from one computation may disappear from the cache before it's used in the next computation, so we end up reading (and writing) the data repeatedly before we get our final result. If each element of the final result will be something like (a[n] + b[n]) * (c[n] + d[n]);, we'd generally prefer to read each a[n], b[n], c[n] and d[n] once, do the computation, write the result, increment n and repeat 'til we're done.[2]
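A minimal sketch of that contrast (function names are illustrative, and a library is free to do better via expression templates, as noted below): the valarray expression conceptually streams the arrays several times through temporaries, while a hand-fused loop touches each element exactly once.

#include <cstddef>
#include <valarray>

// The convenient form: each operator may materialize a temporary,
// so the data can be streamed through the cache several times.
std::valarray<double> with_temporaries(const std::valarray<double>& a,
                                       const std::valarray<double>& b,
                                       const std::valarray<double>& c,
                                       const std::valarray<double>& d)
{
    return (a + b) * (c + d);
}

// The cache-friendlier form: read a[n], b[n], c[n], d[n] once,
// compute, write the result, and move on.
void fused(const double* a, const double* b, const double* c, const double* d,
           double* out, std::size_t len)
{
    for (std::size_t n = 0; n < len; ++n)
        out[n] = (a[n] + b[n]) * (c[n] + d[n]);
}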
Line Sharing
The second major factor is avoiding line sharing. To understand this, we probably need to back up and look a little at how caches are organized. The simplest form of cache is direct mapped. This means one address in main memory can only be stored in one specific spot in the cache. If we're using two data items that map to the same spot in the cache, it works badly -- each time we use one data item, the other has to be flushed from the cache to make room for the other. The rest of the cache might be empty, but those items won't use other parts of the cache.
To prevent this, most caches are what are called "set associative". For example, in a 4-way set-associative cache, any item from main memory can be stored at any of 4 different places in the cache. So, when the cache is going to load an item, it looks for the least recently used[3] item among those four, flushes it to main memory, and loads the new item in its place.
The problem is probably fairly obvious: for a direct-mapped cache, two operands that happen to map to the same cache location can lead to bad behavior. An N-way set-associative cache increases the number from 2 to N+1. Organizing a cache into more "ways" takes extra circuitry and generally runs slower, so (for example) an 8192-way set associative cache is rarely a good solution either.
Ultimately, this factor is more difficult to control in portable code though. Your control over where your data is placed is usually fairly limited. Worse, the exact mapping from address to cache varies between otherwise similar processors. In some cases, however, it can be worth doing things like allocating a large buffer, and then using only parts of what you allocated to ensure against data sharing the same cache lines (even though you'll probably need to detect the exact processor and act accordingly to do this).
False Sharing
There's another, related item called "false sharing". This arises in a multiprocessor or multicore system, where two (or more) processors/cores have data that's separate, but falls in the same cache line. This forces the two processors/cores to coordinate their access to the data, even though each has its own, separate data item. Especially if the two modify the data in alternation, this can lead to a massive slowdown as the data has to be constantly shuttled between the processors. This can't easily be cured by organizing the cache into more "ways" or anything like that either. The primary way to prevent it is to ensure that two threads rarely (preferably never) modify data that could possibly be in the same cache line (with the same caveats about difficulty of controlling the addresses at which data is allocated).
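A minimal sketch of the usual mitigation: give each thread's data its own cache line via alignment/padding. The 64-byte line size is an assumption (where available, std::hardware_destructive_interference_size from <new>, C++17, can be used instead), and C++17 is assumed so the default allocator honours the over-alignment.

#include <atomic>
#include <cstddef>
#include <thread>
#include <vector>

// Each counter is aligned to its own (assumed) 64-byte cache line,
// so threads incrementing different counters never fight over one line.
struct PaddedCounter
{
    alignas(64) std::atomic<long> value{0};
};

void count_in_parallel(std::vector<PaddedCounter>& counters, std::size_t iterations)
{
    std::vector<std::thread> workers;
    for (std::size_t t = 0; t < counters.size(); ++t)
        workers.emplace_back([&counters, t, iterations] {
            for (std::size_t i = 0; i < iterations; ++i)
                counters[t].value.fetch_add(1, std::memory_order_relaxed);
        });
    for (std::thread& w : workers)
        w.join();
}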
[2] Those who know C++ well might wonder if this is open to optimization via something like expression templates. I'm pretty sure the answer is that yes, it could be done and if it was, it would probably be a pretty substantial win. I'm not aware of anybody having done so, however, and given how little valarray gets used, I'd be at least a little surprised to see anybody do so either.
In case anybody wonders how valarray (designed specifically for performance) could be this badly wrong, it comes down to one thing: it was really designed for machines like the older Crays, that used fast main memory and no cache. For them, this really was a nearly ideal design.
[3] Yes, I'm simplifying: most caches don't really measure the least recently used item precisely, but they use some heuristic that's intended to be close to that without having to keep a full time-stamp for each access.

Welcome to the world of Data Oriented Design. The basic mantra is to Sort, Eliminate Branches, Batch, Eliminate virtual calls - all steps towards better locality.
Since you tagged the question with C++, here's the obligatory typical C++ Bullshit. Tony Albrecht's Pitfalls of Object Oriented Programming is also a great introduction into the subject.

Just piling on: the classic example of cache-unfriendly versus cache-friendly code is the "cache blocking" of matrix multiply.
Naive matrix multiply looks like:
for(i=0; i<N; i++) {
    for(j=0; j<N; j++) {
        dest[i][j] = 0;
        for(k=0; k<N; k++) {
            dest[i][j] += src1[i][k] * src2[k][j];
        }
    }
}
If N is large, e.g. if N * sizeof(elemType) is greater than the cache size, then every single access to src2[k][j] will be a cache miss.
There are many different ways of optimizing this for a cache. Here's a very simple example: instead of reading one item per cache line in the inner loop, use all of the items:
int itemsPerCacheLine = CacheLineSize / sizeof(elemType);
for(i=0; i<N; i++) {
    for(j=0; j<N; j += itemsPerCacheLine) {
        for(jj=0; jj<itemsPerCacheLine; jj++) {
            dest[i][j+jj] = 0;
        }
        for(k=0; k<N; k++) {
            for(jj=0; jj<itemsPerCacheLine; jj++) {
                dest[i][j+jj] += src1[i][k] * src2[k][j+jj];
            }
        }
    }
}
If the cache line size is 64 bytes, and we are operating on 32 bit (4 byte) floats, then there are 16 items per cache line. And the number of cache misses via just this simple transformation is reduced approximately 16-fold.
Fancier transformations operate on 2D tiles, optimize for multiple caches (L1, L2, TLB), and so on.
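For illustration only, a sketch of such a 2D tiling over flat row-major arrays (the BLOCK size is a tuning parameter I chose for the example, not a value from the sources above; pick it so a few BLOCK x BLOCK tiles fit in the target cache level):

#include <algorithm>
#include <cstddef>

constexpr std::size_t BLOCK = 64;   // tuning parameter

// dest = src1 * src2 for N x N row-major matrices stored as flat arrays.
void matmul_tiled(const float* src1, const float* src2, float* dest, std::size_t N)
{
    for (std::size_t i = 0; i < N * N; ++i)
        dest[i] = 0.0f;

    for (std::size_t ii = 0; ii < N; ii += BLOCK)
        for (std::size_t kk = 0; kk < N; kk += BLOCK)
            for (std::size_t jj = 0; jj < N; jj += BLOCK)
                // work on one tile: i, k and j stay within a block so the touched
                // pieces of src1, src2 and dest keep fitting in cache
                for (std::size_t i = ii; i < std::min(ii + BLOCK, N); ++i)
                    for (std::size_t k = kk; k < std::min(kk + BLOCK, N); ++k)
                    {
                        const float s1 = src1[i * N + k];
                        for (std::size_t j = jj; j < std::min(jj + BLOCK, N); ++j)
                            dest[i * N + j] += s1 * src2[k * N + j];
                    }
}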
Some results of googling "cache blocking":
http://stumptown.cc.gt.atl.ga.us/cse6230-hpcta-fa11/slides/11a-matmul-goto.pdf
http://software.intel.com/en-us/articles/cache-blocking-techniques
A nice video animation of an optimized cache blocking algorithm.
http://www.youtube.com/watch?v=IFWgwGMMrh0
Loop tiling is very closely related:
http://en.wikipedia.org/wiki/Loop_tiling

Processors today work with many levels of cascading memory areas. The CPU has a bunch of memory on the CPU chip itself, which it can access very quickly. There are different levels of cache, each one slower to access (and larger) than the previous, until you get to system memory, which is not on the CPU and is relatively much slower to access.
Logically, to the CPU's instruction set you just refer to memory addresses in a giant virtual address space. When you access a single memory address, the CPU will go fetch it. In the old days it would fetch just that single address. But today the CPU will fetch a bunch of memory around the bit you asked for and copy it into the cache. It assumes that if you asked for a particular address, it is highly likely that you are going to ask for an address nearby very soon. For example, if you were copying a buffer you would read and write from consecutive addresses - one right after the other.
So today when you fetch an address, the CPU checks the first level of cache to see if that address has already been read into cache; if it doesn't find it, this is a cache miss and it has to go out to the next level of cache to find it, until it eventually has to go out into main memory.
Cache friendly code tries to keep accesses close together in memory so that you minimize cache misses.
So an example would be: imagine you wanted to copy a giant 2 dimensional table. It is organized with each row consecutive in memory, one row following right after the previous.
If you copied the elements one row at a time from left to right - that would be cache friendly. If you decided to copy the table one column at a time, you would copy the exact same amount of memory - but it would be cache unfriendly.

It needs to be clarified that not only the data should be cache-friendly; it is just as important for the code itself. This is in addition to branch prediction, instruction reordering, avoiding actual divisions and other techniques.
Typically the denser the code, the fewer cache lines will be required to store it. This results in more cache lines being available for data.
The code should not call functions all over the place as they typically will require one or more cache lines of their own, resulting in fewer cache lines for data.
A function should begin at a cache-line-alignment-friendly address. Though there are (gcc) compiler switches for this (e.g. -falign-functions), be aware that if the functions are very short it might be wasteful for each one to occupy an entire cache line. For example, if three of the most often used functions fit inside one 64 byte cache line, this is less wasteful than if each one has its own line, which leaves two fewer cache lines available for other usage. A typical alignment value could be 32 or 16.
So spend some extra time to make the code dense. Test different constructs, compile and review the generated code size and profile.

As @Marc Claesen mentioned, one of the ways to write cache friendly code is to exploit the structure in which our data is stored. In addition to that, another way to write cache friendly code is to change the way our data is stored, then write new code to access the data stored in this new structure.
This makes sense in the case of how database systems linearize the tuples of a table and store them. There are two basic ways to store the tuples of a table, i.e. row store and column store. In a row store, as the name suggests, the tuples are stored row-wise. Let's suppose a table named Product being stored has 3 attributes, i.e. int32_t key, char name[56] and int32_t price, so the total size of a tuple is 64 bytes.
We can simulate a very basic row store query execution in main memory by creating an array of Product structs of size N, where N is the number of rows in the table. Such a memory layout is also called array of structs. So the struct for Product can be like:
struct Product
{
    int32_t key;
    char    name[56];
    int32_t price;
};
/* create an array of structs */
Product* table = new Product[N];
/* now load this array of structs, from a file etc. */
Similarly, we can simulate a very basic column store query execution in main memory by creating 3 arrays of size N, one array for each attribute of the Product table. Such a memory layout is also called struct of arrays. So the 3 arrays, one for each attribute of Product, can be like:
/* create separate arrays for each attribute */
int32_t* key = new int32_t[N];
char* name = new char[56*N];
int32_t* price = new int32_t[N];
/* now load these arrays, from a file etc. */
Now after loading both the array of structs (Row Layout) and the 3 separate arrays (Column Layout), we have a row store and a column store of our table Product in memory.
Now we move on to the cache friendly code part. Suppose that the workload on our table is such that we have an aggregation query on the price attribute, such as:
SELECT SUM(price)
FROM PRODUCT
For the row store we can convert the above SQL query into
int sum = 0;
for (int i = 0; i < N; i++)
    sum = sum + table[i].price;
For the column store we can convert the above SQL query into
int sum = 0;
for (int i = 0; i < N; i++)
    sum = sum + price[i];
The code for the column store would be faster than the code for the row layout for this query, because the query requires only a subset of the attributes, and in the column layout we do exactly that, i.e. we access only the price column.
Suppose that the cache line size is 64 bytes.
In the case of the row layout, when a cache line is read, the price value of only 1 tuple is read (cacheline_size / product_struct_size = 64/64 = 1), because our struct size is 64 bytes and fills the whole cache line, so a cache miss occurs for every tuple in the row layout.
In the case of the column layout, when a cache line is read, the price values of 16 tuples are read (cacheline_size / price_int_size = 64/4 = 16), because 16 contiguous price values stored in memory are brought into the cache, so a cache miss occurs only for every sixteenth tuple in the column layout.
So the column layout will be faster for the given query, and is faster in general for such aggregation queries on a subset of the columns of the table. You can try out such an experiment for yourself using the data from the TPC-H benchmark and compare the run times for both layouts. The Wikipedia article on column-oriented database systems is also good.
So in database systems, if the query workload is known beforehand, we can store our data in layouts which will suit the queries in the workload and access data from these layouts. In the case of the above example, we created a column layout and changed our code to compute the sum so that it became cache friendly.

Be aware that caches do not just cache contiguous memory. They have multiple lines (at least 4), so discontiguous and overlapping memory can often be stored just as efficiently.
What is missing from all the above examples is measured benchmarks. There are many myths about performance. Unless you measure it you do not know. Do not complicate your code unless you have a measured improvement.
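In that spirit, a minimal timing harness sketch (names and the workload are illustrative; hardware performance counters or a profiler are a bonus, not a requirement):

#include <chrono>
#include <iostream>

// Run a piece of work several times and report the best run in milliseconds.
template <typename Work>
double best_of(int runs, Work&& work)
{
    double best_ms = 1e300;
    for (int r = 0; r < runs; ++r)
    {
        auto start = std::chrono::steady_clock::now();
        work();
        auto stop = std::chrono::steady_clock::now();
        double ms = std::chrono::duration<double, std::milli>(stop - start).count();
        if (ms < best_ms)
            best_ms = ms;
    }
    return best_ms;
}

int main()
{
    volatile long long sink = 0;   // keeps the loop below from being optimized away
    double ms = best_of(5, [&] {
        long long s = 0;
        for (long long i = 0; i < 100'000'000; ++i)
            s += i;
        sink = s;
    });
    std::cout << "best of 5 runs: " << ms << " ms\n";
}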

Cache-friendly code is code that has been optimized to make efficient use of the CPU cache. This typically involves organizing data in a way that takes advantage of spatial and temporal locality, which refers to the idea that data that is accessed together is likely to be stored together in memory, and that data that is accessed frequently is likely to be accessed again in the near future.
There are several ways to make code cache-friendly, including:
Using contiguous memory layouts: by storing data in contiguous blocks in memory, you can take advantage of spatial locality and reduce the number of cache misses.
Using arrays: arrays are a good choice when you need to access data sequentially, as they let you exploit spatial locality and keep hot data in the cache.
Using pointers carefully: pointers can be used to access data that is not stored contiguously in memory, but they can also lead to cache misses if they are used excessively. If you need to use pointers, try to use them in a way that takes advantage of spatial and temporal locality to minimize cache misses.
Using compiler optimization flags: most compilers have optimization flags that can be used to optimize the use of the CPU cache. These flags can help to minimize the number of cache misses and improve the overall performance of your code.
It is important to note that the specific techniques that work best for optimizing the use of the CPU cache will depend on the specific requirements and constraints of your system. It may be necessary to experiment with different approaches to find the best solution for your needs.

Related

Does random indexing an array have any performance implications over sequential indexing?

Let's say that we have a long array named DataList. And we have two other arrays which contain indexes: one contains the indexes in ascending order (i.e. 0, 1, 2, 3, 4, ...), named sIndexes; the other array is made of indexes which are randomly shuffled (e.g. 6, 5, 1, 9, 7, ...), named rIndexes.
These index arrays (sIndexes and rIndexes) are used to index the DataList array. When we use the sIndexes array to index DataList, it accesses the elements sequentially. When we use rIndexes to index DataList, it accesses random places in the array.
So my question is,
Are there any performance differences when using random indexing over sequential indexing? Aren't cache misses going to contribute to performance loss (e.g. if the index points to a location which is not available in a cached line)?
Linear access is much faster, because:
The minimum size to fetch from RAM is one cache-line (usually 64 bytes). If you only use one byte from that, you are wasting precious memory bandwidth.
Modern CPUs can detect regular access patterns and will pre-fetch data from RAM, so it will be cached before you even access it. (Or at least it will already be streaming at maximum memory bandwidth.)
If linear access is detected at compile-time (probably not the case for your code), it can be vectorized into SIMD instructions, processing many items at once.
Random access will be much slower, unless your whole array fits into L2 cache, or you do a lot of work per item. An L2 cache miss is usually quite expensive.
For details I recommend the classic What Every Programmer Should Know About Memory[1], a long and fascinating read. It's from 2007 but CPU architecture has not fundamentally changed; if anything this has become more relevant.
The above is true for modern, large CPUs. Small embedded CPUs exist that have no cache but predictable SRAM access. In this case random indexing will perform the same.
[1] https://www.gwern.net/docs/cs/2007-drepper.pdf
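A sketch of the experiment described in the question (array sizes and seeds are illustrative); the random pass should be noticeably slower once DataList is much larger than the last-level cache:

#include <algorithm>
#include <chrono>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <random>
#include <vector>

// Sum DataList through an arbitrary index order.
long long sum_via(const std::vector<int>& DataList, const std::vector<std::size_t>& indexes)
{
    long long sum = 0;
    for (std::size_t i : indexes)
        sum += DataList[i];
    return sum;
}

double time_ms(const std::vector<int>& DataList, const std::vector<std::size_t>& indexes,
               long long& out)
{
    auto start = std::chrono::steady_clock::now();
    out = sum_via(DataList, indexes);
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(stop - start).count();
}

int main()
{
    const std::size_t n = 20'000'000;              // larger than a typical last-level cache
    std::vector<int> DataList(n, 1);

    std::vector<std::size_t> sIndexes(n);
    std::iota(sIndexes.begin(), sIndexes.end(), std::size_t{0});   // 0, 1, 2, ...
    std::vector<std::size_t> rIndexes = sIndexes;
    std::shuffle(rIndexes.begin(), rIndexes.end(), std::mt19937{42});

    long long s1 = 0, s2 = 0;
    std::cout << "sequential: " << time_ms(DataList, sIndexes, s1) << " ms\n";
    std::cout << "random:     " << time_ms(DataList, rIndexes, s2) << " ms\n";
    std::cout << (s1 == s2 ? "sums match\n" : "sums differ\n");
}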
What happens "under the hood" is likely a multiplication (index * sizeof(data)) and that takes the same amount of time for the types involved no matter what the values are.
As you yourself point out, actually fetching the value from the array may take longer if the part of the array that is accessed isn't in the current cache line.
For a single threaded application, one usually wants to pack the data tightly to promote cache hits (see the idea behind std::hardware_constructive_interference_size) but for multithreaded applications one often tries to keep the elements apart (std::hardware_destructive_interference_size) to avoid false sharing and hopefully let each thread have its cacheline undisturbed by the others.

For-loop variables and cache misses? [duplicate]


Which is most cache friendly?

I am trying to get a good grip on data oriented design and how to program best with the cache in mind. There are basically two scenarios that I cannot quite decide between, which is better and why - is it better to have a vector of objects, or several vectors with the objects' atomic data?
A) Vector of objects example
struct A
{
    GLsizei mIndices;
    GLuint  mVBO;
    GLuint  mIndexBuffer;
    GLuint  mVAO;
    size_t  vertexDataSize;
    size_t  normalDataSize;
};
std::vector<A> gMeshes;
for_each(gMeshes as mesh)
{
glBindVertexArray(mesh.mVAO);
glDrawElements(GL_TRIANGLES, mesh.mIndices, GL_UNSIGNED_INT, 0);
glBindVertexArray(0);
....
}
B) Vectors with the atomic data
std::vector<GLsizei> gIndices;
std::vector<GLuint> gVBOs;
std::vector<GLuint> gIndexBuffers;
std::vector<GLuint> gVAOs;
std::vector<size_t> gVertexDataSizes;
std::vector<size_t> gNormalDataSizes;
size_t numMeshes = ...;

for (size_t index = 0; index < numMeshes; ++index)
{
    glBindVertexArray(gVAOs[index]);
    glDrawElements(GL_TRIANGLES, gIndices[index], GL_UNSIGNED_INT, 0);
    glBindVertexArray(0);
    // ...
}
Which one is more memory efficient and cache friendly, resulting in fewer cache misses and better performance, and why?
With some variation according to which level of cache you're talking about, cache works as follows:
if the data is already in cache then it is fast to access
if the data is not in cache then you incur a cost, but an entire cache line (or page, if we're talking RAM vs swap file rather than cache vs RAM) is brought into cache, so subsequent accesses close to the missed address will not miss.
if you're lucky then the memory subsystem will detect sequential access and pre-fetch data that it thinks you're about to need.
So naively the questions to ask are:
how many cache misses occur? -- B wins, because in A you fetch some unused data per record, whereas in B you fetch none other than a small rounding error at the end of the iteration. So in order to visit all of the necessary data, B fetches fewer cache lines, assuming a significant number of records. If the number of records is insignificant, then cache performance may have little or nothing to do with the performance of your code, because a program that uses a small enough amount of data will find that it's all in cache all the time.
is the access sequential? -- yes in both cases, although this might be harder to detect in case B because there are two interleaved sequences rather than just one.
So, I would sort of expect B to be faster for this code. However:
if this is the only access to the data, then you could speed up A by removing most of the data members from the struct. So do that. Presumably in fact it is not the only access to the data in your program, and the other accesses might affect performance in two ways: the time they actually take, and whether they populate the cache with the data you need.
what I expect and what actually happens are frequently different things, and there is little point relying on speculation if you have any ability to test it. In the best case, the sequential access means that there are no cache misses in either code. Testing performance requires no special tool (although they can make it easier), just a clock with a second hand. At a pinch, fashion a pendulum from your phone charger.
there are some complications I have ignored. Depending on hardware, if you're unlucky with B then at the lowest cache level you could find that the accesses to one vector are evicting the accesses to the other vector, because the corresponding memory just happens to use the same location in cache. This would cause two cache misses per record. This will only happen on what's called "direct-mapped cache". "Two-way cache" or better would save the day, by allowing chunks of both vectors to co-exist even if their first preference location in cache is the same. I don't think that PC hardware generally uses direct-mapped cache, but I don't know for sure and I don't know much about GPUs.
I understand that this is partly opinion-based, and also that it could be a case of premature optimization, but your first option definitely has the best aesthetics. It's one vector versus six - no contest in my eyes.
For cache performance, it ought to be better. That is because the alternative requires accessing several separate vectors, which splits the memory accesses every single time you render a mesh.
With the structure approach, the mesh is essentially a self-contained object and correctly implies no relation to other meshes. When drawing, you only access that mesh, and when rendering all meshes, you do one at a time in a cache-friendly manner. Yes, you will eat cache more quickly because your vector elements are larger, but you won't be contesting it.
You may also find other benefits later on from using this representation, e.g. if you want to store additional data about a mesh. Adding extra data in yet more vectors will quickly clutter your code and increase the risk of making silly errors, whereas it's trivial to add members to the structure.
I recommend profiling with either perf or oprofile and posting your results back here (assuming you are running Linux), including the number of elements you iterated across, the number of iterations in total, and the hardware you tested on.
If I had to guess (and this is only a guess), I'd suspect that the first approach might be faster due to the locality of data within each structure, and hopefully the OS/hardware can prefetch additional elements for you. But again, this will depend on cache size, cache line size, and other aspects.
Defining "better" is interesting too. Are you looking for overall time to process N elements, low variance in each sample, minimal cache misses (which will be influenced by other processes running on your system), etc.
Don't forget that with STL vectors, you are also at the mercy of the allocator... e.g. it can decide at any time to reallocate the array, which will invalidate your cache. Another factor to try to isolate if you can!
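If reallocation is a concern in your tests, one way to take it out of the picture (a small sketch only, with a hypothetical Mesh stand-in for your struct A) is to reserve capacity up front:

#include <cstddef>
#include <vector>

struct Mesh { /* fields as in option A above */ };   // hypothetical stand-in
std::vector<Mesh> gMeshes;

void loadMeshes(std::size_t expectedCount) {
    gMeshes.reserve(expectedCount);   // capacity is fixed up front, so no reallocation
                                      // (and no cache/pointer churn) while size() <= expectedCount
    // ... push_back the meshes here
}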
Depends on your access patterns. Your first version is AoS (array of structures), second is SoA (structure of arrays).
SoA tends to use less memory (unless you store so few elements that the overhead of the arrays is actually non-trivial) if there's any kind of structure padding that you'd normally get in the AoS representation. It also tends to be a much bigger PITA to code against since you have to maintain/sync parallel arrays.
AoS tends to excel for random-access. As an example, for simplicity let's say each element fits into a cache line and is properly aligned (64 byte size and alignment, e.g.). In that case, if you are randomly accessing an nth element, you get all the relevant data for the element in a single cache line. If you used an SoA and dispersed those fields across separate arrays, you'd have to load memory into multiple cache lines just to load the data for that one element. And because we're accessing the data in a random pattern, we don't benefit from spatial locality much at all since the next element we're going to be accessing could be somewhere completely else in memory.
However, SoA tends to excel for sequential access mainly because there's often less data to load into the CPU cache in the first place for the entire sequential loop because it excludes structure padding and cold fields. By cold fields, I mean fields you don't need to access in a particular sequential loop. For example, a physics system might not care about particle fields involved with how the particle looks to the user, like color and a sprite handle. That's irrelevant data. It only cares about particle positions. The SoA allows you to avoid loading that irrelevant data into cache lines. It allows you to load as much relevant data into a cache line at once so you end up with fewer compulsory cache misses (as well as page faults for large enough data) with the SoA.
That's also only covering memory access patterns. With SoA reps, you also tend to be able to write more efficient and simpler SIMD instructions. But again it's mainly suited for sequential access.
You can also mix the two concepts. You might use an AoS for hot fields frequently accessed together in random-access patterns, then hoist out the cold fields and store them in parallel.
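As a rough sketch of that mix (field names are hypothetical, borrowing the particle example above): the hot fields stay together in an AoS so a random access to one particle touches as few cache lines as possible, while the cold rendering fields are hoisted into parallel arrays that never enter the cache during a physics pass.

#include <cstddef>
#include <cstdint>
#include <vector>

// Hot fields kept together in an AoS: a random access to particle i pulls
// everything the physics code needs in at most one or two cache lines.
struct ParticleHot {
    float posX, posY, posZ;
    float velX, velY, velZ;
};
std::vector<ParticleHot> gHot;

// Cold fields hoisted out into parallel arrays, indexed the same way; they
// do not pollute the cache during the physics pass.
std::vector<uint32_t> gColor;
std::vector<uint32_t> gSpriteHandle;

void integrate(std::size_t i, float dt) {
    ParticleHot& p = gHot[i];          // one element, one or two cache lines
    p.posX += p.velX * dt;
    p.posY += p.velY * dt;
    p.posZ += p.velZ * dt;
}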

Will a modern processor (like the i7) follow pointers and prefetch their data while iterating over a list of them?

I want to learn how to write better code that takes advantage of the CPU's cache. Working with contiguous memory seems to be the ideal situation. That being said, I'm curious if there are similar improvements that can be made with non-contiguous memory, but with an array of pointers to follow, like:
struct Position {
    int32_t x, y, z;
};
...
std::vector<Position*> posPointers;
...
void updatePosition() {
    for (uint32_t i = 0; i < posPointers.size(); i++) {
        Position& nextPos = *posPointers[i];
        nextPos.x++;
        nextPos.y++;
        nextPos.z++;
    }
}
This is just some rough mock-up code, and for the sake of learning this properly let's just say that all Position structs were created randomly all over the heap.
Can modern, smart processors such as Intel's i7 look ahead and see that it's going to need X_ptr's data very shortly? Would the following lines of code help?
... // for loop
Position& nextPos1 = *posPointers[i];
Position& nextPos2 = *posPointers[i+1];
Position& nextPos3 = *posPointers[i+2];
Position& nextPos4 = *posPointers[i+3];
... // Work on data here
I had read some presentation slides that seemed to indicate code like this would cause the processor to prefetch some data. Is that true? I am aware there are non-standard, platform specific, ways to call prefetching like __builtin_prefetch, but throwing that all over the place just seems like an ugly premature optimization. I am looking for a way I can subconsciously write cache-efficient code.
I know you didn't ask (and probably don't need) a sermon on proper treatment of caches, but I thought I'd contribute my two cents anyway. Note that all this only applies in hot code. Remember that premature optimization is the root of all evil.
As has been pointed out in the comments, the best way is to have containers of actual data. Generally speaking, flat data structures are much preferable to "pointer spaghetti", even if you have to duplicate some data and/or pay a price for resizing/moving/defragmenting your data structures.
And as you know, flat data structures (e.g. an array of data) only pay off if you access them linearly and sequentially most of the time.
But this strategy may not always be usable. In lieu of actual linear data, you can use other strategies like employing pool allocators, and iterating over the pools themselves, instead of over the vector holding the pointers. This of course has its own disadvantages and can be a bit more complicated.
I'm sure you know this already, but it bears mentioning again: one of the most effective techniques for getting the most out of your cache is having smaller data! In the above code, if you can get away with int16_t instead of int32_t, you should definitely do so. You should pack your many bools and flags and enums into bit-fields, use indexes instead of pointers (especially on 64-bit systems), use fixed-size hash values in your data structures instead of strings, etc.
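A small illustration of the "smaller data" advice (all member names here are invented): narrower integer types, bit-fields for the flags, and 32-bit pool indices instead of 64-bit pointers roughly halve the per-element footprint, so twice as many elements fit in each cache line.

#include <cstdint>

struct OtherData;   // some other structure referenced by each entity

// Before: 8-byte pointer, 4-byte coordinates, one byte per flag plus padding.
struct EntityFat {
    OtherData* target;        // 8 bytes on a 64-bit system
    int32_t    x, y, z;
    bool       alive;
    bool       visible;
};                            // ~24 bytes with padding

// After: index into a contiguous pool instead of a pointer, narrower
// coordinates, flags packed into bit-fields.
struct EntitySlim {
    uint32_t targetIndex;     // index into a contiguous pool of OtherData
    int16_t  x, y, z;
    uint8_t  alive   : 1;
    uint8_t  visible : 1;
};                            // ~12 bytes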
Now, about your main question that whether the processor can follow random pointers around and bring the data into cache before they are needed. To a very limited extent, this does happen. As probably you know, modern CPUs employ a lot of tricks to increase their speed (i.e. increase their instruction retire rate.) Tricks like having a store buffer, out-of-order execution, superscalar pipelines, multiple functional units of every kind, branch prediction, etc. Most of the time, these tricks all just help the CPU to keep executing instructions, even if the current instructions have stalled or take too long to finish. For memory loads (which is the slowest thing to do, iff the data is not in the cache,) this means that the CPU should get to the instruction as soon as possible, calculate the address, and request the data from the memory controller. However, the memory controller can have only a very limited number of outstanding requests (usually two these days, but I'm not sure.) This means that even if the CPU did very sophisticated stuff to look ahead into other memory locations (e.g. the elements of your posPointers vector) and deduce that these are the addresses of new data that your code is going to need, it couldn't get very far ahead because the memory controller can have only so many requests pending.
In any case, AFAIK, I don't think that CPUs actually do that yet. Note that this is a hard case, because the addresses of your randomly distributed memory locations are themselves in memory (as opposed to being in a register or calculable from the contents of a register.) And if the CPUs did it, it wouldn't have that much of an effect anyways because of memory interface limitations.
The prefetching technique you mentioned seems valid to me and I've seen it used, but it only yields a noticeable effect if your CPU has something to do while waiting for the future data to arrive. Incrementing three integers takes a lot less time than loading 12 bytes from memory (loading one cache line, actually) and therefore it won't mean much for the execution time. But if you had something worthwhile and more heavyweight to overlay on top of the memory prefetches (e.g. calculating a complex function that doesn't require data from memory!) then you could get very nice speedups. You see, the time to go through the above loop is essentially the sum of the time of all the cache misses; you are getting the coordinate increments and the loop bookkeeping for free. So, you'd have won more if the free stuff were more valuable!
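To make that concrete, here is a rough sketch of the technique using the GCC/Clang __builtin_prefetch intrinsic mentioned in the question. The prefetch distance of 8 is a guess that would have to be tuned by measurement, and as noted above it only pays off if there is enough real work per iteration to hide the memory latency.

#include <cstddef>
#include <cstdint>
#include <vector>

struct Position { int32_t x, y, z; };

void updatePositions(std::vector<Position*>& posPointers) {
    const std::size_t n = posPointers.size();
    const std::size_t dist = 8;                       // prefetch distance: tune by measuring
    for (std::size_t i = 0; i < n; ++i) {
        if (i + dist < n)
            __builtin_prefetch(posPointers[i + dist]);   // hint: start loading ahead of time
        Position& p = *posPointers[i];
        p.x++; p.y++; p.z++;     // ideally something heavier would run here, so the
                                 // prefetched line has time to arrive before it is needed
    }
}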
Modern processors have hardware prefetching mechanisms: Intel Hardware prefetcher. They infer stride access patterns to memory and prefetch memory locations that are likely to be accessed in the near future.
However, in the case of totally random pointer chasing such techniques cannot help. The processor does not know that the program being executed is performing pointer chasing, so it cannot prefetch accordingly. In such cases hardware prefetching mechanisms can even be detrimental to performance, as they would prefetch values that are not likely to be used.
The best that you can do is try to organize your data structures in memory in such a way that accesses to contiguous portions of memory are more likely.
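For the Position example from the question, that reorganization could be as simple as the following sketch: store the objects by value in one contiguous vector, so a sequential pass becomes a fixed-stride access pattern that the hardware prefetcher can recognize.

#include <cstdint>
#include <vector>

struct Position { int32_t x, y, z; };

std::vector<Position> positions;        // by value, contiguous, elements 12 bytes apart

void updatePositions() {
    for (Position& p : positions) {     // fixed-stride sequential access:
        p.x++;                          // spatial locality plus hardware prefetching
        p.y++;
        p.z++;
    }
}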

What is the most efficient (yet sufficiently flexible) way to store multi-dimensional variable-length data?

I would like to know what the best practice is for efficiently storing (and subsequently accessing) sets of multi-dimensional data arrays with variable length. The focus is on performance, but I also need to be able to handle changing the length of an individual data set during runtime without too much overhead.
Note: I know this is a somewhat lengthy question, but I have looked around quite a lot and could not find a solution or example which describes the problem at hand with sufficient accuracy.
Background
The context is a computational fluid dynamics (CFD) code that is based on the discontinuous Galerkin spectral element method (DGSEM) (cf. Kopriva (2009), Implementing Spectral Methods for Partial Differential Equations). For the sake of simplicity, let us assume a 2D data layout (it is in fact in three dimensions, but the extension from 2D to 3D should be straightforward).
I have a grid that consists of K square elements k (k = 0,...,K-1) that can be of different (physical) sizes. Within each grid element (or "cell") k, I have N_k^2 data points. N_k is the number of data points in each dimension, and can vary between different grid cells.
At each data point n_k,i (where i = 0,...,N_k^2-1) I have to store an array of solution values, which has the same length nVars in the whole domain (i.e. everywhere), and which does not change during runtime.
Dimensions and changes
The number of grid cells K is of the order of 10^5 to 10^6 and can change during runtime.
The number of data points N_k in each grid cell is between 2 and 8 and can change during runtime (and may be different for different cells).
The number of variables nVars stored at each grid point is around 5 to 10 and cannot change during runtime (it is also the same for every grid cell).
Requirements
Performance is the key issue here. I need to be able to regularly iterate in an ordered fashion over all grid points of all cells in an efficient manner (i.e. without too many cache misses). Generally, K and N_k do not change very often during the simulation, so for example a large contiguous block of memory for all cells and data points could be an option.
However, I do need to be able to refine or coarsen the grid (i.e. delete cells and create new ones, the new ones may be appended to the end) during runtime. I also need to be able to change the approximation order N_k, so the number of data points I store for each cell can change during runtime as well.
Conclusion
Any input is appreciated. If you have experience yourself, or just know a few good resources that I could look at, please let me know. However, while the solution will be crucial to the performance of the final program, it is just one of many problems, so the solution needs to be of an applied nature and not purely academic.
Should this be the wrong venue to ask this question, please let me know what a more suitable place would be.
Often, these sorts of dynamic mesh structures can be very tricky to deal with efficiently, but in block-structured adaptive mesh refinement codes (common in astrophysics, where complex geometries aren't important) or your spectral element code where you have large block sizes, it is often much less of an issue. You have so much work to do per block/element (with at least 10^5 cells x 2 points/cell in your case) that the cost of switching between blocks is comparatively minor.
Keep in mind, too, that you can't generally do too much of the hard work on each element or block until a substantial amount of that block's data is already in cache. You're already going to have had to flush most of block N's data out of cache before getting much work done on block N+1 anyway. (There might be some operations in your code which are exceptions to this, but those are probably not the ones where you're spending much time anyway, cache or no cache, because there's not a lot of data reuse - e.g., elementwise operations on cell values.) So keeping the blocks/elements beside each other isn't necessarily a huge deal; on the other hand, you definitely want each block/element itself to be contiguous.
Also notice that you can move blocks around to keep them contiguous as things get resized, but not only will all those memory copies wipe your cache, the copies themselves get very expensive. If your problem is filling a significant fraction of memory (and aren't we always?), say 1 GB, and you have to move 20% of that around after a refinement to make things contiguous again, that's 0.2 GB (read + write) / ~20 GB/s ~ 20 ms, compared to reloading (say) 16k cache lines at ~100 ns each ~ 1.5 ms. And your cache is trashed after the shuffle anyway. This might still be worth doing if you knew that you were going to do the refinement/derefinement very seldom.
But as a practical matter, most adaptive mesh codes in astrophysical fluid dynamics (where I know the codes well enough to say) simply maintain a list of blocks and their metadata and don't worry about their contiguity. YMMV of course. My suggestion would be - before spending too much time crafting the perfect data structure - to first just test the operation on two elements, twice; the first, with the elements in order and computing on them 1-2, and the second, doing the operation in the "wrong" order, 2-1, and timing the two computations several times.
For each cell, store the offset at which to find that cell's data in one contiguous array. This offset mapping is very efficient and widely used. You can reorder the cells for cache reuse in traversals. When the order or number of cells changes, create a new array and interpolate, then throw away the old arrays. This storage is also much better for external analysis, because operations like inner products in Krylov methods and stages in Runge-Kutta methods can be performed without reference to the mesh. It also requires minimal memory per vector (e.g. for Krylov bases and with time integration).
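A minimal sketch of that offset mapping (names and layout details are mine, not prescribed by the above): one flat solution array plus a per-cell offset table, rebuilt whenever the grid is refined or coarsened. A full traversal of all cells is then a single linear sweep over the solution array, and Krylov or Runge-Kutta style operations can act on that array directly without touching the mesh metadata.

#include <cstddef>
#include <vector>

// Flat storage for all solution values: for cell k with N_k^2 points and
// nVars variables per point, its block occupies
// [offset[k], offset[k] + pointsPerCell[k] * nVars) in 'solution'.
struct Mesh {
    std::size_t nVars;
    std::vector<std::size_t> pointsPerCell;   // N_k^2 for each cell k
    std::vector<std::size_t> offset;          // start of cell k's block in 'solution'
    std::vector<double>      solution;        // one contiguous block for everything

    void rebuildOffsets() {                   // call after refinement/coarsening
        offset.resize(pointsPerCell.size());
        std::size_t pos = 0;
        for (std::size_t k = 0; k < pointsPerCell.size(); ++k) {
            offset[k] = pos;
            pos += pointsPerCell[k] * nVars;
        }
        solution.resize(pos);                 // then interpolate the old data into the new array
    }

    double* cellData(std::size_t k) { return solution.data() + offset[k]; }
};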