How can I make this parallel sum function use vector instructions? - c++

As sort of a side project, I'm working on a multithreaded sum algorithm that should outperform std::accumulate when working on a large enough array. First I'm going to describe my thought process leading up to this, but if you want to skip straight to the problem, feel free to scroll down to that part.
I found many parallel sum algorithms online, most of which take the following approach:
template <typename T, typename IT>
T parallel_sum(IT _begin, IT _end, T _init) {
    const auto size = distance(_begin, _end);
    static const auto n = thread::hardware_concurrency();
    if (size < 10000 || n == 1) return accumulate(_begin, _end, _init);
    vector<future<T>> partials;
    partials.reserve(n);
    auto chunkSize = size / n;
    for (unsigned i{ 0 }; i < n; i++) {
        partials.push_back(async(launch::async, [](IT _b, IT _e){
            return accumulate(_b, _e, T{0});
        }, next(_begin, i*chunkSize), (i==n-1)?_end:next(_begin, (i+1)*chunkSize)));
    }
    for (auto& f : partials) _init += f.get();
    return _init;
}
Assuming there are 2 threads available (as reported by thread::hardware_concurrency()), this function would access the elements in memory the following way:
As a simple example, we are looking at 8 elements here. The two threads are indicated by red and blue. The arrows show the locations from which the threads wish to load data. Once the cells turn either red or blue, they have been loaded by the corresponding thread.
This approach (at least in my opinion) is not the best, since the threads load data from different parts of memory simultaneously. If you have many processing threads, say 16 on an 8-core hyper-threaded CPU, or even more than that, the CPU's prefetcher would have a very hard time keeping up with all these reads from completely different parts of memory (assuming the array is far too big to fit in cache). This is why I think the second example should be faster:
template <typename T, typename IT>
T parallel_sum2(IT _begin, IT _end, T _init) {
    const auto size = distance(_begin, _end);
    static const auto n = thread::hardware_concurrency();
    if (size < 10000 || n == 1) return accumulate(_begin, _end, _init);
    vector<future<T>> partials;
    partials.reserve(n);
    for (unsigned i{ 0 }; i < n; i++) {
        partials.push_back(async(launch::async, [](IT _b, IT _e, unsigned _s){
            T _ret{ 0 };
            for (; _b < _e; advance(_b, _s)) _ret += *_b;
            return _ret;
        }, next(_begin, i), _end, n));
    }
    for (auto& f : partials) _init += f.get();
    return _init;
}
This function accesses memory in a sort-of-sequential way, like so:
This way the prefetcher is always able to stay ahead, since all the threads access the same-ish part of memory, so there should be fewer cache misses and faster load times overall, or at least that's what I think.
The problem is that while this is all fine and dandy in theory, the actual compiled versions of these show a different result. The second one is way slower. I dug a little deeper into the problem and found that the assembly code produced for the actual addition is very different. These are the "hot loops" in each one that perform the addition (remember that the first one uses std::accumulate internally, so you're basically looking at that):
Please ignore the percentages and the colors, my profiler sometimes gets things wrong.
I noticed that std::accumulate, when compiled, uses an AVX2 vector instruction, vpaddq, which can add four 64-bit integers at once. I think the reason the second version cannot be vectorized is that each thread only accesses one element at a time and then skips over some. The vector addition would load several contiguous elements and add them together. Clearly this cannot be done, since the threads don't load elements contiguously. I tried manually unrolling the for loop in the second version, and that vector instruction did appear in the assembly, but the whole thing became painfully slow for some reason.
The above results and assembly code come from a gcc-compiled version, but the same kind of behavior can be observed with Visual Studio 2015 as well, although I haven't looked at the assembly it produces.
So is there a way to take advantage of vector instructions while retaining this sequential memory access model? Or is this memory access method something that would help at all when compared to the first version of the function?
I wrote a little benchmark program, which is ready to compile and run, just in case you want to see the performance yourself.
PS: My primary target hardware is modern x86_64 (Haswell and the like).

Each core has its own cache and its own prefetcher.
You should look at each thread as an independently executing program. In that case the shortcoming of the second approach becomes clear: within a single thread you do not access sequential data. There are holes which should not be processed, so the thread cannot use vector instructions.
Another problem: the CPU prefetches data in chunks. Due to how the different cache levels work, changing some data within a chunk marks that cache line stale, and if another core tries to operate on the same chunk of data it has to wait until the first core has written its changes and then fetch the chunk again. Basically, in your second example the cache is always stale and you see raw memory access performance.
The best way to handle concurrent processing is to process the data in large sequential chunks.
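For what it's worth, here is a minimal sketch (my own illustration of the advice above, not code from the question) of how the interleaved idea could be kept at a coarser granularity: each thread still alternates with the others, but over large contiguous blocks, so the inner std::accumulate call stays vectorizable. The function name and the block size of 4096 elements are my guesses and would need tuning.
#include <algorithm>
#include <cstddef>
#include <future>
#include <iterator>
#include <numeric>
#include <thread>
#include <vector>

template <typename T, typename IT>
T parallel_sum_blocked(IT begin, IT end, T init) {
    const std::ptrdiff_t size = std::distance(begin, end);
    const unsigned n = std::max(1u, std::thread::hardware_concurrency());
    const std::ptrdiff_t block = 4096; // elements per contiguous block, needs tuning
    if (size < 10000 || n == 1) return std::accumulate(begin, end, init);

    std::vector<std::future<T>> partials;
    partials.reserve(n);
    for (unsigned t = 0; t < n; ++t) {
        partials.push_back(std::async(std::launch::async, [=] {
            T acc{0};
            // Thread t owns blocks t, t + n, t + 2n, ...; each block is contiguous,
            // so the compiler can still vectorize the accumulate over it.
            for (std::ptrdiff_t start = t * block; start < size; start += std::ptrdiff_t(n) * block) {
                IT first = std::next(begin, start);
                IT last  = std::next(begin, std::min(start + block, size));
                acc = std::accumulate(first, last, acc);
            }
            return acc;
        }));
    }
    for (auto& f : partials) init += f.get();
    return init;
}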

Related

Using TBB for a simple example

I am new to TBB and am trying to do a simple experiment.
My data for the functions are:
int n = 9000000;
int *data = new int[n];
First I created a function that does not use TBB:
void _array(int* &data, int n) {
    for (int i = 0; i < n; i++) {
        data[i] = busyfunc(data[i])*123;
    }
}
It takes 0.456635 seconds.
I also created a second function, this one using TBB:
void parallel_change_array(int* &data, int list_count) {
    //Instructional example - parallel version
    parallel_for(blocked_range<int>(0, list_count),
        [=](const blocked_range<int>& r) {
            for (int i = r.begin(); i < r.end(); i++) {
                data[i] = busyfunc(data[i])*123;
            }
        });
}
It takes 0.584889 seconds.
As for busyfunc(int m):
int busyfunc(int m)
{
    m *= 32;
    return m;
}
Can you tell me why the function without TBB takes less time than the one with TBB?
I think the problem is that the functions are simple, and it's easy to calculate the result without using TBB.
First, busyfunc() seems not so busy, because 9M elements are computed in just half a second, which makes this example rather memory bound (uncached memory operations take orders of magnitude more cycles than arithmetic operations). Memory-bound computations do not scale as well as compute-bound ones; e.g. plain memory copying usually scales up to no more than, say, 4 times, even when running on a much bigger number of cores/processors.
Also, memory-bound programs are more sensitive to NUMA effects, and since you allocated this array as contiguous memory using standard C++, it will be allocated by default entirely on the same memory node where the initialization occurs. This default can be altered by running with numactl -i all --.
And the last, but the most important thing is that TBB initializes threads lazily and pretty slowly. I guess you do not intend to write an application which exits after 0.5 seconds spent on parallel computation. Thus, a fair benchmark should take into account all the warm-up effects, which are expected in the real application. At the very least, it has to wait until all the threads are up and running before starting measurements. This answer suggests one way to do that.
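As an illustration only (not code from the question), a sketch of one simple way to account for warm-up: run the parallel loop once without timing it, so the worker threads already exist when the measured run starts. The wrapper function name is mine.
#include <chrono>
#include <cstdio>
#include <tbb/blocked_range.h>
#include <tbb/parallel_for.h>

int busyfunc(int m); // defined in the question

void timed_parallel_change_array(int* data, int list_count) {
    auto body = [=](const tbb::blocked_range<int>& r) {
        for (int i = r.begin(); i < r.end(); ++i)
            data[i] = busyfunc(data[i]) * 123;
    };

    // Untimed warm-up run: forces TBB to create its worker threads.
    // Note it also applies the transformation once, which is fine for a
    // timing experiment but does change the data.
    tbb::parallel_for(tbb::blocked_range<int>(0, list_count), body);

    auto t0 = std::chrono::steady_clock::now();
    tbb::parallel_for(tbb::blocked_range<int>(0, list_count), body);
    auto t1 = std::chrono::steady_clock::now();
    std::printf("%f seconds\n", std::chrono::duration<double>(t1 - t0).count());
}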
[update] Please also refer to Alexey's answer for another possible reason lurking in compiler optimization differences.
In addition to Anton's answer, I recommend checking whether the compiler was able to optimize the code equivalently.
For a start, check the performance of the TBB version executed by a single thread, without real parallelism. You can use tbb::global_control or tbb::task_scheduler_init to limit the number of threads to 1, e.g.
tbb::global_control ctl(tbb::global_control::max_allowed_parallelism, 1);
The overheads of thread creation, as well as cache locality or NUMA effects, should not play a role when all the code is executed by one thread. Therefore you should see approximately the same performance as for the no-TBB version. If you do, then you have a scalability issue, and Anton explained possible reasons.
However, if you see that performance drops a lot, then it is a serial optimization issue. One known reason is that some compilers cannot optimize the loop over a blocked_range as well as they optimize the original loop; it has also been observed that storing r.end() into a local variable may help:
int rend = r.end();
for (int i = r.begin(); i < rend; i++) {
    data[i] = busyfunc(data[i])*123;
}
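For context, here is the same change sketched inside the full call from the question (the function name is mine, just an illustration):
void parallel_change_array_hoisted(int* &data, int list_count) {
    tbb::parallel_for(tbb::blocked_range<int>(0, list_count),
        [=](const tbb::blocked_range<int>& r) {
            int rend = r.end();                 // hoisted out of the loop condition
            for (int i = r.begin(); i < rend; i++)
                data[i] = busyfunc(data[i]) * 123;
        });
}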

Example of cache friendly code

template<size_t size>
class Objects{
    std::array<int,size> a;
    std::array<int,size> b;
    std::array<int,size> c;

    void update(){
        for (size_t i = 0; i < size; ++i){
            c[i] = a[i] + b[i];
        }
    }
};
I have been gathering information on how to write cache-friendly code for a week now, and I have read through several articles, but I still haven't understood the basics.
Code like the above is used in most of the examples, but to me this is not cache friendly at all.
For me the memory layout should look like this
aaaabbbbcccc
and in the first loop it will access
[a]aaa[b]bbb[c]ccc
If I understood it correctly, the CPU prefetches elements that are near in memory. I am not sure how intelligent this mechanism is, but I assume it's primitive and just fetches the n nearest elements.
The problem is that [a]aaa[b]bbb[c]ccc will not access the elements in order at all. So it might fetch the next '3' elements a[aaa]bbbbcccc which is nice for the next a because it will be a cache hit but not for the b.
Is the example above cache friendly code?
I suggest you use an array of structures:
struct Cache_Item
{
    int a;
    int b;
    int c;
};

Cache_Item cache_line[size];

for (unsigned int i = 0; i < size; ++i)
{
    cache_line[i].c = cache_line[i].a + cache_line[i].b;
}
The structure arrangement allows for all the variables in use to be next to each other in the cache line or very close.
In your array method, element b[0] ideally is at location a[size], so they are size items apart. This could mean that they are on different cache lines. The result location, c[0], is at a[size + size], which means it could be 2 cache lines away.
Your code is not particularly unfriendly. It requires three active cache lines at a time instead of one, but that isn't too much to ask. Your code would be a lot more cache-unfriendly if instead of
std::array<int,size> a;
you had
std::array<struct { int x; char description[5000]; }, size> a;
because then the CPU would have to pick out the lone x among the thousands of bytes of description (which your loop never uses).
Your example would also be more cache-unfriendly if you had not just a, b, and c, but also d-z and aa-az and maybe a few more. (How far you have to go depends on the sophistication of your cache - how many way-associative it is, etc.)
Have you profiled yours vs Thomas Matthews' code?
You should trust the compiler's optimization work (and of course enable optimizations); it probably deals quite well with the CPU cache (perhaps by issuing appropriate prefetch instructions).
Sometimes you can hint the compiler through builtins or pragmas. For example, with GCC on x86-64 you might, with care, use __builtin_prefetch. Usually it is not worth the effort (and if you misuse it, performance will suffer). See this answer to a related question.
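Purely as an illustration I'm adding (not part of the original answer), this is roughly what a manual prefetch hint looks like with the GCC/Clang builtin. The prefetch distance of 16 elements is a guess that would need measuring, and on most modern CPUs the hardware prefetcher makes this unnecessary.
long sum_with_prefetch(const int* p, long n) {
    long total = 0;
    for (long i = 0; i < n; ++i) {
        if (i + 16 < n)
            __builtin_prefetch(&p[i + 16], 0 /*read*/, 3 /*high temporal locality*/);
        total += p[i];
    }
    return total;
}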

efficiency when handling consecutive blocks vs. non consecutive blocks of memory

I have a struct
struct A
{
    int v[10000000];
};
If I have A a[2]; and wish to calculate the total sum of the values, which of these two methods is the fastest?
int method_1(const A a[], int length)
{
    int total = 0;
    for (int i = 0; i < length; i++)
        for (int j = 0; j < 10000000; j++)
            total += a[i].v[j];
    return total;
}
int method_2(const A a[], int length)
{
    int total = 0;
    for (int j = 0; j < 10000000; j++)
        for (int i = 0; i < length; i++)
            total += a[i].v[j];
    return total;
}
a[2] is declared as two consecutive blocks of struct A, like so:
----a[0]---- /--- a[1]----
[][][][][][][][]/[][][][][][][][]
So I might be tempted to say that method_1 is faster, based on the intuition that the blocks are consecutive and the iteration through each block's v is also consecutive.
What I am really interested in is how the memory is actually accessed and what the most efficient way to access it is.
EDIT
I have changed the v size from 32 to 10000000, because apparently it wasn't understood that I was referring to a general case.
Each time a memory location is read, a whole cache line is read from main memory into the CPU cache; today a cache line is typically 64 bytes long. Mostly because of this, reading consecutive memory blocks is fast.
Now, there is more than one cache line...
In your case both methods may have similar performance, as the two arrays will most probably not collide into the same cache line, so both may be in the cache on different lines; I suspect performance will be similar.
One related thing you may consider in this case to change the performance is NOT using the [] operators, in favor of iterating with "iterators" instead, like this:
int method_1(const A a[], int length)
{
    int total = 0;
    for (const A* aIt = a; aIt < a + length; ++aIt)
        for (const int* vIt = aIt->v; vIt < aIt->v + 10000000; ++vIt)
            total += *vIt;
    return total;
}
This way you avoid the double [], which boils down to a multiplication by the sizeof of an array element (it may be optimized away, but it may not be, and if not it will be costly when executed millions of times). Your compiler may be smart enough to optimize the code just as I've shown, using only additions, but it very well may not be, and I've seen this make a big difference when the operation performed on each element is as trivial as an increment. You're best off measuring this and seeing how these options work out in your environment.
Accessing elements in the order they appear in memory will improve performance in most cases since it allows the prefetcher to load data before you even use it. Besides, if you use data in a non-contiguous way, you might load and discard the same cache line many times and this has a cost.
The data size is small enough to fit completely in a single cache line on modern CPUs. I'm not sure about the compiler vectorizing this code.
I don't think method_2 is slower than method_1. The chunk of memory will be brought close to the CPU, and then accessing a[0] and a[1] will both take the same time.
To be on the safe side, method_1 can always be considered better than method_2.

Which one is more optimized for accessing array?

Solving the following exercise:
Write three different versions of a program to print the elements of
ia. One version should use a range for to manage the iteration, the
other two should use an ordinary for loop in one case using subscripts
and in the other using pointers. In all three programs write all the
types directly. That is, do not use a type alias, auto, or decltype to
simplify the code.[C++ Primer]
a question came up: which of these methods for accessing an array is the most optimized in terms of speed, and why?
My Solutions:
Foreach Loop:
int ia[3][4]={{1,2,3,4},{5,6,7,8},{9,10,11,12}};
for (int (&i)[4]:ia) //1st method using for each loop
    for(int j:i)
        cout<<j<<" ";
Nested for loops:
for (int i=0;i<3;i++) //2nd method normal for loop
    for(int j=0;j<4;j++)
        cout<<ia[i][j]<<" ";
Using pointers:
int (*i)[4]=ia;
for(int t=0;t<3;i++,t++){ //3rd method. using pointers.
    for(int x=0;x<4;x++)
        cout<<(*i)[x]<<" ";
}
Using auto:
for(auto &i:ia) //4th one using auto but I think it is similar to 1st.
    for(auto j:i)
        cout<<j<<" ";
Benchmark result using clock()
1st: 3.6 (6,4,4,3,2,3)
2nd: 3.3 (6,3,4,2,3,2)
3rd: 3.1 (4,2,4,2,3,4)
4th: 3.6 (4,2,4,5,3,4)
Simulating each method 1000 times:
1st: 2.29375 2nd: 2.17592 3rd: 2.14383 4th: 2.33333
Process returned 0 (0x0) execution time : 13.568 s
Compiler used: MinGW 3.2 with the C++11 flag enabled. IDE: Code::Blocks.
I have some observations and points to make and I hope you get your answer from this.
The fourth version, as you mention yourself, is basically the same as the first version. auto can be thought of as only a coding shortcut (this is of course not strictly true, as using auto can result in getting different types than you'd expected and therefore result in different runtime behavior. But most of the time this is true.)
Your solution using pointers is probably not what people mean when they say that they are using pointers! One solution might be something like this:
for (int i = 0, *p = &(ia[0][0]); i < 3 * 4; ++i, ++p)
    cout << *p << " ";
or to use two nested loops (which is probably pointless):
for (int i = 0, *p = &(ia[0][0]); i < 3; ++i)
    for (int j = 0; j < 4; ++j, ++p)
        cout << *p << " ";
from now on, I'm assuming this is the pointer solution you've written.
In such a trivial case as this, the part that will absolutely dominate your running time is the cout. The time spent in bookkeeping and checks for the loop(s) will be completely negligible comparing to doing I/O. Therefore, it won't matter which loop technique you use.
Modern compilers are great at optimizing such ubiquitous tasks and access patterns (iterating over an array.) Therefore, chances are that all these methods will generate exactly the same code (with the possible exception of the pointer version, which I will talk about later.)
The performance of most code like this will depend more on the memory access pattern than on how exactly the compiler generates the branch instructions (and the rest of the operations). This is because if a required memory block is not in the CPU cache, it's going to take roughly several hundred CPU cycles (this is just a ballpark number) to fetch those bytes from RAM. Since all the examples access memory in exactly the same order, their behavior with respect to memory and cache will be the same, and they will have roughly the same running time.
As a side note, the way these examples access memory is the best way for it to be accessed: linearly, consecutively, and from start to finish. Again, there are problems with the cout in there, which can be a very complicated operation and can even call into the OS on every invocation, which might result in, among other things, an almost complete eviction of everything useful from the CPU cache.
On 32-bit systems and programs, the size of an int and a pointer are usually equal (both are 32 bits!) Which means that it doesn't matter much whether you pass around and use index values or pointers into arrays. On 64-bit systems however, a pointer is 64 bits but an int will still usually be 32 bits. This suggests that it is usually better to use indexes into arrays instead of pointers (or even iterators) on 64-bit systems and programs.
In this particular example, this is not significant at all though.
Your code is very specific and simple, but in the general case it is almost always better to give the compiler as much information about your code as possible. This means you should use the narrowest, most specific device available to do a job. This in turn means that a generic for loop (i.e. for (int i = 0; i < n; ++i)) is worse for the compiler than a range-based for loop (i.e. for (auto i : v)), because in the latter case the compiler simply knows that you are going to iterate over the whole range, without going outside of it or breaking out of the loop; with the generic for loop, especially if your code is more complex, the compiler cannot be sure of this and has to insert extra checks and tests to make sure the code executes as the C++ standard says it should.
In many (most?) cases, although you might think performance matters, it does not. And most of the time you rewrite something to gain performance, you don't gain much. And most of the time the performance gain you get is not worth the loss in readability and maintainability that you sustain. So, design your code and data structures right (and keep performance in mind) but avoid this kind of "micro-optimization" because it's almost always not worth it and even harms the quality of the code too.
Generally, performance in terms of speed is very hard to reason about. Ideally you have to measure the time with real data on real hardware in real working conditions using sound scientific measuring and statistical methods. Even measuring the time it takes a piece of code to run is not at all trivial. Measuring performance is hard, and reasoning about it is harder, but these days it is the only way of recognizing bottlenecks and optimizing the code.
I hope I have answered your question.
EDIT: I wrote a very simple benchmark for what you are trying to do. The code is here. It's written for Windows and should be compilable on Visual Studio 2012 (because of the range-based for loops.) And here are the timing results:
Simple iteration (nested loops): min:0.002140, avg:0.002160, max:0.002739
Simple iteration (one loop): min:0.002140, avg:0.002160, max:0.002625
Pointer iteration (one loop): min:0.002140, avg:0.002160, max:0.003149
Range-based for (nested loops): min:0.002140, avg:0.002159, max:0.002862
Range(const ref)(nested loops): min:0.002140, avg:0.002155, max:0.002906
The relevant numbers are the "min" times (over 2000 runs of each test, for 1000x1000 arrays.) As you see, there is absolutely no difference between the tests. Note that you should turn on compiler optimizations or test 2 will be a disaster and cases 4 and 5 will be a little worse than 1 and 3.
And here are the code for the tests:
// 1. Simple iteration (nested loops)
unsigned sum = 0;
for (unsigned i = 0; i < gc_Rows; ++i)
    for (unsigned j = 0; j < gc_Cols; ++j)
        sum += g_Data[i][j];

// 2. Simple iteration (one loop)
unsigned sum = 0;
for (unsigned i = 0; i < gc_Rows * gc_Cols; ++i)
    sum += g_Data[i / gc_Cols][i % gc_Cols];

// 3. Pointer iteration (one loop)
unsigned sum = 0;
unsigned * p = &(g_Data[0][0]);
for (unsigned i = 0; i < gc_Rows * gc_Cols; ++i)
    sum += *p++;

// 4. Range-based for (nested loops)
unsigned sum = 0;
for (auto & i : g_Data)
    for (auto j : i)
        sum += j;

// 5. Range(const ref)(nested loops)
unsigned sum = 0;
for (auto const & i : g_Data)
    for (auto const & j : i)
        sum += j;
Many factors affect this:
It depends on the compiler
It depends on the compiler flags used
It depends on the computer used
There is only one way to know the exact answer: measure the time taken when dealing with huge arrays (maybe filled from a random number generator), which is the same method you have already used, except that the array size should be at least 1000x1000.

C++ heap memory performance improvement

I'm writing a function where I need a significant amount of heap memory. Is it possible to tell the compiler that those data will be accessed frequently within a specific for loop, so as to improve performance (through compile options or similar)?
The reason I cannot use the stack is that the number of elements I need to store is big, and I get segmentation fault if I try to do it.
Right now the code is working but I think it could be faster.
UPDATE:
I'm doing something like this
vector< set<uint> > vec(node_vec.size());
for(uint i = 0; i < node_vec.size(); i++)
    for(uint j = i+1; j < node_vec.size(); j++)
        // some computation, basic math, store the result in variable x
        if( x > threshold ) {
            vec[i].insert(j);
            vec[j].insert(i);
        }
some details:
- I used hash_set, with little improvement, besides the fact that hash_set is not available on all the machines I have for simulation purposes
- I tried to allocate vec on the stack using arrays but, as I said, I might get a segmentation fault if the number of elements is too big
If node_vec.size() is, say, equal to k, where k is of the order of a few thousands, I expect vec to be 4 or 5 times bigger than node_vec. With this order of magnitude the code appears to be slow, considering the fact that I have to run it many times. Of course, I am using multithreading to parallelize these calls, but I can't get the function per se to run much faster than what I'm seeing right now.
Would it be possible, for example, to have vec allocated in the cache memory for fast data retrieval, or something similar?
I'm writing a function where I need a significant amount of heap memory ... will be accessed frequently within a specific for loop
This isn't something you can really optimize at a compiler level. I think your concern is that you have a lot of memory that may be "stale" (paged out) but at a particular point in time you will need to iterate over all of it, maybe several times and you don't want the memory pages to be paged out to disk.
You will need to investigate strategies that are platform specific to improve performance. Keeping the pages in memory can be achieved with mlockall or VirtualLock, but you really shouldn't need to do this. Make sure you know what the implications of locking your application's memory pages into RAM are, however: you're hogging memory from other processes.
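For completeness, a minimal Linux-only sketch I'm adding of the mlockall call mentioned above (again, usually unnecessary, and it can fail due to privileges or RLIMIT_MEMLOCK):
#include <sys/mman.h>   // mlockall, Linux/POSIX only
#include <cstdio>

void lock_pages_in_ram() {
    // Lock all current and future pages of this process into RAM so they
    // cannot be paged out. Always check the return value.
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
        std::perror("mlockall");
}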
You might also want to investigate a low fragmentation heap (however it may not be relevant at all to this problem) and this page which describes cache lines with respect to for loops.
The latter page is about the nitty-gritty of how CPUs work (a detail you normally shouldn't have to be concerned with) with respect to memory access.
Example 1: Memory accesses and performance
How much faster do you expect Loop 2 to run, compared to Loop 1?
int[] arr = new int[64 * 1024 * 1024];
// Loop 1
for (int i = 0; i < arr.Length; i++) arr[i] *= 3;
// Loop 2
for (int i = 0; i < arr.Length; i += 16) arr[i] *= 3;
The first loop multiplies every value in the array by 3, and the second loop multiplies only every 16-th. The second loop only does about 6% of the work of the first loop, but on modern machines, the two for-loops take about the same time: 80 and 78 ms respectively on my machine.
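The quoted snippet is in C#; a rough C++ translation of the same experiment (my own sketch, with the same array size and step) might look like this. Timings are of course machine-dependent.
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <vector>

int main() {
    std::vector<int> arr(64 * 1024 * 1024, 1);

    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < arr.size(); ++i) arr[i] *= 3;      // Loop 1: every element
    auto t1 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < arr.size(); i += 16) arr[i] *= 3;  // Loop 2: every 16th element
    auto t2 = std::chrono::steady_clock::now();

    std::printf("Loop 1: %.1f ms\nLoop 2: %.1f ms\n",
                std::chrono::duration<double, std::milli>(t1 - t0).count(),
                std::chrono::duration<double, std::milli>(t2 - t1).count());
    std::printf("checksum: %d\n", arr[0]);  // keep the writes observable
}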
UPDATE
vector< set<uint> > vec(node_vec.size());
for(uint i = 0; i < node_vec.size(); i++)
    for(uint j = i+1; j < node_vec.size(); j++)
        // some computation, basic math, store the result in variable x
        if( x > threshold ) {
            vec[i].insert(j);
            vec[j].insert(i);
        }
That still doesn't show much, because we cannot know how often the condition x > threshold will be true. If x > threshold is very frequently true, then the std::set might be the bottleneck, because it has to do a dynamic memory allocation for every uint you insert.
Also, we don't know what "some computation" actually means/does/is. If it does much, or does it in the wrong way, that could be the bottleneck.
And we don't know how you need to access the result.
Anyway, on a hunch:
vector<pair<int, int> > vec1;
vector<pair<int, int> > vec2;
for (uint i = 0; i < node_vec.size(); i++)
{
    for (uint j = i+1; j < node_vec.size(); j++)
    {
        // some computation, basic math, store the result in variable x
        if (x > threshold)
        {
            vec1.push_back(make_pair(i, j));
            vec2.push_back(make_pair(j, i));
        }
    }
}
If you can use the result in that form, you're done. Otherwise you could do some post-processing. Just don't copy it into a std::set again (obviously). Try to stick to std::vector<POD>. E.g. you could build an index into the vectors like this:
// ...
vector<int> index1 = build_index(node_vec.size(), vec1);
vector<int> index2 = build_index(node_vec.size(), vec2);
// ...
}

vector<int> build_index(size_t count, vector<pair<int, int> > const& vec)
{
    vector<int> index(count, -1);
    size_t i = vec.size();
    do
    {
        i--;
        assert(vec[i].first >= 0);
        assert(vec[i].first < count);
        index[vec[i].first] = i;
    }
    while (i != 0);
    return index;
}
ps.: I'm almost sure your loop is not memory-bound. Can't be sure though... if the "nodes" you're not showing us are really big it might still be.
Original answer:
There is no easy I_will_access_this_frequently_so_make_it_fast(void* ptr, size_t len)-kind-of solution.
You can do some things though.
Make sure the compiler can "see" the implementation of every function that's called inside critical loops. What is necessary for the compiler to be able to "see" the implementation depends on the compiler. There is one way to be sure though: define all relevant functions in the same translation unit before the loop, and declare them as inline.
This also means you should not by any means call "external" functions in those critical loops. By "external" functions I mean things like system calls, runtime-library stuff or stuff implemented in a DLL/SO. Also don't call virtual functions and don't use function pointers. And of course don't allocate or free memory (inside the critical loops).
Make sure you use an optimal algorithm. Linear optimization is moot if the complexity of the algorithm is higher than necessary.
Use the smallest possible types. E.g. don't use int if signed char will do the job. That's something I wouldn't normally recommend, but when processing a large chunk of memory it can increase performance quite a lot. Especially in very tight loops.
If you're just copying or filling memory, use memcpy or memset. Disable the intrinsic version of those two functions if the chunks are larger than about 50 to 100 bytes.
Make sure you access the data in a cache-friendly manner. The optimum is "streaming", i.e. accessing the memory with ascending or descending addresses. You can "jump" ahead some bytes at a time, but don't jump too far. The worst is random access to a big block of memory. E.g. if you have to work on a 2-dimensional matrix (like a bitmap image) where going from p[0] to p[1] is a step "to the right" (x + 1), make sure the inner loop increments x and the outer one increments y; if you do it the other way around, performance will be much, much worse (a small sketch follows at the end of this list).
If your pointers are alias-free, you can tell the compiler (how that's done depends on the compiler). If you don't know what alias-free means I recommend searching the net and your compiler's documentation, since an explanation would be beyond the scope.
Use intrinsic SIMD instructions if appropriate.
Use explicit prefetch instructions if you know which memory locations will be needed in the near future.
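To make the streaming-access and alias-free points above concrete, here is a small sketch I'm adding (not part of the original answer); __restrict__ is the GCC/Clang spelling, MSVC uses __restrict.
#include <cstddef>

// Streaming access: the inner loop increments x, i.e. walks consecutive addresses.
void brighten_rows(unsigned char* img, std::size_t width, std::size_t height) {
    for (std::size_t y = 0; y < height; ++y)      // outer loop: rows
        for (std::size_t x = 0; x < width; ++x)   // inner loop: consecutive bytes
            img[y * width + x] += 10;
    // Swapping the two loops (x outer, y inner) would jump 'width' bytes per
    // step and is dramatically slower on large images.
}

// Alias-free hint: promising that dst, a and b never overlap lets the
// compiler vectorize more aggressively.
void add_arrays(float* __restrict__ dst, const float* __restrict__ a,
                const float* __restrict__ b, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        dst[i] = a[i] + b[i];
}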
You can't do that with compiler options. Depending on your usage (insertion, random-access, deleting, sorting, etc.), you could maybe get a better suited container.
The compiler can already see that the data is accessed frequently within the loop.
Assuming you're only allocating the data from the heap once before doing the looping, note, as @lvella says, that memory is memory and if it's accessed frequently it should be effectively cached during execution.