I am in need of a simple and portable way to explicitly prefetch data. I do not want to use the specific feature of any specific compiler or platform, just something generic enough to work across different platforms and compilers.
One very naive solution that comes to mind is to just move a byte/int from the memory location into a register; that "should" bring the surrounding memory into the CPU cache to fill a line, or at least that is what I assume. But maybe it won't be that easy? One possibility is that the compiler optimizes the operation away if the data is not otherwise accessed in that scope, so no prefetching would occur.
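Something like this sketch is what I mean (the volatile is only my guess at keeping the compiler from dropping the otherwise-unused read):

    // Naive "prefetch": read one byte from the address and hope the whole
    // cache line comes in with it. volatile is meant to stop the compiler
    // from optimizing the unused read away.
    static inline void naive_prefetch(const void* addr) {
        volatile char sink = *static_cast<const volatile char*>(addr);
        (void)sink;
    }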
Generally speaking, prefetching and memory loads are not exactly the same operations. There are a few fundamental differences:
Prefetching an invalid address does not generate a fault, whereas attempting to read, write or execute an invalid address does (if the CPU has an MPU/MMU, of course).
Prefetching can be done for reading and/or writing, whereas reading a byte into a register is just that: reading a byte into a register.
You can (theoretically) specify memory locality when prefetching.
CPU might have special instructions for prefetching that are not the same as memory load instructions.
So just stick with __builtin_prefetch and let the compiler do the hard work.
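For example, a minimal sketch (the node type and loop here are made up just to show the shape of the call) that prefetches the next list node while processing the current one:

    struct Node {
        Node* next;
        int   payload;
    };

    int sum_list(Node* head) {
        int total = 0;
        for (Node* n = head; n != nullptr; n = n->next) {
            if (n->next != nullptr) {
                // Second arg: 0 = prefetch for read; third: temporal locality 0..3.
                __builtin_prefetch(n->next, 0, 3);
            }
            total += n->payload;
        }
        return total;
    }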
Also, keep in mind that optimizing compilers may generate prefetch instructions automatically. I guess if they do, then you'd have to make sure you do not interfere with that.
Another interesting thing is that, in general, explicit prefetching does not improve performance but slightly degrades it instead. See this LWN article for details and an explanation of why prefetching was removed from the Linux kernel.
Hope it helps. Good Luck!
The problem does not take the CPU cache into consideration; that is, the cache is just left to do its job (let the CPU cache improve performance as it normally would).
My idea is to allocate a chunk of memory big enough that not all of it fits into the cache, treat it as an array of one data type (like int), and sum the elements so the compiler cannot completely optimize away the code that reads the memory. The question is: does the data type affect the measurement? Or is there a more general way of doing it?
EDIT: The above might have been a bit misleading. An example is AIDA64's memory and cache benchmark, which can measure memory read/write speed as well as latency. I want a general idea of how that is done.
Microbenchmarks like this are not easy in C/C++. The amount of time something takes in C++ is not a specified aspect of the language. Indeed, for every use case except this one, faster is better, so compilers are encouraged to do smart things.
The trick to these is to write the benchmark, compile it, and then look at the assembly to see whether it's doing clever tricks. Or, at the very least, check to make sure that it makes sense (accessing more memory = more time).
Compilers are getting smart. Addition is not always enough. More than once I've had Visual Studio realize what I was doing to construct the microbenchmark and compile it all away.
At the moment, I am having good luck using the argc argument passed into main as a seed, and using a cryptographic hash like SHA-1 or MD5 to fill the data. This tends to be enough to trick the compiler into actually emitting all of the reads. But verify your results. There's no guarantee that a new compiler doesn't get even smarter.
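Here is a minimal sketch of the idea (the xorshift mix below is just a stand-in for the cryptographic hash, and the buffer size is an assumption; check the generated assembly as described above):

    #include <chrono>
    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    int main(int argc, char**) {
        const std::size_t bytes = 256u * 1024u * 1024u;   // assumed to exceed the cache
        std::vector<std::uint64_t> buf(bytes / sizeof(std::uint64_t));

        // Fill with data derived from argc so the compiler cannot precompute it.
        std::uint64_t seed = static_cast<std::uint64_t>(argc) * 0x9E3779B97F4A7C15ull;
        for (auto& v : buf) {
            seed ^= seed << 13; seed ^= seed >> 7; seed ^= seed << 17;   // xorshift mix
            v = seed;
        }

        auto t0 = std::chrono::steady_clock::now();
        std::uint64_t sink = 0;
        for (const auto& v : buf) sink += v;               // the timed read pass
        auto t1 = std::chrono::steady_clock::now();

        double secs = std::chrono::duration<double>(t1 - t0).count();
        // Printing the sum keeps the reads observable, so they can't be elided.
        std::printf("sum=%llu, %.2f MB/s\n",
                    static_cast<unsigned long long>(sink), (bytes / 1e6) / secs);
    }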
I am finding it really hard to pin down all the sorts of optimisations that take place under a relaxed memory model.
I have come across speculation and register allocation, but surely the list doesn't end there.
What are the various sorts of compiler optimisations that can happen when a developer uses the relaxed memory model in C++?
Speculation is a general term; there are dozens of types. Register allocation is part of the natural process of compilation (for logical registers) and CPU work (for physical registers, if such exist) - neither of these things is related to relaxed memory models.
The main optimization I can think of is that relaxed memory models allow store reordering, which common compilers and CPUs otherwise prevent. This allows better memory parallelism since you don't serialize your writes, and even better cache hit rates since you can use cached lines without stalling and risking losing the line from the cache. This also allows better chances of combining multiple stores to the same cache line (write combining), which gives better bandwidth.
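As a rough sketch (my own illustration): two relaxed stores may be reordered or combined by the compiler and CPU, whereas the default sequentially consistent stores must become visible in program order:

    #include <atomic>

    std::atomic<int> x{0}, y{0};

    void writer_relaxed() {
        // With memory_order_relaxed the compiler/CPU may reorder or combine
        // these stores; other threads may observe them in either order.
        x.store(1, std::memory_order_relaxed);
        y.store(2, std::memory_order_relaxed);
    }

    void writer_seq_cst() {
        // Default (seq_cst) stores must become visible in program order,
        // which on x86 typically costs serialized writes or full fences.
        x.store(1);
        y.store(2);
    }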
I'm mostly talking about stores because loads are usually already being optimized by the hardware in modern out-of-order CPUs. There are some precautions used to detect problems there, but the penalty is probably not too bad.
There are also barriers, and some weak models can use lighter ones than the heavy fences x86 uses, but you may actually have to use more fences on a weaker model, so it really depends on what you're trying to achieve and how.
I want to prefetch some code into the instruction cache. The code path is used infrequently but I need it to be in the instruction cache or at least in L2 for the rare cases that it is used. I have some advance notice of these rare cases. Does _mm_prefetch work for code? Is there a way to get this infrequently used code in cache? For this problem I don't care about portability so even asm would do.
The answer depends on your CPU architecture.
That said, if you are using gcc or clang, you can use the __builtin_prefetch built-in function to try to generate a prefetch instruction. On Pentium III and later x86-type architectures, this will generate a PREFETCHh instruction, which requests a load into the data cache hierarchy. Since these architectures have unified L2 and higher caches, it may help.
The built-in looks like this (the second and third arguments are optional; rw defaults to 0 and locality to 3):
__builtin_prefetch(const void *addr, int rw, int locality);
The locality argument should be in the range 0...3. Assuming locality maps directly to the h part of the PREFETCHh instruction, you want to pass 1 or 2, which ask for the data to be loaded into the L2 and higher caches. See Intel® 64 and IA-32 Architectures Software Developer's Manual
Volume 2B: Instruction Set Reference, M-Z (PDF) page 4-277. (Find other volumes here.)
If you're using another compiler that doesn't have __builtin_prefetch, see whether it has the _mm_prefetch function. You may need to include a header file to get that function. For example, on OS X, that function, and constants for the locality argument, are declared in xmmintrin.h.
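As a sketch of how that might look (rarely_used_handler is a made-up stand-in for the cold code path; remember this goes through the data-prefetch path, so at best the bytes land in the unified L2, not the L1 instruction cache):

    #include <xmmintrin.h>   // _mm_prefetch and the _MM_HINT_* constants

    void rarely_used_handler();   // hypothetical: stands in for the cold code path

    void prepare_for_rare_case() {
        // Casting a function pointer to an object pointer is only
        // conditionally supported, but common compilers accept it.
        const char* code = reinterpret_cast<const char*>(&rarely_used_handler);
        _mm_prefetch(code, _MM_HINT_T1);   // ask for L2 and higher, as described above
        // GCC/Clang equivalent: __builtin_prefetch(code, 0 /* read */, 2 /* locality */);
    }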
There isn't any (official [1]) x86 instruction to prefetch code, only data. I find this a rather bizarre use-case, where the code path is known beforehand but executes rarely, yet there is a significant benefit in prefetching the code. It would be good to understand how you came to the conclusion that pre-loading the code helps in this special case, since that would require not only showing that the code is significantly slower when it hasn't been executed for a long time, but also that there are spare bus cycles to load the code before the processor fetches it through its normal mechanism for loading code.
You may be able to use the prefetch instructions that fetch into L2, which is typically shared between I- and D-cache.
[1] I know there are some "secret" instructions that let the processor manipulate cache content, but using those would require a lot of extra work, even if you could use them at all in user-mode code [and I expect this is not some kernel-mode code].
I want to dynamically allocate a memory block for an array in C/C++, and this array will be accessed at a high frequency. So I want this array to stay on chip, i.e., in the Cache. How can I do this explicitly with code in C/C++?
There is no standard C++ language feature that allows you to do this.
Depending on your compiler and CPU, you may be able to use an arch-specific CPU instruction in an asm block:
T* p = new T(...);
size_t n = sizeof(T);
// Pseudo-asm: "CACHE n bytes at address p" -- the real instruction is arch-specific.
// With GCC-style extended asm on x86, one cache line's worth might look like:
asm volatile("prefetcht0 %0" : : "m" (*reinterpret_cast<const char*>(p)));
...or some builtin compiler function ("intrinsic") that does this.
You will need to consult your CPU manual and/or your compiler manual.
As an example, x86 CPUs have a set of instructions starting with PREFETCH.
And another example, GCC has a function called __builtin_prefetch. See GCC Data Prefetch Support
I will try to answer this question from a slightly different perspective. Do you really need to do this? And even if there were a way to do so, would it be worth it? Imagine there is a "magic" void *malloc_and_lock_in_cache(int cacheLevel) function. What are you going to do with this data? If the application amounts to a while (1) loop with random array access from a single thread, you will get such behaviour anyway thanks to compiler optimisation and the CPU architecture. If you think about more real-world solutions, there is always logic around the access: locking for multithreading, certain conditions, etc. So the question is: is the rest of your application so perfect that the only thing left to do is to allocate the array in cache?
Are all the other access/sorting/lookup functions state-of-the-art logic that cannot be improved, so the only remaining option is a very limited performance gain from trying to override the CPU's own optimisation?
Also, do you plan to run your application without ANY operating system, on raw hardware, so that you don't have to care about how your allocation affects the OS's behaviour or the rest of the application running around it?
And what should happen if your application runs inside a virtual machine or an environment like Xen?
I can remember a similar popular subject 15-18 years ago about physical memory usage and disk caching utilities. Indeed, tools like MS-DOS SMARTDrive and similar utilities were REALLY useful and sped things up a lot. Usenet was full of 'tuning advice' and performance analyses for things like write-through/write-back settings.
Especially if your DOS application processed large amounts of data and implemented its own memory-swapping logic (I am talking about times when 4 MB of RAM was a luxury), it mostly became a drama: from one point of view you want as much memory as you can get, but from another you need swapping, and swapping goes through the cache, etc.
But what happened next? We got 386 virtual memory mode, disk caching and memory swapping integrated into the OS, and nobody cared anymore about tuning SMARTDrive or RAM disks. In general it became 'cheaper' to allocate as much virtual memory as you need than to implement your own voodoo algorithms for swapping physical memory blocks (although that functionality still exists in the WinAPI).
So I would really recommend concentrating your efforts on algorithms and application design rather than trying to use very low-level features with quite unpredictable results, unless you are developing some new microkernel OS.
I don't think you can. First, which cache? L3, L2, L1? You can prefetch, and align data so access to it is more optimized, and maybe touch it periodically so it does not get evicted by the LRU policy, but you can't really force it to stay in cache.
First you have to know the architecture of the machine you want to run the code on. Then you should check whether it has an instruction that does that kind of thing.
Actually, using the memory heavily will make the cache controller keep that region in cache.
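A minimal sketch of that idea (the 64-byte line size is an assumption; the volatile sink is there to keep the compiler from removing the reads):

    #include <cstddef>
    #include <cstdint>

    // Touch every cache line of the block so the hardware keeps it resident.
    // 64 is an assumed cache-line size; query it on a real system.
    void keep_warm(const std::uint8_t* block, std::size_t bytes) {
        volatile std::uint8_t sink = 0;
        for (std::size_t i = 0; i < bytes; i += 64) {
            sink = block[i];   // one read per assumed 64-byte line
        }
        (void)sink;
    }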
And there are three rules of optimizing, you may want to know them first :)
http://c2.com/cgi/wiki?RulesOfOptimization
Newer ARM processors include the PLD and PLI instructions.
I'm writing tight inner loops (in C++) which have a non-sequential memory access pattern, but a pattern that my code naturally fully understands. I would anticipate a substantial speedup if I could prefetch the next location whilst processing the current one, and I would expect this to be quick enough to try out to be worth the experiment!
I'm using the new, expensive compilers from ARM, and they don't seem to be emitting PLD instructions anywhere, let alone in this particular loop that I care about.
How can I include explicit prefetch instructions in my C++ code?
There should be some compiler-specific features. There is no standard way to do it in C/C++. Check your compiler's Compiler Reference Guide. For the RealView compiler, see this or this.
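As a rough sketch of the usual pattern (the index array and prefetch distance are assumptions, and __builtin_prefetch is the GCC/Clang spelling; check your compiler's reference guide for its own intrinsic):

    #include <cstddef>

    // Process data[idx[i]] while prefetching the element we will need a few
    // iterations from now; PREFETCH_AHEAD is a tuning knob, not a magic number.
    constexpr std::size_t PREFETCH_AHEAD = 8;

    float process(const float* data, const std::size_t* idx, std::size_t n) {
        float sum = 0.0f;
        for (std::size_t i = 0; i < n; ++i) {
            if (i + PREFETCH_AHEAD < n) {
                __builtin_prefetch(&data[idx[i + PREFETCH_AHEAD]], 0 /* read */, 3);
            }
            sum += data[idx[i]];   // the "current" non-sequential access
        }
        return sum;
    }

The right prefetch distance depends on the loop body and the memory latency, so it has to be tuned experimentally.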
If you are trying to extract truly maximum performance from these loops, then I would recommend writing the entire looping construct in assembler. You should be able to use inline assembly, depending on the data structures involved in your loop. Even better if you can unroll any piece of your loop (like the parts involved in making the access non-sequential).
At the risk of asking the obvious: have you verified the compiler's target architecture? For example (humor me), if by default the compiler is targeted to ARM7, you're never going to see the PLD instruction.
It is not outside the realm of possibility that other optimizations like software pipelining and loop unrolling may achieve the same effect as your prefetching idea (hiding the latency of the loads by overlapping it with useful computation), but without the extra instruction-cache pressure caused by the extra instructions. I would even go so far as to say that this is the case more often than not, for tight inner loops that tend to have few instructions and little control flow. Is your compiler doing these types of traditional optimizations instead? If so, it may be worth looking at the pipeline diagram to develop a more detailed cost model of how your processor works, and evaluate more quantitatively whether prefetching would help.
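For illustration (a hand-written sketch, not something the compiler is guaranteed to produce): manually software-pipelining the loop issues the next load before the current element is consumed, hiding part of the load latency without any extra prefetch instructions:

    #include <cstddef>

    // Software-pipelined sum: start loading element i before consuming element i-1,
    // so the load latency overlaps with the computation.
    float pipelined_sum(const float* data, const std::size_t* idx, std::size_t n) {
        if (n == 0) return 0.0f;
        float sum = 0.0f;
        float current = data[idx[0]];          // prologue: first load
        for (std::size_t i = 1; i < n; ++i) {
            float next = data[idx[i]];         // issue the next load early...
            sum += current;                    // ...while we consume the previous one
            current = next;
        }
        sum += current;                        // epilogue: last element
        return sum;
    }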