I have a problem....
I'm writing data into an array in a while-loop, and I'm doing it really frequently. It seems that this writing is now the bottleneck in the code, so I presume it's caused by the writes to memory. The array is not very large (something like 300 elements). The question: is it possible to keep it in the cache and update memory only after the while-loop has finished?
First of all, I'd like to thank all of you for your answers. Indeed, it was a little dumb not to post the code, so I'm doing it now.
double* array1 = new double[1000000];       // this array already holds the data
unsigned long* array2 = new unsigned long[300];
double varX = 0, t = 0, sum = 0;            // varX and t initialized so they aren't used uninitialized
int iter = 0, i = 0;
// difX, max_steps, nm0 and nm1 are set elsewhere in my code
while (i <= max_steps)
{
    varX += difX;
    nm0 = int(varX);
    if (nm1 != nm0)
    {
        array2[iter] = nm0;                 // if you comment out this line, the application runs more than 2 times faster :)
        nm1 = nm0;
        t = array1[nm0];                    // if you comment out this line, there is almost no change in runtime
        ++iter;
    }
    sum += t;
    ++i;
}
So that was it. It would be great if anyone has further ideas. Thank you very much again.
Sincerely
Alex
Not intentionally, no. Among other things, you have no idea how big the cache is, so you have no idea of what's going to fit. Furthermore, if the app were allowed to lock off part of the cache, the effects on the OS might be devastating to overall system performance. This falls squarely onto my list of "you can't do it because you shouldn't do it. Ever."
What you can do is to improve your locality of reference - try to arrange the loop such that you don't access the elements more than once, and try to access them in order in memory.
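For illustration, here is a minimal, made-up example of the locality point (not the asker's code): summing a 2D array row by row walks memory in order, while swapping the two loops strides through memory and wastes most of each fetched cache line. The array name and sizes are arbitrary.

constexpr int ROWS = 1024, COLS = 1024;
double grid[ROWS][COLS];                  // illustrative data only

double sum_in_memory_order()
{
    double sum = 0.0;
    for (int r = 0; r < ROWS; ++r)        // outer loop over rows
        for (int c = 0; c < COLS; ++c)    // inner loop walks contiguous memory
            sum += grid[r][c];
    return sum;                           // swapping the two loops is the cache-hostile version
}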
Without more clues about your application, I don't think more specific advice can be given.
The CPU does not usually offer fine-grained cache control; you're not allowed to choose what is evicted or to pin things in the cache. You do have a few cache operations on some CPUs, though. Just as a bit of info on what you can do: here are some interesting cache-related instructions on newer x86{-64} CPUs (doing things like this makes portability hell, but I figured you may be curious).
Software Data Prefetch
The non-temporal instruction is prefetchnta, which fetches the data into the second-level cache, minimizing cache pollution.
The temporal instructions are as follows:
* prefetcht0 – fetches the data into all cache levels, that is, to the second-level cache for the Pentium® 4 processor.
* prefetcht1 – Identical to prefetcht0
* prefetcht2 – Identical to prefetcht0
Additionally, there is a set of instructions for accessing data in memory while explicitly telling the processor not to insert the data into the cache. These are called non-temporal instructions. An example of one is here: MOVNTI.
You could use the non-temporal instructions for every piece of data you DON'T want in cache, in the hope that the rest will always stay in cache. I don't know if this would actually improve performance, as there are subtle behaviors to be aware of when it comes to the cache. It also sounds like it'd be relatively painful to do.
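For what it's worth, here is a hedged sketch of how such instructions are reached from C++ via intrinsics; the function, the buffers and the prefetch distance of 16 elements are all made up, and whether this helps at all depends heavily on the workload.

#include <xmmintrin.h>   // _mm_prefetch, _mm_sfence
#include <emmintrin.h>   // _mm_stream_si32 (MOVNTI)

// Copy data we will not re-read: stream the stores past the cache, and give the
// source a non-temporal prefetch hint so it pollutes the cache as little as possible.
void copy_without_polluting_cache(int* dst, const int* src, int n)
{
    for (int i = 0; i < n; ++i)
    {
        if (i + 16 < n)
            _mm_prefetch(reinterpret_cast<const char*>(src + i + 16), _MM_HINT_NTA);
        _mm_stream_si32(dst + i, src[i]);   // non-temporal store (MOVNTI)
    }
    _mm_sfence();                           // make the streaming stores globally visible
}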
I have a problem.... I'm writing data into an array in a while-loop, and I'm doing it really frequently. It seems that this writing is now the bottleneck in the code, so I presume it's caused by the writes to memory. The array is not very large (something like 300 elements). Is it possible to keep it in the cache and update memory only after the while-loop has finished?
You don't need to. The only reason why it might get pushed out of the cache is if some other data is deemed more urgent to put in the cache.
Apart from this, an array of 300 elements should fit in the cache with no trouble (assuming the element size isn't too crazy), so most likely your data is already in cache.
In any case, the most effective solution is probably to tweak your code. Use lots of temporaries (to indicate to the compiler that the memory address isn't important), rather than writing/reading into the array constantly. Reorder your code so loads are performed once, at the start of the loop, and break up dependency chains as much as possible.
Manually unrolling the loop gives you more flexibility to achieve these things.
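As a rough, generic illustration of that advice (not Alex's exact loop), the sketch below unrolls by two and keeps two independent accumulators so the adds don't form one long dependency chain; the names and the unroll factor are arbitrary.

double accumulate(const double* data, int n)
{
    double s0 = 0.0, s1 = 0.0;          // independent temporaries break the dependency chain
    int i = 0;
    for (; i + 1 < n; i += 2)           // manual unroll by 2
    {
        double a = data[i];             // loads done once, up front
        double b = data[i + 1];
        s0 += a;
        s1 += b;
    }
    if (i < n)                          // leftover element for odd n
        s0 += data[i];
    return s0 + s1;
}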
And finally, two obvious tools you should use rather than guessing about the cache behavior:
A profiler, and Cachegrind if available. A good profiler can tell you a lot of statistics on cache misses, and Cachegrind gives you a lot of information too.
Us here at StackOverflow. If you post your loop code and ask how its performance can be improved, I'm sure a lot of us will find it a fun challenge.
But as others have mentioned, when working with performance, don't guess. You need hard data and measurements, not hunches and gut feelings.
Unless your code does something completely different in between writing to the array, then most of the array will probably be kept in the cache.
Unfortunately there isn't anything you can do to affect what is in the cache, apart from rewriting your algorithm with the cache in mind. Try to use as little memory as possible between writes to the array: don't use lots of variables, don't call many other functions, and try to write to the same region of the array consecutively.
I doubt that this is possible, at least on a high-level multitasking operating system. You can't guarantee that your process won't be pre-empted and lose the CPU. If your process then owned the cache, other processes couldn't use it, which would make their execution very slow and complicate things a great deal. You really don't want to run a modern several-GHz processor without cache just because one application has locked all others out of it.
In this case, array2 will be quite "hot" and stay in cache for that reason alone. The trick is keeping array1 out of the cache (!). You're reading it only once, so there is no point in caching it. The SSE instruction for that is MOVNTPD, intrinsic void _mm_stream_pd(double *destination, __m128d source).
Even if you could, it's a bad idea.
Modern desktop computers use multiple-core CPUs. Intel's chips are the most common chips in Desktop machines... but the Core and Core 2 processors don't share an on-die cache.
That is, they didn't share a cache until the Core i7 chips were released, which share an on-die 8MB L3 cache.
So, if you were able to lock data in the cache on the computer I'm typing this from, you can't even guarantee this process will be scheduled on the same core, so that cache lock may be totally useless.
If your writes are slow, make sure that no other CPU core is writing in the same memory area at the same time.
When you have a performance problem, don't assume anything, measure first. For example, comment out the writes, and see if the performance is any different.
If you are writing to an array of structures, use a structure pointer to cache the address of the structure so you are not doing the array multiply each time you do an access. Make sure you are using the native word length for the array indexer variable for maximum optimisation.
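A minimal sketch of that suggestion, with a made-up struct and field names: compute the element's address once and reuse the pointer instead of re-indexing the array for every member access.

#include <cstddef>

struct Sample { double value; unsigned long tag; };   // illustrative only

void fill(Sample* samples, std::size_t count)
{
    for (std::size_t i = 0; i < count; ++i)
    {
        Sample* s = &samples[i];   // address computed once per element...
        s->value = 0.0;            // ...then reused for each field access
        s->tag   = i;
    }
}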
As other people have said, you can't control this directly, but changing your code may indirectly enable better caching. If you're running on linux and want to get more insight into what's happening with the CPU cache when your program runs, you can use the Cachegrind tool, part of the Valgrind suite. It's a simulation of a processor, so it's not totally realistic, but it gives you information that is hard to get any other way.
It might be possible to use some assembly code, or as onebyone pointed out, assembly intrinsics, to prefetch lines of memory into the cache, but that would cost a lot of time to tinker with it.
Just for trial, try to read in all the data (in a manner that the compiler won't optimize away), and then do the write. See if that helps.
In the early boot phases of CoreBoot (formerly LinuxBIOS), since there is no access to RAM yet (we are talking about BIOS code, so RAM hasn't been initialized), they set up something they call Cache-as-RAM (CAR): they use the processor cache as RAM even though it is not backed by actual RAM.
Related
I am wondering whether checking a 32-bit variable and only setting it if needed will be faster than just setting it. E.g. variable a is of type uint32:
if( a != 0)
{
a = 0;
}
or
a = 0;
The code will be running in a loop that runs many times, so I want to reduce the time it takes.
Note that variable a will be 0 most of the time, so the question can possibly be shortened to whether it is faster to check a 32-bit variable or to set it. Thank you in advance!
edit: Thank you to all who commented on the question. I have created a for loop and tested both assigning and if-ing 100 thousand times. It turns out assigning is faster (54 ms for if-ing and 44 ms for assigning).
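For reference, a test along those lines might look roughly like the sketch below. This is not the asker's actual code; the iteration count matches the edit, and `a` is made volatile here purely so the compiler doesn't delete the loops.

#include <chrono>
#include <cstdio>

int main()
{
    constexpr int kIters = 100000;          // "100 thousand times", as in the edit
    volatile unsigned a = 0;                // volatile so the loops are not optimized away

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < kIters; ++i)
        if (a != 0) a = 0;                  // check-then-set ("if-ing")
    auto t1 = std::chrono::steady_clock::now();
    for (int i = 0; i < kIters; ++i)
        a = 0;                              // unconditional assignment
    auto t2 = std::chrono::steady_clock::now();

    using us = std::chrono::microseconds;
    std::printf("if-ing:    %lld us\n",
        static_cast<long long>(std::chrono::duration_cast<us>(t1 - t0).count()));
    std::printf("assigning: %lld us\n",
        static_cast<long long>(std::chrono::duration_cast<us>(t2 - t1).count()));
    return 0;
}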
What you describe is called a "silent store" optimization.
PRO: unnecessary stores are avoided.
This can reduce pressure on the store to load forwarding buffers, a component of a modern out-of-order CPU that is quite expensive in hardware, and, as a result, is often undersized, and therefore a performance bottleneck. On Intel x86 CPUs there are performance Event Monitoring counters (EMON) that you can use to investigate whether this is a problem in your program.
Interestingly, it can also reduce the number of loads that your program does. First, SW: if the stores are not eliminated, the compiler may be unable to prove that they do not write to the memory occupied by a different variable (the so-called address and pointer disambiguation problem), so the compiler may generate unnecessary reloads of such possibly-but-not-actually conflicting memory locations. Eliminate the stores, and some of these loads may also be eliminated. Second, HW: most modern CPUs have store-to-load dependency predictors, and fewer stores increase their accuracy. If a dependency is predicted, the load may actually not be performed by hardware, and may be converted into a register-to-register move. This was the subject of the recent patent lawsuits that the University of Wisconsin asserted against Intel and Apple, with awards exceeding hundreds of millions of dollars.
But the most important reason to eliminate the unnecessary stores is to avoid unnecessarily dirtying the cache. A cache line that has been marked dirty eventually has to be written back to memory, even if its contents are unchanged, wasting power. In many systems it will eventually be written to flash or SSD, wasting power and consuming the limited write cycles of the device.
These considerations have motivated academic research in silent stores, such as http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.28.8947&rep=rep1&type=pdf. However, a quick Google Scholar search shows these papers are mainly from 2000-2004, and I am aware of no modern CPUs implementing true silent store elimination - actually having the hardware read the old value. I suspect, however, that this lack of deployment of silent stores is mainly because CPU design went on pause for more than a decade, as focus changed from desktop PCs to cell phones. Now that cell phones have almost caught up to the sophistication of 2000-era desktop CPUs, it may arise again.
CON: Eliminating the silent store in software takes more instructions. Worse, it takes a branch. If the branch is not very predictable, the resulting branch mispredictions will consume any savings. Some machines have instructions that allow you to eliminate such stores without a branch: e.g. Intel's LRBNI vector store instructions with a conditional vector mask. I believe AVX has these instructions. If you or your compiler can use such instructions, then the cost is just the load of the old value and a vector compare; if the old value is already in a register, then just the compare.
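As a hedged illustration of the branch-free idea, here is a sketch using AVX's masked store (_mm256_maskstore_ps): only lanes that are not already zero get written. The function name is made up, and whether a fully masked-off store really avoids dirtying the line is microarchitecture-dependent.

#include <immintrin.h>

// Zero eight floats, but only store to lanes that are not already zero.
void conditional_zero8(float* p)
{
    __m256 old  = _mm256_loadu_ps(p);                          // read the old values
    __m256 zero = _mm256_setzero_ps();
    __m256 neq  = _mm256_cmp_ps(old, zero, _CMP_NEQ_OQ);       // all-ones where old != 0
    _mm256_maskstore_ps(p, _mm256_castps_si256(neq), zero);    // store only those lanes
}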
By the way, you can get some benefit without completely eliminating the store, but by redirecting it to a safe address. Instead of
if (a[i] != 0) a[i] = 0;
do
ptr = &a[i]; if (*ptr == 0) ptr = &safe; *ptr = 0;   // the would-be silent store goes to a dummy location
Still doing the store, but not dirtying so many cache lines. I have used this way of faking a conditional store instruction a lot. It is very unlikely that a compiler will do this sort of optimization.
So, unfortunately, the answer is "it depends". If you are on a vector-mask machine or a GPU and the silent stores are very common (say, more than 30%), it is worth thinking about. In scalar code, you probably need more like 90% silent.
Ideally, measure it yourself. Although it can be hard to make realistic measurements.
I would start with what is probably the best case for this optimization:
char a[1024*1024*1024];                     // zero filled
const int cachelinesize = 64;
for (char* p = a; p < a + sizeof(a); p += cachelinesize)
    if (*p != 0) *p = 0;                    // the store is always silent here
Every store is eliminated here - make sure that the compiler still emits them. Good branch prediction, etc.
If this limit case shows no benefit, your realistic code is unlikely to.
Come to think of it, I ran such a benchmark back in the last century. The silent store code was 2x faster, since it was totally memory bound and the silent stores generate no dirty cache lines on a write-back cache. Recheck this, and then try it on a more realistic workload.
But first, measure whether you are memory bottlenecked or not.
By the way: if hardware implementations of silent store elimination become common, then you will never want to do it in software.
But at the moment I am aware of no hardware implementations of silent store elimination in commercially available CPUs.
As ECC becomes more common, silent store elimination becomes almost free - since you have to read the old bytes anyway to recalculate ECC in many cases.
The assignment would serve you better: the if statement is redundant, and omitting it makes the code clearer. The plain assignment alone should also be faster. And if you are not quite sure of it, you can just create a simple function to test it with and without the if statement.
I am running someone else's C++ code to do benchmarking on a dataset. The issue I have is that I often get one timing for the first run, and these numbers change massively (e.g. from 28 seconds to 10 seconds) if I run the same code again. I assume this happens due to the CPU's automatic caching. Is there a way to flush the cache, or to prevent these fluctuations somehow?
Not one that works "for everything, everywhere". Most processors have special instructions to flush the cache, but they are often privileged instructions, so it has to be done from inside the OS kernel, not your user-mode code. And of course, it's completely different instructions for each processor architecture.
All current x86 processors do have a clflush instruction that flushes one cache line, but to use it you have to have the address of the data (or code) you want to flush. This is fine for small and simple data structures, not so good if you have a binary tree that is all over the place. And of course, not at all portable.
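For a flat buffer whose address range you do know, a sketch using the _mm_clflush intrinsic might look like this (the function name is made up, a 64-byte line size is assumed, and it is x86-only):

#include <cstddef>
#include <emmintrin.h>   // _mm_clflush, _mm_mfence

void flush_buffer(const void* p, std::size_t bytes)
{
    const char* c = static_cast<const char*>(p);
    for (std::size_t off = 0; off < bytes; off += 64)   // one flush per 64-byte line
        _mm_clflush(c + off);
    _mm_mfence();                                       // order the flushes against later accesses
}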
In most environments, reading and writing a large block of alternative data, e.g. something like:
// Global variables.
const size_t bigger_than_cachesize = 10 * 1024 * 1024;
long *p = new long[bigger_than_cachesize];
...
// When you want to "flush" cache.
for (size_t i = 0; i < bigger_than_cachesize; i++)
{
p[i] = rand();
}
Using rand will be much slower than filling with something constant/known. But the compiler can't optimise the call away, which means it's (almost) guaranteed that the code will stay.
The above won't flush instruction caches - that is a lot more difficult to do; basically, you have to run some (large enough) other piece of code to do it reliably. However, instruction caches tend to have less effect on overall benchmark performance (the instruction cache is EXTREMELY important for a modern processor's performance, that's not what I'm saying; rather, the code for a benchmark is typically small enough that it all fits in cache, and the benchmark runs over the same code many times, so it's only slower on the first iteration).
Other ideas
Another way to simulate "non-cache" behaviour is to allocate a new area for each benchmark pass - in other words, not freeing the memory until the end of the benchmark, or using an array containing the data and the output results, such that each run has its own set of data to work on.
Further, it's common to actually measure the performance of the "hot runs" of a benchmark, not the first "cold run" where the caches are empty. This does of course depend on what you are actually trying to achieve...
Here's my basic approach (a rough code sketch follows the numbered steps):
1. Allocate a memory region 2x the size of the LLC, if you can determine the LLC size dynamically (or you know it statically), or if you don't, some reasonable multiple of the largest LLC size on the platform of interest.[1]
2. memset the memory region to some non-zero value: 1 will do just fine.
3. "Sink" the pointer somewhere so that the compiler can't optimize out the stuff above or below (writing to a volatile global works pretty much 100% of the time).
4. Read from random indexes in the region until you've touched each cache line an average of 10 times or so (accumulate the read values into a sum that you sink in a similar way to (3)).
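A rough sketch of those four steps; the 32MB LLC size, the 64-byte line size and the names are assumptions, so query the real values on your platform.

#include <cstddef>
#include <cstdint>
#include <cstring>
#include <random>
#include <vector>

volatile std::uint64_t g_sink;                          // the "sink" used in steps 3 and 4

constexpr std::size_t kLLCBytes = 32 * 1024 * 1024;     // assumed LLC size

void flush_llc()
{
    const std::size_t bytes = 2 * kLLCBytes;                  // step 1: 2x the LLC
    std::vector<unsigned char> buf(bytes);
    std::memset(buf.data(), 1, bytes);                        // step 2: write every page/line
    g_sink = reinterpret_cast<std::uintptr_t>(buf.data());    // step 3: sink the pointer

    const std::size_t lines = bytes / 64;                     // 64-byte lines assumed
    std::mt19937_64 rng(12345);
    std::uniform_int_distribution<std::size_t> pick(0, lines - 1);
    std::uint64_t sum = 0;
    for (std::size_t i = 0; i < 10 * lines; ++i)              // step 4: ~10 random touches per line
        sum += buf[pick(rng) * 64];
    g_sink += sum;                                            // sink the sum as well
}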
Here are some notes on why this generally works and why doing less may not work - the details are x86-centric, but similar concerns will apply on many other architectures.
You absolutely want to write to the allocated memory (step 2) before you begin your main read-only flushing loop, since otherwise you might just be repeatedly reading from the same small zero-mapped page returned by the OS to satisfy your memory allocation.
You want to use a region considerably larger than the LLC size, since the outer cache levels are typically physically addressed, but you can only allocate and access virtual addresses. If you just allocate an LLC-sized region, you generally won't get full coverage of all the ways of every cache set: some sets will be over-represented (and so will be fully flushed), while other sets will be under-represented, so not all existing values can even be flushed by accessing this region of memory. A 2x over-allocation makes it highly likely that almost all sets have enough representation.
You want to avoid the optimizer doing clever things, such as noting the memory never escapes the function and eliminating all your reads and writes.
You want to iterate randomly around the memory region, rather than just striding through it linearly: some designs like the LLC on recent Intel detect when a "streaming" pattern is present, and switch from LRU to MRU since LRU is about the worst-possible replacement policy for such a load. The effect is that no matter how many times you stream though memory, some "old" lines from before your efforts can remain in the cache. Randomly accessing memory defeats this behavior.
You want to access more than just the LLC amount of memory because (a) of the same reason you allocate more than the LLC size (virtual access vs physical caching), (b) random access needs more accesses before you have a high likelihood of hitting every set enough times, and (c) caches are usually only pseudo-LRU, so you need more than the number of accesses you'd expect under exact LRU to flush out every line.
Even this is not foolproof. Other hardware optimizations or caching behaviors not considered above could cause this approach to fail. You might get very unlucky with the page allocation provided by the OS and not be able to reach all the pages (you can largely mitigate this by using 2MB pages). I highly recommend testing whether your flush technique is adequate: one approach is to measure the number of cache misses using CPU performance counters while running your benchmark and see if the number makes sense based on the known working-set size.[2]
Note that this leaves all levels of the cache with lines in E (exclusive) or perhaps S (shared) state, and not the M (modified) state. This means that these lines don't need to be evicted to other cache levels when they are replaced by accesses in your benchmark: they can simply be dropped. The approach described in the other answer will leave most/all lines in the M state, so you'll initially have 1 line of eviction traffic for every line you access in your benchmark. You can achieve the same behavior with my recipe above by changing step 4 to write rather than read.
In that regard, neither approach here is inherently "better" than the other: in the real world the cache levels will have a mix of modified and not-modified lines, while these approaches leave the cache at the two extremes of the continuum. In principle you could benchmark with both the all-M and no-M states, and see if it matters much: if it does, you can try to evaluate what the real-world state of the cache will usually be and replicate that.
[1] Remember that LLC sizes are growing almost every CPU generation (mostly because core counts are increasing), so you want to leave some room for growth if this needs to be future-proof.
[2] I just throw that out there as if it were "easy", but in reality it may be very difficult depending on your exact problem.
Firstly, I would like to say that I come from a non-Computer Science background and have been learning the C++ language.
I am unable to understand what exactly is a cache?
It has different meaning in different contexts.
I would like to know what would be called as a cache in a C++ program?
For example, if I have some int data in a file. If I read it & store in an int array, then would this mean that I have 'cached' the data?
To me it seems like common sense to use the data this way, since reading from a file is always slower than reading from RAM.
But I am a little confused due to this article.
In a CPU there can be several caches, to speed up instructions in loops or to store often accessed data. These caches are small but very fast. Reading data from cache memory is much faster than reading it from RAM.
It says that reading data from cache is much faster than from RAM.
I thought RAM & cache were the same.
Can somebody please clear my confusion?
EDIT: I am updating the question because previously it was too broad.
My confusion started with this answer. He says
RowData and m_data are specific to my implementation, but they are simply used to cache information about a row in the file
What does cache in this context mean?
Any modern CPU has several layers of cache that are typically named things like L1, L2, L3 or even L4. This is called a multi-level cache. The lower the number, the faster the cache will be.
It's important to remember that the CPU runs at speeds that are significantly faster than the memory subsystem. It takes the CPU a tiny eternity to wait for something to be fetched from system memory, many, many clock-cycles elapse from the time the request is made to when the data is fetched, sent over the system bus, and received by the CPU.
There's no programming construct for dealing with caches, but if your code and data can fit neatly in the L1 cache, then it will be fastest. Next is if it can fit in the L2, and so on. If your code or data cannot fit at all, then you'll be at the mercy of the system memory, which can be orders of magnitude slower.
This is why counter-intuitive things like unrolling loops, which should be faster, might end up being slower because your code becomes too large to fit in cache. It's also why shaving a few bytes off a data structure could pay huge dividends even though the memory footprint barely changes. If it fits neatly in the cache, it will be faster.
The only way to know if you have a performance problem related to caching is to benchmark very carefully. Remember each processor type has varying amounts of cache, so what might work well on your i7 CPU might be relatively terrible on an i5.
It's only in extremely performance sensitive applications that the cache really becomes something you worry about. For example, if you need to maintain a steady 60FPS frame rate in a game, you'll be looking at cache problems constantly. Every millisecond counts here. Likewise, anything that runs the CPU at 100% for extended periods of time, such as rendering video, will want to pay very close attention to how much they could gain from adjusting the code that's emitted.
You do have control over how your code is generated with compiler flags. Some will produce smaller code, some theoretically faster by unrolling loops and other tricks. To find the optimal setting can be a very time-consuming process. Likewise, you'll need to pay very careful attention to your data structures and how they're used.
[Cache] has different meaning in different contexts.
Bingo. Here are some definitions:
Cache
Verb
Definition: To place data in some location from which it can be more efficiently or reliably retrieved than its current location. For instance:
Copying a file to a local hard drive from some remote computer
Copying data into main memory from a file on a local hard drive
Copying a value into a variable when it is stored in some kind of container type in your procedural or object oriented program.
Examples: "I'm going to cache the value in main memory", "You should just cache that, it's expensive to look up"
Noun 1
Definition: A copy of data that is presumably more immediately accessible than the source data.
Examples: "Please keep that in your cache, don't hit our servers so much"
Noun 2
Definition: A fast access memory region that is on the die of a processor, modern CPUs generally have several levels of cache. See cpu cache, note that GPUs and other types of processors will also have their own caches with different implementation details.
Examples: "Consider keeping that data in an array so that accessing it sequentially will be cache coherent"
My definition of a cache would be something that exists in a limited amount but is faster to access, because there is less area to look through. If you are talking about caching in any programming language, it means you are storing some information in the form of a variable (a variable is nothing but a way to locate your data in memory) in memory. Here memory means both RAM and the physical cache (CPU cache).
The physical/CPU cache is nothing but memory that is even faster than RAM; it actually stores copies of some data in RAM which is used by the CPU very often. You have another level of categorisation after that as well: on-board cache (faster) and off-board cache. You can see this link.
I am updating the question because previously it was too broad. My confusion started with this answer. He says
RowData and m_data are specific to my implementation, but they are simply used to cache information about a row in the file
What does cache in this context mean?
This particular use means that RowData is held as a copy in memory, rather than reading (a little bit of) the row from a file every time we need some data from it. Reading from a file is a lot slower [1] than holding on to a copy of the data in our program's memory.
[1] Although in a modern OS, the actual data from the hard-disk is probably held in memory, in file-system cache, to avoid having to read the disk many times to get the same data over and over. However, this still means that the data needs to be copied from the file-system cache to the application using the data.
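A minimal sketch of that idea (the file name, record layout and function names are made up): read the file once into memory, then answer later lookups from the in-memory copy instead of going back to the disk.

#include <cstddef>
#include <fstream>
#include <string>
#include <vector>

std::vector<std::string> g_rows;            // the in-memory "cache" of the file

void load_rows(const std::string& path)
{
    std::ifstream in(path);
    for (std::string line; std::getline(in, line); )
        g_rows.push_back(line);             // pay the disk cost once, up front
}

const std::string& row(std::size_t i)       // later lookups never touch the disk
{
    return g_rows[i];
}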
I want to dynamically allocate a memory block for an array in C/C++, and this array will be accessed at a high frequency. So I want this array to stay on chip, i.e., in the Cache. How can I do this explicitly with code in C/C++?
There is no standard C++ language feature that allows you to do this.
Depending on your compiler and CPU, you may be able to use an arch-specific CPU instruction in an asm block:
T* p = new T(...);
size_t n = sizeof(T);
asm {
"CACHE n bytes at address p"
}
...or some builtin compiler function ("intrinsic") that does this.
You will need to consult your CPU manual and/or your compiler manual.
As an example, x86 CPUs have a set of instructions starting with PREFETCH.
And another example, GCC has a function called __builtin_prefetch. See GCC Data Prefetch Support
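A hedged sketch of what using it might look like (the prefetch distance of 8 elements and the hint values are arbitrary, and on many loops the hardware prefetcher already does this for you):

double sum_with_prefetch(const double* data, long n)
{
    double sum = 0.0;
    for (long i = 0; i < n; ++i)
    {
        if (i + 8 < n)
            __builtin_prefetch(&data[i + 8], /*rw=*/0, /*locality=*/1);   // hint only, GCC/Clang builtin
        sum += data[i];
    }
    return sum;
}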
I will try to answer this question from a bit different perspective. Do you really need to do this? And even if there were a way to do it, would it be worth it? Imagine there is a "magic" void* malloc_and_lock_in_cache(int cacheLevel) function. What are you going to do with that data? If the application is limited to a while(1) loop with random array access from a single thread, you will get such behaviour anyway thanks to the optimiser and the CPU architecture. If you think about more real-world solutions, you always have logic around the access: for example, locking for multithreading, certain conditions, etc. Then the question is: are the rest of your application's algorithms so perfect that the only thing left to do is to allocate the array in cache?
Are all the other access/sorting/lookup functions such state-of-the-art logic that they cannot be reviewed, rather than chasing a very limited performance kickback by trying to override the CPU's own optimisation?
Also, do you plan to run your application without ANY operating system, on raw hardware, so that you needn't care about how your allocation affects OS behaviour and the rest of the applications running alongside it?
And what should happen if your application runs inside a virtual machine or an environment like Xen?
I can remember one similar popular subject from 15-18 years ago, about physical memory usage and disk-caching utilities. Indeed, tools like MS-DOS smartdrive or similar utilities were REALLY useful and sped things up a lot. Usenet was full of 'tuning advice' and performance analyses for things like write-through/write-back settings.
Especially if your DOS application processed large amounts of data and implemented some memory-swapping logic (I am talking about times when 4MB of RAM was a luxury), it mostly became a drama: from one point of view you needed as much memory as you could get, but from another point of view you needed swapping, so you actually had to swap, but swapping goes through the cache, etc.
But what happened next? We got VM386 mode, disk cache/memory swapping integrated into the OS, and nobody cared anymore about things like tuning smartdrive/ramdisks. In general it became 'cheaper' to allocate as much virtual memory as you need than to implement your own voodoo algorithms to swap physical memory blocks (although this functionality is still in WinAPI).
So I would really recommend concentrating your efforts on algorithms and application design rather than trying to use very low-level features with really unpredictable results, unless you are developing some new microkernel OS.
I don't think you can. First, which cache? L3, L2, L1? You can prefetch, and align so that access to it is more optimized, and then maybe query it periodically to make it stay and not get LRU'd out, but you can't really make it stay in cache.
First you have to know the architecture of the machine you want to run the code on. Then you should check whether there's an instruction doing that kind of thing.
Actually using the memory heavily will force the cache controller to put this region in cache.
And there are three rules of optimizing, you may want to know them first :)
http://c2.com/cgi/wiki?RulesOfOptimization
Let's say my array is 32KB and L1 is 64KB. Does Windows use some of it while my program is running? Maybe I am not able to use all of L1 because Windows is making other programs work? Should I set the priority of my program so it can use all the cache?
for (int i = 0; i < 8192; i++)
{
    array_3[i] += clock() * (rand() % 256); // clock() and rand in cache too?
    // how many times do I need to use a variable to make it stay in cache?
    // or cache is only for reading? look below plz
    temp_a += array_x[i] * my_function();
}
The program is in C/C++.
Same thing for L2 too please.
Also, are functions kept in cache? Is the cache read-only? (If I change my array, does it lose its place in the cache?)
Does the compiler generate asm code that makes better use of the cache?
Thanks
How can I know my array is in cache?
In general, you can't. Generally speaking, the cache is managed directly by hardware, not by Windows. You also can't control whether data resides in the cache (although it is possible to specify that an area of memory shouldn't be cached).
Does Windows use some of it while my program is running? Maybe I am not able to use L1 because Windows is making other programs work? Should I set the priority of my program to use all the cache?
The L1 and L2 caches are shared by all processes running on a given core. When your process is running, it will use all of cache (if it needs it). When there's a context switch, some or all of the cache will be evicted, depending on what the second process needs. So next time there's a context switch back to your process, the cache may have to be refilled all over again.
But again, this is all done automatically by the hardware.
Also, are functions kept in cache?
On most modern processors, there is a separate cache for instructions. See e.g. this diagram which shows the arrangement for the Intel Nehalem architecture; note the shared L2 and L3 caches, but the separate L1 caches for instructions and data.
Is the cache read-only? (If I change my array, does it lose its place in the cache?)
No. Caches can handle modified data, although this is considerably more complex (because of the problem of synchronising multiple caches in a multi-core system.)
Does the compiler generate asm code that makes better use of the cache?
As cache activity is generally all handled automatically by the hardware, no special instructions are needed.
* Cache is not directly controlled by the operating system; it is done in hardware.
* In case of a context switch, another application may modify the cache, but you should not care about this. It is more important to handle cases where your program behaves cache-unfriendly.
* Functions are kept in cache (I-Cache, instruction cache).
* Cache is not read-only; when you write something it goes to [memory and] the cache.
The cache is primarily controlled by the hardware. However, I know that Windows scheduler tends to schedule execution of a thread to the same core as before specifically because of the caches. It understands that it will be necessary to reload them on another core. Windows is using this behavior at least since Windows 2000.
As others have stated, you generally cannot control what is in cache. If you are writing code for high-performance and need to rely on cache for performance, then it is not uncommon to write your code so that you are using about half the space of L1 cache. Methods for doing so involve a great deal of discussion beyond the scope of StackOverflow questions. Essentially, you would want to do as much work as possible on some data before moving on to other data.
As a matter of what works practically, using about half of cache leaves enough space for other things to occur that most of your data will remain in cache. You cannot rely on this without cooperation from the operating system and other aspects of the computing platform, so it may be a useful technique for speeding up research calculations but it cannot be used where real-time performance must be guaranteed, as in operating dangerous machinery.
There are additional caveats besides how much data you use. Using data that maps to the same cache lines can evict data from cache even though there is plenty of cache unused. Matrix transposes are notorious for this, because a matrix whose row length is a multiple of a moderate power of two will have columns in which elements map to a small set of cache lines. So learning to use cache efficiently is a significant job.
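As an illustration of working on data in cache-sized chunks, here is a sketch of a blocked transpose (N, BLOCK and the function name are made up; BLOCK would be tuned so a pair of tiles fits comfortably in L1):

#include <cstddef>

constexpr std::size_t N = 1024;      // power-of-two row length: the conflict-prone case
constexpr std::size_t BLOCK = 32;    // tile edge; keep 2 * BLOCK * BLOCK * sizeof(double) well under L1

void transpose_blocked(const double* src, double* dst)
{
    for (std::size_t ib = 0; ib < N; ib += BLOCK)
        for (std::size_t jb = 0; jb < N; jb += BLOCK)
            for (std::size_t i = ib; i < ib + BLOCK; ++i)       // one BLOCK x BLOCK tile at a time
                for (std::size_t j = jb; j < jb + BLOCK; ++j)
                    dst[j * N + i] = src[i * N + j];
}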
As far as I know, you can't control what will be in the cache. You can declare a variable as register var_type a, and then access to it will take a single cycle (or a small number of cycles). Moreover, the number of cycles it takes to access a chunk of memory also depends on virtual memory translation and the TLB.
It should be noted that the register keyword is merely a suggestion and the compiler is perfectly free to ignore it, as was suggested by the comment.
Even though you may not know which data is in the cache and which is not, you can still get an idea of how much of the cache you are utilizing. Modern processors have quite a few performance counters, and some of them are related to the cache. Intel's processors can tell you how many L1 and L2 misses there were. Check this for more details on how to do it: How to read performance counters on i5, i7 CPUs