Slow writing to array in C++

I was just wondering if this is expected behavior in C++. The code below runs at around 0.001 ms:
for (int l = 0; l < 100000; l++) {
    int total = 0;
    for (int i = 0; i < num_elements; i++)
    {
        total += i;
    }
}
However if the results are written to an array, the time of execution shoots up to 15 ms:
int *values = (int*)malloc(sizeof(int) * 100000);
for (int l = 0; l < 100000; l++) {
    int total = 0;
    for (unsigned int i = 0; i < num_elements; i++)
    {
        total += i;
    }
    values[l] = total;
}
I can appreciate that writing to the array takes time but is the time proportionate?
Cheers everyone

The first example can be implemented using just CPU registers. Those can be accessed billions of times per second. The second example writes a 400 KB array of results, which certainly overflows the L1 cache and possibly L2 (depending on the CPU model). That will be slower. Still, 15 ms for 100,000 outer iterations comes out to 150 ns per iteration, and each iteration includes the whole inner summation loop as well as the write. That's not slow.

It looks like the compiler is optimizing that loop out entirely in the first case.
The total effect of the loop is a no-op, so the compiler just removes it.
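One quick way to check this is to make the result observable and see what happens to the timing. A minimal sketch (not the OP's exact code: num_elements is read from the command line here purely so the compiler cannot constant-fold the whole sum):

#include <cstdio>
#include <cstdlib>

int main(int argc, char **argv) {
    // Read the bound at run time so the compiler cannot fold the sum into a constant.
    int num_elements = (argc > 1) ? std::atoi(argv[1]) : 1000;
    long long grand_total = 0;
    for (int l = 0; l < 100000; l++) {
        int total = 0;
        for (int i = 0; i < num_elements; i++)
            total += i;
        grand_total += total;            // feed every result into something observable
    }
    std::printf("%lld\n", grand_total);  // printing keeps the loops from being removed
    return 0;
}

Even then, an aggressive optimizer may still simplify the arithmetic, so inspecting the generated assembly (for example, compiling with -S) is the only reliable way to confirm what survived.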

It's very simple.
In the first case you have just 3 variables, which can easily be kept in general purpose registers; even when they spill, they almost certainly sit in the L1 cache, which means they can be accessed very fast.
In the second case you have more than 100k values and need about 400 kB to store them. That is definitely too much for the registers and the L1 cache. In the best case it might fit in the L2 cache, but probably not all of it will. Anything that is not in a register, L1 or L2 (assuming your processor doesn't have an L3) has to be fetched from RAM, which takes much more time.

I would suspect that what you are seeing is an effect of virtual memory and possibly paging. The malloc call is going to allocate a decent-sized chunk of memory that is probably represented by a number of virtual pages, each of which is mapped into the process's address space separately.
You may also be measuring the cost of calling malloc depending on how you timed the loop. In either case, the performance is going to be very sensitive to compiler optimization options, threading options, compiler versions, runtime versions, and just about anything else. You cannot safely assume that the cost is linear with the size of the allocation. The only thing that you can do is measure it and figure out how to best optimize once it has been proven to be a problem.
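To take malloc and first-touch page faults out of the measurement, one option is to touch the whole allocation before starting the clock. A rough sketch, assuming <chrono> is acceptable for timing and with num_elements as a placeholder value:

#include <chrono>
#include <cstdio>
#include <cstdlib>
#include <cstring>

int main() {
    const int N = 100000;                     // matches the question's loop count
    const int num_elements = 1000;            // placeholder value
    int *values = (int *)std::malloc(sizeof(int) * N);
    if (!values) return 1;
    std::memset(values, 0, sizeof(int) * N);  // touch every page up front so page
                                              // faults are not billed to the loop
    auto start = std::chrono::high_resolution_clock::now();
    for (int l = 0; l < N; l++) {
        int total = 0;
        for (int i = 0; i < num_elements; i++)
            total += i;
        values[l] = total;
    }
    auto stop = std::chrono::high_resolution_clock::now();
    std::printf("%lld us, values[0]=%d\n",
                (long long)std::chrono::duration_cast<std::chrono::microseconds>(stop - start).count(),
                values[0]);                   // use the array so it is not optimized away
    std::free(values);
    return 0;
}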

Related

Using one loop vs two loops

I was reading this blog: https://developerinsider.co/why-is-one-loop-so-much-slower-than-two-loops/. I decided to check it out using C++ and Xcode, so I wrote the simple program given below, and when I executed it I was surprised by the result: the second function was actually slower than the first, contrary to what the article states. Can anyone please help me figure out why this is the case?
#include <iostream>
#include <vector>
#include <chrono>
using namespace std::chrono;

void function1() {
    const int n = 100000;
    int a1[n], b1[n], c1[n], d1[n];
    for (int j = 0; j < n; j++) {
        a1[j] = 0;
        b1[j] = 0;
        c1[j] = 0;
        d1[j] = 0;
    }
    auto start = high_resolution_clock::now();
    for (int j = 0; j < n; j++) {
        a1[j] += b1[j];
        c1[j] += d1[j];
    }
    auto stop = high_resolution_clock::now();
    auto duration = duration_cast<microseconds>(stop - start);
    std::cout << duration.count() << " Microseconds." << std::endl;
}

void function2() {
    const int n = 100000;
    int a1[n], b1[n], c1[n], d1[n];
    for (int j = 0; j < n; j++) {
        a1[j] = 0;
        b1[j] = 0;
        c1[j] = 0;
        d1[j] = 0;
    }
    auto start = high_resolution_clock::now();
    for (int j = 0; j < n; j++) {
        a1[j] += b1[j];
    }
    for (int j = 0; j < n; j++) {
        c1[j] += d1[j];
    }
    auto stop = high_resolution_clock::now();
    auto duration = duration_cast<microseconds>(stop - start);
    std::cout << duration.count() << " Microseconds." << std::endl;
}

int main(int argc, const char * argv[]) {
    function1();
    function2();
    return 0;
}
TL;DR: The loops are basically the same, and if you are seeing differences, then your measurement is wrong. Performance measurement and, more importantly, reasoning about performance require a lot of computer knowledge, some scientific rigor, and much engineering acumen. Now for the long version...
Unfortunately, there is some very inaccurate information in the article to which you've linked, as well as in the answers and some comments here.
Let's start with the article. There won't be any disk caching that has any effect on the performance of these functions. It is true that virtual memory is paged to disk, when demand on physical memory exceeds what's available, but that's not a factor that you have to consider for programs that touch 1.6MB of memory (4 * 4 * 100K).
And if paging comes into play, the performance difference won't exactly be subtle either. If these arrays were paged to disk and back, the performance difference would be on the order of 1000x for the fastest disks, not 10% or 100%.
Paging and page faults and their effect on performance are neither trivial nor intuitive. You need to read about it, and experiment with it seriously. What little information that article has is completely inaccurate to the point of being misleading.
The second issue is your profiling strategy and the micro-benchmark itself. Clearly, with such simple operations on the data (an add), the bottleneck will be memory bandwidth itself (maybe instruction retire limits or something like that with such a simple loop). And since you only read memory linearly, and use everything you read, whether it comes in 4 interleaved streams or 2, you are making use of all the bandwidth that is available.
However, if you call your function1 or function2 in a loop, you will be measuring the bandwidth of different parts of the memory hierarchy depending on N, from L1 all the way to L3 and main memory. (You should know the size of all levels of cache on your machine, and how they work.) This is obvious if you know how CPU caches work, and really mystifying otherwise. Do you want to know how fast this is when you do it the first time, when the arrays are cold, or do you want to measure the hot access?
Is your real use case copying the same mid-sized array over and over again?
If not, what is it? What are you benchmarking? Are you trying to measure something or just experimenting?
Shouldn't you be measuring the fastest run through a loop, rather than the average since that can be massively affected by a (basically random) context switch or an interrupt?
Have you made sure you are using the correct compiler switches? Have you looked at the generated assembly code to make sure the compiler is not adding debug checks and whatnot, and is not optimizing away things it shouldn't (after all, you are just executing useless loops, and an optimizing compiler wants nothing more than to avoid generating code that is not needed)?
Have you looked at the theoretical memory/cache bandwidth numbers for your hardware? Your specific CPU and RAM combination will have theoretical limits. Be it 5, 50, or 500 GiB/s, that gives you an upper bound on how much data you can move around and work with. The same goes for the number of execution units, the IPC of your CPU, and a few dozen other numbers that will affect the performance of this kind of micro-benchmark.
If you are reading 4 integers (4 bytes each, from a, b, c, and d) and then doing two adds and writing the two results back, and doing it 100'000 times, then you are - roughly - looking at 2.4MB of memory read and write. If you do it 10 times in 300 micro-seconds, then your program's memory (well, store buffer/L1) throughput is about 80 GB/s. Is that low? Is that high? Do you know? (You should have a rough idea.)
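To make a couple of those questions concrete, here is one possible shape for a more careful measurement. It is only a sketch under the same sizes as the question (arrays moved to the heap to avoid a huge stack frame, many repetitions, minimum time kept, checksum printed so nothing gets optimized away); the 24-bytes-per-element figure is the rough estimate from above:

#include <chrono>
#include <cstdio>
#include <vector>

using clk = std::chrono::steady_clock;

int main() {
    const int n = 100000;
    std::vector<int> a1(n, 1), b1(n, 2), c1(n, 3), d1(n, 4);
    long long best_ns = -1;
    long long checksum = 0;
    for (int rep = 0; rep < 200; rep++) {            // many repetitions, keep the minimum
        auto t0 = clk::now();
        for (int j = 0; j < n; j++) {
            a1[j] += b1[j];
            c1[j] += d1[j];
        }
        auto t1 = clk::now();
        long long ns = std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0).count();
        if (best_ns < 0 || ns < best_ns) best_ns = ns;
        checksum += a1[0] + c1[n - 1];               // keep the results observable
    }
    // bytes moved per element: read a, b, c, d and write a, c => roughly 24 bytes
    double gbps = (24.0 * n) / best_ns;              // bytes per nanosecond == GB/s
    std::printf("best %lld ns, ~%.1f GB/s, checksum %lld\n", best_ns, gbps, checksum);
    return 0;
}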
And let me tell you that the other two answers here at the time of this writing (namely this and this) do not make sense. I can't make heads or tails of the first one, and the second one is almost completely wrong (conditional branches in a 100'000-iteration for loop are bad? allocating an additional iterator variable is costly? cold access to an array on the stack vs. on the heap has "serious performance implications"?).
And finally, as written, the two functions have very similar performance. It is really hard to separate the two, and unless you can measure a real difference in a real use case, I'd say write whichever one makes you happier.
If you really, really want a theoretical difference between them, I'd say the one with two separate loops is very slightly better, because it is usually not a good idea to interleave access to unrelated data.
This has nothing to do with caching or instruction efficiency. Simple iterations over long vectors are purely a matter of bandwidth. (Google: stream benchmark.) And modern CPUs have enough bandwidth to satisfy, if not all of their cores at once, at least a good fraction of them.
So if you combine the two loops, executing them on a single core, there is probably enough bandwidth for all loads and stores at the rate that memory can sustain. But if you use two loops, you leave bandwidth unused, and the runtime will be a little less than double.
The reason why the second one is faster in your case (I do not think this holds on every machine) is better CPU caching, as long as your CPU has enough cache to hold the arrays plus whatever your OS and everything else needs. Once that is no longer true, the second function will probably be much slower than the first from a performance standpoint. I doubt that the two-loop code will give better performance if enough other programs are running as well, because the second function is obviously less efficient than the first, and if enough other data is competing for the cache, the lead it gains from caching will be eliminated.
I'll just chime in here with a little something to keep in mind when looking into performance - unless you are writing embedded software for a real-time device, the performance of such low level code as this should not be a concern.
In 99.9% of all other cases, they will be fast enough.

OpenMP first kernel much slower than the second kernel

I have a huge 98306 by 98306 2D array initialized. I created a kernel function that counts the total number of elements below a certain threshold.
#pragma omp parallel for reduction(+:num_below_threshold)
for (size_t row = 0; row < 98306; row++) {
    for (size_t col = 0; col < 98306; col++) {
        size_t index = get_corresponding_index(row, col);
        if (array[index] < threshold)
            num_below_threshold++;
    }
}
For benchmarking purposes I measured the execution time of the kernel with the number of threads set to 1. I noticed that the first time the kernel executed it took around 11 seconds. The next call to the kernel, executing on the same array with one thread, only took around 3 seconds. I thought it might be a problem related to the cache, but it doesn't seem to be. What could be the reasons for this?
This array is initialized as:
float *array = malloc(sizeof(float) * 98306 * 98306);
for (int i = 0; i < 98306 * 98306; i++) {
    array[i] = rand() % 10;
}
This same kernel is applied to the array twice, and the second execution is much faster than the first. I thought of lazy allocation on Linux, but that shouldn't be a problem because of the initialization loop. Any explanations would be helpful. Thanks!
Since you don't provide any Minimal, Complete and Verifiable Example, I'll have to make some wild guesses here, but I'm pretty confident I have the gist of the issue.
First, you have to notice that 98,306 x 98,306 is 9,664,069,636 which is way larger than the maximum value a signed 32 bit integer can store (which is 2,147,483,647). Therefore, the upper limit of your for initialization loop, after overflowing, could become 1,074,135,044 (as on my machines, although it is undefined behavior so strictly speaking, anything could happen), which is roughly 9 times smaller than what you expected.
So now, after the initialization loop, only about 11% of the memory you thought you allocated has actually been allocated and touched by the operating system. Your first reduction loop, however, does go over all the elements of the array, and since about 89% of them are being touched for the first time, the OS does the actual memory allocation there and then, which takes a significant amount of time.
And now, for your second reduction loop, all memory has been properly allocated and touched, which makes it much faster.
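For reference, a sketch of the fix that this explanation implies: do the size arithmetic in size_t so the product cannot overflow, and touch the whole array before any timed kernel runs. (At these sizes that is still roughly 36 GiB, so the machine actually needs that much RAM.)

#include <cstdio>
#include <cstdlib>

int main() {
    const size_t dim = 98306;                  // size_t, so dim * dim does not overflow
    const size_t count = dim * dim;            // ~9.66e9 elements, ~36 GiB of floats
    float *array = (float *)std::malloc(sizeof(float) * count);
    if (!array) { std::perror("malloc"); return 1; }
    for (size_t i = 0; i < count; i++)         // first touch: the OS commits the pages here,
        array[i] = (float)(std::rand() % 10);  // not inside the timed kernel
    std::printf("%f\n", (double)array[count - 1]);
    std::free(array);
    return 0;
}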
So that's what I believe happened. That said, many other parameters can enter into play here, such as:
Swapping: the array you try to allocate represents about 36GB of memory. If your machine doesn't have that much memory available, then your code might swap, which will potentially make a big mess of whatever performance measurement you can come up with
NUMA effect: if your machine has multiple NUMA nodes, then thread pinning and memory affinity, when not managed properly, can have a large impact on performance between loop occurrences
Compiler optimization: you didn't mention which compiler you used and which optimization level you requested. Depending on that, you'd be amazed at how much of your code can be removed. For example, the compiler could remove the second loop entirely, since it does the same thing as the first and the result is the same... and many other interesting and unexpected things which would render your benchmark meaningless.

Memory access comparison

Which one of the 2 is faster (C++)?
for (i = 0; i < n; i++)
{
    sum_a = sum_a + a[i];
    sum_b = sum_b + b[i];
}
Or
for (i = 0; i < n; i++)
{
    sum_a = sum_a + a[i];
}
for (i = 0; i < n; i++)
{
    sum_b = sum_b + b[i];
}
I am a beginner so I don't know whether this makes sense, but in the first version, array 'a' is accessed, then 'b', which might lead to many memory switches, since arrays 'a' and 'b' are at different memory locations. But in the second version, the whole of array 'a' is accessed first, and then the whole of array 'b', which means contiguous memory locations are accessed instead of alternating between the two arrays.
Does this make any difference between the execution time of the two versions (even a very negligible one)?
I don't think there is a single correct answer to this question. In general, the second version has twice as many loop iterations (more CPU execution overhead) but better memory access (less memory access overhead). Now imagine you run this code on a PC that has a slow clock but an insanely good cache. The memory overhead gets reduced, but since the clock is slow, running the loop twice makes execution much longer. The other way around, with a fast clock but bad memory, running two loops is not a problem, so it's better to optimize for memory access.
Here is a cool example of how you can profile your app: Link
Which one of the 2 is faster (C++)?
Either. It depends on
The implementation of operator+ and operator[] (in case they are overloaded)
Location of the arrays in memory (adjacent or not)
Size of the arrays
Size of the cpu caches
Associativity of caches
Cache speed in relation to memory speed
Possibly other factors
As Revolver_Ocelot mentioned in a comment, some compilers may even transform one written form of the loop into the other.
Does this make any difference between the execution time of the two versions (even a very negligible one)?
It can make a difference. The difference may be significant or negligible.
Your analysis is sound. Memory access is typically much slower than cache, and jumping between two memory locations may cause cache thrashing † in some situations. I would recommend using the separated approach by default, and only combining the loops if you have measured it to be faster on your target CPU.
† As MSalters points out, thrashing shouldn't be a problem on modern desktop processors (modern as in ~x86).

How to find the size of the L1 cache line size with IO timing measurements?

As a school assignment, I need to find a way to get the L1 data cache line size, without reading config files or using API calls. I'm supposed to use memory access read/write timings to analyze and deduce this information. So how might I do that?
In an incomplete attempt at another part of the assignment (finding the levels and sizes of the caches), I have:
for (i = 0; i < steps; i++) {
    arr[(i * 4) & lengthMod]++;
}
I was thinking maybe I just need to vary line 2, the (i * 4) part? Once I exceed the cache line size, the line might need to be replaced, which takes some time? But is it that straightforward? The required block might already be in the cache somewhere? Or perhaps I can still count on the fact that with a large enough steps it will still work out quite accurately?
UPDATE
Here's an attempt on GitHub ... main part below
// repeatedly access/modify data, varying the STRIDE
for (int s = 4; s <= MAX_STRIDE / sizeof(int); s *= 2) {
    start = wall_clock_time();
    for (unsigned int k = 0; k < REPS; k++) {
        data[(k * s) & lengthMod]++;
    }
    end = wall_clock_time();
    timeTaken = ((float)(end - start)) / 1000000000;
    printf("%d, %1.2f \n", s * sizeof(int), timeTaken);
}
The problem is there don't seem to be many differences in the timings. FYI, since it's for the L1 cache, I have SIZE = 32 K (the size of the array).
Allocate a BIG char array (make sure it is too big to fit in L1 or L2 cache). Fill it with random data.
Start walking over the array in steps of n bytes. Do something with the retrieved bytes, like summing them.
Benchmark and calculate how many bytes/second you can process with different values of n, starting from 1 and counting up to 1000 or so. Make sure that your benchmark prints out the calculated sum, so the compiler can't possibly optimize the benchmarked code away.
When n == your cache line size, each access will require reading a new line into the L1 cache. So the benchmark results should get slower quite sharply at that point.
If the array is big enough, by the time you reach the end, the data at the beginning of the array will already be out of cache again, which is what you want. So after you increment n and start again, the results will not be affected by having needed data already in the cache.
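A minimal sketch of that recipe, assuming a 64 MiB buffer is larger than every cache level on the machine and that powers of two are an acceptable set of strides:

#include <chrono>
#include <cstdio>
#include <cstdlib>

int main() {
    const size_t size = 64 * 1024 * 1024;          // well beyond L1/L2/L3
    unsigned char *buf = (unsigned char *)std::malloc(size);
    if (!buf) return 1;
    for (size_t i = 0; i < size; i++) buf[i] = (unsigned char)std::rand();

    for (size_t stride = 1; stride <= 1024; stride *= 2) {
        unsigned long long sum = 0;
        auto t0 = std::chrono::steady_clock::now();
        for (size_t i = 0; i < size; i += stride)
            sum += buf[i];                          // one access every 'stride' bytes
        auto t1 = std::chrono::steady_clock::now();
        double ns = (double)std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0).count();
        size_t accesses = size / stride;
        // When the stride reaches the line size, every access misses, so ns/access jumps.
        std::printf("stride %4zu: %.2f ns per access (sum %llu)\n",
                    stride, ns / accesses, sum);
    }
    std::free(buf);
    return 0;
}

The per-access time should stay roughly flat while the stride is below the line size and then jump once every access maps to a fresh line.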
Have a look at Calibrator. All of the work is copyrighted, but the source code is freely available. The idea its documentation describes for calculating cache line sizes sounds much more educated than what has already been said here.
The idea underlying our calibrator tool is to have a micro benchmark whose performance only depends
on the frequency of cache misses that occur. Our calibrator is a simple C program, mainly a small loop
that executes a million memory reads. By changing the stride (i.e., the offset between two subsequent
memory accesses) and the size of the memory area, we force varying cache miss rates.
In principle, the occurrence of cache misses is determined by the array size. Array sizes that fit into
the L1 cache do not generate any cache misses once the data is loaded into the cache. Analogously,
arrays that exceed the L1 cache size but still fit into L2, will cause L1 misses but no L2 misses. Finally,
arrays larger than L2 cause both L1 and L2 misses.
The frequency of cache misses depends on the access stride and the cache line size. With strides
equal to or larger than the cache line size, a cache miss occurs with every iteration. With strides
smaller than the cache line size, a cache miss occurs only every n iterations (on average), where n is
the ratio cache line size / stride.
Thus, we can calculate the latency for a cache miss by comparing the execution time without
misses to the execution time with exactly one miss per iteration. This approach only works, if
memory accesses are executed purely sequential, i.e., we have to ensure that neither two or more load
instructions nor memory access and pure CPU work can overlap. We use a simple pointer chasing
mechanism to achieve this: the memory area we access is initialized such that each load returns the
address for the subsequent load in the next iteration. Thus, super-scalar CPUs cannot benefit from
their ability to hide memory access latency by speculative execution.
To measure the cache characteristics, we run our experiment several times, varying the stride and
the array size. We make sure that the stride varies at least between 4 bytes and twice the maximal
expected cache line size, and that the array size varies from half the minimal expected cache size to
at least ten times the maximal expected cache size.
I had to comment out #include "math.h" to get it to compile; after that it found my laptop's cache values correctly. I also couldn't view the generated PostScript files.
You can use the CPUID instruction in assembler; although it's not portable, it will give you what you want.
For Intel microprocessors, the cache line size can be calculated by multiplying BH by 8 after calling CPUID function 0x1.
For AMD microprocessors, the data cache line size is in CL and the instruction cache line size is in DL after calling CPUID function 0x80000005.
I took this from this article here.
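If you'd rather not write the assembly by hand, GCC and Clang ship a <cpuid.h> helper on x86. A sketch of the Intel path described above (leaf 0x1, EBX bits 15:8 times 8, which reports the CLFLUSH line size and normally matches the L1 line size):

#include <cpuid.h>
#include <cstdio>

int main() {
    unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        std::printf("CPUID leaf 1 not supported\n");
        return 1;
    }
    // EBX bits 15:8 hold the CLFLUSH line size in 8-byte units.
    unsigned int line_size = ((ebx >> 8) & 0xff) * 8;
    std::printf("cache line size: %u bytes\n", line_size);
    return 0;
}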
I think you should write a program that walks through the array in random order instead of straight through, because modern processors do hardware prefetching.
For example, make an array of int whose values are the indices of the next cell to visit.
I did a similar program a year ago: http://pastebin.com/9mFScs9Z
Sorry for my English, I am not a native speaker.
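Something along these lines (a sketch only: a random cyclic permutation so each load depends on the previous one and the prefetcher cannot guess the next address; the array size is arbitrary but bigger than typical caches):

#include <algorithm>
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

int main() {
    const size_t n = 1 << 22;                        // ~4M entries (32 MB), larger than typical caches
    std::vector<size_t> order(n), next(n);
    std::iota(order.begin(), order.end(), (size_t)0);
    std::shuffle(order.begin(), order.end(), std::mt19937_64(42));
    for (size_t k = 0; k < n; k++)                   // build one big random cycle: each cell
        next[order[k]] = order[(k + 1) % n];         // stores the index of the next cell to visit

    volatile size_t sink = 0;
    size_t p = order[0];
    auto t0 = std::chrono::steady_clock::now();
    for (size_t step = 0; step < n; step++)
        p = next[p];                                 // each load depends on the previous one,
                                                     // so the hardware prefetcher cannot help
    auto t1 = std::chrono::steady_clock::now();
    sink = p;                                        // keep the chase observable
    double ns = (double)std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0).count();
    std::printf("%.2f ns per dependent load (%zu)\n", ns / n, (size_t)sink);
    return 0;
}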
See how memtest86 is implemented. It measures and analyzes the data transfer rate, and the points where the rate changes correspond to the L1, L2 and possibly L3 cache sizes.
If you get stuck in the mud and can't get out, look here.
There are manuals and code that explain how to do what you're asking. The code is pretty high quality as well. Look at "Subroutine library".
The code and manuals are based on X86 processors.
Just a note: the cache line size is variable on a few ARM Cortex families and can change during execution without any notification to the running program.
I think it should be enough to time an operation that uses some amount of memory, then progressively increase the memory (the operands, for instance) used by the operation.
When the operation's performance severely decreases, you have found the limit.
I would go with just reading a bunch of bytes without printing them (printing would hit the performance so badly that it would become a bottleneck). While reading, the timing should be directly proportional to the amount of bytes read until the data can no longer fit in L1, at which point you will see the performance hit.
You should also allocate the memory once at the start of the program, before starting to count time.

Structure of arrays and array of structures - performance difference

I have a class like this:
//Array of Structures
class Unit
{
public:
float v;
float u;
//And similarly many other variables of float type, upto 10-12 of them.
void update()
{
v+=u;
v=v*i*t;
//And many other equations
}
};
I create an array of objects of Unit type. And call update on them.
int NUM_UNITS = 10000;

void ProcessUpdate()
{
    Unit *units = new Unit[NUM_UNITS];
    for (int i = 0; i < NUM_UNITS; i++)
    {
        units[i].update();
    }
}
In order to speed up things, and possibly autovectorize the loop, I converted AoS to structure of arrays.
//Structure of Arrays:
class Unit
{
public:
    Unit(int NUM_UNITS)
    {
        v = new float[NUM_UNITS];
    }

    float *v;
    float *u;
    //Many other variables

    void update()
    {
        for (int i = 0; i < NUM_UNITS; i++)
        {
            v[i] += u[i];
            //Many other equations
        }
    }
};
When the loop fails to autovectorize, I am getting very bad performance for the structure of arrays. For 50 units, SoA's update is slightly faster than AoS. But from 100 units onwards, SoA is slower than AoS. At 300 units, SoA is almost twice as slow. At 100K units, SoA is 4x slower than AoS. While cache might be an issue for SoA, I didn't expect the performance difference to be this high. Profiling with cachegrind shows a similar number of misses for both approaches. The size of a Unit object is 48 bytes. L1 cache is 256K, L2 is 1MB and L3 is 8MB. What am I missing here? Is this really a cache issue?
Edit:
I am using gcc 4.5.2. Compiler options are -O3 -msse4 -ftree-vectorize.
I did another experiment with SoA. Instead of dynamically allocating the arrays, I allocated "v" and "u" at compile time. With 100K units, this gives performance that is 10x faster than the SoA with dynamically allocated arrays. What's happening here? Why is there such a performance difference between statically and dynamically allocated memory?
The structure of arrays is not cache friendly in this case.
You use both u and v together, but with two separate arrays they will not be loaded into the same cache line, and the extra cache misses cost a huge performance penalty.
_mm_prefetch can be used to make AoS representation even faster.
Prefetches are critical to code that spends most of its execution time waiting for data to show up. Modern front side busses have enough bandwidth that prefetches should be safe to do, provided that your program isn't going too far ahead of its current set of loads.
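For what it's worth, a sketch of what such a prefetch could look like in the AoS loop. The Unit here is a trimmed-down stand-in for the one in the question, and the prefetch distance of 8 elements is a guess that would need tuning:

#include <xmmintrin.h>   // _mm_prefetch / _MM_HINT_T0

struct Unit {            // trimmed-down stand-in for the Unit class in the question
    float v, u;
    void update() { v += u; }
};

void process(Unit *units, int num_units) {
    const int dist = 8;                                // prefetch distance: a guess, needs tuning
    for (int i = 0; i < num_units; i++) {
        if (i + dist < num_units)
            _mm_prefetch((const char *)&units[i + dist], _MM_HINT_T0);  // pull a later Unit in early
        units[i].update();
    }
}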
For various reasons, structures and classes can create numerous performance issues in C++, and may require more tweaking to get acceptable levels of performance. When code is large, use object-oriented programming. When data is large (and performance is important), don't.
float v[N];
float u[N];
//And similarly many other variables of float type, up to 10-12 of them.
//Either using an inlined function or just adding this text in main()
v[j] += u[j];
v[j] = v[j] * i[j] * t[j];
Two things you should be aware of that can make a huge difference, depending on your CPU:
alignment
cache line aliasing
Since you are using SSE4, using a specialized memory allocation function that returns an address aligned to a 16-byte boundary instead of new may give you a boost, since you or the compiler will be able to use aligned loads and stores. I have not noticed much difference on newer CPUs, but using unaligned loads and stores on older CPUs may be a little bit slower.
As for cache line aliasing, Intel explicitly mentions it in its reference manuals
(search for "Intel® 64 and IA-32 Architectures Optimization Reference Manual"). Intel says it is something you should be aware of, especially when using SoA. So, one thing you can try is to pad your arrays so the lower 6 bits of their addresses are different. The idea is to avoid having them fight for the same cache line.
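One way to get both properties (SSE-friendly alignment plus different low-order address bits) is sketched below; posix_memalign is assumed to be available (on Windows, _aligned_malloc plays the same role), and the 16-byte offset applied to u is just one arbitrary choice of pad:

#include <cstdio>
#include <cstdlib>

int main() {
    const size_t n = 100000;
    void *pv = nullptr, *pu = nullptr;
    // 64-byte aligned blocks, with room for a small pad in the second one.
    if (posix_memalign(&pv, 64, n * sizeof(float)) != 0) return 1;
    if (posix_memalign(&pu, 64, n * sizeof(float) + 64) != 0) return 1;
    float *v = (float *)pv;
    float *u = (float *)pu + 4;      // +16 bytes: still 16-byte aligned for SSE loads,
                                     // but the low 6 address bits now differ from v's
    for (size_t i = 0; i < n; i++) { v[i] = 1.0f; u[i] = 2.0f; }
    for (size_t i = 0; i < n; i++) v[i] += u[i];
    std::printf("%f\n", (double)v[0]);
    std::free(pv);
    std::free(pu);
    return 0;
}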
Certainly, if you don't achieve vectorization, there's not much incentive to make an SoA transformation.
Besides the fairly wide de facto acceptance of __RESTRICT, gcc 4.9 has adopted #pragma GCC ivdep to break assumed aliasing dependencies.
As to use of explicit prefetch, if it is useful, of course you may need more of them with SoA. The primary point might be to accelerate DTLB miss resolution by fetching pages ahead, so your algorithm could become more cache hungry.
I don't think intelligent comments could be made about whatever you call "compile time" allocation without more details, including specifics about your OS. There's no doubt that the tradition of allocating at a high level and re-using the allocation is important.