This might be a very common and simple question, but I need some explanation of the curve I just obtained from a cache benchmark. The goal is to find the cache line size. I used the code from here:
(https://github.com/jiewmeng/cs3210-assign1/blob/master/cache-l1-line.cpp)
This is the curve I obtained from running the code on my machine (a MacBook Pro with a Core i7; the cache line size is 64 bytes and the L1 data cache is 32 KB).
The Time vs different stride size curve
I think the peak happens at 128 bytes rather than at 64 bytes. If that is true, I want to know why.
Why does the time drop again at 512 bytes?
Update:
I also ran some code to determine the sizes of the L1 and L2 caches. Here is the figure, just to document the data. As you can see, there are two peaks, at 32 KB (the L1 cache size) and at 256 KB (the L2 cache size).
Question:
I am wondering if there is any way to find the size of the shared L3 cache.
Cache size figure.
Thanks
I'm guessing that the 128-byte peak is most likely due to spatial prefetching. You can see in Intel's Optimization Guide, under section 2.1.5.4:
This prefetcher strives to complete every cache line fetched to the L2 cache with the pair line that completes it to a 128-byte aligned chunk
It wouldn't be a clean jump since this prefetcher is not always firing, and even when it does, it only prefetches into the L2, which is still much better than fetching from memory. To make sure this is the case, you can disable the prefetchers (through the BIOS or other means, although some systems may not support that) and check again.
As for the L3 size: you didn't specify your exact model, but I'm guessing you have more than 4 MB of L3. Just keep the curve going and see if it jumps.
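If you just want to cross-check whatever the extended curve suggests, you can also ask the OS for the reported L3 size. A minimal sketch, assuming glibc (the _SC_LEVEL3_CACHE_SIZE constant is a glibc extension; on macOS the equivalent query would be sysctlbyname("hw.l3cachesize", ...)):

#include <cstdio>
#include <unistd.h>

int main() {
#ifdef _SC_LEVEL3_CACHE_SIZE
    long l3 = sysconf(_SC_LEVEL3_CACHE_SIZE);   // reported size in bytes, 0 or -1 if unknown
    std::printf("reported L3 size: %ld bytes\n", l3);
#else
    std::printf("_SC_LEVEL3_CACHE_SIZE is not available on this platform\n");
#endif
    return 0;
}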
EDIT
Just noticed another thing: your k*i expression is probably overflowing int at the largest strides, which means your access pattern might not be cyclic as you expect.
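For reference, here is a minimal sketch of that kind of stride test with the index arithmetic done entirely in size_t, so the stride*i product cannot overflow int. The buffer size, stride range and iteration count are placeholders, not the values from the linked code:

#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    const size_t arraySize  = 64 * 1024 * 1024;        // 64 MiB, power of two (placeholder)
    const size_t lengthMod  = arraySize - 1;
    const size_t iterations = 64 * 1024 * 1024;        // same number of accesses for every stride
    std::vector<char> data(arraySize, 1);

    for (size_t stride = 1; stride <= 4096; stride *= 2) {
        auto start = std::chrono::steady_clock::now();
        for (size_t i = 0; i < iterations; ++i)
            data[(i * stride) & lengthMod] *= 3;       // size_t product: no int overflow
        auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(
                      std::chrono::steady_clock::now() - start).count();
        std::printf("stride %5zu: %lld ns\n", stride, static_cast<long long>(ns));
    }
    return 0;
}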
My BusSpeed benchmark was intended to identify cache sizes and performance at different strides, to show burst reading on buses:
http://www.roylongbottom.org.uk/busspd2k%20results.htm
Following are results on a Core i7 with 8 MB L3:
Memory Reg2 Reg2 Reg2 Reg2 Reg1 Reg2 Reg1 Reg2 Reg1 Reg8
KBytes Inc64 Inc32 Inc16 Inc8 Inc4 Inc4 Inc4 Inc4 Inc8 Inc8
Used MB/S MB/S MB/S MB/S MB/S MB/S MB/S MB/S MB/S MB/S
4 10025 10800 11262 11498 11612 11634 5850 11635 23093 23090
8 10807 11267 11505 11627 11694 11694 5871 11694 23299 23297
16 11251 11488 11620 11614 11712 11719 5873 11718 23391 23398
32 9893 9853 10890 11170 11558 11492 5872 11466 21032 21025
64 3219 4620 7289 9479 10805 10805 5875 10797 14426 14426
128 3213 4805 7305 9467 10811 10810 5875 10805 14442 14408
256 3144 4592 7231 9445 10759 10733 5870 10743 14336 14337
512 2005 3497 5980 9056 10466 10467 5871 10441 13906 13905
1024 2003 3482 5974 9017 10468 10466 5874 10467 13896 13818
2048 2004 3497 5958 9088 10447 10448 5870 10447 13857 13857
4096 1963 3398 5778 8870 10328 10328 5851 10328 13591 13630
8192 1729 3045 5322 8270 9977 9963 5728 9965 12923 12892
16384 692 1402 2495 4593 7811 7782 5406 7848 8335 8337
32768 695 1406 2492 4584 7820 7826 5401 7792 8317 8322
65536 695 1414 2488 4584 7823 7826 5403 7800 8321 8321
131072 696 1402 2491 4575 7827 7824 5411 7846 8322 8323
262144 696 1413 2498 4594 7791 7826 5409 7829 8333 8334
524288 693 1416 2498 4595 7841 7842 5411 7847 8319 8285
1048576 704 1415 2478 4591 7845 7840 5410 7853 8290 8283
End of test Fri Jul 30 16:44:29 2010
CPUID and RDTSC Assembly Code
CPU GenuineIntel, Features Code BFEBFBFF, Model Code 000106A5
Intel(R) Core(TM) i7 CPU 930 @ 2.80GHz Measured 2807 MHz
Related
root@dellr740:/ycsb_build# sudo ./ycsb
IBRS and IBPB supported : yes
STIBP supported : yes
Spec arch caps supported : yes
Number of physical cores: 20
Number of logical cores: 40
Number of online logical cores: 40
Threads (logical cores) per physical core: 2
Num sockets: 1
Physical cores per socket: 20
Core PMU (perfmon) version: 4
Number of core PMU generic (programmable) counters: 4
Width of generic (programmable) counters: 48 bits
Number of core PMU fixed counters: 3
Width of fixed counters: 48 bits
Nominal core frequency: 2100000000 Hz
IBRS enabled in the kernel : yes
STIBP enabled in the kernel : no
The processor is not susceptible to Rogue Data Cache Load: yes
The processor supports enhanced IBRS : yes
Package thermal spec power: 125 Watt; Package minimum power: 63 Watt; Package maximum power: 307 Watt;
ERROR: Secure Boot detected. Recompile PCM with -DPCM_USE_PERF or disable Secure Boot.
Socket 0: 2 memory controllers detected with total number of 6 channels. 0 QPI ports detected. 2 M2M (mesh to memory) blocks detected.
.......
.......
......
Here are the results; for the PCM metrics I get 0 bytes:
WARNING: Custom counter 0 is in use. MSR_PERF_GLOBAL_INUSE on core 0: 0x800000000000000f
WARNING: Custom counter 1 is in use. MSR_PERF_GLOBAL_INUSE on core 0: 0x800000000000000f
WARNING: Custom counter 2 is in use. MSR_PERF_GLOBAL_INUSE on core 0: 0x800000000000000f
WARNING: Custom counter 3 is in use. MSR_PERF_GLOBAL_INUSE on core 0: 0x800000000000000f
WARNING: Core 0 IA32_PERFEVTSEL0_ADDR are not zeroed 1245244
Error opening PCM: 2
Zeroed PMU registers
Cleaning up
Zeroed uncore PMU registers
PCM Metrics:
L2 HitRatio: 1
L3 HitRatio: -1
L2 misses: 0
L3 misses: 0
DRAM Reads (bytes): 0
DRAM Writes (bytes): 0
NVM Reads (bytes): 0
NVM Writes (bytes): 0
I was running the Intel MPI Benchmarks on a mini MPI cluster of two nodes on AWS EC2.
The executables of the benchmark were compiled with make CC=/usr/bin/mpicc CXX=/usr/bin/mpicxx.
The cluster was set up based on this tutorial.
(What I did differently is that I installed OpenMPI instead of MPICH.)
The type of the AWS instances is t3a.xlarge.
According to this spec the network performance is up to 5 Gbps (i.e. 625 MB/s).
I chose Ubuntu 18.04 as the OS for these instances.
The PingPong benchmark gave me surprising results:
mpiuser@mpin0:~$ mpirun -rf rankfile -np 2 ./IMB-MPI1 PingPong
#------------------------------------------------------------
# Intel(R) MPI Benchmarks 2019 Update 3, MPI-1 part
#------------------------------------------------------------
# Date : Thu Nov 28 15:17:38 2019
# Machine : x86_64
# System : Linux
# Release : 4.15.0-1054-aws
# Version : #56-Ubuntu SMP Thu Nov 7 16:15:59 UTC 2019
# MPI Version : 3.1
# MPI_Datatype : MPI_BYTE
#---------------------------------------------------
# Benchmarking PingPong
# #processes = 2
#---------------------------------------------------
#bytes #repetitions t[usec] Mbytes/sec
0 1000 32.05 0.00
1 1000 31.74 0.03
2 1000 31.98 0.06
4 1000 32.08 0.12
8 1000 31.86 0.25
16 1000 31.30 0.51
32 1000 31.40 1.02
64 1000 32.86 1.95
128 1000 32.66 3.92
256 1000 33.56 7.63
512 1000 35.00 14.63
1024 1000 34.74 29.48
2048 1000 36.56 56.02
4096 1000 39.80 102.91
8192 1000 49.92 164.10
16384 1000 57.73 283.79
32768 1000 78.86 415.54
65536 640 170.26 384.91
131072 320 239.80 546.60
262144 160 357.34 733.61
524288 80 609.82 859.74
1048576 40 1106.36 947.77
2097152 20 2514.62 833.98
4194304 10 5830.39 719.39
Some of the speeds exceed 625 MB/s.
The rankfile was used to force distributing the two processes onto two nodes.
It is not the case that the two MPI processes are running on the same node.
The content of the rankfile is:
rank 0=mpin0 slot=0
rank 1=mpin1 slot=0
What are the possible reasons that caused the benchmark result to exceed the theoretical limit?
I understand pretty well how transparent hugepages work, and that any allocation, such as those performed by malloc, may be satisfied by a huge page.
What I'd like to know, is if there is any check I can make (possibly heuristic) after an allocation to determine if the memory is backed by a huge page.
You can determine the exact status of any page, including whether it is backed by a transparent (or non-transparent) hugepage by looking up the "pfn" (page frame number) in the /proc/kpageflags file. You get the pfn for a page by reading from the /proc/$PID/pagemap file for your process, which is indexed by virtual address.
Unfortunately, both the pfn value from pagemap1 and the entire /proc/kpageflags file are accessible only to root. Still, if you can run your process as root, at least in the testing or benchmarking scenario you are interested in, this works well.
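As a rough sketch of the lookup described above (assuming Linux and root privileges; the KPF_THP bit index and the pagemap entry layout come from the kernel's pagemap documentation), checking a single address could look like this:

#include <cstdint>
#include <fcntl.h>
#include <unistd.h>

// Returns true if the page containing addr is currently backed by a
// transparent hugepage, false otherwise (or on any error).
bool page_is_thp(const void *addr) {
    const long page_size = sysconf(_SC_PAGESIZE);
    const uint64_t vpn = reinterpret_cast<uintptr_t>(addr) / page_size;

    int pm = open("/proc/self/pagemap", O_RDONLY);
    if (pm < 0) return false;
    uint64_t entry = 0;
    pread(pm, &entry, sizeof(entry), vpn * sizeof(entry));
    close(pm);

    if (!(entry & (1ULL << 63))) return false;        // bit 63: page present
    const uint64_t pfn = entry & ((1ULL << 55) - 1);  // bits 0-54: page frame number
    if (pfn == 0) return false;                       // pfn reads as zero without root

    int kf = open("/proc/kpageflags", O_RDONLY);
    if (kf < 0) return false;
    uint64_t flags = 0;
    pread(kf, &flags, sizeof(flags), pfn * sizeof(flags));
    close(kf);

    const int KPF_THP = 22;                           // transparent hugepage flag bit
    return (flags >> KPF_THP) & 1;
}

Give it any address inside the allocation you care about; without root (or if the page isn't mapped yet) it simply reports false.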
I wrote a small library called page-info which does the relevant parsing for you. Give it a range of memory and it will return you info on each page, including whether it is present in memory, backed by a hugepage, etc.
For example, running the included test process as sudo ./page-info-test THP gives the following output:
PAGE_SIZE = 4096, PID = 18868
size memset FLAG SET UNSET UNAVAIL
0.25 MiB BEFORE THP 0 1 64
0.25 MiB AFTER THP 0 65 0
0.50 MiB BEFORE THP 0 1 128
0.50 MiB AFTER THP 0 129 0
1.00 MiB BEFORE THP 0 1 256
1.00 MiB AFTER THP 0 257 0
2.00 MiB BEFORE THP 0 1 512
2.00 MiB AFTER THP 0 513 0
4.00 MiB BEFORE THP 0 1 1024
4.00 MiB AFTER THP 512 513 0
8.00 MiB BEFORE THP 0 1 2048
8.00 MiB AFTER THP 1536 513 0
16.00 MiB BEFORE THP 0 1 4096
16.00 MiB AFTER THP 3584 513 0
32.00 MiB BEFORE THP 0 1 8192
32.00 MiB AFTER THP 7680 513 0
64.00 MiB BEFORE THP 0 1 16384
64.00 MiB AFTER THP 15872 513 0
128.00 MiB BEFORE THP 0 1 32768
128.00 MiB AFTER THP 32256 513 0
256.00 MiB BEFORE THP 0 1 65536
256.00 MiB AFTER THP 65024 513 0
512.00 MiB BEFORE THP 0 1 131072
512.00 MiB AFTER THP 124416 6657 0
1024.00 MiB BEFORE THP 0 1 262144
1024.00 MiB AFTER THP 0 262145 0
DONE
The UNAVAIL column means that no information about the mapping was available, usually because the page has never been accessed and so isn't yet backed by any physical page at all. You can see that for these "largeish" allocations only a single page is mapped in following the allocation, since we haven't touched the memory.
The AFTER rows are the same information after calling memset() on the entire allocation, which causes all pages to be physically allocated. Here we can see that no allocations are backed by transparent hugepages until we hit allocations of 4 MiB, at which point the majority of each allocation is backed by THP, except for 513 pages (which turn out to be at the edges of the allocated region). At 512 MiB the system starts running out of available hugepages but still satisfies most of the allocation, but at 1024 MiB the entire allocation is satisfied with small pages.
This library isn't production ready so don't use it for anything critical (e.g., some failures simply call exit()). Contributions welcome.
1 Since approximately kernel 4.0; before that the pfn was accessible to non-root user processes. From 4.0 to 4.1 or thereabouts the entire pagemap file was off-limits to non-root processes, but since then the file is available again, with the pfn masked out (it always appears as zero).
There is a difference between traditional hugepages and transparent hugepages (THP). In the case of THPs, the application can use huge pages without any developer support (mmap, shmget, etc.) or sysadmin intervention.
In code, I am afraid there may be no straightforward way to check this. However, if you know the sizes of the allocated data structures or buffers, it is worth checking THP usage on the system with the following command; the value should increase while your application is running:
# grep AnonHugePages /proc/meminfo
AnonHugePages: 2648064 kB
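If you want to automate that check instead of eyeballing the grep output, a small sketch that reads the same counter from /proc/meminfo (take one reading before and one after touching the allocation and compare them):

#include <fstream>
#include <sstream>
#include <string>

// Returns the current AnonHugePages value in kB, or -1 if the counter is not found.
long anon_hugepages_kb() {
    std::ifstream meminfo("/proc/meminfo");
    std::string line;
    while (std::getline(meminfo, line)) {
        if (line.rfind("AnonHugePages:", 0) == 0) {   // line starts with the counter name
            std::istringstream iss(line.substr(14));
            long kb = -1;
            iss >> kb;
            return kb;
        }
    }
    return -1;
}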
I have written a project, which uses some basic functions in openssl such as RAND_bytes and des_ecb_encrypt.
My computer has an i7-2600 (4 cores and 8 logical CPUs). When I run my project with 4 threads, it costs 10 seconds. When I run it with 8 threads, it also costs 10 seconds.
What I mean is that hyper-threading doesn't give me any performance improvement. On Linux, the result of the experiment is the same.
I found a page (here) that says hyper-threading doesn't give any improvement in some situations, and another (here) that gives some intuitive results.
However, I have tried to write some simple tests to find a simple example that shows hyper-threading giving no apparent improvement. Sadly, I couldn't find one.
So, my question is whether there is a simple test that shows hyper-threading giving no performance improvement.
You may find that hyperthreading helps more on code that is using large amounts of memory, so that the processor is regularly blocked on fetching from memory.
In my experience, it's quite hard to find "simple code" that shows benefits from hyperthreading. It tends to be more complex examples that show the benefit. Still, the benefit will most likely not be 2x that of "no hyperthreading". Count on getting perhaps 20-30% improvement.
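A hedged sketch of the kind of memory-bound code where hyper-threading tends to show a benefit: each thread does a dependent pointer chase over a buffer much larger than the caches, so a core is mostly stalled on memory and its sibling hyper-thread can use the idle cycles. The buffer size, step count and thread counts are arbitrary placeholders; HT helps if the 8-thread run (twice the total work on a 4-core machine) takes noticeably less than twice the 4-thread time.

#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <thread>
#include <vector>

volatile size_t sink;

void chase(const std::vector<size_t>& next, size_t steps) {
    size_t i = 0;
    for (size_t s = 0; s < steps; ++s)
        i = next[i];                          // each load depends on the previous one
    sink = i;
}

int main() {
    const size_t n = size_t{1} << 25;         // 32M entries (~256 MiB), far larger than L3
    std::vector<size_t> next(n);
    std::iota(next.begin(), next.end(), size_t{0});
    std::mt19937_64 rng{42};
    for (size_t i = n - 1; i > 0; --i) {      // Sattolo's shuffle: one big random cycle
        std::uniform_int_distribution<size_t> d(0, i - 1);
        std::swap(next[i], next[d(rng)]);
    }

    for (int threads : {4, 8}) {
        auto start = std::chrono::steady_clock::now();
        std::vector<std::thread> pool;
        for (int t = 0; t < threads; ++t)
            pool.emplace_back([&next] { chase(next, 10000000); });
        for (auto& th : pool) th.join();
        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                      std::chrono::steady_clock::now() - start).count();
        std::printf("%d thread(s): %lld ms\n", threads, static_cast<long long>(ms));
    }
    return 0;
}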
Hyper-threading takes advantage of the fact that the CPU has many execution units, and without hyper-threading, when one is in use the others just sit there idle. You can try writing two types of threads, one doing integer calculations (which will hopefully use the ALU) and one doing floating-point arithmetic (which will hopefully use the FPU).
I have not tried this myself, but it seems that in such a scenario hyper-threading should improve performance.
To show the opposite, you can use only one type of thread (either threads doing only integer operations or threads doing only floating-point operations).
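A minimal sketch of that experiment (the loop bodies and iteration count are arbitrary, and pinning the two threads to the same physical core, e.g. with taskset or pthread_setaffinity_np, is left out for brevity):

#include <chrono>
#include <cstdint>
#include <cstdio>
#include <thread>

volatile uint64_t int_sink;
volatile double fp_sink;

void integer_work(uint64_t iters) {           // mostly exercises the integer ALUs
    uint64_t x = 1;
    for (uint64_t i = 0; i < iters; ++i)
        x = x * 6364136223846793005ULL + 1442695040888963407ULL;
    int_sink = x;
}

void fp_work(uint64_t iters) {                // mostly exercises the FP units
    double x = 1.0;
    for (uint64_t i = 0; i < iters; ++i)
        x = x * 1.0000001 + 0.0000001;
    fp_sink = x;
}

int main() {
    const uint64_t iters = 200000000;
    auto start = std::chrono::steady_clock::now();
    std::thread t1(integer_work, iters);      // run both kinds of work at once
    std::thread t2(fp_work, iters);
    t1.join();
    t2.join();
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                  std::chrono::steady_clock::now() - start).count();
    std::printf("both threads finished in %lld ms\n", static_cast<long long>(ms));
    return 0;
}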
It may also be that your test is flawed, but in order to know if that is the case we'll need more information about that test.
I have written a project which uses some basic functions in openssl such as RAND_bytes and des_ecb_encrypt... My computer has an i7-2600 (4 cores and 8 logical CPUs). When I run my project with 4 threads, it costs 10 seconds. When I run it with 8 threads, it also costs 10 seconds.
When using RDRAND (which RAND_bytes will do in this case), the bus is the limiting factor. You should peak at around 800 MB/sec. It does not matter how many threads you have; the bus cannot transfer data fast enough. See Intel rdrand instruction revisited.
If you used AES, then you might see a better speedup than with the DES/3DES observations. Your Sandy Bridge (i7-2600) has AES-NI, which can achieve almost 1.3 cycles/byte, and that should be about double or triple the speed of AES in software. To make sure you are using the AES-NI instructions, you have to use the EVP_* interfaces.
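As a hedged sketch (the key and plaintext are zero-filled placeholders and error checking is omitted), going through the EVP interface so OpenSSL can pick the AES-NI path when the CPU supports it:

#include <openssl/evp.h>
#include <cstdio>

int main() {
    unsigned char key[16] = {0};                   // placeholder 128-bit key
    unsigned char in[32]  = {0};                   // placeholder plaintext (two AES blocks)
    unsigned char out[48];                         // room for the padding block
    int len = 0, total = 0;

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    EVP_EncryptInit_ex(ctx, EVP_aes_128_ecb(), nullptr, key, nullptr);
    EVP_EncryptUpdate(ctx, out, &len, in, static_cast<int>(sizeof(in)));
    total += len;
    EVP_EncryptFinal_ex(ctx, out + total, &len);
    total += len;
    EVP_CIPHER_CTX_free(ctx);

    std::printf("ciphertext bytes: %d\n", total);
    return 0;
}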
I found a page (here) that says hyper-threading doesn't give any improvement in some situations, and another (here) that gives some intuitive results.
I think @selalerer and @Mats Petersson answered your question. The problem does not scale linearly, and there's a maximum speedup you will encounter. Intel states it's about 30%.
Intel's newest architecture favors out-of-order execution over hyper-threading because it's supposed to be more efficient. Read about the Silvermont processor cores.
But if you want a formal deep dive, then see a book on computer engineering. Here's the book we used when I studied it in college: Computer Organization and Design (it's probably a bit dated now).
However, I have tried to write some simple tests to find a simple example that shows hyper-threading giving no apparent improvement.
OpenSSL also has a benchmarking app. See the source code in <openssl source>/apps/speed.c.
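For example (hedged: check the exact option names against your OpenSSL version's speed documentation), something along the lines of
openssl speed -evp aes-128-cbc -multi 4
openssl speed -evp aes-128-cbc -multi 8
runs the EVP AES benchmark on 4 and then 8 forked workers, which is roughly the 4-thread versus 8-thread comparison you describe.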
Also, benchmarking apps have their own personalities. An encryption stress test may not reveal the differences as prominently as you hope to see them. See, for example, Benchmarking Tools.
Following are details and results of my MP benchmarks for Linux and Windows, which can behave differently. There is not much HT coverage, but the Linux tests include an Atom (1 core, 2 threads) and the Windows results include a Core i7 (4 cores plus 4 HT threads).
http://www.roylongbottom.org.uk/linux%20multithreading%20benchmarks.htm
http://www.roylongbottom.org.uk/quad%20core%208%20thread.htm
Take your pick, depending on whether you want to prove that HT provides better or worse performance. Following are RandMem results on the i7 (Linux seems better on this test). For a CPU such as the i7, you also need to consider Turbo Boost, which might be lower with multiple threads.
CPUs MBytes Per Second Using Threads Gain At Threads
/HTs 1 2 4 6 8 2 4 6 8
Serial RD
Core i7 4/8 L1 11458 22661 37039 43717 46374 2.0 3.2 3.8 4.0
930 L2 10380 20832 32853 41711 42839 2.0 3.2 4.0 4.1
#### MHz L3 8828 17743 29610 38414 40330 2.0 3.4 4.4 4.6
Win 764 RAM 4266 8712 17347 24946 25589 2.0 4.1 5.8 6.0
Serial RW
Core i7 4/8 L1 15282 13724 16240 16209 18379 0.9 1.1 1.1 1.2
930 L2 12223 18216 25326 28104 27047 1.5 2.1 2.3 2.2
#### MHz L3 10234 19266 21931 24450 26351 1.9 2.1 2.4 2.6
Win 764 RAM 4533 7656 13876 14543 13390 1.7 3.1 3.2 3.0
Random RD
Core i7 4/8 L1 11266 22548 38174 45592 47141 2.0 3.4 4.0 4.2
930 L2 6233 12463 20059 24986 25667 2.0 3.2 4.0 4.1
#### MHz L3 3499 6915 9211 10002 9531 2.0 2.6 2.9 2.7
Win 764 RAM 459 909 1241 1398 1364 2.0 2.7 3.0 3.0
Random RW
Core i7 4/8 L1 14375 3027 2780 2901 3297 0.2 0.2 0.2 0.2
930 L2 5887 4555 6117 6693 7281 0.8 1.0 1.1 1.2
#### MHz L3 3104 4604 4721 5047 4933 1.5 1.5 1.6 1.6
Win 764 RAM 428 860 899 948 1026 2.0 2.1 2.2 2.4
#### 2.8 GHz running at up to 3.06 GHz via Turbo Boost, dual channel 1066 MHz DDR3 RAM
Then the MP Whetstone benchmark, which shows real gains:
MWIPS MFLOP MFLOP MFLOP COS EXP FIXPT IF EQUAL
CPU MHz 1 2 3 MOPS MOPS MOPS MOPS MOPS
Core i7 1 Thrd #### 3115 1065 886 738 79.3 39.7 2447 2936 1154
Core i7 Win7 #### 21690 8676 7621 5844 531 291 16643 12027 5034
Quad Core Thread 1 1091 1027 728 66.4 36.5 2050 1501 629
Plus HT Thread 2 1089 1037 742 66.0 36.5 2090 1507 630
Thread 3 1090 946 742 66.8 36.5 2069 1534 631
Thread 4 1092 1037 727 66.6 36.6 2031 1501 630
Thread 5 1042 959 736 66.4 36.5 1912 1483 630
Thread 6 1091 874 723 66.6 36.1 2049 1507 629
Thread 7 1090 867 725 65.6 36.3 2094 1516 631
Thread 8 1091 874 722 66.3 36.3 2350 1476 624
Gain % 696 815 860 792 670 733 680 410 436
I have a small application that I was running just now, and I wanted to check whether it has any memory leaks, so I put in this piece of code:
for (unsigned int i = 0; i<10000; i++) {
for (unsigned int j = 0; j<10000; j++) {
std::ifstream &a = s->fhandle->open("test");
char temp[30];
a.getline(temp, 30);
s->fhandle->close("test");
}
}
When I ran the application, I cat'ed /proc/<pid>/status to see whether the memory increases.
The output is the following after about 2 minutes of runtime:
Name: origin-test
State: R (running)
Tgid: 7267
Pid: 7267
PPid: 6619
TracerPid: 0
Uid: 1000 1000 1000 1000
Gid: 1000 1000 1000 1000
FDSize: 256
Groups: 4 20 24 46 110 111 119 122 1000
VmPeak: 183848 kB
VmSize: 118308 kB
VmLck: 0 kB
VmHWM: 5116 kB
VmRSS: 5116 kB
VmData: 9560 kB
VmStk: 136 kB
VmExe: 28 kB
VmLib: 11496 kB
VmPTE: 240 kB
VmSwap: 0 kB
Threads: 2
SigQ: 0/16382
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: 0000000000000000
SigIgn: 0000000000002004
SigCgt: 00000001800044c2
CapInh: 0000000000000000
CapPrm: 0000000000000000
CapEff: 0000000000000000
CapBnd: ffffffffffffffff
Cpus_allowed: 3f
Cpus_allowed_list: 0-5
Mems_allowed: 00000000,00000001
Mems_allowed_list: 0
voluntary_ctxt_switches: 120
nonvoluntary_ctxt_switches: 26475
None of the values changed except the last one, so does that mean there are no memory leaks?
But more importantly, what I would like to know is whether it is bad that the last value is increasing rapidly (about 26475 switches in about 2 minutes!).
I looked at some other applications to compare how many non-voluntary switches they have:
Firefox: about 200
Gdm: 2
Netbeans: 19
Then I googled and found some information, but it's too technical for me to understand.
What I got from it is that this happens when the application switches processors or something? (I have an AMD 6-core processor, by the way.)
How can I prevent my application from doing that, and to what extent could this be a problem when running the application?
Thanks in advance,
Robin.
A voluntary context switch occurs when your application blocks in a system call and the kernel decides to give its time slice to another process.
A non-voluntary context switch occurs when your application has used up the time slice the scheduler attributed to it (the kernel tries to pretend that each application has the whole computer to itself and can use as much CPU as it wants, but it has to switch from one process to another so that the user has the illusion that they are all running in parallel).
In your case, since you're opening, closing and reading from the same file, it probably stays in the virtual file system cache for the whole execution of the process, and your program is being preempted by the kernel because it never blocks (thanks to the system and library caches). On the other hand, Firefox, Gdm and Netbeans are mostly waiting for input from the user or from the network, so they rarely need to be preempted by the kernel.
Those context switches are not harmful. On the contrary, they allow your processor to be used fairly by all applications, even when one of them is waiting for some resource.
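If you want to watch those two counters from inside the program instead of reading /proc/<pid>/status by hand, getrusage() reports the same information (minimal sketch, assuming Linux):

#include <cstdio>
#include <sys/resource.h>

int main() {
    struct rusage ru;
    if (getrusage(RUSAGE_SELF, &ru) == 0) {
        std::printf("voluntary context switches:   %ld\n", ru.ru_nvcsw);
        std::printf("involuntary context switches: %ld\n", ru.ru_nivcsw);
    }
    return 0;
}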
And BTW, to detect memory leaks, a better solution would be to use a tool dedicated to this, such as valgrind.
To add to @Sylvain's info, there is a nice background article on Linux scheduling here: "Inside the Linux scheduler" (developerWorks, June 2006).
To look for a memory leak, it is much better to install and use Valgrind (http://www.valgrind.org/). It will identify memory leaks on the heap and memory error conditions (use of uninitialized memory, and tons of other problems). I use it almost every day.
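A typical invocation for your case (the binary name is taken from the status output above; flags per the Valgrind manual) would be something like
valgrind --leak-check=full ./origin-test
which prints a per-allocation leak summary when the process exits.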