Capture CPU and memory usage dynamically - C++

I am running a shell script to execute a C++ application that measures the performance of an API. I can capture the latency (the time taken to return a value for a given set of parameters) of the API, but I also wish to capture the CPU and memory usage alongside it, at intervals of, say, 5-10 seconds.
Is there a way to do this without affecting the performance of the system too much, and within the same script? I have found many examples of doing it outside (independently of) the script being run, but not one where it is done within the same script.

If you are looking to capture CPU and memory utilization dynamically for the entire Linux box, the following commands can help you too:
CPU
vmstat -n 15 10| awk '{now=strftime("%Y-%m-%d %T "); print now $0}'> CPUDataDump.csv &
vmstat is used to collect CPU counters.
-n prints the header only once; 15 is the delay, so stats are collected every 15 seconds.
10 is the number of intervals, so there will be 10 iterations in this example.
awk '{now=strftime("%Y-%m-%d %T "); print now $0}' prepends a timestamp to each iteration.
The output is redirected to the dump file, and the trailing & runs the command in the background so the script can continue.
Memory
free -m -s 10 -c 10 | awk '{now=strftime("%Y-%m-%d %T "); print now $0}'> DataDumpMemoryfile.csv &
free is for memory stats collection.
-m sets the units to megabytes (you can use -b for bytes, -k for kilobytes, -g for gigabytes).
-s 10 repeats the report every 10 seconds, and -c 10 sets the number of iterations (10 in this example).
awk '{now=strftime("%Y-%m-%d %T "); print now $0}' prepends a timestamp to each iteration.
The output is redirected to the dump file, and the trailing & runs the command in the background so the script can continue.

I'd suggest using the 'time' command and also the 'vmstat' command. The first gives the CPU usage of the executable's run, and the second a periodic (i.e. once per second) dump of the system's CPU/memory/IO.
Example:
time dd if=/dev/zero bs=1K of=/dev/null count=1024000
1024000+0 records in
1024000+0 records out
1048576000 bytes (1.0 GB) copied, 0.738194 seconds, 1.4 GB/s
0.218u 0.519s 0:00.73 98.6% 0+0k 0+0io 0pf+0w <== that's the time result
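If you would rather sample usage from inside the C++ application itself instead of from the surrounding script, here is a minimal sketch of one way to do it (the file name is hypothetical; it assumes Linux, where getrusage() reports the process's own CPU time and peak RSS):

#include <sys/resource.h>
#include <atomic>
#include <chrono>
#include <fstream>
#include <thread>

std::atomic<bool> keep_sampling{true};

// Writes one CSV row per interval: elapsed s, user CPU s, system CPU s, peak RSS kB.
void sampler(const char* path, std::chrono::seconds interval)
{
    std::ofstream out(path);
    const auto start = std::chrono::steady_clock::now();
    while (keep_sampling) {
        rusage ru{};
        if (getrusage(RUSAGE_SELF, &ru) == 0) {
            const auto elapsed = std::chrono::duration_cast<std::chrono::seconds>(
                std::chrono::steady_clock::now() - start).count();
            const double user = ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6;
            const double sys  = ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6;
            out << elapsed << ',' << user << ',' << sys << ',' << ru.ru_maxrss << '\n';
            out.flush();
        }
        std::this_thread::sleep_for(interval);
    }
}

int main()
{
    std::thread t(sampler, "usage_dump.csv", std::chrono::seconds(5));  // hypothetical output file
    // ... run the API latency measurements here ...
    keep_sampling = false;
    t.join();
}

This keeps the sampling inside the same process, so nothing extra has to be launched from the script; compile with -pthread.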

Related

Inconsistency when benchmarking two contiguous measurements

I was benchmarking a function and I saw that some iterations are slower than others.
After some tests I tried to benchmark two contiguous measurements and I still got some weird results.
The code is on wandbox.
For me the important part is :
using clock = std::chrono::steady_clock;
// ...
for (int i = 0; i < statSize; i++)
{
    auto t1 = clock::now();
    auto t2 = clock::now();
}
The loop is optimized away as we can see on godbolt.
call std::chrono::_V2::steady_clock::now()
mov r12, rax
call std::chrono::_V2::steady_clock::now()
The code was compiled with:
g++ bench.cpp -Wall -Wextra -std=c++11 -O3
and gcc version 6.3.0 20170516 (Debian 6.3.0-18+deb9u1) on an Intel® Xeon® W-2195 Processor.
I was the only user on the machine and I tried running with and without high priority (nice or chrt), and the result was the same.
The result I got with 100 000 000 iterations was:
The Y-axis is in nanoseconds; it's the result of the line
std::chrono::duration_cast<std::chrono::nanoseconds>(t2 - t1).count()
These 4 lines make me think of: no cache miss / L1 miss / L2 miss / L3 miss (even if the “L3 cache misses” line seems to be too close to the L2 line).
I am not sure why there would be cache misses; maybe the storage of the result, but it's not in the measured code.
I have tried running the program 10 000 times with a loop of 1500, because the L1 cache of this processor is:
lscpu | grep L1
L1d cache: 32K
L1i cache: 32K
And 1500 * 16 bits = 24 000 bits = 3 000 bytes, which is less than 32K, so there shouldn't be any cache misses.
And the results:
I still have my 4 lines (and some noise).
So if it’s really a cache miss I don’t have any idea why it is happening.
I don’t know if it’s useful for you, but I ran:
sudo perf stat -e cache-misses,L1-dcache-load-misses,L1-dcache-load ./a.out 1000
With the value 1 000 / 10 000 / 100 000 / 1 000 000
I got between 4.30% and 4.70% of all L1-dcache hits, which seems pretty decent to me.
So the questions are:
What is the cause of these slowdowns?
How do I produce a good-quality benchmark of a function when I can’t get a constant time for a no-op?
PS: I don’t know if I am missing useful information/flags, feel free to ask!
How to reproduce:
The code:
#include <iostream>
#include <chrono>
#include <vector>
int main(int argc, char **argv)
{
    int statSize = 1000;
    using clock = std::chrono::steady_clock;
    if (argc == 2)
    {
        statSize = std::atoi(argv[1]);
    }

    std::vector<uint16_t> temps;
    temps.reserve(statSize);

    for (int i = 0; i < statSize; i++)
    {
        auto t1 = clock::now();
        auto t2 = clock::now();

        temps.push_back(
            std::chrono::duration_cast<std::chrono::nanoseconds>(t2 - t1).count());
    }

    for (auto t : temps)
        std::cout << (int)t << std::endl;

    return (0);
}
Build:
g++ bench.cpp -Wall -Wextra -std=c++11 -O3
Generate output (sudo needed):
In this case I run the program 10 000 times. Each run takes 100 measurements, and I remove the first one, which is always about 5 times slower:
for i in {1..10000} ; do sudo nice -n -17 ./a.out 100 | tail -n 99 >> fast_1_000_000_uint16_100 ; done
Generate graph:
cat fast_1_000_000_uint16_100 | gnuplot -p -e "plot '<cat'"
The result I have on my machine:
Where I am after Zulan's answer and all the comments
The current_clocksource is set to tsc and no clocksource switch is seen in dmesg; command used:
dmesg -T | grep tsc
I used this script to disable HyperThreading (HT),
then
grep -c proc /proc/cpuinfo
=> 18
Subtract 1 from the last result to obtain the last available core:
=> 17
Edit /etc/default/grub and add isolcpus=(last result) to GRUB_CMDLINE_LINUX:
GRUB_CMDLINE_LINUX="isolcpus=17"
Finally:
sudo update-grub
reboot
// reexecute the script
Now I can use:
taskset -c 17 ./a.out XXXX
So I run a loop of 100 iterations 10 000 times:
for i in {1..10000} ; do sudo /usr/bin/time -v taskset -c 17 ./a.out 100 > ./core17/run_$i 2>&1 ; done
Check if there is any Involuntary context switches:
grep -L "Involuntary context switches: 0" result/* | wc -l
=> 0
There are none, good. Let's plot:
for i in {1..10000} ; do cat ./core17/run_$i | head -n 99 >> ./no_switch_taskset ; done
cat no_switch_taskset | gnuplot -p -e "plot '<cat'"
Result:
There are still 22 measurements greater than 1000 (while most values are around 20) that I don't understand.
Next step, TBD:
Do the
sudo nice -n -17 perf record...
part of Zulan's answer.
I can't reproduce it with these particular clustered lines, but here is some general information.
Possible causes
As discussed in the comments, nice on a normal idle system is just a best effort. You still have at least
The scheduling tick timer
Kernel tasks that are bound to a certain core
Your task may be migrated from one core to another for an arbitrary reason
You can use isolcpus and taskset to get exclusive cores for certain processes to avoid some of that, but I don't think you can really get rid of all the kernel tasks. In addition, use nohz=full to disable the scheduling tick. You should also disable hyperthreading to get exclusive access to a core from a hardware thread.
Except for taskset, which I absolutely recommend for any performance measurement, these are quite unusual measures.
Measure instead of guessing
If there is a suspicion what could be happening, you can usually setup a measurement to confirm or disprove that hypothesis. perf and tracepoints are great for that. For example, we can start with looking at scheduling activity and some interrupts:
sudo nice -n -17 perf record -o perf.data -e sched:sched_switch -e irq:irq_handler_entry -e irq:softirq_entry ./a.out ...
perf script will now list every occurrence for you. To correlate that with slow iterations you can use perf probe and a slightly modified benchmark:
void __attribute__((optimize("O0"))) record_slow(int64_t count)
{
    (void)count;
}
...
auto count = std::chrono::duration_cast<std::chrono::nanoseconds>(t2 - t1).count();
if (count > 100) {
    record_slow(count);
}
temps.push_back(count);
And compile with -g
sudo perf probe -x ./a.out record_slow count
Then add -e probe_a:record_slow to the call to perf record. Now if you are lucky, you find some close events, e.g.:
a.out 14888 [005] 51213.829062: irq:softirq_entry: vec=1 [action=TIMER]
a.out 14888 [005] 51213.829068: probe_a:record_slow: (559354aec479) count=9029
Be aware: while this information will likely explain some of your observations, you enter a world of even more puzzling questions and oddities. Also, while perf is pretty low-overhead, there may be some perturbation of what you measure.
What are we benchmarking?
First of all, you need to be clear what you actually measure: The time to execute std::chrono::steady_clock::now(). It's actually good to do that to figure out at least this measurement overhead as well as the precision of the clock.
That's actually a tricky point. The cost of this function, with clock_gettime underneath, depends on your current clocksource [1]. If that's tsc you're fine - hpet is much slower. Linux may switch quietly [2] from tsc to hpet during operation.
What to do to get stable results?
Sometimes you might need to do benchmarks with extreme isolation, but usually that's not necessary, even for very low-level micro-architecture benchmarks. Instead, you can use statistical effects: repeat the measurement. Use appropriate summary statistics (mean, quantiles), and sometimes you may want to exclude outliers.
If the measurement kernel is not significantly longer than the timer precision, you will have to repeat the kernel and measure outside the repeat loop to get a throughput rather than a latency, which may or may not be different.
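As a rough illustration of the repeat-and-summarize advice (not code from the answer), a small sketch that reports the mean, median and 95th percentile of a vector of nanosecond timings such as the one the benchmark above collects:

#include <algorithm>
#include <cstdint>
#include <iostream>
#include <numeric>
#include <vector>

// Returns an approximate q-quantile; the copy is intentional because
// nth_element reorders its input.
double quantile(std::vector<int64_t> v, double q)
{
    const std::size_t idx = static_cast<std::size_t>(q * (v.size() - 1));
    std::nth_element(v.begin(), v.begin() + idx, v.end());   // partial sort is enough
    return static_cast<double>(v[idx]);
}

int main()
{
    std::vector<int64_t> ns = {20, 21, 20, 19, 22, 20, 1500, 20, 21, 20};  // example data
    const double mean = std::accumulate(ns.begin(), ns.end(), 0.0) / ns.size();
    std::cout << "mean   " << mean << " ns\n"
              << "median " << quantile(ns, 0.50) << " ns\n"
              << "p95    " << quantile(ns, 0.95) << " ns\n";  // outliers show up here
}

The median and high quantiles make the occasional 1000+ ns outliers visible without letting them dominate the summary the way a plain mean does.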
Yes, benchmarking right is very complicated; you need to consider a lot of aspects, especially when you get closer to the hardware and your kernel times get very short. Fortunately there's some help; for example, Google's benchmark library provides a lot of help in terms of doing the right number of repetitions and also in terms of handling experiment factors.
[1] /sys/devices/system/clocksource/clocksource0/current_clocksource
[2] Actually it shows up in dmesg as something like
clocksource: timekeeping watchdog on CPU: Marking clocksource 'tsc' as unstable because the skew is too large:

Performance Issue in Executing Shell Commands

In my application, I need to execute a large number of shell commands via C++ code. I found that the program takes more than 30 seconds to execute 6000 commands, which is unacceptable! Is there a better way to execute shell commands (from C/C++ code)?
//Below functions is used to set rules for
//Linux tool --TC, and in runtime there will
//be more than 6000 rules to be set from shell
//those TC commands are like the example below:
//tc qdisc del dev eth0 root
//tc qdisc add dev eth0 root handle 1:0 cbq bandwidth
// 10Mbit avpkt 1000 cell 8
//tc class add dev eth0 parent 1:0 classid 1:1 cbq bandwidth
// 100Mbit rate 8000kbit weight 800kbit prio 5 allot 1514
// cell 8 maxburst 20 avpkt 1000 bounded
//tc class add dev eth0 parent 1:0 classid 1:2 cbq bandwidth
// 100Mbit rate 800kbit weight 80kbit prio 5 allot 1514 cell
// 8 maxburst 20 avpkt 1000 bounded
//tc class add dev eth0 parent 1:0 classid 1:3 cbq bandwidth
// 100Mbit rate 800kbit weight 80kbit prio 5 allot 1514 cell
// 8 maxburst 20 avpkt 1000 bounded
//tc class add dev eth0 parent 1:1 classid 1:1001 cbq bandwidth
// 100Mbit rate 8000kbit weight 800kbit prio 8 allot 1514 cell
// 8 maxburst 20 avpkt 1000
//......
void CTCProxy::ApplyTCCommands()
{
    FILE* OutputStream = NULL;
    // mTCCommands is a vector<string>;
    // every string in it is a TC rule.
    int CmdCount = mTCCommands.size();

    for (int i = 0; i < CmdCount; i++) {
        OutputStream = popen(mTCCommands[i].c_str(), "r");
        if (OutputStream) {
            pclose(OutputStream);
        } else {
            printf("popen error!\n");
        }
    }
}
UPDATE
I tried putting all the shell commands into a shell script and letting the test app call this script file using system("xxx.sh"). This time it takes 24 seconds to execute all 6000 shell commands, less than it took before. But this is still much longer than we expected! Is there any other way to decrease the execution time to less than 10 seconds?
So, most likely (based on my experience with a similar type of thing), the majority of the time is spent starting a new process running a shell; the execution of the actual command in the shell is very short. (And 6000 in 30 seconds doesn't sound too terrible, actually.)
There are a variety of ways you could do this. I'd be tempted to combine it all into one shell script rather than running individual lines. This would involve writing all the 'tc' strings to a file and then passing that to popen().
Another thought is whether you can combine several strings together into one execution.
If the commands are complete and directly executable (that is, no shell is needed to execute the program), you could also do your own fork and exec; see the sketch below. This would save creating a shell process, which then creates the actual process.
Also, you may consider running a small number of processes in parallel, which on any modern machine will likely speed things up by roughly the number of processor cores you have.
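A minimal sketch of the fork/exec idea mentioned above (not the poster's code; the command is assumed to be already tokenized into an argv array, which the real application would have to do for each tc rule):

#include <cstdio>
#include <sys/wait.h>
#include <unistd.h>

// Run one already-tokenized command with fork()/execvp() instead of popen(),
// so no /bin/sh is started per command. Error handling kept minimal.
int run_command(char* const argv[])
{
    const pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return -1;
    }
    if (pid == 0) {                 // child: replace the process image
        execvp(argv[0], argv);
        perror("execvp");           // only reached if exec failed
        _exit(127);
    }
    int status = 0;                 // parent: wait for the child to finish
    waitpid(pid, &status, 0);
    return status;
}

int main()
{
    // Example: "tc qdisc del dev eth0 root" split into an argv array.
    char* argv[] = {(char*)"tc", (char*)"qdisc", (char*)"del",
                    (char*)"dev", (char*)"eth0", (char*)"root", nullptr};
    return run_command(argv);
}

From here you could also launch several children before waiting on them, to get the parallelism mentioned above.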
You can start a shell (/bin/sh), pipe all the commands to it, and parse the output. Or you can create a Makefile, as that would give you more control over how the commands are executed, parallel execution, and error handling.

How to get total cpu usage in Linux using C++

I am trying to get total CPU usage in %. First I should say that "top" simply will not do, as there is a delay between CPU dumps: it requires 2 dumps and several seconds, which hangs my program (I do not want to give it its own thread).
The next thing I tried is "ps", which is instant but always gives a very high total (20+), and when I actually got my CPU to do something it stayed at about 20...
Is there any other way that I could get total cpu usage? It does not matter if it is over one second or longer periods of time... Longer periods would be more useful, though.
cat /proc/stat
http://www.linuxhowtos.org/System/procstat.htm
I agree with the answer above. The cpu line in this file gives the total number of "jiffies" your system has spent doing different types of processing.
What you need to do is take 2 readings of this file, separated by whatever interval of time you require. The numbers are increasing values (subject to integer rollover), so to get the %cpu you need to calculate how many jiffies have elapsed over your interval versus how many jiffies were spent doing work.
e.g.
Suppose at 14:00:00 you have
cpu 4698 591 262 8953 916 449 531
total_jiffies_1 = (sum of all values) = 16400
work_jiffies_1 = (sum of user,nice,system = the first 3 values) = 5551
and at 14:00:05 you have
cpu 4739 591 289 9961 936 449 541
total_jiffies_2 = 17506
work_jiffies_2 = 5619
So the %cpu usage over this period is:
work_over_period = work_jiffies_2 - work_jiffies_1 = 68
total_over_period = total_jiffies_2 - total_jiffies_1 = 1106
%cpu = work_over_period / total_over_period * 100 = 6.1%
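A minimal C++ sketch of the same jiffy arithmetic (it assumes the standard /proc/stat layout and uses only the aggregate cpu line and its first seven fields):

#include <chrono>
#include <fstream>
#include <iostream>
#include <string>
#include <thread>
#include <vector>

// Reads the first seven jiffy counters from the aggregate "cpu" line:
// user, nice, system, idle, iowait, irq, softirq.
static std::vector<long long> read_cpu_line()
{
    std::ifstream stat("/proc/stat");
    std::string label;
    stat >> label;                          // "cpu"
    std::vector<long long> jiffies(7);
    for (auto& j : jiffies) stat >> j;
    return jiffies;
}

int main()
{
    const auto a = read_cpu_line();
    std::this_thread::sleep_for(std::chrono::seconds(5));
    const auto b = read_cpu_line();

    long long work = 0, total = 0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        if (i < 3) work += b[i] - a[i];     // user + nice + system
        total += b[i] - a[i];
    }
    std::cout << "cpu usage over interval: "
              << 100.0 * work / total << " %\n";
}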
Try reading /proc/loadavg. The first three numbers are the number of processes actually running (i.e., using a CPU), averaged over the last 1, 5, and 15 minutes, respectively.
http://www.linuxinsight.com/proc_loadavg.html
Read /proc/cpuinfo to find the number of CPU/cores available to the systems.
Call getloadavg() (or alternatively read /proc/loadavg), take the first value, multiply it by 100 (to convert to percent), and divide by the number of CPUs/cores. If the value is greater than 100, truncate it to 100. Done.
Relevant documentation: man getloadavg and man 5 proc
N.B. Load average, as usual on *NIX systems, can be more than 100% (per CPU/core) because it actually measures the number of processes ready to be run by the scheduler. With a Windows-like CPU metric, when load is at 100% you do not really know whether it is optimal use of CPU resources or the system is overloaded. Under *NIX, optimal use of the CPU gives a loadavg of about 1.0 (or 2.0 for a dual-CPU system). If the value is much greater than the number of CPUs/cores, then you might want to plug extra CPUs into the box.
Otherwise, dig the /proc file system.
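A minimal sketch of that getloadavg() approach (it uses sysconf(_SC_NPROCESSORS_ONLN) instead of parsing /proc/cpuinfo, which should be equivalent for counting online cores):

#include <algorithm>
#include <cstdlib>
#include <iostream>
#include <unistd.h>

int main()
{
    double load[1];
    if (getloadavg(load, 1) != 1) {         // 1-minute load average
        std::cerr << "getloadavg failed\n";
        return 1;
    }
    const long cores = sysconf(_SC_NPROCESSORS_ONLN);
    double percent = 100.0 * load[0] / static_cast<double>(cores);
    percent = std::min(percent, 100.0);     // truncate, as suggested above
    std::cout << "approximate cpu usage: " << percent << " %\n";
}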
cpu-stat is a C++ project that lets you read Linux CPU counters from /proc/stat.
Get the CPUData.* and CPUSnapshot.* files from cpu-stat's src directory.
Quick implementation to get overall cpu usage:
#include "CPUSnapshot.h"
#include <chrono>
#include <thread>
#include <iostream>
int main()
{
    CPUSnapshot previousSnap;
    std::this_thread::sleep_for(std::chrono::milliseconds(1000));
    CPUSnapshot curSnap;

    const float ACTIVE_TIME = curSnap.GetActiveTimeTotal() - previousSnap.GetActiveTimeTotal();
    const float IDLE_TIME   = curSnap.GetIdleTimeTotal() - previousSnap.GetIdleTimeTotal();
    const float TOTAL_TIME  = ACTIVE_TIME + IDLE_TIME;
    int usage = 100.f * ACTIVE_TIME / TOTAL_TIME;
    std::cout << "total cpu usage: " << usage << " %" << std::endl;
}
Compile it:
g++ -std=c++11 -o CPUUsage main.cpp CPUSnapshot.cpp CPUData.cpp
I suggest two files to start with: /proc/stat and /proc/cpuinfo.
http://www.mjmwired.net/kernel/Documentation/filesystems/proc.txt
Have a look at this C++ lib. The information is parsed from /proc/stat. It also parses memory usage from /proc/meminfo and ethernet load from /proc/net/dev.
----------------------------------------------
current CPULoad:5.09119
average CPULoad 10.0671
Max CPULoad 10.0822
Min CPULoad 1.74111
CPU: : Intel(R) Core(TM) i7-10750H CPU @ 2.60GHz
----------------------------------------------
network load: wlp0s20f3 : 1.9kBit/s : 920Bit/s : 1.0kBit/s : RX Bytes Startup: 15.8mByte TX Bytes Startup: 833.5mByte
----------------------------------------------
memory load: 28.4% maxmemory: 16133792 Kb used: 4581564 Kb Memload of this Process 170408 KB
----------------------------------------------
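For the memory part, a minimal sketch of the kind of /proc/meminfo parsing such a library does (not the library's actual code; it assumes the MemTotal and MemAvailable fields are present, which holds on reasonably recent kernels):

#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

// Returns the value (in kB) of a /proc/meminfo field, or -1 if not found.
static long long meminfo_value(const std::string& key)
{
    std::ifstream meminfo("/proc/meminfo");
    std::string line;
    while (std::getline(meminfo, line)) {
        if (line.compare(0, key.size(), key) == 0) {
            std::istringstream iss(line.substr(key.size() + 1));  // skip "Key:"
            long long kb = 0;
            iss >> kb;
            return kb;
        }
    }
    return -1;
}

int main()
{
    const long long total = meminfo_value("MemTotal");
    const long long avail = meminfo_value("MemAvailable");
    if (total <= 0 || avail < 0) return 1;
    std::cout << "memory load: "
              << 100.0 * (total - avail) / total << " %\n";
}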

Reading binary files, Linux Buffer Cache

I am busy writing something to test the read speeds for disk IO on Linux.
At the moment I have something like this to read the files:
Edited to change code to this:
const int segsize = 1048576;
char buffer[segsize];
ifstream file;
file.open(sFile.c_str());
while(file.readsome(buffer,segsize)) {}
For foo.dat, which is 150GB, the first time I read it in, it takes around 2 minutes.
However if I run it within 60 seconds of the first run, it will then take around 3 seconds to run. How is that possible? Surely the only place that could be read from that fast is the buffer cache in RAM, and the file is too big to fit in RAM.
The machine has 50GB of RAM, and the drive is an NFS mount with all the default settings. Please let me know where I could look to confirm that this file is actually being read at this speed. Is my code wrong? It appears to take a correct amount of time the first time the file is read.
Edited to Add:
Found out that my files were only being read up to a random point. I've managed to fix this by changing segsize down to 1024 from 1048576. I have no idea why changing this allows the ifstream to read the whole file instead of stopping at a random point.
Thanks for the answers.
On Linux, you can do this for a quick throughput test:
$ dd if=/dev/md0 of=/dev/null bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 0.863904 s, 243 MB/s
$ dd if=/dev/md0 of=/dev/null bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 0.0748273 s, 2.8 GB/s
$ sync && echo 3 > /proc/sys/vm/drop_caches
$ dd if=/dev/md0 of=/dev/null bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 0.919688 s, 228 MB/s
echo 3 > /proc/sys/vm/drop_caches will flush the cache properly
in_avail doesn't give the length of the file, but a lower bound on what is available (especially if the buffer has already been used, it returns the size available in the buffer). Its goal is to know what can be read without blocking.
unsigned int is most probably unable to hold a length of more than 4GB, so what is read can very well be in the cache.
C++0x Stream Positioning may be interesting to you if you are using large files
in_avail returns a lower bound on how much is available to read in the stream's read buffer, not the size of the file. To read the whole file via the stream, just keep calling the stream's readsome() method and checking how much was read with the gcount() method - when that returns zero, you have read everything.
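A small sketch of such a read loop; it swaps readsome() for plain read(), which blocks until the buffer is filled or EOF is reached and therefore avoids stopping early when the stream's internal buffer happens to be empty (gcount() is still used to count what was read):

#include <fstream>
#include <iostream>
#include <vector>

int main(int argc, char** argv)
{
    if (argc != 2) return 1;
    std::ifstream file(argv[1], std::ios::binary);
    std::vector<char> buffer(1 << 20);              // 1 MiB per read
    unsigned long long total = 0;

    // read() sets the fail/eof bits on the final short read, but gcount()
    // still reports how many bytes that last read delivered.
    while (file.read(buffer.data(), buffer.size()) || file.gcount() > 0) {
        total += static_cast<unsigned long long>(file.gcount());
    }
    std::cout << "read " << total << " bytes\n";
}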
It appears to take a correct amount of time the first time the file is read.
On that first read, you're reading 150GB in about 2 minutes. That works out to about 10 gigabits per second. Is that what you're expecting (based on the network to your NFS mount)?
One possibility is that the file could be at least in part sparse. A sparse file has regions that are truly empty - they don't even have disk space allocated to them. These sparse regions also don't consume much cache space, and so reading the sparse regions will essentially only require time to zero out the userspace pages they're being read into.
You can check with ls -lsh. The first column will be the on-disk size - if it's less than the file size, the file is indeed sparse. To de-sparse the file, just write to every page of it.
If you would like to test for true disk speeds, one option would be to use the O_DIRECT flag to open(2) to bypass the cache. Note that all IO using O_DIRECT must be page-aligned, and some filesystems do not support it (in particular, it won't work over NFS). Also, it's a bad idea for anything other than benchmarking. See some of Linus's rants in this thread.
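A minimal sketch of an O_DIRECT read test along those lines (local filesystems only; 4096 is assumed here as a common page/block alignment, and the error handling is deliberately thin):

#include <cstdio>
#include <cstdlib>
#include <fcntl.h>
#include <iostream>
#include <unistd.h>

int main(int argc, char** argv)
{
    if (argc != 2) return 1;
    const int fd = open(argv[1], O_RDONLY | O_DIRECT);  // bypass the page cache
    if (fd < 0) { perror("open"); return 1; }

    const size_t blk = 1 << 20;                          // 1 MiB, multiple of 4096
    void* buf = nullptr;
    if (posix_memalign(&buf, 4096, blk) != 0) return 1;  // O_DIRECT needs aligned buffers

    unsigned long long total = 0;
    ssize_t n;
    while ((n = read(fd, buf, blk)) > 0)
        total += static_cast<unsigned long long>(n);

    std::cout << "read " << total << " bytes\n";
    free(buf);
    close(fd);
}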
Finally, to drop all caches on a linux system for testing, you can do:
echo 3 > /proc/sys/vm/drop_caches
If you do this on both client and server, you will force the file out of memory. Of course, this will have a negative performance impact on anything else running at the time.

Unix Command For Benchmarking Code Running K times

Suppose I have a code executed in Unix this way:
$ ./mycode
My question is: is there a way I can time the running time of my code
executed K times? The value of K = 1000, for example.
I am aware of the Unix "time" command, but that only executes 1 instance.
to improve/clarify on Charlie's answer:
time (for i in $(seq 10000); do ./mycode; done)
try
$ time ( your commands )
write a loop to go in the parens to repeat your command as needed.
Update
Okay, we can solve the command-line-too-long issue. This is bash syntax; if you're using another shell you may have to use expr(1).
$ time (
> while ((n++ < 100)); do echo "n = $n"; done
> )
real 0m0.001s
user 0m0.000s
sys 0m0.000s
Just a word of advice: Make sure this "benchmark" comes close to your real usage of the executed program. If this is a short living process, there could be a significant overhead caused by the process creation alone. Don't assume that it's the same as implementing this as a loop within your program.
To enhance some other responses a little bit: some of them (those based on seq) may cause a "command line too long" error if you decide to test, say, one million times. The following does not have this limitation:
time ( a=0 ; while test $a -lt 10000 ; do echo $a ; a=`expr $a + 1` ; done)
Another solution to the "command line too long" problem is to use a C-style for loop within bash:
$ for ((i=0;i<10;i++)); do echo $i; done
This works in zsh as well (though I bet zsh has some niftier way of using it, I'm just still new to zsh). I can't test others, as I've never used any other.
Forget time; hyperfine will do exactly what you are looking for: https://github.com/sharkdp/hyperfine
% hyperfine 'sleep 0.3'
Benchmark 1: sleep 0.3
Time (mean ± σ): 310.2 ms ± 3.4 ms [User: 1.7 ms, System: 2.5 ms]
Range (min … max): 305.6 ms … 315.2 ms 10 runs
Linux perf stat has a -r repeat_count option. Its output only gives you the mean and standard deviation for each HW/software event, not min/max as well.
It doesn't discard the first run as a warm-up or anything either, but it's somewhat useful in a lot of cases.
Scroll to the right for the stddev results like ( +- 0.13% ) for cycles. Less variance in that than in task-clock, probably because CPU frequency was not fixed. (I intentionally picked a quite short run time, although with Skylake hardware P-state and EPP=performance, it should be ramping up to max turbo quite quickly even compared to a 34 ms run time. But for a CPU-bound task that's not memory-bound at all, its interpreter loop runs at a constant number of clock cycles per iteration, modulo only branch misprediction and interrupts. --all-user is counting CPU events like instructions and cycles only for user-space, not inside interrupt handlers and system calls / page-faults.)
$ perf stat --all-user -r5 awk 'BEGIN{for(i=0;i<1000000;i++){}}'
Performance counter stats for 'awk BEGIN{for(i=0;i<1000000;i++){}}' (5 runs):
34.10 msec task-clock # 0.984 CPUs utilized ( +- 0.40% )
0 context-switches # 0.000 /sec
0 cpu-migrations # 0.000 /sec
178 page-faults # 5.180 K/sec ( +- 0.42% )
139,277,791 cycles # 4.053 GHz ( +- 0.13% )
360,590,762 instructions # 2.58 insn per cycle ( +- 0.00% )
97,439,689 branches # 2.835 G/sec ( +- 0.00% )
16,416 branch-misses # 0.02% of all branches ( +- 8.14% )
0.034664 +- 0.000143 seconds time elapsed ( +- 0.41% )
awk here is just a busy-loop to give us something to measure. If you're using this to microbenchmark a loop or function, construct it to have minimal startup overhead as a fraction of total run time, so perf stat event counts for the whole run mostly reflect the code you wanted to time. Often this means building a repeat-loop into your own program, to loop over the initialized data multiple times.
See also Idiomatic way of performance evaluation? - timing very short things is hard due to measurement overhead. Carefully constructing a repeat loop that tells you something interesting about the throughput or latency of your code under test is important.
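A minimal sketch of building such a repeat loop into the program itself (work() is just a stand-in for whatever kernel you actually want to time):

#include <chrono>
#include <cstdint>
#include <iostream>

// Placeholder workload; replace with the real code under test.
static uint64_t work(uint64_t x)
{
    for (int i = 0; i < 1000; ++i)
        x = x * 6364136223846793005ULL + 1442695040888963407ULL;
    return x;
}

int main()
{
    const int repeats = 1000000;                 // make total time >> startup time
    uint64_t sink = 0;                           // keep the result live so the
                                                 // loop is not optimized away
    const auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < repeats; ++i)
        sink ^= work(i);
    const auto t1 = std::chrono::steady_clock::now();

    const double ns = std::chrono::duration<double, std::nano>(t1 - t0).count();
    std::cout << "avg " << ns / repeats << " ns per call (sink=" << sink << ")\n";
}

With enough repeats, the perf stat counters (or the clock difference) are dominated by the loop rather than by process startup, which is the point being made above.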
Run-to-run variation is often a thing, but often back-to-back runs like this will have less variation within the group than between runs separated by half a second to up-arrow/return. Perhaps something to do with transparent hugepage availability, or choice of alignment? Usually for small microbenchmarks, so not sensitive to the file getting evicted from the pagecache.
(The +- range printed by perf is just I think one standard deviation based on the small sample size, not the full range it saw.)
If you're worried about the overhead of constantly loading and unloading the executable into process space, I suggest you set up a RAM disk and time your app from there.
Back in the 70's we used to be able to set a "sticky" bit on the executable and have it remain in memory. I don't know of a single Unix which still supports this behaviour, as it made updating applications a nightmare... :o)