Preprocessor to count number of strings in file - c++

I want a macro (or anything else that works) that can go through a C/C++ source file and count the number of occurrences of a specific string in that physical file.
#include <stdio.h>

#define numInFile(str) [???]

int main() {
    printf("blahblah");
    printf("You've used printf %d times", numInFile("printf") - 2); // -2 accounts for this call
    return 0;
}
Edit: The question was originally specific to using this functionality for exit calls. It is now generalized for any use.

If I understand you correctly, you want to have unique error codes that you can trace back to the line where the error happened?
I will address that Y question instead of your X one:
You can use __LINE__. __LINE__ expands to an integer constant of the current line number. You could #define quit as:
#define quit(code) (quit)(__LINE__ + (code))

void (quit)(int code) { // separate function in case you want to do more
    exit(code);
}
Keep in mind though that the exit code of a process is not the best way to encode such information. On POSIX, only the lower 8 bits of an exit code are guaranteed to be available. But since you already use 300 as a base value, I assume you are on Windows or some other system where this isn't a concern.
For debugging purposes, alternatively consider writing to stderr when an error happens (perhaps behind a command line flag).
If exit was just an example, and you intend to use it inside your application, you could save __LINE__ and __FILE__ in global (or _Thread_local) variables on error and store only the exit reason in the error code.
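A minimal sketch of that idea; the names and the _Thread_local choice are mine, not anything from the answer:
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical error bookkeeping: remember where the error happened,
 * return only the reason as the error code. _Thread_local needs C11. */
static _Thread_local const char *g_err_file;
static _Thread_local int g_err_line;

#define RECORD_ERROR() (g_err_file = __FILE__, g_err_line = __LINE__)

enum { ERR_OK = 0, ERR_NO_MEMORY };

static int do_work(void) {
    void *p = malloc((size_t)-1);   /* deliberately fails */
    if (p == NULL) {
        RECORD_ERROR();             /* where it happened */
        return ERR_NO_MEMORY;       /* why it happened   */
    }
    free(p);
    return ERR_OK;
}

int main(void) {
    if (do_work() != ERR_OK)
        fprintf(stderr, "error at %s:%d\n", g_err_file, g_err_line);
    return 0;
}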
Regarding your X question: the preprocessor doesn't do that kind of thing. You will have to offload such tasks to a shell/perl/whatever script that your build script can call.
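For illustration, the consuming side of that approach could look roughly like this; the macro name and the counting command in the comment are assumptions, and the actual count would come from a pre-build script rather than being hard-coded:
#include <stdio.h>

/* PRINTF_COUNT would normally come from a tiny header written by a pre-build
 * step, e.g. roughly:
 *   printf '#define PRINTF_COUNT %d\n' "$(grep -o printf main.c | wc -l)" > printf_count.h
 * It is hard-coded here only to keep the sketch self-contained. */
#define PRINTF_COUNT 2

int main(void) {
    printf("blahblah\n");
    printf("You've used printf %d times\n", PRINTF_COUNT);
    return 0;
}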

There's nothing built-in to do this. It would be possible to hook something up to your build system to generate a header file with the relevant counts and use a macro to pull the right value from that header file.
However, given that various Unix systems limit the range of exit values (the Linux machine I am looking at only uses the lowest 8 bits, meaning that exit(256) is identical to exit(0)), you probably don't want to do this in the first place. You'd be better off using a logging macro that emits the name of the compilation unit and the line where it was expanded, and then calls exit(EXIT_FAILURE).
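A minimal sketch of such a logging macro, assuming nothing beyond the standard library (the name FAIL is made up):
#include <stdio.h>
#include <stdlib.h>

/* Log where the failure happened instead of encoding it in the exit status. */
#define FAIL(msg)                                                          \
    do {                                                                   \
        fprintf(stderr, "%s:%d: fatal: %s\n", __FILE__, __LINE__, (msg));  \
        exit(EXIT_FAILURE);                                                \
    } while (0)

int main(void) {
    FAIL("could not open the configuration file");  /* prints file and line, then exits */
}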

Compare execution paths of same code under different inputs

I'm debugging a very complex C++ function that gives me unexpected results for some inputs. I'd like to compare code executions under different inputs so that I can find out which part causes the bug. A tool that can compare code execution paths is what I am looking for. Please let me know if such a tool exists, or otherwise whether there are techniques I can employ to do the same thing.
To describe my problem concretely, here I'm using a contrived example.
Say this is the function in pseudocode,
double payTax(double income)
{
    if (income < 10000)
        return noTax();
    else if (10000 < income < 30000)
        return levelOneTax();
    else if (30000 < income < 48000)
        return levelTwoTax();
    else
        return levelThreeAboveTax();
}
Given input 15000, the function computes the correct amount of tax, but somehow input 16000 gives an erroneous tax amount. Supposedly, inputs 15000 and 16000 should cause the function to go through exactly the same execution path; on the other hand, if they take different paths, then something must have gone wrong within the function. Therefore, a tool that compares execution paths would reveal enough information to help me quickly identify the bug. I'm looking for such a tool, preferably one compatible with Visual Studio 2010. It would be even better if such a tool also kept the values of variables.
P.S. Debugging is the last thing I want to do, because the code base I am working with is much, much bigger and more complex than the trivial payTax example.
Please help. Thanks.
The keywords you are looking for is "code coverage" or "coverage analysis" or "code coverage analysis".
Which tool you use will naturally depend on the rest of your environment.
I know that this question is almost ten years old now, but I'll still give my answer here, since it may be useful for a random googler.
The approach does not require any additional 3rd party tools, except maybe a usual text diff application to compare text files with execution paths.
You'll need to output the execution path yourself from your application, but instead of adding logging code to every function, you'll use special support from the compiler to make it call your handlers upon each function entry and exit. The Microsoft compiler calls these hook functions and requires separate flags for hooking function entry (/Gh) and exit (/GH). The GNU C++ compiler calls them instrumentation functions and requires a single -finstrument-functions flag for both.
Given those flags, the compilers will add special prologue and epilogue code to each function being compiled, which will call special handlers, one for entry and one for exit. You'll need to provide the implementation of those handlers yourself. On GNU C++ the handlers are already passed pointers to the function being entered or exited. If you're on MSVC++, you'll need to use the return value of the _ReturnAddress intrinsic, with some adjustment, to get the address of the function.
Then you can just output the address as is and later use something like the addr2line tool to translate the address to a function name. Or you can go one step further and do the translation yourself.
On MSVC++ you can use DbgHelp API and specifically SymInitialize and SymFromAddr helpers to translate the address into the function name. This will require your application to be compiled with debug information.
On GNU C++ you will probably want to use backtrace_symbols to translate the address into a function name, and then maybe __cxa_demangle to demangle the returned name. This will probably require your executable to be built with -rdynamic.
Having all this in place you can output the name of each called function with the needed indent and thus have the call path. Or even do some fancier stuff like this, this or this.
You can use this MSVC++ code or this GCC code as a starting point, or just use your favorite search engine for other examples which are plenty.
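On the GCC side, a minimal sketch of the two handlers might look like this; the output format is my own choice, and the printed addresses can then be fed to addr2line as described above. Compile the rest of the program with -finstrument-functions and link this file in:
#include <stdio.h>

/* The attribute keeps the handlers themselves from being instrumented. */
void __cyg_profile_func_enter(void *this_fn, void *call_site)
    __attribute__((no_instrument_function));
void __cyg_profile_func_exit(void *this_fn, void *call_site)
    __attribute__((no_instrument_function));

static int depth;   /* indentation level for the call trace */

void __cyg_profile_func_enter(void *this_fn, void *call_site) {
    (void)call_site;
    fprintf(stderr, "%*senter %p\n", 2 * depth++, "", this_fn);
}

void __cyg_profile_func_exit(void *this_fn, void *call_site) {
    (void)call_site;
    fprintf(stderr, "%*sexit  %p\n", 2 * --depth, "", this_fn);
}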
The tool you want is printf or std::cerr!
And you have a substantial error in your code: a statement like if ( 10000 < income < 30000) will not work as expected! You want to write it like if( 10000 < income && income < 30000 ).
And to keep testing simple, please use curly brackets as in:
if( 10000 < income && income < 30000 ) {
return levelOneTax();
} else if( ...
Because then it will be much easier to add debug output, as in:
if( 10000 < income && income < 30000 ) {
std::cerr << "using levelOneTax for income=" << income << std::endl;
return levelOneTax();
} else if( ...
EDIT
BTW: "a tool that compares execution paths would reveal enough information [...]", BUT in the sense you are expecting, such a tool would reveal TOO MUCH information to handle. The best thing you can do is debugging and verifying that your code is doing what you expect it to do. A "code coverage" tool would probably be too big for your case (and also such tools are not cheap).

How can I monitor what's being put into the standard out buffer and break when a specific string is deposited in the pipe?

In Linux, with C/C++ code, using gdb, how can you add a gdb breakpoint to scan the incoming strings in order to break on a particular string?
I don't have access to a specific library's code, but I want to break as soon as that library sends a specific string to standard out, so I can go back up the stack and investigate the part of my code that is calling the library. Of course I don't want to wait until a buffer flush occurs. Can this be done? Perhaps with a routine in libstdc++?
This question might be a good starting point: how can I put a breakpoint on "something is printed to the terminal" in gdb?
So you could at least break whenever something is written to stdout. The method basically involves setting a breakpoint on the write syscall with a condition that the first argument is 1 (i.e. STDOUT). In the comments, there is also a hint as to how you could inspect the string parameter of the write call as well.
x86 32-bit mode
I came up with the following and tested it with gdb 7.0.1-debian. It seems to work quite well. $esp + 8 contains a pointer to the memory location of the string passed to write, so first you cast it to an integral, then to a pointer to char. $esp + 4 contains the file descriptor to write to (1 for STDOUT).
(gdb) break write if 1 == *(int*)($esp + 4) && strcmp((char*)*(int*)($esp + 8), "your string") == 0
x86 64-bit mode
If your process is running in x86-64 mode, then the parameters are passed through the scratch registers %rdi and %rsi:
(gdb) break write if 1 == $rdi && strcmp((char*)($rsi), "your string") == 0
Note that one level of indirection is removed since we're using scratch registers rather than variables on the stack.
Variants
Functions other than strcmp can be used in the above snippets:
strncmp is useful if you want to match only the first n characters of the string being written
strstr can be used to find matches within a string, since you can't always be certain that the string you're looking for is at the beginning of the string being written through the write function.
Edit: I enjoyed this question and finding its subsequent answer. I decided to do a blog post about it.
catch + strstr condition
The cool thing about this method is that it does not depend on glibc write being used: it traces the actual system call.
Furthermore, it is more resilient to printf() buffering, as it might even catch strings that are printed across multiple printf() calls.
x86_64 version:
define stdout
catch syscall write
commands
printf "rsi = %s\n", $rsi
bt
end
condition $bpnum $rdi == 1 && strstr((char *)$rsi, "$arg0") != NULL
end
stdout qwer
Test program:
#define _XOPEN_SOURCE 700
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
int main() {
    write(STDOUT_FILENO, "asdf1", 5);
    write(STDOUT_FILENO, "qwer1", 5);
    write(STDOUT_FILENO, "zxcv1", 5);
    write(STDOUT_FILENO, "qwer2", 5);
    printf("as");
    printf("df");
    printf("qw");
    printf("er");
    printf("zx");
    printf("cv");
    fflush(stdout);
    return EXIT_SUCCESS;
}
Outcome: breaks at:
qwer1
qwer2
fflush. The previous printf calls didn't actually print anything; they were buffered! The write syscall only happened on the fflush.
Notes:
$bpnum thanks to Tromey at: https://sourceware.org/bugzilla/show_bug.cgi?id=18727
rdi: first argument of the write syscall on x86_64; for write this is the file descriptor, so 1 means stdout
rsi: second argument of the syscall; for write it points to the buffer
strstr: standard C function, searches for a substring, returns NULL if none is found
Tested in Ubuntu 17.10, gdb 8.0.1.
strace
Another option if you are feeling interactive:
setarch "$(uname -m)" -R strace -i ./stdout.out |& grep '\] write'
Sample output:
[00007ffff7b00870] write(1, "a\nb\n", 4a
Now copy that address and paste it into:
setarch "$(uname -m)" -R strace -i ./stdout.out |& grep -E '\] write\(1, "a'
The advantage of this method is that you can use the usual UNIX tools to manipulate strace output, and it does not require deep GDB-fu.
Explanation:
-i makes strace output RIP
setarch -R disables ASLR for the process via the personality system call (see: How to debug with strace -i when everytime address is different). GDB already does that by default, so there's no need to do it again.
Anthony's answer is awesome. Following his answer, I tried out another solution on Windows (x86 and x64). I know this question is about GDB on Linux; however, I think this solution is a supplement for this kind of question, and it might be helpful for others.
Solution on Windows
In Linux, a call to printf results in a call to the API write, and because Linux is an open source OS, we can debug inside that API. The API is different on Windows, however: it provides its own API, WriteFile. Because Windows is a commercial, non-open-source OS, breakpoints cannot simply be added inside its APIs.
But some of the source code of the VC runtime is shipped with Visual Studio, so we can find where the WriteFile API is finally called and set a breakpoint there. After debugging the sample code, I found that printf results in a call to _write_nolock, in which WriteFile is called. The function is located in:
your_VS_folder\VC\crt\src\write.c
The prototype is:
/* now define version that doesn't lock/unlock, validate fh */
int __cdecl _write_nolock (
int fh,
const void *buf,
unsigned cnt
)
Compared to the write API on Linux:
#include <unistd.h>
ssize_t write(int fd, const void *buf, size_t count);
They have exactly the same parameters. So we can just set a conditional breakpoint in _write_nolock, referring to the solutions above, with only some differences in detail.
Portable Solution for Both Win32 and x64
Luckily, on Visual Studio we can use the names of the parameters directly when setting a breakpoint condition, on both Win32 and x64. So it becomes very easy to write the condition:
Add a breakpoint in _write_nolock
NOTICE: There is a small difference between Win32 and x64. On Win32 we can simply use the function name to set the location of the breakpoint. However, that won't work on x64, because at the entry point of the function the parameters are not yet initialized, so we cannot use the parameter names in the breakpoint condition.
Fortunately there is a workaround: set the breakpoint at a location inside the function rather than on the function name, e.g. on the first line of the function, where the parameters are already initialized. (That is, use filename + line number to set the breakpoint, or open the file directly and set the breakpoint on the first line of the function body rather than at its entry.)
Restrict the condition:
fh == 1 && strstr((char *)buf, "Hello World") != 0
NOTICE: there is still a problem here. I tested two different ways of writing to stdout: printf and std::cout. printf writes the whole string to _write_nolock at once. std::cout, however, passes the string to _write_nolock character by character, which means the API is called strlen("your string") times. In that case, the condition will never be triggered.
Win32 Solution
Of course we could use the same methods as Anthony provided: set the condition of breakpoints by registers.
For a Win32 program, the solution is almost the same as with GDB on Linux. You might notice the __cdecl decoration in the prototype of _write_nolock. This calling convention means:
Argument-passing order is Right to left.
Calling function pops the arguments from the stack.
Name-decoration convention: Underscore character (_) is prefixed to names.
No case translation performed.
There is a description here. There is also an example on Microsoft's website that shows the registers and the stack; the result can be found here.
Then it is very easy to set the condition of breakpoints:
Set a breakpoint in _write_nolock.
Restrict the condition:
*(int *)($esp + 4) == 1 && strstr(*(char **)($esp + 8), "Hello") != 0
It is the same method as on the Linux. The first condition is to make sure the string is written to stdout. The second one is to match the specified string.
x64 Solution
Two important changes from x86 to x64 are the 64-bit addressing capability and a flat set of sixteen 64-bit general-purpose registers. With the increased number of registers, x64 uses only the __fastcall calling convention: the first four integer arguments are passed in registers, and arguments five and higher are passed on the stack.
You can refer to the Parameter Passing page on Microsoft's website. The four registers (in order, left to right) are RCX, RDX, R8 and R9. So it is very easy to restrict the condition:
Set a breakpoint in _write_nolock.
NOTICE: unlike the portable solution above, here we can set the breakpoint on the function itself rather than on its first line, because all the registers are already initialized at the entry point.
Restrict condition:
$rcx == 1 && strstr((char *)$rdx, "Hello") != 0
The reason we need the cast and dereference on esp in the Win32 case is that $esp accesses the ESP register, which for all intents and purposes is a void*, whereas here the registers hold the parameter values directly, so no extra level of indirection is needed.
Post
I also enjoyed this question very much, so I translated Anthony's post into Chinese and put my answer in it as a supplement. The post can be found here. Thanks to #anthony-arnold for his permission.
Anthony's answer is very interesting and it definitely gives some results.
Yet, I think it might miss the buffering of printf.
Indeed on Difference between write() and printf(), you can read that: "printf doesn't necessarily call write every time. Rather, printf buffers its output."
STDIO WRAPPER SOLUTION
Hence I came up with another solution that consists of creating a helper library that you can preload to wrap the printf-like functions. You can then set breakpoints on this library's source and backtrace to get information about the program you are debugging.
It works on Linux and targets libc; I don't know about C++ iostreams, and if the program uses write directly, the wrapper will miss it.
Here is the wrapper to hijack the printf (io_helper.c).
#include <string.h>
#include <stdio.h>
#include <stdarg.h>

#define MAX_SIZE 0xFFFF

int printf(const char *format, ...)
{
    char target_str[MAX_SIZE];
    int i = 0;
    va_list args1, args2;

    /* RESOLVE THE STRING FORMATTING */
    va_start(args1, format);
    vsnprintf(target_str, MAX_SIZE, format, args1); /* bounded, so target_str cannot overflow */
    va_end(args1);

    if (strstr(target_str, "Hello World")) { /* SEARCH FOR YOUR STRING */
        i++; /* BREAK HERE */
    }

    /* OUTPUT THE STRING AS THE PROGRAM INTENDED TO */
    va_start(args2, format);
    vprintf(format, args2);
    va_end(args2);
    return 0;
}

int puts(const char *s)
{
    return printf("%s\n", s);
}
I added puts because gcc tends to replace printf with puts when it can, so I force it back to printf.
Next you just compile it to a shared library.
gcc -shared -fPIC io_helper.c -o libio_helper.so -g
And you load it before running gdb.
LD_PRELOAD=$PWD/libio_helper.so gdb test
Where test is the program you are debugging.
Then you can break with break io_helper.c:19 because you compiled the library with -g.
EXPLANATIONS
Our luck here is that printf and the others like fprintf, sprintf, ... are just there to resolve the variadic arguments and then call their 'v' equivalent (vprintf in our case). Doing that job is easy, so we can do it ourselves and leave the real work to libc via the 'v' function. To get the variadic arguments of printf, we just have to use va_start and va_end.
The main advantage of this method is that you are sure that, when you break, you are in the portion of the program that outputs your target string and not looking at a leftover in a buffer. Also, you make no assumptions about the hardware. The drawback is that you are assuming the program uses the libc stdio functions for its output.

What's a Good Way to Test that Identifiers aren't Being Truncated and Thereby Mixed Up?

In C++ class today, we discussed the maximum possible length of identifiers, and how the compiler will eventually stop treating variables as different, after a certain length. (My professor seems to have implied that really long identifiers are truncated.) I posted another question earlier, hoping to see if the limit is defined somewhere. My question here is a little different. Suppose I wanted to test either a practical or enforced limit on identifier name lengths. How would I go about doing so? Here's what I'm thinking of doing, but somehow it seems to be too simple.
Step 1: Generate at least two variables with really long names and print them to the console. If the identifier names are really that unlimited, I am not going to waste time typing them. My code should do it for me.
Step 2: Attempt to perform some operations with the variables, such as comparing them or doing arithmetic with them. If the compiler stops differentiating, then in theory certain arithmetic will break, such as x/(reallyLongA-reallyLongB): reallyLongA and reallyLongB would be so long that the compiler treats them as the same thing, at which point the division becomes a division by zero, which should crash and burn horribly.
Am I approaching this correctly? Will I run out of memory before I "break" the compiler or "runtime"?
I don't think you need to even generate any operations on the variables.
The following code will generate a redefinition error at compile time:
int name;
int name;
I'd expect you'd get the same error with
int namewithlastsignificantcharacterhere_abc;
int namewithlastsignificantcharacterhere_123;
I'd use a scripting language to generate successively longer names until you get one that breaks. Here's a Ruby one-liner:
C:\>ruby -e "(1..2048).each{|i| puts \"int #{'variable'*i}#{i};\"}" > var.txt
When I #include var.txt in a c file, and compile with VS2008, I get the error
"1>c:\code\quiz\var.txt(512) : fatal error C1064: compiler limit : token overflowed internal buffer"
and 512*8 chars is the 4096 that JRL cited.
Your professor is wrong. § 2.11/1 of the C++ standard says: "All characters are significant". Certainly compilers may impose a limit on the allowed length, as noted in your other question. That doesn't mean they can ignore characters after that.
He's probably confusing C and C++. The two languages have similar but not identical rules. Historically, C had limits as low as six significant characters.
As for your test, there's a far simpler way to test your hypothesis. Note that
int a;
int a;
is illegal, because you define the same identifier twice. Now if ReallyLongNameA and ReallyLongNameB would differ only in non-significant characters, then
int ReallyLongNameA;
int ReallyLongNameB;
would also be a compile-time error, because both would declare the same variable. You don't need to run the code. You can just generate test.cpp with those two lines and try to compile it. So, write a small test program that creates increasingly long identifier names, writes them to test.cpp, and calls system("path/to/compiler -compileroptions test.cpp"); to see if it compiles.
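A rough sketch of such a generator, assuming a g++ command line (adjust the compiler invocation and the length range for your environment):
#include <cstdlib>
#include <fstream>
#include <iostream>
#include <string>

int main() {
    for (std::size_t len = 64; len <= 65536; len *= 2) {
        std::string base(len, 'a');
        std::ofstream out("test.cpp");
        out << "int " << base << "A;\n"
            << "int " << base << "B;\n"
            << "int main() { return 0; }\n";
        out.close();

        // A redefinition error means the two names were treated as identical;
        // a compiler-limit error (e.g. C1064) will also show up as a failure.
        int rc = std::system("g++ -c test.cpp");
        std::cout << "length " << (len + 1) << ": "
                  << (rc == 0 ? "still distinct" : "collision or limit hit") << "\n";
    }
}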
For Windows C++:
Only the first 2048 characters of Microsoft C++ identifiers are
significant. Names for user-defined types are "decorated" by the
compiler to preserve type information. The resultant name, including
the type information, cannot be longer than 2048 characters.
Thus it seems you could do a pretty simple test using an MS compiler, at least.
Edit:
Didn't do extensive testing, but on my Visual Studio Pro 2008 at least, a variable named aaaa... (total length 4095 characters) compiles, and beyond that (>= 4096) you get Fatal Error C1064: compiler limit: token overflowed internal buffer.
I would assume that if it still works after the length reaches some ridiculous size (like > 1 MB), the compiler can probably handle arbitrarily sized identifiers.
Of course there's no sure way to tell, as it is entirely possible for the identifier length limit to exceed the amount of memory you have (a limit of 2^32 - 1 is entirely possible).

Optimal virtual machine/byte-code interpreter loop

My project has a VM that executes byte-code compiled from a domain-specific language. I'm looking at ways to improve the execution time of the byte-code. As a first step I'd like to see if there is a way to simply improve the byte-code interpreter before I venture into machine-code compilation.
The main loop of the interpreter looks like this:
while (true)
{
    uint8_t cmd = *code++;
    switch (cmd)
    {
        case op_1: ...; break;
        ...
    }
}
QUESTION: Is there a faster way to implement this loop without resorting to assembler?
The one option I see is GCC-specific: computed goto with label addresses. Rather than a break at the end of each case, I could jump directly to the next instruction. I had hoped the optimizer would do this for me, but looking at the disassembly it apparently doesn't: there is a repeated constant jump at the end of most op-codes.
If relevant, the VM is a simple register-based machine with floating-point and integer registers (8 of each). There is no stack, only a global heap (the language is not that complicated).
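For reference, the GCC labels-as-values dispatch mentioned above looks roughly like this; the opcodes and the accumulator are invented for illustration, and the && / goto * syntax is a GCC/Clang extension, not standard C:
#include <stdint.h>
#include <stdio.h>

static int run(const uint8_t *code) {
    /* one label address per opcode, indexed by the opcode value */
    static void *dispatch[] = { &&do_halt, &&do_inc, &&do_print };
    int acc = 0;

#define NEXT() goto *dispatch[*code++]
    NEXT();

do_inc:
    acc++;
    NEXT();
do_print:
    printf("acc = %d\n", acc);
    NEXT();
do_halt:
#undef NEXT
    return acc;
}

int main(void) {
    const uint8_t program[] = { 1, 1, 2, 0 };   /* inc, inc, print, halt */
    return run(program) == 2 ? 0 : 1;
}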
One very easy optimisation: instead of
switch/case/case/case/case/case,
just define an array of function pointers, where each function processes a specified command (or a couple of commands, in which case you could set several entries in the array to the same function and have the function itself check the exact code), and instead of
switch(cmd)
just do a
array[cmd]()
This assumes you don't have too many commands. Also, add some checking if you won't define all the possible commands (maybe you only have 300 commands, but you have to use 2 bytes to represent them; instead of defining an array with 65536 entries, just check whether the command is less than 301 and, if it's not, skip the lookup).
If you won't do that, at least sort the cases so that the most used commands are at the beginning of the switch statement.
Otherwise you could look into hash tables, but I assume you don't have that many commands, in which case the overhead of a hash function would probably cost you more than the switch does. (Or use a VERY simple hash function.)
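A rough sketch of that table dispatch; the VM struct and the opcode handlers are placeholders, not the poster's code:
#include <stdint.h>
#include <stdio.h>

typedef struct { int reg[8]; const uint8_t *pc; int running; } VM;
typedef void (*op_fn)(VM *);

static void op_halt(VM *vm) { vm->running = 0; }
static void op_inc (VM *vm) { vm->reg[0]++; }
static void op_bad (VM *vm) { fprintf(stderr, "bad opcode\n"); vm->running = 0; }

static void run(VM *vm) {
    op_fn table[256];                           /* any byte value is a safe index */
    for (int i = 0; i < 256; i++) table[i] = op_bad;
    table[0] = op_halt;
    table[1] = op_inc;

    vm->running = 1;
    while (vm->running)
        table[*vm->pc++](vm);                   /* replaces the switch dispatch */
}

int main(void) {
    const uint8_t program[] = { 1, 1, 1, 0 };   /* inc, inc, inc, halt */
    VM vm = { {0}, program, 0 };
    run(&vm);
    printf("reg[0] = %d\n", vm.reg[0]);         /* prints 3 */
    return 0;
}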
What's the architecture? You may get a speed-up with word-aligned opcodes, but it'll blow out your code size, which means you'll have to balance it against the cost of a cache miss.
A few obvious optimizations I see:
If you don't use cmd anywhere other than in the switch(), use the pointer indirection directly: switch( *code++ ). For a long-running while(true) loop, this can help a little.
In the switch(), you can use continue instead of break. When continue is used inside an if/else or switch within a loop, the compiler knows that execution has to jump back to the outer loop; the same is not true for break (with respect to switch).
Hope this helps.

Is there a way to figure out the top callers of a C function?

Say I have a function that is called a LOT from many different places. I would like to find out who calls this function the most, for example the top 5 callers, or whoever calls it more than N times.
I am using AS3 Linux, gcc 3.4.
For now I just put a breakpoint and then stop there after every 300 times, thus brute-forcing it...
Does anyone know of tools that can help me?
Thanks
Compile with the -pg option, run the program for a while, and then use gprof. Running a program compiled with -pg generates a gmon.out file with the execution profile; gprof can read this file and present it in readable form.
I wrote a call-logging example just for fun. A macro replaces the function call with an instrumented one.
#include <stdio.h>

int funcA( int a, int b ){ return a+b; }

// instrumentation
void call_log(const char*file,const char*function,const int line,const char*args){
    printf("file:%s line: %i function: %s args: %s\n",file,line,function,args);
}

#define funcA(...) \
    (call_log(__FILE__, __FUNCTION__, __LINE__, "" #__VA_ARGS__), funcA(__VA_ARGS__))

// testing
void funcB(void){
    funcA(7,8);
}

int main(void){
    int x = funcA(1,2)+
            funcA(3,4);
    printf( "x: %i (==10)\n", x );
    funcA(5,6);
    funcB();
}
Output:
file:main.c line: 22 function: main args: 1,2
file:main.c line: 24 function: main args: 3,4
x: 10 (==10)
file:main.c line: 28 function: main args: 5,6
file:main.c line: 17 function: funcB args: 7,8
Profiling helps.
Since you mentioned oprofile in another comment, I'll say that oprofile supports generating callgraphs on profiled programs.
See http://oprofile.sourceforge.net/doc/opreport.html#opreport-callgraph for more details.
It's worth noting this is definitely not as clear as the caller profile you may get from gprof or another profiler, since the numbers it reports are the number of times oprofile collected a sample in which X was the caller of a given function, not the number of times X called that function. But this should be sufficient to figure out the top callers of a given function.
A somewhat cumbersome method, but not requiring additional tools:
#define COUNTED_CALL( fn, ... )                                  \
    ( fprintf( call_log_fp, "%s->%s\n", __FUNCTION__, #fn ),     \
      (fn)(__VA_ARGS__) ) /* comma expression, so the call's return value is preserved */
Then all calls written like:
int input_available = COUNTED_CALL( scanf, "%s", &instring ) ;
will be logged to the file associated with call_log_fp (a global FILE* which you must have initialised). The log for the above would look like:
main->scanf
You can then process that log file to extract the data you need. You could even write your own code to do the instrumentation which would make it perhaps less cumbersome.
Might be a bit ambiguous for C++ class member functions though. I am not sure if there is a __CLASS__ macro.
In addition to the aforementioned gprof profiler, you may also try the gcov code-coverage tool. Information on compiling for and using both should be included in the gcc manual.
Once again, stack sampling to the rescue! Just take a bunch of "stackshots", as many as you like. Discard any samples where your function (call it F) is not somewhere on the stack. (If you're discarding most of them, then F is not a performance problem.)
On each remaining sample, locate the call to F, and see what function (call it G) that call is in. If F is recursive (it appears more than once on the sample) only use the topmost call.
Rank your Gs by how many stacks each one appears in.
If you don't want to do this by hand, you could make a simple tool or script. You don't need a zillion samples. 20 or so will give you reasonably good information.
By the way, if what you're really trying to do is find performance problems, you don't actually need to do all that discarding and ranking. In fact - don't discard the exact locations of the call instruction inside each G. Those can actually tell you a good bit more than just the fact that they were somewhere inside G.
P.S. This is all based on the assumption that when you say "calls it the most" you mean "spends the most wall clock time in calling it", not "calls it the greatest number of times". If you are interested in performance, fraction of wall clock time is more useful than invocation count.