Using the time.h header, I am measuring the execution time of cbrt() as 2 nanoseconds per call (compiling with gcc in a Linux terminal) and 44 nanoseconds per call (compiling with g++ in an Ubuntu terminal). Can anyone suggest another method to measure the execution time of the math.h library functions?
Below is the code:
#include <time.h>
#include <stdio.h>
#include <math.h>

int main(void)
{
    time_t begin, end;                      // time_t is a type for storing calendar time values
    time(&begin);                           // note the time before execution
    for (int i = 0; i < 1000000000; i++)    // loop 10^9 times so seconds of total time equal nanoseconds per call
    {
        cbrt(9999999);                      // call the cube root function from the math library
    }
    time(&end);                             // note the time after execution
    double difference = difftime(end, begin);
    printf("time taken for function() %.2lf in Nanoseconds.\n", difference);
    printf(" cube root is :%f \t", cbrt(9999999));
    return 0;
}
OUTPUT:
by using gcc:
time taken for function() 2.00 in Nanoseconds.
cube root is :215.443462
by using g++:
time taken for function() 44.00 in Nanoseconds.
cube root is :215.443462
Linux terminal result
Give or take the length of the prompt:
$ g++ t1.c
$ ./a.out
time taken for function() 44.00 in Nanoseconds.
cube root is :215.443462
$ gcc t1.c
$ ./a.out
time taken for function() 2.00 in Nanoseconds.
cube root is :215.443462
$
How to measure the execution time of C math.h library functions?
C compilers are allowed to analyze well-known standard library functions and replace a fixed call like cbrt(9999999); with the constant 215.443462.... Further, since dropping the call inside the loop does not affect the observable behavior of the code, the entire loop may be optimized out.
Use of volatile prevents much of this, because the compiler can no longer assume that replacing or removing the call has no effect.
for (int i = 0; i < 1000000000; i++) {
    // cbrt(9999999);
    volatile double x = 9999999.0;  // volatile forces a real store and load,
    volatile double y = cbrt(x);    // so the call cannot be folded or removed
}
The granularity of time() is often only 1 second, so if the billion loops finish in only a few seconds, consider using more loop iterations.
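Alternatively, a finer-grained clock removes the need for huge iteration counts. Below is a minimal sketch using the POSIX clock_gettime() with CLOCK_MONOTONIC (this assumes a POSIX system; link with -lm, and with -lrt on older glibc):

#include <time.h>
#include <stdio.h>
#include <math.h>

int main(void)
{
    struct timespec start, stop;
    int n = 1000000;                         /* far fewer iterations needed */
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int i = 0; i < n; i++) {
        volatile double x = 9999999.0;       /* volatile keeps the call alive */
        volatile double y = cbrt(x);
        (void)y;
    }
    clock_gettime(CLOCK_MONOTONIC, &stop);
    double ns = (stop.tv_sec - start.tv_sec) * 1e9
              + (stop.tv_nsec - start.tv_nsec);
    printf("about %.1f ns per call\n", ns / n);
    return 0;
}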
Code could use the following to factor out the loop overhead:
time_t begin, middle, end;
time(&begin);
for (int i = 0; i < 1000000000; i++) {   // loop with the call replaced by a plain copy
    volatile double x = 9999999.0;
    volatile double y = x;
}
time(&middle);
for (int i = 0; i < 1000000000; i++) {   // loop with the real call
    volatile double x = 9999999.0;
    volatile double y = cbrt(x);
}
time(&end);
// (time of loop with call) minus (time of empty loop)
double difference = difftime(end, middle) - difftime(middle, begin);
Timing code is an art, and one part of the art is making sure that the compiler doesn't optimize your code away. For standard library functions, the compiler may well be aware of what the function does and be able to evaluate it to a constant at compile time. In your example, the call cbrt(9999999); gives two opportunities for optimization. First, the value from cbrt() can be evaluated at compile time because the argument is a constant. Second, the return value is not used, and the standard function has no side effects, so the compiler can drop the call altogether. You can avoid those problems by capturing the result (for example, by evaluating the sum of the cube roots from 0 to one billion minus one, and printing that value after the timing code).
tm97.c
When I compiled your code, shorn of comments, I got:
$ cat tm97.c
#include <time.h>
#include <stdio.h>
#include <math.h>
int main(void)
{
time_t begin, end;
time(&begin);
for (int i = 0; i < 1000000000; i++)
{
cbrt(9999999);
}
time(&end);
double difference = difftime(end, begin);
printf("time taken for function() %.2lf in Nanoseconds.\n", difference );
printf(" cube root is :%f \t", cbrt(9999999));
return 0;
}
$ make tm97
gcc -O3 -g -std=c11 -Wall -Wextra -Werror -Wmissing-prototypes -Wstrict-prototypes tm97.c -o tm97 -L../lib -lsoq
tm97.c: In function ‘main’:
tm97.c:11:9: error: statement with no effect [-Werror=unused-value]
11 | cbrt(9999999);
| ^~~~
cc1: all warnings being treated as errors
rmk: error code 1
$
I'm using GCC 9.3.0 on a 2017 MacBook Pro running macOS Mojave 10.14.6 with XCode 11.3.1 (11C504). (XCode 11.4 requires Catalina 10.15.2, but work hasn't got around to organizing support for that yet.) Interestingly, when the same code is compiled by g++, it compiles without warnings (errors):
$ ln -s tm97.c tm89.cpp
$ make tm89 SXXFLAGS=-std=c++17 CXX=g++
g++ -O3 -g -I../inc -std=c++17 -Wall -Wextra -Werror -L../lib tm89.cpp -lsoq -o tm89
$
I routinely use some timing code that is available in my SOQ (Stack Overflow Questions) repository on GitHub as files timer.c and timer.h in the src/libsoq sub-directory. The code is only compiled as C code in my library, so I created a simple wrapper header, timer2.h, so that the programs below could use #include "timer2.h" and it would work OK with both C and C++ compilations:
#ifndef TIMER2_H_INCLUDED
#define TIMER2_H_INCLUDED
#ifdef __cplusplus
extern "C" {
#endif
#include "timer.h"
#ifdef __cplusplus
}
#endif
#endif /* TIMER2_H_INCLUDED */
tm29.cpp and tm31.c
This code uses the sqrt() function for testing, accumulating the sum of the square roots. It wraps your timing code with the timing code from timer.h/timer.c: the type Clock and the functions clk_init(), clk_start(), clk_stop(), and clk_elapsed_us(), the last of which evaluates the elapsed time in microseconds between when the clock was started and when it was last stopped.
The source code can be compiled by either a C compiler or a C++ compiler.
#include <time.h>
#include <stdio.h>
#include <math.h>
#include "timer2.h"

int main(void)
{
    time_t begin, end;
    double sum = 0.0;
    int i;
    Clock clk;
    clk_init(&clk);
    clk_start(&clk);
    time(&begin);
    for (i = 0; i < 1000000000; i++)
    {
        sum += sqrt(i);
    }
    time(&end);
    clk_stop(&clk);
    double difference = difftime(end, begin);
    char buffer[32];
    printf("Time taken for sqrt() is %.2lf nanoseconds (%s ns).\n",
           difference, clk_elapsed_us(&clk, buffer, sizeof(buffer)));
    printf("Sum of square roots from 0 to %d is: %f\n", i, sum);
    return 0;
}
tm41.c and tm43.cpp
This code is almost identical to the previous code, but the tested function is the cbrt() (cube root) function.
#include <time.h>
#include <stdio.h>
#include <math.h>
#include "timer2.h"

int main(void)
{
    time_t begin, end;
    double sum = 0.0;
    int i;
    Clock clk;
    clk_init(&clk);
    clk_start(&clk);
    time(&begin);
    for (i = 0; i < 1000000000; i++)
    {
        sum += cbrt(i);
    }
    time(&end);
    clk_stop(&clk);
    double difference = difftime(end, begin);
    char buffer[32];
    printf("Time taken for cbrt() is %.2lf nanoseconds (%s ns).\n",
           difference, clk_elapsed_us(&clk, buffer, sizeof(buffer)));
    printf("Sum of cube roots from 0 to %d is: %f\n", i, sum);
    return 0;
}
tm59.c and tm61.c
This code uses fabs() instead of either sqrt() or cbrt(). It's still a function call, but one that might be inlined. The conversion from int to double is written explicitly; without that cast, GCC complains that the code should be using the integer abs() function instead.
#include <time.h>
#include <stdio.h>
#include <math.h>
#include "timer2.h"

int main(void)
{
    time_t begin, end;
    double sum = 0.0;
    int i;
    Clock clk;
    clk_init(&clk);
    clk_start(&clk);
    time(&begin);
    for (i = 0; i < 1000000000; i++)
    {
        sum += fabs((double)i);
    }
    time(&end);
    clk_stop(&clk);
    double difference = difftime(end, begin);
    char buffer[32];
    printf("Time taken for fabs() is %.2lf nanoseconds (%s ns).\n",
           difference, clk_elapsed_us(&clk, buffer, sizeof(buffer)));
    printf("Sum of absolute values from 0 to %d is: %f\n", i, sum);
    return 0;
}
tm73.cpp
This file uses the original code from the question, plus my timing wrapper code. The C version doesn't compile; the C++ version does:
#include <time.h>
#include <stdio.h>
#include <math.h>
#include "timer2.h"

int main(void)
{
    time_t begin, end;
    Clock clk;
    clk_init(&clk);
    clk_start(&clk);
    time(&begin);
    for (int i = 0; i < 1000000000; i++)
    {
        cbrt(9999999);
    }
    time(&end);
    clk_stop(&clk);
    double difference = difftime(end, begin);
    char buffer[32];
    printf("Time taken for cbrt() is %.2lf nanoseconds (%s ns).\n",
           difference, clk_elapsed_us(&clk, buffer, sizeof(buffer)));
    printf("Cube root is: %f\n", cbrt(9999999));
    return 0;
}
Timing
Using a command timecmd, which reports the start time, stop time, and PID of the programs it runs (it's a variant on the theme of the time command), as well as the timing code built into the various programs, I got the following results. (rmk is just an alternative implementation of make.)
$ for prog in tm29 tm31 tm41 tm43 tm59 tm61 tm73
> do rmk $prog && timecmd -ur -- $prog
> done
g++ -O3 -g -I../inc -std=c++11 -Wall -Wextra -Werror tm29.cpp -o tm29 -L../lib -lsoq
2020-03-28 08:47:50.040227 [PID 19076] tm29
Time taken for sqrt() is 1.00 nanoseconds (1.700296 ns).
Sum of square roots from 0 to 1000000000 is: 21081851051977.781250
2020-03-28 08:47:51.747494 [PID 19076; status 0x0000] - 1.707267s - tm29
gcc -O3 -g -I../inc -std=c11 -Wall -Wextra -Werror -Wmissing-prototypes -Wstrict-prototypes tm31.c -o tm31 -L../lib -lsoq
2020-03-28 08:47:52.056021 [PID 19088] tm31
Time taken for sqrt() is 1.00 nanoseconds (1.679867 ns).
Sum of square roots from 0 to 1000000000 is: 21081851051977.781250
2020-03-28 08:47:53.742383 [PID 19088; status 0x0000] - 1.686362s - tm31
gcc -O3 -g -I../inc -std=c11 -Wall -Wextra -Werror -Wmissing-prototypes -Wstrict-prototypes tm41.c -o tm41 -L../lib -lsoq
2020-03-28 08:47:53.908285 [PID 19099] tm41
Time taken for cbrt() is 7.00 nanoseconds (6.697999 ns).
Sum of cube roots from 0 to 1000000000 is: 749999999499.628418
2020-03-28 08:48:00.613357 [PID 19099; status 0x0000] - 6.705072s - tm41
g++ -O3 -g -I../inc -std=c++11 -Wall -Wextra -Werror tm43.cpp -o tm43 -L../lib -lsoq
2020-03-28 08:48:00.817975 [PID 19110] tm43
Time taken for cbrt() is 7.00 nanoseconds (6.614539 ns).
Sum of cube roots from 0 to 1000000000 is: 749999999499.628418
2020-03-28 08:48:07.438298 [PID 19110; status 0x0000] - 6.620323s - tm43
gcc -O3 -g -I../inc -std=c11 -Wall -Wextra -Werror -Wmissing-prototypes -Wstrict-prototypes tm59.c -o tm59 -L../lib -lsoq
2020-03-28 08:48:07.598344 [PID 19121] tm59
Time taken for fabs() is 1.00 nanoseconds (1.114822 ns).
Sum of absolute values from 0 to 1000000000 is: 499999999067108992.000000
2020-03-28 08:48:08.718672 [PID 19121; status 0x0000] - 1.120328s - tm59
g++ -O3 -g -I../inc -std=c++11 -Wall -Wextra -Werror tm61.cpp -o tm61 -L../lib -lsoq
2020-03-28 08:48:08.918745 [PID 19132] tm61
Time taken for fabs() is 2.00 nanoseconds (1.117780 ns).
Sum of absolute values from 0 to 1000000000 is: 499999999067108992.000000
2020-03-28 08:48:10.042134 [PID 19132; status 0x0000] - 1.123389s - tm61
g++ -O3 -g -I../inc -std=c++11 -Wall -Wextra -Werror tm73.cpp -o tm73 -L../lib -lsoq
2020-03-28 08:48:10.236899 [PID 19143] tm73
Time taken for cbrt() is 0.00 nanoseconds (0.000004 ns).
Cube root is: 215.443462
2020-03-28 08:48:10.242322 [PID 19143; status 0x0000] - 0.005423s - tm73
$
I've run the programs many times; the times above are representative of what I got each time. There are a number of conclusions that can be drawn:
sqrt() (1.7 ns) is quicker than cbrt() (6.7 ns).
fabs() (1.1 ns) is quicker than sqrt() (1.7 ns).
However, fabs() gives a moderate approximation to the time taken with loop overhead and conversion from int to double.
When the result of cbrt() is not used, the compiler eliminates the loop.
When compiled with the C++ compiler, the code from the question loses the loop altogether, leaving only the calls to time() to be measured. The result printed by clk_elapsed_us() is the time taken to execute the code between clk_start() and clk_stop(), in seconds with microsecond resolution, so 0.000004 is 4 microseconds of elapsed time. The value is marked in ns because when the loop executes one billion times, the elapsed time in seconds also represents the time in nanoseconds per iteration: there are a billion nanoseconds in a second.
The times reported by timecmd are consistent with the times reported by the programs; the overhead of starting the process (fork() and exec()) and the I/O performed by the process are included in the times reported by timecmd.
Although not shown, the timings with clang and clang++ (instead of GCC 9.3.0) are very comparable, though the cbrt() code takes about 7.5 ns per iteration instead of 6.7 ns. The timing differences for the others are basically noise.
The number suffixes are all 2-digit primes. They have no other significance except to keep the different programs separate.
As Jonathan Leffler commented, the compiler can optimize your C/C++ code. If the code just loops from 0 to 1000 without doing anything with the counter i (that is, without printing it or using the intermediate values in any other operation, as indexes, etc.), the compiler may not even generate the assembly code that corresponds to that loop. Arithmetic operations may even be pre-computed. For the code below:
int foo(int x) {
    return x * 5;
}

int main() {
    int x = 3;
    int y = foo(x);
    ...
    ...
}
it is not surprising for the compiler to generate just two lines of assembly code for function foo (the compiler may even bypass calling the function and inline the pre-computed result):
mov $15, %eax    ; the compiler does not bother multiplying 5 by 3,
                 ; it just moves the pre-computed 15 into the register
ret              ; and then returns
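To see this yourself, a quick check (assuming the snippet is saved as foo.c and GCC or Clang is available) is to dump the generated assembly and look at what remains of foo:

gcc -O2 -S foo.c    # writes the generated assembly to foo.s
cat foo.s           # foo should be little more than: mov $15, %eax / ret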
Q: Is it possible to improve the IO of this code with LLVM Clang under OS X?
test_io.cpp:
#include <iostream>
#include <string>

constexpr int SIZE = 1000*1000;

int main(int argc, const char * argv[]) {
    std::ios_base::sync_with_stdio(false);
    std::cin.tie(nullptr);
    std::string command(argv[1]);
    if (command == "gen") {
        for (int i = 0; i < SIZE; ++i) {
            std::cout << 1000*1000*1000 << " ";
        }
    } else if (command == "read") {
        int x;
        for (int i = 0; i < SIZE; ++i) {
            std::cin >> x;
        }
    }
}
Compile:
clang++ -x c++ -lstdc++ -std=c++11 -O2 test_io.cpp -o test_io
Benchmark:
> time ./test_io gen | ./test_io read
real 0m2.961s
user 0m3.675s
sys 0m0.012s
Apart from the sad fact that reading a 10 MB file costs 3 seconds, it's much slower than with g++ (installed via Homebrew):
> gcc-6 -x c++ -lstdc++ -std=c++11 -O2 test_io.cpp -o test_io
> time ./test_io gen | ./test_io read
real 0m0.149s
user 0m0.167s
sys 0m0.040s
My clang version is Apple LLVM version 7.0.0 (clang-700.0.72). Clang builds installed from Homebrew (3.7 and 3.8) also produce slow IO. Clang installed on Ubuntu (3.8) generates fast IO. Apple LLVM version 8.0.0 generates slow IO as well (two people have reported this).
I also dtrussed it a bit (sudo dtruss -c "./test_io gen | ./test_io read") and found that the clang version makes 2686 write_nocancel syscalls, while the gcc version makes 2079 writev syscalls, which probably points to the root of the problem.
The issue is in libc++, which does not implement sync_with_stdio.
Your command line clang++ -x c++ -lstdc++ -std=c++11 -O2 test_io.cpp -o test_io does not use libstdc++; it uses libc++. To force the use of libstdc++ you need -stdlib=libstdc++.
Minimal example if you have the input file ready:
#include <iostream>

constexpr int SIZE = 1000*1000;

int main() {
    std::ios_base::sync_with_stdio(false);
    int x;
    for (int i = 0; i < SIZE; ++i) {
        std::cin >> x;
    }
}
Timings:
$ clang++ test_io.cpp -o test -O2 -std=c++11
$ time ./test read < input
real 0m2.802s
user 0m2.780s
sys 0m0.015s
$ clang++ test_io.cpp -o test -O2 -std=c++11 -stdlib=libstdc++
clang: warning: libstdc++ is deprecated; move to libc++
$ time ./test read < input
real 0m0.185s
user 0m0.169s
sys 0m0.012s
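If depending on the deprecated libstdc++ is unappealing (note the clang warning above), another workaround is to bypass iostreams in the hot loop and read with the C stdio functions, which are not affected by libc++'s sync_with_stdio behavior. Here is a minimal sketch of the read loop, as an illustration rather than the definitive fix:

#include <cstdio>

constexpr int SIZE = 1000*1000;

int main() {
    int x;
    for (int i = 0; i < SIZE; ++i) {
        if (std::scanf("%d", &x) != 1)   // stop early on EOF or malformed input
            break;
    }
}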
I have a program that does independent computations on a bunch of images. This seems like a good use case for OpenMP:
// file: WoodhamData.cpp
#include <omp.h>
...
void WoodhamData::GenerateLightingDirection() {
    int imageWidth = (this->normalMap)->width();
    int imageHeight = (this->normalMap)->height();
    #pragma omp paralell for num_threads(2)
    for (int r = 0; r < RadianceMaps.size(); r++) {
        if (omp_get_thread_num() == 0) {
            std::cout << "threads=" << omp_get_num_threads() << std::endl;
        }
        ...
    }
}
In order to use OpenMP, I add -fopenmp to my makefile, so it outputs:
g++ -g -o test.exe src/test.cpp src/WoodhamData.cpp -pthread -L/usr/X11R6/lib -fopenmp --std=c++0x -lm -lX11 -Ilib/eigen/ -Ilib/CImg
However, I am sad to say, my program reports threads=1 (when run from the terminal as ./test.exe ...).
Does anyone know what might be wrong? This is the slowest part of my program, and it would be great to speed it up a bit.
Your OpenMP directive is wrong: it is "parallel", not "paralell". Compilers typically ignore pragmas they cannot parse, so the typo fails silently and the loop runs on a single thread.
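For reference, the corrected directive, with only the spelling fixed, looks like this:

#pragma omp parallel for num_threads(2)
for (int r = 0; r < RadianceMaps.size(); r++) {
    ...
}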
This question already has answers here:
What's the difference between -O3 and (-O2 + flags that man gcc says -O3 adds to -O2)?
(2 answers)
Closed 8 years ago.
Here's the function I'm looking at:
template <uint8_t Size>
inline uint64_t parseUnsigned(const char (&buf)[Size])
{
    uint64_t val = 0;
    for (uint8_t i = 0; i < Size; ++i)
        if (buf[i] != ' ')
            val = (val * 10) + (buf[i] - '0');
    return val;
}
I have a test harness which passes in all possible numbers with Size=5, left-padded with spaces. I'm using GCC 4.7.2. When I run the program under callgrind after compiling with -O3 I get:
I refs: 7,154,919
When I compile with -O2 I get:
I refs: 9,001,570
OK, so -O3 improves the performance (and I confirmed that some of the improvement comes from the above function, not just the test harness). But I don't want to completely switch from -O2 to -O3, I want to find out which specific option(s) to add. So I consult man g++ to get the list of options it says are added by -O3:
-fgcse-after-reload [enabled]
-finline-functions [enabled]
-fipa-cp-clone [enabled]
-fpredictive-commoning [enabled]
-ftree-loop-distribute-patterns [enabled]
-ftree-vectorize [enabled]
-funswitch-loops [enabled]
So I compile again with -O2 followed by all of the above options. But this gives me even worse performance than plain -O2:
I refs: 9,546,017
I discovered that adding -ftree-vectorize to -O2 is responsible for this performance degradation. But I can't figure out how to match the -O3 performance with any combination of options. How can I do this?
In case you want to try it yourself, here's the test harness (put the above parseUnsigned() definition under the #includes):
#include <cmath>
#include <stdint.h>
#include <cstdio>
#include <cstdlib>
#include <cstring>
template <uint8_t Size>
inline void increment(char (&buf)[Size])
{
    for (uint8_t i = Size - 1; i < 255; --i)
    {
        if (buf[i] == ' ')
        {
            buf[i] = '1';
            break;
        }
        ++buf[i];
        if (buf[i] > '9')
            buf[i] -= 10;
        else
            break;
    }
}

int main()
{
    char str[5];
    memset(str, ' ', sizeof(str));
    unsigned max = std::pow(10, sizeof(str));
    for (unsigned ii = 0; ii < max; ++ii)
    {
        uint64_t result = parseUnsigned(str);
        if (result != ii)
        {
            printf("parseUnsigned(%*s) from %u: %lu\n", sizeof(str), str, ii, result);
            abort();
        }
        increment(str);
    }
}
A very similar question was already answered here: https://stackoverflow.com/a/6454659/483486
I've copied the relevant text underneath.
UPDATE: There are entries about this in the GCC WIKI:
"Is -O1 (-O2,-O3 or -Os) equivalent to individual -foptimization options?"
No. First, individual optimization options (-f*) do not enable optimization, an option -Os or -Ox with x > 0 is required. Second, the -Ox flags enable many optimizations that are not controlled by any individual -f* option. There are no plans to add individual options for controlling all these optimizations.
"What specific flags are enabled by -O1 (-O2, -O3 or -Os)?"
Varies by platform and GCC version. You can get GCC to tell you what flags it enables by doing this:
touch empty.c
gcc -O1 -S -fverbose-asm empty.c
cat empty.s
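A complementary check (assuming a reasonably recent GCC) is to ask the compiler which optimizer flags are in effect at each level and compare the two lists:

gcc -O2 -Q --help=optimizers > opts-O2.txt
gcc -O3 -Q --help=optimizers > opts-O3.txt
diff opts-O2.txt opts-O3.txt

Even then, per the WIKI answer above, matching every listed flag may not reproduce -O3 exactly, because the -Ox levels also gate optimizations that have no individual -f option.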
I am considering the following C++ program:
#include <iostream>
#include <limits>

int main(int argc, char **argv) {
    unsigned int sum = 0;
    for (unsigned int i = 1; i < std::numeric_limits<unsigned int>::max(); ++i) {
        double f = static_cast<double>(i);
        unsigned int t = static_cast<unsigned int>(f);
        sum += (t % 2);
    }
    std::cout << sum << std::endl;
    return 0;
}
I use the gcc / g++ compiler; g++ -v gives gcc version 4.7.2 20130108 [gcc-4_7-branch revision 195012] (SUSE Linux).
I am running openSUSE 12.3 (x86_64) and have an Intel(R) Core(TM) i7-3520M CPU.
Running
g++ -O3 test.C -o test_64_opt
g++ -O0 test.C -o test_64_no_opt
g++ -m32 -O3 test.C -o test_32_opt
g++ -m32 -O0 test.C -o test_32_no_opt
time ./test_64_opt
time ./test_64_no_opt
time ./test_32_opt
time ./test_32_no_opt
yields
2147483647
real 0m4.920s
user 0m4.904s
sys 0m0.001s
2147483647
real 0m16.918s
user 0m16.851s
sys 0m0.019s
2147483647
real 0m37.422s
user 0m37.308s
sys 0m0.000s
2147483647
real 0m57.973s
user 0m57.790s
sys 0m0.011s
Using float instead of double, the optimized 64 bit variant even finishes in 2.4 seconds, while the other running times stay roughly the same. However, with float I get different outputs depending on optimization, probably due to the higher processor-internal precision.
I know 64 bit may have faster math, but we have a factor of 7 (and nearly 15 with floats) here.
I would appreciate an explanation of these running time discrepancies.
The problem isn't 32-bit vs 64-bit; it's the lack of SSE and SSE2. When compiling for 64-bit, gcc assumes it can use SSE and SSE2, since all x86_64 processors have them.
Compile your 32-bit version with -msse -msse2 and the runtime difference nearly disappears.
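For example, reusing the question's file and naming scheme (test.C, test_32_opt):

g++ -m32 -O3 -msse -msse2 test.C -o test_32_opt_sse
time ./test_32_opt_sse

Part of the cost without SSE2 is that a truncating double-to-int conversion on the x87 FPU requires changing the FPU rounding mode around every conversion, whereas SSE2's cvttsd2si instruction truncates directly.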
My benchmark results for completeness:
-O3 -m32 -msse -msse2 4.678s
-O3 (64bit) 4.524s