Measuring execution time of a program - profiling

The man page linked here says that gettimeofday() fills a structure containing the number of seconds and microseconds since the Epoch (please tell me what the Epoch is). With that in mind, I fill one structure before and one after calling sleep() with parameter 3, so the difference between the two should be 3 seconds, or 3000000 microseconds, but I seem to get the wrong output. Where am I going wrong?
#include <iostream>
#include <ctime>
#include <unistd.h>
#include <cstdio>
#include <sys/time.h>
using namespace std;

int main()
{
    struct timeval start, end;
    gettimeofday(&start, NULL);
    sleep(3);
    gettimeofday(&end, NULL);
    cout << start.tv_usec << endl;
    cout << end.tv_usec << endl;
    cout << end.tv_usec - start.tv_usec;
    return 0;
}

Here's the point you're missing:
unsigned long time_in_micros = 1000000 * tv_sec + tv_usec;
To get the elapsed time in microseconds, you need to ADD "seconds" to "microseconds". You can't just ignore the tv_sec field!
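As a minimal sketch of that formula (elapsed_us is a name made up here, not part of the answer), using signed 64-bit math so a smaller end.tv_usec simply borrows from the seconds:
#include <sys/time.h>

/* Combine both timeval fields into one microsecond count.
   Signed arithmetic handles end.tv_usec < start.tv_usec correctly. */
long long elapsed_us(const struct timeval *start, const struct timeval *end)
{
    return (long long)(end->tv_sec - start->tv_sec) * 1000000LL
         + (end->tv_usec - start->tv_usec);
}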
Sample code:
#include <unistd.h>
#include <stdio.h>
#include <time.h>
#include <sys/time.h>

int main(int argc, char *argv[])
{
    struct timeval start, end;

    gettimeofday(&start, NULL);
    sleep(3);
    gettimeofday(&end, NULL);
    printf("start: %ld:%ld\n", start.tv_sec, start.tv_usec);
    printf("end: %ld:%ld\n", end.tv_sec, end.tv_usec);
    printf("diff: %ld:%ld\n",
           end.tv_sec - start.tv_sec, end.tv_usec - start.tv_usec);

    gettimeofday(&start, NULL);
    sleep(10);
    gettimeofday(&end, NULL);
    printf("start: %ld:%ld\n", start.tv_sec, start.tv_usec);
    printf("end: %ld:%ld\n", end.tv_sec, end.tv_usec);
    printf("diff: %ld:%ld\n",
           end.tv_sec - start.tv_sec, end.tv_usec - start.tv_usec);

    return 0;
}
Corresponding output:
start: 1459100430:214715
end: 1459100433:215357
diff: 3:642
start: 1459100433:215394
end: 1459100443:217024
diff: 10:1630
gettimeofday() links:
http://linux.die.net/man/2/gettimeofday
https://blog.habets.se/2010/09/gettimeofday-should-never-be-used-to-measure-time
Measure time in Linux - time vs clock vs getrusage vs clock_gettime vs gettimeofday vs timespec_get?

Related

clock_gettime: identifier not found in Visual Studio in Windows 10

I am trying to run this program to measure the time taken by a function with the help of clock_gettime() in Visual Studio 2015.
I followed the reference here: https://www.cs.rutgers.edu/~pxk/416/notes/c-tutorials/gettime.html
#include <iostream>
#include <stdio.h>  /* for printf */
#include <stdint.h> /* for uint64_t definition */
#include <stdlib.h> /* for exit() definition */
#include <ctime>
#include <windows.h>

#define _POSIX_C_SOURCE 200809L
#define BILLION 1000000000L

void fun() {
    Sleep(3);
}

int main()
{
    struct timespec start, end;
    uint64_t diff;

    /* measure monotonic time */
    clock_gettime(CLOCK_MONOTONIC, &start); /* mark start time */
    fun();
    clock_gettime(CLOCK_MONOTONIC, &end);   /* mark the end time */

    diff = BILLION * (end.tv_sec - start.tv_sec) + end.tv_nsec - start.tv_nsec;
    printf("elapsed time = %llu nanoseconds\n", (long long unsigned int) diff);

    system("pause");
    return 0;
}
It runs fine on Linux, but in Windows VS 2015 reports errors:
'CLOCK_MONOTONIC': undeclared identifier
'clock_gettime': identifier not found
Please suggest how to fix this error, or another way to measure elapsed time in Visual Studio 2015. Thanks.
The function clock_gettime() is defined by POSIX, and Windows is not POSIX-compliant.
Here's a link to an old post about porting clock_gettime() to Windows.
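A rough sketch of what such a port can look like, built on QueryPerformanceCounter (this is not the code from that post; the CLOCK_MONOTONIC value is a placeholder, and struct timespec is assumed to exist, as it does in VS 2015's CRT):
#include <windows.h>
#include <time.h>

#define CLOCK_MONOTONIC 0 /* placeholder; Windows has no POSIX clock ids */

static int clock_gettime(int clk_id, struct timespec *ts)
{
    LARGE_INTEGER freq, count;
    QueryPerformanceFrequency(&freq); /* ticks per second */
    QueryPerformanceCounter(&count);  /* ticks since boot */
    ts->tv_sec  = (time_t)(count.QuadPart / freq.QuadPart);
    ts->tv_nsec = (long)((count.QuadPart % freq.QuadPart)
                         * 1000000000LL / freq.QuadPart);
    (void)clk_id; /* only monotonic behaviour is emulated here */
    return 0;
}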
For Windows I would use the std::chrono library.
A simple example:
#include <chrono>
#include <cstdio>

auto start = std::chrono::high_resolution_clock::now();
func(); // the function being timed
auto end = std::chrono::high_resolution_clock::now();
std::chrono::duration<float> duration = end - start;
printf("Duration: %f seconds\n", duration.count());

How to get the system clock in microseconds in C++?

I am working on a project using Visual C++/CLR in console mode.
How can I get the system clock in microseconds ?
I want to display hours:minutes:seconds:microseconds
The following program works well but is not compatible with other platforms:
#include <stdio.h>
#include <time.h>     /* for localtime() and struct tm */
#include <sys/time.h>

int main()
{
    struct timeval tv;
    struct timezone tz;
    struct tm *tm;

    gettimeofday(&tv, &tz);
    tm = localtime(&tv.tv_sec);
    printf(" %d:%02d:%02d %ld \n", tm->tm_hour, tm->tm_min, tm->tm_sec, tv.tv_usec);
    return 0;
}
You could use ptime microsec_clock::local_time() from Boost.
The documentation is available here.
After that, you can use std::string to_iso_extended_string(ptime) to display the returned time as a string or you can use the members of ptime directly to format the output by yourself.
Anyway it is worth noting that:
Win32 systems often do not achieve microsecond resolution via this API. If higher resolution is critical to your application test your platform to see the achieved resolution.
So I guess it depends on how precise you require your "clock" to be.
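If the achieved resolution matters, a quick probe like the following reports the smallest step the clock produces (a self-contained sketch using std::chrono; substitute the clock you actually plan to use):
#include <chrono>
#include <iostream>

int main()
{
    using clock = std::chrono::high_resolution_clock;
    auto t0 = clock::now();
    auto t1 = t0;
    while (t1 == t0)      // spin until the clock visibly ticks
        t1 = clock::now();
    std::cout << "smallest observed step: "
              << std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0).count()
              << " ns\n";
    return 0;
}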
Thank you, ereOn.
I followed your instructions and wrote this code; it works 100%:
#include <iostream>
#include "boost/date_time/posix_time/posix_time.hpp"

typedef boost::posix_time::ptime Time;

int main()
{
    Time t1;
    for (int i = 0; i < 1000; i++)
    {
        t1 = boost::posix_time::microsec_clock::local_time();
        std::cout << to_iso_extended_string(t1) << "\n";
    }
    return 0;
}

Why is my C++ clock()-based function returning a negative value?

I am still new to C++. Is the clock() function absolute (meaning it counts how long you sleep for), or does it only count the time the application actually executes?
I want a reliable way to produce exact intervals of 1 second. I am saving files, so I need to account for the time that takes: I was getting the runtime in milliseconds and then sleeping for the remainder of the second.
Is there a more accurate or simpler way to do this?
EDIT:
The main problem I am having is that I am getting a negative number:
double FCamera::getRuntime(clock_t* end, clock_t* start)
{
    return ((double(end - start) / CLOCKS_PER_SEC) * 1000);
}

clock_t start = clock();
doWork();
clock_t end = clock();
double runtimeInMilliseconds = getRuntime(&end, &start);
It's giving me a negative number, what's up with that?
clock() returns the number of clock ticks elapsed since the program was launched. If you want to convert the value returned by clock into seconds divide by CLOCKS_PER_SEC (and multiply for the other way around).
There is just one pitfall, the initial moment of reference used by clock as the beginning of the program execution may vary between platforms. To calculate the actual processing times of a program, the value returned by clock should be compared to a value returned by an initial call to clock.
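In code, that comparison pattern looks something like this (a minimal sketch; doWork stands in for whatever you want to time):
#include <cstdio>
#include <ctime>

int main()
{
    std::clock_t start = std::clock(); // initial reference value
    // doWork();                       // the code being measured
    std::clock_t end = std::clock();

    double seconds = double(end - start) / CLOCKS_PER_SEC;
    std::printf("CPU time used: %f s\n", seconds);
    return 0;
}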
EDIT
larsman has been kind enough to post other pitfalls in the comments; I have included them here for future reference.
On several other implementations, the value returned by clock() also includes the times of any children whose status has been collected via wait(2) (or another wait-type call). Linux does not include the times of waited-for children in the value returned by clock().
Note that the time can wrap around. On a 32-bit system where CLOCKS_PER_SEC equals 1000000 [as mandated by POSIX] this function will return the same value approximately every 72 minutes.
EDIT2
After messing around a while, here is my portable (Linux/Windows) msleep. Be wary though: I'm not experienced with C/C++, and it most likely contains the stupidest error ever.
#ifdef _WIN32
#include <windows.h>
#define msleep(ms) Sleep((DWORD) ms)
#else
#include <unistd.h>
inline void msleep(unsigned long ms) {
    while (ms--)
        usleep(1000); /* sleep 1 ms at a time */
}
#endif
You missed the * (pointer dereference).
Your arguments are pointers (addresses of clock_t variables),
so your code must be modified to:
return ((double(*end - *start) / CLOCKS_PER_SEC) * 1000);
Under Windows, you can use:
VOID WINAPI Sleep(
    __in DWORD dwMilliseconds
);
On Linux, you will want to use:
#include <unistd.h>
unsigned int sleep(unsigned int seconds);
Notice the parameter difference: milliseconds under Windows and seconds under Linux.
My approach relies on:
int gettimeofday(struct timeval *tv, struct timezone *tz);
which gives the number of seconds and microseconds since the Epoch. According to the man pages:
The tv argument is a struct timeval (as specified in <sys/time.h>):
struct timeval {
    time_t      tv_sec;  /* seconds */
    suseconds_t tv_usec; /* microseconds */
};
So here we go:
#include <sys/time.h>
#include <unistd.h> /* for sleep() */
#include <iostream>
#include <iomanip>

static long myclock()
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (tv.tv_sec * 1000000) + tv.tv_usec;
}

double getRuntime(long* end, long* start)
{
    return (*end - *start);
}

void doWork()
{
    sleep(3);
}

int main(void)
{
    long start = myclock();
    doWork();
    long end = myclock();

    std::cout << "Time elapsed: " << std::setprecision(6) << getRuntime(&end, &start) / 1000.0 << " milliseconds" << std::endl;
    std::cout << "Time elapsed: " << std::setprecision(3) << getRuntime(&end, &start) / 1000000.0 << " seconds" << std::endl;
    return 0;
}
Outputs:
Time elapsed: 3000.08 milliseconds
Time elapsed: 3 seconds

Sleep for milliseconds

I know the POSIX sleep(x) function makes the program sleep for x seconds. Is there a function to make the program sleep for x milliseconds in C++?
In C++11, you can do this with standard library facilities:
#include <chrono>
#include <thread>
std::this_thread::sleep_for(std::chrono::milliseconds(x));
Clear and readable, no more need to guess at what units the sleep() function takes.
Note that there is no standard C API for milliseconds, so (on Unix) you will have to settle for usleep, which accepts microseconds:
#include <unistd.h>
unsigned int microseconds;
...
usleep(microseconds);
To stay portable you could use Boost.Thread for sleeping:
#include <boost/thread/thread.hpp>

int main()
{
    // waits 2 seconds in total (1 s + 1000 ms)
    boost::this_thread::sleep(boost::posix_time::seconds(1));
    boost::this_thread::sleep(boost::posix_time::milliseconds(1000));
    return 0;
}
This answer is a duplicate and has been posted in this question before. Perhaps you could find some usable answers there too.
In Unix you can use usleep.
In Windows there is Sleep.
Depending on your platform you may have usleep or nanosleep available. usleep is deprecated and has been deleted from the most recent POSIX standard; nanosleep is preferred.
Why not use the platform sleep functions directly? This runs on Windows and POSIX systems (don't use this code in production!), and the CPU stays in an idle state while sleeping:
#include <iostream>
#ifdef _WIN32
#include <windows.h>
#else
#include <unistd.h>
#endif // _WIN32

using namespace std;

void sleepcp(int milliseconds) // Cross-platform sleep function
{
#ifdef _WIN32
    Sleep(milliseconds);
#else
    usleep(milliseconds * 1000);
#endif // _WIN32
}

int main()
{
    cout << "Hi! At the count to 3, I'll die! :)" << endl;
    sleepcp(3000);
    cout << "urrrrggghhhh!" << endl;
}
#include <chrono>
#include <thread>
std::this_thread::sleep_for(std::chrono::milliseconds(1000)); // sleep for 1 second
Remember to include both headers.
From C++14, using std and its duration literals:
#include <chrono>
#include <thread>
using namespace std::chrono_literals;
std::this_thread::sleep_for(123ms);
#include <windows.h>
Syntax:
Sleep ( __in DWORD dwMilliseconds );
Usage:
Sleep (1000); //Sleeps for 1000 ms or 1 sec
nanosleep is a better choice than usleep - it is more resilient against interrupts.
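A sketch of what that resilience looks like in practice: nanosleep reports the unslept remainder, so a loop can resume after a signal (sleep_ms_resumable is a made-up name for illustration):
#include <cerrno>
#include <ctime>

/* Sleep for the given milliseconds, resuming if a signal interrupts us. */
void sleep_ms_resumable(long ms)
{
    struct timespec req = { ms / 1000, (ms % 1000) * 1000000L };
    struct timespec rem;
    while (nanosleep(&req, &rem) == -1 && errno == EINTR)
        req = rem; /* continue with the time still left */
}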
If you are using MS Visual C++ 10.0, you can do this with the Concurrency Runtime:
Concurrency::wait(milliseconds);
you will need:
#include <concrt.h>
On platforms that provide the select function (POSIX and Linux; note that on Windows, select requires at least one valid socket in the fd sets) you could do:
void sleep(unsigned long msec) {
    timeval delay = {msec / 1000, msec % 1000 * 1000}; // seconds, microseconds
    int rc = ::select(0, NULL, NULL, NULL, &delay);
    if (-1 == rc) {
        // Handle signals by continuing to sleep or return immediately.
    }
}
However, there are better alternatives available nowadays.
On Windows, the way to sleep your program in C++ is the Sleep(int) function, declared in the "windows.h" header.
For example:
#include "stdafx.h"
#include "windows.h"
#include "iostream"
using namespace std;
int main()
{
int x = 6000;
Sleep(x);
cout << "6 seconds have passed" << endl;
return 0;
}
The time it sleeps is measured in milliseconds and has no limit.
Second = 1000 milliseconds
Minute = 60000 milliseconds
Hour = 3600000 milliseconds
A select call is a way of having more precision: the sleep time can be specified in microseconds via struct timeval, or in nanoseconds if you use pselect.
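A sketch of the pselect variant (POSIX only; delay_ns is a made-up name):
#include <sys/select.h>

/* Delay for the given number of nanoseconds via pselect's timespec timeout. */
void delay_ns(long long ns)
{
    struct timespec ts = { (time_t)(ns / 1000000000LL), (long)(ns % 1000000000LL) };
    pselect(0, NULL, NULL, NULL, &ts, NULL);
}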
Using Boost threads (asio included as in the original setup), sleep for x milliseconds:
#include <boost/thread.hpp>
#include <boost/asio.hpp>
boost::thread::sleep(boost::get_system_time() + boost::posix_time::millisec(1000));
As a Win32 Sleep() replacement for POSIX systems:
#include <stdio.h>
#include <unistd.h>

void Sleep(unsigned int milliseconds) {
    usleep(milliseconds * 1000);
}

while (1) {
    printf(".");
    Sleep((unsigned int)(1000.0f / 20.0f)); // 20 fps
}
The question is old, but I managed to figure out a simple way to have this in my app. You can create a C/C++ macro as shown below and use it:
#ifndef MACROS_H
#define MACROS_H
#include <unistd.h>
#define msleep(X) usleep((X) * 1000) /* parenthesize X so expressions work too */
#endif // MACROS_H
I use this:
#include <chrono>
#include <thread>
using namespace std::chrono_literals; // needed for the ms suffix the macro pastes on
#define sleepms(val) std::this_thread::sleep_for(val##ms)
Example:
sleepms(200);
An elegant solution from one of the answers, slightly modified. One can easily add select() usage if no better functionality is available: just write a function that uses select(), etc.
Code:
#include <iostream>

/*
    Prepare defines for a millisecond sleep function that is cross-platform
*/
#ifdef _WIN32
#  include <Windows.h>
#  define sleep_function_name           Sleep
#  define sleep_time_multiplier_for_ms  1
#else
#  include <unistd.h>
#  define sleep_function_name           usleep
#  define sleep_time_multiplier_for_ms  1000
#endif

/* Cross-platform millisecond sleep */
void cross_platform_sleep_ms(unsigned long int time_to_sleep_in_ms)
{
    sleep_function_name(sleep_time_multiplier_for_ms * time_to_sleep_in_ms);
}
For C on Windows (e.g. with gcc), #include <windows.h> and then use Sleep() with a capital S, not sleep() with a lowercase s; Sleep(1000) sleeps for about 1 second. On POSIX systems, clang supports sleep(); sleep(1) gives a 1-second delay.

Definitive function to get elapsed time in milliseconds

I have tried clock_gettime(CLOCK_REALTIME) and gettimeofday() without luck, and the most basic one, clock(), returns 0 to me (?).
But none of them counts the time spent under sleep. I don't need a high-resolution timer, but I need something to get the elapsed time in ms.
EDIT: Final program:
#include <iostream>
#include <string>
#include <time.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <unistd.h> /* for sleep() */
using namespace std;

// Non-system sleep (wasting cpu)
void wait(int seconds)
{
    clock_t endwait;
    endwait = clock() + seconds * CLOCKS_PER_SEC;
    while (clock() < endwait) {}
}

void show_time()
{
    timeval tv;
    gettimeofday(&tv, 0);
    time_t t = tv.tv_sec;
    long sub_sec = tv.tv_usec;

    cout << "t value: " << t << endl;
    cout << "sub_sec value: " << sub_sec << endl;
}

int main()
{
    show_time();
    sleep(2);
    show_time();
    wait(2);
    show_time();
}
You need to try gettimeofday() again; it certainly counts wall-clock time, so it counts while the process sleeps as well.
#include <stdio.h>
#include <sys/time.h>

long long getmsofday()
{
    struct timeval tv;
    gettimeofday(&tv, NULL); /* the second (timezone) argument may be NULL */
    return (long long)tv.tv_sec * 1000 + tv.tv_usec / 1000;
}

...

long long start = getmsofday();
do_something();
long long end = getmsofday();
printf("do_something took %lld ms\n", end - start);
Your problem probably relates to integer division. You need to cast one of the division operands to float/double to avoid truncation of values less than a second.
clock_t start = clock();
// do stuff

// Cast either operand of the division to double;
// here the right-hand operand, CLOCKS_PER_SEC, is cast.
double time_passed = (clock() - start) / static_cast<double>(CLOCKS_PER_SEC);
[Edit] As pointed out, clock() measures CPU time (clock ticks/cycles) and is not well-suited for wall-clock timing tests. If you want a portable solution for that, see Boost.Timer as a possible solution.
You actually want clock_gettime(CLOCK_MONOTONIC, ...).
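A minimal sketch of that approach (now_ms is a made-up helper name; unlike clock(), CLOCK_MONOTONIC keeps advancing while the process sleeps; older glibc may need linking with -lrt):
#include <cstdio>
#include <ctime>
#include <unistd.h>

static long long now_ms()
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (long long)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
}

int main()
{
    long long start = now_ms();
    sleep(2); // time spent sleeping is counted
    std::printf("elapsed: %lld ms\n", now_ms() - start);
    return 0;
}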