I am trying to create a simple class for time measurements where start() would start a measurement and end() would end it and cout the result. So far I have:
#include <sys/time.h>
#include <string>
#include <iostream>
using namespace std;
class Time {
public:
Time() {strTmp.clear();}
void start(string str) {
strTmp=str;
gettimeofday(&tim, NULL);
timeTmp = tim.tv_sec+(tim.tv_usec/1000000.0);
}
void end() {
gettimeofday(&tim, NULL);
cout << strTmp << " time: " << timeTmp - tim.tv_sec+(tim.tv_usec/1000000.0) << "s" << endl;
strTmp.clear();
}
private:
double timeTmp;
string strTmp;
timeval tim;
};
int main()
{
Time t;
t.start("test");
t.end();
return 0;
}
Unfortunately there is a built-in delay of about 1 second in the measurement.
This delay disappears without the string input.
Is there a way to avoid the delay and still have the string input?
(I use g++ with -std=c++11 -O3 to compile.)
You need to remember operator precedence:
cout << strTmp << " time: " << timeTmp - tim.tv_sec+(tim.tv_usec/1000000.0) << "s" << endl;
This is subtracting the whole seconds at the end from the sum of the start time and the number of microseconds at the end: a - b + c/d is not the same as a - (b + c/d). As your comment to #PaulMcKenzie suggested, changing this to tim.tv_sec+(tim.tv_usec/1000000.0) - timeTmp gives more meaningful results.
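For reference, a corrected end() might look like this (same members as above; only the grouping and the order of the subtraction change):
void end() {
gettimeofday(&tim, NULL);
// group the end-time sum first, then subtract the start time
double now = tim.tv_sec + (tim.tv_usec / 1000000.0);
cout << strTmp << " time: " << now - timeTmp << "s" << endl;
strTmp.clear();
}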
A simple string shouldn't add that much time to a test (1 second?).
In any event, pass the string by const reference, not by value. You are incurring an unnecessary copy when you pass by value:
void start(const string& str) {
The other option is stylistic -- what purpose does that string serve except to make your output look "fancy"? Why not just get rid of it? In addition, why does your class do cout's? If the goal is to encapsulate Time, there is no need for the cout -- let the client of the Time class handle the I/O.
It seems like you should pass your timer-tagging string to the constructor instead of the start function. Also, you shouldn't need to do the calculation timeTmp = tim.tv_sec+(tim.tv_usec/1000000.0); while the timer is actively running. Wait until after you've recorded the time at which end() is called before doing unit-conversion work like this.
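A rough sketch of that suggestion (tag in the constructor, raw timestamps kept while the timer runs, conversion deferred to end(); the class and member names are illustrative):
#include <sys/time.h>
#include <iostream>
#include <string>
class Timer {
public:
explicit Timer(const std::string& tag) : tag(tag) {}
void start() { gettimeofday(&startTv, NULL); }
void end() {
timeval endTv;
gettimeofday(&endTv, NULL);
// unit conversion happens only after both timestamps are captured
double elapsed = (endTv.tv_sec - startTv.tv_sec)
+ (endTv.tv_usec - startTv.tv_usec) / 1000000.0;
std::cout << tag << " time: " << elapsed << "s" << std::endl;
}
private:
std::string tag;
timeval startTv;
};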
Related
I have made a program to compute the permutations of an 8 character string "sharjeel".
#include <iostream>
#include <time.h>
char string[] = "sharjeel";
int len = 8;
int count = 0;
void swap(char& a, char& b){
char t = a;
a = b;
b = t;
}
void permute(int pos) {
if(pos==len-1){
std::cout << ++count << "\t" << string << std::endl;
return;
}
else {
for (int i = pos; i < len;i++)
{
swap(string[i], string[pos]);
permute(pos + 1);
swap(string[i], string[pos]);
}
}
}
int main(){
clock_t start = clock();
permute(0);
std::cout << "Permutations: " << count << std::endl;
std::cout << "Time taken: " << (double)(clock() - start) / (double)CLOCKS_PER_SEC << std::endl;
return 1;
}
If I print each permutation along the way, it takes about 9.8 seconds for the execution to complete.
40314 lshaerej
40315 lshareej
40316 lshareje
40317 lshareej
40318 lshareje
40319 lsharjee
40320 lsharjee
Permutations: 40320
Time taken: 9.815
Now if I replace the line:
std::cout << ++count << "\t" << string << std::endl;
with this:
++count;
and then recompile, the output is:
Permutations: 40320
Time taken: 0.001
Running again:
Permutations: 40320
Time taken: 0.002
Compiled using g++ with -O3
Why is std::cout so relatively time consuming? Is there a way to print that is faster?
EDIT: Made a C# version of the program
/*
* Permutations
* in c#
* much faster than the c++ version
*/
using System;
using System.Diagnostics;
namespace Permutation_C
{
class MainClass
{
private static uint len;
private static char[] input;
private static int count = 0;
public static void Main (string[] args)
{
Console.Write ("Enter a string to permute: ");
input = Console.ReadLine ().ToCharArray();
len = Convert.ToUInt32(input.Length);
Stopwatch clock = Stopwatch.StartNew();
permute (0u);
Console.WriteLine("Time Taken: {0} seconds", clock.ElapsedMilliseconds/1000.0);
}
static void permute(uint pos)
{
if (pos == len - 1u) {
Console.WriteLine ("{0}.\t{1}",++count, new string(input));
return;
} else {
for (uint i = pos; i < len; i++) {
swap (Convert.ToInt32(i),Convert.ToInt32(pos));
permute (pos + 1);
swap (Convert.ToInt32(i),Convert.ToInt32(pos));
}
}
}
static void swap(int a, int b) {
char t = input[a];
input[a] = input[b];
input[b] = t;
}
}
}
Output:
40313. lshaerje
40314. lshaerej
40315. lshareej
40316. lshareje
40317. lshareej
40318. lshareje
40319. lsharjee
40320. lsharjee
Time Taken: 4.628 seconds
Press any key to continue . . .
From here, Console.WriteLine() seems almost twice as fast when compared with the results from std::cout. What seems to be slowing std::cout down?
std::cout ultimately results in the operating system being invoked.
If you want something to compute fast, you have to make sure that no external entities are involved in the computation, especially entities that have been written with versatility more than performance in mind, like the operating system.
Want it to run faster? You have a few options (a combined sketch follows this list):
Replace << std::endl; with << '\n'. This will refrain from flushing the internal buffer of the C++ runtime to the operating system on every single line. It should result in a huge performance improvement.
Use std::ios::sync_with_stdio(false); as user Galik Mar suggests in a comment.
Collect as much as possible of your outgoing text in a buffer, and output the entire buffer at once with a single call.
Write your output to a file instead of the console, and then keep that file displayed by a separate application such as Notepad++ which can keep track of changes and keep scrolling to the bottom.
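As a minimal sketch combining the first three options for the permutation program above (assumes C++11 for std::to_string; the buffer handling is illustrative, not the only way to batch output):
#include <iostream>
#include <string>
#include <utility>   // std::swap
char word[] = "sharjeel";   // renamed from `string` to avoid confusion
const int len = 8;
int count = 0;
std::string outbuf;          // option 3: accumulate all output here
void permute(int pos) {
if (pos == len - 1) {
outbuf += std::to_string(++count);
outbuf += '\t';
outbuf += word;
outbuf += '\n';                      // option 1: '\n', no per-line flush
return;
}
for (int i = pos; i < len; i++) {
std::swap(word[i], word[pos]);
permute(pos + 1);
std::swap(word[i], word[pos]);
}
}
int main() {
std::ios::sync_with_stdio(false);        // option 2
permute(0);
std::cout << outbuf                      // single large write at the end
<< "Permutations: " << count << '\n';
}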
As for why it is so "time consuming" (in other words, slow), that's because the primary purpose of std::cout (and ultimately the operating system's standard output stream) is versatility, not performance. Think about it: std::cout is a C++ library function which will invoke the operating system; the operating system will determine that the file being written to is not really a file, but the console, so it will send the data to the console subsystem; the console subsystem will receive the data and it will start invoking the graphics subsystem to render the text in the console window; the graphics subsystem will be drawing font glyphs on a raster display, and while rendering the data, there will be scrolling of the console window, which involves copying large amounts of video RAM. That's an awful lot of work, even if the graphics card takes care of some of it in hardware.
As for the C# version, I am not sure exactly what is going on, but what is probably happening is something quite different: In C# you are not invoking Console.Out.Flush(), so your output is cached and you are not suffering the overhead incurred by C++'s std::cout << std::endl, which causes each line to be flushed to the operating system. However, when the buffer does become full, C# must flush it to the operating system, and then it is hit not only by the overhead represented by the operating system, but also by the formidable managed-to-native and native-to-managed transition that is inherent in the way its virtual machine works.
Abstract:
I wrote a short program dealing with the Chrono library in C++ for experimentation purposes. I want the CPU to count as high as it can within one second, display what it counted to, then repeat the process within an infinite loop.
Current Code:
#include <iostream>
#include <chrono>
int counter()
{
int num = 0;
auto startTime = std::chrono::system_clock::now();
while (true)
{
num++;
auto currentTime = std::chrono::system_clock::now();
if (std::chrono::duration_cast<std::chrono::seconds>(currentTime - startTime).count() == 1)
return num;
}
}
int main()
{
while(true)
std::cout << "You've counted to " << counter() << "in one second!";
return 0;
}
Problem:
The conditional statement in my program:
if (std::chrono::duration_cast<std::chrono::seconds>(currentTime - startTime).count() == 1)
isn't being triggered because the casted value of currentTime - startTime never equals nor rises above one. This can be demonstrated by replacing the operator '==' with '<', which outputs an incorrect result, as opposed to outputting nothing at all. I don't understand why the condition isn't being met; if this program is gathering time from the system clock at one point, then repeatedly comparing it to the current time, shouldn't the integer value of the difference equal one at some point?
You're hitting a cout issue, not a chrono issue. The problem is that you're printing with cout which doesn't flush if it doesn't feel like it.
std::cerr is unit-buffered, so its output appears promptly. Change to cerr and add a \n and you'll get what you expect.
std::cerr << "You've counted to " << counter() << "in one second!\n";
I am trying to calculate the time a certain function takes to run
#include <iostream>
#include <cstdlib>
#include <ctime>
#include "time.h"
int myFunction(int n)
{
.............
}
int n;
time_t start;
std::cout<<"What number would you like to enter ";
std::cout << std::endl;
std::cin>>n;
start = clock();
std::cout<<myFunction(n)<<std::endl;
std::cout << "Time it took: " << (clock() - start) / (double)(CLOCKS_PER_SEC/ 1000 ) << std::endl;
std::cout << std::endl;
This works fine in Xcode (getting numbers such as 4.2, 2.6, ...), but doesn't on a Linux-based server, where I'm always getting 0. Any ideas why that is and how to fix it?
The "tick" of clock may be more than 1/CLOCKS_PER_SEC seconds, for example it could be 10ms, 15.832761ms or 32 microseconds.
If the time consumed by your code is smaller than this "tick", then the time taken will appear to be zero.
There's no simple way to find out what that tick is, other than calling clock repeatedly until the return value differs from the previous one. That may not be entirely reliable: if the clock tick is very small the result won't be accurate in that direction, but if the tick is fairly long you should be able to find it out.
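A rough sketch of that probing idea (only an estimate, and it assumes clock() keeps advancing while the loop spins):
#include <ctime>
#include <cstdio>
int main()
{
// Align to a tick boundary, then measure one full tick of clock().
std::clock_t t0 = std::clock(), t1, t2;
do { t1 = std::clock(); } while (t1 == t0);   // wait for the first change
do { t2 = std::clock(); } while (t2 == t1);   // measure to the next change
std::printf("approximate clock() tick: %f ms\n",
(t2 - t1) * 1000.0 / CLOCKS_PER_SEC);
return 0;
}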
For measuring very short times (a few milliseconds), and assuming the function is entirely CPU/memory-bound and not spending time waiting for file I/O or sending/receiving packets over a network, std::chrono can be used to measure the time. For extremely short times, the processor's time-stamp counter can also be used, although it can be quite tricky to use because it varies in speed depending on load and can have different values between different cores.
In my compiler project, I'm using this to measure the time of things:
This part goes into a header:
// Forward declaration of the impl type and the global trace switch (defined in the .cpp).
class TimeTraceImpl;
extern bool timetrace;
class TimeTrace
{
public:
TimeTrace(const char *func) : impl(0)
{
if(timetrace)
{
createImpl(func);
}
}
~TimeTrace() { if(impl) destroyImpl(); }
private:
void createImpl(const char *func);
void destroyImpl();
TimeTraceImpl *impl;
};
This goes into a source file.
#include <chrono>
#include <cstdint>
#include <iomanip>
#include <iostream>
bool timetrace = true;   // the switch checked by TimeTrace; in the real project presumably set from a flag
class TimeTraceImpl
{
public:
TimeTraceImpl(const char *func) : func(func)
{
start = std::chrono::steady_clock::now();
}
~TimeTraceImpl()
{
end = std::chrono::steady_clock::now();
uint64_t elapsed = std::chrono::duration_cast<std::chrono::microseconds>(end-start).count();
std::cerr << "Time for " << func << " "
<< std::fixed << std::setprecision(3) << elapsed / 1000.0 << " ms" << std::endl;
}
private:
std::chrono::time_point<std::chrono::steady_clock> start, end;
const char* func;
};
void TimeTrace::createImpl(const char *func)
{
impl = new TimeTraceImpl(func);
}
void TimeTrace::destroyImpl()
{
delete impl;
}
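For completeness, typical usage would look something like this (someCompilerPass is a hypothetical function; __func__ is just a convenient tag):
void someCompilerPass()
{
// Prints "Time for someCompilerPass <n> ms" via std::cerr when the object
// goes out of scope, but only if the global `timetrace` flag is true.
TimeTrace trace(__func__);
// ... work being measured ...
}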
The reason for the rather complex pImpl implementation is that I don't want to burden the code with extra work when the timing is turned off (timetrace is false).
Of course, the smallest actual tick of std::chrono also varies, but in most Linux implementations, it will be nanoseconds or some small multiple thereof, so (much) better precision than clock.
The drawback is that it measures the elapsed time, not the CPU-usage. This is fine for when the bottleneck is the CPU and memory, but not for things that depend on external hardware to perform something [unless you actually WANT that measurement].
I am trying to create a timer that begins at a certain value and ends at another value, like this:
int pktctr = (unsigned char)unpkt[0];
if(pktctr == 2)
{
cout << "timer-begin" << endl;
//start timer here
}
if(pktctr == 255)
{
cout << "timer-end" << endl;
//stop timer here
//timer display total time then reset.
}
cout << "displays total time it took from 1 to 255 here" << endl;
Any idea on how to achieve this?
void WINAPI MyUCPackets(char* unpkt, int packetlen, int iR, int arg)
{
int pktctr = (unsigned char)unpkt[0];
if(pktctr == 2)
{
cout << "timer-begin" << endl;
}
if(pktctr == 255)
{
cout << "timer-end" << endl;
}
return MyUC2Packets(unpkt,packetlen,iR,arg);
}
Every time this function is called, unpkt[0] starts from 2, then reaches a max of 255, then goes back to 1. I want to compute how long each revolution takes.
This will happen a lot of times, but I just want to check how many seconds each one takes, because it won't be the same every time.
Note: This is done with MSDetours 3.0...
I'll assume you're using Windows (from the WINAPI in the code) in which case you can use GetTickCount:
/* or you could have this elsewhere, e.g. as a class member or
* in global scope (yuck!) As it stands, this isn't thread safe!
*/
static DWORD dwStartTicks = 0;
int pktctr = (unsigned char)unpkt[0];
if(pktctr == 2)
{
cout << "timer-begin" << endl;
dwStartTicks = GetTickCount();
}
if(pktctr == 255)
{
cout << "timer-end" << endl;
DWORD dwDuration = GetTickCount() - dwStartTicks;
/* use dwDuration - it's in milliseconds, so divide by 1000 to get
* seconds if you so desire.
*/
}
Things to watch out for: overflow of GetTickCount is possible (it wraps back to 0 approximately every 49.7 days, so if you start your timer close to the rollover time, it may finish after the rollover). You can solve this in two ways: either use GetTickCount64, or simply notice when dwStartTicks > GetTickCount() and, if so, calculate how many milliseconds there were from dwStartTicks until the rollover and how many milliseconds from 0 to the result of GetTickCount(), and add those numbers together (bonus points if you can do this in a more clever way).
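A sketch of that manual wrap handling (GetTickCount64, where available, is the simpler route):
DWORD dwNow = GetTickCount();
DWORD dwDuration;
if (dwNow >= dwStartTicks)
dwDuration = dwNow - dwStartTicks;
else
// the tick count wrapped: milliseconds up to the wrap, plus milliseconds since it
dwDuration = (0xFFFFFFFF - dwStartTicks) + 1 + dwNow;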
Alternatively, you can use the clock function. You can find out more on that, including an example of how to use it at http://msdn.microsoft.com/en-us/library/4e2ess30(v=vs.71).aspx and it should be fairly easy to adapt and integrate into your code.
Finally, if you're interested in a more "standard" solution, you can use the <chrono> stuff from the C++ standard library. Check out http://en.cppreference.com/w/cpp/chrono for an example.
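For example, a rough <chrono> sketch of the same idea (onPacketCounter is just an illustrative wrapper around the logic shown above):
#include <chrono>
#include <iostream>
static std::chrono::steady_clock::time_point revolutionStart;
void onPacketCounter(unsigned char pktctr)
{
if (pktctr == 2)
{
std::cout << "timer-begin" << std::endl;
revolutionStart = std::chrono::steady_clock::now();
}
if (pktctr == 255)
{
std::cout << "timer-end" << std::endl;
auto elapsed = std::chrono::steady_clock::now() - revolutionStart;
std::cout << "revolution took "
<< std::chrono::duration_cast<std::chrono::milliseconds>(elapsed).count() / 1000.0
<< "s" << std::endl;
}
}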
If you want to use the Windows API, use GetSystemTime(). Provide a SYSTEMTIME struct and pass its address to GetSystemTime():
#include <Windows.h>
...
SYSTEMTIME sysTime;
GetSystemTime(&sysTime);
// use sysTime and create differences
Look here for GetSystemTime(); there is a link to SYSTEMTIME there, too.
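To actually form differences from two SYSTEMTIME values, one approach is to convert them to FILETIME (100-nanosecond units) first, e.g.:
#include <Windows.h>
// Returns the difference t2 - t1 in milliseconds.
double SystemTimeDiffMs(const SYSTEMTIME& t1, const SYSTEMTIME& t2)
{
FILETIME f1, f2;
SystemTimeToFileTime(&t1, &f1);
SystemTimeToFileTime(&t2, &f2);
ULARGE_INTEGER u1, u2;
u1.LowPart = f1.dwLowDateTime;  u1.HighPart = f1.dwHighDateTime;
u2.LowPart = f2.dwLowDateTime;  u2.HighPart = f2.dwHighDateTime;
// FILETIME counts 100-nanosecond intervals.
return (u2.QuadPart - u1.QuadPart) / 10000.0;
}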
I think boost timer is the best solution for you.
You can check the elapsed time like this:
#include <boost/timer.hpp>
int main() {
boost::timer t; // start timing
...
double elapsed_time = t.elapsed();
...
}
In C# I would fire up the Stopwatch class to do some quick-and-dirty timing of how long certain methods take.
What is the equivalent of this in C++? Is there a high precision timer built in?
I used boost::timer for measuring the duration of an operation. It provides a very easy way to do the measurement, and at the same time being platform independent. Here is an example:
boost::timer myTimer;
doOperation();
std::cout << myTimer.elapsed();
P.S. To overcome precision errors, it would be great to measure operations that take a few seconds. Especially when you are trying to compare several alternatives. If you want to measure something that takes very little time, try putting it into a loop. For example run the operation 1000 times, and then divide the total time by 1000.
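For example (keeping the same placeholder doOperation() as above; N is arbitrary):
boost::timer myTimer;
const int N = 1000;
for (int i = 0; i < N; ++i)
doOperation();
std::cout << myTimer.elapsed() / N;   // average seconds per call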
I've implemented a timer for situations like this before: I actually ended up with a class with two different implementations, one for Windows and one for POSIX.
The reason was that Windows has the QueryPerformanceCounter() function which gives you access to a very accurate clock which is ideal for such timings.
On POSIX however this isn't available so I just used boost.datetime's classes to store the start and end times then calculated the duration from those. It offers a "high resolution" timer but the resolution is undefined and varies from platform to platform.
I use my own version of Python's timeit function. The advantage of this function is that it repeats a computation as many times as necessary to obtain meaningful results. If the computation is very fast, it will be repeated many times. In the end you obtain the average time of all the repetitions. It does not use any non-standard functionality:
#include <ctime>
double clock_diff_to_sec(long clock_diff)
{
return double(clock_diff) / CLOCKS_PER_SEC;
}
template<class Proc>
double time_it(Proc proc, int N=1) // returns time in microseconds
{
std::clock_t const start = std::clock();
for(int i = 0; i < N; ++i)
proc();
std::clock_t const end = std::clock();
if(clock_diff_to_sec(end - start) < .2)
return time_it(proc, N * 5);
return clock_diff_to_sec(end - start) * (1e6 / N);
}
The following example uses the time_it function to measure the performance of different STL containers:
#include <algorithm>
#include <deque>
#include <iostream>
#include <iterator>
#include <list>
#include <set>
#include <vector>
#include <boost/bind.hpp>
#include <unordered_set>   // std::tr1::unordered_set (on older GCC: <tr1/unordered_set>)
void dummy_op(int i)
{
if(i == -1)
std::cout << i << "\n";
}
template<class Container>
void test(Container const & c)
{
std::for_each(c.begin(), c.end(), &dummy_op);
}
template<class OutIt>
void init(OutIt it)
{
for(int i = 0; i < 1000; ++i)
*it = i;
}
int main( int argc, char ** argv )
{
{
std::vector<int> c;
init(std::back_inserter(c));
std::cout << "vector: "
<< time_it(boost::bind(&test<std::vector<int> >, c)) << "\n";
}
{
std::list<int> c;
init(std::back_inserter(c));
std::cout << "list: "
<< time_it(boost::bind(&test<std::list<int> >, c)) << "\n";
}
{
std::deque<int> c;
init(std::back_inserter(c));
std::cout << "deque: "
<< time_it(boost::bind(&test<std::deque<int> >, c)) << "\n";
}
{
std::set<int> c;
init(std::inserter(c, c.begin()));
std::cout << "set: "
<< time_it(boost::bind(&test<std::set<int> >, c)) << "\n";
}
{
std::tr1::unordered_set<int> c;
init(std::inserter(c, c.begin()));
std::cout << "unordered_set: "
<< time_it(boost::bind(&test<std::tr1::unordered_set<int> >, c)) << "\n";
}
}
In case anyone is curious here is the output I get (compiled with VS2008 in release mode):
vector: 8.7168
list: 27.776
deque: 91.52
set: 103.04
unordered_set: 29.76
You can use the ctime library to get the time in seconds. Getting the time in milliseconds is implementation-specific. Here is a discussion exploring some ways to do that.
See also: How to measure time in milliseconds using ANSI C?
High-precision timers are platform-specific and so aren't specified by the C++ standard, but there are libraries available. See this question for a discussion.
I humbly submit my own micro-benchmarking mini-library (on Github). It's super simple -- the only advantage it has over rolling your own is that it already has the high-performance timer code implemented for Windows and Linux, and abstracts away the annoying boilerplate.
Just pass in a function (or lambda), the number of times it should be called per test run (default: 1), and the number of test runs (default: 100). The fastest test run (measured in fractional milliseconds) is returned:
// Example that times the compare-and-swap atomic operation from C++11
// Sample GCC command: g++ -std=c++11 -DNDEBUG -O3 -lrt main.cpp microbench/systemtime.cpp -o bench
#include "microbench/microbench.h"
#include <cstdio>
#include <atomic>
int main()
{
std::atomic<int> x(0);
int y = 0;
printf("CAS takes %.4fms to execute 100000 iterations\n",
moodycamel::microbench(
[&]() { x.compare_exchange_strong(y, 0); }, /* function to benchmark */
100000, /* iterations per test run */
100 /* test runs */
)
);
// Result: Clocks in at 1.2ms (12ns per CAS operation) in my environment
return 0;
}
#include <stdio.h>
#include <time.h>
clock_t start, end;
start = clock();
//Do stuff
end = clock();
printf("Took: %f\n", (float)((end - start) / (float)CLOCKS_PER_SEC));
This might be an OS-dependent issue rather than a language issue.
If you're on Windows then you can access a millisecond timer (with roughly 10- to 16-millisecond resolution) through GetTickCount() or GetTickCount64(). Just call it once at the start and once at the end, and subtract.
That was what I used before if I recall correctly. The linked page has other options as well.
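A minimal sketch (Windows only; GetTickCount64 avoids the ~49-day wraparound of GetTickCount):
#include <windows.h>
#include <cstdio>
int main()
{
ULONGLONG start = GetTickCount64();
// ... code being timed ...
ULONGLONG elapsedMs = GetTickCount64() - start;
std::printf("Took: %llu ms\n", static_cast<unsigned long long>(elapsedMs));
return 0;
}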
You may find this class useful.
Using the RAII idiom, it prints the text given at construction when the destructor is called, filling the elapsed-time placeholder with the proper value.
Example of use:
#include <unistd.h>   // usleep
// trace_elapsed_time is the class referred to above
int main()
{
trace_elapsed_time t("Elapsed time: %ts.\n");
usleep(1.005 * 1e6);
}
Output:
Elapsed time: 1.00509s.