This question already has answers here:
What are the uses of std::chrono::high_resolution_clock?
(2 answers)
Closed 6 years ago.
So I was trying to use std::chrono::high_resolution_clock to time how long something takes to execute. I figured you could just find the difference between the start time and the end time...
To check my approach works, I made the following program:
#include <iostream>
#include <chrono>
#include <vector>

void long_function();

int main()
{
    std::chrono::high_resolution_clock timer;
    auto start_time = timer.now();
    long_function();
    auto end_time = timer.now();
    auto diff_millis = std::chrono::duration_cast<std::chrono::duration<int, std::milli>>(end_time - start_time);
    std::cout << "It took " << diff_millis.count() << "ms" << std::endl;
    return 0;
}

void long_function()
{
    // Should take a while to execute.
    // This is calculating the first 100 million
    // fib numbers and storing them in a vector.
    // Well, it doesn't actually, because it
    // overflows very quickly, but the point is it
    // should take a few seconds to execute.
    std::vector<unsigned long> numbers;
    numbers.push_back(1);
    numbers.push_back(1);
    for (int i = 2; i < 100000000; i++)
    {
        numbers.push_back(numbers[i - 2] + numbers[i - 1]);
    }
}
The problem is, it just outputs 3000ms exactly, when it clearly didn't take exactly that long.
On shorter problems, it just outputs 0ms... What am I doing wrong?
EDIT: If it's of any use, I'm using the GNU GCC compiler with the -std=c++0x flag.
The resolution of high_resolution_clock depends on the platform.
Printing the following will give you an idea of the resolution of the implementation you are using:
std::cout << "It took " << std::chrono::nanoseconds(end_time - start_time).count() << std::endl;
I ran into a similar problem with g++ (rev5, built by the MinGW-W64 project) 4.8.1 under Windows 7.
#include <iostream>
#include <chrono>

int main()
{
    auto start_time = std::chrono::high_resolution_clock::now();
    int temp(1);
    const int n(1e7);
    for (int i = 0; i < n; i++)
        temp += temp;
    auto end_time = std::chrono::high_resolution_clock::now();
    std::cout << std::chrono::duration_cast<std::chrono::nanoseconds>(end_time - start_time).count() << " ns.";
    return 0;
}
With n = 1e7 it displays 19999800 ns, but with n = 1e6 it displays 0 ns.
The precision seems weak.
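A common workaround when a single run falls below the clock's resolution is to repeat the work many times and divide by the repetition count. A sketch of that idea (the volatile is there so the optimizer can't discard the loop):

#include <chrono>
#include <iostream>

int main()
{
    const int reps = 1000;
    volatile int temp = 1; // volatile: keep the loop from being optimized away
    auto start = std::chrono::high_resolution_clock::now();
    for (int r = 0; r < reps; ++r)
        for (int i = 0; i < 1000000; ++i)
            temp += temp;
    auto end = std::chrono::high_resolution_clock::now();
    auto total_ns = std::chrono::duration_cast<std::chrono::nanoseconds>(end - start).count();
    std::cout << total_ns / reps << " ns per repetition\n";
}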
I'm just comparing the speed of a couple of Fibonacci functions. One gives an output almost immediately and reports that it finished in 500 nanoseconds, while the other, depending on the depth, may sit there loading for many seconds; yet when it is done, it reports that it took only 100 nanoseconds, after I just sat there and waited about 20 seconds for it.
It's not a big deal, as I can prove the other is slower just with raw human perception, but why would chrono not be working? Something to do with recursion?
PS: I know that fibonacci2() doesn't give the correct output on odd-numbered depths; I'm just testing some things, and the output is really only there so the compiler doesn't optimize it away. Go ahead and copy this code: you'll see fibonacci2() output immediately, but you'll have to wait about 5 seconds for fibonacci(). Thank you.
#include <iostream>
#include <chrono>

int fibonacci2(int depth) {
    static int a = 0;
    static int b = 1;
    if (b > a) {
        a += b; //std::cout << a << '\n';
    }
    else {
        b += a; //std::cout << b << '\n';
    }
    if (depth > 1) {
        fibonacci2(depth - 1);
    }
    return a;
}

int fibonacci(int n) {
    if (n <= 1) {
        return n;
    }
    return fibonacci(n - 1) + fibonacci(n - 2);
}

int main() {
    int f = 0;
    auto start2 = std::chrono::steady_clock::now();
    f = fibonacci2(44);
    auto stop2 = std::chrono::steady_clock::now();
    std::cout << f << '\n';
    auto duration2 = std::chrono::duration_cast<std::chrono::nanoseconds>(stop2 - start2);
    std::cout << "faster function time: " << duration2.count() << '\n';

    auto start = std::chrono::steady_clock::now();
    f = fibonacci(44);
    auto stop = std::chrono::steady_clock::now();
    std::cout << f << '\n';
    auto duration = std::chrono::duration_cast<std::chrono::nanoseconds>(stop - start);
    std::cout << "way slower function with incorrect time: " << duration.count() << '\n';
}
I don't know which compiler and options you are using, but I tested x64 MSVC v19.28 with /O2 on godbolt. There the compiled instructions are reordered so that the performance counter is queried twice before fibonacci(int) is even invoked, which in source terms would look like:
auto start = ...;
auto stop = ...;
f = fibonacci(44);
A solution to disallow this reordering might be to use an atomic_thread_fence just before and after the fibonacci function call, as sketched below.
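A sketch of that suggestion (note this is a workaround, not a guarantee: atomic_thread_fence is specified for inter-thread memory ordering, and whether it suppresses this particular reordering is compiler-dependent, though a seq_cst fence acts as a compiler barrier in practice):

#include <atomic>
#include <chrono>
#include <iostream>

int fibonacci(int n)
{
    if (n <= 1) return n;
    return fibonacci(n - 1) + fibonacci(n - 2);
}

int main()
{
    auto start = std::chrono::steady_clock::now();
    std::atomic_thread_fence(std::memory_order_seq_cst); // fence before the timed call
    int f = fibonacci(40);
    std::atomic_thread_fence(std::memory_order_seq_cst); // fence after the timed call
    auto stop = std::chrono::steady_clock::now();
    std::cout << f << " in "
              << std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count()
              << " ms\n";
}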
As Mestkon answered, the compiler can reorder your code.
For examples of how to prevent the compiler from reordering, see Memory Ordering - Compile Time Memory Barrier; a sketch of the classic barrier follows below.
It would be helpful in the future if you provided information on which compiler you were using.
gcc 7.5 with -O2, for example, does not reorder the timer calls in this given scenario.
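For reference, the classic GCC/Clang compile-time barrier from that article is an empty asm statement with a "memory" clobber. A sketch of wrapping the timed call with it (GCC/Clang only, not portable to MSVC):

#include <chrono>
#include <iostream>

// Empty asm with a "memory" clobber: tells GCC/Clang not to move
// memory accesses across this point at compile time.
#define COMPILER_BARRIER() asm volatile("" ::: "memory")

int fibonacci(int n)
{
    if (n <= 1) return n;
    return fibonacci(n - 1) + fibonacci(n - 2);
}

int main()
{
    auto start = std::chrono::steady_clock::now();
    COMPILER_BARRIER();
    int f = fibonacci(40);
    COMPILER_BARRIER();
    auto stop = std::chrono::steady_clock::now();
    std::cout << f << " in "
              << std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count()
              << " ms\n";
}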
What I want to do, my project:
I want to make a program that waits about 0.5 seconds, does something (say, cout << "Hello World"), and then repeats, about 10 times (this is a test for another program), but without sleep, sleep_for, or anything similar. The reason is that I don't want the processor to actually sleep: while sleeping it does nothing for those 0.5 seconds, and in particular it doesn't take input, which is exactly what I need it to keep doing.
What I tried:
What I tried was to keep two points in time (time_point start, end), duration_cast their difference (end - start) inside a for loop (int i = 0; i < 10; i++), and if their difference reached 500 milliseconds, then cout << "Hello World\n";.
My code looked something like this:
#include <iostream>
#include <chrono>
#include <ctime>
using namespace std;
using namespace chrono;

int main()
{
    time_point<steady_clock> t = steady_clock::now();
    for (int i = 0; i < 10;)
    {
        duration<double> d = steady_clock::now() - t;
        uint32_t a = duration_cast<milliseconds>(d).count();
        if (a >= 500)
        {
            cout << a << " Hello World!" << endl;
            t = steady_clock::now();
            i++;
        }
    }
    return 0;
}
My problem:
It overflows, most of the time. I don't know what exactly overflows, but a sometimes comes out as 6??? and other times as 47??? (? = some digit).
I tried many things, and I ended up with something like this:
#include <iostream>
#include <chrono>
#include <ctime>
using namespace std;
using namespace chrono;

int main()
{
    time_point<high_resolution_clock> t = high_resolution_clock::now();
    for (int i = 0; i < 10;)
    {
        duration<double, ratio<1, 1000000>> d = high_resolution_clock::now() - t;
        uint32_t a = duration_cast<microseconds>(d).count();
        if (d >= microseconds(500000))
        {
            cout << a << " Hello World!" << endl;
            i++;
            t = high_resolution_clock::now();
        }
    }
    return 0;
}
It didn't really solve the problem, but now the maximum value that appears is ~1500 ms (1500000 in microseconds), and when that happens it takes longer to print the message. I don't know if it's still overflow, to be honest, but...
Question
Anyway, do you have any suggestions on how to stop the overflow, or a completely different way to achieve what I want? Even if you don't, thanks for taking the time to read my question; I hope it also helps anyone else who has the same question.
Not sure if this is what you're looking for or not. But if not, maybe we can build on this to figure out what you want:
#include <chrono>
#include <iostream>

int
main()
{
    using namespace std;
    using namespace std::chrono;
    auto t = steady_clock::now();
    for (int i = 0; i < 10; ++i)
    {
        auto t1 = t + 500ms;
        while (steady_clock::now() < t1)
            ;
        cout << duration<double>(t1 - t).count() << " Hello World!" << endl;
        t = t1;
    }
}
The code sets a time_point for 500ms in the future, and then enters a busy loop until that future time_point is now. (The 500ms literal requires C++14; with C++11, write milliseconds(500) instead.)
I want to be able to measure elapsed time (for frame time) with my Clock class. (Problem described below the code.)
Clock.h
#include <chrono>
#include <cstdint>

typedef std::chrono::high_resolution_clock::time_point timePt;

class Clock
{
    timePt currentTime;
    timePt lastTime;
public:
    Clock();
    void update();
    uint64_t deltaTime();
};
Clock.cpp
#include "Clock.h"
using namespace std::chrono;
Clock::Clock()
{
currentTime = high_resolution_clock::now();
lastTime = currentTime;
}
void Clock::update()
{
lastTime = currentTime;
currentTime = high_resolution_clock::now();
}
uint64_t Clock::deltaTime()
{
microseconds delta = duration_cast<microseconds>(currentTime - lastTime);
return delta.count();
}
When I try to use Clock like so
Clock clock;
while (1) {
    clock.update();
    uint64_t dt = clock.deltaTime();
    for (int i = 0; i < 10000; i++)
    {
        // do something to waste time between updates
        int k = i * dt;
    }
    cout << dt << endl; // time elapsed since last update in microseconds
}
For me it prints "0" about 30 times until it finally prints a number, which is always very close to something like "15625" microseconds (15.625 milliseconds).
My question is: why isn't there anything in between? I'm wondering whether my implementation is wrong or the precision of high_resolution_clock is acting strange. Any ideas?
EDIT: I am using Code::Blocks with the mingw32 compiler on a Windows 8 computer.
EDIT2:
I tried running the following code, which should display the high_resolution_clock precision:
template <class Clock>
void display_precision()
{
    typedef std::chrono::duration<double, std::nano> NS;
    NS ns = typename Clock::duration(1);
    std::cout << ns.count() << " ns\n";
}

int main()
{
    display_precision<std::chrono::high_resolution_clock>();
}
For me it prints "1000 ns", so I guess high_resolution_clock has a precision of 1 microsecond, right? Yet in my tests it seems to have a precision of 16 milliseconds?
What system are you using? (I guess it's Windows? Visual Studio was known to have this problem, now fixed in VS 2015; see the bug report.) On some systems high_resolution_clock is defined as just an alias of system_clock, which can have really low resolution, like the 16 ms you are seeing.
See for example this question.
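You can also check what your implementation aliases high_resolution_clock to with a couple of is_same tests; a small sketch:

#include <chrono>
#include <iostream>
#include <type_traits>

int main()
{
    using hrc = std::chrono::high_resolution_clock;
    std::cout << std::boolalpha
              << "high_resolution_clock is system_clock: "
              << std::is_same<hrc, std::chrono::system_clock>::value << '\n'
              << "high_resolution_clock is steady_clock: "
              << std::is_same<hrc, std::chrono::steady_clock>::value << '\n';
}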
I have the same problem with msys2 on Windows 10: the delta returned is 0 for most of the subfunctions I tested, and then suddenly 15xxx or 24xxx microseconds. I thought there was a problem in my code, since the tutorials don't mention any such problem.
The same goes for difftime(finish, start) in time.h, which often returns 0.
I finally changed all my high_resolution_clock uses to steady_clock, and now I get the proper times:
auto t_start = std::chrono::steady_clock::now();
_cvTracker->track(image); // my function to test
std::cout << "Time taken = " << std::chrono::duration_cast<std::chrono::microseconds>(std::chrono::steady_clock::now() - t_start).count() << " microseconds" << std::endl;
// returns the proper value (or at least a plausible value)
whereas this returns mostly 0:
auto t_start = std::chrono::high_resolution_clock::now();
_cvTracker->track(image); // my function to test
std::cout << "Time taken = " << std::chrono::duration_cast<std::chrono::microseconds>(std::chrono::high_resolution_clock::now() - t_start).count() << " microseconds" << std::endl;
// returns 0 most of the time
difftime does not seem to work either (not surprising, since time() only has one-second resolution, so anything shorter reports 0):
time_t start, finish;
time(&start);
_cvTracker->track(image);
time(&finish);
std::cout << "Time taken= " << difftime(finish, start) << std::endl;
// returns 0 most of the time
I have the following, which stops execution of the program after a certain time.
#include <iostream>
#include <ctime>
using namespace std;

int main()
{
    time_t timer1;
    time(&timer1);
    time_t timer2;
    double second;
    while (1)
    {
        time(&timer2);
        second = difftime(timer2, timer1);
        // check if the time difference has crossed 3 seconds
        if (second > 3)
        {
            return 0;
        }
    }
    return 0;
}
Would the above program still work when the time wraps from 23:59 to 00:01?
Is there any other, better way?
Provided you have C++11, you can have a look at this example:
#include <thread>
#include <chrono>

int main() {
    std::this_thread::sleep_for(std::chrono::seconds(3));
    return 0;
}
Alternatively, I'd go with a threading library of your choice and use its thread-sleep function. In most cases it is better to send your thread to sleep than to busy-wait.
time() returns the time since the Epoch (00:00:00 UTC, January 1, 1970), measured in seconds. That count simply keeps increasing across midnight, so the time of day does not matter.
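To illustrate, difftime just subtracts two seconds-since-the-Epoch values, so a midnight rollover is an ordinary increment. A small sketch with hand-picked epoch values:

#include <ctime>
#include <cstdio>

int main()
{
    // Two instants straddling midnight, as seconds since the Epoch:
    // 86399 = 1970-01-01 23:59:59 UTC, 86401 = 1970-01-02 00:00:01 UTC.
    time_t before_midnight = (time_t)86399;
    time_t after_midnight  = (time_t)86401;
    printf("%.0f seconds apart\n", difftime(after_midnight, before_midnight)); // prints 2
    return 0;
}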
You can use std::chrono::steady_clock in C++11; check the example given for its now static member function:
#include <chrono>
#include <iostream>

int main()
{
    using namespace std::chrono;

    steady_clock::time_point clock_begin = steady_clock::now();

    std::cout << "printing out 1000 stars...\n";
    for (int i = 0; i < 1000; ++i) std::cout << "*";
    std::cout << std::endl;

    steady_clock::time_point clock_end = steady_clock::now();
    steady_clock::duration time_span = clock_end - clock_begin;

    double nseconds = double(time_span.count()) * steady_clock::period::num / steady_clock::period::den;
    std::cout << "It took me " << nseconds << " seconds." << std::endl;
}
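As a side note, the num/den arithmetic in the last step can be written more compactly with a floating-point duration; this fragment (reusing the time_span variable from the snippet above) computes the same value:

// Equivalent to the num/den arithmetic above:
double nseconds = std::chrono::duration<double>(time_span).count();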
I have written a C++ program, and I want to know how to calculate the time taken for execution so I won't exceed the time limit.
#include <iostream>
#include <cstdio>
using namespace std;

int main()
{
    int st[10000], d[10000], p[10000], n, k, km, r, t, ym[10000];
    k = 0;
    km = 0;
    r = 0;
    scanf("%d", &t);
    for (int y = 0; y < t; y++)
    {
        scanf("%d", &n);
        for (int i = 0; i < n; i++)
        {
            cin >> st[i] >> d[i] >> p[i];
        }
        for (int i = 0; i < n; i++)
        {
            for (int j = i + 1; j < n; j++)
            {
                if ((d[i] + st[i]) <= st[j])
                {
                    k = p[i] + p[j];
                }
                if (k > km)
                    km = k;
            }
            if (km > r)
                r = km;
        }
        ym[y] = r;
    }
    for (int i = 0; i < t; i++)
    {
        cout << ym[i] << endl;
    }
    //system("pause");
    return 0;
}
This is my program, and I want it to stay within the 3-second time limit. How do I do that?
Yeah, sorry, I meant execution time!
If you have Cygwin installed, run your executable, say MyProgram, from its bash shell using the time utility, like so:
/usr/bin/time ./MyProgram
This will report how long the execution of your program took: real is elapsed wall-clock time, while user and sys are CPU time spent in user code and in the kernel, respectively. The output would look something like the following:
real 0m0.792s
user 0m0.046s
sys 0m0.218s
You could also manually instrument your C program with the clock() library function, like so:
#include <stdio.h>
#include <time.h>

int main(void) {
    clock_t tStart = clock();
    /* Do your stuff here */
    printf("Time taken: %.2fs\n", (double)(clock() - tStart) / CLOCKS_PER_SEC);
    return 0;
}
With C++11, to measure the execution time of a piece of code we can use the now() function:
auto start = chrono::steady_clock::now();
// Insert the code that will be timed
auto end = chrono::steady_clock::now();
// Store the time difference between start and end
auto diff = end - start;
If you want to print the time difference between start and end in the above code, you could use:
cout << chrono::duration <double, milli> (diff).count() << " ms" << endl;
If you prefer nanoseconds, use:
cout << chrono::duration <double, nano> (diff).count() << " ns" << endl;
The value of the diff variable can also be truncated to an integer count, for example if you want the result expressed as a whole number of nanoseconds:
auto diff_ns = chrono::duration_cast<chrono::nanoseconds>(diff);
cout << diff_ns.count() << endl;
OVERVIEW
I have written a simple semantic hack for this based on @Ashutosh Mehra's answer. Your code looks really readable this way!
MACRO
#include <time.h>
#ifndef SYSOUT_F
#define SYSOUT_F(f, ...) _RPT1( 0, f, __VA_ARGS__ ) // For Visual studio
#endif
#ifndef speedtest__
#define speedtest__(data) for (long blockTime = NULL; (blockTime == NULL ? (blockTime = clock()) != NULL : false); SYSOUT_F(data "%.9fs", (double) (clock() - blockTime) / CLOCKS_PER_SEC))
#endif
USAGE
speedtest__("Block Speed: ")
{
// The code goes here
}
OUTPUT
Block Speed: 0.127000000s
Note: the question was originally about compilation time, but later it turned out that the OP really meant execution time. But maybe this answer will still be useful for someone.
For Visual Studio: go to Tools / Options / Projects and Solutions / VC++ Project Settings and set Build Timing option to 'yes'. After that the time of every build will be displayed in the Output window.
You can try the code below for C++ (note that system_clock follows the wall clock, which can jump if it is adjusted; for measuring intervals, steady_clock is usually the better choice):
#include <chrono>
#include <iostream>

int main() {
    auto start = std::chrono::system_clock::now();
    // Your Code to Execute //
    auto end = std::chrono::system_clock::now();
    std::cout << std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count() << "ms" << std::endl;
}
This looks like Dijkstra's algorithm. In any case, the time taken to run will depend on N. If it takes more than 3 seconds, there isn't any way I can see of speeding it up, as all the calculations it is doing need to be done.
Depending on what problem you're trying to solve, there might be a faster algorithm.
I used the technique described above and found that the time shown in the Code::Blocks IDE was more or less similar to the result obtained (it may differ by a few microseconds).
If you are using C++, you should try the code below, because you will often get 0 as the answer if you directly use @Ashutosh Mehra's answer.
#include <iostream>
#include <time.h>
using namespace std;

int main() {
    int a = 20000, sum = 0;
    clock_t start = clock();
    for (int i = 0; i < a; i++) {
        for (int k = 0; k < a; k++)
            sum += 1;
    }
    cout.precision(10);
    cout << fixed << float(clock() - start) / CLOCKS_PER_SEC << endl;
    return 0;
}
In C++, float and double values are rounded off when printed, so I used cout.precision(10) to set the output precision to 10 digits after the decimal point.
A shorter version of Ashutosh Mehra's answer:
/* including stuff here */
#include <iostream>
#include <time.h>
using namespace std;

int main(void) {
    clock_t tStart = clock();
    /* stuff here */
    cout << "Time taken: " << (double)(clock() - tStart) / CLOCKS_PER_SEC;
    return 0;
}