I am currently trying to create a way to display the elapsed seconds (not the difference between cycles). My code is as follows:
#include <iostream>
#include <vector>
#include <chrono>
#include <Windows.h>
typedef std::chrono::high_resolution_clock::time_point TIME;
#define TIMENOW() std::chrono::high_resolution_clock::now()
#define TIMECAST(x) std::chrono::duration_cast<std::chrono::duration<double>>(x).count()
int main()
{
    std::chrono::duration<double> ms;
    double t = 0;
    while (1)
    {
        TIME begin = TIMENOW();
        int c = 0;
        for (int i = 0; i < 10000000; i++)
        {
            c += i * 100000;
        }
        TIME end = TIMENOW();
        ms = std::chrono::duration_cast<std::chrono::duration<double>>(end - begin);
        t = t + ms.count();
        std::cout << t << std::endl;
    }
}
I expected adding the delta time over and over again to roughly give me the elapsed time in seconds; however, I noticed that it is only fairly accurate when the loop bound is a big number. If it's only 10,000 or so, t seems to accumulate more slowly than real time, only gradually speeding up. Maybe I am missing something, but isn't the difference my delta time (the elapsed time between this cycle and the last), and if I keep adding the delta times up, shouldn't it spit out seconds? Any help is appreciated.
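One way to see where the missing time goes is to also measure against a single fixed start point. Below is a minimal sketch (not the original code; the loop bound and the work inside it are placeholders): the summed deltas only cover the inner loop, while the total also includes the cout call and the loop bookkeeping between one measurement and the next, which is exactly the time that never makes it into t.

#include <chrono>
#include <iostream>

int main()
{
    using clock = std::chrono::high_resolution_clock;
    auto overall_start = clock::now();
    double summed = 0;

    while (true)
    {
        auto begin = clock::now();
        volatile long long c = 0;   // volatile so the busy loop is not optimized away
        for (int i = 0; i < 10000; i++)
            c += i;
        auto end = clock::now();

        summed += std::chrono::duration<double>(end - begin).count();
        double total = std::chrono::duration<double>(clock::now() - overall_start).count();

        // "summed" counts only the measured work; "total" also counts this printing
        // and the loop overhead, so the gap between the two grows over time
        std::cout << "summed: " << summed << "  total: " << total << std::endl;
    }
}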
Update 2: Actually it's the regex(".{40000}") alone that already takes that much time. Why?
regex_match("", regex(".{40000}")); takes almost 8 seconds on my PC. Why? Am I doing something wrong? I'm using gcc 4.9.3 from MinGW on Windows 10 on an i7-6700.
Here's a full test program:
#include <iostream>
#include <regex>
#include <ctime>
using namespace std;
int main() {
    clock_t t = clock();
    regex_match("", regex(".{40000}"));
    cout << double(clock() - t) / CLOCKS_PER_SEC << endl;
}
How I compile and run it:
C:\Users\ ... \coding>g++ -std=c++11 test.cpp
C:\Users\ ... \coding>a.exe
7.643
Update: Looks like the time is quadratic in the given number. Doubling it roughly quadruples the time:
10000 0.520 seconds (factor 1.000)
20000 1.922 seconds (factor 3.696)
40000 7.810 seconds (factor 4.063)
80000 31.457 seconds (factor 4.028)
160000 128.904 seconds (factor 4.098)
320000 536.358 seconds (factor 4.161)
The code:
#include <regex>
#include <ctime>
using namespace std;
int main() {
    double prev = 0;
    for (int i = 10000; ; i *= 2) {
        clock_t t0 = clock();
        regex_match("", regex(".{" + to_string(i) + "}"));
        double t = double(clock() - t0) / CLOCKS_PER_SEC;
        printf("%7d %7.3f seconds (factor %.3f)\n", i, t, prev ? t / prev : 1);
        prev = t;
    }
}
Still no idea why. It's a very simple regex and the empty string (though it's the same with short non-empty strings). It should fail instantly. Is the regex engine just weird and bad?
Because it wants to be fast...
It is very possible that the regex is transformed into another representation (a state machine or something else) that is easier and faster to run. C# even allows generating runtime code that represents the regex.
In your case you probably hit some bug in that transformation that has O(n^2) complexity.
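If the bounded repetition is your whole pattern, a possible workaround (just a sketch, not from the answer above) is to do the length check yourself instead of encoding it in the regex, so the engine never has to build a 40000-step repetition. In the default ECMAScript grammar, . matches any character except a line terminator, so regex_match(s, regex(".{40000}")) is roughly equivalent to:

#include <algorithm>
#include <string>

// hypothetical helper: "exactly n characters, none of them a line terminator"
bool matches_dot_n(const std::string& s, std::size_t n)
{
    return s.size() == n &&
           std::none_of(s.begin(), s.end(),
                        [](char c) { return c == '\n' || c == '\r'; });
}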
Measuring the construction and matching separately:
clock_t t1 = clock();
regex r(".{40000}");
clock_t t2 = clock();
regex_match("", r);
clock_t t3 = clock();
cout << double(t2 - t1) / CLOCKS_PER_SEC << '\n'
<< double(t3 - t2) / CLOCKS_PER_SEC << endl;
I see:
0.077336
0.000613
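Since nearly all of the cost is in constructing the regex object, one obvious mitigation (a sketch consistent with the numbers above, not part of the original answer) is to build the regex once and reuse it for every match:

#include <iostream>
#include <regex>
#include <ctime>
using namespace std;

int main() {
    clock_t t0 = clock();
    regex r(".{40000}");                 // pay the construction cost once
    clock_t t1 = clock();
    for (int i = 0; i < 1000; ++i)
        regex_match("", r);              // each individual match is comparatively cheap
    clock_t t2 = clock();
    cout << "construct:    " << double(t1 - t0) / CLOCKS_PER_SEC << " s\n"
         << "1000 matches: " << double(t2 - t1) / CLOCKS_PER_SEC << " s\n";
}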
I have an app where I must measure the execution time of part of a C++ function and of an ASM function. The problem is that the times I get are weird: either 0 or about 15600, with 0 occurring more often. Sometimes, after a run, the times look fine and the values are different from 0 and ~15600. Does anybody know why this happens, and how to fix it?
The fragment that measures the time of the C++ part of my app:
auto start = chrono::system_clock::now();

for (int i = 0; i < nThreads; i++)
    xThread[i]->Start(i);
for (int i = 0; i < nThreads; i++)
    xThread[i]->Join();

auto elapsed = chrono::system_clock::now() - start;
long long milliseconds = chrono::duration_cast<std::chrono::microseconds>(elapsed).count();
cppTimer = milliseconds;
What you're seeing there is the resolution of your timer. Apparently, chrono::system_clock ticks every 1/64th of a second, or 15,625 microseconds, on your system.
Since you're in C++/CLI and have the .Net library available, I'd switch to using the Stopwatch class. It will generally have a much higher resolution than 1/64th of a second.
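A sketch of what that could look like, using the same thread calls as in the question (assuming the surrounding C++/CLI code compiles as shown):

using namespace System::Diagnostics;

Stopwatch^ sw = Stopwatch::StartNew();      // high-resolution .NET timer

for (int i = 0; i < nThreads; i++)
    xThread[i]->Start(i);
for (int i = 0; i < nThreads; i++)
    xThread[i]->Join();

sw->Stop();
cppTimer = sw->Elapsed.TotalMilliseconds;   // fractional milliseconds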
Looks good to me, except for casting to std::chrono::microseconds and then naming the result milliseconds.
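For reference, a corrected version of that line, either naming it for what it is or actually converting to milliseconds (pick one):

long long microseconds = chrono::duration_cast<chrono::microseconds>(elapsed).count();
// or
long long milliseconds = chrono::duration_cast<chrono::milliseconds>(elapsed).count();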
The snippet I have used for many months now is:
class benchmark {
private:
    typedef std::chrono::high_resolution_clock clock;
    typedef std::chrono::milliseconds milliseconds;
    clock::time_point start;
public:
    benchmark(bool startCounting = true) {
        if (startCounting)
            start = clock::now();
    }
    void reset() {
        start = clock::now();
    }
    // elapsed time in seconds
    double elapsed() {
        milliseconds ms = std::chrono::duration_cast<milliseconds>(clock::now() - start);
        double elapsed_secs = ms.count() / 1000.0;
        return elapsed_secs;
    }
};
// usage
benchmark b;
...
cout << "took " << b.elapsed() << " ms" << endl;
I have an array of booleans, each representing a number. I am printing each one that is true with a for loop: for(unsigned long long l = 0; l < numt; l++) if(primes[l]) cout << l << endl; where numt is the size of the array and is over 1,000,000. The console window takes 30 seconds to print out all the values, but a timer I put in my program says 37 ms. How do I wait for all the values to finish printing on the screen so I can include that in my time?
Try this:
#include <windows.h>
...
int main() {
    //init code
    double startTime = GetTickCount();
    //your loop
    double timeNeededinSec = (GetTickCount() - startTime) / 1000.0;
}
Just in defense of ctime, because it gives the same result as GetTickCount:
#include <ctime>
int main()
{
    ...
    clock_t start = clock();
    ...
    clock_t end = clock();
    double timeNeededinSec = static_cast<double>(end - start) / CLOCKS_PER_SEC;
    ...
}
Update:
And one using time(), though in this case we can lose some precision (~1 sec) because the result is in whole seconds.
#include <ctime>
int main()
{
    time_t start;
    time_t end;
    ...
    time(&start);
    ...
    time(&end);
    int timeNeededinSec = static_cast<int>(end - start);
}
Combining both of them in a simple example will show you the difference in the results. In my tests I saw a difference only in the digits after the decimal point, as in the sketch below.
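A minimal sketch of such a combined example (the loop in the middle is just placeholder work):

#include <ctime>
#include <iostream>

int main()
{
    std::time_t wallStart;
    std::time(&wallStart);
    std::clock_t cpuStart = std::clock();

    for (volatile long i = 0; i < 100000000; ++i) {}   // the code being timed

    std::clock_t cpuEnd = std::clock();
    std::time_t wallEnd;
    std::time(&wallEnd);

    double secondsClock = static_cast<double>(cpuEnd - cpuStart) / CLOCKS_PER_SEC;
    int secondsTime = static_cast<int>(wallEnd - wallStart);

    // typically these agree in whole seconds and differ only after the decimal point
    std::cout << "clock(): " << secondsClock << "  time(): " << secondsTime << std::endl;
}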
I'm trying to code a 'live' timer that runs during a calculation. It should continuously output the seconds since the start of the calculation - something like a progress bar. This is what I have so far:
#include <conio.h>
#include <time.h>
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
    clock_t t;
    t = clock();
    while (!_kbhit())
    {
        t = clock() - t;
        printf("%f", ((float)t) / CLOCKS_PER_SEC);
        system("cls");
    }
    return 0;
}
But there are a few problems:
It's flickering due to the call of system("cls").
The time is far from correct due to the continuous calls of printf().
Is there a rather easy way of doing this with C?
One very simple (and not ideal) way would be to simply stop printing so frequently, so here is untested example code that prints the time once every INTERVAL clocks.
#define INTERVAL 1000

int main(void)
{
    clock_t start = clock();
    clock_t step_time = INTERVAL;
    while (!_kbhit())
    {
        clock_t t = clock() - start;        /* elapsed clocks since the start */
        if (t >= step_time) {
            step_time = t + INTERVAL;
            printf("%f", ((float)t) / CLOCKS_PER_SEC);
            system("cls");
        }
    }
    return 0;
}
You can change the INTERVAL to something smaller.
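If the flicker from system("cls") is still a problem, an alternative (not part of the answer above, just a common trick) is to overwrite the same console line with a carriage return instead of clearing the whole screen:

#include <conio.h>
#include <stdio.h>
#include <time.h>

#define INTERVAL 1000

int main(void)
{
    clock_t start = clock();
    clock_t next = 0;
    while (!_kbhit())
    {
        clock_t t = clock() - start;
        if (t >= next) {
            next = t + INTERVAL;
            printf("\r%f", ((float)t) / CLOCKS_PER_SEC);   /* \r returns to the start of the line */
            fflush(stdout);                                /* no newline, so flush explicitly */
        }
    }
    return 0;
}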
I'm playing around with the new C++ standard. I wrote a test to observe the behavior of scheduling algorithms and to see what happens with threads. Considering context-switch time, I expected the real waiting time of a specific thread to be a bit more than the value specified by std::this_thread::sleep_for(). But surprisingly, it's sometimes even less than the sleep time! I can't figure out why this happens, or what I'm doing wrong...
#include <iostream>
#include <thread>
#include <random>
#include <vector>
#include <functional>
#include <math.h>
#include <unistd.h>
#include <sys/time.h>
void heavy_job()
{
    // here we're doing some kind of time-consuming job..
    int j = 0;
    while (j < 1000)
    {
        int* a = new int[100];
        for (int i = 0; i < 100; ++i)
            a[i] = i;
        delete[] a;
        for (double x = 0; x < 10000; x += 0.1)
            sqrt(x);
        ++j;
    }
    std::cout << "heavy job finished" << std::endl;
}
void light_job(const std::vector<int>& wait)
{
    struct timeval start, end;
    long utime, seconds, useconds;
    std::cout << std::showpos;
    for (std::vector<int>::const_iterator i = wait.begin();
         i != wait.end(); ++i)
    {
        gettimeofday(&start, NULL);
        std::this_thread::sleep_for(std::chrono::microseconds(*i));
        gettimeofday(&end, NULL);
        seconds = end.tv_sec - start.tv_sec;
        useconds = end.tv_usec - start.tv_usec;
        utime = ((seconds) * 1000 + useconds / 1000.0);
        double delay = *i - utime * 1000;
        std::cout << "delay: " << delay / 1000.0 << std::endl;
    }
}
int main()
{
    std::vector<int> wait_times;
    std::uniform_int_distribution<unsigned int> unif;
    std::random_device rd;
    std::mt19937 engine(rd());
    std::function<unsigned int()> rnd = std::bind(unif, engine);
    for (int i = 0; i < 1000; ++i)
        wait_times.push_back(rnd() % 100000 + 1); // random sleep time between 1 and 100,000 µs
    std::thread heavy(heavy_job);
    std::thread light(light_job, wait_times);
    light.join();
    heavy.join();
    return 0;
}
Output on my Intel Core-i5 machine:
.....
delay: +0.713
delay: +0.509
delay: -0.008 // !
delay: -0.043 // !!
delay: +0.409
delay: +0.202
delay: +0.077
delay: -0.027 // ?
delay: +0.108
delay: +0.71
delay: +0.498
delay: +0.239
delay: +0.838
delay: -0.017 // also !
delay: +0.157
Your timing code is causing integral truncation.
utime = ((seconds) * 1000 + useconds/1000.0);
double delay = *i - utime*1000;
Suppose your wait time was 888888 microseconds and you sleep for exactly that amount. seconds will be 0 and useconds will be 888888. After dividing by 1000.0, you get 888.888. Then you add 0*1000, still yielding 888.888. That then gets assigned to a long, leaving you with 888, and an apparent delay of 888.888 - 888 = 0.888.
You should update utime to actually store microseconds so that you don't get the truncation, and also because the name implies that the unit is in microseconds, just like useconds. Something like:
long utime = seconds * 1000000 + useconds;
You've also got your delay calculation backwards. Ignoring the effects of the truncation, it should be:
double delay = utime*1000 - *i;
std::cout << "delay: " << delay/1000.0 << std::endl;
The way you've got it, all the positive delays you're outputting are actually the result of the truncation, and the negative ones represent actual delays.
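Putting both fixes together, the timing part of light_job (same variables as in the question) would look something like:

gettimeofday(&start, NULL);
std::this_thread::sleep_for(std::chrono::microseconds(*i));
gettimeofday(&end, NULL);

seconds = end.tv_sec - start.tv_sec;
useconds = end.tv_usec - start.tv_usec;
long utime = seconds * 1000000 + useconds;   // elapsed time, genuinely in microseconds

double delay = utime - *i;                   // how much longer than requested we actually slept
std::cout << "delay: " << delay / 1000.0 << std::endl;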