When I ran the code from this page (high_precision_timer), I found that my system only supports microsecond precision. According to the documentation,
cout << chrono::high_resolution_clock::period::den << endl;
Note that there isn't a guarantee of how many ticks per second the clock
has, only that it's the highest available. Hence, the first thing we
do is get the precision, by printing how many times a second
the clock ticks. My system provides 1000000 ticks per second, which is
microsecond precision.
I am getting exactly the same value, 1000000 ticks per second, which means my system also supports only microsecond precision.
Every time I run any program, I always get a value of xyz microseconds and xyz000 nanoseconds. I think my system's lack of nanosecond support may be the reason.
Is there any way to make my system support nanosecond precision?
This is not an answer, but it is too long to post as a comment.
I just tested your example.
My system's output was:
chrono::high_resolution_clock::period::den = 1000000000
My system provides 1000000000 ticks per second, which is nanosecond precision.
Not 1000000 (microseconds).
Your system provides 1000000 ticks per second, which is microsecond precision.
So I don't know how to help you. Sorry.
#include <iostream>
#include <chrono>
using namespace std;

int main()
{
    // Print how many times per second the clock ticks.
    cout << chrono::high_resolution_clock::period::den << endl;

    auto start_time = chrono::high_resolution_clock::now();
    volatile int temp = 0;  // volatile so the busy loop is not optimized away; initialized to avoid undefined behavior
    for (int i = 0; i < 242000000; i++)
        temp += temp;
    auto end_time = chrono::high_resolution_clock::now();

    cout << "sec = " << chrono::duration_cast<chrono::seconds>(end_time - start_time).count() << ":" << std::endl;
    cout << "micro = " << chrono::duration_cast<chrono::microseconds>(end_time - start_time).count() << ":" << std::endl;
    cout << "nano = " << chrono::duration_cast<chrono::nanoseconds>(end_time - start_time).count() << ":" << std::endl;
    return 0;
}
Consider this:
Most processors today operate at a frequency of about 1 to 3 GHz, i.e. say 2 * 10^9 Hz,
which means one tick every 0.5 nanoseconds at the processor level. So I would guess your chances are very, very slim.
Edit:
Though the documentation is still sparse on this, I remember reading that it accesses the RTC of the CPU (not sure), whose frequency is fixed. Also, as a piece of advice, I think measuring performance in nanoseconds has little advantage compared to measuring in microseconds (unless it's for medical use ;) ).
Also take a look at this question and its answer; I think it will make more sense:
HPET's frequency vs CPU frequency for measuring time
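For what it's worth, here is a minimal sketch (my own illustration, not from the linked page) that prints the advertised tick rate of each standard clock, so you can see which clock high_resolution_clock aliases on your implementation. It assumes period::num == 1, which is the usual case:

#include <iostream>
#include <chrono>

// Prints the ticks-per-second advertised by each standard clock.
// The values are implementation-specific, so your output may differ.
template <class Clock>
void print_ticks(const char* name)
{
    std::cout << name << ": " << Clock::period::den << " ticks per second\n";
}

int main()
{
    print_ticks<std::chrono::system_clock>("system_clock");
    print_ticks<std::chrono::steady_clock>("steady_clock");
    print_ticks<std::chrono::high_resolution_clock>("high_resolution_clock");
}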
Related
I am trying to measure execution time.
I'm on Windows 10 and use the gcc compiler.
start_t = chrono::system_clock::now();
tree->insert();
end_t = chrono::system_clock::now();
rslt_period = chrono::duration_cast<chrono::nanoseconds>(end_t - start_t);
This is my code to measure the time of bp_w->insert().
The insert function works internally as follows (just pseudocode):
insert() {
    _load_node(node);
    // do something //
    _save_node(node, addr);
}

_save_node(n) {
    ofstream file(name);
    file.write(n);
    file.close();
}

_load_node(n, addr) {
    ifstream file(name);
    file.read_from(n, addr);
    file.close();
}
The actual results are below;
read is the number of _load_node executions,
write is the number of _save_node executions,
and time is in nanoseconds.
read write time
1 1 1000000
1 1 0
2 1 0
1 1 0
1 1 0
1 1 0
2 1 0
1 1 1004000
1 1 1005000
1 1 0
1 1 0
1 1 15621000
I don't have any idea why these results come out this way and would like to know.
What you are trying to measure is ill-defined.
"How long did this code take to run" can seem simple. In practice, though, do you mean "how many CPU cycles my code took" ? Or how many cycles between my program and the other running programs ? Do you account for the time to load/unload it on the CPU ? Do you account for the CPU being throttled down when on battery ? Do you want to account for the time to access the main clock located on the motherboard (in terms of computation that is extremely far).
So, in practice timing will be affected by a lot of factors and the simple fact of measuring it will slow everything down. Don't expect nanosecond accuracy. Micros, maybe. Millis, certainly.
So, that leaves you in a position where any measurement will fluctuate a lot. The sane way is to average it out over multiple measurements. Or, even better, do the same operation (on different data) a thousand (a million?) times and divide the results by that count.
Then, you'll get a significant improvement in accuracy.
In code:
start_t = chrono::system_clock::now();
for(int i = 0; i < 1000000; i++)
    tree->insert();
end_t = chrono::system_clock::now();
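To make the averaging step concrete, here is a hedged sketch of what follows the loop (variable names as in the snippet above; the cast to double keeps the fractional part of the average):

auto total_ns = chrono::duration_cast<chrono::nanoseconds>(end_t - start_t);
double avg_ns = static_cast<double>(total_ns.count()) / 1000000.0;  // per-insert average
cout << "average per insert: " << avg_ns << " ns" << endl;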
You are using the wrong clock. system_clock is not useful for timing intervals, due to its low resolution and its non-monotonic nature (it can jump when the system time is adjusted).
Use steady_clock instead. It is guaranteed to be monotonic and to have a fine enough resolution to be useful.
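For illustration, a minimal sketch of the question's measurement with only the clock swapped (using auto avoids having to redeclare the time_point types):

auto start_t = chrono::steady_clock::now();
tree->insert();
auto end_t = chrono::steady_clock::now();
auto rslt_period = chrono::duration_cast<chrono::nanoseconds>(end_t - start_t);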
I wrote the following code using Howard Hinnant's date.h library to compute the fractional day of the year for the current time. I was wondering if there is a shorter way of doing it, because my code feels like overkill in terms of std::chrono and date calls. Can I directly calculate the number of fractional days since the start of the year (at microsecond precision) and avoid my two-step approach?
#include <iostream>
#include <chrono>
#include "date.h"
int main()
{
    // Get actual time.
    auto now = std::chrono::system_clock::now();

    // Get the number of days since the start of the year.
    auto ymd = date::year_month_day( date::floor<date::days>(now) );
    auto ymd_ref = date::year{ymd.year()}/1/1;
    int days = (date::sys_days{ymd} - date::sys_days{ymd_ref}).count();

    // Get the fractional number of seconds of the day.
    auto microseconds = std::chrono::duration_cast<std::chrono::microseconds>(now - date::floor<date::days>(now));
    double seconds_since_midnight = 1e-6 * microseconds.count();

    // Get fractional day number.
    std::cout << "Fractional day of the year: " << days + seconds_since_midnight / 86400. << std::endl;
    return 0;
}
Good question (upvoted).
I think first we need to decide on what the right answer is. There's your answer, and currently the only other answer is Matteo's. For demonstration purposes, I've modified both answers to substitute in a "fake now" so that we can compare apples to apples:
using namespace std::chrono_literals;
auto now = date::sys_days{date::March/27/2019} + 0h + 32min + 22s + 123456us;
(approximately now at the time I'm writing this)
Chiel's code gives:
Fractional day of the year: 85.0225
Matteo's code gives:
Fractional day of the year: 85.139978280740735
They are close, but not close enough to both be considered right.
Matteo's code works with "average years":
auto this_year = date::floor<date::years>(now);
The length of a date::years is 365.2425 days, which is exactly right if you average all civil years over a 400 year period. And working with the average year length can be very useful, especially when dealing with systems that don't care about human made calendars (e.g. physics or biology).
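As a quick compile-time illustration of that (my addition, using std::ratio_divide from <ratio>):

using days_per_year = std::ratio_divide<date::years::period, date::days::period>;
static_assert(days_per_year::num == 146097 && days_per_year::den == 400,
              "date.h uses the Gregorian average of 146097/400 = 365.2425 days per year");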
I'm going to guess that, because of the way Chiel's code is written, he would prefer a result that refers more precisely to this specific year. Therefore the code presented below is Chiel's algorithm, producing exactly the same result, only slightly more efficient and concise.
// Get actual time.
auto now = std::chrono::system_clock::now();
// Get the number of days since start of the year.
auto sd = date::floor<date::days>(now);
auto ymd = date::year_month_day( sd );
auto ymd_ref = ymd.year()/1/1;
std::chrono::duration<double, date::days::period> days = sd - date::sys_days{ymd_ref};
// Get the fractional number of seconds of the day.
days += now - sd;
// Get fractional day number.
std::cout << "Fractional day of the year: " << days.count() << std::endl;
The first thing I noted was that date::floor<date::days>(now) was being computed in 3 places, so I'm computing it once and saving it in sd.
Next, since the final answer is a double-based representation of days, I'm going to let <chrono> do that work for me by storing the answer in a duration<double, days::period>. Any time you find yourself converting units, it is better to let <chrono> do it for you. It probably won't be faster. But it definitely won't be slower, or wrong.
Now it is a simple matter to add the fractional day to the result:
days += now - sd;
using whatever precision now has (microseconds or whatever). And the result is now simply days.count().
Update
And with just a little bit more time to reflect ...
I noticed that with the simplified code above, one can more easily see the entire algorithm as a single expression. That is (removing namespace qualification in order to get everything on one line):
duration<double, days::period> days = sd - sys_days{ymd_ref} + now - sd;
And this clearly algebraically simplifies down to:
duration<double, days::period> days = now - sys_days{ymd_ref};
In summary:
using namespace std::chrono;
using namespace date;
// Get actual time.
auto now = system_clock::now();
// Get the start of the year and subract it from now.
using ddays = duration<double, days::period>;
ddays fd = now - sys_days{year_month_day{floor<days>(now)}.year()/1/1};
// Get fractional day number.
std::cout << "Fractional day of the year: " << fd.count() << '\n';
In this case, letting <chrono> do the conversions for us, allowed the code to be sufficiently simplified such that the algorithm itself could be algebraically simplified, resulting in cleaner and more efficient code that is provably equivalent to the original algorithm in the OP's question.
I wrote a program that should print a random number (1-10) every five seconds within a ten-second timeframe. But it seems to be printing more than one random number every five seconds. Could anyone point me in the right direction?
clock_t start;
int random;
start = clock();
while (float(clock() - start) / CLOCKS_PER_SEC <= 10.0) {
    if (fmod(float(clock() - start) / CLOCKS_PER_SEC, 5) == 0 && (float(clock() - start) / CLOCKS_PER_SEC) != 0) {
        random = rand() % 10 + 1;
        cout << random << endl;
    }
}
return 0;
EDIT: I felt this answer was incomplete because it did not answer your actual question. The first part now explains why your approach fails; the second part is about how to solve your problem in a better way.
You are using clock() in a way where you wait for a number of specific points in time. Due to the nature of clock() and the limited precision of float, your check is basically equivalent to asking: are we in a window [x-eps, x+eps], where x is a multiple of 5 and eps is generally small and depends on the floating-point type used and on how big (clock() - start) is? A way to increase eps is to add a constant like 1e6 to (clock() - start). If floating-point numbers were precise, that should not affect your logic, because 1e6 is a multiple of 5, but in fact it will do so drastically.
On a fast machine, that condition can be true multiple times every 5 seconds; on a slow machine it may not be true every time 5 seconds pass.
The correct way to implement it is shown further below; but if you wanted to stay with a polling approach (like you do currently), you would have to increment start by 5 * CLOCKS_PER_SEC in your if-block and change the condition to something like (clock() - start) / CLOCKS_PER_SEC >= 5.
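For completeness, a minimal sketch of that corrected polling approach (still busy-waiting, so the sleep-based version below remains preferable):

clock_t start = clock();
const clock_t stop = start + 10 * CLOCKS_PER_SEC;
while (clock() < stop) {
    if (clock() - start >= 5 * CLOCKS_PER_SEC) {
        start += 5 * CLOCKS_PER_SEC;   // advance the reference point by 5 seconds
        cout << (rand() % 10 + 1) << endl;
    }
}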
Apart from the clock()-specific issues that you have, I want to remind you that clock() measures CPU time (ticks) and is hardly a reliable way to measure wall time. Fortunately, in modern C++ we have std::chrono:
auto t = std::chrono::steady_clock::now();
auto end = t + std::chrono::seconds( 10 );
while( t < end )
{
    t += std::chrono::seconds( 5 );
    std::this_thread::sleep_until( t );
    std::cout << ( rand() % 10 + 1 ) << std::endl;
}
I also highly recommend replacing rand() with the more modern tools in <random>, e.g.:
std::random_device rd; // Hopefully a good source of entropy; used for seeding.
std::default_random_engine gen( rd() ); // Faster pseudo-random source.
std::uniform_int_distribution<> dist( 1, 10 ); // Specify the kind of random stuff that you want.
int random = dist( gen ); // equivalent to rand() % 10 + 1.
Your code seems to be fast enough, and your calculation precision coarse enough, that you execute multiple iterations before the number you are calculating changes. Thus, when the condition matches, it matches several times in a row.
However, this is not a good way to do this, as you are making your computer work very hard. This way of waiting puts a rather severe load on one processor core, potentially slowing down your computer, and definitely draining more power. If you're on a quad-core desktop it is not that bad, but for a laptop it's hell on batteries. Instead of asking your computer "is it time yet? is it time yet? is it time yet?" as fast as you can, trust that your computer knows how to wait, and use sleep, usleep, sleep_for, or whatever the library you're using calls it.
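For instance, a minimal sketch with std::this_thread::sleep_for (my example; it needs <thread>):

#include <thread>

for (int i = 0; i < 2; ++i) {
    std::this_thread::sleep_for(std::chrono::seconds(5));
    std::cout << (rand() % 10 + 1) << std::endl;
}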
I have a loop, and in every iteration I get the current number of seconds the application has been running. I then want to convert this time into how many days, hours, and seconds those seconds correspond to, but not in 'real time': I need to be able to customize how many seconds are in a day. I have tried examples on SO and the web, but nothing seems to be out there for this. I have some defines:
#define DAY 1200.0f           // one fake day lasts 1200 real seconds (float, so the divisions below don't truncate)
#define HOUR (DAY / 24)       // 50 real seconds
#define MINUTE (HOUR / 60)    // ~0.833333 real seconds
#define SECOND (MINUTE / 60)  // ~0.013889 real seconds
So with my defines, a day would last for 1200 seconds. I have then been trying to convert elapsed seconds into 'my' seconds:
seconds_passed = fmodf(SECOND, (float)(GetTicks() / 1000));
This returns whatever SECOND equals (0.013889), but then every loop is the same; it never changes. I was thinking I would just be able to convert, for example, 1 real second into 1.25 fake seconds, and then:
Minute = (seconds_passed / MINUTE);
seconds_passed = fmodf(seconds_passed, MINUTE);
to work out how many (fake) minutes, (fake) hours and (fake) days have elapsed since the application started.
Hope that makes sense; thank you for your time.
Since you want to customise how many seconds are in a day, all you're really doing is changing the ratio of 1 fake second : 1 real second.
For instance, if you use 1200 seconds in a day, your ratio is:
1:72
That is, every 1 second that passes in your day is the equivalent of 72 real seconds.
So basically all you need to do in your program is find the ratio of one fake second to one real second, divide your elapsed real seconds by that ratio to get the 'fake' seconds, and then use that value...
The code may look something like this:
// get the ratio second:fake_second
#define REAL_DAY_SECONDS 86400
int ratio = REAL_DAY_SECONDS / DAY;          // 86400 / 1200 = 72
double fake_to_real = fake_second * ratio;   // fake seconds -> real seconds
double real_to_fake = real_second / ratio;   // real seconds -> fake seconds
You can make your own time durations with one line in chrono:
using fake_seconds = std::chrono::duration<float, std::ratio<72,1>>;
Some sample code
#include <iostream>
#include <chrono>

using namespace std::chrono_literals;
using fake_seconds = std::chrono::duration<float, std::ratio<72,1>>;

int main()
{
    auto f_x = fake_seconds(350s);
    std::cout << "350 real seconds are:\n" << f_x.count() << " fake_seconds\n";
}
https://godbolt.org/z/f5G86avxr
Hey guys, I'm trying to time some search functions I wrote, in microseconds, and the run needs to take long enough to show 2 significant digits. I wrote this code to time my search function, but it seems to go too fast: I always end up getting 0 microseconds, unless I run the search 5 times, in which case I get 1,000,000 microseconds. I'm wondering if I did my math wrong when converting the time to microseconds, or if there's some kind of formatting function I can use to force it to display two significant figures?
clock_t start = clock();
index = sequentialSearch.Sequential(TO_SEARCH);
index = sequentialSearch.Sequential(TO_SEARCH);
clock_t stop = clock();
cout << "number found at index " << index << endl;
int time = (stop - start)/CLOCKS_PER_SEC;
time = time * SEC_TO_MICRO;
cout << "time to search = " << time<< endl;
You are using integer division on this line:
int time = (stop - start)/CLOCKS_PER_SEC;
I suggest using a double or float type, and you'll likely need to cast the components of the division.
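A minimal sketch of that fix (SEC_TO_MICRO is assumed to be 1000000, as implied by the question):

double elapsed = static_cast<double>(stop - start) / CLOCKS_PER_SEC;  // seconds, with the fractional part preserved
double micros = elapsed * SEC_TO_MICRO;
cout << "time to search = " << micros << endl;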
Use QueryPerformanceCounter and QueryPerformanceFrequency, assuming you're on the Windows platform.
Here is a link to the MS KB: How To Use QueryPerformanceCounter to Time Code
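For illustration, a minimal sketch of that approach (Windows only; error checking omitted; the search call is the one from the question):

#include <windows.h>

LARGE_INTEGER freq, t0, t1;
QueryPerformanceFrequency(&freq);   // counts per second
QueryPerformanceCounter(&t0);
index = sequentialSearch.Sequential(TO_SEARCH);
QueryPerformanceCounter(&t1);
double micros = (t1.QuadPart - t0.QuadPart) * 1000000.0 / freq.QuadPart;
cout << "time to search = " << micros << " microseconds" << endl;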