Timing a function in microseconds - C++

Hey guys, I'm trying to time some search functions I wrote in microseconds, and the run needs to take long enough to show 2 significant digits. I wrote this code to time my search function, but it seems to go too fast: I always end up getting 0 microseconds, unless I run the search 5 times, in which case I get 1,000,000 microseconds. I'm wondering if I did my math wrong to get the time in microseconds, or if there's some kind of formatting function I can use to force it to display two sig figs?
clock_t start = clock();
index = sequentialSearch.Sequential(TO_SEARCH);
index = sequentialSearch.Sequential(TO_SEARCH);
clock_t stop = clock();
cout << "number found at index " << index << endl;
int time = (stop - start)/CLOCKS_PER_SEC;
time = time * SEC_TO_MICRO;
cout << "time to search = " << time<< endl;

You are using integer division on this line:
int time = (stop - start)/CLOCKS_PER_SEC;
I suggest using a double or float type, and you'll likely need to cast the components of the division.
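For example, a minimal sketch of your snippet with the division done in double, reusing your existing names (I'm assuming SEC_TO_MICRO is 1000000 in your code):
clock_t start = clock();
index = sequentialSearch.Sequential(TO_SEARCH);
clock_t stop = clock();
cout << "number found at index " << index << endl;
// Cast before dividing so the division happens in double, not in integers.
double seconds = static_cast<double>(stop - start) / CLOCKS_PER_SEC;
double micros = seconds * SEC_TO_MICRO;  // assumes SEC_TO_MICRO == 1000000
cout << "time to search = " << micros << " microseconds" << endl;
Note that clock() itself often advances in fairly coarse steps, so a single fast search can still legitimately report 0; running the search many times in a loop and dividing by the iteration count is a common workaround.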

Use QueryPerformanceCounter and QueryPerformanceFrequency, assuming you're on the Windows platform.
Here's a link to the MS KB article: How To Use QueryPerformanceCounter to Time Code
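A rough sketch of that approach (Windows only; error handling omitted, and the section being timed is a placeholder):
#include <windows.h>
#include <iostream>

int main()
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);   // counts per second
    QueryPerformanceCounter(&t0);
    // ... code to time goes here ...
    QueryPerformanceCounter(&t1);
    // Convert elapsed counts to microseconds using the reported frequency.
    double microseconds = (t1.QuadPart - t0.QuadPart) * 1000000.0 / freq.QuadPart;
    std::cout << "time to search = " << microseconds << " microseconds\n";
    return 0;
}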

Related

Fractional day of the year computation in C++14

I wrote the following code using Howard Hinnant's date.h library, to compute the fractional day of the year of the current time. I was wondering if there are shorter ways of doing it, because my code feels like an overkill of std::chrono and date calls. Can I directly calculate the number of fractional days since the start of the year (at microsecond precision) and avoid my two-step approach?
#include <iostream>
#include <chrono>
#include "date.h"
int main()
{
    // Get actual time.
    auto now = std::chrono::system_clock::now();
    // Get the number of days since start of the year.
    auto ymd = date::year_month_day( date::floor<date::days>(now) );
    auto ymd_ref = date::year{ymd.year()}/1/1;
    int days = (date::sys_days{ymd} - date::sys_days{ymd_ref}).count();
    // Get the fractional number of seconds of the day.
    auto microseconds = std::chrono::duration_cast<std::chrono::microseconds>(now - date::floor<date::days>(now));
    double seconds_since_midnight = 1e-6*microseconds.count();
    // Get fractional day number.
    std::cout << "Fractional day of the year: " << days + seconds_since_midnight / 86400. << std::endl;
    return 0;
}
Good question (upvoted).
I think first we need to decide on what the right answer is. There's your answer, and currently the only other answer is Matteo's. For demonstration purposes, I've modified both answers to substitute in a "fake now" so that we can compare apples to apples:
using namespace std::chrono_literals;
auto now = date::sys_days{date::March/27/2019} + 0h + 32min + 22s + 123456us;
(approximately now at the time I'm writing this)
Chiel's code gives:
Fractional day of the year: 85.0225
Matteo's code gives:
Fractional day of the year: 85.139978280740735
They are close, but not close enough to both be considered right.
Matteo's code works with "average years":
auto this_year = date::floor<date::years>(now);
The length of a date::years is 365.2425 days, which is exactly right if you average all civil years over a 400 year period. And working with the average year length can be very useful, especially when dealing with systems that don't care about human made calendars (e.g. physics or biology).
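As a quick illustration of that 365.2425 figure, here is a minimal sketch using the same date.h library (the alias ddays is mine):
#include <chrono>
#include <iomanip>
#include <iostream>
#include "date.h"

int main()
{
    // One average civil year expressed in double-based days: prints 365.2425.
    using ddays = std::chrono::duration<double, date::days::period>;
    std::cout << std::setprecision(7) << ddays{date::years{1}}.count() << '\n';
    return 0;
}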
I'm going to guess that because of the way Chiel's code is written, he would prefer a result that refers more precisely to this specific year. Therefore the code presented below is Chiel's algorithm, producing exactly the same result, only slightly more efficient and concise.
// Get actual time.
auto now = std::chrono::system_clock::now();
// Get the number of days since start of the year.
auto sd = date::floor<date::days>(now);
auto ymd = date::year_month_day( sd );
auto ymd_ref = ymd.year()/1/1;
std::chrono::duration<double, date::days::period> days = sd - date::sys_days{ymd_ref};
// Get the fractional number of seconds of the day.
days += now - sd;
// Get fractional day number.
std::cout << "Fractional day of the year: " << days.count() << std::endl;
The first thing I noted was that date::floor<date::days>(now) was being computed in 3 places, so I'm computing it once and saving it in sd.
Next, since the final answer is a double-based representation of days, I'm going to let <chrono> do that work for me by storing the answer in a duration<double, days::period>. Any time you find yourself converting units, it is better to let <chrono> do it for you. It probably won't be faster. But it definitely won't be slower, or wrong.
Now it is a simple matter to add the fractional day to the result:
days += now - sd;
using whatever precision now has (microseconds or whatever). And the result is now simply days.count().
Update
And with just a little bit more time to reflect ...
I noticed that with the simplified code above, one can more easily see the entire algorithm as a single expression. That is (removing namespace qualification in order to get everything on one line):
duration<double, days::period> days = sd - sys_days{ymd_ref} + now - sd;
And this clearly algebraically simplifies down to:
duration<double, days::period> days = now - sys_days{ymd_ref};
In summary:
using namespace std::chrono;
using namespace date;
// Get actual time.
auto now = system_clock::now();
// Get the start of the year and subtract it from now.
using ddays = duration<double, days::period>;
ddays fd = now - sys_days{year_month_day{floor<days>(now)}.year()/1/1};
// Get fractional day number.
std::cout << "Fractional day of the year: " << fd.count() << '\n';
In this case, letting <chrono> do the conversions for us, allowed the code to be sufficiently simplified such that the algorithm itself could be algebraically simplified, resulting in cleaner and more efficient code that is provably equivalent to the original algorithm in the OP's question.

Program That Prints Every (n) Seconds

I wrote a program that should print a random number (1-10) every five seconds within a ten-second timeframe. But it seems to be printing more than one random number every five seconds. Could anyone point me in the right direction?
clock_t start;
int random;
start = clock();
while (float(clock() - start) / CLOCKS_PER_SEC <= 10.0) {
    if (fmod(float(clock() - start) / CLOCKS_PER_SEC, 5) == 0 && (float(clock() - start) / CLOCKS_PER_SEC) != 0) {
        random = rand() % 10 + 1;
        cout << random << endl;
    }
}
return 0;
EDIT: I felt this answer was incomplete, because it does not answer your actual question. The first part now explains why your approach fails, the second part is about how to solve your problem in a better way.
You are using clock() in a way where you wait for a number of specific points in time. Due to the nature of clock() and the limited precision of float, your check is basically equivalent to asking: are we in a window [x-eps, x+eps], where x is a multiple of 5 and eps is generally small and depends on the floating point type used and on how big (clock() - start) is? A way to increase eps is to add a constant like 1e6 to (clock() - start). If floating point numbers were precise, that should not affect your logic, because 1e6 is a multiple of 5, but in fact it will do so drastically.
On a fast machine, that condition can be true multiple times every 5 seconds; on a slow machine it may not be true every time 5 seconds have passed.
The correct way to implement it is shown further below; but if you wanted to do it using a polling approach (like you do currently), you would have to increment start by 5 * CLOCKS_PER_SEC in your if-block and change the condition to something like (clock() - start) / CLOCKS_PER_SEC >= 5.
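For illustration, a minimal sketch of that polling variant (I keep a separate begin for the overall 10-second window; it still busy-waits and still relies on clock(), with the caveats noted below):
#include <cstdlib>
#include <ctime>
#include <iostream>

int main()
{
    const clock_t begin = clock();   // fixed start of the 10-second window
    clock_t mark = begin;            // moving reference for the next 5-second tick
    while (float(clock() - begin) / CLOCKS_PER_SEC <= 10.0f) {
        if ((clock() - mark) / CLOCKS_PER_SEC >= 5) {
            std::cout << (std::rand() % 10 + 1) << std::endl;
            mark += 5 * CLOCKS_PER_SEC;   // jump ahead so the condition is true only once per tick
        }
    }
    return 0;
}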
Apart from the clock()-specific issues that you have, I want to remind you that it measures CPU time or ticks and is hardly a reliable way to measure wall time. Fortunately, in modern C++, we have std::chrono:
// Needs <chrono>, <thread>, <iostream> and <cstdlib>.
auto t = std::chrono::steady_clock::now();
auto end = t + std::chrono::seconds( 10 );
while( t < end )
{
    t += std::chrono::seconds( 5 );
    std::this_thread::sleep_until( t );
    std::cout << ( rand() % 10 + 1 ) << std::endl;
}
I also highly recommend replacing rand() with the more modern tools in <random>, e.g.:
std::random_device rd; // Hopefully a good source of entropy; used for seeding.
std::default_random_engine gen( rd() ); // Faster pseudo-random source.
std::uniform_int_distribution<> dist( 1, 10 ); // Specify the kind of random stuff that you want.
int random = dist( gen ); // equivalent to rand() % 10 + 1.
Your code runs fast enough, and the precision of your calculation is coarse enough, that multiple iterations happen before the number you are calculating changes. Thus, when the condition matches, it will match several times in a row.
However, this is not a good way to do this, as you are making your computer work very hard. This way of waiting will put a rather severe load on one processor, potentially slowing down your computer, and definitely draining more power. If you're on a quad-core desktop it is not that bad, but for a laptop it's hell on batteries. Instead of asking your computer "is it time yet? is it time yet? is it time yet?" as fast as you can, trust that your computer knows how to wait, and use sleep, usleep, sleep_for, or whatever the library you're using calls it. See here for an example.
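For instance, a minimal sketch of that sleeping approach with std::this_thread::sleep_for (assuming the standard <thread> and <chrono> headers are available):
#include <chrono>
#include <cstdlib>
#include <iostream>
#include <thread>

int main()
{
    // Two 5-second waits fit in the 10-second window.
    for (int i = 0; i < 2; ++i) {
        std::this_thread::sleep_for(std::chrono::seconds(5));  // let the OS do the waiting
        std::cout << (std::rand() % 10 + 1) << std::endl;
    }
    return 0;
}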

Trouble with output function C++

I am having trouble getting this program to output properly. It simulates a drunken sailor on a board who randomly goes one step to the left or right. At the end of the simulation, the program outputs the percentage of times he fell off the board vs. not falling off. My percentage is always zero, and I can't figure out what's wrong with my code.
This function correctly outputs the "experiments" and "fallCount" variable, but always displays "fallCount / experiments" as zero.
This should read "After 2 experiments, sailor fell 1 time, fall percentage was 0.5%"
(if experiments = 2 and fallCount = 1); instead, it's 0% every time.
Let me know what I am doing wrong. Thank you!
void outputExperimentStats(int experiments, int fallCount)
{
cout << "After " << experiments << " experiments, sailor fell "
<< fallCount << " time, fall percentage was " << fallCount / experiments << "%\n";
}
That is because you are using integer division. There are no decimals, so things get truncated. E.g.
1 / 2 --> 0 // integer division
This is correct, and expected behavior.
To get the behavior you want, use double or float.
1.0 / 2.0 --> 0.5 // double division
In your example, you can either change the types of your inputs to double, or, if you want to keep them int, convert them during the division:
static_cast<double>(fallCount) / static_cast<double>(experiments)
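Putting that into your function, a minimal sketch (keeping your expected output format, where the printed value is the fraction fallCount / experiments):
void outputExperimentStats(int experiments, int fallCount)
{
    // The cast forces floating-point division, so 1 / 2 becomes 0.5 instead of 0.
    cout << "After " << experiments << " experiments, sailor fell "
         << fallCount << " time, fall percentage was "
         << static_cast<double>(fallCount) / experiments << "%\n";
}
Casting only one operand is enough; the other is converted to double automatically before the division.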

How to make my system support nanosecond precision

When I run the code from this page, high_precision_timer, I found that my system only supports microsecond precision.
As per the document,
cout << chrono::high_resolution_clock::period::den << endl;
Note that there isn't a guarantee how many ticks per second it has, only that it's the highest available. Hence, the first thing we do is to get the precision, by printing how many times a second the clock ticks. My system provides 1000000 ticks per second, which is microsecond precision.
I am also getting exactly the same value, 1000000 ticks per second. That means my system also supports microsecond precision.
Every time I run any program, I always get values like xyz microseconds and xyz000 nanoseconds. I think my system's lack of nanosecond support may be the reason.
Is there any way to make my system support nanoseconds?
This is not really an answer; I cannot print a long message in a comment.
I just tested your example, and my system's output was:
chrono::high_resolution_clock::period::den = 1000000000.
My system provides 1000000000 ticks per second, which is nanosecond precision, not 1000000 (microseconds).
Your system provides 1000000 ticks per second, which is microsecond precision.
So, I don't know how to help you. Sorry.
#include <iostream>
#include <chrono>
using namespace std;
int main()
{
    cout << chrono::high_resolution_clock::period::den << endl;
    auto start_time = chrono::high_resolution_clock::now();
    int temp = 0;  // initialized to avoid undefined behaviour
    for (int i = 0; i < 242000000; i++)
        temp += temp;
    auto end_time = chrono::high_resolution_clock::now();
    cout << "sec = " << chrono::duration_cast<chrono::seconds>(end_time - start_time).count() << ":" << std::endl;
    cout << "micro = " << chrono::duration_cast<chrono::microseconds>(end_time - start_time).count() << ":" << std::endl;
    cout << "nano = " << chrono::duration_cast<chrono::nanoseconds>(end_time - start_time).count() << ":" << std::endl;
    return 0;
}
Consider this: most processors today operate at a frequency of about 1 to 3 GHz, i.e. say 2 * 10^9 Hz, which means 1 tick every 0.5 nanoseconds at the processor level. So I would guess your chances are very slim.
Edit:
Though the documentation is still sparse on this, I remember reading that it accesses the RTC of the CPU (not sure), whose frequency is fixed.
Also, as a piece of advice, I think measuring performance in nanoseconds has little advantage compared to measuring in microseconds (unless it's for medical use ;) ).
Also take a look at this question and its answer; I think it will make more sense:
HPET's frequency vs CPU frequency for measuring time

Understanding clock() and CLOCKS_PER_SEC in C++

I am interested in accurately timing a C++ application. There seem to be multiple definitions of "time", but for the sake of this question... I am interested in the time that I would count on my watch in the real world... if that makes any sense! Anyway, in my application, my start time is done like this:
clock_t start = clock();
.
.
. // some statements
.
.
clock_t end = clock();
.
.
double duration = end - start;
.
.
cout << CLOCKS_PER_SEC << endl;
start is equal to 184000
end is equal to 188000
CLOCKS_PER_SEC is equal to 1000000
Does this mean the duration (in seconds) is equal to 4000/1000000? If so, this would mean the duration is 0.004 seconds? Is there a more accurate way of measuring this?
Thank you
Try this to find the time with nanosecond precision:
#include <time.h>  /* POSIX clock_gettime and struct timespec */

struct timespec start, end;
clock_gettime(CLOCK_REALTIME, &start);
/* Do something */
clock_gettime(CLOCK_REALTIME, &end);
The time in each timespec can then be combined into a single nanosecond value as ((uint64_t)start.tv_sec * 1000000000ULL) + (uint64_t)start.tv_nsec.
If you find this helpful, kindly refer to this link too.
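To spell that conversion out, here is a minimal sketch of an elapsed-time helper (assuming a POSIX system where clock_gettime is available; the helper name elapsed_ns is mine):
#include <ctime>

// Difference between two timespec values, in nanoseconds (assumes end is not earlier than start).
long long elapsed_ns(const struct timespec& start, const struct timespec& end)
{
    return (end.tv_sec - start.tv_sec) * 1000000000LL
         + (end.tv_nsec - start.tv_nsec);
}
With the start and end values from the snippet above, elapsed_ns(start, end) gives the elapsed time in nanoseconds; whether the clock actually resolves individual nanoseconds is a separate question, and clock_getres can be used to query its real resolution.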