C++ count time overcoming 72 minutes range of clock_t

I'm trying to measure the execution time of part of my application. I need millisecond precision, but I also need to handle long execution times. I'm currently using clock_t = clock() from ctime, but it has a range of only about 72 minutes I think, which is not suitable for my needs. Is there any other portable way to measure long execution times while keeping millisecond precision? Or some way to overcome this limitation of clock_t?

The first question you need to ask is whether you really need millisecond precision in time spans of over an hour.
If you do, one simple method (without looking around for libraries that do it already) is to track when the timer rolls over and add that to another variable.
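A minimal sketch of that rollover-tracking idea (it assumes clock_t wraps back to zero after reaching its maximum, which is not guaranteed on every platform, and every name here is made up for illustration):

#include <cstdint>
#include <ctime>
#include <limits>

// Keeps a 64-bit running total of clock() ticks so the ~72-minute wrap of a
// 32-bit clock_t stops mattering. poll() must be called at least once per
// wrap interval, e.g. from the program's main loop.
class LongTimer {
public:
    LongTimer() : last_(std::clock()), ticks_(0) {}

    void poll() {
        std::clock_t now = std::clock();
        if (now >= last_) {
            ticks_ += static_cast<std::uint64_t>(now - last_);
        } else {
            // Assume exactly one wrap back to zero happened since the last poll.
            ticks_ += static_cast<std::uint64_t>(
                          std::numeric_limits<std::clock_t>::max() - last_) +
                      static_cast<std::uint64_t>(now) + 1;
        }
        last_ = now;
    }

    double elapsedMilliseconds() {
        poll();
        return ticks_ * 1000.0 / CLOCKS_PER_SEC;
    }

private:
    std::clock_t last_;
    std::uint64_t ticks_;
};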

Unfortunately there are none that I know of that are cross-platform (that's not to say there doesn't exist any, however).
Nevertheless, it is easy enough to work around this problem. Just create a separate thread (e.g. boost.thread) which sleeps for a long time, adds the time elapsed so far to a running total, then repeats. When the program is shut down, stop the thread and let it add the final interval to that counter before it quits.

Related

Is there a reliable method to guarantee a while loop will execute at a specified frequency in C++?

I have a while loop that executes a program, with a sleep every so often. The while loop is meant to simulate a real-time program that executes at a certain frequency. The current logic calculates a number of cycles to execute per sleep to achieve a desired frequency. This has proven to be inaccurate. I think a timer would be a better implementation, but due to the complexity of the refactor I am trying to maintain a while loop solution. I am looking for advice on a scheme that may more tightly achieve a desired frequency of execution in a while loop. Pseudo-code below:
MaxCounts = DELAY_TIME_SEC / DESIRED_FREQUENCY;
DoProgram();
while (running)
{
    if (counts > MaxCounts)
    {
        Sleep(DELAY_TIME_SEC);
    }
}
You cannot reliably schedule an operation to occur at specific times on a non-realtime OS.
As C++ runs on non-realtime OS's, it cannot provide what cannot be provided.
The amount of error you are willing to accept, in both typical and extreme cases, will matter. If you want something running every minute or so, and you don't want drift on the level of days, you can just set up a starting time, then do math to determine when the nth event should happen.
Then do a wait for the nth time.
This fixes "cumulative drift" issues, so over 24 hours you get 1440+/-1 events with 1 minute between them. The time between the events will vary and not be 60 seconds exactly, but over the day it will work out.
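For example, a rough sketch of that fixed-start-time scheduling using std::chrono (the one-minute period and the do_work function are placeholders, not taken from the question):

#include <chrono>
#include <iostream>
#include <thread>

void do_work() { std::cout << "tick\n"; }        // stand-in for the real task

void run_periodic(int total_events) {
    using clock = std::chrono::steady_clock;
    const auto period = std::chrono::minutes(1); // placeholder interval
    const auto start = clock::now();

    for (int n = 1; n <= total_events; ++n) {
        // Sleep until the absolute time of the nth event, computed from the
        // fixed start time, so per-iteration error does not accumulate.
        std::this_thread::sleep_until(start + n * period);
        do_work();
    }
}

Each individual call can still be late, but the (n+1)th deadline is unaffected by how late the nth one was.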
If your issue is time on the ms level, and you are OK with a loaded system sometimes screwing up, you can sleep until half a second (or whatever margin makes it reliable enough for you) before the next event, then busy-wait until the time arrives. You may also have to tweak process/thread priority; be careful, as this can easily break things badly if you set the priority too high.
Combining the two can work as well.

C++11 Most accurate way to pause execution for a certain amount of time? [duplicate]

This question already has answers here:
How to guarantee exact thread sleep interval?
accurate sampling in c++
I'm currently working on some C++ code that reads from a video file, parses the video/audio streams into their constituent units (such as FLV tags) and sends them back out in order to "restream" the content.
Because my input comes from a file but I want to simulate a proper framerate when restreaming this data, I am considering ways to sleep the thread that reads the file so that it extracts a frame at the rate one would expect from a typical 30 or 60 FPS stream.
One solution is to use an obvious std::this_thread::sleep_for call and pass in the amount of milliseconds depending on what my FPS is. Another solution I'm considering is using a condition variable, and using std::condition_variable::wait_for with the same idea.
I'm a little stuck, because I know that the first solution doesn't guarantee exact precision -- the sleep will last around as long as the argument I pass in but may in theory be longer. And I know that the std::condition_variable::wait_for call will require lock reacquisition which will take some time too. Is there a better solution than what I'm considering? Otherwise, what's the best methodology to attempt to pause execution for as precise a granularity as possible?
C++11 Most accurate way to pause execution for a certain amount of time?
This:
auto start = now();
while(now() < start + wait_for);
now() is a placeholder for the most accurate time measuring method available for the system.
This is to sleep what a spinlock is to a mutex. Like a spinlock, it will consume all the CPU cycles while pausing, but it is what you asked for: the most accurate way to pause execution. There is a trade-off between accuracy and CPU-usage efficiency: you must choose which is more important for your program.
why is it more accurate than std::this_thread::sleep_for?
Because sleep_for yields execution of the thread. As a consequence, it can never have better granularity than the process scheduler of the operating system has (assuming there are other processes competing for time).
The live loop shown above, which doesn't voluntarily give up its time slice, will achieve the highest granularity provided by the clock that is used for measurement.
Of course, the time slice granted by the scheduler will eventually run out, and that might happen near the time we should be resuming. The only way to reduce that effect is to increase the priority of our thread. There is no standard way of affecting the priority of a thread in C++. The only way to get rid of the effect completely is to run on a non-multi-tasking system.
On multi-CPU systems, one trick you might want to use is to set the thread affinity so that the OS thread won't be migrated to other hardware threads, which would introduce latency. Likewise, you might want to set the affinity of your other threads so that they stay off the time-measuring thread's core. There is no standard tool to set thread affinity.
Let T be the time you wish to sleep for and let G be the maximum time that sleep_for could possibly overshoot.
If T is greater than G, then it will be more efficient to use sleep_for for T - G time units, and only use the live loop for the final G - O time units (where O is the time that sleep_for was observed to overshoot).
Figuring out what G is for your target system can be quite tricky however. There is no standard tool for that. If you over-estimate, you'll waste more cycles than necessary. If you under-estimate, your sleep may overshoot the target.
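One rough way to estimate G, sketched under the assumption that a few hundred samples on an otherwise idle system are representative (a loaded system can still exceed whatever you measure):

#include <algorithm>
#include <chrono>
#include <thread>

// Ask for a short sleep many times and record the worst observed overshoot.
// The 1 ms request and 200 samples are arbitrary choices, not magic numbers.
std::chrono::microseconds estimate_overshoot() {
    using namespace std::chrono;
    microseconds worst{0};
    for (int i = 0; i < 200; ++i) {
        auto before = steady_clock::now();
        std::this_thread::sleep_for(milliseconds(1));
        auto overshoot = duration_cast<microseconds>(
                             steady_clock::now() - before) - milliseconds(1);
        worst = std::max(worst, overshoot);
    }
    return worst;
}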
In case you're wondering what is a good choice for now(), the most appropriate tool provided by the standard library is std::chrono::steady_clock. However, that is not necessarily the most accurate tool available on your system. What tool is the most accurate depends on what system you're targeting.
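Putting the T - G idea together with steady_clock, a hedged sketch might look like this (the 2 ms default margin is only an assumption and should really come from a measurement like the one above):

#include <chrono>
#include <thread>

// Sleep for most of the interval, then busy-wait on steady_clock for the rest.
// 'margin' plays the role of G; it has to be tuned for your system.
void precise_sleep(std::chrono::microseconds total,
                   std::chrono::microseconds margin = std::chrono::milliseconds(2)) {
    const auto deadline = std::chrono::steady_clock::now() + total;
    if (total > margin)
        std::this_thread::sleep_for(total - margin);   // coarse, cheap part
    while (std::chrono::steady_clock::now() < deadline)
        ;                                              // precise, busy part
}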

Is there a way to calculate a program's time complexity in milliseconds?

Well, I wanted to compare the time efficiency of two programs which are designed to do the same thing. I want some function/script/method so that, after the execution of a program, it gives me the time required for that process, like "The program took 0.3 ms to complete".
I have searched for threads on similar topics, but I was not satisfied with what I read. So any light on this topic is appreciated!
If you want to measure execution time you can:
Use boost timer
Retrieve system time before and after execution and compare
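A minimal sketch of the second option using std::chrono (work_to_measure is just a stand-in for whatever you want to time):

#include <chrono>
#include <iostream>

void work_to_measure() { /* the code you want to time */ }

int main() {
    auto start = std::chrono::steady_clock::now();
    work_to_measure();
    auto end = std::chrono::steady_clock::now();

    // Convert the elapsed duration to fractional milliseconds for printing.
    auto ms = std::chrono::duration<double, std::milli>(end - start).count();
    std::cout << "The program took " << ms << " ms to complete\n";
}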

Calling a method at a specific interval rate in C++

This is really annoying me as I have done it before, about a year ago and I cannot for the life of me remember what library it was.
Basically, the problem is that I want to be able to call a method a certain number of times or for a certain period of time at a specified interval.
One example would be I would like to call a method "x" starting from now, 10 times, once every 0.5 seconds. Alternatively, call method "x" starting from now, 10 times, until 5 seconds have passed.
Now I thought I used a Boost library for this functionality, but I can't seem to find it now and I'm feeling a bit annoyed. Unfortunately I can't look at the code again as I'm not in possession of it any more.
Alternatively, I could have dreamt this all up and it could have been proprietary code. Assuming there is nothing out there that does what I would like, what is currently the best way of producing this behaviour? It would need to be high-resolution, up to a millisecond.
It doesn't matter if it blocks the thread that it is executed from or not.
Thanks!
Maybe you are talking about boost::asio. It is mainly used for networking, but it can also be used for scheduling timers.
It can be used in conjunction with boost::threads.
A combination of boost::this_thread::sleep and time duration found in boost::datetime?
It's probably bad practice to answer your own question, but I wanted to add something more to what Nikko suggested, as I have now implemented the functionality with the two suggested libraries. Someone might find this useful at some point.
void SleepingExampleTest::sleepInterval(int frequency, int cycles, boost::function<void()> method) {
    // Interval between calls, derived from the requested frequency (calls per second).
    boost::posix_time::time_duration interval(boost::posix_time::microseconds(1000000 / frequency));

    // Absolute time of the next call; sleeping until an absolute time avoids cumulative drift.
    boost::posix_time::ptime timer = boost::posix_time::microsec_clock::local_time() + interval;
    boost::this_thread::sleep(timer - boost::posix_time::microsec_clock::local_time());

    while (cycles--) {
        method();
        // Schedule the next call relative to the previous deadline, not to "now",
        // so the time spent inside method() does not skew the schedule.
        timer = timer + interval;
        boost::this_thread::sleep(timer - boost::posix_time::microsec_clock::local_time());
    }
}
Hopefully people can understand this simple example that I have knocked up. Using a bound function just to allow flexibility.
It appears to work with about 50 microsecond accuracy on my machine. Before taking into account the skew introduced by the execution time of the method being called, the accuracy was a couple of hundred microseconds, so it was definitely worth it.

Extremely CPU Intensive Alarm Clock

EDIT:
I would like to thank you all for the swift replies ^^ Sleep() works as intended and my CPU is not being viciously devoured by this program anymore! I will keep this question as is, but to let everybody know that the CPU problem has been answered expediently and professionally :D
As an aside to the aside, I'll certainly make sure that micro-optimizations are kept to a minimum in the face of larger, more important problems!
================================================================================
For some reason my program, a console alarm clock I made for laughs and practice, is extremely CPU intensive. It consumes about 2 MB of RAM, which is already quite a bit for such a small program, and it hammers my CPU with over 50% usage at times.
Most of the time my program is doing nothing except counting down the seconds, so I guess this part of my program is the one that's causing so much strain on my CPU, though I don't know why. If it is so, could you please recommend a way of making it less, or perhaps a library to use instead if the problem can't be easily solved?
/* The wait function waits the given number of seconds before returning to the
 * calling function. */
void wait( const int &seconds )
{
    clock_t endwait; // Type needed to compare with clock()
    endwait = clock() + ( seconds * CLOCKS_PER_SEC );
    while( clock() < endwait ) {} // Nothing need be done here.
}
In case anybody browses CPlusPlus.com, this is a genuine copy/paste of the wait function they give as an example for clock(), which is why the comment // Nothing need be done here is so lackluster. I'm not entirely sure what exactly clock() does yet.
The rest of the program calls two other functions that only activate every sixty seconds, otherwise returning to the caller and counting down another second, so I don't think that's too CPU intensive- though I wouldn't know, this is my first attempt at optimizing code.
The first function is a console clear using system("cls") which, I know, is really, really slow and not a good idea. I will be changing that post-haste, but, since it only activates every 60 seconds and there is a noticeable lag-spike, I know this isn't the problem most of the time.
The second function re-writes the content of the screen with the updated remaining time also only every sixty seconds.
I will edit in the function that calls wait, clearScreen and display if it becomes clear that this function is not the problem. I already pass most variables by reference so they are not copied, and I avoid endl, as I heard that it's a little slow compared to \n.
This:
while( clock() < endwait ) {}
Is not "doing nothing". Certainly nothing is being done inside the while loop, but the test of clock() < endwait is not free. In fact, it is being executed over and over again as fast as your system can possibly handle doing it, which is what is driving up your load (probably 50% because you have a dual core processor, and this is a single-threaded program that can only use one core).
The correct way to do this is just to trash this entire wait function, and instead just use:
sleep(seconds);
Which will actually stop your program from executing for the specified number of seconds, and not consume any processor time while doing so.
Depending on your platform, you will need to include either <unistd.h> (UNIX and Linux) or <windows.h> (Windows) to access this function.
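If you can use C++11, a portable alternative (just a sketch, and only one of several options) avoids the platform-specific headers entirely and can replace the wait function above directly:

#include <chrono>
#include <thread>

void wait( const int &seconds )
{
    // Suspends the calling thread; no CPU time is burned while waiting.
    std::this_thread::sleep_for(std::chrono::seconds(seconds));
}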
This is called a busy-wait. The CPU is spinning its wheels at full throttle in the while loop. You should replace the while loop with a simple call to sleep or usleep.
I don't know about the 2 MB, especially without knowing anything about the overall program, but that's really not something to stress out over. It could be that the C runtime libraries suck up that much on start-up for efficiency reasons.
The CPU issue has been answered well. As for the memory issue, it's not clear what 2 MB is actually measuring. It might be the total size of all the libraries mapped into your application's address space.
Run and inspect a program that simply contains
int main() { for (;;) { } }
to gauge the baseline memory usage on your platform.
You're spinning without yielding here, so it's no surprise that you burn CPU cycles.
Drop a
Sleep(50);
in the while loop.
The while loop is keeping the processor busy whenever your thread gets a timeslice to execute. If all you wish is to wait for a determined amount of time, you don't need a loop. You can replace it by a single call to sleep, usleep or nanosleep (depending on platform and granularity). They suspend the thread execution until the amount of time you specified has elapsed.
Alternatively, you can just give up (yield) on the remaining timeslice, calling Sleep(0) (Windows) or sched_yield() (Unix/Linux/etc).
If you want to understand the exact reason for this problem, read about scheduling.
while( clock() < endwait ) { Sleep(0); } // yield to equal priority threads