This is really annoying me as I have done it before, about a year ago and I cannot for the life of me remember what library it was.
Basically, the problem is that I want to be able to call a method a certain number of times or for a certain period of time at a specified interval.
One example would be I would like to call a method "x" starting from now, 10 times, once every 0.5 seconds. Alternatively, call method "x" starting from now, 10 times, until 5 seconds have passed.
Now I thought I used a boost library for this functionality but I can't seem to find it now and feeling a bit annoyed. Unfortunately I can't look at the code again as I'm not in possession of it any more.
Alternatively, I could have dreamt this all up and it could have been proprietary code. Assuming there is nothing out there that does what I would like, what is currently the best way of producing this behaviour? It would need to be high-resolution, up to a millisecond.
It doesn't matter if it blocks the thread that it is executed from or not.
Thanks!
Maybe you are talking about boost::asio. It is mainly used for networking, but it can also be used for scheduling timers.
It can be used in conjunction with Boost.Thread.
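For illustration, a minimal sketch along those lines (assuming Boost.Asio's deadline_timer; x() is a made-up stand-in for the method you want to call) that invokes x() ten times, once every 500 ms:

    #include <boost/asio.hpp>
    #include <iostream>

    void x() { std::cout << "tick\n"; }   // placeholder for the method to call

    void schedule(boost::asio::deadline_timer& timer, int remaining) {
        timer.async_wait([&timer, remaining](const boost::system::error_code&) {
            x();
            if (remaining > 1) {
                // Re-arm relative to the previous deadline so error does not accumulate.
                timer.expires_at(timer.expires_at() + boost::posix_time::milliseconds(500));
                schedule(timer, remaining - 1);
            }
        });
    }

    int main() {
        boost::asio::io_service io;
        boost::asio::deadline_timer timer(io, boost::posix_time::milliseconds(500));
        schedule(timer, 10);
        io.run();   // blocks until the last callback has fired
    }

Driving it from io_service::run() means the callbacks execute on whichever thread calls run(), which fits the "blocking the calling thread is fine" requirement above.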
A combination of boost::this_thread::sleep and the time duration types found in Boost.Date_Time?
It's probably bad practice to answer your own question, but I wanted to add something more to what Nikko suggested, as I have now implemented the functionality with the two suggested libraries. Someone might find this useful at some point.
void SleepingExampleTest::sleepInterval(int frequency, int cycles, boost::function<void()> method) {
    // Interval between calls, derived from the requested frequency (calls per second).
    boost::posix_time::time_duration interval(boost::posix_time::microseconds(1000000 / frequency));

    // Schedule the first deadline one interval from now, then sleep until it arrives.
    boost::posix_time::ptime timer = boost::posix_time::microsec_clock::local_time() + interval;
    boost::this_thread::sleep(timer - boost::posix_time::microsec_clock::local_time());

    while (cycles--) {
        method();

        // Advance the deadline by a fixed interval rather than sleeping for a fixed
        // duration, so the time spent inside method() does not accumulate as drift.
        timer = timer + interval;
        boost::this_thread::sleep(timer - boost::posix_time::microsec_clock::local_time());
    }
}
Hopefully people can understand this simple example that I have knocked up. It uses a bound function just to allow flexibility.
It appears to work with about 50 microsecond accuracy on my machine. Before accounting for the skew introduced by the time it takes to execute the method being called, the accuracy was a couple of hundred microseconds, so the adjustment is definitely worth it.
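As a usage sketch (shown as if sleepInterval were a free function; the class and member names are made up for illustration), binding a member function might look like:

    // Hypothetical usage: call obj.tick() 10 times at 2 Hz (once every 500 ms).
    MyClass obj;
    sleepInterval(2, 10, boost::bind(&MyClass::tick, &obj));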
I'm currently working on some C++ code that reads from a video file, parses the video/audio streams into their constituent units (such as an FLV tag) and sends them back out in order to "restream" them.
Because my input comes from a file but I want to simulate a proper framerate when restreaming this data, I am considering ways of sleeping the thread that is reading the file, in order to extract frames at the rate one would expect from a typical 30 or 60 FPS stream.
One solution is the obvious std::this_thread::sleep_for call, passing in a number of milliseconds that depends on my FPS. Another solution I'm considering is using a condition variable and calling std::condition_variable::wait_for with the same idea.
I'm a little stuck, because I know that the first solution doesn't guarantee exact precision: the sleep will last at least as long as the argument I pass in but may in practice be longer. And I know that the std::condition_variable::wait_for call requires lock reacquisition, which takes some time too. Is there a better solution than what I'm considering? Otherwise, what's the best methodology for pausing execution at as precise a granularity as possible?
C++11 Most accurate way to pause execution for a certain amount of time?
This:
auto start = now();
while (now() < start + wait_for)
    ;   // busy-wait (spin) until the deadline has passed
Here now() is a placeholder for the most accurate time-measuring method available on the system.
This is to sleep what a spinlock is to a mutex. Like a spinlock, it will consume CPU cycles the whole time it is pausing, but it is what you asked for: the most accurate way to pause execution. There is a trade-off between accuracy and CPU efficiency, and you must choose which is more important for your program.
Why is it more accurate than std::this_thread::sleep_for?
Because sleep_for yields execution of the thread. As a consequence, it can never have better granularity than the process scheduler of the operating system has (assuming there are other processes competing for time).
The live loop shown above, which doesn't voluntarily give up its time slice, will achieve the highest granularity provided by the clock that is used for measurement.
Of course, the time slice granted by the scheduler will eventually run out, and that might happen near the time we should be resuming. The only way to reduce that effect is to increase the priority of our thread, but there is no standard way of affecting the priority of a thread in C++. The only way to get rid of the effect completely is to run on a non-multi-tasking system.
On multi-CPU systems, one trick you might want to use is to set the thread affinity so that the OS thread won't be migrated to other hardware threads, which would introduce latency. Likewise, you might want to set the affinity of your other threads so that they stay off the time-measuring thread's core. There is no standard tool for setting thread affinity either.
Let T be the time you wish to sleep for and let G be the maximum time that sleep_for could possibly overshoot.
If T is greater than G, then it will be more efficient to use sleep_for for T - G time units, and only use the live loop for the final G - O time units (where O is the time that sleep_for was observed to overshoot).
Figuring out what G is for your target system can be quite tricky however. There is no standard tool for that. If you over-estimate, you'll waste more cycles than necessary. If you under-estimate, your sleep may overshoot the target.
In case you're wondering what is a good choice for now(), the most appropriate tool provided by the standard library is std::chrono::steady_clock. However, that is not necessarily the most accurate tool available on your system. What tool is the most accurate depends on what system you're targeting.
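As an illustration of the hybrid approach above, here is a minimal sketch using std::chrono::steady_clock (the 2 ms margin is an assumption standing in for G and would need to be measured on your target system):

    #include <chrono>
    #include <thread>

    // Hybrid pause: sleep_for covers most of the interval cheaply, and a busy-wait
    // covers the final 'margin', an assumed upper bound on how much sleep_for overshoots.
    void precise_sleep(std::chrono::microseconds total,
                       std::chrono::microseconds margin = std::chrono::milliseconds(2))
    {
        const auto deadline = std::chrono::steady_clock::now() + total;

        if (total > margin)
            std::this_thread::sleep_for(total - margin);   // coarse part, yields the CPU

        while (std::chrono::steady_clock::now() < deadline)
            ;   // spin for the remainder: accurate, but burns CPU cycles
    }

If the actual overshoot on your system turns out to be larger than the margin you pick, the spin loop never runs and the call simply overshoots, so erring on the generous side costs CPU but preserves accuracy.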
I am currently implementing a PID controller for a project I am doing, but I realized I don't know how to ensure a fixed interval for each iteration. I want the PID controller to run at a frequency of 10 Hz, but I don't want to use any sleep functions or anything else that would slow down the thread it's running in. I've looked around but I cannot for the life of me find any good topics/functions that simply give me an accurate measurement of milliseconds. Those that I have found simply use time_t or clock_t, but time_t only seems to give seconds(?) and clock_t varies greatly depending on different factors.
Is there any clean and good way to simply see if it's been >= 100 milliseconds since a given point in time in C++? I'm using the Qt5 framework and OpenCV library and the program is running on an ODROID X-2, if that's of any helpful information to anyone.
Thank you for reading, Christian.
I don't know much about the ODROID X-2 platform, but if it's at all unixy you may have access to gettimeofday or clock_gettime, either of which would provide a higher-resolution clock if one is available on your hardware.
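For example, a rough sketch of the "has at least 100 ms elapsed since this point?" check using clock_gettime with CLOCK_MONOTONIC (POSIX; the surrounding loop is a placeholder for the actual control code):

    #include <ctime>

    // Elapsed milliseconds between two timespec values.
    static long elapsed_ms(const timespec& start, const timespec& end)
    {
        return (end.tv_sec - start.tv_sec) * 1000L
             + (end.tv_nsec - start.tv_nsec) / 1000000L;
    }

    void control_loop()   // hypothetical wrapper around the PID iteration
    {
        timespec start;
        clock_gettime(CLOCK_MONOTONIC, &start);   // monotonic, unaffected by clock changes

        for (;;) {
            timespec now;
            clock_gettime(CLOCK_MONOTONIC, &now);
            if (elapsed_ms(start, now) >= 100) {  // 100 ms elapsed => one 10 Hz iteration
                // run one PID iteration here, then restart the interval
                start = now;
            }
            // ... other work; the loop itself never sleeps, as the question requested
        }
    }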
What is the best way to measure computation time, with either STL C++ or Qt?
I know of ctime, but I have an idea Qt could be of use here.
Thanks!
There's the QTime class, which can measure time; you start it via start() and retrieve the elapsed time via the elapsed() method.
If you want something more advanced, Boost.Chrono is there for when you need to get into serious time perversions. It gets real hairy real quick, though, and the doc is a bit sparse (as always with Boost), but it's really one of the cleanest and best libraries if you need something of that caliber.
It all depends on what you want to do though, because "measuring time of computation" is a very broad description. Do you actually want to profile your application? Then maybe a profiler tool might be more suitable.
Also, if you just want to get the raw time it takes to execute the program, there's the time command in Linux.
Personally, I would use QElapsedTimer:
http://doc.qt.io/qt-4.8/qelapsedtimer.html
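A minimal sketch of timing a computation with QElapsedTimer (compute() is a placeholder for whatever you are measuring):

    #include <QElapsedTimer>
    #include <QDebug>

    void compute();   // placeholder for the code being timed

    void measure()
    {
        QElapsedTimer timer;
        timer.start();   // records a monotonic reference point

        compute();       // the work being measured

        qDebug() << "Computation took" << timer.elapsed() << "ms";
    }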
If you develop for Windows, you can use this from WINAPI:
DWORD start = ::GetTickCount();
calculation();
DWORD result = ::GetTickCount() - start;
The DWORD result will contain the passed time in milliseconds.
Note that this way of measuring is not terribly precise; the precision varies between 10 and 16 ms. But if you just want to display something like "It took 5.37 seconds to calculate the meaning of life" it will suffice.
I'm trying to measure the execution time of part of my application; I need millisecond precision, and I need to handle long execution times too. I'm currently using clock_t = clock() from ctime, but I think it has a range of only about 72 minutes, which is not suitable for my needs. Is there any other portable way to measure long execution times while keeping millisecond precision? Or some way to overcome this limitation of clock_t?
The first question you need to ask is: do you really need millisecond precision in time spans of over an hour?
If you do, one simple method (without looking around for libraries that do it already) is just to track when the timer rolls over and add that to another variable.
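A rough sketch of that rollover-tracking idea (it assumes a 32-bit clock_t that wraps modulo 2^32, and that the elapsed time is polled at least once per wrap period):

    #include <ctime>
    #include <cstdint>

    // Accumulates clock() readings into a wider counter by counting wrap-arounds.
    class LongClock {
    public:
        LongClock() : last_(std::clock()), wraps_(0) {}

        double elapsed_ms()
        {
            std::clock_t now = std::clock();
            if (now < last_)          // the raw counter rolled over since the last poll
                ++wraps_;
            last_ = now;
            double ticks = static_cast<double>(now) + wraps_ * 4294967296.0;   // + 2^32 per wrap
            return ticks * 1000.0 / CLOCKS_PER_SEC;
        }

    private:
        std::clock_t   last_;
        std::uint64_t  wraps_;
    };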
Unfortunately there are none that I know of that are cross-platform (that's not to say there doesn't exist any, however).
Nevertheless, it is easy enough to work around this problem. Just create a separate thread (e.g. with boost.thread) which sleeps for a long time, adds the time elapsed so far to a running total, then repeats. When the program is shut down, stop the thread in a way that lets it add to this counter one last time before it quits.
I have a program I want to profile with gprof. The problem (seemingly) is that it uses sockets. So I get things like this:
::select(): Interrupted system call
I hit this problem a while back, gave up, and moved on. But I would really like to be able to profile my code, using gprof if possible. What can I do? Is there a gprof option I'm missing? A socket option? Is gprof totally useless in the presence of these types of system calls? If so, is there a viable alternative?
EDIT: Platform:
Linux 2.6 (x64)
GCC 4.4.1
gprof 2.19
The socket code needs to handle interrupted system calls regardless of the profiler, but under a profiler they're unavoidable. This means having code like:
if ( errno == EINTR ) { ...
after each system call.
Take a look here, for example, for some background.
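A common shape for that handling (a sketch only; in real code you would also re-populate the fd sets and the timeout before each retry, since select() may modify them):

    #include <cerrno>
    #include <sys/select.h>

    // Retry select() when it is interrupted by a signal (e.g. the profiler's SIGPROF).
    int select_retry(int nfds, fd_set* readfds, fd_set* writefds,
                     fd_set* exceptfds, struct timeval* timeout)
    {
        int rc;
        do {
            rc = ::select(nfds, readfds, writefds, exceptfds, timeout);
        } while (rc == -1 && errno == EINTR);   // interrupted system call: just try again
        return rc;
    }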
gprof (here's the paper) is reliable, but it was only ever intended to measure changes, and even for that it only measures CPU-bound issues. It was never advertised as a tool for locating problems; that is an idea other people layered on top of it.
Consider this method.
Another good option, if you don't mind spending some money, is Zoom.
Added: let me give you an example. Suppose you have a call hierarchy where Main calls A some number of times, A calls B some number of times, B calls C some number of times, and C waits for some I/O with a socket or file, and that's basically all the program does. Now, further suppose that the number of times each routine calls the next one down is 25% more than it really needs to be. Since 1.25^3 is about 2, that means the entire program takes twice as long to run as it really needs to.
In the first place, since all the time is spent waiting for I/O gprof will tell you nothing about how that time is spent, because it only looks at "running" time.
Second, suppose (just for argument) it did count the I/O time. It could give you a call graph, basically saying that each routine takes 100% of the time. What does that tell you? Nothing more than you already know.
However, if you take a small number of stack samples, you will see on every one of them the lines of code where each routine calls the next.
In other words, it's not just giving you a rough percentage time estimate, it is pointing you at specific lines of code that are costly.
You can look at each line of code and ask if there is a way to do it fewer times. Assuming you do this, you will get the factor of 2 speedup.
People get big factors this way. In my experience, the number of call levels can easily be 30 or more. Every call seems necessary, until you ask if it can be avoided. Even small numbers of avoidable calls can have a huge effect over that many layers.