I'm writing a checkpoint: I check the time on every iteration of a loop, and I think this will waste a lot of CPU time. How can I check against the system time only every 10 seconds?
time_t start = clock();
while (forever)
{
    if (difftime(clock(), start) / CLOCKS_PER_SEC >= timeLimit)
    {
        break;
    }
}
The very short answer is that this is very difficult, if you're a novice programmer.
Now a few possibilities:
Sleep for ten seconds. That means your program is basically pointless.
Use alarm() and signal handlers. This is difficult to get right, because you mustn't do anything fancy inside the signal handler.
Use a timerfd and integrate timing logic into your I/O loop (see the sketch below).
Set up a dedicated thread for the timer (which can then sleep); this is exceedingly difficult because you need to think about synchronising all shared data access.
The point to take home here is that your problem doesn't have a simple solution. You need to integrate the timing logic deeply into your already existing program flow. This flow should be some sort of "main loop" (e.g. an I/O multiplexing loop like epoll_wait or select), possibly multi-threaded, and that loop should pick up the fact that the timer has fired.
It's not that easy.
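To make the timerfd option concrete, here is a minimal sketch of a timer fd integrated into an epoll-based main loop (Linux-specific; the surrounding fd handling is illustrative):

#include <sys/timerfd.h>
#include <sys/epoll.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

int main()
{
    // A timer that first fires after 10 seconds, then every 10 seconds.
    int tfd = timerfd_create(CLOCK_MONOTONIC, 0);
    itimerspec spec{};
    spec.it_value.tv_sec = 10;
    spec.it_interval.tv_sec = 10;
    timerfd_settime(tfd, 0, &spec, nullptr);

    // Register the timer fd alongside whatever other fds you already poll.
    int epfd = epoll_create1(0);
    epoll_event ev{};
    ev.events = EPOLLIN;
    ev.data.fd = tfd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, tfd, &ev);

    for (;;) {
        epoll_event events[8];
        int n = epoll_wait(epfd, events, 8, -1); // the process sleeps here
        for (int i = 0; i < n; ++i) {
            if (events[i].data.fd == tfd) {
                uint64_t expirations;
                read(tfd, &expirations, sizeof expirations); // must drain the timer
                std::printf("10 seconds elapsed\n");
            }
            // ... dispatch your other fds here ...
        }
    }
}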
Here's a tangent, possibly instructive. There are basically two kinds of computer program (apart from all the other kinds):
One kind is programs that perform one specific task, as efficiently as possible, and are then done. This is for example something like "generate an SSL key pair", or "find all lines in a file that match X". Those are the sort of programs that are easy to write and understand as far as the program flow is concerned.
The other kind is programs that interact with the user. Those programs stay up indefinitely and respond to user input. (Basically any kind of UI or game, but also a web server.) From a control flow perspective, these programs spend the vast majority of their time doing... nothing. They're just idle waiting for user input. So when you think about how to program this, how do you make a program do nothing? This is the heart of the "main loop": It's a loop that tells the OS to keep the process asleep until something interesting happens, then processes the interesting event, and then goes back to sleep.
It isn't until you understand how to do nothing that you'll be able to design programs of the second kind.
If you need precision, you can place a call to select() with null fd sets but with a timeout. The timeout is specified with microsecond resolution, though in practice you can count on roughly millisecond accuracy.
#include <sys/select.h>

struct timeval timeout = {10, 0}; // 10 seconds, 0 microseconds
select(1, NULL, NULL, NULL, &timeout);
If you don't, just use sleep():
sleep(10);
Just add a call to sleep to yield CPU time to the system. Note also that clock() measures CPU time consumed, which barely advances while the process sleeps, so the wall-clock check needs time() and difftime():
time_t start = time(NULL);
while (forever)
{
    if (difftime(time(NULL), start) >= timeLimit)
    {
        break;
    }
    sleep(1); // <<< put process to sleep for 1s
}
You can use an event loop in your program and schedule a timer to run a callback. For example, you can use libev to create an event loop and add a timer.
ev_timer_init (timer, callback, 0., 5.);
ev_timer_again (loop, timer);
...
timer->repeat = 17.;
ev_timer_again (loop, timer);
...
timer->repeat = 10.;
ev_timer_again (loop, timer);
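For reference, a complete minimal libev program around this snippet might look like the following (a sketch; the callback body is illustrative):

#include <ev.h>
#include <cstdio>

// Invoked by the event loop each time the timer expires.
static void timer_cb(EV_P_ ev_timer *w, int revents)
{
    std::printf("timer fired\n");
    // w->repeat is still 10.0, so the timer keeps firing every 10 seconds
}

int main()
{
    struct ev_loop *loop = EV_DEFAULT;
    ev_timer timer;

    ev_timer_init(&timer, timer_cb, 0., 10.); // repeat every 10 seconds
    ev_timer_again(loop, &timer);

    ev_run(loop, 0); // the loop sleeps between expirations
    return 0;
}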
If you code against a specific toolkit you can use its event loop instead; GTK, Qt, and GLib all have their own event loops you can use.
The simplest approach (in a single-threaded environment) would be to sleep for some time and repeatedly check if the total waiting time has expired.
int sleepPeriodMs = 500;
time_t start = time(NULL);
while (forever)
{
    while (difftime(time(NULL), start) < timeLimit) // NOTE: Change of logic here!
    {
        usleep(sleepPeriodMs * 1000); // usleep() takes microseconds
    }
    // the time limit has expired: do the periodic work, then restart the clock
    start = time(NULL);
}
Please note that sleep() is not very accurate. If you need higher-accuracy timing (i.e. better than 10ms resolution) you might need to dig deeper. Also, with C++11 there is the <chrono> header that offers a lot more functionality.
#include <chrono>
#include <thread>

using namespace std::chrono;
while (forever)
{
    auto start = system_clock::now();
    // do some stuff that takes between [0..10[ seconds
    std::this_thread::sleep_until(start + seconds(10));
}
Everything I've found so far regarding timers suggests that, at best, they are available at a 1ms resolution. QTimer's docs claim that's the best it can provide.
I understand that OSes like Windows are not real-time OSes, but I still want to ask this question in hopes that someone knows something that could help.
So, I'm writing an app that requires a function to be called at a fairly precise but arbitrary interval, say 60 times/sec (full range: 59-61Hz). That means I need it to be called, on average, every ~16.67ms. This part of the design can't change.
The best timing source I currently have is vsync. When I go off of that, it's pretty good. It's not ideal, because the monitor's frequency is not exactly what I need to call this function at, but it can be somewhat compensated for.
The kicker is that the level of accuracy given the range I'm after is more or less available with timers, but not the level of precision I want. I can get a 16ms timer to hit exactly 16ms ~97% of the time. I can get a 17ms timer to hit exactly 17ms ~97% of the time. But no API exists to get me 16.67ms?
Is what I'm looking for simply not possible?
Background: The project is called Phoenix. Essentially, it's a libretro frontend. Libretro "cores" are game console emulators encapsulated in individual shared libraries. The API function being called at a specific rate is retro_run(). Each call emulates a game frame and calls callbacks for audio, video and so on. In order to emulate at a console's native framerate, we must call retro_run() at exactly (or as close to) this rate, hence the timer.
You could write a loop that checks std::chrono::high_resolution_clock::now() and calls std::this_thread::yield() until the right time has elapsed. If the program needs to be responsive while this is going on, you should do it in a separate thread from the one checking the main loop.
Some example code:
http://en.cppreference.com/w/cpp/thread/yield
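A minimal sketch of that spin-and-yield approach (using steady_clock rather than high_resolution_clock, for monotonicity; the 16.67ms figure is from the question):

#include <chrono>
#include <thread>

// Yield the CPU to other threads until the deadline arrives.
void yield_until(std::chrono::steady_clock::time_point deadline)
{
    while (std::chrono::steady_clock::now() < deadline)
        std::this_thread::yield();
}

// Usage sketch: call retro_run() roughly every 16.67 ms.
// auto next = std::chrono::steady_clock::now();
// for (;;) {
//     next += std::chrono::microseconds(16667);
//     retro_run();
//     yield_until(next);
// }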
An alternative is to use QElapsedTimer with a clock type of PerformanceCounter. You will still need to check it from a loop, and probably will still want to yield within that loop. Example code: http://doc.qt.io/qt-4.8/qelapsedtimer.html
It is completely unnecessary to call retro_run at any highly controlled time in particular, as long as the average frame rate comes out right, and as long as your audio output buffers don't underflow.
First of all, you are likely to have to measure the real time using an audio-output-based timer. Ultimately, each retro_run produces a chunk of audio. The audio buffer state with the chunk added is your timing reference: if you run early, the buffer will be too full, if you run late, the buffer will be too empty.
This error measure can be fed into a PI controller, whose output gives you the desired delay until the next invocation of retro_run. This will automatically ensure that your average rate and phase are correct. Any systematic latencies in getting retro_run active will be integrated away, etc.
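As a rough sketch of such a controller (the gains and the buffer-level error term are illustrative; you would tune them against your audio pipeline):

// Minimal PI controller: the error is the audio buffer's deviation from its
// target fill level; the output adjusts the delay before the next retro_run().
struct PIController {
    double kp;              // proportional gain (tuning value)
    double ki;              // integral gain (tuning value)
    double integral = 0.0;  // accumulated error

    double update(double error) {
        integral += error;
        return kp * error + ki * integral;
    }
};

// Usage sketch:
// PIController pi{0.1, 0.01};                  // hypothetical gains
// double error = bufferFill - targetFill;      // ran early -> buffer too full
// double delayMs = nominalDelayMs + pi.update(error);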
Secondly, you need a way of waking yourself up at the correct moment in time. Given a target time (in terms of a performance counter, for example) to call retro_run, you'll need a source of events that wake your code up so that you can compare the time and retro_run when necessary.
The simplest way of doing this would be to reimplement QCoreApplication::notify. You'll have a chance to retro_run prior to the delivery of every event, in every event loop, in every thread. Since system events might not otherwise come often enough, you'll also want to run a timer to provide a more dependable source of events. It doesn't matter what the events are: any kind of event is good for your purpose.
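A bare-bones sketch of that override (isTimeForNextFrame() and runFrame() are hypothetical helpers wrapping your deadline check and retro_run()):

#include <QCoreApplication>
#include <QEvent>

class FrameTimedApp : public QCoreApplication {
public:
    using QCoreApplication::QCoreApplication;

    // Called before every event delivered in every thread, so it is a cheap,
    // frequent hook for checking whether the next frame is due.
    bool notify(QObject *receiver, QEvent *event) override {
        if (isTimeForNextFrame()) // hypothetical deadline check
            runFrame();           // hypothetical wrapper around retro_run()
        return QCoreApplication::notify(receiver, event);
    }

private:
    bool isTimeForNextFrame();
    void runFrame();
};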
I'm not familiar with threading limitations of retro_run - perhaps you can run it in any one thread at a time. In such case, you'd want to run it on the next available thread in a pool, perhaps with the exception of the main thread. So, effectively, the events (including timer events) are used as energetically cheap sources of giving you execution context.
If you choose to have a thread dedicated to retro_run, it should be a high priority thread that simply blocks on a mutex. Whenever you're ready to run retro_run when a well-timed event comes, you unlock the mutex, and the thread should be scheduled right away, since it'll preempt most other threads - and certainly all threads in your process.
OTOH, on a low core count system, the high priority thread is likely to preempt the main (gui) thread, so you might as well invoke retro_run directly from whatever thread got the well-timed event.
It might of course turn out that using events from arbitrary threads to wake up the dedicated thread introduces too much worst-case latency or too much latency spread - this will be system-specific and you may wish to collect runtime statistics, switch threading and event source strategies on the fly, and stick with the best one. The choices are:
1. retro_run in a dedicated thread waiting on a mutex, unlock source being any thread with a well-timed event caught via notify,
2. retro_run in a dedicated thread waiting for a timer (or any other) event; events still caught via notify,
3. retro_run in a gui thread, unlock source being the events delivered to the gui thread, still caught via notify,
4. any of the above, but using timer events only - note that you don't care which timer events they are, they don't need to come from your timer,
5. as in #4, but selective to your timer only.
My implementation, based on Lorehead's answer. Times for all variables are in ms.
It of course needs a way to stop running. I was also thinking about subtracting half the (running average) difference between timeElapsed and interval, to make the average error ±n instead of +2n, where 2n is the average overshoot.
// Typical interval value: 1/60s ~= 16.67ms
void Looper::beginLoop( double interval ) {
    QElapsedTimer timer;
    int counter = 1;
    int printEvery = 240;
    int yieldCounter = 0;
    double timeElapsed = 0.0;

    forever {
        if( timeElapsed > interval ) {
            timer.start();
            counter++;
            if( counter % printEvery == 0 ) {
                qDebug() << "Yield() ran" << yieldCounter << "times";
                qDebug() << "timeElapsed =" << timeElapsed << "ms | interval =" << interval << "ms";
                qDebug() << "Difference:" << timeElapsed - interval << " -- " << ( ( timeElapsed - interval ) / interval ) * 100.0 << "%";
            }
            yieldCounter = 0;
            importantBlockingFunction();
            // Reset the frame timer
            timeElapsed = ( double )timer.nsecsElapsed() / 1000.0 / 1000.0;
        }
        timer.start();
        // Running this just once means massive overhead from calling timer.start() so many times so quickly
        for( int i = 0; i < 100; i++ ) {
            yieldCounter++;
            QThread::yieldCurrentThread();
        }
        timeElapsed += ( double )timer.nsecsElapsed() / 1000.0 / 1000.0;
    }
}
So I'm stuck with a little C++ program. I use Code::Blocks in a Windows 7 environment.
I made a function which shows an ASCII map and a marker. A second function updates the marker's position on that map.
I would like to know how to structure my main program so that the marker gets updated and the map is shown, repeated at a certain rate. Which functions can I use to make this happen? What strategy should I follow?
every x times/second DO { showmap(); updatePosition();}
I am a C++ beginner and I hope you can help!
A loop with usleep:

#include <unistd.h>

unsigned XtimesPerSecond = 5; // for example
useconds_t microseconds = 1000000 / XtimesPerSecond;
do
{
    showmap();
    updatePosition();
    usleep(microseconds); // sleep between updates so the loop doesn't hog the CPU
} while (true);
Depending on what else your program needs to be doing, you may need to employ event driven programming. If updating that marker is the only thing it will be doing, a simple while loop with a sleep will suffice, as demonstrated in other answers.
In order to do event driven programming you generally need an event loop - which is a function that you call in main, which waits for events and dispatches them. Most event loops will provide timer events - where, basically, you ask the event loop to call function X after a given time interval elapses.
You most likely don't want to write your own event loop. There are many choices for an event loop, depending on many things like programming language and required portability.
Some examples of event loops:
the Qt event loop,
the GLib event loop,
the Windows event loop, and many more...
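For instance, with the GLib event loop a periodic timer callback looks roughly like this (a sketch, assuming GLib is available; the 200ms interval is arbitrary):

#include <glib.h>

// Called by the GLib main loop every 200 ms.
static gboolean on_tick(gpointer data)
{
    // showmap(); updatePosition();
    return G_SOURCE_CONTINUE; // keep the timer running
}

int main()
{
    GMainLoop *loop = g_main_loop_new(nullptr, FALSE);
    g_timeout_add(200, on_tick, nullptr); // 5 times per second
    g_main_loop_run(loop); // sleeps between timer callbacks
    return 0;
}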
It seems that you want to implement an infinite loop like a game engine does.
Try to do this:
while (true)
{
    showmap();
    updatePosition();
    sleep(1); // sleep() takes seconds; on Windows use Sleep(1000) from <windows.h>
}
I currently have this code
#include <iostream>
#include <curl/curl.h>
#include <windows.h>
#include "boost/timer.hpp"

int main(void)
{
    CURL *curl;
    CURLcode res;
    boost::timer t;
    int number = 1;

    while (number == 1)
    {
        if (t.elapsed() > 10)
        {
            curl = curl_easy_init();
            if (curl)
            {
                curl_easy_setopt(curl, CURLOPT_URL, "http://google.com");
                res = curl_easy_perform(curl);
                /* always cleanup */
                curl_easy_cleanup(curl);
            }
            t.restart();
        }
    }
}
What I'd like it to do is continue execution of this program and never end until someone closes the window.
I tried the aforementioned code; however, CPU usage spiked to 25% on my quad-core CPU.
So how do I continue the execution of the program and loop the code within the while without using so much CPU?
P.S. 25% on a quad-core CPU means 100% usage of a single core.
You can use Sleep(10000) to pause program execution for approx. 10 seconds. You can drop the boost::timer - just sleep 10 seconds in each loop iteration (Sleep is not as accurate, but for 10 seconds the inaccuracy should be negligible).
Your code is what is called a 'busy loop' - for the CPU it makes no difference whether you hang around in a tight loop without much work or do heavy computations. Both will use 100% of a CPU core because there's a never-ending stream of instructions coming in. To use less, you need to relinquish execution for a while to let the OS execute other processes.
What you're currently doing is busy waiting. That is, even though your program doesn't need to do anything, it's still keeping that loop spinning, waiting for the timer. What you need to do is to execute a true sleep, which tells your operating system that the process doesn't need to do anything for the next 10 seconds.
One way to do a true sleep in boost is the boost::this_thread::sleep function.
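For example, a one-line sketch using Boost.Thread's posix_time-based sleep:

#include <boost/thread/thread.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>

// Put the current thread to sleep for 10 seconds instead of spinning.
boost::this_thread::sleep(boost::posix_time::seconds(10));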
You need to slow it down with some sleep(). Basically you need to put your thread to sleep to allow other processes to execute.
What you've implemented is called a busy wait and is considered very bad style. Use sleep to suspend program execution for a short time, and write an eternal loop as:
for (;;)
or
while (true)
Looks like you want to do a sleep after each operation.
You can use boost::thread to run it in its own thread and then call join() on it in the main thread to wait for it. If the other thread never ends because of a while(true), then your program will run until you close the window.
Call SwitchToThread() inside of your while loop. (Sleep is less than ideal, as it forfeits the current time slice even if no other thread needs it.)
Consider using a timer instead of doing such a tight loop.
Alternatively you can put a Sleep(300) in between.
Someone would have to kill your process to close it; I cannot discern any window in your code.
If you are polling a website (it looks like you are doing something with Google there) then I would advise you to use a much larger interval! Not many a webmaster would be happy to see such activity; it's more likely to be seen as a DoS attack!
Anyway, if there's a window, rather put this code in a timer callback; otherwise start a timer and allow the user to exit your program somehow (maybe by waiting for a keypress with std::cin.get()).
I'm using the GCC compiler and C++ and I want to make a timer that triggers an interrupt when the countdown reaches 0.
Any ideas? Thanks in advance.
EDIT
Thanks to Adam, I know how to do it.
Now, what about multiple timers running in parallel?
Actually, these timers are for something very basic. In NCURSES, I have a list of things. When I press a key, one of the things changes color for 5 seconds. If I press another key, another thing in the list does the same. It's like emphasizing strings depending on user input. Is there a simpler way to do that?
An easy, portable way to implement an interrupt timer is using Boost.ASIO. Specifically, the boost::asio::deadline_timer class allows you to specify a time duration and an interrupt handler which will be executed asynchronously when the timer runs out.
See here for a quick tutorial and demonstration.
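A minimal sketch of such a timer (note that on recent Boost versions io_service is spelled io_context):

#include <boost/asio.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

int main()
{
    boost::asio::io_service io;
    boost::asio::deadline_timer timer(io, boost::posix_time::seconds(5));

    // The handler runs asynchronously when the countdown reaches zero.
    timer.async_wait([](const boost::system::error_code&) {
        std::cout << "Timer expired!\n";
    });

    io.run(); // blocks, dispatching handlers, until all work is done
    return 0;
}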
One way to do it is to use the alarm(2) system call to send a SIGALRM to your process when the timer runs out:
#include <signal.h>
#include <unistd.h>

volatile sig_atomic_t timer_expired = 0;

void sigalrm_handler(int sig)
{
    // This gets called when the timer runs out. Try not to do too much here;
    // the recommended practice is to set a flag (of type sig_atomic_t), and have
    // code elsewhere check that flag (e.g. in the main loop of your program)
    timer_expired = 1;
}

...

signal(SIGALRM, &sigalrm_handler); // set a signal handler
alarm(10); // set an alarm for 10 seconds from now
Take careful note of the cautions in the man page of alarm:
alarm() and setitimer() share the same timer; calls to one will interfere with use of the other.
sleep() may be implemented using SIGALRM; mixing calls to alarm() and sleep() is a bad idea.
Scheduling delays can, as ever, cause the execution of the process to be delayed by an arbitrary amount of time.
What is the best way to exit a loop as close to 30ms as possible in C++? Polling boost::microsec_clock? Polling QTime? Something else?
Something like:
A = now;
for (blah; blah; blah) {
    Blah();
    if (now - A > 30000)
        break;
}
It should work on Linux, OS X, and Windows.
The calculations in the loop are for updating a simulation. Every 30ms, I'd like to update the viewport.
The calculations in the loop are for updating a simulation. Every 30ms, I'd like to update the viewport.
Have you considered using threads? What you describe seems the perfect example of why you should use threads instead of timers.
The main process thread keeps taking care of the UI, and have a QTimer set to 30ms to update it. It locks a QMutex to have access to the data, performs the update, and releases the mutex.
The second thread (see QThread) does the simulation. For each cycle, it locks the QMutex, does the calculations and releases the mutex when the data is in a stable state (suitable for the UI update).
With the increasing trend towards multi-core processors, you should think more and more about using threads rather than timers. Your application automatically benefits from the increased power (multiple cores) of new processors.
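A rough sketch of that arrangement (class names and the update slot are illustrative; the connect call uses Qt 5 syntax):

#include <QThread>
#include <QMutex>
#include <QMutexLocker>
#include <QTimer>

// The simulation runs on its own thread; the mutex guards the shared state.
class Simulation : public QThread {
public:
    QMutex mutex;

    void run() override {
        while (true) {
            QMutexLocker lock(&mutex);
            step(); // hypothetical: advance the model by one cycle
        }           // mutex released here, data is in a stable state
    }
    void step();
};

// In the GUI thread:
// Simulation sim;
// sim.start();
// QTimer timer;
// QObject::connect(&timer, &QTimer::timeout, [&] {
//     QMutexLocker lock(&sim.mutex);
//     viewport->update(); // hypothetical viewport widget repaint
// });
// timer.start(30); // every 30 ms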
While this does not answer the question, it might give another look at the solution. What about placing the simulation code and user interface in different threads? If you use Qt, periodic update can be realized using a timer or even QThread::msleep(). You can adapt the threaded Mandelbrot example to suit your need.
The code snippet example in this link pretty much does what you want:
http://www.cplusplus.com/reference/clibrary/ctime/clock/
Adapted from their example:
#include <time.h>

void runwait(int seconds)
{
    clock_t endwait = clock() + seconds * CLOCKS_PER_SEC;
    while (clock() < endwait)
    {
        /* Do stuff while waiting */
    }
}
If you need to do work until a certain time has elapsed, then docflabby's answer is spot-on. However, if you just need to wait, doing nothing, until a specified time has elapsed, then you should use usleep().
Short answer is: you can't in general, but you can if you are running on the right OS or on the right hardware.
You can get CLOSE to 30ms on all the OS's using an assembly call on Intel systems and something else on other architectures. I'll dig up the reference and edit the answer to include the code when I find it.
The problem is the time-slicing algorithm and how close to the end of your time slice you are on a multi-tasking OS.
On some real-time OS's, there's a system call in a system library you can make, but I'm not sure what that call would be.
edit: LOL! Someone already posted a similar snippet on SO: Timer function to provide time in nano seconds using C++
VonC has got the comment with the CPU timer assembly code in it.
According to your question, every 30ms you'd like to update the viewport. I wrote a similar app once that probed hardware every 500ms for similar stuff. While this doesn't directly answer your question, I have the following followups:
Are you sure that Blah(), for updating the viewport, can execute in less than 30ms in every instance?
Seems more like running Blah() would be done better by a timer callback.
It's very hard to find a library timer object that will push on a 30ms interval to do updates in a graphical framework. On Windows XP I found that the standard Win32 API timer that pushes window messages upon timer interval expiration, even on a 2GHz P4, couldn't do updates any faster than a 300ms interval, no matter how low I set the timing interval to on the timer. While there were high performance timers available in the Win32 API, they have many restrictions, namely, that you can't do any IPC (like update UI widgets) in a loop like the one you cited above.
Basically, the upshot is you have to plan very carefully how you want to have updates occur. You may need to use threads, and look at how you want to update the viewport.
Just some things to think about. They caught me by surprise when I worked on my project. If you've thought these things through already, please disregard my answer :0).
You might consider just updating the viewport every N simulation steps rather than every K milliseconds. If this is (say) a serious commercial app, then you're probably going to want to go the multi-thread route suggested elsewhere, but if (say) it's for personal or limited-audience use and what you're really interested in is the details of whatever it is you're simulating, then every-N-steps is simple, portable and may well be good enough to be getting on with.
See QueryPerformanceCounter and QueryPerformanceFrequency
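For example, a Windows-only fragment measuring elapsed time with the performance counter:

#include <windows.h>

LARGE_INTEGER freq, start, now;
QueryPerformanceFrequency(&freq); // counter ticks per second
QueryPerformanceCounter(&start);
// ... run one simulation step ...
QueryPerformanceCounter(&now);
double elapsedMs = 1000.0 * (now.QuadPart - start.QuadPart) / freq.QuadPart;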
If you are using Qt, here is a simple way to do this:
QTimer* t = new QTimer( parent ) ;
t->setInterval( 30 ) ; // in msec
t->setSingleShot( false ) ;
connect( t, SIGNAL( timeout() ), viewPort, SLOT( redraw() ) ) ;
You'll need to specify viewPort and redraw(). Then start the timer with t->start().