Calculate FPS with sfml - c++

This question is NOT about measuring the speed of the main loop in SFML. How do I calculate the actual framerate, i.e. how fast the screen is really updated? For example, with V-Sync enabled the main loop can still run at a high speed, so measuring the main loop is not what I want; the actual window update rate is.

SFML does not provide a way to retrieve the current framerate, and neither do backends such as OpenGL. Therefore the only way is to monitor the main loop speed, as you suggested.
Also, window.setFramerateLimit(60), window.setVerticalSyncEnabled(true) or an internal sleep in the loop all have the same effect in my SFML application on a 60 Hz monitor, the difference being that V-Sync is more CPU- and GPU-expensive (because of the way it synchronizes).
Therefore you can calculate the FPS yourself, for example with <chrono>, in your main loop.
Wrap your draw calls between two time_point values (start and end) and compute the elapsed time with std::chrono::duration_cast.
Example:
std::chrono::high_resolution_clock::time_point start;
std::chrono::high_resolution_clock::time_point end;
float fps;
while (window.isOpen()) {
    // Perform some non-rendering logic here...
    // Done. Now perform the GPU work...
    start = std::chrono::high_resolution_clock::now();
    // window.draw(...), window.display(), etc.
    end = std::chrono::high_resolution_clock::now();
    // Nanoseconds per frame -> frames per second.
    fps = 1e9f / (float)std::chrono::duration_cast<std::chrono::nanoseconds>(end - start).count();
}
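As an alternative, here is a minimal sketch that uses SFML's own sf::Clock instead of <chrono> and measures the whole frame, including window.display(), which is where V-Sync actually blocks. The exponential smoothing is just an assumption to keep the displayed value readable, not something SFML requires.
#include <SFML/Graphics.hpp>

int main()
{
    sf::RenderWindow window(sf::VideoMode(800, 600), "FPS demo");
    window.setVerticalSyncEnabled(true);

    sf::Clock frameClock;          // restarted once per frame
    float smoothedFps = 0.f;

    while (window.isOpen())
    {
        sf::Event event;
        while (window.pollEvent(event))
            if (event.type == sf::Event::Closed)
                window.close();

        window.clear();
        // window.draw(...) calls go here.
        window.display();          // blocks here when V-Sync is on

        // Time of the whole frame, including the blocking display() call.
        float frameSeconds = frameClock.restart().asSeconds();
        float fps = (frameSeconds > 0.f) ? 1.f / frameSeconds : 0.f;

        // Simple exponential smoothing so the value does not flicker.
        smoothedFps = 0.9f * smoothedFps + 0.1f * fps;
    }
}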

Related

GLX animation slower than expected

I have an application using XCB and openGL. At the beginning, I choose a framebuffer configuration with the following attributes:
const int attributes[] = {GLX_BUFFER_SIZE, 32, GLX_DEPTH_SIZE, 24, GLX_DOUBLEBUFFER, True, GLX_RENDER_TYPE, GLX_RGBA_BIT, None};
fb_configs = glXChooseFBConfig(display, screen_index, attributes, &fb_configs_count);
I run a simple animation which is supposed to last a fixed duration (1s), but showing it on the screen takes much longer (about 5s). After adding logs to show the value of progress, I found out the actual loop only lasts 1s.
struct timeval start; // start time of the animation
gettimeofday(&start, 0);
while (1)
{
    double progress = timer_progress(&start);
    if (progress > 1.0)
        break; // end the animation
    draw(progress);
    glXSwapBuffers(display, drawable);

    xcb_generic_event_t *event = xcb_poll_for_event(connection);
    if (!event)
    {
        usleep(1000);
        continue;
    }
    switch (event->response_type & ~0x80)
    {
    case XCB_EXPOSE:
    default:
        free(event);
        continue;
    }
}
I am not sure what is really going on. I suppose on each iteration glXSwapBuffers() enqueues the opengl commands for drawing and most of them are yet to be executed when the loop is over.
Tweaking the parameter of usleep() has no effect other than to make the animation less smooth or to make the animation much slower. The problem disappears when I switch to single buffering (but I get the problems associated with single buffering).
It seems I'm not doing something right, but I have no idea what.
The exact timing behaviour of glXSwapBuffers is left open to each implementation. NVidia and fglrx choose to block in glXSwapBuffers until the V-Sync (if V-Sync is enabled); Mesa and Intel choose to return immediately and block at the first later call that no longer fits into the command queue, while calls that would modify the back buffer before the V-Sync are held back.
However, if you want your animation to have an exact length, a loop with a fixed number of frames plus delays will not work. Instead you should redraw as fast as possible (using delays only to limit your drawing rate) and advance the animation by the actual time elapsed between draw iterations instead of by a fixed timestep (this is in contrast to game logic loops, which should in fact use a fixed time step, albeit at a much faster rate than drawing).
Last but not least, do not use gettimeofday for controlling animations. gettimeofday reports wall-clock time, which may jump, slow down, speed up or even run backwards. Use a monotonic high-precision timer instead (clock_gettime(CLOCK_MONOTONIC, …)).
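A minimal sketch of that idea, assuming the draw(progress) and glXSwapBuffers calls from the question; the one-second duration and the elapsed_seconds helper are just illustrative, and the event handling from the question is omitted for brevity:
#include <time.h>

/* seconds elapsed since 'since', using a monotonic clock that never jumps */
static double elapsed_seconds(const struct timespec *since)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    return (now.tv_sec - since->tv_sec) + (now.tv_nsec - since->tv_nsec) / 1e9;
}

/* ... inside the rendering code ... */
const double duration = 1.0;           /* animation length in seconds */
struct timespec anim_start;
clock_gettime(CLOCK_MONOTONIC, &anim_start);

while (1)
{
    double progress = elapsed_seconds(&anim_start) / duration;
    if (progress > 1.0)
        break;
    draw(progress);                    /* progress driven by real elapsed time */
    glXSwapBuffers(display, drawable); /* may block until the next V-Sync */
}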

Why is my rendering thread taking up 100% cpu?

So right now in my OpenGL game engine, when my rendering thread has literally nothing to do, it's taking up the maximum of what my CPU can give it. Windows Task Manager shows my application taking up 25% of the processor (I have 4 hardware threads, so 25% is the maximum a single thread can take). When I don't start the rendering thread at all I get 0-2% (which is worrying on its own, since all it's doing then is running an SDL input loop).
So, what exactly is my rendering thread doing? Here's some code:
Timer timer;
while (gVar.running)
{
    timer.frequencyCap(60.0);

    beginFrame();
    drawFrame();
    endFrame();
}
Let's go through each of those. Timer is a custom timer class I made using SDL_GetPerformanceCounter. timer.frequencyCap(60.0); is meant to ensure that the loop doesn't run more than 60 times per second. Here's the code for Timer::frequencyCap():
double Timer::frequencyCap(double maxFrequency)
{
    double duration;

    update();
    duration = _deltaTime;

    if (duration < (1.0 / maxFrequency))
    {
        double dur = ((1.0 / maxFrequency) - duration) * 1000000.0;
        this_thread::sleep_for(chrono::microseconds((int64)dur));
        update();
    }

    return duration;
}

void Timer::update(void)
{
    if (_freq == 0)
        return;

    _prevTicks = _currentTicks;
    _currentTicks = SDL_GetPerformanceCounter();

    // Some sanity checking here. //
    // The only way _currentTicks can be less than _prevTicks is if we've wrapped around to 0. //
    // So, we need some other way of calculating the difference.
    if (_currentTicks < _prevTicks)
    {
        // If we take the difference between UINT64_MAX and _prevTicks, then add that to _currentTicks, we get the proper difference between _currentTicks and _prevTicks. //
        uint64 dif = UINT64_MAX - _prevTicks;

        // The +1 here prevents an off-by-1 error. In truth, the error would be pretty much indistinguishable, but we might as well be correct. //
        _deltaTime = (double)(_currentTicks + dif + 1) / (double)_freq;
    }
    else
        _deltaTime = (double)(_currentTicks - _prevTicks) / (double)_freq;
}
The next 3 functions are considerably simpler (at this stage):
void Renderer::beginFrame()
{
    // Perform a resize if we need to. //
    if (_needResize)
    {
        gWindow.getDrawableSize(&_width, &_height);
        glViewport(0, 0, _width, _height);
        _needResize = false;
    }

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
}

void Renderer::endFrame()
{
    gWindow.swapBuffers();
}

void Renderer::drawFrame()
{
}
The rendering thread was created using std::thread. The only explanation I can think of is that timer.frequencyCap somehow isn't working, except I use that exact same function in my main thread and I idle at 0-2%.
What am I doing wrong here?
If V-Sync is enabled and your program honors the swap interval, then your program showing up as taking 100% CPU is actually an artifact of how Windows measures CPU time. It has been a long-known issue: whenever your program blocks in a driver context (which is what happens when OpenGL blocks on V-Sync), Windows accounts this as the program consuming CPU time, while it is actually just idling.
If you add a Sleep(1) right after swap buffers it will trick Windows into a more sane accounting; on some systems even a Sleep(0) does the trick.
Anyway, the 100% are just a cosmetic problem, most of the time.
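A minimal sketch of that workaround, applied to the render loop from the question (Sleep comes from <windows.h>; whether Sleep(0) is enough depends on the system):
#include <windows.h>  // Sleep

while (gVar.running)
{
    timer.frequencyCap(60.0);

    beginFrame();
    drawFrame();
    endFrame();   // swaps buffers; may block on V-Sync inside the driver

    Sleep(1);     // yields the thread so Windows accounts the idle time correctly
}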
In the past weeks I've done some exhaustive research on low-latency rendering (i.e. minimizing the time between user input and the corresponding photons coming out of the display), since I'm getting a VR headset soon. Here's what I found out regarding the timing of SwapBuffers: the sane solution is to measure the frame rendering times and add an artificial sleep before SwapBuffers so that you wake up only a few ms before the V-Sync. However, this is easier said than done, because OpenGL is highly asynchronous and explicitly adding syncs will reduce your throughput.
- If you have a complex scene or non-optimized rendering, hit a bottleneck somewhere, or have an error in your GL code, then the framerate usually drops to around 20 fps (at least on NVidia) no matter the complexity of the scene, and for very complex scenes even below that.
- Try this: measure the time it takes to process
    beginFrame();
    drawFrame();
    endFrame();
  There you will see your FPS limit. Compare it to the scene complexity / HW capability and decide whether it is a bug or just too complex a scene.
- Try turning off some GL state. For example, last week I discovered that turning CULL_FACE off actually speeds up one of my non-optimized renderers by about 10-100 times, which I still don't understand to this day (old-style GL code).
- Check for GL errors.
- I do not see any glFlush()/glFinish() in your code; try measuring with glFinish() (see the sketch after this list).
- If you can't sort this out, you can still use a dirty trick like adding Sleep(1); to your code. It will force your thread to sleep, so it will never use 100% power. The time it sleeps is 1 ms + the scheduler granularity, so it also limits the target FPS.
- You use this_thread::sleep_for(chrono::microseconds((int64)dur)); I do not know that function; are you really sure it does what you think?
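A minimal sketch of that measurement, assuming the Timer/Renderer code from the question; glFinish() forces the GPU to finish, so the measured time includes the actual rendering work rather than just the time spent queuing commands:
#include <chrono>

while (gVar.running)
{
    auto t0 = std::chrono::steady_clock::now();

    beginFrame();
    drawFrame();
    endFrame();
    glFinish();   // wait until the GPU has actually finished this frame

    auto t1 = std::chrono::steady_clock::now();
    double frameMs = std::chrono::duration<double, std::milli>(t1 - t0).count();
    // 1000.0 / frameMs is the maximum FPS this scene/HW combination can reach.
}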

How can I implement an accurate (but variable) FPS limit/cap in my OpenGL application?

I am currently working on an OpenGL application to display a few 3D spheres to the user, which they can rotate, move around, etc. That being said, there's not much in the way of complexity here, so the application runs at quite a high framerate (~500 FPS).
Obviously, this is overkill - even 120 would be more than enough, but my issue is that running the application at full speed eats away at my CPU, causing excess heat, power consumption, etc. What I want to do is let the user set an FPS cap so that the CPU isn't being overly used when it doesn't need to be.
I'm working with freeglut and C++, and have already set up the animations/event handling to use timers (using glutTimerFunc). glutTimerFunc, however, only accepts an integer number of milliseconds - so if I want 120 FPS, the closest I can get is (int)1000/120 = 8 ms, which equates to 125 FPS (I know it's a negligible amount, but I still just want to put in an FPS limit and get exactly that FPS if I know the system can render faster).
Furthermore, using glutTimerFunc to limit the FPS never works consistently. Say I cap my application at 100 FPS; it usually never goes higher than 90-95 FPS. Again, I've tried to work out the time difference between rendering/calculations, but then it always overshoots the limit by 5-10 FPS (timer resolution, possibly).
I suppose the best comparison here would be a game (e.g. Half-Life 2) - you set your FPS cap, and it always hits that exact amount. I know I could measure the time deltas before and after I render each frame and then loop until I need to draw the next one, but that doesn't solve my 100% CPU usage issue, nor does it solve the timing resolution issue.
Is there any way I can implement an effective, cross-platform, variable frame rate limiter/cap in my application? Or, in another way, is there any cross-platform (and open source) library that implements high resolution timers and sleep functions?
Edit: I would prefer to find a solution that doesn't rely on the end user enabling VSync, as I am going to let them specify the FPS cap.
Edit #2: To all who recommend SDL (and I did end up porting my application to SDL): is there any difference between using the glutTimerFunc function to trigger a draw, and using SDL_Delay to wait between draws? The documentation for each mentions the same caveats, but I wasn't sure if one was more or less efficient than the other.
Edit #3: Basically, I'm trying to figure out whether there is a (simple) way to implement an accurate FPS limiter in my application (again, like Half-Life 2). If this is not possible, I will most likely switch to SDL (it makes more sense to me to use a delay function rather than use glutTimerFunc to call back the rendering function every x milliseconds).
I'd advise you to use SDL. I personally use it to manage my timers. Moreover, with SDL 1.3 it can limit your FPS to your screen refresh rate (V-Sync). That lets you limit CPU usage while getting the best screen performance (even if you rendered more frames, they couldn't be displayed, since your screen doesn't refresh fast enough).
The function is
SDL_GL_SetSwapInterval(1);
If you want some code for timers using SDL, you can see that here :
my timer class
Good luck :)
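A minimal sketch of how that call might be checked at startup (SDL_GL_SetSwapInterval returns 0 on success and -1 if setting the swap interval is not supported; the fallback is just a suggestion):
#include <cstdio>

// After the window and OpenGL context have been created:
if (SDL_GL_SetSwapInterval(1) != 0)
{
    // V-Sync is not available here; fall back to a manual frame cap.
    std::fprintf(stderr, "V-Sync unavailable: %s\n", SDL_GetError());
}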
I think a good way to achieve this, no matter what graphics library you use, is to have a single clock measurement in the gameloop to take every single tick (ms) into account. That way the average fps will be exactly the limit just like in Half-Life 2. Hopefully the following code snippet will explain what I am talking about:
//FPS limit
unsigned int FPS = 120;

//double holding the clock time of the last measurement
double clock = 0;

while (cont) {
    //double holding the difference between clock times
    double deltaticks;
    //double holding the clock time in this new frame
    double newclock;

    //do stuff, update stuff, render stuff...

    //measure the clock time of this frame
    //this function can be replaced by any function returning the time in ms
    //for example clock() from <time.h>
    newclock = SDL_GetTicks();

    //calculate the clock ticks missing until the next loop should be
    //done to achieve an average framerate of FPS
    // 1000 / 120 makes 8.333... ticks per frame
    deltaticks = 1000.0 / FPS - (newclock - clock);

    /* if there is an integral number of ticks missing then wait the
       remaining time
       SDL_Delay takes an integer of ms to delay the program like most delay
       functions do and can be replaced by any delay function */
    if (floor(deltaticks) > 0)
        SDL_Delay(deltaticks);

    /* the clock measurement is now shifted forward in time by the amount
       SDL_Delay waited plus the fractional part that was not considered yet
       (aka deltaticks); the fractional part is considered in the next frame */
    if (deltaticks < -30) {
        /* don't try to compensate more than 30 ms (a few frames) behind the
           framerate
           when the limit is higher than the possible average fps, deltaticks
           would keep sinking without this 30 ms limitation
           this ensures the fps even if the real possible fps is
           macroscopically inconsistent */
        clock = newclock - 30;
    } else {
        clock = newclock + deltaticks;
    }

    /* deltaticks can be negative when a frame took longer than it should
       have or the measured time the frame took was zero
       the next frame then won't be delayed by as long, to compensate for the
       previous frame taking longer */

    //do some more stuff, swap buffers for example:
    SDL_RenderPresent(renderer); // this is SDL's swap-buffers function
}
I hope this example with SDL helps. It is important to measure the time only once per frame so every frame is taken into account.
I recommend modularizing this timing into a function, which also makes your code clearer. This code snippet has no comments, in case they just annoyed you in the previous one:
unsigned int FPS = 120;

void renderPresent(SDL_Renderer * renderer) {
    static double clock = 0;
    double deltaticks;
    double newclock = SDL_GetTicks();

    deltaticks = 1000.0 / FPS - (newclock - clock);

    if (floor(deltaticks) > 0)
        SDL_Delay(deltaticks);

    if (deltaticks < -30) {
        clock = newclock - 30;
    } else {
        clock = newclock + deltaticks;
    }

    SDL_RenderPresent(renderer);
}
Now you can call this function in your main loop instead of your swap-buffer function (SDL_RenderPresent(renderer) in SDL). In SDL you have to make sure the SDL_RENDERER_PRESENTVSYNC flag is turned off. This function relies on the global variable FPS, but you can think of other ways of storing it. I just put the whole thing in my library's namespace.
This method of capping the framerate delivers exactly the desired average framerate if there are no large differences in the loop time over multiple frames, thanks to the 30 ms limit on deltaticks. That limit is required: when the FPS limit is higher than the actual framerate, deltaticks would drop indefinitely, and when the framerate then rises above the FPS limit again, the code would try to compensate for the lost time by rendering every frame immediately, resulting in a huge framerate until deltaticks rises back to zero. You can modify the 30 ms to fit your needs; it is just an estimate of mine. I did a couple of benchmarks with Fraps; it works with every imaginable framerate and delivers beautiful results from what I have tested.
I must admit I coded this just yesterday, so it is not unlikely to have some kind of bug. I know this question was asked 5 years ago, but the given answers did not satisfy me. Also feel free to edit this post, as it is my very first one and probably flawed.
EDIT:
It has been brought to my attention that SDL_Delay is very, very inaccurate on some systems. I heard of a case where it delayed by far too much on Android. This means my code might not be portable to all your desired systems.
The easiest way to solve it is to enable Vsync. That's what I do in most games to prevent my laptop from getting too hot.
As long as you make sure the speed of your rendering path is not connected to the other logic, this should be fine.
There is a function glutGet( GLUT_ELAPSED_TIME ) which returns the time since the program started in milliseconds, but that's likely still not fast enough.
A simple way is to write your own timer method, which uses QueryPerformanceCounter on Windows and gettimeofday on POSIX systems.
Or you can always use the timer functions from SDL or SFML, which do basically the same as above.
You should not try to limit the rendering rate manually, but synchronize with the display's vertical refresh. This is done by enabling V-Sync in the graphics driver settings. Apart from preventing (your) programs from rendering at too high a rate, it also increases picture quality by avoiding tearing.
The swap interval extensions allow your application to fine-tune the V-Sync behaviour. But in most cases just enabling V-Sync in the driver and letting the buffer swap block until sync suffices.
I would suggest using sub-ms precision system timers (QueryPerformanceCounter, gettimeofday) to get timing data. These can help you profile performance in optimized release builds also.
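A minimal sketch of such a timer, assuming either Windows or a POSIX system; the milliseconds() helper name is just illustrative:
#ifdef _WIN32
#include <windows.h>

double milliseconds()
{
    LARGE_INTEGER freq, now;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&now);
    return 1000.0 * (double)now.QuadPart / (double)freq.QuadPart;
}
#else
#include <sys/time.h>

double milliseconds()
{
    struct timeval tv;
    gettimeofday(&tv, 0);
    return tv.tv_sec * 1000.0 + tv.tv_usec / 1000.0;
}
#endif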
Some background information:
- SDL_Delay is pretty much the same as Sleep/sleep/usleep/nanosleep, but it is limited to milliseconds as its parameter.
- Sleeping works by relying on the system's thread scheduler to continue your code.
- Depending on your OS and hardware, the scheduler may have a tick frequency lower than 1000 Hz, which results in longer timespans than you specified when calling sleep, so you have no guarantee of getting the desired sleep time.
- You can try to change the scheduler's frequency. On Windows you can do so by calling timeBeginPeriod(). For Linux systems, check out this answer.
- Even if your OS supports a scheduler frequency of 1000 Hz, your hardware may not, but most modern hardware does.
- Even if your scheduler's frequency is 1000 Hz, sleep may take longer if the system is busy with higher-priority processes, but this should not happen unless your system is under very high load.
To sum up, you may sleep for microseconds on some tickless Linux kernels, but if you are interested in a cross-platform solution you should try to get the scheduler frequency up to 1000 Hz to ensure the sleeps are accurate in most cases.
To solve the rounding issue for 120 FPS:
1000 / 120 = 8.333 ms
(int)1000 / 120 = 8 ms
Either you do a busy wait for 333 microseconds and sleep for 8 ms afterwards (see the sketch below), which costs some CPU time but is very accurate. Or you follow Neop's approach of sleeping sometimes 8 ms and sometimes 9 ms to average out at 8.333 ms, which is far more efficient but less accurate.
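A minimal sketch of the first option (sleep for the coarse part of the frame, then busy-wait the remainder), using std::chrono; the capFrame name and the 2 ms safety margin are just assumptions to absorb scheduler jitter:
#include <chrono>
#include <thread>

void capFrame(double targetMs)   // e.g. 1000.0 / 120.0
{
    static auto frameStart = std::chrono::steady_clock::now();

    auto target = frameStart + std::chrono::duration<double, std::milli>(targetMs);

    // Coarse part: let the scheduler do most of the waiting, minus a safety margin.
    auto coarse = target - std::chrono::milliseconds(2);
    if (std::chrono::steady_clock::now() < coarse)
        std::this_thread::sleep_until(coarse);

    // Fine part: spin for the last ~2 ms for accuracy.
    while (std::chrono::steady_clock::now() < target)
        ; // busy wait

    frameStart = std::chrono::steady_clock::now();
}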

setting max frames per second in openGL

Is there any way to calculate how many updates should be made to reach the desired frame rate, NOT system specific? I found one for Windows, but I would like to know if something like this exists in OpenGL itself. It should be some sort of timer.
Or how else can I prevent the FPS from dropping or rising dramatically? At the moment I'm testing it by drawing a big number of vertices in a line, and using Fraps I can see the frame rate go from 400 to 200 fps with an evident slowing down of the drawing.
You have two different ways to solve this problem:
Suppose that you have a variable called maximum_fps, which contains the maximum number of frames you want to display.
Then you measure the amount of time spent on the last frame (a timer will do).
Now suppose that you want a maximum of 60 FPS in your application. Then you want the time measured to be no lower than 1/60 s. If the measured time is lower, you call sleep() for the amount of time left in the frame.
Or you can have a variable called tick, which contains the current "game time" of the application. With the same timer, you increment it in each iteration of the main loop. Then, in your drawing routines you calculate the positions based on the tick variable, since it contains the current time of the application.
The big advantage of option 2 is that your application will be much easier to debug, since you can play around with the tick variable and go forward and backward in time whenever you want. This is a big plus.
Rule #1. Do not make update() or loop() kind of functions rely on how often it gets called.
You can't really get your desired FPS. You could try to boost it by skipping some expensive operations, or slow it down by calling sleep()-style functions. However, even with those techniques, the FPS will almost always differ from the exact FPS you want.
The common way to deal with this problem is using elapsed time from previous update. For example,
// Bad
void enemy::update()
{
    position.x += 10; // this enemy's moving speed is entirely tied to the FPS and you can't control it.
}

// Good
void enemy::update(float elapsedTime)
{
    position.x += speedX * elapsedTime; // Now you can control speedX and it doesn't matter how often update() gets called.
}
Is there any way to calculate how many updates should be made to reach the desired frame rate, NOT system specific?
No.
There is no way to precisely calculate how many updates should be called to reach the desired framerate.
However, you can measure how much time has passed since the last frame, calculate the current framerate from it, compare it with the desired framerate, and then introduce a bit of sleeping to reduce the current framerate to the desired value. Not a precise solution, but it will work.
I found one for Windows, but I would like to know if something like this exists in OpenGL itself. It should be some sort of timer.
OpenGL is concerned only with rendering and has nothing to do with timers. Also, using Windows timers isn't a good idea. Use QueryPerformanceCounter, GetTickCount or SDL_GetTicks to measure how much time has passed, and sleep to reach the desired framerate.
Or how else can I prevent the FPS from dropping or rising dramatically?
You prevent FPS from raising by sleeping.
As for preventing FPS from dropping...
It is an insanely broad topic. It goes something like this:
- use vertex buffer objects or display lists,
- profile the application,
- do not use insanely big textures,
- do not use too much alpha blending,
- avoid "raw" OpenGL (glVertex3f),
- do not render invisible objects (even if no polygons are drawn, processing them takes time),
- consider learning about BSPs or octrees for rendering complex scenes,
- for parametric surfaces and curves, do not needlessly use too many primitives (if you render a circle using one million polygons, nobody will notice the difference),
- disable vsync.
In short: reduce the number of rendering calls, rendered polygons, rendered pixels and texels read to the absolute possible minimum, read every available performance document from NVidia, and you should get a performance boost.
You're asking the wrong question. Your monitor will only ever display at 60 fps (50 fps in Europe, or possibly 75 fps if you're a pro-gamer).
Instead you should be seeking to lock your fps at 60 or 30. There are OpenGL extensions that allow you to do that. However the extensions are not cross platform (luckily they are not video card specific or it'd get really scary).
windows: wglSwapIntervalEXT
x11 (linux): glXSwapIntervalSGI
mac os x: ?
These extensions are closely tied to your monitor's V-Sync. Once enabled, calls to swap the OpenGL back buffer will block until the monitor is ready for it. This is like putting a sleep in your code to enforce 60 fps (or 30, or 15, or some other number if you're not using a monitor which displays at 60 Hz). The difference is that the "sleep" is always perfectly timed instead of an educated guess based on how long the last frame took.
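A hedged sketch of how those extensions are typically loaded at runtime (the function pointers must be queried because they are extensions; the extension-availability checks are omitted here for brevity):
#ifdef _WIN32
    typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int);
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
    if (wglSwapIntervalEXT)
        wglSwapIntervalEXT(1);   // swap at most once per vertical refresh
#else
    typedef int (*PFNGLXSWAPINTERVALSGIPROC)(int);
    PFNGLXSWAPINTERVALSGIPROC glXSwapIntervalSGI =
        (PFNGLXSWAPINTERVALSGIPROC)glXGetProcAddress((const GLubyte *)"glXSwapIntervalSGI");
    if (glXSwapIntervalSGI)
        glXSwapIntervalSGI(1);
#endif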
You absolutely do want to throttle your frame rate. It all depends on what you have going on in that rendering loop and what your application does, especially if it's physics/network related, or if you're doing any type of graphics processing with an outside toolkit (Cairo, QPainter, Skia, AGG, ...). Otherwise you get out-of-sync results or 100% CPU usage.
This code may do the job, roughly.
static int redisplay_interval;

void timer(int)
{
    glutPostRedisplay();
    glutTimerFunc(redisplay_interval, timer, 0);
}

void setFPS(int fps)
{
    redisplay_interval = 1000 / fps;
    glutTimerFunc(redisplay_interval, timer, 0);
}
Here is a similar question, with my answer and worked example
I also like deft_code's answer, and will be looking into adding what he suggests to my solution.
The crucial part of my answer is:
If you're thinking about slowing down AND speeding up frames, you have to think carefully about whether you mean rendering or animation frames in each case. In this example, render throttling for simple animations is combined with animation acceleration, for any cases when frames might be dropped in a potentially slow animation.
The example is for animation code that renders at the same speed regardless of whether benchmarking mode, or fixed FPS mode, is active. An animation triggered before the change even keeps a constant speed after the change.

Question about running a program at same speed in any computer

I made a program (in C++, using gl/glut) for study purposes where you can basically run around a screen (in first person), and it has several solids around the scene. I tried to run it on a different computer and the speed was completely different, so I searched on the subject and I'm currently doing something like this:
Idle function:
start = glutGet (GLUT_ELAPSED_TIME);
double dt = (start-end)*30/1000;
<all the movement*dt>
glutPostRedisplay ();
end = glutGet (GLUT_ELAPSED_TIME);
Display function:
<rendering for all objects>
glutSwapBuffers ();
My question is: is this the proper way to do it? The scene is being displayed after the idle function, right?
I tried placing end = glutGet (GLUT_ELAPSED_TIME) before glutSwapBuffers () and didn't notice any change, but when I put it after glutSwapBuffers () it slows down a lot and even stops sometimes.
EDIT: I just noticed that, the way I'm thinking about it, end-start should end up being the time that passed since all the drawing was done and before the movement update, as idle () would be called as soon as display () ends. So is it true that the only time not being accounted for here is the time the computer takes to do all of the movement (which should be barely anything)?
Sorry if this is too confusing.
Thanks in advance.
I don't know what "Glut" is, but as a general rule of game development, I would never base movement speed on how fast the computer can process the directives. That's what they did in the late 80's, and that's why when you play an old game today, things move at light speed.
I would set up a timer, and base all of my movements off of clear and specific timed events.
Set up a high-resolution timer (eg. QueryPerformanceCounter on Windows) and measure the time between every frame. This time, called delta-time (dt), should be used in all movement calculations, eg. every frame, set an object's position to:
obj.x += 100.0f * dt; // to move 100 units every second
Since the sum of dt over one second is always 1, the above code increments x by 100 every second, no matter what the framerate is. You should do this for all values which change over time. This way your game proceeds at the same rate on all machines (framerate independent), rather than depending on the rate at which the computer processes the logic (framerate dependent). This is also useful if the framerate starts to drop - the game doesn't suddenly start running in slow-motion, it keeps going at the same speed, just rendering less frequently.
I wouldn't use a timer. Things can go wrong, and events can stack up if the PC is too slow or too busy to run at the required rate. I'd let the loop run as fast as it's allowed, and each time calculate how much time has passed and put this into your movement/logic calculations.
Internally, you might actually implement small fixed-time sub-steps, because trying to make everything work right on variable time-steps is not as simple as x+=v*dt.
Try gamedev.net for stuff like this. Lots of articles and a busy forum.
There is a perfect article about game loops that should give you all the information you need.
You have plenty of answers on how to do it the "right" way, but you're using GLUT, and GLUT sometimes sacrifices the "right" way for simplicity and maintaining platform independence. The GLUT way is to register a timer callback function with glutTimerFunc().
static void timerCallback (int value)
{
    // Calculate the deltas

    glutPostRedisplay(); // Have GLUT call your display function
    glutTimerFunc(elapsedMilliseconds, timerCallback, value);
}
If you set elapsedMilliseconds to 40, this function will be called slightly less than 25 times a second. That "slightly less" depends on how long the computer takes to process your delta calculation code. If you keep that code simple, your animation will run at the same speed on all systems, as long as each system can process the display function in less than 40 milliseconds. For more flexibility, you can adjust the frame rate at runtime with a command line option or by adding a control to your interface.
You start the timer loop by calling glutTimerFunc(elapsedMilliseconds, timerCallback, value); in your initialization process.
I'm a games programmer and have done this many times.
Most games run the AI in fixed time increments, 60 Hz for example. Also, most are synced to the monitor refresh to avoid screen tearing, so the max rate would be 60 even if the machine was really fast and could do 1000 fps. So if the machine was slow and running at 20 fps, it would call the AI update function 3 times per render. Doing it this way solves rounding-error problems with small values and also makes the AI deterministic across multiple machines, since the AI update rate is decoupled from the machine speed (necessary for online multiplayer games).
This is a very hard question.
The first thing you need to answer for yourself is: do you want your application to really run at the same speed, or just to appear to run at the same speed? 99% of the time you only want it to appear to run at the same speed.
Now there are two problems: speeding your application up or slowing it down.
Speeding your application up is really hard, since that requires things like dynamic LOD that adjusts to the current speed. And this means LOD in everything, not only graphics.
Slowing your application down is fairly easy. You have two options: sleeping or "busy waiting". It basically depends on your target frame time. If your simulation frame takes well above something like 50 ms you can sleep. The problem is that when sleeping you are dependent on the process scheduler, which on an average system works at a granularity of about 10 ms.
In games, busy waiting is not such a bad idea. What you do is update your simulation and render your frame, then use a time accumulator for the next frame. When rendering frames without a simulation step, you interpolate the state to get a smooth animation. A really great article on the subject can be found at http://gafferongames.com/game-physics/fix-your-timestep/.
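A minimal sketch of that accumulator pattern, loosely following the fix-your-timestep article linked above; update(), render() and the 60 Hz step are placeholders for whatever the application actually does:
#include <chrono>

const double dt = 1.0 / 60.0;          // fixed simulation step (60 Hz)
double accumulator = 0.0;

auto previous = std::chrono::steady_clock::now();

while (running)
{
    auto now = std::chrono::steady_clock::now();
    double frameTime = std::chrono::duration<double>(now - previous).count();
    previous = now;

    accumulator += frameTime;

    // Catch up with as many fixed steps as the real elapsed time requires.
    while (accumulator >= dt)
    {
        update(dt);                    // simulation always advances by exactly dt
        accumulator -= dt;
    }

    // accumulator / dt tells how far we are between two simulation states;
    // it can be used to interpolate the rendered state for smooth animation.
    render(accumulator / dt);
}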