Accurate Signal Generator - C++

I want to create an accurate signal generator in Qt.
For example, a square signal that outputs 255 for 10 us (microseconds) and 0 for 10 ms (milliseconds).
I'm using usleep() in my thread, but it sleeps for about 1 ms! When I searched about it, I found this is due to CPU context switching.
// fp: frequency of the signal   t: time the output stays high (amp)   n: number of periods to generate
void Thread::rectGenerator(double fp, double t, double amp, double n)
{
    double result;
    double T = 1000000 / fp; // period in microseconds
    for (double i = 0, ii = 0; i < n * T; i += _Interval, ii += _Interval)
    {
        if (ii >= T)
            ii = 0;
        if (ii <= t)
            result = amp;
        else
            result = 0;
        th.usleep(1);
        qDebug() << i << "\t" << result;
    }
}
As a result: rectGenerator(200, 20, 255, 12) executes in 12 seconds, but it should execute in 60 ms!
So what is the best way to generate an accurate signal?

Normally what you would do is allocate a buffer that represents a certain amount of real time, fill this buffer with your generated signal, then schedule it to be played or saved or streamed. (You don't specify what you're doing with the signal, but since you're doing it with threads, I'll assume it is approximately real-time).
Assume, then, that your target sampling rate is 48 kHz (standard for professional audio). You would allocate a buffer of 48,000 float samples to store 1 second of audio. (Using double is almost certainly overkill; high-quality audio is 16-bit or maybe 24-bit, and 32-bit if you're mastering for top-flight systems, so float is more than enough precision; double is wasting bits.)
Then you would fill this buffer with your signal using a looping function very similar to what you have pasted above. But you don't use sleep or anything like that; for now, you're only preparing the data which will be played later.
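As a rough sketch of the "prepare first, play later" idea (the function name and parameters below are invented for illustration, not taken from the question), filling one second of samples with a rectangular wave might look like this:

#include <cmath>
#include <cstddef>
#include <vector>

// Fill one second of samples with a rectangular wave.
// sampleRate in Hz, frequency in Hz, highTime in seconds, amplitude in [0, 1].
std::vector<float> makeRectWave(int sampleRate, double frequency,
                                double highTime, float amplitude)
{
    std::vector<float> buffer(static_cast<std::size_t>(sampleRate));
    const double period = 1.0 / frequency;                  // seconds per cycle
    for (std::size_t i = 0; i < buffer.size(); ++i) {
        // position of this sample within the current cycle, in seconds
        const double tInCycle = std::fmod(i / double(sampleRate), period);
        buffer[i] = (tInCycle < highTime) ? amplitude : 0.0f;
    }
    return buffer;
}

No sleeping happens here; the finished buffer is simply handed to whatever playback API you use.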
So once you have the audio buffer prepared, you need to schedule it to be played. This generally involves you sending the buffer to the system to be played back at a certain time. Depending on the API or device, you will get a callback to fill up the buffer, when a low water mark has been reached, etc.
If your signal never changes, you can just generate the one buffer and keep reusing it by rescheduling it to be played. Depending on the period of the signal, you may need to adjust the buffer size to maintain the correct frequency.
(Note that a pure square wave as you describe requires theoretically infinite bandwidth to reproduce, with its brick wall edges; you should probably apply a low pass filter to band-limit the signal, which depends on your output device.)

For accuracy in the 10 us range, your best bet may be to offload the signal generation to dedicated hardware (an FPGA, or a microcontroller with a real-time OS or no OS at all).

Since you need a 100 kHz signal, by far the easiest solution is to use a device that's designed to create signals in that range. A good sound card will achieve this and is quite easy to program. Just load your samples in and tell it to play; its internal hardware will do all the timing.

Playback pure tone, variable phase stream with pyaudio

I'm building an acoustic cancelling device based on PyAudio, Fourier transforms and a C-Media USB audio card. The software is threaded, using the producer/consumer model.
The device detects pure tones in the environment (it reads chunks of microphone audio and uses a Fourier transform to detect the pure tone), and so far, so good: it works like a charm.
The final step, however, is getting tricky. I'm aiming to generate a 100 ms wave (a sine wave) which holds a certain number of periods of the frequency to be cancelled.
This wave buffer has to be played with PyAudio on a separate thread continuously, and the phase must also be increased little by little until the detected amplitude of the tone in the environment drops. This is basically destructive interference.
My problem is that when using Pyaudio.stream.write(), the buffer keeps overrunning, since I have NO IDEA what the function is doing internally. I have tried many combinations of "frame_buffer_size" and audio length, and no matter what I do, the buffer is overrun.
Ideally, the buffer should not have to be recalculated with a different phase on each run... instead, I'm trying to get PyAudio to read a different part of the buffer (a window), so that the sine wave starts from a different origin each time.
I have no idea how to do that.
Long story short, how would you:
1) create a thread to fill a circular buffer continuously with audio data.
2) create a pyaudio consumer thread that continuously reads the buffer without overrunning.
3) manipulate the volume in real time
My output data must be 44100 Hz, little-endian, 16-bit signed int. Any hints, advice, references or suggestions will be greatly appreciated.

Pre-loading audio buffers - what is reasonable and reliable?

I am converting an audio signal processing application from Win XP to Win 7 (at least). You can imagine it is a sonar application - a signal is generated and sent out, and a related/modified signal is read back in. The application wants exclusive use of the audio hardware, and cannot afford glitches - we don't want to read headlines like "Windows beep causes missile launch".
Looking at the Windows SDK audio samples, the most relevant one to my case is the RenderExclusiveEventDriven example. Outside the audio engine, it prepares 10 seconds of audio to play, which provides it in 10ms chunks to the rendering engine via an IAudioRenderClient object's GetBuffer() and ReleaseBuffer(). It first uses these functions to pre-load a single 10ms chunk of audio, then relies on regular 10ms events to load subsequent chunks.
Hopefully this means there will always be 10-20 ms of audio data buffered. How reliable (i.e. glitch-free) should we expect this to be on reasonably modern hardware (less than 18 months old)?
Previously, one could readily pre-load at least half a second's worth of audio via the waveXXX() API, so that if Windows got busy elsewhere, audio continuity was less likely to be affected. 500 ms seems like a much safer margin than 10-20 ms... but if you want both event-driven and exclusive mode, the IAudioRenderClient documentation doesn't exactly make it clear whether it is or is not possible to pre-load more than a single IAudioRenderClient buffer's worth.
Can anyone confirm if more extensive pre-loading is still possible? Is it recommended, discouraged or neither?
If you are worried about launching missiles, I don't think you should be using Windows or any other non-real-time operating system.
That said, we are working on another application that consumes a much higher bandwidth of data (400 MB/s continuously for hours or more). We have seen glitches where the operating system becomes unresponsive for up to 5 seconds, so we have large buffers on the data acquisition hardware.
As with everything else in computing, the wider you go, you:
increase throughput
increase latency
I'd say a 512-sample buffer is the minimum typically used for applications that are not latency-demanding. I've seen buffers of up to 4k samples. Memory-wise that's still pretty much nothing for contemporary devices - a mere 8 kilobytes of memory per channel for 16-bit audio. You get better playback stability and waste fewer CPU cycles. For audio applications that means you can process more tracks with more DSP before the audio begins skipping.
On the other end - I've seen only a few professional audio interfaces that could handle 32-sample buffers. Most can achieve 128 samples; naturally you are still limited to a lower channel and effect count, and even with professional hardware you increase buffering as your project gets larger, then lower it back and disable tracks or effects when you need "real time" to capture a performance. In terms of lowest possible latency, the same box can actually achieve lower latency with Linux and a custom real-time kernel than on Windows, where you are not allowed to do such things. Keep in mind a 64-sample buffer might sound like 8 ms of latency in theory, but in reality it is more like double - because you have both input and output latency plus the processing latency.
For a music player where latency is not an issue, you are perfectly fine with a larger buffer. For stuff like games you need to keep it lower, for the sake of still having a degree of synchronization between what's going on visually and the sound - you simply cannot have your sound lag half a second behind the action. For capturing a music performance together with already-recorded material, you need latency to be low. You should never go lower than what your use case requires, because a needlessly small buffer adds CPU use and increases the odds of audio dropouts. 4k buffering for an audio player is just fine if you can live with half a second of latency between the moment you hit play and the moment you hear the song starting.
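For reference, the arithmetic behind those numbers is just buffer size divided by sample rate; a tiny helper (the name is mine, purely illustrative) makes the trade-off explicit:

// One-way latency contributed by a buffer, in milliseconds.
// E.g. 512 samples at 48 kHz is roughly 10.7 ms; as noted above, input,
// output and processing latency roughly double what you actually perceive.
double bufferLatencyMs(int bufferSamples, double sampleRateHz)
{
    return 1000.0 * bufferSamples / sampleRateHz;
}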
I've done a kind of hybrid solution in my DAW project - since I wanted to employ GPGPU for its tremendous performance relative to the CPU, I split the work internally into two processing paths: a 64-sample buffer for real-time audio, processed on the CPU, and another considerably wider buffer for the data processed by the GPU. Naturally, both come out through the "CPU buffer" so they stay perfectly synchronized, but the GPU path is "processed in advance", allowing higher throughput for already-recorded data and keeping CPU use lower, so the real-time audio is more reliable. I am honestly surprised professional DAW software hasn't taken this path yet - but not too surprised, knowing how much money the big fish of the industry make on hardware that is much less powerful than a modern midrange GPU. They've been claiming that "latency is too high with GPUs" ever since CUDA and OpenCL came out, but with pre-buffering and pre-processing that is really not an issue for data which is already recorded, and it tremendously increases the size of a project the DAW can handle.
The short answer is yes, you can preload a larger amount of data.
This example uses a call to GetDevicePeriod to get the minimum service interval for the device (a REFERENCE_TIME value in 100-nanosecond units) and then passes that value along to Initialize. You can pass a larger value if you wish.
The downside of increasing the period is that you're increasing the latency. If you are just playing a waveform back and aren't planning on making changes on the fly, then this is not a problem. But if you had a sine generator, for example, the increased latency means it would take longer for you to hear a change in frequency or amplitude.
Whether or not you get dropouts depends on a number of things. Are you setting thread priorities appropriately? How small is the buffer? How much CPU are you using to prepare your samples? In general, though, a modern CPU can handle pretty low latency. For comparison, ASIO audio devices run perfectly fine at 96 kHz with a 2048-sample buffer (about 20 milliseconds) with multiple channels - no problem. ASIO uses a similar double-buffering scheme.
This is too long to be a comment, so it may as well be an answer (with qualifications).
Although it was edited out of the final form of the question I submitted, what I had intended by "more extensive pre-loading" was not so much the size of the buffers used as the number of buffers used. The (somewhat unexpected) answers that resulted all helped widen my understanding.
But I was curious. In the old waveXXX() world, it was possible to "pre-load" multiple buffers via waveOutPrepareHeader() and waveOutWrite() calls, the first waveOutWrite() of which would start playback. My old app "pre-loaded" 60 buffers out of a set of 64 in one burst, each with 512 samples played at 48 kHz, creating over 600 ms of buffering in a system with a cycle of 10.66 ms.
Using multiple IAudioRenderClient::GetBuffer() and IAudioRenderClient::ReleaseBuffer() calls prior to IAudioClient::Start() in the WASAPI world, it appears that the same is still possible... at least on my (very ordinary) hardware, and without extensive testing (yet). This is despite the documentation strongly suggesting that exclusive, event-driven audio is strictly a double-buffering system.
I don't know that anyone should set out to exploit this by design, but I thought I'd point out that it may be supported.
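For what it's worth, a minimal sketch of that pre-loading pattern might look like the following. It assumes the IAudioClient was already initialized elsewhere in exclusive, event-driven mode, and fillWithSignal() is a hypothetical helper that writes your waveform into the raw buffer; whether more than two chunks are accepted before Start() is exactly the behaviour discussed above and may vary by driver.

#include <windows.h>
#include <audioclient.h>

void fillWithSignal(BYTE* data, UINT32 frames);   // hypothetical, defined elsewhere

HRESULT preloadAndStart(IAudioClient* audioClient,
                        IAudioRenderClient* renderClient,
                        UINT32 framesPerChunk, int chunkCount)
{
    for (int i = 0; i < chunkCount; ++i) {
        BYTE* data = nullptr;
        HRESULT hr = renderClient->GetBuffer(framesPerChunk, &data);
        if (FAILED(hr))
            return hr;                            // device refused another chunk
        fillWithSignal(data, framesPerChunk);
        hr = renderClient->ReleaseBuffer(framesPerChunk, 0);
        if (FAILED(hr))
            return hr;
    }
    return audioClient->Start();                  // playback begins with the queued audio
}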

Changing gain of audio signal while it's playing causes artifacts

I am playing back audio files in a program, and in the audio rendering callbacks, I apply a gain multiplier to the input signal and add it to the output buffer. Here's some pseudo code to illustrate my actions:
void audioCallback(AudioOutputBuffer* ao, AudioInput* ai, int startSample, int numSamples)
{
    for (int i = startSample; i < startSample + numSamples; i++) {
        ao[i] = ai[i] * gain;
    }
}
Basically I just multiply the data by some multiplier. In this case, gain is a float member that is being adjusted via a GUI callback. If I adjust this value while the audio is still playing, I can hear that the audio is getting softer or louder when I move the slider, but I hear lots of little pops and clicks.
Not really sure what the deal is. I know about interpolation, and I do that if the audio is pitch shifted, but I'm not sure if I need to do any extra interpolation or something if the gain is being adjusted in real time before the audio file is finished playing.
If I adjust the slider before the audio starts playing, the gain is set properly and I get no clicks.
Am I missing something here? How else is gain implemented but a multiplier on the input signal?
Question: how does the multiplication operator know which operand is the audio signal and which one is the gain? Answer: it doesn't. They're both audio signals, and anything audible in either one will be audible in the output.
A flat, unchanging signal doesn't produce any audible sounds. As long as the gain remains constant, it won't introduce any sound of its own.
A signal that changes abruptly will be very audible; it sounds like a click, containing lots of high frequencies.
As you've determined on your own, one way to reduce the high frequency content and thus the audibility is to stretch out the change over a number of samples, using a constant slope. This would certainly suffice in an application where you have lots of time to make the gain change.
Another way would be to run a low-pass filter on the gain signal and use that as the input to the multiplication.
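A minimal sketch of that second approach (names are illustrative): smooth the gain control value with a one-pole low-pass filter and apply the smoothed value per sample.

// One-pole low-pass smoothing of the gain control signal.
// 'smoothing' close to 1.0 gives slower, smoother gain changes.
struct SmoothedGain {
    float current = 1.0f;                       // filter state, kept between callbacks

    void process(float* out, const float* in, int numSamples,
                 float targetGain, float smoothing = 0.999f)
    {
        for (int i = 0; i < numSamples; ++i) {
            current = smoothing * current + (1.0f - smoothing) * targetGain;
            out[i] = in[i] * current;
        }
    }
};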
I fixed it by changing the gain in increments of the amount changed. For instance, if the gain multiplier was set to 1.0 and then changed to 0.8, that's a difference of 0.2. For each sample in the callback, add (difference / numSamples) to the previous gain to produce a gradual, slewed gain change.
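In code, that per-block ramp might look roughly like this (the function and parameter names are invented for the sketch):

// Spread the gain change evenly across one callback block instead of
// jumping to the new value at once.
void applyGainRamp(float* out, const float* in,
                   int startSample, int numSamples,
                   float& currentGain, float targetGain)
{
    const float step = (targetGain - currentGain) / numSamples;
    for (int i = startSample; i < startSample + numSamples; ++i) {
        currentGain += step;                    // reaches targetGain by the end of the block
        out[i] = in[i] * currentGain;
    }
}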

How can I implement an accurate (but variable) FPS limit/cap in my OpenGL application?

I am currently working on an OpenGL application to display a few 3D spheres to the user, which they can rotate, move around, etc. That being said, there's not much in the way of complexity here, so the application runs at quite a high framerate (~500 FPS).
Obviously, this is overkill - even 120 would be more than enough - but my issue is that running the application flat out eats away at my CPU, causing excess heat, power consumption, etc. What I want to do is let the user set an FPS cap so that the CPU isn't being overly used when it doesn't need to be.
I'm working with freeglut and C++, and have already set up the animations/event handling to use timers (using glutTimerFunc). glutTimerFunc, however, only accepts an integer number of milliseconds - so if I want 120 FPS, the closest I can get is (int)(1000/120) = 8 ms, which equates to 125 FPS (I know it's a negligible amount, but I still just want to put in an FPS limit and get exactly that FPS if I know the system can render faster).
Furthermore, using glutTimerFunc to limit the FPS never works consistently. Say I cap my application to 100 FPS; it usually never goes higher than 90-95 FPS. Again, I've tried to work out the time difference between rendering/calculations, but then it still misses the limit by 5-10 FPS (possibly a timer-resolution issue).
I suppose the best comparison here would be a game (e.g. Half-Life 2) - you set your FPS cap, and it always hits that exact amount. I know I could measure the time deltas before and after I render each frame and then loop until I need to draw the next one, but that doesn't solve my 100% CPU usage issue, nor does it solve the timing-resolution issue.
Is there any way I can implement an effective, cross-platform, variable frame rate limiter/cap in my application? Or, in another way, is there any cross-platform (and open source) library that implements high resolution timers and sleep functions?
Edit: I would prefer to find a solution that doesn't rely on the end user enabling VSync, as I am going to let them specify the FPS cap.
Edit #2: To all who recommend SDL (and I did end up porting my application to SDL): is there any difference between using the glutTimerFunc function to trigger a draw and using SDL_Delay to wait between draws? The documentation for each mentions the same caveats, but I wasn't sure if one was more or less efficient than the other.
Edit #3: Basically, I'm trying to figure out if there is a (simple) way to implement an accurate FPS limiter in my application (again, like Half-Life 2). If this is not possible, I will most likely switch to SDL (it makes more sense to me to use a delay function rather than using glutTimerFunc to call back the rendering function every x milliseconds).
I'd advise you to use SDL. I personally use it to manage my timers. Moreover, with SDL 1.3 it can limit your FPS to your screen refresh rate (V-Sync). That lets you limit CPU usage while getting the best screen performance (even if you rendered more frames, they couldn't be displayed since your screen doesn't refresh fast enough).
The function is
SDL_GL_SetSwapInterval(1);
If you want some code for timers using SDL, you can see that here :
my timer class
Good luck :)
I think a good way to achieve this, no matter what graphics library you use, is to have a single clock measurement in the game loop so that every single tick (ms) is taken into account. That way the average FPS will be exactly the limit, just like in Half-Life 2. Hopefully the following code snippet explains what I am talking about:
//FPS limit
unsigned int FPS = 120;
//double holding the clocktime of the last measurement
double clock = 0;
while (cont) {
    //double holding the difference between clocktimes
    double deltaticks;
    //double holding the clocktime in this new frame
    double newclock;
    //do stuff, update stuff, render stuff...
    //measure the clocktime of this frame
    //this function can be replaced by any function returning the time in ms,
    //for example clock() from <time.h>
    newclock = SDL_GetTicks();
    //calculate the clockticks missing until the next loop should be done
    //to achieve an avg framerate of FPS
    //1000.0 / 120 makes 8.333... ticks per frame
    deltaticks = 1000.0 / FPS - (newclock - clock);
    //if there is an integral number of ticks missing, wait the remaining time;
    //SDL_Delay takes an integer number of ms to delay the program, like most
    //delay functions do, and can be replaced by any delay function
    if (floor(deltaticks) > 0)
        SDL_Delay(deltaticks);
    //the clock measurement is now shifted forward in time by the amount
    //SDL_Delay waited plus the fractional part that was not considered yet
    //(aka deltaticks); the fractional part is considered in the next frame
    if (deltaticks < -30) {
        //don't try to compensate more than 30 ms (a few frames) behind the
        //framerate; when the limit is higher than the possible avg fps,
        //deltaticks would keep sinking without this 30 ms limitation;
        //this ensures the fps even if the real possible fps is
        //macroscopically inconsistent
        clock = newclock - 30;
    } else {
        clock = newclock + deltaticks;
    }
    //deltaticks can be negative when a frame took longer than it should have,
    //or the measured time the frame took was zero; the next frame then won't
    //be delayed by as long, to compensate for the previous frame taking longer
    //do some more stuff, swap buffers for example:
    SDL_RenderPresent(renderer); //this is SDL's swap-buffers function
}
I hope this example with SDL helps. It is important to measure the time only once per frame so every frame is taken into account.
I recommend modularizing this timing into a function, which also makes your code clearer. This code snippet has no comments, in case they just annoyed you in the previous one:
unsigned int FPS = 120;

void renderPresent(SDL_Renderer * renderer) {
    static double clock = 0;
    double deltaticks;
    double newclock = SDL_GetTicks();
    deltaticks = 1000.0 / FPS - (newclock - clock);
    if (floor(deltaticks) > 0)
        SDL_Delay(deltaticks);
    if (deltaticks < -30) {
        clock = newclock - 30;
    } else {
        clock = newclock + deltaticks;
    }
    SDL_RenderPresent(renderer);
}
Now you can call this function in your main loop instead of your swap-buffer function (SDL_RenderPresent(renderer) in SDL). In SDL you have to make sure the SDL_RENDERER_PRESENTVSYNC flag is turned off. This function relies on the global variable FPS, but you can think of other ways of storing it. I just put the whole thing in my library's namespace.
This method of capping the framerate delivers exactly the desired average framerate, provided there are no large differences in the loop time over multiple frames, thanks to the 30 ms limit on deltaticks. That limit is required: when the FPS limit is higher than the actually achievable framerate, deltaticks would drop indefinitely, and when the framerate then rises above the FPS limit again, the code would try to compensate for the lost time by rendering every frame immediately, resulting in a huge framerate until deltaticks climbs back to zero. You can modify the 30 ms to fit your needs; it is just an estimate of mine. I did a couple of benchmarks with Fraps. It works with every imaginable framerate and delivers beautiful results from what I have tested.
I must admit I coded this just yesterday, so it is not unlikely to have some kind of bug. I know this question was asked 5 years ago, but the given answers did not satisfy me. Also feel free to edit this post, as it is my very first one and probably flawed.
EDIT:
It has been brought to my attention that SDL_Delay is very inaccurate on some systems. I heard of a case where it delayed far too long on Android. This means my code might not be portable to all your desired systems.
The easiest way to solve it is to enable Vsync. That's what I do in most games to prevent my laptop from getting too hot.
As long as you make sure the speed of your rendering path is not connected to the other logic, this should be fine.
There is a function glutGet( GLUT_ELAPSED_TIME ) which returns the time since the program started in milliseconds, but that's likely still not fast enough.
A simple way is to make your own timer method, which uses QueryPerformanceCounter on Windows and gettimeofday on POSIX systems.
Or you can always use timer functions from SDL or SFML, which do basically the same as above.
You should not try to limit the rendering rate manually; instead, synchronize with the display's vertical refresh. This is done by enabling V-sync in the graphics driver settings. Apart from preventing (your) programs from rendering at too high a rate, it also increases picture quality by avoiding tearing.
The swap-interval extensions allow your application to fine-tune the V-sync behaviour. But in most cases, just enabling V-sync in the driver and letting the buffer swap block until sync suffices.
I would suggest using sub-ms precision system timers (QueryPerformanceCounter, gettimeofday) to get timing data. These can help you profile performance in optimized release builds also.
Some background information:
SDL_Delay is pretty much the same as Sleep/sleep/usleep/nanosleep, but it is limited to milliseconds as its parameter.
Sleeping works by relying on the system's thread scheduler to continue your code.
Depending on your OS and hardware, the scheduler may have a tick frequency lower than 1000 Hz, which results in longer timespans than you specified when calling sleep, so you have no guarantee of getting the desired sleep time.
You can try to change the scheduler's frequency. On Windows you can do it by calling timeBeginPeriod() (see the sketch below); for Linux systems, check out this answer.
Even if your OS supports a scheduler frequency of 1000 Hz, your hardware may not, but most modern hardware does.
Even if your scheduler's frequency is 1000 Hz, sleep may take longer if the system is busy with higher-priority processes, but this should not happen unless your system is under very high load.
To sum up: you may be able to sleep for microseconds on some tickless Linux kernels, but if you are interested in a cross-platform solution you should try to get the scheduler frequency up to 1000 Hz to ensure the sleeps are accurate in most cases.
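On Windows, the scheduler-granularity request mentioned above is just a pair of winmm calls; a small sketch (the wrapper name is mine):

#include <windows.h>
#include <mmsystem.h>       // timeBeginPeriod / timeEndPeriod; link with winmm.lib

// Request 1 ms timer resolution around code that relies on short sleeps,
// then restore the previous setting.
void runWithFineTimerResolution(void (*body)())
{
    timeBeginPeriod(1);     // ask the scheduler for 1 ms granularity
    body();
    timeEndPeriod(1);       // always pair with timeBeginPeriod
}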
To solve the rounding issue for 120 FPS:
1000 / 120 = 8.333 ms
(int)(1000 / 120) = 8 ms
Either you sleep for 8 ms and busy-wait the remaining 333 microseconds, which costs some CPU time but is very accurate.
Or you follow Neop's approach of sleeping sometimes 8 ms and sometimes 9 ms to average out at 8.333 ms, which is far more efficient but less accurate.
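A portable sketch of that sleep-then-spin option using std::chrono (the 120 FPS target and the 1 ms spin margin are just example values):

#include <chrono>
#include <thread>

// Sleep most of the frame, then busy-wait the last bit for accuracy.
void waitForNextFrame(std::chrono::steady_clock::time_point frameStart)
{
    using namespace std::chrono;
    const auto frameTime  = duration_cast<steady_clock::duration>(duration<double>(1.0 / 120.0));
    const auto spinMargin = milliseconds(1);            // leave ~1 ms for scheduler slop
    const auto deadline   = frameStart + frameTime;

    if (deadline - steady_clock::now() > spinMargin)
        std::this_thread::sleep_until(deadline - spinMargin);   // coarse sleep
    while (steady_clock::now() < deadline) {
        // fine busy-wait for the remainder
    }
}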

Setting max frames per second in OpenGL

Is there any way to calculate how many updates should be made to reach the desired frame rate that is NOT system-specific? I found one for Windows, but I would like to know if something like this exists in OpenGL itself. It should be some sort of timer.
Or how else can I prevent the FPS from dropping or rising dramatically? For now I'm testing this by drawing a big number of vertices in a line, and using Fraps I can see the frame rate go from 400 to 200 FPS, with an evident slowdown of the drawing.
You have two different ways to solve this problem:
Suppose that you have a variable called maximum_fps, which contains the maximum number of frames per second you want to display.
Then you measure the amount of time spent on the last frame (a timer will do).
Now suppose you said you wanted a maximum of 60 FPS in your application. Then you want the measured time to be no lower than 1/60 s. If the measured time is lower, you call sleep() for the amount of time left in the frame.
Or you can have a variable called tick, which contains the current "game time" of the application. With the same timer, you increment it on each iteration of your application's main loop. Then, in your drawing routines, you calculate positions based on the tick variable, since it contains the current time of the application.
The big advantage of option 2 is that your application will be much easier to debug, since you can play around with the tick variable, go forward and back in time whenever you want. This is a big plus.
Rule #1. Do not make update() or loop() kind of functions rely on how often they get called.
You can't really get your desired FPS. You could try to boost it by skipping some expensive operations, or slow it down by calling sleep()-like functions. However, even with those techniques, the FPS will almost always differ from the exact FPS you want.
The common way to deal with this problem is to use the elapsed time since the previous update. For example:
// Bad
void enemy::update()
{
    position.x += 10; // this enemy's movement speed depends entirely on the FPS, and you can't control it
}

// Good
void enemy::update(float elapsedTime)
{
    position.x += speedX * elapsedTime; // now you control speedX, and it doesn't matter how often update() is called
}
Is there any way to calculate how much updates should be made to reach desired frame rate, NOT system specific?
No.
There is no way to precisely calculate how many updates should be called to reach desired framerate.
However, you can measure how much time has passed since the last frame, calculate the current framerate from it, compare it with the desired framerate, and then introduce a bit of sleeping to reduce the current framerate to the desired value. Not a precise solution, but it will work.
I found that for windows, but I would like to know if something like this exists in openGL itself. It should be some sort of timer.
OpenGL is concerned only with rendering and has nothing to do with timers. Also, using Windows timers isn't a good idea. Use QueryPerformanceCounter, GetTickCount or SDL_GetTicks to measure how much time has passed, and sleep to reach the desired framerate.
Or how else can I prevent FPS to drop or raise dramatically?
You prevent FPS from raising by sleeping.
As for preventing FPS from dropping...
It is an insanely broad topic. It goes something like this: use vertex buffer objects or display lists; profile the application; do not use insanely big textures; do not use too much alpha blending; avoid "raw" immediate-mode OpenGL (glVertex3f); do not render invisible objects (even if no polygons are drawn, processing them takes time); consider learning about BSPs or octrees for rendering complex scenes; do not needlessly use too many primitives for parametric surfaces and curves (if you render a circle using one million polygons, nobody will notice the difference); disable vsync. In short: reduce the number of rendering calls, rendered polygons, rendered pixels and texels read to the absolute possible minimum, read every available performance document from NVIDIA, and you should get a performance boost.
You're asking the wrong question. Your monitor will only ever display at 60 fps (50 fps in Europe, or possibly 75 fps if you're a pro-gamer).
Instead you should be seeking to lock your fps at 60 or 30. There are OpenGL extensions that allow you to do that. However the extensions are not cross platform (luckily they are not video card specific or it'd get really scary).
windows: wglSwapIntervalEXT
x11 (linux): glXSwapIntervalSGI
Mac OS X: ?
These extensions are closely tied to your monitor's v-sync. Once enabled, calls to swap the OpenGL back buffer will block until the monitor is ready for it. This is like putting a sleep in your code to enforce 60 FPS (or 30, or 15, or some other number if you're not using a monitor which displays at 60 Hz). The difference is that the "sleep" is always perfectly timed, instead of an educated guess based on how long the last frame took.
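For example, on Windows the extension function is fetched at runtime and then called with the desired interval; a minimal sketch (error handling kept to a bare minimum):

#include <windows.h>
#include <GL/gl.h>

typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

// Enable v-sync through WGL_EXT_swap_control; requires a current OpenGL context.
bool enableVSync()
{
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        reinterpret_cast<PFNWGLSWAPINTERVALEXTPROC>(
            wglGetProcAddress("wglSwapIntervalEXT"));
    if (!wglSwapIntervalEXT)
        return false;                        // extension not available
    return wglSwapIntervalEXT(1) != FALSE;   // 1 = sync to every vertical refresh
}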
You absolutely do want to throttle your frame rate; it all depends on what you have going on in that rendering loop and what your application does, especially where physics or networking is involved, or if you're doing any kind of graphics processing with an outside toolkit (Cairo, QPainter, Skia, AGG, ...) - unless you want out-of-sync results or 100% CPU usage.
This code may do the job, roughly.
static int redisplay_interval;

void timer(int)
{
    glutPostRedisplay();
    glutTimerFunc(redisplay_interval, timer, 0);
}

void setFPS(int fps)
{
    redisplay_interval = 1000 / fps;
    glutTimerFunc(redisplay_interval, timer, 0);
}
Here is a similar question, with my answer and worked example
I also like deft_code's answer, and will be looking into adding what he suggests to my solution.
The crucial part of my answer is:
If you're thinking about slowing down AND speeding up frames, you have to think carefully about whether you mean rendering or animation frames in each case. In this example, render throttling for simple animations is combined with animation acceleration, for any cases when frames might be dropped in a potentially slow animation.
The example is for animation code that renders at the same speed regardless of whether benchmarking mode or fixed-FPS mode is active. Even an animation triggered before the change keeps a constant speed after the change.