Implementing delta timing in my loop - C++

I want to implement delta timing in my SFML loop to compensate for the different speeds of the computers that run my application. Right now I just have
float delta = .06f;
placed before my loop, but this is how Wikipedia describes delta timing:
It is done by calling a timer every frame per second that holds the
time between now and last call in milliseconds.[2] Thereafter the
resulting number (Delta Time) is used to calculate how much faster
that, for instance, a game character has to move to make up for the
lag spike caused in the first place.[3]
Here is what I'm doing currently, which is WRONG; I can't quite seem to translate the logic into syntax:
bool running=true; //set up bool to run SFML loop
double lastTime = clock.getElapsedTime().asSeconds();
sf::Clock clock; //clock for delta and controls
while( running )
{
clock.restart();
double time= clock.getElapsedTime().asSeconds();
double delta = time - lastTime; //not working... values are near 0.0001
time = lastTime;
//rest of loop

Shouldn't it be:
sf::Clock clock;
while( running )
{
double delta = clock.restart().asSeconds(); // or .asMilliseconds()
//rest of loop
}
(I assume you do not need time and lastTime.)

Your loop is running so fast that your delta in seconds is very small. Why not measure it in milliseconds instead?
Switch .asSeconds() to .asMilliseconds(). Look here for documentation.

Your approach is almost right. However, if you're calculating the time difference on your own (subtracting the previous time), you must not reset your sf::Clock.
For variable timesteps you can essentially use Dieter's solution. However, I'd suggest one tiny modification:
sf::Clock clock;
while (running) {
    // do event processing here
    const sf::Time delta = clock.restart();
    // do your updates here
    sf::sleep(sf::microseconds(1));
}
What I did different are two things:
I store the delta time as an sf::Time object. This isn't really a significant change, but it allows me to retrieve the difference in different units later on (just retrieving seconds or milliseconds is fine, though).
I wait for a very tiny amount of time. This may make a significant difference depending on how much time passes during one iteration of the loop. Otherwise, on a very, very fast computer, you might end up with a delta of 0. While this is rather unlikely as long as you're tracking the raw time in microseconds, it can be an issue if you're only evaluating milliseconds (in which case you might even want to sleep for a whole millisecond). Depending on the system's timer granularity and power-saving settings, this might be a tad slower than not sleeping at all, but it shouldn't be noticeable (SFML tries to fight this issue as well).

What you want is basically this:
sf::Clock clock;
sf::Time lastTime = clock.getElapsedTime();
while (running)
{
    sf::Time now = clock.getElapsedTime();
    sf::Time deltaTime = now - lastTime;
    lastTime = now;
    // rest of loop
}
As for the sf::sleep mentioned by Mario, you should just use sf::RenderWindow::setFramerateLimit(unsigned int) to cap the fps as you want, and SFML will take care of making your application sleep for the correct amount of time each loop.
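For instance, a minimal sketch of that approach (the window size and title are placeholders, not from the question):

sf::RenderWindow window(sf::VideoMode(800, 600), "Game");
window.setFramerateLimit(60); // SFML sleeps internally to hold roughly 60 fps

sf::Clock clock;
while (window.isOpen())
{
    sf::Time delta = clock.restart(); // still measure the real frame time
    // process events, update with delta, draw...
    window.display();
}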

Related

How to define how much CPU to use in SFML game?

I've made a game, but I don't know if it will work the same way on other devices. For example, if a computer's CPU is faster, will the player and enemies move faster? If so, is there a way to define the CPU usage available in SFML? The way the player and enemies move in my program is:
1 - Check if the key is pressed.
2 - If so: move(x,y);
Or is there a way to get the CPU to do some operations in the move function?
Thank you!
It sounds like you are worried about the physics of your game being affected by the game's framerate. Your intuition is serving you well! This is a significant problem, and one you'll want to address if you want your game to feel professional.
According to Glenn Fiedler in his Gaffer on Games article 'Fix Your Timestep!'
[A game loop that handles time improperly can make] the behavior of your physics simulation [depend] on the delta time you pass in. The effect could be subtle as your game having a slightly different “feel” depending on framerate or it could be as extreme as your spring simulation exploding to infinity, fast moving objects tunneling through walls and the player falling through the floor!
Logic dictates that you must detach the dependencies of your update from the time it takes to draw a frame. A simple solution is to:
Pick an amount of time which can be safely processed (your timestep)
Add the time passed every frame into an accumulated pool of time
Process the time passed in safe chunks
In pseudocode:
time_pool = 0;
timestep = 0.01; // or whatever is safe for you!
old_time = get_current_time();
while (!closed) {
    new_time = get_current_time();
    time_pool += new_time - old_time;
    old_time = new_time;
    handle_input();
    while (time_pool > timestep)
    {
        consume_time(timestep); // update your gamestate
        time_pool -= timestep;
    }
    // note: leftover time is not lost, and will be left in time_pool
    render();
}
It is worth noting that this method has its own problem: future frames have to consume the time spent in calls to consume_time. If a call to consume_time takes too long, the next frame might require two calls, then four, then eight, and so on. If you use this method, you will have to make sure consume_time is very efficient, and even then it is best to have a contingency plan (one is sketched below).
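One common contingency (the linked article suggests it, and the SFML Frame() function further down in this thread does the same thing) is to clamp how much time may enter the pool in a single frame, so one very slow frame cannot snowball. In the same pseudocode style, with max_frame_time as an illustrative name:

frame_time = new_time - old_time;
if (frame_time > max_frame_time) // e.g. 0.25 seconds
    frame_time = max_frame_time; // drop the excess instead of spiraling
time_pool += frame_time;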
For a more thorough treatment I encourage you to read the linked article.

C++ Incorrect FPS and deltaTime measuring using std::chrono

The fps my program reports is incorrect. When I measure the fps of my application using RivaTuner Statistics Server, it gives, for example, 3000 fps, but my program calculates a really different number, like 500. It goes up and down all the time while RivaTuner does not.
This is how I calculate the delta time (the deltaTime variable is a float):
std::chrono::high_resolution_clock timer;
auto start = timer.now();
...doing stuff here...
auto stop = timer.now();
deltaTime = std::chrono::duration_cast<std::chrono::microseconds>(stop - start).count() / 1000.0f; //DELTATIME WAS LESS THAN 1 MILLISECOND SO THAT IS WHY I USED THIS
This is how I calculate the fps:
float fps = (1.0f / deltaTime) * 1000.0f;
I multiply my game speeds by the deltaTime variable, but because it is doing weird things (going up and down really fast the whole time), that is screwed up too. So, for example, when RivaTuner says 2000 fps my game runs slower than when it says 4000 fps.
But when the application runs slower it needs more time to render 1 frame (so, a higher deltaTime, so a higher game speed?).
Is this correct?
Thanks in advance.
Just like JSQuareD said, when calculating FPS you should take the average over many frames.
The reason is that frame execution times tend to vary a lot, for many reasons.
Sum your measurements over, let's say, 0.5 seconds and calculate the average (see the sketch below).
Yes, this is as dumb as it sounds.
But you should be careful with FPS statistics: you could have 60 FPS and the game could still look stuck. Why? Because a few frames took a really long delta time, while most frames had a very short one.
(It happens more often than it sounds.)
You can solve that last problem by viewing a graph or calculating the standard deviation, but that is a more advanced concern for now.
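As a rough illustration, here is a sketch of that kind of windowed average using std::chrono; the names (frameCount, windowStart, averagedFps) are mine, not from the question:

#include <chrono>

using fps_clock = std::chrono::steady_clock;

int frameCount = 0;
float averagedFps = 0.0f;
auto windowStart = fps_clock::now();

// inside the frame loop, once per frame:
++frameCount;
float elapsed = std::chrono::duration<float>(fps_clock::now() - windowStart).count();
if (elapsed >= 0.5f) // update the displayed value roughly twice per second
{
    averagedFps = frameCount / elapsed;
    frameCount = 0;
    windowStart = fps_clock::now();
}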
[My fps counter] is going up and down all the time while RivaTuner does not.
Typically, rendering and other calculations take a variable amount of time. If you calculate the fps every frame, then it's expected to go up and down.
deltaTime = std::chrono::duration_cast<std::chrono::microseconds>(stop - start).count() / 1000.0f;
Don't do that. If you want a floating point value of the milliseconds with minimal loss of precision, then do this:
using ms = std::chrono::duration<float, std::milli>;
deltaTime = std::chrono::duration_cast<ms>(stop - start).count();
But when the application runs slower it needs more time to render 1 frame
Correct.
so, a higher deltaTime
Correct.
so a higher game speed?
The rendering speed shouldn't affect the speed of the game if everything is scaled in relation to the passed time. Whether it does affect the speed is impossible to tell without knowing what your game does.
If it does affect the speed of the game, then there might be something wrong with how you implemented the game. If you have behaviour that is sensitive to the length of the time step, such as physics, then those calculations should be done with a fixed time step. For example, 120 times a second. If your fps is higher, then skip advancing the simulation and if your fps is lower, then repeat the simulation.
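For illustration, a minimal sketch of such a fixed 120 Hz step on top of std::chrono; simulate, render and running are placeholders, not from the question:

#include <chrono>

using sim_clock = std::chrono::steady_clock;
using seconds_f = std::chrono::duration<float>;

const seconds_f step(1.0f / 120.0f); // fixed simulation time step
seconds_f accumulator(0.0f);
auto previous = sim_clock::now();

while (running)
{
    auto now = sim_clock::now();
    accumulator += now - previous; // real time since the last frame
    previous = now;

    while (accumulator >= step) // run 0, 1 or several fixed steps
    {
        simulate(step.count()); // the simulation always sees the same dt
        accumulator -= step;
    }
    render(); // rendering still runs at whatever fps the machine manages
}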

How to reduce OpenGL CPU usage and/or how to use OpenGL properly

I'm working a on a Micromouse simulation application built with OpenGL, and I have a hunch that I'm not doing things properly. In particular, I'm suspicious about the way I am getting my (mostly static) graphics to refresh at a close-to-constant framerate (60 FPS). My approach is as follows:
1) Start a timer
2) Draw my shapes and text (about a thousand of them):
glBegin(GL_POLYGON);
for (Cartesian vertex : polygon.getVertices()) {
std::pair<float, float> coordinates = getOpenGlCoordinates(vertex);
glVertex2f(coordinates.first, coordinates.second);
}
glEnd();
and
glPushMatrix();
glScalef(scaleX, scaleY, 0);
glTranslatef(coordinates.first * 1.0/scaleX, coordinates.second * 1.0/scaleY, 0);
for (int i = 0; i < text.size(); i += 1) {
glutStrokeCharacter(GLUT_STROKE_MONO_ROMAN, text.at(i));
}
glPopMatrix();
3) Call
glFlush();
4) Stop the timer
5) Sleep for (1/FPS - duration) seconds
6) Call
glutPostRedisplay();
The "problem" is that the above approach really hogs my CPU - the process is using something like 96-100%. I know that there isn't anything inherently wrong with using lots of CPU, but I feel like I shouldn't be using that much all of the time.
The kicker is that most of the graphics don't change from frame to frame. It's really just a single polygon moving over (and covering up) some static shapes. Is there any way to tell OpenGL to only redraw what has changed since the previous frame (with the hope it would reduce the number of glxxx calls, which I've deemed to be the source of the "problem")? Or, better yet, is my approach to getting my graphics to refresh even correct?
First and foremost, the biggest CPU hog with OpenGL is immediate mode… and you're using it (glBegin, glEnd). The problem with immediate mode is that every single vertex requires a couple of OpenGL calls, and because OpenGL uses thread-local state, each and every OpenGL call must go through some indirection. So the first step would be getting rid of that.
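For instance, here is a hedged sketch of the polygon drawing rewritten with a vertex buffer object; it assumes an OpenGL 1.5+ context and reuses Cartesian, polygon.getVertices() and getOpenGlCoordinates() from the question, so treat it as a sketch rather than a drop-in replacement:

#include <vector>

// Done once (or whenever the polygon actually changes):
std::vector<GLfloat> vertices; // x0, y0, x1, y1, ...
for (Cartesian vertex : polygon.getVertices()) {
    std::pair<float, float> coordinates = getOpenGlCoordinates(vertex);
    vertices.push_back(coordinates.first);
    vertices.push_back(coordinates.second);
}
GLuint vbo = 0;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(GLfloat),
             vertices.data(), GL_STATIC_DRAW);

// Done every frame: a handful of GL calls instead of one per vertex.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, nullptr);
glDrawArrays(GL_POLYGON, 0, static_cast<GLsizei>(vertices.size() / 2));
glDisableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, 0);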
The next issue is with how you're timing your display. If low latency between user input and display is not your ultimate goal the standard approach would setting up the window for double buffering, enabling V-Sync, set a swap interval of 1 and do a buffer swap (glutSwapBuffers) once the frame is rendered. The exact timings what and where things will block are implementation dependent (unfortunately), but you're more or less guaranteed to exactly hit your screen refresh frequency, as long as your renderer is able to keep up (i.e. rendering a frame takes less time that a screen refresh interval).
glutPostRedisplay merely sets a flag for the main loop to call the display function if no further events are pending, so timing a frame redraw through that is not very accurate.
Last but not least, you may simply be misled by the way Windows accounts for CPU time: time spent in driver context, which includes blocking while waiting for V-Sync, is accounted as consumed CPU time, even though it is in fact interruptible sleep. However, you wrote that you already sleep in your code, which would rule that out; the go-to approach for more reasonable accounting would be adding a Sleep(1) before or after the buffer swap.
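For completeness, a tiny sketch of that workaround; Sleep comes from <windows.h>, takes milliseconds and is Windows specific:

#include <windows.h>

// at the end of the frame:
glutSwapBuffers();
Sleep(1); // brief interruptible sleep, so the wait is accounted as idle time rather than CPU time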
I found that putting the render thread to sleep helps reduce CPU usage, in my case from 26% to around 8%:
#include <chrono>
#include <iostream>
#include <thread>

void render_loop() {
    ...
    auto const start_time = std::chrono::steady_clock::now();
    auto const wait_time = std::chrono::milliseconds{ 17 };
    auto next_time = start_time + wait_time;
    while (true) {
        ...
        // execute once after the thread wakes up, every 17 ms,
        // which is theoretically 60 frames per second
        auto then = std::chrono::high_resolution_clock::now();
        std::this_thread::sleep_until(next_time);
        ... // rendering jobs
        auto elapsed_time = std::chrono::duration_cast<std::chrono::milliseconds>(
            std::chrono::high_resolution_clock::now() - then);
        std::cout << "ms: " << elapsed_time.count() << '\n';
        next_time += wait_time;
    }
}
I thought about attempting to measure the frame rate while the thread is asleep, but there isn't any reason for my use case to attempt that. The result averaged around 16 ms, so I thought it was good enough.
Inspired by this post

Fixed timestep stuttering with VSync on

In a 2D OpenGL engine I implemented I have a fixed timestep as described in the famous fix your timestep article, along with blending.
I have a test object that moves vertically (y axis). There is stuttering in the movement (preprogrammed movement, not from user input). This means the object does not move smoothly across the screen.
Please see the uncompressed video I am linking: LINK
The game framerate stays at 60fps (Vsync turned on from Nvidia driver)
The game logic updates at a fixed 20 updates/ticks per second, set by me; this is intentional. The object moves 50 pixels per update.
However the movement on the screen is severely stuttering.
EDIT: I noticed by stepping in the recorded video above frame by frame that the stuttering is caused by a frame being shown twice.
EDIT2: Setting the application priority to Realtime in the task manager completely eliminates the stutter! However this obviously isn't a solution.
Below is the object y movement delta at different times, with VSync turned off
The first column is the elapsed time since the last frame, in microseconds (e.g. 4403).
Second column is movement on the y axis of an object since last frame.
Effectively, the object moves 1000 pixels per second, and the log below confirms it.
time since last frame: 4403 ypos delta since last frame: 4.403015
time since last frame: 3807 ypos delta since last frame: 3.806976
time since last frame: 3716 ypos delta since last frame: 3.716003
time since last frame: 3859 ypos delta since last frame: 3.859009
time since last frame: 4398 ypos delta since last frame: 4.398010
time since last frame: 8961 ypos delta since last frame: 8.960999
time since last frame: 7871 ypos delta since last frame: 7.871002
time since last frame: 3985 ypos delta since last frame: 3.984985
time since last frame: 3684 ypos delta since last frame: 3.684021
Now with VSync turned on
time since last frame: 17629 ypos delta since last frame: 17.628906
time since last frame: 15688 ypos delta since last frame: 15.687988
time since last frame: 16641 ypos delta since last frame: 16.641113
time since last frame: 16657 ypos delta since last frame: 16.656738
time since last frame: 16715 ypos delta since last frame: 16.715332
time since last frame: 16663 ypos delta since last frame: 16.663086
time since last frame: 16666 ypos delta since last frame: 16.665771
time since last frame: 16704 ypos delta since last frame: 16.704102
time since last frame: 16626 ypos delta since last frame: 16.625732
I would say they look ok.
This has been driving me bonkers for days; what am I missing?
Below is my Frame function which is called in a loop:
void Frame()
{
    static sf::Time t;
    static const double ticksPerSecond = 20;
    static uint64_t stepSizeMicro = 1000000 / ticksPerSecond; // microseconds
    static sf::Time accumulator = sf::seconds(0);

    gElapsedTotal = gClock.getElapsedTime();
    sf::Time elapsedSinceLastFrame = gElapsedTotal - gLastFrameTime;
    gLastFrameTime = gElapsedTotal;
    if (elapsedSinceLastFrame.asMicroseconds() > 250000)
        elapsedSinceLastFrame = sf::microseconds(250000);
    accumulator += elapsedSinceLastFrame;

    while (accumulator.asMicroseconds() >= stepSizeMicro)
    {
        Update(stepSizeMicro / 1000000.f);
        gGameTime += sf::microseconds(stepSizeMicro);
        accumulator -= sf::microseconds(stepSizeMicro);
    }

    uint64_t blendMicro = accumulator.asMicroseconds() / stepSizeMicro;
    float blend = accumulator.asMicroseconds() / (float) stepSizeMicro;
    if (rand() % 200 == 0) Trace("blend: %f", blend);

    CWorld::GetInstance()->Draw(blend);
}
More info, as requested in the comments:
- Stuttering occurs both in fullscreen 1920x1080 and in windowed mode 1600x900.
- The setup is a simple SFML project. I'm not aware whether it uses VBOs/VAOs internally when rendering textured rectangles.
- I'm not doing anything else on my computer. Keep in mind this issue occurs on other computers as well; it's not just my rig.
- I am running on the primary display. The display doesn't really make a difference; the issue occurs both in fullscreen and windowed mode.
I have profiled my own code. The issue was there was an area of my code that occasionally had performance spikes due to cache misses. This caused my loop to take longer than 16.6666 milliseconds, the max time it should take to display smoothly at 60Hz. This was only one frame, once in a while. That frame caused the stuttering. The code logic itself was correct, this proved to be a performance issue.
For future reference in hopes that this will help other people, how I debugged this was I put an
if ( timeSinceLastFrame > 16000 ) // microseconds
{
Trace("Slow frame detected");
DisplayProfilingInformation();
}
in my frame code. When the if is triggered, it displays profiling stats for the functions in the last frame, to see which function took the longest in the previous frame. I was thus able to pinpoint the performance bug to a structure that was not suitable for its usage. A big, nasty map of maps that generated a lot of cache misses and occasionally spiked in performance.
I hope this helps future unfortunate souls.
It seems like you're not synchronizing your 60Hz frame loop with the GPU's 60Hz VSync. Yes, you have enabled Vsync in Nvidia but that only causes Nvidia to use a back-buffer which is swapped on the Vsync.
You need to set the swap interval to 1 and perform a glFinish() to wait for the Vsync.
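Since the project is an SFML one, a hedged sketch of what that could look like (event handling is omitted; setVerticalSyncEnabled asks the driver for a swap interval of 1):

#include <SFML/Graphics.hpp>
#include <SFML/OpenGL.hpp>

sf::RenderWindow window(sf::VideoMode(1920, 1080), "Game");
window.setVerticalSyncEnabled(true); // swap interval 1

while (window.isOpen())
{
    Frame();          // the fixed-timestep update/draw from the question
    window.display(); // queues the buffer swap
    glFinish();       // block until the queued GL work has completed (the answer's suggested way to wait for the VSync)
}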
A tricky one, but from the above it seems to me this is not a 'frame rate' problem, but rather something in your 'animate' code. Another observation is the line "Update(stepSizeMicro / 1000000.f);": the division by 1000000.f could mean you are losing resolution due to the limited bit precision of floating-point numbers, so rounding could be your killer.

How to pause an animation with OpenGL / glut

To achieve an animation, I am just redrawing things in a loop.
However, I need to be able to pause when a key is pressed. I know the way I'm doing it now is wrong, because it eats all of my cycles while the loop is running.
Which way is better, and will allow for a key-press pause and resume?
I tried using a bool flag, but obviously it didn't change the flag until the loop was done.
You have the correct very basic architecture sorted, in that everything needs to be updated in a loop, but you need to make your loop a lot smarter for a game (or any other application requiring OpenGL animations).
However, I need to be able to pause when a key is pressed.
A basic way of doing this is to have a boolean value paused and to wrap the game into a loop.
while(!finished) {
while(!paused) {
update();
render();
}
}
Typically, however, you still want to do things such as look at your inventory, craft items, etc. while your game is paused, and many games still run their main loop while the game is paused; they just don't let the actors know any time has passed. For instance, it sounds like your animation frames simply have a number of game frames to be visible for. This is a bad idea, because if the frame rate increases or decreases on a different computer, the animation speed will look wrong on those computers. You can consider my answer here, and the linked samples, to see how you can achieve framerate-independent animation by specifying animation frames in terms of millisecond durations and passing the frame time into the update loop. For instance, your main game loop then changes to look like this:
float previousTime = getTimeInMilliseconds(); // avoid a huge delta on the first frame
float thisTime = 0.0f;
float framePeriod = 0.0f;
while (!finished) {
    thisTime = getTimeInMilliseconds();
    framePeriod = thisTime - previousTime; // milliseconds since the previous frame
    update(framePeriod);
    render();
    previousTime = thisTime;
}
Now, everything in the game that gets updated will know how much time has passed since the previous frame. This is helpful for all your physics calculations as all of our physical formulae are in terms of time + starting factors + decay factors (for instance, the SUVAT equations). The same information can be used for your animations to make them framerate independent as I have described with some links to examples here.
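To tie this back to the pause question: one minimal way, sketched below with handleInput as a placeholder, is to keep the loop and the rendering running but simply stop feeding elapsed time to the actors while paused:

bool paused = false;

while (!finished) {
    thisTime = getTimeInMilliseconds();
    framePeriod = thisTime - previousTime;
    previousTime = thisTime;

    handleInput(); // e.g. toggle `paused` when the pause key is pressed

    if (!paused)
        update(framePeriod); // actors advance as usual
    // when paused, skip update (or call update(0.0f)) so no game time passes

    render(); // still draw, e.g. the scene plus a pause overlay
}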
To answer the next part of the question:
it eats all of my cycles when the loop is going on.
This is because you're using 100% of the CPU and never going to sleep. If we consider that we want for instance 30fps on the target device (and we know that this is possible) then we know the period of one frame is 1/30th of a second. We've just calculated the time it takes to update and render our game, so we can sleep for any of the spare time:
float previousTime = getTimeInMilliseconds();
float thisTime = 0.0f;
float framePeriod = 0.0f;
float availablePeriod = 1000.0f / 30.0f; // one frame's budget, in milliseconds
while (!finished) {
    thisTime = getTimeInMilliseconds();
    framePeriod = thisTime - previousTime; // milliseconds spent on the last frame
    update(framePeriod);
    render();
    previousTime = thisTime;
    if (framePeriod < availablePeriod)
        sleep(availablePeriod - framePeriod); // sleep away the spare milliseconds
}
This technique is called framerate governance as you are manually controlling the rate at which you are rendering and updating.