Setting speed factor in a timer class? - c++

I have the following timer class:
class Timer
{
private:
    unsigned int curr, prev;
    float factor;
    float delta;
public:
    Timer(float FrameLockFactor)
    {
        factor = FrameLockFactor;
    }
    ~Timer()
    {
    }
    void Update()
    {
        curr = SDL_GetTicks();
        delta = (curr - prev) * (1000.f / factor);
        prev = curr;
    }
    float GetDelta()
    {
        return delta;
    }
};
And I use it like this:
// Create a timer and lock at 60 fps
Timer timer(60.0f);
while (true) // main loop
{
    float delta;
    float velocity = 4.0f;
    timer.Update();
    delta = timer.GetDelta();
    sprite.SetPosition( sprite.GetVector() + Vector2(0.0f, velocity * delta) );
    sprite.Draw();
}
But there is a big problem: my game runs way too slowly for a program that is supposed to run at 60 frames per second, and the same test code runs smoothly when not using frame-independent movement, so there must be something wrong with my code.
Any help?

If delta is supposed to be a count of frames, shouldn't it be calculated as
delta = (curr - prev) * (factor / 1000.f);
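SDL_GetTicks() returns milliseconds, so with that formula delta counts elapsed 1/60-second "frames" rather than an arbitrarily scaled value. A sketch of the corrected Update() (note that prev should also be initialized, e.g. in the constructor, before the first call):
void Update()
{
    curr = SDL_GetTicks();                     // milliseconds since SDL was initialised
    delta = (curr - prev) * (factor / 1000.f); // e.g. 16 ms * 60 / 1000 ≈ 0.96 frames
    prev = curr;
}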

I don't really get what you are trying to do with your code. In particular, the line delta = (curr - prev) * (1000.f / factor); does not make sense as far as I can see.
If I understand this correctly, you are trying to calculate how much time has passed since the last update and translate this into milliseconds per frame. What units are you using?
I don't know what is returned by SDL_GetTicks(). Is it a number of processor ticks or of real clock ticks? If it returns real clock ticks, the (curr - prev) part will most often be zero, since you would have to do multiple updates per frame for this to work.
If it is not returning real clock ticks, why are you multiplying by 1000.f? Where does this factor come from?
With code like this it is often very important to take care of round-off errors, so I am guessing that your problem lies somewhere in this area, although without additional information I cannot tell what your actual problem may be.

Related

Is my solution to fixed timestep with delta time and interpolation wrong?

I am trying to write a simple loop with a fixed delta time used for physics and interpolation before rendering the state. I am using the Gaffer On Games tutorial on fixed timesteps, and I tried to understand it and make it work.
float timeStep = 0.01;
float alpha = 1.0;
while (isOpen()) {
    processInput();
    deltaTime = clock.restart().asSeconds(); // get elapsed time in seconds
    if (deltaTime > 0.25) { deltaTime = 0.25; } // drop-frame guard
    accumulator += deltaTime;
    while (accumulator >= timeStep) {
        // spritePosBefore = sprite.getPosition();
        accumulator -= timeStep;
        // sprite.move(velocity * timeStep, 0);
        // spritePosAfter = sprite.getPosition();
    }
    if (accumulator > timeStep) { alpha = accumulator / timeStep; } else { alpha = 1.0; }
    // sprite.setPosition(Vector2f(spritePosBefore * (1 - alpha) + spritePosAfter * alpha));
    clear();
    draw(sprite);
    display();
}
Now, everything looks good. I have a fixed timestep for physics, I draw whenever I can after the physics are updated, and I interpolate between two positions. It should work flawlessly, but I can still see the sprite stuttering or even going back by one pixel once in a while. Why does this happen? Is there a problem with my code? I spent the last two days trying to understand a game loop that would ensure flawless motion, but it seems it doesn't work the way I thought it would. Any idea what could be improved?
You should remove the if statement and always calculate alpha; the if statement will never be executed as the condition is always false after the while loop is exited!
After the loop the accumulator will be between 0 and timeStep so you just end up drawing the latest position instead of interpolating.
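In code, the suggested fix amounts to something like this (a sketch assembled from the question's commented-out lines; the declarations are assumed to be as in the original):
while (accumulator >= timeStep) {
    spritePosBefore = sprite.getPosition();
    sprite.move(velocity * timeStep, 0);
    spritePosAfter = sprite.getPosition();
    accumulator -= timeStep;
}
alpha = accumulator / timeStep; // always computed; it is in [0, 1) here
sprite.setPosition(spritePosBefore * (1 - alpha) + spritePosAfter * alpha);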
I don't think the way you do it is necessarily wrong, but it looks a bit overcomplicated. I don't understand exactly what you're trying to do, so I'm just going to share the way I implement a "fixed time step" in my SFML applications.
The following is the simplest way and will be "good enough" for most applications. It's not the most precise, though (there can be a little error between the measured time and the real time):
sf::Clock clock;
sf::Event event;
while (window_.isOpen()) {
    while (window_.pollEvent(event)) {}
    if (clock.getElapsedTime().asSeconds() > FLT_FIXED_TIME_STEP) {
        clock.restart();
        update(FLT_FIXED_TIME_STEP);
    }
    render();
}
And if you really need precision, you can add a float variable that will act as a "buffer":
sf::Clock clock;
sf::Event event;
float timeBeforeNextStep = 0.f; // "buffer"
float timeDilation = 1.f;       // useful if you want to slow down or speed up time (<1 for slow-mo, >1 for speed-up)
while (window_.isOpen()) {
    while (window_.pollEvent(event)) {}
    timeBeforeNextStep -= clock.restart().asSeconds() * timeDilation;
    if (timeBeforeNextStep < FLT_FIXED_TIME_STEP) {
        timeBeforeNextStep += FLT_FIXED_TIME_STEP; // '+=', not '=', to make sure we don't lose any time
        update(FLT_FIXED_TIME_STEP);
        // Rendering every time you update is not always the best solution,
        // especially if you have a very small time step.
        render();
    }
}
You might want to use another buffer for rendering (if you want to run at exactly 60 FPS for example).
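A sketch of what that second buffer could look like, reusing the same pattern (FLT_RENDER_STEP is an assumed constant for the render rate, e.g. 1.f / 60.f; this is not the answer's exact code):
sf::Clock clock;
sf::Event event;
float timeBeforeNextStep = 0.f;  // physics buffer
float timeBeforeNextFrame = 0.f; // render buffer
while (window_.isOpen()) {
    while (window_.pollEvent(event)) {}
    float elapsed = clock.restart().asSeconds();
    timeBeforeNextStep -= elapsed;
    timeBeforeNextFrame -= elapsed;
    if (timeBeforeNextStep < FLT_FIXED_TIME_STEP) {
        timeBeforeNextStep += FLT_FIXED_TIME_STEP;
        update(FLT_FIXED_TIME_STEP);
    }
    if (timeBeforeNextFrame < FLT_RENDER_STEP) {
        timeBeforeNextFrame += FLT_RENDER_STEP;
        render();
    }
}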

Limiting FPS in C++

I'm currently making a game in which I would like to limit the frames per second but I'm having problems with that. Here's what I'm doing:
I'm getting the deltaTime through this method that is executed each frame:
void Time::calc_deltaTime() {
    double currentFrame = glfwGetTime();
    deltaTime = currentFrame - lastFrame;
    lastFrame = currentFrame;
}
deltaTime has the value I would expect (around 0.012 to 0.016).
And then I'm using deltaTime to delay the frame through the Windows Sleep function like this:
void Time::limitToMAXFPS() {
    if (1.0 / MAXFPS > deltaTime)
        Sleep((1.0 / MAXFPS - deltaTime) * 1000.0);
}
MAXFPS is equal to 60 and I'm multiplying by 1000 to convert seconds to milliseconds. Though everything seems correct, I'm still getting more than 60 fps (around 72 fps).
I also tried this method using a while loop:
void Time::limitToMAXFPS() {
    double diff = 1.0 / MAXFPS - deltaTime;
    if (diff > 0) {
        double t = glfwGetTime();
        while (glfwGetTime() - t < diff) { }
    }
}
But I'm still getting more than 60 fps (around 72 fps)... Am I doing something wrong, or is there a better way to do this?
How important is it that you return cycles back to the CPU? To me, it seems like a bad idea to use sleep at all. Someone please correct me if I am wrong, but I think sleep functions should be avoided.
Why not simply use an infinite loop that only does work once a certain time interval has passed? Try:
const double maxFPS = 60.0;
const double maxPeriod = 1.0 / maxFPS; // approx ~16.666 ms
bool running = true;
double lastTime = 0.0;
while (running) {
    double time = glfwGetTime();
    double deltaTime = time - lastTime;
    if (deltaTime >= maxPeriod) {
        lastTime = time;
        // code here gets called with max FPS
    }
}
The last time I used GLFW, it seemed to self-limit to 60 fps anyway. If you are doing anything performance-oriented (a game or 3D graphics), avoid anything that sleeps, unless you want to use multithreading.
Sleep can be very inaccurate. A commonly seen phenomenon is that the actual time slept has a resolution of 14-15 milliseconds, which gives you a frame rate of ~70.
Is Sleep() inaccurate?
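One common workaround, sketched here under the assumption that the same MAXFPS and deltaTime members from the question are available, is to Sleep for most of the remaining frame time and then busy-wait the last couple of milliseconds, since Sleep only guarantees a minimum wait at the timer's resolution:
#include <windows.h> // Sleep, DWORD

void Time::limitToMAXFPS() {
    double remaining = 1.0 / MAXFPS - deltaTime;
    if (remaining <= 0.0)
        return;
    double start = glfwGetTime();
    if (remaining > 0.002)                                       // leave ~2 ms of slack for Sleep jitter
        Sleep(static_cast<DWORD>((remaining - 0.002) * 1000.0)); // coarse wait
    while (glfwGetTime() - start < remaining) { }                // spin for the rest
}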
I've given up on trying to limit the fps like this... As you said, Windows is very inconsistent with Sleep. My average fps is always 64, not 60. The problem is that Sleep takes an integer (or long integer) as its argument, so I was casting with static_cast, but I really need to pass it a double: sleeping 16 milliseconds each frame is not the same as sleeping 16.666... ms. That's probably the cause of the extra 4 fps (I think).
I also tried:
std::this_thread::sleep_for(std::chrono::milliseconds(static_cast<long>((1.0 / MAXFPS - deltaTime) * 1000.0)));
and the same thing happens with sleep_for. Then I tried passing the decimal remainder of the milliseconds to chrono::microseconds and chrono::nanoseconds, using the three of them together to get better precision, but guess what, I still get 64 fps.
Another weird thing: in the expression (1.0 / MAXFPS - deltaTime) * 1000.0, sometimes (yes, this is completely random) when I change 1000.0 to a const integer, making the expression (1.0 / MAXFPS - deltaTime) * 1000, my fps simply jumps to 74 for some reason, even though the two expressions should be equivalent and nothing should change. Both are double expressions, so I don't think any type promotion is happening here.
So I decided to force V-sync through wglSwapIntervalEXT(1); in order to avoid screen tearing, and then I'm going to multiply deltaTime into every value that may vary depending on the speed of the computer running my game. It's going to be a pain, because I might forget to multiply some value and not notice it on my own computer, creating inconsistency, but I see no other way... Thank you all for the help, though.
I've recently started using GLFW for a small side project I'm working on, and I've used std::chrono alongside std::this_thread::sleep_until to achieve 60 fps:
auto start = std::chrono::steady_clock::now();
while (!glfwWindowShouldClose(window))
{
    ++frames;
    auto now = std::chrono::steady_clock::now();
    auto diff = now - start;
    auto end = now + std::chrono::milliseconds(16);
    if (diff >= std::chrono::seconds(1))
    {
        start = now;
        std::cout << "FPS: " << frames << std::endl;
        frames = 0;
    }
    glfwPollEvents();
    processTransition(countit);
    render.TickTok();
    render.RenderBackground();
    render.RenderCovers(countit);
    std::this_thread::sleep_until(end);
    glfwSwapBuffers(window);
}
To add, you can easily adjust the target FPS by adjusting end.
That said, I know GLFW limited me to 60 fps anyway, but I had to disable that limit with glfwSwapInterval(0); just before the while loop.
Are you sure your Sleep function accepts floating point values? If it only accepts an int, your sleep will be a Sleep(0), which would explain your issue.
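For illustration, a conversion along those lines could look like this (sleepSeconds is a hypothetical helper, not code from the question):
#include <windows.h>
#include <cmath>

// Sleep() takes an integer number of milliseconds, so a fractional second value
// has to be converted explicitly; rounding avoids always sleeping a little short.
void sleepSeconds(double seconds) {
    if (seconds <= 0.0)
        return;
    DWORD ms = static_cast<DWORD>(std::lround(seconds * 1000.0));
    Sleep(ms); // still limited by the OS timer resolution (often ~15 ms by default)
}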

C++ DirectX Movement

So I have been playing around with DirectX 11 lately and I'm still pretty new at it. I'm trying to move something right now with a translation, and this is what I've got. I've been reading Frank D. Luna's book on DirectX 11, and he provides a GameTimer class, but I really am not sure how to use delta time. This is the small snippet of code I was working with. Obviously this won't work, because whenever I'm not pressing the key the time still increases, since it's the total time.
// Button down event.
if (GetAsyncKeyState('W') & 0x8000)
{
    XMMATRIX carTranslate;
    // Every quarter second increment it
    static float t_base = 0.0f;
    if ((mTimer.TotalTime() - t_base) >= 0.25f)
        t_base += 0.25f;
    carPos.x = mTimer.TotalTime();
    carPos.y = 1.0f;
    carPos.z = 0.0f;
    carTranslate = XMMatrixTranslation(carPos.x, carPos.y, carPos.z);
    XMStoreFloat4x4(&mCarWorld, XMMatrixMultiply(carScale, carTranslate));
}
Usually we constantly render frames (redrawing the screen) in a while loop (the so-called "main loop"). To "move" an object, we just draw it in a different position than it had in the previous frame.
To move objects consistently, you need to know the time between frames. We call it "delta time" (dt). So, between frames, time increases by dt. Given the velocity (speed) of an object (v), we can calculate its displacement as dx = dt * v. Then, to get the current position, we just add dx to the previous position: x += dx.
Note that it is a bad idea to calculate the delta directly inside your update or rendering code. To avoid spreading this functionality around, we usually localize these calculations in a timer/clock class.
Here is a simplified example:
// somewhere in the timer class
// `Time` and `Duration` are some time units
class Timer {
    Time m_previousTime;
    Duration m_delta;
public:
    Duration getDelta() const { return m_delta; }
    void tick() {
        m_delta = currentTime() - m_previousTime; // just subtract
        m_previousTime = currentTime();           // `current` becomes `previous` for the next frame
    }
};

// main loop
while (rendering) {
    m_Timer.tick();
    Frame(m_Timer.getDelta());
}

void Frame(Duration dt) {
    if (keyPressed) {
        object.position += dt * object.velocity;
    }
}
You can now even make your object move with acceleration (throttle, gravity, etc.):
object.velocity += dt * object.acceleration;
object.position += dt * object.velocity;
Hope you got the idea!
Happy coding!

gameloop framerate control issues

This is driving me mad. Anyway, usual story: I'm attempting to guarantee the same speed in my very simple game across any Windows machine that runs it. I'm doing it by specifying a 1/60 value, then ensuring a frame can't run until that much time has passed since the last time it ran. The problem I'm having is that 1/60 equates to 30 Hz for some reason; I have to set it to 1/120 to get 60 Hz. It's also not bang on 60 Hz, it's a little faster.
If I paste it out here, could someone tell me if they see anything wrong? Or maybe a more precise way to do it?
float controlFrameRate = 1./60;
while (gameIsRunning)
{
    frameTime += (system->getElapsedTime() - lastTime);
    lastTime = system->getElapsedTime();
    if (frameTime > controlFrameRate)
    {
        gameIsRunning = system->update();
        //do stuff with the game
        frameTime = .0f;
    }
}
Don't call getElapsedTime twice; there may be a slight difference between the two calls. Instead, store its value and reuse it. Also, instead of setting frameTime to 0, subtract controlFrameRate from it; that way, if one frame takes a little longer, the next one will make up for it by being a little shorter.
while (gameIsRunning)
{
    float elapsedTime = system->getElapsedTime();
    frameTime += (elapsedTime - lastTime);
    lastTime = elapsedTime;
    if (frameTime > controlFrameRate)
    {
        gameIsRunning = system->update();
        //do stuff with the game
        frameTime -= controlFrameRate;
    }
}
I'm not sure about your problem with having to set the rate to 1/120; what timing API are you using here?

C++: Calculating Moving FPS

I would like to calculate the FPS of the last 2-4 seconds of a game. What would be the best way to do this?
Thanks.
Edit: To be more specific, I only have access to a timer with one-second increments.
Near miss of a very recent posting. See my response there on using exponential weighted moving averages.
C++: Counting total frames in a game
Here's sample code.
Initially:
avgFps = 1.0; // Initial value should be an estimate, but doesn't matter much.
Every second (assuming the total number of frames in the last second is in framesThisSecond):
// Choose alpha depending on how fast or slow you want old averages to decay.
// 0.9 is usually a good choice.
avgFps = alpha * avgFps + (1.0 - alpha) * framesThisSecond;
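A self-contained sketch of how those two pieces fit together (the per-second frame counts here are made-up sample values):
#include <cstdio>

int main() {
    double avgFps = 1.0;      // initial estimate; doesn't matter much
    const double alpha = 0.9; // how quickly old averages decay
    int samples[] = {58, 61, 72, 30, 60}; // frames counted in successive seconds
    for (int framesThisSecond : samples) {
        avgFps = alpha * avgFps + (1.0 - alpha) * framesThisSecond;
        std::printf("avg fps: %.1f\n", avgFps);
    }
    return 0;
}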
Here's a solution that might work for you. I'll write this in pseudo/C, but you can adapt the idea to your game engine.
const int trackedTime = 3000; // 3 seconds
int frameStartTime;           // in milliseconds
int queueAggregate = 0;
queue<int> frameLengths;

void onFrameStart()
{
    frameStartTime = getCurrentTime();
}

void onFrameEnd()
{
    int frameLength = getCurrentTime() - frameStartTime;
    frameLengths.enqueue(frameLength);
    queueAggregate += frameLength;
    while (queueAggregate > trackedTime)
    {
        int oldFrame = frameLengths.dequeue();
        queueAggregate -= oldFrame;
    }
    setAverageFps(frameLengths.count() / 3); // 3 seconds
}
Could keep a circular buffer of the frame times for the last 100 frames, and average them? That'll be "FPS for the last 100 frames" (or, rather, 99, since 100 timestamps only give you 99 frame intervals).
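A minimal sketch of that idea (the class and member names are illustrative); onFrame would be called once per frame with a monotonic time in seconds:
#include <array>

class RollingFps {
    static const int N = 100;
    std::array<double, N> times{}; // timestamps of the last N frames, in seconds
    int head = 0;
    int count = 0;
public:
    void onFrame(double now) {
        times[head] = now;
        head = (head + 1) % N;
        if (count < N) ++count;
    }
    double fps() const {
        if (count < 2) return 0.0;
        int oldest = (head - count + N) % N;
        int newest = (head - 1 + N) % N;
        double span = times[newest] - times[oldest];
        return span > 0.0 ? (count - 1) / span : 0.0; // count-1 intervals, as noted above
    }
};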
Call some accurate system time, milliseconds or better.
What you actually want is something like this (in your mainLoop):
frames++;
if (time < secondsTimer()) {
    time = secondsTimer();
    printf("Average FPS from the last 2 seconds: %d", (frames + lastFrames) / 2);
    lastFrames = frames;
    frames = 0;
}
If you know how to deal with structures/arrays, it should be easy for you to extend this example to, e.g., 4 seconds instead of 2. But if you want more detailed help, you should really mention WHY you don't have access to a precise timer (which architecture, which language) - otherwise everything is just guessing...
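For example, extending it to average over the last few seconds could look roughly like this (a sketch in the same style; secondsTimer() and the variables from the snippet above are assumed):
const int N = 4;          // how many seconds to average over
int frameCounts[N] = {0}; // frames counted in each of the last N seconds
int slot = 0;

// in the main loop:
frames++;
if (time < secondsTimer()) {
    time = secondsTimer();
    frameCounts[slot] = frames;
    slot = (slot + 1) % N;
    frames = 0;
    int total = 0;
    for (int i = 0; i < N; ++i)
        total += frameCounts[i];
    printf("Average FPS from the last %d seconds: %d\n", N, total / N);
}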