I'm trying to make a simple tile-based platformer in C++ and SDL2. My framerate stays at 59-60 fps, but when I start to hold down a key, it loses about 10 fps. This happens even when I don't call update or retrieve the keystates. This is the code inside my game loop:
//keys = (Uint8 *)SDL_GetKeyboardState(NULL);
elapsed = SDL_GetTicks() - current;
current += elapsed;
timeSinceSecond += elapsed;
//update(keys, elapsed / 1000.0);
draw();
frames++;
if(timeSinceSecond >= 1000) {
timeSinceSecond = 0;
cout << frames << endl;
frames = 0;
}
next = SDL_GetTicks();
if(next - current < 1000.0 / framerate) {
SDL_Delay(1000.0 / framerate - (next - current));
}
Any ideas on why this is happening? Could it be that it's a problem with SDL2? I haven't tried this with SDL 1.2.
SDL_Delay will not work the way you want: it is not precise enough (roughly 10 millisecond granularity on many systems), so you cannot hit an exact frame rate this way. Use vsync instead. Another thing is that printing to stderr/stdout is slow when a console is visible. If you're printing something whenever a key is pressed, or if pressing the key somehow increases the amount of text being printed, the game will slow down.
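For example, with SDL2 you can request vsync when creating the renderer; a minimal sketch (window is your existing SDL_Window, and whether you go through SDL_Renderer or a raw OpenGL context depends on your setup):
// Ask SDL to synchronize SDL_RenderPresent() with the monitor's refresh.
SDL_Renderer *renderer = SDL_CreateRenderer(window, -1,
    SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC);
// If you draw through an OpenGL context instead, enable vsync on the context:
// SDL_GL_SetSwapInterval(1);
With vsync on, the present/swap call blocks until the next refresh, so you can drop the SDL_Delay call entirely.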
I'm currently making a game in which I would like to limit the frames per second but I'm having problems with that. Here's what I'm doing:
I'm getting the deltaTime through this method that is executed each frame:
void Time::calc_deltaTime() {
double currentFrame = glfwGetTime();
deltaTime = currentFrame - lastFrame;
lastFrame = currentFrame;
}
deltaTime has the value I would expect (around 0.012 to 0.016 seconds).
And then I'm using deltaTime to delay the frame through the Windows Sleep function like this:
void Time::limitToMAXFPS() {
if(1.0 / MAXFPS > deltaTime)
Sleep((1.0 / MAXFPS - deltaTime) * 1000.0);
}
MAXFPS is equal to 60 and I'm multiplying by 1000 to convert seconds to milliseconds. Though everything seems correct, I'm still getting more than 60 fps (around 72 fps).
I also tried this method using while loop:
void Time::limitToMAXFPS() {
double diff = 1.0 / MAXFPS - deltaTime;
if(diff > 0) {
double t = glfwGetTime( );
while(glfwGetTime( ) - t < diff) { }
}
}
But I'm still getting more than 60 fps (again around 72)... Am I doing something wrong, or is there a better way of doing this?
How important is it that you return cycles back to the CPU? To me, it seems like a bad idea to use sleep at all. Someone please correct me if I am wrong, but I think sleep functions should be avoided.
Why not simply use a loop that only runs your frame code once a certain time interval has passed? Try:
const double maxFPS = 60.0;
const double maxPeriod = 1.0 / maxFPS;
// approx ~ 16.666 ms
bool running = true;
double lastTime = 0.0;
while( running ) {
double time = glfwGetTime();
double deltaTime = time - lastTime;
if( deltaTime >= maxPeriod ) {
lastTime = time;
// code here gets called with max FPS
}
}
The last time I used GLFW, it seemed to self-limit to 60 fps anyway. If you are doing anything high-performance oriented (games or 3D graphics), avoid anything that sleeps, unless you want to use multithreading.
Sleep can be very inaccurate. A common phenomenon seen is that the actual time slept has a resolution of 14-15 milliseconds, which gives you a frame rate of ~70.
Is Sleep() inaccurate?
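If you do want a software limiter anyway, a common workaround is to sleep for most of the frame and busy-wait the last couple of milliseconds; here is a rough sketch using std::chrono (not your original code, just the general idea):
#include <chrono>
#include <thread>
// Sleep coarsely, then spin for the remainder so the coarse granularity of
// Sleep()/sleep_until doesn't overshoot the target time.
void waitUntil(std::chrono::steady_clock::time_point target)
{
    using namespace std::chrono;
    auto coarse = target - milliseconds(2); // leave ~2 ms for the scheduler
    if (steady_clock::now() < coarse)
        std::this_thread::sleep_until(coarse);
    while (steady_clock::now() < target) { /* spin */ }
}
You would call it with the time point at which the current frame should end (frame start plus 1/60 s). The spin burns a little CPU, which is the trade-off for the extra accuracy.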
I've given up on trying to limit the fps like this... As you said, Windows is very inconsistent with Sleep. My fps average is always 64 and not 60. The problem is that Sleep takes an integer (or long integer) argument, so I was casting with static_cast, but I would need to pass it a double: 16 milliseconds per frame is not the same as 16.6666... That's probably the cause of the extra 4 fps (I think).
I also tried :
std::this_thread::sleep_for(std::chrono::milliseconds(static_cast<long>((1.0 / MAXFPS - deltaTime) * 1000.0)));
and the same thing happens with sleep_for. Then I tried passing the fractional remainder of the milliseconds to chrono::microseconds and chrono::nanoseconds, using the three of them together for better precision, but guess what: I still get the freaking 64 fps.
Another weird thing: in the expression (1.0 / MAXFPS - deltaTime) * 1000.0, sometimes (yes, this is completely random) when I change 1000.0 to a const integer, making it (1.0 / MAXFPS - deltaTime) * 1000, my fps simply jumps to 74 for some reason, even though the two expressions should be equivalent. Both evaluate as doubles, so I don't think any type promotion is happening here.
So I decided to force V-sync through the function wglSwapIntervalEXT(1); in order to avoid screen tearing, and I'm going to use the method of multiplying deltaTime into every value that might vary depending on the speed of the computer running my game. It's going to be a pain, because I might forget to multiply some value and not notice it on my own computer, creating inconsistency, but I see no other way... Thank you all for the help though.
I've recently started using GLFW for a small side project I'm working on, and I've used std::chrono alongside std::this_thread::sleep_until to achieve 60 fps:
auto start = std::chrono::steady_clock::now();
while(!glfwWindowShouldClose(window))
{
++frames;
auto now = std::chrono::steady_clock::now();
auto diff = now - start;
auto end = now + std::chrono::milliseconds(16);
if(diff >= std::chrono::seconds(1))
{
start = now;
std::cout << "FPS: " << frames << std::endl;
frames = 0;
}
glfwPollEvents();
processTransition(countit);
render.TickTok();
render.RenderBackground();
render.RenderCovers(countit);
std::this_thread::sleep_until(end);
glfwSwapBuffers(window);
}
To add, you can easily adjust the FPS target by adjusting end.
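For example, the fixed 16 ms could be derived from a target FPS instead (targetFPS here is just an illustrative variable, not part of the code above):
const double targetFPS = 60.0;
auto frameTime = std::chrono::duration_cast<std::chrono::steady_clock::duration>(
    std::chrono::duration<double>(1.0 / targetFPS));
auto end = now + frameTime; // replaces now + std::chrono::milliseconds(16)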
Now, with that said, I know GLFW was limited to 60 fps, but I had to disable that limit with glfwSwapInterval(0); just before the while loop.
Are you sure your Sleep function accepts floating-point values? If it only accepts an int, your sleep may effectively become Sleep(0), which would explain your issue.
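If you want to sleep for a fractional number of milliseconds, std::chrono can express that directly; a small sketch along the lines of your limitToMAXFPS (same MAXFPS and deltaTime as in your code; the accuracy caveats from the other answers still apply):
#include <chrono>
#include <thread>
void Time::limitToMAXFPS()
{
    double remaining = 1.0 / MAXFPS - deltaTime; // seconds left in this frame
    if (remaining > 0.0)
        std::this_thread::sleep_for(std::chrono::duration<double>(remaining));
}
sleep_for accepts any std::chrono duration, including floating-point ones, so no cast to whole milliseconds is needed; the underlying timer resolution is still the limiting factor, though.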
I am using SFML to make a 2D platformer. I have read many timestep articles, but they don't work well for me. I am implementing something like a 2500 FPS timestep: on my desktop PC it's amazingly smooth, while on my laptop it gets around 300 FPS (I check with Fraps); it's not as smooth on the laptop, but still playable.
Here are the code snippets:
sf::Clock clock;
const sf::Time TimePerFrame = sf::seconds(1.f/2500.f);
sf::Time TimeSinceLastUpdate = sf::Time::Zero;
sf::Time elapsedTime;
Those are the variables, and here is the game loop:
while(!quit){
elapsedTime = clock.restart();
TimeSinceLastUpdate += elapsedTime;
while (TimeSinceLastUpdate > TimePerFrame){
TimeSinceLastUpdate -= TimePerFrame;
Player::instance()->handleAll();
}
Player::instance()->render();
}
In the Player.h, I've got movement constants,
const float GRAVITY = 0.35 /2500.0f; // Uses += every frame
const float JUMP_SPEED = -400.0f/2500.0f; //SPACE -> movementSpeed.y = JUMP_SPEED;
//When character is touching to ground
const float LAND_ACCEL = 0.075 /2500.0f; // These are using +=
const float LAND_DECEL = 1.5 /2500.0f;
const float LAND_FRICTION = 0.5 /2500.0f;
const float LAND_STARTING_SPEED = 0.075; // This uses =, instead of +=
In the handleAll function of Player class, there is
cImage.move(movementSpeed);
checkCollision();
And lastly, the checkCollision function simply checks whether the character's master bounding box intersects the object's rectangle on each side, sets the x or y speed to 0, and then fixes the overlap by setting the character's position to the edge.
//Collision
if(masterBB().intersects(objectsIntersecting[i]->GetAABB())){
//HORIZONTAL
if(leftBB().intersects(objectsIntersecting[i]->GetAABB())){
if(movementSpeed.x < 0)
movementSpeed.x = 0;
cImage.setPosition(objectsIntersecting[i]->GetAABB().left + objectsIntersecting[i]->GetAABB().width + leftBB().width , cImage.getPosition().y);
}
else if(rightBB().intersects(objectsIntersecting[i]->GetAABB())){
if(movementSpeed.x > 0)
movementSpeed.x = 0;
cImage.setPosition(objectsIntersecting[i]->GetAABB().left - rightBB().width , cImage.getPosition().y);
}
//VERTICAL
if(movementSpeed.y < 0 && topBB().intersects(objectsIntersecting[i]->GetAABB())){
movementSpeed.y = 0;
cImage.setPosition(cImage.getPosition().x , objectsIntersecting[i]->GetAABB().top + objectsIntersecting[i]->GetAABB().height + masterBB().height/2);
}
if(movementSpeed.y > 0 && bottomBB().intersects(objectsIntersecting[i]->GetAABB())){
movementSpeed.y = 0;
cImage.setPosition(cImage.getPosition().x , objectsIntersecting[i]->GetAABB().top - masterBB().height/2);
//and some state updates
}
}
I have tried a 60 FPS timestep like a million times, but then all the speed variables become so slow. I can't simply apply *2500.0f / 60.0f to all the constants; it doesn't feel the same. If I pick constants that are close, it feels "ok", but then when a collision happens the character's position gets reset over and over and it flies out of the map, because of the big overlap with the object caused by the large per-frame speed constants, I guess...
I should add that the book I took the timestep code from normally uses
cImage.move(movementSpeed*TimePerFrame.asSeconds());
but as you saw, I just applied /2500.0f to every constant instead and don't use that.
So, is 1/2500 seconds per frame good? If not, how can I change all of these to 1/60.0f?
You're doing it wrong.
Your monitor most likely has a refresh rate of 60 Hz (= 60 FPS), so trying to render an image at 2500 FPS is a huge waste of resources. If the only reason for choosing 2500 FPS is that your movement doesn't work otherwise, haven't you considered that the problem might be with the movement code itself?
Ideally you'd implement a fixed timestep (famous article); that way your physics can run at whatever rate you want (2500 "FPS" would still be crazy, so don't do it) and is independent of your rendering rate. So even if your FPS varies, it won't influence your physics.
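A minimal sketch of that pattern using the SFML types from your snippet, with the physics stepped at a fixed 1/60 s and the tuning constants kept in units per second (the per-second names are illustrative):
sf::Clock clock;
const sf::Time TimePerFrame = sf::seconds(1.f / 60.f); // fixed physics step
sf::Time accumulator = sf::Time::Zero;
while (!quit)
{
    accumulator += clock.restart();
    // Run as many fixed 1/60 s physics steps as the elapsed time allows.
    while (accumulator >= TimePerFrame)
    {
        accumulator -= TimePerFrame;
        // Inside handleAll(), scale per-second constants by the step, e.g.:
        //   movementSpeed.y += GRAVITY_PER_SECOND * TimePerFrame.asSeconds();
        //   cImage.move(movementSpeed * TimePerFrame.asSeconds());
        Player::instance()->handleAll();
    }
    // Render once per loop iteration, independent of the physics rate.
    Player::instance()->render();
}
This is the same structure you already have, just with a sane step size; the key is that every tuning constant is expressed per second and multiplied by TimePerFrame.asSeconds() exactly once, instead of being pre-divided by 2500.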
I'm trying to define a time step for the physics simulation in a PhysX application, such that the physics will run at the same speed on all machines. I wish for the physics to update at 60FPS, so each update should have a delta time of 1/60th of a second.
My application must use GLUT. Currently, my loop is set up as follows.
Idle Function:
void GLUTGame::Idle()
{
newElapsedTime = glutGet(GLUT_ELAPSED_TIME);
deltaTime = newElapsedTime - lastElapsedTime;
lastElapsedTime = newElapsedTime;
glutPostRedisplay();
}
The frame rate does not really matter in this case - it's only the speed at which my physics update that actually matters.
My render function contains the following:
void GLUTGame::Render()
{
// Rendering Code
simTimer += deltaTime;
if (simTimer > m_fps)
{
m_scene->UpdatePhys(m_fps);
simTimer = 0;
}
}
Where:
Fl32 m_fps = 1.f/60.f
However, this results in some very slow updates, due to the fact that deltaTime appears to equal 0 on most loops (which shouldn't actually be possible...) I've tried moving my deltaTime calculations to the bottom of my rendering function, as I thought that maybe the idle callback was called too often, but this did not solve the issue. Any ideas what I'm doing wrong here?
From the OpenGL website, we find that glutGet(GLUT_ELAPSED_TIME) returns the number of elapsed milliseconds as an int. So if you call your void GLUTGame::Idle() method about 2000 times per second, the time passed between two such calls is about 1000 * 1/2000 = 0.5 ms. With more than 2000 calls per second to void GLUTGame::Idle(), the difference between two successive glutGet(GLUT_ELAPSED_TIME) readings therefore rounds down to 0 most of the time, because the counter only has integer millisecond resolution.
Likely you're adding very small numbers to larger ones and you get rounding errors.
Try this:
void GLUTGame::Idle()
{
newElapsedTime = glutGet(GLUT_ELAPSED_TIME);
timeDelta = newElapsedTime - lastElapsedTime;
if (timeDelta < m_fps * 1000.f) return; // timeDelta is in ms, m_fps (1/60) is in seconds
lastElapsedTime = newElapsedTime;
glutPostRedisplay();
}
You can do something similar in the other method if you want to.
I don't know anything about GLUT or PhysX, but here's how to have something execute at the same rate (using integers) no matter how fast the game runs:
if (currentTime - lastUpdateTime > msPerUpdate)
{
DWORD msPassed = currentTime - lastUpdateTime;
int updatesPassed = msPassed / msPerUpdate;
for (int i=0; i<updatesPassed; i++)
UpdatePhysX(); //or whatever function you use
lastUpdateTime = currentTime - msPassed + (updatesPassed * msPerUpdate);
}
Where currentTime is updated to timeGetTime() every run through the game loop, lastUpdateTime is the last time PhysX was updated, and msPerUpdate is the number of milliseconds you assign to each update - 16 or 17 ms for 60 fps.
If you want to support floating-point update factors (which is recommended for a physics application), then define float timeMultiplier and update it every frame like so: timeMultiplier = desiredFrameRate / frameRate; - where frameRate is the current frames per second and desiredFrameRate is 60.0f if your physics is tuned for 60 fps (a frame that takes twice as long as a 60 fps frame then gets a multiplier of 2.0). To do this, change UpdatePhysX to take a float parameter and multiply all of its per-frame update amounts by it.
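As a rough sketch of that idea (stepPhysics and the float-taking UpdatePhysX are hypothetical names, used only for illustration):
const float desiredFrameRate = 60.0f;
// Call once per game loop iteration with the time this frame took, in milliseconds.
void stepPhysics(float msPassedThisFrame)
{
    float frameRate = 1000.0f / msPassedThisFrame;        // current frames per second
    float timeMultiplier = desiredFrameRate / frameRate;  // 1.0 at 60 fps, 2.0 at 30 fps
    UpdatePhysX(timeMultiplier);                          // scales all per-frame amounts
}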
I have managed to create a system that allows game objects to move according to the change in time (rather than the change in number of frames). I am now, for practise's sake, trying to impose a FPS limiter.
The problem I'm having is that twice as many game loops occur as I expect. For example, if I try to limit the number of frames to 1 FPS, 2 frames will pass in that one second: one very long frame (~1 second) and one extremely short frame (~15 milliseconds). This can be seen by outputting the time difference (in milliseconds) between game loops; an example output might be 993, 17, 993, 16...
I have tried changing the code such that delta time is calculated before the FPS is limited, but to no avail.
Timer class:
#include "Timer.h"
#include <SDL.h>
Timer::Timer(void)
{
_currentTicks = 0;
_previousTicks = 0;
_deltaTicks = 0;
_numberOfLoops = 0;
}
void Timer::LoopStart()
{
//Store the previous and current number of ticks and calculate the
//difference. If this is the first loop, then no time has passed (thus
//previous ticks == current ticks).
if (_numberOfLoops == 0)
_previousTicks = SDL_GetTicks();
else
_previousTicks = _currentTicks;
_currentTicks = SDL_GetTicks();
//Calculate the difference in time.
_deltaTicks = _currentTicks - _previousTicks;
//Increment the number of loops.
_numberOfLoops++;
}
void Timer::LimitFPS(int targetFps)
{
//If the framerate is too high, compute the amount of delay needed and
//delay.
if (_deltaTicks < (unsigned)(1000 / targetFps))
SDL_Delay((unsigned)(1000 / targetFps) - _deltaTicks);
}
(Part of the) game loop:
//The timer object.
Timer* _timer = new Timer();
//Start the game loop and continue.
while (true)
{
//Restart the timer.
_timer->LoopStart();
//Game logic omitted
//Limit the frames per second.
_timer->LimitFPS(1);
}
Note: I am calculating the FPS using an SMA (simple moving average); this code has been omitted.
The problem is part of how your timer structure is set up.
_deltaTicks = _currentTicks - _previousTicks;
I looked through the code a few times, and based on how you have the timer set up, this seems to be the problem line.
Walking through it: at the start of your code, _currentTicks will equal 0 and _previousTicks will equal 0, so for the first loop your _deltaTicks will be 0. This means that regardless of your game logic, LimitFPS will misbehave.
void Timer::LimitFPS(int targetFps)
{
if (_deltaTicks < (unsigned)(1000 / targetFps))
SDL_Delay((unsigned)(1000 / targetFps) - _deltaTicks);
}
We just calculated that _deltaTicks was 0 at the start of the loop, thus we wait the full 1000ms regardless of what happens in your game logic.
On the next frame, _previousTicks will equal 0 and _currentTicks will now be approximately 1000 ms, which means _deltaTicks will equal 1000.
Again we hit the LimitFPS function but this time with a _deltaTicks of 1000 which makes it so we wait for 0ms.
This behavior will constantly flip like this.
wait 1000ms
wait 0ms
wait 1000ms
wait 0ms
...
Those 0 ms waits essentially double your FPS.
You need to test for time at the start and end of your game logic.
//The timer object.
Timer* _timer = new Timer();
//Start the game loop and continue.
while (true)
{
//Restart the timer.
_timer->LoopStart();
//Game logic omitted
//Obtain the end time
_timer->LoopEnd();
//Now you can calculate the delta based on your start and end times.
_timer->CalculateDelta();
//Limit the frames per second.
_timer->LimitFPS(1);
}
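A sketch of what those two extra Timer methods might look like (LoopEnd, CalculateDelta and the _endTicks member are hypothetical names matching the loop above):
void Timer::LoopEnd()
{
    //Capture the tick count once the game logic has finished.
    _endTicks = SDL_GetTicks();
}
void Timer::CalculateDelta()
{
    //Time spent on the game logic this iteration; LimitFPS() then only
    //delays for the remainder of the frame.
    _deltaTicks = _endTicks - _currentTicks;
}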
Hope that helps.
Assuming your game loop is only triggered once SDL_Delay() has expired, wouldn't it be enough to write:
void Timer::LimitFPS(int targetFps)
{
SDL_Delay((unsigned)(1000 / targetFps));
}
to delay your game loop by the right number of milliseconds for the requested target FPS?
I have a program in which I am drawing images on the screen. The draw function here is called once per frame and contains all my drawing code.
I have written an image sequencer that returns the respective image from an index of images.
void draw()
{
sequence.getFrameForTime(getCurrentElapsedTime()).draw(0,0); //getCurrentElapsedTime() returns the time as a float, counted from application start
}
On a key press, I want to start the sequence from the first image [0] and go on from there. So every time I press a key, it has to start from [0], unlike the code above, which basically uses currentTime % numImages to pick the frame (which is not the starting 0 position of the sequence).
I was thinking of writing a timer of my own that is reset every time I press the key, so that the time always starts from 0. But before doing that, I wanted to ask whether anybody has better/easier implementation ideas.
EDIT
Why didn't I just use a counter?
I have framerate adjustments in my ImageSequence as well.
Image getFrameAtPercent(float time)
{
float totalTime = sequence.size() / frameRate;
float percent = time / totalTime;
return setFrameAtPercent(percent);
}
int getFrameIndexAtPercent(float percent){
if (percent < 0.0 || percent > 1.0) percent -= floor(percent);
return MIN((int)(percent*sequence.size()), sequence.size()-1);
}
void draw()
{
sequence.getFrameForTime(counter++).draw(0,0);
}
void OnKeyPress(){ counter = 0; }
Is there a reason this won't suffice?
What you should do is increment a "currentFrame" value as a float and convert it to an int to index your frame:
void draw()
{
currentFrame += deltaTime * framesPerSecond; // delta time being the time between the current frame and your last frame
if(currentFrame >= numImages)
currentFrame -= numImages;
sequence.getFrameAt((int)currentFrame).draw(0,0);
}
void OnKeyPress() { currentFrame = 0; }
This should gracefully handle machines with different framerates and even changes of framerates on a single machine.
Also, you won't skip part of a frame when you wrap around, since the remainder of the subtraction is kept.