I'm writing a game loop with SFML. When I don't make it sleep, the time elapsed for each loop iteration is ~1 ms. But when I add sleep(sleepTime), dt suddenly becomes high. I restart the clock at the beginning of the loop, but it seems that the last sleep time gets added to dt. What causes this?
sf::Clock clock;
float dt;
sf::Time sleepTime = sf::milliseconds(0);

while (m_Window.isOpen())
{
    sf::Time elapsed = clock.restart();
    dt = elapsed.asMilliseconds();
    cout << "Elapsed: " << dt;

    sf::Event event;
    while (m_Window.pollEvent(event))
    {
        switch (event.type)
        {
        case sf::Event::Closed:
            m_Window.close();
            break;
        }
    }

    sleepTime = sf::milliseconds(16 - dt);
    float time = sleepTime.asMilliseconds();
    cout << "\tSleep time: " << time << endl;

    if (sleepTime >= sf::Time::Zero)
    {
        sf::sleep(sleepTime);
    }
    else
    {
        cout << "Shit." << endl;
    }
}
Without sleep: https://aww.moe/sn1z0a.png
With sleep: https://aww.moe/7seof1.png
What you're trying to do – limiting the game to a specific framerate – is already built into SFML. Just call sf::Window::setFramerateLimit() with your intended framerate as the parameter and you're set. It's also possible to use vertical synchronization (via sf::Window::setVerticalSyncEnabled()) to limit the number of frames/updates, although that's often considered a bad idea, since the game would slow down if the target machine can't render at the desired framerate (or speed up on high-end screens running at 120 or 144 Hz).
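For illustration, a minimal sketch of that (assuming m_Window is the sf::Window from the question):

// Pace the loop for us: display() will then sleep just long enough
// to approximate 60 frames per second.
m_Window.setFramerateLimit(60);

// Or sync to the monitor's refresh rate instead; don't combine the two:
// m_Window.setVerticalSyncEnabled(true);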
However, you'll typically want to disconnect your game updates from your frame rate so the game doesn't slow down, even if the current machine can't update the screen fast enough.
The basic approach using SFML will typically look like this (this is from memory, so might include bugs or typos):
sf::Clock updateClock; // Clock to monitor the time passed
sf::Time passedTime; // Accumulated game time
const sf::Time frameTime(sf::milliseconds(10)); // intended time per frame; here: 10ms

while (window.isOpen()) {
    sf::Event event;
    while (window.pollEvent(event)) {
        // Event handling
    }

    // First add the time passed
    passedTime += updateClock.restart();

    unsigned int numUpdates = 0; // Count the updates done

    // As long as enough time has passed, do an update
    // Up to a specific maximum to avoid problems, e.g. if
    // the main thread was blocked or can't catch up
    while (passedTime >= frameTime) {
        if (numUpdates++ < 10) {
            // Do your game update here
        }
        // Subtract the time we've "handled"
        passedTime -= frameTime;
    }

    window.clear();
    // Draw your game here
    window.display();
}
The usage of numUpdates might not be clear at first, but imagine a situation where the machine is barely able to run the desired 100 updates per second. If you fall 20 updates behind (some hiccup or whatever), the machine will never be able to catch up again properly, causing heavy stuttering or the game becoming unresponsive.
I'm trying to implement a MIDI-like clocked sample player.
There is a timer which increments a pulse counter, and every 480 pulses make a quarter note, so the pulse period is 1041667 ns at 120 beats per minute.
The timer is not sleep-based and runs in a separate thread, but it seems like the delay time is inconsistent: the period between samples played in a test file fluctuates ±20 ms (on some occasions the period is OK and steady; I can't figure out what this effect depends on).
Audio backend influence is excluded: I've tried OpenAL as well as SDL_mixer.
void Timer_class::sleep_ns(uint64_t ns){
    auto start = std::chrono::high_resolution_clock::now();
    bool sleep = true;
    while (sleep)
    {
        auto now = std::chrono::high_resolution_clock::now();
        auto elapsed = std::chrono::duration_cast<std::chrono::nanoseconds>(now - start);
        if (elapsed.count() >= ns) {
            TestTime = elapsed.count();
            sleep = false;
            //break;
        }
    }
}
void Timer_class::Runner(void){
    // this runs as a thread
    while (1) {
        sleep_ns(BPMns);
        if (Run) Transport.IncPlaybackMarker(); // marker increment
        if (Transport.GetPlaybackMarker() == Transport.GetPlaybackEnd()){ // check if the timer has reached the end, which is 480 pulses
            Transport.SetPlaybackMarker(Transport.GetPlaybackStart());
            Player.PlayFile(1); // period of this event fluctuates severely
        }
    }
}
void Player_class::PlayFile(int FileNumber){
#ifdef AUDIO_SDL_MIXER
    if (Mix_PlayChannel(-1, WaveData[FileNumber], 0) == -1) {
        printf("Mix_PlayChannel: %s\n", Mix_GetError());
    }
#endif // AUDIO_SDL_MIXER
}
Am I doing something wrong in terms of approach? Is there a better way to implement a timer of this kind?
Deviation higher than 4-5 ms is too much in the case of audio.
I see a large error and a small error. The large error is that your code assumes that the main processing in Runner consistently takes zero time:
if (Run) Transport.IncPlaybackMarker(); // marker increment
if (Transport.GetPlaybackMarker() == Transport.GetPlaybackEnd()){ // check if the timer has reached the end, which is 480 pulses
    Transport.SetPlaybackMarker(Transport.GetPlaybackStart());
    Player.PlayFile(1); // period of this event fluctuates severely
}
That is, you're "sleeping" for the time you want your loop iteration to take, and then you're doing processing on top of that. Every iteration therefore lasts the sleep time plus the processing time, and that surplus accumulates from pulse to pulse.
The small error is presuming that you can represent your ideal loop iteration time with an integral number of nanoseconds. This error is so small that it doesn't really matter. However, I amuse myself by showing people how they can get rid of this error too. :-)
First, let's correct the small error by exactly representing the idealized loop iteration time:
using quarterPeriod = std::ratio<1, 2>;
using iterationPeriod = std::ratio_divide<quarterPeriod, std::ratio<480>>;
using iteration_time = std::chrono::duration<std::int64_t, iterationPeriod>;
I know nothing of music, but I'm guessing the above code is right because if you convert iteration_time{1} to nanoseconds, you get approximately 1041667ns. iteration_time{1} is intended to be the precise amount of time you want each iteration of your loop in Timer_class::Runner to take.
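As a quick check of that number, here is a tiny standalone program (note that duration_cast truncates, hence it prints 1041666 rather than 1041667):

#include <chrono>
#include <cstdint>
#include <iostream>
#include <ratio>

using quarterPeriod = std::ratio<1, 2>;
using iterationPeriod = std::ratio_divide<quarterPeriod, std::ratio<480>>;
using iteration_time = std::chrono::duration<std::int64_t, iterationPeriod>;

int main() {
    // (1/2 s) / 480 = 1/960 s per iteration
    std::cout << std::chrono::duration_cast<std::chrono::nanoseconds>(
                     iteration_time{1}).count() << "ns\n";
}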
To correct the large error, you need to sleep until a time_point, as opposed to sleeping for a duration. Here's a generic utility to help you do that:
template <class Clock, class Duration>
void
delay_until(std::chrono::time_point<Clock, Duration> tp)
{
    while (Clock::now() < tp)
        ;
}
Now if you code Timer_class::Runner to use delay_until instead of sleep_ns, I think you'll get better results:
void
Timer_class::Runner()
{
    auto next_start = std::chrono::steady_clock::now() + iteration_time{1};
    while (true)
    {
        if (Run) Transport.IncPlaybackMarker(); // marker increment
        if (Transport.GetPlaybackMarker() == Transport.GetPlaybackEnd()){ // check if the timer has reached the end, which is 480 pulses
            Transport.SetPlaybackMarker(Transport.GetPlaybackStart());
            Player.PlayFile(1);
        }
        delay_until(next_start);
        next_start += iteration_time{1};
    }
}
I ended up using the #howard-hinnant version of the delay, and reducing the buffer size in openal-soft; that's what made a huge difference. Fluctuation is now about ±5 ms for 1/16th notes at 120 BPM (125 ms period) and ±1 ms for quarter notes. Leaves a lot to be desired, but I guess it's okay.
The code below is for an empty window, but it shows relatively high CPU usage of 25% on my Intel i3. I also tried setFramerateLimit, with no change. Is there a way to reduce the CPU usage?
#include <SFML/Window.hpp>

void processEvents(sf::Window& window);

int main()
{
    sf::Window window(sf::VideoMode(800, 600), "My Window", sf::Style::Close);
    window.setVerticalSyncEnabled(true);
    while (window.isOpen())
    {
        processEvents(window);
    }
    return 0;
}

void processEvents(sf::Window& window)
{
    sf::Event event;
    while (window.pollEvent(event)) // drain all pending events; pollEvent returns false when the queue is empty
    {
        switch (event.type)
        {
        case sf::Event::Closed:
            window.close();
            break;
        }
    }
}
Since you're not calling window.display() in the loop, there's nothing to halt the thread for the appropriate amount of time, as set with sf::RenderWindow::setVerticalSyncEnabled or sf::RenderWindow::setFramerateLimit.
Try this:
while (window.isOpen())
{
    processEvents(window);
    // this makes the thread sleep
    // (for ~16.7ms minus the time already spent since
    // the previous window.display(), if synced with 60 FPS)
    window.display();
}
From SFML Docs:
If a limit is set, the window will use a small delay after each call to display() to ensure that the current frame lasted long enough to match the framerate limit.
The issue is
while (window.isOpen())
{
    processEvents(window);
}
This is a loop with no pause in it. Since a loop like this normally consumes 100% of one core, I would guess that you have a 4-core CPU, so it is consuming one entire core, which is 25% of the capacity of the CPU.
You could add a pause in the loop so it is not running 100% of the time, or you could change the event handling altogether, as sketched below.
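For the second option, here is a minimal sketch using sf::Window::waitEvent, which blocks the thread until an event arrives instead of spinning (this assumes the window variable from the question):

while (window.isOpen())
{
    sf::Event event;
    // waitEvent blocks (using no CPU) until the next event arrives,
    // so the loop body only runs when there is something to handle
    if (window.waitEvent(event))
    {
        if (event.type == sf::Event::Closed)
            window.close();
    }
}

Note that this only suits an event-driven application; a game that needs continuous updates should keep polling and pace itself with display() or a frame limiter instead.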
I've encountered a huge problem! I'm making a C++ zombie game and it works perfectly besides the barrier part. I want the zombies to come to the barrier, wait around 5 seconds, and then break through the barrier. I don't think you need my whole code for this since it's just a timer, but if you do, let me know! Basically, I tried many timers AND the Sleep command, but when I use them the zombies stay at the barrier and everything else freezes until the timer runs out. For example, if a zombie is at the barrier and I use a timer for 5 seconds, the zombie stays at the barrier for 5 seconds, but so does everything else; nothing else can move for 5 seconds! Is there any way I could use a sleep command on only a CERTAIN part of my code? Here is one of the few timers I used.
int Timer()
{
    int s = 0;
    int m = 0;
    int h = 0;
    while (true)
    {
        CPos(12, 58);
        cout << "Timer: ";
        cout << h/3600 << ":" << m/60 << ":" << s;
        if (s == 59) s = -1;
        if (m == 3599) m = -1; // 3599 = 60*60 - 1
        s++;
        m++;
        h++;
        Sleep(1000);
        cout << "\b\b\b";
    }
}
This one involves a Sleep command. I also used a timer where while(number > 0) --number, and it works, but it still freezes everything else in my program!
If you need anything, let me know!
Unless you have EACH zombie and everything else running on different threads, calling Sleep will pause the entire application for x milliseconds. You need to stop the zombie a different way, namely by just not moving him until the time has passed, while still updating the other entities as normal (don't use Sleep).
EDIT:
You can't just create a timer and then wait until that timer is done. At the time the zombie needs to stop moving, you have to 'remember' the current time, but continue on. Then each time you get back to that zombie to update its position, you check whether he has a pause timer. If he does, you compare the elapsed time between what you 'remembered' and the current time, and check whether he has paused long enough. Here is some pseudo code:
#include <ctime>

class Zombie {
private:
    int m_xPos;
    time_t m_rememberedTime;

public:
    Zombie() {
        this->m_xPos = 0;
        this->m_rememberedTime = 0;
    }

    void Update() {
        if (CheckPaused()) {
            // bail out before we move this zombie if he is paused at a barrier.
            return;
        }
        // If it's not paused, then move him as normal.
        this->m_xPos += 1; // or whatever.
        if (ZombieHitBarrier()) {
            PauseZombieAtBarrier();
        }
    }

    bool CheckPaused() {
        if (this->m_rememberedTime > 0) {
            // If we have a remembered time, calculate the elapsed time.
            time_t currentTime;
            time(&currentTime);
            time_t elapsed = currentTime - this->m_rememberedTime;
            if (elapsed >= 5) {
                // 5 seconds have gone by, so clear the remembered time and continue on to return false.
                this->m_rememberedTime = 0;
            } else {
                // 5 seconds have not gone by yet, so return true that we are still paused.
                return true;
            }
        }
        // Either no timer exists, or the timer has just finished; return false that we are not paused.
        return false;
    }

    // Call this when the zombie hits a wall.
    void PauseZombieAtBarrier() {
        // Store the current time in a variable for later use.
        time(&this->m_rememberedTime);
    }
};
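On C++17 or later, the same bookkeeping reads a bit more cleanly with std::chrono; here is a sketch of just the timing part (the movement and ZombieHitBarrier logic from above is assumed):

#include <chrono>
#include <optional>

class Zombie {
    using clock = std::chrono::steady_clock;
    // empty when not paused; otherwise, the moment the pause began
    std::optional<clock::time_point> m_pausedAt;

public:
    bool CheckPaused() {
        if (m_pausedAt && clock::now() - *m_pausedAt < std::chrono::seconds(5))
            return true;        // still within the 5-second pause
        m_pausedAt.reset();     // pause over (or never started)
        return false;
    }

    void PauseZombieAtBarrier() {
        m_pausedAt = clock::now();
    }
};

steady_clock is monotonic, so the pause length isn't affected if the user adjusts the system clock, which can happen with time().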
I have a game with Bullet Physics as the physics engine. The game is online multiplayer, so I thought I'd try the Source Engine approach to deal with physics sync over the net. On the client I use GLFW, so the FPS limit works there by default (at least I think it's because of GLFW). But on the server side there are no graphics libraries, so I need to "lock" the loop that simulates the world and steps the physics engine to 60 "ticks" per second.
Is this the right way to lock a loop to run 60 times a second? (A.K.A 60 "fps").
void World::Run()
{
    m_IsRunning = true;
    long limit = (1 / 60.0f) * 1000; // truncates to 16 ms
    long previous = milliseconds_now();
    while (m_IsRunning)
    {
        long start = milliseconds_now();
        long deltaTime = start - previous;
        previous = start;

        std::cout << m_Objects[0]->GetObjectState().position[1] << std::endl;
        m_DynamicsWorld->stepSimulation(1 / 60.0f, 10);

        long end = milliseconds_now();
        long dt = end - start;
        if (dt < limit)
        {
            std::this_thread::sleep_for(std::chrono::milliseconds(limit - dt));
        }
    }
}
Is it OK to use std::thread for this task?
Is this approach efficient enough?
Will the physics simulation be stepped 60 times a second?
P.S.
The milliseconds_now() function looks like this:
long long milliseconds_now()
{
    static LARGE_INTEGER s_frequency;
    static BOOL s_use_qpc = QueryPerformanceFrequency(&s_frequency);
    if (s_use_qpc) {
        LARGE_INTEGER now;
        QueryPerformanceCounter(&now);
        return (1000LL * now.QuadPart) / s_frequency.QuadPart;
    }
    else {
        return GetTickCount();
    }
}
Taken from: https://gamedev.stackexchange.com/questions/26759/best-way-to-get-elapsed-time-in-miliseconds-in-windows
If you want to limit the rendering to a maximum FPS of 60, it is very simple: each frame, just check whether the game is running too fast; if so, wait. For example:
while (timeLimitedLoop)
{
    float framedelta = timeNow - timeLast; // update timeNow each frame
    timeLast = timeNow;

    for (auto& objectOrCalculation : allItemsToProcess)
    {
        objectOrCalculation->processThisIn60thOfSecond(framedelta);
    }

    render(); // if display needed
}
Please note that if vertical sync is enabled, rendering will already be limited to the frequency of your vertical refresh (perhaps 50 or 60 Hz).
If, however, you wish to lock the logic at 60 FPS, that's a different matter: you will have to segregate your display and logic code in such a way that the logic runs at a maximum of 60 FPS, and modify the code so that you have a fixed time-interval loop and a variable time-interval loop (as above, and as sketched below). Good sources to look at are "fixed timestep" and "variable timestep" (Link 1, Link 2, and the old trusty Google search).
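A minimal sketch of that separation (updateLogic, render, and running are placeholders; milliseconds_now is the helper from the question):

const long long tickMs = 1000 / 60;   // fixed logic step: ~16 ms (integer math)
long long previous = milliseconds_now();
long long lag = 0;                    // unsimulated time we still owe

while (running)
{
    long long now = milliseconds_now();
    lag += now - previous;
    previous = now;

    // Fixed-timestep part: run the logic in constant steps until caught up
    while (lag >= tickMs)
    {
        updateLogic(tickMs / 1000.0f); // always the same dt
        lag -= tickMs;
    }

    // Variable-timestep part: render as often as we like
    render();
}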
Note on your code:
Because you sleep in one call for the whole remaining duration of the 1/60th of a second minus the already elapsed time, you can easily miss the correct timing; change the single sleep to a loop that sleeps in small steps and re-checks the elapsed time:
instead of

if (dt < limit)
{
    std::this_thread::sleep_for(std::chrono::milliseconds(limit - dt));
}

change to

while (dt < limit)
{
    // sleep in a fine-grained step (1 ms here; use whatever step you desire),
    // then re-measure, so a single oversized sleep can't overshoot the target
    std::this_thread::sleep_for(std::chrono::milliseconds(1));
    dt = milliseconds_now() - start;
}
Hope this helps; let me know if you need more info. :)
First off, I found a lot of information on this topic, but no solutions that solved the issue, unfortunately.
I'm simply trying to regulate my C++ program to run at 60 iterations per second. I've tried everything from GetClockTicks() to GetLocalTime() to help in the regulation, but every single time I run the program on my Windows Server 2008 machine, it runs slower than on my local machine and I have no clue why!
I understand that "clock"-based function calls return CPU time spent on the execution, so I went to GetLocalTime and then tried to differentiate between the start time and the stop time, then call Sleep((1000 / FPS) - millisecondExecutionTime).
My local machine is quite a bit faster than the server's CPU, so obviously the thought was that it was going off of CPU ticks, but that doesn't explain why GetLocalTime doesn't work. I've been basing this method on http://www.lazyfoo.net/SDL_tutorials/lesson14/index.php, swapping get_ticks() for all of the time-returning functions I could find on the web.
For example, take this code:
#include <Windows.h>
#include <time.h>
#include <string>
#include <iostream>
using namespace std;

int main() {
    int tFps = 60;
    int counter = 0;
    SYSTEMTIME gStart, gEnd, start_time, end_time;
    GetLocalTime( &gStart );
    bool done = false;
    while (!done) {
        GetLocalTime( &start_time );
        Sleep(10);
        counter++;
        GetLocalTime( &end_time );

        int startTimeMilli = (start_time.wSecond * 1000 + start_time.wMilliseconds);
        int endTimeMilli = (end_time.wSecond * 1000 + end_time.wMilliseconds);
        int time_to_sleep = (1000 / tFps) - (endTimeMilli - startTimeMilli);

        if (counter > 240)
            done = true;
        if (time_to_sleep > 0)
            Sleep(time_to_sleep);
    }
    GetLocalTime( &gEnd );
    cout << "Total Time: " << (gEnd.wSecond*1000 + gEnd.wMilliseconds) - (gStart.wSecond*1000 + gStart.wMilliseconds) << endl;
    cin.get();
}
For this code snippet, run on my computer (3.06 GHz), I get a total time (ms) of 3856, whereas on my server (2.53 GHz) I get 6256. So it could potentially be the speed of the processor, though the ratio of 2.53/3.06 is only 0.827, versus 3856/6271, which is 0.615.
I can't tell whether the Sleep function is doing something drastically different than expected (though I don't see why it would), or whether it is my method for getting the time (even though it should be wall-clock time in ms, not clock-cycle time). Any help would be greatly appreciated, thanks.
For one thing, Sleep's default resolution is the system timer interval (the scheduler quantum) – usually either 10 ms or 15 ms, depending on the Windows edition. To get a resolution of, say, 1 ms, you have to issue a timeBeginPeriod(1), which reprograms the timer hardware to fire (roughly) once every millisecond.
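For example, a small sketch (timeBeginPeriod/timeEndPeriod live in winmm, so link against winmm.lib; the 16 ms value is just for illustration):

#include <Windows.h>
#pragma comment(lib, "winmm.lib")

int main() {
    timeBeginPeriod(1);   // request 1 ms timer resolution...
    Sleep(16);            // ...so this wakes up after ~16 ms instead of ~20-30 ms
    timeEndPeriod(1);     // always pair with timeEndPeriod when done
}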
In your main loop you can do something like this:
int main()
{
    // Timers
    LONGLONG curTime = 0;
    LONGLONG nextTime = 0;
    int loops = 0; // frameskip counter (see the note about MAX_FRAMESKIP below)

    Timers::SWGameClock::GetInstance()->GetTime(&nextTime);

    while (true) {
        Timers::SWGameClock::GetInstance()->GetTime(&curTime);
        if (curTime > nextTime && loops <= MAX_FRAMESKIP) {
            nextTime += Timers::SWGameClock::GetInstance()->timeCount;
            // Business logic goes here and occurs based on the specified framerate
        }
    }
}
using this time library
include "stdafx.h"
LONGLONG cacheTime;
Timers::SWGameClock* Timers::SWGameClock::pInstance = NULL;
Timers::SWGameClock* Timers::SWGameClock::GetInstance ( ) {
if (pInstance == NULL) {
pInstance = new SWGameClock();
}
return pInstance;
}
Timers::SWGameClock::SWGameClock(void) {
this->Initialize ( );
}
void Timers::SWGameClock::GetTime ( LONGLONG * t ) {
// Use timeGetTime() if queryperformancecounter is not supported
if (!QueryPerformanceCounter( (LARGE_INTEGER *) t)) {
*t = timeGetTime();
}
cacheTime = *t;
}
LONGLONG Timers::SWGameClock::GetTimeElapsed ( void ) {
LONGLONG t;
// Use timeGetTime() if queryperformancecounter is not supported
if (!QueryPerformanceCounter( (LARGE_INTEGER *) &t )) {
t = timeGetTime();
}
return (t - cacheTime);
}
void Timers::SWGameClock::Initialize ( void ) {
if ( !QueryPerformanceFrequency((LARGE_INTEGER *) &this->frequency) ) {
this->frequency = 1000; // 1000ms to one second
}
this->timeCount = DWORD(this->frequency / TICKS_PER_SECOND);
}
Timers::SWGameClock::~SWGameClock(void)
{
}
with a header file that contains the following:
// Required for rendering stuff on time
#pragma once

#define TICKS_PER_SECOND 60
#define MAX_FRAMESKIP 5

namespace Timers {
    class SWGameClock
    {
    public:
        static SWGameClock* GetInstance();
        void Initialize ( void );
        DWORD timeCount;
        void GetTime ( LONGLONG* t );
        LONGLONG GetTimeElapsed ( void );
        LONGLONG frequency;
        ~SWGameClock(void);
    protected:
        SWGameClock(void);
    private:
        static SWGameClock* pInstance;
    }; // SWGameClock
} // Timers
This will ensure that your code runs at 60 FPS (or whatever you put in), though you can probably dump MAX_FRAMESKIP, as it's not truly implemented in this example! A sketch of how it would usually be wired in follows.
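For completeness, here is a hedged sketch of the classic fixed-timestep-with-frameskip loop (curTime and nextTime as in the main loop above; updateGame and render are placeholders):

while (true) {
    int loops = 0;
    Timers::SWGameClock::GetInstance()->GetTime(&curTime);

    // Run up to MAX_FRAMESKIP updates to catch up with real time, then
    // render once, so a slow machine drops frames instead of stalling
    while (curTime > nextTime && loops < MAX_FRAMESKIP) {
        updateGame(); // fixed-step logic
        nextTime += Timers::SWGameClock::GetInstance()->timeCount;
        loops++;
        Timers::SWGameClock::GetInstance()->GetTime(&curTime);
    }
    render(); // variable-rate rendering
}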
You could try a WinMain function and use the SetTimer function with a regular message loop (you can also take advantage of the filter mechanism of GetMessage(...)), in which you test for the WM_TIMER message at the requested interval, and when your counter reaches the limit, do a PostQuitMessage(0) to terminate the message loop.
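A minimal sketch of that message-loop approach (a thread timer, so no window is needed; the 240-iteration cutoff mirrors the code in the question):

#include <Windows.h>

int WINAPI WinMain(HINSTANCE, HINSTANCE, LPSTR, int)
{
    // Thread timer: posts WM_TIMER to this thread's queue roughly every 16 ms
    UINT_PTR timerId = SetTimer(NULL, 0, 1000 / 60, NULL);

    int counter = 0;
    MSG msg;
    // Filter so GetMessage only returns WM_TIMER (WM_QUIT is always delivered)
    while (GetMessage(&msg, NULL, WM_TIMER, WM_TIMER) > 0) {
        if (++counter > 240)
            PostQuitMessage(0); // GetMessage returns 0 for WM_QUIT, ending the loop
        // per-iteration work goes here
    }

    KillTimer(NULL, timerId);
    return 0;
}

Keep in mind that SetTimer has the same ~10-16 ms granularity as Sleep, so this is more structured, not more precise.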
For a duty cycle that fast, you can use a high-accuracy timer (like QueryPerformanceCounter) and a busy-wait loop.
If you had a much lower duty cycle, but still wanted precision, then you could Sleep for part of the time and then eat up the leftover time with a busy-wait loop, as sketched below.
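A hedged sketch of that hybrid, using portable std::chrono rather than raw QueryPerformanceCounter (the 2 ms safety margin is an assumption to tune per machine):

#include <chrono>
#include <thread>

// Wait until `deadline`: sleep for most of the interval (cheap on the CPU),
// then busy-wait the last couple of milliseconds (precise).
void preciseWaitUntil(std::chrono::steady_clock::time_point deadline)
{
    using namespace std::chrono;
    const auto spinMargin = milliseconds(2); // assumed margin; tune as needed

    auto sleepUntil = deadline - spinMargin;
    if (steady_clock::now() < sleepUntil)
        std::this_thread::sleep_until(sleepUntil);

    while (steady_clock::now() < deadline)
        ; // burn the remainder for accuracy
}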
Another option is to use something like DirectX to sync yourself to the VSync interrupt (which is almost always 60 Hz). This can make a lot of sense if you're coding a game or an A/V presentation.
Windows is not a real-time OS, so there will never be a perfect way to do something like this, as there's no guarantee your thread will be scheduled to run exactly when you need it to.
Note that per the remarks for Sleep, the actual amount of time will be at least one "tick", and possibly one whole "tick" longer than the delay you requested before the thread is scheduled to run again (and then we have to assume the thread is scheduled at all). The "tick" can vary a lot depending on hardware and the version of Windows. It is commonly in the 10-15 ms range, and I've seen it as bad as 19 ms. For 60 Hz, you need 16.666 ms per iteration, so this is obviously not nearly precise enough to give you what you need.
What about rendering (iterating) based on the time elapsed between the rendering of each frame? Consider creating a void render(double timePassed) function and rendering depending on the timePassed parameter instead of putting the program to sleep.
Imagine, for example, you want to render a ball falling or bouncing. You know its speed, acceleration, and all the other physics you need. Calculate the position of the ball based on timePassed and all the other physics parameters (speed, acceleration, etc.), as in the sketch below.
Or, if you prefer, you could just skip the render() function execution if the time passed is too small, instead of putting the program to sleep.
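A small sketch of that idea (the ball and its physics constants are made up for illustration):

#include <chrono>

struct Ball {
    double y = 100.0;  // height (made-up units)
    double vy = 0.0;   // vertical velocity
};

// Advance the simulation by however much real time actually passed,
// instead of assuming a fixed frame duration.
void render(Ball& ball, double timePassed) // timePassed in seconds
{
    const double g = -9.81;        // gravity
    ball.vy += g * timePassed;
    ball.y += ball.vy * timePassed;
    if (ball.y < 0.0) {            // bounce off the floor
        ball.y = 0.0;
        ball.vy = -ball.vy * 0.8;  // lose some energy
    }
    // ... draw the ball at ball.y here ...
}

int main()
{
    Ball ball;
    auto last = std::chrono::steady_clock::now();
    while (true) {
        auto now = std::chrono::steady_clock::now();
        std::chrono::duration<double> dt = now - last;
        last = now;
        render(ball, dt.count());
    }
}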