So I am trying to program a simple tick-based game. I write in C++ on a linux machine. The code below illustrates what I'm trying to accomplish.
for (unsigned int i = 0; i < 40; ++i)
{
functioncall();
sleep(1000); // wait 1 second for the next function call
}
Well, this doesn't work. It seems that it sleeps for 40 seconds, then prints out whatever the result is from the function call.
I also tried creating a new function called delay, and it looked like this:
void delay(int seconds)
{
time_t start, current;
time(&start);
do
{
time(&current);
}
while ((current - start) < seconds);
}
Same result here. Anybody?
To reiterate what has already been stated by others, with a concrete example:
Assuming you're using std::cout for output, you should call std::cout.flush(); right before the sleep call.
sleep(n) waits for n seconds, not n milliseconds or microseconds.
Also, as mentioned by Bart, if you're writing to stdout, you should flush the stream after each write - otherwise you won't see anything until the buffer is flushed.
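For reference, a minimal sketch of the corrected loop from the question, assuming the goal is one tick per second; functioncall here is just a stand-in that prints the tick number, and std::endl flushes so each tick appears immediately:
#include <iostream>
#include <unistd.h> // sleep()

void functioncall(unsigned int tick) {
    std::cout << "tick " << tick;   // stand-in for the per-tick game logic
}

int main() {
    for (unsigned int i = 0; i < 40; ++i) {
        functioncall(i);
        std::cout << std::endl;     // std::endl flushes, so output is visible now, not at exit
        sleep(1);                   // sleep() takes whole seconds: wait 1 second per tick
    }
}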
So I am trying to program a simple tick-based game. I write in C++ on a linux machine.
If functioncall() may take a considerable time, then your ticks won't be equal if you sleep for the same amount of time each iteration.
You might be trying to do this:
while 1: # mainloop
    functioncall()
    tick()  # wait for the next tick
Here tick() sleeps approximately delay - time_it_takes_for(functioncall), i.e., the longer functioncall() takes, the less time tick() sleeps.
sleep() sleeps an integer number of seconds. You might need a finer time resolution. You could use clock_nanosleep() for that.
Example Clock::tick() implementation
// $ g++ *.cpp -lrt && time ./a.out
#include <iostream>
#include <stdio.h>  // perror()
#include <stdlib.h> // ldiv()
#include <time.h>   // clock_nanosleep()

namespace {

class Clock {
    const long delay_nanoseconds;
    bool running;
    struct timespec time;
    const clockid_t clock_id;
public:
    explicit Clock(unsigned fps) : // specify frames per second
        delay_nanoseconds(1e9/fps), running(false), time(),
        clock_id(CLOCK_MONOTONIC) {}

    void tick() {
        if (clock_nanosleep(clock_id, TIMER_ABSTIME, nexttick(), 0)) {
            // interrupted by a signal handler or an error
            perror("clock_nanosleep");
            exit(EXIT_FAILURE);
        }
    }
private:
    struct timespec* nexttick() {
        if (not running) { // initialize `time`
            running = true;
            if (clock_gettime(clock_id, &time)) {
                // process errors
                perror("clock_gettime");
                exit(EXIT_FAILURE);
            }
        }
        // increment `time`
        // time += delay_nanoseconds
        ldiv_t q = ldiv(time.tv_nsec + delay_nanoseconds, 1000000000);
        time.tv_sec += q.quot;
        time.tv_nsec = q.rem;
        return &time;
    }
};

}

int main() {
    Clock clock(20);
    char arrows[] = "\\|/-";
    for (int nframe = 0; nframe < 100; ++nframe) { // mainloop
        // process a single frame
        std::cout << arrows[nframe % (sizeof(arrows)-1)] << '\r' << std::flush;
        clock.tick(); // wait for the next tick
    }
}
Note: I've used std::flush to update the output immediately.
If you run the program it should take about 5 seconds (100 frames, 20 frames per second).
I guess on Linux you have to use usleep(); it's declared in <unistd.h>, not <ctime>.
On Windows you can use Sleep() from <windows.h>.
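A minimal sketch of the usleep() approach (Linux). usleep takes microseconds; note that values of one second or more aren't guaranteed to be portable, so the wait below is kept under a second:
#include <unistd.h> // usleep()

int main() {
    for (int tick = 0; tick < 10; ++tick) {
        // per-tick work would go here
        usleep(500 * 1000); // 500,000 microseconds = half a second
    }
}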
I'm making a whack-a-mole game for class and I'm trying to make my mole1 sprite appear every 3 seconds, but I can't figure out how to get it to work. Right now I have the game run for just 5 seconds; in the end it will be 60. Here is the main for the project. I need to change mole1.visible to true so he shows up. After I get this working I will add the 5 other moles to each hole.
EDIT
For some reason I can't get chrono to compile, but I figured out how to make the mole appear the first time; I can't get him to disappear afterwards, though. I used modulo to make it appear and I thought doing the opposite would make it disappear, but it doesn't:
if ((60-now)%4==3) {
    mole1.visible=true;
    mole1.paint_sprite(myscreen);
}
if ((60-now)%4!=3) {
    mole1.visible=false;
    mole1.paint_sprite(myscreen);
}
Rest of code:
using namespace std; // allows us to avoid std::cout
#include <iostream> // standard C++ include
#include <curses.h> // this is required to use the Unix curses libraries
#include "screen.cpp" // screen class
#include "sprite2.cpp" // generic sprite class
#include "nonblocking.h" // facilitates non-blocking keyboard events
#include <unistd.h> // used by sleep
#include <time.h>
long start_time, now;
int i;
int main() // main function
{
char c; // used to get character input from keyboard
screen myscreen; // screen data structure declaration
char aimage[80][24]={' '}; // fills in entire array with spaces
long start_time, now;
int i; // used for counters
int loop=0;
aimage[1][0]='_';
aimage[2][0]='_';
aimage[0][1]='(';
aimage[1][1]='_';
aimage[2][1]='(';
aimage[3][1]=')';
aimage[1][2]='|';
aimage[2][2]='|';
char bgimage[80][24]={' '}; // fills in entire array with spaces
bgimage[3][0]='"';
bgimage[4][0]='"';
bgimage[5][0]='"';
bgimage[2][0]='-';
bgimage[6][0]='-';
bgimage[1][0]='.';
bgimage[7][0]='.';
bgimage[0][1]='/';
bgimage[8][1]='\\';
bgimage[0][2]='|';
bgimage[8][2]='|';
bgimage[0][3]='\\';
bgimage[8][3]='/';
bgimage[1][4]='"';
bgimage[2][4]='-';
bgimage[3][4]='.';
bgimage[4][4]='.';
bgimage[5][4]='.';
bgimage[6][4]='-';
bgimage[7][4]='"';
char cimage[80][24]={' '}; // fills in entire array with spaces
cimage[1][0]='c';
cimage[2][0]='.';
cimage[3][0]='_';
cimage[4][1]='\'';
cimage[5][1]='-';
cimage[6][1]='.';
cimage[0][1]='C';
cimage[3][1]='o';
cimage[5][2]='\'';
cimage[4][2]='.';
cimage[3][2]='.';
cimage[2][3]='-';
cimage[1][3]='-';
cimage[0][2]='(';
char dimage[80][24]={' '}; // fills in entire array with spaces
dimage[0][0]='6';
dimage[1][0]='0';
sprite hammer(22,10,3,4,aimage,&myscreen);
sprite hole1(20,3,5,9,bgimage,&myscreen);
sprite hole2(40,3,5,9,bgimage,&myscreen);
sprite hole3(60,3,5,9,bgimage,&myscreen);
sprite hole4(20,15,5,9,bgimage,&myscreen);
sprite hole5(40,15,5,9,bgimage,&myscreen);
sprite hole6(60,15,5,9,bgimage,&myscreen);
sprite mole1(21,4,4,7,cimage,&myscreen);
sprite timer(5,10,1,2,dimage, &myscreen);
mole1.visible=false; // mole stays hidden until it is supposed to appear
hole1.paint_sprite(myscreen);
hole2.paint_sprite(myscreen);
hole3.paint_sprite(myscreen);
hole4.paint_sprite(myscreen);
hole5.paint_sprite(myscreen);
hole6.paint_sprite(myscreen);
hammer.paint_sprite(myscreen);
mole1.paint_sprite(myscreen);
timer.paint_sprite(myscreen);
myscreen.display(); // cause the screen to paint for the first time
start_time=(unsigned)time(NULL);
for(;;) // infinite loop
{
now = (unsigned)time(NULL)-start_time;
if((5-now)<=0) // ends game after 5 seconds (will be 60 in the final version)
{
endwin(); // clean up curses before exiting
return(1);
}
loop++;
if (kbhit())
{
c=getchar(); // get one character from the keyboard
tcflush(0, TCIFLUSH); // system call to flush the keyboard buffer
if (c=='a') // if a, move hammer left
{
hammer.move_sprite(-20,0,myscreen);
}
if (c=='d') // if d, move hammer right
{
hammer.move_sprite(20,0,myscreen);
}
if (c=='s') // if s, move hammer down
{
hammer.move_sprite(0,10,myscreen);
}
if (c=='w') // if w, move hammer up
{
hammer.move_sprite(0,-10,myscreen);
}
}
myscreen.display(); // refresh the screen
}
endwin(); // clean up curses (really never executed)
return(1); // end program (also, never executed)
}
You can use the global loop to calculate the time difference and then set visible = true once 3.0 seconds have passed.
Like here:
#include <iostream>
#include <chrono>
#include <cstdint>  // uint32_t
#include <unistd.h> // usleep()

const float TIME_TO_SHOW = 3.0f;

// Function to update all objects
void Update( float dt )
{
    static float DeltaCounter = 0.0f;
    DeltaCounter += dt;
    if ( DeltaCounter > TIME_TO_SHOW )
    {
        DeltaCounter -= TIME_TO_SHOW; // keep the overflow
        // Set the object visible here, e.g. your mole1.visible = true;
    }
}

int main()
{
    typedef std::chrono::duration<float> FloatSeconds;
    auto OldMs = std::chrono::system_clock::now().time_since_epoch();
    const uint32_t SleepMicroseconds = 100;

    // Global loop
    while (true)
    {
        auto CurMs = std::chrono::system_clock::now().time_since_epoch();
        auto DeltaMs = CurMs - OldMs;
        OldMs = CurMs;

        // Cast the delta time to float seconds
        auto DeltaFloat = std::chrono::duration_cast<FloatSeconds>(DeltaMs);
        std::cout << "Seconds passed since last update: " << DeltaFloat.count() << " seconds" << std::endl;

        // Update all objects with the elapsed time as a float value
        Update( DeltaFloat.count() );

        // Sleep to give time for system interaction
        usleep(SleepMicroseconds);

        // Any other actions to calculate can be here
        // ...
    }
    return 0;
}
If the behaviour is constant you can use a simple loop with the sleep() function, which suspends your process for the given number of seconds:
const int32_t CountObjectToShow = 10;
const unsigned int TIME_TO_SHOW = 3;

for ( int32_t i = 0; i < CountObjectToShow; i++ )
{
    sleep(TIME_TO_SHOW);
    // Set the object visible here, e.g. your mole1.visible = true;
    std::cout << "Object showed" << std::endl;
}
The version with the global loop is more flexible and lets you do many other useful things at the same time.
Well, in order to show something every certain number of seconds you need a variable referencing the start time. Then you check whether the delta between the current time and the stored time is greater than a certain amount.
A good tool for this kind of task would be a clock class.
Clock.h
#ifndef CLOCK_H
#define CLOCK_H

#include <chrono>

template<typename Clock_t = std::chrono::steady_clock>
class Clock
{
public:
    using TimePoint = decltype(Clock_t::now());

private:
    TimePoint m_start;

public:
    Clock() : m_start(Clock_t::now()) {
    }

    ~Clock() {
    }

    void reset() {
        m_start = Clock_t::now();
    }

    float getSeconds() const {
        return std::chrono::duration_cast<std::chrono::duration<float>>(Clock_t::now() - m_start).count();
    }

    long long getMilliseconds() const {
        return std::chrono::duration_cast<std::chrono::milliseconds>(Clock_t::now() - m_start).count();
    }
};

#endif
Example
#include <iostream>
#include "Clock.h"
int main() {
    Clock<> clock;
    constexpr long long spawnRate = 3000;
    while (true) {
        if (clock.getMilliseconds() >= spawnRate) {
            std::cout << "SPAWN\n";
            clock.reset();
        }
    }
}
Thus, for your case you would have a clock for the game, a clock for the mole spawner, etc.
During the game you would just simply check if the clock's current elapsed time is greater than a certain delta. If that is the case, do some other things.
Also, make sure to correctly reset the clocks, such as when you are resetting the mole spawn timer, and when starting the game.
This should handle the timing of things. If you have other problems, then you should ask about those.
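As a rough sketch of how that could look in the whack-a-mole loop, using the Clock class above; the names gameClock and spawnClock and the 60-second game length are just assumptions for illustration:
#include "Clock.h"

int main() {
    Clock<> gameClock;   // measures total game time
    Clock<> spawnClock;  // measures time since the last mole spawn

    while (gameClock.getSeconds() < 60.0f) {        // end the game after 60 seconds
        if (spawnClock.getMilliseconds() >= 3000) { // every 3 seconds...
            // mole1.visible = true;  (toggle/spawn the mole here)
            spawnClock.reset();                     // ...and restart the spawn timer
        }
        // move the hammer, repaint the screen, etc.
    }
}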
I've encountered a huge problem! I'm making a C++ zombie game and it works perfectly besides the barrier part. I want the zombies to come to the barrier, wait around 5 seconds, and then break through it. Now I don't think you need my whole code for this since it's just a timer, but if you do, let me know! Basically, I tried many timers AND the Sleep command, but when I use them the zombies stay at the barrier, yet everything else freezes until the timer finishes. For example, if a zombie is at the barrier and I use a timer for 5 seconds, the zombie stays at the barrier for 5 seconds, but so does everything else; nothing else can move for 5 seconds! Is there any way I could use a sleep command for only a CERTAIN part of my code? Here is one of the few timers I used.
int Timer()
{
    int s = 0;
    int m = 0;
    int h = 0;
    while (true)
    {
        CPos(12,58);
        cout << "Timer: ";
        cout << h/3600 << ":" << m/60 << ":" << s;
        if (s == 59) s = -1;
        if (m == 3599) m = -1; // 3599 = 60*60 - 1
        s++;
        m++;
        h++;
        Sleep(1000);
        cout << "\b\b\b";
    }
}
This one involves a Sleep command. I also tried a timer that just counts down with while (number > 0) --number, and it works, but it still freezes everything else in my program!
If you need anything, Let me know!
Unless you have EACH zombie and everything else running on different threads, calling Sleep will pause the entire application for x milliseconds... You need to stop the zombie a different way, namely by just not moving him until the time has passed, while still updating the other entities as normal (don't use sleep).
EDIT:
You can't just create a timer and then wait until that timer is done. At the time when the zombie needs to stop moving, you have to 'remember' the current time, but continue on. Then each time you get back to that zombie to update its position, you check whether he has a pause timer. If he does, you compare what you 'remembered' against the current time and check whether he has paused long enough... here is some pseudo code:
#include <ctime>

class Zombie {
private:
    int m_xPos;
    time_t m_rememberedTime;

public:
    Zombie() {
        this->m_xPos = 0;
        this->m_rememberedTime = 0;
    }

    void Update() {
        if (CheckPaused()) {
            // bail out before we move this zombie if he is paused at a barrier.
            return;
        }
        // If it's not paused, then move him as normal.
        this->m_xPos += 1; // or whatever.
        if (ZombieHitBarrier()) {
            PauseZombieAtBarrier();
        }
    }

    bool CheckPaused() {
        if (this->m_rememberedTime > 0) {
            // If we have a remembered time, calculate the elapsed time.
            time_t currentTime;
            time(&currentTime);
            time_t elapsed = currentTime - this->m_rememberedTime;
            if (elapsed > 5) {
                // 5 seconds have gone by, so clear the remembered time and continue on to return false.
                this->m_rememberedTime = 0;
            } else {
                // 5 seconds have not gone by yet, so return true: we are still paused.
                return true;
            }
        }
        // Either no timer exists, or the timer has just finished; return false, we are not paused.
        return false;
    }

    // Call this when the zombie hits a wall.
    void PauseZombieAtBarrier() {
        // Store the current time in a variable for later use.
        time(&this->m_rememberedTime);
    }
};
I have a piece of code that I use to test various containers (e.g. deque and a circular buffer) when passing data from a producer (thread 1) to a consumer (thread 2). Each piece of data is represented by a struct with a pair of timestamps: the first is taken before the push in the producer, and the second is taken when the data is popped by the consumer.
The container is protected with a pthread spinlock.
The machine runs redhat 5.5 with 2.6.18 kernel (old!), it is a 4-core system with hyperthreading disabled. gcc 4.7 with -std=c++11 flag was used in all tests.
Producer acquires the lock, timestamps the data and pushes it into the queue, unlocks and sleeps in a busy loop for 2 microseconds (the only reliable way I found to sleep for precisely 2 micros on that system).
The consumer locks, pops the data, timestamps it and generates some statistics (running mean delay and standard deviation). The stats are printed every 5 seconds (M is the mean, M2 is the std dev) and then reset. I used gettimeofday() to obtain the timestamps, which means that the mean delay number can be thought of as the percentage of delays that exceed 1 microsecond.
Most of the time the output looks like this:
CNT=2500000 M=0.00935 M2=0.910238
CNT=2500000 M=0.0204112 M2=1.57601
CNT=2500000 M=0.0045016 M2=0.372065
but sometimes (probably 1 trial out of 20) like this:
CNT=2500000 M=0.523413 M2=4.83898
CNT=2500000 M=0.558525 M2=4.98872
CNT=2500000 M=0.581157 M2=5.05889
(note the mean number is much worse than in the first case, and it never recovers as the program runs).
I would appreciate thoughts on why this could happen. Thanks.
#include <iostream>
#include <string.h>
#include <stdexcept>
#include <sys/time.h>
#include <deque>
#include <thread>
#include <cstdint>
#include <cmath>
#include <unistd.h>
#include <xmmintrin.h> // _mm_pause()
#include <pthread.h>   // pthread_spinlock_t, pthread_spin_*()
int64_t timestamp() {
struct timeval tv;
gettimeofday(&tv, 0);
return 1000000L * tv.tv_sec + tv.tv_usec;
}
//running mean and a second moment
struct StatsM2 {
StatsM2() {}
double m = 0;
double m2 = 0;
long count = 0;
inline void update(long x, long c) {
count = c;
double delta = x - m;
m += delta / count;
m2 += delta * (x - m);
}
inline void reset() {
m = m2 = 0;
count = 0;
}
inline double getM2() { // running second moment
return (count > 1) ? m2 / (count - 1) : 0.;
}
inline double getDeviation() {
return std::sqrt(getM2() );
}
inline double getM() { // running mean
return m;
}
};
// pause for usec microseconds using busy loop
int64_t busyloop_microsec_sleep(unsigned long usec) {
int64_t t, tend;
tend = t = timestamp();
tend += usec;
while (t < tend) {
t = timestamp();
}
return t;
}
struct Data {
Data() : time_produced(timestamp() ) {}
int64_t time_produced;
int64_t time_consumed;
};
int64_t sleep_interval = 2;
StatsM2 statsm2;
std::deque<Data> queue;
bool producer_running = true;
bool consumer_running = true;
pthread_spinlock_t spin;
void producer() {
producer_running = true;
while(producer_running) {
pthread_spin_lock(&spin);
queue.push_back(Data() );
pthread_spin_unlock(&spin);
busyloop_microsec_sleep(sleep_interval);
}
}
void consumer() {
int64_t count = 0;
int64_t print_at = 1000000/sleep_interval * 5;
Data data;
consumer_running = true;
while (consumer_running) {
pthread_spin_lock(&spin);
if (queue.empty() ) {
pthread_spin_unlock(&spin);
// _mm_pause();
continue;
}
data = queue.front();
queue.pop_front();
pthread_spin_unlock(&spin);
++count;
data.time_consumed = timestamp();
statsm2.update(data.time_consumed - data.time_produced, count);
if (count >= print_at) {
std::cerr << "CNT=" << count << " M=" << statsm2.getM() << " M2=" << statsm2.getDeviation() << "\n";
statsm2.reset();
count = 0;
}
}
}
int main(void) {
if (pthread_spin_init(&spin, PTHREAD_PROCESS_PRIVATE) < 0)
exit(2);
std::thread consumer_thread(consumer);
std::thread producer_thread(producer);
sleep(40);
consumer_running = false;
producer_running = false;
consumer_thread.join();
producer_thread.join();
return 0;
}
EDIT:
I believe that 5 below is the only thing that can explain 1/2 second latency. When on the same core, each would run for a long time and only then switch to the other.
The rest of the things on the list are too small to cause a 1/2 second delay.
You can use pthread_setaffinity_np to pin your threads to specific cores. You can try different combinations and see how performance changes.
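For example, a minimal sketch of pinning the two std::threads from the test program to specific cores (Linux/glibc, compiled with g++; the core numbers 1 and 2 are arbitrary):
#include <pthread.h>
#include <sched.h>   // cpu_set_t, CPU_ZERO, CPU_SET
#include <thread>
#include <cstdio>

// Pin a std::thread to one core by restricting its CPU affinity mask.
void pin_to_core(std::thread& t, int core) {
    cpu_set_t cpuset;
    CPU_ZERO(&cpuset);
    CPU_SET(core, &cpuset);
    int err = pthread_setaffinity_np(t.native_handle(), sizeof(cpu_set_t), &cpuset);
    if (err != 0)
        std::fprintf(stderr, "pthread_setaffinity_np failed: %d\n", err);
}

// Usage (after creating the threads in main):
//   pin_to_core(consumer_thread, 1);
//   pin_to_core(producer_thread, 2);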
EDIT #2:
More things you should take care of: (who said testing was simple...)
1. Make sure the consumer is already running when the producer starts producing. Not too important in your case as the producer is not really producing in a tight loop.
2. This is very important: you divide by count every time, which is not the right thing to do for your stats. This means that the first measurement in every stats window weighs a lot more than the last. To measure the median you would have to collect all the values; measuring the average and min/max, without collecting all the numbers, should give you a good enough picture of the latency.
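For instance, a small sketch of tracking mean, min and max without storing every sample (independent of the StatsM2 struct above; the names are just illustrative):
#include <algorithm>
#include <cstdint>

// Track mean, min and max of the observed delays without keeping every sample.
struct MinMaxMean {
    double  sum   = 0;
    long    count = 0;
    int64_t min_v = INT64_MAX;
    int64_t max_v = INT64_MIN;

    void update(int64_t x) {
        sum += x;
        ++count;
        min_v = std::min(min_v, x);
        max_v = std::max(max_v, x);
    }
    double mean() const { return count ? sum / count : 0.0; }
};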
It's not surprising, really.
1. The time is taken in Data(), but then the container spends time calling malloc.
2. Are you running 64 bit or 32 bit? In 32 bit, gettimeofday is a system call, while in 64 bit it's a VDSO that doesn't go into the kernel... you may want to time gettimeofday itself and record the variance, or roll your own using rdtsc (see the sketch after this list).
The best would be to use cycles instead of microseconds, because microseconds are really too coarse for this scenario; the rounding to microseconds skews things a lot at such a small scale.
3. Are you guaranteed not to get preempted between producer and consumer? I guess not. But this should not happen very frequently on a box dedicated to testing...
4. Is it 4 cores on a single socket or 2? If it's a 2-socket box, you want to have the 2 threads on the same socket, or you pay (at least) double for data transfer.
5. Make sure the threads are not running on the same core.
6. If the Data you transfer and the additional data (container node) are sharing cache lines (kind of likely) with other Data+node, the producer would be delayed by the consumer when it writes to the consumed timestamp. This is called false sharing. You can eliminate this by padding/aligning to 64 bytes and using an intrusive container.
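A minimal sketch of the rdtsc idea from point 2 (assumes x86/x86-64 and a GCC-compatible compiler; cycle counts are only comparable across cores if the TSC is invariant):
#include <x86intrin.h> // __rdtsc()
#include <cstdint>

// Read the CPU's time-stamp counter: a raw cycle count, far cheaper than gettimeofday().
inline uint64_t cycles_now() {
    return __rdtsc();
}

// Example: measure the cost of a code section in cycles.
//   uint64_t t0 = cycles_now();
//   /* section under test */
//   uint64_t elapsed_cycles = cycles_now() - t0;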
gettimeofday is not a good way to profile computation overhead. It is the wall clock, and your computer is multiprocessing. Even if you think you are not running anything else, the OS scheduler always has some other activity to keep the system running. To profile your process's overhead you have to at least raise the priority of the process you are profiling. Also, use a high-resolution timer or CPU ticks to do the timing measurement.
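As a rough sketch of both suggestions on Linux (real-time scheduling needs root or CAP_SYS_NICE; the priority value 10 is arbitrary):
#include <sched.h>   // sched_setscheduler()
#include <time.h>    // clock_gettime()
#include <cstdio>
#include <cstdint>

int main() {
    // Raise the process to a real-time scheduling class so the OS preempts it less often.
    sched_param sp = {};
    sp.sched_priority = 10;
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) // 0 = this process
        std::perror("sched_setscheduler");

    // Use a high-resolution monotonic clock instead of the wall clock.
    timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    int64_t nanos = int64_t(ts.tv_sec) * 1000000000LL + ts.tv_nsec;
    std::printf("monotonic time: %lld ns\n", (long long)nanos);
    return 0;
}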
I am relatively new to C++, so I don't have a huge amount of experience. I have learned Python, and I am trying to make an improved version, in C++, of a program I wrote in Python. However, I want it to work in real time, so I need to set the speed of a while loop. I'm sure there is an answer, but I couldn't find it. I want something comparable to this code:
rate(timeModifier * (1/dt))
This was the code I used in Python. I can set a variable dt to make calculations more precise, and timeModifier to double or triple the speed (1 sets it to realtime). This means that the program will go through the loop 1/dt times per second. I understand I can include time.h in the header, but I guess I am too new to C++ to understand how to translate this to my needs.
You could write your own timer class:
#include <ctime>

class Timer {
private:
    unsigned long startTime;
public:
    void start() {
        startTime = clock();
    }
    unsigned long elapsedTime() {
        return ((unsigned long) clock() - startTime) / CLOCKS_PER_SEC;
    }
    bool isTimeout(unsigned long seconds) {
        return elapsedTime() >= seconds;
    }
};

int main()
{
    unsigned long dt = 10; // in seconds
    Timer t;
    t.start();
    while (true)
    {
        if (t.elapsedTime() < dt)
        {
            // do something to pass time, as a busy-wait or sleep
        }
        else
        {
            // do something else
            t.start(); // reset the timer
        }
    }
}
Note that busy-waits are discouraged, since they hog the CPU. If you don't need to do anything, use the Sleep command (Windows) or usleep (Linux).
You can't do it the same way in C++. You need to manually call some kind of sleep function in the calculation loop: Sleep on Windows or usleep on *NIX.
It's been a while since I've done something like this, but something like this will work:
#include <time.h>

time_t t2, t1 = time(NULL);
while (CONDITIONS)
{
    t2 = time(NULL);
    if (difftime(t2, t1) > timeModifier)
    {
        // DO the stuff!
        t1 = time(NULL);
    }
}
I should note, however, that I'm not familiar with the precision of this method, I think it measures the difference in seconds.
If you need something more precise, use the clock() function, which returns the processor time the program has used, measured in ticks of CLOCKS_PER_SEC (so it can resolve much finer than a second, though its actual granularity is platform-dependent).
Perhaps something like this:
#include <time.h>
clock_t t2, t1 = clock();
while (CONDITIONS)
{
    t2 = clock();
    if ((t2 - t1) > someTimeElapsed * timeModifier)
    {
        // DO the stuff!
        t1 = clock();
    }
}
Update:
You can even yield the CPU to other threads and processes by adding this after the end of the if statement:
else
{
usleep(10000); //sleep for ten milliseconds (chosen because of precision on clock())
}
Depending on the accuracy you need, and your platform, you could use usleep. This allows you to set the pause time down to microseconds:
#include <unistd.h>
int usleep(useconds_t useconds);
Remember that your loop will always take longer than this because of the inherent processing time of the rest of the loop, but it's a start. For anything more accurate, you'd probably need to look at timer-based callbacks.
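A minimal sketch of what a timer-based callback could look like with POSIX per-process timers (Linux; may need -lrt on older glibc; the 10 ms interval and the on_tick name are just placeholders):
#include <signal.h>  // sigevent, SIGEV_THREAD
#include <time.h>    // timer_create(), timer_settime()
#include <unistd.h>  // sleep()
#include <cstdio>

// Callback invoked by the timer in its own thread on each expiry.
static void on_tick(union sigval) {
    std::puts("tick");
}

int main() {
    struct sigevent sev = {};
    sev.sigev_notify = SIGEV_THREAD;        // run on_tick in a new thread, no signals involved
    sev.sigev_notify_function = on_tick;

    timer_t timerid;
    if (timer_create(CLOCK_MONOTONIC, &sev, &timerid) != 0) {
        std::perror("timer_create");
        return 1;
    }

    struct itimerspec its = {};
    its.it_value.tv_nsec    = 10000000L;    // first expiry after 10 ms
    its.it_interval.tv_nsec = 10000000L;    // then every 10 ms
    if (timer_settime(timerid, 0, &its, NULL) != 0) {
        std::perror("timer_settime");
        return 1;
    }

    sleep(3);                               // let the callback fire for a few seconds
    return 0;
}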
You should really create a new thread and have it do the timing so that it remains unaffected by the processing work done in the loop.
WARNING: Pseudo code... just to give you an idea of how to start.
Thread* tThread = CreateTimerThread(1000);
tThread->run();
while( conditionNotMet() )
{
    tThread->waitForTimer();
    doWork();
}
CreateTimerThread() should return the thread object you want, and run would be something like:
run()
{
    while( false == shutdownLatch() )
    {
        Sleep( timeout );
        pulseTimerEvent();
    }
}

waitForTimer()
{
    WaitForSingleObject( m_handle );
    return;
}
Under Windows you can use QueryPerformanceCounter; while polling the time (e.g. within another while loop), call Sleep(0) to allow other threads to continue operation.
Remember that Sleep is highly inaccurate. For full control just run a loop without operations, but then you'll use 100% of the CPU. To relax the strain on the CPU you can call Sleep(10), etc.
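A rough sketch of that polling pattern (Windows; the 60 Hz target and the 240-frame count are only illustrative):
#include <windows.h>

int main() {
    LARGE_INTEGER freq, next, now;
    QueryPerformanceFrequency(&freq);          // counter ticks per second
    QueryPerformanceCounter(&next);
    const LONGLONG step = freq.QuadPart / 60;  // one 60 Hz frame, in counter ticks

    for (int frame = 0; frame < 240; ++frame) {
        next.QuadPart += step;
        // per-frame work goes here
        do {
            Sleep(0);                          // yield to other threads while we wait
            QueryPerformanceCounter(&now);
        } while (now.QuadPart < next.QuadPart);
    }
    return 0;
}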
First off, I found a lot of information on this topic, but no solutions that solved the issue unfortunately.
I'm simply trying to regulate my C++ program to run at 60 iterations per second. I've tried everything from GetClockTicks() to GetLocalTime() to help in the regulation but every single time I run the program on my Windows Server 2008 machine, it runs slower than on my local machine and I have no clue why!
I understand that "clock" based function calls return CPU time spent on the execution, so I went to GetLocalTime and then tried to differentiate between the start time and the stop time, then call Sleep((FPS / 1000) - millisecondExecutionTime)
My local machine is quite faster than the servers CPU so obviously the thought was that it was going off of CPU ticks, but that doesn't explain why the GetLocalTime doesn't work. I've been basing this method off of http://www.lazyfoo.net/SDL_tutorials/lesson14/index.php changing the get_ticks() with all of the time returning functions I could find on the web.
For example take this code:
#include <Windows.h>
#include <time.h>
#include <string>
#include <iostream>
using namespace std;
int main() {
    int tFps = 60;
    int counter = 0;
    SYSTEMTIME gStart, gEnd, start_time, end_time;
    GetLocalTime( &gStart );
    bool done = false;
    while (!done) {
        GetLocalTime( &start_time );
        Sleep(10);
        counter++;
        GetLocalTime( &end_time );

        int startTimeMilli = (start_time.wSecond * 1000 + start_time.wMilliseconds);
        int endTimeMilli = (end_time.wSecond * 1000 + end_time.wMilliseconds);
        int time_to_sleep = (1000 / tFps) - (endTimeMilli - startTimeMilli);

        if (counter > 240)
            done = true;
        if (time_to_sleep > 0)
            Sleep(time_to_sleep);
    }
    GetLocalTime( &gEnd );
    cout << "Total Time: " << (gEnd.wSecond*1000 + gEnd.wMilliseconds) - (gStart.wSecond*1000 + gStart.wMilliseconds) << endl;
    cin.get();
}
For this code snippet, run on my computer (3.06 GHz) I get a total time (ms) of 3856 whereas on my server (2.53 GHz) I get 6256. So it potentially could be the speed of the processor though the ratio of 2.53/3.06 is only .826797386 versus 3856/6271 is .614893956.
I can't tell if the Sleep function is doing something drastically different than expected (though I don't see why it would), or if it is my method for getting the time (even though it should be in world time (ms), not clock-cycle time). Any help would be greatly appreciated, thanks.
For one thing, Sleep's default resolution is the computer's quota length - usually either 10ms or 15ms, depending on the Windows edition. To get a resolution of, say, 1ms, you have to issue a timeBeginPeriod(1), which reprograms the timer hardware to fire (roughly) once every millisecond.
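A small sketch of that approach; timeBeginPeriod/timeEndPeriod come from the multimedia timer API, so you link against winmm.lib:
#include <windows.h>
#include <mmsystem.h>   // timeBeginPeriod / timeEndPeriod (link with winmm.lib)

int main() {
    timeBeginPeriod(1);     // request ~1 ms timer resolution
    for (int frame = 0; frame < 240; ++frame) {
        // per-frame work here
        Sleep(16);          // now actually close to 16 ms instead of a 10-15 ms multiple
    }
    timeEndPeriod(1);       // always undo the request when done
    return 0;
}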
In your main loop you can do something like this:
int main()
{
    // Timers
    LONGLONG curTime = 0;
    LONGLONG nextTime = 0;
    int loops = 0; // frame-skip counter (not fully used in this example)
    Timers::SWGameClock::GetInstance()->GetTime(&nextTime);
    while (true) {
        Timers::SWGameClock::GetInstance()->GetTime(&curTime);
        if ( curTime > nextTime && loops <= MAX_FRAMESKIP ) {
            nextTime += Timers::SWGameClock::GetInstance()->timeCount;
            // Business logic goes here and occurs based on the specified framerate
        }
    }
}
using this time library
#include "stdafx.h"
LONGLONG cacheTime;
Timers::SWGameClock* Timers::SWGameClock::pInstance = NULL;

Timers::SWGameClock* Timers::SWGameClock::GetInstance ( ) {
    if (pInstance == NULL) {
        pInstance = new SWGameClock();
    }
    return pInstance;
}

Timers::SWGameClock::SWGameClock(void) {
    this->Initialize ( );
}

void Timers::SWGameClock::GetTime ( LONGLONG * t ) {
    // Use timeGetTime() if QueryPerformanceCounter is not supported
    if (!QueryPerformanceCounter( (LARGE_INTEGER *) t)) {
        *t = timeGetTime();
    }
    cacheTime = *t;
}

LONGLONG Timers::SWGameClock::GetTimeElapsed ( void ) {
    LONGLONG t;
    // Use timeGetTime() if QueryPerformanceCounter is not supported
    if (!QueryPerformanceCounter( (LARGE_INTEGER *) &t )) {
        t = timeGetTime();
    }
    return (t - cacheTime);
}

void Timers::SWGameClock::Initialize ( void ) {
    if ( !QueryPerformanceFrequency((LARGE_INTEGER *) &this->frequency) ) {
        this->frequency = 1000; // 1000 ms to one second
    }
    this->timeCount = DWORD(this->frequency / TICKS_PER_SECOND);
}

Timers::SWGameClock::~SWGameClock(void)
{
}
with a header file that contains the following:
// Required for rendering stuff on time
#pragma once

#define TICKS_PER_SECOND 60
#define MAX_FRAMESKIP 5

namespace Timers {
    class SWGameClock
    {
    public:
        static SWGameClock* GetInstance();
        void Initialize ( void );
        DWORD timeCount;
        void GetTime ( LONGLONG* t );
        LONGLONG GetTimeElapsed ( void );
        LONGLONG frequency;
        ~SWGameClock(void);
    protected:
        SWGameClock(void);
    private:
        static SWGameClock* pInstance;
    }; // SWGameClock
} // Timers
This will ensure that your code runs at 60 FPS (or whatever you put in), though you can probably dump the MAX_FRAMESKIP as that's not truly implemented in this example!
You could try a WinMain function and use the SetTimer function with a regular message loop (you can also take advantage of the filter mechanism of GetMessage( ... )), test for the WM_TIMER message, and when your counter reaches the limit do a PostQuitMessage(0) to terminate the message loop.
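A rough sketch of that idea: a thread timer with no window, a 240-frame limit mirroring the loop in the question, and the GetMessage filter arguments left at 0 for simplicity:
#include <windows.h>

int WINAPI WinMain(HINSTANCE, HINSTANCE, LPSTR, int) {
    UINT_PTR timerId = SetTimer(NULL, 0, 1000 / 60, NULL); // WM_TIMER roughly every 16 ms
    int counter = 0;

    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0) {   // returns 0 once WM_QUIT is retrieved
        if (msg.message == WM_TIMER) {
            // per-frame work goes here
            if (++counter >= 240)
                PostQuitMessage(0);              // ends the message loop
        }
        DispatchMessage(&msg);
    }
    KillTimer(NULL, timerId);
    return 0;
}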
For a duty cycle that fast, you can use a high-accuracy timer (like QueryPerformanceCounter) and a busy-wait loop.
If you had a much lower duty cycle, but still wanted precision, then you could Sleep for part of the time and then eat up the leftover time with a busy-wait loop.
Another option is to use something like DirectX to sync yourself to the VSync interrupt (which is almost always 60 Hz). This can make a lot of sense if you're coding a game or a/v presentation.
Windows is not a real-time OS, so there will never be a perfect way to do something like this, as there's no guarantee your thread will be scheduled to run exactly when you need it to.
Note that in the remarks for Sleep, the actual amount of time will be at least one "tick" and possibly one whole "tick" longer than the delay you requested before the thread is scheduled to run again (and then we have to assume the thread is scheduled). The "tick" can vary a lot depending on hardware and the version of Windows. It is commonly in the 10-15 ms range, and I've seen it as bad as 19 ms. For 60 Hz, you need 16.666 ms per iteration, so this is obviously not nearly precise enough to give you what you need.
What about rendering (iterating) based on the time elapsed between the rendering of each frame? Consider creating a void render(double timePassed) function and rendering depending on the timePassed parameter instead of putting the program to sleep.
Imagine, for example, you want to render a ball falling or bouncing. You know its speed, acceleration and all the other physics that you need. Calculate the position of the ball based on timePassed and all those physics parameters (speed, acceleration, etc.).
Or, if you prefer, you could just skip the render() function execution if the time passed is too small, instead of putting the program to sleep.
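A minimal sketch of that pattern: a falling ball updated from the measured frame delta; the variable names and the gravity constant are just for illustration:
#include <chrono>
#include <iostream>

int main() {
    double y = 100.0, velocity = 0.0;          // ball height and vertical speed
    const double gravity = -9.81;              // m/s^2, illustrative value

    auto last = std::chrono::steady_clock::now();
    while (y > 0.0) {
        auto now = std::chrono::steady_clock::now();
        double dt = std::chrono::duration<double>(now - last).count(); // seconds since last frame
        last = now;

        velocity += gravity * dt;              // integrate physics with the real elapsed time
        y += velocity * dt;

        std::cout << "y = " << y << '\r' << std::flush; // render(dt) would go here
    }
}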