OK, so I am making this little 'program' and would like to be able to calculate FPS. My idea was that if I hook a function that is called each frame, I could possibly calculate the FPS from it.
Here's my failed attempt; now that I look at this code again I can see why it was never going to work:
int FPS = 0;
void myHook()
{
    if(FPS<60) FPS++;
    else FPS = 0;
}
Obviously this was an idiotic attempt, and I'm not sure why I ever thought it might work in the first place...
But yeah, IS it possible to calculate FPS via hooking a function that is called each frame?
I sat down and thought about possible ways to do this, but I just couldn't come up with anything. Any info would be helpful. Thanks for reading :)
This should do the trick:
int fps = 0;
int lastKnownFps = 0;
void myHook(){ //CALL THIS FUNCTION EVERY TIME A FRAME IS RENDERED
    fps++;
}
void fpsUpdater(){ //CALL THIS FUNCTION EVERY SECOND
    lastKnownFps = fps;
    fps = 0;
}
int getFps(){ //CALL THIS FUNCTION TO GET FPS
    return lastKnownFps;
}
You can use your hook function to do the FPS calculation, but before you can do that you should:
Keep track of the frames by incrementing a counter each time a redraw is performed
Keep track of how much time has passed since the last update (get the current time in your hook function)
Then calculate:
frames / time
Use a high-resolution timer, and use a reasonable update rate (1/4 second or the like).
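A minimal sketch of those steps, assuming a hypothetical getTimeSeconds() that returns a high-resolution time in seconds:
int frameCount = 0;
double lastUpdateTime = 0.0;
double currentFps = 0.0;
void myHook()
{
    frameCount++; //one redraw happened
    double now = getTimeSeconds(); //hypothetical high-resolution timer
    double elapsed = now - lastUpdateTime;
    if(elapsed >= 0.25) //update roughly four times per second
    {
        currentFps = frameCount / elapsed; //frames / time
        frameCount = 0;
        lastUpdateTime = now;
    }
}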
You can find the time difference between successive frames. The inverse of this time gives you the frame rate. You need to implement a function getTime_ms() which returns the current time in ms.
unsigned int prevTime_ms = 0;
unsigned char firstFrame = 1;
int FPS = 0;
void myHook()
{
    unsigned int timeDiff_ms = 0;
    unsigned int currTime_ms = getTime_ms(); //Get the current time.
    /* You need at least two frames to find the time difference. */
    if(0 == firstFrame)
    {
        //Find the time difference with respect to the previous time.
        if(currTime_ms >= prevTime_ms)
        {
            timeDiff_ms = currTime_ms - prevTime_ms;
        }
        else
        {
            /* Clock wraparound. */
            timeDiff_ms = ((unsigned int) -1) - prevTime_ms;
            timeDiff_ms += (currTime_ms + 1);
        }
        //1 frame : timeDiff_ms :: FPS : 1000 ms. Find FPS.
        if(0 < timeDiff_ms) //timeDiff_ms should never be zero, but check anyway.
            FPS = 1000/timeDiff_ms;
    }
    else
    {
        firstFrame = 0;
    }
    //Save the current time for the next calculation.
    prevTime_ms = currTime_ms;
}
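A possible getTime_ms() for POSIX systems (just a sketch; on Windows, GetTickCount() or QueryPerformanceCounter() would do the same job):
#include <time.h>
unsigned int getTime_ms(void)
{
    /* CLOCK_MONOTONIC avoids jumps when the wall clock is adjusted. */
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (unsigned int)(ts.tv_sec * 1000 + ts.tv_nsec / 1000000);
}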
I have a thread that waits on a std::condition_variable, then loops until it is done.
I'm trying to slide my rect that is drawn in OpenGL.
Everything works fine without using a delta, but I would like my rect to slide at the same speed no matter what computer it is run on.
At the moment it jumps about halfway, then slides really slowly.
If I don't use my delta, it does not run at the same speed on slower computers.
I'm not sure if I should instead have an if statement that checks whether enough time has passed before doing the sliding, and not use a delta at all?
auto toolbarGL::Slide() -> void
{
    LARGE_INTEGER then, now, freq;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&then);
    while (true)
    {
        // Waits to be ready to slide
        // Keeps looping till stopped then starts to wait again
        SlideEvent.wait();
        QueryPerformanceCounter(&now);
        float delta_time_sec = (float)(now.QuadPart - then.QuadPart) / freq.QuadPart;
        if (slideDir == SlideFlag::Right)
        {
            if (this->x < 0)
            {
                this->x += 10 * delta_time_sec;
                this->controller->Paint();
            }
            else
                SlideEvent.stop();
        }
        else if (slideDir == SlideFlag::Left)
        {
            if (this->x > -90)
            {
                this->x -= 10 * delta_time_sec;
                this->controller->Paint();
            }
            else
                SlideEvent.stop();
        }
        else
            SlideEvent.stop();
        then = now;
    }
}
If you want your rectangle to move at a steady speed no matter what, I suggest a different approach -- instead of relying on your code executing at a particular time and causing a side effect (like x += 10) each time, come up with a function that will tell you what the rectangle's location should be at any given time. That way, no matter when your Paint() method is called, it will always draw the rectangle at the location that corresponds to that time.
For example:
// Returns the current time, in microseconds-since-some-arbitrary-time-zero
unsigned long long GetCurrentTimeMicroseconds()
{
    static unsigned long long _ticksPerSecond = 0;
    if (_ticksPerSecond == 0)
    {
        LARGE_INTEGER tps;
        _ticksPerSecond = QueryPerformanceFrequency(&tps) ? tps.QuadPart : 0;
    }
    LARGE_INTEGER curTicks;
    if ((_ticksPerSecond > 0) && (QueryPerformanceCounter(&curTicks)))
    {
        return (curTicks.QuadPart * 1000000) / _ticksPerSecond;
    }
    else
    {
        printf("GetCurrentTimeMicroseconds() failed, oh dear\n");
        return 0;
    }
}
[...]
// A particular location on the screen
int startPositionX = 0;
// A clock-value at which the rectangle was known to be at that location
unsigned long long timeStampAtStartPosition = GetCurrentTimeMicroseconds();
// The rectangle's current velocity, in pixels-per-second
int speedInPixelsPerSecond = 10;
// Given any clock-value (in microseconds), returns the expected position of the rectangle at that time
int GetXAtTime(unsigned long long currentTimeInMicroseconds)
{
    const long long timeSinceMicroseconds = currentTimeInMicroseconds - timeStampAtStartPosition;
    return startPositionX + ((speedInPixelsPerSecond * timeSinceMicroseconds) / 1000000);
}
void PaintScene()
{
    const int rectX = GetXAtTime(GetCurrentTimeMicroseconds());
    // code to paint the rectangle at position (rectX) goes here...
}
Given the above, your program can call PaintScene() as seldom or as often as it wants, and your rectangle's on-screen speed will not change (although the animation will look more or less smooth, depending on how often you call it).
Then if you want the rectangle to change its direction of motion, you can just do something like this:
const unsigned long long now = GetCurrentTimeMicroseconds();
startPositionX = GetXAtTime(now);
speedInPixelsPerSecond = -speedInPixelsPerSecond; // reverse course!
The above example uses a simple y=mx+b-style equation that provides linear motion, but you can get many different types of motion by using different parametric equations that take a time value and return a corresponding position value.
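For example, here is a sketch of a sinusoidal motion curve in the same style (centerX, amplitudeInPixels, and periodInMicroseconds are made-up parameters for illustration):
#include <math.h>
int centerX = 100;                                 // center of the oscillation
int amplitudeInPixels = 50;                        // how far to swing to each side
unsigned long long periodInMicroseconds = 2000000; // one full cycle every 2 seconds
// Returns a position that oscillates smoothly back and forth over time
int GetOscillatingXAtTime(unsigned long long currentTimeInMicroseconds)
{
    const double pi = 3.14159265358979;
    const double phase = (2.0 * pi * (currentTimeInMicroseconds % periodInMicroseconds))
                       / (double) periodInMicroseconds;
    return centerX + (int)(amplitudeInPixels * sin(phase));
}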
I am working on a 2D game for a school project. The template my teacher gave us works well, but I wanted to replace one very, very stupid thing in the code: it calls a heavy method 20 times just to slow the game down. Instead of doing that, I want to check whether the next frame should be handled.
The game is an object inside the template namespace. This namespace has an endless loop that calls the game's Tick method and swaps the frame buffers.
Inside this template, I replaced the Game->Tick() call with a simple if statement:
if (game->Ready(lastframe)) {
    game->Tick();
}
lastframe is the time difference, in seconds, between the last time the function was called and now. I know I could use this time to calculate movements inside the game tick, but that's not what I want to do right now!
This is the Ready method:
bool Game::Ready(float timedif) {
    // Add time to framecounter
    framecounter += timedif;
    // Check if the counter has passed the fps interval
    if (framecounter > fps) {
        // If so, subtract the fps interval from the counter
        while (framecounter > fps) {
            framecounter -= fps;
        }
        m_Screen->Clear(0);
        Draw();
        return true;
    }
    // Frame is still inside the fps margin
    return false;
}
fps is calculated as follows: fps = 1000.0f/60
I have no idea why it doesn't run at 60 frames per second, and I'm 100% sure it's being called more often than that (tested with a printf). Any help would be appreciated.
What is timedif? You should have some info about when the last frame happened in "real time", for example in ms.
It seems to me that you are assuming every iteration of your while loop takes one ms.
bool Game::Ready(float msSinceLastFrame)
{
    if(msSinceLastFrame > fps)
    {
        m_Screen->Clear();
        Draw();
        return true;
    }
    return false;
}
//call this from some kind of loop where you keep msSinceLastFrame updated
if(game->Ready(msSinceLastFrame))
{
    msSinceLastFrame = 0;
    game->Tick();
}
But if you use this approach, you need to call Game::Ready from some kind of while loop.
I would suggest another approach:
void Game::gameLoop()
{
    auto lastFrameTimeStamp = GetTickCount(); // if you are on Windows
    auto frameLength = 1000/fps;
    while(true) //true, or some kind of condition
    {
        auto currentTimeStamp = GetTickCount();
        if( currentTimeStamp - lastFrameTimeStamp >= frameLength)
        {
            //do your job
            lastFrameTimeStamp = currentTimeStamp;
        }
        Sleep(1); // sleep a little so you don't heat up your CPU =)
    }
}
Why are you subtracting the fps from framecounter? You need to reset framecounter when it exceeds fps.
Try the following:
bool Game::Ready(float timedif) {
    // Add time to framecounter
    framecounter += timedif;
    // Check if the counter has passed the fps interval
    if (framecounter > fps) {
        framecounter = 0;
        m_Screen->Clear(0);
        Draw();
        return true;
    }
    // Frame is still inside the fps margin
    return false;
}
I have a program in which I am drawing images on the screen. The draw function here is called once per frame, and all my drawing code is inside it.
I have written an image sequencer that returns the respective image from an index of images.
void draw()
{
    sequence.getFrameForTime(getCurrentElapsedTime()).draw(0,0); //getCurrentElapsedTime() returns a float and starts on application start
}
On key press, I want to start the sequence from the first image [0] and then go on from there. So every time I press a key, it has to start from [0], unlike the above code, which basically uses currentTime % numImages to get the frame (which is not the starting 0 position of the sequence).
I was thinking of writing a timer of my own that gets reset every time I press the key, so that the time always starts from 0. But before doing that, I wanted to ask if anybody has better/easier implementation ideas.
EDIT
Why didn't I just use a counter?
Because I have framerate adjustments in my ImageSequence as well:
Image getFrameAtPercent(float time)
{
    float totalTime = sequence.size() / frameRate;
    float percent = time / totalTime;
    return setFrameAtPercent(percent);
}
int getFrameIndexAtPercent(float percent){
    if (percent < 0.0 || percent > 1.0) percent -= floor(percent);
    return MIN((int)(percent*sequence.size()), sequence.size()-1);
}
void draw()
{
    sequence.getFrameForTime(counter++).draw(0,0);
}
void OnKeyPress(){ counter = 0; }
Is there a reason this won't suffice?
What you should do is keep a "currentFrame" as a float, increment it by the elapsed time, and convert it to an int to index your frame:
void draw()
{
    currentFrame += deltaTime * framesPerSecond; // deltaTime being the time between the current frame and your last frame
    if(currentFrame >= numImages)
        currentFrame -= numImages;
    sequence.getFrameAt((int)currentFrame).draw(0,0);
}
void OnKeyPress() { currentFrame = 0; }
This should gracefully handle machines with different framerates and even changes of framerates on a single machine.
Also, you won't be skipping part of a frame when you loop over, as the remainder of the subtraction is kept.
I would like to calculate the FPS of the last 2-4 seconds of a game. What would be the best way to do this?
Thanks.
Edit: To be more specific, I only have access to a timer with one second increments.
Near miss of a very recent posting; see my response there on using exponentially weighted moving averages:
C++: Counting total frames in a game
Here's sample code.
Initially:
avgFps = 1.0; // Initial value should be an estimate, but doesn't matter much.
Every second (assuming the total number of frames in the last second is in framesThisSecond):
// Choose alpha depending on how fast or slow you want old averages to decay.
// 0.9 is usually a good choice.
avgFps = alpha * avgFps + (1.0 - alpha) * framesThisSecond;
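Put together as a minimal C++ sketch (how tick() and update() get called is an assumption about your game loop):
// Exponentially weighted moving-average FPS counter.
struct FpsAverager {
    double avgFps;          // running average
    double alpha;           // decay factor: higher = smoother, slower to react
    int framesThisSecond;
    FpsAverager() : avgFps(1.0), alpha(0.9), framesThisSecond(0) {}
    void tick() { framesThisSecond++; }   // call once per rendered frame
    void update() {                       // call once per second
        avgFps = alpha * avgFps + (1.0 - alpha) * framesThisSecond;
        framesThisSecond = 0;
    }
};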
Here's a solution that might work for you. I'll write this in pseudo/C, but you can adapt the idea to your game engine.
const int trackedTime = 3000; // 3 seconds
int frameStartTime; // in milliseconds
int queueAggregate = 0;
queue<int> frameLengths;
void onFrameStart()
{
    frameStartTime = getCurrentTime();
}
void onFrameEnd()
{
    int frameLength = getCurrentTime() - frameStartTime;
    frameLengths.enqueue(frameLength);
    queueAggregate += frameLength;
    while (queueAggregate > trackedTime)
    {
        int oldFrame = frameLengths.dequeue();
        queueAggregate -= oldFrame;
    }
    setAverageFps(frameLengths.count() / 3); // 3 seconds' worth of frames, averaged
}
You could keep a circular buffer of the frame times for the last 100 frames and average them. That'll be "FPS for the last 100 frames". (Or, rather, 99, since you diff the newest time against the oldest.)
Call some accurate system time, milliseconds or better.
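Here's a sketch of that idea in C++ (std::deque stands in for a hand-rolled circular buffer, and getTimeMs() is a hypothetical millisecond-resolution clock):
#include <deque>
std::deque<unsigned int> frameTimes;
const unsigned int maxFrames = 100;
void onFrame()
{
    frameTimes.push_back(getTimeMs());
    if (frameTimes.size() > maxFrames)
        frameTimes.pop_front();
}
double averageFps()
{
    if (frameTimes.size() < 2) return 0.0;
    // (N-1) frame intervals span (newest - oldest) milliseconds.
    unsigned int span = frameTimes.back() - frameTimes.front();
    return span > 0 ? (frameTimes.size() - 1) * 1000.0 / span : 0.0;
}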
What you actually want is something like this (in your mainLoop):
frames++;
if(time < secondsTimer()){
    time = secondsTimer();
    printf("Average FPS from the last 2 seconds: %d\n", (frames+lastFrames)/2);
    lastFrames = frames;
    frames = 0;
}
If you know how to deal with structures/arrays, it should be easy to extend this example to, e.g., 4 seconds instead of 2. But if you want more detailed help, you should really mention WHY you don't have access to a precise timer (which architecture, language) - otherwise everything is just guessing...
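A sketch of the array extension (N = 4 seconds is just an example; frames is incremented once per frame as above, and onSecondElapsed() is called once per second):
#include <stdio.h>
const int N = 4;
int frameCounts[N] = {0}; // frames counted in each of the last N seconds
int slot = 0;             // index for the current second
int frames = 0;           // incremented once per frame, as above
void onSecondElapsed()
{
    frameCounts[slot] = frames;
    frames = 0;
    slot = (slot + 1) % N;
    int total = 0;
    for (int i = 0; i < N; ++i) total += frameCounts[i];
    printf("Average FPS from the last %d seconds: %d\n", N, total / N);
}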
I am building a 3D game from scratch in C++ using OpenGL and SDL on Linux, as a hobby and to learn more about this area of programming.
I'm wondering about the best way to simulate time while the game is running. Obviously I have a loop that looks something like:
void main_loop()
{
    while(!quit)
    {
        handle_events();
        DrawScene();
        ...
        SDL_Delay(time_left());
    }
}
I am using SDL_Delay and time_left() to maintain a framerate of about 33 fps.
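time_left() is along the lines of the standard SDL tick idiom (a sketch; TICK_INTERVAL and the global next_time are the usual pieces):
#define TICK_INTERVAL 30 // about 30 ms per frame gives roughly 33 fps
static Uint32 next_time;
Uint32 time_left(void)
{
    Uint32 now = SDL_GetTicks();
    if (next_time <= now)
        return 0;
    return next_time - now;
}
// ...and at the end of each loop iteration:
// SDL_Delay(time_left());
// next_time += TICK_INTERVAL;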
I had thought that I just need a few global variables like
int current_hour = 0;
int current_mins = 0;
int num_days = 0;
Uint32 prev_ticks = 0;
Then a function like:
void handle_time()
{
    Uint32 current_ticks;
    Uint32 dticks;
    current_ticks = SDL_GetTicks();
    dticks = current_ticks - prev_ticks; // get difference since last time
    // if difference is greater than 30000 (half minute) increment game mins
    if(dticks >= 30000) {
        prev_ticks = current_ticks;
        current_mins++;
        if(current_mins >= 60) {
            current_mins = 0;
            current_hour++;
        }
        if(current_hour > 23) {
            current_hour = 0;
            num_days++;
        }
    }
}
and then call the handle_time() function in the main loop.
It compiles and runs (using printf to write the time to the console at the moment), but I am wondering if this is the best way to do it. Are there easier or more efficient ways?
I've mentioned this before in other game-related threads. As always, follow the suggestions by Glenn Fiedler in his Game Physics series.
What you want to do is use a constant timestep, which you get by accumulating time deltas. If you want 33 updates per second, your constant timestep should be 1/33 of a second; you could also call this the update frequency. You should also decouple the game logic from the rendering, as they don't belong together: you want a low, fixed update frequency while rendering as fast as the machine allows. Here is some sample code:
bool running = true;
/* timestep is the fixed update interval in ms; t is the accumulated game time. */
const unsigned int timestep = 1000/33;
unsigned int t = 0, t_accum = 0, lt = 0, ct = 0;
SDL_Event event;
while(running){
    while(SDL_PollEvent(&event)){
        switch(event.type){
            ...
        }
    }
    ct = SDL_GetTicks();
    t_accum += ct - lt;
    lt = ct;
    while(t_accum >= timestep){
        t += timestep; /* this is our actual time, in milliseconds. */
        t_accum -= timestep;
        for(std::vector<Entity>::iterator en = entities.begin(); en != entities.end(); ++en){
            integrate(*en, (float)t * 0.001f, timestep);
        }
    }
    /* This should really be in a separate thread, synchronized with a mutex */
    std::vector<Entity> tmpEntities(entities.size());
    for(size_t i=0; i<entities.size(); ++i){
        float alpha = (float)t_accum / (float)timestep;
        tmpEntities[i] = interpolateState(entities[i].lastState, alpha, entities[i].currentState, 1.0f - alpha);
    }
    Render(tmpEntities);
}
This handles undersampling as well as oversampling. If you use integer arithmetic as done here, your game physics will be close to 100% deterministic, no matter how slow or fast the machine is. This is the advantage of increasing the time in fixed intervals. The state used for rendering is calculated by interpolating between the previous and current states, where the leftover value inside the time accumulator is used as the interpolation factor. This ensures that the rendering is smooth, no matter how large the timestep is.
Other than the issues already pointed out (you should use a structure for the times and pass it to handle_time(), and your minute will get incremented every half minute), your solution is fine for keeping track of the time running in the game.
However, most game events that need to happen every so often should probably be based on the main game loop instead of actual time, so that they happen in the same proportions at a different fps.
One of Glenn's posts you will really want to read is Fix Your Timestep!. After looking up this link I noticed that Mads directed you to the same general place in his answer.
I am not a Linux developer, but you might want to have a look at using timers instead of polling for the ticks.
http://linux.die.net/man/2/timer_create
EDIT:
SDL seems to support timers as well: SDL_SetTimer
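In SDL 1.2, SDL_AddTimer supersedes SDL_SetTimer. A sketch of using it (the callback runs in a separate thread, so the common pattern is to push a user event back to the main loop rather than touch game state directly):
#include "SDL.h"
Uint32 tick_callback(Uint32 interval, void *param)
{
    SDL_Event ev;
    ev.type = SDL_USEREVENT;
    ev.user.code = 0;
    ev.user.data1 = NULL;
    ev.user.data2 = NULL;
    SDL_PushEvent(&ev);
    return interval; /* keep firing at the same interval */
}
/* somewhere in setup, after SDL_Init(SDL_INIT_TIMER | ...): */
SDL_TimerID id = SDL_AddTimer(30, tick_callback, NULL); /* roughly 33 fps */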