Time Step in PhysX - C++

I'm trying to define a time step for the physics simulation in a PhysX application, such that the physics will run at the same speed on all machines. I wish for the physics to update at 60FPS, so each update should have a delta time of 1/60th of a second.
My application must use GLUT. Currently, my loop is set up as follows.
Idle Function:
void GLUTGame::Idle()
{
newElapsedTime = glutGet(GLUT_ELAPSED_TIME);
deltaTime = newElapsedTime - lastElapsedTime;
lastElapsedTime = newElapsedTime;
glutPostRedisplay();
}
The frame rate does not really matter in this case - it's only the speed at which my physics update that actually matters.
My render function contains the following:
void GLUTGame::Render()
{
// Rendering Code
simTimer += deltaTime;
if (simTimer > m_fps)
{
m_scene->UpdatePhys(m_fps);
simTimer = 0;
}
}
Where:
Fl32 m_fps = 1.f/60.f;
However, this results in some very slow updates, due to the fact that deltaTime appears to equal 0 on most loops (which shouldn't actually be possible...). I've tried moving my deltaTime calculations to the bottom of my rendering function, as I thought that maybe the idle callback was being called too often, but this did not solve the issue. Any ideas what I'm doing wrong here?

From the OpenGL website, we find that glutGet(GLUT_ELAPSED_TIME) returns the number of elapsed milliseconds as an int. So, if you call your void GLUTGame::Idle() method about 2000 times per second, the time passed between two such calls is about 1000 * 1/2000 = 0.5 ms. Thus, at more than 2000 calls per second to void GLUTGame::Idle(), integer truncation makes the computed deltaTime come out as 0 on most calls.

Likely you're adding very small numbers to larger ones and you get rounding errors.
Try this:
void GLUTGame::Idle()
{
newElapsedTime = glutGet(GLUT_ELAPSED_TIME);
timeDelta = newElapsedTime - lastElapsedTime;
if (timeDelta < m_fps) return; // note: timeDelta is in milliseconds, so m_fps must hold the frame period in ms (e.g. 1000.0/60.0) for this comparison
lastElapsedTime = newElapsedTime;
glutPostRedisplay();
}
You can do something similar in the other method if you want to.
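Applied to the Render() function from the question, a minimal sketch might look like this (kStepMs and simTimerMs are my own names, and everything stays in integer milliseconds so truncation can't creep back in):
void GLUTGame::Render()
{
    // Sketch only: accumulate whole milliseconds and run fixed 1/60 s steps.
    // deltaTime is assumed to hold the whole milliseconds computed in Idle().
    static const int kStepMs = 1000 / 60;   // ~16 ms per physics step
    static int simTimerMs = 0;

    // ... rendering code ...

    simTimerMs += deltaTime;                // accumulate elapsed milliseconds
    while (simTimerMs >= kStepMs)
    {
        m_scene->UpdatePhys(1.0f / 60.0f);  // fixed step, in seconds, for PhysX
        simTimerMs -= kStepMs;              // keep the remainder for the next frame
    }
}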

I don't know anything about GLUT or PhysX, but here's how to have something execute at the same rate (using integers) no matter how fast the game runs:
if (currentTime - lastUpdateTime > msPerUpdate)
{
DWORD msPassed = currentTime - lastUpdateTime;
int updatesPassed = msPassed / msPerUpdate;
for (int i=0; i<updatesPassed; i++)
UpdatePhysX(); //or whatever function you use
lastUpdateTime = currentTime - msPassed + (updatesPassed * msPerUpdate);
}
Here currentTime is refreshed from timeGetTime() every pass through the game loop, lastUpdateTime is the last time PhysX was updated, and msPerUpdate is the number of milliseconds you assign to each update - 16 or 17 ms for 60 fps.
If you want to support floating-point update factors (which is recommended for a physics application), then define float timeMultiplier and update it every frame, e.g. timeMultiplier = desiredFrameRate / (float)frameRate; - where frameRate is the measured frame rate and desiredFrameRate is 60.0f if you want the physics tuned for 60 fps (equivalently, the frame's delta time divided by 1/60 s). To do this, change UpdatePhysX to take a float parameter and multiply every per-update movement by it.
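A rough sketch of that idea (all names below are illustrative placeholders, not the poster's actual code):
// Sketch: scale per-update movement by how long the frame actually took,
// relative to the ideal 1/60 s frame.
const float desiredFrameRate = 60.0f;
float playerX = 0.0f;
float unitsPerUpdate = 2.0f;                // movement tuned for a 60 fps update

void UpdatePhysX(float timeMultiplier)
{
    // a frame that took twice as long advances the simulation twice as far
    playerX += unitsPerUpdate * timeMultiplier;
}

// each frame:
// float timeMultiplier = desiredFrameRate / measuredFrameRate;  // 1.0 at exactly 60 fps
// UpdatePhysX(timeMultiplier);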

Delta time not working properly with GLFW

In a while loop I am calling an update function of an object:
void ParticleEmitter::update(float deltaTime) {
time += deltaTime;
if (time >= delay) {
addParticle();
time = 0;
}
}
The time variable is set to 0 by default; it's basically the elapsed time.
delay is 0.01, meaning that after each 0.01 seconds it should do whatever is in the if statement.
This seems frame rate-independent, but it is not: when I cap the FPS to 60 or turn on VSync, fewer particles seem to be added.
The delta time is being calculated properly, I have tested it. But I am confused as to why this code isn't frame rate-independent.
As mentioned in the comments, you are probably not factoring in the time left over in deltaTime.
For example, imagine that deltaTime is 0.05 or 0.1: since you only check that it is greater than or equal to the delay, you still generate just one particle each time update is called. So when deltaTime is smaller (frame rate higher), update is called more often, and you end up with more particles than when deltaTime is bigger.
The solution is to use a function like the one below, which generates a number of particles based on the elapsed time (extracted from my own particle generator):
void particleSystem::generateParticles(float timeStep)
{
float particlesToCreate = particlesPerSecond * timeStep;
unsigned int count = static_cast<unsigned int>(particlesToCreate);
for (unsigned int i = 0; i < count; i++)
{
emitParticle();
}
}
With particlesPerSecond being a configurable parameter in the particleSystem.
Hope it helps.
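One detail worth noting about the snippet above: the static_cast truncates, so at high frame rates the fractional part of particlesToCreate is thrown away every frame. A small sketch of carrying that remainder over (particleCarry is a hypothetical float member, initialised to 0):
void particleSystem::generateParticles(float timeStep)
{
    // particleCarry stores the fractional particle left over from previous frames
    float particlesToCreate = particlesPerSecond * timeStep + particleCarry;
    unsigned int count = static_cast<unsigned int>(particlesToCreate);
    particleCarry = particlesToCreate - count;   // keep the fraction for next time
    for (unsigned int i = 0; i < count; i++)
    {
        emitParticle();
    }
}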
It's not framerate-independent.
For simplicity, let's assume that deltaTime is fixed.
If it is 0.1, you will add one particle every 0.1 seconds, because 0.1 > 0.01.
If it is 0.01, you will add one particle every 0.01 seconds.
If it is 1/60 (60 FPS), you will add one particle every 1/60 (~0.01667) seconds, because 1/60 is larger than 0.01.
If it is 1/120, you will also add one particle every 1/60 seconds, because 1/120 is smaller than 0.01 and 2 * 1/120 is larger than 0.01.
If you want a fixed number of particles per time, you need to add several if the delta is large.
Something like
while (time >= delay)
{
addParticle();
time -= delay;
}
You also don't want to set time to 0, but save the "spill time" for the next update.
There is a drawback to this approach: adding more particles takes time, so the next update may come later. This can cause a long ripple effect.
If that happens, you will need a more sophisticated approach.
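One simple mitigation, sketched here (the cap of 5 is an arbitrary choice, not something from the answer above), is to limit how many missed intervals a single update may catch up on and drop the rest of the backlog:
const int maxCatchUp = 5;          // arbitrary cap on how far we try to catch up
int steps = 0;
while (time >= delay && steps < maxCatchUp)
{
    addParticle();
    time -= delay;
    ++steps;
}
if (steps == maxCatchUp)
    time = 0;                      // drop the remaining backlog instead of spiralling further behind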

Limiting FPS in C++

I'm currently making a game in which I would like to limit the frames per second but I'm having problems with that. Here's what I'm doing:
I'm getting the deltaTime through this method that is executed each frame:
void Time::calc_deltaTime() {
double currentFrame = glfwGetTime();
deltaTime = currentFrame - lastFrame;
lastFrame = currentFrame;
}
deltaTime has the value I would expect (around 0.012... to 0.016...).
And then I'm using deltaTime to delay the frame with the Windows Sleep function, like this:
void Time::limitToMAXFPS() {
if(1.0 / MAXFPS > deltaTime)
Sleep((1.0 / MAXFPS - deltaTime) * 1000.0);
}
MAXFPS is equal to 60 and I'm multiplying by 1000 to convert seconds to milliseconds. Though everything seems correct, I'm still getting more than 60 fps (around 72 fps).
I also tried this method using while loop:
void Time::limitToMAXFPS() {
double diff = 1.0 / MAXFPS - deltaTime;
if(diff > 0) {
double t = glfwGetTime( );
while(glfwGetTime( ) - t < diff) { }
}
}
But I'm still getting more than 60 fps - around 72 fps... Am I doing something wrong, or is there a better way of doing this?
How important is it that you return cycles back to the CPU? To me, it seems like a bad idea to use sleep at all. Someone please correct me if I am wrong, but I think sleep functions should be avoided.
Why not simply use an infinite loop that only executes once a certain time interval has passed? Try:
const double maxFPS = 60.0;
const double maxPeriod = 1.0 / maxFPS; // approx ~ 16.666 ms
bool running = true;
double lastTime = 0.0;
while( running ) {
double time = glfwGetTime();
double deltaTime = time - lastTime;
if( deltaTime >= maxPeriod ) {
lastTime = time;
// code here gets called with max FPS
}
}
The last time I used GLFW, it seemed to self-limit to 60 fps anyway. If you are doing anything high-performance oriented (games or 3D graphics), avoid anything that sleeps, unless you want to use multithreading.
Sleep can be very inaccurate. A common phenomenon seen is that the actual time slept has a resolution of 14-15 milliseconds, which gives you a frame rate of ~70.
Is Sleep() inaccurate?
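One way around the coarse Sleep granularity, sketched below with std::chrono, is to sleep for most of the remaining frame time and busy-wait the last couple of milliseconds (the 2 ms margin is an arbitrary choice):
#include <chrono>
#include <thread>

// Sketch of a hybrid limiter: coarse sleep, then a short spin for precision.
void waitForFrameEnd(std::chrono::steady_clock::time_point frameEnd)
{
    using namespace std::chrono;
    auto now = steady_clock::now();
    if (frameEnd - now > milliseconds(2))
        std::this_thread::sleep_for(frameEnd - now - milliseconds(2));
    while (steady_clock::now() < frameEnd)
    {
        // busy-wait the last ~2 ms; burns a little CPU but is far more precise
    }
}

// per frame: waitForFrameEnd(frameStart + std::chrono::microseconds(16667));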
I've given up on trying to limit the fps like this... As you said, Windows is very inconsistent with Sleep. My fps average is always 64 fps and not 60. The problem is that Sleep takes an integer (or long integer) argument, so I was casting with static_cast, but I really need to pass it a double: sleeping 16 milliseconds each frame is different from sleeping 16.6666... That's probably the cause of the extra 4 fps (I think).
I also tried :
std::this_thread::sleep_for(std::chrono::milliseconds(static_cast<long>((1.0 / MAXFPS - deltaTime) * 1000.0)));
and the same thing happens with sleep_for. Then I tried passing the decimal value remaining from the milliseconds to chrono::microseconds and chrono::nanoseconds, using all three together for better precision, but guess what: I still get the freaking 64 fps.
Another weird thing: sometimes (yes, this is completely random) when I change the 1000.0 in the expression (1.0 / MAXFPS - deltaTime) * 1000.0 to a const integer, making it (1.0 / MAXFPS - deltaTime) * 1000, my fps simply jumps to 74 for some reason, even though the two expressions should be equivalent. Both are double expressions, so I don't think any type promotion is happening here.
So I decided to force V-sync through the function wglSwapIntervalEXT(1); in order to avoid screen tearing. And then I'm going to use that method of multiplying deltaTime into every value that might vary depending on the speed of the computer executing my game. It's going to be a pain because I might forget to multiply some value and not notice it on my own computer, creating inconsistency, but I see no other way... Thank you all for the help though.
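For reference, the "multiply everything by deltaTime" approach the poster settled on usually looks something like this sketch (the names are illustrative, not from the question):
// Sketch: express speeds per second and scale by the measured frame time.
float playerX = 0.0f;
float playerSpeed = 200.0f;              // units per second

void updatePlayer(float deltaTime)       // deltaTime in seconds, from calc_deltaTime()
{
    playerX += playerSpeed * deltaTime;  // same distance per second at any fps
}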
I've recently started using GLFW for a small side project I'm working on, and I've used std::chrono alongside std::this_thread::sleep_until to achieve 60 fps:
auto start = std::chrono::steady_clock::now();
while(!glfwWindowShouldClose(window))
{
++frames;
auto now = std::chrono::steady_clock::now();
auto diff = now - start;
auto end = now + std::chrono::milliseconds(16);
if(diff >= std::chrono::seconds(1))
{
start = now;
std::cout << "FPS: " << frames << std::endl;
frames = 0;
}
glfwPollEvents();
processTransition(countit);
render.TickTok();
render.RenderBackground();
render.RenderCovers(countit);
std::this_thread::sleep_until(end);
glfwSwapBuffers(window);
}
To add to that, you can easily adjust the FPS preference by adjusting end.
Now, with that said, I know GLFW was limited to 60 fps, but I had to disable that limit with glfwSwapInterval(0); just before the while loop.
Are you sure your Sleep function accepts floating-point values? If it only accepts an int, your sleep will be a Sleep(0), which would explain your issue.

gameloop framerate control issues

This is driving me mad. Anyway, usual story: I'm attempting to guarantee the same speed in my very simple game across any Windows machine that runs it. I'm doing it by specifying a 1/60 value, then ensuring a frame can't run until that much time has passed since the last frame. The problem I'm having is that 1/60 equates to 30 Hz for some reason; I have to set it to 1/120 to get 60 Hz. It's also not bang on 60 Hz, it's a little faster.
If I paste it out here, could someone tell me if they see anything wrong? Or maybe a more precise way to do it?
float controlFrameRate = 1./60 ;
while (gameIsRunning)
{
frameTime += (system->getElapsedTime()-lastTime);
lastTime = system->getElapsedTime();
if(frameTime > controlFrameRate)
{
gameIsRunning = system->update();
//do stuff with the game
frameTime = .0f;
}
}
Don't call getElapsedTime twice; there may be a slight difference between the two calls. Instead, store its value and reuse it. Also, instead of setting frameTime to 0, subtract controlFrameRate from it; that way, if one frame takes a little longer, the next one will make up for it by being a little shorter.
while (gameIsRunning)
{
float elapsedTime = system->getElapsedTime();
frameTime += (elapsedTime-lastTime);
lastTime = elapsedTime;
if(frameTime > controlFrameRate)
{
gameIsRunning = system->update();
//do stuff with the game
frameTime -= controlFrameRate;
}
}
I'm not sure about your problem with having to set the rate to 1/120 - what timing API are you using here?

C++: Calculating Moving FPS

I would like to calculate the FPS of the last 2-4 seconds of a game. What would be the best way to do this?
Thanks.
Edit: To be more specific, I only have access to a timer with one second increments.
Near miss of a very recent posting. See my response there on using exponentially weighted moving averages.
C++: Counting total frames in a game
Here's sample code.
Initially:
avgFps = 1.0; // Initial value should be an estimate, but doesn't matter much.
Every second (assuming the total number of frames in the last second is in framesThisSecond):
// Choose alpha depending on how fast or slow you want old averages to decay.
// 0.9 is usually a good choice.
avgFps = alpha * avgFps + (1.0 - alpha) * framesThisSecond;
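Wrapped up as a tiny self-contained helper (my own packaging of the formula above, nothing more), it might look like:
// Sketch: call tick() once per rendered frame; read fps() whenever needed.
struct FpsEwma
{
    double avgFps = 1.0;        // initial estimate, doesn't matter much
    double alpha = 0.9;         // how fast old averages decay
    int framesThisSecond = 0;

    void tick() { ++framesThisSecond; }

    void onSecondElapsed()      // call once per second from your timer
    {
        avgFps = alpha * avgFps + (1.0 - alpha) * framesThisSecond;
        framesThisSecond = 0;
    }

    double fps() const { return avgFps; }
};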
Here's a solution that might work for you. I'll write this in pseudo/C, but you can adapt the idea to your game engine.
const int trackedTime = 3000; // 3 seconds
int frameStartTime; // in milliseconds
int queueAggregate = 0;
queue<int> frameLengths;
void onFrameStart()
{
frameStartTime = getCurrentTime();
}
void onFrameEnd()
{
int frameLength = getCurrentTime() - frameStartTime;
frameLengths.enqueue(frameLength);
queueAggregate += frameLength;
while (queueAggregate > trackedTime)
{
int oldFrame = frameLengths.dequeue();
queueAggregate -= oldFrame;
}
setAverageFps(frameLengths.count() / (trackedTime / 1000)); // frames in the window divided by seconds tracked
}
You could keep a circular buffer of the frame times for the last 100 frames and average them. That would be "FPS over the last 100 frames". (Or, rather, 99, since you won't diff the newest time against the oldest.)
Call some accurate system time, milliseconds or better.
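A sketch of that idea; the window of 100 frames and the struct name are mine, and addFrame() would be fed the measured duration of each frame in milliseconds:
#include <array>
#include <cstddef>

// Sketch: ring buffer of the last 100 frame durations, in milliseconds.
struct FrameAverager
{
    std::array<double, 100> frameMs{};
    std::size_t next = 0;
    std::size_t filled = 0;

    void addFrame(double ms)
    {
        frameMs[next] = ms;
        next = (next + 1) % frameMs.size();
        if (filled < frameMs.size()) ++filled;
    }

    double averageFps() const
    {
        if (filled == 0) return 0.0;
        double total = 0.0;
        for (std::size_t i = 0; i < filled; ++i) total += frameMs[i];
        return 1000.0 * filled / total;   // frames per second over the window
    }
};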
What you actually want is something like this (in your mainLoop):
frames++;
if(time<secondsTimer()){
time = secondsTimer();
printf("Average FPS from the last 2 seconds: %d",(frames+lastFrames)/2);
lastFrames = frames;
frames = 0;
}
If you know how to deal with structures/arrays, it should be easy to extend this example to, say, 4 seconds instead of 2. But if you want more detailed help, you should really mention WHY you don't have access to a precise timer (which architecture, which language) - otherwise everything is just guessing...
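For example, extending the snippet above to a 4-second window with a small array might look like this sketch (still assuming only the one-second timer from the question):
// Sketch: keep the last 4 per-second frame counts and average them.
const int kWindow = 4;
int counts[kWindow] = {0, 0, 0, 0};
int slot = 0;
int frames = 0;

// inside the main loop:
frames++;
if (time < secondsTimer()) {
    time = secondsTimer();
    counts[slot] = frames;             // record this second's frame count
    slot = (slot + 1) % kWindow;
    frames = 0;
    int sum = 0;
    for (int i = 0; i < kWindow; i++) sum += counts[i];
    printf("Average FPS over the last %d seconds: %d\n", kWindow, sum / kWindow);
}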

Simulated time in a game loop using C++

I am building a 3d game from scratch in C++ using OpenGL and SDL on linux as a hobby and to learn more about this area of programming.
Wondering about the best way to simulate time while the game is running. Obviously I have a loop that looks something like:
void main_loop()
{
while(!quit)
{
handle_events();
DrawScene();
...
SDL_Delay(time_left());
}
}
I am using the SDL_Delay and time_left() to maintain a framerate of about 33 fps.
I had thought that I just need a few global variables like
int current_hour = 0;
int current_mins = 0;
int num_days = 0;
Uint32 prev_ticks = 0;
Then a function like :
void handle_time()
{
Uint32 current_ticks;
Uint32 dticks;
current_ticks = SDL_GetTicks();
dticks = current_ticks - prev_ticks; // get difference since last time
// if difference is greater than 30000 (half minute) increment game mins
if(dticks >= 30000) {
prev_ticks = current_ticks;
current_mins++;
if(current_mins >= 60) {
current_mins = 0;
current_hour++;
}
if(current_hour > 23) {
current_hour = 0;
num_days++;
}
}
}
and then call the handle_time() function in the main loop.
It compiles and runs (using printf to write the time to the console at the moment), but I am wondering if this is the best way to do it. Are there easier or more efficient ways?
I've mentioned this before in other game-related threads. As always, follow the suggestions by Glenn Fiedler in his Game Physics series.
What you want to do is to use a constant timestep which you get by accumulating time deltas. If you want 33 updates per second, then your constant timestep should be 1/33. You could also call this the update frequency. You should also decouple the game logic from the rendering as they don't belong together. You want to be able to use a low update frequency while rendering as fast as the machine allows. Here is some sample code:
running = true;
unsigned int t_accum=0,lt=0,ct=0;
while(running){
while(SDL_PollEvent(&event)){
switch(event.type){
...
}
}
ct = SDL_GetTicks();
t_accum += ct - lt;
lt = ct;
while(t_accum >= timestep){
t += timestep; /* this is our actual time, in milliseconds. */
t_accum -= timestep;
for(std::vector<Entity>::iterator en = entities.begin(); en != entities.end(); ++en){
integrate(en, (float)t * 0.001f, timestep);
}
}
/* This should really be in a separate thread, synchronized with a mutex */
std::vector<Entity> tmpEntities(entities.size());
for(int i=0; i<entities.size(); ++i){
float alpha = (float)t_accum / (float)timestep;
tmpEntities[i] = interpolateState(entities[i].lastState, alpha, entities[i].currentState, 1.0f - alpha);
}
Render(tmpEntities);
}
This handles undersampling as well as oversampling. If you use integer arithmetic as done here, your game physics should be close to 100% deterministic, no matter how slow or fast the machine is. This is the advantage of advancing the time in fixed intervals. The state used for rendering is calculated by interpolating between the previous and current states, where the leftover value inside the time accumulator is used as the interpolation factor. This ensures that the rendering is smooth, no matter how large the timestep is.
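interpolateState isn't shown above; for a simple position-only state it could be a plain linear blend like the sketch below (the struct and its fields are assumptions, and the two weights should sum to 1 in whatever convention the caller uses):
// Sketch: blend two states for rendering.
struct State { float x, y, z; };

State interpolateState(const State &prev, float wPrev,
                       const State &curr, float wCurr)
{
    State out;
    out.x = prev.x * wPrev + curr.x * wCurr;
    out.y = prev.y * wPrev + curr.y * wCurr;
    out.z = prev.z * wPrev + curr.z * wCurr;
    return out;
}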
Other than the issues already pointed out (you should use a structure for the times and pass it to handle_time(), and your minute will get incremented every half minute), your solution is fine for keeping track of the time running in the game.
However, for most game events that need to happen every so often you should probably base them off of the main game loop instead of an actual time so they will happen in the same proportions with a different fps.
One of Glenn's posts you will really want to read is Fix Your Timestep!. After looking up this link I noticed that Mads directed you to the same general place in his answer.
I am not a Linux developer, but you might want to have a look at using Timers instead of polling for the ticks.
http://linux.die.net/man/2/timer_create
EDIT:
SDL seems to support timers too: SDL_SetTimer
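For reference, a minimal SDL timer sketch using SDL_AddTimer (the newer replacement for SDL_SetTimer); the callback runs on SDL's timer thread, so the usual pattern is to push an event and let the main loop do the actual work:
#include <SDL.h>

// Sketch: fire roughly 60 times a second and hand the work to the main loop.
Uint32 timerCallback(Uint32 interval, void *param)
{
    (void)param;
    SDL_Event ev;
    ev.type = SDL_USEREVENT;    // handle_events() in the main loop picks this up
    ev.user.code = 0;
    ev.user.data1 = NULL;
    ev.user.data2 = NULL;
    SDL_PushEvent(&ev);
    return interval;            // keep the timer running at the same rate
}

// during startup:
// SDL_Init(SDL_INIT_VIDEO | SDL_INIT_TIMER);
// SDL_TimerID id = SDL_AddTimer(1000 / 60, timerCallback, NULL);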