Sprite jumps every few seconds when moving with time (SFML, C++)

This code moves a sprite across the screen relative to time. However, it appears to jump to the left every couple of seconds.
int ogreMaxCell = 9;
if (SpriteVector[i].getPosition().x > ogreDirectionX[i])
{
    sf::Vector2f ogreDirection = sf::Vector2f(-1, 0);
    float ogreSpeed = 1;
    sf::Vector2f ogreVelocity = ogreDirection * ogreSpeed * 250000.0f * dt.asSeconds();
    this->SpriteVector[i].move(ogreVelocity);
    // gets the spritesheet row
    orcSource.y = getCellYOrc(orcLeft);
}
if (ogreClock.getElapsedTime().asMilliseconds() > 250)
{
    orcxcell = (orcxcell + 1) % ogreMaxCell;
    ogreClock.restart();
}
SpriteVector[i].setTextureRect(sf::IntRect(orcSource.x + (orcxcell * 80), orcSource.y, 80, 80));
The timing code is:
sf::Time dt; // delta time
sf::Time elapsedTime;
sf::Clock clock;
elapsedTime += dt;
dt = clock.restart();
Any insight as to why this is happening?
Regards

You didn't show how you implemented your timing function, so there are two possibilities:
1. You declared the time variables outside of the timing loop. In that case the result is movement with some varying degree of stutter, but judging from the if structure the error most likely lies in possibility 2. 250000.0f is an insanely huge number to have to use for an offset, and the use of ogreClock also suggests #2 is more likely.
2. Both the variable declarations and the timing statements are inside the loop.
I threw that function into a compiler and set cout to output both values as microseconds.
The output: elapsedTime is always 0, and dt is usually around 0-4 microseconds, except every so often it jumps to around 400-2000 microseconds.
The effect of this is that you had to use a second clock to control your animation so it didn't glitch, and your sprite jumps to the left every so often, because dt randomly goes from about 4 microseconds to about 1500. It also explains why you have to multiply by such a huge constant: you keep getting infinitesimally small values for dt.
There are a few problems in the timing code.
dt = clock.restart() never returns exactly 0:
you will always get some small time value, because resetting the clock to 0 and handing its value to the sf::Time variable itself takes time.
When the animation jumps, it's because in that particular cycle the computer took a little longer than usual between the clock reset and the assignment.
The fix is pretty simple: declare the variables outside the loop structure, and adjust the code like this:
// declare before the loop; if you don't, elapsedTime constantly gets reset to 0
sf::Time dt; // delta time
sf::Time elapsedTime;
sf::Clock clock;
// start of loop structure of your choice
dt = clock.restart(); // one call: returns the elapsed time and resets the clock
elapsedTime += dt;
and modify the second if statement to
if (elapsedTime.asMilliseconds() > 250)
{
    orcxcell = (orcxcell + 1) % ogreMaxCell;
    elapsedTime = sf::milliseconds(0);
}
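Putting it all together, here is a minimal sketch of the corrected structure (the window setup is illustrative, and the sprite movement and drawing from your code would go where the comments indicate):
#include <SFML/Graphics.hpp>

int main()
{
    sf::RenderWindow window(sf::VideoMode(800, 600), "timing sketch");

    // Declared once, before the loop, so elapsedTime survives across frames.
    sf::Clock clock;
    sf::Time dt;          // delta time
    sf::Time elapsedTime; // accumulator for the animation
    const int ogreMaxCell = 9;
    int orcxcell = 0;

    while (window.isOpen())
    {
        dt = clock.restart(); // returns elapsed time and resets the clock in one call
        elapsedTime += dt;

        // ... move sprites here using velocity * dt.asSeconds() ...

        if (elapsedTime.asMilliseconds() > 250)
        {
            orcxcell = (orcxcell + 1) % ogreMaxCell;
            elapsedTime = sf::milliseconds(0);
        }

        window.clear();
        // ... setTextureRect and draw here ...
        window.display();
    }
    return 0;
}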
sf::Time is just a variable, the clock has to do the counting.
Hope this helps.
P.S. Always write declarations outside of your loop structures. Declaring them inside works okay most of the time, but from time to time it will cause strange errors like this one, or random crashes.

Related

Delta time not working properly with GLFW

In a while loop I am calling an update function of an object:
void ParticleEmitter::update(float deltaTime) {
    time += deltaTime;
    if (time >= delay) {
        addParticle();
        time = 0;
    }
}
time variable is set to 0 by default. It's basically elapsed time.
delay is 0.01, meaning that after each 0.01 seconds it should do whatever is in the if statement.
This seems frame-rate-independent. But it is not. When I cap the FPS to 60 or turn on VSync, fewer particles seem to be added.
The delta time is being calculated properly, I have tested it. But I am confused as to why this code isn't frame-rate-independent.
As mentioned in the comments, you are probably not factoring in the remaining time in deltaTime.
For example, imagine that deltaTime = 0.05 or deltaTime = 0.1. Since you only check that it is greater than or equal to the delay, update generates exactly one particle each time it is called. So if deltaTime is smaller (frame rate higher), update is called more often and more particles get created per second than if deltaTime is bigger.
The solution is to apply a function like the following, which generates a number of particles based on the elapsed time (extracted from my own particle generator):
void particleSystem::generateParticles(float timeStep)
{
    float particlesToCreate = particlesPerSecond * timeStep;
    unsigned int count = static_cast<unsigned int>(particlesToCreate);
    for (unsigned int i = 0; i < count; i++)
    {
        emitParticle();
    }
}
With particlesPerSecond being a configurable parameter in the particleSystem
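One refinement worth noting (my addition, not part of the original generator): the static_cast truncates, so any fractional particles are silently dropped each frame. A sketch that carries the remainder over, assuming the same members plus a hypothetical float member particleDebt initialized to 0:
void particleSystem::generateParticles(float timeStep)
{
    // particleDebt (hypothetical member) accumulates fractional particles
    particleDebt += particlesPerSecond * timeStep;
    unsigned int count = static_cast<unsigned int>(particleDebt);
    particleDebt -= count; // keep only the fractional remainder for next frame
    for (unsigned int i = 0; i < count; i++)
    {
        emitParticle();
    }
}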
Hope it helps.
It's not framerate-independent.
For simplicity, let's assume that deltaTime is fixed.
If it is 0.1, you will add one particle every 0.1 seconds, because 0.1 > 0.01.
If it is 0.01, you will add one particle every 0.01 seconds.
If it is 1/60 (60 FPS), you will add one particle every 1/60 (~0.01667) seconds, because 1/60 is larger than 0.01.
If it is 1/120, you will also add one particle every 1/60 seconds, because 1/120 is smaller than 0.01 and 2 * 1/120 is larger than 0.01.
If you want a fixed number of particles per unit of time, you need to add several per update when the delta is large.
Something like
while (time >= delay)
{
    addParticle();
    time -= delay;
}
You also don't want to set time to 0, but save the "spill time" for the next update.
There is a drawback with this approach; adding more particles takes time so the next update may come later. This can cause a long ripple effect.
If that happens, you will need a more sophisticated approach.
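One common guard (a sketch of my own, not from the answer above; the 0.25 s cap is an arbitrary choice): clamp the accumulated time so a single slow frame can't trigger a long catch-up burst:
void ParticleEmitter::update(float deltaTime) {
    time += deltaTime;
    if (time > 0.25f)  // clamp: never try to catch up more than 0.25 s at once
        time = 0.25f;
    while (time >= delay) {
        addParticle();
        time -= delay; // keep the spill time for the next update
    }
}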

Limiting FPS in C++

I'm currently making a game in which I would like to limit the frames per second but I'm having problems with that. Here's what I'm doing:
I'm getting the deltaTime through this method that is executed each frame:
void Time::calc_deltaTime() {
    double currentFrame = glfwGetTime();
    deltaTime = currentFrame - lastFrame;
    lastFrame = currentFrame;
}
deltaTime has the value I would expect (around 0.012 to 0.016).
Then I'm using deltaTime to delay the frame through the Windows Sleep function like this:
void Time::limitToMAXFPS() {
    if (1.0 / MAXFPS > deltaTime)
        Sleep((1.0 / MAXFPS - deltaTime) * 1000.0);
}
MAXFPS is equal to 60, and I'm multiplying by 1000 to convert seconds to milliseconds. Though everything seems correct, I'm still getting more than 60 fps (around 72 fps).
I also tried this method using a while loop:
void Time::limitToMAXFPS() {
    double diff = 1.0 / MAXFPS - deltaTime;
    if (diff > 0) {
        double t = glfwGetTime();
        while (glfwGetTime() - t < diff) { }
    }
}
But I'm still getting more than 60 fps, around 72... Am I doing something wrong, or is there a better way of doing this?
How important is it that you return cycles back to the CPU? To me, it seems like a bad idea to use sleep at all. Someone please correct me if I am wrong, but I think sleep functions should be avoided.
Why not simply use an infinite loop whose body executes only once a certain time interval has passed? Try:
const double maxFPS = 60.0;
const double maxPeriod = 1.0 / maxFPS; // approx. 16.666 ms
bool running = true;
double lastTime = 0.0;
while (running) {
    double time = glfwGetTime();
    double deltaTime = time - lastTime;
    if (deltaTime >= maxPeriod) {
        lastTime = time;
        // code here gets called with max FPS
    }
}
The last time I used GLFW, it seemed to self-limit to 60 fps anyway. If you are doing anything high-performance-oriented (a game or 3D graphics), avoid anything that sleeps, unless you want to use multithreading.
Sleep can be very inaccurate. A common phenomenon seen is that the actual time slept has a resolution of 14-15 milliseconds, which gives you a frame rate of ~70.
Is Sleep() inaccurate?
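A common workaround for Sleep's coarse granularity, sketched here against the question's Time class (the 3 ms safety margin is an assumption of mine; tune it for your system): sleep through most of the wait, then spin for the final stretch:
void Time::limitToMAXFPS() {
    const double target = 1.0 / MAXFPS;
    if (deltaTime >= target)
        return;
    double waitStart = glfwGetTime();
    double remaining = target - deltaTime;
    // Sleep only for the coarse part; Sleep() may overshoot by several ms.
    if (remaining > 0.003)
        Sleep(static_cast<DWORD>((remaining - 0.003) * 1000.0));
    // Spin the last few milliseconds for sub-millisecond accuracy.
    while (glfwGetTime() - waitStart < remaining) { }
}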
I've given up on trying to limit the fps like this... As you said, Windows is very inconsistent with Sleep. My fps average is always 64 fps and not 60. The problem is that Sleep takes an integer (or long integer) argument, so I was casting with static_cast. But I need to pass it a double: sleeping 16 milliseconds each frame is different from 16.6666... That's probably the cause of the extra 4 fps (I think).
I also tried :
std::this_thread::sleep_for(std::chrono::milliseconds(static_cast<long>((1.0 / MAXFPS - deltaTime) * 1000.0)));
and the same thing is happening with sleep_for. Then I tried passing the decimal value remaining from the milliseconds to chrono::microseconds and chrono::nanoseconds, using all three together to get better precision, but guess what: I still get the freaking 64 fps.
Another weird thing: sometimes (yes, this is completely random) when I change 1000.0 in the expression (1.0 / MAXFPS - deltaTime) * 1000.0 to a const integer, making it (1.0 / MAXFPS - deltaTime) * 1000, my fps simply jumps to 74 for some reason, even though the two expressions should evaluate identically. Both are double expressions; I don't think any type promotion is happening here.
So I decided to force V-sync through the function wglSwapIntervalEXT(1); in order to avoid screen tearing. And then I'm going to use the method of multiplying deltaTime into every value that might vary depending on the speed of the computer executing my game. It's going to be a pain, because I might forget to multiply some value and not notice it on my own computer, creating inconsistency, but I see no other way... Thank you all for the help, though.
I've recently started using glfw for a small side project I'm working on, and I've used std::chrono alongside std::this_thread::sleep_until to achieve 60 fps:
auto start = std::chrono::steady_clock::now();
while (!glfwWindowShouldClose(window))
{
    ++frames;
    auto now = std::chrono::steady_clock::now();
    auto diff = now - start;
    auto end = now + std::chrono::milliseconds(16);
    if (diff >= std::chrono::seconds(1))
    {
        start = now;
        std::cout << "FPS: " << frames << std::endl;
        frames = 0;
    }
    glfwPollEvents();
    processTransition(countit);
    render.TickTok();
    render.RenderBackground();
    render.RenderCovers(countit);
    std::this_thread::sleep_until(end);
    glfwSwapBuffers(window);
}
To add: you can easily adjust the FPS target by adjusting end.
With that said, I know glfw is limited to 60 fps by default, but I had to disable that limit with glfwSwapInterval(0); just before the while loop.
Are you sure your Sleep function accepts floating-point values? If it only accepts int, your sleep will be a Sleep(0), which would explain your issue.

C++ Jittery game loop - how to make it as smooth as possible?

This is my gameloop code:
while (shouldUpdate)
{
    timeSinceLastUpdate = 0;
    startTime = clock();
    while (timeAccumulator >= timeDelta)
    {
        listener.handle();
        em.update();
        timeAccumulator -= timeDelta;
        timeSinceLastUpdate += timeDelta;
    }
    rm.beginRender();
    _x->draw();
    rm.endRender();
    timeAccumulator += clock() - startTime;
}
It runs almost perfectly, but it has some jitter to it: a few times a second, instead of _x (a test entity that does nothing but x++ in update) moving 1 pixel to the right, it actually moves 2 pixels to the right, and it's a noticeable lag/jitter effect. I'm guessing clock() isn't accurate enough. So what could I do to improve this game loop?
If it matters, I use SDL and SDL_image.
EDIT: Nothing changed after switching to something more accurate than clock(). BUT, what I have figured out is that it all comes down to timeDelta. This is how timeDelta was defined when I made this post:
double timeDelta = 1000/60; // note: integer division, this is exactly 16
but when I defined it as something else while messing around...
double timeDelta = 16.666666;
I noticed that for the first few seconds after the game started, it was smooth as butter. But a few seconds later, the game stuttered hugely and then went back to being smooth, repeating like that. The more 6's (or anything after the decimal point, really) I added, the longer the game was initially smooth and the harder the lag hit when it came. It seems floating-point errors are at work. So what can I do then?
EDIT2: I've tried so much stuff now it's not even funny... can someone help me with the math part of the loop? Since that's what's causing this...
EDIT3: I sent a test program to some people, some said it was perfectly smooth and others said it was jittery like how I described it. For anyone that would be willing to test it here it is(src): https://www.mediafire.com/?vfpy4phkdj97q9j
EDIT4: I changed the link to source code.
It will almost certainly be because of the accuracy of clock().
Use either std::chrono or SDL_GetTicks() to measure time since epoch.
I would recommend using std::chrono just because I prefer C++ api's, here's an example:
#include <chrono>
#include <cstdint>

int main() {
    using clock = std::chrono::high_resolution_clock;
    using milliseconds = std::chrono::milliseconds;
    using std::chrono::duration_cast;
    auto start = clock::now(), end = clock::now();
    uint64_t diff;
    bool running = true;
    while (running) {
        diff = duration_cast<milliseconds>(end - start).count();
        start = clock::now();
        // do time-difference-related things here
        end = clock::now();
    }
}
To only update after a specified delta, you'd do your loop like this:
int main() {
    using clock = std::chrono::high_resolution_clock;
    using std::chrono::duration_cast;
    using std::chrono::milliseconds;
    using std::chrono::nanoseconds;
    auto start = clock::now(), end = clock::now();
    uint64_t diff = duration_cast<milliseconds>(end - start).count();
    auto accum_start = clock::now();
    bool running = true;
    while (running) {
        start = clock::now();
        diff = duration_cast<milliseconds>(start - end).count(); // time since the previous frame's end
        if (duration_cast<nanoseconds>(clock::now() - accum_start).count() >= 16666666) {
            // do render updates every 60th of a second
            accum_start = clock::now();
        }
        end = clock::now();
    }
}
start and end will both be of the type std::chrono::time_point<clock>, where clock was previously defined by us as std::chrono::high_resolution_clock.
The difference between two time_points is a std::chrono::duration, which can be in nanoseconds, milliseconds, or any other unit you like. We cast it to milliseconds, then call count() to assign it to our uint64_t. You could use some other integer type if you liked.
My diff is how you should be calculating your timeDelta. You should not be setting it as a constant. Even if you are certain it will be correct, you are wrong: every frame will have a different delta, even if only by the smallest fraction of a second.
If you want to set a constant frame difference, use SDL_GL_SetSwapInterval to set vertical sync.
EDIT
Just for you, I created this example git repo. Notice in main.cpp where I multiply by diff to get the adjusted difference per frame. This adjustment (or lack of it) is where your jitteriness comes from.
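To illustrate the kind of adjustment meant here (a sketch of mine, not taken from the repo; updateEntity and pixelsPerSecond are made-up names): instead of a bare x++, scale the movement by the measured delta:
// diff is the frame time in milliseconds, as computed in the loops above
void updateEntity(double& x, uint64_t diff) {
    const double pixelsPerSecond = 60.0; // hypothetical speed constant
    x += pixelsPerSecond * (static_cast<double>(diff) / 1000.0); // same speed at any frame rate
}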

Fixed Timestep at 2500 FPS?

I am using SFML to make a 2D platformer. I've read so many timestep articles, but they don't work well for me. I am implementing a 2500 FPS timestep; on my desktop PC it's amazingly smooth, on my laptop it gets 300 FPS (I checked with Fraps). It's not that smooth on the laptop, but still playable.
Here are the code snippets:
sf::Clock clock;
const sf::Time TimePerFrame = sf::seconds(1.f/2500.f);
sf::Time TimeSinceLastUpdate = sf::Time::Zero;
sf::Time elapsedTime;
These are the variables, and here is the game loop:
while (!quit) {
    elapsedTime = clock.restart();
    TimeSinceLastUpdate += elapsedTime;
    while (TimeSinceLastUpdate > TimePerFrame) {
        TimeSinceLastUpdate -= TimePerFrame;
        Player::instance()->handleAll();
    }
    Player::instance()->render();
}
In Player.h, I've got movement constants:
const float GRAVITY = 0.35 /2500.0f; // Uses += every frame
const float JUMP_SPEED = -400.0f/2500.0f; //SPACE -> movementSpeed.y = JUMP_SPEED;
//When character is touching to ground
const float LAND_ACCEL = 0.075 /2500.0f; // These are using +=
const float LAND_DECEL = 1.5 /2500.0f;
const float LAND_FRICTION = 0.5 /2500.0f;
const float LAND_STARTING_SPEED = 0.075; // This uses =, instead of +=
In the handleAll function of Player class, there is
cImage.move(movementSpeed);
checkCollision();
And lastly, the checkCollision function simply checks whether the character's master bounding box intersects the object's rectangle on each side, sets the x or y speed to 0, then fixes the overlap by setting the character's position to the edge.
// Collision
if (masterBB().intersects(objectsIntersecting[i]->GetAABB())) {
    // HORIZONTAL
    if (leftBB().intersects(objectsIntersecting[i]->GetAABB())) {
        if (movementSpeed.x < 0)
            movementSpeed.x = 0;
        cImage.setPosition(objectsIntersecting[i]->GetAABB().left + objectsIntersecting[i]->GetAABB().width + leftBB().width, cImage.getPosition().y);
    }
    else if (rightBB().intersects(objectsIntersecting[i]->GetAABB())) {
        if (movementSpeed.x > 0)
            movementSpeed.x = 0;
        cImage.setPosition(objectsIntersecting[i]->GetAABB().left - rightBB().width, cImage.getPosition().y);
    }
    // VERTICAL
    if (movementSpeed.y < 0 && topBB().intersects(objectsIntersecting[i]->GetAABB())) {
        movementSpeed.y = 0;
        cImage.setPosition(cImage.getPosition().x, objectsIntersecting[i]->GetAABB().top + objectsIntersecting[i]->GetAABB().height + masterBB().height / 2);
    }
    if (movementSpeed.y > 0 && bottomBB().intersects(objectsIntersecting[i]->GetAABB())) {
        movementSpeed.y = 0;
        cImage.setPosition(cImage.getPosition().x, objectsIntersecting[i]->GetAABB().top - masterBB().height / 2);
        // and some state updates
    }
}
I tried to use a 60 FPS timestep like a million times, but all the speed variables become very slow. I can't simply apply *2500.0f / 60.0f to all the constants; it doesn't feel the same. If I pick close constants it feels "ok", but then when a collision happens the character's position gets reset constantly and it flies off the map, because of the big overlap with the object caused by the high speed constants applied every frame, I guess...
I should add: the book I took the timestep code from uses
cImage.move(movementSpeed*TimePerFrame.asSeconds());
but as you saw, I just put /2500.0f on every constant instead of using it.
So, is 1/2500 seconds per frame good? If not, how can I change all of these to 1/60.0f?
You're doing it wrong.
Your monitor most likely has a refresh rate of 60 Hz (= 60 FPS), so trying to render an image at 2500 FPS is a huge waste of resources. If the only reason for choosing 2500 FPS is that your movement doesn't work the same otherwise, haven't you considered that the problem might be with the movement code?
At best you'd implement a fixed timestep (famous article); that way your physics can run at whatever rate you want (2500 updates per second would still be crazy, so don't do that) and is independent from your rendering rate. So even if you get varying FPS, it won't influence your physics.
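For reference, a minimal sketch of what that looks like with the question's own loop, using per-second constants and a 1/60 s step (the constant value is only an illustrative conversion of the per-frame one above):
const sf::Time TimePerFrame = sf::seconds(1.f / 60.f);
const float GRAVITY = 875.f; // units per second^2: 0.35/2500 per frame^2 at 2500 FPS

sf::Clock clock;
sf::Time TimeSinceLastUpdate = sf::Time::Zero;
while (!quit) {
    TimeSinceLastUpdate += clock.restart();
    while (TimeSinceLastUpdate > TimePerFrame) {
        TimeSinceLastUpdate -= TimePerFrame;
        movementSpeed.y += GRAVITY * TimePerFrame.asSeconds(); // speeds kept in units/second
        cImage.move(movementSpeed * TimePerFrame.asSeconds()); // the book's approach
        checkCollision();
    }
    Player::instance()->render();
}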

Time Step in PhysX

I'm trying to define a time step for the physics simulation in a PhysX application, such that the physics will run at the same speed on all machines. I wish for the physics to update at 60FPS, so each update should have a delta time of 1/60th of a second.
My application must use GLUT. Currently, my loop is set up as follows.
Idle Function:
void GLUTGame::Idle()
{
    newElapsedTime = glutGet(GLUT_ELAPSED_TIME);
    deltaTime = newElapsedTime - lastElapsedTime;
    lastElapsedTime = newElapsedTime;
    glutPostRedisplay();
}
The frame rate does not really matter in this case - it's only the speed at which my physics update that actually matters.
My render function contains the following:
void GLUTGame::Render()
{
    // Rendering Code
    simTimer += deltaTime;
    if (simTimer > m_fps)
    {
        m_scene->UpdatePhys(m_fps);
        simTimer = 0;
    }
}
Where:
Fl32 m_fps = 1.f / 60.f;
However, this results in some very slow updates, because deltaTime appears to equal 0 on most loops (which shouldn't actually be possible...). I've tried moving my deltaTime calculation to the bottom of my rendering function, thinking that maybe the idle callback was being called too often, but this did not solve the issue. Any ideas what I'm doing wrong here?
From the OpenGL website, we find that glutGet(GLUT_ELAPSED_TIME) returns the number of elapsed milliseconds as an int. So, if your void GLUTGame::Idle() method is called about 2000 times per second, the time passed between two such calls is about 1000 * 1/2000 = 0.5 ms. Thus, at more than 2000 calls per second, the difference between two consecutive readings rounds down to 0 whole milliseconds most of the time.
Likely you're adding very small numbers to larger ones and you get rounding errors.
Try this:
void GLUTGame::Idle()
{
    newElapsedTime = glutGet(GLUT_ELAPSED_TIME);
    timeDelta = newElapsedTime - lastElapsedTime;
    if (timeDelta < m_fps * 1000.0) return; // m_fps is in seconds, timeDelta in ms
    lastElapsedTime = newElapsedTime;
    glutPostRedisplay();
}
You can do something similar in the other method if you want to.
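If sub-millisecond resolution matters, one alternative (my suggestion, not part of the answer; it assumes deltaTime can be stored as a double in seconds) is to time the idle callback with std::chrono instead of GLUT's integer millisecond counter:
#include <chrono>

void GLUTGame::Idle()
{
    using clock = std::chrono::steady_clock;
    static auto last = clock::now();
    auto now = clock::now();
    // Seconds since the previous call, with sub-millisecond precision.
    deltaTime = std::chrono::duration<double>(now - last).count();
    if (deltaTime < m_fps) return; // m_fps is the 1/60 s period from the question
    last = now;
    glutPostRedisplay();
}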
I don't know anything about GLUT or PhysX, but here's how to have something execute at the same rate (using integers) no matter how fast the game runs:
if (currentTime - lastUpdateTime > msPerUpdate)
{
    DWORD msPassed = currentTime - lastUpdateTime;
    int updatesPassed = msPassed / msPerUpdate;
    for (int i = 0; i < updatesPassed; i++)
        UpdatePhysX(); // or whatever function you use
    lastUpdateTime = currentTime - msPassed + (updatesPassed * msPerUpdate);
}
Where currentTime is updated to timeGetTime() every run through the game loop, lastUpdateTime is the last time PhysX updated, and msPerUpdate is the number of milliseconds you assign to each update - 16 or 17 ms for 60 fps.
If you want to support floating-point update factors (which is recommended for a physics application), then define float timeMultiplier and update it every frame like so: timeMultiplier = desiredFrameRate / (float)frameRate; - where frameRate is the measured frame rate and desiredFrameRate is 60.0f if you want the physics updating at 60 fps. To do this, you have to change UpdatePhysX to take a float parameter that it multiplies all update factors by.
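A sketch of that last suggestion (measureFrameRate is a hypothetical helper; the physics code is assumed to multiply its per-update factors by the parameter):
const float desiredFrameRate = 60.0f;
float frameRate = measureFrameRate();                // hypothetical helper
float timeMultiplier = desiredFrameRate / frameRate; // 1.0 at exactly 60 fps
UpdatePhysX(timeMultiplier);                         // scales all update factors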