Lerping issue with timers - C++

I have been having an issue related to timers when lerping objects in my game engine. The lerping is almost correct: when I apply it to an object that is moving or rotating, the motion is fine, except that every few seconds the object appears to flash briefly back to its previous position before continuing to move smoothly.
Running the engine in windowed mode gives me ~1500 fps, but if I run in full screen with vsync clamping the frame rate to 60 fps, the glitch happens much more often.
I have been trying to find a good resource or explanation of lerping and how I can improve what I have.
For working out the tick gap I use:
float World::GetTickGap()
{
    float gap = (float)(TimeMs() - m_lastTick) / m_tickDelay;
    return gap > 1.f ? 1.f : gap;
}
My update function:
m_currentTick = TimeMs();
if(m_currentTick > m_lastTick + m_tickDelay)
{
    m_lastTick = m_currentTick;
    //Update actors
}
Then when rendering each actor I am giving the tick gap for them to lerp between their positions.
My lerping function:
float math::Lerp(float a, float b, float t)
{
    return a + t * (b - a);
}
And an example of the lerping function being called:
renderPosition.x = (math::Lerp(m_LastPosition.x, m_Position.x, tickDelay));
I'm unsure where to start in trying to fix this problem. As far as I can tell it is a timing issue in these functions, but could anything else cause a small dip in performance at a constant rate?
Any help with this problem would be greatly appreciated. :)

I'm not really able to reconstruct your code from what you posted, but calling your time function more than once per frame is generally a bad idea, and you seem to do that. Try thinking about what effect that has.
E.g. it might mean that the "update actors" loop is out of sync with the tick-gap intervals, so actors are updated a second time with a gap of 0.
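To make that point concrete, here is a minimal sketch of a fixed-timestep loop that samples the clock once per frame and derives both the update decision and the interpolation factor from that single timestamp (the names are hypothetical, not the asker's actual API):

```cpp
#include <cassert>

// Hypothetical fixed-timestep loop: the clock is read ONCE per frame
// (nowMs), and both the update decision and the render interpolation
// factor are derived from that single sample.
struct FixedStepLoop {
    double lastTickMs = 0.0;   // time of the most recent fixed update
    double tickDelayMs = 50.0; // fixed update interval (20 Hz here)

    // Runs the frame's fixed updates; returns how many ran.
    int frame(double nowMs) {
        int updates = 0;
        while (nowMs - lastTickMs >= tickDelayMs) {
            lastTickMs += tickDelayMs; // advance by the step; don't snap to now
            ++updates;                 // ...update actors here...
        }
        return updates;
    }

    // Interpolation factor for rendering, from the SAME timestamp.
    double tickGap(double nowMs) const {
        double gap = (nowMs - lastTickMs) / tickDelayMs;
        return gap > 1.0 ? 1.0 : gap;
    }
};
```

Note the second fix hidden in the sketch: `lastTickMs` advances by the step size instead of snapping to the current time, so the leftover fraction of a tick isn't discarded each update; snapping is another common source of the periodic stutter described above.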

Related

Box2D: how can I get same physics simulation if I use slow motion?

I have a functional personal 2D game engine that uses Box2D. I am trying to implement fast- and slow-motion effects for my physics simulation. I implemented fast motion by calling the Step function more frequently; this way I get the same simulation result no matter how much I increase the physics update rate. The problem lies in slowing the simulation down. If I use the same technique, the simulation gives the same result, but it looks like frames are dropped because the physics is not updating as frequently, which is expected but not what I am looking for.
I tried implementing a new way to update the physics world: stepping the world by updateRate * timeScale when timeScale is less than 1. It fixed the laggy updates, but now my simulation produces a different result from the one at time scales of 1 or above.
I am looking to get the same simulation results for a time scale of 1, any number above 1, and any number below 1, while keeping the simulation smooth in slow motion. This is the code I have, for reference.
bool slowMotion = false;
float physicsScaledTimeFactor = Time::getPhysicsScaledTimeFactor();
if (physicsScaledTimeFactor >= 1) {
    float deltaUpdateTime = Time::getTimeRunningF() - Time::getPhysicsLoopRunTimeF();
    if (deltaUpdateTime < physicsTimeStep / physicsScaledTimeFactor)
        return;
}
else {
    slowMotion = true;
    float deltaUpdateTime = Time::getTimeRunningF() - Time::getPhysicsLoopRunTimeF();
    if (deltaUpdateTime < physicsTimeStep)
        return;
}
Time::updatePhysicsLoop();

// Update / step physics world
if (slowMotion)
    physicsWorld->Step(physicsTimeStep * physicsScaledTimeFactor, physicsVelocityIterations, physicsPositionIterations);
else
    physicsWorld->Step(physicsTimeStep, physicsVelocityIterations, physicsPositionIterations);
Here is how the simulation results look for time scales of 1 or above:
Here is how they look for time scales below 1:
As you can see, the squares end up in different positions. Is there a way to get the same simulation results under the conditions described, or is this acceptable as it is?
Update:
I checked other engines and they exhibit the same laggy effect. Still, would there be a way to get the same simulation results? The slow-motion effect looks very smooth, and it would be nice to have it work as expected.
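For what it's worth, the usual way to keep results identical at every time scale is to never change the Step size at all: always step with the same fixed dt, and let the time scale control only how much simulated time is owed. A sketch (the stepper class is hypothetical; the real `physicsWorld->Step` call would go where the comment is):

```cpp
#include <cassert>

// Sketch of deterministic time scaling: the world is ALWAYS stepped with
// the same fixed dt, and timeScale only changes how much simulated time
// is owed. Identical step sizes give identical results at any timeScale.
struct ScaledStepper {
    double accumulator = 0.0;         // simulated time owed, in seconds
    const double fixedDt = 1.0 / 60.0;

    // Feed in real elapsed time; returns how many fixed steps to run.
    int advance(double realDt, double timeScale) {
        accumulator += realDt * timeScale;
        int steps = 0;
        while (accumulator >= fixedDt) {
            accumulator -= fixedDt;
            ++steps; // physicsWorld->Step(fixedDt, velIters, posIters);
        }
        return steps;
    }
};
```

Slow motion then simply produces fewer steps per rendered frame; visual smoothness comes from interpolating rendered transforms between the previous and current physics states, not from shrinking the step size.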

Rotating a 3D object at a (fast) constant rate in QT

This seems like a pretty simple problem, but I have tried an implementation of what I thought would work and it only works "kind of".
What I expect:
To be able to rotate an object about an axis at a constant, fairly fast rate (roughly 120 RPM, i.e. 2 revolutions per second).
What I have tried:
My basic implementation was a QTimer that times out (emitting the timeout signal) after a certain number of milliseconds. Each timeout rotates the 3D object by a set amount, such as 0.1 degrees.
Sample code doing this would be something like:
//Setup timer and 3D object (Q3DObject is just an example and may not even exist; it doesn't matter)
QTimer *rotationTimer = new QTimer();
Q3DObject *object = new Q3DObject(...);

void main()
{
    //Connect signal and slot
    connect(rotationTimer, SIGNAL(timeout()), this, SLOT(updateRotation()));
    rotationTimer->start(10); //Timeout every 10 ms
}

//This is a slot
void updateRotation()
{
    //Get the current rotation from the object, then "add" a 0.1-degree quaternion rotation about the -x axis.
    object->setRotation(object->rotation() * QQuaternion::fromAxisAndAngle(-1.0f, 0.0f, 0.0f, 0.1f));
}
The problem with this implementation is that even with a timeout of 1 ms it is very slow, since it only increases the angle by 0.1 degrees per millisecond: the maximum change is 100 degrees per second, which is far too slow. Making the 0.1-degree increment bigger does increase the speed, but at the cost of smoothness; larger steps produce a jittery-looking rotation.
I feel there is a far better way of achieving my goal, but I just can't think of anything. I also suspect this approach is not the most computationally efficient way of rotating an object.
Does anyone know a better way to achieve this effect? I'll keep researching in the meantime.
It seems that what you want is an animation of the rotation, so it is better to use a QVariantAnimation. That class interpolates values between the ranges you set, so it is no longer necessary to define a minimum change (the 0.1 degrees you use).
Q3DFoo foo;

QVariantAnimation animation;
animation.setStartValue(QVariant(0.0));
animation.setEndValue(QVariant(360.0));
animation.setDuration(5 * 1000);
animation.setLoopCount(-1);
QObject::connect(&animation, &QVariantAnimation::valueChanged, &foo, [&foo](const QVariant &value){
    foo.setRotation(QQuaternion::fromAxisAndAngle(-1.0f, 0.0f, 0.0f, value.toFloat()));
});
animation.start();
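One detail worth checking against the asker's target rate: at about 2 revolutions per second, one 0..360-degree loop must take 500 ms, not the 5000 ms in the snippet above (which spins at only 12 RPM). The conversion is plain arithmetic (the helper name is made up):

```cpp
#include <cassert>

// Hypothetical helper: milliseconds for one full 0..360-degree loop
// at a given rate in revolutions per minute (RPM).
int loopDurationMs(double rpm) {
    return static_cast<int>(60000.0 / rpm + 0.5); // 60,000 ms per minute
}
```

With it, `animation.setDuration(loopDurationMs(120.0));` would give roughly the 120 RPM the question asks for.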

Cocos2D BezierBy with increasing speed over time

I'm pretty new to C++/Cocos2d, but I've been making pretty good progress. :)
What I want to do is animate a coin "falling off" the screen after a player collects it. I've successfully implemented it in two different ways, but each has major downsides.
The goal: after a player gets a coin, the coin should "jump", then "fall" off the screen. Ideally the coin acts as if under gravity: it jumps up quickly, slows to a stop, then falls downward at an increasing rate.
Attempts so far:
void Coin::tick(float dt) {
    velocityY += gravity * dt;
    float newX = coin->getPositionX() + velocityX;
    float newY = coin->getPositionY() + velocityY;
    coin->setPosition(newX, newY);
    // using MoveBy(dt, Vec2(newX, newY)) has the same result
}
// This is run on every 'update' of the main game loop.
This method does exactly what I would like as far as movement goes; however, the frame rate gets extremely choppy and the coin starts to "jump" between frames, sometimes by quite significant distances.
ccBezierConfig bz;
bz.controlPoint_1 = Vec2(0, 0);
bz.controlPoint_2 = Vec2(20, 50); // These are just test values. Will normally be randomized to a degree.
bz.endPosition = Vec2(100, -2000);
auto coinDrop = BezierBy::create(2, bz);
coin->runAction(coinDrop);
This one has the benefit of a "perfect" frame rate with no choppiness whatsoever; however, it moves at a constant rate, which ruins the feeling of falling and makes it look like the coin is arbitrarily following some set path. (Which, well, it is.)
Has anybody run into a similar situation or know of a fix? Either to smooth out the frame rate of the first approach (MoveBy/To don't help; the choppiness remains) or to programmatically vary the speed along the second one's curve (changing speed going to/from certain points on the curve)?
Another idea I've had is to chain a number of MoveBy actions with different speeds, but that would produce awkward "pointy" curves and abrupt speed changes, so it's not really a solution.
Any ideas/help are/is greatly appreciated. :)
Yes, I have run into a similar situation. This is where "easing" comes in handy. There are many built-in easing functions you can use, such as Ease In or Ease Out. Your new code would look something like:
coin->runAction(cocos2d::EaseBounceOut::create(coinDrop));
This page shows the graphs for several common easing methods:
http://cocos2d-x.org/docs/programmers-guide/4/index.html
For your purposes (increasing speed over time) I would recommend trying the 'EaseIn' method.
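Separately from easing, the choppiness of the first attempt may come from the velocities being applied per frame rather than per second: `velocityX` and `velocityY` are added to the position without a `* dt`, so a long frame under-moves the coin and the next frame looks like a jump. A frame-rate-independent sketch (a standalone stand-in for the asker's classes, with made-up units):

```cpp
#include <cassert>

// Frame-rate-independent integration sketch: velocities are in
// units/second and every term is scaled by dt, so a long frame moves the
// coin proportionally farther in one go instead of producing a visible
// jump on the following frame.
struct Coin {
    float x = 0.0f, y = 0.0f;
    float velocityX = 0.0f, velocityY = 0.0f;
    float gravity = -980.0f; // units/second^2, hypothetical value

    void tick(float dt) {
        velocityY += gravity * dt;
        x += velocityX * dt; // note the * dt, unlike the first snippet
        y += velocityY * dt;
    }
};
```

With dt-scaled movement, the first approach keeps its natural gravity feel while behaving consistently at any frame rate.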

How to add camera damping?

I asked a question about how to add camera damping in Ogre but didn't get any answer, so here is a more general question.
How would you add camera damping?
I googled this and got answers for XNA and Unity, but each is different from the others, so I can't even figure out what technique, function, or maths they are using.
I have a camera and its position, and an object with a position where I want the camera to end up; I want to move the camera there slowly. How can I do this?
I tried using lerp but it didn't work. I don't know if that is the wrong approach or if my lerp function is wrong.
Can someone please help me out? Thanks.
Here is my lerp function
Ogre::Vector3 lerp(Ogre::Vector3 &sourceLocation, Ogre::Vector3 &destLocation, Ogre::Real Time)
{
    return sourceLocation + (destLocation - sourceLocation) * Time;
}
In the .cpp file:
this->camPos = this->lerp(this->camPos, this->playerNode->getSceneNode()->getPosition() + Ogre::Vector3(0,60,-100), 1000.0f);
this->getCamera()->setPosition(this->camPos);
but the camera just ends up miles away from the object.
Thanks for answering, Peter. It makes a bit more sense now: the lerp function was just returning a long vector since the time was constant. However, I'm not sure about the second part.
Do I need a variable that increments with each frame?
frametime += frame_event.timeSinceLastFrame * 0.01; // frametime is an Ogre::Real member
this->camPos = this->lerp(this->camPos, this->playerNode->getSceneNode()->getPosition() + Ogre::Vector3(0,60,-100), frametime);
This does slowly move the camera towards the target and then stop, but since frametime keeps increasing, the time it takes to reach the target keeps shrinking as well. Do I just reset frametime to 0 when it reaches the destination?
Can you please explain a bit more about the second part? I would really appreciate your help.
Thanks
Your lerp calculation is the issue: you're taking the vector between dest and source and massively scaling it up.
Your lerp time should not be constant; it should scale from 0 to 1 over the time period you want to take going from source to dest.
Before moving:

float length = (dest - start).Length();

In Update():

float distanceTravelled = (CurrentTime - StartTime) * cameraSpeed;
float lerp = distanceTravelled / length;

Pass lerp to your function as the time parameter. The faster the camera speed, the quicker you get there.
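For what the question calls "camera damping", a common alternative to the timed lerp above is exponential smoothing: each frame the camera covers a fixed fraction of the remaining distance per unit time, with no start-time or path-length bookkeeping. A sketch (the `stiffness` constant is a made-up tuning value):

```cpp
#include <cassert>
#include <cmath>

// Exponential-smoothing damp (a common technique, not from the answer
// above): move toward the target by a fixed fraction of the remaining
// distance per unit time. Higher 'stiffness' = snappier camera.
float damp(float current, float target, float stiffness, float dt) {
    // 1 - exp(-k*dt) makes the result frame-rate independent: two half
    // steps land in the same place as one full step.
    float t = 1.0f - std::exp(-stiffness * dt);
    return current + (target - current) * t;
}
```

Usage: each frame, `camPos = damp(camPos, desiredPos, 5.0f, dtSeconds);` per axis (or the vector equivalent). The camera eases toward the target and settles without overshooting, and there is nothing to reset when it arrives.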

How to pause an animation with OpenGL / glut

To achieve an animation, I am just redrawing things in a loop.
However, I need to be able to pause when a key is pressed. I know the way I'm doing it now is wrong, because it eats all of my cycles while the loop is running.
Which way is better, and will allow pausing and resuming on a key press?
I tried using a bool flag, but obviously the flag isn't checked until the loop is done.
You have the correct very basic architecture sorted, in that everything needs to be updated in a loop, but you need to make your loop a lot smarter for a game (or other application requiring OpenGL animations).
However, I need to be able to pause when a key is pressed.
A basic way of doing this is to have a boolean value paused and to wrap the game in a loop.
while(!finished) {
    while(!paused) {
        update();
        render();
    }
}
Typically, however, you still want to do things such as look at your inventory, craft things, etc. while the game is paused, so many games still run their main loop while paused; they just don't let the actors know any time has passed.
For instance, it sounds like your animation frames are simply visible for a set number of game frames. This is a bad idea, because if the frame rate is higher or lower on a different computer, the animation speed will look wrong there. You can consider my answer here, and the linked samples, to see how to achieve framerate-independent animation by specifying animation frames in terms of millisecond durations and passing the frame time into the update loop. Your main game loop then changes to look like this:
float previousTime = 0.0f;
float thisTime = 0.0f;
float framePeriod = 0.0f;

while(!finished) {
    thisTime = getTimeInMilliseconds();
    framePeriod = thisTime - previousTime; // elapsed time since the last frame
    update(framePeriod);
    render();
    previousTime = thisTime;
}
Now everything in the game that gets updated knows how much time has passed since the previous frame. This is helpful for all your physics calculations, since our physical formulae are expressed in terms of time plus starting and decay factors (for instance, the SUVAT equations). The same information can be used to make your animations framerate-independent, as I have described with some links to examples here.
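As a concrete illustration of millisecond-duration animation frames (the names are made up, not from any particular engine):

```cpp
#include <cassert>
#include <vector>

// Sketch of framerate-independent animation: each frame stores how long
// it stays visible in milliseconds, and update() consumes elapsed time
// rather than counting game loops.
struct Animation {
    std::vector<float> frameDurationsMs; // e.g. {100, 100, 50}
    std::size_t current = 0;
    float elapsedMs = 0.0f;

    void update(float dtMs) {
        elapsedMs += dtMs;
        // Advance (possibly several frames) until the leftover time fits
        // within the current frame's duration; wraps around to loop.
        while (elapsedMs >= frameDurationsMs[current]) {
            elapsedMs -= frameDurationsMs[current];
            current = (current + 1) % frameDurationsMs.size();
        }
    }
};
```

A 50-ms frame now lasts 50 ms whether the game runs at 30 fps or 300 fps; at high framerates `update()` simply leaves `current` unchanged for several calls in a row.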
To answer the next part of the question:
it eats all of my cycles when the loop is going on.
This is because you're using 100% of the CPU and never sleeping. If we want, say, 30 fps on the target device (and we know this is achievable), then the period of one frame is 1/30th of a second. We've just measured how long it takes to update and render our game, so we can sleep for the spare time:
float previousTime = 0.0f;
float thisTime = 0.0f;
float framePeriod = 0.0f;
float availablePeriod = 1000.0f / 30.0f; // one frame's budget, in milliseconds

while (!finished) {
    thisTime = getTimeInMilliseconds();
    framePeriod = thisTime - previousTime;
    update(framePeriod);
    render();
    previousTime = thisTime;
    if (framePeriod < availablePeriod)
        sleep(availablePeriod - framePeriod); // rest of the frame budget, in ms
}
This technique is called framerate governance, since you are manually controlling the rate at which you render and update.