I am working on a game, following a tutorial online. Currently I have FPS regulation built into the system, and a simple animation which uses pieces of a sprite sheet, like so:
if( frameCount > 12 )
frameCount = 0;
//hero frames
SDL_Rect clip[ 13 ];
clip[ 0 ].x = 0;
clip[ 0 ].y = 0;
clip[ 0 ].w = 44;
clip[ 0 ].h = 39;
clip[ 1 ].x = 51;
clip[ 1 ].y = 0;
clip[ 1 ].w = 44;
clip[ 1 ].h = 39;
clip[ 2 ].x = 102;
clip[ 2 ].y = 0;
clip[ 2 ].w = 44;
clip[ 2 ].h = 39;
...
...
SDL_BlitSurface( hero, &clip[ frameCount ], destination, &offset );
frameCount++;
Now this works just fine, and each iteration of the while loop will cause it to play the next frame in the animation (This animation is part of a character class by the way).
The problem I am facing is the speed of the animation. It takes place at the current FPS of the game, which is 60. I want to be able to control the speed of the player animation separately, so I can slow it down to a reasonable speed.
Does anyone have any suggestions on how I could go about doing this?
note: There are 13 frames all together.
You have to separate your refresh rate (60 fps) from your animation rate. The best solution, in my opinion, is to tie your animation rate to the real-time clock. In SDL, you can do this with the SDL_GetTicks() function, which measures elapsed time in millisecond resolution.
As an example, consider this sketch (not working code):
// animationRate, animationLength and startTime are assumed to be member
// or global variables in this sketch
int animationRate;    // animation frames per second
int animationLength;  // total number of frames
Uint32 startTime;     // SDL_GetTicks() value when the animation started

void init() {
    animationRate = 12;
    animationLength = 13;
    startTime = SDL_GetTicks();
}

void drawSprite() {
    int frameToDraw = ((SDL_GetTicks() - startTime) * animationRate / 1000) % animationLength;
    SDL_BlitSurface( hero, &clip[ frameToDraw ], destination, &offset );
}
In case it isn't clear, frameToDraw is computed from how much time has passed since the animation started playing. You multiply that by the animation rate to get how many frames, at that rate, have elapsed in total, then apply the modulo operator to wrap that number into the range of your animation length; that gives you the frame you need to draw at that time.
If your refresh rate is slower than your animation rate, your sprite will skip frames to keep up with the requested animation rate. If the refresh rate is faster, the same frame will be drawn repeatedly until it is time to display the next one.
I hope this helps.
What Miguel has written works great. Another method is to use a timer: set it to fire at a certain frequency and, each time it fires, increment your sprite index.
Note that you should always make your rendering and logic separate.
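A minimal sketch of that timer approach, using SDL2's SDL_AddTimer (the frame counter and the 12-frames-per-second rate are illustrative, not the original poster's code; the timer subsystem must be initialized with SDL_INIT_TIMER, and the callback runs on a separate thread, hence the atomic counter):

#include <SDL.h>

static SDL_atomic_t frameCount;            // shared with the render loop
static const int animationLength = 13;     // total frames in the sheet

// Fired by SDL roughly every `interval` milliseconds; returning `interval`
// keeps the timer running at the same rate.
static Uint32 AdvanceFrame(Uint32 interval, void* /*param*/)
{
    SDL_AtomicSet(&frameCount, (SDL_AtomicGet(&frameCount) + 1) % animationLength);
    return interval;
}

void startAnimationTimer()
{
    // 12 animation frames per second -> fire about every 83 ms.
    SDL_AddTimer(1000 / 12, AdvanceFrame, NULL);
}

// In the render loop, read the current frame and blit it as before:
//   SDL_BlitSurface(hero, &clip[SDL_AtomicGet(&frameCount)], destination, &offset);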
Hello everybody: I have a problem with player movement.
I have a player character in an empty scene that moves from point A to B and back to A at constant speed. For the first hour of the game running it was okay, but after three hours the movement became slower and slower. Thanks in advance.
Game.h
D3DXMATRIX Player_Matrix ; //main player matrix .
D3DXVECTOR3 PlayerPos; //main player position .
D3DXVECTOR3 PlayerLook; //main player Look at position .
Game.cpp
//Initialize()
D3DXMatrixIdentity(&Player_Matrix);
PlayerPos = D3DXVECTOR3(10.0f,0.0f,10.0f);
PlayerLook = D3DXVECTOR3(0.0f,0.0f,1.0f);
.
//MovePlayer()
//declarations
static float angle = D3DXToRadian(0);
float Speed = 70.0f ;
PlayerPos += ( PlayerLook * ( Speed * (m_timeDelta)) );
if(PlayerPos.x >= 320) // 320:(B)
{
angle = D3DXToRadian(180);
}
if(PlayerPos.x <= 0) // 0:(A)
{
angle = D3DXToRadian(180);
}
//Setting up player matrices
D3DXMATRIX TransMat, RotMat, TempMat;
D3DXMatrixIdentity(&TempMat);
D3DXMatrixIdentity(&RotMat);
D3DXMatrixIdentity(&TransMat);
//Setup Rotation matrix .
D3DXMatrixRotationY(&RotMat,angle);
angle = 0.0f ;
//Attach PlayerLook Vector to rotation matrix
D3DXVec3TransformCoord(&PlayerLook,&PlayerLook,&RotMat);
//combine the rotation matrix with the player matrix
D3DXMatrixMultiply(&Player_Matrix,&Player_Matrix,&RotMat);
//transmat is an empty matrix to collect new player position
D3DXMatrixTranslation(&TransMat, PlayerPos.x,PlayerPos.y, PlayerPos.z);
//multiply new position matrix with main player matrix
D3DXMatrixMultiply(&TempMat,&Player_Matrix,&TransMat);
d3ddev->SetTransform(D3DTS_WORLD,&TempMat);
Main_Player->Render();
I would look into how you're calculating m_timeDelta. Perhaps your methodology is allowing floating point error to build up.
Here's an article on the subject: https://randomascii.wordpress.com/2012/02/13/dont-store-that-in-a-float/
I'm with Tom Forsyth, and think that a 64-bit integer is the best storage type for absolute times (http://home.comcast.net/~tom_forsyth/blog.wiki.html#[[A%20matter%20of%20precision]]), but double will work fine too.
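As a rough illustration of that idea (a minimal sketch, not the poster's code; all names here are made up), keep the absolute times as raw 64-bit counter values and only convert the small difference to floating point:

#include <windows.h>

// Absolute times stored as raw 64-bit performance-counter ticks.
static __int64 g_lastTicks = 0;
static __int64 g_ticksPerSecond = 0;

void InitTimer()
{
    QueryPerformanceFrequency((LARGE_INTEGER*)&g_ticksPerSecond);
    QueryPerformanceCounter((LARGE_INTEGER*)&g_lastTicks);
}

// Returns the time since the last call, in seconds. Only the (small) tick
// difference is converted to floating point, so precision does not degrade
// as the absolute time grows over hours of play.
float GetDeltaSeconds()
{
    __int64 now = 0;
    QueryPerformanceCounter((LARGE_INTEGER*)&now);
    __int64 deltaTicks = now - g_lastTicks;
    g_lastTicks = now;
    return (float)((double)deltaTicks / (double)g_ticksPerSecond);
}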
PlayerLook seems suspicious.
It is being iteratively rotated every frame; with floating-point error it is possible it is gradually shrinking, probably only on the frames where the rotation changes.
You could confirm this by looking at its value in the debugger after several hours of running, or you could eliminate it as a possibility by renormalizing it every frame and seeing whether the slowdown disappears.
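If PlayerLook is indeed drifting, a minimal fix (a sketch against the code from the question; D3DXVec3Normalize is the standard D3DX call) is to renormalize it right after the rotation:

// After applying the rotation, renormalize PlayerLook so accumulated
// floating-point error cannot shrink (or grow) its length over time.
D3DXVec3TransformCoord(&PlayerLook, &PlayerLook, &RotMat);
D3DXVec3Normalize(&PlayerLook, &PlayerLook);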
I've just found both the problem and its solution.
As I explained before, the whole app works fine; the problem is that the movement (just x++, y++, z++) gets slower and slower after a period of time. At first I thought it was a memory leak, but the animation and delta time work fine, so I tried to find the real cause.
I copied a release build of the app to another PC; after long periods the app still worked fine there, and at that point I noticed the FPS would not reach 60.0 frames per second. After searching the MS DirectX SDK, inside DXUT I found a struct that controls the FPS, and a doc about the GPU and acceleration that advises controlling and limiting the FPS. Here is the code:
//-----------------------------------------------------------------------------
// Name: LockFrameRate()
// Desc: Limit the frame rate to the specified value
//-----------------------------------------------------------------------------
bool LockFrameRate(int frame_rate, float SecPerCnt)
{
    static __int64 StartTime = 0;
    __int64 CurTime = 0;
    QueryPerformanceCounter((LARGE_INTEGER*)&CurTime);
    // Get the elapsed time by subtracting the last time from the current time
    float CurrentSecond = (float)((CurTime - StartTime) * SecPerCnt);
    // If the desired frame-rate amount of seconds has passed -- return true (i.e. Blit())
    if (CurrentSecond > (1.0f / frame_rate))
    {
        // Reset the last time
        StartTime = CurTime;
        return true;
    }
    return false;
}
// int WINAPI WinMain(....)

//***************************
// Initialize timing performance.
//***************************
// Store counts per second
__int64 CountPerSec = 0;
// Get how many counts the CPU does per second
QueryPerformanceFrequency((LARGE_INTEGER*)&CountPerSec);
// Get seconds per count so it works with different types of CPUs
float SecondPerCount = 1.0f / (float)CountPerSec;
// Initial previous time
__int64 PrevTime = 0;
QueryPerformanceCounter((LARGE_INTEGER*)&PrevTime);

while (msg.message != WM_QUIT) // while not quit message, go on
{
    if (PeekMessage(&msg, NULL, 0U, 0U, PM_REMOVE))
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    else if (LockFrameRate(60, SecondPerCount)) // If it is time to draw, do so; I selected 60 fps as the limit
    {
        // Capture current time
        __int64 CurTime = 0;
        QueryPerformanceCounter((LARGE_INTEGER*)&CurTime);
        // Calculate delta time
        float DeltaTime = (float)((CurTime - PrevTime) * SecondPerCount);
        // Engine loop
        Engine->Engine_Run(DeltaTime, SecondPerCount);
        // After the frame ends, set previous time to current time
        PrevTime = CurTime;
    }
    //else
    //    Sleep(1); // Give the OS a little bit of time to process other things
}
I commented out the Sleep(1) call; I know that without it the loop just burns CPU, but I left it in the listing for reference, since on today's computers the OS already schedules idle time between the app and other processes.
If you try it you may notice some occasional screen stops; these are unwanted.
Thank you Stack Overflow, and thank you guys.
I am using SFML to make a 2D platformer. I have read so many timestep articles, but they don't work well for me. I am implementing something like a 2500 FPS timestep: on my desktop PC it's amazingly smooth; on my laptop it gets 300 FPS (I checked with Fraps), which is not that smooth but still playable.
Here are the code snippets:
sf::Clock clock;
const sf::Time TimePerFrame = sf::seconds(1.f/2500.f);
sf::Time TimeSinceLastUpdate = sf::Time::Zero;
sf::Time elapsedTime;
Those are the variables, and here is the game loop:
while (!quit) {
    elapsedTime = clock.restart();
    TimeSinceLastUpdate += elapsedTime;

    while (TimeSinceLastUpdate > TimePerFrame) {
        TimeSinceLastUpdate -= TimePerFrame;
        Player::instance()->handleAll();
    }

    Player::instance()->render();
}
In Player.h, I've got movement constants:
const float GRAVITY = 0.35 /2500.0f; // Uses += every frame
const float JUMP_SPEED = -400.0f/2500.0f; //SPACE -> movementSpeed.y = JUMP_SPEED;
//When character is touching to ground
const float LAND_ACCEL = 0.075 /2500.0f; // These are using +=
const float LAND_DECEL = 1.5 /2500.0f;
const float LAND_FRICTION = 0.5 /2500.0f;
const float LAND_STARTING_SPEED = 0.075; // This uses =, instead of +=
In the handleAll function of the Player class, there is:
cImage.move(movementSpeed);
checkCollision();
And lastly, the checkCollision function simply checks whether the character's master bounding box intersects the object's rectangle on each side, sets the x or y speed to 0, and then fixes the overlap by setting the character position to the edge.
//Collision
if (masterBB().intersects(objectsIntersecting[i]->GetAABB())) {
    //HORIZONTAL
    if (leftBB().intersects(objectsIntersecting[i]->GetAABB())) {
        if (movementSpeed.x < 0)
            movementSpeed.x = 0;
        cImage.setPosition(objectsIntersecting[i]->GetAABB().left + objectsIntersecting[i]->GetAABB().width + leftBB().width, cImage.getPosition().y);
    }
    else if (rightBB().intersects(objectsIntersecting[i]->GetAABB())) {
        if (movementSpeed.x > 0)
            movementSpeed.x = 0;
        cImage.setPosition(objectsIntersecting[i]->GetAABB().left - rightBB().width, cImage.getPosition().y);
    }
    //VERTICAL
    if (movementSpeed.y < 0 && topBB().intersects(objectsIntersecting[i]->GetAABB())) {
        movementSpeed.y = 0;
        cImage.setPosition(cImage.getPosition().x, objectsIntersecting[i]->GetAABB().top + objectsIntersecting[i]->GetAABB().height + masterBB().height/2);
    }
    if (movementSpeed.y > 0 && bottomBB().intersects(objectsIntersecting[i]->GetAABB())) {
        movementSpeed.y = 0;
        cImage.setPosition(cImage.getPosition().x, objectsIntersecting[i]->GetAABB().top - masterBB().height/2);
        //and some state updates
    }
}
I have tried to use a 60 FPS timestep like a million times, but all the speed variables become far too slow. I can't simply apply *2500.0f / 60.0f to all the constants; it doesn't feel the same. If I pick constants that feel close, it feels "ok", but then when a collision happens the character's position keeps getting reset and it flies out of the map, I guess because of the big overlap with the object caused by the high speed constants being applied every frame...
I should add that the book I took the timestep code from normally uses
cImage.move(movementSpeed*TimePerFrame.asSeconds());
but as you saw, I just applied /2500.0f to every constant and don't use it.
So, is 1/2500 seconds per frame good? If not, how can I change all of these to 1/60.0f?
You're doing it wrong.
Your monitor most likely has a refresh rate of 60 Hz (= 60 FPS), so trying to render an image at 2500 FPS is a huge waste of resources. If the only reason for choosing 2500 FPS is that your movement doesn't work otherwise, have you considered that the problem might then be with the movement code?
At best you'd implement a fixed timestep (famous article); that way your physics can run at whatever rate you want (2500 "FPS" would still be crazy, so don't do it) and is independent of your rendering rate. So even if you get a varying FPS, it won't influence your physics.
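As a rough sketch of how that might look in SFML at a 60 Hz fixed timestep (the Player::instance() calls, handleAll/render and the original constants come from the question; expressing the constants per second and scaling by dt inside the update is my assumption):

#include <SFML/System.hpp>

// Physics runs at a fixed 60 updates per second.
const sf::Time TimePerFrame = sf::seconds(1.f / 60.f);
const float dt = TimePerFrame.asSeconds();

// Constants expressed per *second* (converted from the old per-2500-Hz values),
// then scaled by dt inside each update instead of being pre-divided by 2500.
const float GRAVITY    = 0.35f * 2500.f;  // units per second^2
const float JUMP_SPEED = -400.f;          // units per second

sf::Clock clock;
sf::Time timeSinceLastUpdate = sf::Time::Zero;
while (!quit)
{
    timeSinceLastUpdate += clock.restart();
    while (timeSinceLastUpdate > TimePerFrame)
    {
        timeSinceLastUpdate -= TimePerFrame;
        // Inside handleAll(), the per-second constants are scaled by dt, e.g.
        //   movementSpeed.y += GRAVITY * dt;
        //   cImage.move(movementSpeed * dt);
        Player::instance()->handleAll();
    }
    Player::instance()->render();
}

With the constants stored per second and every update scaled by dt, the same values work at any fixed rate, so switching from 2500 Hz to 60 Hz no longer changes how the game feels.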
I am animating my sprite which looks like this:
I made a variable which increments by 64 every time I press W, since each sprite is 64 x 64. It works, but there is blinking. Here is my code; it is in the draw method, by the way.
if (sf::Keyboard::isKeyPressed(sf::Keyboard::W)) {
    animator += 64;
}
else {
    animator = 0;
}

if (animator > 512) {
    animator = 0;
}

playerSprite.setTextureRect(sf::IntRect(0, animator, 64, 64));
window.draw(playerSprite);
Any help would be appreciated, thanks.
You shouldn't implement the frame change this way: as written, the change depends on the framerate and not on the elapsed time.
You should have a timer and change the frame every [FRAME_DELAY] interval.
For example, every 200 ms.
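A minimal sketch of that with sf::Clock (animator, playerSprite and the 64-pixel frame size come from the question; FRAME_DELAY and animClock are illustrative names, and the clock must persist between frames rather than being recreated in the draw method):

const sf::Time FRAME_DELAY = sf::milliseconds(200); // advance one frame every 200 ms
sf::Clock animClock;                                // member/static, not reset every draw

if (sf::Keyboard::isKeyPressed(sf::Keyboard::W))
{
    if (animClock.getElapsedTime() >= FRAME_DELAY)
    {
        animClock.restart();
        animator += 64;       // next 64x64 frame in the sheet
        if (animator > 512)
            animator = 0;     // wrap back to the first frame
    }
}
else
{
    animator = 0;             // idle frame when W is not held
}

playerSprite.setTextureRect(sf::IntRect(0, animator, 64, 64));
window.draw(playerSprite);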
I'm trying to define a time step for the physics simulation in a PhysX application, such that the physics will run at the same speed on all machines. I wish for the physics to update at 60 FPS, so each update should have a delta time of 1/60th of a second.
My application must use GLUT. Currently, my loop is set up as follows.
Idle Function:
void GLUTGame::Idle()
{
    newElapsedTime = glutGet(GLUT_ELAPSED_TIME);
    deltaTime = newElapsedTime - lastElapsedTime;
    lastElapsedTime = newElapsedTime;
    glutPostRedisplay();
}
The frame rate does not really matter in this case - it's only the speed at which my physics update that actually matters.
My render function contains the following:
void GLUTGame::Render()
{
    // Rendering Code

    simTimer += deltaTime;
    if (simTimer > m_fps)
    {
        m_scene->UpdatePhys(m_fps);
        simTimer = 0;
    }
}
Where:
Fl32 m_fps = 1.f/60.f
However, this results in some very slow updates, due to the fact that deltaTime appears to equal 0 on most loops (which shouldn't actually be possible...) I've tried moving my deltaTime calculations to the bottom of my rendering function, as I thought that maybe the idle callback was called too often, but this did not solve the issue. Any ideas what I'm doing wrong here?
From the OpenGL website, we find that glutGet(GLUT_ELAPSED_TIME) returns the number of milliseconds passed as an int. So, if you call your GLUTGame::Idle() method about 2000 times per second, the time passed between two such calls is about 1000 * 1/2000 = 0.5 ms. Thus, at more than 2000 calls per second to GLUTGame::Idle(), the integer difference between consecutive calls truncates to 0 ms, which is why deltaTime is 0 on most iterations.
Likely you're adding very small numbers to larger ones and you get rounding errors.
Try this:
void GLUTGame::Idle()
{
    newElapsedTime = glutGet(GLUT_ELAPSED_TIME);
    timeDelta = newElapsedTime - lastElapsedTime;
    // m_fps is in seconds (1/60) while timeDelta is in milliseconds,
    // so compare against m_fps * 1000
    if (timeDelta < m_fps * 1000.f) return;
    lastElapsedTime = newElapsedTime;
    glutPostRedisplay();
}
You can do something similar in the other method if you want to.
I don't know anything about GLUT or PhysX, but here's how to have something execute at the same rate (using integers) no matter how fast the game runs:
if (currentTime - lastUpdateTime > msPerUpdate)
{
    DWORD msPassed = currentTime - lastUpdateTime;
    int updatesPassed = msPassed / msPerUpdate;

    for (int i = 0; i < updatesPassed; i++)
        UpdatePhysX(); // or whatever function you use

    lastUpdateTime = currentTime - msPassed + (updatesPassed * msPerUpdate);
}
Here currentTime is updated to timeGetTime() every run through the game loop, lastUpdateTime is the last time PhysX updated, and msPerUpdate is the number of milliseconds you assign to each update: 16 or 17 ms for 60 fps.
If you want to support floating-point update factors (which is recommended for a physics application), then define float timeMultiplier and update it every frame like so: timeMultiplier = desiredFrameRate / (float)frameRate; - where frameRate is self-explanatory and desiredFrameRate is 60.0f if you want the physics updating at 60 fps. To do this, change UpdatePhysX to take a float parameter that it multiplies all update factors by, so each update advances less when the game runs faster than the desired rate and more when it runs slower.
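A minimal sketch of that variant (timeMultiplier, desiredFrameRate, frameTimeMs and the UpdatePhysX signature follow the description above and are illustrative, not an existing API):

// frameTimeMs: how long the previous frame took, in milliseconds
// (measured with timeGetTime() as in the loop above).
float desiredFrameRate = 60.0f;
float frameRate = 1000.0f / (float)frameTimeMs;       // current frames per second
float timeMultiplier = desiredFrameRate / frameRate;  // > 1 when running slower than 60 fps

// UpdatePhysX scales every per-step quantity by the multiplier, so the
// simulation advances by the same amount of game time per real second:
UpdatePhysX(timeMultiplier);

// e.g. inside UpdatePhysX(float t):
//     position += velocityPerStep * t;  // velocityPerStep was tuned for 60 fps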
I am using SFML 2.1 with C++. I want to know how we can handle our player's movement with a delay, for example:
if(right-key-is-pressed)
{
player.move(5, 0);
}
Now we want the player to move 5 units, but we want it to take 2 seconds to do it.
How can we do it in SFML?
You have to use sf::Clock in order to time your game.
http://www.sfml-dev.org/documentation/2.1-fr/classsf_1_1Clock.php
One way to do this is to introduce the concept of a frame. Every second can have several frames, say 60, and each frame is a snapshot of the objects' state at a given time point in that second. You compute the state of the objects for the new frame, render and show it on screen at that time point, and then continue on to compute the next frame.
In your example, say I have 10 frames per second. The speed of the player is 5/2 = 2.5 units/second. I compute the player state and get: the player is at 0.25 at 0.1 seconds, 0.5 at 0.2 seconds, 0.75 at 0.3 seconds, and so on.
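A minimal sketch of that idea with sf::Clock (the player object, the 5-unit distance and the 2-second duration come from the question; moveClock, startX and the other names are illustrative):

#include <SFML/Graphics.hpp>

// Started when the right key is first pressed.
sf::Clock moveClock;
float startX = player.getPosition().x;  // position when the move began
const float distance = 5.f;             // move 5 units...
const float duration = 2.f;             // ...over 2 seconds

// Each frame, place the player according to how much of the 2 seconds has elapsed.
float t = moveClock.getElapsedTime().asSeconds() / duration;
if (t > 1.f) t = 1.f;                   // clamp once the move has finished
player.setPosition(startX + distance * t, player.getPosition().y);

At 10 frames per second this reproduces the positions above: 0.25 units after 0.1 s, 0.5 after 0.2 s, and so on, regardless of how many frames are actually rendered.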
int main()
{
    sf::Clock Clock;
    float LastTime = 0;

    // Main game loop
    while (Window.isOpen())
    {
        /* ... */

        // Seconds elapsed since the previous frame
        float CurTime = Clock.restart().asSeconds();
        float FPS = 1.0f / CurTime;   // current frame rate, if you want to display it
        LastTime = CurTime;

        /* Multiply the object's per-second velocity by the elapsed time,
           so the distance moved per frame is independent of the frame rate */
        Player->Shape.move(Vel.x * CurTime, Vel.y * CurTime);

        Window.clear();
        Window.display();
    }
    return 0;
}
Since CurTime = Clock.restart().asSeconds() is the elapsed time in seconds, you may (more than likely) have to significantly increase the value of your object's velocity, because it now represents units per second rather than units per frame.