SDL2 smooth texture (sprite) animation between points in time - C++

Currently I'm trying to develop a smooth animation effect via a hardware-accelerated technique (DirectX or OpenGL). My current goal is very simple: I would like to move a texture from point A to point B in a given duration, the classic way to animate objects.
I have read a lot about Robert Penner's interpolations, and for this purpose I would like to animate my texture with the simplest linear interpolation, as described here:
http://upshots.org/actionscript/jsas-understanding-easing
Everything works, except that my animation is not smooth: it is jerky. The reason is not frame dropping; it is some double-to-int rounding aspect. I prepared a very short sample in C++ and the SDL2 lib to show that behavior:
#include "SDL.h"
//my animation linear interpol function
double GetPos(double started, double begin, double end, double duration)
{
return (end - begin) * (double)(SDL_GetTicks() - started) / duration + begin;
}
int main(int argc, char* argv[])
{
//init SDL system
SDL_Init(SDL_INIT_EVERYTHING);
//create windows
SDL_Window* wnd = SDL_CreateWindow("My Window", 0, 0, 1920, 1080, SDL_WINDOW_SHOWN | SDL_WINDOW_BORDERLESS);
//create renderer in my case this is D3D9 renderer, but this behavior is the same with D3D11 and OPENGL
SDL_Renderer* renderer = SDL_CreateRenderer(wnd, 0, SDL_RENDERER_ACCELERATED | SDL_RENDERER_TARGETTEXTURE | SDL_RENDERER_PRESENTVSYNC);
//load image and create texture
SDL_Surface* surf = SDL_LoadBMP("sample_path_to_bmp_file");
SDL_Texture* tex = SDL_CreateTextureFromSurface(renderer, surf);
//get rid of surface we dont need surface anymore
SDL_FreeSurface(surf);
SDL_Event event;
int action = 0;
bool done = false;
//animation time start and duration
double time_start = (double) SDL_GetTicks();
double duration = 15000;
//loop render
while (!done)
{
action = 0;
while (SDL_PollEvent(&event))
{
switch (event.type)
{
case SDL_QUIT:
done = 1;
break;
case SDL_KEYDOWN:
action = event.key.keysym.sym;
break;
}
}
switch (action)
{
case SDLK_q:
done = 1;
default:
break;
}
//clear screen
SDL_SetRenderDrawColor(renderer, 0, 0, 0, 255);
SDL_RenderClear(renderer);
//calculate new position
double myX = GetPos(time_start, 10, 1000, duration);
SDL_Rect r;
//assaign position
r.x = (int) round(myX);
r.y = 10;
r.w = 600;
r.h = 400;
//render to rendertarget
SDL_RenderCopy(renderer, tex, 0, &r);
//present
SDL_RenderPresent(renderer);
}
//cleanup
SDL_DestroyTexture(tex);
SDL_DestroyRenderer(renderer);
SDL_DestroyWindow(wnd);
SDL_Quit();
return 0;
}
I suppose that the jerky animation effect is related to my GetPos(...) function, which works with double values, while I'm rendering with int values. But I can't render to the screen in doubles, because I obviously can't draw at 1.2 px.
My question is:
Do you know any technique, or do you have any advice, on how to make that kind of animation (from, to, duration) smooth, without the jerky effect?
I'm sure that's definitely possible, because frameworks like WPF, WinRT, Cocos2d-x and Android (Java) all support that kind of animation, and their texture/object animation is smooth.
Thanks in advance.
Edit:
As per @genpfault's request in the comments, I'm adding the frame-by-frame x position values, as int and double:
rx: 12 myX: 11.782
rx: 13 myX: 13.036
rx: 13 myX: 13.366
rx: 14 myX: 14.422
rx: 16 myX: 15.544
rx: 17 myX: 16.666
rx: 18 myX: 17.722
rx: 19 myX: 18.91
rx: 20 myX: 19.966
rx: 21 myX: 21.154
rx: 22 myX: 22.21
rx: 23 myX: 23.266
rx: 24 myX: 24.388
rx: 25 myX: 25.444
rx: 27 myX: 26.632
rx: 28 myX: 27.754
rx: 29 myX: 28.81
rx: 30 myX: 29.866
rx: 31 myX: 30.988
rx: 32 myX: 32.044
rx: 33 myX: 33.166
rx: 34 myX: 34.288
rx: 35 myX: 35.344
rx: 36 myX: 36.466
rx: 38 myX: 37.588
rx: 39 myX: 38.644
Final update/solution:
I changed the question title from DirectX/OpenGL to SDL2, because the issue is related to SDL2 itself.
I marked Rafael Bastos' answer as correct because he pushed me in the right direction: the issue is caused by the SDL render pipeline, which is based on int-precision values.
As we can see in the log above, the stuttering is caused by irregular x values which are rounded from floats. To solve the issue I had to change the SDL2 render pipeline to use floats instead of integers.
Interestingly, SDL2 internally uses floats for the opengl, opengles2, d3d9 and d3d11 renderers, but the public SDL_RenderCopy/SDL_RenderCopyEx API is based on SDL_Rect and int values. This causes jerky animation effects whenever the animation is driven by an interpolation function.
What exactly I changed in SDL2 is far beyond the scope of Stack Overflow, but the following points outline what should be done to avoid animation stuttering:
- I moved the SDL_FRect and SDL_FPoint structs from the internal sys_render API into the public render.h API.
- I extended the current SDL methods in rect.h/rect.c to support SDL_FRect and SDL_FPoint, e.g. SDL_HasIntersectionF(...), SDL_IsEmptyF(...) and SDL_IntersectRectF(...).
- I added a new method GetRenderViewPortF, based on GetRenderViewPort, to support float precision.
- I added two new methods, SDL_RenderCopyF and SDL_RenderCopyFEx, to avoid any rounding and to pass the real float values to the internal renderers (see the usage sketch after this list).
- All public functions must be reflected in the dyn_proc SDL API; this requires some knowledge of the SDL architecture.
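For illustration, here is roughly how the float-based copy replaces the int path in the sample's render loop. SDL_FRect and SDL_RenderCopyF here refer to the custom additions described above (stock SDL later gained equivalent public float functions in 2.0.10):

double myX = GetPos(time_start, 10, 1000, duration);

SDL_FRect r;
r.x = (float) myX; // no rounding to int: the renderer receives the real value
r.y = 10.0f;
r.w = 600.0f;
r.h = 400.0f;

SDL_RenderCopyF(renderer, tex, NULL, &r);
SDL_RenderPresent(renderer);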
To avoid SDL_GetTicks() and any other timing precision issues, I decided to change my interpolation step from time-based to frame-based. For example, to calculate the animation duration I am no longer using:

float start = SDL_GetTicks();
float duration = some_float_value_in_milliseconds;

I replaced that with:

float step = 0;
float duration = some_float_value_in_milliseconds / MonitorRefreshRate;

and now I'm incrementing step++ after each rendered frame (a sketch follows below).
Of course this has a side effect: if my engine drops some frames, the animation time no longer equals the duration, because it is frame-dependent.
And of course this duration calculation is only valid when VSYNC is on; it is useless when vblank is off.
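A minimal sketch of that frame-stepped interpolation, assuming VSYNC is on so that SDL_RenderPresent blocks until vblank. The duration-in-frames formula and MonitorRefreshRate are illustrative; a real value could come from SDL_GetDesktopDisplayMode:

float begin = 10.0f;
float end = 1000.0f;
float duration_frames = 15.0f * MonitorRefreshRate; // 15 s, e.g. 900 frames at 60 Hz
float step = 0.0f;

while (!done)
{
    float t = step / duration_frames;      // interpolation ratio in [0, 1]
    float myX = (end - begin) * t + begin; // same linear formula, float all the way

    SDL_FRect r = { myX, 10.0f, 600.0f, 400.0f };
    SDL_RenderCopyF(renderer, tex, NULL, &r);
    SDL_RenderPresent(renderer);           // with VSYNC on, one iteration per frame

    if (step < duration_frames)
        step += 1.0f;                      // advance exactly one step per rendered frame
}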
And now I have really smooth, jerk-free animations with timeline functions.
@genpfault and @RafaelBastos, thanks for your time and for your advice.

It seems you need to subtract started from SDL_GetTicks(). Something like this:

(end - begin) * ((double)SDL_GetTicks() - started) / duration + begin

(end - begin) gives you the total movement.
(SDL_GetTicks() - started) / duration gives you the interpolation ratio, which, multiplied by the total movement, gives you the amount interpolated; that needs to be added to the begin portion, so you get the absolute interpolated position. For example, with begin = 10, end = 1000 and duration = 15000, at 7500 ms the ratio is 0.5 and the position is (1000 - 10) * 0.5 + 10 = 505.
If that's not it, then it is probably a rounding issue, but if you can only render with int precision, then I think you need to bypass SDL and render using plain OpenGL or DirectX calls, which allow floating-point precision.

Related

Too High CPU Footprint of OpenCV Text Overlay on FHD Video Stream

I want to display an FHD live stream (25 fps) and overlay some (changing) text. For this I essentially use the code below.
Basically it is:
Load frame
(cv::putText, skipped here)
Display frame if it's a multiple of delay
but the code is super, super slow compared to e.g. mpv, and consumes way too much CPU time (cv::useOptimized() == true).
So far delay is my inconvenient fiddle parameter to somehow make it feasible.
delay == 1 results in 180 % CPU usage (full frame rate)
delay == 5 results in 80 % CPU usage
But delay == 5, i.e. 5 fps, is really sluggish and actually still too much CPU load.
How can I make this code faster, or otherwise better, or otherwise solve the task (I'm not bound to OpenCV)?
P.S. Without cv::imshow the CPU usage is less than 30 %, regardless of delay.
#include <opencv2/opencv.hpp>
#include <X11/Xlib.h>

// process every delay'th frame
#define delay 5

Display* disp = XOpenDisplay(NULL);
Screen* scrn = DefaultScreenOfDisplay(disp);
int screen_height = scrn->height;
int screen_width = scrn->width;

int main(int argc, char** argv) {
    cv::VideoCapture cap("rtsp://url");
    cv::Mat frame;
    if (cap.isOpened())
        cap.read(frame);

    cv::namedWindow("PREVIEW", cv::WINDOW_NORMAL);
    cv::resizeWindow("PREVIEW", screen_width, screen_height);

    int framecounter = 0;
    while (true) {
        if (cap.isOpened()) {
            cap.read(frame);
            framecounter += 1;
            // display only every delay'th frame
            if (framecounter % delay == 0) {
                /*
                 * cv::putText
                 */
                framecounter = 0;
                cv::imshow("PREVIEW", frame);
            }
        }
        cv::waitKey(1);
    }
}
I have now found out about valgrind (repository) and gprof2dot (pip3 install --user gprof2dot):

valgrind --tool=callgrind /path/to/my/binary # produced the file callgrind.out.157532
gprof2dot --format=callgrind --output=out.dot callgrind.out.157532
dot -Tpdf out.dot -o graph.pdf

That produced a wonderful graph saying that over 60 % evaporates in cvResize.
And indeed, when I comment out cv::resizeWindow, the CPU usage drops from 180 % to ~60 %.
Since the screen has a resolution of 1920 x 1200 and the stream is 1920 x 1080, the resize essentially did nothing but burn CPU cycles.
So far, this is still fragile: as soon as I switch to full-screen mode and back, the CPU load goes back to 180 %.
To fix this, it turned out that I can either disable resizing completely with cv::WINDOW_AUTOSIZE ...

cv::namedWindow( "PREVIEW", cv::WINDOW_AUTOSIZE );

... or, as Micka suggested, on OpenCV versions compiled with OpenGL support (-DWITH_OPENGL=ON; my Debian repository version was not), use ...

cv::namedWindow( "PREVIEW", cv::WINDOW_OPENGL );

... to offload the rendering to the GPU, which turns out to be even faster together with resizing (55 % CPU compared to 65 % for me).
It just does not seem to work together with cv::WINDOW_KEEPRATIO.*
Furthermore, it turns out that cv::UMat can be used as a drop-in replacement for cv::Mat, which additionally boosts the performance (as seen by ps -e -o pcpu,args).
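A minimal sketch of that swap, assuming the capture loop from the question; cv::UMat keeps the data in an OpenCL-backed buffer when available, and both VideoCapture::read and cv::imshow accept it:

cv::UMat frame;               // was: cv::Mat frame;
cap.read(frame);              // decodes straight into the UMat
cv::imshow("PREVIEW", frame); // imshow takes an InputArray, so UMat works as-is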
Appendix
[*] So we have to scale it manually and take care of the aspect ratio. Note that the window must be centered using the scaled window dimensions, not the raw image dimensions:

float screen_aspratio = (float) screen_width / screen_height;
float image_aspratio  = (float) image_width / image_height;

if ( image_aspratio >= screen_aspratio ) { // width limited, center window vertically
    int window_height = screen_width / image_aspratio;
    cv::resizeWindow("PREVIEW", screen_width, window_height);
    cv::moveWindow("PREVIEW", 0, (screen_height - window_height) / 2);
}
else { // height limited, center window horizontally
    int window_width = screen_height * image_aspratio;
    cv::resizeWindow("PREVIEW", window_width, screen_height);
    cv::moveWindow("PREVIEW", (screen_width - window_width) / 2, 0);
}
One thing that pops out is that you're creating a new window and resizing it every time you want to display something.
Move these lines

cv::namedWindow( "PREVIEW", cv::WINDOW_NORMAL );
cv::resizeWindow( "PREVIEW", screen_width, screen_height );

to before your while (true) and see if that solves it.

SDL image disappears after 15 seconds

I'm learning SDL and I have a frustrating problem. The code is below.
Even though there is a loop that keeps the program alive, when I load an image and change the x value of the source rect to animate, the loaded image disappears after exactly 15 seconds. This does not happen with static images, only with animations. I'm sure there is a simple thing I'm missing, but I can't see it.
void update() {
    rect1.x = 62 * int((SDL_GetTicks() / 100) % 12);
    /* 62 is the width of a frame, 12 is the number of frames */
}

void shark() {
    surface = IMG_Load("s1.png");
    if (surface != 0) {
        texture = SDL_CreateTextureFromSurface(renderer, surface);
        SDL_FreeSurface(surface);
    }
    rect1.y = 0;
    rect1.h = 90;
    rect1.w = 60;
    rect2.x = 0;
    rect2.y = 0;
    rect2.h = rect1.h + 30; // enlarging the image
    rect2.w = rect1.w + 30;
    SDL_RenderCopy(renderer, texture, &rect1, &rect2);
}

void render() {
    SDL_SetRenderDrawColor(renderer, 0, 0, 100, 150);
    SDL_RenderPresent(renderer);
    SDL_RenderClear(renderer);
}
and in main:
update();
shark();
render();
The SDL_image header is included and linked, and the DLL exists. Could the DLL be broken?
I left out the rest of the program to keep it simple. If this is not enough, I can post the whole thing.
Every time you call the shark function, it loads another copy of the texture. With that in a loop like you have it, you will run out of video memory quickly (unless you are calling SDL_DestroyTexture after every frame, which you have not indicated), at which point you will no longer be able to load textures. Apparently this takes about fifteen seconds for you.
If you're going to use the same image over and over, just load it once, before your main loop, along these lines:
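A minimal sketch, assuming the same renderer/texture/rect globals as in the question (loadShark is a hypothetical helper name):

void loadShark() {  // called once, before the main loop
    SDL_Surface* surface = IMG_Load("s1.png");
    if (surface != 0) {
        texture = SDL_CreateTextureFromSurface(renderer, surface);
        SDL_FreeSurface(surface);
    }
}

void shark() {      // called every frame: only draws, no loading
    SDL_RenderCopy(renderer, texture, &rect1, &rect2);
}

// in main: loadShark(); then the update/shark/render loop;
// and SDL_DestroyTexture(texture); once, at shutdown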
Regarding this line: int ( (SDL_GetTicks() / 100) % 12);
SDL_GetTicks() returns the number of milliseconds that have elapsed since the lib initialized (https://wiki.libsdl.org/SDL_GetTicks). So you're updating with the TOTAL AMOUNT OF TIME since your application started, not the time since the last frame.
You're supposed to keep track of the last time and update the application with how much time has passed since the last update:

Uint32 currentTime = SDL_GetTicks();
int deltaTime = (int)( currentTime - lastTime );
lastTime = currentTime; // declared previously

update( deltaTime );
shark();
render();
Edit: Benjamin is right, the update line works fine.
Still, using deltaTime is good advice. In a game, for instance, you won't use the total time since the beginning of the application; you'll probably want to keep your own counter of how much time has passed (since you started an animation).
But there's nothing wrong with that line for your program anyhow.

Is there a reasonable limit to how many images SDL can render? [duplicate]

I am programming a raycasting game using SDL2.
When drawing the floor, I need to call SDL_RenderCopy pixel by pixel. This leads to a bottleneck which drops the framerate below 10 fps.
I am looking for performance boosts but can't seem to find any.
Here's a rough overview of the performance drop:
int main() {
    while (true) {
        for (x = 0; x < 800; x++) {
            for (y = 0; y < 600; y++) {
                SDL_Rect src = { 0, 0, 1, 1 };
                SDL_Rect dst = { x, y, 1, 1 };
                SDL_RenderCopy(ren, tx, &src, &dst); // this drops the framerate below 10
            }
        }
        SDL_RenderPresent(ren);
    }
}
You should probably be using texture streaming for this. Basically you create an SDL_Texture with access type SDL_TEXTUREACCESS_STREAMING, and then each frame you 'lock' the texture, update the pixels you require, and 'unlock' the texture again. The texture is then rendered in a single SDL_RenderCopy call.
LazyFoo example -
http://lazyfoo.net/tutorials/SDL/42_texture_streaming/index.php
Exploring Galaxy -
http://slouken.blogspot.co.uk/2011/02/streaming-textures-with-sdl-13.html
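A minimal sketch of that approach, assuming an existing SDL_Renderer* ren, an 800x600 output, and a hypothetical computeFloorPixel helper standing in for the raycasting color logic:

SDL_Texture* streamTex = SDL_CreateTexture(ren, SDL_PIXELFORMAT_ARGB8888,
                                           SDL_TEXTUREACCESS_STREAMING, 800, 600);

void renderFrame() {
    void* pixels;
    int pitch; // bytes per row, filled in by SDL

    SDL_LockTexture(streamTex, NULL, &pixels, &pitch);
    for (int y = 0; y < 600; y++) {
        Uint32* row = (Uint32*)((Uint8*)pixels + y * pitch);
        for (int x = 0; x < 800; x++) {
            row[x] = computeFloorPixel(x, y); // per-pixel color from the raycaster
        }
    }
    SDL_UnlockTexture(streamTex);

    SDL_RenderCopy(ren, streamTex, NULL, NULL); // one copy per frame instead of 480,000
    SDL_RenderPresent(ren);
}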
Other than that, calling SDL_RenderCopy 480,000 times per frame is always going to kill your framerate.
You are calling SDL_RenderCopy() 800 * 600 = 480,000 times in each frame! It is normal for performance to drop.

Fixed Timestep at 2500 FPS?

I am using SFML to make a 2D platformer. I read so many timestep articles but they don't work well for me. I am implementing something like a 2500 FPS timestep. On my desktop PC it's amazingly smooth; on my laptop it gets 300 FPS (I checked with Fraps), which is not as smooth, but still playable.
Here are the code snippets:
sf::Clock clock;
const sf::Time TimePerFrame = sf::seconds(1.f / 2500.f);
sf::Time TimeSinceLastUpdate = sf::Time::Zero;
sf::Time elapsedTime;

These are the variables, and here is the game loop:

while (!quit) {
    elapsedTime = clock.restart();
    TimeSinceLastUpdate += elapsedTime;
    while (TimeSinceLastUpdate > TimePerFrame) {
        TimeSinceLastUpdate -= TimePerFrame;
        Player::instance()->handleAll();
    }
    Player::instance()->render();
}
In Player.h, I've got the movement constants:

const float GRAVITY = 0.35f / 2500.0f;      // uses += every frame
const float JUMP_SPEED = -400.0f / 2500.0f; // SPACE -> movementSpeed.y = JUMP_SPEED;
// when the character is touching the ground:
const float LAND_ACCEL = 0.075f / 2500.0f;  // these use +=
const float LAND_DECEL = 1.5f / 2500.0f;
const float LAND_FRICTION = 0.5f / 2500.0f;
const float LAND_STARTING_SPEED = 0.075f;   // this uses =, instead of +=
In the handleAll function of the Player class there is:

cImage.move(movementSpeed);
checkCollision();

And lastly, the checkCollision function simply checks whether the character's master bounding box intersects the object's rectangle on each side, sets the x or y speed to 0, and then fixes the overlap by setting the character position to the edge.
// collision
if (masterBB().intersects(objectsIntersecting[i]->GetAABB())) {
    // HORIZONTAL
    if (leftBB().intersects(objectsIntersecting[i]->GetAABB())) {
        if (movementSpeed.x < 0)
            movementSpeed.x = 0;
        cImage.setPosition(objectsIntersecting[i]->GetAABB().left + objectsIntersecting[i]->GetAABB().width + leftBB().width, cImage.getPosition().y);
    }
    else if (rightBB().intersects(objectsIntersecting[i]->GetAABB())) {
        if (movementSpeed.x > 0)
            movementSpeed.x = 0;
        cImage.setPosition(objectsIntersecting[i]->GetAABB().left - rightBB().width, cImage.getPosition().y);
    }
    // VERTICAL
    if (movementSpeed.y < 0 && topBB().intersects(objectsIntersecting[i]->GetAABB())) {
        movementSpeed.y = 0;
        cImage.setPosition(cImage.getPosition().x, objectsIntersecting[i]->GetAABB().top + objectsIntersecting[i]->GetAABB().height + masterBB().height / 2);
    }
    if (movementSpeed.y > 0 && bottomBB().intersects(objectsIntersecting[i]->GetAABB())) {
        movementSpeed.y = 0;
        cImage.setPosition(cImage.getPosition().x, objectsIntersecting[i]->GetAABB().top - masterBB().height / 2);
        // and some state updates
    }
}
I tried a 60 FPS timestep like a million times, but then all the speed variables become so slow. I can't simply apply * 2500.0f / 60.0f to all the constants; it doesn't feel the same. If I pick similar constants it feels "ok", but then when a collision happens the character's position gets reset over and over and it flies out of the map, because of the big overlap with the object caused by the large per-frame speed constants, I guess...
I should add that the book I took the timestep code from normally uses

cImage.move(movementSpeed * TimePerFrame.asSeconds());

but as you saw, I just put /2500.0f on every constant and don't use it.
So, is 1/2500 seconds per frame good? If not, how can I change all of this to 1/60.0f?
You're doing it wrong.
Your monitor most likely has a refresh rate of 60 Hz (= 60 FPS), so trying to render an image at 2500 FPS is a huge waste of resources. If the only reason for choosing 2500 FPS is that your movement doesn't work the same otherwise, haven't you considered that the problem might be the movement code itself?
At best you'd implement a fixed timestep (famous article), so that your physics can run at whatever rate you want (2500 "FPS" would still be crazy, so don't do it) independently of your rendering rate. Then even if you get varying FPS, it won't influence your physics. A minimal sketch follows below.
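A sketch of a 60 Hz fixed timestep built from the question's own loop; passing the timestep into handleAll is an assumed change, matching the book's movementSpeed * TimePerFrame.asSeconds() style with per-second constants:

sf::Clock clock;
const sf::Time TimePerFrame = sf::seconds(1.f / 60.f); // physics rate, not render rate
sf::Time TimeSinceLastUpdate = sf::Time::Zero;

while (!quit) {
    TimeSinceLastUpdate += clock.restart();

    // run zero or more fixed physics steps to catch up with real time
    while (TimeSinceLastUpdate > TimePerFrame) {
        TimeSinceLastUpdate -= TimePerFrame;
        Player::instance()->handleAll(TimePerFrame.asSeconds()); // dt-scaled movement
    }

    Player::instance()->render(); // render as often as the display allows
}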

C++ / SDL animation speed

So I am working on a game, following a tutorial online. Currently I have some FPS handling built into the system, and a simple animation which uses pieces of a sprite sheet, like so:
if (frameCount > 12)
    frameCount = 0;

// hero frames
SDL_Rect clip[13];
clip[0].x = 0;
clip[0].y = 0;
clip[0].w = 44;
clip[0].h = 39;

clip[1].x = 51;
clip[1].y = 0;
clip[1].w = 44;
clip[1].h = 39;

clip[2].x = 102;
clip[2].y = 0;
clip[2].w = 44;
clip[2].h = 39;
...
SDL_BlitSurface(hero, &clip[frameCount], destination, &offset);
frameCount++;
Now this works just fine, and each iteration of the while loop plays the next frame of the animation (this animation is part of a character class, by the way).
The problem I am facing is the speed of the animation. It runs at the current FPS of the game, which is 60. I want to be able to control the speed of the player animation separately, so I can slow it down to a reasonable speed.
Does anyone have any suggestions on how I could go about doing this?
Note: there are 13 frames altogether.
You have to separate your refresh rate (60 fps) from your animation rate. The best solution, in my opinion, is to tie your animation rate to the real-time clock. In SDL you can do this with the SDL_GetTicks() function, which lets you measure time with millisecond resolution.
As an example, consider this sketch (not working code, just an outline):

void init() {
    animationRate = 12;
    animationLength = 13;
    startTime = SDL_GetTicks();
}

void drawSprite() {
    int frameToDraw = ((SDL_GetTicks() - startTime) * animationRate / 1000) % animationLength;
    SDL_BlitSurface(hero, &clip[frameToDraw], destination, &offset);
}
In case it isn't clear, the frameToDraw variable is computed from how much time has passed since the animation started playing. You multiply that by the animation rate and you get how many whole frames at the animation rate have passed. You then apply the modulo operator to reduce this number to the range of your animation length, and that gives you the frame to draw at that moment.
If your refresh rate is slower than your animation rate, your sprite will skip frames to keep up with the requested animation rate. If the refresh rate is faster, the same frame will be drawn repeatedly until it is time to display the next one.
I hope this helps.
What Miguel has written works great. Another method you can use is a timer: set it to fire at a certain frequency, and on each firing increment your sprite index, as sketched below.
Note that you should always keep your rendering and logic separate.
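A minimal sketch of the timer approach with SDL_AddTimer, which requires SDL_Init(SDL_INIT_TIMER); the 13-frame wrap follows the question, and advanceFrame is a hypothetical callback name. Note that SDL timer callbacks run on a separate thread, so a real game should guard frameCount (e.g. with std::atomic<int>) rather than using a plain int:

Uint32 advanceFrame(Uint32 interval, void* param) {
    int* frame = (int*)param;
    *frame = (*frame + 1) % 13; // 13 frames total
    return interval;            // returning the interval keeps the timer firing
}

// during init, fire every 83 ms for roughly 12 animation frames per second:
int frameCount = 0;
SDL_TimerID timer = SDL_AddTimer(83, advanceFrame, &frameCount);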