Cocos2d 2.0 - 3 numbers on the bottom left - cocos2d-iphone

I have 3 numbers on the bottom left part of the screen on my Cocos2D 2.0 project:
82
0.016
60.0
60 is probably the FPS, but what about the other two? As I remember, previous versions of cocos2d showed just the FPS number.
Any clues? Thanks

82 <-- number of draw calls
0.016 <-- time it took to render the frame, in seconds (1/60 ≈ 0.016 at 60 fps)
60.0 <-- frames per second
The first number (82) is the number of draw calls (which is fairly high). Typically each node that renders something on the screen (sprites, labels, particle fx, etc) increases that number by one. Draw calls are expensive, so it is very important to keep that number down. One way to do so is by batching draw calls - cocos2d v3 does this automatically.
The second number is the time it took to render a frame, in seconds. Since you need to draw a new frame every 0.0166 seconds to achieve 60 frames per second (1/60 ≈ 0.0166…), it's just the inverse of the framerate.
The last number is the number of frames per second aka framerate aka fps. This value, like the previous one, is averaged over several frames so that it doesn't fluctuate as much.
Note that iOS devices always have VSync (vertical synchronization) on. A game can only present a new frame every 0.0166 seconds; if a frame takes even slightly longer to compute (say 0.017 seconds), it misses that interval and has to wait for the next one, effectively halving the framerate to 30 fps. With vsync you can only get fps in discrete steps: 60, 30, 20, 15, 12, 10, ...
Since the fps display is averaged over a couple of frames, it hides this fact. If the display shows 45 fps, that corresponds to a sequence of frames where every other frame took longer than 0.0166 seconds; the per-frame fps of the most recent frames would have been 60, 30, 60, 30, 60, 30.
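To illustrate how the averaging hides the alternation, here is a minimal C++ sketch (not cocos2d's actual stats code; the six-frame window is just an assumption): alternating 60 fps and 30 fps frames are displayed as roughly 45 fps.
#include <cstdio>
#include <deque>
#include <numeric>

// Hypothetical stats overlay: average the per-frame fps of the last few frames.
// Alternating 60 fps and 30 fps frames average out to 45 fps, even though no
// individual frame ever actually ran at 45 fps.
int main() {
    std::deque<double> frameTimes = { 1/60.0, 1/30.0, 1/60.0, 1/30.0, 1/60.0, 1/30.0 };
    double fpsSum = 0.0;
    for (double dt : frameTimes) fpsSum += 1.0 / dt;      // instantaneous fps: 60, 30, ...
    double avgFps = fpsSum / frameTimes.size();           // displayed fps: 45
    double avgFrameTime = std::accumulate(frameTimes.begin(), frameTimes.end(), 0.0)
                        / frameTimes.size();              // displayed frame time: 0.025 s
    std::printf("displayed fps: %.1f, frame time: %.3f s\n", avgFps, avgFrameTime);
}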

The top number is the number of draw calls (roughly, the sprites, labels, etc. in your CCLayer).
The middle is the time per frame.
The bottom is of course your FPS! :)

Related

Is it okay to call SDL_RenderCopy() for each sprite?

This is a followup to my question here: Is it okay to have an SDL_Surface and SDL_Texture for each sprite?
I made a class called Entity, each instance having an SDL_Texture which is set in the constructor. A member function render() is then called for every on-screen entity in a vector, and it uses SDL_RenderCopy() to draw to the renderer.
This render() function includes generating rectangles for each sprite based on their position/camera data.
Is this okay? Is there a faster way?
I made a test level with 96 sprites that each take up 2% of the screen, with tons of overdraw, and the frame time is 15 ms (~65 fps) at a resolution of 1600x900. That seems a little slow for just some sprites, and my computer breathes much heavier than when playing a full game such as Spelunky or Isaac.
Prefer frame time over FPS
You want to measure and judge your performance based on the frame time, not FPS, because the relation between the two is not linear. Going from 20 FPS to 30 FPS requires about 16.7 ms worth of optimization (50 ms down to 33.3 ms per frame). That is the same performance gain it takes to get from 30 FPS to 60 FPS (33.3 ms down to 16.7 ms). So if you judge performance based on FPS, you would conclude that an "optimization" that took a scene from 30 FPS to 60 FPS is better than one that took a 20 FPS scene to 31 FPS, while the latter is actually the bigger improvement.
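To make the non-linearity concrete, here is a tiny sketch that simply redoes the arithmetic above in code (just the numbers, not any particular engine API):
#include <cstdio>

// Frame time in milliseconds for a given framerate.
double frameTimeMs(double fps) { return 1000.0 / fps; }

int main() {
    // 20 -> 30 FPS and 30 -> 60 FPS both require shaving ~16.7 ms off the frame time.
    std::printf("20 -> 30 FPS: %.1f ms saved\n", frameTimeMs(20) - frameTimeMs(30)); // ~16.7
    std::printf("30 -> 60 FPS: %.1f ms saved\n", frameTimeMs(30) - frameTimeMs(60)); // ~16.7
    std::printf("20 -> 31 FPS: %.1f ms saved\n", frameTimeMs(20) - frameTimeMs(31)); // ~17.7
}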
Batch your draws
If you pack all your textures into one and store each individual image's coordinates, you can use the same texture to draw many of your objects. This is limited by the size and number of your textures and also the maximum texture size supported in your environment. In my experience 4096x4096 is safe, but I prefer to use 2048x2048 "texture atlases". There are many utility programs for building such textures; you can easily find a suitable one with a Google search.
In this setup, in addition to an SDL texture, each sprite also stores the x, y, width and height of the region in the "big" texture that contains its particular image. You can make a TextureRegion class; each sprite then has a TextureRegion (see the sketch below). This whole process is often referred to as batching. Look it up. The whole idea is to minimize state changes. I am not sure whether it applies to software rendering or to all SDL2 backends.
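As a rough sketch of the idea (the struct and member names here are made up for illustration, not part of the SDL API): every sprite keeps a source rectangle into one shared atlas texture, and render() always copies from that shared texture, so the renderer never has to switch textures between sprites.
#include <SDL.h>

// Hypothetical region of a shared texture atlas.
struct TextureRegion {
    SDL_Texture* atlas; // the one big texture shared by many sprites
    SDL_Rect     src;   // where this sprite's image lives inside the atlas
};

struct Entity {
    TextureRegion region;
    SDL_Rect      dst;  // where to draw on screen (position/camera already applied)

    void render(SDL_Renderer* renderer) const {
        // All entities sharing the same atlas reuse the same texture,
        // which minimizes texture switches (state changes) in the renderer.
        SDL_RenderCopy(renderer, region.atlas, &region.src, &dst);
    }
};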
Cache your transformations
Batching your sprites will increase performance on the GPU side. The CPU-bound code is another optimization opportunity. Instead of calculating the parameters of SDL_RenderCopy in each frame, calculate them once and cache them. Then, when the position/rotation of the camera or object changes, recalculate the cache. You can do this in the "accessors" of your entity class (like setPosition, setRotation, etc.). Note that instead of directly recalculating the transform as soon as a position or rotation changes, you want to flag the object as "dirty" and check for the dirty flag in your render function: if the object is dirty, recalculate and cache the transform. This prevents redundant calculations when you do this:
//if the dirty flag is not used, each of the following calls
//would have resulted in a recalculation of the transform. By
//using the dirty flag it will be recalculated only once, before
//the rendering of the next frame in the render() function.
player->setPosition(start_x, start_y);
player->setRotation(0);
camera->reset();
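A minimal sketch of that dirty-flag pattern, with class and member names chosen purely for illustration:
// Hypothetical entity that caches its transform and recalculates it lazily.
class Entity {
public:
    void setPosition(float x, float y) { m_x = x; m_y = y; m_dirty = true; }
    void setRotation(float degrees)    { m_rotation = degrees; m_dirty = true; }

    void render() {
        if (m_dirty) {
            recalculateTransform(); // done at most once per frame,
            m_dirty = false;        // no matter how many setters were called
        }
        // ... draw using the cached SDL_RenderCopy parameters ...
    }

private:
    void recalculateTransform() { /* rebuild destination rect, angle, etc. here */ }

    float m_x = 0, m_y = 0, m_rotation = 0;
    bool  m_dirty = true;
};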
So, I've done some more testing by examining the memory/CPU usage of this program at full screen with a "demanding" level, and managed to make it similar to other games by enforcing a framerate cap with SDL_Delay():
float g_max_framerate = 60;
float g_max_frametime = 1 / g_max_framerate * 1000; // target frame time in ms
...
while (!quit) {
    lastticks = ticks;
    ticks = SDL_GetTicks();
    elapsed = ticks - lastticks;
    ...
    SDL_RenderPresent(renderer);
    //lock framerate
    if (elapsed < g_max_frametime) {
        SDL_Delay(g_max_frametime - elapsed);
    }
}
With this limitation it is appropriately low-spec.

Unreal C++ Controller Input: Yaw Rotation

I'm setting up my game character's camera via C++, and I came across the following. Even though it works, I don't understand why the code uses delta time. What is GetDeltaSeconds actually doing?
void AWizardCharater::LookX(float Value)
{
    AddControllerYawInput(Sensitivity * Value * GetWorld()->GetDeltaSeconds());
}
Here is the api ref : https://docs.unrealengine.com/latest/INT/API/Runtime/Engine/GameFramework/APawn/AddControllerYawInput/index.html
Thanks
Using delta time, multiplied by some sensitivity value, is a standard method used throughout games to provide a consistent movement rate, independent of framerate.
Consider the following code, without using delta time:
AddControllerYawInput(1);
If you had a framerate of 10 FPS then you'd be doing 10 degrees per second. If the framerate increases to 100 FPS, you'd be doing 100 degrees per second.
Using delta time makes the movement consistent regardless of framerate, as the time between frames decreases with faster framerate, slowing down the movement.
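As an illustrative sketch (reusing the question's class and a made-up degrees-per-second constant, not code from the project): scaling a per-second rate by delta time means each frame only contributes its share of that second, so the total rotation per second comes out the same at any framerate.
// Rotate at a fixed rate of 90 degrees per second, regardless of framerate.
void AWizardCharater::LookX(float Value)
{
    const float DegreesPerSecond = 90.0f;                 // tuning value, like Sensitivity
    const float DeltaSeconds = GetWorld()->GetDeltaSeconds();

    // At 10 FPS:  DeltaSeconds ~ 0.1  -> ~9 degrees per frame  * 10 frames  = 90 deg/s
    // At 100 FPS: DeltaSeconds ~ 0.01 -> ~0.9 degrees per frame * 100 frames = 90 deg/s
    AddControllerYawInput(DegreesPerSecond * Value * DeltaSeconds);
}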

Get the frame from Video by the time (openCV)

I have a video, and I have some important times in this video. For example:
"frameTime1": "00:00:01.00"
"frameTime2": "00:00:02.50"
"frameTime2": "00:00:03.99"
.
.
.
I get the FPS, and I get the totalFrameCount
If I want to get the frame at one of those times, for example the frame that happens at "frameTime2": "00:00:02.50", I would do the following:
FrameIndex = (Time*FPS)/1000; //1000 because 1 second = 1000 milliseconds
In this case 00:00:02.50 = 2500 milliseconds, and the FPS = 29.
So the FrameIndex in this case is 72.5, and I would choose either frame 72 or 73, but I feel that's not accurate enough. Is there a better solution?
What's the best and accurate way to do this?
The most accurate thing you have at your disposal is the frame time. When you say that an event occurred at 2500ms, where is this time coming from? Why is it not aligned with your framerate? You only have video data points at 2483ms and 2517ms, no way around that.
If you are tracking an object on the video, and you want its position at t=2500, then you can interpolate the position from the known data points. You can do that either by doing linear interpolation between the neighboring frames, or possibly by fitting a curve on the object trajectory and solving for the target time.
If you want to rebuild a complete frame at t=2500 then it's much more complicated and still an open problem.
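If simply grabbing the nearest existing frame is acceptable, a minimal OpenCV sketch could look like this (the file name is hypothetical, and rounding to the nearest frame is a choice, since no frame exists exactly at 2500 ms):
#include <opencv2/opencv.hpp>
#include <cmath>
#include <iostream>

int main() {
    cv::VideoCapture cap("video.mp4");                // hypothetical input file
    double fps = cap.get(cv::CAP_PROP_FPS);           // e.g. 29

    double timeMs = 2500.0;                           // "frameTime2": 00:00:02.50
    int frameIndex = static_cast<int>(std::round(timeMs / 1000.0 * fps)); // nearest frame

    cap.set(cv::CAP_PROP_POS_FRAMES, frameIndex);     // seek to that frame
    cv::Mat frame;
    if (cap.read(frame))
        std::cout << "grabbed frame " << frameIndex << " at ~"
                  << frameIndex / fps * 1000.0 << " ms\n";
}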

Understanding SDL frame animation

I am working through the book SDL game development. In the first project, there is a bit of code meant to move the coords of the rendered frame of a sprite sheet:
void Game::update()
{
    m_sourceRectangle.x = 128 * int((SDL_GetTicks() / 100) % 6);
}
I am having trouble understanding this... I know that it moves m_sourceRectangle 128 pixels along the x axis every 100 ms, but how does it actually work? Can somebody break down each element of this code to help me understand?
I don't understand why SDL_GetTicks() needs to be called to do this...
I also know that %6 is there because there are 6 frames in the animation... but how does it actually do that?
The book says:
Here we have used SDL_GetTicks() to find out the amount of milliseconds since SDL was initialized. We then divide this by the amount of time (in ms) we want between frames and then use the modulo operator to keep it in range of the amount of frames we have in our animation. This code will (every 100 milliseconds) shift the x value of our source rectangle by 128 pixels (the width of a frame), multiplied by the current frame we want, giving us the correct position. Build the project and you should see the animation displayed.
But I am not sure I understand why getting the amount of milliseconds since SDL was initialized works.
The modulo operator takes the remainder of a division. So for example if SDL_GetTicks() is 2600, dividing by 100 first gives 26, and 26 modulo 6 is 2; therefore it's frame 2.
If SDL_GetTicks() is 3300, you divide by 100 and get 33; 33 modulo 6 is 3; frame 3.
Each frame is displayed for 100 ms, so at T=0 ms it's frame 0, at T=100 ms it's frame 100/100 = 1, at T=200 ms it's frame 200/100 = 2, and so on. So at T = SDL_GetTicks() ms, it's frame SDL_GetTicks()/100. But you only have 6 frames in total, cycling, so at T = SDL_GetTicks() ms it is in fact frame (SDL_GetTicks()/100) % 6.
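Written out with named constants, the same line looks like the sketch below (the constant names are mine, not from the book):
void Game::update()
{
    const int FRAME_WIDTH   = 128; // width of one frame in the sprite sheet, in pixels
    const int FRAME_TIME_MS = 100; // how long each frame is shown
    const int NUM_FRAMES    = 6;   // number of frames in the animation

    Uint32 msSinceInit = SDL_GetTicks();                                 // e.g. 2600
    int currentFrame = int((msSinceInit / FRAME_TIME_MS) % NUM_FRAMES);  // 26 % 6 = 2
    m_sourceRectangle.x = FRAME_WIDTH * currentFrame;                    // 128 * 2 = 256
}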
There is an assumption here that frame 0 is displayed when the program starts, which may not be true because there are lots of things to do at startup that take time. But for a simple demo illustrating the cycling of frames, it is good enough.
Hope this helps.

Fixed timestep stuttering with VSync on

In a 2D OpenGL engine I implemented I have a fixed timestep as described in the famous fix your timestep article, along with blending.
I have a test object that moves vertically (y axis). There is stuttering in the movement (preprogrammed movement, not from user input). This means the object does not move smoothly across the screen.
Please see the uncompressed video I am linking: LINK
The game framerate stays at 60fps (Vsync turned on from Nvidia driver)
The game logic updates at a fixed 20 updates/ticks per second, set by me. This is normal. The object moves 50 pixels per update.
However the movement on the screen is severely stuttering.
EDIT: I noticed, by stepping through the recorded video above frame by frame, that the stuttering is caused by a frame being shown twice.
EDIT2: Setting the application priority to Realtime in the task manager completely eliminates the stutter! However this obviously isn't a solution.
Below is the object y movement delta at different times, with VSync turned off
The first column is the elapsed time since the last frame, in microseconds (e.g. 4403).
The second column is the movement on the y axis of the object since the last frame.
Effectively, the object moves 1000 pixels per second, and the log below confirms it.
time since last frame: 4403 ypos delta since last frame: 4.403015
time since last frame: 3807 ypos delta since last frame: 3.806976
time since last frame: 3716 ypos delta since last frame: 3.716003
time since last frame: 3859 ypos delta since last frame: 3.859009
time since last frame: 4398 ypos delta since last frame: 4.398010
time since last frame: 8961 ypos delta since last frame: 8.960999
time since last frame: 7871 ypos delta since last frame: 7.871002
time since last frame: 3985 ypos delta since last frame: 3.984985
time since last frame: 3684 ypos delta since last frame: 3.684021
Now with VSync turned on
time since last frame: 17629 ypos delta since last frame: 17.628906
time since last frame: 15688 ypos delta since last frame: 15.687988
time since last frame: 16641 ypos delta since last frame: 16.641113
time since last frame: 16657 ypos delta since last frame: 16.656738
time since last frame: 16715 ypos delta since last frame: 16.715332
time since last frame: 16663 ypos delta since last frame: 16.663086
time since last frame: 16666 ypos delta since last frame: 16.665771
time since last frame: 16704 ypos delta since last frame: 16.704102
time since last frame: 16626 ypos delta since last frame: 16.625732
I would say they look ok.
This has been driving me bonkers for days, what am I missing?
Below is my Frame function which is called in a loop:
void Frame()
{
    static sf::Time t;
    static const double ticksPerSecond = 20;
    static uint64_t stepSizeMicro = 1000000 / ticksPerSecond; // microseconds
    static sf::Time accumulator = sf::seconds(0);

    gElapsedTotal = gClock.getElapsedTime();
    sf::Time elapsedSinceLastFrame = gElapsedTotal - gLastFrameTime;
    gLastFrameTime = gElapsedTotal;

    if (elapsedSinceLastFrame.asMicroseconds() > 250000)
        elapsedSinceLastFrame = sf::microseconds(250000);

    accumulator += elapsedSinceLastFrame;

    while (accumulator.asMicroseconds() >= stepSizeMicro)
    {
        Update(stepSizeMicro / 1000000.f);
        gGameTime += sf::microseconds(stepSizeMicro);
        accumulator -= sf::microseconds(stepSizeMicro);
    }

    uint64_t blendMicro = accumulator.asMicroseconds() / stepSizeMicro;
    float blend = accumulator.asMicroseconds() / (float) stepSizeMicro;
    if (rand() % 200 == 0) Trace("blend: %f", blend);

    CWorld::GetInstance()->Draw(blend);
}
More info as requested in the comments:
- stuttering occurs both in fullscreen 1920x1080 and in windowed mode 1600x900
- the setup is a simple SFML project; I'm not aware whether it uses VBO/VAO internally when rendering textured rectangles
- I'm not doing anything else on my computer; keep in mind this issue occurs on other computers as well, it's not just my rig
- I am running on the primary display; the display doesn't really make a difference, the issue occurs both in fullscreen and windowed mode
I have profiled my own code. The issue was that there was an area of my code that occasionally had performance spikes due to cache misses. This caused my loop to take longer than 16.666 milliseconds, the maximum time a frame can take and still display smoothly at 60 Hz. It was only one frame, once in a while, but that frame caused the stuttering. The code logic itself was correct; this proved to be a performance issue.
For future reference, in the hope that this will help other people, the way I debugged this was to put an
if (timeSinceLastFrame > 16000) // microseconds
{
    Trace("Slow frame detected");
    DisplayProfilingInformation();
}
in my frame code. When the if is triggered, it displays profiling stats for the functions in the last frame, so I can see which function took the longest in the previous frame. I was thus able to pinpoint the performance bug to a data structure that was not suitable for its usage: a big, nasty map of maps that generated a lot of cache misses and occasionally spiked in performance.
I hope this helps future unfortunate souls.
It seems like you're not synchronizing your 60 Hz frame loop with the GPU's 60 Hz vsync. Yes, you have enabled VSync in the Nvidia driver, but that only causes Nvidia to use a back buffer which is swapped on the vsync.
You need to set the swap interval to 1 and perform a glFinish() to wait for the Vsync.
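Since the question uses SFML, a hedged sketch of that suggestion might look like the following (whether the glFinish() is actually required depends on the driver; this shows the general idea, not a guaranteed fix):
#include <SFML/Window.hpp>
#include <SFML/OpenGL.hpp>

int main() {
    sf::Window window(sf::VideoMode(1600, 900), "Fixed timestep test");
    window.setVerticalSyncEnabled(true); // swap interval = 1: buffer swaps wait for vsync

    while (window.isOpen()) {
        sf::Event event;
        while (window.pollEvent(event))
            if (event.type == sf::Event::Closed) window.close();

        // ... run the fixed-timestep Update/Draw loop here ...

        window.display(); // swap buffers
        glFinish();       // block until the GPU is done, aligning the CPU loop to the vsync
    }
}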
A tricky one, but from the above it seems to me this is not a 'frame rate' problem, but rather something in your 'animate' code. Another observation is the line Update(stepSizeMicro / 1000000.f); the divide by 1000000.f could mean you are losing resolution due to the limited precision of floating-point numbers, so rounding could be your killer.