why is only 60 fps really smooth in cocos2d? - cocos2d-iphone

It has probably been asked before, but I can't find it anywhere...
In videoland, 24 fps and anything above is smooth. Cocos2d seems to be
smooth only when it's at 60 fps or maybe a bit less. Anything between
30 and 50 is certainly not smooth, and the fps counter doesn't seem accurate either...
Why is this? Or is it only me having this problem?

There are actually several reasons for this behavior, and it's not specific to cocos2d but an effect seen in any game engine running in an environment with vertical synchronization (VSYNC) enabled. On iOS, VSYNC is always on; on PCs you usually have the option to turn it off, trading screen tearing for improved framerates when they are consistently below the monitor's refresh rate. LCDs, including those in iOS devices, typically refresh at 60 Hz, allowing a maximum of 60 fps.
Cocos2D 1.x defaults to using the CADisplayLink class for updates, and Cocos2D 2.x uses CADisplayLink exclusively. CADisplayLink synchronizes updates with the screen refresh rate: a notification is sent whenever the screen has finished drawing its contents.
When you get 60 fps, all is fine. But if the game can't finish rendering a frame within one refresh interval, it receives its next update only after the following screen refresh has completed. This effectively halves the framerate as soon as it drops even slightly below 60 fps - in other words, as soon as your update & render cycle takes longer than 16.666 milliseconds to complete. This means that with CADisplayLink updates on iOS you can only get the discrete framerates 60, 30, 20 and 15 fps (60 divided by 1, 2, 3 and 4).
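To make the arithmetic concrete, here's a minimal sketch (plain C++, purely illustrative) of how the display link quantizes the framerate: a frame occupies however many whole 16.666 ms refresh intervals it needs, and the displayed rate is 60 divided by that count.
#include <cmath>
#include <cstdio>
// Effective framerate on a 60 Hz display with vsync'd updates: a frame
// occupies ceil(frameTime / (1000/60)) whole refresh intervals.
double displayedFps(double frameTimeMs) {
    return 60.0 / std::ceil(frameTimeMs / (1000.0 / 60.0));
}
int main() {
    for (double ms : {16.0, 17.0, 34.0, 51.0})
        std::printf("%.1f ms/frame -> %.0f fps\n", ms, displayedFps(ms));
    // prints: 16.0 -> 60, 17.0 -> 30, 34.0 -> 20, 51.0 -> 15
}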
The effect is quite noticeable because a framerate that fluctuates between 60, 30, 20 and 15 fps - even just for a fraction of a second - doesn't feel smooth; the unsteadiness is what we perceive as "not smooth". If your game is affected by this, you might find that limiting the framerate to 30 fps actually makes it appear smoother, and you also gain more time to update & render things between frames.
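If you want to try the 30 fps cap, cocos2d exposes the animation interval for exactly this. A sketch using the cocos2d-x C++ API (the cocos2d-iphone equivalent is CCDirector's setAnimationInterval: method):
#include "cocos2d.h"
// Cap the framerate at 30 fps; with CADisplayLink this amounts to
// running the update on every 2nd screen refresh.
void capAt30Fps() {
    cocos2d::Director::getInstance()->setAnimationInterval(1.0f / 30);
}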
It's the steadiness of the 24 fps movie framerate that is perceived as "smooth", but movie directors have also learned to avoid scenes where the limited framerate becomes all too obvious. For instance, they avoid like the plague what games do all the time: sideways scrolling, i.e. sideways movements of the camera or of objects passing by the camera.
You'll be surprised how much smoother movies can be when you watch The Hobbit - it's the first blockbuster movie to run at 48 fps. You'll immediately notice how much more "real" and "lifelike" the characters in the movie are. To get an impression, check out this unofficial 48 fps The Hobbit trailer.
What cocos2d displays as fps is not an accurate representation of the switches between 60, 30, 20 and 15 fps but the average framerate over several frames. So when cocos2d prints "45 fps" it means that over the past couple of frames the game rendered roughly half of them at 60 fps and the other half at 30 fps.

There are two main problems.
The first is matching the refresh rate of the display - anything else gives irregular motion, which the eye/brain is good at spotting. Failing that, at least run at an integer fraction of the refresh rate.
The second is motion blur. Film and video tend to have motion blur, which fools the viewer into seeing continuous motion.

Related

How can I achieve a near constant FPS with QOpenGLWidget?

I have subclassed QOpenGLWidget.
I use the following surface format:
QSurfaceFormat format;
format.setSwapBehavior(QSurfaceFormat::DoubleBuffer);
format.setSwapInterval(2); // request a buffer swap on every 2nd vsync
setFormat(format);         // applied to the QOpenGLWidget before it is shown
Using this surface format I do not have to call update().
Instead Qt will call my paintGL() at the refresh rate of my monitor (60 Hz) divided by my swapInterval (2).
So in theory I should get a constant FPS of 30. However, when I measure it, the FPS varies a lot.
I measure the time at the start of paintGL() with std::chrono::high_resolution_clock::now(), find the elapsed milliseconds since the last call to paintGL(), and divide 1000 by this to get the FPS.
My paintGL() method always takes less than 1 ms to complete.
My GPU seems to be hardly breaking a sweat, and my CPU also seems fine, although there are some spikes in CPU 2 and 4 utilization.
What can I do to achieve a near constant FPS with QOpenGLWidget?
BTW, calling format.setSwapInterval() does not seem to have any effect. If I set the swap interval to 1, the FPS still fluctuates around 30 and not 60 as I would expect.
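For reference, the measurement described in the question looks roughly like this (a sketch; I've used steady_clock, which is a safer choice for intervals than high_resolution_clock):
#include <chrono>
// Returns the instantaneous FPS derived from the time between two
// successive calls; call it at the start of paintGL().
double measureFps() {
    using clock = std::chrono::steady_clock;
    static clock::time_point last = clock::now();
    auto now = clock::now();
    double ms = std::chrono::duration<double, std::milli>(now - last).count();
    last = now;
    return ms > 0.0 ? 1000.0 / ms : 0.0;  // one sample, not an average
}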

OpenGL framerate: connection with the size of the window

I was in the process of tracking down and eliminating those parts of my C++/OpenGL/GLUT code that were inefficient and slow, and in doing so, I watched my frames per second counter to know if I was actually making progress. I noticed that my frame rate dropped from about 120 to 60 if I maximized the window.
Further experimentation revealed that this was a linear thing: I could change the frame rate by changing the size of the window.
Does this mean that my bottleneck is in the GPU rendering? Surely GPUs these days are more than powerful enough not to notice the difference between 300x300 and 1920x1080? Or am I asking too much of my graphics card?
The alternative is that there is some bug in my code that is causing the system to slow down on larger renders.
What I am asking is this: is it reasonable to expect a halving of framerate when changing the window size, or is there something very wrong?
Further experimentation revealed that this was a linear thing: I could change the frame rate by changing the size of the window.
Congratulations: You discovered fill rate
Does this mean that my bottleneck in in the GPU rendering?
Yes, pretty much. To be specific, the bottleneck is either the bandwidth from/to the graphics memory, or the complexity of the fragment shader, or a combination of both.
Surely GPUs these days are more than powerful enough not to notice the difference between a 300x300 and 1920x1080?
300×300 = 90000
1920×1080 = 2073600
Or in other words: you're asking the GPU to fill about 23 times as many pixels (2073600 / 90000 ≈ 23), which means about 23 times as much data must be moved around and processed.
That drop from 120 Hz to 60 Hz comes from V-Sync. If you disabled V-Sync you'd find that your program would probably reach rates way higher than 60 Hz for 1920×1080, but for 300×300 it will be something below 180 Hz.
The reason for that is simple: when synced to the display's vertical retrace, your GPU can "put out" the next frame only at the moment the display is v-syncing. If your display can do 120 Hz (as yours evidently can) and your rendering takes less than 1/120 s to complete, it makes the deadline and your framerate synchronizes to the display. If drawing a frame takes more than 1/120 s, it syncs with every 2nd retrace; if it takes more than 1/60 s, every 3rd; more than 1/40 s, every 4th; and so on.
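The same arithmetic, parameterized by refresh rate (a sketch; syncedFps and its parameters are illustrative names):
#include <cmath>
// Under vsync the swap waits for the next retrace, so a frame occupies
// ceil(renderSeconds * refreshHz) whole refresh intervals.
double syncedFps(double renderSeconds, double refreshHz) {
    return refreshHz / std::ceil(renderSeconds * refreshHz);
}
// On a 120 Hz display: 1/150 s -> 120 fps, 1/100 s -> 60 fps,
// 1/50 s -> 40 fps, 1/35 s -> 30 fps.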

what if the update method scheduled by scheduleUpdate runs too long in cocos2d-iphone?

After scheduleUpdate, the update:(ccTime)dt method will be called 60 times per second. What happens if, at some point, the update method's running time exceeds 1/60 of a second? Will the next call be cancelled?
The framerate drops. Nothing will be cancelled.
At 60 fps there's exactly 1/60th of a second for cocos2d and your code to process everything that's needed to render a frame, including all OpenGL drawing operations. That's 0.016666666 seconds to do it all.
If one update cycle takes longer than that, the next frame will be rendered after 0.03333333 seconds instead, dropping the framerate to 30 fps if multiple frames in a row take that long. If even that deadline is missed, the next frame update is deferred to 0.05 or even 0.06666666 seconds (20 or 15 fps).
You can only get 60, 30, 20 or 15 fps framerate with cocos2d since it uses CADisplayLink which synchronizes updates with the screen refresh rate. The framerate counter in cocos2d may show 40 fps or something because it averages over multiple frames.
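The practical upshot for your own code: scale movement by the dt you're handed instead of assuming a fixed 1/60 s step, and a slow frame then simply arrives as a larger dt rather than a skipped update. A generic sketch (Player and speed are illustrative, not cocos2d API):
// Movement scaled by the delta time the scheduler passes in.
struct Player { float x = 0; float speed = 100; /* pixels per second */ };
void update(Player& p, float dt) {
    // dt is ~1/60 s at full rate; after a missed deadline it arrives
    // late as ~1/30, 1/20 or 1/15 s instead of being dropped.
    p.x += p.speed * dt;  // same world-speed at any framerate
}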

understanding of FPS and the methods they use

I'm just looking for resources that break down how frames per second work. I know it has something to do with keeping track of ticks and figuring out how many ticks occurred between frames. But I never ran into any resources on why exactly you have to use the methods you use in order to get a smooth framerate. I am trying to get a thorough understanding of this. Can anyone explain or provide any good resources? Thanks
There are basically two approaches.
In ActionScript (and many other engines), you request the player to call a certain function at a certain framerate. For Flash games, you'll set the framerate to be 30 FPS, and then you'll implement a function that listens for ENTER_FRAME events to do what you need to do. This means you get roughly 33 ms per frame (1000ms/30FPS=33.33ms/frame). If your code that responds to ENTER_FRAME takes more than 33 ms, you'll get some stuttering.
In home-rolled main loops (like you'd generally do in C++/SDL, for example), you run the main loop as fast as possible. This means the time between each frame will be variable. You still need to keep the "guts" of your frame code less than 33 ms to make sure you'll get at least 30 FPS, but your game will run faster than 30 FPS if not a lot's going on. To account for this, you need to program all your logic in terms of elapsed time since last frame, and abandon using frames themselves as a unit of time.
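A minimal sketch of such a home-rolled loop in C++/SDL2 (update() and render() are placeholder stubs for your own game code):
#include <SDL.h>
static void update(float dtSeconds) { /* move things by speed * dtSeconds */ }
static void render() { /* draw the current state */ }
int main(int argc, char* argv[]) {
    SDL_Init(SDL_INIT_VIDEO);
    Uint32 last = SDL_GetTicks();
    bool running = true;
    while (running) {
        SDL_Event e;
        while (SDL_PollEvent(&e))
            if (e.type == SDL_QUIT) running = false;
        Uint32 now = SDL_GetTicks();
        float dt = (now - last) / 1000.0f;  // seconds since the last frame
        last = now;
        update(dt);  // logic scaled by elapsed time, not by frame count
        render();    // runs as fast as the machine (or vsync) allows
    }
    SDL_Quit();
    return 0;
}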
http://forums.xna.com/forums/t/42624.aspx
How do you separate game logic from display?
For a continuously variable frame rate you can measure the time the last frame took and assume this frame will take the same length of time. This has the benefit that time runs more or less constantly. Your biggest issue with this approach is that it is entirely possible for a VSync'd game to change from 60 fps to 30 fps and back again on subsequent frames. From experience, a good way to solve this is to average the last few frame times, which smooths the result out. In the 60-to-30 fps switch, each frame then progresses assuming 1/40 s (the average of 1/60 s and 1/30 s), so the 60 fps frames run slow, the 30 fps frames run fast, and the perceived speed stays steady at around 40 fps.
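A sketch of that smoothing (illustrative C++; the 8-sample window is an arbitrary choice):
#include <array>
#include <cstddef>
// Ring buffer averaging the last few frame durations to damp the
// 60/30 fps oscillation described above.
class FrameTimer {
    std::array<double, 8> samples{};
    std::size_t next = 0, count = 0;
public:
    double smoothedDt(double rawDt) {
        samples[next] = rawDt;
        next = (next + 1) % samples.size();
        if (count < samples.size()) ++count;
        double sum = 0;
        for (std::size_t i = 0; i < count; ++i) sum += samples[i];
        return sum / count;  // average duration of the recent frames
    }
};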
Better still is to not use this sort of timestep in your calculations at all. Set yourself a minimum fps, say 10 fps, and calculate all your game logic in fixed 1/10-second intervals. The render engine then knows where each object is and where it is heading, and can inter/extrapolate the object's position until its next "decision" frame shows up (see the sketch below). This has numerous advantages. It decouples your logic entirely from rendering. It allows you to spread logic calculations over a number of frames - for example, at 60 Hz you only need to test at what point a logic object will intersect with the world every 6th frame, and you can process different logic objects on different frames to spread the load over time. Its biggest disadvantage is that if the frame rate drops below your target rate, everything slows down.
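A sketch of that decoupling (illustrative C++): logic advances in fixed 1/10 s steps, and the renderer interpolates between the two most recent logic states.
struct State { float x = 0; };
const double LOGIC_DT = 1.0 / 10.0;     // the fixed "decision" interval
State previous, current;
double accumulator = 0;
void stepLogic(State& s) { s.x += 5; }  // deterministic fixed-size step
void frame(double frameDt) {            // called once per rendered frame
    accumulator += frameDt;
    while (accumulator >= LOGIC_DT) {   // catch up in whole logic steps
        previous = current;
        stepLogic(current);
        accumulator -= LOGIC_DT;
    }
    float alpha = float(accumulator / LOGIC_DT);  // 0..1 between states
    float drawX = previous.x + alpha * (current.x - previous.x);
    (void)drawX;  // render the interpolated position here
}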

FPS - how to calculate this?

In my OpenGL book, it says this:
"What often happens on such a system is that the frame is too complicated
to draw in 1/60 second, so each frame is displayed more than once. If, for
example, it takes 1/45 second to draw a frame, you get 30 fps, and the
graphics are idle for 1/30 1/45 = 1/90 second per frame, or one-third of
the time."
In the sentence that says "it takes 1/45 second to draw a frame, you get 30 fps", why do I get only 30 fps? Wouldn't 45 fps be more correct?
The graphics card will normally only buffer one frame ahead.
If it takes 1/45 of a second to draw a frame, then at the 1/60-second mark the previous frame is redisplayed. At the 1/45 mark the next frame is done - but the card doesn't have a free buffer to start rendering into, so it has to sit idle until 1/30, when it can send out that frame and start working on the next one.
This is with VSync enabled - if you disable it, instead of getting the 30FPS framerate and an idle card 1/3rd of the time, the card will start redrawing immediately, and you'll get screen tearing instead.
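You can check the book's numbers with a tiny simulation of that scenario (illustrative C++; double buffering with vsync, 1/45 s render time, 60 Hz refresh):
#include <cstdio>
int main() {
    const double refresh = 1.0 / 60.0, renderTime = 1.0 / 45.0;
    double renderDone = renderTime;       // when the pending frame finishes
    int shown = 0;
    for (int retrace = 1; retrace <= 60; ++retrace) {
        double t = retrace * refresh;     // a vsync occurs at time t
        if (renderDone <= t) {            // a finished frame is waiting
            ++shown;
            renderDone = t + renderTime;  // card sat idle until the swap
        }
    }
    std::printf("frames shown in 1 second: %d\n", shown);  // prints 30
}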
It's correct. You'd get 45 fps, but the system slows it down to 30 fps to achieve a steady framerate on 60 Hz (60 redraws per second) monitors.
Because you need to draw something every 1/60 s on a 60 Hz monitor and can't draw half a frame, you sometimes have to redisplay the previous frame. Out of every two refreshes, one shows a newly rendered frame and one repeats the previous one, so you get 30 fps despite the fact that you could manage 45 fps.
So yes, as others have said, this is due to your graphics card waiting for v-sync before starting to generate the next frame.
That said...
Beware, not all monitors refresh at 60Hz. 60fps vs 30fps becomes 70fps vs 35fps on a 70Hz display.
If you don't want your card to wait for the v-sync before starting the next frame, but still want to avoid tearing, use triple buffering. The GPU then ping-pongs rendering between 2 buffers while the 3rd is displayed. The v-sync event is what triggers the swap to the "currently finished" back buffer. This is still not really great, because some frames stay on the screen longer than others: with your 1/45 s rendering, one frame will stay for 1/30 s and the next for 1/60 s, giving some jerkiness.
Last, with the advent of offscreen rendering (rendering to non-displayed buffers), it's in theory possible for a driver to not wait for the v-sync before starting on the next frame, if the early work of that next frame happens to not touch the display surface. I don't think I've ever seen a driver be that smart though.