Cocos2d frame rate dropping - cocos2d-iphone

I am really close to completing my cocos2d project, but I have a problem: the frame rate keeps dropping in certain parts of my game. I have tried looking at the code being executed at the time the FPS drops, but the problem with that is it doesn't always drop at the same point. So my question is: how can I analyze my code and see where and why the frame rate is dropping? I am using cocos2d v1.1.0-beta2b.

Related

HoloLens 2 - VLC sensor frames have incorrect timestamps (frames out of order)

I'm using the following repo to access and save the device streams:
https://github.com/microsoft/HoloLens2ForCV
When recording using StreamRecorder, it seems that the timestamps returned by all of the visible light cameras are frequently incorrect, resulting in an out-of-order sequence of frames.
To confirm this, I made a recording while looking at a stopwatch with each visible light camera. There are many frames where the reading on the stopwatch is lower than in the previous frame (despite the frame's larger timestamp). Sometimes a disruption lasting more than 5 frames happens before the timestamps seem to become correct again.
This happens often enough for it to be a serious inconvenience. For a rough idea, I counted 12 times where the stopwatch time decreased compared to the previous frame in a 10-second recording. The out-of-order frames are very noticeable in the resulting video playback.
I tried using timestamp.SensorTicks instead of timestamp.HostTicks in RMCameraReader.cpp but the issue persisted.
This does not happen with the PV frames or with either mode of the depth sensor frames.
I'm using the latest insider preview build: Windows Version 21H1, OS build 20346.1402
I may be wrong, but I do not recall this issue occurring with the first few insider builds that supported Research Mode; however, I couldn't find the older insider builds online to try.
Is there any way to fix this issue?
Thanks a lot!
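Independent of the HoloLens APIs, the monotonicity check described above can also be run offline on the saved timestamps. Below is a minimal sketch of that check; the example values are made up, and in practice the vector would be filled from the per-frame tick values written out by StreamRecorder:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Count frames whose timestamp goes backwards relative to the previous frame.
// Timestamps are assumed to already be extracted from the recording
// (e.g. the per-frame HostTicks values saved alongside each VLC frame).
static std::size_t countOutOfOrderFrames(const std::vector<uint64_t>& timestamps)
{
    std::size_t count = 0;
    for (std::size_t i = 1; i < timestamps.size(); ++i)
    {
        if (timestamps[i] < timestamps[i - 1])
            ++count;
    }
    return count;
}

int main()
{
    // Hypothetical example data; replace with timestamps read from your recording.
    std::vector<uint64_t> timestamps = { 100, 200, 150, 300, 400, 350, 500 };
    std::printf("%zu out-of-order frames out of %zu\n",
                countOutOfOrderFrames(timestamps), timestamps.size());
    return 0;
}
```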

Constant lag in OpenGL application

I'm getting some recurring lag in my OpenGL application.
I'm using the Win32 API to create the window, and I'm also creating a 2.2 context.
So the main loop of the program is very simple:
Clearing the color buffer
Drawing a triangle
Swapping the buffers.
The triangle is rotating; that's how I can see the lag.
Also, my frame time isn't smooth, which may be the problem.
But I'm very sure the delta time calculation is correct, because I've tried plenty of approaches.
Do you think it could be a graphics driver problem?
I ask because a friend of mine runs almost exactly the same program, except that I do fewer calculations and I'm using the standard OpenGL shader.
Also, his program uses more CPU power than mine, and its CPU usage is smoother than mine.
I should also add:
On my laptop I get the same lag every ~1 second, so I can see some kind of pattern.
There are many reasons for a jittery frame rate. Off the top of my head:
not calling glFlush() at the end of each frame
other running software interfering
doing things in your code that certain graphics drivers don't like
bugs in graphics drivers
using the standard Windows time functions, with their terrible resolution
Try these:
kill as many running programs as you can get away with; use the Processes tab in Task Manager (Ctrl+Shift+Esc) for this.
bit by bit, reduce the amount of work your program is doing and see how that affects the frame rate and the smoothness of the display.
if you can, try enabling/disabling vertical sync (you may be able to do this in your graphics card's settings) to see if that helps
add some debug code to output the time taken to draw each frame, and see if there are anomalies in the numbers, e.g. every 20th frame taking an extra 20 ms, or random frames taking 100 ms (see the sketch after this list).
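As a starting point for that last suggestion, here is a minimal sketch of per-frame timing on Windows using QueryPerformanceCounter (which avoids the poor resolution of GetTickCount/timeGetTime). The renderFrame function and the 20 ms threshold are placeholders for whatever your loop and budget actually are:

```cpp
#include <windows.h>
#include <cstdio>

// Placeholder for the application's existing per-frame work.
void renderFrame() { /* clear, draw the triangle, SwapBuffers(...) */ }

int main()
{
    LARGE_INTEGER frequency, previous, current;
    QueryPerformanceFrequency(&frequency);   // counter ticks per second
    QueryPerformanceCounter(&previous);

    for (int frame = 0; frame < 1000; ++frame)   // real apps loop until quit
    {
        renderFrame();

        QueryPerformanceCounter(&current);
        double frameMs = 1000.0 * (current.QuadPart - previous.QuadPart)
                       / static_cast<double>(frequency.QuadPart);
        previous = current;

        // Log frames that take noticeably longer than the rest.
        if (frameMs > 20.0)
            std::printf("frame %d took %.2f ms\n", frame, frameMs);
    }
    return 0;
}
```

If the slow frames appear at a regular interval (e.g. once a second, as described above), that pattern is usually a better clue than the average frame time.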

OpenCV screen showing small delay?

I am using OpenCV to write an app (in C++ on Windows 7) that uses the cv.camshift() function to track an object on the screen. I noticed that my camera window (my application window showing what the camera sees) has a little delay with respect to very rapid motions. The delay seems to be about 0.1 seconds: very small, but noticeable. I am developing an application that is very sensitive to these delays. To rule out an error in my own code, I tried one of the OpenCV sample video apps that shows what the camera sees on the screen, and it also had this tiny delay. Interestingly, when I look at what my camera sees through Skype, there seems to be virtually no delay at all. Is there anything I can do to make OpenCV operate faster and get rid of this tiny delay?
CamShift detects motion using meanShift - the mean motion of the object center. This has to be calculated over more than one frame. For a frame rate of 30 Hz, a depth of 3 frames would be 0.1 seconds.
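One way to see how much of the ~0.1 s comes from capture and display alone, before any tracking is involved, is to time those two stages per frame. A minimal sketch using OpenCV's tick functions follows; the camera index 0 and the 1 ms waitKey delay are assumptions:

```cpp
#include <opencv2/opencv.hpp>
#include <cstdint>
#include <cstdio>

int main()
{
    cv::VideoCapture capture(0);             // default camera, assumed index 0
    if (!capture.isOpened())
        return 1;

    cv::Mat frame;
    while (true)
    {
        int64_t start = cv::getTickCount();
        capture >> frame;                    // grab + decode one frame
        int64_t afterCapture = cv::getTickCount();

        if (frame.empty())
            break;

        cv::imshow("camera", frame);
        int64_t afterShow = cv::getTickCount();

        double freq = cv::getTickFrequency();
        std::printf("capture %.1f ms, display %.1f ms\n",
                    1000.0 * (afterCapture - start) / freq,
                    1000.0 * (afterShow - afterCapture) / freq);

        if (cv::waitKey(1) == 27)            // Esc to quit
            break;
    }
    return 0;
}
```

If the capture stage alone accounts for most of the delay, the bottleneck is the camera pipeline and buffering rather than the tracking code.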

Cocos2d Frame Rate

Is there a way to change the frame rate of one layer while keeping the frame rate of another constant? I am currently using [[CCScheduler sharedScheduler] setTimeScale:X]; which affects all layers of a scene. I want it to affect just a single layer.
You can set up an interval-based scheduled event using schedule:interval: to give different layers different framerates. The interval is specified in seconds, so you could pass 1.0f/60.0f for 60 FPS (though don't do that; use scheduleUpdate for anything that should run at your game's framerate).
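Framework aside, the underlying idea is just to accumulate elapsed time for the "slow" layer and only step its logic once per chosen interval, while everything else updates every frame. A framework-agnostic sketch of that accumulation follows; the SlowLayer name and the 0.5 s interval are made up for illustration:

```cpp
#include <cstdio>

// A layer that only advances its own logic every `interval` seconds,
// regardless of how often the engine calls update().
struct SlowLayer
{
    float interval    = 0.5f;   // step this layer twice per second
    float accumulated = 0.0f;

    void update(float dt)       // dt: seconds since the previous engine frame
    {
        accumulated += dt;
        while (accumulated >= interval)
        {
            accumulated -= interval;
            step(interval);
        }
    }

    void step(float dt)
    {
        std::printf("slow layer stepped by %.2f s\n", dt);
    }
};

int main()
{
    SlowLayer layer;
    // Simulate an engine driving the layer at ~60 FPS.
    for (int frame = 0; frame < 120; ++frame)
        layer.update(1.0f / 60.0f);
    return 0;
}
```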

Understanding FPS and the methods used

I'm just looking for resources that break down how frames per second work. I know it has something to do with keeping track of ticks and figuring out how many ticks occurred between each frame. But I never ran into any resources explaining why exactly you have to use the methods you use in order to get a smooth frame rate. I am trying to get a thorough understanding of this. Can anyone explain, or provide any good resources? Thanks
There are basically two approaches.
In ActionScript (and many other engines), you request the player to call a certain function at a certain framerate. For Flash games, you'll set the framerate to be 30 FPS, and then you'll implement a function that listens for ENTER_FRAME events to do what you need to do. This means you get roughly 33 ms per frame (1000ms/30FPS=33.33ms/frame). If your code that responds to ENTER_FRAME takes more than 33 ms, you'll get some stuttering.
In home-rolled main loops (like you'd generally write in C++/SDL, for example), you run the main loop as fast as possible. This means the time between each frame will be variable. You still need to keep the "guts" of your frame code under 33 ms to make sure you'll get at least 30 FPS, but your game will run faster than 30 FPS if there's not a lot going on. To account for this, you need to program all your logic in terms of elapsed time since the last frame, and abandon using frames themselves as a unit of time.
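A minimal C++ sketch of such a home-rolled loop, with movement expressed in units per second so it is independent of the actual frame rate (the 100-frame cap and the speed value are only there to keep the example finite):

```cpp
#include <chrono>
#include <cstdio>

int main()
{
    using clock = std::chrono::steady_clock;

    double positionX   = 0.0;                // game state: units
    const double speed = 50.0;               // units per second, not per frame

    auto previous = clock::now();
    for (int frame = 0; frame < 100; ++frame)     // real games loop until quit
    {
        auto now  = clock::now();
        double dt = std::chrono::duration<double>(now - previous).count();
        previous  = now;

        // All logic is scaled by the elapsed time, not by "one frame".
        positionX += speed * dt;

        std::printf("frame %d: dt=%.4f s, x=%.2f\n", frame, dt, positionX);
        // renderFrame(positionX);  // drawing would happen here
    }
    return 0;
}
```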
http://forums.xna.com/forums/t/42624.aspx
How do you separate game logic from display?
For a continuously variable frame rate you can measure the time the last frame took and assume this frame will take the same length of time. This has the benefit that time runs at a more or less constant rate. Your biggest issue with this approach is that it is entirely possible for a VSync'd game to change from 60 fps to 30 fps and back again on subsequent frames. From experience, a good way to solve this is to average the last few frame times, which smooths the result out. In the 60-to-30 fps switch, each frame will progress assuming 1/45 second, so the 60 fps frames run slow and the 30 fps frames run fast, and the perceived speed remains at 45 fps.
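A small sketch of that averaging, keeping the last few raw frame times in a fixed-size buffer and stepping the game by their mean (the window size of 4 and the simulated 60/30 fps alternation are arbitrary choices for the example):

```cpp
#include <array>
#include <cstdio>
#include <numeric>

// Smooths the per-frame time step by averaging the last N raw frame times,
// so an alternating 1/60 s / 1/30 s pattern is stepped as roughly 1/45 s.
class FrameTimeSmoother
{
public:
    double smooth(double rawDt)
    {
        history_[next_] = rawDt;
        next_ = (next_ + 1) % history_.size();
        double sum = std::accumulate(history_.begin(), history_.end(), 0.0);
        return sum / history_.size();
    }

private:
    std::array<double, 4> history_ { 1.0 / 60.0, 1.0 / 60.0, 1.0 / 60.0, 1.0 / 60.0 };
    std::size_t next_ = 0;
};

int main()
{
    FrameTimeSmoother smoother;
    // Simulate a vsync'd game oscillating between 60 fps and 30 fps frames.
    for (int frame = 0; frame < 10; ++frame)
    {
        double rawDt = (frame % 2 == 0) ? 1.0 / 60.0 : 1.0 / 30.0;
        std::printf("raw %.4f s -> smoothed %.4f s\n", rawDt, smoother.smooth(rawDt));
    }
    return 0;
}
```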
Better still is to not use this sort of time step in your calculations. Set yourself a minimum fps, say 10 fps. You then calculate all your game logic at some multiple of these 1/10-second intervals. The render engine then knows where the object is and where it is heading, and so can interpolate/extrapolate the object's position until its next "decision" frame shows up. This has numerous advantages. It decouples your logic entirely from rendering. It allows you to spread the logic calculations over a number of frames; for example, at 60 Hz, you can test every 6th frame at what point a logic object will intersect with the world if it maintains its path. This gives the bonus of allowing you to process different logic objects on different frames, spreading the calculation load across time. Its biggest disadvantage is that if the frame rate drops below your target rate, everything slows down.
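A sketch of that fixed-logic-step idea: logic runs at a fixed 10 Hz while rendering happens as often as it likes and interpolates between the last two logic states. The speed, frame count, and single positionX state are illustrative stand-ins for real game logic:

```cpp
#include <chrono>
#include <cstdio>

int main()
{
    using clock = std::chrono::steady_clock;

    const double logicStep = 1.0 / 10.0;     // logic decided 10 times per second
    double accumulator = 0.0;

    double previousX = 0.0, currentX = 0.0;  // the last two logic states
    const double speed = 20.0;               // units per second

    auto previousTime = clock::now();
    for (int frame = 0; frame < 300; ++frame)     // stands in for the render loop
    {
        auto now = clock::now();
        accumulator += std::chrono::duration<double>(now - previousTime).count();
        previousTime = now;

        // Advance the logic in fixed steps, independent of the render rate.
        while (accumulator >= logicStep)
        {
            previousX = currentX;
            currentX += speed * logicStep;
            accumulator -= logicStep;
        }

        // Interpolate between the two logic states for smooth rendering.
        double alpha   = accumulator / logicStep;
        double renderX = previousX + (currentX - previousX) * alpha;
        std::printf("frame %d: render x = %.2f\n", frame, renderX);
    }
    return 0;
}
```

The render loop never waits for the logic; it simply blends between the most recent "decision" states, which is what decouples drawing from the logic rate described above.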