Manually controlling framerate of Cocos2d-iPhone game strategy - cocos2d-iphone

Most game developers have probably encountered the "low framerate" issue at least once while developing games. I, on the other hand, am making a sudoku-like puzzle game where a low framerate is not a problem, since it has no constantly moving sprites or elements. In fact I plan to decrease the framerate so that the game takes less CPU time and hence reduces the power consumption of the iDevice; all this just as a courtesy to the players :)
I already know that I can control the framerate in Cocos2d-iphone by modifying animationInterval:
[[CCDirector sharedDirector] setAnimationInterval:1.0/60];
But I'm having trouble with the strategy for when to lower the framerate and when to revert to the normal framerate (60 Hz). At this point I have only managed to define the strategy for touch events, that is:
Start with the lower framerate. On ccTouchBegan, revert to the normal framerate. On ccTouchEnded, switch back to the lower framerate. Make sure multitouch is handled accordingly.
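For the touch part, something like the sketch below is what I have in mind (a CCLayer with self.isTouchEnabled = YES; the low interval value and the activeTouches_ NSInteger ivar are just placeholders, and ccTouchesCancelled would need the same handling as ccTouchesEnded):
static const NSTimeInterval kNormalInterval = 1.0/60;
static const NSTimeInterval kLowInterval    = 1.0/15; // placeholder "idle" rate

- (void)ccTouchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    activeTouches_ += [touches count];
    [[CCDirector sharedDirector] setAnimationInterval:kNormalInterval];
}

- (void)ccTouchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    activeTouches_ -= [touches count];
    if (activeTouches_ <= 0) {
        activeTouches_ = 0; // guard against missed touch events
        [[CCDirector sharedDirector] setAnimationInterval:kLowInterval];
    }
}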
I'm left with two more conditions:
Handling CCActions on CCNodes: as long as some CCNodes have running actions, revert to normal framerate.
Particle system: as long as there exist some particle systems that are emitting particles, revert to normal framerate.
Basically I need to be able to detect whether any actions are still running on any sprites/layers/scenes, and whether some particle systems are still emitting particles. I'd prefer not to do the checking on individual objects; I'd rather have a simple [SomeClass isActionRunning]. I imagine I might be able to do this by checking the list of "scheduled" objects, but I'm not sure how. I'd also appreciate suggestions for other conditions where I need to revert to the normal framerate.
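The closest I can imagine is a hand-rolled walk of the node tree like the sketch below, called from a low-frequency scheduled selector (nodeIsBusy: and checkIdle: are my own hypothetical helpers, not cocos2d API), but I don't know whether there is a cleaner built-in way:
- (BOOL)nodeIsBusy:(CCNode *)node
{
    // Any running action (CCMoveBy, CCAnimate, ...) on this node?
    if ([node numberOfRunningActions] > 0)
        return YES;

    // A particle system that still has live particles?
    if ([node isKindOfClass:[CCParticleSystem class]] &&
        ((CCParticleSystem *)node).particleCount > 0)
        return YES;

    // Recurse into the children.
    for (CCNode *child in node.children) {
        if ([self nodeIsBusy:child])
            return YES;
    }
    return NO;
}

// Scheduled e.g. once per second:
- (void)checkIdle:(ccTime)dt
{
    BOOL busy = [self nodeIsBusy:[[CCDirector sharedDirector] runningScene]];
    [[CCDirector sharedDirector] setAnimationInterval:busy ? 1.0/60 : 1.0/10];
}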

Hmm, I would recommend that you just set it to 30 fps. Yes, that is the rate at which the screen refreshes, but the most important thing is how you code the game; it must be efficient. Running extra checks for whether something is active or not may itself eat up slightly more processing power.

Though I know it's not a very clean way, you can hack the CCScheduler class and check whether there are any objects in scheduledMethods. And I guess you have to check whether the objects there are yours, since cocos2d itself schedules some classes.

This might be what you are looking for, at least regarding actions.
When you run an action, you can have a block called at the end of it in which you reset the frame rate.
[[CCDirector sharedDirector] setAnimationInterval:1.0/60]; // set your fast frame rate
[self runAction:[CCSequence actions:
    [CCMoveBy actionWithDuration:0.5f position:ccp(100,100)], // do whatever action it is you want to do
    [CCCallBlock actionWithBlock:^{
        // Revert to the slow frame rate... could also check within the block for other actions
        [[CCDirector sharedDirector] setAnimationInterval:1.0/30];
    }],
    nil]];
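As a rough variation on that last comment, the block could also drop the rate only when nothing else is still animating on this particular node (other nodes would need their own check):
[CCCallBlock actionWithBlock:^{
    // numberOfRunningActions includes the CCSequence containing this block,
    // so <= 1 means no other action is running on this node.
    if ([self numberOfRunningActions] <= 1)
        [[CCDirector sharedDirector] setAnimationInterval:1.0/30];
}]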

Related

Exact delay in screen draw and keyboard keypress event in Qt

I am working on a Qt project in which the exact time at which certain events occur is of prime importance. To be specific: I have a very simple animation that must be drawn to the screen at a certain time, say t1. Once I issue the QWidget update to start the animation, it will take a small time dt (depending on screen refresh rate, etc.) to actually show the update on screen. I need to measure this extra time dt, and I am unsure how to do it.
I thought of using QTime and QElapsedTimer object in the paint event of the QWidget but I'm not sure if that would achieve my goal.
Similarly, when the user presses a key it will be registered after a small delay based on the polling rate of the keyboard. I need to account for this delay as well. If I could get the polling rate, I would know on average how large the delay is.
What you're asking for is--by definition--not possible from within the computer.
How would you expect to be able to tell when a pixel "actually showed up" on the screen, without a sensor stuck to the monitor and synchronized to an atomic clock the computer has access to also? :-)
The odds are stacked even further against Qt because it's generally used as an abstraction layer on top of Win/OSX/Linux. Those weren't Real-Time Operating Systems of any kind in the first place.
All you can know is when you asked for something to happen. Then you can time how long it takes for you to get back control to make another request. You can set some expectations on your basic "frame rate" throughput by doing this, but there are countless factors that could lead to wide variations in performance at any moment in time.
If you can dig through to the kernel/driver level you can find out a closer-to-the-metal measure of when the actual effect went to the hardware. But that's not Qt's domain, and still doesn't tell you the "actual" answer of when the effect manifested in the outside world.
About the best you're going to get out of Qt is a periodic QTimer. It can make a callback at (roughly) millisecond resolution. If that's not good enough... you're going to need a smaller boat. :-)
You might get a little boost from stuff related to the search term "high resolution timer":
Qt high-resolution timer
http://qt-project.org/forums/viewthread/31941
I thought of using QTime and QElapsedTimer object in the paint event of the QWidget but I'm not sure if that would achieve my goal.
This is, in fact, the only way to do it, and is all you can actually do. There is nothing further that can be done without resorting to a real-time operating system, custom drivers, or external hardware.
You may not need both - the QElapsedTimer measuring the time passed since the last update is sufficient.
Do note that when the event loop is empty, the delay between invocation of widget.update() and the paintEvent executing is under a microsecond, assuming that your process wasn't preempted.
It is a reaction-time experiment for some studies. A visual stimulus is presented, to which the user responds via keyboard or mouse. To find the reaction time precisely I need to know when the stimulus was presented on the screen and when the key was pressed.
There is essentially only one way of doing it right without resorting to a realtime operating system or a custom driver, and a whole lot of ways of doing it wrong. So, what's the right way?
A small area of the screen needs to change color or brightness coincidentally with the presentation of the visual stimulus. You attach a fiber optic to the screen, and feed it into a receiver attached to an external event timer. The contact closure in the keyboard is also fed to the same event timer. This lets you precisely time the latency of the response with no regard for operating system latencies, thread preemption, etc. The event timer can be something as cheap as an Arduino, if you are willing to do a bit more development work.
If you are showing the stimulus repetitively and need a certain timing between stimulus presentations, you simply repeat the presentation often and collect both response latency and stimulus-to-stimulus timing in your data. You can then discard the presentations that were outside of desired tolerances.
This approach is screen-agnostic and you can use it even on a mobile device, as long as it can somehow interface with your timer hardware. The timer hardware can of course be networked, making interfacing easy.

Endless Scroller Enemy Spawning cocos2d-iphone

I'm developing an Endless Scroller type of game and I need help with ways to spawn enemies. I have two background images that repeat over and over. I spawn the enemy just above the screen then schedule an update to move the position down.
The current way I'm spawning the enemies at the start is just scheduling a selector every 8 seconds; then, based on the score, I unschedule the selector and reschedule it again for 6 seconds, and so on. My character doesn't shoot; you just have to navigate around the enemies, so the quickest I can have the selector scheduled is every 3 seconds, otherwise there isn't enough of a gap to get around them.
I'm new to programming and cocos2d, so I'm not too sure how expensive the unschedule and schedule calls are.
So basically my question is: is there a better way of spawning the enemies, keeping in mind that there always has to be a path to survive?
Your options are to either use the CCScheduler, or implement your own timer in your update: method, something along the lines of if (timeSinceLastWave > timeBetweenWaves). I would recommend using the scheduler in Cocos2D because I'm sure it has some optimizations built in by some very smart people. Also, scheduling is a drop in the bucket compared to the cost of draw calls. Be sure to reuse the enemies if at all possible: when active enemies go off the screen, don't remove them, but instead place them back at the desired 'enter screen' point.
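A rough sketch of the update-based variant might look like this (timeSinceSpawn_, spawnInterval_, score_ and the enemies_ array are hypothetical ivars, and [self scheduleUpdate] is called in onEnter so update: runs every frame):
- (void)update:(ccTime)dt
{
    timeSinceSpawn_ += dt;
    if (timeSinceSpawn_ >= spawnInterval_) {
        timeSinceSpawn_ = 0;
        [self spawnOrRecycleEnemy];
        // Tighten the interval as the score grows, but never below the
        // 3 seconds needed to leave a survivable gap.
        spawnInterval_ = MAX(3.0f, 8.0f - score_ * 0.5f);
    }
}

- (void)spawnOrRecycleEnemy
{
    CGSize win = [[CCDirector sharedDirector] winSize];

    // Reuse an enemy that has already scrolled off the bottom, if any.
    CCSprite *enemy = nil;
    for (CCSprite *e in enemies_) {
        if (e.position.y < -e.contentSize.height) { enemy = e; break; }
    }
    if (!enemy) {
        enemy = [CCSprite spriteWithFile:@"enemy.png"]; // placeholder asset name
        [enemies_ addObject:enemy];
        [self addChild:enemy];
    }

    // Put it back at the 'enter screen' point with a random x position.
    enemy.position = ccp(arc4random_uniform((u_int32_t)win.width),
                         win.height + enemy.contentSize.height);
}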

How To Retrieve Actions From Sprite In cocos2d

I have a CCSprite that I'm using in a scene and have created multiple CCAnimation actions to apply to it, all using the shared CCSpriteFrameCache (sharedSpriteFrameCache). While everything is working and I'm able to switch between animations, I feel like I'm doing it poorly and would like to simplify my code by retrieving the running action(s) on the CCSprite so I can stop them individually before running the next action on it.
To help create some context, let's assume the following situation:
We have a CCSprite called mySprite
We have 3 separate CCAnimation actions defined for walking to the right, walking to the left, and sitting looking forward called: actionAnimWalkRight, actionAnimWalkLeft, and actionAnimSitForward respectively.
We want to have the sprite walk to the right when someone touches the screen right of mySprite, walk left when someone touches the screen left of mySprite and sit when someone touches mySprite.
The approach I'm using to accomplish this is as follows:
Place CCSprite as a child in the scene.
Tell the sprite to run an action using: [self runAction:actionWalkRight];
When I want to change the action after someone touches, I have a method called stopAllAnimationActions, which I call before I apply a new action; it stops any animation action no matter what's running. It basically lists ALL the CCAnimation/CCAction objects I have defined and stops each one individually, since I don't want to use stopAllActions, as follows: [self stopAction:actionWalkRight]; [self stopAction:actionWalkLeft]; [self stopAction:actionSitForward];
Then I apply the new animation after the above method fires using: [self runAction:actionWalkLeft];
While this works, it just seems like a poor design to stop items that I know aren't running just because I don't know exactly what is running. So I'm just looking for advice and the recommended practice for doing something like this in very complex situations, where tracking every possible state is difficult. Any feedback would be appreciated.
When creating the actions set the tag of that action with a constant:
actionWalkRight.tag = kCurrentAction;
[self runAction:actionWalkRight];
Then, retrieve the running action by that tag and stop it.
[self stopActionByTag:kCurrentAction];
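For example, a small switching helper using that tag might look like this (kCurrentAction and the action objects are from the setup in the question):
- (void)switchToAction:(CCAction *)newAction
{
    [self stopActionByTag:kCurrentAction]; // stops whichever tagged action is running
    newAction.tag = kCurrentAction;
    [self runAction:newAction];
}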
I recommend you simplify your process and take advantage of the native Cocos features, including stopAllActions. Don't re-use actions, always create them from scratch as it has been well discussed among Cocos developers that re-using actions can be buggy.
Cocos is well optimized and features like stopAllActions are not performance hogs. It would probably be faster than your approach, actually.

Are offscreen animations ignored by rendering and CPU?

Just wondering how Cocos manages CPU cycles and the graphics engine for CCSprites that are offscreen, including those in the middle of an animation. If you have many animated sprites going on and off the screen, I could check and stop each animation when it's off the screen and then restart it when it is about to come back on, but I'm wondering whether this is necessary.
Suppose you had a layer with a bunch of them and you make the layer invisible, but don't stop the sprite animations. Will they still use CPU time?
I just did a quick test (good question :) ) in a game where I can slide the screen over a large map that contains images of soldiers performing an 'idle' animation. They continue running when off-screen (I tacked a CCCallFunc onto a sequence inside a repeat-forever, calling a simple selector that logs).
I suspect they would also run when the object is not visible. It kind of makes sense, especially for animations. In my use case, if the animation were stopped, it could cause a cognitive disconnect when the user slides the soldier in and out of view, especially when the soldier is walking on the map: he could actually walk into view without the user having interacted with the screen at all.
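Roughly, the test looked like this (soldierSprite, idleAnimation and the logging selector are placeholders for my own setup):
id idle   = [CCAnimate actionWithAnimation:idleAnimation];
id logHit = [CCCallFunc actionWithTarget:self selector:@selector(logIdleCycle)];
[soldierSprite runAction:[CCRepeatForever actionWithAction:
    [CCSequence actions:idle, logHit, nil]]];

- (void)logIdleCycle
{
    // Keeps logging even while the soldier is scrolled off-screen.
    CCLOG(@"idle cycle finished");
}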

Cocos2D - Large Image

Is it possible to use a large image in Cocos2D, and allow, via swiping or pinching, for the user to zoom in and out?
I see from this post, that the max res for a Cocos2D image is 2048x2048. That is obviously larger than a device viewport, so I want the user to be able to move around the image.
I'm not creating a game, I'm making a sort of interactive biological cell, that will allow the user to tap arbitrary organelles, and see a popup of information about them.
Here is an idea of what the image will be (a large, detailed illustration of a cell), and obviously cramming the whole thing into a device viewport is not possible.
So really, before I delve too deep into this project, I'm just curious as to whether it is possible to use a large image, that allows the user the ability to arbitrarily move it around, and, if I can detect organelle touches, perhaps via CCSprites?
I recommend subclassing CCSprite and using your large image as the class's image. CCSprites certainly can detect touches by simply adding the basic CCTouchDispatcher delegate to the sprite's class:
[[CCTouchDispatcher sharedDispatcher] addTargetedDelegate:self priority:-1 swallowsTouches:YES];
Then also add this method to your CCSprite subclass:
-(BOOL) ccTouchBegan:(UITouch *)touch withEvent:(UIEvent *)event
You can do anything you want with the touches at this point, scroll or whatever suits your needs.
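For that hit test inside ccTouchBegan, something along these lines (adapted from the stock cocos2d Touches example, and assuming the default centered anchor point) should work:
- (BOOL)ccTouchBegan:(UITouch *)touch withEvent:(UIEvent *)event
{
    // Convert the touch into this sprite's local, anchor-relative space.
    CGPoint p  = [self convertTouchToNodeSpaceAR:touch];
    CGRect box = CGRectMake(-self.contentSize.width  * 0.5f,
                            -self.contentSize.height * 0.5f,
                            self.contentSize.width,
                            self.contentSize.height);
    if (!CGRectContainsPoint(box, p))
        return NO; // not ours, let other handlers claim the touch

    // Touched inside the image: start your drag/zoom or show an organelle popup.
    return YES;
}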
You could break up your image into multiple sprites and use a CCLayer to manage touches instead; it just depends on whether you really need your image to be that large, or whether the size limits for a single image are enough for you to work with, considering they are pretty large too. My method here is a lot less complicated than that.
The max texture size is limited by OpenGL ES, not just cocos2d, and it varies by device. However, you can load the image into more than one texture and then position and move those textures around the screen. So you could have the appearance of an image of any size you would like, but programmatically you will have to manage the different sprites (tiles) of the image.
CCSprites don't detect touches by themselves. CCLayers will get the touch events; you can then do a hit test to see whether the touch hits a given CCSprite.
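If you go the layer route, the hit test could look roughly like this (organelleSprites_ is a hypothetical array of the tile/organelle CCSprites added directly to an untransformed layer, and the layer has isTouchEnabled set to YES):
- (void)ccTouchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    // Convert from UIKit screen coordinates to cocos2d/OpenGL coordinates.
    CGPoint p = [[CCDirector sharedDirector] convertToGL:
                    [touch locationInView:[touch view]]];

    for (CCSprite *organelle in organelleSprites_) {
        // boundingBox is expressed in the parent's (this layer's) coordinates.
        if (CGRectContainsPoint(organelle.boundingBox, p)) {
            // Touched this organelle: show its info popup.
            break;
        }
    }
}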