I can toggle the .visible value for CCNodes, but I wonder: does an invisible node consume less memory/processing than a visible one? Can I set the .visible property to NO when my objects are outside the screen as an optimization, or does cocos2d already do that for me?
Invisible nodes are typically skipped when it comes to being rendered. On the other hand, nodes with visible set to YES will invoke OpenGL draw calls regardless of whether they are on or off the screen (see Riq's comment here), i.e. cocos2d does not seem to perform any kind of culling for offscreen elements.
If this is indeed the case, I would simply set visible = NO (no harm and definitely not hard!) for nodes that are completely off the screen, to avoid invoking any additional draw calls. Also note that these offscreen node objects are still physically present and still take up the same memory, even with visible set to NO. Furthermore, if these nodes are already running animations/actions, they will continue updating while off screen unless you unschedule them.
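If you want to automate that, here is a rough culling pass sketched in cocos2d-x (the C++ port), purely for illustration; the cocos2d-iphone (Objective-C) equivalents are visible, boundingBox and CCDirector's winSize. The assumption that the parent layer sits at the origin, so its coordinate space matches the screen, is mine.

#include "cocos2d.h"

// Sketch only: hide children of a layer that are completely off screen.
// "layer" is whatever parent node holds your game objects (placeholder).
void cullOffscreenChildren(cocos2d::Node* layer)
{
    const cocos2d::Size screen = cocos2d::Director::getInstance()->getVisibleSize();
    const cocos2d::Rect screenRect(0, 0, screen.width, screen.height);

    for (auto* child : layer->getChildren())
    {
        // getBoundingBox() is in the parent's (layer's) coordinate space
        const bool onScreen = screenRect.intersectsRect(child->getBoundingBox());
        child->setVisible(onScreen);    // hidden nodes are skipped by the renderer
        // note: actions/schedulers keep running regardless; pause them separately if needed
    }
}

You could call this once per frame from a scheduled update; with a very large sprite count the loop itself starts to matter, which is why a spatial structure is sometimes used instead.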
Check these posts from the official cocos2d forum:
is rendering invisible sprites in spritebatchnode cheap?
Performance Difference between visible = no and removeChild
Bad performance - many sprites with the same texture
Also, you can test it yourself, but I think those posts will help.
IMO setting visible = NO is enough, but it depends on the sprite count.
I am implementing an application with two panels, each containing a GLCanvas. The panels represent two views of the same thing, and their visibility is alternated by a selection button. In the paint event I check whether a panel is visible in order to SetCurrent() that canvas and paint it.
The problem comes when I want to modify something in both scenes at the same time, for example a texture change on an object that appears in both. I cannot SetCurrent() a hidden panel, so the GL calls are only applied to the visible scene.
Am I forced to make the other panel visible to apply the modifications and then switch back?
What is the best approach to handling multiple panels with multiple contexts that are not all visible at once?
Note: the two scenes differ by more than camera placement, which is why I use different contexts (what is a sphere in one canvas is a cube in the other).
With the latest wxWidgets trunk you can call SetCurrent() when the window is not shown on screen because its parent is hidden (the window itself still needs to be shown, though); see this change. Unfortunately there is no good solution if you are using a released version; the best I can advise is to store the modifications that you need to make in some internal queue and then apply them all once the window does become visible, but this is quite clumsy, of course. It would probably be better to just update your local wx sources...
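If you do go the queue route, something like this untested sketch illustrates the idea; MyGLPanel, QueueGLOp, FlushPendingOps and m_pendingOps are placeholder names of mine, not wxWidgets API.

#include <wx/glcanvas.h>
#include <functional>
#include <vector>

// Placeholder panel class, not part of wxWidgets.
class MyGLPanel : public wxGLCanvas
{
public:
    explicit MyGLPanel(wxWindow* parent)
        : wxGLCanvas(parent, wxID_ANY, nullptr),
          m_context(new wxGLContext(this)) {}

    ~MyGLPanel() { delete m_context; }

    // Run the GL work now if the canvas can be made current, otherwise defer it.
    void QueueGLOp(std::function<void()> op)
    {
        if (IsShownOnScreen())
        {
            SetCurrent(*m_context);
            op();
        }
        else
        {
            m_pendingOps.push_back(std::move(op));
        }
    }

    // Call this from your paint/show handler once the panel is visible again.
    void FlushPendingOps()
    {
        if (m_pendingOps.empty())
            return;
        SetCurrent(*m_context);
        for (auto& op : m_pendingOps)
            op();
        m_pendingOps.clear();
    }

private:
    wxGLContext* m_context;
    std::vector<std::function<void()>> m_pendingOps;
};

A texture change would then be wrapped as something like panel->QueueGLOp([=]{ /* glBindTexture / glTexSubImage2D calls */ });, so each panel applies the change as soon as it can be made current.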
I am working on a Qt-VTK project. We have a line drawing function where straight lines are created between two mouse click positions, but once the actor is created it is not visible. I was calling the render function just after adding the actor, but that didn't work. However, if I do camera->resetview() the lines become visible, but the entire perspective changes. Where am I going wrong?
This may not be relevant to you, but I had this exact same problem (in ActiViz [managed VTK]) and wrangled with it for a week, so I hope this helps someone out there. It turned out to be a problem with the location of the lines we wanted to draw on the canvas; they were too far away from the camera (on the Z axis) to be visible.
For us, we were trying to draw a cross on the viewing area wherever the user clicked. The data points were there, as were the actors and whatnot, but they would only be visible in the scene if you called resetCamera() and thusly changed the camera's configuration.
Initially, I blamed the custom interactor that we had to add to circumvent the default interactor's swallowing of MouseUp events (intended behavior). Investigation revealed that this seemed unlikely.
After this I shifted the blame onto the camera under the suspicion that perhaps the reset call was making a call to some kind of update method which I wasn't aware of. I called resetCamera() and then reverted the camera values to what they were initially.
When this was successfully done, it eventuated that the crosses would appear when the camera zoomed out and then disappear again as soon as it was set back, and it was at this point I realized that it was something to do with the scene.
At this point, I checked the methods we were using to retrieve the mouse location in 3D and realized that the z value was enormous and it was placing the points too far away as a byproduct of VTK's methods to convert 2D locations on the control to 3D locations in the scene and vice versa.
So after all that, a very mundane and avoidable mistake that originated from the methods renderer.DisplayToWorld() and WorldToDisplay().
This might not be everyone's problem, but I hope I've spared someone a week of fiddling around with VTK.
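For anyone hitting the same thing, here is a sketch of one way to keep the converted points at a sensible depth; this is my illustration of the idea, not necessarily what we shipped. The trick is to project the camera's focal point into display space to obtain a reasonable display z, then convert the click at that depth. renderer is your vtkRenderer*, and clickX/clickY are the mouse position from your interactor (placeholders).

// needs <vtkRenderer.h> and <vtkCamera.h>
double focal[3];
renderer->GetActiveCamera()->GetFocalPoint(focal);

// project the focal point to display space to get a usable display z
renderer->SetWorldPoint(focal[0], focal[1], focal[2], 1.0);
renderer->WorldToDisplay();
double displayFocal[3];
renderer->GetDisplayPoint(displayFocal);

// convert the mouse click back to world space at that depth
renderer->SetDisplayPoint(clickX, clickY, displayFocal[2]);
renderer->DisplayToWorld();
double world[4];
renderer->GetWorldPoint(world);
if (world[3] != 0.0)            // homogeneous divide
{
    world[0] /= world[3];
    world[1] /= world[3];
    world[2] /= world[3];
}
// world[0..2] is where to place the cross/line point so it stays in view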
That's a bit hard to help with without seeing the code, but have you tried using
ui->qvtkwidget->update();
where ui is the instance of your class derived from QMainWindow?
Is there any way to change the touch priority for cocos2d iOS sprites? What I have are multiple cards on the screen, arrayed in an arc just as they would be when you hold them in your hand. In this setup they overlap, and I need to recognize which card the touch was made on. I could measure the coordinates of each vertex of the cards, determine the visible area of a card and then check whether the touch was made inside that area (couldn't I?), but I thought there would be an easier way to deal with this, say changing the touch priority. That way the card closest to the screen would have the highest priority, decreasing towards the background, so that even if the touch lands on two sprites at once (the one above and the one below), it would be registered only on the sprite with the higher priority.
Reading on the internet only revealed ways to change the priority for a sprite and layer so that it defines whether the touch was made on the layer or the sprite, but that's not what I want.
As far as I know, by default you get exactly that behavior: the sprites closer to you (on the z axis) get the touch first. However, I think they pass the event down to the ones behind them as well. So what I think you need to do is swallow the event when it reaches the topmost of your sprites. To do that, register each card as a targeted touch delegate with swallowsTouches:YES and return YES from ccTouchBegan: only on the card that was actually hit, so the touch is claimed and not passed on to the cards behind it. Hope it helps.
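If it helps, here is the same idea sketched in cocos2d-x (the C++ port) rather than the Objective-C API from the question; cardSprite is a placeholder for one of your card sprites, and this would sit in something like your layer's init().

// cocos2d-x (C++) sketch: a swallowing one-by-one touch listener per card.
auto listener = cocos2d::EventListenerTouchOneByOne::create();
listener->setSwallowTouches(true);                  // eat the touch so cards behind don't get it
listener->onTouchBegan = [cardSprite](cocos2d::Touch* touch, cocos2d::Event*) {
    cocos2d::Vec2 p = cardSprite->convertTouchToNodeSpace(touch);
    cocos2d::Rect box(0, 0,
                      cardSprite->getContentSize().width,
                      cardSprite->getContentSize().height);
    return box.containsPoint(p);                    // claim the touch only if it hit this card
};
cardSprite->getEventDispatcher()
          ->addEventListenerWithSceneGraphPriority(listener, cardSprite);

Scene-graph priority dispatches to the front-most node first, which is exactly the "card on top wins" behavior you describe.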
Given that I am programming within another program already using OpenGL (let's say theoretically that I have no idea how they are using it).
Can I just set up my context however I want and push/pop it from the stack and all should work as expected, or MUST I know how my (calling) program is using OpenGL in order to avoid accidentally screwing things up?
Also, how would I go about "initializing" OpenGL when it might have already been initialized?
Thanks for any advice you might have!
To answer your first question, you probably could get away with calling glPushAttrib(GL_ALL_ATTRIB_BITS) before calling the program's functions, and calling glPopAttrib() afterward. Note that this can be slow if you do it frequently (say, every frame in a loop).
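Roughly, that bracketing looks like this (legacy/compatibility-profile GL; the glPushClientAttrib pair is my addition to also cover client-side state):

glPushAttrib(GL_ALL_ATTRIB_BITS);                 // save server-side state (blend, depth test, ...)
glPushClientAttrib(GL_CLIENT_ALL_ATTRIB_BITS);    // save client-side state (vertex arrays, pixel store)

// ... the GL calls whose state changes you want to contain ...

glPopClientAttrib();                              // restore in reverse order
glPopAttrib();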
I'm not sure what you mean by initializing OpenGL. Do you mean setting up the rendering context? Setting viewport and projection? Disabling or enabling certain features? You can always check if certain states are enabled (using glGet functions), but the rest depends on how your program and the other program work.
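For example, a couple of manual save-and-restore queries (illustration only):

GLboolean depthWasEnabled = glIsEnabled(GL_DEPTH_TEST);
GLboolean blendWasEnabled = glIsEnabled(GL_BLEND);
GLint prevViewport[4];
glGetIntegerv(GL_VIEWPORT, prevViewport);          // current viewport x, y, width, height

// ... change state, draw ...

if (depthWasEnabled) glEnable(GL_DEPTH_TEST); else glDisable(GL_DEPTH_TEST);
if (blendWasEnabled) glEnable(GL_BLEND); else glDisable(GL_BLEND);
glViewport(prevViewport[0], prevViewport[1], prevViewport[2], prevViewport[3]);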
I am told that theoretically OpenGL should be able to work within any context as long as you restore the prior context afterwards.
What exactly do you want to do with OpenGL? Are you trying to draw into the same window? Do you want to draw into your own window? If you want to just draw into your own window, and the fact that the existing app uses OpenGL is just a coincidence, then you can probably get away with just creating a completely new context and ignoring the existing stuff. The only gotcha is that you will need to make the existing context current whenever you finish what you are doing, and make your context current whenever you want to do something with it. The existing code won't be expecting to need to make its own context current, and may wind up randomly drawing into your context if you aren't careful.
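The make-current dance looks roughly like this on Windows/WGL; the same pattern exists via glXGetCurrentContext/glXMakeCurrent or NSOpenGLContext on other platforms. myDC and myRC are placeholders for your own device context and rendering context.

HDC   prevDC = wglGetCurrentDC();
HGLRC prevRC = wglGetCurrentContext();    // remember whatever the host app had current

wglMakeCurrent(myDC, myRC);               // switch to our own context
// ... draw into our own window ...

wglMakeCurrent(prevDC, prevRC);           // restore before handing control back to the host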
If you want to draw into the existing window, then your use almost certainly requires some sort of idea about what the existing stuff is doing.
I used the owner-draw strategy on a CMyListBox class which derives from CListBox. I only want the DrawItem() method to run when I insert an item into the listbox, but the method is invoked many times. How can I change it so that it is invoked only when I need it?
You could always cache the initial drawing by outputting the content to an in-memory bitmap and then drawing that. It does mean you need to track when something has changed so you can run the actual rendering code again, but it saves running through your render code every time if there is a lot of it.
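A rough sketch of that idea for an owner-drawn item; m_cache and m_cacheValid are hypothetical members I made up, and a real version would keep one cached bitmap per item and per state rather than a single one.

// Sketch only: cache the expensive rendering into a member CBitmap and blit it
// on subsequent DrawItem calls. m_cache (CBitmap) and m_cacheValid (bool) are
// hypothetical members; invalidate the cache whenever the item's data changes.
void CMyListBox::DrawItem(LPDRAWITEMSTRUCT lpDIS)
{
    CDC* pDC = CDC::FromHandle(lpDIS->hDC);
    CRect rc(lpDIS->rcItem);

    if (!m_cacheValid)
    {
        CDC memDC;
        memDC.CreateCompatibleDC(pDC);
        m_cache.DeleteObject();
        m_cache.CreateCompatibleBitmap(pDC, rc.Width(), rc.Height());
        CBitmap* pOldBmp = memDC.SelectObject(&m_cache);

        // ... the expensive rendering goes here, drawing into memDC ...

        memDC.SelectObject(pOldBmp);
        m_cacheValid = true;
    }

    // cheap blit on every subsequent DrawItem call
    CDC srcDC;
    srcDC.CreateCompatibleDC(pDC);
    CBitmap* pOldBmp = srcDC.SelectObject(&m_cache);
    pDC->BitBlt(rc.left, rc.top, rc.Width(), rc.Height(), &srcDC, 0, 0, SRCCOPY);
    srcDC.SelectObject(pOldBmp);
}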
I've done exactly what Kieron suggests by caching bitmaps, but only in very expensive rendering code. I actually have to keep multiple cached "states" depending on whether an item is highlighted, disabled, normal, etc. (this is for toolbar buttons, not list items, but I think it applies). I only cache the pre-rendered image when I first need it; that way I only cache "states" that I actually need.
My drawing was pure GDI calls. Mostly bitmap manipulations and other drawing that just takes time, plus I was being redrawn much too often (for no good reason - long story).
Changing the fundamentals in the framework I was using (MFC and Stingray) was just not an option. The caching was a last resort after all other optimizations weren't good enough (damn slow virtual machines!!).
Normally drawing is fast enough to do when you're invalidated (DrawItem in this case). I would take a look at what exactly you're doing in DrawItem. I would look into caching the data and calculations needed by the rendering rather than the rendering output itself (e.g. the final bitmaps), unless there are no other options.
Also, I've read that rendering on Vista is more optimized: it caches what you've drawn on your window to reduce the constant invalidate/redraw cycle when, for example, a window is moved out from behind another.
The DrawItem() method is called whenever there is a requirement to draw any given item in the listbox. If you do not respond to it, you are likely to get a blank area in your list box where the drawn data has been erased and not refreshed. If you really do not think the drawing is necessary, you could do something like:
void CMyListBox::DrawItem( LPDRAWITEMSTRUCT lpDrawItemStruct )
{
    // Skip the drawing entirely while updates are suppressed.
    if (!m_DrawingEnabled)
        return;

    // ... normal owner-draw code for the item goes here ...
}
where m_DrawingEnabled is a member you maintain to suppress unnecessary draws.