Whenever I add the following lines of code to change the scrolling background (for each switch case), the game becomes jittery for about a second. Each background theme has its own sprite sheet, so I use two batch nodes and switch-cases to handle the changes.
[backgroundFrameCache addSpriteFramesWithFile:@"firstThemedBg-hd.plist"];
backgroundBatchNode2 = [CCSpriteBatchNode batchNodeWithFile:@"firstThemedBg-hd.pvr.ccz"];
[gameLayer addChild:backgroundBatchNode2];
currentBackgroundBatchNode = backgroundBatchNode2;
How can I add the batch nodes to the layer in a way that would prevent the jitter?
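The hitch is usually the synchronous texture and plist load happening in the middle of gameplay. One common mitigation, sketched abstractly below, is to create every theme's batch node up front so that a theme switch is nothing more than a visibility toggle. `BackgroundManager` and `BatchNode` here are made-up names, not cocos2d API:

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical stand-in for a sprite batch node; in cocos2d this would be
// a CCSpriteBatchNode created from the theme's .pvr.ccz texture.
struct BatchNode {
    std::string texture;
    bool visible = false;
};

struct BackgroundManager {
    std::map<std::string, BatchNode> nodes;  // all themes, created up front
    BatchNode* current = nullptr;

    // Load every theme once, e.g. during scene setup or a loading screen,
    // so no texture I/O happens mid-gameplay.
    void preload(const std::string& theme) {
        nodes[theme] = {theme + "-hd.pvr.ccz", false};
    }

    // Switching themes only flips visibility; nothing is allocated or read
    // from disk here, which is what removes the one-second hitch.
    void switchTo(const std::string& theme) {
        if (current) current->visible = false;
        current = &nodes.at(theme);
        current->visible = true;
    }
};
```

In cocos2d terms this would mean adding both CCSpriteBatchNodes to the layer at load time (or preloading their textures via the texture cache) and only toggling their `visible` property inside the switch-cases.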
I'm following the tutorial here to mimic Flappy Bird. At the part to scroll the game scene:
- (void)update:(CCTime)delta {
_hero.position = ccp(_hero.position.x + delta * scrollSpeed, _hero.position.y);
_physicsNode.position = ccp(_physicsNode.position.x - (scrollSpeed * delta), _physicsNode.position.y);
...
}
Ideally the whole world scrolls left while _hero (which is a child node of _physicsNode) moves right at the same speed, so he stays still on the screen. But when I run the code in the simulator, the _hero sprite just blasts off to the right at roughly 10-20 times the scrolling speed of _physicsNode. The _physicsNode and everything inside it scrolls left at the normal speed, as intended.
If I don't give _hero any movement, he scrolls along with _physicsNode normally.
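To make the intended behaviour concrete, here is the geometry of the update loop as a plain C++ sketch (the scroll speed and positions are example values, not from the tutorial): because _hero is a child of _physicsNode, his on-screen x is the sum of the two positions, and when both are measured in points the two motions cancel exactly.

```cpp
#include <cassert>
#include <cmath>

// One tick of the intended update: the hero (a child of the physics node)
// moves right while the physics node scrolls left at the same speed.
struct World {
    float heroX;      // hero position, in the physics node's coordinate space
    float physicsX;   // physics node position, in the scene's space

    void update(float delta, float scrollSpeed) {
        heroX    += scrollSpeed * delta;  // hero moves right inside the world
        physicsX -= scrollSpeed * delta;  // world scrolls left under him
    }

    // On screen, the hero sits at parent position + child position, so the
    // two motions cancel when both positions are measured in points.
    float heroScreenX() const { return physicsX + heroX; }
};
```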
I have tried other methods, like running a CCAction at game start:
CCAction *moveAction = [CCActionMoveBy actionWithDuration:1 position:ccp(-scrollSpeed,0)];
CCActionRepeatForever *repeatAction = [CCActionRepeatForever actionWithAction:(CCActionInterval *)moveAction];
[_physicsNode runAction:repeatAction];
And I still get the same result. The speed that _hero receives is always different from the speed that _physicsNode receives.
Could anyone explain to me why this is happening and how to fix it?
I'm using cocos2d 3.3 if it helps.
I finally figured it out.
The _hero node had its positionType set to CCPositionTypeNormalized (because I was trying to center him on the screen), while the _physicsNode had its positionType as CCPositionTypePoints. Any change to a node's position (including movement actions) is interpreted in the node's positionType. The result is that when I move the _hero node by 2, it doesn't move 2 points but 2 times the parent node's width instead.
I fixed it by aligning _hero in SpriteBuilder using CCPositionTypeNormalized, then switching it to CCPositionTypePoints at game start.
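The scale of the bug follows directly from that: as a sketch (the 568-point width is just an example screen size), the same numeric delta is worth vastly different distances under the two position types.

```cpp
#include <cassert>

// Under CCPositionTypeNormalized, a coordinate of 1.0 means "100% of the
// parent's size", so a numeric delta of 2 is two full parent-widths,
// not 2 points.
float normalizedDeltaInPoints(float delta, float parentWidth) {
    return delta * parentWidth;
}
```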
I am working on a window which displays a number of objects (a graph) in C++/Win32. The current version uses GDI+, and I have written a test piece of code with Direct2D, as I want to improve performance when dragging objects in the window. The best approach I have found to date is the following (using a graph of 1000 nodes and 999 edges): basically, buffer the static content to a bitmap and only draw what's moving.
When dragging starts (i.e. on lbuttondown), I create a base render target containing the full graph excluding the node being dragged and its attached edges, call GetBitmap, and store the result for later use. Then, whenever I need to draw (a mousemove event while lbuttondown is true), I clear the current HWND render target (wash the background white), draw the edges connected to the moving node into it, copy the saved bitmap into it, copy the node's bitmap into it (created when the node is first added to the DB, which saves actually drawing it), and finally call EndDraw.
Now this works OK (ish). What I don't like is that when the node is dragged quickly, the mouse cursor moves ahead of the node being dragged (the distance depends on the speed of the drag/mousemove, but the worst case is up to about half an inch). My reference app is MS Visio: dragging a single object there, the cursor stays in the same position over the object being dragged, give or take a pixel or so.
What I have not tried yet is moving all of (and only) the drawing operations to a separate thread, but before I try that I would like to find out whether some other single-threaded approach would beat this one.
Update:
I have optimized this a bit more, with some improvement. I found I was allocating and deallocating the edge brush inside the draw function, so I moved it out to a class-wide object initialized once for the life of the class, like the other brushes. The cursor now only gets a little way (2 px or so) outside the object being dragged when dragging quickly; the object is a circle with a 15 px radius, so the cursor can move up to about 17 px from its centre (the point the cursor should stick to). In testing I found something interesting: on my main monitor the drag is worse, with the cursor getting ahead of the object by more than 17 px, perhaps up to 25 px from the centre point. On the second monitor of the extended desktop (i.e. no taskbar) the drag stays within the bounds described previously. If I hide the taskbar on the main monitor and run the app there, the performance matches the second monitor.
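One single-threaded avenue worth checking before reaching for a render thread is how mouse-move events are consumed: if you redraw once per WM_MOUSEMOVE rather than once per frame at the latest known position, rendering falls further behind the faster the cursor moves. A minimal sketch of the coalescing idea (the queue and function names are hypothetical, not Win32 API):

```cpp
#include <cassert>
#include <deque>
#include <utility>

// If several WM_MOUSEMOVE positions queued up while the last frame was
// being drawn, only the newest one matters for the drag. Drawing once at
// that position, instead of once per event, keeps the dragged object from
// falling progressively behind the cursor.
std::pair<int, int> coalesceMouseMoves(std::deque<std::pair<int, int>>& pending) {
    std::pair<int, int> latest = pending.back();
    pending.clear();  // drop the stale intermediate positions
    return latest;
}
```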
When using a QListWidget in batched layout mode, whenever more items are added than the batch size, the list widget blinks for a short time when switching from the old list to the new list. That is, the list widget shows no items, and the scroll bar handle is set to a seemingly random size.
Have you ever encountered this? Can it be resolved somehow? I'm using Qt 4.7.4. I should probably add that I'm not using any hidden items.
I had this issue too and spent hours combing through the sea that is Qt widget rendering. Ultimately, like you, I traced the problem back to the batch processing of the QListView. It appears that when batch processing is enabled, Qt fires off an internal timer to perform incremental layout adjustments of the underlying scroll view. During these incremental layouts, when the scroll bar is visible, the update region is not computed correctly (it's too big and does not account for the regions occupied by the scroll widget(s) themselves). The result is a bad update region that subsequently finds its way into the viewport update, which has the unfortunate side effect of clearing the entire client area without rendering any of the list view items.
Once the batch processing is complete, the final viewport update correctly computes the layout geometry (with the scroll bar) and produces a valid update region; the visible elements in the list are then redrawn.
The behavior worsens as the number of items in the list grows relative to the batch size. For example, if your list grows from 500 to 50000 items with a batch size of 50, there is a proportionate increase in the number of "bad repaint" events triggered, causing the view to flicker even more visibly. :(
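The arithmetic behind that proportionate increase can be sketched as:

```cpp
#include <cassert>

// Each incremental layout pass in Batched mode can trigger one of the bad
// repaints described above; the number of passes is roughly the item
// count divided by the batch size.
int batchPasses(int itemCount, int batchSize) {
    return (itemCount + batchSize - 1) / batchSize;  // ceiling division
}
```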
These incremental (and failed) viewport updates also appear to be the cause of the apparently spasmodic scrollbar handle behavior that you describe.
The root of this issue appears to be related to this "hack" that was added to QListView::doItemsLayout(), as commented here:
// showing the scroll bars will trigger a resize event,
// so we set the state to expanding to avoid
// triggering another layout
QAbstractItemView::State oldState = state();
setState(ExpandingState);
I suppose you could override QListView::doItemsLayout() and provide your own batch processing which handles scroll bars properly, but personally I'm too old and lazy to be cleaning up someone else's poo. Switching to SinglePass eliminated the problem entirely. Seamless flicker-free rendering and the scroll bar behavior you've come to expect and love. Yay.
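For completeness, the switch described above is a one-line change, assuming Qt 4's QListView API (which also exposes setBatchSize() for tuning the batched mode, should you keep it):

```cpp
// Batched mode is what starts the incremental layout timer;
// SinglePass lays the whole list out in one go, with no interim repaints.
listView->setLayoutMode(QListView::SinglePass);
```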
I'm currently trying to make a grid that animates from Collapsed to Visible to notify the user that a save has completed. I opened the project in Blend 4 and created a UserControl for the SaveNotifier so I can reuse it in other areas of this project and others. I made the default state Collapsed, and created another state called "Complete" which sets Visibility to Visible and has a TimeTrigger of 3 seconds that sends it back to the default state. The transitions are set to run over 1 second and use FluidLayout to animate between the states, but no animation is shown. Instead it just switches as if there were no FluidLayout or transition time.
If someone would be so kind as to let me know whether there is a problem with this approach, or even show me how to do it, that would be great.
Not sure why, but when I put the grid inside a new grid it started working. It seems you need a grid at full size for the animation to display.
My situation:
I have a single window with a content view (NSView), which has several subviews (plain NSControl subclasses; not important, just for testing) scattered around it. For part of the end effect I am trying to achieve, I want to place a semi-transparent black CALayer overlaying the whole window's content view, and be able to make it invisible (either by hiding it or removing it, doesn't matter) when a certain event is triggered, revealing the NSViews in full clarity.
For testing purposes, I place a small semi-transparent black CALayer covering only some/parts of the subviews (the controls) in the main content view, like so:
(doh, had screenshots but can not post images as a new user. You'll have to use your imagination.)
Simple enough. So then all I tried to do was check that it hides/removes itself properly. Then came the problem: any attempt at hiding or removing the black layer, or reducing its opacity to 0, causes all of the window's subviews to be erased, with a result looking like this:
(a window with a completely blank grey [the default window bg colour] content view)
Here's the meat of the code, in the main application class, which has a reference to the main window:
// I set up my main view in a nib, which I load and will add
// to the app window as the main content view
NSViewController * controller = [[NSViewController alloc]
    initWithNibName:@"InterfaceTesting" bundle:nil];
NSView * view = [controller view];
// enable layers, grab root
[view setWantsLayer:YES];
CALayer * rootLayer = [view layer];
// set up black square
CALayer * blackLayer = [CALayer layer];
[blackLayer setFrame:NSMakeRect(150, 150, 100, 100)];
[blackLayer setBackgroundColor:CGColorCreateGenericRGB(0,0,0,0.5)];
[rootLayer addSublayer:blackLayer];
// hide all sublayers of root (ie the black square)
for (CALayer * layer in [[view layer] sublayers])
[layer setHidden:YES];
// add to main window
[self.window setContentView:view];
As I mentioned before, replacing [layer setHidden:YES] with [layer setOpacity:0] has the same effect of erasing the content view, as does removing blackLayer altogether (by calling removeFromSuperlayer, and also by trying to set its superlayer to nil). More interestingly, if I set the opacity of the black square sublayer to something between 0 and 1 (0.5, say), then all of the content view's subviews' opacities reduce accordingly.
So I'm rather baffled here, as I don't understand why hiding/removing or reducing opacity on the small black CALayer is affecting all the subviews in the view it is a part of, even those it doesn't cover. Any help is much appreciated.
Edit: Well, I've discovered that the content view's root layer in fact has a sublayer not only for the black square I manually added, but for every subview as well (whether originating from the nib or created by me after loading the view from the nib), which is why they were all fading/disappearing when I thought I was operating only on the black box. But what I still don't know is why there are layers for all the subviews, where they came from, and how to get rid of them (again, all attempts to remove them via removeFromSuperlayer, setting to nil, etc. fail).
Subviews (controls) become layer-backed once you call setWantsLayer:YES on their parent view, and their layers are set as sublayers of the parent view's layer. So looping through all the sublayers also acts on the subviews' layers.
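The fix that follows from this is to keep a reference to the layer you created and hide only that, never the whole sublayer list. A toy model of the situation (plain C++, not AppKit API; the layer names are illustrative):

```cpp
#include <cassert>
#include <string>
#include <vector>

// The root layer's sublayers include the implicitly created backing
// layers of the subviews as well as the one overlay added by hand.
struct Layer {
    std::string name;
    bool hidden = false;
};

// Hiding by identity leaves the subviews' backing layers alone, unlike
// the loop in the question, which hides every sublayer of the root.
void hideOnly(std::vector<Layer>& sublayers, const std::string& target) {
    for (Layer& layer : sublayers)
        if (layer.name == target)
            layer.hidden = true;
}
```

In the original Objective-C, that simply means calling [blackLayer setHidden:YES] on the retained blackLayer pointer instead of iterating over [[view layer] sublayers].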