I have read this page to understand the details of batch drawing, but I still have questions. I know that in order to reduce the number of draw calls we need to use batch drawing. I use it like this:
auto spritebatch = SpriteBatchNode::create("ingame.png");
SpriteFrameCache::getInstance()->addSpriteFramesWithFile("ingame.plist");
And now, when I need to create a Sprite, I do this:
auto backgroundSprite = Sprite::createWithSpriteFrameName("back_gradient.png");
spritebatch->addChild(backgroundSprite);
But I don't understand the following things:
What if my game has several spritesheets? For example, I have a HUD spritesheet and an ingame spritesheet. Now if I want to show the ingame screen with the HUD, do I need to create two SpriteBatchNodes and add both of them to the ingame layer?
What if the same spritesheet should be used in different Scenes? Should I do the following again?
auto spritebatch = SpriteBatchNode::create("ingame.png");
SpriteFrameCache::getInstance()->addSpriteFramesWithFile("ingame.plist");
What if I use sprites with Button, TextEdit, Label and other UI elements?
First of all, can I initialize a Button's state images from a spritesheet?
As far as I know, I cannot add a UI element as a child of a SpriteBatchNode. In that case, how do I combine UI elements and sprites in the same view/scene?
Sorry for all the questions, but the fact is that I could not find any resource that explains these things, even though they are all related. If you don't know the answers to these questions, you don't really know how to use SpriteBatchNode.
Don't use CCSpriteBatchNode in cocos2d-x v3. Batching is automatic and best left to the renderer to optimize draw calls through batch drawing. It says so right in the article you've linked:
The Render graph was decoupled from the Scene graph. That means that auto-batching is supported, finally :-) And the new renderer is so fast, that we no longer encourage the use of SpriteBatchNode.
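In practice that means you can drop the batch node entirely and add sprites straight to your layer. A minimal sketch in cocos2d-x v3, reusing the file names from your question:

SpriteFrameCache::getInstance()->addSpriteFramesWithFile("ingame.plist");
// No SpriteBatchNode needed: the renderer auto-batches consecutive
// sprites that share the same texture and material.
auto backgroundSprite = Sprite::createWithSpriteFrameName("back_gradient.png");
this->addChild(backgroundSprite); // add directly to the layer (this)

This also sidesteps your UI question: since there is no batch node, Button, Label and the other widgets can live in the same scene as the sprites without restriction.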
I don't agree; no matter how fast the new renderer is, we want to get as much out of it as we can.
Narek, you are correct.
During rendering, the geometry is sorted to reduce the number of GL calls. But don't expect it to sort children of different parents into one batch. Example: you have
Node A with children ab and ac
Node B with children bd and be.
If ab and bd use textures from the same atlas, it is not guaranteed that you will get any performance boost from using atlases at all.
But I can confirm that it is currently really fast, and in my case the GL calls are not the bottleneck at all :)
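To make that hierarchy concrete, here is roughly the setup in cocos2d-x v3 (the frame names are hypothetical, and scene stands for the current Scene):

auto nodeA = Node::create();
nodeA->addChild(Sprite::createWithSpriteFrameName("ab.png")); // child ab
nodeA->addChild(Sprite::createWithSpriteFrameName("ac.png")); // child ac

auto nodeB = Node::create();
nodeB->addChild(Sprite::createWithSpriteFrameName("bd.png")); // child bd
nodeB->addChild(Sprite::createWithSpriteFrameName("be.png")); // child be

scene->addChild(nodeA);
scene->addChild(nodeB);
// Draw order follows the scene graph: ab, ac, bd, be. Even if ab and bd
// share an atlas, ac is drawn between them, so the renderer cannot merge
// them into one batch unless ac uses that atlas too.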
I am writing video display software capable of displaying multiple video streams. For this I have a GridView holding VideoOutputs in QML, connected to a QAbstractListModel-derived class in C++ which provides instances of an object with a QAbstractVideoSurface Q_PROPERTY. It's working quite beautifully so far.
However, the video frames I am displaying come with metadata containing data for axis-aligned bounding boxes. I don't know beforehand how many boxes there are, the number could even change on a frame-by-frame basis, and their positions and sizes are not fixed either.
Ultimately, it should look something like this:
As I need to be able to display a few video streams at once, preferably at 30+ fps, I need a fast method of drawing these boxes. Using QPainter on the QImage on which the QVideoFrame is based is rather slow, so I was considering a few other approaches:
Using the QML Rectangle type in a Repeater with a C++-provided model (I was hoping to simply provide a QVariantList::fromVector()): this could work, but I would need a lot of models, which in turn I would need to expose to QML through yet another model, and I would likely need to call begin/endResetModel every frame that the boxes change to make QML update, which is also very slow (a rough sketch of such a model follows after this list).
Using a shader to draw the boxes: this is a rather difficult approach. I'm no stranger to shaders, but in Qt/QML I don't know how to provide the shader with the necessary information.
Using OpenGL directly to draw the boxes: Again, I have no clue how to do this, but I think I could work it out if I googled.
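For reference, a rough C++ sketch of the model from the first approach, assuming one model instance per stream; the class and role names (BoxModel, RectRole, "rect") are hypothetical:

#include <QAbstractListModel>
#include <QRectF>
#include <QVector>

class BoxModel : public QAbstractListModel {
    Q_OBJECT
public:
    enum { RectRole = Qt::UserRole + 1 };

    int rowCount(const QModelIndex &parent = QModelIndex()) const override {
        return parent.isValid() ? 0 : m_boxes.size();
    }
    QVariant data(const QModelIndex &index, int role) const override {
        if (!index.isValid() || role != RectRole)
            return QVariant();
        return m_boxes.at(index.row()); // QRectF converts to QVariant
    }
    QHash<int, QByteArray> roleNames() const override {
        return { { RectRole, "rect" } }; // visible in QML as model.rect
    }

public slots:
    // Called once per decoded frame with the boxes from the metadata.
    void setBoxes(const QVector<QRectF> &boxes) {
        beginResetModel(); // the per-frame reset that makes this approach slow
        m_boxes = boxes;
        endResetModel();
    }

private:
    QVector<QRectF> m_boxes;
};

A Repeater delegate would then bind a Rectangle's x, y, width and height to model.rect; the begin/endResetModel pair on every frame is exactly the cost described above.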
My question: Which one, if any, of these approaches is the best? If none of these, which other approach could I use?
Thank you so much for taking the time to read my rather long question!
I am sorry for the terrible description of this question. I am making a game where the user has to try to throw a football through a moving tire. I have the game set up with a single scene & layer, and I am wondering how to implement options for having different background images, tires, and footballs to choose from. I don't expect someone to explain to me how to code my game. I want to have specific objects for the different background images. For instance, a prison background image would have a metal tire, a different football, and certain objects flying through the scene while the player attempts to throw the football through the tire. Should I create a separate scene & layer for each background image and its corresponding sprites, or is there a better way to go about this? All I am asking is for someone to point me to some example code or a project that does something similar. Sorry for the long post.
If you are willing to keep everything in just one PNG (background + moving sprites) and use a CCSpriteBatchNode, you can easily keep the rects of the elements equal across all the different settings and just load a different file.
Otherwise, just have a set of files and use the same CGRects; it shouldn't be hard at all.
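A hedged sketch of that idea, written in cocos2d-x C++ syntax (the original answer is cocos2d-iphone); the file names, rects and theme flag are made up:

bool prisonTheme = true; // hypothetical setting chosen by the player
auto batch = SpriteBatchNode::create(prisonTheme ? "prison.png" : "default.png");
// The rects are identical in every theme sheet; only the file differs.
auto tire     = Sprite::createWithTexture(batch->getTexture(), Rect(0, 0, 128, 128));
auto football = Sprite::createWithTexture(batch->getTexture(), Rect(128, 0, 64, 64));
batch->addChild(tire);
batch->addChild(football);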
Is it possible to use a large image in Cocos2D, and allow, via swiping or pinching, for the user to zoom in and out?
I see from this post that the max resolution for a Cocos2D image is 2048x2048. That is obviously larger than a device viewport, so I want the user to be able to move around the image.
I'm not creating a game; I'm making a sort of interactive biological cell that will allow the user to tap arbitrary organelles and see a popup of information about them.
Here is an idea of what the image will be, and obviously cramming the whole thing into a device viewport is not possible:
So really, before I delve too deep into this project, I'm just curious whether it is possible to use a large image that the user can arbitrarily move around, and whether I can detect organelle touches, perhaps via CCSprites?
I recommend subclassing CCSprite and using your large image as the class's image. CCSprites certainly can detect touches by simply adding the basic CCTouchDispatcher delegate to the sprite's class:
[[CCTouchDispatcher sharedDispatcher] addTargetedDelegate:self priority:-1 swallowsTouches:YES];
Then also add this method to your CCSprite subclass:
-(BOOL) ccTouchBegan:(UITouch *)touch withEvent:(UIEvent *)event
{ /* handle the touch here */ return YES; } // YES claims (swallows) the touch
You can do anything you want with the touches at this point, scroll or whatever suits your needs.
You could break your image up into multiple sprites and use a CCLayer to manage touches instead; it just depends on whether you really need your image to be that large, or whether the limits for a single image are enough for you to work with, considering they are pretty large too. My method here is a lot less complicated than that.
The max texture size is limited by OpenGL ES, not just cocos2d, and it varies by device. However, you can load the image into more than one texture and then position and move those textures around the screen. So really you could have the appearance of an image of any size you like, but programmatically you will have to manage the different sprites (tiles) of the image.
CCSprites don't detect touches. CCLayers will get the touch events; you can then do a hit test to see if a touch hits a given CCSprite.
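A minimal sketch of the tiling idea, again in cocos2d-x C++ syntax (the thread itself is cocos2d-iphone); it assumes the large image was pre-split into hypothetical 1024x1024 tiles named cell_<row>_<col>.png:

auto world = Node::create(); // pan, zoom and hit-test this node, not each tile
const float TILE = 1024.0f;
for (int row = 0; row < 2; ++row) {
    for (int col = 0; col < 2; ++col) {
        auto tile = Sprite::create(StringUtils::format("cell_%d_%d.png", row, col));
        tile->setAnchorPoint(Vec2::ZERO); // position tiles by their lower-left corner
        tile->setPosition(Vec2(col * TILE, row * TILE));
        world->addChild(tile);
    }
}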
I'm currently developing a program to show and control animated sprites on the desktop screen. My problem now is actually drawing them onto the screen. The user should still be able to access other applications, as long as the sprite does not obstruct them.
My attempts are below, and I hope someone can point me in the right direction. I don't really care which library I need to use, as long as the performance is good enough for something around 20-30 animated sprites.
My attempts so far:
My first attempt was with Qt. I used a QWidget with a QLabel in it to show the pixmap of an object. The pixmap itself had an alpha channel, and I used the setMask(pixmap.mask()) method of QWidget to remove anything I don't want to show. But this method can't be used for rapidly shifting shapes, like moving creatures. If setMask is called every 50-100 ms to change the mask to the next movement phase, the CPU load gets too high with a lot of creatures moving at the same time.
My second attempt was to use one QWidget for all creatures. This way setMask is called only once instead of once per creature. It's possible to move more creatures this way, but the screen flickers like hell when moving the mouse pointer over the creatures.
My third attempt was the XShape functions from Xlib to change the shape of each creature, but the performance is not much better than setMask.
I also tried transparency in Qt, but if I use a QWidget over the whole screen, the CPU load of X gets really high while moving the mouse. I don't know if I can do anything better here.
Create a QGLWidget and learn to use the OpenGL API to draw sprites within it, even if only using glDrawPixels rather than texture objects.
You certainly won't have any problems drawing a few tens of sprites, and the time spent learning OpenGL will be a good investment if you aspire to do more complex graphical things in future.
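A minimal sketch of that approach, assuming Qt with the QtOpenGL module; "sprite.png" is a hypothetical placeholder for one creature frame:

#include <QGLWidget>
#include <QImage>
#include <QTimer>

class SpriteGLWidget : public QGLWidget {
public:
    explicit SpriteGLWidget(QWidget *parent = 0) : QGLWidget(parent) {
        // Convert to a byte order glDrawPixels accepts on little-endian
        // machines, and flip it, since glDrawPixels expects rows bottom-up.
        frame = QImage("sprite.png")
                    .convertToFormat(QImage::Format_ARGB32)
                    .rgbSwapped()   // BGRA in memory -> RGBA
                    .mirrored();
        // Repaint every 50 ms, matching the animation rate described above.
        QTimer *timer = new QTimer(this);
        connect(timer, SIGNAL(timeout()), this, SLOT(updateGL()));
        timer->start(50);
    }
protected:
    void initializeGL() {
        glClearColor(0, 0, 0, 0);
        glEnable(GL_BLEND); // respect the sprite's alpha channel
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    }
    void resizeGL(int w, int h) {
        glViewport(0, 0, w, h);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0, w, 0, h, -1, 1); // work in pixel coordinates
    }
    void paintGL() {
        glClear(GL_COLOR_BUFFER_BIT);
        glRasterPos2i(10, 10); // sprite position; animate as needed
        glDrawPixels(frame.width(), frame.height(),
                     GL_RGBA, GL_UNSIGNED_BYTE, frame.constBits());
    }
private:
    QImage frame;
};

Once glDrawPixels becomes the bottleneck, moving each frame into a texture object is a small step from here.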
Not sure if this is your language, but ESheep is on GitHub and could get you started: https://github.com/Adrianotiger/desktopPet
I subclassed QGraphicsItem and reimplemented paint().
In paint() I wrote something like this for labeling the item:
painter->drawText(10, 40, "Test");
After some time I thought it might be useful to handle the labeling with a separate item, so I wrote something like this:
QGraphicsTextItem *label = new QGraphicsTextItem("TEST", this);
label->setPos(10, 40);
But the two "TEST" drawings do not appear in the same place on screen. I guess the difference may be related to item coordinates vs. scene coordinates. I tried all the mapFrom... and mapTo... combinations in the QGraphicsItem interface, but made no progress. I want the two drawings to appear in the same place on screen.
What am I missing?
I assume that you are using the same font size and type in both cases. If the difference in position is very small, the reason may be that QGraphicsTextItem adds some padding around the text it contains. I would try QGraphicsSimpleTextItem, which does not add fancy stuff internally, and see if you still have the same problem. The coordinate system is the same whether you use the painter or setPos, so that is not the problem. If this doesn't help, I suggest specifying the same rect for both to avoid Qt adding its own separation spaces.
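A minimal sketch of that suggestion, placed in the QGraphicsItem subclass's constructor. One more detail worth checking: painter->drawText(10, 40, ...) puts the text baseline at y=40, while setPos(10, 40) puts the item's top-left corner there, so even without padding the two can differ vertically by roughly the font's ascent:

QGraphicsSimpleTextItem *label = new QGraphicsSimpleTextItem("TEST", this);
// Same parent coordinates as the painter->drawText() call; shift by the
// QFontMetrics ascent if the baselines, not the top edges, must line up.
label->setPos(10, 40);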