Role of CCSpriteBatchNode - cocos2d-iphone

The idea behind a CCSpriteBatchNode is to render a texture once for many sprites, which should improve performance compared to treating each sprite as a separate texture.
However, I'm confused how there is a benefit to using this as opposed to using only a single texture atlas. If you create a texture with this:
[[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"gameTexture.plist"];
and then every single image you use for sprites is pulled using frame methods, then aren't all your images using the same single rendered texture, even though a batch node was never introduced?
Of course, you can use a batch node in combination with a texture atlas, but how is this an actual gain in performance? If you do not use a batch node, is it rendering the texture multiple times, even though it is cached?

Using the same texture means you are not switching textures on each call, but you are still rendering every sprite in its own glBegin/glEnd pair. Using a CCSpriteBatchNode makes sure that every sprite in it is rendered within the same call.

The performance improvement simply comes from the reduced number of OpenGL calls. If you don't use a SpriteBatchNode, your sprites will come from one texture, yes, but they will each make separate OpenGL calls. The batch node object contains code to collect all of its children and make just a single call; this is why your sprites must be children of the same batch node to get the performance boost.
0 batch node + 100 sprites = 100 OpenGL calls.
1 batch node + 100 sprites (children of this batch node) = 1 OpenGL call.
If you're really interested have a look in CCSpriteBatchNode.m
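For illustration, here is a minimal sketch using cocos2d-x syntax (the C++ sibling of cocos2d-iphone; the Objective-C calls are analogous). The file names, frame name, and positions are placeholders, not taken from the question:

// Hypothetical sketch: batching 100 atlas sprites under one CCSpriteBatchNode.
#include "cocos2d.h"
USING_NS_CC;

void addBatchedSprites(CCLayer* layer)
{
    // Load the atlas frames once (equivalent to addSpriteFramesWithFile: above).
    CCSpriteFrameCache::sharedSpriteFrameCache()->addSpriteFramesWithFile("gameTexture.plist");

    // The batch node owns the atlas texture; all of its children share one draw call.
    CCSpriteBatchNode* batch = CCSpriteBatchNode::create("gameTexture.png");
    layer->addChild(batch);

    for (int i = 0; i < 100; ++i) {
        CCSprite* sprite = CCSprite::createWithSpriteFrameName("enemy.png"); // frame from the atlas
        sprite->setPosition(ccp(10.0f * i, 100.0f));
        batch->addChild(sprite);    // child of the batch node -> 1 GL call for all 100
        // layer->addChild(sprite); // child of the layer instead -> 100 GL calls
    }
}

Changing the addChild target from the layer to the batch node is the only difference between the two cases above; the texture and frames are identical either way.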

Related

How many draw calls are acceptable in Vulkan?

I've been working on a Vulkan renderer and am having a bit of a pickle. Currently I am using Vulkan to render 2D sprites, and I just imported a whole map to draw. The map is 40x40 with 1600 tiles. I cannot instance/batch these as there are moving objects in the scene and I may need to interject draw calls in between (some objects need to be rendered in front of others). However, when I render these 1600 sprites individually my CPU CHUGS and it takes ~20ms to accomplish JUST the sprites. This happens in a separate thread and does the following:
Start command buffer & render pass
For every sprite to draw
Set up translation matrix.
Fetch the material if it's not cached
If this command buffer is not yet bound to the pipeline, bind it.
Bind the descriptor set given by the material if not already bound.
Push translation matrix to pipeline using push constant.
Draw.
End command buffer & render pass & submit.
My question I guess is, is 1600 too much? Should I try and find ways to batch this? Would it make more sense to just spend these clock cycles building a big buffer on the GPU and only drawing once? I figured this was less efficient since I only really submit once for all commands given.
Yes, 1600 draw calls for this type of application is too many. It sounds like you could possibly use a single vkCmdDrawIndexedIndirect().
You would just need to create SSBOs for your per-sprite matrices and texture samplers, indexed per draw using gl_DrawIDARB in the shaders (don't forget to enable VK_KHR_SHADER_DRAW_PARAMETERS_EXTENSION_NAME).
Your CPU-side pre-draw preparation per frame would consist of setting the correct vertex/index buffer offsets within the VkDrawIndexedIndirectCommand structure, as well as setting up any required texture loads and populating your descriptors.
If draw order is a consideration for you, your application could track depth per sprite and then make sure they're set up for draw in the correct order.
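As a rough sketch of that approach (not the poster's actual code; the descriptor layout, helper names, and buffer setup are assumptions), the recording side could look something like this:

// Single indirect draw for all sprites. indirectBuffer is assumed to already
// contain the contents of 'draws', uploaded by the caller each frame.
#include <vulkan/vulkan.h>
#include <vector>

void recordSpriteDraws(VkCommandBuffer cmd,
                       VkPipeline pipeline,
                       VkPipelineLayout layout,
                       VkDescriptorSet set,        // SSBO with per-sprite matrices + texture array
                       VkBuffer vertexBuffer,
                       VkBuffer indexBuffer,
                       VkBuffer indirectBuffer,    // one VkDrawIndexedIndirectCommand per sprite
                       const std::vector<VkDrawIndexedIndirectCommand>& draws)
{
    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipeline);
    vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, layout, 0, 1, &set, 0, nullptr);

    VkDeviceSize offset = 0;
    vkCmdBindVertexBuffers(cmd, 0, 1, &vertexBuffer, &offset);
    vkCmdBindIndexBuffer(cmd, indexBuffer, 0, VK_INDEX_TYPE_UINT16);

    // One call replaces ~1600 individual vkCmdDrawIndexed calls.
    vkCmdDrawIndexedIndirect(cmd, indirectBuffer, 0,
                             static_cast<uint32_t>(draws.size()),
                             sizeof(VkDrawIndexedIndirectCommand));
}

// CPU-side, per frame and per sprite (in back-to-front order if draw order matters):
//   VkDrawIndexedIndirectCommand c{};
//   c.indexCount    = 6;                  // one quad
//   c.instanceCount = 1;
//   c.firstIndex    = 0;
//   c.vertexOffset  = spriteIndex * 4;    // this sprite's vertices in the shared vertex buffer
//   c.firstInstance = 0;

The per-sprite matrix and texture index are then read in the shaders via gl_DrawIDARB, so nothing needs to be bound or pushed per sprite inside the loop.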

Animating a model without redrawing the whole background -- OpenGL

Using OpenGL, I am making a simple animation where a small triangle moves along the path that I have created with the mouse (glutMotionFunc).
So the problem is: how can I animate the small triangle without redrawing the whole path using glutSwapBuffers()?
And also, how can I rotate only that triangle?
I don't want to use an overlay, as switching between these two layers takes too much time.
If redrawing the whole path is really too expensive, you can do your rendering to an off-screen framebuffer. The mechanism to do this with OpenGL is called a Framebuffer Object (FBO).
Explaining how to use FBOs in detail is beyond the scope of an answer here, but you should be able to find tutorials. You will be using functions like:
glGenFramebuffers()
glBindFramebuffer()
glFramebufferRenderbuffer() or glFramebufferTexture()
This way, you can draw just the additional triangle to your FBO whenever a new triangle is added. To show your rendering on screen, you can copy the current content of the FBO to the primary framebuffer using glBlitFramebuffer().
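A minimal sketch of that setup might look like the following (desktop GL naming, assuming a loader such as GLEW; the sizes and the drawing call are placeholders):

// Off-screen FBO that accumulates the path; only the newest triangle is drawn each time.
#include <GL/glew.h>

GLuint fbo = 0, colorTex = 0;

void createPathFBO(int width, int height)
{
    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) { /* handle error */ }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

void addTriangleAndPresent(int width, int height)
{
    // Draw only the newly added triangle into the off-screen buffer; the old path is preserved.
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    // drawNewTriangle();   // placeholder for your own drawing code

    // Copy the accumulated path to the default framebuffer.
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
    glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    // Then draw the moving/rotating triangle on top and call glutSwapBuffers().
}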
You can't! Because it just does not make sense!
The way a computer screen works is the same as in film: fps, frames per second. There is no such thing as "animation" on a screen; it is just a fast series of static images, but because our eyes cannot follow things changing that fast, it looks like motion.
This means that every time something changes in the scene you want to draw, you need to create a new "static image" of that state, and that is done with all the glVertex and similar pieces of code. Once you finish drawing, you put it on the screen by swapping your buffers.

Sprite Batch concept

I would like to confirm the following: is it fine to use just one sprite batch and draw fonts and other animated sprites with it? If that's true, how many quads can be batched using just one sprite batch? Is that handled by the DirectX API, or by the GPU?
Yes, it is OK to use one sprite batch object for fonts and other sprites. In fact, it is probably better that way.
The number of sprites that can be batched is up to the implementation. If you are using the SpriteBatch class in the DirectXTK, then it uses a growing array as you add sprites to it, so there is no real limit to the number of sprites you can give it (except for memory). Internally it creates a vertex buffer that can handle 2048 sprites, i.e. 2048*4 vertices. This doesn't limit the number of sprites that you can send to the SpriteBatch; it just means that if you queue up 3000 sprites, for example, it will need to make at least two draw calls to render everything (more if you are using multiple textures).
So, the number of sprites that can be drawn in one call depends on the size of the vertex buffer that the implementation has created. The maximum size of a vertex buffer ultimately depends on how much memory is available.
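For reference, a typical DirectXTK usage sketch for D3D11 could look like the following (the texture, font file, and positions are placeholders):

// One SpriteBatch shared by sprites and text; it splits into multiple internal
// draw calls only when needed (e.g. more than 2048 queued quads or a texture change).
#include <d3d11.h>
#include <DirectXMath.h>
#include <memory>
#include "SpriteBatch.h"
#include "SpriteFont.h"

using namespace DirectX;

std::unique_ptr<SpriteBatch> spriteBatch;
std::unique_ptr<SpriteFont>  spriteFont;

void initSprites(ID3D11Device* device, ID3D11DeviceContext* context)
{
    spriteBatch = std::make_unique<SpriteBatch>(context);
    spriteFont  = std::make_unique<SpriteFont>(device, L"myfont.spritefont"); // placeholder font
}

void renderSprites(ID3D11ShaderResourceView* spriteTexture)
{
    spriteBatch->Begin();   // default deferred mode queues everything until End()

    spriteBatch->Draw(spriteTexture, XMFLOAT2(100.f, 100.f));
    spriteFont->DrawString(spriteBatch.get(), L"Score: 42", XMFLOAT2(10.f, 10.f));

    spriteBatch->End();     // draws everything queued since Begin()
}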

Render nodes in a single drawing pass with SpriteKit?

As I was looking around for information about SpriteKit textures, I stumbled upon this quote:
If all of the children of a node use the same blend mode and texture atlas, then Sprite Kit can usually draw these sprites in a single drawing pass. On the other hand, if the children are organized so that the drawing mode changes for each new sprite, then Sprite Kit might perform as many as one drawing pass per sprite, which is quite inefficient.
But check this out:
Tiles with same texture (I assure you it's a texture, not just a color)
Tiles with their own texture
The draw count has a difference of 40, although all of the textures used came from the same atlas.
Am I interpreting the word 'atlas' wrong?
This is where I store my images:
Is my example a 'texture atlas,' or is the definition of 'atlas' here a single .png that contains all the images needed, and individual tiles are sliced from it?
Or is the problem probably in how I am loading/something else?
Thanks!
The problem here is most likely the node graph itself. Say you have 100 sprites as children of the scene, with no sub-children of their own, and they all use textures from the same atlas and default blend modes; then Sprite Kit will batch-draw these 100 sprites in a single draw call.
However if these tile sprites have children of their own, perhaps shape or label nodes for debugging, then this will "interrupt" the batch drawing operation.
Check your node graph. Make sure all tile sprites are children of the same parent, and have no children of their own, use the same blend modes and textures from the same atlas. Then batch drawing will definitely work.
If that doesn't work, verify that the tiles.atlas folder is correctly converted to a texture atlas in the bundle. If you open the compiled app bundle you should find a folder named 'tiles.atlasc' with a plist and one or more png files in it, containing all individual images of the folder. Furthermore none of these individual images should appear in the bundle - if they are added to the bundle as individual files using the same names as in the atlas, then Sprite Kit will default to loading the individual image files rather than obtaining them from the texture atlas.

Static background w/ objects in OpenGL. Best way to "bake"?

I'm making a 2D game in OpenGL and I have a list of static objects. Thus far I'm looping through them and drawing them into the room; however, in some large rooms there are up to 2000 of them, and speed is critical, so I'd like to find a way to "bake" them all together and never update them in the draw loop after that.
How can I do this and what's the best way in terms of performance, memory usage, gpu ram usage etc?
I'd prefer to use OpenGL 2, but I'm considering OpenGL 3+.
The simplest way is to move all the data for those objects to the GPU, so that rendering commands fetch memory directly from GPU memory. It can be done by simply using a VBO or even a display list (in 'old' OpenGL 2.0 and before).
Probably the display-list solution will be the most efficient, because you can 'pack' all the commands inside it... with a VBO you can pack only the geometry data; the materials need to be set up every frame.
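For illustration, a minimal VBO "bake" could look like this (sketch only; the vertex layout, the single bound atlas texture, and the object list are my assumptions, not taken from the question):

// Upload all static quads once at room load, then draw the whole layer with one call per frame.
#include <GL/glew.h>
#include <vector>

struct Vertex { float x, y, u, v; };

GLuint  staticVbo = 0;
GLsizei staticVertexCount = 0;

// Called once: copy all static geometry to GPU memory.
void bakeStaticObjects(const std::vector<Vertex>& allQuadVertices)
{
    staticVertexCount = static_cast<GLsizei>(allQuadVertices.size());

    glGenBuffers(1, &staticVbo);
    glBindBuffer(GL_ARRAY_BUFFER, staticVbo);
    glBufferData(GL_ARRAY_BUFFER,
                 allQuadVertices.size() * sizeof(Vertex),
                 allQuadVertices.data(),
                 GL_STATIC_DRAW);            // data never changes after upload
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}

// Called every frame: one draw call for the whole static layer (atlas texture already bound).
void drawStaticObjects()
{
    glBindBuffer(GL_ARRAY_BUFFER, staticVbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(2, GL_FLOAT, sizeof(Vertex), (const void*)0);
    glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), (const void*)(2 * sizeof(float)));

    glDrawArrays(GL_QUADS, 0, staticVertexCount);   // GL_QUADS is fine for the GL 2 path

    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}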
Related topic: instancing (but you will have to use GL 3+).
Another way is to render them to textures... and display them as simple sprites. This technique is called 'impostors'; here is some info: True Impostors.
Another option: render the environment to a cube map. It could work for objects that are far away from the camera (like hills, trees, etc...), but in a room it could look strange.
First option: make a single mesh for the objects. For example, you may dynamically update an index array with the objects that are visible. It is very important in this case that the textures you use are in an atlas; if you can't share the shader and textures, this technique has little effect. You can combine this with grouping by material and texture and using a single draw call per group: for example, the first draw call renders 100 trees with one texture, then 600 apples on them, and then 100 clouds.
Another option: if your objects are static, you may render all of them into a texture using an FBO. This works well if your objects form a background, for example rendering 1000 random stars in space for your galaxy.