Cocos2d-x: Display the same image in multiple sizes - cocos2d-iphone

What is the best approach to displaying the same image many times, e.g. 1000-2000 times? The image has to be rendered at different sizes across the screen. The most straightforward idea seems to be declaring a separate sprite for each copy, but is there a better approach?

That would be a lot! I would create a batch node with the texture and add 999-1999 additional CCSprites to it, each scaled and positioned where you want them. Then add the batch node to the scene and position it.
Then I would test that on the slowest, most memory-limited device you intend to support with your app. I've gone up to 500 or so replicas like this (remember, only one draw call with a batch node). I don't have a good 'feel' for your use case; in my case, the texture is small.
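Since the batch node itself is framework code, the part worth sketching is the per-replica placement it would be fed. Below is a minimal standalone C++ sketch (the `Replica` struct, the scale range, and the screen size are all assumptions for illustration) that generates the scale and position each CCSprite added to the batch node would receive:

```cpp
#include <cstdlib>
#include <vector>

// One entry per sprite that would be added to the batch node.
struct Replica {
    float x, y;   // position on screen
    float scale;  // uniform scale applied to the shared texture
};

// Generate `count` replicas inside a screen of the given size, with
// scales between minScale and maxScale (assumed range), using a fixed
// seed so the layout is reproducible.
std::vector<Replica> makeReplicas(int count, float screenW, float screenH,
                                  float minScale, float maxScale,
                                  unsigned seed) {
    std::srand(seed);
    std::vector<Replica> out;
    out.reserve(count);
    for (int i = 0; i < count; ++i) {
        float rx = std::rand() / (float)RAND_MAX;
        float ry = std::rand() / (float)RAND_MAX;
        float rs = std::rand() / (float)RAND_MAX;
        out.push_back({rx * screenW, ry * screenH,
                       minScale + rs * (maxScale - minScale)});
    }
    return out;
}
```

In cocos2d each `Replica` would become one CCSprite sharing the batch node's texture, so the whole set still renders in a single draw call.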

Related

Can't understand the concept of merge-instancing

I was reading slides from a presentation about "merge-instancing" (the presentation is by Emil Persson, link: www.humus.name/Articles/Persson_GraphicsGemsForGames.pptx, from slide 19).
I can't understand what's going on. I know instancing only from OpenGL, and I thought it could only draw the same mesh multiple times. Can somebody explain? Does it work differently with DirectX?
Instancing: You upload a mesh to the GPU and activate its buffers whenever you want to render it. Data is not duplicated.
Merging: You want to create a mesh from multiple smaller meshes (such as the building complex in the example), so you either:
Draw each complex using instancing, which means multiple draw calls per complex, or
Merge the instances into a single mesh, which replicates the vertices and other data for each complex, but lets you render the whole complex with a single draw call.
Instance-merging: You create the complex by referencing the vertices of the instances that take part in it. Then you use the vertices to know where to fetch the data for each instance. This way you get the advantage of instancing (each mesh is uploaded to the GPU once) and the benefit of merging (you draw the whole complex with a single draw call).
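The fetch described above can be simulated on the CPU to make it concrete. In this sketch (the names and the single-triangle mesh are assumptions, not Persson's actual data layout), a small mesh is stored once, the "merged" stream holds (instanceID, localVertex) pairs, and per-instance data lives in its own buffer, the way a vertex shader would read it from a texture or constant buffer:

```cpp
#include <array>
#include <vector>

struct Vec2 { float x, y; };

// The shared mesh: uploaded to the GPU once, never duplicated.
static const std::array<Vec2, 3> kMesh = {{ {0,0}, {1,0}, {0,1} }};

// One entry per vertex of the merged complex: which instance it belongs
// to, and which vertex of the shared mesh it references.
struct MergedVertexRef { int instanceID; int localVertex; };

// What the vertex shader would output for one merged vertex: shared
// mesh data transformed by per-instance data (here just a translation).
Vec2 fetchVertex(const MergedVertexRef& ref,
                 const std::vector<Vec2>& instanceOffsets) {
    Vec2 base = kMesh[ref.localVertex];           // shared mesh data
    Vec2 off  = instanceOffsets[ref.instanceID];  // per-instance data
    return { base.x + off.x, base.y + off.y };
}
```

The merged index stream is the only thing that grows with the number of instances; the mesh vertices themselves are never replicated, which is the whole point of the technique.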

control over individual particles in a CCParticleSystemQuad?

I am calling setTexture:withRect: on a particle emitter... My question is: is there any way I can supply multiple rects so that the particles are composed of random sprites? Or is the only way to accomplish this to use multiple emitters?
I thought that if there were a way to get the collection of particles being generated, I could loop over them and set their rect, or even color, properties. But in the cocos2d docs I see no way to get individual particle objects... Is there any way to do this?
If you want emitted particles to have different images, you can make a sprite sheet of your particle images and subclass CCParticleSystemQuad, overriding the initTexCoordsWithRect: method so that instead of using the same frame for every particle it uses different frames for different particles.
See here for an example of such a particle system using a bitmap font. Using the same idea, I made a CCParticleSystemQuad subclass which uses CCSpriteFrameCache to get the frame information.
No, you can't access or modify individual particles.
If you want random sprites, simply run multiple particle systems with each using a different texture.

OpenGL: which strategy to render a tree menu?

So, what we need to implement is a tree menu with up to several hundred nodes. A node can have children and can then be expanded/collapsed. There is also a background highlight on mouse-over and a background highlight on mouse selection.
Each node has a box, an icon and a text, which can be very large, occupying the whole screen width.
This is an example of an already working solution:
Basically I am:
rendering the text a first time, just to get the length of a possible background highlight
rendering boxes and icon textures (yeah, I know, they are upside down at the moment)
rendering the text a second time, first all the bold text and then all the normal text
This solution actually has a relatively acceptable performance impact.
Then we tried another way: using Java's Graphics to draw the tree menu and return it as a BufferedImage, creating one big texture from it at the end and rendering that. All of this is obviously redone at every node collapse/expand and at each mouse movement.
This performed much better, but Java seems to have big trouble handling the old BufferedImages. Indeed, RAM consumption increases constantly, and forcing a garbage collection only slows the memory growth slightly, but still...
Moreover, performance falls, since the garbage collector runs every time and does not seem lightweight at all.
So what I am going to ask you is: which is the best strategy for my needs?
Would it maybe also be feasible to render each node to a different texture (actually three: one normal, one with a light background for mouse-over and one with a normal background for mouse selection) and then, at each display(), just combine all these textures according to the current tree-menu state?
For the Java approach: if the BufferedImage hasn't changed in size (the width/height of your tree control), can't you reuse it to avoid garbage collection?
For the GL approach, make sure you minimize texture switches. How do you render text? You can have a single large texture that contains all the normal and bold letters and just use different texture coordinates for each letter.
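The effect of minimizing texture switches is easy to quantify: if draw items are submitted in arbitrary order, every change of bound texture costs a state switch, whereas sorting the draw list by texture first reduces the switches to the number of distinct textures. A small sketch of that bookkeeping (texture ids here are arbitrary integers standing in for GL texture names):

```cpp
#include <algorithm>
#include <vector>

// Count how many texture binds a draw list would trigger, optionally
// sorting the list by texture id first (as a texture-sorted renderer
// would). Each change of bound texture counts as one switch.
int countTextureSwitches(std::vector<int> textureIds, bool sortFirst) {
    if (sortFirst) std::sort(textureIds.begin(), textureIds.end());
    int switches = 0;
    int current = -1;  // no texture bound yet
    for (int id : textureIds) {
        if (id != current) { ++switches; current = id; }
    }
    return switches;
}
```

With a single glyph atlas holding both the normal and bold letters, the whole text pass collapses to one bind regardless of submission order.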

Using CCSpriteFrameCache without CCSpriteBatchNode

I want to know if there is any benefit to caching a sprite sheet and accessing the sprites by frame without using a CCSpriteBatchNode.
In some parts of my game the sprite batch node is useful because there is a lot on the screen; in other parts it's not, because there are just a few things, and there are requirements for layers, so CCSpriteBatchNode wouldn't be useful. However, for the sake of consistency I would like to use sprite sheets for all my sprites, and so I was beginning to wonder whether I would still receive any benefit from them. (Or worse, whether it could somehow be slower...)
There is definitely a benefit to putting all your sprites into a texture atlas (or sprite sheet, as you called it). Textures are stored in memory with power-of-two dimensions. So if you have a sprite that is 129px by 132px, it is stored in memory as 256px by 256px, the nearest power-of-two size. If you have many sprites, that is quite a lot of extra memory used up.
By using a texture atlas you only have one texture in memory and then it pulls the pieces out of it that it needs for your sprites. These sprites can be whatever size you want without having to worry about power of 2 sizes.
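The waste is easy to compute. A quick sketch, assuming 32-bit RGBA8888 textures (other pixel formats change the per-pixel byte count but not the padding logic):

```cpp
#include <cstdint>

// Smallest power of two >= n (what older GPUs required per texture side).
uint32_t nextPow2(uint32_t n) {
    uint32_t p = 1;
    while (p < n) p <<= 1;
    return p;
}

// Bytes actually allocated for a w x h sprite stored as its own
// RGBA8888 texture, padded up to power-of-two dimensions.
uint32_t paddedTextureBytes(uint32_t w, uint32_t h) {
    return nextPow2(w) * nextPow2(h) * 4;  // 4 bytes per RGBA pixel
}
```

For the 129x132 sprite from the answer, `paddedTextureBytes(129, 132)` is 256 * 256 * 4 = 262144 bytes, nearly four times the 129 * 132 * 4 = 68112 bytes of real pixel data; packed into an atlas, the sprite only pays for the pixels it uses.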
You can read more details about it in this tutorial:
http://www.raywenderlich.com/2361/how-to-create-and-optimize-sprite-sheets-in-cocos2d-with-texture-packer-and-pixel-formats

Tiling a large QGraphicsItem in a QGraphicsView

I am currently using a QGraphicsItem into which I load a pixmap to display some raster data. I am not doing any tiling or anything of the sort yet, but I have overridden my QGraphicsItem so that I can implement features like zooming under the mouse, tracking which pixel I am hovering over, etc.
My files coming off the disk are 1-2 GB in size, and I would like to figure out a more optimal way of displaying them. For starters, it seems I cannot display them all at once even if I wanted to, because the QImage that I am using (QPixmap->QImage->QGraphicsItem) seems to fail at any pixel dimension over 32,xxx (16-bit).
So how should I implement tiling here if I want to keep using a single QGraphicsItem? I don't think I want to use multiple QGraphicsItems to hold the displayed data plus the neighboring data "about" to be displayed. That would require me to scale all of them when the user moused over and tried to zoom a single tile, and thus also to reposition everything, right? I guess this will also require some knowledge of exactly what data to fetch from the file.
I am, however, open to ideas. I also suppose it would be nice to do this in some kind of threaded way, so that the user can keep panning or zooming the image even if all the tiles are not loaded yet.
I looked at the 40000-chip demo, but I am not sure that is what I am after. It looks like it basically still displays all of the chips as you normally would in a scene, and just overrides the paint method to supply a lower level of detail... or did I miss something about that demo?
It's not too surprising that there would be difficulty handling images that size. Qt just isn't designed for it and there are possibly other contributing factors due to the particular OS and perhaps the way memory is managed.
You very clearly need (or at least should use) a tiling mechanism. Your main issue is access to the image data: you need a way to get at it that does not involve loading the entire thing into a QImage (or QPixmap), since it has already been determined that this fails.
You would either need a method (library) that can load the entire image into memory and let you pull regions of image data out of it, or one that loads only a specific region from the file on disk. You would also need the ability to resize very large regions to lower-resolution versions when "zooming" out on any part of the image. Unfortunately, I have never done image processing like this, so I am unfamiliar with the library options available; Qt likely won't be able to help you directly with this.
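Whatever library ends up supplying the pixel regions, the tiling bookkeeping itself is simple: given the visible rectangle in image coordinates, work out which fixed-size tiles intersect it and load only those. A minimal sketch (the tile size and coordinate convention are assumptions; a real implementation would also cache tiles and load them on a worker thread, as suggested above):

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Return the (col, row) indices of every tile that intersects the
// visible rectangle, clamped to the image bounds. Coordinates are in
// image pixels; tiles form a regular grid of tileSize x tileSize.
std::vector<std::pair<int,int>> visibleTiles(
        int viewX, int viewY, int viewW, int viewH,
        int tileSize, int imageW, int imageH) {
    int c0 = std::max(0, viewX / tileSize);
    int r0 = std::max(0, viewY / tileSize);
    int c1 = std::min((imageW - 1) / tileSize,
                      (viewX + viewW - 1) / tileSize);
    int r1 = std::min((imageH - 1) / tileSize,
                      (viewY + viewH - 1) / tileSize);
    std::vector<std::pair<int,int>> tiles;
    for (int r = r0; r <= r1; ++r)
        for (int c = c0; c <= c1; ++c)
            tiles.emplace_back(c, r);
    return tiles;
}
```

The single QGraphicsItem's paint method would then draw only these tiles, requesting any missing ones from the loader and filling in a placeholder until they arrive.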
One option you might explore however is using an image editing package to break your large image up into more manageable chunks. Then perhaps a QGraphicsView solution similar to the chip demo would work.