Cocos2d Loading several images at once? - cocos2d-iphone

I have been searching the cocos2d forum, but I do not understand some of the concepts people are using. In my game I have to load over 100 images to use as an animation for my main menu. The problem is that these images take about 3 to 5 seconds to load, and only then does my game start up. The animation runs great once the images are loaded; it's the loading that is the problem. I would use sprite sheets, but the images are too big, so I have to load them separately. Should I make a loading screen to load all of these images first, and if so, what is the best way to implement it? This is my first time trying to do something like this.

@Stephen: There are two ways to do this. With TexturePacker you can create a .tps file, one for each source image, then use File->Export Image. Set the geometry to 1024x1024 for your images. Specify the .pvr format, enable pre-multiplied alpha, and toy with dithering (this may actually benefit some textures, i.e. improve on the .png's). You could also probably benefit from RGBA4444 for menus (a gain on required memory, with no significant loss in rendered quality).
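If you would rather script this than click through the GUI, TexturePacker also ships a command-line tool. A rough sketch of an equivalent export (option names are from the version I have and may differ across TexturePacker releases, so check TexturePacker --help; the file names are placeholders):
# pack a folder of menu frames into one RGBA4444 .pvr.ccz atlas for cocos2d
TexturePacker --format cocos2d \
              --data menu.plist \
              --sheet menu.pvr.ccz \
              --opt RGBA4444 \
              --dither-fs \
              --max-size 1024 \
              menu_frames/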
You can also use the built-in texturetool as follows.
Before you do this, you must convert toto.png to a POT file (1024x1024 in your case), with Photoshop for example.
MrEvil:pvrCenter$ /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/texturetool -m -f PVR -e PVRTC toto.png -o toto.pvr
MrEvil:pvrCenter$ gzip toto.pvr
This gives excellent compression after a gzip (from a 691 KB png to 295 KB).
I used texturetool because I can script it in a shell and process a whole lot of images with a single command (and play D3 while the box churns out the images :) ).
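For reference, a minimal sketch of that kind of loop (the texturetool path is Xcode's default install location, as above; every input must already be a square POT image):
# convert every PNG in the current directory to PVRTC, then gzip the results
TT=/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/texturetool
for f in *.png; do
  "$TT" -m -f PVR -e PVRTC "$f" -o "${f%.png}.pvr"
  gzip "${f%.png}.pvr"
done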
EDIT 1: some info on packing and file sizes.
OK, I started with one of my own 960x640 8-bit PNGs, 691 KB.
Loaded into TexturePacker with the format set to RGBA8888, 1024x1024, I get 766 KB (this gives me my POT file).
Exported to RGBA8888 as a .pvr.ccz: 996 KB.
Exported to RGBA8888 as a .pvr.gz: 1.001 MB.
Exported to RGBA4444 as a .pvr.ccz: 193 KB.
If I use texturetool on the 766 KB file and then gzip it, I get 305 KB (RGBA8888). I can't really explain the difference between 305 KB and 996 KB. It could be related to the dithering processing by TexturePacker; I'm not certain.

Yes, definitely use a texture atlas (sprite sheet is not the correct term but means the same thing). A great tool for that is TexturePacker.
A texture atlas will save time when loading and conserve memory. You can also try out different image color depths and compression options to further improve memory usage and loading times, but many options will affect image quality to a varying degree, depending on your images.
Btw, how big are these images? Assuming each is 512x512 and you load 100 of them, they'll consume 100 MB of memory: 512 × 512 pixels × 4 bytes per pixel (RGBA8888) is 1 MB per texture. I mention this because it is often overlooked, and file sizes on disk are a fraction of what the images consume as textures.

Related

Shrink the size of a .png file

There are many programs that claim to reduce the size of a .png file, but none of the well-known ones (optipng, pngcrush, pngquant) allow me to shrink to a specified size. pngcrush tried its hardest, but the result was still way too big for my needs. For .jpg files, jpegoptim has an -m option that does allow me to shrink to the size I need. The obvious solution seemed to be to convert to jpg, shrink to the right size, then convert back, but that doesn't work either; the reconstituted .png file just jumps back to its original size.
Presumably, this has something to do with the structure of .png files.
Is there any way to get a small png file? This png file is an example of something I need to shrink to below 1 KB.
Thanks for any suggestions!
Use ImageMagick to reduce the colors, then pngcrush to get rid of ancillary chunks:
magick in.png -colors 8 temp.png
pngcrush -rem alla temp.png out.png
results in a 1621-byte file. If you have an older version of ImageMagick, use "convert" instead of "magick". Using "-colors 4" instead of "-colors 8" gets you a 1015-byte file, but the dithering looks very spotty.
Note that these preserve the transparency in the image, while converting to JPEG loses the transparency and makes the background a solid color.
The only solution to your problem that I can think of is to use .jpg instead of .png. The .jpg format was mainly created for high lossy compression that still yields a good enough image. On the other hand, .png goes for full transparency and no quality loss. To sum it up, .jpg is ideal for getting smaller files when quality doesn't matter so much, and .png is perfect for high-quality images where quality and colour really matter.
Sources:
http://www.labnol.org/software/tutorials/jpeg-vs-png-image-quality-or-bandwidth/5385/, http://www.interactivesearchmarketing.com/jpeg-png-proper-image-formatting/
I can get that 9.5 KB file down to 3.4 KB using the 8-bit palette PNG format. The image has a transparent boundary, which adds unnecessary pixels, and an alpha channel for the whole image, which isn't needed since the image is rectangular. After stripping the transparent boundary, eliminating the alpha channel, and using a palette, I can get it down to 3.2 KB.
To get any further, I have to use JPEG for lossy compression. At a very low image quality of 5 (out of 100), I can get it down to 1 KB. It shows some artifacts from the severe compression (look around the prompt > and _ to see some of those).
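For the curious, here is a hedged sketch of those steps with ImageMagick (not necessarily the exact commands used above; filenames are placeholders):
# strip the transparent boundary, drop the alpha channel, force an 8-bit palette
magick in.png -trim +repage -alpha off PNG8:out.png
# go lossy: flatten first (JPEG has no alpha), then use a very low quality setting
magick in.png -background white -flatten -quality 5 out.jpg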

How to compress sprite sheets?

I am making a game with a large number of sprite sheets in cocos2d-x. There are too many characters and effects, and each of them uses a sequence of frames. The apk file is larger than 400 MB, so I have to compress those images.
In fact, each frame in a sequence differs only a little from the others. So I wonder if there is a tool to compress a sequence of frames using those differences, instead of just putting them into a sprite sheet? (Armature animation can help, but the effects cannot be treated as an armature.)
For example, say an effect includes 10 png files and the size of each file is 1 MB. If I use TexturePacker to make them into a sprite sheet, I get a big png file of 8 MB and a plist file of 100 KB, for a total of 8.1 MB. But if I could compress them using the differences between frames, maybe I would get one png file of 1 MB plus 9 files of 100 KB each for reproducing the other 9 png files during loading. That method would only require 1.9 MB on disk. And if I could convert them to the pvrtc format, the memory required at runtime would also be reduced.
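One way to prototype that delta idea is ImageMagick's ChangeMask compose operator, which makes pixels transparent where two images match. A rough sketch (filenames are hypothetical; verify the argument order on a sample, and add -fuzz if frames differ by tiny amounts):
# keep only the pixels of frame2 that differ from frame1; the rest become transparent
magick frame2.png frame1.png -compose ChangeMask -composite delta2.png
# rebuild frame2 at load time by compositing the delta over the base frame
magick frame1.png delta2.png -composite frame2_rebuilt.png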
By the way, I am now trying to convert .bmp to .pvr during game loading. Is there any lib for converting to pvr?
Thanks! :)
If you have lots of textures to convert to pvr, I suggest you get the PowerVR tools from www.imgtec.com. They come in GUI and CLI variants. PVRTexToolCLI did the job for me; I scripted a massive conversion job. Free to download, free to use; you must register on their site.
I just tested it, and it converts many formats to pvr (bmp and png included).
Before you go there (the massive batch job), I suggest you experiment with some variants. PVR is (generally) fat on disk, fast to load, and equivalent to other formats in RAM: RAM requirements are essentially dictated by the number of pixels and the number of bits you encode for each pixel. You can get some interesting disk sizes with pvr, depending on the output format and number of bits you use, but it may be lossy, and you could get visible artefacts. So experiment on a limited sample before deciding to go full bore.
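For the batch job itself, a sketch of the sort of loop I mean (the -f format string varies between PVRTexTool versions; run PVRTexToolCLI -help and test one file first):
# convert every PNG in the current directory to 4bpp PVRTC
for f in *.png; do
  PVRTexToolCLI -i "$f" -o "${f%.png}.pvr" -f PVRTC1_4
done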
The first place I would look, even before any conversion, is your animations. Since you are using TP, it can detect duplicate frames and alias N frames to a single frame on the texture. For example, my design team provides all my 'walk/stance' animations as 8 frames, but with only 5 distinct pictures! The plist contains frame aliases for the missing textures. In all my stances, frame 8 is the same as frame 2, so the texture only contains frame 2, but the plist artificially produces a frame 8 that crops the image of frame 2.
The other place I would look is 16-bit formats. This will favour bundle size, memory requirements at runtime, and load speed. Use RGB565 for textures with no transparency, or RGBA5551 for animations, for example. Once again, try a few to make certain you get acceptable rendering.
have fun :)

Can't find logic behind png file sizes

I'm saving a large number of small png files for use in a game on a phone, so space is at a premium.
I'm trying to figure out the logic behind the file sizes so I can save things most efficiently, but even after using pngcrush the sizes are totally inconsistent.
I saved a 1x1 image and it takes 3kb. I have another 23x21 image which takes only 2kb. I have two images which are almost the same size, but one takes 6kb and the other takes 13kb. I doubled the image height and copied one image into the empty space of the other and saved that. The combined image is only 11kb!
Why is a 1x1 image larger than a 23x21 image? Why can I combine a 13kb image and a 6kb image and get an 11kb image?
Here are the images I'm talking about (there's a 1x1 pixel between the first and second images; it's difficult to see, so I'll just give the URL: http://g42.org/temp/png/1x1.png):
http://g42.org/temp/png/hat.png
http://g42.org/temp/png/1x1.png
http://g42.org/temp/png/helmet1.png
http://g42.org/temp/png/helmet2.png
http://g42.org/temp/png/helmet1_2.png
It's not a compression thing; the problem with the 1x1 image is that it has metadata (added by Photoshop, it seems): a color profile (iCCP chunk). If you look inside the binary, it's the data between the strings "iCCP" and "IDAT"; remove it and you get a 69-byte file.
If you reopen and save the file in most image viewers (XnView, for example), or run it through pngcrush, you can strip that chunk. See it here: http://i.stack.imgur.com/fmOdA.png
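If you only want that one chunk gone, pngcrush can remove it by name; a minimal sketch (filenames are placeholders):
pngcrush -rem iCCP in.png out.png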
And regarding the helmet images: besides other informational chunks (ImageReady adds some informational text, as you can see), the difference is due to different formats: the two-helmet image is a paletted image (8 bits per pixel), while the single helmet is RGB with alpha (32 bits per pixel).
PNG compression is based on the same algorithm as zlib and is highly sensitive to the data being compressed, so you won't see a consistent relationship between image size and file size. In the case of the combined image, it is still bigger than the smaller of the two originals, and given the similarity of the two halves, the compressor was probably able to reuse a lot of the Huffman tree. I don't know enough about the algorithm to say for certain how it ended up smaller than the larger half.
As long as you are not seeing oddities like the 1x1 image, which you seem to have figured out in the comments, I don't think this will make a lot of sense without extensive study of image compression.
There is a great utility called pngcrush
http://pmt.sourceforge.net/pngcrush/
Compressing to PNG is a rather difficult task: there are lots of assumptions and strategies to try. Do we create a palette, or are we better off without it?
pngcrush essentially brute-forces 100+ different compression strategies, while at the same time trimming useless tags and sections.
PNG has several sub-formats: 24-bit with or without alpha, 8-bit palette (which can include alpha), grayscale, etc., which use different numbers of bytes per pixel and have different "compressibility".
Plus, PNG supports several compression tricks (filters and zlib settings) which affect how well the image data is compressed.
On top of that PNG can contain metadata, which sometimes can be pretty large, like some embedded color profiles.
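As an example, you can let pngcrush try its whole strategy set and drop the ancillary chunks in one go (a sketch; -brute is slow on large images):
pngcrush -brute -rem alla in.png out.png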
ImageAlpha converts images to the most space-efficient PNG8+alpha variant.
ImageOptim removes junk metadata and finds best compression parameters.
With a combination of those two your images can be reduced by 30-50%.

What options for convert (ImageMagick or GraphicsMagick) produce the smallest (filesize) PNG?

ImageMagick creates some pretty large PNGs. GraphicsMagick is a lot better, but I'm still looking for the best options to use with convert to obtain the smallest filesize png.
I have here a large png with a small filesize, and passing this through IM convert I have been unable to reach that filesize, let alone get it smaller. With GM convert I can get it slightly smaller but I'm looking for improvements, generically for any image I come across.
gm convert -quality 95 a_png.png gm.png
convert -quality 95 -depth 8 a_png.png im.png
gm identify *
a_png.png PNG 2560x2048+0+0 PseudoClass 256c 8-bit 60.1K 0.000u 0:01
gm.png[1] PNG 2560x2048+0+0 PseudoClass 256c 8-bit 60.0K 0.000u 0:01
im.png[2] PNG 2560x2048+0+0 DirectClass 8-bit 130.2K 0.000u 0:01
What options for convert produce the smallest PNG filesize?
(Yes, I'm familiar with OptiPNG, PNGOUT and Pngcrush. But I'm after something that will be available without question on every *nix box I happen to be on.)
Looks like you and I are looking for the same answer. Unfortunately there don't seem to be many people out there with a good knowledge of GraphicsMagick. This is what I have learned so far:
The quality operator doesn't work properly for any image other than JPEGs. For me it just made the file size bigger when used on PNGs and GIFs.
I have done this to my PNG and GIF files to reduce their size:
gm convert myImage.png +dither -depth 8 -colors 50 myImage.png
+dither stops any dithering of the image when the colors are reduced. (this reduces the file size)
-depth 8 is probably unnecessary as most PNG files are already depth 8.
-colors 50 reduces the number of colors in the image to 50; this is the only way to really reduce the size of an image stored in a lossless format like PNG or GIF.
Obviously, for the best image quality/size ratio you can't just reduce the image depth or number of colors without knowing the current depth and number of colors. To determine this information I am doing the following:
gm identify -format "file_size:%b,unique_colors:%k,bit_depth:%q" myImage.png
For my image; this returns
file_size:100.7k,unique_colors:13455,bit_depth:8
The problem is that when GraphicsMagick reduces colors, it always reduces to 255 or fewer, so you can't set the number of colors to 300, for example. Also there seems to be an issue with the alpha channel for PNG files: if the image has transparency in it, reducing colors replaces those colors with transparency; ImageMagick does not do this.
I just came across this question again, so I'll update: GraphicsMagick and ImageMagick have a serious problem. They cannot write out PNG images using a tRNS chunk, which means that if you read in an image that has a tRNS chunk and then write it out, the image will be much bigger. GM is not the best tool for compressing images. You need to use a separate tool such as OptiPNG to compress PNGs again after using Image/GraphicsMagick. I am getting up to 60% smaller files when using OptiPNG after running GraphicsMagick on an image.
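A sketch of that two-step pipeline (filenames are placeholders; tune -colors and the optipng optimization level per image):
# reduce colors with GraphicsMagick, then recompress losslessly with OptiPNG
gm convert in.png +dither -colors 255 tmp.png
optipng -o5 tmp.png -out out.png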
Also I was wondering if you have encountered a problem regarding RGBA images and bit depth. For some images I am getting an "Invalid bit depth" exception. I can't see any reason why.
I haven't found a way to do it on the command line, but I did find this free website (http://tinypng.org/) that does an excellent job: my test image got a 71% reduction; the final size was only 29% of the original. It looks like you can give it 20 images at a time. I'm looking into how they do it.

Take advantage of Flash CS4's PNG Compression, but as an external file?

Here's the issue: I'm developing some Flash web sites and really enjoying AS3.
The problem: PNG 24-bit images are too big... I have three PNG images with transparency that I'd like to rotate through on the "Home page" every 10 seconds or so. Great. No problem - but instead of embedding all three PNGs in the SWF, which would make it take longer to load, I'd like to load them dynamically from external files, so that the user doesn't have to wait around for images that aren't going to be displayed for another 10-15 seconds anyway. That's fine... I have working code for that.
The real problem: These PNG sizes, even loaded from external files on the fly, are really bugging me. One image is 350 KB when saved with Photoshop, 300 KB when I use PNGOUT. But... when I import the PNG into Flash's Library, I can go in and set it to JPG/Image Compression, which reduces the size to about 45 KB yet maintains the alpha information!! If Flash can compress my PNG that much and still make it look good, why can't I find an app that can do the same for an external file? I'd be content to load my images into the Flash library and let it handle the compression, but if I end up with 5 or 6 images, that still turns out to be too long a loading time.
Summary: How can I shrink my 350 KB PNG image with transparency down to 45 KB, like Flash does when I import it into its library?
Possible solution: Or.... hmmmm.... this could be a workaround... maybe I could just make a separate SWF movie for each PNG I want to use which uses the Flash compressed image - then read that file dynamically using a Loader... That ought to work! I shall return and report...
But still, how does Flash compress those PNGs so much more than compressors like PNGOUT? Maybe I'm just not passing in the right parameters for them to be effective.
Thanks for reading my ramblings. You all are a great sounding board!
PNG compression is lossless, so it can't compete with lossy perceptual compression schemes such as JPEG. Just be sure that your pngs are the size at which they'll be displayed (one trivial "compression" scheme would be to save your image scaled down and zoom it when displaying, but this is normally unsatisfactory). If you can't go below 24 bits (you can't go to a 256-color palette, I guess), I don't think much can be done. I can only suggest giving PngCrush a look.
I used to have the same question, but I now think Flash uses JPEG compression for PNG files. The JPEG-compressed "png" is actually a variant that the standard PNG format does not support, but Flash does. In my own Flash project I used it a lot. I even used swftools to generate an animated SWF from a lot of PNGs, so I can load a single "png gallery" SWF and use all the pngs inside.
I know the question is a year old, but I thought it would be good for future reference: using any of the PNG compression tools (PngCrush, OptiPNG) will not get anywhere near the same results as Flash's compression.
The best way I've found to use Flash compression without creating each SWF in the Flash IDE is SwfTools' png2swf utility; it will keep alpha channels and also lets you set the JPEG compression quality. There's a sketch after the link below.
http://www.swftools.org/png2swf.html
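A hedged sketch of batch-converting each PNG into its own SWF (the -j quality flag is from the png2swf build I tested; confirm the exact option names with png2swf --help):
# wrap each PNG in a one-frame SWF using Flash-style JPEG compression
for f in *.png; do
  png2swf -o "${f%.png}.swf" -j 80 "$f"
done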