I am making a game with a large number of sprite sheets in cocos2d-x. There are many characters and effects, and each of them uses a sequence of frames. The APK file is larger than 400 MB, so I have to compress those images.
In fact, each frame in a sequence differs only slightly from the others. So I wonder: is there a tool that compresses a sequence of frames instead of just putting them into a sprite sheet? (Armature animation can help, but the effects cannot be treated as an armature.)
For example, take an effect consisting of 10 PNG files, each 1 MB. If I use TexturePacker to make them into a sprite sheet, I get a big PNG file of 8 MB and a plist file of 100 KB, for a total of 8.1 MB. But if I could compress them using the differences between frames, maybe I would get one PNG file of 1 MB plus 9 files of 100 KB each for reproducing the other 9 PNGs during loading. That method would only require 1.9 MB on disk. And if I could convert them to PVRTC format, the memory required at runtime could also be reduced.
By the way, I am now trying to convert .bmp to .pvr during game loading. Is there any library for converting to PVR?
Thanks! :)
If you have lots of textures to convert to PVR, I suggest you get the PowerVR tools from www.imgtec.com. They come in GUI and CLI variants. PVRTexToolCLI did the job for me; I scripted a massive conversion job. Free to download, free to use, but you must register on their site.
I just tested it; it converts many formats to PVR (BMP and PNG included).
Before you go there (the massive batch job), I suggest you experiment with some variants. PVR is (generally) fat on disk, fast to load, and equivalent to other formats in RAM. RAM requirements are essentially dictated by the number of pixels and the number of bits you encode for each pixel. You can get some interesting disk sizes with PVR, depending on the output format and number of bits you use, but it may be lossy, and you could get visible artefacts. So experiment with a limited sample before deciding to go full bore.
The first place I would look, even before any conversion, is your animations. Since you are using TP, it can detect duplicate frames and alias N frames to a single frame on the texture. For example, my design team provides me all 'walk/stance' animations as 5 pictures, but 8 frames! The plist contains frame aliases for the missing textures. In all my stances, frame 8 is the same as frame 2, so the texture only contains frame 2, but the plist artificially produces a frame 8 that crops the image of frame 2.
The other place I would look is using 16-bit textures. This will favour bundle size, memory requirements at runtime, and load speed. Use RGB565 for textures with no transparency, or RGBA5551 for animations, for example. Once again, try a few to make certain you get acceptable rendering.
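If you are on cocos2d-x, here is a minimal sketch of how that is usually wired up, assuming the 3.x Texture2D / SpriteFrameCache API (the equivalent call in 2.x is CCTexture2D::setDefaultAlphaPixelFormat, and the sheet file name is just a placeholder):

#include "cocos2d.h"

using namespace cocos2d;

// Call this before the sprite sheet is loaded: every texture created
// afterwards is converted to the chosen 16-bit format on upload.
void loadMenuSheetIn16Bit()
{
    // RGB565: no alpha, good for opaque backgrounds.
    // Use RGB5A1 (5551) for sheets that need 1-bit alpha,
    // or RGBA4444 when you need smoother alpha gradients.
    Texture2D::setDefaultAlphaPixelFormat(Texture2D::PixelFormat::RGB565);

    // Hypothetical sheet name; the plist references the matching texture.
    SpriteFrameCache::getInstance()->addSpriteFramesWithFile("menu_background.plist");

    // Restore the default so other textures are not silently degraded.
    Texture2D::setDefaultAlphaPixelFormat(Texture2D::PixelFormat::RGBA8888);
}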
have fun :)
In order to accomplish some specific editing on some .avi files, I'd like to create an application (in C++) that is able to load, edit, and save those .avi files. But what is the most efficient way? At first, a simple 3D array containing a 2D array of pixels for every frame seems the simplest solution, but then its size would be enormous. Let's assume a pixel only needs a color, and one color means 3 bytes (1 char r, 1 char g, 1 char b). For a 1920x1080 video format, that is about 6 MB for a single frame (1920 x 1080 x 3 bytes)! This data may or may not be smaller if I use pointers for the colors, so that already used colors won't take more space; I don't really know, since I'm pretty new to C++ and the whole low-level stuff. (As a comparison: one of my AVI files recorded with the Xvid codec is 40 seconds long, 30 fps, and only 2 MB.)
So how would you actually store the video data (not even the audio, just the video) efficiently, while still being able to easily perform per-frame changes on it?
As you have realised, uncompressed video is enormous and it is not practical to store an entire video in this way.
Video compression is an extremely complex topic, but more-or-less, it works as follows: certain "key-frames" are compressed using fairly standard compression techniques similar or identical to still-photo compression such as JPEG. Frames following key-frames are compressed by comparing the frame with the previous one and looking for changes (such as moving blocks). Every now and again, a new key-frame is used.
You don't really have to worry much about that as you are not going to write your own video coder/decoder (codec). There are standard ones.
What will happen is that your program will decode the compressed video frame by frame and keep a certain number of frames in memory while you are working on them, then re-encode them when you are finished. In the uncompressed form, you have access to the individual pixels and can work on them however you want.
You are probably not going to do that by yourself either; it is very hard. You probably want to use a framework such as OpenCV. There are a huge number of standard filters and tools built into these frameworks, and it may be that what you want to do is already implemented somewhere.
The OpenCV framework can return individual frames in a Mat object, and you can then access the pixels directly; see this post: Get Pixels from Mat. The OpenCV tutorial pages are a good place to start.
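For instance, here is a minimal OpenCV sketch of the decode/edit/re-encode loop described above (the file names, the Xvid FOURCC, and the per-pixel edit are placeholders, and the property constants assume OpenCV 3+):

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture in("input.avi");            // decodes frame by frame
    if (!in.isOpened()) return 1;

    double fps = in.get(cv::CAP_PROP_FPS);
    cv::Size size((int)in.get(cv::CAP_PROP_FRAME_WIDTH),
                  (int)in.get(cv::CAP_PROP_FRAME_HEIGHT));

    // Re-encode with Xvid; any FOURCC of a codec installed on the system works.
    cv::VideoWriter out("output.avi",
                        cv::VideoWriter::fourcc('X', 'V', 'I', 'D'),
                        fps, size);

    cv::Mat frame;
    while (in.read(frame))                       // one uncompressed frame in RAM
    {
        // Per-pixel access: frame is BGR, 8 bits per channel.
        for (int y = 0; y < frame.rows; ++y)
            for (int x = 0; x < frame.cols; ++x)
            {
                cv::Vec3b& px = frame.at<cv::Vec3b>(y, x);
                px[2] = cv::saturate_cast<uchar>(px[2] + 20);   // example edit: boost red
            }
        out.write(frame);                        // re-encode immediately
    }
    return 0;
}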
I have been searching the cocos2d forum, but I do not understand some of the concepts people are using. In my game I have to load over 100 images to use as an animation for my main menu, and the problem is that these images take about 3 to 5 seconds to load before my game starts up. The animation runs great after the images are loaded; it's the loading that is the problem. I would use sprite sheets, but the images are too big, so I have to load them separately. Should I make a loading screen to load all of these images first, and if so, what is the best way to implement it? This is my first time trying to do something like this.
@Stephen: Two ways to do this. With TexturePacker you can create a .tps file, one for each source image, then export under File->Export Image. Set the geometry to 1024x1024 for your images. Specify the .pvr format, enable pre-multiplied alpha, and toy with dithering (this may actually benefit some textures, i.e. improve on the .png's). You could also probably benefit from RGBA4444 for menus (a gain in memory required, with no significant loss in rendered quality).
You can also use the built-in texturetool as follows:
Before you do this, you must convert toto.png to a POT (power-of-two) file, 1024x1024 in your case, with Photoshop for example.
MrEvil:pvrCenter$ /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/texturetool -m -f PVR -e PVRTC toto.png -o toto.pvr
MrEvil:pvrCenter$ gzip toto.pvr
This gives excellent compression after a gzip (from a 691 KB PNG to 295 KB).
I used texturetool because I can script it in shell and process a whole lot of images with a single command (play D3 while the box churns out the images :) ).
EDIT 1: some info on packing and file sizes.
OK, I started with one of my own 960x640 8-bit PNGs, 691 KB.
Load it into TexturePacker, set the format to RGBA8888, 1024x1024: I get 766 KB (this gives me my POT file).
Export to RGBA8888 as a .pvr.ccz: 996 KB.
Export to RGBA8888 as a .pvr.gz: 1.001 MB.
Export to RGBA4444 as a .pvr.ccz: 193 KB.
If I use texturetool on the 766 KB file and then gzip it: 305 KB (RGBA8888). I can't really explain the difference between 305 KB and 996 KB; it could be related to the dithering processing by TexturePacker, but I'm not certain.
Yes, definitely use a texture atlas (sprite sheet is not the correct term but means the same thing). A great tool for that is TexturePacker.
A texture atlas will save time when loading, and conserve memory. You can also try out different image color depth and compression options to further improve memory usage and loading times, but many options will affect image quality to a varying degree and depending on your images.
Btw, how big are these images? Assuming each is 512x512 at 32 bits per pixel, and you load 100 of them, they'll consume 100 MB of memory. I mention this because it is often overlooked: file sizes on disk are a fraction of what the images consume as textures.
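On the original loading-screen question: if you do end up with several atlases (or separate files), you can at least hide the stall behind a loading scene by preloading asynchronously. Here is a rough sketch against the cocos2d-x 3.x TextureCache API; the file names and the completion callback are placeholders:

#include "cocos2d.h"

#include <functional>
#include <memory>
#include <string>

using namespace cocos2d;

// Kick off async loads from the loading scene; each image is decoded on a
// worker thread and the callback fires on the main thread when it is ready.
void preloadMenuTextures(int total, const std::function<void()>& onAllLoaded)
{
    auto* cache = Director::getInstance()->getTextureCache();
    auto loaded = std::make_shared<int>(0);

    for (int i = 0; i < total; ++i)
    {
        // "menu_anim_%d.png" is a hypothetical naming scheme.
        std::string file = StringUtils::format("menu_anim_%d.png", i);
        cache->addImageAsync(file, [loaded, total, onAllLoaded](Texture2D*) {
            if (++(*loaded) == total)
                onAllLoaded();   // e.g. replace the loading scene with the menu
        });
    }
}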
I'm saving a large number of small png files for use in a game on a phone, so space is at a premium.
I'm trying to figure out the logic behind the file sizes so I can save things most efficiently, but even after using pngcrush the sizes are totally inconsistent.
I saved a 1x1 image and it takes 3 KB. I have another 23x21 image which takes only 2 KB. I have two images which are almost the same size, but one takes 6 KB and the other 13 KB. I doubled the image height and copied one image into the empty space of the other and saved that; the combined image is only 11 KB!
Why is a 1x1 image larger than a 23x21 image? Why can I combine a 13 KB image and a 6 KB image and get an 11 KB image?
Here are the images I'm talking about (there's a 1x1-pixel image between the first and second images; it's difficult to see, so I'll just give the URL: http://g42.org/temp/png/1x1.png):
http://g42.org/temp/png/hat.png
http://g42.org/temp/png/1x1.png
http://g42.org/temp/png/helmet1.png
http://g42.org/temp/png/helmet2.png
http://g42.org/temp/png/helmet1_2.png
It's not a compression thing; the problem with the 1x1 image is that it has metadata (added by Photoshop, it seems): a color profile (iCCP chunk). If you look inside the binary, it's the data between the strings "iCCP" and "IDAT"; if it is removed you get a 69-byte file.
If you reopen and save the file with most image viewers (e.g. XnView), or use pngcrush, you can strip that chunk. See it here: http://i.stack.imgur.com/fmOdA.png
And regarding the helmet images: besides other informational chunks (ImageReady adds some informational text, as you can see), the difference is due to different formats: the two-helmet image is a paletted image (8 bits per pixel), while the single helmet is RGB with alpha (32 bits per pixel).
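If you want to inspect or strip those extra chunks yourself, the PNG container is easy to walk: an 8-byte signature followed by chunks of [4-byte big-endian length][4-byte type][data][4-byte CRC]. Below is a rough sketch that copies only the chunks needed to display the image, roughly what pngcrush's metadata trimming does (minus the recompression); the kept-chunk list is a judgement call:

#include <cstdint>
#include <fstream>
#include <set>
#include <string>
#include <vector>

// Reads a PNG and writes a copy with ancillary chunks (iCCP, tEXt, ...) removed.
bool stripAncillaryChunks(const std::string& inPath, const std::string& outPath)
{
    std::ifstream in(inPath, std::ios::binary);
    std::vector<uint8_t> data((std::istreambuf_iterator<char>(in)),
                              std::istreambuf_iterator<char>());
    if (data.size() < 8) return false;

    // Chunks required (or worth keeping) for correct display.
    const std::set<std::string> keep = {"IHDR", "PLTE", "IDAT", "IEND", "tRNS", "gAMA"};

    std::vector<uint8_t> out(data.begin(), data.begin() + 8);   // PNG signature
    size_t pos = 8;
    while (pos + 12 <= data.size())
    {
        uint32_t len = (uint32_t(data[pos]) << 24) | (uint32_t(data[pos + 1]) << 16) |
                       (uint32_t(data[pos + 2]) << 8) | uint32_t(data[pos + 3]);
        std::string type(data.begin() + pos + 4, data.begin() + pos + 8);
        size_t chunkSize = 12 + size_t(len);     // 4 length + 4 type + data + 4 CRC
        if (pos + chunkSize > data.size()) return false;

        if (keep.count(type))
            out.insert(out.end(), data.begin() + pos, data.begin() + pos + chunkSize);

        pos += chunkSize;
        if (type == "IEND") break;
    }

    std::ofstream(outPath, std::ios::binary)
        .write(reinterpret_cast<const char*>(out.data()), out.size());
    return true;
}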
PNG compression is based on the same algorithm as zlib (DEFLATE) and is highly sensitive to the data being compressed, so you won't see a consistent relationship between image size and file size. In the case of the combined image, it is still bigger than the smaller image, and given the similarity of the two halves, the compressor was probably able to reuse a lot of the Huffman tree. I don't know enough about the algorithm to say for certain how it ended up smaller than the other half.
As long as you are not seeing oddities like the 1x1 image, which you seem to have figured out in the comments, I don't think this will make a lot of sense without extensive study of image compression.
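To get a feel for how content-dependent the compressor is, you can run raw pixel buffers through zlib directly (PNG applies per-scanline filters before handing the data to the same DEFLATE algorithm, but the effect is similar): flat areas shrink to almost nothing, while noise barely compresses. A small illustrative sketch, assuming zlib is installed:

#include <cstdio>
#include <cstdlib>
#include <vector>
#include <zlib.h>

// Compress a buffer at maximum level and return the compressed size in bytes.
static unsigned long deflatedSize(const std::vector<Bytef>& src)
{
    uLongf destLen = compressBound(src.size());
    std::vector<Bytef> dest(destLen);
    compress2(dest.data(), &destLen, src.data(), src.size(), Z_BEST_COMPRESSION);
    return destLen;
}

int main()
{
    const size_t n = 64 * 64 * 3;                  // a raw 64x64 RGB image
    std::vector<Bytef> flat(n, 0x80);              // one solid colour
    std::vector<Bytef> noise(n);
    for (auto& b : noise) b = std::rand() & 0xFF;  // random pixels

    std::printf("flat : %zu -> %lu bytes\n", n, deflatedSize(flat));
    std::printf("noise: %zu -> %lu bytes\n", n, deflatedSize(noise));
    return 0;
}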
There is a great utility called pngcrush
http://pmt.sourceforge.net/pngcrush/
Compressing to PNG is a rather difficult task: there are lots of assumptions and strategies to try. Do we create a palette, or are we better off without it?
pngcrush essentially brute-forces 100+ different compression strategies, while at the same time trimming useless tags and sections.
PNG has several sub-formats: 24-bit with or without alpha, 8-bit (including alpha), grayscale, etc., which use different numbers of bytes per pixel and have different "compressibility".
Plus, PNG supports several compression tricks (filters and zlib settings) which affect how well the image data is compressed.
On top of that PNG can contain metadata, which sometimes can be pretty large, like some embedded color profiles.
ImageAlpha converts images to the most space-efficient PNG8+alpha variant.
ImageOptim removes junk metadata and finds best compression parameters.
With a combination of those two your images can be reduced by 30-50%.
I am building an application which takes a great number of screenshots while "recording" the operations a user performs on the Windows desktop.
For obvious reasons I'd like to store this data in as efficient a manner as possible.
At first I thought about using the PNG format to get this done. But I stumbled upon this: http://www.olegkikin.com/png_optimizers/
The best algorithms only managed a 3 to 5 percent improvement on an image of GUI icons. This is highly discouraging and shows that I'm going to need to do better, because just using PNG will not let me use previous frames to help the compression ratio: the file size will continue to grow linearly with time.
I thought about solving this with a bit of a hack: just save the frames in groups of some number, side by side. For example, I could store the content of 10 1280x1024 captures in a single 1280x10240 image; then the compression should be able to take advantage of repetition across adjacent images.
But the problem with this is that the algorithms used to compress PNG are not designed for it. I am arbitrarily placing images at 1024-pixel intervals from each other, and only 10 of them can be grouped together at a time. From what I have gathered after a few minutes scanning the PNG spec, scanlines are filtered individually and the DEFLATE stream only has a 32 KB back-reference window, so there is actually no way that info from 1024 pixels above could be referenced from down below.
So I've found the MNG format which extends PNG to allow animations. This is much more appropriate for what I am doing.
One thing that I am worried about is how much support there is for "extending" an image/animation with new frames. The nature of the data generation in my application is that new frames get added to a list periodically. But I do have a simple semi-solution to this problem, which is to cache a chunk of recently generated data and incrementally produce an "animation", say, every 10 frames. This will allow me to tie up only 10 frames' worth of uncompressed image data in RAM, not as good as offloading it to the filesystem immediately, but it's not terrible. After the entire process is complete (or even using free cycles in a free thread, during execution) I can easily go back and stitch the groups of 10 together, if it's even worth the effort to do it.
Here is my actual question that everything has been leading up to: is MNG the best format for my requirements? Those requirements are:
1. a C/C++ implementation available with a permissive license,
2. 24/32-bit color, 4+ megapixel resolution (some folks run 30-inch monitors),
3. lossless or near-lossless compression (retains text clarity), with provisions to reference previous frames to aid that compression.
For example, here is another option that I have thought about: video codecs. I'd like to have lossless quality, but I have seen examples of h.264/x264 reproducing remarkably sharp stills, and its performance is such that I can capture at a much faster interval. I suspect that I will just need to implement both of these and do my own benchmarking to adequately satisfy my curiosity.
If you have access to a PNG compression implementation, you could easily optimize the compression without having to use the MNG format, by preprocessing each "next" image as a difference with the previous one. This is naive but effective if the screenshots don't change much, and compressing "almost empty" PNGs will greatly reduce the storage space required.
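Here is a rough sketch of that preprocessing step on raw RGBA buffers (the PNG encode/decode itself is left to whatever library you already use; the frame layout is an assumption, not a prescription): each stored frame becomes a byte-wise difference from its predecessor, which is mostly zeros for a nearly static desktop and therefore compresses very well, and the same operation in reverse restores the original.

#include <cstdint>
#include <vector>

// Replace `frame` with its byte-wise difference from `previous` (modulo 256).
// A mostly unchanged screenshot becomes a buffer that is mostly zeros, which
// DEFLATE (and hence PNG) compresses far better than the raw frame.
void diffEncode(std::vector<uint8_t>& frame, const std::vector<uint8_t>& previous)
{
    for (size_t i = 0; i < frame.size(); ++i)
        frame[i] = static_cast<uint8_t>(frame[i] - previous[i]);
}

// Inverse operation: rebuild the original frame from the stored delta and the
// already-reconstructed previous frame.
void diffDecode(std::vector<uint8_t>& delta, const std::vector<uint8_t>& previous)
{
    for (size_t i = 0; i < delta.size(); ++i)
        delta[i] = static_cast<uint8_t>(delta[i] + previous[i]);
}

// Usage sketch: store the first frame as-is (a key frame), diff-encode every
// later frame against its predecessor, and feed each buffer to the PNG encoder.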
I'm writing an application that handles metadata for images and all kinds of animations, so I'm looking for a way to find basic info about an animation file, e.g:
length (in minutes/seconds/frames)
aspect ratio of pixels
resolution of individual frames
framerate
Right now, I let my program execute
mplayer -identify animfile.avi
and parse its console output, which contains all the info I need in a machine-readable format. This works fine, but I know that some potential users of the program prefer VLC as a media player, so I'd rather avoid a hard dependency on mplayer being installed.
I've tried
vlc -vv animfile.avi
which prints an ungodly amount of junk to the console, sometimes containing the stuff I'm looking for. The formatting, and what data gets printed, seem to vary depending on the file format of the animation though.
Is there an easier way to extract basic info from an animation of any format one has a decoder for (especially the length of the animation), using VLC or some other app/library that is usually available on a typical Linux installation?
Edit: I'd rather use another program to do the dirty work, as this is supposed to work for any animation format, e.g avi, mpg, mov, wmv, vob etc.
Edit: totem-video-indexer seems more promising, and was also included in the standard installation. Enough codecs to make it useful, however, were not. That could be fixed by installing the "non-free-codecs" package from Medibuntu.
The output of totem-video-indexer is very easy to parse:
TOTEM_INFO_DURATION=5217
TOTEM_INFO_HAS_VIDEO=True
TOTEM_INFO_VIDEO_WIDTH=720
TOTEM_INFO_VIDEO_HEIGHT=480
TOTEM_INFO_VIDEO_CODEC=XVID MPEG-4
TOTEM_INFO_FPS=30
TOTEM_INFO_HAS_AUDIO=True
TOTEM_INFO_AUDIO_BITRATE=50
TOTEM_INFO_AUDIO_CODEC=MPEG 1 Audio, Layer 3 (MP3)
TOTEM_INFO_AUDIO_SAMPLE_RATE=48000
TOTEM_INFO_AUDIO_CHANNELS=Stereo
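Given output in that shape, wrapping the tool from C++ is just a matter of running it and splitting each line on '='. A sketch, assuming a POSIX system with totem-video-indexer on the PATH:

#include <cstdio>
#include <map>
#include <string>

// Run totem-video-indexer on `path` and collect its KEY=VALUE pairs.
std::map<std::string, std::string> probeFile(const std::string& path)
{
    std::map<std::string, std::string> info;
    std::string cmd = "totem-video-indexer \"" + path + "\"";

    FILE* pipe = popen(cmd.c_str(), "r");
    if (!pipe) return info;

    char line[512];
    while (std::fgets(line, sizeof line, pipe))
    {
        std::string s(line);
        if (!s.empty() && s.back() == '\n') s.pop_back();
        std::string::size_type eq = s.find('=');
        if (eq != std::string::npos)
            info[s.substr(0, eq)] = s.substr(eq + 1);
    }
    pclose(pipe);
    return info;
}

// Afterwards info["TOTEM_INFO_DURATION"], info["TOTEM_INFO_FPS"], etc. hold
// the values shown above.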
mediainfo is a pretty useful program. It's LGPL, and is just a frontend for libmediainfo, which should be exactly what you want.
http://mediainfo.sf.net/
This is a little more difficult question than you may realize. The AVI file format grew over time, and often has nearly the same information in two or three different places. In some cases those are really supposed to agree (but sometimes don't) and in other cases they're subtly different.
Just for example, you asked about the width and height. There are actually four different width/height specs for a single frame: the screen width/height, the pixel width/height (from which you derive the pixel aspect ratio), the active width/height, and the compressed width/height. The frame width and height are the (theoretical) size of the screen. The active width/height excludes the overscan area. The compressed width/height takes rounding into account; for example, JPEG compresses in blocks of 8x8 pixels, so the compressed width and height have to be multiples of 8 for a motion-JPEG file. The active width/height tells you if (for example) some pixels at the border should be ignored.
In any case, since your question is tagged C++, I'm going to guess you'd rather read the file and get the data directly than depend on spawning something else to do the dirty work. If so, you probably want to look at the OpenDML AVI file spec. You can get at least some idea of the length, resolution, and framerate just from reading the basic AVI header, which is in a fixed spot at the beginning of the file, so that much is trivial to get. It'll take a bit more work to get to the pixel aspect ratio though...
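As a starting point, here is a rough sketch that pulls the basic numbers out of the main AVI header (the 'avih' chunk near the start of the RIFF file). It ignores the OpenDML extensions and the per-stream headers, assumes a little-endian host, and simply scans for the chunk rather than walking the LIST structure properly; also note that dwTotalFrames is one of those fields that is not always trustworthy:

#include <cstdint>
#include <cstdio>
#include <cstring>
#include <fstream>
#include <vector>

int main(int argc, char** argv)
{
    if (argc < 2) { std::fprintf(stderr, "usage: %s file.avi\n", argv[0]); return 1; }

    std::ifstream f(argv[1], std::ios::binary);
    std::vector<char> head(8192);                       // the main header lives near the start
    f.read(head.data(), head.size());
    size_t got = static_cast<size_t>(f.gcount());

    for (size_t i = 0; i + 8 + 40 <= got; ++i)
    {
        if (std::memcmp(&head[i], "avih", 4) != 0) continue;

        // Skip the fourcc and the chunk-size dword; fields are little-endian DWORDs.
        const char* p = &head[i] + 8;
        auto dword = [&](int idx) {
            uint32_t v;
            std::memcpy(&v, p + idx * 4, 4);            // field #idx of AVIMAINHEADER
            return v;
        };

        uint32_t usPerFrame  = dword(0);                // dwMicroSecPerFrame
        uint32_t totalFrames = dword(4);                // dwTotalFrames
        uint32_t width       = dword(8);                // dwWidth
        uint32_t height      = dword(9);                // dwHeight

        double fps = usPerFrame ? 1e6 / usPerFrame : 0.0;
        std::printf("%ux%u, %.2f fps, %u frames (~%.1f s)\n",
                    (unsigned)width, (unsigned)height, fps, (unsigned)totalFrames,
                    fps > 0 ? totalFrames / fps : 0.0);
        return 0;
    }
    std::fprintf(stderr, "no avih chunk found in the first %zu bytes\n", got);
    return 1;
}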