Android seems to visibly reduce the quality of PNG files at compile time.
I have an application that works with a canvas object. The process writes the canvas data over a PNG file that has smaller dimensions than the canvas. The writing process repeats in response to user events, possibly many times (maybe 20 times, depending on the user). But after every write the image quality gets worse and worse. It becomes pixelated.
Is there any way to turn off or disable compression for this?
EditingPartMutable_Bitmap.WriteToStream(out, 100, "PNG")
'quality = 100 didn't work either
Any idea?
PNG is a lossless format, so the quality setting is ignored. The pixelated effect you are getting must come from another source - most likely the bitmap being scaled or resampled each time it is drawn, since your PNG has smaller dimensions than the canvas.
Related
I am making a game with a large number of sprite sheets in cocos2d-x. There are too many characters and effects, and each of them uses a sequence of frames. The APK file is larger than 400 MB, so I have to compress those images.
In fact, each frame in a sequence differs only slightly from the others. So I wonder if there is a tool that compresses a sequence of frames rather than just packing them into a sprite sheet? (Armature animation can help, but the effects cannot be treated as armatures.)
For example, suppose an effect consists of 10 PNG files of 1 MB each. If I use TexturePacker to pack them into a sprite sheet, I get a big PNG of 8 MB plus a plist of 100 KB, for a total of 8.1 MB. But if I could compress them using the differences between frames, maybe I would get one 1 MB PNG plus 9 files of 100 KB each for reconstructing the other 9 frames during loading. That method would only need 1.9 MB on disk. And if I could also convert them to PVRTC format, the memory required at runtime would be reduced as well.
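As an illustration of that delta idea, here is a minimal sketch in C++ (it assumes every frame has already been decoded to a raw RGBA buffer of identical size; the function name and storage scheme are hypothetical, not an existing tool):

    #include <cstdint>
    #include <vector>

    // Sketch only: XOR each frame against a key frame. Runs of identical
    // pixels become runs of zero bytes, which a general-purpose compressor
    // (zlib, LZ4, ...) can then squeeze down very effectively.
    std::vector<uint8_t> MakeDelta(const std::vector<uint8_t>& keyFrame,
                                   const std::vector<uint8_t>& frame)
    {
        std::vector<uint8_t> delta(frame.size());
        for (std::size_t i = 0; i < frame.size(); ++i)
            delta[i] = keyFrame[i] ^ frame[i];
        return delta;  // compress this before writing it to disk
    }

    // Reconstruction at load time is the same operation in reverse:
    // frame[i] = keyFrame[i] ^ delta[i].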
By the way, I am now trying to convert .bmp to .pvr during game loading. Is there any lib for converting to pvr?
Thanks! :)
If you have lots of textures to convert to PVR, I suggest you get the PowerVR tools from www.imgtec.com. They come in GUI and CLI variants. PVRTexToolCLI did the job for me; I scripted a massive conversion job. Free to download and free to use, but you must register on their site.
I just tested it; it converts many formats to PVR (BMP and PNG included).
Before you go there (the massive batch job), I suggest you experiment with some variants. PVR is (generally) fat on disk, fast to load, and equivalent to other formats in RAM... the RAM requirement is essentially dictated by the number of pixels and the number of bits you encode per pixel. You can get some interesting disk sizes with PVR, depending on the output format and number of bits you use... but it may be lossy, and you could get visible artefacts. So experiment with a limited sample before deciding to go full bore.
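For the batch job itself, a typical conversion looks something like the line below (flags as I remember them from the tool's documentation - run PVRTexToolCLI -help to confirm them for your version):

    PVRTexToolCLI -i sprite.png -o sprite.pvr -f PVRTC1_4

where -i and -o are the input and output files and -f selects the output pixel format (here PVRTC1 at 4 bits per pixel).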
The first place I would look, even before any conversion, is your animations. Since you are using TP, it can detect duplicate frames and alias N frames to a single frame on the texture. For example, my design team provides me all 'walk/stance' animations as 8 frames, but only 5 distinct pictures! The plist contains frame aliases for the missing textures. In all my stances, frame 8 is the same as frame 2, so the texture only contains frame 2, but the plist artificially produces a frame 8 that crops the image of frame 2.
The other place I would look is using 16 bits. This will favour bundle size, memory requirements at runtime, and load speed. Use RGB565 for textures with no transparency, or RGBA5551 for animations that need an alpha channel, for example (a sketch follows below). Once again, try a few to make certain you get acceptable rendering.
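In cocos2d-x this can be set per texture at load time. A minimal sketch (class and enum names below are from cocos2d-x 3.x and differ in 2.x, so treat them as illustrative):

    #include "cocos2d.h"
    USING_NS_CC;

    void loadBackground16Bit()
    {
        // Ask cocos2d-x to decode subsequently loaded images as 16-bit RGB565
        // (no alpha channel); use RGBA4444 or RGB5A1 for frames with transparency.
        Texture2D::setDefaultAlphaPixelFormat(Texture2D::PixelFormat::RGB565);
        Texture2D* tex = Director::getInstance()->getTextureCache()
                             ->addImage("background.png");
        // Restore the default so other textures keep full 32-bit quality.
        Texture2D::setDefaultAlphaPixelFormat(Texture2D::PixelFormat::DEFAULT);
        (void)tex;
    }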
have fun :)
Here's the issue: I'm developing some Flash web sites and really enjoying AS3.
The problem: PNG 24-bit images are too big... I have three PNG images with transparency that I'd like to rotate through on the "Home page" every 10 seconds or so. Great. No problem - but instead of embedding all three PNGs in the SWF, which would take the thing longer to load, I'd like to load them dynamically from external files, so that the user doesn't have to wait around for images to load that aren't going to be displayed for another 10-15 seconds anyway. That's fine... I have working code for that.
The real problem: These PNG sizes, even loaded from external files on the fly, are really bugging me. One image is 350k when saved with Photoshop - 300k when I use PNGOUT. But... when I import the PNG into Flash's Library, I can go in and set it to JPG/Image Compression which reduces the size to about 45k, yet maintains the alpha information!! If Flash can compress my PNG that much, and still make it look good, why can't I find an app that can do the same for an external file? I'd be content to load my images into the Flash library and let it handle the compression, but if I end up with 5 or 6 images, that still turns out to be too long of a loading time.
Summary: How can I shrink my 350k PNG image with transparency down to 45k, like Flash does when I import it into its library?
Possible solution: Or.... hmmmm.... this could be a workaround... maybe I could just make a separate SWF movie for each PNG I want to use which uses the Flash compressed image - then read that file dynamically using a Loader... That ought to work! I shall return and report...
But still, how does Flash compress those PNGs so much more than compressors like PNGOUT? Maybe I'm just not passing in the right parameters for them to be effective.
Thanks for reading my ramblings. You all are a great sounding board!
PNG compression is lossless, so it can't compete with lossy perceptual compression schemes such as JPEG. Just be sure that your PNGs are the size at which they will be displayed (one trivial "compression" scheme would be to save the image scaled down and zoom it when displaying, but that is normally unsatisfactory). If you can't go below 24 bits (you can't go to a 256-colour palette, I guess), I don't think much can be done. I can only suggest giving PngCrush a look.
I used to have the same question; it turns out Flash uses JPEG compression for PNG files. The JPEG-compressed "PNG" is actually a variant that the standard PNG format does not support, but Flash does. In my own Flash project I used it a lot. I even used swftools to generate an animated SWF from a lot of PNGs, so I can load the single "PNG gallery" SWF and use all the PNGs inside.
I know that the question is a year old, but I thought it would be good for future reference. Using any of the png compression tools (PngCrush, Optipng) will not get anywhere near the same results as Flash compression.
The best way I've found to use Flash compression without creating each SWF in the Flash IDE is SwfTools' png2swf utility; it keeps alpha channels and also lets you set the JPEG compression quality.
http://www.swftools.org/png2swf.html
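A typical invocation looks something like this (the -o and -j flags are from the png2swf documentation; run png2swf --help to double-check on your version):

    png2swf -o gallery.swf -j 80 frame1.png frame2.png frame3.png

-o names the output SWF and -j sets the JPEG quality (0-100). The resulting SWF can then be pulled in at runtime with a Loader, as described above.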
Ok, so I'm trying to weigh up the pros and cons of using various different texture compression techniques. I spend 99.999% of my time coding 2D sprite games for Windows machines using DirectX.
So far I have looked at texture packing (SpriteSheets) with alpha-trimming and that seems like a decent way to get a bit more performance. Now I am starting to look at the texture format that they are stored in; currently everything is stored as *.PNGs.
I have heard that *.DDS files are good, especially when used with DXT5 (/3/1 depending on the task) compression, since the texture remains compressed in VRAM. People also say that, as they are already DirectDraw Surfaces, they load much, much quicker too.
So I created an application to test this out; the loop below loads the texture 20 times, releasing it between each call.
for (int i = 0; i < 20; i++)
{
    // Load the texture from disk, letting D3DX do whatever conversion it wants.
    if( FAILED( D3DXCreateTextureFromFile( g_pd3dDevice, L"Test.dds", &g_pTexture ) ) )
    {
        return E_FAIL;
    }
    // Release it again so the next iteration reloads from scratch.
    g_pTexture->Release();
    g_pTexture = NULL;
}
Now if I try this with a DXT5 texture, it takes 5x longer to complete than loading a simple *.PNG. I've heard it can be slower if you don't generate mipmaps, so I double-checked that. Then I changed the program I was using to generate the *.DDS file, switching to NVIDIA's own nvcompress.exe, but none of it had any effect.
EDIT: I forgot to mention that the files (both *.png and *.dds) are both the same image, just saved in different formats. (Same size, amount of alpha, everything!)
EDIT 2: When using the following parameters it loads almost 2.5x faster AND consumes a LOT less VRAM!
D3DXCreateTextureFromFileEx( g_pd3dDevice, L"Test.dds",
    D3DX_DEFAULT_NONPOW2, D3DX_DEFAULT_NONPOW2,           // width/height from file, no power-of-two rounding
    D3DX_FROM_FILE, 0, D3DFMT_FROM_FILE,                  // file's own mip chain, usage 0, no format conversion
    D3DPOOL_MANAGED, D3DX_FILTER_NONE, D3DX_FILTER_NONE,  // no filtering or scaling of image or mipmaps
    0, NULL, NULL, &g_pTexture )                          // no colour key, source info, or palette
However, I'm now losing all the transparency in the texture. I've looked at the DXT5 texture and it looks fine in Paint.NET and DirectX DDS Viewer; however, when loaded in, all the transparency turns to solid black. ColorKey issue?
EDIT 3: Ignore that last bit, I was being idiotic and in my "quick example" haste I'd forgotten to enable Alpha-Blending on the D3DXSprite->Begin(). Doh!
You need to distinguish between the format your files are stored in on disk and the format the textures ultimately use in video memory. DXT-compressed textures offer a good balance between memory usage and quality in video memory, but other compression techniques like PNG or JPEG generally result in smaller files and/or better quality on disk.
DDS files have the advantage that they support DXT formats directly and are laid out on disk in the same way that DirectX expects the data to be laid out in memory so there is minimal CPU time required after they are loaded to convert them into a format the hardware can use. They also support pre-generated mipmap chains which formats like PNG do not support. Compressing an image to DXT formats is a fairly time consuming process so you generally want to avoid doing it on load if possible.
A DDS file with pre-generated mipmaps that is the same size as and uses the same format as the video memory texture you plan to create from it will use the least CPU time of any standard format. You need to make sure you tell D3DX not to perform any scaling, filtering, format conversion or mipmap generation to guarantee that though. D3DXCreateTextureFromFileEx allows you to specify flags that prevent any internal conversions happening (D3DX_DEFAULT_NONPOW2 for image width and height if your hardware supports non power of two textures, D3DFMT_FROM_FILE to prevent mipmap generation or format conversion, D3DX_FILTER_NONE to prevent any filtering or scaling).
CPU time is only half the story though. These days CPUs are pretty fast and hard drives are relatively slow, so sometimes your total load time can be shorter if you load a smaller compressed file format like PNG or JPG and then do lots of CPU work to convert it, than if you load a larger file like a DDS and just do a memcpy into video memory. A common approach that gives good results is to zip DDS files: you get fast loading from disk, and decompressing on load costs little CPU since no format conversion is needed (a sketch follows below).
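As a rough illustration of the zipped-DDS approach, here is a sketch that inflates a zlib-compressed DDS in memory and hands the bytes to D3DX (the .dds.z packing, the 4-byte uncompressed-size prefix, and the helper name are assumptions of this example, not any standard format):

    #include <cstdint>
    #include <fstream>
    #include <iterator>
    #include <vector>
    #include <d3dx9.h>
    #include <zlib.h>

    // Hypothetical helper: load "Test.dds.z" (a DDS compressed with zlib's
    // compress(), prefixed with its uncompressed size) and create the texture
    // straight from the decompressed bytes, with no D3DX conversions.
    HRESULT LoadZippedDDS(LPDIRECT3DDEVICE9 device, const char* path,
                          LPDIRECT3DTEXTURE9* outTexture)
    {
        std::ifstream file(path, std::ios::binary);
        if (!file) return E_FAIL;
        std::vector<char> packed((std::istreambuf_iterator<char>(file)),
                                 std::istreambuf_iterator<char>());
        if (packed.size() < 4) return E_FAIL;

        // First 4 bytes: uncompressed size, written by our own packing tool.
        uLongf rawSize = *reinterpret_cast<const uint32_t*>(&packed[0]);
        std::vector<Bytef> raw(rawSize);
        if (uncompress(raw.data(), &rawSize,
                       reinterpret_cast<const Bytef*>(&packed[4]),
                       static_cast<uLong>(packed.size() - 4)) != Z_OK)
            return E_FAIL;

        // Same "no conversion" flags as the call shown in the question.
        return D3DXCreateTextureFromFileInMemoryEx(
            device, raw.data(), static_cast<UINT>(rawSize),
            D3DX_DEFAULT_NONPOW2, D3DX_DEFAULT_NONPOW2, D3DX_FROM_FILE, 0,
            D3DFMT_FROM_FILE, D3DPOOL_MANAGED, D3DX_FILTER_NONE,
            D3DX_FILTER_NONE, 0, NULL, NULL, outTexture);
    }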
Compression formats like PNG and JPG will compress some images more effectively than others. DXT is a fixed compression ratio - a given image resolution and format will always compress to the same size (this is why it is more suitable for decompression in hardware). If you're using simple, non-representative images for testing (e.g. a uniform colour or a simple pattern), then your PNG file is likely to be very small and so will load from disk faster than a typical game image would.
Compare the time it takes to load a standard PNG and then compress it to DXT against the time it takes to just load a DDS file.
Still, I can't see why a PNG would load any faster than the same texture DXT5-compressed. For one, the DXT5 file should be a fair bit smaller, so it should load from disk faster! Is this DXT5 texture the same as the PNG texture, i.e. are they the same size?
Have you tried playing with D3DXCreateTextureFromFileEx? You have far more control over what is going on. It may help you out.
Please help! Thanks in advance.
Update: Sorry for the delayed response. I hope it is helpful to provide more context here, since I'm not sure what alternative question I should be asking.
I have an image for a website home page that is 300px x 300px. That image has several distinct regions, including two that have graphical copy on top of the regions.
I have compressed the image down as much as I can without compromising the appearance of that text, and those critical regions of the image.
I tried slicing the less critical regions of the image and saving those at lower compressions in order to get the total kbs down, but as gregmac posted, the sections don't look right when rejoined.
I was wondering if there is a piece of software out there, or a manual technique, for identifying critical regions of an image to "compress less" while compressing other parts of the image more, in order to get the file size down while keeping the elements of the graphic that need to be high-resolution sharp.
You cannot - you can only compress an entire PNG file.
You don't need to (I cannot think of a single case where compressing a specific portion of a PNG file would be useful)
Dividing the image into multiple parts ("slicing") is the only way to compress different portions of an image file, although I'd recommend against using different compression levels in one "sliced image", as the differing compression artefacts joining up will probably look odd.
Regarding your update,
identifying critical regions of an image to "compress less" and could compress other parts of the image more in order to get the file size down
This is inherently what image compression does - if there's a big empty area it will be compressed down to a few bytes (using RLE, for example), but a very detailed region will have more bytes "spent" on it.
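To see why uniform areas cost almost nothing, here is a toy run-length encoder (a sketch for illustration only; real PNG uses DEFLATE, which is more sophisticated):

    #include <cstdint>
    #include <vector>

    // Toy RLE: each run of identical bytes becomes a (count, value) pair, so
    // a long flat region collapses to a handful of pairs while a noisy
    // region stays roughly the same size (or even grows slightly).
    std::vector<uint8_t> RleEncode(const std::vector<uint8_t>& data)
    {
        std::vector<uint8_t> out;
        for (std::size_t i = 0; i < data.size(); ) {
            uint8_t count = 1;
            while (i + count < data.size() && count < 255 &&
                   data[i + count] == data[i])
                ++count;
            out.push_back(count);
            out.push_back(data[i]);
            i += count;
        }
        return out;
    }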
The problem sounds like the image is too big in terms of file size. Have you tried other image formats, mainly GIF or JPEG (or the other PNG variants, PNG-8 vs PNG-24)?
I have compressed the image down as much as I can without compromising the appearance of that text
Perhaps the text could be overlaid using CSS, rather than embedded in the image? Might not be practical, but it would allow you to compress the background more (if the background image is a photo, JPEG might work best, since you no longer have to worry about the text)
Other than that, I'm out of ideas. Is the 300*300px PNG really too big?
It sounds like you are compressing parts of your image using something like JPEG and then pasting those compressed images onto a PNG combined with other images, and the entire PNG is sent to the browser where you split them up.
The problem with this is that the more you compress your JPEG parts, the more compression artifacts you get. Then, when you put these low-quality images onto the PNG, which uses deflate compression, you will actually end up increasing the file size, because the JPEG noise won't deflate well.
So if you are keen on keeping PNG as your file format, the best solution would be not to JPEG-compress the parts you paste onto your PNG - keep everything as sharp as possible.
PNG filters each row separately (that is the "predictor" step) before the whole stream is compressed, so it's best to keep your PNG as wide as possible, with similar images next to each other horizontally rather than under each other vertically.
Perhaps upload an example of the images you're working with?
I'm writing an application that handles metadata for images and all kinds of animations, so I'm looking for a way to find basic info about an animation file, e.g:
length (in minutes/seconds/frames)
aspect ratio of pixels
resolution of individual frames
framerate
Right now, I let my program execute
mplayer -identify animfile.avi
and parse its console output, which contains all the info I need in a machine-readable format. This works fine, but I know that some potential users of the program prefer VLC as their media player, so I'd rather avoid a hard dependency on mplayer being installed.
I've tried
vlc -vv animfile.avi
which prints an ungodly amount of junk to the console, sometimes containing the stuff I'm looking for. The formatting, and which data gets printed, seem to vary depending on the file format of the animation, though.
Is there an easier way to extract basic info from an animation in any format one has a decoder for (especially the length of the animation), using VLC or some other app/library that is usually available on a typical Linux installation?
Edit: I'd rather use another program to do the dirty work, as this is supposed to work for any animation format, e.g avi, mpg, mov, wmv, vob etc.
Edit: totem-video-indexer seems more promising, and was also included in the standard installation. Enough codecs to make it useful, however, were not. That could be fixed by installing the "non-free-codecs" package from Medibuntu.
The output of totem-video-indexer is very easy to parse:
TOTEM_INFO_DURATION=5217
TOTEM_INFO_HAS_VIDEO=True
TOTEM_INFO_VIDEO_WIDTH=720
TOTEM_INFO_VIDEO_HEIGHT=480
TOTEM_INFO_VIDEO_CODEC=XVID MPEG-4
TOTEM_INFO_FPS=30
TOTEM_INFO_HAS_AUDIO=True
TOTEM_INFO_AUDIO_BITRATE=50
TOTEM_INFO_AUDIO_CODEC=MPEG 1 Audio, Layer 3 (MP3)
TOTEM_INFO_AUDIO_SAMPLE_RATE=48000
TOTEM_INFO_AUDIO_CHANNELS=Stereo
mediainfo is a pretty useful program. It's LGPL, and is just a frontend for libmediainfo, which should be exactly what you want.
http://mediainfo.sf.net/
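Since the question is ultimately tagged C++, libmediainfo can also be used directly rather than parsing CLI output. A minimal sketch (the parameter names "Duration", "Width", "Height" and "FrameRate" follow MediaInfo's parameter list, but double-check them against your libmediainfo version and build settings):

    #include <MediaInfo/MediaInfo.h>
    #include <iostream>

    int main()
    {
        MediaInfoLib::MediaInfo mi;
        if (!mi.Open(__T("animfile.avi")))   // __T comes from the MediaInfo headers
            return 1;

        // Query individual fields by stream kind and parameter name.
        std::wcout << L"Duration (ms): "
                   << mi.Get(MediaInfoLib::Stream_Video, 0, __T("Duration")) << L"\n"
                   << L"Resolution:    "
                   << mi.Get(MediaInfoLib::Stream_Video, 0, __T("Width")) << L"x"
                   << mi.Get(MediaInfoLib::Stream_Video, 0, __T("Height")) << L"\n"
                   << L"Frame rate:    "
                   << mi.Get(MediaInfoLib::Stream_Video, 0, __T("FrameRate")) << L"\n";
        mi.Close();
        return 0;
    }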
This is a little more difficult question than you may realize. The AVI file format grew over time, and often has nearly the same information in two or three different places. In some cases those are really supposed to agree (but sometimes don't) and in other cases they're subtly different.
Just for example, you asked about the width and height. There are actually four different width/height specs for a single frame: the screen width/height, the pixel width/height (from which you derive the pixel aspect ratio), the active width/height, and the compressed width/height. The screen width/height is the (theoretical) size of the screen. The active width/height excludes the overscan area - it tells you if (for example) some pixels at the border should be ignored. The compressed width/height takes rounding into account - for example, JPEG compresses in blocks of 8x8 pixels, so the compressed width and height have to be multiples of 8 for a motion JPEG file.
In any case, since your question is tagged C++, I'm going to guess you'd rather read the file and get the data directly than depend on spawning something else to do the dirty work. If so, you probably want to look at the OpenDML AVI file spec. You can get at least some idea of the length, resolution, and framerate just from reading the basic AVI header, which is in a fixed spot at the beginning of the file, so that much is trivial to get (see the sketch below). It'll take a bit more work to get to the pixel aspect ratio though...
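For reference, a minimal sketch of reading that basic header (field layout per the MainAVIHeader structure in the AVI/OpenDML spec; error handling kept deliberately thin):

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // The main AVI header ('avih' chunk) normally sits right after the
    // RIFF/'AVI ' and LIST/'hdrl' headers at the start of the file.
    struct MainAVIHeader {
        uint32_t dwMicroSecPerFrame;    // frame period in microseconds
        uint32_t dwMaxBytesPerSec;
        uint32_t dwPaddingGranularity;
        uint32_t dwFlags;
        uint32_t dwTotalFrames;
        uint32_t dwInitialFrames;
        uint32_t dwStreams;
        uint32_t dwSuggestedBufferSize;
        uint32_t dwWidth;
        uint32_t dwHeight;
        uint32_t dwReserved[4];
    };

    bool ReadAviMainHeader(const char* path, MainAVIHeader* out)
    {
        FILE* f = fopen(path, "rb");
        if (!f) return false;
        // 12 bytes 'RIFF' <size> 'AVI ', 12 bytes 'LIST' <size> 'hdrl',
        // then the 8-byte 'avih' chunk header and the struct itself.
        char riff[12], list[12], avih[8];
        bool ok = fread(riff, 1, 12, f) == 12 && !memcmp(riff, "RIFF", 4)
               && !memcmp(riff + 8, "AVI ", 4)
               && fread(list, 1, 12, f) == 12 && !memcmp(list, "LIST", 4)
               && fread(avih, 1, 8, f) == 8 && !memcmp(avih, "avih", 4)
               && fread(out, 1, sizeof *out, f) == sizeof *out;
        fclose(f);
        return ok;  // assumes a little-endian host, matching the file format
    }

    // From the header: framerate = 1e6 / dwMicroSecPerFrame, length in
    // seconds = dwTotalFrames * dwMicroSecPerFrame / 1e6, and resolution
    // = dwWidth x dwHeight.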