Take advantage of Flash CS4's PNG Compression, but as an external file?

Here's the issue: I'm developing some Flash web sites and really enjoying AS3.
The problem: 24-bit PNG images are too big... I have three PNGs with transparency that I'd like to rotate through on the home page every 10 seconds or so. Great. No problem - but instead of embedding all three PNGs in the SWF, which would make it take longer to load, I'd like to load them dynamically from external files, so that the user doesn't have to wait around for images that aren't going to be displayed for another 10-15 seconds anyway. That's fine... I have working code for that.
The real problem: the sizes of these PNGs, even loaded from external files on the fly, are really bugging me. One image is 350k when saved with Photoshop - 300k when I use PNGOUT. But... when I import the PNG into Flash's Library, I can go in and set it to JPG/Image Compression, which reduces the size to about 45k yet maintains the alpha information!! If Flash can compress my PNG that much and still make it look good, why can't I find an app that can do the same for an external file? I'd be content to load my images into the Flash Library and let it handle the compression, but if I end up with 5 or 6 images, that still turns out to be too long a loading time.
Summary: How can I shrink my 350k PNG image with transparency down to 45k, like Flash does when I import it into its Library?
Possible solution: Or.... hmmmm.... this could be a workaround... maybe I could just make a separate SWF movie for each PNG I want to use, containing the Flash-compressed image - then load that file dynamically using a Loader... That ought to work! I shall return and report...
But still, how does Flash compress those PNGs so much more than compressors like PNGOUT? Maybe I'm just not passing in the right parameters for them to be effective.
Thanks for reading my ramblings. You all are a great sounding board!

PNG compression is lossless, so it can't compete with lossy perceptual compression schemes such as JPEG. Just be sure that your PNGs are the size at which they'll be displayed (or not: one trivial "compression" scheme would be to save your image scaled down and zoom it when displaying, but this is normally unsatisfactory). If you can't go below 24 bits (you can't go to a 256-color palette, I guess), I don't think much can be done. I can only suggest giving PngCrush a look.

I used to have the same question; I later worked out that Flash uses JPEG compression for PNG files. The JPEG-compressed "PNG" is actually a variant (JPEG data plus a separately stored alpha channel) that the standard PNG format does not support, but SWF does. In my own Flash project I used it a lot. I even used SWFTools to generate an animated SWF from a lot of PNGs, so I can load that single "PNG gallery" SWF and use all the PNGs inside.

I know that the question is a year old, but I thought it would be good for future reference. Using any of the PNG compression tools (PngCrush, OptiPNG) will not get anywhere near the same results as Flash compression.
The best way I've found to use Flash compression without creating each SWF in the Flash IDE is SWFTools' png2swf utility; it will keep alpha channels and also allow you to set the JPEG compression quality, as shown below.
http://www.swftools.org/png2swf.html
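A typical invocation looks something like this (the exact flags are from memory, so double-check png2swf --help; I believe -j sets the JPEG quality and -z enables zlib compression):

png2swf -z -j 60 -o gallery.swf image1.png image2.png image3.png

Each PNG becomes one frame of gallery.swf, which you can then load at runtime with a Loader, exactly like the single-image SWF workaround described above.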

Related

Generate Live Mp4 in UWP

I want to develop a slide-show kind of app that could be cast to a smart TV, similar to showing a PowerPoint slide show. The standard Miracast solution via the Connect app does not work nicely, since the phone resolution does not match the high resolution of the TV; not to mention the fact that there is no way to hide the navigation bar with TryEnterFullscreenAsync. The images could be quickly rendered from vector sources. So my question is whether there is a way to generate MP4 on the fly.
As long as you can generate the bitmaps on the fly, as you mentioned, then you can use FFmpeg to create the MP4.
Download ffmpeg source code and check the source of doc/examples/muxing.c
This example is pretty much doing this. Just replace the fill_yuv_image() by the actual thing you are rendering.
Don't forget to convert your pictures to YUV format. In this example the encoder expects a YUV bitmap, and you will probably render an RGB image. Google for swscale, or check the other FFmpeg examples, to solve this problem.
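If it helps, the RGB-to-YUV step looks roughly like this with libswscale (a minimal sketch assuming an RGBA source bitmap and the YUV420P AVFrame that muxing.c allocates; error handling omitted):

extern "C" {
#include <libswscale/swscale.h>
#include <libavutil/frame.h>
}

// Converts one rendered RGBA bitmap into the YUV frame that replaces
// the output of fill_yuv_image() in the muxing.c example.
static void rgba_to_yuv(const uint8_t* rgba, int width, int height, AVFrame* yuv)
{
    SwsContext* ctx = sws_getContext(
        width, height, AV_PIX_FMT_RGBA,     // source geometry and format
        width, height, AV_PIX_FMT_YUV420P,  // destination geometry and format
        SWS_BILINEAR, nullptr, nullptr, nullptr);

    const uint8_t* src_slices[1] = { rgba };
    const int src_strides[1]     = { 4 * width };  // 4 bytes per RGBA pixel

    sws_scale(ctx, src_slices, src_strides, 0, height, yuv->data, yuv->linesize);
    sws_freeContext(ctx);  // in real code, create the context once and reuse it
}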
--
If you really want something Microsoft specific, then you must use the "Microsoft Media Foundation". There's a lot of samples here on how to encode and decode:
https://msdn.microsoft.com/en-us/library/windows/desktop/aa371827(v=vs.85).aspx
And you can use all these codecs:
https://msdn.microsoft.com/en-us/library/windows/desktop/ff819077(v=vs.85).aspx

Most efficient way to store video data

In order to accomplish some specific editing on some .avi files, I'd like to create an application (in C++) that is able to load, edit, and save those .avi files. But what is the most efficient way? When first thinking about it, a simple 3D array containing a 2D array of pixels for every frame seems the simplest solution; but then its size would be ENORMOUS. I mean, let's assume that a pixel only needs a color. One color would mean 3 bytes (1 char r, 1 char g, 1 char b). If I now have 1920x1080 video, that's about 2 million pixels, which would mean roughly 6 MB for only one frame! This data may or may not be smaller if using pointers for the colors, so that already-used colors won't take more space - I don't really know, since I'm pretty new to C++ and the whole low-level stuff. (As a comparison: one of my AVI files recorded with the Xvid codec is 40 seconds long, 30 fps, and only 2 MB.)
So how would you actually store the video data (Not even the audio, just the video) efficiently (while still being easily able to perform per-frame-changes on it)?
As you have realised, uncompressed video is enormous and it is not practical to store an entire video in this way.
Video compression is an extremely complex topic, but more or less it works as follows: certain "key-frames" are compressed using fairly standard compression techniques similar or identical to still-photo compression such as JPEG. Frames following key-frames are compressed by comparing the frame with the previous one and looking for changes (such as moving blocks), as sketched below. Every now and again, a new key-frame is used.
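As a toy illustration of the inter-frame idea (this is not how a real codec stores changes - they work on blocks and motion vectors - but the principle is the same):

#include <cstdint>
#include <vector>

struct Change { uint32_t offset; uint8_t value; };

// Store only the bytes that differ from the previous frame: tiny for
// mostly-static frames, large for scene cuts (which is when codecs
// insert a fresh key-frame).
std::vector<Change> delta_frame(const std::vector<uint8_t>& prev,
                                const std::vector<uint8_t>& curr)
{
    std::vector<Change> changes;
    for (uint32_t i = 0; i < curr.size(); ++i)
        if (curr[i] != prev[i])
            changes.push_back({i, curr[i]});
    return changes;
}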
You don't really have to worry much about that as you are not going to write your own video coder/decoder (codec). There are standard ones.
What will happen is that your program will decode the compressed video frame-by-frame and keep a certain number of frames in memory while you are working on them and then re-encode them when it is finished. In the uncompressed form, you will have access to the individual pixels and can work on them how you want.
You are probably not going to do that either by yourself - it is very hard. You probably need to use a framework such as OpenCV. There are a huge number of standard filters and tools built into these frameworks, and it may be that what you want to do is already implemented somewhere.
The OpenCV framework can return individual frames in a Mat object, and you can then access the pixels (see this post: Get Pixels from Mat, and the sketch after the links below).
OpenCV
Tutorial page: Open CV Tutorial
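A minimal sketch of the decode-edit loop (the file name and the per-pixel edit are placeholders):

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap("input.avi");
    cv::Mat frame;
    while (cap.read(frame))                  // decode one frame at a time
    {
        for (int y = 0; y < frame.rows; ++y)
            for (int x = 0; x < frame.cols; ++x)
            {
                cv::Vec3b& px = frame.at<cv::Vec3b>(y, x);  // 8-bit BGR pixel
                px[2] = 255 - px[2];         // e.g. invert the red channel
            }
        // ... hand the edited frame to a cv::VideoWriter here ...
    }
}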

Read .tga with DirectX 11

For a few days I've been working on a tool where I need to load textures from several file formats with DirectX 11. After googling a lot, I didn't find out how.
I'm using D3DX11CreateShaderResourceViewFromFile to load .dds and .png files, but I read somewhere else that .tga isn't supported anymore. I also read something about using D3DLOCKED_RECT to set each pixel of the texture after parsing the .tga file myself, but that was for DirectX 9.
Any help or tips? Thanks in advance.
//note: I don't use D3D11
The MSDN page for D3DX11CreateShaderResourceViewFromFile mentions the DirectXTex library, which should be able to load *.tga files using its LoadFromTGAFile routine. You should give it a try; a sketch follows below. If it doesn't work for you, you'll have to write your own texture loader (because it was possible to write your own texture loader in D3D9, it should be possible to do the same thing in D3D11). The *.tga format is documented, and many beginner tutorials deal specifically with loading this particular format without 3rd-party libraries.
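Roughly like this (a minimal sketch assuming DirectXTex is already built and linked; check the library's docs for the exact overloads, and add proper error handling):

#include <DirectXTex.h>

ID3D11ShaderResourceView* LoadTgaSRV(ID3D11Device* device, const wchar_t* path)
{
    DirectX::ScratchImage image;
    if (FAILED(DirectX::LoadFromTGAFile(path, nullptr, image)))
        return nullptr;  // file missing or not a valid TGA

    ID3D11ShaderResourceView* srv = nullptr;
    DirectX::CreateShaderResourceView(device, image.GetImages(),
        image.GetImageCount(), image.GetMetadata(), &srv);
    return srv;
}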
Two pieces of advice:
Next time, when in doubt, read the documentation.
DON'T use the *.png format. It loads very slowly (JPEG is faster; uncompressed BMP is faster; DDS is faster) and most likely isn't suitable for games that need to load many images often (it is okay for a start menu, ending screen, etc.). Either use an uncompressed format (such as *.tga) or, since you're using DirectX, use the *.dds format. Your images will most likely take extra disk space, but they will load more quickly.

Cocos2d Loading several images at once?

I have been searching the cocos2d forum, but I do not understand some of the concepts people are using. In my game I have to load over 100 images to use as an animation for my main menu. The problem is that these images take about 3 to 5 seconds to load before my game starts up. The animation runs great once the images are loaded; it's the loading that is the problem. I would use sprite sheets, but the images are too big, so I have to load them separately. Should I make a loading screen to load all of these images first, and if so, what is the best way to implement it? This is my first time trying to do something like this.
#Stephen: Two ways to do this. With TexturePacker you can create a .tps file, one for each source image, then export it under File->Export Image. Set the geometry to 1024x1024 for your images, specify the .pvr format, enable pre-multiplied alpha, and toy with dithering (this may actually benefit some textures, i.e. improve on the .pngs). You could also probably benefit from RGBA4444 for menus (a gain on the memory required, with no significant loss in rendered quality).
You can also use the builtin texturetool as follows:
Before you do this, you must convert toto.png to a POT (power-of-two) file (1024x1024 in your case), with Photoshop for example.
MrEvil:pvrCenter$ /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/texturetool -m -f PVR -e PVRTC toto.png -o toto.pvr
MrEvil:pvrCenter$ gzip toto.pvr
This gives excellent compression after a gzip (from a 691 KB PNG to 295 KB).
I used texturetool because I can script it in a shell and process a whole lot of images with a single command (play D3 while the box churns out the images :) ).
EDIT 1: some info on packing and file sizes.
OK, I started with one of my own 960x640 8-bit PNGs, 691 KB.
Loaded it into TexturePacker and set the format to RGBA8888, 1024x1024; I get 766 KB (this gives me my POT file).
Exported to RGBA8888 as a .pvr.ccz: 996 KB.
Exported to RGBA8888 as a .pvr.gz: 1.001 MB.
Exported to RGBA4444 as a .pvr.ccz: 193 KB.
If I use texturetool on the 766 KB file and then gzip it: 305 KB (RGBA8888). I can't really explain the difference between 305 KB and 996 KB. It could be related to the dithering processing by TexturePacker; I'm not certain.
Yes, definitely use a texture atlas (sprite sheet is not the correct term but means the same thing). A great tool for that is TexturePacker.
A texture atlas will save time when loading, and conserve memory. You can also try out different image color depth and compression options to further improve memory usage and loading times, but many options will affect image quality to a varying degree and depending on your images.
Btw, how big are these images? Assuming each is 512x512 and you load 100 of them, they'll consume 100 MB of memory (512 x 512 pixels x 4 bytes per RGBA8888 pixel = 1 MB per texture, times 100). I mention that because this is often overlooked: file sizes on disk are a fraction of what the images consume as textures.

Does anyone know of a program/method to compress just certain parts of a PNG image w/o slicing it?

Please help! Thanks in advance.
Update: Sorry for the delayed response; it may be helpful to provide more context here, since I'm not sure what alternative question I should be asking.
I have an image for a website home page that is 300px x 300px. That image has several distinct regions, including two that have graphical copy (text) on top of them.
I have compressed the image down as much as I can without compromising the appearance of that text, and those critical regions of the image.
I tried slicing the less critical regions of the image and saving those at lower compression in order to get the total kilobytes down, but as gregmac posted, the sections don't look right when rejoined.
I was wondering if there is a piece of software out there, or a manual technique, for identifying critical regions of an image to compress less, while compressing the other parts more, in order to get the file size down while keeping the elements that need to be sharp.
You cannot - you can only compress an entire PNG file.
You don't need to (I cannot think of a single case where compressing a specific portion of a PNG file would be useful)
Dividing the image into multiple parts ("slicing") is the only way to compress different portions of an image file, although I'd recommend against using different compression levels in one "sliced image", as the differing compression artefacts joining up will probably look odd.
Regarding your update,
identifying critical regions of an image to "compress less" and could compress other parts of the image more in order to get the file size down
This is inherently what image compression does - if there's a big empty area, it will be compressed to a few bytes (using RLE, for example), but if there's a very detailed region, more bytes will be "spent" on it, as the sketch below illustrates.
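As a toy illustration (PNG actually uses DEFLATE with per-row filters rather than plain RLE, but the effect on flat areas is similar):

#include <cstdint>
#include <utility>
#include <vector>

// Toy run-length encoder: a flat region collapses to a handful of
// (count, value) pairs, while a detailed region does not.
std::vector<std::pair<uint32_t, uint8_t>> rle(const std::vector<uint8_t>& pixels)
{
    std::vector<std::pair<uint32_t, uint8_t>> runs;
    for (uint8_t p : pixels)
    {
        if (!runs.empty() && runs.back().second == p)
            ++runs.back().first;
        else
            runs.push_back({1, p});
    }
    return runs;  // 10,000 identical pixels -> one {10000, value} pair
}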
The problem sounds like the image is too big in terms of file size. Have you tried other image formats, mainly GIF or JPEG (or the other PNG variants, PNG-8 and PNG-24)?
I have compressed the image down as much as I can without compromising the appearance of that text
Perhaps the text could be overlaid using CSS rather than embedded in the image, as sketched below? It might not be practical, but it would allow you to compress the background more (if the background image is a photo, JPEG might work best, since you would no longer have to worry about the text).
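Something like this, roughly (a minimal sketch; the class name and file name are made up):

<div class="hero" style="position: relative; width: 300px; height: 300px; background-image: url(home-bg.jpg);">
  <h1 style="position: absolute; top: 20px; left: 20px;">Your headline copy</h1>
</div>

The background can then be a heavily compressed JPEG, while the text stays perfectly sharp.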
Other than that, I'm out of ideas. Is the 300*300px PNG really too big?
It sounds like you are compressing parts of your image using something like JPEG and then pasting those compressed images onto a PNG combined with other images, and the entire PNG is sent to the browser where you split them up.
The problem with this is that the more you compress your JPEG parts, the more compression artifacts you will get. Then when you put these low-quality images onto the PNG, which uses DEFLATE compression, you will actually end up increasing the file size, because the noisy JPEG artifacts won't compress well.
So if you are keen on keeping PNG as your file format, the best solution would be to not JPEG-compress the parts you paste onto your PNG - keep everything as sharp as possible.
PNG filters each row separately (that is what the per-row "predictor" filters do) before DEFLATE-compressing the whole stream.
So it's best to keep your PNG as wide as possible, with similar images next to each other horizontally rather than under each other vertically.
Perhaps upload an example of the images you're working with?