Shrink the size of a .png file - compression

There are many programs that claim to reduce the size of a .png file, but none of the well-known ones (optipng, pngcrush, pngquant) allow me to shrink to a specified size. pngcrush tried its hardest, but the result was still way too big for my needs. For .jpg files, jpegoptim has an -m option that does allow me to shrink to the size I need. The obvious solution seemed to be to convert to jpg, shrink to the right size, then convert back, but that doesn't work either: the reconstituted .png file just jumps back to its original size.
Presumably, this has something to do with the structure of .png files.
Is there any way to get a small png file? This png file is an example of something I need to shrink to below 1K bytes.
Thanks for any suggestions!

Use ImageMagick to reduce the colors, then pngcrush to get rid of ancillary chunks:
magick in.png -colors 8 temp.png
pngcrush -rem alla temp.png out.png
results in a 1621-byte file. If you have an older version of ImageMagick, use "convert" instead of "magick". Using "-colors 4" instead of "-colors 8" gets you a 1015-byte file, but the dithering looks very spotty.
Note that these preserve the transparency in the image, while converting to JPEG loses the transparency and makes the background a solid color.
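If you need to hit a specific size (the original question asks for under 1K), one option is to keep lowering the color count until the output is small enough. This is only a rough sketch, assuming the same magick and pngcrush binaries as above and a placeholder in.png; the loop simply halves the palette on each pass:
colors=64
while [ "$colors" -ge 2 ]; do
    magick in.png -colors "$colors" temp.png   # quantize to the current palette size
    pngcrush -q -rem alla temp.png out.png     # strip ancillary chunks quietly
    size=$(wc -c < out.png)                    # output size in bytes
    [ "$size" -le 1024 ] && break
    colors=$((colors / 2))
done
echo "final size: $size bytes using $colors colors"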

The only solution to your problem that I can think of is to use .jpg instead of .png. The .jpg format was designed mainly for high lossy compression that still yields a good-enough image. On the other hand, .png aims for full transparency support and no quality loss. To sum up, .jpg is ideal for getting smaller files when quality doesn't matter, and .png is perfect for high-quality images where quality and colour really matter.
Sources:
http://www.labnol.org/software/tutorials/jpeg-vs-png-image-quality-or-bandwidth/5385/, http://www.interactivesearchmarketing.com/jpeg-png-proper-image-formatting/

I can get that 9.5 KB file down to 3.4 KB using the 8-bit palette PNG format. The image has a transparent boundary, which adds unnecessary pixels, and an alpha channel for the whole image, which isn't needed since the visible area is rectangular. After stripping the transparent boundary, eliminating the alpha channel, and using a palette, I can get it down to 3.2 KB.
To get any further, I have to use JPEG for lossy compression. At a very low image quality of 5 (out of 100), I can get it down to 1 KB. It shows some artifacts from the severe compression (look around the prompt > and _ to see some of them).
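In case it helps to reproduce those steps from the command line instead of an image editor, here is a rough sketch with ImageMagick (untested against the original file; in.png is a placeholder and the trim/quality settings may need tuning):
magick in.png -trim +repage -background white -alpha remove -colors 256 PNG8:palette.png   # drop the transparent border and alpha channel, use an 8-bit palette
magick in.png -trim +repage -background white -alpha remove -quality 5 lossy.jpg           # same, but save as a very low-quality JPEG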

Related

How does ghostscript compress PDFs when it applies a greyscale?

I have a single-page PDF that looks like this and is 6.3 MB. Because it already seems to be in greyscale in the first place, applying a greyscale conversion should not make a huge difference.
But when I apply a greyscale to the PDF with:
gs \
-sOutputFile=output.pdf \
-sDEVICE=pdfwrite \
-sColorConversionStrategy=Gray \
-dProcessColorModel=/DeviceGray \
-dCompatibilityLevel=1.4 \
-dNOPAUSE \
-dBATCH \
input.pdf
"output.pdf" is only 128.4 kB, and you can see the presence of new artifacts. The artifacts are not noticeable when the PDF is at full scale, but if you zoom in you can clearly tell a difference. You can see the greyscaled image here.
What is occuring in the ghostscript that causes the artifacts? But also more importantly, what causes such a dramatic loss in file size?
EDIT:
I think I overstated the artifacts in the output file. For all intents and purposes, the files look very similar.
Version: GPL Ghostscript 9.23
Here is the original PDF file: https://send.firefox.com/download/e47df175af/#tdZSodyN2CuQL8X0VIFC1g
Here is the greyscaled PDF:
https://send.firefox.com/download/a63b3d641c/#ce9Ctu6obfXlvvNZJvPnUA
I found that Scribd and Imgur compressed the original PDF file, so there was no point in using a hoster.
Supplying PNG images, rather than the actual PDF files, makes it impossible for anyone to be able to tell for certain what your problem is. If you had posted the PDF files I'd be able to look and tell you.
However, I'm going to guess that you are using an older version of Ghostscript (again you don't say), and that the image in the original file is DCT (JPEG) compressed.
Because you haven't specified a particular compression method, the pdfwrite device (not Ghostscript, but the Ghostscript device which writes PDF files) uses 'Automatic' compression. It writes the image data multiple times with different compression filters, and selects the one which produces the smallest output.
Almost certainly this will again be the DCT (JPEG) compression filter; it almost invariably produces the smallest output. This is also the default filter used if you disable automatic selection and don't specify a different compression filter.
The problem is that DCT is a lossy compression, so every time you decompress and recompress it, you lose fidelity. Though the image size in bytes does decrease each time.
So that's the reason for both your results; the compression artefacts and at least part of the reduction in size. It may also be the case that your original Grayscale image is actually not Gray but RGB (or Lab or CalRGB, or ICCBased...), in which case converting it to grayscale will result in a decrease in size of 66%. Without seeing the file I can't tell.
Note that current versions of Ghostscript use a JPEG passthrough feature. Provided that the image is not being downsampled, or having its colour space altered, the image is not decompressed. It is passed unchanged to the output device, which embeds it unchanged. This avoids the artefacts introduced by decompression and recompression.
Obviously if you want to change the colour space, then the pdfwrite device does have to manipulate the image, so it has to decompress it.
You can select the compression filter you want to use, instead of permitting automatic selection, by using the GrayImageFilter distiller parameter (see the Ghostscript pdfwrite documentation).
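For example, to avoid the lossy DCT round trip altogether you could force a lossless filter. The following is only a sketch based on the distiller parameters mentioned above; Flate output will usually be considerably larger than the DCT version, and the exact parameter syntax should be checked against your Ghostscript version's documentation:
gs \
-sOutputFile=output-lossless.pdf \
-sDEVICE=pdfwrite \
-sColorConversionStrategy=Gray \
-dAutoFilterGrayImages=false \
-dGrayImageFilter=/FlateEncode \
-dNOPAUSE \
-dBATCH \
input.pdf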

How can I compress jpeg image with compression rate 4 bpp or less?

I am trying to compress my .jpeg image in Photoshop.
What is the best way to do this?
I am now calculating the bpp by taking the image size in kB and working out how many bits that is. Then I take the image dimensions (width*height) to get the number of pixels in the image. After that I divide bits by pixels to find how many bits per pixel the image has.
But how can I change this number? My guess is to change how many kB the image is, but how do I do this?
Thanks for any help!!
Yes, you can achieve a higher compression ratio than 4 bits per pixel. Images with solid color can have a rate as low as 0.13 bpp.
In fact 4 bpp is quite poor compression: it's the same as an uncompressed 16-color image, or half of a 256-color image, which even GIF can manage. JPEG can look decent at 1-2 bpp.
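As a sanity check on the arithmetic described in the question: a 150 KB JPEG at 1024x768 works out to (150*1024*8) / (1024*768) ≈ 1.6 bpp. If you want to compute it outside Photoshop, here is a rough command-line sketch (assuming ImageMagick's identify, GNU stat and bc are available; the filename is a placeholder):
bytes=$(stat -c%s image.jpg)                  # file size in bytes (GNU stat; use stat -f%z on macOS)
w=$(identify -format "%w" image.jpg)          # width in pixels
h=$(identify -format "%h" image.jpg)          # height in pixels
echo "scale=2; $bytes * 8 / ($w * $h)" | bc   # bits per pixel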
In general, you cannot "compress" a JPEG image. All you can do is reduce the image quality further in order to achieve a lower bpp value. JPEG streams are always compressed, and they use a lossy compression method. This means that the original image can never be reconstructed from a JPEG image: the smaller the file, the more information you have lost.
A specific "bpp value" is not, and should never be, your target, especially with lossy compression. You should always look at your current image and decide whether it is still good enough or not.
If you still have the original image, try a lossless compression format, like ZIP-compressed or LZW-compressed TIFF, or compressed PNG. I'm sure Photoshop can handle these formats as well. Other software such as IrfanView (https://www.irfanview.com/) or XnView MP (https://www.xnview.com/en/xnviewmp/) will convert your images too.
If you want manual (i.e. full) control over your images, you should use command-line utilities like ImageMagick (https://imagemagick.org/) or NConvert (see the XnView MP link above).
If you only have the JPEG images, do not touch (edit & save) them: with every single save operation you lose another chunk of information. You should always work on copies of the files.
You should always keep your master image (the very picture you took with your phone or your camera).
Of course, these rules of thumb will not answer your original question.
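That said, if a particular file size (and therefore bpp) really is the target, ImageMagick has a JPEG-specific option that may help. As a rough sketch (the jpeg:extent define asks the encoder to stay under the given size by lowering quality; the 48 KB figure is only an example and, for a 512x384 image, corresponds to about 2 bpp):
magick input.png -define jpeg:extent=48kb output.jpg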

Can't find logic behind png file sizes

I'm saving a large number of small png files for use in a game on a phone, so space is at a premium.
I'm trying to figure out the logic behind the file sizes so I can save things most efficiently, but even after using pngcrush the sizes are totally inconsistent.
I saved a 1x1 image and it takes 3kb. I have another 23x21 image which takes only 2kb. I have two images which are almost the same size, but one takes 6kb and the other takes 13kb. I doubled the image height and copied one image into the empty space of the other and saved that. The combined image is only 11kb!
Why is a 1x1 image larger than a 23x21 image? Why can I combine a 13kb image and a 6kb image and get an 11kb image?
Here are the images I'm talking about (there's a 1x1-pixel image between the first and second images; it's difficult to see, so I'll just give the URL: http://g42.org/temp/png/1x1.png):
http://g42.org/temp/png/hat.png
http://g42.org/temp/png/1x1.png
http://g42.org/temp/png/helmet1.png
http://g42.org/temp/png/helmet2.png
http://g42.org/temp/png/helmet1_2.png
It's not a compression thing. The problem with the 1x1 image is that it has metadata (added by Photoshop, it seems): a color profile (iCCP chunk). If you look inside the binary, it's the data between the strings "iCCP" and "IDAT"; it can be removed, leaving a 69-byte file.
If you reopen and save the file in most image viewers (e.g. XnView), or use pngcrush, you can strip that chunk. See it here: http://i.stack.imgur.com/fmOdA.png
And regarding the helmet images: besides other informational chunks (ImageReady adds some informational text, as you can see), the difference is due to different formats: the two-helmet image is a paletted image (8 bits per pixel), while the single helmet is RGB with alpha (32 bits per pixel).
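If you want to check for and strip the iCCP chunk mentioned above from the command line, something along these lines should work (a sketch assuming pngcheck and pngcrush are installed; filenames are placeholders):
pngcheck -v 1x1.png                  # list the chunks; look for iCCP
pngcrush -rem iCCP 1x1.png out.png   # strip just the color-profile chunk
pngcrush -rem alla 1x1.png out.png   # or strip all ancillary chunks at once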
PNG compression uses the same algorithm as zlib (deflate) and is highly sensitive to the data being compressed, so you won't see a consistent relationship between image size and file size. In the case of the combined image, it is still bigger than the smaller image, and given the similarity of the two halves, the compressor was probably able to reuse a lot of the Huffman tree. I don't know enough about the algorithm to say for certain how it ended up smaller than the other half.
As long as you are not seeing oddities like the 1x1 image, which you seem to have figured out in the comments, I don't think this will make a lot of sense without extensive study of image compression.
There is a great utility called pngcrush
http://pmt.sourceforge.net/pngcrush/
Compressing to PNG is a rather difficult task - there are lots of assumptions and strategies to try - do we create a palette, or are we better off without it?
pngcrush essentially brute-forces 100+ different compression strategies, while at the same time trimming useless tags and sections.
PNG has several sub-formats: 24-bit with or without alpha, 8-bit (including alpha), grayscale, etc., which use different numbers of bytes per pixel and have different "compressibility".
Plus PNG supports several compression tricks (filters and gzip settings) which affect how well image data is compressed.
On top of that PNG can contain metadata, which sometimes can be pretty large, like some embedded color profiles.
ImageAlpha converts images to the most space-efficient PNG8+alpha variant.
ImageOptim removes junk metadata and finds best compression parameters.
With a combination of those two your images can be reduced by 30-50%.
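ImageAlpha and ImageOptim are Mac GUI apps; on other platforms a roughly equivalent command-line pipeline (only a sketch, assuming pngquant and optipng are installed; filenames are placeholders) would be:
pngquant --force --output small.png 256 original.png   # quantize to PNG8+alpha, similar to ImageAlpha
optipng -o7 -strip all small.png                       # strip junk metadata and search compression parameters, similar to ImageOptim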

What options for convert (ImageMagick or GraphicsMagick) produce the smallest (filesize) PNG?

ImageMagick creates some pretty large PNGs. GraphicsMagick is a lot better, but I'm still looking for the best options to use with convert to obtain the smallest filesize png.
I have here a large png with a small filesize, and passing this through IM convert I have been unable to reach that filesize, let alone get it smaller. With GM convert I can get it slightly smaller but I'm looking for improvements, generically for any image I come across.
gm convert -quality 95 a_png.png gm.png
convert -quality 95 -depth 8 a_png.png im.png
gm identify *
a_png.png PNG 2560x2048+0+0 PseudoClass 256c 8-bit 60.1K 0.000u 0:01
gm.png[1] PNG 2560x2048+0+0 PseudoClass 256c 8-bit 60.0K 0.000u 0:01
im.png[2] PNG 2560x2048+0+0 DirectClass 8-bit 130.2K 0.000u 0:01
What options for convert produce the smallest PNG filesize?
(Yes, I'm familiar with OptiPNG, PNGOUT and Pngcrush. But I'm after something that will be available without question on every *nix box I happen to be on.)
Looks like you and I are looking for the same answer. Unfortunately there don't seem to be many people out there with a good knowledge of GraphicsMagick. This is what I have learned so far.
The quality operator doesn't work properly for anything other than JPEGs. For me it just made the file size bigger when used on PNGs and GIFs.
I have done this to my PNG and GIF files to reduce their size:
gm convert myImage.png +dither -depth 8 -colors 50 myImage.png
+dither stops any dithering of the image when the colors are reduced. (this reduces the file size)
-depth 8 is probably unnecessary as most PNG files are already depth 8.
-colors 50 reduces the number of colors in the image to 50; this is the only way to really reduce the size of an image stored in a lossless format like PNG or GIF.
Obviously, for the best image quality/size ratio you can't just reduce the image depth or number of colors without knowing the current depth and number of colors. To determine this information I am doing the following:
gm identify -format "file_size:%b,unique_colors:%k,bit_depth:%q" myImage.png
For my image; this returns
file_size:100.7k,unique_colors:13455,bit_depth:8
The problem is that when GraphicsMagick reduces colors it never keeps more than 256, so you can't set the number of colors to 300, for example. Also there seems to be an issue with the alpha channel for PNG files: if the image has transparency in it, reducing colors replaces those colors with transparency; with ImageMagick it does not do this.
I just came across this question again, so I'll update: GraphicsMagick and ImageMagick have a serious problem. They cannot write out PNG images using a tRNS chunk, which means that if you read in an image that has a tRNS chunk and then write it out, the image will be much bigger. GM is not the best tool for compressing images. You need to use a separate tool such as OptiPNG to compress PNGs again after using Image/GraphicsMagick. I am getting up to 60% smaller files when using OptiPNG after running GraphicsMagick on an image.
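In other words, a two-step pipeline along these lines (just a sketch; optipng is assumed to be installed, -o7 is the slowest and most thorough optimization level, and filenames are placeholders):
gm convert myImage.png +dither -colors 256 temp.png   # reduce colors with GraphicsMagick first
optipng -o7 temp.png -out final.png                   # then let OptiPNG re-try the PNG compression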
Also I was wondering if you have encountered a problem regarding RGBA images and bit depth. For some images I am getting an "Invalid bit depth" exception. I can't see any reason why.
I haven't found a way to do it on the command line, but I did find this free website (http://tinypng.org/) that does an excellent job: my test image got a 71% reduction, so the final size was only 29% of the original. It looks like you can give it 20 images at a time. I'm looking into how they do it.

Does anyone know of a program/method to compress just certain parts of a PNG image w/o slicing it?

Please help! Thanks in advance.
Update: Sorry for the delayed response. It may help to provide more context here, since I'm not sure what alternative question I should be asking.
I have an image for a website home page that is 300px x 300px. That image has several distinct regions, including two that have graphical copy on top of the regions.
I have compressed the image down as much as I can without compromising the appearance of that text, and those critical regions of the image.
I tried slicing the less critical regions of the image and saving those at lower compressions in order to get the total kbs down, but as gregmac posted, the sections don't look right when rejoined.
I was wondering if there is a piece of software out there, or a manual technique, for identifying critical regions of an image to "compress less" while compressing other parts of the image more, in order to get the file size down while keeping the elements of the graphic that need to stay sharp at high resolution.
You cannot - you can only compress an entire PNG file.
You don't need to (I cannot think of a single case where compressing a specific portion of a PNG file would be useful)
Dividing the image into multiple parts ("slicing") is the only way to compress different portions of an image file, although I'd recommend against using different compression levels in one "sliced image", as the differing compression artefacts joining up will probably look odd.
Regarding your update,
identifying critical regions of an image to "compress less" and could compress other parts of the image more in order to get the file size down
This is inherently what image compression does - if there's a big empty area it will be compressed to a few bytes (using RLE, for example), but if there's a very detailed region it will have more bytes "spent" on it.
The problem sounds like the image is too big (in terms of file size). Have you tried other image formats, mainly GIF or JPEG (or the other PNG formats, PNG-8 or PNG-24)?
I have compressed the image down as much as I can without compromising the appearance of that text
Perhaps the text could be overlaid using CSS, rather than embedded in the image? Might not be practical, but it would allow you to compress the background more (if the background image is a photo, JPEG might work best, since you no longer have to worry about the text)
Other than that, I'm out of ideas. Is the 300*300px PNG really too big?
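If you want to compare the formats mentioned above quickly, a throwaway test like this makes the trade-offs concrete (a sketch assuming ImageMagick; filenames are placeholders, and the JPEG quality is just an example):
magick page.png PNG8:page-8.png               # 256-color palette PNG
magick page.png -quality 80 page.jpg          # lossy JPEG, loses transparency
magick page.png page.gif                      # GIF, also capped at 256 colors
ls -l page.png page-8.png page.jpg page.gif   # compare the resulting sizes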
It sounds like you are compressing parts of your image using something like JPEG and then pasting those compressed images onto a PNG combined with other images, and the entire PNG is sent to the browser where you split them up.
The problem with this is that the more you compress your JPEG parts the more decompression artifacts you will get. Then when you put these low quality images onto the PNG, which uses deflate compression, you will actually end up increasing the file size because it won't be able to compress well.
So if you are keen on keeping PNG as your file format the best solution would be to not compress the parts using JPEG which you paste onto your PNG - keep everything as sharp as possible.
PNG compresses each row separately unless you have used a "predictor" in the compression.
So it's best to keep your PNG as wide as possible with similar images next to each other horizontally rather than under each other vertically.
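If you want to experiment with those row filters ("predictors") without leaving ImageMagick, there are PNG-specific defines for it. This is only a sketch (filter values 0-4 are the standard PNG filter types and 5 lets the encoder pick per row; wide.png is a placeholder):
magick wide.png -define png:compression-filter=5 -define png:compression-level=9 out.png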
Perhaps upload an example of the images you're working with?