Suppress "Caution: quantization tables are too coarse for baseline JPEG" - libjpeg

I have a custom built libtiff with jpg support. When I create TIFF images using JPG compression with a low quality, like 10, I get the following warning printed to stderr:
Caution: quantization tables are too coarse for baseline JPEG
I am looking to suppress this warning. I know it indicates a real issue, but the intent of these files is a very small quick-look image produced before the full-size image is created, so the warning is meaningless under the circumstances and floods stderr.
What I don't know is whether there is a mechanism to suppress these via the TIFF interface, or whether I am going to have to recompile the libraries and set a field. If so, is there a CMake option I missed, or do I need to modify the code?
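For what it's worth, one avenue I'm considering (this is an assumption on my part: it only works if the caution is routed through libtiff's standard warning handler rather than written directly to stderr by libjpeg) is to install a filtering handler with TIFFSetWarningHandler:

#include <stdarg.h>
#include <stdio.h>
#include <string.h>
#include <tiffio.h>

/* Forward every libtiff warning except the "quantization tables" caution.
 * This only helps if the caution reaches libtiff's warning handler in this build. */
static void quietWarningHandler(const char* module, const char* fmt, va_list ap)
{
    char msg[1024];
    vsnprintf(msg, sizeof(msg), fmt, ap);
    if (strstr(msg, "quantization tables are too coarse") != NULL)
        return; /* swallow just this caution */
    fprintf(stderr, "%s: %s\n", module ? module : "libtiff", msg);
}

int main(void)
{
    /* Install before any TIFF is created; TIFFSetWarningHandler(NULL) would silence all warnings. */
    TIFFSetWarningHandler(quietWarningHandler);
    /* ... create the TIFFs with JPEG compression as before ... */
    return 0;
}

If the caution still appears after that, it is being written by libjpeg directly and would have to be intercepted at the libjpeg error-manager level instead, which does mean touching the build.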

Related

Bad quality of images using GDI+ with PostScript driver

I'm developing a program to print images of different formats (BMP, JPEG, EMF, ...) to an HDC using C++ and Windows GDI+. Using the MS Publisher Imagesetter driver I can generate a PostScript file, and through Ghostscript functions I obtain the PDF file. If I try to print the following image:
I obtain the following bad quality result, with strange squares that are not present in the original image:
The part of my code that I used to print the image is:
SetMapMode(hdcPrint,MM_TEXT);
Gdiplus::Graphics graphics(hdcPrint);
graphics.SetPageUnit(Gdiplus::UnitMillimeter);
Gdiplus::Image* image = Gdiplus::Image::FromFile(srPicture->swPathImage);
graphics.DrawImage(image,x,y,w,h);
delete image; // release the GDI+ image after drawing
I tried to print the same image with many drivers and different kinds of output (other than PostScript: PDF, EMF, a real printer) and the result is always acceptable (the squares are not present).
Furthermore, I tried to open the bad quality result with a pdf reader different from Adobe Acrobat Reader DC (Wondershare PDFelement and Chrome) and, even then, the result is acceptable.
I also noticed that if the image contains some different shapes (i.e. a big red line, like in the next image) the result is good too.
At this point, I have no idea if the problem is Adobe reader or my implementation.
Is there a different way to print images of different formats with GDI+ (or pure GDI)?
The PostScript file generated is this.
Well... You haven't supplied either the PostScript or PDF files, which makes it really hard to comment.
It's not completely obvious to me at what point you are getting the image you show. Is this what you see in the PDF file? Is it something you are getting when printing the PDF file to a physical printer? If it's the latter, how are you printing the PDF file to the printer?
The JPEG you have supplied a link to is really small (6 KB); are you genuinely trying to use that JPEG file?
My guess (and in the absence of any files, a guess is all it can be) is that you are using an old version of Ghostscript. Old versions would decompress the JPEG image, then recompress the image using whatever filter produced the smallest result, usually JPEG again.
Because JPEG is a lossy format, every time you apply it to an image the quality decreases.
Newer versions of Ghostscript don't decompress the JPEG image data when going to the pdfwrite device, unless other options (e.g. colour conversion, image downsampling, etc.) make it necessary. The current version of Ghostscript is 9.27 and the release of 9.28 is imminent; I'd suggest you try one of those.
Another possibility would be that either the PostScript program has been created in such a way as to degenerate every image sample to a rectangle, or you are using an extremely old version of Ghostscript where that technique was also used.
Note that none of these would, in my opinion, lead to exactly the result you've pasted here, but the version is certainly worth investigating. Posting the PostScript program file (i.e. the file you send to Ghostscript) would be more helpful, because it would allow me to at least narrow down where the problem has occurred.
[EDIT]
The fault appears to be an intriguing bug in Acrobat.
The PostScript program uses a colour transfer function to invert the colour samples of the RGB JPEG image. (This is a frowned-upon practice, since it's not what transfer functions are for, but it's not uncommon.) Ghostscript's pdfwrite device preserves the transfer function.
When rendered, Ghostscript correctly produces the expected result; Acrobat, however, spectacularly does not. I have no idea what kind of mess they've made that leads to the result you get, but it's clearly wrong.
If I alter Ghostscript's pdfwrite production settings to Apply transfer functions instead of preserving them:
-c "<</TransferFunctionInfo /Apply>> setdistillerparams" -f PostScript.ps
then the resulting file views correctly in Acrobat. If I modify Adobe Acrobat's settings so that it uses Preserve instead of Apply for transfer functions (look in Settings->Edit Adobe PDF Settings, then the Color tab, and at 'when transfer functions are found' set the drop-down to Preserve instead of Apply), the resulting PDF file renders correctly in Ghostscript, and incorrectly in Acrobat in the same way as the Ghostscript pdfwrite output file.
In short, I'm afraid what you are seeing here is an Acrobat rendering bug. You can work around it by altering the Ghostscript transfer function settings as above, but it's really not a problem in Ghostscript.
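For reference, folding that switch into a complete pdfwrite invocation would be along the lines of the following (file names are illustrative):

gs -sDEVICE=pdfwrite -o output.pdf \
   -c "<</TransferFunctionInfo /Apply>> setdistillerparams" \
   -f PostScript.ps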

How does Ghostscript compress PDFs when it applies a greyscale conversion?

I have a single-page PDF, which looks like this, that is 6.3 MB. Because it already seems to be greyscale in the first place, applying a greyscale conversion should not make a huge difference.
But when I apply a greyscale to the PDF with:
gs \
-sOutputFile=output.pdf \
-sDEVICE=pdfwrite \
-sColorConversionStrategy=Gray \
-dProcessColorModel=/DeviceGray \
-dCompatibilityLevel=1.4 \
-dNOPAUSE \
-dBATCH \
input.pdf
"output.pdf" is only 128.4 kB, and you can see the presence of new artifacts. The artifacts are not noticeable when the PDF is at full scale, but if you zoom in you can clearly tell a difference. You can see the greyscaled image here.
What is occurring in Ghostscript that causes the artifacts? But also, more importantly, what causes such a dramatic loss in file size?
EDIT:
I think I overstated the artifacts in the output file. For all intents and purposes, the files look very similar.
Version: GPL Ghostscript 9.23
Here is the original PDF file: https://send.firefox.com/download/e47df175af/#tdZSodyN2CuQL8X0VIFC1g
Here is the greyscaled PDF:
https://send.firefox.com/download/a63b3d641c/#ce9Ctu6obfXlvvNZJvPnUA
I found that Scribd and Imgur compressed the original PDF file, so there was no point in using a hoster.
Supplying PNG images, rather than the actual PDF files, makes it impossible for anyone to be able to tell for certain what your problem is. If you had posted the PDF files I'd be able to look and tell you.
However, I'm going to guess that you are using an older version of Ghostscript (again you don't say), and that the image in the original file is DCT (JPEG) compressed.
Because you haven't specified a particular compression method, the pdfwrite device (not Ghostscript, but the Ghostscript device which writes PDF files) uses 'Automatic' compression. It writes the image data multiple times with different compression filters, and selects the one which produces the smallest output.
Almost certainly this will again be the DCT (JPEG) compression filter; it almost invariably produces the smallest output. This is also the default filter used if you disable automatic selection and don't specify a different compression filter.
The problem is that DCT is a lossy compression, so every time you decompress and recompress it, you lose fidelity. Though the image size in bytes does decrease each time.
So that's the reason for both your results; the compression artefacts and at least part of the reduction in size. It may also be the case that your original Grayscale image is actually not Gray but RGB (or Lab or CalRGB, or ICCBased...), in which case converting it to grayscale will result in a decrease in size of 66%. Without seeing the file I can't tell.
Note that current versions of Ghostscript use a JPEG passthrough feature. Provided that the image is not being downsampled, or having its colour space altered, the image is not decompressed. It is passed unchanged to the output device, which embeds it unchanged. This avoids the artefacts introduced by decompression and recompression.
Obviously if you want to change the colour space, then the pdfwrite device does have to manipulate the image, so it has to decompress it.
You can select the compression filter you want to use, instead of permitting automatic selection, by using the GrayImageFilter distiller parameter; see here.
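As a sketch, building on the command in the question (AutoFilterGrayImages and GrayImageFilter are the documented pdfwrite distiller parameters; the effect on size and quality depends on your file), forcing lossless Flate compression instead of automatic selection would look something like:

gs \
 -sOutputFile=output.pdf \
 -sDEVICE=pdfwrite \
 -sColorConversionStrategy=Gray \
 -dProcessColorModel=/DeviceGray \
 -dAutoFilterGrayImages=false \
 -dGrayImageFilter=/FlateEncode \
 -dCompatibilityLevel=1.4 \
 -dNOPAUSE \
 -dBATCH \
 input.pdf

Because Flate is lossless, the output will usually be much larger than the DCT result; it simply avoids the decompress/recompress quality loss.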

How can I know the JPEG quality of the read image using GraphicsMagick?

When I read a JPEG image using the Magick::readImages(...) function, how can I know the estimated JPEG quality of the image?
I know how to set the quality when I want to write the image, but that is not related to the quality of the original image. For example: when I read a JPEG image whose quality is 80 and write it out at quality 90, I get a bigger file than the original, since the 90 is not 90% of the original 80.
How can I know the JPEG quality of the read image?
This is impossible, period. The JPEG quality setting is just a number that is passed to the encoder and affects how the encoder treats the data.
It's not even a percentage of anything; it is just a setting that affects how aggressively the encoder manipulates and transforms the data. Wherever you see the percent sign near JPEG quality, just ignore it; it's meaningless there.
So regardless of the encoder, it is impossible to find which JPEG quality setting the encoder used to produce this very image. The only way would be to obtain the original and try all reasonably possible setting values until you hit the same result.
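For what it's worth, that brute-force idea would look roughly like this, assuming you really do have the uncompressed original (file names and the choice of the MSE metric are illustrative, not a recommendation):

for q in $(seq 1 100); do
  gm convert -quality $q original.ppm candidate.jpg
  echo "quality $q:"
  gm compare -metric mse candidate.jpg suspect.jpg
done

The quality whose re-encode gives the lowest error against the file in question is the best estimate.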
It is neither impossible nor difficult with GraphicsMagick (or with ImageMagick). Type
gm convert -log %e -debug coder in.jpg junk.ppm
and look in the output for the line
Quality: nn
if the image was created by Independent JPEG Group software or
Quality: nn (approximate)
otherwise.
You can also use
gm identify -verbose in.jpg
and look at the
Quality: nn
line; however, this doesn't distinguish between exact and approximate qualities.
It's possible but difficult. The code has to examine the image just like people do. Here's a paper on the subject http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.4.4621&rep=rep1&type=pdf
If you want to read the JPEG metadata, you can use the following in command-line:
gm identify -format "%[JPEG-Quality]" your-photo.jpg
In C++, you might use IdentifyImage: https://imagemagick.org/api/MagickCore/identify_8c.html#a91079e7db05c1bad53b2dbb2b01f41ba
That said, as other users have noted, it's just a number generated by the software that produced the image. As a matter of fact, I can see that GM 1.4 returns nothing on a JPEG without profile information, but GM 1.3.35 makes a wild guess with jpeg.c/EstimateJPEGQuality/932/Coder (which returns 100%).
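If you prefer staying in Magick++ rather than dropping to MagickCore, a rough sketch would be the following; I'm assuming the attribute key "JPEG-Quality" mirrors the %[JPEG-Quality] identify escape, which you should check against your GM version:

#include <Magick++.h>
#include <iostream>
#include <list>

int main(int argc, char** argv)
{
    Magick::InitializeMagick(argc > 0 ? argv[0] : nullptr);

    std::list<Magick::Image> images;
    Magick::readImages(&images, "your-photo.jpg");

    for (Magick::Image& img : images) {
        // May be empty if the coder could not estimate a quality for this file.
        std::string quality = img.attribute("JPEG-Quality");
        std::cout << "Estimated quality: "
                  << (quality.empty() ? "unknown" : quality) << std::endl;
    }
    return 0;
}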

Take advantage of Flash CS4's PNG Compression, but as an external file?

Here's the issue: I'm developing some Flash web sites and really enjoying AS3.
The problem: PNG 24-bit images are too big... I have three PNG images with transparency that I'd like to rotate through on the "Home page" every 10 seconds or so. Great. No problem - but instead of embedding all three PNGs in the SWF, which would take the thing longer to load, I'd like to load them dynamically from external files, so that the user doesn't have to wait around for images to load that aren't going to be displayed for another 10-15 seconds anyway. That's fine... I have working code for that.
The real problem: These PNG sizes, even loaded from external files on the fly, are really bugging me. One image is 350k when saved with Photoshop - 300k when I use PNGOUT. But... when I import the PNG into Flash's Library, I can go in and set it to JPG/Image Compression which reduces the size to about 45k, yet maintains the alpha information!! If Flash can compress my PNG that much, and still make it look good, why can't I find an app that can do the same for an external file? I'd be content to load my images into the Flash library and let it handle the compression, but if I end up with 5 or 6 images, that still turns out to be too long of a loading time.
Summary: How can I shrink my 350k PNG image with transparency down to 45k, like Flash does when I import it into its library?
Possible solution: Or.... hmmmm.... this could be a workaround... maybe I could just make a separate SWF movie for each PNG I want to use which uses the Flash compressed image - then read that file dynamically using a Loader... That ought to work! I shall return and report...
But still, how does Flash compress those PNGs so much more than compressors like PNGOUT? Maybe I'm just not passing in the right parameters for them to be effective.
Thanks for reading my ramblings. You all are a great sounding board!
PNG compression is lossless, so it can't compete with lossy perceptual compression schemes such as JPEG. Just be sure that your PNGs are the size at which they will be displayed (or not: one trivial "compression" scheme would be to save your image scaled down and zoom it when displaying, but this is normally unsatisfactory). If you can't go below 24 bits (you can't go to a 256-colour palette, I guess), I don't think much can be done. I can only suggest you give a look at PngCrush.
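For reference, typical invocations of PngCrush and similar optimisers look roughly like this (file names are illustrative; being lossless, they won't approach the JPEG-style savings Flash achieves):

pngcrush -brute original.png crushed.png
optipng -o7 -out optimized.png original.png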
I used to have the same question, but later I realised that Flash uses JPEG compression for PNG files. The JPEG-compressed "PNG" is actually a variant that the standard PNG format does not support, but Flash does. In my own Flash project, I used it a lot. I even used swftools to generate an animated SWF from a lot of PNGs, so I can load the single "PNG gallery" SWF and use all the PNGs inside.
I know that the question is a year old, but I thought it would be good for future reference. Using any of the PNG compression tools (PngCrush, OptiPNG) will not get anywhere near the same results as Flash compression.
The best way I've found to use Flash compression without creating each SWF in the Flash IDE is using SWFTools' png2swf utility; it will keep alpha channels and also allow you to set the JPEG compression quality.
http://www.swftools.org/png2swf.html

If I take a lossy-compressed file (e.g. JPEG) and save it again, will there be loss of quality?

I've often wondered: if I load a compressed image file, edit it and then save it again, will it lose some quality? What if I use the same quality grade when saving; will the algorithm somehow detect that the file has already been compressed as a JPEG and therefore that there is no point trying to compress the displayed representation again?
Would it be a better idea to always keep the original (say, a PSD) and always make changes to it and then save it as a JPEG or whatever I need?
Yes, you will lose further image information. If making multiple changes, work from the original uncompressed file.
When it comes to lossy compression image formats such as JPEG, successive compression will lead to perceptible quality loss. The quality loss can be in the forms such as compression artifacts and blurriness of the image.
Even if one uses the same quality settings to save an image, there will still be quality loss. The only way to "preserve quality", or rather to lose as little quality as possible, is to use the highest quality setting available. Even then, there is no guarantee that there won't be quality loss.
Yes, it would be a good idea to keep a copy of the original if one is going to make an image using a lossy compression scheme such as JPEG. The original could be saved with a compression scheme which is lossless such as PNG, which will preserve the quality of the file at the cost of (generally) larger file size.
(Note: There is a lossless version of JPEG, however, the most common one uses techniques such as DCT to process the image and is lossy.)
In general, yes. However, depending on the compression format there are usually certain operations (mainly rotation and mirroring) that can be performed without any loss of quality by software designed to work with the properties of the file format.
Theoretically, since JPEG compresses each 8x8 block of pixels independently, it should be possible to keep all unchanged blocks of an image if it is saved with the same compression settings, but I'm not aware of any software that implements this.
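As an aside on the lossless rotation and mirroring mentioned above, tools built on libjpeg can perform these without a decode/re-encode cycle. A typical jpegtran invocation (file names illustrative) is:

jpegtran -perfect -rotate 90 -copy all -outfile rotated.jpg original.jpg

The -perfect switch makes jpegtran fail rather than trim edge blocks when the image dimensions are not a multiple of the transform block size, and -copy all keeps the existing markers (comments, EXIF).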
Of course. The compression level used initially will probably be different from the level in your subsequent saves. You can easily check this by using image manipulation software (e.g. Photoshop). Save your file several times and change the level of compression each time, just a slight bit; you'll see the image degrade.
If the changes are local (fixing a few pixels, rather than reshading a region) and you use the original editing tool with the same settings, you may avoid degradation in the areas that you do not affect. Still, expect some additional quality loss around the area of change as the compressed blocks are affected, and cannot be recovered.
The real answer remains to carry out editing on the source image, captured without compression where possible, and applying the desired degree of compression before targeting the image for use.
Yes, you will always lose a bit of information when you re-save an image as JPEG. How much you lose depends on what you have done to the image after loading it.
If you keep the image the same size and only make minor changes, you will not lose that much data. When the image is loaded, an approximation of the original image is recreated from the compressed data. If you resave the image using the same compression, most of the data that you lose will be data that was recreated when loading.
If you resize the image, or edit large areas of it, you will lose more data when resaving it. Any edited part of the image will lose about the same amount of information as when you first compressed it.
If you want to get the best possible quality, you should always keep the original.