Bad quality of images using GDI+ with PostScript driver - c++

I'm developing a program to print images of different formats (BMP, JPEG, EMF, ...) to an HDC using C++ and Windows GDI+. Using the MS Publisher Imagesetter driver I can generate a PostScript file, and through Ghostscript functions I obtain the PDF file. If I try to print the following image:
I obtain the following bad-quality result, with strange squares that are not present in the original image:
The part of my code that I used to print the image is:
SetMapMode(hdcPrint, MM_TEXT);                                            // GDI mapping mode for the printer DC
Gdiplus::Graphics graphics(hdcPrint);                                     // GDI+ surface on the same DC
graphics.SetPageUnit(Gdiplus::UnitMillimeter);                            // coordinates below are millimetres
Gdiplus::Image* image = Gdiplus::Image::FromFile(srPicture->swPathImage); // load BMP/JPEG/EMF/...
graphics.DrawImage(image, x, y, w, h);                                    // scale into the x/y/w/h rectangle
delete image;                                                             // FromFile allocates; free it when done
I tried to print the same image with many other drivers and output formats (PDF, EMF, a real printer, rather than PostScript) and the result is always acceptable (the squares are not present).
Furthermore, I tried to open the bad-quality result with PDF readers other than Adobe Acrobat Reader DC (Wondershare PDFelement and Chrome) and, even then, the result is acceptable.
I also noticed that if the image contains some different shapes (e.g. a big red line, as in the next image) the result is good too.
At this point, I have no idea whether the problem is Adobe Reader or my implementation.
Is there a different way to print images of different formats with GDI+ (or pure GDI)?
The PostScript file generated is this.

Well... You haven't supplied either the PostScript or PDF files, which makes it really hard to comment.
It's not completely obvious to me at what point you are getting the image you show. Is this what you see in the PDF file? Is it something you are getting when printing the PDF file to a physical printer? If it's the latter, how are you printing the PDF file to the printer?
The JPEG you have supplied a link to is really small (6 KB); are you genuinely trying to use that JPEG file?
My guess (and in the absence of any files, a guess is all it can be) is that you are using an old version of Ghostscript. Old versions would decompress the JPEG image, then recompress the image using whatever filter produced the smallest result, usually JPEG again.
Because JPEG is a lossy format, every time you apply it to an image the quality decreases.
Newer versions of Ghostscript don't decompress the JPEG image data when going to the pdfwrite device, unless other options (e.g. colour conversion, image downsampling, etc.) make it necessary. The current version of Ghostscript is 9.27 and the release of 9.28 is imminent; I'd suggest you try one of those.
Another possibility would be that either the PostScript program has been created in such a way as to degenerate every image sample to a rectangle, or you are using an extremely old version of Ghostscript where that technique was also used.
Note that none of these would, in my opinion, lead to exactly the result you've pasted here, but the version is certainly worth investigating. Posting the PostScript program file (i.e. the file you send to Ghostscript) would be more helpful, because it would allow me to at least narrow down where the problem has occurred.
[EDIT]
The fault appears to be an intriguing bug in Acrobat.
The PostScript program uses a colour transfer function to invert the colour samples of the RGB JPEG image (this is a frowned-upon practice, and not what transfer functions are for, but it's not uncommon). Ghostscript's pdfwrite device preserves the transfer function.
When rendered with Ghostscript, the output correctly produces the expected result. Acrobat, however, spectacularly does not; I have no idea what kind of mess they've made that leads to the result you get, but it's clearly wrong.
If I alter Ghostscript's pdfwrite production settings to Apply transfer functions instead of preserving them:
-c "<</TransferFunctionInfo /Apply>> setdistillerparams" -f PostScript.ps
then the resulting file views correctly in Acrobat. If I modify Adobe Acrobat's settings so that it uses Preserve instead of Apply for transfer functions (look in Settings->Edit Adobe PDF Settings, then the Color tab, and at 'when transfer functions are found' set the drop-down to Preserve instead of Apply), the resulting PDF file renders correctly in Ghostscript, and the same kind of incorrectly in Acrobat as the Ghostscript pdfwrite output file.
In short, I'm afraid what you are seeing here is an Acrobat rendering bug. You can work around it by altering the Ghostscript transfer function settings as above, but it's really not a problem in Ghostscript.
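For reference, here is a sketch of how that fragment slots into a complete pdfwrite invocation (the file names are placeholders; -o is Ghostscript shorthand for -dBATCH -dNOPAUSE -sOutputFile=):
gs -sDEVICE=pdfwrite -o output.pdf -c "<</TransferFunctionInfo /Apply>> setdistillerparams" -f PostScript.ps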

Related

Suppress Caution: quantization tables are too coarse for baseline JPEG

I have a custom-built libtiff with JPEG support. When I create TIFF images using JPEG compression with a low quality, like 10, I get the following warning printed to stderr:
Caution: quantization tables are too coarse for baseline JPEG
I am looking to suppress this warning. I know it is an issue, but the intent of these files is a super-small quick-look image produced before the full-size image is created, so the warning is meaningless under the circumstances and floods stderr.
What I don't know is whether there is a mechanism to suppress these via the TIFF interface, or whether I am going to have to recompile the libraries and set a field. If so, is there a CMake option I missed, or do I need to modify the code?
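One avenue worth trying, as a rough sketch rather than a confirmed fix: libtiff lets you replace its warning handler with TIFFSetWarningHandler, and if the libjpeg message is routed through that handler (an assumption worth verifying for your custom build), a filtering handler can drop just this message:
// Rough sketch: swallow the "quantization tables are too coarse" warning and
// forward everything else. Assumes the libjpeg message reaches libtiff's
// warning path (verify for your build).
#include <cstdarg>
#include <cstdio>
#include <cstring>
#include <tiffio.h>

static void quietWarningHandler(const char* module, const char* fmt, va_list ap)
{
    char msg[1024];
    std::vsnprintf(msg, sizeof(msg), fmt, ap);
    if (std::strstr(msg, "quantization tables are too coarse") != nullptr)
        return;                                    // drop only this warning
    std::fprintf(stderr, "%s: %s\n", module ? module : "libtiff", msg);
}

int main()
{
    TIFFSetWarningHandler(quietWarningHandler);    // returns the previous handler
    // ... open the file with TIFFOpen() and write the JPEG-compressed quick-look as before ...
    return 0;
}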

How does Ghostscript compress PDFs when it applies a greyscale?

I have a single-page PDF that looks like this and is 6.3 MB. Because it already seems to be greyscale in the first place, applying a greyscale conversion should not make a huge difference.
But when I apply a greyscale to the PDF with:
gs \
-sOutputFile=output.pdf \
-sDEVICE=pdfwrite \
-sColorConversionStrategy=Gray \
-dProcessColorModel=/DeviceGray \
-dCompatibilityLevel=1.4 \
-dNOPAUSE \
-dBATCH \
input.pdf
"output.pdf" is only 128.4 kB, and you can see the presence of new artifacts. The artifacts are not noticeable when the PDF is at full scale, but if you zoom in you can clearly tell a difference. You can see the greyscaled image here.
What is occurring in Ghostscript that causes the artifacts? And, more importantly, what causes such a dramatic loss in file size?
EDIT:
I think I overstated the artifacts in the output file. For all intents and purposes, the files look very similar.
Version: GPL Ghostscript 9.23
Here is the original PDF file: https://send.firefox.com/download/e47df175af/#tdZSodyN2CuQL8X0VIFC1g
Here is the greyscaled PDF:
https://send.firefox.com/download/a63b3d641c/#ce9Ctu6obfXlvvNZJvPnUA
I found that Scribd and Imgur compressed the original PDF file, so there was no point in using a hoster.
Supplying PNG images, rather than the actual PDF files, makes it impossible for anyone to be able to tell for certain what your problem is. If you had posted the PDF files I'd be able to look and tell you.
However, I'm going to guess that you are using an older version of Ghostscript (again you don't say), and that the image in the original file is DCT (JPEG) compressed.
Because you haven't specified a particular compression method, the pdfwrite device (not Ghostscript, but the Ghostscript device which writes PDF files) uses 'Automatic' compression. It writes the image data multiple times with different compression filters, and selects the one which produces the smallest output.
Almost certainly this will again be the DCT (JPEG) compression filter; it almost invariably produces the smallest output. This is also the default filter used if you disable automatic selection and don't specify a different compression filter.
The problem is that DCT is a lossy compression method, so every time you decompress and recompress the data you lose fidelity, though the image size in bytes does decrease each time.
So that's the reason for both your results: the compression artefacts, and at least part of the reduction in size. It may also be the case that your original greyscale image is actually not Gray but RGB (or Lab, or CalRGB, or ICCBased...), in which case converting it to greyscale will result in a decrease in size of about 66%. Without seeing the file I can't tell.
Note that current versions of Ghostscript use a JPEG passthrough feature. Provided that the image is not being downsampled, or having its colour space altered, the image is not decompressed. It is passed unchanged to the output device, which embeds it unchanged. This avoids the artefacts introduced by decompression and recompression.
Obviously if you want to change the colour space, then the pdfwrite device does have to manipulate the image, so it has to decompress it.
You can select the compression filter you want to use, instead of permitting automatic selection, by using the GrayImageFilter distiller parameter; see here.
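As an illustration (my sketch, not part of the original answer; the file names are placeholders and the parameters are the standard distiller parameters), forcing lossless Flate compression for the converted grey images would look something like this, which avoids the decompress/recompress quality loss at the cost of a larger output file:
gs \
-sOutputFile=output.pdf \
-sDEVICE=pdfwrite \
-sColorConversionStrategy=Gray \
-dAutoFilterGrayImages=false \
-dGrayImageFilter=/FlateEncode \
-dNOPAUSE \
-dBATCH \
input.pdf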

How can I know the JPEG quality of the read image using GraphicsMagick

When I read a JPEG image using the Magick::readImages(...) function, how can I know the estimated JPEG quality of the image?
I know how to set the quality when I want to write the image, but that is not related to the quality of the original image. For example:
when I read a JPEG image whose quality is 80% and I write it using 90% quality, I get a bigger image than the original one, since the 90% is not 90% of the original 80%.
How can I know the JPEG quality of the read image?
This is impossible, period. The JPEG quality setting is just a number that is passed to the encoder and affects how the encoder treats the data.
It's not even a percentage of anything; it is just some setting that affects how aggressively the encoder manipulates and transforms the data. Wherever you see the percent sign near JPEG quality, just ignore it: it's meaningless there.
So, regardless of the encoder, it is impossible to find which JPEG quality setting the encoder used to produce this very image. The only way would be to obtain the original and try all reasonably possible setting values until you hit the same result.
It is neither impossible nor difficult with GraphicsMagick (or with ImageMagick). Type
gm convert -log %e -debug coder in.jpg junk.ppm
and look in the output for the line
Quality: nn
if the image was created by Independent JPEG Group software or
Quality: nn (approximate)
otherwise.
You can also use
gm identify -verbose in.jpg
and look at the
Quality: nn
line; however, this doesn't distinguish between exact and approximate qualities.
It's possible but difficult. The code has to examine the image just like people do. Here's a paper on the subject http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.4.4621&rep=rep1&type=pdf
If you want to read the JPEG metadata, you can use the following on the command line:
gm identify -format "%[JPEG-Quality]" your-photo.jpg
In C++, it might be done using IdentifyImage: https://imagemagick.org/api/MagickCore/identify_8c.html#a91079e7db05c1bad53b2dbb2b01f41ba
That said, as other users have pointed out, it's just a number generated by the software that produced the image. As a matter of fact, I can see that GM 1.4 returns nothing for a JPEG without profile information, but GM 1.3.35 makes a wild guess with jpeg.c/EstimateJPEGQuality/932/Coder (which returns 100%).
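As a hedged C++ sketch (my addition, not from the answers above): GraphicsMagick's Magick++ exposes coder attributes through Image::attribute(), so if the JPEG coder stores its estimate under the "JPEG-Quality" key that the %[JPEG-Quality] format string relies on (an assumption, so verify the key against your GM version), something like this should retrieve it:
// Rough sketch: read a JPEG and query the "JPEG-Quality" attribute used by
// gm identify -format "%[JPEG-Quality]" above. The file name is a placeholder.
#include <Magick++.h>
#include <iostream>
#include <string>

int main(int /*argc*/, char** argv)
{
    Magick::InitializeMagick(*argv);                 // standard Magick++ initialisation
    try {
        Magick::Image image("your-photo.jpg");       // placeholder input file
        const std::string quality = image.attribute("JPEG-Quality");
        if (quality.empty())
            std::cout << "No quality estimate available\n";  // coder did not set the attribute
        else
            std::cout << "Estimated JPEG quality: " << quality << "\n";
    } catch (const Magick::Exception& err) {
        std::cerr << "read failed: " << err.what() << "\n";
        return 1;
    }
    return 0;
}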

Take advantage of Flash CS4's PNG Compression, but as an external file?

Here's the issue: I'm developing some Flash web sites and really enjoying AS3.
The problem: PNG 24-bit images are too big... I have three PNG images with transparency that I'd like to rotate through on the "Home page" every 10 seconds or so. Great. No problem - but instead of embedding all three PNGs in the SWF, which would take the thing longer to load, I'd like to load them dynamically from external files, so that the user doesn't have to wait around for images to load that aren't going to be displayed for another 10-15 seconds anyway. That's fine... I have working code for that.
The real problem: These PNG sizes, even loaded from external files on the fly, are really bugging me. One image is 350k when saved with Photoshop - 300k when I use PNGOUT. But... when I import the PNG into Flash's Library, I can go in and set it to JPG/Image Compression which reduces the size to about 45k, yet maintains the alpha information!! If Flash can compress my PNG that much, and still make it look good, why can't I find an app that can do the same for an external file? I'd be content to load my images into the Flash library and let it handle the compression, but if I end up with 5 or 6 images, that still turns out to be too long of a loading time.
Summary: How can I shrink my 350k PNG image with transparency down to 45k like Flash does when I import it into its library?
Possible solution: Or.... hmmmm.... this could be a workaround... maybe I could just make a separate SWF movie for each PNG I want to use which uses the Flash compressed image - then read that file dynamically using a Loader... That ought to work! I shall return and report...
But still, how does Flash compress those PNGs so much more than compressors like PNGOUT? Maybe I'm just not passing in the right parameters for them to be effective.
Thanks for reading my ramblings. You all are a great sounding board!
PNG compression is lossless, so it can't compete with lossy perceptual compression schemes such as JPEG. Just be sure that your PNGs are the size at which they will be displayed (or not: one trivial "compression" scheme would be to save your image scaled down and zoom it when displaying, but this is normally unsatisfactory). If you can't go below 24 bits (you can't go to a 256-colour palette, I guess) I don't think much can be done. I can only suggest giving PngCrush a look.
I used to have the same question, but later I came to think that Flash uses JPEG compression for PNG files. The JPEG-compressed "PNG" is actually a variant that the standard PNG format does not support, but Flash does. In my own Flash project I used it a lot. I even used swftools to generate an animated SWF from a lot of PNGs, so I can load the single "PNG gallery" SWF and use all the PNGs inside.
I know that the question is a year old, but I thought it would be good for future reference. Using any of the PNG compression tools (PngCrush, OptiPNG) will not get anywhere near the same results as Flash compression.
The best way I've found to use Flash compression without creating each SWF in the Flash IDE is to use SWFTools' png2swf utility; it will keep alpha channels and also lets you set the JPEG compression quality.
http://www.swftools.org/png2swf.html

Extracting basic info from animation file

I'm writing an application that handles metadata for images and all kinds of animations, so I'm looking for a way to find basic info about an animation file, e.g.:
length (in minutes/seconds/frames)
aspect ratio of pixels
resolution of individual frames
framerate
Right now, I let my program execute
mplayer -identify animfile.avi
and parse its console output, which contains all the info I need in a machine-readable format. This works fine, but I know that some potential users of the program prefer VLC as a media player, so I'd rather avoid a hard dependency on mplayer being installed.
I've tried
vlc -vv animfile.avi
which prints an ungodly amount of junk on the console, sometimes containing the stuff I'm looking for. The formatting, and what data gets printed, seem to vary depending on the file format of the animation, though.
Is there an easier way to extract basic info from an animation in any format one has a decoder for (especially the length of the animation), using VLC or some other app/library that is usually available on a typical Linux installation?
Edit: I'd rather use another program to do the dirty work, as this is supposed to work for any animation format, e.g. avi, mpg, mov, wmv, vob, etc.
Edit: totem-video-indexer seems more promising, and was also included with the standard installation. Enough codecs to make it useful, however, were not. That could be fixed by installing the "non-free-codecs" package from Medibuntu.
The output of totem-video-indexer is very easy to parse:
TOTEM_INFO_DURATION=5217
TOTEM_INFO_HAS_VIDEO=True
TOTEM_INFO_VIDEO_WIDTH=720
TOTEM_INFO_VIDEO_HEIGHT=480
TOTEM_INFO_VIDEO_CODEC=XVID MPEG-4
TOTEM_INFO_FPS=30
TOTEM_INFO_HAS_AUDIO=True
TOTEM_INFO_AUDIO_BITRATE=50
TOTEM_INFO_AUDIO_CODEC=MPEG 1 Audio, Layer 3 (MP3)
TOTEM_INFO_AUDIO_SAMPLE_RATE=48000
TOTEM_INFO_AUDIO_CHANNELS=Stereo
mediainfo is a pretty useful program. It's LGPL, and is just a frontend for libmediainfo, which should be exactly what you want.
http://mediainfo.sf.net/
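As a hedged illustration (not from the original answer), mediainfo's --Inform option returns just the fields you ask for, which keeps the output easy to parse; the parameter names used here are the standard ones, but check them against mediainfo --Info-Parameters on your system:
mediainfo --Inform="Video;%Width%x%Height% @ %FrameRate% fps, %Duration% ms" animfile.avi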
This is a little more difficult question than you may realize. The AVI file format grew over time, and often has nearly the same information in two or three different places. In some cases those are really supposed to agree (but sometimes don't) and in other cases they're subtly different.
Just for example, you asked about the width and height. There are actually four different width/height specs for a single frame: the screen width/height, the pixel width/height (from which you derive the pixel aspect ratio), the active width/height, and the compressed width/height. The frame width and height is the (theoretical) size of the screen. The active width/height excludes the overscan area. The compressed width/height takes into account rounding -- for example, JPEG compresses in blocks of 8x8 pixels, so the compressed width and height have to be multiples of 8 for a motion JPEG file. The active width/height tells you if (for example) some pixels at the border should be ignored.
In any case, since your question is tagged C++, I'm going to guess you'd rather read the file and get the data directly than depend on spawning something else to do the dirty work. If so, you probably want to look at the OpenDML AVI file spec. You can get at least some idea of the length, resolution, and framerate just from reading the basic AVI header, which is in a fixed spot at the beginning of the file, so that much is trivial to get. It'll take a bit more work to get to the pixel aspect ratio though...
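To make that last point concrete, here is a rough sketch (my addition, not part of the original answer) that scans for the 'avih' chunk and reads the main AVI header. It assumes a little-endian machine and a well-formed file, and does no real RIFF tree parsing:
// Rough sketch: read the AVI main header ('avih' chunk) and derive frame rate,
// frame count, duration and resolution. No error handling or full RIFF parsing.
#include <cstdint>
#include <cstring>
#include <fstream>
#include <iostream>
#include <iterator>
#include <vector>

#pragma pack(push, 1)
struct MainAVIHeader {                 // field layout from the AVI/OpenDML spec
    uint32_t microSecPerFrame;
    uint32_t maxBytesPerSec;
    uint32_t paddingGranularity;
    uint32_t flags;
    uint32_t totalFrames;
    uint32_t initialFrames;
    uint32_t streams;
    uint32_t suggestedBufferSize;
    uint32_t width;
    uint32_t height;
    uint32_t reserved[4];
};
#pragma pack(pop)

int main(int argc, char** argv)
{
    if (argc < 2) { std::cerr << "usage: aviinfo file.avi\n"; return 1; }

    std::ifstream in(argv[1], std::ios::binary);
    std::vector<char> data((std::istreambuf_iterator<char>(in)),
                           std::istreambuf_iterator<char>());

    // Naive scan for the 'avih' fourcc; a real parser would walk the RIFF tree.
    for (size_t i = 0; i + 8 + sizeof(MainAVIHeader) <= data.size(); ++i) {
        if (std::memcmp(&data[i], "avih", 4) != 0) continue;
        MainAVIHeader hdr;
        std::memcpy(&hdr, &data[i + 8], sizeof(hdr));   // skip fourcc + chunk size
        const double fps = hdr.microSecPerFrame ? 1e6 / hdr.microSecPerFrame : 0.0;
        std::cout << "frames:   " << hdr.totalFrames << "\n"
                  << "fps:      " << fps << "\n"
                  << "duration: " << (fps ? hdr.totalFrames / fps : 0.0) << " s\n"
                  << "size:     " << hdr.width << "x" << hdr.height << "\n";
        return 0;
    }
    std::cerr << "no avih chunk found\n";
    return 1;
}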