GetDIBits() is failing with PNG compression - C++

I am trying to get the size of a PNG image (without storing it to a file). I am using this code as a reference. When GetDIBits() is called, the size of the image is written into bi.biSizeImage. Everything works fine when bi.biCompression is BI_RGB, but when I change the compression mode from BI_RGB to BI_PNG, GetDIBits() starts to fail. Please help me solve this.

According to http://msdn.microsoft.com/en-us/library/dd145023%28VS.85%29.aspx:
"This extension is not intended as a means to supply general JPEG and PNG decompression to applications, but rather to allow applications to send JPEG- and PNG-compressed images directly to printers having hardware support for JPEG and PNG images."
So using GetDIBits() with BI_PNG is simply not supported.
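If the underlying goal is just to learn how many bytes the PNG-encoded image would occupy, without writing a file, a common workaround (not part of the original answer, and using GDI+ rather than GetDIBits()) is to encode the bitmap into an in-memory IStream and read back the stream size. A minimal sketch, assuming GdiplusStartup has already been called and hBmp is a valid HBITMAP:

#include <windows.h>
#include <ole2.h>
#include <gdiplus.h>
#include <cwchar>
#include <vector>
#pragma comment(lib, "gdiplus.lib")
#pragma comment(lib, "ole32.lib")

// Look up the CLSID of the PNG encoder registered with GDI+.
static bool GetPngEncoderClsid(CLSID* clsid)
{
    UINT count = 0, bytes = 0;
    Gdiplus::GetImageEncodersSize(&count, &bytes);
    if (bytes == 0) return false;
    std::vector<BYTE> buffer(bytes);
    auto* codecs = reinterpret_cast<Gdiplus::ImageCodecInfo*>(buffer.data());
    Gdiplus::GetImageEncoders(count, bytes, codecs);
    for (UINT i = 0; i < count; ++i) {
        if (wcscmp(codecs[i].MimeType, L"image/png") == 0) {
            *clsid = codecs[i].Clsid;
            return true;
        }
    }
    return false;
}

// Encode hBmp as PNG into a memory stream and return the encoded size in bytes
// (0 on failure). The caller must have initialised GDI+ with GdiplusStartup.
ULONGLONG GetPngEncodedSize(HBITMAP hBmp)
{
    CLSID pngClsid;
    if (!GetPngEncoderClsid(&pngClsid)) return 0;

    Gdiplus::Bitmap bitmap(hBmp, nullptr);
    IStream* stream = nullptr;
    if (FAILED(CreateStreamOnHGlobal(nullptr, TRUE, &stream))) return 0;

    ULONGLONG size = 0;
    if (bitmap.Save(stream, &pngClsid, nullptr) == Gdiplus::Ok) {
        STATSTG stat = {};
        if (SUCCEEDED(stream->Stat(&stat, STATFLAG_NONAME)))
            size = stat.cbSize.QuadPart;
    }
    stream->Release();
    return size;
}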

Related

Bad quality of images using GDI+ with PostScript driver

I'm developing a program to print images of different formats (BMP, JPEG, EMF, ...) on an HDC using C++ and Windows GDI+. Using the MS Publisher Imagesetter driver I can generate a PostScript file, and through Ghostscript functions I obtain the PDF file. If I try to print the following image:
I obtain the following bad-quality result, with those strange squares (not present in the original image):
The part of my code that I used to print the image is:
SetMapMode(hdcPrint, MM_TEXT);
Gdiplus::Graphics graphics(hdcPrint);
graphics.SetPageUnit(Gdiplus::UnitMillimeter);
Gdiplus::Image* image = Gdiplus::Image::FromFile(srPicture->swPathImage);
graphics.DrawImage(image, x, y, w, h);
I tried to print the same image with many drivers and different kinds of output (other than PostScript: PDF, EMF, a real printer) and the result is always acceptable (the squares are not present).
Furthermore, I tried to open the bad-quality result with PDF readers other than Adobe Acrobat Reader DC (Wondershare PDFelement and Chrome) and, even then, the result is acceptable.
I also noticed that if the image contains some different shapes (e.g. a big red line, as in the next image) the result is good too.
At this point, I have no idea whether the problem is Adobe Reader or my implementation.
Is there a different way to print images of different formats with GDI+ (or pure GDI)?
The PostScript file generated is this.
Well... You haven't supplied either the PostScript or PDF files, which makes it really hard to comment.
It's not completely obvious to me at what point you are getting the image you show. Is this what you see in the PDF file? Is it something you are getting when printing the PDF file to a physical printer? If it's the latter, how are you printing the PDF file to the printer?
The JPEG you have supplied a link to is really small (6KB); are you genuinely trying to use that JPEG file?
My guess (and in the absence of any files, a guess is all it can be) is that you are using an old version of Ghostscript. Old versions would decompress the JPEG image, then recompress the image using whatever filter produced the smallest result, usually JPEG again.
Because JPEG is a lossy format, every time you apply it to an image the quality decreases.
Newer versions of Ghostscript don't decompress the JPEG image data when going to the pdfwrite device, unless other options (e.g. colour conversion, image downsampling, etc.) make it necessary. The current version of Ghostscript is 9.27 and the release of 9.28 is imminent; I'd suggest you try one of those.
Another possibility would be that either the PostScript program has been created in such a way as to degenerate every image sample to a rectangle, or you are using an extremely old version of Ghostscript where that technique was also used.
Note that none of these would, in my opinion, lead to exactly the result you've pasted here, but the version is certainly worth investigating. Posting the PostScript program file (i.e. the file you send to Ghostscript) would be more helpful, because it would allow me to at least narrow down where the problem has occurred.
[EDIT]
The fault appears to be an intriguing bug in Acrobat.
The PostScript program uses a colour transfer function to invert the colour samples of the RGB JPEG image. (This is a frowned-upon practice; it's not what transfer functions are for, but it's not uncommon.) Ghostscript's pdfwrite device preserves the transfer function.
When rendered, Ghostscript correctly produces the expected result; Acrobat, however, spectacularly does not. I have no idea what kind of mess they've made that leads to the result you get, but it's clearly wrong.
If I alter Ghostscript's pdfwrite production settings to Apply transfer functions instead of preserving them:
-c "<</TransferFunctionInfo /Apply>> setdistillerparams" -f PostScript.ps
then the resulting file views correctly in Acrobat. If I modify Adobe Acrobat's settings so that it uses Preserve instead of Apply for transfer functions (look in Settings -> Edit Adobe PDF Settings, then the Color tab, and at 'when transfer functions are found' set the drop-down to Preserve instead of Apply), the resulting PDF file renders correctly in Ghostscript, and the same kind of incorrectly in Acrobat as the Ghostscript pdfwrite output file.
In short, I'm afraid what you are seeing here is an Acrobat rendering bug. You can work around it by altering the Ghostscript transfer function settings as above, but it's really not a problem in Ghostscript.
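(For reference, a sketch of how that setdistillerparams fragment might appear in a complete pdfwrite invocation; the output file name and the rest of the command line here are illustrative, not taken from the answer:)
gs -sDEVICE=pdfwrite -o output.pdf -c "<</TransferFunctionInfo /Apply>> setdistillerparams" -f PostScript.ps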

PNG Gamma Correction

I used the DirectXTex library to capture a screenshot of a DX11 game and save it to a file. The problem is that it works great when I save it as JPEG, but if I save it as PNG the image becomes super bright and washed out. I checked the image using TweakPNG and found that the gamma was set to 1.0, which is what's causing the problem.
I checked images taken by some other software, including the Snipping Tool, and they seem to use 0.45455 as the gamma or leave out the gamma value altogether.
I don't know whether DirectXTex will let me specify a gamma value. I'm not even sure if WIC has this functionality, as I can't seem to find useful information on MSDN or other sites.
By default DirectXTex will add the sRGB chunk to the PNG file it writes if the format is DXGI_FORMAT_*_SRGB. Furthermore, if the format is not DXGI_FORMAT_*_SRGB, I explicitly remove the sRGB chunk and set the gAMA chunk to 1.0, because otherwise WIC always adds the sRGB chunk.
You can see this behavior in the code in both DirectXTexWIC.cpp and in the DirectX Tool Kit's ScreenGrab.cpp module.
If you are not doing 'gamma-correct' rendering (i.e. your render target is not a DXGI_FORMAT_*_SRGB format) but you have sRGB content in a non-sRGB DXGI_FORMAT_* format, then my recommendation is that you pass an sRGB version of the format to the function.
In DirectXTex, that's easily done with the MakeSRGB function.
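As an illustration only (not code from the answer), one way that recommendation might look with DirectXTex, assuming device, context and backBufferTex are the D3D11 device, immediate context and captured frame:

#include <d3d11.h>
#include <DirectXTex.h>
using namespace DirectX;

HRESULT SaveScreenshotAsSRGBPng(ID3D11Device* device,
                                ID3D11DeviceContext* context,
                                ID3D11Resource* backBufferTex)
{
    ScratchImage capture;
    HRESULT hr = CaptureTexture(device, context, backBufferTex, capture);
    if (FAILED(hr)) return hr;

    // The pixels are sRGB-encoded but the capture format is a plain UNORM,
    // so relabel it as the matching *_SRGB format; the PNG then gets the
    // sRGB chunk instead of gAMA 1.0.
    capture.OverrideFormat(MakeSRGB(capture.GetMetadata().format));

    const Image* img = capture.GetImage(0, 0, 0);
    return SaveToWICFile(*img, WIC_FLAGS_NONE,
                         GetWICCodec(WIC_CODEC_PNG), L"screenshot.png");
}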
Gamma correction in the PNG format is a bit of a mess; see this blog post.

Is there a way to access image buffer through OpenCV VideoCapture?

For various reasons I need direct access to the image buffer from a camera through OpenCV's VideoCapture, but I cannot find a way. To make it clearer, I want to access the data from cv::VideoCapture::grab() before it is retrieved into a cv::Mat.
I checked the OpenCV source code here
and it seems OpenCV decodes it automatically before outputting the frame. Intuitively I am thinking about "encoding" the frame to obtain the original data; however, cv::imencode requires a specific file extension.
Is there a way to access the camera buffer data without tweaking the source code?
Best,
Eric
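(For reference, a small sketch of the cv::imencode behaviour mentioned above: the extension string only selects the codec, and re-encoding a frame that grab()/retrieve() has already decoded does not recover the driver's original buffer. The camera index is illustrative.)

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::VideoCapture cap(0);            // illustrative camera index
    cv::Mat frame;
    if (!cap.read(frame)) return 1;     // read() = grab() + retrieve(): already decoded

    std::vector<uchar> png;
    cv::imencode(".png", frame, png);   // ".png" picks the encoder, not a file on disk
    return 0;
}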

Saving frames in OpenCV without compression

I'm trying to use OpenCV's imwrite() function. I want to save the frames with the .TIFF extension. The problem I have is that the saved images are compressed, so I can't use them. Any idea how I can avoid this compression?
Thanks in advance
Do not mind what sietschie says. The TIFF flag is hardcoded in the OpenCV binaries with LZW compression. You can just turn this off (comment it out) or change it.
In:
3rdparty/libtiff/tiff.h
Remove this line:
#define COMPRESSION_LZW 5 /* Lempel-Ziv & Welch */
Then compile. Presto.
TIFF options other than that are set automatically (8-bit, 16-bit, color, RGB, RGBA, etc.) depending on your image.
According to the documentation, OpenCV only exposes a limited set of options for writing image files, none of which apply to TIFF files.
So unless you want to use your own function or modify the OpenCV source, this is not possible.
I would suggest using another uncompressed format for saving the frames, such as PXM or BMP, unless you have a specific reason to use TIFF files.
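A minimal sketch of that suggestion (the capture source and file names are illustrative; BMP is stored uncompressed and binary PPM is a raw dump of the pixels):

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);    // any source of cv::Mat frames works here
    cv::Mat frame;
    if (!cap.read(frame)) return 1;

    cv::imwrite("frame.bmp", frame);                              // BMP: no compression
    cv::imwrite("frame.ppm", frame, {cv::IMWRITE_PXM_BINARY, 1}); // raw binary PPM
    return 0;
}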
cv::imwrite("imagen.TIFF", bayer, {cv::IMWRITE_TIFF_COMPRESSION, 1,
cv::IMWRITE_TIFF_XDPI, 72,cv::IMWRITE_TIFF_YDPI,72});
The simplest way is recompiling OpenCV or using libtiff directly, but I consider changing 3rdparty/libtiff/tiff.h a bad idea: after that modification you can't save compressed TIFFs at all with OpenCV, and on non-Windows systems you usually have a separate libtiff (not part of OpenCV).
I suggest a simpler approach (still an OpenCV recompilation, but you keep the ability to write compressed TIFFs and don't change libtiff directly):
saving uncompressed TIFFs with OpenCV

Is there any sample code to read thumbnail from Jpeg exif header?

I am writing an application using C++ on Windows.
I want to get a thumbnail from a JPEG without decoding the whole image.
How can I read the thumbnail from the JPEG EXIF header?
Can anyone offer me some sample code?
Many thanks!
Unsurprisingly, the library is called libexif; it has a Win32 port, and there is sample code for reading the thumbnail from a file.
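A minimal sketch of that with libexif (the file names are illustrative; when a thumbnail is present, libexif exposes its bytes as the data/size members of ExifData):

#include <libexif/exif-data.h>
#include <cstdio>

int main()
{
    ExifData* ed = exif_data_new_from_file("photo.jpg");   // parses only the metadata
    if (!ed) return 1;

    if (ed->data && ed->size) {                             // embedded thumbnail found
        std::FILE* out = std::fopen("thumbnail.jpg", "wb"); // EXIF thumbnails are normally JPEG
        if (out) {
            std::fwrite(ed->data, 1, ed->size, out);
            std::fclose(out);
        }
    }
    exif_data_unref(ed);
    return 0;
}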
Don't bother. You can create thumbnails very fast from JPEGs. They are compressed using DCTs on 8x8 pixel blocks, so take the DC component (i.e. coefficient 0,0) of each block and you have a 1/64th thumbnail (1/8 in each dimension) without a full decode. Further scaling should be fast since there are hardly any pixels left.
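In practice, the easy way to get that DC-only downscale is libjpeg's scaled decode: requesting 1/8-scale output makes the decoder use a 1x1 IDCT per block, i.e. essentially just the DC coefficient. A sketch, assuming libjpeg (or libjpeg-turbo) is available; the function name is illustrative:

#include <cstdio>
#include <vector>
#include <jpeglib.h>

// Decode a JPEG at 1/8 scale: one output pixel per 8x8 block.
std::vector<unsigned char> DecodeEighthScale(const char* path, int& w, int& h, int& channels)
{
    std::vector<unsigned char> pixels;
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return pixels;

    jpeg_decompress_struct cinfo;
    jpeg_error_mgr jerr;                 // default error handler exits on fatal errors
    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_decompress(&cinfo);
    jpeg_stdio_src(&cinfo, f);
    jpeg_read_header(&cinfo, TRUE);

    cinfo.scale_num = 1;                 // request 1/8-scale output
    cinfo.scale_denom = 8;
    jpeg_start_decompress(&cinfo);

    w = cinfo.output_width;
    h = cinfo.output_height;
    channels = cinfo.output_components;
    pixels.resize(static_cast<size_t>(w) * h * channels);

    while (cinfo.output_scanline < cinfo.output_height) {
        unsigned char* row = pixels.data()
            + static_cast<size_t>(cinfo.output_scanline) * w * channels;
        jpeg_read_scanlines(&cinfo, &row, 1);
    }

    jpeg_finish_decompress(&cinfo);
    jpeg_destroy_decompress(&cinfo);
    std::fclose(f);
    return pixels;
}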