I have generated bitmap.dll with the WinDDK.
I added it manually as a printer driver, selecting the print-to-file option.
Using this driver, I create an image of my document with the Print command.
I am able to create the image and view it, but the problem is that I get an inverted (mirrored) image.
cScans = pOemPDEV->bmInfoHeader.biHeight;
// Flip the biHeight member so that it denotes a top-down bitmap.
pOemPDEV->bmInfoHeader.biHeight = -cScans;
Does anyone have a workaround for this code? The problem appears when I comment out these lines (which I did to get the header generated properly).
Device Independent Bitmaps are documented as being laid out in memory with the bottom line at the start of the buffer. It's an experiment in Cartesian coordinates perpetrated by the designers of OS/2, who were working with Microsoft at the same time Windows 3 was being developed.
There are two possible fixes:
1. Generate your buffer upside down.
2. Keep biHeight negative: many Windows APIs that take a BITMAPINFO treat a negative biHeight value as meaning a top-down DIB (see the sketch below).
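For the second option, a minimal sketch; hdc, bits, width, and height are placeholders, not names from the driver code:

// A 24-bit top-down DIB: the negative biHeight tells GDI that the first
// row in the buffer is the TOP scanline, so nothing gets mirrored.
BITMAPINFO bmi = {0};
bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
bmi.bmiHeader.biWidth       = width;
bmi.bmiHeader.biHeight      = -height;   // negative => top-down
bmi.bmiHeader.biPlanes      = 1;
bmi.bmiHeader.biBitCount    = 24;
bmi.bmiHeader.biCompression = BI_RGB;    // top-down DIBs must be uncompressed

// 'bits' points at pixel rows stored top-to-bottom.
StretchDIBits(hdc, 0, 0, width, height,  // destination rectangle
                   0, 0, width, height,  // source rectangle
              bits, &bmi, DIB_RGB_COLORS, SRCCOPY);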
I've got some code to generate false-color synthetic images from side scan sonar data that I want to load into Google Earth as overlays. I need to be space efficient, so 8-bit PNG files seemed to be the way to go. I can use 1 color table entry for background and 255 for my false color table.
I saw previous answers recommending the CImage class in the ATL library, so I looked up the MSDN documentation here: http://msdn.microsoft.com/en-us/library/5eb252d0(v=vs.100).aspx
It looked straightforward enough, so I coded something like this:
CImage Cimg;
Cimg.Create(width, height, 8, 0);

RGBQUAD ColorTable[256];
for (int i = 0; i < 256; i++)
{
    // Fill table.
}

Cimg.SetTransparentColor(long(255));
Cimg.SetColorTable(0, 256, ColorTable);
// Fill the pixel values.
Cimg.Save("TestFile.png", Gdiplus::ImageFormatPNG);
Everything works more or less as expected, but color 255 is just another color. It's not transparent at all.
I know this is not an issue with GoogleEarth, because I can load the file into Paint Shop Pro and manually set the background clear, and then everything works great.
I've tried setting the .rgbReserved byte in the RGBQUAD table to all zeros, to 255s, and to a ramp, thinking it might be interpreted as the alpha value, but that had no effect.
I also tried stepping through the implementation of CImage::Save(), where I determined it uses Gdiplus::Bitmap::Save() to do the write with the "Built-in PNG Encoder". I thought maybe there were some encoder parameters that weren't getting set right, but this MSDN page suggests there are zero parameters for the PNG encoder:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms533846(v=vs.85).aspx
So, um, I guess my question is "HELP!" :-)
More specifically, does anyone know if any of the MSDN native functions will let me generate 8-bit PNGs with a palette I can define and at least one transparent color? A simple fix would be great, but I'm annoyed enough with CImage now that I'm willing to abandon it and try something else.
Alternatively, does anyone know of a command-line utility that could be used to fix the broken background color?
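If no such tool exists, I may just patch the file myself. As I understand the PNG spec, palette transparency is a single tRNS chunk (one alpha byte per palette entry) that must sit after PLTE and before IDAT, so a post-processing step could insert one. A rough sketch, assuming my palette entry 255 is the transparent one:

// Patch an 8-bit palettized PNG (held in memory) by inserting a tRNS
// chunk after PLTE, making palette entry 255 fully transparent.
// Sketch only: assumes a well-formed file with a single PLTE chunk.
#include <cstdint>
#include <cstring>
#include <vector>

static uint32_t Crc32(const uint8_t* data, size_t len)   // standard PNG CRC-32
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; ++i) {
        crc ^= data[i];
        for (int k = 0; k < 8; ++k)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return crc ^ 0xFFFFFFFFu;
}

static void PutBE32(std::vector<uint8_t>& v, uint32_t x)  // big-endian append
{
    v.push_back(uint8_t(x >> 24)); v.push_back(uint8_t(x >> 16));
    v.push_back(uint8_t(x >> 8));  v.push_back(uint8_t(x));
}

bool InsertTrns(std::vector<uint8_t>& png)
{
    size_t pos = 8;                                  // skip the PNG signature
    while (pos + 8 <= png.size()) {
        uint32_t len = (uint32_t(png[pos]) << 24) | (uint32_t(png[pos + 1]) << 16)
                     | (uint32_t(png[pos + 2]) << 8) | png[pos + 3];
        size_t next = pos + 8 + len + 4;             // length + type + data + CRC
        if (memcmp(&png[pos + 4], "PLTE", 4) == 0) {
            std::vector<uint8_t> chunk;
            PutBE32(chunk, 256);                     // tRNS data length
            const char type[] = { 't', 'R', 'N', 'S' };
            chunk.insert(chunk.end(), type, type + 4);
            for (int i = 0; i < 256; ++i)            // one alpha per palette entry
                chunk.push_back(i == 255 ? 0 : 255);
            PutBE32(chunk, Crc32(&chunk[4], 4 + 256));  // CRC over type + data
            png.insert(png.begin() + next, chunk.begin(), chunk.end());
            return true;
        }
        pos = next;
    }
    return false;                                    // no PLTE chunk found
}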
Thanks!
I have a little problem. I am developing a skin engine that allows Delphi VCL applications to be skinned. For this, I created a new file format (mSkin) to host my skin data. A skin file contains two headers: the first holds some information about the colors used by the skin, and the second holds the bitmap used by the skin (an alpha-channel bitmap, to support transparency). In my control I use a function to extract the object's bitmap from the skin bitmap (mSkin.Bitmap) and draw it onto the control. The problem is that when the bitmap is not shaped I get bad quality when scaling the source bitmap. The size of the object bitmap is proportional to the control size (when the control size changes, the bitmap size changes too).
I tried reading the VCL Styles code to solve the problem, but it seems very difficult to follow.
Is there a way to copy a bitmap while maintaining quality?
If you use bitmaps, you simply can't scale without the problems you describe. If you want scaling where, e.g., a one-pixel border stays a one-pixel border, then you have to use a vector-based format for your images.
You need to divide the image into 9 different bitmaps, like a 3x3 grid. Then you scale only the middle one; the rest stay the same size but move. This link is for Android, but the same principles apply.
Here is another link. This one is for Flash, but it also explains the principle. A GDI sketch of the idea follows.
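For illustration, a minimal sketch of the 3x3 ("nine-slice") drawing with plain GDI; it is written in C++, but the same calls are available to Delphi through the Windows unit. The helper name and the fixed border width edge are invented for the example:

// Draw src onto dst with nine-slice scaling: corners copy 1:1, edges
// stretch along one axis, and only the centre stretches in both.
// Assumes 2 * edge <= srcW and 2 * edge <= srcH.
void DrawNineSlice(HDC dst, const RECT& d, HDC src, int srcW, int srcH, int edge)
{
    // Column/row boundaries in the source and the destination.
    int sx[4] = { 0, edge, srcW - edge, srcW };
    int sy[4] = { 0, edge, srcH - edge, srcH };
    int dx[4] = { d.left, d.left + edge, d.right - edge, d.right };
    int dy[4] = { d.top,  d.top + edge,  d.bottom - edge, d.bottom };

    SetStretchBltMode(dst, HALFTONE);   // smoother than the default mode
    for (int row = 0; row < 3; ++row)
        for (int col = 0; col < 3; ++col)
            StretchBlt(dst, dx[col], dy[row],
                       dx[col + 1] - dx[col], dy[row + 1] - dy[row],
                       src, sx[col], sy[row],
                       sx[col + 1] - sx[col], sy[row + 1] - sy[row],
                       SRCCOPY);
}

For the alpha-channel bitmaps mSkin uses, the same slicing works with AlphaBlend in place of StretchBlt.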
Try a resampling algorithm.
For upscaling, I very much like the B-spline.
For simple content like yours, the hqnx family sometimes gives good results and is very fast to render (even in real time). For some Pascal source code, you may take a look at this forum thread.
See also this more general question.
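For reference, a generic bilinear resampler sketch (much simpler than B-spline or hqnx, but it shows the principle); pixels are assumed to be tightly packed 32-bit values:

// Resize a 32-bit image with bilinear filtering. Each channel of the
// four neighbouring source pixels is blended by the fractional offsets.
#include <cstdint>
#include <cmath>

void ResizeBilinear(const uint32_t* src, int sw, int sh,
                    uint32_t* dst, int dw, int dh)
{
    for (int y = 0; y < dh; ++y) {
        float fy = (y + 0.5f) * sh / dh - 0.5f;   // source y, pixel-centre mapping
        int   y0 = (int)floorf(fy);
        float ty = fy - y0;
        int   y1 = y0 + 1;
        y0 = y0 < 0 ? 0 : (y0 >= sh ? sh - 1 : y0);   // clamp to the image
        y1 = y1 < 0 ? 0 : (y1 >= sh ? sh - 1 : y1);
        for (int x = 0; x < dw; ++x) {
            float fx = (x + 0.5f) * sw / dw - 0.5f;
            int   x0 = (int)floorf(fx);
            float tx = fx - x0;
            int   x1 = x0 + 1;
            x0 = x0 < 0 ? 0 : (x0 >= sw ? sw - 1 : x0);
            x1 = x1 < 0 ? 0 : (x1 >= sw ? sw - 1 : x1);
            uint32_t out = 0;
            for (int shift = 0; shift < 32; shift += 8) {   // blend each channel
                float a = (src[y0 * sw + x0] >> shift) & 0xFF;
                float b = (src[y0 * sw + x1] >> shift) & 0xFF;
                float c = (src[y1 * sw + x0] >> shift) & 0xFF;
                float d = (src[y1 * sw + x1] >> shift) & 0xFF;
                float v = (a * (1 - tx) + b * tx) * (1 - ty)
                        + (c * (1 - tx) + d * tx) * ty;
                out |= (uint32_t)(v + 0.5f) << shift;
            }
            dst[y * dw + x] = out;
        }
    }
}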
I'm trying to convert a 2D array to a DDS and save it to a file. The array is full of Color structs (each with red, green, blue, and alpha components). Once I get the array into the correct format, I'm sure saving it to a file won't be a problem.
I'm fine with either using a lib for this (as long as its license allows use in a closed-source project and it works on both Linux and Windows) or doing it manually, if I can find a good resource explaining how.
If anyone can point me in the right direction, I'd really appreciate it.
In DirectDraw you can create a surface from the data in memory, by setting up certain fields in the DDSURFACEDESC structure and passing it to the CreateSurface method of the IDirectDraw interface.
First you need to tell DirectDraw which fields of the DDSURFACEDESC structure contain the correct information by setting the dwFlags field to the following set of flags: DDSD_WIDTH | DDSD_HEIGHT | DDSD_PIXELFORMAT | DDSD_LPSURFACE | DDSD_PITCH.
Oh, and this only works for system-memory surfaces, so you probably need to add the DDSCAPS_SYSTEMMEMORY flag in the ddsCaps.dwCaps field (if DirectDraw doesn't do it by default).
Then you specify the address of the beginning of your pixel data array in the lpSurface field. If your buffer is contiguous, just set lPitch to 0; otherwise, set the correct pitch there (the distance in bytes between the beginnings of two subsequent scanlines).
Set the correct pixel format in ddpfPixelFormat field, with correct bit depth in dwRGBBitCount and RGB masks in dwRBitMask, dwGBitMask and dwBBitMask.
Then set the lXPitch to the number of bytes each pixel occupies (3 for RGB); it depends on the pixel format you use.
Then pass the filled structure into CreateSurface and see if it works.
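Putting those steps together, a rough sketch (DirectDraw 7 interfaces; ddraw7, pixels, width, and height are placeholders; error handling omitted):

// Wrap an existing, tightly packed 24-bit RGB buffer in a
// system-memory DirectDraw surface.
DDSURFACEDESC2 ddsd = {};
ddsd.dwSize   = sizeof(ddsd);
ddsd.dwFlags  = DDSD_WIDTH | DDSD_HEIGHT | DDSD_PIXELFORMAT |
                DDSD_LPSURFACE | DDSD_PITCH | DDSD_CAPS;
ddsd.dwWidth  = width;
ddsd.dwHeight = height;
ddsd.lPitch   = width * 3;          // bytes per scanline (no row padding here)
ddsd.lpSurface = pixels;            // our buffer; we must free it ourselves
ddsd.ddsCaps.dwCaps = DDSCAPS_OFFSCREENPLAIN | DDSCAPS_SYSTEMMEMORY;

ddsd.ddpfPixelFormat.dwSize        = sizeof(DDPIXELFORMAT);
ddsd.ddpfPixelFormat.dwFlags       = DDPF_RGB;
ddsd.ddpfPixelFormat.dwRGBBitCount = 24;
ddsd.ddpfPixelFormat.dwRBitMask    = 0x00FF0000;
ddsd.ddpfPixelFormat.dwGBitMask    = 0x0000FF00;
ddsd.ddpfPixelFormat.dwBBitMask    = 0x000000FF;

IDirectDrawSurface7* surface = nullptr;
HRESULT hr = ddraw7->CreateSurface(&ddsd, &surface, nullptr);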
When you create the surface this way, keep in mind that DirectDraw will not manage the data buffer itself and won't free that memory when you call Release on the surface. You need to free it yourself once it's no longer used by the surface.
If you want this pixel data placed in video memory instead, you need to create an offscreen surface in the usual way, lock it, copy your pixels into its own buffer in video memory (you'll find the address in the lpSurface field, and remember to take lPitch into account!), and then unlock it.
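A sketch of that lock-and-copy path, again with placeholder names (surface is an offscreen surface created the usual way, pixels a packed 24-bit buffer):

// Copy a packed 24-bit buffer into a (possibly video-memory) surface,
// honouring the surface's own pitch.
DDSURFACEDESC2 desc = {};
desc.dwSize = sizeof(desc);
if (SUCCEEDED(surface->Lock(nullptr, &desc, DDLOCK_WAIT | DDLOCK_WRITEONLY, nullptr)))
{
    BYTE* dst = static_cast<BYTE*>(desc.lpSurface);
    for (DWORD y = 0; y < desc.dwHeight; ++y)
        memcpy(dst + y * desc.lPitch,        // destination row (pitch-aligned)
               pixels + y * width * 3,       // source row (tightly packed)
               width * 3);
    surface->Unlock(nullptr);
}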
Summary
Using Windows GDI to convert 24-bit color to indexed color, it seems GDI chooses colors which are "close enough" even though there are exact matches in the supplied palette.
Can anyone confirm this as a GDI issue or am I making a mistake somewhere?
Maybe there's a "please check the whole palette for color matches" flag which I've failed to find?
Note: This is not about quantizing. The source is 24-bit but contains 256 or fewer colors so an exact palette is trivial to calculate. The problem is GDI doesn't use the full palette.
Workaround
I've worked around the problem by mapping the colors myself but I'd prefer to use GDI as it should be better optimized. Problem is, it seems to be "fast but wrong."
Detailed description
My source image is 24-bit but uses 256 (or fewer) colors. I generate an exact palette for it and ask GDI to transfer the image into an indexed bitmap using that palette. For some pixels GDI chooses similar, but not exact, colors even though there are exact colors elsewhere in the palette. This ruins smooth gradients.
This problem happens with:
SetDIBitsToDevice
StretchDIBits
BitBlt
StretchBlt
The problem does not happen with:
SetPixel or SetPixelV in a loop (incredibly slow!)
Using my own code to do the mapping
I've tested this on:
Windows 7 (NVidia hardware/drivers)
Windows Vista (ATI hardware/drivers)
Windows 2000 (VMware hardware/drivers)
In every test I get the same results (not just wrong colors, but always the same wrong colors).
I don't think the issue is color management (ICM/ICC profiles/etc.): most of the APIs say they don't use it, I've tried explicitly turning it off on the GDI DC as well as via the V5 bitmap header, and I don't think it would apply within my vanilla Win2k VM.
Test Project
Code for a simple Win32/GDI/VS2008 test project can be found here:
http://www.pretentiousname.com/data/GdiIndexColor.zip
The Test1 function within Win32UI.cpp is the actual test. It has two arrays of RGBQUADs, one the source image and the other the exact palette for it. It verifies that the palette really is exact and then asks GDI to convert the image using the APIs mentioned above, testing the result each time. For each test it'll tell you the first incorrect pixel's before & after colors, or tell you that all pixels are correct if it worked.
Thanks!
Thanks for reading my question! Sorry if it's the result of me doing something really dumb! :-)
I ran into this exact same problem and eventually contacted Microsoft and provided them with a test case. In the test case I provided a gradient image that had 128 colors in a 24-bit DIB; I then converted that to an 8-bit DIB created with a color table containing all 128 colors from the 24-bit image. After conversion, the 8-bit image used only 65 of the 128 colors.
To sum up their response:
This is not a bug; GDI does use a "close enough" calculation when down-converting the color depth of an image. This is not really documented anywhere, and the only way to ensure all of the original colors convert exactly is to manipulate the pixels yourself.
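For what it's worth, the manual mapping is straightforward; a minimal sketch with illustrative names, assuming the palette really does contain every source color (RGBQUAD comes from windows.h):

// Translate 24-bit pixels to palette indices by exact match.
// lookup.at() will throw if a pixel's color is missing from the palette.
#include <cstdint>
#include <unordered_map>

void MapExact(const RGBQUAD* src, int pixelCount,
              const RGBQUAD* palette, int paletteSize,
              uint8_t* dstIndices)
{
    // Pack each palette color into a 24-bit key and remember its index.
    std::unordered_map<uint32_t, uint8_t> lookup;
    for (int i = 0; i < paletteSize; ++i)
        lookup[(uint32_t(palette[i].rgbRed) << 16) |
               (uint32_t(palette[i].rgbGreen) << 8) |
                palette[i].rgbBlue] = uint8_t(i);

    for (int p = 0; p < pixelCount; ++p)
        dstIndices[p] = lookup.at((uint32_t(src[p].rgbRed) << 16) |
                                  (uint32_t(src[p].rgbGreen) << 8) |
                                   src[p].rgbBlue);
}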
Are you using SetDIBColorTable()? This article seems to imply that, when drawing to a DIB, it is not sufficient to call SelectPalette() but that SetDIBColorTable() also needs to be called to set the palette for the DIB:
However, if the application is using a DIB section, you create a logical palette from the DIB colour table as usual and then also pass the DIB colour table to the DIB section with a call to SetDIBColorTable(). Despite what the "Platform SDK" documentation of RealizePalette() appears to imply, RealizePalette() does not adjust the colour table of the DIB section.
The article contains some more information on drawing into palettized DIBs that may be relevant (see the section "Palettes and DIB sections").
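In code, the article's recipe amounts to something like this sketch (hdcMem, hPal, and colorTable are placeholders; hdcMem is assumed to be a memory DC with the DIB section selected):

// Select and realize the logical palette, then ALSO push the same colours
// into the DIB section itself; RealizePalette alone won't touch it.
HPALETTE oldPal = SelectPalette(hdcMem, hPal, FALSE);
RealizePalette(hdcMem);
SetDIBColorTable(hdcMem, 0, 256, colorTable);   // colorTable: RGBQUAD[256]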
I vaguely remember that you also need to call RealizePalette(hdc) after a palette is selected into a DC. We ditched our palette code so long ago that it isn't even in our source tree anymore. I see from your code that you already tried this, but I suggest you might want to play with that some more.
I do remember that the palette code was pretty fragile, and we stopped using it as soon as we could.
Some older AVI files had 8-bit palettized video with a palette embedded in the file, so playback code for those files would need to load and realize a palette. I remember that realizing didn't do anything unless you were the foreground app, but that SHOULD only apply to screen DCs and not memory DCs.
If you searched around for sample source code that could play palettized AVI's you might find something that shows the magic formula for getting palettes to work.
Sorry I can't be more help.
Hopefully someone has an answer, and it's not TOO complex. I'm working on a C++ DLL (no C# or .NET; a fully static DLL).
Anyhow, it builds monochrome bitmaps. I have all of it working EXCEPT the resolution. I get the device context, get a compatible device context, build the bitmap, draw what I need (as black/white), and can save. All of this works fine. However, I can't figure out how to set the resolution of the bitmap.
While doing some testing from another utility in C#, I can create a bitmap and set its resolution. In doing so, I ran a routine to generate the same file content with a resolution parameter from 1 to 300. Each image came out exactly the same EXCEPT for the values in the "biCompression" DWORD property. The default is the screen resolution of 96x96, but it obviously needs to change for printers at 300x300, and some even at 203x203.
Are you absolutely sure? The description of the behavior you observe sounds fishy to me and I would suspect the code that you're using to write your bitmaps or your code that reads them back in.
Are you sure you don't want to set biXPelsPerMeter and biYPelsPerMeter? Those two fields tell you how many pixels per meter in X and Y, which you can use to set the DPI. biCompression only deals with the compression type of the bitmap, e.g., RLE, JPG, PNG, etc.
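If that's the case, setting those fields from a DPI value looks roughly like this (the rest of the header setup is assumed):

// BMP headers store resolution in pixels per metre, not DPI.
// 1 inch = 0.0254 m, so ppm = dpi / 0.0254 (rounded).
LONG DpiToPelsPerMeter(int dpi)
{
    return (LONG)((dpi * 10000 + 127) / 254);
}

BITMAPINFOHEADER bih = {};   // width/height/planes/bitcount set elsewhere
bih.biSize = sizeof(bih);
bih.biXPelsPerMeter = DpiToPelsPerMeter(300);   // 300 DPI -> 11811 ppm
bih.biYPelsPerMeter = DpiToPelsPerMeter(300);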
Thanks for the input; I'll look into biXPelsPerMeter and biYPelsPerMeter too. I'll double-check the format and what was set. If so, you may have hit it, with a second set of eyes (minds) on my issue.
Thanks