How to get the pixel format layout using the Windows API - C++

Given an HDC, I would like to figure out not only how many bits of Red, Green, Blue and Alpha there are (using DescribePixelFormat), but also their layout (BGR(A) or RGB(A)), so that I can perform pixel- and color-component-level manipulation.
Should I always assume they're in BGR(A) order? Is that a reasonable assumption?

http://msdn.microsoft.com/en-us/library/dd183449(VS.85).aspx
That's the documentation for COLORREF, which is stored as 0x00BBGGRR.
Given that COLORREF is used by most of the WinAPI, I'd have to assume that everything on Windows is actually stored in BGRA format... AFAIK, DIBs are in BGRA format.
You can try painting a window red: create a DIB section, copy the pixels into a buffer, and check the first byte. If it is 0xFF, the layout is RGBA; otherwise it is BGRA.
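A minimal sketch of that probe (assuming a 32bpp top-down DIB section; the sizes here are illustrative):

#include <windows.h>
#include <cstdio>

int main() {
    // 1x1 top-down 32bpp DIB section we can inspect byte-by-byte.
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth = 1;
    bmi.bmiHeader.biHeight = -1;           // negative height = top-down
    bmi.bmiHeader.biPlanes = 1;
    bmi.bmiHeader.biBitCount = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    void* bits = nullptr;
    HDC memdc = CreateCompatibleDC(nullptr);
    HBITMAP dib = CreateDIBSection(memdc, &bmi, DIB_RGB_COLORS, &bits, nullptr, 0);
    HGDIOBJ old = SelectObject(memdc, dib);

    SetPixel(memdc, 0, 0, RGB(255, 0, 0)); // paint the single pixel pure red
    GdiFlush();                            // ensure GDI has written the DIB memory

    const BYTE* p = static_cast<const BYTE*>(bits);
    printf(p[0] == 0xFF ? "red is first: RGB(A) order\n"
                        : "red is last: BGR(A) order\n");

    SelectObject(memdc, old);
    DeleteObject(dib);
    DeleteDC(memdc);
    return 0;
}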
32-bit GDI images are always stored as BGRA on my machine; 24-bit GDI images are always BGR. I have released an image handling API, and of all its users I haven't met a single one who didn't have the same format. I have yet to see a DC or default backbuffer format in RGBA instead of BGRA.
What is the reason you really need to know the format of a DC? I never found myself wanting to know.
When reading an image, you figure out the bit count from the BITMAPINFOHEADER.biBitCount field.

According to the docs for the PIXELFORMATDESCRIPTOR:
iPixelType
Specifies the type of pixel data. The following types are defined:
PFD_TYPE_RGBA RGBA pixels. Each pixel has four components in this order: red, green, blue, and alpha.
PFD_TYPE_COLORINDEX Color-index pixels. Each pixel uses a color-index value.
So it seems like they're either RGBA or indexed.
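Note that the PIXELFORMATDESCRIPTOR also carries per-channel bit counts and shifts, so when the HDC actually has a pixel format selected (e.g. one set with SetPixelFormat for an OpenGL window) you can derive the layout from those fields. A minimal sketch, assuming such a DC:

#include <windows.h>
#include <cstdio>

// Print where each channel sits inside a pixel for the format
// currently selected into hdc. Returns false if no format is set.
bool DumpChannelLayout(HDC hdc) {
    int fmt = GetPixelFormat(hdc);   // 0 if no pixel format is selected
    if (fmt == 0)
        return false;
    PIXELFORMATDESCRIPTOR pfd = {};
    if (DescribePixelFormat(hdc, fmt, sizeof(pfd), &pfd) == 0)
        return false;
    printf("R: %d bits at shift %d\n", (int)pfd.cRedBits,   (int)pfd.cRedShift);
    printf("G: %d bits at shift %d\n", (int)pfd.cGreenBits, (int)pfd.cGreenShift);
    printf("B: %d bits at shift %d\n", (int)pfd.cBlueBits,  (int)pfd.cBlueShift);
    printf("A: %d bits at shift %d\n", (int)pfd.cAlphaBits, (int)pfd.cAlphaShift);
    return true;
}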

Related

Why does Windows GDI use RGBA format for COLORREF instead of BGRA?

MSDN states:
When specifying an explicit RGB color, the COLORREF value has the following hexadecimal form:
0x00bbggrr
The low-order byte contains a value for the relative intensity of red; the second byte contains a value for green; and the third byte contains a value for blue. The high-order byte must be zero. The maximum value for a single byte is 0xFF.
From wingdi.h
#define RGB(r,g,b) ((COLORREF)((BYTE)(r) | ((BYTE)(g) << 8) | ((BYTE)(b) << 16)))
#define GetRValue(rgb) ((BYTE) (rgb) )
#define GetGValue(rgb) ((BYTE) ((rgb) >> 8))
#define GetBValue(rgb) ((BYTE) ((rgb) >> 16))
As Windows is little endian, a COLORREF is in RGBA format in memory. This looks strange, because isn't the color format that Windows uses internally BGR(A)?
The RGBQUAD structure is defined as
typedef struct tagRGBQUAD {
  BYTE rgbBlue;
  BYTE rgbGreen;
  BYTE rgbRed;
  BYTE rgbReserved;
} RGBQUAD;
which is, unlike COLORREF, BGRA.
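A quick way to see the difference is to dump both in memory. A small sketch (assumes a little-endian machine, which all Windows desktop targets are):

#include <windows.h>
#include <cstdio>

int main() {
    COLORREF cr = RGB(0x11, 0x22, 0x33);   // r=0x11, g=0x22, b=0x33
    RGBQUAD  rq = { 0x33, 0x22, 0x11, 0 }; // the same color as an RGBQUAD

    const BYTE* p = reinterpret_cast<const BYTE*>(&cr);
    printf("COLORREF bytes: %02X %02X %02X %02X\n", p[0], p[1], p[2], p[3]);
    // prints 11 22 33 00: red comes first in memory

    p = reinterpret_cast<const BYTE*>(&rq);
    printf("RGBQUAD  bytes: %02X %02X %02X %02X\n", p[0], p[1], p[2], p[3]);
    // prints 33 22 11 00: blue comes first, i.e. BGRA-style ordering
    return 0;
}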
Since the BitBlt function expects an array of COLORREF values, this would mean that an additional conversion from RGBA to BGRA goes on during every call, if Windows uses BGRA as its native format.
I don't remember exactly, but I also read somewhere that there is a strange mix of pixel formats used in the WinAPI.
Can someone please explain?
COLORREFs date way back to when there was much less standardization in pixel formats. Many graphics adapters were still using palettes rather than full 24- or 32-bit color, so even if your adapter required a byte re-order, you didn't need to do very many of them. Some graphics adapters even stored images in separate color planes rather than a single plane of multi-channel colors. There was no "right" answer back then.
RGBQUADs came from the BMP format, which as Raymond Chen mentioned in the comments, comes from the OS/2 bitmap format.

How does bit blit work in GDI?

I'm interested in how bit blit works in GDI. I know that it creates a resulting bitmap from the source and destination bitmaps based on the dwROP parameter, but I'm interested in how. I saw one example in which it is used for masking, done with a monochrome mask and the SetBkColor() function; I am really confused about how the background color is related to these bitmaps... And in another one, SetTextColor() is used for removing the background... How are these DC attributes (background color and text color) related? Thanks
You are mistaken; BitBlt never uses the text or background color.
BitBlt raster operations use a pattern (that is the currently selected brush), the source bitmap, and the destination bitmap.
The dwRop code defines a calculation between these three data sources.
You can find a good explanation of how these ROP codes work in Charles Petzold's book; read the part "The Raster Operations" in the corresponding chapter.
Background and text color don't play a role, only the brush currently selected into the destination device context.
BitBlt() iterates through the pixels in the source rectangle and copies them to the destination after applying a mathematical operation to the pixel data. The dwRop value determines that operation. There are three pixel values that can be combined by the operation to calculate the destination pixel value:
the pixel from the source bitmap; the ROP code identifier contains "SRC".
the pixel from the brush; the ROP code identifier contains "PAT" if used.
the pixel from the destination bitmap before it is written; you'll see "DST" if used.
The mathematical operations applied to the pixel values are very simple. They can be (see the sketch after this list):
none, dwROP = SRCCOPY or PATCOPY.
always set to 0, dwROP = BLACKNESS
always set to 1, dwROP = WHITENESS
NOT operator, same as ~ in a C program
AND operator, same as & in a C program
OR operator, same as | in a C program
XOR operator, same as ^ in a C program
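A software sketch of what those per-pixel operations amount to (the enum and function names here are illustrative, not GDI API; the real BitBlt works on whole scanlines inside the driver):

#include <cstdint>
#include <cstddef>

enum Rop { SrcCopy, SrcPaint, SrcAnd, SrcInvert, NotSrcCopy, Blackness, Whiteness };

// Apply one ROP to n bytes of raw pixel memory, bit by bit.
void ApplyRop(uint8_t* dst, const uint8_t* src, std::size_t n, Rop rop) {
    for (std::size_t i = 0; i < n; ++i) {
        switch (rop) {
            case SrcCopy:    dst[i] = src[i];   break; // SRCCOPY: plain copy
            case SrcPaint:   dst[i] |= src[i];  break; // SRCPAINT: OR
            case SrcAnd:     dst[i] &= src[i];  break; // SRCAND: AND
            case SrcInvert:  dst[i] ^= src[i];  break; // SRCINVERT: XOR
            case NotSrcCopy: dst[i] = static_cast<uint8_t>(~src[i]); break; // NOT
            case Blackness:  dst[i] = 0x00;     break; // BLACKNESS: all bits 0
            case Whiteness:  dst[i] = 0xFF;     break; // WHITENESS: all bits 1
        }
    }
}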
These operations are so simple because that's what a processor can easily do, and they are very simple to accelerate in hardware. The most important thing to keep in mind is that they are bit operators: the operation is applied to each individual bit of the pixel value. That makes ROPs a historical artifact; they only have a useful outcome on monochrome bitmaps (1 pixel = 1 bit) or indexed bitmap formats, like 4bpp or 8bpp, with a carefully chosen palette. That mattered on machines in the 1980s.
The video adapters you use today, as well as the bitmap formats, are at least 16bpp, almost always 24bpp or 32bpp. An operation like NOT on a pixel of such a bitmap just produces a wildly different color that the human eye won't recognize as related to the original color in any way. Today you only use SRCCOPY. Maybe PATCOPY to apply a texture brush, but you'd use PatBlt() instead. There's hackorama with multiple BitBlts to create transparency effects, but you'd use TransparentBlt() or GDI+ instead.

How to transform RGB (three bytes) to one byte for bitmap format?

I have data for every pixel: one byte red, one byte green, one byte blue. I need to pack this into an 8-bit bitmap, so I have only one byte per pixel. How do I transform RGB (three bytes) to one byte for the bitmap format?
(I am using C++ and cannot use any external libraries.)
I think you've misunderstood how to form a bitmap structure. You do not need to pack (somehow) three bytes into one; that is not possible anyway, unless you throw away information (as with special image formats like GL_R3_G3_B2).
The BMP file format wiki page shows the detailed BMP format: a header, followed by data. Depending on what you set in the header, it is possible to form a BMP image containing RGB data where each component is one byte.
First you need to decide how many bits you want to allocate for each color.
3 bits per color will overflow a byte (9 bits);
2 bits per color will underflow it (6 bits).
In a three-byte RGB bitmap you have one byte to represent each color's intensity, where 0 is the minimum and 255 the maximum. When you convert it to a 1-byte bitmap (assuming you choose 2 bits per color), the transform is:
1-byte red color / 64
i.e. you will get only 4 shades out of a spectrum of 256 shades per color.
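A sketch of that 2-bits-per-channel transform (the bit positions chosen for the packing are an arbitrary assumption; only 6 of the 8 bits are used):

#include <cstdint>

// Quantize one 24-bit RGB pixel down to a single byte: each 0..255
// component divided by 64 leaves 4 shades (2 bits), packed as 00rrggbb.
uint8_t PackRgb222(uint8_t r, uint8_t g, uint8_t b) {
    uint8_t r2 = r / 64;   // 0..3
    uint8_t g2 = g / 64;
    uint8_t b2 = b / 64;
    return static_cast<uint8_t>((r2 << 4) | (g2 << 2) | b2);
}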
First you have to produce a 256-color palette that best fits your source image.
Then you need to dither the image using the palette you've generated.
Both problems have many well-known solutions. However, it's impossible to produce a high-quality result completely automatically: for different source images, different approaches work best. For example, Photoshop has a UI for tuning the parameters of the process.
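For the mapping step (ignoring dithering), a minimal sketch that assumes the 256-entry palette has already been generated by some other means:

#include <cstdint>
#include <climits>

struct Rgb { uint8_t r, g, b; };

// Index of the palette entry closest to px, by squared Euclidean
// distance in RGB space. Dithering would then spread the residual error.
uint8_t NearestPaletteIndex(Rgb px, const Rgb palette[256]) {
    int best = 0;
    long bestDist = LONG_MAX;
    for (int i = 0; i < 256; ++i) {
        long dr = (long)px.r - palette[i].r;
        long dg = (long)px.g - palette[i].g;
        long db = (long)px.b - palette[i].b;
        long dist = dr * dr + dg * dg + db * db;
        if (dist < bestDist) { bestDist = dist; best = i; }
    }
    return static_cast<uint8_t>(best);
}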

GDI functions set alpha channel to 0 when drawing. Why?

I've created a 32-bit DIB section, filled it with some non-zero values (FillMemory), and done some drawing on it using GDI functions.
I've looked at the memory of the DIB section and saw that every 4th byte (the alpha channel) is now 0.
I had an explanation for this behavior some years ago but didn't manage to find it again (and can't remember why GDI acts like that).
Does anybody know why GDI functions set the alpha channel to 0? Is there any specification of this behavior?
The idea is this:
dib = CreateDIBSection(hdc..., &bytes);
FillMemory(bytes,...255);
memdc = CreateCompatibleDC(hdc);
SelectObject(memdc, dib);
MoveToEx(memdc,...);
LineTo(memdc,...);
// look at every pixel in bytes
// if alpha == 255 then it is undrawn pixel
// and set alpha + premultiply colors otherwise
AlphaBlend(hdc, ... memdc,...);
This code works. But it assumes that GDI functions set alpha to 0. I want to be sure that this is "legal" behavior.
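A more complete sketch of the same idea, with the elided parameters filled in by illustrative values (top-down 32bpp DIB; AlphaBlend needs msimg32.lib):

#include <windows.h>

void DrawWithAlphaFixup(HDC hdc, int w, int h) {
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth = w;
    bmi.bmiHeader.biHeight = -h;                 // top-down
    bmi.bmiHeader.biPlanes = 1;
    bmi.bmiHeader.biBitCount = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    void* bytes = nullptr;
    HBITMAP dib = CreateDIBSection(hdc, &bmi, DIB_RGB_COLORS, &bytes, nullptr, 0);
    FillMemory(bytes, (SIZE_T)w * h * 4, 0xFF);  // sentinel: alpha byte = 255

    HDC memdc = CreateCompatibleDC(hdc);
    HGDIOBJ old = SelectObject(memdc, dib);
    MoveToEx(memdc, 0, 0, nullptr);
    LineTo(memdc, w, h);                         // GDI writes alpha = 0 here
    GdiFlush();

    BYTE* p = static_cast<BYTE*>(bytes);         // BGRA byte order
    for (int i = 0; i < w * h; ++i, p += 4) {
        if (p[3] == 255)
            p[0] = p[1] = p[2] = p[3] = 0;       // untouched: fully transparent
        else
            p[3] = 255;                          // drawn: opaque; premultiplying
                                                 // by alpha = 255 is a no-op
    }

    BLENDFUNCTION bf = { AC_SRC_OVER, 0, 255, AC_SRC_ALPHA };
    AlphaBlend(hdc, 0, 0, w, h, memdc, 0, 0, w, h, bf);

    SelectObject(memdc, old);
    DeleteDC(memdc);
    DeleteObject(dib);
}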
It is because alpha blending became part of the drawing functionality long after Windows GDI was originally designed. You have to use relatively new functions like AlphaBlend() (available since Windows 2000, AFAIK) to get the feature.
Originally GDI was designed so that the 32-bit color value COLORREF composed by the RGB macro contains colors like this: 0x00bbggrr. So, as you see, what you think are alpha channel bits are not; those are actually set to zero by GDI. Transparency was achieved by using masks, not alpha blending.
The binary form of a GDI COLORREF is documented that way (see the COLORREF documentation linked earlier), so the behavior your code relies on is legal (barring the unlikely event that MS changes the documentation).

1bpp Monochromatic BMP

I ran a demo BMP file format helper program, "DDDemo.exe", to help me visualize the format of a 32x1 pixel BMP file (monochromatic). I'm okay with the two header sections but don't seem to understand the color table and pixel bits portions. I made two 32x1 pixel BMP files to help me compare (please see attached).
Can someone assist me in understanding how the "pixel bits" relate to the color map?
UPDATE: After some trial and error I finally was able to write a 32x1 pixel monochromatic BMP. Although it has different pixel bits than the attached images, this tool helped with the header and color mapping concept. Thank you for everyone's input.
An unset bit in the PIXEL BITS refers to the first color table entry (0,0,0), black, and a set bit refers to the second color table entry (ff,ff,ff), white.
"The 1-bit per pixel (1bpp) format supports 2 distinct colors, (for example: black and white, or yellow and pink). The pixel values are stored in each bit, with the first (left-most) pixel in the most-significant bit of the first byte. Each bit is an index into a table of 2 colors. This Color Table is in 32bpp 8.8.8.0.8 RGBAX format. An unset bit will refer to the first color table entry, and a set bit will refer to the last (second) color table entry." - BMP file format
The color table for these images is simply indicating that there are two colors in the image:
Color 0 is (00, 00, 00) -- pure black
Color 1 is (FF, FF, FF) -- pure white
The image compression method shown (BI_RGB -- uncompressed) doesn't make sense with the given pixel data and images, though.
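For reference, a sketch of writing such a 32x1 1bpp BMP by hand (the pixel pattern and output file name are arbitrary):

#include <windows.h>
#include <cstdio>

int main() {
    const DWORD rowBytes = 4;                     // ((32 * 1 + 31) / 32) * 4

    BITMAPINFOHEADER bih = {};
    bih.biSize = sizeof(bih);
    bih.biWidth = 32;
    bih.biHeight = 1;
    bih.biPlanes = 1;
    bih.biBitCount = 1;                           // 1bpp: 2-entry color table
    bih.biCompression = BI_RGB;                   // uncompressed
    bih.biSizeImage = rowBytes;
    bih.biClrUsed = 2;

    RGBQUAD palette[2] = {
        { 0x00, 0x00, 0x00, 0 },                  // index 0 (unset bit): black
        { 0xFF, 0xFF, 0xFF, 0 },                  // index 1 (set bit):   white
    };

    BITMAPFILEHEADER bfh = {};
    bfh.bfType = 0x4D42;                          // 'BM'
    bfh.bfOffBits = sizeof(bfh) + sizeof(bih) + sizeof(palette);  // 62 bytes
    bfh.bfSize = bfh.bfOffBits + rowBytes;

    // 32 pixels, left-most pixel in the most-significant bit of byte 0.
    BYTE row[4] = { 0xF0, 0x0F, 0xAA, 0x55 };     // arbitrary test pattern

    FILE* f = fopen("mono32x1.bmp", "wb");
    if (!f) return 1;
    fwrite(&bfh, sizeof(bfh), 1, f);
    fwrite(&bih, sizeof(bih), 1, f);
    fwrite(palette, sizeof(palette), 1, f);
    fwrite(row, rowBytes, 1, f);
    fclose(f);
    return 0;
}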