Alpha channel in DeviceContext (HDC) - c++

Please help me with the alpha channel in an HDC. I create a memory DC with CreateCompatibleDC, then call CreateDIBSection so I can access the image bytes in memory, and then call DrawFrameControl on that DC. Everything works, but although there are 4 bytes per pixel in memory, the alpha channel gets filled with 00, even where it was FF before. I need the alpha channel. How can I make DrawFrameControl write real alpha values, or at least leave them untouched? Thank you.

You can't make GDI not write to the alpha / reserved byte of a four-byte-per-pixel bitmap. GDI is not really alpha-aware, with the exception of a couple of functions like AlphaBlend. However, because GDI resets the alpha of every pixel it writes to 0, you can use that to detect which pixels it touched and manually fix their alpha afterwards.
For more information, read these three articles:
Transparent Graphics with GDI, Part 1
Transparent Graphics with GDI, Part 1 1/2
Transparent Graphics with GDI, Part 2
The first two probably give you enough information to achieve what you want.
These articles take a generic approach to handling alpha with GDI functions: they scan for pixels whose alpha was clobbered and fix it, and they go into more advanced techniques for drawing several things on top of each other with correct alpha. If what you are drawing is simple, for example a FrameRect-style rectangle whose lines are one unit wide and high, you might find it more efficient to edit the pixel bitmap in memory directly. That avoids scanning the whole bitmap for GDI-drawn pixels; since it's a rectangle with one-unit-wide edges, you already know exactly which pixels will be drawn to and can edit them yourself.
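For the DrawFrameControl case the fix-up can be quite small. A sketch of the idea (not the articles' code); hdcMem, rc, pBits, bmpWidth and bmpHeight stand for your own memory DC, rectangle and DIB section data, and the bitmap is assumed to start out fully transparent:
const int pixelCount = bmpWidth * bmpHeight;
BYTE* alpha = static_cast<BYTE*>(pBits) + 3;            // alpha is the 4th byte of each pixel

for (int i = 0; i < pixelCount; ++i)                    // 1. mark every pixel with a sentinel alpha
    alpha[i * 4] = 0x01;

DrawFrameControl(hdcMem, &rc, DFC_BUTTON, DFCS_BUTTONPUSH);   // 2. GDI zeroes the alpha it touches

for (int i = 0; i < pixelCount; ++i)                    // 3. zeroed = drawn by GDI -> opaque,
    alpha[i * 4] = (alpha[i * 4] == 0) ? 0xFF : 0x00;   //    sentinel = untouched -> transparent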

Related

Real time drawing in GDI

I'm currently writing a 3D renderer (for fun and research), so I need a way to draw my framebuffer to a window. Since I'm doing all of my calculations on CPU, the drawing needs to be as fast as possible.
One of my goals is to use no existing graphics library (OpenGL/DirectX) so the drawing to the screen is pure Win32. In my research I've found a couple of ways to create and draw bitmaps and now I'm looking for the best one.
My current implementation uses a bitmap created with CreateDIBSection(), which is drawn to my window DC using BitBlt().
CreateDIBSection() gives me a pointer to the bitmap bytes, so I can manipulate them without copying. Using this method I achieve an update rate of about 260 FPS (without any rendering done).
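Stripped down, the setup looks roughly like this (a sketch with error handling and cleanup omitted; hwnd, width and height stand for my window handle and framebuffer size):
BITMAPINFO bmi = {};
bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
bmi.bmiHeader.biWidth       = width;
bmi.bmiHeader.biHeight      = -height;          // negative height = top-down rows
bmi.bmiHeader.biPlanes      = 1;
bmi.bmiHeader.biBitCount    = 32;
bmi.bmiHeader.biCompression = BI_RGB;

void* pixels = nullptr;
HDC hdcWindow  = GetDC(hwnd);
HDC hdcMem     = CreateCompatibleDC(hdcWindow);
HBITMAP hbm    = CreateDIBSection(hdcWindow, &bmi, DIB_RGB_COLORS, &pixels, nullptr, 0);
HGDIOBJ oldBmp = SelectObject(hdcMem, hbm);

// ... write the frame into 'pixels' (width * height DWORDs, 0x00RRGGBB each) ...

BitBlt(hdcWindow, 0, 0, width, height, hdcMem, 0, 0, SRCCOPY);   // present one frame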
This seems a bit slow, so I'm looking for optimizations.
I've read that if you don't create a bitmap with the same palette as the system palette, some slow color conversions happen.
How can I make sure my DIB bitmap and window are compatible?
Are there ways of drawing a bitmap that are faster than my current implementation?
I've also read something about DrawDibDraw(); can anyone confirm that this is faster?
I've read that if you don't create a bitmap with the same palette as the system palette, some slow color conversions happen.
Very few systems run in a palette mode any more, so it seems unlikely this is an issue for you.
Aside from palettes, some GDI functions also cause a color matching conversion to be applied if the source bitmap and the destination have different gamuts. BitBlt, however, does not do this type of color matching, so you're not paying a price for that.
How can I make sure my DIB bitmap and window are compatible?
You don't. You can use DIBs (which are Device-Independent Bitmaps) or compatible (device-dependent) bitmaps. It's possible that your DIB bitmap matches the current mode of your device. For example, if you're using a 32 bpp DIB, and your display is in that same mode, then no conversion is necessary. If you want a bitmap that's guaranteed to be in the same mode as your device, then you can't use a DIB and all the nice properties it provides for predictable pixel layout and format.
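If you just want to check whether the current display mode already matches your 32 bpp DIB, you can query the screen DC; a minimal sketch:
HDC hdcScreen = GetDC(nullptr);
int screenBpp = GetDeviceCaps(hdcScreen, BITSPIXEL) * GetDeviceCaps(hdcScreen, PLANES);
ReleaseDC(nullptr, hdcScreen);
// screenBpp is typically 32 these days; if it equals your DIB's biBitCount,
// BitBlt should not need a per-pixel format conversion.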
Are there ways of drawing a bitmap that are faster than my current implementation?
The limitation is most likely in getting the data from system memory to graphics adapter memory. To get around that limitation, you need a faster graphics bus, or you need to render directly into graphics memory, which means you'd need to do your computation on the GPU rather than the CPU.
If you're rendering a 1920 x 1080 pixel image at 24 bits per pixel, that's close to 6 MB for your frame buffer. That's an awful lot of data. If you're doing that 260 times per second, that's actually pretty impressive.
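(1920 × 1080 pixels × 3 bytes is about 6.2 million bytes, roughly 5.9 MB, per frame; at 260 frames per second that is on the order of 1.6 GB of pixel data crossing the bus every second.)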
I've also read something about DrawDibDraw(); can anyone confirm that this is faster?
It's conceivable, but the only way to know would be to measure it. And the results might vary from machine to machine because of differences in the graphics adapter (and which bus they use).

How to create memory DC with 24 bits per pixel?

I need it to work with RGB24 data using GDI functions (specifically StretchBlt(), which is pretty fast), and I can't use CreateCompatibleDC() because it can only create a memory DC with the color depth of another DC. Usually it's used with the screen DC (by passing NULL to the function), and the screen usually has a color depth of 32. In addition, I can't rely on that, because if the screen settings are changed my application probably won't work.
So I need some way to create a memory DC with a specific color depth. So far I've found only one way, using the CreateDC() function, but it requires many device-specific parameters and seems somewhat unreliable to me. Moreover, there are too many fields that have to be filled with appropriate values to call CreateDC().
Is there an easier way to create a memory DC with a specific color depth, in particular 24 bpp, without relying on a particular device?
P.S. I need it for some fast graphics. I've tried manually adding an alpha channel to the bitmap so it could be used with a screen-compatible 32 bpp memory DC, and it worked, but it was too slow. And as I said above, I can't rely on screen settings, which can change.
Bits-per-pixel does not really depend on a DC, but on the bitmap selected into it. Create a 24bpp bitmap with CreateDIBSection then select it into a memory DC.
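A minimal sketch of that, with error handling omitted (width and height are whatever size you need):
BITMAPINFO bmi = {};
bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
bmi.bmiHeader.biWidth       = width;
bmi.bmiHeader.biHeight      = -height;        // top-down; use +height for bottom-up
bmi.bmiHeader.biPlanes      = 1;
bmi.bmiHeader.biBitCount    = 24;             // RGB24, independent of the screen mode
bmi.bmiHeader.biCompression = BI_RGB;

void* bits = nullptr;
HBITMAP hbm24 = CreateDIBSection(nullptr, &bmi, DIB_RGB_COLORS, &bits, nullptr, 0);

HDC hdcMem = CreateCompatibleDC(nullptr);     // the depth comes from the selected bitmap
HGDIOBJ oldBmp = SelectObject(hdcMem, hbm24);
// hdcMem now operates on 24 bpp pixels; each row in 'bits' is padded to a multiple of 4 bytes.
// StretchBlt to or from it as needed, then restore oldBmp and clean up.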

Draw on DeviceContext from COLORREF[]

I have a pointer to a COLORREF buffer, something like: COLORREF* buf = new COLORREF[x*y];
A subroutine fills this buffer with color-information. Each COLORREF represents one pixel.
Now I want to draw this buffer to a device context. My current approach works, but is pretty slow (roughly 200 ms per draw, depending on the size of the image):
for (size_t i = 0; i < pixelpos; ++i)
{
    // Get X and Y coordinates from the 1-dimensional buffer.
    size_t y = i / wnd_size.cx;
    size_t x = i % wnd_size.cx;
    ::SetPixelV(hDC, x, y, buf[i]);
}
Is there a way to do this faster; all at once, not one pixel after another?
I am not really familiar with GDI. I've heard about a lot of APIs like CreateDIBitmap(), BitBlt(), HBITMAP, CImage and all that stuff, but I have no idea how to apply them. It all seems pretty complicated...
MFC is also welcome.
Any ideas?
Thanks in advance.
(Background: the subroutine I mentioned above is an OpenCL kernel - the GPU calculates a Mandelbrot image and saves it in the COLORREF buffer.)
EDIT:
Thank you all for your suggestions. The answers (and links) gave me some insight into Windows graphics programming. The performance is now acceptable; semi-realtime scrolling through the Mandelbrot set works. :)
I ended up with the following solution (MFC):
...
CDC dcMemory;
dcMemory.CreateCompatibleDC(pDC);
CBitmap mandelbrotBmp;
mandelbrotBmp.CreateBitmap(clientRect.Width(), clientRect.Height(), 1, 32, buf);
CBitmap* oldBmp = dcMemory.SelectObject(&mandelbrotBmp);
pDC->BitBlt(0, 0, clientRect.Width(), clientRect.Height(), &dcMemory, 0, 0, SRCCOPY);
dcMemory.SelectObject(oldBmp);
mandelbrotBmp.DeleteObject();
So basically CBitmap::CreateBitmap() saved me from using the raw API (which I still do not fully understand). The example in the documentation of CDC::CreateCompatibleDC was also helpful.
My Mandelbrot is now blue - using SetPixelV() it was red. I guess that's because COLORREF stores red in its low byte while a 32-bit bitmap expects blue there, so CBitmap::CreateBitmap() ends up swapping the red and blue channels; not really important.
I might try the OpenGL suggestion because it would have been the much more logical choice and I wanted to try OpenCL under Linux anyway.
Under the circumstances, I'd probably use a DIB section (which you create with CreateDIBSection). A DIB section is a bitmap that allows you to access the contents directly as an array, but still use it with all the usual GDI functions.
I think that'll give you the best performance of anything based on GDI. If you need better, then @Kornel is basically correct -- you'll need to switch to something that has more direct support for hardware acceleration (DirectX or OpenGL -- though IMO, OpenGL is a much better choice for the job than DirectX).
Given that you're currently doing the calculation in OpenCL and depositing the output in a color buffer, OpenGL would be the really obvious choice. In particular, you can have OpenCL deposit the output in an OpenGL texture, then you have OpenGL draw a quad using that texture. Alternatively, since you're just putting the output on screen anyway, you could just do the calculation in an OpenGL fragment shader (or, of course, a DirectX pixel shader), so you wouldn't put the output into memory off-screen just so you can copy the result onto the screen. If memory serves, the Orange book has a Mandelbrot shader as one of its examples.
Yes, sure, that's slow. You are making a round trip through the kernel and video device driver for each individual pixel. You make it fast by drawing to memory first, then updating the screen in one fell swoop. That takes, say, CreateDIBitmap, CreateCompatibleDC() and BitBlt().
This isn't a good time and place for an extensive tutorial on graphics programming. It is well covered by any introductory text on GDI and/or Windows API programming. Everything you'll need to know you can find in Petzold's seminal Programming Windows.
Since you already have an array of pixels, you can directly use BitBlt to transfer it to the window's DC. See this link for a partial example:
http://msdn.microsoft.com/en-us/library/aa928058.aspx
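As a rough sketch of that idea (not the linked sample's code), the COLORREF array can be pushed to the DC in one call with SetDIBitsToDevice; hDC, buf and wnd_size are the names from the question, and note that COLORREF keeps red in its low byte while a 32 bpp DIB keeps blue there, so red and blue appear swapped unless you swap them when filling the buffer:
BITMAPINFO bmi = {};
bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
bmi.bmiHeader.biWidth       = wnd_size.cx;
bmi.bmiHeader.biHeight      = -wnd_size.cy;   // negative: buf[0] is the top-left pixel
bmi.bmiHeader.biPlanes      = 1;
bmi.bmiHeader.biBitCount    = 32;
bmi.bmiHeader.biCompression = BI_RGB;

// Treat the COLORREF array as a 32 bpp top-down DIB and blit it in one call.
SetDIBitsToDevice(hDC, 0, 0, wnd_size.cx, wnd_size.cy,
                  0, 0, 0, wnd_size.cy, buf, &bmi, DIB_RGB_COLORS);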

GDI fails conversion to indexed color with exact palette?

Summary
Using Windows GDI to convert 24-bit color to indexed color, it seems GDI chooses colors which are "close enough" even though there are exact matches in the supplied palette.
Can anyone confirm this as a GDI issue or am I making a mistake somewhere?
Maybe there's a "please check the whole palette for color matches" flag which I've failed to find?
Note: This is not about quantizing. The source is 24-bit but contains 256 or fewer colors so an exact palette is trivial to calculate. The problem is GDI doesn't use the full palette.
Workaround
I've worked around the problem by mapping the colors myself but I'd prefer to use GDI as it should be better optimized. Problem is, it seems to be "fast but wrong."
Detailed description
My source image is 24-bit but uses 256 (or fewer) colors. I generate an exact palette for it and ask GDI to transfer the image into an indexed bitmap using that palette. For some pixels GDI chooses similar, but not exact, colors even though there are exact colors elsewhere in the palette. This ruins smooth gradients.
This problem happens with:
SetDIBitsToDevice
StretchDIBits
BitBlt
StretchBlt
The problem does not happen with:
SetPixel or SetPixelV in a loop (incredibly slow!)
Using my own code to do the mapping
I've tested this on:
Windows 7 (NVidia hardware/drivers)
Windows Vista (ATI hardware/drivers)
Windows 2000 (VMware hardware/drivers)
In every test I get the same results. (Not just the wrong colours but always the same wrong colors.)
I don't think the issue is color management (ICM/ICC profiles/etc.): most of the APIs say they don't use it, I've tried explicitly turning it off on the GDI DC as well as via the V5 bitmap header, and I don't think it would apply within my vanilla Win2k VM.
Test Project
Code for a simple Win32/GDI/VS2008 test project can be found here:
http://www.pretentiousname.com/data/GdiIndexColor.zip
The Test1 function within Win32UI.cpp is the actual test. It has two arrays of RGBQUADs, one the source image and the other the exact palette for it. It verifies that the palette really is exact and then asks GDI to convert the image using the APIs mentioned above, testing the result each time. For each test it'll tell you the first incorrect pixel's before & after colors, or tell you that all pixels are correct if it worked.
Thanks!
Thanks for reading my question! Sorry if it's the result of me doing something really dumb! :-)
I ran into this exact same problem and eventually contacted Microsoft and provided them with a test case. In the test case I provided a gradient image that had 128 colors in a 24-bit DIB; I then converted that to an 8-bit DIB created with a color table containing all 128 colors from the 24-bit image. After conversion, the 8-bit image used only 65 of the 128 colors.
To sum up their response:
This is not a bug; GDI does use a close-enough calculation when down-converting the color depth of an image. This is not really documented anywhere, and the only way to ensure that all of the original colors convert exactly is to manipulate the pixels yourself.
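If you do end up mapping the pixels yourself, the exact-match version is short. A sketch, assuming the source pixels and the palette are both RGBQUAD arrays (a lookup table or hash map would be faster for large images, and 8 bpp row padding is ignored here):
void MapToIndexed(const RGBQUAD* src, int pixelCount,
                  const RGBQUAD* palette, int paletteSize, BYTE* dst)
{
    for (int i = 0; i < pixelCount; ++i)
    {
        BYTE index = 0;
        for (int p = 0; p < paletteSize; ++p)
        {
            if (palette[p].rgbRed   == src[i].rgbRed &&
                palette[p].rgbGreen == src[i].rgbGreen &&
                palette[p].rgbBlue  == src[i].rgbBlue)
            {
                index = static_cast<BYTE>(p);   // exact match found
                break;
            }
        }
        dst[i] = index;                         // one palette index per pixel
    }
}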
Are you using SetDIBColorTable()? This article seems to imply that, when drawing to a DIB, it is not sufficient to call SelectPalette() but that SetDIBColorTable() also needs to be called to set the palette for the DIB:
However, if the application is using a DIB section, you create a logical palette from the DIB colour table as usual and then also pass the DIB colour table to the DIB section with a call to SetDIBColorTable(). Despite what the "Platform SDK" documentation of RealizePalette() appears to imply, RealizePalette() does not adjust the colour table of the DIB section.
The article contains some more information on drawing into palettized DIBs that may be relevant (see the section "Palettes and DIB sections").
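In code the extra step is small; a sketch, where hdcMem8 is the memory DC with the 8 bpp DIB section selected into it, hPal is the logical palette, and palette/paletteSize are the same colour table it was built from:
SelectPalette(hdcMem8, hPal, FALSE);
RealizePalette(hdcMem8);
SetDIBColorTable(hdcMem8, 0, paletteSize, palette);   // set the DIB section's own colour table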
I vaguely remember that you also need to call RealizePalette(hdc) after a palette is selected into a DC. We ditched our palette code so long ago that the code isn't even in our source tree anymore. I see from your code that you already tried this, but I suggest that you might want to play with that some more.
I do remember that the palette code was pretty fragile, and we stopped using it as soon as we could.
Some older AVI files would have 8-bit palettized video with a palette embedded in the file, so playback code for those files would need to load and realize a palette. I remember that realizing the palette didn't do anything unless you were the foreground app, but that SHOULD only apply to screen DCs and not memory DCs.
If you search around for sample source code that can play palettized AVIs, you might find something that shows the magic formula for getting palettes to work.
Sorry I can't be more help.

Draw array of bits (RGB) in Windows

I have an array of raw RGB data.
I would like to know how I can draw these pixels on the screen under Windows.
Right now I use the API function DrawDIBits, but I have to flip my image data upside down first.
I always use SetDIBitsToDevice, but DrawDIBits could be okay as well (I haven't checked).
As for the upside-down nature of the windows blit functions:
There is a workaround: when you pass the BITMAPINFOHEADER or BITMAPINFO structure to the function, just negate the value of the height member (biHeight). This tells GDI to do the blit as if the height were positive, but to interpret the data as being stored in top-down order.
You may even get a nice speed improvement from this "hack".
If you want to shuffle the byte order of the pixels (e.g. turn ARGB into BGRA), you can use the BITMAPV4HEADER structure and tell GDI how your pixel data is organized. That functionality is rarely used, but it has worked since Windows 98, so I'd say it's safe to use these days.
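A sketch combining both suggestions (the field names are real BITMAPV4HEADER members; hdc, pixels, width and height are placeholders, and the masks below describe the common BGRA layout, so adjust them to match how your data is actually organized):
BITMAPV4HEADER hdr = {};
hdr.bV4Size          = sizeof(BITMAPV4HEADER);
hdr.bV4Width         = width;
hdr.bV4Height        = -height;             // negative height: top-down rows, no flipping
hdr.bV4Planes        = 1;
hdr.bV4BitCount      = 32;
hdr.bV4V4Compression = BI_BITFIELDS;        // honor the channel masks below
hdr.bV4RedMask       = 0x00FF0000;
hdr.bV4GreenMask     = 0x0000FF00;
hdr.bV4BlueMask      = 0x000000FF;
hdr.bV4AlphaMask     = 0xFF000000;

SetDIBitsToDevice(hdc, 0, 0, width, height, 0, 0, 0, height,
                  pixels, reinterpret_cast<BITMAPINFO*>(&hdr), DIB_RGB_COLORS);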
If you mean drawing it without reversing the (R,G,B) into (B,G,R), I don't know an automatic way to do that.
If you mean drawing it without padding each line to a multiple of 4 bytes (DWORD alignment), you can do it by drawing each line one at a time. It will be slow, though.
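A sketch of the line-at-a-time idea, assuming src points to tightly packed 24-bit RGB rows (each row is copied into a small DWORD-aligned scratch buffer so GDI gets the alignment it expects):
int srcStride     = width * 3;               // tightly packed source rows
int alignedStride = (srcStride + 3) & ~3;    // GDI wants each DIB row DWORD-aligned
BYTE* row = new BYTE[alignedStride];

BITMAPINFO bmi = {};
bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
bmi.bmiHeader.biWidth       = width;
bmi.bmiHeader.biHeight      = -1;            // a single top-down scanline
bmi.bmiHeader.biPlanes      = 1;
bmi.bmiHeader.biBitCount    = 24;
bmi.bmiHeader.biCompression = BI_RGB;

for (int y = 0; y < height; ++y)
{
    CopyMemory(row, src + y * srcStride, srcStride);
    SetDIBitsToDevice(hdc, 0, y, width, 1, 0, 0, 0, 1, row, &bmi, DIB_RGB_COLORS);
}
delete[] row;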