Understanding Device Contexts - MFC

As a relative newcomer to MFC, I see Device Contexts (DCs) a lot. I vaguely understand that they have something to do with drawing, but the specifics are not well explained anywhere I can find. What does creating a "compatible Device Context" mean, and why is it important? What does SelectObject do, and do I need to make a DC compatible first?

A Device Context is just a place where drawing occurs, so if you have two different DCs, you're drawing in two different places. Kind of like a file handle.
Device Contexts can refer to real estate on screen, or to bitmaps that just reside in memory, and probably other places too; those are just the two I can think of at the moment.
Compatible contexts are ones that have the same underlying pixel organization, by which is meant the number of bits per pixel, bytes per pixel, color organization and so forth. Memory bitmap device contexts can have any organization you want, but your screen contexts are going to be related (eventually) to buffers on your graphics card, which will (depending on mode, etc.) have a very specific pixel organization.
Having compatible contexts means it's efficient to transfer image data between them, because little or no translation of the data is required. At the other extreme, if you have a 256-color palettized 8-bit bitmap and you try to blit it to a screen that has 8 bits each of RGBA per pixel, every last pixel will require significant massaging as it is copied, so copying incompatible bitmaps is very much slower. According to the Win32 SDK documentation, at least BitBlt() and StretchBlt() "convert the source color format to match the destination format", so it can be done.
Investigate CreateCompatibleDC() and CreateCompatibleBitmap() as starting points for how to create drawing contexts that are compatible with already existing ones.
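A minimal Win32-level sketch of that approach (hwnd, width and height are placeholder names; error handling omitted; SelectObject is explained below):

HDC screenDC = GetDC(hwnd);                  // DC for a window on screen
HDC memDC    = CreateCompatibleDC(screenDC); // memory DC with the same pixel format
HBITMAP bmp  = CreateCompatibleBitmap(screenDC, width, height);
HGDIOBJ oldBmp = SelectObject(memDC, bmp);   // draw into bmp through memDC

// ... draw into memDC here ...

// Compatible formats make this copy a fast blit with little or no translation.
BitBlt(screenDC, 0, 0, width, height, memDC, 0, 0, SRCCOPY);

SelectObject(memDC, oldBmp);                 // restore before deleting
DeleteObject(bmp);
DeleteDC(memDC);
ReleaseDC(hwnd, screenDC);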
SelectObject() controls which resources are currently active within the device context. A context has a current pen, brush, font, and bitmap. These make a lot of the other GDI calls simpler by allowing you to specify fewer parameters. For instance, you don't have to specify the font when you use TextOut(), but if you want to change the font, that's where SelectObject() comes in. If you feed SelectObject() a handle to a font, the return value is a handle to the font that was in effect, and subsequent operations use the new font. Behavior is the same for the other kinds of resources: pens, brushes, etc.
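For example, a sketch of switching fonts around TextOut() (hdc is assumed to be a valid device context; the font parameters are arbitrary):

HFONT font = CreateFontW(20, 0, 0, 0, FW_NORMAL, FALSE, FALSE, FALSE,
                         DEFAULT_CHARSET, OUT_DEFAULT_PRECIS,
                         CLIP_DEFAULT_PRECIS, DEFAULT_QUALITY,
                         DEFAULT_PITCH, L"Segoe UI");
HGDIOBJ oldFont = SelectObject(hdc, font);  // returns the previously selected font
TextOutW(hdc, 10, 10, L"Hello", 5);         // drawn with the new font
SelectObject(hdc, oldFont);                 // put the old font back
DeleteObject(font);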

(Old question but this is shown when googling...)
I'm afraid that, for beginners, the selected answer can be a bit misleading.
Please keep in mind that MFC wraps the Win32 API, so we need to see from the Win32 level, to better understand what's going on.
To understand why there is a Device Context, we should understand GDI (Graphics Device Interface).
Then why does GDI exist? For device independence. To achieve this, Microsoft made Graphic Objects (Brush, Pen, ...), each of which wraps and abstracts all the device-dependency issues.
Now we don't have to care about different devices, and that's the whole point of GDI.
So we need to hold Graphic Objects(Brush, Pen, Bitmap,...) in some data structure, and that's the Device Context.
Then what is a SelectObject function?
Literally, it enables a DC to "select" a Graphic Object. That is, we use SelectObject to change a graphic object handle stored in the DC to another graphic object handle that we want to use.
Then what is a compatible device context?
A compatible device context (also called a memory device context) uses memory rather than a device.
From MSDN:
To enable applications to place output in memory rather than sending it to an actual device, use a special device context for bitmap operations called a memory device context. A memory DC enables the system to treat a portion of memory as a virtual device. It is an array of bits in memory that an application can use temporarily to store the color data for bitmaps created on a normal drawing surface. Because the bitmap is compatible with the device, a memory DC is also sometimes referred to as a compatible device context.
The memory DC stores bitmap images for a particular device. An application can create a memory DC by calling the CreateCompatibleDC function.
A compatible DC can be used, for example, to reduce flickering: we compose the image in a memory bitmap and show it on screen once, instead of drawing each change directly to the screen (double buffering).
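As a rough sketch of that idea in MFC terms (the class name and drawing are assumptions, not from the original answer; error handling omitted):

void CMyView::OnPaint()
{
    CPaintDC dc(this);                        // screen DC for this window
    CRect rc;
    GetClientRect(&rc);

    CDC memDC;                                // compatible (memory) DC
    memDC.CreateCompatibleDC(&dc);
    CBitmap bmp;
    bmp.CreateCompatibleBitmap(&dc, rc.Width(), rc.Height());
    CBitmap* oldBmp = memDC.SelectObject(&bmp);

    memDC.FillSolidRect(&rc, RGB(255, 255, 255));
    // ... all drawing goes into memDC; nothing appears on screen yet ...

    // One blit shows the finished frame, so the user never sees partial drawing.
    dc.BitBlt(0, 0, rc.Width(), rc.Height(), &memDC, 0, 0, SRCCOPY);
    memDC.SelectObject(oldBmp);
}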
The following MSDN docs would be helpful to newbies (including me):
Device Contexts, from the MFC viewpoint.
Device Contexts, from the Win32 viewpoint, and the section that follows it.

Related

Dealing with the catch-22 of object lifetimes in Vulkan's device, surface, and swapchain in C++?

Background:
In order to even display to the screen you need to enable a "KHR" (Khronos Group) extension for presentation surfaces.
A surface, as far as I understand, is an abstraction, returned by your windowing software, of the window/place where images are displayed.
In Vulkan you have a VkSurfaceKHR (returned by your windowing software, e.g. GLFW), which has certain properties.
These properties are needed in order to know if a Device is compatible with it. In other words, before a VkDevice is created (the actual logical view of the GPU which you can actually use to submit commands to), it needs to know about the Surface if you are going to use it, specifically in order to create a device with presentation queues that support that surface with the properties it has.
Once the device is created, you can create the swapchain, which is basically a series of buffers/attachments you actually use to render to.
Swapchains, however, have a 1:1 relationship with surfaces: there can only ever be a single swapchain per surface at most.
Problem:
This is where I start running into issues. In my code-base, I codify this relationship in a member variable. A surface has a swapchain, which guarantees that you as the programmer can't accidentally create multiple swapchains per surface if you use my wrapper.
But, if we use this abstraction the following happens:
my::Surface surface = window.create_surface(...); // VkSurface wrapper
auto queue_family = physical_device.find_queue_family_that_matches(surface, ...);
auto queue_create_list = {{queue_family, priority}, ...};
my::Device device = physical_device.create_device(..., queue_create_list, ...);
my::SwapchainBuilder swapchain_builder(device);
swapchain_builder.builder_pattern_x(...).builder_pattern_x(...)...;
surface.create_swapchain(swapchain_builder);
...
// render loop
...
// end of program
return 0;
// ERROR! device no longer exists to destroy swapchain!
Because the surface is created before the device, and because the swapchain is a member of the surface, on destruction the device is destroyed before the swapchain.
The "solution" I came up with in the mean time was this:
my::Device device; //device default constructible, but becomes a VK_NULL_HANDLE underneath
my::Surface surface = ...;
...
device = physical_device.create_device(...,queue_create_list,...);
...
surface.create_swapchain(swapchain_builder);
And this certainly works. The surface is destroyed before the device is, and thus so is the swapchain. But it leaves a bad taste in my mouth.
The whole reason I made the swapchain a member was to eliminate bugs caused by multiple swapchains being created, by eliminating the option for the bug to exist in the first place, and to remove the need for the user to think about the Vulkan spec by encoding that requirement into my wrapper itself.
But now the user has to remember to default-initialize the device first... or they will get an esoteric error (not as good as the one I show here) unless they use validation layers.
Question:
Is there some way to encode this object relationship at compile time without runtime declaration-order issues? Is there maybe a better way to codify a 1:1 relationship in this scenario, such that the surface object could exist on its own and RAII destruction order would handle this?
Swapchains, however, have a 1:1 relationship with surfaces: there can only ever be a single swapchain per surface at most.
That is not true. From the standard:
A native window cannot be associated with more than one non-retired swapchain at a time.
You can create multiple swapchains for a surface. However, when you create a new one, you have to provide the old one, and the old one becomes "retired". Images you have previously acquired from the retired swapchain can still be presented, but you cannot acquire more images from it.
This moves nicely into the next point: the user needs to be able to recreate the swapchain for a surface.
Swapchains can become invalid, perhaps due to user rescaling of a window or other things. When this happens, the user needs to recreate them. Whether you retire the old one or not, you're going to have to call the function to create one.
So if you want your surface class to store a swapchain, your API needs a way for the user to create a swapchain.
In short, your goal is wrong; users need the function you're trying to get rid of.
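For reference, a hedged sketch of what recreation looks like at the raw Vulkan level (variable names such as surface, imageCount, format, newExtent and presentMode are assumptions; several required create-info fields are omitted for brevity):

VkSwapchainCreateInfoKHR info{};
info.sType            = VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR;
info.surface          = surface;         // the same VkSurfaceKHR as before
info.minImageCount    = imageCount;
info.imageFormat      = format;
info.imageExtent      = newExtent;       // e.g. the size after a window resize
info.imageArrayLayers = 1;
info.imageUsage       = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
info.presentMode      = presentMode;
// (imageColorSpace, imageSharingMode, preTransform, compositeAlpha, clipped
//  also need to be filled in; omitted here.)
info.oldSwapchain     = oldSwapchain;    // retires the old one; VK_NULL_HANDLE on first creation

VkSwapchainKHR newSwapchain;
vkCreateSwapchainKHR(device, &info, nullptr, &newSwapchain);
vkDestroySwapchainKHR(device, oldSwapchain, nullptr); // once its images are no longer in use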

How are pixels drawn at the lowest level

I can use SetPixel (GDI) to set any pixel on the screen to a colour.
So how would I reproduce SetPixel at the lowest assembly level? What actually happens that triggers the instructions that say: OK, send a byte to position x in the framebuffer?
SetPixel most probably just calculates the address of the given pixel with a formula like:
pixel = frame_start + (y * frame_width + x) * bytes_per_pixel
and then simply does *pixel = COLOR.
You can actually use CreateDIBSection to create your own buffer and associate it with a device context; then you can modify pixels at the low level using the formula above. This is useful if you have your own graphics library like AGG.
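A sketch of that technique (assuming a 32 bpp top-down DIB; width, height, x and y are placeholders; error handling omitted):

BITMAPINFO bmi{};
bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
bmi.bmiHeader.biWidth       = width;
bmi.bmiHeader.biHeight      = -height;   // negative height = top-down rows
bmi.bmiHeader.biPlanes      = 1;
bmi.bmiHeader.biBitCount    = 32;
bmi.bmiHeader.biCompression = BI_RGB;

void* bits = nullptr;
HBITMAP dib = CreateDIBSection(nullptr, &bmi, DIB_RGB_COLORS, &bits, nullptr, 0);

// Set the pixel at (x, y) directly, using the address formula above.
DWORD* pixels = static_cast<DWORD*>(bits);
pixels[y * width + x] = 0x00FF0000;      // 0x00RRGGBB: pure red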
When learning about GDI I like to look into the WINE source code; here you can see how complicated it actually is (dibdrv_SetPixel):
http://fossies.org/dox/wine-1.6.1/gdi32_2dibdrv_2graphics_8c_source.html
It must also take into account clipping regions and different pixel sizes, and probably other features. It is also possible that some drivers accelerate this in hardware, but I have not heard of it.
If you want to recreate SetPixel you need to know how your graphics hardware works. Most hardware manufacturers follow at least the VESA standard. This standard specifies that you can set the display mode using interrupt 0x10.
Once the display mode is set the memory region displayed is defined in the standard and you can simply write directly to display memory.
Advanced graphics hardware deviates from the standard (because it only covers the basics). So the above does not work for advanced features. You'll have to resort to the gpu documentation.
The "how" is always depends on "what", what I mean is that for different setups there are different methods, different systems different methods, what is common is that they are usually not allowing you to do it directly i.e. write to a memory address that will be displayed.
Some devices with a dedicated setup may allow you to do that (like some consoles do, as far as I know), but even there you will have to do some locking or other utility work to make it work as it should.
Since in modern PCs graphics accelerators are fused into the video cards (one counterexample is the Voodoo 1, which needed a separate video card in order to operate, since it was just a 3D accelerator), the GPU usually holds the framebuffer it draws from in its own memory, making it inaccessible from the outside.
So generally you say: here is a memory address, "download" the data into your own GPU memory and show it on screen. This is where desktop composition comes in. Since video cards suffer from this transfer all the time, it is in fact faster to send the commands required to draw and let the GPU do the drawing. Aero is just a visual style, but as far as I know the desktop compositor works regardless of Aero, making the drawing GPU-dependent.
So technically, low-level functions such as SetPixel have been software-only since Windows 7, because of the things mentioned above: you simply can't access the memory directly. What I think probably happens is that for every HDC there is a bitmap, and when you use SetPixel you simply set a pixel in that bitmap, which is later sent to the GPU for display.
In the case of DOS or other old tech, it is probably just emulated in the same way it is done for GDI.
So in light of these points:
So how would I reproduce SetPixel at the lowest assembly level?
It is probably just a copy to a memory location, but Windows manages the window surfaces and its framebuffer such that you never get direct access. One way to emulate what it does is to create a bitmap, get its memory pointer, set the pixel manually, then tell Windows to show this bitmap on screen.
What actually happens that triggers the instructions that say: OK, send a byte to position x in the framebuffer?
Like I said before, what is done at the moment you make this call really depends on the environment. The code that needs to be executed comes from different places; some is provided by Microsoft, some by the GPU's manufacturer, and all together they produce that pixel you see on your screen.
To set a pixel in the framebuffer using a video mode with 32-bit color, we need the address of the pixel and the color of the pixel.
With the address and the color, we can simply use a move instruction to write the color to the framebuffer.
Sample using the EDI register as a 32-bit address register (the default segment register is DS) to address the framebuffer with the move instruction.
x86 intel syntax:
mov edi, Framebuffer ; load the address(upper left corner) into the EDI-Register
mov DWORD [edi], Color ; write the color to the address of DS:EDI
The first instruction loads the EDI register with the address of the framebuffer, and the second instruction writes the color to the framebuffer.
Hint for calculating the address of a pixel inside the framebuffer:
some video modes use a larger scanline (stride) with more bytes than the horizontal resolution needs, with a part lying outside of the visible view.
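In C-like terms, that stride-aware address calculation might look like this (names are hypothetical; assumes a 32-bit mode where pitch is the scanline length in bytes and may exceed width * 4):

#include <cstdint>

void set_pixel(std::uint8_t* framebuffer, int pitch, int x, int y, std::uint32_t color)
{
    std::uint8_t* p = framebuffer + y * pitch + x * 4; // 4 bytes per pixel
    *reinterpret_cast<std::uint32_t*>(p) = color;      // e.g. 0x00RRGGBB
}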

Real-time drawing in GDI

I'm currently writing a 3D renderer (for fun and research), so I need a way to draw my framebuffer to a window. Since I'm doing all of my calculations on CPU, the drawing needs to be as fast as possible.
One of my goals is to use no existing graphics library (OpenGL/DirectX) so the drawing to the screen is pure Win32. In my research I've found a couple of ways to create and draw bitmaps and now I'm looking for the best one.
My current implementation uses a bitmap created with CreateDIBSection(), which is drawn to my window DC using BitBlt().
CreateDIBSection() gives me a pointer to my bitmap bytes so I can manipulate it without copying. Using this method I achieve an update rate of about 260 FPS (without any rendering done).
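The per-frame path described above might look roughly like this (a sketch, not the asker's actual code; memDC is assumed to have the DIB section selected into it):

void Present(HDC windowDC, HDC memDC, int width, int height)
{
    // GdiFlush() ensures any batched GDI drawing on the DIB section has
    // completed before the CPU touches the bits directly.
    GdiFlush();
    // ... the software renderer writes the frame into the DIB bits here ...
    BitBlt(windowDC, 0, 0, width, height, memDC, 0, 0, SRCCOPY);
}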
This seems a bit slow, so I'm looking for optimizations.
I've read that if you don't create a bitmap with the same palette as the system palette, some slow color conversions are done.
How can I make sure my DIB bitmap and window are compatible?
Are there methods of drawing a bitmap which are faster than my current implementation?
I've also read something about DrawDibDraw(); can anyone confirm that this is faster?
I've read that if you don't create a bitmap with the same palette as the system palette, some slow color conversions are done.
Very few systems run in a palette mode any more, so it seems unlikely this is an issue for you.
Aside from palettes, some GDI functions also cause a color matching conversion to be applied if the source bitmap and the destination have different gamuts. BitBlt, however, does not do this type of color matching, so you're not paying a price for that.
How can I make sure my DIB bitmap and window are compatible?
You don't. You can use DIBs (which are Device-Independent Bitmaps) or compatible (device-dependent) bitmaps. It's possible that your DIB bitmap matches the current mode of your device. For example, if you're using a 32 bpp DIB, and your display is in that same mode, then no conversion is necessary. If you want a bitmap that's guaranteed to be in the same mode as your device, then you can't use a DIB and all the nice properties it provides for predictable pixel layout and format.
Are there methods of drawing a bitmap which are faster than my current implementation?
The limitation is most likely in getting the data from system memory to graphics adapter memory. To get around that limitation, you need a faster graphics bus, or you need to render directly into graphic memory, which means you'd need to do your computation on the GPU rather than the CPU.
If you're rendering a 1920 x 1080 pixel image at 24 bits per pixel, that's close to 6 MB for your frame buffer (1920 × 1080 × 3 bytes ≈ 5.9 MB). That's an awful lot of data. If you're doing that 260 times per second, that's roughly 1.5 GB/s across the bus, which is actually pretty impressive.
I've also read something about DrawDibDraw(); can anyone confirm that this is faster?
It's conceivable, but the only way to know would be to measure it. And the results might vary from machine to machine because of differences in the graphics adapter (and which bus they use).

Is it possible to control pixels on the screen just from plain C or plain C++ without any opengl / directx hassle?

Well, I want to know... maybe others do too.
Is it possible to control each pixel separately on a screen by programming, especially in C or C++?
Do you need special control over the drivers for the current screen? Are there operating systems which allow you to change pixels (for example draw a message/overlay on top of everything)?
Or does Windows maybe support this in its WinAPI?
Edit:
I am asking this question because I want to make my computer warn me when I'm gaming and my processor gets too hot. I mainly use Windows, but I have a dual-boot Ubuntu install.
The lower you go, the more hassle you'll run into.
If you want raw pixel manipulation you might check out http://www.libsdl.org/ which helps you mitigate the hassle of creating surfaces/windows and that kind of stuff.
Linux has a few means to get you even lower if you want (i.e. without "windows" or X Windows or anything of the sort, just the raw screen); look into the Linux framebuffer if you're interested in that.
Delving even lower (such as doing things with your own OS), the BIOS will let you go into certain video modes, this is what OS installers tend to use (at least they used to, some of the fancier ones don't anymore). This isn't the fastest way of doing things, but can get you into the realm of showing pixels in a few assembly instructions.
And of course if you wanted to do your own OS and take advantage of the video card (bypass the BIOS), you're then talking about writing video drivers and such, which is obviously a substantial amount of work :)
Re overlay messages on top of the screen and that sort of thing: Windows does support that, so I'm sure you can do it with the WinAPI, although there are likely libraries that would make it easier. I do know you don't need to delve too deep to do that sort of thing, though.
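To make the SDL route mentioned above concrete, here is a minimal sketch with SDL2 (assumes a 32 bpp window surface; not production code):

#include <SDL.h>

int main(int, char**)
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window* win = SDL_CreateWindow("pixels", SDL_WINDOWPOS_CENTERED,
                                       SDL_WINDOWPOS_CENTERED, 640, 480, 0);
    SDL_Surface* surf = SDL_GetWindowSurface(win);

    SDL_LockSurface(surf);
    Uint32* px = static_cast<Uint32*>(surf->pixels);
    px[100 * (surf->pitch / 4) + 100] =
        SDL_MapRGB(surf->format, 255, 0, 0);    // one red pixel at (100, 100)
    SDL_UnlockSurface(surf);

    SDL_UpdateWindowSurface(win);               // push the surface to the screen
    SDL_Delay(3000);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}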
Let's look at it one bit at a time:
Is it possible to control each pixel separately on a screen by programming, especially in C or C++?
Possibly. It really depends on the graphics architecture, and in many modern systems, the actual screen surface (that is "the bunch of pixels appearing on the screen") is not directly under software control - at least not from "usermode" (that is, from an application that you or I can write - you need to write driver code, and you need to co-operate sufficiently with the existing graphics driver).
It is generally accepted that drawing the data into an off-screen buffer and using a BitBlt [BitBlockTransfer] function to copy the content onto the screen is the preferred way to do this sort of thing.
So, in reality, you probably can't manipulate each pixel ON the screen - but you may be able to appear as if you do.
Do you need special control over the drivers for the current screen?
Assuming you could get direct access to the screen memory, your code will certainly have to cooperate with the driver - otherwise, who's to say that what you want to appear on the screen doesn't get overwritten by something else [e.g. you want full-screen access, and the clock updater draws the time on screen once a minute on top of what you draw, etc.].
You may be able to set the driver into a mode where you have a "hole" that allows you to access the screen memory as a big "framebuffer". I don't think there's an easy way to do this in Windows. I don't remember one from back in 2003-2005 when I wrote graphics drivers for a living.
Are there operating systems which allow you to change pixels (for example draw a message/overlay on top of everything)?
It is absolutely possible to create an overlay layer in the hardware of modern graphics cards. That's generally how video playback works - the video is played into a piece of framebuffer memory that is overlaid on top of the other graphics. You need help from the driver, and this is definitely available in the Windows API, via DirectX as far as I remember.
Or does Windows maybe support this in its WinAPI?
Probably, but to answer precisely, we need to understand better what you are looking to do.
Edit: In your particular use-case, I would have thought that making sounds or ejecting the CD/DVD drive may be a more suitable option. It can be hard to overlay something on top of the graphics drawn by a game, because games often try to use as much as possible of the available graphics resources, and you will probably have a hard time finding a way that works even for the most simple use-cases - never mind something that works for multiple different categories of games using different drawing/engine/graphics libraries. I'm also not entirely sure it's anything to worry overly about, since modern CPUs are pretty tolerant of overheating: the CPU will just slow down, possibly grind to a halt, but it will not break - even if you take the heatsink off, it won't go wrong [no, I don't suggest you try this!]
Every platform supports efficient raw pixel block transfers, aka BitBlt(), so if you really want to go to framebuffer level you can allocate a bitmap, use pointers to set its contents directly, and then with one line of code efficiently flip this memory chunk into the video RAM buffer. Of course it is not as efficient as working with PCI framebuffers directly, but on the other hand this approach (BitBlt) was fast enough even in Win95 days to port Wolfenstein 3D to a Pentium CPU WITHOUT the use of WinG.
HOWEVER, care must be taken while creating this bitmap to match its format (i.e. RGB 16 bits, or 32 bits, etc.) with the actual mode the device is in, otherwise the graphics subsystem will do a lengthy recoding/dithering which will completely kill your speed.
So it depends on your goals: if you want a 3D game, your performance will suck with this approach. If you just want to render some shapes and don't need more than 10-15 FPS, this will work without diving into any device-driver levels.
Here are a few tips for overlaying in Windows:
hdc = GetDC(0); // returns an hdc for the whole screen and is VERY fast
You can take the HDC for the screen and do a BitBlt(hdc, ..., SRCCOPY) to flip blocks of raster efficiently. There are also pre-defined Windows handles for the desktop, but I don't recall the exact mechanics; if you are on multiple monitors you can get an HDC for each desktop. Look at GetDesktopWindow, GetDC and the like...
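A sketch of that overlay tip (srcDC, width and height are assumed to exist; srcDC would be a memory DC holding whatever you want to show):

HDC screen = GetDC(nullptr);   // DC for the whole screen, as above
BitBlt(screen, 100, 100, width, height, srcDC, 0, 0, SRCCOPY);
ReleaseDC(nullptr, screen);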

How to create memory DC with 24 bits per pixel?

I need it to work with RGB24 data using GDI functions (specifically StretchBlt(), which is pretty fast), and I can't use CreateCompatibleDC() since it can create a memory DC only with the color depth of another DC. Usually it's used with the screen DC (by passing a NULL pointer to the function), and usually the screen has a color depth of 32. In addition I can't rely on that, because if the screen settings are changed my application probably won't work.
So I need some way to create a memory DC with a specific color depth. So far I've found only one way, using the CreateDC() function, but it requires many device-specific parameters and seems somewhat unreliable to me. Moreover, there are too many fields to fill with appropriate values to call CreateDC().
Is there some easier way to create a specific memory DC that doesn't rely on a particular device - or at least to create a memory DC with 24 bpp?
P.S. I need it for some fast graphics. I've tried manually adding an alpha channel to the bitmap to use it with a screen-compatible 32 bpp memory DC, and it worked, but it was too slow. And as I said above, I can't rely on screen settings, which can be changed.
Bits-per-pixel does not really depend on a DC, but on the bitmap selected into it. Create a 24bpp bitmap with CreateDIBSection then select it into a memory DC.
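A sketch of that answer (width and height are placeholders; error handling omitted):

BITMAPINFO bmi{};
bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
bmi.bmiHeader.biWidth       = width;
bmi.bmiHeader.biHeight      = -height;      // negative = top-down rows
bmi.bmiHeader.biPlanes      = 1;
bmi.bmiHeader.biBitCount    = 24;           // RGB24, independent of the screen mode
bmi.bmiHeader.biCompression = BI_RGB;

void* bits = nullptr;
HBITMAP dib = CreateDIBSection(nullptr, &bmi, DIB_RGB_COLORS, &bits, nullptr, 0);
HDC memDC   = CreateCompatibleDC(nullptr);  // memory DC (initially holds a 1x1 mono bitmap)
HGDIOBJ old = SelectObject(memDC, dib);     // now GDI renders into the 24 bpp bits
// ... StretchBlt() and other GDI calls against memDC operate on RGB24 data ...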