In a book about MFC I found "The GetDC function retrieves a handle to a display device context (DC) for the client area of a specified window or for the entire screen. You can use the returned handle in subsequent GDI functions to draw in the DC."
My question is: what is meant by "handle"? I come from a C# background, so it would be helpful if anyone could relate "handle" to a C# concept, if there is one.
Thanks in advance.
Put simply: A handle is like a pointer, except there is an intermediate layer that turns the handle into an actual memory location (which may move around).
This is useful when you have managed or virtual memory.
Related
I am wondering about the difference between OnDraw() and OnPaint() in MFC.
After searching the Internet for a while, I found a useful article. In summary,
WM_PAINT will trigger OnPaint(), which calls OnDraw() and passes a CDC*:
void CView::OnPaint()
{
    // standard paint routine
    CPaintDC dc(this);
    OnPrepareDC(&dc);
    OnDraw(&dc);
}
Another article mentions that when printing a document, OnPrint() also calls OnDraw(), passing a printer DC. Therefore, by overriding OnDraw() you get both screen painting and printing in one function, which is convenient.
I tried putting my drawing statements in either OnDraw() or OnPaint(); both work. OnDraw() is a little easier because it already receives the pDC pointer.
Device contexts are an ancient abstraction. They were described as early as 1982 (in Foley and van Dam's Fundamentals of Interactive Computer Graphics, the precursor to Computer Graphics: Principles and Practice) and they seem to confuse people to this day.
The primary purpose of a device context is to abstract away the peculiarities of render devices (such as displays, printers, in-memory bitmaps, etc.) and provide a coherent interface. Code that renders into a device context generally does not need to know which device ultimately consumes the render commands.
The documentation entry titled Drawing in a View goes on to explain how the system is intended to work: In short, all painting should be performed in an OnDraw override that receives a device context. The system-provided OnPaint implementation then constructs a CPaintDC and calls OnDraw.
Up to this point this seems to be just an overly complex way to render the contents of a window. Things start to make sense when you implement, say, printing support. Now all you have to do is set up a printing device context and call OnDraw. Nothing in your OnDraw implementation needs to change.
Let's say you want to add a little extra graphical info to a Windows control. For example, you want to add drag/drop functionality to a listview (using the procedure discussed here), but with horizontal lines signaling the drop/insertion points as the user drags an item. (The control belongs to your own application.)
Is there a safe way to subclass the control and draw onto it directly? In my limited experimentation in trying to do this, I encountered some problems. First, it wasn't clear whether I should call BeginPaint and EndPaint during the WM_PAINT message, since the control itself would be calling those functions once the message was passed along to the default procedure. I also inevitably encountered flickering, since some areas were being painted twice.
I thought a safer way would be just to create a transparent overlay window and draw on that, since that would avoid conflicts with the default paint procedure, but I thought I'd ask before going down that road. Thanks for any advice.
Single Document, the simplest MFC app.
The idea is that the default CDC pointer pDC is modified (colored) in some way in the OnDraw() function. When the user clicks on a number, I want it displayed using the colors from pDC.
If I use the default OnKeyDown handler for WM_KEYDOWN, I don't get a pointer to my edited pDC.
My question is: how do I access the edited pDC?
I am sure that there is a simple solution that I am missing, please help.
I am not 100% sure I understand the question correctly, but let me try....
The usual and recommended way in MFC to do what I think you want is to handle all drawing in OnDraw() only.
So, in the OnKeyDown() handler, you would store the pressed key in a member variable (or perhaps even push it to a vector or list of keys to be drawn) and then call Invalidate(FALSE). That causes Windows to generate a WM_PAINT message for your window, which ends up being handled in OnDraw(), where you can draw the correct things based on your current member-variable values.
It is also possible to create a CClientDC outside of OnDraw() and draw on that (a CPaintDC may only be constructed while handling WM_PAINT). But as said, in MFC applications all drawing is usually kept in one place, because Windows may ask your app to redraw at any time, and it does that with a WM_PAINT message.
I've just started learning DX, so I know almost nothing about it, although I do know OpenGL (to a certain extent). I'm following a tutorial (http://www.rastertek.com/tutdx11.html) and I have a working window rendering just a white background (clear).
Now - how do I actually switch from windowed mode to fullscreen and vice versa? I know there are many tutorials, some even provide a code for doing that but since I'm a newbie that's not really helpful. Why? Because every code sample is different and trying to find a pattern in all of them is apparently too difficult for me.
So I don't ask for code - instead I would like you to tell me what things I need to release/recreate/change to toggle correctly (and all of them). I know I need to change the display settings, I know I have to change something about the swap chain and release/recreate some buffers - but not really sure which exactly.
You can use SetFullscreenState on your swap chain:
swapChain->SetFullscreenState(TRUE, NULL);
(see IDXGISwapChain::SetFullscreenState on MSDN)
The main thing you have to do is release all references to the swap chain's buffers, call ResizeBuffers, then re-create everything.
Since Win32 sends the WM_SIZE message upon window initialization, it's entirely possible to put the following:
Clear the previous window-size-specific context
If the swap chain already exists, resize it, otherwise create one
Obtain the back buffer for this window, which will be the final 3D render target.
Create a view interface on the render target to use when binding.
Allocate a 2-D surface as the depth/stencil buffer and create a DepthStencil view on this surface to use on bind.
Create a viewport descriptor of the full window size.
Set the current viewport using the descriptor.
inside a static function (unless WinMain has an object from which to call), and call that function when the WM_SIZE message is triggered.
You can check out how the DirectXTK does it here:
https://directxtk.codeplex.com/
Is it necessary to delete the HDC and HRC when using the Win32 API for OpenGL? I would think the Win32 API would destroy them upon the window's closing.
Clarification: the HRC is an HGLRC object.
Is it "necessary"? If your process terminates itself after closing the window, no. Windows will clean up outstanding handles of these types.
Should you do it? Absolutely. You should always clean up objects you use in your application. Dropping things on the floor for the OS to clean up is not good practice. If for no other reason than the fact that you might want to create a new window after closing the old one. In which case, you have this garbage HGLRC lying around taking up precious resources.
You have to worry about other things than just the window being closed. For example, read http://blogs.msdn.com/b/oldnewthing/archive/2013/03/06/10399678.aspx, which says that the owner of an object cannot delete it while it's selected into a DC.
Release (don't delete) the HDC as soon as you can. I haven't done OpenGL, so I don't entirely know what the HGLRC is used for, but I suspect it's handled similarly. Getting handles is cheap; holding on to them can be problematic.