I've just started learning DX, so I know almost nothing about it, although I do know OpenGL (to a certain extent). I'm following a tutorial (http://www.rastertek.com/tutdx11.html) and I have a working window rendering just a white background (clear).
Now, how do I actually switch from windowed mode to fullscreen and vice versa? I know there are many tutorials, and some even provide code for doing that, but since I'm a newbie that's not really helpful. Why? Because every code sample is different, and trying to find a pattern in all of them is apparently too difficult for me.
So I'm not asking for code; instead I would like you to tell me what things I need to release/recreate/change to toggle correctly (and all of them). I know I need to change the display settings, and I know I have to change something about the swap chain and release/recreate some buffers, but I'm not really sure which exactly.
You can use SetFullscreenState on your swap chain:
swapChain->SetFullscreenState(TRUE, NULL);
See the MSDN documentation for IDXGISwapChain::SetFullscreenState.
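For a simple toggle (e.g. bound to Alt+Enter), a minimal sketch, assuming swapChain is your IDXGISwapChain pointer:

BOOL fullscreen = FALSE;
swapChain->GetFullscreenState(&fullscreen, NULL); // query the current state
swapChain->SetFullscreenState(!fullscreen, NULL); // flip it; DXGI switches the display mode
// DXGI sends WM_SIZE after the switch, so the resize handling below still applies.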
The main thing you have to do is release all outstanding references to the swap chain's buffers, call IDXGISwapChain::ResizeBuffers, then re-create everything.
Since Win32 sends the WM_SIZE message upon window initialization, it's entirely possible to put:
Clear the previous window-size-specific context.
If the swap chain already exists, resize it; otherwise create one.
Obtain the back buffer for this window, which will be the final 3D render target.
Create a view interface on the render target to use on bind.
Allocate a 2-D surface as the depth/stencil buffer and create a depth-stencil view on this surface to use on bind.
Create a viewport descriptor of the full window size.
Set the current viewport using the descriptor.
inside a static function (unless WinMain has an object from which to call), and call that function when the WM_SIZE message is triggered.
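Put together, a sketch of such a handler might look like the following, assuming device, context, swapChain, renderTargetView and depthStencilView are your own globals or members (names are illustrative; error checking omitted):

#include <d3d11.h>

// Assumed to live elsewhere in your program (names are illustrative):
extern ID3D11Device* device;
extern ID3D11DeviceContext* context;
extern IDXGISwapChain* swapChain;
extern ID3D11RenderTargetView* renderTargetView;
extern ID3D11DepthStencilView* depthStencilView;

void OnResize(UINT width, UINT height)
{
    // Clear the previous window-size-specific context.
    ID3D11RenderTargetView* nullViews[] = { nullptr };
    context->OMSetRenderTargets(1, nullViews, nullptr);
    if (renderTargetView) { renderTargetView->Release(); renderTargetView = nullptr; }
    if (depthStencilView) { depthStencilView->Release(); depthStencilView = nullptr; }

    // The swap chain already exists here, so resize it (the creation path is omitted).
    swapChain->ResizeBuffers(0, width, height, DXGI_FORMAT_UNKNOWN, 0);

    // Obtain the back buffer and create a render-target view on it.
    ID3D11Texture2D* backBuffer = nullptr;
    swapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), (void**)&backBuffer);
    device->CreateRenderTargetView(backBuffer, nullptr, &renderTargetView);
    backBuffer->Release();

    // Allocate a 2-D surface as the depth/stencil buffer and create a view on it.
    D3D11_TEXTURE2D_DESC depthDesc = {};
    depthDesc.Width = width;
    depthDesc.Height = height;
    depthDesc.MipLevels = 1;
    depthDesc.ArraySize = 1;
    depthDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
    depthDesc.SampleDesc.Count = 1;
    depthDesc.Usage = D3D11_USAGE_DEFAULT;
    depthDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL;
    ID3D11Texture2D* depthBuffer = nullptr;
    device->CreateTexture2D(&depthDesc, nullptr, &depthBuffer);
    device->CreateDepthStencilView(depthBuffer, nullptr, &depthStencilView);
    depthBuffer->Release();

    // Describe and set a viewport covering the full window.
    D3D11_VIEWPORT vp = {};
    vp.Width = (FLOAT)width;
    vp.Height = (FLOAT)height;
    vp.MaxDepth = 1.0f;
    context->RSSetViewports(1, &vp);
}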
You can check out how the DirectXTK does it here:
https://directxtk.codeplex.com/
I want my application to instantly draw all the data to a display. In Windows there is the SwapBuffers() function to do that kind of thing, where you do all the drawing to a second virtual window and then swap that virtual window with the existing one. OpenGL provides glXSwapBuffers() to do roughly the same. However, I don't want to use it. Therefore, I am curious: what are the ways to implement this functionality in pure Xlib?
In X11 there are Pixmap resources, which are considered Drawable (like Window).
You can draw to a Pixmap using as many steps as necessary, and finally use XCopyArea() to send the resulting drawing to a Window.
Note that a Pixmap lives on the server side, like a Window, so the final copy operation is local to the server.
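A minimal sketch of that pattern (window size and drawing calls are just illustrative):

#include <X11/Xlib.h>

int main()
{
    Display* dpy = XOpenDisplay(NULL);
    int screen = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, screen), 0, 0, 400, 300, 1,
                                     BlackPixel(dpy, screen), WhitePixel(dpy, screen));
    XSelectInput(dpy, win, ExposureMask | KeyPressMask);
    XMapWindow(dpy, win);

    GC gc = DefaultGC(dpy, screen);
    // The Pixmap is a server-side Drawable, just like the Window.
    Pixmap buffer = XCreatePixmap(dpy, win, 400, 300, DefaultDepth(dpy, screen));

    XEvent ev;
    for (;;) {
        XNextEvent(dpy, &ev);
        if (ev.type == Expose) {
            // Draw into the Pixmap in as many steps as necessary...
            XSetForeground(dpy, gc, WhitePixel(dpy, screen));
            XFillRectangle(dpy, buffer, gc, 0, 0, 400, 300);
            XSetForeground(dpy, gc, BlackPixel(dpy, screen));
            XDrawLine(dpy, buffer, gc, 0, 0, 400, 300);
            // ...then send the finished drawing to the Window in one server-local copy.
            XCopyArea(dpy, buffer, win, gc, 0, 0, 400, 300, 0, 0);
        }
        if (ev.type == KeyPress) break;
    }
    XFreePixmap(dpy, buffer);
    XCloseDisplay(dpy);
    return 0;
}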
There is the X Double Buffer Extension: https://www.x.org/releases/X11R7.6/doc/libXext/dbelib.html
The Double Buffer Extension (DBE) provides a standard way to utilize double-buffering within the framework of the X Window System.
Protocol: https://www.x.org/releases/X11R7.7/doc/xextproto/dbe.html
Never seen it used in practice. Let me know if you pull it off.
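For what it's worth, a sketch pieced together from the documentation above (untested, per the caveat):

#include <X11/Xlib.h>
#include <X11/extensions/Xdbe.h>

// Assuming `dpy` is an open Display and `win` a mapped Window:
int major, minor;
if (XdbeQueryExtension(dpy, &major, &minor)) {
    // Allocate a back buffer for the window; all drawing targets `back`.
    XdbeBackBuffer back = XdbeAllocateBackBufferName(dpy, win, XdbeBackground);

    // ... draw a frame into `back` with ordinary Xlib calls ...

    // Swap: the back buffer becomes visible in one atomic operation.
    XdbeSwapInfo info;
    info.swap_window = win;
    info.swap_action = XdbeBackground;
    XdbeSwapBuffers(dpy, &info, 1);
}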
I don't understand the functionality of glutSwapBuffers properly. In my code, if I don't call glutSwapBuffers, then no background color appears in the window and it remains transparent, capturing whatever is behind it. I thought the background color is actually assigned by glClearColor, so how come without calling glutSwapBuffers I don't get any background color?
This question comes up over and over; I think what you are describing is actually what happens when you draw exclusively into the front buffer in a compositing window manager.
Without swapping buffers, the compositor does not draw your window correctly, so the window appears transparent. Double buffering is required by compositing window managers, and it seems it is also required by many hybrid integrated/discrete GPU implementations (e.g. nVIDIA Optimus). In short, there is no real reason to use single-buffered rendering on a desktop platform these days.
To be certain, does your situation resemble this? This screenshot shows what happens when a window that only uses single-buffering is moved in a compositing window manager.
If so, a more thorough explanation can be found here.
OpenGL is usually configured to use double buffering.
You first draw to one buffer, then swap it with the second and present it on the screen.
Without calling glutSwapBuffers you will not see anything, and that is the correct behavior.
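A minimal double-buffered GLUT program illustrating this: glClearColor only sets the clear color, glClear fills the back buffer with it, and nothing becomes visible until glutSwapBuffers presents that buffer.

#include <GL/glut.h>

void display()
{
    glClear(GL_COLOR_BUFFER_BIT); // fills the BACK buffer with the clear color
    // ... draw the scene into the back buffer here ...
    glutSwapBuffers();            // present: only now does anything become visible
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB); // request a double-buffered window
    glutInitWindowSize(400, 300);
    glutCreateWindow("double buffering");
    glClearColor(0.2f, 0.3f, 0.4f, 1.0f); // the "background" color used by glClear
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}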
See this article about double (and more) buffering in OpenGL.
So I've created this program to render to a window using DirectX. It has an init() method which requires an HWND so that it can initialize DirectX for the window, then a render() method which is called inside an infinite loop, and finally a cleanup() method to release DirectX objects and devices. However, DirectX will render a couple of frames of a rotating cube (maybe enough for a half-rotation), and then the screen will go black. Then the cube will come back on, but it is still rotating during the black period. This continues in an on...off...on...off sort of pattern. Is DirectX maybe not rendering correctly to the window? What's wrong?
From my experience, there is a good chance you need to explicitly handle the background-erase event for your window (see this page); otherwise, the default implementation will kick in and get in your way (sometimes erasing what DirectX just rendered, as others suggested).
But as everybody mentioned already: this is only a little theory, and we would need some code to check this further :-).
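To test that theory, a sketch of the relevant part of a window procedure; returning nonzero from WM_ERASEBKGND tells Windows the background is already handled:

#include <windows.h>

LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_ERASEBKGND:
        // Claim the erase: a nonzero return means "background handled",
        // so the default implementation won't clear what DirectX just drew.
        return 1;
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProc(hWnd, msg, wParam, lParam);
}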
I want to create my own tiny windowless GUI system, for which I am using GDI+. I cannot post the code here because it got huge (C++), but below are the main steps I am following:
Create a bitmap of size equal to the application window.
For all mouse and keyboard events, update the custom control states (e.g. whether the mouse is currently held over a particular control, etc.).
For the WM_PAINT event, paint the background to the offscreen bitmap, then paint all the updated controls on top of it, and finally copy the entire offscreen image to the front buffer via a Graphics::DrawImage(..) call (see the sketch after this list).
For WM_SIZE/WM_SIZING, delete the previous offscreen bitmap and create another one with the new window size.
There are also some checks to prevent repeated drawing of controls, i.e. a control is drawn only when it needs repainting; in other words, only when its state has changed.
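As a sketch of the WM_PAINT step described above, assuming GDI+ is already started up and backBuffer is the Gdiplus::Bitmap* created in the sizing step (names illustrative, error handling omitted):

case WM_PAINT:
{
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hWnd, &ps);
    Gdiplus::Graphics screen(hdc);

    // Paint the background, then the updated controls, into the offscreen bitmap...
    Gdiplus::Graphics offscreen(backBuffer);
    offscreen.Clear(Gdiplus::Color(255, 240, 240, 240));
    // ... paint the controls that need repainting here ...

    // ...and copy the entire offscreen image to the front buffer in one call.
    screen.DrawImage(backBuffer, 0, 0);
    EndPaint(hWnd, &ps);
    return 0;
}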
The system is working fine, with one exception: when the window is being resized, a sort of tearing effect appears. What I mean by tearing effect I shall try to explain:
On the sizing edge/border there is a flickering gap as I drag the border. It is as if my DrawImage() call returns immediately, and while one swap operation is half done another image drawing starts up.
Now you may think that this is a common artifact that happens in many other applications, since resizing the backbuffer is not always as fast as resizing the window; but I noticed that in other applications, although there is a lag between the window size and the client area size as the window grows, nothing flickers near the edge (it's usually just a white background that shows up as thin uniform strips along the border).
Also, the dynamic controls which move with the window resize act jerky during sizing.
At first it seemed to me that using a constant fullscreen-size offscreen surface could minimize the artifact, but when I tried it the results were not that satisfactory. I also tried to call Sleep() during sizing so that the flip is done completely before another flip starts, but strangely even that didn't work for me!
I have heard that GDI on Vista is not hardware accelerated; could that be the problem?
Also, I wonder how frameworks such as Qt render a windowless GUI so smoothly; even if you resize a complex Qt GUI window very fast, negligibly few artifacts appear. As far as I know Qt can use OpenGL for GUI rendering, but that is a second option.
If I use DirectX then real-time resizing is even harder; OpenGL, on the other hand, seems to handle resizing without any problem, but I would lose all the 2D drawing capability of GDI+.
If any of you have done anything like this before, please guide me. Also, if you have any pointers that I should consider for custom user interface design, please provide the links.
Thanks!
I always wished to design interfaces like Windows Media Player 11, but can someone tell me whether there is a straightforward solution for a C++ programmer (I want to know how, rather than use some existing framework)? Subclassing, owner drawing, custom drawing: nothing seems to give you that level of control, and I don't know a way to draw a semitransparent control with the common controls, so I think this question deserves some special attention. Thanks again.
Could it be a WM_ERASEBKGND message that's causing it?
See this question: GDI+ double buffering in C++.
Also, if you need fast response from your GUI, I would advise against GDI+.
PIXELFORMATDESCRIPTOR pfd = { /* otherwise fine for a window with 32-bit color */ };
HDC hDC = CreateDC(TEXT("Display"),NULL,NULL,NULL); // always OK
int ipf = ChoosePixelFormat(hDC,&pfd); // always OK
SetPixelFormat(hDC,ipf,&pfd); // always OK
HGLRC hRC = wglCreateContext(hDC); // always OK
wglMakeCurrent(hDC,hRC); // ! read error: 0xbaadf039 (debug, obviously)
But the following works with the same hRC:
wglMakeCurrent(hSomeWindowDC,hRC);
The above is part of an OpenGL 3.0+ initialization system for Windows.
I am trying to avoid creating a dummy window for the sake of aesthetics.
I have never used CreateDC before, so perhaps I've missed something.
edit: hSomeWindowDC would point to a window DC with an appropriate pixel format.
More info:
I wish to create a window-independent OpenGL rendering context.
Due to the answer selected, it seems I need to use a dummy window (not really a big deal, just a handle to pass around all the same).
Why I would want to do this: Since it is possible to use the same rendering context for multiple windows with the same pixel format in the same thread, it is possible to create a rendering context (really, just a container for GL-related objects) that is independent of a particular window. In this way, one can create a clean separation between the graphics and UI initializations.
The purpose of the context initially isn't for rendering (although I believe one could render into textures using it). If one wanted to change the contents of a buffer within a particular context, the desired context object itself would just need to be made current (since it's carrying the dummy window around with it, this is possible).
Rendering into a window is simple: as implied by the above, the window's DC only needs to have the same pixel format. Simply make the rendering context and the window's DC current, and render.
Please note that, at the time of this writing, this idea is still in testing. I will update this post should this change (or if I can remember :P).
I've got a dormant brain cell from reading Petzold 15 years ago that just sprang back to life. The DC from CreateDC() is restricted. Good for getting info about the display device, measurement, that sort of stuff. Not good to use as a regular painting DC. You almost certainly need GetDC().
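For reference, a sketch of that route: a hidden window created only so its DC (from GetDC(), not CreateDC()) can carry the pixel format. The class name and structure are illustrative; error checks omitted.

#include <windows.h>

HGLRC CreateWindowIndependentContext()
{
    WNDCLASS wc = {};
    wc.lpfnWndProc = DefWindowProc;
    wc.hInstance = GetModuleHandle(NULL);
    wc.lpszClassName = TEXT("GLDummyWindow"); // illustrative name
    RegisterClass(&wc);

    // A hidden window whose DC exists only to carry the pixel format.
    HWND hWnd = CreateWindow(wc.lpszClassName, NULL, WS_OVERLAPPED,
                             0, 0, 0, 0, NULL, NULL, wc.hInstance, NULL);
    HDC hDC = GetDC(hWnd); // a real window DC, unlike the one from CreateDC()

    PIXELFORMATDESCRIPTOR pfd = {};
    pfd.nSize = sizeof(pfd);
    pfd.nVersion = 1;
    pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    SetPixelFormat(hDC, ChoosePixelFormat(hDC, &pfd), &pfd);

    HGLRC hRC = wglCreateContext(hDC);
    wglMakeCurrent(hDC, hRC); // succeeds here, unlike with the CreateDC() DC
    return hRC;
}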
My current OpenGL 3+ initialization routine doesn't require a dummy window. You can simply attempt to make a second RC and make it current using the DC of the real window. Take a look at the OpenGL wiki Tutorial: OpenGL 3.1 The First Triangle (C++/Win)