Recreating swapchain vs using dynamic state on resize - c++

I have a Vulkan application that uses GLFW as its window manager.
Upon the window resize event, Vulkan needs to update its drawable area. I have seen two ways that this is possible. One is to recreate the swapchain alongside all of the other objects that are tied to it, and the other is to use dynamic state for the viewport so that recreation is not needed.
What is the difference between these two and when should I prefer one over the other?

If the window is resized to a smaller size, the display engine may not force you to change your swapchain image sizes. It may inform you of this through the VK_SUBOPTIMAL_KHR return code (though it may not give you even that if the performance of presenting is not affected). However, if the window is resized to be larger, the display engine may return VK_ERROR_OUT_OF_DATE_KHR. That is not something you can ignore. Nor is it something a display engine can promise to never give you.
This means your code must be able to do swapchain rebuilding. Since you have to make allowances for this regardless, the only question is whether you do it whenever the window is resized or just when the display engine forces you to.
I would say that, if the display engine doesn't make you rebuild the swapchain, then it's probably faster not to. Using dynamic state isn't particularly slower than baked pipeline state, and it's not like you're going to be changing it mid-frame. Indeed, you shouldn't rebuild all of your pipelines just because the swapchain was resized, so you should be using dynamic state for the viewport anyway.
In short: you ought to do both.
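For concreteness, here is a minimal sketch of that combined approach (the names device, swapchain, swapchainExtent, imageAvailableSemaphore, commandBuffer and the recreateSwapchain() helper are assumptions, not from the question):

uint32_t imageIndex;
VkResult result = vkAcquireNextImageKHR(device, swapchain, UINT64_MAX,
                                        imageAvailableSemaphore,
                                        VK_NULL_HANDLE, &imageIndex);
if (result == VK_ERROR_OUT_OF_DATE_KHR) {
    recreateSwapchain();  // mandatory: the old swapchain is unusable
    return;               // skip this frame and try again
}
// VK_SUBOPTIMAL_KHR is a success code: you may still present, and can
// rebuild at your leisure (e.g. at the end of the frame).

// With VK_DYNAMIC_STATE_VIEWPORT / VK_DYNAMIC_STATE_SCISSOR enabled in the
// pipeline, no pipeline rebuild is needed; just record the current extent.
VkViewport viewport{0.0f, 0.0f,
                    static_cast<float>(swapchainExtent.width),
                    static_cast<float>(swapchainExtent.height),
                    0.0f, 1.0f};
vkCmdSetViewport(commandBuffer, 0, 1, &viewport);
VkRect2D scissor{{0, 0}, swapchainExtent};
vkCmdSetScissor(commandBuffer, 0, 1, &scissor);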

Related

Drawing to Multiple Windows Using Vulkan

I am trying to create an application that can dynamically create additional windows. Each window will be drawn to using Vulkan, and I know that this means each window will have to contain its own swapchain resources (image views, framebuffers, etc.) and graphics pipeline (as it references the swapchain's extents). I am wondering if each window will also have to remember its own present queue family, or if I can assume that the same queue family can be used for each window. Specifically, to find a present queue family you need to find out whether a particular queue family supports surface presentation using:
VkResult vkGetPhysicalDeviceSurfaceSupportKHR(
    VkPhysicalDevice physicalDevice,
    uint32_t queueFamilyIndex,
    VkSurfaceKHR surface,
    VkBool32* pSupported);
This requires a VkSurfaceKHR and thus the HWND and HINSTANCE of a particular window, but I'm not sure if the present queue family is likely to change between different windows created by the same operating system or if I can safely use the same one for each window.
Similarly, while reviewing swap chain recreation within the vulkan-tutorial, I read that VkSurfaceFormatKHR::format rarely changes during window resize, and that this is the only reason the render pass needs to be reconstructed during the window resizing operation. How safe would it be to skip the render pass recreation in this step during window resizing, and how well could the same render pass be used for different windows?
If each window uses a similar graphics pipeline, more specifically uses the same synchronization objects, would it be typical to have each window append to the same command buffer and use a single vkQueueSubmit? I only ask because you need to create a command buffer for each frame in flight, and thus the number of command buffers required would be numWindows * numFramesInFlight which feels excessive, but I'm not sure if it would be any different from a single large command buffer (appended to by each window) per frame in flight.
As an aside, resources for drawing to multiple windows using Vulkan seem to be fairly scarce, so if anyone knows of any good ones I would greatly appreciate it.
On Windows you could largely assume everything can render to everything. But you should check that this is so anyway. vkGetPhysicalDeviceWin32PresentationSupportKHR does not need a surface, and gives a strong hint that the device/queue is presentation-capable, and not e.g. a compute accelerator or something.
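For example, a hedged sketch of probing every queue family this way (physicalDevice is assumed to exist already):

#define VK_USE_PLATFORM_WIN32_KHR
#include <windows.h>
#include <vulkan/vulkan.h>

bool anyFamilyCanPresent(VkPhysicalDevice physicalDevice) {
    uint32_t familyCount = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(physicalDevice, &familyCount, nullptr);
    for (uint32_t i = 0; i < familyCount; ++i) {
        // True means family i can present on Win32 in general; per-surface
        // support should still be confirmed with
        // vkGetPhysicalDeviceSurfaceSupportKHR once each window's
        // VkSurfaceKHR exists.
        if (vkGetPhysicalDeviceWin32PresentationSupportKHR(physicalDevice, i))
            return true;
    }
    return false;
}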
Similarly while reviewing swap chain recreation within the vulkan-tutorial I read that VkSurfaceFormatKHR::format rarely changes during window resize and that is the only reason the render pass needs to be reconstructed
It is not supposed to ever change for the lifetime of the physical device and surface. If it could change, that would be a TOCTOU problem.
If each window uses a similar graphics pipeline, more specifically uses the same synchronization objects, would it be typical to have each window append to the same command buffer and use a single vkQueueSubmit?
Why not? I mean, there is nothing "typical" about this. But if it can be done, then it probably should be done. Otherwise, if the windows are unrelated, then they should probably each have their own private logical device (or even instance).
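If the windows are related enough to share a device, note that a single vkQueuePresentKHR can also cover all of them at once: VkPresentInfoKHR accepts an array of swapchains. A sketch (the std::vector containers swapchains and imageIndices, plus renderFinishedSemaphore and presentQueue, are assumed names, one swapchain/index per window):

VkPresentInfoKHR presentInfo{VK_STRUCTURE_TYPE_PRESENT_INFO_KHR};
presentInfo.waitSemaphoreCount = 1;
presentInfo.pWaitSemaphores    = &renderFinishedSemaphore;
presentInfo.swapchainCount     = static_cast<uint32_t>(swapchains.size());
presentInfo.pSwapchains        = swapchains.data();   // one per window
presentInfo.pImageIndices      = imageIndices.data(); // acquired earlier
std::vector<VkResult> results(swapchains.size());
presentInfo.pResults           = results.data();      // per-swapchain result
vkQueuePresentKHR(presentQueue, &presentInfo);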
As an aside, resources for drawing to multiple windows using Vulkan seem to be fairly scarce
Lots of resources for Vulkan are "scarce". That is because Vulkan is like Lego. Once you know what the individual pieces do, you can build whatever you want without needing outside help. Drawing to multiple windows is no different from drawing to a single window, except you do it multiple times.

Should I use a singleton?

I am planning on making a 3D scene editor app using C++ and OpenGL. I have to keep track of the currently loaded project and the different scenes it contains, as well as the user preferences and other things. The best solution I can think of is to wrap them in a context class which will be a singleton. Is there a better way of doing this?
Probably not a good idea. One reason is technical: the actual OpenGL context's lifetime is limited and far shorter than the app's, usually tied to the surface you're outputting to. You would need to initialize the context after your visualizing window is ready and de-initialize it before the window is gone. Trying to use it after the window is gone may end in undefined behaviour, depending on the platform. In some cases you might need several contexts.
Another reason is that it doesn't look like a proper separation of responsibilities. User settings aren't part of the context, and some may affect only a single render pass (out of several). You would likely have Preferences, a Renderer which acts as an interface to the context manager, Geometries and Textures (or materials) kept separate, and a Scene Manager as well (think of the scene tree in Blender or DAZ Studio: each item in the scene can have separate user settings regarding how to visualize it).
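As an illustration, a hedged sketch of that explicit, non-singleton wiring (all names are hypothetical):

struct Preferences  { /* user settings, loaded at startup */ };
struct SceneManager { /* scene tree; per-item display settings */ };

class Renderer {             // owns the GL context: created after the
public:                      // window exists, destroyed before it is gone
    explicit Renderer(/* window handle */) { /* create context here */ }
    ~Renderer()                            { /* destroy context here */ }
    void drawScene(const SceneManager&, const Preferences&) { /* ... */ }
};

int main() {
    Preferences prefs;
    SceneManager scenes;
    // ... create the window ...
    {
        Renderer renderer(/* window */);  // context lives only in this scope
        // main loop: renderer.drawScene(scenes, prefs);
    } // renderer (and the context) gone before the window is destroyed
}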

What is necessary to toggle fullscreen in DirectX 11?

I've just started learning DX, so I know almost nothing about it, although I do know OpenGL (to a certain extent). I'm following a tutorial (http://www.rastertek.com/tutdx11.html) and I have a working window rendering just a white background (clear).
Now - how do I actually switch from windowed mode to fullscreen and vice versa? I know there are many tutorials, and some even provide code for doing that, but since I'm a newbie that's not really helpful. Why? Because every code sample is different, and trying to find a pattern in all of them is apparently too difficult for me.
So I don't ask for code - instead I would like you to tell me what things I need to release/recreate/change to toggle correctly (and all of them). I know I need to change the display settings, and I know I have to change something about the swap chain and release/recreate some buffers - but I'm not really sure which exactly.
You can use IDXGISwapChain::SetFullscreenState on your swap chain (see the MSDN documentation):
swapChain->SetFullscreenState(TRUE, NULL);
The main thing you have to do is release all references to the swap chain's buffers, call IDXGISwapChain::ResizeBuffers, then re-create everything.
Since Win32 sends the WM_SIZE message upon window initialization, it's entirely possible to:
Clear the previous window-size-specific context
If the swap chain already exists, resize it, otherwise create one
Obtain the backbuffer for this window, which will be the final 3D render target.
Create a view interface on the render target to use on bind.
Allocate a 2D surface as the depth/stencil buffer, and create a depth-stencil view on this surface to use on bind.
Create a viewport descriptor of the full window size.
Set the current viewport using the descriptor.
inside a static function (unless WinMain has an object from which to call it), and call that function whenever the WM_SIZE message is triggered.
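A hedged sketch of such a handler follows (device, context, swapChain, renderTargetView and depthStencilView are assumed globals, not from the tutorial):

void OnResize(UINT width, UINT height)
{
    // 1. Release everything that references the swap chain's buffers.
    context->OMSetRenderTargets(0, nullptr, nullptr);
    if (renderTargetView) { renderTargetView->Release(); renderTargetView = nullptr; }
    if (depthStencilView) { depthStencilView->Release(); depthStencilView = nullptr; }

    // 2. Resize the existing swap chain (0 / DXGI_FORMAT_UNKNOWN keep the
    //    current buffer count and format).
    swapChain->ResizeBuffers(0, width, height, DXGI_FORMAT_UNKNOWN, 0);

    // 3. Re-obtain the backbuffer and rebuild the render target view.
    ID3D11Texture2D* backBuffer = nullptr;
    swapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), (void**)&backBuffer);
    device->CreateRenderTargetView(backBuffer, nullptr, &renderTargetView);
    backBuffer->Release();

    // 4. Recreate the depth/stencil surface and its view at the new size.
    D3D11_TEXTURE2D_DESC dsDesc = {};
    dsDesc.Width = width;   dsDesc.Height = height;
    dsDesc.MipLevels = 1;   dsDesc.ArraySize = 1;
    dsDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
    dsDesc.SampleDesc.Count = 1;
    dsDesc.Usage = D3D11_USAGE_DEFAULT;
    dsDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL;
    ID3D11Texture2D* depthTex = nullptr;
    device->CreateTexture2D(&dsDesc, nullptr, &depthTex);
    device->CreateDepthStencilView(depthTex, nullptr, &depthStencilView);
    depthTex->Release();

    // 5. Bind, then set a viewport covering the full window.
    context->OMSetRenderTargets(1, &renderTargetView, depthStencilView);
    D3D11_VIEWPORT vp = {0.0f, 0.0f, (float)width, (float)height, 0.0f, 1.0f};
    context->RSSetViewports(1, &vp);
}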
You can check out how the DirectXTK does it here:
https://directxtk.codeplex.com/

How to efficiently render double buffered window without any tearing effect?

I want to create my own tiny windowless GUI system, and for that I am using GDI+. I cannot post code here because it got huge (C++), but below are the main steps I am following:
Create a bitmap of size equal to the application window.
For all mouse and keyboard events, update the custom control states (e.g. whether the mouse is currently held over a particular control, etc.).
For the WM_PAINT event, paint the background to the offscreen bitmap, then paint all the updated controls on top of it, and finally copy the entire offscreen image to the front buffer via a Graphics::DrawImage(..) call.
For WM_SIZE/WM_SIZING, delete the previous offscreen bitmap and create another one at the new window size.
There are also some checks to prevent repeated drawing of controls, i.e. a control is drawn only when it needs repainting, in other words only when its state has changed.
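Condensed, the WM_PAINT path described above looks roughly like this (a sketch; backBuffer, the persistent window-sized Gdiplus::Bitmap*, is an assumed name):

case WM_PAINT: {
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hwnd, &ps);
    Gdiplus::Graphics screen(hdc);
    Gdiplus::Graphics offscreen(backBuffer);
    offscreen.Clear(Gdiplus::Color(255, 240, 240, 240)); // background
    // ... repaint only the controls whose state changed ...
    screen.DrawImage(backBuffer, 0, 0); // single blit to the front buffer
    EndPaint(hwnd, &ps);
    return 0;
}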
The system is working fine, with one exception: while the window is being resized, a sort of tearing effect appears. What I mean by tearing effect I shall try to explain...
On the sizing edge/border there is a flickering gap as I drag the border. It is as if my DrawImage() call returns immediately, and while one swap operation is half done another image drawing starts up.
Now you may think this is a common artifact that happens in many other applications, given that resizing the backbuffer is not always as fast as resizing the window, but I noticed that in other applications, although there is a lag between the window size and the client area size as the window grows, nothing flickers near the edge (it's usually just white background that shows up as thin uniform strips along the border).
Also, the dynamic controls which move with the window resize act jerky during sizing.
At first it seemed to me that using a constant fullscreen-sized offscreen surface could minimize the artifact, but when I tried it the results were not that satisfactory. I also tried to call Sleep() during sizing so that one flip completes before another starts, but strangely even that didn't work for me!
I have heard that GDI on Vista is not hardware accelerated; could that be the problem?
Also, I wonder how frameworks such as Qt render windowless GUIs so smoothly; even if you resize a complex Qt GUI window very fast, negligibly few artifacts appear. As far as I know, Qt can use OpenGL for GUI rendering, but that is a secondary option.
If I use DirectX then real-time resizing is even harder; OpenGL, on the other hand, seems to handle resizing without any problem, but I would lose all the 2D drawing capability of GDI+.
If any of you have done anything like this before, please guide me. Also, if you have any pointers that I should consider for custom user interface design, please provide the links.
Thanks!
I always wished to design interfaces like Windows Media Player 11, but can someone tell me whether there is a straightforward solution for a C++ programmer (I want to know how, rather than use some existing framework)? Subclassing, owner drawing, custom drawing: nothing seems to give you that level of control, and I don't know of a way to draw a semitransparent control with the common controls, so I think this question deserves some special attention. Thanks again.
Could it be a WM_ERASEBKGND message that's causing it?
see this question: GDI+ double buffering in C++
Also, if you need fast response from your GUI I would advise against GDI+.
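If WM_ERASEBKGND turns out to be the cause, the usual fix is to claim the erase yourself in the window procedure; a minimal sketch:

case WM_ERASEBKGND:
    // Report the background as already erased. WM_PAINT repaints the whole
    // client area from the offscreen bitmap, so the default white erase
    // (a likely source of the flicker) is redundant.
    return 1;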

OpenGL programming within a program already using OpenGL?

Given that I am programming within another program already using OpenGL (let's say theoretically that I have no idea how they are using it).
Can I just set up my context however I want and push/pop it from the stack and all should work as expected, or MUST I know how my (calling) program is using OpenGL in order to avoid accidentally screwing things up?
Also, how would I go about "initializing" OpenGL when it might have already been initialized?
Thanks for any advice you might have!
To answer your first question, you probably could get away with calling glPushAttrib(GL_ALL_ATTRIB_BITS) before calling the program's functions, and calling glPopAttrib() afterward. Note that this can be slow if you do it frequently (say, every frame in a loop).
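A minimal sketch of that bracketing (legacy/compatibility-profile GL; note that attribute push/pop may not cover newer state such as shader program or buffer object bindings):

glPushAttrib(GL_ALL_ATTRIB_BITS);              // save server-side state
glPushClientAttrib(GL_CLIENT_ALL_ATTRIB_BITS); // save client-side state
// ... your own drawing and state changes here ...
glPopClientAttrib();
glPopAttrib();                                 // host state restored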
I'm not sure what you mean by initializing OpenGL. Do you mean setting up the rendering context? Setting viewport and projection? Disabling or enabling certain features? You can always check if certain states are enabled (using glGet functions), but the rest depends on how your program and the other program work.
I am told that theoretically OpenGL should be able to work within any context as long as you restore the prior context afterwards.
What exactly do you want to do with OpenGL? Are you trying to draw into the same window? Do you want to draw into your own window? If you want to just draw into your own window, and the fact that the existing app uses OpenGL is just a coincidence, then you can probably get away with just creating a completely new context and ignoring the existing stuff. The only gotcha is that you will need to make the existing context current whenever you finish what you are doing, and make your context current whenever you want to do something with it. The existing code won't be expecting to need to make its own context current, and may wind up randomly drawing into your context if you aren't careful.
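On Win32, saving and restoring the current context might look like this (myDC and myContext are your own, hypothetical, handles; glXGetCurrentContext etc. are the analogues elsewhere):

HDC   prevDC  = wglGetCurrentDC();
HGLRC prevCtx = wglGetCurrentContext();

wglMakeCurrent(myDC, myContext);  // switch to your context
// ... your OpenGL work here ...

wglMakeCurrent(prevDC, prevCtx);  // hand the host program its context back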
If you want to draw into the existing window, then your use almost certainly requires some sort of idea about what the existing stuff is doing.