OpenGL Viewer Control & Rendering Contexts

I have built an OpenGL viewer control that can simply be dropped onto a Windows Forms form (at design time) and assigned an OpenGL display list (at run time).
The viewer control handles navigation, display options (e.g. background color), etc. It is also responsible for creating and destroying rendering and device contexts as necessary.
Obviously, each viewer control instance has its own device context, the 'window' where the image is drawn.
Questions:
How should each viewer control instance manage rendering contexts?
Should each instance have its own context or share a global rendering context?
I'm particularly concerned with how this affects WGL font creation (wglUseFontBitmaps and wglUseFontOutlines), which requires a rendering context (whatever the current context is) and a device context.
Do I need to create each WGL font for each rendering/device context combination?
Perhaps my approach is flawed.

I would go with the context-per-control approach. You do have to remember that extensions are context-based, so you'll have to bind them for each context you create (I use glew_MX to handle this).
Also, you can share display lists across contexts (as long as they are on the same GPU), and the WGL font functions create display lists, so you should be fine.
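To make that concrete, here is a minimal sketch (Win32/WGL, error handling omitted; hdcA, hdcB, and someHFONT are illustrative handles, not from the original question) of building a bitmap font once and calling it from a second, sharing context:

HGLRC rcA = wglCreateContext(hdcA);
HGLRC rcB = wglCreateContext(hdcB);
wglShareLists(rcA, rcB);                    // rcB now sees rcA's display lists

wglMakeCurrent(hdcA, rcA);
GLuint fontBase = glGenLists(96);           // reserve list IDs for ASCII 32..127
SelectObject(hdcA, someHFONT);              // the GDI font to rasterize
wglUseFontBitmaps(hdcA, 32, 96, fontBase);  // build the glyph display lists once

// Later, from the other viewer control:
wglMakeCurrent(hdcB, rcB);
glListBase(fontBase);
glCallLists(5, GL_UNSIGNED_BYTE, "Hello");  // same lists, no rebuild needed

Note that wglShareLists must be called while the second context still contains no display lists, or it will fail.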

Related

Drawing to Multiple Windows Using Vulkan

I am trying to create an application that can dynamically create additional windows. Each window will be drawn to using Vulkan, and I know this means that each window will have to contain its own swap chain resources (image views, framebuffers, etc.) and graphics pipeline (since it references the swap chain's extent). I am wondering if each window will also have to remember its own present queue family, or if I can assume that the same queue family can be used for each window. Specifically, to find a present queue family you need to find out if a particular queue family supports surface presentation using:
VkResult vkGetPhysicalDeviceSurfaceSupportKHR(
    VkPhysicalDevice physicalDevice,
    uint32_t queueFamilyIndex,
    VkSurfaceKHR surface,
    VkBool32* pSupported);
This requires a VkSurfaceKHR, and thus the HWND and HINSTANCE of a particular window, but I'm not sure if the present queue family is likely to change between different windows created by the same application, or if I can safely use the same one for each window.
Similarly, while reviewing swap chain recreation within the vulkan-tutorial, I read that VkSurfaceFormatKHR::format rarely changes during window resize and that this is the only reason the render pass needs to be reconstructed during the resizing operation. How safe would it be to skip the render pass recreation in this step during window resizing, and how well could the same render pass be used for different windows?
If each window uses a similar graphics pipeline, and more specifically the same synchronization objects, would it be typical to have each window append to the same command buffer and use a single vkQueueSubmit? I only ask because you need to create a command buffer for each frame in flight, so the number of command buffers required would be numWindows * numFramesInFlight, which feels excessive, but I'm not sure if it would be any different from a single large command buffer (appended to by each window) per frame in flight.
As an aside, the resources for drawing to multiple windows using Vulkan seem to be fairly scarce, so if anyone knows of any good ones I would greatly appreciate it.
On Windows you can largely assume everything can render to everything, but you should check that anyway. vkGetPhysicalDeviceWin32PresentationSupportKHR does not need a surface, and gives a strong hint that the device/queue family is presentation-capable, and not e.g. a compute-only accelerator or something.
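A sketch of that check, assuming VK_USE_PLATFORM_WIN32_KHR and an already-chosen physicalDevice (the function name findPresentQueueFamily is illustrative):

uint32_t findPresentQueueFamily(VkPhysicalDevice physicalDevice)
{
    uint32_t count = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(physicalDevice, &count, NULL);
    for (uint32_t i = 0; i < count; ++i) {
        // Surface-independent check: can this family present to the
        // Windows compositor at all?
        if (vkGetPhysicalDeviceWin32PresentationSupportKHR(physicalDevice, i))
            return i;
    }
    return UINT32_MAX; // no present-capable family found
}

The spec still requires confirming each actual surface with vkGetPhysicalDeviceSurfaceSupportKHR before presenting to it, but the result is unlikely to differ between windows of the same application on the same GPU.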
Similarly while reviewing swap chain recreation within the vulkan-tutorial I read that VkSurfaceFormatKHR::format rarely changes during window resize and that is the only reason the render pass needs to be reconstructed
It is not supposed to ever change for the lifetime of the physical device and surface. If it could change, that would be a TOCTOU (time-of-check to time-of-use) problem.
If each window uses a similar graphics pipeline, more specifically uses the same synchronization objects, would it be typical to have each window append to the same command buffer and use a single vkQueueSubmit?
Why not? Mind you, there is nothing "typical" about this, but if it can be done, then it should probably be done. Otherwise, if the windows are unrelated, then they should probably each have their own private logical device (or even instance).
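For illustration, a hedged sketch of that consolidated path; frame, acquireSemaphores, waitStages, swapchains, and imageIndices are assumed per-frame bookkeeping, and error handling is omitted:

VkSubmitInfo submit = { VK_STRUCTURE_TYPE_SUBMIT_INFO };
submit.waitSemaphoreCount   = numWindows;           // one acquire semaphore per window
submit.pWaitSemaphores      = acquireSemaphores;
submit.pWaitDstStageMask    = waitStages;           // e.g. COLOR_ATTACHMENT_OUTPUT per entry
submit.commandBufferCount   = 1;
submit.pCommandBuffers      = &frame.commandBuffer; // records every window's render pass
submit.signalSemaphoreCount = 1;
submit.pSignalSemaphores    = &frame.renderDone;
vkQueueSubmit(graphicsQueue, 1, &submit, frame.inFlightFence);

// vkQueuePresentKHR can present to several swapchains in one call:
VkPresentInfoKHR present = { VK_STRUCTURE_TYPE_PRESENT_INFO_KHR };
present.waitSemaphoreCount = 1;
present.pWaitSemaphores    = &frame.renderDone;
present.swapchainCount     = numWindows;
present.pSwapchains        = swapchains;            // one per window
present.pImageIndices      = imageIndices;          // acquired image index per swapchain
vkQueuePresentKHR(presentQueue, &present);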
As an aside, the resources for drawing to multiple windows using Vulkan seem to be fairly scarce
Lots of resources for Vulkan are "scarce". That is because Vulkan is like Lego: once you know what the individual pieces do, you can build whatever you want without needing outside help. Drawing to multiple windows is no different than drawing to a single window, except you do it multiple times.

How to Get Unity Context into OpenGL Window

I want to get the Unity context into OpenGL so I can display a Unity render texture in an OpenGL GLFW window. I tried using
oldContext = glfwGetCurrentContext(); but the value of oldContext is just null.
I am trying to use the low-level native Unity plugin interface and Texture.GetNativeTexturePtr.
Any help would be greatly appreciated!
An OpenGL context cannot be queried like OpenGL state-related objects via some glGet* API. The context is not part of the OpenGL API; it is part of the system you're running on, and it exists to let you maintain OpenGL state and issue commands to the driver. You must access a system-specific handle that points to the context via a system-specific API. On Windows (WGL, declared in wingdi.h) that would be
HGLRC wglGetCurrentContext();
On Linux, see the related GLX API; you need the corresponding function to access the GLXContext.
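For reference, the GLX counterpart to the WGL call above is:
GLXContext glXGetCurrentContext();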
I did it once in Unity3D (framebuffer readout plugin). But it used Unity's OpenGL or DirectX context to issue API commands only.
Also, I am not sure you can 'inject' or share a context for a window that doesn't own that context. You see, when you (or Unity) initialize the display, it creates the context and related GL resources on its own, like the default FBO with all required attachments, and that FBO is mapped to some system resource (device) which actually takes care of presenting those pixels on the screen. Therefore, I am not sure a display context can be moved from window to window in the same manner that a context can be shared between threads. (But I may be wrong on this one.)
You can create your plugin window on some thread, with its own GL context, then create and share a texture object between the two contexts. Remember, GL textures are shareable. If you copy the contents of Unity's screen FBO into that texture, then you can copy it into your plugin's screen FBO from that texture as well.
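A rough sketch of that texture-sharing approach under WGL; unityDC and unityRC would be grabbed with wglGetCurrentDC()/wglGetCurrentContext() inside a native plugin render callback, and all names plus the 1024×768 size are illustrative:

HGLRC pluginRC;   // plugin's own context, created for its own window/thread
GLuint sharedTex; // texture object visible to both contexts

void InitSharing(HDC unityDC, HGLRC unityRC, HDC pluginDC)
{
    pluginRC = wglCreateContext(pluginDC);
    wglShareLists(unityRC, pluginRC);  // texture names are now shared

    wglMakeCurrent(unityDC, unityRC);
    glGenTextures(1, &sharedTex);
    glBindTexture(GL_TEXTURE_2D, sharedTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1024, 768, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
}

// Call while Unity's context is current (e.g. from a render event):
void CaptureUnityFrame(void)
{
    glBindTexture(GL_TEXTURE_2D, sharedTex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 1024, 768); // read from current FBO
}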
Btw, look at this SO question. There you can see vendor-specific GL extensions which allow copying data into a texture from a different context without requiring a shared-context/share-lists setup.
Regarding why GLFW returns you a null pointer: in your example you use the GLFW library call
glfwGetCurrentContext()
But if you look at the source code, you see this:
GLFWAPI GLFWwindow* glfwGetCurrentContext(void)
{
    _GLFW_REQUIRE_INIT_OR_RETURN(NULL);
    return _glfwPlatformGetTls(&_glfw.contextSlot);
}
Which probably means that it retrieves a pointer to the GLFWwindow from its own thread-local cache and not from the system. And if you didn't create that context via GLFW, you won't get any valid pointer. So try working directly with your system-related API as explained above.

OpenGL load texture without or with static device context?

I want to create an OpenGL 2D library where textures as well as windows are encapsulated as objects. Is it possible to create a dummy static DC and make it current when loading textures? All of the windows would have the same PIXELFORMATDESCRIPTOR as the static one. This way, users of the library would not have to create a window prior to loading textures or pass windows as parameters to textures.
Is it possible to create a dummy static DC and make it current when loading textures?
Sort of. As long as the visual formats of the device contexts are compatible with each other, you can bind an OpenGL render context created for that visual format to any of these device contexts.
So you can perfectly well create a window with a DC that's never shown on the screen (always kept hidden, size of 0×0) and use that for background OpenGL operations. You can also create a secondary OpenGL context, have it share its namespace with the primary context, and make it current on the hidden window in a separate worker thread, so that you can asynchronously perform OpenGL operations (like loading textures) while the main context is used for other things.
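A minimal sketch of that setup under Win32/WGL, error handling omitted; mainRC and the exact pixel format are assumptions:

HWND hidden = CreateWindowA("STATIC", "", WS_POPUP, 0, 0, 1, 1,
                            NULL, NULL, GetModuleHandle(NULL), NULL);
HDC hiddenDC = GetDC(hidden);

PIXELFORMATDESCRIPTOR pfd = { sizeof(pfd), 1,
    PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER,
    PFD_TYPE_RGBA, 32 };                 // must match the visible windows' format
SetPixelFormat(hiddenDC, ChoosePixelFormat(hiddenDC, &pfd), &pfd);

HGLRC loaderRC = wglCreateContext(hiddenDC);
wglShareLists(mainRC, loaderRC);         // loaded textures become visible to mainRC

// On the worker thread:
wglMakeCurrent(hiddenDC, loaderRC);
// ... glGenTextures / glTexImage2D here; glFinish() before handing textures over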

Qt and OpenGL, using one context for multiple widgets

I recently asked a question about how to get around sharing issues with vertex array objects and frame buffer objects across multiple contexts, and I was then convinced that using multiple contexts just caused more headaches than solutions.
I am using Qt, and currently my setup is that I have one invisible QGLWidget which I then use in the constructor of my visible QGLWidgets in order to share resources. This works great, except that I cannot share certain things across the contexts.
I wish to find a solution where I am able to use a single context to render all of my different widgets. This question refers to the QGLWidget constructor where you pass in the QGLContext you desire to be shared; however, this does not seem to create one common context, but instead sets the context to be used by one QGLWidget. When you try to use it on a second widget, a qWarning informs you that the QGLContext must refer to the widget you are passing it to.
The goal of my application is to have 2 separate GUIs which render different scenes, yet share the same context. Currently I have a 'World' editor which edits a scene and saves it to a file to be used in my game engine, and I also have a 'Material' editor which allows you to graphically edit a material, similar to UDK's Material editor; there is a preview window which utilizes OpenGL.
Ideally I would like to keep my current design of having one unified game editor which is navigable by tabs, rather than having separate programs for each part of the editor.
The only thing that seemed like a decent solution was using QGraphicsView with a QGLWidget as the viewport; however, this does not seem to work at all. I can render basic primitives, but anything more and it falls apart.
Does anyone have experience dealing with this issue of multiple OpenGL Widgets, and if so could you explain the process you took to achieve your goal?
I don't quite understand why you are having so much trouble; I'm building a CAD-like app, so I share a few contexts, like this:
I use an application-wide hidden QGLWidget as a member of my main window class; this is the context shaders are loaded in.
For each document window, the window class has a hidden QGLWidget member; this is the context geometry is loaded in. The shader context is used as the 'shared' widget for it, giving documents access to the application-wide shaders.
Each of the 5 viewports in each document window is a visible QGLWidget; this is where the actual rendering takes place. The document window's geometry QGLWidget is used as the 'shared' widget, so the viewports have access to the document-wide geometry data and the application-wide shaders.
The shared widget parameter allows you to create an 'inheritance' tree of contexts: every context has access to its own and all its ancestors' data (but not its children's or siblings').
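In code, that tree looks roughly like this; QGLWidget's second constructor argument is the share widget, and parentWidget is an illustrative placeholder:

// Application-wide hidden widget: the context shaders are loaded in.
QGLWidget *shaderRoot = new QGLWidget;              // never shown

// Per-document hidden widget, sharing with the shader context.
QGLWidget *docRoot = new QGLWidget(0, shaderRoot);  // geometry lives here

// Visible viewport, sharing with the document context (and, transitively,
// with the application-wide shader context).
QGLWidget *viewport = new QGLWidget(parentWidget, docRoot);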

Handle Alt Tab in fullscreen OpenGL application properly

When trying to implement a simple OpenGL application, I was surprised that while it is easy to find plenty of examples and documentation on advanced rendering topics, the Win32 framework side is poorly documented, and even most samples and tutorials do not implement it properly even for basic cases, not to mention advanced setups like multiple monitors. Despite several hours of searching, I was unable to find a way to handle Alt-Tab reliably.
How should an OpenGL fullscreen application respond to Alt-Tab? Which messages should the app react to (WM_ACTIVATE, WM_ACTIVATEAPP)? And what should the reaction be: change the display resolution, destroy/create the rendering context, or destroy/create some OpenGL resources?
If the application uses some animation loop, suspend the loop, then just minimize the window. If it changed the display resolution and gamma, revert to the settings that were in effect before the change.
There's no need to destroy the OpenGL context or resources; OpenGL uses an abstract resource model: if another program requires the GPU's RAM or other resources, your program's resources will be swapped out transparently.
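A hedged sketch of a window procedure along those lines; isFullscreen, fullscreenMode, and paused are hypothetical application state:

case WM_ACTIVATE:
    if (LOWORD(wParam) == WA_INACTIVE && isFullscreen) {
        ChangeDisplaySettings(NULL, 0);  // restore the desktop display mode
        ShowWindow(hWnd, SW_MINIMIZE);
        paused = TRUE;                   // suspend the animation loop
    } else if (isFullscreen) {
        ChangeDisplaySettings(&fullscreenMode, CDS_FULLSCREEN);
        paused = FALSE;
    }
    return 0;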
EDIT:
Changing the window visibility status may require resetting the OpenGL viewport, so it's a good idea to either call glViewport appropriately in the display/rendering function, or at least set it in the resize handler, followed by a complete redraw.
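For instance, in a Win32 resize handler (a minimal sketch, assuming the GL context is current on this thread):

case WM_SIZE:
    glViewport(0, 0, LOWORD(lParam), HIWORD(lParam));
    InvalidateRect(hWnd, NULL, FALSE);   // force a complete redraw
    return 0;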