Sharing between QOpenGLContext and QGLWidget

The short form of the question: How can I draw in my QGLWidget using my FBO as a texture, without just getting a blank white image?
And, some background and details... I am using Qt 5.1 for an app that does some image processing on the GPU. I have a “compositor” class which uses a QOffscreenSurface, with a QOpenGLContext, and a QOpenGLFramebufferObject. It renders some things to the FBO. If the app is running in a render-only mode, the result gets written to a file. Run interactively, the result is shown in my subclass of QGLWidget, the “viewer.”
If I make a QImage from the FBO and draw that to the viewer, it works. However, this requires round-tripping from GPU -> QImage -> back to the GPU as a texture. In theory, I should be able to use the FBO as a texture directly, which is what I want.
I am trying to share between my QOpenGLContext and the QGLWidget’s QGLContext like so:
viewer = new tl::ui::glViewer(this);
compositor = new tl::playback::glCompositor(1280, 720, this);
viewer->context()->contextHandle()->setShareContext(compositor->context);
Is it possible to share between the two types of contexts? Is this the way to do it? Do I need to do something else to draw in the viewer using the FBO in the compositor? I’m just getting solid white when I draw the FBO directly instead of the QImage, so I’m clearly doing something wrong.
So I have figured out my problem. I was misinterpreting the documentation for setShareContext(), which notes that it "Won't take effect until create() is called", which I mistakenly took to mean you had to set up sharing after the context was created. In fact, sharing has to be established before create() is called:
viewer = new tl::ui::glViewer(this);
compositor = new tl::playback::glCompositor(512, 512, viewer->context()->contextHandle(), this);
and the new constructor for my glCompositor:
offscreenSurface = new QOffscreenSurface();
QSurfaceFormat format;
format.setMajorVersion(4);
format.setMinorVersion(0);
format.setProfile(QSurfaceFormat::CompatibilityProfile);
format.setSamples(0);
offscreenSurface->setFormat(format);
offscreenSurface->create();
context = new QOpenGLContext();
context->setShareContext(srcCtx);
context->setFormat(format);
context->create();
context->makeCurrent(offscreenSurface);
QOpenGLFramebufferObjectFormat f;
f.setSamples(0);
f.setInternalTextureFormat(GL_RGBA32F);
frameBuffer = new QOpenGLFramebufferObject(w, h, f);
This creates a new FBO in a new context that shares with the viewer's context. When it comes time to draw in the viewer, I just call glBindTexture(GL_TEXTURE_2D, frameBuffer->texture());
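For completeness, the drawing side can be as simple as binding that shared texture in the viewer's paintGL(). This is only a sketch, assuming the viewer holds a pointer to the compositor (and can reach its frameBuffer member) and uses the fixed-function pipeline available in the compatibility profile requested above:
void glViewer::paintGL()
{
    glClear(GL_COLOR_BUFFER_BIT);
    glEnable(GL_TEXTURE_2D);
    // The texture lives in the compositor's context but is visible here via sharing
    glBindTexture(GL_TEXTURE_2D, compositor->frameBuffer->texture());
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
    glEnd();
    glBindTexture(GL_TEXTURE_2D, 0);
    glDisable(GL_TEXTURE_2D);
}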
I am marking the answer I got correct, even though it didn't directly solve my problem. It was very informative.

I believe your problem has to do with the fact that FBOs themselves cannot be shared across contexts. What you can, however, do is share the attached data stores (renderbuffers, textures, etc.).
It really boils down to the fact that FBOs are little more than a pretty front-end for managing collections of read/draw buffers; the FBOs themselves are not actually a sharable resource. In fact, despite the name, they are not even Buffer Objects as far as the rest of the API is concerned; have you ever wondered why you do not use glBufferData (...), etc. on them? :)
Critically Important Point:
FrameBuffer Objects are not Buffer Objects; they contain no data stores; they only manage state for attachment points and provide an interface for validation and binding semantics.
This is why they cannot be shared, the same way that Vertex Array Objects cannot be shared, but their constituent Vertex Buffer Objects can.
The takeaway message here is:
If it does not store data, it is generally not a sharable resource.
The solution you will have to pursue will involve using the renderbuffers and textures that are attached to your FBO. Provided you do not do anything crazy like try and render in both contexts at once, sharing these resources should not present too much trouble. It could get really ugly if you started trying to read from the renderbuffer or texture while the other context is drawing into it, so do not do this :P
Due to the following language in the OpenGL specification, you will probably have to detach the texture before using it in your other context:
OpenGL 3.2 (Core Profile) - 4.4.3 Feedback Loops Between Textures and the Framebuffer - pp. 230:
A feedback loop may exist when a texture object is used as both the source and destination of a GL operation. When a feedback loop exists, undefined behavior results. This section describes rendering feedback loops (see section 3.8.9) and texture copying feedback loops (see section 3.8.2) in more detail.
To put this bluntly, your white textures could be the result of feedback loops (OpenGL did not give this situation a name in versions of the spec prior to 3.1, so proper discussion of "feedback loops" will be found in 3.1+ literature only). Because this invokes undefined behavior, it will behave differently between vendors. Detaching will eliminate the source of undefined behavior and you should be good to go.
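In code, detaching and re-attaching the color texture around its use in the other context might look roughly like this. This is only a sketch with raw GL calls against the Qt-managed FBO (here 'fbo' stands in for the shared QOpenGLFramebufferObject):
GLuint tex = fbo->texture();
glBindFramebuffer(GL_FRAMEBUFFER, fbo->handle());
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, 0, 0);   // detach the texture
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// ... safe to sample 'tex' in the other context now ...
// later, before rendering into the FBO again:
glBindFramebuffer(GL_FRAMEBUFFER, fbo->handle());
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0); // re-attach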

Related

How to render to OpenGL from Vulkan?

Is it possible to render to OpenGL from Vulkan?
It seems nVidia has something:
https://lunarg.com/faqs/mix-opengl-vulkan-rendering/
Can it be done for other GPUs?
Yes, it's possible if the Vulkan implementation and the OpenGL implementation both have the appropriate extensions available.
There is an example app in the Vulkan Samples repository which uses OpenGL to render a simple shadertoy to a texture, and then uses that texture in a Vulkan-rendered window.
Although your question seems to suggest you want to do the reverse (render to something using Vulkan and then display the results using OpenGL), the same concepts apply: populate a texture in one API, use synchronization to ensure the GPU work is complete, and then use the texture in the other API. You can also do the same thing with buffers, so for instance you could use Vulkan for compute operations and then use the results in an OpenGL render.
Requirements
Doing this requires that both the OpenGL and Vulkan implementations support the required extensions. However, according to this site, these extensions are widely supported across OS versions and GPU vendors, as long as you're working with a recent (> 1.0.51) version of Vulkan.
You need the External Objects extension for OpenGL and the External Memory/Fence/Semaphore extensions for Vulkan.
The Vulkan side of the extensions allow you to allocate memory, create semaphores or fences while marking the resulting objects as exportable. The corresponding GL extensions allow you to take the objects and manipulate them with new GL commands which allow you to wait on fences, signal and wait on semaphores, or use Vulkan allocated memory to back an OpenGL texture. By using such a texture in an OpenGL framebuffer, you can pretty much render whatever you want to it, and then use the rendered results in Vulkan.
Export / Import example code
For example, on the Vulkan side, when you're allocating memory for an image you can do this...
vk::Image image;
... // create the image as normal
vk::MemoryRequirements memReqs = device.getImageMemoryRequirements(image);
vk::MemoryAllocateInfo memAllocInfo;
vk::ExportMemoryAllocateInfo exportAllocInfo{
vk::ExternalMemoryHandleTypeFlagBits::eOpaqueWin32
};
memAllocInfo.pNext = &exportAllocInfo;
memAllocInfo.allocationSize = memReqs.size;
memAllocInfo.memoryTypeIndex = context.getMemoryType(
memReqs.memoryTypeBits, vk::MemoryPropertyFlagBits::eDeviceLocal);
vk::DeviceMemory memory;
memory = device.allocateMemory(memAllocInfo);
device.bindImageMemory(image, memory, 0);
HANDLE sharedMemoryHandle = device.getMemoryWin32HandleKHR({
memory, vk::ExternalMemoryHandleTypeFlagBits::eOpaqueWin32
});
This is using the C++ interface and is using the Win32 variation of the extensions. For Posix platforms there are alternative methods for getting file descriptors instead of WIN32 handles.
The sharedMemoryHandle is the value that you'll need to pass to OpenGL, along with the actual allocation size. On the GL side you can then do this...
// These values should be populated by the vulkan code
HANDLE sharedMemoryHandle;
GLuint64 sharedMemorySize;
// Create a 'memory object' in OpenGL, and associate it with the memory
// allocated in vulkan
GLuint mem;
glCreateMemoryObjectsEXT(1, &mem);
glImportMemoryWin32HandleEXT(mem, sharedMemorySize,
GL_HANDLE_TYPE_OPAQUE_WIN32_EXT, sharedMemoryHandle);
// Having created the memory object we can now create a texture and use
// the memory object for backing it
GLuint color;
glCreateTextures(GL_TEXTURE_2D, 1, &color);
// The internalFormat here should correspond to the format of
// the Vulkan image. Similarly, the w & h values should correspond to
// the extent of the Vulkan image
glTextureStorageMem2DEXT(color, 1, GL_RGBA8, w, h, mem, 0 );
Synchronization
The trickiest bit here is synchronization. The Vulkan specification requires images to be in certain states (layouts) before corresponding operations can be performed on them. So in order to do this properly (based on my understanding), you would need to...
In Vulkan, create a command buffer that transitions the image to ColorAttachmentOptimal layout
Submit the command buffer so that it signals a semaphore that has similarly been exported to OpenGL
In OpenGL, use the glWaitSemaphoreEXT function to cause the GL driver to wait for the transition to complete.
Note that this is a GPU-side wait, so the function will not block at all. It's similar to glWaitSync (as opposed to glClientWaitSync) in this regard.
Execute your GL commands that render to the framebuffer
Signal a different exported Semaphore on the GL side with the glSignalSemaphoreEXT function
In Vulkan, execute another image layout transition from ColorAttachmentOptimal to ShaderReadOnlyOptimal
Submit the transition command buffer with the wait semaphore set to the one you just signaled from the GL side.
That would be the optimal path. Alternatively, the quick and dirty method would be to do the Vulkan transition, then execute queue and device waitIdle commands to ensure the work is done, execute the GL commands, followed by glFlush and glFinish to ensure the GPU is done with that work, and then resume your Vulkan commands. This is more of a brute-force approach and will likely produce poorer performance than doing the proper synchronization.
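To make the OpenGL half of the optimal path concrete, here is a rough sketch using the GL_EXT_semaphore / GL_EXT_semaphore_win32 entry points. The handles glReadyHandle and glCompleteHandle are assumed to have been exported from the two Vulkan semaphores, and 'color' is the memory-object-backed texture from the import example above; error checking is omitted:
GLuint glReady, glComplete;
glGenSemaphoresEXT(1, &glReady);
glGenSemaphoresEXT(1, &glComplete);
glImportSemaphoreWin32HandleEXT(glReady,    GL_HANDLE_TYPE_OPAQUE_WIN32_EXT, glReadyHandle);
glImportSemaphoreWin32HandleEXT(glComplete, GL_HANDLE_TYPE_OPAQUE_WIN32_EXT, glCompleteHandle);

// GPU-side wait for Vulkan's transition to ColorAttachmentOptimal
GLenum srcLayout = GL_LAYOUT_COLOR_ATTACHMENT_EXT;
glWaitSemaphoreEXT(glReady, 0, nullptr, 1, &color, &srcLayout);

// ... render to the framebuffer that has 'color' attached ...

// Signal Vulkan that the GL work is done and the image may be transitioned
GLenum dstLayout = GL_LAYOUT_SHADER_READ_ONLY_EXT;
glSignalSemaphoreEXT(glComplete, 0, nullptr, 1, &color, &dstLayout);
glFlush();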
NVIDIA has created an OpenGL extension, NV_draw_vulkan_image, which can render a VkImage in OpenGL. It even has some mechanisms for interacting with Vulkan semaphores and the like.
However, according to the documentation, you must bypass all Vulkan layers, since layers can modify non-dispatchable handles and the OpenGL extension doesn't know about said modifications. Their recommended means of doing so is to use glGetVkProcAddrNV for all of your Vulkan functions.
Which also means that you can't get access to any debugging that relies on Vulkan layers.
There is some more information in this more recent slide deck from SIGGRAPH 2016. Slides 63-65 describe how to blit a Vulkan image to an OpenGL backbuffer. My opinion is that it may have been pretty easy for NVIDIA to support this since the Vulkan driver is contained in libGL.so (on Linux). So it may not have been that hard to give the Vulkan image handle to the GL side of the driver and have it be useful.
As another answer pointed out, there are still no official registered multi-vendor interop extensions. This approach just works on NVIDIA.

How to isolate my own OpenGL calls inside a third-party process?

I am writing a small tool that draws an OpenGL overlay on top of a game that is closed source. The game uses SDL, so I am just hooking into SDL_GL_SwapWindow and doing my own stuff. However, this kind of hooking results in some side effects in the game itself. I found a solution that basically wraps my own calls with the deprecated glPushAttrib/glPopAttrib. But this solves only half of the problems. I am still getting random texture flickering in the game (I mean game textures; mine show fine). What could be the reason for this flickering? Can my own textures interfere with the game's textures? Do I need to isolate my own calls, and how can I do it?
What could be the reason of this flickering?
If the game uses shaders, then glPushAttrib / glPopAttrib will not take care of all the state you may be clobbering. The attribute stack has been deprecated, and the program may use state that is either not covered by it, or where certain attribute bits in the compatibility profile have been reused or expanded to cover further state. I recommend not using the attribute stack at all, because it's hard to get right.
Can my own textures interfere with game textures?
Yes. Say you left a 2D texture active in a texture unit that's later being used for a 1D texture. If the host program does not use shaders, then GL_TEXTURE_2D will take precedence over GL_TEXTURE_1D. It's a (IMHO poor) design choice of OpenGL that multiple texture targets can be bound to the same texture unit at the same time, with the one that delivers texels determined by the targets' precedence.
Do I need to isolate my own calls
Yes.
and how can I do it?
Two possible solutions:
Create a separate OpenGL context for just your own stuff. Use {wgl,glX}GetCurrentContext and {wglGetCurrentDC,glXGetCurrentDrawable} to retrieve the OpenGL context and drawable active at the moment you're "jumping" in. If you don't have a context already, you can use the drawable just retrieved to create a matching OpenGL context. Optionally set up namespace (resource) sharing between the contexts. Switch to your context, draw your stuff and switch back to the host program's context. – Major drawback: switching OpenGL contexts is quite expensive.
Before switching state around, use glGet… to retrieve the state that is active, and restore the old state before returning to the host program; a minimal sketch of this approach follows.
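Here is that sketch, saving and restoring just a few pieces of state (in practice you would extend this to everything your overlay touches):
// Save the bits of GL state the overlay is about to change ...
GLint prevProgram = 0, prevTexture = 0, prevVAO = 0, prevFBO = 0;
glGetIntegerv(GL_CURRENT_PROGRAM, &prevProgram);
glGetIntegerv(GL_TEXTURE_BINDING_2D, &prevTexture);
glGetIntegerv(GL_VERTEX_ARRAY_BINDING, &prevVAO);
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &prevFBO);
GLboolean wasBlendEnabled = glIsEnabled(GL_BLEND);

// ... draw the overlay ...

// ... and restore them before returning to the host program.
glUseProgram(prevProgram);
glBindTexture(GL_TEXTURE_2D, prevTexture);
glBindVertexArray(prevVAO);
glBindFramebuffer(GL_FRAMEBUFFER, prevFBO);
if (wasBlendEnabled) glEnable(GL_BLEND); else glDisable(GL_BLEND);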

What is the use case of cudaGraphicsRegisterFlagsWriteDiscard in cudaGraphicsGLRegisterImage?

I'm fairly new to CUDA, but I've managed to display something generated by a kernel on the screen using OpenGL. I've tried several approaches:
Using a PBO and an OpenGL texture (old style);
Using an OpenGL texture as a CUDA surface and rendering on a quad (new style);
Using a renderbuffer as a CUDA surface and rendering using glBlitFramebuffer.
All of them worked, but, while implementing #2, I erroneously set the hint as cudaGraphicsRegisterFlagsWriteDiscard. Since all of the data will be generated by CUDA, I thought this was the correct option. However, later I realized that I needed a CUDA surface to write to an OpenGL texture, and when you use a surface, you are requested to use the LoadStore flag.
So basically my question is this : Since I absolutely need a CUDA surface to write to an OpenGL texture in CUDA, what is the use case of cudaGraphicsRegisterFlagsWriteDiscard in cudaGraphicsGLRegisterImage?
The documentation description seems pretty straightforward. It is for one-way delivery of data from CUDA to OpenGL.
This online book excerpt provides a similar explanation:
Applications where CUDA is the producer and OpenGL is the consumer should register the objects with a write-discard flag...
If you want to see an example, take a look at the postProcessGL cuda sample. In that case, OpenGL is rendering an image, and it's being post-processed (blur added) by cuda, before display. In this case, there are two separate pathways for data flow. In the OpenGL->CUDA case, the data is handled by the createTextureSrc function, and the flag specified is read-only. For the CUDA->OpenGL case (delivery of the post-processed frame) the function is handled in createTextureDst, where a call is made to cudaGraphicsGLRegisterImage with the cudaGraphicsMapFlagsWriteDiscard flag specified, since on this path, CUDA is producing and OpenGL is consuming.
To understand how the textures are handled (populated with data from the cuda operations via a cudaArray) you probably want to study the sequence of operations in processImage().
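As a rough standalone sketch of the CUDA-producer / OpenGL-consumer registration that flag is meant for (glTexture, d_output, width, and height are placeholders; error checking omitted). Note that if CUDA only copies a kernel's output into the mapped array, rather than writing through a surface object, write-discard is all you need:
// Register a GL texture that CUDA will fully overwrite each frame
cudaGraphicsResource_t resource = nullptr;
cudaGraphicsGLRegisterImage(&resource, glTexture, GL_TEXTURE_2D,
                            cudaGraphicsRegisterFlagsWriteDiscard);

// Each frame: map, copy the kernel's output into the texture's array, unmap
cudaGraphicsMapResources(1, &resource, 0);
cudaArray_t array = nullptr;
cudaGraphicsSubResourceGetMappedArray(&array, resource, 0, 0);
cudaMemcpy2DToArray(array, 0, 0, d_output, width * sizeof(uchar4),
                    width * sizeof(uchar4), height, cudaMemcpyDeviceToDevice);
cudaGraphicsUnmapResources(1, &resource, 0);
// OpenGL can now sample glTexture as usual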

What is the purpose of 'framebuffer' when setting up a GL context?

In this example code, framebuffers are dealt with before setting up the context.
I've read the man pages of the functions, but I still don't understand exactly what's going on.
So my question is, what exactly is a framebuffer in GLX and how significant is configuring it?
A framebuffer is an area of memory that holds a displayable image. You need one when creating an OpenGL context so that OpenGL has a place to store the image it renders. In GLX, the framebuffer configuration (a GLXFBConfig) describes the properties of that image: color depth, alpha, depth and stencil buffers, double buffering, and so on, so the configuration you choose determines what your context will be able to render.
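For a concrete picture, the framebuffer configuration is chosen with something like the following (the attribute values are just an example; 'display' is your X connection):
// Ask for a framebuffer configuration describing the image we want OpenGL
// to render into: 8-bit RGBA color, a 24-bit depth buffer, double-buffered.
static const int attribs[] = {
    GLX_RENDER_TYPE,   GLX_RGBA_BIT,
    GLX_DRAWABLE_TYPE, GLX_WINDOW_BIT,
    GLX_RED_SIZE, 8, GLX_GREEN_SIZE, 8, GLX_BLUE_SIZE, 8, GLX_ALPHA_SIZE, 8,
    GLX_DEPTH_SIZE, 24,
    GLX_DOUBLEBUFFER, True,
    None
};
int count = 0;
GLXFBConfig *configs = glXChooseFBConfig(display, DefaultScreen(display),
                                         attribs, &count);
// configs[0] (if count > 0) is then used to create both the context and the
// drawable; free the list with XFree(configs) when done.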

Is it ok to use a Random texture ID?

Instead of using glGenTextures() to get an unused texture ID, can I randomly choose a number, say 99999, and use it?
I will, of course, query:
glIsTexture( m_texture )
and make sure it's false before proceeding.
Here's some background:
I am developing an image slideshow app for the mac. Previewing the slideshow is flawless. To save the slideshow, I am rendering to an FBO. I create an AGL context, instantiate a texture with glGenTextures() and render to a frame buffer. All's well except for a minor issue.
After I've saved the slideshow and return to the main window, all my image thumbnails are grey, i.e. the textures have been cleared.
I have investigated and found out that the image thumbnails and my FBO texture somehow have the same texture ID. When I delete my FBO texture at the end of the slideshow saving operation, the thumbnail textures are also lost. This is weird because I have an AGL context during saving, and the main UI has another AGL context, presumably created in the background by Core Image, over which I have no control.
So my options, as I see it now, is to:
Not delete the FBO texture.
Randomly choose a high texture ID in the hopes that the main UI will not be using it.
Actually, I read that you do not necessarily need to delete your texture if you are deleting the AGL OpenGL context, because deleting an OpenGL context automatically deletes all associated textures. Is this true? If so, option 1 makes more sense.
I do realize that there are funny things happening here that I can't really explain. For example, after I've saved my image slideshow to an .mov file, I delete the context, which was created in the same class. By right, it should not affect textures created in another context. But it does, and I seriously can't explain that.
Allow me to answer your basic question first: yes, it is legal in OpenGL 2.1 to simply pick a texture name arbitrarily and use it as a texture. Note that it is not legal in 3.2 core. The fact that the OpenGL ARB removed this ability should help illustrate whether it is a good idea or not.
Now, you seem to have run into an odd bug. glGenTextures should never return a texture name that is already in use. After all that is what it is for. You appear to somehow be getting textures that are already in use. I don't know how that happens.
An OpenGL context, when destroyed, will delete all resources specific to that context. If you have created a second context that shares resources with the first, deleting the second context will not delete the shared resources (textures, buffers, etc). It will delete un-shared resources (VAOs, etc), but everything else will remain.
I would suggest not creating and deleting the FBO texture constantly. Just create it on application startup. Worst-case, you'll have to reallocate storage for the object as needed (via glTexImage2D), if you need a different size or something.
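A sketch of that approach (the class and member names here are made up):
// Create the FBO color texture once and only reallocate its storage when the
// requested size changes, instead of generating/deleting it for every save.
void Exporter::ensureFboTexture(int w, int h)
{
    if (m_fboTexture == 0)
        glGenTextures(1, &m_fboTexture);
    if (w != m_texWidth || h != m_texHeight) {
        glBindTexture(GL_TEXTURE_2D, m_fboTexture);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
        m_texWidth = w;
        m_texHeight = h;
    }
}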
To anybody who has this problem: make sure your context is fully initialized; until it is, glGenTextures will always return 0. I didn't realize what was happening at first because it seems 0 is still a valid texture ID.