Multiple QGLWidgets with a single OpenGL context in C++

I'm writing an application that is composed of multiple (16-32) plots that are updated several times a second and are drawn using OpenGL. Until now I've done most of the prototyping of the plots with GLUT. However, I'd like to adopt a full-fledged framework like Qt, and I'm getting ready to write a test QGLWidget.
Before I get started I'd like to figure out whether it's possible for multiple QGLWidgets to share a single OpenGL context. If so, are there any specifics I need to keep track of when sharing an OpenGL context between widgets?

Is it possible for multiple QGLWidgets to share a single OpenGL context?
This is not possible to answer in general, because it depends on the platform in question: on X11/GLX it is indeed possible to use an indirect context on multiple drawables, however the context can be active on only one drawable at a time.
However:
It is also possible (and it is the recommended way to do this) to have multiple contexts share their data. In the very first versions of OpenGL this covered only display lists, hence it is still called list sharing. With current versions of OpenGL it also includes textures, Pixel Buffer Objects and Vertex Buffer Objects. Framebuffer Objects, however, cannot be shared, but since textures can be used as FBO attachments that's no big deal.
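As a minimal sketch of that last point (plain OpenGL, error handling omitted): a texture created in one of the sharing contexts can be attached as the color buffer of an FBO that belongs to another context.

    // In context A (one of the sharing contexts): create the texture.
    GLuint sharedTex = 0;
    glGenTextures(1, &sharedTex);
    glBindTexture(GL_TEXTURE_2D, sharedTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

    // In context B (shares its data with A): FBOs are per-context, so create
    // a local FBO and attach the shared texture as its color buffer.
    GLuint fbo = 0;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, sharedTex, 0);
    // ... render into fbo; the result is visible through sharedTex in both contexts.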
QGLWidget provides a straightforward API to share context data between QGLWidgets' contexts.

Yes, it is possible to share an OpenGL context by using this constructor (the QGLWidget overload that takes a shareWidget argument).
If so, are there any specifics I need to keep track of when sharing an OpenGL context between widgets?
I am not sure, but I don't think there is anything special you need to take care of.
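A rough sketch of that constructor-based approach (the widget names are just placeholders): the second widget passes the first one as the shareWidget argument, so both contexts share textures, display lists and buffer objects.

    #include <QApplication>
    #include <QGLWidget>

    int main(int argc, char* argv[])
    {
        QApplication app(argc, argv);

        // The first plot widget owns the "primary" context.
        QGLWidget* firstPlot = new QGLWidget;

        // Passing firstPlot as the shareWidget makes the new widget's context
        // share its textures, display lists and buffer objects.
        QGLWidget* secondPlot = new QGLWidget(/*parent=*/nullptr, /*shareWidget=*/firstPlot);

        firstPlot->show();
        secondPlot->show();

        // secondPlot->isSharing() reports whether sharing actually succeeded
        // on this platform.
        return app.exec();
    }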

Related

Should each texture have its own dedicated renderer in SDL?

I'm attempting to learn SDL2 and am having difficulties from a practical perspective. I feel like I have a good understanding of SDL windows, renderers, and textures from an abstract perspective. However, I feel like I need to know more about what's going on under the hood to use them appropriately.
For example, when creating a texture I am required to provide a reference to a renderer. I find this odd. A texture seems like a resource that is loaded into VRAM. Why should I need to give a resource a reference to a renderer? I understand why it would be necessary to give a renderer a reference to a texture, but the reverse doesn't make sense to me.
So that leads to another question. Since each texture requires a renderer, should each texture have its own dedicated renderer, or should multiple textures share a renderer?
I feel like there are consequences going down one route versus the other.
Short Answers
I believe the reason an SDL_Texture requires a renderer is that some backend implementations (OpenGL?) have contexts (this is essentially what SDL_Renderer is) and the image data must be associated with that particular context. You cannot use a texture created in one context inside another.
As for your other question: no, you don't need or want a renderer for each texture. That would probably only produce correct results with the software backend, for the same reason (contexts).
As @keltar correctly points out, none of the renderers will work with a texture that was created with a different renderer, due to a check in SDL_RenderCopy. However, this is strictly an API requirement to keep things consistent; my point above is to highlight that even if that check were absent it would not work for backends such as OpenGL, but there is no technical reason it would not work for the software renderer.
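A minimal sketch of the recommended arrangement (window title and texture sizes are made up; error handling omitted): one window, one renderer, and every texture created against that same renderer.

    #include <SDL.h>

    int main()
    {
        SDL_Init(SDL_INIT_VIDEO);

        SDL_Window* window = SDL_CreateWindow("demo", SDL_WINDOWPOS_CENTERED,
                                              SDL_WINDOWPOS_CENTERED, 640, 480, 0);
        // One renderer for the whole application.
        SDL_Renderer* renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED);

        // Many textures, all tied to that single renderer (and thus its context).
        SDL_Texture* background = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA8888,
                                                    SDL_TEXTUREACCESS_STATIC, 640, 480);
        SDL_Texture* sprite = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA8888,
                                                SDL_TEXTUREACCESS_STATIC, 64, 64);

        SDL_RenderClear(renderer);
        SDL_RenderCopy(renderer, background, nullptr, nullptr); // works: same renderer
        SDL_RenderCopy(renderer, sprite, nullptr, nullptr);
        SDL_RenderPresent(renderer);

        SDL_DestroyTexture(sprite);
        SDL_DestroyTexture(background);
        SDL_DestroyRenderer(renderer);
        SDL_DestroyWindow(window);
        SDL_Quit();
        return 0;
    }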
Some Details about SDL_Renderer
Remember that SDL_Renderer is an abstract interface to multiple possible backends (OpenGL, OpenGL ES, D3D, Metal, Software, more?). Each of these may have restrictions on sharing data between contexts, and therefore SDL has to limit itself in the same way to maintain sanity.
Example of OpenGL restrictions
Here is a good resource for general restrictions and platform dependent functionality on OpenGL contexts.
As you can see from that page, sharing between contexts has restrictions.
Sharing can only occur in the same OpenGL implementation
This means that you certainly can't share between an SDL_Renderer using OpenGL and a different SDL_Renderer using another backend.
You can share data between different OpenGL Contexts
...
This is done using OS Specific extensions
Since SDL is cross-platform, this means they would have to write special code for each platform to support this, and not all OpenGL implementations support it at all, so it's better for SDL to simply not support this either.
each extra render context has a major impact on the application's performance
While not a restriction, this is a reason why adding support for sharing textures is not worthwhile for SDL.
Final Note: the 'S' in SDL stands for "simple". If you need to share data between contexts SDL is simply the wrong tool for the job.

How to isolate my own OpenGL calls inside a third-party process?

I am writing a small tool that draws an OpenGL overlay on top of a closed-source game. The game uses SDL, so I am just hooking into SDL_GL_SwapWindow and doing my own stuff. However, this kind of hooking results in some side effects in the game itself. I found a solution that basically wraps my own calls with the deprecated glPushAttrib/glPopAttrib. But this solves only half of the problems. I am still getting random texture flickering in the game (I mean game textures; mine show fine). What could be the reason for this flickering? Can my own textures interfere with the game's textures? Do I need to isolate my own calls, and how can I do it?
What could be the reason of this flickering?
If the game uses shaders, then glPushAttrib / glPopAttrib will not take care of all the state you may be clobbering. The attribute stack has been deprecated, and the program may use state that is either not covered by it, or where certain attribute bits in the compatibility profile have been reused or expanded to cover further state. I recommend not using the attribute stack at all, because it's hard to get right.
Can my own textures interfere with game textures?
Yes. Say you left a 2D texture active in a texture unit that's later being used for a 1D texture. If the host program does not use shaders, then the GL_TEXTURE_2D will take precedence over the GL_TEXTURE_1D. It's an (IMHO poor) design choice of OpenGL that you can have multiple texture targets bound to the same texture unit at the same time, and which one is used to deliver texels depends on the individual targets' precedence.
Do I need to isolate my own calls
Yes.
and how can I do it?
Two possible solutions:
Create a separate OpenGL context for just your own stuff. Use {wgl,glX}GetCurrentContext and {wglGetCurrentDC,glXGetCurrentDrawable} to retrieve the OpenGL context and drawable active at the moment you're "jumping" in. If you don't have a context already, you can use the drawable just retrieved to create a matching OpenGL context. Optionally set up namespace sharing. Switch to your context, draw your stuff and switch back to the host program's context. – Major drawback: switching OpenGL contexts is quite expensive.
Before switching state around, use glGet… to retrieve the state active before doing so, and restore the old state before returning to the host program (a sketch of this follows below).
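A minimal sketch of the second approach, assuming the overlay only touches the active program, the active texture unit, the 2D texture binding and the array buffer binding (extend the list to cover whatever state your overlay actually changes; loading the GL 2.0+ entry points, e.g. via GLEW, is assumed):

    void drawOverlay()
    {
        // Save the pieces of state the overlay is about to touch.
        GLint prevProgram = 0, prevActiveUnit = 0, prevTexture = 0, prevArrayBuffer = 0;
        glGetIntegerv(GL_CURRENT_PROGRAM,      &prevProgram);
        glGetIntegerv(GL_ACTIVE_TEXTURE,       &prevActiveUnit);
        glGetIntegerv(GL_TEXTURE_BINDING_2D,   &prevTexture);
        glGetIntegerv(GL_ARRAY_BUFFER_BINDING, &prevArrayBuffer);

        // ... bind your own program/texture/buffers and draw the overlay ...

        // Restore exactly what was saved before handing control back to the game.
        glUseProgram(prevProgram);
        glActiveTexture(static_cast<GLenum>(prevActiveUnit));
        glBindTexture(GL_TEXTURE_2D, prevTexture);
        glBindBuffer(GL_ARRAY_BUFFER, prevArrayBuffer);
    }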

Is it possible to render one half of a scene by OpenGL and other half by DirectX

My straight answer would be NO. But I am curious how they created this video http://www.youtube.com/watch?v=HC3JGG6xHN8
They used video editing software. They recorded two nearly deterministic run-throughs of their engine and spliced them together.
As for the question posed by your title, not within the same window. It may be possible within the same application from two windows, but you'd be better off with two separate applications.
Yes, it is possible. I did this as an experiment for a graduate course; I implemented half of a deferred shading graphics engine in OpenGL and the other half in D3D10. You can share surfaces between OpenGL and D3D contexts using the appropriate vendor extensions.
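One such vendor extension on Windows/NVIDIA is WGL_NV_DX_interop. The rough usage pattern looks something like the sketch below (typedef names follow wglext.h; the D3D device and texture setup, extension checks and error handling are omitted, so treat this as illustrative rather than working code):

    #include <windows.h>
    #include <GL/gl.h>
    #include <GL/wglext.h>   // PFNWGLDX... typedefs and WGL_ACCESS_* enums

    void drawSharedSurface(void* d3dDevice, void* d3dTexture, GLuint glTexture)
    {
        // The interop entry points must be fetched from the driver at runtime.
        auto wglDXOpenDeviceNV     = (PFNWGLDXOPENDEVICENVPROC)wglGetProcAddress("wglDXOpenDeviceNV");
        auto wglDXRegisterObjectNV = (PFNWGLDXREGISTEROBJECTNVPROC)wglGetProcAddress("wglDXRegisterObjectNV");
        auto wglDXLockObjectsNV    = (PFNWGLDXLOCKOBJECTSNVPROC)wglGetProcAddress("wglDXLockObjectsNV");
        auto wglDXUnlockObjectsNV  = (PFNWGLDXUNLOCKOBJECTSNVPROC)wglGetProcAddress("wglDXUnlockObjectsNV");

        // Tie the D3D device to the current GL context, then expose the
        // D3D texture as a GL texture object.
        HANDLE interopDevice = wglDXOpenDeviceNV(d3dDevice);
        HANDLE sharedHandle  = wglDXRegisterObjectNV(interopDevice, d3dTexture, glTexture,
                                                     GL_TEXTURE_2D, WGL_ACCESS_READ_WRITE_NV);

        // Lock while GL uses the surface; unlock before D3D touches it again.
        wglDXLockObjectsNV(interopDevice, 1, &sharedHandle);
        // ... draw with glTexture bound, or attach it to an FBO ...
        wglDXUnlockObjectsNV(interopDevice, 1, &sharedHandle);
    }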
Does it have any practical applications? Not many that I can think of. I just wanted to prove that it could be done :)
I digress, however. That video is just a side-by-side of two separately recorded videos of the Haven benchmark running in the two different APIs.
My straight answer would be NO.
My straight answer would be "probably yes, but you definitely don't want to do that."
But I am curious how they created this video http://www.youtube.com/watch?v=HC3JGG6xHN8
They prerendered the videos and simply combined them in a video editor. Because the camera follows a fixed path, that can be done easily.
Anyway, you could render both (DirectX/OpenGL) scenes onto offscreen buffers, and then combine them using either API to render the final result. You would read data from the render buffer in one API and transfer it into a renderable buffer used in the other API. The dumbest way to do it would be through system memory (which will be VERY slow), but it is possible that some vendors (NVIDIA, in particular) provide extensions for this scenario.
On the Windows platform you could also place two child windows/panels side by side on the main window (so you'd get the same effect as in that YouTube video), and create an OpenGL context for one of them and a DirectX device for the other. Unless there's some restriction I'm not aware of, that should work, because in order to render 3D graphics you need a window with a handle (HWND). However, both windows will be completely independent of each other and will not share resources, so you'll need twice as much memory for textures alone to run them both.

Share OpenGL frame buffer / render buffer between two applications

Let's say I have an application A which is responsible for painting stuff on-screen via the OpenGL library. For tight integration purposes I would like to let this application A do its job, but render into an FBO or directly into a render buffer, and allow an application B to have read-only access to this buffer to handle the display on-screen (basically rendering it as a 2D texture).
It seems FBOs belong to OpenGL contexts, and contexts are not shareable between processes. I definitely understand that allowing several processes to mess with the same context is evil. But in my particular case, I think it's reasonable to think it could be pretty safe.
EDIT:
Render size is near full screen; I was thinking of a 2048x2048 32-bit buffer (I don't use the alpha channel for now, but why not later).
Framebuffer Objects cannot be shared between OpenGL contexts, whether or not they belong to the same process. But textures can be shared, and textures can be used as the color buffer attachment of a framebuffer object.
Sharing OpenGL contexts between processes is actually possible if the graphics system provides the API for this job. In the case of X11/GLX it is possible to share indirect rendering contexts between multiple processes. It may be possible in Windows by employing a few really, really crude hacks. MacOS X, no idea how to do this.
So what's probably easiest is to use a Pixel Buffer Object to gain performant access to the rendered picture, then send it over to the other application through shared memory and upload it into a texture there (again through a pixel buffer object).
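A minimal sketch of the readback half in application A, assuming the frame has just been rendered and that sharedMemoryPtr points at a shared-memory mapping both applications have opened (the shared-memory setup itself is platform-specific and omitted; the PBO entry points are assumed to be loaded, e.g. via GLEW):

    void copyFrameToSharedMemory(GLuint pbo, int width, int height, void* sharedMemoryPtr)
    {
        const size_t frameBytes = size_t(width) * height * 4; // RGBA8

        // Asynchronously copy the rendered frame into the pixel buffer object.
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
        glBufferData(GL_PIXEL_PACK_BUFFER, frameBytes, nullptr, GL_STREAM_READ);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, nullptr); // offset 0 into the PBO

        // Map the PBO and hand the pixels to application B via shared memory.
        if (void* pixels = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY)) {
            std::memcpy(sharedMemoryPtr, pixels, frameBytes);
            glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
        }
        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    }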
On macOS, you can use IOSurface to share a framebuffer between two applications.
In my understanding, you won't be able to share the objects between processes under Windows unless it's a kernel-mode object. Even shared textures and contexts can create performance hits, and they also give you the additional responsibility of syncing the SwapBuffer() calls. Especially under the Windows platform, the OpenGL implementation is notorious.
In my opinion, you can rely on inter-process communication mechanisms like events, mutexes, window messages or pipes to sync the rendering, but realize that there's a performance cost to this approach. Kernel-mode objects are good, but each transition to kernel mode has a cost of 100ms, which is damn costly for a high-performance rendering application. In my opinion you have to reconsider the multi-process rendering design.
On Linux, a solution is to use DMABUF, as explained in this blog: https://blaztinn.gitlab.io/post/dmabuf-texture-sharing/

How to draw OpenGL graphics from different threads?

I want to make an OpenGL application that shows some 3D graphics and a command line. I would like to make them separate threads, because they are both heavy processes. I thought that I could approach this with 2 different viewports, but I would like to know how to handle the threads in OpenGL.
According to what I've been reading, OpenGL is asynchronous, and calling its functions from different threads can be very problematic. Is there an approach I could use for this problem? Ideally, I would like to draw the command line on top of the 3D graphics with some transparency effect... (this is impossible with viewports, I guess)
It is important that the solution is portable.
Thanks!
It's possible you can achieve what you want using overlays.
Overlays are a somewhat dated feature, but they should still be supported in most setups. Basically an overlay is a separate GL context which is rendered in the same window as another layer, drawing on top of whatever was drawn on the window with its original context.
You can read about it here.
I think, rather than trying to create two threads that draw to the screen, you need to use the MVC pattern and make your model thread-safe. Then you can have one thread that grabs the necessary info from the model each frame and throws it on screen, and two other threads, one for the graphics and one for the command line, which manage only the model.
So for instance, you have a Simulation class that holds your 3D state, and a CommandLine class that holds your command line. Neither of these classes uses OpenGL at all; they only manage data, such as where things are in 3D space and, in the case of the command line, a queue of the lines on screen. The OpenGL thread can then query thread-safe functions of these classes each frame to get the necessary info. For example, it grabs the positions of the 3D objects and draws them on screen, then grabs the lines to display on the command line and draws those.
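A minimal sketch of the thread-safe command-line half of such a model (the class and member names are just illustrative): the command-line thread pushes lines, and the single OpenGL rendering thread takes a snapshot once per frame.

    #include <deque>
    #include <mutex>
    #include <string>
    #include <vector>

    class CommandLine
    {
    public:
        // Called from the command-line worker thread.
        void pushLine(std::string line)
        {
            std::lock_guard<std::mutex> lock(mutex_);
            lines_.push_back(std::move(line));
            if (lines_.size() > maxLines_)
                lines_.pop_front();
        }

        // Called once per frame from the OpenGL rendering thread; returns a
        // copy so the renderer never holds the lock while drawing.
        std::vector<std::string> snapshot() const
        {
            std::lock_guard<std::mutex> lock(mutex_);
            return {lines_.begin(), lines_.end()};
        }

    private:
        mutable std::mutex      mutex_;
        std::deque<std::string> lines_;
        static constexpr size_t maxLines_ = 64;
    };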
You can't do it with 2 viewports.
An OpenGL context can only be current in one thread at a time, and all calls that use it must come from the thread where it is current.
Two separate window handles, each with their own context, can be set up and used from two separate threads safely, but not 2 viewports in one window.
A good place to look for ideas on this is OpenSceneGraph. It does a lot of work with threading to help speed up handling and maintain a constant, high framerate with very large scenes.
I've successfully embedded it and used it from 2 separate threads, but OSG has many checks in place to help with this sort of scenario.