Let's say I have an application A which is responsible for painting stuff on-screen via the OpenGL library. For tight integration purposes I would like to let this application A do its job, but render into an FBO or directly into a renderbuffer, and allow an application B to have read-only access to this buffer to handle the on-screen display (basically rendering it as a 2D texture).
It seems FBOs belong to OpenGL contexts, and contexts are not shareable between processes. I definitely understand that allowing several processes to mess with the same context is evil. But in my particular case, I think it's reasonable to think it could be pretty safe.
EDIT:
Render size is near full screen; I was thinking of a 2048x2048 32-bit buffer (I don't use the alpha channel for now, but why not later).
Framebuffer objects cannot be shared between OpenGL contexts, whether they belong to the same process or not. But textures can be shared, and textures can be used as color buffer attachments to framebuffer objects.
Sharing OpenGL contexts between processes is actually possible if the graphics system provides the API for this job. In the case of X11/GLX it is possible to share indirect rendering contexts between multiple processes. It may be possible on Windows by employing a few really, really crude hacks. On macOS, I have no idea how to do this.
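On X11 specifically, the GLX_EXT_import_context extension lets a second process import an existing indirect rendering context by its XID. A rough sketch only: dpy, visualInfo and someDrawable are assumed to exist, and only one process may have the context current at a time.

    // Process A: create an indirect context and publish its XID.
    GLXContext ctx = glXCreateContext(dpy, visualInfo, nullptr, /*direct=*/False);
    GLXContextID id = glXGetContextIDEXT(ctx);   // hand this XID to process B over some IPC channel

    // Process B: import the context by its XID and use it like a locally created one.
    GLXContext imported = glXImportContextEXT(dpy, id);
    glXMakeCurrent(dpy, someDrawable, imported);
    // ... render ...
    glXFreeContextEXT(dpy, imported);            // frees the client-side handle, not the context itself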
So what's probably easiest is to use a Pixel Buffer Object (PBO) to gain performant access to the rendered picture, then send it over to the other application through shared memory and upload it into a texture there (again through a PBO).
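A rough sketch of the readback half in application A, assuming a current OpenGL context, buffer-object entry points loaded by some GL loader, and a shared-memory region that application B maps as well:

    #include <GL/glew.h>   // or whatever loader provides the buffer-object entry points
    #include <cstring>

    // Copy the currently bound (already rendered) framebuffer into a shared memory block.
    void publishFrame(void* sharedMemoryRegion, int width, int height) {
        GLuint pbo = 0;
        glGenBuffers(1, &pbo);
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
        glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, nullptr, GL_STREAM_READ);

        // With a pack PBO bound, glReadPixels returns quickly; the copy happens GPU-side.
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

        // Mapping here still synchronizes; in real code you would map a frame later.
        if (void* pixels = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY)) {
            std::memcpy(sharedMemoryRegion, pixels, static_cast<size_t>(width) * height * 4);
            glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
        }
        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
        glDeleteBuffers(1, &pbo);
    }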
On macOS, you can use IOSurface to share a framebuffer between two applications.
In my understanding, you won't be able to share objects between processes under Windows unless they are kernel-mode objects. Even shared textures and contexts can create performance hits, and they also give you the additional responsibility of syncing the SwapBuffers() calls. The OpenGL implementations on the Windows platform are especially notorious here.
In my opinion, you can rely on inter-process communication mechanisms like events, mutexes, window messages or pipes to sync the rendering, but realize that there is a performance cost to this approach. Kernel-mode objects are good, but each transition to the kernel has a cost of 100 ms, which is damn costly for a high-performance rendering application. In my opinion, you should reconsider the multi-process rendering design.
On Linux, a solution is to use DMABUF, as explained in this blog: https://blaztinn.gitlab.io/post/dmabuf-texture-sharing/
This is probably a stupid question, but I can't find good examples of how to approach this, or whether it's even possible. I just finished a project where I used GDI to BitBlt stuff onto a DIB buffer and then swap that onto the screen HDC, basically making my own swap chain and drawing with OpenGL.
So then I thought: can I do the same thing using DirectX 11? But I can't seem to find where the DIB/buffer I need to change even is.
Am I even thinking about this correctly? Any ideas on how to handle this?
Yes, you can. NVIDIA exposes vendor-specific extensions called WGL_NV_DX_interop and WGL_NV_DX_interop2. With these extensions, you can directly access a DirectX surface (when it resides on the GPU) and render to it from an OpenGL context. There should be minimal (driver-only) overhead for this operation and the CPU will almost never be involved.
Note that while this is a vendor-specific extension, Intel GPUs support it as well.
However, don't do this simply for the fun of it or if you control all the source code for your application. This kind of interop scenario is meant for cases where you have two legacy/complicated codebases and interop is a cheaper/better option than porting all the logic to the other API.
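If you do need it, the flow with WGL_NV_DX_interop2 looks roughly like this. A sketch only: it assumes the wglDX* entry points have been loaded and that d3dDevice and d3dColorBuffer are an existing ID3D11Device* and ID3D11Texture2D*.

    // Open an interop device and register the D3D texture as a GL texture.
    HANDLE interopDevice = wglDXOpenDeviceNV(d3dDevice);

    GLuint glTexture = 0;
    glGenTextures(1, &glTexture);
    HANDLE interopTexture = wglDXRegisterObjectNV(
        interopDevice, d3dColorBuffer, glTexture,
        GL_TEXTURE_2D, WGL_ACCESS_READ_WRITE_NV);

    // Per frame: lock, render into the shared texture from GL, unlock, then present with D3D.
    wglDXLockObjectsNV(interopDevice, 1, &interopTexture);
    //   ... attach glTexture to an FBO and draw ...
    wglDXUnlockObjectsNV(interopDevice, 1, &interopTexture);

    // On shutdown:
    wglDXUnregisterObjectNV(interopDevice, interopTexture);
    wglDXCloseDeviceNV(interopDevice);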
Yeah, you can do it; both OpenGL and D3D support writable textures and locking them to get at the pixel data.
Simply render your scene in OpenGL to a texture, read its pixel data back, copy it into the locked D3D texture's pixel data, unlock it, then do whatever you want with the texture.
Performance would be dreadful, of course: you're stalling the GPU multiple times in a single "operation" and forcing it to synchronize with the CPU (which is passing the data) and the bus (for memory access). Plus there would be absolutely no benefit at all. But if you really want to try it, you can do it.
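For completeness, the CPU round trip described above looks roughly like this on the D3D11 side. A sketch only: d3dContext is assumed to be an ID3D11DeviceContext* and d3dStaging a CPU-writable staging texture matching the GL framebuffer's size and RGBA8 format.

    #include <d3d11.h>
    #include <GL/gl.h>
    #include <vector>
    #include <cstring>

    void copyGLFrameToD3D(ID3D11DeviceContext* d3dContext, ID3D11Texture2D* d3dStaging,
                          int width, int height) {
        // Read the finished OpenGL frame back to system memory.
        std::vector<unsigned char> pixels(static_cast<size_t>(width) * height * 4);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

        // Copy it row by row into the mapped D3D11 texture (row pitches may differ).
        D3D11_MAPPED_SUBRESOURCE mapped;
        if (SUCCEEDED(d3dContext->Map(d3dStaging, 0, D3D11_MAP_WRITE, 0, &mapped))) {
            for (int y = 0; y < height; ++y)
                std::memcpy(static_cast<unsigned char*>(mapped.pData) + y * mapped.RowPitch,
                            pixels.data() + static_cast<size_t>(y) * width * 4,
                            static_cast<size_t>(width) * 4);
            d3dContext->Unmap(d3dStaging, 0);
        }
    }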
I want to find a way to send all the geometry from an OpenGL framebuffer to a remote computer, which would do the rendering. This would allow me to have very complex simulations running on some kind of big supercomputer, and have the rendering done on a small mobile or simply cheap client machine.
Before starting to dig into my code, I thought it would be relatively easy: let's copy the vertex arrays and send them over the network, using boost::serialization for example, and that's it. But my geometry is encapsulated, which prevents me from accessing it from where I want to.
I have been able to render into a framebuffer instead of rendering directly to the screen, though, and I was wondering if there is any way to retrieve data from OpenGL's FBOs?
First, your terminology is wrong. Framebuffer objects are encapsulations of off-screen images/surfaces and don't hold geometry.
Second: What you imagine has been implemented already by the VirtualGL project (however it's stuck at a rather old OpenGL profile and doesn't support modern GPUs).
Also, X11/GLX has always supported indirect OpenGL operation, i.e. a remote machine sends OpenGL commands to the local display server, which is exactly what you are probably thinking of. But this has a major drawback: network bandwidth becomes the major bottleneck.
The graphical user interface hides mysterious mechanics under its curtain. It mixes 2D and 3D contexts on a single screen and allows for seamless composition of these two very different worlds. But in what way and at which level are they actually interleaved?
Practice has shown that an OpenGL context can be embedded into a 2D widget library, and so the whole 2D interface can be backed with OpenGL. Also, some applications may exploit hardware acceleration while others don't (while being rendered on the same screen). Does the graphics card "know" about 2D and 3D areas on the screen, and does the window manager create the illusion of a cohesive front-end? One can notice accelerated windows (3D, video) "hopping" to fit into the 2D interface when, e.g., scrolling a web page or moving a video player across the screen.
The question seems trivial, but I haven't met anybody able to give me a comprehensive answer; one that would enable me to embed an OpenGL context into a GTK+ application and understand why and how it works. I've tried GtkGlExt and GLUT, but I would like to understand the topic deeply and write my own solution as part of an academic project. I'd like to know what the relations between X, GLX, GTK, OpenGL and the window manager are, and how to navigate this network of libraries to consciously program it.
I don't expect that someone will write here a dissertation, but I will be grateful for any indications, suggestions or links to articles on that topic.
You're thinking much, much too complicated. Toolkits like GTK+ or Qt add quite a layer of abstraction over something that's actually rather simple: your system's graphics device consists of a processor and some memory it can operate on. In the simplest case the processor is the regular system CPU and the memory is normal system memory. Modern computers feature a special-purpose graphics processor (GPU), though, which has its own high-bandwidth memory.
The memory holds framebuffers. Logically, a framebuffer is a 2D array of values. The GPU can be programmed to process the values in the framebuffers in a certain way, which can be used to draw into framebuffers. The monitors displaying a picture are connected to a special piece of circuitry (usually called a RAMDAC or CRTC) which continuously feeds the data of a certain framebuffer in memory to the screen. So in the GPU's memory there's a framebuffer that goes directly to the screen. If you draw there, things will appear on the screen.
A program like the X11 server can load drivers that "know" how to program the GPU to draw graphical primitives. X11 itself defines certain graphics primitives, and extension modules can add further ones. X11 also allows the framebuffers in GPU memory to be segregated into logical areas called Drawables. Drawables on the on-screen framebuffer are called Windows. Since logical Windows can overlap, the X server also manages Z stacking, which it uses to sort the Windows for redraw. Every time a client wants to draw to some Window, the X11 server tells the GPU that drawing operations may modify only those pixels of the framebuffer in which the Window being drawn to is visible (this is called the "pixel ownership test"). The X11 server will also create Drawables (i.e. framebuffers) that are not part of the on-screen framebuffer memory area. Those are called PBuffers or Pixmaps in X11 terminology (with a special extension it's also possible to move a Window off-screen).
However, all those Drawables are just memory. Technically they are canvases to draw on with something. This something is called "graphics primitives". X11 itself provides a certain set, named X core. There's also a de-facto standard extension called XRender which provides primitives not found in X core. However, neither X core nor XRender provide graphics primitives with which the impression of a 3D drawing could be generated. So there's another extension, called GLX, which teaches the X11 server another set of graphics primitives, namely in the form of OpenGL.
However, X core, XRender and GLX/OpenGL are all just different pens, brushes and pencils that operate on the same kind of canvas, namely a simple framebuffer managed by X11.
And what do toolkits like Qt or GTK+ do, then? Well, they use X11 and the graphics primitives it provides to actually draw widgets like buttons, menus and stuff like that, which X11 itself doesn't know about.
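To make the GLX part concrete, here is a minimal sketch of attaching an OpenGL context to a plain X11 Window through GLX (error handling and the event loop are omitted; this is essentially the plumbing that helpers like GtkGlExt wrap for you):

    #include <X11/Xlib.h>
    #include <GL/glx.h>

    int main() {
        Display* dpy = XOpenDisplay(nullptr);

        // Ask GLX for a visual supporting RGBA, double buffering and a depth buffer.
        int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, GLX_DEPTH_SIZE, 24, None };
        XVisualInfo* vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);

        // Create an ordinary X11 Window using that visual...
        XSetWindowAttributes swa = {};
        swa.colormap = XCreateColormap(dpy, RootWindow(dpy, vi->screen), vi->visual, AllocNone);
        Window win = XCreateWindow(dpy, RootWindow(dpy, vi->screen), 0, 0, 640, 480, 0,
                                   vi->depth, InputOutput, vi->visual, CWColormap, &swa);
        XMapWindow(dpy, win);

        // ...and bind an OpenGL context to that Drawable; GL calls now draw into it.
        GLXContext ctx = glXCreateContext(dpy, vi, nullptr, True);
        glXMakeCurrent(dpy, win, ctx);

        glClearColor(0.2f, 0.3f, 0.4f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        glXSwapBuffers(dpy, win);

        // (Wait for events, clean up, etc.)
        return 0;
    }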
I've been looking for an answer to this question for some time. Does anyone know how to do it?
I've got some ideas; can you tell me if they are valid and which is the best one to use (if any of them are actually suitable solutions)?
1. Create a single DirectX 9 device and make it available to the different threads. Render the loading screen (with already-loaded buffers) while loading the new level assets and creating their vertex and index buffers.
2. Create two different DirectX 9 devices, one for each thread. One device is responsible for rendering only (and is attached to the window), while the other has no rendering surface and takes care of creating and filling the buffers.
3. Create a device with a thread-safety flag (I think there is such a thing, but it may not be called that) and do the same as in 1.
Thanks!
If you simply want to load a level, then you don't really need a separate thread for that. You could repaint the scene while loading resources, for example. I'd advise avoiding multithreading unless you can't live without it.
If you still want multithreading, pass D3DCREATE_MULTITHREADED to IDirect3D9::CreateDevice. Note that the DirectX SDK explicitly warns that using this flag may degrade performance.
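For illustration, the flag goes into the behavior-flags argument of CreateDevice; a minimal sketch, assuming d3d (an IDirect3D9*) and hWnd already exist:

    D3DPRESENT_PARAMETERS pp = {};
    pp.Windowed      = TRUE;
    pp.SwapEffect    = D3DSWAPEFFECT_DISCARD;
    pp.hDeviceWindow = hWnd;

    IDirect3DDevice9* device = nullptr;
    HRESULT hr = d3d->CreateDevice(
        D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hWnd,
        D3DCREATE_HARDWARE_VERTEXPROCESSING | D3DCREATE_MULTITHREADED,  // device serializes access internally
        &pp, &device);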
Creating a single device is the preferred solution, i.e. I'd advise using option 1.
It is possible to share resources between several devices, but this functionality is available only on Windows Vista and later. Because people still use Windows XP today, if you use something like that, your users will hate you.
Currently, my app is using a large amount of memory after loading textures (~200 MB).
I am loading the textures into a char buffer, passing it along to OpenGL and then killing the buffer.
It would seem that this memory is used by OpenGL, which is doing its own texture management internally.
What measures could I take to reduce this?
Is it possible to prevent OpenGL from managing textures internally?
One typical solution is to keep track of which textures you need for a given camera position or time-frame, and only load those when you need them (as opposed to loading every single texture when the app starts). You will need a "manager" which controls the loading, unloading and binding of the respective texture IDs, e.g. a container which associates a string (the name of the texture) with the integer ID you pass to glBindTexture.
Another option is to reduce the overall quality/size of the textures you are using.
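A minimal sketch of the kind of manager described in the first option, assuming a hypothetical loadImageRGBA() helper that decodes an image file into raw RGBA pixels:

    #include <map>
    #include <string>
    #include <vector>
    #include <GL/gl.h>

    // Hypothetical decoder: returns RGBA8 pixels and fills in width/height.
    std::vector<unsigned char> loadImageRGBA(const std::string& file, int* w, int* h);

    class TextureManager {
    public:
        // Returns the GL texture ID for a file, uploading it only on first use.
        GLuint acquire(const std::string& file) {
            auto it = cache_.find(file);
            if (it != cache_.end())
                return it->second;

            int w = 0, h = 0;
            std::vector<unsigned char> pixels = loadImageRGBA(file, &w, &h);

            GLuint tex = 0;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

            cache_[file] = tex;
            return tex;
        }

        // Frees a texture that is no longer needed for the current scene.
        void release(const std::string& file) {
            auto it = cache_.find(file);
            if (it != cache_.end()) {
                glDeleteTextures(1, &it->second);
                cache_.erase(it);
            }
        }

    private:
        std::map<std::string, GLuint> cache_;  // texture name -> GL texture ID
    };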
It would seem that this memory is used by OpenGL,
Yes
which is doing its own texture management internally.
No, not texture management. It just needs to keep the data somewhere. On modern systems the GPU is shared by several processes running simultaneously, and not all of the data may fit into fast GPU memory, so the OpenGL implementation must be able to swap data out. The GPU's fast memory is not storage; it's just another cache level, just like system memory is a cache for system storage.
Also GPUs may crash and modern drivers reset them in situ, without the user noticing. For this they need a full copy of the data as well.
Is it possible to prevent OpenGL from managing textures internally?
No, because this would either be tedious to do or break things. But what you can do is load only the textures you really need for drawing a given scene.
If you look through my writings about OpenGL, you'll notice that for years I've been telling people not to write silly things like "initGL" functions. Put everything into your drawing code. You'll go through a drawing-scheduling phase anyway (you must sort translucent objects far-to-near, do frustum culling, etc.). That gives you the opportunity to check which textures you need, and to load them. You can even go as far as loading only lower-resolution mipmap levels, so that when a scene is initially shown it has low detail, and load the higher-resolution mipmaps in the background; this of course requires the minimum and maximum mip levels to be set appropriately, as either texture or sampler parameters.
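For example, clamping the mip range so a texture is usable while only its coarse levels are resident might look like this (a sketch; tex, coarsestLevel and maxLevel are placeholders for whatever you have actually uploaded):

    // Tell GL that only levels [coarsestLevel .. maxLevel] are populated, so the
    // texture is "complete" and can be sampled before the fine levels arrive.
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, coarsestLevel);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, maxLevel);

    // Later, after streaming the finer levels in the background, widen the range:
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);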