I know mixing OpenGL and DirectX is not recommended, but I'm trying to build a bridge between two different applications that use separate graphics APIs, and I'm hoping there is a technique for sharing data, specifically textures.
I have a texture that is created in Direct3D like this:
d3_device->CreateTexture(width, height, 1,
                         D3DUSAGE_RENDERTARGET, D3DFMT_A8R8G8B8,
                         D3DPOOL_DEFAULT, &texture, NULL);
Is there any way I can use this texture from OpenGL without taking a roundtrip through system memory?
YES. As previously posted (see below), there should exist at least one solution.
I found two possible solutions:
On NVIDIA cards a new extension was introduced with the 256 drivers; see http://developer.download.nvidia.com/opengl/specs/WGL_NV_DX_interop.txt
DXGI is the driving force behind compositing all windows in Vista and Windows 7; see msdn.microsoft.com/en-us/library/ee913554.aspx
I have not yet experimented with either solution, but I hope to find some time to test one of them. The first one seems to be the easier of the two.
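For reference, a minimal sketch of how the WGL_NV_DX_interop flow might look, assuming a D3D9 device and a current OpenGL context already exist; the tokens and entry points below come from the extension spec linked above, and error handling is omitted:

#include <windows.h>
#include <d3d9.h>
#include <GL/gl.h>

// Token and entry-point typedefs from the WGL_NV_DX_interop spec.
#define WGL_ACCESS_READ_ONLY_NV 0x0000
typedef HANDLE (WINAPI *PFNWGLDXOPENDEVICENVPROC)(void *dxDevice);
typedef HANDLE (WINAPI *PFNWGLDXREGISTEROBJECTNVPROC)(HANDLE hDevice, void *dxObject,
                                                      GLuint name, GLenum type, GLenum access);
typedef BOOL (WINAPI *PFNWGLDXLOCKOBJECTSNVPROC)(HANDLE hDevice, GLint count, HANDLE *hObjects);
typedef BOOL (WINAPI *PFNWGLDXUNLOCKOBJECTSNVPROC)(HANDLE hDevice, GLint count, HANDLE *hObjects);

void DrawWithSharedTexture(IDirect3DDevice9 *d3dDevice, IDirect3DTexture9 *d3dTexture)
{
    // The entry points live in the driver; fetch them while a GL context is current.
    PFNWGLDXOPENDEVICENVPROC wglDXOpenDeviceNV =
        (PFNWGLDXOPENDEVICENVPROC)wglGetProcAddress("wglDXOpenDeviceNV");
    PFNWGLDXREGISTEROBJECTNVPROC wglDXRegisterObjectNV =
        (PFNWGLDXREGISTEROBJECTNVPROC)wglGetProcAddress("wglDXRegisterObjectNV");
    PFNWGLDXLOCKOBJECTSNVPROC wglDXLockObjectsNV =
        (PFNWGLDXLOCKOBJECTSNVPROC)wglGetProcAddress("wglDXLockObjectsNV");
    PFNWGLDXUNLOCKOBJECTSNVPROC wglDXUnlockObjectsNV =
        (PFNWGLDXUNLOCKOBJECTSNVPROC)wglGetProcAddress("wglDXUnlockObjectsNV");

    // 1. Open the D3D device for interop.
    HANDLE interopDevice = wglDXOpenDeviceNV(d3dDevice);

    // 2. Wrap the D3D render-target texture in an OpenGL texture object.
    GLuint glTex = 0;
    glGenTextures(1, &glTex);
    HANDLE interopTex = wglDXRegisterObjectNV(interopDevice, d3dTexture, glTex,
                                              GL_TEXTURE_2D, WGL_ACCESS_READ_ONLY_NV);

    // 3. Lock before touching it from GL, unlock when done (D3D may use it in between).
    wglDXLockObjectsNV(interopDevice, 1, &interopTex);
    glBindTexture(GL_TEXTURE_2D, glTex);
    // ... draw with the texture ...
    wglDXUnlockObjectsNV(interopDevice, 1, &interopTex);
}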
[I think it should be possible. In recent Windows versions (Vista and 7) one can see a preview of any window's content in the taskbar (whether it's GDI, Direct3D, or OpenGL).
To my knowledge OpenGL previews were not supported in earlier Windows versions. So at least in the newer versions there should be a way to couple or share render contexts even between different processes...
This is also true for other modern platforms, which share render contexts system-wide to achieve various rendering effects.]
I think it is not possible, as both have different models of a texture.
You cannot access the texture memory directly without either DirectX or OpenGL.
The other way around: if it were possible, you should be able to retrieve the texture address, its pitch, width and other (hardware-dependent) memory-layout information, create a dummy texture in the other system, and push the retrieved data into that newly created texture object.
Obviously, this will not work on any decent hardware, and even if it did, it would not be very portable.
I don't think it's possible without downloading the data into host memory and re-uploading it into device memory.
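To make that concrete, here is a hedged sketch of that roundtrip for the D3D9 render-target texture from the question (render target to system memory to GL texture). It is slow but works on any hardware; error handling is omitted and the function name is just illustrative:

#include <windows.h>
#include <d3d9.h>
#include <GL/gl.h>

void CopyD3DTextureToGL(IDirect3DDevice9 *device, IDirect3DTexture9 *d3dTex,
                        GLuint glTex, UINT width, UINT height)
{
    // Copy the render target's top mip level into a system-memory surface.
    IDirect3DSurface9 *rt = NULL, *sysmem = NULL;
    d3dTex->GetSurfaceLevel(0, &rt);
    device->CreateOffscreenPlainSurface(width, height, D3DFMT_A8R8G8B8,
                                        D3DPOOL_SYSTEMMEM, &sysmem, NULL);
    device->GetRenderTargetData(rt, sysmem);

    // Map the system-memory copy and upload it into the GL texture.
    D3DLOCKED_RECT lr;
    sysmem->LockRect(&lr, NULL, D3DLOCK_READONLY);

    glBindTexture(GL_TEXTURE_2D, glTex);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, lr.Pitch / 4);  // pitch is in bytes, 4 bytes per pixel
    // D3DFMT_A8R8G8B8 is BGRA byte order in memory, hence GL_BGRA_EXT.
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_BGRA_EXT, GL_UNSIGNED_BYTE, lr.pBits);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);

    sysmem->UnlockRect();
    sysmem->Release();
    rt->Release();
}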
It's possible now.
Use the ANGLE OpenGL ES API instead of native OpenGL.
You can share a Direct3D texture with the EGL_ANGLE_d3d_texture_client_buffer extension.
https://github.com/microsoft/angle/wiki/Interop-with-other-DirectX-code#demo
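A rough sketch of what that might look like, assuming an ANGLE EGL display, config and context are already set up. eglCreatePbufferFromClientBuffer is standard EGL; the EGL_D3D_TEXTURE_ANGLE buffer type is defined by the ANGLE extension headers, so verify the exact tokens against the extension text and the demo linked above:

#include <EGL/egl.h>
#include <EGL/eglext.h>   // with ANGLE's headers this pulls in the ANGLE-specific tokens
#include <GLES2/gl2.h>
#include <d3d11.h>

GLuint WrapD3D11Texture(EGLDisplay display, EGLConfig config, ID3D11Texture2D *d3dTexture)
{
    const EGLint attribs[] = {
        EGL_TEXTURE_TARGET, EGL_TEXTURE_2D,
        EGL_TEXTURE_FORMAT, EGL_TEXTURE_RGBA,
        EGL_NONE
    };

    // Wrap the D3D11 texture in a pbuffer surface (EGL_D3D_TEXTURE_ANGLE is the
    // buffer type from the extension; check it against ANGLE's eglext headers).
    EGLSurface pbuffer = eglCreatePbufferFromClientBuffer(
        display, EGL_D3D_TEXTURE_ANGLE,
        reinterpret_cast<EGLClientBuffer>(d3dTexture), config, attribs);

    // Bind that surface to a GLES texture; sampling the texture now reads the
    // contents of the shared Direct3D texture.
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    eglBindTexImage(display, pbuffer, EGL_BACK_BUFFER);
    return tex;
}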
No.
Think of it like sharing an image between Photoshop and another image viewer. You would need a memory-management library that those two applications shared.
I have a texture in Unity which I will modify frequently.
Now there are two options:
I can make changes to the texture by calling SetPixels and then calling Texture2D.Apply. I think Apply actually copies the data from the CPU to the GPU.
The other option is to modify the texture in native code by getting the texture's native handle and modifying it with glTexSubImage2D.
Now, I read that Apply copies only the changed pixels to the GPU, not the full texture, but I really doubt that is possible. If it is true, does that mean calling Texture2D.Apply == glTexSubImage2D in terms of performance?
If not, what should I use if I need good performance? I don't really want to go to the native side, as I would have to manage native code for the different graphics APIs supported by Unity, like OpenGL, DX etc.
Texture2D.Apply() and glTexSubImage2D are both used to update a Texture. They perform the same action, but there are differences between them.
GetPixels, SetPixels and Texture2D.Apply() are done on the CPU.
You should only use GetPixels, SetPixels and Texture2D.Apply() if you need the individual pixels on the CPU. A good example is when you want to send the texture data over the network.
glTexSubImage2D updates the texture directly on the GPU side and does not require SetPixels or GetPixels.
glTexSubImage2D is much faster than GetPixels, SetPixels and Texture2D.Apply().
If not, what should I use if I need good performance? I don't really want to go to the native side, as I would have to manage native code for the different graphics APIs supported by Unity, like OpenGL, DX etc.
You mentioned that you will be modifying the image frequently, so do not use GetPixels, SetPixels and Texture2D.Apply(). I know it is the easiest solution but it is very slow.
For the best performance:
1. Use glTexSubImage2D
Pass Texture.GetNativeTexturePtr() to the native C++ side as an IntPtr, then use glTexSubImage2D to directly modify it (a rough sketch of the native side is included below). I noticed that most of your questions are about C++ and OpenGL, so this shouldn't be hard for someone like you.
As for supporting different graphics APIs, the first to support is OpenGL because that's supported on all major platforms. From the Editor, change the Graphics API to OpenGL then start coding. It should work on Windows, Mac, Linux, Android and iOS. If you want to support Direct3D, Metal and Vulkan then go for them too. You just don't have to. OpenGL is enough for this.
2. Use Shaders
You can combine Unity Shaders and Compute Shaders and still get more performance than glTexSubImage2D because this will be happening on the GPU instead of CPU. I personally find shaders complicated so #1 should be your priority.
Yes, glTexSubImage2D can be used to update a smaller rectangular portion of a larger texture.
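For illustration, a hedged sketch of what option 1's native side might look like. The function name UpdateSubTexture and the raw export are made up for this example; a real plugin would normally route the call through Unity's render-thread plugin callbacks (GL.IssuePluginEvent) rather than calling GL from the script thread:

#include <cstdint>
#if defined(_WIN32)
  #include <windows.h>
  #define EXPORT_API extern "C" __declspec(dllexport)
#else
  #define EXPORT_API extern "C"
#endif
#include <GL/gl.h>

// Called from C# with the value of Texture.GetNativeTexturePtr().
EXPORT_API void UpdateSubTexture(void *nativeTexturePtr,
                                 int x, int y, int width, int height,
                                 const uint8_t *rgbaPixels)
{
    // On Unity's OpenGL backends the native pointer is simply the GL texture name.
    GLuint tex = (GLuint)(uintptr_t)nativeTexturePtr;
    glBindTexture(GL_TEXTURE_2D, tex);

    // Upload only the dirty rectangle; the rest of the texture is left untouched.
    glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, rgbaPixels);
}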
To check extension availability, I need to use GL.isExtensionAvailable. In order to get the GL object, I need to create some GLCanvas and get the GL instance in init() or display().
Is there a way to check the extension availability even before I create the window, at the beginning of main()?
I guess you are out of luck. The availability of an extension may change depending on which video card drives the screen on which you want to display your GL content, so you cannot reliably get that information before creating the GL context. You may be able to create an offscreen context just to get that information; however, the result may differ from that of a context bound to a window.
It's possible to call GLContext.getCurrent().getPlatformExtensionsString() very early, but it will return a non-null value only when the OpenGL context has been made current at least once and on the appropriate thread. Don't forget to call GLProfile.initSingleton() before calling GLContext.getCurrent().
However, pqnet's comment is correct. Numerous computers (especially modern laptops) have several graphics cards and hard-to-understand mechanisms (for example Optimus) for switching between them depending on power consumption and the performance profile ("high performance" or not).
Moreover, different drivers might be supported (the crappy GDI renderer and the true OpenGL driver under Windows), several profiles are often supported (forward compatible and backward compatible profiles, ES profiles even on desktop machines), ... JOGL will do its best to pick the most capable one but it can use different ones for offscreen and onscreen. The first OpenGL context used by GLProfile and the one used by the first created drawable can be very different.
This problem isn't only a problem with JOGL. My suggestion helps to know which extensions are available with the default device. You can use GLProfile.glAvailabilityToString() and GLProfile.getDefault() too.
N.B: I assume that you use at least JOGL 2.3.1. The maintenance of JOGL 1 was stopped about 5 years ago.
My straight answer would be NO. But I am curious how they created this video http://www.youtube.com/watch?v=HC3JGG6xHN8
They used video editing software. They recorded two nearly deterministic run-throughs of their engine and spliced them together.
As for the question posed by your title, not within the same window. It may be possible within the same application from two windows, but you'd be better off with two separate applications.
Yes, it is possible. I did this as an experiment for a graduate course; I implemented half of a deferred shading graphics engine in OpenGL and the other half in D3D10. You can share surfaces between OpenGL and D3D contexts using the appropriate vendor extensions.
Does it have any practical applications? Not many that I can think of. I just wanted to prove that it could be done :)
I digress, however. That video is just a side-by-side of two separately recorded videos of the Haven benchmark running in the two different APIs.
My straight answer would be NO.
My straight answer would be "probably yes, but you definitely don't want to do that."
But I am curious how they created this video http://www.youtube.com/watch?v=HC3JGG6xHN8
They prerendered the video and simply combined the clips in a video editor. Because the camera has a fixed path, that can be done easily.
Anyway, you could render both (DirectX/OpenGL) scenes onto offscreen buffers, and then combine them using either API to render the final result. You would read data from the render buffer in one API and transfer it into a renderable buffer used in the other API. The dumbest way to do this is through system memory (which will be VERY slow), but it is possible that some vendors (NVIDIA, in particular) provide extensions for this scenario.
On the Windows platform you could also place two child windows/panels side by side on the main window (so you'll get the same effect as in that YouTube video), and create an OpenGL context for one of them and a DirectX device for the other. Unless there's some restriction I'm not aware of, that should work, because in order to render 3D graphics you just need a window with a handle (HWND). However, both windows will be completely independent of each other and will not share resources, so you'll need twice the memory for textures alone to run them both.
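A hedged sketch of that side-by-side idea: one parent window, an OpenGL context on the left child and a D3D9 device on the right child. The parent HWND and instance handle are assumed to exist already, and error handling is omitted:

#include <windows.h>
#include <d3d9.h>
#include <GL/gl.h>

void CreateSideBySide(HWND parent, HINSTANCE hInst, int w, int h)
{
    // Two child windows filling the left and right halves of the parent.
    HWND left  = CreateWindowExA(0, "STATIC", "", WS_CHILD | WS_VISIBLE,
                                 0, 0, w / 2, h, parent, NULL, hInst, NULL);
    HWND right = CreateWindowExA(0, "STATIC", "", WS_CHILD | WS_VISIBLE,
                                 w / 2, 0, w / 2, h, parent, NULL, hInst, NULL);

    // OpenGL context on the left child.
    HDC dc = GetDC(left);
    PIXELFORMATDESCRIPTOR pfd = { sizeof(pfd), 1,
        PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER,
        PFD_TYPE_RGBA, 32 };
    SetPixelFormat(dc, ChoosePixelFormat(dc, &pfd), &pfd);
    HGLRC glrc = wglCreateContext(dc);
    wglMakeCurrent(dc, glrc);

    // Direct3D 9 device on the right child.
    IDirect3D9 *d3d = Direct3DCreate9(D3D_SDK_VERSION);
    D3DPRESENT_PARAMETERS pp = {};
    pp.Windowed = TRUE;
    pp.SwapEffect = D3DSWAPEFFECT_DISCARD;
    pp.hDeviceWindow = right;
    IDirect3DDevice9 *device = NULL;
    d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, right,
                      D3DCREATE_HARDWARE_VERTEXPROCESSING, &pp, &device);

    // Each side now renders with its own API; the two share no resources.
}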
How can I draw a pixel array very fast in c++?
I've seen many questions like this on Stack Overflow, and they are all answered with:
use GDI (Windows)
use OpenGL
...
But there must be a way: how does OpenGL do it?
I'm writing a little raytracer and need to draw every pixel many times per second.
OpenGL is able to do it, platform-independently and fast, so how can I achieve that without OpenGL?
And "without OpenGL" does not mean:
use SDL (slow)
use this/that library
Please only suggest the platform-native methods, or the library closest to that.
If it is possible (I know it is), how can I do this?
Platform-independent solutions are preferred.
Drawing graphics on Linux, you either have to use X11 or OpenGL. (And in the near future Wayland may be another option.) On Linux there is no "native" way of doing graphics, because the Linux kernel doesn't care about graphics APIs. It provides interfaces (DRM) on top of which graphics systems are then implemented in user space. If you just want to splat pixels on the screen, without caring about windows, you could also mmap the framebuffer device (/dev/fb0) – but you normally don't want that, because nobody wants his screen clobbered by some program he can't move or hide.
Drawing single points is inefficient, no matter which API is being used, due to the protocol overhead.
So X11 it is. The best bet is to use the MIT-SHM extension, which you use to alter pixels in a buffer that is then blitted as a whole by the X11 server. Of course, doing this with the pure Xlib functions is annoyingly cumbersome. This is what SDL effectively wraps up nicely for you.
The other option is OpenGL. OpenGL is not a library! It's a system-level API that gives you almost direct access to the GPU, and it integrates nicely with X11. Yes, the API is provided through a library that gets loaded, but technically that library is just a "wrapper" or "interface" to the actual driver. Drawing single points with OpenGL makes no sense. But you can "batch up" several points into a list (using a vertex array) and then process that list. So the idea is to collect all the incoming points between two display refresh intervals and draw them in one single batch.
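A small sketch of that batching idea, using a classic client-side vertex array for brevity (the Point layout and flush function are illustrative; a VBO would work the same way):

#include <GL/gl.h>
#include <vector>

struct Point { float x, y; float r, g, b; };
static std::vector<Point> g_batch;   // filled by the raytracer between display refreshes

void FlushPointBatch()
{
    if (g_batch.empty()) return;

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glVertexPointer(2, GL_FLOAT, sizeof(Point), &g_batch[0].x);
    glColorPointer(3, GL_FLOAT, sizeof(Point), &g_batch[0].r);

    // One draw call for a whole frame's worth of points.
    glDrawArrays(GL_POINTS, 0, (GLsizei)g_batch.size());

    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
    g_batch.clear();
}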
Platform-independent solutions are preferred.
Why are you asking about native APIs then? By definition there can be no platform-independent native API. Either you're native, or you're platform-independent.
And in your particular scenario I think SDL would be the best solution, because it offers just the right kind of abstraction and program-side interface for a raytracer. Just FYI: virtual machines like QEMU use SDL.
Or you use OpenGL, which is a truly platform-neutral API with wide support.
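For comparison, roughly what the SDL route looks like: a streaming texture that the raytracer writes into and that SDL blits to the window once per frame (dimensions and pixel format here are just placeholders):

#include <SDL.h>
#include <cstdint>
#include <vector>

int main()
{
    const int W = 640, H = 480;
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window   *win = SDL_CreateWindow("raytracer", SDL_WINDOWPOS_CENTERED,
                                         SDL_WINDOWPOS_CENTERED, W, H, 0);
    SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);
    SDL_Texture  *tex = SDL_CreateTexture(ren, SDL_PIXELFORMAT_ARGB8888,
                                          SDL_TEXTUREACCESS_STREAMING, W, H);

    std::vector<uint32_t> pixels(W * H, 0xFF000000);  // the raytracer writes into this

    bool running = true;
    while (running) {
        SDL_Event e;
        while (SDL_PollEvent(&e))
            if (e.type == SDL_QUIT) running = false;

        // ... raytrace into pixels[] ...

        SDL_UpdateTexture(tex, NULL, pixels.data(), W * sizeof(uint32_t));
        SDL_RenderClear(ren);
        SDL_RenderCopy(ren, tex, NULL, NULL);
        SDL_RenderPresent(ren);
    }
    SDL_Quit();
    return 0;
}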
Drawing graphics on Linux you either have to use X11, or OpenGL.
This is absolutely false! Counterexample: there are platforms that don't run X11, yet they display pixels (e.g. fonts).
Side note: OpenGL usually depends on X11 (it's possible, albeit hard, to run OpenGL without X11).
As #datenwork says, there are at least two other ways to draw pixels:
The framebuffer device (fbdev), an abstraction to interface with graphics hardware. Very old, designed by Martin Schaller, see the kernel docs. Source code is here. Also see here. Here's the simplest possible framebuffer driver.
The Direct Rendering Manager (DRM), a kernel subsystem that provides an API for userland apps to send commands/data directly to the GPU. (Seems suspiciously similar to what OpenGL does, but idk!) Source code is here. Here's a DRM example that initializes a simple display pipeline.
Both of these are part of the kernel, so they're lower-level than X11, which is not part of the kernel. Both can draw arbitrary pixels (e.g. penguins). I'd guess both of these are platform-independent (like OpenGL).
See this for more on how to draw stuff on Linux.
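As a minimal sketch of the fbdev route mentioned above (Linux only, needs permission on /dev/fb0, and assumes a 32-bit-per-pixel mode; real code should check the reported pixel format):

#include <fcntl.h>
#include <linux/fb.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main()
{
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) return 1;

    fb_var_screeninfo vinfo;
    fb_fix_screeninfo finfo;
    ioctl(fd, FBIOGET_VSCREENINFO, &vinfo);
    ioctl(fd, FBIOGET_FSCREENINFO, &finfo);

    // Map the framebuffer into this process and write pixels directly.
    uint8_t *fb = (uint8_t *)mmap(0, finfo.smem_len,
                                  PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    // Paint the visible screen a solid colour, pixel by pixel.
    for (uint32_t y = 0; y < vinfo.yres; ++y)
        for (uint32_t x = 0; x < vinfo.xres; ++x) {
            uint32_t *px = (uint32_t *)(fb + y * finfo.line_length
                                           + x * (vinfo.bits_per_pixel / 8));
            *px = 0x00FF8000;  // XRGB: orange
        }

    munmap(fb, finfo.smem_len);
    close(fd);
    return 0;
}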
On Windows XP (64-bit) it seems to be impossible to render with OpenGL to two screens connected to different graphics cards with different GPUs (e.g. two NVIDIAs of different generations). What happens in this case is that rendering works only in one of the screens. On the other hand, with Direct3D it works without problem, rendering in both screens. Anyone knows why is this? Or more importantly: is there a way to render in both screens with OpenGL?
I have discovered that on Windows 7 rendering works on both screens even with GPUs of different brands (e.g. AMD and Intel). I think this may be because of its display model, which runs on top of a Direct3D compositor if I am not mistaken. This is just a supposition; I really don't know if it is the actual reason.
If Direct3D were the solution, one idea would be to do all the rendering with OpenGL to a texture, and then somehow render this texture with Direct3D, supposing it isn't too slow.
What happens in Windows 7 is that one GPU, or several GPUs of the same type coupled together, render the image to an offscreen buffer, which is then composited across the screens. However, it is (as yet) impossible to distribute the rendering of a single context over GPUs of different makes. That would require a standardized communication and synchronization infrastructure, which simply doesn't exist. Neither OpenGL nor Direct3D can do it.
What can be done is copying the rendering results into the onscreen framebuffers of the several GPUs. Windows 7 and DirectX have support for this built in. Doing it with OpenGL is a bit more involved. Technically you render to an offscreen device context, usually a so-called PBuffer. After finishing the rendering, you copy the result using GDI functions to your window. This last copying step, however, is very slow compared to the rest of the OpenGL operation.
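A hedged sketch of just that copy step (the pbuffer/FBO setup is omitted and the function name is illustrative): read the finished OpenGL frame back and blit it into the window with GDI.

#include <windows.h>
#include <GL/gl.h>
#include <vector>

void BlitGLFrameToWindow(HWND hwnd, int width, int height)
{
    // Read the rendered image back from the current OpenGL context.
    std::vector<unsigned char> pixels(width * height * 4);
    glReadPixels(0, 0, width, height, GL_BGRA_EXT, GL_UNSIGNED_BYTE, pixels.data());

    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = width;
    bmi.bmiHeader.biHeight      = height;      // bottom-up, matching glReadPixels
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    // The slow part: push the pixels to the window with GDI.
    HDC dc = GetDC(hwnd);
    StretchDIBits(dc, 0, 0, width, height, 0, 0, width, height,
                  pixels.data(), &bmi, DIB_RGB_COLORS, SRCCOPY);
    ReleaseDC(hwnd, dc);
}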
Both NVIDIA and AMD have ways of allowing you to choose which GPU to use. NVIDIA has WGL_NV_gpu_affinity and AMD has WGL_AMD_gpu_association. They both work rather differently, so you'll have to do different things on the different hardware to get the behavior you need.
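For the NVIDIA path, a hedged sketch of how WGL_NV_gpu_affinity is typically used (entry points fetched with wglGetProcAddress from an existing context; the typedefs mirror wglext.h and the helper name is made up): enumerate GPUs, create an affinity DC for the one you want, and create the GL context on that DC.

#include <windows.h>
#include <GL/gl.h>

DECLARE_HANDLE(HGPUNV);
typedef BOOL (WINAPI *PFNWGLENUMGPUSNVPROC)(UINT iGpuIndex, HGPUNV *phGpu);
typedef HDC  (WINAPI *PFNWGLCREATEAFFINITYDCNVPROC)(const HGPUNV *phGpuList);

HGLRC CreateContextOnGpu(UINT gpuIndex)
{
    PFNWGLENUMGPUSNVPROC wglEnumGpusNV =
        (PFNWGLENUMGPUSNVPROC)wglGetProcAddress("wglEnumGpusNV");
    PFNWGLCREATEAFFINITYDCNVPROC wglCreateAffinityDCNV =
        (PFNWGLCREATEAFFINITYDCNVPROC)wglGetProcAddress("wglCreateAffinityDCNV");

    // Pick the GPU at the requested index.
    HGPUNV gpu;
    if (!wglEnumGpusNV(gpuIndex, &gpu)) return NULL;

    // Build a NULL-terminated GPU list and get a DC restricted to that GPU.
    HGPUNV gpuList[] = { gpu, NULL };
    HDC affinityDC = wglCreateAffinityDCNV(gpuList);

    PIXELFORMATDESCRIPTOR pfd = { sizeof(pfd), 1,
        PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER, PFD_TYPE_RGBA, 32 };
    SetPixelFormat(affinityDC, ChoosePixelFormat(affinityDC, &pfd), &pfd);

    // Rendering with this context is constrained to the selected GPU.
    return wglCreateContext(affinityDC);
}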