Combining DirectX 9 and DirectX 11 rendering in one application - C++

I'm currently developing a renderer in C++ for a games company I work for, and I'm restricted to using DirectX 9 libraries. This is because the target hardware our games run on requires our games to create and use a Dx9 device that they can hook into via a dll in order to do some custom drawing of overlays over our games.
The frustrating thing is that the hardware is capable of running DirectX 11 (which we've tried and tested), but because our hardware provider won't update the dll that draws the overlays, we're stuck using a Dx9 device, which we can't even upgrade to an extended (Dx9Ex) device, and this limits what we can do with respect to shaders and the other improvements that Dx11 brings.
I was wondering if it would be possible to have a Dx11 and a Dx9 device running side by side in the same application, with the Dx11 device doing all the behind-the-scenes work while the Dx9 device presents the render target of the Dx11 pipeline. That way we could implement Dx11 shaders but still have a Dx9 device doing the "drawing" to screen that the third-party dll could hook into and draw overlays over. I was thinking of something like setting a Dx9 device texture as a render target for the Dx11 pipeline, but I'm not sure that's even possible.
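For illustration, this is roughly the kind of surface sharing I have in mind (just an untested sketch; it relies on DXGI shared handles, and as far as I know the Dx9 end of that normally requires a Dx9Ex device, which is exactly the upgrade we may not be allowed to make):

// Hypothetical sketch: render with Dx11, present the same surface with Dx9Ex.
// NOTE: assumes an IDirect3DDevice9Ex; a plain IDirect3DDevice9 cannot open
// shared handles, which is the crux of the problem.
#include <d3d9.h>
#include <d3d11.h>
#include <dxgi.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

HRESULT CreateSharedRenderTarget(ID3D11Device* dx11Device,
                                 IDirect3DDevice9Ex* dx9Device,
                                 UINT width, UINT height,
                                 ComPtr<ID3D11Texture2D>& dx11Tex,
                                 ComPtr<IDirect3DTexture9>& dx9Tex)
{
    // 1) Create a shareable BGRA render target on the Dx11 device.
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width = width;
    desc.Height = height;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;   // matches D3DFMT_A8R8G8B8
    desc.SampleDesc.Count = 1;
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
    desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED;
    HRESULT hr = dx11Device->CreateTexture2D(&desc, nullptr, &dx11Tex);
    if (FAILED(hr)) return hr;

    // 2) Get the shared handle from DXGI.
    ComPtr<IDXGIResource> dxgiRes;
    hr = dx11Tex.As(&dxgiRes);
    if (FAILED(hr)) return hr;
    HANDLE shared = nullptr;
    hr = dxgiRes->GetSharedHandle(&shared);
    if (FAILED(hr)) return hr;

    // 3) Open the same surface on the Dx9Ex device by passing the handle
    //    as CreateTexture's pSharedHandle parameter, then draw it as a
    //    full-screen quad (or StretchRect it) before Present.
    return dx9Device->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                                    D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT,
                                    &dx9Tex, &shared);
}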
Any feedback, comments or advice on whether these ideas would be possible or if there are any alternatives that I'm not thinking of would be welcome.
Thanks in advance.

Related

Why does my background flicker through my meshes in DirectX 12 Release build?

I have been developing this game in C++ in Visual Studio using DirectX 12. I used the Debug build configuration during development and the graphics were smooth as butter.
When I was preparing to publish the game on the Windows Store so I could share it with friends as play testers, I switched to the Release build configuration. As soon as I did that, I started getting this flicker of the background color coming through my wall meshes.
Here is a short video that shows the flicker.
Here is a longer video that I made before switching to Release build configuration that shows there is no flicker.
I am new to DirectX 12. This project was how I taught myself: I studied Microsoft's Direct3D 12 Graphics documentation, and I studied the DirectX 12 templates in Visual Studio.
I felt quite pleased that I was able to master DirectX 12 well enough to produce this game as well as I did. Then the Release thing, and the flicker thing, and I am at a loss.
Is this likely to be a shader issue? or a command queue issue? or a texture issue? or something else?
DirectX 12 is an API designed for graphics experts, and it provides a great deal of application control compared to, say, Direct3D 11. The cost of that control is that you, the application developer, are responsible for getting everything right, making sure it works across a broad range of hardware, and robustly handling stress scenarios and error cases, all yourself.
There are numerous ways you can get 'blinking' effects in DirectX 12. A common one is failing to keep the graphics memory holding your constants, IBs, VBs, etc. unchanged between the time you call Draw and the time the actual draw completes on the GPU, which often happens a few frames later. This synchronization is a key challenge of using the API properly. For an example solution, see GraphicsMemory in DirectX Tool Kit for DirectX 12.
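As a rough illustration of that synchronization (a minimal sketch, with made-up member names rather than anything from your project), the idea is to fence each frame and refuse to reuse a frame slot's upload memory until the GPU has signalled that it finished the frame that used it:

// Minimal sketch of per-frame fence synchronization (hypothetical member names).
// The point: don't overwrite a frame's constants, VBs or IBs until the GPU
// signals that it has finished drawing with them.
void EndFrame()    // called after ExecuteCommandLists + Present
{
    m_frameFenceValues[m_frameIndex] = ++m_nextFenceValue;
    m_commandQueue->Signal(m_fence.Get(), m_frameFenceValues[m_frameIndex]);
    m_frameIndex = (m_frameIndex + 1) % FrameCount;
}

void BeginFrame()  // called before touching this slot's allocator/upload memory
{
    const UINT64 required = m_frameFenceValues[m_frameIndex];
    if (m_fence->GetCompletedValue() < required)
    {
        // The GPU is still a few frames behind and may be reading this slot's
        // data; block here instead of stomping on it.
        m_fence->SetEventOnCompletion(required, m_fenceEvent);
        WaitForSingleObjectEx(m_fenceEvent, INFINITE, FALSE);
    }
    m_commandAllocators[m_frameIndex]->Reset();
}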
If you are new to DirectX development, I strongly advise starting with DirectX 11. It offers basically the same functionality, but it takes care of buffer renaming, resource barriers, fences, VRAM overcommit, etc. for you.

Software and Hardware rendering in SDL2/SFML2

First off, I'm relatively new to this. I'm looking forward to picking up both the SDL2 and SFML2 libraries for game dev (and other stuff).
Now I know for a fact that both SDL2 and SFML2 are capable of creating OpenGL enabled contexts, through which OpenGL graphics programming may be done.
But online, I've read discussions where people said something to the effect of "SDL 1.2 is software accelerated, SDL2 and SFML2 are hardware accelerated by default". I know that software rendering is graphics using the CPU alone, while hardware rendering uses the graphics card/pipeline.
So my question is, with regards to these game libraries:
Part 1: When someone says one is software/hardware accelerated by default, what does that mean? Is it (my guess) that if, say, SFML2 is hardware accelerated by default, then even basic 2D graphics are done using hardware rendering as the backend pipeline, even if I didn't explicitly do any hardware-rendering programming in the code?
Part 2: And if that is true, is there any option within these libraries to set that to software acceleration/rendering?
Part 3: Which of these 2 libraries (SDL2 vs SFML2) has better overall performance/speed?
Thanks in advance for any answer and apologies if you found the question dumb.
I can't say anything about SFML (though I'm almost sure things are very similar), but for SDL it is as you say. In SDL1, 2D drawing is implemented as blitting directly onto the display surface and then sending that surface to the display, so it is mostly software (although minor hardware acceleration is still possible). SDL2 has SDL_Renderer and textures (which are GPU-side images or render targets), and any basic drawing that goes through the renderer may be accelerated by one backend or another. Which backend gets chosen depends on the system your program runs on and on user settings; e.g. it defaults to OpenGL on Linux, Direct3D on Windows (though it can still use OpenGL), and OpenGL ES on Android/iOS.
You can get the software renderer either by explicitly calling SDL_CreateSoftwareRenderer, by hinting SDL to use the software driver, or even by overriding the choice from outside via the SDL_RENDER_DRIVER environment variable.
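For example, a small sketch (my addition, not from the original answer) showing the different ways to end up with a software renderer in SDL2:

#include <SDL.h>

int main(int, char**)
{
    SDL_Init(SDL_INIT_VIDEO);

    // Option 1: hint SDL to prefer the software driver before creating the renderer.
    // (Equivalent to running the program with the environment variable SDL_RENDER_DRIVER=software.)
    SDL_SetHint(SDL_HINT_RENDER_DRIVER, "software");

    SDL_Window* window = SDL_CreateWindow("demo", SDL_WINDOWPOS_CENTERED,
                                          SDL_WINDOWPOS_CENTERED, 640, 480, 0);

    // Option 2: ask for the software renderer explicitly via the flag.
    SDL_Renderer* renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_SOFTWARE);

    // Option 3: create a software renderer that draws into a surface you own.
    // SDL_Renderer* renderer = SDL_CreateSoftwareRenderer(SDL_GetWindowSurface(window));

    SDL_SetRenderDrawColor(renderer, 32, 32, 32, 255);
    SDL_RenderClear(renderer);
    SDL_RenderPresent(renderer);
    SDL_Delay(2000);

    SDL_DestroyRenderer(renderer);
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}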
If you intend to use OpenGL for 3D graphics, then you just ignore all of that: create a window with an OpenGL context as usual and never use SDL_Renderer.

Windowless OpenGL Context in Apache2 Module

I'm trying to develop an Apache2 module that utilizes OpenGL to perform off-screen rendering and dynamically generate images that I can then send back to the client.
Apache2 is running on an Ubuntu 12.04 machine and I created a test module that renders a quad and stores the frame as an image to disk using OpenGL/GLX. But when the module receives a client request, it crashes at XOpenDisplay(0) with a segmentation fault. Any ideas what could be going wrong?
Edit:
All the examples I have seen talk about using a pixel buffer (PBuffer). As far as I know, these are deprecated and FBOs should be used instead. Can someone explain how to create a context and use FBOs to perform off-screen rendering?
While technically it's perfectly possible to do windowless, display-server-less, off-screen GPU-accelerated rendering with OpenGL, in practice it's nearly impossible these days because you need a display environment to actually get access to the GPU. Fortunately, the structure of graphics systems is changing (hybrid graphics, display compositors). Mesa already provides an off-screen context creation mode (OSMesa), but it's far from feature complete.
So right now you'll need some kind of display-server drawable to work with, on which you can bind a context. X11 offers two kinds of GPU-accelerated drawables: windows and PBuffers. You can use FBOs with either (PBuffers are technically windows that cannot be mapped to the root window and have an off-screen canvas). The easiest way to go is to create a regular window on an X server but never show it; you can still create an OpenGL context on it and create FBOs, as shown in numerous tutorials. But for OpenGL to work, the X server you use must be active, hold the console, and be configured to use the GPU (theoretically, with newer hybrid-graphics-capable X servers and drivers it should be possible to configure the X server to use a dummy display device and the GPU as a secondary device for accelerated rendering, but I have never tried that so far).
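To make the hidden-window approach concrete, here is a minimal sketch (my own illustration, assuming GLEW for the FBO entry points and an X server reachable from the Apache worker, i.e. DISPLAY set correctly; error handling and cleanup of the FBConfig list are omitted):

// Sketch: off-screen rendering via an unmapped X window and an FBO.
#include <GL/glew.h>
#include <GL/glx.h>
#include <X11/Xlib.h>

bool renderOffscreen(int width, int height)
{
    // In the question this is where the crash happens; if there is no usable
    // DISPLAY this normally just returns NULL rather than segfaulting.
    Display* dpy = XOpenDisplay(nullptr);
    if (!dpy) return false;

    static const int attribs[] = { GLX_RENDER_TYPE, GLX_RGBA_BIT,
                                   GLX_RED_SIZE, 8, GLX_GREEN_SIZE, 8, GLX_BLUE_SIZE, 8,
                                   None };
    int count = 0;
    GLXFBConfig* cfgs = glXChooseFBConfig(dpy, DefaultScreen(dpy), attribs, &count);
    if (!cfgs || count == 0) return false;

    XVisualInfo* vi = glXGetVisualFromFBConfig(dpy, cfgs[0]);
    XSetWindowAttributes swa = {};
    swa.colormap = XCreateColormap(dpy, RootWindow(dpy, vi->screen), vi->visual, AllocNone);
    Window win = XCreateWindow(dpy, RootWindow(dpy, vi->screen), 0, 0, width, height, 0,
                               vi->depth, InputOutput, vi->visual, CWColormap, &swa);
    // The window is never mapped (no XMapWindow), so nothing appears on screen.

    GLXContext ctx = glXCreateNewContext(dpy, cfgs[0], GLX_RGBA_TYPE, nullptr, True);
    glXMakeCurrent(dpy, win, ctx);
    glewInit();

    // Render into an FBO instead of the (invisible) window's framebuffer.
    GLuint fbo = 0, color = 0;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glGenRenderbuffers(1, &color);
    glBindRenderbuffer(GL_RENDERBUFFER, color);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, color);

    glViewport(0, 0, width, height);
    glClearColor(0.2f, 0.3f, 0.4f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    // ... draw the scene, then glReadPixels the result and encode it for the HTTP response.

    glXMakeCurrent(dpy, None, nullptr);
    glXDestroyContext(dpy, ctx);
    XDestroyWindow(dpy, win);
    XCloseDisplay(dpy);
    return true;
}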

How to add compute shader functionality to a dx9 application targeting dx11 hardware

I'm working on a dx9 app that I cannot update to dx11.
I have some compute shaders I want to port to this app, but I don't know what I can use to write directly to dx9 textures and possibly buffers (it's real-time graphics, so copying data around is not acceptable).
It has to work on Intel, AMD and Nvidia GPUs (all dx11 ready), so CUDA is not an option.
I don't know if you can share resources between dx11 and dx9 devices, but that would solve all my problems. The actual scenario is to take a render target from dx9 and share it.
Acquire it in dx11, process it with the compute shader and write the content into a shared dx11 texture I can bind in dx9 for rendering.
I was also oriented toward OpenCL, as I have read about some dx9 interop online.
I tried downloading the AMD APP SDK, but in the documentation I didn't find any reference to dx9 interop (a few words about dx10, but all the documentation is focused on OpenGL interoperation). I haven't checked the OpenCL SDKs from other vendors.
C++ AMP also seems not to work with dx9.
Do you know if this is possible, and if so, what GPGPU solution can do the job if I cannot use the Dx11<->Dx9 shared resource approach?

Seamless multi-screen OpenGL rendering with heteregeneous multi-GPU configuration on Windows XP

On Windows XP (64-bit) it seems to be impossible to render with OpenGL to two screens connected to different graphics cards with different GPUs (e.g. two NVIDIAs of different generations). What happens in this case is that rendering works on only one of the screens. On the other hand, with Direct3D it works without problem, rendering on both screens. Does anyone know why this is? Or, more importantly: is there a way to render on both screens with OpenGL?
I have discovered that on Windows 7 rendering works on both screens even with GPUs of different brands (e.g. AMD and Intel). I think this may be because of its display model, which runs on top of a Direct3D compositor, if I am not mistaken. This is just a supposition; I really don't know if it is the actual reason.
If Direct3D were the solution, one idea would be to do all the rendering with OpenGL to a texture, and then somehow render this texture with Direct3D, supposing that isn't too slow.
What happens in Windows 7 is that one GPU, or GPUs of the same type coupled together, render the image to an offscreen buffer, which is then composited spanning the screens. However, it is (as yet) impossible to distribute the rendering of a single context over GPUs of different makes. That would require a standardized communication and synchronization infrastructure, which simply doesn't exist. Neither OpenGL nor Direct3D can do it.
What can be done is copying the rendering results into the onscreen framebuffers of several GPUs. Windows 7 and DirectX have built-in support for this. Doing it with OpenGL is a bit more involved. Technically you render to an offscreen device context, usually a so-called PBuffer. After finishing the rendering, you copy the result to your window using GDI functions. This last copying step, however, is very slow compared to the rest of the OpenGL operation.
Both NVIDIA and AMD have ways of allowing you to choose which GPU to use. NVIDIA has WGL_NV_gpu_affinity and AMD has WGL_AMD_gpu_association. They both work rather differently, so you'll have to do different things on the different hardware to get the behavior you need.
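As a rough illustration of the NVIDIA side only (a sketch I'm adding, not part of the original answer; the WGL entry points have to be fetched with wglGetProcAddress once a dummy context is current, and the AMD path via WGL_AMD_gpu_association looks quite different):

// Rough sketch of picking a specific NVIDIA GPU with WGL_NV_gpu_affinity.
// Assumes <GL/wglext.h> for HGPUNV and the function-pointer typedefs, and
// that some OpenGL context is already current so wglGetProcAddress works.
#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>

HDC CreateAffinityDCForGpu(UINT gpuIndex)
{
    auto wglEnumGpusNV =
        (PFNWGLENUMGPUSNVPROC)wglGetProcAddress("wglEnumGpusNV");
    auto wglCreateAffinityDCNV =
        (PFNWGLCREATEAFFINITYDCNVPROC)wglGetProcAddress("wglCreateAffinityDCNV");
    if (!wglEnumGpusNV || !wglCreateAffinityDCNV)
        return nullptr;  // extension not available (non-NVIDIA GPU or old driver)

    HGPUNV gpu = nullptr;
    if (!wglEnumGpusNV(gpuIndex, &gpu))
        return nullptr;  // no GPU with that index

    // Contexts created on this DC are restricted to the listed GPU(s).
    HGPUNV gpuList[] = { gpu, nullptr };
    return wglCreateAffinityDCNV(gpuList);
}

// Usage idea: create a context on the affinity DC, render to a PBuffer/FBO,
// then copy the pixels to the window that lives on the other GPU.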