DirectX vertex buffer: DEFAULT vs. DEFAULT + STAGING - C++

I was searching for the difference between these two in terms of GPU read speed and occasional CPU writes (less than once per frame, or even only once). I don't want to use D3D11_USAGE_DYNAMIC because the data will not be updated at least once per frame.
Is there a significant performance increase with Default + Staging combo over the Default buffer?

The best performance advice for Direct3D 11 here is actually the same as for Direct3D 10.0. Start by reviewing the talk "Windows to Reality: Getting the Most out of Direct3D 10 Graphics in Your Games" from Gamefest 2007.
To your question: any update to a resource (texture or buffer) has some potential performance impact, but for 'occasional' updates, the best option is to use a STAGING resource and then CopyResource to a DEFAULT resource for actual rendering.
DYNAMIC for textures should be reserved for very frequent updates (like say video texture playback), and of course for vertex buffers when doing dynamic draw submission. Constant buffers are really intended to be DYNAMIC or use UpdateSubresource which really depends on your update pattern (a topic covered in the talk above).
Whenever possible, creating resources as IMMUTABLE and with pInitialData is the best option with DirectX 11 as it potentially allows the driver some opportunity for multi-threaded resource creation which is more efficient.
The main thing to be aware of with this pattern is that STAGING resources can result in virtual-memory fragmentation, which can be a problem for 32-bit (x86) apps, so you should reuse them rather than creating a lot of them or repeatedly destroying and recreating them. See the talk "Why Your Windows Game Won't Run In 2,147,352,576 Bytes" which is attached to this blog post.
I'd suggest your initial version try DEFAULT+UpdateSubresource, and then compare it with a DEFAULT+STAGING+CopyResource solution, because the result depends heavily on your content and code.
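For reference, here is a minimal sketch of both update paths in D3D11. The function and variable names are illustrative (not from the question) and error handling is mostly omitted:

    #include <d3d11.h>
    #include <cstring>

    // Sketch only: 'newVertexData' and 'byteWidth' describe your CPU-side data.
    void UpdateVertexBufferExamples(ID3D11Device* device,
                                    ID3D11DeviceContext* context,
                                    const void* newVertexData,
                                    UINT byteWidth)
    {
        // A DEFAULT vertex buffer the GPU renders from.
        ID3D11Buffer* defaultVB = nullptr;
        D3D11_BUFFER_DESC desc = {};
        desc.ByteWidth = byteWidth;
        desc.Usage     = D3D11_USAGE_DEFAULT;
        desc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
        device->CreateBuffer(&desc, nullptr, &defaultVB);

        // Option A: DEFAULT + UpdateSubresource for the occasional update.
        context->UpdateSubresource(defaultVB, 0, nullptr, newVertexData, 0, 0);

        // Option B: DEFAULT + STAGING + CopyResource. Create the staging buffer
        // once and reuse it, to avoid the address-space churn mentioned above.
        ID3D11Buffer* stagingVB = nullptr;
        D3D11_BUFFER_DESC stagingDesc = desc;
        stagingDesc.Usage          = D3D11_USAGE_STAGING;
        stagingDesc.BindFlags      = 0;                      // staging buffers cannot be bound
        stagingDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
        device->CreateBuffer(&stagingDesc, nullptr, &stagingVB);

        D3D11_MAPPED_SUBRESOURCE mapped = {};
        if (SUCCEEDED(context->Map(stagingVB, 0, D3D11_MAP_WRITE, 0, &mapped)))
        {
            std::memcpy(mapped.pData, newVertexData, byteWidth);
            context->Unmap(stagingVB, 0);
            context->CopyResource(defaultVB, stagingVB);     // GPU-side copy into the DEFAULT buffer
        }
    }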

Related

In D3D12, can the render target view be any buffer?

In the samples I have looked at so far, the sequence of steps is something like:
1. D3D12_DESCRIPTOR_HEAP_DESC with D3D12_DESCRIPTOR_HEAP_TYPE::D3D12_DESCRIPTOR_HEAP_TYPE_RTV
2. ID3D12Device::CreateDescriptorHeap
3. D3D12_CPU_DESCRIPTOR_HANDLE with ID3D12DescriptorHeap::GetCPUDescriptorHandleForHeapStart (and maybe ID3D12Device::GetDescriptorHandleIncrementSize)
4. (Optional?) ID3D12Device::CreateRenderTargetView
   4.1. (Optional?) IDXGISwapChain3::GetBuffer
5. Schedule rendering work on a command list, notably OMSetRenderTargets and DrawInstanced, then close the command list.
6. ID3D12CommandQueue::ExecuteCommandLists
   6.1. (Optional?) IDXGISwapChain3::Present
7. Schedule a signal with ID3D12CommandQueue::Signal on a fence
8. Wait for the GPU to finish with ID3D12Fence::SetEventOnCompletion and WaitForSingleObjectEx
If possible, how can step 4.1 be replaced with a buffer of choice? I.e., how can one create an ID3D12Resource* and render to it, and then read from it into, say, a std::vector? (I assume that if this is possible, step 6.1 can be ignored, since there is no render target view for the swap chain to present. Perhaps step 4 is unnecessary as well in this case? Maybe only OMSetRenderTargets matters?)
It depends on the video memory architecture as to where exactly the render target can be located. On some systems, it's in dedicated video memory that only the video card can access. In some systems it's in video memory shared across the bus that both CPU and GPU can access. In unified memory architectures, everything is in system memory.
Therefore, you have restrictions on where exactly the render target can be located. This is why you have to use D3D12_HEAP_TYPE_DEFAULT and specify D3D12_RESOURCE_FLAG_ALLOW_RENDER_TARGET when creating the ID3D12Resource you plan to bind as a render target (this is implicitly done by DXGI when you create the swap chain render target as well).
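As an illustration, here is a minimal sketch of creating such an off-screen render target with CreateCommittedResource. The size, format, and variable names are my own; 'device' is an existing ID3D12Device* and 'rtvHandle' is a D3D12_CPU_DESCRIPTOR_HANDLE from the RTV descriptor heap created in steps 1-3 above:

    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    // Illustrative: a 1280x720 RGBA8 render target in a DEFAULT heap.
    D3D12_HEAP_PROPERTIES heapProps = {};
    heapProps.Type = D3D12_HEAP_TYPE_DEFAULT;

    D3D12_RESOURCE_DESC rtDesc = {};
    rtDesc.Dimension        = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
    rtDesc.Width            = 1280;
    rtDesc.Height           = 720;
    rtDesc.DepthOrArraySize = 1;
    rtDesc.MipLevels        = 1;
    rtDesc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
    rtDesc.SampleDesc.Count = 1;
    rtDesc.Layout           = D3D12_TEXTURE_LAYOUT_UNKNOWN;
    rtDesc.Flags            = D3D12_RESOURCE_FLAG_ALLOW_RENDER_TARGET;

    D3D12_CLEAR_VALUE clearValue = {};
    clearValue.Format = rtDesc.Format;            // clears to black here

    ComPtr<ID3D12Resource> offscreenRT;
    device->CreateCommittedResource(
        &heapProps, D3D12_HEAP_FLAG_NONE, &rtDesc,
        D3D12_RESOURCE_STATE_RENDER_TARGET, &clearValue,
        IID_PPV_ARGS(&offscreenRT));

    // The RTV still goes through a descriptor heap, exactly as with the
    // swap-chain buffer; only the resource itself is different.
    device->CreateRenderTargetView(offscreenRT.Get(), nullptr, rtvHandle);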
Generally speaking, you can't and shouldn't use the low-level DXGI surface creation APIs to create Direct3D resources. They mostly exist for system use rather than for use by applications.
Unless you happen to be on a UMA system, you should minimize CPU access to the render target, as it will require expensive copies otherwise. Even on UMA systems, de-tiling is still required to get the results into a linear form.
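If you do need the results on the CPU (e.g. into a std::vector), the usual pattern is exactly that kind of copy: copy the render target into a buffer in a READBACK heap, wait on the fence, then Map it. A rough sketch, continuing the previous snippet ('commandList' is an assumed ID3D12GraphicsCommandList*; the resource barrier to COPY_SOURCE and the fence wait are reduced to comments; also needs <vector> and <cstring>):

    // Query the layout the GPU expects for the copy (row pitch may be padded).
    D3D12_PLACED_SUBRESOURCE_FOOTPRINT footprint = {};
    UINT64 totalBytes = 0;
    device->GetCopyableFootprints(&rtDesc, 0, 1, 0, &footprint, nullptr, nullptr, &totalBytes);

    // A readback buffer the CPU can map.
    D3D12_HEAP_PROPERTIES readbackHeap = {};
    readbackHeap.Type = D3D12_HEAP_TYPE_READBACK;

    D3D12_RESOURCE_DESC bufDesc = {};
    bufDesc.Dimension        = D3D12_RESOURCE_DIMENSION_BUFFER;
    bufDesc.Width            = totalBytes;
    bufDesc.Height           = 1;
    bufDesc.DepthOrArraySize = 1;
    bufDesc.MipLevels        = 1;
    bufDesc.SampleDesc.Count = 1;
    bufDesc.Layout           = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;

    ComPtr<ID3D12Resource> readbackBuffer;
    device->CreateCommittedResource(&readbackHeap, D3D12_HEAP_FLAG_NONE, &bufDesc,
        D3D12_RESOURCE_STATE_COPY_DEST, nullptr, IID_PPV_ARGS(&readbackBuffer));

    // After rendering: barrier offscreenRT to COPY_SOURCE, then record the copy.
    D3D12_TEXTURE_COPY_LOCATION src = {};
    src.pResource        = offscreenRT.Get();
    src.Type             = D3D12_TEXTURE_COPY_TYPE_SUBRESOURCE_INDEX;
    src.SubresourceIndex = 0;

    D3D12_TEXTURE_COPY_LOCATION dst = {};
    dst.pResource       = readbackBuffer.Get();
    dst.Type            = D3D12_TEXTURE_COPY_TYPE_PLACED_FOOTPRINT;
    dst.PlacedFootprint = footprint;

    commandList->CopyTextureRegion(&dst, 0, 0, 0, &src, nullptr);

    // Execute the command list and wait on the fence, then map and copy out.
    // Rows are footprint.Footprint.RowPitch bytes apart (which may include padding).
    std::vector<uint8_t> pixels(totalBytes);
    void* mapped = nullptr;
    readbackBuffer->Map(0, nullptr, &mapped);
    std::memcpy(pixels.data(), mapped, totalBytes);
    readbackBuffer->Unmap(0, nullptr);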
Direct3D 12 also offers the "Placed Resource" methods, which can provide more control over where exactly the memory is allocated (or more specifically, where the virtual memory addresses are allocated), but you still have to abide by the underlying architecture limitations. Depending on the memory architecture, you can "alias" multiple different ID3D12Resource instances all using the same memory (such as a render target being aliased as an unordered access resource), but you are responsible for inserting the required resource barriers into the command list (and testing it) to make sure it works reliably on all DX12 hardware. See MSDN.
You do not have to Present your render target if you don't need the user to see the result.
Memory Management Strategies
UMA Optimizations: CPU Accessible Textures and Standard Swizzle
Getting Started with Direct3D 12
If you are new to DirectX 12, you should see the DirectX Tool Kit for DirectX 12 tutorials. If you aren't already familiar with DirectX 11, you should wait on DirectX 12 and start with DirectX Tool Kit for DirectX 11.

How should I allocate/populate/update memory on GPU for different type of scene objects?

I'm trying to write my first DX12 app. I have no previous experience with DX11. I would like to display some rigid and some soft objects, without textures for now. So I need to place into GPU memory some vertex/index buffers which I will never change later and some which I will change. And the scene itself isn't static, so new objects can appear and others can vanish.
How should I allocate/populate/update memory on the GPU for this? I would like a high-level overview that is easy to read and understand, not actual code. I hope the question isn't too broad.
You said you are new to DirectX, so I will strongly recommend you stay away from DX12 and stick with DX11. DX12 is only useful for people who are already Experts (with a big E) and for projects that have to push very far or hit edge cases needing a DX12 feature not possible in DX11.
But anyway, on DX12, to initialize a buffer for example, you have to create instances of ID3D12Resource. You will need two: one in an upload heap and one in the default heap. You fill the first one on the CPU using Map. Then you need to use a command list to copy it to the second one. Of course, you have to manage the resource state of your resource with barriers (copy destination, shader resource, ...). You then need to execute the command list on the command queue. You also need to add a fence to wait for the GPU to finish before you can destroy the resource in the upload heap.
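A rough sketch of that upload pattern for a buffer ('device', 'commandList', 'data', and 'byteSize' are assumed to exist in the calling code; the queue/fence plumbing is reduced to comments):

    #include <d3d12.h>
    #include <wrl/client.h>
    #include <cstring>
    using Microsoft::WRL::ComPtr;

    // Illustrative: copy 'data' (byteSize bytes) into a DEFAULT-heap vertex buffer.
    D3D12_RESOURCE_DESC bufDesc = {};
    bufDesc.Dimension        = D3D12_RESOURCE_DIMENSION_BUFFER;
    bufDesc.Width            = byteSize;
    bufDesc.Height           = 1;
    bufDesc.DepthOrArraySize = 1;
    bufDesc.MipLevels        = 1;
    bufDesc.SampleDesc.Count = 1;
    bufDesc.Layout           = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;

    D3D12_HEAP_PROPERTIES uploadHeap  = {}; uploadHeap.Type  = D3D12_HEAP_TYPE_UPLOAD;
    D3D12_HEAP_PROPERTIES defaultHeap = {}; defaultHeap.Type = D3D12_HEAP_TYPE_DEFAULT;

    ComPtr<ID3D12Resource> uploadBuffer, vertexBuffer;
    device->CreateCommittedResource(&uploadHeap, D3D12_HEAP_FLAG_NONE, &bufDesc,
        D3D12_RESOURCE_STATE_GENERIC_READ, nullptr, IID_PPV_ARGS(&uploadBuffer));
    device->CreateCommittedResource(&defaultHeap, D3D12_HEAP_FLAG_NONE, &bufDesc,
        D3D12_RESOURCE_STATE_COPY_DEST, nullptr, IID_PPV_ARGS(&vertexBuffer));

    // 1. Fill the upload buffer on the CPU.
    void* mapped = nullptr;
    uploadBuffer->Map(0, nullptr, &mapped);
    std::memcpy(mapped, data, byteSize);
    uploadBuffer->Unmap(0, nullptr);

    // 2. Record the GPU-side copy and transition the destination for rendering.
    commandList->CopyBufferRegion(vertexBuffer.Get(), 0, uploadBuffer.Get(), 0, byteSize);

    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type                   = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Transition.pResource   = vertexBuffer.Get();
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_COPY_DEST;
    barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_VERTEX_AND_CONSTANT_BUFFER;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    commandList->ResourceBarrier(1, &barrier);

    // 3. Close and execute the command list on the queue, Signal a fence, and only
    //    release uploadBuffer once the fence says the GPU has finished the copy.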
On DX11, you call ID3D11Device::CreateBuffer, providing the description struct with an SRV binding flag and the pointer to the CPU data you want to put in it... done.
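For comparison, a sketch of that single-call DX11 path (using an IMMUTABLE vertex buffer here, since the data never changes; the names are illustrative):

    // Illustrative D3D11 equivalent: one call, the driver handles the upload.
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth = byteSize;
    desc.Usage     = D3D11_USAGE_IMMUTABLE;       // data never changes after creation
    desc.BindFlags = D3D11_BIND_VERTEX_BUFFER;

    D3D11_SUBRESOURCE_DATA initData = {};
    initData.pSysMem = data;                      // your CPU-side vertices

    ID3D11Buffer* vertexBuffer = nullptr;
    device->CreateBuffer(&desc, &initData, &vertexBuffer);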
It is slightly more complex for textures, as you have to deal with memory layout. So, as I said above, you should focus on DX11; it is not a step down at all, both have their roles.

Is it possible to render one half of a scene by OpenGL and other half by DirectX

My straight answer would be NO. But I am curious how they created this video http://www.youtube.com/watch?v=HC3JGG6xHN8
They used video editing software. They recorded two nearly deterministic run-throughs of their engine and spliced them together.
As for the question posed by your title, not within the same window. It may be possible within the same application from two windows, but you'd be better off with two separate applications.
Yes, it is possible. I did this as an experiment for a graduate course; I implemented half of a deferred shading graphics engine in OpenGL and the other half in D3D10. You can share surfaces between OpenGL and D3D contexts using the appropriate vendor extensions.
Does it have any practical applications? Not many that I can think of. I just wanted to prove that it could be done :)
I digress, however. That video is just a side-by-side of two separately recorded videos of the Haven benchmark running in the two different APIs.
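For the curious, the vendor extension I have in mind here is typically WGL_NV_DX_interop (despite the name, it is not limited to NVIDIA hardware these days). A rough sketch of how a D3D surface can be exposed to OpenGL with it, assuming the function pointers have already been loaded via wglGetProcAddress, a current GL context exists, and all error handling is omitted:

    // Sketch only: shares a Direct3D color buffer with OpenGL via WGL_NV_DX_interop.
    // 'd3dDevice', 'd3dColorBuffer' and the GL texture name 'glTex' are assumed to
    // exist already; requires <GL/gl.h> and wglext.h for the typedefs and tokens.
    HANDLE interopDevice = wglDXOpenDeviceNV(d3dDevice);

    HANDLE interopTexture = wglDXRegisterObjectNV(
        interopDevice, d3dColorBuffer, glTex,
        GL_TEXTURE_2D, WGL_ACCESS_READ_WRITE_NV);

    // Per frame: lock before OpenGL touches the surface, unlock before D3D does.
    wglDXLockObjectsNV(interopDevice, 1, &interopTexture);
    //   ... draw with or into 'glTex' on the OpenGL side ...
    wglDXUnlockObjectsNV(interopDevice, 1, &interopTexture);

    // Cleanup.
    wglDXUnregisterObjectNV(interopDevice, interopTexture);
    wglDXCloseDeviceNV(interopDevice);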
My straight answer would be NO.
My straight answer would be "probably yes, but you definitely don't want to do that."
But I am curious how they created this video http://www.youtube.com/watch?v=HC3JGG6xHN8
They prerendered the video and simply combined the two halves in a video editor. Because the camera follows a fixed path, that is easy to do.
Anyway, you could render both scenes (DirectX/OpenGL) onto offscreen buffers and then combine them using either API to render the final result. You would read the data from the render buffer in one API and transfer it into a renderable buffer in the other. The dumbest way to do it is through system memory (which will be VERY slow), but some vendors (NVIDIA in particular) provide extensions for this scenario.
On Windows you could also place two child windows/panels side by side in the main window (so you'd get the same effect as in that YouTube video), and create an OpenGL context for one of them and a DirectX device for the other. Unless there's some restriction I'm not aware of, that should work, because to render 3D graphics you need a window with a handle (HWND). However, the two windows will be completely independent of each other and will not share resources, so you'll need twice the memory for textures alone to run them both.

DirectX 9 device: creating vertex and index buffers while rendering (2 threads)

I've been looking for an answer to this question for some time. Does anyone know how to do it?
I've got some ideas; can you tell me whether they are valid and which is the best one to use (if any of them are actually suitable)?
1. Create a single DirectX 9 device and share it between the threads. Render the loading screen (with already-loaded buffers) while loading the new level assets and creating their vertex and index buffers.
2. Create two different DirectX 9 devices, one for each thread. One device is responsible for rendering only (and is attached to the window), and the other has no rendering surface and takes care of creating and filling the buffers.
3. Create a device with a thread-safety flag (I think there is such a thing, but it may not be called exactly that) and do the same as in 1.
Thanks!
If you simply want to load a level, then you don't really need a separate thread for that. You could, for example, repaint the scene while loading resources. I'd advise avoiding multithreading unless you can't live without it.
If you still want multithreading, pass D3DCREATE_MULTITHREADED to IDirect3D9::CreateDevice. Note that the DirectX SDK explicitly warns that using this flag may degrade performance.
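A minimal sketch of where that flag goes (the window handle and present parameters are illustrative, and error checks are omitted):

    #include <d3d9.h>

    // Illustrative: create the device with the thread-safety flag.
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);

    D3DPRESENT_PARAMETERS pp = {};
    pp.Windowed         = TRUE;
    pp.SwapEffect       = D3DSWAPEFFECT_DISCARD;
    pp.BackBufferFormat = D3DFMT_UNKNOWN;
    pp.hDeviceWindow    = hWnd;                   // your render window

    IDirect3DDevice9* device = nullptr;
    d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hWnd,
                      D3DCREATE_HARDWARE_VERTEXPROCESSING | D3DCREATE_MULTITHREADED,
                      &pp, &device);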
Creating a single device is the preferred solution, i.e. I'd advise using #1.
It is possible to share resources between several devices, but this functionality is available only on Windows Vista. Because people still use Windows XP today, your users will hate you if you rely on it.

What can cause a reduction in frame rate when upgrading a graphics card?

We have a two-screen DirectX application that previously ran at a consistent 60 FPS (the monitors' sync rate) using an NVIDIA 8400GS (256 MB). However, when we swapped out the card for one with 512 MB of RAM, the frame rate struggles to get above 40 FPS. (It only gets this high because we're using triple-buffering.) The two cards are from the same manufacturer (PNY). All other things are equal: this is a Windows XP Embedded application and we started from a fresh image for each card. The driver version number is 169.21.
The application is all 2D, i.e. just a bunch of textured quads and a whole lot of pre-rendered graphics (hence the need to upgrade the card's memory). We also have compressed animations which the CPU decodes on the fly; this involves a texture lock. The locks take forever, but I've also tried having a separate system-memory texture for the CPU to update and then updating the rendered texture using the device's UpdateTexture method. No overall difference in performance.
Although I've read through every FAQ I can find on the internet about DirectX performance, this is still the first time I've worked on a DirectX project so any arcane bits of knowledge you have would be useful. :)
One other thing while I'm on the subject: when calling Present on the swap chains, it seems DirectX waits for the present to complete despite the fact that I'm using D3DPRESENT_DONOTWAIT in both the present parameters (PresentationInterval) and the flags of the call itself. Because this is a two-screen application this is a problem, as the two monitors do not appear to be genlocked; I'm working around it by running the Present calls through a thread pool. What could the underlying cause of this be?
Are the cards exactly the same (both GeForce 8400GS), with only the memory size differing? Quite often, cards with different memory sizes also have slightly different clock rates (i.e. your card with more memory might use slower memory!).
So the first thing to check would be GPU core & memory clock rates, using something like GPU-Z.
It's an easy test to see if the surface lock is the problem: just comment out the texture update and see if the frame rate returns to 60 Hz. Unfortunately, writing to a locked surface and updating the resource kills performance; it always has. Are you using mipmaps with the textures? I know DX9 added automatic generation of mipmaps, which could be taking a lot of time to generate. If you're constantly locking the same resource each frame, you could also try creating a pool of textures, kind of like triple-buffering except with textures. You would let the renderer use one texture, and on the next update you pick the next available texture in the pool that's not being used to render. Unless, of course, you're memory constrained or you're only making diffs to the animated texture.
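If you want to try the texture-pool idea, a minimal sketch of the round-robin part (the pool creation, the LockRect/UnlockRect decode, and the names here are all illustrative):

    #include <d3d9.h>
    #include <vector>

    // Illustrative round-robin pool: write into a different texture each update so
    // the CPU never locks the texture the GPU is still reading from.
    std::vector<IDirect3DTexture9*> pool;   // e.g. 3 textures created up front
    size_t nextTexture = 0;

    IDirect3DTexture9* AcquireUpdateTarget()
    {
        IDirect3DTexture9* tex = pool[nextTexture];
        nextTexture = (nextTexture + 1) % pool.size();
        return tex;                         // decode into this one via LockRect/UnlockRect
    }

    // Each frame: decode into AcquireUpdateTarget(), then SetTexture() that one for
    // rendering; the textures submitted on previous frames stay untouched until the
    // GPU has moved on.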