What is the difference between a "resource" and a "resource view"? - directx-12

In Vulkan, a memory resource is accessed via "pointer-like" metadata contained in a "view", which describes how the memory resource should be interpreted. A descriptor is data which specifies to a shader "where" this memory will be located.
In DirectX 12, a resource is short for "memory resource" (see: https://www.3dgep.com/learning-directx12-1/). A resource view is a wrapper around a resource which allows a specific usage. For instance, a "shader resource view" (SRV) allows the resource to be accessed by a shader, while an "unordered access view" (UAV) is similar to an SRV, except it allows the data to be accessed in any order.
A "constant buffer view" (CBV) allows shaders to access data which will be more persistent/constant than usual. The DirectX 12 docs don't seem to differentiate between a CBV and a constant buffer. For example, the article titled "Constant Buffer View" says:
Constant buffers contain shader constant data. The value of them is that the data persists, and can be accessed by any GPU shader, until it is necessary to change the data.
The word "view" is not mentioned anywhere, almost as if there is no difference between a resource, and a resource view. Another place where this happens in the docs is in the article on resource bindings:
The most common resources are:
Constant buffer views (CBVs)
Unordered access views (UAVs)
Shader resource views (SRVs)
Samplers
When listing common resources, the article mentions resource views, again as if they are the same thing.
What is the difference between a resource and a resource view in DirectX 12?

In DirectX, a resource is the data that you'll be using to render, for example a texture or a buffer (like a vertex buffer). Then, similar to Vulkan, you have to let the graphics driver know how to interpret these resources. For instance, you can store 4-byte chunks into a buffer and then read from it as if it were a buffer of float values. This is where "views" come in. A view (in DX12 also called a descriptor) is a pointer-like description of the way the resource it points to will be bound to the pipeline. Except for vertex buffers and index buffers (and constant buffers in some cases), you pretty much always have to bind a resource by using a view (descriptor). This process is called resource binding.
It's important to know that the same resource can be bound (i.e., described) by different views. For example, you can define an SRV for a texture and another for just one of its mipmaps, and a UAV to the same texture for writing to it in a compute shader.
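As a sketch of what that looks like in D3D12 (hedged: "device" and "texture" are assumed to be a valid ID3D12Device* and an ID3D12Resource* created with D3D12_RESOURCE_FLAG_ALLOW_UNORDERED_ACCESS; error handling omitted), here is one resource described by two different views:

// Descriptor heaps are where views (descriptors) live.
D3D12_DESCRIPTOR_HEAP_DESC heapDesc = {};
heapDesc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV;
heapDesc.NumDescriptors = 2;
heapDesc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE;
ID3D12DescriptorHeap* heap = nullptr;
device->CreateDescriptorHeap(&heapDesc, IID_PPV_ARGS(&heap));

UINT step = device->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);
D3D12_CPU_DESCRIPTOR_HANDLE handle = heap->GetCPUDescriptorHandleForHeapStart();

// View 1: an SRV over the whole mip chain, for sampling in a shader.
D3D12_SHADER_RESOURCE_VIEW_DESC srv = {};
srv.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
srv.ViewDimension = D3D12_SRV_DIMENSION_TEXTURE2D;
srv.Shader4ComponentMapping = D3D12_DEFAULT_SHADER_4_COMPONENT_MAPPING;
srv.Texture2D.MipLevels = UINT(-1);               // -1 means "all mip levels"
device->CreateShaderResourceView(texture, &srv, handle);

// View 2: a UAV over mip 0 only, for writing from a compute shader.
handle.ptr += step;
D3D12_UNORDERED_ACCESS_VIEW_DESC uav = {};
uav.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
uav.ViewDimension = D3D12_UAV_DIMENSION_TEXTURE2D;
uav.Texture2D.MipSlice = 0;
device->CreateUnorderedAccessView(texture, nullptr, &uav, handle);

The resource itself is created once; the two views are just descriptions of it, written into descriptor heap slots.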
Resource binding is the main process by which resources are used/manipulated in D3D11/12. It's not that difficult to wrap your head around, but it is too lengthy to describe in its entirety here. You can read more about it in the Microsoft documentation.

Related

What is a buffer in Computer Graphics

Could someone give me a brief, clear definition of a buffer in computer graphics, followed by a short description of one?
Most of the definitions on the internet answer only for "frame buffer", yet there are other types of buffers in computer graphics, specifically in OpenGL.
Could someone give me a brief, clear definition of a buffer in computer graphics?
There isn't one. "Buffer" is a term that is overloaded and can mean different things in different contexts.
A "framebuffer" (one word) basically has no relation to many other kinds of "buffer". In OpenGL, a "framebuffer" is an object that has attached images to which you can render.
"Buffering" as a concept generally means using multiple, usually identically sized, regions of storage in order to prevent yourself from writing to a region while that region is being consumed by some other process. The default framebuffer in OpenGL may be double-buffered. This means that there is a front image and a back image. You render to the back image, then swap the images to render the next frame. When you swap them, the back image becomes the front image, which means that it is now visible. You then render to the old front image, now the back image, which is no longer visible. This prevents seeing incomplete rendering products, since you're never writing to the image that is visible.
You'll note that while a "framebuffer" may involve "buffering," the concept of "buffering" can be used with things that aren't "framebuffers". The two are orthogonal, unrelated.
The broadest definition of "buffer" might be "some memory that is used to store bulk data". But this would also include "textures", which most APIs do not consider to be "buffers".
OpenGL (and Vulkan) as an API have a more strict definition. A "buffer object" is an area of contiguous, unformatted memory which can be read from or written to by various GPU processes. This is distinct from a "texture" object, which has a specific format that is internal to the implementation. Because a texture's format is not known to you, you are not allowed to directly manipulate the bytes of a texture's storage. Any bytes you upload to it or read from it are done through an API that allows the implementation to play with them.
For buffer objects, you can load arbitrary bytes to a buffer object's storage without the API (directly) knowing what those bytes mean. You can even map the storage and access it like a regular pointer to CPU memory.
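For illustration, a minimal sketch of that direct access, assuming an OpenGL 4.5 context (direct state access functions) with loaded function pointers (e.g. via glad):

GLuint buf;
glCreateBuffers(1, &buf);

// 1 KiB of raw, unformatted storage; GL has no idea what it will hold.
glNamedBufferStorage(buf, 1024, nullptr, GL_MAP_WRITE_BIT);

// Map it and write through an ordinary CPU pointer.
float* p = static_cast<float*>(
    glMapNamedBufferRange(buf, 0, 1024, GL_MAP_WRITE_BIT));
p[0] = 1.0f;   // these bytes only acquire meaning later, when the buffer
p[1] = 2.0f;   // is bound as, e.g., a vertex buffer or a uniform buffer
glUnmapNamedBuffer(buf);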
"Buffer" you can simply think of it as a block of memory .
But you have to specify the context here because it means many things.
For Example In the OpenGL VBO concept. This means vertex buffer object which we use it to store vertices data in it . Like that we can do many things, we can store indices in a buffer, textures, etc.,
As for the framebuffer you mentioned, it is an entirely different topic. In OpenGL or Vulkan we can create custom framebuffers, called framebuffer objects (FBOs), apart from the default framebuffer. We can bind an FBO and draw things onto it, and by adding a texture as an attachment, whatever we draw on the FBO ends up in that texture.
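A minimal sketch of that attachment mechanism, assuming an OpenGL 4.5 context with loaded function pointers and an illustrative 512x512 size:

GLuint tex, fbo;
glCreateTextures(GL_TEXTURE_2D, 1, &tex);
glTextureStorage2D(tex, 1, GL_RGBA8, 512, 512);

glCreateFramebuffers(1, &fbo);
glNamedFramebufferTexture(fbo, GL_COLOR_ATTACHMENT0, tex, 0);
if (glCheckNamedFramebufferStatus(fbo, GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle incomplete framebuffer
}

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
// ... draw calls here render into `tex` instead of the default framebuffer ...
glBindFramebuffer(GL_FRAMEBUFFER, 0);   // back to the default framebuffer
// `tex` now contains whatever was drawn and can be sampled like any texture.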
So "buffer" has many meanings here.

Can CPU access to BackBuffer used for Render Target?

I'm very new to DirectX 11 and have just started studying it. I came across the D3D11_USAGE enumeration in the D3D11_BUFFER_DESC structure and looked up what each value means.
D3D11_USAGE_DEFAULT means that the GPU can read and write the resource, but all access from the CPU is disallowed. This option is therefore used for the back buffer that serves as the render target.
So, if I create a swap chain using the DXGI_USAGE_RENDER_TARGET_OUTPUT usage, get a back buffer from it, and create a render target from that back buffer, can the CPU access the back buffer or the render target?
typedef struct DXGI_SWAP_CHAIN_DESC
{
    DXGI_MODE_DESC BufferDesc;
    DXGI_SAMPLE_DESC SampleDesc;
    DXGI_USAGE BufferUsage;
    UINT BufferCount;
    HWND OutputWindow;
    BOOL Windowed;
    DXGI_SWAP_EFFECT SwapEffect;
    UINT Flags;
} DXGI_SWAP_CHAIN_DESC;
Based on the MSDN documentation (it says "Swap chain's only support the DXGI_CPU_ACCESS_NONE value in the DXGI_CPU_ACCESS_FIELD part of DXGI_USAGE."), the CPU can't access them, but I can't find a clear statement. Am I right?
So, can the CPU access a back buffer used as a render target?
The CPU does not have access to the swap chain buffers, as per the MSDN documentation you cited.
I'm not sure why you would need to read from the back buffer in the first place - reading data generated on the GPU back on the CPU is so slow that you're better off either doing all of your rendering on the CPU in the first place, or redesigning your algorithm so you don't need to access the data from the CPU. In general, the second is the better option - D3D is designed to push data from the CPU to the GPU, not to read data back from the GPU into the CPU.
In fact, if you look at the documentation for D3D11_USAGE, you'll find that the only way to create a resource with direct read access from the CPU is to create it as a staging resource (D3D11_USAGE_STAGING), which doesn't support creating views. Resources created with D3D11_USAGE_DYNAMIC can be written to by the CPU but not read; resources created with D3D11_USAGE_DEFAULT can be copied into through UpdateSubresource but otherwise cannot be touched by the CPU; and resources created with D3D11_USAGE_IMMUTABLE cannot be modified at all after creation.
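For completeness, a minimal sketch of that staging round-trip, assuming a valid "device", immediate "context", and a non-multisampled "backBuffer" (ID3D11Texture2D*); error handling omitted:

D3D11_TEXTURE2D_DESC desc;
backBuffer->GetDesc(&desc);
desc.Usage = D3D11_USAGE_STAGING;            // CPU-readable
desc.BindFlags = 0;                          // staging resources cannot be bound
desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
desc.MiscFlags = 0;

ID3D11Texture2D* staging = nullptr;
device->CreateTexture2D(&desc, nullptr, &staging);

context->CopyResource(staging, backBuffer);  // GPU-side copy
D3D11_MAPPED_SUBRESOURCE mapped;
context->Map(staging, 0, D3D11_MAP_READ, 0, &mapped); // stalls until the copy finishes
// mapped.pData now points at the pixel data; mapped.RowPitch is the row stride.
context->Unmap(staging, 0);
staging->Release();

Note the Map call forces a CPU/GPU synchronization point, which is exactly the slowness described above.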
A swap chain creates its resources with the DXGI equivalent of D3D11_USAGE_DEFAULT, which denies the CPU access to the data. MSDN is merely telling you that this is the case, and to not put any CPU access flags into the Usage member because they will either be ignored or rejected as erroneous.

Render to swap chain using compute shader

I'm trying to use a compute shader to render directly to the swap chain.
For this, I need to create the swapchain with the usage VK_IMAGE_USAGE_STORAGE_BIT.
The problem is that the swapchain needs to be created with the format VK_FORMAT_B8G8R8A8_UNORM or VK_FORMAT_B8G8R8A8_SRGB, and neither of the two supports the VK_FORMAT_FEATURE_STORAGE_IMAGE_BIT format feature on the physical device I use.
Did I do something wrong, or is it impossible to write to the swapchain from a compute shader with my configuration?
Vulkan imposes no requirement on the implementation that it permit direct usage of a swapchain image in a compute shader operation (FYI: "rendering" typically refers to a very specific operation; it's not something that happens in a compute shader). Therefore, it is entirely possible that the implementation will forbid you from using a swapchain image in a CS through various means.
If you cannot create the swapchain in your preferred format, then your next best bet is to see if there is a compatible format that does support storage-image usage, and create an image view of the swapchain image in that format. This, however, requires that the implementation support the VK_KHR_swapchain_mutable_format extension; the creation flags for the swapchain must include VK_SWAPCHAIN_CREATE_MUTABLE_FORMAT_BIT_KHR, along with a VkImageFormatListCreateInfoKHR list of the formats you intend to create views with.
Also, given support, this would mean that your CS will have to swap the component ordering of the data (B8G8R8A8 vs. R8G8B8A8). And don't forget that, when you create the swapchain, you have to ask for storage usage on its images (imageUsage); the implementation may directly forbid this.
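As a starting point, a small sketch (assuming a valid "physicalDevice") of how you might check which candidate view formats actually support storage-image usage before going down the mutable-format path:

#include <vulkan/vulkan.h>

VkFormat candidates[] = { VK_FORMAT_B8G8R8A8_UNORM, VK_FORMAT_B8G8R8A8_SRGB,
                          VK_FORMAT_R8G8B8A8_UNORM };
for (VkFormat fmt : candidates) {
    VkFormatProperties props;
    vkGetPhysicalDeviceFormatProperties(physicalDevice, fmt, &props);
    if (props.optimalTilingFeatures & VK_FORMAT_FEATURE_STORAGE_IMAGE_BIT) {
        // fmt supports storage images; if it is in the same compatibility
        // class as the swapchain's format (R8G8B8A8 and B8G8R8A8 both are),
        // it is a candidate for a mutable-format image view.
    }
}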

How should I allocate/populate/update memory on GPU for different type of scene objects?

I'm trying to write my first DX12 app. I have no previous experience with DX11. I would like to display some rigid and some soft objects, without textures for now. So I need to place in GPU memory some vertex/index buffers which I will never change later, and some which I will change. And the scene isn't static, so new objects can appear and some can vanish.
How should I allocate/populate/update memory on the GPU for this? I would like a high-level overview that is easy to read and understand, not real code. I hope the question isn't too broad.
Since you said you are new to DirectX, I strongly recommend that you stay away from DX12 and stick with DX11. DX12 is only useful for people who are already Experts (with a big E) and for projects that have to push very far or that hit an edge case where a needed DX12 feature is not possible in DX11.
But anyway, in DX12, as an example, to initialize a buffer you have to create instances of ID3D12Resource. You will need two: one in an upload heap and one in the default heap. You fill the first one on the CPU using Map. Then you need to use a command list to copy to the second one. Of course, you have to manage the resource state of your resource with barriers (copy destination, shader resource, ...). You then need to execute the command list on the command queue. You also need to add a fence and wait for the GPU to signal completion before you can destroy the resource in the upload heap.
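A rough sketch of those steps (hedged: "device", "cmdList", "cmdQueue", and "vertexData" are assumed to exist; fence synchronization and error handling are omitted), using the d3dx12.h helper structs that ship with the DirectX samples:

#include <d3d12.h>
#include <cstring>
#include "d3dx12.h"   // CD3DX12_* helper structs from the DirectX samples

const UINT64 size = 65536;
ID3D12Resource* upload = nullptr;
ID3D12Resource* def = nullptr;

CD3DX12_HEAP_PROPERTIES uploadHeap(D3D12_HEAP_TYPE_UPLOAD);
CD3DX12_HEAP_PROPERTIES defaultHeap(D3D12_HEAP_TYPE_DEFAULT);
CD3DX12_RESOURCE_DESC bufDesc = CD3DX12_RESOURCE_DESC::Buffer(size);

// Upload-heap resources must start in GENERIC_READ; the default-heap
// copy target starts in COPY_DEST.
device->CreateCommittedResource(&uploadHeap, D3D12_HEAP_FLAG_NONE, &bufDesc,
    D3D12_RESOURCE_STATE_GENERIC_READ, nullptr, IID_PPV_ARGS(&upload));
device->CreateCommittedResource(&defaultHeap, D3D12_HEAP_FLAG_NONE, &bufDesc,
    D3D12_RESOURCE_STATE_COPY_DEST, nullptr, IID_PPV_ARGS(&def));

// 1. Fill the upload resource on the CPU via Map.
void* p = nullptr;
upload->Map(0, nullptr, &p);
memcpy(p, vertexData, size);            // vertexData: assumed CPU-side array
upload->Unmap(0, nullptr);

// 2. Record the GPU copy, then transition to the state the pipeline needs.
cmdList->CopyBufferRegion(def, 0, upload, 0, size);
CD3DX12_RESOURCE_BARRIER barrier = CD3DX12_RESOURCE_BARRIER::Transition(def,
    D3D12_RESOURCE_STATE_COPY_DEST,
    D3D12_RESOURCE_STATE_VERTEX_AND_CONSTANT_BUFFER);
cmdList->ResourceBarrier(1, &barrier);
cmdList->Close();

// 3. Execute; a fence wait (not shown) must complete before `upload` is released.
ID3D12CommandList* lists[] = { cmdList };
cmdQueue->ExecuteCommandLists(1, lists);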
In DX11, you call ID3D11Device::CreateBuffer, providing the description struct with the appropriate bind flag and a pointer to the CPU data you want to put in it… done.
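For contrast, a minimal sketch of that DX11 path, here with a vertex-buffer bind flag ("device" and "vertexData" are assumptions):

D3D11_BUFFER_DESC desc = {};
desc.ByteWidth = 65536;
desc.Usage = D3D11_USAGE_IMMUTABLE;          // never changed after creation
desc.BindFlags = D3D11_BIND_VERTEX_BUFFER;

D3D11_SUBRESOURCE_DATA init = {};
init.pSysMem = vertexData;                   // initial data, required for IMMUTABLE

ID3D11Buffer* buf = nullptr;
device->CreateBuffer(&desc, &init, &buf);    // the driver handles the upload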
It is slightly more complex for textures, as you have to deal with memory layout. So, as I stated above, you should focus on DX11; it is not a downgrade at all, and both APIs have their roles.

Controlling Access To IDirect3DDevice9

So I am writing a resource manager for my in-house game engine and am stuck on something. Well, not really stuck, but I feel like there should be a better way to do this. The setup: I have a resource manager class that consists of an LRU cache, a hash table (for quick resource lookup), and another hash table that contains resource controllers (which dictate how various file types are loaded and how different resources are destroyed).
For the rendering part, I have encapsulated the actual IDirect3DDevice object in a renderer class that sits in a SceneManager class. The scene manager determines which objects are actually visible through use of an octree and then renders them using the renderer object.
The problem is that a resource controller - say, the one that loads a .jpg file as a texture - needs access to the device. This means I have to give the controller a pointer to the renderer, which then needs to expose a function that returns the texture created by the device. However, that puts too much functionality in the renderer; theoretically it shouldn't care about anything but drawing. This happens because, to create a texture, you either have to call the device's member function or pass the device into one of the D3DX texture functions. The same problem exists for other resources such as meshes, because they need access to the renderer to create vertex and index buffers.
In addition, once the resource controller has access to the renderer, it could potentially call any of the draw functions if so inclined, which is totally unnecessary. Does anyone have a workaround for a problem like this, or is this simply inherent in Microsoft giving so much functionality to the device object in D3D9?
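To make the coupling concrete, a purely hypothetical sketch of the situation described (all class and method names are illustrative, not from a real engine):

#include <d3d9.h>
#include <string>

// The resource controller only needs texture creation, but it receives the
// whole renderer, so nothing stops it from issuing draw calls too.
class Renderer {
public:
    // What the controller actually needs:
    IDirect3DTexture9* CreateTextureFromFile(const std::string& path);
    // What it should never touch:
    void DrawScene();
private:
    IDirect3DDevice9* m_device = nullptr;
};

class JpgTextureController {
public:
    explicit JpgTextureController(Renderer* renderer) : m_renderer(renderer) {}
    IDirect3DTexture9* Load(const std::string& path) {
        return m_renderer->CreateTextureFromFile(path); // full renderer exposed
    }
private:
    Renderer* m_renderer;
};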