DirectX 11 Texture2D formats - C++

For some reason, I cannot specify the DXGI_FORMAT_R32G32B32_FLOAT format when creating a Texture2D in DirectX 11, although the equivalent works just fine in OpenGL. It also works fine when I use DXGI_FORMAT_R32G32B32A32_FLOAT. I am using these textures as render targets for the G-buffer.
// create gbuffer textures/rendertargets
D3D11_TEXTURE2D_DESC textureDesc;
ZeroMemory(&textureDesc, sizeof(D3D11_TEXTURE2D_DESC));
textureDesc.Width = swapChainDesc.BufferDesc.Width;
textureDesc.Height = swapChainDesc.BufferDesc.Height;
textureDesc.ArraySize = 1;
textureDesc.MipLevels = 1;
textureDesc.Format = DXGI_FORMAT_R32G32B32_FLOAT; // <-- doesn't like this; returns E_INVALIDARG
textureDesc.SampleDesc.Count = 1;
textureDesc.Usage = D3D11_USAGE_DEFAULT;
textureDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
for (uint32_t index = 0; index < GBuffer::GBUFFER_NUM_RENDERTARGETS; index++)
{
    DXCALL(device->CreateTexture2D(&textureDesc, NULL, &mGeometryTextures[index]));
    DXCALL(device->CreateRenderTargetView(mGeometryTextures[index], NULL, &mRenderTargets[index]));
}
Why can't I use DXGI_FORMAT_R32G32B32_FLOAT when creating a 2D texture in DirectX 11?
I do not need the extra float in my texture, so I'd rather have just three channels than four.

Not all hardware supports using DXGI_FORMAT_R32G32B32_FLOAT as a render target and shader resource (support is optional). You can verify whether the hardware supports the format for those uses by calling ID3D11Device::CheckFormatSupport. If the same setup succeeds on the same hardware with OpenGL, it likely means the OpenGL driver is padding the resource out to the full 4-channel variant behind the scenes.
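A minimal sketch of that check, reusing the device and textureDesc from the question (untested; the fallback is simply the 4-channel format the question already confirmed works):
UINT support = 0;
HRESULT hr = device->CheckFormatSupport(DXGI_FORMAT_R32G32B32_FLOAT, &support);
if (FAILED(hr) ||
    !(support & D3D11_FORMAT_SUPPORT_RENDER_TARGET) ||
    !(support & D3D11_FORMAT_SUPPORT_SHADER_SAMPLE))
{
    // Format is not usable as a render target + sampled texture on this device;
    // fall back to the 4-channel variant.
    textureDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
}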

DXGI_FORMAT_R32G32B32_FLOAT support for render targets is optional: http://msdn.microsoft.com/en-us/library/windows/desktop/ff471325(v=vs.85).aspx#RenderTarget
If you think this format should be supported by your device, then turn on debug output as MooseBoys suggested. This should explain why you're getting E_INVALIDARG.

Related

How to create a 2D texture using the DXGI format DXGI_FORMAT_R1_UNORM?

I want to create a 1-bit-per-pixel monochrome Texture2D in DirectX 11 using the DXGI format DXGI_FORMAT_R1_UNORM.
I have tried the following, but it produces these errors:
D3D11 ERROR: ID3D11Device::CreateTexture2D: Device does not support the format R1_UNORM. [ STATE_CREATION ERROR #92: CREATETEXTURE2D_UNSUPPORTEDFORMAT]
D3D11: BREAK enabled for the previous message, which was: [ ERROR STATE_CREATION #92: CREATETEXTURE2D_UNSUPPORTEDFORMAT ]
I have tried to create a texture for rendering, but as shown above it says that R1_UNORM is not supported by the device. So which format should be used to create the Texture2D?
bitmapPixels is dynamically allocated memory holding the packed 1-bit pixels as a BYTE array, prepared by the algorithm under review at Code Review: Bit Packing algorithm of 1-Bit monochrome image.
D3D11_TEXTURE2D_DESC desc;
ZeroMemory(&desc, sizeof(desc));
desc.Width = 32;
desc.Height = 32;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R1_UNORM;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
desc.MipLevels = 1;
desc.SampleDesc.Count = 1;
desc.MiscFlags = 0;
const D3D11_SUBRESOURCE_DATA subResourceData = {bitmapPixels, 4, 4 * desc.Height};
device->CreateTexture2D(&desc, &subResourceData, &texture2D);
ID3D11Device::CheckFormatSupport
Get the support of a given format on the installed video device.
...
A bitfield of D3D11_FORMAT_SUPPORT enumeration values describing how the specified format is supported on the installed device. The values are ORed together.
...
D3D11_FORMAT_SUPPORT_TEXTURE2D 2D texture resources supported.
For example, these are the formats reported as supporting 2D textures on Intel(R) HD Graphics 620 (just a randomly picked GPU; note that DXGI_FORMAT_R1_UNORM is not among them):
DXGI_FORMAT_R32G32B32A32_TYPELESS
DXGI_FORMAT_R32G32B32A32_FLOAT
DXGI_FORMAT_R32G32B32A32_UINT
DXGI_FORMAT_R32G32B32A32_SINT
DXGI_FORMAT_R32G32B32_TYPELESS
DXGI_FORMAT_R32G32B32_FLOAT
DXGI_FORMAT_R32G32B32_UINT
DXGI_FORMAT_R32G32B32_SINT
DXGI_FORMAT_R16G16B16A16_TYPELESS
DXGI_FORMAT_R16G16B16A16_FLOAT
DXGI_FORMAT_R16G16B16A16_UNORM
DXGI_FORMAT_R16G16B16A16_UINT
DXGI_FORMAT_R16G16B16A16_SNORM
DXGI_FORMAT_R16G16B16A16_SINT
DXGI_FORMAT_R32G32_TYPELESS
DXGI_FORMAT_R32G32_FLOAT
DXGI_FORMAT_R32G32_UINT
DXGI_FORMAT_R32G32_SINT
DXGI_FORMAT_R32G8X24_TYPELESS
DXGI_FORMAT_D32_FLOAT_S8X24_UINT
DXGI_FORMAT_R32_FLOAT_X8X24_TYPELESS
DXGI_FORMAT_X32_TYPELESS_G8X24_UINT
DXGI_FORMAT_R10G10B10A2_TYPELESS
DXGI_FORMAT_R10G10B10A2_UNORM
DXGI_FORMAT_R10G10B10A2_UINT
DXGI_FORMAT_R11G11B10_FLOAT
DXGI_FORMAT_R8G8B8A8_TYPELESS
DXGI_FORMAT_R8G8B8A8_UNORM
DXGI_FORMAT_R8G8B8A8_UNORM_SRGB
DXGI_FORMAT_R8G8B8A8_UINT
DXGI_FORMAT_R8G8B8A8_SNORM
DXGI_FORMAT_R8G8B8A8_SINT
DXGI_FORMAT_R16G16_TYPELESS
DXGI_FORMAT_R16G16_FLOAT
DXGI_FORMAT_R16G16_UNORM
DXGI_FORMAT_R16G16_UINT
DXGI_FORMAT_R16G16_SNORM
DXGI_FORMAT_R16G16_SINT
DXGI_FORMAT_R32_TYPELESS
DXGI_FORMAT_D32_FLOAT
DXGI_FORMAT_R32_FLOAT
DXGI_FORMAT_R32_UINT
DXGI_FORMAT_R32_SINT
DXGI_FORMAT_R24G8_TYPELESS
DXGI_FORMAT_D24_UNORM_S8_UINT
DXGI_FORMAT_R24_UNORM_X8_TYPELESS
DXGI_FORMAT_X24_TYPELESS_G8_UINT
DXGI_FORMAT_R8G8_TYPELESS
DXGI_FORMAT_R8G8_UNORM
DXGI_FORMAT_R8G8_UINT
DXGI_FORMAT_R8G8_SNORM
DXGI_FORMAT_R8G8_SINT
DXGI_FORMAT_R16_TYPELESS
DXGI_FORMAT_R16_FLOAT
DXGI_FORMAT_D16_UNORM
DXGI_FORMAT_R16_UNORM
DXGI_FORMAT_R16_UINT
DXGI_FORMAT_R16_SNORM
DXGI_FORMAT_R16_SINT
DXGI_FORMAT_R8_TYPELESS
DXGI_FORMAT_R8_UNORM
DXGI_FORMAT_R8_UINT
DXGI_FORMAT_R8_SNORM
DXGI_FORMAT_R8_SINT
DXGI_FORMAT_A8_UNORM
DXGI_FORMAT_R9G9B9E5_SHAREDEXP
DXGI_FORMAT_R8G8_B8G8_UNORM
DXGI_FORMAT_G8R8_G8B8_UNORM
DXGI_FORMAT_BC1_TYPELESS
DXGI_FORMAT_BC1_UNORM
DXGI_FORMAT_BC1_UNORM_SRGB
DXGI_FORMAT_BC2_TYPELESS
DXGI_FORMAT_BC2_UNORM
DXGI_FORMAT_BC2_UNORM_SRGB
DXGI_FORMAT_BC3_TYPELESS
DXGI_FORMAT_BC3_UNORM
DXGI_FORMAT_BC3_UNORM_SRGB
DXGI_FORMAT_BC4_TYPELESS
DXGI_FORMAT_BC4_UNORM
DXGI_FORMAT_BC4_SNORM
DXGI_FORMAT_BC5_TYPELESS
DXGI_FORMAT_BC5_UNORM
DXGI_FORMAT_BC5_SNORM
DXGI_FORMAT_B5G6R5_UNORM
DXGI_FORMAT_B5G5R5A1_UNORM
DXGI_FORMAT_B8G8R8A8_UNORM
DXGI_FORMAT_B8G8R8X8_UNORM
DXGI_FORMAT_R10G10B10_XR_BIAS_A2_UNORM
DXGI_FORMAT_B8G8R8A8_TYPELESS
DXGI_FORMAT_B8G8R8A8_UNORM_SRGB
DXGI_FORMAT_B8G8R8X8_TYPELESS
DXGI_FORMAT_B8G8R8X8_UNORM_SRGB
DXGI_FORMAT_BC6H_TYPELESS
DXGI_FORMAT_BC6H_UF16
DXGI_FORMAT_BC6H_SF16
DXGI_FORMAT_BC7_TYPELESS
DXGI_FORMAT_BC7_UNORM
DXGI_FORMAT_BC7_UNORM_SRGB
DXGI_FORMAT_AYUV
DXGI_FORMAT_Y416
DXGI_FORMAT_NV12
DXGI_FORMAT_P010
DXGI_FORMAT_P016
DXGI_FORMAT_420_OPAQUE
DXGI_FORMAT_YUY2
DXGI_FORMAT_Y216
DXGI_FORMAT_AI44
DXGI_FORMAT_IA44
DXGI_FORMAT_P8
DXGI_FORMAT_A8P8
DXGI_FORMAT_B4G4R4A4_UNORM
DXGI_FORMAT_R1_UNORM is not supported by Direct3D hardware. It only exists for some old Direct3D 10.0-era Windows GDI font interop, and has been essentially unused since Direct3D 10.1.
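For reference, a list like the one above can be produced by querying every format the same way; a minimal sketch, assuming an existing ID3D11Device* device and checking only the 2D-texture bit:
for (UINT fmt = 1; fmt <= DXGI_FORMAT_B4G4R4A4_UNORM; ++fmt)
{
    UINT support = 0;
    HRESULT hr = device->CheckFormatSupport(static_cast<DXGI_FORMAT>(fmt), &support);
    if (SUCCEEDED(hr) && (support & D3D11_FORMAT_SUPPORT_TEXTURE2D))
    {
        // This format can back a 2D texture on this device (printf needs <cstdio>).
        printf("DXGI_FORMAT %u is supported as a 2D texture\n", fmt);
    }
}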

Error runtime update of DXT compressed textures with Directx11

Context:
I'm developing a native C++ Unity 5 plugin that reads in DXT-compressed texture data and uploads it to the GPU for further use in Unity. The aim is to create a fast image-sequence player that updates image data on the fly. The textures are compressed with an offline console application.
Unity can work with different graphics APIs; I'm targeting DirectX 11 and OpenGL 3.3+.
Problem:
The DirectX runtime texture update code, through a mapped subresource, gives different outputs on different graphics drivers. Updating a texture through such a mapped resource means mapping a pointer to the texture data and memcpy'ing the data from the RAM buffer to the mapped GPU buffer. Doing so, different drivers seem to expect different row pitch values when copying bytes. I never had problems on the several NVIDIA GPUs I tested, but AMD and Intel GPUs seem to act differently, and I get distorted output as shown below. Furthermore, I'm working with DXT1 pixel data (0.5 bytes per pixel) and DXT5 data (1 byte per pixel), and I can't seem to get the correct pitch parameter for these DXT textures.
Code:
The following initialisation code, which creates the D3D11 texture and fills it with initial texture data (e.g. the first frame of an image sequence), works perfectly on all drivers. The player pointer points to a custom class that handles all file reads and contains getters for the currently loaded DXT-compressed frame, its dimensions, etc.
if (s_DeviceType == kUnityGfxRendererD3D11)
{
    HRESULT hr;
    DXGI_FORMAT format = (compression_type == DxtCompressionType::DXT_TYPE_DXT1_NO_ALPHA) ? DXGI_FORMAT_BC1_UNORM : DXGI_FORMAT_BC3_UNORM;
    // Create texture
    D3D11_TEXTURE2D_DESC desc;
    desc.Width = w;
    desc.Height = h;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = format;
    // no anti-aliasing
    desc.SampleDesc.Count = 1;
    desc.SampleDesc.Quality = 0;
    desc.Usage = D3D11_USAGE_DYNAMIC;
    desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
    desc.MiscFlags = 0;
    // Initial data: first frame
    D3D11_SUBRESOURCE_DATA data;
    data.pSysMem = player->getBufferPtr();
    data.SysMemPitch = 16 * (player->getWidth() / 4);
    data.SysMemSlicePitch = 0; // just a 2d texture, no depth
    // Init with initial data
    hr = g_D3D11Device->CreateTexture2D(&desc, &data, &dxt_d3d_tex);
    if (SUCCEEDED(hr) && dxt_d3d_tex != 0)
    {
        DXT_VERBOSE("Successfully created D3D Texture.");
        DXT_VERBOSE("Creating D3D SRV.");
        D3D11_SHADER_RESOURCE_VIEW_DESC SRVDesc;
        memset(&SRVDesc, 0, sizeof(SRVDesc));
        SRVDesc.Format = format;
        SRVDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
        SRVDesc.Texture2D.MipLevels = 1;
        hr = g_D3D11Device->CreateShaderResourceView(dxt_d3d_tex, &SRVDesc, &textureView);
        if (FAILED(hr))
        {
            dxt_d3d_tex->Release();
            return hr;
        }
        DXT_VERBOSE("Successfully created D3D SRV.");
    }
    else
    {
        DXT_ERROR("Error creating D3D texture.");
    }
}
The following update code, which runs for each new frame, has the error somewhere. Please note the commented line containing method 1, a simple memcpy without any row pitch specified, which works well on NVIDIA drivers.
You can see further down in method 2 that I log the different row pitch values. For instance, for a 1920x960 frame I get 1920 for the buffer stride and 2048 for the runtime stride. This difference of 128 probably has to be padded (as can be seen in the example pic below), but I can't figure out how. When I just use mappedResource.RowPitch without dividing it by 4 (done by the bit shift), Unity crashes.
ID3D11DeviceContext* ctx = NULL;
g_D3D11Device->GetImmediateContext(&ctx);
if (dxt_d3d_tex && bShouldUpload)
{
    if (player->gather_stats) before_upload = ns();
    D3D11_MAPPED_SUBRESOURCE mappedResource;
    ctx->Map(dxt_d3d_tex, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
    /* 1: THIS CODE WORKS ON ALL NVIDIA DRIVERS BUT GENERATES DISTORTED OR NO OUTPUT ON AMD/INTEL: */
    //memcpy(mappedResource.pData, player->getBufferPtr(), player->getBytesPerFrame());
    /* 2: THIS CODE GENERATES OUTPUT BUT SEEMS TO NEED PADDING? */
    BYTE* mappedData = reinterpret_cast<BYTE*>(mappedResource.pData);
    BYTE* buffer = player->getBufferPtr();
    UINT height = player->getHeight();
    UINT buffer_stride = player->getBytesPerFrame() / player->getHeight();
    UINT runtime_stride = mappedResource.RowPitch >> 2;
    DXT_VERBOSE("Buffer stride: %d", buffer_stride);
    DXT_VERBOSE("Runtime stride: %d", runtime_stride);
    for (UINT i = 0; i < height; ++i)
    {
        memcpy(mappedData, buffer, buffer_stride);
        mappedData += runtime_stride;
        buffer += buffer_stride;
    }
    ctx->Unmap(dxt_d3d_tex, 0);
}
Example pic 1 - distorted output when using memcpy to copy the whole buffer without using a separate row pitch on AMD/Intel (method 1).
Example pic 2 - better but still erroneous output when using the above code with mappedResource.RowPitch on AMD/Intel (method 2). The blue bars indicate the zone of error; they need to disappear so that all pixels align and form one image.
Thanks for any pointers!
Best,
Vincent
The mapped data row pitch is in bytes; dividing it by four is definitely an issue:
UINT runtime_stride = mappedResource.RowPitch >> 2;
...
mappedData += runtime_stride; // here you are only jumping one quarter of a row
With a BC format it is the height (the number of rows you iterate over) that is divided by 4, not the pitch.
Also, BC1 is 8 bytes per 4x4 block, so the line below should be 8 * and not 16 *; but as long as you handle the row stride properly on your side, D3D will understand, you just waste half the memory here:
data.SysMemPitch = 16 * (player->getWidth() / 4);
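Putting that together, a sketch of the corrected copy loop (untested; same player/mappedResource variables as in the question): the row pitch stays in bytes, and it is the row count that is divided by 4, because BC1/BC3 store one row of 4x4 blocks per 4 pixel rows.
UINT blockRows      = player->getHeight() / 4;                // rows of 4x4 blocks, not pixel rows
UINT buffer_stride  = player->getBytesPerFrame() / blockRows; // bytes per block row in the tightly packed buffer
UINT runtime_stride = mappedResource.RowPitch;                // bytes per block row as laid out by the driver (may be padded)
BYTE* mappedData = reinterpret_cast<BYTE*>(mappedResource.pData);
BYTE* buffer = player->getBufferPtr();
for (UINT i = 0; i < blockRows; ++i)
{
    memcpy(mappedData, buffer, buffer_stride); // copy one block row; the driver row may be wider
    mappedData += runtime_stride;
    buffer += buffer_stride;
}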

Cannot create a 1D Texture on DirectX11

For some reason, the code below crashes when I try to create the 1D texture.
D3D11_TEXTURE1D_DESC desc;
ZeroMemory(&desc, sizeof(D3D11_TEXTURE1D_DESC));
desc.Width = 64;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_SNORM;
desc.Usage = D3D11_USAGE_STAGING;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE | D3D11_CPU_ACCESS_READ;
HRESULT hr = D3DDev_0001->CreateTexture1D(&desc, NULL, &texture); //crashes here
assert(hr == S_OK);
where D3DDev_0001 is an ID3D11Device. I am able to create 3D and 2D textures, but making a 1D texture causes the program to crash. Can anyone explain why?
A USAGE_STAGING texture can't have any BindFlags since it can't be set on the graphics context for use as an SRV, UAV or RTV. Set BindFlags to 0 if you want a STAGING texture, or set the Usage to D3D11_USAGE_DEFAULT if you just want a 'normal' texture that can be bound to the context.
USAGE_STAGING resources are either for the CPU to fill with data before it is copied to a USAGE_DEFAULT resource, or they're the destination for GPU copies that bring data from the GPU back to the CPU.
The exact cause of this error would have been explained in a message printed by the D3D11 debug layer; use it to track down the cause of errors like this in the future.
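A minimal sketch of the two valid combinations, reusing the desc, D3DDev_0001 and texture from the question (pick one; they are mutually exclusive):
// (a) staging texture: CPU read/write, never bound to the pipeline
desc.Usage = D3D11_USAGE_STAGING;
desc.BindFlags = 0;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE | D3D11_CPU_ACCESS_READ;
// (b) 'normal' GPU texture that can be bound as a shader resource
// desc.Usage = D3D11_USAGE_DEFAULT;
// desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
// desc.CPUAccessFlags = 0;
HRESULT hr = D3DDev_0001->CreateTexture1D(&desc, NULL, &texture);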

Can't get texture.Sample to work, although texture.Load works fine, in a Direct3D 11 shader

In the HLSL for my Direct3D 11 app, I'm having a problem where the texture.Sample intrinsic always returns 0. I know my data and parameters are correct, because if I use texture.Load instead of Sample, the value returned is correct.
Here are my declarations:
extern Texture2D<float> texMask;
SamplerState TextureSampler : register (s2);
Here is the code in my pixel shader that works -- this confirms that my texture is correctly available to the shader and that my texcoord values are correct:
float maskColor = texMask.Load(int3(8192*texcoord.x, 4096*texcoord.y, 0));
If I substitute the following line instead, maskColor is always 0, and I can't figure out why.
float maskColor = texMask.Sample(TextureSampler, texcoord);
TextureSampler has the default state values; texMask is defined with 1 mip level.
I've also tried:
float maskColor = texMask.SampleLevel(TextureSampler, texcoord, 0);
and that also always returns 0.
C++ code for setting up sampler:
D3D11_SAMPLER_DESC sd;
ZeroMemory(&sd, sizeof(D3D11_SAMPLER_DESC));
sd.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
sd.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
sd.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
ID3D11SamplerState* pSampler;
dev->CreateSamplerState(&sd, &pSampler);
devcon->PSSetSamplers(2, 1, &pSampler);
Forgive me for reviving such an old post, but I figured it was important to add another possible cause of this sort of issue for others, and this post is the most relevant place I could find.
I, too, had an issue where the HLSL Sample function always returned 0, but only on specific textures and not others. I checked that the texture was properly bound and that its color values should not have been 0, and was still left wondering why I always got 0 back for this one texture but not the others used in the same shader pass. The Load function worked fine, but then I lost the nice features samplers give us.
As it turns out, in my case, I had accidentally created this texture's description as:
D3D11_TEXTURE2D_DESC desc;
desc.Width = _width;
desc.Height = _height;
desc.MipLevels = 0; // <- Bad!
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R16G16B16A16_FLOAT;
desc.SampleDesc.Count = 1;
desc.SampleDesc.Quality = 0;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
desc.CPUAccessFlags = 0;
desc.MiscFlags = 0;
This worked and created a texture that was visible and renderable; however, setting MipLevels to 0 makes DirectX allocate an entire mip chain for the texture. Me being me, I forgot this while working on my project further, and while DirectX allocates storage for the mip chain, drawing to the texture does not cascade through all the levels of the chain (which does make sense, I suppose).
Now, I suppose it's important to note that I'm still new to the whole graphics programming thing, if that wasn't already obvious enough. I have absolutely no idea what mip level, or combination of mip levels, the regular Sample function uses, but I can say that in my case it didn't happen to be level 0. Maybe it would for a smaller mip chain, but this texture in particular had 12 levels in total, of which only level 0 had any actual color information drawn to it. Using the Load function, or SampleLevel to explicitly access mip level 0, worked fine. As I do not need, nor want, the texture I'm sampling to have a mip chain, I simply changed its description to fix it.
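A minimal sketch of that one-line fix against the description above, with the alternative route if a mip chain is actually wanted (context and shaderResourceView are placeholder names for the immediate context and the texture's SRV, not variables from the post):
desc.MipLevels = 1; // allocate only the top level, which is the one actually rendered to
// Alternatively, keep MipLevels = 0 and fill the chain after rendering to level 0:
// desc.MiscFlags = D3D11_RESOURCE_MISC_GENERATE_MIPS; // requires RENDER_TARGET | SHADER_RESOURCE bind flags
// context->GenerateMips(shaderResourceView);          // ID3D11DeviceContext::GenerateMips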
I found my problem -- I needed to specify a register for the texture as well as the sampler in my HLSL. I can't find any documentation anywhere that describes why this is necessary, but it did fix my problem.

How to create a texture using DirectX 11 with the DXGI_FORMAT_420_OPAQUE format?

I am facing problems creating a DirectX texture from I420 data. My program crashes when I try to add the texture. I am working on Windows 8.0 on a WinRT Metro app. Can you help me please? My code is as follows:
D3D11_TEXTURE2D_DESC texDesc = {};
texDesc.Width = 352;
texDesc.Height = 288;
texDesc.MipLevels = 1;
byte *bitmapData;
int datasize = read_whole_file_into_array(&bitmapData , "1.i420"); //bitmapData contains the I420 frame
D3D11_SUBRESOURCE_DATA frameData;
frameData.pSysMem = bitmapData;
frameData.SysMemSlicePitch = 0;
//frameData.SysMemPitch = texDesc.Width; //Unsure about it
texDesc.ArraySize = 1;
texDesc.Format = DXGI_FORMAT_420_OPAQUE;
texDesc.SampleDesc.Count = 1;
texDesc.SampleDesc.Quality = 0;
texDesc.BindFlags = D3D11_BIND_DECODER;
texDesc.MiscFlags = 0;
texDesc.Usage = D3D11_USAGE_DEFAULT;
texDesc.CPUAccessFlags = 0;
m_d3dDevice->CreateTexture2D (&texDesc, &frameData, &m_background);
m_spriteBatch->AddTexture(m_background.Get());
Please help. Thanks in advance.
Additional information: this MSDN link describes a similar problem; however, in my case I already have a byte array containing the frame. I have already asked a similar question on that forum.
As per documentation here
Applications cannot use the CPU to map the resource and then access the data within the resource. You cannot use shaders with this format.
When you create your texture, if you need access to it in a shader, you also need to set the flag:
texDesc.BindFlags = D3D11_BIND_DECODER | D3D11_BIND_SHADER_RESOURCE;
Then, when you create the texture, make sure you check the result to see whether it was actually created:
HRESULT hr = m_d3dDevice->CreateTexture2D (&texDesc, &frameData, &m_background);
In that case it would warn you that the texture can't be created (and with the debug layer on, you get this message):
D3D11 ERROR: ID3D11Device::CreateTexture2D: The format (0x6a, 420_OPAQUE) cannot be bound as a ShaderResource or cast to a format that could be bound as a ShaderResource. Therefore this format cannot support D3D11_BIND_SHADER_RESOURCE. [ STATE_CREATION ERROR #92: CREATETEXTURE2D_UNSUPPORTEDFORMAT]
D3D11 ERROR: ID3D11Device::CreateTexture2D: Returning E_INVALIDARG, meaning invalid parameters were passed. [ STATE_CREATION ERROR #104: CREATETEXTURE2D_INVALIDARG_RETURN]
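For completeness, a sketch of creating the device with the debug layer enabled, so that messages like the two above actually show up in the debugger output (assumes the SDK layers are installed; variable names are placeholders):
UINT flags = 0;
#if defined(_DEBUG)
flags |= D3D11_CREATE_DEVICE_DEBUG; // turn on the debug layer in debug builds
#endif
ID3D11Device* device = nullptr;
ID3D11DeviceContext* context = nullptr;
HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, flags,
                               nullptr, 0, D3D11_SDK_VERSION, &device, nullptr, &context);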