SamplerState issue in the pixel shader - C++

I have a problem with my pixel shader: it compiles, but nothing renders, and instead DirectX gives this error:
D3D11 ERROR: ID3D11DeviceContext::DrawIndexed: The Pixel Shader unit expects a Sampler configured for default filtering to be set at Slot 0, but the sampler bound at this slot is configured for comparison filtering. This mismatch will produce undefined behavior if the sampler is used (e.g. it is not skipped due to shader code branching). [ EXECUTION ERROR #390: DEVICE_DRAW_SAMPLER_MISMATCH].
Here's my shader:
struct PixelInput
{
float4 position: SV_POSITION;
float4 color : COLOR;
float2 UV: TEXCOORD0;
};
//globals
SamplerState ss;
Texture2D shaderTex;
float4 TexturePixelShader(PixelInput input) : SV_TARGET
{
float4 texColors;
texColors = shaderTex.Sample(ss, input.UV);
return texColors;
}
Sampler creation:
samplerDesc.Filter = D3D11_FILTER_COMPARISON_MIN_MAG_MIP_LINEAR;
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.MipLODBias = 0.0f;
samplerDesc.MaxAnisotropy = 1;
samplerDesc.ComparisonFunc = D3D11_COMPARISON_ALWAYS;
samplerDesc.BorderColor[0] = 0;
samplerDesc.BorderColor[1] = 0;
samplerDesc.BorderColor[2] = 0;
samplerDesc.BorderColor[3] = 0;
samplerDesc.MinLOD = 0;
samplerDesc.MaxLOD = D3D11_FLOAT32_MAX;
result = device->CreateSamplerState(&samplerDesc, &m_SS);
if (FAILED(result))
return false;
return true;
and rendering function:
void TextureShader::RenderShader(ID3D11DeviceContext* ctxt, int indexCount)
{
ctxt->IASetInputLayout(m_layout);
ctxt->VSSetShader(m_vertexShader, NULL, 0);
ctxt->PSSetShader(m_pixelShader, NULL, 0);
ctxt->PSSetSamplers(0, 1, &m_SS);
ctxt->DrawIndexed(indexCount, 0, 0);
return;
}

You are declaring your sampler as a comparison sampler:
samplerDesc.Filter = D3D11_FILTER_COMPARISON_MIN_MAG_MIP_LINEAR;
It should be:
samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
Comparison samplers are mostly used for shadow maps, and are declared as follows in HLSL:
SamplerComparisonState myComparisonSampler;
myTexture.SampleCmp(myComparisonSampler, texCoord, compareValue);
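For completeness, here is a sketch of a default-filtering sampler description that satisfies the runtime; everything except Filter and ComparisonFunc is carried over from the question's code, and the variable names are the question's own:
// A default (non-comparison) sampler; only Filter changes behavior here,
// and ComparisonFunc is ignored unless the shader calls SampleCmp.
D3D11_SAMPLER_DESC samplerDesc = {};
samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.MipLODBias = 0.0f;
samplerDesc.MaxAnisotropy = 1;
samplerDesc.MinLOD = 0;
samplerDesc.MaxLOD = D3D11_FLOAT32_MAX;
result = device->CreateSamplerState(&samplerDesc, &m_SS);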

Note that defining sampler state inline in an HLSL SamplerState declaration is an "Effects" construct that only applies to fx_* profiles used with the Effects for Direct3D 11 runtime.
For shader binding in your case, use:
SamplerState ss : register(s0);
Texture2D<float4> shaderTex : register(t0);
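On the C++ side, the slot numbers passed to the PS* binding calls map directly to those registers, so slot 0 corresponds to s0 and t0. A sketch, where m_textureSRV is an assumed ID3D11ShaderResourceView* member (the question's render function binds the sampler but never binds the SRV):
// Slot 0 in these calls corresponds to register(s0) / register(t0).
ctxt->PSSetSamplers(0, 1, &m_SS);                 // SamplerState ss
ctxt->PSSetShaderResources(0, 1, &m_textureSRV);  // Texture2D shaderTex (assumed member)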

Related

DirectX 11 Render To Texture

Basically, I am trying to render a scene to a texture, as in this OpenGL tutorial here, but in DirectX 11, and I have faced some issues:
Absolutely nothing is rendered when I launch the program, and I don't know why.
The only thing the texture displays 'correctly' is the clear color.
I have examined the executable in RenderDoc, and in the captured frame the back buffer draws the quad and the texture on it displays the scene correctly!
Source code peek:
D3D11_TEXTURE2D_DESC texDesc;
ZeroMemory(&texDesc, sizeof(D3D11_TEXTURE2D_DESC));
texDesc.Width = Data.Width;
texDesc.Height = Data.Height;
texDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
texDesc.Usage = D3D11_USAGE_DEFAULT;
texDesc.SampleDesc.Count = 1;
texDesc.SampleDesc.Quality = 0;
texDesc.CPUAccessFlags = 0;
texDesc.ArraySize = 1;
texDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
texDesc.MiscFlags = 0;
texDesc.MipLevels = 1;
if (Data.Img_Data_Buf == NULL)
{
if (FAILED(DX11Context::GetDevice()->CreateTexture2D(&texDesc, NULL, &result->tex2D)))
{
Log.Error("[DirectX] Texture2D Creation Failed for Null-ed Texture2D!\n");
return;
}
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc;
srvDesc.Format = texDesc.Format;
srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MostDetailedMip = 0;
srvDesc.Texture2D.MipLevels = 1;
DX11Context::GetDevice()->CreateShaderResourceView(result->tex2D, &srvDesc, &result->resourceView);
return;
}
//depth stencil texture
D3D11_TEXTURE2D_DESC texDesc;
{
texDesc.Width = size.x;
texDesc.Height = size.y;
texDesc.MipLevels = 1;
texDesc.ArraySize = 1;
texDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
texDesc.SampleDesc.Count = 1;
texDesc.SampleDesc.Quality = 0;
texDesc.Usage = D3D11_USAGE_DEFAULT;
texDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL;
texDesc.CPUAccessFlags = 0;
texDesc.MiscFlags = 0;
}
if (FAILED(API::DirectX::DX11Context::GetDevice()->CreateTexture2D(&texDesc, nullptr, &depthstenciltex)))
{
Log.Error("[DX11RenderTarget] Failed to create DepthStencilTexture for render-target!\n");
//Return or the next call will fail too
return;
}
if (FAILED(API::DirectX::DX11Context::GetDevice()->CreateDepthStencilView(depthstenciltex, nullptr, &depthstencilview)))
{
Log.Error("[DX11RenderTarget] Failed to create DepthStencilView for render-target!\n");
}
//render target
D3D11_RENDER_TARGET_VIEW_DESC renderTargetViewDesc;
ZeroMemory(&renderTargetViewDesc, sizeof(D3D11_RENDER_TARGET_VIEW_DESC));
renderTargetViewDesc.Format = texDesc.Format;
renderTargetViewDesc.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2D;
renderTargetViewDesc.Texture2D.MipSlice = 0;
ID3D11RenderTargetView* rtv;
if (FAILED(API::DirectX::DX11Context::GetDevice()->CreateRenderTargetView(texture->tex2D, &renderTargetViewDesc, &rtv)))
{
Log.Error("[DX11RenderTarget] Failed to create render-target-view (RTV)!\n");
return;
}
//binding
Context->OMSetRenderTargets(1, &rtv, rt->depthstencilview);
Shaders:
std::string VertexShader = R"(struct VertexInputType
{
float4 position : POSITION;
float2 tex : TEXCOORD;
};
struct PixelInputType
{
float4 position : SV_POSITION;
float2 tex : TEXCOORD;
};
cbuffer NE_Camera : register(b0)
{
matrix Model;
matrix View;
matrix Projection;
};
PixelInputType main(VertexInputType input)
{
PixelInputType output;
// Calculate the position of the vertex against the world, view, and projection matrices.
output.position = mul(Model, input.position);
output.position = mul(View, output.position);
output.position = mul(Projection, output.position);
// Store the input texture coordinates for the pixel shader to use.
output.tex = input.tex;
return output;
})";
std::string PixelShader = R"(
struct PixelInputType
{
float4 position : SV_POSITION;
float2 tex : TEXCOORD;
};
Texture2D NE_Tex_Diffuse : register(t0);
SamplerState NE_Tex_Diffuse_Sampler : register(s0);
float4 main(PixelInputType input) : SV_TARGET
{
return NE_Tex_Diffuse.Sample(NE_Tex_Diffuse_Sampler, input.tex);
}
)";
std::string ScreenVertexShader = R"(struct VertexInputType
{
float2 position : POSITION;
float2 tex : TEXCOORD;
};
struct PixelInputType
{
float4 position : SV_POSITION;
float2 tex : TEXCOORD;
};
PixelInputType main(VertexInputType input)
{
PixelInputType output;
// The position is already in clip space; just expand it to float4.
output.position = float4(input.position.x,input.position.y,0.0f,1.0f);
// Store the input texture coordinates for the pixel shader to use.
output.tex = input.tex;
return output;
})";
std::string ScreenPixelShader = R"(
struct PixelInputType
{
float4 position : SV_POSITION;
float2 tex : TEXCOORD;
};
Texture2D ScreenTexture : register(t0);
SamplerState ScreenTexture_Sampler : register(s0);
float4 main(PixelInputType input) : SV_TARGET
{
return float4(ScreenTexture.Sample(ScreenTexture_Sampler, input.tex).rgb, 1.0f);
}
)";
Full Source Code
Also, I captured a frame with the Visual Studio graphics debugger and noticed that the render-to-texture draw call shows the PS stage as "stage didn't run, no output".
Note: I know that the scene should be flipped in DirectX.
I have found the bug causing this problem: I wasn't clearing the depth-stencil view when rendering. I wonder why clearing the DSV is essential for render-target output.
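A likely explanation: if the depth-stencil view still holds the previous frame's depth values, the depth test can reject every new fragment, so the pipeline runs but nothing is written to the render target. A minimal sketch of the per-frame setup, reusing names from the snippets above:
// Clear both views before drawing into the render target each frame.
// Without the DSV clear, stale depth values can fail the depth test
// for every fragment, leaving only the clear color in the texture.
float clearColor[4] = { 0.0f, 0.0f, 0.0f, 1.0f };
Context->OMSetRenderTargets(1, &rtv, rt->depthstencilview);
Context->ClearRenderTargetView(rtv, clearColor);
Context->ClearDepthStencilView(rt->depthstencilview,
    D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0);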

DirectX alpha blending not working in UWP

I have an application based on the Windows 8 DirectX template from Visual Studio, and I'm trying to draw a texture with an alpha channel using the following D3D11_BLEND_DESC:
D3D11_BLEND_DESC blendDesc{};
blendDesc.RenderTarget[0].BlendEnable = TRUE;
blendDesc.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
blendDesc.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
blendDesc.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_INV_DEST_ALPHA;
blendDesc.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ONE;
blendDesc.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
blendDesc.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
blendDesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
ThrowIfFailed(device->CreateBlendState(&blendDesc, &_blendState));
And I'm binding this blend state this way:
float blendFactor[4] = { 0.0f, 0.0f, 1.0f, 1.0f };
UINT sampleMask = 0xffffffff;
context->OMSetBlendState(_blendState.Get(), blendFactor, sampleMask);
Here is my Vertex Shader:
cbuffer ModelViewProjectionConstantBuffer : register(b0)
{
matrix model;
matrix view;
matrix projection;
};
struct VertexShaderInput
{
float3 pos : POSITION;
float2 texcoord : TEXCOORD;
};
struct PixelShaderInput
{
float4 pos : SV_POSITION;
float2 texcoord : TEXCOORD;
};
PixelShaderInput main(VertexShaderInput input)
{
PixelShaderInput output;
float4 pos = float4(input.pos, 1.0f);
pos = mul(pos, model);
pos = mul(pos, view);
pos = mul(pos, projection);
output.pos = pos;
output.texcoord = input.texcoord;
return output;
}
Pixel Shader:
Texture2D tex0;
SamplerState s0;
struct PixelShaderInput
{
float4 pos : SV_POSITION;
float2 texcoord : TEXCOORD;
};
float4 main(PixelShaderInput input) : SV_TARGET
{
float4 color = tex0.Sample(s0, input.texcoord);
return color;
}
And that is my texture:
D3D11_TEXTURE2D_DESC textureDesc{};
textureDesc.Width = width;
textureDesc.Height = height;
textureDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
textureDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
textureDesc.MipLevels = textureDesc.ArraySize = 1;
textureDesc.SampleDesc.Count = 1;
ThrowIfFailed(device->CreateTexture2D(&textureDesc, NULL, &_texture));
ThrowIfFailed(device->CreateShaderResourceView(_texture.Get(), NULL, &_textureView));
But the problem is that the blending is not working properly. A texture only blends with the background, not with another texture; e.g., if I call the ClearRenderTargetView method with a blue color as the third parameter, then the partially transparent texture will be bluish, but it will not be blended with an overlapping texture.
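For reference, the blend description above encodes the following per-pixel arithmetic, shown here as a CPU-side illustration only; dst is simply whatever the render target already holds, whether that came from ClearRenderTargetView or from an earlier draw:
// Illustration only: the blend equation the description above encodes.
struct Col { float r, g, b, a; };

Col blend(Col src, Col dst)
{
    Col out;
    out.r = src.a * src.r + (1.0f - src.a) * dst.r; // SRC_ALPHA / INV_SRC_ALPHA
    out.g = src.a * src.g + (1.0f - src.a) * dst.g;
    out.b = src.a * src.b + (1.0f - src.a) * dst.b;
    out.a = (1.0f - dst.a) * src.a + dst.a;         // INV_DEST_ALPHA / ONE
    return out;
}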

Rendering a world-space line in DirectX 11

I am rendering a spline in DirectX 11, but I'm having an issue where it appears to be stuck in screen space, and I can't convince it to be in world space.
The spline is initially defined as a std::vector<DirectX::XMFLOAT3> of the control points, which is expanded to a vector of the same type holding the actual points on the spline, called linePoints. Then the vertex, index, and constant buffers are created in this function:
void Spline::createBuffers(ID3D11Device* device)
{
Vertex vertices[100];
for (int i = 0; i < 100; i++)
{
Vertex v;
v.position = linePoints.at(i);
v.colour = XMFLOAT4(0, 0, 0, 1.0);
vertices[i] = v;
}
D3D11_BUFFER_DESC bd;
ZeroMemory(&bd, sizeof(D3D11_BUFFER_DESC));
bd.Usage = D3D11_USAGE_DEFAULT;
bd.ByteWidth = sizeof(Vertex) * linePoints.size();
bd.BindFlags = D3D11_BIND_VERTEX_BUFFER;
bd.CPUAccessFlags = 0;
D3D11_SUBRESOURCE_DATA InitData;
ZeroMemory(&InitData, sizeof(InitData));
InitData.pSysMem = vertices;
device->CreateBuffer(&bd, &InitData, &vertexBuffer);
ZeroMemory(&bd, sizeof(D3D11_BUFFER_DESC));
bd.Usage = D3D11_USAGE_DEFAULT;
bd.CPUAccessFlags = 0;
bd.ByteWidth = sizeof(WORD) * linePoints.size();
bd.BindFlags = D3D11_BIND_INDEX_BUFFER;
WORD indices[200];
int count = 0;
for (WORD i = 0; i < 100; i++)
{
indices[count] = i;
indices[count + 1] = i + 1;
count += 2;
}
ZeroMemory(&InitData, sizeof(InitData));
InitData.pSysMem = indices;
device->CreateBuffer(&bd, &InitData, &indexBuffer);
ZeroMemory(&bd, sizeof(bd));
bd.Usage = D3D11_USAGE_DEFAULT;
bd.ByteWidth = sizeof(LineCBuffer);
bd.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
bd.CPUAccessFlags = 0;
device->CreateBuffer(&bd, nullptr, &constBuffer);
}
The draw function is:
void Spline::Draw(ID3D11PixelShader* pShader, ID3D11VertexShader* vShader, Camera& cam)
{
context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_LINESTRIP);
context->VSSetShader(vShader, nullptr, 0);
context->PSSetShader(pShader, nullptr, 0);
LineCBuffer lCB;
lCB.world = XMMatrixIdentity();
lCB.view = XMMatrixTranspose(cam.getView());
lCB.projection = XMMatrixTranspose(cam.getProjection());
context->UpdateSubresource(constBuffer, 0, nullptr, &lCB, 0, 0);
context->VSSetConstantBuffers(0, 1, &constBuffer);
UINT stride = sizeof(Vertex);
UINT offset = 0;
context->IASetVertexBuffers(0, 1, &vertexBuffer, &stride, &offset);
context->IASetIndexBuffer(indexBuffer, DXGI_FORMAT_R16_UINT, 0);
context->DrawIndexed(100, 0, 0);
context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
}
And the entire shader file is:
cbuffer lineCBuffer : register(b0)
{
matrix World;
matrix View;
matrix Projection;
};
struct VS_IN
{
float3 position : POSITION;
float4 colour : COLOUR;
};
struct VS_OUT
{
float4 pos : SV_POSITION;
float4 colour : COLOUR;
};
VS_OUT lineVertexShader(float3 position : POSITION, float4 colour : COLOUR)
{
VS_OUT output = (VS_OUT)0;
output.pos = mul(position, World);
output.pos = mul(output.pos, View);
output.pos = mul(output.pos, Projection);
output.pos.w = 1.0f;
output.colour = colour;
return output;
}
float4 linePixelShader(VS_OUT input) : SV_TARGET
{
return input.colour;
}
The issue is that the start of the line (especially when it is set to (0,0,0)) is anchored to the viewport. When the start of the line is at (0,0,0), it will not leave the centre of the screen, even when it should be offscreen.
I think you are doing something wrong with your multiplication.
VS_OUT lineVertexShader(float3 position : POSITION, float4 colour : COLOUR)
{
VS_OUT output = (VS_OUT)0;
output.pos = mul(position, World);
output.pos = mul(output.pos, View);
output.pos = mul(output.pos, Projection);
output.pos.w = 1.0f;
output.colour = colour;
return output;
}
You use a float3 as the position input. A position is a point, so you should use
float4(position, 1.0f)
as a point in 3D space. If your float3 were a direction vector, you would create
float4(position, 0.0f)
So your Vertex Shader should look like the following:
VS_OUT lineVertexShader(float3 position : POSITION, float4 colour : COLOUR)
{
VS_OUT output;
output.pos = mul(float4(position,1.0), mul(mul(World,View),Projection));
output.colour = colour;
return output;
}
One more thing: do not set pos.w to 1! The rasterizer automatically performs the perspective divide on the value of SV_POSITION, and the homogeneous value is then passed to the pixel shader. Sometimes you really do want to set z and w to 1, e.g. for a cubemap rendered at maximum distance, but I think that is another error here.
When you don't need world, view, and projection in separate multiplications, why not precompute the product on the CPU and just push the final worldViewProj matrix to your shaders?
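As a sketch of that suggestion, assuming the constant buffer is reduced to a single combined matrix (worldViewProj is a hypothetical field name):
// Build the combined matrix once per draw on the CPU and upload it,
// instead of doing three mul() calls per vertex in the shader.
XMMATRIX world = XMMatrixIdentity();
XMMATRIX worldViewProj = world * cam.getView() * cam.getProjection();

LineCBuffer lCB;
lCB.worldViewProj = XMMatrixTranspose(worldViewProj); // transpose for HLSL's default column-major layout
context->UpdateSubresource(constBuffer, 0, nullptr, &lCB, 0, 0);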
Good luck!

Set uniform color to pixel shader

I am a newbie with DirectX 11, so sorry for the dull question.
I have a pixel shader:
struct PixelShaderInput
{
float4 pos : SV_POSITION;
float2 texCoord : TEXCOORD0;
};
Texture2D s_texture : register(t0);
SamplerState s_sampleParams
{
Filter = MIN_MAG_MIP_LINEAR;
AddressU = CLAMP;
AddressV = CLAMP;
};
float4 main(PixelShaderInput input) : SV_TARGET
{
float4 t = s_texture.Sample(s_sampleParams, input.texCoord);
return t;
}
Now I want to add a color to my shader:
struct PixelShaderInput
{
float4 pos : SV_POSITION;
float2 texCoord : TEXCOORD0;
};
Texture2D s_texture : register(t0);
float4 s_color;
SamplerState s_sampleParams
{
Filter = MIN_MAG_MIP_LINEAR;
AddressU = CLAMP;
AddressV = CLAMP;
};
float4 main(PixelShaderInput input) : SV_TARGET
{
float4 t = s_texture.Sample(s_sampleParams, input.texCoord);
return t * s_color;
}
How do I set s_color from C++ code? Do I have to use m_d3dContext->PSSetConstantBuffers? In this case I should change
`float4 s_color;`
to
cbuffer ColorOnlyConstantBuffer : register(b0)
{
float4 m_color;
};
, right?
Or I can keep simple definition float4 s_color; and set it from C++ somehow else?
As far as I remember, you specify the input items for the vertex shader by using a specific structure:
D3D11_INPUT_ELEMENT_DESC inputLayoutDesc[2] =
{
{ "Position",
0,
DXGI_FORMAT_R32G32B32_FLOAT,
0,
D3D11_APPEND_ALIGNED_ELEMENT,
D3D11_INPUT_PER_VERTEX_DATA,
0
},
{ "Color",
0,
DXGI_FORMAT_R32G32B32_FLOAT,
0,
D3D11_APPEND_ALIGNED_ELEMENT,
D3D11_INPUT_PER_VERTEX_DATA,
0
}
};
This structure is being used to create ID3D11InputLayout:
const UINT inputLayoutDescCount = 2;
hr = device->CreateInputLayout(
inputLayoutDesc,
inputLayoutDescCount,
compiledVsShader->GetBufferPointer(),
compiledVsShader->GetBufferSize(),
&inputLayout);
You can then use semantics to specify which element represents which entry:
struct VS_INPUT
{
float4 Pos : POSITION;
float4 Col : COLOR;
};
Edit: in response to comments.
Sorry, I didn't get that. You'll need a constant buffer. Relevant (sample) parts of the code follow; just modify them to your needs:
struct PixelConstantBuffer
{
XMFLOAT4 lightPos;
XMFLOAT4 observerPos;
};
// ***
D3D11_BUFFER_DESC pixelConstantBufferDesc;
pixelConstantBufferDesc.ByteWidth = sizeof(PixelConstantBuffer);
pixelConstantBufferDesc.Usage = D3D11_USAGE_DEFAULT;
pixelConstantBufferDesc.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
pixelConstantBufferDesc.CPUAccessFlags = 0;
pixelConstantBufferDesc.MiscFlags = 0;
pixelConstantBufferDesc.StructureByteStride = 0;
hr = device->CreateBuffer(&pixelConstantBufferDesc, nullptr, &pixelConstantBuffer);
// ***
context->PSSetConstantBuffers(1, 1, &pixelConstantBuffer);
context->PSSetShader(pixelShader, nullptr, 0);
// ***
PixelConstantBuffer pcBuffer;
pcBuffer.lightPos = XMFLOAT4(lightPos.m128_f32[0], lightPos.m128_f32[1], lightPos.m128_f32[2], lightPos.m128_f32[3]);
pcBuffer.observerPos = XMFLOAT4(camPos.m128_f32[0], camPos.m128_f32[1], camPos.m128_f32[2], camPos.m128_f32[3]);
context->UpdateSubresource(pixelConstantBuffer, 0, nullptr, &pcBuffer, 0, 0);
// *** (hlsl)
cbuffer PixelConstantBuffer : register(b1)
{
float4 LightPos;
float4 ObserverPos;
}
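Adapted back to the original question's s_color, a minimal sketch would be (ColorOnlyConstantBuffer mirrors the cbuffer proposed in the question; colorConstantBuffer is an assumed buffer created exactly like pixelConstantBuffer above):
// C++ mirror of: cbuffer ColorOnlyConstantBuffer : register(b0) { float4 m_color; };
struct ColorOnlyConstantBuffer
{
    XMFLOAT4 color; // a single float4 already meets the 16-byte size requirement
};

ColorOnlyConstantBuffer cb;
cb.color = XMFLOAT4(1.0f, 0.0f, 0.0f, 1.0f); // e.g. tint red
context->UpdateSubresource(colorConstantBuffer, 0, nullptr, &cb, 0, 0);
context->PSSetConstantBuffers(0, 1, &colorConstantBuffer); // slot 0 = register(b0)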

Using shader resources in HLSL (Port DX9->DX10)

I'm trying to port my DX9 volume renderer to a DX10 version. Currently, I'm stuck on the following error:
D3D10: ERROR: ID3D10Device::DrawIndexed: The view dimension declared in the shader code does not match the view type bound to slot 0 of the Pixel Shader unit. This is invalid if the shader actually uses the view (e.g. it is not skipped due to shader code branching). [ EXECUTION ERROR #354: DEVICE_DRAW_VIEW_DIMENSION_MISMATCH ]
My guess is that I'm not sending the 2D and/or 3D textures (shader resources) to the shader in the correct way, or am not using them in the correct (DX10) way. The DX9 code was something like the following (simplified for the sake of this question):
HRESULT hr;
int nVertexShaderIndex = 0;
// Setup the 2D Dependent Lookup Texture
hr = m_pDevice->SetTexture(0, lookupTexture); // lookupTexture is a LPDIRECT3DTEXTURE9
if (hr != D3D_OK) {
//handle error
}
m_pDevice->SetSamplerState(0, D3DSAMP_ADDRESSU, D3DTADDRESS_CLAMP);
m_pDevice->SetSamplerState(0, D3DSAMP_ADDRESSV, D3DTADDRESS_CLAMP);
m_pDevice->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_POINT);
m_pDevice->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_POINT);
// Maximum Intensity
m_pDevice->SetRenderState( D3DRS_ALPHABLENDENABLE, TRUE); // Enable Alpha blend
m_pDevice->SetRenderState( D3DRS_SRCBLEND, D3DBLEND_ONE); // 1 * SRC color
m_pDevice->SetRenderState( D3DRS_DESTBLEND, D3DBLEND_ONE); // 1 * DST color
m_pDevice->SetRenderState( D3DRS_BLENDOP, D3DBLENDOP_MAX); // MAX blend
m_pDevice->SetRenderState( D3DRS_ZENABLE, D3DZB_FALSE ); // Disable Z
A 3D volume texture with the actual data is sent in a similar manner. The corresponding pixel shader code:
PS_OUTPUT Main(VS_OUTPUT vsIn,
uniform sampler2D lookupTexture : TEXUNIT0,
uniform sampler3D dataTexture : TEXUNIT1)
{
PS_OUTPUT psOut;
float dataValue;
psOut.color = SampleWith2DLookup(vsIn.TexCoord0,
lookupTexture,
dataTexture,
dataValue);
return psOut;
}
float4 LookupIn2DTexture(float value,
uniform sampler2D lookupTexture)
{
float2 lutCoord;
float4 outColor;
// Build a 2D Coordinate for lookup
lutCoord[0] = value;
lutCoord[1] = 0.0f;
outColor = tex2D(lookupTexture, lutCoord);
return(outColor);
}
float4 SampleWith2DLookup(const float3 TexCoord,
uniform sampler2D lookupTexture,
uniform sampler3D dataTexture,
out float dataValue)
{
float value;
float4 outputColor;
value = Sample(TexCoord, dataTexture);
outputColor = LookupIn2DTexture(value, lookupTexture);
dataValue = value;
return(outputColor);
}
In DX10 we can simplify some of the shader code (as far as I understand). I create an empty texture and fill it with Map()/Unmap(). Next, I bind it as a shader resource to my PS. The C++ and shader code become the following:
// CREATE THE EMPTY TEXTURE
D3D10_TEXTURE2D_DESC desc;
ZeroMemory(&desc, sizeof(desc));
desc.Width = 4096;
desc.Height = 1;
desc.ArraySize = 1;
desc.MipLevels = 1;
desc.Format = GetHardwareResourceFormatDX10();
desc.Usage = D3D10_USAGE_DYNAMIC;
desc.BindFlags = D3D10_BIND_SHADER_RESOURCE;
desc.CPUAccessFlags = D3D10_CPU_ACCESS_WRITE;
desc.SampleDesc.Count = 1;
hr = m_pDeviceDX10->CreateTexture2D(&desc, NULL, &lookupTexture);
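The Map()/Unmap() fill mentioned above is not shown in the question; a sketch for this 4096x1 dynamic texture could look like the following, assuming a 4-byte-per-texel format and a hypothetical lookupData source array:
// Fill the dynamic lookup texture; subresource 0 is the only mip level.
D3D10_MAPPED_TEXTURE2D mapped;
hr = lookupTexture->Map(0, D3D10_MAP_WRITE_DISCARD, 0, &mapped);
if (SUCCEEDED(hr))
{
    memcpy(mapped.pData, lookupData, 4096 * 4); // a single row, so no pitch loop needed
    lookupTexture->Unmap(0);
}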
bind to shader:
// SEND TO SHADER
ID3D10ShaderResourceView* pTexDepSurface = NULL;
D3D10_SHADER_RESOURCE_VIEW_DESC srvDesc;
D3D10_TEXTURE2D_DESC desc;
pTexDep->GetDesc( &desc );
srvDesc.Format = desc.Format;
srvDesc.ViewDimension = D3D10_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MipLevels = desc.MipLevels;
srvDesc.Texture2D.MostDetailedMip = desc.MipLevels -1;
hr = m_pDeviceDX10->CreateShaderResourceView(pTexDep, &srvDesc, &pTexDepSurface);
if (FAILED(hr)) {
//handle here
}
m_pDeviceDX10->PSSetShaderResources(0,1, &pTexDepSurface);
Use in shader:
Texture2D LookupTexture : register(t0);
SamplerState LookupSampler : register(s0);
Texture2D VolumeTexture : register(t1);
SamplerState VolumeSampler : register(s1);
PS_OUTPUT Main(VS_OUTPUT vsIn,
uniform sampler2D lookupTexture : TEXUNIT0,
uniform sampler3D dataTexture : TEXUNIT1)
{
PS_OUTPUT psOut;
float dataValue;
dataValue = VolumeTexture.Sample(VolumeSampler,vsIn.TexCoord0);
psOut.color = LookupTexture.Sample(LookupSampler,dataValue);
return psOut;
}
Note that it is just an educated guess that the error is introduced by this code. If the code above looks correct to you, please say so as well (in the comments); in that case, a new direction in which to look for a solution would be appreciated.
After a day's work I found my problem: I forgot to recompile my updated shaders, so the DX9 version was still being loaded instead of the DX10 version. A very stupid, but also very common, mistake.
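One way to guard against that kind of stale-blob mistake is to compile the shaders from source at startup; a sketch with the D3DX10 compiler of that era, where the file name is hypothetical and "Main"/ps_4_0 match the shader above:
// Compile the pixel shader from source so edits can't be masked
// by a stale precompiled blob.
ID3D10Blob* psBlob = NULL;
ID3D10Blob* errors = NULL;
hr = D3DX10CompileFromFile(L"VolumePS.hlsl", NULL, NULL,
                           "Main", "ps_4_0", 0, 0, NULL,
                           &psBlob, &errors, NULL);
if (FAILED(hr) && errors != NULL)
{
    OutputDebugStringA((const char*)errors->GetBufferPointer()); // print compile errors
}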