I am trying to implement deferred shading in DirectX 11 with C++. I have managed to create the G-Buffer and render my scene to it (checked with GPU PerfStudio). I am having difficulty with the final lighting stage: I am not able to read from the textures (Diffuse, Normal, Specular) using the coordinates returned in SV_Position.
This is the pixel shader used to render the lights as shapes.
Texture2D<float4> Diffuse : register( t0 );
Texture2D<float4> Normal : register( t1 );
Texture2D<float4> Position : register( t2 );
cbuffer MaterialBuffer : register( b1 )
{
float4 ambient;
float4 diffuse;
float4 specular;
}
//--------------------------------------------------------------------------------------
struct VS_OUTPUT
{
float4 Pos : SV_POSITION;
float4 PosVS: POSITION;
float4 Color : COLOR0;
float4 normal : NORMAL;
};
float4 main(VS_OUTPUT input) : SV_TARGET
{
//return Diffuse[screenPosition.xy]+Normal[screenPosition.xy]+Position[screenPosition.xy];
//return float4(1.0f, 1.0f, 1.0f, 1.0f);
//--------------------------------------------------------------------------------------
//Problematic line
float4 b=Diffuse.Load(int3(input.Pos.xy,0));
//--------------------------------------------------------------------------------------
return b;
}
I have checked with GPU PerfStudio that the input textures are properly bound.
The above code returns the color I used to clear the texture. (From my debugging I have found that it is returning the value at pixel location 0,0.)
If I replace the problematic line with:
float4 b=Diffuse.Load(int3(350,300,0));
Then it renders the value at pixel location 350,300 with the proper shape of the light.
Thanks
Did you try the debug flag D3D11_CREATE_DEVICE_DEBUG at device creation and look at the output log? You may have a signature mismatch between the vertex and pixel stages. That would explain why the SV_Position semantic does not behave correctly.
I solved the problem. I was using the same z-buffer for rendering the light geometries that I had previously used for the G-buffer.
Thank you for your response.
Some 3D meshes exported to the Wavefront .obj format usually come with a .mtl file that holds additional data about the texture they use and their materials. When exported from Blender, they always come with Ambient, Diffuse, Specular, and Emissive RGB data as part of the material, but I'm not sure how I can use this data in the pixel shader to get the right color output.
I would appreciate it if anyone could explain how to use these materials; any code sample would be very welcome.
Traditional materials and lighting models use "Ambient", "Diffuse", "Specular", and "Emissive" colors, which is why you find those in Wavefront OBJ files. These can often be replaced by, or multiplied with, texture colors.
The (now defunct) XNA Game Studio product did a good job of providing simple 'classic' shaders in the BasicEffect "Stock Shaders". I use them in the DirectX Tool Kit for DX11 and DX12.
Take a look at BasicEffect.fx for a traditional material pixel shader. If you are looking mostly for pixel-shader handling, that's "per-pixel lighting" as opposed to "vertex lighting", which was more common back when we had less powerful GPUs.
Here's an 'inlined' version so you can follow it all in one place:
struct VSInputNmTx
{
float4 Position : SV_Position;
float3 Normal : NORMAL;
float2 TexCoord : TEXCOORD0;
};
Texture2D<float4> Texture : register(t0);
sampler Sampler : register(s0);
cbuffer Parameters : register(b0)
{
float4 DiffuseColor : packoffset(c0);
float3 EmissiveColor : packoffset(c1);
float3 SpecularColor : packoffset(c2);
float SpecularPower : packoffset(c2.w);
float3 LightDirection[3] : packoffset(c3);
float3 LightDiffuseColor[3] : packoffset(c6);
float3 LightSpecularColor[3] : packoffset(c9);
float3 EyePosition : packoffset(c12);
float3 FogColor : packoffset(c13);
float4 FogVector : packoffset(c14);
float4x4 World : packoffset(c15);
float3x3 WorldInverseTranspose : packoffset(c19);
float4x4 WorldViewProj : packoffset(c22);
};
struct VSOutputPixelLightingTx
{
float2 TexCoord : TEXCOORD0;
float4 PositionWS : TEXCOORD1;
float3 NormalWS : TEXCOORD2;
float4 Diffuse : COLOR0;
float4 PositionPS : SV_Position;
};
// Vertex shader: pixel lighting + texture.
VSOutputPixelLightingTx VSBasicPixelLightingTx(VSInputNmTx vin)
{
VSOutputPixelLightingTx vout;
vout.PositionPS = mul(vin.Position, WorldViewProj);
vout.PositionWS.xyz = mul(vin.Position, World).xyz;
// ComputeFogFactor
vout.PositionWS.w = saturate(dot(vin.Position, FogVector));
vout.NormalWS = normalize(mul(vin.Normal, WorldInverseTranspose));
vout.Diffuse = float4(1, 1, 1, DiffuseColor.a);
vout.TexCoord = vin.TexCoord;
return vout;
}
struct PSInputPixelLightingTx
{
float2 TexCoord : TEXCOORD0;
float4 PositionWS : TEXCOORD1;
float3 NormalWS : TEXCOORD2;
float4 Diffuse : COLOR0;
};
// Pixel shader: pixel lighting + texture.
float4 PSBasicPixelLightingTx(PSInputPixelLightingTx pin) : SV_Target0
{
float4 color = Texture.Sample(Sampler, pin.TexCoord) * pin.Diffuse;
float3 eyeVector = normalize(EyePosition - pin.PositionWS.xyz);
float3 worldNormal = normalize(pin.NormalWS);
ColorPair lightResult = ComputeLights(eyeVector, worldNormal, 3);
color.rgb *= lightResult.Diffuse;
// AddSpecular
color.rgb += lightResult.Specular * color.a;
// ApplyFog (we passed fogfactor in via PositionWS.w)
color.rgb = lerp(color.rgb, FogColor * color.a, pin.PositionWS.w);
return color;
}
Here is the helper function ComputeLights which implements a Blinn-Phong reflection model for the specular highlight.
struct ColorPair
{
float3 Diffuse;
float3 Specular;
};
ColorPair ComputeLights(float3 eyeVector, float3 worldNormal, uniform int numLights)
{
float3x3 lightDirections = 0;
float3x3 lightDiffuse = 0;
float3x3 lightSpecular = 0;
float3x3 halfVectors = 0;
[unroll]
for (int i = 0; i < numLights; i++)
{
lightDirections[i] = LightDirection[i];
lightDiffuse[i] = LightDiffuseColor[i];
lightSpecular[i] = LightSpecularColor[i];
halfVectors[i] = normalize(eyeVector - lightDirections[i]);
}
float3 dotL = mul(-lightDirections, worldNormal);
float3 dotH = mul(halfVectors, worldNormal);
float3 zeroL = step(0, dotL);
float3 diffuse = zeroL * dotL;
float3 specular = pow(max(dotH, 0) * zeroL, SpecularPower) * dotL;
ColorPair result;
result.Diffuse = mul(diffuse, lightDiffuse) * DiffuseColor.rgb + EmissiveColor;
result.Specular = mul(specular, lightSpecular) * SpecularColor;
return result;
}
These BasicEffect shaders don't make use of ambient color, but you could modify them to do so if you wanted. All ambient color does is provide a 'minimum color value' that's independent of dynamic lights.
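For completeness, here is a rough sketch (not BasicEffect code) of how you might consume the raw .mtl values (Ka, Kd, Ks, Ke and the Ns shininess exponent) directly in a pixel shader. The constant buffer layouts, register assignments and the single directional light are assumptions of my own, chosen just for illustration:
// Hypothetical constant buffers; the names, layout and registers are placeholders.
cbuffer MtlMaterial : register(b1)
{
float3 MtlAmbient; // Ka from the .mtl file
float3 MtlDiffuse; // Kd
float3 MtlSpecular; // Ks
float3 MtlEmissive; // Ke
float MtlShininess; // Ns (specular exponent)
};
cbuffer SceneLight : register(b2)
{
float3 LightDir; // normalized direction the light shines in
float3 LightColor;
float3 AmbientLight; // scene-wide ambient intensity
float3 EyePosition;
};
Texture2D<float4> DiffuseTexture : register(t0);
sampler LinearSampler : register(s0);
float4 PSMtlMaterial(float2 uv : TEXCOORD0, float3 positionWS : TEXCOORD1, float3 normalWS : TEXCOORD2) : SV_Target0
{
float3 n = normalize(normalWS);
float3 toEye = normalize(EyePosition - positionWS);
float4 texColor = DiffuseTexture.Sample(LinearSampler, uv);
// Lambert diffuse term and Blinn-Phong specular term for one directional light.
float nDotL = saturate(dot(n, -LightDir));
float3 halfVector = normalize(toEye - LightDir);
float specular = (nDotL > 0) ? pow(saturate(dot(n, halfVector)), MtlShininess) : 0;
// Ambient acts as a minimum light level, emissive is added unlit,
// and the texture modulates the ambient/diffuse/emissive result.
float3 color = (MtlAmbient * AmbientLight + MtlDiffuse * LightColor * nDotL + MtlEmissive) * texColor.rgb + MtlSpecular * LightColor * specular;
return float4(color, texColor.a);
}
You would fill MtlMaterial from the values parsed out of the .mtl file whenever the material changes.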
Note that there is also an unofficial Physically-Based Rendering (PBR) materials extension used by some Wavefront OBJ files; see Extending Wavefront MTL for Physically-Based. More modern geometry formats like glTF assume PBR material properties, meaning things like an albedo texture, normal texture, roughness/metalness texture, etc.
I'm adding a geometry shader (a very simple one) to my DirectX 11 program. I've already got the vertex and pixel shaders written, and they work just as expected: no errors, no warnings. The shaders are simple, too. The vertex shader is:
cbuffer PerApplication : register(b0)
{
matrix projectionMatrix;
}
cbuffer PerFrame : register(b1)
{
matrix viewMatrix;
}
cbuffer PerObject : register(b2)
{
matrix worldMatrix;
}
struct VertexShaderInput
{
float4 position : POSITION;
float4 color: COLOR;
};
struct VertexShaderOutput
{
float4 color : COLOR;
float4 position : SV_POSITION;
};
//entry point
VertexShaderOutput SimpleVertexShader(VertexShaderInput IN)
{
VertexShaderOutput OUT;
matrix mvp = mul(projectionMatrix, mul(viewMatrix, worldMatrix));
OUT.color = IN.color;
OUT.position = mul(mvp, IN.position);
return OUT;
}
The pixel shader is:
struct PixelShaderInput
{
float4 color : COLOR;
};
float4 SimplePixelShader(PixelShaderInput IN) : SV_TARGET
{
return IN.color;
}
Well, as I've said, that's working pretty well. Then I'm adding a geometry shader, which doesn't actually do anything; it just takes a triangle and returns the same triangle. The geometry shader is:
struct VertexInput
{
float4 color : COLOR;
float4 position : POSITIONT;
};
struct VertexOutput
{
float4 color : COLOR;
float4 position : SV_Position;
};
[maxvertexcount(3)]
void SimpleGeometryShader(triangle VertexInput input[3], inout TriangleStream<VertexOutput> stream)
{
VertexOutput v1 = { input[0].color, input[0].position };
stream.Append(v1);
VertexOutput v2 = { input[1].color, input[1].position };
stream.Append(v2);
VertexOutput v3 = { input[2].color, input[2].position };
stream.Append(v3);
stream.RestartStrip();
}
Doing this also requires changing the vertex shader, which now returns:
struct VertexShaderOutput
{
float4 color : COLOR;
float4 position : POSITIONT; //I'm not returning SV_Position in vertex shader anymore.
};
The program itself works as expected; I see what I expect to see. But there are now two D3D11 errors:
D3D11 ERROR: ID3D11DeviceContext::DrawIndexed: Vertex Shader - Geometry Shader linkage error: Signatures between stages are incompatible. The input stage requires Semantic/Index (POSITIONT,0) as input, but it is not provided by the output stage. [ EXECUTION ERROR #342: DEVICE_SHADER_LINKAGE_SEMANTICNAME_NOT_FOUND]
D3D11 ERROR: ID3D11DeviceContext::DrawIndexed: Geometry Shader - Pixel Shader linkage error: Signatures between stages are incompatible. The input stage requires Semantic/Index (TEXCOORD,0) as input, but it is not provided by the output stage. [ EXECUTION ERROR #342: DEVICE_SHADER_LINKAGE_SEMANTICNAME_NOT_FOUND]
Both are pretty strange. The vertex shader clearly returns a POSITIONT, and COLOR and POSITIONT are in the same order. What's my mistake?
The cause of the problem was that, besides rendering the scene using the shaders I provided, I was also drawing text using a library. Obviously, some vertex and pixel shaders with their own input and output signatures were involved in that. Setting the geometry shader to null solved my problem.
I have searched a lot on Google, and most examples achieve a gradient using texture coordinates. But I don't have texture coordinates. I am working on 3D text to which I want to apply a gradient color. Is that possible? If yes, how? Is it necessary to have texture coordinates to obtain a color gradient?
The following is the relevant part of my HLSL shader file:
struct VS_INPUT
{
float3 Pos : POSITION;
float3 Norm : NORMAL;
};
struct PS_INPUT
{
float4 Pos : SV_POSITION;
float3 WorldNorm : TEXCOORD0;
float3 CameraPos : TEXCOORD1;
float3 WorldPos : TEXCOORD2;
};
//--------------------------------------------------------------------------------------
// Vertex Shader
//--------------------------------------------------------------------------------------
PS_INPUT VS( VS_INPUT input )
{
PS_INPUT output = (PS_INPUT)0;
float4 worldPos = mul( float4(input.Pos,1), World );
float4 cameraPos = mul( worldPos, View );
output.WorldPos = worldPos;
output.WorldNorm = normalize(mul( input.Norm, (float3x3)World ));
output.CameraPos = cameraPos;
output.Pos = mul( cameraPos, Projection );
return output;
}
//--------------------------------------------------------------------------------------
// Pixel Shader Without Light
//--------------------------------------------------------------------------------------
float4 PS( PS_INPUT input) : SV_Target
{
float4 finalColor = {1.0f, 0.0f, 0.0f, 1.0f};
return finalColor;
}
//--------------------------------------------------------------------------------------
technique10 Render
{
pass P0
{
SetVertexShader( CompileShader( vs_4_0_level_9_1, VS() ) );
SetGeometryShader( NULL );
SetPixelShader( CompileShader( ps_4_0_level_9_1, PS() ) );
}
}
You don't need texture coordinates; they are only one possible way (the most flexible one) to store the information describing how your gradient should look, such as its origin, direction and length.
To get a gradient from left to right across your 3D text, you need to know where left and right are in your shader so you can pick the appropriate color. I assume your text changes dynamically, so you need to transport this information into the shader, either by placing it directly in the vertices as texture coordinates or by using a constant buffer. The latter method only works if you draw at most one text per draw call, because the gradient data stays the same for all triangles within a draw call.
If your situation is more specialized, for example your text is axis-aligned, you can take that axis and the world position in your pixel shader to determine the relative position within your gradient, but this method makes many assumptions, and you still need the left and right extents of your text (see the sketch below).
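Here is a minimal sketch of that last idea, reusing the PS_INPUT struct above. It assumes the text runs along the world-space X axis and that you upload its left/right extents plus the two gradient colors in a constant buffer each frame; all of the names below are placeholders of my own:
cbuffer GradientBuffer : register(b1)
{
float4 GradientColorLeft;
float4 GradientColorRight;
float GradientMinX; // world-space left extent of the text
float GradientMaxX; // world-space right extent of the text
float2 Padding;
};
float4 PS_Gradient(PS_INPUT input) : SV_Target
{
// Map the world-space x coordinate into [0,1] across the text's width
// and blend between the two gradient colors.
float t = saturate((input.WorldPos.x - GradientMinX) / (GradientMaxX - GradientMinX));
return lerp(GradientColorLeft, GradientColorRight, t);
}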
I have the following vertex and pixel shaders:
struct VS_INPUT
{
float4 Position : POSITION0;
float2 TexCoord : TEXCOORD0;
float4 Color : TEXCOORD1;
};
struct VS_OUTPUT
{
float4 Position : POSITION0;
float4 Color : COLOR0;
float2 TexCoord : TEXCOORD0;
};
float4x4 projview_matrix;
VS_OUTPUT vs_main(VS_INPUT Input)
{
VS_OUTPUT Output;
Output.Position = mul(Input.Position, projview_matrix);
Output.Color = Input.Color;
Output.TexCoord = Input.TexCoord;
return Output;
}
The pixel shader is:
texture tex;
sampler2D s = sampler_state {
texture = <tex>;
};
float4 ps_main(VS_OUTPUT Input) : COLOR0
{
float4 pixel = tex2D(s, Input.TexCoord.xy);
return pixel;
}
This is for a 2D game. The vertices of the quads contain tinting colors that I want to use to tint the bitmap. How can I obtain the color of the current vertex so I can multiply it by the current pixel color in the pixel shader?
Thanks
In your pixel shader, do:
float4 pixel = tex2D(s, Input.TexCoord.xy) * Input.Color;
The Input.Color value will be linearly interpolated across your plane for you, just like Input.TexCoord is. To blend two color vectors together, you simply multiply them. It may also be advisable to do:
float4 pixel = tex2D(s, Input.TexCoord.xy) * Input.Color;
pixel = saturate(pixel);
The saturate() function clamps each component of your color to the range 0.0 to 1.0, which may avoid possible display artifacts.
I've been trying to code a geometry shader in order to generate billboard systems as explained in Frank Luna's book Introduction to 3D Game Programming with DirectX. I insert the shader into my technique as follows:
pass P3
{
SetVertexShader(CompileShader(vs_4_0, VS()));
SetGeometryShader(CompileShader(gs_4_0, GS()));
SetPixelShader(CompileShader( ps_4_0, PS_NoSpecular()));
SetBlendState(SrcAlphaBlendingAdd, float4( 0.0f, 0.0f, 0.0f, 0.0f ), 0xFFFFFFFF );
}
However, this gives the error 1>D3DEffectCompiler : error : No valid VertexShader-GeometryShader combination could be found in Technique render, Pass P3.
The structs for both VS and GS are given below:
struct GS_INPUT
{
float4 PosH : SV_POSITION;
float4 PosW : POSITION;
float2 Tex : TEXCOORD;
float3 N : NORMAL;
float3 Tangent : TANGENT;
float3 Binormal : BINORMAL;
};
struct VS_INPUT
{
float4 Pos : POSITION;
float2 Tex : TEXCOORD;
float3 N : NORMAL;
float3 Tangent : TANGENT;
float3 Binormal : BINORMAL;
};
Should they be the same?
The problem lies with the structs used to pass data between the vertex shader, geometry shader and pixel shader. If the vertex shader outputs, say, a PS_INPUT type, then the geometry shader needs to take that same PS_INPUT as its input type.
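As a minimal sketch of what that means in practice (simplified, not the billboard code from the book): use one struct as both the vertex shader's output and the geometry shader's input, with every field matching. The geometry shader below just passes the triangle through:
struct VS_TO_GS
{
float4 PosH : SV_POSITION;
float2 Tex : TEXCOORD;
};
struct GS_OUTPUT
{
float4 PosH : SV_POSITION;
float2 Tex : TEXCOORD;
};
VS_TO_GS VS(float4 pos : POSITION, float2 tex : TEXCOORD)
{
VS_TO_GS output;
output.PosH = pos; // world/view/projection transforms omitted for brevity
output.Tex = tex;
return output;
}
[maxvertexcount(3)]
void GS(triangle VS_TO_GS input[3], inout TriangleStream<GS_OUTPUT> stream)
{
[unroll]
for (int i = 0; i < 3; ++i)
{
GS_OUTPUT v;
v.PosH = input[i].PosH;
v.Tex = input[i].Tex;
stream.Append(v);
}
}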