How to achieve a left-to-right color gradient for non-standard shapes without using texture coordinates in HLSL?

I have searched a lot on Google, and most of the examples achieve the gradient using texture coordinates. But my mesh has no texture coordinates. I am working on 3D text to which I want to apply a gradient color. Is it possible? If yes, how? Is it necessary to have texture coordinates to obtain a color gradient?
Following is the relevant part of my HLSL shader file:
struct VS_INPUT
{
    float3 Pos : POSITION;
    float3 Norm : NORMAL;
};

struct PS_INPUT
{
    float4 Pos : SV_POSITION;
    float3 WorldNorm : TEXCOORD0;
    float3 CameraPos : TEXCOORD1;
    float3 WorldPos : TEXCOORD2;
};

//--------------------------------------------------------------------------------------
// Vertex Shader
//--------------------------------------------------------------------------------------
PS_INPUT VS( VS_INPUT input )
{
    PS_INPUT output = (PS_INPUT)0;
    float4 worldPos = mul( float4(input.Pos, 1), World );
    float4 cameraPos = mul( worldPos, View );
    output.WorldPos = worldPos.xyz;   // .xyz makes the float4-to-float3 truncation explicit
    output.WorldNorm = normalize( mul( input.Norm, (float3x3)World ) );
    output.CameraPos = cameraPos.xyz;
    output.Pos = mul( cameraPos, Projection );
    return output;
}

//--------------------------------------------------------------------------------------
// Pixel Shader Without Light
//--------------------------------------------------------------------------------------
float4 PS( PS_INPUT input ) : SV_Target
{
    float4 finalColor = { 1.0f, 0.0f, 0.0f, 1.0f };
    return finalColor;
}

//--------------------------------------------------------------------------------------
technique10 Render
{
    pass P0
    {
        SetVertexShader( CompileShader( vs_4_0_level_9_1, VS() ) );
        SetGeometryShader( NULL );
        SetPixelShader( CompileShader( ps_4_0_level_9_1, PS() ) );
    }
}

You don't need texture coordinates; they are only one possible way (the most flexible one) to store the information that defines how your gradient should look, such as its origin, direction, and length.
To have a gradient from left to right across your 3D text, the shader needs to know where left and right are so it can pick the appropriate color. I assume your text changes dynamically, so you have to transport this information into the shader, either by writing it into the vertices directly as texture coordinates or by passing it in a constant buffer. The latter method only works if you draw at most one text per draw call, because the gradient data stays the same across all triangles of a draw call.
If your situation is more constrained, for example if your text is axis-aligned, you can combine that axis with the world position in your pixel shader to determine the relative position within the gradient, but this method makes many assumptions, and you still need the left and right extents of your text; a minimal sketch of the constant-buffer variant follows.
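Here is such a sketch, reusing the PS_INPUT struct from the question. The cbuffer and its members (MinX, MaxX, LeftColor, RightColor) are hypothetical names; you would fill them from the CPU with the world-space extents of the text before each draw call:

// Hypothetical per-draw gradient data, uploaded from the CPU.
cbuffer GradientBuffer
{
    float  MinX;       // world-space x of the text's leftmost point
    float  MaxX;       // world-space x of the text's rightmost point
    float4 LeftColor;  // gradient color at MinX
    float4 RightColor; // gradient color at MaxX
};

float4 PS_Gradient( PS_INPUT input ) : SV_Target
{
    // Map the interpolated world position onto [0, 1] along the gradient axis.
    float t = saturate( (input.WorldPos.x - MinX) / (MaxX - MinX) );
    return lerp( LeftColor, RightColor, t );
}

The same idea works for any gradient axis: replace WorldPos.x with a dot product of the world position and a normalized direction vector.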

Related

Background face is visible over foreground face in the same mesh while using a diffuse shader in DirectX

I am trying to create a simple diffuse shader to paint primitive objects in DirectX 9 and have run into the following problem: when I use a DirectX primitive object like a torus or teapot, some faces in the foreground part of the mesh are invisible. I don't think this is a simple case of missing faces, as I cannot reproduce the behavior with primitive objects like a sphere or box, where no two quads have the same normal. Following are some screenshots in fill and wire-frame modes.
(screenshot: torus in fill mode)
Following is my vertex declaration code.
// vertex position...
D3DVERTEXELEMENT9 element;
element.Stream = 0;
element.Offset = 0;
element.Type = D3DDECLTYPE_FLOAT3;
element.Method = D3DDECLMETHOD_DEFAULT;
element.Usage = D3DDECLUSAGE_POSITION;
element.UsageIndex = 0;
m_vertexElement.push_back(element);
// vertex normal
element.Stream = 0;
element.Offset = 12; //3 floats * 4 bytes per float
element.Type = D3DDECLTYPE_FLOAT3;
element.Method = D3DDECLMETHOD_DEFAULT;
element.Usage = D3DDECLUSAGE_NORMAL;
element.UsageIndex = 0;
m_vertexElement.push_back(element);
And the shader code, still in development:
float4x4 MatWorld : register(c0);
float4x4 MatViewProj : register(c4);
float4 matColor : register(c0);
struct VS_INPUT
{
    float4 Position : POSITION;
    float3 Normal : NORMAL;
};

struct VS_OUTPUT
{
    float4 Position : POSITION;
    float3 Normal : TEXCOORD0;
};

struct PS_OUTPUT
{
    float4 Color : COLOR0;
};

VS_OUTPUT vsmain(in VS_INPUT In)
{
    VS_OUTPUT Out;
    float4 wpos = mul(In.Position, MatWorld);
    Out.Position = mul(wpos, MatViewProj);
    // Cast to float3x3 so only the rotation part of the world matrix affects the normal
    Out.Normal = normalize(mul(In.Normal, (float3x3)MatWorld));
    return Out;
}

PS_OUTPUT psmain(in VS_OUTPUT In)
{
    PS_OUTPUT Out;
    float4 ambient = {0.1, 0.0, 0.0, 1.0};
    float3 light = {1, 0, 0};
    Out.Color = ambient + matColor * saturate(dot(light, In.Normal));
    return Out;
}
I have also tried setting different render states for Depth-Stencil but wasn't successful.
I figured it out! This is a depth-buffer (Z-buffer) issue; you can enable the Z-buffer in your code, either through the fixed pipeline or in the shader (effect file).
To enable the z-buffer through the fixed pipeline:
First, add the following when creating the D3D device:
d3dpp.EnableAutoDepthStencil = TRUE ;
d3dpp.AutoDepthStencilFormat = D3DFMT_D16 ;
Then enable the z-buffer before drawing:
device->SetRenderState(D3DRS_ZENABLE, TRUE) ;
Finally, clear the z-buffer in the render function:
device->Clear( 0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, D3DCOLOR_XRGB(0,0,0), 1.0f, 0 );
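If you go the effect-file route instead, the equivalent states can be set per pass. A minimal sketch, assuming the vsmain/psmain shaders above are wrapped in a hypothetical technique:

technique RenderWithZ
{
    pass P0
    {
        ZEnable      = TRUE;  // same effect as SetRenderState(D3DRS_ZENABLE, TRUE)
        ZWriteEnable = TRUE;  // let this pass write depth as well
        VertexShader = compile vs_2_0 vsmain();
        PixelShader  = compile ps_2_0 psmain();
    }
}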

Reading & Writing to textures in HLSL 5.0 (deferred shading)

I am trying to implement deferred shading in DirectX 11 / C++. I have managed to create the G-buffer and render my scene to it (checked with GPU PerfStudio). I am having difficulty with the final lighting stage: I am not able to read from the textures (diffuse, normal, specular) using the coordinates returned in SV_Position.
This is the pixel shader used to render the lights as shapes:
Texture2D<float4> Diffuse : register( t0 );
Texture2D<float4> Normal : register( t1 );
Texture2D<float4> Position : register( t2 );

cbuffer MaterialBuffer : register( b1 )
{
    float4 ambient;
    float4 diffuse;
    float4 specular;
}

//--------------------------------------------------------------------------------------
struct VS_OUTPUT
{
    float4 Pos : SV_POSITION;
    float4 PosVS : POSITION;
    float4 Color : COLOR0;
    float4 normal : NORMAL;
};

float4 main(VS_OUTPUT input) : SV_TARGET
{
    //return Diffuse[screenPosition.xy]+Normal[screenPosition.xy]+Position[screenPosition.xy];
    //return float4(1.0f, 1.0f, 1.0f, 1.0f);
    //--------------------------------------------------------------------------------------
    // Problematic line
    float4 b = Diffuse.Load(int3(input.Pos.xy, 0));
    //--------------------------------------------------------------------------------------
    return b;
}
I have checked with GPU PerfStudio that the input textures are properly bound.
The above code returns the color I used to clear the texture. (From my debugging I have found that it returns the value at pixel location 0,0.)
If I replace the problematic line with:
float4 b = Diffuse.Load(int3(350, 300, 0));
then it renders the value at pixel location 350,300, with the proper shape of the light.
Thanks
Have you tried creating the device with the debug flag D3D11_CREATE_DEVICE_DEBUG and looking at the output log? You may have a signature mismatch between the vertex and pixel stages, which would explain why the SV_Position semantic does not behave correctly.
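For reference, the safest pattern against such mismatches is to declare one struct and use it both as the vertex shader's output and the pixel shader's input, so the semantics and their order cannot diverge. A sketch, where the VS body is only a placeholder:

struct VS_OUTPUT
{
    float4 Pos : SV_POSITION;
    float4 PosVS : POSITION;
    float4 Color : COLOR0;
    float4 normal : NORMAL;
};

// The vertex shader returns the exact struct the pixel shader consumes.
VS_OUTPUT VS_Light( float4 pos : POSITION )
{
    VS_OUTPUT output = (VS_OUTPUT)0;
    // ... transform pos and fill PosVS, Color and normal here ...
    return output;
}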
I solved the problem. I was using the same z-buffer for rendering the light geometries that I had used previously for the G-buffer.
Thank you for your response.

Pre-Pass Lighting OpenGL implementation artifact

I am implementing the Pre-Pass Lighting algorithm in OpenGL for my master's dissertation project, after having implemented a deferred renderer as well. The deferred renderer works perfectly, and I based the PPL implementation on it. I get a very weird artifact after the lighting pass of the algorithm: the data contained in the L-buffer, where I accumulate the contributions of the lights in the scene, is correct, but it is slightly offset with respect to the geometry, so when I apply it to the scene in the material pass the misalignment is clearly visible! (I can't post the image here, but here is a link to it: http://postimage.org/image/kxhlbnl9v/)
It looks like the light-map cube is somehow computed with an offset (different on each axis) from the geometry. I have checked the shaders and the C++ code many times and I do not understand where this problem comes from; I am running out of ideas. Below is the code for the three passes of the algorithm, which are called in sequence. The code is experimental for now, so I know it is not well designed at this stage. I also include the shaders I use in every stage to write to the G-buffer, L-buffer, and framebuffer, in that order.
C++ CODE:
// Draw geometry to the G-buffer
void GLPrePassLightingRendererV2::GeometryStage()
{
    // Set GL states
    glFrontFace(GL_CCW);
    glCullFace(GL_BACK);
    glEnable(GL_CULL_FACE);
    glDepthFunc(GL_LEQUAL);
    glDisable(GL_BLEND);
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_TRUE);
    // Bind G-Buffer for geometry pass
    mGBuffer->BindForWriting();
    // Bind geometry stage shaders
    mTargetRenderSystem->BindShader(mGeometryStageVS);
    mTargetRenderSystem->BindShader(mGeometryStageFS);
    // Clear the framebuffer
    mTargetRenderSystem->ClearFrameBuffer(FBT_COLOUR | FBT_DEPTH);
    // Iterate over all the Renderables in the previously built RenderQueue
    RenderableList* visibles = mSceneManager->GetRenderQueue()->GetRenderables();
    // Set shader params here
    //[...]
    // Get the transformation info from the node the renderable is attached to
    for (RenderableList::iterator it = visibles->begin(); it != visibles->end(); ++it)
    {
        Renderable* renderable = *it;
        Material* mat = renderable->GetMaterial();
        mGeometryStageVS->Update();
        mGeometryStageFS->Update();
        // Render the object
        RenderOperation rop;
        renderable->GetRenderOperation(rop);
        mTargetRenderSystem->Render(rop);
    }
    // Only the geometry pass will write to the depth buffer
    glDepthMask(GL_FALSE);
    glDisable(GL_DEPTH_TEST);
}

// Accumulate light contributions in the L-buffer using the G-buffer
void GLPrePassLightingRendererV2::LightingStage()
{
    // Enable additive blending for lights
    glEnable(GL_BLEND);
    glBlendEquation(GL_FUNC_ADD);
    glBlendFunc(GL_ONE, GL_ONE);
    //glCullFace(GL_FRONT);
    // Bind shader for light stage
    mTargetRenderSystem->BindShader(mLightStageVS);
    mTargetRenderSystem->BindShader(mLightStageFS);
    // Bind G-Buffer for reading and L-Buffer for writing for lighting pass
    mGBuffer->BindForReading();
    mLBuffer->BindForWriting();
    mTargetRenderSystem->ClearFrameBuffer(FBT_COLOUR);
    // Set shader params
    // [...]
    // Get all the lights in the frustum, not by renderable
    const LightList& lights = mSceneManager->GetLightsInFrustum();
    // For each light in the frustum
    LightList::const_iterator front_light_it;
    for (LightList::const_iterator lit = lights.begin(); lit != lights.end(); ++lit)
    {
        // Send per-light parameters to the shader
        Light* l = (*lit);
        SetLight(*l);
        // Calculate bounding sphere for the light and scale according to intensity
        float lightSphereScale = GetPointLightSphereScale(l->GetColor(), l->GetDiffuseIntensity());
        // TODO: Render a sphere for each point light, a full screen quad for each directional
        worldMtx.Identity();
        worldMtx.SetScale(lightSphereScale, lightSphereScale, lightSphereScale);
        worldMtx.SetTranslation(l->GetPosition());
        mLightStageVS->SetParameterValue("gWorldMtx", (float*)&worldMtx);
        mLightStageVS->Update();
        mLightStageFS->Update();
        static MeshInstance* sphere = mSceneManager->CreateMeshInstance("LightSphere", MBT_LIGHT_SPHERE);
        RenderOperation rop;
        sphere->GetSubMeshInstance(0)->GetRenderOperation(rop);
        mTargetRenderSystem->Render(rop);
    }
    // Disable additive blending
    glDisable(GL_BLEND);
}

// Combine the L-buffer and material information per object
void GLPrePassLightingRendererV2::MaterialStage()
{
    // Set some GL states
    glDepthMask(GL_TRUE);
    glEnable(GL_DEPTH_TEST);
    //glCullFace(GL_BACK);
    // Bind material stage shaders (TODO: actually every object will bind its own material; if not, a default one is used)
    mTargetRenderSystem->BindShader(mMaterialStageVS);
    mTargetRenderSystem->BindShader(mMaterialStageFS);
    // Bind L-Buffer for reading
    mLBuffer->BindForReading();
    mTargetRenderSystem->ClearFrameBuffer(FBT_COLOUR | FBT_DEPTH, Math::ColourValue::WHITE);
    // Iterate over all the Renderables in the previously built RenderQueue
    RenderableList* visibles = mSceneManager->GetRenderQueue()->GetRenderables();
    // Set shader params here
    // [...]
    // Get the transformation info from the node the renderable is attached to
    for (RenderableList::iterator it = visibles->begin(); it != visibles->end(); ++it)
    {
        Renderable* renderable = *it;
        Material* mat = renderable->GetMaterial();
        // Set texture units
        if (mat)
        {
            for (unsigned short i = 0; i < mat->GetTextureUnitCount(); ++i)
            {
                const TextureUnit* unit = mat->GetTextureUnit(i);
                GLTexture* t = static_cast<GLTexture*>(unit->GetTexture());
                glActiveTexture(GL_TEXTURE1); // Needed because the first texture slot is held by the L-Buffer!
                glBindTexture(GL_TEXTURE_2D, t->GetGLId());
            }
        }
        mMaterialStageVS->Update();
        mMaterialStageFS->Update();
        // Render the object
        RenderOperation rop;
        renderable->GetRenderOperation(rop);
        mTargetRenderSystem->Render(rop);
    }
}
NVIDIA CG Shaders:
// Vertex shader for the Deferred Rendering geometry stage.
float4x4 gWorldMtx;
float4x4 gViewMtx;
float4x4 gProjectionMtx;

struct a2v
{
    float3 position : POSITION;
    float3 normal : NORMAL;
    float2 texCoord : TEXCOORD0;
};

struct v2f
{
    float4 position : POSITION;
    float3 normal : TEXCOORD0;
    float3 wPosition : TEXCOORD1;
    float2 texCoord : TEXCOORD2;
};

v2f PPL_geometry_stage_vs(a2v IN)
{
    v2f OUT;
    // Transform to world space
    OUT.wPosition = mul(gWorldMtx, float4(IN.position, 1.0f)).xyz;
    OUT.normal = mul(gWorldMtx, float4(IN.normal, 0.0f)).xyz;
    // Transform to homogeneous clip space
    OUT.position = mul(gViewMtx, float4(OUT.wPosition, 1.0f));
    OUT.position = mul(gProjectionMtx, OUT.position);
    OUT.texCoord = IN.texCoord;
    return OUT;
}

// Fragment shader for the Pre-pass Lighting geometry stage.
struct f2a
{
    float4 position : COLOR0;
    float4 normal : COLOR1;
};

f2a PPL_geometry_stage_fs(v2f IN)
{
    f2a OUT;
    OUT.position = float4(IN.wPosition, 1.0f);
    OUT.normal = float4(normalize(IN.normal), 1.0f);
    return OUT;
}
// Vertex shader for the Pre-pass Lighting light stage.
float4x4 gWorldMtx;
float4x4 gViewMtx;
float4x4 gProjectionMtx;

struct a2v
{
    float3 position : POSITION;
};

struct v2f
{
    float4 position : POSITION;
    float4 lightPos : TEXCOORD0;
};

v2f PPL_light_stage_vs(a2v IN)
{
    v2f OUT;
    float4x4 wv = mul(gWorldMtx, gViewMtx);
    float4x4 wvp = mul(gViewMtx, gProjectionMtx);
    wvp = mul(wvp, gWorldMtx);
    // Transform position to clip space
    OUT.position = mul(wvp, float4(IN.position, 1.0f));
    // Copy the projected position to calculate the fragment coordinate
    OUT.lightPos = OUT.position;
    return OUT;
}
// Fragment shader for the Pre-pass Lighting light stage.
// Light structures
struct BaseLight
{
    float3 color;
    float ambientIntensity;
    float diffuseIntensity;
};

struct DirectionalLight
{
    struct BaseLight base;
    float3 direction;
};

struct Attenuation
{
    float constant;
    float linearr; // named 'linearr' to avoid clashing with the 'linear' keyword
    float quadratic;
};

struct PointLight
{
    struct BaseLight base;
    float3 position;
    Attenuation atten;
};

struct SpotLight
{
    struct PointLight base;
    float3 direction;
    float cutoff;
};

// G-Buffer textures
sampler2D gPositionMap : TEXUNIT0;
sampler2D gNormalMap : TEXUNIT1;

// Light variables
float3 gEyePosition;
DirectionalLight gDirectionalLight;
PointLight gPointLight;
SpotLight gSpotLight;
int gLightType;
float gSpecularPower;

float4 PPL_light_stage_point_light_fs(v2f IN) : COLOR0
{
    // Get the fragment coordinate, from NDC space [-1, 1] to [0, 1].
    float2 fragcoord = ((IN.lightPos.xy / IN.lightPos.w) + 1.0f) / 2.0f;
    // Calculate lighting with the G-Buffer textures
    float3 position = tex2D(gPositionMap, fragcoord).xyz;
    float3 normal = tex2D(gNormalMap, fragcoord).xyz;
    normal = normalize(normal);
    // Attenuation
    float3 lightDirection = position - gPointLight.position;
    float dist = length(lightDirection);
    float att = gPointLight.atten.constant + gPointLight.atten.linearr * dist + gPointLight.atten.quadratic * dist * dist;
    // NL
    lightDirection = normalize(lightDirection);
    float NL = dot(normal, -lightDirection);
    // Specular (Blinn-Phong)
    float specular = 0.0f;
    //if (NL > 0)
    //{
    //    float3 vertexToEye = normalize(gEyePosition - position);
    //    float3 lightReflect = normalize(reflect(lightDirection, normal));
    //    specular = pow(saturate(dot(vertexToEye, lightReflect)), gSpecularPower);
    //}
    // Apply attenuation to NL
    NL = NL / min(1.0, att);
    float3 lightColor = gPointLight.base.color * gPointLight.base.diffuseIntensity;
    return float4(lightColor.r, lightColor.g, lightColor.b, 1.0f) * NL;
}
// Vertex shader for the Pre-pass Lighting material stage.
float4x4 gWorldMtx;
float4x4 gViewMtx;
float4x4 gProjectionMtx;

struct a2v
{
    float3 position : POSITION;
    float3 normal : NORMAL;
    float2 texcoord : TEXCOORD0;
};

struct v2f
{
    float4 position : POSITION;
    float2 texcoord : TEXCOORD0;
    float3 normal : TEXCOORD1;
    float4 projPos : TEXCOORD2;
};

v2f PPL_material_stage_vs(a2v IN)
{
    v2f OUT;
    float4x4 wv = mul(gWorldMtx, gViewMtx);
    float4x4 wvp = mul(gViewMtx, gProjectionMtx);
    wvp = mul(wvp, gWorldMtx);
    // Transform position to clip space
    OUT.position = mul(wvp, float4(IN.position, 1.0f));
    // Normal (not strictly necessary, but I want to see if it influences the execution)
    OUT.normal = mul(gWorldMtx, float4(IN.normal, 0.0f)).xyz;
    // Copy texture coordinates
    OUT.texcoord = IN.texcoord;
    // Copy the projected position to get the fragment coordinate
    OUT.projPos = OUT.position;
    return OUT;
}

// Fragment shader for the Pre-pass Lighting material stage.
// L-buffer texture
sampler2D gLightMap : TEXUNIT0;
// Object's material-specific textures
sampler2D gColorMap : TEXUNIT1;

float4 PPL_material_stage_fs(v2f IN) : COLOR0
{
    float2 fragcoord = ((IN.projPos.xy / IN.projPos.w) + 1.0f) / 2.0f;
    // Get all light contributions for this pixel
    float4 light = tex2D(gLightMap, fragcoord);
    float3 combined = saturate(light.rgb); // + light.aaa);
    // Get material albedo from the texture map
    float4 diffuse = tex2D(gColorMap, IN.texcoord);
    return float4(combined, 1.0f) * diffuse;
}
Any suggestions?
You may want to use the WPOS register (VPOS in HLSL) instead of calculating the screen locations yourself.
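For illustration, a minimal sketch of the HLSL (ps_3_0) variant of the light-stage fragment shader above; gScreenSize is a hypothetical uniform holding the render-target size in pixels, and the lighting body is elided:

float2 gScreenSize; // hypothetical uniform: render-target width/height in pixels

float4 PPL_light_stage_point_light_fs(v2f IN, float2 vpos : VPOS) : COLOR0
{
    // VPOS is the pixel's screen coordinate; offset to the texel center
    // and normalize into [0, 1] to address the G-buffer textures.
    float2 fragcoord = (vpos + 0.5f) / gScreenSize;
    float3 position = tex2D(gPositionMap, fragcoord).xyz;
    float3 normal = normalize(tex2D(gNormalMap, fragcoord).xyz);
    // ... the rest of the lighting computation stays as in the original ...
    return float4(normal, 1.0f); // placeholder return
}

This avoids the per-pixel divide and, more importantly, any mismatch between the interpolated projected position and the actual raster position.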

Getting the color of a vertex in HLSL?

I have the following vertex and pixel shaders:
struct VS_INPUT
{
    float4 Position : POSITION0;
    float2 TexCoord : TEXCOORD0;
    float4 Color : TEXCOORD1;
};

struct VS_OUTPUT
{
    float4 Position : POSITION0;
    float4 Color : COLOR0;
    float2 TexCoord : TEXCOORD0;
};

float4x4 projview_matrix;

VS_OUTPUT vs_main(VS_INPUT Input)
{
    VS_OUTPUT Output;
    Output.Position = mul(Input.Position, projview_matrix);
    Output.Color = Input.Color;
    Output.TexCoord = Input.TexCoord;
    return Output;
}

And the pixel shader:
texture tex;
sampler2D s = sampler_state
{
    texture = <tex>;
};

float4 ps_main(VS_OUTPUT Input) : COLOR0
{
    float4 pixel = tex2D(s, Input.TexCoord.xy);
    return pixel;
}
This is for a 2D game. The vertices of the quads contain tinting colors that I want to use to tint the bitmap. How can I obtain the color of the current vertex so that I can multiply it by the current pixel color in the pixel shader?
Thanks
In your pixel shader, do:
float4 pixel = tex2D(s, Input.TexCoord.xy) * Input.Color;
The Input.Color value will be linearly interpolated across your plane for you, just like Input.TexCoord is. To blend two color vectors, you simply multiply them together. It may also be advisable to do:
float4 pixel = tex2D(s, Input.TexCoord.xy) * Input.Color;
pixel = saturate(pixel);
The saturate() function clamps each component of your color to the range 0.0 to 1.0, which may avoid possible display artifacts.

HLSL and DX Skybox Issues (creates seams)

I'm (re)learning DirectX and have moved into HLSL coding. Prior to using my custom .fx file I created a skybox for a game with a vertex buffer of quads. Everything worked fine; the texture mapped and wrapped beautifully. However, now that I have HLSL set up to manage the vertices, there are distinctive seams where the quads meet. The textures all line up properly, I just can't get rid of this damn seam!
I tend to think the problem is with the texCUBE, or rather with all the texturing information here. I'm texturing the quads in DX; it may just be that I still don't quite get the link between the two, not sure. Anyway, thanks for the help in advance!
Here's the .fx file:
float4x4 World;
float4x4 View;
float4x4 Projection;
float3 CameraPosition;

Texture SkyBoxTexture;
samplerCUBE SkyBoxSampler = sampler_state
{
    texture = <SkyBoxTexture>;
    minfilter = ANISOTROPIC;
    mipfilter = LINEAR;
    AddressU = Wrap;
    AddressV = Wrap;
    AddressW = Wrap;
};

struct VertexShaderInput
{
    float4 Position : POSITION0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float3 TextureCoordinate : TEXCOORD0;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);
    float4 VertexPosition = mul(input.Position, World);
    output.TextureCoordinate = VertexPosition.xyz - CameraPosition;
    return output;
}

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    return texCUBE(SkyBoxSampler, normalize(input.TextureCoordinate));
}

technique Skybox
{
    pass Pass1
    {
        VertexShader = compile vs_2_0 VertexShaderFunction();
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}
To avoid seams you need to draw your skybox in a single DrawIndexedPrimitive call, preferably using a triangle strip. DON'T draw each face as a separate primitive transformed with an individual matrix or something like that; you WILL get seams. If for some unexplainable reason you don't want to use a single DrawIndexedPrimitive call for the skybox parts, then you must ensure that all faces are drawn using the same matrices (the same world + view + projection matrices in every call) and the same coordinate values for the corner vertices, i.e. the "top" face should use exactly the same position vectors for its corners as the "side" faces.
Another thing: you should store the skybox in one of these ways:
- As a cubemap (looks like that's what you're doing): make just 8 vertices for the skybox and draw them as an indexed primitive.
- As an unwrapped "atlas" texture that has its unused areas filled with a border color.
- Or, if you're fine with shaders, you could "raytrace" the skybox in a shader.
You need to clamp the texture coordinates with SetSamplerState to get rid of the seam. This Toymaker page explains it well. Toymaker is a great site for learning Direct3D; check out the tutorials if you have any more trouble.
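Applied to the effect above, that would mean changing the sampler's address modes from Wrap to Clamp, for example:

samplerCUBE SkyBoxSampler = sampler_state
{
    texture = <SkyBoxTexture>;
    minfilter = ANISOTROPIC;
    mipfilter = LINEAR;
    // Clamp instead of Wrap, so filtering at a face edge does not
    // pull in texels from the opposite edge and create a visible seam.
    AddressU = Clamp;
    AddressV = Clamp;
    AddressW = Clamp;
};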
You may like to draw the skybox using only one quad. Everything you need is the inverse of the World*View*Proj matrix, that is (World*View*Proj)^(-1).
The vertices of the quad should be: (1, 1, 1, 1), (1, -1, 1, 1), (-1, 1, 1, 1), (-1, -1, 1, 1).
Then you compute the texture coordinates in the VS:
float4 pos = mul(vPos, WorldViewProjMatrixInv);
float3 tex_coord = pos.xyz / pos.w;
And finally you sample the texture in the PS:
float4 color = texCUBE(sampler, tex_coord);
No need to worry about any seams! :)
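Assembled into a complete pair of shaders, a minimal sketch of this approach might look as follows; WorldViewProjMatrixInv is the precomputed inverse described above, and the shader names are hypothetical:

float4x4 WorldViewProjMatrixInv; // (World*View*Proj)^(-1), computed on the CPU
Texture SkyBoxTexture;
samplerCUBE SkyBoxSampler = sampler_state
{
    texture = <SkyBoxTexture>;
};

struct QuadV2P
{
    float4 Position : POSITION0;
    float3 TexCoord : TEXCOORD0;
};

QuadV2P SkyQuadVS(float4 vPos : POSITION0) // vPos: one of the four clip-space corners
{
    QuadV2P output;
    output.Position = vPos; // the quad is already in clip space
    // Unproject the corner to obtain the cube-map sampling direction
    float4 pos = mul(vPos, WorldViewProjMatrixInv);
    output.TexCoord = pos.xyz / pos.w;
    return output;
}

float4 SkyQuadPS(QuadV2P input) : COLOR0
{
    return texCUBE(SkyBoxSampler, input.TexCoord);
}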