I have to reproduce an effect which consists of combining two textures (tiles + coin) to achieve the following:
The best result I achieved so far:
Visual Studio Solution to reproduce the problem
The link above will take you to the project, and here is what I tried to do in the pixel shader:
float4 PS(PS_INPUT input) : SV_Target
{
    float4 color1;
    float4 color2;
    float4 blendColor;

    // Get the pixel color from the first texture.
    color1 = texTile.Sample(samLinear, input.Tex) * vMeshColor;

    // Get the pixel color from the second texture.
    color2 = texCoin.Sample(samLinear, input.Tex) * vMeshColor;

    // Blend the two pixels together and multiply by the gamma value.
    blendColor = color1 * color2;

    // Saturate the final color.
    blendColor = saturate(blendColor);

    return blendColor;
}
But this does not seem like the right way of doing it.
Which approach should I take to get the expected result?
Well, firstly you are blending the two textures, but not blending with an alpha mask, while the example image appears to have been blended with one.
An example could look like the code below, provided the coin texture has an alpha channel. (Otherwise you'll have to compute an alpha, or add one in an image-editing program.)
float3 blend(float4 CoinTex, float3 GridTex)
{
    // Inverse of the alpha, to get the area around the coin.
    // Alpha spans [0,1], so the expression below suffices.
    float inverseAlpha = 1.0 - CoinTex.a;
    float3 blendedTex = 0;

    // Decide which texture ends up on top.
    if (inverseAlpha > 0.0) {
        blendedTex = GridTex;
    } else {
        blendedTex = CoinTex.rgb;
    }

    return blendedTex;
}
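If the coin's alpha has soft, anti-aliased edges, a hard if/else like the one above produces fringes; a lerp over the alpha is usually cleaner. Here is a minimal sketch of that, plugged into the pixel shader from the question (texTile, texCoin, samLinear and vMeshColor are the names used there):
float4 PS(PS_INPUT input) : SV_Target
{
    float4 gridColor = texTile.Sample(samLinear, input.Tex);
    float4 coinColor = texCoin.Sample(samLinear, input.Tex);

    // Where the coin is opaque (a = 1) take the coin, where it is fully
    // transparent (a = 0) take the grid, and blend smoothly in between.
    float3 blended = lerp(gridColor.rgb, coinColor.rgb, coinColor.a);

    return float4(blended, 1.0f) * vMeshColor;
}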
In the custom effect docs, it says to calculate relative offsets for pixels using this formula:
float2 sampleLocation =
    texelSpaceInput0.xy    // Sample position for the current output pixel.
    + float2(0,-10)        // An offset from which to sample the input, specified in pixels.
    * texelSpaceInput0.zw; // Multiplier that converts the pixel offset to the input's texel space.
For my video sequencer I have included custom effects and transitions, many of them converted to HLSL from GLSL code on ShaderToy. The GLSL code uses normalized coordinates [0...1], and many calculations rely on the absolute xy position rather than a relative one, so I have to find a way to use absolute texture coordinates in my HLSL code.
So I use that zw multiplier to find the UV of the bottom-right sample:
float2 FindLast(float2 MV)
{
float2 LastSample = float2(0, 0) + float2(WI,he)* MV;
return LastSample;
}
The width and height are passed to the effect as a constant buffer.
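For reference, a minimal sketch of what that constant buffer could look like on the HLSL side (the names WI and he match the FindLast helper above; the buffer name and register slot are assumptions):
cbuffer constants : register(b0)
{
    float WI; // input width in pixels
    float he; // input height in pixels
};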
After that, the normalized coordinates are:
float2 GetNormalized(float2 UV,float2 MV)
{
float2 s = FindLast(MV);
return UV / s;
}
This works: ShaderToy code that expects normalized coordinates runs fine in my effects, and D2DSampleInput with that UV input returns the correct color.
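For illustration, a hypothetical sketch of how this fits into a D2D custom-effect entry point built with the d2d1effecthelpers.hlsli helpers (the same helpers D2DSampleInput comes from); GetNormalized, FindLast and the WI/he constants are the ones defined above:
#define D2D_INPUT_COUNT 1
#define D2D_INPUT0_COMPLEX
#include "d2d1effecthelpers.hlsli"

D2D_PS_ENTRY(main)
{
    // xy = sample position for this output pixel, zw = pixel-to-texel multiplier,
    // i.e. the same pair the docs call texelSpaceInput0.
    float4 coord = D2DGetInputCoordinate(0);

    // ShaderToy-style coordinates in [0,1].
    float2 nuv = GetNormalized(coord.xy, coord.zw);

    // ... ported GLSL logic using nuv goes here ...

    return D2DSampleInput(0, coord.xy);
}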
The question is whether my solution is viable. For example, I have assumed that the first (top-left) pixel is at UV (0,0); is that correct and viable?
I'm new to HLSL and shaders, so I would appreciate your help.
Hey everyone, I'm working with lighting in a 2D tile-based game and have run into a problem with my lighting calculations. In my game I take greyscale images and then color them using shaders with whatever color I like, whether that be green (rgb = (0,1,0)), red (rgb = (1,0,0)) or any other color. I then apply my lighting calculations to that textured and colored pixel. The lighting works fine when the light is white (rgb = (1,1,1)), but when it is, say, red or green it won't show the way I want it to. I know why this happens, of course: realistically, a pure red light in a pure green room would reflect no red light, so the room would remain dark. What I really want is to see a red light appear over a green surface. So my question is: how can I show a red light clearly on a green surface (or really any color of light on any colored surface)?
This is the code for my fragment shader, where attenuation is the attenuation factor for each light, lightColor is the light's rgb value, distance is the distance from the given vertex to that light (calculated in the vertex shader), and color is the rgb value that is applied to the texture.
Thanks in advance for your help!
vec3 totalDiffuse = vec3(0.0);

for (int i = 0; i < 4; i++)
{
    float attFactor = attenuation[i].x
                    + (attenuation[i].y * distance[i])
                    + (attenuation[i].z * distance[i] * distance[i]);
    totalDiffuse = totalDiffuse + (lightColor[i] / attFactor);
}

totalDiffuse = max(totalDiffuse, 0.2);

out_Color = texture(textureSampler, pass_textureCoords) * vec4(color, alpha) * vec4(totalDiffuse, 1);
And here is an image of what a pure red light currently looks like on a surface. It should be inside the white circle, and you may be able to see it affecting the water a little because I give the water a small red component:
Light Demo Image
One possibility would be to change the light calculation.
Calculate grayscale values of the light color and the surface color. Multiply the surface color by the grayscale of the light color, multiply the light color by the grayscale of the surface color, and finally sum them up:
vec4  texCol  = texture(textureSampler, pass_textureCoords);
float grayTex = dot(texCol.rgb, vec3(0.2126, 0.7152, 0.0722));
float grayCol = dot(color.rgb, vec3(0.2126, 0.7152, 0.0722));
vec3  mixCol  = texCol.rgb * grayCol + color.rgb * grayTex;
out_Color     = vec4(mixCol * totalDiffuse, texCol.a * alpha);
Note, this algorithm emphasizes the color of the light at the expense of the color of the surface, but that is what you asked for when bathing a green area in red light. Of course, it conflicts with the desire to illuminate an area in its own color: if the light is white, then the surface will also shine white.
If you want some light sources to use the effect described above and others to keep the original effect from the question, then I recommend introducing a parameter that mixes the two effects:
uniform float u_lightTint;

void main()
{
    .....

    vec3 mixCol = texCol.rgb * grayCol + color.rgb * grayTex;
    mixCol      = mix(texCol.rgb * color.rgb, mixCol.rgb, u_lightTint);
    out_Color   = vec4(mixCol * totalDiffuse, texCol.a * alpha);
}
If u_lightTint is set to 1.0, the "new" light calculation is used; if it is set to 0.0, the original light calculation is used. The two algorithms are interpolated linearly by u_lightTint.
Alternatively the u_lightTint parameter can be encoded in the alpha channel of the light color:
mixCol = mix(texCol.rgb * color.rgb, mixCol.rgb, color.a);
This is a screenshot of the problem: http://s13.postimg.org/672twurfr/shadowmap_saturn_rings.jpg
You can notice a black line on top of the projected shadows of the rings over Saturn; it happens on the borders of the shadows of any object.
This is the pixel shader I'm using to render the depth map; I'm using the green and blue channels of the color to store the alpha value.
float depthValue;
float4 color;
float4 textureColor;
textureColor = shaderTexture.Sample(SampleType, input.tex);
depthValue = input.depthPosition.z / input.depthPosition.w;
color = float4(depthValue, textureColor.a, textureColor.a, 1.0f);
return color;
and this is the pixel shader fragment I'm using to render the final color that will be displayed:
if ((saturate(projectTexCoord.x) == projectTexCoord.x) && (saturate(projectTexCoord.y) == projectTexCoord.y))
{
    float4 color_d = depthMapTexture.Sample(SampleTypeClamp, projectTexCoord);
    depthValue = color_d.r;

    lightDepthValue = input.lightViewPosition.z / input.lightViewPosition.w;
    lightDepthValue = lightDepthValue - bias;

    if (lightDepthValue < depthValue)
    {
        lightIntensity = saturate(dot(input.normal, lightDir));
    }
    else
    {
        lightIntensity = saturate(dot(input.normal, lightDir)) * (1.0f - color_d.g);
    }
}
Sorry for the vague info, but I don't really have a clue where the problem could be. Maybe it's somewhere else in the code, but it's definitely not the texture file I'm using, so it has to be the code.
Any idea where I can start looking for the issue?
Perhaps the shadow-map texture coordinates are sampling out of bounds. I can see you are checking for that, but I would test it anyway.
In your SamplerState, do you have a BorderColor set? Try setting it to red and see if that's the problem.
Does the shadow map have the same depth as the rendertarget you are applying the shader on?
Edit: this is my shader code with SampleCmpLevelZero:
if (ShadowCoord.x > 0.0f && ShadowCoord.x < 1.0f && ShadowCoord.y > 0.0f && ShadowCoord.y < 1.0f)
{
    ShadowCoord.z -= 0.001f;

    float a = 1;
    a = ShadowMap.SampleCmpLevelZero(ShadowSampler, ShadowCoord.xy, ShadowCoord.z).r;
    a = linearizeDepth(a);

    ambient = float4(a, a, a, 0);
}
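For reference, SampleCmpLevelZero requires a comparison sampler rather than a regular one; a minimal sketch of the matching HLSL-side declarations (the register slots here are assumptions):
Texture2D              ShadowMap     : register(t0);
SamplerComparisonState ShadowSampler : register(s0);
The corresponding sampler created on the CPU side must then use one of the D3D11_FILTER_COMPARISON_* filters, which is also what the accepted fix below ends up doing.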
The problem was with the texture filter set in the D3D11_SAMPLER_DESC. It was D3D11_FILTER_MIN_MAG_MIP_LINEAR, so the texture was being interpolated, softening the colors and alpha values at the borders, which caused the shader to read those soft alpha values and cast shadows where it wasn't supposed to.
I created a new sampler desc using D3D11_FILTER_COMPARISON_MIN_MAG_MIP_POINT as the filter, so the texture is sampled with the borders mostly unchanged, just as the depth shader created it originally. Now the shadows are cast properly, with no weird black line at the borders anymore.
The task is: shader takes in a constant color, then generates pixel colors according to their positions by replacing two of four color components (RGBA) with texture coordinates.
With a hardcoded component set it would look like this:
float4 inputColor : register(c0);

float4 main(float2 uv : TEXCOORD) : COLOR
{
    float4 color = 0;
    color.a = inputColor.a;
    color.r = inputColor.r;
    color.g = uv.x;
    color.b = uv.y;
    return color;
}
Now I'd like to pass in one or more parameters specifying which components should be replaced with uv.x and uv.y. Let's say inputColor has -1 and -2 in those components, or there are uint xIndex and yIndex parameters specifying the positions in the vector4 to be replaced. HLSL does not allow "color[xIndex] = uv.x".
Currently I've done this in an ugly way with a bunch of if-else branches, but I feel like there is some cross-product or matrix-multiplication solution. Any ideas?
You could work with two additional vectors as channel masks. It works like indexing, but with vector operations.
float4 inputColor : register(c0);
float4 uvx_mask   : register(c1); // e.g. (0,0,1,0)
float4 uvy_mask   : register(c2); // e.g. (0,0,0,1)

float4 main(float2 uv : TEXCOORD) : COLOR
{
    float4 color = 0;

    // Replace the channel selected by uvx_mask with uv.x.
    color = lerp(inputColor, uv.x * uvx_mask, uvx_mask);

    // Replace the channel selected by uvy_mask with uv.y.
    color = lerp(color, uv.y * uvy_mask, uvy_mask);

    return color; // In this example: (inputColor.r, inputColor.g, uv.x, uv.y)
}
If you need every last bit of performance, you could alternatively work with the preprocessor (#define, #ifdef) to build the right code on demand.
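A minimal sketch of that preprocessor variant (the macro names UVX_CHANNEL and UVY_CHANNEL are assumptions, passed as defines when compiling each shader variant, e.g. fxc /D UVX_CHANNEL=g /D UVY_CHANNEL=b):
#ifndef UVX_CHANNEL
#define UVX_CHANNEL g
#endif
#ifndef UVY_CHANNEL
#define UVY_CHANNEL b
#endif

float4 inputColor : register(c0);

float4 main(float2 uv : TEXCOORD) : COLOR
{
    float4 color = inputColor;
    color.UVX_CHANNEL = uv.x; // expands to e.g. color.g = uv.x
    color.UVY_CHANNEL = uv.y; // expands to e.g. color.b = uv.y
    return color;
}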
I'm writing an application which renders graphics on the screen. The application can switch between Direct3D9 and Direct3D10 graphics modules (I wrote DLLs that wrap both D3D9 and D3D10). When trying to render a test mesh (a torus, which comes as a stock mesh in D3DX9 and in the DXUT library found in the DirectX 10 samples), the Direct3D10 module behaves rather strangely. Here's what I get.
D3D9:
D3D10:
The view, projection and world matrices are the same for both cases. The only thing that differs is the device initialization code and the HLSL effect files (for simplicity I only apply ambient colors and don't use advanced lighting, texturing, etc.). Can this be because of wrong device initialization, or because of bad shaders? I would appreciate any hint. I can post any piece of code on request.
Someone on Game Dev Stack Exchange suggested that it is probably because of a transposed projection matrix. I've tried changing the order in which the matrices are multiplied in the shader file, trying almost every permutation I could, but I still get no correct output on the screen.
Thanks in advance.
EDIT: Here's the .fx file. You can ignore PS, there's nothing interesting happening in there.
//Basic ambient light shader with no textures
matrix World;
matrix View;
matrix Projection;
float4 AmbientColor : AMBIENT = float4(1.0, 1.0, 1.0, 1.0);
float AmbientIntensity = 1.0;
struct VS_OUTPUT
{
    float4 Position : SV_POSITION; // vertex position
    float4 Color    : COLOR0;      // vertex color
};

RasterizerState rsWireframe { FillMode = WireFrame; };

VS_OUTPUT RenderSceneVS( float4 vPos : POSITION )
{
    VS_OUTPUT output;
    matrix WorldProjView = mul(World, mul(View, Projection));
    vPos = mul(vPos, WorldProjView);
    output.Position = vPos;
    output.Color.rgb = AmbientColor * AmbientIntensity;
    output.Color.a = AmbientColor.a;
    return output;
}

struct PS_OUTPUT
{
    float4 RGBColor : SV_Target; // Pixel color
};

PS_OUTPUT RenderScenePS( VS_OUTPUT In )
{
    PS_OUTPUT output;
    output.RGBColor = In.Color;
    return output;
}

technique10 Ambient
{
    pass P0
    {
        SetRasterizerState( rsWireframe );
        SetVertexShader( CompileShader( vs_4_0, RenderSceneVS() ) );
        SetGeometryShader( NULL );
        SetPixelShader( CompileShader( ps_4_0, RenderScenePS() ) );
    }
}
Make sure that your vPos.w = 1.0f.
If this is not the case, matrix multiplication will go wild and create strange results.
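A sketch of one way to guard against that in the vertex shader from the question, assuming the vertex buffer only supplies a float3 position (so w is set explicitly instead of being read from the stream):
VS_OUTPUT RenderSceneVS( float3 inPos : POSITION )
{
    VS_OUTPUT output;

    // Build a homogeneous position with w = 1 before transforming.
    float4 vPos = float4(inPos, 1.0f);

    matrix WorldProjView = mul(World, mul(View, Projection));
    output.Position = mul(vPos, WorldProjView);

    output.Color.rgb = AmbientColor.rgb * AmbientIntensity;
    output.Color.a   = AmbientColor.a;
    return output;
}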
Not sure what causes the problem, but you can check the following:
make sure the constant buffers holding the transformation matrices are initialized with something, not garbage data
if you use normals/tangents in your vertex buffer, also make sure you don't put garbage data in there (per vertex), though that would rather cause problems with texturing
make sure your vertex layout description matches the input to the vertex shader (.hlsl); sometimes even if it doesn't match, it will compile and run but show an unexpected mesh
I have no idea how it is in DX9, but maybe there is also something with the coordinate systems; multiplying z in the vertex buffer or in some transformation matrix by -1 might help
Edit: It might also be a good idea to just put some simple mesh into the buffer, a cube for example (or even a triangle), and check whether it's drawing properly.
You need to transpose your matrices before setting them as shader constants. If you are using XNAMath, use the XMMatrixTranspose() function on each of the world, view and projection matrices before setting them into your constant buffer.
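Alternatively, the same mismatch can be handled on the HLSL side; a sketch, assuming the CPU uploads row-major matrices unchanged: marking the matrices row_major tells the compiler how they are laid out, so no XMMatrixTranspose() call is needed.
// By default HLSL treats constant-buffer matrices as column_major,
// so declare the layout explicitly instead of transposing on the CPU.
row_major matrix World;
row_major matrix View;
row_major matrix Projection;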