DirectX: HLSL texture-based height map (Shader Model 5.0) - C++

I'm trying to implement a GPU-based height map in the simplest (and fastest) way I know. I pass a texture (a .png loaded with D3DX11CreateShaderResourceViewFromFile()) into the shader and attempt to sample it for the current vertex. Since the sample is a float4, I'm currently assigning a color value from one channel to offset the y value.
Texture2D colorMap : register(t0);
SamplerState colorSampler : register(s0);
...
VOut VShader(float4 position : POSITION, float2 Texture : TEXCOORD0, float3 Normal : NORMAL)
{
    VOut output;
    float4 colors = colorMap.SampleLevel(colorSampler, float4(position.x * 0.001, position.z * 0.001, 0, 0), 0);
    position.y = colors.x * 128;
    output.position = mul(position, WVP);
    output.texture0 = Texture;
    output.normal = Normal;
    return output;
}
The texture is imported correctly, and I've inserted another texture and was able to successfully blend it with the first one (by multiplying their values), so I know the float4 contains values that arithmetic can be performed on.
In the vertex function, however, attempting to extract the values yields nothing on the grid:
The concept seemed simple enough on paper...

Since you're using a Texture2D, the Location parameter needs to be a float2.
Also, make sure that location goes from (0,0) to (1,1). For your mapping to be correct, the grid would need to be placed from (0,0,0) to (1000,0,1000).
If this is the case then this should work:
SampleLevel(colorSampler, position.xz * 0.001, 0);
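Putting it together, a minimal sketch of the corrected vertex shader (assuming VOut and WVP are declared exactly as in your project):
VOut VShader(float4 position : POSITION, float2 Texture : TEXCOORD0, float3 Normal : NORMAL)
{
    VOut output;
    // Texture2D.SampleLevel takes a float2 location; xz maps the grid onto the height map.
    float4 colors = colorMap.SampleLevel(colorSampler, position.xz * 0.001, 0);
    position.y = colors.x * 128;
    output.position = mul(position, WVP);
    output.texture0 = Texture;
    output.normal = Normal;
    return output;
}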
Edit
I'm curious as to how you're testing this. I tried compiling your code, with added definitions for VOut and WVP, and it fails. One of the errors is about that location parameter, which is a float4 but should be a float2. The other error I get is about the name of the function; it should be main.
If you happen to be using Visual Studio, I strongly recommend using the Graphics Debugging tools and check all the variables. I suspect the colorMap texture might be bound to the pixel shader but not the vertex shader.
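If the texture really is bound only to the pixel-shader stage, the fix on the C++ side is to bind the SRV to the vertex-shader stage as well; a minimal sketch, assuming context is your ID3D11DeviceContext and heightMapSRV is the view created with D3DX11CreateShaderResourceViewFromFile():
// Binding only with PSSetShaderResources leaves t0 empty for the vertex shader;
// VSSetShaderResources makes the height map visible to vertex-stage SampleLevel.
context->VSSetShaderResources(0, 1, &heightMapSRV); // register(t0) in the vertex shader
context->PSSetShaderResources(0, 1, &heightMapSRV); // only needed if the pixel shader samples it too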

Related

Direct 2D HLSL pixel shader coordinate space

In the custom effect docs it says to calculate relative offsets for pixels using this formula:
float2 sampleLocation =
    texelSpaceInput0.xy    // Sample position for the current output pixel.
    + float2(0, -10)       // An offset from which to sample the input, specified in pixels.
    * texelSpaceInput0.zw; // Multiplier that converts the pixel offset into the input's texel space.
For my video sequencer I have included custom effects and transitions, many of them converted to HLSL from GLSL code found on ShaderToy. The GLSL code uses normalized coordinates [0...1] and many calculations rely on the absolute xy position rather than a relative one, so I have to find a way to use absolute texture coordinates in my HLSL code.
So, I use that zw multiplier to find the UV of the bottom-right sample:
float2 FindLast(float2 MV)
{
    float2 LastSample = float2(0, 0) + float2(WI, he) * MV;
    return LastSample;
}
The width and height are passed to the effect as a constant buffer.
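A minimal sketch of what that constant buffer declaration might look like on the HLSL side (the names WI and he are taken from FindLast above; the actual layout and packing in my effect may differ):
cbuffer constants : register(b0)
{
    float WI; // input width in pixels
    float he; // input height in pixels
};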
After that, the normalized coordinates are:
float2 GetNormalized(float2 UV, float2 MV)
{
    float2 s = FindLast(MV);
    return UV / s;
}
This works. Code from ShaderToy that operates in normalized coordinates works fine in my effects, and D2DSampleInput with that UV input returns the correct color.
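For context, a minimal sketch of how the helpers above are meant to be used inside the effect's pixel shader (entry-point boilerplate omitted; texelSpaceInput0 is the coordinate the D2D custom-effect framework provides, as in the docs snippet above):
// Sketch only: normalize the D2D texel-space coordinate to [0..1]
// so that ported ShaderToy math can work with absolute positions.
float2 uv = GetNormalized(texelSpaceInput0.xy, texelSpaceInput0.zw);
float4 color = D2DSampleInput(0, texelSpaceInput0.xy); // sampling at the original coordinate still works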
The question is whether my solution is viable. For example, I have assumed that the first (top-left) pixel is at UV (0,0); is that correct and viable?
I'm new to HLSL and shaders, so I would appreciate your help.

OpenGL: Compute Shader - gl_GlobalInvocationID giving static output

So I have a compute shader that is supposed to take a texture and copy it over to another texture with slight modifications. I have confirmed that the textures are bound and that data can be written, using RenderDoc (a graphics debugging tool). The issue I have is that inside the shader, the built-in variable gl_GlobalInvocationID does not seem to work properly.
Here is my dispatch of the compute shader (the texture height is 480):
glDispatchCompute(1, this->m_texture_height, 1); //Call upon shader
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
And then we have my compute shader here:
#version 440
#extension GL_ARB_compute_shader : enable
#extension GL_ARB_shader_image_load_store : enable
layout (rgba8, binding=0) uniform image2D texture_source0;
layout (rgba8, binding=1) uniform image2D texture_target0;
layout (local_size_x=640 , local_size_y=1 , local_size_z=1) in; //Local work-group size
void main() {
    ivec2 txlPos; // A variable keeping track of where on the texture the current texel is from
    vec4 result;  // A variable to store color
    txlPos = ivec2(gl_GlobalInvocationID.xy);
    //txlPos = ivec2( (gl_WorkGroupID * gl_WorkGroupSize + gl_LocalInvocationID).xy );
    result = imageLoad(texture_source0, txlPos); // Get color value
    barrier();
    result = vec4(txlPos, 0.0, 1.0);
    imageStore(texture_target0, txlPos, result); // Save color in target texture
}
When I run this, the target texture becomes entirely yellow, save for a 1-pixel-thick green line along the left border and a 1-pixel-thick red line along the bottom border. My expectation is to see some sort of gradient, given that I save txlPos as a colour value.
Am I somehow defining my work groups wrong? I've tried splitting gl_GlobalInvocationID up into its components but haven't managed to get any wiser by fiddling with them.
An 8-bit normalized texture format (rgba8) can only store values between 0 and 1. Since gl_GlobalInvocationID is in most cases larger than 1, it gets clamped to the maximum value of 1, which makes the texture yellow.
If you want to create a gradient in both directions, then you have to make sure that the stored values start at 0 and end at 1. One possibility is to divide by the maximum:
result = vec4(vec2(gl_GlobalInvocationID.xy) / vec2(640, 480), 0, 1);
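A possible variation is to derive the divisor from the image itself instead of hard-coding 640 and 480 (imageSize() is available in GLSL 4.30+, and the shader above already uses #version 440):
// Same idea, but query the target image for its dimensions.
ivec2 size = imageSize(texture_target0);
result = vec4(vec2(txlPos) / vec2(size), 0.0, 1.0);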

SharpDX D3D11 VertexElement Format Mismatch?

I have a shader that no longer draws correctly. It was working in XNA, but had to be rewritten for DX11 and SharpDX and now uses shader model 5. There are no exceptions and the shader effect compiles fine, no debug-layer messages from the device except for an unrelated(?) default sampler-state complaint. It is a normal, fullscreen quad doing the simplest texture blit. (Using a random noise texture for testing)
I am feeding the vertex shader a VertexPositionTexture, a SharpDX type which contains:
[VertexElement("SV_Position")]
public Vector3 Position;
[VertexElement("TEXCOORD0")]
public Vector2 TextureCoordinate;
I suspect there is a mismatch between signatures. VertexPositionTexture has a Vector3 position component defined for "SV_Position".
D3D11 defines "SV_Position" as a float4. It is a System-Value constant with a concrete native type. I fear the shader is eating an extra float beyond the end of the float3 of Vector3. This would also explain why my UVs are broken on several vertices.
This is my vertex shader:
struct V2P
{
    float4 vPosition : SV_Position;
    float2 vTexcoord : TEXCOORD0;
};
V2P VSCopy(float4 InPos : SV_Position,
           float2 InTex : TEXCOORD0)
{
    V2P Out = (V2P)0;
    // transform the position to the screen
    Out.vPosition = float4(InPos);
    Out.vTexcoord = InTex;
    return Out;
}
I have tried changing the input of VSCopy() from float4 to float3 with no change in result. Short of recompiling SharpDX's VertexPositionTexture, I don't see how to fix this; yet SharpDX fully supports Direct3D 11, so I must be doing something incorrectly.
I am defining my vertices and setting them as follows:
//using SharpDX.Toolkit.Graphics
[...]//In the initialization
verts = new VertexPositionTexture[]
{
    new VertexPositionTexture(
        new Vector3(1.0f, -1.0f, 0),
        new Vector2(1, 1)),
    new VertexPositionTexture(
        new Vector3(-1.0f, -1.0f, 0),
        new Vector2(0, 1)),
    new VertexPositionTexture(
        new Vector3(-1.0f, 1.0f, 0),
        new Vector2(0, 0)),
    new VertexPositionTexture(
        new Vector3(1.0f, 1.0f, 0),
        new Vector2(1, 0))
};
//vb is a Buffer<VertexPositionTexture>
short[] indeces = new short[] { 0, 1, 2, 2, 3, 0 };
vb = SharpDX.Toolkit.Graphics.Buffer.Vertex.New(Game.Instance.GraphicsDevice, verts, SharpDX.Direct3D11.ResourceUsage.Dynamic);
ib = SharpDX.Toolkit.Graphics.Buffer.Index.New(Game.Instance.GraphicsDevice, indeces, dx11.ResourceUsage.Dynamic);
[...] //In the Render method, called every frame to draw the quad
Game.Instance.GraphicsDevice.SetVertexBuffer<VertexPositionTexture>(vb, 0);
Game.Instance.GraphicsDevice.SetIndexBuffer(ib, false);
Game.Instance.GraphicsDevice.DrawIndexed(PrimitiveType.TriangleList, 6);
SharpDX.Toolkit provides a method for specifying the InputLayout to the vertex shader. I had missed this method and was incorrectly trying to set the vertex input layout via the property:
SharpDX.Direct3D11.ImmediateContext.InputAssembler.InputLayout
You must set the input layout via the SharpDX.Toolkit.Graphics.GraphicsDevice.SetVertexInputLayout() method.
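For example, a sketch based on the Toolkit samples (VertexInputLayout.FromBuffer is assumed to be available in the SharpDX.Toolkit version in use):
// Build an input layout that matches VertexPositionTexture from the vertex buffer,
// then set it through the Toolkit device instead of the raw InputAssembler property.
var layout = SharpDX.Toolkit.Graphics.VertexInputLayout.FromBuffer(0, vb);
Game.Instance.GraphicsDevice.SetVertexInputLayout(layout);
Game.Instance.GraphicsDevice.SetVertexBuffer<VertexPositionTexture>(vb, 0);
Game.Instance.GraphicsDevice.SetIndexBuffer(ib, false);
Game.Instance.GraphicsDevice.DrawIndexed(PrimitiveType.TriangleList, 6);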

Get original texture color in a Fragment Shader in OpenGL

So, I need to make a shader to replace the gray colors in the texture with a given color. The fragment shader works properly if I set the color to a given specific one, like
gl_FragColor = vec4(1, 1, 0, 1);
However, I'm running into a problem when I try to retrieve the original color of the texture. It always returns black, for some reason.
uniform sampler2D texture; //texture to change
void main() {
    vec2 coords = gl_TexCoord[0].xy;
    vec3 normalColor = texture2D(texture, coords).rgb; // original color
    gl_FragColor = vec4(normalColor.r, normalColor.g, normalColor.b, 1);
}
Theoretically, it should do nothing: the texture should be unchanged. But it comes out entirely black instead. I think the problem is that I'm not sure how to pass the texture as a parameter (to the uniform variable). I'm currently using the ID (an integer), but it seems to always return black. So I basically don't know how to set the value of the texture uniform (or how to get the texture in any other way, without using parameters). The code (in Java):
program.setUniform("texture", t.getTextureID());
I'm using the Program class that I got from here, and also SlickUtil's Texture class, but I believe that is irrelevant.
program.setUniform("texture", t.getTextureID());
                              ^^^^^^^^^^^^^^^^
Nope nope nope.
Texture object IDs never go in uniforms.
Pass in the index of the texture unit you want to sample from.
So if you want to sample from the nth texture unit (GL_TEXTURE0 + n) pass in n:
program.setUniform("texture", 0);
                              ^ or whatever texture unit you've bound `t` to
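In other words, bind t to a texture unit and pass that unit's index; a sketch using the Slick/LWJGL classes from the question:
// Bind the texture to unit 0, then point the sampler uniform at unit 0.
GL13.glActiveTexture(GL13.GL_TEXTURE0);
t.bind(); // SlickUtil Texture; equivalent to glBindTexture(GL_TEXTURE_2D, t.getTextureID())
program.setUniform("texture", 0); // the unit index, not the texture object ID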
In addition to what genpfault said, when you say "replace the gray colors in the texture with a given color", that had better be shorthand for "write the color from one texture to another, except replacing gray with a different color", because you are not allowed to simultaneously read from and write to the same image in the same texture.

Strange rendering with Direct3D10

I'm writing an application which renders graphics on the screen. The application can switch between Direct3D9 and Direct3D10 graphics modules (I wrote DLLs that wrap both D3D9 and D3D10). When trying to render a test mesh (a torus which comes as a stock mesh in D3DX9 and in DXUT library you can find in DirectX10 samples), Direct3D10 module behaves rather weird. Here's what I get.
D3D9:
D3D10:
The view, projection and world matrices are the same in both cases. The only things that differ are the device initialization code and the HLSL effect files (for simplicity I only apply ambient color and don't use advanced lighting, texturing, etc.). Can this be because of wrong device initialization or because of bad shaders? I would appreciate any hint. I can post any piece of code on request.
Someone on the Game Dev Stack Exchange suggested that it is probably caused by a transposed projection matrix. I've tried changing the order in which the matrices are multiplied in the shader file, and I've tried almost every permutation I could, but got no correct output on the screen.
Thanks in advance.
EDIT: Here's the .fx file. You can ignore the PS; there's nothing interesting happening in there.
//Basic ambient light shader with no textures
matrix World;
matrix View;
matrix Projection;
float4 AmbientColor : AMBIENT = float4(1.0, 1.0, 1.0, 1.0);
float AmbientIntensity = 1.0;
struct VS_OUTPUT
{
    float4 Position : SV_POSITION; // vertex position
    float4 Color : COLOR0;         // vertex color
};
RasterizerState rsWireframe { FillMode = WireFrame; };
VS_OUTPUT RenderSceneVS( float4 vPos : POSITION)
{
    VS_OUTPUT output;
    matrix WorldProjView = mul(World, mul(View, Projection));
    vPos = mul(vPos, WorldProjView);
    output.Position = vPos;
    output.Color.rgb = AmbientColor * AmbientIntensity;
    output.Color.a = AmbientColor.a;
    return output;
}
struct PS_OUTPUT
{
    float4 RGBColor : SV_Target; // Pixel color
};
PS_OUTPUT RenderScenePS( VS_OUTPUT In )
{
    PS_OUTPUT output;
    output.RGBColor = In.Color;
    return output;
}
technique10 Ambient
{
    pass P0
    {
        SetRasterizerState( rsWireframe );
        SetVertexShader( CompileShader( vs_4_0, RenderSceneVS( ) ) );
        SetGeometryShader( NULL );
        SetPixelShader( CompileShader( ps_4_0, RenderScenePS( ) ) );
    }
}
Make sure that your vPos.w = 1.0f.
If this is not the case, matrix multiplication will go wild and create strange results.
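A minimal sketch of that guard at the top of RenderSceneVS (assuming the incoming position data may arrive with an undefined w component):
VS_OUTPUT RenderSceneVS( float4 vPos : POSITION)
{
    VS_OUTPUT output;
    vPos.w = 1.0f; // force a valid homogeneous coordinate before transforming
    matrix WorldProjView = mul(World, mul(View, Projection));
    output.Position = mul(vPos, WorldProjView);
    output.Color.rgb = AmbientColor.rgb * AmbientIntensity;
    output.Color.a = AmbientColor.a;
    return output;
}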
Not sure what causes the problem, but you can check the following:
make sure the constant buffers with the transformation matrices are "initialized with something", not garbage data
if you use normals/tangents in your vertex buffer, also make sure you don't put garbage data in there (per vertex), though that would more likely cause problems with texturing
make sure your vertex layout description matches the input in the vertex shader (.hlsl); sometimes even if it doesn't match, it will still compile and run but show an unexpected mesh
I have no idea how it is in DX9, but maybe there is also something with the coordinate systems; multiplying z in the vertex buffer or in some transformation matrix by -1 might help
Edit: It might also be a good idea to just put some simple mesh into the buffer, a cube for example (or even a triangle), and check whether it draws properly.
You need to transpose your matrices before setting them as shader constants. If you are using xnamath, use the XMMatrixTranspose() function on each of the world, view and projection matrices before setting them into your constant buffer.
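A sketch of that on the application side, assuming xnamath and a manually managed constant buffer (the buffer and variable names here are hypothetical):
// Transpose each matrix before upload so the row-major XMMATRIX data matches
// HLSL's default column-major matrix packing.
struct TransformCB
{
    XMMATRIX World;
    XMMATRIX View;
    XMMATRIX Projection;
};

TransformCB cb;
cb.World      = XMMatrixTranspose(world);
cb.View       = XMMatrixTranspose(view);
cb.Projection = XMMatrixTranspose(projection);
pDevice->UpdateSubresource(pTransformCB, 0, nullptr, &cb, 0, 0);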