How to make a sampler2D struct with dynamic values? - hlsl

Here's my scenario:
texture diffuseMap;
sampler2D diffuseSampler = sampler_state {
Texture = <diffuseMap>;
Filter = MIN_MAG_MIP_LINEAR;
AddressU = WRAP;
AddressV = WRAP;
};
I would like to be able to change the diffuseSampler Filter value from within my C# application by passing a parameter to my effect. Here is how I pictured it:
#define __DiffuseFilterLinear 0x0
#define __DiffuseFilterAnisotropic 0x1
#define __DiffuseFilterNearest 0x2
int diffuseMapFilter; // Here's where I'll assign, via my application,
// a hex value corresponding to one of the three
// #defines above
Then, inside diffuseSampler, I would like to achieve the equivalent to setting the Filter value using conditionals, like so:
sampler2D diffuseSampler = sampler_state {
Texture = <diffuseMap>;
// Conditionals to pick the right Filter value
switch (diffuseMapFilter)
{
case __DiffuseFilterLinear:
Filter = MIN_MAG_MIP_LINEAR;
break;
case __DiffuseFilterAnisotropic:
Filter = ANISOTROPIC;
break;
case __DiffuseFilterNearest:
Filter = MIN_MAG_MIP_POINT;
break;
}
AddressU = WRAP;
AddressV = WRAP;
};
I know trying to insert a switch block inside a sampler struct is probably heresy, but I think it illustrates what I am trying to do.
How can I set the diffuseSampler Filter value according to the diffuseMapFilter parameter?
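A sampler_state block is fixed at effect-compile time, so conditionals cannot live inside it. One common workaround (a sketch, assuming the declarations above; the SampleDiffuse helper name is hypothetical) is to declare one sampler per filter mode and select between them in the shader using the diffuseMapFilter uniform:

```hlsl
texture diffuseMap;
int diffuseMapFilter; // set from the C# application

sampler2D diffuseSamplerLinear = sampler_state {
    Texture = <diffuseMap>;
    Filter = MIN_MAG_MIP_LINEAR;
    AddressU = WRAP; AddressV = WRAP;
};
sampler2D diffuseSamplerAniso = sampler_state {
    Texture = <diffuseMap>;
    Filter = ANISOTROPIC;
    AddressU = WRAP; AddressV = WRAP;
};
sampler2D diffuseSamplerNearest = sampler_state {
    Texture = <diffuseMap>;
    Filter = MIN_MAG_MIP_POINT;
    AddressU = WRAP; AddressV = WRAP;
};

// Hypothetical helper: picks a sampler per the application-set uniform.
float4 SampleDiffuse(float2 uv)
{
    if (diffuseMapFilter == 0x1)      return tex2D(diffuseSamplerAniso, uv);
    else if (diffuseMapFilter == 0x2) return tex2D(diffuseSamplerNearest, uv);
    else                              return tex2D(diffuseSamplerLinear, uv);
}
```

Alternatively, you could define one technique per filter mode and select the technique from C#, which avoids the per-pixel branching entirely.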

A shader consists of a vertex shader and a pixel shader. 3D model data flows from the application to the vertex shader, then to the pixel shader, and finally to the frame buffer. The a2v struct represents the data structure passed from the application to the vertex shader, v2p from the vertex shader to the pixel shader, and p2f from the pixel shader to the frame buffer. The program below transforms a vertex's position into clip space using the view matrix.
struct a2v {
float4 Position : POSITION;
};
Inside the HLSL program, the a2v struct specifies the structure that holds a single vertex's data.
struct v2p {
float4 Position : POSITION;
};
The v2p struct specifies the stream structure from the vertex shader to the pixel shader. Position is a four-dimensional vector, declared as float4. POSITION, called an output semantic, tells the pipeline how the Position field is to be interpreted.
void main(in a2v IN, out v2p OUT, uniform float4x4 ModelViewMatrix)
{
OUT.Position = mul(IN.Position, ModelViewMatrix);
}
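To complete the a2v → v2p → p2f chain described above, a minimal pixel shader might look like this (a sketch; the p2f struct and mainPS name follow the same convention but are not from the original):

```hlsl
struct p2f {
    float4 Color : COLOR; // output semantic: written to the frame buffer
};

void mainPS(in v2p IN, out p2f OUT)
{
    // Constant white, purely for illustration.
    OUT.Color = float4(1.0f, 1.0f, 1.0f, 1.0f);
}
```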
Here's a tutorial

Related

D3D11 Multiple Vertex Buffer 2D Depth sorting with depth buffer

I am trying to create some 2D UI drawing in D3D11. I am trying to get the draw calls down as much as possible to improve performance. Previously, I batched up as many textures as possible to send to one draw call, they were accessed with an else if. Well it turned out that sending many textures, especially large ones, to a single shader destroyed performance in some scenarios which is exactly the opposite of what I was trying to achieve.
The goal is to render things back to front from 2 VBOs. So now, I am trying to split some things up into a few vertex buffers. I have one vertex buffer that contains non-textured vertices, and one that contains different textures. This is because I send one texture to the shader at a time, and want to keep the amount of texture swapping between vertices being drawn at a minimum. The problem I am encountering is that the depth sorting does not work as I want it to. I render the textured vertex buffer first, then the non-textured one. When I try to render textures over something rendered in the non-textured vertex buffer, it simply doesn't work. There may be some other strange effects that I can't quite explain.
Maybe this is something with my alpha blending? It seems to me that the Z value I write has no effect on the output. Here is the relevant code:
desc->AlphaToCoverageEnable = false;
desc->RenderTarget[0].BlendEnable = true;
desc->RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
desc->RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
desc->RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
desc->RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_INV_DEST_ALPHA;
desc->RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ONE;
desc->RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
desc->RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
d3d->device->CreateBlendState(desc.get(), &d3d->blend_state);
I also tried Depth -= 1.0f and keeping the depth between 0.0 and 1.0 here to no effect
void put(float x, float y, float u, float v) {
CurrentVBO->put({ {x,y, Depth }, current_color, {u,v}, Scale });
Depth += 1.0f;
}
vertex shader
cbuffer vertex_buffer : register(b0) {
float4x4 projection_matrix;
float render_time;
};
struct VS_INPUT {
float3 pos : POSITION;
float4 color : COLOR;
float2 uv : TEXCOORD;
float scale : SCALE;
};
struct PS_INPUT {
float4 pos : SV_Position;
float4 color : COLOR;
float2 uv : TEXCOORD;
uint tex_id : TEXID;
};
PS_INPUT main(VS_INPUT input) {
PS_INPUT output;
output.pos = mul(projection_matrix, float4(input.pos.xy * input.scale, input.pos.z, 1.0f));
output.color = input.color;
output.uv = input.uv;
return output;
}
desc->DepthEnable = true;
desc->DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
desc->DepthFunc = D3D11_COMPARISON_LESS;
desc->StencilEnable = false;
desc->FrontFace.StencilFailOp = desc->FrontFace.StencilDepthFailOp = desc->FrontFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
desc->FrontFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
desc->BackFace = desc->FrontFace;
d3d->device->CreateDepthStencilState(desc.get(), &d3d->depth_stencil_2d);
After some more reading, I may need to do some more work to make this work properly.
Given the problems of blending and ordering transparent objects with a depth buffer, I went with my original plan, but only allow a few textures to be passed to one shader. I also removed the high-resolution textures, as I think they were causing bandwidth problems in general; the GPU is already being heavily strained in my case.

Directx: HLSL Texture based height map. (Shader 5.0)

I'm trying to implement a GPU based height map the simplest (and fastest) way that I know how. I passed a (.png, D3DX11CreateShaderResourceViewFromFile()) texture into the shader, and I'm attempting to sample it for the current pixel value. Seeing a float4, I'm currently assigning a color value from a channel to offset the y value.
Texture2D colorMap : register(t0);
SamplerState colorSampler : register(s0);
...
VOut VShader(float4 position : POSITION, float2 Texture : TEXCOORD0, float3 Normal : NORMAL)
{
VOut output;
float4 colors = colorMap.SampleLevel(colorSampler, float4(position.x*0.001, position.z*0.001, 0,0 ),0);
position.y = colors.x*128;
output.position = mul(position, WVP);
output.texture0 = Texture;
output.normal = Normal;
return output;
}
The texture is imported correctly, and I've inserted another texture and was able to successfully blend it with the first (through multiplication of values), so I know that the float4 struct contains values capable of having arithmetic performed on them.
In the vertex function, however, attempting to extract the values yields nothing on the grid:
The concept seemed simple enough on paper...
Since you're using a Texture2D, the Location parameter needs to be a float2.
Also, make sure that location goes from (0,0) to (1,1). For your mapping to be correct, the grid would need to be placed from (0,0,0) to (1000,0,1000).
If this is the case then this should work:
SampleLevel(colorSampler, position.xz*0.001 ,0);
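For context, the corrected vertex function would then read (a sketch; VOut and WVP are assumed to be defined elsewhere in your project, as in the question):

```hlsl
VOut VShader(float4 position : POSITION, float2 Texture : TEXCOORD0, float3 Normal : NORMAL)
{
    VOut output;
    // Texture2D.SampleLevel takes a float2 location; scale world XZ into [0,1] UV space.
    float4 colors = colorMap.SampleLevel(colorSampler, position.xz * 0.001, 0);
    position.y = colors.x * 128;
    output.position = mul(position, WVP);
    output.texture0 = Texture;
    output.normal = Normal;
    return output;
}
```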
Edit
I'm curious as to how you're testing this. I tried compiling your code, with added definitions for VOut and WVP, and it fails. One of the errors is that the location parameter is a float4 when it should be a float2. The other error I get is the name of the function; it should be main.
If you happen to be using Visual Studio, I strongly recommend using the Graphics Debugging tools and check all the variables. I suspect the colorMap texture might be bound to the pixel shader but not the vertex shader.

SharpDX D3D11 VertexElement Format Mismatch?

I have a shader that no longer draws correctly. It was working in XNA, but had to be rewritten for DX11 and SharpDX and now uses shader model 5. There are no exceptions and the shader effect compiles fine, no debug-layer messages from the device except for an unrelated(?) default sampler-state complaint. It is a normal, fullscreen quad doing the simplest texture blit. (Using a random noise texture for testing)
I am feeding the vertex shader a VertexPositionTexture, a SharpDX type which contains:
[VertexElement("SV_Position")]
public Vector3 Position;
[VertexElement("TEXCOORD0")]
public Vector2 TextureCoordinate;
I suspect there is a mismatch between signatures. VertexPositionTexture has a Vector3 position component defined for "SV_Position".
D3D11 defines "SV_Position" as a float4. It is a System-Value constant with a concrete native type. I fear the shader is eating an extra float beyond the end of the float3 of Vector3. This would also explain why my UVs are broken on several vertices.
This is my vertex shader:
struct V2P
{
float4 vPosition : SV_Position;
float2 vTexcoord : TEXCOORD0;
};
V2P VSCopy(float4 InPos : SV_Position,
float2 InTex : TEXCOORD0)
{
V2P Out = (V2P)0;
// transform the position to the screen
Out.vPosition = float4(InPos);
Out.vTexcoord = InTex;
return Out;
}
I have tried to change the input of VSCopy() from float4 to float3 with no change in result. Short of recompiling SharpDX VertexPositionTexture, I don't see how to fix this, yet SharpDX fully supports DirectX11 so I must be doing something incorrectly.
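For reference, a float3 variant that supplies w explicitly would look like this (a sketch mirroring the original signature):

```hlsl
V2P VSCopy(float3 InPos : SV_Position,
           float2 InTex : TEXCOORD0)
{
    V2P Out = (V2P)0;
    // Expand the float3 position to float4, supplying w = 1 explicitly
    // so no extra float is consumed from the vertex stream.
    Out.vPosition = float4(InPos, 1.0f);
    Out.vTexcoord = InTex;
    return Out;
}
```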
I am defining my vertices and setting them as follows:
//using SharpDX.Toolkit.Graphics
[...]//In the initialization
verts = new VertexPositionTexture[]
{
new VertexPositionTexture(
new Vector3(1.0f,-1.0f,0),
new Vector2(1,1)),
new VertexPositionTexture(
new Vector3(-1.0f,-1.0f,0),
new Vector2(0,1)),
new VertexPositionTexture(
new Vector3(-1.0f,1.0f,0),
new Vector2(0,0)),
new VertexPositionTexture(
new Vector3(1.0f,1.0f,0),
new Vector2(1,0))
}
//vb is a Buffer<VertexPositionTexture>
short[] indices = new short[] { 0, 1, 2, 2, 3, 0 };
vb = SharpDX.Toolkit.Graphics.Buffer.Vertex.New(Game.Instance.GraphicsDevice, verts, SharpDX.Direct3D11.ResourceUsage.Dynamic);
ib = SharpDX.Toolkit.Graphics.Buffer.Index.New(Game.Instance.GraphicsDevice, indices, dx11.ResourceUsage.Dynamic);
[...] //In the Render method, called every frame to draw the quad
Game.Instance.GraphicsDevice.SetVertexBuffer<VertexPositionTexture>(vb, 0);
Game.Instance.GraphicsDevice.SetIndexBuffer(ib, false);
Game.Instance.GraphicsDevice.DrawIndexed(PrimitiveType.TriangleList, 6);
SharpDX.Toolkit provides a method for specifying the InputLayout for the vertex shader. I had missed this method and was incorrectly trying to set the vertex input layout via the property:
SharpDX.Direct3D11.ImmediateContext.InputAssembler.InputLayout
You must set the input layout via the SharpDX.Toolkit.Graphics.GraphicsDevice.SetVertexInputLayout(); method.

Weird y-position offset using custom frag shader (Cocos2d-x)

I'm trying to mask a sprite so I wrote a simple fragment shader that renders only the pixels that are not hidden under another texture (the mask). The problem is that it seems my texture has its y-coordinate offset after passing through the shader.
This is the init method of the sprite (GroundZone) I want to mask:
bool GroundZone::initWithSize(Size size) {
// [...]
// Setup the mask of the sprite
m_mask = RenderTexture::create(textureWidth, textureHeight);
m_mask->retain();
m_mask->setKeepMatrix(true);
Texture2D *maskTexture = m_mask->getSprite()->getTexture();
maskTexture->setAliasTexParameters(); // Disable linear interpolation on the mask
// Load the custom frag shader with a default vert shader as the sprite’s program
FileUtils *fileUtils = FileUtils::getInstance();
string vertexSource = ccPositionTextureA8Color_vert;
string fragmentSource = fileUtils->getStringFromFile(
fileUtils->fullPathForFilename("CustomShader_AlphaMask_frag.fsh"));
GLProgram *shader = new GLProgram;
shader->initWithByteArrays(vertexSource.c_str(), fragmentSource.c_str());
shader->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_POSITION, GLProgram::VERTEX_ATTRIB_POSITION);
shader->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_TEX_COORD, GLProgram::VERTEX_ATTRIB_TEX_COORDS);
shader->link();
CHECK_GL_ERROR_DEBUG();
shader->updateUniforms();
CHECK_GL_ERROR_DEBUG();
int maskTexUniformLoc = shader->getUniformLocationForName("u_alphaMaskTexture");
shader->setUniformLocationWith1i(maskTexUniformLoc, 1);
this->setShaderProgram(shader);
shader->release();
// [...]
}
These are the custom drawing methods for actually drawing the mask over the sprite:
You need to know that m_mask is modified externally by another class; the onDraw() method only renders it.
void GroundZone::draw(Renderer *renderer, const kmMat4 &transform, bool transformUpdated) {
m_renderCommand.init(_globalZOrder);
m_renderCommand.func = CC_CALLBACK_0(GroundZone::onDraw, this, transform, transformUpdated);
renderer->addCommand(&m_renderCommand);
Sprite::draw(renderer, transform, transformUpdated);
}
void GroundZone::onDraw(const kmMat4 &transform, bool transformUpdated) {
GLProgram *shader = this->getShaderProgram();
shader->use();
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, m_mask->getSprite()->getTexture()->getName());
glActiveTexture(GL_TEXTURE0);
}
Below is the method (located in another class, GroundLayer) that modifies the mask by drawing a line from point start to point end. Both points are in Cocos2d coordinates (point (0,0) is bottom-left).
void GroundLayer::drawTunnel(Point start, Point end) {
// To dig a line, we need first to get the texture of the zone we will be digging into. Then we get the
// relative position of the start and end point in the zone's node space. Finally we use the custom shader to
// draw a mask over the existing texture.
for (auto it = _children.begin(); it != _children.end(); it++) {
GroundZone *zone = static_cast<GroundZone *>(*it);
Point nodeStart = zone->convertToNodeSpace(start);
Point nodeEnd = zone->convertToNodeSpace(end);
// Now that we have our two points converted to node space, it's easy to draw a mask that contains a line
// going from the start point to the end point and that is then applied over the current texture.
Size groundZoneSize = zone->getContentSize();
RenderTexture *rt = zone->getMask();
rt->begin(); {
// Draw a line going from start and going to end in the texture, the line will act as a mask over the
// existing texture
DrawNode *line = DrawNode::create();
line->retain();
line->drawSegment(nodeStart, nodeEnd, 20, Color4F::RED);
line->visit();
} rt->end();
}
}
Finally, here's the custom shader I wrote.
#ifdef GL_ES
precision mediump float;
#endif
varying vec2 v_texCoord;
uniform sampler2D u_texture;
uniform sampler2D u_alphaMaskTexture;
void main() {
float maskAlpha = texture2D(u_alphaMaskTexture, v_texCoord).a;
float texAlpha = texture2D(u_texture, v_texCoord).a;
float blendAlpha = (1.0 - maskAlpha) * texAlpha; // Show only where mask is invisible
vec3 texColor = texture2D(u_texture, v_texCoord).rgb;
gl_FragColor = vec4(texColor, blendAlpha);
return;
}
I got a problem with the y coordinates. Indeed, it seems that once it has passed through my custom shader, the sprite's texture is not at the right place:
Without custom shader (the sprite is the brown thing):
With custom shader:
What's going on here? Thanks :)
Found the solution. The vert shader should not use the MVP matrix so I loaded ccPositionTextureColor_noMVP_vert instead of ccPositionTextureA8Color_vert.
In your vert shader (.vsh), your main method should look something like this:
attribute vec4 a_position;
attribute vec2 a_texCoord;
attribute vec4 a_color;
varying vec4 v_fragmentColor;
varying vec2 v_texCoord;
void main()
{
//CC_PMatrix is the projection matrix, whereas CC_MVPMatrix is the model-view-projection matrix. Since in 2D we are using an ortho camera, CC_PMatrix is enough for the calculation.
//gl_Position = CC_MVPMatrix * a_position;
gl_Position = CC_PMatrix * a_position;
v_fragmentColor = a_color;
v_texCoord = a_texCoord;
}
Note that we are using CC_PMatrix instead of CC_MVPMatrix.

HLSL and DX Skybox Issues (creates seams)

I'm (re)learning DirectX and have moved into HLSL coding. Prior to using my custom .fx file I created a skybox for a game with a vertex buffer of quads. Everything worked fine: textures mapped and wrapped beautifully. However, now that I have HLSL set up to manage the vertices, there are distinctive seams where the quads meet. The textures all line up properly; I just can't get rid of this damn seam!
I tend to think the problem is with the texCUBE, or rather with all the texturing information here. I'm texturing the quads in DX; it may just be that I still don't quite get the link between the two, I'm not sure. Anyway, thanks for the help in advance!
Here's the .fx file:
float4x4 World;
float4x4 View;
float4x4 Projection;
float3 CameraPosition;
Texture SkyBoxTexture;
samplerCUBE SkyBoxSampler = sampler_state
{
texture = <SkyBoxTexture>;
minfilter = ANISOTROPIC;
mipfilter = LINEAR;
AddressU = Wrap;
AddressV = Wrap;
AddressW = Wrap;
};
struct VertexShaderInput
{
float4 Position : POSITION0;
};
struct VertexShaderOutput
{
float4 Position : POSITION0;
float3 TextureCoordinate : TEXCOORD0;
};
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
VertexShaderOutput output;
float4 worldPosition = mul(input.Position, World);
float4 viewPosition = mul(worldPosition, View);
output.Position = mul(viewPosition, Projection);
float4 VertexPosition = mul(input.Position, World);
output.TextureCoordinate = VertexPosition - CameraPosition;
return output;
}
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
return texCUBE(SkyBoxSampler, normalize(input.TextureCoordinate));
}
technique Skybox
{
pass Pass1
{
VertexShader = compile vs_2_0 VertexShaderFunction();
PixelShader = compile ps_2_0 PixelShaderFunction();
}
}
To avoid seams you need to draw your skybox in a single DrawIndexedPrimitive call, preferably as a triangle strip. DON'T draw each face as a separate primitive transformed with an individual matrix or anything like that; you WILL get seams. If for some unexplainable reason you don't want to use a single DrawIndexedPrimitive call for the skybox parts, then you must ensure that all faces are drawn using the same matrices (the same world + view + projection matrices in every call) and the same coordinate values for corner vertices, i.e. the "top" face should use exactly the same position vectors for its corners as the "side" faces do.
Another thing is how you store the skybox. You should use either:
a cubemap (which looks like what you're doing): make just 8 vertices for the skybox and draw them as an indexed primitive;
or an unwrapped "atlas" texture with the unused areas filled with the border color;
or, if you're comfortable with shaders, you could "raytrace" the skybox in a shader.
You need to clamp the texture coordinates with SetSamplerState to get rid of the seam. This Toymaker page explains it. Toymaker is a great site for learning Direct3D; you should check out the tutorials if you have any more trouble.
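Concretely, clamped addressing on the cube sampler from the question would look like this (a sketch of the same sampler_state with only the address modes changed):

```hlsl
samplerCUBE SkyBoxSampler = sampler_state
{
    texture = <SkyBoxTexture>;
    minfilter = ANISOTROPIC;
    mipfilter = LINEAR;
    AddressU = Clamp;  // clamp instead of wrap so edge texels are not
    AddressV = Clamp;  // blended with texels from the opposite edge,
    AddressW = Clamp;  // which is what produces visible seams
};
```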
You may like to draw the skybox using only one quad. All you need is the inverse of the World*View*Proj matrix, that is (World*View*Proj)^(-1).
The vertices of the quad should be: (1, 1, 1, 1), (1, -1, 1, 1), (-1, 1, 1, 1), (-1, -1, 1, 1).
Then you compute texture coordinates in VS:
float4 pos = mul(vPos, WorldViewProjMatrixInv);
float3 tex_coord = pos.xyz / pos.w;
And finally you sample the texture in PS:
float4 color = texCUBE(sampler, tex_coord);
No worry about any seams! :)
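Putting those pieces together, the single-quad technique might be sketched as follows (the struct and uniform names are illustrative; WorldViewProjMatrixInv is computed by the application as (World*View*Proj)^(-1)):

```hlsl
float4x4 WorldViewProjMatrixInv; // set by the application each frame
samplerCUBE SkySampler;

struct VSOut {
    float4 pos : POSITION0;
    float3 tex : TEXCOORD0;
};

// Quad corners fed in: (1,1,1,1), (1,-1,1,1), (-1,1,1,1), (-1,-1,1,1)
VSOut VS(float4 vPos : POSITION0)
{
    VSOut o;
    o.pos = vPos; // the quad is already in clip space, on the far plane
    float4 p = mul(vPos, WorldViewProjMatrixInv);
    o.tex = p.xyz / p.w; // world-space view direction for the cubemap lookup
    return o;
}

float4 PS(VSOut i) : COLOR0
{
    return texCUBE(SkySampler, i.tex);
}
```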