DirectX 11: Shader not creating world view matrix - c++

After coming across this problem here, where the Entity::draw() call would not display anything because the vertex shader values came back as 0 after multiplication with the world view matrix, the problem was traced to a faulty constant buffer input. However, after checking the values, I can't seem to pin down the problem. I pre-multiplied the World, View, and Projection matrices:
mWorld = XMMatrixIdentity();
mView = XMMatrixLookAtLH(Eye, At, Up);
mProjection = XMMatrixPerspectiveFovLH(XM_PIDIV2, 1.0, 0.0f, 1000.0f);
mWVP = mWorld*mView*mProjection;
mWVP comes out as:
-0.999999940, 0.000000000, 0.000000000, 0.000000000
0.000000000, 0.999999940, 0.000000000, 0.000000000
0.000000000, 0.000000000, -1.00000000, -1.00000000
0.000000000, 0.000000000, 5.00000000, 5.00000000
mWVP enters the constant buffer after being transposed:
WorldCB.mWorldVP = XMMatrixTranspose(mWVP);
DeviceContext->UpdateSubresource(MatrixBuffer, 0, NULL, &WorldCB, 0, 0);
DeviceContext->VSSetConstantBuffers(0, 1, &MatrixBuffer);
XMMatrixTranspose(mWVP) comes out as:
-0.999999940, 0.000000000, 0.000000000, 0.000000000
0.000000000, 0.999999940, 0.000000000, 0.000000000
0.000000000, 0.000000000, -1.00000000, 5.00000000
0.000000000, 0.000000000, -1.00000000, 5.00000000
Which looks OK, at least to me. Next my shader starts doing its thing, but here's where things get funky. Checking the disassembly shows that when:
output.position = mul(position, WVP);
Vertex Shader:
00000000 dp4 o0.x, v0.xyzw, cb0[0].xyzw
00000001 dp4 o0.y, v0.xyzw, cb0[1].xyzw
00000002 dp4 o0.z, v0.xyzw, cb0[2].xyzw
00000003 dp4 o0.w, v0.xyzw, cb0[3].xyzw
00000004 mov o1.xyzw, v1.xyzw
Each of these multiplications returns 0. If I use output.position = position; instead, the values are correct and the box displays, just without the world transformation applied.
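For reference, those four dp4 instructions are just row dot products against the constant buffer: o0.i = dot(v0, cb0[i]). A minimal C++ sketch (hypothetical helper names, not D3D API) shows why a constant buffer that never receives its data produces exactly this symptom — zero rows give a zero position no matter which vertex comes in:

```cpp
#include <array>
#include <cassert>

using Vec4 = std::array<float, 4>;

// One dp4 instruction: a 4-component dot product.
float dp4(const Vec4& a, const Vec4& b) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2] + a[3]*b[3];
}

// What the shader's four dp4 lines compute: one dot product per cbuffer row.
Vec4 transform(const Vec4& v, const std::array<Vec4, 4>& cb0) {
    return { dp4(v, cb0[0]), dp4(v, cb0[1]), dp4(v, cb0[2]), dp4(v, cb0[3]) };
}
```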
The full shader file below:
cbuffer ConstantBuffer:register(b0)
{
matrix WVP;
}
struct VOut
{
float4 position : SV_POSITION;
float4 color : COLOR;
};
VOut VShader(float4 position : POSITION, float4 color : COLOR)
{
VOut output;
output.position = mul(position, WVP); // position;
output.color = color;
return output;
}
float4 PShader(float4 position : SV_POSITION, float4 color : COLOR) : SV_TARGET
{
return color;
}
Edit: I also noticed that the transposed WVP matrix comes out as zero:
ObjectSpace = m_Scale*m_Rotation*m_Translate;
mWVP = ObjectSpace*direct3D.mView*direct3D.mProjection;
LocalWorld.mWorldVP = XMMatrixTranspose(wWVP);
XMMatrixTranspose(wWVP) comes out:
0 0 0 0
0 0 0 0
0 0 0 0
0 0 0 0
And is likely the problem. Any guesses as to why the transpose of a matrix would equal 0?

The near plane of the perspective projection must be some value larger than zero. If it is zero, then the near plane is exactly where the camera is located, and everything in the scene converges to a single point.
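You can see the degeneration directly in the depth row of the left-handed perspective matrix that XMMatrixPerspectiveFovLH builds: z' = z·zf/(zf−zn) − zf·zn/(zf−zn), w' = z. With NearZ = 0 the translation term vanishes and every point lands on NDC depth 1. A self-contained sketch of just that depth row:

```cpp
#include <cassert>

// NDC depth produced by a left-handed perspective projection
// (the same depth row XMMatrixPerspectiveFovLH builds).
double ndcDepth(double z, double zn, double zf) {
    double range = zf / (zf - zn);
    double zClip = z * range - range * zn; // clip-space z
    double wClip = z;                      // clip-space w
    return zClip / wClip;                  // perspective divide
}
```

With a small positive near plane (e.g. 0.01f) depth varies with distance again, as it should.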

Related

TRIANGLESTRIP works as required with DrawIndexed why not with Draw?

I was trying to draw triangles with user-defined coordinates using DirectX 11:
void triangle(float anchorx, float anchory, float x1, float y1, float x2, float y2, XMFLOAT4 _color)
{
XMMATRIX scale;
XMMATRIX translate;
XMMATRIX world;
simplevertex framevertices[3] = { XMFLOAT3(anchorx, ui::height - anchory, 1.0f), XMFLOAT2(0.0f, 0.0f),
XMFLOAT3(x1, ui::height - y1, 1.0f), XMFLOAT2(1.0f, 0.0f),
XMFLOAT3(x2, ui::height - y2, 1.0f), XMFLOAT2(1.0f, 1.0f) };
world = XMMatrixIdentity();
dx11::generalcb.world = XMMatrixTranspose(world);// XMMatrixIdentity();
dx11::generalcb.fillcolor = _color;
dx11::generalcb.projection = XMMatrixOrthographicOffCenterLH( 0.0f, ui::width, 0.0f, ui::height, 0.01f, 100.0f );
// copy the vertices into the buffer
D3D11_MAPPED_SUBRESOURCE ms;
dx11::context->Map(dx11::vertexbuffers::trianglevertexbuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &ms);
memcpy(ms.pData, framevertices, sizeof(simplevertex) * 3);
dx11::context->Unmap(dx11::vertexbuffers::trianglevertexbuffer, 0);
dx11::context->VSSetShader(dx11::shaders::simplevertexshader, NULL, 0);
dx11::context->IASetVertexBuffers(0, 1, &dx11::vertexbuffers::trianglevertexbuffer, &dx11::verticestride, &dx11::verticeoffset);
dx11::context->IASetInputLayout(dx11::shaders::simplevertexshaderlayout);
dx11::context->PSSetShader(dx11::shaders::panelpixelshader, NULL, 0);
dx11::context->UpdateSubresource(dx11::general_cb, 0, NULL, &dx11::generalcb, 0, 0);
dx11::context->IASetIndexBuffer(dx11::indexbuffers::triangleindexbuffer, DXGI_FORMAT_R16_UINT, 0);
dx11::context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
dx11::context->DrawIndexed(4, 0, 0);
//dx11::context->Draw(3, 0);
};
Subtraction on the Y axis when filling the pos.y data:
XMFLOAT3(anchorx, ui::height - anchory, 1.0f)
XMFLOAT3(x1, ui::height - y1, 1.0f)
XMFLOAT3(x2, ui::height - y2, 1.0f)
My orthographic projection puts the coordinate origin at the bottom-left of the screen, so I subtract the passed Y coordinate from the window height to get the proper position on the Y axis. I'm not sure it affects my problem, because it worked well with all the other primitives (rectangles, textures, filled rectangles, lines and circles).
Index order defined in index buffer:
unsigned short triangleindices[6] = { 0, 1, 2, 0, 2, 3 };
I'm actually using this index buffer to render rectangles, so it's set up to render 2 triangles forming a quad; I didn't bother to create a separate triangle index buffer.
trianglevertexbuffer contains an array of 4 simplevertex:
//A VERTEX STRUCT
struct simplevertex
{
XMFLOAT3 pos; // pixel coordinates
XMFLOAT2 uv; // texture coordinates of the corresponding point
};
I was not using the UV data in the function above and just filled it with arbitrary values, since the color is passed via the constant buffer. As you can see, I also memcpy only the first 3 entries of the array into the vertex buffer, because a triangle requires only 3 vertices.
VERTEX SHADER:
// SIMPLE VERTEX SHADER
cbuffer buffer : register(b0) // constant buffer, set up as 0 in set constant buffers command
{
matrix world; // world matrix
matrix projection; // projection matrix, is orthographic for now
float4 bordercolor;
float4 fillcolor;
float blendindex;
};
// simple vertex shader
float4 main(float4 input : POSITION) : SV_POSITION
{
float4 output = (float4)0; // setting variable fields to zero, may be skipped?
output = mul(input, world); // multiplying vertex shader output by world matrix passed in the constant buffer
output = mul(output, projection); // multiplying on by projection passed in the constant buffer (2d for now), resulting in final output data(.pos)
return output;
}
PIXEL SHADER:
// SIMPLE PIXEL SHADER RETURNING THE COLOR PASSED IN THE CONSTANT BUFFER
cbuffer buffer : register(b0) // constant buffer, set up as 0 in set constant buffers command
{
matrix world; // world matrix
matrix projection; // projection matrix, is orthographic for now
float4 bordercolor;
float4 fillcolor;
float blendindex;
};
// pixel shader returning preset color in constant buffer
float4 main(float4 input : SV_POSITION) : SV_Target
{
return fillcolor; // and we just return the color passed to constant buffer
}
Layout used in the function:
{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 };
There, I created next lines in the rendering cycle:
if (ui::keys[VK_F1]) tools::triangle(700, 100, 400, 300, 850, 700, colors::blue);
if (ui::keys[VK_F2]) tools::triangle(400, 300, 700, 100, 850, 700, colors::blue);
if (ui::keys[VK_F3]) tools::triangle(700, 100, 850, 700, 400, 300, colors::blue);
if (ui::keys[VK_F4]) tools::triangle(850, 700, 400, 300, 700, 100, colors::blue);
if (ui::keys[VK_F5]) tools::triangle(850, 700, 700, 100, 400, 300, colors::blue);
if (ui::keys[VK_F6]) tools::triangle(400, 300, 850, 700, 700, 100, colors::blue);
The point of this setup is to accept the coordinates forming a triangle in any random order, and that is actually the point of this question: with some coordinate orders the triangle did not get rendered, so I came to the conclusion that it comes down to the TOPOLOGY. As you can see, at the moment I have:
dx11::context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
dx11::context->DrawIndexed(4, 0, 0);
This is the only combination that draws all the triangles, but I honestly don't understand how that happens: from what I know, STRIP topology is used with the context->Draw function, while LIST works with an index buffer setup. So I tried:
dx11::context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
dx11::context->DrawIndexed(4, 0, 0);
Triangles F1, F5, F6 were not drawn. Aight, next:
dx11::context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
//dx11::context->DrawIndexed(4, 0, 0);
dx11::context->Draw(3, 0);
Same story, F1, F5 and F6 are not rendered.
I cannot understand what is going on. You may find the code a bit primitive, but I only want to know why I get a working result only with the combination of STRIP topology and the DrawIndexed function. I hope I provided enough information; sorry if not, I'll correct on demand. Thank you :)
Got it.
First of all, in my rasterizer settings I had rasterizerState.CullMode = D3D11_CULL_FRONT (I don't remember why; maybe I just copy-pasted someone else's code), so I changed it to D3D11_CULL_NONE and everything worked as intended:
dx11::context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
dx11::context->Draw(3, 0);
Second, I found out that whether a primitive faces the view depends on the drawing direction. With the default rasterizer state (FrontCounterClockwise = FALSE), a triangle whose vertices are drawn clockwise in screen space is considered front-facing; if they go counter-clockwise, we see its "back" instead.
So I decided to keep D3D11_CULL_NONE instead of adding math logic to fix the order of the vertices; I don't know for sure whether D3D11_CULL_NONE costs performance, though.
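For reference, that "math logic" would be small: the sign of the 2D cross product of two triangle edges gives the winding, so you could reorder vertices before filling the buffer instead of disabling culling. A plain C++ sketch (hypothetical helper):

```cpp
#include <cassert>

// Twice the signed area of triangle (a, b, c). Positive means the
// vertices go counter-clockwise in a y-up coordinate system,
// negative means clockwise.
double signedArea2(double ax, double ay, double bx, double by, double cx, double cy) {
    return (bx - ax) * (cy - ay) - (cx - ax) * (by - ay);
}
```

If the sign is wrong for your front-face convention, swap two vertices before writing them into the vertex buffer.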

HLSL: SV_Position, why/how from float3 to float4?

I'm just at the very very beginning of learning shaders/hlsl etc., so please excuse the probably stupid question.
I'm following Microsoft's DirectX Tutorials (Tutorial (link), Code (link)). As far as I understand, they're defining POSITION as a 3-element array of float values:
// Define the input layout
D3D11_INPUT_ELEMENT_DESC layout[] =
{
{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
{ "COLOR", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};
Makes sense of course; each vertex position has 3 float values: x, y, and z. But now, looking at the vertex shader, position is suddenly of type float4, not float3:
//--------------------------------------------------------------------------------------
// Vertex Shader
//--------------------------------------------------------------------------------------
VS_OUTPUT VS( float4 Pos : POSITION, float4 Color : COLOR )
{
VS_OUTPUT output = (VS_OUTPUT)0;
output.Pos = mul( Pos, World );
output.Pos = mul( output.Pos, View );
output.Pos = mul( output.Pos, Projection );
output.Color = Color;
return output;
}
I'm aware a float4 is basically a homogeneous coordinate and is needed for the transformations. As this is a position, I'd expect the fourth value of Pos (Pos.w, if you will) to be 1.
But how exactly does this conversion work? I've just defined POSITION to be 3 floats in C++ code, and now I'm suddenly using a float4 in my vertex shader.
In my naivety, I would have expected one of two things to happen:
Either: Pos is initialized as a float4 with all zero elements, and then the first 3 elements are filled with the vertex coordinates. But this would result in the fourth coordinate w = 0, instead of 1.
Or: Since I've defined the "COLOR" input with AlignedByteOffset = 12, i.e. starting at a byte offset of 12, I could imagine that Pos[0] = first four bytes (vertex position x), Pos[1] = next 4 bytes (vertex.y), Pos[2] = next 4 bytes (vertex.z), and Pos[3] = next 4 bytes, which would be the first element of COLOR.
Why does neither of these alternatives/errors happen? How, and why, does DirectX convert my float3 coordinates automatically to a float4 with w=1?
Thanks!
The default values for vertex attribute components that are missing are (0,0,0,1).
MSDN confirms that's the case for D3D8 and D3D9, but I can't find an equivalent page confirming that the behaviour continued in 10 and 11. From experience, though, I can say that it did: missing X, Y and Z components are replaced with 0, and a missing W component is replaced with 1.
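Conceptually, the input assembler behaves as if it padded each attribute out to four components with the defaults (0, 0, 0, 1) before the shader ever sees it. A trivial sketch of that expansion:

```cpp
#include <array>
#include <cassert>

// What the input assembler effectively does when the shader declares
// float4 but the vertex format only supplies three floats: missing
// X/Y/Z components default to 0 and a missing W defaults to 1.
std::array<float, 4> expandPosition(const std::array<float, 3>& xyz) {
    return { xyz[0], xyz[1], xyz[2], 1.0f };
}
```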

D3D11/C++ Inaccuracies in uv interpolation in pixel shader. How to avoid?

I'm trying to draw a quad with a texture onto the screen such that texels and pixels perfectly align. Sounds pretty easy. I draw 2 triangles (as TRIANGLE_LIST, so 6 vertices) using these shaders:
struct VSOutput
{
float4 position : SV_POSITION;
float2 uv : TEXCOORD0;
};
VSOutput VS_Draw(uint index : SV_VertexId)
{
uint vertexIndex = index % 6;
// compute face in [0,0]-[1,1] space
float2 vertex = 0;
switch (vertexIndex)
{
case 0: vertex = float2(0, 0); break;
case 1: vertex = float2(1, 0); break;
case 2: vertex = float2(0, 1); break;
case 3: vertex = float2(0, 1); break;
case 4: vertex = float2(1, 0); break;
case 5: vertex = float2(1, 1); break;
}
// compute uv
float2 uv = vertex;
// scale to size
vertex = vertex * (float2)outputSize;
vertex = vertex + topLeftPos;
// convert to screen space
VSOutput output;
output.position = float4(vertex / (float2)outputSize * float2(2.0f, -2.0f) + float2(-1.0f, 1.0f), 0, 1);
output.uv = uv;
return output;
}
float4 PS_Draw(VSOutput input) : SV_TARGET
{
uint2 pixelPos = (uint2)(input.uv * (float2)outputSize);
// output checker of 4x4
return ((((pixelPos.x >> 2) & 1) ^ ((pixelPos.y >> 2) & 1)) != 0) ? float4(0, 1, 1, 0) : float4(1, 1, 0, 0);
}
where outputSize and topLeftPos are constants and expressed in pixel units.
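The screen-space conversion in VS_Draw is the usual pixel-to-NDC mapping for a top-left origin: x goes to [-1, 1] and y is flipped. As a self-contained check of that mapping:

```cpp
#include <array>
#include <cassert>

// Pixel coordinates (origin top-left, y down) to NDC (origin center, y up):
// the same mapping VS_Draw applies, xy / size * (2, -2) + (-1, 1).
std::array<float, 2> pixelToNdc(float x, float y, float width, float height) {
    return { x / width * 2.0f - 1.0f, 1.0f - y / height * 2.0f };
}
```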
Now for outputSize = (102,12) and topLeftPos=(0,0) I get (what I would expect):
link to image (as i'm not allowed to post images)
But for outputSize = (102,12) and topLeftPos=(0,0.5) I get: Output for x=0, y=0.5
link to image (as i'm not allowed to post images)
As you can see, there is a UV discontinuity where the 2 triangles connect, and the interpolation of uv is inaccurate. This basically happens (in x and y) only at positions around .5 (below .49 it correctly snaps to texel 0 and above .51 it correctly snaps to texel 1, but in between I get this artifact).
Now for the purpose I need this for it is essential to have pixel perfect mapping. Can anyone enlighten me why this happens ?
There are two things you need to consider to understand what is happening:
Pixel corners in window space have integer coordinates and pixel centers in windows space have half-integer coordinates.
When a triangle is rasterized, D3D11 interpolates all attributes to pixel centers.
So what is happening is this: when topLeftPos=(0,0), the value of input.uv * (float2)outputSize is always half-integer, and it is consistently rounded down to the closest integer. When topLeftPos=(0,0.5), however, (input.uv * (float2)outputSize).y should always be exactly an integer; due to floating-point precision, it sometimes comes out a little less than the exact integer, and in that case it is rounded down too far. This is where you see your stretched squares.
So if you want a perfect mapping, your source square should be aligned with pixel boundaries, not pixel centers.
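The knife-edge can be shown without D3D at all. For pixel row p the rasterizer samples at window-space y = p + 0.5, so the value the shader floors is (p + 0.5 − topLeftPos.y). With topLeftPos.y = 0 that is half-integer and floors robustly; with topLeftPos.y = 0.5 it is exactly an integer, and the tiniest interpolation error below the true value shifts the result by a whole texel (illustrated here with nextafter standing in for that error):

```cpp
#include <cassert>
#include <cmath>

// Texel row index the pixel shader computes for an interpolated value of
// uv.y * outputSize.y (which equals p + 0.5 - topLeftY for this quad).
int texelRow(float interpolated) {
    return static_cast<int>(std::floor(interpolated));
}
```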

hlsl unexpected acos result

I found a few strange HLSL bugs - or Pix is telling nonsense:
I have 2 orthogonal vectors: A = { 0.0f, -1.0f, 0.0f } and B = { 0.0f, 0.0f, 1.0f }.
If I use the HLSL dot function, the output is (-0.0f), which makes sense, BUT the acos of that output is -0.0000675917 (that's what PIX says, and what the shader outputs), which is not what I expected.
Even if I compute the dot product myself (A.x*B.x + A.y*B.y + etc.), the result is still 0.0f, but the acos of my result isn't zero.
I need the result of acos to be as precise as possible, because I want to color my vertices according to the angle between the triangle normal and a given vector.
float4 PS_MyPS(VS_OUTPUT input) : COLOR
{
float Light = saturate(dot(input.Normal, g_LightDir)) + saturate(dot(-input.Normal, g_LightDir)); // compute the lighting
if (dot(input.Vector, CameraDirection) < 0) // if the angle between the normal and the camera direction is greater than 90 degrees
{
input.Vector = -input.Vector; // use a mirrored normal
}
float angle = acos(0.0f) - acos(dot(input.Vector, Vector));
float4 Color;
if (angle > Angles.x) // Set the color according to the Angle
{
Color = Color1;
}
else if (angle > Angles.y)
{
Color = Color2;
}
else if (angle >= -abs(Angles.y))
{
Color = Color3;
}
else if (angle >= Angles.z)
{
Color = Color4;
}
else
{
Color = Color5;
}
return Light * Color;
}
It works fine for angles above 0.01 degrees, but gives wrong results for smaller values.
The other bugs I found: the HLSL length function returns 1 for the vector (0, -0, -0, 0) in PIX, and the HLSL function any on that vector returns true as well. This would mean that -0.0f != 0.0f.
Has anyone else encountered these and maybe has a workaround for my problem?
I tested it on an Intel HD Graphics 4600 and a Nvidia card with the same results.
One of the primary reasons acos may return bad results is that acos only accepts values between -1.0 and 1.0.
If the value exceeds that range even slightly (1.00001 instead of 1.0), it may return an incorrect result.
I deal with this problem by forced clamping, i.e. putting in a check:
if (something > 1.0)
    something = 1.0;
else if (something < -1.0)
    something = -1.0;
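In HLSL the same thing is the one-liner clamp(x, -1.0, 1.0) before the acos. A C++ sketch of the idea, showing that an out-of-range input produces NaN without the clamp:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// acos with its input clamped to the legal [-1, 1] domain, so values that
// drift out of range by floating-point error don't produce NaN. (std::clamp
// requires C++17.)
double safeAcos(double x) {
    return std::acos(std::clamp(x, -1.0, 1.0));
}
```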

DirectX Clip space texture coordinates

Okay, first up, I am using:
DirectX 10
C++
Okay, this is a bit of a bizarre one to me; I wouldn't usually ask the question, but I've been forced by circumstance. I have two full-screen triangles (not a quad, for reasons I won't go into!), aligned to the screen by virtue of not being transformed.
In the DirectX vertex declaration I am passing a 3-component float (Pos x,y,z) and a 2-component float (Texcoord x,y). Texcoord z is reserved for texture2D arrays, which I'm currently defaulting to 0 in the pixel shader.
I wrote this to achieve the simple task:
float fStartX = -1.0f;
float fEndX = 1.0f;
float fStartY = 1.0f;
float fEndY = -1.0f;
float fStartU = 0.0f;
float fEndU = 1.0f;
float fStartV = 0.0f;
float fEndV = 1.0f;
vmvUIVerts.push_back(CreateVertex(fStartX, fStartY, 0, fStartU, fStartV));
vmvUIVerts.push_back(CreateVertex(fEndX, fStartY, 0, fEndU, fStartV));
vmvUIVerts.push_back(CreateVertex(fEndX, fEndY, 0, fEndU, fEndV));
vmvUIVerts.push_back(CreateVertex(fStartX, fStartY, 0, fStartU, fStartV));
vmvUIVerts.push_back(CreateVertex(fEndX, fEndY, 0, fEndU, fEndV));
vmvUIVerts.push_back(CreateVertex(fStartX, fEndY, 0, fStartU, fEndV));
IA Layout: (Update)
D3D10_INPUT_ELEMENT_DESC ieDesc[2] = {
{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D10_INPUT_PER_VERTEX_DATA, 0 },
{ "TEXCOORD", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0,12, D3D10_INPUT_PER_VERTEX_DATA, 0 }
};
Data reaches the vertex shader in the following format: (Update)
struct VS_INPUT
{
float3 fPos :POSITION;
float3 fTexcoord :TEXCOORD0;
}
Within my vertex and pixel shaders not a lot is happening for this particular draw call; the pixel shader does most of the work, sampling from a texture using the specified UV coordinates. However, this isn't working quite as expected: it appears that I am getting only 1 pixel of the sampled texture.
The workaround was in the pixel shader to do the following: (Update)
sampler s0 : register(s0);
Texture2DArray<float4> meshTex : register(t0);
float4 psMain(in VS_OUTPUT vOut) : SV_TARGET
{
float4 Color;
vOut.fTexcoord.z = 0;
vOut.fTexcoord.x = vOut.fPosObj.x * 0.5f;
vOut.fTexcoord.y = vOut.fPosObj.y * 0.5f;
vOut.fTexcoord.x += 0.5f;
vOut.fTexcoord.y += 0.5f;
Color = meshTex.Sample(s0, vOut.fTexcoord);
Color.a = 1.0f;
return Color;
}
It was also worth noting that this worked with the following VS out struct defined in the shaders:
struct VS_OUTPUT
{
float4 fPos :POSITION0; // SV_POSITION wont work in this case
float3 fTexcoord :TEXCOORD0;
}
Now I have a texture that's stretched to fit the entire screen, both triangles already cover this, but why did the texture UV's not get used as expected?
To clarify, I am using a point sampler and have tried both clamp and wrap UV addressing modes.
I was a bit curious and found a solution / workaround mentioned above, however I'd prefer not to have to do it if anyone knows why it's happening?
What semantics are you specifying for your vertex type? Are they properly aligned with your vertices and also your shader? If you are using a D3DXVECTOR4, D3DXVECTOR3 setup (as shown in your VS code), this could be a problem if your CreateVertex() returns a D3DXVECTOR3, D3DXVECTOR2 struct.
It would be reassuring to see your pixel-shader code too.
Okay well, for one, texture coordinates outside the 0..1 range get clamped. I made the mistake of assuming that by going to clip space -1 to +1, the texture coordinates would too. This is not the case; they still go from 0.0 to 1.0.
The reason the code in the pixel shader worked is that it was using the clip-space x,y,z coordinates to overwrite these texture coordinates; an oversight on my part. However, the pixel-shader code results in texture stretch on a full-screen 'quad', so it might be useful to someone somewhere ;)
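For completeness, the mapping you would normally bake into the vertex data (or compute in the vertex shader) from clip-space xy to UV is a scale-and-bias with V flipped, since clip space is y-up while texture space is y-down. A sketch of that mapping:

```cpp
#include <array>
#include <cassert>

// Clip-space xy in [-1, 1] (y up) to texture UV in [0, 1] (v down).
std::array<float, 2> clipToUV(float x, float y) {
    return { x * 0.5f + 0.5f, 0.5f - y * 0.5f };
}
```

The top-left corner (-1, 1) maps to UV (0, 0) and the bottom-right corner (1, -1) to (1, 1), which is what a full-screen quad wants.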