D3D11 - passing blend weights and indices to vertex shader - c++

I'm trying to pass blend weights and indices to my vertex shader as float4s; the struct that holds the data for each vertex is as follows in C++:
struct VertexType_Skin {
    XMFLOAT3 position;
    XMFLOAT2 texture;
    XMFLOAT3 normal;
    XMFLOAT4 boneIds;
    XMFLOAT4 boneWeights;
};
and in the HLSL vertex shader:
struct VS_IN {
    float3 position : POSITION;
    float2 tex : TEXCOORD0;
    float3 normal : NORMAL;
    float4 boneIds : BLENDINDICES;
    float4 boneWeights : BLENDWEIGHT;
};
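For reference, the D3D11 input layout that matches this vertex format needs offsets that mirror the C++ struct; a minimal sketch (the element array and the vsBytecode/inputLayout names are assumptions, not taken from the question):
// Input layout mirroring VertexType_Skin (total stride = 64 bytes).
D3D11_INPUT_ELEMENT_DESC layout[] = {
    { "POSITION",     0, DXGI_FORMAT_R32G32B32_FLOAT,    0,  0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD",     0, DXGI_FORMAT_R32G32_FLOAT,       0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "NORMAL",       0, DXGI_FORMAT_R32G32B32_FLOAT,    0, 20, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "BLENDINDICES", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 32, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "BLENDWEIGHT",  0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 48, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};
device->CreateInputLayout(layout, ARRAYSIZE(layout),
                          vsBytecode, vsBytecodeSize, &inputLayout);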
I'm setting up the vertex buffer as follows in C++:
D3D11_BUFFER_DESC vertexBufferDesc = { sizeof(VertexType_Skin) * vertexCount, D3D11_USAGE_DEFAULT, D3D11_BIND_VERTEX_BUFFER, 0, 0, 0 };
vertexData = { skinVertices, 0, 0 };
device->CreateBuffer(&vertexBufferDesc, &vertexData, &vertexBuffer);
Now I'm not sure why, but doing this doesn't seem to render my mesh at all. Commenting out the float4s in both structs works fine (I'm not using the ids or weights yet, just trying to pass them).
Is there anything obvious I'm missing in this setup?
Thanks!

Nevermind, I figured it out. I don't know if this can ever help anyone because it's an oddly specific problem, but here goes anyway.
I was sending the mesh data like this each frame (because it was in a base class):
unsigned int stride = sizeof(VertexType);//<-- this struct only contains position, uvs and normals
unsigned int offset = 0;
deviceContext->IASetVertexBuffers(0, 1, &vertexBuffer, &stride, &offset);
deviceContext->IASetIndexBuffer(indexBuffer, DXGI_FORMAT_R32_UINT, 0);
deviceContext->IASetPrimitiveTopology(top);
So doing this instead solved the issue:
unsigned int stride = sizeof(VertexType_Skin);//<-- the updated struct with bone indices and weights
unsigned int offset = 0;
deviceContext->IASetVertexBuffers(0, 1, &vertexBuffer, &stride, &offset);
deviceContext->IASetIndexBuffer(indexBuffer, DXGI_FORMAT_R32_UINT, 0);
deviceContext->IASetPrimitiveTopology(top);
Ta-dah, easy as pie! Cheers :)
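As an afterthought, one cheap way to guard against this kind of slip in a base/derived mesh setup is to let each mesh type report its own stride instead of hard-coding sizeof(VertexType) in the shared send function; a rough sketch (the method name is made up, not from the original code):
// Base mesh: stride of the basic vertex layout.
virtual unsigned int GetVertexStride() const { return sizeof(VertexType); }

// Skinned mesh override: stride of the layout with bone ids/weights.
unsigned int GetVertexStride() const override { return sizeof(VertexType_Skin); }

// The shared send code then uses:
// unsigned int stride = GetVertexStride();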

Related

OpenGL horizontal pixel pairs drawn swapped

I have a problem that is extremely similar to the one described in OpenGL pixels drawn with each horizontal pair swapped. The main difference is that I'm getting this distortion even when I feed the texture one-byte red-only values.
EDIT: On closer inspection of normal textures, I have discovered that this problem manifests when rendering any 2D texture. I tried rotating the resulting texture by swapping the texture coordinates. The resulting picture still has its horizontal pixel pairs swapped, so I'm assuming that the data in the texture is good and the distortion occurs when rendering the texture.
Here are the relevant parts of the code:
C++:
struct coord_t { float x; float y; };

GLint loc = glGetAttribLocation(program, "coord");
if (loc != -1) {
    glVertexAttribPointer(loc, 2, GL_FLOAT, GL_FALSE,
        sizeof(coord_t), reinterpret_cast<void *>(offsetof(coord_t, x)));
    glEnableVertexAttribArray(loc);
}
loc = glGetAttribLocation(program, "tex_coord");
if (loc != -1) {
    glVertexAttribPointer(loc, 2, GL_FLOAT, GL_FALSE, sizeof(coord_t),
        reinterpret_cast<void *>(4 * sizeof(coord_t) + offsetof(coord_t, x)));
    glEnableVertexAttribArray(loc);
}
// ... Texture binding to GL_TEXTURE_2D ...
coord_t pos[] = {coord_t{-1.f, -1.f}, coord_t{1.f, -1.f},
                 coord_t{-1.f, 1.f}, coord_t{1.f, 1.f}
};
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(pos), pos);           // position
glBufferSubData(GL_ARRAY_BUFFER, sizeof(pos), sizeof(pos), pos); // texture coordinates
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Corresponding vertex shader:
#version 110
attribute vec2 coord;
attribute vec2 tex_coord;
varying vec2 tex_out;
void main(void) {
    gl_Position = vec4(coord.xy, 0.0, 1.0);
    tex_out = tex_coord;
}
Corresponding fragment shader:
#version 110
uniform sampler2D my_texture;
varying vec2 tex_out;
void main(void) {
gl_FragColor = texture(my_texture, tex_out);
}
After extensive code investigation, I managed to find the culprit.
I was setting the blending function incorrectly, using GL_SRC1_ALPHA and GL_ONE_MINUS_SRC1_ALPHA instead of GL_SRC_ALPHA and GL_ONE_MINUS_SRC_ALPHA.
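For anyone hitting the same thing: GL_SRC1_ALPHA belongs to dual-source blending (it reads a second fragment-shader output), so using it without binding that second output gives undefined results. The standard alpha-blend setup is just the single-source enums; a sketch of the corrected calls:
// Standard alpha blending with the single-source factors.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);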

Constant buffer receives wrong value

I have got this HLSL struct, and when I pass a Material buffer from C++ to HLSL, some values in my struct are wrong. I know this because I have tried, for example, setting the emissive color to Vector4(0, 1, 0, 1), i.e. GREEN, and in my pixel shader I return only the emissive color, yet the result comes out BLUE!
And when I set the emissive to (1, 0, 0, 1), i.e. RED, the pixel shader outputs GREEN. So it seems like everything is shifted 8 bytes to the right. What might be the reason behind this?
EDIT: I noticed that my virtual destructor made my structure bigger than usual. I removed that and then it worked!
HLSL struct
struct Material
{
    float4 emissive;
    float4 ambient;
    float4 diffuse;
    float4 specular;
    float specularPower;
    bool useTexture;
    float2 padding;
};
C++ struct
class Material
{
    virtual ~Material(); // - I removed this!
    helium::Vector4 m_emissive;
    helium::Vector4 m_ambient;
    helium::Vector4 m_diffuse;
    helium::Vector4 m_specular;
    float m_specularPower;
    int m_useTexture;
    float m_padding[2];
};
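The reason is that the virtual destructor adds a hidden vtable pointer at the front of every Material object (8 bytes on x64), so all members land 8 bytes later than the HLSL layout expects. A compile-time size check is a cheap guard against this; a sketch, assuming helium::Vector4 is four floats:
// The HLSL cbuffer side is 4*16 + 4 + 4 + 8 = 80 bytes; any hidden vptr or
// padding change in the C++ mirror will trip this assert.
static_assert(sizeof(Material) == 80,
              "Material must match the 80-byte HLSL constant buffer layout");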

Background face is visible over foreground face in same mesh while using a diffuse shader in DirectX

I am trying to create a simple diffuse shader to paint primitive objects in DirectX 9 and have run into the following problem: when I use a DirectX primitive object like a Torus or Teapot, some faces in the foreground part of the mesh are invisible. I don't think this is simply back faces being culled, as I cannot reproduce the behavior with primitive objects like a Sphere or Box, where no two quads have the same normal. Following are some screenshots in fill and wire-frame modes.
(screenshot: torus in fill mode)
Following is my vertex declaration code.
// vertex position...
D3DVERTEXELEMENT9 element;
element.Stream = 0;
element.Offset = 0;
element.Type = D3DDECLTYPE_FLOAT3;
element.Method = D3DDECLMETHOD_DEFAULT;
element.Usage = D3DDECLUSAGE_POSITION;
element.UsageIndex = 0;
m_vertexElement.push_back(element);
// vertex normal
element.Stream = 0;
element.Offset = 12; //3 floats * 4 bytes per float
element.Type = D3DDECLTYPE_FLOAT3;
element.Method = D3DDECLMETHOD_DEFAULT;
element.Usage = D3DDECLUSAGE_NORMAL;
element.UsageIndex = 0;
m_vertexElement.push_back(element);
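For context, the element vector is presumably terminated with D3DDECL_END() and turned into a vertex declaration roughly like this (a sketch; the pDecl name and the surrounding calls are assumptions, not shown in the question):
// Close the element list and build the declaration from it.
D3DVERTEXELEMENT9 endElement = D3DDECL_END();
m_vertexElement.push_back(endElement);

IDirect3DVertexDeclaration9* pDecl = NULL;
device->CreateVertexDeclaration(&m_vertexElement[0], &pDecl);
device->SetVertexDeclaration(pDecl);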
And shader code in development.
float4x4 MatWorld : register(c0);
float4x4 MatViewProj : register(c4);
float4 matColor : register(c0);
struct VS_INPUT
{
    float4 Position : POSITION;
    float3 Normal : NORMAL;
};
struct VS_OUTPUT
{
    float4 Position : POSITION;
    float3 Normal : TEXCOORD0;
};
struct PS_OUTPUT
{
    float4 Color : COLOR0;
};
VS_OUTPUT vsmain(in VS_INPUT In)
{
    VS_OUTPUT Out;
    float4 wpos = mul(In.Position, MatWorld);
    Out.Position = mul(wpos, MatViewProj);
    Out.Normal = normalize(mul(In.Normal, MatWorld));
    return Out;
}
PS_OUTPUT psmain(in VS_OUTPUT In)
{
    PS_OUTPUT Out;
    float4 ambient = {0.1, 0.0, 0.0, 1.0};
    float3 light = {1, 0, 0};
    Out.Color = ambient + matColor * saturate(dot(light, In.Normal));
    return Out;
}
I have also tried setting different render states for Depth-Stencil but wasn't successful.
I figured it out! This is a depth buffer (Z-buffer) issue; you can enable the Z-buffer in your code, either through the fixed pipeline or in the shader.
To enable the Z-buffer in the fixed pipeline:
First, add the following code when creating the D3D device:
d3dpp.EnableAutoDepthStencil = TRUE;
d3dpp.AutoDepthStencilFormat = D3DFMT_D16;
Then enable the Z-buffer before drawing:
device->SetRenderState(D3DRS_ZENABLE, TRUE);
Finally, clear the Z-buffer in the render function:
device->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, D3DCOLOR_XRGB(0,0,0), 1.0f, 0);
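And if the project draws through the D3DX effect framework, the equivalent depth states can be set from the technique itself rather than the fixed pipeline; a sketch using the question's shader entry points (the technique name and shader models are assumptions):
technique Diffuse
{
    pass P0
    {
        // Depth states set from the effect instead of SetRenderState.
        ZEnable      = TRUE;
        ZWriteEnable = TRUE;
        VertexShader = compile vs_2_0 vsmain();
        PixelShader  = compile ps_2_0 psmain();
    }
}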

Direct X & HLSL - Normal ReCalculation

I'm using HLSL and DirectX 9. I'm trying to recalculate the normals of a mesh so that HLSL receives updated normals as a result of transforming the mesh. What is the best method to do this? Also, D3DXComputeNormals will not work for me because I do not use an FVF with D3DFVF_NORMAL; I declare my vertex format with a declaration like so:
const D3DVERTEXELEMENT9 dec[4] =
{
{0, 0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION,0},
{0, 12, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_NORMAL, 0},
{0, 24, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD,0},
D3DDECL_END()
};
I know how to access the adjacency data and vertex buffers, but I'm not sure what method to use in order to properly associate a vertex and its normal with a face. Any help would be greatly appreciated. Thanks!
It's not a good idea to update the normals on the CPU and send them to the GPU each frame; that would hurt performance. What you should really do is calculate the transformed normals in the vertex shader, just as you do with the positions. The HLSL code would look like this:
float4x4 mWorldView;     // World * View
float4x4 mWorldViewProj; // World * View * Proj

struct VS_OUTPUT
{
    float4 position : POSITION;
    float2 tex : TEXCOORD0;
    float3 normalVS : TEXCOORD1; // view-space normal
};

// Vertex Shader
VS_OUTPUT VS(float3 position : POSITION,
             float3 normal : NORMAL,
             float2 tex : TEXCOORD0)
{
    VS_OUTPUT output; // note: 'out' is a reserved word in HLSL, so use another name
    // transform the position
    output.position = mul(float4(position, 1), mWorldViewProj);
    // pass the texture coordinates to the pixel shader
    output.tex = tex;
    // calculate the transformed (view-space) normal
    float3 n = mul(normal, (float3x3)mWorldView);
    // and use it for vertex lighting
    // ... some shading calculations ...
    // or pass it to the pixel shader and perform per-pixel lighting
    output.normalVS = n;
    // output
    return output;
}
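And if the view-space normal is passed through as above, the per-pixel version of the lighting could look roughly like this (the light-direction and colour uniforms are assumptions for illustration, not part of the original answer):
float3 lightDirVS;   // light direction in view space
float4 diffuseCol;   // material diffuse colour

// Pixel shader: basic Lambert shading with the interpolated normal
float4 PS(float2 tex : TEXCOORD0, float3 normalVS : TEXCOORD1) : COLOR0
{
    float3 n = normalize(normalVS);           // re-normalize after interpolation
    float  d = saturate(dot(n, -lightDirVS)); // Lambert term
    return diffuseCol * d;
}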

temperamental ID3D10EffectVectorVariable

I am setting an HLSL effect variable in the following way in a number of places.
extern ID3D10EffectVectorVariable* pColour;
pColour = pEffect->GetVariableByName("Colour")->AsVector();
pColour->SetFloatVector(temporaryLines[i].colour);
In one of the places it is set in a loop; each line in the vector temporaryLines has a D3DXCOLOR variable associated with it. The most annoying thing about this problem is that it actually works on rare occasions, but most of the time it doesn't. Are there any known issues with this kind of code?
Here it works:
void GameObject::Draw(D3DMATRIX matView, D3DMATRIX matProjection)
{
    device->IASetInputLayout(pVertexLayout);
    mesh.SetTopology(); // TODO should not be done multiple times

    // select which vertex buffer and index buffer to display
    UINT stride = sizeof(VERTEX);
    UINT offset = 0;
    device->IASetVertexBuffers(0, 1, mesh.PBuffer(), &stride, &offset);
    device->IASetIndexBuffer(mesh.IBuffer(), DXGI_FORMAT_R32_UINT, 0);

    pColour->SetFloatVector(colour);

    // create a scale matrix
    D3DXMatrixScaling(&matScale, scale.x, scale.y, scale.z);
    // create a rotation matrix
    D3DXMatrixRotationYawPitchRoll(&matRotate, rotation.y, rotation.x, rotation.z);
    // create a position matrix
    D3DXMatrixTranslation(&matTranslation, position.x, position.y, position.z);

    // combine the matrices and render
    matFinal = matScale * matRotate * matTranslation * matView * matProjection;
    pTransform->SetMatrix(&matFinal._11);
    pRotation->SetMatrix(&matRotate._11); // set the rotation matrix in the effect
    pPass->Apply(0);
    device->DrawIndexed(mesh.Indices(), 0, 0); // input specific
}
Here it occasionally works:
void BatchLineRenderer::RenderLines(D3DXMATRIX matView, D3DXMATRIX matProjection)
{
    device->IASetInputLayout(pVertexLayout);
    device->IASetPrimitiveTopology(D3D10_PRIMITIVE_TOPOLOGY_LINESTRIP);

    // select which vertex buffer and index buffer to display
    UINT stride = sizeof(LINE);
    UINT offset = 0;
    device->IASetVertexBuffers(0, 1, &pBuffer, &stride, &offset);
    device->IASetIndexBuffer(iBuffer, DXGI_FORMAT_R32_UINT, 0);

    allLines = temporaryLines.size();
    for (int i = 0; i < allLines; i++)
    {
        pColour->SetFloatVector(temporaryLines[i].colour); // in the line loop too?

        // combine the matrices and render
        D3DXMATRIX matFinal =
            temporaryLines[i].scale *
            temporaryLines[i].rotation *
            temporaryLines[i].position *
            matView * matProjection;
        pTransform->SetMatrix(&matFinal._11);
        pRotation->SetMatrix(&temporaryLines[i].rotation._11); // set the rotation matrix in the effect
        pPass->Apply(0);
        device->DrawIndexed(2, 0, 0);
    }
    temporaryLines.clear();
}
the effect file:
float4x4 Transform; // a matrix to store the transform
float4x4 Rotation;  // a matrix to store the rotation transform
float4 LightVec = {0.612f, 0.3535f, 0.612f, 0.0f}; // the light's vector
float4 LightCol = {1.0f, 1.0f, 1.0f, 1.0f};        // the light's color
float4 AmbientCol = {0.3f, 0.3f, 0.3f, 1.0f};      // the ambient light's color
float4 Colour;

// a struct for the vertex shader return value
struct VSOut
{
    float4 Col : COLOR;       // vertex colour
    float4 Pos : SV_POSITION; // vertex screen coordinates
};

// the vertex shader
VSOut VS(float4 Norm : NORMAL, float4 Pos : POSITION)
{
    VSOut Output;
    Output.Pos = mul(Pos, Transform); // transform the vertex from 3D to 2D
    Output.Col = AmbientCol;          // start with the ambient colour
    float4 Normal = mul(Norm, Rotation);
    Output.Col += saturate(dot(Normal, LightVec)) * LightCol * Colour; // add the diffuse and passed-in colour
    return Output; // send the modified vertex data to the Rasterizer Stage
}

// the pixel shader
float4 PS(float4 Col : COLOR) : SV_TARGET
{
    return Col; // set the pixel color to the color passed in by the Rasterizer Stage
}

// the primary technique
technique10 Technique_0
{
    // the primary pass
    pass Pass_0
    {
        SetVertexShader(CompileShader(vs_4_0, VS()));
        SetGeometryShader(NULL);
        SetPixelShader(CompileShader(ps_4_0, PS()));
    }
}
So the Colour HLSL variable has not been defined inside a constant buffer; it's just a plain shader variable.
Perhaps the variable should instead be defined in a constant buffer that is updated per frame, similar to how the world and view matrices are usually defined. At least then the GPU knows you want to update the colour variable each time you render (since you are updating the value before you draw).
cbuffer cbChangesEveryFrame
{
    // The MVP matrices.
    matrix World;
    matrix View;
    float4 Colour;
}
Another point I would consider is to get the technique description every time before the draw call (or the pass loop), rather than reusing it; that also seems to make a difference.
// Initiate the pass loop for the shader effect.
technique->GetDesc(&desc);
for (UINT p = 0; p < desc.Passes; p++)
{
    // Apply this pass.
    technique->GetPassByIndex(p)->Apply(0);
    // draw indexed, instanced
    device->device->DrawIndexedInstanced(indicesCount, (UINT)instanceCount, 0, 0, 0);
}