How does one set an array in hlsl? - hlsl

In GLSL, array = int[8]( 0, 0, 0, 0, 0, 0, 0, 0 ); works fine, but in HLSL this doesn't seem to be the case. None of the guides I've found mention how to do it. What exactly am I meant to do?

For example, like this:
int array[8] = { 0, 0, 0, 0, 0, 0, 0, 0 };
Edit:
Ah, you mean array assignment. It seems that is not possible for now. Besides trying every sensible option, I also cross-compiled simple GLSL code to HLSL using glslcc (which uses spirv-cross).
GLSL code:
#version 450
layout (location = 0) out vec4 fragColor;
void main()
{
int array[4] = {0, 0, 0, 0};
array = int[4]( 1, 0, 1, 0);
fragColor = vec4(array[0], array[1], array[2], array[3]);
}
HLSL code:
static const int _13[4] = { 0, 0, 0, 0 };
static const int _15[4] = { 1, 0, 1, 0 };
static float4 fragColor;
struct SPIRV_Cross_Output
{
float4 fragColor : SV_Target0;
};
void frag_main()
{
int array[4] = _13;
array = _15;
fragColor = float4(float(array[0]), float(array[1]), float(array[2]), float(array[3]));
}
SPIRV_Cross_Output main()
{
frag_main();
SPIRV_Cross_Output stage_output;
stage_output.fragColor = fragColor;
return stage_output;
}
As you can see, the equivalent HLSL code uses a static const array and then assigns it, since that kind of array assignment is allowed in HLSL (and, unlike in C/C++, makes a deep copy).
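For comparison, a small C++ sketch (not part of the question): raw arrays cannot be assigned at all in C/C++, while the HLSL assignment above behaves more like std::array, copying every element:
#include <array>
int main()
{
    int a[4] = { 0, 0, 0, 0 };
    int b[4] = { 1, 0, 1, 0 };
    // a = b;                      // error: raw arrays are not assignable in C/C++
    std::array<int, 4> c = { 0, 0, 0, 0 };
    std::array<int, 4> d = { 1, 0, 1, 0 };
    c = d;                         // fine: copies all four elements, like the HLSL "array = _15;"
    return c[0] + a[0] + b[0];     // use the variables so the sketch compiles cleanly
}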

Related

Putting data onto different textures in WebGL and getting it out

I am trying to output more than one buffer from a shader - the general goal is to use it for GPGPU purposes. I've looked at this answer and got closer to the goal with this:
document.addEventListener("DOMContentLoaded", function() {
function main() {
const gl = document.querySelector('canvas').getContext('webgl2');
if (!gl) {
return alert("need WebGL2");
}
gl.canvas.width = 2;
gl.canvas.height = 2;
const vs = `
#version 300 es
in vec2 position;
void main(void) {
gl_Position = vec4(0.0, 0.0, 0.0, 1.0);
}
`;
const fs = `
#version 300 es
precision mediump float;
layout(location = 0) out vec4 outColor0;
layout(location = 1) out vec4 outColor1;
layout(location = 2) out vec4 outColor2;
layout(location = 3) out vec4 outColor3;
layout(location = 4) out vec4 outColor4;
layout(location = 5) out vec4 outColor5;
void main() {
// simplified for question purposes
outColor0 = vec4(1, 0, 0, 1);
outColor1 = vec4(0, 1, 0, 1);
outColor2 = vec4(0, 0, 1, 1);
outColor3 = vec4(1, 1, 0, 1);
outColor4 = vec4(1, 0, 1, 1);
outColor5 = vec4(0, 1, 1, 1);
}
`
const program = twgl.createProgram(gl, [vs, fs]);
const textures = [];
const fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
for (let i = 0; i < 6; ++i) {
const tex = gl.createTexture();
textures.push(tex);
gl.bindTexture(gl.TEXTURE_2D, tex);
const width = 2;
const height = 2;
const level = 0;
gl.texImage2D(gl.TEXTURE_2D, level, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
// attach texture to framebuffer
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0 + i, gl.TEXTURE_2D, tex, level);
}
gl.viewport(0, 0, 2, 2);
// tell it we want to draw to all 6 attachments
gl.drawBuffers([
gl.COLOR_ATTACHMENT0,
gl.COLOR_ATTACHMENT1,
gl.COLOR_ATTACHMENT2,
gl.COLOR_ATTACHMENT3,
gl.COLOR_ATTACHMENT4,
gl.COLOR_ATTACHMENT5,
]);
// draw a single point
gl.useProgram(program);
{
const offset = 0;
const count = 1
gl.drawArrays(gl.TRIANGLE, 0, 4);
}
for (var l = 0; l < 6; l++) {
var pixels = new Uint8Array(gl.canvas.width * gl.canvas.height * 4);
gl.readBuffer(gl.COLOR_ATTACHMENT0 + l);
gl.readPixels(0, 0, gl.canvas.width, gl.canvas.height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
console.log(pixels.join(' '));
}
}
main();
})
However, the result is that only one pixel in each buffer gets set, so the output is:
0 0 0 0 255 0 0 255 0 0 0 0 0 0 0 0
0 0 0 0 0 255 0 255 0 0 0 0 0 0 0 0
0 0 0 0 0 0 255 255 0 0 0 0 0 0 0 0
0 0 0 0 255 255 0 255 0 0 0 0 0 0 0 0
0 0 0 0 255 0 255 255 0 0 0 0 0 0 0 0
0 0 0 0 0 255 255 255 0 0 0 0 0 0 0 0
rather than what I was hoping/expecting:
255 0 0 255 255 0 0 255 255 0 0 255 255 0 0 255
etc.
I was expecting that
outColor0 = vec4(1, 0, 0, 1);
is equivalent to
gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
but clearly I am wrong.
So how do I get to the desired outcome - to be able to set each pixel on each of the buffers?
The code does not provide any vertex data even though it asks to draw 4 vertices. Further, it passes gl.TRIANGLE, which doesn't exist; it's gl.TRIANGLES with an S at the end. gl.TRIANGLE is undefined, which gets coerced to 0, which matches gl.POINTS.
In the JavaScript console
> const gl = document.createElement('canvas').getContext('webgl2');
< undefined
> gl.TRIANGLE
< undefined
> gl.TRIANGLES
< 4
> gl.POINTS
< 0
To put it another way, all the gl.CONSTANTS are just integer values. Instead of
gl.drawArrays(gl.TRIANGLES, offset, count)
you can just do this
gl.drawArrays(4, offset, count)
because gl.TRIANGLES = 4.
But you didn't use gl.TRIANGLES, you used gl.TRIANGLE (no S), so you effectively did this
gl.drawArrays(undefined, offset, count)
that was interpreted as
gl.drawArrays(0, offset, count)
0 = gl.POINTS so that's the same as
gl.drawArrays(gl.POINTS, offset, count)
The code then draws a single 1 pixel point 4 times at the same location because you called it with a count of 4
gl.drawArrays(gl.POINTS, 0, 4)
Nothing in your vertex shader changes each iteration so every iteration is going to do exactly the same thing. In this case it's going to draw a 1x1 pixel POINT at clip space position 0,0,0,1 which will end up being the bottom left pixel of the 2x2 pixels.
In any case you probably want to provide vertices, but as a simple test, if I add
gl_PointSize = 2.0;
to the vertex shader and change the draw call to
gl.drawArrays(gl.POINTS, 0, 1); // draw 1 point
Then it produces the results you expect. It draws a single 2x2 pixel POINT at clip space position 0,0,0,1
function main() {
const gl = document.querySelector('canvas').getContext('webgl2');
if (!gl) {
return alert("need WebGL2");
}
gl.canvas.width = 2;
gl.canvas.height = 2;
const vs = `
#version 300 es
in vec2 position;
void main(void) {
gl_Position = vec4(0.0, 0.0, 0.0, 1.0);
gl_PointSize = 2.0;
}
`;
const fs = `
#version 300 es
precision mediump float;
layout(location = 0) out vec4 outColor0;
layout(location = 1) out vec4 outColor1;
layout(location = 2) out vec4 outColor2;
layout(location = 3) out vec4 outColor3;
layout(location = 4) out vec4 outColor4;
layout(location = 5) out vec4 outColor5;
void main() {
// simplified for question purposes
outColor0 = vec4(1, 0, 0, 1);
outColor1 = vec4(0, 1, 0, 1);
outColor2 = vec4(0, 0, 1, 1);
outColor3 = vec4(1, 1, 0, 1);
outColor4 = vec4(1, 0, 1, 1);
outColor5 = vec4(0, 1, 1, 1);
}
`
const program = twgl.createProgram(gl, [vs, fs]);
const textures = [];
const fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
for (let i = 0; i < 6; ++i) {
const tex = gl.createTexture();
textures.push(tex);
gl.bindTexture(gl.TEXTURE_2D, tex);
const width = 2;
const height = 2;
const level = 0;
gl.texImage2D(gl.TEXTURE_2D, level, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
// attach texture to framebuffer
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0 + i, gl.TEXTURE_2D, tex, level);
}
gl.viewport(0, 0, 2, 2);
// tell it we want to draw to all 6 attachments
gl.drawBuffers([
gl.COLOR_ATTACHMENT0,
gl.COLOR_ATTACHMENT1,
gl.COLOR_ATTACHMENT2,
gl.COLOR_ATTACHMENT3,
gl.COLOR_ATTACHMENT4,
gl.COLOR_ATTACHMENT5,
]);
// draw a single point
gl.useProgram(program); {
const offset = 0;
const count = 1
gl.drawArrays(gl.POINTS, 0, 1);
}
for (var l = 0; l < 6; l++) {
var pixels = new Uint8Array(gl.canvas.width * gl.canvas.height * 4);
gl.readBuffer(gl.COLOR_ATTACHMENT0 + l);
gl.readPixels(0, 0, gl.canvas.width, gl.canvas.height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
console.log(pixels.join(' '));
}
}
main();
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<canvas></canvas>
You can try using webgl-lint, which, if run with your original code, will at least complain
Uncaught Error: https://greggman.github.io/webgl-lint/webgl-lint.js:2942: error in drawArrays(/UNKNOWN WebGL ENUM/ undefined, 0, 4): argument 0 is undefined
with WebGLProgram("unnamed") as current program
with the default vertex array bound
function main() {
const gl = document.querySelector('canvas').getContext('webgl2');
if (!gl) {
return alert("need WebGL2");
}
gl.canvas.width = 2;
gl.canvas.height = 2;
const vs = `
#version 300 es
in vec2 position;
void main(void) {
gl_Position = vec4(0.0, 0.0, 0.0, 1.0);
}
`;
const fs = `
#version 300 es
precision mediump float;
layout(location = 0) out vec4 outColor0;
layout(location = 1) out vec4 outColor1;
layout(location = 2) out vec4 outColor2;
layout(location = 3) out vec4 outColor3;
layout(location = 4) out vec4 outColor4;
layout(location = 5) out vec4 outColor5;
void main() {
// simplified for question purposes
outColor0 = vec4(1, 0, 0, 1);
outColor1 = vec4(0, 1, 0, 1);
outColor2 = vec4(0, 0, 1, 1);
outColor3 = vec4(1, 1, 0, 1);
outColor4 = vec4(1, 0, 1, 1);
outColor5 = vec4(0, 1, 1, 1);
}
`
const program = twgl.createProgram(gl, [vs, fs]);
const textures = [];
const fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
for (let i = 0; i < 6; ++i) {
const tex = gl.createTexture();
textures.push(tex);
gl.bindTexture(gl.TEXTURE_2D, tex);
const width = 2;
const height = 2;
const level = 0;
gl.texImage2D(gl.TEXTURE_2D, level, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
// attach texture to framebuffer
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0 + i, gl.TEXTURE_2D, tex, level);
}
gl.viewport(0, 0, 2, 2);
// tell it we want to draw to all 6 attachments
gl.drawBuffers([
gl.COLOR_ATTACHMENT0,
gl.COLOR_ATTACHMENT1,
gl.COLOR_ATTACHMENT2,
gl.COLOR_ATTACHMENT3,
gl.COLOR_ATTACHMENT4,
gl.COLOR_ATTACHMENT5,
]);
// draw a single point
gl.useProgram(program); {
const offset = 0;
const count = 1
gl.drawArrays(gl.TRIANGLE, 0, 4);
}
for (var l = 0; l < 6; l++) {
var pixels = new Uint8Array(gl.canvas.width * gl.canvas.height * 4);
gl.readBuffer(gl.COLOR_ATTACHMENT0 + l);
gl.readPixels(0, 0, gl.canvas.width, gl.canvas.height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
console.log(pixels.join(' '));
}
}
main();
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<canvas></canvas>
<script src="https://greggman.github.io/webgl-lint/webgl-lint.js" crossorigin="anonymous"></script>

How to establish glBindBufferRange() offset with Shader Storage Buffer and std430?

I want to switch between SSBO data blocks to draw things with different setups. To make that happen I need to use glBindBufferRange() with a suitable offset.
I've read that the offset needs to be a multiple of GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT for a UBO, but things may be different for an SSBO, since it uses std430 instead of std140.
I tried to do this the easiest way:
struct Color
{
float r, g, b, a;
};
struct V2
{
float x, y;
};
struct Uniform
{
Color c1;
Color c2;
V2 v2;
float r;
float f;
int t;
};
GLuint ssbo = 0;
std::vector<Uniform> uniform;
int main()
{
//create window, context etc.
glCreateBuffers(1, &ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
Uniform u;
u.c1 = {255, 0, 255, 255 };
u.c2 = {255, 0, 255, 255 };
u.v2 = { 0.0f, 0.0f };
u.r = 0.0f;
u.f = 100.0f;
u.t = 0;
uniform.push_back(u);
u.c1 = {255, 255, 0, 255 };
u.c2 = {255, 255, 0, 255 };
u.v2 = { 0.0f, 0.0f };
u.r = 100.0f;
u.f = 100.0f;
u.t = 1;
uniform.push_back(u);
u.c1 = {255, 0, 0, 255 };
u.c2 = {255, 0, 0, 255 };
u.v2 = { 0.0f, 0.0f };
u.r = 100.0f;
u.f = 0.0f;
u.t = 0;
uniform.push_back(u);
glNamedBufferData(ssbo, sizeof(Uniform) * uniform.size(), uniform.data(), GL_STREAM_DRAW);
for(int i = 0; i < uniform.size(); ++i) {
glBindBufferRange(GL_SHADER_STORAGE_BUFFER, 1, ssbo, sizeof(Uniform) * i, sizeof(Uniform));
glDrawArrays(...);
}
//swap buffer etc.
return 0;
}
#version 460 core
layout(location = 0) out vec4 f_color;
layout(std430, binding = 1) buffer Unif
{
vec4 c1;
vec4 c2;
vec2 v2;
float r;
float f;
int t;
};
void main()
{
f_color = vec4(t, 0, 0, 1);
}
There are of course a VAO, VBO, vertex struct and so on, but they do not affect the SSBO.
I got a GL_INVALID_VALUE error from glBindBufferRange(), though. And that must come from the offset, because my next attempt transfers the data, but in the wrong order.
My next attempt was to use GL_SHADER_STORAGE_BUFFER_OFFSET_ALIGNMENT
and a formula I found on the Internet
int align = 4;
glGetIntegerv(GL_SHADER_STORAGE_BUFFER_OFFSET_ALIGNMENT, &align);
int ssboSize = sizeof(Uniform) + align - sizeof(Uniform) % align;
so, just changing glNamedBufferData and glBindBufferRange, it looks like this:
glNamedBufferData(ssbo, ssboSize * uniform.size(), uniform.data(), GL_STREAM_DRAW);
glBindBufferRange(GL_SHADER_STORAGE_BUFFER, 1, ssbo, ssboSize * i, sizeof(Uniform));
and that way it almost worked. As you can see, the t values are
0;
1;
0;
so OpenGL should draw 3 shapes with the colors
vec4(0, 0, 0, 1);
vec4(1, 0, 0, 1);
vec4(0, 0, 0, 1);
but it draws them in the wrong order:
vec4(1, 0, 0, 1);
vec4(0, 0, 0, 1);
vec4(0, 0, 0, 1);
How can I make it transfer the data the proper way?
The OpenGL spec (version 4.6) states the following in section "6.1.1 Binding Buffer Objects to Indexed Target Points" regarding the error conditions for glBindBufferRange:
An INVALID_VALUE error is generated by BindBufferRange if buffer is
non-zero and offset or size do not respectively satisfy the constraints described for those parameters for the specified target, as described in section 6.7.1.
Section 6.7.1 "Indexed Buffer Object Limits and Binding Queries" states for SSBOs:
starting offset: SHADER_STORAGE_BUFFER_START
offset restriction: multiple of the value of SHADER_STORAGE_BUFFER_OFFSET_ALIGNMENT
binding size: SHADER_STORAGE_BUFFER_SIZE
According to Table 23.64 "Implementation Dependent Aggregate Shader Limits":
256 [with the following footnote]: The value of SHADER_STORAGE_BUFFER_OFFSET_ALIGNMENT is the maximum allowed, not the minimum.
So if your offset is not a multiple of 256 (which it isn't), this code is simply not guaranteed to work at all. You can query the actual restriction of the implementation you are running on and adjust your buffer contents accordingly, but you must be prepared for it to be as high as 256 bytes.
I ended up using struct alignas(128) Uniform. I guess my next goal is to not use a hardcoded alignment.
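A minimal sketch of that (assuming the Uniform struct and the DSA-style calls from the question, and that an OpenGL 4.5 loader header is already included): query the alignment, round the per-draw block up to it, and upload each element at its aligned offset, since uploading uniform.data() in a single call packs the elements at sizeof(Uniform) steps instead.
#include <vector>
void UploadAndDrawBlocks(GLuint ssbo, const std::vector<Uniform>& blocks)
{
    GLint align = 1;
    glGetIntegerv(GL_SHADER_STORAGE_BUFFER_OFFSET_ALIGNMENT, &align);
    const GLintptr stride = static_cast<GLintptr>(((sizeof(Uniform) + align - 1) / align) * align);
    // allocate with the aligned stride, then copy each element into its own slot
    glNamedBufferData(ssbo, stride * blocks.size(), nullptr, GL_STREAM_DRAW);
    for (size_t i = 0; i < blocks.size(); ++i)
        glNamedBufferSubData(ssbo, stride * i, sizeof(Uniform), &blocks[i]);
    for (size_t i = 0; i < blocks.size(); ++i)
    {
        glBindBufferRange(GL_SHADER_STORAGE_BUFFER, 1, ssbo, stride * i, sizeof(Uniform));
        //glDrawArrays(...); // draw the shape that uses this block
    }
}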

Do I need Bind Pose Bone Transformation for my mesh Animation?

I have a Hand mesh which I want to animate.
I have the Skeleton which can be hierarchically animated.
My mesh is also weighted in Blender. So each vertex has 4 associated bones to be affected by.
When I apply the animation of my skeleton to the mesh, the hierarchy is applied correctly (so the hierarchy of the mesh matches the hierarchy of the skeleton).
So far so good. Now the question:
the fingers look stretched (it's as if the fingers had been smashed by a heavy door). Why?
Note: I didn't apply the bind pose bone transformation matrix explicitly, but I read about it and I believe its functionality is already there, in the hierarchical transformation I have for my skeleton.
If you need more clarification of the steps, please ask.
vector<glm::mat4> Posture1Hand::HierarchyApplied(HandSkltn HNDSKs){
vector <glm::mat4> Matrices;
Matrices.resize(HNDSKs.GetLimbNum());
//non Hierarchical Matrices
for (unsigned int i = 0; i < Matrices.size(); i++){
Matrices[i] = newPose[i].getModelMatSkltn(HNDSKs.GetLimb(i).getLwCenter());
}
for (unsigned int i = 0; i < Matrices.size(); i++){
vector<Limb*>childeren = HNDSKs.GetLimb(i).getChildren();
for (unsigned int j = 0; j < childeren.size(); j++){
Matrices[childeren[j]->getId()] = Matrices[i] * Matrices[childeren[j]->getId()];
}
}
return Matrices;
}
Here is my getModelMatSkltn method.
inline glm::mat4 getModelMatSkltn(const glm::vec3& RotationCentre) const{//to apply the rotation on the whole hierarchy
glm::mat4 posMatrix = { 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1 };
posMatrix = glm::translate(posMatrix, newPos);
glm::mat4 trMatrix = { 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1 };
glm::mat4 OriginTranslate = glm::translate(trMatrix, -RotationCentre);
glm::mat4 InverseTranslate = glm::translate(trMatrix, RotationCentre);
glm::mat4 rotXMatrix = { 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1 };
rotXMatrix = glm::rotate(rotXMatrix, glm::radians(newRot.x), glm::vec3(1, 0, 0));
glm::mat4 rotYMatrix = { 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1 };
rotYMatrix = glm::rotate(rotYMatrix, glm::radians(newRot.y), glm::vec3(0, 1, 0));
glm::mat4 rotZMatrix = { 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1 };
rotZMatrix = glm::rotate(rotZMatrix, glm::radians(newRot.z), glm::vec3(0, 0, 1));
glm::mat4 scaleMatric = { 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1 };
scaleMatric = glm::scale(scaleMatric, newScale);
glm::mat4 rotMatrix = rotZMatrix*rotYMatrix*rotXMatrix;
rotMatrix = InverseTranslate*rotMatrix*OriginTranslate;
return posMatrix*rotMatrix*scaleMatric;
}
and this is how I send the 20 transformation matrices (because of the 20 joints in the hand) to the GPU:
void GLShader::Update(const vector<glm::mat4> trMat, const GLCamera& camera){
vector<glm::mat4> MVP; MVP.resize(trMat.size());
for (unsigned int i = 0; i < trMat.size(); i++){
MVP[i] = camera.getViewProjection()* trMat[i];
}
glUniformMatrix4fv(newUniform[TRANSFORM_U], trMat.size(), GL_FALSE, &MVP[0][0][0]);//4 floating value
}
I guess one should be familiar with how the vertex position is calculated in the shader in order to answer the question, so I'm including part of my vertex shader too.
attribute vec3 position;
attribute vec2 texCoord;
attribute vec4 weight;
attribute vec4 weightInd;
uniform mat4 transform[20];//array of uniforms for the 20 joints in my skeleton
void main(){
mat4 WMat;//weighted matrix
float w;
int Index;
for (int i=0; i<4; i++){
Index=int(weightInd[i]);
w=weight[i];
WMat += w*transform[Index];
}
gl_Position= WMat*vec4(position, 1.0);
}

DX11 Losing Instance Buffer Data

I've got a function that creates the various instance buffers and stores them in an array for me to use in my DrawIndexedInstanced calls.
But when I pass the vertex buffer and instance buffer through to my shader, my instance data is completely lost when the shader goes to use it, so none of my objects are being relocated and are thus all rendering in the same place.
I've been looking at this for hours and literally cannot find anything that is helpful.
Creating the Vertex shader input layout:
D3D11_INPUT_ELEMENT_DESC solidColorLayout[] =
{
//Vertex Buffer
{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
{ "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
//Instance buffer
{ "INSTANCEPOS", 0, DXGI_FORMAT_R32G32B32_FLOAT, 1, 0, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
{ "INSTANCEROT", 0, DXGI_FORMAT_R32G32B32_FLOAT, 1, 12, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
{ "INSTANCESCA", 0, DXGI_FORMAT_R32G32B32_FLOAT, 1, 24, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
{ "INSTANCETEX", 0, DXGI_FORMAT_R32_FLOAT, 1, 36, D3D11_INPUT_PER_INSTANCE_DATA, 1 }
};
Creating an instance buffer (called multiple times per frame, to create all necessary buffers):
void GameManager::CreateInstanceBuffer(ID3D11Buffer** buffer, Mesh* mesh, std::vector<Instance> instances)
{
D3D11_BUFFER_DESC instBuffDesc;
ZeroMemory(&instBuffDesc, sizeof(instBuffDesc));
instBuffDesc.Usage = D3D11_USAGE_DEFAULT;
instBuffDesc.ByteWidth = sizeof(Instance) * instances.size();
instBuffDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
instBuffDesc.CPUAccessFlags = 0;
instBuffDesc.MiscFlags = 0;
instBuffDesc.StructureByteStride = 0;
int i = sizeof(Instance);
D3D11_SUBRESOURCE_DATA instData;
ZeroMemory(&instData, sizeof(instData));
instData.pSysMem = &instances;
instData.SysMemPitch = 0;
instData.SysMemSlicePitch = 0;
CheckFailWithError(dxManager.GetDevice()->CreateBuffer(&instBuffDesc, &instData, buffer),
"An error occurred whilst building an instance buffer",
"[GameManager]");
meshBuffers.push_back(mesh->GetBuffer(VERTEX_BUFFER));
}
The draw command:
dxManager.GetContext()->DrawIndexedInstanced(instanceIndexCounts[buffer], instanceCounts[buffer], 0, 0, 0);
The shader:
cbuffer cbChangesEveryFrame : register(b0)
{
matrix worldMatrix;
};
cbuffer cbNeverChanges : register(b1)
{
matrix viewMatrix;
};
cbuffer cbChangeOnResize : register(b2)
{
matrix projMatrix;
};
struct VS_Input
{
float4 pos : POSITION;
float2 tex0 : TEXCOORD0;
float4 instancePos : INSTANCEPOS;
float4 instanceRot : INSTANCEROT;
float4 instanceSca : INSTANCESCA;
float instanceTex : INSTANCETEX;
};
PS_Input VS_Main(VS_Input vertex)
{
PS_Input vsOut = (PS_Input)0;
vsOut.pos = mul(vertex.pos + vertex.instancePos, worldMatrix);
vsOut.pos = mul(vsOut.pos, viewMatrix);
vsOut.pos = mul(vsOut.pos, projMatrix);
vsOut.tex0 = vertex.tex0;
return vsOut;
}
I've used the graphics debugger built into Visual Studio. Initially it appeared to be assigning variables in the vertex shader back to front; removing APPEND_ALIGNED_ELEMENT from the AlignedByteOffset fixed that, but the per-instance data still seems to be corrupt and is not getting received.
If there is anything else you need let me know and I'll update the post as necessary.
The problem lies in your subresource data.
instData.pSysMem = &instances;
Here &instances is the address of the std::vector object itself, not of the contiguous instance data it owns. Try using
instData.pSysMem = &instances[0];
or
instData.pSysMem = &instances.at(0);
so that pSysMem points at the first element of the actual instance data, which will hopefully fix your issue.
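For reference, a minimal sketch of the corrected creation call (the Instance layout below is only inferred from the INSTANCEPOS/ROT/SCA/TEX input elements above, not taken from the question's code):
#include <d3d11.h>
#include <vector>
// Hypothetical instance layout matching the per-instance input elements.
struct Instance
{
    float pos[3];
    float rot[3];
    float sca[3];
    float tex;
};
HRESULT CreateInstanceBuffer(ID3D11Device* device,
                             const std::vector<Instance>& instances,
                             ID3D11Buffer** buffer)
{
    D3D11_BUFFER_DESC desc = {};
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.ByteWidth = static_cast<UINT>(sizeof(Instance) * instances.size());
    desc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
    D3D11_SUBRESOURCE_DATA initData = {};
    initData.pSysMem = instances.data(); // contiguous element storage, not &instances
    return device->CreateBuffer(&desc, &initData, buffer);
}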

Normals are not transferred to DirectX 11 shader correctly - random, time-dependent values?

Today I was trying to add normal maps to my DirectX 11 application.
Something went wrong. I decided to output the normals' values instead of colors on the scene objects to "see" where the problem lies.
What surprised me is that the normals' values change very fast (the colors blink every frame). And I'm sure that I don't touch their values during program execution (the positions of the vertices stay stable, but the normals do not).
Here are two screens for some frames at t1 and t2:
My vertex structure:
struct MyVertex{//vertex structure
MyVertex() : weightCount(0), normal(0,0,0){
//textureCoordinates.x = 1;
//textureCoordinates.y = 1;
}
MyVertex(float x, float y, float z, float u, float v, float nx, float ny, float nz)
: position(x, y, z), textureCoordinates(u, v), normal(0,0,0), weightCount(0){
}
DirectX::XMFLOAT3 position;
DirectX::XMFLOAT2 textureCoordinates;
DirectX::XMFLOAT3 normal = DirectX::XMFLOAT3(1.0f, 0.0f, 0.0f);
//will not be sent to shader (and used only by skinned models)
int startWeightIndex;
int weightCount; //=0 means that it's not skinned vertex
};
The corresponding vertex layout:
layout[0] = { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 };
layout[1] = { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 };
layout[2] = { "NORMAL", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 20, D3D11_INPUT_PER_VERTEX_DATA, 0 };
Vertex buffer:
D3D11_BUFFER_DESC bd;
ZeroMemory(&bd, sizeof(bd));
bd.Usage = D3D11_USAGE_DEFAULT; //D3D11_USAGE_DYNAMIC
bd.ByteWidth = sizeof(MyVertex) * structure->getVerticesCount();
bd.BindFlags = D3D11_BIND_VERTEX_BUFFER;
bd.CPUAccessFlags = 0;
D3D11_SUBRESOURCE_DATA InitData;
ZeroMemory(&InitData, sizeof(InitData));
InitData.pSysMem = structure->vertices;
if(device->CreateBuffer(&bd, &InitData, &buffers->vertexBuffer) != S_OK){
return false;
}
And the shader that outputs the normals "as color" (of course, if I set output.normal to float3(1,1,1), the objects stay white):
struct Light
{
float3 diffuse;
float3 position;
float3 direction;
};
cbuffer cbPerObject : register(b0)
{
float4x4 WVP;
float4x4 World;
float4 difColor;
bool hasTexture;
bool hasNormMap;
};
cbuffer cbPerFrame : register(b1)
{
Light light;
};
Texture2D ObjTexture;
Texture2D ObjNormMap;
SamplerState ObjSamplerState;
TextureCube SkyMap;
struct VS_INPUT
{
float4 position : POSITION;
float2 tex : TEXCOORD;
float3 normal : NORMAL;
};
struct VS_OUTPUT
{
float4 Pos : SV_POSITION;
float4 worldPos : POSITION;
float3 normal : NORMAL;
float2 TexCoord : TEXCOORD;
float3 tangent : TANGENT;
};
VS_OUTPUT VS(VS_INPUT input)
{
VS_OUTPUT output;
//input.position.w = 1.0f;
output.Pos = mul(input.position, WVP);
output.worldPos = mul(input.position, World);
output.normal = input.normal;
output.tangent = mul(input.tangent, World);
output.TexCoord = input.tex;
return output;
}
float4 PS(VS_OUTPUT input) : SV_TARGET
{
return float4(input.normal, 1.0);
}
//--------------------------------------------------------------------------------------
// Techniques
//--------------------------------------------------------------------------------------
technique10 RENDER
{
pass P0
{
SetVertexShader( CompileShader( vs_4_0, VS() ) );
SetPixelShader( CompileShader( ps_4_0, PS() ) );
SetBlendState( SrcAlphaBlendingAdd, float4( 0.0f, 0.0f, 0.0f, 0.0f ), 0xFFFFFFFF );
}
}
Where have I made a mistake? Maybe there are other places in the code that could cause that strange behavior (some locking, buffers, I don't know...)?
edit:
As 413X suggested, I've run the DirectX Diagnostic:
What is strange is that in the small preview the screen looks the same as in the program, but when I investigate that frame (screenshot), I get completely different colors:
Also, here's something strange - I pick the blue pixel and it says it's black (on the right):
edit 2:
As catflier requested, I'm posting some additional code.
The rendering and buffers binding:
//set the object world matrix
DirectX::XMMATRIX objectWorldMatrix = DirectX::XMMatrixIdentity();
DirectX::XMMATRIX rotationMatrix = DirectX::XMMatrixRotationQuaternion(
DirectX::XMVectorSet(object->getOrientation().getX(), object->getOrientation().getY(), object->getOrientation().getZ(), object->getOrientation().getW())
);
DirectX::XMMATRIX scaleMatrix = (
object->usesScaleMatrix()
? DirectX::XMMatrixScaling(object->getHalfSize().getX(), object->getHalfSize().getY(), object->getHalfSize().getZ())
: DirectX::XMMatrixScaling(1.0f, 1.0f, 1.0f)
);
DirectX::XMMATRIX translationMatrix = DirectX::XMMatrixTranslation(object->getPosition().getX(), object->getPosition().getY(), object->getPosition().getZ());
objectWorldMatrix = scaleMatrix * rotationMatrix * translationMatrix;
UINT stride = sizeof(MyVertex);
UINT offset = 0;
context->IASetVertexBuffers(0, 1, &buffers->vertexBuffer, &stride, &offset); //set vertex buffer
context->IASetIndexBuffer(buffers->indexBuffer, DXGI_FORMAT_R16_UINT, 0); //set index buffer
//set the constants per object
ConstantBufferStructure constantsPerObject;
//set matrices
DirectX::XMFLOAT4X4 view = myCamera->getView();
DirectX::XMMATRIX camView = XMLoadFloat4x4(&view);
DirectX::XMFLOAT4X4 projection = myCamera->getProjection();
DirectX::XMMATRIX camProjection = XMLoadFloat4x4(&projection);
DirectX::XMMATRIX worldViewProjectionMatrix = objectWorldMatrix * camView * camProjection;
constantsPerObject.worldViewProjection = XMMatrixTranspose(worldViewProjectionMatrix);
constantsPerObject.world = XMMatrixTranspose(objectWorldMatrix);
//draw objects's non-transparent subsets
for(int i=0; i<structure->subsets.size(); i++){
setColorsAndTextures(structure->subsets[i], constantsPerObject, context); //custom method that insert data into constantsPerObject variable
//bind constants per object to constant buffer and send it to vertex and pixel shaders
context->UpdateSubresource(constantBuffer, 0, NULL, &constantsPerObject, 0, 0);
context->VSSetConstantBuffers(0, 1, &constantBuffer);
context->PSSetConstantBuffers(0, 1, &constantBuffer);
context->RSSetState(RSCullDefault);
int start = structure->subsets[i]->getVertexIndexStart();
int count = structure->subsets[i]->getVertexIndexAmmount();
context->DrawIndexed(count, start, 0);
}
The rasterizer:
void RendererDX::initCull(ID3D11Device * device){
D3D11_RASTERIZER_DESC cmdesc;
ZeroMemory(&cmdesc, sizeof(D3D11_RASTERIZER_DESC));
cmdesc.FillMode = D3D11_FILL_SOLID;
cmdesc.CullMode = D3D11_CULL_BACK;
#ifdef GRAPHIC_LEFT_HANDED
cmdesc.FrontCounterClockwise = true;
#else
cmdesc.FrontCounterClockwise = false;
#endif
cmdesc.CullMode = D3D11_CULL_NONE;
//cmdesc.FillMode = D3D11_FILL_WIREFRAME;
HRESULT hr = device->CreateRasterizerState(&cmdesc, &RSCullDefault);
}
edit 3:
The debugger output (there are some mismatches in semantics?):
D3D11 ERROR: ID3D11DeviceContext::DrawIndexed: Input Assembler - Vertex Shader linkage error: Signatures between stages are incompatible. The input stage requires Semantic/Index (NORMAL,0) as input, but it is not provided by the output stage. [ EXECUTION ERROR #342: DEVICE_SHADER_LINKAGE_SEMANTICNAME_NOT_FOUND]
D3D11 ERROR: ID3D11DeviceContext::DrawIndexed: Input Assembler - Vertex Shader linkage error: Signatures between stages are incompatible. Semantic 'TEXCOORD' is defined for mismatched hardware registers between the output stage and input stage. [ EXECUTION ERROR #343: DEVICE_SHADER_LINKAGE_REGISTERINDEX]
D3D11 ERROR: ID3D11DeviceContext::DrawIndexed: Input Assembler - Vertex Shader linkage error: Signatures between stages are incompatible. Semantic 'TEXCOORD' in each signature have different min precision levels, when they must be identical. [ EXECUTION ERROR #3146050: DEVICE_SHADER_LINKAGE_MINPRECISION]
I am pretty sure your bytes are misaligned. A float is 4 bytes, I think, and a float4 is then 16 bytes, and it wants to be 16-byte aligned. So observe:
layout[0] = { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 };
layout[1] = { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 };
layout[2] = { "NORMAL", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 20, D3D11_INPUT_PER_VERTEX_DATA, 0 };
The values 0, 12, 20 (AlignedByteOffset) are where each element starts. Which would mean: Position starts at 0, and Texcoord starts at the end of a float3, which gives you wrong results. Because look inside the shader:
struct VS_INPUT
{
float4 position : POSITION;
float2 tex : TEXCOORD;
float3 normal : NORMAL;
};
And Normal sits at float3+float2. So generally, you want to align things more consistently, maybe even add "padding" to fill the gaps and keep all the variables 16-byte aligned.
But to keep it simple for you, you want to switch that declaration to:
layout[0] = { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 };
layout[1] = { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 };
layout[2] = { "NORMAL", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 };
What happens now? Well, the layout aligns itself automagically, although it can be less optimal. But one thing about shaders: try to keep things 16-byte aligned.
Your data structure on upload doesn't match your Input Layout declaration.
Since your vertex data structure is:
struct MyVertex{//vertex structure
MyVertex() : weightCount(0), normal(0,0,0){
//textureCoordinates.x = 1;
//textureCoordinates.y = 1;
}
MyVertex(float x, float y, float z, float u, float v, float nx, float ny, float nz)
: position(x, y, z), textureCoordinates(u, v), normal(0,0,0), weightCount(0){
}
DirectX::XMFLOAT3 position;
DirectX::XMFLOAT2 textureCoordinates;
DirectX::XMFLOAT3 normal = DirectX::XMFLOAT3(1.0f, 0.0f, 0.0f);
//will not be sent to shader (and used only by skinned models)
int startWeightIndex;
int weightCount; //=0 means that it's not skinned vertex
};
startWeightIndex and weightCount will be copied into your vertex buffer (even if they do not contain anything useful).
If you check sizeof(MyVertex), you will get a size of 40.
Now let's look at your input layout declaration (whether you use automatic offset or not is irrelevant):
layout[0] = { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 };
layout[1] = { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 };
layout[2] = { "NORMAL", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 20, D3D11_INPUT_PER_VERTEX_DATA, 0 };
From what you see here, you are declaring a data structure of (12+8+12) = 32 bytes, which of course does not match your vertex size.
So the first vertex will be fetched properly, but the next ones will start to use invalid data (as the Input Assembler doesn't know that your data structure is bigger than what you specified to it).
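As a quick sanity check, here is a sketch (using the same fields as the question's MyVertex, constructors omitted) that makes the mismatch explicit at compile time:
#include <DirectXMath.h>
struct MyVertex
{
    DirectX::XMFLOAT3 position;           // offset 0,  12 bytes
    DirectX::XMFLOAT2 textureCoordinates; // offset 12,  8 bytes
    DirectX::XMFLOAT3 normal;             // offset 20, 12 bytes
    int startWeightIndex;                 // offset 32, not described by the layout
    int weightCount;                      // offset 36, not described by the layout
};
// the buffer holds 40-byte vertices, but the three layout entries only cover the first 32 bytes
static_assert(sizeof(MyVertex) == 40, "MyVertex no longer matches the offsets used in the input layout");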
Two ways to fix it:
1/ Strip your vertex declaration
In that case you modify your vertex structure to match your input declaration (I removed the constructors for brevity):
struct MyVertex
{//vertex structure
DirectX::XMFLOAT3 position;
DirectX::XMFLOAT2 textureCoordinates;
DirectX::XMFLOAT3 normal = DirectX::XMFLOAT3(1.0f, 0.0f, 0.0f);
};
Now your vertex structure exactly matches your declaration, so vertices will be fetched properly.
2/ Adapt your Input Layout declaration:
In that case you change your layout to make sure that all the data contained in your buffer is declared, so it can be taken into account by the Input Assembler (see below).
Now your declaration becomes:
layout[0] = { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 };
layout[1] = { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 };
layout[2] = { "NORMAL", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 20, D3D11_INPUT_PER_VERTEX_DATA, 0 };
layout[3] = { "STARTWEIGHTINDEX", 0, DXGI_FORMAT_R32_SINT, 0, 32, D3D11_INPUT_PER_VERTEX_DATA, 0 };
layout[4] = { "WEIGHTCOUNT", 0, DXGI_FORMAT_R32_SINT, 0, 36, D3D11_INPUT_PER_VERTEX_DATA, 0 };
So that means you inform the Input Assembler of all the data that your structure contains.
In that case, even if the data is not needed by your vertex shader, since you specified a full data declaration, the Input Assembler will safely ignore STARTWEIGHTINDEX and WEIGHTCOUNT, but will respect your whole structure's padding.