DirectX 12 resource binding - C++

How can I bind resources to different shader stages in D3D12?
I wrote two shaders, one vertex shader and one pixel shader.
Here is the vertex shader:
//VertexShader.vs
float4 main(float3 posL : POSITION, uniform float4x4 gWVP) : SV_POSITION
{
    return mul(float4(posL, 1.0f), gWVP);
}
Here is the pixel shader:
//PixelShader.ps
float4 main(float4 PosH : SV_POSITION, uniform float4 Color) : SV_Target
{
    return Color;
}
If I compile those two shaders with the D3DCompile function, reflect them with D3DReflect, and examine the BoundResources member of the shader description, they each have a constant buffer called $Params which contains the respective uniform variable. The problem is that both of those buffers are bound to slot 0. When binding resources I have to use the ID3D12RootSignature interface, which binds resources to the resource slots. How can I bind the $Params buffer of the vertex shader only to the vertex shader, and the $Params buffer of the pixel shader only to the pixel shader?
Thanks in advance,
Philinator

With DX12, the performant solution here is to create a root signature that meets your application needs, and ideally declare it in your HLSL as well. You don't want to change root signatures often, so you definitely want to make one that works for a large swath of your shaders.
Remember, DX12 is Direct3D without training wheels, magic, or shader patching, so you have to do everything explicitly yourself. Ease of use is a non-goal for Direct3D 12. There are plenty of good reasons to stick with Direct3D 11 unless your application and developer resources merit the extra control Direct3D 12 affords you. With great power comes great responsibility.
I believe you can achieve what you want with something like:
CD3DX12_DESCRIPTOR_RANGE descRange[1];
descRange[0].Init(D3D12_DESCRIPTOR_RANGE_TYPE_CBV, 1, 0); // one CBV at register b0

// Two root parameters, each a one-entry descriptor table restricted to a
// single stage, so each stage sees only its own $Params buffer at b0.
CD3DX12_ROOT_PARAMETER rootParameters[2];
rootParameters[0].InitAsDescriptorTable(
    1, &descRange[0], D3D12_SHADER_VISIBILITY_VERTEX); // b0, vertex shader only
rootParameters[1].InitAsDescriptorTable(
    1, &descRange[0], D3D12_SHADER_VISIBILITY_PIXEL);  // b0, pixel shader only

// Create the root signature.
CD3DX12_ROOT_SIGNATURE_DESC rootSignatureDesc(_countof(rootParameters),
    rootParameters, 0, nullptr,
    D3D12_ROOT_SIGNATURE_FLAG_ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT);
ComPtr<ID3DBlob> signature;
ComPtr<ID3DBlob> error;
DX::ThrowIfFailed(D3D12SerializeRootSignature(&rootSignatureDesc,
    D3D_ROOT_SIGNATURE_VERSION_1, &signature, &error));
DX::ThrowIfFailed(
    device->CreateRootSignature(0, signature->GetBufferPointer(),
        signature->GetBufferSize(),
        IID_PPV_ARGS(m_rootSignature.ReleaseAndGetAddressOf())));
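At draw time you then point each root parameter at the CBV for the matching stage. Here is a minimal sketch, assuming the two constant buffers already have CBVs created in a shader-visible descriptor heap; m_cbvHeap, m_cbvDescriptorSize, and the slot order within the heap are hypothetical names for this sketch, not part of the code above:
ID3D12DescriptorHeap* heaps[] = { m_cbvHeap.Get() };
commandList->SetDescriptorHeaps(_countof(heaps), heaps);
commandList->SetGraphicsRootSignature(m_rootSignature.Get());

// Heap slot 0 holds the vertex-shader $Params CBV, slot 1 the pixel-shader one.
CD3DX12_GPU_DESCRIPTOR_HANDLE vsParams(
    m_cbvHeap->GetGPUDescriptorHandleForHeapStart(), 0, m_cbvDescriptorSize);
CD3DX12_GPU_DESCRIPTOR_HANDLE psParams(
    m_cbvHeap->GetGPUDescriptorHandleForHeapStart(), 1, m_cbvDescriptorSize);

commandList->SetGraphicsRootDescriptorTable(0, vsParams); // root parameter 0 (vertex)
commandList->SetGraphicsRootDescriptorTable(1, psParams); // root parameter 1 (pixel)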

Related

How do I pass uniforms between different shader programs in OpenGL?

I'm wondering if there is a way to communicate data between two separate shader programs.
I have one geometry shader that uses a uniform float, and I would like to send the value of that float to a fragment shader that is used by another program.
Is this possible? I know that OpenGL uses pre-defined global variables, but is there a way to create my own?
Geometry shader from program 1:
uniform float uTime;
Fragment shader from program 2:
if (uTime > 0.){
...
}
I would like to send the value of that float to a fragment shader that is used by another program. Is this possible?
There is no "interface" between different shader programs.
If you want to use the same uniform in your second shader program, then you simply need to declare and upload that same uniform in your second shader program, just like you did it for your first shader program.
(I am not getting into uniform buffer objects for the rest of this answer, since you used plain uniform variables in the default, unnamed block. But you could instead use a uniform buffer object holding your uniform values, which you then bind to both shader programs; see the sketch at the end of this answer.)
So: Declare the uniform in the fragment shader of your second shader program just like you did with the geometry shader of your first shader program:
uniform float uTime;
void main(void) {
    ...
    if (uTime > 0.) {
        ...
    }
    ...
}
Then, as usual, compile and link that program and obtain the location of the uniform in that second shader program via glGetUniformLocation() once:
// do once:
GLint uTimeLocationInSecondShaderProgram = glGetUniformLocation(yourSecondShaderProgram, "uTime");
and then send a value to the uniform via e.g. glUniform1f() whenever the value changes (e.g. once per render loop iteration):
// do before drawing:
float time = ...; // <- whatever time you also used in your first shader program
glUseProgram(yourSecondShaderProgram);
glUniform1f(uTimeLocationInSecondShaderProgram, time);
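Regarding the uniform buffer object aside above: here is a minimal sketch of sharing one value between both programs through a single UBO, reusing the time variable from above and assuming both shaders declare a block like layout(std140) uniform TimeBlock { float uTime; } (the block name is hypothetical):
// Do once: create the buffer and attach it to binding point 0.
GLuint ubo;
glGenBuffers(1, &ubo);
glBindBuffer(GL_UNIFORM_BUFFER, ubo);
glBufferData(GL_UNIFORM_BUFFER, sizeof(float), NULL, GL_DYNAMIC_DRAW);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);

// Do once per program, after linking: point its block at binding point 0.
glUniformBlockBinding(yourFirstShaderProgram,
    glGetUniformBlockIndex(yourFirstShaderProgram, "TimeBlock"), 0);
glUniformBlockBinding(yourSecondShaderProgram,
    glGetUniformBlockIndex(yourSecondShaderProgram, "TimeBlock"), 0);

// Do before drawing: update the buffer; both programs read the same storage.
glBindBuffer(GL_UNIFORM_BUFFER, ubo);
glBufferSubData(GL_UNIFORM_BUFFER, 0, sizeof(float), &time);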

Multiple Shaders in OpenGL

Is there a way to create multiple shaders (both vertex, fragment, even geometry and tessellation) that can be compounded in what they do?
For example: I've seen a number of uses of the in and out keywords in the later versions of OpenGL, and will use these to illustrate my question.
Is there a way given a shader (doesn't matter which, but let's say fragment shader) such as
in inVar;
out outVar;
void main() {
    var varOne = inVar;
    var varTwo = varOne;
    var varThr = varTwo;
    outVar = varThr;
}
To turn it into the fragment shader
in inVar;
out varOne;
void main() {
    varOne = inVar;
}
Followed by the fragment shader
in varOne;
out varTwo;
void main() {
    varTwo = varOne;
}
Followed by the fragment shader
in varTwo;
out varThr;
void main() {
    varThr = varTwo;
}
And finally followed by the fragment shader
in varThr;
out outVar;
void main() {
    outVar = varThr;
}
Are in and out the correct "concepts" to describe this behavior, or should I be looking for other keywords?
In OpenGL, you can attach multiple shaders of the same type to a program object. In OpenGL ES, this is not supported.
This is from the OpenGL 4.5 spec, section "7.3 Program Objects", page 88 (emphasis added):
Multiple shader objects of the same type may be attached to a single program object, and a single shader object may be attached to more than one program object.
Compared with the OpenGL ES 3.2 spec, section "7.3 Program Objects", page 72 (emphasis added):
Multiple shader objects of the same type may not be attached to a single program object. However, a single shader object may be attached to more than one program object.
However, even in OpenGL, using multiple shaders of the same type does not work the way you outline in your question. Only one of the shaders can have a main(). The other shaders typically contain only functions. So your idea of chaining multiple shaders of the same type into a pipeline will not work in this form.
The way this is sometimes used is that there is one (or multiple) shaders that contain a collection of generic functions, which only needs to be compiled once, and can then be used for multiple shader programs. This is comparable to a library of functions in general programming. You could call this, using unofficial terminology, a "library shader".
Then each shader program has a single shader of each type containing a main(), and that shader contains the specific logic and control flow for that program, while calling generic functions from the "library shaders".
Following the outline in your question, you could achieve something similar by defining a function in each shader. For example, the first fragment shader could contain this code:
void shaderOne(in vec4 varIn, out vec4 varOut) {
    varOut = varIn;
}
Second shader:
void shaderTwo(in vec4 varIn, out vec4 varOut) {
    varOut = varIn;
}
Third shader:
void shaderThr(in vec4 varIn, out vec4 varOut) {
    varOut = varIn;
}
Then you need one shader with a main(), where you can chain the other shaders:
in vec4 varOne;
out vec4 outVar;
void main() {
    vec4 varTwo, varThr;
    shaderOne(varOne, varTwo);
    shaderTwo(varTwo, varThr);
    shaderThr(varThr, outVar);
}
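On the application side, attaching such a "library shader" together with the shader containing main() could look like the sketch below; librarySource, mainSource, and vertexShader are hypothetical, and the main() shader must declare prototypes for the library functions it calls:
GLuint libShader = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(libShader, 1, &librarySource, NULL); // functions only, no main()
glCompileShader(libShader);

GLuint mainShader = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(mainShader, 1, &mainSource, NULL); // contains main()
glCompileShader(mainShader);

GLuint program = glCreateProgram();
glAttachShader(program, vertexShader); // vertex stage as usual
glAttachShader(program, libShader);    // two fragment shader objects in one
glAttachShader(program, mainShader);   // program (desktop OpenGL only)
glLinkProgram(program);                // calls are resolved at link time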
Generate your shaders. That's what pretty much all 3D engines and game engines do.
In other words, they manipulate text using code and generate source code at runtime or build time.
Sometimes they use GLSL's preprocessor. Example:
#ifdef USE_TEXTURE
uniform sampler2D u_tex;
varying vec2 v_texcoord;
#else
uniform vec4 u_color;
#endif
void main() {
#ifdef USE_TEXTURE
    gl_FragColor = texture2D(u_tex, v_texcoord);
#else
    gl_FragColor = u_color;
#endif
}
Then at runtime prepend a string to define USE_TEXTURE.
JavaScript
var prefix = useTexture ? '#define USE_TEXTURE\n' : '';
var shaderSource = prefix + originalShaderSource;
Most engines, though, do a lot more string manipulation, taking lots of small chunks of GLSL and combining them with various string substitutions.
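Doing the same prefixing from C++ might look like this sketch (useTexture, originalShaderSource, and shader are hypothetical):
// Note: if the source starts with a #version directive, the #define must be
// inserted after that line instead of prepended.
std::string prefix = useTexture ? "#define USE_TEXTURE\n" : "";
std::string shaderSource = prefix + originalShaderSource;
const char* src = shaderSource.c_str();
glShaderSource(shader, 1, &src, NULL);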

OpenGL ES 2.0 Vertex Shader Attributes

I've read in tutorials that the bare minimum a vertex shader needs is defined below:
attribute vec3 v3_variable;
attribute vec3 v3Pos;
void main()
{
    gl_Position = vec4(v3Pos, 1.0); // gl_Position is a vec4, so the vec3 needs a w component
}
OpenGL ES passes the vertex data to v3Pos, but the name of this variable can actually be anything; other tutorials name it a_Position, a_Pos, whatever.
So my question is, how does OpenGL ES know that when I call glVertexAttribPointer() it should put that data into v3Pos instead of v3_variable?
There are two methods of identifying which attributes in your shaders get 'paired' with the data you provide to glVertexAttribPointer. The only identifier you get with glVertexAttribPointer is the index (the first parameter), so it must be done by index somehow.
In the first method, you obtain the index of the attribute from your shader after it is compiled and linked using glGetAttribLocation, and use that as the first parameter, e.g.:
GLint program = ...;
GLint v3PosAttributeIndex = glGetAttribLocation(program, "v3Pos");
GLint v3varAttributeIndex = glGetAttribLocation(program, "v3_variable");
glVertexAttribPointer(v3PosAttributeIndex, ...);
glVertexAttribPointer(v3varAttributeIndex, ...);
You can also explicitly set the layout of the attributes when authoring the shader (as #j-p's comment suggests), so you would modify your shader like so:
layout(location = 1) in vec3 v3_variable;
layout(location = 0) in vec3 v3Pos;
And then in code you would set them explicitly:
glVertexAttribPointer(0, ...); // v3Pos data
glVertexAttribPointer(1, ...); // v3_variable data
This second method only works in GLES 3.0+ (and desktop OpenGL 3.3+, or with the ARB_explicit_attrib_location extension), so it may not be applicable to your case if you are only using GLES 2.0. Also, note that you can use both of these methods together. E.g., even if you explicitly define the layouts, it doesn't restrict you from querying them later.
Okay, so on the app side you will have a call that looks like either
glBindAttribLocation(MyShader, 0, "v3Pos");
(before you link the shader) or
vertexLoc = glGetAttribLocation(MyShader, "v3Pos");
after you link the program.
Then the first element of the glVertexAttribPointer call will either be 0 or vertexLoc.
glVertexAttribPointer(vertexLoc, 3, GL_FLOAT, GL_FALSE, sizeof(MyVertex), BUFFER_OFFSET(0));
In short, the GL linker is going to either assign v3Pos its own location, or go off of what you tell it. Either way, you are going to have to perform some interaction with the shader in order to interact with the attribute.
Hope that helps
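Putting both approaches side by side, a minimal sketch (myProgram is hypothetical):
// Option A: pick the index yourself, before linking.
glBindAttribLocation(myProgram, 0, "v3Pos");
glLinkProgram(myProgram);

// Option B: let the linker pick, then query it after linking.
GLint posLoc = glGetAttribLocation(myProgram, "v3Pos");

// Either way, feed the attribute through that index at draw time.
glEnableVertexAttribArray(posLoc);
glVertexAttribPointer(posLoc, 3, GL_FLOAT, GL_FALSE, 0, 0);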

OpenGL 3.3 - How to change tessellation level during runtime?

How can I change the tessellation level during runtime?
My only idea is to create a buffer object with only one variable, which I have to pass through... Are there any better solutions?
I have a tessellation control shader file which works fine:
[...]
void main()
{
    if (gl_InvocationID == 0)
    {
        gl_TessLevelInner[0] = 5.0;
        gl_TessLevelOuter[0] = 5.0;
        gl_TessLevelOuter[1] = 5.0;
        gl_TessLevelOuter[2] = 5.0;
    }
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;
}
You can pass the tessellation values as uniforms or (as the values are constant for each draw call) bypass the tessellation control shader altogether. If no TCS is linked to your shader program, the values supplied with -
GLfloat outer_values[4];
GLfloat inner_values[2];
// outer_values and inner_values should be set here
glPatchParameterfv(GL_PATCH_DEFAULT_OUTER_LEVEL, outer_values);
glPatchParameterfv(GL_PATCH_DEFAULT_INNER_LEVEL, inner_values);
will be used instead. In this case the tessellation evaluation shader uses the values output from the vertex shader.
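The uniform route is just as short: the control shader would declare uniform float uTessLevel; and assign it to gl_TessLevelInner/gl_TessLevelOuter in place of the 5.0 literals. The application then updates it at runtime, as in this sketch (program and the uniform name are hypothetical):
GLint tessLoc = glGetUniformLocation(program, "uTessLevel");
glUseProgram(program);
glUniform1f(tessLoc, 5.0f); // change this value whenever a new level is needed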

OpenGL - glUniformBlockBinding after or before glLinkProgram?

I'm trying to use Uniform Buffer Objects to share my projection matrix across different shaders (e.g., a deferred pass for solid objects and a forward pass for transparent ones). I think I'll add more data to the UBO later on, as the complexity grows. My problem is that the Red Book says:
To explicitly control a uniform block's binding, call glUniformBlockBinding() before calling glLinkProgram().
But the online documentation says:
When a program object is linked or re-linked, the uniform buffer object binding point assigned to each of its active uniform blocks is reset to zero.
What am I missing? Should I bind the Uniform Block after or before the linking?
Thanks.
glUniformBlockBinding needs the program name and the index at which the block is found in this particular program.
void glUniformBlockBinding(GLuint program, GLuint uniformBlockIndex, GLuint uniformBlockBinding);
uniformBlockIndex can be obtained by calling glGetUniformBlockIndex on the program.
GLuint glGetUniformBlockIndex(GLuint program, const char *uniformBlockName);
From the API (http://www.opengl.org/wiki/GLAPI/glGetUniformBlockIndex):
program must be the name of a program object for which the command glLinkProgram must have been called in the past, although it is not required that glLinkProgram must have succeeded. The link could have failed because the number of active uniforms exceeded the limit.
So the correct order is:
void glLinkProgram(GLuint program);
GLuint glGetUniformBlockIndex(GLuint program, const char *uniformBlockName);
void glUniformBlockBinding(GLuint program, GLuint uniformBlockIndex, GLuint uniformBlockBinding);
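Also note the caveat from the online documentation you quoted: re-linking resets each block binding back to zero, so the glUniformBlockBinding call has to be repeated after every (re-)link. As a minimal sketch, with the block name "Matrices", program, and ubo being hypothetical:
glLinkProgram(program);

// Associate the block with binding point 0 (redo this after every re-link):
GLuint blockIndex = glGetUniformBlockIndex(program, "Matrices");
glUniformBlockBinding(program, blockIndex, 0);

// Attach the buffer that actually holds the data to the same binding point:
glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);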