How to get a shader storage block variable's offset when using layout(packed) or layout(shared) in OpenGL?

If I have a shader storage block defined in GLSL like this:
buffer test_block
{
    type1 var1;
    type2 var2;
    type3 var3;
};
How can I get the offset of one of the variables in this block?
We know that if it is a uniform block just like this:
uniform test_block
{
    type1 var1;
    type2 var2;
    type3 var3;
};
we can get the offset like this, for example the offset of var2:
const char* var2_name = "var2";
GLuint var2_index;
glGetUniformIndices(program_id, 1, &var2_name, &var2_index);
GLint var2_offset;
glGetActiveUniformsiv(program_id, 1, &var2_index, GL_UNIFORM_OFFSET, &var2_offset);
// then var2_offset is what you want
This is how it works for a uniform block, but I can't find any tutorial on getting the offset of a shader storage block variable.
(Don't tell me to use std140 or std430; I know about those. I want to write a program that doesn't depend on a specific memory layout and just gets the offset programmatically.)
I looked for functions like glGetUniformIndices and glGetActiveUniformsiv that are suited to shader storage blocks, but there are no such functions.
I'm looking for a way to get a shader storage block variable's offset programmatically, just like for a uniform block variable.

The program interface query API is a generic tool for querying any of the external interface components of a program: inputs, outputs, uniforms, and yes, storage buffers. There are separate APIs for many of these interfaces (like glGetUniformIndices), but the generic one was created to cover all of the existing ones as well as SSBOs.
The different kinds of queryable resources are grouped into "interfaces". Properties of uniform blocks are queried through the GL_UNIFORM_BLOCK interface, but the members of a uniform block are queried through the GL_UNIFORM interface. Similarly, GL_SHADER_STORAGE_BLOCK is for querying properties of a storage block, while properties of the actual members of the block are queried through the GL_BUFFER_VARIABLE interface.
So the program interface query equivalent of your uniform querying code would be:
GLuint var2_index = glGetProgramResourceIndex(program_id, GL_UNIFORM, "var2");
GLenum props[] = {GL_OFFSET};
GLint var2_offset;
glGetProgramResourceiv(program_id, GL_UNIFORM, var2_index, 1, props, 1, nullptr, &var2_offset);
The SSBO equivalent would be:
GLuint var2_index = glGetProgramResourceIndex(program_id, GL_BUFFER_VARIABLE, "var2");
GLenum props[] = {GL_OFFSET};
GLint var2_offset;
glGetProgramResourceiv(program_id, GL_BUFFER_VARIABLE, var2_index, 1, props, 1, nullptr, &var2_offset);
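If you would rather enumerate every member of the block instead of querying one name at a time, the same API can do that through the block itself. A minimal sketch, assuming program_id is your linked program containing the test_block above:
#include <vector>

// Find the block and ask how many members it has.
GLuint block_index = glGetProgramResourceIndex(program_id, GL_SHADER_STORAGE_BLOCK, "test_block");
GLenum num_prop = GL_NUM_ACTIVE_VARIABLES;
GLint num_members = 0;
glGetProgramResourceiv(program_id, GL_SHADER_STORAGE_BLOCK, block_index,
                       1, &num_prop, 1, nullptr, &num_members);

// Fetch the indices of the members, then query each one's byte offset.
GLenum members_prop = GL_ACTIVE_VARIABLES;
std::vector<GLint> member_indices(num_members);
glGetProgramResourceiv(program_id, GL_SHADER_STORAGE_BLOCK, block_index,
                       1, &members_prop, num_members, nullptr, member_indices.data());

GLenum offset_prop = GL_OFFSET;
for (GLint member : member_indices)
{
    GLint offset = 0;
    glGetProgramResourceiv(program_id, GL_BUFFER_VARIABLE, (GLuint)member,
                           1, &offset_prop, 1, nullptr, &offset);
    // offset is the byte offset of this member within the buffer,
    // whatever layout (packed, shared, std140, std430) the block uses.
}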

Related

How do I pass uniforms between different shader programs in OpenGL?

I'm wondering if there is a way to communicate data between two separate shader programs.
I have one geometry shader that uses a uniform float, and I would like to send the value of that float to a fragment shader that is used by another program.
Is this possible? I know that OpenGL uses pre-defined global variables, but is there a way to create my own?
Geometry shader from program 1:
uniform float uTime;
Fragment shader from program 2:
if (uTime > 0.) {
    ...
}
I would like to send the value of that float to a fragment shaders that is used by another program. Is this possible?
There is no "interface" between different shader programs.
If you want to use the same uniform in your second shader program, then you simply need to declare and upload that same uniform in your second shader program, just like you did it for your first shader program.
(I am not getting into uniform buffer objects for the rest of this answer, since you used plain uniform variables in the default block. But you can use a uniform buffer object holding your uniform values, which you then bind to both shader programs; a sketch of that approach is at the end of this answer.)
So: Declare the uniform in the fragment shader of your second shader program just like you did with the geometry shader of your first shader program:
uniform float uTime;
void main(void) {
    ...
    if (uTime > 0.) {
        ...
    }
    ...
}
Then, as usual, compile and link that program and obtain the location of the uniform in that second shader program via glGetUniformLocation() once:
// do once:
GLint uTimeLocationInSecondShaderProgram = glGetUniformLocation(yourSecondShaderProgram, "uTime");
and then send a value to the uniform via e.g. glUniform1f() whenever the value changes (e.g. once per render loop iteration):
// do before drawing:
float time = ...; // <- whatever time you also used in your first shader program
glUseProgram(yourSecondShaderProgram);
glUniform1f(uTimeLocationInSecondShaderProgram, time);
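For completeness, here is a rough sketch of the uniform buffer object route mentioned above. It assumes you change both shaders to declare the value in a named block, e.g. layout(std140) uniform TimeBlock { float uTime; }; (TimeBlock and timeUbo are made-up names for illustration):
// Create one UBO and attach it to binding point 0.
GLuint timeUbo;
glGenBuffers(1, &timeUbo);
glBindBuffer(GL_UNIFORM_BUFFER, timeUbo);
glBufferData(GL_UNIFORM_BUFFER, sizeof(float), nullptr, GL_DYNAMIC_DRAW);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, timeUbo);

// Do once after linking: point the block in each program at binding point 0.
glUniformBlockBinding(yourFirstShaderProgram,
    glGetUniformBlockIndex(yourFirstShaderProgram, "TimeBlock"), 0);
glUniformBlockBinding(yourSecondShaderProgram,
    glGetUniformBlockIndex(yourSecondShaderProgram, "TimeBlock"), 0);

// Do before drawing: update the buffer once; both programs read the new value.
float time = ...; // <- whatever time you also used before
glBindBuffer(GL_UNIFORM_BUFFER, timeUbo);
glBufferSubData(GL_UNIFORM_BUFFER, 0, sizeof(float), &time);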

DirectX 12 Resource binding

How can I bind resources to different shader stages in D3D12?
I wrote two shaders, one vertex shader and one pixel shader.
Here is the vertex shader:
//VertexShader.vs
float4 main(float3 posL : POSITION, uniform float4x4 gWVP) : SV_POSITION
{
    return mul(float4(posL, 1.0f), gWVP);
}
Here is the pixel shader:
//PixelShader.ps
float4 main(float4 PosH : SV_POSITION, uniform float4 Color) : SV_Target
{
    return Color;
}
If I compile these two shaders with the D3DCompile function, reflect them with D3DReflect, and examine the BoundResources member of the shader description, each of them has a constant buffer called $Params that contains the respective uniform variable. The problem is that both of those buffers are bound to slot 0. When binding resources I have to use the ID3D12RootSignature interface, which binds resources to the resource slots. How can I bind the $Params buffer of the vertex shader only to the vertex shader, and the $Params buffer of the pixel shader only to the pixel shader?
Thanks in advance,
Philinator
With DX12, the performant solution here is to create a root signature that meets your application's needs, and ideally declare it in your HLSL as well. You don't want to change root signatures often, so you definitely want to make one that works for a large swath of your shaders.
Remember, DX12 is Direct3D without training wheels, magic, or shader patching, so you just have to do it all explicitly yourself. Ease of use is a non-goal for Direct3D 12. There are plenty of good reasons to stick with Direct3D 11 unless your application and developer resources merit the extra control Direct3D 12 affords you. With great power comes great responsibility.
I believe you can achieve what you want with something like:
CD3DX12_DESCRIPTOR_RANGE descRange[1];
descRange[0].Init(D3D12_DESCRIPTOR_RANGE_TYPE_CBV, 1, 0);

CD3DX12_ROOT_PARAMETER rootParameters[2];
rootParameters[0].InitAsDescriptorTable(
    1, &descRange[0], D3D12_SHADER_VISIBILITY_VERTEX); // b0
rootParameters[1].InitAsDescriptorTable(
    1, &descRange[0], D3D12_SHADER_VISIBILITY_PIXEL); // b0

// Create the root signature.
CD3DX12_ROOT_SIGNATURE_DESC rootSignatureDesc(_countof(rootParameters),
    rootParameters, 0, nullptr,
    D3D12_ROOT_SIGNATURE_FLAG_ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT);

ComPtr<ID3DBlob> signature;
ComPtr<ID3DBlob> error;
DX::ThrowIfFailed(D3D12SerializeRootSignature(&rootSignatureDesc,
    D3D_ROOT_SIGNATURE_VERSION_1, &signature, &error));
DX::ThrowIfFailed(
    device->CreateRootSignature(0, signature->GetBufferPointer(),
        signature->GetBufferSize(),
        IID_PPV_ARGS(m_rootSignature.ReleaseAndGetAddressOf())));
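At draw time you would then set this root signature and feed each descriptor table its own constant buffer view, roughly like this (m_cbvHeap, vsCbvGpuHandle, and psCbvGpuHandle are placeholders for wherever you put the two CBVs in a shader-visible descriptor heap):
commandList->SetGraphicsRootSignature(m_rootSignature.Get());
ID3D12DescriptorHeap* heaps[] = { m_cbvHeap.Get() };
commandList->SetDescriptorHeaps(_countof(heaps), heaps);
// Root parameter 0 feeds b0 for the vertex shader, parameter 1 feeds b0 for the pixel shader.
commandList->SetGraphicsRootDescriptorTable(0, vsCbvGpuHandle);
commandList->SetGraphicsRootDescriptorTable(1, psCbvGpuHandle);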

OpenGL ES 2.0 Vertex Shader Attributes

I've read in tutorials that the bare minimum a vertex shader needs is defined below:
attribute vec3 v3_variable;
attribute vec3 v3Pos;
void main()
{
    gl_Position = vec4(v3Pos, 1.0);
}
OpenGL ES passes vertex data to v3Pos, but the name of this variable can actually be anything; other tutorials name it a_Position, a_Pos, whatever.
So my question is: when I call glVertexAttribPointer(), how does OpenGL ES know that it should put that data into v3Pos instead of v3_variable?
There are two methods of identifying which attributes in your shaders get 'paired' with the data you provide to glVertexAttribPointer. The only identifier you get with glVertexAttribPointer is the index (the first parameter), so it must be done by index somehow.
In the first method, you obtain the index of the attribute from your shader after it is compiled and linked, using glGetAttribLocation, and use that as the first parameter, e.g.:
GLint program = ...;
GLint v3PosAttributeIndex = glGetAttribLocation(program, "v3Pos");
GLint v3varAttributeIndex = glGetAttribLocation(program, "v3_variable");
glVertexAttribPointer(v3PosAttributeIndex, ...);
glVertexAttribPointer(v3varAttributeIndex, ...);
You can also explicitly set the layout of the attributes when authoring the shader (as #j-p's comment suggests), so you would modify your shader like so:
layout(location = 1) in vec3 v3_variable;
layout(location = 0) in vec3 v3Pos;
And then in code you would set them explicitly:
glVertexAttribPointer(0, ...); // v3Pos data
glVertexAttribPointer(1, ...); // v3_variable data
This second method only works in GLES 3.0+ (and OpenGL 3.0+), so it may not be applicable to your case if you are only using GLES 2.0. Also, note that you can use both of these methods together; e.g., even if you explicitly define the layouts, it doesn't restrict you from querying them later.
Okay, so on the app side you will have a call that looks like either
glBindAttribLocation(MyShader, 0, "v3Pos");
(before you link the shader) or
vertexLoc = glGetAttribLocation(MyShader, "v3Pos");
after you link the program.
Then the first argument of the glVertexAttribPointer call will be either 0 or vertexLoc:
glVertexAttribPointer(vertexLoc, 3, GL_FLOAT, GL_FALSE, sizeof(MyVertex), BUFFER_OFFSET(0));
In short, the GL linker is going to either assign v3Pos its own location, or go with what you tell it. Either way you are going to have to perform some interaction with the shader in order to interact with the attribute.
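For the glBindAttribLocation route, the call has to go between creating the program and linking it; a rough sketch (vertexShader and fragmentShader stand in for already-compiled shader objects):
GLuint program = glCreateProgram();
glAttachShader(program, vertexShader);
glAttachShader(program, fragmentShader);

// Must happen before glLinkProgram to take effect.
glBindAttribLocation(program, 0, "v3Pos");
glBindAttribLocation(program, 1, "v3_variable");

glLinkProgram(program);

// Attribute 0 is now v3Pos, attribute 1 is v3_variable.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(MyVertex), BUFFER_OFFSET(0));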
Hope that helps

OpenGL - glUniformBlockBinding after or before glLinkProgram?

I'm trying to use Uniform Buffer Objects to share my projection matrix across different shaders (e.g., a deferred pass for solid objects and a forward pass for transparent ones). I think that I'll add more data to the UBO later on, as the complexity grows. My problem is that the Red Book says:
To explicitly control a uniform block's binding, call glUniformBlockBinding() before calling glLinkProgram().
But the online documentation says:
When a program object is linked or re-linked, the uniform buffer object binding point assigned to each of its active uniform blocks is reset to zero.
What am I missing? Should I bind the uniform block before or after linking?
Thanks.
glUniformBlockBinding needs the program name and the index of the block within that particular program.
void glUniformBlockBinding(GLuint program, GLuint uniformBlockIndex, GLuint uniformBlockBinding);
uniformBlockIndex can be obtained by calling glGetUniformBlockIndex on the program.
GLuint glGetUniformBlockIndex(GLuint program, const char *uniformBlockName);
From the API (http://www.opengl.org/wiki/GLAPI/glGetUniformBlockIndex):
program must be the name of a program object for which the command glLinkProgram must have been called in the past, although it is not required that glLinkProgram must have succeeded. The link could have failed because the number of active uniforms exceeded the limit.
So the correct order is:
void glLinkProgram(GLuint program);
GLuint glGetUniformBlockIndex(GLuint program, const char *uniformBlockName);
void glUniformBlockBinding(GLuint program, GLuint uniformBlockIndex, GLuint uniformBlockBinding);
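Putting it together, a minimal sketch (ProjectionBlock and projectionUbo are made-up names, and binding point 0 is an arbitrary choice):
glLinkProgram(program);

// Ask the linked program where the block lives, then assign it a binding point.
GLuint blockIndex = glGetUniformBlockIndex(program, "ProjectionBlock");
glUniformBlockBinding(program, blockIndex, 0);

// Attach the UBO holding the projection matrix to that same binding point.
glBindBufferBase(GL_UNIFORM_BUFFER, 0, projectionUbo);
If you ever re-link the program, remember that the block's binding resets to zero, so call glUniformBlockBinding again afterwards.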

Mapped resources and assigning data in Direct3D 11

Hi, I'm working with shaders and I've just got a quick question about whether or not I can do something here. When mapping data to a buffer I normally see it done like this: define a class or struct to represent what's in the buffer.
class MatrixBuffer
{
public:
    D3DXMATRIX worldMatrix;
    D3DXMATRIX viewMatrix;
    D3DXMATRIX projectionMatrix;
};
Then set up an input element description when you create the input layout; I got all that. Now, when I see people update the buffers before rendering, it's mostly done like this:
MatrixBuffer* dataPtr;
D3D11_MAPPED_SUBRESOURCE mappedResource;
if (FAILED(deviceContext->Map(m_matrixBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource)))
    return false;
dataPtr = (MatrixBuffer*) mappedResource.pData;
dataPtr->worldMatrix = worldMatrix;
dataPtr->viewMatrix = viewMatrix;
dataPtr->projectionMatrix = projectionMatrix;
deviceContext->Unmap(m_matrixBuffer, 0);
where the values assigned are actually valid D3DXMATRIX objects. Seeing as that's okay, would anything go wrong if I just did it like this?
MatrixBuffer* dataPtr;
D3D11_MAPPED_SUBRESOURCE mappedResource;
if (FAILED(deviceContext->Map(m_matrixBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource)))
    return false;
dataPtr = (MatrixBuffer*) mappedResource.pData;
*dataPtr = matrices;
deviceContext->Unmap(m_matrixBuffer, 0);
where matrices is a valid, filled-out MatrixBuffer object? Would there be dire consequences later because of some special way DirectX handles this, or is it fine?
I don't see anything wrong with your version, from a technical perspective.
It might be slightly harder to grasp what actually gets copied there, and from a theoretical viewpoint the resulting code should be pretty much the same -- I would expect that any decent optimizer would notice that the "long" version just copies several successive items, and lump them all into one larger copy.
Try to avoid mapping, as it can stall the pipeline. For data that gets updated once per frame it's better to use a D3D11_USAGE_DEFAULT buffer and then push the new data in with the UpdateSubresource method, as sketched below.
More on that here: http://blogs.msdn.com/b/shawnhar/archive/2008/04/14/stalling-the-pipeline.aspx
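A minimal sketch of that path, assuming m_matrixBuffer was created with D3D11_USAGE_DEFAULT instead of D3D11_USAGE_DYNAMIC:
// No Map/Unmap; the runtime schedules the copy into the buffer for you.
MatrixBuffer matrices;
matrices.worldMatrix = worldMatrix;
matrices.viewMatrix = viewMatrix;
matrices.projectionMatrix = projectionMatrix;

deviceContext->UpdateSubresource(m_matrixBuffer, 0, nullptr, &matrices, 0, 0);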