I have been writing some code for a basic rendering application.
The renderer code consists of setting up Vertex Array and Vertex Buffer Objects for rendering entities, and 3 texture units are created for use as the diffuse, specular and emission component respectively. I wrote a wrapper class for my shader, to create and set uniforms.
void Shader::SetVec3Uniform(const GLchar * name, vec3 data)
{
    glUniform3f(glGetUniformLocation(this->program, name), data);
}
When I set the uniforms using the above function, it doesn't render with the expected result. But when I find the location before I set the uniform, it renders correctly. I use the function below to set it.
void Shader::SetVec3Uniform(GLint loc, vec3 data)
{
    glUniform3f(loc, data.x, data.y, data.z);
}
So my question is: is the data lost, or is it not reaching the shader on the GPU in time? I am honestly stumped; I have no idea why this subtle difference in the way the uniforms are set causes such a difference in the render.
Can anybody shed any light on this?
Well, according to the docs, the prototype for the function glUniform3f is:
void glUniform3f(GLint location,
                 GLfloat v0,
                 GLfloat v1,
                 GLfloat v2);
and for the function glUniform3fv it is:
void glUniform3fv(GLint location,
                  GLsizei count,
                  const GLfloat *value);
How is vec3 defined? The program would probably work if you had something like:
// struct vec3 { float x; float y; float z; };
glUseProgram(this->program);
glUniform3f(glGetUniformLocation(this->program, name), data.x, data.y, data.z);
or
// struct vec3 { float arr[3]; };
glUseProgram(this->program);
glUniform3fv(glGetUniformLocation(this->program, name), 1, &data.arr[0]);
Depending on your vec3 implementation of course.
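If vec3 happens to be glm::vec3 (an assumption here, since the question doesn't say), glm stores the three floats contiguously, so the wrapper could also be written with glUniform3fv and glm::value_ptr. A minimal sketch:
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

void Shader::SetVec3Uniform(const GLchar *name, glm::vec3 data)
{
    // Make sure the program owning the uniform is current before setting it.
    glUseProgram(this->program);
    // glm::value_ptr returns a pointer to the first of the three floats.
    glUniform3fv(glGetUniformLocation(this->program, name), 1, glm::value_ptr(data));
}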
I have a few questions about Vulkan concepts.
1. vkCmdBindDescriptorSets arguments
Here is the prototype:
// Provided by VK_VERSION_1_0
void vkCmdBindDescriptorSets(
    VkCommandBuffer          commandBuffer,
    VkPipelineBindPoint      pipelineBindPoint,
    VkPipelineLayout         layout,
    uint32_t                 firstSet,
    uint32_t                 descriptorSetCount,
    const VkDescriptorSet*   pDescriptorSets,
    uint32_t                 dynamicOffsetCount,
    const uint32_t*          pDynamicOffsets);
I want to know the usage of descriptorSetCount and pDescriptorSets. A pipeline uses only one VkDescriptorSet in a draw call, so what does it mean to provide multiple VkDescriptorSets with the same layout?
2. GLSL uniform layout
layout(set = 0, binding = 0) uniform UniformBufferObject
{
    mat4 view;
    mat4 projection;
    vec3 position_camera;
} ubo;
What does set mean?
See also: Understanding Vulkan uniform layout's 'set' index.
vkCmdBindDescriptorSets updates the currently bound array of descriptor sets.
The set index in the shader selects which element of that array (array[index]) the shader reads from.
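For illustration, a minimal sketch (the handles cmdBuf, pipelineLayout, perFrameSet and perMaterialSet are hypothetical and assumed to be created elsewhere) that binds two descriptor sets in one call; set = 0 and set = 1 in the shaders then refer to sets[0] and sets[1] respectively:
// Hypothetical descriptor sets created elsewhere, both compatible with pipelineLayout.
VkDescriptorSet sets[2] = { perFrameSet, perMaterialSet };

vkCmdBindDescriptorSets(
    cmdBuf,
    VK_PIPELINE_BIND_POINT_GRAPHICS,
    pipelineLayout,
    0,          // firstSet: sets[0] becomes set = 0, sets[1] becomes set = 1
    2,          // descriptorSetCount
    sets,
    0, NULL);   // no dynamic offsets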
I am using glMultiDrawElementsIndirect in combination with ARB_bindless_texture to draw things. I fetch texture handles with the built-in vertex shader variable gl_DrawID. This variable is said to be a dynamically uniform expression, which means I don't have to depend on NV_gpu_shader5 when accessing bindless textures inside the shader. It looks something like this.
// Vertex shader
#version 460 core

out flat int MaterialIndex;

void main()
{
    ...
    MaterialIndex = gl_DrawID;
}

// Fragment shader
#version 460 core
#extension GL_ARB_bindless_texture : require

struct Material
{
    sampler2D Albedo;
    ivec2 pad0;
};

layout(std140, binding = 1) uniform BindlessUBO
{
    Material Materials[256];
} bindlessUBO;

in flat int MaterialIndex;

void main()
{
    Material material = bindlessUBO.Materials[MaterialIndex];
    vec3 albedo = texture(material.Albedo, coords).rgb;
}
This works on an NVIDIA RTX 3050 Ti, which supports NV_gpu_shader5. The model, with all of its meshes and textures, gets drawn correctly in one draw call. However, on an AMD RX 5700 XT, which does not support the extension, the driver crashes after a few seconds. Both GPUs can render the model without any textures. So my guess is that MaterialIndex is not a dynamically uniform expression even though it gets its value from one. Does passing MaterialIndex from the vertex to the fragment shader make it a non-dynamically-uniform expression? How could I obtain the index of the drawing command within glMultiDrawElementsIndirect in the fragment shader as a dynamically uniform expression? Is there a way to accomplish what I am trying to do without NV_gpu_shader5?
After upgrading from AMD driver 19.12.2 to 21.12.1 the bug disappeared. So passing gl_DrawID from the vertex to the fragment shader using the flat qualifier does in fact preserve it as a dynamically uniform expression, and I don't need to depend on NV_gpu_shader5 :).
I am trying to set up some structures inside my UBO, but when querying the location of the struct member I added, I get an invalid value.
I am modifying this sample to see how it works:
https://www.packtpub.com/books/content/opengl-40-using-uniform-blocks-and-uniform-buffer-objects
so I basically added:
struct LightParameters
{
    vec3 pos;
};

layout (std140) uniform BlobSettings
{
    vec4 InnerColor;
    vec4 OuterColor;
    float RadiusInner;
    float RadiusOuter;
    LightParameters lightParam;
};
then I tried:
// Query for the offsets of each block variable
const GLchar *names[] =
{
"InnerColor", "OuterColor", "RadiusInner", "RadiusOuter", "pos",
};
glGetUniformIndices(shaderLight.programId, 4+1, names, uniformBlock.loc);
where uniformBlock.loc is just an array of locations, GLuint[5]. But uniformBlock.loc[4] is invalid after the call, and that entry is exactly the struct member I added.
Can someone point me in the right direction?
Using
"InnerColor", "OuterColor", "RadiusInner", "RadiusOuter", "lightParam.pos",
worked. Uniforms that live inside a struct are reported under their full dotted name (instance name plus member), so the query has to ask for "lightParam.pos" rather than "pos".
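For completeness, a minimal sketch of the corrected query (the local array names here are illustrative), including how the byte offsets inside the block can then be fetched:
const GLchar *names[] =
{
    "InnerColor", "OuterColor", "RadiusInner", "RadiusOuter", "lightParam.pos",
};

// Query the uniform indices for all five block members.
GLuint indices[5];
glGetUniformIndices(shaderLight.programId, 5, names, indices);

// With valid indices, the std140 offsets can be queried as well.
GLint offsets[5];
glGetActiveUniformsiv(shaderLight.programId, 5, indices, GL_UNIFORM_OFFSET, offsets);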
In GLSL there seems to be a shader linking error when I try to pass a uniform struct with a sampler2D member to a function that is forward declared. The code works if I remove the forward declaration and move the function above main. Is this illegal code?
#version 330 core

in vec2 texcoords;
out vec4 color;

struct Material {
    sampler2D tex; // Sampler inside a struct
};
uniform Material material;

// Forward Declaration
vec4 add(Material m);

void main() {
    color = add(material);
}

// Function Definition
vec4 add(Material m) {
    return vec4(texture(m.tex, texcoords));
}
// C++
glUniform1i(glGetUniformLocation(shader, "material.tex"), 0); // sampler uniforms take a texture unit index via glUniform1i
glBindTexture(GL_TEXTURE_2D, texture);
EDIT:
So after a bit of searching it appears to be a bug in AMD's driver. I personally use an ATI Mobility Radeon HD 4670, which is pretty old, but it still runs OpenGL 3.3. On AMD's forums I found a similar post, so it would be interesting to know how widespread this is on AMD's graphics cards. Because if you're developing on Intel or NVidia, how are you to know your shaders won't compile on some AMD graphics cards? Should we stay safe and not use prototypes for functions taking structs with samplers, or even go as far as not putting samplers in structs at all?... It's also worth noting that WebGL doesn't even allow for samplers inside structs.
Error Message:
Vertex shader(s) failed to link, fragment shader(s) failed to link.
unexpected error.
unexpected error.
This actually is not supposed to work because you cannot instantiate a struct that contains an opaque data type (sampler, image, atomic counter). It is acceptable to have a struct with an opaque type as a uniform, but you cannot implement the function add (...) by passing an instance of Material.
Data Type (GLSL) - Opaque Types
Variables of opaque types can only be declared in one of two ways. They can be declared at global scope, as uniform variables. Such variables can be arrays of the opaque type. They can be declared as members of a struct, but if so, then the struct can only be used to declare a uniform variable (or to declare a member of a struct/array that is itself a uniform variable). They cannot be part of a buffer-backed interface block or an input/output variable, either directly or indirectly.
This change to your code should work across all compliant implementations of OpenGL:
// Must be in, because you cannot assign values to an opaque type.
vec4 add (in sampler2D tex) {
    return vec4(texture(tex, texcoords));
}
This would also work, since it uses the sampler in the material.tex uniform:
vec4 add (void) {
    return vec4(texture(material.tex, texcoords));
}
In HLSL I must use semantics to pass info from a vertex shader to a fragment shader. In GLSL no semantics are needed. What is an objective benefit of semantics?
Example: GLSL
vertex shader
varying vec4 foo;
varying vec4 bar;

void main() {
    ...
    foo = ...
    bar = ...
}
fragment shader
varying vec4 foo;
varying vec4 bar;

void main() {
    gl_FragColor = foo * bar;
}
Example: HLSL
vertex shader
struct VS_OUTPUT
{
    float4 foo : TEXCOORD3;
    float4 bar : COLOR2;
};

VS_OUTPUT whatever()
{
    VS_OUTPUT output;
    output.foo = ...
    output.bar = ...
    return output;
}
pixel shader
void main(float4 foo : TEXCOORD3,
          float4 bar : COLOR2) : COLOR
{
    return foo * bar;
}
I see how foo and bar in the VS_OUTPUT get connected to foo and bar in main in the fragment shader. What I don't get is why I have to manually choose semantics to carry the data. Why can't DirectX, like GLSL, just figure out where to put the data and connect it when linking the shaders?
Is there some more concrete advantage to manually specifying semantics, or is it just left over from the assembly language shader days? Is there some speed advantage to choosing, say, TEXCOORD4 over COLOR2 or BINORMAL1?
I get that semantics can imply meaning; there's no meaning to foo or bar. But they can also obscure meaning just as well if foo is not a TEXCOORD and bar is not a COLOR. We don't put semantics on C# or C++ or JavaScript variables, so why are they needed for HLSL?
Simply put, old GLSL used these varying variables, matched by name between stages (note that varying is now deprecated).
An obvious benefit of semantics is that you don't need the same variable names between stages: the DirectX pipeline does the matching via semantics instead of variable names, and rearranges the data as long as you have a compatible layout.
If you rename foo to foo2, you need to replace that name in potentially all your shaders (and eventually subsequent ones). With semantics you don't need to.
Also, since you don't need an exact match, it allows easier separation between shader stages.
For example:
you can have a vertex shader like this:
struct vsInput
{
    float4 PosO : POSITION;
    float3 Norm : NORMAL;
    float4 TexCd : TEXCOORD0;
};

struct vsOut
{
    float4 PosWVP : SV_POSITION;
    float4 TexCd : TEXCOORD0;
    float3 NormView : NORMAL;
};

vsOut VS(vsInput input)
{
    // Do your processing here
}
And a pixel shader like this:
struct psInput
{
    float4 PosWVP : SV_POSITION;
    float4 TexCd : TEXCOORD0;
};
Since the vertex shader output provides all the inputs that the pixel shader needs, this is perfectly valid. The normals will simply be ignored and not provided to your pixel shader.
But then you can swap in a new pixel shader that might need normals, without needing another vertex shader implementation. You can also swap only the pixel shader, hence saving some API calls (the Separate Shader Objects extension is the OpenGL equivalent).
So in some ways semantics provide a way to encapsulate the in/out interface between your stages, and since you reference other languages, that would be roughly the equivalent of using properties/setters/pointer addresses...
Speed-wise there's no difference depending on naming (you can name semantics pretty much any way you want, except for the system ones, of course). A different layout position does imply a hit, though (the pipeline will reorganize it for you, but will also issue a warning, at least in DirectX).
OpenGL also provides layout qualifiers, which are roughly equivalent (technically a bit different, but following more or less the same concept).
Hope that helps.
As Catflier has mentioned above, using semantics can help decouple the shaders.
However, there are some downsides of semantics that I can think of:
The number of semantics is limited by the DirectX version you are using, so you might run out.
The semantic names can be a little misleading. You will see this kind of variable a lot (look at the posWorld variable):
struct v2f {
    float4 position : POSITION;
    float4 posWorld : TEXCOORD0;
};
You can see that the posWorld variable and the TEXCOORD0 semantic are not related at all, but it is fine because we do not actually need to pass a texture coordinate through TEXCOORD0. However, this is likely to cause some confusion for people who have just picked up a shading language.
I much prefer writing : NAME after a variable to writing layout(location = N) in front of it.
Also, unlike in Vulkan, you don't need to update the HLSL shader if you change the order of the vertex inputs.