"Variable is not available in the current GLSL version" - opengl

I am attempting to create a tessellation shader:
#version 410 core
// define the number of CPs in the output patch
layout (vertices = 3) out;
uniform vec3 gEyeWorldPos;
// attributes of the input CPs
in vec3 WorldPos_CS_in[];
in vec2 TexCoord_CS_in[];
in vec3 Normal_CS_in[];
// attributes of the output CPs
out vec3 WorldPos_ES_in[];
out vec2 TexCoord_ES_in[];
out vec3 Normal_ES_in[];
float GetTessLevel(float Distance0, float Distance1)
{
    float AvgDistance = (Distance0 + Distance1) / 2.0;

    if (AvgDistance <= 2.0) {
        return 10.0;
    }
    else if (AvgDistance <= 5.0) {
        return 7.0;
    }
    else {
        return 3.0;
    }
}

void main()
{
    // Set the control points of the output patch
    TexCoord_ES_in[gl_InvocationID] = TexCoord_CS_in[gl_InvocationID];
    Normal_ES_in[gl_InvocationID] = Normal_CS_in[gl_InvocationID];
    WorldPos_ES_in[gl_InvocationID] = WorldPos_CS_in[gl_InvocationID];

    // Calculate the distance from the camera to the three control points
    float EyeToVertexDistance0 = distance(gEyeWorldPos, WorldPos_ES_in[0]);
    float EyeToVertexDistance1 = distance(gEyeWorldPos, WorldPos_ES_in[1]);
    float EyeToVertexDistance2 = distance(gEyeWorldPos, WorldPos_ES_in[2]);

    // Calculate the tessellation levels
    gl_TessLevelOuter[0] = GetTessLevel(EyeToVertexDistance1, EyeToVertexDistance2);
    gl_TessLevelOuter[1] = GetTessLevel(EyeToVertexDistance2, EyeToVertexDistance0);
    gl_TessLevelOuter[2] = GetTessLevel(EyeToVertexDistance0, EyeToVertexDistance1);
    gl_TessLevelInner[0] = gl_TessLevelOuter[2];
}
However, when I run my game, I get the following error:
Error: 1281
Tessellation shader wasn't able to be compiled correctly. Error log:
ERROR: 0:? : '' : Incorrect GLSL version: 410
WARNING: -1:65535: 'GL_ARB_explicit_attrib_location' : extension is not available in current GLSL version
WARNING: 0:4: 'vertices' : symbol not available in current GLSL version
WARNING: 0:? : 'gl_InvocationID' : variable is not available in current GLSL version
WARNING: 0:? : 'gl_InvocationID' : variable is not available in current GLSL version
WARNING: 0:? : 'gl_InvocationID' : variable is not available in current GLSL version
WARNING: 0:? : 'gl_InvocationID' : variable is not available in current GLSL version
WARNING: 0:? : 'gl_InvocationID' : variable is not available in current GLSL version
WARNING: 0:? : 'gl_InvocationID' : variable is not available in current GLSL version
WARNING: 0:? : 'gl_TessLevelOuter' : variable is not available in current GLSL version
WARNING: 0:? : 'gl_TessLevelOuter' : variable is not available in current GLSL version
WARNING: 0:? : 'gl_TessLevelOuter' : variable is not available in current GLSL version
WARNING: 0:? : '
My game appears to be ignoring the shaders, but I guess that is expected with so many errors?
I have already tried googling the error; however, I am unable to find any solution, or even information on why this is happening.
How can I fix the above errors so my shader can run correctly?

Well, you probably don't have an OpenGL 4.1 core context to begin with, and thus your shaders won't compile. Instead of debugging the shaders, I suggest you look at your context creation code and double-check that your system is capable of giving you a 4.1 core context.
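If you happen to be using GLFW, for example, requesting the version and profile explicitly before window creation looks roughly like this (a sketch of the context-creation configuration; other windowing libraries have their own equivalents of these hints):

```cpp
// Ask for a 4.1 core-profile context before glfwCreateWindow().
// Without these hints, many platforms hand back a legacy context,
// and a "#version 410 core" shader then fails exactly as above.
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 1);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE); // required on macOS

GLFWwindow* window = glfwCreateWindow(800, 600, "Tessellation", nullptr, nullptr);
glfwMakeContextCurrent(window);

// Verify what you actually got before blaming the shader:
printf("GL version: %s\n", glGetString(GL_VERSION));
```

If the printed version is below 4.1, the shader error is a symptom of the context, not of the shader source.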

Related

How to "fully bind" a constant buffer view to a descriptor range?

I am currently learning DirectX 12 and trying to get a demo application running. I am currently stuck at creating a pipeline state object using a root signature. I am using dxc to compile my vertex shader:
./dxc -T vs_6_3 -E main -Fo "basic.vert.dxi" -D DXIL "basic.vert"
My shader looks like this:
#pragma pack_matrix(row_major)
struct VertexData
{
    float4 Position : SV_POSITION;
    float4 Color : COLOR;
};

struct VertexInput
{
    float3 Position : POSITION;
    float4 Color : COLOR;
};

struct CameraData
{
    float4x4 ViewProjection;
};

ConstantBuffer<CameraData> camera : register(b0, space0);

VertexData main(in VertexInput input)
{
    VertexData vertex;
    vertex.Position = mul(float4(input.Position, 1.0), camera.ViewProjection);
    vertex.Color = input.Color;
    return vertex;
}
Now I want to define a root signature for my shader. The definition looks something like this:
CD3DX12_DESCRIPTOR_RANGE1 descriptorRange;
descriptorRange.Init(D3D12_DESCRIPTOR_RANGE_TYPE_CBV, 1, 0, 0, D3D12_DESCRIPTOR_RANGE_FLAG_DATA_STATIC, D3D12_DESCRIPTOR_RANGE_OFFSET_APPEND);
CD3DX12_ROOT_PARAMETER1 rootParameter;
rootParameter.InitAsDescriptorTable(1, &descriptorRange, D3D12_SHADER_VISIBILITY_VERTEX);
ComPtr<ID3DBlob> signature, error;
CD3DX12_VERSIONED_ROOT_SIGNATURE_DESC rootSignatureDesc;
rootSignatureDesc.Init_1_1(1, &rootParameter, 0, nullptr, D3D12_ROOT_SIGNATURE_FLAG_ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT);
::D3D12SerializeVersionedRootSignature(&rootSignatureDesc, &signature, &error);
ComPtr<ID3D12RootSignature> rootSignature;
device->CreateRootSignature(0, signature->GetBufferPointer(), signature->GetBufferSize(), IID_PPV_ARGS(&rootSignature));
Finally, I pass the root signature along other state variables to the pipeline state object:
D3D12_GRAPHICS_PIPELINE_STATE_DESC pipelineStateDescription = {};
// ...
pipelineStateDescription.pRootSignature = rootSignature.Get();
ComPtr<ID3D12PipelineState> pipelineState;
device->CreateGraphicsPipelineState(&pipelineStateDescription, IID_PPV_ARGS(&pipelineState));
However, no matter what I do, the device keeps on complaining about the root signature not matching the vertex shader:
D3D12 ERROR: ID3D12Device::CreateGraphicsPipelineState: Root Signature doesn't match Vertex Shader: Shader CBV descriptor range (BaseShaderRegister=0, NumDescriptors=1, RegisterSpace=0) is not fully bound in root signature
[ STATE_CREATION ERROR #688: CREATEGRAPHICSPIPELINESTATE_VS_ROOT_SIGNATURE_MISMATCH]
D3D12: BREAK enabled for the previous message, which was: [ ERROR STATE_CREATION #688: CREATEGRAPHICSPIPELINESTATE_VS_ROOT_SIGNATURE_MISMATCH ]
I am confused about what this error is trying to tell me, since I clearly have a constant buffer bound to register(b0, space0). Or does it mean that I have to allocate a descriptor from a heap before creating the pipeline state object?
I also tried defining a root signature within the shader itself:
#define ShaderRootSignature \
"RootFlags( ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT ), " \
"DescriptorTable( CBV(b0, space = 0, numDescriptors = 1, flags = DATA_STATIC ) ), "
... and compiling it using [RootSignature(ShaderRootSignature)], or specifying -rootsig-define "ShaderRootSignature" for dxc. Then I tried loading the signature as suggested here; however, both approaches fail, since the root signature could not be read from the shader bytecode.
Any clarification on how to interpret the error message would be much appreciated, since I really do not know what the binding in a root signature means in this context. Thanks in advance! 🙂
Long story short: shader visibility in DX12 is not a bit field like it is in Vulkan, so setting the visibility to D3D12_SHADER_VISIBILITY_VERTEX | D3D12_SHADER_VISIBILITY_PIXEL results in the parameter only being visible to the pixel shader. Setting it to D3D12_SHADER_VISIBILITY_ALL solved my problem.
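Applied to the root-parameter setup from the question, the fix is a one-line change (a sketch using the same d3dx12 helper types as above):

```cpp
CD3DX12_DESCRIPTOR_RANGE1 descriptorRange;
descriptorRange.Init(D3D12_DESCRIPTOR_RANGE_TYPE_CBV, 1, 0, 0,
                     D3D12_DESCRIPTOR_RANGE_FLAG_DATA_STATIC,
                     D3D12_DESCRIPTOR_RANGE_OFFSET_APPEND);

CD3DX12_ROOT_PARAMETER1 rootParameter;
// D3D12_SHADER_VISIBILITY is a plain enum, not flags: VERTEX is 1 and
// PIXEL is 5, so VERTEX | PIXEL == 5 == PIXEL. Use ALL (or one stage).
rootParameter.InitAsDescriptorTable(1, &descriptorRange, D3D12_SHADER_VISIBILITY_ALL);
```

The cost of D3D12_SHADER_VISIBILITY_ALL is that the parameter is bound for every stage, which is usually negligible for a single CBV table.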

What is the syntax for 'pixel_interlock_ordered' in GLSL?

I'm trying out the ARB_fragment_shader_interlock extension in OpenGL 4.5 and am failing to get the shader to compile when trying to use pixel_interlock_ordered.
#version 430
#extension GL_ARB_shading_language_420pack : require
#extension GL_ARB_shader_image_load_store : require
#extension GL_ARB_fragment_shader_interlock : require
layout(location = 0, rg8, pixel_interlock_ordered) uniform image2D image1;
void main()
{
    beginInvocationInterlockARB();

    ivec2 coords = ivec2(gl_FragCoord.xy);
    vec4 pixel = imageLoad(image1, coords);
    pixel.g = pixel.g + 0.01;
    if (pixel.g > 0.5)
        pixel.r = pixel.r + 0.01;
    else
        pixel.r = pixel.r + 0.02;
    imageStore(image1, coords, pixel);

    endInvocationInterlockARB();
}
The following shader fails compilation with:
0(6) : error C7600: no value specified for layout qualifier 'pixel_interlock_ordered'
Which is the same error you would get for any random name instead of pixel_interlock_ordered. I guess the syntax is different somehow, but the spec (https://www.khronos.org/registry/OpenGL/extensions/ARB/ARB_fragment_shader_interlock.txt) refers to it as a "layout qualifier".
Googling "pixel_interlock_ordered" comes up short with just links to the official specs, so I can't find an example. What is the correct syntax?
Layout qualifiers in GLSL are a bit weird. They usually apply to declarations, but some of them effectively apply to the shader as a whole. Such qualifiers are basically shader-specific options you set from within the shader.
The interlock qualifiers are those kinds of qualifiers. You're not saying that this variable will be accessed via interlocking, because that's not what interlocking means. It means that the execution of the interlock-bound code will have a certain property, relative to executing interlock-bound code on other invocations of the same shader. The qualifier specifies the details of the execution restriction.
Qualifiers that apply to the shader as a whole are grammatically specified as qualifiers on in or out (most such qualifiers use in, but a few use out):
layout(pixel_interlock_ordered) in;
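Applied to the shader from the question, the qualifier moves off the image declaration onto a bare `in`, and the image keeps only its own qualifiers:

```glsl
// Shader-wide option: interlock ordering is declared on a bare 'in'.
layout(pixel_interlock_ordered) in;

// The image declaration no longer mentions the interlock qualifier.
layout(location = 0, rg8) uniform image2D image1;
```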

Uniform registers requirement on Nvidia with a sampler in vertex shader

The issue I'm going to write about can only be reproduced with certain hardware and drivers.
I managed to reproduce it on a GeForce GTX 760 with the 399.24 driver and a GeForce 950M with 416.34. The issue does not show up on a GTX 650 with the 390.87 driver. All these GPUs return 1024 for GL_MAX_VERTEX_UNIFORM_VECTORS, so I hardcoded 1000 as the vec4 array size in the vertex shader.
I try to compile and link the following shader program.
Vertex shader:
#version 330
uniform vec4 vs[1000];
uniform sampler2D tex;
void main()
{
    gl_Position = vs[42];
    //gl_Position += texture(tex, vs[0].xy); // (1)
}
Fragment shader:
#version 330
out vec4 outFragColor;
void main()
{
    outFragColor = vec4(0.0);
}
Everything is OK while line (1) is commented out and the tex sampler is thus optimized away. But if we uncomment it, linking fails with the following log:
-----------
Internal error: assembly compile error for vertex shader at offset 427:
-- error message --
line 16, column 9: error: invalid parameter array size
line 20, column 16: error: out of bounds array access
line 22, column 30: error: out of bounds array access
-- internal assembly text --
!!NVvp5.0
OPTION NV_internal;
OPTION NV_gpu_program_fp64;
OPTION NV_bindless_texture;
Error: could not link.
# cgc version 3.4.0001, build date Oct 10 2018
# command line args:
#vendor NVIDIA Corporation
#version 3.4.0.1 COP Build Date Oct 10 2018
#profile gp5vp
#program main
#semantic vs
#semantic tex
#var float4 gl_Position : $vout.POSITION : HPOS : -1 : 1
#var float4 vs[0] : : c[0], 1000 : -1 : 1
#var ulong tex : : c[1000] : -1 : 1
PARAM c[2002] = { program.local[0..2001] };
TEMP R0;
LONG TEMP D0;
TEMP T;
PK64.U D0.x, c[1000];
TEX.F R0, c[0], handle(D0.x), 2D;
ADD.F result.position, R0, c[42];
END
# 3 instructions, 1 R-regs, 1 D-regs
Here we see that the array takes registers 0..999, and the sampler takes register 1000. Elements above 1000 are not referenced anywhere except line PARAM c[2002] = { program.local[0..2001] };.
Further experiments with the array size showed that 2002 is not a constant, but double the number of registers required.
I remember that
OpenGL implementations are allowed to reject shaders for
implementation-dependent reasons.
So is there a workaround to use all available registers along with a sampler in a vertex shader?
If not, what might be the rationale behind this behavior? Obviously, this shader does not use any registers for temporary computation results.
Is it a misoptimization in shader compiler?
UPD: a quick-n-dirty reproduction example is here.
UPD2: WebGL repro with workaround description.
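One way to sidestep the default-block register pressure (an untested sketch, not a confirmed fix for this particular driver behavior) is to move the big array into a uniform block, which is backed by a buffer and limited by GL_MAX_UNIFORM_BLOCK_SIZE rather than by default-block uniform registers:

```glsl
#version 330

// The array lives in a UBO, so it no longer competes with the
// sampler for default-uniform-block register allocation.
layout(std140) uniform VsBlock
{
    vec4 vs[1000];
};

uniform sampler2D tex;

void main()
{
    gl_Position = vs[42];
    gl_Position += texture(tex, vs[0].xy);
}
```

On the application side this requires creating a buffer and binding it with glBindBufferBase to the block's binding point, which is a small amount of extra plumbing compared to glUniform4fv.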

shader compilation error on const value

Hi, I'm having a bug in a fragment shader that doesn't compile on certain computers. The program using this shader runs on my computer (Quadro K1000M, OpenGL 4.2) but crashes at launch on my friend's computer (AMD FirePro M4100 FireGL V, OpenGL 4.2).
The error is:
Failed to compile fragment shader:
Fragment shader failed to compile with the following errors:
ERROR: 0:2: error(#207) Non-matching types for const initializer: const
ERROR: error(#273) 1 compilation errors. No code generated
The shader code is as follows:
uniform sampler2D source;
const float Threshold = 0.75;
const float Factor = 4.0;
#define saturate(x) clamp(x, 0.f, 1.f)
void main()
{
    vec4 sourceFragment = texture2D(source, gl_TexCoord[0].xy);
    float luminance = sourceFragment.r * 0.2126 + sourceFragment.g * 0.7152 + sourceFragment.b * 0.0722;
    sourceFragment *= saturate(luminance - Threshold) * Factor;
    gl_FragColor = sourceFragment;
}
I didn't find any info about this error. I asked my friend to edit the shaders and remove the const keyword, and the program then worked. But I don't understand why this keyword generates a compilation error on his computer, since he seems to have the same OpenGL version as I do. I tried to search for known issues between const and shaders, but I didn't find anything.
Do you have any idea what's happening?

GLSL tessellation control shader unknown qualifiers

When I load my tessellation control shader, it outputs:
0(7) : error C3008: unknown layout specifier 'vertices'
0(15) : error C7565: assignment to varying in gl_TessLevelOuterIn
0(16) : error C7565: assignment to varying in gl_TessLevelOuterIn
My shader looks like this:
#version 400
layout(vertices = 2) out;
void main()
{
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;
    gl_TessLevelOuter[0] = float(1);
    gl_TessLevelOuter[1] = float(5);
}
What am I doing wrong here?
The qualifier 'vertices' should be available with #version 400, shouldn't it?
the specs say:
Layout Qualifiers
layout(layout-qualifiers) in/out/uniform
Output Layout Qualifiers
For tessellation control shaders:
vertices = integer-constant
Also, my tessellation evaluation shader says:
0(5) : error C3008: unknown layout specifier 'equal_spacing'
0(5) : error C3008: unknown layout specifier 'isolines'
Am I missing something?
regards,
peter