I am experiencing odd crashes when doing float comparisons in a Vulkan geometry shader. The shader code is as follows:
#version 450
#extension GL_ARB_separate_shader_objects : enable
#extension GL_ARB_shading_language_420pack : enable
layout (triangles) in;
layout (triangle_strip, max_vertices=3) out;
layout(binding = 0) uniform UniformBufferObject {
    mat4 modelView;
    mat4 staticModelView;
} ubo;
in vec2 texCoordGeom[];
layout(location = 0) out vec2 texCoord;
void main() {
    float dist0 = length(gl_in[0].gl_Position.xyz - gl_in[1].gl_Position.xyz);
    float dist1 = length(gl_in[1].gl_Position.xyz - gl_in[2].gl_Position.xyz);
    float dist2 = length(gl_in[0].gl_Position.xyz - gl_in[2].gl_Position.xyz);
    float maxDist = max(dist0, max(dist1, dist2));
    if(maxDist < 0.01) {
        gl_Position = ubo.modelView * gl_in[0].gl_Position;
        texCoord = texCoordGeom[0];
        EmitVertex();
        gl_Position = ubo.modelView * gl_in[1].gl_Position;
        texCoord = texCoordGeom[1];
        EmitVertex();
        gl_Position = ubo.modelView * gl_in[2].gl_Position;
        texCoord = texCoordGeom[2];
        EmitVertex();
        EndPrimitive();
    }
}
It appears to crash at the conditional:
if(maxDist < 0.01)
When I remove this conditional, the code runs without issues. If I change the threshold from 0.01 to something larger, such as 0.1 or 1, the code again runs without issues.
Note that I am using glslangValidator.exe from the Vulkan SDK to compile the shader code. No validation errors are thrown except for the warning:
Warning, version 450 is not yet complete; most version-specific features are present, but some are missing.
Also note that no helpful errors are thrown when the program does crash: the entire GPU freezes (the screen goes black momentarily) and the program exits.
For future readers: this appeared to be a driver issue. Since updating to the latest driver (Radeon Driver Packaging Version 16.50.2011-161219a-309792E) along with the latest LunarG Vulkan SDK (1.0.37.0), the problem has resolved itself. Note that I was running on an AMD Radeon R9 380 Series.
I installed the latest Vulkan SDK on my computer; however, whenever I want to generate the SPIR-V files for my shaders through glslangValidator.exe, it fails and returns the following errors:
ERROR: Shader.vert:17: 'location' : SPIR-V requires location for user input/output
ERROR: 1 compilation errors. No code generated.
ERROR: Linking vertex stage: Missing entry point: Each stage requires one entry point
SPIR-V is not generated for failed compile or link
I found out that since update 1.0.51.1 there are some changes that might cause my old shaders to fail:
Require locations on user in/out in GL_KHR_vulkan_glsl (internal issue 783).
What is the proper/new way to fix this issue?
Vertex shader:
#version 450
#extension GL_ARB_separate_shader_objects : enable
layout(binding = 0) uniform UniformBufferObject {
    mat4 model;
    mat4 view;
    mat4 proj;
} ubo;
layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec3 inNormals;
layout(location = 2) in vec2 inTexCoord;
layout(location = 0) out vec3 fragColor;
layout(location = 1) out vec2 fragTexCoord;
out vec4 Normal;
out gl_PerVertex {
    vec4 gl_Position;
};
void main()
{
    gl_Position = ubo.proj * ubo.view * ubo.model * vec4(inPosition, 1.0);
    //fragColor = inColor;
    fragTexCoord = inTexCoord;
    Normal = ubo.proj * ubo.view * ubo.model * vec4(inNormals, 1.0);
}
I assume that you need to explicitly set the location through a layout qualifier for all your stage input and output variables:
layout( location=<number> ) ...
Vulkan requires all user-defined input and output variables to have an explicitly provided location value (uniform blocks and opaque uniforms use set/binding qualifiers instead). Interface matching between shader stages is performed only through location values (as opposed to OpenGL, where it can be performed through either names or locations). I'm not sure, as I have always provided location values, but maybe earlier versions of glslangValidator assigned them implicitly when they were missing.
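For the vertex shader above, that means every user-declared input and output needs a location, including Normal (the gl_PerVertex block is a built-in and needs none). A minimal sketch of the corrected interface declarations; picking location 2 for Normal is just an assumption, it only has to match the corresponding fragment shader input:
#version 450
#extension GL_ARB_separate_shader_objects : enable
layout(binding = 0) uniform UniformBufferObject {
    mat4 model;
    mat4 view;
    mat4 proj;
} ubo;
// Every user-defined input/output gets an explicit location.
layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec3 inNormals;
layout(location = 2) in vec2 inTexCoord;
layout(location = 0) out vec3 fragColor;
layout(location = 1) out vec2 fragTexCoord;
layout(location = 2) out vec4 Normal; // previously had no location, hence the compile error
// Built-in interface block, no location required.
out gl_PerVertex {
    vec4 gl_Position;
};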
I am trying to implement a height map with GLSL.
For that, I need to send my texture to the vertex shader and read its grey component.
glActiveTexture(GL_TEXTURE0);
Texture.bind();
glUniform1i(mShader.getUniformLocation("heightmap"), 0);
mShader.getUniformLocation uses glGetUniformLocation and works fine for the other uniform values used in the fragment and vertex shaders. But for heightmap it returns -1...
Vertex shader code:
#version 330 core
layout (location = 0) in vec3 position;
layout (location = 1) in vec4 color;
layout (location = 2) in vec2 texCoords;
layout (location = 3) in vec3 normal;
out vec3 Normal;
out vec3 FragPos;
out vec2 TexCoords;
out vec4 ourColor;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
uniform sampler2D heightmap;
void main()
{
    float bias = 0.25;
    float h = 0.0;
    float scale = 5.0;
    h = scale * ((texture2D(heightmap, texCoords).r) - bias);
    vec3 hnormal = vec3(normal.x * h, normal.y * h, normal.z * h);
    vec3 position1 = position * hnormal;
    gl_Position = projection * view * model * vec4(position1, 1.0f);
    FragPos = vec3(model * vec4(position, 1.0f));
    Normal = mat3(transpose(inverse(model))) * normal;
    ourColor = color;
    TexCoords = texCoords;
}
Maybe my algorithm for computing the height is bad, but the error with getting the uniform location stops my work.
What is wrong? Any ideas?
UPD: texCoords (not TexCoords) is of course what is used in
h = scale * ((texture2D(heightmap, texCoords).r) - bias);
My mistake, but it doesn't solve the problem. I'm still getting the same error.
My bet is that your variable has been optimized out by the driver, or that the shader did not compile/link properly. After trying to compile your shader (on my nVidia) I got this in the logs:
0(9) : warning C7050: "TexCoords" might be used before being initialized
You should always check the GLSL compile/link logs; see
How to debug GLSL Fragment shader
especially how glGetShaderInfoLog is used.
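For reference, a minimal sketch of reading that log back; the GLEW header and the helper name printShaderLog are just assumptions here (use whatever loader and naming your project already has), and glGetProgramiv/glGetProgramInfoLog work the same way for link errors:
#include <iostream>
#include <vector>
#include <GL/glew.h>
// Print the compile status and info log of an already-compiled shader object.
void printShaderLog(GLuint shaderId)
{
    GLint status = GL_FALSE;
    glGetShaderiv(shaderId, GL_COMPILE_STATUS, &status);
    GLint logLength = 0;
    glGetShaderiv(shaderId, GL_INFO_LOG_LENGTH, &logLength);
    if (logLength > 1) {
        std::vector<char> log(logLength);
        glGetShaderInfoLog(shaderId, logLength, NULL, log.data());
        std::cerr << "Shader log:\n" << log.data() << std::endl;
    }
    std::cerr << "Compile status: " << (status == GL_TRUE ? "OK" : "FAILED") << std::endl;
}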
In the line
h = scale * ((texture2D(heightmap, TexCoords).r) - bias);
you are using TexCoords, which is an output variable that has not been set yet at this point, so the behavior is undefined. Most likely your gfx driver throws that line away (and maybe others), which would also remove the then-unused heightmap sampler from the compiled shader (hence the -1 location), but that is just my assumption.
What driver and gfx card have you got?
What do the logs return on your setup?
It's really strange. Here is some of the log:
OpenGL Version = 4.1 INTEL-10.2.40
vs shaderid = 1, file = shaders/pointlight_shadow.vert
- Shader 1 (shaders/pointlight_shadow.vert) compile error: ERROR: 0:39: Use of undeclared identifier 'gl_LightSource'
BTW, I'm using C++/OpenGL/GLFW/GLEW on Mac OS X 10.10. Is there a way to check all the versions or attributes required to use "gl_LightSource" in the shader language?
Shader file:
#version 330
// Input vertex data, different for all executions of this shader.
layout(location = 0) in vec3 vertexPosition_modelspace;
layout(location = 1) in vec2 vertexUV;
layout(location = 2) in vec3 vertexNormal_modelspace;
layout(location = 3) in vec3 vertexTangent_modelspace;
layout(location = 4) in vec3 vertexBitangent_modelspace;
out vec4 diffuse,ambientGlobal, ambient;
out vec3 normal,lightDir,halfVector;
out float dist;
out vec3 fragmentcolor;
out vec4 ShadowCoord;
//Model, view, projection matrices
uniform mat4 MVP;
uniform mat4 V;
uniform mat4 M;
uniform mat3 MV3x3;
uniform mat4 DepthBiasMVP;
void main()
{
    //shadow coordinate in light space...
    ShadowCoord = DepthBiasMVP * vec4(vertexPosition_modelspace, 1);
    // first transform the normal into camera space and normalize the result
    normal = normalize(MV3x3 * vertexNormal_modelspace);
    // now normalize the light's direction. Note that according to the
    // OpenGL specification, the light is stored in eye space.
    gl_Position = MVP * vec4(vertexPosition_modelspace, 1);
    vec3 vertexPosition_worldspace = (M * vec4(vertexPosition_modelspace, 1)).xyz;
    vec3 vertexPosition_cameraspace = (V * M * vec4(vertexPosition_modelspace, 1)).xyz;
    //light
    vec3 light0_camerapace = (V * vec4(gl_LightSource[0].position.xyz, 1)).xyz;
    vec3 L_cameraspace = light0_camerapace - vertexPosition_cameraspace;
    lightDir = normalize(L_cameraspace);
    // compute the distance to the light source to a varying variable
    dist = length(L_cameraspace);
    // Normalize the halfVector to pass it to the fragment shader
    {
        // compute eye vector and normalize it
        vec3 eye = normalize(-vertexPosition_cameraspace);
        // compute the half vector
        halfVector = normalize(lightDir + eye);
    }
    // Compute the diffuse, ambient and globalAmbient terms
    diffuse = gl_FrontMaterial.diffuse * gl_LightSource[0].diffuse;
    ambient = gl_FrontMaterial.ambient * gl_LightSource[0].ambient;
    ambientGlobal = gl_LightModel.ambient * gl_FrontMaterial.ambient;
}
You're not specifying a profile in your shader version:
#version 330
The default in this case is core, corresponding to the OpenGL core profile. On some platforms, you could change this to use the compatibility profile:
#version 330 compatibility
But since you say that you're working on Mac OS, that's not an option for you. Mac OS only supports the core profile for OpenGL 3.x and later.
The reason your shader does not compile with the core profile is that you're using a bunch of deprecated pre-defined variables. For example:
gl_FrontMaterial
gl_LightSource
gl_LightModel
All of these go along with the old-style fixed-function pipeline, which is no longer available in the core profile. You will have to define your own uniform variables for these values and pass the values into the shader with glUniform*() calls.
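As a rough sketch of what such replacements could look like for this shader (the struct and uniform names below are just placeholders, not a standard; the application fills them via glUniform*() using names like "light0.diffuse"):
struct LightSource {
    vec4 position; // in eye space, like the old gl_LightSource[0].position
    vec4 diffuse;
    vec4 ambient;
};
struct Material {
    vec4 diffuse;
    vec4 ambient;
};
uniform LightSource light0;     // replaces gl_LightSource[0]
uniform Material frontMaterial; // replaces gl_FrontMaterial
uniform vec4 lightModelAmbient; // replaces gl_LightModel.ambient
// The lighting terms at the end of main() then become:
//   diffuse       = frontMaterial.diffuse * light0.diffuse;
//   ambient       = frontMaterial.ambient * light0.ambient;
//   ambientGlobal = lightModelAmbient * frontMaterial.ambient;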
I wrote a more detailed description of what happened to built-in GLSL variables in the transition to the core profile in an answer here: GLSL - Using custom output attribute instead of gl_Position.
There's a uniform vec3 in my shader that causes some odd behavior. If I use it in any way inside the shader (even if it has no actual effect on anything), the shader breaks and nothing that uses it is rendered.
This is the (vertex) shader:
#version 330 core
layout(std140) uniform ViewProjection
{
    mat4 V;
    mat4 P;
};
layout(location = 0) in vec3 vertexPosition_modelspace;
smooth out vec3 UVW;
uniform mat4 M;
uniform vec3 cameraPosition;
void main()
{
    vec3 vtrans = vec3(vertexPosition_modelspace.x, vertexPosition_modelspace.y, vertexPosition_modelspace.z);
    // if(cameraPosition.y == 123456)
    // {}
    mat4 MVP = P * V * M;
    vec4 MVP_Pos = MVP * vec4(vtrans, 1);
    gl_Position = MVP_Pos;
    UVW = vertexPosition_modelspace;
}
If I use it like this, it works fine, but as soon as I uncomment the commented lines, the shader breaks. There's no error when compiling or linking the shader, and glGetError() reports no errors either. It happens if cameraPosition is used in ANY way, even if it's meaningless.
This only happens on my laptop however, which is running OpenGL 3.1. On my PC with OpenGL 4.* I don't have this issue.
What's going on here?
Some info about my graphics card:
GL_RENDERER: Intel(R) G41 Express Chipset
OpenGL_VERSION: 2.1.0 - Build 8.15.10.1986
GLSL_VERSION: 1.20 - Intel Build 8.15.10.1986
Vertex shader 1:
#version 110
attribute vec3 vertexPosition_modelspace;
varying vec3 normal;
varying vec3 vertex;
void light(inout vec3 ver, out vec3 nor);
void main()
{
    gl_Position = vec4(vertexPosition_modelspace, 1.0);
    light(vertex, normal);
}
Vertex shader 2:
#version 110
void light(inout vec3 ver, out vec3 nor)
{
    ver = vec3(0.0, 1.0, 0.0);
    //vec3 v = -ver; // wrong line
    nor = vec3(0.0, 0.0, 1.0);
    //float f = dot(ver, nor); // wrong line
}
Fragment shader:
#version 110
varying vec3 normal;
varying vec3 vertex;
void main()
{
    gl_FragColor = vec4(vertex, 1.0);
}
These shaders work well if the two lines are commented out in the second vertex shader. However, once either of them is enabled, we get an error. The error occurs in the OpenGL function glDrawArrays.
It seems that an out/inout parameter cannot be used as an rvalue (read from).
I have run the same program on an Intel HD Graphics 3000, where the OpenGL version is 3.1 and the GLSL version is 1.40, and the program works well. Is this a bug in Intel's driver, or am I just using it wrong?
Because the Intel G41 is an extremely weak GPU.
The only way around it is to upgrade your GPU.
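If upgrading is not possible, one workaround sketch, assuming the questioner's diagnosis is right and this driver simply mishandles reading back out/inout parameters, is to do all the arithmetic in local variables and assign to the parameters only at the end:
#version 110
void light(inout vec3 ver, out vec3 nor)
{
    // Work in locals; never read the out/inout parameters themselves.
    vec3 v = vec3(0.0, 1.0, 0.0);
    vec3 n = vec3(0.0, 0.0, 1.0);
    vec3 negV = -v;       // reads a local, not the parameter
    float f = dot(v, n);  // likewise
    ver = v;
    nor = n;
}
Per the GLSL spec the original code is valid, so this is only a way to dodge a driver bug, not a required style.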