I found out that running a tessellation control shader that contains a barrier() call on my integrated Intel video card yields no geometry.
For example, the following simple shader works fine on my dedicated Nvidia card, but produces nothing on the Intel one:
#version 450 core
layout (vertices = 4) out;
void main() {
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;
    gl_TessLevelInner[0] = 1;
    gl_TessLevelInner[1] = 1;
    gl_TessLevelOuter[0] = 1;
    gl_TessLevelOuter[1] = 1;
    gl_TessLevelOuter[2] = 1;
    gl_TessLevelOuter[3] = 1;
    barrier();
}
However, the same code without the barrier() call works on the Intel card too.
I would be grateful if someone could suggest where to look for the cause of the problem.
UPDATE
I found out that when barrier() is called before the gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position; statement, everything works fine, so I just need to move this statement to the end:
#version 450 core
layout (vertices = 4) out;
void main() {
    gl_TessLevelInner[0] = 1;
    gl_TessLevelInner[1] = 1;
    gl_TessLevelOuter[0] = 1;
    gl_TessLevelOuter[1] = 1;
    gl_TessLevelOuter[2] = 1;
    gl_TessLevelOuter[3] = 1;
    barrier();
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;
}
This behavior is still not clear to me, so it would be great if someone could explain why I shouldn't write to gl_out before barrier().
This looks like a graphics driver or hardware bug to me - there is nothing wrong with what you are doing from a specification point of view.
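For anyone hitting the same problem, a defensive arrangement that combines the reordering from the update with the invocation-0 guard commonly used for per-patch outputs may help. This is only a sketch; since the original code is already legal, none of it is required by the spec:
#version 450 core
layout (vertices = 4) out;
void main() {
    // Write the per-patch tessellation levels from a single invocation only;
    // this is the conventional pattern and tends to avoid driver quirks.
    if (gl_InvocationID == 0) {
        gl_TessLevelInner[0] = 1;
        gl_TessLevelInner[1] = 1;
        gl_TessLevelOuter[0] = 1;
        gl_TessLevelOuter[1] = 1;
        gl_TessLevelOuter[2] = 1;
        gl_TessLevelOuter[3] = 1;
    }
    barrier();
    // Per-vertex output written after the barrier, as in the update above.
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;
}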
Related
I'm using OpenGL to render fractals, which can result in very long draw calls (multiple seconds). If a draw call ends up taking too long (a couple of seconds), I get this error:
Unhandled exception at 0x69A5E899 (nvoglv32.dll) in Mandelbulb.exe: Fatal program exit requested.
Presumably OpenGL assumes I'm stuck in an infinite loop and therefore aborts the program.
I've isolated the problem, as it goes away if I lower the quality. (In the example code, notice that the program runs just fine if maxIterations is set to something reasonable like 1000.)
Is there any way to hint to OpenGL that it shouldn't abort?
Maybe using compute shaders instead would work?
Sample code for reproducing (very stupid example, not the focus): make an OpenGL project, draw two triangles that together form a rectangle covering the entire screen, then use this fragment shader to color it.
Vertex:
#version 330 core
layout(location = 0) in vec4 position;
out vec4 pos;
void main()
{
    gl_Position = position;
    pos = position;
}
Fragment:
#version 330 core
layout(location = 0) out vec4 color;
in vec4 pos;
const int maxIterations = 8000000;
void main()
{
    vec2 npos = pos.xy;
    vec2 z = vec2(0.0, 0.0);
    int i = maxIterations;
    // Classic Mandelbrot iteration: bail out once |z| exceeds 2.
    for (; i > 0; i--)
    {
        float y2 = z.y * z.y;
        float x2 = z.x * z.x;
        if (x2 + y2 > 4.0) break;
        z.y = 2.0 * z.x * z.y + npos.y;
        z.x = x2 - y2 + npos.x;
    }
    color = vec4(float(i) / float(maxIterations));
}
Edit:
Trying what @Scheff recommended worked. However, I sadly can't accept a comment as an answer.
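For readers who hit the same abort: one common mitigation (not necessarily what @Scheff suggested; that comment is not reproduced here) is to split the expensive pass into many small scissored draws, so that no single draw call runs long enough to trip the OS watchdog. A rough sketch, assuming width, height, and the full-screen geometry from the question are already set up:
const int tileSize = 128;                 // tune so one tile renders in well under a second
glEnable(GL_SCISSOR_TEST);
for (int y = 0; y < height; y += tileSize) {
    for (int x = 0; x < width; x += tileSize) {
        glScissor(x, y, tileSize, tileSize);
        glDrawArrays(GL_TRIANGLES, 0, 6); // same full-screen quad; the scissor limits the work
        glFinish();                       // crude, but ensures each tile completes before the next
    }
}
glDisable(GL_SCISSOR_TEST);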
I am currently using a dynamic buffer to store the uniforms of multiple objects and update them individually without changing the shader. The problem is that when I add a VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER to my descriptor set, the render shows just the background color of the texture (I can't post images due to low reputation). This is my initialization of the descriptor writes:
descriptorWrites[0].sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
descriptorWrites[0].dstSet = descriptorSet;
descriptorWrites[0].pNext = NULL;
descriptorWrites[0].dstBinding = 0;
descriptorWrites[0].dstArrayElement = 0;
descriptorWrites[0].descriptorType = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC;
descriptorWrites[0].descriptorCount = 1;
descriptorWrites[0].pBufferInfo = &bufferInfo;
descriptorWrites[1].sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
descriptorWrites[1].dstSet = descriptorSet;
descriptorWrites[1].pNext = NULL;
descriptorWrites[1].dstBinding = 1;
descriptorWrites[1].dstArrayElement = 0;
descriptorWrites[1].descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
descriptorWrites[1].descriptorCount = 1;
descriptorWrites[1].pImageInfo = &imageInfo;
And this is my fragment shader:
#version 450
#extension GL_ARB_separate_shader_objects : enable
layout(set=0, binding = 1) uniform sampler2D texSampler;
layout(location = 0) in vec3 fragColor;
layout(location = 1) in vec2 fragTexCoord;
layout(location = 0) out vec4 outColor;
void main() {
    outColor = texture(texSampler, fragTexCoord * 2.0);
}
What could be the reason for this? I have updated the descriptor layout, the validation layers don't throw any errors, and the dynamic buffer was working perfectly even for multiple objects. Is the binding corrupted or something like that?
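For context, this is roughly how the two writes above get committed, along with the VkDescriptorImageInfo fields that are worth double-checking; textureImageView and textureSampler are assumed handles from a typical setup:
VkDescriptorImageInfo imageInfo = {0};
imageInfo.imageLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL; // must match the image's actual layout at draw time
imageInfo.imageView = textureImageView;                           // assumed: view of the sampled texture
imageInfo.sampler = textureSampler;                               // assumed: a valid VkSampler
// Commit both descriptor writes in one call.
vkUpdateDescriptorSets(device, 2, descriptorWrites, 0, NULL);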
Since switching hardware from AMD to Intel, something that worked on AMD causes a fatal GLSL error on Intel, and I had to comment it out:
gl_TexCoord[0].st is not recognised and breaks the shader.
I am looking for an alternative method or a workaround for this piece of code:
gl_TexCoord[0].s = r.x / m + 0.5;
gl_TexCoord[0].t = r.y / m + 0.5;
vec4 rS = texture(reflectionSampler, gl_TexCoord[0].st);
OpenGL 3.3, GLSL 3.3 - both vertex & fragment shaders 3.30 core.
gl_TexCoord was removed from core profile GLSL. The easiest way to achieve the same effect is to define a vec2 output variable in the vertex shader:
out vec2 texCoord;
[..]
texCoord = vec2(r.x / m + 0.5, r.y / m + 0.5);
and a matching input variable in the fragment shader:
in vec2 texCoord;
[..]
vec4 rS = texture(reflectionSampler, texCoord);
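Put together, a minimal core-profile pair might look like this (a sketch; the uniform m and the derivation of r stand in for whatever the original shader computes):
#version 330 core
// Vertex shader (sketch)
layout(location = 0) in vec4 position;
out vec2 texCoord;
uniform float m;          // assumed: the same 'm' as in the original code
void main() {
    gl_Position = position;
    vec2 r = position.xy; // assumed: 'r' is derived per vertex, as in the original
    texCoord = r / m + 0.5;
}

#version 330 core
// Fragment shader (sketch)
uniform sampler2D reflectionSampler;
in vec2 texCoord;
out vec4 fragColor;
void main() {
    fragColor = texture(reflectionSampler, texCoord);
}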
I am currently trying to add tessellation shaders to my program and I feel as though I'm at my wit's end. I have gone through several tutorials and delved into a lot of the questions here as well. Unfortunately, I still do not understand what I am doing wrong. I will gladly take any tips, and I admit I'm a total novice here. My vertex and fragment shaders work, but no matter which tutorial I base my code on, I cannot get anything to display once I add tessellation shaders. I get no errors while loading, linking, or using the shaders/program either.
The four shaders in question:
Vertex:
layout (location = 0) in vec4 vertex;
out gl_PerVertex{
vec4 gl_Position;
};
void main()
{
    gl_Position = vertex;
    // note: I have to multiply this by the MVP matrix if there is no tessellation
}
Tessellation Control Shader:
layout (vertices = 3) out;
out gl_PerVertex {
vec4 gl_Position;
} gl_out[];
void main()
{
    if (gl_InvocationID == 0)
    {
        gl_TessLevelInner[0] = 3.0;
        gl_TessLevelOuter[0] = 2.0;
        gl_TessLevelOuter[1] = 2.0;
        gl_TessLevelOuter[2] = 2.0;
    }
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;
}
Tessellation Evaluation Shader:
layout(triangles, equal_spacing, ccw) in;
uniform mat4 ModelViewProjectionMatrix;
out gl_PerVertex{
vec4 gl_Position;
};
void main()
{
    vec4 position = gl_TessCoord.x * gl_in[0].gl_Position +
                    gl_TessCoord.y * gl_in[1].gl_Position +
                    gl_TessCoord.z * gl_in[2].gl_Position;
    gl_Position = ModelViewProjectionMatrix * position;
}
Fragment Shader:
void main()
{
    gl_FragColor = vec4(0.1, 0.4, 0.0, 1.0);
}
I'm sure I'm overlooking some really simple stuff here, but I'm at an absolute loss here. Thanks for taking the time to read this.
When you draw with a tessellation control shader active (it is an optional stage), you must use GL_PATCHES as the primitive type.
Additionally, the number of vertices per patch defaults to 3; set it explicitly to match your control shader's layout:
glPatchParameteri (GL_PATCH_VERTICES, 3);
glDrawElements (GL_PATCHES, 3, ..., ...);
The code listed above will draw a single triangle patch. Now, since geometry shaders and tessellation evaluation shaders do not understand GL_PATCHES, if you remove the tessellation control shader, the draw call will do nothing. Only the tessellation control shader can make sense of the GL_PATCHES primitive, and conversely it cannot make sense of any other primitive type.
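Putting it together, the draw path with all four stages active looks roughly like this (a sketch; program and vao are assumed handles created elsewhere):
glUseProgram(program);                    // program with VS + TCS + TES + FS linked
glBindVertexArray(vao);
glPatchParameteri(GL_PATCH_VERTICES, 3);  // 3 control points per patch, matching layout (vertices = 3)
glDrawArrays(GL_PATCHES, 0, 3);           // one triangle patch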
Using OpenGL 3.3 core profile, I'm rendering a full-screen "quad" (as a single oversized triangle) via gl.DrawArrays(gl.TRIANGLES, 0, 3) with the following shaders.
Vertex shader:
#version 330 core
#line 1
vec4 vx_Quad_gl_Position () {
    const float extent = 3;
    const vec2 pos[3] = vec2[](vec2(-1, -1), vec2(extent, -1), vec2(-1, extent));
    return vec4(pos[gl_VertexID], 0, 1);
}
void main () {
    gl_Position = vx_Quad_gl_Position();
}
Fragment shader:
#version 330 core
#line 1
out vec3 out_Color;
vec3 fx_RedTest (const in vec3 vCol) {
    return vec3(0.9, 0.1, 0.1);
}
vec3 fx_Grayscale (const in vec3 vCol) {
    return vec3((vCol.r * 0.3) + (vCol.g * 0.59) + (vCol.b * 0.11));
}
void main () {
    out_Color = fx_RedTest(out_Color);
    out_Color = fx_Grayscale(out_Color);
}
Now, the code may look a bit odd and its present purpose may seem useless, but that shouldn't faze the GL driver.
On a GeForce, I get a gray screen as expected. That is, the "grayscale effect" applied to the hard-coded color "red" (0.9, 0.1, 0.1).
However, Intel HD 4000 [driver 9.17.10.2932 (12-12-2012), the newest as of today] always and repeatedly shows nothing but the following constantly-flickering noise pattern:
Now, just to experiment a little, I changed the fx_Grayscale() function around a little bit -- effectively it should be yielding the same visual result, just with slightly different coding:
vec3 fx_Grayscale (const in vec3 vCol) {
    vec3 col = vec3(0.9, 0.1, 0.1);
    col = vCol;
    float x = (col.r * 0.3) + (col.g * 0.59) + (col.b * 0.11);
    return vec3(x, x, x);
}
Again, Nvidia does the correct thing, whereas Intel HD now always and repeatedly produces a rather different, but still constantly-flickering, noise pattern:
Must I suspect (yet another) Intel GL driver bug, or do you see any issues with my GLSL code -- not from a prettiness perspective (it's part of a shader code-gen experimental project) but from a mere spec-correctness point of view?
I think it looks strange to pass an "out" variable as a parameter to another function; out_Color has no defined value until you write to it. I would rewrite it something like this:
void main () {
    vec3 col = vec3(0.0);
    col = fx_RedTest(col);
    col = fx_Grayscale(col);
    out_Color = col;
}
Does it make any difference?