Program crashes on accessing SSBO in shader - c++

Whenever I try to access an SSBO, I get an error saying atio6axx.pdb not loaded.
My graphics card (AMD) drivers are up to date, but funnily enough, while searching for a solution I found this thread, which was posted only a few hours ago, so could this be a driver issue? I searched my PC and found the .dll but not the .pdb; could that be the problem? I've got Visual Studio set to load symbols from the Microsoft Symbol Servers but not from the NuGet.org Symbol Servers.
Relevant code:
Fragment shader (simplified to show only the necessary code):
#version 430 core
layout(binding = 5, std430) buffer test
{
float t[];
};
out vec4 colour;
void main()
{
colour = vec4(test.t[0], test.t[1], test.t[2], 1);
}
Creating the SSBO:
float test[3] { 0, 10, 0 };
glGenBuffers(1, &ss_id);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ss_id);
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(float) * 3, test, GL_STATIC_READ);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 5, ss_id);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);
Any help is appreciated

For anyone else having this issue, I found out why it was happening. I was referring to the data in the SSBO as test.t[0], when it should have been just t[0]: because the buffer block has no instance name, its members live in the global scope and are accessed directly rather than through the block name.
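For illustration, here is the corrected fragment shader, wrapped in a C++ raw string literal purely for presentation (the variable name fragmentSource is made up; only the array accesses in main() actually change):
// Hypothetical holder for the corrected shader source.
const char* fragmentSource = R"GLSL(
#version 430 core
layout(binding = 5, std430) buffer test
{
float t[];
};
out vec4 colour;
void main()
{
// Members of an unnamed buffer block are referenced directly, without the block name.
colour = vec4(t[0], t[1], t[2], 1.0);
}
)GLSL";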

Related

Collision with two fragment shaders at the same time

My intention was that the less space an object takes up on the screen, the less bright it should appear.
This is the first fragment shader, fs_1:
#version 450 core
layout (binding = 3, offset = 0) uniform atomic_uint area;
void main(void) {
atomicCounterIncrement(area);
}
This is the second fragment shader, fs_2:
#version 450 core
layout (binding = 3) uniform area_block {
uint counter_value;
};
out vec4 color;
layout(location = 4) uniform float max_area;
void main(void){
float brightness = clamp(float(counter_value) / max_area, 0.0, 1.0);
color = vec4(brightness, brightness, brightness, 1.0);
}
I then attached the shaders to the program object. Now comes the problematic part; I wonder whether this is even acceptable or a complete mess. I created a named buffer and bound it to the GL_ATOMIC_COUNTER_BUFFER binding point, then bound it to binding index 3 and reset the counter to zero. After that, my intention was to reuse the atomic counter buffer in the second fragment shader, so I also bound it to the GL_UNIFORM_BUFFER target. Last, I pass the maximum expected area to the second shader in order to calculate the brightness.
glUseProgram(program);
GLuint buf;
glGenBuffers(1, &buf);
glBindBuffer(GL_ATOMIC_COUNTER_BUFFER, buf);
glBufferData(GL_ATOMIC_COUNTER_BUFFER, 16 * sizeof(GLuint), NULL, GL_DYNAMIC_COPY);
glBindBufferBase(GL_ATOMIC_COUNTER_BUFFER, 3, buf);
const GLuint zero = 0;
glBufferSubData(GL_ATOMIC_COUNTER_BUFFER, 2 * sizeof(GLuint), sizeof(GLuint), &zero);
glBindBufferBase(GL_UNIFORM_BUFFER, 3, buf);
glUniform1f(4, info.windowHeight * info.windowWidth);//max_area
As it stands, it doesn't seem to work. I suppose I also need to insert glColorMask somewhere, in order to turn off the output of the first fragment shader. Furthermore, I think I have to do something with glMemoryBarrier, or is that not necessary? Have I called the functions in the wrong order?
I found no real references to this on the internet, and no sample code on how to accomplish it. I'd be thankful for any answers.
Edit:
In addition, I got an error message which makes the problem obvious:
glLinkProgram failed to link a GLSL program with the following program info log: 'Fragment shader(s) failed to link.
Fragment link error: INVALID_OPERATION.
ERROR: 0:8: error(#248) Function already has a body: main
ERROR: error(#273) 1 compilation errors. No code generated
I soon noticed that this might be the problem: two fragment shaders each define a main body. So how could I fix that? Could I keep both main bodies and instead create a second program object?
I got the design to work eventually!
I followed the great advice in the comments. To sum up: I created two program objects with the same vertex shader attached to both. After that, I used program (with fragment shader fs_1) as follows:
glUseProgram(program);
glDrawArrays(GL_TRIANGLES, 0, 3);
For some reason, I didn't have to call glMemoryBarrier or glColorMask.
I then made the following calls in my render-loop:
virtual void render(double currentTime) {
glUseProgram(program1);
glDrawArrays(GL_TRIANGLES, 0, 3);
}
There, I used the second program object, program1 (with fragment shader fs_2). I suppose it works because there needs to be a vertex shader in each program object, not just in one of them. The first draw of the triangle produces no color output and only counts the covered area; the second draw uses the "counted area" to calculate the brightness of the triangle.
I also realised that the shaders are only executed when you issue a draw call.
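To summarise, here is a minimal sketch of the two-pass flow under the assumptions above (program, program1, buf and info come from the earlier snippets; the glColorMask and glMemoryBarrier calls were not needed on my driver, but they are included here defensively):
// Pass 1: count covered fragments with the atomic counter (program contains fs_1).
glUseProgram(program);
glBindBufferBase(GL_ATOMIC_COUNTER_BUFFER, 3, buf);
const GLuint zero = 0;
glBufferSubData(GL_ATOMIC_COUNTER_BUFFER, 0, sizeof(GLuint), &zero); // reset at offset 0, matching offset = 0 in fs_1
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // this pass produces no color output
glDrawArrays(GL_TRIANGLES, 0, 3);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
// Make the counter value visible to the uniform block read in pass 2.
glMemoryBarrier(GL_UNIFORM_BARRIER_BIT);
// Pass 2: read the counter through the uniform block and shade (program1 contains fs_2).
glUseProgram(program1);
glBindBufferBase(GL_UNIFORM_BUFFER, 3, buf);
glUniform1f(4, info.windowHeight * info.windowWidth); // max_area
glDrawArrays(GL_TRIANGLES, 0, 3);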

'glDrawArrays: attempt to access out of range vertices in attribute 1' on Emscripten/OpenGL code (works in native C++) [closed]

I have narrowed down the issue to this: I have two attributes pointing at the exact same data. This works fine when built as native C++. However, when built with Emscripten, the JavaScript console shows the following error on each frame:
'glDrawArrays: attempt to access out of range vertices in attribute 1'
When I comment out the 'glEnableVertexAttribArray' line that enables the second attribute, I don't get this error.
Below is my code. I'll start with the data buffer creation:
GLfloat rectangleData[] =
{
-.5f, -.5f, 0,1,
-.5f, .5f, 0,0,
.5f, .5f, 1,0,
.5f, -.5f, 1,1,
-.5f, -.5f, 0,1
};
glGenBuffers(1, &rectangleBuffer);
glBindBuffer(GL_ARRAY_BUFFER, rectangleBuffer);
glBufferData(
GL_ARRAY_BUFFER, sizeof(rectangleData),
rectangleData, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
Here is a relevant excerpt from my textured quad drawing code:
glBindBuffer(GL_ARRAY_BUFFER, rectangleBuffer);
int vertexPosition = Shader::getParameterInfo("vertexPosition")->id;
glVertexAttribPointer(
vertexPosition, 2, GL_FLOAT,
GL_FALSE, 16, BUFFER_OFFSET(0));
glEnableVertexAttribArray(vertexPosition);
int vertexTexCoord = Shader::getParameterInfo("vertexTexCoord")->id;
glVertexAttribPointer(
vertexTexCoord, 2, GL_FLOAT,
GL_FALSE, 16, BUFFER_OFFSET(0));
glEnableVertexAttribArray(vertexTexCoord);
glDrawArrays(GL_TRIANGLE_FAN, 0, 5);
Notice that I've adjusted the second attribute to point to the same data as the first (to reduce complexity while debugging). I'm pretty stumped here and could really use a fresh/experienced perspective.
EDIT: Here's what BUFFER_OFFSET looks like:
#define BUFFER_OFFSET(i) ((char *)NULL + (i))
Source: How to cast int to const GLvoid*?
EDIT: For what it's worth, here is the equivalent Emscripten generated JS code. I'll post any JS code this references if requested.
dest=$rectangleData; src=2328; stop=dest+80|0;
do { HEAP32[dest>>2]=HEAP32[src>>2]|0; dest=dest+4|0; src=src+4|0; } while ((dest|0) < (stop|0));
_glGenBuffers(1,(2300|0));
$30 = HEAP32[2300>>2]|0;
_glBindBuffer(34962,($30|0));
_glBufferData(34962,80,($rectangleData|0),35044);
_glBindBuffer(34962,0);
$11 = HEAP32[2300>>2]|0;
_glBindBuffer(34962,($11|0));
$12 = (__ZN8platform6Shader16getParameterInfoEPKc(17356)|0);
$13 = HEAP32[$12>>2]|0;
$vertexPosition = $13;
$14 = $vertexPosition;
_glVertexAttribPointer(($14|0),2,5126,0,16,(0|0));
$15 = $vertexPosition;
_glEnableVertexAttribArray(($15|0));
$16 = (__ZN8platform6Shader16getParameterInfoEPKc(17379)|0);
$17 = HEAP32[$16>>2]|0;
$vertexTexCoord = $17;
$18 = $vertexTexCoord;
_glVertexAttribPointer(($18|0),2,5126,0,16,(0|0));
$19 = $vertexTexCoord;
_glEnableVertexAttribArray(($19|0));
_glDrawArrays(6,0,5);
Edit: Better yet, here is a link to the JS code running on GitHub, and the C++ code too (it's near the bottom, in "drawImage()"):
https://rawgit.com/jon-heard/Native-WebGL-framework/c134e35ac94fdf3243a9662353ad2227f8c84b43/Native-WebGL-framework/web/index.html
https://github.com/jon-heard/Native-WebGL-framework/blob/c134e35ac94fdf3243a9662353ad2227f8c84b43/Native-WebGL-framework/src/platform/draw.cpp
The issue is that you have a single vertex shader that ALWAYS USES 2 ATTRIBUTES:
var gl = document.createElement("canvas").getContext("webgl");
var program = twgl.createProgramFromScripts(gl, ["vs", "fs"]);
log("list of used attributes");
log("-----------------------");
var numAttribs = gl.getProgramParameter(program, gl.ACTIVE_ATTRIBUTES);
for (var ii = 0; ii < numAttribs; ++ii) {
var attribInfo = gl.getActiveAttrib(program, ii);
if (!attribInfo) {
break;
}
log(gl.getAttribLocation(program, attribInfo.name), attribInfo.name);
}
function log(...args) {
var div = document.createElement("div");
div.textContent = [...args].join(" ");
document.body.appendChild(div);
}
<script src="https://twgljs.org/dist/2.x/twgl.min.js"></script>
<script type="foo" id="vs">
uniform mat4 sceneTransform;
uniform mat4 rotationTransform;
uniform vec2 objectPosition;
uniform vec2 objectScale;
attribute vec2 vertexPosition;
attribute vec2 vertexTexCoord;
varying vec2 UVs;
void main()
{
UVs = vertexTexCoord;
gl_Position =
sceneTransform *
vec4( vertexPosition * objectScale + objectPosition, 0, 1);
}
</script>
<script type="foo" id="fs">
precision mediump float;
uniform vec3 objectColor;
uniform float objectOpacity;
void main()
{
gl_FragColor = vec4(objectColor, objectOpacity);
}
</script>
When you call drawCircle you assign both of those attributes to buffers; then, in your code above, if you don't do something about the second attribute, it is still pointing at the previous buffer. That buffer is too small for your draw call, so you get an error.
WebGL won't complain about unused attributes, but it will complain about used attributes. You should always supply the attributes your shader needs.
In your case you've got at least 2 options:
Change your code so your shader only uses one attribute
You've got just one vertex shader, if I read the code correctly. For the cases where your fragment shader isn't going to use the texture coordinates, use a different vertex shader that doesn't supply them.
Disable the attribute so that it uses a constant value
Calling gl.disableVertexAttribArray(...) means that the attribute will use a constant value supplied by gl.vertexAttribXXX.
Option 1 is arguably better than option 2, because your vertex shader won't waste time reading from an attribute and copying it to a varying, only for the fragment shader not to use it.
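If you do go with option 2, a minimal sketch on the C++/Emscripten side might look like this (vertexTexCoord is the attribute index from the question; the constant value chosen here is arbitrary):
glDisableVertexAttribArray(vertexTexCoord); // attribute 1 no longer reads from a buffer
glVertexAttrib2f(vertexTexCoord, 0.0f, 0.0f); // this constant value is used for every vertex instead
glDrawArrays(GL_TRIANGLE_FAN, 0, 5);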
I saw this error when I was trying to recreate this and made a mistake. The tutorial code can be found here. I got this error because I defined the color buffer AFTER setting the data in the position buffer. I fixed it by defining the color buffer first, and then defining the position buffer and binding data to it. That did the trick. So, in conclusion, this error can appear if we do not define the attributes sequentially.
The error actually arises from a different place: from a draw call in the drawCircle function. By the looks of it, you forgot to disable unused attribute arrays. Here you use just one attribute, which is bound to 0, but the error is for attribute 1. Evidently, you've enabled the vertex array for attribute 1 somewhere and forgot to disable it. Now the draw call verifies its bindings, finds that they are incorrect and raises a GL_INVALID_OPERATION error. The spec says that the out-of-bounds check should be performed only for attributes used by the current program, but by the looks of it, at least Chromium simply checks all array-enabled ones.
UPD: I misunderstood Chromium's code. As pointed out by @gman, it indeed checks for out-of-bounds accesses only on the attributes used by the current program.
I encountered this error because a vec2 attribute I was passing to a shader didn't contain data for each of my vertices...

Why is glDrawArrays failing on Intel Mesa 10.3, while working with nVidia using OpenGL 3.3

I am trying to run the Piccante image processing library on an Intel GPU. The library uses OpenGL shaders to apply filters to images. According to its documentation the library uses OpenGL 4.0, so I had to make a small modification in order to get it to run in an OpenGL 3.3 context, which is supported by the Intel Mesa 10.3 driver.
I changed the following line (in buffer_op.hpp) when the shader is created:
prefix += glw::version("330"); // before glw::version("400")
After this modification my program still works perfectly fine on the nVidia GPU, even when initializing the OpenGL context as OpenGL 3.3 (Core Profile).
On the Intel GPU the program works only partly. It seems to work fine as long as the images are single-channel. When the images are RGB, the drawing no longer works and my images end up black.
I have traced the error down to the following line (in quad.hpp):
void Render()
{
glBindVertexArray(vao); // (I checked that vao is not 0 here)
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); // glGetError() = 1286 (GL_INVALID_OPERATION)
glBindVertexArray(0);
}
This is the initialization of the Vertex Array Object and the Vertex Buffer Object:
float *data = new float[8];
data[0] = -halfSizeX;
data[1] = halfSizeY;
data[2] = -halfSizeX;
data[3] = -halfSizeY;
data[4] = halfSizeX;
data[5] = halfSizeY;
data[6] = halfSizeX;
data[7] = -halfSizeY;
//Init VBO
glGenBuffers(1, &vbo[0]);
glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(GLfloat), data, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
//Init VAO
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(0);
glBindVertexArray(0);
glDisableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
This is the generated fragment shader I am trying to run:
#version 330
uniform sampler2D u_tex_1;
uniform vec4 u_val_0;
uniform vec4 u_val_1;
in vec2 v_tex_coord;
out vec4 f_color;
void main(void) {
ivec2 coords = ivec2(gl_FragCoord.xy);
vec4 ret = u_val_0;
f_color = ret;
}
I checked that the vertex shader and fragment shader compile and link successfully. Does this mean that the shader is GLSL 3.3 compatible and the problem is not within the shader but somewhere else?
What could be causing the program to fail on RGB images while it works fine on single channel images?
What could cause the program to fail with Intel Mesa 10.3 driver while working fine with nVidia driver when the context is initialized on both as OpenGL 3.3?
There seems to be a lot of reasons that could cause GL_INVALID_OPERATION when rendering. What other things could I check in order to trace down the error?
Thanks a lot for any help!
I've been talking with Francesco Banterle, the author of the Piccante library, and he pointed out the following:
Regarding Intel drivers, the issue is due to the fact that these drivers do
not automatically align buffers, so I may have to force three color
channels to be RGBA instead of RGB.
I changed the internal format from GL_RGB32F to GL_RGBA32F when loading the RGB textures:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0,
GL_RGB, GL_FLOAT, data); // before GL_RGB32F
This seemed to fix the problems on the Intel drivers.

GLSL 4.50 messed up my Shader Storage Buffer Objects

My vertex shader code was working fine with GLSL 4.30, but after upgrading to GLSL 4.50 it no longer seems to be able to read values from SSBOs.
Here is the code that offloads data to the buffers:
glBindBuffer(GL_SHADER_STORAGE_BUFFER, buf_plumeX);
glBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, sizeof(GLFrame->PlumeSetP->plX), GLFrame->PlumeSetP->plX);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 7, buf_plumeX);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, buf_plumeY);
glBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, sizeof(GLFrame->PlumeSetP->plY), GLFrame->PlumeSetP->plY);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 6, buf_plumeY);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, buf_plumeZ);
glBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, sizeof(GLFrame->PlumeSetP->plZ), GLFrame->PlumeSetP->plZ);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 5, buf_plumeZ);
In the shader, I read those buffers using the following code:
layout ( std140, binding = 7) buffer PlumeBufferX {
float plumeX[];
};
layout ( std140, binding = 6) buffer PlumeBufferY {
float plumeY[];
};
layout ( std140, binding = 5) buffer PlumeBufferZ {
float plumeZ[];
};
I calculate an offset and fetch the data at that specific offset:
vec3 p = vec3(plumeX[offset], plumeY[offset], plumeZ[ offset] );
Now, no matter how I calculate the offset, when I render the points based on this p position it looks like they are all around the value of zero!
I am positively sure that this was not the behavior I was getting with GLSL 4.30 because I had recorded a video of my render.
Any idea what is causing the problem and why my SSBOs appear zeroed out with GLSL 4.50?
Thanks.
I ended up using glMapBuffer instead of glBufferSubData, and dynamic binding instead of a hard-coded binding, as described on this page.
It works!
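A rough sketch of that approach for one of the three buffers, under my assumptions about what the dynamic binding looks like (querying the block index at runtime with glGetProgramResourceIndex and assigning it with glShaderStorageBlockBinding; buf_plumeX and the plX array come from the question, while program stands in for whatever the program object is called):
// Upload by mapping the buffer instead of calling glBufferSubData.
glBindBuffer(GL_SHADER_STORAGE_BUFFER, buf_plumeX);
void* ptr = glMapBuffer(GL_SHADER_STORAGE_BUFFER, GL_WRITE_ONLY);
memcpy(ptr, GLFrame->PlumeSetP->plX, sizeof(GLFrame->PlumeSetP->plX));
glUnmapBuffer(GL_SHADER_STORAGE_BUFFER);
// Resolve the block's binding at runtime instead of hard-coding it in the shader.
GLuint blockIndex = glGetProgramResourceIndex(program, GL_SHADER_STORAGE_BLOCK, "PlumeBufferX");
glShaderStorageBlockBinding(program, blockIndex, 7);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 7, buf_plumeX);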

Reading and updating texture buffer in OpenGL/GLSL 4.3

I'm going a bit nuts over this, since I don't really get what is wrong and what isn't. Either there is something I've vastly misunderstood, or there is some kind of bug in the code or in the driver. I'm running this on an AMD Radeon 5850 with the latest Catalyst beta drivers as of last week.
OK, I began an OIT-rendering implementation and wanted to use an array of structs stored in a shader storage buffer object. Well, the indices in that one were ending up in the wrong places in memory, and I pretty much assumed it was a driver bug, since they only recently started supporting such things and, yeah, it's a beta driver.
Therefore I stepped back a notch and used GLSL images backed by texture buffer objects instead, which I guess have been supported for a while now.
It still wasn't behaving correctly, so I created a simple test project, fumbled around a bit, and now I think I've pinned down where the problem is.
OK! First I initialize the buffer and texture.
//Clearcolor and Cleardepth setup, disabling of depth test, compile and link shaderprogram etc.
...
//
GLint tbo, tex;
datasize = resolution.x * resolution.y * 4 * sizeof(GLfloat);
glGenBuffers(1, &tbo);
glBindBuffer(GL_TEXTURE_BUFFER, tbo);
glBufferData(GL_TEXTURE_BUFFER, datasize, NULL, GL_DYNAMIC_COPY);
glBindBuffer(GL_TEXTURE_BUFFER, 0);
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_BUFFER, tex);
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, tex);
glBindTexture(GL_TEXTURE_BUFFER, 0);
glBindImageTexture(2, tex, 0, GL_TRUE, 0, GL_READ_WRITE, GL_RGBA32F);
Then the rendering loop is: update and draw, update and draw... with a delay in between so that I have time to see what the update does.
The update is like this...
ivec2 resolution; //Using GLM
resolution.x = (GLuint)(iResolution.x + .5f);
resolution.y = (GLuint)(iResolution.y + .5f);
glBindBuffer(GL_TEXTURE_BUFFER, tbo);
void *ptr = glMapBuffer(GL_TEXTURE_BUFFER, GL_WRITE_ONLY);
color *c = (color*)ptr; //color is a simple struct containing 4 GLfloats.
for (int i = 0; i < resolution.x*resolution.y; ++i)
{
c[i].r = c[i].g = c[i].b = c[i].a = 1.0f;
}
glUnmapBuffer(GL_TEXTURE_BUFFER); c = (color*)(ptr = NULL);
glBindBuffer(GL_TEXTURE_BUFFER, 0);
And the draw is like this...
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMemoryBarrier(GL_ALL_BARRIER_BITS);
ShaderProgram->Use(); //Simple shader program class
quad->Draw(GL_TRIANGLES); //Simple mesh class containing triangles (vertices) and colors
glFinish();
glMemoryBarrier(GL_ALL_BARRIER_BITS);
I just put some memory barriers around to be extra sure; that shouldn't hurt anything except performance, right? Well, the outcome was the same with or without the barriers anyway, so... :)
The shader program is a simple pass-through vertex shader plus the fragment shader that does the testing.
Vertex shader
#version 430
in vec3 in_vertex;
void main(void)
{
gl_Position = vec4(in_vertex, 1.0);
}
Fragment shader (I guess coherent and memoryBarrier() aren't really needed here, since I synchronize on the CPU between draws/fragment shader executions... but does it do any harm?)
#version 430
uniform vec2 iResolution;
layout(binding = 2, rgba32f) coherent uniform imageBuffer colorMap;
out vec4 FragColor;
void main(void)
{
ivec2 res = ivec2(int(iResolution.x + 0.5), int(iResolution.y + 0.5));
ivec2 pos = ivec2(int(gl_FragCoord.x + 0.5), int(gl_FragCoord.y + 0.5));
int pixelpos = pos.y * res.x + pos.x;
memoryBarrier();
vec4 prevPixel = imageLoad(colorMap, pixelpos);
vec4 green = vec4(0.0, 1.0, 0.0, 0.0);
imageStore(colorMap, pixelpos, green);
FragColor = prevPixel;
}
Expectation: a white screen! After all, I'm writing "white" to the whole buffer between every draw, even though I'm writing green to the image after the load in the actual shader.
Result: the first frame is green, the rest are black. Some part of me thinks there is a white frame that's too fast to be seen, or some vsync thing that tears it, but is this a place for logic? :P
Well, then I tried something new and moved the update block (where I'm writing "white" to the whole buffer) to the init instead.
Expectation: A white first frame, followed by a green screen.
Result: oh yes, it's green all right! Although the first frame shows some white/green artifacts, and sometimes it is only green. That is probably due to (lack of) vsync or something; I haven't checked it out. Still, I think I got the result I was looking for.
The conclusion I can draw from this is that there is something wrong in my update.
Does it unhook the buffer from the texture reference or something? In that case, isn't it weird that the first frame is OK? It's only after the first imageStore command (well, the first frame) that the texture goes all black; the "bind()-map()-unmap()-bind(0)" works the first time, but not afterwards.
My picture of glMapBuffer is that it copies the buffer data from GPU to CPU memory, lets you alter it, and Unmap copies it back. Well, just now I thought that maybe it doesn't copy the buffer from GPU to CPU and then back, but only one way? Could it be that GL_WRITE_ONLY should be changed to GL_READ_WRITE? Well, I've tried both. If one of them were correct, wouldn't my screen always be white in "test 1" when using that one?
ARGH, what am I doing wrong?
EDIT:
Well, I still don't know... Obviously glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, tex); should be glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, tbo);, but I think tbo and tex had the same value since they were generated in the same order. Therefore it worked in this implementation.
I have solved it, though, in a manner I'm not very happy with, since I really think that the above should work. On the other hand, the new solution is probably a bit better performance-wise.
Instead of using glMapBuffer(), I switched to keeping a copy of the TBO memory on the CPU, using glBufferSubData() and glGetBufferSubData() to send the data between CPU and GPU. This worked, so I'll just continue with that solution.
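For illustration, a minimal sketch of that workaround (cpuCopy is a made-up name for the CPU-side mirror; color, resolution, datasize and tbo come from the code above):
color* cpuCopy = new color[resolution.x * resolution.y]; // CPU-side mirror of the TBO contents
// Edit the copy on the CPU...
for (int i = 0; i < resolution.x * resolution.y; ++i)
cpuCopy[i].r = cpuCopy[i].g = cpuCopy[i].b = cpuCopy[i].a = 1.0f;
// ...then push it to the buffer in one go (CPU -> GPU):
glBindBuffer(GL_TEXTURE_BUFFER, tbo);
glBufferSubData(GL_TEXTURE_BUFFER, 0, datasize, cpuCopy);
// And read the GPU contents back into the copy when needed (GPU -> CPU):
glGetBufferSubData(GL_TEXTURE_BUFFER, 0, datasize, cpuCopy);
glBindBuffer(GL_TEXTURE_BUFFER, 0);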
But, yeah, the question still stands - Why doesn't glMapBuffer() work with my texture buffer objects?
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, tex);
should be
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, tbo);
Perhaps there is something else wrong, but this stands out.
https://www.opengl.org/wiki/Buffer_Texture
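For completeness, a sketch of the corrected initialization under that diagnosis (everything is the question's own setup except the third argument of glTexBuffer, plus GLuint instead of GLint for the object names):
GLuint tbo, tex;
glGenBuffers(1, &tbo);
glBindBuffer(GL_TEXTURE_BUFFER, tbo);
glBufferData(GL_TEXTURE_BUFFER, datasize, NULL, GL_DYNAMIC_COPY);
glBindBuffer(GL_TEXTURE_BUFFER, 0);
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_BUFFER, tex);
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, tbo); // attach the buffer object, not the texture
glBindTexture(GL_TEXTURE_BUFFER, 0);
glBindImageTexture(2, tex, 0, GL_TRUE, 0, GL_READ_WRITE, GL_RGBA32F);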