Trivial OpenGL Shader Storage Buffer Object (SSBO) not working

I am trying to figure out how SSBO works with a very basic example. The vertex shader:
#version 430
layout(location = 0) in vec2 Vertex;
void main() {
    gl_Position = vec4(Vertex, 0.0, 1.0);
}
And the fragment shader:
#version 430
layout(std430, binding = 2) buffer ColorSSBO {
    vec3 color;
};
void main() {
    gl_FragColor = vec4(color, 1.0);
}
I know they work because if I replace vec4(color, 1.0) with vec4(1.0, 1.0, 1.0, 1.0) I see a white triangle in the center of the screen.
I initialize and bind the SSBO with the following code:
GLuint ssbo;
glGenBuffers(1, &ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
float color[] = {1.f, 1.f, 1.f};
glBufferData(GL_SHADER_STORAGE_BUFFER, 3*sizeof(float), color, GL_DYNAMIC_COPY);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 2, ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);
What is wrong here?

My guess is that you are missing the SSBO binding before rendering. In your example you upload the contents and then immediately bind the buffer to its binding point during initialization, which by itself is not enough. In other words, the following line from your example:
...
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 2, ssbo);
...
must be placed before rendering, for example:
...
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 2, ssbo);
/*
Your render calls and other bindings here.
*/
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);
...
Without this, your shader (theoretically) will not be able to see the content.
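Putting it together, here is a minimal sketch of the whole flow (the glDrawArrays call is a stand-in for your actual render calls):
// One-time setup: create and fill the SSBO.
GLuint ssbo;
glGenBuffers(1, &ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
float color[] = {1.f, 1.f, 1.f};
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(color), color, GL_DYNAMIC_COPY);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);

// Every frame: attach the buffer to binding point 2, then draw.
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 2, ssbo);
glDrawArrays(GL_TRIANGLES, 0, 3);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 2, 0); // optional: detach again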
In addition, as Andon M. Coleman has suggested, you have to pad your elements when declaring arrays (e.g., use vec4 instead of vec3), because std430 aligns a vec3 array element to 16 bytes. If you don't, it will appear to work but produce strange results.
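Applied to this example, the safe variant declares a vec4 in the block and uploads four floats; a sketch:
// GLSL: layout(std430, binding = 2) buffer ColorSSBO { vec4 color; };
float color[] = {1.f, 1.f, 1.f, 1.f}; // 16 bytes, matching the vec4
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(color), color, GL_DYNAMIC_COPY);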
The following two links helped me understand what an SSBO is and how to handle one in code:
https://www.khronos.org/opengl/wiki/Shader_Storage_Buffer_Object
http://www.geeks3d.com/20140704/tutorial-introduction-to-opengl-4-3-shader-storage-buffers-objects-ssbo-demo/
I hope this helps anyone facing similar issues!
P.S.: I know this post is old, but I wanted to contribute.

When drawing a triangle, three vertices are necessary, and a separate set of red, green, and blue values is required for each vertex. You are only putting one set into the shader storage buffer. For the other two vertices, the value of color drops to the default, which is black (0.0, 0.0, 0.0). If you don't have blending enabled, the triangle is likely being painted completely black because two of its vertices are black.
Try putting two more sets of red, green, and blue values into the storage buffer to see whether they get loaded as color values for the other two vertices.
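A sketch of that suggestion, assuming the block is changed to a padded per-vertex array as recommended in the other answer:
// GLSL (assumed): layout(std430, binding = 2) buffer ColorSSBO { vec4 colors[]; };
float colors[] = {
    1.f, 1.f, 1.f, 1.f, // vertex 0
    1.f, 1.f, 1.f, 1.f, // vertex 1
    1.f, 1.f, 1.f, 1.f, // vertex 2
};
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(colors), colors, GL_DYNAMIC_COPY);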

Related

Simple GL fragment shader behaves strangely on newer GPU

I am tearing my hair out at this problem! I have a simple vertex and fragment shader that worked perfectly (and still does) on an old Vaio laptop. It's for a particle system, and uses point sprites and a single texture to render particles.
The problem starts when I run the program on my desktop, with a much newer graphics card (Nvidia GTX 660). I'm pretty sure I've narrowed it down to the fragment shader, as if I ignore the texture and simply pass inColor out again, everything works as expected.
When I include the texture in the shader calculations like you can see below, all points drawn while that shader is in use appear in the center of the screen, regardless of camera position.
You can see a whole mess of particles dead center using the suspect shader, and untextured particles rendering correctly to the right.
Vertex Shader to be safe:
#version 150 core
in vec3 position;
in vec4 color;
out vec4 Color;
uniform mat4 view;
uniform mat4 proj;
uniform float pointSize;
void main() {
    Color = color;
    gl_Position = proj * view * vec4(position, 1.0);
    gl_PointSize = pointSize;
}
And the fragment shader I suspect to be the issue, but really can't see why:
#version 150 core
in vec4 Color;
out vec4 outColor;
uniform sampler2D tex;
void main() {
    vec4 t = texture(tex, gl_PointCoord);
    outColor = vec4(Color.r * t.r, Color.g * t.g, Color.b * t.b, Color.a * t.a);
}
Untextured particles use the same vertex shader, but the following fragment shader:
#version 150 core
in vec4 Color;
out vec4 outColor;
void main() {
    outColor = Color;
}
The main program has a loop that processes SFML window events and calls two functions, draw and update. update doesn't touch GL at any point; draw looks like this:
void draw(sf::Window* window)
{
    glClearColor(0.3f, 0.3f, 0.3f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    sf::Texture::bind(&particleTexture);
    for (ParticleEmitter* emitter : emitters)
    {
        emitter->useShader();
        camera.applyMatrix(shaderProgram, window);
        emitter->draw();
    }
}
emitter->useShader() is just a call to glUseProgram() with a GLuint handle to a shader program that is stored in the emitter object on creation.
camera.applyMatrix():
GLint projUniform = glGetUniformLocation(program, "proj");
glUniformMatrix4fv(projUniform, 1, GL_FALSE, glm::value_ptr(projectionMatrix));
...
GLint viewUniform = glGetUniformLocation(program, "view");
glUniformMatrix4fv(viewUniform, 1, GL_FALSE, glm::value_ptr(viewMatrix));
emitter->draw() in its entirety:
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
// Build a new vertex buffer object
int vboSize = particles.size() * vboEntriesPerParticle;
std::vector<float> vertices;
vertices.reserve(vboSize);
for (unsigned int particleIndex = 0; particleIndex < particles.size(); particleIndex++)
{
    Particle* particle = particles[particleIndex];
    particle->enterVertexInfo(&vertices);
}
// Bind this emitter's Vertex Buffer
glBindBuffer(GL_ARRAY_BUFFER, vbo);
// Send vertex data to GPU
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * vertices.size(), &vertices[0], GL_STREAM_DRAW);
GLint positionAttribute = glGetAttribLocation(shaderProgram, "position");
glEnableVertexAttribArray(positionAttribute);
glVertexAttribPointer(positionAttribute, 3, GL_FLOAT, GL_FALSE, 7 * sizeof(float), 0);
GLint colorAttribute = glGetAttribLocation(shaderProgram, "color");
glEnableVertexAttribArray(colorAttribute);
glVertexAttribPointer(colorAttribute, 4, GL_FLOAT, GL_FALSE, 7 * sizeof(float), (void*)(3 * sizeof(float)));
GLint sizePointer = glGetUniformLocation(shaderProgram, "pointSize");
glUniform1fv(sizePointer, 1, &pointSize);
// Draw
glDrawArrays(GL_POINTS, 0, particles.size());
And finally, particle->enterVertexInfo():
vertices->push_back(x);
vertices->push_back(y);
vertices->push_back(z);
vertices->push_back(r);
vertices->push_back(g);
vertices->push_back(b);
vertices->push_back(a);
I'm pretty sure this isn't an efficient way to do all this, but this was a piece of coursework I wrote a semester ago. I'm only revisiting it to record a video of it in action.
All shaders compile and link without error. By playing with the fragment shader, I've confirmed that I can use gl_PointCoord to vary a solid color across particles, so that is working as expected. When particles draw in the center of the screen, the texture is drawn correctly, albeit in the wrong place, so that is loaded and bound correctly as well. I'm by no means a GL expert, so that's about as much debugging as I could think to do myself.
This wouldn't be annoying me so much if it didn't work perfectly on an old laptop!
Edit: Included a ton of code
As it turned out in the comments, the shaderProgram variable used for setting the camera-related uniforms did not track the program actually in use. As a result, the uniform locations were queried from a different program when drawing the textured particles.
Uniform location assignment is totally implementation-specific; NVIDIA, for example, tends to assign locations in alphabetical order of the uniform names, so view's location would change depending on whether tex is present (and actively used) or not. If another implementation assigns them in the order they appear in the code, or by some other scheme, things might work by accident.
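For illustration, a minimal sketch (the Camera::applyMatrix signature and the emitter->program() accessor are assumptions) that queries the locations from the program actually in use:
void Camera::applyMatrix(GLuint program, sf::Window* window)
{
    glUseProgram(program); // the uniforms below are set on this exact program
    GLint projUniform = glGetUniformLocation(program, "proj");
    glUniformMatrix4fv(projUniform, 1, GL_FALSE, glm::value_ptr(projectionMatrix));
    GLint viewUniform = glGetUniformLocation(program, "view");
    glUniformMatrix4fv(viewUniform, 1, GL_FALSE, glm::value_ptr(viewMatrix));
}
// In draw(): camera.applyMatrix(emitter->program(), window);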

OpenGL - GLSL assigning to varying variable breaks the vertex positioning

I did a project in OpenGL version 3.2 once where I used a "sampler2DArray" to store multiple images with the same dimensions and rendered them using textured points.
Now I am trying to port that project to my GNU/Linux computer. This computer only supports OpenGL up to version 2.1 and GLSL up to version 1.20 (which doesn't have sampler2DArray). As far as I know, there is no way to update OpenGL to support the newer features.
What I am currently trying to do is to use a sampler3D to store my images and use the depth value to select the image I want.
To send the texture depth from the vertex shader to the fragment shader I have declared a "varying" float variable holding the depth value (0.0 to 1.0).
I am rendering 4 images at the locations: (-0.5, +0.5), (+0.5, +0.5), (-0.5, -0.5) and (+0.5, -0.5).
The image switching method appears to be working (changing the "index" variable changes the image). But for some weird reason all images get rendered at (0.0, 0.0) and not at their assigned positions. The problem goes away when I don't assign to the varying variable containing the texture's depth value and instead set the depth to 0.0 in the fragment shader.
Here is the vertex shader:
#version 120
attribute vec2 position;
attribute float index;
varying float v_index;
void main()
{
    gl_Position = vec4(position, 0.0, 1.0);
    v_index = index; // Removing this assignment makes the images appear at their assigned locations.
}
Here is the fragment shader:
#version 120
uniform sampler3D texture;
varying float v_index;
void main()
{
    gl_FragColor = texture3D(texture, vec3(gl_PointCoord, v_index));
}
The structure I use to represent vertices:
struct vertex {
    GLfloat x;
    GLfloat y;
    GLfloat texture_index;
};
The calls to the glVertexAttribPointer function (the problem may be here too):
glBindAttribLocation(shader_program, 0, "position");
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(struct vertex), (void *)0);
glBindAttribLocation(shader_program, 1, "index");
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 1, GL_FLOAT, GL_FALSE, sizeof(struct vertex), (void *)(2 * sizeof(GLfloat)));
I have also found a very similar question. The answer marked "accepted" claims that the cause of the problem is that the shaders have more than 16 varying vectors (which isn't the case for me).
Here is the link: Strange error in GLSL while assigning attribute to a varying in vertex shader
This looks like your attribute location bindings aren't taking effect, and the locations are being assigned by the GL. Without the assignment, the index attribute is not used and only the position one is, so it is very likely that position gets location 0. When index is actually used, it might get 0 instead (on NVIDIA, those locations seem to be assigned in alphabetical order).
The glBindAttribLocation() calls only have an effect when linking the program, so they have to be made before glLinkProgram(), and you have to re-link the program whenever you want to change them (which you should really avoid). The code you have shown suggests that they are called during your regular draw calls, so they never have any effect on the linked program.
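A minimal sketch of the required order, assuming vertex_shader and fragment_shader are already compiled shader objects:
GLuint shader_program = glCreateProgram();
glAttachShader(shader_program, vertex_shader);
glAttachShader(shader_program, fragment_shader);
// Bind the attribute locations BEFORE linking so the link picks them up.
glBindAttribLocation(shader_program, 0, "position");
glBindAttribLocation(shader_program, 1, "index");
glLinkProgram(shader_program);
// At draw time, only enable and describe the arrays; no glBindAttribLocation here.
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(struct vertex), (void *)0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 1, GL_FLOAT, GL_FALSE, sizeof(struct vertex), (void *)(2 * sizeof(GLfloat)));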

glUseProgram not affecting the rendering state

I am writing a basic view manager using GL on Ubuntu 13 on an Eee PC with an NVIDIA ION2 (Optimus in use via the Bumblebee project). I have an XML file from which shaders are created when the system starts (like plugins) and added to a dictionary. Once these are compiled, linked, and ready for use, a wrapper function selects the appropriate shader program based on the program name passed.
void ShadingProgramManager::useProgram(const std::string& program){
    GLuint id = getProgramId(program);
    glUseProgram(id);
    if(GL_INVALID_VALUE == glGetError() || GL_INVALID_OPERATION == glGetError()){
        printf("Problem Loading Shader Program");
        return;
    }
    printf("%s is in use", program.c_str());
}
Where getProgramId simply looks inside the pre-created dictionary and returns the id of the shader program.
When I render the object, I put the program to use by calling:
ShadingProgramManager::getInstance()->useProgram("vc");
"vc" is formed from the following shaders:
Vertex Shader - vc.vert
#version 330
layout(location = 0) in vec3 position;
layout(location = 1) in vec4 color;
out vec4 vertcolor;
void main(){
    vertcolor = color;
    gl_Position = vec4(position, 1.0); // I've tried setting this as position * 10 also, for any apparent changes on screen, but nothing changes
}
Fragment Shader - vc.frag:
#version 330
in vec4 vertcolor;
out vec4 outputcolor;
void main(){
    outputcolor = vertcolor;
}
My vertex buffer is interleaved as:
VertexColor vertices[] =
{
    {-1.0, -1.0, 0.0, 1.0, 1.0, 1.0, 1.0}, /* first 3 floats for pos, 4 for color */
    { 1.0, -1.0, 0.0, 1.0, 0.0, 0.0, 1.0},
    { 1.0,  1.0, 0.0, 0.0, 1.0, 0.0, 1.0},
    {-1.0,  1.0, 0.0, 0.0, 0.0, 1.0, 1.0},
};
The index buffer:
GLuint indices[] =
{
    0, 1, 2,
    0, 2, 3,
};
VertexColor is defined as:
class VertexColor{
    GLfloat x;
    GLfloat y;
    GLfloat z;
    GLfloat r;
    GLfloat g;
    GLfloat b;
    GLfloat a;
    /** some constants as below **/
};
const int VertexColor::OFFSET_POSITION = 0;
const int VertexColor::OFFSET_COLOR    = 12;
const int VertexColor::SIZE_POSITION   = 3;
const int VertexColor::SIZE_COLOR      = 4;
const int VertexColor::STRIDE          = 28;
Then I use the following code to render the quad:
ShadingProgramManager::getInstance()->useProgram("vc");
glBindBuffer(GL_ARRAY_BUFFER, vb);
glBufferData(GL_ARRAY_BUFFER, size_of_vertices_array, vertices, GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ib);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, size_of_indices_array, indices, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(0, VertexColor::SIZE_POSITION, GL_FLOAT, GL_FALSE, VertexColor::STRIDE, (GLvoid*)VertexColor::OFFSET_POSITION);
glVertexAttribPointer(1, VertexColor::SIZE_COLOR, GL_FLOAT, GL_FALSE, VertexColor::STRIDE, (GLvoid*)VertexColor::OFFSET_COLOR);
glDrawElements(GL_TRIANGLES, size_of_indices, GL_UNSIGNED_INT, 0);
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
However, I only see a white quad. I suspect it's the fixed-function pipeline coming into effect.
Even if I remove the call to glUseProgram(id) or use glUseProgram(0), I still get the same results. I also tried multiplying the position in the vertex shader by 10.0, but there was no effect on screen. I am sure that the shaders are being compiled and linked. When I switch to something like glUseProgram(40) or any other invalid number, I get the requisite error messages, but otherwise I only see a white unit square!
Sorry for the excessively long post, but I am stumped on this one... I just get a white unit square no matter what changes I make to the vert or frag shader. I suspect GL is defaulting to the FFP, and for some reason my shader program is not taking effect. I am hoping it's a noobish mistake, but any pointers would be appreciated.
P.S.: There are no compile errors, so please excuse any syntactical errors in the code above; I typed it out by hand.
UPDATE: I've added the last parameter in the call to glVertexAttribPointer, as suggested by Andon, Dinesh, and Spektre, which I had missed earlier, but I still get the same results.
Look at this line:
glVertexAttribPointer(1, VertexColor::SIZE_COLOR, GL_FLOAT, GL_FALSE, VertexColor::STRIDE);
Where is the pointer that specifies the offset of the first component of this attribute within the vertex array? In your interleaved layout, the color data starts at the 4th float of each vertex. You have to pass the byte offset of the first color component; with the default value of 0, the program reads the color from the 1st through 4th floats, which is not color data. The position attribute reads correctly only because it really does start at offset 0. That's why you see only a white quad.
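For reference, the corrected call with the offset supplied looks like this:
glVertexAttribPointer(1, VertexColor::SIZE_COLOR, GL_FLOAT, GL_FALSE, VertexColor::STRIDE, (GLvoid*)VertexColor::OFFSET_COLOR);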
Problem solved.
Yes, it was a noobish mistake, but all your comments made me run through the entire code once again. I started off by redoing everything with the FFP and then moved back to the programmable pipeline. Here is the error:
I had missed the glAttachShader(pid, sid) calls in the dictionary that was being built for shader programs, so while the program object was in use, the vertex and fragment shaders were never actually attached to it.
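A minimal sketch of the fixed setup (vs and fs are assumed to be compiled shader objects):
GLuint pid = glCreateProgram();
glAttachShader(pid, vs); // these two calls were the missing step
glAttachShader(pid, fs);
glLinkProgram(pid);
// store pid in the shader dictionary as before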

OpenGL issue: cannot render geometry on screen

My program was meant to draw a simple textured cube on screen, however, I cannot get it to render anything other than the clear color. This is my draw function:
void testRender() {
    glClearColor(.25f, 0.35f, 0.15f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glUniformMatrix4fv(resources.uniforms.m4ModelViewProjection, 1, GL_FALSE, (const GLfloat*)resources.modelviewProjection.modelViewProjection);
    glEnableVertexAttribArray(resources.attributes.vTexCoord);
    glEnableVertexAttribArray(resources.attributes.vVertex);
    // deal with vTexCoord first
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, resources.hiBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, resources.htcBuffer);
    glVertexAttribPointer(resources.attributes.vTexCoord, 2, GL_FLOAT, GL_FALSE, sizeof(GLfloat)*2, (void*)0);
    // now the other one
    glBindBuffer(GL_ARRAY_BUFFER, resources.hvBuffer);
    glVertexAttribPointer(resources.attributes.vVertex, 3, GL_FLOAT, GL_FALSE, sizeof(GLfloat)*3, (void*)0);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, resources.htextures[0]);
    glUniform1i(resources.uniforms.colorMap, 0);
    glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_SHORT, (void*)0);
    // clean up a bit
}
In addition, here is the vertex shader:
#version 330
in vec3 vVertex;
in vec2 vTexCoord;
uniform mat4 m4ModelViewProjection;
smooth out vec2 vVarryingTexCoord;
void main(void) {
    vVarryingTexCoord = vTexCoord;
    gl_Position = m4ModelViewProjection * vec4(vVertex, 1.0);
}
and the fragment shader (I have given up on textures for now):
#version 330
uniform sampler2D colorMap;
in vec2 vVarryingTexCoord;
out vec4 vVaryingFragColor;
void main(void) {
    vVaryingFragColor = texture(colorMap, vVarryingTexCoord);
    vVaryingFragColor = vec4(1.0, 1.0, 1.0, 1.0);
}
The vertex array buffer for the position coordinates makes a simple cube (with all coordinates at ±0.25), while the modelview projection is just the inverse camera matrix (moved back by a factor of two) applied to a perspective matrix. However, even without the matrix transformation, I am unable to see anything on screen. Originally, I had two different buffers that needed two different element index lists, but now both buffers (containing the vertex and texture coordinate data) are the same length and in order. The code itself is derived from the Durian Software tutorial and the latest OpenGL SuperBible. The rest of the code is here.
By this point, I have tried nearly everything I can think of. Is this code even remotely close? If so, why can't I get anything to render onscreen?
You're looking pretty good so far.
The only thing that I see right now is that you've got DEPTH_TEST enabled, but you don't clear the depth buffer. Even if the buffer were initialized to a good value, you would be drawing empty scenes on every frame after the first one, because the depth buffer is never cleared.
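For example, at the top of testRender():
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clear depth along with color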
If that does not help, can you make sure that you have no glGetError() errors? You may have to clean up your unused texturing attributes/uniforms to get the errors to be clean, but that would be my next step.

OpenGL Vertex Shader Runtime Issues (not using VBOs or textures)

I have the following vertex shader:
uniform mat4 uMVP;
attribute vec4 aPosition;
attribute vec4 aNormal;
attribute vec2 aTexCoord;
varying vec2 vTexCoord;
varying vec4 vPrimaryColor;
void main() {
    gl_Position = uMVP * aPosition;
    vPrimaryColor = vec4(1.0, 1.0, 1.0, 1.0);
    vTexCoord = aTexCoord;
}
And the following fragment shader:
uniform sampler2D sTex;
varying vec2 vTexCoord;
varying vec4 vPrimaryColor;
void main() {
    gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);
}
Note that while I have a vTexCoord and a vPrimaryColor, neither is used in the fragment shader. (The reason they are there is that they eventually will be.)
Now, I also set uMVP to be the identity matrix for now, and draw using the following code:
// Load the matrix
glUniformMatrix4fv(gvMVPHandle, 1, false, &mvPMatrix.matrix[0][0]);
// Draw the square to be textured
glVertexAttribPointer(gvPositionHandle, 2, GL_FLOAT, GL_FALSE, 0, gFullScreenQuad);
glEnableVertexAttribArray(gvPositionHandle);
glVertexAttribPointer(gvTexCoordHandle, 2, GL_FLOAT, GL_FALSE, 0, gFullScreenQuad);
glDrawArrays(GL_QUADS, 0, 4);
where the square is:
const GLfloat PlotWidget::gFullScreenQuad[] = { -1.0f, -1.0f, -1.0f, 1.0f, 1.0f, 1.0f, 1.0f, -1.0f};
So, when I run this program, I get a black screen, which is not what you would expect. However, when I change this line in the shader:
vTexCoord = aTexCoord;
To
vTexCoord = vec2(1.0, 1.0);
It works perfectly. So I would assume the problem is with that line of code, but I can't think of anything in OpenGL that would cause this. Also, I'm using Qt for this project, which means this class is using QGLWidget. I've never had this issue with OpenGL ES 2.0.
Any suggestions?
I'm sorry for the vague title, but I don't even know what class of problem this would be.
Are you checking glGetShaderInfoLog and glGetProgramInfoLog during your shader compilation? If not, I would recommend that as the first port of call.
Next thing to check would be the binding for the texture coordinates. Are the attributes being set up correctly? Is the data valid?
Finally, start stepping through your code with a liberal spraying of glGetError calls. It will almost certainly fail on glDrawArrays, which won't help you much, but that's usually when the desperation sets in for me!
OR
You could try gDEBugger. I use it mainly to look for bottlenecks and to make sure I'm releasing OpenGL resources properly, so I can't vouch for it as a debugger, but it's worth a shot.
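For reference, a minimal sketch of that first suggestion (shader and program are assumed to be the handles you compiled and linked; fprintf needs <cstdio>):
GLint status = GL_FALSE;
char log[1024];
glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
if (status != GL_TRUE) {
    glGetShaderInfoLog(shader, sizeof(log), NULL, log);
    fprintf(stderr, "shader compile failed:\n%s\n", log);
}
glGetProgramiv(program, GL_LINK_STATUS, &status);
if (status != GL_TRUE) {
    glGetProgramInfoLog(program, sizeof(log), NULL, log);
    fprintf(stderr, "program link failed:\n%s\n", log);
}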
Apparently you need to call glEnableVertexAttribArray for every attribute that gets passed on to the fragment shader. I have no idea why, though. But changing the drawing code to this:
glVertexAttribPointer(gvPositionHandle, 2, GL_FLOAT, GL_FALSE, 0, gFullScreenQuad);
glEnableVertexAttribArray(gvPositionHandle);
glVertexAttribPointer(gvTexCoordHandle, 2, GL_FLOAT, GL_FALSE, 0, gFullScreenQuad);
glEnableVertexAttribArray(gvTexCoordHandle);
glDrawArrays(GL_QUADS, 0, 4);
made it work.
Same problem, different cause.
On some devices, the automatic attribute location assignment in glLinkProgram does not work as specified.
Make sure things are done in the following order:
1. glCreateProgram
2. glCreateShader && glCompileShader for both shaders
3. glBindAttribLocation for all attributes
4. glLinkProgram
Step 3 can be repeated later at any time to rebind variables to different slots; however, the changes only become effective after another call to glLinkProgram.
In short: whenever you call glBindAttribLocation, make sure a glLinkProgram call comes after it.
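A minimal sketch of that order (source strings and error checking omitted; the attribute names are taken from the question above):
GLuint program = glCreateProgram();              // 1.
GLuint vs = glCreateShader(GL_VERTEX_SHADER);    // 2.
glShaderSource(vs, 1, &vertexSource, NULL);
glCompileShader(vs);
GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fs, 1, &fragmentSource, NULL);
glCompileShader(fs);
glAttachShader(program, vs);
glAttachShader(program, fs);
glBindAttribLocation(program, 0, "aPosition");   // 3. before linking!
glBindAttribLocation(program, 1, "aNormal");
glBindAttribLocation(program, 2, "aTexCoord");
glLinkProgram(program);                          // 4.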