Pass an array to the fragment shader using GL_SHADER_STORAGE_BUFFER - opengl

I want to pass a struct containing some data to the fragment shader. The problem is that I cannot pass an array of 100 elements.
I upload the struct like this:
struct MetaData {
    int rows;
    int columns;
    float cellSizeX;
    float cellSizeY;
    int mask[10][10];
} metaData;
glGenBuffers(1, &ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(metaData), &metaData, GL_DYNAMIC_COPY);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 3, ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
GLvoid* p = glMapBuffer(GL_SHADER_STORAGE_BUFFER, GL_WRITE_ONLY);
memcpy(p, &metaData, sizeof(metaData));
glUnmapBuffer(GL_SHADER_STORAGE_BUFFER);
And receive it in the vertex shader as:
layout (std430, binding=3) buffer shader_data
{
    int rows;
    int columns;
    float cellSizeX;
    float cellSizeY;
    int mask[10][10];
};
flat out int Mask[10][10];
void main() {
    ...some code...
    Mask = mask;
    ...some code...
}
and receive it in the fragment shader, but the following error is displayed and I don't know how to solve it:
cannot locate suitable resource to bind variable "Mask". Possibly large array.
As I understand from searching online, there is a limit on the number of varyings, queried with glGet(GL_MAX_VARYING_FLOATS_ARB). Is there any way to avoid this?

Uhm, you do know that you can access shader storage buffers in shaders, right?
'Fragment shaders' fall under the umbrella term 'shaders', so rather than copying the struct for each vertex, maybe just add the following to your fragment shader, and access the data directly? :p
// Add to the fragment shader source.....
layout (std430, binding=3) buffer shader_data
{
    int rows;
    int columns;
    float cellSizeX;
    float cellSizeY;
    int mask[10][10];
};
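main() in the fragment shader can then index the array directly; a minimal sketch, where the cell lookup and the outColor output are placeholders for whatever your fragment shader already does:
// Hypothetical usage inside the fragment shader's main():
void main()
{
    int cellX = int(gl_FragCoord.x / cellSizeX); // assumed cell lookup
    int cellY = int(gl_FragCoord.y / cellSizeY);
    int value = mask[cellY][cellX];               // read straight from the SSBO
    outColor = vec4(float(value), 0.0, 0.0, 1.0); // outColor assumed to be declared elsewhere
}
Reading the mask straight from the storage buffer sidesteps the per-vertex varying entirely, so the GL_MAX_VARYING_FLOATS limit no longer applies.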

Related

Using multiple VBO in a VAO

I am trying to use 2 VBOs inside a VAO and I end up with a crash (deep inside the driver, beyond my app).
The idea is to make a first VBO (and optionally an IBO) to structure the geometry.
This worked well, until I got the idea to add a second VBO for the model matrix as a vertex attribute instead of a uniform.
So, when I declare my mesh I do as follows (reduced code):
GLuint vao = 0;
glCreateVertexArrays(1, &vao);
glBindVertexArray(vao);
GLuint vbo = 0;
glCreateBuffers(1, &vbo);
glNamedBufferStorage(vbo, ...); // Fill the right data ...
for ( ... my attributes ) // Position, normal, texcoords ...
{
    glVertexArrayAttribFormat(vao, attribIndex, size, GL_FLOAT, GL_FALSE, relativeOffset);
    glVertexArrayAttribBinding(vao, attribIndex, bindingIndex);
    glEnableVertexArrayAttrib(vao, attribIndex);
} // -> this loop also gives me the "stride" parameter used below.
glVertexArrayVertexBuffer(vao, 0/*bindingindex*/, vbo, 0, stride/*Size of one element in vbo in bytes*/);
GLuint ibo = 0;
glCreateBuffers(1, &ibo);
glNamedBufferStorage(ibo, ...); // Fill the right data ...
glVertexArrayElementBuffer(vao, ibo);
Up to there, everything is fine: all I have to do is call glBindVertexArray() and a glDrawXXX() command, and I have something perfect on screen.
So, I decided to remove the modelMatrix uniform from the shader and use a mat4 attribute instead.
I could have chosen a UBO instead, but I want to extend the idea to instanced rendering by providing several matrices.
So, I tested with one model matrix in a VBO and, just before the rendering, I do as follows (the VBO is built the same way as before; I just put 16 floats for an identity matrix):
glBindVertexArray(theObjectVAOBuiltBefore);
const auto bindingIndex = static_cast< GLuint >(1); // Here next binding point for the VBO, I guess...
const auto stride = static_cast< GLsizei >(16 * sizeof(GLfloat)); // The stride is the size in bytes of a matrix
glVertexArrayVertexBuffer(theObjectVAOBuiltBefore, bindingIndex, m_vertexBufferObject.identifier(), 0, stride); // I add the new VBO to the current VAO, which already has a VBO (binding index 0) and an IBO
// Then I describe my new VBO as a matrix of 4 vec4.
const auto size = static_cast< GLint >(4);
for ( auto columnIndex = 0U; columnIndex < 4U; columnIndex++ )
{
    const auto attribIndex = static_cast< unsigned int >(VertexAttributes::Type::ModelMatrix) + columnIndex;
    glVertexArrayAttribFormat(theObjectVAOBuiltBefore, attribIndex, size, GL_FLOAT, GL_FALSE, 0);
    glVertexArrayAttribBinding(theObjectVAOBuiltBefore, attribIndex, bindingIndex);
    glEnableVertexArrayAttrib(theObjectVAOBuiltBefore, attribIndex);
    glVertexAttribDivisor(attribIndex, 1); // Here I want this attribute per instance.
}
glDrawElementsInstanced(GL_TRIANGLES, count, GL_UNSIGNED_INT, nullptr, 1);
And the result is a beautiful crash. I don't have any clue, because the crash occurs within the driver where I can't get any debug output.
Is my idea complete garbage? Or is there something I missed?
I found the error: glVertexAttribDivisor() belongs to the old way of doing things (like glVertexAttribPointer(), ...). I switched to glVertexBindingDivisor()/glVertexArrayBindingDivisor() and now there is no crash at all.
The answer was here: https://www.khronos.org/opengl/wiki/Vertex_Specification#Separate_attribute_format
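For reference, a minimal sketch of the working DSA-style setup, where vao, matrixVbo and firstMatrixAttribLocation stand in for the VAO, the per-instance matrix buffer and the mat4 attribute's base location from the code above. Note that each mat4 column also needs its own relativeOffset inside the 64-byte stride (the reduced code above leaves it at 0 for all four columns):
const GLuint matrixBindingIndex = 1;                     // second binding point of the VAO
const GLsizei matrixStride = 16 * sizeof(GLfloat);       // one mat4 per instance

glVertexArrayVertexBuffer(vao, matrixBindingIndex, matrixVbo, 0, matrixStride);
glVertexArrayBindingDivisor(vao, matrixBindingIndex, 1); // advance once per instance, not per vertex

for (GLuint column = 0; column < 4; ++column)
{
    const GLuint attribIndex = firstMatrixAttribLocation + column; // assumed base location of the mat4 attribute
    glEnableVertexArrayAttrib(vao, attribIndex);
    // Each column is a vec4 located column * 16 bytes into the per-instance stride.
    glVertexArrayAttribFormat(vao, attribIndex, 4, GL_FLOAT, GL_FALSE, column * 4 * sizeof(GLfloat));
    glVertexArrayAttribBinding(vao, attribIndex, matrixBindingIndex);
}
With the separate attribute format, the divisor lives on the binding point rather than on the attribute, which is why the vertex-array-object entry point replaces glVertexAttribDivisor() here.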

OpenGL VAO is pointing to address 0 for some reason

I am having some trouble with my VAO not binding properly (at least that's what I think is happening).
What I am doing is: I have a class that creates a VBO and a VAO from some raw data, in this case a pointer to an array of floats.
RawModel* Loader::loadToVao(float* positions, int sizeOfPositions) {
    unsigned int vaoID = this->createVao();
    this->storeDataInAttributeList(vaoID, positions, sizeOfPositions);
    this->unbindVao();
    return new RawModel(vaoID, sizeOfPositions / 3);
}
unsigned int Loader::createVao() {
    unsigned int vaoID;
    glGenVertexArrays(1, &vaoID);
    glBindVertexArray(vaoID);
    unsigned int copyOfVaoID = vaoID;
    vaos.push_back(copyOfVaoID);
    return vaoID;
}
void Loader::storeDataInAttributeList(unsigned int attributeNumber, float* data, int dataSize) {
    unsigned int vboID;
    glGenBuffers(1, &vboID);
    glBindBuffer(GL_ARRAY_BUFFER, vboID);
    glBufferData(GL_ARRAY_BUFFER, dataSize * sizeof(float), data, GL_STATIC_DRAW);
    glVertexAttribPointer(attributeNumber, 3, GL_FLOAT, false, 0, 0);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    unsigned int copyOfVboID = vboID;
    vbos.push_back(copyOfVboID);
}
void Loader::unbindVao() {
    glBindVertexArray(0);
}
The RawModel is just a class that should take in the array of floats and create a vbo and a vao. The vectors vbos and vaos that I am using are just there to keep track of all the ids so that I can delete them once I am done using all the data.
I am 90% confident that this should all work properly. However, when I try to run some code that draws it, OpenGL crashes because it is trying to read from address 0x00000000 and it doesn't like that. I pass the raw model that I created from the code before this into a function in my renderer that looks like this:
void Renderer::render(RawModel* model) {
    glBindVertexArray(model->getVaoID());
    glEnableVertexAttribArray(0);
    glDrawArrays(GL_TRIANGLES, 0, model->getVertexCount());
    glDisableVertexAttribArray(0);
    glBindVertexArray(0);
}
I have checked to make sure that the VAO ID is the same when I create the VAO and when I retrieve it. It is in fact the same.
I have no idea how to read what address is currently stored in whatever OpenGL has currently bound as the vertex attrib array, so I cannot test whether or not it is pointing to the vertex data. I'm pretty sure that it's pointing to address 0 for some reason though.
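(As an aside, the pointer and buffer an attribute currently sources from can be queried; a minimal sketch, assuming the VAO in question is currently bound:)
// Query what is recorded for attribute 0 in the currently bound VAO.
void* attribPointer = nullptr;
glGetVertexAttribPointerv(0, GL_VERTEX_ATTRIB_ARRAY_POINTER, &attribPointer);

GLint boundVbo = 0;
glGetVertexAttribiv(0, GL_VERTEX_ATTRIB_ARRAY_BUFFER_BINDING, &boundVbo);
// If boundVbo is 0, the attribute is not sourcing from any VBO at all.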
Edit:
It turns out it was not the hard-coded 0 that was a problem. It removed the errors that Visual Studio and OpenGL were giving me, but the actual error was somewhere else. I realized that I was passing in the vaoID as the attributeNumber in some of the code above, when I should have been passing in a hard-coded 0. I edited my code here:
RawModel* Loader::loadToVao(float* positions, int sizeOfPositions) {
    unsigned int vaoID = this->createVao();
    this->storeDataInAttributeList(0, positions, sizeOfPositions);
    this->unbindVao();
    return new RawModel(vaoID, sizeOfPositions / 3);
}
I changed the line this->storeDataInAttributeList(vaoID, positions, sizeOfPositions); to what you see above, with a hard-coded 0. So it turns out I wasn't even binding the data to the correct attribute index in the VBO. But after changing that, it worked fine.
You should be using your vertex attribute index with glVertexAttribPointer, glEnableVertexAttribArray and glDisableVertexAttribArray but what you've got is:
VAO id used with glVertexAttribPointer
hard-coded 0 used with glEnableVertexAttribArray and glDisableVertexAttribArray (this isn't necessarily a bug, though, if you're sure about the value)
If you are not sure about the index value (e.g. if you don't specify the layout in your shader) then you can get it with a glGetAttribLocation call:
// The code assumes `program` is created with glCreateProgram
// and `position` is the attribute name in your vertex shader
const auto index = glGetAttribLocation(program, "position");
Then you can use the index with the calls mentioned above.
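A minimal sketch of how the loader above might use that index consistently, assuming the vertex shader names the attribute "position" and that program is the linked program (both are assumptions about your shader setup):
RawModel* Loader::loadToVao(float* positions, int sizeOfPositions) {
    unsigned int vaoID = this->createVao();
    // Query the attribute's index from the linked program instead of hard-coding it.
    GLint positionIndex = glGetAttribLocation(program, "position"); // returns -1 if the attribute isn't active
    this->storeDataInAttributeList(static_cast<unsigned int>(positionIndex), positions, sizeOfPositions);
    this->unbindVao();
    return new RawModel(vaoID, sizeOfPositions / 3);
}
The renderer would then enable and disable that same index instead of the hard-coded 0.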

Using vertex attributes for GLSL with LibiGL

I am doing some computationally heavy stuff on meshes with libigl (C++) and am trying to move these computations over to the GPU. To start off, I am trying to get a basic piece of code running, but the attribute values do not seem to get passed on to the shaders.
int m = V.rows();//vertices
P.resize(m);
for (int i = 0; i < m; i++)
{
    P(i) = i / m;
}
std::map<std::string, GLuint> attrib;
attrib.insert({ "concentration", 0 });
igl::opengl::create_shader_program(
mesh_vertex_shader_string,
mesh_fragment_shader_string,
attrib,
v.data().meshgl.shader_mesh);
GLuint prog_id = v.data().meshgl.shader_mesh;
const int num_vert = m;
GLuint vao[37 * 37];//37*37 vertices
glGenVertexArrays(m, vao);
GLuint buffer;
for (int i = 0; i < m; i++)
{
    glGenBuffers(1, &buffer);
    glBindVertexArray(vao[i]);
    glBindBuffer(GL_ARRAY_BUFFER, buffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(float), P.data(), GL_STATIC_DRAW);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 1, GL_FLOAT, GL_FALSE, 0, (void*)(i*sizeof(float)));
}
The red color of each vertex should be dependent on its ordering (from 0 red to 1 red), but instead the mesh is colored with no red at all.
I am basing myself on this code: https://github.com/libigl/libigl/issues/657 and http://www.lighthouse3d.com/cg-topics/code-samples/opengl-3-3-glsl-1-5-sample/.
I modified the shader strings accordingly by adding the following lines:
Vertex Shader
in float concentration;
out float c;
c = concentration;
where the last line is in main();
Fragment Shader
in float c;
outColor.x = c;
where the last line is in main().
What am I doing wrong?
Edit: OpenGL version used is actually 3.2
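For comparison, a per-vertex float attribute is normally fed from one VBO holding one value per vertex and attached to a single VAO, rather than one VAO per vertex. A minimal sketch under that assumption (how the libigl viewer's own draw path would pick this VAO up is a separate matter). Note also that P(i) = i / m above is integer division, so every entry ends up as 0 regardless of the attribute setup:
// One buffer with m floats, one value per vertex.
std::vector<float> concentration(m);
for (int i = 0; i < m; i++)
    concentration[i] = float(i) / float(m);

GLuint vao = 0, vbo = 0;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, m * sizeof(float), concentration.data(), GL_STATIC_DRAW);

// Attribute 0 matches the { "concentration", 0 } entry passed to create_shader_program.
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 1, GL_FLOAT, GL_FALSE, 0, (void*)0);

glBindVertexArray(0);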

Modifying a Shader Storage Buffer Object from within a Vertex Shader

In my base program (C++/OpenGL 4.5) I have copied the content of the vertex buffer to a Shader Storage Buffer (SSBO):
float* buffer = (float*) glMapBuffer(GL_ARRAY_BUFFER, GL_READ_ONLY);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 3, ssbo[2]);
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(GLfloat)*size,buffer, GL_DYNAMIC_DRAW);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 3, 0);
glUnmapBuffer(GL_ARRAY_BUFFER);
In the Vertex Shader this data is bound to an array:
#version 430
#extension GL_ARB_shader_storage_buffer_object : require
layout(shared, binding = 3) buffer storage
{
    float array[];
};
But when I'm trying to overwrite an array entry in the main function like this:
array[index_in_bounds] = 4.2;
nothing happens.
What am I doing wrong? Can I change the buffer from within the Vertex Shader? Is this only possible in a Geometry Shader? Do I have to do this with Transform Feedback (that I have never used before)?
Edit:
I'm mapping the buffers for test purposes in my main program, just to see if the data changes:
float* buffer = (float*) glMapBuffer(GL_ARRAY_BUFFER, GL_READ_ONLY);
float* ssboData = (float*) glMapNamedBuffer(ssbo[2], GL_READ_ONLY); // glMapNamedBuffer takes the buffer name, not the binding point
for(int i = 0; i < SIZE_OF_BUFFERS; i++)
    printf("% 5f | % 5f\n", ssboData[i], buffer[i]);
glUnmapNamedBuffer(ssbo[2]);
glUnmapBuffer(GL_ARRAY_BUFFER);
Okay, I found the problem using the Red Book. I had not bound the buffer correctly, and binding the buffer base has to happen after buffering the data:
float* buffer = (float*) glMapBuffer(GL_ARRAY_BUFFER, GL_READ_ONLY);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo); // bind buffer (glBindBuffer takes the buffer name, not its address)
// switched these two:
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(GLfloat)*size, buffer, GL_DYNAMIC_DRAW);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 3, ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0); // unbind buffer
glUnmapBuffer(GL_ARRAY_BUFFER);
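One more thing worth noting, separate from the fix above: writes that a shader performs into an SSBO are incoherent, so before mapping the buffer on the CPU to check the values, a barrier is generally needed after the draw call. A minimal sketch, where ssbo stands for the buffer name from the snippet above:
// After the draw call whose vertex shader writes into the SSBO:
glMemoryBarrier(GL_BUFFER_UPDATE_BARRIER_BIT); // make shader writes visible to glMapBuffer-style reads

// Now it is safe to map the SSBO and inspect what the shader wrote.
float* ssboData = (float*) glMapNamedBuffer(ssbo, GL_READ_ONLY);
// ... read ssboData ...
glUnmapNamedBuffer(ssbo);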

Does byte alignment occur when data is transferred from main memory to GPU memory?

I have a data struct in both GLSL and C++:
struct LentiStruct
{
    vec2 center;
    float radius;
};
And I use a shader storage buffer to hold an array of such structs in GLSL:
layout(std430, binding = 2) buffer Lents
{
    LentiStruct in_lents[];
};
So I think the actual size of such a struct array should be in_lents.length() * (2+2) * 4 bytes if I take byte alignment under the std430 qualifier into account.
Then I update the buffer with glBufferSubData():
std::vector<LentiStruct> &l;
...
GLuint ssbo;
...
glBindBuffer(GL_ARRAY_BUFFER, ssbo);
glBufferSubData(GL_ARRAY_BUFFER, 0, l.size()*sizeof(LentiStruct), &l[0]);
Everything works fine, but I think the size here doesn't match the one in GLSL, so why does it still work? Shouldn't I take byte alignment into consideration?
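For what it's worth, under std430 a struct containing a vec2 and a float is aligned to the vec2's 8 bytes and padded to a 16-byte array stride, while a plain C++ struct of glm::vec2 plus float is only 12 bytes. One way to make the two sides agree is to pad the C++ struct explicitly and check it at compile time; a sketch, assuming glm is used on the C++ side:
#include <glm/glm.hpp>

// Mirror of the GLSL struct with the std430 padding made explicit.
struct LentiStruct
{
    glm::vec2 center; // 8 bytes, 8-byte aligned
    float     radius; // 4 bytes
    float     _pad;   // 4 bytes of padding so the array stride is 16, matching std430
};

static_assert(sizeof(LentiStruct) == 16, "must match the std430 array stride");
Without the padding, every element after the first would be read with a growing offset error on the GLSL side, so the fact that it appears to work may simply mean only the first element is actually being exercised.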