OpenGL: Shader storage buffer mapping/binding - c++

I'm currently working on a program which supports depth-independent (also known as order-independent) alpha blending. To do that, I implemented a per-pixel linked list, using a texture for the headers (it stores, for every pixel, a pointer to the first entry of that pixel's linked list) and a texture buffer object for the linked list itself. While this works fine, I would like to replace the texture buffer object with a shader storage buffer as an exercise.
I think I almost got it, but it took me about a week to get to a point where I could actually use the shader storage buffer. My questions are:
Why can't I map the shader storage buffer?
Why is it a problem to bind the shader storage buffer again?
For debugging, I just display the contents of the shader storage buffer (which doesn't contain a linked list yet). I created the shader storage buffer in the following way:
glm::vec4* bufferData = new glm::vec4[windowOptions.width * windowOptions.height];
glm::vec4* readBufferData = new glm::vec4[windowOptions.width * windowOptions.height];
for(unsigned int y = 0; y < windowOptions.height; ++y)
{
for(unsigned int x = 0; x < windowOptions.width; ++x)
{
// Set the whole buffer to red
bufferData[x + y * windowOptions.width] = glm::vec4(1,0,0,1);
}
}
GLuint ssb;
// Get a handle
glGenBuffers(1, &ssb);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssb);
// Create buffer
glBufferData(GL_SHADER_STORAGE_BUFFER, windowOptions.width * windowOptions.height * sizeof(glm::vec4), bufferData, GL_DYNAMIC_COPY);
// Now bind the buffer to the shader
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssb);
In the shader, the shader storage buffer is defined as:
layout (std430, binding = 0) buffer BufferObject
{
vec4 points[];
};
In the rendering loop, I do the following:
glUseProgram(defaultProgram);
for(unsigned int y = 0; y < windowOptions.height; ++y)
{
for(unsigned int x = 0; x < windowOptions.width; ++x)
{
// Create a green/red color gradient
bufferData[x + y * windowOptions.width] =
glm::vec4((float)x / (float)windowOptions.width,
(float)y / (float)windowOptions.height, 0.0f, 1.0f);
}
}
glMemoryBarrier(GL_ALL_BARRIER_BITS); // Don't know if this is necessary, just a precaution
glBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, windowOptions.width * windowOptions.height * sizeof(glm::vec4), bufferData);
// Retrieving the buffer also works fine
// glMemoryBarrier(GL_ALL_BARRIER_BITS);
// glGetBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, windowOptions.width * windowOptions.height * sizeof(glm::vec4), readBufferData);
glMemoryBarrier(GL_ALL_BARRIER_BITS); // Don't know if this is necessary, just a precaution
// Draw a quad which fills the screen
// ...
This code works, but when I replace glBufferSubData with the following code,
glm::vec4* p = (glm::vec4*)glMapBufferRange(GL_SHADER_STORAGE_BUFFER, 0, windowOptions.width * windowOptions.height, GL_WRITE_ONLY);
for(unsigned int x = 0; x < windowOptions.width; ++x)
{
for(unsigned int y = 0; y < windowOptions.height; ++y)
{
p[x + y * windowOptions.width] = glm::vec4(0,1,0,1);
}
}
glUnmapBuffer(GL_SHADER_STORAGE_BUFFER);
the mapping fails, returning GL_INVALID_OPERATION. It seems like the shader storage buffer is still bound to something, so it can't be mapped. I read something about glGetProgramResourceIndex (http://www.opengl.org/wiki/GlGetProgramResourceIndex) and glShaderStorageBlockBinding (http://www.opengl.org/wiki/GlShaderStorageBlockBinding), but I don't really get it.
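From what I read, those two calls are meant to be used roughly like this (a sketch I haven't verified, for the BufferObject block above):
// Query the index of the storage block named "BufferObject" in the program
GLuint blockIndex = glGetProgramResourceIndex(defaultProgram, GL_SHADER_STORAGE_BLOCK, "BufferObject");
// Associate that block with binding point 0 (redundant here, since the shader
// already declares binding = 0 in its layout qualifier)
glShaderStorageBlockBinding(defaultProgram, blockIndex, 0);
// The buffer object itself is attached to binding point 0 as before
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssb);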
My second question is why, in the render loop, I can call neither
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssb);
nor
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssb);
after glBufferSubData and glMemoryBarrier. These calls should not change a thing, since they are the same as during the creation of the shader storage buffer. If I can't bind different shader storage buffers, I can only ever use one. But I know that more than one shader storage buffer is supported, so I think I'm missing something else (like "releasing" the buffer).

First of all, the glMapBufferRange fails simply because GL_WRITE_ONLY is not a valid argument to it. That was used for the old glMapBuffer, but glMapBufferRange uses a collection of flags for more fine-grained control. In your case you need GL_MAP_WRITE_BIT instead. And since you seem to completely overwrite the whole buffer, without caring for the previous values, an additional optimization would probably be GL_MAP_INVALIDATE_BUFFER_BIT. So replace that call with:
glm::vec4* p = (glm::vec4*)glMapBufferRange(GL_SHADER_STORAGE_BUFFER, 0,
windowOptions.width * windowOptions.height,
GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
The other error is not described in enough detail in the question, but fix this one first; it may well resolve the second problem as well.
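One more detail worth noting: the length argument of glMapBufferRange is in bytes, and the buffer was created with windowOptions.width * windowOptions.height * sizeof(glm::vec4) bytes, so the range mapped in the question only covers part of the buffer. A corrected write-through-the-mapping sequence might look roughly like this (a sketch; the bind is redundant if the buffer is still bound):
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssb);
glm::vec4* p = (glm::vec4*)glMapBufferRange(GL_SHADER_STORAGE_BUFFER, 0,
    windowOptions.width * windowOptions.height * sizeof(glm::vec4), // length is in bytes
    GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
if (p != nullptr)
{
    for (unsigned int y = 0; y < windowOptions.height; ++y)
        for (unsigned int x = 0; x < windowOptions.width; ++x)
            p[x + y * windowOptions.width] = glm::vec4(0, 1, 0, 1);
    glUnmapBuffer(GL_SHADER_STORAGE_BUFFER);
}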

Related

Indices Problem with a Batch Renderer (OpenGL)

I'm trying to implement batch rendering for 3D objects in an engine I'm working on, and I can't manage to get the indices right.
So in a 3D Renderer class I have a Renderer3DData structure that looks like this:
static const uint MaxQuads = 20000;
static const uint MaxVertices = MaxQuads * 4;
static const uint MaxIndices = MaxQuads * 6;
uint IndicesDrawCount = 0; // Debug var
std::vector<uint> Indices;
Ref<IndexBuffer> IBuffer = nullptr;
// Other data like a VBuffer, VArray...
So the vector of Indices will store the indices to draw on each batch while the IBuffer is the Index Buffer class which handles all OpenGL operations ("Ref" is a typedef to make a shared pointer).
Then a static Renderer3DData* s_3DData; is initialized in the init function and the index buffer is initialized as follows:
uint* indices = new uint[s_3DData->MaxIndices];
s_3DData->IBuffer = IndexBuffer::Create(indices, s_3DData->MaxIndices);
It is then bound together with the Vertex Array and the Vertex Buffer. The initialization process is done properly, since everything works when I don't batch.
So on each new batch the VArray gets bound and the Indices vector gets cleared and, on each mesh drawn, it gets modified like this:
uint offset = 0;
std::vector<uint> indices = mesh->m_Indices;
for (uint i = 0; i < indices.size(); i += 6)
{
s_3DData->Indices.push_back(offset + 0 + indices[i]);
s_3DData->Indices.push_back(offset + 1 + indices[i]);
s_3DData->Indices.push_back(offset + 2 + indices[i]);
s_3DData->Indices.push_back(offset + 3 + indices[i]);
s_3DData->Indices.push_back(offset + 4 + indices[i]);
s_3DData->Indices.push_back(offset + 5 + indices[i]);
offset += 4;
s_3DData->IndicesDrawCount += 6;
}
I don't really know how I came up with this way of setting the index buffer; I was just trying things out, and pushing only the indices, or the indices plus the offset, doesn't work either. Finally, on each draw, I do the following:
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, BufferID);
glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0, s_3DData->Indices.size(), s_3DData->Indices.data());
// With the vArray bound:
glDrawElements(GL_TRIANGLES, s_3DData->IndicesDrawCount, GL_UNSIGNED_INT, nullptr);
As I mentioned, the drawing works when I'm not batching (it doesn't go through all this process), so the data in the mesh and in the vertex/index buffers must be fine. What I think is wrong is the way I set up the index buffer, since I'm not sure how to even set it up (unlike the other rendering stuff).
The result is the following (it should be a solid sphere):
The way that "sphere" is rendered makes me think that the indices are wrong. The objects in the center are drawn without batching, so that I know it's not the initial setup that's wrong. Does anybody see what I'm doing wrong?
I finally solved it (I'm crying, I've been at this for a long time).
So there were a couple of problems:
First: the function s_3DData->IBuffer = IndexBuffer::Create(indices, s_3DData->MaxIndices); that I posted was doing the following:
glCreateBuffers(1, &m_BufferID);
glBindBuffer(GL_ARRAY_BUFFER, m_BufferID);
glBufferData(GL_ARRAY_BUFFER, count * sizeof(uint), nullptr, GL_STATIC_DRAW);
So the first problem was that I was creating index buffers with GL_STATIC_DRAW instead of GL_DYNAMIC_DRAW, which is required for batching since we are dynamically updating the buffer (my bad for not posting the function in its entirety; I was half asleep when I posted it, I should have done it).
Second: the call glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0, s_3DData->Indices.size(), s_3DData->Indices.data()); had the wrong size parameter.
The size argument of glBufferSubData is in bytes, so it has to be the total size of the data we want to upload: not the vector's element count, but the element count multiplied by sizeof(uint) (uint in this case because it is a vector of uints).
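In other words, the corrected call is:
glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0, s_3DData->Indices.size() * sizeof(uint), s_3DData->Indices.data());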
Third and final problem: the loop that modified the indices vector on each mesh draw was wrong; it was written from the point of view of drawing quads in 2D (as I had previously been testing batching in 2D).
The correct loop is the following:
std::vector<uint> indices = mesh->m_Indices;
for (uint i = 0; i < indices.size(); ++i)
{
s_3DData->Indices.push_back(s_3DData->IndicesCurrentOffset + indices[i]);
++s_3DData->IndicesDrawCount;
++s_3DData->RendererStats.IndicesCount; // Debug Purpose
}
s_3DData->IndicesCurrentOffset += mesh->m_MaxIndex;
So now each mesh stores the (max index + 1) that it has (for a quad with indices from 0 to 3, this would be 4).
This way, I can go through all mesh indices while building the index list used for drawing, and then update the current offset so that all the indices end up stored in the correct order.
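For completeness, that stored value (m_MaxIndex) is just the mesh's largest index plus one; it could be computed once when the mesh indices are loaded, roughly like this (a sketch, not my exact code, using std::max_element from <algorithm>):
// (largest index + 1), computed once after the mesh indices are loaded
mesh->m_MaxIndex = mesh->m_Indices.empty()
    ? 0
    : *std::max_element(mesh->m_Indices.begin(), mesh->m_Indices.end()) + 1;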
Again, I'm not intending this to be fast or performant, I was just learning how to do this (and I did :) ).
The result:

OpenGL glMultiDrawElementsIndirect with Interleaved Buffers

I was originally using glDrawElementsInstancedBaseVertex to draw the scene meshes. All the meshes' vertex attributes are interleaved in a single buffer object. In total there are only 30 unique meshes, so I've been calling draw 30 times with instance counts, etc., but now I want to batch the draw calls into one using glMultiDrawElementsIndirect. Since I have no experience with this function, I've been reading articles here and there to understand the implementation, with little success. (For testing purposes, all meshes are instanced only once.)
The command structure from the OpenGL reference page.
struct DrawElementsIndirectCommand
{
GLuint vertexCount;
GLuint instanceCount;
GLuint firstVertex;
GLuint baseVertex;
GLuint baseInstance;
};
DrawElementsIndirectCommand commands[30];
// Populate commands.
for (size_t index { 0 }; index < 30; ++index)
{
const Mesh* mesh{ m_meshes[index] };
commands[index].vertexCount = mesh->elementCount;
commands[index].instanceCount = 1; // Just testing with 1 instance, ATM.
commands[index].firstVertex = mesh->elementOffset();
commands[index].baseVertex = mesh->verticeIndex();
commands[index].baseInstance = 0; // Shouldn't impact testing?
}
// Create and populate the GL_DRAW_INDIRECT_BUFFER buffer... bla bla
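That elided setup is just the usual create-and-upload pattern, roughly the following (a sketch, not my exact code):
glGenBuffers(1, &m_indirectBuffer);
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, m_indirectBuffer);
glBufferData(GL_DRAW_INDIRECT_BUFFER, sizeof(commands), commands, GL_STATIC_DRAW);
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, 0);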
Then later down the line, after setup I do some drawing.
// Some prep before drawing like bind VAO, update buffers, etc.
// Draw?
if (RenderMode == MULTIDRAW)
{
// Bind, Draw, Unbind
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, m_indirectBuffer);
glMultiDrawElementsIndirect (GL_TRIANGLES, GL_UNSIGNED_INT, nullptr, 30, 0);
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, 0);
}
else
{
for (size_t index { 0 }; index < 30; ++index)
{
const Mesh* mesh { m_meshes[index] };
glDrawElementsInstancedBaseVertex(
GL_TRIANGLES,
mesh->elementCount,
GL_UNSIGNED_INT,
reinterpret_cast<GLvoid*>(mesh->elementOffset()),
1,
mesh->verticeIndex());
}
}
Now the glDrawElements... path still works fine, like before, when I switch to it. But glMultiDraw... gives indistinguishable meshes. When I set firstVertex to 0 for all commands, the meshes look almost correct (at least distinguishable), but they are still largely wrong in places. I feel I'm missing something important about indirect multi-drawing.
//Indirect data
commands[index].firstVertex = mesh->elementOffset();
//Direct draw call
reinterpret_cast<GLvoid*>(mesh->elementOffset()),
That's not how it works for indirect rendering. The firstVertex is not a byte offset; it's the first vertex index. So you have to divide the byte offset by the size of the index to compute firstVertex:
commands[index].firstVertex = mesh->elementOffset() / sizeof(GLuint);
The result of that should be a whole number. If it wasn't, then you were doing unaligned reads, which probably hurt your performance. So fix that ;)
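Applied to the population loop from the question, that would look roughly like this (a sketch; the other fields stay as they were):
for (size_t index { 0 }; index < 30; ++index)
{
    const Mesh* mesh{ m_meshes[index] };
    commands[index].vertexCount   = mesh->elementCount;
    commands[index].instanceCount = 1;
    commands[index].firstVertex   = mesh->elementOffset() / sizeof(GLuint); // in indices, not bytes
    commands[index].baseVertex    = mesh->verticeIndex();
    commands[index].baseInstance  = 0;
}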

OpenGL Uniform Buffer Object (Array) Skipping Every Other Vec2

Problem
There should not be gaps between those white lines. Those lines are composed of squares (eventually I won't be generating just another giant square; this is for debugging). For some reason, when I send data through my Uniform Buffer Object (example below), I get gaps. It's almost as if every other y value is skipped: there are two squares at each location instead of one at (y) and one at (y + 1).
Code Snippets
Generating data pointer array
blockData = new glm::vec2[24*24];
for (int x = 0; x < 24; x++) {
for (int y = 0; y < 24; y++) {
int i = x * 24 + y;
blockData[i] = glm::vec2(x, y);
}
}
In the rendering class
glBindBuffer(GL_UNIFORM_BUFFER, ubo);
glBufferSubData(GL_UNIFORM_BUFFER, 0, sizeof(glm::vec2) * blocksActive, blockData);
glBindBufferRange(GL_UNIFORM_BUFFER, uniBlockData, ubo, 0, sizeof(glm::vec2) * blocksActive);
glBindBuffer(GL_UNIFORM_BUFFER, 0);
glBindVertexArray(vao);
glDrawElementsInstanced(GL_TRIANGLES, 6, GL_UNSIGNED_INT, (void*)0, blocksActive);
glBindVertexArray(0);
Vertex Shader (GLSL)
layout (std140) uniform blockData {
vec2 blockDataPosition[5184];
};
Testing
When I change blockData[i] = glm::vec2(x, y); to blockData[i] = glm::vec2(y, x); (switching y and x), the gaps move to the x-axis.
I have tried switching the x and the y in the for loop, but it does not affect it. This issue is somehow linked to the y variable.
What does affect it is if I switch the x and y around in int i = x * 24 + y;
Setting the vec2 to (x, x) results in a correctly placed diagonal.
Setting the vec2 to (y, y) results in an oddly placed diagonal (below)
Before switching to a UBO, I was just using a uniform in the shader and it worked fine. That is why I believe it has something to do with my sending of data through the UBO.
What is happening is most likely an alignment issue. Under std140, every element of a uniform array is padded to a 16-byte (four-float) alignment. However, without seeing all of your code, it's not possible for me to be completely certain how you are buffering your data. What I can suggest is to query your block offsets using glGetActiveUniformsiv and then align your data based on them. A typical example would be this:
// Generate the names in a loop (easy with std::string / sprintf),
// e.g. "blockDataPosition[0]" ... "blockDataPosition[5183]"
// (with no instance name on the block, members are queried by member name alone)
const char* bufferNames[5184] = { /* generated names */ };
GLuint uniformBlockIndex = glGetUniformBlockIndex(yourprogram, "blockData");
GLuint uniformIndices[5184];
glGetUniformIndices(yourprogram, 5184, bufferNames, uniformIndices);
GLint uniformOffsets[5184];
glGetActiveUniformsiv(yourprogram, 5184, uniformIndices, GL_UNIFORM_OFFSET, uniformOffsets);
The offsets tell you everything you need to know: they are the byte offsets of every name within the block. You can use a for loop to write your data at each offset, or just print them out to see how everything is aligned. This querying should not be done every frame, but once at initialization.
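For illustration only (a sketch, not necessarily how you want to lay out your data): under std140 each array element, even a vec2, is padded out to a 16-byte stride, so one way to match that layout on the CPU side is to store one vec4 per element and leave .zw unused:
// One vec4 per array element matches the std140 16-byte array stride
glm::vec4* blockData = new glm::vec4[24 * 24];
for (int x = 0; x < 24; x++)
    for (int y = 0; y < 24; y++)
        blockData[x * 24 + y] = glm::vec4(x, y, 0.0f, 0.0f); // .zw are padding
// ...and upload sizeof(glm::vec4) * blocksActive bytes instead of sizeof(glm::vec2) * blocksActive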
Tell me if you have any questions or you need a more specific example.

OpenGL texture array initialization

I want to use texture arrays to reduce the high texture binding cost, but I can't upload the data to the texture array. I use the Tao framework. Here's my code:
Gl.glEnable(Gl.GL_TEXTURE_2D_ARRAY_EXT);
Gl.glGenTextures(1, out textureArray);
Gl.glBindTexture(Gl.GL_TEXTURE_2D_ARRAY_EXT, textureArray);
var data = new uint[textureWidth, textureHeight, textureCount];
for (var x = 0; x < textureWidth; x++)
{
for (var y = 0; y < textureHeight; y++)
{
for (var z = 0; z < textureCount; z++)
data[x, y, z] = GetRGBAColor(1, 1, 1, 1);
}
}
Gl.glTexImage3D(Gl.GL_TEXTURE_2D_ARRAY_EXT, 0, Gl.GL_RGBA, textureWidth,
textureHeight, textureCount, 0, Gl.GL_RGBA, Gl.GL_UNSIGNED_BYTE, data);
Console.WriteLine(Glu.gluErrorString(Gl.glGetError()));
The glTexImage3D function says there is an invalid enumerant.
The most likely cause for a GL_INVALID_ENUM in the above code is the
Gl.glEnable(Gl.GL_TEXTURE_2D_ARRAY_EXT);
call.
This is simply not allowed. Array textures cannot be used with the fixed-function pipeline, but only with shaders (which do not need those texture enables at all). The GL_EXT_texture_array spec makes this quite clear:
This extension does not provide for the use of array textures with fixed-function fragment processing. Such support could be added by providing an additional extension allowing applications to pass the new target enumerants (TEXTURE_1D_ARRAY_EXT and TEXTURE_2D_ARRAY_EXT) to Enable and Disable.
There never was any further extension allowing array textures for fixed-function processing (AFAIK)...
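For reference, in plain C/C++ (the Tao calls mirror these one-to-one), allocating and filling an array texture without any glEnable typically looks like this — a sketch assuming tightly packed RGBA8 data, where layerPixels is a hypothetical array of per-layer pixel pointers:
GLuint textureArray;
glGenTextures(1, &textureArray);
glBindTexture(GL_TEXTURE_2D_ARRAY, textureArray);
// Allocate storage for every layer of mip level 0
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8, textureWidth, textureHeight, textureCount,
             0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
// Upload each layer separately
for (int layer = 0; layer < textureCount; ++layer)
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, layer, textureWidth, textureHeight, 1,
                    GL_RGBA, GL_UNSIGNED_BYTE, layerPixels[layer]);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);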
Change the 2nd parameter of glTexImage3D to 1.
I don't know why, but NVIDIA's OpenGL driver seems to need at least 1 level for a texture 2D array object.

Using vertex buffers in jogl, crash when too many triangles

I have written a simple application in Java using JOGL which draws a 3D geometry. The camera can be rotated by dragging the mouse. The application works fine, but drawing the geometry with glBegin(GL_TRIANGLES) ... calls is too slow.
So I started to use vertex buffers. This also works fine until the number of triangles gets larger than 1000000. When that happens, the display driver suddenly crashes and my monitor goes dark. Is there a limit on how many triangles fit in the buffer? I was hoping to get 1000000 triangles rendered at a reasonable frame rate.
I have no idea how to debug this problem. The nasty thing is that I have to reboot Windows after each launch, since I have no other way to get my display working again. Could anyone give me some advice?
The vertices, triangles and normals are stored in arrays float[][] m_vertices, int[][] m_triangles, float[][] m_triangleNormals.
I initialized the buffer with:
// generate a VBO pointer / handle
if (m_vboHandle <= 0) {
int[] vboHandle = new int[1];
m_gl.glGenBuffers(1, vboHandle, 0);
m_vboHandle = vboHandle[0];
}
// interleave vertex / normal data
FloatBuffer data = Buffers.newDirectFloatBuffer(m_triangles.length * 3*3*2);
for (int t=0; t<m_triangles.length; t++)
for (int j=0; j<3; j++) {
int v = m_triangles[t][j];
data.put(m_vertices[v]);
data.put(m_triangleNormals[t]);
}
data.rewind();
// transfer data to VBO
int numBytes = data.capacity() * 4;
m_gl.glBindBuffer(GL.GL_ARRAY_BUFFER, m_vboHandle);
m_gl.glBufferData(GL.GL_ARRAY_BUFFER, numBytes, data, GL.GL_STATIC_DRAW);
m_gl.glBindBuffer(GL.GL_ARRAY_BUFFER, 0);
Then, the scene gets rendered with:
gl.glBindBuffer(GL.GL_ARRAY_BUFFER, m_vboHandle);
gl.glEnableClientState(GL2.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL2.GL_NORMAL_ARRAY);
gl.glVertexPointer(3, GL.GL_FLOAT, 6*4, 0);
gl.glNormalPointer(GL.GL_FLOAT, 6*4, 3*4);
gl.glDrawArrays(GL.GL_TRIANGLES, 0, 3*m_triangles.length);
gl.glDisableClientState(GL2.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL2.GL_NORMAL_ARRAY);
gl.glBindBuffer(GL.GL_ARRAY_BUFFER, 0);
Try checking glGetError() right after the glBufferData call: it reports GL_OUT_OF_MEMORY if a data store of numBytes cannot be allocated.
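In C terms, that check is just the following (a sketch; in JOGL it is the same call on the GL object, e.g. m_gl.glGetError()):
glBufferData(GL_ARRAY_BUFFER, numBytes, data, GL_STATIC_DRAW);
if (glGetError() == GL_OUT_OF_MEMORY)
{
    // Allocation failed: fall back to splitting the geometry across several smaller VBOs
}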