Why is OpenGL not drawing a rectangle correctly? - C++

I'm trying to build a game engine and I want to make it as cross-platform as possible, but my BufferLayout abstraction might be the suspect.
I tried to debug it, and all the numbers are correct. I also tried wrapping the OpenGL calls in error-checking code, but it doesn't report any errors.
void OpenGLVertexArray::AddBuffers(const std::vector<std::shared_ptr<Buffer>>& buffers, const BufferLayout& layout)
{
    BZ_CORE_ASSERT(layout.GetElements().size() == buffers.size(), "Error, buffers and layout elements are not same size!");
    Enable();
    const auto& elements = layout.GetElements();
    uint32 offset = 0;
    for (size_t i = 0; i < buffers.size(); ++i)
    {
        buffers[i]->Bind();
        const auto& element = elements[i];
        GLCall(glEnableVertexAttribArray(i));
        GLCall(glVertexAttribPointer(i, element.count, element.type, element.normalized ? GL_TRUE : GL_FALSE, layout.GetStride(), (const void*)offset));
        offset += BufferElement::GetSize(element.type) * element.count;
        buffers[i]->Unbind();
    }
    Disable();
}
If you need more code, I have published the entire project to GitHub.
https://github.com/WhoseTheNerd/MinecraftPi
I expected a rectangle to be rendered on screen, but it renders a weird triangle instead. https://i.imgur.com/yOIZcsu.png

Your offset processing seems a little strange to my eyes. Firstly, you are casting a 32-bit int into a pointer type. That's going to cause bother if you are using a 64-bit OS. If you change the offset to a uint8_t pointer, that will remove one of the problems (and remove the need for the cast).
const uint8_t* offset = 0;
One other issue I can see is that your offset calculations seem somewhat confused. Are you sure you didn't just mean to pass in 0 for each offset?
// bind an entirely new buffer
buffers[i]->Bind();
/* snip */
// ok, so if buffers[0] contains the vertices. The offset for the next
// buffer will be (numVertices * sizeof(float) * 3) ?
// So then I bind buffers[1] (let's say they contain vertex colours)
// The array itself is (numVertices * sizeof(float) * 3) in size,
// however the offset from the previous iteration is going to point
// past the end of the buffer. That seems wrong to my eyes?
offset += BufferElement::GetSize(element.type) * element.count;
It's almost as though the offset calculation code assumes that all of the data will come from a single buffer. However, you seem to be binding a different buffer per element, which would suggest an offset of zero for each buffer. Something tells me you shouldn't be calculating the offset at all, and should instead add it as a field to your element structure (i.e. you specify it manually for each element), which will allow you to use both individual buffers and multiple elements stored within a single buffer.
One final issue: if element.type is anything other than GL_FLOAT or GL_HALF_FLOAT, you have a bug. For integer types you should be using glVertexAttribIPointer, and for GL_DOUBLE you need to use glVertexAttribLPointer.
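To make that concrete, here is a minimal sketch of what the loop could look like when each attribute lives in its own buffer, reusing the Buffer/BufferElement/GLCall names from the question (treat it as an illustration, not a drop-in fix): the pointer offset is simply 0 for every buffer, and the type is dispatched to the matching attrib-pointer call.
for (size_t i = 0; i < buffers.size(); ++i)
{
    buffers[i]->Bind();
    const auto& element = elements[i];
    // With one attribute per buffer the data is tightly packed, so the stride
    // is just the size of a single element (0 would also work here).
    const GLsizei stride = BufferElement::GetSize(element.type) * element.count;

    GLCall(glEnableVertexAttribArray(i));
    if (element.type == GL_BYTE || element.type == GL_UNSIGNED_BYTE ||
        element.type == GL_SHORT || element.type == GL_UNSIGNED_SHORT ||
        element.type == GL_INT || element.type == GL_UNSIGNED_INT)
    {
        // Integer attributes need the "I" variant, otherwise the values are converted to float.
        GLCall(glVertexAttribIPointer(i, element.count, element.type, stride, nullptr));
    }
    else if (element.type == GL_DOUBLE)
    {
        GLCall(glVertexAttribLPointer(i, element.count, element.type, stride, nullptr));
    }
    else
    {
        GLCall(glVertexAttribPointer(i, element.count, element.type, element.normalized ? GL_TRUE : GL_FALSE, stride, nullptr));
    }
    buffers[i]->Unbind();
}
Note that glVertexAttribIPointer requires OpenGL 3.0 and glVertexAttribLPointer requires OpenGL 4.1 (or ARB_vertex_attrib_64bit).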

Related

Indices Problem with a Batch Renderer (OpenGL)

I'm trying to implement batch rendering for 3D objects in an engine I'm working on, and I can't manage to get the indices right.
In a 3D renderer class I have a Renderer3DData structure that looks like this:
static const uint MaxQuads = 20000;
static const uint MaxVertices = MaxQuads * 4;
static const uint MaxIndices = MaxQuads * 6;
uint IndicesDrawCount = 0; // Debug var
std::vector<uint> Indices;
Ref<IndexBuffer> IBuffer = nullptr;
// Other data like a VBuffer, VArray...
So the Indices vector will store the indices to draw on each batch, while IBuffer is the index buffer class which handles all OpenGL operations ("Ref" is a typedef for a shared pointer).
Then a static Renderer3DData* s_3DData; is initialized in the init function and the index buffer is initialized as follows:
uint* indices = new uint[s_3DData->MaxIndices];
s_3DData->IBuffer = IndexBuffer::Create(indices, s_3DData->MaxIndices);
It is then bound together with the vertex array and the vertex buffer; the initialization is done properly, since all of this works without batching.
On each new batch the vertex array gets bound and the Indices vector gets cleared; then, for each mesh drawn, it gets modified like this:
uint offset = 0;
std::vector<uint> indices = mesh->m_Indices;
for (uint i = 0; i < indices.size(); i += 6)
{
    s_3DData->Indices.push_back(offset + 0 + indices[i]);
    s_3DData->Indices.push_back(offset + 1 + indices[i]);
    s_3DData->Indices.push_back(offset + 2 + indices[i]);
    s_3DData->Indices.push_back(offset + 3 + indices[i]);
    s_3DData->Indices.push_back(offset + 4 + indices[i]);
    s_3DData->Indices.push_back(offset + 5 + indices[i]);
    offset += 4;
    s_3DData->IndicesDrawCount += 6;
}
I don't know how I came up with this way of setting the index buffer; I was just trying things out, and pushing only the indices, or the indices plus the offset, doesn't work either. Finally, on each draw, I do the following:
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, BufferID);
glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0, s_3DData->Indices.size(), s_3DData->Indices.data());
// With the vArray bound:
glDrawElements(GL_TRIANGLES, s_3DData->IndicesDrawCount, GL_UNSIGNED_INT, nullptr);
As I mentioned, when I'm not batching, the drawing (which doesn't go through all this process) works, so the data in the mesh and the vertex/index buffers must be good. What I think is wrong is the way I set up the index buffer, since I'm not sure how to even set it up (unlike the other rendering stuff).
The result is the following (it should be a solid sphere):
The way that "sphere" is rendered makes me think that the indices are wrong. The objects in the center are drawn without batching, so I know it's not the initial setup that's wrong. Does anybody see what I'm doing wrong?
I finally solved it (I'm crying, I've been at this for a long time).
So there was a couple of problems:
First: the function s_3DData->IBuffer = IndexBuffer::Create(indices, s_3DData->MaxIndices); that I posted was doing the following:
glCreateBuffers(1, &m_BufferID);
glBindBuffer(GL_ARRAY_BUFFER, m_BufferID);
glBufferData(GL_ARRAY_BUFFER, count * sizeof(uint), nullptr, GL_STATIC_DRAW);
So the first problem was that I was creating the index buffers with GL_STATIC_DRAW instead of GL_DYNAMIC_DRAW, which is required for batching since we are dynamically updating the buffer (my bad for not posting the function in its entirety; I was pretty tired when I posted it).
Second: the call glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0, s_3DData->Indices.size(), s_3DData->Indices.data()); had the wrong size parameter.
OpenGL expects the size here in bytes, i.e. the total size of the region we want to update, which is not the vector's element count but the element count multiplied by sizeof(uint) (uint in this case, because it is a vector of uint).
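For reference, the corrected upload looks roughly like this (same names as the draw code above):
// The size must be given in bytes, not in number of elements.
glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0, s_3DData->Indices.size() * sizeof(uint), s_3DData->Indices.data());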
Third and final problem: the loop that modified the indices vector on each mesh draw was wrong; it was written from the point of view of drawing quads in 2D (as I was previously testing batching in 2D).
The correct loop is the following:
std::vector<uint> indices = mesh->m_Indices;
for (uint i = 0; i < indices.size(); ++i)
{
    s_3DData->Indices.push_back(s_3DData->IndicesCurrentOffset + indices[i]);
    ++s_3DData->IndicesDrawCount;
    ++s_3DData->RendererStats.IndicesCount; // Debug Purpose
}
s_3DData->IndicesCurrentOffset += mesh->m_MaxIndex;
So now each mesh stores the (max index + 1) that it has (for a quad with indices from 0 to 3, this would be 4).
This way, I can go through all mesh indices while updating the indices that we use to draw and then I can update the current offset value so that we properly store all the indices drawn in order.
Again, I'm not intending this to be fast or performant, I was just learning how to do this (and I did :) ).
The result:

How do I load multiple structs into a single UBO?

I am following the tutorials on: Here.
I have completed everything up to loading models, so my code is similar up to that point.
I am now trying to pass another struct to the uniform buffer object, in a similar way to what was previously shown.
I have created another struct defined outside the application to store the data as follows:
struct Light {
    alignas(16) glm::vec3 position;
    alignas(16) glm::vec3 colour;
};
After doing this, I resized the uniform buffer as follows:
void createUniformBuffers() {
VkDeviceSize bufferSize = sizeof(CameraUBO) + sizeof(Light);
...
Next, when creating the descriptor sets, I added the lightBufferInfo below the already defined bufferInfo as shown below:
...
for (size_t i = 0; i < swapChainImages.size(); i++) {
    VkDescriptorBufferInfo bufferInfo = {};
    bufferInfo.buffer = uniformBuffers[i];
    bufferInfo.offset = 0;
    bufferInfo.range = sizeof(UniformBufferObject);

    VkDescriptorBufferInfo lightBufferInfo = {};
    lightBufferInfo.buffer = uniformBuffers[i];
    lightBufferInfo.offset = 0;
    lightBufferInfo.range = sizeof(Light);
    ...
I then added this to the descriptorWrites array:
...
descriptorWrites[2].sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
descriptorWrites[2].dstSet = descriptorSets[i];
descriptorWrites[2].dstBinding = 2;
descriptorWrites[2].dstArrayElement = 0;
descriptorWrites[2].descriptorType = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
descriptorWrites[2].descriptorCount = 1;
descriptorWrites[2].pBufferInfo = &lightBufferInfo;
...
Now, similarly to the UniformBufferObject, I plan to use the updateUniformBuffer(uint32_t currentImage) function to change the light's position and colour, but first I just tried to set the position to a desired value:
void updateUniformBuffer(uint32_t currentImage) {
    ...
    ubo.proj[1][1] *= -1;

    Light light = {};
    light.position = glm::vec3(0, 10, 10);
    light.colour = glm::vec3(1, 1, 0);

    void* data;
    vkMapMemory(device, uniformBuffersMemory[currentImage], 0, sizeof(ubo), 0, &data);
    memcpy(data, &ubo, sizeof(ubo));
    vkUnmapMemory(device, uniformBuffersMemory[currentImage]);
}
I do not understand how the offset works when trying to pass two objects to a uniform buffer, so I do not know how to copy the light object to uniformBuffersMemory.
How would the offsets be defined in order to get this to work?
A note before reading further: splitting the data for a single UBO into two different structs and descriptors makes passing the data a bit more complicated, as all your sizes and offsets need to be aligned to the minUniformBufferOffsetAlignment limit of your device. If you're starting with Vulkan you may want to split the data either into two UBOs (creating two buffers), or just pass all values as a single struct.
But if you want to continue with the way you described in your post:
First, you need to size your buffer properly. Because your copies need to be aligned to minUniformBufferOffsetAlignment, you probably can't just copy your light data to the area right after your other data. If your device has a minUniformBufferOffsetAlignment of 256 bytes and you want to copy over two host structs, your uniform buffer's size needs to be at least 2 * 256 bytes and not just sizeof(matrices) + sizeof(lights). So you need to adjust the bufferSize (the VkDeviceSize you pass at buffer creation) accordingly.
Next you need to offset your lightBufferInfo VkDescriptorBufferInfo:
lightBufferInfo.offset = std::max(sizeof(Light), minUniformBufferOffsetAlignment);
This will let your vertex shader know where to start fetching data for that binding.
On most NVIDIA GPUs, for example, minUniformBufferOffsetAlignment is 256 bytes, whereas the size of your Light struct is 32 bytes. So to make this work on such a GPU you have to align at 256 bytes instead of 32.
Inspecting your setup in RenderDoc should then look similar to this:
Note that for more complex allocations and scenarios you'd need to properly get the right alignment size depending on the size of your data structure instead of using a simple max like above.
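As a rough illustration, the usual round-up-to-alignment calculation looks like this (alignedSize is a made-up helper, not part of the Vulkan API; minUniformBufferOffsetAlignment is assumed to have been read from VkPhysicalDeviceLimits and is guaranteed by the spec to be a power of two):
// Round a size up to the next multiple of the device's alignment requirement.
VkDeviceSize alignedSize(VkDeviceSize size, VkDeviceSize minAlignment)
{
    return (size + minAlignment - 1) & ~(minAlignment - 1);
}

// The light data then starts at the aligned size of the camera UBO,
// and the total buffer size covers both aligned blocks:
VkDeviceSize uboSlot    = alignedSize(sizeof(UniformBufferObject), minUniformBufferOffsetAlignment);
VkDeviceSize lightSlot  = alignedSize(sizeof(Light), minUniformBufferOffsetAlignment);
VkDeviceSize bufferSize = uboSlot + lightSlot; // lightBufferInfo.offset would then be uboSlot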
And now when updating your uniform buffers you need to map and copy to the proper offset too:
void* mapped = nullptr;
// Copy matrix data to offset for binding 0
vkMapMemory(device, uniformBuffersMemory[currentImage].memory, 0, sizeof(ubo), 0, &mapped);
memcpy(mapped, &ubo, sizeof(ubo));
vkUnmapMemory(device, uniformBuffersMemory[currentImage].memory);
// Copy light data to offset for binding 1
vkMapMemory(device, uniformBuffersMemory[currentImage].memory, std::max(sizeof(ubo), minUniformBufferOffsetAlignment), sizeof(Light), 0, &mapped);
memcpy(mapped, &uboLight, sizeof(Light));
vkUnmapMemory(device, uniformBuffersMemory[currentImage].memory);
Note that you may want to only map once after creating the buffers for performance reasons rather than mapping on every update. Just store the offset pointer somewhere in your code.
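A small sketch of that persistent-mapping variant, assuming the memory was allocated host-visible and host-coherent (the pointer names are illustrative, not from the code above):
// Map once after creating the buffer and keep the pointers around.
void* mapped = nullptr;
vkMapMemory(device, uniformBuffersMemory[currentImage], 0, VK_WHOLE_SIZE, 0, &mapped);
uint8_t* base = static_cast<uint8_t*>(mapped);
void* mappedCamera = base;                                                                          // matrix slot
void* mappedLight  = base + std::max<VkDeviceSize>(sizeof(ubo), minUniformBufferOffsetAlignment);   // light slot

// Per frame: plain memcpy, no vkMapMemory/vkUnmapMemory.
memcpy(mappedCamera, &ubo, sizeof(ubo));
memcpy(mappedLight, &light, sizeof(light));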

GLSL Compute Shader: Setting a buffer with a lookup table results in no data written, while setting the same buffer with other data works

I am attempting to implement a slightly modified version of this standard marching cubes algorithm in a compute shader.
I have reached the stage at which triTable is used to insert the correct vertex indices into a buffer, and have modified the table to be one-dimensional (const int triTable[4096]={-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,0,8,3...}).
The following code shows the error that I am experiencing (it does not implement the algorithm; however, it demonstrates the current issue fully):
layout(binding = 1) buffer Grid
{
    float GridData[]; // contains 512*512*512 data volume previously generated, unused in this test case
};

uniform uint marchableCount;
uniform uint pointCount;

layout(std430, binding = 4) buffer X {uvec4 marchableList[];}; // format is x,y,z,cubeIndex
layout(std430, binding = 5) buffer v {vec4 vertices[];};
layout(std430, binding = 6) buffer n {vec4 normals[];};
layout(binding = 7) uniform atomic_uint triCount;

void main()
{
    uvec3 gid = marchableList[gl_GlobalInvocationID.x].xyz; // xyz of grid cell
    int E = int(edgeTable[marchableList[gl_GlobalInvocationID.x].w]);
    if (E != 0)
    {
        uint cubeIndex = marchableList[gl_GlobalInvocationID.x].w;
        uint index = atomicCounterIncrement(triCount);
        int tCount = 0; // unused in this test, used for iteration in actual algorithm
        int tGet = tCount + 16*int(cubeIndex); // correction from converting 2d array to 1d array
        vertices[index] = vec4(tGet);
    }
}
This code produces the expected values: the vertices buffer is filled with data and the atomic counter increments.
Changing this line:
vertices[index] = vec4(tGet);
to
vertices[index] = vec4(triTable[tGet]);
or
vertices[index] = vec4(triTable[tGet]+1);
(demonstrating that triTable is not coincidentally returning zeros)
results in what appears to be a complete failure of the shader: the buffer is filled with zeros and the atomic counter does not increment. No error messages are output when the shader is compiled. tGet is less than 4096.
The following test cases also produce the correct output:
vertices[index] = vec4(triTable[3]); //-1
vertices[index] = vec4(triTable[4095]); //also -1
showing that triTable is in fact implemented correctly
What causes the shader to have issues in these very specific cases?
I'm more surprised that const int triTable[4096] = {...}; compiles at all. That array, if it is actually needed, is 16KB in size. That's a lot for a shader, even if the array lives in shared memory.
What is most likely happening is that, whenever the compiler detects a usage of this array that it can't optimize down to a simple value (triTable[3] will always be -1, so the compiler doesn't need to store the whole table for that), the compilation either fails or results in a non-functional shader.
It would be best to make this table a uniform buffer. An SSBO might work too, but some hardware implements uniform blocks through specialized memory rather than with a global memory fetch.
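A minimal sketch of what moving the table into a uniform block could look like, assuming the 4096-entry table is moved to the application side as an int[4096] host array (the binding point 8 and block name TriTable are arbitrary choices for illustration):
// GLSL side, replacing the const int triTable[4096] array:
//   layout(std140, binding = 8) uniform TriTable { ivec4 triTable[1024]; };
// std140 pads each int array element to 16 bytes, so packing the 4096 ints into
// ivec4s keeps the block at 16 KB; index it as triTable[i / 4][i % 4].

// C++ side: upload the host int[4096] table once and bind it to the same binding point.
GLuint triTableUbo = 0;
glGenBuffers(1, &triTableUbo);
glBindBuffer(GL_UNIFORM_BUFFER, triTableUbo);
glBufferData(GL_UNIFORM_BUFFER, sizeof(triTable), triTable, GL_STATIC_DRAW);
glBindBufferBase(GL_UNIFORM_BUFFER, 8, triTableUbo);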

OpenGL 4.5 - Shader storage buffer objects layout

I'm trying my hand at shader storage buffer objects (aka Buffer Blocks) and there are a couple of things I don't fully grasp. What I'm trying to do is to store the (simplified) data of an indeterminate number of lights n in them, so my shader can iterate through them and perform calculations.
Let me start by saying that I get the correct results, and no errors from OpenGL. However, it bothers me not to know why it is working.
So, in my shader, I got the following:
struct PointLight {
    vec3 pos;
    float intensity;
};

layout (std430, binding = 0) buffer PointLights {
    PointLight pointLights[];
};

void main() {
    PointLight light;
    for (int i = 0; i < pointLights.length(); i++) {
        light = pointLights[i];
        // etc
    }
}
and in my application:
struct PointLightData {
    glm::vec3 pos;
    float intensity;
};

class PointLight {
    // ...
    PointLightData data;
    // ...
};
std::vector<PointLight*> pointLights;
glGenBuffers(1, &BBO);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, BBO);
glNamedBufferStorage(BBO, n * sizeof(PointLightData), NULL, GL_DYNAMIC_STORAGE_BIT);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, BBO);
...
for (unsigned int i = 0; i < pointLights.size(); i++) {
    glNamedBufferSubData(BBO, i * sizeof(PointLightData), sizeof(PointLightData), &(pointLights[i]->data));
}
In this last loop I'm storing a PointLightData struct with an offset equal to its size times the number of them I've already stored (so offset 0 for the first one).
So, as I said, everything seems correct. Binding points are correctly set to the zeroth one, I have enough memory allocated for my objects, etc. The graphical results are OK.
Now to my questions. I am using std430 as the layout - in fact, if I change it to std140 as I originally did, it breaks. Why is that? My hypothesis is that the layout generated by std430 for the shader's PointLights buffer block happily matches the one generated by the compiler for my application's PointLightData struct (as you can see in that loop, I'm blindly storing one right after the other). Do you think that's the case?
Now, assuming I'm correct in that assumption, the obvious solution would be to do the mapping of sizes and offsets myself, querying OpenGL with glGetUniformIndices and glGetActiveUniformsiv (the latter called with GL_UNIFORM_SIZE and GL_UNIFORM_OFFSET), but I have a sneaking suspicion that these two only work with uniform blocks and not buffer blocks like the one I'm using. At least, when I do the following, OpenGL throws a tantrum, gives me back a 1281 error and returns a very weird number for the indices (something like 3432898282 or whatever):
const char * names[2] = {
"pos", "intensity"
};
GLuint indices[2];
GLint size[2];
GLint offset[2];
glGetUniformIndices(shaderProgram->id, 2, names, indices);
glGetActiveUniformsiv(shaderProgram->id, 2, indices, GL_UNIFORM_SIZE, size);
glGetActiveUniformsiv(shaderProgram->id, 2, indices, GL_UNIFORM_OFFSET, offset);
Am I correct in saying that glGetUniformIndices and glGetActiveUniformsiv do not apply to buffer blocks?
If they do not, or if the fact that it's working is, as I imagine, just a coincidence, how could I do the mapping manually? I checked appendix H of the programming guide and the wording for arrays of structures is somewhat confusing. If I can't query OpenGL for sizes/offsets for what I'm trying to do, I guess I could compute them manually (cumbersome as it is), but I'd appreciate some help there, too.
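For what it's worth, buffer block members are introspected through the program interface query API (glGetProgramResourceIndex / glGetProgramResourceiv with GL_BUFFER_VARIABLE) rather than the glGetActiveUniformsiv path. A minimal sketch, assuming the shaderProgram->id from above and typical resource name strings (these may need adjusting for your block):
// Query the byte offsets of the SSBO members via the program interface query API (GL 4.3+).
const char* memberNames[2] = { "pointLights[0].pos", "pointLights[0].intensity" };
GLint offsets[2] = {};
for (int i = 0; i < 2; ++i) {
    GLuint idx = glGetProgramResourceIndex(shaderProgram->id, GL_BUFFER_VARIABLE, memberNames[i]);
    const GLenum prop = GL_OFFSET;
    glGetProgramResourceiv(shaderProgram->id, GL_BUFFER_VARIABLE, idx, 1, &prop, 1, nullptr, &offsets[i]);
}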

Generating Smooth Normals from active Vertex Array

I'm attempting to hack and modify several rendering features of an old OpenGL fixed-pipeline game by hooking into OpenGL calls, and my current mission is to implement shader lighting. I've already created an appropriate shader program that lights most of my objects correctly, but this game's terrain is drawn with no normal data provided.
The game calls:
void glVertexPointer(GLint size, GLenum type, GLsizei stride, const GLvoid * pointer);
and
void glDrawElements(GLenum mode, GLsizei count, GLenum type, const GLvoid * indices);
to define and draw the terrain, so I have both of these functions hooked. I hope to loop through the vertex array at the given pointer and calculate normals for each surface, on either every glDrawElements call or every glVertexPointer call, but I'm having trouble coming up with an approach to do so - specifically, how to read, iterate over, and understand the data at the pointer. In this case, the usual parameters for the glVertexPointer calls are size = 3, type = GL_FLOAT, stride = 16, pointer = some pointer. Hooking glVertexPointer, I don't know how I could iterate through the pointer and grab all the vertices for the mesh, considering I don't know the total count of all the vertices, nor do I understand how the data is structured at the pointer given the stride - and similarly how I should structure the normal array.
Would it be a better idea to try to calculate the normals in glDrawElements for each index specified in the index array?
Depending on your vertex array building procedure, indices would be the only relevant information for building your normals.
Defining the averaged normal for one vertex is simple if you add a normal field to your vertex array and sum up all the normal contributions while parsing your indices array.
You then have to divide each normal sum by the number of times the vertex is repeated in the indices; that count can be kept in a temporary array indexed by vertex (incremented each time a normal is added to the vertex).
So, to be more clear:
Vertex[vertexCount]: {Pos, Normal}
normalCount[vertexCount]: int count
Indices[indicesCount]: int vertexIndex
You may have up to 6 normals per vertex, so add a temporary array of normals to average them for each vertex:
NormalTemp[vertexCount][6]: {x, y, z}
Then, parsing your index array (if it's triangles):
for i = 0 to indicesCount step 3
    for each corner of the triangle (t from 0 to 2)
        NormalTemp[Indices[i + t]][normalCount[Indices[i + t]]] = normal of this triangle (cross product of the two edge vectors towards the other corners)
        normalCount[Indices[i + t]]++
Then you have to divide your sums by the count:
for i = 0 to vertexCount step 1
    sum = 0
    for j = 0 to normalCount[i] step 1
        sum += NormalTemp[i][j]
    normal[i] = sum / normalCount[i]
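A rough C++ translation of that idea - a sketch only, assuming float positions read through the pointer captured from glVertexPointer with the 16-byte stride from the question, 32-bit indices captured from glDrawElements, and a vertex count derived from the largest index plus one (the function and type names here are made up):
#include <cmath>
#include <cstdint>
#include <initializer_list>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 cross(Vec3 a, Vec3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }

// positions: base pointer from glVertexPointer; stride in bytes (16 in the question).
// indices/indexCount: from glDrawElements; vertexCount = max index + 1.
std::vector<Vec3> ComputeSmoothNormals(const uint8_t* positions, size_t stride,
                                       const uint32_t* indices, size_t indexCount,
                                       size_t vertexCount)
{
    auto pos = [&](uint32_t i) { return *reinterpret_cast<const Vec3*>(positions + i * stride); };

    std::vector<Vec3> normals(vertexCount, Vec3{0, 0, 0});
    for (size_t i = 0; i + 2 < indexCount; i += 3)
    {
        uint32_t a = indices[i], b = indices[i + 1], c = indices[i + 2];
        // Face normal: cross product of two triangle edges (area-weighted, not yet normalized).
        Vec3 n = cross(sub(pos(b), pos(a)), sub(pos(c), pos(a)));
        for (uint32_t v : { a, b, c })
        {
            normals[v].x += n.x; normals[v].y += n.y; normals[v].z += n.z;
        }
    }
    // Normalizing the accumulated sums gives the same direction as averaging them.
    for (Vec3& n : normals)
    {
        float len = std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
        if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    }
    return normals;
}
Note that summing unnormalized cross products weights large triangles more heavily, which is usually a reasonable default for terrain.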
While I like and have voted up j-p's answer, I would still like to point out that you could get away with calculating one normal per face and just using it for all 3 vertices. It would be faster, easier, and sometimes even more accurate.