How do I load multiple structs into a single UBO? - c++

I am following the tutorials on: Here.
I have completed up till loading models so my code is similar to that point.
I am now trying to pass another struct to the uniform buffer object, in a similar way to what was shown previously.
I have created another struct defined outside the application to store the data as follows:
struct Light {
    alignas(16) glm::vec3 position;
    alignas(16) glm::vec3 colour;
};
After doing this, I resized the uniform buffer size in the following way:
void createUniformBuffers() {
    VkDeviceSize bufferSize = sizeof(CameraUBO) + sizeof(Light);
    ...
Next, when creating the descriptor sets, I added the lightBufferInfo below the already defined bufferInfo as shown below:
...
for (size_t i = 0; i < swapChainImages.size(); i++) {
    VkDescriptorBufferInfo bufferInfo = {};
    bufferInfo.buffer = uniformBuffers[i];
    bufferInfo.offset = 0;
    bufferInfo.range = sizeof(UniformBufferObject);

    VkDescriptorBufferInfo lightBufferInfo = {};
    lightBufferInfo.buffer = uniformBuffers[i];
    lightBufferInfo.offset = 0;
    lightBufferInfo.range = sizeof(Light);
    ...
I then added this to the descriptorWrites array:
...
descriptorWrites[2].sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
descriptorWrites[2].dstSet = descriptorSets[i];
descriptorWrites[2].dstBinding = 2;
descriptorWrites[2].dstArrayElement = 0;
descriptorWrites[2].descriptorType = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
descriptorWrites[2].descriptorCount = 1;
descriptorWrites[2].pBufferInfo = &lightBufferInfo;
...
Now, similarly to the UniformBufferObject, I plan to use the updateUniformBuffer(uint32_t currentImage) function to change the light's position and colour, but first I just tried to set the position to a desired value:
void updateUniformBuffer(uint32_t currentImage) {
    ...
    ubo.proj[1][1] *= -1;

    Light light = {};
    light.position = glm::vec3(0, 10, 10);
    light.colour = glm::vec3(1, 1, 0);

    void* data;
    vkMapMemory(device, uniformBuffersMemory[currentImage], 0, sizeof(ubo), 0, &data);
    memcpy(data, &ubo, sizeof(ubo));
    vkUnmapMemory(device, uniformBuffersMemory[currentImage]);
}
I do not understand how the offset works when trying to pass two objects to a uniform buffer, so I do not know how to copy the light object to uniformBuffersMemory.
How would the offsets be defined in order to get this to work?

A note before reading further: splitting the data for a single UBO into two different structs and descriptors makes passing the data a bit more complicated, as all your sizes and write offsets need to be aligned to the minUniformBufferOffsetAlignment limit of your device. If you're starting out with Vulkan, you may instead want to split the data into two UBOs (creating two buffers), or just pass all values as a single struct.
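For the single-struct route, a minimal sketch could look like the following (the struct name SceneUBO is made up here, and the model/view/proj members are assumed to match the tutorial's camera UBO):
// One combined host struct: one descriptor, one write, no extra alignment handling.
struct SceneUBO {
    alignas(16) glm::mat4 model;
    alignas(16) glm::mat4 view;
    alignas(16) glm::mat4 proj;
    alignas(16) glm::vec3 lightPosition;
    alignas(16) glm::vec3 lightColour;
};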
But if you want to continue with the way you described in your post:
First you need to properly size your buffer. Because your copies need to be aligned to minUniformBufferOffsetAlignment, you probably can't just copy your light data into the area right after your other data. If your device has a minUniformBufferOffsetAlignment of 256 bytes and you want to copy over two host structs, your uniform buffer's size needs to be at least 2 * 256 bytes, and not just sizeof(matrices) + sizeof(lights). So you need to adjust your bufferSize (a VkDeviceSize value) accordingly.
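A minimal sizing sketch, assuming the usual physicalDevice handle from the tutorial:
// Query the alignment once and round the first block up to it when sizing the buffer.
VkPhysicalDeviceProperties props{};
vkGetPhysicalDeviceProperties(physicalDevice, &props);
VkDeviceSize minAlign = props.limits.minUniformBufferOffsetAlignment;

// Round a size up to the next multiple of the (power-of-two) alignment.
auto alignUp = [](VkDeviceSize size, VkDeviceSize alignment) {
    return (size + alignment - 1) & ~(alignment - 1);
};

VkDeviceSize cameraBlock = alignUp(sizeof(UniformBufferObject), minAlign);
VkDeviceSize bufferSize  = cameraBlock + sizeof(Light); // the last block needs no trailing padding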
Next you need to offset your lightBufferInfo VkDescriptorBufferInfo:
lightBufferInfo.offset = std::max(sizeof(UniformBufferObject), minUniformBufferOffsetAlignment);
This will let your vertex shader know where to start fetching data for that binding.
On most NVIDIA GPUs, for example, minUniformBufferOffsetAlignment is 256 bytes, whereas the size of your Light struct is only 32 bytes. So to make this work on such a GPU you have to align at 256 bytes instead of 32.
Inspecting your setup in RenderDoc is a good way to verify that both ranges start at the expected offsets within the buffer.
Note that for more complex allocations and scenarios you'd need to round each offset up to the proper alignment based on the size of your data structures, instead of using a simple max like above.
And now when updating your uniform buffers you need to map and copy to the proper offset too:
void* mapped = nullptr;

// Copy the camera/matrix data to the start of the buffer (binding 0)
vkMapMemory(device, uniformBuffersMemory[currentImage], 0, sizeof(ubo), 0, &mapped);
memcpy(mapped, &ubo, sizeof(ubo));
vkUnmapMemory(device, uniformBuffersMemory[currentImage]);

// Copy the light data to the aligned offset (binding 2 in your setup)
vkMapMemory(device, uniformBuffersMemory[currentImage], std::max(sizeof(ubo), minUniformBufferOffsetAlignment), sizeof(Light), 0, &mapped);
memcpy(mapped, &light, sizeof(Light));
vkUnmapMemory(device, uniformBuffersMemory[currentImage]);
Note that you may want to only map once after creating the buffers for performance reasons rather than mapping on every update. Just store the offset pointer somewhere in your code.
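A rough sketch of that approach, assuming host-visible and host-coherent memory as in the tutorial, a hypothetical uniformBuffersMapped array to hold the pointers, and the cameraBlock/bufferSize values from the sizing code above:
// Once, right after creating the buffers: map the whole range and keep the pointer.
void* mappedBase = nullptr;
vkMapMemory(device, uniformBuffersMemory[i], 0, bufferSize, 0, &mappedBase);
uniformBuffersMapped[i] = mappedBase;

// Every frame: write both blocks at their precomputed offsets, no map/unmap needed.
char* base = static_cast<char*>(uniformBuffersMapped[currentImage]);
memcpy(base, &ubo, sizeof(ubo));
memcpy(base + cameraBlock, &light, sizeof(Light));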

Related

How can I convert a 2D RGB array into a dds file using C++?

I've got a 2D array of RGB values that represent a plane's painted vertices. I'd like to store the final vertex colours of the plane in a .dds file so I can load the .dds file later on as a texture.
How might I approach this?
Thanks to Chuck in the comments, I've found this solution works for me. There are likely improvements to be made. The approach can be broken into a few steps.
Step one is to capture the device context and D3D device to use for rendering, and to set up the correct texture description for creating the texture that will be saved using SaveDDSTextureToFile (ScreenGrab). The description below works when each colour channel (and the alpha value) is stored as a four-byte float.
void DisplayChunk::SaveVertexColours(std::shared_ptr<DX::DeviceResources> DevResources)
{
    // Setup D3DDeviceContext and D3DDevice
    auto devicecontext = DevResources->GetD3DDeviceContext();
    auto device = DevResources->GetD3DDevice();

    // Create Texture2D and Texture2D Description
    ID3D11Texture2D* terrain_texture;
    D3D11_TEXTURE2D_DESC texture_desc;

    // Set up a texture description
    texture_desc.Width = TERRAINRESOLUTION;
    texture_desc.Height = TERRAINRESOLUTION;
    texture_desc.MipLevels = texture_desc.ArraySize = 1;
    texture_desc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
    texture_desc.SampleDesc.Count = 1;
    texture_desc.SampleDesc.Quality = 0;
    texture_desc.Usage = D3D11_USAGE_DYNAMIC;
    texture_desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
    texture_desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
    texture_desc.MiscFlags = 0;
Step two is to create a vector of floats and populate it with the relevant RGBA values. I used a vector for convenience, pushing the separate float values for R, G, B, and A on each iteration. This way, at each vertex position you push back the required sixteen bytes (four bytes from each of the four floats, as discussed above).
    // Create vertex colour vector
    std::vector<float> colour_vector;
    for (int i = 0; i < TERRAINRESOLUTION; i++) {
        for (int j = 0; j < TERRAINRESOLUTION; j++) {
            colour_vector.push_back((float)m_terrainGeometry[i][j].color.x);
            colour_vector.push_back((float)m_terrainGeometry[i][j].color.y);
            colour_vector.push_back((float)m_terrainGeometry[i][j].color.z);
            colour_vector.push_back((float)m_terrainGeometry[i][j].color.w);
        }
    }
Step three is to fill a buffer with the vertex RGBA values. The buffer size must equal the total number of bytes required for storage: four bytes per float, four floats per vertex, and TERRAINRESOLUTION * TERRAINRESOLUTION vertices. For example, a 10 x 10 terrain needs 4 (bytes) x 4 (floats) x 10 (width) x 10 (height) = 1600 bytes to store the RGBA of each vertex.
    // Initialise buffer parameters
    const int components = 4;
    const int length = components * TERRAINRESOLUTION * TERRAINRESOLUTION;

    // Fill buffer with vertex colours (length floats, i.e. length * sizeof(float) bytes)
    float* buffer = new float[length];
    for (int i = 0; i < length; i++)
        buffer[i] = colour_vector[i];
The final step is to create the texture data using the contents of the buffer created above. pSysMem requires a pointer to the buffer, used as the initialisation data. SysMemPitch must be set to the size of one row in bytes. Using CreateTexture2D, a new texture can be created from the stored values, and SaveDDSTextureToFile saves the texture resource to an external .dds file. Don't forget to delete the buffer after use.
    // Set the texture data using the buffer contents
    D3D11_SUBRESOURCE_DATA texture_data;
    texture_data.pSysMem = (void*)buffer;
    texture_data.SysMemPitch = TERRAINRESOLUTION * components * sizeof(float);

    // Create the texture using the terrain colour data
    device->CreateTexture2D(&texture_desc, &texture_data, &terrain_texture);

    // Save the texture to a .dds file
    HRESULT hr = SaveDDSTextureToFile(devicecontext, terrain_texture, L"terrain_output.dds");

    // Delete the buffer
    delete[] buffer;
}
Some resources I used whilst implementing:
(DDS Guide)
(ScreenGrab Source)
(ScreenGrab Example)
(Creating textures in DirectX)
(Addressing scheme example (navigating buffer/array correctly))

Why is OpenGL not drawing rectangle correctly

I'm trying to build a game engine and I want to make it as cross-platform as possible, but my BufferLayout abstraction might be the suspect.
I tried to debug it, and all the numbers look correct. I also wrapped the OpenGL calls in error-checking code, but it doesn't report any errors.
void OpenGLVertexArray::AddBuffers(const std::vector<std::shared_ptr<Buffer>>& buffers, const BufferLayout& layout)
{
    BZ_CORE_ASSERT(layout.GetElements().size() == buffers.size(), "Error, buffers and layout elements are not same size!");
    Enable();
    const auto& elements = layout.GetElements();
    uint32 offset = 0;
    for (size_t i = 0; i < buffers.size(); ++i)
    {
        buffers[i]->Bind();
        const auto& element = elements[i];
        GLCall(glEnableVertexAttribArray(i));
        GLCall(glVertexAttribPointer(i, element.count, element.type, element.normalized ? GL_TRUE : GL_FALSE, layout.GetStride(), (const void*)offset));
        offset += BufferElement::GetSize(element.type) * element.count;
        buffers[i]->Unbind();
    }
    Disable();
}
If you need more code, I have published the entire project to GitHub.
https://github.com/WhoseTheNerd/MinecraftPi
I expected a rectangle to be rendered on screen, but it renders a weird triangle instead. https://i.imgur.com/yOIZcsu.png
Your offset processing seems a little strange to my eyes. Firstly, you are casting a 32-bit int into a pointer type. That's going to cause bother if you are using a 64-bit OS. If you change the offset to a uint8_t pointer, that removes one of the problems (and removes the need for the cast).
const uint8_t* offset = 0;
One other issue I can see is that your offset calculations seem somewhat confused. Are you sure you didn't just mean to pass in 0 for each offset?
// bind an entirely new buffer
buffers[i]->Bind();
/* snip */
// Say buffers[0] contains the vertex positions (3 floats per vertex). After the
// first iteration, offset becomes sizeof(float) * 3.
// On the next iteration you bind buffers[1] (let's say it contains the vertex
// colours), but you pass that accumulated offset to glVertexAttribPointer, so the
// colour attribute starts 12 bytes into a buffer whose data begins at 0.
// That seems wrong to my eyes?
offset += BufferElement::GetSize(element.type) * element.count;
It's almost as though the offset calculation code assumes that all of the data will come from a single buffer. However, you are binding a different buffer per element, which would imply an offset of zero for each buffer. Something tells me you shouldn't be calculating the offset at all, and should instead add it as a field to your element structure (i.e. specify it manually for each element), which would let you use both individual buffers and multiple elements stored within a single buffer.
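Something like this (a rough sketch with hypothetical field names, not your actual BufferElement definition):
// Store the offset per element instead of accumulating it across buffers.
struct BufferElement
{
    uint32_t    count;
    GLenum      type;
    bool        normalized;
    std::size_t offset; // byte offset into the buffer this element reads from (0 for a dedicated buffer)
};

// Inside AddBuffers: one attribute per element, each using its own offset.
for (size_t i = 0; i < buffers.size(); ++i)
{
    buffers[i]->Bind();
    const auto& element = elements[i];
    GLCall(glEnableVertexAttribArray(static_cast<GLuint>(i)));
    GLCall(glVertexAttribPointer(static_cast<GLuint>(i), element.count, element.type,
                                 element.normalized ? GL_TRUE : GL_FALSE,
                                 layout.GetStride(),
                                 reinterpret_cast<const void*>(element.offset)));
    buffers[i]->Unbind();
}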
One final issue: if element.type is anything other than GL_FLOAT or GL_HALF_FLOAT, you have a bug. For integer types you should be using glVertexAttribIPointer, and for GL_DOUBLE you need to use glVertexAttribLPointer.
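A sketch of how that dispatch might look, reusing the per-element offset from the sketch above and assuming integer attributes are consumed as integers in the shader:
switch (element.type)
{
case GL_FLOAT:
case GL_HALF_FLOAT:
    GLCall(glVertexAttribPointer(static_cast<GLuint>(i), element.count, element.type,
                                 element.normalized ? GL_TRUE : GL_FALSE,
                                 layout.GetStride(), reinterpret_cast<const void*>(element.offset)));
    break;
case GL_DOUBLE:
    // Doubles need the L variant (no normalization parameter).
    GLCall(glVertexAttribLPointer(static_cast<GLuint>(i), element.count, element.type,
                                  layout.GetStride(), reinterpret_cast<const void*>(element.offset)));
    break;
default:
    // GL_BYTE, GL_UNSIGNED_BYTE, GL_SHORT, GL_INT, ... read as integers in the shader.
    GLCall(glVertexAttribIPointer(static_cast<GLuint>(i), element.count, element.type,
                                  layout.GetStride(), reinterpret_cast<const void*>(element.offset)));
    break;
}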

GLSL Compute Shader Setting buffer with lookup table results in no data written, setting the same buffer with other data works

I am attempting to implement a slightly modified version of this standard marching cubes algorithm in a compute shader.
I have reached the stage at which triTable is used to insert the correct vertex indices into a buffer, and I have modified the table to be one-dimensional (const int triTable[4096]={-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,0,8,3...}).
The following code shows the error I am experiencing (it does not implement the algorithm, but it demonstrates the current issue fully):
layout(binding = 1) buffer Grid
{
    float GridData[]; // contains the 512*512*512 data volume previously generated, unused in this test case
};

uniform uint marchableCount;
uniform uint pointCount;

layout(std430, binding = 4) buffer X { uvec4 marchableList[]; }; // format is x,y,z,cubeIndex
layout(std430, binding = 5) buffer v { vec4 vertices[]; };
layout(std430, binding = 6) buffer n { vec4 normals[]; };
layout(binding = 7) uniform atomic_uint triCount;

void main()
{
    uvec3 gid = marchableList[gl_GlobalInvocationID.x].xyz; // xyz of grid cell
    int E = int(edgeTable[marchableList[gl_GlobalInvocationID.x].w]);
    if (E != 0)
    {
        uint cubeIndex = marchableList[gl_GlobalInvocationID.x].w;
        uint index = atomicCounterIncrement(triCount);
        int tCount = 0; // unused in this test, used for iteration in the actual algorithm
        int tGet = tCount + 16 * int(cubeIndex); // correction from converting the 2d array to a 1d array
        vertices[index] = vec4(tGet);
    }
}
This code produces the expected values: the vertices buffer is filled with data and the atomic counter increments.
Changing this line:
vertices[index] = vec4(tGet);
to
vertices[index] = vec4(triTable[tGet]);
or
vertices[index] = vec4(triTable[tGet]+1);
(demonstrating that triTable is not coincidentally returning zeros)
results in what appears to be a complete failure of the shader: the buffer is filled with zeros and the atomic counter does not increment. No error messages are output when the shader is compiled. tGet is less than 4096.
The following test cases also produce the correct output:
vertices[index] = vec4(triTable[3]); //-1
vertices[index] = vec4(triTable[4095]); //also -1
showing that triTable is, in fact, implemented correctly.
What causes the shader to have issues in these very specific cases?
I'm more surprised that const int triTable[4096] = {...}; compiles at all. That array, if it actually has to be stored, is 16KB in size. That's a lot for a shader, even if the array lives in shared memory.
What is most likely happening is that, whenever the compiler detects a usage of this array that it can't optimize down to a simple value (triTable[3] will always be -1, so for that access the compiler doesn't need to store the whole table), the compilation either fails or produces a non-functional shader.
It would be best to make this table a uniform buffer. An SSBO might work too, but some hardware implements uniform blocks through specialized memory rather than with a global memory fetch.
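On the application side, the upload could look roughly like this (a sketch, assuming the application keeps the table in a C++ array int triTable[4096]; the binding index is arbitrary). In the shader, the table is typically declared as an array of 1024 ivec4 and indexed with triTable[i / 4][i % 4], to avoid std140's 16-byte per-int array stride:
// Upload the 4096-entry table once into a uniform buffer object
// instead of compiling it into the shader as a const array.
GLuint triTableUBO = 0;
glGenBuffers(1, &triTableUBO);
glBindBuffer(GL_UNIFORM_BUFFER, triTableUBO);
glBufferData(GL_UNIFORM_BUFFER, sizeof(triTable), triTable, GL_STATIC_DRAW); // 4096 * sizeof(int) = 16KB
glBindBufferBase(GL_UNIFORM_BUFFER, 8, triTableUBO); // binding 8 chosen arbitrarily here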

OpenGL 4.5 - Shader storage buffer objects layout

I'm trying my hand at shader storage buffer objects (aka Buffer Blocks) and there are a couple of things I don't fully grasp. What I'm trying to do is to store the (simplified) data of an indeterminate number of lights n in them, so my shader can iterate through them and perform calculations.
Let me start by saying that I get the correct results, and no errors from OpenGL. However, it bothers me not to know why it is working.
So, in my shader, I got the following:
struct PointLight {
    vec3 pos;
    float intensity;
};

layout (std430, binding = 0) buffer PointLights {
    PointLight pointLights[];
};

void main() {
    PointLight light;
    for (int i = 0; i < pointLights.length(); i++) {
        light = pointLights[i];
        // etc
    }
}
and in my application:
struct PointLightData {
    glm::vec3 pos;
    float intensity;
};

class PointLight {
    // ...
    PointLightData data;
    // ...
};

std::vector<PointLight*> pointLights;
glGenBuffers(1, &BBO);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, BBO);
glNamedBufferStorage(BBO, n * sizeof(PointLightData), NULL, GL_DYNAMIC_STORAGE_BIT);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, BBO);
...
for (unsigned int i = 0; i < pointLights.size(); i++) {
    glNamedBufferSubData(BBO, i * sizeof(PointLightData), sizeof(PointLightData), &(pointLights[i]->data));
}
In this last loop I'm storing a PointLightData struct with an offset equal to its size times the number of them I've already stored (so offset 0 for the first one).
So, as I said, everything seems correct: the binding points are set to the zeroth index, I have enough memory allocated for my objects, etc. The graphical results are OK.
Now to my questions. I am using std430 as the layout - in fact, if I change it to std140 as I originally did, it breaks. Why is that? My hypothesis is that the layout generated by std430 for the shader's PointLights buffer block happens to match the layout the compiler generates for my application's PointLightData struct (as you can see in that loop, I'm blindly storing one after the other). Do you think that's the case?
Now, assuming I'm correct in that assumption, the obvious solution would be to do the mapping of sizes and offsets myself, querying OpenGL with glGetUniformIndices and glGetActiveUniformsiv (the latter called with GL_UNIFORM_SIZE and GL_UNIFORM_OFFSET), but I have the sneaking suspicion that these two only work with uniform blocks and not buffer blocks like the one I'm using. At least, when I do the following, OpenGL throws a tantrum, gives me back a 1281 (GL_INVALID_VALUE) error and returns a very weird number as the indices (something like 3432898282 or whatever):
const char * names[2] = {
"pos", "intensity"
};
GLuint indices[2];
GLint size[2];
GLint offset[2];
glGetUniformIndices(shaderProgram->id, 2, names, indices);
glGetActiveUniformsiv(shaderProgram->id, 2, indices, GL_UNIFORM_SIZE, size);
glGetActiveUniformsiv(shaderProgram->id, 2, indices, GL_UNIFORM_OFFSET, offset);
Am I correct in saying that glGetUniformIndices and glGetActiveUniformsiv do not apply to buffer blocks?
If they do not, or if the fact that it's working is, as I imagine, just a coincidence, how could I do the mapping manually? I checked appendix H of the programming guide and the wording for arrays of structures is somewhat confusing. If I can't query OpenGL for sizes/offsets for what I'm trying to do, I guess I could compute them manually (cumbersome as it is), but I'd appreciate some help there, too.
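For reference, the GL 4.3 program-interface-query path is what I would expect to use for buffer variables if the uniform API really doesn't apply; a minimal sketch (assuming the active variable names come out as "pointLights[0].pos" and so on):
// Query the offset and top-level array stride of a buffer variable.
const GLenum props[2] = { GL_OFFSET, GL_TOP_LEVEL_ARRAY_STRIDE };
GLint results[2] = {};

GLuint posIndex = glGetProgramResourceIndex(shaderProgram->id, GL_BUFFER_VARIABLE, "pointLights[0].pos");
glGetProgramResourceiv(shaderProgram->id, GL_BUFFER_VARIABLE, posIndex,
                       2, props, 2, nullptr, results);
// results[0] = byte offset of pos within one element,
// results[1] = byte stride between consecutive PointLight elements.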

Mapping a dynamic buffer in Direct3D11 in a Windows Store App

I'm trying to make instanced geometry in Direct3D11, and the ID3D11DeviceContext1::Map() call is failing with the very helpful error of "Invalid Parameter" (E_INVALIDARG) when I attempt to update the instance buffer.
The buffer is declared as a member variable:
Microsoft::WRL::ComPtr<ID3D11Buffer> m_instanceBuffer;
Then I create it (which succeeds):
D3D11_BUFFER_DESC instanceDesc;
ZeroMemory(&instanceDesc, sizeof(D3D11_BUFFER_DESC));
instanceDesc.Usage = D3D11_USAGE_DYNAMIC;
instanceDesc.ByteWidth = sizeof(InstanceData) * MAX_INSTANCE_COUNT;
instanceDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
instanceDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
instanceDesc.MiscFlags = 0;
instanceDesc.StructureByteStride = 0;
DX::ThrowIfFailed(d3dDevice->CreateBuffer(&instanceDesc, NULL, &m_instanceBuffer));
However, when I try to map it:
D3D11_MAPPED_SUBRESOURCE inst;
DX::ThrowIfFailed(d3dContext->Map(m_instanceBuffer.Get(), 0, D3D11_MAP_WRITE, 0, &inst));
The map call fails with E_INVALIDARG. None of the parameters are unexpectedly NULL, and this being one of my first D3D apps, I'm currently stumped on how to track it down. I'm thinking I must be creating the buffer incorrectly, but I can't see how. Any input would be appreciated.
Because the buffer was created with D3D11_USAGE_DYNAMIC, it had to be mapped with D3D11_MAP_WRITE_DISCARD (or D3D11_MAP_WRITE_NO_OVERWRITE, but that was inappropriate for my application).
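The corrected call, as a short sketch ('instances' here stands in for whatever container holds the per-instance data):
D3D11_MAPPED_SUBRESOURCE inst;
DX::ThrowIfFailed(d3dContext->Map(m_instanceBuffer.Get(), 0, D3D11_MAP_WRITE_DISCARD, 0, &inst));
memcpy(inst.pData, instances.data(), sizeof(InstanceData) * instances.size()); // 'instances' is a hypothetical std::vector<InstanceData>
d3dContext->Unmap(m_instanceBuffer.Get(), 0);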
I had the same problem when I tried to create a constant buffer for a shader. CreateBuffer would always give me E_INVALIDARG.
The problem in my project was that I had forgotten to align the struct to 16 bytes. Here is an example:
struct TessellationBufferType
{
    float tessellationAmount;   // 4 bytes
    D3DXVECTOR3 cameraPosition; // 12 bytes
};
If the total doesn't come to a multiple of 16, add an additional variable (e.g. padding) just to round it up to 16:
struct LightBufferType
{
    D3DXVECTOR4 ambientColor;   // 16
    D3DXVECTOR4 diffuseColor;   // 16
    D3DXVECTOR3 lightDirection; // 12
    float padding;              // 4
};