Reuse atomic uint in DrawElementsIndirect as voxel count - OpenGL

I'm currently trying to implement sparse voxel octrees according to OpenGL Insights: https://www.seas.upenn.edu/~pcozzi/OpenGLInsights/OpenGLInsights-SparseVoxelization.pdf
In this book they describe how to voxelize a 3D mesh and how to create an octree as a compact representation of the voxelized mesh. One intermediate step is to voxelize the mesh and store the voxels in a preallocated array, where each voxel's target location in the array is determined by an atomic counter (which I have implemented).
Now I'm not sure whether I can bind the buffer holding the first GLuint of the DrawElementsIndirectCommand (which is used in the DrawElementsIndirect call) as a GL_ATOMIC_COUNTER_BUFFER first, and then rebind it to the GL_DRAW_INDIRECT_BUFFER target, because I now want to draw the voxels for debugging purposes.
I assume I can rebind it, but that I'm not supposed to call glBufferData again; is that correct?
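If rebinding works the way I expect, the pattern would look roughly like the following. This is a minimal sketch, assuming a working voxelization pass that increments the counter; buffer objects in OpenGL are untyped, so the same buffer can be bound to different targets, and glBufferData is only needed once for the initial allocation:

    // DrawElementsIndirectCommand as laid out by the OpenGL spec; the first
    // GLuint (count) doubles as the atomic voxel counter.
    struct DrawElementsIndirectCommand {
        GLuint count;
        GLuint instanceCount;
        GLuint firstIndex;
        GLuint baseVertex;
        GLuint baseInstance;
    };

    GLuint indirectBuffer;

    void initIndirectBuffer() {
        DrawElementsIndirectCommand cmd = { 0, 1, 0, 0, 0 };
        glGenBuffers(1, &indirectBuffer);
        // Allocate once; afterwards the buffer is only rebound, never reallocated.
        glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
        glBufferData(GL_DRAW_INDIRECT_BUFFER, sizeof(cmd), &cmd, GL_DYNAMIC_DRAW);
    }

    void voxelizeAndDrawDebug() {
        // Reset the counter, then bind the first GLuint as atomic counter 0.
        GLuint zero = 0;
        glBindBuffer(GL_ATOMIC_COUNTER_BUFFER, indirectBuffer);
        glBufferSubData(GL_ATOMIC_COUNTER_BUFFER, 0, sizeof(GLuint), &zero);
        glBindBufferRange(GL_ATOMIC_COUNTER_BUFFER, 0, indirectBuffer,
                          0, sizeof(GLuint));

        // ... run the voxelization pass that increments the counter ...

        // Make the atomic writes visible to the indirect command processor.
        glMemoryBarrier(GL_COMMAND_BARRIER_BIT);

        // Rebind the very same buffer as the indirect draw command.
        glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
        glDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, nullptr);
    }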

Related

Dynamic size arrays in Uniform buffer in Vulkan

I recently rewrote some code to use Shader Storage Buffer Objects in OpenGL, to send dynamically sized arrays into GLSL, vastly increasing performance when drawing many procedurally generated objects.
What I have is a couple of thousand points, and for each point I render a procedurally generated circular billboard. Each one can in turn have a different color and radius, as well as a few other characteristics (represented as bools or enums).
I fill a vector with these positions, packed together with the radius and color. Then I upload it as a Shader Storage Buffer Object with dynamic size. I create a dummy VAO containing no VBOs, but call the draw command with the same number of points that I have.
Inside the shader, I then index into this array using gl_VertexID and generate a quad (two triangles) with texture coordinates for each point.
I'm looking for a way of doing the same in Vulkan. Is there some way in Vulkan to pass a dynamically sized array into a shader? Reading about Shader Storage Buffer Objects in Graham Sellers's Vulkan book, it only mentions them being read-write, not whether they are capable of dynamically sized arrays.
Edit: It seems that storage buffers are in fact capable of dynamically sized arrays, based on Sascha Willems's particle example. Is there a way of doing the same thing via uniforms?
I may be misunderstanding your question. SSBOs have identical behavior and functionality between the two APIs. That's why they're named the same. An unbounded array at the end of a storage block will have its length defined at runtime, based on what data you provide.
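For concreteness, here is what that looks like on the GLSL side; a minimal sketch (the struct layout and binding number are assumptions), written as a C++ string so it stays compilable with glShaderSource:

    // The unbounded array must be the last member of the storage block; its
    // length is whatever fits in the buffer range bound at draw time, and the
    // shader can query it with points.length().
    const char* kPointBlockGLSL = R"GLSL(
        #version 430
        struct Point { vec4 positionRadius; vec4 color; };
        layout(std430, binding = 0) buffer Points {
            Point points[];
        };
    )GLSL";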
The size of a buffer descriptor is not hard-coded into the descriptor set layout; it's something you set with VkWriteDescriptorSet. But unlike the offset, you cannot change the descriptor's buffer range without updating the descriptor itself. That is, there is no equivalent of vkCmdBindDescriptorSets's pDynamicOffsets field for the range, so you have to actually update the descriptor in place to change the length.
But that just requires double-buffering your descriptor set; it shouldn't be a problem.
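A hedged sketch of that update, assuming a Point struct matching the shader block and a storage buffer at binding 0 (all names here are illustrative):

    #include <vulkan/vulkan.h>

    struct Point { float positionRadius[4]; float color[4]; };

    // Rewrites the SSBO descriptor so the shader-visible array covers exactly
    // numPoints elements. The set must not be in use by pending command
    // buffers, hence the double-buffering mentioned above.
    void updatePointsDescriptor(VkDevice device, VkDescriptorSet dstSet,
                                VkBuffer storageBuffer, uint32_t numPoints)
    {
        VkDescriptorBufferInfo bufferInfo = {};
        bufferInfo.buffer = storageBuffer;
        bufferInfo.offset = 0;
        bufferInfo.range  = numPoints * sizeof(Point); // the runtime length

        VkWriteDescriptorSet write = {};
        write.sType           = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
        write.dstSet          = dstSet;
        write.dstBinding      = 0;
        write.descriptorCount = 1;
        write.descriptorType  = VK_DESCRIPTOR_TYPE_STORAGE_BUFFER;
        write.pBufferInfo     = &bufferInfo;

        vkUpdateDescriptorSets(device, 1, &write, 0, nullptr);
    }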
Is there a way of doing the same thing via uniforms?
Again, the answer is the same for Vulkan as for OpenGL: no.

Reduced vertex buffer with indexed triangles

In my OpenGL program I have a huge vertex buffer with data (normals, positions, texcoords) for 2048x2048 points.
Each frame I reduce my index buffer with a LOD algorithm and bind it to GL_ELEMENT_ARRAY_BUFFER again.
I wonder if it makes sense to additionally reduce the vertex buffer as well, so that it only contains the vertices actually used by the index buffer.
So the question is whether there is a performance gain even with rebuilding and re-uploading the vertex buffer every frame.
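For reference, a minimal sketch of the per-frame index update described above (initIndexBuffer, lodIndices and the GL_STREAM_DRAW hint are assumptions): the vertex data stays untouched, and only the reduced index list is re-uploaded into a buffer allocated once at worst-case size:

    #include <vector>

    GLuint ibo;

    void initIndexBuffer(GLsizeiptr maxIndices) {
        glGenBuffers(1, &ibo);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
        // Allocate once at maximum size; per-frame updates use SubData only.
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, maxIndices * sizeof(GLuint),
                     nullptr, GL_STREAM_DRAW);
    }

    void drawFrame(const std::vector<GLuint>& lodIndices) {
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
        glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0,
                        lodIndices.size() * sizeof(GLuint), lodIndices.data());
        glDrawElements(GL_TRIANGLES, (GLsizei)lodIndices.size(),
                       GL_UNSIGNED_INT, nullptr);
    }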

Use of Vertex Array Objects and Vertex Buffer Objects

I am trying to understand these two, how to use them and how they are related. Let's say I want to create a simple terrain and a textured cube. For both objects I have the array of triangles vertices and for the cube I have an array containing the texture's data. My question is: how do I use VAOs and VBOs to create and render these two?
Would I have to create a VAO and a VBO for each object?
Or should I create a VAO for each of an object's VBOs (vertices, texture data, etc.)?
There are many tutorials and books, but I still don't get how these concepts are meant to be understood and used.
Fundamentally, you need to understand two things:
Vertex Array Objects (VAOs) are conceptually nothing but thin state wrappers.
Vertex Buffer Objects (VBOs) store actual data.
Another way of thinking about this is that VAOs describe the data stored in one or more VBOs.
Think of VBOs (and buffer objects in general) as unstructured arrays of data stored in server (GPU) memory. You can lay out your vertex data in multiple arrays if you want, or you can pack it into a single array. In either case, buffer objects boil down to locations where you will store data.
Vertex Array Objects track the actual pointers to VBO memory needed for draw commands.
They are a little bit more sophisticated than pointers as you would know them in a language like C, however. Vertex pointers keep track of the buffer object that was bound when they were specified, the offset into its address space, stride between vertex attributes and how to interpret the underlying data (e.g. whether to keep integer values or to convert them to floating-point [0.0,1.0] by normalizing to the data type's range).
For example, integer data is usually converted to floating-point, but it is the command you use to specify the vertex pointer (glVertexAttribPointer (...) vs. glVertexAttribIPointer (...)) that determines this behavior.
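As a hedged illustration (the attribute locations here are arbitrary), both of the following lines could describe the very same GL_UNSIGNED_BYTE data; only the specification command differs:

    // Seen by the shader as a vec4 with components normalized to [0.0,1.0]:
    glVertexAttribPointer (2, 4, GL_UNSIGNED_BYTE, GL_TRUE, 0, nullptr);
    // Seen by the shader as an ivec4/uvec4 holding the raw integer values:
    glVertexAttribIPointer(3, 4, GL_UNSIGNED_BYTE, 0, nullptr);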
Vertex Array Objects also track the buffer object currently bound to GL_ELEMENT_ARRAY_BUFFER.
GL_ELEMENT_ARRAY_BUFFER is where glDrawElements (...) sources its list of indices from (assuming a non-zero binding); there is no glElementArrayPointer (...) command. glDrawElements (...) combines the pointer and the draw command into a single operation, and uses the binding stored in the active Vertex Array Object to accomplish this.
With that out of the way, unless your objects share vertex data you are generally going to need a unique set of VBOs for each.
You can use a single VAO for your entire application if you want, or you can take advantage of the fact that changing the bound VAO changes nearly the entire set of state necessary to draw different objects.
Thus, drawing your terrain and cube could be as simple as changing the bound VAO. You may have to do more than that if you need to apply different textures to each of them, but the VAO takes care of all vertex data related setup.
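Putting that together, a minimal sketch (the interleaved position + texcoord layout is an assumption) of setting up one VAO+VBO per object and then drawing by swapping VAOs:

    // Build a VAO whose vertex pointers reference a freshly filled VBO.
    GLuint createObjectVAO(const float* verts, GLsizeiptr bytes) {
        GLuint vao, vbo;
        glGenVertexArrays(1, &vao);
        glGenBuffers(1, &vbo);

        glBindVertexArray(vao);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, bytes, verts, GL_STATIC_DRAW);

        // Interleaved layout: 3 floats position, 2 floats texcoord.
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(float),
                              (void*)0);
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(float),
                              (void*)(3 * sizeof(float)));
        glEnableVertexAttribArray(1);

        glBindVertexArray(0);
        return vao;
    }

    // Per frame: switching objects is just switching VAOs.
    void drawScene(GLuint terrainVAO, GLsizei terrainVertexCount,
                   GLuint cubeVAO, GLsizei cubeVertexCount) {
        glBindVertexArray(terrainVAO);
        glDrawArrays(GL_TRIANGLES, 0, terrainVertexCount);

        glBindVertexArray(cubeVAO);
        glDrawArrays(GL_TRIANGLES, 0, cubeVertexCount);
    }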
Your question is not easily answered here; it is better covered by a tutorial. You probably already know these two websites, but if not, here are the references:
OGLDEV
OpenGL-Tutorial.org
Now, to address your questions: a Vertex Array Object is an OpenGL object designed to reduce the API overhead of draw calls. You can think of it as a container for a vertex buffer and its associated state. Something similar, perhaps, to the old display lists.
Normally there is a one-to-one relationship between a VAO and a VBO; that is, each VAO contains a unique VBO. But this is not strictly necessary. You could have several VAOs referencing the same VBO.
The simplest way to model this in code, I think, would be for you to have a VAO class/type and a method to attach a VBO to it. Then give an instance of VAO to each mesh. The mesh in turn can have a reference to a VBO type that may be its own or a shared one.
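A minimal sketch of that design (the class and method names are purely illustrative):

    class VertexBuffer {
    public:
        GLuint id = 0;
        VertexBuffer(const void* data, GLsizeiptr bytes) {
            glGenBuffers(1, &id);
            glBindBuffer(GL_ARRAY_BUFFER, id);
            glBufferData(GL_ARRAY_BUFFER, bytes, data, GL_STATIC_DRAW);
        }
    };

    class VertexArray {
    public:
        GLuint id = 0;
        VertexArray() { glGenVertexArrays(1, &id); }

        // Attach a VBO as vertex attribute `index`, `size` floats per vertex.
        void attach(const VertexBuffer& vbo, GLuint index, GLint size) {
            glBindVertexArray(id);
            glBindBuffer(GL_ARRAY_BUFFER, vbo.id);
            glVertexAttribPointer(index, size, GL_FLOAT, GL_FALSE, 0, nullptr);
            glEnableVertexAttribArray(index);
            glBindVertexArray(0);
        }
    };

Each mesh then owns a VertexArray instance and holds a reference to a VertexBuffer that may be unique to it or shared.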

OpenGL: How to pass vectors with variable size to shaders?

What is an efficient way to pass a variably sized data (which is not unique for each vertex, but rather is common to groups of vertices) into shaders?
For example, I have three polygons with different numbers of vertices. Each polygon has a different color and is colored uniformly (every fragment of a polygon has the same color). It seems inefficient to pass the polygon's color in the vertex attributes as a vector containing the same color value for each vertex of the polygon. My thought is to assign each polygon a unique number (starting from zero), create a vector with three color values (one per polygon), and pass for each vertex not a color value (four floats) but the number of the polygon it belongs to (one integer), and have the shader fetch the polygon's color from the color vector using that number as the index.
The color values for polygons can be passed in a vector of a variable length (vector whose size is not specified during compilation) through a "Shader Storage Block" where it can be accessed by the shader.
Is there another method (maybe a commonly used technique) for passing variably sized vectors to shaders?
This is precisely the use case of Buffer Texture Objects.
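A hedged sketch of that approach (the format and variable names are assumptions): the per-polygon colors live in a buffer object exposed as a buffer texture, and the shader indexes it with the per-vertex polygon number via texelFetch:

    GLuint colorTBO, colorTex;

    void createColorBufferTexture(const float* rgbaColors, GLsizeiptr bytes) {
        glGenBuffers(1, &colorTBO);
        glBindBuffer(GL_TEXTURE_BUFFER, colorTBO);
        glBufferData(GL_TEXTURE_BUFFER, bytes, rgbaColors, GL_STATIC_DRAW);

        glGenTextures(1, &colorTex);
        glBindTexture(GL_TEXTURE_BUFFER, colorTex);
        // Each texel is one RGBA color; resizing is just a new glBufferData.
        glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, colorTBO);
    }

    // GLSL side (in the vertex or fragment shader):
    //   uniform samplerBuffer polygonColors;
    //   flat in int polygonIndex;
    //   ... vec4 color = texelFetch(polygonColors, polygonIndex);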
The easiest approach is to manage the color apart from the vertex data and pass it as a uniform to the shader, but only if the color is the same for every vertex in the vector. If, however, the color changes per primitive (or per vertex), don't bother: the overhead implied by the indexing computation is worse than simply streaming the data to video memory on a per-vertex basis. (Unless this is already your bottleneck, but with today's hardware it really shouldn't be.)
To rapidly stream the data from the vector for rendering, the best approach is to create a vertex buffer object.
You basically create a large pool of vertices (and indices, depending on your precise use). Using glDrawElements (or glDrawArrays) you can control how many primitives of which type shall be rendered.
Reserve a vertex buffer object with the maximal size you expect to need in advance, so you avoid the overhead of recreating it again and again. Initialize the number of primitives to render to zero. Each time the vector changes, copy the data into the vertex buffer object, from the first vertex in the vector to the last, and update the number of primitives to render. That's it; the only overhead left is copying the vector into the vertex buffer object. No complex bookkeeping or similar is required.
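A minimal sketch of that pattern (the Vertex layout and MAX_VERTICES are assumptions):

    #include <vector>

    struct Vertex { float position[3]; float color[4]; };
    const GLsizeiptr MAX_VERTICES = 1 << 20; // worst case, reserved up front

    GLuint poolVBO;
    GLsizei vertexCount = 0; // number of vertices to render, starts at zero

    void initPool() {
        glGenBuffers(1, &poolVBO);
        glBindBuffer(GL_ARRAY_BUFFER, poolVBO);
        // Allocate once at maximum size; never reallocated afterwards.
        glBufferData(GL_ARRAY_BUFFER, MAX_VERTICES * sizeof(Vertex),
                     nullptr, GL_DYNAMIC_DRAW);
    }

    void onVectorChanged(const std::vector<Vertex>& vertices) {
        glBindBuffer(GL_ARRAY_BUFFER, poolVBO);
        glBufferSubData(GL_ARRAY_BUFFER, 0,
                        vertices.size() * sizeof(Vertex), vertices.data());
        vertexCount = (GLsizei)vertices.size();
    }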

Strategies for packing data into OpenGL 3D array

I am implementing a voxel raycaster in OpenGL 4.3.0. I have got a basic version going where I store a 256x256x256 voxel data set of float values in a 3D texture of the same dimensions.
However, I want to make a LOD scheme using an octree. I have the data stored in a 1D array on the host side. The root has index 0, the root's children have indices 1-8, the next level have indices 9-72 and so on. The octree has 9 levels in total (where the last level has the full 256x256x256 resolution). Since the octree will always be full the structure is implicit and there is no need to store pointers, just the one float value per voxel. I have the 1D indexing and traversal algorithms all set.
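For reference, that implicit indexing boils down to two small formulas (a sketch):

    // Level l (root = level 0) starts at (8^l - 1) / 7 in the 1D array:
    // 0, 1, 9, 73, ... The k-th child (k in 0..7) of node i sits at 8*i + 1 + k.
    long long levelStart(int level)          { return ((1LL << (3 * level)) - 1) / 7; }
    long long childIndex(long long i, int k) { return 8 * i + 1 + k; }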
My problem is that I don't know how to store this in a texture. GL_MAX_TEXTURE_SIZE is way too small (16384) for the 1D-array approach for which I have figured out the indexing. I need to store this in a 3D texture, and I don't know what will happen when I try to cram my 1D array in there, nor do I know how to choose a size and a 1D-to-3D conversion scheme that doesn't waste space or time.
My question is if someone has a good strategy for storing this whole octree structure in one 3D texture, and in that case how to choose dimensions and indexing for it.
First some words on porting your 1D-array solution directly:
First of all, as Mortennobel says in his comment, the max texture size is very likely not 3379; that's just the enum value of GL_MAX_TEXTURE_SIZE (how should the gl.h header that defines this value know your hardware and driver limits?). To get the actual value from your implementation, use int size; glGetIntegerv(GL_MAX_TEXTURE_SIZE, &size);. But even then this might be too small for you (maybe 8192 or something similar).
But to get much larger 1D arrays into your shaders, you can use buffer textures (which are core since OpenGL 3, and therefore present on DX10 class hardware). Those are textures sourcing their data from standard OpenGL buffer objects. But those textures are always 1D, accessed by integer texCoords (array indices, so to say) and not filtered. So they are effectively not really textures, but a way to access a buffer object as a linear 1D array inside a shader, which is a perfect fit for your needs (and in fact a much better fit than a normal filtered and normalized 1D texture).
EDIT: You might also think about using a straight-forward 3D texture like you did before, but with homemade mipmap levels (yes, a 3D texture can have mipmaps, too) for the higher parts of the hierarchy. So mipmap level 0 is the fine 256 grid, level 1 contains the coarser 128 grid, ... But to work with this data structure effectively, you will probably need explicit LOD texture access in the shader (using textureLod or, even better without filtering, texelFetch), which requires OpenGL 3, too.
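A hedged sketch of that mipmap layout (the nine levels come from the question; levelData is an assumed host-side array):

    // levelData[l] holds the (256 >> l)^3 float values of octree level l,
    // with l = 0 the finest 256^3 grid and l = 8 the single root value.
    void uploadOctreeAsMipmaps(float* const levelData[9]) {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_3D, tex);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAX_LEVEL, 8);
        for (int level = 0; level <= 8; ++level) {
            int dim = 256 >> level; // 256, 128, ..., 1
            glTexImage3D(GL_TEXTURE_3D, level, GL_R32F, dim, dim, dim, 0,
                         GL_RED, GL_FLOAT, levelData[level]);
        }
        // In GLSL, read a specific level with texelFetch(sampler, coord, level).
    }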
EDIT: If you don't have support for OpenGL 3, I would still not suggest using 3D textures to put your 1D array into, but rather 2D textures, as Rahul suggests in his answer (the 1D-to-2D index magic isn't really that hard). But if you have OpenGL 3, then I would either use buffer textures to keep your linear 1D array layout directly, or a 3D texture with mipmaps for a straightforward octree mapping (or maybe come up with a completely different and more sophisticated data structure for the voxel grid in the first place).
EDIT: Of course a fully subdivided octree does not really use the memory-saving features of octrees to its advantage. For a more dynamic and memory-efficient method of packing octrees into 3D textures, you might also take some inspiration from this classic GPU Gems article on octree textures. They basically store all octree cells as 2x2x2 grids placed arbitrarily in a 3D texture, using the internal nodes' values as pointers to their children in this texture. Of course nowadays you can apply all sorts of refinements to this (since it seems you want the internal nodes to store data, too), like storing integers alongside floats and using clever bit encodings and the like, but the basic idea is pretty simple.
Here's a solution sketch/outline:
Use a 2D texture to store your 256x256x256 values (it'll be 4096x4096; I hope you're using an OpenGL platform that supports 4k x 4k textures).
Now store your 1D data in row-major order. Inside your raycaster, simply do a row/col conversion (from 1D address to 4k x 4k) and look up the value you need.
I trust that you will figure out the rest yourself :)
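To make the row/col conversion concrete, a hedged sketch (the 4096 width comes from the 4k x 4k texture above; in GLSL the same math feeds texelFetch):

    const int TEX_WIDTH = 4096;

    // Row-major mapping from the 1D octree index to 2D texel coordinates.
    void indexTo2D(int index, int& col, int& row) {
        col = index % TEX_WIDTH; // == index & 4095, since 4096 is a power of two
        row = index / TEX_WIDTH; // == index >> 12
    }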