I am implementing a voxel raycaster in OpenGL 4.3.0. I have got a basic version going where I store a 256x256x256 voxel data set of float values in a 3D texture of the same dimensions.
However, I want to make a LOD scheme using an octree. I have the data stored in a 1D array on the host side. The root has index 0, the root's children have indices 1-8, the next level has indices 9-72, and so on. The octree has 9 levels in total (where the last level has the full 256x256x256 resolution). Since the octree will always be full, the structure is implicit and there is no need to store pointers, just the one float value per voxel. I have the 1D indexing and traversal algorithms all set.
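The OP says this part is already solved; for readers, the usual breadth-first flat indexing for a full octree (matching the indices 1-8 and 9-72 quoted above) can be sketched as:

```cpp
#include <cassert>
#include <cstdint>

// Nodes are stored breadth-first: level 0 is the root, level l holds 8^l
// nodes, so level l starts at flat index (8^l - 1) / 7 (a geometric sum).
std::uint64_t levelOffset(int level) {
    std::uint64_t pow8 = 1;                 // 8^level
    for (int i = 0; i < level; ++i) pow8 *= 8;
    return (pow8 - 1) / 7;
}

// Flat index of child c (0..7) of the node at flat index n.
std::uint64_t childIndex(std::uint64_t n, int c) {
    return 8 * n + 1 + c;
}

// Flat index of the parent of node n (undefined for the root, n == 0).
std::uint64_t parentIndex(std::uint64_t n) {
    return (n - 1) / 8;
}
```

With 9 levels (root at level 0, the 256³ leaves at level 8), the whole tree occupies levelOffset(9) = 19,173,961 floats, which is why a plain 1D texture cannot hold it.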
My problem is that I don't know how to store this in a texture. GL_MAX_TEXTURE_SIZE is way too small (16384) for using the 1D array approach for which I have figured out the indexing. I need to store this in a 3D texture, and I don't know what will happen when I try to cram my 1D array in there, nor do I know how to choose a size and a 1D->3D conversion scheme so as not to waste space or time.
My question is if someone has a good strategy for storing this whole octree structure in one 3D texture, and in that case how to choose dimensions and indexing for it.
First some words on porting your 1D-array solution directly:
First of all, as Mortennobel says in his comment, the max texture size is very likely not 3379; that's just the enum value of GL_MAX_TEXTURE_SIZE (how should the gl.h header that defines this value know your hardware and driver limits?). To get the actual value from your implementation, use GLint size; glGetIntegerv(GL_MAX_TEXTURE_SIZE, &size);. But even then this might be too small for you (maybe 8192 or something similar).
But to get much larger 1D arrays into your shaders, you can use buffer textures (core since OpenGL 3.1, and therefore present on DX10-class hardware). These are textures that source their data from standard OpenGL buffer objects. But such textures are always 1D, are accessed by integer texCoords (array indices, so to say), and are not filtered. So they are effectively not really textures, but a way to access a buffer object as a linear 1D array inside a shader, which is a perfect fit for your needs (and in fact a much better fit than a normal filtered and normalized 1D texture).
EDIT: You might also think about using a straightforward 3D texture like you did before, but with homemade mipmap levels (yes, a 3D texture can have mipmaps, too) for the higher parts of the hierarchy. So mipmap level 0 is the fine 256 grid, level 1 contains the coarser 128 grid, and so on. But to work with this data structure effectively, you will probably need explicit LOD texture access in the shader (using textureLod or, even better without filtering, texelFetch), which requires OpenGL 3, too.
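To make the correspondence concrete, a small sketch of the level mapping (assuming the root is octree level 0 and the leaves are level 8, as in the question):

```cpp
#include <cassert>

// With a 256^3 base level, a full mip chain has exactly 9 levels, one
// per octree level: mip 0 is the fine 256 grid, mip 8 the 1^3 root.
// (Assuming root = octree level 0 and leaves = octree level 8.)
int mipResolution(int mip) { return 256 >> mip; }
int mipForOctreeLevel(int octreeLevel) { return 8 - octreeLevel; }
```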
EDIT: If you don't have support for OpenGL 3, I would still not suggest to use 3D textures to put your 1D array into, but rather 2D textures, like Rahul suggests in his answer (the 1D-2D index magic isn't really that hard). But if you have OpenGL 3, then I would either use buffer textures for using your linear 1D array layout directly, or a 3D texture with mipmaps for a straight-forward octree mapping (or maybe come up with a completely different and more sophisticated data structure for the voxel grid in the first place).
EDIT: Of course a fully subdivided octree is not really using the memory-saving features of octrees to its advantage. For a more dynamic and memory-efficient method of packing octrees into 3D textures, you might also take some inspiration from this classic GPU Gems article on octree textures. They basically store all octree cells as 2x2x2 grids arbitrarily into a 3D texture, using the internal nodes' values as pointers to the children in this texture. Of course nowadays you can employ all sorts of refinements on this (since it seems you want the internal nodes to store data, too), like storing integers alongside floats and using nice bit encodings and the like, but the basic idea is pretty simple.
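As a toy illustration of the pointer idea (the bit layout here is hypothetical, not the article's exact format): an internal node's texel can pack the 3D texture coordinate of its 2x2x2 child block into one value, one byte per axis:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical child-pointer encoding in the spirit of the GPU Gems
// octree textures: one byte per axis, so the indirection texture can be
// at most 256 texels along each axis in this toy scheme.
std::uint32_t encodePointer(int x, int y, int z) {
    return (std::uint32_t(x) << 16) | (std::uint32_t(y) << 8) | std::uint32_t(z);
}

void decodePointer(std::uint32_t p, int &x, int &y, int &z) {
    x = (p >> 16) & 0xFF;
    y = (p >> 8) & 0xFF;
    z = p & 0xFF;
}
```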
Here's a solution sketch/outline:
Use a 2D texture to store your 256x256x256 (it'll be 4096x4096 -- I hope you're using an OpenGL platform that supports 4k x 4k textures).
Now store your 1D data in row-major order. Inside your raycaster, simply do a row/col conversion (from 1D address to 4k x 4k) and look up the value you need.
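A sketch of the address math (assuming an x-fastest row-major layout; adapt it to whatever ordering your raycaster uses):

```cpp
#include <cassert>

// 256^3 voxels flattened x-fastest into a 1D index, then wrapped
// row-major into a 4096 x 4096 2D texture (4096 * 4096 == 256^3).
const int DIM   = 256;
const int TEX_W = 4096;

int linearIndex(int x, int y, int z) {
    return x + DIM * (y + DIM * z);
}

void texelCoord(int linear, int &row, int &col) {
    row = linear / TEX_W;
    col = linear % TEX_W;
}
```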
I trust that you will figure out the rest yourself :)
For texture sizes greater than GL_MAX_ARRAY_TEXTURE_LAYERS, I have to use an array of single textures instead of one texture array. No problem for me.
I'm just wondering what's the reason behind making GL_MAX_ARRAY_TEXTURE_LAYERS so much smaller than GL_MAX_TEXTURE_SIZE?
For texture sizes greater than GL_MAX_ARRAY_TEXTURE_LAYERS
I believe you have misunderstood what this limitation means. This is not like GL_MAX_3D_TEXTURE_SIZE, which is a limitation on all axes of a 3D texture. It is a limitation on the number of array elements in a 2D array texture. When you call glTexImage3D/glTexStorage3D, the limitation applies to the depth parameter only, not the "size".
The width/height limit is still governed by GL_MAX_TEXTURE_SIZE.
I recently rewrote some code to use Shader Storage Buffer Objects in OpenGL, to send dynamically sized arrays into GLSL, vastly increasing performance when drawing many procedurally generated objects.
What I have is a couple of thousand points, and for each point I render a procedurally generated circular billboard. Each one can in turn have a different color and radius, as well as a few other characteristics (represented as bools or enums).
I fill a vector with these positions, packed together with the radius and color. Then I upload it as a dynamically sized Shader Storage Buffer Object. I create a dummy VAO containing no VBOs, but call the draw command with the same number of points that I have.
Inside the shader, I then iterate through this array, using the gl_VertexID, and generate a quad (two triangles) with texture coordinates, for each point.
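A sketch of the index math such a shader can use (assuming six vertices are issued per point; the corner table here is one hypothetical layout, the OP's exact setup may differ):

```cpp
#include <cassert>

// Expanding each point into a quad (two triangles, six vertices) purely
// from the vertex index, with no VBO: pointIndex selects the SSBO entry,
// and the corner table gives the billboard offset per vertex.
void quadVertex(int vertexID, int &pointIndex, float &dx, float &dy) {
    static const float corners[6][2] = {
        {-1.f, -1.f}, {1.f, -1.f}, {1.f, 1.f},   // first triangle
        {-1.f, -1.f}, {1.f, 1.f},  {-1.f, 1.f},  // second triangle
    };
    pointIndex = vertexID / 6;
    dx = corners[vertexID % 6][0];
    dy = corners[vertexID % 6][1];
}
```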
I'm looking for a way of doing the same in Vulkan. Is there some way in Vulkan to pass a dynamically sized array into a shader? Reading about Shader Storage Buffer Objects in Graham Sellers' Vulkan book, it only mentions them being read-write, but not capable of dynamically sized arrays.
Edit: It seems that storage buffers are in fact capable of dynamically sized arrays, based on Sascha Willems' particle example. Is there a way of doing the same thing via uniforms?
I may be misunderstanding your question. SSBOs have identical behavior and functionality between the two APIs. That's why they're named the same. An unbounded array at the end of a storage block will have its length defined at runtime, based on what data you provide.
The size of a buffer descriptor is not hard coded into the descriptor set layout; it's something you set with VkWriteDescriptorSet. Now unlike the offset, you cannot change a descriptor set's size without changing the descriptor itself. That is, you don't have an equivalent to vkCmdBindDescriptorSets's pDynamicOffsets field. So you have to actually update the descriptor in-situ to change the length.
But that just requires double-buffering your descriptor set; it shouldn't be a problem.
Is there a way of doing the same thing via uniforms?
Again, the answer is the same for Vulkan as for OpenGL: no.
I kinda sorta get a grip on what texture arrays are about, but nowhere on the internet can I find info about what they actually are. Are they like one big texture atlas (a single continuous block of memory divided into parts of equal dimensions), or are they like texture units (completely separate textures that you can bind all at once)? The consequence of this question is: what are the size limits of a single texture array, and how do I use them efficiently? If, for example, my GPU handles 4096x4096 textures and 64 units, can I create a texture array of 64 4096x4096 textures? Or is the actual limit 4096*4096 pixels per array? Or maybe something else? How do I query the limit?
They are arrays of images (the concept is generally referred to as layered images, because there are multiple types of textures that actually store multiple images). An array texture is special in that it has equal dimensions for each of its layers. Additionally, though the memory is allocated the same way as a 3D texture, filtering across image layers is impossible for array textures and the mipmap chain for a 2D array texture remains 2D.
The very first form of array texture ever introduced was the cube map; it had 6 2D layers... but it also had a special set of lookup functions, so it was far from a generic 2D array texture.
Nevertheless, the limit you are discussing is per-image dimension in your array texture. That is, if your maximum texture size is 4096, that means a 2D texture is limited to 4096x4096 per-image for the highest detail mipmap level.
Layers in your array texture are not like Texture Image Units at all. They are separate images, but they all belong to a single texture object, which can be bound to a single Texture Image Unit.
The maximum number of layers in your array texture can be queried this way:
GLint max_layers;
glGetIntegerv (GL_MAX_ARRAY_TEXTURE_LAYERS, &max_layers);
On the topic of using array textures efficiently, you may consider a new feature called Sparse Array Textures (via GL_ARB_sparse_texture).
Say you wanted to reserve enough storage for a 4096 x 4096 x 64 array texture but only actually used 4 or 5 of those layers initially. Ordinarily that would be horribly wasteful, but with sparse textures, only the 4 or 5 layers in-use have to be resident (committed). The rest of the layers are handled like demand-paged virtual memory in operating systems; no backing storage is actually allocated until first use.
I have a couple questions about how OpenGL handles these drawing operations.
So let's say I pass OpenGL the pointer to my vertex array. Then I can call glDrawElements with an array of indexes. It will draw the requested shapes using those indexes in the vertex array, correct?
After that glDrawElements call, could I then do another glDrawElements call with another set of indexes? Would it then draw the new index array using the original vertex array?
Does OpenGL keep my vertex data around for the next frame when I redo all of these calls? So the next vertex pointer call would be a lot quicker?
Assuming the answer to the last three questions is yes, what if I want to do this on multiple vertex arrays every frame? I'm assuming doing this on any more than one vertex array would cause OpenGL to drop the last used array from graphics memory and start using the new one. But in my case the vertex arrays are never going to change. So what I want to know is: does OpenGL keep my vertex arrays around in case next time I send it vertex data it will be the same data? If not, is there a way I can optimize this to allow something like this? Basically I want to draw procedurally between the vertices using indices without updating the vertex data, in order to reduce overhead and speed up complicated rendering that requires constantly changing procedural shapes that will always use the vertices from the original vertex array. Is this possible or am I just fantasizing?
If I'm just fantasizing about my fourth question, what are some good fast ways of drawing a whole lot of polygons each frame where only a few will change? Do I always have to pass in a totally new set of vertex data for even small changes? Does it already do this anyway when the vertex data doesn't change? Because I notice I can't really get around the vertex pointer call each frame.
Feel free to totally slam any logic errors I've made in my assertions. I'm trying to learn everything I can about how OpenGL works, and it's entirely possible my current assumptions on how it works are all wrong.
1. So let's say I pass OpenGL the pointer to my vertex array. Then I can call glDrawElements with an array of indexes. It will draw the requested shapes using those indexes in the vertex array, correct?
Yes.
2. After that glDrawElements call, could I then do another glDrawElements call with another set of indexes? Would it then draw the new index array using the original vertex array?
Yes.
3. Does OpenGL keep my vertex data around for the next frame when I redo all of these calls? So the next vertex pointer call would be a lot quicker?
Answering that is a bit more tricky than you might think. The way you ask these questions makes me assume that you use client-side vertex arrays, that is, you have some arrays in your system memory and let your vertex pointers point directly to those. In that case, the answer is no. The GL cannot "cache" that data in any useful way. After the draw call is finished, it must assume that you might change the data, and it would have to compare every single bit to make sure you have not changed anything.
However, client-side VAs are not the only way to have VAs in the GL - actually, they are completely outdated, deprecated since GL 3.0 and removed from modern versions of OpenGL. The modern way of doing things is using Vertex Buffer Objects, which basically are buffers managed by the GL but manipulated by the user. Buffer objects are just a chunk of memory, but you will need special GL calls to create them, to read, write, or change their data, and so on. And a buffer object might very well not be stored in system memory, but directly in VRAM, which is very useful for static data that is used over and over again. Have a look at the GL_ARB_vertex_buffer_object extension spec, which originally introduced that feature in 2003 and became core in GL 1.5.
4. Assuming the answer to the last three questions is yes, what if I want to do this on multiple vertex arrays every frame? I'm assuming doing this on any more than one vertex array would cause OpenGL to drop the last used array from graphics memory and start using the new one. But in my case the vertex arrays are never going to change. So what I want to know is: does OpenGL keep my vertex arrays around in case next time I send it vertex data it will be the same data? If not, is there a way I can optimize this to allow something like this? Basically I want to draw procedurally between the vertices using indices without updating the vertex data, in order to reduce overhead and speed up complicated rendering that requires constantly changing procedural shapes that will always use the vertices from the original vertex array. Is this possible or am I just fantasizing?
VBOs are exactly what you are looking for, here.
5. If I'm just fantasizing about my fourth question, what are some good fast ways of drawing a whole lot of polygons each frame where only a few will change? Do I always have to pass in a totally new set of vertex data for even small changes? Does it already do this anyway when the vertex data doesn't change? Because I notice I can't really get around the vertex pointer call each frame.
You can also update just parts of a VBO. However, it might become inefficient if you have many small parts that are randomly distributed in your buffer; it will be more efficient to update contiguous (sub-)regions. But that is a topic of its own.
Yes
Yes
No. As soon as you create a Vertex Buffer Object (VBO), it will stay in GPU memory. Otherwise the vertex data needs to be re-transferred (an old method of avoiding this was display lists). In both cases the performance of subsequent frames should stay similar (but much better with the VBO method): you can do the VBO creation and upload before rendering the first frame.
The VBO was introduced to provide you exactly with this functionality. Just create several VBOs. Things get messy when you need more GPU memory than available though.
VBO is still the answer, and see Modifying only a specific element type of VBO buffer data?
It sounds like you should try something called Vertex Buffer Objects. It offers the same benefits as vertex arrays, but you can create multiple vertex buffers and store them in "named slots". This method has much better performance, as data is stored directly in graphics card memory.
Here is a good tutorial in C++ to start with.
Is it possible in desktop GLSL to pass a fixed size array of floats to the vertex shader as an attribute? If yes, how?
I want to have per vertex weights for character animation so I would like to have something like the following in my vertex shader:
attribute float weights[25];
How would I fill the attribute array from my C++ & OpenGL program? I have seen in another question that I could get the attribute location of the array attribute and then just add the index to that location. Could someone give an example on that for my pretty large array?
Thanks.
Let's start with what you asked for.
On pretty much no hardware that currently exists will attribute float weights[25]; compile. While shaders can have arrays of attributes, each array index takes up a new attribute index. And on all hardware that currently exists, the maximum number of attribute indices is... 16. You'd need 25, and that's just for the weights.
Now, you can mitigate this easily enough by remembering that you can use vec4 attributes. Thus, you store every four array elements in a single attribute. Your array would be attribute vec4 weights[7]; which is doable. Your weight-fetching logic will have to change of course.
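The remapping described here can be sketched as follows (a sketch of the index math only; in GLSL the fetch would become the subscript form noted in the comment):

```cpp
#include <cassert>

// A conceptual float weights[25] packed into attribute vec4 weights[7]:
// element i lives in attribute i / 4, component i % 4 (7 * 4 = 28 slots,
// so the last 3 components are padding). In the shader, the lookup
// becomes weights[i / 4][i % 4] instead of weights[i].
int attributeIndex(int i) { return i / 4; }
int componentIndex(int i) { return i % 4; }
```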
Even so, you don't seem to be taking in the ramifications of what this would actually mean for your vertex data. Each attribute represents a component of a vertex's data. Each vertex for a rendering call will have the same amount of data; the contents of that data will differ, but not how much data.
In order to do what you're suggesting, every vertex in your mesh would need 25 floats describing the weights. Even if these were stored as normalized unsigned bytes, that's still 25 extra bytes worth of data per vertex at a minimum. That's a lot. Especially considering that for the vast majority of vertices, most of these values will be 0. Even in the worst case, you'd be looking at maybe 6-7 bones affecting a single vertex.
The way skinning is generally done in vertex shaders is to limit the number of bones that affects a single vertex to four. This way, you don't use an array of attributes; you just use a vec4 attribute for the weights. Of course, you also now need to say which bone is associated with which weight. So you have a second vec4 attribute that specifies the bone index for that weight.
This strikes a good balance. You only take up 2 extra attributes (which can be unsigned bytes in terms of size). And for the vast majority of vertices, you'll never even notice, because most vertices are only influenced by 1-3 bones. A few use 4, and fewer still use 5+. In those cases, you just cut off the lowest weights and recompute the weights of the others proportionately.
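The cut-off-and-renormalize step can be done offline when preparing the mesh; a minimal sketch (a hypothetical helper, not part of any particular engine):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

// Keep only the 4 largest bone influences for a vertex and rescale the
// survivors so the weights sum to 1 again.
std::vector<std::pair<int, float>>
limitToFourBones(std::vector<std::pair<int, float>> influences) {
    std::sort(influences.begin(), influences.end(),
              [](const std::pair<int, float> &a, const std::pair<int, float> &b) {
                  return a.second > b.second;     // largest weight first
              });
    if (influences.size() > 4) influences.resize(4);
    float sum = 0.0f;
    for (const auto &w : influences) sum += w.second;
    for (auto &w : influences) w.second /= sum;  // renormalize
    return influences;
}
```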
Nicol Bolas already gave you an answer how to restructure your task. You should do it, because processing 25 floats for a vertex, probably through some quaternion multiplication will waste a lot of good GPU processing power; most of the attributes for a vertex will translate close to an identity transform anyway.
However, for academic reasons I'm going to tell you how to pass 25 floats per vertex. The key is not using attributes for this, but fetching the data from some buffer or texture. The GLSL vertex shader stage has the built-in variable gl_VertexID, which holds the index of the currently processed vertex. With recent OpenGL you can access textures from the vertex shader as well. So you have a texture of size vertex_count × 25 holding the values. In your vertex shader you can access them using the texelFetch function, i.e. texelFetch(param_buffer, ivec2(gl_VertexID, 3), 0);
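To make the addressing concrete, a CPU-side sketch of that texel layout (names are hypothetical):

```cpp
#include <cassert>
#include <vector>

// A vertex_count x 25 float texture where texel (x = vertexID,
// y = paramIndex) holds parameter y of vertex x. Row y starts at
// y * vertexCount in the linearized image, which is the same location
// texelFetch(tex, ivec2(vertexID, paramIndex), 0) addresses in GLSL.
float fetchParam(const std::vector<float> &tex, int vertexCount,
                 int vertexID, int paramIndex) {
    return tex[paramIndex * vertexCount + vertexID];
}
```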
If used in skeletal animation, this system is often referred to as texture skinning. However, it should be used sparingly, as it's a real performance hog. But sometimes you can't avoid it, for example when implementing a facial animation system where you have to weight all the vertices to 26 muscles if you want to accurately simulate a human face.