Writing to one layer of a texture array and reading from another - c++

I'm currently implementing depth peeling in an OpenGL 3D engine, and I want to store the peeled values in a 2D depth texture array. On its nth pass, the algorithm needs to read layer n-1 and, if the current value is greater (the object is farther away), write the current value into layer n. However, we are not supposed to be able to read from and write to the same texture.
Would it be possible, for example, to read from only the (n-1)th layer while attaching the nth layer as the depth attachment of the current FBO?

However, we are not supposed to be able to read and write in the same texture.
Says who?
Textures store images. Note the plural. There is no prohibition against reading from and writing to the same texture. The prohibition is against reading from and writing to the same image.
Array textures contain multiple images. Each array layer is its own 2D image (or set of 2D mipmap images). Therefore, it is perfectly legal to read from one array layer and write to another. It's perfectly legal to read from one mipmap in an array layer and write to another mipmap in the same array layer.
What is not legal is reading/writing on the same mipmap of the same array layer.
This is why OpenGL doesn't give an error if the same texture is attached to the FBO at the same time it is bound to the rendering context for reading. This is legal as long as you ensure that you're not reading from/writing to the same image.
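For example, a minimal sketch of that setup could look like the following; depthArrayTex, fbo, and the layer index n are placeholder names, not taken from the question:

// Attach layer n of a depth texture array as the FBO's depth attachment
// while the shader samples only layer n-1 of the same texture.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTextureLayer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          depthArrayTex, 0 /* mip level */, n /* write layer */);

// Bind the same texture for sampling; this is legal as long as the shader
// only ever reads layer n-1, never layer n.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D_ARRAY, depthArrayTex);

// In the fragment shader (GLSL):
//   uniform sampler2DArray prevDepth;
//   float d = texture(prevDepth, vec3(uv, float(n - 1))).r;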

Related

Are there sampler-less cubemaps in Vulkan?

In my game engine, all the static textures are bound to one big descriptor set (I use dynamic indexing for that), and now I also want to bind a cube texture at an arbitrary index inside this descriptor array.
All my textures are bound separately, without any sampler (I am using the GLSL extension GL_EXT_samplerless_texture_functions for that).
Is there a way in Vulkan GLSL to implement cube sampling of a sampler-less texture?
Sorry about my English.
The texelFetch functions are intended for accessing a specific texel from a given mipmap level and layer of the image. The texture coordinates are explicitly in "texel space", not any other space. No translation is needed, not even denormalization.
By contrast, accessing a cubemap from a 3D direction requires work. The 3D direction must be converted into a 2D XY texel coordinate and a face within the cubemap.
Not doing translations of this sort is the entire point of texelFetch. This is what it means to access a texture without a sampler. So it doesn't make sense to ever texelFetch a cubemap (and only the texelFetch functions are available for accessing a texture without a sampler).
So if you want to access a cubemap, you want one of two things:
You want to pass a 3D direction, which gets converted into a 2D coordinate and face index.
You need a sampler for that. Period.
You have a 2D coordinate in texture space and a face index, and you want to access that texel from the texture.
In this case, you're not accessing a cubemap; you're accessing a 2D array texture. Indeed, there is no such thing in Vulkan as a "cubemap image"; there is merely a 2D array image which has a flag set on it that allows it to be associated with cubemap image views. That is, a "cubemap" is a special way of accessing a 2D array. But you can just as easily create a 2D array view of the same image and bind that to the descriptor.
So do that.
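In code, creating such a 2D array view might look like this (a sketch; cubeImage, device, and the format are assumptions, and the image must have been created with VK_IMAGE_CREATE_CUBE_COMPATIBLE_BIT and six layers):

// Create a 2D-array view of the cube-compatible image so it can be bound
// as a texture2DArray and accessed with texelFetch.
VkImageViewCreateInfo viewInfo{};
viewInfo.sType    = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
viewInfo.image    = cubeImage;
viewInfo.viewType = VK_IMAGE_VIEW_TYPE_2D_ARRAY;  // not VK_IMAGE_VIEW_TYPE_CUBE
viewInfo.format   = VK_FORMAT_R8G8B8A8_UNORM;     // assumed format
viewInfo.subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 6 };
VkImageView arrayView;
vkCreateImageView(device, &viewInfo, nullptr, &arrayView);

// GLSL side, with GL_EXT_samplerless_texture_functions:
//   layout(set = 0, binding = 0) uniform texture2DArray faces;
//   vec4 texel = texelFetch(faces, ivec3(xy, faceIndex), 0);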

OpenGL: efficient way to read sparse pixel data from many framebuffer textures?

I'm writing a program that uses the GPU for computation, and I want to read data from the framebuffers back into my client code. The framebuffers I'm using comprise about 40 textures, each 1024x1024, all of which contain data that needs to be read, but only very sparsely: around 50 pixels at arbitrary x/y coordinates from each texture. Calling glReadPixels for each texture, every frame, is proving too costly.
Since I only need to read a few select pixels from each texture, is there a way to quickly gather their data without downloading every entire texture from the GPU?
This sounds fairly expensive no matter how you slice it. A couple of approaches come to mind:
What I would try first is glReadPixels(), but using a PBO. Bind a buffer large enough to hold all the pixels to the GL_PIXEL_PACK_BUFFER target, then submit the glReadPixels() calls, with offsets that place the results in distinct sections of the buffer. Then call glMapBufferRange() to read back the values.
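A rough sketch of that idea, assuming one 1x1 read per pixel of interest (the PixelRef type, the fbos array, and the RGBA8 format are placeholders):

struct PixelRef { int fboIndex, x, y; };
std::vector<PixelRef> pixelsOfInterest; // filled by the application

GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, pixelsOfInterest.size() * 4,
             nullptr, GL_STREAM_READ);

GLintptr offset = 0;
for (const PixelRef& p : pixelsOfInterest) {
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbos[p.fboIndex]);
    // With a PBO bound, the last argument is a byte offset, not a pointer.
    glReadPixels(p.x, p.y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, (void*)offset);
    offset += 4;
}

// Mapping is where the CPU may stall waiting for the GPU to finish.
const void* results = glMapBufferRange(GL_PIXEL_PACK_BUFFER, 0,
                                       pixelsOfInterest.size() * 4,
                                       GL_MAP_READ_BIT);
// ... consume the packed RGBA8 values ...
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);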
An alternate approach is that you copy all the pixels you want to read into a single texture. You could use glBlitFramebuffer() or glCopyTexSubImage2D(). Then use a single glReadPixels() or glGetTexImage() call to get all the data from this texture.
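For the copy-then-read variant, a sketch might look like this, reusing the PixelRef list from the previous sketch (gatherFbo is an assumed FBO whose color attachment is a small texture, e.g. 2048x1):

// Blit each 1x1 pixel of interest into consecutive slots of a small
// gather framebuffer, then read everything back in a single call.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, gatherFbo);
int slot = 0;
for (const PixelRef& p : pixelsOfInterest) {
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbos[p.fboIndex]);
    glBlitFramebuffer(p.x, p.y, p.x + 1, p.y + 1,  // 1x1 source rect
                      slot, 0, slot + 1, 1,        // packed along one row
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
    ++slot;
}
glBindFramebuffer(GL_READ_FRAMEBUFFER, gatherFbo);
// gathered: client memory, or a byte offset if a PBO is bound.
glReadPixels(0, 0, slot, 1, GL_RGBA, GL_UNSIGNED_BYTE, gathered);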
Both of these approaches should result in about the same amount of work and synchronization overhead. But one or the other could be more efficient, depending on which paths in the driver are better optimized.
As the earlier answer already suggested, I would make very sure that you really need this, and there isn't any way to keep and process the data on the GPU. Any time you read back data, you introduce synchronization between GPU and CPU, which is mostly harmful to performance.
Do you have any restrictions on which OpenGL version you can use? If not, it sounds like you should look into compute shaders. You say that you are calculating data, so I assume that you are "abusing" the rendering pipeline for your application, especially the fragment shader, and storing fragment data in the framebuffer that is interpreted as something other than color.
If this is the case, then all you need is a shader storage buffer and an atomic counter. At some point right now you are deciding that fragment (x, y, z) (z being the texture index) should have value v. So in your compute shader, you do your calculation as you would in the fragment shader, but as output, you store the tuple (x, y, z, v). You store this tuple in the shader storage buffer at the index given by an atomic counter, which you increment after each written element. In the end, you have your data stored compactly in the buffer and only need to read back that many elements; the exact number is the value the atomic counter holds after termination. Download the buffer with glGetBufferSubData into an array of location-value pairs, iterate over it, and do your CPU magic.
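A sketch of what that could look like, with assumed binding points and placeholder names (the per-element calculation is left as a stub):

// Compute shader (GLSL 4.3): append sparse (x, y, z, v) results to an SSBO.
const char* computeSrc = R"GLSL(
#version 430
layout(local_size_x = 16, local_size_y = 16) in;

struct Result { ivec3 loc; float value; };
layout(std430, binding = 0) buffer Results { Result results[]; };
layout(binding = 1) uniform atomic_uint resultCount;

void main() {
    ivec3 p = ivec3(gl_GlobalInvocationID);
    float v = 0.0; // your calculation here
    if (v != 0.0) { // store only the values you actually need
        uint i = atomicCounterIncrement(resultCount);
        results[i].loc = p;
        results[i].value = v;
    }
}
)GLSL";

// Host side, after glDispatchCompute and a glMemoryBarrier:
struct Result { int32_t x, y, z; float value; }; // matches the std430 layout
GLuint count = 0;
glBindBuffer(GL_ATOMIC_COUNTER_BUFFER, counterBuf);
glGetBufferSubData(GL_ATOMIC_COUNTER_BUFFER, 0, sizeof(count), &count);
std::vector<Result> hits(count);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
glGetBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, count * sizeof(Result),
                   hits.data());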
If you need to copy the data from the GPU to the CPU memory, there is no way (AFAIK) around using glReadPixels.
Depending on which platform you're using and the specifics of your program, you can try several optimizations, using FBOs:
Copy only part of the texture, assuming you know the locations of the pixels. Note that in most cases it is still faster to copy the entire texture than to issue several small reads.
If you don't need 32-bit textures, you can render to a lower color resolution. The specifics depend on your platform's extensions.
Maybe you don't really need to copy the pixels at all, since you plan to use them as a texture input to the next stage? In that case you can copy the pixels directly on the GPU using glCopyTexImage2D, as sketched below.
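A sketch of that GPU-side copy (srcFbo, nextStageTex, and the region are placeholders); glCopyTexSubImage2D is used here rather than glCopyTexImage2D so the copy goes into an existing texture:

// Copy a region of the currently bound read framebuffer straight into a
// texture that the next rendering stage samples; no CPU round trip.
glBindFramebuffer(GL_READ_FRAMEBUFFER, srcFbo);
glBindTexture(GL_TEXTURE_2D, nextStageTex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0,  // target, mip level
                    0, 0,              // destination offset in the texture
                    x, y, w, h);       // source region in the framebuffer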

Looking for alternative to glTexSubImage2D with data offset support

I have a PBO which is updated each frame by CUDA. After that, I also want to update a texture from this PBO, which I do using glTexSubImage2D. I'm afraid updating the whole texture is expensive, and I would like to update only the viewable region of the texture, while my PBO holds the data for the whole texture.
The problem is that, although glTexSubImage2D accepts an offset, width, and height as parameters, they're only used when writing to the texture; my buffer data still needs to be laid out linearly. I'm afraid preparing the buffer data myself might be too expensive (actually it would be extremely expensive, since my PBO resides in GPU memory).
Is there any alternative to glTexSubImage2D that also takes parameters for the buffer offset, or should I keep updating the whole texture at once?
Please read up on the pixel store parameters, set with glPixelStorei. The parameters GL_UNPACK_ROW_LENGTH, GL_UNPACK_SKIP_PIXELS, and GL_UNPACK_SKIP_ROWS are of most interest to you:
These values are provided as a convenience to the programmer; they provide no functionality that cannot be duplicated by incrementing the pointer passed to glDrawPixels, glTexImage1D, glTexImage2D, glTexSubImage1D, glTexSubImage2D, glBitmap, or glPolygonStipple. Setting GL_UNPACK_SKIP_PIXELS to i is equivalent to incrementing the pointer by i * n components or indices, where n is the number of components or indices in each pixel. Setting GL_UNPACK_SKIP_ROWS to j is equivalent to incrementing the pointer by j * k components or indices, where k is the number of components or indices per row, as just computed in the GL_UNPACK_ROW_LENGTH section.
You're still going to use glTexImage and/or glTexSubImage for data transfer.
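Put together, updating only a sub-window from a full-size image in the PBO might look like this (a sketch; fullW, srcX/srcY, dstX/dstY, W, H, and the RGBA8 format are placeholders):

// The PBO holds the full image, fullW pixels wide; upload only a WxH
// window of it, starting at (srcX, srcY) in the source data.
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glPixelStorei(GL_UNPACK_ROW_LENGTH, fullW);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, srcX);
glPixelStorei(GL_UNPACK_SKIP_ROWS, srcY);
glTexSubImage2D(GL_TEXTURE_2D, 0, dstX, dstY, W, H,
                GL_RGBA, GL_UNSIGNED_BYTE, nullptr); // offset 0 into the PBO

// Reset so later uploads aren't affected.
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);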
glTexSubImage2D can produce errors when reading data from the PBO if the selected region of the texture is not the whole texture size.
That is a known issue which may not be fixed (see, e.g., this OpenGL forum thread).

Transferring large voxel data to a GLSL shader

I'm working a program which renders a dynamic high resolution voxel landscape.
Currently I am storing the voxel data in 32x32x32 blocks, at 4 bits per voxel:
struct MapData {
    char data[32][32][16]; // 4 bits per voxel: two voxels packed per byte
};

MapData *world = new MapData[(width >> 5) * (height >> 5) * (depth >> 5)];
What I'm trying to do with this, is send it to my vertex and fragment shaders for processing and rendering. There are several different methods I've seen to do this, but I have no idea which one will be best for this.
I started with a sampler1D format, but that results in floating-point output between 0 and 1. I also had the sneaking suspicion that it was storing the data as 16 bits per voxel.
As for Uniform Buffer Objects I tried and failed to implement this.
My biggest concern with all of this is not having to send the whole map to the GPU every frame. I want to be able to load maps up to ~256MB (1024x2048x256 voxels) in size, so I need to be able to send it all once, and then resend only the blocks that were changed.
What is the best solution for this, short of writing OpenCL to handle the video memory for me? If there's a better way to store my voxels that makes this easier, I'm open to other formats.
If you just want a large block of memory to access from within a shader, you can use a buffer texture. This obviously requires a semi-recent GL version (3.1 or better, or the equivalent extension), so you need DX10 hardware or better.
The concept is pretty straightforward. You make a buffer object that stores your data. You create a buffer texture using the typical glGenTextures command, then glBindTexture it to the GL_TEXTURE_BUFFER target. Then you use glTexBuffer to associate your buffer object with the texture.
Now, you seem to want to use 4 bits per voxel. So your image format needs to be a single-channel, unsigned 8-bit integral format. Your glTexBuffer call should be something like this:
glTexBuffer(GL_TEXTURE_BUFFER, GL_R8UI, buffer);
where buffer is the buffer object that stores your voxel data.
Once this is done, you can change the contents of this buffer object using the usual mechanisms.
You bind the buffer texture for rendering just like any other texture.
You use a usamplerBuffer sampler type in your shader, because it's an unsigned integral buffer texture. You must use the texelFetch command to access data from it, which takes integer texture coordinates and ignores filtering. Which is of course exactly what you want.
Note that buffer textures do have size limits. However, the size limits are often some large percentage of video memory.
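Putting those steps together, a minimal sketch (buffer names and sizes are placeholders; two 4-bit voxels are packed per byte, as in the MapData struct above):

GLuint buf, tex;
glGenBuffers(1, &buf);
glBindBuffer(GL_TEXTURE_BUFFER, buf);
glBufferData(GL_TEXTURE_BUFFER, totalVoxelBytes, world, GL_DYNAMIC_DRAW);

glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_BUFFER, tex);
glTexBuffer(GL_TEXTURE_BUFFER, GL_R8UI, buf); // one unsigned byte per texel

// Later, re-send only a changed 32x32x32 block (16 KB each):
glBufferSubData(GL_TEXTURE_BUFFER, blockIndex * sizeof(MapData),
                sizeof(MapData), &world[blockIndex]);

// GLSL side:
//   uniform usamplerBuffer voxels;
//   uint packed = texelFetch(voxels, byteIndex).r; // holds two 4-bit voxels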

How to read a 3D texture from GPU memory with Pixel Buffer Objects

I'm writing data into a 3D texture from within a fragment shader, and I need to asynchronously read back said data into system memory. The only means of asynchronously initiating the packing operation into the buffer object seems to be calling glReadPixels() with a NULL pointer. But this function insists on getting passed a rectangle defining the region to read back. Now I don't know if these parameters are ignored when using PBOs, but I assume not. In this case, I have no idea what to pass to this function in order to obtain the whole 3D texture.
Even if I have to read back individual slices (which would be kind of stupid IMO), I still have no idea how to communicate to OpenGL which slice to read from. Am I missing something?
BTW, I could use individual 2D textures for every slice, but that would screw up (3D-)mipmapping if I'm not mistaken. I wanted to use the 3D mipmaps in order to efficiently find regions of interest in the resulting 3D texture.
P.S. Sorry for the sub-optimal tags, apparently no one ever asked about 3d textures before and since I'm not allowed to create new tags...
Who says that glReadPixels is the only way to read image data? Maybe in OpenGL ES it is, but if you're using ES, you should say so. The rest of this answer will be assuming you're talking about desktop GL.
If you have a texture, and you want to read its contents, you should use glGetTexImage. The switch that controls whether it reads into a buffer object or not is the same switch that controls it for glReadPixels: whether a buffer is bound to GL_PIXEL_PACK_BUFFER.
Note that glGetTexImage will retrieve the entire texture (for a given mipmap level).
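For example, an asynchronous readback of the whole level 0 might look like this (a sketch; tex3d, pbo, the dimensions, and the RGBA8 format are placeholders):

glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, W * H * D * 4, nullptr, GL_STREAM_READ);

glBindTexture(GL_TEXTURE_3D, tex3d);
// With a PBO bound to GL_PIXEL_PACK_BUFFER, the pointer is a byte offset,
// so the transfer is initiated asynchronously, just as with glReadPixels.
glGetTexImage(GL_TEXTURE_3D, 0 /* mip level */, GL_RGBA, GL_UNSIGNED_BYTE,
              nullptr);

// ... do other work, then map when the data is actually needed ...
const void* voxels = glMapBufferRange(GL_PIXEL_PACK_BUFFER, 0, W * H * D * 4,
                                      GL_MAP_READ_BIT);
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);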