I'm trying to use a depth texture in a compute shader.
The depth texture is created with the format VK_FORMAT_D32_SFLOAT and with the usage VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT | VK_IMAGE_USAGE_STORAGE_BIT.
The problem is that this combination of parameters does not seem to be supported; I get this validation warning: vkCreateImageView(): pCreateInfo->format VK_FORMAT_D32_SFLOAT with tiling VK_IMAGE_TILING_OPTIMAL does not support usage that includes VK_IMAGE_USAGE_STORAGE_BIT.
Apart from this message, the program works fine and the compute shader successfully reads the depth texture.
Is it possible to read a depth texture in a compute shader?
Yes, it's possible to read a 32-bit floating-point depth image in a compute shader. Just not in your implementation.
Vulkan permits an implementation to refuse certain combinations of image formats and usages. They can refuse some formats entirely, while restricting other formats to only specific usages. As such, unless the format+usage combination you intend to use is on the Vulkan specification's list of required functionality, you must query support for it.
Vulkan doesn't require that implementations allow you to use D32 images as storage images. Therefore, you must check to see if a particular implementation provides this functionality.
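For example, a minimal sketch of that check with vkGetPhysicalDeviceFormatProperties (assuming you already have a VkPhysicalDevice handle; the helper name is just illustrative):

#include <vulkan/vulkan.h>

// Sketch: does D32_SFLOAT support storage-image usage with optimal tiling?
// Assumes physicalDevice is a valid VkPhysicalDevice obtained elsewhere.
bool SupportsD32StorageImage(VkPhysicalDevice physicalDevice)
{
    VkFormatProperties props{};
    vkGetPhysicalDeviceFormatProperties(physicalDevice, VK_FORMAT_D32_SFLOAT, &props);

    // For images created with VK_IMAGE_TILING_OPTIMAL, the relevant field is
    // optimalTilingFeatures; storage-image support is its own feature bit.
    return (props.optimalTilingFeatures & VK_FORMAT_FEATURE_STORAGE_IMAGE_BIT) != 0;
}

If that bit isn't set, reading the depth image as a sampled image (VK_IMAGE_USAGE_SAMPLED_BIT) in the compute shader is typically a more portable option than storage-image access.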
Related
Generally, on modern desktop OpenGL hardware, what is the best way to fill a depth buffer from a compute shader and then use that depth buffer for graphics pipeline rendering with triangles etc.?
Specifically, I am wondering about concerns regarding HiZ. I also wonder whether it's better to do the compute shader modifications to the depth buffer before or after the graphics rendering?
If the compute shader runs after the graphics rendering, I assume the depth buffer will typically be decompressed behind the scenes. But I worry that, done the other way around, the depth buffer may be left in a decompressed/non-optimal state for the graphics pipeline.
As far as I know, you cannot bind textures with any of the depth formats as images, and thus cannot write to depth-format textures in compute shaders. See the glBindImageTexture documentation; it lists the formats that your texture format must be compatible with. Depth formats are not among them, and the specification says the depth formats are not compatible with the normal formats.
Texture copying functions have the same compatibility restrictions, so you can't even, e.g., write to a normal texture in the compute shader and then copy to a depth texture. glCopyImageSubData does not explicitly have that restriction, but I haven't tried it and it's not part of the core profile anymore.
What might work is writing to a normal texture, then rendering a fullscreen triangle and setting gl_FragDepth to values read from the texture, but that's an additional fullscreen pass.
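As a rough sketch of that fallback (the compute shader writes into an ordinary r32f texture, and a fullscreen pass copies it into the real depth buffer; the uniform name is illustrative, not something from the question):

// Fragment shader for the fullscreen copy pass, held as a C string.
// Writing gl_FragDepth makes the copied value the one used for the depth test/write.
static const char* kCopyDepthFragSrc = R"GLSL(
#version 430 core
uniform sampler2D uDepthSource;   // r32f texture written by the compute shader
out vec4 oColor;                  // color output is unused here

void main()
{
    float d = texelFetch(uDepthSource, ivec2(gl_FragCoord.xy), 0).r;
    gl_FragDepth = d;
    oColor = vec4(0.0);
}
)GLSL";

Note that writing gl_FragDepth in a pass like this generally disables early depth testing for that pass, which ties into the HiZ concern from the question.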
I don't quite understand your second question - if your compute shader stuff modifies the depth buffer, the result will most likely be different depending on whether you do it before or after regular rendering because different parts will be visible or occluded.
But maybe that question is moot, since it seems you cannot manually write into depth buffers at all - which might also answer your third question: by not writing into depth buffers you cannot mess with their compression :)
Please note that I'm no expert in this; I had a similar problem and looked at the docs/spec myself, so this all might be wrong :) Please let me know if you manage to write to depth buffers with compute shaders!
Is it possible to find the internal format of a texture within the shader (glsl)?
For example, if I have a texture with the format GL_RG, is it possible to recognize in the shader that the blue and alpha value are "constant" and can be ignored?
I know I can use a uniform to pass the texture type from C++ to the shaders. But is there an "intrinsic" way to find out from within the shader?
No, I don't believe there is anything that would give you this information directly.
Looking at the latest GLSL spec (4.50 at this time), I would expect a hypothetical function to get this information to be listed in section "8.9.1. Texture Query Functions" starting on page 158. But the only functions listed there are:
textureSize: Get size of texture.
textureQueryLod: Get the level of detail used for the given texture coordinates.
textureQueryLevels: Get the number of mipmap levels in the texture.
textureSamples: Get the number of samples for a multisampled texture.
So unless there is something completely different I missed, what you're looking for does not exist.
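For completeness, a tiny illustration of the queries that do exist (held as a C string; the sampler name is made up) - none of them reveal the internal format:

static const char* kTextureQuerySnippet = R"GLSL(
#version 450 core
uniform sampler2D uTex;   // illustrative sampler

void probe()
{
    ivec2 size   = textureSize(uTex, 0);      // width/height of mip level 0
    int   levels = textureQueryLevels(uTex);  // number of mipmap levels
    // textureQueryLod(uTex, uv) and textureSamples(sampler2DMS) follow the same pattern.
}
)GLSL";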
Is there a way to read fragment from the framebuffer currently rendered?
So, I'm looking for a way to read color information from the fragment that's on the place that current fragment will probably overwrite. So, exact position of the fragment that previously rendered.
I found gl_FragData and gl_LastFragData to be added with certain EXT_ extensions to shaders, but if they are what I need, could somebody explain how to use those?
I am looking either for a OpenGL or OpenGL ES 2.0 solution.
EDIT:
All the time I was searching for a solution that would allow me to have some kind of read-and-write "uniform" accessible from shaders. For anyone out there searching for a similar thing: OpenGL 4.3+ supports image and buffer storage types. They allow both reading and writing simultaneously, and in combination with compute shaders they proved to be a very powerful tool.
Your question seems rather confused.
Part of your question (the first sentence) asks if you can read from the framebuffer in the fragment shader. The answer is, generally no. There is an OpenGL ES 2.0 extension that lets you do so, but it's only supported on some hardware. In desktop GL 4.2+, you can use arbitrary image load/store to get the same effect. But you can't render to that image anymore; you have to write your data using image storing functions.
gl_LastFragData is pretty simple: it's the color from the sample in the framebuffer that will be overwritten by this fragment shader. You can do with it what you wish, if it is available.
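For illustration, a minimal ES 2.0 fragment shader using that mechanism might look like this (held as a C string; it only works where GL_EXT_shader_framebuffer_fetch is actually exposed, and the uniform name is made up):

static const char* kFramebufferFetchFragSrc = R"GLSL(
#extension GL_EXT_shader_framebuffer_fetch : require
precision mediump float;
uniform vec4 uSrcColor;                       // illustrative uniform

void main()
{
    vec4 dst = gl_LastFragData[0];            // color already in the framebuffer here
    gl_FragColor = mix(dst, uSrcColor, 0.5);  // a custom, programmable blend
}
)GLSL";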
The second part of your question (the second paragraph) is a completely different question. There, you're asking about fragments that were potentially never written to the framebuffer. You can't read fragments; you can only read images. And if a fragment fails the depth test, then its data was never written to an image, so you can't read it.
With most NVIDIA hardware you can use the GL_NV_texture_barrier extension to read from a texture that's currently bound to a framebuffer. But bear in mind that you won't be able to read data any more recent than what was produced in the previous draw call.
Is there a way to get results from a shader running on a GPU back to the program running on the CPU?
I want to generate a polygon mesh from simple voxel data based on a computationally costly algorithm on the GPU, but I need the result on the CPU for physics calculations.
Define "the results"?
In general, if you're doing GPGPU-style computations with OpenGL, you are going to need to structure your shaders around the needs of a rendering system. Rendering systems are designed to be one-way: data goes into them and an image is produced. Going backwards, having the rendering system produce data, is not generally how rendering systems are structured.
That doesn't mean you can't do it, of course. But you need to architect everything around the limitations of OpenGL.
OpenGL offers a number of hooks where you can write data from certain shader stages; most of these require specialized hardware.
Fragment shader outputs
Any hardware capable of fragment shaders will obviously allow you to write to the current framebuffer you're rendering. Through the use of framebuffer objects and textures with floating-point or integer image formats, you can write pretty much any data you want to a variety of images. Once in a texture, you can simply call glGetTexImage to get the rendered pixel data. Or you can just do glReadPixels to get it if the FBO is still bound. Either way works.
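For instance, a readback sketch might look like this (assumes a current GL context and an FBO with an RGBA32F attachment that was rendered to elsewhere; GLEW and the helper name are just one possible setup):

#include <GL/glew.h>
#include <vector>

// Read back an RGBA32F color attachment after rendering into it.
// Error checking omitted for brevity.
std::vector<float> ReadBackAttachment(GLuint fbo, int width, int height)
{
    std::vector<float> pixels(static_cast<size_t>(width) * height * 4);

    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
    glReadBuffer(GL_COLOR_ATTACHMENT0);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, pixels.data());
    glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);

    // Alternatively, with the attached texture bound:
    //   glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_FLOAT, pixels.data());
    return pixels;
}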
The primary limitations of this method are:
The number of images you can attach to the framebuffer; this limits the amount of data you can write. On pre-GL 3.x hardware, FBOs were typically limited to only 4 images plus a depth/stencil buffer. In 3.x and better hardware, you can expect a minimum of 8 images.
The fact that you're rendering. This means that you need to set up your vertex data to position a triangle exactly where you want it to modify data. This is not a trivial undertaking. It's also difficult to get useful input data, since you typically want each texel to be fairly independent of the others. Structuring your fragment shader around these limitations is difficult. Not impossible, but non-trivial in many cases.
Transform Feedback
This OpenGL 3.0 feature allows the output from the Vertex Processing stage of OpenGL (vertex shader and optional geometry shader) to be captured in one or more buffer objects.
This is much more natural for capturing vertex data that you want to play with or render again. In your case, you'll need to read it back after rendering it, perhaps with a glGetBufferSubData call, or by using glMapBufferRange for reading.
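A rough sketch of the whole flow (assuming a current GL context and a program whose vertex shader writes a single vec4 output; the name "outValue" and the helper function are purely illustrative):

#include <GL/glew.h>
#include <vector>

// Capture vertex-shader output into a buffer, then read it back on the CPU.
std::vector<float> CaptureVertexOutput(GLuint program, GLuint vao, GLsizei vertexCount)
{
    // Must be set before the program is (re)linked.
    const char* varyings[] = { "outValue" };
    glTransformFeedbackVaryings(program, 1, varyings, GL_INTERLEAVED_ATTRIBS);
    glLinkProgram(program);

    // Buffer that receives the captured vec4s.
    GLuint tfbo = 0;
    glGenBuffers(1, &tfbo);
    glBindBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, tfbo);
    glBufferData(GL_TRANSFORM_FEEDBACK_BUFFER, vertexCount * 4 * sizeof(float),
                 nullptr, GL_STATIC_READ);
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, tfbo);

    // Capture while "drawing"; rasterization itself can be discarded.
    glUseProgram(program);
    glBindVertexArray(vao);
    glEnable(GL_RASTERIZER_DISCARD);
    glBeginTransformFeedback(GL_POINTS);
    glDrawArrays(GL_POINTS, 0, vertexCount);
    glEndTransformFeedback();
    glDisable(GL_RASTERIZER_DISCARD);

    // Read the captured data back.
    std::vector<float> results(static_cast<size_t>(vertexCount) * 4);
    glGetBufferSubData(GL_TRANSFORM_FEEDBACK_BUFFER, 0,
                       results.size() * sizeof(float), results.data());
    glDeleteBuffers(1, &tfbo);
    return results;
}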
The limitations here are that you can generally only capture 4 output values, where each value is a vec4. There are also some strict layout restrictions. Some OpenGL 3.x and 4.x hardware offers the ability to write data to multiple feedback streams, which can all be written into different buffers.
Image Load/Store
This GL 4.2 feature represents the pinnacle of writing: you can bind an image (a buffer texture, if you want to write to a buffer), and just write to it. There are memory ordering constraints that you need to work within.
It's very flexible, but very complex. Besides the difficulty in using it properly, there are a number of limitations. The number of images you can write to will be fairly limited, perhaps 8 or so. And implementations may have total write limits, so that 8 images to write to may have to be shared by the fragment shader's outputs.
What's more, image outputs are only guaranteed for the fragment shader (and 4.3's compute shaders). That is, hardware is allowed to forbid you from using image load/store on non-FS/CS shader stages.
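As an illustration of the readback path (assumes a current GL 4.3+ context and an already compiled and linked compute program that declares layout(binding = 0, r32f) uniform image2D with an 8x8 local size; the helper name is made up):

#include <GL/glew.h>
#include <vector>

// Let a compute shader write into an r32f image, then pull the result to the CPU.
std::vector<float> RunComputeAndReadBack(GLuint program, int width, int height)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexStorage2D(GL_TEXTURE_2D, 1, GL_R32F, width, height);

    // Bind the texture to image unit 0 for writing, then dispatch.
    glBindImageTexture(0, tex, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_R32F);
    glUseProgram(program);
    glDispatchCompute((width + 7) / 8, (height + 7) / 8, 1);

    // Make the shader's image stores visible to the glGetTexImage below.
    glMemoryBarrier(GL_TEXTURE_UPDATE_BARRIER_BIT);

    std::vector<float> data(static_cast<size_t>(width) * height);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RED, GL_FLOAT, data.data());
    glDeleteTextures(1, &tex);
    return data;
}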
I was wondering if there is an easy way to query (programmatically) the GPU OpenGL limits for the following features:
- maximum 2D texture size
- maximum 3D texture size
- maximum number of vertex shader attributes
- maximum number of varying floats
- number of texture image units (in vertex shader, and in fragment shader)
- maximum number of draw buffers
I need to know these numbers in advance before writing my GPU Research Project.
glGet() is your friend, with:
GL_MAX_3D_TEXTURE_SIZE
GL_MAX_TEXTURE_SIZE
GL_MAX_VERTEX_ATTRIBS
GL_MAX_VARYING_FLOATS
GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS
GL_MAX_TEXTURE_IMAGE_UNITS
GL_MAX_DRAW_BUFFERS
e.g.:
GLint result;
glGetIntegerv(GL_MAX_VARYING_FLOATS, &result);
Not quite sure what your project is setting out to achieve, but you might be interested in OpenCL if it's general-purpose computing and you weren't already aware of it - in particular CL/GL interop, if there is a graphics element too and your hardware supports it.
As Damon pointed out in the comments, in practice it may be more complex than this for texture sizes. The problems arise because rendering may fall back from hardware to software for some sizes of textures, and also because the maximum size of a texture varies depending on the pixel format used. To work around this, it is possible to use GL_PROXY_TEXTURE_* with glTexImage*.
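For example, a proxy-texture probe could look roughly like this (assumes a current GL context; the helper name is made up):

#include <GL/glew.h>

// Ask whether a given size + internal format combination would actually be
// accepted, instead of trusting GL_MAX_TEXTURE_SIZE alone.
bool TextureSizeSupported(GLsizei width, GLsizei height, GLenum internalFormat)
{
    glTexImage2D(GL_PROXY_TEXTURE_2D, 0, internalFormat, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

    // If the proxy allocation would fail, the queried width comes back as 0.
    GLint probedWidth = 0;
    glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &probedWidth);
    return probedWidth != 0;
}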
As a complement to what was said by "awoodland", and if you don't already know it, I think you should take a look at GLEW.
GLEW provides efficient run-time mechanisms for determining which OpenGL extensions are supported on the target platform.
http://glew.sourceforge.net/