Rendering to a TBO - OpenGL

I need to render to a buffer texture. Why a TBO? A TBO can easily be mapped as a CUDA resource for graphics interop. It can also store byte-sized data, which is what I need. I tried to find related info in the GL specs, where it is stated that:
Buffer Textures work like 1D texture, only they have a single image,
identified by mipmap level 0.
But when I try to attach the TBO to an FBO, I always get a "missing attachment" error when checking completeness, which leads me to the conclusion that GL_TEXTURE_BUFFER is not supported as an FBO attachment.
Two questions:
Is this true?
Is writing to an SSBO the only alternative here?
More detailed info:
I am getting
GL_FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT
when I try to attach the TBO to the framebuffer. There is no point in attaching the whole code here, as it is heavily abstracted API code and, believe me, I am pretty experienced with OpenGL and framebuffers. Attaching a regular GL_TEXTURE_2D works great.
Here is the chunk of code for the TBO creation and the attachment stage:
GLuint tbo;
GLuint tboTex;
glGenBuffers(1, &tbo);
glBindBuffer(GL_TEXTURE_BUFFER, tbo);
glBufferData(GL_TEXTURE_BUFFER, viewportWidth * viewportHeight * 4, NULL, GL_DYNAMIC_DRAW);
glBindBuffer(GL_TEXTURE_BUFFER, 0);
glGenTextures(1, &tboTex);
glBindTexture(GL_TEXTURE_BUFFER, tboTex);
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA8, tbo);
glBindTexture(GL_TEXTURE_BUFFER, 0);
Then attach to FBO:
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, tboTex, 0);
///also tried:
/// glFramebufferTexture1D
/// glFramebufferTexture2D

Buffer textures cannot be attached to FBOs:
GL_INVALID_OPERATION is generated by[sic] if texture is a buffer texture.
A good reminder for why you should always check your OpenGL errors.
Is writing to an SSBO the only alternative here?
If your goal is to use a rendering operation to write stuff to a buffer object, you could also use Image Load/Store operations with buffer textures. But if your hardware could handle that, then it should handle SSBOs too.
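For illustration, a minimal sketch of that approach, assuming GL 4.2+ and the tbo/tboTex objects from the question (the viewportWidth uniform is a name invented here):
Fragment shader (GLSL 4.20):
#version 420
layout(rgba8, binding = 0) uniform writeonly imageBuffer outBuf;
uniform int viewportWidth; // hypothetical uniform set by the application
void main()
{
    int idx = int(gl_FragCoord.y) * viewportWidth + int(gl_FragCoord.x);
    imageStore(outBuf, idx, vec4(1.0, 0.0, 0.0, 1.0));
}
Application side:
glBindImageTexture(0, tboTex, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA8);
// ... draw something covering the pixels of interest ...
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT); // make the writes visible to later reads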
You could also try to use geometry shaders and transform feedback operations to write whatever you're trying to write.
However:
It can also store byte sized data
Images can store "byte sized data" as well. The image format GL_R8UI represents a single-channel, 8-bit unsigned integer.
That doesn't resolve any CUDA-interop issues, but rendering to bytes is very possible.
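To make that concrete, a hedged sketch of a one-byte-per-pixel render target (byteTex is a name invented here):
GLuint byteTex;
glGenTextures(1, &byteTex);
glBindTexture(GL_TEXTURE_2D, byteTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // integer textures must not be filtered
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8UI, viewportWidth, viewportHeight,
             0, GL_RED_INTEGER, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, byteTex, 0);
// The fragment shader then declares an unsigned integer output:
// layout(location = 0) out uint fragByte;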

Related

GL_TEXTURE_3D color and stencil FBO attachments

I am doing layered rendering to an offscreen FBO using OpenGL 4.3. I used GL_TEXTURE_3D with several layers as the COLOR attachment, and I use a geometry shader to index into the different layers when writing the output. That works OK. Now I also need a stencil attachment for a stencil test I perform during the rendering. First I tried just attaching a renderbuffer, as in the case with 2D attachments:
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                          GL_RENDERBUFFER, _stencilBuffer);
In this case, when checking the FBO for completeness, I get the framebuffer error:
GL_FRAMEBUFFER_INCOMPLETE_LAYER_TARGETS_ARB
Then I assumed that if the color attachment is 3D, the stencil attachment must also be 3D. And because there is no 3D renderbuffer, I tried to attach a 3D texture to the depth/stencil slot of the FBO.
glTexImage3D(GL_TEXTURE_3D, 0, GL_DEPTH24_STENCIL8, width, height, depth,
             0, GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, NULL);
where width is the texture width, height the texture height, and depth the number of layers inside the 3D texture.
//Attach to FBO:
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, texId, 0);
Doing it this way I am getting:
GL_FRAMEBUFFER_INCOMPLETE
INVALID_OPERATION
I have searched for any possible example of how such a setup should be done, but found nothing. I also tried GL_TEXTURE_2D_ARRAY instead; at first I thought I had the same problem, but (as the update below explains) this actually fixed it.
UPDATE
My fault, I got confused by some of my findings during debugging. Basically, half of what I wrote above can be discarded. But because other people may run into the same issues, I will explain what happened.
At first, when I attached a 3D texture to the COLOR attachment of the FBO, I created a renderbuffer for the GL_DEPTH_STENCIL attachment. And yes, on the completeness check I got:
GL_FRAMEBUFFER_INCOMPLETE_LAYER_TARGETS_ARB
Next, I tried instead:
glTexImage3D(GL_TEXTURE_3D, 0, GL_DEPTH24_STENCIL8, width, height, depth,
             0, GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, NULL);
which threw:
INVALID_OPERATION
Now, instead of the GL_TEXTURE_3D target, I tried GL_TEXTURE_2D_ARRAY, which finally caused the FBO to be complete. So, while I would still like to understand why GL_TEXTURE_3D causes INVALID_OPERATION (feel free to post an answer), this change has solved the problem.
Based on the spec, GL_FRAMEBUFFER_INCOMPLETE_LAYER_TARGETS means (quoted from OpenGL 4.5 spec):
If any framebuffer attachment is layered, all populated attachments must be layered. Additionally, all populated color attachments must be from textures of the same target (three-dimensional, one- or two-dimensional array, cube map, or cube map array textures).
Based on the first part of this, your initial attempt of using a single layer stencil attachment with a layered 3D texture color attachment was clearly illegal.
The second part sounds somewhat unclear to me. Since it only talks about "color attachments", it suggests that using a GL_TEXTURE_3D color attachment and a GL_TEXTURE_2D_ARRAY stencil attachment would be legal. But I'm not convinced that this is actually the intention. Unfortunately I couldn't find additional confirmation of this in the rest of the spec.
Using GL_TEXTURE_3D for a stencil or depth/stencil texture is a non-starter. There's no such thing as a 3D stencil texture. From the 4.5 spec, pages 191-192 in section "8.5 Texture Image Specification":
Textures with a base internal format of DEPTH_COMPONENT, DEPTH_STENCIL, or STENCIL_INDEX are supported by texture image specification commands only if target is TEXTURE_1D, TEXTURE_2D, TEXTURE_2D_MULTISAMPLE, TEXTURE_1D_ARRAY, TEXTURE_2D_ARRAY, TEXTURE_2D_MULTISAMPLE_ARRAY, TEXTURE_RECTANGLE, TEXTURE_CUBE_MAP, TEXTURE_CUBE_MAP_ARRAY, PROXY_TEXTURE_1D, PROXY_TEXTURE_2D, PROXY_TEXTURE_2D_MULTISAMPLE, PROXY_TEXTURE_1D_ARRAY, PROXY_TEXTURE_2D_ARRAY, PROXY_TEXTURE_2D_MULTISAMPLE_ARRAY, PROXY_TEXTURE_RECTANGLE, PROXY_TEXTURE_CUBE_MAP, or PROXY_TEXTURE_CUBE_MAP_ARRAY.
That's a long list, but TEXTURE_3D is not in it.
Based on this, I believe that what you found to be working is the only option. You need to use textures with target GL_TEXTURE_2D_ARRAY for both the color and stencil attachment.
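A hedged sketch of that working setup, reusing the width/height/depth names from the question:
GLuint colorTex, depthStencilTex;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D_ARRAY, colorTex);
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8, width, height, depth,
             0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glGenTextures(1, &depthStencilTex);
glBindTexture(GL_TEXTURE_2D_ARRAY, depthStencilTex);
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_DEPTH24_STENCIL8, width, height, depth,
             0, GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, NULL);
// glFramebufferTexture without a layer index attaches the whole array,
// which is what makes the attachment layered.
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, colorTex, 0);
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, depthStencilTex, 0);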

OpenGL depth buffer to CUDA

I'm new to OpenGL. My aim is to get the depth buffer into an FBO so that I can transfer it to CUDA without using glReadPixels.
Here is what I've already done:
void make_Fbo()
{
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER,
                              GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER,
                              rb); // the renderbuffer, not the FBO, is attached here
    check_gl_error("make_fbo");
}
void make_render_buffer()
{
    glGenRenderbuffers(1, &rb);
    glBindRenderbuffer(GL_RENDERBUFFER, rb);
    glRenderbufferStorage(GL_RENDERBUFFER,
                          GL_DEPTH_COMPONENT,
                          win.width,
                          win.height);
    check_gl_error("make render_buffer");
}
This code creates my FBO with correct depth values.
A new problem appears now: according to the article "Fast Triangle Rasterization using Irregular Z-Buffer on CUDA", it is not possible to access the depth buffer attached to the FBO from CUDA.
Here is the quote from the article:
Textures or render buffers can be attached onto the depth
attachment point of FBOs to accommodate the depth values. However, as far as
we have tested, they cannot be accessed by CUDA kernels. [...]
we managed to use the color attachment points on the FBO. Apparently
in this case we have to write a simple shader program to dump the depth values onto
the color channels of the frame buffer. According to the GLSL specification [KBR06],
the special variable gl_FragCoord [...]
Are these statements still true?
What do you advise to dump the depth buffer to the color channels? To a texture?
Well, yes and no. The problem is that you can't access resources in CUDA while they are bound to the FBO.
As I understand it, with cudaGraphicsGLRegisterImage() you enable CUDA access to any type of image data. So if you use a depth buffer that is a render target and is NOT bound to the FBO, you can use it.
Here's the CUDA API information:
https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__OPENGL.html#group__CUDART__OPENGL_1g80d12187ae7590807c7676697d9fe03d
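As a rough sketch of the register/map/unmap cycle: depthCopyTex here stands for a hypothetical color texture (e.g. GL_R32F) into which the depth values were dumped, since, as far as I can tell, the supported-format list for cudaGraphicsGLRegisterImage does not include depth formats:
cudaGraphicsResource* res = NULL;
// once, at setup time:
cudaGraphicsGLRegisterImage(&res, depthCopyTex, GL_TEXTURE_2D,
                            cudaGraphicsRegisterFlagsReadOnly);
// each frame, after OpenGL has finished rendering into the texture:
cudaGraphicsMapResources(1, &res, 0);
cudaArray_t arr;
cudaGraphicsSubResourceGetMappedArray(&arr, res, 0, 0);
// ... launch kernels that read from arr ...
cudaGraphicsUnmapResources(1, &res, 0); // hand the resource back to OpenGL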
And in this article they explain that you should round-robin or double-buffer the depth-buffer, or copy the data before using it in CUDA (but then you more or less void the whole idea of interop).
http://codekea.com/xLj7d1ya5gD6/modifying-opengl-fbo-texture-attachment-in-cuda.html
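For the "dump the depth values onto the color channels" step the article describes, a minimal fragment shader sketch, assuming a single-channel float color attachment such as GL_R32F:
#version 330
out float depthOut; // written into the R32F color attachment
void main()
{
    depthOut = gl_FragCoord.z; // window-space depth in [0, 1]
}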

Transform feedback without a framebuffer?

I'm interested in using a vertex shader to process a buffer without producing any rendered output. Here's the relevant snippet:
glUseProgram(program);
GLuint tfOutputBuffer;
glGenBuffers(1, &tfOutputBuffer);
glBindBuffer(GL_ARRAY_BUFFER, tfOutputBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(double)*4*3, NULL, GL_STATIC_READ);
glEnable(GL_RASTERIZER_DISCARD_EXT);
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, tfOutputBuffer);
glBeginTransformFeedbackEXT(GL_TRIANGLES);
glBindBuffer(GL_ARRAY_BUFFER, positionBuffer);
glEnableVertexAttribArray(positionAttribute);
glVertexAttribPointer(positionAttribute, 4, GL_FLOAT, GL_FALSE, sizeof(double)*4, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, elementBuffer);
glDrawElements(GL_TRIANGLES, 1, GL_UNSIGNED_INT, 0);
This works fine up until the glDrawElements() call, which results in GL_INVALID_FRAMEBUFFER_OPERATION. And glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) returns GL_FRAMEBUFFER_UNDEFINED.
I presume this is because my GL context does not have a default framebuffer, and I have not bound another FBO. But, since I don't care about the rendered output and I've enabled GL_RASTERIZER_DISCARD_EXT, I thought a framebuffer shouldn't be necessary.
So, is there a way to use transform feedback without a framebuffer, or do I need to generate and bind a framebuffer even though I don't care about its contents?
This is actually perfectly valid behavior, as-per the specification.
OpenGL 4.4 Core Specification - 9.4.4 Effects of Framebuffer Completeness on Framebuffer Operations
A GL_INVALID_FRAMEBUFFER_OPERATION error is generated by attempts to render to or read from a framebuffer which is not framebuffer complete. This error is generated regardless of whether fragments are actually read from or written to the framebuffer. For example, it is generated when a rendering command is called and the framebuffer is incomplete, even if GL_RASTERIZER_DISCARD is enabled.
What you need to do to work around this is create an FBO with a 1-pixel color attachment and bind that. You must have a complete FBO bound or you get GL_INVALID_FRAMEBUFFER_OPERATION, and one of the rules for completeness is that at least one complete image is attached.
OpenGL 4.3 actually allows you to skirt around this issue by defining an FBO with no attachments of any sort (see: GL_ARB_framebuffer_no_attachments). However, because you are using the EXT form of FBOs and Transform Feedback, I doubt you have a 4.3 implementation.
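For reference, a sketch of the 4.3 route, in case an implementation is available (the 1x1 default size is arbitrary):
GLuint emptyFbo;
glGenFramebuffers(1, &emptyFbo);
glBindFramebuffer(GL_FRAMEBUFFER, emptyFbo);
glFramebufferParameteri(GL_FRAMEBUFFER, GL_FRAMEBUFFER_DEFAULT_WIDTH, 1);
glFramebufferParameteri(GL_FRAMEBUFFER, GL_FRAMEBUFFER_DEFAULT_HEIGHT, 1);
// The FBO is now complete with no attachments, so transform feedback
// draw calls no longer raise GL_INVALID_FRAMEBUFFER_OPERATION.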

How to read a pixel from a Depth Texture efficiently?

I have an OpenGL Texture and want to be able to read back a single pixel's value, so I can display it on the screen. If the texture is a regular old RGB texture or the like, this is no problem: I take an empty Framebuffer Object that I have lying around, attach the texture to COLOR0 on the framebuffer and call:
glReadPixels(x, y, 1, 1, GL_RGBA, GL_FLOAT, &c);
Where c is essentially a float[4].
However, when it is a depth texture, I have to go down a different code path, setting the DEPTH attachment instead of the COLOR0, and calling:
glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &c);
where c is a float. This works fine on my Windows 7 computer running an NVIDIA GeForce 580, but causes an error on my old 2008 MacBook Pro. Specifically, after attaching the depth texture to the framebuffer, if I call glCheckFramebufferStatus(GL_READ_FRAMEBUFFER), I get GL_FRAMEBUFFER_INCOMPLETE_READ_BUFFER.
After searching the OpenGL documentation, I found this line, which seems to imply that OpenGL does not support reading from a depth component of a framebuffer if there is no color attachment:
GL_FRAMEBUFFER_INCOMPLETE_READ_BUFFER is returned if GL_READ_BUFFER is not GL_NONE
and the value of GL_FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE is GL_NONE for the color
attachment point named by GL_READ_BUFFER.
Sure enough, if I create a temporary color texture and bind it to COLOR0, no errors occur when I readPixels from the depth texture.
Now creating a temporary texture every time (EDIT: or even once and having GPU memory tied up by it) through this code is annoying and potentially slow, so I was wondering if anyone knew of an alternative way to read a single pixel from a depth texture? (Of course if there is no better way I will keep around one texture to resize when needed and use only that for the temporary color attachment, but this seems rather roundabout).
The answer is contained in your error message:
if GL_READ_BUFFER is not GL_NONE
So do that; set the read buffer to GL_NONE. With glReadBuffer. Like this:
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo); //where fbo is your FBO.
glReadBuffer(GL_NONE);
That way, the FBO is properly complete, even though it only has a depth texture.
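Putting it together, a hedged sketch of the whole read path (fbo and depthTex are assumed to already exist):
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depthTex, 0);
glReadBuffer(GL_NONE); // no color buffer will be read, so none is required
float c;
glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &c);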

Using a framebuffer as a vertex buffer without moving the data to the CPU

In OpenGL, is there a way to use framebuffer data as vertex data without moving the data through the CPU? Ideally, a framebuffer object could be recast as a vertex buffer object directly on the GPU. I'd like to use the fragment shader to generate a mesh and then render that mesh.
There are a couple of ways you could go about this; the first has already been mentioned by spudd86 (except you need to use GL_PIXEL_PACK_BUFFER; that's the one that's written to by glReadPixels).
The other is to use a framebuffer object and then read from its texture in your vertex shader, mapping from a vertex id (that you would have to manage) to a texture location. If this is a one-time operation though I'd go with copying it over to a PBO and then binding into GL_ARRAY_BUFFER and then just using it as a VBO.
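A hedged GLSL sketch of that vertex-shader route, using gl_VertexID (GL 3.0+); positionTex and texWidth are names invented here:
#version 330
uniform sampler2D positionTex; // the FBO's color attachment
uniform int texWidth;          // width of that texture, set by the application
void main()
{
    ivec2 texel = ivec2(gl_VertexID % texWidth, gl_VertexID / texWidth);
    gl_Position = texelFetch(positionTex, texel, 0);
}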
Just use the functions to do the copy and let the driver figure out how to do what you want, chances are as long as you copy directly into the vertex buffer it won't actually do a copy but will just make your VBO a reference to the data.
The main thing to be careful of is that some drivers may not like you using something you told it was for vertex data with an operation for pixel data...
Edit: probably something like the following may or may not work... (IIRC the spec says it should)
GLuint vbo;
glGenBuffersARB(1, &vbo);
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, vbo);
// allocate storage first; use appropriate pixel formats and size
glBufferDataARB(GL_PIXEL_PACK_BUFFER_ARB, w * h * 4, NULL, GL_DYNAMIC_DRAW_ARB);
glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, 0);
glEnableClientState(GL_VERTEX_ARRAY);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
// draw stuff
Edited to correct buffer bindings, thanks Phineas.
The specification for GL_pixel_buffer_object gives an example demonstrating how to render to a vertex array under "Usage Examples".
The following extensions are helpful for solving this problem:
GL_texture_float - floating point internal formats to use for the color buffer attachment
GL_color_buffer_float - disable automatic clamping for fragment colors and glReadPixels
GL_pixel_buffer_object - operations for transferring pixel data to buffer objects
If you can do your work in a vertex/geometry shader, you can use transform feedback to write directly into a buffer object. This also gives you the option of skipping the rasterizer and fragment shading.
Transform feedback is available as EXT_transform_feedback, and in core since GL 3.0 (plus the ARB equivalent).
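A minimal core-profile sketch of that route (outVarying, outputBuffer, and vertexCount are placeholders):
const GLchar* varyings[] = { "outVarying" }; // a vertex shader output
glTransformFeedbackVaryings(program, 1, varyings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(program); // the program must be (re)linked after this call
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, outputBuffer);
glEnable(GL_RASTERIZER_DISCARD); // skip rasterization and fragment shading
glBeginTransformFeedback(GL_POINTS);
glDrawArrays(GL_POINTS, 0, vertexCount);
glEndTransformFeedback();
glDisable(GL_RASTERIZER_DISCARD);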