I've been trying to wrap my head around transferring data from my FBO to a PBO, then to a texture, so I can render it onto a quad:
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorAttachment0, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthAttachment);
And my PBO:
glGenBuffers(1, &bufferObj);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, bufferObj);
glBufferData(GL_PIXEL_UNPACK_BUFFER, 800 * 800 * 4, NULL, GL_DYNAMIC_DRAW);
But then I try to transfer it like this (which is where the issue is, I'm guessing):
glReadBuffer(GL_COLOR_ATTACHMENT0);
glBindBuffer(GL_PIXEL_PACK_BUFFER, bufferObj);
glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
glReadBuffer(GL_NONE);
I have code that renders a PBO from CUDA, transfers it to a texture, and displays it correctly.
And I can render out my FBO by rendering its texture to a quad as well.
The issue seems to be the transfer from the FBO to the PBO: when I swap in the FBO->PBO readback code above, the result no longer displays correctly.
Some things to try:
Make sure that your FBO is still bound when you do the glReadPixels.
Check that glGetError isn't reporting any errors.
Try reading into a regular C array instead of binding the PBO, and see if it gets filled with correct values (see the sketch below).
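A minimal sketch of that last check, reusing framebuffer, WIDTH, and HEIGHT from the question (assumes <vector> and <cstdio> are included):
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);  // make sure the FBO is the read target
glReadBuffer(GL_COLOR_ATTACHMENT0);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);           // no PBO bound: read straight into client memory
std::vector<unsigned char> pixels(WIDTH * HEIGHT * 4);
glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
GLenum err = glGetError();
if (err != GL_NO_ERROR)
    fprintf(stderr, "glReadPixels error: 0x%x\n", err);
If the array contains the expected colors, the FBO and readback are fine and the problem is in the PBO path.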
I'm using OpenGL to draw objects and also having my fragment shader output a scalar integer ID. For drawing the objects I'm using multisampling for anti-aliasing, so when I create the buffer for the integer ID, I have to create it as an MSAA buffer as well for the FBO to be complete:
glBindRenderbuffer(GL_RENDERBUFFER, rboColorId);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, msaaSamples, GL_RGBA8,
cam.getWidth(), cam.getHeight());
glBindRenderbuffer(GL_RENDERBUFFER, rboDepthId);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, msaaSamples, GL_DEPTH_COMPONENT,
cam.getWidth(), cam.getHeight());
glBindRenderbuffer(GL_RENDERBUFFER, rboObjId);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, msaaSamples, GL_R32UI,
cam.getWidth(), cam.getHeight());
glBindRenderbuffer(GL_RENDERBUFFER, rboColorNoMsaaId);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8,
cam.getWidth(), cam.getHeight());
glBindRenderbuffer(GL_RENDERBUFFER, rboObjNoMsaaId);
glRenderbufferStorage(GL_RENDERBUFFER, GL_R32UI,
cam.getWidth(), cam.getHeight());
glBindRenderbuffer(GL_RENDERBUFFER, 0);
glBindFramebuffer(GL_FRAMEBUFFER, fboId);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rboColorId);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rboDepthId);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_RENDERBUFFER, rboObjId);
glBindFramebuffer(GL_FRAMEBUFFER, fboNoMsaaId);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rboColorNoMsaaId);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_RENDERBUFFER, rboObjNoMsaaId);
As you can see in the code above, I have two FBOs. The first is MSAA and has a buffer for drawing the scene, a depth buffer, and an integer buffer for the IDs. The second FBO is single-sampled (non-MSAA) and has just the scene draw buffer and the integer buffer. After I draw everything (the fragment shader sets indices for each pixel), I read the integer ID buffer (GL_COLOR_ATTACHMENT1) by first blitting it to the single-sampled FBO so I can glReadPixels from it. In this particular code, I'm just reading the one pixel where the mouse is pointing:
glBindFramebuffer(GL_READ_FRAMEBUFFER, fboId);
glReadBuffer(GL_COLOR_ATTACHMENT1);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboNoMsaaId);
glDrawBuffer(GL_COLOR_ATTACHMENT1);
glBlitFramebuffer(mouse_x_pos, cam.getHeight() - mouse_y_pos, mouse_x_pos+1, cam.getHeight() - mouse_y_pos + 1,
0, 0, 1, 1,
GL_COLOR_BUFFER_BIT, GL_NEAREST);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fboNoMsaaId);
glReadBuffer(GL_COLOR_ATTACHMENT1);
GLuint objectId;
glReadPixels(0, 0, 1, 1, GL_RED_INTEGER, GL_UNSIGNED_INT, &objectId);
My problem is that when I blit, the multiple samples for the pixel I want are interpolated into the single pixel I read. I want that for the color buffer I'm using to draw the scene, but I do not want it for the integer IDs. If I'm reading a pixel that contains fragments with both an ID of 50 and an ID of 100, I want to read either 50 or 100 (I don't care which). But what I get is some value between 50 and 100, like 75, and 75 may actually be the ID of a completely different object, so I don't want that at all.
Is there something I can do to read a single sample for the integer ID instead of the interpolation of multiple samples?
Instead of resolving the multisample texture by blitting, you can implement your own multisample resolve in a render-to-texture pass. You can use a sampler of type sampler2DMS and this texelFetch variant:
gvec4 texelFetch(gsampler2DMS sampler, ivec2 P, int sample);
Here P is the unnormalized 2D texel coordinate, and sample is the ID of the sample. If you really don't care which of the values you get, you can just use sample 0 all the time. But you could also, for example, iterate over all samples and take the value with the most occurrences, or whatever suits your needs.
For this to work, you will have to switch from a renderbuffer for the ID attachment to a multisampled 2D texture.
So basically, you can bind the non-multisampled FBO as the draw FBO, do a standard blit for the depth and color textures, and do a fullscreen render pass with the multisampled ID texture, writing to the non-multisampled ID color attachment.
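For illustration, a minimal resolve fragment shader along those lines; the sampler and output names are hypothetical, and it simply picks sample 0:
#version 330 core
uniform usampler2DMS idTex;   // the multisampled integer ID texture
out uint objectId;            // written to the non-MSAA ID color attachment
void main() {
    // Any single sample avoids cross-sample mixing; sample 0 is the simplest choice.
    objectId = texelFetch(idTex, ivec2(gl_FragCoord.xy), 0).r;
}
Drawing a fullscreen quad with this shader into the non-multisampled FBO then replaces the blit for the ID attachment only.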
I'm trying to implement a program using this classic code:
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
Bind the depth buffer:
glGenRenderbuffers(1, &depthbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, depthbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthbuffer);
Bind several 3D textures:
glGenTextures(targets.size(), textures);
int ix = 0;
for (auto &target : targets) {
    glBindTexture(GL_TEXTURE_3D, textures[ix]);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA32F, width, height, depth, 0, GL_RGBA, GL_FLOAT, NULL);
    glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + ix, textures[ix], 0);
    buffers[ix] = GL_COLOR_ATTACHMENT0 + ix;
    ++ix;
}
glDrawBuffers(targets.size(), buffers);
And it does not work, giving a GL_FRAMEBUFFER_INCOMPLETE_LAYER_TARGETS error. I supposed that, since my textures are layered, the problem could be the non-layered depth buffer. I created it in the same way as for the 3D textures, so I removed the depth buffer. That solved the problem: now my framebuffer is complete, rendering finishes, and so on.
But what if I need to render each layer with a depth buffer attached? Is this functionality supported, and are there some lesser-known API calls to achieve it?
If one attached image is layered, then all attached images must be layered. So if you want to do layered rendering with a depth buffer, the depth image must also be layered.
So instead of using a renderbuffer, you should use a 2D array texture with a depth format.
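A minimal sketch of that, reusing width, height, and depth from the question (GL_DEPTH_COMPONENT24 is one suitable sized depth format):
GLuint depthTex;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D_ARRAY, depthTex);
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_DEPTH_COMPONENT24, width, height, depth, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
// Attaching with glFramebufferTexture (no layer argument) keeps the attachment layered,
// matching the layered color attachments:
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, depthTex, 0);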
I am having problems using explicit multisampling when using multiple rendering targets in OpenGL.
I have 4 render targets (Position, Diffuse + opacity, Normal, Specular + exponent) that are rendered to during the initial geometry pass.
These are all non-multisampled textures attached to a framebuffer object, and then set as the render targets using glDrawBuffers(). This works fine, and I can then sample these textures later to get the information needed for lighting calculations. Lovely.
I now want to try to remove some of the aliasing I am getting, so I started to implement explicit MSAA using multisampled textures. However, when I use multisampled textures as the render targets instead, only the first render target seems to be drawn to and the rest remain blank.
Other than changing how the textures are setup, bound and read in the shaders I haven't changed any other code.
Multisampled textures are attached to the framebuffer object using this code:
for (unsigned int i = 0; i < textureCount; ++i)
{
// Generate texture
GLuint texture;
glGenTextures(1, &texture);
// Bind multisample texture
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, texture);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, msaaSamples, GL_RGBA, width, height, false);
// Set params
glTexParameterf(GL_TEXTURE_2D_MULTISAMPLE, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D_MULTISAMPLE, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// Attach to frame buffer
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D_MULTISAMPLE, texture, 0);
// Unbind
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, 0);
}
A multisampled depth/stencil texture is attached as well:
glGenTextures(1, &depth);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, depth);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, msaaSamples, GL_DEPTH24_STENCIL8, width, height, false);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_TEXTURE_2D_MULTISAMPLE, depth, 0);
The framebuffer is then bound and the draw buffers set:
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
GLuint attachments[4] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2, GL_COLOR_ATTACHMENT3 };
glDrawBuffers(4, attachments);
I then draw the geometry as usual. However, when inspecting with apitrace it is clear that only the first color attachment is being drawn into. This is not the case when I use regular (non-multisampled) textures.
All 4 draw buffers are definitely still being set, however.
glTexParameterf cannot be used with GL_TEXTURE_2D_MULTISAMPLE; multisample textures have no sampler state, so those filter calls generate an error.
Apart from this, I would set the fixedsamplelocations parameter of glTexImage2DMultisample to GL_TRUE, just to be on the safe side regarding hardware compatibility.
And I recommend using a sized internal format like GL_RGBA8 instead of GL_RGBA (I don't know how you managed to get GL_RGBA32F, as listed in the first image, with the unsized GL_RGBA enum...).
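Putting those suggestions together, the attachment loop might look like this (a sketch, keeping the names from the question):
for (unsigned int i = 0; i < textureCount; ++i)
{
    GLuint texture;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, texture);
    // Sized internal format, fixed sample locations, and no glTexParameterf calls
    glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, msaaSamples, GL_RGBA8, width, height, GL_TRUE);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D_MULTISAMPLE, texture, 0);
    glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, 0);
}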
I am trying to get a video frame onto the screen as a 2D texture covering the screen. I must use framebuffers because eventually I want to do the ping-pong rendering technique, but for now I just want to render to the screen through a framebuffer.
This is my setup code:
// Texture setup
int[] text = new int[1];
glGenTextures(1, text, 0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, text[0]);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, 512, 512, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, null);
// Framebuffer setup
int[] fbo = new int[1];
glGenFramebuffers(1, fbo, 0);
glBindFramebuffer(GL_FRAMEBUFFER, fbo[0]);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, text[0], 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
Up to here everything is OK. I didn't include the verification code to keep this short, but right after this I check whether the framebuffer was created correctly, and all is fine.
Now in my render loop I do the following:
// Use the GLSL program
glUseProgram(programHandle);
// Swap to my FBO
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebuffer(GL_FRAMEBUFFER, fbo[0]);
glViewport(0, 0, 512, 512);
// Pass the new image data to the program so the fragment shader processes it.
// Using glTexSubImage2D to speed up
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, text[0]);
glUniform1i(glGetUniformLocation(programHandle, "u_texture"), 0);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 512, 512, GL_LUMINANCE, GL_UNSIGNED_BYTE, NewImageData);
// Draw the quad
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Swap back to the default screen framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, 0);
At this point I get a black screen, and in the log I can see a GL_INVALID_FRAMEBUFFER_OPERATION (1286) error.
I tried putting the glDrawArrays call after the glBindFramebuffer call, but then the application crashes.
Any thoughts? Thanks in advance.
GL_LUMINANCE textures are not color-renderable, which means you can't render to them using an FBO. This problem is solved by the GL_ARB_texture_rg extension, which introduces one- and two-channel texture formats that are renderable.
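A minimal sketch of the fix, keeping the question's code and assuming a single-channel renderable format is available (on OpenGL ES 2.0 these come from the GL_EXT_texture_rg extension):
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, 512, 512, 0, GL_RED, GL_UNSIGNED_BYTE, null);
// ...and the per-frame upload changes to match:
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 512, 512, GL_RED, GL_UNSIGNED_BYTE, NewImageData);
// The fragment shader then reads the former luminance value from the texture's .r channel.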