I'm trying to create an application that uses DirectX 11 to output to a swap chain and to a bytemap at the same time. I call ID3D11DeviceContext::CopyResource to copy from the back buffer texture into a staging texture, and ID3D11DeviceContext::Map to map the staging texture to memory so I can read from it. This works fine in my single-frame test.
Do I need to call both methods for each frame or once before rendering any frames?
After calling IDXGISwapChain::Present, the swap chain's back buffer is a new buffer. If the scene changes each frame, the resource needs to be copied each frame. (Explained by Chuck in the comment above.)
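A minimal per-frame sketch of that sequence (the function name ReadBackFrame and the pre-created staging texture are assumptions for illustration, not part of the original question):

#include <d3d11.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Called once per frame. swapChain, context and stagingTex (a texture created
// with D3D11_USAGE_STAGING and D3D11_CPU_ACCESS_READ, matching the back
// buffer's size and format) are assumed to have been created elsewhere.
void ReadBackFrame(IDXGISwapChain* swapChain,
                   ID3D11DeviceContext* context,
                   ID3D11Texture2D* stagingTex)
{
    ComPtr<ID3D11Texture2D> backBuffer;
    if (FAILED(swapChain->GetBuffer(0, IID_PPV_ARGS(&backBuffer))))
        return;

    // Copy the current back buffer into the CPU-readable staging texture.
    context->CopyResource(stagingTex, backBuffer.Get());

    D3D11_MAPPED_SUBRESOURCE mapped = {};
    if (SUCCEEDED(context->Map(stagingTex, 0, D3D11_MAP_READ, 0, &mapped)))
    {
        // mapped.pData points at the pixels, mapped.RowPitch is the row stride;
        // copy them into your bytemap here.
        context->Unmap(stagingTex, 0);
    }
}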
I have the following OpenGL setup for troubleshooting frame buffer issues:
I render a cube into a frame buffer.
I use the target texture from this frame buffer to draw a textured quad, which displays the cube in my viewport.
This works OK when both stages of the process are done in the same context, but breaks if stage 1 is done in a different context to stage 2 (note that these contexts are both shared and both on the same thread). In this case, I only ever see the cube displayed when I resize my viewport (which recreates my frame buffer). The cube is sometimes corrupted or fragmented, which leads me to believe that all I'm seeing is parts of memory that were used by the texture before it was resized, and that nothing is ever displayed properly.
The reason I have to have this setup is that in my actual application I'm using Qt OpenGL widgets, which are forced to use their own individual contexts, so I have to render my scene in its own dedicated context and then copy it to the relevant viewports using shareable OpenGL resources. If I don't do this, I get errors caused by VAOs being bound/used in other contexts.
I've tried the following unsuccessful combinations (where the primary context is where I use the texture to draw the quad, and the secondary context where the "offscreen" rendering of the cube into the frame buffer takes place):
Creating the frame buffer, its render buffer and its texture all in the secondary context.
Creating the frame buffer and the render buffer in the secondary context, creating the texture in the primary context, and then attaching the texture to the frame buffer in the secondary context.
Creating the frame buffer, its render buffer and two separate textures in the secondary context. One of these textures is initially attached to the frame buffer for rendering. Once the rendering to the frame buffer is complete, the first texture is detached and the second one attached. The previously attached texture containing the content of the rendering is used with the quad in the primary context.
In addition, I can't use glBlitFramebuffer() as I don't have access to the frame buffer the QOpenGLWidget uses in the application (as far as I can tell, QOpenGLWidget::defaultFramebufferObject() returns 0, which causes glBlitFramebuffer to give me errors).
The only way I have managed to get the rendering to work is to use a QOpenGLFramebufferObject and call takeTexture() when I want to use the texture with the quad. However, doing it this way means the QOpenGLFramebufferObject creates a new texture for itself each time, and I have to destroy the old one once I've used it, which seems very inefficient.
Is there anything I can do to solve this problem?
I've got a project that uses a texture like that. You need to call glFinish() after drawing and before using the texture from QOpenGLFramebufferObject::texture(). That was our problem on some OSes.
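A minimal sketch of that ordering with two shared contexts; the context/surface objects and the drawCube()/drawTexturedQuad() helpers are placeholders for whatever your application actually uses:

// Secondary (offscreen) context: render the cube into the FBO.
offscreenContext->makeCurrent(offscreenSurface);
fbo->bind();
drawCube();                                    // render the scene
fbo->release();
glFinish();                                    // make sure rendering has finished before
                                               // another context samples the texture

// Primary (widget) context: draw the quad with the shared texture.
widgetContext->makeCurrent(widgetSurface);
glBindTexture(GL_TEXTURE_2D, fbo->texture());  // QOpenGLFramebufferObject::texture()
drawTexturedQuad();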
I have a texture that is generated every frame, and I was wondering the best way to render it in OpenGL. It is simply pixel data generated on the CPU in RGBA8 format (32-bit, 8 bits per component); I need to transfer it to the GPU and draw it on the screen. I remember there being some sort of pixel buffer or frame buffer object that lets you do this without having to generate a new texture every frame with glTexImage2D?
Pixel Buffer Objects do not change the fact that you need to call glTexImage2D (...) to (re-)allocate texture storage and copy your image. PBOs provide a means of asynchronous pixel transfer - basically making it so that a call to glTexImage2D (...) does not have to block until it finishes copying your memory from the client (CPU) to the server (GPU).
The only way this is really going to improve performance for you is if you map the memory in a PBO (Pixel Unpack Buffer) and write to that mapped memory every frame while you are computing the image on the CPU.
While that buffer is bound to GL_PIXEL_UNPACK_BUFFER, call glTexImage2D (...) with NULL for the data parameter and this will upload your texture using memory that is already owned by the server, so it avoids an immediate client->server copy. You might get a marginal improvement in performance by doing this, but do not expect anything huge. It depends on how much work you do between the time you map/unmap the buffer's memory and when you upload the buffer to your texture and use said texture.
Moreover, if you call glTexSubImage2D (...) every frame instead of allocating new texture image storage by calling glTexImage2D (...) (do not worry -- the old storage is reclaimed when no pending command is using it anymore) you may introduce a new source of synchronization overhead that could reduce your performance. What you are looking for here is known as buffer object streaming.
You are more likely to improve performance by using a pixel format that requires no conversion. Newer versions of GL (4.2+) let you query the optimal pixel transfer format using glGetInternalformativ (...).
On a final, mostly pedantic note, glTexImage2D (...) does not generate textures. It allocates storage for their images and optionally transfers pixel data. Texture Objects (and OpenGL objects in general) are actually generated the first time they are bound (e.g. glBindTexture (...)). From that point on, glTexImage2D (...) merely manages the memory belonging to said texture object.
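Putting those pieces together, a rough sketch of the streaming upload path (assuming a current GL context with PBO support; tex and pbo have already been generated, and width, height and fillImage() are placeholders):

const GLsizeiptr size = width * height * 4;                        // RGBA8

// One-time setup: allocate the texture storage and the PBO.
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBufferData(GL_PIXEL_UNPACK_BUFFER, size, NULL, GL_STREAM_DRAW);

// Every frame:
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBufferData(GL_PIXEL_UNPACK_BUFFER, size, NULL, GL_STREAM_DRAW);  // orphan the old storage
void* ptr = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
fillImage(ptr);                                                    // CPU writes the pixels
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

// With an unpack PBO bound, the data parameter is an offset into the buffer, not a pointer.
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);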
It seems as though glTexSubImage2D requires a pointer to the full texture buffer. How does one make partial updates to a texture by only providing a pointer to the update region rather than the whole buffer?
For example, if I want to overlay a second image onto an existing texture, rather than copying the image data into my full texture buffer and then calling glTexSubImage2D, I'd like OpenGL to update the texture without having to copy data between RAM locations.
Where do you get the notion that glTexSubImage2D requires a pointer to the full texture buffer?
From the documentation linked above, it seems to me that the last parameter is a pointer to the buffer containing your new data only. The other parameters are what you use to specify which texture object to update (just an OpenGL identifier, no pointer to the original data required) and the offset and size to copy your new data to.
TL;DR: glTexSubImage2D takes a pointer to your new data and does exactly what you think it should in your example :)
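To make the example concrete, a small sketch (textureId, x, y and overlayPixels are placeholders); the pointer covers only the 64x64 region being updated, not the whole texture:

glBindTexture(GL_TEXTURE_2D, textureId);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);        // the overlay's rows are tightly packed
glTexSubImage2D(GL_TEXTURE_2D, 0,
                x, y,                         // where the overlay goes in the texture
                64, 64,                       // size of the region being replaced
                GL_RGBA, GL_UNSIGNED_BYTE, overlayPixels);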
This is a question about synchronization in OpenGL. And the question is:
At which point in the following (pseudo) code sample does synchronization happen?
// 1.
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
void* ptr = glMapBufferRange(GL_PIXEL_UNPACK_BUFFER, 0, size,
                             GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
memcpy(ptr, newData, size);                        // copy new data to the mapped buffer
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
// 2.
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGBA, GL_UNSIGNED_BYTE, NULL);  // fill the texture from the buffer
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
// 3.
// render with the texture
As far as I know, synchronization happens as soon as 'an object is used'. What isn't clear is whether the texture counts as 'used' when it is filled from the buffer, or only when it is used in rendering.
If glTexSubImage doesn't block, it would be possible to stream texture data in general by using buffer updates in texture update calls.
Florian
Your code can block anywhere between the copy and the glFlush after rendering with the texture (or the frame-buffer swap). It's up to the implementation.
What is the difference between FBO and PBO? Which one should I use for off-screen rendering?
What is the difference between FBO and PBO?
A better question is how are they similar. The only thing that is similar about them is their names.
A Framebuffer Object (note the capitalization: framebuffer is one word, not two) is an object that contains multiple images which can be used as render targets.
A Pixel Buffer Object is:
A Buffer Object. FBOs are not buffer objects. Again: framebuffer is one word.
A buffer object that is used for asynchronous uploading/downloading of pixel data to/from images.
If you want to render to a texture or just a non-screen framebuffer, then you use FBOs. If you're trying to read pixel data back to your application asynchronously, or you're trying to transfer pixel data to OpenGL images asynchronously, then you use PBOs.
They're nothing alike.
An FBO (Framebuffer Object) is a render target you can draw into other than the default framebuffer (the screen).
A PBO (Pixel Buffer Object) allows asynchronous transfers of pixel data to and from the device. This can be helpful to improve overall performance when rendering if you have other things that can be done while waiting for the pixel transfer.
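For off-screen rendering specifically, the FBO setup looks roughly like this (a sketch assuming a current context with framebuffer object support; width and height are placeholders):

GLuint fbo, colorTex, depthRb;

glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRb);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE) {
    // Draw the off-screen scene here; the result lands in colorTex,
    // which can then be sampled like any other texture.
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);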
I would read VBOs, PBOs and FBOs:
Apple has posted two very nice bits of sample code demonstrating PBOs and FBOs. Even though these are Mac-specific, as sample code they're good on any platform because PBOs and FBOs are OpenGL extensions, not windowing system extensions.
So what are all these objects? Here's the situation:
I want to highlight something.
An FBO is not a block of memory. Think of it as a struct of pointers, like the sketch below. You must attach a texture to the FBO before you can use it; once a texture is attached, you can draw into it for offscreen rendering or for a second-pass effect.
struct FBO {
    AttachColor0 *ptr0;
    AttachColor1 *ptr1;
    AttachColor2 *ptr2;
    AttachDepth  *ptr3;
};
A PBO, on the other hand, is a block of memory: a buffer object that holds raw pixel data. Think of it as a malloc of x bytes; you can then copy data from it into a texture/FBO, or copy data back into it.
Why use a PBO?
It gives you an intermediate buffer that interfaces with host memory, so a texture can be uploaded to or downloaded from the host without stalling OpenGL's drawing.
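For the download direction, a rough sketch (width and height are placeholders): with a pack PBO bound, glReadPixels returns without waiting for the data, which is fetched later when the buffer is mapped.

GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, NULL, GL_STREAM_READ);

// Start the transfer; with a pack PBO bound, the last argument is an offset.
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

// ... do other work here while the transfer completes ...

void* data = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if (data) {
    // Use the pixels (e.g. copy them into host memory).
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);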