I'm doing a little bit of video processing in real time using OpenGL.
I do a render to texture via FBO+RBO and shaders for simple processing on the video frame. Then I use that texture to render (not blit) to the default frame buffer.
Some of my video processing needs to be frame accurate. If I step through the video frame by frame, everything looks good; when I play it back at video rate, it gets out of sync.
I'm thinking that the texture I'm getting out of the FBO+RBO is not based on the texture I input because of buffering/other issues.
This seems like a relevant question but there is no answer to it yet: double buffering with FBO+RBO and glFinish()
In my case I am using a Qt QGLWidget with the QGL::DoubleBuffer format option.
I need to flush the output of the FBO; alternatively, if I could work out which frame the texture coming out of the FBO belongs to, I could compensate for the sync issue.
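Something along these lines is what I mean by flushing; a sketch only, where processFrame()/drawToScreen() stand in for my own passes (glFinish() is the blunt approach mentioned in the linked question, a GLsync fence would be the finer-grained alternative on GL 3.2+):

```cpp
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
processFrame();                 // render the incoming video frame into the FBO texture

// the blunt version of "flushing the FBO": wait until the GPU has finished
// every queued command before the result texture is used
glFinish();

glBindFramebuffer(GL_FRAMEBUFFER, 0);
drawToScreen();                 // draw the processed texture to the default framebuffer

// (glFenceSync/glClientWaitSync would wait on just this pass instead of everything)
```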
The big picture: I'm writing a renderer for volumetric models using a splatting approach (with C++, OpenGL and SDL2). I've got a multi-resolution data structure (an octree). While the camera is moving, rendering is done at a resolution that runs in real time. As soon as the camera stands still, rendering is done at higher resolutions (= iterative refinement).
The problem: Since rendering during refinement can take multiple seconds, I need to cancel it as soon as the user decides to change the camera position. That's not a problem for the color buffer: I use double buffering and simply don't swap it. But I have to clear the depth buffer before rendering, so when I cancel the refinement run, the information in the depth buffer is lost. The thing is, I need that depth information in another part of my renderer.
My question: What is the best strategy in this case? Back up the depth buffer? Or is there a way to do depth double buffering out of the box using OpenGL and SDL2?
You can render the depth information into a texture attached to a framebuffer object to back it up:
http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-14-render-to-texture/
This way you can implement the depth double buffering yourself; a rough sketch of the setup is below.
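A minimal sketch of that idea in plain OpenGL (placeholder names, error checking omitted), assuming you attach the depth buffer as a texture: render the refinement pass into an FBO whose depth attachment is a texture, so the depth values survive even when the pass is cancelled and can be reused elsewhere.

```cpp
GLuint fbo, colorTex, depthTex;
glGenFramebuffers(1, &fbo);
glGenTextures(1, &colorTex);
glGenTextures(1, &depthTex);

// color attachment
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// depth attachment as a *texture*, so it can be sampled/kept after cancelling
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,  GL_TEXTURE_2D, depthTex, 0);
// check glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE here

// Render the refinement pass into this FBO. If the user moves the camera and
// the pass is cancelled, depthTex still holds the last complete depth buffer.
```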
I'm building a recorder using a BlackMagic DeckLink Card.
The program is using OpenGL to display the frames and FFmpeg (fed via glReadPixels()) to record them.
I'm setting the viewport in my program to apply an automatic letterbox/pillarbox depending on whether the image/monitor is 16:9 or 4:3.
The problem with that is that when I capture these frames from this viewport, they are of course recorded at the resolution my viewport has (i.e. Full HD source -> viewport on a monitor with 1600x1200 resolution -> letterboxed down to 1600x900), so FFmpeg records those 1600x1200 frames with black bars at the top/bottom.
Is there any possibility to grab the raw frame before it gets passed through the setViewport function and all the rescaling?
Well, at some point the image is passed to OpenGL. Why don't you just take that data and pass it to FFmpeg directly, instead of doing the lengthy, inefficient and expensive round trip through OpenGL?
If OpenGL is used for realtime colorspace conversion, then I suggest you do that rendering to an FBO with a texture attached at the size of the video resolution, use glReadPixels on that FBO (preferably into a PBO), and finally draw that rendered-to texture onto the main screen at the window resolution.
However, if you can simply feed the raw frames directly to FFmpeg (which can do colorspace conversions as well), I strongly suggest you do that.
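To make the FBO suggestion concrete, a rough sketch (videoFbo, renderColorConvertedFrame() and drawFullscreenQuad() are placeholder names, not from the question; assumes a context with PBO support):

```cpp
// render the colour-converted frame into an FBO sized to the *video* resolution
glBindFramebuffer(GL_FRAMEBUFFER, videoFbo);
glViewport(0, 0, videoWidth, videoHeight);        // viewport of the FBO, not the window
renderColorConvertedFrame();                      // the existing shader pass

// read back at full video resolution; going through a PBO keeps the copy off the CPU path
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glReadPixels(0, 0, videoWidth, videoHeight, GL_BGRA, GL_UNSIGNED_BYTE, nullptr);
void* frame = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
// hand `frame` (videoWidth x videoHeight) to FFmpeg here
// (mapping right away still synchronises; map one frame later for fully async readback)
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

// only now draw the FBO texture letterboxed into the window
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(letterboxX, letterboxY, letterboxW, letterboxH);
drawFullscreenQuad(videoTexture);
```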
I am basically trying to do something with the default frame buffer pixmap. I wish to blur it when somebody pauses the game. My problem is that even though I am using a separate thread for the whole blur operation, the method ScreenUtils.getFrameBufferPixmap has to be called on the rendering thread, and this method takes at least 1 second to return even on a Nexus 5. Calling the method on my blur-processing thread is not possible, as there is no GL context available on any thread other than the rendering thread.
Is there any solution for eliminating the stall?
What you're trying to do: take a screenshot, modify it on the CPU and upload it back to the GPU. There are 3 problems with this approach.
1. Grabbing the pixels takes a lot of time.
2. Blurring can be executed independently for each pixel, so there is no point doing it on the CPU; the GPU can do it in the blink of an eye.
3. Uploading the texture back still takes some time.
The correct approach is: instead of rendering everything to the screen, render it to an offscreen texture (see offscreen rendering tutorials). Next, draw this texture on a quad the size of your screen, but while drawing, use a blur shader. There are a number of example blur shaders available; it should basically sample the surroundings of the target pixel and render their average.
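For illustration, this is roughly the kind of blur fragment shader meant here: a simple 3x3 box blur, shown as a C++ string constant. The varying/uniform names follow libgdx's default SpriteBatch shader, and u_texelSize is an extra uniform you would have to set yourself (1.0 / texture size).

```cpp
static const char* kBlurFragmentShader = R"(
#ifdef GL_ES
precision mediump float;
#endif
varying vec2 v_texCoords;        // from the default vertex shader
uniform sampler2D u_texture;     // the offscreen colour texture
uniform vec2 u_texelSize;        // 1.0 / texture resolution

void main() {
    // 3x3 box blur: average the pixel and its 8 neighbours
    vec4 sum = vec4(0.0);
    for (int x = -1; x <= 1; x++)
        for (int y = -1; y <= 1; y++)
            sum += texture2D(u_texture, v_texCoords + vec2(x, y) * u_texelSize);
    gl_FragColor = sum / 9.0;
}
)";
```

Draw the offscreen texture on a fullscreen quad with this shader bound and you get the blurred pause screen without any readback.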
In the source for ScreenUtils.java you can see that getFrameBufferPixmap is basically a wrapper around OpenGL's glReadPixels. There isn't too much you can do to improve the Java or libgdx wrapper. This is not the direction OpenGL is optimized for (it's good at pushing data up to the GPU, not at pulling data off).
You might be better off re-rendering your current screen to a (smaller, off-screen) FrameBuffer, and then pulling that down. You could use the GPU to do the blurring this way, too.
I believe the screen's format (e.g., not RGBA8888) may have an impact on the read performance.
This isn't Libgdx-specific, so any tips or suggestions for OpenGL in general like Making glReadPixel() run faster should apply.
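The usual trick from those glReadPixels tips is asynchronous readback through two ping-ponged pixel-pack buffers, so the copy started in one frame is mapped only in the next and the CPU never stalls. A sketch in plain GL (requires desktop GL or GL ES 3.0; plain ES 2.0 has no PBOs at all, which is part of why the call is so slow there; none of this is libgdx API):

```cpp
// one-time setup
GLuint pbos[2];
int writeIdx = 0;
glGenBuffers(2, pbos);
for (int i = 0; i < 2; ++i) {
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbos[i]);
    glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, nullptr, GL_STREAM_READ);
}

// every frame
int readIdx = 1 - writeIdx;

glBindBuffer(GL_PIXEL_PACK_BUFFER, pbos[writeIdx]);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);   // async copy into the PBO

glBindBuffer(GL_PIXEL_PACK_BUFFER, pbos[readIdx]);
void* pixels = glMapBufferRange(GL_PIXEL_PACK_BUFFER, 0, width * height * 4, GL_MAP_READ_BIT);
if (pixels) {
    // previous frame's pixels, available without stalling this frame
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

writeIdx = readIdx;
```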
I am investigating a possible method of collision detection for a simple game I am developing but I am now stuck.
I am trying to load a texture into memory, but not into the frame buffer, and read pixels (specifically, their colour) from it using coordinates. I can read the buffer contents easily and get the colour of pixels at given coordinates, but I cannot work out how to do this on a texture. Is it even possible?
Any help/guidance/what to research or possible functions would be much appreciated.
Note: I am using OpenGL 2.0
OpenGL is not an image manipulation or image access library. It's a drawing API and should be treated as such. Reading back (whole) textures is possible with desktop OpenGL (though not very performant); on OpenGL ES there's no direct way to read texture data.
You already have the whole texture image in a regular buffer? Good, because that's what you want to operate on anyway. Reading back single pixels for their color is just stupid, because it clogs up the CPU with function call overhead. Keep the decoded image on the CPU side and do your color lookups there; the upload path is simply:
decode the PNG into a regular buffer;
create a texture with glGenTextures;
bind it with glBindTexture;
upload the image with glTexImage2D.
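A sketch of those steps plus the CPU-side color lookup for the collision test; decodePNG() is a stand-in for whatever PNG loader you use (libpng, stb_image, ...), not a GL call:

```cpp
#include <vector>

// hypothetical loader: fills width/height and returns tightly packed RGBA bytes
std::vector<unsigned char> decodePNG(const char* path, int* width, int* height);

void loadCollisionTexture()
{
    int width = 0, height = 0;
    std::vector<unsigned char> pixels = decodePNG("collision_map.png", &width, &height);

    // upload a copy to OpenGL purely for drawing
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

    // ...but keep `pixels` around and read colors from this CPU-side copy,
    // never from the GPU
    auto colorAt = [&](int x, int y, int channel /*0=R,1=G,2=B,3=A*/) {
        return pixels[(y * width + x) * 4 + channel];
    };
    // e.g. treat the spot as solid if colorAt(x, y, 3) > 0 (non-transparent)
}
```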
I would like to see an example of rendering with nVidia Cg to an offscreen frame buffer object.
The computers I have access to have graphics cards but no monitors (or X server). So I want to render my stuff and write it out as images on disk. The graphics cards are GTX285.
You need to create an off-screen buffer and render to it the same way as you would render to a window.
See here for an example (but without Cg):
http://www.mesa3d.org/brianp/sig97/offscrn.htm
Since you have a Cg shader, just enable it the same way as you would when rendering to a window.
EDIT:
For an FBO example, take a look here:
http://www.songho.ca/opengl/gl_fbo.html
but note that FBOs are not supported by all graphics cards.
You could also render to a texture and then copy the texture to main memory, but that is not very good performance-wise.
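For the readback end of either approach, a minimal sketch in plain GL (no Cg-specific calls; fbo, width and height are assumed to be set up already, and creating a GL context without an X server is a separate problem):

```cpp
#include <fstream>
#include <vector>

// after rendering into the FBO with the Cg shader enabled:
std::vector<unsigned char> pixels(width * height * 3);

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glPixelStorei(GL_PACK_ALIGNMENT, 1);              // avoid row padding for RGB data
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());

// dump as a binary PPM; OpenGL rows start at the bottom, so write them flipped
std::ofstream out("frame.ppm", std::ios::binary);
out << "P6\n" << width << " " << height << "\n255\n";
for (int y = height - 1; y >= 0; --y)
    out.write(reinterpret_cast<const char*>(&pixels[y * width * 3]), width * 3);
```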