I'm building a recorder using a BlackMagic DeckLink Card.
The program uses OpenGL to display the frames and FFMPEG to record them (the pixel data is grabbed with glReadPixels()).
I'm setting the viewport in my program to apply an automatic letterbox/pillarbox depending on whether the image/monitor is 16:9 or 4:3.
The problem with that is that when I capture the frames of this viewport, they are of course recorded at the viewport's resolution (i.e. Full HD source -> viewport on a monitor with 1600x1200 resolution -> letterboxed down to 1600x900), so FFMPEG records 1600x1200 frames with black bars at the top/bottom.
Is there any possibility to grab the raw frame before it gets passed through the setViewport function and all the rescaling?
Well, at some point the image is passed to OpenGL. Why don't you just take that data and pass it to FFMPEG directly, instead of doing the lengthy, inefficient and expensive round trip through OpenGL?
If OpenGL is used for realtime colorspace conversion, then I suggest you do that rendering to an FBO with a texture attached at the size of the video resolution, use glReadPixels on that FBO (preferably into a PBO), and finally draw that rendered-to texture onto the main screen at the window resolution.
However, if you can simply feed the raw frames directly to FFMPEG (which can do colorspace conversions as well), I strongly suggest you do that.
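For illustration, here is a minimal sketch of that FBO + PBO route, assuming a Full HD source and desktop GL with GLEW; drawColorConvertedFrame(), sendToFFmpeg() and drawLetterboxedQuad() are hypothetical placeholders for the application's own rendering and encoding code:

```cpp
#include <GL/glew.h>

// Placeholders for the application's own code (not part of any library)
void drawColorConvertedFrame();
void drawLetterboxedQuad();
void sendToFFmpeg(const void* pixels, int w, int h);

const int VIDEO_W = 1920, VIDEO_H = 1080;   // assumed Full HD source

GLuint fbo, videoTex, pbo;

void initCapture()
{
    // Texture that receives the processed frame at video resolution
    glGenTextures(1, &videoTex);
    glBindTexture(GL_TEXTURE_2D, videoTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, VIDEO_W, VIDEO_H, 0,
                 GL_BGRA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    // FBO with that texture as its color attachment
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, videoTex, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);

    // PBO used for the readback
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_PACK_BUFFER, VIDEO_W * VIDEO_H * 4,
                 nullptr, GL_STREAM_READ);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

void renderFrame()
{
    // 1. Render/colorspace-convert into the FBO at the video resolution
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glViewport(0, 0, VIDEO_W, VIDEO_H);
    drawColorConvertedFrame();

    // 2. Read the FBO into the PBO (last argument is an offset, not a pointer)
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    glReadPixels(0, 0, VIDEO_W, VIDEO_H, GL_BGRA, GL_UNSIGNED_BYTE, nullptr);

    // 3. Map the PBO and hand the full-resolution frame to FFMPEG
    if (void* frame = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY)) {
        sendToFFmpeg(frame, VIDEO_W, VIDEO_H);
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

    // 4. Draw the same texture letterboxed into the window as before
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glViewport(0, 150, 1600, 900);   // letterbox on the 1600x1200 monitor
    glBindTexture(GL_TEXTURE_2D, videoTex);
    drawLetterboxedQuad();
}
```

With two PBOs used in rotation, the glMapBuffer() call can read the previous frame while the current one is still being transferred, which keeps the readback asynchronous.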
I can't find a proper description of the texture types. The documentation (https://docs.rs/sdl2/0.34.3/sdl2/render/struct.TextureCreator.html#method.create_texture) mentions static, streaming and target textures, but gives little information on how they differ.
If I want to update the texture completely on each frame (the texture is 100% of the canvas in size), which texture should I use?
It took me a bit of time to understand the difference between them, but:
A static texture is a texture that is rarely changed (like sprites).
A target texture is a texture that can be used as a 'drawing place' (used as a surface for SDL's drawing primitives). It's intended to be updated often.
A streaming texture is a special type of texture that assumes a full update from an external source of data. It was designed for video players and the like (rendering each new frame of the video into the same texture). It's also intended to be updated often.
A streaming texture should be updated with the with_lock method, which takes a closure to perform the update. The closure receives the texture's writable byte array as a parameter.
So the key difference is that 'target' lets you 'draw' on the texture (fill, draw a line, blit, etc.), while 'streaming' lets you update it as a byte array (an even lower level than a pixel array).
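For illustration, a minimal sketch of a per-frame full update of a streaming texture, written against the SDL C API that the Rust sdl2 crate wraps (with_lock is a safe wrapper around SDL_LockTexture/SDL_UnlockTexture); the frame source and the 4-bytes-per-pixel format are assumptions:

```cpp
#include <SDL.h>
#include <cstring>

// Per-frame full update of a streaming texture; frameData is assumed to be
// tightly packed with 4 bytes per pixel (e.g. one decoded video frame).
void uploadAndPresent(SDL_Renderer* renderer, SDL_Texture* tex,
                      const Uint8* frameData, int width, int height)
{
    void* pixels = nullptr;
    int pitch = 0;   // bytes per row of the locked texture (may be padded)

    // Lock the whole texture for writing (rect == nullptr)
    if (SDL_LockTexture(tex, nullptr, &pixels, &pitch) == 0) {
        for (int y = 0; y < height; ++y) {
            std::memcpy(static_cast<Uint8*>(pixels) + y * pitch,
                        frameData + y * width * 4,
                        width * 4);
        }
        SDL_UnlockTexture(tex);   // the written bytes are uploaded here
    }

    SDL_RenderClear(renderer);
    SDL_RenderCopy(renderer, tex, nullptr, nullptr);  // stretch over the canvas
    SDL_RenderPresent(renderer);
}

// The texture itself is created once with streaming access:
//   SDL_Texture* tex = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_ARGB8888,
//                                        SDL_TEXTUREACCESS_STREAMING,
//                                        width, height);
```

So for a full-canvas update on every frame, the streaming texture is the intended tool.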
I am doing video texturing onto a rectangle surface I created. I need to create 2 more rectangles of, say, different sizes and then copy a part of the video texture playing on the 1st surface (for example the middle part of the video) and play it on the newly created surfaces. Is this possible using OpenGL ES? Through my native video surface renderer I can do this and map it to the OGLES application. I was just wondering whether it is possible to do it directly from the OGL app itself, by copying a selected rectangle from one of the video texturing surfaces.
If your texture is full motion video, you should not copy the texture data, because that will be too slow to keep up with video frame rates. You should avoid using glTexImage2D() and instead use the EGL Image Extensions as detailed in my third article here:
http://montgomery1.com/opengl/
But either way, once you have the image in a texture and the texture is bound with glBindTexture(), then any number of rectangles you draw will be textured with that same currently-bound texture, without more copying. These rectangles are actually geometry constructed of triangles and not "surfaces". The framebuffer is the surface. The texture coordinates can be different for each rectangle, which allows you to crop and/or scale the texture mapping uniquely for each.
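A rough OpenGL ES 2.0 sketch of that idea follows; the shader program, its aPosition/aTexCoord attribute locations and the video texture itself are assumed to be set up elsewhere, and the cropped rectangle simply uses texture coordinates covering the middle of the frame:

```cpp
#include <GLES2/gl2.h>

// x, y, s, t interleaved; one triangle strip of 4 vertices per rectangle
static const GLfloat fullQuad[] = {
    //  position       texcoord (whole frame)
    -0.95f, 0.05f,     0.0f, 1.0f,
    -0.05f, 0.05f,     1.0f, 1.0f,
    -0.95f, 0.95f,     0.0f, 0.0f,
    -0.05f, 0.95f,     1.0f, 0.0f,
};

static const GLfloat croppedQuad[] = {
    //  position       texcoord (middle 50% of the frame)
     0.05f, 0.05f,     0.25f, 0.75f,
     0.95f, 0.05f,     0.75f, 0.75f,
     0.05f, 0.95f,     0.25f, 0.25f,
     0.95f, 0.95f,     0.75f, 0.25f,
};

static void drawQuad(GLuint aPosition, GLuint aTexCoord, const GLfloat* verts)
{
    const GLsizei stride = 4 * sizeof(GLfloat);
    glVertexAttribPointer(aPosition, 2, GL_FLOAT, GL_FALSE, stride, verts);
    glVertexAttribPointer(aTexCoord, 2, GL_FLOAT, GL_FALSE, stride, verts + 2);
    glEnableVertexAttribArray(aPosition);
    glEnableVertexAttribArray(aTexCoord);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}

void drawBothRects(GLuint videoTexture, GLuint aPosition, GLuint aTexCoord)
{
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, videoTexture);   // bound once, no copying

    drawQuad(aPosition, aTexCoord, fullQuad);     // full frame
    drawQuad(aPosition, aTexCoord, croppedQuad);  // middle part of the frame
}
```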
I'm doing a little bit of video processing in real time using OpenGL.
I do a render to texture via FBO+RBO and shaders for simple processing on the video frame. Then I use that texture to render (not blit) to the default frame buffer.
Some of my video processing needs to be frame accurate. If I step through the video frame by frame everything looks good; when I play it back at video rate it gets out of sync.
I'm thinking that the texture I'm getting out of the FBO+RBO is not based on the texture I input because of buffering/other issues.
This seems like a relevant question but there is no answer to it yet: double buffering with FBO+RBO and glFinish()
In my case I am using a Qt QGLWidget with the QGL::DoubleBuffer format option.
I need to flush the output of the FBO; alternatively, if I could work out which frame's texture has come out of the FBO, I could compensate for the sync issue.
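For reference, a sketch of what such an explicit flush could look like (a guess at one possible approach, not a confirmed fix); renderProcessingPass() and drawToScreen() stand in for the existing Qt/OpenGL rendering code:

```cpp
#include <GL/glew.h>

// Placeholders for the existing rendering passes
void renderProcessingPass();
void drawToScreen();

void processAndDisplay(GLuint fbo, GLuint processedTex)
{
    // Pass 1: per-frame processing rendered into the FBO
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    renderProcessingPass();

    // Make sure the FBO pass has actually completed before its texture is
    // consumed. glFinish() is the blunt instrument; a fence (GL >= 3.2 or
    // ARB_sync) waits on exactly this point in the command stream.
    GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
    glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT,
                     GLuint64(1000000000));   // wait up to one second
    glDeleteSync(fence);

    // Pass 2: draw the processed texture to the default framebuffer
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glBindTexture(GL_TEXTURE_2D, processedTex);
    drawToScreen();
}
```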
I am investigating a possible method of collision detection for a simple game I am developing, but I am now stuck.
I am trying to load a texture into memory (but not into the framebuffer) and read pixels (specifically, their colour) from it using coordinates. I can read the buffer contents easily and get the colour of pixels at given coordinates, but I cannot work out how to do this on a texture. Is it even possible?
Any help/guidance/what to research or possible functions would be much appreciated.
Note: I am using OpenGL 2.0
OpenGL is not an image manipulation or image access library. It's a drawing API and should be treated as such. Reading back (whole) textures is possible with OpenGL (though not very performant). On OpenGL ES there's no direct way to read texture data back.
You already have the whole texture image in a regular buffer? Good, because that's what you want to operate on anyway. Reading back single pixels for their color is just stupid, because it clogs up the CPU with function call overhead.
1. Decode the PNG into a regular buffer.
2. Create the texture with glGenTextures().
3. Bind the texture with glBindTexture().
4. Load the image into the texture with glTexImage2D().
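A minimal sketch of that workflow, assuming an RGBA decode with 4 bytes per pixel (the PNG decoding step itself is left out):

```cpp
#include <GL/gl.h>
#include <cstdint>
#include <vector>

struct Image {
    int width = 0, height = 0;
    std::vector<uint8_t> rgba;   // decoded pixels, kept for collision lookups
};

// Colour lookup straight from the CPU-side buffer, no OpenGL involved
uint32_t pixelAt(const Image& img, int x, int y)
{
    const uint8_t* p = &img.rgba[(size_t(y) * img.width + x) * 4];
    return (uint32_t(p[0]) << 24) | (uint32_t(p[1]) << 16) |
           (uint32_t(p[2]) << 8)  |  uint32_t(p[3]);        // packed RGBA
}

// One-time upload of the same buffer as a texture for rendering
GLuint createTexture(const Image& img)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);                          // 1. create texture
    glBindTexture(GL_TEXTURE_2D, tex);               // 2. bind it
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,          // 3. load the image data
                 img.width, img.height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, img.rgba.data());
    return tex;
}
```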
In a Qt-based application I want to execute a fragment shader on two textures (both 1000x1000 pixels).
I draw a rectangle and the fragment shader works fine.
But now I want to render the output into the GL_AUX0 framebuffer so that the result can be read back and saved to a file.
Unfortunately, if the window size is less than 1000x1000 pixels, the output is not correct: only the window-sized area is rendered into the framebuffer.
How can I run the shader over the whole 1000x1000 texture, independent of the window size?
The recommended way to do off-screen processing is to use Framebuffer Objects (FBO). These buffers act similarly to the render buffers you already know, but are not constrained by the window resolution or color depth. You can use the GPGPU Framebuffer Object Class to hide the low-level OpenGL commands and use the FBO right away. If you prefer doing this on your own, have a look at the extension specification.
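For illustration, a minimal sketch of that FBO route for the 1000x1000 case; the shader program, the two input textures and drawFullScreenQuad() are assumed to exist elsewhere:

```cpp
#include <GL/glew.h>
#include <vector>

// Placeholder: draws the quad that runs the fragment shader over both inputs
void drawFullScreenQuad();

const int TEX_W = 1000, TEX_H = 1000;

GLuint renderToTextureAndReadBack(std::vector<unsigned char>& outPixels)
{
    GLuint fbo = 0, resultTex = 0;

    // Color attachment the fragment shader writes into
    glGenTextures(1, &resultTex);
    glBindTexture(GL_TEXTURE_2D, resultTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, TEX_W, TEX_H, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, resultTex, 0);

    // The viewport must match the attachment, not the window
    glViewport(0, 0, TEX_W, TEX_H);
    drawFullScreenQuad();

    // Read back the full 1000x1000 result for saving to a file
    outPixels.resize(size_t(TEX_W) * TEX_H * 4);
    glReadPixels(0, 0, TEX_W, TEX_H, GL_RGBA, GL_UNSIGNED_BYTE,
                 outPixels.data());

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glDeleteFramebuffers(1, &fbo);
    return resultTex;
}
```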