I want to read pixels from the screen/monitor after the image has been displayed to the user.
Microsoft provides Graphics.CopyFromScreen, which apparently:
Performs a bit-block transfer of color data from the screen to the
drawing surface of the Graphics.
But is this really true? Does it not read the pixels from the frame buffer?
glReadPixels (doc) reads from the frame buffer - is that before or after the screen has been updated (it doesn't specify)?
Would the "MS" equivalent to glReadPixels be to use DirectX's GetBuffer? (such as the answer provided here)
Related
I want to display a 2D array of pixels directly to the screen. The pixel data is not static and changes on user-triggered events like a mouse move. I wish to have a display framebuffer through which I could write directly to the screen.
I have tried to create a texture with glTexImage2D(). I then render this texture to a QUAD. And then I update the texture with glTexSubImage2D() whenever a pixel is modified.
It works!
But I guess this is not the efficient way. glTexSubImage2D copies the whole array, including the unmodified pixels, back to the texture, which is bad performance-wise.
Is there any other way, like having a "display framebuffer" to which I could write only the modified pixels and have the change reflected on the screen?
glBlitFramebuffer is what you want.
Copies a rectangular block of pixels from one frame buffer to another. Can stretch or compress, but doesn't go through shaders and whatnot.
You'll probably also need some combination of glBindFramebuffer, glFramebufferTexture, glReadBuffer to set up the source and destination.
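To make that concrete, here is a minimal sketch of such a blit (assuming an active OpenGL 3.0+ context; `srcTex` is an illustrative name for a texture already holding the pixel data, and `x, y, w, h` describe the dirty region):

```c
/* Attach the source texture to a read framebuffer, then blit a
   sub-rectangle of it into the default (window) framebuffer. */
GLuint readFbo;
glGenFramebuffers(1, &readFbo);
glBindFramebuffer(GL_READ_FRAMEBUFFER, readFbo);
glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, srcTex, 0);
glReadBuffer(GL_COLOR_ATTACHMENT0);

glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);   /* default framebuffer */

/* Copy only the modified region (x, y, w, h) 1:1 to the same spot on screen. */
glBlitFramebuffer(x, y, x + w, y + h,        /* source rectangle */
                  x, y, x + w, y + h,        /* destination rectangle */
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
```

Since source and destination rectangles are the same size here, no filtering actually happens; pass differing rectangles to stretch or compress.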
I'm building a recorder using a BlackMagic DeckLink Card.
The program is using OpenGL to display the Frames and FFMPEG (with the glReadPixels() method) to record them.
I'm setting the viewport in my program to apply an automatic letter-/pillarbox depending on whether the image/monitor is 16:9 or 4:3.
The problem with that is that when I capture these frames from the viewport, they of course get recorded at the viewport's resolution (i.e. Full HD source -> viewport on a monitor with 1600x1200 resolution -> letterboxed down to 1600x900), and so FFMPEG records those 1600x1200 frames with black bars at the top/bottom.
Is there any possibility to grab the raw frame before it gets passed through the setViewport function and all the rescaling stuff?
Well, at some point the image is passed to OpenGL. Why don't you just take that data and pass it to FFMPEG directly, instead of doing the lengthy, inefficient and expensive round trip through OpenGL?
If OpenGL is used for realtime colorspace conversion, then I suggest you do that rendering to an FBO with a texture attached at the size of the video resolution, use glReadPixels on that FBO (preferably into a PBO), and finally draw that rendered-to texture onto the main screen at the window resolution.
However if you can simply feed the raw frames directly to FFMPEG (which can do colorspace conversions as well) I strongly suggest you do that.
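A sketch of the FBO + PBO readback path, assuming an active OpenGL context, an `fbo` with a color texture of the video size (`vidW` x `vidH`) attached, and a hypothetical `encode_frame` hand-off to FFMPEG:

```c
/* Read back a frame rendered into an FBO at video resolution, using a
   pixel buffer object so the transfer can overlap with rendering. */
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, vidW * vidH * 4, NULL, GL_STREAM_READ);

glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
/* With a PBO bound to GL_PIXEL_PACK_BUFFER, the last argument is an
   offset into the PBO, not a client pointer. */
glReadPixels(0, 0, vidW, vidH, GL_RGBA, GL_UNSIGNED_BYTE, 0);

/* Later (ideally a frame or two afterwards, to avoid stalling) map the
   PBO and hand the pixels to the encoder. */
void *frame = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if (frame) {
    /* encode_frame(frame);  -- hypothetical FFMPEG hand-off */
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);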
I'm doing a little bit of video processing in real time using OpenGL.
I do a render to texture via FBO+RBO and shaders for simple processing on the video frame. Then I use that texture to render (not blit) to the default frame buffer.
Some of my video processing needs to be frame accurate. If I step through the video frame by frame everything looks good; when I play it back at video rate it gets out of sync.
I'm thinking that the texture I'm getting out of the FBO+RBO is not based on the texture I input because of buffering/other issues.
This seems like a relevant question but there is no answer to it yet: double buffering with FBO+RBO and glFinish()
In my case I am using a Qt QGLWidget with the QGL::DoubleBuffer format option.
I need to flush the output of FBO; alternatively if I could work out which frame texture has come out of the FBO I can compensate for the sync issue.
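One way to force the FBO pass to finish before its result is consumed is a sync fence (this is a sketch; it assumes OpenGL 3.2+ or ARB_sync, and an active context):

```c
/* Issue a fence after the render-to-texture pass, then block until all
   prior GPU commands, including that pass, have completed. */
GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT,
                 1000000000);  /* timeout: 1 second, in nanoseconds */
glDeleteSync(fence);
/* A plain glFinish() achieves the same ordering guarantee, but stalls
   the entire pipeline rather than waiting on one point in the stream. */
```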
I need to display image in openGL window.
Image changes every timer tick.
I've checked on Google how, and as far as I can see it can be done using either the glBitmap or the glTexImage2D function.
What is the difference between them?
The difference? These two functions have nothing in common.
glBitmap is a function for drawing binary images. That's not a .BMP file or an image you load (usually). The function's name doesn't refer to the colloquial term "bitmap". It refers to exactly that: a map of bits. Each bit in the bitmap represents a pixel. If the bit is 1, then the current raster color will be written to the framebuffer. If the bit is 0, then the pixel in the framebuffer will not be altered.
glTexImage2D is for allocating textures and optionally uploading pixel data to them. You can later draw triangles that have that texture mapped to them. But glTexImage2D by itself does not draw anything.
What you are probably looking for is glDrawPixels, which draws an image directly into the framebuffer. If you use glTexImage2D, you have to first update the texture with the new image, then draw a shape with that texture (say, a fullscreen quad) to actually render the image.
That said, you'll be better off with glTexImage2D if...
You're using a library like JOGL that makes binding textures from images an easy operation, or
You want to scale the image or display it in perspective
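For comparison, the glDrawPixels route mentioned above is only a couple of calls per frame (a sketch; legacy/compatibility profile only, and `width`, `height` and `pixels` are assumed to describe an RGBA8 image in client memory):

```c
/* Splat a CPU-side image directly into the framebuffer. No texture,
   no geometry -- but also no scaling, filtering, or perspective. */
glRasterPos2i(0, 0);  /* anchor at the bottom-left in current coordinates */
glDrawPixels(width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
```

Note that glDrawPixels (like glBitmap) was removed from the core profile in OpenGL 3.1, so the texture-on-a-quad approach is the forward-compatible one.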
In a Qt based application I want to execute a fragment shader on two textures (both 1000x1000 pixels).
I draw a rectangle and the fragment shader works fine.
But now I want to render the output into the GL_AUX0 frame buffer so the result can be read back and saved to a file.
Unfortunately, if the window size is less than 1000x1000 pixels, the output is not correct: only the window-sized area is rendered into the frame buffer.
How can I run the shader pass over the whole texture, regardless of window size?
The recommended way to do off-screen processing is to use Framebuffer Objects (FBO). These buffers act similarly to the render buffers you already know, but are not constrained by the window resolution or color depth. You can use the GPGPU Framebuffer Object Class to hide the low-level OpenGL commands and use the FBO right away. If you prefer doing this on your own, have a look at the extension specification.
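A minimal sketch of such an FBO, sized to the 1000x1000 textures rather than the window (assumes an active OpenGL context; names are illustrative):

```c
/* Create an off-screen render target at the texture resolution,
   independent of the window size. */
GLuint fbo, colorTex;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1000, 1000, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE) {
    glViewport(0, 0, 1000, 1000);  /* the texture size, NOT the window size */
    /* ... draw the rectangle with the fragment shader, then
       glReadPixels(0, 0, 1000, 1000, ...) to read the result back ... */
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
```

The key point is that the viewport must be set to the FBO's size while rendering into it; the window's dimensions then no longer matter.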