Save OpenGL Rendering to Video - opengl

I have an OpenGL game, and I want to save what's shown on the screen to a video.
How can I do that? Is there any library or how-to-do-it?
I don't care about compression, I need the most efficient way so hopefully the FPS won't drop.
EDIT:
It's OpenGL 1.1 and it's working on Mac OSX though I need it to be portable.

There is certainly good video-capture software out there that you could use to record your screen, even while running a full-screen OpenGL game.
If you are using a newer version of OpenGL, you can use PBOs, as genpfault mentioned. If you are using legacy OpenGL (version 1.x), here's how you can capture the screen:
glFinish(); // Make sure everything is drawn
glReadBuffer(GL_FRONT);
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glPixelStorei(GL_PACK_ROW_LENGTH, 0);
glPixelStorei(GL_PACK_SKIP_ROWS, 0);
glPixelStorei(GL_PACK_SKIP_PIXELS, 0);
glReadPixels(blx, bly, w, h, mode, GL_UNSIGNED_BYTE, data);
where blx and bly are the bottom-left coordinates of the region you want to capture (in your case (0, 0)), w and h are the width and height of that region, mode is the pixel format you want back (for example GL_RGB or GL_RGBA) and data points to a buffer large enough to hold the result. See the glReadPixels reference for more details on the parameters.
Writing the captured frames (at your desired rate, for example 24 fps) to a video file is then a simple matter of choosing a file format (for example raw video), writing the header and then writing the images one after another (image by image if raw, or image differences for some other formats, etc.).
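For illustration, a minimal capture-and-append sketch built on the glReadPixels call above; the GL_RGBA format, the per-call allocation and the "capture.raw" filename are just illustrative choices. Appending one frame per capture gives a headerless raw video that you can convert or wrap later (for example with ffmpeg's rawvideo input):
unsigned char *data = (unsigned char *)malloc((size_t)w * h * 4);
glReadPixels(blx, bly, w, h, GL_RGBA, GL_UNSIGNED_BYTE, data);  // fill the buffer

FILE *out = fopen("capture.raw", "ab");  // append this frame to the raw stream
fwrite(data, 4, (size_t)w * h, out);
fclose(out);
free(data);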

Use Pixel Buffer Objects (PBOs).
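A minimal sketch of what that looks like in practice, assuming OpenGL 2.1+ (or ARB_pixel_buffer_object); the GL_BGRA format and GL_STREAM_READ usage hint are illustrative choices. A glReadPixels call that targets a bound pixel-pack buffer returns immediately, and the copy completes by the time you map the buffer later:
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, NULL, GL_STREAM_READ);

glReadBuffer(GL_FRONT);
glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, 0);  // 0 = offset into the PBO, not a pointer

// ...ideally a frame later, fetch the data without stalling the current frame:
void *pixels = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if (pixels) {
    // encode / write the frame here
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);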

Related

Why is glReadPixels so slow and are there any alternatives?

I need to take screenshots every frame and I need very high performance (I'm using freeglut). What I figured out is that it can be done like this inside glutIdleFunc(thisCallbackFunction):
GLubyte *data = (GLubyte *)malloc(3 * m_screenWidth * m_screenHeight);
glReadPixels(0, 0, m_screenWidth, m_screenHeight, GL_RGB, GL_UNSIGNED_BYTE, data);
// and I can access pixel values like this: data[3*(x*512 + y) + color] or whatever
It does work, but I have a huge issue with it: it's really slow. When my window is 512x512 and only a cube is being rendered, it runs no faster than 90 frames per second; without these two lines it runs at 6500 FPS! If I compare it to the Irrlicht graphics engine, there I can do this:
// irrlicht code
video::IImage *screenShot = driver->createScreenShot();
const uint8_t *data = (uint8_t*)screenShot->lock();
// I can access pixel values from data in a similar manner here
and a 512x512 window runs at 400 FPS even with a huge mesh (a Quake 3 map) loaded! Take into account that I'm using OpenGL as the driver inside Irrlicht. To my inexperienced eye it seems like glReadPixels copies every pixel from one place to another, while (uint8_t*)screenShot->lock() just returns a pointer to an already existing array. Can I do something similar to the latter using freeglut? I expect it to be faster than Irrlicht.
Note that Irrlicht uses OpenGL too (it also offers DirectX and other drivers, but in the example above I used OpenGL, and it was also the fastest of the options I tried).
OpenGL calls manage a rendering pipeline. By its nature, while the graphics card is showing one image to the viewer, it is already computing the next frame. When you call glReadPixels, the graphics card has to wait for the current frame to finish, read the pixels back, and only then start computing the next frame. The pipeline therefore stalls and becomes sequential.
If you can hold two buffers and tell the graphics card to read data into them, alternating each frame, then you can read back from your buffer one frame late but without stalling the pipeline. This is called double buffering. You can also do triple buffering with a two-frame-late read-back, and so on.
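For illustration, a rough sketch of that ping-pong read-back, assuming two PBOs (pbo[0] and pbo[1]) have already been created and sized to width * height * 4 bytes; the data you map is one frame old, but glReadPixels no longer waits for the GPU:
static int frame = 0;
int writeIdx = frame % 2;        // PBO receiving this frame's pixels
int readIdx  = (frame + 1) % 2;  // PBO holding last frame's pixels

glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[writeIdx]);
glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, 0);  // asynchronous copy into the PBO

glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[readIdx]);
GLubyte *src = (GLubyte *)glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if (src) {
    // process last frame's pixels here (copy, encode, save, ...)
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
++frame;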
There is a relatively old web page describing the phenomenon and implementation here: http://www.songho.ca/opengl/gl_pbo.html
Also there are a lot of tutorials about framebuffers and rendering into a texture on the web. One of them is here: http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-14-render-to-texture/

Transparent SwapChainPanel (compositeMode) - no opaque DirectX drawing?

I'm making an Augmented Reality app (C++) for Windows 10 (Universal Windows app / WinRT) with OpenCV.
Problem:
I want a transparent SwapChainPanel background so that the content behind it (the webcam stream) is visible, but with opaque 3D models (e.g. a cone).
Try:
From my research it seems that setting the CompositeMode of the SwapChainPanel to "MinBlend" should do it. Yes, but I still want my 3D objects to be opaque. In fact, I want my objects to be semi-transparent, but always visible. The "MinBlend" mode is intended more for text highlighting than for overlaying something with semi-transparent models (dark areas are not overlaid, see pictures).
Image: Standard DirectX cube (opaque background and model)
Image: DirectX Cube overlayed
Do you have any suggestions? Is it possible?
Background:
I'm making an Augmented Reality Windows 10 app with OpenCV. To get the current pixel data of my webcam stream I'm using the Windows 10 methods mediaCapture->GetPreviewFrameAsync(videoframe) together with SoftwareBitmap->LockBuffer to access the bytes in the memory buffer. The bytes are processed with OpenCV functions, and after processing is complete I set up a WriteableBitmap to show the modified webcam stream in my XAML UI element. Because I already have classes to draw my DirectX objects and manipulate them with touch input, I want to use DirectX to overlay the webcam preview with my objects.
Sorry for not linking the methods used and the images; I don't have enough reputation.
Edit: Maybe an alternative would be to create a texture from my pixel data and set up a fullscreen rectangle on my SwapChainPanel that serves as a background. Then I'd have to update the texture data every frame.
According to some official Windows Store examples, changing the CompositeMode is the only way to make XAML content behind a SwapChainPanel visible.
But this workaround finally works: I've created a Texture2D and update its contents every frame:
/*----- Update background texture with new video frame -----*/
// image = OpenCV cv::Mat containing the pixel data of the current frame.
// Note: m_pTexture must be created with D3D11_USAGE_DYNAMIC and
// D3D11_CPU_ACCESS_WRITE for Map with D3D11_MAP_WRITE_DISCARD to succeed.
if (image.data != nullptr)
{
    D3D11_MAPPED_SUBRESOURCE mappedResource;
    ZeroMemory(&mappedResource, sizeof(D3D11_MAPPED_SUBRESOURCE));

    // Disable GPU access to the data while the CPU writes it.
    auto m_d3dContext = m_deviceResources->GetD3DDeviceContext();
    m_d3dContext->Map(m_pTexture, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);

    // Update the texture (assumes mappedResource.RowPitch == 4 * image.cols,
    // i.e. no row padding; otherwise copy row by row).
    memcpy(mappedResource.pData, image.data, 4 * image.rows * image.cols);

    // Re-enable GPU access to the data.
    m_d3dContext->Unmap(m_pTexture, 0);
}
First I draw a fullscreen rectangle with this texture (using a SpriteBatch from DirectXTK); after that I can draw anything else on top.
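For illustration, a sketch of that fullscreen draw with DirectXTK's SpriteBatch; m_spriteBatch, m_backgroundSRV (a shader resource view created on the texture updated above) and panelWidth/panelHeight are assumptions, not names from the original code:
RECT fullscreen = { 0, 0, panelWidth, panelHeight };
m_spriteBatch->Begin();
m_spriteBatch->Draw(m_backgroundSRV.Get(), fullscreen);  // stretch the camera frame over the panel
m_spriteBatch->End();
// ...then render the semi-transparent 3D models on top as usual.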

Trying to use OpenGL Texture Compression on a large bitmap - get white squares

I'm trying to use OpenGL's texture compression on a large image. My image is a world map that I'm painting on the screen as a series of 128x128 tiles as part of a learning exercise. I want the user to be able to pan and zoom around the image. It's a JPG that is rather large (20k by 10k pixels) and so I wanted each of my tiles (I tiled the image) to be compressed in order to lower the memory footprint of my program.
I picked an arbitrary texture compression format when I called glTexImage2D, and each of my tiles became a white square. I dug a little deeper and figured "maybe my video card doesn't support all these formats." The video card is an NVIDIA NVS 3100M in an IBM ThinkPad laptop. I tried to query the supported texture compression formats (GL_COMPRESSED_TEXTURE_FORMATS), but the query didn't return anything. I also checked what GL_EXTENSIONS were supported and it returned "GL_WIN_swap_hint GL_EXT_bgra GL_EXT_paletted_texture", which doesn't look like much.
My program is in C# using the SharpGL library.
What other things can I check to see to try to figure this one out?
How about checking the texture minification filter settings? The default GL_TEXTURE_MIN_FILTER uses mipmaps; if a texture has no mipmaps, it is incomplete and typically shows up as a plain white square.
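If that is the cause, a minimal fix is to set filters that don't need mipmaps right after creating each tile texture (plain GL shown here, with tileTexture standing in for your texture id; SharpGL exposes the same calls):
glBindTexture(GL_TEXTURE_2D, tileTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);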

Whole screen capture and render in DirectX [PERFORMANCE]

I need some way to grab the screen contents and pass them to a DX9 surface/texture in my application, then render it at a minimum of 25 fps at 1600x900 resolution; 30 would be better.
I tried BitBlt, but even that alone leaves me at 20 fps, and after loading the data into a texture and rendering it I am at 11 fps, which is far below what I need.
GetFrontBufferData is out of the question.
Here is something about using the Windows Media API, but I am not familiar with it. The sample saves the data straight to a file; maybe it can be set up to hand you individual frames, but I haven't found good enough documentation to try that on my own.
My code:
m_memDC.BitBlt(0, 0, m_Rect.Width(),m_Rect.Height(), //m_Rect is area to be captured
&m_dc, m_Rect.left, m_Rect.top, SRCCOPY);
//at 20-25fps after this if I comment out the rest
//DC,HBITMAP setup and memory alloc is done once at the begining
GetDIBits( m_hDc, (HBITMAP)m_hBmp.GetSafeHandle(),
0L, // Start scan line
(DWORD)m_Rect.Height(), // # of scan lines
m_lpData, // LPBYTE
(LPBITMAPINFO)m_bi, // address of bitmapinfo
(DWORD)DIB_RGB_COLORS); // Use RGB for color table
//at 17-20fps
IDirect3DSurface9 *tmp;
m_pImageBuffer[0]->GetSurfaceLevel(0,&tmp); //m_pImageBuffer is Texture of same
//size as bitmap to prevent stretching
hr= D3DXLoadSurfaceFromMemory(tmp,NULL,NULL,
(LPVOID)m_lpData,
D3DFMT_X8R8G8B8,
m_Rect.Width()*4,
NULL,
&r, //SetRect(&r,0,0,m_Rect.Width(),m_Rect.Height();
D3DX_DEFAULT,0);
//12-14fps
IDirect3DSurface9 *frameS;
hr=m_pFrameTexture->GetSurfaceLevel(0,&frameS); // Texture of that is rendered
pd3dDevice->StretchRect(tmp,NULL,frameS,NULL,D3DTEXF_NONE);
//11fps
I found out that for a 512x512 square it runs at 30 fps (e.g. 490x450 at 20-25 fps), so I tried dividing the screen into parts, but it didn't seem to work well.
If something is missing from the code, please say so instead of downvoting. Thanks.
Starting with Windows 8, there is a new desktop duplication API that can be used to capture the screen in video memory, including mouse cursor changes and which parts of the screen actually changed or moved. This is far more performant than any of the GDI or D3D9 approaches out there and is really well-suited to doing things like encoding the desktop to a video stream, since you never have to pull the texture out of GPU memory. The new API is available by enumerating DXGI outputs and calling DuplicateOutput on the screen you want to capture. Then you can enter a loop that waits for the screen to update and acquires each frame in turn.
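For illustration, a rough sketch of that loop with error handling omitted; output1 (an IDXGIOutput1 for the monitor to capture) and d3dDevice are assumed to have been obtained by enumerating the DXGI adapters and outputs:
IDXGIOutputDuplication *duplication = nullptr;
output1->DuplicateOutput(d3dDevice, &duplication);

for (;;)
{
    DXGI_OUTDUPL_FRAME_INFO frameInfo;
    IDXGIResource *resource = nullptr;
    HRESULT hr = duplication->AcquireNextFrame(500, &frameInfo, &resource);  // wait up to 500 ms
    if (hr == DXGI_ERROR_WAIT_TIMEOUT)
        continue;                                   // nothing on screen changed

    ID3D11Texture2D *frame = nullptr;
    resource->QueryInterface(IID_PPV_ARGS(&frame)); // the captured frame, still in GPU memory
    // ... hand "frame" to the encoder, then release everything
    frame->Release();
    resource->Release();
    duplication->ReleaseFrame();                    // release before acquiring the next frame
}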
To encode the frames to a video, I'd recommend taking a look at Media Foundation. Take a look specifically at the Sink Writer for the simplest method of encoding the video frames. Basically, you just have to wrap the D3D textures you get for each video frame into IMFSample objects. These can be passed directly into the sink writer. See the MFCreateDXGISurfaceBuffer and MFCreateVideoSampleFromSurface functions for more information. For the best performance, typically you'll want to use a codec like H.264 that has good hardware encoding support (on most machines).
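A minimal sketch of that hand-off; sinkWriter, videoStreamIndex, frameTime and frameDuration are assumptions (times are in 100-ns units), and "frame" is the captured D3D11 texture from the duplication loop above:
IMFMediaBuffer *buffer = nullptr;
MFCreateDXGISurfaceBuffer(__uuidof(ID3D11Texture2D), frame, 0, FALSE, &buffer);

IMFSample *sample = nullptr;
MFCreateSample(&sample);
sample->AddBuffer(buffer);
sample->SetSampleTime(frameTime);          // presentation time of this frame
sample->SetSampleDuration(frameDuration);  // e.g. 10000000 / 60 for 60 fps
sinkWriter->WriteSample(videoStreamIndex, sample);

sample->Release();
buffer->Release();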
For full disclosure, I work on the team that owns the desktop duplication API at Microsoft, and I've personally written apps that capture the desktop (and video, games, etc.) to a video file at 60fps using this technique, as well as a lot of other scenarios. This is also used to do screen streaming, remote assistance, and lots more within Microsoft.
If you don't like the FrontBuffer, try the BackBuffer:
LPDIRECT3DSURFACE9 surface;
surface = GetBackBufferImageSurface(&fmt);
To save it to a file, use:
D3DXSaveSurfaceToFile(filename, D3DXIFF_JPG, surface, NULL, NULL);

How to use texture compression in openGL?

I'm making an image viewer using OpenGL and I've run into a situation where I need to load very large (>50 MB) images. I'm loading the images as textures and displaying them on a GL_QUAD, which has been working great for smaller images, but on the large images the loading fails and I get a blank rectangle. So far I've implemented a very ugly hack that uses another program to convert the images to smaller, lower-resolution versions that can be loaded, but I'm looking for a more elegant solution. I've found that OpenGL has a texture compression feature, but I can't get it to work. When I call
glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_ARB, t.width(), t.height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, t.bits());
I get the compiler error "GL_COMPRESSED_RGBA_ARB undeclared". What am I doing wrong? Is there a library I'm missing? And more generally, is this a viable solution to my problem?
I'm using Qt Creator on a Windows Vista machine, with a NVIDIA Quadro FX 1700 graphics card.
On my own graphics card the maximum size for an OpenGL texture is 8192x8192. If your image is bigger than 50 MB, it probably has a very, very high resolution...
Check http://www.opengl.org/resources/faq/technical/texture.htm; it describes how to find the maximum texture size.
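For example, a quick check of the largest texture dimension your driver accepts, to compare against your image and tile sizes:
GLint maxSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize);
// maxSize x maxSize is the upper bound for a single texture on this driver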
First, I'd have to ask what resolution these large images are. Secondly, to use a define such as GL_COMPRESSED_RGBA_ARB you would need to download and use something like GLEW, which exposes a more up-to-date GL API than the headers that ship with the standard MS dev environment install.
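For illustration, a minimal sketch once GLEW is set up; GL_COMPRESSED_RGBA is the generic compressed internal format, so the driver picks a concrete compression scheme for you:
#include <GL/glew.h>

// ... after the GL context has been created:
glewInit();
glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA,
             t.width(), t.height(), 0,
             GL_RGBA, GL_UNSIGNED_BYTE, t.bits());

// Check whether the driver actually compressed the texture:
GLint compressed = GL_FALSE;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_COMPRESSED, &compressed);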