Rendering Video in OpenGL

Is there a good solution for playing a compressed video in OpenGL?
It needs to:
Be cross-platform (Windows and Mac OS X)
Render to a texture (preferably but not 100% needed)
Cost less than Bink
Any ideas?

Qt can be used to render widgets (including a video player) in an OpenGL scene. It has a multimedia framework called Phonon that can play video and audio.
See this demo video.
Qt is cross-platform and is now licensed under LGPL.
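For illustration, a minimal sketch of that combination, assuming Qt 4-era APIs (Phonon plus a QGraphicsView drawing through an OpenGL viewport); the file name is a placeholder:

```cpp
#include <QApplication>
#include <QGraphicsScene>
#include <QGraphicsView>
#include <QGLWidget>
#include <phonon/videoplayer.h>
#include <phonon/mediasource.h>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    Phonon::VideoPlayer *player =
        new Phonon::VideoPlayer(Phonon::VideoCategory);
    player->play(Phonon::MediaSource("movie.ogv")); // placeholder file

    QGraphicsScene scene;
    scene.addWidget(player);         // wraps the widget in a proxy item

    QGraphicsView view(&scene);
    view.setViewport(new QGLWidget); // the scene is now drawn with OpenGL
    view.show();

    return app.exec();
}
```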

I recommend the Theora video format.
Here are the benefits:
Totally open, free and patent-unencumbered specification
Free working library implementation (encoder/decoder) and source-code examples, available under a BSD-style license
Not too shabby documentation.
Portable
The decoder outputs Y'CbCr planes, which can easily be uploaded to OpenGL textures (e.g. through a pixel buffer object) and fetched in a shader via samplers, with the conversion to R'G'B' done on the GPU.
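A sketch of that path, assuming a th_dec_ctx that has just decoded a frame, GL 3.x-style GL_RED textures, and BT.601 full-range constants (which should really be taken from the stream's colorspace info):

```cpp
// Upload the three decoded Y'CbCr planes as single-channel textures, then
// convert to R'G'B' in the fragment shader. th_decode_ycbcr_out is real
// libtheora API; everything around it is an illustrative sketch.
#include <theora/theoradec.h>
#include <GL/gl.h>

void upload_ycbcr(th_dec_ctx *dec, GLuint tex[3])
{
    th_ycbcr_buffer planes;                 // [0]=Y', [1]=Cb, [2]=Cr
    th_decode_ycbcr_out(dec, planes);
    for (int i = 0; i < 3; ++i) {
        glBindTexture(GL_TEXTURE_2D, tex[i]);
        // stride is in bytes; with one byte per sample it equals pixels
        glPixelStorei(GL_UNPACK_ROW_LENGTH, planes[i].stride);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, planes[i].width,
                     planes[i].height, 0, GL_RED, GL_UNSIGNED_BYTE,
                     planes[i].data);
    }
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
}

// Fragment shader doing the Y'CbCr -> R'G'B' conversion (BT.601 assumed):
const char *frag = R"(
    #version 330 core
    uniform sampler2D texY, texCb, texCr;
    in vec2 uv;
    out vec4 color;
    void main() {
        float y  = texture(texY,  uv).r;
        float cb = texture(texCb, uv).r - 0.5;
        float cr = texture(texCr, uv).r - 0.5;
        color = vec4(y + 1.402 * cr,
                     y - 0.344136 * cb - 0.714136 * cr,
                     y + 1.772 * cb,
                     1.0);
    }
)";
```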

If by "solution" you mean something you can build/code yourself, I can suggest QuickTime (easy on Mac with Cocoa, strange on Windows, but it works), or you can check out the mplayer/VLC sources and try to integrate those. There are a lot of demos about this on the web.
Since you need cross-platform, I guess GStreamer, Video4Linux and DirectShow are nothing for you. But there are video players that support different backends on different platforms, like openFrameworks.

Related

UWP Hardware Video Decoding - DirectX12 vs Media Foundation

I would like to use DirectX 12 to load each frame of an H264 file into a texture and render it. There is, however, little to no information on doing this, and Microsoft's website has only limited, superficial documentation.
Media Foundation has plenty of examples and offers hardware-enabled decoding. Is Media Foundation a wrapper around DirectX, or is it doing something else?
If not, how much less optimised would the Media Foundation equivalent be in comparison to a DX 12 approach?
Essentially, what are the big differences between Media Foundation and DirectX12 Video Decoding?
I am already using DirectX 12 in my engine so this is specifically regarding DX12.
Thanks in advance.
Hardware video decoding comes from the DXVA (DXVA2) API. Its DirectX 11 evolution is the D3D11 Video Device, part of the D3D11 API. Microsoft provides wrappers over hardware-accelerated decoders in the form of Media Foundation API primitives, such as the H.264 Video Decoder. This decoder offers the use of hardware decoding capabilities as well as a fallback to software decoding.
Note that even though Media Foundation is available for UWP development, your options are limited and you are not offered primitives like the mentioned transform directly. However, if you use higher-level APIs (the Media Foundation Source Reader API in particular), you can leverage hardware-accelerated video decoding in your UWP application.
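For concreteness, a hedged sketch of that Source Reader setup with hardware transforms enabled and a DXGI device manager attached (a D3D11 device, per the interop limits below; error handling is abbreviated and the file name is a placeholder):

```cpp
#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>
#include <d3d11.h>

// Assumes MFStartup has already been called.
HRESULT CreateHardwareReader(ID3D11Device *device, IMFSourceReader **reader)
{
    UINT token = 0;
    IMFDXGIDeviceManager *manager = nullptr;
    HRESULT hr = MFCreateDXGIDeviceManager(&token, &manager);
    if (FAILED(hr)) return hr;
    hr = manager->ResetDevice(device, token); // a D3D11 device, not D3D12
    if (FAILED(hr)) return hr;

    IMFAttributes *attr = nullptr;
    hr = MFCreateAttributes(&attr, 2);
    if (FAILED(hr)) return hr;
    attr->SetUnknown(MF_SOURCE_READER_D3D_MANAGER, manager);
    attr->SetUINT32(MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS, TRUE);

    // "video.mp4" is a placeholder path.
    return MFCreateSourceReaderFromURL(L"video.mp4", attr, reader);
}
```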
The Media Foundation implementation offers interoperability with Direct3D 11, in particular for video encoding/decoding, but not with Direct3D 12. You will not be able to use Media Foundation and DirectX 12 together out of the box. You will have to implement Direct3D 11/12 interop to transfer the data between the APIs or, where applicable, use shared access to the same GPU data.
Alternatively, you can step down to the underlying ID3D12VideoDevice::CreateVideoDecoder, which is a further evolution of the mentioned DXVA2 and Direct3D 11 video decoding APIs, with similar usage.
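A hedged sketch of what that step-down looks like, assuming an existing D3D12 device and the H.264 profile GUID from d3d12video.h; decoder heaps, reference frame management, and the decode command list are all omitted:

```cpp
#include <d3d12.h>
#include <d3d12video.h>

HRESULT CreateH264Decoder(ID3D12Device *device, ID3D12VideoDecoder **decoder)
{
    // The video functionality lives on a separate interface of the device.
    ID3D12VideoDevice *video = nullptr;
    HRESULT hr = device->QueryInterface(IID_PPV_ARGS(&video));
    if (FAILED(hr)) return hr;

    D3D12_VIDEO_DECODER_DESC desc = {};
    desc.Configuration.DecodeProfile       = D3D12_VIDEO_DECODE_PROFILE_H264;
    desc.Configuration.BitstreamEncryption = D3D12_BITSTREAM_ENCRYPTION_TYPE_NONE;
    desc.Configuration.InterlaceType =
        D3D12_VIDEO_FRAME_CODED_INTERLACE_TYPE_NONE;

    hr = video->CreateVideoDecoder(&desc, IID_PPV_ARGS(decoder));
    video->Release();
    return hr;
}
```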
Unfortunately, if Media Foundation is notorious for poor documentation and a hard start to development, Direct3D 12 video decoding has next to zero public information, and you will get to enjoy the feeling of being a pioneer.
Either way, all of the above are relatively thin wrappers over the same hardware-assisted video decoding implementation, with the same great performance. I would recommend taking the Media Foundation path and implementing 11/12 interop if/when it becomes necessary.
You will get a lot of D3D12 errors caused by Media Foundation if you pass a D3D12 device to IMFDXGIDeviceManager::ResetDevice.
The errors can be avoided if you call IMFSourceReader::ReadSample slowly. It doesn't matter whether you use sync or async mode with this method; how slowly depends on the machine running the program. On my machine I use ::Sleep(1) between ReadSample calls in sync mode when playing a stream from the network, and ::Sleep(3) when playing a local mp4 file.
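A sketch of that synchronous loop with the pacing workaround (presentation of the sample is elided; the sleep values are the empirical numbers above):

```cpp
#include <mfreadwrite.h>
#include <windows.h>

void PlaybackLoop(IMFSourceReader *reader)
{
    for (;;) {
        DWORD streamIndex = 0, flags = 0;
        LONGLONG timestamp = 0;
        IMFSample *sample = nullptr;
        HRESULT hr = reader->ReadSample(MF_SOURCE_READER_FIRST_VIDEO_STREAM,
                                        0, &streamIndex, &flags, &timestamp,
                                        &sample);
        if (FAILED(hr) || (flags & MF_SOURCE_READERF_ENDOFSTREAM))
            break;
        if (sample) {
            // ... hand the sample to the renderer ...
            sample->Release();
        }
        ::Sleep(1); // ~1 ms for a network stream, ~3 ms for a local mp4
    }
}
```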
Don't ask who I am. My name is 'the pioneer'.

Encode OpenGL rendered video without leaving the GPU memory

I am doing some preliminary work on a rendering pipeline, and I am investigating whether OpenGL is a good option for my use case: from a markup language I need to generate a video, ideally using OpenGL, which already implements most of the primitives I need.
Is there a way, instead of (or in addition to) updating a framebuffer, to produce an mp4 video file using NVENC, without copying data back and forth between GPU memory and main memory?
The NVENC SDK page[1] on the NVIDIA website suggests that it can, as the current header graphic is of a game being streamed. (Even if it's a Direct3D game, it's the same chip underneath.) A quick search for "nvenc share buffer with OpenGL" turned up a number of people apparently combining the two.
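One common shape for this is CUDA-OpenGL interop: register the GL texture with CUDA once, then each frame map it and copy it device-to-device into a buffer NVENC can consume. A sketch under assumptions; the encodeFrame call is a hypothetical stand-in for the NvEncodeAPI session setup, which is substantial:

```cpp
#include <GL/gl.h>
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

cudaGraphicsResource *glRes = nullptr;

void registerTexture(GLuint tex)
{
    // One-time registration of the OpenGL texture with CUDA.
    cudaGraphicsGLRegisterImage(&glRes, tex, GL_TEXTURE_2D,
                                cudaGraphicsRegisterFlagsReadOnly);
}

void encodeCurrentFrame(void *devBuffer, size_t pitch, int width, int height)
{
    cudaArray_t array = nullptr;
    cudaGraphicsMapResources(1, &glRes);
    cudaGraphicsSubResourceGetMappedArray(&array, glRes, 0, 0);
    // Device-to-device copy: the pixels never touch system memory.
    cudaMemcpy2DFromArray(devBuffer, pitch, array, 0, 0,
                          (size_t)width * 4, height,
                          cudaMemcpyDeviceToDevice);
    cudaGraphicsUnmapResources(1, &glRes);
    // encodeFrame(devBuffer); // hypothetical: submit devBuffer to NVENC
    //                         // as a CUDA device pointer input resource
}
```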
Runs on Linux and MS Windows only, so no joy if you have a Mac.
Hope this helps.

Video Playback in DirectX 11

Pretty self-explanatory. Microsoft had DirectShow for DirectX 9, but using DirectShow with DX11 is a COM nightmare beyond words. Is there a standard for video rendering I haven't heard of, or perhaps a free third-party library for this purpose?
Edit: Thanks to Mgetz, I am aware of Microsoft's attempt at a solution, Media Foundation. However, it's limited to Windows 8+, which I would much prefer to avoid.
This may not exactly match your requirement, but for your goal you may take a look at FFmpeg, libx264, and Vorbis (for Ogg sound) or FAAD (to decode AAC).
I have done this using FFmpeg to open the container (3GP/MP4 is simple to implement yourself, by the way, if the full GPL license is a concern) and libavcodec to decode H.264 to frames and upload them to an OpenGL texture. Performance is good (on a Mac Pro it can render 50 fps at 1080p without optimization), and by getting your hands dirty you can have fun doing stupid things with the texture and 3D transforms.
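A sketch of that decode-and-upload loop using the modern libavcodec send/receive API (newer than what this answer would have used); the format context, decoder, SwsContext, RGB staging buffer, and texture are assumed to be set up already:

```cpp
extern "C" {
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>
}
#include <GL/gl.h>

// Feed one demuxed packet to the decoder and upload any resulting frames.
void decode_and_upload(AVCodecContext *ctx, const AVPacket *pkt, GLuint tex,
                       SwsContext *sws, uint8_t *rgb, int rgbStride)
{
    avcodec_send_packet(ctx, pkt);
    AVFrame *frame = av_frame_alloc();
    while (avcodec_receive_frame(ctx, frame) == 0) {
        // Convert the decoder's native pixel format (usually planar
        // Y'CbCr) to packed RGB for the texture upload.
        uint8_t *dst[1] = { rgb };
        int stride[1] = { rgbStride };
        sws_scale(sws, frame->data, frame->linesize, 0, ctx->height,
                  dst, stride);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, ctx->width, ctx->height,
                        GL_RGB, GL_UNSIGNED_BYTE, rgb);
    }
    av_frame_free(&frame);
}
```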
Media Foundation says that it "enables the development of applications and components for using digital media on Windows Vista and later."
So, it looks like it should work for Vista, Windows 7, and Windows 8.
There is DirectX Video Acceleration 2.0, which has a fabulous API: DXVA-HD (after one has seen VMR9's API, especially that custom allocator/presenter for renderless drawing, every other API is fabulous :) ).
Have a look at: https://msdn.microsoft.com/en-us/library/windows/desktop/ee663586(v=vs.85).aspx
Also, there is a sample in: https://msdn.microsoft.com/en-us/library/windows/desktop/dd756740(v=vs.85).aspx
Windows 7 is the minimum supported Windows version.
You will not believe how straightforward it is with this API to have it decode the video into your texture.
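To give a taste, a hedged sketch of DXVA-HD device and video processor creation on top of an existing IDirect3DDevice9Ex; the content description numbers are placeholders, and the per-frame VideoProcessBltHD call plus surface setup are omitted:

```cpp
#include <d3d9.h>
#include <dxvahd.h>

HRESULT CreateHDProcessor(IDirect3DDevice9Ex *device,
                          IDXVAHD_VideoProcessor **vp)
{
    // Placeholder stream/output description: 1080p progressive at 30 fps.
    DXVAHD_CONTENT_DESC desc = {};
    desc.InputFrameFormat = DXVAHD_FRAME_FORMAT_PROGRESSIVE;
    desc.InputFrameRate   = { 30, 1 };
    desc.InputWidth       = 1920;
    desc.InputHeight      = 1080;
    desc.OutputFrameRate  = { 30, 1 };
    desc.OutputWidth      = 1920;
    desc.OutputHeight     = 1080;

    IDXVAHD_Device *hd = nullptr;
    HRESULT hr = DXVAHD_CreateDevice(device, &desc,
                                     DXVAHD_DEVICE_USAGE_PLAYBACK_NORMAL,
                                     nullptr, &hd);
    if (FAILED(hr)) return hr;

    // Enumerate the processors the device offers and take the first one.
    DXVAHD_VPDEVCAPS devCaps = {};
    hd->GetVideoProcessorDeviceCaps(&devCaps);
    DXVAHD_VPCAPS *caps = new DXVAHD_VPCAPS[devCaps.VideoProcessorCount];
    hd->GetVideoProcessorCaps(devCaps.VideoProcessorCount, caps);
    hr = hd->CreateVideoProcessor(&caps[0].VPGuid, vp);

    delete[] caps;
    hd->Release();
    return hr;
}
```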

Rendering to video file by SFML

I have written a program using SFML Library (in C++) rendering simple 2D animation.
I would like to save the animation to a video file instead of drawing it on the screen.
Does SFML provide such functionality? Is there any other, portable way to do this? (portable between different OSes)
SFML does not have such a feature, especially since video processing is a whole world of its own. You can take a look at FFmpeg and GStreamer. Both libraries are cross-platform and should be able to record, play back, and stream video. If you want a specific codec, you could look directly at the codec's website and/or search for a good encoder.
Overall it's not an easy task, and depending on what you're trying to do, you could also think about grabbing the rendering directly with a third-party application, e.g. Open Broadcaster Software or (again) FFmpeg.
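A middle ground, sketched under assumptions (SFML 2.x, a POSIX popen; use _popen on Windows), is to grab each rendered frame with SFML and pipe raw pixels to the ffmpeg command-line tool:

```cpp
#include <SFML/Graphics.hpp>
#include <cstdio>

int main()
{
    const unsigned W = 800, H = 600;
    sf::RenderWindow window(sf::VideoMode(W, H), "capture");

    // ffmpeg reads raw RGBA frames from stdin and encodes them to mp4.
    // The size/framerate arguments must match the loop below.
    FILE *ff = popen(
        "ffmpeg -y -f rawvideo -pixel_format rgba -video_size 800x600 "
        "-framerate 60 -i - -pix_fmt yuv420p out.mp4", "w");

    sf::Texture tex;
    tex.create(W, H);

    for (int frame = 0; frame < 600 && window.isOpen(); ++frame) {
        // ... draw the animation into `window` here ...
        window.display();

        tex.update(window);                    // framebuffer -> texture
        sf::Image img = tex.copyToImage();     // texture -> CPU pixels
        fwrite(img.getPixelsPtr(), 4, W * H, ff);
    }
    pclose(ff);
}
```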

How can I play a FLV file in a C++ OpenGL application?

I am trying to play a .flv file in the GLUT window using OpenGL and C++ in Linux, but I'm not sure where to start.
Is it possible to do this? If so, how?
Make sure you mean .flv not .swf.
It's quite easy. Decode the video with something like libavcodec and you can use raw frames as textures.
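A sketch of the container/decoder setup half of that with FFmpeg's libraries (error handling omitted); the decoded AVFrames can then be converted and uploaded as textures:

```cpp
extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
}

// Open a .flv file, find its video stream, and return a ready decoder.
AVCodecContext *open_flv_video(const char *path, AVFormatContext **fmt,
                               int *videoStream)
{
    avformat_open_input(fmt, path, nullptr, nullptr);
    avformat_find_stream_info(*fmt, nullptr);

    *videoStream = av_find_best_stream(*fmt, AVMEDIA_TYPE_VIDEO,
                                       -1, -1, nullptr, 0);
    AVStream *st = (*fmt)->streams[*videoStream];

    const AVCodec *codec = avcodec_find_decoder(st->codecpar->codec_id);
    AVCodecContext *ctx = avcodec_alloc_context3(codec);
    avcodec_parameters_to_context(ctx, st->codecpar);
    avcodec_open2(ctx, codec, nullptr);
    return ctx;  // read packets with av_read_frame() and decode them
}
```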
If you really want to do this, check out the source code of Gnash. They have a renderer that uses OpenGL. However, rendering is just a small part of the job; you also have to decode audio/video, run ActionScript, etc., in order to run a Flash file.
It's so complicated that even Adobe didn't manage to get it right :)
If you want to play just some video, look at Banthar's answer; otherwise:
OpenGL is a no-frills drawing API. It gives you the computer equivalent of "pens and brushes" to draw on some framebuffer. Period. No higher-level functionality than that.
Flash is a really complex thing. It consists of a vector geometry object system and a script engine (ActionScript), and it provides sound and video de-/compression, etc. All this must be supported by a SWF player. At the moment there's only one fully featured SWF player, and that's the one Adobe makes. There are free alternatives (Lightspark, Gnash), but they are behind the official Flash Player by several major versions.
So the most viable thing to do would be to load the Flash Player browser plugin in your program through the plugin interface, provide it with what a browser provides to a plugin (DOM, HTTP transport, etc.), and have the plugin render to an offscreen buffer which you then copy to an OpenGL texture. But that's not very efficient.
TL;DR: Complicated as sh**, and probably not worth the effort.