How to hardware decode h264 video on Intel GPUs using GStreamer? - gstreamer

I'm new to GStreamer and hardware decoding and started with Playback tutorial 8: Hardware-accelerated video decoding.
As far as I understand, I need to use one of the hardware-accelerated plugins.
I would like to smoothly play back 1080p videos on OSX and Windows 10 for now, taking advantage of Intel GPUs for decoding (e.g. Intel Iris, Intel HD Graphics 620, etc.).
As far as I understand, I should be able to use either the VAAPI or OpenMAX plugins?
This is what I've tried on OSX so far, and where I got confused/stuck:
I've compiled gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-bad, and gst-plugins-ugly from source. After a few compile errors (due to some dependencies), that worked out.
I've attempted compiling gstreamer-vaapi and hit a dependency error (checking for LIBVA... configure: error: Package requirements (libva >= 0.39.0 libva != 0.99.0) were not met). I then tried to compile libva and was met with another dependency error: configure: error: Package requirements (libdrm >= 2.4) were not met, which led me to try to compile drm, which threw yet another dependency error: configure.ac:35: error: must install xorg-macros 1.12 or later before running autoconf/autogen. This is where I'm not sure how far the rabbit hole goes. Having to compile mesa/drm makes me think this is mainly for Linux and might not be supported on OSX? Is this correct? Can I use libva and gstreamer-vaapi to hardware-decode 1080p H264 videos on OSX? (What about Windows?)
Trying to compile gst-omx throws gstomxvideodec.c:41:10: fatal error: 'gst/gl/egl/gstglmemoryegl.h' file not found. I've found gst-plugins-base/gst-libs/gst/gl/egl/gstglmemoryegl.h, but I'm unsure how to point gst-omx to this location.
Is it possible to hardware-decode 1080p H264 videos on OSX and Windows with GStreamer on Intel GPUs? If so, what's the simplest method?
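For reference, here is a sketch of what I'd expect to try on macOS, assuming a GStreamer build that includes the applemedia plugin (its vtdec element uses VideoToolbox, which in turn uses the Intel GPU's decoder); the file path is a placeholder:

```shell
# macOS: playbin should auto-plug vtdec (VideoToolbox) when the
# applemedia plugin is present; VideoToolbox uses the Intel GPU's
# H.264 decoder. The URI below is a placeholder.
gst-launch-1.0 playbin uri=file:///path/to/video.mp4

# Verify which hardware decoder elements the build actually provides:
gst-inspect-1.0 vtdec                        # macOS / VideoToolbox
gst-inspect-1.0 2>/dev/null | grep -i 264    # all H.264-related elements
```

From what I can tell, libva/libdrm are Linux-only and gst-omx targets OpenMAX IL platforms such as the Raspberry Pi, so neither seems to be the expected route on OSX or Windows, but I'd appreciate confirmation.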

Related

OpenCL in OpenCV 3 does not see more than one GPU

I was trying to call the Nvidia GT 650M GPU on my MacBook Pro, whose index is 1, with the integrated GPU having index 0 (as found by running $ clinfo).
Just as mentioned in this StackOverflow question:
In OpenCV 3.0.0 beta [or above], only a single device is detected:
context.ndevices() // returns 1 instead of 2
When I run the code in the answer above, OpenCV always uses the first GPU, despite my setting cv::ocl::Device(context.device(1));
A discussion thread on the OpenCV forum says that this might be due to the OpenCL compatibility level (1.1) of the Nvidia GPU on the platform, but that is not the case for me.
I wrote a simple OpenCL program that creates a context on the Nvidia GPU and runs a kernel, and it succeeded. So it is not an OpenCL hardware compatibility issue (the Apple support article also confirms this).
What I tried that did not seem to help:
Turn off auto graphics switching in System Preferences.
Switch to Nvidia Web Driver (from macOS native driver) and install CUDA.
OpenCL: 1.2, on both GPUs
OpenCV: 3.4.1, from HomeBrew
OS: macOS High Sierra 10.13.4
Xcode: 9.3
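One more thing that might be worth trying, purely as an assumption on my part based on OpenCV's OpenCL module reading a device-selection environment variable in 3.x:

```shell
# Hypothetical workaround: OpenCV's OpenCL module honors the
# OPENCV_OPENCL_DEVICE variable, format <Platform>:<CPU|GPU>:<ID or name>.
# ":GPU:1" would request the second GPU (the GT 650M here).
export OPENCV_OPENCL_DEVICE=':GPU:1'
./my_opencv_app    # placeholder for the actual binary
```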

What version of ffmpeg does libvlc use?

I am writing an application on Linux (Debian) that uses both libvlc and ffmpeg.
Both run fine separately, but the moment I try to compile both functionalities into my app, libvlc stops working (it complains about not finding the codecs).
I have been able to isolate the problem: basically, libvlc runs fine until I compile the first line of ffmpeg code (av_register_all), at which point the linker brings in my own compiled ffmpeg lib, and the moment it does, libvlc stops playing the file. Obviously I have two conflicting ffmpeg libs on my system: the one libvlc is using, and the newer one I built myself to write ffmpeg code.
My question is, how do I make libvlc work with the newer library? Considering there were functions deprecated in the newer ffmpeg code, would that involve recompiling libvlc? Is the libvlc code up to date with the newest ffmpeg library(new function signatures)?
Any help is appreciated!
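A diagnostic along these lines might show which copy of the libraries each component resolves at run time (the plugin path is illustrative for a Debian amd64 system):

```shell
# Which libavcodec does libvlc's avcodec plugin link against?
ldd /usr/lib/x86_64-linux-gnu/vlc/plugins/codec/libavcodec_plugin.so | grep -i av

# Which copy does my own binary pull in?
ldd ./myapp | grep -i avcodec

# Version of the system-installed dev package (vs. the self-built one):
pkg-config --modversion libavcodec
```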

OpenGL - Using modern libraries

Upon successful compilation of a recent program I wrote from the openGL-book using OpenGL 4.0, I wasn't able to run the program due to an error that stated "error XX - unsupported hardware..".
However, in a previous question where I asked whether I could compile/run OpenGL programs on my computer, I got an answer that I could:
Wiki claims you can do GL 4.0 with your HD 4000 [Graphics Chip] on Windows.
My question is: I am using the libraries freeglut 2.8 and GLEW 1.10 (the newest versions), but the tutorial I followed used functions that came with OpenGL 4.0. Could the reason my program does not run be that I am linking modern versions of the OpenGL libraries?
Things you have to check to run modern OpenGL:
Graphics driver: Do you have the latest and most up to date drivers?
Graphics card/chipset: Can your graphics hardware support the latest OpenGL even with its most recent drivers?
Using Proper Hardware: Some laptops come with a low powered graphics chipset and a high powered alternate graphics card/chipset. The low powered one may not support new stuff, but the high powered one definitely should. Have you instructed your computer to use the right one?
Libraries: Have you properly linked to something like GLEW that gives you the ability to use modern OpenGL?
Since you're on Windows, do note that Windows purposefully doesn't give you preinstalled access to modern OpenGL, so you have to set it up yourself. Usually that just means checking your drivers and downloading GLEW.
From your error message, it looks like your graphics drivers aren't up to date or the graphics card/chipset/whatever you're using doesn't support the OpenGL version you want.
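For the first two checks, something like this can show what the driver actually exposes (glewinfo ships with GLEW; glxinfo is the Linux/X11 equivalent; exact invocation varies by platform):

```shell
# glewinfo (bundled with GLEW) dumps the GL version and every
# extension/entry point the current driver exposes:
glewinfo | head -n 20

# On Linux/X11 the equivalent quick check would be:
glxinfo | grep "OpenGL version"
```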

What is the difference between x264 and ffmpeg? [duplicate]

This question already has an answer here:
What is ffmpeg, avcodec, x264? [closed]
So I'm trying to compile the H.264 codec so that I can use it to enhance performance in NoMachine as per https://www.nomachine.com/AR10K00695
Instructions below deal with the following possible cases on server
host:
Case 1: You don't have x264 library already compiled
Case 2: You have x264 library already compiled
and on client host:
Case 1: You have FFmpeg already installed
Case 2: You don't have FFmpeg installed
Case 3: You have FFmpeg libraries already compiled
The weird thing is that it states that you compile x264 on the server and FFmpeg on the client. Shouldn't you either have x264 compiled on both the server and the client, or FFmpeg on both the server and the client?
Why use two different codecs for the server and the client?
We rewrote the article since it was not doing a good job of explaining the subtleties we wanted to present. The encoder and decoder in FFmpeg are different codecs, developed by different developers and under different licenses. FFmpeg provides an H.264 decoder in the default build, but not the encoder. Additionally, when FFmpeg is built with the H.264 encoder, the default build links the encoder statically, so that other applications can't use it. This means that in most cases the encoder must be built separately.
Anyway, this is not important for end-users :-) If you want to use H.264 on the NoMachine client, just install FFmpeg from the repository of your Linux distribution, or install a Windows or Mac build from one of the sites providing it. If you want to use H.264 on the server, install an FFmpeg package including libx264 as a shared library, or build it yourself using the instructions you find on the website.
Note also that NoMachine on Windows and Mac uses the codecs provided by the OS, which have the additional benefit of often being HW-accelerated, with the FFmpeg SW codecs used as a fallback in case no suitable encoder or decoder can be initialized.
The NoMachine Team
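As a quick way to verify what a given FFmpeg build provides (assuming the ffmpeg command-line tool is on your PATH):

```shell
# Was this FFmpeg build configured with the x264 encoder?
ffmpeg -version | grep -o -- '--enable-libx264'

# Which H.264 encoders/decoders does this build expose?
ffmpeg -hide_banner -encoders | grep -i 264
ffmpeg -hide_banner -decoders | grep -i 264
```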

OpenCV2.4 Error: No GPU support in unknown function file

I am running Visual Studio 2010 and have built OpenCV 2.4 with CMake 2.8. During the configuration I set:
WITH_CUDA flag on
CUDA_SDK_ROOT_DIR: C:/ProgramData/NVIDIA Corporation/NVIDIA GPU Computing SDK 4.2
CUDA_TOOLKIT_ROOT_DIR: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v4.2
and then built the whole project in Visual Studio successfully.
I am using an NVIDIA Quadro 5000 and have tested the examples in "OpenCV-2.4.0-GPU-demos-pack-win32", all of which work without any error.
The core and highgui library functions work fine too, but I can't run anything related to GPU functions in OpenCV.
This code returns 0, which according to the documentation means no device has been found:
int deviceCount =cv::gpu::getCudaEnabledDeviceCount();
std::cout << "index " << deviceCount <<"\n";
which is the same device number as from the GPU demos pack examples, but any other gpu function shows me the following error:
OpenCV Error: No GPU support in unknown function file c:\slave\wininstallerMegaPack\src\opencv\modules\core\src\gpumat.cpp,line193
Anybody have any idea? Please let me know. Thanks.
OpenCV 2.4 is still in beta and is not ready to be used for serious projects. It has several build problems on Windows and Mac OS X as far as I could test.
I suggest you stick with the 2.3.1 which is the last stable release. Don't use the 2.4 unless there's a feature in there that you really really need.
EDIT:
By the way, OpenCV 2.3.1 only supports CUDA 4.0.
Run deviceQuery.exe from the CUDA SDK (CUDA sdk 4.1\C\bin\win32\Release) and check the compute capability value of your card.
Then, in CMake for OpenCV, check that CUDA_ARCH_BIN includes this value.
Earlier cards only did compute capability 1.1 and don't have ARCH_PTX (the newer CUDA binary format). It's possible to make OpenCV build only for the new format, which doesn't need as much runtime compilation.
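Sketched as commands (the Quadro 5000 is a Fermi-class card, so 2.0 is the compute capability value I'd expect deviceQuery to report; the variables are OpenCV's CMake cache entries):

```shell
# 1. From the CUDA SDK's bin\win32\Release folder, run deviceQuery.exe
#    and note "CUDA Capability Major/Minor version" (2.0 for a Quadro 5000).

# 2. Re-run CMake so OpenCV emits binaries for that architecture:
cmake -D WITH_CUDA=ON -D CUDA_ARCH_BIN="2.0" -D CUDA_ARCH_PTX="2.0" ..
```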
You are saying that you built OpenCV yourself, but the file path from the error message (c:\slave\wininstallerMegaPack\...) clearly indicates that you are using the prebuilt OpenCV from SourceForge. If you really have built OpenCV yourself, then you have to troubleshoot your environment and find out why the wrong binaries are used. (The simplest thing you can do: remove any OpenCV binaries from your PC and make a clean full build of both OpenCV and your app.)
OpenCV 2.4 betas have a packaging bug making gpu-enabled binaries useless. So you have to rebuild the library from source or use OpenCV 2.3.1 (CUDA 4.0 indeed).
The GPU demos pack is tricky: it has its own copy of all the binaries it might need. However, it cannot be used for development.
The final OpenCV 2.4 release is expected in a few days. The Windows package will include working CUDA binaries.
EDIT:
OpenCV 2.4.0 is out!