Encoding RGB to H.264 - C++

What I am trying to do is record the screen on Windows XP and Windows 7. I get the bitmap using DirectX's CreateOffscreenPlainSurface and GetFrontBufferData. I need to encode these bitmaps into H.264 video. The problem is that the captured bitmap is in the D3DFMT_A8R8G8B8 format, but the H.264 Video Encoder only supports MFVideoFormat_I420, MFVideoFormat_IYUV, MFVideoFormat_NV12, MFVideoFormat_YUY2 and MFVideoFormat_YV12 as input. My question is: do I need to convert the format myself (I would rather not)? Are there any better solutions for this?

Your input format, D3DFMT_A8R8G8B8, corresponds to MFVideoFormat_ARGB32.
The stock OS component that handles this conversion is the Video Processor MFT. I don't see availability information in the footer of the MSDN article; however, I am under the impression that this MFT comes with Windows Vista, just like the rest of the Media Foundation API.
On Windows XP there has been a similar Color Converter DSP, which offers very similar services and exposes a closely related interface as a DirectX Media Object (DMO). It is available in all more recent operating systems as well; however, it is software-only and never leverages GPU capabilities for the conversion.
Either of these can handle the requested format conversion for you.
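As a rough illustration (not a drop-in implementation), setting up such a conversion MFT can look like the sketch below. It uses the Color Converter DSP CLSID; the frame size is a placeholder and error handling is abbreviated.

```cpp
// Sketch: configure the stock Color Converter DSP (usable as an MFT) to turn
// ARGB32 frames into NV12 for the H.264 encoder. MFStartup/CoInitialize and
// the per-frame ProcessInput/ProcessOutput calls are omitted for brevity.
#include <mfapi.h>
#include <mftransform.h>
#include <wmcodecdsp.h>   // CLSID_CColorConvertDMO
#include <atlbase.h>

HRESULT CreateArgbToNv12Converter(UINT32 width, UINT32 height,
                                  CComPtr<IMFTransform>& converter)
{
    HRESULT hr = converter.CoCreateInstance(CLSID_CColorConvertDMO);
    if (FAILED(hr)) return hr;

    CComPtr<IMFMediaType> inputType;
    hr = MFCreateMediaType(&inputType);
    if (FAILED(hr)) return hr;
    inputType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
    inputType->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_ARGB32);   // what the captured surface holds
    MFSetAttributeSize(inputType, MF_MT_FRAME_SIZE, width, height);
    hr = converter->SetInputType(0, inputType, 0);
    if (FAILED(hr)) return hr;

    CComPtr<IMFMediaType> outputType;
    hr = MFCreateMediaType(&outputType);
    if (FAILED(hr)) return hr;
    outputType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
    outputType->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_NV12);    // accepted by the H.264 encoder
    MFSetAttributeSize(outputType, MF_MT_FRAME_SIZE, width, height);
    return converter->SetOutputType(0, outputType, 0);
}
```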
Also, for reference, the H.264 Video Encoder itself was only introduced with Windows 7.

Related

Media Foundation: Custom Topology with Direct3D 11

I am having to build a video topology manually, which includes loading and configuring the mpeg2videoextension (decoder). Otherwise the default topology loader fails to resolve the video stream automatically. I am using the default topology loader to resolve the rest of the topology.
Since I am loading the decoder manually, the docs say that I am responsible for giving the decoder the hardware acceleration manager. (This decoder is D3D11 aware.) If I create a DXGI device and then create the manager in code, I can pass the manager to the decoder, and it seems to work.
The docs also say, however, that "In a Media Session scenario, the video renderer creates the Direct3D 11 device."
If this is the case, how do I get a handle to that device? I assume I should be using that device in the device manager to pass into the decoder.
I'm going around in circles. All of the sample code uses IDirect3DDeviceManager9, and I am unable to get those samples to work, so I decided to use Direct3D 11 instead. But I can't find any sample code that uses 11.
Can someone point me in the right direction?
Microsoft does not give a good solution for this challenge. Indeed, the standard video renderer for Media Foundation is the EVR, and it is "aware" of Direct3D 9 only, so you cannot combine it with the decoder using a common DXGI device manager. Newer Microsoft applications use a different Direct3D 11 aware renderer, which is not published as an API: you can take advantage of these rendering services as part of wrapping APIs such as UWP or an HTML5 media element playing video. The MPEG-2 decoder extension primarily targets these scenarios, leaving you with a problem if you are plugging it into older Media Foundation topologies.
I can think of a few solutions to this problem, none of which sound exactly perfect:
Stop using the EVR and use DX11VideoRenderer instead: Microsoft gives a starting point with this sample, and you are on your own to establish the required wiring to share the DXGI device manager (see the sketch after this list).
Use multiple Direct3D devices and transfer video frames between the two; there should be graphics API interop to help with the transfer in an efficient way, but overall this looks like rather wasteful work as of 2020, even though it is doable. This path looks more or less acceptable if you can accept the performance hit of transferring through system memory, which makes things a tad easier to implement.
Stop using the MPEG-2 decoder extension and implement your own decoder on top of the lower-level DXVA2 API: a hardware-assisted decoder without a fallback to software, in which case you have more control over using GPU services and fitting to the renderer's device.
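For the first option, the wiring for sharing a DXGI device manager with the manually loaded, D3D11-aware decoder looks roughly like the sketch below. Function and variable names are mine; the decoder is assumed to already exist, and production code should also enable multithread protection on the device before sharing it.

```cpp
// Sketch: create a D3D11 device, wrap it in a DXGI device manager and hand
// the manager to the D3D11-aware decoder MFT. Error handling is abbreviated.
#include <d3d11.h>
#include <mfapi.h>
#include <mftransform.h>
#include <atlbase.h>

HRESULT AttachDeviceManager(IMFTransform* decoder,
                            CComPtr<IMFDXGIDeviceManager>& manager)
{
    CComPtr<ID3D11Device> device;
    UINT flags = D3D11_CREATE_DEVICE_VIDEO_SUPPORT | D3D11_CREATE_DEVICE_BGRA_SUPPORT;
    HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, flags,
                                   nullptr, 0, D3D11_SDK_VERSION,
                                   &device, nullptr, nullptr);
    if (FAILED(hr)) return hr;

    UINT resetToken = 0;
    hr = MFCreateDXGIDeviceManager(&resetToken, &manager);
    if (FAILED(hr)) return hr;
    hr = manager->ResetDevice(device, resetToken);
    if (FAILED(hr)) return hr;

    // The same manager instance is what the renderer (e.g. DX11VideoRenderer)
    // would share; here it is handed to the decoder.
    return decoder->ProcessMessage(MFT_MESSAGE_SET_D3D_MANAGER,
                                   reinterpret_cast<ULONG_PTR>(manager.p));
}
```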

UWP Hardware Video Decoding - DirectX12 vs Media Foundation

I would like to use DirectX 12 to load each frame of an H.264 file into a texture and render it. There is, however, little to no information on doing this, and the Microsoft website has only limited, superficial documentation.
Media Foundation has plenty of examples and offers hardware-enabled decoding. Is Media Foundation a wrapper around DirectX, or is it doing something else?
If not, how much less optimised would the Media Foundation equivalent be in comparison to a DX 12 approach?
Essentially, what are the big differences between Media Foundation and DirectX12 Video Decoding?
I am already using DirectX 12 in my engine so this is specifically regarding DX12.
Thanks in advance.
Hardware video decoding comes from the DXVA (DXVA2) API. Its DirectX 11 evolution is the D3D11 video device, part of the D3D11 API. Microsoft provides wrappers over hardware-accelerated decoders in the form of Media Foundation API primitives, such as the H.264 Video Decoder. This decoder offers use of hardware decoding capabilities as well as a fallback to software decoding.
Note that even though Media Foundation is available for UWP development, your options are limited and you are not offered primitives like the transform mentioned above directly. However, if you use higher-level APIs (the Media Foundation Source Reader API in particular), you can leverage hardware-accelerated video decoding in your UWP application.
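For example, a Source Reader can be asked to prefer hardware decoders at creation time. A minimal sketch follows; the URL is a placeholder and MFStartup is assumed to have been called already.

```cpp
// Sketch: create a Source Reader that opts into hardware MFTs for decoding.
#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>
#include <atlbase.h>

HRESULT CreateHardwareReader(PCWSTR url, CComPtr<IMFSourceReader>& reader)
{
    CComPtr<IMFAttributes> attrs;
    HRESULT hr = MFCreateAttributes(&attrs, 2);
    if (FAILED(hr)) return hr;

    // Prefer hardware transforms (e.g. the GPU H.264 decoder) when available.
    attrs->SetUINT32(MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS, TRUE);
    // Let the reader insert converters (color conversion, deinterlacing) as needed.
    attrs->SetUINT32(MF_SOURCE_READER_ENABLE_ADVANCED_VIDEO_PROCESSING, TRUE);

    return MFCreateSourceReaderFromURL(url, attrs, &reader);
}
```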
The Media Foundation implementation offers interoperability with Direct3D 11, in particular for video encoding/decoding, but not with Direct3D 12. You will not be able to use Media Foundation and DirectX 12 together out of the box; you will have to implement Direct3D 11/12 interop to transfer the data between the APIs (or, where applicable, use shared access to the same GPU data).
Alternatively, you can step down to the underlying ID3D12VideoDevice::CreateVideoDecoder, which is a further evolution of the DXVA2 and Direct3D 11 video decoding APIs mentioned above, with similar usage.
Unfortunately, while Media Foundation is notorious for poor documentation and a hard start to development, Direct3D 12 video decoding has essentially zero information available, and you will have to enjoy the feeling of being a pioneer.
Either way, all of the mentioned options are relatively thin wrappers over the hardware-assisted video decoding implementation, with the same great performance. I would recommend taking the Media Foundation path and implementing 11/12 interop if/when it becomes necessary.
You will get a lot of D3D12 errors caused by Media Foundation if you pass a D3D12 device to IMFDXGIDeviceManager::ResetDevice.
The errors can be avoided if you call IMFSourceReader::ReadSample slowly. It doesn't matter whether you use the method in synchronous or asynchronous mode; how slowly you need to call it depends on the machine that runs the program. I use ::Sleep(1) between ReadSample calls in sync mode when playing a stream from the network, and ::Sleep(3) in sync mode when playing a local mp4 file on my machine.
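For what it's worth, the throttled synchronous loop I describe looks roughly like this; the sleep values are purely empirical for my machine, not a documented requirement.

```cpp
// Sketch: synchronous ReadSample loop with a small sleep between calls.
#include <mfapi.h>
#include <mfreadwrite.h>
#include <windows.h>
#include <atlbase.h>

void ReadLoop(IMFSourceReader* reader)
{
    for (;;)
    {
        DWORD streamIndex = 0, streamFlags = 0;
        LONGLONG timestamp = 0;
        CComPtr<IMFSample> sample;
        HRESULT hr = reader->ReadSample(MF_SOURCE_READER_FIRST_VIDEO_STREAM, 0,
                                        &streamIndex, &streamFlags, &timestamp, &sample);
        if (FAILED(hr) || (streamFlags & MF_SOURCE_READERF_ENDOFSTREAM))
            break;
        if (sample)
        {
            // ... hand the decoded frame to the renderer ...
        }
        ::Sleep(1);   // empirical: 1 ms for a network stream, 3 ms for a local mp4 here
    }
}
```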
Don't ask who I am. My name is 'the pioneer'.

custom opengl compressed texture

I am developing mobile games on Windows. Our image resources are in the PVR-TC 4 format. When we run our game on the simulator, images are decoded by the CPU, which is really slow, as our PC graphics card doesn't support decoding them on the GPU. Is it possible to make PC OpenGL support PVR-TC or ETC hardware decoding?
You cannot force an implementation to implement a particular extension or image format.
Your best bet is to convert the images yourself offline. That is, instead of loading images of a format your hardware can't handle, load images of the format that it can.
After all, it's not like the images are originally in PVRTC format, right? They were originally authored in a regular format like PNG or whatever, then converted to PVRTC. So just add another conversion for S3TC or whatever format desktop hardware actually supports.
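For instance, once the assets have been converted offline to a desktop-friendly format such as S3TC/DXT5, uploading them is a single call. This sketch assumes an extension loader (GLAD, GLEW or similar) already exposes the GL 1.3+ entry points on Windows, and that the raw compressed payload comes from your own file parser.

```cpp
// Sketch: upload an offline-converted DXT5 (S3TC) image to desktop OpenGL.
#include <glad/glad.h>   // assumption: any loader exposing GL 1.3+ entry points works here

GLuint UploadDxt5Texture(const void* data, GLsizei size, GLsizei width, GLsizei height)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    // The driver keeps the data compressed; decoding happens in the texture units.
    glCompressedTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_S3TC_DXT5_EXT,
                           width, height, 0, size, data);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}
```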

How to use Intel Hardware MJPEG Decoder MFT in Media Foundation SourceReader for a Windows desktop application?

I'm developing a USB camera streaming desktop application using the Media Foundation SourceReader technique. The camera has USB 3.0 support and delivers 60 fps at 1080p in the MJPG video format.
I used the software MJPEG Decoder MFT to convert MJPG to YUY2 frames and then converted those into RGB32 frames to draw on the window. Instead of 60 fps, I'm able to render only 30 fps on the window when using this software decoder. I posted a question on this site and got a suggestion to use the Intel Hardware MJPEG Decoder MFT to solve the frame-drop issue.
I faced the error 0xC00D36B5 - MF_E_NOTACCEPTING when calling the IMFTransform::ProcessInput() method. To solve this error, MSDN suggested using the IMFTransform interface asynchronously. So, I used the IMFMediaEventGenerator interface to get an event for every in/out sample. I can successfully process only one input sample, and after that IMFMediaEventGenerator::GetEvent() continuously returns the MF_E_NO_EVENTS_AVAILABLE error (GetEvent() is synchronous).
I tried to configure an asynchronous callback for the SourceReader as well as the IMFTransform, but the IMFAsyncCallback::Invoke method is never invoked, hence I planned to use the GetEvent method.
Am I missing anything? If yes, can someone guide me on using the Intel hardware decoder in my project?
The Intel Hardware MJPEG Decoder MFT is an asynchronous MFT, and if you are managing it directly, you are responsible for applying the asynchronous model. You seem to be doing this, but you don't provide information that allows nailing the problem down. Yes, you are supposed to use the event model described in the ProcessInput and ProcessOutput sections of the article linked above. Since you can already get the first frame, you should debug further to make it work with smooth continuous processing.
When you use APIs like the Media Session or Source Reader, Media Foundation itself deals with the MFTs. It is capable of both synchronous and asynchronous consumption where appropriate. In that case, however, you don't make IMFTransform calls yourself, and even from your vague description it appears you are doing it the wrong way.
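To make the event model concrete, the drive loop for an asynchronous MFT looks roughly like the sketch below. Names are mine; it assumes the decoder provides its own output samples (as hardware MFTs typically do) and omits drain/flush handling.

```cpp
// Sketch: drive an asynchronous MFT via METransformNeedInput / METransformHaveOutput.
#include <mfapi.h>
#include <mftransform.h>
#include <mferror.h>
#include <atlbase.h>
#include <functional>

HRESULT DriveAsyncMft(IMFTransform* mft,
                      const std::function<CComPtr<IMFSample>()>& getNextInput,
                      const std::function<void(IMFSample*)>& onDecodedFrame)
{
    CComQIPtr<IMFMediaEventGenerator> events(mft);
    if (!events) return E_NOINTERFACE;

    for (;;)
    {
        CComPtr<IMFMediaEvent> event;
        // Blocking GetEvent (flags = 0): wait until the MFT asks for work.
        HRESULT hr = events->GetEvent(0, &event);
        if (FAILED(hr)) return hr;

        MediaEventType type = MEUnknown;
        event->GetType(&type);

        if (type == METransformNeedInput)
        {
            CComPtr<IMFSample> input = getNextInput();   // e.g. next MJPEG sample
            if (!input) return S_OK;                     // no more input
            hr = mft->ProcessInput(0, input, 0);
            if (FAILED(hr)) return hr;
        }
        else if (type == METransformHaveOutput)
        {
            MFT_OUTPUT_DATA_BUFFER output = {};          // pSample left null: the MFT allocates
            DWORD status = 0;
            hr = mft->ProcessOutput(0, 1, &output, &status);
            if (SUCCEEDED(hr) && output.pSample)
            {
                onDecodedFrame(output.pSample);
                output.pSample->Release();
            }
            if (output.pEvents) output.pEvents->Release();
        }
    }
}
```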

DXGI Desktop Duplication: encoding frames to send them over the network

I'm trying to write an app which will capture a video stream of the screen and send it to a remote client. I've found out that the best way to capture a screen on Windows is to use DXGI Desktop Duplication API (available since Windows 8). Microsoft provides a neat sample which streams duplicated frames to screen. Now, I've been wondering what is the easiest, but still relatively fast way to encode those frames and send them over the network.
The frames come from AcquireNextFrame with a surface that contains the desktop bitmap and metadata which contains dirty and move regions that were updated. From here, I have a couple of options:
Extract a bitmap from a DirectX surface and then use an external library like ffmpeg to encode the series of bitmaps to H.264 and send it over RTSP. While straightforward, I fear that this method will be too slow, as it isn't taking advantage of any native Windows methods. Converting a D3D texture to an ffmpeg-compatible bitmap seems like unnecessary work.
From this answer: convert the D3D texture to an IMFSample and use Media Foundation's SinkWriter to encode the frame. I found this tutorial on video encoding, but I haven't yet found a way to immediately get the encoded frame and send it, instead of dumping all of them to a video file.
Since I haven't done anything like this before, I'm asking if I'm moving in the right direction. In the end, I want to have a simple, preferably low latency desktop capture video stream, which I can view from a remote device.
Also, I'm wondering if I can make use of dirty and move regions provided by Desktop Duplication. Instead of encoding the frame, I can send them over the network and do the processing on the client side, but this means that my client has to have DirectX 11.1 or higher available, which is impossible if I would want to stream to a mobile platform.
You can use the IMFTransform interface for H.264 encoding. Once you get an IMFSample from the ID3D11Texture2D, just pass it to IMFTransform::ProcessInput and get the encoded IMFSample from IMFTransform::ProcessOutput.
Refer to this example for encoding details.
Once you have the encoded IMFSamples, you can send them one by one over the network.
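A hedged sketch of that ProcessInput/ProcessOutput step is below. Encoder creation, media type negotiation (and any BGRA to NV12 conversion the encoder may require) are omitted, and the sample duration is an assumption for ~60 fps.

```cpp
// Sketch: wrap the duplicated ID3D11Texture2D in an IMFSample and push it
// through an H.264 encoder MFT, pulling one encoded sample back out.
#include <mfapi.h>
#include <mftransform.h>
#include <mferror.h>
#include <d3d11.h>
#include <atlbase.h>

HRESULT EncodeFrame(IMFTransform* encoder, ID3D11Texture2D* frame,
                    LONGLONG timestamp, CComPtr<IMFSample>& encodedOut)
{
    // Wrap the GPU texture in a media buffer and sample (no CPU copy).
    CComPtr<IMFMediaBuffer> buffer;
    HRESULT hr = MFCreateDXGISurfaceBuffer(__uuidof(ID3D11Texture2D), frame, 0, FALSE, &buffer);
    if (FAILED(hr)) return hr;

    CComPtr<IMFSample> sample;
    hr = MFCreateSample(&sample);
    if (FAILED(hr)) return hr;
    sample->AddBuffer(buffer);
    sample->SetSampleTime(timestamp);      // in 100-ns units
    sample->SetSampleDuration(166667);     // ~60 fps, an assumption

    hr = encoder->ProcessInput(0, sample, 0);
    if (FAILED(hr)) return hr;

    // Provide an output sample ourselves (a software encoder will not allocate one).
    MFT_OUTPUT_STREAM_INFO info = {};
    encoder->GetOutputStreamInfo(0, &info);

    CComPtr<IMFSample> outSample;
    CComPtr<IMFMediaBuffer> outBuffer;
    MFCreateSample(&outSample);
    MFCreateMemoryBuffer(info.cbSize ? info.cbSize : 1024 * 1024, &outBuffer);
    outSample->AddBuffer(outBuffer);

    MFT_OUTPUT_DATA_BUFFER output = {};
    output.pSample = outSample;
    DWORD status = 0;
    hr = encoder->ProcessOutput(0, 1, &output, &status);
    if (hr == MF_E_TRANSFORM_NEED_MORE_INPUT) return S_FALSE;   // encoder is still buffering
    if (FAILED(hr)) return hr;

    encodedOut = outSample;   // this is what you send over the network
    return S_OK;
}
```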