How to use hardware acceleration with FFmpeg - C++

I need to have ffmpeg decode my video (e.g. h264) using hardware acceleration. I'm using the usual way of decoding frames: read packet -> decode frame. And I'd like to have ffmpeg speed up decoding. So I've built it with --enable-vaapi and --enable-hwaccel=h264, but I don't really know what I should do next. I've tried to use avcodec_find_decoder_by_name("h264_vaapi"), but it returns nullptr.
Anyway, I might want to use other APIs, not just VA-API. How is one supposed to speed up ffmpeg decoding?
P.S. I couldn't find any examples on the Internet that use ffmpeg with hwaccel.

After some investigation I was able to implement the necessary HW-accelerated decoding on OS X (VDA) and Linux (VDPAU). I will update the answer when I get my hands on the Windows implementation as well.
So let's start with the easiest:
Mac OS X
To get HW acceleration working on Mac OS you should just use the following:
avcodec_find_decoder_by_name("h264_vda");
Note, however, that with FFmpeg you can only accelerate h264 videos on Mac OS.
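As with any decoder lookup, it is worth checking the return value and falling back to the plain software decoder. A minimal sketch (stream setup and error handling omitted):
// Prefer the VDA-backed decoder, fall back to the software h264 decoder.
AVCodec* codec = avcodec_find_decoder_by_name("h264_vda");
if (!codec)
    codec = avcodec_find_decoder(AV_CODEC_ID_H264); // software fallback

AVCodecContext* codecContext = avcodec_alloc_context3(codec);
// ... fill codecContext from the stream's parameters as usual ...
if (avcodec_open2(codecContext, codec, nullptr) < 0) {
    // handle the error
}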
Linux VDPAU
On Linux things are much more complicated (who is surprised?). FFmpeg has two HW accelerators on Linux, VDPAU (Nvidia) and VAAPI (Intel), and only one HW decoder: for VDPAU. And it may seem perfectly reasonable to use the vdpau decoder like in the Mac OS example above:
avcodec_find_decoder_by_name("h264_vdpau");
You might be surprised to find out that it doesn't change anything and you get no acceleration at all. That's because it is only the beginning: you have to write much more code to get the acceleration working. Happily, you don't have to come up with a solution on your own: there are at least two good examples of how to achieve it, libavg and FFmpeg itself. libavg has a VDPAUDecoder class which is perfectly clear and on which I've based my implementation. You can also consult ffmpeg_vdpau.c for another implementation to compare against. In my opinion the libavg implementation is easier to grasp, though.
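To give a flavour of the extra code involved, one piece is the pixel-format negotiation callback on the codec context. This is only a sketch (AV_PIX_FMT_VDPAU is an assumption here; the exact format depends on the FFmpeg version, and the hwaccel context plus the buffer callbacks from libavg's VDPAUDecoder are still needed on top of it):
// Sketch: pick the VDPAU hardware pixel format if the decoder offers one,
// otherwise stay with the first (software) format in the list.
static AVPixelFormat pickVdpauFormat(AVCodecContext*, const AVPixelFormat* formats)
{
    for (const AVPixelFormat* p = formats; *p != AV_PIX_FMT_NONE; ++p) {
        if (*p == AV_PIX_FMT_VDPAU)
            return *p;
    }
    return formats[0];
}

// ...
codecContext->opaque = this;                // so the callbacks can reach the decoder object
codecContext->get_format = pickVdpauFormat;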
The only thing both aforementioned examples lack is proper copying of the decoded frame to main memory. Both examples use VdpVideoSurfaceGetBitsYCbCr, which killed all the performance I had gained on my machine. That's why you might want to use the following procedure to extract the data from the GPU:
bool VdpauDecoder::fillFrameWithData(AVCodecContext* context, AVFrame* frame)
{
    VdpauDecoder* vdpauDecoder = static_cast<VdpauDecoder*>(context->opaque);
    VdpOutputSurface surface;
    vdp_output_surface_create(m_VdpDevice, VDP_RGBA_FORMAT_B8G8R8A8,
                              frame->width, frame->height, &surface);
    auto renderState = reinterpret_cast<vdpau_render_state*>(frame->data[0]);
    VdpVideoSurface videoSurface = renderState->surface;

    bool ok = false;
    auto status = vdp_video_mixer_render(vdpauDecoder->m_VdpMixer,
                                         VDP_INVALID_HANDLE,
                                         nullptr,
                                         VDP_VIDEO_MIXER_PICTURE_STRUCTURE_FRAME,
                                         0, nullptr,
                                         videoSurface,
                                         0, nullptr,
                                         nullptr,
                                         surface,
                                         nullptr, nullptr, 0, nullptr);
    if(status == VDP_STATUS_OK)
    {
        auto tmframe = av_frame_alloc();
        tmframe->format = AV_PIX_FMT_BGRA;
        tmframe->width  = frame->width;
        tmframe->height = frame->height;
        if(av_frame_get_buffer(tmframe, 32) >= 0)
        {
            status = vdp_output_surface_get_bits_native(surface, nullptr,
                reinterpret_cast<void* const*>(tmframe->data),
                reinterpret_cast<const uint32_t*>(tmframe->linesize));
            if(status == VDP_STATUS_OK && av_frame_copy_props(tmframe, frame) == 0)
            {
                av_frame_unref(frame);
                av_frame_move_ref(frame, tmframe); // tmframe is left empty
                ok = true;
            }
        }
        av_frame_free(&tmframe); // safe in both cases: either empty or unused
    }
    vdp_output_surface_destroy(surface);
    return ok;
}
While it uses some "external" objects inside, you should be able to understand it once you have implemented the "get buffer" part (for which the aforementioned examples are of great help). Also, I've used the BGRA format because it was more suitable for my needs; you may choose another.
The problem with all of this is that you can't just get it working from FFmpeg alone; you need to understand at least the basics of the VDPAU API. I hope my answer will aid someone in implementing HW acceleration on Linux. I spent a lot of time on it myself before I realized that there is no simple, one-line way of getting HW-accelerated decoding on Linux.
Linux VA-API
Since my original question was about VA-API, I can't leave it unanswered.
First of all, there is no VA-API decoder in FFmpeg, so avcodec_find_decoder_by_name("h264_vaapi") doesn't make any sense: it returns nullptr.
I don't know how much harder (or maybe simpler?) it is to implement decoding via VA-API, since all the examples I've seen were quite intimidating. So I chose not to use VA-API at all, even though I had to implement the acceleration for an Intel card. Fortunately for me, there is a VDPAU library (driver?) which works on top of VA-API, so you can use VDPAU on Intel cards!
I used the following link to set it up on my Ubuntu.
Also, you might want to check the comments on the original question, where @Timothy_G also mentioned some links regarding VA-API.

Related

FFmpeg, decode mp3 packet after sending by network

Good day, everyone.
Working with FFmpeg.
I have some issues with decoding that I cannot find answers to in the docs and forums.
The code is not simple, so I will try to explain it in words first; maybe someone really skilled in FFmpeg will understand the issue from the description. If the code really helps, I will try to post it.
First, in general, what I want to do: capture voice, encode it to mp3, get the packet and send it over the network; on the other side, accept the packet, decode it and play it. Why not use FFmpeg streaming? Because the packet will be modified a little, and may be encoded/encrypted, so FFmpeg has no functions for this and I have to do it manually.
What I have managed to do so far: I can encode and decode via a file. That code works fine, without any issues. So in general I can encode, decode and play mp3, and I assume my encoding/decoding code works fine.
Then I changed the code so that nothing is saved to a file and the packets are sent over the network instead.
This is the standard method I use to send a packet to the decoder on the receiving side:
result = avcodec_send_packet( codecContext, networkPacket );
if( result < 0 ) {
    if( result != AVERROR(EAGAIN) ) {
        qDebug() << "Some decoding error occurred: " << result;
        return;
    }
}
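The receiving half is the usual avcodec_receive_frame loop, sketched here only for context, since the error already happens at avcodec_send_packet:
// Drain whatever frames the decoder has ready (error handling trimmed).
AVFrame* frame = av_frame_alloc();
while( avcodec_receive_frame( codecContext, frame ) == 0 ) {
    // ... play / enqueue the decoded audio here ...
    av_frame_unref( frame );
}
av_frame_free( &frame );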
networkPacket is the AVPacket restored from the network:
AVPacket* networkPacket = NULL;
networkPacket = av_packet_alloc();
...
And this is the way I restore it:
void FFmpegPlay::processNetworkPacket( MediaPacket* mediaPacket ) {
    qDebug() << "FFmpegPlay::processNetworkPacket start";
    int result;

    AVPacket* networkPacket = av_packet_alloc();
    networkPacket->size = mediaPacket->data.size();
    networkPacket->data = (uint8_t*) malloc( mediaPacket->data.size() + AV_INPUT_BUFFER_PADDING_SIZE );
    memcpy( networkPacket->data, mediaPacket->data.data(), mediaPacket->data.size() );
    networkPacket->pts = mediaPacket->pts;
    networkPacket->dts = mediaPacket->dts;
    networkPacket->flags = mediaPacket->flags;
    networkPacket->duration = mediaPacket->duration;
    networkPacket->pos = mediaPacket->pos;
    ...
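(For reference, av_new_packet allocates a padded, reference-counted payload in one call instead of a raw malloc; a minimal sketch of the same setup:)
// Same packet setup via av_new_packet, which adds AV_INPUT_BUFFER_PADDING_SIZE
// and reference counting for us.
AVPacket* pkt = av_packet_alloc();
if( av_new_packet( pkt, mediaPacket->data.size() ) == 0 ) {
    memcpy( pkt->data, mediaPacket->data.data(), mediaPacket->data.size() );
    pkt->pts = mediaPacket->pts;
    pkt->dts = mediaPacket->dts;
}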
And there I get -22, EINVAL, invalid argument.
Docs tell me:
AVERROR(EINVAL): codec not opened, it is an encoder, or requires flush
Well, my codec really is opened, it is a decoder, and this is the first call, so I think a flush is not required. So I assume the issue is in the packet and codec setup. I also tried different flags and always get this error. The codec just doesn't want to accept the packet.
So, now I have explained the situation.
And the question is: are there any special options or flags for the FFmpeg mp3 decoder to implement what is explained above? Which of them should I change?
Update:
After some testing, I decided to make a cleaner test and check whether I can decode immediately after encoding, without the network, and it looks like I can.
So it looks like in the network case the decoder should be initialized in some special way, or needs some options.
I'm handling initialization by copying the AVCodecParameters from the original and sending them over the network. Maybe I should change them in some special way?
I'm stuck on this one and have no idea how to deal with it, so any help is appreciated.

Low quality H.265 Media Foundation encoding?

I'm trying to encode video with MF H.265, and no matter what I try, the quality is always lower than a same-settings video produced by non-MF encoders, like what VideoPad uses (say, ffmpeg), at the same 4000 bitrate.
VideoPad produces this video of a swimming boy. My app produces this video. The sky in my app's output is clearly worse at a 6K bitrate, whereas VideoPad's is at 1K.
pMediaTypeOutVideo->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
pMediaTypeOutVideo->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_HEVC);
pMediaTypeOutVideo->SetUINT32(MF_MT_AVG_BITRATE, 4000000);
MFSetAttributeSize(pMediaTypeOutVideo, MF_MT_FRAME_SIZE, 1920,1080);
MFSetAttributeRatio(pMediaTypeOutVideo, MF_MT_FRAME_RATE, 25, 1);
MFSetAttributeRatio(pMediaTypeOutVideo, MF_MT_PIXEL_ASPECT_RATIO, 1, 1);
pMediaTypeOutVideo->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive);
pMediaTypeOutVideo->SetUINT32(MF_MT_VIDEO_NOMINAL_RANGE, MFNominalRange_Wide);
CComPtr<ICodecAPI> ca;
hr = pSinkWriter->GetServiceForStream(OutVideoStreamIndex, GUID_NULL, __uuidof(ICodecAPI), (void**)&ca);
if (ca)
{
    if (true)
    {
        VARIANT v = {};
        v.vt = VT_BOOL;
        v.boolVal = VARIANT_FALSE;
        ca->SetValue(&CODECAPI_AVLowLatencyMode, &v);
    }
    if (true)
    {
        VARIANT v = {};
        v.vt = VT_UI4;
        v.ulVal = 100;
        hr = ca->SetValue(&CODECAPI_AVEncCommonQualityVsSpeed, &v);
    }
    if (true)
    {
        VARIANT v = {};
        v.vt = VT_UI4;
        v.ulVal = eAVEncCommonRateControlMode_Quality;
        ca->SetValue(&CODECAPI_AVEncCommonRateControlMode, &v);
        if (true)
        {
            VARIANT v = {};
            v.vt = VT_UI4;
            v.ulVal = 100;
            ca->SetValue(&CODECAPI_AVEncCommonQuality, &v);
        }
    }
}
No matter what, the quality at 4000k remains inferior to what ffmpeg produces. Also, eAVEncCommonRateControlMode_Quality and CODECAPI_AVEncCommonQuality do not seem to take effect (they work in H.264). The only way to get better quality is to raise the bitrate.
Also, the speed parameter does not seem to affect the quality or the encoder speed.
Even at 1000k the VideoPad-produced video does not have pixelation in the sky. Of course, its speed is 1/100th.
Are the Media Foundation encoders worse than ffmpeg's? What am I missing?
Edit: Rendering with software (MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS to FALSE) is equally bad.
Update: Tried it on my laptop with an AMD hardware encoder. Similar problem: when the bitrate is low the quality is awful.
I checked the two videos with MediaInfo and it is obvious that they use different HEVC profiles, which should be the main reason for the quality difference in the NVidia-encoded video (comparison screenshot not reproduced here).
You can try setting MF_MT_VIDEO_PROFILE on your input IMFMediaType to eAVEncH265VProfile_Main_420_8. Additionally, MF_MT_MPEG2_LEVEL should be set accordingly as well, for instance to eAVEncH265VLevel4_1.
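In code that would look roughly like this (attribute and enum names are from mfapi.h/codecapi.h; shown here on the output type from the question purely as an illustration, since where the attribute belongs depends on your sink writer setup):
// Sketch: pin the HEVC profile and level explicitly (values are illustrative).
pMediaTypeOutVideo->SetUINT32(MF_MT_VIDEO_PROFILE, eAVEncH265VProfile_Main_420_8);
pMediaTypeOutVideo->SetUINT32(MF_MT_MPEG2_LEVEL, eAVEncH265VLevel4_1);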
You might also consider using the IClassFactory approach in order to guarantee the correct order of calling the ICodecAPI methods.
Perhaps simply because software encoders are better than hardware encoders.
If I take a look at this: https://www.techspot.com/article/1131-hevc-h256-enconding-playback/page7.html, I would also confirm that the NVidia hardware encoder is bad compared to x265 (dated 2016).
I can't investigate much more, but from what I see in your post:
pMediaTypeOutVideo->SetUINT32(MF_MT_VIDEO_NOMINAL_RANGE, MFNominalRange_Wide); -> Why not MFNominalRange_Normal?
Are there other ICodecAPI properties from the NVidia encoder besides CODECAPI_AVLowLatencyMode / CODECAPI_AVEncCommonQualityVsSpeed / CODECAPI_AVEncCommonRateControlMode...?
Where is the two-pass encoding parameter?
I found at least three forums saying the NVidia HEVC encoder blurs images. And you confirm this, so... fake news or not (dated 2018/2019).
From NVidia: https://developer.nvidia.com/nvidia-video-codec-sdk#NVENCFeatures (date undefined).
I can't make much of this diagram, but NVidia seems to claim they are the best... So, fake news or not.
EDIT
Rendering with software (MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS to FALSE) is also equally bad.
Can you confirm that in software mode the H.265 / HEVC Video Encoder is the one actually used?
If so, did you play with the codec properties listed below? (A short ICodecAPI sketch follows the list.)
CODECAPI_AVEncCommonRateControlMode
CODECAPI_AVEncCommonMeanBitRate
CODECAPI_AVEncCommonBufferSize
CODECAPI_AVEncCommonMaxBitRate
CODECAPI_AVEncMPVGOPSize
CODECAPI_AVLowLatencyMode
CODECAPI_AVEncCommonQualityVsSpeed
CODECAPI_AVEncVideoForceKeyFrame
CODECAPI_AVEncVideoEncodeQP
CODECAPI_AVEncVideoMinQP
CODECAPI_AVEncVideoMaxQP
CODECAPI_VideoEncoderDisplayContentType
CODECAPI_AVEncNumWorkerThreads
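As an illustration, these are set through the same ICodecAPI pointer (ca) as in the question. The values below are placeholders, and support varies per encoder MFT, so check the returned HRESULTs:
// Illustrative only: tightening rate control via ICodecAPI.
VARIANT v = {};
v.vt = VT_UI4;

v.ulVal = 50;                                    // GOP length in frames (placeholder value)
ca->SetValue(&CODECAPI_AVEncMPVGOPSize, &v);

v.ulVal = 6000000;                               // peak bitrate in bits/s (placeholder value)
ca->SetValue(&CODECAPI_AVEncCommonMaxBitRate, &v);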

Is there an API that will run on iOS in order to change the Frame Per Second of an existing video?

I am looking for a way to receive as input any video (that is supported on iOS) and save to the device a new video with a new frames-per-second rate. The motivation is to decrease the video size and make it as lightweight as possible.
Tried using the ffmpeg library from the command line (I need it to run directly from the application).
Tried working with SDAVAssetExportSessionDelegate, but managed only to change the bits per second (each frame's quality is lower).
Thought about working with OpenCV - but would prefer something lighter and built in if possible.
Objective C:
compressionEncoder.videoSettings = @
{
    AVVideoCodecKey: AVVideoCodecTypeH264,
    AVVideoWidthKey: [NSNumber numberWithInt:width],   // set your resolution width here
    AVVideoHeightKey: [NSNumber numberWithInt:height], // set your resolution height here
    AVVideoCompressionPropertiesKey: @
    {
        AVVideoAverageBitRateKey: [NSNumber numberWithInt:bitRateKey], // lower bitrate gives a smaller size
        AVVideoProfileLevelKey: AVVideoProfileLevelH264High40,
        // Does not change - quality setting, not related to playback framerate!
        //AVVideoMaxKeyFrameIntervalKey: @800,
    },
};
compressionEncoder.audioSettings = @
{
    AVFormatIDKey: @(kAudioFormatMPEG4AAC),
    AVNumberOfChannelsKey: @2,
    AVSampleRateKey: @44100,
    AVEncoderBitRateKey: @128000,
};
Expected: a video with a lower frames-per-second rate, with each frame at the same quality, similar to a brief thumbnail summary of the video.
The type of conversion you are doing will be time and power consuming on a mobile device, but I am guessing you are already aware of that.
Given your end goal is to reduce size, while presumably maintaining a reasonable quality, you may find you want to experiment with different settings etc in the encodings.
For this type of video manipulation, ffmpeg is a good choice, as you probably saw from your command-line usage. To use ffmpeg from an application, a common approach is to use a well-supported 'ffmpeg wrapper' - this effectively runs the FFmpeg command-line commands from within your application.
The advantage is that all the usual syntax should work and you can leverage the vast amount of info on ffmpeg command-line syntax on the web. The downsides are that ffmpeg was not designed to be wrapped like this, so you may see some issues, although with a well-supported wrapper you should find either help or that others have already worked around the issues.
Some examples of popular iOS ffmpeg wrappers:
https://github.com/tanersener/mobile-ffmpeg
https://github.com/sunlubo/SwiftFFmpeg
Get MobileFFMpeg up and running:
https://stackoverflow.com/a/59325680/1466453
Once you can make MobileFFMpeg calls in your iOS code, changing the frame rate is pretty straightforward with this code:
[MobileFFmpeg execute: @"-i <input video path> -filter:v fps=fps=30 <output video path>"];

RtAudio + Qt : duplex not working with RME Fireface on Linux

This is my first post on Stackoverflow, I hope I'm doing this right.
I'm new to C++.
I've been playing with RtAudio and Qt (on linux, desktop and raspberry pi).
Backend is ALSA.
Audio out went fine both on my desktop computer (RME Fireface UCX in ClassCompliant mode) and on the Raspberry Pi 3 (with HifiBerry and PiSound).
Lately, I tried to add audio input support to my program.
I read the duplex tutorial on RtAudio website, and tried to implement it inside my code.
As soon as I added the input StreamParameters to openStream I got a very crackly sound.
The StreamStatus is OK in the callback, though...
I tried to create an empty C++ project, and simply copy the RtAudio tutorial.
Sadly, the problem remains...
I added this to my project file in Qt Creator
LIBS += -lpthread -lasound
I think my issue is similar to this one, but I couldn't find how (or if) it was solved
I tried different buffer sizes (from 64 to 4096 and more); the cracks are less audible, but still present as the buffer size increases.
Do you know anything that should be done regarding RtAudio in duplex mode that might solve this? It seems the buffer size does not behave the same way in duplex mode.
edit :
Out of curiosity (and despair), I tried even lower buffer sizes with the canonical example from the RtAudio help: it turns out using buffer sizes of 1, 2, 4 and 8 frames removes the cracks...
As soon as I use 16 frames, the sound is awful.
Even 15 frames works; I really don't get what's going on.
Code sample:
RtAudio::StreamOptions options;
options.flags |= RTAUDIO_SCHEDULE_REALTIME;
RtAudio::StreamParameters params_in, params_out;
params_in.deviceId = 3;
params_in.nChannels = 2;
params_out.deviceId = 3;
params_out.nChannels = 2;
When output only, it works:
try {
    audio.openStream(
        &params_out,
        NULL,
        RTAUDIO_SINT16,
        48000,
        &buffer_frames,
        &inout,
        (void *) &buffer_bytes,
        &options
    );
}
catch (RtAudioError& e) {
    std::cout << "Error while opening stream" << std::endl;
    e.printMessage();
    exit(0);
}
Cracks appear when changing NULL to &params_in:
try {
    audio.openStream(
        &params_out,
        &params_in,
        RTAUDIO_SINT16,
        48000,
        &buffer_frames,
        &inout,
        (void *) &buffer_bytes,
        &options
    );
}
catch (RtAudioError& e) {
    std::cout << "Error while opening stream" << std::endl;
    e.printMessage();
    exit(0);
}
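For reference, the inout callback used above follows the RtAudio duplex tutorial and just copies input to output (a sketch; buffer_bytes is assumed to hold the byte count of one buffer, and <cstring>/<iostream> are needed):
// Pass-through duplex callback, as in the RtAudio duplex tutorial.
int inout( void *outputBuffer, void *inputBuffer, unsigned int /*nFrames*/,
           double /*streamTime*/, RtAudioStreamStatus status, void *userData )
{
    if ( status )
        std::cout << "Stream over/underflow detected." << std::endl;
    unsigned int *bytes = (unsigned int *) userData;
    memcpy( outputBuffer, inputBuffer, *bytes );
    return 0;
}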
Thank you for your help
Answering my own question.
I redid my tests from scratch on the Raspberry Pi 3 / PiSound.
It turns out I must have done something wrong the first time. The canonical example from RtAudio (and the input implementation I did for my program) works well at 64, 128, etc. buffer sizes.
The desktop build still has crackly sound, but works with weird buffer sizes (like 25, 30 or 27). The problem most likely comes from the Fireface UCX, which is not well supported on Linux (even in ClassCompliant mode).
Thank you for your help, and sorry if I wasted your time.

FFMPEG with C++ accessing a webcam

I have searched all around and cannot find any examples or tutorials on how to access a webcam using ffmpeg in C++. Any sample code or help pointing me to some documentation would be greatly appreciated.
Thanks in advance.
I have been working on this for months now. Your first "issue" is that ffmpeg (libavcodec and other ffmpeg libs) does NOT access webcams, or any other device.
For a basic USB webcam or audio/video capture card, you first need driver software to access the device. For Linux, these drivers fall under the Video4Linux (V4L2, as it is known) category, which are modules that are part of most distros. If you are working with MS Windows, then you need to get an SDK that allows you to access the device. MS may have something for accessing generic devices (but from my experience, they are not very capable, if they work at all). If you've made it this far, then you now have raw frames (video and/or audio).
THEN you get to the ffmpeg part - libavcodec - which takes the raw frames (audio and/or video) and encodes them into streams, which ffmpeg can then mux into your final container.
I have searched, but have found very few examples of all of these, and most are piecemeal.
If you don't need to actually code all of this yourself, the command-line ffmpeg, as well as VLC, can access these devices, capture, save to files, and even stream.
That's the best I can do for now.
ken
For Windows use dshow.
For Linux (like Ubuntu) use Video4Linux (V4L2).
FFmpeg can take input from V4L2 and can do the processing.
To find the USB video device path, type: ls /dev/video*
E.g. /dev/video(n) where n = 0 / 1 / 2 ...
AVInputFormat - struct which holds the information about the input device format / media device format.
av_find_input_format("v4l2") [Linux]
avformat_open_input(AVFormatContext, "/dev/video(n)", AVInputFormat, NULL)
If the return value is != 0, there is an error.
Now you have accessed the camera using FFmpeg and can continue the operation.
Sample code is below.
int CaptureCam()
{
    avdevice_register_all(); // register device-handling formats (v4l2, dshow, ...)
    avcodec_register_all();
    av_register_all();

    const char *dev_name = "/dev/video0"; // here mine is video0, it may vary

    AVInputFormat *inputFormat = av_find_input_format("v4l2");

    AVDictionary *options = NULL;
    av_dict_set(&options, "framerate", "20", 0);

    AVFormatContext *pAVFormatContext = NULL;

    // check video source
    if(avformat_open_input(&pAVFormatContext, dev_name, inputFormat, &options) != 0)
    {
        cout << "\nOops, couldn't open video source\n\n";
        return -1;
    }
    else
    {
        cout << "\n Success !";
    }
    return 0;
} // end function
Note: the header file <libavdevice/avdevice.h> must be included.
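From there, raw packets can be pulled from the device with av_read_frame; decoding them is a separate step (minimal sketch):
AVPacket packet;
while (av_read_frame(pAVFormatContext, &packet) >= 0)
{
    // packet.stream_index tells you which stream the data belongs to;
    // hand the packet to a decoder (avcodec_* API) here.
    av_packet_unref(&packet);
}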
This really doesn't answer the question, as I don't have a pure ffmpeg solution for you. However, I personally use Qt for webcam access. It is C++ and has a much better API for accomplishing this. It does add a very large dependency to your code, however.
It definitely depends on the webcam - for example, at work we use IP cameras that deliver a stream of jpeg data over the network. USB will be different.
You can look at the DirectShow samples, e.g. PlayCap (but they show AmCap and DVCap samples too). Once you have a DirectShow input device (chances are whatever device you have will be providing this natively) you can hook it up to ffmpeg via the dshow input device.
And having spent 5 minutes browsing the ffmpeg site to get those links, I see this...