What is AVHWAccel, and how can I use it? - c++

I want to make use of hardware acceleration for decoding an h264 encoded MP4 file.
My computing environment:
Hardware: MacPro (2015 model)
Software: FFmpeg (installed by brew)
Here is the output of the FFmpeg command:
$ffmpeg -hwaccels
Hardware acceleration methods:
vda
videotoolbox
According to this document, there are two options for my environment: VDA and VideoToolbox. I tried VDA in C++:
Codec = avcodec_find_decoder_by_name("h264_vda");
It kind of worked, but the output pixel format is UYVY422, which I'm having trouble dealing with (any suggestions on how to render UYVY422 in C++? My ideal format is yuv420p).
So I want to try VideoToolbox, but there is no equivalent one-liner like the following (it may work for encoding, though):
Codec = avcodec_find_decoder_by_name("h264_videotoolbox");
It seems I should use AVHWAccel, but what is AVHWAccel and how do I use it?
Part of my C++ code:
for (unsigned int i = 0; i < pFormatCtx->nb_streams; i++) {
    if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
        pCodecCtx = pFormatCtx->streams[i]->codec;
        video_stream = pFormatCtx->streams[i];
        if (pCodecCtx->codec_id == AV_CODEC_ID_H264) {
            //pCodec = avcodec_find_decoder(pCodecCtx->codec_id);
            pCodec = avcodec_find_decoder_by_name("h264_vda");
            break;
        }
    }
}

// open codec
if (pCodec) {
    if ((ret = avcodec_open2(pCodecCtx, pCodec, NULL)) < 0) {
        ....

The decoder has nothing to do with which pixel format you get.
Your video's pixel format is UYVY422, so that is what you get after decoding a frame.
As @halfelf mentioned in their answer, you can run the decoded frame through swscale to convert it to your ideal format, yuv420p, and then render it.
Alternatively, if you are sure the format is UYVY422, SDL2 can render it directly for you.
In the example below my source format is yuv420p, and I use swscale to convert it to UYVY422 before rendering with SDL2:
// prepare swscale context, AV_PIX_FMT_UYVY422 is my destination pix format
SwsContext *swsCtx = sws_getContext(codecCtx->width, codecCtx->height, codecCtx->pix_fmt,
                                    codecCtx->width, codecCtx->height, AV_PIX_FMT_UYVY422,
                                    SWS_FAST_BILINEAR, NULL, NULL, NULL);

SDL_Init(SDL_INIT_EVERYTHING);
SDL_Window *window;
SDL_Renderer *render;
SDL_Texture *texture;
SDL_CreateWindowAndRenderer(codecCtx->width, codecCtx->height,
                            SDL_WINDOW_OPENGL, &window, &render);
texture = SDL_CreateTexture(render, SDL_PIXELFORMAT_UYVY, SDL_TEXTUREACCESS_STREAMING,
                            codecCtx->width, codecCtx->height);

// ......
// decode the frame
// ......

AVFrame *frameUYVY = av_frame_alloc();
av_image_alloc(frameUYVY->data, frameUYVY->linesize, codecCtx->width, codecCtx->height,
               AV_PIX_FMT_UYVY422, 32);
SDL_LockTexture(texture, NULL, (void **)frameUYVY->data, frameUYVY->linesize);
// convert the decoded frame to the destination frameUYVY (yuv420p -> uyvy422)
sws_scale(swsCtx, frame->data, frame->linesize, 0, frame->height,
          frameUYVY->data, frameUYVY->linesize);
SDL_UnlockTexture(texture);

// perform the render
SDL_RenderClear(render);
SDL_RenderCopy(render, texture, NULL, NULL);
SDL_RenderPresent(render);
In your example, if the decoded pixel format is already uyvy422, you can skip the swscale step and render directly after decoding with FFmpeg.
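A minimal sketch of that direct path, assuming frame is the decoded AVFrame in AV_PIX_FMT_UYVY422 and texture was created as SDL_PIXELFORMAT_UYVY at the same dimensions (names match the code above; treat this as an outline, not tested code):

// UYVY is a packed format, so the whole image lives in data[0]/linesize[0]
SDL_UpdateTexture(texture, NULL, frame->data[0], frame->linesize[0]);
SDL_RenderClear(render);
SDL_RenderCopy(render, texture, NULL, NULL);
SDL_RenderPresent(render);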

Decoders don't choose the output pixel format; it is determined by the video itself. The swscale library is used to convert one pixel format to another:
auto sws_ctx = sws_getContext(src_width, src_height, AV_PIX_FMT_UYVY422,
                              dst_width, dst_height, AV_PIX_FMT_YUV420P,
                              0, 0, 0, 0);
av_image_alloc(new_data, new_linesize, dst_width, dst_height, AV_PIX_FMT_YUV420P, FRAME_ALIGN);
sws_scale(sws_ctx, frame->data, frame->linesize, 0, src_height, new_data, new_linesize);
And there is no h264_videotoolbox decoder, only an encoder. To list the available decoders/encoders:
ffmpeg -encoders
ffmpeg -decoders
The decoder/encoder names are written in the source, for example at the end of libavcodec/vda_h264_dec.c and libavcodec/videotoolboxenc.c.
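If you'd rather check availability from C++ instead of the command line, a small probe like this should work (a sketch; avcodec_register_all() is only required on older FFmpeg releases, such as the ones that still ship h264_vda):

#include <cstdio>
extern "C" {
#include <libavcodec/avcodec.h>
}

int main() {
    avcodec_register_all(); // not needed on FFmpeg 4.0+
    std::printf("h264_vda decoder: %s\n",
                avcodec_find_decoder_by_name("h264_vda") ? "yes" : "no");
    std::printf("h264_videotoolbox decoder: %s\n",
                avcodec_find_decoder_by_name("h264_videotoolbox") ? "yes" : "no");
    std::printf("h264_videotoolbox encoder: %s\n",
                avcodec_find_encoder_by_name("h264_videotoolbox") ? "yes" : "no");
    return 0;
}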

Related

C/C++ ffmpeg output is low quality and blurry

I've made a program that takes a video file as input, edits it using opengl/glfw, then encodes the edited video. The program works just fine and I get the desired output. However, the video quality is really low and I don't know how to adjust it. The editing seems fine, since the display in the glfw window is high resolution. I don't think it's about scaling, since the program just reads the pixels from the glfw window and passes them to the encoder, and the glfw window is high res.
Here is what the glfw window looks like when the program is running:
I'm encoding in YUV420P format, but the data I'm getting from the glfw window is in RGBA format. I'm reading it with:
glReadPixels(0, 0,
             gl_width, gl_height,
             GL_RGBA, GL_UNSIGNED_BYTE,
             (GLvoid*) state.glBuffer);
I simply got the muxing.c example from ffmpeg's docs and edited it slightly so it looks something like this:
AVFrame* video_encoder::get_video_frame(OutputStream *ost)
{
    AVCodecContext *c = ost->enc;

    /* check if we want to generate more frames */
    if (av_compare_ts(ost->next_pts, c->time_base,
                      (float) STREAM_DURATION / 1000, (AVRational){ 1, 1 }) > 0)
        return NULL;

    /* when we pass a frame to the encoder, it may keep a reference to it
     * internally; make sure we do not overwrite it here */
    if (av_frame_make_writable(ost->frame) < 0)
        exit(1);

    if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
        /* as we only generate a YUV420P picture, we must convert it
         * to the codec pixel format if needed */
        if (!ost->sws_ctx) {
            ost->sws_ctx = sws_getContext(c->width, c->height,
                                          AV_PIX_FMT_YUV420P,
                                          c->width, c->height,
                                          c->pix_fmt,
                                          SCALE_FLAGS, NULL, NULL, NULL);
            if (!ost->sws_ctx) {
                fprintf(stderr,
                        "Could not initialize the conversion context\n");
                exit(1);
            }
        }
#if __AUDIO_ONLY
        image_for_audio_only(ost->tmp_frame, ost->next_pts, c->width, c->height);
#endif
        sws_scale(ost->sws_ctx, (const uint8_t * const *) ost->tmp_frame->data,
                  ost->tmp_frame->linesize, 0, c->height, ost->frame->data,
                  ost->frame->linesize);
    } else {
        // This is where I set the information I got from the glfw window.
        set_frame_yuv_from_rgb(ost->frame, ost->sws_ctx);
    }

    ost->frame->pts = ost->next_pts++;
    return ost->frame;
}

void video_encoder::set_frame_yuv_from_rgb(AVFrame *frame, struct SwsContext *sws_context) {
    const int in_linesize[1] = { 4 * width };
    //uint8_t* dest[4] = { rgb_data, NULL, NULL, NULL };
    sws_context = sws_getContext(width, height, AV_PIX_FMT_RGBA,
                                 width, height, AV_PIX_FMT_YUV420P,
                                 SWS_BICUBIC, 0, 0, 0);
    sws_scale(sws_context, (const uint8_t * const *)&rgb_data, in_linesize, 0,
              height, frame->data, frame->linesize);
}
rgb_data is the buffer I got from the glfw window. It's simply an uint8_t*.
And at the end of all this, here is what the encoded output looks like when run through mplayer:
It's much lower quality compared to the glfw window. How can I improve the quality of the video?
Here are YouTube's recommended encoding settings for better quality:
https://support.google.com/youtube/answer/1722171
Make sure to use a high bitrate and GOP size, e.g. 5 Mbps and 60 respectively.
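For the muxing.c-based code in the question, that means raising bit_rate and gop_size on the video AVCodecContext (called c in get_video_frame()). A rough sketch with example numbers rather than universally correct ones; the av_opt_set lines apply only if the encoder is libx264:

c->bit_rate = 5 * 1000 * 1000;  // 5 Mbps
c->gop_size = 60;               // keyframe interval in frames

// with libx264 you can also drive quality by CRF/preset instead of bitrate
// (needs #include <libavutil/opt.h>)
av_opt_set(c->priv_data, "preset", "slow", 0);
av_opt_set(c->priv_data, "crf", "18", 0);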

BGRA to YUV420 FFmpeg giving bad output

My purpose is a screen recorder. I am using the Windows DXGI API to receive screenshots and I'm encoding the screenshots into a video using libx264. Feeding BGRA images directly to libx264 is producing weird colors in the output video. So, to get correct colors, I am trying to convert the BGRA to YUV420p. To speed up encoding, I am also trying to downscale the BGRA image.
So I am getting an 1920x1080 BGRA image and I want to convert it to 1280x720 YUV420p. For this, I am using FFmpeg swscale library to do both the format conversion and downscaling.
The problem is that the output video looks like 3 images squeezed into the same frame. Please see this video: https://imgur.com/a/EYimjrJ
I tried just BGRA to YUV conversion without any downscaling and it is working fine. But BGRA to YUV with downscaling is giving this problem.
What is the cause for this problem? How do I fix it?
Here is my code snippet:
uint8_t* Image;
x264_picture_t picIn, picOut;
x264_picture_alloc(&picIn, X264_CSP_I420, 1280, 720);
SwsContext* sws = sws_getContext(1920, 1080, AV_PIX_FMT_BGRA,
                                 1280, 720, AV_PIX_FMT_YUV420P,
                                 SWS_BILINEAR, NULL, NULL, NULL);

while (true)
{
    take_screenshot(&Image);

    AVFrame BGRA;
    BGRA.linesize[0] = 1280 * 4;
    BGRA.data[0] = Image;

    sws_scale(sws, BGRA.data, BGRA.linesize, 0, 1080, picIn.img.plane, picIn.img.i_stride);

    nal_size = x264_encoder_encode(h, &nals, &nal_count, &picIn, &picOut);
    save_to_flv(nals, nal_size, nal_count);
}
Here are my libx264 parameters:
x264_param_default_preset(&param, preset, 0);
param.i_csp = X264_CSP_I420;
param.i_width = 1280;
param.i_height = 720;
param.i_fps_num = 30;
param.i_fps_den = 1;
param.rc.i_bitrate = 2500;
param.i_bframe = 0;
param.b_repeat_headers = 0;
param.b_annexb = 1;
x264_param_apply_profile(&param, 0);
h = x264_encoder_open(&param);
Change BGRA.linesize[0] to 1920*4, the stride of the 1920-pixel-wide source. You see this 3-images pattern because
1280*3 == 1920*2
that is, every three rows read with the too-small stride cover exactly two real rows of the source, so the picture wraps around and repeats.
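A sketch of the corrected call, keeping the rest of the question's setup as-is; the stride has to describe the 1920-pixel-wide BGRA source, not the 1280-pixel-wide output:

const uint8_t* const src_data[1]   = { Image };
const int            src_stride[1] = { 1920 * 4 };  // bytes per row of the BGRA source
sws_scale(sws, src_data, src_stride, 0, 1080, picIn.img.plane, picIn.img.i_stride);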

How to convert ffmpeg video frame to YUV444?

I have been following a tutorial on how to use ffmpeg and SDL to make a simple video player with no audio (yet). While looking through the tutorial I realized it was out of date and many of the functions it used, for both ffmpeg and SDL, were deprecated. So I searched for an up-to-date solution and found a Stack Overflow answer that filled in what the tutorial was missing.
However, it uses YUV420, which is of lower quality. I want to implement YUV444, and after studying chroma subsampling for a bit and looking at the different YUV formats, I am confused as to how to implement it. From what I understand, YUV420 carries a quarter of the chroma information that YUV444 does. YUV444 means every pixel has its own chroma sample and as such is more detailed, while YUV420 means pixels are grouped together and share the same chroma sample, and is therefore less detailed.
And from what I understand, the different YUV formats (420, 422, 444) differ in the way they arrange the Y, U, and V samples. All of this is a bit overwhelming because I haven't done much with codecs, conversions, etc. Any help would be much appreciated, and if additional info is needed please let me know before downvoting.
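To put numbers on that difference (an illustration only, not part of the player code): libavutil can report the raw buffer size of each format, and at 1080p the 4:2:0 frame is exactly half the size of the 4:4:4 frame, because the two chroma planes shrink from full resolution to a quarter of it.

#include <stdio.h>
#include <libavutil/imgutils.h>

int main(void) {
    int sz420 = av_image_get_buffer_size(AV_PIX_FMT_YUV420P, 1920, 1080, 1);
    int sz444 = av_image_get_buffer_size(AV_PIX_FMT_YUV444P, 1920, 1080, 1);
    printf("YUV420P: %d bytes, YUV444P: %d bytes\n", sz420, sz444);
    // prints roughly 3110400 vs 6220800 (1.5 vs 3 bytes per pixel)
    return 0;
}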
Here is the code from the answer I mentioned concerning the conversion to YUV420:
texture = SDL_CreateTexture(
    renderer,
    SDL_PIXELFORMAT_YV12,
    SDL_TEXTUREACCESS_STREAMING,
    pCodecCtx->width,
    pCodecCtx->height
);
if (!texture) {
    fprintf(stderr, "SDL: could not create texture - exiting\n");
    exit(1);
}

// initialize SWS context for software scaling
sws_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,
                         pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height,
                         AV_PIX_FMT_YUV420P,
                         SWS_BILINEAR,
                         NULL,
                         NULL,
                         NULL);

// set up YV12 pixel array (12 bits per pixel)
yPlaneSz = pCodecCtx->width * pCodecCtx->height;
uvPlaneSz = pCodecCtx->width * pCodecCtx->height / 4;
yPlane = (Uint8*)malloc(yPlaneSz);
uPlane = (Uint8*)malloc(uvPlaneSz);
vPlane = (Uint8*)malloc(uvPlaneSz);
if (!yPlane || !uPlane || !vPlane) {
    fprintf(stderr, "Could not allocate pixel buffers - exiting\n");
    exit(1);
}

uvPitch = pCodecCtx->width / 2;

while (av_read_frame(pFormatCtx, &packet) >= 0) {
    // Is this a packet from the video stream?
    if (packet.stream_index == videoStream) {
        // Decode video frame
        avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

        // Did we get a video frame?
        if (frameFinished) {
            AVPicture pict;
            pict.data[0] = yPlane;
            pict.data[1] = uPlane;
            pict.data[2] = vPlane;
            pict.linesize[0] = pCodecCtx->width;
            pict.linesize[1] = uvPitch;
            pict.linesize[2] = uvPitch;

            // Convert the image into YUV format that SDL uses
            sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data,
                      pFrame->linesize, 0, pCodecCtx->height, pict.data,
                      pict.linesize);

            SDL_UpdateYUVTexture(
                texture,
                NULL,
                yPlane,
                pCodecCtx->width,
                uPlane,
                uvPitch,
                vPlane,
                uvPitch
            );

            SDL_RenderClear(renderer);
            SDL_RenderCopy(renderer, texture, NULL, NULL);
            SDL_RenderPresent(renderer);
        }
    }

    // Free the packet that was allocated by av_read_frame
    av_free_packet(&packet);

    SDL_PollEvent(&event);
    switch (event.type) {
        case SDL_QUIT:
            SDL_DestroyTexture(texture);
            SDL_DestroyRenderer(renderer);
            SDL_DestroyWindow(screen);
            SDL_Quit();
            exit(0);
            break;
        default:
            break;
    }
}

// Free the YUV frame
av_frame_free(&pFrame);
free(yPlane);
free(uPlane);
free(vPlane);

// Close the codec
avcodec_close(pCodecCtx);
avcodec_close(pCodecCtxOrig);

// Close the video file
avformat_close_input(&pFormatCtx);
EDIT:
After more research I learned that YUV420 is stored with all the Y bytes first, then the U and V bytes one after another, as illustrated by this image:
(source: wikimedia.org)
However, I also learned that YUV444 is stored in the order U, Y, V, repeating, as this picture shows:
I tried changing some things around in code:
// I changed SDL_PIXELFORMAT_YV12 to SDL_PIXELFORMAT_UYVY
// as to reflect the order of YUV444
texture = SDL_CreateTexture(
    renderer,
    SDL_PIXELFORMAT_UYVY,
    SDL_TEXTUREACCESS_STREAMING,
    pCodecCtx->width,
    pCodecCtx->height
);
if (!texture) {
    fprintf(stderr, "SDL: could not create texture - exiting\n");
    exit(1);
}

// Changed AV_PIX_FMT_YUV420P to AV_PIX_FMT_YUV444P
// for rather obvious reasons
sws_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,
                         pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height,
                         AV_PIX_FMT_YUV444P,
                         SWS_BILINEAR,
                         NULL,
                         NULL,
                         NULL);

// There are as many Y, U and V bytes as pixels I just
// made yPlaneSz and uvPlaneSz equal to the number of pixels
yPlaneSz = pCodecCtx->width * pCodecCtx->height;
uvPlaneSz = pCodecCtx->width * pCodecCtx->height;
yPlane = (Uint8*)malloc(yPlaneSz);
uPlane = (Uint8*)malloc(uvPlaneSz);
vPlane = (Uint8*)malloc(uvPlaneSz);
if (!yPlane || !uPlane || !vPlane) {
    fprintf(stderr, "Could not allocate pixel buffers - exiting\n");
    exit(1);
}

uvPitch = pCodecCtx->width * 2;

while (av_read_frame(pFormatCtx, &packet) >= 0) {
    // Is this a packet from the video stream?
    if (packet.stream_index == videoStream) {
        // Decode video frame
        avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

        // Rearranged the order of the planes to reflect UYV order
        // then set linesize to the number of Y, U and V bytes
        // per row
        if (frameFinished) {
            AVPicture pict;
            pict.data[0] = uPlane;
            pict.data[1] = yPlane;
            pict.data[2] = vPlane;
            pict.linesize[0] = pCodecCtx->width;
            pict.linesize[1] = pCodecCtx->width;
            pict.linesize[2] = pCodecCtx->width;

            // Convert the image into YUV format that SDL uses
            sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data,
                      pFrame->linesize, 0, pCodecCtx->height, pict.data,
                      pict.linesize);

            SDL_UpdateYUVTexture(
                texture,
                NULL,
                yPlane,
                1,
                uPlane,
                uvPitch,
                vPlane,
                uvPitch
            );
            //.................................................
But now I get an access violation at the call to SDL_UpdateYUVTexture... I'm honestly not sure what's wrong. I think it may have to do with setting AVPicture pict's data and linesize members improperly, but I'm not positive.
After many hours of scouring the web for possible answers, I stumbled upon this post in which someone was asking about YUV444 support in packed or planar mode. The only current format I've found is AYUV, which is packed.
The answer they got was a list of all the currently supported formats, which did not include AYUV. Therefore SDL does not support YUV444.
The only solution is to use a different library that supports AYUV / YUV444.
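If you want to see that for yourself at runtime, a sketch like this (assuming renderer from the question's code) prints the texture formats the renderer reports; none of SDL2's YUV texture formats (YV12, IYUV, YUY2, UYVY, YVYU, NV12, NV21) is a 4:4:4 layout.

SDL_RendererInfo info;
if (SDL_GetRendererInfo(renderer, &info) == 0) {
    for (Uint32 i = 0; i < info.num_texture_formats; ++i)
        printf("%s\n", SDL_GetPixelFormatName(info.texture_formats[i]));
}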

WICConvertBitmapSource + CopyPixels results in blue image

I'm trying to use WIC to load an image into an in-memory buffer for further processing then write it back to a file when done. Specifically:
1. Load the image into an IWICBitmapFrameDecode.
2. The loaded IWICBitmapFrameDecode reports that its pixel format is GUID_WICPixelFormat24bppBGR. I want to work in 32bpp RGBA, so I call WICConvertBitmapSource.
3. Call CopyPixels on the converted frame to get a memory buffer.
4. Write the memory buffer back into an IWICBitmapFrameEncode using WritePixels.
This results in a recognizable image, but the resulting image is mostly bluish, as if the red channel is being interpreted as blue.
If I call WriteSource to write the converted frame directly, instead of writing the memory buffer, it works. If I call CopyPixels from the original unconverted frame (and update my stride and pixel formats accordingly), it works. It's only the combination of WICConvertBitmapSource plus the use of a memory buffer (CopyPixels + WritePixels) that causes the problem, but I can't figure out what I'm doing wrong.
Here's my code.
int main() {
    IWICImagingFactory *pFactory;
    IWICBitmapDecoder *pDecoder = NULL;

    CoInitializeEx(NULL, COINIT_MULTITHREADED);
    CoCreateInstance(
        CLSID_WICImagingFactory,
        NULL,
        CLSCTX_INPROC_SERVER,
        IID_IWICImagingFactory,
        (LPVOID*)&pFactory
    );

    // Load the image.
    pFactory->CreateDecoderFromFilename(L"input.png", NULL, GENERIC_READ, WICDecodeMetadataCacheOnDemand, &pDecoder);
    IWICBitmapFrameDecode *pFrame = NULL;
    pDecoder->GetFrame(0, &pFrame);

    // pFrame->GetPixelFormat shows that the image is 24bpp BGR.

    // Convert to 32bpp RGBA for easier processing.
    IWICBitmapSource *pConvertedFrame = NULL;
    WICConvertBitmapSource(GUID_WICPixelFormat32bppRGBA, pFrame, &pConvertedFrame);

    // Copy the 32bpp RGBA image to a buffer for further processing.
    UINT width, height;
    pConvertedFrame->GetSize(&width, &height);
    const unsigned bytesPerPixel = 4;
    const unsigned stride = width * bytesPerPixel;
    const unsigned bitmapSize = width * height * bytesPerPixel;
    BYTE *buffer = new BYTE[bitmapSize];
    pConvertedFrame->CopyPixels(nullptr, stride, bitmapSize, buffer);

    // Insert image buffer processing here. (Not currently implemented.)

    // Create an encoder to turn the buffer back into an image file.
    IWICBitmapEncoder *pEncoder = NULL;
    pFactory->CreateEncoder(GUID_ContainerFormatPng, nullptr, &pEncoder);
    IStream *pStream = NULL;
    SHCreateStreamOnFileEx(L"output.png", STGM_WRITE | STGM_CREATE, FILE_ATTRIBUTE_NORMAL, true, NULL, &pStream);
    pEncoder->Initialize(pStream, WICBitmapEncoderNoCache);

    IWICBitmapFrameEncode *pFrameEncode = NULL;
    pEncoder->CreateNewFrame(&pFrameEncode, NULL);
    pFrameEncode->Initialize(NULL);
    WICPixelFormatGUID pixelFormat = GUID_WICPixelFormat32bppRGBA;
    pFrameEncode->SetPixelFormat(&pixelFormat);
    pFrameEncode->SetSize(width, height);
    pFrameEncode->WritePixels(height, stride, bitmapSize, buffer);
    pFrameEncode->Commit();
    pEncoder->Commit();
    pStream->Commit(STGC_DEFAULT);

    return 0;
}
The PNG encoder only supports GUID_WICPixelFormat32bppBGRA for 32bpp data, as specified in the PNG native codec documentation. When you hand it GUID_WICPixelFormat32bppRGBA data, it will not do any channel switching; it will just use your pixels as if they were BGRA, not RGBA, and will not tell you there's a problem.
I don't know what you're trying to do, but in your example you could just replace GUID_WICPixelFormat32bppRGBA with GUID_WICPixelFormat32bppBGRA in the call to WICConvertBitmapSource (and also in the definition of the last pixelFormat variable, to keep the source consistent, although that by itself doesn't change anything).
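In code, the two changes amount to this (a sketch against the snippet above):

// convert to BGRA, the 32bpp layout the PNG encoder actually consumes
WICConvertBitmapSource(GUID_WICPixelFormat32bppBGRA, pFrame, &pConvertedFrame);
// ...
// and declare the same format when writing the frame back out
WICPixelFormatGUID pixelFormat = GUID_WICPixelFormat32bppBGRA;
pFrameEncode->SetPixelFormat(&pixelFormat);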
PS: you can use WIC to save files as well; there's no need to create the stream with another API. See my answer here: Capture screen using DirectX

Save bitmap to video (libavcodec ffmpeg)

I'd like to convert a HBitmap to a video stream using libavcodec.
I get my HBitmap using:
HBITMAP hCaptureBitmap = CreateCompatibleBitmap(hDesktopDC, nScreenWidth, nScreenHeight);
SelectObject(hCaptureDC,hCaptureBitmap);
BitBlt(hCaptureDC,0,0,nScreenWidth,nScreenHeight,hDesktopDC,0,0,SRCCOPY);
And I'd like to convert it to YUV (which is required by the codec I'm using). For that I use:
SwsContext *fooContext = sws_getContext(c->width, c->height, PIX_FMT_BGR32,
                                        c->width, c->height, PIX_FMT_YUV420P,
                                        SWS_FAST_BILINEAR, NULL, NULL, NULL);

uint8_t *movie_dib_bits = reinterpret_cast<uint8_t *>(bm.bmBits) + bm.bmWidthBytes * (bm.bmHeight - 1);
int dibrowbytes = -bm.bmWidthBytes;

uint8_t* data_out[1];
int stride_out[1];
data_out[0] = movie_dib_bits;
stride_out[0] = dibrowbytes;

sws_scale(fooContext, data_out, stride_out, 0, c->height, picture->data, picture->linesize);
But this is not working at all... Any idea why? Or how could I do it differently?
Thank you!
I am not familiar with the stuff you are using to get the bitmap, but assuming it is correct and you have a pointer to the BGR 32-bit/pixel data, try something like this:
uint8_t* inbuffer;
int in_width, in_height, out_width, out_height;
//here, make sure inbuffer points to the input BGR32 data,
//and the input and output dimensions are set correctly.
//calculate the bytes needed for the output image
int nbytes = avpicture_get_size(PIX_FMT_YUV420P, out_width, out_height);
//create buffer for the output image
uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes);
//create ffmpeg frame structures. These do not allocate space for image data,
//just the pointers and other information about the image.
AVFrame* inpic = avcodec_alloc_frame();
AVFrame* outpic = avcodec_alloc_frame();
//this will set the pointers in the frame structures to the right points in
//the input and output buffers.
avpicture_fill((AVPicture*)inpic, inbuffer, PIX_FMT_BGR32, in_width, in_height);
avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P, out_width, out_height);
//create the conversion context
SwsContext* fooContext = sws_getContext(in_width, in_height, PIX_FMT_BGR32, out_width, out_height, PIX_FMT_YUV420P, SWS_FAST_BILINEAR, NULL, NULL, NULL);
//perform the conversion
sws_scale(fooContext, inpic->data, inpic->linesize, 0, in_height, outpic->data, outpic->linesize);
//encode the frame here...
//free memory
av_free(outbuffer);
av_free(inpic);
av_free(outpic);
Of course, if you are going to be converting a sequence of frames, just make your allocations once at the beginning and deallocations once at the end.
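A sketch of that per-sequence structure, reusing the same (older) API as above; the capture step and the capturing flag are placeholders:

//one-time setup
SwsContext* fooContext = sws_getContext(in_width, in_height, PIX_FMT_BGR32,
                                        out_width, out_height, PIX_FMT_YUV420P,
                                        SWS_FAST_BILINEAR, NULL, NULL, NULL);
AVFrame* inpic = avcodec_alloc_frame();
AVFrame* outpic = avcodec_alloc_frame();
uint8_t* outbuffer = (uint8_t*)av_malloc(avpicture_get_size(PIX_FMT_YUV420P, out_width, out_height));
avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P, out_width, out_height);

while (capturing) {
    //refresh inbuffer with the next captured BGR32 frame here...
    avpicture_fill((AVPicture*)inpic, inbuffer, PIX_FMT_BGR32, in_width, in_height);
    sws_scale(fooContext, inpic->data, inpic->linesize, 0, in_height,
              outpic->data, outpic->linesize);
    //encode outpic here...
}

//one-time cleanup
sws_freeContext(fooContext);
av_free(outbuffer);
av_free(inpic);
av_free(outpic);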