Process AVFrame using OpenCV Mat causing encoding error - C++

I'm trying to decode a video file using FFmpeg, grab the AVFrame object, convert it to an OpenCV Mat, do some processing, then convert it back to an AVFrame and encode it back to a video file.
The program runs, but it produces a bad result.
I keep getting errors like "top block unavailable for requested intra mode at 7 19", "error while decoding MB 7 19, bytestream 358", "concealing 294 DC, 294 AC, 294 MV errors in P frame", etc.
The resulting video has glitches all over it.
I'm guessing it's because of my AVFrame-to-Mat and Mat-to-AVFrame methods, so here they are:
// in an initialization function (not shown in full)
temp_rgb_frame = avcodec_alloc_frame();
int numBytes = avpicture_get_size(PIX_FMT_RGB24, width, height);
uint8_t *frame2_buffer = (uint8_t *)av_malloc(numBytes * sizeof(uint8_t));
avpicture_fill((AVPicture *)temp_rgb_frame, frame2_buffer, PIX_FMT_RGB24, width, height);
void CoreProcessor::Mat2AVFrame(cv::Mat **input, AVFrame *output)
{
    // create an AVPicture frame from the OpenCV Mat input image
    avpicture_fill((AVPicture *)temp_rgb_frame,
        (uint8_t *)(*input)->data,
        AV_PIX_FMT_RGB24,
        (*input)->cols,
        (*input)->rows);
    // convert the frame to the color space and pixel format specified in the sws context
    sws_scale(
        rgb_to_yuv_context,
        temp_rgb_frame->data,
        temp_rgb_frame->linesize,
        0, height,
        ((AVPicture *)output)->data,
        ((AVPicture *)output)->linesize);
    (*input)->release();
}
void CoreProcessor::AVFrame2Mat(AVFrame *pFrame, cv::Mat **mat)
{
    sws_scale(
        yuv_to_rgb_context,
        ((AVPicture *)pFrame)->data,
        ((AVPicture *)pFrame)->linesize,
        0, height,
        ((AVPicture *)temp_rgb_frame)->data,
        ((AVPicture *)temp_rgb_frame)->linesize);
    *mat = new cv::Mat(pFrame->height, pFrame->width, CV_8UC3, temp_rgb_frame->data[0]);
}
void CoreProcessor::process_frame(AVFrame *pFrame)
{
    cv::Mat *mat = NULL;
    AVFrame2Mat(pFrame, &mat);
    Mat2AVFrame(&mat, pFrame);
}
Am I doing something wrong with the memory? If I remove the processing part and just decode and then re-encode each frame, the result is correct.

Well, it turns out I made a mistake in the initialization of temp_rgb_frame; it should be like this:
temp_rgb_frame = avcodec_alloc_frame();
int numBytes = avpicture_get_size(PIX_FMT_RGB24, width, height);
uint8_t *frame2_buffer = (uint8_t *)av_malloc(numBytes * sizeof(uint8_t));
avpicture_fill((AVPicture *)temp_rgb_frame, frame2_buffer, PIX_FMT_RGB24, width, height);
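As an aside, avcodec_alloc_frame and the avpicture_* helpers used throughout this code were deprecated and later removed from FFmpeg. A minimal sketch of the same temp_rgb_frame setup with the current API (assuming width and height are known, as above):

temp_rgb_frame = av_frame_alloc();
temp_rgb_frame->format = AV_PIX_FMT_RGB24;
temp_rgb_frame->width = width;
temp_rgb_frame->height = height;
// allocates data[]/linesize[] for the given format and size;
// the second argument is the buffer alignment (0 lets FFmpeg choose)
av_frame_get_buffer(temp_rgb_frame, 0);

Note that av_frame_get_buffer may pad linesize[0] for alignment, so when wrapping data[0] in a cv::Mat it is safer to pass linesize[0] as the Mat's step argument instead of relying on the default tight packing.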


sws_scale, YUV to RGB conversion

I need to convert YUV to RGB. I also need the RGB values to be in the limited range (16-235).
I'm trying to use the sws_scale function for this task.
You can see my code below. But after conversion, a black pixel comes out as (0, 0, 0) instead of (16, 16, 16).
Maybe there is some option to tell sws_scale to produce limited-range output.
AVFrame* frameRGB = avFrameConvertPixelFormat(_decodedBuffer[i].pAVFrame, AV_PIX_FMT_RGB24);

AVFrame* Decoder::avFrameConvertPixelFormat(const AVFrame* src, AVPixelFormat dstFormat) {
    int width = src->width;
    int height = src->height;
    AVFrame* dst = allocPicture(dstFormat, width, height);
    SwsContext* conversion = sws_getContext(width,
                                            height,
                                            (AVPixelFormat)src->format,
                                            width,
                                            height,
                                            dstFormat,
                                            SWS_FAST_BILINEAR,
                                            NULL,
                                            NULL,
                                            NULL);
    sws_scale(conversion, src->data, src->linesize, 0, height, dst->data, dst->linesize);
    sws_freeContext(conversion);
    dst->format = dstFormat;
    dst->width = src->width;
    dst->height = src->height;
    return dst;
}
I also tried converting a YUV pixel to an RGB pixel manually with the formula, and I got the correct result: from YUV (16, 128, 128) I got (16, 16, 16) in RGB.
cmpR = y + 1.402 * (v - 128);
cmpG = y - 0.3441 * (u - 128) - 0.7141 * (v - 128);
cmpB = y + 1.772 * (u - 128);
You may set the source format to "full scale" (full range) YUVJ.
As far as I know, sws_scale has no option for selecting studio RGB as the output format.
Changing the input format is the best solution I can think of.
The color conversion formula of "JPEG: YUV -> RGB" is the same as the formula in your post.
Examples for setting the source format:
If src->format is PIX_FMT_YUV420P, set the format to PIX_FMT_YUVJ420P.
If src->format is PIX_FMT_YUV422P, set the format to PIX_FMT_YUVJ422P.
If src->format is PIX_FMT_YUV444P, set the format to PIX_FMT_YUVJ444P.
If src->format is PIX_FMT_YUV440P, set the format to PIX_FMT_YUVJ440P.
I know the solution doesn't cover all the possibilities, and there might be some output pixels exceeding the range [16, 235], so it's not the most general solution...
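A minimal sketch of that substitution, applied before creating the SwsContext in avFrameConvertPixelFormat above (covering only the formats listed):

AVPixelFormat srcFormat = (AVPixelFormat)src->format;
// reinterpret limited-range YUV as full-range YUVJ so swscale applies
// the JPEG matrix and does not expand the range
switch (srcFormat) {
case AV_PIX_FMT_YUV420P: srcFormat = AV_PIX_FMT_YUVJ420P; break;
case AV_PIX_FMT_YUV422P: srcFormat = AV_PIX_FMT_YUVJ422P; break;
case AV_PIX_FMT_YUV444P: srcFormat = AV_PIX_FMT_YUVJ444P; break;
case AV_PIX_FMT_YUV440P: srcFormat = AV_PIX_FMT_YUVJ440P; break;
default: break;
}
SwsContext* conversion = sws_getContext(width, height, srcFormat,
                                        width, height, dstFormat,
                                        SWS_FAST_BILINEAR, NULL, NULL, NULL);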
yuv to rgb conversion using FFMPEG: I see a lot of information given for this already above. However, for code completeness I am re-sharing the code with the missing allocPicture() function, plus the header and library to include. It works for me like a charm. Thanks to @Валентин Никин & @Rotem for most of the info and code.
Headers:
#include <libswscale/swscale.h>
Link FFMPEG Library:
libswscale
static AVFrame* allocPicture(enum AVPixelFormat pix_fmt, int width, int height)
{
    // Allocate a frame
    AVFrame* frame = av_frame_alloc();
    if (frame == NULL)
    {
        fprintf(stderr, "av_frame_alloc failed");
        return NULL;
    }
    if (av_image_alloc(frame->data, frame->linesize, width, height, pix_fmt, 1) < 0)
    {
        fprintf(stderr, "av_image_alloc failed");
    }
    frame->width = width;
    frame->height = height;
    frame->format = pix_fmt;
    return frame;
}
static AVFrame* avFrameConvertPixelFormat(const AVFrame* src, enum AVPixelFormat dstFormat)
{
    int width = src->width;
    int height = src->height;
    AVFrame* dst = allocPicture(dstFormat, width, height);
    struct SwsContext* conversion = sws_getContext(width,
                                                   height,
                                                   (enum AVPixelFormat)src->format,
                                                   width,
                                                   height,
                                                   dstFormat,
                                                   SWS_FAST_BILINEAR | SWS_FULL_CHR_H_INT | SWS_ACCURATE_RND,
                                                   NULL,
                                                   NULL,
                                                   NULL);
    sws_scale(conversion, src->data, src->linesize, 0, height, dst->data, dst->linesize);
    sws_freeContext(conversion);
    dst->format = dstFormat;
    dst->width = src->width;
    dst->height = src->height;
    return dst;
}
// convert yuv420p10le to rgb24 (or any other RGB format)
AVFrame* rgbFrame = avFrameConvertPixelFormat(frame, AV_PIX_FMT_RGB24);
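One caveat worth adding: frames returned by allocPicture() own their pixel buffer via av_image_alloc, so both the buffer and the frame need to be released when you are done. A sketch, using the rgbFrame name from the call above:

av_freep(&rgbFrame->data[0]); // frees the buffer allocated by av_image_alloc
av_frame_free(&rgbFrame);     // frees the AVFrame structure itself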

Failing to properly initialize AVFrame for sws_scale conversion

I'm decoding video using FFmpeg and want to edit the decoded frames using OpenGL, but in order to do that I need to convert the data in the AVFrame from YUV to RGB.
In order to do that I create a new AVFrame:
AVFrame *inputFrame = av_frame_alloc();
AVFrame *outputFrame = av_frame_alloc();
av_image_alloc(outputFrame->data, outputFrame->linesize, width, height, AV_PIX_FMT_RGB24, 1);
av_image_fill_arrays(outputFrame->data, outputFrame->linesize, NULL, AV_PIX_FMT_RGB24, width, height, 1);
Create a conversion context:
struct SwsContext *img_convert_ctx = sws_getContext(width, height, AV_PIX_FMT_YUV420P,
                                                    width, height, AV_PIX_FMT_RGB24,
                                                    0, NULL, NULL, NULL);
And then try to convert it to RGB:
sws_scale(img_convert_ctx, (const uint8_t *const *)&inputFrame->data, inputFrame->linesize, 0, inputFrame->height, outputFrame->data, outputFrame->linesize);
But this causes a "[swscaler @ 0x123f15000] bad dst image pointers" error at run time. When I went over FFmpeg's source I found out that the reason is that outputFrame's data wasn't initialized, but I don't understand how it should be.
All existing answers or tutorials that I found (see example) seem to use deprecated APIs, and it's unclear how to use the new APIs. I'd appreciate any help.
Here's how I call sws_scale (image is my own buffer class):
image buf2((buf.w + 15)/16*16, buf.h, 3);
sws_scale(sws_ctx, (const uint8_t * const *)frame->data, frame->linesize, 0, c->height, (uint8_t * const *)buf2.c, &buf2.ys);
There are two differences here:
You pass &inputFrame->data, but it should be inputFrame->data, without the address-of operator.
You don't have to allocate a second frame structure; sws_scale doesn't care about it. It just needs a chunk of memory of the proper size (and maybe alignment).
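That said, if you do want a properly initialized output AVFrame using the non-deprecated API, a minimal sketch (assuming width and height are set as in the question) could be:

AVFrame *outputFrame = av_frame_alloc();
outputFrame->format = AV_PIX_FMT_RGB24;
outputFrame->width = width;
outputFrame->height = height;
// allocates outputFrame->data and linesize in one call, replacing the
// av_image_alloc + av_image_fill_arrays pair from the question
av_frame_get_buffer(outputFrame, 0);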
In my case the av_image_alloc / av_image_fill_arrays did not create the frame->data pointers.
Here is how I did it, not sure if everything is correct, but it works:
d->m_FrameCopy = av_frame_alloc();
uint8_t* buffer = NULL;
int numBytes;
// Determine required buffer size and allocate buffer
numBytes = avpicture_get_size(
    AV_PIX_FMT_RGB24, d->m_Frame->width, d->m_Frame->height);
buffer = (uint8_t*)av_malloc(numBytes * sizeof(uint8_t));
avpicture_fill(
    (AVPicture*)d->m_FrameCopy,
    buffer,
    AV_PIX_FMT_RGB24,
    d->m_Frame->width,
    d->m_Frame->height);
d->m_FrameCopy->format = AV_PIX_FMT_RGB24;
d->m_FrameCopy->width = d->m_Frame->width;
d->m_FrameCopy->height = d->m_Frame->height;
d->m_FrameCopy->channels = d->m_Frame->channels;
d->m_FrameCopy->channel_layout = d->m_Frame->channel_layout;
d->m_FrameCopy->nb_samples = d->m_Frame->nb_samples;
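(The channels, channel_layout and nb_samples fields copied at the end are audio-specific and shouldn't matter for a video frame.) For what it's worth, a sketch of the same setup without the deprecated avpicture_* calls, on current FFmpeg:

d->m_FrameCopy = av_frame_alloc();
d->m_FrameCopy->format = AV_PIX_FMT_RGB24;
d->m_FrameCopy->width = d->m_Frame->width;
d->m_FrameCopy->height = d->m_Frame->height;
// allocates and attaches the pixel buffer in one step
av_frame_get_buffer(d->m_FrameCopy, 0);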

Create a divx-encoded avi from frames using opencv

This question is similar to this one and particularly this one but my desired output is different. I'm trying to capture the desktop to video using opencv. The preferred output is an avi file with divx encoding. I'm new to opencv and bitmap programming in general.
As a first step, to make sure the divx codec is present, I create a single frame (cv::Mat) of a solid color (yellow) and write that 100 times to the video file, as shown here:
int main(int argc, char* argv[])
{
    cv::Mat frame(1200, 1920, CV_8UC3, cv::Scalar(0, 50000, 50000));
    cv::VideoWriter* videoWriter = new cv::VideoWriter(
        "C:/videos/desktop.avi",
        CV_FOURCC('D','I','V','3'),
        5, cv::Size(1920, 1200), true);
    int frameCount = 0;
    while (frameCount < 100)
    {
        videoWriter->write(frame);
        ::Sleep(100);
        frameCount++;
    }
    delete videoWriter;
    return 0;
}
This works perfectly - the video file is created and can be played on my Win 10 machine with VLC, Windows Media Player or the Films&TV app. It's 100 frames of solid yellow, but it shows the video is being created properly.
Next step: replace the dummy cv::Mat frame in the code above with a series of screenshots of the desktop. I get a handle to the desktop window using GetDesktopWindow(), and then use the function hwnd2mat (taken from this SO question - thanks!) to convert the bitmap obtained from the desktop handle to a cv::Mat that I can write to my video.
I copied the hwnd2mat function verbatim except I don't scale the image - the desktop bitmap is already 1920x1200, and also the cv::Mat I create is CV_8UC3 instead of CV_8UC4 (CV_8UC4 causes my app to crash).
Here's the code, including a reprint of hwnd2mat:
int main(int argc, char* argv[])
{
    cv::VideoWriter* videoWriter = new cv::VideoWriter(
        "C:/videos/desktop.avi",
        CV_FOURCC('D','I','V','3'),
        5, cv::Size(1920, 1200), true);
    int frameCount = 0;
    while (frameCount < 100)
    {
        HWND hDesktopWindow = ::GetDesktopWindow();
        cv::Mat frame = hwnd2mat(hDesktopWindow);
        videoWriter->write(frame);
        ::Sleep(100);
        frameCount++;
    }
    delete videoWriter;
    return 0;
}
cv::Mat hwnd2mat(HWND hwnd)
{
    HDC hwindowDC, hwindowCompatibleDC;
    int height, width, srcheight, srcwidth;
    HBITMAP hbwindow;
    cv::Mat src;
    BITMAPINFOHEADER bi;

    hwindowDC = GetDC(hwnd);
    hwindowCompatibleDC = CreateCompatibleDC(hwindowDC);
    SetStretchBltMode(hwindowCompatibleDC, COLORONCOLOR);

    RECT windowsize; // get the height and width of the screen
    GetClientRect(hwnd, &windowsize);
    srcheight = windowsize.bottom;
    srcwidth = windowsize.right;
    height = windowsize.bottom / 1; // change this to whatever size you want to resize to
    width = windowsize.right / 1;
    src.create(height, width, CV_8UC3);

    // create a bitmap
    hbwindow = CreateCompatibleBitmap(hwindowDC, width, height);
    bi.biSize = sizeof(BITMAPINFOHEADER);
    bi.biWidth = width;
    bi.biHeight = -height; // a negative height makes it a top-down DIB, so it isn't drawn upside down
    bi.biPlanes = 1;
    bi.biBitCount = 32;
    bi.biCompression = BI_RGB;
    bi.biSizeImage = 0;
    bi.biXPelsPerMeter = 0;
    bi.biYPelsPerMeter = 0;
    bi.biClrUsed = 0;
    bi.biClrImportant = 0;

    // use the previously created device context with the bitmap
    SelectObject(hwindowCompatibleDC, hbwindow);
    // copy from the window device context to the bitmap device context
    StretchBlt(hwindowCompatibleDC, 0, 0, width, height, hwindowDC, 0, 0, srcwidth, srcheight, SRCCOPY);
    GetDIBits(hwindowCompatibleDC, hbwindow, 0, height, src.data, (BITMAPINFO*)&bi, DIB_RGB_COLORS);

    // avoid memory leak
    DeleteObject(hbwindow);
    DeleteDC(hwindowCompatibleDC);
    ReleaseDC(hwnd, hwindowDC);
    return src;
}
The result of this is that the video file is created and can be played without errors, but it's just solid grey. It seems like the bitmap of the desktop is not getting copied correctly into the cv::Mat frame. I've tried a zillion combinations of the values in the BITMAPINFOHEADER, but nothing works, and to be honest I don't know what I'm doing. I know OpenCV has conversion functions, but again, I don't even really know what I'm trying to convert to or from.
Any help appreciated!
Figured out a way to make it work - I have no idea if this is the best way, so comments or alternative solutions are still welcome.
It seems that for GetDIBits to work, the cv::Mat has to be 4-channel, i.e. CV_8UC4, like the original code of hwnd2mat before I changed it. If it is not CV_8UC4, no data is copied (GetDIBits returns 0 scan lines copied), and that's why my avi was just grey. So the first change was to create the src cv::Mat as 4-channel:
src.create(height, width, CV_8UC4);
But for the divx-encoded avi file that I'm trying to create, the frames should be 3-channel (don't ask me why). So I added a call to an OpenCV conversion function after calling GetDIBits(), as follows:
cv::Mat dst;
dst.create(height, width, CV_8UC3);
cv::cvtColor(src, dst, CV_RGBA2RGB);
And then I return dst from hwnd2mat instead of src. The call to cvtColor removes the alpha channel (the A in RGBA) and dst ends up with just the R,G,B channels.
You can get a bitmap with no alpha channel from GetDIBits and write it straight to the cv::VideoWriter. Just change biBitCount to 24 and leave the Mat format as CV_8UC3. This worked for me.
src.create(height, width, CV_8UC3);
bi.biBitCount = 24; // this is where to change
(One caveat: this lines up cleanly at this resolution because 1920 × 3 bytes per row is already a multiple of 4; GetDIBits pads each row to a 4-byte boundary, so for widths where width * 3 is not a multiple of 4 the tightly packed rows of a CV_8UC3 Mat would no longer match.)

sws_scale YUV --> RGB distorted image

I want to convert a YUV420P image (received from an H.264 stream) to RGB, while also resizing it, using sws_scale.
The size of the original image is 480 × 800. Converting with the same dimensions works fine.
But when I try to change the dimensions, I get a distorted image with the following pattern:
changing to 481 × 800 yields a distorted B&W image which looks like it's cut in the middle
482 × 800 is even more distorted
483 × 800 is distorted but in color
484 × 800 is OK (scaled correctly).
The pattern continues: scaling only works correctly when the difference between the source and destination widths is divisible by 4.
Here's sample code showing how I decode and convert the image. All methods report success.
int srcX = 480;
int srcY = 800;
int dstX = 481; // or 482, 483, etc.
int dstY = 800;

AVFrame* avFrameYUV = avcodec_alloc_frame();
avpicture_fill((AVPicture *)avFrameYUV, decoded_yuv_frame, PIX_FMT_YUV420P, srcX, srcY);
AVFrame* avFrameRGB = avcodec_alloc_frame();

AVPacket avPacket;
av_init_packet(&avPacket);
avPacket.size = read;     // size of raw data
avPacket.data = raw_data; // raw data before decoding to YUV

int frame_decoded = 0;
int decoded_length = avcodec_decode_video2(g_avCodecContext, avFrameYUV, &frame_decoded, &avPacket);

int size = dstX * dstY * 3;
struct SwsContext *img_convert_ctx = sws_getContext(srcX, srcY, SOURCE_FORMAT, dstX, dstY, PIX_FMT_BGR24, SWS_BICUBIC, NULL, NULL, NULL);
avpicture_fill((AVPicture *)avFrameRGB, rgb_frame, PIX_FMT_RGB24, dstX, dstY);
sws_scale(img_convert_ctx, avFrameYUV->data, avFrameYUV->linesize, 0, srcY, avFrameRGB->data, avFrameRGB->linesize);

// draws the resulting frame with Windows BitBlt
DrawBitmap(hdc, dstX, dstY, rgb_frame, size);
sws_freeContext(img_convert_ctx);
When you make a bitmap image, the width of the image MUST be a multiple of 4.
So you have to change the width to 480, 484, 488, 492, and so on.
Here is a macro for rounding a row up to the 4-byte-aligned size:
#define WIDTHBYTES(bits) (((bits) + 31) / 32 * 4)

void main()
{
    BITMAPFILEHEADER bmFileHeader;
    BITMAPINFOHEADER bmInfoHeader;
    // load image
    // ...
    // when you use the macro, pass the parameter like this:
    int tempWidth = WIDTHBYTES(width * bmInfoHeader.biBitCount);
}
I hope you solve the problem.
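Applied to the question's code, that means aligning the requested destination width before creating the SwsContext and filling the RGB picture. A minimal sketch of that adjustment, reusing dstX from the code above (note, incidentally, that the original creates the context with PIX_FMT_BGR24 but fills the picture as PIX_FMT_RGB24; the two formats should match):

// round the destination width down to the nearest multiple of 4
int alignedDstX = dstX & ~3;
struct SwsContext *img_convert_ctx = sws_getContext(srcX, srcY, SOURCE_FORMAT,
                                                    alignedDstX, dstY, PIX_FMT_RGB24,
                                                    SWS_BICUBIC, NULL, NULL, NULL);
avpicture_fill((AVPicture *)avFrameRGB, rgb_frame, PIX_FMT_RGB24, alignedDstX, dstY);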

Save bitmap to video (libavcodec ffmpeg)

I'd like to convert an HBITMAP to a video stream using libavcodec.
I get my HBITMAP using:
HBITMAP hCaptureBitmap = CreateCompatibleBitmap(hDesktopDC, nScreenWidth, nScreenHeight);
SelectObject(hCaptureDC, hCaptureBitmap);
BitBlt(hCaptureDC, 0, 0, nScreenWidth, nScreenHeight, hDesktopDC, 0, 0, SRCCOPY);
And I'd like to convert it to YUV (which is required by the codec I'm using). For that I use:
SwsContext *fooContext = sws_getContext(c->width, c->height, PIX_FMT_BGR32,
                                        c->width, c->height, PIX_FMT_YUV420P,
                                        SWS_FAST_BILINEAR, NULL, NULL, NULL);
uint8_t *movie_dib_bits = reinterpret_cast<uint8_t *>(bm.bmBits) + bm.bmWidthBytes * (bm.bmHeight - 1);
int dibrowbytes = -bm.bmWidthBytes;
uint8_t* data_out[1];
int stride_out[1];
data_out[0] = movie_dib_bits;
stride_out[0] = dibrowbytes;
sws_scale(fooContext, data_out, stride_out, 0, c->height, picture->data, picture->linesize);
But this is not working at all... Any idea why? Or how I could do it differently?
Thank you!
I am not familiar with the stuff you are using to get the bitmap, but assuming it is correct and you have a pointer to the BGR 32-bit/pixel data, try something like this:
uint8_t* inbuffer;
int in_width, in_height, out_width, out_height;

// here, make sure inbuffer points to the input BGR32 data,
// and the input and output dimensions are set correctly.

// calculate the bytes needed for the output image
int nbytes = avpicture_get_size(PIX_FMT_YUV420P, out_width, out_height);

// create buffer for the output image
uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes);

// create ffmpeg frame structures. These do not allocate space for image data,
// just the pointers and other information about the image.
AVFrame* inpic = avcodec_alloc_frame();
AVFrame* outpic = avcodec_alloc_frame();

// this will set the pointers in the frame structures to the right points in
// the input and output buffers.
avpicture_fill((AVPicture*)inpic, inbuffer, PIX_FMT_BGR32, in_width, in_height);
avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P, out_width, out_height);

// create the conversion context
SwsContext* fooContext = sws_getContext(in_width, in_height, PIX_FMT_BGR32, out_width, out_height, PIX_FMT_YUV420P, SWS_FAST_BILINEAR, NULL, NULL, NULL);

// perform the conversion
sws_scale(fooContext, inpic->data, inpic->linesize, 0, in_height, outpic->data, outpic->linesize);

// encode the frame here...

// free memory
av_free(outbuffer);
av_free(inpic);
av_free(outpic);
Of course, if you are going to be converting a sequence of frames, just make your allocations once at the beginning and deallocations once at the end.
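To fill in the "encode the frame here..." step: on current FFmpeg the send/receive API would be used. A minimal sketch, assuming the AVCodecContext *c from the question is open for encoding, plus a hypothetical output FILE *outfile and a running frame counter (neither appears in the original answer):

outpic->pts = frame_index++; // presentation timestamp in codec time_base units
if (avcodec_send_frame(c, outpic) == 0) {
    AVPacket *pkt = av_packet_alloc();
    while (avcodec_receive_packet(c, pkt) == 0) {
        fwrite(pkt->data, 1, pkt->size, outfile); // raw elementary-stream output
        av_packet_unref(pkt);
    }
    av_packet_free(&pkt);
}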