ffmpeg memory leak in the avcodec_open2 method - c++

I've developed an application which handles a live video stream. The problem is that it runs as a service, and over time I am noticing some memory growth. When I checked the application with valgrind, it did not find any leak-related issues.
So I checked it with Google performance tools. This is the result (subtracting one of the first dumps from the latest) after an approximately 6 hour run:
30.0 35.7% 35.7% 30.0 35.7% av_malloc
28.9 34.4% 70.2% 28.9 34.4% av_reallocp
24.5 29.2% 99.4% 24.5 29.2% x264_malloc
When I check the memory graph I see that these allocations are related to avcodec_open2. The client code is:
g_EncoderMutex.lock();
ffmpeg_encoder_start(OutFileName.c_str(), AV_CODEC_ID_H264, m_FPS, width, height);
for (pts = 0; pts < VideoImages.size(); pts++) {
    m_frame->pts = pts;
    ffmpeg_encoder_encode_frame(VideoImages[pts].RGBimage[0]);
}
ffmpeg_encoder_finish();
g_EncoderMutex.unlock();
The ffmpeg_encoder_start method is:
void VideoEncoder::ffmpeg_encoder_start(const char *filename, int codec_id, int fps, int width, int height)
{
    int ret;
    m_FPS = fps;
    AVOutputFormat *fmt = av_guess_format(filename, NULL, NULL);
    m_oc = NULL;
    avformat_alloc_output_context2(&m_oc, NULL, NULL, filename);
    m_stream = avformat_new_stream(m_oc, 0);
    AVCodec *codec = NULL;
    codec = avcodec_find_encoder(codec_id);
    if (!codec)
    {
        fprintf(stderr, "Codec not found\n");
        return; //-1
    }
    m_c = m_stream->codec;
    avcodec_get_context_defaults3(m_c, codec);
    m_c->bit_rate = 400000;
    m_c->width = width;
    m_c->height = height;
    m_c->time_base.num = 1;
    m_c->time_base.den = m_FPS;
    m_c->gop_size = 10;
    m_c->max_b_frames = 1;
    m_c->pix_fmt = AV_PIX_FMT_YUV420P;
    if (codec_id == AV_CODEC_ID_H264)
        av_opt_set(m_c->priv_data, "preset", "ultrafast", 0);
    if (m_oc->oformat->flags & AVFMT_GLOBALHEADER)
        m_c->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    avcodec_open2(m_c, codec, NULL);
    m_stream->time_base = (AVRational){1, m_FPS};
    if (avio_open(&m_oc->pb, filename, AVIO_FLAG_WRITE) < 0)
    {
        printf("Could not open '%s'\n", filename);
        exit(1);
    }
    avformat_write_header(m_oc, NULL);
    m_frame = av_frame_alloc();
    if (!m_frame) {
        printf("Could not allocate video frame\n");
        exit(1);
    }
    m_frame->format = m_c->pix_fmt;
    m_frame->width = m_c->width;
    m_frame->height = m_c->height;
    ret = av_image_alloc(m_frame->data, m_frame->linesize, m_c->width, m_c->height, m_c->pix_fmt, 32);
    if (ret < 0) {
        printf("Could not allocate raw picture buffer\n");
        exit(1);
    }
}
The ffmpeg_encoder_encode_frame is:
void VideoEncoder::ffmpeg_encoder_encode_frame(uint8_t *rgb)
{
    int ret, got_output;
    ffmpeg_encoder_set_frame_yuv_from_rgb(rgb);
    av_init_packet(&m_pkt);
    m_pkt.data = NULL;
    m_pkt.size = 0;
    ret = avcodec_encode_video2(m_c, &m_pkt, m_frame, &got_output);
    if (ret < 0) {
        printf("Error encoding frame\n");
        exit(1);
    }
    if (got_output)
    {
        av_packet_rescale_ts(&m_pkt,
                             (AVRational){1, m_FPS}, m_stream->time_base);
        m_pkt.stream_index = m_stream->index;
        int ret = av_interleaved_write_frame(m_oc, &m_pkt);
        av_packet_unref(&m_pkt);
    }
}
ffmpeg_encoder_finish code is:
void VideoEncoder::ffmpeg_encoder_finish(void)
{
    int got_output, ret;
    do {
        ret = avcodec_encode_video2(m_c, &m_pkt, NULL, &got_output);
        if (ret < 0) {
            printf("Error encoding frame\n");
            exit(1);
        }
        if (got_output) {
            av_packet_rescale_ts(&m_pkt,
                                 (AVRational){1, m_FPS}, m_stream->time_base);
            m_pkt.stream_index = m_stream->index;
            int ret = av_interleaved_write_frame(m_oc, &m_pkt);
            av_packet_unref(&m_pkt);
        }
    } while (got_output);
    av_write_trailer(m_oc);
    avio_closep(&m_oc->pb);
    avformat_free_context(m_oc);
    av_freep(&m_frame->data[0]);
    av_frame_free(&m_frame);
    av_packet_unref(&m_pkt);
    sws_freeContext(m_sws_context);
}
This code runs multiple times in a loop.
So my question is: what am I doing wrong? Maybe ffmpeg is doing some kind of internal buffering? If so, how do I disable it? This kind of memory growth is unacceptable for a long-running service.

You didn't close the encoder context. Add avcodec_close(m_c) to ffmpeg_encoder_finish().
See ffmpeg.org:
"The user is required to call avcodec_close() and avformat_free_context() to clean up the allocation by avformat_new_stream()."
Plus, I don't see how m_c is allocated. Usually it is allocated with avcodec_alloc_context3() and must be deallocated with av_free() (after closing, of course).
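A minimal sketch of the fixed teardown, keeping the question's member names; only the avcodec_close() call is new, the drain loop above it stays unchanged:
void VideoEncoder::ffmpeg_encoder_finish(void)
{
    /* ... drain loop and av_write_trailer(m_oc) as in the question ... */
    avcodec_close(m_c); // releases the buffers avcodec_open2() allocates on every restart
    avio_closep(&m_oc->pb);
    avformat_free_context(m_oc);
    av_freep(&m_frame->data[0]);
    av_frame_free(&m_frame);
    av_packet_unref(&m_pkt);
    sws_freeContext(m_sws_context);
}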

Don't use valgrind to check for memory leaks in your own projects; use sanitizers instead, since with these you can pinpoint the source of the leak. For example, rebuilding with gcc/clang's -fsanitize=address -fno-omit-frame-pointer and running the service makes LeakSanitizer print the allocating call stack of every leaked block at exit. Check this out: Multi-Threaded Video Decoder Leaks Memory
Hope that helps.

It's sufficient to call avcodec_free_context(&m_c): this function calls avcodec_close() and also deallocates extradata (if it was allocated) and subtitle_header (if it was allocated).
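In code the distinction might look like the following sketch; which branch applies depends on how m_c was obtained, and in the question m_c actually points at m_stream->codec:
// If m_c was allocated with avcodec_alloc_context3():
avcodec_free_context(&m_c); // closes the codec, frees extradata etc., sets m_c to NULL

// If m_c points at m_stream->codec (as in the question), the context is owned
// by the stream: only close it here and let avformat_free_context() free it.
avcodec_close(m_c);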

Related

FFMPEG using AV_PIX_FMT_D3D11 gives "Error registering the input resource" from NVENC

Input frames start on the GPU as ID3D11Texture2D pointers.
I encode them to H264 using FFMPEG + NVENC. NVENC works perfectly if I download the textures to CPU memory as format AV_PIX_FMT_BGR0, but I'd like to cut out the CPU texture download entirely and pass the GPU memory pointer directly into the encoder in its native format. I write frames like this:
int write_gpu_video_frame(ID3D11Texture2D* gpuTex, AVFormatContext* oc, OutputStream* ost) {
    AVFrame *hw_frame = ost->hw_frame;
    printf("gpuTex address = 0x%x\n", &gpuTex);
    hw_frame->data[0] = (uint8_t *) gpuTex;
    hw_frame->data[1] = (uint8_t *) (intptr_t) 0;
    hw_frame->pts     = ost->next_pts++;
    return write_frame(oc, ost->enc, ost->st, hw_frame);
    // write_frame is identical to the sample code in the ffmpeg repo
}
Running the code with this modification gives the following error:
gpuTex address = 0x4582f6d0
[h264_nvenc @ 00000191233e1bc0] Error registering an input resource: invalid call (9):
[h264_nvenc @ 00000191233e1bc0] Could not register an input HW frame
Error sending a frame to the encoder: Unknown error occurred
Here's some supplemental code used in setting up and configuring the hw context and encoder:
/* A few config flags */
#define ENABLE_NVENC TRUE
#define USE_D3D11    TRUE // Skip downloading textures to CPU memory and send it straight to NVENC
/* Init hardware frame context */
static int set_hwframe_ctx(AVCodecContext* ctx, AVBufferRef* hw_device_ctx) {
    AVBufferRef* hw_frames_ref;
    AVHWFramesContext* frames_ctx = NULL;
    int err = 0;
    if (!(hw_frames_ref = av_hwframe_ctx_alloc(hw_device_ctx))) {
        fprintf(stderr, "Failed to create HW frame context.\n");
        throw;
    }
    frames_ctx = (AVHWFramesContext*) (hw_frames_ref->data);
    frames_ctx->format    = AV_PIX_FMT_D3D11;
    frames_ctx->sw_format = AV_PIX_FMT_NV12;
    frames_ctx->width     = STREAM_WIDTH;
    frames_ctx->height    = STREAM_HEIGHT;
    //frames_ctx->initial_pool_size = 20;
    if ((err = av_hwframe_ctx_init(hw_frames_ref)) < 0) {
        fprintf(stderr, "Failed to initialize hw frame context. Error code: %s\n", av_err2str(err));
        av_buffer_unref(&hw_frames_ref);
        throw;
    }
    ctx->hw_frames_ctx = av_buffer_ref(hw_frames_ref);
    if (!ctx->hw_frames_ctx)
        err = AVERROR(ENOMEM);
    av_buffer_unref(&hw_frames_ref);
    return err;
}
/* Add an output stream. */
/* Add an output stream. */
static void add_video_stream(
    OutputStream* ost,
    AVFormatContext* oc,
    const AVCodec** codec,
    enum AVCodecID codec_id,
    int width,
    int height
) {
    AVCodecContext* c;
    int i, ret;
    bool nvenc = false;
    /* find the encoder */
    if (ENABLE_NVENC) {
        printf("Getting nvenc encoder\n");
        *codec = avcodec_find_encoder_by_name("h264_nvenc");
        nvenc = true;
    }
    if (!ENABLE_NVENC || *codec == NULL) {
        printf("Getting standard encoder\n");
        *codec = avcodec_find_encoder(codec_id);
        nvenc = false;
    }
    if (!(*codec)) {
        fprintf(stderr, "Could not find encoder for '%s'\n",
                avcodec_get_name(codec_id));
        exit(1);
    }
    ost->st = avformat_new_stream(oc, NULL);
    if (!ost->st) {
        fprintf(stderr, "Could not allocate stream\n");
        exit(1);
    }
    ost->st->id = oc->nb_streams - 1;
    c = avcodec_alloc_context3(*codec);
    if (!c) {
        fprintf(stderr, "Could not alloc an encoding context\n");
        exit(1);
    }
    ost->enc = c;
    printf("Using video codec %s\n", avcodec_get_name(codec_id));
    c->codec_id = codec_id;
    c->bit_rate = 4000000;
    /* Resolution must be a multiple of two. */
    c->width = STREAM_WIDTH;
    c->height = STREAM_HEIGHT;
    /* timebase: This is the fundamental unit of time (in seconds) in terms
     * of which frame timestamps are represented. For fixed-fps content,
     * timebase should be 1/framerate and timestamp increments should be
     * identical to 1. */
    ost->st->time_base = {1, STREAM_FRAME_RATE};
    c->time_base = ost->st->time_base;
    c->gop_size = 12; /* emit one intra frame every twelve frames at most */
    if (nvenc && USE_D3D11) {
        const std::string hw_device_name = "d3d11va";
        AVHWDeviceType device_type = av_hwdevice_find_type_by_name(hw_device_name.c_str());
        // set up hw device context
        AVBufferRef *hw_device_ctx;
        // const char* device = "0"; // Default GPU (may be integrated in the case of switchable graphics!)
        const char* device = "1";
        ret = av_hwdevice_ctx_create(&hw_device_ctx, device_type, device, nullptr, 0);
        if (ret < 0) {
            fprintf(stderr, "Could not create hwdevice context; %s", av_err2str(ret));
        }
        set_hwframe_ctx(c, hw_device_ctx);
        c->pix_fmt = AV_PIX_FMT_D3D11;
    } else if (nvenc && !USE_D3D11)
        c->pix_fmt = AV_PIX_FMT_BGR0;
    else
        c->pix_fmt = STREAM_PIX_FMT;
    if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
        /* just for testing, we also add B-frames */
        c->max_b_frames = 2;
    }
    if (c->codec_id == AV_CODEC_ID_MPEG1VIDEO) {
        /* Needed to avoid using macroblocks in which some coeffs overflow.
         * This does not happen with normal video, it just happens here as
         * the motion of the chroma plane does not match the luma plane. */
        c->mb_decision = 2;
    }
    /* Some formats want stream headers to be separate. */
    if (oc->oformat->flags & AVFMT_GLOBALHEADER)
        c->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
}
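As a hedged sketch only (not tested against this setup): with AV_PIX_FMT_D3D11 the encoder expects frames that belong to its hw_frames_ctx pool, so a texture created elsewhere is usually copied into a pooled frame rather than having its raw pointer dropped into data[0]. Assuming gpuTex is NV12 and lives on the same D3D11 device, and given a valid ID3D11DeviceContext* named d3d11_ctx (a name invented here), write_gpu_video_frame could be reworked along these lines:
int write_gpu_video_frame(ID3D11Texture2D* gpuTex, AVFormatContext* oc, OutputStream* ost) {
    // Get a frame from the encoder's pool instead of pointing at a foreign texture.
    AVFrame *hw_frame = av_frame_alloc();
    if (!hw_frame || av_hwframe_get_buffer(ost->enc->hw_frames_ctx, hw_frame, 0) < 0) {
        fprintf(stderr, "Could not get a pooled hw frame\n");
        return -1;
    }
    // For AV_PIX_FMT_D3D11, data[0] is an ID3D11Texture2D* and data[1] the array slot.
    ID3D11Texture2D *dst  = (ID3D11Texture2D *) hw_frame->data[0];
    UINT             slot = (UINT) (intptr_t) hw_frame->data[1];
    d3d11_ctx->CopySubresourceRegion(dst, slot, 0, 0, 0, gpuTex, 0, NULL);
    hw_frame->pts = ost->next_pts++;
    int ret = write_frame(oc, ost->enc, ost->st, hw_frame);
    av_frame_free(&hw_frame);
    return ret;
}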

FFMPEG encoder:Application provided invalid, non monotonically increasing dts to muxer in stream

I am encoding video with the h264 codec in an mp4 container. The source is a sequence of images I load from memory. The encoding seems to be working well; I am able to verify the exact number of frames encoded. What bugs me is the following warning/error I get at the end of the session from the muxer in the terminal:
Application provided invalid, non monotonically increasing dts to
muxer in stream 0:
I tried different techniques to calculate pts/dts and it is always the same. I did notice, though, that avcodec_receive_packet() sometimes returns packets out of order, so maybe that's the reason for the warning? Below is my function where I encode the image data.
bool LibAVEncoder::EncodeFrame(uint8_t* data)
{
    int err;
    if (!mVideoFrame) // init frame once
    {
        mVideoFrame = av_frame_alloc();
        mVideoFrame->format = mVideoStream->codecpar->format;
        mVideoFrame->width  = mCodecContext->width;
        mVideoFrame->height = mCodecContext->height;
        if ((err = av_frame_get_buffer(mVideoFrame, 0)) < 0)
        {
            printf("Failed to allocate picture: %i\n", err);
            return false;
        }
    }
    err = av_frame_make_writable(mVideoFrame);
    if (!mSwsCtx)
    {
        mSwsCtx = sws_getContext(mCodecContext->width, mCodecContext->height, AV_PIX_FMT_RGBA, mCodecContext->width,
                                 mCodecContext->height, AV_PIX_FMT_YUV420P, SWS_BICUBIC, 0, 0, 0);
    }
    const int inLinesize[1] = { 4 * mCodecContext->width };
    // RGBA to YUV
    sws_scale(mSwsCtx, (const uint8_t* const*)&data, inLinesize, 0, mCodecContext->height, mVideoFrame->data, mVideoFrame->linesize);
    mVideoFrame->pts = mFrameCount++;
    // Encoding
    err = avcodec_send_frame(mCodecContext, mVideoFrame);
    if (err < 0)
    {
        printf("Failed to send frame: %i\n", err);
        return false;
    }
    while (err >= 0)
    {
        err = avcodec_receive_packet(mCodecContext, mPacket);
        if (err == AVERROR(EAGAIN))
        {
            return true;
        }
        else if (err == AVERROR_EOF)
        {
            return true;
        }
        else if (err < 0)
        {
            fprintf(stderr, "Error during encoding, shutting down\n");
            exit(1);
        }
        // 1. way of calculating pts/dts
        mPacket->pts = av_rescale_q_rnd(mPacket->pts, mCodecContext->time_base, mVideoStream->time_base, AVRounding(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
        mPacket->dts = av_rescale_q_rnd(mPacket->dts, mCodecContext->time_base, mVideoStream->time_base, AVRounding(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
        mPacket->duration = av_rescale_q(mPacket->duration, mCodecContext->time_base, mVideoStream->time_base);
        /*
        // 2. another way of calculating pts/dts
        const int64_t duration = av_rescale_q(1, mCodecContext->time_base, mVideoStream->time_base);
        mPacket->duration = duration;
        mPacket->pts = totalDuration;
        mPacket->dts = totalDuration;
        totalDuration += duration;
        */
        mPacket->stream_index = mVideoStream->index;
        av_interleaved_write_frame(mOFormatContext, mPacket);
        av_packet_unref(mPacket);
    }
    return true;
}
I would appreciate any pointers to what I am doing wrong here.
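For what it's worth, a common simplification is to let av_packet_rescale_ts() rewrite pts, dts and duration together; a minimal sketch, assuming mCodecContext->time_base was set to 1/fps when the encoder was opened:
// Rescale all packet timestamps from the codec time base to the stream
// time base in one call, then hand the packet to the muxer.
av_packet_rescale_ts(mPacket, mCodecContext->time_base, mVideoStream->time_base);
mPacket->stream_index = mVideoStream->index;
av_interleaved_write_frame(mOFormatContext, mPacket);
av_packet_unref(mPacket);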

Encoded video made using the Nvidia Encoder SDK is not correct

I am trying to integrate the Nvidia Encoder SDK with the Nvidia Frame Buffer Capture SDK. I capture frames into a CUDA buffer and then send them to the Nvidia encoder to be encoded in H.264 format.
The encoded video, however, is not what I want it to be: it is a completely pink screen.
Are there any reasons why this might be happening?
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <dlfcn.h>
#include <string.h>
#include <getopt.h>
#include <unistd.h>
#include "nvEncodeAPI.h"
#include <NvFBC.h>
// If you have the CUDA Toolkit installed, you can include the CUDA headers
// #include "cuda.h"
//#include "cuda_drvapi_dynlink_cuda.h"
#include "cuda.h"
#include "NvFBCUtils.h"
#define APP_VERSION 4
#define LIB_NVFBC_NAME "libnvidia-fbc.so.1"
#define LIB_CUDA_NAME "libcuda.so.1"
#define N_FRAMES 10
/*
* CUDA entry points
*/
typedef CUresult (* CUINITPROC) (unsigned int Flags);
typedef CUresult (* CUDEVICEGETPROC) (CUdevice *device, int ordinal);
typedef CUresult (* CUCTXCREATEV2PROC) (CUcontext *pctx, unsigned int flags, CUdevice dev);
typedef CUresult (* CUMEMCPYDTOHV2PROC) (void *dstHost, CUdeviceptr srcDevice, size_t ByteCount);
typedef CUresult (* CUGETDEVICEPTR) (CUdeviceptr *dptr, size_t bytesize);
static CUINITPROC cuInit_ptr = NULL;
static CUDEVICEGETPROC cuDeviceGet_ptr = NULL;
static CUCTXCREATEV2PROC cuCtxCreate_v2_ptr = NULL;
static CUMEMCPYDTOHV2PROC cuMemcpyDtoH_v2_ptr = NULL;
static CUGETDEVICEPTR cuGetDevicePtr=NULL;
typedef NVENCSTATUS (*PNVENCCREATEINSTANCE) (NV_ENCODE_API_FUNCTION_LIST* pFuncList);
/**
* Dynamically opens the CUDA library and resolves the symbols that are
* needed for this application.
*
* \param [out] libCUDA
* A pointer to the opened CUDA library.
*
* \return
* NVFBC_TRUE in case of success, NVFBC_FALSE otherwise.
*/
static NVFBC_BOOL cuda_load_library(void *libCUDA)
{
libCUDA = dlopen(LIB_CUDA_NAME, RTLD_NOW);
if (libCUDA == NULL) {
fprintf(stderr, "Unable to open '%s'\n", LIB_CUDA_NAME);
return NVFBC_FALSE;
}
cuInit_ptr = (CUINITPROC) dlsym(libCUDA, "cuInit");
if (cuInit_ptr == NULL) {
fprintf(stderr, "Unable to resolve symbol 'cuInit'\n");
return NVFBC_FALSE;
}
cuDeviceGet_ptr = (CUDEVICEGETPROC) dlsym(libCUDA, "cuDeviceGet");
if (cuDeviceGet_ptr == NULL) {
fprintf(stderr, "Unable to resolve symbol 'cuDeviceGet'\n");
return NVFBC_FALSE;
}
cuCtxCreate_v2_ptr = (CUCTXCREATEV2PROC) dlsym(libCUDA, "cuCtxCreate_v2");
if (cuCtxCreate_v2_ptr == NULL) {
fprintf(stderr, "Unable to resolve symbol 'cuCtxCreate_v2'\n");
return NVFBC_FALSE;
}
cuMemcpyDtoH_v2_ptr = (CUMEMCPYDTOHV2PROC) dlsym(libCUDA, "cuMemcpyDtoH_v2");
if (cuMemcpyDtoH_v2_ptr == NULL) {
fprintf(stderr, "Unable to resolve symbol 'cuMemcpyDtoH_v2'\n");
return NVFBC_FALSE;
}
cuGetDevicePtr = (CUGETDEVICEPTR) dlsym(libCUDA,"cuMemAlloc");
if(cuGetDevicePtr==NULL){
fprintf(stderr, "Unable to resolve symbol 'cuMemAlloc'\n");
return NVFBC_FALSE;
}
return NVFBC_TRUE;
}
/**
* Initializes CUDA and creates a CUDA context.
*
* \param [in] cuCtx
* A pointer to the created CUDA context.
*
* \return
* NVFBC_TRUE in case of success, NVFBC_FALSE otherwise.
*/
static NVFBC_BOOL cuda_init(CUcontext *cuCtx)
{
CUresult cuRes;
CUdevice cuDev;
cuRes = cuInit_ptr(0);
if (cuRes != CUDA_SUCCESS) {
fprintf(stderr, "Unable to initialize CUDA (result: %d)\n", cuRes);
return NVFBC_FALSE;
}
cuRes = cuDeviceGet_ptr(&cuDev, 0);
if (cuRes != CUDA_SUCCESS) {
fprintf(stderr, "Unable to get CUDA device (result: %d)\n", cuRes);
return NVFBC_FALSE;
}
cuRes = cuCtxCreate_v2_ptr(cuCtx, CU_CTX_SCHED_AUTO, cuDev);
if (cuRes != CUDA_SUCCESS) {
fprintf(stderr, "Unable to create CUDA context (result: %d)\n", cuRes);
return NVFBC_FALSE;
}
return NVFBC_TRUE;
}
/**
* Initializes the NvFBC and CUDA libraries and creates an NvFBC instance.
*
* Creates and sets up a capture session to video memory using the CUDA interop.
*
* Captures a bunch of frames every second, converts them to BMP and saves them
* to the disk.
*/
int main(int argc, char *argv[])
{
//handing the opening of the nvidia encoder library
void* handle = dlopen("libnvidia-encode.so",RTLD_NOW);
if(handle==NULL)
{
printf("Unable to load the nvidia encoder library");
return EXIT_FAILURE;
}
PNVENCCREATEINSTANCE NvEncCreateInstance_ptr;
NvEncCreateInstance_ptr = (PNVENCCREATEINSTANCE) dlsym(handle,"NvEncodeAPICreateInstance");
if(NvEncCreateInstance_ptr==NULL)
{
printf("Failure to load the nv encode lib");
}
static struct option longopts[] = {
{ "get-status", no_argument, NULL, 'g' },
{ "track", required_argument, NULL, 't' },
{ "frames", required_argument, NULL, 'f' },
{ "size", required_argument, NULL, 's' },
{ "format", required_argument, NULL, 'o' },
{ NULL, 0, NULL, 0 }
};
int opt, ret;
unsigned int i, nFrames = N_FRAMES;
NVFBC_SIZE frameSize = { 0, 0 };
NVFBC_BOOL printStatusOnly = NVFBC_FALSE;
NVFBC_TRACKING_TYPE trackingType = NVFBC_TRACKING_DEFAULT;
char outputName[NVFBC_OUTPUT_NAME_LEN];
uint32_t outputId = 0;
void *libNVFBC = NULL, *libCUDA = NULL;
PNVFBCCREATEINSTANCE NvFBCCreateInstance_ptr = NULL;
NVFBC_API_FUNCTION_LIST pFn;
CUcontext cuCtx;
NVFBCSTATUS fbcStatus;
NVFBC_BOOL fbcBool;
NVFBC_SESSION_HANDLE fbcHandle;
NVFBC_CREATE_HANDLE_PARAMS createHandleParams;
NVFBC_GET_STATUS_PARAMS statusParams;
NVFBC_CREATE_CAPTURE_SESSION_PARAMS createCaptureParams;
NVFBC_DESTROY_CAPTURE_SESSION_PARAMS destroyCaptureParams;
NVFBC_DESTROY_HANDLE_PARAMS destroyHandleParams;
NVFBC_TOCUDA_SETUP_PARAMS setupParams;
NVFBC_BUFFER_FORMAT bufferFormat = NVFBC_BUFFER_FORMAT_NV12;
char videoFile[64]="video.h264";
FILE* fd;
/*
* Parse the command line.
*/
while ((opt = getopt_long(argc, argv, "hgt:f:s:o:", longopts, NULL)) != -1) {
switch (opt) {
case 'g':
printStatusOnly = NVFBC_TRUE;
break;
case 't':
NvFBCUtilsParseTrackingType(optarg, &trackingType, outputName);
break;
case 'f':
nFrames = (unsigned int) atoi(optarg);
break;
case 's':
ret = sscanf(optarg, "%ux%u", &frameSize.w, &frameSize.h);
if (ret != 2) {
fprintf(stderr, "Invalid size format: '%s'\n", optarg);
return EXIT_FAILURE;
}
break;
case 'o':
if (!strcasecmp(optarg, "rgb")) {
bufferFormat = NVFBC_BUFFER_FORMAT_RGB;
} else if (!strcasecmp(optarg, "argb")) {
bufferFormat = NVFBC_BUFFER_FORMAT_ARGB;
} else if (!strcasecmp(optarg, "nv12")) {
bufferFormat = NVFBC_BUFFER_FORMAT_NV12;
} else if (!strcasecmp(optarg, "yuv444p")) {
bufferFormat = NVFBC_BUFFER_FORMAT_YUV444P;
} else {
fprintf(stderr, "Unknown buffer format: '%s'\n", optarg);
return EXIT_FAILURE;
}
break;
case 'h':
default:
usage(argv[0]);
return EXIT_SUCCESS;
}
}
NvFBCUtilsPrintVersions(APP_VERSION);
/*
* Dynamically load the NvFBC library.
*/
libNVFBC = dlopen(LIB_NVFBC_NAME, RTLD_NOW);
if (libNVFBC == NULL) {
fprintf(stderr, "Unable to open '%s'\n", LIB_NVFBC_NAME);
return EXIT_FAILURE;
}
fbcBool = cuda_load_library(libCUDA);
if (fbcBool != NVFBC_TRUE) {
return EXIT_FAILURE;
}
fbcBool = cuda_init(&cuCtx);
if (fbcBool != NVFBC_TRUE) {
return EXIT_FAILURE;
}
//input output buffer the contains all the pointers to the functions for the api
NV_ENCODE_API_FUNCTION_LIST nvencFunctionList;
nvencFunctionList.version=NV_ENCODE_API_FUNCTION_LIST_VER;
//this function call populates the passed pointer to the nvidia api function list with
//the function pointers to the routines implemented by the encoder api
NVENCSTATUS status = NvEncCreateInstance_ptr(&nvencFunctionList);//NvEncodeAPICreateInstance(&nvencFunctionList);
//exit code if failed to create instance of the api
if(status!=NV_ENC_SUCCESS)
{
printf("The nvidia encoder could not be initialized");
return EXIT_FAILURE;
}
//creating the encode session params
NV_ENC_OPEN_ENCODE_SESSION_EX_PARAMS openSessionExParams;
//initializing them and then adding the values
// memset(&openSessionExParams,0,sizeof(openSessionExParams));
//setting the version
openSessionExParams.version = NV_ENC_OPEN_ENCODE_SESSION_EX_PARAMS_VER;
openSessionExParams.deviceType = NV_ENC_DEVICE_TYPE_CUDA;
//Note - Maybe change the next line
//adding the device pointer which in this case will be a cuda context
openSessionExParams.device = (void*)cuCtx;
openSessionExParams.reserved = NULL;
openSessionExParams.apiVersion = NVENCAPI_VERSION;
//setting all values of the reserved 1 array to 0
//Note - check the next two lines
memset(openSessionExParams.reserved1,0,253*sizeof(uint32_t));//sizeof(openSessionExParams.reserved1)/sizeof(openSessionExParams.reserved1[0]));
memset(openSessionExParams.reserved2,NULL,64*sizeof(void*));//sizeof(openSessionExParams.reserved2)/sizeof(openSessionExParams.reserved2[0]));
//handle for the encoder
void* encoder;
//Note- Seg fault
status = nvencFunctionList.nvEncOpenEncodeSessionEx(&openSessionExParams,&encoder);
//NvEncOpenEncodeSessionEx(&openSessionExParams,encoder);
if(status!=NV_ENC_SUCCESS)
{
printf("Failure opening encoding session: %u\n",status);
return EXIT_FAILURE;
}
//getting the count of the available encoder settings
uint32_t guidCount;
status =nvencFunctionList.nvEncGetEncodeGUIDCount(encoder,&guidCount);
if(status!=NV_ENC_SUCCESS)
{
printf("Failure getting GUID count");
return EXIT_FAILURE;
}
//allocating a buffer that holds that count
GUID* GUIDs = malloc(guidCount*sizeof(GUID));
//number of supported guid
uint32_t supportedGUID;
//adding all the guids to the previously allocated buffer
status =nvencFunctionList.nvEncGetEncodeGUIDs(encoder,GUIDs,guidCount,&supportedGUID);
if(status!=NV_ENC_SUCCESS)
{
printf("Failure filling the guid buffer");
return EXIT_FAILURE;
}
//setting the encode guid
GUID encodeGuid = NV_ENC_CODEC_H264_GUID;
//setting up the presets
uint32_t encodePresetCount;
status =nvencFunctionList.nvEncGetEncodePresetCount(encoder,encodeGuid,&encodePresetCount);
if(status!=NV_ENC_SUCCESS)
{
printf("Failure getting the encoder preset count");
return EXIT_FAILURE;
}
//allocating a buffer to hold the supported preset guids
GUID *presetGUIDs = malloc(encodePresetCount*sizeof(GUID));
uint32_t supportedPresetGUIDs;
status =nvencFunctionList.nvEncGetEncodePresetGUIDs(encoder,encodeGuid,presetGUIDs,encodePresetCount,&supportedPresetGUIDs);
if(status!=NV_ENC_SUCCESS)
{
printf("Failure filling up the preset buffer");
return EXIT_FAILURE;
}
//getting a preset guid Note - check this line later
GUID presetGuid = NV_ENC_PRESET_LOW_LATENCY_HQ_GUID;
//setting up a preset config Note - check this
NV_ENC_PRESET_CONFIG encPresetConfig;
encPresetConfig.version = NV_ENC_PRESET_CONFIG_VER;
encPresetConfig.presetCfg.version=NV_ENC_CONFIG_VER;
status = nvencFunctionList.nvEncGetEncodePresetConfig(encoder,encodeGuid,presetGuid,&encPresetConfig);
if(status!=NV_ENC_SUCCESS)
{
printf("Error getting the preset configuration");
return EXIT_FAILURE;
}
//getting the input format which is YUV for our solution
//retrieving the input formats
uint32_t inputFormatCount;
status = nvencFunctionList.nvEncGetInputFormatCount(encoder,encodeGuid,&inputFormatCount);
if(status!=NV_ENC_SUCCESS)
{
printf("Error getting the input buffer format count");
return EXIT_FAILURE;
}
//allocating a buffer for the input formats
NV_ENC_BUFFER_FORMAT* encodeBufferFormats = malloc(inputFormatCount*sizeof(NV_ENC_BUFFER_FORMAT));
//holding the size of the supported formats
uint32_t supportedInputFormats;
//filling up the above buffer
status = nvencFunctionList.nvEncGetInputFormats(encoder,encodeGuid, encodeBufferFormats,inputFormatCount,&supportedInputFormats);
if(status!=NV_ENC_SUCCESS)
{
printf("Error getting the supported input formats");
return EXIT_FAILURE;
}
//selecting a buffer format
//NV_ENC_BUFFER_FORMAT bufferFormat = NV_ENC_BUFFER_FORMAT_IYUV;
//querying the underlying hardware encoder capabilities can be done using the API
//initializing the hardware encoder
NV_ENC_INITIALIZE_PARAMS initializationParams;
/* NV_ENC_PRESET_CONFIG presetConfig;
presetConfig.version = NV_ENC_PRESET_CONFIG_VER;
NV_ENC_CONFIG encoderConfig;
encoderConfig.version = NV_ENC_CONFIG_VER;
encoderConfig.profileGUID=NV_ENC_CODEC_PROFILE_AUTOSELECT_GUID;
presetConfig.presetCfg=encoderConfig;
status = nvencFunctionList.nvEncGetEncodePresetConfig(encoder,encodeGuid,presetGuid,&presetConfig);*/
if(status!=NV_ENC_SUCCESS)
{
printf("\nUnable to get preset config %u",status);
return EXIT_FAILURE;
}
//initializing the encoder params
initializationParams.version = NV_ENC_INITIALIZE_PARAMS_VER;
initializationParams.encodeConfig = &encPresetConfig.presetCfg;
initializationParams.encodeGUID = encodeGuid;
initializationParams.presetGUID=presetGuid;
initializationParams.encodeWidth = 600;
initializationParams.encodeHeight = 600;
initializationParams.frameRateNum = 60;
initializationParams.frameRateDen = 1;
initializationParams.enablePTD=1;
status = nvencFunctionList.nvEncInitializeEncoder(encoder,&initializationParams);
if(status!=NV_ENC_SUCCESS)
{
printf("Error initializing encoder, %u",status);
return EXIT_FAILURE;
}
//allocating the nvidia input output buffers
int num_macroblocks = ((600 + 15) / 16) * ((600 + 15) / 16);
int max_surfaces = (num_macroblocks >= 8160) ? 16 : 32;
NV_ENC_INPUT_PTR* inputBuffers = (NV_ENC_INPUT_PTR*) malloc(max_surfaces * sizeof(NV_ENC_INPUT_PTR));
NV_ENC_OUTPUT_PTR* outputBuffers = (NV_ENC_OUTPUT_PTR*) malloc(max_surfaces * sizeof(NV_ENC_OUTPUT_PTR));
//CUdeviceptr* cudaDevicePtrs = (CUdeviceptr*) malloc(max_surfaces*sizeof(CUdeviceptr));
for(int i=0;i<max_surfaces;i++)
{
//creating the input buffer
NV_ENC_CREATE_INPUT_BUFFER inputBufferParams;
inputBufferParams.version = NV_ENC_CREATE_INPUT_BUFFER_VER;
inputBufferParams.width =(600 + 31) & ~31;
inputBufferParams.height=(600 + 31) & ~31;
inputBufferParams.bufferFmt = NV_ENC_BUFFER_FORMAT_IYUV;
inputBufferParams.reserved=0;
memset(inputBufferParams.reserved1,0,57*sizeof(uint32_t));
memset(inputBufferParams.reserved2,NULL,63*sizeof(void*));
status = nvencFunctionList.nvEncCreateInputBuffer(encoder,&inputBufferParams);
if(status!=NV_ENC_SUCCESS)
{
printf("Input buffer could not be created");
return EXIT_FAILURE;
}
//creating the output buffer
NV_ENC_CREATE_BITSTREAM_BUFFER bitstreamBufferParams;
bitstreamBufferParams.version = NV_ENC_CREATE_BITSTREAM_BUFFER_VER;
bitstreamBufferParams.reserved=0;
memset(bitstreamBufferParams.reserved1,0,58*sizeof(uint32_t));
memset(bitstreamBufferParams.reserved2,NULL,64*sizeof(void*));
//This line is giving me the error, NV_ENC_ERR_UNIMPLEMENTED
status = nvencFunctionList.nvEncCreateBitstreamBuffer(encoder,&bitstreamBufferParams);
if(status!=NV_ENC_SUCCESS)
{
printf("Bitstream could not be created %i",status);
return EXIT_FAILURE;
}
/* CUresult cuResult = cuGetDevicePtr(&cudaDevicePtrs[i],600);
if(cuResult!=CUDA_SUCCESS)
{
printf("Could not allocate cuda device pointer, %u\n",cuResult);
return EXIT_FAILURE;
}*/
inputBuffers[i]=inputBufferParams.inputBuffer;
outputBuffers[i]=bitstreamBufferParams.bitstreamBuffer;
}
/*
* Resolve the 'NvFBCCreateInstance' symbol that will allow us to get
* the API function pointers.
*/
NvFBCCreateInstance_ptr =
(PNVFBCCREATEINSTANCE) dlsym(libNVFBC, "NvFBCCreateInstance");
if (NvFBCCreateInstance_ptr == NULL) {
fprintf(stderr, "Unable to resolve symbol 'NvFBCCreateInstance'\n");
return EXIT_FAILURE;
}
/*
* Create an NvFBC instance.
*
* API function pointers are accessible through pFn.
*/
memset(&pFn, 0, sizeof(pFn));
pFn.dwVersion = NVFBC_VERSION;
fbcStatus = NvFBCCreateInstance_ptr(&pFn);
if (fbcStatus != NVFBC_SUCCESS) {
fprintf(stderr, "Unable to create NvFBC instance (status: %d)\n",
fbcStatus);
return EXIT_FAILURE;
}
/*
* Create a session handle that is used to identify the client.
*/
memset(&createHandleParams, 0, sizeof(createHandleParams));
createHandleParams.dwVersion = NVFBC_CREATE_HANDLE_PARAMS_VER;
fbcStatus = pFn.nvFBCCreateHandle(&fbcHandle, &createHandleParams);
if (fbcStatus != NVFBC_SUCCESS) {
fprintf(stderr, "%s\n", pFn.nvFBCGetLastErrorStr(fbcHandle));
return EXIT_FAILURE;
}
/*
* Get information about the state of the display driver.
*
* This call is optional but helps the application decide what it should
* do.
*/
memset(&statusParams, 0, sizeof(statusParams));
statusParams.dwVersion = NVFBC_GET_STATUS_PARAMS_VER;
fbcStatus = pFn.nvFBCGetStatus(fbcHandle, &statusParams);
if (fbcStatus != NVFBC_SUCCESS) {
fprintf(stderr, "%s\n", pFn.nvFBCGetLastErrorStr(fbcHandle));
return EXIT_FAILURE;
}
if (printStatusOnly) {
NvFBCUtilsPrintStatus(&statusParams);
return EXIT_SUCCESS;
}
if (statusParams.bCanCreateNow == NVFBC_FALSE) {
fprintf(stderr, "It is not possible to create a capture session "
"on this system.\n");
return EXIT_FAILURE;
}
if (trackingType == NVFBC_TRACKING_OUTPUT) {
if (!statusParams.bXRandRAvailable) {
fprintf(stderr, "The XRandR extension is not available.\n");
fprintf(stderr, "It is therefore not possible to track an RandR output.\n");
return EXIT_FAILURE;
}
outputId = NvFBCUtilsGetOutputId(statusParams.outputs,
statusParams.dwOutputNum,
outputName);
if (outputId == 0) {
fprintf(stderr, "RandR output '%s' not found.\n", outputName);
return EXIT_FAILURE;
}
}
/*
* Create a capture session.
*/
printf("Creating an asynchronous capture session of %u frames with 1 "
"second internal between captures.\n", nFrames);
memset(&createCaptureParams, 0, sizeof(createCaptureParams));
createCaptureParams.dwVersion = NVFBC_CREATE_CAPTURE_SESSION_PARAMS_VER;
createCaptureParams.eCaptureType = NVFBC_CAPTURE_SHARED_CUDA;
createCaptureParams.bWithCursor = NVFBC_TRUE;
createCaptureParams.frameSize = frameSize;
createCaptureParams.eTrackingType = trackingType;
if (trackingType == NVFBC_TRACKING_OUTPUT) {
createCaptureParams.dwOutputId = outputId;
}
fbcStatus = pFn.nvFBCCreateCaptureSession(fbcHandle, &createCaptureParams);
if (fbcStatus != NVFBC_SUCCESS) {
fprintf(stderr, "%s\n", pFn.nvFBCGetLastErrorStr(fbcHandle));
return EXIT_FAILURE;
}
/*
* Set up the capture session.
*/
memset(&setupParams, 0, sizeof(setupParams));
setupParams.dwVersion = NVFBC_TOCUDA_SETUP_PARAMS_VER;
setupParams.eBufferFormat = bufferFormat;
fbcStatus = pFn.nvFBCToCudaSetUp(fbcHandle, &setupParams);
if (fbcStatus != NVFBC_SUCCESS) {
fprintf(stderr, "%s\n", pFn.nvFBCGetLastErrorStr(fbcHandle));
return EXIT_FAILURE;
}
/*
* We are now ready to start grabbing frames.
*/
fd = fopen(videoFile, "wb");
if (fd == NULL)
{
fprintf(stderr,"Unable to create '%s'\n", videoFile);
return EXIT_FAILURE;
}
//configuring the per frame encode parameters
NV_ENC_PIC_PARAMS encoderPicParams;
//Codec specific params
/* NV_ENC_CODEC_PIC_PARAMS h264Params;
h264Params.h264PicParams.reserved3=0;
h264Params.h264PicParams.reservedBitFields=0;
h264Params.h264PicParams.ltrUsageMode=0;
memset(h264Params.h264PicParams.reserved,0,242*sizeof(uint32_t));
memset(h264Params.h264PicParams.reserved2,NULL,61*sizeof(void*));*/
encoderPicParams.version = NV_ENC_PIC_PARAMS_VER;
//encoderPicParams.codecPicParams=h264Params;
encoderPicParams.inputWidth=(600+ 31) & ~31;
encoderPicParams.inputHeight=(600+ 31) & ~31;
encoderPicParams.inputPitch = (600+ 31) & ~31;
//encoderPicParams.encodePicFlags=NV_ENC_PIC_FLAG_FORCEIDR;
//encoderPicParams.bufferFmt=NV_ENC_BUFFER_FORMAT_IYUV;
encoderPicParams.pictureStruct=NV_ENC_PIC_STRUCT_FRAME;
for (i = 0; i < 60; i++) {
static CUdeviceptr cuDevicePtr;
static unsigned char *frame = NULL;
static uint32_t lastByteSize = 0;
//char filename[64];
int index=i;
if(i>=max_surfaces)
{
index=i%max_surfaces;
}
int res;
uint64_t t1, t2, t1_total, t2_total, t_delta, wait_time_ms;
CUresult cuRes;
NVFBC_TOCUDA_GRAB_FRAME_PARAMS grabParams;
NVFBC_FRAME_GRAB_INFO frameInfo;
t1 = NvFBCUtilsGetTimeInMillis();
t1_total = t1;
memset(&grabParams, 0, sizeof(grabParams));
memset(&frameInfo, 0, sizeof(frameInfo));
grabParams.dwVersion = NVFBC_TOCUDA_GRAB_FRAME_PARAMS_VER;
/*
* Use asynchronous calls.
*
* The application will not wait for a new frame to be ready. It will
* capture a frame that is already available. This might result in
* capturing several times the same frame. This can be detected by
* checking the frameInfo.bIsNewFrame structure member.
*/
grabParams.dwFlags = NVFBC_TOCUDA_GRAB_FLAGS_NOWAIT;
/*
* This structure will contain information about the captured frame.
*/
grabParams.pFrameGrabInfo = &frameInfo;
/*
* The frame will be mapped in video memory through this CUDA
* device pointer.
*/
grabParams.pCUDADeviceBuffer = &cuDevicePtr;
/*
* Capture a frame.
*/
fbcStatus = pFn.nvFBCToCudaGrabFrame(fbcHandle, &grabParams);
if (fbcStatus != NVFBC_SUCCESS) {
fprintf(stderr, "%s\n", pFn.nvFBCGetLastErrorStr(fbcHandle));
return EXIT_FAILURE;
}
t2 = NvFBCUtilsGetTimeInMillis();
/*
* Allocate or re-allocate the destination buffer in system memory
* when necessary.
*
* This is to handle change of resolution.
*/
if (lastByteSize < frameInfo.dwByteSize) {
frame = (unsigned char *)realloc(frame, frameInfo.dwByteSize);
if (frame == NULL) {
fprintf(stderr, "Unable to allocate system memory\n");
return EXIT_FAILURE;
}
printf("Reallocated %u KB of system memory\n",
frameInfo.dwByteSize / 1024);
lastByteSize = frameInfo.dwByteSize;
}
printf("%s id %u grabbed in %llu ms",
(frameInfo.bIsNewFrame ? "New frame" : "Frame"),
frameInfo.dwCurrentFrame,
(unsigned long long) (t2 - t1));
/*
* Download frame from video memory to system memory.
*/
t1 = NvFBCUtilsGetTimeInMillis();
cuRes = cuMemcpyDtoH_v2_ptr((void *) frame, cuDevicePtr,
frameInfo.dwByteSize);
if (cuRes != CUDA_SUCCESS) {
fprintf(stderr, "CUDA memcpy failure (result: %d)\n", cuRes);
return EXIT_FAILURE;
}
//creating the params for the nv enc lock input buffer
//inputBufferParams.pSysMemBuffer=(void*) frame;
//status = nvencFunctionList.nvEncCreateInputBuffer(encoder,&inputBufferParams);
//setting up the input params
NV_ENC_REGISTER_RESOURCE registerResource;
registerResource.version=NV_ENC_REGISTER_RESOURCE_VER;
registerResource.resourceType = NV_ENC_INPUT_RESOURCE_TYPE_CUDADEVICEPTR;
registerResource.width = (600+31)&~31;
registerResource.height=(600+31)&~31;
registerResource.pitch=(600+31)&~31;
registerResource.resourceToRegister = (void*)cuDevicePtr;
registerResource.bufferFormat = NV_ENC_BUFFER_FORMAT_NV12;
status = nvencFunctionList.nvEncRegisterResource(encoder,&registerResource);
if(status!=NV_ENC_SUCCESS)
{
printf("Failed to register resource, %u",status);
return EXIT_FAILURE;
}
//mapping these resources
NV_ENC_MAP_INPUT_RESOURCE mappedResource;
mappedResource.version = NV_ENC_MAP_INPUT_RESOURCE_VER;
mappedResource.registeredResource=registerResource.registeredResource;
status = nvencFunctionList.nvEncMapInputResource(encoder,&mappedResource);
if(status!=NV_ENC_SUCCESS)
{
printf("Could not map resource: %u,",status);
return EXIT_FAILURE;
}
encoderPicParams.outputBitstream=outputBuffers[index];
encoderPicParams.inputBuffer=mappedResource.mappedResource;
encoderPicParams.bufferFmt=mappedResource.mappedBufferFmt;
/* memset(encoderPicParams.reserved1,0,6*sizeof(uint32_t));
memset(encoderPicParams.reserved2,NULL,2*sizeof(void*));
memset(encoderPicParams.reserved3,0,286*sizeof(uint32_t));
memset(encoderPicParams.reserved4,0,60*sizeof(void*));*/
//sending the picture to be encoded
status=nvencFunctionList.nvEncEncodePicture(encoder,&encoderPicParams);
if(status!=NV_ENC_SUCCESS)
{
printf("\n Unable to encode video, Error code: %u",status);
return EXIT_FAILURE;
}
//setting up the lock bitstream params
//sending the data to the output stream
NV_ENC_LOCK_BITSTREAM lockBitstreamParams;
lockBitstreamParams.version=NV_ENC_LOCK_BITSTREAM_VER;
lockBitstreamParams.doNotWait=0;
lockBitstreamParams.outputBitstream=outputBuffers[index];
lockBitstreamParams.reservedBitFields=0;
memset(lockBitstreamParams.reserved,0,219*sizeof(uint32_t));
memset(lockBitstreamParams.reserved1,0,13*sizeof(uint32_t));
memset(lockBitstreamParams.reserved2,NULL,64*sizeof(uint32_t));
status = nvencFunctionList.nvEncLockBitstream(encoder,&lockBitstreamParams);
if(status!=NV_ENC_SUCCESS)
{
printf("error locking bitstream");
return EXIT_FAILURE;
}
//writing the video data to the file
int bytes = fwrite(lockBitstreamParams.bitstreamBufferPtr,1,lockBitstreamParams.bitstreamSizeInBytes ,fd);
printf("\nNumber of bytes: %u\n",bytes);
if(bytes==0)
{
fprintf(stderr, "Unable to write to '%s'\n", videoFile);
return EXIT_FAILURE;
}
status = nvencFunctionList.nvEncUnlockBitstream(encoder,outputBuffers[index]);
status = nvencFunctionList.nvEncUnmapInputResource(encoder,mappedResource.mappedResource);
if(status!=NV_ENC_SUCCESS)
{
printf("failure to map resource, %u",status);
return EXIT_FAILURE;
}
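One thing worth checking, strictly as a hedged guess: a uniformly pink NV12 picture often means the encoder reads the luma/chroma planes with the wrong geometry. The resource is registered with a hard-coded, 32-aligned 600x600, while NvFBC reports the real capture size in the grab info; deriving the registration from that might look like:
// Hypothetical sketch: size the registered resource from what NvFBC actually
// captured instead of hard-coding (600+31)&~31 everywhere.
registerResource.width  = frameInfo.dwWidth;
registerResource.height = frameInfo.dwHeight;
registerResource.pitch  = frameInfo.dwWidth; // NV12 luma pitch for a tightly packed buffer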

c++ videoplayer ffmpeg: get pixel data?

I want to get the pixel data of a frame. I found some code (in its original version it used the old API) and changed some things.
I have this code:
AVFormatContext *pFormatCtx;
pFormatCtx = avformat_alloc_context();
// Open file
if (int err = avformat_open_input(&pFormatCtx, file, NULL, 0) != 0)
{
    exit(2);
}
// Get information about streams
if (avformat_find_stream_info(pFormatCtx, NULL) < 0)
{
    exit(2);
}
// # video stream
int videoStreamIndex = -1;
AVCodecContext *pVideoCodecCtx;
AVCodec *pVideoCodec;
int res = 0;
int width = 0;
int height = 0;
for (unsigned int i = 0; i < pFormatCtx->nb_streams; i++)
{
    if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
    {
        videoStreamIndex = i;
        pVideoCodecCtx = pFormatCtx->streams[i]->codec;
        // Find decoder
        pVideoCodec = avcodec_find_decoder(pVideoCodecCtx->codec_id);
        if (pVideoCodec)
        {
            // Open decoder
            res = !(avcodec_open2(pVideoCodecCtx, pVideoCodec, NULL) < 0);
            width = pVideoCodecCtx->coded_width;
            height = pVideoCodecCtx->coded_height;
        }
        break;
    }
}
// Frame width
width = pFormatCtx->streams[videoStreamIndex]->codec->width;
// Frame height
height = pFormatCtx->streams[videoStreamIndex]->codec->height;
AVPacket packet;
int got_picture_ptr;
AVPacket *avpkt;
AVFrame *pOutFrame;
pOutFrame = av_frame_alloc();
AVFrame *rgbOutFrame = av_frame_alloc();
if (!pOutFrame) {
    fprintf(stderr, "Could not allocate video frame\n");
    exit(1);
}
while (av_read_frame(pFormatCtx, &packet) >= 0)
{
    if (packet.stream_index == videoStreamIndex)
    {
        // Decode packet to frame.
        int videoFrameBytes = avcodec_decode_video2(pVideoCodecCtx, pOutFrame,
                                                    &got_picture_ptr, &packet);
        // Create context
        SwsContext* pImgConvertCtx = sws_getContext(pVideoCodecCtx->width,
                                                    pVideoCodecCtx->height,
                                                    pVideoCodecCtx->pix_fmt,
                                                    pVideoCodecCtx->width, pVideoCodecCtx->height,
                                                    AV_PIX_FMT_YUV420P,
                                                    SWS_BICUBIC, NULL, NULL, NULL);
        // Convert frame
        sws_scale(pImgConvertCtx, pOutFrame->data, pOutFrame->linesize,
                  width, height, rgbOutFrame->data, rgbOutFrame->linesize);
    }
}
I know that the code around SwsContext and sws_scale is wrong, but I wonder where I can find the pixel data of my frame... (and in which format it is stored).
Can someone help me here?
Pixel data is stored in the data field.
According to the documentation:
uint8_t* AVFrame::data[AV_NUM_DATA_POINTERS]
pointer to the picture/channel planes.
Look here for more information.
Generally speaking, your code is a bit misleading and rather buggy. I can point out some drawbacks:
1) You don't need to create a new SwsContext for every incoming video packet. Just create it once, before the while loop.
2) Next, you have an rgbOutFrame, but the SwsContext is created for scaling into the YUV420 pixel format. That looks strange.
3) Besides, avcodec_decode_video2 is invoked, but you never check either the return value or the got_picture_ptr flag. Such practice is really error-prone.
And so on...
Hope it'll help you to improve your program and get the necessary results.
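To illustrate point 2), a minimal sketch of a destination frame that actually matches an RGB conversion (the allocation via av_frame_get_buffer() is an assumption; your program may manage buffers differently):
// Give the destination frame a real buffer and convert to packed RGB24.
AVFrame *rgbFrame = av_frame_alloc();
rgbFrame->format = AV_PIX_FMT_RGB24;
rgbFrame->width  = pVideoCodecCtx->width;
rgbFrame->height = pVideoCodecCtx->height;
av_frame_get_buffer(rgbFrame, 0); // check the return value in real code
SwsContext *sws = sws_getContext(pVideoCodecCtx->width, pVideoCodecCtx->height,
                                 pVideoCodecCtx->pix_fmt,
                                 pVideoCodecCtx->width, pVideoCodecCtx->height,
                                 AV_PIX_FMT_RGB24, SWS_BICUBIC, NULL, NULL, NULL);
sws_scale(sws, pOutFrame->data, pOutFrame->linesize, 0, pVideoCodecCtx->height,
          rgbFrame->data, rgbFrame->linesize);
// rgbFrame->data[0] now holds RGB triplets, one row every rgbFrame->linesize[0] bytes.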

How do I dump the buffer when encoding H264 with FFMPEG?

I'm using a C++ library to write images captured from a webcam to a libx264-encoded mp4 file.
The encoding is working properly, but when it starts it writes 40 frames to the buffer. When I close the file these frames aren't flushed, so about 6 seconds of video are left unwritten (the cam runs at about 6 fps).
So I'm calling:
out_size = libffmpeg::avcodec_encode_video( codecContext, data->VideoOutputBuffer,
                                            data->VideoOutputBufferSize, data->VideoFrame );
// if zero size, it means the image was buffered
if ( out_size > 0 )
{
    //... write to file
}
I can't see a way of accessing the images that are left in the buffer. Any ideas?
I've got this working using the following code to flush the buffer. It seems I was searching for the wrong term - it should have been "delayed frames"...
void VideoFileWriter::Flush(void)
{
    if ( data != nullptr )
    {
        int out_size = 0;
        int ret = 0;
        libffmpeg::AVCodecContext* c = data->VideoStream->codec;
        /* get the delayed frames */
        while (1) {
            libffmpeg::AVPacket packet;
            libffmpeg::av_init_packet(&packet);
            out_size = libffmpeg::avcodec_encode_video(c, data->VideoOutputBuffer, data->VideoOutputBufferSize, NULL);
            if (out_size < 0) {
                //fprintf(stderr, "Error encoding delayed frame %d\n", out_size);
                break;
            }
            if (out_size == 0) {
                break;
            }
            if (c->coded_frame->pts != AV_NOPTS_VALUE) {
                packet.pts = av_rescale_q(c->coded_frame->pts,
                                          c->time_base,
                                          data->VideoStream->time_base);
                //fprintf(stderr, "Video Frame PTS: %d\n", (int)packet.pts);
            } else {
                //fprintf(stderr, "Video Frame PTS: not set\n");
            }
            if (c->coded_frame->key_frame) {
                packet.flags |= AV_PKT_FLAG_KEY;
            }
            packet.stream_index = data->VideoStream->index;
            packet.data = data->VideoOutputBuffer;
            packet.size = out_size;
            ret = libffmpeg::av_interleaved_write_frame( data->FormatContext, &packet );
            if (ret != 0) {
                //fprintf(stderr, "Error writing delayed frame %d\n", ret);
                break;
            }
        }
        libffmpeg::avcodec_flush_buffers(data->VideoStream->codec);
    }
}
Here is a tutorial on ffmpeg with avcodec which states that avcodec uses some internal buffers that need to be flushed. There is also some code showing how the flushing of these buffers is done ("Flushing our buffers").
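For readers on current FFmpeg: avcodec_encode_video() has long been removed, and with the send/receive API the same drain is triggered by sending a NULL frame; a minimal sketch:
// Hypothetical sketch: drain delayed frames with the modern send/receive API.
avcodec_send_frame(c, NULL);                 // enter draining mode
AVPacket *pkt = av_packet_alloc();
while (avcodec_receive_packet(c, pkt) == 0) {
    // rescale pkt timestamps and write it with av_interleaved_write_frame() as usual
    av_packet_unref(pkt);
}
av_packet_free(&pkt);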