Copying a decoded FFmpeg AVFrame - C++

I've been trying to copy an AVFrame, just like what was answered in "ffmpeg: make a copy from a decoded frame (AVFrame)". However, I can't seem to get a successful (non-negative) return code from av_frame_copy().
Here is basically what I'm doing:
AVFrame *copyFrame = NULL;
copyFrame = av_frame_alloc();
int return_code = av_frame_copy(copyFrame, originalFrame);
if(return_code < 0){
fprintf(stderr, "av_frame_copy failed with return code %d\n", return_code);
return(1);
}
If it helps, the return code I get from av_frame_copy is -22.

If you read the documentation for av_frame_copy, it says: "This function does not allocate anything, dst must be already initialized and allocated with the same parameters as src."
av_frame_alloc doesn't do anything other than allocate the AVFrame struct itself and initialize its fields to default values. Most importantly, it doesn't allocate buffers for the frame data or prepare the frame to be used. av_frame_copy is failing with -22, which is AVERROR(EINVAL), because the destination frame doesn't have a pixel format set or buffers allocated.
If you want to clone a frame (by incrementing its reference counter, not creating a deep copy) you can use av_frame_clone or av_frame_ref.
If you want to move the frame you can use av_frame_move_ref.
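For example, a minimal sketch of the reference-counted route (assuming originalFrame came from a decoder and is reference-counted):
AVFrame *ref = av_frame_clone(originalFrame); //new AVFrame referring to the same data buffers
if (ref == NULL) {
    //allocation or reference failure
}
//... use ref ...
av_frame_free(&ref); //drops this reference; the data lives until the last reference is gone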
But you probably want to do a proper deep copy. In that case, you can look at the source code of av_frame_make_writable. This function makes a deep copy of the frame if it isn't writable, so we can use the same logic to make a deep copy of the frame here:
AVFrame *copyFrame = av_frame_alloc();
//match the source frame's parameters before allocating buffers
copyFrame->format = frame->format;
copyFrame->width = frame->width;
copyFrame->height = frame->height;
//audio fields; harmless to copy for a video frame
copyFrame->channels = frame->channels;
copyFrame->channel_layout = frame->channel_layout;
copyFrame->nb_samples = frame->nb_samples;
//allocate the data buffers (32-byte aligned), then deep-copy the data and the metadata
av_frame_get_buffer(copyFrame, 32);
av_frame_copy(copyFrame, frame);
av_frame_copy_props(copyFrame, frame);
Note that I haven't checked the return values of the functions I've called. You should do that in your real code; I omitted it here for brevity.
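For completeness, the same sketch with the checks spelled out (still assuming frame is the decoded source frame):
AVFrame *copyFrame = av_frame_alloc();
if (copyFrame == NULL)
    return AVERROR(ENOMEM);
copyFrame->format = frame->format;
copyFrame->width = frame->width;
copyFrame->height = frame->height;
copyFrame->channels = frame->channels;
copyFrame->channel_layout = frame->channel_layout;
copyFrame->nb_samples = frame->nb_samples;
int ret = av_frame_get_buffer(copyFrame, 32);
if (ret >= 0)
    ret = av_frame_copy(copyFrame, frame);
if (ret >= 0)
    ret = av_frame_copy_props(copyFrame, frame);
if (ret < 0)
    av_frame_free(&copyFrame); //clean up on any failure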

I had an AVFrame * on the GPU. This worked for me:
int ret;
AVFrame *dst = av_frame_alloc();
//copy the whole struct first; this also copies the data/extended_data pointers
memcpy(dst, src, sizeof(AVFrame));
dst->format = src->format;
dst->width = src->width;
dst->height = src->height;
dst->channels = src->channels;
dst->channel_layout = src->channel_layout;
dst->nb_samples = src->nb_samples;
//note: only the pointers are duplicated here, so dst aliases the same
//underlying (GPU) buffers instead of owning a deep copy
dst->extended_data = src->extended_data;
memcpy(dst->data, src->data, sizeof(src->data));
ret = av_frame_copy_props(dst, src);
if (ret < 0) { av_frame_unref(dst); }
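Because only the pointers are copied, dst aliases the source frame's buffers rather than owning an independent copy. If you actually need the pixels in system memory (for instance when src is in a hardware pixel format such as CUDA or VAAPI frames), av_hwframe_transfer_data is the usual route; a minimal sketch, assuming src is a valid hardware frame:
AVFrame *sw = av_frame_alloc();
if (av_hwframe_transfer_data(sw, src, 0) < 0) { //download the GPU surface into system memory
    av_frame_free(&sw);
} else {
    av_frame_copy_props(sw, src); //carry over timestamps and metadata
}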

AVFrame *copyFrame = av_frame_alloc();
*copyFrame = *inAVFrame; //shallow struct copy so the parameters match the source
int iRet = av_frame_copy(copyFrame, inAVFrame);
if (iRet == 0) {
    //av_log(NULL, AV_LOG_INFO, "Ok");
} else {
    //av_log(NULL, AV_LOG_INFO, "Error: %s\n", av_err2str(iRet));
}

Related

GL Screenshot Breaks on viewport resize…sometimes

I'm developing a plugin for SIMDIS (basically military google earth), written in C++ using VS 2012. It's a pretty nifty little thing to auto-plot points, and one of its functions is to take a series of screenshots of the view-port and save the images off so they can be used/processed somewhere else. This works fine too... until you re-size the view-port one too many times. Re-sizing is done by clicking the corner of the window and dragging it bigger or smaller, and the program may launch in full screen or windowed mode; either way it works fine the first few sets... or as long as the window is not re-sized.
When it breaks, the program still marches happily along, creating the files and filling them with data at what seems to be an appropriate size for whatever resolution image I'm trying to generate, but the format becomes no good. It will still be a *.bmp, but Windows stops being able to understand it. No errors are thrown though (I think; I'm not catching any GL errors, if that's possible).
I can’t get it to consistently happen with a specific number of actions, but it seems to start failing after 3-7 view-port re-sizes. I don’t know if this is a problem with my screenshot code, an issue with the SIMDIS program or plugin, a GL issue, or what. I’ve tested it on multiple machines.
Has anyone run into this problem before? Is there something specific I should be doing that I’m not? Is this a problem native to the parent program (SIMDIS), or something I can work with/around with GL commands I don’t know about?
Screenshot code follows:
#include "TakeScreenshot.h" //has "#include <gl/GL.h>" etc...
TakeScreenshot::TakeScreenshot()
{
}
std::vector<int> * TakeScreenshot::TakeAScreenshotBMP(const char* filename)
{
    //std::cout << "Screenshot! ";
    std::vector<int> * returnVec = new std::vector<int>();
    int VPort[4] = {0,0,0,0};
    int FSize = 0;
    int PackStore = 0;

    //get GL viewport dimensions: x,y,w,h into VPort
    glGetIntegerv(GL_VIEWPORT, VPort);
    //compute frame buffer size, RGB
    FSize = VPort[2]*VPort[3]*3;
    unsigned char PStore[8294400]; //fixed-size buffer; must hold at least VPort[2]*VPort[3]*3 bytes
    //store settings
    glGetIntegerv(GL_PACK_ALIGNMENT, &PackStore);
    //pack rows tightly, byte-aligned
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    //read the GL buffer into our buffer
    glReadPixels(VPort[0], VPort[1], VPort[2], VPort[3], GL_RGB, GL_UNSIGNED_BYTE, &PStore);
    //restore settings
    glPixelStorei(GL_PACK_ALIGNMENT, PackStore);

    ///
    //set up file info
    ///
    BITMAPINFOHEADER BMIH; //info header
    BMIH.biSize = sizeof(BITMAPINFOHEADER);
    BMIH.biSizeImage = VPort[2] * VPort[3] * 3;
    BMIH.biWidth = VPort[2];
    BMIH.biHeight = VPort[3];
    BMIH.biPlanes = 1;
    BMIH.biBitCount = 24;
    BMIH.biCompression = BI_RGB;

    BITMAPFILEHEADER bmfh; //file header
    int nBitsOffset = sizeof(BITMAPFILEHEADER) + BMIH.biSize;
    LONG lImageSize = BMIH.biSizeImage;
    LONG lFileSize = nBitsOffset + lImageSize;
    bmfh.bfType = 'B' + ('M' << 8);
    bmfh.bfOffBits = nBitsOffset;
    bmfh.bfSize = lFileSize;
    bmfh.bfReserved1 = bmfh.bfReserved2 = 0;

    //swap R and B values because GL has them backwards for the BMP format
    unsigned char SwapByte;
    for(int loop = 0; loop < FSize; loop += 3)
    {
        SwapByte = PStore[loop];
        PStore[loop] = PStore[loop+2];
        PStore[loop+2] = SwapByte;
    }

    ///
    // File writing section
    ///
    FILE *pFile = fopen(filename, "wb");
    //if something borked
    if(pFile == NULL)
    {
        std::cout << "TakeScreenshot::TakeAScreenshotBMP>> Error; was not able to create file (Permissions?)" << std::endl;
        returnVec->push_back(-1);
        returnVec->push_back(-1);
        return returnVec; //exit
    }
    UINT nWrittenFileHeaderSize = fwrite(&bmfh, 1, sizeof(BITMAPFILEHEADER), pFile);
    UINT nWrittenInfoHeaderSize = fwrite(&BMIH, 1, sizeof(BITMAPINFOHEADER), pFile);
    UINT nWrittenDIBDataSize = fwrite(&PStore, 1, lImageSize, pFile);
    fclose(pFile);

    //some return data for processing later
    returnVec->push_back(VPort[2]);
    returnVec->push_back(VPort[3]);
    return returnVec;
}
TakeScreenshot::~TakeScreenshot(void)
{
}
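Two things stand out in the code above, though neither is confirmed by the question. First, PStore is a fixed 8294400-byte stack array; any viewport where VPort[2]*VPort[3]*3 exceeds that overflows the buffer, so sizing it dynamically (e.g. std::vector<unsigned char> PStore(FSize); and passing PStore.data() to glReadPixels) is safer. Second, BMP requires every pixel row to be padded to a multiple of 4 bytes, while GL_PACK_ALIGNMENT of 1 yields tightly packed rows; whenever a resize lands on a width where width*3 is not a multiple of 4, the file becomes malformed, which would match the breaks-only-after-a-resize symptom. A minimal sketch of a padded write, reusing the names above:
int rowBytes = VPort[2] * 3;
int padBytes = (4 - (rowBytes % 4)) % 4; //BMP rows are 4-byte aligned
unsigned char pad[3] = {0, 0, 0};
BMIH.biSizeImage = (rowBytes + padBytes) * VPort[3]; //padded image size
//write bmfh and BMIH as before, then write one padded row at a time:
for(int row = 0; row < VPort[3]; ++row)
{
    fwrite(&PStore[row * rowBytes], 1, rowBytes, pFile);
    if(padBytes) fwrite(pad, 1, padBytes, pFile);
}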

FFmpeg audio encoder new encode function

I would like to update an AV audio encoder from the deprecated function avcodec_encode_audio to avcodec_encode_audio2, without modifying the structure of the existing encoder:
outBytes = avcodec_encode_audio(m_handle, dst, sizeBytes, (const short int*)m_samBuf);
where:
1) m_handle is the AVCodecContext
2) dst is a uint8_t * destination buffer
3) sizeBytes is a uint32_t with the size of the destination buffer
4) m_samBuf is a void * to the input chunk of data to encode (cast to const short int*)
Is there a simple way to do it?
I'm trying with:
int gotPack = 1;
memset(&m_Packet, 0, sizeof(m_Packet));
m_Frame = av_frame_alloc();
av_init_packet(&m_Packet);
m_Packet.data = dst;
m_Packet.size = sizeBytes;
uint8_t* buffer = (uint8_t*)m_samBuf;
m_Frame->nb_samples = m_handle->frame_size;
avcodec_fill_audio_frame(m_Frame, m_handle->channels, m_handle->sample_fmt, buffer, m_FrameSize, 1);
outBytes = avcodec_encode_audio2(m_handle, &m_Packet, m_Frame, &gotPack);
char error[256];
av_strerror(outBytes, error, 256);
if (outBytes < 0) {
    m_server->log(1, 1, "Input data: %d, encode function call error: %s\n", gotPack, error);
    return AUDIOWRAPPER_ERROR;
}
av_frame_free(&m_Frame);
It compiles, but it does not encode anything; I don't hear audio at the output when I pipe the output stream to mplayer, which was working prior to the upgrade.
What am I doing wrong?
The encoder accepts only two sample formats:
AV_SAMPLE_FMT_S16, ///< signed 16 bits
AV_SAMPLE_FMT_FLT, ///< float
Here is how the buffer is allocated:
free(m_samBuf);
int bps = 2;
if(m_handle->codec->sample_fmts[0] == AV_SAMPLE_FMT_FLT) {
    bps = 4;
}
m_FrameSize = bps * m_handle->frame_size * m_handle->channels;
m_samBuf = malloc(m_FrameSize);
m_numSam = 0;
avcodec_fill_audio_frame should get you there:
memset(&m_Packet, 0, sizeof(m_Packet));
av_init_packet(&m_Packet);
m_Packet.data = dst;
m_Packet.size = sizeBytes;
m_Frame = av_frame_alloc();
//the number of samples (per channel) this frame represents; for a
//fixed-frame-size codec this is m_handle->frame_size, as in your code above
m_Frame->nb_samples = m_handle->frame_size;
avcodec_fill_audio_frame(m_Frame, m_handle->channels, m_handle->sample_fmt,
                         (const uint8_t*)m_samBuf,
                         m_FrameSize, 1);
int gotPack = 1;
avcodec_encode_audio2(m_handle, &m_Packet, m_Frame, &gotPack);
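One caveat worth adding (a general property of the API, not something shown in the question): encoders with delay buffer input internally, so the first few calls can legitimately set the got-packet flag to 0 and produce no output. Only treat m_Packet as valid when the flag is set:
if (gotPack) {
    //m_Packet.data / m_Packet.size now describe one encoded packet in dst
}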

FFmpeg decoder yuv420p

I am working on a yuv420p video player with ffmpeg, but it's not working and I can't find out why. I've spent the whole week on it...
So I have a test which just decodes some frames and reads them, but the output always differs, and it's really weird.
I use a video (mp4, yuv420p) which colors one more pixel black in each frame:
For the video, put http://sendvid.com/b1sgf8r1 on a website like http://www.telechargerunevideo.com/en/
VideoContext is just a little struct:
struct VideoContext {
    unsigned int currentFrame;
    std::size_t size;
    int width;
    int height;
    bool pause;
    AVFormatContext* formatCtx;
    AVCodecContext* codecCtxOrig;
    AVCodecContext* codecCtx;
    int streamIndex;
};
So I have a function to count the number of black pixels:
std::size_t checkFrameNb(const AVFrame* frame) {
    std::size_t nb = 0;
    for (int y = 0; y < frame->height; ++y) {
        for (int x = 0; x < frame->width; ++x) {
            if (frame->data[0][(y * frame->linesize[0]) + x] == BLACK_FRAME.y
                && frame->data[1][(y / 2 * frame->linesize[1]) + x / 2] == BLACK_FRAME.u
                && frame->data[2][(y / 2 * frame->linesize[2]) + x / 2] == BLACK_FRAME.v)
                ++nb;
        }
    }
    return nb;
}
And this is how I decode one frame:
const AVFrame* VideoDecoder::nextFrame(entities::VideoContext& context) {
    int frameFinished;
    AVPacket packet;

    // Allocate video frame
    AVFrame* frame = av_frame_alloc();
    if(frame == nullptr)
        throw;

    // Initialize frame->linesize
    avpicture_fill((AVPicture*)frame, nullptr, AV_PIX_FMT_YUV420P, context.width, context.height);

    while(av_read_frame(context.formatCtx, &packet) >= 0) {
        // Is this a packet from the video stream?
        if(packet.stream_index == context.streamIndex) {
            // Decode video frame
            avcodec_decode_video2(context.codecCtx, frame, &frameFinished, &packet);
            // Did we get a video frame?
            if(frameFinished) {
                // Free the packet that was allocated by av_read_frame
                av_free_packet(&packet);
                ++context.currentFrame;
                return frame;
            }
        }
    }
    // Free the packet that was allocated by av_read_frame
    av_free_packet(&packet);
    throw core::GlobalException("nextFrame", "Frame decode failed");
}
Is there already something wrong?
Maybe the context initialization will be useful:
entities::VideoContext VideoLoader::loadVideoContext(const char* file,
                                                     const int width,
                                                     const int height) {
    entities::VideoContext context;

    // Register all formats and codecs
    av_register_all();
    context.formatCtx = avformat_alloc_context();

    // Open video file
    if(avformat_open_input(&context.formatCtx, file, nullptr, 0) != 0)
        throw; // Couldn't open file

    // Retrieve stream information
    if(avformat_find_stream_info(context.formatCtx, nullptr) > 0)
        throw; // Couldn't find stream information

    // Dump information about file onto standard error
    //av_dump_format(m_formatCtx, 0, file, 1);

    // Find the first video stream because we don't need more
    for(unsigned int i = 0; i < context.formatCtx->nb_streams; ++i)
        if(context.formatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
            context.streamIndex = i;
            context.codecCtx = context.formatCtx->streams[i]->codec;
            break;
        }
    if(context.codecCtx == nullptr)
        throw; // Didn't find a video stream

    // Find the decoder for the video stream
    AVCodec* codec = avcodec_find_decoder(context.codecCtx->codec_id);
    if(codec == nullptr)
        throw; // Codec not found

    // Copy context
    if ((context.codecCtxOrig = avcodec_alloc_context3(codec)) == nullptr)
        throw;
    if(avcodec_copy_context(context.codecCtxOrig, context.codecCtx) != 0)
        throw; // Error copying codec context

    // Open codec
    if(avcodec_open2(context.codecCtx, codec, nullptr) < 0)
        throw; // Could not open codec

    context.currentFrame = 0;
    decoder::VideoDecoder::setVideoSize(context);
    context.pause = false;
    context.width = width;
    context.height = height;
    return std::move(context);
}
I know it's not a small piece of code; if you have any idea how to make the example more brief, go ahead.
And in case someone has an idea about this issue, here is my output:
9 - 10 - 12 - 4 - 10 - 14 - 11 - 8 - 9 - 10
But I want:
1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9 - 10
PS: the code that gets the fps and video size is copy-pasted from OpenCV.

Memory leak in jpeg compression. Bug or my mistake?

I wrote an npm module for capturing webcam input on Linux. The captured frame in yuyv format is converted to rgb24 and afterwards compressed to a jpeg image. In the jpeg compression there appears to be a memory leak, so the memory usage increases continuously.
Image* rgb24_to_jpeg(Image *img, Image *jpeg) { // img = RGB24
    jpeg_compress_struct cinfo;
    jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);
    jerr.trace_level = 10;
    jpeg_create_compress(&cinfo);

    unsigned char *imgd = new unsigned char[img->size];
    long unsigned int size = 0;
    jpeg_mem_dest(&cinfo, &imgd, &size);

    cinfo.image_width = img->width;
    cinfo.image_height = img->height;
    cinfo.input_components = 3;
    cinfo.in_color_space = JCS_RGB;
    jpeg_set_defaults(&cinfo);
    jpeg_set_quality(&cinfo, 100, true);
    jpeg_start_compress(&cinfo, true);

    int row_stride = cinfo.image_width * 3;
    JSAMPROW row_pointer[1];
    while (cinfo.next_scanline < cinfo.image_height) {
        row_pointer[0] = &img->data[cinfo.next_scanline * row_stride];
        jpeg_write_scanlines(&cinfo, row_pointer, 1);
    }
    jpeg_finish_compress(&cinfo);
    jpeg_destroy_compress(&cinfo);

    // size += 512; // TODO: actual value to expand jpeg buffer... JPEG header?
    if (jpeg->data == NULL) {
        jpeg->data = (unsigned char *) malloc(size);
    } else {
        jpeg->data = (unsigned char *) realloc(jpeg->data, size);
    }
    memcpy(jpeg->data, imgd, size);
    delete[] imgd;
    jpeg->size = size;
    return jpeg;
}
The rgb24 and jpeg buffers are reallocated on every cycle, so it looks like the leak is inside the libjpeg layer. Is this true, or did I simply make a mistake somewhere in the code?
Note: the compressed image shall not be saved as a file, since the data might be used for live streaming.
You are using jpeg_mem_dest in the wrong way: the second parameter is a pointer to pointer to char because it is actually set by the library, and you must free it after you are done. Here you are initializing it with a pointer of your own; it gets overwritten by the library's allocation, so your original memory region is leaked, and you then release the library's buffer with the wrong deallocator (delete[] instead of free).
This is how you should change your function:
Image* rgb24_to_jpeg(Image *img, Image *jpeg) { // img = RGB24
    jpeg_compress_struct cinfo;
    jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);
    jerr.trace_level = 10;
    jpeg_create_compress(&cinfo);

    unsigned char *imgd = 0;
    long unsigned int size = 0;

    cinfo.image_width = img->width;
    cinfo.image_height = img->height;
    cinfo.input_components = 3;
    cinfo.in_color_space = JCS_RGB;
    jpeg_set_defaults(&cinfo);
    jpeg_set_quality(&cinfo, 100, true);
    jpeg_mem_dest(&cinfo, &imgd, &size); // imgd will be set by the library
    jpeg_start_compress(&cinfo, true);

    int row_stride = cinfo.image_width * 3;
    JSAMPROW row_pointer[1];
    while (cinfo.next_scanline < cinfo.image_height) {
        row_pointer[0] = &img->data[cinfo.next_scanline * row_stride];
        jpeg_write_scanlines(&cinfo, row_pointer, 1);
    }
    jpeg_finish_compress(&cinfo);
    jpeg_destroy_compress(&cinfo);

    // size += 512; // TODO: actual value to expand jpeg buffer... JPEG header?
    if (jpeg->data == NULL) {
        jpeg->data = (unsigned char *) malloc(size);
    } else if (jpeg->size != size) {
        jpeg->data = (unsigned char *) realloc(jpeg->data, size);
    }
    memcpy(jpeg->data, imgd, size);
    free(imgd); // dispose of imgd when you are done
    jpeg->size = size;
    return jpeg;
}
This snippet from jpeg_mem_dest explains the memory management:
if (*outbuffer == NULL || *outsize == 0) {
    /* Allocate initial buffer */
    dest->newbuffer = *outbuffer = (unsigned char *) malloc(OUTPUT_BUF_SIZE);
    if (dest->newbuffer == NULL)
        ERREXIT1(cinfo, JERR_OUT_OF_MEMORY, 10);
    *outsize = OUTPUT_BUF_SIZE;
}
So, if you pass a null pointer or a zero-sized buffer, the library will perform the allocation for you. Thus, another approach is to set the size correctly up front, in which case the library keeps using the originally supplied pointer (the next answer demonstrates this).
In my case the previous answer did not solve the issue; there was no way to free the image memory pointer. The only way was to reserve enough memory for the image up front: that way the library does not allocate anything itself, and I keep control over the memory, which lives on my application's heap rather than the library's. Here is my example:
//previous code...
struct jpeg_compress_struct cinfo;

//reserve enough memory for the image (width * height)
unsigned char* _image = (unsigned char*)malloc(Width * Height);
//put the reserved size into _imageSize
_imageSize = Width * Height;

//call the function like this:
jpeg_mem_dest(&cinfo, &_image, &_imageSize);
................
//release the reserved memory
free(_image);
NOTE: if you set _imageSize = 0, the library will assume that you have not reserved memory and will allocate it itself, so you need to put into _imageSize the number of bytes reserved in _image.
That way you have total control over the reserved memory, and you can release it whenever you want in your software.

FFMPEG with QT memory leak

Let me start with a code clip:
QByteArray ba;
ba.resize(500000);
int encsize = avcodec_encode_video(context, (uint8_t*)ba.data(), 500000, frame.unownedPointer());
What I'm doing is encoding the data from frame and putting it into the buffer backed by the QByteArray. If I comment out the avcodec_encode_video line, the memory leak goes away. unownedPointer() looks like this:
if (this->frame != NULL) return this->frame;
this->frame = avcodec_alloc_frame();
uchar *data = this->img.bits();
frame->data[0] = (uint8_t *)data;
frame->data[1] = (uint8_t *)data + 1;
frame->data[2] = (uint8_t *)data + 2;
frame->linesize[0] = width * lineSize(this->fmt);
frame->linesize[1] = width * lineSize(this->fmt);
frame->linesize[2] = width * lineSize(this->fmt);
return this->frame;
Where this->frame is a AVFrame *, and this->img is a QImage.
At an encoding rate of about 30 fps, I'm getting a memory leak of about 50 MB/sec, so I'm not sure what the issue could be. It seems as if avcodec_encode_video() is copying memory and never freeing it, or something. Any ideas?
If avcodec_encode_video is converting my RGB24 data to YUV420P, would it be modifying the data pointed to by frame.unownedPointer()?
Take a look at the code for QtFFmpegWrapper: it uses a saved context to do this efficiently. Or you can just use QtFFmpegWrapper directly.
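If the leak comes from the RGB24-to-YUV420P conversion being set up anew on every frame, reusing one scaler context is the usual fix. A minimal sketch with libswscale, assuming width, height, and properly allocated srcFrame/dstFrame; this is not necessarily what QtFFmpegWrapper does internally:
//keep one SwsContext alive across frames instead of recreating it each time
static SwsContext *sws = NULL;
sws = sws_getCachedContext(sws,
                           width, height, AV_PIX_FMT_RGB24,   //source: the QImage data
                           width, height, AV_PIX_FMT_YUV420P, //destination: encoder input
                           SWS_BICUBIC, NULL, NULL, NULL);
sws_scale(sws, srcFrame->data, srcFrame->linesize, 0, height,
          dstFrame->data, dstFrame->linesize);
//feed dstFrame (not the RGB frame) to avcodec_encode_video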