FFMPEG, C++, Memory leak, what am I doing wrong?

So I've built this app that consumes an IP camera's RTSP feed and does fun things with it; however, I have a small memory leak that I have only just now pinned down.
If I just run this
while (av_read_frame(input_format_context, &input_packet) >= 0) {}
It will just grow and grow and grow... So what am I missing?
I'm using a Windows port of FFmpeg, and my version is 58.9.100.0.
Could it be a leak in FFMPEG itself?

From the documentation:
If pkt->buf is NULL, then the packet is valid until the next
av_read_frame() or until avformat_close_input(). Otherwise the packet
is valid indefinitely. In both cases the packet must be freed with
av_packet_unref when it is no longer needed.
Something like this?
AVPacket *pPacket = av_packet_alloc();
if (!pPacket)
{
    logging("failed to allocate memory for AVPacket");
    return -1;
}
while (av_read_frame(pFormatContext, pPacket) >= 0)
{
    auto response = decode_packet(pPacket, pCodecContext, pFrame);
    // Release the buffer the demuxer attached to the packet on every
    // iteration; otherwise each read leaks one packet's worth of data.
    av_packet_unref(pPacket);
    if (response < 0)
        break;
}
av_packet_free(&pPacket);
PS: Don't be a victim of cargo cult; research the source code. This is in no way a complete example. There are working projects that use FFmpeg.

Related

IMFTransform::ProcessInput() and MF_E_TRANSFORM_NEED_MORE_INPUT

I have code that decodes AAC-encoded audio using IMFTransform. It works well for various test inputs. But I observed that in some cases IMFTransform::ProcessOutput() returns MF_E_TRANSFORM_NEED_MORE_INPUT when, according to my reading of the MS documentation, it should return a valid data sample.
Basically the code has the following structure:
IMFTransform* transformer;
MFT_OUTPUT_DATA_BUFFER output_data_buffer;
...
bool try_to_get_output = false;
for (;;) {
    if (try_to_get_output) {
        // Try to get the output sample.
        try_to_get_output = false;
        output_data_buffer.dwStatus = 0;
        ...
        hr = transformer->ProcessOutput(...&output_data_buffer);
        if (SUCCEEDED(hr)) {
            // process sample
            if (output_data_buffer.dwStatus & MFT_OUTPUT_DATA_BUFFER_INCOMPLETE) {
                // We have more data
                try_to_get_output = true;
            }
        } else if (hr == MF_E_TRANSFORM_NEED_MORE_INPUT) {
            Log("Unnecessary ProcessOutput()");
        } else {
            // Process other errors
        }
        continue;
    }
    // Send more encoded AAC data to the MFT.
    hr = transformer->ProcessInput(...);
}
What happens is that ProcessOutput() sets MFT_OUTPUT_DATA_BUFFER_INCOMPLETE in MFT_OUTPUT_DATA_BUFFER.dwStatus, but the following ProcessOutput() always returns MF_E_TRANSFORM_NEED_MORE_INPUT, contradicting the documentation.
Again, so far it seems harmless and things work. But then what exactly does the AAC decoder want to tell the caller by setting MFT_OUTPUT_DATA_BUFFER_INCOMPLETE?
This might be a small glitch in the decoder implementation. It is quite possible that if you happen to drain the MFT it would spit out some data, so the incomplete flag might, a bit confusingly, indicate data that is not immediately accessible.
However, the overall idea is to keep calling ProcessOutput, pulling output data for as long as possible, until you get MF_E_TRANSFORM_NEED_MORE_INPUT, and then proceed with feeding new input (or draining). That is, I would say MF_E_TRANSFORM_NEED_MORE_INPUT is much more important than MFT_OUTPUT_DATA_BUFFER_INCOMPLETE. After all, this is what Microsoft's own code over MFTs does.
Also keep in mind that the AAC decoder is an "old", "first generation" MFT, so over the years its updates may have diverged a bit from the current docs.
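For illustration, that drive pattern might look roughly like the sketch below. It is only a sketch: transformer comes from the question, while MakeNextInputSample() and ConsumeOutput() are hypothetical placeholders for the caller's own sample handling, and output-sample allocation (per GetOutputStreamInfo()) is elided.
HRESULT DecodeLoop(IMFTransform* transformer) {
    for (;;) {
        // Pull output until the MFT explicitly asks for more input.
        for (;;) {
            MFT_OUTPUT_DATA_BUFFER out = {};
            DWORD status = 0;
            out.pSample = ...;   // caller-allocated output sample; allocation elided
            HRESULT hr = transformer->ProcessOutput(0, 1, &out, &status);
            if (hr == MF_E_TRANSFORM_NEED_MORE_INPUT)
                break;                         // expected: go feed more input
            if (FAILED(hr))
                return hr;                     // a genuine error
            ConsumeOutput(out.pSample);        // process the decoded sample
            // MFT_OUTPUT_DATA_BUFFER_INCOMPLETE can be ignored here;
            // looping until NEED_MORE_INPUT already covers it.
        }
        IMFSample* in = MakeNextInputSample(); // next chunk of encoded AAC
        if (!in)
            return S_OK;                       // end of stream: drain/flush instead
        HRESULT hr = transformer->ProcessInput(0, in, 0);
        in->Release();
        if (FAILED(hr))
            return hr;
    }
}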

FFMPEG c++ memory leak issue when reading the packet

I have written a program to read the frames from a video file. Everything works perfectly except the issue described below.
After reading a frame, when I call the avcodec_send_packet function, it leaks memory.
I used av_packet_unref before reading the next frame, but the memory leak is still not resolved.
I am using the latest FFMPEG version 4.3 on Windows 10.
av_frame_unref does not fix the memory leak either. I think the data buffer inside the packet does not get freed somehow; I feel it is related to an FFMPEG version issue, as I see similar code written by other programmers on the internet.
Does anyone have an idea how to fix this memory leak?
Code is as below:
// ... code setting up the AVFormatContext and AVCodecContext ...
while (1)
{
    if (av_read_frame(pFormatCtx, packet) >= 0)
    {
        if (packet->stream_index == videoindex)
        {
            // On executing this line, memory shoots up by MBs every time.
            ret = avcodec_send_packet(pCodecCtx, packet);
            if (ret < 0)
            {
                av_packet_unref(packet);
                fprintf(stderr, "Failed to decode packet.\n:%s", av_err2str(ret));
                return -1;
            }
            ret = avcodec_receive_frame(pCodecCtx, pAvFrame);
            if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            {
                av_packet_unref(packet);
                continue;
            }
            if (ret < 0)
            {
                av_packet_unref(packet);
                printf("Failed to decode packet.\n");
                return -1;
            }
            av_packet_unref(packet);
            {
                // .. do something with the frame.
            }
            av_frame_unref(pAvFrame);
        }
        av_packet_unref(packet);
    }
}
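For reference, the send/receive API is usually driven with an inner receive loop, since a single packet can produce several frames and the decoder buffers data internally; some of the growth observed at avcodec_send_packet can also come from that internal buffering rather than a true per-packet leak. A minimal sketch over the question's own variables:
while (av_read_frame(pFormatCtx, packet) >= 0)
{
    if (packet->stream_index == videoindex)
    {
        if (avcodec_send_packet(pCodecCtx, packet) == 0)
        {
            // Drain every frame the decoder has ready before sending more;
            // avcodec_receive_frame returns AVERROR(EAGAIN) when it needs input.
            while (avcodec_receive_frame(pCodecCtx, pAvFrame) == 0)
            {
                // .. do something with the frame.
                av_frame_unref(pAvFrame);
            }
        }
    }
    av_packet_unref(packet);   // always release the packet's buffer
}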

VIDIOC_DQBUF hangs on camera disconnection

My application is using v4l2 running in a separate thread. If a camera gets disconnected, the user is given an appropriate message before the thread terminates cleanly. This works in the vast majority of cases. However, if execution is inside the VIDIOC_DQBUF ioctl when the camera is disconnected, the ioctl doesn't return, causing the entire thread to lock up.
My system is as follows:
Linux Kernel: 4.12.0
OS: Fedora 25
Compiler: gcc-7.1
The following is a simplified example of the problem function.
// Get Raw Buffer from the camera
void v4l2_Processor::get_Raw_Frame(void* buffer)
{
    struct v4l2_buffer buf;
    memset(&buf, 0, sizeof(buf));
    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;

    // Grab next frame
    if (ioctl(m_FD, VIDIOC_DQBUF, &buf) < 0)
    {   // If the camera becomes disconnected when the execution is
        // in the above ioctl, then the ioctl never returns.
        std::cerr << "Error in DQBUF\n";
    }

    // Queue for next frame
    if (ioctl(m_FD, VIDIOC_QBUF, &buf) < 0)
    {
        std::cerr << "Error in QBUF\n";
    }

    memcpy(buffer, m_Buffers[buf.index].buff,
           m_Buffers[buf.index].buf_length);
}
Can anybody shed any light on why this ioctl locks up and what I might do to solve this problem?
I appreciate any help offered.
Amanda
I am currently having the same issue; however, my entire thread doesn't lock up. The ioctl times out (15 s), but that's way too long.
Is there a way to query V4L2 (one that won't hang) whether video is streaming? Or at least a way to change the ioctl timeout?
UPDATE:
@Amanda: you can change the timeout of the dequeue in the v4l2_capture driver source and rebuild the kernel/kernel module.
Modify the timeout in the dequeue function:
if (!wait_event_interruptible_timeout(cam->enc_queue,
                                      cam->enc_counter != 0,
                                      50 * HZ)) // Modify this constant
Best of luck!
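A user-space alternative that avoids rebuilding the kernel is to wait on the descriptor with a timeout before dequeuing, for example with select() from <sys/select.h>. This is only a sketch, reusing m_FD and buf from the question:
// Wait up to 2 seconds for a frame to become available; if the camera
// has vanished, select() returns 0 (timeout) instead of DQBUF blocking.
fd_set fds;
FD_ZERO(&fds);
FD_SET(m_FD, &fds);
struct timeval tv;
tv.tv_sec = 2;
tv.tv_usec = 0;
int r = select(m_FD + 1, &fds, NULL, NULL, &tv);
if (r <= 0)
{
    std::cerr << (r == 0 ? "Timeout waiting for frame\n" : "Error in select\n");
    return;   // treat as a disconnect and leave get_Raw_Frame()
}
if (ioctl(m_FD, VIDIOC_DQBUF, &buf) < 0) { /* as before */ }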

C++/C FFmpeg artifact build up across video frames

Context:
I am building a recorder for capturing video and audio in separate threads (using Boost thread groups) using FFmpeg 2.8.6 on Ubuntu 16.04. I followed the demuxing_decoding example here: https://www.ffmpeg.org/doxygen/2.8/demuxing_decoding_8c-example.html
Video capture specifics:
I am reading H264 off a Logitech C920 webcam and writing the video to a raw file. The issue I notice with the video is that there seems to be a build-up of artifacts across frames until a particular frame resets. Here are my frame-grabbing and decoding functions:
// Used for injecting decoding functions for different media types, allowing
// for a generic decode loop
typedef std::function<int(AVPacket*, int*, int)> PacketDecoder;

/**
 * Decodes a video packet.
 * If the decoding operation is successful, returns the number of bytes decoded,
 * else returns the result of the decoding process from ffmpeg
 */
int decode_video_packet(AVPacket *packet,
                        int *got_frame,
                        int cached){
    int ret = 0;
    int decoded = packet->size;
    *got_frame = 0;

    //Decode video frame
    ret = avcodec_decode_video2(video_decode_context,
                                video_frame, got_frame, packet);
    if (ret < 0) {
        //FFmpeg users should use av_err2str
        char errbuf[128];
        av_strerror(ret, errbuf, sizeof(errbuf));
        std::cerr << "Error decoding video frame " << errbuf << std::endl;
        decoded = ret;
    } else {
        if (*got_frame) {
            video_frame->pts = av_frame_get_best_effort_timestamp(video_frame);

            //Write to log file
            AVRational *time_base = &video_decode_context->time_base;
            log_frame(video_frame, time_base,
                      video_frame->coded_picture_number, video_log_stream);

#if( DEBUG )
            std::cout << "Video frame " << ( cached ? "(cached)" : "" )
                      << " coded:" << video_frame->coded_picture_number
                      << " pts:" << video_frame->pts << std::endl;
#endif

            /*Copy decoded frame to destination buffer:
             *This is required since rawvideo expects non aligned data*/
            av_image_copy(video_dest_attr.video_destination_data,
                          video_dest_attr.video_destination_linesize,
                          (const uint8_t **)(video_frame->data),
                          video_frame->linesize,
                          video_decode_context->pix_fmt,
                          video_decode_context->width,
                          video_decode_context->height);

            //Write to rawvideo file
            fwrite(video_dest_attr.video_destination_data[0],
                   1,
                   video_dest_attr.video_destination_bufsize,
                   video_out_file);

            //Unref the refcounted frame
            av_frame_unref(video_frame);
        }
    }
    return decoded;
}
/**
 * Grabs frames in a loop and decodes them using the specified decoding function
 */
int process_frames(AVFormatContext *context,
                   PacketDecoder packet_decoder) {
    int ret = 0;
    int got_frame;
    AVPacket packet;

    //Initialize packet, set data to NULL, let the demuxer fill it
    av_init_packet(&packet);
    packet.data = NULL;
    packet.size = 0;

    // read frames from the file
    for (;;) {
        ret = av_read_frame(context, &packet);
        if (ret < 0) {
            if (ret == AVERROR(EAGAIN)) {
                continue;
            } else {
                break;
            }
        }

        //Convert timing fields to the decoder timebase
        unsigned int stream_index = packet.stream_index;
        av_packet_rescale_ts(&packet,
                             context->streams[stream_index]->time_base,
                             context->streams[stream_index]->codec->time_base);

        AVPacket orig_packet = packet;
        do {
            ret = packet_decoder(&packet, &got_frame, 0);
            if (ret < 0) {
                break;
            }
            packet.data += ret;
            packet.size -= ret;
        } while (packet.size > 0);
        av_free_packet(&orig_packet);

        if (stop_recording == true) {
            break;
        }
    }

    //Flush cached frames
    std::cout << "Flushing frames" << std::endl;
    packet.data = NULL;
    packet.size = 0;
    do {
        packet_decoder(&packet, &got_frame, 1);
    } while (got_frame);

    av_log(0, AV_LOG_INFO, "Done processing frames\n");
    return ret;
}
Questions:
How do I go about debugging the underlying issue?
Is it possible that running the decoding code in a thread other than the one in which the decoding context was opened is causing the problem?
Am I doing something wrong in the decoding code?
Things I have tried/found:
I found this thread that is about the same problem here: FFMPEG decoding artifacts between keyframes
(I cannot post samples of my corrupted frames due to privacy issues, but the image linked to in that question depicts the same issue I have)
However, the answer to the question was posted by the OP without specific details about how the issue was fixed. The OP only mentions that he wasn't 'preserving the packets correctly', but says nothing about what was wrong or how to fix it. I do not have enough reputation to post a comment seeking clarification.
I was initially passing the packet into the decoding function by value, but switched to passing by pointer on the off chance that the packet freeing was being done incorrectly.
I found another question about debugging decoding issues, but couldn't find anything conclusive: How is video decoding corruption debugged?
I'd appreciate any insight. Thanks a lot!
[EDIT] In response to Ronald's answer, I am adding a little more information that wouldn't fit in a comment:
I am only calling decode_video_packet() from the thread processing video frames; the other thread processing audio frames calls a similar decode_audio_packet() function. So only one thread calls the function. I should mention that I have set the thread_count in the decoding context to 1, failing which I would get a segfault in malloc.c while flushing the cached frames.
I can see this being a problem if process_frames and the frame decoder function were run on separate threads, which is not the case. Is there a specific reason why it would matter whether the freeing is done within the function or after it returns? I believe the freeing function is passed a copy of the original packet because multiple decode calls may be required for an audio packet, in case the decoder doesn't decode the entire audio packet.
A general problem is that the corruption does not occur all the time. I can debug better if it is deterministic. Otherwise, I can't even say if a solution works or not.
A few things to check:
are you running multiple threads that are calling decode_video_packet()? If you are: don't do that! FFmpeg has built-in support for multi-threaded decoding, and you should let FFmpeg do threading internally and transparently (see the sketch after this answer).
you are calling av_free_packet() right after calling the frame decoder function, but at that point it may not yet have had a chance to copy the contents. You should probably let decode_video_packet() free the packet instead, after calling avcodec_decode_video2().
General debugging advice:
run it without any threading and see if that works;
if it does, and with threading it fails, use thread debuggers such as tsan or helgrind to help find the race conditions in your code.
It can also help to know whether the output you're getting is reproducible (this suggests a non-threading-related bug in your code) or changes from one run to the other (this suggests a race condition in your code).
And yes, the periodic clean-ups are because of keyframes.
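For reference, enabling the built-in threading mentioned in the first point is just a matter of configuring the codec context before avcodec_open2(). A minimal sketch, where decoder stands for the AVCodec you looked up with avcodec_find_decoder():
// Let FFmpeg thread the decoder internally instead of threading around it.
video_decode_context->thread_count = 0;   // 0 = choose a sensible count automatically
video_decode_context->thread_type = FF_THREAD_FRAME | FF_THREAD_SLICE;
if (avcodec_open2(video_decode_context, decoder, NULL) < 0) {
    // handle the open failure
}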

The right way to free memory while using libudev

I use libudev to detect usb devices.
Initialise monitor and filter:
struct udev* udev = udev_new();
if (udev == nullptr) { /* error handling */ }
struct udev_monitor* usb = udev_monitor_new_from_netlink(udev, "udev");
udev_monitor_filter_add_match_subsystem_devtype(usb, "usb", NULL);
udev_monitor_enable_receiving(usb);
while (!canceled) { /* setup fd, poll fd, process result */ }
Then I release the allocated resources with:
udev_monitor_unref(usb);
udev_unref(udev);
But sometimes I get:
*** glibc detected *** ./usbtest: corrupted double-linked list: 0x084cc5d0 ***
I tried to use:
free(usb);
free(udev);
But then valgrind complained about memory leaks.
What is the right way to release the memory in this case?
According to the documentation it should be sufficient to use:
udev_unref(udev);
and the monitor documentation says that:
udev_monitor_unref(usb);
should free that resource. If that gives you a double free, then something is not right, and you really need to debug that issue, not try to work around it by other means.
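For reference, a monitor loop with the full ref/unref pairing might look like the sketch below. Note in particular that every udev_device returned by udev_monitor_receive_device() carries its own reference and needs udev_device_unref(); forgetting that is a common source of leak reports. The canceled flag stands in for the question's shutdown logic.
#include <libudev.h>
#include <poll.h>
#include <cstdio>

volatile bool canceled = false;   // set from a signal handler or another thread

int main()
{
    struct udev* udev = udev_new();
    if (udev == nullptr) { return 1; }

    struct udev_monitor* usb = udev_monitor_new_from_netlink(udev, "udev");
    udev_monitor_filter_add_match_subsystem_devtype(usb, "usb", NULL);
    udev_monitor_enable_receiving(usb);

    struct pollfd pfd = { udev_monitor_get_fd(usb), POLLIN, 0 };
    while (!canceled)
    {
        if (poll(&pfd, 1, 1000) > 0 && (pfd.revents & POLLIN))
        {
            struct udev_device* dev = udev_monitor_receive_device(usb);
            if (dev)
            {
                std::printf("%s\n", udev_device_get_action(dev));
                udev_device_unref(dev);   // each received device holds its own ref
            }
        }
    }

    udev_monitor_unref(usb);   // these two calls are the only cleanup needed;
    udev_unref(udev);          // never pass libudev objects to free() directly
    return 0;
}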