I'm trying to transcode a video with the help of libavcodec.
When transcoding big video files (an hour or more) I get huge memory leaks in avcodec_encode_video. I have tried to debug it, but different video files make different functions leak, which has left me a little confused :). Here FFMPEG with QT memory leak is the same issue that I have, but I have no idea how that person solved it. QtFFmpegwrapper seems to do the same thing I do (or I missed something).
My method is below. I take care of aFrame and aPacket outside, with av_free and av_free_packet.
int
Videocut::encode(
    AVStream *anOutputStream,
    AVFrame *aFrame,
    AVPacket *aPacket
)
{
    if (!anOutputStream ||
        !aFrame ||
        !aPacket)
    {
        return 1;
        /* NOTREACHED */
    }
    AVCodecContext *outputCodec = anOutputStream->codec;
    uint8_t *buffer = (uint8_t *) malloc(
        sizeof(uint8_t) * _DefaultEncodeBufferSize
    );
    if (NULL == buffer) {
        return 2;
        /* NOTREACHED */
    }
    int packetSize = avcodec_encode_video(
        outputCodec,
        buffer,
        _DefaultEncodeBufferSize,
        aFrame
    );
    if (packetSize < 0) {
        free(buffer);
        return 1;
        /* NOTREACHED */
    }
    aPacket->data = buffer;
    aPacket->size = packetSize;
    return 0;
}
The first step would be to try to reproduce your problem under Valgrind on a Linux box, if you can.
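For example (the binary name and arguments here are placeholders for your own program):
valgrind --leak-check=full --show-reachable=yes ./your_transcoder input.avi output.avi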
ffmpeg encoders and decoders usually do not allocate memory dynamically; they reuse buffers between calls. Leaks are usually going to be in the frames somewhere.
Note that av_free_packet will only free your dynamically allocated buffer if the packet has a destructor function!
Look at how the function is defined in libavcodec/avpacket.c:
void av_free_packet(AVPacket *pkt)
{
    if (pkt) {
        if (pkt->destruct) pkt->destruct(pkt);
        pkt->data = NULL; pkt->size = 0;
        pkt->side_data = NULL;
        pkt->side_data_elems = 0;
    }
}
If there is no pkt->destruct function, no cleanup takes place!
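A hedged sketch of one way to fix the encode method above under this old API: allocate the buffer with av_malloc() (so the av_free() inside the stock destructor matches it) and install that destructor on the packet, so av_free_packet() actually reclaims the data:
uint8_t *buffer = (uint8_t *) av_malloc(_DefaultEncodeBufferSize);
/* ... avcodec_encode_video() into buffer as before ... */
aPacket->data     = buffer;
aPacket->size     = packetSize;
aPacket->destruct = av_destruct_packet;  /* now av_free_packet() frees the data */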
I need the debugger I am writing to give me the names of the shared libraries that the program being debugged links with or loads dynamically. I get the rendezvous structure as described in link.h and in answers to other questions, using DT_DEBUG, in the loop over _DYNAMIC[].
First, the debugger never hits the breakpoint set at r_brk.
Then I put a break in the program being debugged, and use link_map to print all loaded libraries. It only prints libraries loaded by the debugger, not by the program being debugged.
It seems that the rendezvous structure I am getting belongs to the debugger itself. If so, could you please tell me how to get the rendezvous structure of the program I am debugging? If what I am doing is supposed to work, your confirmation would be helpful, perhaps with some hint as to what else might be needed.
Thank you.
// You need to include <link.h> (plus <sys/ptrace.h> and <unistd.h>
// for ptrace(), read(), and lseek()). All structures are explained
// in the elf(5) manual page.
// Caller has opened "prog_name", the debugee, and fd is the
// file descriptor. You can send the name instead, and do open()
// here.
// Debugger is tracing the debugee, so we are using ptrace().
void getRandezvousStructure(int fd, pid_t pd, r_debug& rendezvous) {
    Elf64_Ehdr elfHeader;
    char* elfHdrPtr = (char*) &elfHeader;
    read(fd, elfHdrPtr, sizeof(elfHeader));
    Elf64_Addr debugeeEntry = elfHeader.e_entry; // entry point of debugee
    // Here, set a break at debugeeEntry, and after "PTRACE_CONT"
    // and waitpid(), remove the break, and set rip back to debugeeEntry.
    // After that, here it goes.
    lseek(fd, elfHeader.e_shoff, SEEK_SET); // offset of section header
    Elf64_Shdr secHeader;
    elfHdrPtr = (char*) &secHeader;
    Elf64_Dyn* dynPtr;
    // Keep reading until we get secHeader.sh_addr.
    // That is the address of _DYNAMIC.
    for (int i = 0; i < elfHeader.e_shnum; i++) {
        read(fd, elfHdrPtr, elfHeader.e_shentsize);
        if (secHeader.sh_type == SHT_DYNAMIC) {
            dynPtr = (Elf64_Dyn*) secHeader.sh_addr; // address of _DYNAMIC
            break;
        }
    }
    // Here, we get "dynPtr->d_un.d_ptr", which points to the rendezvous
    // structure, r_debug.
    uint64_t data;
    for (;; dynPtr++) {
        data = ptrace(PTRACE_PEEKDATA, pd, dynPtr, 0);
        if (data == DT_NULL) break;
        if (data == DT_DEBUG) {
            data = ptrace(PTRACE_PEEKDATA, pd, (uint64_t) dynPtr + 8, 0);
            break;
        }
    }
    // Using ptrace() we read a sufficient chunk of the debugee's memory
    // to copy to rendezvous.
    int ren_size = sizeof(rendezvous);
    char* buffer = new char[2 * ren_size];
    char* p = buffer;
    int total = 0;
    uint64_t value;
    for (;;) {
        value = ptrace(PTRACE_PEEKDATA, pd, data, 0);
        memcpy(p, &value, sizeof(value));
        total += sizeof(value);
        if (total > ren_size + sizeof(value)) break;
        data += sizeof(data);
        p += sizeof(data);
    }
    // Finally, copy the memory to rendezvous, which was
    // passed by reference.
    memcpy(&rendezvous, buffer, ren_size);
    delete [] buffer;
}
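A hedged follow-up sketch (not part of the answer above): once the r_debug has been copied out, you can walk the debuggee's link_map list over ptrace() and print each shared library name, which is what the question ultimately wants. It assumes a 64-bit debuggee and omits error checking.
#include <link.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <cstring>

static uint64_t peekWord(pid_t pd, uint64_t addr) {
    return (uint64_t) ptrace(PTRACE_PEEKDATA, pd, addr, 0);
}

void printLoadedLibraries(pid_t pd, const r_debug& rendezvous) {
    uint64_t map = (uint64_t) rendezvous.r_map;  // head of the debuggee's link_map list
    while (map) {
        uint64_t nameAddr = peekWord(pd, map + offsetof(link_map, l_name));
        uint64_t next     = peekWord(pd, map + offsetof(link_map, l_next));
        // Copy the NUL-terminated library name out of the debuggee word by word.
        char name[4096];
        size_t i = 0;
        for (;;) {
            uint64_t word = peekWord(pd, nameAddr + i);
            memcpy(name + i, &word, sizeof(word));
            if (memchr(&word, '\0', sizeof(word))) break;
            i += sizeof(word);
            if (i + sizeof(word) > sizeof(name)) break;
        }
        name[sizeof(name) - 1] = '\0';
        if (name[0]) printf("%s\n", name);  // the empty name is the executable itself
        map = next;
    }
}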
So I've been reading the libjpeg documentation and it is extremely lackluster.
I have been trying to figure out how to read from a custom memory buffer rather than a file, and am not sure how to even test whether my solution is working correctly.
At the moment my function for loading a jpeg from memory is like so:
struct error_mgr {
    jpeg_error_mgr pub;
    std::jmp_buf buf;
};
bool load_jpeg(void *mem, size_t size, output_struct &output){
    jpeg_source_mgr src;
    src.next_input_byte = static_cast<const JOCTET*>(mem);
    src.bytes_in_buffer = size;
    src.init_source = [](j_decompress_ptr){};
    src.fill_input_buffer = [](j_decompress_ptr cinfo) -> boolean{
        // should never reach end of buffer
        throw "libjpeg tried to read past end of file";
        return true;
    };
    src.skip_input_data = [](j_decompress_ptr cinfo, long num_bytes){
        if(num_bytes < 1) return; // negative or 0 is a no-op
        cinfo->src->next_input_byte += num_bytes;
        cinfo->src->bytes_in_buffer -= num_bytes;
    };
    src.resync_to_restart = jpeg_resync_to_restart;
    src.term_source = [](j_decompress_ptr){};
    struct jpeg_decompress_struct cinfo;
    error_mgr err;
    cinfo.err = jpeg_std_error(&err.pub);
    err.pub.error_exit = [](j_common_ptr cinfo){
        error_mgr *ptr = reinterpret_cast<error_mgr*>(cinfo->err);
        std::longjmp(ptr->buf, 1);
    };
    if(std::setjmp(err.buf)){
        jpeg_destroy_decompress(&cinfo);
        return false;
    }
    cinfo.src = &src;
    jpeg_create_decompress(&cinfo);
    (void) jpeg_read_header(&cinfo, TRUE);
    // do the actual reading of the image
    return true;
}
But it never makes it past jpeg_read_header.
I know that this is a jpeg file, and I know that my memory is being passed correctly, because I have libpng loading images through a function with the same signature and the same calling code just fine. So I'm sure the problem is how I am setting the source manager in cinfo.
Does anybody with more experience in libjpeg know how to do this?
In my code I was setting cinfo.src before calling jpeg_create_decompress; setting it afterwards fixed the issue :)
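A minimal sketch of the corrected order (same variables as above; jpeg_create_decompress() initializes the whole struct, clearing cinfo.src, so the custom source manager has to be installed afterwards):
cinfo.err = jpeg_std_error(&err.pub);
jpeg_create_decompress(&cinfo);  // initializes the struct, clears cinfo.src
cinfo.src = &src;                // install the custom source manager afterwards
(void) jpeg_read_header(&cinfo, TRUE);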
Using jpeg_mem_src() works for me, with libjpeg-turbo:
struct jpeg_decompress_struct cinfo;
/* ... */
const unsigned char *ptr;
size_t len;
/* Initialize ptr (the data) and len (its size), then: */
jpeg_mem_src(&cinfo, ptr, len);
/* Now, it's time for jpeg_read_header(), etc... */
I'm working on an FFmpeg project right now. I have to store the data of an AVPacket that comes from av_read_frame, and fill that data into a new AVPacket for subsequent decoding.
Here is my problem: when I try to new & free an AVPacket, memory leaks always happen.
I am just doing a simple test:
for(;;) {
    AVPacket pkt;
    av_new_packet(&pkt, 1000);
    av_init_packet(&pkt);
    av_free_packet(&pkt);
}
What am I doing wrong?
av_new_packet creates a packet and allocates its data.
av_init_packet sets all packet members to their defaults and sets the data pointer to NULL; the leak is here.
av_free_packet clears all visible members, but your data has already leaked.
If you want FFmpeg to allocate the data for you, do not call av_init_packet. If you want to handle the data yourself, allocate the packet object on the stack and set its data yourself (and free it yourself):
AVPacket pkt;
av_init_packet(&pkt);
pkt.data = dataBuffer;
pkt.size = dataBufferSize;
// use your packet
// free your dataBuffer
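And a minimal sketch of the first option, fixing the loop from the question: drop the av_init_packet() call, and av_free_packet() will find the buffer and destructor that av_new_packet() installed:
for (;;) {
    AVPacket pkt;
    av_new_packet(&pkt, 1000);  // allocates pkt.data and installs the destructor
    // ... use the packet ...
    av_free_packet(&pkt);       // now actually frees the buffer
}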
I just read the FFmpeg 2.2 AVPacket.c source code.
int av_new_packet(AVPacket *pkt, int size) {
    AVBufferRef *buf = NULL;
    int ret = packet_alloc(&buf, size);
    if (ret < 0)
        return ret;
    av_init_packet(pkt);
    pkt->buf = buf;
    pkt->data = buf->data;
    pkt->size = size;
#if FF_API_DESTRUCT_PACKET
FF_DISABLE_DEPRECATION_WARNINGS
    pkt->destruct = dummy_destruct_packet;
FF_ENABLE_DEPRECATION_WARNINGS
#endif
    return 0;
}
void av_init_packet(AVPacket *pkt) {
    pkt->pts = AV_NOPTS_VALUE;
    pkt->dts = AV_NOPTS_VALUE;
    pkt->pos = -1;
    pkt->duration = 0;
    pkt->convergence_duration = 0;
    pkt->flags = 0;
    pkt->stream_index = 0;
#if FF_API_DESTRUCT_PACKET
FF_DISABLE_DEPRECATION_WARNINGS
    pkt->destruct = NULL;
FF_ENABLE_DEPRECATION_WARNINGS
#endif
    pkt->buf = NULL;
    pkt->side_data = NULL;
    pkt->side_data_elems = 0;
}
I don't really know about the FF_API_DESTRUCT_PACKET, FF_DISABLE_DEPRECATION_WARNINGS, and FF_ENABLE_DEPRECATION_WARNINGS defines, but something about that destruct handling makes av_free_packet leak once av_init_packet has cleared the packet.
According to the source code, av_init_packet is already called inside av_new_packet, and av_new_packet already allocates the AVBuffer. So if you want to set the data of a new AVPacket, just memcpy into the packet's data, and call av_free_packet when you are done.
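A hedged sketch of that flow (srcData and srcSize are placeholders for your stored payload):
AVPacket pkt;
av_new_packet(&pkt, srcSize);        // calls av_init_packet internally and allocates pkt.data
memcpy(pkt.data, srcData, srcSize);  // copy the stored payload in
// ... feed pkt to the decoder ...
av_free_packet(&pkt);                // releases the buffer that av_new_packet allocated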
I have developed just a simple library by modifying a library that I found on the internet.
What scares me is that when I play an AVI, it plays and frees the memory when the video ends; but with this particular video it's like a memory leak! Memory grows to 138 MB even though the video has ended and the FreeAll method (a function that deletes the contexts, etc...) has been called.
Here is the code of the method that is causing the memory leak:
int VideoGL::NextVideoFrame(){
    int frameDone = 0;
    int result = 0;
    double pts = 0;
    if(!this->ended){
        if (!_started) return 0;
        AVPacket* packet;
        // Get the number of milliseconds passed and see if we should display a new frame
        int64_t msPassed = (1000 * (clock() - _baseTime)) / CLOCKS_PER_SEC;
        if (msPassed >= _currentPts)
        {
            // If this is not the current frame, copy it to the buffer
            if (_currentFramePts != _currentPts){
                _currentFramePts = _currentPts;
                memcpy(buffer_a, buffer, 3 * _codec_context_video->width * _codec_context_video->height);
                result = 1;
            }
            // Try to load a new frame from the video packet queue
            bool goodop = false;
            AVFrame *_n_frame = avcodec_alloc_frame();
            while (!frameDone && (packet = this->DEQUEUE(VIDEO)) != NULL)
            {
                if (packet == (AVPacket*)-1) return -1;
                goodop = true;
                _s_pts = packet->pts;
                avcodec_decode_video2(_codec_context_video, _n_frame, &frameDone, packet);
                av_free_packet(packet);
                if (packet->dts == AV_NOPTS_VALUE)
                {
                    if (_n_frame->opaque && *(uint64_t*)_n_frame->opaque != AV_NOPTS_VALUE) pts = (double) *(uint64_t*)_n_frame->opaque;
                    else pts = 0;
                }
                else pts = (double) packet->dts;
                pts *= av_q2d(_codec_context_video->time_base);
            }
            if (frameDone)
            {
                // if a frame was loaded scale it to the current texture frame buffer, but also set the pts so that it won't be copied to the texture until it's time
                sws_scale(sws_ctx, _n_frame->data, _n_frame->linesize, 0, _codec_context_video->height, _rgb_frame->data, _rgb_frame->linesize);
                double nts = 1.0 / av_q2d(_codec_context_video->time_base);
                _currentPts = (uint64_t) (pts * nts);
            }
            avcodec_free_frame(&_n_frame);
            av_free(_n_frame);
            if(!goodop){
                ended = true;
            }
        }
    }
    return result;
}
I'll be waiting for answers, thanks.
I had a memory leak problem too. For me, the deallocation worked when I included the following calls:
class members:
AVPacket avpkt;
AVFrame *frame;
AVCodecContext *avctx;
AVCodec *codec;
constructor:
av_init_packet(&avpkt);
avcodec_open2(avctx, codec, NULL);
frame = avcodec_alloc_frame();
destructor:
av_free_packet(&avpkt);
avcodec_free_frame(&frame);
av_free(frame);
avcodec_close(avctx);
I also had the same problem. According to ffplay.c, you should call
av_frame_unref(pFrame);
avcodec_get_frame_defaults(pFrame);
after every sws_scale call. This will free everything that was malloc'ed during decode.
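A sketch of where those calls would go in the question's loop (pFrame standing in for _n_frame):
sws_scale(sws_ctx, pFrame->data, pFrame->linesize, 0,
          _codec_context_video->height, _rgb_frame->data, _rgb_frame->linesize);
av_frame_unref(pFrame);               // drop the buffers the decoder attached to the frame
avcodec_get_frame_defaults(pFrame);   // reset the frame fields for the next decode call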
I had a similar routine using FFmpeg that would leak memory. I found a resolution by deallocating the memory for the frame and packet objects on each call to avcodec_decode_video2.
In your code the packet object is freed, but the frame is not. Adding the following lines before avcodec_decode_video2 should resolve the memory leak. I found that it's safe to call avcodec_free_frame on a frame object that is already deallocated. You could remove the allocation of the frame before the while loop.
avcodec_free_frame(&_n_frame);
_n_frame = avcodec_alloc_frame();
avcodec_decode_video2(_codec_context_video, _n_frame, &frameDone, packet);
I'm using libzip to work with zip files and everything goes fine, until I need to read a file from a zip.
I just need to read whole text files, so it would be great to achieve something like PHP's file_get_contents function.
To read a file from the zip there is the function int zip_fread(struct zip_file *file, void *buf, zip_uint64_t nbytes).
The main problem is that I don't know what the size of buf must be and how many nbytes I must read (well, I need to read the whole file, but files have different sizes). I could just make a buffer big enough to fit them all and read its full size, or do a while loop until zip_fread returns -1, but I don't think either is a rational option.
You can try using zip_stat to get the file size.
http://linux.die.net/man/3/zip_stat
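Building on that, here is a hedged sketch of a file_get_contents-style helper (zip_get_contents is a made-up name; error handling is minimal and it assumes zip_stat reports a valid size):
#include <zip.h>
#include <cstdlib>

char *zip_get_contents(struct zip *archive, const char *name) {
    struct zip_stat st;
    zip_stat_init(&st);
    if (zip_stat(archive, name, 0, &st) != 0) return NULL;
    char *buf = (char *) malloc(st.size + 1);  // whole file plus a terminating NUL
    if (!buf) return NULL;
    struct zip_file *f = zip_fopen(archive, name, 0);
    if (!f) { free(buf); return NULL; }
    zip_int64_t n = zip_fread(f, buf, st.size);
    zip_fclose(f);
    if (n < 0) { free(buf); return NULL; }
    buf[n] = '\0';  // NUL-terminate so the result can be used as text
    return buf;
}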
I haven't used the libzip interface, but from what you write it seems to look very similar to a file interface: once you have a handle to the stream you keep calling zip_fread() until the function returns an error (or, possibly, fewer than the requested bytes). The buffer you pass in is just a reasonably sized temporary buffer through which the data is communicated.
Personally I would probably create a stream buffer for this so once the file in the zip archive is set up it can be read using the conventional I/O stream methods. This would look something like this:
struct zipbuf: std::streambuf {
    zipbuf(???): file_(???) {}
private:
    zip_file* file_;
    enum { s_size = 8196 };
    char buffer_[s_size];
    int underflow() {
        int rc(zip_fread(this->file_, this->buffer_, s_size));
        this->setg(this->buffer_, this->buffer_,
                   this->buffer_ + std::max(0, rc));
        return this->gptr() == this->egptr()
            ? traits_type::eof()
            : traits_type::to_int_type(*this->gptr());
    }
};
With this stream buffer you should be able to create an std::istream and read the file into whatever structure you need:
zipbuf buf(???);
std::istream in(&buf);
...
Obviously, this code isn't tested or compiled. However, when you replace the ??? with whatever is needed to open the zip file, I'd think this should pretty much work.
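For instance (hedged, and equally untested): if the constructor is completed to take the open zip_file* handle, the whole entry can be slurped into a std::string, much like the file_get_contents behavior the question asks for:
#include <istream>
#include <iterator>
#include <string>

std::string read_all(zip_file *file) {
    zipbuf buf(file);  // hypothetical completed constructor taking the open handle
    std::istream in(&buf);
    return std::string(std::istreambuf_iterator<char>(in),
                       std::istreambuf_iterator<char>());
}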
Here is a routine I wrote that extracts data from a zip-stream and prints out a line at a time. This uses zlib, not libzip, but if this code is useful to you, feel free to use it:
/* compile with the -lz option in order to link in the zlib library */
#include <zlib.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define Z_CHUNK 2097152
int unzipFile(const char *fName)
{
    z_stream zStream;
    char *zRemainderBuf = malloc(1);
    unsigned char zInBuf[Z_CHUNK];
    unsigned char zOutBuf[Z_CHUNK + 1]; /* +1 for the NUL appended after inflation */
    char zLineBuf[Z_CHUNK];
    unsigned int zHave, zBufIdx, zBufOffset, zOutBufIdx;
    int zError;
    FILE *inFp = fopen(fName, "rbR");
    if (!inFp) { fprintf(stderr, "could not open file: %s\n", fName); return EXIT_FAILURE; }
    zStream.zalloc = Z_NULL;
    zStream.zfree = Z_NULL;
    zStream.opaque = Z_NULL;
    zStream.avail_in = 0;
    zStream.next_in = Z_NULL;
    zError = inflateInit2(&zStream, (15+32)); /* cf. http://www.zlib.net/manual.html */
    if (zError != Z_OK) { fprintf(stderr, "could not initialize z-stream\n"); return EXIT_FAILURE; }
    *zRemainderBuf = '\0';
    do {
        zStream.avail_in = fread(zInBuf, 1, Z_CHUNK, inFp);
        if (zStream.avail_in == 0)
            break;
        zStream.next_in = zInBuf;
        do {
            zStream.avail_out = Z_CHUNK;
            zStream.next_out = zOutBuf;
            zError = inflate(&zStream, Z_NO_FLUSH);
            switch (zError) {
                case Z_NEED_DICT: { fprintf(stderr, "Z-stream needs dictionary!\n"); return EXIT_FAILURE; }
                case Z_DATA_ERROR: { fprintf(stderr, "Z-stream suffered data error!\n"); return EXIT_FAILURE; }
                case Z_MEM_ERROR: { fprintf(stderr, "Z-stream suffered memory error!\n"); return EXIT_FAILURE; }
            }
            zHave = Z_CHUNK - zStream.avail_out;
            zOutBuf[zHave] = '\0';
            /* copy remainder buffer onto line buffer, if not NULL */
            if (zRemainderBuf) {
                strncpy(zLineBuf, zRemainderBuf, strlen(zRemainderBuf));
                zBufOffset = strlen(zRemainderBuf);
            }
            else
                zBufOffset = 0;
            /* read through zOutBuf for newlines */
            for (zBufIdx = zBufOffset, zOutBufIdx = 0; zOutBufIdx < zHave; zBufIdx++, zOutBufIdx++) {
                zLineBuf[zBufIdx] = zOutBuf[zOutBufIdx];
                if (zLineBuf[zBufIdx] == '\n') {
                    zLineBuf[zBufIdx] = '\0';
                    zBufIdx = -1;
                    fprintf(stdout, "%s\n", zLineBuf);
                }
            }
            /* copy some of the line buffer onto the remainder buffer, if there are remnants from the z-stream */
            if (strlen(zLineBuf) > 0) {
                if (strlen(zLineBuf) > strlen(zRemainderBuf)) {
                    /* to minimize the chance of doing another (expensive) malloc, we double the length of zRemainderBuf */
                    free(zRemainderBuf);
                    zRemainderBuf = malloc(strlen(zLineBuf) * 2);
                }
                strncpy(zRemainderBuf, zLineBuf, zBufIdx);
                zRemainderBuf[zBufIdx] = '\0';
            }
        } while (zStream.avail_out == 0);
    } while (zError != Z_STREAM_END);
    /* close gzip stream */
    zError = inflateEnd(&zStream);
    if (zError != Z_OK) {
        fprintf(stderr, "could not close z-stream!\n");
        return EXIT_FAILURE;
    }
    if (zRemainderBuf)
        free(zRemainderBuf);
    fclose(inFp);
    return EXIT_SUCCESS;
}
With any streaming you should consider the memory requirements of your app.
A good buffer size is large, but you do not want to hold too much memory at once, depending on your RAM usage requirements. A small buffer size means more calls to your read and write operations, which is expensive in terms of time performance. So you need to find a buffer size in the middle of those two extremes.
Typically I use a size of 4096 (4 KB), which is sufficiently large for many purposes. If you want, you can go larger. But at the worst-case size of 1 byte, you will be waiting a long time for your reads to complete.
So to answer your question, there is no "right" size to pick. It is a choice you should make so that the speed of your app and the memory it requires are what you need.
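For example, a hedged sketch of a 4 KB chunked read against zip_fread() (zf stands for an already-opened zip_file handle; what you append each chunk to is up to you):
char chunk[4096];
zip_int64_t n;
while ((n = zip_fread(zf, chunk, sizeof(chunk))) > 0) {
    // append chunk[0..n) to your output buffer or stream
}
// n == 0 means end of data; n < 0 means a read error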