Memory leak using cvCopy - c++

I have a memory leak problem using the cvCopy function of OpenCV. If I comment out that line, everything is fine; if not, memory usage keeps growing until the system crashes.
I found this interesting article about OpenCV memory leaks: http://www.andol.info/hci/963.htm but if I comment out the line:
targetImage = cvCreateImage( ....
I get another problem, because it says I'm passing a null pointer.
..... // other code (here we are inside a loop)
cvSetImageROI(&tmpimag, TargetRect);
targetImage = cvCreateImage(cvSize(TargetRect.width, TargetRect.height),
                            tmpimag.depth, tmpimag.nChannels);
cvCopy(&tmpimag, targetImage);
cvResetImageROI(&tmpimag); // release image ROI
..... // other code

From what I can tell based on your little snippet of code, the memory leak might be your fault.
On each iteration of the loop you create/allocate a new image with cvCreateImage(), but I don't see you releasing it (check cvReleaseImage()). Therefore, after each iteration more and more memory is allocated, producing a genuine memory leak.
EDIT:
cvResetImageROI(&tmpimag); does not release an image; it just resets the ROI information previously set. You still need to call cvReleaseImage(&targetImage) on the image you create each iteration.

Are you releasing targetImage each time the loop iterates?

Related

Malloc Error: OpenCV/C++ while push_back Vector

I am trying to create a descriptor using FAST for point detection and SIFT for building the descriptor; for that purpose I use OpenCV. While I use OpenCV's FAST, I only use parts of the SIFT code, because I only need the descriptor. Now I have a really nasty malloc error and I don't know how to solve it. I posted my code on GitHub because it is large and I don't really know where the error comes from. I only know that it is triggered at the end of the do-while loop:
features2d.push_back(features);
features.clear();
candidates2d.push_back(candidates);
candidates.clear();
}
} while (candidates.size() > 100);
As you can see in the code on GitHub, I already tried to release the application's memory. Xcode's analysis says my application uses 9 MB of memory. I tried to debug the error, but it was very complicated and I haven't found any clue where it comes from.
EDIT
I wondered whether this error could occur because I try to access the image pixel values passed to calcOrientationHist(...) with img.at<sift_wt>(...), where typedef float sift_wt, at lines 56 and 57 in my code, because normally the patch I pass in reports type 0, which means it is CV_8UC1. But then, I copied this part from sift.cpp at lines 330 and 331. Shouldn't the SIFT descriptor normally take a grayscale image as well?
EDIT2
After changing the type at the img.at<sift_wt>(...) position, nothing changed. So I googled for solutions and landed on the GuardMalloc feature of Xcode. Enabling it showed me a new error, which is probably the reason I get the malloc error, in line 77 of my code. The error it gives me at that line is EXC_BAD_ACCESS (code=1, address=....). These are the lines:
for (k = 0; k < len; k++) {
    int bin = cvRound((n/360.f)+Ori[k]);
    if (bin >= n)
        bin -= n;
    if (bin < 0)
        bin += n;
    temphist[bin] += W[k]*Mag[k];
}
The values of the variables mentioned are the following:
bin = 52, len = 169, n = 36, k = 0; W, Mag, Ori and temphist are not shown.
Here is the GuardMalloc output (sorry, but I don't really understand what exactly it wants):
GuardMalloc[Test-1935]: Allocations will be placed on 16 byte boundaries.
GuardMalloc[Test-1935]: - Some buffer overruns may not be noticed.
GuardMalloc[Test-1935]: - Applications using vector instructions (e.g., SSE) should work.
GuardMalloc[Test-1935]: version 108
Test(1935,0x102524000) malloc: protecting edges
Test(1935,0x102524000) malloc: enabling scribbling to detect mods to free blocks
The answer is simpler than I thought...
The problem was that the calculation of bin in the for loop produced the wrong value: instead of adding Ori[k], it should be multiplied by Ori[k], i.e. (n/360.f)*Ori[k].
The mistake resulted in a bin value of 52, but the length of the array that temphist points to is only 38.
For everyone with similar errors, I really recommend using GuardMalloc or Valgrind to debug malloc errors.

malloc error while using Tesseract with OpenCL option enabled

I have compiled Tesseract 3.04.00 with the OpenCL option enabled. While trying to extract text from an image using GetUTF8Text(), there is a malloc error, a memory leak I suppose.
I found a patch for a memory leak error that was added previously; however, the version I have compiled already includes that patch. I am not sure why the memory leak occurs.
This is the output I am getting:
[DS] Profile read from file (tesseract_opencl_profile_devices.dat).
[DS] Device[1] 1:Intel(R) Core(TM) i5-4250U CPU @ 1.30GHz score is 14049349632.000000
[DS] Device[2] 1:HD Graphics 5000 score is 14049349632.000000
[DS] Device[3] 0:(null) score is 21474836480.000000
[DS] Selected Device[2]: "HD Graphics 5000" (OpenCL)
ACP(15114,0x7fff795bf300) malloc: *** mach_vm_map(size=1125865547108352) failed (error code=3)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
Has anyone faced this problem before? How do I fix this?
I'm not familiar with Tesseract, but I suspect the patch you referred to was for a different issue.
Looking at the output details, it looks like you are using an Apple computer. Please take a look at the link below, which contains some how-tos for installing and using Tesseract on Mac OS X:
https://ryanfb.github.io/etc/2015/03/18/experimenting_with_opencl_for_tesseract.html
Hope this helps fix the issue.
Anyway, the error "can't allocate region" means that there is no memory left: a huge amount of memory was requested (size=1125865547108352, about 1.126 petabytes). To figure out what is really happening, you should step through the code with a debugger such as gdb (indeed, the error message says "set a breakpoint in malloc_error_break to debug"), or at least post a small program that reproduces the issue.
You ran out of memory: (error code=3) with "can't allocate region" means malloc tried to allocate more memory than was available.
Maybe you can try restricting recognition to a sub-rectangle of the image by calling SetRectangle(left, top, width, height) after SetImage. Each call to SetRectangle clears the previous recognition results, so multiple rectangles can be recognized within the same image. E.g.:
api->SetRectangle(30, 86, 590, 100);
Without seeing your code, I'd guess either you are not destroying objects and releasing memory, or there is a bug in that version. To check for the former you can use a memory-leak detection tool or library; to check for the latter you can debug it, or just try a different version.

Memory management using mex files with matlab

I have a mex file that communicates with a uEye USB3 camera, captures a length of video at the resolution and framerate specified by the input arguments, and exports it as a stack of .tif images. Before the mex file returns, I free the buffers I was using to store the images, but this memory doesn't get made available to the system again; it appears to remain allocated to MATLAB. Further calls to mex functions appear to reuse this memory, but I can't release it to the wider system without restarting MATLAB. Running clear mex or clear all appears to have no influence on this; indeed, clear all occasionally crashes MATLAB completely with a segmentation fault. I'm pretty sure it's not a fault in my C++, because if I rewrite the same function as plain standalone C++ I see the memory being released as it should be.
My memory is de-allocated using
is_UnlockSeqBuf(hCam, nMemID, pBuffer);
is_ClearSequence (hCam);
int k = 0;
while (k < frame_count) {
    int fMem = is_FreeImageMem(hCam, pFrameBuffer[k], pFrame[k]);
    k++;
}
having previously been allocated with
for (int i = 0; i < frame_count; i++) {
    int nAlloc = is_AllocImageMem(hCam, width, height, 24, &pFrameBuffer[i], &pFrame[i]);
    int nSeq = is_AddToSequence(hCam, pFrameBuffer[i], pFrame[i]);
    pFrame[i] = i + 1;
}
Does anyone have any ideas on how to release the memory without restarting matlab?

Heap corruption on Windows but not Linux

Below is some simple OpenCV code that grabs a frame from a video file and runs the SURF feature detector and extractor on it. When I run this code on Linux and OS X it works fine; however, on Windows I get a heap corruption error on the two commented lines.
VideoCapture capture(vidFilename.c_str());
Mat frame;
capture >> frame;
SurfFeatureDetector *detector = new SurfFeatureDetector(minHessian);
vector<KeyPoint> frameKeypoints;
detector->detect(frame, frameKeypoints);
delete(detector); // Heap Corruption Detected
SurfDescriptorExtractor *extractor = new SurfDescriptorExtractor();
Mat frameDescriptors;
extractor->compute(frame, frameKeypoints, frameDescriptors);
delete(extractor); // Heap Corruption Detected
I have no clue what could cause this in the code. I am using VS 2010 to compile; is there something in VS that could cause this to happen?
As mentioned above, the fact that you do not get any exception related to heap corruption does not mean that it did not happen earlier. The problem is most likely in your code, not in VS or the compiler. My previous post on a similar question may be useful here as well:
https://stackoverflow.com/a/22074401/2724703
You should probably also try running a dynamic analysis tool (e.g. Valgrind) on your Linux build. There is a good chance it would find the same bug there as well.
These dynamic tools can point you to the root cause of such problems.
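For the Linux side, a typical Valgrind run looks like the following (the source/binary names and the pkg-config package name are placeholders; adjust them to your build):

```shell
# Build with debug symbols so Valgrind reports file:line locations,
# then run memcheck (the default tool) with full leak checking.
g++ -g -O0 surf_demo.cpp -o surf_demo $(pkg-config --cflags --libs opencv)
valgrind --leak-check=full --track-origins=yes ./surf_demo
```

Invalid writes reported before the failing delete are usually the real culprit; the delete itself is only where the damage is detected.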

Where to look for Segmentation fault?

My program only sometimes gets a Segmentation fault: 11, and I can't figure it out for the life of me. I don't know a whole lot about C++ and pointers, so what kinds of things should I be looking for?
I know it might have to do with some function pointers I'm using.
My question is: what kinds of things produce segmentation faults? I'm desperately lost, and I have looked through all the code I thought could cause this.
The debugger I'm using is lldb and it shows the error being in this code segment:
void Player::update() {
    // if there is a smooth animation waiting, do this one
    if (queue_animation != NULL) {
        // once current animation is done,
        // switch it with the queue animation and make the queue NULL again
        if (current_animation->Finished()) {
            current_animation = queue_animation;
            queue_animation = NULL;
        }
    }
    current_animation->update(); // <-- debugger says the program halts on this line
    game_object::update();
}
current_animation and queue_animation are both pointers of class Animation.
Also to note, within Animation::update() is a function pointer that gets passed to Animation in the constructor.
If you need to see all of the code, it's over here.
EDIT:
I changed the code to use a bool:
void Player::update() {
    // if there is a smooth animation waiting, do this one
    if (is_queue_animation) {
        // once current animation is done,
        // switch it with the queue animation and make the queue NULL again
        if (current_animation->Finished()) {
            current_animation = queue_animation;
            is_queue_animation = false;
        }
    }
    current_animation->update();
    game_object::update();
}
It didn't help; I still sometimes get a segmentation fault.
EDIT 2:
I modified the code to this:
void Player::update() {
    // if there is a smooth animation waiting, do this one
    if (is_queue_animation) {
        std::cout << "queue" << std::endl;
        // once current animation is done,
        // switch it with the queue animation and make the queue NULL again
        if (current_animation->Finished()) {
            if (queue_animation != NULL) // make sure this is never NULL
                current_animation = queue_animation;
            is_queue_animation = false;
        }
    }
    current_animation->update();
    game_object::update();
}
Just to see when this function produces output without any user input. Every time I get a segmentation fault, this outputs twice right before the fault. This is my debug output:
* thread #1: tid = 0x1421bd4, 0x0000000000000000, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x0)
frame #0: 0x0000000000000000
error: memory read failed for 0x0
Some causes of segmentation faults:
You dereference a pointer that is uninitialized or NULL
You dereference a deleted pointer
You write outside the bounds of allocated memory (e.g. past the last element of an array)
Run Valgrind with your software (warning: it really slows things down). It's likely that memory has been overwritten in some way; Valgrind (and other tools) can help track down some of these kinds of issues, but not everything.
If it's a large program, this can get very difficult, since anything can corrupt anything in memory and everything becomes suspect. You might try to limit the code paths exercised by restricting the program in some way and see if you can still reproduce the problem; this helps narrow down the amount of suspect code.
If you have a previous version of the code that didn't have the problem, see if you can revert to it and then look at what changed. If you are using git, git bisect can search for the revision where the failure first appeared.
Warning: this kind of thing is the bane of C/C++ developers, which is one of the reasons that languages such as Java are considered "safer".
You might also just start looking through the code for anything suspicious, including possible race conditions. Hopefully this won't take too much time. I don't want to freak you out, but these kinds of bugs can be among the most difficult to track down.