I'm trying to build a program that recognises objects through template matching.
Whenever I select my ROI, the program's memory starts to grow by 2-3 MB per second, and after 20-30 minutes the program crashes because it uses too much memory.
I think the problem starts somewhere in the code below, but I have no idea where, or how to free the memory without corrupting the program.
Could someone give me a lead on how to fix this problem?
void testApp::update()
{
    vidGrabber.grabFrame();
    if (vidGrabber.isFrameNew())
    {
        colorImg.setFromPixels(vidGrabber.getPixels(), camWidth, camHeight);
        if (subjectIsDefined)
        {
            IplImage *result = cvCreateImage(cvSize(camWidth - subjectImg.width + 1, camHeight - subjectImg.height + 1), 32, 1);
            cvMatchTemplate(colorImg.getCvImage(), subjectImg.getCvImage(), result, CV_TM_SQDIFF);
            double minVal, maxVal;
            CvPoint minLoc, maxLoc;
            cvMinMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc, 0);
            subjectLocation.x = minLoc.x;
            subjectLocation.y = minLoc.y;
        }
    }
}
void testApp::mouseReleased(int x, int y, int button)
{
    //End tracking and normalize subject frame
    if (subjectFrame.width < 0)
    {
        subjectFrame.x += subjectFrame.width;
        subjectFrame.width *= -1;
    }
    if (subjectFrame.height < 0)
    {
        subjectFrame.y += subjectFrame.height;
        subjectFrame.height *= -1;
    }
    isSelectingTrackingRegion = false;
    subjectLocation.x = subjectFrame.x;
    subjectLocation.y = subjectFrame.y;
    //Copy selected portion of the image to the subject image
    subjectImg.allocate(subjectFrame.width, subjectFrame.height);
    colorImg.setROI(subjectFrame);
    subjectImg = colorImg;
    colorImg.resetROI();
    subjectIsDefined = true;
}
You have to call cvReleaseImage() on every IplImage you create in your program. Be careful to find the correct place to call cvReleaseImage(): once you release the memory belonging to an image, you cannot access that image again. In your update(), the result image is created on every new frame and never released, which accounts for the steady growth.
Suggestion: don't use IplImage. That is the old C API of OpenCV. Nowadays you can use Mat images, which manage their memory themselves. The advantage of using Mat is that you don't need to think about freeing the memory.
Alright, so I'm seeing an odd memory leak when attempting to use cvCreateMat to make room for my soon-to-be-filled mat. Below is what I am attempting to do; adaptiveThreshold didn't like it when I put the 3-channel image in, so I wanted to split it into its separate channels. It works! But every time we go through this particular function we gain another ~3MB of memory. Since this function is expected to run a few hundred times, this becomes a rather noticeable problem.
So here's the code:
void adaptiveColorThreshold(Mat *inputShot, int adaptiveMethod, int blockSize, int cSubtraction)
{
    Mat newInputShot = (*inputShot).clone();
    Mat inputBlue = cvCreateMat(newInputShot.rows, newInputShot.cols, CV_8UC1);
    Mat inputGreen = cvCreateMat(newInputShot.rows, newInputShot.cols, CV_8UC1);
    Mat inputRed = cvCreateMat(newInputShot.rows, newInputShot.cols, CV_8UC1);
    for (int rows = 0; rows < newInputShot.rows; rows++)
    {
        for (int cols = 0; cols < newInputShot.cols; cols++)
        {
            inputBlue.data[inputBlue.step[0]*rows + inputBlue.step[1]*cols] = newInputShot.data[newInputShot.step[0]*rows + newInputShot.step[1]*cols + 0];
            inputGreen.data[inputGreen.step[0]*rows + inputGreen.step[1]*cols] = newInputShot.data[newInputShot.step[0]*rows + newInputShot.step[1]*cols + 1];
            inputRed.data[inputRed.step[0]*rows + inputRed.step[1]*cols] = newInputShot.data[newInputShot.step[0]*rows + newInputShot.step[1]*cols + 2];
        }
    }
    adaptiveThreshold(inputBlue, inputBlue, 255, adaptiveMethod, THRESH_BINARY, blockSize, cSubtraction);
    adaptiveThreshold(inputGreen, inputGreen, 255, adaptiveMethod, THRESH_BINARY, blockSize, cSubtraction);
    adaptiveThreshold(inputRed, inputRed, 255, adaptiveMethod, THRESH_BINARY, blockSize, cSubtraction);
    for (int rows = 0; rows < (*inputShot).rows; rows++)
    {
        for (int cols = 0; cols < (*inputShot).cols; cols++)
        {
            (*inputShot).data[(*inputShot).step[0]*rows + (*inputShot).step[1]*cols + 0] = inputBlue.data[inputBlue.step[0]*rows + inputBlue.step[1]*cols];
            (*inputShot).data[(*inputShot).step[0]*rows + (*inputShot).step[1]*cols + 1] = inputGreen.data[inputGreen.step[0]*rows + inputGreen.step[1]*cols];
            (*inputShot).data[(*inputShot).step[0]*rows + (*inputShot).step[1]*cols + 2] = inputRed.data[inputRed.step[0]*rows + inputRed.step[1]*cols];
        }
    }
    inputBlue.release();
    inputGreen.release();
    inputRed.release();
    newInputShot.release();
    return;
}
So going through it one line at a time...
newInputShot adds ~3MB
inputBlue adds ~1MB
inputGreen adds ~1MB
and inputRed adds ~1MB
So far, so good - we need memory to hold the data. newInputShot gets its data right off the bat, but the inputRGBs need to get their data from newInputShot - so we just allocate the space to be filled in the upcoming for-loop, which (as expected) allocates no new memory, just fills in the space already claimed.
The adaptiveThresholds don't add any new memory either, since they're simply supposed to overwrite what is already there, and the next for-loop writes straight to inputShot; no new memory needed there. So now we get around to (manually) releasing the memory.
Releasing inputBlue frees up 0MB
Releasing inputGreen frees up 0MB
Releasing inputRed frees up 0MB
Releasing newInputShot frees up ~3MB
Now, according to the OpenCV documentation site: "OpenCV handles all the memory automatically."
First of all, std::vector, Mat, and other data structures used by the
functions and methods have destructors that deallocate the underlying
memory buffers when needed. This means that the destructors do not
always deallocate the buffers as in case of Mat. They take into
account possible data sharing. A destructor decrements the reference
counter associated with the matrix data buffer. The buffer is
deallocated if and only if the reference counter reaches zero, that
is, when no other structures refer to the same buffer. Similarly, when
a Mat instance is copied, no actual data is really copied. Instead,
the reference counter is incremented to memorize that there is another
owner of the same data. There is also the Mat::clone method that
creates a full copy of the matrix data.
TLDR the quote: Related mats get clumped together in a super-mat that gets released all at once when nothing is left using it.
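To make the quoted behaviour concrete, here is a minimal, OpenCV-free sketch of the reference-counting idea (ToyMat and its members are invented for illustration; cv::Mat's real internals differ):

```cpp
#include <cassert>
#include <cstddef>

// Toy stand-in for cv::Mat's header/refcount split: copies share one
// buffer and one counter; the buffer is freed only when the counter
// reaches zero.
struct ToyMat {
    unsigned char* data = nullptr;
    int* refcount = nullptr;

    explicit ToyMat(std::size_t n)
        : data(new unsigned char[n]), refcount(new int(1)) {}

    ToyMat(const ToyMat& other)              // "copy" = share buffer, bump count
        : data(other.data), refcount(other.refcount) {
        ++*refcount;
    }

    void release() {                         // decrement; free only at zero
        if (refcount && --*refcount == 0) {
            delete[] data;
            delete refcount;
        }
        data = nullptr;
        refcount = nullptr;
    }

    ~ToyMat() { release(); }
};
```

release() here mirrors Mat::release(): it only frees the buffer when the last owner lets go, which is why releasing one copy of a shared Mat appears to free 0MB.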
This is why I created newInputShot as a clone (so it doesn't get clumped with inputShot) - to see if this was occurring with the inputRGBs. Well... nope! The inputRGBs are their own beast that refuse to be deallocated. I know it isn't any of the intermediate functions, because this snippet does the exact same thing:
void adaptiveColorThreshold(Mat *inputShot, int adaptiveMethod, int blockSize, int cSubtraction)
{
    Mat newInputShot = (*inputShot).clone();
    Mat inputBlue = cvCreateMat(newInputShot.rows, newInputShot.cols, CV_8UC1);
    Mat inputGreen = cvCreateMat(newInputShot.rows, newInputShot.cols, CV_8UC1);
    Mat inputRed = cvCreateMat(newInputShot.rows, newInputShot.cols, CV_8UC1);
    inputBlue.release();
    inputGreen.release();
    inputRed.release();
    newInputShot.release();
    return;
}
That's about as simple as it gets. Allocate - fail to Deallocate. So what's going on with cvCreateMat?
I would suggest not using cvCreateMat, and you don't need to clone the original Mat either.
Look into the split() and merge() functions. They will do the dirty work for you and will return Mats that handle memory for you. I don't have OpenCV installed right now so I can't test any code, but I'm sure that's the route you want to take.
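For intuition, split() and merge() essentially de-interleave and re-interleave the packed channel data. A plain-C++ sketch of that idea (no OpenCV; the function names are made up for illustration):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// De-interleave a packed BGRBGR... buffer into per-channel planes --
// conceptually what cv::split() does, minus the Mat bookkeeping.
std::vector<std::vector<unsigned char>>
splitChannels(const std::vector<unsigned char>& packed, int channels) {
    std::vector<std::vector<unsigned char>> planes(channels);
    for (std::size_t i = 0; i < packed.size(); ++i)
        planes[i % channels].push_back(packed[i]);
    return planes;
}

// Re-interleave the planes back into one packed buffer, like cv::merge().
std::vector<unsigned char>
mergeChannels(const std::vector<std::vector<unsigned char>>& planes) {
    std::vector<unsigned char> packed;
    for (std::size_t i = 0; i < planes[0].size(); ++i)
        for (const auto& plane : planes)
            packed.push_back(plane[i]);
    return packed;
}
```

With the real API the whole function above collapses to roughly `vector<Mat> planes; split(newInputShot, planes); /* threshold each plane */ merge(planes, *inputShot);`, and every Mat involved cleans up after itself.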
I'm coding in C++ with OpenCV on Linux. I've found this similar question, although I can't quite get it to work.
What I want to do is read in a video file and store a certain number of frames in an array. Over that number, I want to delete the first frame and add the most recent frame to the end of the array.
Here's my code so far.
VideoCapture cap("Video.mp4");
int width = 2;
int height = 2;
Rect roi = Rect(100, 100, width, height);
vector<Mat> matArray;
int numberFrames = 6;
int currentFrameNumber = 0;
for (;;){
    cap >> cameraInput;
    cameraInput(roi).copyTo(finalOutputImage);
    if (currentFrameNumber < numberFrames){
        matArray.push_back(finalOutputImage);
    } else if (currentFrameNumber <= numberFrames){
        for (int i = 0; i < matArray.size()-1; i++){
            swap(matArray[i], matArray[i+1]);
        }
        matArray.pop_back();
        matArray.push_back(finalOutputImage);
    }
    currentFrameNumber++;
}
My understanding of Mats says this is probably a problem with pointers; I'm just not sure how to fix it. When I look at the array of Mats, every element is the same frame. Thank you.
There's no need for all this complication if you make use of C++'s highly useful STL:
if( currentFrameNumber >= numberFrames )
    matArray.erase( matArray.begin() );
matArray.push_back( finalOutputImage.clone() ); //check out #berak's comment
should do it. (Note that std::vector has no remove() member; erase() with an iterator is what removes the first element.)
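The clone() is the important part: Mat's copy constructor shares the pixel buffer, so pushing un-cloned frames leaves every element pointing at the same data, which is exactly the "every element is the same frame" symptom. A standalone sketch of that aliasing, with shared_ptr standing in for Mat's shared buffer (names invented for illustration):

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Simulate N "camera frames" written into one shared buffer; store each
// frame either by sharing the buffer (Mat's shallow copy) or by deep
// copying it (the Mat::clone() analogue).
std::vector<std::shared_ptr<int>> captureFrames(int n, bool deepCopy) {
    auto frameBuffer = std::make_shared<int>(0); // the one camera buffer
    std::vector<std::shared_ptr<int>> stored;
    for (int f = 1; f <= n; ++f) {
        *frameBuffer = f;                        // camera overwrites the buffer
        stored.push_back(deepCopy ? std::make_shared<int>(*frameBuffer)
                                  : frameBuffer);
    }
    return stored;
}
```

Without the deep copy, every stored element sees only the last frame written; with it, each element keeps its own snapshot.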
I want to extract some Harris corners from an image and compute FREAK descriptors for them. Here is how I try to do it (the passed variables are globally defined):
void computeFeatures(cv::Mat &src, std::vector<cv::KeyPoint> &keypoints, cv::Mat &desc) {
    cv::Mat featureSpace;
    featureSpace = cv::Mat::zeros( src.size(), CV_32FC1 );
    //- Detector parameters
    int blockSize = 3;
    int apertureSize = 3;
    double k = 0.04;
    //- Detecting corners
    cornerHarris( src, featureSpace, blockSize, apertureSize, k, cv::BORDER_DEFAULT );
    //- Thresholding featureSpace
    keypoints.clear();
    nonMaximumSuppression(featureSpace, keypoints, param.nms_n);
    //- compute FREAK-descriptor
    cv::FREAK freak(false, false, 22.0f, 4);
    freak.compute(src, keypoints, desc);
}
I can compile it with Visual Studio 12 as well as Matlab R2013b via mex. When I run it as "stand alone" (.exe) it works just fine. When I try to execute it via Matlab it fails with this message:
A buffer overrun has occurred in MATLAB.exe which has corrupted the
program's internal state. Press Break to debug the program or Continue
to terminate the program.
I mexed with the debug option '-g' and attached VisualStudio to Matlab to be able to get closer to the error:
After nonMaximumSuppression() the size of keypoints is 233; when I step into freak.compute() the size is suddenly 83, with "random" values stored.
The actual error is then in KeyPointsFilter::runByKeypointSize when keypoints should be erased.
in keypoint.cpp line 256:
void KeyPointsFilter::runByKeypointSize( vector<KeyPoint>& keypoints, float minSize, float maxSize )
{
    CV_Assert( minSize >= 0 );
    CV_Assert( maxSize >= 0 );
    CV_Assert( minSize <= maxSize );
    keypoints.erase( std::remove_if(keypoints.begin(), keypoints.end(), SizePredicate(minSize, maxSize)),
                     keypoints.end() );
}
Is there some error I'm making when passing the KeyPoint vector? Has anybody run into a similar problem?
EDIT:
Here is the mex-file with the additional library "opencv_matlab.hpp" taken from MatlabCentral
#include "opencv_matlab.hpp"
void mexFunction (int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[]) {
    // read command
    char command[128];
    mxGetString(prhs[0], command, 128);
    if (!strcmp(command, "push") || !strcmp(command, "replace")) {
        // check arguments
        if (nrhs != 1+1 && nrhs != 1+2)
            mexErrMsgTxt("1 or 2 inputs required (I1=left image,I2=right image).");
        if (!mxIsUint8(prhs[1]) || mxGetNumberOfDimensions(prhs[1]) != 2)
            mexErrMsgTxt("Input I1 (left image) must be a uint8_t matrix.");
        // determine input/output image properties
        const int *dims1 = mxGetDimensions(prhs[1]);
        const int nDims1 = mxGetNumberOfDimensions(prhs[1]);
        const int rows1 = dims1[0];
        const int cols1 = dims1[1];
        const int channels1 = (nDims1 == 3 ? dims1[2] : 1);
        // Allocate, copy, and convert the input image
        // #note: input is double
        cv::Mat I1_ = cv::Mat::zeros(cv::Size(cols1, rows1), CV_8UC(channels1));
        om::copyMatrixToOpencv<uchar>((unsigned char*)mxGetPr(prhs[1]), I1_);
        // push back single image
        if (nrhs == 1+1) {
            // compute features and put them to ring buffer
            pushBack(I1_, !strcmp(command, "replace"));
        // push back stereo image pair
        } else {
            if (!mxIsUint8(prhs[2]) || mxGetNumberOfDimensions(prhs[2]) != 2)
                mexErrMsgTxt("Input I2 (right image) must be a uint8_t matrix.");
            // determine input/output image properties
            const int *dims2 = mxGetDimensions(prhs[2]);
            const int nDims2 = mxGetNumberOfDimensions(prhs[2]);
            const int rows2 = dims2[0];
            const int cols2 = dims2[1];
            const int channels2 = (nDims2 == 3 ? dims2[2] : 1);
            // Allocate, copy, and convert the input image
            // #note: input is double
            cv::Mat I2_ = cv::Mat::zeros(cv::Size(cols2, rows2), CV_8UC(channels2));
            om::copyMatrixToOpencv<uchar>((unsigned char*)mxGetPr(prhs[2]), I2_);
            // check image size
            if (dims1[0] != dims2[0] || dims1[1] != dims2[1])
                mexErrMsgTxt("Input I1 and I2 must be images of same size.");
            // compute features and put them to ring buffer
            pushBack(I1_, I2_, !strcmp(command, "replace"));
        }
    } else {
        mexPrintf("Unknown command: %s\n", command);
    }
}
And here is an additional part of the main cpp project.
std::vector<cv::KeyPoint> k1c1, k2c1, k1p1, k2p1; //KeyPoints
cv::Mat d1c1, d2c1, d1p1, d2p1; //descriptors

void pushBack (cv::Mat &I1, cv::Mat &I2, const bool replace) {
    // sanity check
    if (I1.empty()) {
        cerr << "ERROR: Image empty!" << endl;
        return;
    }
    if (replace) {
        //if (!k1c1.empty())
        k1c1.clear(); k2c1.clear();
        d1c1.release(); d2c1.release();
    } else {
        k1p1.clear(); k2p1.clear();
        d1p1.release(); d2p1.release();
        k1p1 = k1c1; k2p1 = k2c1;
        d1c1.copyTo(d1p1); d2c1.copyTo(d2p1);
        k1c1.clear(); k2c1.clear();
        d1c1.release(); d2c1.release();
    }
    // compute new features for current frame
    computeFeatures(I1, k1c1, d1c1);
    if (!I2.empty())
        computeFeatures(I2, k2c1, d2c1);
}
And here is how I call the mex-file from Matlab
I1p = imread('\I1.bmp');
I2p = imread('\I2.bmp');
harris_freak('push',I1p,I2p);
Hope this helps...
I hope this is the correct way to give an answer to my own question.
After a couple of days I found a workaround of sorts. Instead of building the mex file in Matlab, which gives the above-mentioned error, I built it in Visual Studio with instructions taken from here.
Now everything works just fine.
It still bothers me not to know how to do it with Matlab, but hey, maybe someone still has an idea.
Thanks to the commenters for taking the time to look through my question!
If you have the Computer Vision System Toolbox then you do not need mex. It includes the detectHarrisFeatures function for detecting Harris corners, and the extractFeatures function, which can compute FREAK descriptors.
Please help me handle this problem:
OpenCV Error: Insufficient memory (Failed to allocate 921604 bytes) in
unknown function, file
........\ocv\opencv\modules\core\src\alloc.cpp, line 52
One of my methods uses cv::Mat::clone() and a pointer. There is a timer that fires every 100 ms, and in the timer event I call this method:
void DialogApplication::filterhijau(const Mat &image, Mat &result) {
    cv::Mat resultfilter = image.clone();
    int nlhijau = image.rows;
    int nchijau = image.cols*image.channels();
    for (int j=0; j<nlhijau; j++) {
        uchar *data2 = resultfilter.ptr<uchar>(j); //address of each row in result
        for (int i=0; i<nchijau; i++) {
            *data2++ = 0;   //element B
            *data2++ = 255; //element G
            *data2++ = 0;   //element R
        }
        // free(data2); //I added this line but the program hung up
    }
    cv::addWeighted(resultfilter, 0.3, image, 0.5, 0, resultfilter);
    result = resultfilter;
}
The clone() method of a cv::Mat performs a deep copy of the data. So the problem is that for each call to filterhijau() a new image is allocated, and after hundreds of calls your application will have occupied hundreds of MBs (if not GBs), thus throwing the Insufficient Memory error.
It seems you need to redesign your current approach so it uses less memory.
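One way to redesign it is to keep a single work buffer alive across timer ticks instead of clone()-ing a fresh image every 100 ms. A rough sketch with std::vector standing in for the Mat pixel buffer (the class and member names are invented for illustration):

```cpp
#include <cassert>
#include <vector>

// Reuse one work buffer across calls instead of allocating per call.
class GreenFilter {
public:
    // Copies the frame into the member buffer; returns true only when
    // the underlying storage had to be (re)allocated.
    bool process(const std::vector<unsigned char>& frame) {
        const unsigned char* before = work_.empty() ? nullptr : work_.data();
        work_.assign(frame.begin(), frame.end()); // reuses capacity if enough
        return work_.data() != before;
    }
private:
    std::vector<unsigned char> work_;  // lives across timer ticks
};
```

The same idea with cv::Mat: make resultfilter a class member and call image.copyTo(resultfilter) each tick; Mat::copyTo reuses the existing allocation whenever the frame size and type are unchanged.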
I faced this error before; I solved it by reducing the size of the images while reading them, sacrificing some resolution.
It was something like this in Python:
# Open the Video
cap = cv2.VideoCapture(videoName + '.mp4')
i = 0
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    frame = cv2.resize(frame, (900, 900))
    # append the frames to the list
    images.append(frame)
    i += 1
cap.release()
N.B. I know it's not the optimal solution to the problem, but it was enough for me.
I have made an intensity meter in a PictureBox. It uses a picture as the dialer background (Dialer.bmp), and the needle is drawn as a line. I am doing this using OpenCV. To update the needle position, I create a thread at form load. The code is as follows:
private: System::Void FocusExposure_Load(System::Object^ sender, System::EventArgs^ e) {
    if (ExposureThreadStatus)
        th->Abort();
    th = gcnew Thread(gcnew ThreadStart(this, &FocusExposure::UpdateIntensity));
    th->Start();
    ExposureThreadStatus = true;
}

void UpdateIntensity()
{
    Intensity_Values data;
    while(1)
    {
        data = ReadImage(focusBuffer);
        System::Drawing::Bitmap ^bmp = drawImageMeter(data.Max);
        this->pictureBox1->Image = this->pictureBox1->Image->FromHbitmap(bmp->GetHbitmap());
        delete bmp;
        Sleep(1000);
    }
}
System::Drawing::Bitmap^ drawImageMeter(float intensity_value)
{
    IplImage *Background = cvLoadImage("Dialer.bmp", 1);
    int width, height;
    if (counter==1)
    {
        width = Background->width;
        height = Background->height;
        counter++;
        needle_center.x = width/2;
        needle_center.y = height/2;
        needle_top.x = needle_center.x;
        needle_top.y = needle_center.y - 140;
    }
    double const PI = 3.14159265358979323;
    int x1 = needle_top.x;
    int y1 = needle_top.y;
    int x0 = needle_center.x;
    int y0 = needle_center.y;
    float angle;
    CurrIntensity = intensity_value;
    angle = CurrIntensity - PreIntensity;
    angle = 0.0703125f * angle; // degrees, not radians
    float radians = angle * (PI / 180.0f); // convert degrees to radians
    if (current_max==1)
    {
        current_max++;
        int N1x1 = needle_top.x;
        int N1y1 = needle_top.y;
        needle1_top.x = ((N1x1-x0) * cos(radians)) - ((N1y1-y0) * sin(radians)) + x0;
        needle1_top.y = ((N1x1-x0) * sin(radians)) + ((N1y1-y0) * cos(radians)) + y0;
    }
    needle_top.x = ((x1-x0) * cos(radians)) - ((y1-y0) * sin(radians)) + x0;
    needle_top.y = ((x1-x0) * sin(radians)) + ((y1-y0) * cos(radians)) + y0;
    cvLine(Background, needle_center, needle1_top, CV_RGB(0, 0, 255), 1, 4, 0);
    cvLine(Background, needle_center, needle_top, CV_RGB(255, 0, 0), 1, 4, 0);
    System::Drawing::Bitmap ^bmp = gcnew System::Drawing::Bitmap(Background->width, Background->height, Background->widthStep, System::Drawing::Imaging::PixelFormat::Format24bppRgb, (System::IntPtr)Background->imageData);
    PreIntensity = CurrIntensity;
    return bmp;
}
This code works and gives the output I require. The only problem is that when I open the form it leaks memory. I have watched it in Task Manager and also used the Intel VTune profiler, which reports a mismatched allocation/deallocation at the following line:
IplImage *Background =cvLoadImage("Dialer.bmp", 1);
We need to reload this image because we draw the line onto it, and when the needle position changes we need the dialer image without the old needle.
Can anybody suggest a solution to this memory leak issue? Any help will be appreciated.
It seems that drawImageMeter() is being called more than once. The problem is that each time cvLoadImage() is executed, it allocates space on the heap for the pixels. So after displaying the image you should release it with cvReleaseImage() so the data gets freed.
A quick and dirty (and horrible) fix would be to make the variable static:
static IplImage* Background =cvLoadImage("Dialer.bmp", 1);
But you should really change the design of your application so that Background is allocated only once. To do that you can either make it a global variable and load it in the main() method, or make it a parameter of drawImageMeter():
System::Drawing::Bitmap^ drawImageMeter(IplImage* Background, float intensity_value)
{
    // code
}
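Either way, the point is that the expensive load happens exactly once. A function-local static gives that guarantee in one line; a small OpenCV-free sketch (the counter and names are invented to make the "loads once" property visible):

```cpp
#include <cassert>
#include <string>

int g_loadCount = 0;  // counts how often the expensive "load" really runs

std::string loadDialerBitmap() {  // stand-in for cvLoadImage("Dialer.bmp", 1)
    ++g_loadCount;
    return "Dialer.bmp pixels";
}

const std::string& background() {
    // Initialized exactly once, on first use, no matter how many times
    // the draw routine runs -- the effect of the `static` fix above.
    static const std::string img = loadDialerBitmap();
    return img;
}
```

Since the dial must be redrawn clean each frame, you would still copy this once-loaded background into a scratch buffer before drawing the needle on it.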