I have a client/server application; my server uses the OpenCV library to do, for example, disparity mapping. The application works fine with StereoSGBM, but with StereoBM I get random crashes when launching with Ctrl+F5 in release mode, i.e. running without the debugger.
The crash is random. With a try/catch I can sometimes catch a bad memory allocation ("failed to allocate 1k bytes"). The call stack doesn't reveal anything useful because the crash is never in the same place: sometimes it's in an imread, sometimes in a malloc, a free, or a Mat::release. Every time it's different, but it always involves memory in some way.
The code is pretty simple:
void disparity_mapping(std::vector<std::string>& _return, const StereoBmValue& BmValue, const ClientShowSelection& clientShowSelection, const std::string& filenameL, const std::string& filenameR)
{
    int ch;
    alg = BmValue.algorithmSelection;

    if (filenameL == "0" || filenameR == "0")
        _return.push_back("0");

    if (filenameL != "0" && filenameR != "0")
    {
        imgL = imread(filenameL, CV_LOAD_IMAGE_GRAYSCALE);
        imgR = imread(filenameR, CV_LOAD_IMAGE_GRAYSCALE);
        _return.push_back("1");

        ch = imgL.channels();

        setAlgValue(BmValue, methodSelection, ch); // setting the values for StereoBM or SGBM
        disp = calculateDisparity(imgL, imgR, alg); // calculating disparity
        normalize(disp, disp8, 0, 255, CV_MINMAX, CV_8U);

        string matAsStringL(imgL.begin<unsigned char>(), imgL.end<unsigned char>());
        _return.push_back(matAsStringL);
        string matAsStringR(imgR.begin<unsigned char>(), imgR.end<unsigned char>());
        _return.push_back(matAsStringR);
        string matAsStringD(disp8.begin<unsigned char>(), disp8.end<unsigned char>());
        _return.push_back(matAsStringD);
    }
}
The two functions that are called:
void setAlgValue(const StereoBmValue BmValue, int methodSelection, int ch)
{
    if (initDisp)
        initDisparity(methodSelection); // initializing bm.init(...), finding the remap information from stereoRectify, etc.

    // Selection: 0 == StereoSGBM, 1 == StereoBM
    int alg = BmValue.algorithmSelection;

    // storing the alg values.
    stereoValue.minDisparity = BmValue.minDisparity;
    stereoValue.disp12MaxDiff = BmValue.disp12MaxDiff;
    stereoValue.SADWindowSize = BmValue.SADWindowSize;
    stereoValue.textureThreshold = BmValue.textureThreshold;
    stereoValue.uniquenessRatio = BmValue.uniquenessRatio;
    stereoValue.numberOfDisparities = BmValue.numberOfDisparities;
    stereoValue.preFilterCap = BmValue.preFilterCap;
    stereoValue.speckleWindowSize = BmValue.speckleWindowSize;
    stereoValue.speckleRange = BmValue.speckleRange;
    stereoValue.preFilterSize = BmValue.preFilterSize;

    if (alg == 1) // set the values in the bm state
    {
        bm.state->roi1 = roi1;
        bm.state->roi2 = roi2;
        bm.state->preFilterCap = stereoValue.preFilterCap;
        bm.state->SADWindowSize = stereoValue.SADWindowSize;
        bm.state->minDisparity = stereoValue.minDisparity;
        bm.state->numberOfDisparities = stereoValue.numberOfDisparities;
        bm.state->textureThreshold = stereoValue.textureThreshold;
        bm.state->uniquenessRatio = stereoValue.uniquenessRatio;
        bm.state->speckleWindowSize = stereoValue.speckleWindowSize;
        bm.state->speckleRange = stereoValue.speckleRange;
        bm.state->disp12MaxDiff = stereoValue.disp12MaxDiff;
        bm.state->preFilterSize = stereoValue.preFilterSize;
    }
    else if (alg == 0) // same for SGBM
    {
        // note: P1/P2 are computed from sgbm.SADWindowSize before it is
        // updated two lines below.
        sgbm.P1 = 8*ch*sgbm.SADWindowSize*sgbm.SADWindowSize;
        sgbm.P2 = 32*ch*sgbm.SADWindowSize*sgbm.SADWindowSize;
        sgbm.preFilterCap = stereoValue.preFilterCap;
        sgbm.SADWindowSize = stereoValue.SADWindowSize;
        sgbm.minDisparity = stereoValue.minDisparity;
        sgbm.numberOfDisparities = stereoValue.numberOfDisparities;
        sgbm.uniquenessRatio = stereoValue.uniquenessRatio;
        sgbm.speckleWindowSize = stereoValue.speckleWindowSize;
        sgbm.speckleRange = stereoValue.speckleRange;
        sgbm.disp12MaxDiff = stereoValue.disp12MaxDiff;
    }
}
and the other one:
Mat calculateDisparity(Mat& imgL, Mat& imgR, int alg)
{
    Mat disparity;

    // remap for rectification (note: the OpenCV docs state that remap
    // cannot operate in-place; here src and dst are the same Mat)
    remap(imgL, imgL, map11, map12, INTER_LINEAR, BORDER_CONSTANT, Scalar());
    remap(imgR, imgR, map21, map22, INTER_LINEAR, BORDER_CONSTANT, Scalar());

    // disparity
    if (alg == 1)
        bm(imgL, imgR, disparity);
    else if (alg == 0)
        sgbm(imgL, imgR, disparity);

    return disparity;
}
So as you can see the code is really simple, but using bm makes everything crash. I'm using the latest OpenCV build for VS9, fully updated. The application is also linked with Apache Thrift, PCL, Eigen, VTK and Boost. The bm/sgbm values are controlled by the client and are correct; I don't get any error in debug, nor in release run under the debugger.
What can it be? Why does one algorithm work while the other one makes the entire application crash? And why only in release without the debugger?
I was having this same issue, and just found out that it crashes with high values of bm.state->textureThreshold. Values from roughly 50 upward are crashing for me.
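If that is the culprit, one defensive option is to validate the client-supplied parameters before they reach bm.state. A minimal sketch: the first two checks are documented StereoBM constraints, while the cap of 50 is only the empirical observation above, not a documented limit:

// Documented StereoBM requirements:
CV_Assert(stereoValue.numberOfDisparities > 0 &&
          stereoValue.numberOfDisparities % 16 == 0); // must be divisible by 16
CV_Assert(stereoValue.SADWindowSize % 2 == 1);        // window size must be odd

// Empirical workaround based on the report above (assumed safe bound):
if (stereoValue.textureThreshold > 50)
    stereoValue.textureThreshold = 50;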
I'm trying to use the Tensorflow C API in a C++ plugin environment, but the segmentation results differ from the Python graph. I was told it may have something to do with the correct casting to float/uint8, because the resulting image looks a bit like a 3x3 grid of the correct image, but as a newbie to C/C++ I don't see where exactly the error is.
It works for easy classification tasks such as MNIST, and for segmentation with grayscale inputs, but it doesn't work for segmentation of RGB images.
We use our own environment for image representations, but it is equivalent to an OpenCV Mat. I transform the image to a tensor like this:
void* tensor_data = image->Buffer().Ptr();
int64_t dims4[] = {1, 512, 512, 3};
int ndims = 4;
// Wrap the raw TF_Tensor* with its matching deleter; assigning the raw
// pointer to a shared_ptr directly would not compile.
std::shared_ptr<TF_Tensor> tensor(
    TF_NewTensor(TF_FLOAT, dims4, ndims, tensor_data,
                 3 * 512 * 512 * sizeof(float), noDealloc, nullptr),
    TF_DeleteTensor);
So maybe the error is here, e.g. if the RGB data is read in the wrong order. But I also tried to segment an image with identical channels, i.e. a grayscale image replicated to three channels, and it still didn't work.
Then I run the model. This part should be correct, since it works for certain tasks, unless there is an error in Tensorflow itself:
//********* Read model
TF_Graph* Graph = TF_NewGraph();
TF_Status* Status = TF_NewStatus();
TF_SessionOptions* SessionOpts = TF_NewSessionOptions();
TF_Buffer* RunOpts = NULL;
const char* saved_model_dir = m_path.c_str(); // Path of the model
const char* tags = "serve"; // default model serving tag; can change in future
int ntags = 1;
TF_Session* Session = TF_LoadSessionFromSavedModel(SessionOpts, RunOpts, saved_model_dir, &tags, ntags, Graph, NULL, Status);
tf_utils::throw_status(Status);
//****** Get input tensor operation
int NumInputs = 1;
TF_Output* Input = (TF_Output*)malloc(sizeof(TF_Output) * NumInputs);
const std::string in_param_name = "input_op:" + std::to_string(0);
const std::string in_op_name = m_params.GetString(in_param_name.c_str(), "").c_str();
TF_Output t0 = {TF_GraphOperationByName(Graph, in_op_name.c_str()), 0};
if(t0.oper == NULL){
printf("ERROR: Failed TF_GraphOperationByName Input\n");
}
Input[0] = t0;
//********* Get Output tensor operation
int NumOutputs = 1;
TF_Output* Output = (TF_Output*)malloc(sizeof(TF_Output) * NumOutputs);
const std::string out_param_name = "output_op:" + std::to_string(0);
const std::string out_op_name = m_params.GetString(out_param_name.c_str(), "").c_str();
TF_Output t2 = {TF_GraphOperationByName(Graph, out_op_name.c_str()), 0};
if(t2.oper == NULL){
printf("ERROR: Failed TF_GraphOperationByName Output\n");
}
Output[0] = t2;
//********* Allocate data for inputs & outputs
TF_Tensor** InputValues = (TF_Tensor**)malloc(sizeof(TF_Tensor*)*NumInputs);
TF_Tensor** OutputValues = (TF_Tensor**)malloc(sizeof(TF_Tensor*)*NumOutputs);
InputValues[0] = tensor.get();
// //Run the Session
TF_SessionRun(Session, NULL, Input, InputValues, NumInputs, Output, OutputValues, NumOutputs, NULL, 0, NULL, Status);
tf_utils::throw_status(Status);
// //Free memory
TF_DeleteGraph(Graph);
TF_DeleteSession(Session, Status);
TF_DeleteSessionOptions(SessionOpts);
TF_DeleteStatus(Status);
std::shared_ptr<TF_Tensor> out_tensor(OutputValues[0], TF_DeleteTensor);
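One detail not addressed in the snippet above: the four arrays allocated with malloc are never released. A minimal cleanup sketch (safe at this point because InputValues[0] is owned by the shared_ptr and OutputValues[0] was just handed to out_tensor):

// Free the helper arrays; the tensors they pointed at are managed elsewhere.
free(Input);
free(Output);
free(InputValues);
free(OutputValues);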
Then I convert it back to an image, where I think the error may be:
const TF_DataType tensor_type = TF_TensorType(out_tensor.get());
itwm_type = &ITWM::IMAGE_GREY_F; //Float image
// Create the image and copy the buffer.
const float* data = reinterpret_cast<float*>(TF_TensorData(out_tensor.get()));
const std::size_t byte_size = TF_TensorByteSize(out_tensor.get());
const std::size_t size = byte_size/sizeof(float);
ITWM::CImage* image = new ITWM::CImage(*itwm_type, ITWM::CSize(size));
memcpy(image->Buffer().Ptr(), data, byte_size);
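Before interpreting the buffer, it may also be worth verifying the output tensor's actual shape; TF_NumDims and TF_Dim are part of the TF C API, and the expected layout (1x512x512xC) is an assumption here:

// Sanity check: dump the output tensor's rank and dimensions.
const int num_dims = TF_NumDims(out_tensor.get());
for (int i = 0; i < num_dims; ++i)
    printf("dim %d: %lld\n", i, (long long)TF_Dim(out_tensor.get(), i));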
I tried casting it to different formats, but the error stays the same or the results are NaN. I also tried changing the input to three grayscale images and stacking them together, but it still didn't work.
I would be very thankful if you can help me find the error!
PS: Sorry that you can't run it and that it's a bit messy; I copied it together from three different plugins.
From the comments:
The Tensorflow C API needs interleaving for RGB images. This means you first need to switch the X and Z axes (here the first axis and the colour channel, giving a (3x512x512) image) and then create the tensor with the normal dimensions (here (512x512x3)). (paraphrased from Mathematicus)
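A minimal sketch of that conversion, assuming the source buffer holds planar (channel-first, CHW) float data and the target is the interleaved (HWC) layout the tensor dimensions describe:

// Reorder planar CHW float data into interleaved HWC order.
const int H = 512, W = 512, C = 3;        // assumed image dimensions
std::vector<float> interleaved(H * W * C);
const float* planar = static_cast<const float*>(tensor_data);
for (int c = 0; c < C; ++c)
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            interleaved[(y * W + x) * C + c] = planar[(c * H + y) * W + x];
// interleaved.data() would then be passed to TF_NewTensor instead of the raw buffer.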
I'm trying to use the CannyEdgeDetectionImageFilter, but the GetPixel() method doesn't seem to be properly referencing the filtered image. I've tried a lot of different tactics to resolve the issue, but the only thing that seems to work is writing the image to disk and reading it back (which is not ideal). My code is below:
typedef itk::Image<unsigned char, 2> ImageType;
typedef itk::Image<float, 2> floatImageType;
...
floatImageType::Pointer to_float(ImageType::Pointer image){
    typedef itk::CastImageFilter<ImageType, floatImageType> CastFilterType;
    CastFilterType::Pointer castToFloat = CastFilterType::New();
    castToFloat->SetInput( image );
    castToFloat->Update();
    return castToFloat->GetOutput();
}
...
ImageType::Pointer check(ImageType::Pointer image){
    typedef itk::ImageDuplicator<ImageType> ImageDuplicatorType;
    typedef itk::RescaleIntensityImageFilter<floatImageType, ImageType> RescaleFilter;
    typedef itk::CannyEdgeDetectionImageFilter<floatImageType, floatImageType> CannyFilter;

    RescaleFilter::Pointer Rescale = RescaleFilter::New();
    CannyFilter::Pointer Canny = CannyFilter::New();
    ImageDuplicatorType::Pointer Duplicator = ImageDuplicatorType::New();
    ImageType::Pointer cannyImage;

    Canny->SetVariance( 20 );
    Canny->SetUpperThreshold( 2 );
    Canny->SetLowerThreshold( 20 );
    Rescale->SetOutputMinimum( 0 );
    Rescale->SetOutputMaximum( 255 );

    Duplicator->SetInputImage(image);
    Duplicator->Update();
    ImageType::Pointer img_copy = Duplicator->GetOutput();
    floatImageType::Pointer floatImage = to_float(img_copy);

    Canny->SetInput(floatImage);
    Rescale->SetInput( Canny->GetOutput() );
    Rescale->Update();
    cannyImage = Rescale->GetOutput();

    /* Insert odd work-around here */

    const ImageType::SizeType img_size = cannyImage->GetLargestPossibleRegion().GetSize();
    itk::Index<2> loc = {{img_size[0]/2, 0}};
    int top_edge = 0;
    bool contin = true;
    for (int i = 0; ((i < img_size[1]) && contin); i++){
        loc[1] = i;
        if (cannyImage->GetPixel(loc) != 0){
            top_edge = i;
            contin = false;
        }
    }
    ...
}
When a pixel value of cannyImage is read later on, it should be either 0 or, on an edge, 255. Instead, it produces values that appear to belong to a grayscale image.
If, however, I include the following code in the section above, it works as one would expect:
std::string fname = "/tmp/canny_" + to_string(getpid()) + ".tmp";
writeImage(cannyImage, fname);
cannyImage = readImage(fname);
(The methods writeImage(ImageType::Pointer image, std::string filename) and ImageType::Pointer readImage(std::string filename) were defined earlier in the program.)
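For reference, a plausible reconstruction of those helpers using ITK's standard file IO (the originals are not shown in the question, so this is an assumption):

#include "itkImageFileReader.h"
#include "itkImageFileWriter.h"

// Hypothetical reconstruction of the IO helpers mentioned above.
void writeImage(ImageType::Pointer image, std::string filename){
    typedef itk::ImageFileWriter<ImageType> WriterType;
    WriterType::Pointer writer = WriterType::New();
    writer->SetFileName(filename);
    writer->SetInput(image);
    writer->Update();
}

ImageType::Pointer readImage(std::string filename){
    typedef itk::ImageFileReader<ImageType> ReaderType;
    ReaderType::Pointer reader = ReaderType::New();
    reader->SetFileName(filename);
    reader->Update();
    return reader->GetOutput();
}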
Does anyone know what's going wrong with my program? Why does pushing it through disk IO cause it to work?
I'm working on an algorithm to find and fill regions in a binarized image. The code works as expected for some images, but I don't know why, after the fourth image I always get this error:
*** Error in `./heli': double free or corruption (out): 0x0000000001ccb610 ***
Aborted (core dumped)
This is my code:
void fillRegion(Mat src, Mat &dst, Point origin, Vec3b color)
{
    int size = 0;
    list<Point> cadena;
    cadena.push_back(origin);
    while(!cadena.empty())
    {
        Point current = cadena.front();
        cadena.pop_front();
        Point top, bot, right, left;
        top = bot = right = left = current;
        top.y -= 1;
        bot.y += 1;
        right.x += 1;
        left.x -= 1;
        Vec3b cero = Vec3b(0,0,0);
        if(top.y >= 0 && dst.at<Vec3b>(top) == cero && src.at<uchar>(top) != 0)
        {
            dst.at<Vec3b>(top) = color;
            cadena.push_back(top);
        }
        if(bot.y <= src.rows && dst.at<Vec3b>(bot) == cero && src.at<uchar>(bot) != 0)
        {
            dst.at<Vec3b>(bot) = color;
            cadena.push_back(bot);
        }
        if(right.x <= src.cols && dst.at<Vec3b>(right) == cero && src.at<uchar>(right) != 0)
        {
            dst.at<Vec3b>(right) = color;
            cadena.push_back(right);
        }
        if(left.y >= 0 && dst.at<Vec3b>(left) == cero && src.at<uchar>(left) != 0)
        {
            dst.at<Vec3b>(left) = color;
            cadena.push_back(left);
        }
    }
}
void segment(Mat src, Mat &dst)
{
    for(int x = 0; x < src.cols; x++)
    {
        for(int y = 0; y < src.rows; y++)
        {
            Point p = Point(x,y);
            if(src.at<uchar>(p) != 0 && dst.at<Vec3b>(p) == Vec3b(0,0,0))
            {
                fillRegion(src, dst, p, getRandomColor());
            }
        }
    }
}
int main(int argc, char *argv[])
{
    namedWindow("Original", WINDOW_NORMAL);
    namedWindow("Resultado", WINDOW_NORMAL);
    vector<string> archivos = vector<string>();
    getdir("../images", archivos);
    int tam = archivos.size();
    for(uint i = 0; i < tam; i++)
    {
        Mat img = imread(archivos[i], CV_LOAD_IMAGE_GRAYSCALE);
        Mat newImg = Mat(img.rows, img.cols, CV_8UC3, Scalar(0,0,0));
        threshold(img, img, 125, 255, THRESH_BINARY);
        imshow("Original", img);
        segment(img, newImg);
        imshow("Resultado", newImg);
        waitKey(0);
    }
}
I call the "Segment" method, which then calls the "fillRegion" method for each region found.
I know the error is at the "fillRegion" method, because if I comment it from "segment", the error is gone, but I just can't find/don't know what's the error on it.
If you have gdb (and you should), do the following:
gdb [your executable]
core [name of core file]
bt (stands for backtrace)
This will tell you where your code crashed.
I don't think it has to do with the copy constructor, other than the fact that it fails to allocate enough memory. As you mentioned, commenting out the fillRegion call in segment avoids the seg fault. This likely means that there is a memory leak in fillRegion that is filling up the allotted memory within four iterations of your program.
While this won't solve your memory leak, I suggest passing Mat src as const Mat& src instead. Passing by reference avoids an unnecessary copy in this case, since you aren't internally modifying anything in src.
My guess is that the memory leak has to do with these two checks:
bot.y <= src.rows
right.x <= src.cols
I believe that they should instead be strictly less than (<) to avoid calling at on an invalid point. When calling at on a point that is outside the current Mat's dimensions, does it resize? If that is the case, then you are constantly resizing these images whenever you hit these boundaries; there's your memory leak.
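To make that concrete, here is a minimal sketch of the corrected tests. (Note that cv::Mat::at does not actually resize; in release builds it performs no bounds checking at all, so an out-of-range access silently corrupts the heap, which matches the double-free abort above. The original code also tests left.y where left.x appears intended.)

// Corrected neighbour checks: strictly '<' on rows/cols, and test left.x.
if(bot.y < src.rows && dst.at<Vec3b>(bot) == cero && src.at<uchar>(bot) != 0) { /* ... */ }
if(right.x < src.cols && dst.at<Vec3b>(right) == cero && src.at<uchar>(right) != 0) { /* ... */ }
if(left.x >= 0 && dst.at<Vec3b>(left) == cero && src.at<uchar>(left) != 0) { /* ... */ }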
One more thing: can getRandomColor() potentially return (0, 0, 0)? If so, this program could run into an infinite loop.
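A tiny sketch of a guard against that case (getRandomColor() itself is not shown in the question, so this wrapper is an assumption about its shape):

// Hypothetical guard: retry until the random colour is not pure black,
// so filled pixels can never be mistaken for unvisited ones.
Vec3b getRandomNonBlackColor()
{
    Vec3b c;
    do {
        c = getRandomColor();
    } while (c == Vec3b(0,0,0));
    return c;
}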
For anyone who comes across this like I did: adding exit(0) as the last line in my code fixed the segfault for me on Ubuntu 16.04.
I want to extract some Harris corners from an image and compute FREAK descriptors for them. Here is how I try to do it:
(The passed variables are globally defined.)
void computeFeatures(cv::Mat &src, std::vector<cv::KeyPoint> &keypoints, cv::Mat &desc) {
    cv::Mat featureSpace;
    featureSpace = cv::Mat::zeros(src.size(), CV_32FC1);

    //- Detector parameters
    int blockSize = 3;
    int apertureSize = 3;
    double k = 0.04;

    //- Detecting corners
    cornerHarris(src, featureSpace, blockSize, apertureSize, k, cv::BORDER_DEFAULT);

    //- Thresholding featureSpace
    keypoints.clear();
    nonMaximumSuppression(featureSpace, keypoints, param.nms_n);

    //- compute FREAK-descriptor
    cv::FREAK freak(false, false, 22.0f, 4);
    freak.compute(src, keypoints, desc);
}
I can compile it with Visual Studio 12 as well as with Matlab R2013b via mex. When I run it stand-alone (.exe) it works just fine, but when I try to execute it via Matlab it fails with this message:
A buffer overrun has occurred in MATLAB.exe which has corrupted the
program's internal state. Press Break to debug the program or Continue
to terminate the program.
I mexed with the debug option '-g' and attached Visual Studio to Matlab to be able to get closer to the error:
After nonMaximumSuppression() the size of keypoints is 233; when I step into freak.compute() the size is suddenly 83, with "random" values stored.
The actual error then occurs in KeyPointsFilter::runByKeypointSize, when keypoints should be erased, in keypoint.cpp line 256:
void KeyPointsFilter::runByKeypointSize( vector<KeyPoint>& keypoints, float minSize, float maxSize )
{
    CV_Assert( minSize >= 0 );
    CV_Assert( maxSize >= 0 );
    CV_Assert( minSize <= maxSize );

    keypoints.erase( std::remove_if(keypoints.begin(), keypoints.end(), SizePredicate(minSize, maxSize)),
                     keypoints.end() );
}
Is there some error I'm making when passing the KeyPoint vector? Has anybody run into a similar problem?
EDIT:
Here is the mex file, with the additional library "opencv_matlab.hpp" taken from MATLAB Central:
#include "opencv_matlab.hpp"
void mexFunction (int nlhs,mxArray *plhs[],int nrhs,const mxArray *prhs[]) {
// read command
char command[128];
mxGetString(prhs[0],command,128);
if (!strcmp(command,"push") || !strcmp(command,"replace")) {
// check arguments
if (nrhs!=1+1 && nrhs!=1+2)
mexErrMsgTxt("1 or 2 inputs required (I1=left image,I2=right image).");
if (!mxIsUint8(prhs[1]) || mxGetNumberOfDimensions(prhs[1])!=2)
mexErrMsgTxt("Input I1 (left image) must be a uint8_t matrix.");
// determine input/output image properties
const int *dims1 = mxGetDimensions(prhs[1]);
const int nDims1 = mxGetNumberOfDimensions(prhs[1]);
const int rows1 = dims1[0];
const int cols1 = dims1[1];
const int channels1 = (nDims1 == 3 ? dims1[2] : 1);
// Allocate, copy, and convert the input image
// #note: input is double
cv::Mat I1_ = cv::Mat::zeros(cv::Size(cols1, rows1), CV_8UC(channels1));
om::copyMatrixToOpencv<uchar>((unsigned char*)mxGetPr(prhs[1]), I1_);
// push back single image
if (nrhs==1+1) {
// compute features and put them to ring buffer
pushBack(I1_,!strcmp(command,"replace"));
// push back stereo image pair
} else {
if (!mxIsUint8(prhs[2]) || mxGetNumberOfDimensions(prhs[2])!=2)
mexErrMsgTxt("Input I2 (right image) must be a uint8_t matrix.");
// determine input/output image properties
const int *dims2 = mxGetDimensions(prhs[2]);
const int nDims2 = mxGetNumberOfDimensions(prhs[2]);
const int rows2 = dims2[0];
const int cols2 = dims2[1];
const int channels2 = (nDims2 == 3 ? dims2[2] : 1);
// Allocate, copy, and convert the input image
// #note: input is double
cv::Mat I2_ = cv::Mat::zeros(cv::Size(cols2, rows2), CV_8UC(channels2));
om::copyMatrixToOpencv<uchar>((unsigned char*)mxGetPr(prhs[2]), I2_);
// check image size
if (dims1_[0]!=dims2_[0] || dims1_[1]!=dims2_[1])
mexErrMsgTxt("Input I1 and I2 must be images of same size.");
// compute features and put them to ring buffer
pushBack(I1_,I2_,!strcmp(command,"replace"));
}
}else {
mexPrintf("Unknown command: %s\n",command);
}
}
And here is the relevant part of the main cpp project:
std::vector<cv::KeyPoint> k1c1, k2c1, k1p1, k2p1; // KeyPoints
cv::Mat d1c1, d2c1, d1p1, d2p1;                   // descriptors

void pushBack(cv::Mat &I1, cv::Mat &I2, const bool replace) {
    // sanity check
    if (I1.empty()) {
        cerr << "ERROR: Image empty!" << endl;
        return;
    }

    if (replace) {
        //if (!k1c1.empty())
        k1c1.clear(); k2c1.clear();
        d1c1.release(); d2c1.release();
    } else {
        k1p1.clear(); k2p1.clear();
        d1p1.release(); d2p1.release();
        k1p1 = k1c1; k2p1 = k2c1;
        d1c1.copyTo(d1p1); d2c1.copyTo(d2p1);
        k1c1.clear(); k2c1.clear();
        d1c1.release(); d2c1.release();
    }

    // compute new features for current frame
    computeFeatures(I1, k1c1, d1c1);
    if (!I2.empty())
        computeFeatures(I2, k2c1, d2c1);
}
And here is how I call the mex file from Matlab:
I1p = imread('\I1.bmp');
I2p = imread('\I2.bmp');
harris_freak('push',I1p,I2p);
Hope this helps...
I hope this is the correct way to give an answer to my own question.
After a couple of days I found kind of a workaround. Instead of building the mex file in Matlab, which gives the above-mentioned error, I built it in Visual Studio with instructions taken from here.
Now everything works just fine.
It kind of bothers me not to know how to do it with Matlab, but hey, maybe someone still has an idea.
Thanks to the commenters for taking the time to look through my question!
If you have the Computer Vision System Toolbox then you do not need mex. It includes the detectHarrisFeatures function for detecting Harris corners, and the extractFeatures function, which can compute FREAK descriptors.
I'm doing online destructive clustering (clusters replace clustered objects) on a list of class instances (std::list).
Background
My list of current percepUnits is std::list<percepUnit> units;, and for each iteration I get a new list of input percepUnits, std::list<percepUnit> scratch;, that need to be clustered with the units.
I want to maintain a fixed number of percepUnits (so units.size() is constant), so for each new scratch percepUnit I need to merge it with the nearest percepUnit in units. The following code snippet builds a list (dists) of structures (percepUnitDist) that contain pointers to each pair of items in scratch and units (percepDist.scratchUnit = &(*scratchUnit); and percepDist.unit = &(*unit);) together with their distance. Additionally, for each item in scratch I keep track of which item in units has the least distance (minDists).
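percepUnitDist itself is not defined anywhere in the post; from the description, it presumably looks something like this:

// Assumed shape of percepUnitDist, reconstructed from the description above.
struct percepUnitDist {
    percepUnit *scratchUnit; // pointer into the scratch list
    percepUnit *unit;        // pointer into the units list
    float dist;              // cached distance between the pair
};

The snippet itself: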
// For every scratch percepUnit:
for (scratchUnit = scratch.begin(); scratchUnit != scratch.end(); scratchUnit++) {
    float minDist = 2025.1172; // This is the max possible distance in unnormalized CIELuv, and much larger than the normalized dist.
    // For every percepUnit:
    for (unit = units.begin(); unit != units.end(); unit++) {
        // compare pairs
        float dist = featureDist(*scratchUnit, *unit, FGBG);
        //cout << "distance: " << dist << endl;

        // Put pairs in a structure that caches their distances
        percepUnitDist percepDist;
        percepDist.scratchUnit = &(*scratchUnit); // address of what scratchUnit points to.
        percepDist.unit = &(*unit);
        percepDist.dist = dist;

        // Figure out the percepUnit that is closest to this scratchUnit.
        if (dist < minDist)
            minDist = dist;

        dists.push_back(percepDist); // append dist struct
    }
    minDists.push_back(minDist); // append the min distance to the nearest percepUnit for this particular scratchUnit.
}
So now I just need to loop through the percepUnitDist items in dists and match the distances with the minimum distances to figure out which percepUnit in scratch should be merged with which percepUnit in units. The merging process mergePerceps() creates a new percepUnit which is a weighted average of the "parent" percepUnits in scratch and units.
Question
I want to replace the instance in the units list with the new percepUnit constructed by mergePerceps(), but I would like to do so in the context of looping through the percepUnitDists. This is my current code:
// Loop through all dists and merge all the closest pairs.
for (distIter = dists.begin(); distIter != dists.end(); distIter++) {
    // Loop through all minDists for each scratchUnit.
    for (minDistsIter = minDists.begin(); minDistsIter != minDists.end(); minDistsIter++) {
        // if this is the closest cluster, and the closest cluster has not
        // already been merged, and the scratch has not already been merged.
        if (*minDistsIter == distIter->dist and not distIter->scratchUnit->remove) {
            percepUnit newUnit;
            mergePerceps(*(distIter->scratchUnit), *(distIter->unit), newUnit, FGBG);
            *(distIter->unit) = newUnit; // replace the cluster with the new merged version.
            distIter->scratchUnit->remove = true;
        }
    }
}
I thought I could replace the instance in units via the percepUnitDist pointer with the new percepUnit instance using *(distIter->unit) = newUnit;, but that does not seem to be working: I'm seeing a memory leak, implying the instances in units are not getting replaced.
How do I delete the percepUnit in the units list and replace it with a new percepUnit instance such that the new unit is located in the same location?
EDIT1
Here is the percepUnit class. Note the cv::Mat members. Following are the mergePerceps() function and the mergeImages() function on which it depends:
// Function to construct an accumulation.
void clustering::mergeImages(Mat &scratch, Mat &unit, cv::Mat &merged, const string maskOrImage, const string FGBG, const float scratchWeight, const float unitWeight) {
    int width, height, type = CV_8UC3;
    Mat scratchImagePad, unitImagePad, scratchImage, unitImage;

    // use the resolution and aspect of the largest of the pair.
    if (unit.cols > scratch.cols)
        width = unit.cols;
    else
        width = scratch.cols;

    if (unit.rows > scratch.rows)
        height = unit.rows;
    else
        height = scratch.rows;

    if (maskOrImage == "mask")
        type = CV_8UC1; // single channel mask
    else if (maskOrImage == "image")
        type = CV_8UC3; // three channel image
    else
        cout << "maskOrImage is not 'mask' or 'image'\n";

    merged = Mat(height, width, type, Scalar::all(0));
    scratchImagePad = Mat(height, width, type, Scalar::all(0));
    unitImagePad = Mat(height, width, type, Scalar::all(0));

    // weight images before summation.
    // because these pass by reference, they mess up the images in memory!
    scratch *= scratchWeight;
    unit *= unitWeight;

    // copy images into padded images.
    scratch.copyTo(scratchImagePad(Rect((scratchImagePad.cols-scratch.cols)/2,
                                        (scratchImagePad.rows-scratch.rows)/2,
                                        scratch.cols,
                                        scratch.rows)));
    unit.copyTo(unitImagePad(Rect((unitImagePad.cols-unit.cols)/2,
                                  (unitImagePad.rows-unit.rows)/2,
                                  unit.cols,
                                  unit.rows)));

    merged = scratchImagePad + unitImagePad;
}
// Merge two perceps and return a new percept to replace them.
void clustering::mergePerceps(percepUnit scratch, percepUnit unit, percepUnit &mergedUnit, const string FGBG) {
    Mat accumulation;
    Mat accumulationMask;
    Mat meanColour;
    int x, y, w, h, area;
    float l, u, v;
    int numMerges = 0;
    std::vector<float> featuresVar; // Normalized, Sum, Variance.
    //float featuresVarMin, featuresVarMax; // min and max variance across all features.
    float scratchWeight, unitWeight;

    if (FGBG == "FG") {
        // foreground percepts don't get merged as much.
        scratchWeight = 0.65;
        unitWeight = 1 - scratchWeight;
    } else {
        scratchWeight = 0.85;
        unitWeight = 1 - scratchWeight;
    }

    // Images. TODO remove the meanColour if need be.
    mergeImages(scratch.image, unit.image, accumulation, "image", FGBG, scratchWeight, unitWeight);
    mergeImages(scratch.mask, unit.mask, accumulationMask, "mask", FGBG, scratchWeight, unitWeight);
    mergeImages(scratch.meanColour, unit.meanColour, meanColour, "image", "FG", scratchWeight, unitWeight); // merge images

    // Position and size.
    x = (scratch.x1 * scratchWeight) + (unit.x1 * unitWeight);
    y = (scratch.y1 * scratchWeight) + (unit.y1 * unitWeight);
    w = (scratch.w * scratchWeight) + (unit.w * unitWeight);
    h = (scratch.h * scratchWeight) + (unit.h * unitWeight);

    // area
    area = (scratch.area * scratchWeight) + (unit.area * unitWeight);

    // colour
    l = (scratch.l * scratchWeight) + (unit.l * unitWeight);
    u = (scratch.u * scratchWeight) + (unit.u * unitWeight);
    v = (scratch.v * scratchWeight) + (unit.v * unitWeight);

    // Number of merges
    if (scratch.numMerges < 1 and unit.numMerges < 1) { // both units are patches
        numMerges = 1;
    } else if (scratch.numMerges < 1 and unit.numMerges >= 1) { // unit A is a patch, B a percept
        numMerges = unit.numMerges + 1;
    } else if (scratch.numMerges >= 1 and unit.numMerges < 1) { // unit A is a percept, B a patch.
        numMerges = scratch.numMerges + 1;
        cout << "merged scratch??" << endl;
        // TODO this may be an impossible case.
    } else { // both units are percepts
        numMerges = scratch.numMerges + unit.numMerges;
        cout << "Merging two already merged Percepts" << endl;
        // TODO this may be an impossible case.
    }

    // Create unit.
    mergedUnit = percepUnit(accumulation, accumulationMask, x, y, w, h, area); // time is the earliest value in times?
    mergedUnit.l = l; // members not in the constructor.
    mergedUnit.u = u;
    mergedUnit.v = v;
    mergedUnit.numMerges = numMerges;
    mergedUnit.meanColour = meanColour;
    mergedUnit.pActivated = unit.pActivated; // new clusters retain parent's history of activation.
    mergedUnit.scratch = false;
    mergedUnit.habituation = unit.habituation; // we inherit the habituation of the cluster we merged with.
}
EDIT2
Changing the copy and assignment operators had performance side effects and did not seem to resolve the problem, so I've added a custom function to do the replacement, which, just like the copy operator, makes copies of each member and makes sure those copies are deep. The problem is that I still end up with a leak.
So I've changed this line: *(distIter->unit) = newUnit;
to this: (*(distIter->unit)).clone(newUnit)
Where the clone method is as follows:
// Deep Copy of members
void percepUnit::clone(const percepUnit &source) {
// Deep copy of Mats
this->image = source.image.clone();
this->mask = source.mask.clone();
this->alphaImage = source.alphaImage.clone();
this->meanColour = source.meanColour.clone();
// shallow copies of everything else
this->alpha = source.alpha;
this->fadingIn = source.fadingIn;
this->fadingHold = source.fadingHold;
this->fadingOut = source.fadingOut;
this->l = source.l;
this->u = source.u;
this->v = source.v;
this->x1 = source.x1;
this->y1 = source.y1;
this->w = source.w;
this->h = source.h;
this->x2 = source.x2;
this->y2 = source.y2;
this->cx = source.cx;
this->cy = source.cy;
this->numMerges = source.numMerges;
this->id = source.id;
this->area = source.area;
this->features = source.features;
this->featuresNorm = source.featuresNorm;
this->remove = source.remove;
this->fgKnockout = source.fgKnockout;
this->colourCalculated = source.colourCalculated;
this->normalized = source.normalized;
this->activation = source.activation;
this->activated = source.activated;
this->pActivated = source.pActivated;
this->habituation = source.habituation;
this->scratch = source.scratch;
this->FGBG = source.FGBG;
}
And yet I still see a memory increase. The increase does not happen if I comment out that single replacement line, so I'm still stuck.
EDIT3
I can prevent memory from increasing if I disable the cv::Mat cloning code in the function above:
// Deep Copy of members
void percepUnit::clone(const percepUnit &source) {
/* try releasing Mats first?
// No effect on memory increase, but the refCount is decremented.
this->image.release();
this->mask.release();
this->alphaImage.release();
this->meanColour.release();*/
/* Deep copy of Mats
this->image = source.image.clone();
this->mask = source.mask.clone();
this->alphaImage = source.alphaImage.clone();
this->meanColour = source.meanColour.clone();*/
// shallow copies of everything else
this->alpha = source.alpha;
this->fadingIn = source.fadingIn;
this->fadingHold = source.fadingHold;
this->fadingOut = source.fadingOut;
this->l = source.l;
this->u = source.u;
this->v = source.v;
this->x1 = source.x1;
this->y1 = source.y1;
this->w = source.w;
this->h = source.h;
this->x2 = source.x2;
this->y2 = source.y2;
this->cx = source.cx;
this->cy = source.cy;
this->numMerges = source.numMerges;
this->id = source.id;
this->area = source.area;
this->features = source.features;
this->featuresNorm = source.featuresNorm;
this->remove = source.remove;
this->fgKnockout = source.fgKnockout;
this->colourCalculated = source.colourCalculated;
this->normalized = source.normalized;
this->activation = source.activation;
this->activated = source.activated;
this->pActivated = source.pActivated;
this->habituation = source.habituation;
this->scratch = source.scratch;
this->FGBG = source.FGBG;
}
EDIT4
While I still can't explain this issue, I did notice another hint: the leak can also be stopped if I don't normalize the features I use to cluster via featureDist() (while continuing to clone the cv::Mats). The really odd thing is that I rewrote that code entirely and the problem still persists.
Here is the featureDist function:
float clustering::featureDist(percepUnit unitA, percepUnit unitB, const string FGBG) {
    float distance = 0;

    if (FGBG == "BG") {
        for (unsigned int i = 0; i < unitA.featuresNorm.rows; i++) {
            distance += pow(abs(unitA.featuresNorm.at<float>(i) - unitB.featuresNorm.at<float>(i)), 0.5);
            //cout << "unitA.featuresNorm[" << i << "]: " << unitA.featuresNorm[i] << endl;
            //cout << "unitB.featuresNorm[" << i << "]: " << unitB.featuresNorm[i] << endl;
        }
    // for FG, don't use normalized colour features.
    // TODO To include the area use i=4
    } else if (FGBG == "FG") {
        for (unsigned int i = 4; i < unitA.features.rows; i++) {
            distance += pow(abs(unitA.features.at<float>(i) - unitB.features.at<float>(i)), 0.5);
        }
    } else {
        cout << "FGBG argument was not FG or BG, returning 0." << endl;
        return 0;
    }

    return pow(distance, 2);
}
The features used to be a vector of floats, so the normalization code was as follows:
void clustering::normalize(list<percepUnit> &scratch, list<percepUnit> &units) {
    list<percepUnit>::iterator unit;
    list<percepUnit*>::iterator unitPtr;
    vector<float> min, max;
    list<percepUnit*> masterList; // list of pointers.

    // generate pointers
    for (unit = scratch.begin(); unit != scratch.end(); unit++)
        masterList.push_back(&(*unit)); // add pointer to what unit points to.
    for (unit = units.begin(); unit != units.end(); unit++)
        masterList.push_back(&(*unit)); // add pointer to what unit points to.

    int numFeatures = masterList.front()->features.size(); // all percepts have the same number of features.
    min.resize(numFeatures); // allocate for the number of features we have.
    max.resize(numFeatures);

    // Loop through all units to get feature values
    for (int i = 0; i < numFeatures; i++) {
        min[i] = masterList.front()->features[i]; // starting point.
        max[i] = min[i];

        // calculate min and max for each feature.
        for (unitPtr = masterList.begin(); unitPtr != masterList.end(); unitPtr++) {
            if ((*unitPtr)->features[i] < min[i])
                min[i] = (*unitPtr)->features[i];
            if ((*unitPtr)->features[i] > max[i])
                max[i] = (*unitPtr)->features[i];
        }
    }

    // Normalize features according to min/max.
    for (int i = 0; i < numFeatures; i++) {
        for (unitPtr = masterList.begin(); unitPtr != masterList.end(); unitPtr++) {
            (*unitPtr)->featuresNorm[i] = ((*unitPtr)->features[i] - min[i]) / (max[i] - min[i]);
            (*unitPtr)->normalized = true;
        }
    }
}
I changed the features type to a cv::Mat so I could use the OpenCV normalization function, and rewrote the normalization function as follows:
void clustering::normalize(list<percepUnit> &scratch, list<percepUnit> &units) {
    Mat featureMat = Mat(1, units.size() + scratch.size(), CV_32FC1, Scalar(0));
    list<percepUnit>::iterator unit;

    // For each feature
    for (int i = 0; i < units.begin()->features.rows; i++) {

        // for each unit in units
        int j = 0;
        float value;
        for (unit = units.begin(); unit != units.end(); unit++) {
            // Populate featureMat; j is the unit index, i is the feature index.
            value = unit->features.at<float>(i);
            featureMat.at<float>(j) = value;
            j++;
        }

        // for each unit in scratch
        for (unit = scratch.begin(); unit != scratch.end(); unit++) {
            // Populate featureMat; j is the unit index, i is the feature index.
            value = unit->features.at<float>(i);
            featureMat.at<float>(j) = value;
            j++;
        }

        // Normalize this featureMat in place
        cv::normalize(featureMat, featureMat, 0, 1, NORM_MINMAX);

        // set normalized values in percepUnits from featureMat
        // for each unit in units
        j = 0;
        for (unit = units.begin(); unit != units.end(); unit++) {
            // Populate percepUnit featuresNorm; j is the unit index, i is the feature index.
            value = featureMat.at<float>(j);
            unit->featuresNorm.at<float>(i) = value;
            j++;
        }

        // for each unit in scratch
        for (unit = scratch.begin(); unit != scratch.end(); unit++) {
            // Populate percepUnit featuresNorm; j is the unit index, i is the feature index.
            value = featureMat.at<float>(j);
            unit->featuresNorm.at<float>(i) = value;
            j++;
        }
    }
}
I can't understand what the interaction between mergePerceps() and normalization is, especially since normalization is an entirely rewritten function.
Update
Massif and my /proc memory reporting don't agree: Massif says normalization has no effect on memory usage, and only commenting out the percepUnit::clone() operation bypasses the leak.
Here is all the code, in case the interaction is somewhere else that I'm missing.
Here is another version of the same code with the dependence on OpenCV GPU removed, to facilitate testing...
It was recommended by Nghia (on the OpenCV forum) that I try making the percepts a constant size. Sure enough, if I fix the dimensions and type of the cv::Mat members of percepUnit, the leak disappears.
So it seems to me this is a bug in OpenCV that affects calling clone() and copyTo() on Mats of different sizes that are class members. So far I have been unable to reproduce it in a simple program. The leak does seem small enough that it may be the headers leaking rather than the underlying image data.
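For reference, a sketch of what fixing the size might look like (the real percepUnit constructor is not shown in the post, and FIXED_ROWS/FIXED_COLS are placeholder constants):

// Hypothetical sketch: allocate every cv::Mat member once, with a fixed
// size and type, so later copyTo() calls reuse the existing buffers
// (copyTo only reallocates when the destination size/type mismatches).
percepUnit::percepUnit()
    : image(FIXED_ROWS, FIXED_COLS, CV_8UC3, cv::Scalar::all(0)),
      mask(FIXED_ROWS, FIXED_COLS, CV_8UC1, cv::Scalar::all(0)),
      alphaImage(FIXED_ROWS, FIXED_COLS, CV_8UC3, cv::Scalar::all(0)),
      meanColour(FIXED_ROWS, FIXED_COLS, CV_8UC3, cv::Scalar::all(0)) {}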