What is wrong with this MLP backpropagation implementation? - c++

I am writing an MLP neural network in C++, and I am struggling with backpropagation. My implementation follows this article closely, but I've done something wrong and can't spot the problem. My Matrix class confirms that there are no mismatched dimensions in any of the Matrix calculations, but the output always seems to approach zero or some variant of infinity. Is this the "vanishing" or "exploding" gradients problem mentioned here, or is something else going wrong?
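(One way to tell those two failure modes apart is to log the largest absolute gradient entry each training step: steady growth toward infinity suggests exploding gradients, steady decay toward zero suggests vanishing ones. A minimal sketch against a plain buffer, since the Matrix internals aren't shown:)
// Largest absolute value in a gradient buffer; log this once per training
// step and watch the trend. Needs <algorithm> and <cmath>.
double maxAbsEntry(const double* data, size_t n) {
    double m = 0.0;
    for (size_t i = 0; i < n; i++)
        m = std::max(m, std::fabs(data[i]));
    return m;
}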
Here is my activation function and its derivative:
double sigmoid(double d) {
    return 1 / (1 + exp(-d));
}

double dsigmoid(double d) {
    return sigmoid(d) * (1 - sigmoid(d));
}
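(As an aside: since layer[i] already stores sigmoid(z), the same derivative can be computed straight from the stored activation instead of re-evaluating the forward product in the backward pass, for example:)
// Mathematically identical derivative, computed from an already-activated
// value a = sigmoid(z), e.g. a value stored in layer[i].
double dsigmoid_from_output(double a) {
    return a * (1 - a);
}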
Here is my training algorithm:
void KNN::train(const Matrix& input, const Matrix& target) {
    this->layer[0] = input;
    for(uint i = 1; i <= this->num_depth+1; i++) {
        this->layer[i] = Matrix::multiply(this->weights[i-1], this->layer[i-1]);
        this->layer[i] = Matrix::function(this->layer[i], sigmoid);
    }
    this->deltas[this->num_depth+1] = Matrix::multiply(Matrix::subtract(this->layer[this->num_depth+1], target), Matrix::function(Matrix::multiply(this->weights[this->num_depth], this->layer[this->num_depth]), dsigmoid), true);
    this->gradients[this->num_depth+1] = Matrix::multiply(this->deltas[this->num_depth+1], Matrix::transpose(this->layer[this->num_depth]));
    this->weights[this->num_depth] = Matrix::subtract(this->weights[this->num_depth], Matrix::multiply(Matrix::multiply(this->weights[this->num_depth], this->learning_rate), this->gradients[this->num_depth+1], true));
    for(int i = this->num_depth; i > 0; i--) {
        this->deltas[i] = Matrix::multiply(Matrix::multiply(Matrix::transpose(this->weights[i]), this->deltas[i+1]), Matrix::function(Matrix::multiply(this->weights[i-1], this->layer[i-1]), dsigmoid), true);
        this->gradients[i] = Matrix::multiply(this->deltas[i], Matrix::transpose(this->layer[i-1]));
        this->weights[i-1] = Matrix::subtract(this->weights[i-1], Matrix::multiply(Matrix::multiply(this->weights[i-1], this->learning_rate), this->gradients[i], true));
    }
}
The third argument to Matrix::multiply tells it whether to use the Hadamard (element-wise) product instead of standard matrix multiplication (default is false). this->num_depth is the number of hidden layers.
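(For reference, the textbook gradient-descent step scales the gradient only by the scalar learning rate, W := W − η·(δ·aᵀ); sketched against the same (assumed) Matrix API, a single-layer update would look like this:)
// Reference SGD step for one layer, using the scalar-multiply overload the
// question's code already uses elsewhere: the update term is
// learning_rate * gradient, with no other factors.
Matrix sgd_step(const Matrix& W, const Matrix& gradient, double learning_rate) {
    return Matrix::subtract(W, Matrix::multiply(gradient, learning_rate));
}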
Adding biases seems to do... something, but the output almost always tends towards zero.
void KNN::train(const Matrix& input, const Matrix& target) {
    this->layer[0] = input;
    for(uint i = 1; i <= this->num_depth+1; i++) {
        this->layer[i] = Matrix::multiply(this->weights[i-1], this->layer[i-1]);
        this->layer[i] = Matrix::add(this->layer[i], this->biases[i-1]);
        this->layer[i] = Matrix::function(this->layer[i], this->activation);
    }
    this->deltas[this->num_depth+1] = Matrix::multiply(Matrix::subtract(this->layer[this->num_depth+1], target), Matrix::function(Matrix::multiply(this->weights[this->num_depth], this->layer[this->num_depth]), this->dactivation), true);
    this->gradients[this->num_depth+1] = Matrix::multiply(this->deltas[this->num_depth+1], Matrix::transpose(this->layer[this->num_depth]));
    this->weights[this->num_depth] = Matrix::subtract(this->weights[this->num_depth], Matrix::multiply(Matrix::multiply(this->weights[this->num_depth], this->learning_rate), this->gradients[this->num_depth+1], true));
    this->biases[this->num_depth] = Matrix::subtract(this->biases[this->num_depth], Matrix::multiply(this->deltas[this->num_depth+1], this->learning_rate * .5));
    for(uint i = this->num_depth+1 -1; i > 0; i--) {
        this->deltas[i] = Matrix::multiply(Matrix::multiply(Matrix::transpose(this->weights[i+1 -1]), this->deltas[i+1]), Matrix::function(Matrix::multiply(this->weights[i-1], this->layer[i-1]), this->dactivation), true);
        this->gradients[i] = Matrix::multiply(this->deltas[i], Matrix::transpose(this->layer[i-1]));
        this->weights[i-1] = Matrix::subtract(this->weights[i-1], Matrix::multiply(Matrix::multiply(this->weights[i-1], this->learning_rate), this->gradients[i], true));
        this->biases[i-1] = Matrix::subtract(this->biases[i-1], Matrix::multiply(this->deltas[i], this->learning_rate * .5));
    }
}

Related

Outline of pixels after detecting object (without convex hull)

The idea is to use grabcut (OpenCV) to detect the image inside a rectangle and create a geometry with Direct2D.
My test image is this:
After performing the grabcut, I get this image:
the idea is to outline it. I could use an opacity brush to exclude it from the background, but I want to use a geometric brush in order to be able to append/widen/combine geometries on it like all the other selections in my editor (polygon, lasso, rectangle, etc.).
If I apply the convex hull algorithm to the points, I get this:
Which of course is not desired for my case. How do I outline the image?
After getting the image from the grabcut, I keep the points based on luminance:
DWORD* pixels = ...
for (UINT y = 0; y < he; y++)
{
    for (UINT x = 0; x < wi; x++)
    {
        DWORD& col = pixels[y * wi + x];
        auto lumthis = lum(col);
        if (lumthis > Lum_Threshold)
        {
            points.push_back({x, y});
        }
    }
}
Then I sort the points on Y and X:
std::sort(points.begin(), points.end(), [](D2D1_POINT_2F p1, D2D1_POINT_2F p2) -> bool
{
    if (p1.y < p2.y)
        return true;
    if ((int)p1.y == (int)p2.y && p1.x < p2.x)
        return true;
    return false;
});
Then, for each line (traversing the above point array from top Y to bottom Y), I create "groups" for each line:
struct SECTION
{
    float left = 0, right = 0;
};

auto findgaps = [](D2D1_POINT_2F* p, size_t n) -> std::vector<SECTION>
{
    std::vector<SECTION> j;
    SECTION* jj = 0;
    for (size_t i = 0; i < n; i++)
    {
        if (i == 0)
        {
            SECTION jp;
            jp.left = p[i].x;
            jp.right = p[i].x;
            j.push_back(jp);
            jj = &j[j.size() - 1];
            continue;
        }
        if ((p[i].x - jj->right) < 1.5f)
        {
            jj->right = p[i].x;
        }
        else
        {
            SECTION jp;
            jp.left = p[i].x;
            jp.right = p[i].x;
            j.push_back(jp);
            jj = &j[j.size() - 1];
        }
    }
    return j;
};
I'm stuck at this point. I know that from an arbitrary set of points many polygons are possible, but in my case the points have defined what's "left" and what's "right". How would I proceed from here?
For anyone interested, the solution is OpenCV contours. Working example here.
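(A minimal sketch of that approach, assuming the grabcut result has been thresholded into a single-channel binary cv::Mat called mask:)
std::vector<std::vector<cv::Point>> contours;
// RETR_EXTERNAL keeps only the outermost outline of each blob;
// CHAIN_APPROX_SIMPLE compresses straight runs of boundary points.
cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
// Each contours[i] is an ordered polygon that can be fed to a geometry sink.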

OpenCascade BRepFeat_MakeCylindricalHole() function

Currently I'm working on a project to drill cylindrical holes into a given shape.
To do that I used the BRepFeat_MakeCylindricalHole() function with Standard_EXPORT void PerformThruNext (const Standard_Real Radius, const Standard_Boolean WithControl = Standard_True);
because in this scenario I don't have the depth of the cylinder. Using PerformThruNext, it drills from the top face to the bottom face.
The user will be given the start and the end points, and holes will be drilled using those.
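(Distilled from the full code below, the basic call sequence is:)
// Basic drilling sequence (sketch): position the feature on an axis, drill
// through to the next face, and keep the result only on success. Note that
// PerformThruNext takes a radius, hence the diameter/2.0.
BRepFeat_MakeCylindricalHole drilled;
drilled.Init(DrilledShape, location);    // location is a gp_Ax1 on the shape
drilled.PerformThruNext(diameter / 2.0);
if (drilled.Status() == BRepFeat_NoError)
    DrilledShape = drilled.Shape();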
1. When the start and end points have a decimal fraction part (gp_Pnt StartPnt(328.547, -4.325, 97.412); gp_Pnt EndPnt(291.841, -7.586, 101.651);), it doesn't drill the face correctly.
2. When the diameter gets to a higher value, the same thing happens.
Following is my code; any help would be highly appreciated. Thanks!
void COCCAPPDoc::OnBox2()
{
    ofstream out;
    out.open("C:/Users/Dell/Desktop/Blade/areaInit.txt");
    CString str;
    TopoDS_Shape bladeWithCore, DrilledShape, DrilledShape2;
    /* IMPORT bladeWithCore.brep FILE */
    Standard_CString strFile = "C:/Users/Dell/Desktop/Blade/bladeWithCore.brep";
    BRep_Builder brepS;
    BRepTools::Read(bladeWithCore, strFile, brepS);
    TopoDS_Face Face_52;
    Standard_CString strFace = "C:/Users/Dell/Desktop/Blade/Face_52.brep";
    BRep_Builder brepF;
    BRepTools::Read(Face_52, strFace, brepF);
    BRepClass3d_SolidClassifier classify;
    classify.Load(bladeWithCore);
    classify.PerformInfinitePoint(1.0e-6);
    TopAbs_State state = classify.State();
    if(state == TopAbs_IN) bladeWithCore.Reverse();
    DrilledShape = bladeWithCore;
    gp_Vec v1(gp_Pnt(330,11,108), gp_Pnt(291,4,109)); // FINDING VECTOR FROM START TO THE END POINT
    gp_Pnt point(330,11,108);
    gp_Pnt startpoint(330,11,108);
    TopoDS_Vertex aV1 = BRepBuilderAPI_MakeVertex(startpoint);
    BRepTools::Write(aV1, "C:/Users/Dell/Desktop/Blade/startpoint.brep");
    /* gp_Vec v1(gp_Pnt(330.208114,11.203318,108.480255), gp_Pnt(291.103558,4.265285,109.432663));
    gp_Pnt point(330.208114,11.203318,108.480255); */
    gp_Dir d1(v1); // GETTING DIRECTION USING FOUND VECTOR
    gp_Vec UnitVector(d1); // FINDING UNITVECTOR
    gp_Dir dir_line(0,0,1);
    gp_Vec UnitVector_line(dir_line);
    gp_Pnt minPoint;
    Standard_Real diameter = 0.1;
    int noOfInstances = 3;
    int gap = -10;
    for(int i = 0; i < noOfInstances; i++)
    {
        point.Translate(UnitVector);
        gp_Pnt newPoint = point;
        newPoint.Translate(UnitVector_line * 10);
        BRepBuilderAPI_MakeEdge LINE(newPoint, point);
        BRepBuilderAPI_MakeWire lineWire(LINE);
        TopoDS_Wire wire = TopoDS::Wire(lineWire);
        TopoDS_Shape lineShape = TopoDS_Shape(wire);
        BRepTools::Write(lineShape, "C:/Users/Dell/Desktop/Blade/linlineShapee.brep");
        intersection(bladeWithCore, lineShape, point, minPoint);
        TopoDS_Vertex checkPoint = BRepBuilderAPI_MakeVertex(minPoint);
        BRepTools::Write(checkPoint, "C:/Users/Dell/Desktop/Blade/checkPoint.brep");
        gp_Ax1 location = gp_Ax1(minPoint, gp_Dir(0,0,1));
        BRepFeat_MakeCylindricalHole drilled;
        drilled.Init(DrilledShape, location);
        drilled.PerformThruNext(diameter / 2.0);
        BRepFeat_Status status = drilled.Status();
        if(status == BRepFeat_NoError)
        {
            DrilledShape = drilled.Shape();
            BRepTools::Write(DrilledShape, "C:/Users/Dell/Desktop/Blade/bladeWithCoreDrilled.brep");
        }
    }
    Handle(AIS_InteractiveObject) obj3 = (new AIS_Shape(DrilledShape));
    myAISContext->Display(obj3);
    COCCAPPDoc::Fit();
}
void COCCAPPDoc::intersection(TopoDS_Shape bladeWithCore, TopoDS_Shape lineShape, gp_Pnt point, gp_Pnt& minPoint)
{
    BRepExtrema_DistShapeShape interSection(bladeWithCore, lineShape, Extrema_ExtFlag_MIN, Extrema_ExtAlgo_Grad);
    interSection.Perform();
    Standard_Integer Solutions = interSection.NbSolution();
    CArray<double, double> disArray;
    int minindex = 1;
    for(int i = 1; i <= Solutions; i++)
    {
        double distance = point.Distance(interSection.PointOnShape1(i));
        disArray.Add(distance);
    }
    for(int i = 1; i < Solutions; i++)
    {
        if(disArray[0] > disArray[i])
        {
            disArray[0] = disArray[i];
            minindex = i;
        }
    }
    minPoint = interSection.PointOnShape1(minindex);
    TopoDS_Vertex aV = BRepBuilderAPI_MakeVertex(minPoint);
    BRepTools::Write(aV, "C:/Users/Dell/Desktop/Blade/minPoint.brep");
}

How to replace an instance with another instance via pointer?

I'm doing online destructive clustering (clusters replace the clustered objects) on a list of class instances (std::list<percepUnit>).
Background
My list of current percepUnits is std::list<percepUnit> units;, and on each iteration I get a new list of input percepUnits, std::list<percepUnit> scratch;, that need to be clustered with the units.
I want to maintain a fixed number of percepUnits (so units.size() is constant), so for each new scratch percepUnit I need to merge it with the nearest percepUnit in units. Below is a code snippet that builds a list (dists) of structures (percepUnitDist) containing pointers to each pair of items in scratch and units (percepDist.scratchUnit = &(*scratchUnit); and percepDist.unit = &(*unit);) along with their distance. Additionally, for each item in scratch I keep track of which item in units has the least distance (minDists).
// For every scratch percepUnit:
for (scratchUnit = scratch.begin(); scratchUnit != scratch.end(); scratchUnit++) {
    float minDist = 2025.1172; // This is the max possible distance in unnormalized CIELuv, and much larger than the normalized dist.
    // For every percepUnit:
    for (unit = units.begin(); unit != units.end(); unit++) {
        // compare pairs
        float dist = featureDist(*scratchUnit, *unit, FGBG);
        //cout << "distance: " << dist << endl;
        // Put pairs in a structure that caches their distances
        percepUnitDist percepDist;
        percepDist.scratchUnit = &(*scratchUnit); // address of where scratchUnit points to.
        percepDist.unit = &(*unit);
        percepDist.dist = dist;
        // Figure out the percepUnit that is closest to this scratchUnit.
        if (dist < minDist)
            minDist = dist;
        dists.push_back(percepDist); // append dist struct
    }
    minDists.push_back(minDist); // append the min distance to the nearest percepUnit for this particular scratchUnit.
}
So now I just need to loop through the percepUnitDist items in dists and match the distances with the minimum distances to figure out which percepUnit in scratch should be merged with which percepUnit in units. The merging process mergePerceps() creates a new percepUnit which is a weighted average of the "parent" percepUnits in scratch and units.
Question
I want to replace the instance in the units list with the new percepUnit constructed by mergePerceps(), but I would like to do so in the context of looping through the percepUnitDists. This is my current code:
// Loop through dists and merge all the closest pairs.
// Loop through all dists
for (distIter = dists.begin(); distIter != dists.end(); distIter++) {
    // Loop through all minDists for each scratchUnit.
    for (minDistsIter = minDists.begin(); minDistsIter != minDists.end(); minDistsIter++) {
        // if this is the closest cluster, and the closest cluster has not already been merged, and the scratch has not already been merged.
        if (*minDistsIter == distIter->dist and not distIter->scratchUnit->remove) {
            percepUnit newUnit;
            mergePerceps(*(distIter->scratchUnit), *(distIter->unit), newUnit, FGBG);
            *(distIter->unit) = newUnit; // replace the cluster with the new merged version.
            distIter->scratchUnit->remove = true;
        }
    }
}
I thought that I could replace the instance in units via the percepUnitDist pointer with the new percepUnit instance using *(distIter->unit) = newUnit;, but that does not seem to be working, as I'm seeing a memory leak, which implies the instances in units are not getting replaced.
How do I delete the percepUnit in the units list and replace it with a new percepUnit instance such that the new unit is located in the same location?
EDIT1
Here is the percepUnit class. Note the cv::Mat members. Following is the mergePerceps() function and the mergeImages() function on which it depends:
// Function to construct an accumulation.
void clustering::mergeImages(Mat &scratch, Mat &unit, cv::Mat &merged, const string maskOrImage, const string FGBG, const float scratchWeight, const float unitWeight) {
    int width, height, type = CV_8UC3;
    Mat scratchImagePad, unitImagePad, scratchImage, unitImage;
    // use the resolution and aspect of the largest of the pair.
    if (unit.cols > scratch.cols)
        width = unit.cols;
    else
        width = scratch.cols;
    if (unit.rows > scratch.rows)
        height = unit.rows;
    else
        height = scratch.rows;
    if (maskOrImage == "mask")
        type = CV_8UC1; // single channel mask
    else if (maskOrImage == "image")
        type = CV_8UC3; // three channel image
    else
        cout << "maskOrImage is not 'mask' or 'image'\n";
    merged = Mat(height, width, type, Scalar::all(0));
    scratchImagePad = Mat(height, width, type, Scalar::all(0));
    unitImagePad = Mat(height, width, type, Scalar::all(0));
    // weight images before summation.
    // because these pass by reference, they mess up the images in memory!
    scratch *= scratchWeight;
    unit *= unitWeight;
    // copy images into padded images.
    scratch.copyTo(scratchImagePad(Rect((scratchImagePad.cols - scratch.cols) / 2,
                                        (scratchImagePad.rows - scratch.rows) / 2,
                                        scratch.cols,
                                        scratch.rows)));
    unit.copyTo(unitImagePad(Rect((unitImagePad.cols - unit.cols) / 2,
                                  (unitImagePad.rows - unit.rows) / 2,
                                  unit.cols,
                                  unit.rows)));
    merged = scratchImagePad + unitImagePad;
}
// Merge two perceps and return a new percept to replace them.
void clustering::mergePerceps(percepUnit scratch, percepUnit unit, percepUnit &mergedUnit, const string FGBG) {
Mat accumulation;
Mat accumulationMask;
Mat meanColour;
int x, y, w, h, area;
float l,u,v;
int numMerges=0;
std::vector<float> featuresVar; // Normalized, Sum, Variance.
//float featuresVarMin, featuresVarMax; // min and max variance accross all features.
float scratchWeight, unitWeight;
if (FGBG == "FG") {
// foreground percepts don't get merged as much.
scratchWeight = 0.65;
unitWeight = 1-scratchWeight;
} else {
scratchWeight = 0.85;
unitWeight = 1-scratchWeight;
}
// Images TODO remove the meanColour if needbe.
mergeImages(scratch.image, unit.image, accumulation, "image", FGBG, scratchWeight, unitWeight);
mergeImages(scratch.mask, unit.mask, accumulationMask, "mask", FGBG, scratchWeight, unitWeight);
mergeImages(scratch.meanColour, unit.meanColour, meanColour, "image", "FG", scratchWeight, unitWeight); // merge images
// Position and size.
x = (scratch.x1*scratchWeight) + (unit.x1*unitWeight);
y = (scratch.y1*scratchWeight) + (unit.y1*unitWeight);
w = (scratch.w*scratchWeight) + (unit.w*unitWeight);
h = (scratch.h*scratchWeight) + (unit.h*unitWeight);
// area
area = (scratch.area*scratchWeight) + (unit.area*unitWeight);
// colour
l = (scratch.l*scratchWeight) + (unit.l*unitWeight);
u = (scratch.u*scratchWeight) + (unit.u*unitWeight);
v = (scratch.v*scratchWeight) + (unit.v*unitWeight);
// Number of merges
if (scratch.numMerges < 1 and unit.numMerges < 1) { // both units are patches
numMerges = 1;
} else if (scratch.numMerges < 1 and unit.numMerges >= 1) { // unit A is a patch, B a percept
numMerges = unit.numMerges + 1;
} else if (scratch.numMerges >= 1 and unit.numMerges < 1) { // unit A is a percept, B a patch.
numMerges = scratch.numMerges + 1;
cout << "merged scratch??" <<endl;
// TODO this may be an impossible case.
} else { // both units are percepts
numMerges = scratch.numMerges + unit.numMerges;
cout << "Merging two already merged Percepts" <<endl;
// TODO this may be an impossible case.
}
// Create unit.
mergedUnit = percepUnit(accumulation, accumulationMask, x, y, w, h, area); // time is the earliest value in times?
mergedUnit.l = l; // members not in the constrcutor.
mergedUnit.u = u;
mergedUnit.v = v;
mergedUnit.numMerges = numMerges;
mergedUnit.meanColour = meanColour;
mergedUnit.pActivated = unit.pActivated; // new clusters retain parent's history of activation.
mergedUnit.scratch = false;
mergedUnit.habituation = unit.habituation; // we inherent the habituation of the cluster we merged with.
}
EDIT2
Changing the copy and assignment operators had performance side effects and did not seem to resolve the problem, so I've added a custom function to do the replacement, which, just like the copy operator, makes copies of each member and makes sure those copies are deep. The problem is that I still end up with a leak.
So I've changed this line: *(distIter->unit) = newUnit;
to this: (*(distIter->unit)).clone(newUnit)
Where the clone method is as follows:
// Deep Copy of members
void percepUnit::clone(const percepUnit &source) {
    // Deep copy of Mats
    this->image = source.image.clone();
    this->mask = source.mask.clone();
    this->alphaImage = source.alphaImage.clone();
    this->meanColour = source.meanColour.clone();
    // shallow copies of everything else
    this->alpha = source.alpha;
    this->fadingIn = source.fadingIn;
    this->fadingHold = source.fadingHold;
    this->fadingOut = source.fadingOut;
    this->l = source.l;
    this->u = source.u;
    this->v = source.v;
    this->x1 = source.x1;
    this->y1 = source.y1;
    this->w = source.w;
    this->h = source.h;
    this->x2 = source.x2;
    this->y2 = source.y2;
    this->cx = source.cx;
    this->cy = source.cy;
    this->numMerges = source.numMerges;
    this->id = source.id;
    this->area = source.area;
    this->features = source.features;
    this->featuresNorm = source.featuresNorm;
    this->remove = source.remove;
    this->fgKnockout = source.fgKnockout;
    this->colourCalculated = source.colourCalculated;
    this->normalized = source.normalized;
    this->activation = source.activation;
    this->activated = source.activated;
    this->pActivated = source.pActivated;
    this->habituation = source.habituation;
    this->scratch = source.scratch;
    this->FGBG = source.FGBG;
}
And yet, I still see a memory increase. The increase does not happen if I comment out that single replacement line. So I'm still stuck.
EDIT3
I can prevent memory from increasing if I disable the cv::Mat cloning code in the function above:
// Deep Copy of members
void percepUnit::clone(const percepUnit &source) {
    /* try releasing Mats first?
    // No effect on memory increase, but the refCount is decremented.
    this->image.release();
    this->mask.release();
    this->alphaImage.release();
    this->meanColour.release(); */
    /* Deep copy of Mats
    this->image = source.image.clone();
    this->mask = source.mask.clone();
    this->alphaImage = source.alphaImage.clone();
    this->meanColour = source.meanColour.clone(); */
    // shallow copies of everything else
    this->alpha = source.alpha;
    this->fadingIn = source.fadingIn;
    this->fadingHold = source.fadingHold;
    this->fadingOut = source.fadingOut;
    this->l = source.l;
    this->u = source.u;
    this->v = source.v;
    this->x1 = source.x1;
    this->y1 = source.y1;
    this->w = source.w;
    this->h = source.h;
    this->x2 = source.x2;
    this->y2 = source.y2;
    this->cx = source.cx;
    this->cy = source.cy;
    this->numMerges = source.numMerges;
    this->id = source.id;
    this->area = source.area;
    this->features = source.features;
    this->featuresNorm = source.featuresNorm;
    this->remove = source.remove;
    this->fgKnockout = source.fgKnockout;
    this->colourCalculated = source.colourCalculated;
    this->normalized = source.normalized;
    this->activation = source.activation;
    this->activated = source.activated;
    this->pActivated = source.pActivated;
    this->habituation = source.habituation;
    this->scratch = source.scratch;
    this->FGBG = source.FGBG;
}
EDIT4
While I still can't explain this issue, I did notice another hint. I realized that this leak can also be stopped if I don't normalize the features I use to cluster via featureDist() (but continue to clone the cv::Mats). The really odd thing is that I rewrote that code entirely and the problem still persists.
Here is the featureDist function:
float clustering::featureDist(percepUnit unitA, percepUnit unitB, const string FGBG) {
    float distance = 0;
    if (FGBG == "BG") {
        for (unsigned int i = 0; i < unitA.featuresNorm.rows; i++) {
            distance += pow(abs(unitA.featuresNorm.at<float>(i) - unitB.featuresNorm.at<float>(i)), 0.5);
            //cout << "unitA.featuresNorm[" << i << "]: " << unitA.featuresNorm[i] << endl;
            //cout << "unitB.featuresNorm[" << i << "]: " << unitB.featuresNorm[i] << endl;
        }
    // for FG, don't use normalized colour features.
    // TODO To include the area use i=4
    } else if (FGBG == "FG") {
        for (unsigned int i = 4; i < unitA.features.rows; i++) {
            distance += pow(abs(unitA.features.at<float>(i) - unitB.features.at<float>(i)), 0.5);
        }
    } else {
        cout << "FGBG argument was not FG or BG, returning 0." << endl;
        return 0;
    }
    return pow(distance, 2);
}
Features used to be a vector of floats, and thus the normalization code was as follows:
void clustering::normalize(list<percepUnit> &scratch, list<percepUnit> &units) {
    list<percepUnit>::iterator unit;
    list<percepUnit*>::iterator unitPtr;
    vector<float> min, max;
    list<percepUnit*> masterList; // list of pointers.
    // generate pointers
    for (unit = scratch.begin(); unit != scratch.end(); unit++)
        masterList.push_back(&(*unit)); // add pointer to where unit points to.
    for (unit = units.begin(); unit != units.end(); unit++)
        masterList.push_back(&(*unit)); // add pointer to where unit points to.
    int numFeatures = masterList.front()->features.size(); // all percepts have the same number of features.
    min.resize(numFeatures); // allocate for the number of features we have.
    max.resize(numFeatures);
    // Loop through all units to get feature values
    for (int i = 0; i < numFeatures; i++) {
        min[i] = masterList.front()->features[i]; // starting point.
        max[i] = min[i];
        // calculate min and max for each feature.
        for (unitPtr = masterList.begin(); unitPtr != masterList.end(); unitPtr++) {
            if ((*unitPtr)->features[i] < min[i])
                min[i] = (*unitPtr)->features[i];
            if ((*unitPtr)->features[i] > max[i])
                max[i] = (*unitPtr)->features[i];
        }
    }
    // Normalize features according to min/max.
    for (int i = 0; i < numFeatures; i++) {
        for (unitPtr = masterList.begin(); unitPtr != masterList.end(); unitPtr++) {
            (*unitPtr)->featuresNorm[i] = ((*unitPtr)->features[i] - min[i]) / (max[i] - min[i]);
            (*unitPtr)->normalized = true;
        }
    }
}
I changed the features type to a cv::Mat so I could use the OpenCV normalization function, and rewrote the normalization function as follows:
void clustering::normalize(list<percepUnit> &scratch, list<percepUnit> &units) {
    Mat featureMat = Mat(1, units.size() + scratch.size(), CV_32FC1, Scalar(0));
    list<percepUnit>::iterator unit;
    // For each feature
    for (int i = 0; i < units.begin()->features.rows; i++) {
        // for each unit in units
        int j = 0;
        float value;
        for (unit = units.begin(); unit != units.end(); unit++) {
            // Populate featureMat; j is the unit index, i is the feature index.
            value = unit->features.at<float>(i);
            featureMat.at<float>(j) = value;
            j++;
        }
        // for each unit in scratch
        for (unit = scratch.begin(); unit != scratch.end(); unit++) {
            // Populate featureMat; j is the unit index, i is the feature index.
            value = unit->features.at<float>(i);
            featureMat.at<float>(j) = value;
            j++;
        }
        // Normalize this featureMat in place
        cv::normalize(featureMat, featureMat, 0, 1, NORM_MINMAX);
        // set normalized values in percepUnits from featureMat
        // for each unit in units
        j = 0;
        for (unit = units.begin(); unit != units.end(); unit++) {
            // Populate percepUnit featuresNorm; j is the unit index, i is the feature index.
            value = featureMat.at<float>(j);
            unit->featuresNorm.at<float>(i) = value;
            j++;
        }
        // for each unit in scratch
        for (unit = scratch.begin(); unit != scratch.end(); unit++) {
            // Populate percepUnit featuresNorm; j is the unit index, i is the feature index.
            value = featureMat.at<float>(j);
            unit->featuresNorm.at<float>(i) = value;
            j++;
        }
    }
}
I can't understand what the interaction between mergePerceps() and normalization could be, especially since normalization is an entirely rewritten function.
Update
Massif and my /proc memory reporting don't agree: Massif says normalization has no effect on memory usage, and that only commenting out the percepUnit::clone() operation bypasses the leak.
Here is all the code, in case the interaction is somewhere else I am missing.
Here is another version of the same code with the dependence on OpenCV GPU removed, to facilitate testing...
It was recommended by Nghia (on the OpenCV forum) that I try to make the percepts a constant size. Sure enough, if I fix the dimensions and type of the cv::Mat members of percepUnit, the leak disappears.
So it seems to me this is a bug in OpenCV that affects calling clone() and copyTo() on Mats of different sizes that are class members. So far I've been unable to reproduce it in a simple program. The leak does seem small enough that it may be the headers leaking rather than the underlying image data.
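(For anyone hitting the same thing, a sketch of that workaround: pad every percept into a buffer of one assumed fixed size and type before storing it, so the Mat members never change dimensions. The 128x128 size here is hypothetical; the real choice depends on the largest percept.)
const cv::Size kFixedSize(128, 128); // hypothetical fixed size
cv::Mat padToFixed(const cv::Mat& src) {
    // Assumes src is no larger than kFixedSize in either dimension.
    cv::Mat dst(kFixedSize, src.type(), cv::Scalar::all(0));
    // center the source inside the constant-size buffer
    src.copyTo(dst(cv::Rect((kFixedSize.width - src.cols) / 2,
                            (kFixedSize.height - src.rows) / 2,
                            src.cols, src.rows)));
    return dst;
}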

cvGetHistValue_1D is deprecated. What is to be used instead?

According to the latest OpenCV (2.4.5) documentation, cvGetHistValue_1D has been deprecated from the imgproc module and is now part of the legacy module.
I would like to know what should be used instead of cvGetHistValue_1D if I do not plan to use the legacy module.
My previous code is below; it needs to be rewritten without the use of cvGetHistValue_1D:
CvHistogram *hist = cvCreateHist(1, &numBins, CV_HIST_ARRAY, ranges, 1);
cvClearHist(hist);
cvCalcHist(&oDepth, hist);
cvNormalizeHist(hist, 1.0f);
float *cumHist = new float[numBins];
cumHist[0] = *cvGetHistValue_1D(hist, 0);
for(int i = 1; i < numBins; i++)
{
    cumHist[i] = cumHist[i-1] + *cvGetHistValue_1D(hist, i);
    if (cumHist[i] > 0.95)
    {
        oMaxDisp = i;
        break;
    }
}
I am assuming you are interested in using the C++ API. Matrix element access is easily accomplished using cv::Mat::at().
Your code might then look like this:
cv::Mat image; // Already in memory
size_t oMaxDisp = 0;
cv::Mat hist;
// Set up the histogram parameters
const int channels = 0;
const int numBins = 256;
const float rangevals[2] = {0.f, 256.f};
const float* ranges = rangevals;
cv::calcHist(&image, 1, &channels, cv::noArray(), hist, 1, &numBins, &ranges);
cv::normalize(hist, hist, 1, 0, cv::NORM_L1);
float cumHist[numBins];
float sum = 0.f;
for (size_t i = 0; i < numBins; ++i)
{
    float val = hist.at<float>(i);
    sum += val;
    cumHist[i] = sum;
    if (cumHist[i] > 0.95)
    {
        oMaxDisp = i;
        break;
    }
}
As a side note, it's a good idea not to use new unless it is necessary.
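(For instance, the cumHist buffer above could be an automatically managed container instead:)
// std::vector (from <vector>) frees its storage automatically,
// so there is no delete[] to forget.
std::vector<float> cumHist(numBins, 0.f);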

3D picking in LWJGL

I have written some code to perform 3D picking that, for some reason, doesn't work entirely correctly! (I'm using LWJGL, just so you know.)
This is what the code looks like:
if(Mouse.getEventButton() == 1) {
    if (!Mouse.getEventButtonState()) {
        Camera.get().generateViewMatrix();
        float screenSpaceX = ((Mouse.getX()/800f/2f)-1.0f)*Camera.get().getAspectRatio();
        float screenSpaceY = 1.0f-(2*((600-Mouse.getY())/600f));
        float displacementRate = (float)Math.tan(Camera.get().getFovy()/2);
        screenSpaceX *= displacementRate;
        screenSpaceY *= displacementRate;
        Vector4f cameraSpaceNear = new Vector4f((float) (screenSpaceX * Camera.get().getNear()), (float) (screenSpaceY * Camera.get().getNear()), (float) (-Camera.get().getNear()), 1);
        Vector4f cameraSpaceFar = new Vector4f((float) (screenSpaceX * Camera.get().getFar()), (float) (screenSpaceY * Camera.get().getFar()), (float) (-Camera.get().getFar()), 1);
        Matrix4f tmpView = new Matrix4f();
        Camera.get().getViewMatrix().transpose(tmpView);
        Matrix4f invertedViewMatrix = (Matrix4f)tmpView.invert();
        Vector4f worldSpaceNear = new Vector4f();
        Matrix4f.transform(invertedViewMatrix, cameraSpaceNear, worldSpaceNear);
        Vector4f worldSpaceFar = new Vector4f();
        Matrix4f.transform(invertedViewMatrix, cameraSpaceFar, worldSpaceFar);
        Vector3f rayPosition = new Vector3f(worldSpaceNear.x, worldSpaceNear.y, worldSpaceNear.z);
        Vector3f rayDirection = new Vector3f(worldSpaceFar.x - worldSpaceNear.x, worldSpaceFar.y - worldSpaceNear.y, worldSpaceFar.z - worldSpaceNear.z);
        rayDirection.normalise();
        Ray clickRay = new Ray(rayPosition, rayDirection);
        Vector tMin = new Vector(), tMax = new Vector(), tempPoint;
        float largestEnteringValue, smallestExitingValue, temp, closestEnteringValue = Camera.get().getFar()+0.1f;
        Drawable closestDrawableHit = null;
        for(Drawable d : this.worldModel.getDrawableThings()) {
            // Calculate AABB for each object... needs to be moved later...
            firstVertex = true;
            for(Surface surface : d.getSurfaces()) {
                for(Vertex v : surface.getVertices()) {
                    worldPosition.x = (v.x+d.getPosition().x)*d.getScale().x;
                    worldPosition.y = (v.y+d.getPosition().y)*d.getScale().y;
                    worldPosition.z = (v.z+d.getPosition().z)*d.getScale().z;
                    worldPosition = worldPosition.rotate(d.getRotation());
                    if (firstVertex) {
                        maxX = worldPosition.x; maxY = worldPosition.y; maxZ = worldPosition.z;
                        minX = worldPosition.x; minY = worldPosition.y; minZ = worldPosition.z;
                        firstVertex = false;
                    } else {
                        if (worldPosition.x > maxX) {
                            maxX = worldPosition.x;
                        }
                        if (worldPosition.x < minX) {
                            minX = worldPosition.x;
                        }
                        if (worldPosition.y > maxY) {
                            maxY = worldPosition.y;
                        }
                        if (worldPosition.y < minY) {
                            minY = worldPosition.y;
                        }
                        if (worldPosition.z > maxZ) {
                            maxZ = worldPosition.z;
                        }
                        if (worldPosition.z < minZ) {
                            minZ = worldPosition.z;
                        }
                    }
                }
            }
            // ray/slabs intersection test...
            // clickRay.getOrigin().x + clickRay.getDirection().x * f = minX
            // clickRay.getOrigin().x - minX = -clickRay.getDirection().x * f
            // clickRay.getOrigin().x/-clickRay.getDirection().x - minX/-clickRay.getDirection().x = f
            // -clickRay.getOrigin().x/clickRay.getDirection().x + minX/clickRay.getDirection().x = f
            largestEnteringValue = -clickRay.getOrigin().x/clickRay.getDirection().x + minX/clickRay.getDirection().x;
            temp = -clickRay.getOrigin().y/clickRay.getDirection().y + minY/clickRay.getDirection().y;
            if(largestEnteringValue < temp) {
                largestEnteringValue = temp;
            }
            temp = -clickRay.getOrigin().z/clickRay.getDirection().z + minZ/clickRay.getDirection().z;
            if(largestEnteringValue < temp) {
                largestEnteringValue = temp;
            }
            smallestExitingValue = -clickRay.getOrigin().x/clickRay.getDirection().x + maxX/clickRay.getDirection().x;
            temp = -clickRay.getOrigin().y/clickRay.getDirection().y + maxY/clickRay.getDirection().y;
            if(smallestExitingValue > temp) {
                smallestExitingValue = temp;
            }
            temp = -clickRay.getOrigin().z/clickRay.getDirection().z + maxZ/clickRay.getDirection().z;
            if(smallestExitingValue < temp) {
                smallestExitingValue = temp;
            }
            if(largestEnteringValue > smallestExitingValue) {
                //System.out.println("Miss!");
            } else {
                if (largestEnteringValue < closestEnteringValue) {
                    closestEnteringValue = largestEnteringValue;
                    closestDrawableHit = d;
                }
            }
        }
        if(closestDrawableHit != null) {
            System.out.println("Hit at: (" + clickRay.setDistance(closestEnteringValue).x + ", " + clickRay.getCurrentPosition().y + ", " + clickRay.getCurrentPosition().z + ")");
            this.worldModel.removeDrawableThing(closestDrawableHit);
        }
    }
}
I just don't understand what's wrong. The ray is shooting and I do hit stuff that gets removed, but the results are very strange: sometimes it removes the thing I'm clicking at, sometimes it removes things that aren't even close to what I'm clicking at, and sometimes it removes nothing at all.
Edit:
Okay, so I have continued searching for errors, and by debugging the ray (painting small dots where it travels) I can now see that there is something obviously wrong with the ray I'm sending out... it has its origin near the world center and always shoots to the same position no matter where I direct my camera...
My initial thought is that there might be some error in the way I calculate my view matrix (since it's not possible to get the view matrix from the gluLookAt method in LWJGL, I have to build it myself, and I guess that's where the problem is)...
Edit2:
This is how I calculate it currently:
private double[][] viewMatrixDouble = {{0,0,0,0}, {0,0,0,0}, {0,0,0,0}, {0,0,0,1}};

public Vector getCameraDirectionVector() {
    Vector actualEye = this.getActualEyePosition();
    return new Vector(lookAt.x-actualEye.x, lookAt.y-actualEye.y, lookAt.z-actualEye.z);
}

public Vector getActualEyePosition() {
    return eye.rotate(this.getRotation());
}

public void generateViewMatrix() {
    Vector cameraDirectionVector = getCameraDirectionVector().normalize();
    Vector side = Vector.cross(cameraDirectionVector, this.upVector).normalize();
    Vector up = Vector.cross(side, cameraDirectionVector);
    viewMatrixDouble[0][0] = side.x; viewMatrixDouble[0][1] = up.x; viewMatrixDouble[0][2] = -cameraDirectionVector.x;
    viewMatrixDouble[1][0] = side.y; viewMatrixDouble[1][1] = up.y; viewMatrixDouble[1][2] = -cameraDirectionVector.y;
    viewMatrixDouble[2][0] = side.z; viewMatrixDouble[2][1] = up.z; viewMatrixDouble[2][2] = -cameraDirectionVector.z;
    /*
    Vector actualEyePosition = this.getActualEyePosition();
    Vector zaxis = new Vector(this.lookAt.x - actualEyePosition.x, this.lookAt.y - actualEyePosition.y, this.lookAt.z - actualEyePosition.z).normalize();
    Vector xaxis = Vector.cross(upVector, zaxis).normalize();
    Vector yaxis = Vector.cross(zaxis, xaxis);
    viewMatrixDouble[0][0] = xaxis.x; viewMatrixDouble[0][1] = yaxis.x; viewMatrixDouble[0][2] = zaxis.x;
    viewMatrixDouble[1][0] = xaxis.y; viewMatrixDouble[1][1] = yaxis.y; viewMatrixDouble[1][2] = zaxis.y;
    viewMatrixDouble[2][0] = xaxis.z; viewMatrixDouble[2][1] = yaxis.z; viewMatrixDouble[2][2] = zaxis.z;
    viewMatrixDouble[3][0] = -Vector.dot(xaxis, actualEyePosition); viewMatrixDouble[3][1] = -Vector.dot(yaxis, actualEyePosition); viewMatrixDouble[3][2] = -Vector.dot(zaxis, actualEyePosition);
    */
    viewMatrix = new Matrix4f();
    viewMatrix.load(getViewMatrixAsFloatBuffer());
}
I would be very grateful if anyone could verify whether this is wrong or right, and if it's wrong, supply me with the right way of doing it...
I have read a lot of threads and documentation about this, but I can't seem to wrap my head around it...
I just don't understand what's wrong: the ray is shooting and I do hit stuff that gets removed, but things are not disappearing where I press on the screen.
OpenGL is not a scene graph, it's a drawing library. So after removing something from your internal representation, you must redraw the scene; your code seems to be missing a call to a function that triggers a redraw.
Okay, so I finally solved it with help from the guys at gamedev and a friend; here is a link to the answer where I have posted the code!