Get 3D object boundaries - OpenGL

How do applications figure out the boundaries (just a box) of a 3D object?
I need this info for collision detection.

This is how you can compute the minimum and maximum corners (the axis-aligned bounding box) of a 3D object from its vertices:
#include <algorithm> // for std::min / std::max

// Computes the axis-aligned bounding box of n_vert vertices.
// GLpoint is assumed to be a plain struct with x, y, z members.
void BBox(const GLpoint *p, int n_vert, GLpoint& p_max, GLpoint& p_min)
{
    // Start from the first vertex...
    p_min = p[0];
    p_max = p[0];
    // ...then grow the box to enclose every remaining vertex.
    for (int i = 1; i < n_vert; i++)
    {
        p_min.x = std::min(p_min.x, p[i].x);
        p_min.y = std::min(p_min.y, p[i].y);
        p_min.z = std::min(p_min.z, p[i].z);
        p_max.x = std::max(p_max.x, p[i].x);
        p_max.y = std::max(p_max.y, p[i].y);
        p_max.z = std::max(p_max.z, p[i].z);
    }
}
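Since the stated goal is collision detection, two such boxes can then be tested for overlap with a per-axis interval check. A minimal sketch of that test (my addition, reusing the GLpoint struct assumed above):

// Two AABBs overlap exactly when their extents overlap on all three axes.
bool BoxesIntersect(const GLpoint &minA, const GLpoint &maxA,
                    const GLpoint &minB, const GLpoint &maxB)
{
    return minA.x <= maxB.x && maxA.x >= minB.x &&
           minA.y <= maxB.y && maxA.y >= minB.y &&
           minA.z <= maxB.z && maxA.z >= minB.z;
}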

Can a KTX image file be a cubemap array?

Is it valid for a KTX image to be a cubemap array, or is that not a thing?
I have some code that I'm using to upload the data from a KTX file to the GPU. Currently the code works for a regular 2D image, a cubemap, and a texture array; however, it would not support a KTX image that is a cubemap array, if that is a thing.
If it is possible, what is the code below missing to accomplish that?
uint32_t offset = 0;
for (uint32_t layer = 0; layer < layers; layer++) {
    for (uint32_t face = 0; face < faces; face++) {
        for (uint32_t level = 0; level < mipLevels; level++) {
            offset = tex->GetImageOffset(layer, face, level);
            vk::BufferImageCopy bufferCopyRegion = {};
            bufferCopyRegion.imageSubresource.aspectMask = vk::ImageAspectFlagBits::eColor;
            bufferCopyRegion.imageSubresource.mipLevel = level;
            bufferCopyRegion.imageSubresource.baseArrayLayer = (faces == 6 ? face : layer); // TexArray or Cubemap, not both.
            bufferCopyRegion.imageSubresource.layerCount = 1;
            bufferCopyRegion.imageExtent.width = width >> level;
            bufferCopyRegion.imageExtent.height = height >> level;
            bufferCopyRegion.imageExtent.depth = 1;
            bufferCopyRegion.bufferOffset = offset;
            bufferCopyRegions.push_back(bufferCopyRegion);
        }
    }
}
The Vulkan command to transfer the image:
// std::vector<vk::BufferImageCopy> regions;
cmdBuf->copyBufferToImage(srcBufferHandle, destImageHandle,
                          vk::ImageLayout::eTransferDstOptimal,
                          uint32_t(regions.size()), regions.data());
Yes, KTX also supports cube map arrays (see the KTX specification). Those are stored using layers.
The Vulkan spec states the following on how cube maps are stored in a cube map array:
For cube arrays, each set of six sequential layers is a single cube, so the number of cube maps in a cube map array view is layerCount / 6, and image array layer (baseArrayLayer + i) is face index (i mod 6) of cube i / 6.
So you need to change the baseArrayLayer of your buffer copy region accordingly: the copy for face f of layer l belongs at array layer l * 6 + f (for example, layer 2, face 3 maps to baseArrayLayer 15).
Sample code:
// Setup buffer copy regions to get the data from the ktx file to your own image
for (uint32_t layer = 0; layer < ktxTexture->numLayers; layer++) {
    for (uint32_t face = 0; face < 6; face++) {
        for (uint32_t level = 0; level < ktxTexture->numLevels; level++) {
            ktx_size_t offset;
            KTX_error_code ret = ktxTexture_GetImageOffset(ktxTexture, level, layer, face, &offset);
            VkBufferImageCopy bufferCopyRegion = {};
            bufferCopyRegion.imageSubresource.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
            bufferCopyRegion.imageSubresource.mipLevel = level;
            bufferCopyRegion.imageSubresource.baseArrayLayer = layer * 6 + face; // cube array indexing
            bufferCopyRegion.imageSubresource.layerCount = 1;
            bufferCopyRegion.imageExtent.width = ktxTexture->baseWidth >> level;
            bufferCopyRegion.imageExtent.height = ktxTexture->baseHeight >> level;
            bufferCopyRegion.imageExtent.depth = 1;
            bufferCopyRegion.bufferOffset = offset;
            bufferCopyRegions.push_back(bufferCopyRegion);
        }
    }
}

// Create the image view for a cube map array
VkImageViewCreateInfo view = vks::initializers::imageViewCreateInfo();
view.viewType = VK_IMAGE_VIEW_TYPE_CUBE_ARRAY;
view.format = format;
view.components = { VK_COMPONENT_SWIZZLE_R, VK_COMPONENT_SWIZZLE_G, VK_COMPONENT_SWIZZLE_B, VK_COMPONENT_SWIZZLE_A };
view.subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 };
view.subresourceRange.layerCount = 6 * cubeMap.layerCount;
view.subresourceRange.levelCount = cubeMap.mipLevels;
view.image = cubeMap.image;
vkCreateImageView(device, &view, nullptr, &cubeMap.view);
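One prerequisite the snippets above don't show: the VkImage itself must be created cube-compatible, with six array layers per cube, and sampling it as a cube array in shaders requires the imageCubeArray device feature to be enabled. A sketch of the relevant creation fields (my addition; numLayers, mipLevels, width, height, and format stand in for your actual values):

VkImageCreateInfo imageInfo{};
imageInfo.sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
imageInfo.imageType = VK_IMAGE_TYPE_2D;
imageInfo.flags = VK_IMAGE_CREATE_CUBE_COMPATIBLE_BIT; // required for cube and cube array views
imageInfo.format = format;
imageInfo.extent = { width, height, 1 };
imageInfo.mipLevels = mipLevels;
imageInfo.arrayLayers = 6 * numLayers; // six faces per cube in the array
imageInfo.samples = VK_SAMPLE_COUNT_1_BIT;
imageInfo.tiling = VK_IMAGE_TILING_OPTIMAL;
imageInfo.usage = VK_IMAGE_USAGE_TRANSFER_DST_BIT | VK_IMAGE_USAGE_SAMPLED_BIT;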

vtkResliceImageViewer: get image bounds in the view

Situation:
I'm using vtkResliceImageViewer to display the three MPR views, based on this example.
So I can rotate the cursors to generate a new view on all three screens.
Question:
I want to know the bounds of my image on the viewer, even after rotating some of the cursors.
Note:
I tried to get the values from the image actor and convert them through my renderer, but this only works when there has been no interaction from the other cursors.
Example for axial:
void UpdatePointWordToViewer(vtkRenderer* rend, double p[4])
{
    rend->SetWorldPoint(p);
    rend->WorldToDisplay();
    rend->GetDisplayPoint(p);
}

void UpdateBoxAxial()
{
    auto bounds = pMPR->GetViewer(2)->GetImageActor()->GetBounds();
    auto ren = pMPR->GetViewer(2)->GetRenderer();
    auto size = pMPR->GetViewer(2)->GetInteractor()->GetSize();
    double pIni1[]{ 0, 0, 0, 1 };
    double pIni2[]{ 0, 0, 0, 1 };
    double pIni3[]{ 0, 0, 0, 1 };
    double pIni4[]{ 0, 0, 0, 1 };
    double pConvert1[]{ 0, 0, 0 };
    double pConvert2[]{ 0, 0, 0 };
    // check the bounds: the four corners on the slice plane z = bounds[4]
    pIni1[0] = bounds[0];
    pIni1[1] = bounds[3];
    pIni1[2] = bounds[4];
    pIni2[0] = bounds[1];
    pIni2[1] = bounds[3];
    pIni2[2] = bounds[4];
    pIni3[0] = bounds[0];
    pIni3[1] = bounds[2];
    pIni3[2] = bounds[4];
    pIni4[0] = bounds[1];
    pIni4[1] = bounds[2];
    pIni4[2] = bounds[4];
    // convert the points to viewer (display) coordinates
    UpdatePointWordToViewer(ren, pIni1);
    UpdatePointWordToViewer(ren, pIni2);
    UpdatePointWordToViewer(ren, pIni3);
    UpdatePointWordToViewer(ren, pIni4);
    // P1
    if (pIni1[0] < pIni3[0])
        pConvert1[0] = pIni1[0];
    else
        pConvert1[0] = pIni3[0];
    if (pIni3[1] < pIni4[1])
        pConvert1[1] = pIni3[1];
    else
        pConvert1[1] = pIni4[1];
    // P2
    if (pIni2[0] > pIni4[0])
        pConvert2[0] = pIni2[0];
    else
        pConvert2[0] = pIni4[0];
    if (pIni1[1] > pIni2[1])
        pConvert2[1] = pIni1[1];
    else
        pConvert2[1] = pIni2[1];
}
So I know the minimum point (left-bottom) and the maximum point (right-top).
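As an aside, the four if/else chains above could be collapsed with component-wise min/max; a small sketch of the same logic (my own simplification, assuming <algorithm> is included):

// Component-wise min/max over the four corners, now in display coordinates.
pConvert1[0] = std::min({ pIni1[0], pIni2[0], pIni3[0], pIni4[0] });
pConvert1[1] = std::min({ pIni1[1], pIni2[1], pIni3[1], pIni4[1] });
pConvert2[0] = std::max({ pIni1[0], pIni2[0], pIni3[0], pIni4[0] });
pConvert2[1] = std::max({ pIni1[1], pIni2[1], pIni3[1], pIni4[1] });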

Extracting skin data from an FBX file

I need to convert animation data from Autodesk's FBX file format to one that is compatible with DirectX; specifically, I need to calculate the offset matrices for my skinned mesh. I have written a converter (which in this case converts .fbx to my own 'scene' format) in which I would like to calculate an offset matrix for my mesh. Here is the code:
//
// Skin
//
if (bHasDeformer)
{
    // iterate deformers (TODO: ACCOUNT FOR MULTIPLE DEFORMERS)
    for (int i = 0; i < ncDeformers && i < 1; ++i)
    {
        // skin
        FbxSkin *pSkin = (FbxSkin*)pMesh->GetDeformer(i, FbxDeformer::eSkin);
        if (pSkin == NULL)
            continue;
        // bone count
        int ncBones = pSkin->GetClusterCount();
        // iterate bones
        for (int boneIndex = 0; boneIndex < ncBones; ++boneIndex)
        {
            // cluster
            FbxCluster* cluster = pSkin->GetCluster(boneIndex);
            // bone ref
            FbxNode* pBone = cluster->GetLink();
            // Get the bind pose
            FbxAMatrix bindPoseMatrix, transformMatrix;
            cluster->GetTransformMatrix(transformMatrix);
            cluster->GetTransformLinkMatrix(bindPoseMatrix);
            // decomposed transform components
            vS = bindPoseMatrix.GetS();
            vR = bindPoseMatrix.GetR();
            vT = bindPoseMatrix.GetT();
            int *pVertexIndices = cluster->GetControlPointIndices();
            double *pVertexWeights = cluster->GetControlPointWeights();
            // Iterate through all the vertices, which are affected by the bone
            int ncVertexIndices = cluster->GetControlPointIndicesCount();
            for (int iBoneVertexIndex = 0; iBoneVertexIndex < ncVertexIndices; iBoneVertexIndex++)
            {
                // vertex
                int niVertex = pVertexIndices[iBoneVertexIndex];
                // weight
                float fWeight = (float)pVertexWeights[iBoneVertexIndex];
            }
        }
    }
}
How do I convert the FBX transforms to a bone offset matrix?
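For reference, a commonly used construction (a hedged sketch, not a confirmed answer for this pipeline) builds the offset matrix from the two cluster matrices queried above: the offset matrix takes a mesh-space vertex into the bone's bind space, i.e. the inverse of the bone's global bind pose combined with the mesh's bind-time transform. Some pipelines also multiply in the node's geometric transform.

// Sketch: DirectX-style bone offset matrix from the FBX cluster matrices.
// transformMatrix = the mesh's global transform at bind time (GetTransformMatrix).
// bindPoseMatrix  = the bone's global transform at bind time (GetTransformLinkMatrix).
FbxAMatrix offsetMatrix = bindPoseMatrix.Inverse() * transformMatrix;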

How to replace an instance with another instance via pointer?

I'm doing online destructive clustering (clusters replace clustered objects) on a list of class instances (std::list).
Background
My list of current percepUnits is std::list<percepUnit> units;, and for each iteration I get a new list of input percepUnits, std::list<percepUnit> scratch;, that need to be clustered with the units.
I want to maintain a fixed number of percepUnits (so units.size() is constant), so for each new scratch percepUnit I need to merge it with the nearest percepUnit in units. The following snippet builds a list (dists) of structures (percepUnitDist) that hold pointers to each pair of items in scratch and units (percepDist.scratchUnit = &(*scratchUnit); and percepDist.unit = &(*unit);) together with their distance. Additionally, for each item in scratch I keep track of which item in units has the least distance (minDists).
// For every scratch percepUnit:
for (scratchUnit = scratch.begin(); scratchUnit != scratch.end(); scratchUnit++) {
    float minDist = 2025.1172; // This is the max possible distance in unnormalized CIELuv, and much larger than the normalized dist.
    // For every percepUnit:
    for (unit = units.begin(); unit != units.end(); unit++) {
        // compare pairs
        float dist = featureDist(*scratchUnit, *unit, FGBG);
        //cout << "distance: " << dist << endl;
        // Put pairs in a structure that caches their distances
        percepUnitDist percepDist;
        percepDist.scratchUnit = &(*scratchUnit); // address of what scratchUnit points to.
        percepDist.unit = &(*unit);
        percepDist.dist = dist;
        // Figure out the percepUnit that is closest to this scratchUnit.
        if (dist < minDist)
            minDist = dist;
        dists.push_back(percepDist); // append dist struct
    }
    minDists.push_back(minDist); // append the min distance to the nearest percepUnit for this particular scratchUnit.
}
So now I just need to loop through the percepUnitDist items in dists and match the distances with the minimum distances to figure out which percepUnit in scratch should be merged with which percepUnit in units. The merging process mergePerceps() creates a new percepUnit which is a weighted average of the "parent" percepUnits in scratch and units.
Question
I want to replace the instance in the units list with the new percepUnit constructed by mergePerceps(), but I would like to do so in the context of looping through the percepUnitDists. This is my current code:
// Loop through dists and merge all the closest pairs.
for (distIter = dists.begin(); distIter != dists.end(); distIter++) {
    // Loop through all minDists for each scratchUnit.
    for (minDistsIter = minDists.begin(); minDistsIter != minDists.end(); minDistsIter++) {
        // if this is the closest cluster, and the closest cluster has not already been merged, and the scratch has not already been merged.
        if (*minDistsIter == distIter->dist and not distIter->scratchUnit->remove) {
            percepUnit newUnit;
            mergePerceps(*(distIter->scratchUnit), *(distIter->unit), newUnit, FGBG);
            *(distIter->unit) = newUnit; // replace the cluster with the new merged version.
            distIter->scratchUnit->remove = true;
        }
    }
}
I thought that I could replace the instance in units via the percepUnitDist pointer with the new percepUnit instance using *(distIter->unit) = newUnit;, but that does not seem to be working: I'm seeing a memory leak, which implies the instances in units are not getting replaced.
How do I delete the percepUnit in the units list and replace it with a new percepUnit instance such that the new unit is located in the same location?
EDIT1
Here is the percepUnit class. Note the cv::Mat members. Following is the mergePerceps() function and the mergeImages() function on which it depends:
// Function to construct an accumulation.
void clustering::mergeImages(Mat &scratch, Mat &unit, cv::Mat &merged, const string maskOrImage, const string FGBG, const float scratchWeight, const float unitWeight) {
    int width, height, type = CV_8UC3;
    Mat scratchImagePad, unitImagePad, scratchImage, unitImage;
    // use the resolution and aspect of the largest of the pair.
    if (unit.cols > scratch.cols)
        width = unit.cols;
    else
        width = scratch.cols;
    if (unit.rows > scratch.rows)
        height = unit.rows;
    else
        height = scratch.rows;
    if (maskOrImage == "mask")
        type = CV_8UC1; // single channel mask
    else if (maskOrImage == "image")
        type = CV_8UC3; // three channel image
    else
        cout << "maskOrImage is not 'mask' or 'image'\n";
    merged = Mat(height, width, type, Scalar::all(0));
    scratchImagePad = Mat(height, width, type, Scalar::all(0));
    unitImagePad = Mat(height, width, type, Scalar::all(0));
    // weight images before summation.
    // because these pass by reference, they mess up the images in memory!
    scratch *= scratchWeight;
    unit *= unitWeight;
    // copy images into padded images.
    scratch.copyTo(scratchImagePad(Rect((scratchImagePad.cols - scratch.cols) / 2,
                                        (scratchImagePad.rows - scratch.rows) / 2,
                                        scratch.cols,
                                        scratch.rows)));
    unit.copyTo(unitImagePad(Rect((unitImagePad.cols - unit.cols) / 2,
                                  (unitImagePad.rows - unit.rows) / 2,
                                  unit.cols,
                                  unit.rows)));
    merged = scratchImagePad + unitImagePad;
}
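A side note on that in-place weighting (my observation, not from the original post): cv::Mat copies share pixel data by default, so even though mergePerceps() takes its percepUnit arguments by value, scratch.image inside it still aliases the caller's pixels, which is why the *= above "messes up the images in memory". A defensive variant would weight deep copies instead:

// Weight clones so the caller's pixel data is left untouched
// (plain Mat copies share the underlying buffer; clone() does not).
Mat scratchWeighted = scratch.clone();
Mat unitWeighted = unit.clone();
scratchWeighted *= scratchWeight;
unitWeighted *= unitWeight;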
// Merge two perceps and return a new percept to replace them.
void clustering::mergePerceps(percepUnit scratch, percepUnit unit, percepUnit &mergedUnit, const string FGBG) {
    Mat accumulation;
    Mat accumulationMask;
    Mat meanColour;
    int x, y, w, h, area;
    float l, u, v;
    int numMerges = 0;
    std::vector<float> featuresVar; // Normalized, Sum, Variance.
    //float featuresVarMin, featuresVarMax; // min and max variance across all features.
    float scratchWeight, unitWeight;
    if (FGBG == "FG") {
        // foreground percepts don't get merged as much.
        scratchWeight = 0.65;
        unitWeight = 1 - scratchWeight;
    } else {
        scratchWeight = 0.85;
        unitWeight = 1 - scratchWeight;
    }
    // Images TODO remove the meanColour if need be.
    mergeImages(scratch.image, unit.image, accumulation, "image", FGBG, scratchWeight, unitWeight);
    mergeImages(scratch.mask, unit.mask, accumulationMask, "mask", FGBG, scratchWeight, unitWeight);
    mergeImages(scratch.meanColour, unit.meanColour, meanColour, "image", "FG", scratchWeight, unitWeight); // merge images
    // Position and size.
    x = (scratch.x1 * scratchWeight) + (unit.x1 * unitWeight);
    y = (scratch.y1 * scratchWeight) + (unit.y1 * unitWeight);
    w = (scratch.w * scratchWeight) + (unit.w * unitWeight);
    h = (scratch.h * scratchWeight) + (unit.h * unitWeight);
    // area
    area = (scratch.area * scratchWeight) + (unit.area * unitWeight);
    // colour
    l = (scratch.l * scratchWeight) + (unit.l * unitWeight);
    u = (scratch.u * scratchWeight) + (unit.u * unitWeight);
    v = (scratch.v * scratchWeight) + (unit.v * unitWeight);
    // Number of merges
    if (scratch.numMerges < 1 and unit.numMerges < 1) { // both units are patches
        numMerges = 1;
    } else if (scratch.numMerges < 1 and unit.numMerges >= 1) { // unit A is a patch, B a percept
        numMerges = unit.numMerges + 1;
    } else if (scratch.numMerges >= 1 and unit.numMerges < 1) { // unit A is a percept, B a patch.
        numMerges = scratch.numMerges + 1;
        cout << "merged scratch??" << endl;
        // TODO this may be an impossible case.
    } else { // both units are percepts
        numMerges = scratch.numMerges + unit.numMerges;
        cout << "Merging two already merged Percepts" << endl;
        // TODO this may be an impossible case.
    }
    // Create unit.
    mergedUnit = percepUnit(accumulation, accumulationMask, x, y, w, h, area); // time is the earliest value in times?
    mergedUnit.l = l; // members not in the constructor.
    mergedUnit.u = u;
    mergedUnit.v = v;
    mergedUnit.numMerges = numMerges;
    mergedUnit.meanColour = meanColour;
    mergedUnit.pActivated = unit.pActivated; // new clusters retain parent's history of activation.
    mergedUnit.scratch = false;
    mergedUnit.habituation = unit.habituation; // we inherit the habituation of the cluster we merged with.
}
EDIT2
Changing the copy and assignment operators had performance side-effects and did not seem to resolve the problem, so I've added a custom function to do the replacement, which, just like the copy operator, makes copies of each member and makes sure those copies are deep. The problem is that I still end up with a leak.
So I've changed this line: *(distIter->unit) = newUnit;
to this: (*(distIter->unit)).clone(newUnit);
Where the clone method is as follows:
// Deep copy of members
void percepUnit::clone(const percepUnit &source) {
    // Deep copy of Mats
    this->image = source.image.clone();
    this->mask = source.mask.clone();
    this->alphaImage = source.alphaImage.clone();
    this->meanColour = source.meanColour.clone();
    // shallow copies of everything else
    this->alpha = source.alpha;
    this->fadingIn = source.fadingIn;
    this->fadingHold = source.fadingHold;
    this->fadingOut = source.fadingOut;
    this->l = source.l;
    this->u = source.u;
    this->v = source.v;
    this->x1 = source.x1;
    this->y1 = source.y1;
    this->w = source.w;
    this->h = source.h;
    this->x2 = source.x2;
    this->y2 = source.y2;
    this->cx = source.cx;
    this->cy = source.cy;
    this->numMerges = source.numMerges;
    this->id = source.id;
    this->area = source.area;
    this->features = source.features;
    this->featuresNorm = source.featuresNorm;
    this->remove = source.remove;
    this->fgKnockout = source.fgKnockout;
    this->colourCalculated = source.colourCalculated;
    this->normalized = source.normalized;
    this->activation = source.activation;
    this->activated = source.activated;
    this->pActivated = source.pActivated;
    this->habituation = source.habituation;
    this->scratch = source.scratch;
    this->FGBG = source.FGBG;
}
And yet, I still see a memory increase. The increase does not happen if I comment out that single replacement line. So I'm still stuck.
EDIT3
I can prevent memory from increasing if I disable the cv::Mat cloning code in the function above:
// Deep copy of members
void percepUnit::clone(const percepUnit &source) {
    /* try releasing Mats first?
    // No effect on memory increase, but the refCount is decremented.
    this->image.release();
    this->mask.release();
    this->alphaImage.release();
    this->meanColour.release();*/
    /* Deep copy of Mats
    this->image = source.image.clone();
    this->mask = source.mask.clone();
    this->alphaImage = source.alphaImage.clone();
    this->meanColour = source.meanColour.clone();*/
    // shallow copies of everything else
    this->alpha = source.alpha;
    this->fadingIn = source.fadingIn;
    this->fadingHold = source.fadingHold;
    this->fadingOut = source.fadingOut;
    this->l = source.l;
    this->u = source.u;
    this->v = source.v;
    this->x1 = source.x1;
    this->y1 = source.y1;
    this->w = source.w;
    this->h = source.h;
    this->x2 = source.x2;
    this->y2 = source.y2;
    this->cx = source.cx;
    this->cy = source.cy;
    this->numMerges = source.numMerges;
    this->id = source.id;
    this->area = source.area;
    this->features = source.features;
    this->featuresNorm = source.featuresNorm;
    this->remove = source.remove;
    this->fgKnockout = source.fgKnockout;
    this->colourCalculated = source.colourCalculated;
    this->normalized = source.normalized;
    this->activation = source.activation;
    this->activated = source.activated;
    this->pActivated = source.pActivated;
    this->habituation = source.habituation;
    this->scratch = source.scratch;
    this->FGBG = source.FGBG;
}
EDIT4
While I still can't explain this issue, I did notice another hint. I realized that this leak can also be stopped if I don't normalize the features I use to cluster via featureDist() (but continue to clone the cv::Mats). The really odd thing is that I rewrote that normalization code entirely and the problem still persists.
Here is the featureDist function:
float clustering::featureDist(percepUnit unitA, percepUnit unitB, const string FGBG) {
    float distance = 0;
    if (FGBG == "BG") {
        for (unsigned int i = 0; i < unitA.featuresNorm.rows; i++) {
            distance += pow(abs(unitA.featuresNorm.at<float>(i) - unitB.featuresNorm.at<float>(i)), 0.5);
            //cout << "unitA.featuresNorm[" << i << "]: " << unitA.featuresNorm[i] << endl;
            //cout << "unitB.featuresNorm[" << i << "]: " << unitB.featuresNorm[i] << endl;
        }
    // for FG, don't use normalized colour features.
    // TODO To include the area use i=4
    } else if (FGBG == "FG") {
        for (unsigned int i = 4; i < unitA.features.rows; i++) {
            distance += pow(abs(unitA.features.at<float>(i) - unitB.features.at<float>(i)), 0.5);
        }
    } else {
        cout << "FGBG argument was not FG or BG, returning 0." << endl;
        return 0;
    }
    return pow(distance, 2);
}
Features used to be a vector of floats, and thus the normalization code was as follows:
void clustering::normalize(list<percepUnit> &scratch, list<percepUnit> &units) {
    list<percepUnit>::iterator unit;
    list<percepUnit*>::iterator unitPtr;
    vector<float> min, max;
    list<percepUnit*> masterList; // list of pointers.
    // generate pointers
    for (unit = scratch.begin(); unit != scratch.end(); unit++)
        masterList.push_back(&(*unit)); // add pointer to what unit points to.
    for (unit = units.begin(); unit != units.end(); unit++)
        masterList.push_back(&(*unit)); // add pointer to what unit points to.
    int numFeatures = masterList.front()->features.size(); // all percepts have the same number of features.
    min.resize(numFeatures); // allocate for the number of features we have.
    max.resize(numFeatures);
    // Loop through all units to get feature values
    for (int i = 0; i < numFeatures; i++) {
        min[i] = masterList.front()->features[i]; // starting point.
        max[i] = min[i];
        // calculate min and max for each feature.
        for (unitPtr = masterList.begin(); unitPtr != masterList.end(); unitPtr++) {
            if ((*unitPtr)->features[i] < min[i])
                min[i] = (*unitPtr)->features[i];
            if ((*unitPtr)->features[i] > max[i])
                max[i] = (*unitPtr)->features[i];
        }
    }
    // Normalize features according to min/max.
    for (int i = 0; i < numFeatures; i++) {
        for (unitPtr = masterList.begin(); unitPtr != masterList.end(); unitPtr++) {
            (*unitPtr)->featuresNorm[i] = ((*unitPtr)->features[i] - min[i]) / (max[i] - min[i]);
            (*unitPtr)->normalized = true;
        }
    }
}
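One hazard worth flagging in this version (my note, not from the original post): if every percept shares the same value for some feature, max[i] - min[i] is zero and the division yields inf or NaN. A guarded variant of the inner assignment:

// Guard against a zero range (all units share the same feature value).
float range = max[i] - min[i];
(*unitPtr)->featuresNorm[i] = (range > 0.0f)
    ? ((*unitPtr)->features[i] - min[i]) / range
    : 0.0f; // define the normalized value as 0 for a constant feature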
I changed the features type to a cv::Mat so I could use the OpenCV normalization function, and rewrote the normalization function as follows:
void clustering::normalize(list<percepUnit> &scratch, list<percepUnit> &units) {
    Mat featureMat = Mat(1, units.size() + scratch.size(), CV_32FC1, Scalar(0));
    list<percepUnit>::iterator unit;
    // For each feature
    for (int i = 0; i < units.begin()->features.rows; i++) {
        // for each unit in units
        int j = 0;
        float value;
        for (unit = units.begin(); unit != units.end(); unit++) {
            // Populate featureMat: j is the unit index, i is the feature index.
            value = unit->features.at<float>(i);
            featureMat.at<float>(j) = value;
            j++;
        }
        // for each unit in scratch
        for (unit = scratch.begin(); unit != scratch.end(); unit++) {
            // Populate featureMat: j is the unit index, i is the feature index.
            value = unit->features.at<float>(i);
            featureMat.at<float>(j) = value;
            j++;
        }
        // Normalize this featureMat in place
        cv::normalize(featureMat, featureMat, 0, 1, NORM_MINMAX);
        // set normalized values in percepUnits from featureMat
        // for each unit in units
        j = 0;
        for (unit = units.begin(); unit != units.end(); unit++) {
            // Populate percepUnit featuresNorm: j is the unit index, i is the feature index.
            value = featureMat.at<float>(j);
            unit->featuresNorm.at<float>(i) = value;
            j++;
        }
        // for each unit in scratch
        for (unit = scratch.begin(); unit != scratch.end(); unit++) {
            // Populate percepUnit featuresNorm: j is the unit index, i is the feature index.
            value = featureMat.at<float>(j);
            unit->featuresNorm.at<float>(i) = value;
            j++;
        }
    }
}
I can't understand what the interaction between mergePerceps() and normalization could be, especially since normalization is an entirely rewritten function.
Update
Massif and my /proc memory reporting don't agree: Massif says normalization has no effect on memory usage, and that only commenting out the percepUnit::clone() operation bypasses the leak.
Here is all the code, in case the interaction is somewhere else I am missing.
Here is another version of the same code with the dependence on OpenCV GPU removed, to facilitate testing...
It was recommended by Nghia (on the OpenCV forum) that I try to make the percepts a constant size. Sure enough, if I fix the dimensions and type of the cv::Mat members of percepUnit, then the leak disappears.
So it seems to me this is a bug in OpenCV that affects calling clone() and copyTo() on Mats of different sizes that are class members. So far I have been unable to reproduce it in a simple program. The leak does seem small enough that it may be the Mat headers leaking, rather than the underlying image data.
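A minimal sketch of that constant-size workaround (my own illustration; the canonical size is an assumption): pad every percept's Mats to one fixed size up front, so later clone()/copyTo() calls always deal with identically sized allocations and the allocator can reuse blocks.

// Hypothetical helper: center src inside a fixed-size zeroed canvas.
static const int kCanonW = 256, kCanonH = 256; // assumed canonical size
cv::Mat padToCanonical(const cv::Mat &src, int type) {
    cv::Mat dst(kCanonH, kCanonW, type, cv::Scalar::all(0));
    cv::Rect roi((kCanonW - src.cols) / 2, (kCanonH - src.rows) / 2,
                 src.cols, src.rows);
    src.copyTo(dst(roi));
    return dst;
}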

C++ vector element is different when accessed at different times

I'm developing a 3D game using SDL and OpenGL on Ubuntu 9.04 using Eclipse CDT. I've got a class that holds the mesh data in vectors, one for each type: vertices, normals, UV coordinates (texture coordinates), and faces. Each face has three int vectors which hold indexes into the other data. So far my game has been rendering at nice rates, but then again I only had fewer than a hundred vertices across two test objects.
The loop accessing this data looks like this:
void RenderFace(oFace face)
{
    /*
     * More Stuff
     */
    oVertice gvert;
    oUVcoord tvert;
    oNormal nvert;
    for (unsigned int fvIndex = 0; fvIndex < face.GeoVerts.size(); fvIndex++)
    {
        gvert = obj.TheMesh.GetVertice(face.GeoVerts[fvIndex] - 1);
        tvert = obj.TheMesh.GetUVcoord(face.UV_Verts[fvIndex] - 1);
        nvert = obj.TheMesh.GetNormal(face.NrmVerts[fvIndex] - 1);
        glNormal3f(nvert.X, nvert.Y, nvert.Z);
        glTexCoord2f(tvert.U, tvert.V);
        glVertex3f(scale * gvert.X, scale * gvert.Y, scale * gvert.Z);
    }
    /*
     * More Stuff
     */
}
There is a loop that calls the RenderFace() function, which includes the above for loop. The minus one is because Wavefront .obj files are 1-indexed (instead of C++'s 0-indexing). Anyway, I discovered that once you have about thirty thousand or so faces, all those calls to glVertex3f() and the like slow the game down to about 10 FPS, which I can't allow. So I learned about vertex arrays, which require pointers to arrays. Following the example of a NeHe tutorial, I continued to use my oVertice class and the others, which just have floats x, y, z, or u, v. So I added the same routine as above to my OnLoad() function to build the arrays, which are just "oVertice*" and similar.
Here is the code:
bool oEntity::OnLoad(std::string FileName)
{
    if (!obj.OnLoad(FileName))
    {
        return false;
    }
    unsigned int flsize = obj.TheMesh.GetFaceListSize();
    obj.TheMesh.VertListPointer = new oVertice[flsize];
    obj.TheMesh.UVlistPointer = new oUVcoord[flsize];
    obj.TheMesh.NormListPointer = new oNormal[flsize];
    oFace face = obj.TheMesh.GetFace(0);
    oVertice gvert;
    oUVcoord tvert;
    oNormal nvert;
    unsigned int counter = 0;
    unsigned int temp = 0;
    for (unsigned int flIndex = 0; flIndex < obj.TheMesh.GetFaceListSize(); flIndex++)
    {
        face = obj.TheMesh.GetFace(flIndex);
        for (unsigned int fvIndex = 0; fvIndex < face.GeoVerts.size(); fvIndex++)
        {
            temp = face.GeoVerts[fvIndex];
            gvert = obj.TheMesh.GetVertice(face.GeoVerts[fvIndex] - 1);
            temp = face.UV_Verts[fvIndex];
            tvert = obj.TheMesh.GetUVcoord(face.UV_Verts[fvIndex] - 1);
            temp = face.NrmVerts[fvIndex];
            nvert = obj.TheMesh.GetNormal(face.NrmVerts[fvIndex] - 1);
            obj.TheMesh.VertListPointer[counter].X = gvert.X;
            obj.TheMesh.VertListPointer[counter].Y = gvert.Y;
            obj.TheMesh.VertListPointer[counter].Z = gvert.Z;
            obj.TheMesh.UVlistPointer[counter].U = tvert.U;
            obj.TheMesh.UVlistPointer[counter].V = tvert.V;
            obj.TheMesh.NormListPointer[counter].X = nvert.X;
            obj.TheMesh.NormListPointer[counter].Y = nvert.Y;
            obj.TheMesh.NormListPointer[counter].Z = nvert.Z;
            counter++;
        }
    }
    return true;
}
The unsigned int temp variable is for debugging purposes. Apparently oFace doesn't have a default constructor, so face has to be initialized with something. Anyway, as you can see it's pretty much the same exact routine, only instead of calling a GL function I add the data to three arrays.
Here's the kicker:
I'm loading a typical cube made of triangles.
When I access element 16 (0-indexed) of the UV_Verts vector from the RenderFace() function, I get 12.
But when I access element 16 (0-indexed) of the same UV_Verts vector from the OnLoad() function, I get something like 3045472189.
I am so confused.
Does anyone know what's causing this? And if so how to resolve it?
One possible reason could be that you're creating arrays with size flsize:
obj.TheMesh.VertListPointer = new oVertice[flsize];
obj.TheMesh.UVlistPointer = new oUVcoord[flsize];
obj.TheMesh.NormListPointer = new oNormal[flsize];
but you use the arrays with indices up to flsize * face.GeoVerts.size():
for (...; flIndex < obj.TheMesh.GetFaceListSize(); ...) { // flsize = GetFaceListSize
    for (...; fvIndex < face.GeoVerts.size(); ...) {
        ...
        obj.TheMesh.UVlistPointer[counter].U = ...;
        ...
        counter++;
    }
}
so your array creation code should actually be more like
obj.TheMesh.VertListPointer = new oVertice[flsize * face.GeoVerts.size()];
obj.TheMesh.UVlistPointer = new oUVcoord[flsize * face.GeoVerts.size()];
obj.TheMesh.NormListPointer = new oNormal[flsize * face.GeoVerts.size()];
The writes past the end of the undersized arrays corrupt adjacent heap memory, which would explain why the same UV_Verts element later reads back as a garbage value like 3045472189.
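As a side note (my suggestion, not part of the original answer), sizing the buffers with std::vector makes this kind of mismatch easier to avoid, since the container tracks its own size and at() can catch overruns:

// Hypothetical sketch: count the total vertices first, then size the buffers.
size_t total = 0;
for (unsigned int flIndex = 0; flIndex < obj.TheMesh.GetFaceListSize(); flIndex++)
    total += obj.TheMesh.GetFace(flIndex).GeoVerts.size();
std::vector<oVertice> vertList(total);
std::vector<oUVcoord> uvList(total);
std::vector<oNormal> normList(total);
// vertList.at(counter) throws std::out_of_range instead of silently corrupting memory.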