C++ program stops without a reason at a random position [closed] - c++

Closed. This question needs debugging details. It is not currently accepting answers.
Edit the question to include desired behavior, a specific problem or error, and the shortest code necessary to reproduce the problem. This will help others answer the question.
Closed 8 years ago.
I am working on a C++ program that should turn a 2D image of a flame's intensity into a 3D model. The program mainly performs matrix operations, all of which I implemented using raw pointers (I know I could use vectors instead).
After reading the text file, mirroring, and smoothing the data values, a correction calculation runs for each line of the image. At the beginning of the function for this calculation, the program stops at a seemingly random position, but always inside the for-loop that fills the y_values array.
Here is the code fragment:
void CorrectionCalculation(Matrix Matrix_To_Calculate, int n_values, int polynomial_degree, int n_rows)
{
    for (int h = 0; h < n_rows; h++)
    {
        // Initialising and declaration of the y_values array, which is a copy of each matrix line.
        // This line is used for the correction calculation.
        double* y_values = new double(n_values);
        for (int i = 0; i < n_values; i++)
        {
            y_values[i] = Matrix_To_Calculate[h][i];
        }
        // Initialising and declaration of the x values (from 0 to Spiegelachse with step width 1, because of the single pixels)
        double* x_values = new double(n_values);
        for (int i = 0; i < n_values; i++)
        {
            x_values[i] = i;
        }
When calculating a single line, the program worked fine. But when I added some code to process the whole image, the program stopped.

You're not allocating an array of values, but a single element.
Instead of:
double* y_values = new double(n_values);
// ...
double* x_values = new double(n_values);
Change it to
double* y_values = new double[n_values];
//...
double* x_values = new double[n_values];
You should use a vector of doubles rather than array new. That way the memory will be automatically deleted when it's no longer needed. E.g.:
#include <vector>
std::vector<double> y_values(n_values);
You're also hiding variables by giving locals the same names as the parameters. This can lead to confusion and subtle bugs in code where you're not quite sure which variable is being changed.
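Putting the pieces of this answer together, a vector-based version of the question's two loops might look like the sketch below. The `Matrix` alias and the `BuildRowVectors` helper are hypothetical stand-ins, since the question doesn't show how its `Matrix` type is defined:

```cpp
#include <cassert>
#include <vector>

// Hypothetical stand-in for the question's Matrix type: a vector of rows.
using Matrix = std::vector<std::vector<double>>;

// Copy one matrix row into y_values and fill x_values with 0..n_values-1.
// std::vector releases its memory automatically, so no new[]/delete[] needed.
void BuildRowVectors(const Matrix& m, int row, int n_values,
                     std::vector<double>& y_values,
                     std::vector<double>& x_values)
{
    y_values.assign(m[row].begin(), m[row].begin() + n_values);
    x_values.resize(n_values);
    for (int i = 0; i < n_values; i++)
        x_values[i] = i; // pixel index as x coordinate
}
```

Unlike `new double(n_values)`, which allocates one double initialised to `n_values`, the vector actually holds `n_values` elements, so indexing up to `n_values - 1` is safe.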


I get an Assertion Failure (elemSize() == sizeof(_Tp)) in C++ OpenCV when trying to access values of a histogram [closed]

Closed 2 months ago.
When I am trying to access the bin values of a generated histogram of a greyscale image, I get this assertion failure:
Error: Assertion failed (elemSize() == sizeof(_Tp)) in cv::Mat::at ... opencv2\core\mat.inl.hpp, line 943
This is the code fragment that throws the failure:
for (int i = 0; i < 256; i++) {
    hist.at<float>(i) = (hist.at<float>(i) / pixelAmount) * 255;
}
My main problem is that I don't really understand what the assertion failure means.
I looked up the OpenCV documentation for Histogram Calculation, and they access the histogram values the same way.
Thanks in advance for any advice.
I'll assume that you got your hist Mat from another API call, so its type can't be affected at creation time.
The at<T>() method requires you to know the element type of the Mat (say, CV_8U), and to use that same type (uint8_t) in the access.
There are two ways to solve this situation:
read the scalar value as uint8_t, convert it so it suits your calculation, then write back a value coerced to uint8_t
convert the entire Mat to CV_32F, which is equivalent to float, and then do your operations with at<float>()
First option:
for (int i = 0; i < 256; i++) {
    hist.at<uint8_t>(i) =
        (static_cast<float>(hist.at<uint8_t>(i)) / pixelAmount) * 255;
}
Second option:
hist.convertTo(hist, CV_32F);
// now do your calculations with at<float>()
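The principle behind the assertion can be illustrated without OpenCV. This is a minimal sketch, where `at_access_ok` is a hypothetical stand-in for the check that cv::Mat::at performs internally:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// A CV_8U Mat stores 1 byte per element; a CV_32F Mat stores 4.
constexpr std::size_t elem_size_cv8u  = sizeof(std::uint8_t); // 1
constexpr std::size_t elem_size_cv32f = sizeof(float);        // 4

// What the assertion enforces: the Mat's per-element byte size must
// equal sizeof(T) of the at<T>() call, otherwise the typed access
// would read the wrong number of bytes per bin.
bool at_access_ok(std::size_t mat_elem_size, std::size_t requested_type_size) {
    return mat_elem_size == requested_type_size;
}
```

So calling `at<float>()` on a histogram whose elements are 1-byte `uint8_t` values trips exactly this check, which is why either the access type or the Mat type has to change.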

Data analysis - memory bug in C++

I am a data scientist, currently working on some C++ code to extract triplet particles from a rather large text file containing 2D coordinate data of particles in ~10⁵ consecutive frames. I am struggling with a strange memory error that I don't seem to understand.
I have a vector of structs, which can be divided into snippets defined by their frame. For each frame, I build an array with unique IDs for each individual coordinate pair; if at any point a coordinate pair is repeated, it is given the ID of the earlier pair. I use this later to decide whether a particle triplet is indeed a trimer.
I loop over all particles and search forward for any corresponding coordinate pair. If no matching particle was found, I define this triplet to be unique and push the coordinates into a vector that corresponds to particle IDs.
The problem is: after the 18th iteration, at the line trimerIDs[i][0] = particleCounter;, the variable trimerCands (my big vector) suddenly becomes unreadable. Could it be that the vector's internal pointer is being overwritten? I put this vector fully on the heap, but even if I put it on the stack, the error persists.
Do any of you have an idea of what I might be overlooking? Please note that I am rather new to C++, coming from other, less close-to-the-metal languages. While I think I understand how stack/heap allocations work, especially with respect to vectors/vector structs, I might be very wrong!
The error that Eclipse gives me in the variables tab is:
Failed to execute MI command:
-data-evaluate-expression trimerCands
Error message from debugger back end:
Cannot access memory at address 0x7fff0000000a
The function is as follows.
struct trimerCoords{
    float x1, y1, x2, y2, x3, y3;
    int frame;
    int tLength1, tLength2, tLength3;
};

void removeNonTrimers(std::vector<trimerCoords> trimerCands, int *trCandLUT){
    // trimerCands is a vector containing possible trimers, tLengthx is an attribute of the particle;
    // trCandLUT is a look-up table array with indices;
    for (int currentFrame = 1; currentFrame <= framesTBA; currentFrame++){ // for each individual frame
        int nTrimers = trCandLUT[currentFrame] - trCandLUT[currentFrame-1]; // the number of trimers for this specific frame
        int trimerIDs[nTrimers][3] = {0}; // preallocate an array for each of the individual particles in each triplet;
        int firstTrim = trCandLUT[currentFrame-1]; // first index for this particular frame
        int lastTrim = trCandLUT[currentFrame] - 1; // last index for this particular frame
        bool found;
        std::vector<int> traceLengths;
        traceLengths.reserve(nTrimers*3);
        // Block of code to create a unique ID array for this particular frame
        std::vector<Particle> currentFound;
        Particle tempEntry;
        int particleCounter = 0;
        for (int i = firstTrim; i <= lastTrim; i++){
            // first triplet particle. In the real code, this is repeated three times, for x2/y2 and x3/y3, corresponding to the
            tempEntry.x = trimerCands[i].x1;
            tempEntry.y = trimerCands[i].y1;
            found = false;
            for (long unsigned int j = 0; j < currentFound.size(); j++){
                if (fabs(tempEntry.x - currentFound[j].x) + fabs(tempEntry.y - currentFound[j].y) < 0.001){
                    trimerIDs[i][0] = j; found = true; break;
                }
            }
            if (found == false) {
                currentFound.push_back(tempEntry);
                traceLengths.push_back(trimerCands[i].tLength1);
                trimerIDs[i][0] = particleCounter;
                particleCounter++;
            }
        }
        // end of the create-unique-ID code block
        compareTrips(nTrimers, trimerIDs, traceLengths, trimerfile_out);
    }
}
If anything's unclear, let me know!

Segmentation Fault when using vtkPolyLine in custom Paraview Filter

I want to display multiple sets of 3D points using vtkPolyLine.
The points are stored as Nodes(custom class) in a multidimensional vector:
vector<vector <Node> > criticalLines; where a node has: double posX; double posY; double posZ; to store its position.
For the following section I tried to use vtkPolyLine similar to this example:
http://www.paraview.org/Wiki/VTK/Examples/Cxx/GeometricObjects/PolyLine
This function is called after the vector has been filled with nodes:
void Algorithm::displayLines(vtkSmartPointer<vtkPoints> points, vtkSmartPointer<vtkCellArray> lines)
{
    for(int i = 0; i < criticalLines.size(); i++)
    {
        if(criticalLines[i].empty())
        {
            continue;
        }
        vtkSmartPointer<vtkPolyLine> polyLine =
            vtkSmartPointer<vtkPolyLine>::New();
        for(int j = 0; j < criticalLines[i].size(); ++j)
        {
            vtkIdType idx = points->InsertNextPoint(criticalLines[i][j].posX,
                                                    criticalLines[i][j].posY,
                                                    criticalLines[i][j].posZ);
            // print posX, posY, posZ of current Node
            criticalLines[i][j].PrintSelf();
            // Seg. fault occurs here
            polyLine->GetPointIds()->SetId(j, idx);
        }
        lines->InsertNextCell(polyLine);
    }
}
Both points and lines are defined in Algorithm.h file and initialized in the constructor as follows:
points = vtkSmartPointer<vtkPoints>::New();
lines = vtkSmartPointer<vtkCellArray>::New();
And added to vtkPolyData later on:
vtkSmartPointer<vtkPolyData> opd=vtkSmartPointer<vtkPolyData>::New() ;
opd->SetPoints(algorithm.points);
opd->SetLines(algorithm.lines);
The output of criticalLines[i][j].PrintSelf(); shows the values as expected.
When using vtkSmartPointer<vtkTriangle> triangle = vtkSmartPointer<vtkTriangle>::New(); instead of vtkPolyLine everything works fine.
The solution to the somewhat related problem "create multiple polylines given a set of points using vtk" did not seem to be what I was looking for.
I am not sure what is missing/wrong in my Code.
Please let me know if you need more information.
Any help is very much appreciated!
Your vtkPolyLine needs to allocate some space for the point IDs, like
polyLine->GetPointIds()->SetNumberOfIds(5);
in the example you linked to. In your case, you need to call
polyLine->GetPointIds()->SetNumberOfIds(criticalLines[i].size());
right after creating polyLine.

C++ and SDL problem

I want to blit surfaces that I've created in two classes. One is called Map, that holds the relevant map vector as well as some other stuff. The other is a Tile class. There is a problem when I run the program.
I get no errors, and the program runs as it should. Any ideas? It's probably a stupid mistake somewhere.
Map populate
void map::Populate(map M)
    for(int x=0; x<=19; x++)
    {
        for(int y=0; y<=15; y++)
        {
            int y2 = (y*32);
            int x2 = (y*32);
            Tile T(x2,y2);
            M.AddToMap(&T);
            printf("Added Tile");
Render
void map::Render(SDL_Surface* screen)
{
    for(int x=0; x<grid.size(); x++)
    {
        printf("test");
        Tile* T = grid[x];
        SDL_Surface* k = T->GetIcon();
        SDL_Rect dstrect;
        dstrect.x = (screen->w - k->w) / 2;
        dstrect.y = (screen->h - k->h) / 2;
        SDL_BlitSurface(k, 0, screen, &dstrect);
You're not stating what the problem actually is, just that the program "runs as it should".
Problems in your code:
int x2 = (y*32); should likely be x*32.
void map::Populate(map M) takes a map by value - this copies the map you pass, and any changes will not be visible in the passed map. map & M passes a reference, so changes will be seen in the map you pass.
M.AddToMap(&T) adds a pointer to the local Tile variable, which gets invalidated each iteration of the inner loop. More likely you want new Tile(T) there, or better yet a smart pointer such as boost's shared_ptr. Remember that you also need to delete those Tiles if you don't use a smart pointer.
New code:
void map::Populate(map & M)
    for(int x=0; x<20; x++)
    {
        for(int y=0; y<16; y++)
        {
            int y2 = (y*32);
            int x2 = (x*32);
            M.AddToMap(new Tile(x2,y2));
            printf("Added Tile");
You are adding a pointer to a local variable to your map in Populate. If the method doesn't make a copy of the input, this is most likely wrong. Make a copy (pass by value) or store a smart pointer to your Tile. Of course you can store a plain old pointer, but make sure to delete those Tiles in the end!
Assuming your problem is that the image doesn't show up you may need to post the setup code for the screen and surfaces so we can see if that is the problem.
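The pointer-to-local pitfall both answers point at can be sketched in plain C++. `Tile` here is a hypothetical stand-in (the question's class isn't shown), and `PopulateSafe` illustrates the heap-allocation fix rather than the real `map::Populate`:

```cpp
#include <cassert>
#include <vector>

// Hypothetical stand-in for the question's Tile class.
struct Tile {
    int x, y;
    Tile(int x_, int y_) : x(x_), y(y_) {}
};

// Broken pattern from the question: grid.push_back(&T) with a loop-local T
// stores a pointer that dangles as soon as the iteration ends.
// Safe pattern: allocate each Tile on the heap so it outlives the loop
// (the caller must delete them, as the answer notes).
std::vector<Tile*> PopulateSafe() {
    std::vector<Tile*> grid;
    for (int x = 0; x < 20; x++)
        for (int y = 0; y < 16; y++)
            grid.push_back(new Tile(x * 32, y * 32)); // lives past the loop
    return grid;
}
```

With a modern compiler, `std::vector<std::unique_ptr<Tile>>` (or storing `Tile` by value) would make the manual delete unnecessary, which is the same point the answer makes with boost's shared_ptr.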

OpenCV 1.1 K-Means Clustering in High Dimensional Spaces

I am trying to write a bag-of-features image recognition system. One step in the algorithm is to take a large number of small image patches (say 7x7 or 11x11 pixels) and try to cluster them into groups that look similar. I get my patches from an image, turn them into grey-scale floating point image patches, and then try to get cvKMeans2 to cluster them for me. I think I am having problems formatting the input data such that KMeans2 returns coherent results. I have used KMeans for 2D and 3D clustering before, but 49D clustering seems to be a different beast.
I keep getting garbage values for the returned clusters vector, so obviously this is a garbage in / garbage out type problem. Additionally the algorithm runs way faster than I think it should for such a huge data set.
In the code below the straight memcpy is only my latest attempt at getting the input data in the correct format, I spent a while using the built in OpenCV functions, but this is difficult when your base type is CV_32FC(49).
Can OpenCV 1.1's KMeans algorithm support this sort of high dimensional analysis?
Does someone know the correct method of copying from images to the K-Means input matrix?
Can someone point me to a free, Non-GPL KMeans algorithm I can use instead?
This isn't the best code as I am just trying to get things to work right now:
std::vector<int> DoKMeans(std::vector<IplImage *>& chunks){
    // the size of one image patch, CELL_SIZE = 7
    int chunk_size = CELL_SIZE*CELL_SIZE*sizeof(float);
    // create the input data, CV_32FC(49) is a 7x7 float object (I think)
    CvMat* data = cvCreateMat(chunks.size(), 1, CV_32FC(49));
    // Create a temporary array to hold our data;
    // we'll copy it into the matrix for KMeans
    int rdsize = chunks.size()*CELL_SIZE*CELL_SIZE;
    float * rawdata = new float[rdsize];
    // Go through each image chunk and copy the
    // pixel values into the raw data array.
    vector<IplImage*>::iterator iter;
    int k = 0;
    for( iter = chunks.begin(); iter != chunks.end(); ++iter )
    {
        for( int i = 0; i < CELL_SIZE; i++)
        {
            for( int j = 0; j < CELL_SIZE; j++)
            {
                CvScalar val;
                val = cvGet2D(*iter, i, j);
                rawdata[k] = (float)val.val[0];
                k++;
            }
        }
    }
    // Copy the data into the CvMat for KMeans.
    // I have tried various methods, but this is just the latest.
    memcpy(data->data.ptr, rawdata, rdsize*sizeof(float));
    // Create the output array
    CvMat* results = cvCreateMat(chunks.size(), 1, CV_32SC1);
    // Do KMeans
    int r = cvKMeans2(data, 128, results,
                      cvTermCriteria(CV_TERMCRIT_EPS+CV_TERMCRIT_ITER, 1000, 0.1));
    // Copy the grouping information to our output vector
    vector<int> retVal;
    for( int y = 0; y < chunks.size(); y++ )
    {
        CvScalar cvs = cvGet1D(results, y);
        int g = (int)cvs.val[0];
        retVal.push_back(g);
    }
    return retVal;
}
Thanks in advance!
Though I'm not familiar with "bag of features", have you considered using feature points like corner detectors and SIFT?
You might like to check out http://bonsai.ims.u-tokyo.ac.jp/~mdehoon/software/cluster/ for another open source clustering package.
Using memcpy like this seems suspect, because when you do:
int rdsize = chunks.size()*CELL_SIZE*CELL_SIZE;
If CELL_SIZE and chunks.size() are very large, the product stored in rdsize becomes large too. If it exceeds the largest value an int can hold, it overflows and you may have a problem.
Are you wanting to change "chunks" in this function?
I'm guessing that you don't as this is a K-means problem.
So try passing by reference to const here. (And generally speaking this is what you will want to be doing)
so instead of:
std::vector<int> DoKMeans(std::vector<IplImage *>& chunks)
it would be:
std::vector<int> DoKMeans(const std::vector<IplImage *>& chunks)
Also in this case it is better to use static_cast than the old C-style casts (for example static_cast<float>(variable) as opposed to (float)variable).
Also you may want to delete "rawdata":
float * rawdata = new float[rdsize];
can be deleted with:
delete[] rawdata;
otherwise you may be leaking memory here.
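A simple way to avoid that leak entirely is to let a std::vector own the buffer; its data() pointer can still be handed to memcpy or C APIs. This is a minimal sketch, and MakeRawData is a hypothetical helper rather than code from the question:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Builds the flat float buffer the question fills with new float[rdsize],
// but as a std::vector: same contiguous memory, freed automatically when
// the vector goes out of scope, so there is no delete[] to forget.
std::vector<float> MakeRawData(std::size_t n_chunks, std::size_t cell_size) {
    std::vector<float> rawdata(n_chunks * cell_size * cell_size, 0.0f);
    return rawdata; // ownership moves to the caller; no manual cleanup
}
```

The patch-copying loop would then write into `rawdata[k]` exactly as before, and the memcpy call becomes `memcpy(data->data.ptr, rawdata.data(), rawdata.size() * sizeof(float));`.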