Can int values reset if too much is loaded at once? - c++

In my C++ project, I have some int values (I am tracking two of them specifically). These values are set when the user loads a file, and they should never change throughout the entire program.
Later on in my program, I load about 30 MB of data into around 3000 QString variables (4 arrays). It seems that once a certain number of strings has been loaded, my int values reset to zero. I only use them at the beginning and end of my program.
I didn't include any source code simply because my program is huge and I don't feel comfortable putting all of it on the web.
So my question is: is it possible for some variables to be "reset" because new variables are being filled? I get no errors or freezing like I would expect from a bad allocation. This has got me completely puzzled.
Thanks for your time :)
EDIT:
Here is the exact spot where I notice my int values get reset. By the way, all of this code worked when editing a smaller number of files.
//These loops input 2916 files each around 10kb. They are loaded into 4 QString arrays.
if(OregionBR != "Null")
{
    for(int z=0; z <=26; z++)
    {
        for(int x=0; x <=26; x++)
        {
            temp_hex = OregionBR.mid((z*256)+(x*8), 6);
            if(temp_hex != "000000")
            {
                temp_hex.append("000");
                HexToInt(temp_hex, temp_int);
                //Here, the files are input.
                Input("stuff\\regions\\xbox_chunks\\br\\" + QString::number(temp_int) + ".dat", temp_chunk);
                //... minor file changes
                //The file is then loaded into the array
                OBRChunks[(zPos*27) + xPos] = temp_chunk;
                //... minor file changes
            }
        }
    }
}
//level_ptr is my int value. This number is around 150,000
QMessageBox::information(this, "test", QString::number(level_ptr));
if(OregionBL != "Null")
{
    for(int z=0; z <=26; z++)
    {
        for(int x=0; x <=26; x++)
        {
            temp_hex = OregionBL.mid((z*256)+(x*8)+40, 6);
            if(temp_hex != "000000")
            {
                temp_hex.append("000");
                HexToInt(temp_hex, temp_int);
                Input("stuff\\regions\\xbox_chunks\\bl\\" + QString::number(temp_int) + ".dat", temp_chunk);
                //... minor file changes
                OBLChunks[(zPos*27) + xPos] = temp_chunk;
                //... minor file changes
            }
        }
    }
}
//level_ptr is my int value. This number is around 150,000
QMessageBox::information(this, "test", QString::number(level_ptr));
if(OregionTR != "Null")
{
    for(int z=0; z <=26; z++)
    {
        for(int x=0; x <=26; x++)
        {
            temp_hex = OregionTR.mid((z*256)+(x*8)+1280, 6);
            if(temp_hex != "000000")
            {
                temp_hex.append("000");
                HexToInt(temp_hex, temp_int);
                Input("stuff\\regions\\xbox_chunks\\tr\\" + QString::number(temp_int) + ".dat", temp_chunk);
                //... minor file changes
                OTRChunks[(zPos*27) + xPos] = temp_chunk;
                //... minor file changes
            }
        }
    }
}
//index_ptr is my int value. NOW IT SAYS LEVEL_PTR IS 0!
QMessageBox::information(this, "test", QString::number(level_ptr));
if(OregionTL != "Null")
{
    for(int z=0; z <=26; z++)
    {
        for(int x=0; x <=26; x++)
        {
            temp_hex = OregionTL.mid((z*256)+(x*8)+1320, 6);
            if(temp_hex != "000000")
            {
                temp_hex.append("000");
                HexToInt(temp_hex, temp_int);
                Input("stuff\\regions\\xbox_chunks\\tl\\" + QString::number(temp_int) + ".dat", temp_chunk);
                //... minor file changes
                OTLChunks[(zPos*27) + xPos] = temp_chunk;
                //... minor file changes
            }
        }
    }
}

It's possible for variables to be "reset" if you have a bug in your program. This is called memory corruption, and it has many, many causes.
If you want help with the bug in your program, you are going to have to show us the code. Try to produce a smaller version of your program that still exhibits the same or a similar problem.
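For illustration only (this is not the asker's code), here is a minimal sketch of one common cause of memory corruption: an out-of-bounds array write that silently lands on a neighbouring variable. Because the behaviour is undefined, there is no error message or crash, just a value that mysteriously changes:
#include <iostream>

int main()
{
    int level_value = 150000;     // hypothetical "never changes" value
    int chunks[4] = {0, 0, 0, 0};

    // Bug: the loop runs one element past the end of chunks.
    // Depending on how the compiler lays out the stack, chunks[4]
    // may overlap level_value and silently set it to 0.
    for (int i = 0; i <= 4; ++i)
        chunks[i] = 0;

    std::cout << level_value << '\n';  // may print 0 instead of 150000
}
In a large program the overwritten variable can be far away from the buggy write, which is why memory corruption is so hard to track down without a reduced test case.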

Is it possible that you are going out of range? Something like
int temp = value_larger_than_INT_MAX;
In this case the value will be truncated, so you might see 0 or small numbers instead of very large numbers.
It sounds plausible to me after reading through your code, so consider using unsigned int instead of int, or a wider type such as long long.
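As a standalone sketch of the kind of truncation meant here (the variable names are made up for the example):
#include <iostream>

int main()
{
    long long big = 4294967296LL;            // 2^32 does not fit in 32 bits
    int truncated = static_cast<int>(big);   // out-of-range conversion: the high bits are lost

    std::cout << truncated << '\n';          // commonly prints 0 where int is 32 bits
}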
hope this helps!

Related

Performance bottleneck in writing a large matrix of doubles to a file

My program opens a file which contains 100,000 numbers and parses them into a 10,000 x 10 array corresponding to 10,000 sets of 10 physical parameters. The program then iterates through each row of the array, performing overlap calculations between that row and every other row in the array.
The process is quite simple, and being new to C++, I programmed it in the most straightforward way that I could think of. However, I know that I'm not doing this in the most optimal way possible, which is something that I would love to fix, as the program is going to face off against my cohort's identical program, coded in Fortran, in a "race".
I have a feeling that I am going to need to implement multithreading to accomplish my goal of speeding up the program, but not only am I new to C++, I am also new to multithreading, so I'm not sure how I should go about creating new threads in a beneficial way, or whether it would even give me that much "gain on investment", so to speak.
The program has the potential to be run on a machine with over 50 cores, but because the program is so simple, I'm not convinced that more threads is necessarily better. I think that if I implement two threads to compute the complex parameters of the two Gaussians, one thread to compute the overlap between the Gaussians, and one thread dedicated to writing to the file, I could speed up the program significantly, but I could also be wrong.
CODE:
cout << "Working...\n";
double **gaussian_array;
gaussian_array = (double **)malloc(N*sizeof(double *));
for(int i = 0; i < N; i++){
    gaussian_array[i] = (double *)malloc(10*sizeof(double));
}
fstream gaussians;
gaussians.open("GaussParams", ios::in);
if (!gaussians){
    cout << "File not found.";
}
else {
    //generate the array of gaussians -> [10000][10]
    int i = 0;
    while(i < N) {
        char ch;
        string strNums;
        string Num;
        string strtab[10];
        int j = 0;
        getline(gaussians, strNums);
        stringstream gaussian(strNums);
        while(gaussian >> ch) {
            if(ch != ',') {
                Num += ch;
                strtab[j] = Num;
            }
            else {
                Num = "";
                j += 1;
            }
        }
        for(int c = 0; c < 10; c++) {
            stringstream dbl(strtab[c]);
            dbl >> gaussian_array[i][c];
        }
        i += 1;
    }
}
gaussians.close();
//Below is the process to generate the overlap file between all gaussians:
string buffer;
ofstream overlaps;
overlaps.open("OverlapMatrix", ios::trunc);
overlaps.precision(15);
for(int i = 0; i < N; i++) {
    for(int j = 0 ; j < N; j++){
        double r1[6][2];
        double r2[6][2];
        double ol[2];
        //compute complex parameters from the two gaussians
        compute_params(gaussian_array[i], r1);
        compute_params(gaussian_array[j], r2);
        //compute overlap between the gaussians using the complex parameters
        compute_overlap(r1, r2, ol);
        //write to file
        overlaps << ol[0] << "," << ol[1];
        if(j < N - 1)
            overlaps << " ";
        else
            overlaps << "\n";
    }
}
overlaps.close();
return 0;
Any suggestions are greatly appreciated. Thanks!
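As a hedged sketch of the multithreading idea described above (not a drop-in replacement for the code in the question; the compute_params and compute_overlap declarations are assumed from their usage, and everything else is illustrative), one simple approach is to split the outer i loop across a few worker threads and give each thread its own output buffer:
#include <fstream>
#include <sstream>
#include <string>
#include <thread>
#include <vector>

// Assumed declarations matching the question's usage.
void compute_params(const double* gaussian, double r[6][2]);
void compute_overlap(const double r1[6][2], const double r2[6][2], double ol[2]);

void write_overlaps(const std::vector<std::vector<double>>& g, int N, int nthreads)
{
    std::vector<std::string> chunks(nthreads);   // one output buffer per thread
    std::vector<std::thread> workers;

    for (int t = 0; t < nthreads; ++t) {
        workers.emplace_back([&, t] {
            std::ostringstream out;
            out.precision(15);
            // Each thread handles its own contiguous band of rows.
            for (int i = t * N / nthreads; i < (t + 1) * N / nthreads; ++i) {
                double r1[6][2];
                compute_params(g[i].data(), r1);          // hoisted out of the j loop
                for (int j = 0; j < N; ++j) {
                    double r2[6][2], ol[2];
                    compute_params(g[j].data(), r2);
                    compute_overlap(r1, r2, ol);
                    out << ol[0] << ',' << ol[1] << (j < N - 1 ? " " : "\n");
                }
            }
            chunks[t] = out.str();
        });
    }
    for (auto& w : workers)
        w.join();

    std::ofstream overlaps("OverlapMatrix", std::ios::trunc);
    for (const auto& c : chunks)
        overlaps << c;   // rows come out in their original order
}
Even single-threaded, hoisting the compute_params call for row i out of the inner loop removes N redundant calls per row, and buffering the output in memory instead of streaming every number straight to the file tends to matter as much as the threads themselves.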

How to fix problem with array overload in while loop

I'm trying to fix a SIGSEGV error in my program, but I am not able to locate the source of the error. The program compiles successfully in Xcode but does not give me any results.
The goal of the program is to check whether the same element occurs in three separate arrays and print the element if it appears in at least two of them.
#include <iostream>
using namespace std;

int main()
{
    int i = 0, j = 0, k = 0;
    int a[5]={23,30,42,57,90};
    int b[6]={21,23,35,57,90,92};
    int c[5]={21,23,30,57,90};
    while(i< 5 or j< 6 or k< 5)
    {
        int current_a = 0;
        int current_b = 0;
        int current_c = 0;
        {
            if (i<5)
            {
                current_a = a[i];
            } else
            {
                ;;
            }
            if (j<6)
            {
                current_b = b[j];
            } else
            {
                ;;
            }
            if (k<5)
            {
                current_c= c[k];
            } else
            {
                ;;
            }
        }
        int minvalue = min((current_a,current_b),current_c);
        int countoo = 0;
        if (minvalue==current_a)
        {
            countoo += 1;
            i++;
        }
        if (minvalue==current_b)
        {
            countoo +=1;
            j++;
        }
        if (minvalue==current_c)
        {
            countoo += 1;
            k++;
        }
        if (countoo >=2)
        {
            cout<< minvalue;
        }
    }
}
I am not getting any output for the code.
This is surely not doing what you want
int minvalue = min((current_a,current_b),current_c);
If min() is defined meaningfully (you really should provide an MCVE for a question like this), you want
int minvalue = min(min(current_a,current_b),current_c);
This gives the minimum of the minimum of (a and b) and c, i.e. the minimum of all three, instead of the minimum of just b and c. The comma operator , is the key to understanding why the original line is wrong.
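A tiny standalone illustration of that difference (assuming min is std::min):
#include <algorithm>
#include <iostream>

int main()
{
    int a = 2, b = 5, c = 4;

    // (a, b) is a comma expression: a is evaluated and discarded, the result is b.
    int wrong = std::min((a, b), c);           // min(5, 4) == 4, a is ignored
    int right = std::min(std::min(a, b), c);   // min(min(2, 5), 4) == 2

    std::cout << wrong << ' ' << right << '\n';   // prints: 4 2
}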
This seems to be a flag/counter meant to carry a note across loop iterations or to count something:
int countoo = 0;
It cannot work that way, however, if you define the variable inside the loop, because it is re-created and reset to 0 on every iteration.
You need to move that line BEFORE the while.
With this line you do not prevent the indexes from going past the ends of the arrays,
which is very likely the source of the segfault.
while(i< 5 or j< 6 or k< 5)
In order to prevent segfaults, make sure that ALL indexes stay in range,
instead of only at least one of them.
while(i< 5 && j< 6 && k< 5)
(By the way, I initially seriously doubted that or would compile at all. I thought it might only work via a macro for or, but that is not what is happening here; it turns out that or is a standard alternative token for ||, which I had missed. I learned something here.)
This should fix the segfault.
To achieve the stated goal of the code, I think you need to spend some additional effort on the algorithm; I do not really see how the posted code relates to that goal.
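For what it's worth, here is one possible shape of that algorithm (a sketch, not a fix of the posted code): walk the three sorted arrays in lockstep, take the smallest value not yet consumed, and report it when at least two of the arrays contribute it.
#include <algorithm>
#include <iostream>

int main()
{
    int a[5] = {23, 30, 42, 57, 90};
    int b[6] = {21, 23, 35, 57, 90, 92};
    int c[5] = {21, 23, 30, 57, 90};
    int i = 0, j = 0, k = 0;

    while (i < 5 || j < 6 || k < 5)
    {
        const int SENTINEL = 1000000;             // larger than any array element
        int va = (i < 5) ? a[i] : SENTINEL;
        int vb = (j < 6) ? b[j] : SENTINEL;
        int vc = (k < 5) ? c[k] : SENTINEL;

        int m = std::min(std::min(va, vb), vc);   // smallest unconsumed value
        int count = 0;
        if (va == m) { ++count; ++i; }
        if (vb == m) { ++count; ++j; }
        if (vc == m) { ++count; ++k; }

        if (count >= 2)                           // present in at least two arrays
            std::cout << m << ' ';
    }
    std::cout << '\n';                            // prints: 21 23 30 57 90
}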

STATUS_STACK_BUFFER_OVERRUN encountered

I have searched for this particular error and found that the underlying issue usually involves loop counts being wrong, causing the program to run past the bounds of an array.
However, even after I lowered each loop bound to the point where the array began to lose data in the output, the program continued to throw the same error. I am still new to C/C++, but any insight into this would be greatly appreciated.
The program seems to run through to the very end and even returns to the main function.
#include <stdio.h>

void sortAr(char[]);

int main ()
{
    char a='y';
    char b,i;
    char c[20];
    int x=0,n=0,z=0;
    while (x<=19)
    {
        c[x]='#';
        x++;
    }
    printf("Enter 20 letters: \n");
    while (z<=20) //(The '=' caused my problem, removed and it runs fine.)
    {
        z++;
        x=0;
        b='y';
        scanf("%c",&i);
        while (x<=19)
        {
            if (c[x]==i)
                b='n';
            x++;
        }
        if (b=='y')
        {
            c[n]=i;
            n++;
        }
    }
    printf("\n");
    printf("The nonduplicate values are: \n");
    sortAr(c);
}

void sortAr(char ar[])
{
    char z;
    for (int i = 0; i <= 19; i++)
    {
        for (int j=i+1; j <= 19; ++j)
        {
            if (ar[i]>ar[j])
            {
                z = ar[i];
                ar[i] = ar[j];
                ar[j] = z;
            }
        }
    }
    for (int i = 0; i < 20; i++)
    {
        if(ar[i]=='#')
            continue;
        printf("%c ", ar[i]);
    }
    printf("\n");
}
I found the error at:
while (z<=20)
The reason is that the loop executes more times than the array has elements, so it overwrites more characters than intended. As a result it wrote into memory that was not allocated to it and caused the STATUS_STACK_BUFFER_OVERRUN.
Trace Z:
Z was initialized to 0.
Array was initialized to 20.
While loop starts with Z as the counter for read-ins.
z=0 array=1 1st run,
z=1 array=2 2nd run,
z=2 array=3 3rd run,
z=3 array=4 4th run,
...
z=20 array=21 21st run. (Array cannot hold 21st character and results in Stack_Buffer_Overrun.)
Solution:
change while(z<=20) -> while(z<20)
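Isolated from the rest of the program, the off-by-one looks like this (a standalone sketch, not the original code):
#include <stdio.h>

int main(void)
{
    char c[20];
    int z = 0;

    /* while (z <= 20) would run 21 times: z = 0, 1, ..., 20.      */
    /* With z < 20 the loop runs exactly 20 times, matching c[20]. */
    while (z < 20)
    {
        c[z] = '#';
        z++;
    }

    printf("wrote %d characters\n", z);   /* prints: wrote 20 characters */
    return 0;
}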

std::vector and memory error when resizing

I have a structure defined like this:
struct Edge
{
    int u, v; // vertices

    Edge() { }
    Edge(int u, int v)
    {
        this->u = u;
        this->v = v;
    }
};
and a class field defined like
vector<Edge> solution;
In one of the methods I'm creating new Edges and pushing them into the vector like this (a huge simplification of my real code, but the problem still exists):
solution.push_back(Edge(1, 2));
solution.push_back(Edge(3, 4));
solution.push_back(Edge(5, 6));
solution.push_back(Edge(7, 8));
solution.push_back(Edge(9, 10));
solution.push_back(Edge(11, 12));
solution.push_back(Edge(13, 14)); // adding 7th element; the problem occurs here
When the last push_back is executing, I'm getting an error window in Visual Studio's debug mode
[AppName] has triggered a breakpoint.
and the debugger goes to malloc.c, to the end of the _heap_alloc function. Before this 7th line the vector seems to work properly; I can see all the elements in the debugger. It seems that the vector has a problem reallocating itself (expanding its size).
What's interesting, if I put this before all the pushing back:
solution.reserve(7);
, the 7th edge is added properly. What's even more interesting, trying to reserve space for more than 22 elements also causes the mentioned error.
What am I doing wrong? How can I debug this? The rest of the application doesn't use much memory, so I can't believe the heap is full.
More code, on request. It's a rather sloppy implementation of a 2-approximation algorithm for the metric Travelling Salesman Problem. It first creates a minimum spanning tree, then adds vertices (just their indices) to the partialSolution vector in DFS order.
void ApproxTSPSolver::Solve()
{
    // creating an incidence matrix
    SquareMatrix<float> graph(noOfPoints);
    for (int r=0; r<noOfPoints; r++)
    {
        for (int c=0; c<noOfPoints; c++)
        {
            if (r == c)
                graph.SetValue(r, c, MAX);
            else
                graph.SetValue(r, c, points[r].distance(points[c]));
        }
    }

    // finding a minimum spanning tree
    spanningTree = SquareMatrix<bool>(noOfPoints);
    // zeroing the matrix
    for (int r=0; r<noOfPoints; r++)
        for (int c=0; c<noOfPoints; c++)
            spanningTree.SetValue(r, c, false);

    bool* selected = new bool[noOfPoints];
    memset(selected, 0, noOfPoints*sizeof(bool));
    selected[0] = true; // the first point is initially selected
    float min;
    int minR, minC;
    for (int i=0; i<noOfPoints - 1; i++)
    {
        min = MAX;
        for (int r=0; r<noOfPoints; r++)
        {
            if (selected[r] == false)
                continue;
            for (int c=0; c<noOfPoints; c++)
            {
                if (selected[c] == false && graph.GetValue(r, c) < min)
                {
                    min = graph.GetValue(r, c);
                    minR = r;
                    minC = c;
                }
            }
        }
        selected[minC] = true;
        spanningTree.SetValue(minR, minC, true);
    }
    delete[] selected;

    // traversing the tree
    DFS(0);

    minSol = 0.0f;
    // rewriting the solution to the solver's solution field
    for (int i=0; i<noOfPoints - 1; i++)
    {
        solution.push_back(Edge(partialSolution[i], partialSolution[i + 1]));
        minSol += points[partialSolution[i]].distance(points[partialSolution[i + 1]]);
    }
    solution.push_back(Edge(partialSolution[noOfPoints - 1], partialSolution[0]));
    minSol += points[partialSolution[noOfPoints - 1]].distance(points[partialSolution[0]]);
    cout << endl << minSol << endl;

    solved = true;
}

void ApproxTSPSolver::DFS(int vertex)
{
    bool isPresent = std::find(partialSolution.begin(), partialSolution.end(), vertex)
        != partialSolution.end();
    if (isPresent == false)
        partialSolution.push_back(vertex); // if I comment out this line, the error doesn't occur

    for (int i=0; i<spanningTree.GetSize(); i++)
    {
        if (spanningTree.GetValue(vertex, i) == true)
            DFS(i);
    }
}

class ApproxTSPSolver : public TSPSolver
{
    vector<int> partialSolution;
    SquareMatrix<bool> spanningTree;

    void DFS(int vertex);

public:
    void Solve() override;
};
from main.cpp:
TSPSolver* solver;
string inputFilePath, outputFilePath;

// parsing arguments
if (ArgParser::CmdOptionExists(argv, argv + argc, "/a"))
{
    solver = new ApproxTSPSolver();
}
else if (ArgParser::CmdOptionExists(argv, argv + argc, "/b"))
{
    solver = new BruteForceTSPSolver();
}
else
{
    solver = new BranchAndBoundTSPSolver();
}

inputFilePath = ArgParser::GetCmdOption(argv, argv + argc, "/i");
outputFilePath = ArgParser::GetCmdOption(argv, argv + argc, "/s");

solver->LoadFromFile(inputFilePath);

Timer timer;
timer.start();
solver->Solve();
timer.stop();
cout << timer.getElapsedTime();
A part of TSPSolver.c:
TSPSolver::TSPSolver()
{
    points = NULL;
    solved = false;
}

TSPSolver::~TSPSolver()
{
    if (points)
        delete[] points;
}

void TSPSolver::LoadFromFile(string path)
{
    ifstream input(path);
    string line;
    int nodeID;
    float coordX, coordY;
    bool coords = false;

    minX = numeric_limits<float>::max();
    maxX = numeric_limits<float>::min();
    minY = numeric_limits<float>::max();
    maxY = numeric_limits<float>::min();

    while (input.good())
    {
        if (coords == false)
        {
            getline(input, line);
            if (line == "NODE_COORD_SECTION")
            {
                coords = true;
            }
            else if (line.find("DIMENSION") != string::npos)
            {
                int colonPos = line.find_last_of(":");
                noOfPoints = stoi(line.substr(colonPos + 1));
#ifdef _DEBUG
                cout << noOfPoints << " points" << endl;
#endif
                // allocating memory for this amount of points
                points = new Point[noOfPoints];
            }
        }
        else
        {
            input >> nodeID >> coordX >> coordY;
            points[nodeID - 1].X = coordX;
            points[nodeID - 1].Y = coordY;

            minX = min(minX, coordX);
            maxX = max(maxX, coordX);
            minY = min(minY, coordY);
            maxY = max(maxY, coordY);

            if (nodeID == noOfPoints)
            {
                break;
            }
        }
    }

    input.close();
}
This is rather a comment than an answer, but the comment space is too limited.
If you are on Windows, try Microsoft Application Verifier. It might detect invalid memory accesses.
Another way of detecting such accesses is to reserve empty char arrays initialized to 0.
Open the class where the vector is declared, declare a char array of, let's say, 64 chars before and after your vector, and initialize them to 0.
Then break into your vector code where the error is generated and check the contents of those padding arrays. If they have been written to, something is writing more than it should.
A way to locate the offending access (at least in VC++) is to set a data breakpoint on writes to your padding arrays and then check the call stack.
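A minimal sketch of that padding idea, assuming the vector lives in a class roughly like the asker's solver (all names here are illustrative):
#include <cstring>
#include <vector>

struct Edge { int u, v; };

class SolverUnderTest
{
    char guard_before[64];        // debugging padding, normally not present
    std::vector<Edge> solution;
    char guard_after[64];         // debugging padding, normally not present

public:
    SolverUnderTest()
    {
        std::memset(guard_before, 0, sizeof guard_before);
        std::memset(guard_after, 0, sizeof guard_after);
    }

    // Check from the debugger (or assert on it) whenever you like:
    // if either guard contains a non-zero byte, some code has written
    // past a neighbouring member.
    bool guards_intact() const
    {
        for (char byte : guard_before) if (byte != 0) return false;
        for (char byte : guard_after)  if (byte != 0) return false;
        return true;
    }
};
Note that this only catches overwrites of members adjacent to the vector object itself; corruption of the heap block the vector manages is better caught by a data breakpoint or a tool such as Application Verifier, as suggested above.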
You may be doing out-of-bounds accesses on points in various places, e.g. this one:
input >> nodeID >> coordX >> coordY;
points[nodeID - 1].X = coordX;
What if input failed, or the value is out of range?
I would suggest removing all uses of new, delete, and [] from your code; e.g. assuming points is declared as int *points, replace it with std::vector<int> points. Change all [] accesses to .at() and catch exceptions. Disable copying on all classes that don't have correct copy semantics.
Then you can be more certain that it is not a memory allocation error, a copying error, or an out-of-bounds access (which are strong candidates for explaining your symptoms).
This would also fix the problem that TSPSolver currently does not have correct copy semantics.
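A sketch of what that might look like for the points member (the Point fields are taken from the question's usage; the Allocate and SetPoint helpers are invented for the example):
#include <vector>

struct Point { float X = 0.0f, Y = 0.0f; };

class TSPSolver
{
protected:
    std::vector<Point> points;   // replaces Point* points plus new[]/delete[]
    int noOfPoints = 0;

public:
    TSPSolver() = default;
    // No destructor needed: the vector releases its own storage,
    // and copying the solver now copies the points correctly.

    void Allocate(int n)
    {
        noOfPoints = n;
        points.assign(n, Point{});           // replaces points = new Point[noOfPoints]
    }

    void SetPoint(int nodeID, float x, float y)
    {
        // .at() throws std::out_of_range instead of silently corrupting
        // memory when nodeID - 1 is outside [0, noOfPoints).
        points.at(nodeID - 1).X = x;
        points.at(nodeID - 1).Y = y;
    }
};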
It would be very useful to produce an SSCCE. You mention that there is "a lot of input"; try reducing the input to as small as you can while still reproducing the problem. An SSCCE can include input data as long as it is a manageable size that you can post. Currently you show too much code but not enough, as they say: the problem is still lurking somewhere you haven't posted yet.

For loop for reading strings from a file into a 3D array

I'm having a problem with one of my functions. I'm working on a simple tile map editor, and I'm trying to implement a 3D array to keep track of tiles (x, y, layer). Before this I had a 1D array where all the tiles were just listed sequentially:
bool Map::OnLoad(char* File) {
    TileList.clear();

    FILE* FileHandle = fopen(File, "r");
    if(FileHandle == NULL) {
        return false;
    }
    for(int Y = 0;Y < MAP_HEIGHT;Y++) {
        for(int X = 0;X < MAP_WIDTH;X++) {
            Tile tempTile;
            fscanf(FileHandle, "%d:%d ", &tempTile.TileID, &tempTile.TilePassage);
            TileList.push_back(tempTile);
        }
        fscanf(FileHandle, "\n");
    }
    fclose(FileHandle);
    return true;
}
This basically reads strings from the file, which look like:
2:1 1:0 3:2...
where the first number states the TileID and the second states the tile's passability. The above function works. My 3D arrays are also correctly constructed; I tested them with simple assignments and by reading values back out. The function that gives me problems is the following (please note that the 2, i.e. OnLoad2(), was added so I can keep the old variables and the old function untouched until the prototype is working):
bool Map::OnLoad2(char* File) {
    TileList2.clear();

    FILE* FileHandle2 = fopen(File, "r");
    if(FileHandle2 == NULL) {
        return false;
    }
    for(int Y = 0;Y < MAP_HEIGHT;Y++) {
        for(int X = 0;X < MAP_WIDTH;X++) {
            Tile tempTile;
            fscanf(FileHandle2, "%d:%d ", &tempTile.TileID, &tempTile.TilePassage);
            TileList2[X][Y][0] = tempTile;
        }
        fscanf(FileHandle2, "\n");
    }
    fclose(FileHandle2);
    return true;
}
While this function compiles without any errors, as soon as the application starts it freezes up and crashes. For additional information, MAP_WIDTH and MAP_HEIGHT are both set to 40, and the 3D array was constructed like this:
TileList2.resize(MAP_HEIGHT);
for (int i = 0; i < MAP_HEIGHT; ++i) {
    TileList2[i].resize(MAP_WIDTH);
    for (int j = 0; j < MAP_WIDTH; ++j)
        TileList2[i][j].resize(3);
}
I would appreciate it if you could point out what I need to fix; as far as I can tell I must have messed up the for loop structure, since the 3D array itself initializes and works properly. Thank you for your help!
TileList2.clear();
This line reinitializes TileList2, so it is back to a zero-length outer vector, and the later TileList2[X][Y][0] assignments then index into vectors that no longer exist. Delete that line, and you will probably be okay.
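For illustration, this is roughly what OnLoad2 looks like with that fix applied; instead of simply deleting the clear() call, the sketch re-establishes the 40 x 40 x 3 shape explicitly, which also keeps the function safe if it is ever called before the resize code shown in the question (everything else follows the original):
bool Map::OnLoad2(char* File) {
    // Re-create the MAP_HEIGHT x MAP_WIDTH x 3 shape rather than clear(),
    // which would leave a zero-length outer vector behind.
    TileList2.assign(MAP_HEIGHT,
        std::vector<std::vector<Tile> >(MAP_WIDTH, std::vector<Tile>(3)));

    FILE* FileHandle2 = fopen(File, "r");
    if(FileHandle2 == NULL) {
        return false;
    }
    for(int Y = 0;Y < MAP_HEIGHT;Y++) {
        for(int X = 0;X < MAP_WIDTH;X++) {
            Tile tempTile;
            fscanf(FileHandle2, "%d:%d ", &tempTile.TileID, &tempTile.TilePassage);
            TileList2[X][Y][0] = tempTile;   // in range now that every level is sized
        }
        fscanf(FileHandle2, "\n");
    }
    fclose(FileHandle2);
    return true;
}
(Since MAP_WIDTH and MAP_HEIGHT are both 40 here, indexing as [X][Y] happens to work, but keeping the index order consistent with the construction order, [Y][X], would be safer.)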