Creating an Array of Structures on the Heap in C++

I need to declare an array of structures on the heap, then transfer data from parallel arrays on the stack and from calculations into each structure. I declared
struct Grades
{
string studentName;
int scores[4];
double average;
};
....
Grades *art1301 = new Grades;
....
(art1301 + i)->studentName = names[i];
for(int i = 0 ; i < 5 ; i++ )
(art1301 + i)->scores[j] = exams[i][j];
(art1301 + i)->average = average;
My program accesses the first record, but it crashes after it accesses the first field of the second record. I don't understand why it works for the first record but dies in the middle of the second. Am I accessing the structure correctly?
Thank you.

To allocate an array, you need the array form of new, with the square brackets:
Grades *art1301 = new Grades[200];
// ^^^^^
The array size can be a dynamically determined quantity.

You aren't allocating memory for an array; you are allocating memory for only one element.
As noted in the comments, the key is the new Grades expression, which creates a single object rather than an array.
In addition, unless you have another i variable declared earlier (which would be bad practice), that code doesn't compile, because (art1301 + i)->studentName = names[i]; will not find the variable i.

Related

Finding the size of an array of pointers?

I'm trying to find the size of an array of pointers. The array is declared as such:
Student *students[ROSTER_MAX];
where ROSTER_MAX is a static const int equal to 1024, and Student is an object that contains an int and two strings. I'm trying to find the size of students (i.e. the number of elements in the array). So far I have tried:
sizeof(students)/sizeof(*(students[0]));
and
sizeof(students)/sizeof(students[0]);
If anyone could help me understand why the previous two (especially the first one) aren't working, and provide an alternative, it would be appreciated!
update:
I'm trying to find the number of non-null elements in the array. The constructor for the array class (called Roster) is:
Roster::Roster() {
    this->numStudents = 0;
    for (int i = 0; i < ROSTER_MAX; i++) {
        this->students[i] = NULL;
    }
}
so I can see how the above lines of code would lead to 1024. But I'm trying to find the number of initialized elements.

Counting the elements in a local array

I have a double x[12] which has no elements in it. When the user is prompted, he/she enters a number, which is stored in x.
I want the program to first check if x is empty and if it is, put the user's input in x[0] or if it isn't, put the user's input in the next free index.
I had done this:
...
double x[12];
void AddPayment(double Amount)
{
int i = sizeof(x);
x[i] = Amount;
}
Is it that sizeof() doesn't work with arrays? Is there a better way of doing this?
When sizeof is applied to an array, it does not tell you how much data the array holds; it tells you how much data the array could hold. The fact that you did not specify any data to put into your double x[12] has no influence on the size of the array. Therefore, sizeof would return the number of bytes required on your system to hold an array of twelve doubles.
If you would like to keep a count of how many items among 12 have been assigned, add a separate variable for it. Initialize it to zero, and use it to keep track of how many items have been inserted:
#include <cstddef>  // for size_t
#include <iostream> // for std::cerr

size_t x_count = 0;
double x[12];

void AddPayment(double Amount) {
    if (x_count == 12) {
        // Error; we cannot add more than 12 items.
        // Tell the user what's going on and quit,
        // or handle the error in some other way.
        std::cerr << "Cannot add more than 12 elements to x[]" << std::endl;
        return;
    }
    x[x_count++] = Amount;
}
Whether x[12] has values or not, it will always have a size of 12 * sizeof(double).
So using the sizeof() operator is not a good way to accomplish your aim.
One approach would be to initialize x[12] with a value the user cannot enter, say 0, and then store each new value in the first location of the array that still holds zero:
double x[12] = { 0 };

void AddPayment(double Amount)
{
    for (int i = 0; i < 12; i++) {
        if (x[i] == 0) {
            x[i] = Amount;
            break;
        }
    }
}
I have a double x[12] which has no elements in it.
That's a misconception. double x[12]; creates 12 double elements, leaving their values in an undefined status. So you have 12 uninitialised elements. That's quite different from having no elements.
(The misconception would become even clearer if you had, for example, an array of type std::string. Unlike double, std::string is always initialised to a defined value, an empty string. So std::string x[12] would definitely be 12 strings, not an empty array.)
When the user is prompted, he/she enters a number, which is stored in x.
I want the program to first check if x is empty and if it is, put the user's input in x[0], or if it isn't, put the user's input in the next free index.
I'm really surprised that nobody has suggested this, but an array is the wrong tool for what you are trying to accomplish. You need a container which can grow. You need std::vector:
std::vector<double> x; // starts off empty
void AddPayment(double Amount)
{
x.push_back(Amount); // grows by 1 element
}
std::vector also has a size() member function to tell you the current number of elements. No more sizeof needed.

Memory allocation with unknown number of elements

Since the number of elements is determined by some conditions, I wrote a program like this:
int i = 0;
int *layer;
while (i != 12){
layer = new int;
layer[i] = i;
cout << layer[i] << endl;
i++;
}
delete[] layer;
return 0;
I get the result;
0
1
2
3
4
5
6
And then the program crashes. What is the reason for this, and how should I modify the program to allocate memory for an unknown number of elements?
Thanks in advance!
You have undefined behaviour. You allocate space for a single int, then you treat it as an array.
layer = new int; // single int
layer[i] = i; // wat??
Then you leak the memory, then you call delete[] on the last newed int. Since it isn't clear what you want to do with your code, I can only offer some suggestions:
Consider using std::vector<int> as a dynamic array. It will save you a lot of trouble.
If you must allocate an array with new (you probably don't), you need new int[n] where n is the number of elements.
Call delete [] for every new[] and delete for every new.

Large Malloc Array data lost after successful assigning it to memory

I am trying to store a large amount of data in multiple malloc'd arrays.
I have three malloc'd arrays: two 2D char arrays and one int array. In a test case the array sizes are defined as:
cres=12163;
catm=41241;
matm = (char**) malloc(catm*sizeof(char*));
for(i=0;i<catm;i++)
matm[i]=(char*) malloc(5*sizeof(char));
mres = (char**) malloc(cres*sizeof(char*));
for(i=0;i<cres;i++)
mres[i]=(char*) malloc(5*sizeof(char));
mrin = (int*) malloc(cres*sizeof(int));
I read the data from a file. The data stored in these arrays is in the right format if I print it as it is being stored. But when I try to retrieve data from the character arrays after assigning values to the int array, the character arrays change their column length to 14 and the value is set to 8.50000000E-01.
I am using Linux openSUSE and the g++ compiler.
Any solution, or an alternate method to store a large amount of data, would be appreciated.
Sorry for all the confusion; the blunder was on my part: I was assigning the file-handling line pointer to all the values.
So matm is an array of char* with length catm. You then assign to its elements arrays of char of length 5. Then you do the same for mres, with length cres.
Finally, you allocate and store in mrin an array of cres integers.
Almost certainly you are overflowing one of these arrays. You can use valgrind to figure out which, most likely automatically, by simply running valgrind ./a.out or whatever your program is called. It will print stack traces where memory errors occur.
You may simply have strings longer than 4 characters (plus the terminating null). You don't show the code where you populate the arrays.
Since you're using a C++ compiler, you should consider using C++ containers like std::vector<char> and std::string instead of raw C arrays which are error-prone as you have discovered.
OK, so I am going to take a crack at this... in C!
What you are making are arrays of pointers to char.
So two arrays of pointer to char: one holding 41241 pointers and the other 12163.
One array of int (there is no need for pointers there; just declaring an array of int of size 12163 does the trick).
Further, you are setting each entry of the char pointer arrays to point at 5 chars, which will hold a C-style string of 4 bytes plus the null terminator.
char *strArray1[41241];
char *strArray2[12163];
int   intArray[12163];

for (int x = 0; x < 41241; x++) {
    strArray1[x] = malloc(5 * sizeof(char));
    strcpy(strArray1[x], "fred");
}
for (int x = 0; x < 12163; x++) {
    strArray2[x] = malloc(5 * sizeof(char));
    strcpy(strArray2[x], "Tom");
}
for (int x = 0; x < 12163; x++) {
    intArray[x] = rand() % 50;
}
for (int x = 0; x < 41241; x++) {
    printf(" This entry = %s \n", strArray1[x]);
}
for (int x = 0; x < 12163; x++) {
    printf(" This entry = %s \n", strArray2[x]);
}
for (int x = 0; x < 12163; x++) {
    printf(" This entry = %d \n", intArray[x]);
}
DO NOT try and get cute with C as it will bite you in the ass every time.

Memory allocation with operator new and initialization with data

In my project, there are one million inputs and I am supposed to compare search/sort algorithms with different numbers of inputs, up to one million. I wanted to do memory allocation and initialization with data together, but I realized it is not possible. So I decided to do it like this:
double temp1, temp2, temp3; //Each line has three numbers
int i;
Person *list[N]; //Here, stackoverflow occurs, for example N=500000
for(i=0; i<N; i++){
file >> temp1 >> temp2 >> temp3;
list[i] = new Person(temp1, temp2, temp3); //I wanted to initialize with data
} //but if I wrote "new Person[N]"
//stackoverflow doesn't occur
But there is a stack overflow with huge numbers, for example N = 500000.
So, is there any method which combines these two (no overflow, and initialization with data)?
Secondly, is there any difference between these two pieces of code:
Person *list[N];
for(i=0; i<N; i++){
list[i] = new Person();
}
Person *list = new list[N];
As a beginner, it's best to avoid using your own containers. You can just use the Standard-provided ones:
...
#include <iostream> // for std::cerr
#include <vector>
#include <cstdlib>  // for EXIT_FAILURE, EXIT_SUCCESS
double temp1, temp2, temp3; //Each line has three numbers
std::vector<Person> people;
for (int i = 0; i < N; i++)
    if (file >> temp1 >> temp2 >> temp3)
        people.emplace_back(temp1, temp2, temp3);
    else
    {
        std::cerr << "error reading 3 numbers from file, terminating\n";
        exit(EXIT_FAILURE);
    }
It's especially useful to use vector (or new Person[n], in contrast to new Person*[n]) because it keeps the data together (contiguous) in memory, so your CPU gets the maximum possible benefit from its caches during the searching and sorting that you want to compare. If your data is harder to access, that overhead will hide the extent of the performance difference between the algorithms under test. With new Person*[n], where every Person object is allocated separately on the heap, the data gets scattered and can be much slower to access.
Just to explain what was happening with your current code: you were trying to put too much data on the stack. You can work around that by keeping a single stack-hosted pointer to the required amount of dynamically allocated memory (it's normal for an application to have massively more dynamic memory available than stack space).
Secondly, is there any difference between these two code;
Person* list[N]; // first
for (i = 0; i < N; i++) {
    list[i] = new Person();
}
Person *list = new Person[N]; // second - corrected from "new list[N]"
The first asks for an array of N Person*s on the stack, then assigns each of those pointers a distinct dynamically allocated memory address. At best, that will use almost as much stack memory as trying to put Person list[N]; directly on the stack (and at worst around double), and it is likely to fail the same way. It also scatters the Person data around in dynamic memory, and operations on the data will be unnecessarily slow.
The second creates one dynamically-allocated memory region big enough for N Persons, and keeps a single pointer to it on the stack. That's not unreasonable (but std::vector's still a better idea).
In your example,
Person *list[N];
is created as a local variable on the stack. 500,000 pointers would take up about 2 MB with 32-bit pointers (4 MB with 64-bit pointers), which is likely to exceed the default stack size on many machines. http://msdn.microsoft.com/en-us/library/windows/desktop/ms686774(v=vs.85).aspx
However,
//Person *list = new list[N];
Person **list = new Person* [N];
will create your array of pointers on the heap, and you should be able to allocate that without running out of memory. However, each Person object still requires its own allocation in addition to the array of pointers.