VS 2010: Error CXX0030 when dealing with 3D arrays - C++

I'm dealing with someone else's code as part of my assignment and have run into trouble. Instead of running smoothly, the given code throws the mentioned error in the following function:
template <typename T>
inline T ***Create3DArray(int d1, int d2, int d3) {
    T ***retval;
    retval = (T***)malloc((sizeof(T**) + (sizeof(T*) + sizeof(T)*d3)*d2)*d1);
    T **ptr = (T**)(retval + d1);
    T *ptr2 = (T*)(ptr + d1*d2);
    for(int i = 0; i < d1; i++, ptr += d2) {
        retval[i] = ptr; // this line triggers the CXX0030 error
        for(int j = 0; j < d2; j++, ptr2 += d3) {
            retval[i][j] = ptr2;
            if(j == 0) {
                for(int k = 0; k < d3; k++)
                    retval[i][j][k] = 0;
            } else
                memcpy(retval[i][j], retval[i][0], sizeof(T)*d3);
        }
    }
    return retval;
}
Any clues as to why this happens? I doubt that someone would publish their code if it couldn't even be run. Is this maybe a Visual Studio-specific issue?
Edit:
I stress that I'm not the one who wrote the code, and I have very little insight into the big picture (although the problem seems localized). Here's some more info: the line that calls the function Create3DArray is:
float ***pts = Create3DArray<float>(classes->NumParts(), numObjects*m, part_width*part_width);
The arguments are 15, 11988 and 3136, meaning that over 1GB of memory gets allocated.
The link to the project's website is here. The file which I'm currently trying to use can be found under Examples->import_birds200.cpp. Do note that the whole thing is pretty big and uses some 1GB of data.

1. Use of C functions that do not ensure C++ object lifecycle/semantics:
You use malloc() in C++. This is not a great practice, because malloc() doesn't initialize the objects in the allocated area. When you later assign to an object, as in retval[i][j][k] = 0;, your compiler assumes that retval[i][j][k] already contains an object in a valid state.
This isn't the direct cause of your error, but from the second iteration onwards, depending on how T's operator= is implemented, you could end up with corrupted memory.
If you want to proceed with malloc(), you have to use placement new to initialize the objects properly: new (&retval[i][j][k])T(); // placement creation
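As a minimal sketch (not the original code), the zero-fill loop inside Create3DArray could construct the elements with placement new instead of plain assignment, assuming T is default-constructible:

#include <new>   // for placement new
// ... inside Create3DArray, replacing the j == 0 branch:
for(int k = 0; k < d3; k++)
    new (&retval[i][j][k]) T();   // construct a T in the raw malloc'd storage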
You later use memcpy(). This performs a byte-wise clone of your objects, without respecting copy semantics. For example, if your type T had a member pointing to a memory region allocated during its construction, both the clone and the original would then point to the same region. The first one to be destroyed would free the memory; the second would attempt to free it again: memory issues guaranteed!
Prefer std::copy() over memcpy(). But std::copy() requires the destination objects to have been constructed first, so in your context there is no way around placement new: get rid of the special case that uses memcpy().
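As a sketch, and assuming the destination elements have already been constructed (e.g. via placement new as above), the memcpy call could be replaced with:

#include <algorithm>   // for std::copy
// replacing memcpy(retval[i][j], retval[i][0], sizeof(T)*d3):
std::copy(retval[i][0], retval[i][0] + d3, retval[i][j]);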
2. Memory allocation issues:
It may sound trivial, but the allocation could fail. So it would be preferable to add a check (or an assert) to verify that you didn't get a NULL in return!
I'm suggesting this because that's one of the probable causes of CXX0030.
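A minimal sketch of such a check, right after the malloc call in Create3DArray (the assert only fires in debug builds; in release you would want a real error path):

#include <cassert>
#include <cstdlib>
retval = (T***)malloc((sizeof(T**) + (sizeof(T*) + sizeof(T)*d3)*d2)*d1);
assert(retval != NULL && "Create3DArray: allocation failed");
if (retval == NULL)
    return NULL;   // or report the failure in whatever way the caller expects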

Related

Getting segmentation fault (core dumped)

Everything seems to run okay up until the return part of shuffle_array(), but I'm not sure what's going wrong.
int * shuffle_array(int initialArray[], int userSize)
{
    // Variables
    int shuffledArray[userSize]; // Create new array for shuffled
    srand(time(0));
    for (int i = 0; i < userSize; i++) // Copy initial array into new array
    {
        shuffledArray[i] = initialArray[i];
    }
    for (int i = userSize - 1; i > 0; i--)
    {
        int randomPosition = (rand() % userSize);
        int temp = shuffledArray[i];
        shuffledArray[i] = shuffledArray[randomPosition];
        shuffledArray[randomPosition] = temp;
    }
    cout << "The numbers in the initial array are: ";
    for (int i = 0; i < userSize; i++)
    {
        cout << initialArray[i] << " ";
    }
    cout << endl;
    cout << "The numbers in the shuffled array are: ";
    for (int i = 0; i < userSize; i++)
    {
        cout << shuffledArray[i] << " ";
    }
    cout << endl;
    return shuffledArray;
}
Sorry if the spacing is off here; I'm not sure how to copy and paste code in here, so I had to do it by hand.
EDIT: Should also mention that this is just a fraction of code, not the whole project I'm working on.
There are several issues of varying severity, and here's my best attempt at flagging them:
int shuffledArray[userSize];
This array has a variable length. I don't think that it's as bad as other users point out, but you should know that this isn't allowed by the C++ standard, so you can't expect it to work on every compiler that you try (GCC and Clang will let you do it, but MSVC won't, for instance).
srand(time(0));
This is most likely outside the scope of your assignment (you've probably been told "use rand/srand" as a simplification), but rand is actually a terrible random number generator compared to what else the C++ language offers. It is rather slow, it repeats quickly (calling rand() in sequence will eventually start returning the same sequence that it did before), it is easy to predict based on just a few samples, and it is not uniform (some values have a much higher probability of being returned than others). If you pursue C++, you should look into the <random> header (and, realistically, how to use it, because it's unfortunately not a shining example of simplicity).
Additionally, seeding with time(0) will give you sequences that change only once per second. This means that if you call shuffle_array twice in quick succession, you're likely to get the same "random" order. (This is one reason people often call srand once, in main, instead.)
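As a rough sketch of what the <random> alternative mentioned above could look like (this is not part of the assignment's code; std::shuffle also takes care of the copying and swapping for you):

#include <algorithm>
#include <random>
#include <vector>

std::vector<int> shuffled(initialArray, initialArray + userSize); // copy the input
std::random_device rd;                  // nondeterministic seed source
std::mt19937 gen(rd());                 // Mersenne Twister engine
std::shuffle(shuffled.begin(), shuffled.end(), gen);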
for(int i = userSize - 1; i > 0; i--)
By iterating to i > 0, you will never enter the loop with i == 0. This means that there's a chance that you'll never swap the zeroth element. (It could still be swapped by another iteration, depending on your luck, but this is clearly a bug.)
int randomPosition = (rand() % userSize);
You should know that this is biased: because the maximum value of rand() is likely not divisible by userSize, you are marginally more likely to get small values than large values. You can probably just read up the explanation and move on for the purposes of your assignment.
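If you do stay with a hand-written loop, a sketch of an unbiased way to draw the index is a distribution from <random> (gen here is assumed to be a std::mt19937 seeded elsewhere, as in the sketch above):

std::uniform_int_distribution<int> dist(0, userSize - 1);   // inclusive range
int randomPosition = dist(gen);                             // unbiased, unlike rand() % userSize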
return shuffledArray;
This is a hard error: it is never legal to return a pointer to storage that belongs to the function. In this case, the memory for shuffledArray is allocated automatically at the beginning of the function and, importantly, it is deallocated automatically at the end: this means that your program will reuse it for other purposes. Reading from it is likely to return values that have since been overwritten, and writing to it is likely to overwrite memory that is currently used by other code, which can have catastrophic consequences.
Of course, I'm writing all of this assuming that you use the result of shuffle_array. If you don't use it, you should just not return it (although in this case, it's unlikely to be the reason that your program crashes).
Inside a function, it's fine to pass a pointer to automatic storage to another function, but it's never okay to return that. If you can't use std::vector (which is the best option here, IMO), you have three other options:
have shuffle_array accept a shuffledArray[] that is the same size as initialArray already, and return nothing;
have shuffle_array modify initialArray instead (the shuffling algorithm that you are using is in-place, meaning that you'll get correct results even if you don't copy the original input)
dynamically allocate the memory for shuffledArray using new, which will prevent it from being automatically reclaimed at the end of the function.
Option 3 requires you to use manual memory management, which is generally frowned upon these days. I think that option 1 or option 2 is best. Option 1 would look like this:
void shuffle_array(int initialArray[], int shuffledArray[], int userSize) { ... }
where userSize is the size of both initialArray and shuffledArray. In this scenario, the caller needs to own the storage for shuffledArray.
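A minimal sketch of what option 1 might look like, keeping rand for simplicity (see the notes above about <random> and bias); this is not the assignment's code, just an illustration where the caller provides both arrays of length userSize:

void shuffle_array(int initialArray[], int shuffledArray[], int userSize)
{
    for (int i = 0; i < userSize; i++)       // copy the input
        shuffledArray[i] = initialArray[i];
    for (int i = userSize - 1; i > 0; i--)   // Fisher-Yates shuffle
    {
        int j = rand() % (i + 1);            // index in [0, i]
        int temp = shuffledArray[i];
        shuffledArray[i] = shuffledArray[j];
        shuffledArray[j] = temp;
    }
}

The caller then owns the storage, e.g. a local array or a std::vector whose data() it passes in.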
You should NOT return a pointer to a local variable. After the function returns, shuffledArray gets deallocated and you're left with a dangling pointer.
You cannot return a local array. The local array's memory is released when you return (did the compiler warn you about that?). If you do not want to use std::vector, then create your result array using new:
int *shuffledArray = new int[userSize];
Your caller will have to delete[] it (not necessary with std::vector).
When you define non-static variables inside a function, those variables reside on the function's stack. Once you return from the function, that stack frame is gone. In your program, you are trying to return a local array which will be gone once control leaves shuffle_array().
To solve this, either you need to define the array globally (which I would not prefer, because global variables are dangerous) or use dynamic memory allocation, which creates space for the array on the heap rather than on the function's stack. You can also use std::vector, if you are familiar with it.
To allocate the memory dynamically, you have to use new as shown below:
int *shuffledArray = new int[userSize];
and once you are done using shuffledArray, you need to free the memory as below:
delete [] shuffledArray;
otherwise your program will leak memory.
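A minimal end-to-end sketch of this approach (only the allocation, return, and cleanup are shown; the shuffling itself is as discussed above, and the caller-side names are made up for illustration):

int *shuffle_array(int initialArray[], int userSize)
{
    int *shuffledArray = new int[userSize];   // lives on the heap, survives the return
    for (int i = 0; i < userSize; i++)
        shuffledArray[i] = initialArray[i];
    // ... shuffle shuffledArray in place ...
    return shuffledArray;
}

// caller:
int *result = shuffle_array(numbers, size);
// ... use result ...
delete[] result;   // the caller owns the memory now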

C++ C2040: 'Block***' differs in levels of indirection (& memory assistance)

I've looked in many places without finding enough information to help me solve my issue. Basically, I want a three-dimensional array of 16*16*256 instances of a class. This caused a stack overflow, so I attempted a vector, but that also crashed. Finally, I am attempting to allocate heap memory via triple, double, and single pointers.
/* Values chunkX, chunkY, chunkZ used below are static constant integers */
Block*** blocks;
blocks = new Block**[chunkX];
for (int i = 0; i < chunkX; i++) { // Initialize all arrays
    blocks[i] = new Block*[chunkZ];
    for (int u = 0; u < chunkZ; u++) {
        blocks[i][u] = new Block[chunkY];
    }
}
This made sense to me but probably is incorrect. The C2040 error is at the line where blocks is first defined.
Later in the code, I attempt:
Block& block = blocks[x][z][y];
But it tells me C2530 'block': references must be initialized, even though I initialized everything above? And then it just stops compiling because of these two errors. I'm quite confused and couldn't find any triple-pointer tutorials using new. I don't think the amount of memory I want is unreasonable, because the Block class isn't huge.
EDIT (SOLVED):
Apparently this was not a code problem but just the compiler bugging out. It compiles now without changes made. Thanks for the comments and sorry for the inconvenience.
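For completeness, a sketch of the matching cleanup for the allocation shown above, since every new[] level needs a corresponding delete[] (innermost first); this is not from the original post:

for (int i = 0; i < chunkX; i++) {
    for (int u = 0; u < chunkZ; u++) {
        delete[] blocks[i][u];   // the Block[chunkY] rows
    }
    delete[] blocks[i];          // the Block*[chunkZ] arrays
}
delete[] blocks;                 // the Block**[chunkX] array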

Confused about deleting dynamic memory allocated to array of struct

I'm having a memory leak issue and it's related to an array of structs inside a class (not sure if it matters that they're in a class). When I call delete on the struct array, the memory is not freed. When I use the exact same process with plain int and double arrays, it works fine and frees the memory as it should.
I've created very simple examples and they work correctly, so it must be related to something else in the code, but I'm not sure what that could be. I never get any errors and the code executes correctly. However, the allocation/deallocation occurs in a loop, so the memory usage continually rises.
In other words, here's a summary of the problem:
struct myBogusStruct {
    int bogusInt1, bogusInt2;
};
class myBogusClass {
public:
    myBogusStruct *bogusStruct;
};
int main() {
    int i, arraySize;
    double *bogusDbl;
    myBogusClass bogusClass;
    // arraySize is read in from an input file
    for (i = 0; i < 100; i++) {
        bogusDbl = new double[arraySize];
        bogusClass.bogusStruct = new myBogusStruct[arraySize];
        // bunch of other code
        delete [] bogusDbl;                  // this frees memory
        delete [] bogusClass.bogusStruct;    // this does not free memory
    }
}
When I remove the bunch of other code, both delete lines work correctly. When it's there, though, the second delete line does nothing. Again, I never get any errors from the code, just memory leaks. Also, if I replace arraySize with a fixed number like 5000, then both delete lines work correctly.
I'm not really sure where to start looking - what could possibly cause the delete line not to work?
There is no reason at all for you to either allocate or delete bogusDbl inside the for loop, because arraySize never changes inside the loop.
Same goes for bogusClass.bogusStruct. No reason to allocate/delete it at all inside the loop:
bogusDbl = new double[arraySize];
bogusClass.bogusStruct = new myBogusStruct[arraySize];
for (i = 0; i < 100; i++) {
    // bunch of other code
}
delete[] bogusDbl;
delete[] bogusClass.bogusStruct;
You should also consider using std::vector instead of using raw memory allocation.
Now to the possible reason of why the second delete in the original code doesn't do anything: deleting a NULL pointer does, by definition, nothing. It's a no-op. So for debugging purposes, try introducing a test before deleting it to see if it's NULL and if yes abort(). (I'd use a debugger instead though, as it's much quicker to set up a watch expression there compared to writing debug code.)
In general though, we need to see that "bunch of other code".
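A tiny sketch of the debug check suggested above, placed just before the second delete (purely for diagnosis; a debugger watch on bogusClass.bogusStruct achieves the same thing):

#include <cstdlib>
if (bogusClass.bogusStruct == NULL) {
    // if we get here, something in the "bunch of other code" cleared or overwrote the pointer
    abort();
}
delete [] bogusClass.bogusStruct;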

100% of array correct in function, 75% of array correct in CALLING function - C

Note: I'm using the C++ compiler, which is why I can use pass-by-reference.
I have a strange problem, and I don't really know what's going on.
Basically, I have a text file: http://pastebin.com/mCp6K3HB
and I'm reading the contents of the text file into an array of atoms:
typedef struct {
    char * name;
    char * symbol;
    int atomic_number;
    double atomic_weight;
    int electrons;
    int neutrons;
    int protons;
} atom;
This is my type definition of atom.
void set_up_temp(atom (&element_record)[DIM1])
{
    char temp_array[826][20];
    char temp2[128][20];
    int i = 0;
    int j = 0;
    int ctr = 0;
    FILE *f = fopen("atoms.txt", "r");
    for (i = 0; f && !feof(f) && i < 827; i++)
    {
        fgets(temp_array[i], sizeof(temp_array[0]), f);
    }
    for (j = 0; j < 128; j++)
    {
        element_record[j].name = temp_array[ctr];
        element_record[j].symbol = temp_array[ctr+1];
        element_record[j].atomic_number = atol(temp_array[ctr+2]);
        element_record[j].atomic_weight = atol(temp_array[ctr+3]);
        element_record[j].electrons = atol(temp_array[ctr+4]);
        element_record[j].neutrons = atol(temp_array[ctr+5]);
        element_record[j].protons = atol(temp_array[ctr+6]);
        ctr = ctr + 7;
    }
    // Close the file to free up memory and prevent leaks
    fclose(f);
} // AT THIS POINT THE DATA IS FINE
Here is the function I'm using to read the data. When I debug this function and let it run right up to the end, I use the debugger to check its contents, and the array has 100% correct data; that is, all elements are what they should be relative to the text file.
http://i.imgur.com/SEq9w7Q.png This image shows what I'm talking about. On the left, all the elements, 0, up to 127, are perfect.
Then, I go down to the function I'm calling it from.
atom myAtoms[118];
set_up_temp(myAtoms); //AT THIS POINT DATA IS FINE
region current_button_pressed; // NOW IT'S BROKEN
load_font_named("arial", "cour.ttf", 20);
panel p1 = load_panel("atomicpanel.txt");
panel p2 = load_panel("NumberPanel.txt");
As soon as ANYTHING is called after I call set_up_temp, elements 103 to 127 of my array turn into gibberish. As more things get called, EVEN MORE of the array turns to gibberish. This is weird and I don't know what's happening... Does anyone have any idea? Thanks.
for (j = 0; j < 128; j++)
{
    element_record[j].name = temp_array[ctr];
You are storing, and then returning, pointers into temp_array, which is on the stack. The moment you return from the function, all of temp_array becomes invalid -- it's undefined behavior to dereference any of those pointers after that point. "Undefined behavior" includes the possibility that you can still read elements 0 through 102 with no trouble, but 103 through 127 turn to gibberish, as you say. You need to allocate space for these strings that will live as long as the atom object. Since as you say you are using C++, the easiest fix is to change both char * members to std::string. (If you don't want to use std::string, the second easiest fix is to use strdup, but then you have to free that memory explicitly.)
This may not be the only bug in this code, but it's probably the one causing your immediate problem.
In case you're curious, the reason the high end of the data is getting corrupted is that on most (but not all) computers, including the one you're using, the stack grows downward, i.e. from high addresses to low. Arrays, however, always index from low addresses to high. So the high end of the memory area that used to be temp_array is the part that's closest to the stack pointer in the caller, and thus most likely to be overwritten by subsequent function calls.
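A sketch of the std::string fix suggested above (the surrounding loop keeps the same shape; assignment now copies the characters instead of storing a pointer into temp_array):

#include <string>

typedef struct {
    std::string name;      // owns its characters, so it outlives temp_array
    std::string symbol;
    int atomic_number;
    double atomic_weight;
    int electrons;
    int neutrons;
    int protons;
} atom;

// inside set_up_temp, the assignments stay as they were:
element_record[j].name = temp_array[ctr];       // now a deep copy into the std::string
element_record[j].symbol = temp_array[ctr+1];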
Casual inspection yields this:
char temp_array[826][20];
...
for (i = 0; f && !feof(f) && i < 827; i++ )
Your code potentially allows i to become 826. Which means you're accessing the 827th element of temp_array. Which is one past the end. Oops.
Additionally, you are allocating an array of 118 atoms (atom myAtoms[118];) but you are setting 128 of them inside of set_up_temp in the for (j = 0; j < 128; j++) loop.
The moral of this story: mind your indices, and since you use C++, leverage things like std::vector and std::string and avoid playing with raw arrays directly.
Update
As Zack pointed out, you're returning pointers to stack-allocated variables which will go away when the set_up_temp function returns. Additionally, the fgets you use doesn't do what you think it does and it's HORRIBLE code to begin with. Please read the documentation for fgets and ask yourself what your code does.
You are allocating an array with space for 118 elements but the code sets 128 of them, thus overwriting whatever happens to live right after the array.
Also, as others noted, you're storing in the array pointers to data that is local to the function (a no-no).
My suggestion is to start by reading a good book about C++ before programming because otherwise you're making your life harder for no reason. C++ is not a language in which you can hope to make serious progress by experimentation.

Should I be using Malloc? Errors with large array of objects

I'm using a bit of legacy-type code that runs on a framework, so I can't really explain what's going on at a lower level, as I don't know.
However, my code creates an array of objects.
int maxSize = 20;
myObjects = new Object*[maxSize+1];
myObjects[0] = new item1(this);
myObjects[1] = new item2(this);
for (int i = 2; i != maxSize+1; i++) {
    myObjects[i] = new item3(this);
}
myObjects[maxSize+1] = NULL;
If maxSize is larger than 30, I get a whole load of errors I've never seen. Visual Studio brings up an error in xutility, highlighting:
const _Container_base12 *_Getcont() const
{   // get owning container
    return (_Myproxy == 0 ? 0 : _Myproxy->_Mycont);
}
I've never used malloc before, but is this where the problem lies? Should I be allocating with it to avoid this problem?
The magnitude of maxSize is probably not the culprit: allocating 30 pointers should go without trouble on any computer, including most micro-controllers. Using malloc is not going to change anything: you are doing your allocation the way you're supposed to do it in C++.
Here is the likely source of your error:
myObjects[maxSize+1] = NULL;
You have allocated storage for maxSize+1 items, so the valid indexes are between 0 and maxSize. Writing one past the last element is undefined behavior, meaning that a crash could happen. You got lucky with 20 elements, but 30 smoked out this bug for you. Using the valgrind utility is a good way to catch memory errors that could cause crashes, even if they currently don't cause them.
int maxSize = 20;
myObjects = new Object*[maxSize+1];
myObjects[0] = new item1(this);
myObjects[1] = new item2(this);
// if maxSize is 1, this loop could be trouble
for (int i = 2; i != maxSize; i++) {
    myObjects[i] = new item3(this);
}
myObjects[maxSize] = NULL;
You're going past the bounds with:
myObjects[maxSize+1] = NULL;
In your example, you created an array with 21 items. That will run from 0..20 but you're trying to write to the 21st element here.
The problem is not with new / delete as far as I can see, and I can't see any reason for switching to malloc here.
You should not use malloc() in C++; you should use new.
There's one possible exception to this: if you have to allocate a block of memory which you intend to pass as an argument to a function which is going to eventually free it using free(). If you used new to allocate such a block the free() would likely cause heap corruption. But this is purely hypothetical -- I've never seen such an API!
You can't access offset maxSize+1 in an array of maxSize+1 elements. If you want a NULL terminator at that index, one solution is to allocate one more slot:
myObjects = new Object*[maxSize+2];
...
myObjects[maxSize+1] = NULL;