I have a problem that occurs when reserving an array: a heap error. The software I am making is like this:
I am making a small program to render a model of a specific format. The model contains several groups, and every group contains an array of vertices and an array of indices for those verts (such as a motorcycle model of 3 groups: front wheel, back wheel and body). After I load the model into memory, I want to render it as a VBO, but the model is made of several groups as mentioned, so I am merging all the verts in all groups into one array of verts, and the same goes for indices. While merging, a heap error occurs when reserving the array. The code is like this:
int index=0;
for(int i=0;i<this->groupsSize;i++)
    index+=this->groups[i]->capacity.vertsSize;
mdl_vert *m_pVertices=new mdl_vert[index];

index=0;
for(int i=0;i<this->groupsSize;i++)
    index+=this->groups[i]->capacity.indicesSize;
unsigned int *m_pIndices=new unsigned int[index];

index=0;
for(int i=0;i<this->groupsSize;i++)
{
    for(int j=0;j<this->groups[i]->capacity.vertsSize;j++)
    {
        m_pVertices[index]=this->groups[i]->verts[j];
        index++;
    }
}
When I reserve the indices, the heap error occurs. I also used std::vector, but the same error occurs. Can anybody give me a hint about what I am doing wrong in this case?
N.B. mdl_vert is a struct that consists of float x, y, z;
You don't supply enough info to pinpoint the problem, or even to say what the problem is.
But there are things you can do to clean up the code, and maybe that will help.
1. Use std::vector instead of `new`-ing raw arrays
Instead of
unsigned int *m_pIndices=new unsigned int[index];
use
std::vector<unsigned> indices( index );
Note that this std::vector is not itself dynamically allocated.
It uses dynamic allocation inside, and it does that correctly for you.
Even better, just use …
std::vector<int> indices( index );
… because unsigned arithmetic can easily screw up.
2. Don't use misleading naming
The m_ prefix makes it seem as if you really want to access data members, not local variables.
But you are defining local variables.
Either use the data members, or drop the m_ name prefixes.
3. Don't "reuse" variables
You're using the variable index for multiple successive purposes.
Declare and use one (properly named) variable for each purpose.
4. Don't rely on side-effects from earlier code.
For example, you are relying on the value of index after a for-loop where it's used as a loop counter.
Instead, directly use the value that you have deduced that it will have.
5. Don't obscure the code with do-nothing things.
This is just a style issue, but consider removing all the this-> qualifications. They are verbose and make the code less readable and less clear. Yes, with primitive tools like Visual Studio such qualifications can help with getting names into drop-down lists, but that's a disservice: it makes it more difficult to remember things, and without remembering things you can't have the understanding needed to write correct code.
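Putting these points together, the vertex-merging part might look something like this (a sketch, assuming the member names from the question and that this runs inside a member function):

#include <vector>

int totalVerts = 0;
for (int i = 0; i < groupsSize; i++)
    totalVerts += groups[i]->capacity.vertsSize;

std::vector<mdl_vert> vertices;
vertices.reserve(totalVerts);                   // one allocation, named size
for (int i = 0; i < groupsSize; i++)
    for (int j = 0; j < groups[i]->capacity.vertsSize; j++)
        vertices.push_back(groups[i]->verts[j]);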
Cheers & hth.,
How big is the resulting index? Probably an overflow is simply occurring.
Do you know what the error is? If not, you may want to try putting the code into a try/catch for std::exception. Generally when I have gotten errors along those lines, it was a st9bad_alloc error (the mangled name of std::bad_alloc), which essentially means the size supplied to new was invalid or too big, either in terms of actual memory or because of limits imposed by the system. If so, validate the numbers supplied to new, and ensure the relevant limits are large enough (try the 'limit' command if using Linux). Good luck.
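For example, something along these lines will at least confirm whether std::bad_alloc is being thrown (a sketch using the variable names from the question):

#include <iostream>
#include <new>   // std::bad_alloc

try {
    unsigned int *m_pIndices = new unsigned int[index];
    // ... use the array ...
    delete[] m_pIndices;
} catch (const std::bad_alloc &e) {
    std::cerr << "allocating " << index << " indices failed: " << e.what() << '\n';
}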
One or more of your capacity.indicesSize values is likely uninitialized, so you get a far too large number. Print index before allocating.
Related
I hope that this question hasn’t been asked before, but I couldn’t find any answer after googling for an hour.
I have the following problem: I do numerical mathematics, and I have about 40 MB of doubles in the form of certain matrices that are compile-time constants. I use these matrices very frequently throughout my program. Creating these matrices takes a week of computation, so I do not want to compute them on the fly but use precomputed values.
What is the best way of doing this in a portable manner? Right now I have some very large .cpp files in which I create dynamically allocated matrices with the constants as initialiser lists, something along the lines of:
data.cpp:
namespace // Anonymous
{
    // math::matrix uses dynamic memory internally
    const math::matrix K { 3.1337e2, 3.1415926e00, /* a couple of hundred-thousand numbers */ };
}

const math::matrix& get_K() { return K; }
data.h:
const math::matrix& get_K();
I am not sure whether this can cause problems with too much data on the stack due to the initialiser list, whether it can crash some compilers, or whether this is the right way to go about things. It does seem to be working, though.
I also thought about loading the matrices at program startup from an external file, but that also seems messy to me.
Thank you very much in advance!
I am not sure if this can cause problems with too much data on the stack, due to the initialiser list.
There should not be a problem, assuming K has static storage with non-dynamic initialisation, which should be the case if math::matrix is an aggregate.
Given that the values will be compile time constant, you might consider defining them in a header, so that all translation units can take advantage of them at compile time. How beneficial that would be depends on how the values are used.
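For example, if the raw values can live in a plain array, a header like this makes them visible to every translation unit at compile time (a sketch; the name K_data is made up, and constexpr requires a literal type, so this uses a raw array rather than math::matrix):

// matrix_constants.h -- hypothetical header
#include <cstddef>

constexpr double K_data[] = { 3.1337e2, 3.1415926e00 /* , ... */ };
constexpr std::size_t K_count = sizeof K_data / sizeof K_data[0];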
I also thought about loading the matrices at program startup from an external file
The benefit of this approach is the added flexibility that you gain because you would not need to recompile the program when you change the data. This is particularly useful if the program is expensive to compile.
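A minimal sketch of that approach, assuming the matrices are stored as a flat binary file of doubles (the file layout and function name are assumptions):

#include <fstream>
#include <vector>

std::vector<double> load_matrix_data(const char *path)
{
    std::ifstream in(path, std::ios::binary | std::ios::ate); // open positioned at the end
    std::vector<double> values(static_cast<std::size_t>(in.tellg()) / sizeof(double));
    in.seekg(0);
    in.read(reinterpret_cast<char *>(values.data()),
            values.size() * sizeof(double));
    return values;
}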
A slightly cleaner approach:
// math::matrix uses dynamic memory internally
const math::matrix K {
#include "matrix_initial_values.h"
};
And, in the included header,
3.1337e2, 3.1415926e00, 1,2,3,4,5e6, 7.0,...
Considering your comment of "a few hundred thousand" values: 1M double values takes 8,000,000 bytes, or about 7.6 MiB. That's not going to blow the stack. Win64 has a max stack size of 1GB, so you'd have to go really, really nuts, and that's assuming these values are actually placed on the stack, which they should not be, given that K is const.
This is probably implementation-specific, but a large block of literals is typically stored as a big chunk of code-space data that's loaded directly into the process' memory space. The identifier (K) is really just a label for that data. It doesn't exist on the stack or the heap anymore than code does.
DISCLAIMER: I am at a very entry level in C++ (or any language)... I searched for similar questions but found none.
I am trying to write a simple program which should perform some operations on an array as big as int pop[100000000][4] (10^8); however, my compiler crashes even for an int pop[130000][4] array... Is there any way out? Am I using the wrong approach?
(For now I am limiting myself to a very simple program, my aim is to generate random numbers in the array[][0] every "turn" to simulate a population and work with that).
Thanks for your time and attention
An array of 130000 * 4 ints is going to be huge, and likely not something you want stored locally (in reality, on the stack, where it generally won't fit).
Instead you can use dynamic allocation to get heap storage; the recommended means would be a vector of vectors:
std::vector<std::vector<int>> pop(130000, std::vector<int>(4));
pop[12000][1] = 9; // expected syntax
Vectors are dynamic, so know that they can be changed with all sorts of calls.
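For example (a usage sketch):

pop.resize(200000, std::vector<int>(4)); // grow to 200,000 rows
pop.push_back(std::vector<int>(4));      // or append a single new row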
If you're a new programmer trying to write a simple program, you should consider not using roughly 2 MiB of ints.
I've got a prewritten function in C that fills a 1-D array with data, e.g.
int myFunction(myData **arr, ...);

myData *array;
int arraySize;
arraySize = myFunction(&array, ...);
I would like to call the function n times in a row with slightly different parameters (n is dependent on user input), and I need all the data collected in a single C array afterwards. The size of the returned array is not always fixed. Oh, and myFunction does the memory allocation internally. I want to do this in a memory-efficient way, but using realloc in each iteration does not sound like a good idea.
I do have all the C++ functionality available (the project is in C++, just using a C library), but using std::vector is no good because the collected data is later passed to a function with a definition similar to:
void otherFunction(myData *data, int numData, ...);
Any ideas? The only things I can think of are realloc, or using a std::vector and copying the data into an array afterwards, and neither sounds too promising.
Using realloc() in each iteration sounds like a very fine idea to me, for two reasons:
"does not sound like a good idea" is what people usually say when they have not established a performance requirement for their software, and they have not tested their software against the performance requirement to see if there is any need to improve it.
Instead of allocating a new block each time, realloc will simply keep expanding your memory block, which will presumably be at the top of the memory heap, so it won't waste any time traversing memory-block lists or copying data around. This holds true provided that whatever memory myFunction() allocates internally gets freed before it returns. You can verify it by looking at the pointer returned by realloc() and seeing that it is always (or almost always (*1)) the exact same pointer as the one you gave it to reallocate.
EDIT (*1) some C++ runtimes implement two heaps, one for small allocations and one for large allocations, so if your block gets allocated in the heap for small blocks, and then it grows large, there is a possibility that it will be moved once to the heap for large blocks. So, don't expect the pointer to always be the same; just most of the time.
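A sketch of that approach, using the names from the question (error checking is omitted, and whether the chunks returned by myFunction() must be released with free() is an assumption about the C library):

#include <cstdlib>
#include <cstring>

myData *all = NULL;
int total = 0;
for (int i = 0; i < n; ++i) {   // n comes from user input
    myData *chunk;
    int size = myFunction(&chunk /*, slightly different parameters each time */);
    all = (myData *)std::realloc(all, (total + size) * sizeof *all);
    std::memcpy(all + total, chunk, size * sizeof *all);
    std::free(chunk);           // assumed: the library allocates with malloc
    total += size;
}
otherFunction(all, total /*, ... */);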
Just copy all of the data into an std::vector. You can call otherFunction on a vector v with
otherFunction(&v[0], v.size(), ...)
or
otherFunction(v.data(), v.size(), ...)
As for your efficiency requirement: it looks to me like you're optimizing prematurely. First try this option, then measure how fast it is, and only look for other solutions if it's really too slow.
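The whole collection loop might look like this (a sketch, with the same assumption about how the chunks from myFunction() are released):

#include <cstdlib>
#include <vector>

std::vector<myData> collected;
for (int i = 0; i < n; ++i) {
    myData *chunk;
    int size = myFunction(&chunk /*, ... */);
    collected.insert(collected.end(), chunk, chunk + size); // append a copy of the chunk
    std::free(chunk);                                       // assumed allocator
}
otherFunction(collected.data(), (int)collected.size() /*, ... */);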
If you know that you are going to call the function N times, and returned arrays are always M long, then why don't you just allocate one array M*N initially? Or if you don't know one of M or N, then set a worst case maximum. Or are M and N both dependent on user-input?
Then, change how you call your user-input-getting function, such that the array pointer you pass it is actually an offset into that large array, so that it stores the data in the right location. Then, next iteration, offset further, and call again.
I think the best solution would be to write your own 1D array class with the methods you need; what you get out of it will depend on how you write the class.
I don't understand the need for dynamic arrays. From what I understand so far, dynamic arrays are needed because one cannot always tell what size of array will be needed at runtime.
But surely one can do this?:
cin >> SIZE;
int a[SIZE];
So what's the big deal with dynamic arrays and the new operator?
Firstly, that is a compiler extension (a variable-length array) and not standard C++. Secondly, that array is allocated on the stack, whereas operator new allocates from the heap; these are two very different places, and the choice drastically affects the lifetime of the array. What use is that code if I want to return that array? Thirdly, what are you going to do if you want to resize it?
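For contrast, heap-backed storage survives the function that created it (a sketch):

#include <vector>

std::vector<int> make_values(int size)
{
    std::vector<int> values(size);
    // ... fill values ...
    return values;  // the elements live on the heap, not on this function's stack
}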
SIZE is a variable, which means its value can be modified. An array, by definition, can neither grow nor shrink, so its size needs to be a compile-time constant.
cin >> SIZE; int a[SIZE];
Some of my users have a hard enough time using a mouse, and you want them to tell my application how much memory to allocate?
Dynamic arrays are very handy ... suppose you keep generating objects, and you have no idea how many objects might be generated (i.e., how much input might you get from someone typing an answer at a prompt, or from a network socket, etc.)? You'd either have to make predictions on what appropriate sizes are needed, or you're going to have to code hard-limits. Both are a pain and in the case of hard-limits on arrays, can lead to allocating way too much memory for the job at hand in order to try and cover every possible circumstance that could be encountered. Even then you could incur buffer-overruns which create security holes and/or crashes. Having the ability for an object to dynamically allocate new memory, yet keep the association between objects so that you have constant-time access (rather than linear-time access like you'd get with a linked-list), is a very, very handy tool.
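For example, reading an unknown number of values needs neither a size prediction nor a hard limit (a sketch):

#include <iostream>
#include <vector>

std::vector<int> values;
int x;
while (std::cin >> x)      // stop when the input runs out
    values.push_back(x);   // grows as needed; element access stays constant-time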
In the early days of C, space was made on the stack when you first entered a function and the size of the stack was untouched until you called another function or returned. This is why all variables needed to be declared at the top of a function, and why array sizes needed to be known at compile-time, so the compiler could reserve a known amount of stack.
While C++ loosened the restrictions of variable declaration, it still kept the restriction of knowing the stack requirements at compile time. I don't know why.
As noted in the link in the comments, C finally allowed for dynamic size arrays. This came after C and C++ were split, so C++ didn't automatically gain that capability. It's not uncommon to find it supported as an extension in C++.
Personally I prefer:
std::cin >> size;
std::vector<int> a(size);
Later, as others have mentioned, you could do something like ...
std::cin >> size;
a.resize(size);
...but, and this is probably the key point, you don't have to if you don't want to. If your requirements and constraints are such that they can be satisfied with static-size arrays / vectors / other data structures, then great. Please don't think you know enough about everybody else's requirements and constraints to take a very useful tool away from them.
I was told that the optimal way to program in C++ is to use STL and string rather than arrays and character arrays.
i.e.,
vector<int> myInt;
rather than
int myInt[20];
However, I don't understand the rationale behind why arrays would result in security problems.
I suggest you read up on buffer overruns, then. It's much more likely that a programmer creates or risks buffer overruns when using raw arrays, since they give you less protection and don't offer an API. Sure, it's possible to shoot yourself in the foot using STL too, but at least it's harder.
There appears to be some confusion here about what security vectors can and cannot provide. Ignoring the use of iterators, there are three main ways of accessing elements in a vector:
1. The operator[] function of vector. This provides no bounds checking and will result in undefined behaviour on a bounds error, in the same way an array would if you used an invalid index.
2. The at() member function. This provides bounds checking and will raise an exception if an invalid index is used, at a small performance cost.
3. The C [] operator applied to the vector's underlying array. This provides no bounds checking, but gives the highest possible access speed.
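For example (a sketch):

std::vector<int> v(4);
v[10] = 1;        // 1: no check, undefined behaviour
v.at(10) = 1;     // 2: checked, throws std::out_of_range
v.data()[10] = 1; // 3: raw access to the underlying array, no check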
Arrays don't perform bounds checking. Hence they are very vulnerable to out-of-bounds errors, which can be hard to detect.
Note: the following code has a programming error.
int Data[] = { 1, 2, 3, 4 };
int Sum = 0;
for (int i = 0; i <= 4; ++i) Sum += Data[i]; // error: reads Data[4], one element past the end
Using arrays like this, you won't get an exception that helps you find the error; only an incorrect result.
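Rewritten with a vector and at(), the same mistake announces itself (a sketch):

std::vector<int> Data { 1, 2, 3, 4 };
int Sum = 0;
for (int i = 0; i <= 4; ++i)
    Sum += Data.at(i);  // throws std::out_of_range when i == 4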
Arrays don't know their own size, whereas a vector defines begin and end methods to access its elements. With arrays you always have to rely on pointer arithmetic (and since they decay to plain pointers, you can accidentally cast them).
C++ arrays do not perform bounds checking on either insert or read, and it is quite easy to accidentally access items outside the array bounds.
From an OO perspective, the vector also has more knowledge about itself and so can take care of its own housekeeping.
Your example has a static array with a fixed number of items; depending on your algorithm, this may be just as safe as a vector with a fixed number of items.
However, as a rule of thumb, when you want to dynamically allocate an array of items, a vector is much easier and also lets you make fewer mistakes. Any time you have to think, there's a possibility for a bug, which might be exploited.