I have a struct:
typedef struct {
    int *issueTypeCount;
} issueTypeTracker;
I've declared a variable of type issueTypeTracker:
issueTypeTracker *typeTracker;
I've allocated the necessary memory:
typeTracker = (issueTypeTracker*) malloc(sizeof(issueTypeTracker) * issueTypeList.count());
typeTracker->issueTypeCount = (int*) calloc(65536,sizeof(int));
And then when I try to do something with it, I get a segmentation fault:
while(qry.next()){ // while there are records in the query
    for(j = 0; j < locationList.count(); j++){ // no problem
        if(qry.value(1) == locationList[j]){ // no problem
            for(i = 0; i < issueTypeList.count(); i++){ // no problem
                typeTracker[j].issueTypeCount[i]++; // seg fault as soon as we hit this line
            }
        }
    }
}
I figured it would be a problem with the way I've allocated memory, but as far as I'm aware I've done it correctly. I've tried the solutions proposed in this question, but it still did not work.
I've tried replacing typeTracker->issueTypeCount = (int*) calloc(65536,sizeof(int)); with:
for(j = 0; j < issueTypeList.count(); j++){
    typeTracker[j].issueTypeCount = (int*) calloc(65536, sizeof(int));
}
But I still get the same issue. This happens with any value of j or i, even zero.
This is a lot more trouble than it's worth and a poor implementation of what I'm trying to do anyway, so I'm probably going to scrap this entire thing and just use a multidimensional array. Even so, I'd like to know why this doesn't work, so that in the future I don't have trouble when I'm faced with a similar scenario.
You have several issues. Firstly, you're not checking your allocations for success, so any of your pointers could be NULL/nullptr.
Secondly,
typeTracker->issueTypeCount = (int*) calloc(65536,sizeof(int));
is equivalent to
typeTracker[0].issueTypeCount = (int*) calloc(65536,sizeof(int));
so, you initialized the issueTypeCount member for only the first issueTypeTracker in your array. For the other issueTypeList.count() - 1 elements in the array, the pointer is uninitialized.
Therefore this line:
typeTracker[j].issueTypeCount[i]++; //seg fault as soon as we hit this line
will invoke UB for any j>0. Obviously if your allocation failed, you have UB for j==0 as well.
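A corrected allocation, as a sketch in the question's own malloc/calloc style (the failure handling is just a placeholder). Note that the loop that crashes indexes typeTracker[j] with j running over locationList.count(), so the array arguably needs at least that many elements, not issueTypeList.count():

typeTracker = (issueTypeTracker*) malloc(sizeof(issueTypeTracker) * locationList.count());
if(typeTracker == NULL){
    /* handle allocation failure */
}
for(j = 0; j < locationList.count(); j++){
    typeTracker[j].issueTypeCount = (int*) calloc(65536, sizeof(int));
    if(typeTracker[j].issueTypeCount == NULL){
        /* handle allocation failure */
    }
}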
Related
I've looked in many places without finding enough information to help me solve my issue. Basically, I want a three-dimensional array of 16*16*256 instances of a class. This caused a stack overflow, so I attempted a vector, but that also crashed. Finally, I am attempting to allocate heap memory via triple, double, and single pointers.
/* Values chunkX, chunkY, chunkZ used below are static constant integers */
Block*** blocks;
blocks = new Block**[chunkX];
for (int i = 0; i < chunkX; i++) { // initialize all arrays
    blocks[i] = new Block*[chunkZ];
    for (int u = 0; u < chunkZ; u++) {
        blocks[i][u] = new Block[chunkY];
    }
}
This made sense to me, but it is probably incorrect. The C2040 error is at the line where blocks is first defined.
Later in the code, I attempt:
Block& block = blocks[x][z][y];
But it tells me C2530 'block': references must be initialized ... even though I initialized them above..? And then it just stops compiling because of these two errors. I'm quite confused and couldn't find any triple pointer tutorials using new. I don't think the amount of memory I want is unreasonable, because the Block class isn't huge.
EDIT (SOLVED):
Apparently this was not a code problem but just the compiler bugging out. It compiles now without changes made. Thanks for the comments and sorry for the inconvenience.
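For what it's worth, a common way to sidestep the triple-pointer bookkeeping is a single contiguous allocation with manual index math. A minimal sketch, assuming the same chunkX/chunkZ/chunkY constants and a default-constructible Block (the accessor name is hypothetical):

#include <vector>

std::vector<Block> blocks(chunkX * chunkZ * chunkY); // one contiguous heap allocation

// hypothetical accessor mapping (x, z, y) onto the flat storage
Block& blockAt(std::vector<Block>& b, int x, int z, int y) {
    return b[(x * chunkZ + z) * chunkY + y];
}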
I have a very strange segmentation fault that occurs when I call delete[] on an allocated dynamic array (created with the new keyword). At first it occurred when I deleted a global pointer, but it also happens in the following very simple case, where I delete[] arr
int main(int argc, char * argv [])
{
    double * arr = new double [5];
    delete[] arr;
}
I get the following message:
*** Error in `./energy_out': free(): invalid next size (fast): 0x0000000001741470 ***
Aborted (core dumped)
Apart from the main function, I define some fairly standard functions, as well as the following (defined before the main function)
vector<double> cos_vector()
{
    vector<double> cos_vec_temp = vector<double>(int(2*pi()/trig_incr));
    double curr_val = 0;
    int curr_idx = 0;
    while (curr_val < 2*pi())
    {
        cos_vec_temp[curr_idx] = cos(curr_val);
        curr_idx++;
        curr_val += trig_incr;
    }
    return cos_vec_temp;
}
const vector<double> cos_vec = cos_vector();
Note that the return value of cos_vector, cos_vec_temp, gets assigned to the global variable cos_vec before the main function is called.
The thing is, I know what causes the error: cos_vec_temp should be one element bigger, as cos_vec_temp[curr_idx] ends up accessing one element past the end of the vector cos_vec_temp. When I make cos_vec_temp one element larger at its creation, the error does not occur. But I do not understand why it occurs at the delete[] of arr. When I run gdb, after setting a breakpoint at the start of the main function, just after the creation of arr, I get the following output when examining contents of the variables:
(gdb) p &cos_vec[6283]
$11 = (__gnu_cxx::__alloc_traits<std::allocator<double> >::value_type *) 0x610468
(gdb) p arr
$12 = (double *) 0x610470
In the first gdb command, I show the memory location of the element just past the end of the cos_vec vector, which is 0x610468. The second gdb command shows the memory location of the arr pointer, which is 0x610470. Since I assigned a double to the invalid memory location 0x610468, I understand it must have partly overwritten the region that starts at 0x610470, but this was done before arr was even created (the function is called before main). So why does this affect arr? I would have thought that when arr is created, it does not "care" what was previously done to the memory location there, since it is not registered as being in use.
Any clarification would be appreciated.
NOTE:
cos_vec_temp was previously declared as a dynamic double array of size int(2*pi()/trig_incr) (the same size as the one in the code, but created with new). In that case, I also had the same invalid access as above, and it likewise did not give any errors when I accessed the element at that location. But when I tried to call delete[] on the cos_vec global variable (which was of type double * then), it also gave a segmentation fault, though it did not print the message shown for the case above.
NOTE 2:
Before you downvote me for using a dynamic array, I am just curious as to why this occurs. I normally use STL containers and all their conveniences (I almost NEVER use dynamic arrays).
Many heap allocators store meta-data next to the memory they allocate for you, before or after (or both) the block. If you write out of bounds of some heap-allocated memory (and remember that std::vector allocates its storage off the heap), you might overwrite some of this meta-data, corrupting the heap.
None of this is actually specified in the C++ specification. All it says is that going out of bounds leads to undefined behavior. What the allocators do or store, and where they possibly store meta-data, is up to the implementation.
As for a solution, most people will tell you to use push_back instead of direct indexing, and that will solve the problem. Unfortunately it also means the vector may need to be reallocated and copied a few times. That can be mitigated by reserving an approximate amount of memory beforehand, and then letting the occasional stray element trigger a reallocation and copy.
Or, of course, make better predictions about the actual number of elements the vector will contain.
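A minimal sketch of the reserve-then-push_back approach, assuming the same pi() and trig_incr helpers from the question:

vector<double> cos_vector()
{
    vector<double> cos_vec_temp;
    cos_vec_temp.reserve(size_t(2*pi()/trig_incr) + 1); // approximate capacity, avoids most reallocations
    for (double curr_val = 0; curr_val < 2*pi(); curr_val += trig_incr)
        cos_vec_temp.push_back(cos(curr_val)); // grows safely; no out-of-bounds write
    return cos_vec_temp;
}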
It looks like you are writing past the end of the vector allocated in the function executing before main, causing undefined behavior later on.
You should be able to fix the problem by rounding the number up when allocating the vector (casting to int rounds the number down), or using push_back instead of indexing:
cos_vec_temp.push_back(cos(curr_val));
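The rounding-up variant might look like this sketch (std::ceil is from <cmath>; the same helpers are assumed):

vector<double> cos_vec_temp(int(std::ceil(2*pi()/trig_incr)));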
int main() {
    int* i = new int(1);
    i++;
    *i = 1;
    delete i;
}
Here is my logic:
I increment i by 1, and then assign a value to it. Then I delete i, so I free that memory location while leaking the original memory. Where is my problem?
I also tried different versions. Every time, as long as I do the arithmetic and then delete the pointer, my program crashes.
What your program shows is several cases of undefined behaviour:
You write to memory that hasn't been allocated (*i = 1)
You free something that you didn't allocate, effectively delete i + 1.
You MUST call delete on exactly the same pointer value that you got back from new, nothing else. Assuming the rest of your code was valid, it would be fine to do int *j = i; after int *i = new int(1);, and then delete j;. [For example, int *i = new int[2]; would then make your i++; *i=1; valid code, though the array form would need delete[] on the original pointer.]
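A minimal sketch of that rule, keeping the original pointer around so delete[] receives exactly the value new[] returned:

int main() {
    int* p = new int[2]; // array form, so p + 1 is a valid element
    int* i = p;          // keep the original pointer for the delete[]
    i++;
    *i = 1;              // writes p[1], which is valid
    delete[] p;          // release using the exact pointer new[] returned
}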
Whoever allocates is the one who deallocates. So you should not delete something you did not new yourself. Furthermore, i++; *i=1; is UB, since you may be accessing a restricted memory area or read-only memory...
The code makes no sense. I think you have an XY problem. If you could post your original problem, there would be a better chance of helping you.
In this case you need a short understanding of how heap memory management works, in one particular implementation of it. When you allocate an object, you receive a pointer to the start of the memory available for you to work with. However, the 'really' allocated memory starts a bit 'earlier': the allocated block is a bit larger than what you requested, and it begins at the address you received minus some offset. When you pass the incremented pointer to delete, it tries to find the allocator's internal information just to the left of it. Because your address has been incremented, that lookup fails, which results in a crash. That's it in short.
The problem lies here:
i++;
This line doesn't increment the value i points to, but the pointer itself, advancing it by the size of one int (4 bytes on a typical 32-bit platform).
You meant to do this:
(*i)++;
Let's take it step by step:
int* i = new int(1); // 1. Allocate an int and initialize it to 1.
i++;                 // 2. Increment the pointer. It now points to
                     //    another location.
*i = 1;              // 3. Dereference a pointer which points to unknown
                     //    memory. This could cause a segmentation fault.
delete i;            // 4. Delete the unknown memory, which is undefined
                     //    behavior.
In short: if you don't own a piece of memory, you can't do arithmetic with it, nor delete it!
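Putting the fix together, a sketch of what the code presumably meant to do:

int main() {
    int* i = new int(1); // allocate a single int initialized to 1
    (*i)++;              // increment the pointed-to value, not the pointer
    delete i;            // delete the exact pointer new returned
}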
I wanted to access a deleted array to see how the memory was changed. It works until I delete a really big array; then I get an access violation exception. Please don't mind the couts, I know they are slow, but I will get rid of them.
When I do it with a 1000-element array it is OK; when I do it with 1000000 I get an exception. I know this is a weird task, but my teacher is stubborn and I can't figure out how to deal with it.
EDIT: I know that I should never access that memory, but I also know there is probably some trick he will show and then tell me that I am not right.
long max = 1000000; // for 10000 I do not get any exception.
int* t = new int[max];
cout << max << endl;
uninitialized_fill_n(t, max, 1);
delete[] t;
cout << "deleted t" << endl;
int x;
cin >> x; // wait a little bit
int one = 1;
long counter = 0;
for(long i = 0; i < max; i++){
    cout << i << endl;
    if(t[i] != 1){
        cout << t[i] << endl;
        counter++;
    }
}
The state of "deleted" memory is undefined, and accessing memory after delete is UNDEFINED BEHAVIOUR (meaning the C++ specification allows "anything" to happen when you access such memory, including the appearance of it "working" sometimes and "not working" sometimes).
You should NEVER access memory that has been deleted, and as shown in your larger array case, it may not work to do so, because the memory may no longer actually be available to your process.
You are not allowed to access a released buffer.
Accessing memory that is no longer in use results in undefined behaviour. You will not get any consistent patterns. If the original memory has not been overwritten after it became invalid, the values will be exactly what they used to be.
I find the answer to this similar question to be very clear in explaining the concept with a simple analogy.
A simple way to mimic this behaviour is to create a function which returns a pointer to a local variable, for example:
int *foo(){
    int a = 1;  // local variable; its lifetime ends when foo returns
    return &a;  // the returned pointer dangles immediately
}
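Using it (a sketch) makes the dangling access visible:

int *p = foo(); // p points to an object whose lifetime has ended
int v = *p;     // undefined behavior, just like reading the deleted array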
I'm using a bit of legacy-type code that runs on a framework, so I can't really explain what's going on at a lower level, as I don't know.
However, my code creates an array of objects.
int maxSize = 20;
myObjects = new Object*[maxSize+1];
myObjects[0] = new item1(this);
myObjects[1] = new item2(this);
for(int i = 2; i != maxSize+1; i++){
    myObjects[i] = new item3(this);
}
myObjects[maxSize+1] = NULL;
If maxSize is larger than 30 I get a whole load of errors I've never seen. Visual Studio draws up an error in xutility highlighting:
const _Container_base12 *_Getcont() const
{   // get owning container
    return (_Myproxy == 0 ? 0 : _Myproxy->_Mycont);
}
I've never used malloc before, but is this where the problem lies? Should I be allocating with it to avoid this problem?
The absolute value of maxSize is probably not the culprit: allocating 30 pointers should go without trouble on any computer, including most micro-controllers. Using malloc is not going to change anything: you are doing your allocation the way you're supposed to in C++.
Here is the likely source of your error:
myObjects[maxSize+1] = NULL;
You have allocated storage for maxSize+1 items, so the valid indexes are between 0 and maxSize. Writing one past the last element is undefined behavior, meaning that a crash could happen. You got lucky with 20 elements, but 30 smoked out this bug for you. The valgrind utility is a good way to catch memory errors that could cause crashes, even if they currently don't.
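For example, running the program under valgrind (the binary name here is hypothetical) should report the invalid write even on runs that happen not to crash:

valgrind ./myprogram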
int maxSize = 20;
myObjects = new Object*[maxSize+1];
myObjects[0] = new item1(this);
myObjects[1] = new item2(this);
// if maxSize is 1, this loop could be trouble
for(int i = 2; i != maxSize; i++){
    myObjects[i] = new item3(this);
}
myObjects[maxSize] = NULL;
You're going past the bounds with:
myObjects[maxSize+1] = NULL;
In your example, you created an array with 21 items, so valid indexes run from 0..20, but here you're trying to write to index 21, one past the end.
The problem is not with new / delete as far as I can see, and I can't see any reason for switching to malloc here.
You should not use malloc() in C++; you should use new.
There's one possible exception to this: if you have to allocate a block of memory which you intend to pass as an argument to a function which is going to eventually free it using free(). If you used new to allocate such a block the free() would likely cause heap corruption. But this is purely hypothetical -- I've never seen such an API!
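The pairing rules in a nutshell, as a minimal sketch: each allocation must be released by its matching counterpart.

#include <cstdlib>

int main() {
    int* a = (int*) std::malloc(10 * sizeof(int)); // from malloc...
    std::free(a);                                  // ...so release with free

    int* b = new int[10]; // from new[]...
    delete[] b;           // ...so release with delete[]
}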
I think you can't access offset maxSize+1. A solution would be:
myObjects = new Object*[maxSize+2];
...
myObjects[maxSize+1] = NULL;