EXC_BAD_ACCESS on for loop (array function pointer) - c++

The following code causes an EXC_BAD_ACCESS (address 0x0) error, even though i looks correct. It's meant to call each function the array elements point to. If I replace sizeof(draw) with the actual element count, it works as expected.
for (int i = 0; i < sizeof(draw); i++)
    draw[i](i);

sizeof(draw) returns the size of draw in bytes, not the number of elements. You are probably looking for sizeof(draw) / sizeof(draw[0]).
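For illustration, here is a sketch of the corrected loop. The declaration of draw and the two handlers are my assumptions, since the question doesn't show them:

#include <cstddef>

// Hypothetical handlers standing in for the asker's draw functions.
static void drawA(int i) { (void)i; }
static void drawB(int i) { (void)i; }

// Assumed declaration: a built-in array of function pointers.
static void (*draw[])(int) = { drawA, drawB };

int main() {
    // Total bytes divided by bytes-per-element gives the element count.
    for (std::size_t i = 0; i < sizeof(draw) / sizeof(draw[0]); i++)
        draw[i](static_cast<int>(i));
    return 0;
}

Note this only works while draw is a true array in scope; if it has decayed to a pointer (e.g. a function parameter), sizeof gives the pointer size and you must pass the count separately. In C++17, std::size(draw) does the same division for you.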


A segmentation fault

I have tried the following code to find prime numbers:
const int N = 200000;
long prime[N] = {0};
long num_prime = 0;
int is_not_prime[N] = {1, 1};

void Prime_sort(void)
{
    for (long i = 2; i < N; i++)
    {
        if (!is_not_prime[i])
        {
            prime[num_prime++] = i;
        }
        for (long j = 0; j < num_prime && i * prime[i] < N; j++)
        {
            is_not_prime[i * prime[j]] = 1;
        }
    }
}
But when I run it, it causes a segmentation fault! I have never met that fault before. I searched Google, and it explains a segmentation fault as follows:
A segmentation fault (often shortened to segfault) is a particular
error condition that can occur during the operation of computer
software. In short, a segmentation fault occurs when a program
attempts to access a memory location that it is not allowed to access,
or attempts to access a memory location in a way that is not allowed
But I don't know what causes this fault in my code. Please help me.
Your array is_not_prime has length N. For example, at the final lap of the outer for loop, i will have the value N-1. When i is that big, is_not_prime[i*prime[j]] will cause you to write far out of bounds of the array.
I'm not quite sure what j<num_prime && i*prime[i]<N is supposed to do, but it is likely part of the bug. Single-step through the program with your debugger and see what values the variables have when the program crashes.
Just re-write your program in a less complex manner and all bugs will go away.
Compare your loop bound checking to your indexing - they aren't the same. (I believe you meant to write i*prime[j]<N in your for loop.)
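Assuming that is the intended fix, a corrected sketch of the function, with everything else kept from the question:

const int N = 200000;
long prime[N] = {0};
long num_prime = 0;
int is_not_prime[N] = {1, 1};   // 0 and 1 are not prime

void Prime_sort(void)
{
    for (long i = 2; i < N; i++)
    {
        if (!is_not_prime[i])
        {
            prime[num_prime++] = i;
        }
        // prime[j], not prime[i]: the bound check now matches the index,
        // so i * prime[j] stays below N and the write stays in bounds.
        for (long j = 0; j < num_prime && i * prime[j] < N; j++)
        {
            is_not_prime[i * prime[j]] = 1;
        }
    }
}

Adding if (i % prime[j] == 0) break; at the end of the inner loop would turn this into the classic linear sieve, where each composite is marked exactly once, but it is not needed for correctness.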
Your program crashes because an index goes out of bounds. And the index goes out of bounds because your algorithm is not valid.
Since it still crashes even if you set N to a much smaller value,
const int N = 3;
it shouldn't be too difficult to see what goes wrong by running your program with pencil and paper...

Why can array cells exceed array length [duplicate]

This question already has answers here: Accessing an array out of bounds gives no error, why?
While debugging I found an error with an int array of size 0. In a test program I experimented with writing more elements to arrays than their length.
#include <iostream>
using namespace std;

int main()
{
    int array[0];
    for (int i = 0; i < 10; i++)
        array[i] = i;
    for (int i = 0; i < 10; i++)
        cout << array[i];
}
After I compiled and ran the program I got
0123456789
Then I received the message "test.exe has stopped working". I expected both of these, but what I am confused about is why the compiler lets me create an array of size 0, and why the program doesn't crash until the very end. I expected the program to crash as soon as I exceeded the array length.
Can someone explain?
The compiler should have at least warned you about a zero-size array; if it didn't, consider changing compilers.
Remember that an array is just a bit of memory like any other. In your example the array is probably stored on the stack, so writing off the end of it may not cause much of a problem until your function exits. At that point you may find you have written some data over the return address, and you get an exception. Writing off the end of arrays is a common cause of bugs in C/C++; just be thankful you got an error with this one and it didn't silently overwrite some other unrelated data.
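As an aside (my own sketch, not from the question): the same loops written against a zero-length std::vector with .at() fail immediately and loudly instead of scribbling over the stack:

#include <iostream>
#include <vector>

int main()
{
    std::vector<int> array;           // size 0, like int array[0] above
    for (int i = 0; i < 10; i++)
        array.at(i) = i;              // throws std::out_of_range at i == 0
    for (int i = 0; i < 10; i++)
        std::cout << array.at(i);     // never reached
}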

Valgrind - invalid read and write

I have a mystery that I do not have an answer for. I have written a simple program in C++ (and I should say that I'm not a professional C++ developer). Here it is:
#include <iostream>

int main() {
    const int SIZE = 1000;
    int pool1[SIZE];
    int pool2[SIZE];
    int result[SIZE * SIZE];

    // Prepare data
    for (int i = 0; i < SIZE; i++) {
        pool1[i] = i + 1;
        pool2[i] = SIZE - i;
    }

    // Run test
    for (int i = 0; i < SIZE; i++) {
        for (int j = 0; j < SIZE; j++) {
            result[i * SIZE + j] = pool1[i] * pool2[j];
        }
    }
    return 0;
}
The program seems to work (I use it as a kind of benchmark for different languages), but when I ran it under valgrind it started complaining:
==25912== Invalid read of size 4
==25912== at 0x804864B: main (in /home/renra/Dev/Benchmarks/array_iteration/array_iteration_cpp)
==25912== Address 0xbee79da0 is on thread 1's stack
==25912== Invalid write of size 4
==25912== at 0x8048632: main (in /home/renra/Dev/Benchmarks/array_iteration/array_iteration_cpp)
==25912== Address 0xbeaa9498 is on thread 1's stack
==25912== More than 10000000 total errors detected. I'm not reporting any more.
==25912== Final error counts will be inaccurate. Go fix your program!
Hmm, that does not look good. Size 4 probably refers to the size of an int. As you can see, at first I was using SIZE 1000, so the results array would be 1,000,000 ints long. So I thought it was just overflowing and I needed a larger value type (at least for the iterators and the array of results). I used unsigned long long (the max of unsigned long is 18,446,744,073,709,551,615, and all I needed was 1,000,000 - SIZE*SIZE). But I'm still getting these error messages (and they still say the read and write size is 4, even though sizeof(long long) is 8).
Also, the messages are not there when I use a lower SIZE, but they seem to kick in exactly at SIZE 707 regardless of the type used. Does anybody have a clue? I'm quite curious :-).
Neither C nor C++ has a clear limit on the size of arrays you can use on the stack, and usually there is no built-in protection. Just don't allocate such large chunks as automatic (scope-local) variables. Use malloc in C or new in C++ for that purpose.
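Applied to the program above, one sketch of that advice is to let std::vector put the big buffer on the heap (element type and sizes unchanged from the question):

#include <cstddef>
#include <vector>

int main() {
    const int SIZE = 1000;
    std::vector<int> pool1(SIZE), pool2(SIZE);
    // ~4 MB lives on the heap now, not in main's stack frame.
    std::vector<int> result(static_cast<std::size_t>(SIZE) * SIZE);

    for (int i = 0; i < SIZE; i++) {
        pool1[i] = i + 1;
        pool2[i] = SIZE - i;
    }
    for (int i = 0; i < SIZE; i++)
        for (int j = 0; j < SIZE; j++)
            result[static_cast<std::size_t>(i) * SIZE + j] = pool1[i] * pool2[j];
    return 0;
}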

C array with float data crashing in Objective C class (EXC_BAD_ACCESS)

I am doing some audio processing and therefore mixing some C and Objective-C. I have set up a class that handles my OpenAL interface and my audio processing, and I have changed the file extension to .mm, as described in the Core Audio book and in many examples online.
I have a C style function declared in the .h file and implemented in the .mm file:
static void granularizeWithData(float *inBuffer, unsigned long int total) {
    // create grains of audio data from a buffer read in with ExtAudioFileRead()
    // total value is: 235377
    float tmpArr[total];
    // now I try to zero-pad a new buffer:
    for (int j = 1; j <= 100; j++) {
        tmpArr[j] = 0;
        // CRASH on first iteration: EXC_BAD_ACCESS (code=1, address= ...blahblah)
    }
}
Strange??? Yes, I am totally out of ideas as to why THAT doesn't work, but the FOLLOWING works:
float tmpArr[235377];
for (int j = 1; j <= 100; j++) {
    tmpArr[j] = 0;
    // This works and index 0 - 99 are filled with zeros
}
Does anyone have any clue as to why I can't declare an array whose size is 'total', an int value? My project uses ARC, but I don't see why that would cause a problem. When I print the value of 'total' in the debugger, it is in fact the correct value. If anyone has any ideas, please help; it is driving me nuts!
The problem is that the array gets allocated on the stack, not on the heap. Stack size is limited, so you can't allocate an array of 235377 * sizeof(float) bytes there; it's too large. Use the heap instead:
float *tmpArray = NULL;
tmpArray = (float *)calloc(total, sizeof(float)); // allocate it
// test that you actually got the memory you asked for
if (tmpArray)
{
    // use it
    free(tmpArray); // release it
}
Mind that you are always responsible for freeing memory allocated on the heap, or you will create a leak.
In your second example, since the size is known a priori, the compiler reserves that space up front, which allows it to work. But in your first example it must do so on the fly, which causes the error. In any case, before concluding that your second example works, you should try accessing all the elements of the array, not just the first 100.
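Since the file is already compiled as Objective-C++ (.mm), a std::vector is an alternative sketch that zero-initializes and frees itself in one line:

#include <vector>

static void granularizeWithData(float *inBuffer, unsigned long int total) {
    // Heap-backed and zero-filled; released automatically when it
    // goes out of scope, so there is no free() to forget.
    std::vector<float> tmpArr(total, 0.0f);
    // ... build grains from inBuffer into tmpArr here ...
    (void)inBuffer; // unused in this sketch
}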

High number causes seg fault

This bit of code is from a program I am writing that takes in x columns and x rows to run a matrix multiplication on CUDA, in parallel. The larger the sample size, the better. I have a function that auto-generates x amount of random numbers. I know the answer is probably simple, but I want to know exactly why: when I run it with, say, 625,000,000 elements in the array, it seg faults. I think it is because I have gone over the size allowed in memory for an int.
What data type should I use in place of int for a larger number?
This is how the data is being allocated, then passed into the function.
a.elements = (float*) malloc(mem_size_A);
where
int mem_size_A = sizeof(float) * size_A; //for the example let size_A be 625,000,000
Passed:
randomInit(a.elements, a.rowSize,a.colSize, oRowA, oColA);
What randomInit does: say I enter a 2x2 matrix; I pad it up to a multiple of 16, so it takes the 2x2 and pads the matrix out to a 16x16 of zeros with the 2x2 still in place.
void randomInit(float *data, int newRowSize, int newColSize, int oldRowSize, int oldColSize)
{
    printf("Initializing random function. The new sized row is %d\n", newRowSize);
    for (int i = 0; i < newRowSize; i++) // go per row of new sized row
    {
        for (int j = 0; j < newColSize; j++)
        {
            printf("This loop\n");
            if (i < oldRowSize && j < oldColSize)
            {
                data[newRowSize * i + j] = rand() / (float)RAND_MAX; // brandom();
            }
            else
                data[newRowSize * i + j] = 0;
        }
    }
}
I've even run it with the printf in the loop. This is the result I get:
Creating the random numbers now
Initializing random function. The new sized row is 25000
This loop
Segmentation fault
Your memory allocation for data is probably failing.
Fortunately, you almost certainly don't need to store a large collection of random numbers.
Instead of storing:
data[n] = rand() / (float)RAND_MAX;
for some huge collection of n, you can run:
srand(n);
value = rand() / (float)RAND_MAX;
when you need a particular number and you'll get the same value every time, as if they were all calculated in advance.
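A minimal sketch of that idea (randomAt is a name I made up; the scaling matches the question's code):

#include <cstdlib>

// Recompute the n-th "stored" random value on demand by reseeding,
// instead of keeping hundreds of millions of floats in memory.
float randomAt(unsigned int n) {
    srand(n);                          // same seed every time for index n
    return rand() / (float)RAND_MAX;   // -> same value every time
}

Calling randomAt(42) repeatedly always yields the same number, so the array lookup data[42] can be replaced by a call. Reseeding per element costs some speed, but it removes the multi-gigabyte allocation entirely.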
I think you're going past the space you allocated for data: when newRowSize is too large, you're accessing unallocated memory. Remember, data isn't infinitely big.
Well, the real problem is that if the issue really were the integer size used for your array access, you would not be able to fix it simply. I think you probably just don't have enough memory to store that huge amount of data.
If you want to go beyond that, define a custom structure or class if you are in C++, but you will lose the O(1) access time that comes with an array.
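Whatever the root cause, the allocation shown in the question has one concrete hazard worth fixing: with size_A = 625,000,000, sizeof(float) * size_A is 2.5 GB, which overflows the int used for mem_size_A. A defensive sketch, with names kept from the question:

#include <cstdio>
#include <cstdlib>

int main() {
    size_t size_A = 625000000;                  // element count from the question
    size_t mem_size_A = sizeof(float) * size_A; // size_t: 2.5 GB overflows an int
    float *elements = (float *)malloc(mem_size_A);
    if (elements == NULL) {                     // malloc can and does fail at this size
        fprintf(stderr, "allocation of %zu bytes failed\n", mem_size_A);
        return 1;
    }
    // ... fill and use the matrix here ...
    free(elements);
    return 0;
}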