How to run C code in MATLAB - c++

I want to run C++ code in MATLAB; in my code I have this:
int max=(int)*mxGetPr(prhs[0]);
double a[max];
but when I use mex I get these errors:
error C2057: expected constant expression
error C2466: cannot allocate an array of constant size 0
'a' : unknown size
all for line 2, and I get errors for this file only (I can mex the example codes).
Any idea how I can fix it?

The problem is that max is not a constant expression (or, at least, it is not marked as constant). For the second line to work, max must be a compile-time constant, because the memory footprint of the array has to be known before execution (the array is allocated on the stack). If you do not know the size until run time, you need to use something like
double *a = (double *)mxCalloc(max, sizeof(double)); /* cast needed when compiling as C++ */
This allocates a chunk of memory on the heap, which allows the size to be determined at run time.

Weird compilation error: catastrophic error: section length mismatch in array expression compilation aborted for shocktube.c

I am having trouble compiling a simple piece of code. Here are the details.
Variable declaration:
double q_old[3][N], q_new[3][N], u[3][N], flux[3][N+1], fl[3][N+1], fr[3][N+1];
The following line seems to be the source of error:
fl[0][1:N+1] = u[1][0:N]*u[0][0:N]; // this does not work
fl[0][1:N] = u[1][0:N]*u[0][0:N]; // this works
The error:
shocktube.c(47): catastrophic error: section length mismatch in array expression
compilation aborted for shocktube.c (code 1)
I am using the Intel icpc compiler. The first statement does not work but the second does, which is really weird because AFAIK the size of the LHS array in the first statement should be N (index varying from 1 to N) and the size of the RHS should also be N (0 to N-1), while in the second statement the size of the LHS is N-1.
Thanks,
The Intel array section notation is [start:length], not [start:end]. Therefore, this line
fl[0][1:N+1] = u[1][0:N]*u[0][0:N]; // this does not work
is invalid because you are indexing past the end of the array (specifically, you are asking for indices [1, N+2) in the fl array, whose last dimension only has N+1 elements).
The error probably should be a little gentler ("catastrophic" is not a term I'd apply to a user error), but this is ultimately not the compiler's fault.

2D array access time comparison

I have two ways of constructing a 2D array:
int arr[NUM_ROWS][NUM_COLS];
//...
tmp = arr[i][j];
and a flattened array:
int arr[NUM_ROWS*NUM_COLS];
//...
tmp = arr[i*NUM_COLS+j];
I am doing image processing, so even a small improvement in access time matters. Which one is faster? I am thinking the first one, since the second one needs a calculation, but then the first one requires two address lookups, so I am not sure.
I don't think there is any performance difference. The system will allocate the same amount of contiguous memory in both cases. As for calculating i*NUM_COLS+j, either you do it yourself for the 1D declaration, or the compiler does it for you in the 2D case. The only real concern is ease of use.
You should trust the capabilities of your compiler to optimize standard code.
You should also trust modern CPUs to have fast numeric multiplication instructions.
Don't agonize over which one to use!
Decades ago I optimized some code greatly by using pointers instead of the 2D-array index calculation, but this will (a) only be useful if it is an option to keep the pointer around, e.g. across a loop, and (b) have low impact, since I would guess modern CPUs can do a 2D array access in a single cycle. Worth measuring! It may also depend on the array size.
In any case, pointers using ptr++ or ptr += NUM_COLS will for sure be a little bit faster where applicable.
The first method will almost always be faster. In general (because there are always corner cases), processor and memory architectures, as well as compilers, may have optimizations built in to help with 2D arrays and similar data structures. For example, GPUs are optimized for matrix (2D array) math.
So, again in general, I would let the compiler and hardware optimize your memory layout and address arithmetic where possible.
...also I agree with @Paul R: there are much bigger considerations when it comes to performance than your array allocation and address arithmetic.
There are two cases to consider: compile-time definition and run-time definition of the array size. There is a big difference in performance.
Static allocation, global or file scope, fixed size array:
The compiler knows the size of the array and tells the linker to allocate space in the data / memory section. This is the fastest method.
Example:
#define ROWS 5
#define COLUMNS 6
int array[ROWS][COLUMNS];
int buffer[ROWS * COLUMNS];
Run time allocation, function local scope, fixed size array:
The compiler knows the size of the array, and tells the code to allocate space in the local memory (a.k.a. stack) for the array. In general, this means adding a value to a stack register. Usually one or two instructions.
Example:
void my_function(void)
{
unsigned short my_array[ROWS][COLUMNS];
unsigned short buffer[ROWS * COLUMNS];
}
Run Time allocation, dynamic memory, fixed size array:
Again, the compiler has already calculated the amount of memory required for the array since it was declared with a fixed size. The compiler emits code to call the memory allocation function with the required amount (usually passed as a parameter). This is a little slower because of the function call and the overhead required to find some dynamic memory (the allocator's internal bookkeeping).
Example:
void another_function(void)
{
unsigned char * array = new unsigned char [ROWS * COLUMNS];
//...
delete[] array;
}
Run Time allocation, dynamic memory, variable size:
Regardless of the dimensions of the array, the compiler must emit code to calculate the amount of memory to allocate. This quantity is then passed to the memory allocation function. A little slower than above because of the code required to calculate the size.
Example:
int * create_board(unsigned int rows, unsigned int columns)
{
int * board = new int [rows * columns];
return board;
}
Since your goal is image processing, I would assume your images are too large for static arrays. The real question is about dynamically allocated arrays.
In C/C++ there are multiple ways you can allocate a dynamic 2D array (see How do I work with dynamic multi-dimensional arrays in C?). To make this work in both C and C++ we can use malloc with a cast (in C++ only, you can use new instead).
Method 1:
int** arr1 = (int**)malloc(NUM_ROWS * sizeof(int*));
for(int i=0; i<NUM_ROWS; i++)
arr1[i] = (int*)malloc(NUM_COLS * sizeof(int));
Method 2:
int** arr2 = (int**)malloc(NUM_ROWS * sizeof(int*));
int* arrflat = (int*)malloc(NUM_ROWS * NUM_COLS * sizeof(int));
for (int i = 0; i < NUM_ROWS; i++)
arr2[i] = arrflat + (i*NUM_COLS);
Method 2 essentially creates a contiguous 2D array: i.e. arrflat[NUM_COLS*i+j] and arr2[i][j] should have identical performance. However, arrflat[NUM_COLS*i+j] and arr1[i][j] from method 1 should not be expected to have identical performance, since arr1 is not contiguous. Method 1, however, seems to be the method that is most commonly used for dynamic arrays.
In general, I use arrflat[NUM_COLS*i+j] so I don't have to think about how to allocate dynamic 2D arrays.

Getting User store segfault error

I am receiving the error "User store segfault # 0x000000007feff598" for a large convolution operation.
I have defined the resultant array as
int t3_isize = 0;
int t3_irowcount = 0;
t3_irowcount=atoi(argv[2]);
t3_isize = atoi(argv[3]);
int iarray_size = t3_isize*t3_irowcount;
uint64_t t_result[iarray_size];
I noticed that if the array size is less than 2^16 - 1, the operation doesn't fail, but for the array size 2^16 or higher, I get the segfault error.
Any idea why this is happening? And how can I rectify this?
“I noticed that if the array size is greater than 2^16 - 1, the operation doesn't fail, but for the array size 2^16 or higher, I get the segfault error”
↑ Seems a bit self-contradictory.
But probably you're just allocating too large an array on the stack. With dynamic memory allocation (e.g., just switch to std::vector) you avoid that problem. For example:
std::vector<uint64_t> t_result(iarray_size);
In passing, I would ditch the Hungarian-notation-like prefixes. For example, t_ reads as if this were a type. The time for Hungarian notation was the late 1980s, and its purpose was to support Microsoft's Programmer's Workbench, a product discontinued long ago.
You're probably declaring too large of an array for the stack. 2^16 elements of 8 bytes each is quite a lot (512 KB).
If you just need static allocation, move the array to file scope.
Otherwise, consider using std::vector, which will allocate storage from the heap and manage it for you.
Using malloc() solved the issue.
uint64_t* t_result = (uint64_t*) malloc(sizeof(uint64_t)*iarray_size);

Why is there a compiler error, when declaring an array with size as integer variable?

In Visual Studio, I get an error that I didn't get in Dev-C++:
int project = (rand() % 5) + 1 ;
int P[project][3];
Compilation:
error C2057: expected constant expression
error C2466: cannot allocate an array of constant size 0
error C2133: 'P' : unknown size
Can you help me understand this error?
You need to allocate memory dynamically in this case, so you cannot write int P[someVariable]. You need to use int *mem = new int[someVariable];
Have a look at this link.
In C++ you can only create arrays whose size is a compile-time constant.
The size of array P needs to be known at compile time and it needs to be a constant; the compiler tells you that through the diagnostic messages.
Why different results on different compilers?
Some compilers allow you to create variable-length arrays through compiler extensions, but this is not approved by the standard, and such usage makes your program non-portable across compiler implementations. This is what you are experiencing.
The standard C++ class for variable-length arrays is std::vector. In this case you'd get:
std::vector<int> P[3];
P[0].resize(project);
P[1].resize(project);
P[2].resize(project);

Allocate an Array of 999999999 cells | C++

If I compile this:
long double *N;
N = new long double[999999999];
I get this error:
error C2148: total size of array must not exceed 0x7fffffff bytes
So, I tried compiling this:
long double *N;
long double *N2;
N = new long double[999999999];
N2 = N + 99999999;
N2 = new long double[900000000];
I haven't run the program yet, but I'm pretty sure I'll get a heap corruption detected error, because I don't want to navigate with N and then at a certain point switch to navigating with N2.
Is there a safe way to do this with only one pointer?
999999999*sizeof(double) is 7999999992 bytes. On a 32-bit platform, that is way more than 2^32 bytes. You simply can't address that many bytes in a 32-bit application.
If you absolutely must have 1 billion doubles, use a 64-bit platform.
If you are on a 64-bit platform and have enough RAM to support the allocation, the compiler won't generate any error. If either condition is false, you get a compiler error (as above) or a run-time exception, and there is no "safe" way to allocate more memory than the RAM, OS, and processor can support.