Using another variable for array index limit in C++

Instead of sizing my arrays one by one...
int limit=10, data_1[10], data_2[10], data_3[10];
Is it possible to use the value of limit as the size of these arrays? My code gets the error "Constant Expression Required" when I write data_1[limit].
Is there any way to use another variable to limit these arrays' indices in C++?

Here you go:
const int limit = 10;
int data_1[limit], data_2[limit], data_3[limit];
limit must be a const
EDIT:
As other answers have mentioned, limit could also simply be defined through a preprocessing step, like so:
#define LIMIT 10 // Usually preprocessor-defined variables are in all caps
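In C++11 and later you could also spell the constant as constexpr, which states the compile-time requirement explicitly; a minimal sketch:
constexpr int limit = 10; // guaranteed usable as a compile-time constant
int data_1[limit], data_2[limit], data_3[limit];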

The error message is telling you that an array allocated on the stack must have a size that is a constant expression. For stack allocation you have two ways of getting such a constant; you could use
#define LIMIT 10
or you could use const int like this
const int LIMIT = 10;
and with either, this would then work
int data_1[LIMIT], data_2[LIMIT], data_3[LIMIT];
You might also allocate on the heap (using malloc()), but then you must also call free().
int *data = (int *) malloc(limit * sizeof(int)); /* the size can be a runtime value here */
if (data != NULL) {
    /* ... do something with data ... */
    free(data); /* free the memory */
}

You've tagged this with both C and C++, but the right way to handle this is different between the two.
In C, assuming a reasonably up-to-date (C99 or newer) compiler, the way you've done things is allowed, as long as data_1, data_2 and data_3 are local to some function. They almost certainly shouldn't be globals, so for C the obvious cure is to simply make them local to the function that needs them (and if other functions need them, pass them as parameters).
In C++, you've gotten some answers that cure the immediate problem, such as const-qualifying limit and allocating the other three items dynamically. At least in my opinion, these are inferior choices though. In most cases, you should use std::vector instead of arrays, in which case you don't need to const-qualify limit for things to be just fine:
int limit = 10;
std::vector<int> data_1(limit), data_2(limit), data_3(limit);
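For illustration, here's a minimal, self-contained sketch showing that the vector approach works even when the limit is only known at run time (reading it from std::cin is just an assumption for the example):
#include <iostream>
#include <vector>

int main() {
    int limit = 10;
    std::cin >> limit;                  // size known only at run time
    std::vector<int> data_1(limit), data_2(limit), data_3(limit);
    data_1[0] = 42;                     // element access works like an array
    std::cout << data_1.size() << '\n'; // a vector also knows its own size
}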

Use a macro or a const:
#define LIMIT 10
or
const int LIMIT = 10;

For C and C++:
#define LIMIT 10
int data[LIMIT];
And just for C++:
const int LIMIT = 10;
int data[LIMIT];

Seeing the number of answers that propose using a #define, and as the question is tagged C++, it should be mentioned that a #define has drawbacks, especially the fact that the compiler doesn't know what LIMIT is: every occurrence is removed during the preprocessing stage and replaced with the value. Thus, when debugging, you could get an error message referring to the value (i.e. 10 in this case) with no mention of LIMIT, as it never entered the symbol table.
Thus, you should prefer the use of
const int Limit = 10;
int data[Limit];
instead of
#define LIMIT 10
if you're given the opportunity (i.e. if you're in C++, and not in C).
And as mentioned, using an std::vector would be simpler and would remove the need for such a constant expression.

Related

Time efficient way to pass pointer to a Macro

Most of my code involves passing the address of a memory location to several macros which do the required job.
Could you please explain which is the best way to pass the address in terms of time efficiency?
Sample code:
#define FILL_VAL(ptr /* uint8_t* */ ) \
do \
{ \
    /* Macro which does the job */ \
} while(0)

uint8_t *buf = malloc(100);
uint16_t buf_index = 0;
//Method 1:
FILL_VAL(&buf[buf_index]);
//Method 2:
FILL_VAL(buf + buf_index);
Macros / defines are just text substitutions, and as such they cannot directly influence the "time/space/whatever efficiency" of the target program.
So basically your question must be rephrased as: "will my compiler generate the same code for (similar) expressions containing buf + buf_index and &buf[buf_index]?"
Strictly, only live experiments can prove this, but it's a very reasonable guess that the generated code will be the same.
In C++ &buf[buf_index] is preferred.
In C buf + buf_index is normal.
Why??
In C, it's part of the definition of the idiom. Why say things with more words than necessary? Say it in the shortest possible way; that makes it easier to understand and harder to type wrong.
C++ introduced containers. These are data structures that "look" like something they are not. To illustrate this, consider a piece of code that uses a fixed-size C array.
int my_vec[FIXED_SIZE]; // create elements
FILL_VAL(&my_vec);
Later you want to switch to a variable size array.
std::vector<int> my_vec(FIXED_SIZE); // create a container for elements
FILL_VAL(&my_vec); // << WRONG!!!
Now your "FILL_VAL" macro does the wrong thing. it will write over the actual vector, almost certainly creating a bad pointer, and eventually memory corruption. This is why very early in the use of C++ most programmers switched to this style.
std::vector<int> my_vec(FIXED_SIZE); // create a container for elements
FILL_VAL(&my_vec[0]); // << Works for c-array and vector!
As for which is faster, they express exactly the same thing. Compilers will treat these as exactly the same thing. It is not unusual for compilers to start with a pass that swaps equivalent code for a single standard representation. This way the code generation pass can be simpler.
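As a minimal sketch of the comparison (the macro body here is a hypothetical stand-in, since the question elides it), both call styles feed the same address to the same macro:
#include <cstdint>
#include <cstdlib>

// Hypothetical body: fill a single byte; the real macro's job is elided in the question.
#define FILL_VAL(ptr) \
do { \
    *(ptr) = 0xFF; \
} while(0)

int main() {
    std::uint8_t *buf = static_cast<std::uint8_t *>(std::malloc(100));
    std::uint16_t buf_index = 0;
    if (buf != nullptr) {
        FILL_VAL(&buf[buf_index]); // Method 1
        FILL_VAL(buf + buf_index); // Method 2: same address, same generated code
        std::free(buf);
    }
}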

Defining Array C/C++

What is the difference between these two array definitions, and which one is more correct, and why?
#include <stdio.h>
#define SIZE 20
int main() {
    // definition method 1:
    int a[SIZE];
    // end definition method 1.

    // definition method 2:
    int n;
    scanf("%d", &n);
    int b[n];
    // end definition method 2.

    return 0;
}
I know that if we read the size, variable n, from stdin, it's more correct to define our array (the block of memory we'll be using) as a pointer and use stdlib.h with array = malloc(n * sizeof(int)), rather than declaring it as int array[n], but again: why?
It's not "more correct" or "less correct". It either is xor isn't correct. In particular, this works in C, but not in C++.
You are declaring variable-length arrays. A better way to declare a dynamic array is:
int *arr; // int * type is just for simplicity
arr = malloc(n * sizeof(int));
This is because variable-length arrays are only allowed in C99, and you can't use them in C89/C90.
In (pre-C99) C and C++, all types are statically sized. This means that arrays must be declared with a size that is both constant and known to the compiler.
Now, many C++ compilers offer dynamically sized arrays as a nonstandard extension, and C99 explicitly permits them. So int b[n] will most likely work if you try it. But in some cases, it will not, and the compiler is not wrong in those cases.
If you know SIZE at compile-time:
int ar[SIZE];
If you don't:
std::vector<int> ar;
I don't want to see malloc anywhere in your C++ code. However, you are fundamentally correct and for C that's just what you'd do:
int* ptr = malloc(sizeof(int) * SIZE);
/* ... */
free(ptr);
Variable-length arrays (standard in C99, a GCC extension in C++) allow you to do:
int ar[n];
but I've had issues where VLAs were disabled but GCC didn't successfully detect that I was trying to use them. Chaos ensues. Just avoid it.
Q1: The first definition is a static array declaration. Perfectly correct.
It is for when you know the size up front, so there is no comparison with a VLA or malloc().
Q2: Which is better when taking the size as input from the user: a VLA or malloc?
VLA: Limited by the environment's bounds on the size of automatic allocation, and automatic variables are usually allocated on the stack, which is relatively small; the limit is platform-specific. Also, VLAs exist in C99 and above only. They do make declaring multidimensional arrays somewhat easier.
malloc: Allocates from the heap, so it is definitely better for large sizes. For multidimensional arrays, pointers are involved, so the implementation is a bit more complex.
Check http://bytes.com/topic/c/answers/578354-vla-feature-c99-vs-malloc
I think that method 1 could be a little bit faster, but both of them are correct in C.
In C++ the first will work, but if you want to use the second you should use:
int size = 5;
int * array = new int[size];
and remember to delete it:
delete [] array;
I think it gives you more options to use while coding.
If you use malloc or another dynamic allocation to get a pointer, you work through the pointer, as in p + n, but if you use an array you can simply write array[n]. Also, a pointer you allocate must be freed, while an automatic array does not need a free.
And in C++, we can define user-defined classes to do such things; in the STL, std::vector does the array-things, and much more.
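For example, a minimal sketch of the std::vector version of the same idea, which sizes itself at run time and needs no explicit free:
#include <iostream>
#include <vector>

int main() {
    int n = 0;
    std::cin >> n;
    std::vector<int> a(n); // sized at run time, memory released automatically
    if (n > 0)
        a[0] = 1;          // indexing works exactly like an array
}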
Both are correct. The declaration you use depends on your code.
The first declaration, int a[SIZE];, creates an array with a fixed size of 20 elements.
It is helpful when you know the exact size the array will need in the code, for example when generating the multiplication table of a number n up to its 20th multiple.
The second declaration lets you make an array of whatever size you desire.
It is helpful when you need an array of a different size each time the code runs, for example when generating the Fibonacci series up to n: the array must hold n values for each value of n. If n = 5, int a[20] wastes memory because only the first five slots are used; if n = 25, int a[20] is too small.
The difference if you define the array using malloc is that you can pass the size of the array dynamically, i.e. at run time: you pass in a value your program only obtains while running.
One more difference is that arrays created using malloc are allocated space on the heap, so they are preserved across function calls, unlike automatic (local) arrays.
example:
#include <stdio.h>
#include <stdlib.h>

int main()
{
    int n;
    int *a;
    scanf("%d", &n);
    a = (int *)malloc(n * sizeof(int));
    /* ... use the array ... */
    free(a); /* release the heap memory when done */
    return 0;
}

Using private vars to initialize array

I'm trying to do a little application that would calculate some paths for a given graph.
I've created a class to handle simple graphs, as follows:
class SimpleGraph {
    int _nbNodes;
    int _nbLines;
protected:
    int AdjMatrix[_nbNodes, _nbNodes]; //Error happens here...
    int IncMatrix[_nbNodes, _nbLines]; //...and here!
public:
    SimpleGraph(int nbNodes, int nbLines) { this->_nbNodes = nbNodes - 1; this->_nbLines = nbLines - 1; };
    virtual bool isSimple();
};
At compile time, I get an error on the declarations of the two protected members.
I don't understand what is wrong, as there is only one constructor that takes these values as parameters. As such, they cannot be uninitialized.
What am I missing here?
The compiler needs to know how much space to allocate for a member of class SimpleGraph. However, since the sizes of AdjMatrix and IncMatrix would only be determined at run time (i.e., after compilation), it cannot do that. Specifically, the standard says that the size of an array in a class must be a constant expression.
To fix this, you can:
Allocate AdjMatrix and IncMatrix on the heap instead, so their memory can be sized at runtime.
Use a fixed size for the two arrays and keep them on the stack.
--
Another major issue with your code is that you cannot create multi-dimensional arrays using a comma (AdjMatrix[int, int]); in that position the comma is the comma operator, not a second dimension. You must instead use either:
AdjMatrix[int][int]
AdjMatrix[int * int]
Objects in C++ have a fixed size that needs to be known at compilation time. The sizes of AdjMatrix and IncMatrix are not known at compilation time, only at run time.
In the lines
int AdjMatrix[_nbNodes, _nbNodes]; //Error happens here...
int IncMatrix[_nbNodes, _nbLines]; //...and here!
The array notation is wrong. You cannot specify a two-dimensional array that way in C++. The correct notation uses brackets on each dimension, for instance:
int data[5][2];
Regarding the problem you are facing: the dimensions of an array in C++ must be specified at compile time, i.e. the compiler must know the values used for the array dimensions when compiling the program. That is clearly not the case here. You must either use integer literals, as in my example, or change the code to use vectors:
std::vector<std::vector<int> > AdjMatrix;
and in the constructor:
SimpleGraph(int nbNodes, int nbLines) : AdjMatrix(nbNodes) {
    for (int i = 0; i < nbNodes; i++)
        AdjMatrix[i].resize(nbNodes); // the adjacency matrix is nbNodes x nbNodes
}
Note that you won't need _nbNodes anymore, and use instead the size() method on AdjMatrix. You will have to do the same for IncMatrix.
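Putting it together, a minimal sketch of the class with both matrices as vectors (the member names come from the question; the isSimple stub and the sizing are assumptions for illustration):
#include <vector>

class SimpleGraph {
protected:
    std::vector<std::vector<int>> AdjMatrix; // nbNodes x nbNodes
    std::vector<std::vector<int>> IncMatrix; // nbNodes x nbLines
public:
    SimpleGraph(int nbNodes, int nbLines)
        : AdjMatrix(nbNodes, std::vector<int>(nbNodes)),
          IncMatrix(nbNodes, std::vector<int>(nbLines)) {}
    virtual bool isSimple() { return true; } // stub; the real check is elided
};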
Another option, if you know the values at compile time, is to use macros to define them symbolically.
#define NBNODES 20
int AdjMatrix[NBNODES][NBNODES];
but since you wish to pass them as constructor parameters, this may not fit your need. Still, if you know that the parameters are constants at compile time, you might be able to use the C++11 constexpr qualifier on the constructor parameters.

Size of an Array.... in C/C++?

Okay, so you have an array A[] that is passed to you in some function, say with the following function prototype:
void foo(int A[]);
Okay, as you know it's kind of hard to find the size of that array without knowing some sort of ending variable or knowing the size already...
Well, here is the deal though. I have seen some people figure it out on a challenge problem, and I don't understand how they did it. I wasn't able to see their source code, of course; that is why I am here asking.
Does anyone know how it would even be remotely possible to find the size of that array?? Maybe something like what the free() function does in C??
What do you think of this??
template<typename E, int size>
int ArrLength(E(&)[size]) { return size; }

void main()
{
    int arr[17];
    int sizeofArray = ArrLength(arr);
}
The signature of that function is not that of a function taking an array, but rather a pointer to int. You cannot obtain the size of the array within the function, and will have to pass it as an extra argument to the function.
If you are allowed to change the signature of the function there are different alternatives:
C/C++ (simple):
void f( int *data, int size ); // function
f( array, sizeof array/sizeof array[0] ); // caller code
C++:
template <int N>
void f( int (&array)[N] ); // Inside f, size N embedded in type
f( array ); // caller code
C++ (through a dispatch):
template <int N>
void f( int (&array)[N] ) { // Dispatcher
    f( array, N );
}
void f( int *array, int size ); // Actual function, as per option 1
f( array ); // Compiler deduces the size as per option 2
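A minimal, self-contained sketch of option 2, with a simple fill function assumed for illustration:
#include <cstddef>
#include <iostream>

// The size N is deduced from the array's type at the call site.
template <std::size_t N>
void fill_with(int (&array)[N], int value) {
    for (std::size_t i = 0; i < N; ++i)
        array[i] = value;
}

int main() {
    int data[8];
    fill_with(data, 42);          // N is deduced as 8, no size argument needed
    std::cout << data[7] << '\n'; // prints 42
}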
You cannot do that. Either you have a convention to signal the end of the array (e.g. that it is made of non-zero integers followed by a 0), or you transmit the size of the array (usually as an additional argument).
If you use the Boehm garbage collector (which has a lot of benefits; in particular you allocate with GC_malloc and friends but don't care about free-ing memory explicitly), you could use the GC_size function to give you the size of a GC_malloc-ed memory zone, but standard malloc doesn't have this feature.
You're asking what we think of the following code:
template<typename E, int size>
int ArrLength(E(&)[size]) { return size; }

void main()
{
    int arr[17];
    int sizeofArray = ArrLength(arr);
}
Well, void main has never been standard, neither in C nor in C++.
It's int main.
Regarding the ArrLength function, a proper implementation does not work for local types in C++98. It does work for local types by C++11 rules. But in C++11 you can write just end(a) - begin(a).
The implementation you show is not proper: it should absolutely not have an int template argument. Make that a ptrdiff_t. For example, on 64-bit Windows the type int is still 32-bit.
Finally, as general advice:
Use std::vector and std::array.
One relevant benefit of this approach is that it avoids throwing away the size information, i.e. it avoids creating the problem you're asking about. There are also many other advantages. So, try it.
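As a small sketch of that advice: std::array carries its size as part of the object, and since C++17 std::size also works on raw arrays:
#include <array>
#include <iostream>
#include <iterator> // std::size

int main() {
    std::array<int, 17> arr{};
    int raw[17];
    std::cout << arr.size() << '\n';     // 17: the container knows its size
    std::cout << std::size(raw) << '\n'; // 17: works for raw arrays too (C++17)
}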
The first element could be a count, or the last element could be a sentinel. That's about all I can think of that could work portably.
In new code, prefer passing two iterators (or pointers, in C) for container-agnostic code; that is a much better solution than passing a raw array. For container-specific code, use the C++ containers like vector.
No, you can't. Your prototype is equivalent to
void foo(int * A);
so there is obviously no size information. Implementation-dependent tricks can't help either:
the array variable can be allocated on the stack or be static, so there is no information provided by malloc or friends;
if allocated on the heap, a user of that function is not forced to call it with the first element of an allocation.
E.g. the following calls are valid:
int B[22];
foo(B);
int * A = new int[33];
foo(A + 25);
This is something that I would not suggest doing; however, if you know the address of the beginning of the array and the address of the next variable/structure defined, you could subtract the addresses. Probably not a good idea, though.
An array allocated at compile time probably has information on its size in the debug information of the executable. Moreover, one could search for all the addresses corresponding to compile-time-allocated variables and assume the size of the array is the difference between its starting address and the next closest starting address of any other variable.
For a dynamically allocated variable it should be possible to get its size from the heap's data structures.
This is hacky and system-dependent, but it is still a possible solution.
One estimate is as follows: if you have for instance an array of ints but know that they are between (stupid example) 0..80000, the first array element that's either negative or larger than 80000 is potentially right past the end of the array.
This can sometimes work because the memory right past the end of the array (I'm assuming it was dynamically allocated) won't have been initialized by the program (and thus might contain garbage values), but might still be part of the allocated pages, depending on the size of the array. In other cases it will crash or fail to provide meaningful output.
All of the other answers are probably better, i.e. you either have to pass the length of the array or terminate it with a special byte sequence.
The following method is not portable, but it works for me in VS2005:
int getSizeOfArray( int* ptr )
{
    int size = 0;
    void* ptrToStruct = ptr;
    long adr = (long)ptrToStruct; /* assumes a pointer fits in a long */
    adr = adr - 0x10;             /* the allocation size happens to sit 0x10 bytes below */
    void* ptrToSize = (void*)adr;
    size = *(int*)ptrToSize;
    size /= sizeof(int);          /* convert the byte count to an element count */
    return size;
}
This is entirely dependent on the memory model of your compiler and system, so again, it is not portable. I bet there are equivalent methods for other platforms. I would never use this in a production environment; I merely state it as an alternative.
You can use this: int n = sizeof(A) / sizeof(A[0]); just note that it only works where the actual array is in scope, not inside a function that received A as a pointer parameter, as the sketch below shows.
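A minimal sketch of the difference (the print statements are just for illustration):
#include <iostream>

void foo(int A[]) { // really: void foo(int *A)
    std::cout << sizeof(A) / sizeof(A[0]) << '\n'; // size of a pointer, not of the array
}

int main() {
    int A[10];
    std::cout << sizeof(A) / sizeof(A[0]) << '\n'; // 10: the real element count
    foo(A); // prints 2 on a typical 64-bit platform, not 10
}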

Which is the better practice: global constant or #define? [duplicate]

Possible Duplicate:
C++ - enum vs. const vs. #define
Before I used #define I used to create constants in my main function and pass them where they were needed. I found that I passed them very often and it was kind of odd, especially array sizes.
More recently I have been using #define for the reason that I don't have to pass constants in my main to each individual function.
But now that I think of it, I could use global constants as well, but for some reason I have been a little hesitant towards them.
Which is the better practice: global constants or #define?
A side question, also related: Is passing constants from my main as I described a bad practice?
They don't do quite the same thing. A #define is handled by the preprocessor before compilation proper, while a global constant is a real object that exists at runtime.
Seeing as #define can only give you extra trouble because there's no checking going on with how you use it, you should use global constants when you can and #define when you must. It will be safer and more readable that way.
As for passing constants from main, it's not unreasonable: it makes the called functions more flexible to accept an argument from the caller than to blindly pull it out of some global. Of course, if the argument isn't really expected to change for the lifetime of the program, you don't have much to gain from that.
Using constants instead of #define is very much to be preferred. #define replaces the token dumbly in every place it appears, and can cause all sorts of unintended consequences.
Passing values instead of using globals is good practice. It makes the code more flexible and modular, and more testable. Try googling for "parameterise from above".
You should never use either #defines or const variables to represent array sizes shared between functions; it's better to pass the size explicitly.
Instead of:
#include <stdlib.h>
#include <string.h>

#define TYPICAL_ARRAY_SIZE 4711

void fill_with_zeroes(char *array)
{
    memset(array, 0, TYPICAL_ARRAY_SIZE);
}

int main(void)
{
    char *za;
    if((za = malloc(TYPICAL_ARRAY_SIZE)) != NULL)
    {
        fill_with_zeroes(za);
    }
}
which uses a (shared, imagine it's in a common header or something) #define to communicate the array size, it's much better to just pass it to the function as a real argument:
void fill_with_zeroes(char *array, size_t num_elements)
{
    memset(array, 0, num_elements); /* sizeof (char) == 1. */
}
Then just change the call site:
int main(void)
{
    const size_t array_size = 4711;
    char *za;
    if((za = malloc(array_size)) != NULL)
    {
        fill_with_zeroes(za, array_size);
    }
}
This makes the size local to the place that allocated it; there's no need for the called function to magically "know" something about its arguments that is not communicated through its arguments.
If the array is non-dynamically allocated, we can do even better and remove the repeated symbolic size even locally:
int main(void)
{
    char array[42];
    fill_with_zeroes(array, sizeof array / sizeof *array);
}
Here, the well-known sizeof x / sizeof *x expression is used to (at compile-time) compute the number of elements in the array.
Constants are better. The key difference between the two is that constants are type-safe.
You shouldn't use values defined with #define as if they were const parameters. Defines are mostly used to keep the compiler from compiling some parts of the code, depending on your needs at compile time (platform-dependent choices, compile-time optimization, etc.), as sketched below.
So if you are not using #define for those reasons, avoid it and use constant values.
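For instance, a minimal sketch of the kind of compile-time switching where #define genuinely belongs (the macro name PATH_SEPARATOR is an illustrative assumption; _WIN32 is a standard predefined macro):
#include <iostream>

// Platform-dependent choice, resolved entirely before compilation.
#ifdef _WIN32
#define PATH_SEPARATOR '\\'
#else
#define PATH_SEPARATOR '/'
#endif

int main() {
    std::cout << PATH_SEPARATOR << '\n';
}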