What is aggregate initialization? - C++

The section "Array Initialization" in Chapter 4, page 231 of "Thinking in Java, 2nd Edition" has this to say:
Initializing arrays in C is error-prone and tedious. C++ uses
aggregate initialization to make it much safer. Java has no
“aggregates” like C++, since everything is an object in Java. It does
have arrays, and these are supported with array initialization.
Why is it error-prone and tedious in C? What does "aggregate initialization" mean, and why is it safer? I came across the chapter "Aggregate initialization" in Bruce Eckel's "Thinking in C++" (2nd Ed.), but it doesn't convince me of anything.

First of all, to answer the main question, aggregate initialization means the use of brace-enclosed initializer lists to initialize all members of an aggregate (i.e. an array or struct [in C++, only certain types of structs count as aggregates]).
Obviously,
int ar[] = { 1 , 2 };
is safer than
int ar[2];
ar[0] = 1;
ar[1] = 2;
because the latter gives ample opportunity for typos and other errors in the indices of the individual elements to be initialized.
Looking at today's C and C++, it's unclear to me why the author makes a distinction between C and C++. Both languages enable aggregate initialization for arrays.
One possibility is that the author referred to old versions of the C Standard. Notably, in ANSI C (C89) an important restriction applied to the use of aggregate initialization: All initializers had to be constant expressions:
/* This is possible in C89: */
void f(int i)
{ int ar[] = { 1, 2 }; }
/* But this is not
   (because i is not a constant expression):
*/
void g(int i)
{ int ar[] = { i, i + 1 }; }
This is due to 3.5.7 in C89 (quoting from the draft I found here):
All the expressions in an initializer for an object that has static storage duration or in an initializer list for an object that has aggregate or union type shall be constant expressions.
This clearly limits the usefulness of aggregate initialization (and even in 1989, I believe many compilers implemented extensions to enable aggregate initialization also for non-constant expressions).
Later versions of the C Standard did not have this restriction, and the standardized versions of C++ (starting with C++98), I believe, never had any such restriction.
I can only speculate, but perhaps this is what the author had in mind?

I am assuming that the author is warning you about the lack of enforced size constraints in C and C++. In C and C++, arrays decay to pointers to their first element, and indexing then uses pointer arithmetic to find the element you are referring to. Since arrays are not objects and the compiler makes no effort to store their size, there are no length checks. In Java, arrays are objects, so their size is known; the runtime can check indices against it, which safeguards the developer from accessing memory outside the bounds of the array.
I find it strange that the statement 'C++ uses aggregate initialization to make it much safer' was even used in this context.
Aggregate initialization, which is common to most modern languages, looks like this:
int intArray[3] = {1,2,3};
int int2DArray[2][2] = {{1,2}, {3,4}};
This type of initialization assumes you know the size of the array and its contents beforehand. It safeguards one from overstepping the boundary and provides a way to initialize an array with set values. Maybe in this case the author has in mind a developer who declared a static C array of size 5, then wrote a loop to initialize its contents but overstepped the boundary of the array by one, writing to memory that wasn't theirs.

Related


Why C++ forbids new T[n](arg...)?

This question seems ancient (since C++98), but a quick search didn't lead me to an answer.
std::size_t n = 100;
std::unique_ptr<int[]> data(new int[n]); // ok, uninitialized
std::unique_ptr<int[]> data(new int[n]()); // ok, value-initialized
std::unique_ptr<int[]> data(new int[n](5)); // not allowed, but why?
What's the rationale behind this restriction? Some UDTs cannot be default-constructed, so those types cannot be used with new[].
Please don't go astray to suggesting something like std::vector or just say that's how the standard defines it, everyone knows that, but I want to know the reason why new T[n](arg...) is forbidden by the standard.
The first part of the answer to "why is it forbidden" is almost tautological: because it is not allowed by the standard. I know you probably don't like such an answer, but that's the nature of the beast, sorry.
And why should it be allowed anyway? What would it mean? In your very simple case, initializing every int with a specific value is fairly reasonable. But then again, for normal (statically allocated) array initialization, the rule is that each element in the right-hand side {} is passed to an element of the left-hand side array, with extra elements getting default-initialization treatment. I.e.,
int data[4] = {5};
would only initialize the first element with 5 (the remaining three are zero-initialized).
But let's look at another example, which isn't even very contrived, which shows that what you ask for doesn't really make a lot of sense in a general context.
struct Foo {
int a,b,c,d;
Foo(int a=0, int b=0, int c=0, int d=0)
: a(a), b(b), c(c), d(d) {}
};
...
Foo *f = new Foo[4](1,2,3,4); // <-- what does this mean?!?!
Should there be four Foo(1,2,3,4)s? Or [Foo(1,2,3,4), Foo(), Foo(), Foo()]? Or maybe [Foo(1), Foo(2), Foo(3), Foo(4)]? Or why not [Foo(1,2,3), Foo(4), Foo(), Foo()]? What if one of Foo's arguments were an rvalue reference? There are just so many cases in which there is no obvious Right Thing for the compiler to do. Most of the examples I just gave have valid use cases, and there isn't one that's clearly better than the others.
PS: You can achieve what you want with, e.g.,
std::vector<int> data(n, 5);
some UDTs don't even have a default ctor, so those types cannot be used with new[]
I'm not sure what you mean by this. E.g. int does not have a default constructor. However, you can initialize it as new int(3) or as new int[n](), as you already know. The event that takes place here is called initialization. Initialization can be carried out by constructors, but that's just a specific kind of initialization applicable to class types only. int is not a class type and constructors are completely inapplicable to int. So, you should not be even mentioning constructors with regard to int.
As for new int[n](5)... What did you expect to happen in this case? C++ does not support such syntax for array initialization. What did you want it to mean? You have n array elements and only one initializer. How do you propose to initialize n array elements with only one initializer? Use the value 5 to initialize each array element? But C++ never had such multi-initialization. Even modern C++ doesn't.
You seem to have adopted this "multi-initialization" interpretation of new int[n](5) syntax as the one and only "obviously natural" way for it to behave. However, this is not necessarily that clear-cut. Historically C++ language (and C language) followed a different philosophy with regard to initializers that are "smaller" or "shorter" than the aggregate being initialized. Historically the language used the explicitly specified initializers to initialize the sub-objects at the beginning of the aggregate, while the rest of the sub-objects got default-initialized (sticking to C++98 terminology). From this point of view, you can actually see the () initializer in new int[n]() not as your "multi-initializer", but rather as an initializer only for the very first element of the array. Meanwhile, the rest of the elements get default-initialized (producing the same effect as () would). Granted, one can argue that the above logic usually applies to { ... } initializers, not to (...) initializers, but nevertheless this general design principle is present in the language.
It's not clear what int[n](5) would even mean. int[5]{1,2,3,4,5} is perfectly well-defined, however.
I'm going to assume you mean for int[n](...) to construct each array element in the same way with the given arguments. Your use case for such a syntax is for data types without a default constructor, but I posit that you don't actually solve that use case: that for many (most?) arrays of such types, each object needs to be constructed differently.
The original expectation is to allow new T[n](arg…) to call T(arg…) to initialize each element.
It turns out that people don't even agree on what new T[n](arg…) would mean.
I gathered some good points from the answers and comments; here's the summary:
Inconsistent meaning. A parenthesized initializer is used to initialize the object; in the case of an array, the only viable one is (), which value-initializes the array and its elements. Giving T[n](arg…) a new meaning would conflict with the current meaning of the parenthesized initializer.
No general way to channel the args. Consider a type T with ctor T(int, Ref&, Rval&&), and the usage new T[n](++i, ref, Rval{}). If the args are supplied literally (i.e. T(++i, ref, Rval{}) is called for each element), ++i will be evaluated multiple times. If the args are supplied through temporaries, how can you decide that ref should be passed by reference while Rval{} is passed as a prvalue?
In short, the syntax seems plausible but doesn't actually make sense and is not generally implementable.

usage of const in c++

I am new to C++. I was going through a C++ book and it says
const int i[] = { 1, 2, 3, 4 };
float f[i[3]]; // Illegal
It says the declaration of the float array is invalid during compilation. Why is that?
Suppose if we use
int i = 3;
float f[i];
It works.
What is the problem with the first situation?
Thanks.
So the first is illegal because an array must have a compile-time known bound, and i[3], while strictly speaking known at compile time, does not fulfill the criteria the language sets for "compile-time known".
The second is also illegal for the same reason.
Both cases, however, will generally be accepted by GCC because it supports C99-style runtime-sized arrays as an extension in C++. Pass the -pedantic flag to GCC to make it complain.
Edit: The C++ standard term is "integral constant expression", and things qualifying as such are described in detail in section 5.19 of the standard. The exact rules are non-trivial and C++11 has a much wider range of things that qualify due to constexpr, but in C++98, the list of legal things is, roughly:
integer literals
simple expressions involving only constants
non-type template parameters of integral type
variables of integral type declared as const and initialized with a constant expression
Your second example doesn't work and it shouldn't work.
i must be constant. This works
const int i = 3;
float f[i];
Just to expound on Sebastian's answer:
When you create a static array, the compiler must know how much space it needs to reserve. That means the array size must be known at compile-time. In other words, it must be a literal or a constant:
const int SIZE = 3;
int arr[SIZE]; // ok
int arr[3]; // also ok
int size = 3;
int arr[size]; // Not OK
Since the value of size could be different by the time the array is created, the compiler won't know how much space to reserve for the array. If you declare it as const, it knows the value will not change and can reserve the proper amount of space.
If you need an array of a variable size, you will need to create it dynamically using new (and make sure to clean it up with delete when you are done with it).
For arrays with lengths known only at runtime in C++ we have std::vector<T>. For builtin arrays the size must be known at compile-time. This is also true for C++11, although the much older C99-standard already supports dynamic stack arrays. See also the accepted answer of Why doesn't C++ support dynamic arrays on the stack?

Why can't arrays be passed as function arguments?

Why can't you pass arrays as function arguments?
I have been reading this C++ book that says 'you can't pass arrays as function arguments', but it never explains why. Also, when I looked it up online I found comments like 'why would you do that anyway?' It's not that I would do it, I just want to know why you can't.
Why can't arrays be passed as function arguments?
They can:
void foo(const int (&myArray)[5]) {
// `myArray` is the original array of five integers
}
In technical terms, the type of the argument to foo is "reference to array of 5 const ints"; with references, we can pass the actual object around (disclaimer: terminology varies by abstraction level).
What you can't do is pass by value, because for historical reasons arrays are not copyable as a whole. Instead, attempting to pass an array by value into a function (that is, to pass a copy of an array) causes its name to decay into a pointer. (Some resources get this wrong!)
Array names decay to pointers for pass-by-value
This means:
void foo(int* ptr);
int ar[10]; // an array
foo(ar); // automatically passes a pointer to the first element of ar (i.e. &ar[0])
There's also the hugely misleading "syntactic sugar" that looks like you can pass an array of arbitrary length by value:
void foo(int ptr[]);
int ar[10]; // an array
foo(ar);
But, actually, you're still just passing a pointer (to the first element of ar). foo is the same as it was above!
Whilst we're at it, the following function also doesn't really have the signature that it seems to. Look what happens when we try to call this function without defining it:
void foo(int ar[5]);
int main() {
int ar[5];
foo(ar);
}
// error: undefined reference to `foo(int*)'
So foo takes int* in fact, not int[5]!
But you can work-around it!
You can hack around this by wrapping the array in a struct or class, because the default copy operator will copy the array:
struct Array_by_val
{
int my_array[10];
};
void func (Array_by_val x) {}
int main() {
Array_by_val x;
func(x);
}
This is somewhat confusing behaviour.
Or, better, a generic pass-by-reference approach
In C++, with some template magic, we can make a function both re-usable and able to receive an array:
template <typename T, size_t N>
void foo(const T (&myArray)[N]) {
// `myArray` is the original array of N Ts
}
But we still can't pass one by value. Something to remember.
The future...
And since C++11 is just over the horizon, and C++0x support is coming along nicely in the mainstream toolchains, you can use the lovely std::array inherited from Boost! I'll leave researching that as an exercise to the reader.
So I see answers explaining, "Why doesn't the compiler allow me to do this?" Rather than "What caused the standard to specify this behavior?" The answer lies in the history of C. This is taken from "The Development of the C Language" (source) by Dennis Ritchie.
In the proto-C languages, memory was divided into "cells" each containing a word. These could be dereferenced using the eventual unary * operator -- yes, these were essentially typeless languages like some of today's toy languages like Brainf_ck. Syntactic sugar allowed one to pretend a pointer was an array:
a[5]; // equivalent to *(a + 5)
Then, automatic allocation was added:
auto a[10]; // allocate 10 cells, assign pointer to a
// note that we are still typeless
a += 1; // remember that a is a pointer
At some point, the auto storage specifier behavior became default -- you may also be wondering what the point of the auto keyword was anyway, this is it. Pointers and arrays were left to behave in somewhat quirky ways as a result of these incremental changes. Perhaps the types would behave more alike if the language were designed from a bird's-eye view. As it stands, this is just one more C / C++ gotcha.
Arrays are in a sense second-class types, something that C++ inherited from C.
Quoting 6.3.2.1p3 in the C99 standard:
Except when it is the operand of the sizeof operator or the unary
& operator, or is a string literal used to initialize an array, an
expression that has type "array of type" is converted to an
expression with type "pointer to type" that points to the initial
element of the array object and is not an lvalue. If the array object
has register storage class, the behavior is undefined.
The same paragraph in the C11 standard is essentially the same, with the addition of the new _Alignof operator. (Both links are to drafts which are very close to the official standards.) UPDATE: That was actually an error in the N1570 draft, corrected in the released C11 standard: _Alignof can't be applied to an expression, only to a parenthesized type name, so C11 has only the same three exceptions that C99 and C90 did. (But I digress.)
I don't have the corresponding C++ citation handy, but I believe it's quite similar.
So if arr is an array object, and you call a function func(arr), then func will receive a pointer to the first element of arr.
So far, this is more or less "it works that way because it's defined that way", but there are historical and technical reasons for it.
Permitting array parameters wouldn't allow for much flexibility (without further changes to the language), since, for example, char[5] and char[6] are distinct types. Even passing arrays by reference doesn't help with that (unless there's some C++ feature I'm missing, always a possibility). Passing pointers gives you tremendous flexibility (perhaps too much!). The pointer can point to the first element of an array of any size -- but you have to roll your own mechanism to tell the function how big the array is.
Designing a language so that arrays of different lengths are somewhat compatible while still being distinct is actually quite tricky. In Ada, for example, the equivalents of char[5] and char[6] are the same type, but different subtypes. More dynamic languages make the length part of an array object's value, not of its type. C still pretty much muddles along with explicit pointers and lengths, or pointers and terminators. C++ inherited all that baggage from C. It mostly punted on the whole array thing and introduced vectors, so there wasn't as much need to make arrays first-class types.
TL;DR: This is C++, you should be using vectors anyway! (Well, sometimes.)
Arrays are not passed by value because arrays are essentially contiguous blocks of memory. If you had an array you wanted to pass by value, you could declare it within a structure and then access it through the structure.
This itself has performance implications, because it means you will take up more space on the stack. Passing a pointer is faster because far less data has to be copied onto the stack.
I believe that the reason why C++ did this was, when it was created, that it might have taken up too many resources to send the whole array rather than the address in memory. That is just my thoughts on the matter and an assumption.
It's because of a technical reason. Arguments are passed on the stack; an array can have a huge size, megabytes and more. Copying that data to the stack on every call will not only be slower, but it will exhaust the stack pretty quickly.
You can overcome that limitation by putting the array into a struct (or by using boost::array):
struct Array
{
int data[512*1024];
int& operator[](int i) { return data[i]; }
};
void foo(Array byValueArray) { .......... }
Try to make nested calls of that function and see how many stack overflows you'll get!

Explicit initialization of struct/class members

struct some_struct{
int a;
};
some_struct n = {};
n.a will be 0 after this.
I know this braces form of initialization is inherited from C and is supported for compatibility with C programs, but this only compiles with C++, not with the C compiler. I'm using Visual C++ 2005.
In C this type of initialization
struct some_struct n = {0};
is correct and will zero-initialize all members of a structure.
Is the empty pair of braces form of initialization standard? I first saw this form of initialization in a WinAPI tutorial from msdn.
The empty braces form of initialization is standard in C++ (it's permitted explicitly by the grammar). See C Static Array Initialization - how verbose do I need to be? for more details if you're interested.
I assume that it was added to C++ because a 0 value might not be an appropriate default initial value in all situations.
It is standard in C++, it isn't in C.
The syntax was introduced to C++, because some objects can't be initialized with 0, and there would be no generic way to perform value-initialization of arrays.
The {0} form, by the way, is valid in C89 as well; it's the empty braces {} that C did not allow until C23.
Another way to initialize in a C89 and C++ compliant way is this "trick":
struct some_struct{
int a;
};
static struct some_struct zstruct;
struct some_struct n = zstruct;
This uses the fact that static variables are zero-initialized, unlike declarations on the stack or heap.
I find the following link to be very informative on this particular issue
http://publib.boulder.ibm.com/infocenter/lnxpcomp/v8v101/index.jsp?topic=/com.ibm.xlcpp8l.doc/language/ref/strin.htm