In the following program:
#include <iostream>
struct I {
int i;
I(){i=2;}
I(int _i){i=_i;}
};
int a[3] = {a[2] = 1};
int aa[3][3] = {aa[2][2] = 1};
I A[3] = {A[2].i = 1};
I AA[3][3] = {AA[2][2].i = 1};
int main(int argc, char **argv) {
for (int b : a) std::cout << b << ' ';
std::cout << '\n';
for (auto &bb : aa) for (auto &b : bb) std::cout << b << ' ';
std::cout << '\n';
for (auto &B : A) std::cout << B.i << ' ';
std::cout << '\n';
for (auto &BB : AA) for (auto &B : BB) std::cout << B.i << ' ';
std::cout << '\n';
return 0;
}
The output is
1 0 0
1 0 0 0 0 0 0 0 1
1 2 2
1 2 2 2 2 2 2 2 2
from http://ideone.com/1ueWdK with clang 3.7,
but the result is:
0 0 1
1 0 0 0 0 0 0 0 1
1 2 2
1 2 2 2 2 2 2 2 2
on http://rextester.com/l/cpp_online_compiler_clang also with clang 3.7.
On my own Ubuntu machine, GCC 6.2 gives an internal compiler error on the construct int aa[3][3] = {aa[2][2] = 1};.
I'm assuming this is undefined behavior, but cannot find a definitive statement in the standard.
The question is:
Is the evaluation order of the side effect of the assignment in the initializer list (e.g. a[2] = 1) relative to the initialization of the corresponding array element (e.g. a[2]) defined by the standard?
Is this explicitly stated as defined or undefined? Or is it undefined simply because the standard does not specify it?
Or does the construct have defined or undefined behavior for some reason other than the evaluation order?
Let's start with the simplest case:
I A[3] = {A[2].i = 1};
I AA[3][3] = {AA[2][2].i = 1};
Both of these are UB, due to a violation of [basic.life]. You are accessing the value of an object before its lifetime has begun. I does not have a trivial default constructor, and therefore cannot be vacuously initialized. Therefore, the object's lifetime only begins once a constructor has completed. The elements of the A array have not yet been constructed when you are accessing elements of that array.
Therefore, you are invoking UB by accessing a not-yet-constructed object.
Now, the other two cases are more complex:
int a[3] = {a[2] = 1};
int aa[3][3] = {aa[2][2] = 1};
See, int permits "vacuous initialization", as defined by [basic.life]/1. Storage for a and aa has been acquired. Therefore, int a[3] is a valid array of int objects, even though aggregate initialization has not yet begun. So accessing the object and even setting its state is not UB.
The order of operations here is fixed. Even pre-C++17, the initialization of the elements of the initializer list is sequenced before the aggregate initialization is invoked, as stated in [dcl.init.list]/4. Elements in the aggregate which are not listed in the initialization list here will be filled in as if by typename{} constructs. int{} means to value-initialize an int, which results in 0.
So even though you set a[2] and aa[2][2], they should immediately be overwritten via aggregate initialization.
Therefore, all of these compilers are wrong. The answer should be:
1 0 0
1 0 0 0 0 0 0 0 0
Now granted, this is all very stupid and you shouldn't do it. But from a pure language perspective, this is well-defined behavior.
Related
I was going through lambda functions on https://shaharmike.com/cpp/lambdas-and-functions/ and found below code.
int i = 0;
auto x = [i]() mutable { cout << ++i << endl; };
x();
cout << i << endl;
auto y = x;
x();
y();
Output:
1
0
2
2
I am unable to understand why the last two calls print 2 and 2. Even though the capture of i is mutable, it does not affect the value of i outside the lambda, so I expected x() and y() to print 1 and 1. Can anyone explain why both print 2?
x has a copy of i. I will call it x.i.
x(); -- prints ++x.i, aka 1
cout << i; -- prints i, aka 0
auto y = x; -- copies x into y. x.i is 1, y.i is also 1.
x(); -- prints ++x.i, aka 2
y(); -- prints ++y.i, aka 2
The value of i is saved as a field of the functor generated by the lambda function, so when you copy it, the field is copied as well with the value 1. Then calling the functor increments each object's i field and displays that value, so you get 2 and 2.
After some experiments I've come up with these four ways of creating a multidimensional array on the heap (1 and 2 are essentially the same, except that for 1 I wanted a reference as the result):
#include <memory>
#include <iostream>
#include <typeinfo> // needed for typeid(...).name()
template <typename T>
void printArr(T const &arr)
{
std::cout << typeid(T).name() << "\t";
for (int x = 0; x < 2; ++x)
for (int y = 0; y < 2; ++y)
for (int z = 0; z < 2; ++z)
std::cout << arr[x][y][z] << " ";
std::cout << std::endl;
}
int main()
{
int(&arr)[2][2][2] = reinterpret_cast<int(&)[2][2][2]>(*new int[2][2][2]{ { { 1,2 },{ 3,4 } }, { { 5,6 },{ 7,8 } } });
printArr(arr);
delete[] &arr;
int(*arr2)[2][2] = new int[2][2][2]{ { { 1,2 },{ 3,4 } },{ { 5,6 },{ 7,8 } } };
printArr(arr2);
delete[] arr2;
std::unique_ptr<int[][2][2]> arr3(new int[2][2][2]{ { { 1,2 },{ 3,4 } },{ { 5,6 },{ 7,8 } } });
printArr(arr3);
std::unique_ptr<int[][2][2]> arr4 = std::make_unique<int[][2][2]>(2);
printArr(arr4);
return 0;
}
I have tested this on various online compilers without problems, so what I would like to know is whether all of them are valid as well.
Here is the demo https://ideone.com/UWXOoW and output:
int [2][2][2] 1 2 3 4 5 6 7 8
int (*)[2][2] 1 2 3 4 5 6 7 8
class std::unique_ptr<int [0][2][2],struct std::default_delete<int [0][2][2]> > 1 2 3 4 5 6 7 8
class std::unique_ptr<int [0][2][2],struct std::default_delete<int [0][2][2]> > 0 0 0 0 0 0 0 0
I think your first example is undefined behavior or at least would lead to undefined behavior as soon as you'd try to actually access the array through the reference arr.
new int[2][2][2] creates an array of two int[2][2]. It returns a pointer to the first element of this array ([expr.new] §1). However, a pointer to the first element of an array and a pointer to the array itself are not pointer-interconvertible. I'm not sure whether there is a definitive answer yet to the philosophical question "does the act of dereferencing an invalid pointer itself already constitute undefined behavior?". But at least accessing the array through the reference obtained from your reinterpret_cast should definitely violate the strict aliasing rule. The other three should be fine.
Edit:
As there still seems to be some confusion, here is a more detailed explanation of my argument:
new int[2][2][2]
creates an array of two int[2][2] and returns a pointer to the first element of this array, i.e., a pointer to the first int[2][2] subobject within this array and not the complete array object itself.
If you want to get an int(*)[2][2][2] from new, you could, e.g., do
new int[1][2][2][2]
I am looking for confirmation, clarification is this code well defined or not.
It is very common to erase elements of a container in a loop by re-assigning the iterator to the result of the container's erase() function. The loop is usually a little messy like this:
#include <iostream>
#include <list>

struct peer { int i; peer(int i): i(i) {} };
int main()
{
std::list<peer> peers {0, 1, 2, 3, 4, 5, 6};
for(auto p = peers.begin(); p != peers.end();) // remember not to increment
{
if(p->i > 1 && p->i < 4)
p = peers.erase(p);
else
++p; // remember to increment
}
for(auto&& peer: peers)
std::cout << peer.i << ' ';
std::cout << '\n';
}
Output: 0 1 4 5 6
It occurred to me that the following might be little tidier, assuming it is not invoking undefined behavior:
#include <iostream>
#include <list>

struct peer { int i; peer(int i): i(i) {} };
int main()
{
std::list<peer> peers {0, 1, 2, 3, 4, 5, 6};
for(auto p = peers.begin(); p != peers.end(); ++p)
if(p->i > 1 && p->i < 4)
--(p = peers.erase(p)); // idiomatic decrement ???
for(auto&& peer: peers)
std::cout << peer.i << ' ';
std::cout << '\n';
}
Output: 0 1 4 5 6
The reasons why I think this works are as follows:
peers.erase() will always return an incremented p, therefore it is safe to decrement it again
peers.erase(p) receives a copy of p, so it will not operate on the wrong value due to a sequencing issue
p = peers.erase(p) returns a reference to p, so the decrement operates on the correct object
But I have niggling doubts. I am worried I am invoking the bad sequencing rule by using --(p) in the same expression where p is used as a parameter despite the fact that it looks on paper to be okay.
Can anyone see a problem with my assessment here? Or is this well defined?
It depends on the condition that you are using to detect those elements to delete. It will fail if you try to delete the first element as erase will return the new begin() and you are then decrementing it. This is illegal, even if you immediately increment it again.
To avoid this error and because it is more common and readable, I'd stick with the first version.
The second version is, as stated by @DanielFrey, wrong. But if you don't like the first version, why not just do something like this:
std::list<int> myList = { 0, 1, 2, 3, 4, 5, 6 };
myList.remove_if(
[](int value) -> bool {
return value > 1 && value < 4;
}
);
for(int value : myList) {
std::cout << value << " ";
}
This gives output:
0 1 4 5 6
Live example.
Why does this...
int a[5];
a[-2] = 1;
a[-1] = 2;
a[0] = 3;
a[1] = 4;
a[2] = 5;
cout << a[-2] << endl <<endl;
for(int i=-2 ; i<=2 ; i++)
{
cout << a[i] << endl;
}
...output this?
1
-2
2
3
4
5
I created another project file in Code::Blocks, compiled, and got this:
1
1
-1
3
4
5
I tried to find posts with similar problems, but I couldn’t find any. This just doesn’t make sense to me.
Accessing an array in C++ using negative indices is undefined behavior; the valid indices for
int a[5];
are 0 to 4.
If we look at the draft C++ standard, section 8.3.4 Arrays, paragraph 1 says:
[...] If the value of the constant expression is N, the array has N elements numbered 0 to N-1, [...]
Your code exhibits undefined behavior: -2 is not a valid index into int[5] array. Valid indexes into such array are 0 through 4.
In this particular case, it just so happens, by accident, that i is located in memory at exactly the offset 2 * sizeof(int) below the first element of a, so a[-2] happens to be an alias for i.
Possible Duplicate:
C and C++ : Partial initialization of automatic structure
While reading Code Complete, I came across a C++ array initialization example:
float studentGrades[ MAX_STUDENTS ] = { 0.0 };
I did not know C++ could initialize the entire array, so I've tested it:
#include <iostream>
using namespace std;
int main() {
const int MAX_STUDENTS=4;
float studentGrades[ MAX_STUDENTS ] = { 0.0 };
for (int i=0; i<MAX_STUDENTS; i++) {
cout << i << " " << studentGrades[i] << '\n';
}
return 0;
}
The program gave the expected results:
0 0
1 0
2 0
3 0
But changing the initialization value from 0.0 to, say, 9.9:
float studentGrades[ MAX_STUDENTS ] = { 9.9 };
Gave the interesting result:
0 9.9
1 0
2 0
3 0
Does the initialization declaration set only the first element in the array?
You only initialize the first N positions to the values in braces and all others are initialized to 0. In this case, N is the number of arguments you passed to the initialization list, i.e.,
float arr1[10] = { }; // all elements are 0
float arr2[10] = { 0 }; // all elements are 0
float arr3[10] = { 1 }; // first element is 1, all others are 0
float arr4[10] = { 1, 2 }; // first element is 1, second is 2, all others are 0
No: it sets the elements you list explicitly, and all remaining elements are value-initialized, which means zero for numeric types.