Casting a double array to a struct of doubles - C++

Is it OK to cast a double array to a struct made of doubles?
#include <iostream>

struct A
{
double x;
double y;
double z;
};
int main (int argc , char ** argv)
{
double arr[3] = {1.0,2.0,3.0};
A* a = static_cast<A*>(static_cast<void*>(arr));
std::cout << a->x << " " << a->y << " " << a->z << "\n";
}
This prints 1 2 3. But is it guaranteed to work every time with any compiler?
EDIT: According to
9.2/21: A pointer to a standard-layout struct object, suitably converted using a reinterpret_cast, points to its initial member (...) and vice versa.
if I replace my code with
#include <iostream>

struct A
{
double & x() { return data[0]; }
double & y() { return data[1]; }
double & z() { return data[2]; }
private:
double data[3];
};
int main (int, char **)
{
double arr[3] = {1.0,2.0,3.0};
A* a = reinterpret_cast<A*>(arr);
std::cout << a->x() << " " << a->y() << " " << a->z() << "\n";
}
then it is guaranteed to work. Correct? I understand that many people would not find this aesthetically pleasing, but there are advantages in working with a struct and not having to copy the input array data. I can define member functions in that struct to compute scalar and vector products, distances, etc., which will make my code much easier to understand than if I work with arrays.
How about
int main (int, char **)
{
double arr[6] = {1.0,2.0,3.0,4.0,5.0,6.0};
A* a = reinterpret_cast<A*>(arr);
std::cout << a[0].x() << " " << a[0].y() << " " << a[0].z() << "\n";
std::cout << a[1].x() << " " << a[1].y() << " " << a[1].z() << "\n";
}
Is this also guaranteed to work, or could the compiler put something AFTER the data members so that sizeof(A) > 3*sizeof(double)? And is there any portable way to prevent the compiler from doing so?

No, it's not guaranteed.
The only thing prohibiting any compiler from inserting padding between x and y, or between y and z is common sense. There is no rule in any language standard that would disallow it.
Even if there is no padding, even if the representation of A is exactly the same as that of double[3], it's still not valid. The language doesn't allow you to pretend one type is really another type. You're not even allowed to treat an instance of struct A { int i; }; as if it were a struct B { int i; };.
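If the goal is just to view the three values through a struct, a well-defined alternative is to copy the bytes rather than reinterpret them. A minimal sketch, assuming the layout check below passes on your compiler:
#include <cstring>
#include <iostream>

struct A { double x, y, z; };

// Compile-time sanity check: fail loudly if the compiler inserted padding.
static_assert(sizeof(A) == 3 * sizeof(double),
              "A is not layout-compatible with double[3] on this compiler");

int main()
{
    double arr[3] = {1.0, 2.0, 3.0};
    A a;                               // a real A object, no aliasing tricks
    std::memcpy(&a, arr, sizeof a);    // copy the object representation
    std::cout << a.x << " " << a.y << " " << a.z << "\n";
}
Copying with std::memcpy between trivially copyable types is portable, at the cost of an actual copy of the data.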

The standard gives little guarantees about memory layout of objects.
For classes/structs:
9.2/15: Nonstatic data members of a class with the same access control are allocated so that later members have higher addresses within a class object. The order of allocation of non-static data members with different access control is unspecified. Implementation alignment requirements might cause two adjacent members not to be allocated immediately after each other; so might requirements for space for managing virtual functions and virtual base classes.
For arrays, the elements are contiguous. Nothing is said about alignment, so an array may or may not follow the same alignment rules as a struct:
8.3.4: An object of array type contains a contiguously allocated non-empty set of N subobjects of type T.
The only thing you can be sure of in your specific example is that a.x corresponds to arr[0], if using a reinterpret_cast:
9.2/21: A pointer to a standard-layout struct object, suitably converted using a reinterpret_cast, points to its initial member (...) and vice versa.
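As a minimal sketch of what that clause does permit (using the three-member standard-layout struct A from the question), converting between a pointer to the struct and a pointer to its first member works in both directions:
#include <iostream>

struct A { double x, y, z; };   // standard-layout

int main()
{
    A a{1.0, 2.0, 3.0};

    // Struct pointer to pointer-to-first-member: this direction is guaranteed.
    double* first = reinterpret_cast<double*>(&a);
    std::cout << *first << "\n";      // prints a.x, i.e. 1

    // And vice versa: first-member pointer back to struct pointer.
    A* back = reinterpret_cast<A*>(&a.x);
    std::cout << back->z << "\n";     // prints 3
}
Note that this only relates the struct to its initial member; it says nothing about the remaining members lining up with arr[1] and arr[2].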

No, it is not guaranteed, even though it should work with all compilers I know of on common architectures, because the C language specification says:
6.2.6 Representations of types
6.2.6.1 General, paragraph 1: The representations of all types are unspecified except as stated in this subclause.
And it says nothing about the default padding in a struct.
Of course, common architectures use at most 64-bit alignment, which is the size of a double on those architectures, so there should be no padding and your conversion should work.
But beware: you are explicitly invoking undefined behaviour, and the next generation of compilers could do anything when compiling such a cast.

From all I know, the answer is: yes.
The only thing that could throw you off is a #pragma directive with some very unusual alignment setting for the struct. If, for example, a double takes 8 bytes on your machine and the #pragma directive tells the compiler to align every member on 16-byte boundaries, that could cause problems. Other than that you are fine.
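One hedged safeguard, if you want such a #pragma or alignment surprise to break the build instead of silently misbehaving, is a set of compile-time checks along these lines (a sketch; it does not make the cast itself well-defined):
#include <type_traits>

struct A { double x, y, z; };

// These checks do not legalize the reinterpret_cast, but they turn the
// padding/alignment surprises discussed above into compile-time errors.
static_assert(std::is_standard_layout<A>::value, "A must be standard-layout");
static_assert(sizeof(A) == 3 * sizeof(double), "unexpected padding in A");
static_assert(alignof(A) == alignof(double), "unexpected alignment of A");

int main() {}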

The std::complex implementation in MSVC uses the array solution, and LLVM's libc++ uses the former form with separate members.
I would just check how your standard library implements std::complex and use the same approach.

I disagree with the consensus here. A struct with three doubles in it is exactly the same as an array with three doubles in it, unless you specifically pack the struct differently or are on a weird processor that uses an odd number of bytes for doubles.
It's not built into the language, but I would feel safe doing it. Style-wise I wouldn't do it, because it's just confusing.

Related

Why does inheritance force inflation of otherwise zero-sized structs?

Why does a zero-sized array force a struct to be zero-sized when an otherwise empty struct has a size of 1, and why does inheriting from a non-zero-sized struct cause the struct to be inflated to the size of the base type?
Compiling via GCC 5.3.0, in case any answer depends on the C++ spec.
#include <iostream>
struct a {};
struct b { int x[0]; };
struct c : a{ int x[0]; };
struct d : b{ int x[0]; };
int main()
{
std::cout << sizeof(a) << std::endl; // 1
std::cout << sizeof(b) << std::endl; // 0
std::cout << sizeof(c) << std::endl; // 4
std::cout << sizeof(d) << std::endl; // 0
}
The ISO C++ standard does not recognize the possibility of a "zero-sized array". That should give you a compile error on a strictly-conforming implementation. While yes, there are a lot of permissive implementations that allow it, their behavior is not governed by the C++ standard.
Similarly, the ISO C++ standard does not permit any type T for which sizeof(T) is zero. Ever. Again, some implementation may do this, but it is not conforming with the standard.
So why it happens depends on the compiler and the expectations of the writer of that code. Zero-sized arrays are generally a C-ism (though C forbids them too; the idiom may predate C89), so while they might be meaningful in C code, how they interact with C++ features is really up to the compiler.
The reason c has a size while d does not could be that a is a normal C++ struct that follows the C++ standard and b isn't. So c::a (the base-class subobject of c) still has to follow the normal rules of C++ to some degree, while d::b is completely up to whatever the compiler wants to make of it.

C++ alignment and arrays

I have some type T that I explicitly specify as x-aligned
x > sizeof(T)
x > any implementation fundamental alignment
(e.g., x is page-aligned or cache-line-aligned)
Suppose I now have: T arr[y], where arr is x-aligned (either by being allocated on the stack, or in data, or by an x-aligned heap allocation)
Then at least some of arr[1],...,arr[y-1] are not x-aligned.
Correct? (In fact, it must be correct if sizeof(T) does not change with an extended alignment specification.)
Note1: This is not the same question as How is an array aligned in C++ compared to a type contained?. That question asks about the alignment of the array itself, not of the individual elements inside it.
Note2: This question: Does alignas affect the value of sizeof? is essentially what I'm asking, but for extended alignment.
Note3: https://stackoverflow.com/a/4638295/7226419 is an authoritative answer to the question (that sizeof(T) includes any padding necessary to satisfy the alignment requirement, so that all T's in an array of T's are properly aligned).
If type T is x-aligned, every object of type T is x-aligned, including any array elements. In particular, this means x > sizeof(T) cannot possibly hold.
A quick test with a couple of modern compilers confirms:
#include <iostream>
struct alignas(16) overaligned {};
struct unaligned {};
template <class T> void sizes()
{
T m, marr[2];
std::cout << sizeof(m) << " " << sizeof(marr) << std::endl;
}
int main ()
{
sizes<unaligned>();
sizes<overaligned>();
}
Output:
1 2
16 32

Why is the size of my class zero? How can I ensure that different objects have different address?

I created a class but its size is zero. Now, how can I be sure that all objects have different addresses? (As we know, empty classes have a non-zero size.)
#include<cstdio>
#include<iostream>
using namespace std;
class Test
{
int arr[0]; // Why is the size zero?
};
int main()
{
Test a,b;
cout <<"size of class"<<sizeof(a)<<endl;
if (&a == &b)// now how we ensure about address of objects ?
cout << "impossible " << endl;
else
cout << "Fine " << endl;//Why isn't the address the same?
return 0;
}
Your class definition is illegal. C++ does not allow array declarations with size 0 in any context. But even if you make your class definition completely empty, the sizeof is still required to evaluate to a non-zero value.
9/4 Complete objects and member subobjects of class type shall have
nonzero size.
In other words, if your compiler accepts the class definition and evaluates the above sizeof to zero, that compiler is going outside the scope of the standard C++ language. It must be a compiler extension that has no relation to standard C++.
So, the only answer to the "why" question in this case is: because that's the way it is implemented in your compiler.
I don't see what it all has to do with ensuring that different objects have different addresses. The compiler can easily enforce this regardless of whether object size is zero or not.
The standard says that having an array of zero size causes undefined behavior. When you trigger undefined behavior, other guarantees that the standard provides, such as requiring that objects be located at a different address, may not hold.
Don't create arrays of zero size, and you shouldn't have this problem.
This is largely a repetition of what the other answers have already said, but with a few more references to the ISO C++ standard and some musings about the odd behavior of g++.
The ISO C++11 standard, in section 8.3.4 [dcl.array], paragraph 1, says:
If the constant-expression (5.19) is present, it shall be an integral constant expression and its value shall be greater than zero.
Your class definition:
class Test
{
int arr[0];
};
violates this rule. Section 1.4 [intro.compliance] applies here:
If a program contains a violation of any diagnosable rule [...], a conforming implementation shall issue at least one diagnostic message.
As I understand it, if a compiler issues this diagnostic and then accepts the program, the program's behavior is undefined. So that's all the standard has to say about your program.
Now it becomes a question about your compiler rather than about the language.
I'm using g++ version 4.7.2, which does permit zero-sized arrays as an extension, but prints the required diagnostic (a warning) if you invoke it with, for example, -std=c++11 -pedantic:
warning: ISO C++ forbids zero-size array ‘arr’ [-pedantic]
(Apparently you're also using g++.)
Experiment shows that g++'s treatment of zero-sized arrays is a bit odd. Here's an example, based on the one in your program:
#include <iostream>
class Empty {
/* This is valid C++ */
};
class Almost_Empty {
int arr[0];
};
int main() {
Almost_Empty arr[2];
Almost_Empty x, y;
std::cout << "sizeof (Empty) = " << sizeof (Empty) << "\n";
std::cout << "sizeof (Almost_Empty) = " << sizeof (Almost_Empty) << "\n";
std::cout << "sizeof arr[0] = " << sizeof arr[0] << '\n';
std::cout << "sizeof arr = " << sizeof arr << '\n';
if (&x == &y) {
std::cout << "&x == &y\n";
}
else {
std::cout << "&x != &y\n";
}
if (&arr[0] == &arr[1]) {
std::cout << "&arr[0] == &arr[1]\n";
}
else {
std::cout << "&arr[0] != &arr[1]\n";
}
}
I get the required warning on int arr[0];, and then the following run-time output:
sizeof (Empty) = 1
sizeof (Almost_Empty) = 0
sizeof arr[0] = 0
sizeof arr = 0
&x != &y
&arr[0] == &arr[1]
C++ requires a class, even one with no members, to have a size of at least 1 byte. g++ follows this requirement for class Empty, which has no members. But adding a zero-sized array to a class actually causes the class itself to have a size of 0.
If you declare two objects of type Almost_Empty, they have distinct addresses, which is sensible; the compiler can allocate distinct objects any way it likes.
But for elements in an array, a compiler has less flexibility: an array of N elements must have a size of N times the size of one element.
In this case, since class Almost_Empty has a size of 0, it follows that an array of Almost_Empty elements has a size of 0 and that all elements of such an array have the same address.
This does not indicate that g++ fails to conform to the C++ standard. It's done its job by printing a diagnostic (even though it's a non-fatal warning); after that, as far as the standard is concerned, it's free to do whatever it likes.
But I would probably argue that it's a bug in g++. Just in terms of common sense, adding an empty array to a class should not make the class smaller.
But there is a rationale for it. As DyP points out in a comment, the gcc manual (which covers g++) mentions this feature as a C extension which is also available for C++. They are intended primarily to be used as the last member of a structure that's really a header for a variable-length object. This is known as the struct hack. It's replaced in C99 by flexible array members, and in C++ by container classes.
My advice: Avoid all this confusion by not defining zero-length arrays. If you really need sequences of elements that can be empty, use one of the C++ standard container classes such as std::vector or std::array.
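As a rough sketch of that advice (the Header and Triple names here are just for illustration, not from the question):
#include <array>
#include <vector>

// Instead of the zero-length-array "struct hack", keep a standard container:
struct Header {
    int id;
    std::vector<int> payload;   // may be empty and can grow as needed
};

// Or, when the element count is fixed at compile time:
struct Triple {
    std::array<double, 3> data;
};

int main()
{
    Header h{42, {}};           // an empty payload is perfectly fine
    h.payload.push_back(7);
    Triple t{{1.0, 2.0, 3.0}};
    return (h.payload.size() == 1 && t.data.size() == 3) ? 0 : 1;
}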
There is a difference between variable declaration and variable initialization. In your case, you just declare the variables A and B. Once you have declared a variable, you need to initialize it using either new or malloc.
The initialization will then allocate memory for the variables you just declared. You can initialize a variable to an arbitrary size or block of memory.
A and B are both variables, meaning you have created two variables, A and B. The compiler will identify these as unique variables; it will then allocate A to a memory address, say 2000, and allocate B to another memory address, say 150.
If you want A to point to B or B to point to A, you can make a reference to A or B, such as A = &B. Now A holds a memory reference, or address, to B; in other words, A points to B. This is related to passing variables: in C++ you can pass variables either by reference or by value.

const values run-time evaluation

The output of the following code:
const int i= 1;
(int&)i= 2; // or: const_cast< int&>(i)= 2;
cout << i << endl;
is 1 (at least under VS2012)
My question:
Is this behavior defined?
Would the compiler always use the defined value for constants?
Is it possible to construct an example where the compiler would use the value of the latest assignment?
It is totally undefined. You just cannot change the value of constants.
It so happens that the compiler transforms your code into something like
cout << 1 << endl;
but the program could just as well crash, or do something else.
If you set the warnings level high enough, the compiler will surely tell you that it is not going to work.
Is this behavior defined?
The behavior of this code is not defined by the C++ standard, because it attempts to modify a const object.
Would the compiler always use the defined value for constants?
What value the compiler uses in cases like this depends on the implementation. The C++ standard does not impose a requirement.
Is it possible to construct an example where the compiler would use the value of the latest assignment?
There might be cases where the compiler does modify the value and use it, but they would not be reliable.
As said by others, the behaviour is undefined.
For the sake of completeness, here is the quote from the Standard:
(§7.1.6.1/4) Except that any class member declared mutable (7.1.1) can be modified, any attempt to modify a const object during its lifetime (3.8) results in undefined behavior. [ Example:
[...]
const int* ciq = new const int (3); // initialized as required
int* iq = const_cast<int*>(ciq); // cast required
*iq = 4; // undefined: modifies a const object
]
Note that the word object in this paragraph refers to all kinds of objects, including simple integers, as shown in the example, not only class objects.
Although the example refers to a pointer to an object with dynamic storage, the text of the paragraph makes it clear that this applies to references to objects with automatic storage as well.
The answer is that the behavior is undefined.
I managed to set up this conclusive example:
#include <iostream>
using namespace std;
int main(){
const int i = 1;
int *p=const_cast<int *>(&i);
*p = 2;
cout << i << endl;
cout << *p << endl;
cout << &i << endl;
cout << p << endl;
return 0;
}
which, under gcc 4.7.2 gives:
1
2
0x7fffa9b7ddf4
0x7fffa9b7ddf4
So it looks as if the same memory address is holding two different values at once.
The most probable explanation is that the compiler simply replaces constant values with their literal values.
You are doing a const_cast using the C-style cast operator.
Using const_cast to modify a const object does not guarantee any particular behaviour;
if you do it, it might work or it might not.
(It's not good practice to use C-style casts in C++, you know.)
Yes you can, but only if you initialize the const as a read-only value rather than as a compile-time constant, as follows:
int y=1;
const int i= y;
(int&)i= 2;
cout << i << endl; // prints 2
The C++ const keyword can be misleading; it marks either a true constant or merely a read-only value.

Casting between unrelated congruent classes

Suppose I have two classes with identical members from two different libraries:
namespace A {
struct Point3D {
float x,y,z;
};
}
namespace B {
struct Point3D {
float x,y,z;
};
}
When I tried cross-casting, it worked:
A::Point3D pa = {3,4,5};
B::Point3D* pb = (B::Point3D*)&pa;
cout << pb->x << " " << pb->y << " " << pb->z << endl;
Under which circumstances is this guaranteed to work? Always? Please note that it would be highly undesirable to edit an external library to add an alignment pragma or something like that. I'm using g++ 4.3.2 on Ubuntu 8.10.
If the structs you are using are just data and no inheritance is used, I think it should always work.
As long as they are POD, it should be OK.
http://en.wikipedia.org/wiki/Plain_old_data_structures
According to the standard (1.8/5):
"Unless it is a bit-field (9.6), a most derived object shall have a non-zero size and shall occupy one or more bytes of storage. Base class subobjects may have zero size. An object of POD type (3.9) shall occupy contiguous bytes of storage."
If they occupy contiguous bytes of storage and they are the same struct under a different name, a cast should succeed.
If two POD structs start with the same sequence of members, the standard guarantees that you'll be able to access them freely through a union. You can store an A::Point3D in a union and then read from the B::Point3D member, as long as you're only touching the members that are part of the initial common sequence (so if one struct contained int, int, int, float and the other contained int, int, int, int, you'd only be allowed to access the first three ints).
So that seems like one guaranteed way in which your code should work.
It also means the cast should work, but I'm not sure if this is stated explicitly in the standard.
Of course all this assumes that both structs are compiled with the same compiler to ensure identical ABI.
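A minimal sketch of that union-based access, assuming both structs really are standard-layout with the same member sequence:
#include <iostream>

namespace A { struct Point3D { float x, y, z; }; }
namespace B { struct Point3D { float x, y, z; }; }

// Both structs share a common initial sequence, so reading one member of the
// union through the other is permitted for those common members.
union PointView {
    A::Point3D a;
    B::Point3D b;
};

int main()
{
    PointView v{ {3.0f, 4.0f, 5.0f} };   // initializes the A::Point3D member
    std::cout << v.b.x << " " << v.b.y << " " << v.b.z << "\n";
}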
This line should be:
B::Point3D* pb = (B::Point3D*)&pa;
Note the &. I think what you are doing is a reinterpret_cast between two pointers. In fact you can reinterpret_cast any pointer type to another one, regardless of the types the two pointers point to. But this is unsafe and not portable.
For example,
int x = 5;
double* y = reinterpret_cast<double*>(&x);
You are just going with the C style, so the second line is actually equivalent to:
double* z = (double*)&x;
I just hate C-style casts because you can't tell the purpose of the cast at a glance :)
Under which circumstances is this guaranteed to work?
This is not real casting between types. For example,
int i = 5;
float* f = reinterpret_cast<float*>(&i);
Now f points to the same location where i is stored, so no conversion is done. When you dereference f, you will get a float with the same binary representation as the integer i. Both happen to be four bytes on my machine.
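If what you actually want is a float holding the same bits as the int, a sketch of a well-defined way is to copy the bytes instead of aliasing them (the resulting value still depends on the float representation of your platform):
#include <cstdint>
#include <cstring>
#include <iostream>

int main()
{
    std::int32_t i = 5;
    float f;
    static_assert(sizeof f == sizeof i, "need matching sizes to reuse the bits");
    std::memcpy(&f, &i, sizeof f);   // well-defined, unlike aliasing casts
    std::cout << f << "\n";          // a tiny denormal value on IEEE-754
}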
The following is pretty safe:
namespace A {
struct Point3D {
float x,y,z;
};
}
namespace B {
typedef A::Point3D Point3D;
}
int main() {
A::Point3D a;
B::Point3D* b = &a;
return 0;
}
I know of cases where it would not work:
the two structs have different alignment;
they are compiled with different RTTI options;
and maybe something else...
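If you decide to do the cast anyway, a possible precaution (a sketch, not a guarantee of correctness) is to assert the assumptions it relies on, so a mismatch in size, alignment, or layout fails at compile time:
#include <type_traits>

namespace A { struct Point3D { float x, y, z; }; }
namespace B { struct Point3D { float x, y, z; }; }

// These assertions do not make the cast well-defined, but they make the
// alignment and layout assumptions fail loudly if they are ever violated.
static_assert(std::is_standard_layout<A::Point3D>::value &&
              std::is_standard_layout<B::Point3D>::value,
              "both structs must be standard-layout");
static_assert(sizeof(A::Point3D) == sizeof(B::Point3D),
              "the two structs have different sizes");
static_assert(alignof(A::Point3D) == alignof(B::Point3D),
              "the two structs have different alignment");

int main() {}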