What's a need for union { } in C++? [duplicate] - c++

This question already has answers here:
C/C++: When would anyone use a union? Is it basically a remnant from the C only days?
Closed 11 years ago.
Hi all.
What are the reasons for unions to exist in C++? And what are they, actually?
I found this code:
#include <cstdlib>   // system()
#include <iostream>
using namespace std;

union mixture {
    short number;
    char symbol[2];
};

int main() {
    mixture m1, m2;
    cout << "Enter 2 symbols to form a short number made of them: ";
    cin >> m1.symbol[0] >> m1.symbol[1];
    cout << "2 more to go..: ";
    cin >> m2.symbol[0] >> m2.symbol[1];
    cout.setf(ios::hex, ios::basefield);
    cout << "Biggest received number: " << (m1.number > m2.number ? m1.number : m2.number) << endl;
    system("pause");
    return 0;
}
But really, what do I gain from using union { struct1, struct2 } instead of writing struct2 with a converting constructor, e.g. struct2(const struct1& s) : a(s.a), b(s.b) {}, to transparently support both types?
I do some embedded work (Arduino etc.), but I've never seen a real use for unions.
Examples, please.

The union lets you treat your data as either char or short without having to cast. Casting pointers between types can trip over type-punning (strict aliasing) rules in the optimizer and generate incorrect output.
EDIT: For an example, I use unions to byte-swap doubles and floats after reading them from a file. If you read byte-swapped data as floating point, the values might get normalized or otherwise adjusted while still in the swapped state, which of course results in junk once they're swapped back:
union float_int
{
    float ff;
    int32_t ii;
};

float_int data;
read(fd, &data, 4);       // Read 4 bytes from the file descriptor.
byteswap(data.ii);        // Swap as an integer; passing it around as a float could alter the bits.
float myvalue = data.ff;  // My float is now byte-swapped and ready to go.
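For reference, byteswap above is the poster's own helper; a 32-bit version might look roughly like this (a sketch, not the original code):

#include <cstdint>

// Hypothetical helper: reverse the byte order of a 32-bit value in place.
void byteswap(int32_t& v) {
    uint32_t u = static_cast<uint32_t>(v);
    u = (u >> 24) | ((u >> 8) & 0x0000FF00u) |
        ((u << 8) & 0x00FF0000u) | (u << 24);
    v = static_cast<int32_t>(u);
}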

A union is fundamentally different from a struct in that only one of the members is guaranteed to be usable at a time -- but this is a white lie, see next. (The data may or may not overlap depending upon the compiler, target, types, and various packing rules, and this overlap can be [ab]used in some cases.)
Conceptually a union represents a "discrete value" (X has/is A or B, but X does not have/is not A and B), and it also allows the compiler (or whatever uses this model) to represent the data more efficiently in some cases.
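For instance, the usual way to make that "X is A or B" idea safe is to pair the union with a tag (a sketch; the names are mine):

#include <iostream>

// A tagged union: the 'kind' field records which member is currently valid.
struct Value {
    enum class Kind { Integer, Real } kind;
    union {
        int    i;   // valid when kind == Kind::Integer
        double d;   // valid when kind == Kind::Real
    };
};

int main() {
    Value v;
    v.kind = Value::Kind::Integer;
    v.i = 42;
    if (v.kind == Value::Kind::Integer)
        std::cout << v.i << '\n';   // only read the member that was written
}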
Happy coding.

In hardware-level code you have to be sure your bits are in consecutive memory, and this is one way to make sure of it.
Edit: you don't always HAVE to be sure, but there are plenty of cases where it matters.
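A common embedded pattern along those lines is overlaying a hardware register with named bitfields (a sketch; the register layout is invented, and bitfield ordering is implementation-defined):

#include <cstdint>

// Hypothetical status register: access it as a whole byte or bit by bit.
union StatusRegister {
    std::uint8_t raw;              // read/write the whole register at once
    struct {
        std::uint8_t ready : 1;
        std::uint8_t error : 1;
        std::uint8_t mode  : 2;
        std::uint8_t       : 4;    // reserved bits
    } bits;
};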

Firstly, they're inherited from C, and there was no good reason to remove them, so C++ retained support.
It's one of the only widely-supported ways to do type-punning (but still likely illegal by the letter of the standard).
It can save space in certain cases.

Related

How to iterate over every bit of a type in C++

I wanted to write a Digital Search Tree in C++ using templates. To do that, given a type T and data of type T, I have to iterate over the bits of this data. Doing this on integers is easy: one can just shift the number to the right by an appropriate number of positions and "&" the number with 1, as described for example here: How to get nth bit values. The problem starts when one tries to get the i'th bit from the templated data. I wrote something like this:
#include <cstdint>
#include <iostream>

template<typename T>
bool getIthBit (T data, unsigned int bit) {
    return ((*(((char*)&data)+(bit>>3)))>>(bit&7))&1;
}

int main() {
    uint32_t a = 16;
    for (int i = 0; i < 32; i++) {
        std::cout << getIthBit (a, i);
    }
    std::cout << std::endl;
}
This works, but I am not exactly sure whether it is undefined behavior. The problem is that to iterate over all bits of the data, one has to know how many of them there are, which is hard for struct types because of padding. For example here:
#include <cstdint>
#include <iostream>

struct s {
    uint32_t i;
    char c;
};

int main() {
    std::cout << sizeof (s) << std::endl;
}
The actual data has 5 bytes, but the output of the program says it has 8. I don't know how to get the actual size of the data, or if it is possible at all. A question about this was asked here: How to check the size of struct w/o padding?, but the answers are just "don't".
It's easy to know how many bits there are in a type: there are exactly CHAR_BIT * sizeof(T) of them, and sizeof(T) is the actual size of the type in bytes. But indeed, there isn't a general way within standard C++ to know which of those bits - which are part of the object representation - are padding.
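For instance (a trivial illustration of that formula):

#include <climits>
#include <cstdint>
#include <iostream>

int main() {
    std::cout << CHAR_BIT * sizeof(std::uint32_t) << '\n';  // 32
    std::cout << CHAR_BIT * sizeof(double) << '\n';         // typically 64
}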
I recommend not attempting to support types that have padding as keys of your DST.
The following trick might work for finding the padding bits of trivially copyable classes:
Use std::memset to set all bits of the object to 0.
For each sub object with no sub objects of their own, set all bits to 1 using std::memset.
For each sub object with their own sub objects, perform the previous and this step recursively.
Check which bits stayed 0.
I'm not sure if there are any technical guarantees that the padding actually stays 0, so whether this works may be unspecified. Furthermore, there can be non-class types that have padding, and the described trick won't detect those; long double is a typical example, and I don't know if there are others. This also probably won't detect unused bits of integers that underlie bitfields.
So, there are a lot of caveats, but it should work in your example case:
// (fragment: needs <bitset>, <climits>, <cstring> and the struct s from the question)
s sobj;
std::memset(&sobj, 0, sizeof sobj);
std::memset(&sobj.i, -1, sizeof sobj.i);
std::memset(&sobj.c, -1, sizeof sobj.c);
std::cout << "non-padding bits:\n";
unsigned long long ull = 0;
std::memcpy(&ull, &sobj, sizeof sobj);
std::cout << std::bitset<sizeof sobj * CHAR_BIT>(ull) << std::endl;
There's a standard way to know whether a type has a unique object representation or not: std::has_unique_object_representations, available since C++17.
So if a type has unique object representations, it is safe to assume that every bit is significant.
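As a rough illustration (the type names are mine, and the exact results are implementation-dependent):

#include <iostream>
#include <type_traits>

struct NoPad  { int a; int b; };        // usually no padding
struct HasPad { long long a; char b; }; // usually padded up to alignof(long long)

int main() {
    std::cout << std::boolalpha
              << std::has_unique_object_representations_v<NoPad>  << '\n'   // typically true
              << std::has_unique_object_representations_v<HasPad> << '\n';  // typically false
}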
There's no standard way to know whether a non-unique representation is caused by padding bytes/bits, like in struct { long long a; char b; }, or by equivalent representations¹. And there's no standard way to know the offsets of the padding bits/bytes.
Note that the "actual size" concept may be misleading, as padding can be in the middle, like in struct { char a; long long b; }.
Internally the compiler has to distinguish padding bits from value bits to implement C++20 atomic<T>::compare_exchange_*. MSVC does this by zeroing padding bits with __builtin_zero_non_value_bits. Other compilers may use another name or another approach, or may not expose atomic<T>::compare_exchange_* internals at this level.
¹ like multiple NaN floating point values

Is there any portable way to ensure a struct is defined without padding bytes using c++11? [duplicate]

This question already has answers here:
Compile-time check to make sure that there is no padding anywhere in a struct
(4 answers)
Closed 3 years ago.
Let's consider the following task:
My C++ module as part of an embedded system receives 8 bytes of data, like: uint8_t data[8].
The value of the first byte determines the layout of the rest (there are 20-30 different layouts). In order to get at the data efficiently, I would create a different struct for each layout, put all of them into a union, and read the data directly from the address of my input through a pointer, like this:
struct Interpretation_1 {
    uint8_t multiplexer;
    uint8_t timestamp;
    uint32_t position;
    uint16_t speed;
};
// and a lot of other structs like this (with bitfields, etc..., layout is not defined by me :( )
union DataInterpreter {
    Interpretation_1 movement;
    //Interpretation_2 temperatures;
    //etc...
};
...
uint8_t exampleData[8] {1u, 10u, 20u,0u,0u,0u, 5u,0u};
DataInterpreter* interpreter = reinterpret_cast<DataInterpreter*>(&exampleData);
std::cout << "position: " << +interpreter->movement.position << "\n";
The problem I have is, the compiler can insert padding bytes to the interpretation structs and this kills my idea. I know I can use
with gcc: struct MyStruct{} __attribute__((__packed__));
with MSVC: I can use #pragma pack(push, 1) struct MyStruct{}; #pragma pack(pop)
with clang: ? (I could check it)
But is there any portable way to achieve this? I know C++11 has e.g. alignas for alignment control, but can I use it for this? I have to use C++11, but I would also be interested whether there is a better solution in a later version of C++.
But is there any portable way to achieve this?
No, there is no (standard) way to "make" a type that would have padding not have padding in C++. All objects are aligned at least as much as their type requires, and if that alignment doesn't line up with the preceding sub-objects, then there will be padding, and that is unavoidable.
Furthermore, there is another problem: you're accessing through a reinterpreted pointer that doesn't point to an object of compatible type. The behaviour of the program is undefined.
We can conclude that classes are not generally useful for representing arbitrary binary data. Packed structures are non-standard, and they also aren't portable across systems with different representations for integers (byte endianness).
There is a way to check whether a type contains padding: compare the sum of the sizes of the sub-objects to the size of the complete object, and do this recursively for each member. If the sizes don't match, then there is padding. This is quite tricky, however, because C++ has minimal reflection capabilities, so you need to resort to either hard-coding or metaprogramming.
Given such a check, you can make the compilation fail on systems where the assumption doesn't hold.
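A minimal hand-rolled form of such a check, with the member list hard-coded (a sketch; for this particular layout the assertion would typically fire, because position gets aligned to 4 bytes):

#include <cstdint>

struct Interpretation_1 {
    uint8_t  multiplexer;
    uint8_t  timestamp;
    uint32_t position;
    uint16_t speed;
};

// If the members' sizes don't add up to sizeof, the compiler inserted padding.
static_assert(sizeof(Interpretation_1) ==
                  2 * sizeof(uint8_t) + sizeof(uint32_t) + sizeof(uint16_t),
              "Interpretation_1 contains padding on this platform");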
Another handy tool is std::has_unique_object_representations (since C++17) which will always be false for all types that have padding. But note that it will also be false for types that contain floats for example. Only types that return true can be meaningfully compared for equality with std::memcmp.
Reading from unaligned memory is undefined behaviour in C++. In other words, the compiler is allowed to assume that every uint32_t is located on an alignof(uint32_t)-byte boundary and every uint16_t is located on an alignof(uint16_t)-byte boundary. This means that even if you somehow manage to pack your bytes portably, doing interpreter->movement.position will still trigger undefined behaviour.
(In practice, on most architectures, unaligned memory access will still work, albeit with a performance penalty.)
You could, however, write a wrapper, like how std::vector<bool>::operator[] works:
#include <cstdint>
#include <cstring>
#include <iostream>
#include <type_traits>
template <typename T>
struct unaligned_wrapper {
    static_assert(std::is_trivial<T>::value, "unaligned_wrapper requires a trivial type");
    std::aligned_storage_t<sizeof(T), 1> buf;
    operator T() const noexcept {
        T ret;
        std::memcpy(&ret, &buf, sizeof(T));
        return ret;
    }
    unaligned_wrapper& operator=(T t) noexcept {
        std::memcpy(&buf, &t, sizeof(T));
        return *this;
    }
};

struct Interpretation_1 {
    unaligned_wrapper<uint8_t> multiplexer;
    unaligned_wrapper<uint8_t> timestamp;
    unaligned_wrapper<uint32_t> position;
    unaligned_wrapper<uint16_t> speed;
};
// and a lot of other structs like this (with bitfields, etc..., layout is not defined by me :( )
union DataInterpreter {
    Interpretation_1 movement;
    //Interpretation_2 temperatures;
    //etc...
};

int main(){
    uint8_t exampleData[8] {1u, 10u, 20u,0u,0u,0u, 5u,0u};
    DataInterpreter* interpreter = reinterpret_cast<DataInterpreter*>(&exampleData);
    std::cout << "position: " << interpreter->movement.position << "\n";
}
This would ensure that every read or write to the unaligned integer is transformed to a bytewise memcpy, which does not have any alignment requirement. There might be a performance penalty for this on architectures with the ability to access unaligned memory quickly, but it would work on any conforming compiler.

Type casting struct to integer and vice versa in C++

So, I've seen this thread Type casting struct to integer c++ about how to cast between integers and structs (bitfields), and undoubtedly, writing a proper conversion function or overloading the relevant casting operators is the way to go for any case where there is an operating system involved.
However, when writing firmware for a small embedded system where only one flash image is run, the case might be different insofar as security isn't so much of a concern while performance is.
Since I can test whether the code works properly (meaning the bits of a bitfield are arranged the way I would expect them to be) each time I compile my code, the answer might be different here.
So my question is whether there is a 'proper' way to convert between a bitfield and an unsigned int that compiles to no operations in g++ (maybe the shifts will get optimised away when the compiler knows the bits are arranged correctly in memory).
This is an excerpt from the original question:
struct {
    int part1 : 10;
    int part2 : 6;
    int part3 : 16;
} word;
I can then set part2 to be equal to whatever value is requested, and set the other parts to 0.
word.part1 = 0;
word.part2 = 9;
word.part3 = 0;
I now want to take that struct and convert it into a single 32-bit integer. I do have it compiling by forcing the cast, but it does not seem like a very elegant or secure way of converting the data.
int x = *reinterpret_cast<int*>(&word);
EDIT:
Now, quite some time later, I have learned some things:
1) Type punning (changing the interpretation of data) by means of pointer casting has been undefined behaviour since C99 and C++98. These language revisions introduced strict aliasing rules (they allow the compiler to assume that data is only accessed through pointers of compatible type) to allow for better optimisations. In effect, the compiler need not keep the ordering between accesses (or do the off-type access at all). For most cases this does not seem to present an [immediate] problem, but when using higher optimisation settings (for gcc that is -O2 and above, which enable -fstrict-aliasing) this will become a problem.
For examples see https://blog.regehr.org/archives/959
2) Using unions for type punning also seems to involve undefined behaviour in C++ but not in C (see https://stackoverflow.com/a/25672839/4360539); however, GCC (and probably others) explicitly allows it (see https://gcc.gnu.org/bugs/#nonbugs).
3) The only really reliable way of doing type punning in C++ seems to be using memcpy to copy the data to a new location, perform whatever is to be done, and then use another memcpy to copy the changes back. I did read somewhere on SO that GCC (and probably most compilers) should be able to optimise the memcpy to a mere register copy for register-sized data types, but I cannot find it again.
So probably the best thing to do here is to use the union, if you can be sure the code is compiled by a compiler that supports type punning through a union. For the other cases, further investigation is needed into how the compiler treats bigger data structures and memcpy; if that really involves copying back and forth, sticking with bitwise operations is probably the best idea.
union {
    struct {
        int part1: 10;
        int part2: 6;
        int part3: 16;
    } parts;
    int whole;
} word;
Then just use word.whole.
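For the memcpy route from point 3 above, a minimal sketch might look like this (the names are mine; the exact placement of the bitfields within the integer is still implementation-defined):

#include <cstdint>
#include <cstring>

struct Parts {
    int part1 : 10;
    int part2 : 6;
    int part3 : 16;
};

// Copy the object representation into an integer; compilers typically
// reduce this to a single register move for register-sized types.
std::uint32_t to_u32(const Parts& p) {
    static_assert(sizeof(Parts) == sizeof(std::uint32_t), "unexpected bitfield size");
    std::uint32_t out;
    std::memcpy(&out, &p, sizeof out);
    return out;
}

Parts from_u32(std::uint32_t v) {
    Parts p;
    std::memcpy(&p, &v, sizeof p);
    return p;
}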
I had the same problem. I am guessing this is not very relevant today, but this is how I solved it:
#include <iostream>

struct PACKED{
    int x:10;
    int y:10;
    int z:12;
    PACKED operator=(int num)
    {
        *( int* )this = num;
        return *this;
    }
    operator int()
    {
        int *x;
        x = (int*)this;
        return *x;
    }
} __attribute__((packed));

int main(void) {
    std::cout << "size: " << sizeof(PACKED) << std::endl;
    PACKED bf;
    bf = 0xFFF00000;
    std::cout << "Values ( x, y, z ) = " << bf.x << " " << bf.y << " " << bf.z << std::endl;
    int testint;
    testint = bf;
    std::cout << "As integer: " << testint << std::endl;
    return 0;
}
This now fits in an int and is assignable from plain ints. However, I do not know how portable this solution is. The output is then:
size: 4
Values ( x, y, z ) = 0 0 -1
As integer: -1048576

Scenario where it is better to use union instead of struct

Can someone give me a scenario where it is wise to use a union instead of a struct?
Thanks
It is wise to use a union whenever you have a data bottleneck and two pieces of data that are mutually exclusive but carried in the same data structure.
Let's say I have two messages that have identical data, except that two pieces of data are mutually exclusive between them and are close in size (a 32-bit int and a 4-byte array). I can make a union of the two, and the messages can share a data structure without an increase in size for data they won't use.
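A rough sketch of that kind of layout (the names and field choices are mine):

#include <cstdint>

// Two message variants share everything except one field, so the exclusive
// part goes into a union and the shared header stays a plain struct.
struct Message {
    std::uint8_t  type;        // discriminator: which union member is valid
    std::uint32_t timestamp;   // common data for both variants
    union {
        std::uint32_t sensor_id;  // valid when type == 0
        char          tag[4];     // valid when type == 1
    } payload;
};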
Be aware of problems:
The data may not be mutually exclusive in the future.
Initialization of the mutually exclusive data.
Reusing the same instance of the data for both messages (you'll need to be sure you switch out the mutually exclusive data, or the receiver will deal with junk data).
Using a union to refer to the same data with different type definitions is undefined behavior. So:
Do not use a union to cheat the type system.
Do not use a union to store a pointer and access a reference.
Do not use a union to create cheaper type casting.
Also, do not use a union with data that is a pointer which can be deleted from another point in the code. You could easily end up with a dangling pointer in your union and accidentally refer to the data through the other member.
And most importantly: if you do not understand this answer, do not use a union.
Unions can be a way to get at the actual binary representation of a data structure.
#include <iostream>
#include <iomanip>

union MyUnion {
    int integer;
    unsigned char bytes[sizeof(int)];
};

int main() {
    MyUnion foo;
    foo.integer = 42;
    std::cout << "My computer represents " << foo.integer << " as:";
    for (int i = 0; i < sizeof(foo.bytes); ++i) {
        std::cout << ' ' << std::hex << std::setw(2) << std::setfill('0')
                  << static_cast<unsigned int>(foo.bytes[i]);
    }
    std::cout << std::endl;
    return 0;
}
There are other ways to accomplish this in C++, but using a union makes the intent rather transparent.
Note that the results may vary by platform (little-endian vs. big-endian) and possibly by compiler (how it packs and pads arrays and data types). Most of the time, you shouldn't need to do stuff like this.
Sometimes you have to deal with a legacy binary format with several different interpretations. ("If the first byte is a 3, then the next value is a zero-terminated ASCII string of at most 16 bytes, otherwise, the next DWORD-aligned int is an offset in the resource block ..."). If you understand all the endianness and packing issues involved, a union makes it relatively easy to tease apart such a struct.
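A hedged sketch of what teasing apart such a record could look like (the layout here is invented purely for illustration):

#include <cstdint>

// Hypothetical legacy record: the first byte selects how the body is read.
struct LegacyRecord {
    std::uint8_t kind;
    union {
        char         name[16];  // kind == 3: zero-terminated ASCII string
        std::int32_t offset;    // otherwise: offset into the resource block
    } body;
};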

Bit field vs Bitset

I want to store bits in an array-like structure. So I can follow either of the following two approaches:
Approach number 1 (AN 1)
struct BIT
{
    int data : 1;
};

int main()
{
    BIT a[100];
    return 0;
}
Approach number 2 (AN 2)
#include <bitset>

int main()
{
    std::bitset<100> BITS;
    return 0;
}
Why would someone prefer AN 2 over AN 1?
Because approach nr. 2 actually uses 100 bits of storage, plus some very minor (constant) overhead, while nr. 1 typically uses four bytes of storage per Bit structure. In general, a struct is at least one byte large per the C++ standard.
#include <bitset>
#include <iostream>

struct Bit { int data : 1; };

int main()
{
    Bit a[100];
    std::bitset<100> b;
    std::cout << sizeof(a) << "\n";
    std::cout << sizeof(b) << "\n";
}
prints
400
16
Apart from this, bitset wraps your bit array in a nice object representation with many useful operations.
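For instance, a few of those operations (a small, self-contained sketch):

#include <bitset>
#include <iostream>

int main() {
    std::bitset<100> b;
    b.set(3);
    b.set(42);
    b[7] = true;                     // indexed access goes through a proxy
    std::cout << b.count() << '\n';  // 3
    std::cout << b.any() << ' ' << b.test(42) << '\n';  // 1 1
    b.flip();                        // invert all 100 bits
    std::cout << b.count() << '\n';  // 97
}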
A good choice depends on how you're going to use the bits.
std::bitset<N> is of fixed size. Visual C++ 10.0 is non-conforming wrt. constructors; in general you have to provide a workaround. This was, ironically, due to what Microsoft thought was a bug fix -- they introduced a constructor taking an int argument, as I recall.
std::vector<bool> is optimized in much the same way as std::bitset. Cost: indexing doesn't directly provide a reference (there are no references to individual bits in C++), but instead returns a proxy object -- which isn't something you notice until you try to use it as a reference. Advantage: minimal storage, and the vector can be resized as required.
Simply using e.g. unsigned is also an option, if you're going to deal with a small number of bits (in practice, 32 or less, although the formal guarantee is just 16 bits).
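A small illustration of those last two options (a sketch, not tied to the original code):

#include <iostream>
#include <vector>

int main() {
    std::vector<bool> bits(100);
    bits[3] = true;         // operator[] returns a proxy object, not a bool&
    bool copy = bits[3];    // the proxy converts to bool on read

    unsigned flags = 0;     // plain unsigned: fine for a small, fixed number of bits
    flags |= 1u << 5;                   // set bit 5
    bool bit5 = (flags >> 5) & 1u;      // test bit 5

    std::cout << std::boolalpha << copy << ' ' << bit5 << '\n';  // true true
}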
Finally, ALL UPPERCASE identifiers are by convention (except Microsoft) reserved for macros, in order to reduce the probability of name collisions. It's therefore a good idea to not use ALL UPPERCASE identifiers for anything else than macros. And to always use ALL UPPERCASE identifiers for macros (this also makes it easier to recognize them).
Cheers & hth.,
bitset has more operations
Approach number 1 will most likely be compiled as an array of 4-byte integers, and one bit of each will be used to store your data. Theoretically a smart compiler could optimize this, but I wouldn't count on it.
Is there a reason you don't want to use std::bitset?
To quote cplusplus.com's page on bitset, "The class is very similar to a regular array, but optimizing for space allocation". If your ints are 4 bytes, a bitset uses 32 times less space.
Even doing bool bits[100], as sbi suggested, is still worse than bitset, because most implementations have >= 1-byte bools.
If, for reasons of intellectual curiosity only, you wanted to implement your own bitset, you could do so using bit masks:
typedef struct {
    unsigned char bytes[100];
} MyBitset;

bool getBit(MyBitset *bitset, int index)
{
    int whichByte = index / 8;
    return bitset->bytes[whichByte] & (1 << (index % 8));
}

bool setBit(MyBitset *bitset, int index, bool newVal)
{
    int whichByte = index / 8;
    if (newVal)
    {
        bitset->bytes[whichByte] |= (1 << (index % 8));
    }
    else
    {
        bitset->bytes[whichByte] &= ~(1 << (index % 8));
    }
    return newVal;
}
(Sorry for using a struct instead of a class by the way. I'm thinking in straight C because I'm in the middle of a low-level assignment for school. Obviously two huge benefits of using a class are operator overloading and the ability to have a variable-sized array.)