I have an explicitly sized structure as follows:
typedef struct
{
    unsigned long A : 4;
    unsigned long B : 12;
    union
    {
        unsigned long C1 : 8;
        unsigned long C2 : 8;
        unsigned long C3 : 8;
    };
    unsigned long D : 8;
} FooStruct;
In theory the total size of this struct should be 32 bits (4 bytes). However, sizeof reports 12 bytes, so some padding and alignment must be happening here.
I just don't see why or where. Can someone explain how this structure ends up taking 12 bytes in memory?
The union forces the start of a new unsigned long, and the member after the union starts yet another unsigned long. Assuming long is 4 bytes, your struct therefore contains 3 unsigned longs, for a total of 12 bytes. (A union with three equally sized members also seems odd.)
If you want this to have a size of 4 bytes, why not change it to:
typedef struct
{
    unsigned short A : 4;
    unsigned short B : 12;
    union
    {
        unsigned char C1 : 8;
        unsigned char C2 : 8;
        unsigned char C3 : 8;
    };
    unsigned char D : 8;
} FooStruct;
Additionally, if you are using gcc and want to disable structure padding, you can use __attribute__((packed)):
struct FooStruct
{
    unsigned long A : 4;
    unsigned long B : 12;
    union
    {
        unsigned long C1 : 8;
        unsigned long C2 : 8;
        unsigned long C3 : 8;
    } __attribute__((packed)) C;
    unsigned long D : 8;
} __attribute__((packed));
But beware that some architectures impose penalties on unaligned data access, or do not allow it at all.
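As a quick sanity check, you can verify the expected size at compile time. A minimal sketch, assuming a typical ABI where short is 2 bytes and char is 1 byte:

#include <cstdio>

typedef struct
{
    unsigned short A : 4;
    unsigned short B : 12;
    union
    {
        unsigned char C1 : 8;
        unsigned char C2 : 8;
        unsigned char C3 : 8;
    };
    unsigned char D : 8;
} FooStruct;

// Fails to compile if the layout differs from our assumption on this ABI.
static_assert(sizeof(FooStruct) == 4, "FooStruct is not 4 bytes on this ABI");

int main()
{
    std::printf("sizeof(FooStruct) = %zu\n", sizeof(FooStruct));
}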
#include <iostream>

typedef union dbits {
    double d;
    struct {
        unsigned int M1: 20;
        unsigned int M2: 20;
        unsigned int M3: 12;
        unsigned int E: 11;
        unsigned int s: 1;
    };
};

int main() {
    std::cout << "sizeof(dbits) = " << sizeof(dbits) << '\n';
}
Output: sizeof(dbits) = 16. But if the fields are reordered:
typedef union dbits {
    double d;
    struct {
        unsigned int M1: 12;
        unsigned int M2: 20;
        unsigned int M3: 20;
        unsigned int E: 11;
        unsigned int s: 1;
    };
};
Output: sizeof(dbits) = 8
Why does the size of the union change? Both structs contain the same total number of bits in their bit fields, so why do the sizes differ?
I would like to write it like this:
typedef union dbits {
    double d;
    struct {
        unsigned long long M: 52;
        unsigned int E: 11;
        unsigned int s: 1;
    };
};
But then sizeof(dbits) = 16, not 8. Why?
And is it even practical to use bit fields in structs to parse the bits of a double?
Members of a bit field will not cross the boundaries of the specified storage type. So
unsigned int M1: 20;
unsigned int M2: 20;
will be two unsigned ints, using 20 out of 32 bits each.
In your second case, 12 + 20 == 32 fits into a single unsigned int.
As for your last case: members with different storage types can never share a storage unit. So you get one unsigned long long and one unsigned int instead of the single unsigned long long you desired.
You should use uint64_t so you get exact bit counts; unsigned int could be anything from 16 to 128 (or more) bits.
Note: bit fields are highly implementation-defined; this is just the way it commonly works.
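If the goal is simply to pull the sign, exponent, and mantissa out of a double, a more portable alternative to bit fields is to memcpy the object representation into a uint64_t and mask the fields out yourself. A minimal sketch, assuming IEEE 754 binary64 and a 64-bit double:

#include <cstdint>
#include <cstring>
#include <cstdio>

int main()
{
    double d = -1.5;

    // Copy the object representation; this sidesteps the
    // implementation-defined layout of bit fields entirely.
    uint64_t bits;
    static_assert(sizeof d == sizeof bits, "double must be 64 bits here");
    std::memcpy(&bits, &d, sizeof bits);

    uint64_t M = bits & 0xFFFFFFFFFFFFFull;  // low 52 bits: mantissa
    unsigned E = (bits >> 52) & 0x7FF;       // next 11 bits: exponent
    unsigned s = (unsigned)(bits >> 63);     // top bit: sign

    std::printf("s=%u E=%u M=%llu\n", s, E, (unsigned long long)M);
}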
I've walked into a trap trying to pack a couple of variables into a single 8-byte variable.
Basically, I've got a few short items with small binary sizes, and I need to pack them together to send to a class, which must be able to unpack them again.
So I made the following:
typedef unsigned long long PACKAGE; // 8 bytes (shows as __int64 in debug; sizeof returns 8)
unsigned int dat1 = 25; // 1 byte long max
unsigned int dat2 = 1; // 4 bit long max
unsigned int dat3 = 100; // 2 byte long max
unsigned int dat4 = 200; // 4 byte long max
unsigned int dat5 = 2; // 4 bit long max
Then I make a variable of PACKAGE type which is empty (0)
PACKAGE pack = 0;
And I want to place the variables into that pack using bitwise operations, so I do:
pack = (dat1 << 56) | (dat2 << 52) | (dat3 << 36) | (dat4 << 4) | dat5;
It only half-works. I calculated that pack should get the decimal value 2526526262902525058, or
0010001100010000000001100100000000000000000000000000110010000010
as binary. However, instead I'm getting 588254914, or
00100011000100000000111011000010 as binary,
which is somehow correct at its head and tail, but the middle part has gone missing.
And once that is fixed, I still need to extract the data back out somehow.
I'd rather use a bit-field struct to represent such a type (and use uint64_t to be sure of the available size):
#include <cstdint>

union PACKAGE {
    struct {
        uint64_t dat1 : 8;  // 1 byte long max
        uint64_t dat2 : 4;  // 4 bit long max
        uint64_t dat3 : 16; // 2 byte long max
        uint64_t dat4 : 32; // 4 byte long max
        uint64_t dat5 : 4;  // 4 bit long max
    } bits;                 // named member, not just a nested type declaration
    uint64_t whole;         // for convenience
};
As mentioned in the comments, you could even use the uint_least64_t type to ensure your target supports it (availability of uint64_t is optional in the current C++ standard):
union PACKAGE {
    struct {
        uint_least64_t dat1 : 8;  // 1 byte long max
        uint_least64_t dat2 : 4;  // 4 bit long max
        uint_least64_t dat3 : 16; // 2 byte long max
        uint_least64_t dat4 : 32; // 4 byte long max
        uint_least64_t dat5 : 4;  // 4 bit long max
    } bits;                       // named member, as above
    uint_least64_t whole;         // for convenience
};
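Usage would then look like the sketch below. Note that writing bits and reading whole is type punning: well-defined in C, technically undefined in standard C++, though the major compilers support it. Also, the placement of bit fields inside whole is implementation-defined, so the exact numeric value varies by compiler:

#include <cstdint>
#include <cstdio>

union PACKAGE {
    struct {
        uint64_t dat1 : 8;
        uint64_t dat2 : 4;
        uint64_t dat3 : 16;
        uint64_t dat4 : 32;
        uint64_t dat5 : 4;
    } bits;
    uint64_t whole;
};

int main()
{
    PACKAGE p{};          // zero-initialize the whole 64-bit object
    p.bits.dat1 = 25;
    p.bits.dat2 = 1;
    p.bits.dat3 = 100;
    p.bits.dat4 = 200;
    p.bits.dat5 = 2;
    std::printf("whole = 0x%016llx\n", (unsigned long long)p.whole);
}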
Assuming that sizeof(unsigned int) != sizeof(unsigned long long), the left operand of each shift has the wrong type, and each shift operation is being truncated (probably to 32 bits).
Try, for example:
typedef unsigned long long PACKAGE; // 8 bytes (shows as __int64 in debug; sizeof returns 8)
unsigned long long dat1 = 25; // 1 byte long max
unsigned long long dat2 = 1; // 4 bit long max
unsigned long long dat3 = 100; // 2 byte long max
unsigned long long dat4 = 200; // 4 byte long max
unsigned long long dat5 = 2; // 4 bit long max
pack = (dat1 << 56) | (dat2 << 52) | (dat3 << 36) | (dat4 << 4) | dat5;
or:
typedef unsigned long long PACKAGE; // 8 bytes (shows as __int64 in debug; sizeof returns 8)
unsigned int dat1 = 25; // 1 byte long max
unsigned int dat2 = 1; // 4 bit long max
unsigned int dat3 = 100; // 2 byte long max
unsigned int dat4 = 200; // 4 byte long max
unsigned int dat5 = 2; // 4 bit long max
pack = ((PACKAGE)dat1 << 56) | ((PACKAGE)dat2 << 52) | ((PACKAGE)dat3 << 36) | ((PACKAGE)dat4 << 4) | (PACKAGE)dat5;
Note: okay, in actuality, each shift operation whose right-hand operand is greater than or equal to the width in bits of the (promoted) left-hand operand's type invokes undefined behavior. The typical manifestation is truncation, but any other behavior, up to and including global thermonuclear war, is allowed by the standard.
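Once the values are packed with the widened shifts above, unpacking is the mirror image: shift each field down and mask off its width. A minimal sketch using the same field positions as the question:

#include <cstdio>

typedef unsigned long long PACKAGE;

int main()
{
    PACKAGE pack = ((PACKAGE)25 << 56) | ((PACKAGE)1 << 52) |
                   ((PACKAGE)100 << 36) | ((PACKAGE)200 << 4) | (PACKAGE)2;

    // Shift each field down to bit 0, then mask off its width.
    unsigned dat1 = (unsigned)((pack >> 56) & 0xFF);      // 8 bits
    unsigned dat2 = (unsigned)((pack >> 52) & 0xF);       // 4 bits
    unsigned dat3 = (unsigned)((pack >> 36) & 0xFFFF);    // 16 bits
    unsigned dat4 = (unsigned)((pack >> 4) & 0xFFFFFFFF); // 32 bits
    unsigned dat5 = (unsigned)(pack & 0xF);               // 4 bits

    std::printf("%u %u %u %u %u\n", dat1, dat2, dat3, dat4, dat5);
}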
I have defined my struct with bit fields.
typedef struct {
    unsigned char primero;
    unsigned int bit1: 1;
    unsigned int bit2: 1;
    unsigned char segundo;
    unsigned char array[4];
    unsigned int offset: 6;
} date;
I want to send this data through a socket in this specific order of bits.
char auxsendbuf[BUF_SIZ];
memset(auxsendbuf, 0, BUF_SIZ);
date *st = (date *) auxsendbuf;
st->primero = 0x01;
st->bit1 = 1;
st->bit2 = 1;
st->segundo = 0x03;
st->array[0] = 0x04;
st->array[1] = 0x05;
st->array[2] = 0x06;
st->array[3] = 0x07;
My problem is that bit1 and bit2 are padded with 0s to fill out an extra byte that I don't want to send. This is the result...
01 03 03 04 05 06 07 00 50
How can I force the order of the bits? I can use C++ if necessary.
You need to group the fields so that the bitfields are together:
typedef struct {
    unsigned char primero;
    unsigned int bit1: 1;
    unsigned int bit2: 1;
    unsigned int offset: 6;
    unsigned char segundo;
    unsigned char array[4];
} date;
EDIT:
If you want all the bits packed in the original order without padding, you need to turn everything else in between into bit fields as well:
typedef struct {
    unsigned char primero;
    unsigned int bit1: 1;
    unsigned int bit2: 1;
    unsigned char segundo: 8;
    unsigned char array0: 8;
    unsigned char array1: 8;
    unsigned char array2: 8;
    unsigned char array3: 8;
    unsigned int offset: 6;
} date;
Note here that you can't put an array inside a bit field, hence the separate arrayN members.
Why exactly do you need the bits in this order? Any solution that depends on it will be very convoluted.
The compiler can do weird things with your bit fields. Layout is basically "best effort"; there is no standard way to influence it. In my experience, the best approach is not to use bit fields for "message mapping" at all. Declare the byte(s) you want to send and then do the bit operations (setting and getting the bits you need) yourself.
Actually, since there can be issues with structure members' size, alignment, padding, and byte ordering (for multi-byte data), to be perfectly safe don't use structures for this either. Pack and unpack the message into an array of bytes yourself, as sketched below. Only in some application-specific optimizations, and with some checks (using sizeof and offsetof tricks), should you use structures for message mapping.
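For example, a hand-rolled packer for the layout in the question might look like this. A sketch only: the function name pack_date, and the assumption that the two flag bits and offset share one wire byte, are mine, not part of any standard:

#include <cstddef>
#include <cstdint>
#include <cstring>

// Serialize the fields into buf; every bit position is spelled out
// explicitly, so the wire format no longer depends on the compiler.
size_t pack_date(uint8_t *buf)
{
    size_t i = 0;
    buf[i++] = 0x01;                                  // primero
    buf[i++] = (uint8_t)((1u << 7) | (1u << 6) | 0u); // bit1, bit2, offset(6)
    buf[i++] = 0x03;                                  // segundo
    const uint8_t arr[4] = { 0x04, 0x05, 0x06, 0x07 };
    std::memcpy(&buf[i], arr, sizeof arr);            // array[0..3]
    return i + sizeof arr;                            // 7 bytes total
}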
My task is to place specified bits in an array of 8 bytes (not all 64 bits).
This can be done by using a struct:
struct Date {
    unsigned int nWeekDay : 3;
    unsigned int nMonthDay : 6;
    unsigned int reserved1 : 10;
    unsigned int nMonth : 5;
    unsigned int nYear : 8;
};
I would like to write a generic class that takes a value, a start bit, and a length, and places the value at the correct position.
Could someone point me to an implementation of such a class/function?
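A minimal sketch of such a helper pair. The names set_bits/get_bits are hypothetical; the buffer is treated as one little-endian 64-bit value (bit 0 = least significant bit of buf[0]), and start + length <= 64 is assumed:

#include <cstdint>

// Write the low `length` bits of `value` into buf starting at bit `start`.
inline void set_bits(uint8_t buf[8], unsigned start, unsigned length,
                     uint64_t value)
{
    for (unsigned i = 0; i < length; ++i) {
        unsigned bit = start + i;
        uint8_t mask = (uint8_t)(1u << (bit % 8));
        if ((value >> i) & 1u)
            buf[bit / 8] |= mask;           // set the bit
        else
            buf[bit / 8] &= (uint8_t)~mask; // clear the bit
    }
}

// Read `length` bits starting at bit `start` back out of buf.
inline uint64_t get_bits(const uint8_t buf[8], unsigned start, unsigned length)
{
    uint64_t value = 0;
    for (unsigned i = 0; i < length; ++i) {
        unsigned bit = start + i;
        value |= (uint64_t)((buf[bit / 8] >> (bit % 8)) & 1u) << i;
    }
    return value;
}

For example, set_bits(buf, 3, 6, day) would place a 6-bit value at bit offset 3, matching the position of nMonthDay above under an LSB-first allocation.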
This should be simple but I have no clue where to look for the issue:
I have a struct:
struct region
{
public:
    long long int x;
    long long int y;
    long long int width;
    long long int height;
    unsigned char scale;
};
When I do sizeof(region) it gives me 40 when I am expecting 33.
Any ideas?
(mingw gcc, win x64 os)
It's padding the struct to fit an 8-byte boundary. So it actually is taking 40 bytes in memory - sizeof is returning the correct value.
If you want it to only take 33 bytes then specify the packed attribute:
struct region
{
public:
    long long int x;
    long long int y;
    long long int width;
    long long int height;
    unsigned char scale;
} __attribute__ ((packed));
long long int values are 8 bytes each. scale is only 1 byte but is padded for alignment, so it effectively takes up 8 bytes too: 5 * 8 = 40.
As others said, structs are padded for alignment, and the padding depends not only on the types of the members but also on the order in which they are defined.
For example, consider the two structs A and B defined below. Both are identical in terms of members and types; the only difference is the order in which the members are defined:
struct A
{
    int i;
    int j;
    char c;
    char d;
};

struct B
{
    int i;
    char c;
    int j;
    char d;
};
Would sizeof(A) equal sizeof(B) just because they have the same number of members of the same types? No. Try printing the size of each:
cout << "sizeof(A) = "<< sizeof(A) << endl;
cout << "sizeof(B) = "<< sizeof(B) << endl;
Output:
sizeof(A) = 12
sizeof(B) = 16
Surprised? With 4-byte ints, A lays out as i(4) + j(4) + c(1) + d(1) + 2 bytes of tail padding = 12, whereas B needs 3 bytes of padding after each char to realign the following member: i(4) + c(1)+3 + j(4) + d(1)+3 = 16. See the output yourself: http://ideone.com/yCX4S
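You can watch the padding appear by printing each member's offset. A small sketch using offsetof, assuming 4-byte ints:

#include <cstddef>
#include <cstdio>

struct B { int i; char c; int j; char d; };

int main()
{
    // Typical output with 4-byte int: i:0 c:4 j:8 d:12 sizeof:16,
    // i.e. 3 padding bytes after c and 3 more at the tail.
    std::printf("i:%zu c:%zu j:%zu d:%zu sizeof:%zu\n",
                offsetof(B, i), offsetof(B, c),
                offsetof(B, j), offsetof(B, d), sizeof(B));
}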