My task is to place specified bits into an array of 8 bytes (not all 64 bits).
This can be done by using a struct:
struct Date {
    unsigned int nWeekDay  : 3;
    unsigned int nMonthDay : 6;
    unsigned int reserved1 : 10;
    unsigned int nMonth    : 5;
    unsigned int nYear     : 8;
};
I would like to write a generic class that takes a value, a start bit, and a length, and places the value at the correct position.
Could someone point me to an implementation of such a class/function?
#include <iostream>

typedef union dbits {
    double d;
    struct {
        unsigned int M1 : 20;
        unsigned int M2 : 20;
        unsigned int M3 : 12;
        unsigned int E  : 11;
        unsigned int s  : 1;
    };
};

int main() {
    std::cout << "sizeof(dbits) = " << sizeof(dbits) << '\n';
}
output: sizeof(dbits) = 16, but if
typedef union dbits {
    double d;
    struct {
        unsigned int M1 : 12;
        unsigned int M2 : 20;
        unsigned int M3 : 20;
        unsigned int E  : 11;
        unsigned int s  : 1;
    };
};
Output: sizeof(dbits) = 8
Why does the size of the union increase?
The first and second unions contain the same total number of bits in their bit fields, so why do they have different sizes?
I would like to write it like this:
typedef union dbits {
    double d;
    struct {
        unsigned long long M : 52;
        unsigned int E : 11;
        unsigned int s : 1;
    };
};
But then sizeof(dbits) = 16, not 8. Why?
And is it even a good idea to use bit fields in structures to pick apart the bits of a double?
Members of a bit field will not cross the boundaries of the specified storage type. So

    unsigned int M1 : 20;
    unsigned int M2 : 20;

will be two unsigned ints, each using 20 of its 32 bits. In your second case 12 + 20 == 32 fits into a single unsigned int.
As for your last case: members with different storage types can never share a unit. So you get one unsigned long long and one unsigned int instead of the single unsigned long long you wanted.
You should use uint64_t so you get exact bit counts. unsigned int could be anything from 16 to 128 (or more) bits.
Note: bit-field layout is highly implementation-defined; this is just the way it usually works.
I have an explicitly sized structure as follow:
typedef struct
{
    unsigned long A : 4;
    unsigned long B : 12;
    union
    {
        unsigned long C1 : 8;
        unsigned long C2 : 8;
        unsigned long C3 : 8;
    };
    unsigned long D : 8;
} FooStruct;
The total size of this struct should in theory be 32 bits (4 bytes). However, sizeof reports 12 bytes, so some padding and alignment must be happening.
I just don't see why or where. Can someone explain how this structure ends up taking 12 bytes in memory?
The union forces the start of a new unsigned long, and the member after the union starts yet another unsigned long. Assuming long is 4 bytes, your struct therefore contains 3 unsigned longs, for a total of 12 bytes. (A union with three equally sized members also seems odd.)
If you want this to have a size of 4 bytes why not change it to:
typedef struct
{
    unsigned short A : 4;
    unsigned short B : 12;
    union
    {
        unsigned char C1 : 8;
        unsigned char C2 : 8;
        unsigned char C3 : 8;
    };
    unsigned char D : 8;
} FooStruct;
Additionally if you are using gcc and want to disable structure padding, you can use __attribute__((packed)):
struct FooStruct
{
    unsigned long A : 4;
    unsigned long B : 12;
    union
    {
        unsigned long C1 : 8;
        unsigned long C2 : 8;
        unsigned long C3 : 8;
    } __attribute__((packed)) C;
    unsigned long D : 8;
} __attribute__((packed));
But beware that some architectures may impose penalties on unaligned data access, or not allow it at all.
I am trying to wrap my head around a warning I get from MSVC. From what I can tell, the warning seems to be bogus, but I would like to be sure.
I am trying to convert an off_t to an offset in an OVERLAPPED. Given an off_t named offset and an OVERLAPPED named overlapped, I am trying the following:
overlapped.Offset = static_cast<DWORD>(offset);
if constexpr (sizeof(offset) > 4) {
    overlapped.OffsetHigh = static_cast<DWORD>(offset >> 32);
}
MSVC complains about the bit shift, claiming the shift count is either negative or too big. Since it's clearly not negative, and even MSVC should be able to tell that, it must think it's too big.
How could it be too big? The code in question is only compiled if the size of an off_t is greater than 4. It must therefore be at least 5 bytes (but probably 8), and given 8 bits to the byte that means a minimum of 40 bits, which is more than 32.
What is going on here?
Could it be the assignment into overlapped.OffsetHigh, and not your explicit shift, that is causing the warning? The following code generates the same warning on VS2015 compiling for x86 32-bit:
struct Clump
{
    unsigned int a : 32;
    unsigned int b : 32;
    unsigned int c : 32;
};

unsigned int x = 0;
unsigned int y = 0;
unsigned int z = 0;
Clump clump = { x, y, z }; // This is line 1121.

1>E.cpp(1121): warning C4293: '<<': shift count negative or too big, undefined behavior
But removing the bit-fields, there is no warning:
struct Clump
{
    unsigned int a;
    unsigned int b;
    unsigned int c;
};

unsigned int x = 0;
unsigned int y = 0;
unsigned int z = 0;
Clump clump = { x, y, z }; // No warning.
I would like to pack 6 ints into one unsigned long long variable and then read those integers back from the corresponding bit ranges of the long long. I wrote something like this, but it prints negative values:
unsigned long long encode(int caller, int caller_zone,
                          int callee, int callee_zone,
                          int duration, int tariff) {
    struct CallInfo
    {
        int caller      : 17;
        int caller_zone : 7;
        int callee      : 17;
        int callee_zone : 7;
        int duration    : 13;
        int tariff      : 3;
    };
    CallInfo info = { caller, caller_zone, callee, callee_zone, duration, tariff };
    cout << info.caller << endl;
    cout << info.caller_zone << endl;
}
It's much easier to use unsigned bit fields for this, e.g.
struct CallInfo
{
    unsigned int caller      : 17;
    unsigned int caller_zone : 7;
    unsigned int callee      : 17;
    unsigned int callee_zone : 7;
    unsigned int duration    : 13;
    unsigned int tariff      : 3;
};
You would not really need an encode function, as you could just write, e.g.
CallInfo info = { /* ... initialise fields here ... */ };
and then access fields in the normal way:
info.caller = 0;
info.caller_zone = info.callee_zone;
// ...
I have defined the screen with a struct as below:
struct
{
private:
    // data and attributes
    char character : 8;
    unsigned short int foreground : 3;
    unsigned short int intensity : 1;
    unsigned short int background : 3;
    unsigned short int blink : 1;
public:
    unsigned short int row;
    unsigned short int col;

    // takes row, column, data and attributes, then sets that cell of the
    // screen in text-mode view with the given data
    void setData(unsigned short int arg_row,
                 unsigned short int arg_col,
                 char arg_character,
                 unsigned short int arg_foreground,
                 unsigned short int arg_intensity,
                 unsigned short int arg_background,
                 unsigned short int arg_blink)
    {
        // pointer to the first cell of the screen in text-mode view
        int far *SCREEN = (int far *) 0xB8000000;
        row = arg_row;
        col = arg_col;
        character = arg_character;
        foreground = arg_foreground;
        intensity = arg_intensity;
        background = arg_background;
        blink = arg_blink;
        *(SCREEN + row * 80 + col) = (blink * 32768) + (background * 4096)
                                   + (intensity * 2048) + (foreground * 256)
                                   + character;
    }
} SCREEN;
But when I store characters with ASCII codes above 128 through this struct, the data gets corrupted. I defined the character field with 8 bits, so what's wrong with this definition?
With the C++ compiler you use, char is apparently signed, so an 8-bit character holds values from -128 to 127 (assuming two's-complement representation for negative values). If you want to be guaranteed to fit values greater than or equal to 128, use unsigned char.