How to initialize bitfields with a C++ constructor?

First off, I’m not concerned with portability, and can safely assume that the endianness will not change. Assuming I read a hardware register value, I would like to overlay that register value over bitfields so that I can refer to the individual fields in the register without using bit masks.
EDIT: Fixed problems pointed out by GMan, and adjusted the code so it's clearer for future readers.
SEE: Anders K.'s and Michael J's answers below for a more elegant solution.
#include <iostream>

/// \class HardwareRegister
/// Abstracts out bitfields in a hardware register.
/// \warning This is non-portable code.
class HardwareRegister
{
public:
    /// Constructor.
    /// \param[in] registerValue - the value of the entire register. The
    ///                            value will be overlaid onto the bitfields
    ///                            defined in this class.
    HardwareRegister(unsigned long registerValue = 0)
    {
        /// Lots of casting to get registerValue to overlay on top of the
        /// bitfields
        *this = *(reinterpret_cast<HardwareRegister*>(&registerValue));
    }

    /// Bitfields of this register.
    /// The data type of these fields should be the same size as the register:
    ///     unsigned short for a 16-bit register,
    ///     unsigned long  for a 32-bit register.
    ///
    /// \warning Remember endianness! The order of the following bitfields is
    ///          important.
    ///          Big endian    - Start with the most significant bits first.
    ///          Little endian - Start with the least significant bits first.
    unsigned long field1: 8;
    unsigned long field2:16;
    unsigned long field3: 8;
}; // end class HardwareRegister

int main()
{
    unsigned long registerValue = 0xFFFFFF00;
    HardwareRegister testRegister(registerValue);

    // Prints out on a little-endian machine:
    //   Field 1 = 0
    //   Field 2 = 65535
    //   Field 3 = 255
    std::cout << "Field 1 = " << testRegister.field1 << std::endl;
    std::cout << "Field 2 = " << testRegister.field2 << std::endl;
    std::cout << "Field 3 = " << testRegister.field3 << std::endl;
}

Don't do this:
*this = *(reinterpret_cast<HW_Register*>(&registerValue));
The 'this' pointer shouldn't be fiddled with in that way. Between
HW_Register reg(val);
and
HW_Register *reg = new HW_Register(val);
'this' refers to two different places in memory (the stack and the heap, respectively).
Instead, keep an internal union/struct to hold the value; that way it's easy to convert back and forth (since you are not interested in portability).
e.g.
union
{
    struct {
        unsigned short field1:2;
        unsigned short field2:4;
        unsigned short field3:2;
        ...
    } bits;
    unsigned short value;
} reg;
edit: true enough about the name 'register' (it is a C++ keyword, so it can't be used as a variable name).
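For future readers, here is a minimal sketch of how that union can be wrapped up in a class (assuming the same 2/4/2-bit layout as above; the field-to-bit mapping and reading the inactive union member are implementation-defined, which is acceptable here since portability was ruled out):

#include <iostream>

class HW_Register
{
public:
    explicit HW_Register(unsigned short registerValue = 0) { reg.value = registerValue; }

    unsigned short value()  const { return reg.value; }        // whole register
    unsigned short field2() const { return reg.bits.field2; }  // one field

private:
    union
    {
        struct
        {
            unsigned short field1:2;
            unsigned short field2:4;
            unsigned short field3:2;
        } bits;
        unsigned short value;
    } reg;
};

int main()
{
    HW_Register r(0x2C);                 // 0b00101100
    std::cout << r.field2() << '\n';     // prints 11 here: bits 2..5 of 0x2C
}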

Bitfields don't work that way. You can't assign a scalar value to a struct full of bitfields. It looks like you already know this since you used reinterpret_cast, but since reinterpret_cast isn't guaranteed to do very much, it's just rolling the dice.
You need to encode and decode the values if you want to translate between bitfield structs and scalars.
HW_Register(unsigned char value)
    : field1( value      & 3 ),  // the shift and mask amounts must match
      field2( value >> 2 & 3 ),  // each field's declared width and position
      field3( value >> 4 & 7 )
{}
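The reverse direction works the same way. As a minimal sketch (assuming hypothetical field widths of 2, 2, and 3 bits to match the shifts above), the fields can be packed back into a scalar:

// Hypothetical encode() helper (not part of the original answer), matching
// the 2/2/3-bit shifts used in the constructor above.
unsigned char encode() const
{
    return static_cast<unsigned char>(  (field1 & 3)
                                      | ((field2 & 3) << 2)
                                      | ((field3 & 7) << 4) );
}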
Edit: The reason you don't get any output is that the ASCII characters corresponding to the numbers in the fields are non-printing. Try this:
std::cout << "Field 1 = " << (int) testRegister.field1 << std::endl;
std::cout << "Field 2 = " << (int) testRegister.field2 << std::endl;
std::cout << "Field 3 = " << (int) testRegister.field3 << std::endl;

Try this:
#include <iostream>

class HW_Register
{
public:
    HW_Register(unsigned char nRegisterValue = 0)
    {
        Init(nRegisterValue);
    }
    ~HW_Register(void) {}

    void Init(unsigned char nRegisterValue)
    {
        nVal = nRegisterValue;
    }

    unsigned Field1() { return nField1; }
    unsigned Field2() { return nField2; }
    unsigned Field3() { return nField3; }

private:
    union
    {
        struct
        {
            unsigned char nField1:2;
            unsigned char nField2:4;
            unsigned char nField3:2;
        };
        unsigned char nVal;
    };
};

int main()
{
    unsigned char registerValue = 0xFF;
    HW_Register testRegister(registerValue);
    std::cout << "Field 1 = " << testRegister.Field1() << std::endl;
    std::cout << "Field 2 = " << testRegister.Field2() << std::endl;
    std::cout << "Field 3 = " << testRegister.Field3() << std::endl;
    return 0;
}

Bit-field members can be initialized in the constructor's member-initializer list like any other member:
HW_Register(unsigned char registerValue) : field1(0), field2(0), field3(0)

Related

Arrays of enums packed into bit fields in MSVC++

Using MS Visual Studio 2022, I am trying to pack two items into a union of size 16 bits, but I am having problems with the correct syntax.
The first item is an unsigned short int so no problems there. The other is an array of 5 items, all two bits long. So imagine:
enum States {unused, on, off};
// Should be able to store this in a 2 bit field
then I want
States myArray[5];
// Should be able to fit in 10 bits and
// be unioned with my unsigned short
Unfortunately, I am completely failing to work out the syntax that would make my array fit into 16 bits. Any ideas?
You can't do that. An array is an array, not some packed bits.
What you can do is use manual bit manipulation:
#include <iostream>
#include <cstdint>
#include <bitset>
#include <climits>

enum status {
    on     = 0x03,
    off    = 0x01,
    unused = 0x00
};

constexpr std::uint8_t status_bit_width = 2;

std::uint16_t encode(status s, std::uint8_t id, std::uint16_t status_vec) {
    if (id >= (CHAR_BIT * sizeof(std::uint16_t)) / status_bit_width) {
        std::cout << "illegal id" << std::endl;
        return status_vec;
    }
    std::uint8_t bit_value = s;
    status_vec |= bit_value << (id * status_bit_width);
    return status_vec;
}

int main(void) {
    std::uint16_t bits = 0;
    bits = encode(on, 1, bits);
    std::cout << std::bitset<16>(bits) << std::endl;
    bits = encode(off, 2, bits);
    std::cout << std::bitset<16>(bits) << std::endl;
    bits = encode(unused, 3, bits);
    std::cout << std::bitset<16>(bits) << std::endl;
    bits = encode(off, 4, bits);
    std::cout << std::bitset<16>(bits) << std::endl;
    bits = encode(off, 7, bits);
    std::cout << std::bitset<16>(bits) << std::endl;
    bits = encode(on, 8, bits);
}
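Reading a status back out is the mirror image. A minimal sketch of a decode counterpart (the function name decode is mine; it assumes the status enum and status_bit_width defined above):

// Hypothetical decode() counterpart: extract the two-bit status stored at
// position `id` in the packed vector.
status decode(std::uint8_t id, std::uint16_t status_vec) {
    return static_cast<status>((status_vec >> (id * status_bit_width)) & 0x03);
}

// e.g. after bits = encode(off, 2, bits); decode(2, bits) returns off.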

Is there a way to access members of a struct

I want to be able to find the size of the individual members in a struct. For example:
struct A {
    int a0;
    char a1;
};
Now sizeof(A) is 8, but let's assume I am writing a function that will print the alignment of A as shown below where "aa" represents the padding.
data A:
0x00: 00 00 00 00
0x04: 00 aa aa aa
*-------------------------
size: 8 padding: 3
In order to calculate padding, I need to know the size of each individual member of a struct. So my question is: how can I access the individual members of a given struct?
Also, let me know if there is another way to find the amount of padding.
A simple approach would be to use the sizeof operator (exploiting the fact that it does not evaluate its operand, only determines the size of the type its operand would have) and the offsetof() macro (from <cstddef>).
For example:
#include <iostream>
#include <iomanip>
#include <cstddef>

struct A
{
    int a0;
    char a1;
};

int main()
{
    // first calculate sizes
    size_t size_A  = sizeof(A);
    size_t size_a0 = sizeof(((A *)nullptr)->a0);   // sizeof will not dereference null
    size_t size_a1 = sizeof(((A *)nullptr)->a1);

    // calculate positions
    size_t pos_a0 = offsetof(A, a0);               // will be zero, but calculate it anyway
    size_t pos_a1 = offsetof(A, a1);

    // now calculate padding amounts
    size_t padding_a0 = pos_a1 - pos_a0 - size_a0; // padding between a0 and a1 members
    size_t padding_a1 = size_A - pos_a1 - size_a1; // padding after a1, at the end of the struct

    std::cout << "Data A:\n";
    std::cout << "0x" << std::hex << std::setw(2) << std::setfill('0') << pos_a0;
    size_t i = pos_a0;
    while (i < pos_a0 + size_a0)   // print out zeros for bytes of a0 member
    {
        std::cout << " 00";
        ++i;
    }
    while (i < pos_a1)             // print out aa for each padding byte after a0
    {
        std::cout << " aa";
        ++i;
    }
    std::cout << std::endl;
    std::cout << "0x" << std::hex << std::setw(2) << std::setfill('0') << pos_a1;
    while (i < pos_a1 + size_a1)   // print out zeros for bytes of a1 member
    {
        std::cout << " 00";
        ++i;
    }
    while (i < size_A)             // print out aa for each padding byte after a1
    {
        std::cout << " aa";
        ++i;
    }
    std::cout << std::endl;
    std::cout << "size: " << std::dec << size_A << " padding: " << padding_a0 + padding_a1 << std::endl;
}
You can work this out if you know the contents of the struct. Padding usually works like this.
Assume that your struct is this:
struct A {
    int a0;  // 32 bits (4 bytes)
    char a1; // 8 bits (1 byte)
};
But this layout leaves the structure's size unaligned: if you packed these structs back to back in memory, members would end up misaligned, which can slow the application down. So the compiler pads the struct, and the final struct, as the compiler sees it, looks something like this.
struct A {
    int a0;
    char a1;
    // padding added by the compiler:
    char __padd[3]; // 3 * 1 byte = 3 bytes
    // Adding this padding makes the struct 32-bit aligned.
};
Using this knowledge, you can see why it's not easy to get the padding of an object without knowing its contents. And padding isn't always placed at the end of the object. For example:
struct Obj {
    int a = 0;
    char c = 'b';
    // Padding is inserted here by the compiler. It may look something like: char __padd[3];
    int b = 10;
};
So how do you get the padding of the struct?
You can use something called reflection to get the contents of the struct at runtime. Then work out the sizes of the data types in the struct, and calculate the padding from the gaps between the end of one member and the start of the next.
As other answers have said, the offsetof macro is clearly the best solution here, but just to demonstrate that you could find the positions of your members at run time by looking at the pointers:
#include <iostream>

struct A
{
    char x;
    int y;
    char z;
};

template <typename T>
void PrintSize()
{
    std::cout << " size = " << sizeof(T) << std::endl;
}

void PrintPosition(char * ptr_mem, char * ptr_base)
{
    std::cout << " position = " << ptr_mem - ptr_base << std::endl;
}

template <typename T>
void PrintDetails(char member, T * ptr_mem, A * ptr_base)
{
    std::cout << member << ":" << std::endl;
    PrintSize<T>();
    PrintPosition((char*) ptr_mem, (char*) ptr_base);
    std::cout << std::endl;
}

int main()
{
    A a;
    PrintDetails('x', &a.x, &a);
    PrintDetails('y', &a.y, &a);
    PrintDetails('z', &a.z, &a);
}
Output on my machine:
x:
size = 1
position = 0
y:
size = 4
position = 4
z:
size = 1
position = 8
(Surprisingly, on my Intel machine with gcc/clang, A has size 12! I thought the compiler did a better job of rearranging elements; in fact, for a struct like this the compiler is not allowed to reorder the members.)
To calculate the trailing padding of a structure, you need to know the offset and size of its last member.
Concisely, if type T has a last member last which is of type U, the trailing padding size is:
sizeof(T) - (offsetof(T, last) + sizeof(U))
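As a minimal sketch, applying this to the A struct from the question (where a1 is the last member):

#include <cstddef>   // offsetof
#include <iostream>

struct A {
    int a0;
    char a1;
};

int main() {
    // Trailing padding = total size minus (offset of last member + its size).
    std::size_t trailing = sizeof(A) - (offsetof(A, a1) + sizeof(A::a1));
    std::cout << trailing << '\n';   // typically 3 on platforms where int is 4 bytes
}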
To calculate the total amount of padding in a structure, if that is what this question is about, I would use a GCC extension: declare the same structure twice (perhaps with the help of a macro), once without the packed attribute and once with. Then subtract their sizes.
Here is a complete, working sample:
#include <stdio.h>

#define X struct { char a; int b; char c; }

int main(void)
{
    printf("%zu\n", sizeof(X) - sizeof(X __attribute__((packed))));
    return 0;
}
For the above structure, it outputs 6: 3 bytes of padding after a (needed for the four-byte alignment of b) plus 3 bytes at the end of the structure (needed to keep b aligned when the structure is used as an array element).
The packed attribute defeats all padding, and so the difference between the packed and unpacked structure gives us the total amount of padding.

Showing binary representation of floating point types in C++ [closed]

Consider the following code for integral types:
#include <bitset>
#include <iostream>
#include <string>

template <class T>
std::string as_binary_string( T value ) {
    return std::bitset<sizeof( T ) * 8>( value ).to_string();
}

int main() {
    unsigned char a(2);
    char b(4);
    unsigned short c(2);
    short d(4);
    unsigned int e(2);
    int f(4);
    unsigned long long g(2);
    long long h(4);

    std::cout << "a = " << +a << " " << as_binary_string( a ) << std::endl;
    std::cout << "b = " << +b << " " << as_binary_string( b ) << std::endl;
    std::cout << "c = " << c << " " << as_binary_string( c ) << std::endl;
    std::cout << "d = " << d << " " << as_binary_string( d ) << std::endl;
    std::cout << "e = " << e << " " << as_binary_string( e ) << std::endl;
    std::cout << "f = " << f << " " << as_binary_string( f ) << std::endl;
    std::cout << "g = " << g << " " << as_binary_string( g ) << std::endl;
    std::cout << "h = " << h << " " << as_binary_string( h ) << std::endl;

    std::cout << "\nPress any key and enter to quit.\n";
    char q;
    std::cin >> q;
    return 0;
}
Pretty straightforward; it works well and is quite simple.
EDIT
How would one go about writing a function to extract the binary or bit pattern of arbitrary floating point types at compile time?
When it comes to floats, I have not found anything similar in any existing libraries that I know of. I've searched Google for days looking for one, and then resorted to trying to write my own function, without success. I no longer have the attempted code available since I originally asked this question, so I can not show you all of the different attempted implementations along with their build errors. I was interested in generating the bit pattern for floats in a generic way at compile time and wanted to integrate that into my existing class that seamlessly does the same for any integral type. As for the floating types themselves, I have taken into consideration the different formats as well as architecture endianness. For my general purposes, the standard IEEE versions of the floating-point types are all I should need to be concerned with.
iBug suggested that I write my own function when I originally asked this question, which is what I was attempting to do. I understand binary numbers, memory sizes, and the mathematics, but putting it all together with how floating-point types are stored in memory, with their different parts (sign bit, exponent, and mantissa), is where I was having the most trouble.
Since then, with the suggestions of those who have given great answers and examples, I was able to write a function that fits nicely into my already existing class template, and it now works for my intended purposes.
What about writing one yourself?
#include <bitset>
#include <cstdint>
#include <cstring>
#include <string>

static_assert(sizeof(float) == sizeof(uint32_t));
static_assert(sizeof(double) == sizeof(uint64_t));

std::string as_binary_string( float value ) {
    std::uint32_t t;
    std::memcpy(&t, &value, sizeof(value));
    return std::bitset<sizeof(float) * 8>(t).to_string();
}

std::string as_binary_string( double value ) {
    std::uint64_t t;
    std::memcpy(&t, &value, sizeof(value));
    return std::bitset<sizeof(double) * 8>(t).to_string();
}
You may need to change the helper variable t in case the sizes for the floating point numbers are different.
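As a quick usage sketch of the overloads above (the comments assume IEEE-754 single and double precision):

#include <iostream>

int main() {
    // 1.0f is 0x3F800000 in IEEE-754 single precision:
    // sign 0, biased exponent 127, mantissa 0.
    std::cout << as_binary_string(1.0f) << '\n';

    // 1.0 is 0x3FF0000000000000 in IEEE-754 double precision:
    // sign 0, biased exponent 1023, mantissa 0.
    std::cout << as_binary_string(1.0) << '\n';
}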
You can alternatively copy them bit by bit. This is slower but works for any type.
#include <bitset>
#include <climits>
#include <cstdint>
#include <cstring>
#include <string>

template <typename T>
std::string as_binary_string( T value )
{
    const std::size_t nbytes = sizeof(T), nbits = nbytes * CHAR_BIT;
    std::bitset<nbits> b;
    std::uint8_t buf[nbytes];
    std::memcpy(buf, &value, nbytes);
    for (std::size_t i = 0; i < nbytes; ++i)
    {
        std::uint8_t cur = buf[i];
        std::size_t offset = i * CHAR_BIT;
        for (int bit = 0; bit < CHAR_BIT; ++bit)
        {
            b[offset] = cur & 1;
            ++offset;  // Move to the next bit in b
            cur >>= 1; // Move to the next bit in the byte
        }
    }
    return b.to_string();
}
You said it doesn't need to be standard. So, here is what works in clang on my computer:
#include <iostream>
#include <algorithm>
using namespace std;

int main()
{
    char *result;
    result = new char[33];
    fill(result, result + 32, '0');
    float input;
    cin >> input;
    asm(
        "mov %0,%%eax\n"
        "mov %1,%%rbx\n"
        ".intel_syntax\n"
        "mov rcx,20h\n"
        "loop_begin:\n"
        "shr eax\n"
        "jnc loop_end\n"
        "inc byte ptr [rbx+rcx-1]\n"
        "loop_end:\n"
        "loop loop_begin\n"
        ".att_syntax\n"
        :
        : "m" (input), "m" (result)
    );
    cout << result << endl;
    delete[] result;
    return 0;
}
This code makes a bunch of assumptions about the computer architecture and I am not sure on how many computers it would work.
EDIT:
My computer is a 64-bit Mac-Air. This program basically works by allocating a 33-byte string and filling the first 32 bytes with '0' (the 33rd byte will automatically be '\0').
Then it uses inline assembly to store the float into a 32-bit register and then it repeatedly shifts it to the right by one bit.
If the last bit in the register was 1 before the shift, it gets stored into the carry flag.
The assembly code then checks the carry flag and, if it contains 1, it increases the corresponding byte in the string by 1.
Since it was previously initialized to '0', it will turn to '1'.
So, effectively, when the loop in the assembly is finished, the binary representation of a float is stored into a string.
This code only works for x64 (it uses 64-bit registers "rbx" and "rcx" to store the pointer and the counter for the loop), but I think it's easy to tweak it to work on other processors.
An IEEE double-precision floating point number looks like the following:
sign    exponent    mantissa
1 bit   11 bits     52 bits
Note that there's a hidden 1 before the mantissa, and the exponent is biased, so a stored value of 1023 means an exponent of 0 (it is not two's complement).
By memcpy()ing to a 64-bit unsigned integer you can then apply AND and OR masks to get the bit pattern. The arrangement could be big endian or little endian.
You can easily work out which arrangement you have by passing easy numbers such as 1 or 2.
Generally people either use std::hexfloat or cast a pointer to the floating-point value to a pointer to an unsigned integer of the same size and print the indirected value in hex format. Both methods facilitate bit-level analysis of floating-point in a productive fashion.
You could roll your own by casting the address of the float/double to a char pointer and iterating over it that way:
#include <iomanip>
#include <iostream>
#include <limits>
#include <memory>
#include <string>

template <typename T>
std::string getBits(T t) {
    std::string returnString{""};
    char *base{reinterpret_cast<char *>(std::addressof(t))};
    char *tail{base + sizeof(t) - 1};
    do {
        for (int bits = std::numeric_limits<unsigned char>::digits - 1; bits >= 0; bits--) {
            returnString += ( ((*tail) & (1 << bits)) ? '1' : '0');
        }
    } while (--tail >= base);
    return returnString;
}

int main() {
    float f{10.0};
    double d{100.0};
    double nd{-100.0};

    std::cout << std::setprecision(1);
    std::cout << getBits(f) << std::endl;
    std::cout << getBits(d) << std::endl;
    std::cout << getBits(nd) << std::endl;
}
Output on my machine (note the sign flip in the third output):
01000001001000000000000000000000
0100000001011001000000000000000000000000000000000000000000000000
1100000001011001000000000000000000000000000000000000000000000000

Working of std::bitset in C++

I want to know how this program works:
#include <bitset>
#include <iostream>

const int option_1 = 0;
const int option_2 = 1;
const int option_3 = 2;
const int option_4 = 3;
const int option_5 = 4;
const int option_6 = 5;
const int option_7 = 6;
const int option_8 = 7;

int main()
{
    std::bitset<8> bits(0x2);
    bits.set(option_5);
    bits.flip(option_6);
    bits.reset(option_6);

    std::cout << "Bit 4 has value: " << bits.test(option_5) << '\n';
    std::cout << "Bit 5 has value: " << bits.test(option_6) << '\n';
    std::cout << "All the bits: " << bits << '\n';
    return 0;
}
I have seen this example on a website but cannot understand how some parts of this program work.
Here, option_5 is first set to 4; then in the main program, bits.set(option_5) is used to set it to 1 (is what I think). So what is the use of that 4 assigned to the integer option_5 above?
At a high level, a std::bitset is basically an array of bits. Using a non-type template parameter, an array of bits is created on the stack.
The option_5 variable is the index of the bit to set: bit 4, counted from the right when the bitset is printed. So when you print out the value, there is a 1 at the position named by option_5, which is position 4 from the right.
The constructor of the bitset initializes it to look like 0b00000010. The set() function sets the bit at the specified position to 1, and the reset() function sets the bit at the specified position to 0.
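To make the index/value distinction concrete, here is a minimal sketch tracing what each call does to the bits (positions are counted from the right):

#include <bitset>
#include <iostream>

int main()
{
    std::bitset<8> bits(0x2);  // 00000010  (constructor value, not an index)
    bits.set(4);               // 00010010  (set the bit at position 4 to 1)
    bits.flip(5);              // 00110010  (toggle the bit at position 5)
    bits.reset(5);             // 00010010  (clear the bit at position 5 again)
    std::cout << bits << '\n'; // prints 00010010
}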

Converting hex String to structure

I've got a file containing a large string of hexadecimal. Here's the first few lines:
0000038f
0000111d
0000111d
03030303
//Goes on for a long time
I have a large struct that is intended to hold that data:
typedef struct
{
    unsigned int field1: 5;
    unsigned int field2: 11;
    unsigned int field3: 16;
    //Goes on for a long time
} calibration;
What I want to do is read the above string and store it in the struct. I can assume the input is valid (it's verified before I get it).
I've already got a loop that reads the file and puts the whole item in a string:
std::string line = "";
std::string hexText = "";
while (std::getline(readFile, line))
{
    hexText += line;
}
//Convert string into calibration
//Convert string into long int
long int hexInt = strtol(hexText.c_str(), NULL, 16);
//Here I get stuck: How to get from long int to calibration...?
How to get from long int to calibration...?
Cameron's answer is good, and probably what you want.
I offer here another (maybe not so different) approach.
Note1: Your file input needs rework. I suggest:
a) use getline() to fetch one line at a time into a string;
b) convert the one entry to a uint32_t (I would use stringstream instead of atol); once you learn how to detect and recover from invalid input, you could then work on combining a) and b) into one step (see the sketch after this list);
c) then install the uint32_t in your structure, for which my offering below might offer insight.
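A minimal sketch of steps a) and b) (assuming the input stream is still called readFile, as in the question; the function name readWords is mine):

#include <cstdint>
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

std::vector<std::uint32_t> readWords(std::ifstream& readFile)
{
    std::vector<std::uint32_t> words;
    std::string line;
    while (std::getline(readFile, line))      // a) one line at a time
    {
        std::uint32_t word = 0;
        std::stringstream ss(line);
        if (ss >> std::hex >> word)           // b) convert the entry from hex
            words.push_back(word);
        // else: invalid line - detect/recover as needed
    }
    return words;
}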
Note2: I have worked many years with bit fields, and have developed a distaste for them.
I have never found them more convenient than the alternatives.
The alternative I prefer is bit masks and field shifting.
So far as we can tell from your problem statement, it appears your problem does not need bit-fields (which Cameron's answer illustrates).
Note3: Not all compilers will pack these bit fields for you.
The last compiler I used required what is called a "pragma".
G++ 4.8 on Ubuntu seemed to pack the bytes just fine (i.e. no pragma needed).
The sizeof(calibration) for your original code is 4 ... i.e. packed.
Another issue is that packing can unexpectedly change when you change options or upgrade the compiler or change the compiler.
My team's work-around was to always have an assert against struct size and a few byte offsets in the CTOR.
Note4: I did not illustrate the use of 'union' to align a uint32_t array over your calibration struct.
This may be preferred over the reinterpret cast approach. Check your requirements, team lead, professor.
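For completeness, a minimal sketch of that union idea (the member names words and fields are mine, and only the first three fields from the question are shown):

#include <cstdint>

// Overlay a uint32_t array on the bit fields via a union instead of
// reinterpret_cast. Reading the inactive member is technically
// implementation-defined in C++, but it is the common embedded idiom.
union CalibrationOverlay
{
    struct
    {
        std::uint32_t field1 : 5;
        std::uint32_t field2 : 11;
        std::uint32_t field3 : 16;
        // ...more fields...
    } fields;
    std::uint32_t words[1];  // one 32-bit word per 32 bits of fields
};

// Usage: overlay.words[0] = 0x0000038f; then read overlay.fields.field2, etc.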
Anyway, in the spirit of your original effort, consider the following additions to your struct calibration:
typedef struct
{
    uint32_t field1 : 5;
    uint32_t field2 : 11;
    uint32_t field3 : 16;
    //Goes on for a long time

    // I made up these next 2 fields for illustration
    uint32_t field4 : 8;
    uint32_t field5 : 24;
    // ... add more fields here

    // something typically done by ctor or used by ctor
    void clear() { field1 = 0; field2 = 0; field3 = 0; field4 = 0; field5 = 0; }

    void show123(const char* lbl = 0) {
        if (0 == lbl) lbl = " ";
        std::cout << std::setw(16) << lbl;
        std::cout << " " << std::setw(5) << std::hex << field3 << std::dec
                  << " " << std::setw(5) << std::hex << field2 << std::dec
                  << " " << std::setw(5) << std::hex << field1 << std::dec
                  << " 0x" << std::hex << std::setfill('0') << std::setw(8)
                  << *(reinterpret_cast<uint32_t*>(this))
                  << " => " << std::dec << std::setfill(' ')
                  << *(reinterpret_cast<uint32_t*>(this))
                  << std::endl;
    } // show

    // I did not create show456() ...

    // 1st uint32_t: set new val, return previous
    uint32_t set123(uint32_t nxtVal) {
        uint32_t* myVal = reinterpret_cast<uint32_t*>(this);
        uint32_t prevVal = myVal[0];
        myVal[0] = nxtVal;
        return (prevVal);
    }

    // return current value of the combined field1, field2, field3
    uint32_t get123(void) {
        uint32_t* myVal = reinterpret_cast<uint32_t*>(this);
        return (myVal[0]);
    }

    // 2nd uint32_t: set new val, return previous
    uint32_t set45(uint32_t nxtVal) {
        uint32_t* myVal = reinterpret_cast<uint32_t*>(this);
        uint32_t prevVal = myVal[1];
        myVal[1] = nxtVal;
        return (prevVal);
    }

    // return current value of the combined field4, field5
    uint32_t get45(void) {
        uint32_t* myVal = reinterpret_cast<uint32_t*>(this);
        return (myVal[1]);
    }

    // guess that the next 4 fields fill 32 bits
    uint32_t get6789(void) {
        uint32_t* myVal = reinterpret_cast<uint32_t*>(this);
        return (myVal[2]);
    }

    // ... tedious expansion

} calibration;
Here is some test code to illustrate the use:
uint32_t t125()
{
    const char* lbl =
        "\n 16 bits 11 bits 5 bits hex => dec";

    calibration cal;
    cal.clear();
    std::cout << lbl << std::endl;
    cal.show123();

    cal.field1 = 1;
    cal.show123("field1 = 1");
    cal.clear();

    cal.field1 = 31;
    cal.show123("field1 = 31");
    cal.clear();

    cal.field2 = 1;
    cal.show123("field2 = 1");
    cal.clear();

    cal.field2 = (2047 & 0x07ff);
    cal.show123("field2 = 2047");
    cal.clear();

    cal.field3 = 1;
    cal.show123("field3 = 1");
    cal.clear();

    cal.field3 = (65535 & 0x0ffff);
    cal.show123("field3 = 65535");

    cal.set123(0xABCD6E17);
    cal.show123("set123(0x...)");

    cal.set123(0xffffffff);
    cal.show123("set123(0x...)");

    cal.set123(0x0);
    cal.show123("set123(0x...)");

    std::cout << "\n";
    cal.clear();
    std::cout << "get123(): " << cal.get123() << std::endl;
    std::cout << " get45(): " << cal.get45() << std::endl;

    // values from your file:
    cal.set123(0x0000038f);
    cal.set45(0x0000111d);

    std::cout << "get123(): " << "0x" << std::hex << std::setfill('0')
              << std::setw(8) << cal.get123() << std::endl;
    std::cout << " get45(): " << "0x" << std::hex << std::setfill('0')
              << std::setw(8) << cal.get45() << std::endl;

    // cal.set6789 (0x03030303);
    // std::cout << "get6789(): " << cal.get6789() << std::endl;
    // ...

    return (0);
}
And the test code output:
16 bits 11 bits 5 bits hex => dec
0 0 0 0x00000000 => 0
field1 = 1 0 0 1 0x00000001 => 1
field1 = 31 0 0 1f 0x0000001f => 31
field2 = 1 0 1 0 0x00000020 => 32
field2 = 2047 0 7ff 0 0x0000ffe0 => 65,504
field3 = 1 1 0 0 0x00010000 => 65,536
field3 = 65535 ffff 0 0 0xffff0000 => 4,294,901,760
set123(0x...) abcd 370 17 0xabcd6e17 => 2,882,366,999
set123(0x...) ffff 7ff 1f 0xffffffff => 4,294,967,295
set123(0x...) 0 0 0 0x00000000 => 0
get123(): 0
get45(): 0
get123(): 0x0000038f
get45(): 0x0000111d
The goal of this code is to help you see how the bit fields map into the lsbyte through msbyte of the data.
If you care at all about efficiency, don't read the whole thing into a string and then convert it. Simply read one word at a time, and convert that. Your loop should look something like:
calibration c;
uint32_t* dest = reinterpret_cast<uint32_t*>(&c);
while (true) {
    char hexText[8];
    // TODO: Attempt to read 8 bytes from file and then skip whitespace
    // TODO: Break out of the loop on EOF

    std::uint32_t hexValue = 0; // TODO: Convert hex to dword

    // Assumes the structure padding & packing matches the dump version's
    // Assumes the structure size is exactly a multiple of 32 bits / 4 bytes (w/ padding)
    static_assert(sizeof(calibration) % 4 == 0);
    assert(dest - reinterpret_cast<uint32_t*>(&c) < sizeof(calibration) / 4 && "Too much data");

    *dest++ = hexValue;
}
assert(dest - reinterpret_cast<uint32_t*>(&c) == sizeof(calibration) / 4 && "Too little data");
Converting 8 chars of hex to an actual 4-byte int is a good exercise and is well-covered elsewhere, so I've left it out (along with the file reading, which is similarly well-covered).
Note the two assumptions in the loop: the first one cannot be checked either at run-time or compile time, and must be either agreed upon in advance or extra work has to be done to properly serialize the structure (handling structure packing and padding, etc.). The last one can at least be checked at compile time with the static_assert.
Also, care has to be taken to ensure that the endianness of the hex bytes in the file matches the endianness of the architecture executing the program when converting the hex string. This will depend on whether the hex was written in a specific endianness in the first place (in which case you can convert it from the known endianness to the current architecture's endianness quite easily), or whether it's architecture-dependent (in which case you have no choice but to assume the endianness is the same as your current architecture).
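If the file's endianness turns out to differ from the machine's, the conversion mentioned above is just a byte swap of each 32-bit word. A minimal sketch (the function name swap32 is mine):

#include <cstdint>

// Hypothetical helper: reverse the byte order of one 32-bit word,
// e.g. 0x0000038F <-> 0x8F030000.
std::uint32_t swap32(std::uint32_t v)
{
    return (v >> 24)
         | ((v >> 8)  & 0x0000FF00u)
         | ((v << 8)  & 0x00FF0000u)
         | (v << 24);
}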