compatibility and structure padding - c++

There is one structure in a Linux program (64-bit OS),
and I did the following to output this structure as hex codes.
After the code below runs, "strBuff" is written to a file, in the same way "printf" would print it.
This output then needs to be read on Windows and stored into the same "example" structure.
However, there is a problem here.
On my current Windows system, unsigned long is 4 bytes.
On my current Linux system, unsigned long is 8 bytes.
So there are too many zero bytes in the output text.
This also seems to be related to the padding bytes: I expected only 2 bytes of padding, but 4 bytes of padding are inserted.
It is not possible to change the "example" structure, because the code was written assuming a 4-byte unsigned long when outputting from Linux, and the code is already near completion.
I have two things to ask.
How can I get rid of the unnecessary zero bytes in the hex output?
Currently we are using a hard-coded method that skips them for every unsigned long and signed long member.
Compatibility between Windows and Linux should be solved.
The code can be changed both on the reading side and on the output side. Is there a library for the problem above that can solve the padding and compatibility issues?
struct example
{
    unsigned long Ul;
    int a;
    signed long Sl;
};

struct example eg;
// data input at eg
// strBuff is assumed to be large enough (3 characters per byte)
unsigned char *tempDataPtr = (unsigned char*)(&eg);  // unsigned so "%02X" never sign-extends
for(size_t i = 0 ; i < sizeof(eg) ; i++)
{
    sprintf(&strBuff[i*3], "%02X ", tempDataPtr[i]);
}

Use fixed-width types that have an explicit size:
(And order them from largest to smallest for good measure, to protect against padding discrepancies between fields.)
struct example
{
    uint32_t Ul;
    int32_t Sl;
    int16_t a;
};
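For the output side, a minimal sketch of the same hex dump built on that fixed-width layout (the sample field values and buffer handling are placeholders; how strBuff then gets written to the file stays as in your existing code):

#include <cstddef>
#include <cstdint>
#include <cstdio>

struct example
{
    uint32_t Ul;
    int32_t Sl;
    int16_t a;
};

int main()
{
    example eg{123456u, -42, 7};          // data input at eg
    char strBuff[sizeof(eg) * 3 + 1];

    // Dump the raw bytes; the unsigned char cast keeps "%02X" from sign-extending.
    const unsigned char *p = reinterpret_cast<const unsigned char*>(&eg);
    for (size_t i = 0; i < sizeof(eg); i++)
        std::sprintf(&strBuff[i * 3], "%02X ", p[i]);

    std::puts(strBuff);
    return 0;
}

Note that even with this ordering the struct can still carry trailing padding (sizeof(example) is typically 12 here, with 2 unused bytes after "a"), so both sides must agree on the same fixed-width definition.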

Using Unions to convert an IP address into one 32-bit, two 16-bit, and four 8-bit values

I'm working on a school assignment. I am writing a program that utilizes unions to convert a given IP address in "192.168.1.10" format into its 32-bit single value, two 16-bit values, and four 8-bit values.
I'm having trouble with implementing my structs and unions appropriately, and am looking for insight on the subject. To my understanding, unions point to the same location as the referenced struct, but can look at specified pieces.
Any examples showing how a struct with four 8-bit values and a union can be used together would help. Also, any articles or books that might help me would also be appreciated.
Below is the assignment outline:
Create a program that manages an IP address. Allow the user to enter the IP address as four 8 bit unsigned integer values (just use 4 sequential CIN statements). The program should output the IP address upon the users request as any of the following. As a single 32 bit unsigned integer value, or as four 8 bit unsigned integer values, or as 32 individual bit values which can be requested as a single bit by the user (by entering an integer 0 to 31). Or as all 32 bits assigned into 2 variable sized groups (host group and network group) and outputted as 2 unsigned integer values from 1 bit to 31 bits each.
I was going to cin into int pt1, pt2, pt3, pt4 and assign them to IP_Adress.pt1, ... etc.
struct IP_Adress {
unsigned int pt1 : 8;
unsigned int pt2 : 8;
unsigned int pt3 : 8;
unsigned int pt4 : 8;
};
I have not gotten anything to work appropriately yet. I think I am lacking a true understanding of the implementation of unions.
A union is not a good fit for this assignment. In fact, nothing in the text you quoted even says to use a union at all. And, a union will not help you with the parts of the assignment that deal with "32 individual bit values" or with "32 bits assigned into 2 variable sized groups". Those parts of the assignment will require bit shifting instead. Bit shifting is the better way to solve the other parts of the assignment, as well.
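For illustration, a minimal sketch of the bit-shifting approach (the octet order and variable names are my own assumptions, not part of the assignment text):

#include <cstdint>
#include <iostream>

int main()
{
    unsigned pt1, pt2, pt3, pt4;
    std::cin >> pt1 >> pt2 >> pt3 >> pt4;   // four 8-bit values, e.g. 192 168 1 10

    // Pack into one 32-bit value; pt1 is treated as the most significant octet.
    uint32_t ip = (pt1 << 24) | (pt2 << 16) | (pt3 << 8) | pt4;

    // Unpack the four 8-bit values again by shifting and masking.
    std::cout << ((ip >> 24) & 0xFF) << '.' << ((ip >> 16) & 0xFF) << '.'
              << ((ip >> 8) & 0xFF) << '.' << (ip & 0xFF) << '\n';

    // Any single bit n (0 to 31) on request.
    unsigned n;
    std::cin >> n;
    std::cout << ((ip >> n) & 1) << '\n';
    return 0;
}

The host/network split the assignment mentions works the same way: shift right by the host-group width for the network part, and mask with ((1u << width) - 1) for the host part.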
That being said, if you absolutely must use a union, you are probably looking for something more like this instead:
union IP_Adress {
uint8_t u8[4]; // four 8-bit values
uint16_t u16[2]; // two 16-bit values
uint32_t u32; // one 32-bit value
};
Except that C++ does not allow you to write into one union field and then read from another. C allows that kind of type punning, but it is undefined behavior in C++.
Why is type punning considered UB?
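The well-defined way to get the same effect in C++ is to copy the bytes instead of reading an inactive union member; a minimal sketch using memcpy (the function name is mine):

#include <cstdint>
#include <cstring>

// Assemble four octets into one 32-bit value via the object representation.
// memcpy is defined behavior in C++; note the result follows the host's byte order.
uint32_t pack_bytes(const uint8_t bytes[4])
{
    uint32_t value;
    std::memcpy(&value, bytes, sizeof(value));
    return value;
}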
The asker already knows that doing this can blow up in their face a number of different ways, but here's a simple example for 4 byte, 4x1 byte, and 32x1 bit.
#include <cstdint>
#include <stdexcept>

union bad_idea
{
uint32_t ints; // 32 bit unsigned integer
uint8_t bytes[sizeof(uint32_t)]; // 4 8 bit unsigned integers
};
and then
uint32_t get_int(const bad_idea & in)
{
return in.ints;
}
uint8_t get_byte(const bad_idea & in,
size_t offset)
{
if (offset >= sizeof(uint32_t)) // trap typos and idiots
{
throw std::runtime_error("invalid offset");
}
return in.bytes[offset];
}
bool get_bit(const bad_idea & in,
size_t offset)
{
if (offset >= sizeof(uint32_t)*8)
{
throw std::runtime_error("invalid offset");
}
return (in.ints >> offset) & 1; // shift the required bit to the end (in.ints >> offset)
// then mask off all of the other bits (& 1)
}
Things get a bit ugly getting input because you can't simply
std::cin >> bad.bytes[0];
because it reads a single character. Type in 127 for the first octet and you'll wind up filling bad.bytes[0] through bad.bytes[2] with '1', '2', and '7'.
You need to involve a temporary variable
int temp;
std::cin >> temp;
// tests for valid range in temp
bad.bytes[0] = temp;
or risk some explosive batsmurf like
std::cin >> *(int*)&bad.bytes[0];
// tests for valid value in bad.bytes[0] impossible because aliasing has been broken
// and everything is already <expletive deleted>ed
pardon my C. The more respectable
std::cin >> *reinterpret_cast<int*>(&bad.bytes[0]);
isn't any better. As ugly as it is, use the temporary variable and bundle it up in a function to eliminate the duplication. Frankly this is a time when I'd probably fall back into C and pull out good ol' scanf.
The assignment doesn't say C++, so you can just use typecasting instead of a union. I like to print the 32-bit address out in hex as well, since that makes it easier to check that you have the right 32-bit value:
#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>

#define word8 uint8_t
#define word16 uint16_t
#define word32 uint32_t

const char *sIP = "192.168.0.11";

int main(){
    word32 ip, *pIP;
    pIP = &ip;
    inet_pton(AF_INET, sIP, pIP);
    printf("32bit:%u %x\n", *pIP, *pIP);
    printf("16bit:%u %u\n", *(word16*)pIP, *(((word16*)pIP)+1));
    printf("8bit:%u %u %u %u\n", *(word8*)pIP, *(((word8*)pIP)+1), *(((word8*)pIP)+2), *(((word8*)pIP)+3));
    return 0;
}
Output:
32bit:184592576 b00a8c0
16bit:43200 2816
8bit:192 168 0 11
You could also store the IP as a 4-byte string and do the math to get the 16-bit and 32-bit answers. It's a pretty dumb assignment IMO; I would never use a union to do it.

How to read and write data in 8 bit integers unit form by c++ file functions

Is it possible to store data as integer values from 0 to 255 rather than as 8-bit characters? Although both are the same thing, how can we do it, for example, with the write() function?
Is it ok to directly cast any integer to char and vice versa? Does something like
{
int a[1] = {213};
write((char*)a,1);
}
and
{
int a[1];
read((char*)a,1);
cout<<a;
}
work to get 213 from the same location in the file? It may work on that computer, but is it portable; in other words, is it suitable for cross-platform projects? If I create a file format for each game level (which will store objects' coordinates in the current level's file) using this principle, will it work on other computers/systems/platforms so that the same level can be loaded?
The code you show would write the first (lowest-address) byte of a[0]'s object representation - which may or may not be the byte with the value 213. The particular object representation of an int is implementation-defined.
The portable way of writing one byte with the value of 213 would be
unsigned char c = a[0];
write(&c, 1);
You have the right idea, but it could use a bit of refinement.
{
int intToWrite = 213;
unsigned char byteToWrite = 0;
if ( intToWrite > 255 || intToWrite < 0 )
{
doError();
return;
}
// since your range is 0-255, you really want the low order byte of the int.
// Just reading the 1st byte may or may not work for your architecture. I
// prefer to let the compiler handle the conversion via casting.
byteToWrite = (unsigned char) intToWrite;
write( &byteToWrite, sizeof(byteToWrite) );
// you can hard code the size, but I try to be in the habit of using sizeof
// since it is better when dealing with multibyte types
}
{
int a = 0;
unsigned char toRead = 0;
// just like the write, the byte ordering of the int will depend on your
// architecture. You could write code to explicitly handle this, but it's
// easier to let the compiler figure it out via implicit conversions
read( &toRead, sizeof(toRead) );
a = toRead;
cout<<a;
}
If you need to minimize space or otherwise can't afford the extra char sitting around, then it's definitely possible to read/write a particular location in your integer. However, it can mean pulling in extra headers (e.g. for htons/ntohs) or resorting to annoying platform #defines.
It will work, with some caveats:
Use reinterpret_cast<char*>(x) instead of (char*)x to be explicit that you’re performing a cast that’s ordinarily unsafe.
sizeof(int) varies between platforms, so you may wish to use a fixed-size integer type from <cstdint> such as int32_t.
Endianness can also differ between platforms, so you should be aware of the platform byte order and swap byte orders to a consistent format when writing the file. You can detect endianness at runtime and swap bytes manually, or use htonl and ntohl to convert between host and network (big-endian) byte order.
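Putting those caveats together, a minimal sketch of writing and reading one value portably, assuming streams opened in binary mode (the function names are mine):

#include <cstdint>
#include <fstream>
#include <arpa/inet.h>   // htonl/ntohl on POSIX; Windows has them in <winsock2.h>

// Write a 32-bit value with a fixed size and a fixed (network) byte order.
void write_value(std::ofstream &out, int32_t value)
{
    uint32_t wire = htonl(static_cast<uint32_t>(value));
    out.write(reinterpret_cast<const char*>(&wire), sizeof(wire));
}

// Read it back and convert to the host's byte order.
int32_t read_value(std::ifstream &in)
{
    uint32_t wire = 0;
    in.read(reinterpret_cast<char*>(&wire), sizeof(wire));
    return static_cast<int32_t>(ntohl(wire));
}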
Also, as a practical matter, I recommend you prefer text-based formats—they’re less compact, but far easier to debug when things go wrong, since you can examine them in any text editor. If you determine that loading and parsing these files is too slow, then consider moving to a binary format.

why add fillers in a c++ struct?

What are the effect of fillers in a c++ struct? I often see them in some c++ api. For example:
struct example
{
unsigned short a;
unsigned short b;
char c[3];
char filler1;
unsigned short e;
char filler2;
unsigned int g;
};
This struct is meant to be transported over the network:
struct example
{
unsigned short a; // 2 bytes
unsigned short b; // 2 bytes
// 4 bytes consumed
char c[3]; // 3 bytes
char filler1; // 1 byte
// 4 bytes consumed
unsigned short e; // 2 bytes
char filler2; // 1 byte
// 3 bytes consumed, should be filler2[2]
unsigned int g; // 4 bytes
};
Because sometimes you don't actually control the format of the data you're using.
The format may be specified by something beyond your control. For example, it may be created in a system with different alignment requirements to yours.
Alternatively, the data may have real data in those filler areas that your code doesn't care about.
Those fillers are usually inserted to explicitly make sure some of the members of a structure are naturally aligned, i.e. their offset inside the structure is a multiple of their size.
In the example below, assume char is 1 byte, short is 2 bytes, and int is 4 bytes.
struct example
{
unsigned short a;
unsigned short b;
char c[3];
char filler1;
unsigned short e; // starts at offset 8
char filler2[2];
unsigned int g; // starts at offset 12
};
If you don't specify any fillers, a compiler will usually add the necessary padding bytes to ensure a proper alignment of the structure members.
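If you want to confirm where the compiler actually placed each member, a short compile-time check with offsetof (the expected numbers assume the 1/2/4-byte sizes above):

#include <cstddef>

struct example
{
    unsigned short a;
    unsigned short b;
    char c[3];
    char filler1;
    unsigned short e;
    char filler2[2];
    unsigned int g;
};

static_assert(offsetof(example, e) == 8,  "e starts at offset 8");
static_assert(offsetof(example, g) == 12, "g starts at offset 12");
static_assert(sizeof(example) == 16,      "the struct occupies 16 bytes");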
Btw, these fields can also be used for reserved fields that might appear in the future.
updated:
Since it has been mentioned that a structure is a network packet, the fillers are required to get a structure that is compatible with the one being passed from another host.
However, inserting filler bytes in this case might not be enough (especially if portability is required). If these structures are to be sent over a network as is (i.e. without manually packing them into a separate buffer for sending), you have to inform the compiler that the structure should be packed.
With the Microsoft compiler this can be achieved using #pragma pack:
#pragma pack(1)
struct T {
char t;
int i;
short j;
double k;
};
In gcc you can use __attribute__((packed))
struct foo {
char c;
int x;
} __attribute__((packed));
However, many people prefer to manually pack/unpack structures into a raw byte array, because accessing misaligned data on some systems might not be [properly] supported.
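A minimal sketch of that manual approach for the struct above, assuming the two hosts agree on big-endian (network) byte order on the wire (the function name is mine):

#include <stdint.h>
#include <stddef.h>
#include <string.h>

// Pack the fields one by one so the wire layout never depends on the
// compiler's padding or on the host's endianness.
size_t pack_example(uint8_t *buf, uint16_t a, uint16_t b,
                    const char c[3], uint16_t e, uint32_t g)
{
    size_t off = 0;
    buf[off++] = a >> 8;            buf[off++] = a & 0xFF;
    buf[off++] = b >> 8;            buf[off++] = b & 0xFF;
    memcpy(buf + off, c, 3);        off += 3;
    buf[off++] = e >> 8;            buf[off++] = e & 0xFF;
    buf[off++] = g >> 24;           buf[off++] = (g >> 16) & 0xFF;
    buf[off++] = (g >> 8) & 0xFF;   buf[off++] = g & 0xFF;
    return off;   // 13 bytes on the wire, no filler needed at all
}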
Depending on what code you're working with, they may be attempting to align the structure on word boundaries (32-bit in your case). This is a speed optimization, and doing it by hand has largely been rendered obsolete by decent optimizing compilers. However, if the compiler was instructed not to optimize this piece of code, or the compiler is very low-end (e.g. for an embedded system), it may be better to handle this yourself. It basically boils down to how much you trust the compiler.
The other reason is for writing binary files, where reserved bytes have been left in the file format specification.

Read Bytes off a Serial Device (and make sense of them??)

I'm pulling my hair out trying to figure out how to read bytes off a serial device, check a checksum, and then convert them into something that I can actually read.
I have a device which "should" be sending me various messages, each started with the byte $83 and ended with the byte $84. The second-to-last byte is supposedly a checksum, generated by XORing all the other values together, so it can be compared against a locally computed value.
The actual values coming back should be alphanumeric, but I can't make heads or tails of the data. I'm newish to C++ - I'm sure that's not helping.
I've read several guides on serial programming, but I'm lost.
Can anyone help me, link me, or show me how to read bytes off a serial device, watch for $83 and $84, and then make sense of the data in between?
Here is the format of each message:
$FF byte Destination Address
$10 byte Message Length 16 Bytes
$37 byte Message Type
$00 byte Message subtype
BankAngle int -179 to +180
PitchAngle int -90 to +90
YawAngle int -179 to +180
Slip sint -50 to +50
GForce fps 0 to 6G
MISC byte Mode bits
Heading word 0 to 359
N/A not used
Voltage byte input voltage
This is all coming off an MGL SP-4 AHRS, and for ease of use I am targeting a Linux system, specifically Ubuntu. I am using the GCC compiler end the Eclipse CDT for development.
Where I'm lost
I can read the data into a buffer, but then I'm not versed enough in C++ to make sense of it after that, since it's not ASCII. I'm interested in learning what I need to know, but I don't know what I need to know.
I have a Perl / Java background.
Accomplishing this is going to be wholly dependent on the Operating System and platform that you target. Since the device you mention is mounted internally to an aircraft in the general use-case, I will assume you are not targeting a Windows platform, but more likely a Linux or embedded system. There are a number of resources available for performing serial I/O on such platforms (for example: the Serial Programming HOW-TO) that you should look at. Additionally, as suggested in the device's Installation Manual (available here about halfway down the page), you should "Consult the SP-4 OEM manual for message formats and message type selection." I suspect you will obtain the most relevant and useful information from that document. You may want to check if the manufacturer provides an API for your platform, as that would negate the need for you to implement the actual communication routine.
As far as making sense of the data, once you can read bytes from your serial interface, you can leverage structs and unions to make accessing your data more programmer-friendly. For the rough message outline you provided, something like this might be appropriate:
struct _message
{
uint8_t DestinationAddress;
uint8_t MessageLength;
uint8_t MessageType;
uint8_t MessageSubtype;
int32_t BankAngle; //assuming an int is 32 bits
int32_t PitchAngle;
int32_t YawAngle;
sint_t Slip; //not sure what a 'sint' is
fps_t GForce; //likewise 'fps'
uint8_t MISC;
uint16_t Heading; //assuming a word is 16 bits
uint8_t Unused[UNUSED_BYTES]; //however many there are
uint8_t Voltage; //a byte, per the format above
};
struct myMessage
{
union
{
char raw[MAX_MESSAGE_SIZE]; //sizeof(largest possible message)
struct _message message;
};
};
This way, if you were to declare struct myMessage serialData;, you can read your message into serialData.raw, and then conveniently access its members (e.g. serialData.message.DestinationAddress).
Edit: In response to your edit, I'll provide an example of how to make sense of your data. This example supposes there is only one message type you have to worry about, but it can be easily extended to other types.
struct myMessage serialData;
memcpy(serialData.raw, serialDataBuffer, MAX_MESSAGE_SIZE); //copy data from your buffer
if(serialData.message.MessageType == SOME_MESSAGE_TYPE)
{
//you have usable data here.
printf("I am a SOME_MESSAGE!\n");
}
Now, supposing that these integral types are really only useful for data transmission, you need to translate these bits into "usable data". Say one of these fields is actually an encoded floating-point number. One common scheme is to select a bit-weight (sometimes also called resolution). I don't know if this is directly applicable to your device, or if it is what the real values are, but let's say for the sake of discussion, that the YawAngle field had a resolution of 0.00014 degrees/bit. To translate the value in your message (serialData.message.YawAngle) from its uint32_t value to a double, for example, you might do this:
double YawAngleValue = 0.00014 * serialData.message.YawAngle;
...and that's about it. The OEM manual should tell you how the data is encoded, and you should be able to work out how to decode it from there.
Now, let's say you've got two message types to handle. The one I've already shown you, and a theoretical CRITICAL_BITS message. To add that type using the scheme I've laid out, you would first define the CRITICAL_BITS structure (perhaps as follows):
struct _critical_bits
{
uint8_t DestinationAddress;
uint8_t MessageLength;
uint8_t MessageType;
uint8_t MessageSubtype;
uint32_t SomeCriticalData;
};
...and then add it to the struct myMessage definition like so:
struct myMessage
{
union
{
char raw[MAX_MESSAGE_SIZE]; //sizeof(largest possible message)
struct _message message;
struct _critical_bits critical_message;
};
};
...then you can access the SomeCriticalData just like the other fields.
if(serialData.message.MessageType == CRITICAL_MESSAGE_TYPE)
{
uint32_t critical_bits = serialData.critical_message.SomeCriticalData;
}
You can find a little more information on how this works by reading about structs. Bear in mind that instances of the struct myMessage type will only ever contain one set of meaningful data at a time. Put more simply, if serialData contains CRITICAL_MESSAGE_TYPE data, then the data in serialData.critical_message is valid, but serialData.message is not - even though the language does not prevent you from accessing that data if you request it.
Edit: One more example; to calculate the checksum of a message, using the algorithm you've specified, you would probably want something like this (assuming you already know the message is completely within the buffer):
uint8_t calculate_checksum(struct myMessage *data)
{
uint8_t number_bytes = data->message.MessageLength;
uint8_t checksum = 0;
int i;
for(i=0; i<number_bytes; ++i)
{
//this performs a XOR with checksum and the byte
//in the message at offset i
checksum ^= data->raw[i];
}
return checksum;
}
You might need to adjust that function for bytes that aren't included, check to make sure that data != NULL, etc. but it should get you started.
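And a hedged sketch of how you might validate a frame once you've located the $83/$84 markers in your buffer (the exact framing, and whether the markers themselves are included in the XOR, is something the OEM manual has to confirm):

#include <stdint.h>
#include <stddef.h>

// 'frame' points at the first byte after the 0x83 start marker; 'len' counts
// everything up to and including the checksum byte (i.e. excluding the 0x84
// end marker). Returns nonzero if the XOR of the payload matches the checksum.
int frame_is_valid(const uint8_t *frame, size_t len)
{
    if (len < 2)
        return 0;
    uint8_t checksum = 0;
    for (size_t i = 0; i + 1 < len; ++i)
        checksum ^= frame[i];
    return checksum == frame[len - 1];
}

If the checksum fails, discard those bytes and resynchronise on the next 0x83.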

force a bit field read to 32 bits

I am trying to perform a less-than-32-bit read over the PCI bus to a VME bridge chip (Tundra Universe II), which will then go onto the VME bus and be picked up by the target.
The target VME application only accepts D32 (a data width read of 32 bits) and will ignore anything else.
If I use a bit field structure mapped over a VME window (mmap'd into main memory) I CAN read bit fields >24 bits, but anything less fails. ie :-
struct works {
unsigned int a:24;
};
struct fails {
unsigned int a:1;
unsigned int b:1;
unsigned int c:1;
};
struct main {
works work;
fails fail;
};
volatile struct main *reg = function_that_creates_and_maps_the_vme_windows_returns_address();
This shows that the works struct is read as 32 bits, but a read through the fails struct, e.g. reg->fail.a, gets factored down to an X-bit read (where X might be 16 or 8?).
So the questions are :
a) Where is this scaled down? Compiler? OS? or the Tundra chip?
b) What is the actual size of the read operation performed?
I basically want to rule out everything but the chip. Documentation on that is on the web, but if it can be proved that the data width requested over the PCI bus is 32 bits, then the problem can be blamed on the Tundra chip!
edit:-
Concrete example, code was:-
struct SVersion
{
unsigned title : 8;
unsigned pecversion : 8;
unsigned majorversion : 8;
unsigned minorversion : 8;
} Version;
So now I have changed it to this :-
union UPECVersion
{
struct SVersion
{
unsigned title : 8;
unsigned pecversion : 8;
unsigned majorversion : 8;
unsigned minorversion : 8;
} Version;
unsigned int dummy;
};
And the base main struct :-
typedef struct SEPUMap
{
...
...
UPECVersion PECVersion;
};
So I still have to change all my baseline code
// perform dummy 32bit read
pEpuMap->PECVersion.dummy;
// get the bits out
x = pEpuMap->PECVersion.Version.minorversion;
And how do I know if the second read won't actually do a real read again, as my original code did? (Instead of using the already-read bits via the union!)
Your compiler is adjusting the size of your struct to a multiple of its memory alignment setting. Almost all modern compilers do this. On some processors, variables and instructions have to begin on memory addresses that are multiples of some memory alignment value (often 32-bits or 64-bits, but the alignment depends on the processor architecture). Most modern processors don't require memory alignment anymore - but almost all of them see substantial performance benefit from it. So the compilers align your data for you for the performance boost.
However, in many cases (such as yours) this isn't the behavior you want. The size of your structure, for various reasons, can turn out to be extremely important. In those cases, there are various ways around the problem.
One option is to force the compiler to use different alignment settings. The options for doing this vary from compiler to compiler, so you'll have to check your documentation. It's usually a #pragma of some sort. On some compilers (the Microsoft compilers, for instance) it's possible to change the memory alignment for only a very small section of code. For example (in VC++):
#pragma pack(push) // save the current alignment
#pragma pack(1) // set the alignment to one byte
// Define variables that are alignment sensitive
#pragma pack(pop) // restore the alignment
Another option is to define your variables in other ways. Intrinsic types are not resized based on alignment, so instead of your 24-bit bitfield, another approach is to define your variable as an array of bytes.
Finally, you can just let the compilers make the structs whatever size they want and manually record the size that you need to read/write. As long as you're not concatenating structures together, this should work fine. Remember, however, that the compiler is giving you padded structs under the hood, so if you make a larger struct that includes, say, a works and a fails struct, there will be padded bits in between them that could cause you problems.
On most compilers, it's going to be darn near impossible to create a data type smaller than 8 bits. Most architectures just don't think that way. This shouldn't be a huge problem because most hardware devices that use data types smaller than 8 bits end up arranging their packets in such a way that they still come in 8-bit multiples, so you can do the bit manipulations to extract or encode the values on the data stream as it leaves or comes in.
For all of the reasons listed above, a lot of code that works with hardware devices like this work with raw byte arrays and just encode the data within the arrays. Despite losing a lot of the conveniences of modern language constructs, it ends up just being easier.
I am wondering about the value of sizeof(struct fails). Is it 1? In this case, if you perform the read by dereferencing a pointer to a struct fails, it looks correct to issue a D8 read on the VME bus.
You can try to add a field unsigned int unused:29; to your struct fails.
The size of a struct is not equal to the sum of the size of its fields, including bit fields. Compilers are allowed, by the C and C++ language specifications, to insert padding between fields in a struct. Padding is often inserted for alignment purposes.
The common method in embedded systems programming is to read the data as an unsigned integer then use bit masking to retrieve the interesting bits. This is due to the above rule that I stated and the fact that there is no standard compiler parameter for "packing" fields in a structure.
I suggest creating an object ( class or struct) for interfacing with the hardware. Let the object read the data, then extract the bits as bool members. This puts the implementation as close to the hardware. The remaining software should not care how the bits are implemented.
When defining bit field positions / named constants, I suggest this format:
#define VALUE (1 << BIT_POSITION)
// OR
const unsigned int VALUE = 1 << BIT_POSITION;
This format is more readable and has the compiler perform the arithmetic. The calculation takes place during compilation and has no impact during run-time.
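A minimal sketch of that read-then-mask style (the register layout and bit positions here are invented purely for illustration):

#include <cstdint>

// Hypothetical layout: bit 0 is a ready flag, bits 4..7 hold a 4-bit status code.
const uint32_t READY_MASK  = 1u << 0;
const uint32_t STATUS_MASK = 0xFu << 4;

struct StatusRegister
{
    explicit StatusRegister(volatile uint32_t *addr)
        : raw(*addr) {}                          // one full 32-bit read, no bit fields

    bool ready() const { return (raw & READY_MASK) != 0; }
    unsigned status() const { return (raw & STATUS_MASK) >> 4; }

    uint32_t raw;
};

The rest of the software only sees ready() and status(); how the bits are laid out stays inside this one type.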
As an example, the Linux kernel has inline functions that explicitly handle memory-mapped IO reads and writes. In newer kernels it's a big macro wrapper that boils down to an inline assembly movl instruction, but in older kernels it was defined like this:
#define readl(addr) (*(volatile unsigned int *) (addr))
#define writel(b,addr) ((*(volatile unsigned int *) (addr)) = (b))
Ian - if you want to be sure of the size of the things you're reading/writing, I'd suggest not using structs like this to do it. It's possible that the sizeof of the fails struct is just 1 byte; the compiler is free to decide what it should be based on optimizations etc. I'd suggest reading/writing explicitly using ints, or generally the types whose sizes you need to guarantee, and then converting to a union/struct afterwards, where you don't have those limitations.
It is the compiler that decides what size read to issue. To force a 32 bit read, you could use a union:
union dev_word {
struct dev_reg {
unsigned int a:1;
unsigned int b:1;
unsigned int c:1;
} fail;
uint32_t dummy;
};
volatile union dev_word *vme_map_window();
If reading the union through a volatile-qualified pointer isn't enough to force a read of the whole union (I would think it would be - but that could be compiler-dependent), then you could use a function to provide the required indirection:
volatile union dev_word *real_reg; /* Initialised with vme_map_window() */
union dev_word * const *reg_func(void)
{
static union dev_word local_copy;
static union dev_word * const static_ptr = &local_copy;
local_copy = *real_reg;
return &static_ptr;
}
#define reg (*reg_func())
...then (for compatibility with the existing code) your accesses are done as:
reg->fail.a
The method described in another answer below - using the gcc flag -fstrict-volatile-bitfields and defining bit-field variables as volatile u32 - works, but the total number of bits defined must be greater than 16.
For example:
typedef union{
vu32 Word;
struct{
vu32 LATENCY :3;
vu32 HLFCYA :1;
vu32 PRFTBE :1;
vu32 PRFTBS :1;
};
}tFlashACR;
.
tFLASH* const pFLASH = (tFLASH*)FLASH_BASE;
#define FLASH_LATENCY pFLASH->ACR.LATENCY
.
FLASH_LATENCY = Latency;
causes gcc to generate code
.
ldrb r1, [r3, #0]
.
which is a byte read. However, changing the typedef to
typedef union{
vu32 Word;
struct{
vu32 LATENCY :3;
vu32 HLFCYA :1;
vu32 PRFTBE :1;
vu32 PRFTBS :1;
vu32 :2;
vu32 DUMMY1 :8;
vu32 DUMMY2 :8;
};
}tFlashACR;
changes the resultant code to
.
ldr r3, [r2, #0]
.
I believe the only solution is to
1) edit/create my main struct as all 32bit ints (unsigned longs)
2) keep my original bit-field structs
3) each access I require,
3.1) I have to read the struct member as a 32bit word, and cast it into the bit-field struct,
3.2) read the bit-field element I require. (and for writes, set this bit-field, and write the word back!)
(1) Which is a shame, because then I lose the intrinsic types that each member of the "main"/"SEPUMap" struct has.
End solution :-
Instead of :-
printf("FirmwareVersionMinor: 0x%x\n", pEpuMap->PECVersion);
This :-
SPECVersion ver = *(SPECVersion*)&pEpuMap->PECVersion;
printf("FirmwareVersionMinor: 0x%x\n", ver.minorversion);
Only problem I have is writing! (Writes are now read/modify/writes!)
// Read - Get current
_HVPSUControl temp = *(_HVPSUControl*)&pEpuMap->HVPSUControl;
// Modify - set to new value
temp.OperationalRequestPort = true;
// Write
volatile unsigned int *addr = reinterpret_cast<volatile unsigned int*>(&pEpuMap->HVPSUControl);
*addr = *reinterpret_cast<volatile unsigned int*>(&temp);
Just have to tidy that code up into a method!
#define writel(addr, data) ( *(volatile unsigned long*)(&addr) = (*(volatile unsigned long*)(&data)) )
I had the same problem on ARM using the GCC compiler, where writes into memory are done byte by byte rather than as a 32-bit word.
The solution is to define the bit-fields using volatile uint32_t (or whatever size you need to write):
union {
volatile uint32_t XY;
struct {
volatile uint32_t XY_A : 4;
volatile uint32_t XY_B : 12;
};
};
but while compiling you need to pass gcc or g++ this parameter:
-fstrict-volatile-bitfields
There is more on this in the gcc documentation.