I have an 8-byte CAN bus message:
15 E0 7F 34 17 5C 02 33
There is, for example, MCU_SelfCheckStatus at bit 52, one bit long, or MCU_MotorTemp at bit 47, eight bits long.
The signal byte order is Motorola (big-endian).
My reader's byte order is little-endian.
What would be an easy way to get them as a labeled data structure? Something like:
bool isOk = msg.MCU_SelfCheckStatus;
uint8_t temp = msg.MCU_MotorTemp;
I thought about unions, but I don't know if they allow such things.
I would simply apply a bitwise AND (plus a shift for the multi-bit field):
bool isOk = msg & 0x0010'0000'0000'0000;
uint8_t temp = (msg >> 47) & 0xFF;
Explanation:
Why AND with 0x0010'0000'0000'0000 for MCU_SelfCheckStatus? If you set only bit 52 of an 8-byte type, you get exactly that number (1ULL << 52).
Why shift and mask for MCU_MotorTemp? The field is 8 bits long starting at bit 47, so shifting the message right by 47 brings the field down to the bottom, and masking with 0xFF keeps exactly those 8 bits.
Hope it helps.
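If it helps, here is a minimal standalone sketch of that approach, assuming the 8 frame bytes are first combined into a uint64_t with byte 0 as the most significant byte; the helper name can_to_u64 is made up for illustration. Note that some tools number Motorola signals so that the start bit is the field's MSB, in which case the 8-bit field would occupy bits 40..47 and the shift would be 40 instead of 47.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Combine the 8 frame bytes into one 64-bit value, byte 0 first. */
static uint64_t can_to_u64(const uint8_t frame[8])
{
    uint64_t v = 0;
    for (int i = 0; i < 8; i++)
        v = (v << 8) | frame[i];
    return v;
}

int main(void)
{
    const uint8_t frame[8] = {0x15, 0xE0, 0x7F, 0x34, 0x17, 0x5C, 0x02, 0x33};
    uint64_t msg = can_to_u64(frame);

    bool    isOk = (msg >> 52) & 0x01; /* 1-bit signal at bit 52 */
    uint8_t temp = (msg >> 47) & 0xFF; /* 8-bit signal starting at bit 47 */

    printf("isOk=%d temp=%u\n", isOk, temp);
    return 0;
}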
I got it working with unions:
typedef union
{
    struct
    {
        unsigned int MCU_DC_Link_Volt        : 16;
        unsigned int MCU_DC_Link_Curr        : 16;
        unsigned int MCU_InverterTemp        : 8;
        unsigned int MCU_MotorTemp           : 8;
        unsigned int MCU_SelfCheckStatus     : 1;
        unsigned int MCU_IgnitionSts         : 1;
        unsigned int MCU_VCU_InverterCtrlSts : 1;
        unsigned int MCU_2_MessageCounter    : 4;
        unsigned int MCU_2_Checksum          : 8;
    };
    uint8_t raw[8];
} MCU_2_415;
In the signal layout image, the LSB for the first value is 8, while the MSB is 7.
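As a usage sketch, the union can be filled straight from the received payload. Keep in mind that bit-field order within a byte is implementation-defined, so this only works if the compiler's layout (here assumed: first-declared field in the least significant bits, as GCC does on little-endian targets) matches the frame:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef union
{
    struct
    {
        unsigned int MCU_DC_Link_Volt        : 16;
        unsigned int MCU_DC_Link_Curr        : 16;
        unsigned int MCU_InverterTemp        : 8;
        unsigned int MCU_MotorTemp           : 8;
        unsigned int MCU_SelfCheckStatus     : 1;
        unsigned int MCU_IgnitionSts         : 1;
        unsigned int MCU_VCU_InverterCtrlSts : 1;
        unsigned int MCU_2_MessageCounter    : 4;
        unsigned int MCU_2_Checksum          : 8;
    };
    uint8_t raw[8];
} MCU_2_415;

int main(void)
{
    const uint8_t payload[8] = {0x15, 0xE0, 0x7F, 0x34, 0x17, 0x5C, 0x02, 0x33};

    MCU_2_415 msg;
    memcpy(msg.raw, payload, sizeof msg.raw); /* fill the union through its raw view */

    bool    isOk = msg.MCU_SelfCheckStatus;
    uint8_t temp = msg.MCU_MotorTemp;
    printf("isOk=%d temp=%u\n", isOk, temp);
    return 0;
}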
So here is an example:
#include <cstdio>

struct field
{
    unsigned int a : 8;
    unsigned int b : 8;
    unsigned int c : 8;
    unsigned int d : 8;
};

union test
{
    unsigned int raw;
    field bits;
};

int main()
{
    test aUnion;
    aUnion.raw = 0xabcdef;
    printf("a: %x \n", aUnion.bits.a);
    printf("b: %x \n", aUnion.bits.b);
    printf("c: %x \n", aUnion.bits.c);
    printf("d: %x \n", aUnion.bits.d);
    return 0;
}
Now, running this I get:
a: ef
b: cd
c: ab
d: 0
And I guess I just don't really get what's happening here. I set raw to a value, and since this is a union everything else pulls from that same memory, since the fields have all been set to be smaller than an unsigned int? So the bit field is based on raw? But how does that map out? Why is d 0 in this instance?
I would appreciate any help here.
Using the hexadecimal representation of an integer is useful because it makes clear what the value of each byte of the integer is. So the assignment
aUnion.raw = 0xabcdef;
means that the least significant byte has the value 0xef, the second least significant byte has the value 0xcd, and so on. But you are setting the raw field of the union, which is an integer, so (here) it is 4 bytes long. In the previous representation the most significant byte is missing, so it can be written as
aUnion.raw = 0x00abcdef;
(it is like making explicit that an integer x = 42 has 0 hundreds, 0 thousands and so on).
Your union fields represent respectively a = byte[0], b = byte[1], c = byte[2] and d = byte[3] of the integer raw, since in a union all the members share the same memory location. This mapping holds because you are running your code on a little-endian architecture (least significant bytes come first).
So:
a = byte[0] of raw = 0xef
b = byte[1] of raw = 0xcd
c = byte[2] of raw = 0xab
d = byte[3] of raw = 0x00
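If you want to see this byte order directly, one quick sketch is to walk the integer's storage through an unsigned char pointer:

#include <stdio.h>

int main(void)
{
    unsigned int raw = 0xabcdef;
    const unsigned char *p = (const unsigned char *)&raw; /* byte access via char is always allowed */

    for (size_t i = 0; i < sizeof raw; i++)
        printf("byte[%zu] = 0x%02x\n", i, p[i]);
    return 0;
}

On a little-endian machine this prints 0xef, 0xcd, 0xab, 0x00 in order, matching a through d above.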
It's because your assigned value isn't long enough to fill all 32 bits of the unsigned int (not all 32 bits are set). Because it is only 24 bits long, the bit field d shows the hex value 00. Try, for example,
aUnion.raw = 0xffabcdef;
which will produce
a: ef
b: cd
c: ab
d: ff
Since the d bit field occupies bits 24-31 (on little endian), unless the assigned unsigned int value has some of those bits set, that bit field won't show a value either.
I am trying to convert an IEEE 754 floating point representation to its decimal equivalent, and I have the example data [7E FF 01 46 4B CD CC CC CC CC CC 10 40 1B 7E], which is in hex.
char strResponseData[STATUS_BUFFERSIZE] = {0};
unsigned long strData = (((strResponseData[12] & 0xFF) << 512)
                       | ((strResponseData[11] & 0xFF) << 256)
                       | ((strResponseData[10] & 0xFF) << 128)
                       | ((strResponseData[9]  & 0xFF) << 64)
                       | ((strResponseData[8]  & 0xFF) << 32)
                       | ((strResponseData[7]  & 0xFF) << 16)
                       | ((strResponseData[6]  & 0xFF) << 8)
                       |  (strResponseData[5]  & 0xFF));
value = IEEEHexToDec(strData,1);
then i am passing this value to this function
double IEEEHexToDec(unsigned long number, int isDoublePrecision)
{
    int mantissaShift = isDoublePrecision ? 52 : 23;
    unsigned long exponentMask = isDoublePrecision ? 0x7FF0000000000000 : 0x7f800000;
    int bias = isDoublePrecision ? 1023 : 127;
    int signShift = isDoublePrecision ? 63 : 31;

    int sign = (number >> signShift) & 0x01;
    int exponent = ((number & exponentMask) >> mantissaShift) - bias;

    int power = -1;
    double total = 0.0;
    for (int i = 0; i < mantissaShift; i++)
    {
        int calc = (number >> (mantissaShift - i - 1)) & 0x01;
        total += calc * pow(2.0, power);
        power--;
    }

    double value = (sign ? -1 : 1) * pow(2.0, exponent) * (total + 1.0);
    return value;
}
But in return I am getting the value 0, and when I try to print strData it gives me only CCCCCD.
I am using the Eclipse IDE.
I would appreciate any suggestions.
((strResponseData[12] & 0xFF)<< 512 )
First, the << operator takes a number of bits to shift; you seem to be confusing it with multiplication by the resulting power of two. While that has the same effect, you need to supply the exponent. Given that you have no typical data types of 512-bit width, it's fairly certain that this should actually be
((strResponseData[12] & 0xFF)<< 9 )
Next, the value to be shifted must be of a type wide enough to hold the result before you do the shift. A char is obviously not sufficient, so you need to explicitly cast the value to a sufficiently wide type before you perform the shift.
Additionally, keep in mind that depending on your platform an unsigned long may be either a 32-bit or a 64-bit type. If a bit shift would produce a result that does not fit in 32 bits, you may want to use an unsigned long long, or better yet make things unambiguous, for example with #include <stdint.h> and a type such as uint32_t or uint64_t. Given that your question is tagged "embedded", this is especially important to keep in mind, as you might be targeting a 32 (or even 8) bit processor while building algorithms to test on the development machine.
Further, a char can be either a signed or an unsigned type. Before shifting, you should make that explicit. Given that you are combining multiple pieces of something, it is almost certain that at least most of these should be treated as unsigned.
So probably you want something like
((uint32_t)(strResponseData[12] & 0xFF)<< 9 )
Unless you are on an odd platform where char is not 8 bits (for example some TI DSPs), you probably don't need to pre-mask with 0xff, but it's not hurting anything.
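Putting those points together, here is a sketch of how the eight bytes at indices 5 through 12 could be assembled into a 64-bit value and reinterpreted as a double, assuming they hold a little-endian IEEE 754 double (which your example data suggests):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* the example payload; bytes 5..12 look like a little-endian double */
    const uint8_t strResponseData[15] = {
        0x7E, 0xFF, 0x01, 0x46, 0x4B, 0xCD, 0xCC, 0xCC,
        0xCC, 0xCC, 0xCC, 0x10, 0x40, 0x1B, 0x7E
    };

    uint64_t bits = 0;
    for (int i = 0; i < 8; i++)          /* byte 5 is the least significant */
        bits |= (uint64_t)strResponseData[5 + i] << (8 * i);

    double value;
    memcpy(&value, &bits, sizeof value); /* reinterpret the bit pattern */
    printf("%f\n", value);               /* prints 4.200000 */
    return 0;
}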
Finally, it is not 100% clear what you are starting with:
i have an example data [7E FF 01 46 4B CD CC CC CC CC CC 10 40 1B 7E] which is in hex.
This is ambiguous, as it is not clear whether you mean
[0x7e, 0xff, 0x01, 0x46...]
which would be an array of byte values that debugging code has printed out in hex for human convenience, or whether you actually have something such as
"[7E FF 01 46 .... ]"
which is a string of text containing a human-readable representation of hex digits as printable characters. In the latter case, you'd first have to convert the character representation of the hex digits or octets into numeric values.
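In that latter case, a minimal parsing sketch (assuming the exact "[7E FF ...]" text format shown; the buffer names are made up):

#include <ctype.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *text = "[7E FF 01 46 4B CD CC CC CC CC CC 10 40 1B 7E]";
    uint8_t bytes[32];
    size_t n = 0;

    const char *p = text;
    while (*p && n < sizeof bytes)
    {
        if (!isxdigit((unsigned char)*p)) { p++; continue; } /* skip '[', ' ', ']' */
        char *end;
        bytes[n++] = (uint8_t)strtoul(p, &end, 16);          /* one hex octet */
        p = end;
    }

    for (size_t i = 0; i < n; i++)
        printf("%02X ", bytes[i]);
    printf("\n");
    return 0;
}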
I'm using a 24-bit I2C ADC with the Arduino, and there is no 3-byte (24-bit) data type, so I instead used uint32_t, which is a 32-bit unsigned int. My actual output, however, is a 24-bit signed number, as you can see below:
Also here is the code that I used to read the results if you're interested:
uint32_t readData(){
    Wire.beginTransmission(address);
    Wire.write(0x10);
    Wire.endTransmission();
    Wire.requestFrom(address, 3);
    byte dataMSB = Wire.read();
    byte data = Wire.read();
    byte dataLSB = Wire.read();
    uint32_t data32 = dataMSB;
    data32 <<= 8;
    data32 |= data;
    data32 <<= 8;
    data32 |= dataLSB;
    return data32;
}
In order for this number to be useful, I need to convert it back to a 24-bit signed integer. I'm not sure how to do that, or even if it's possible, because 24 is not a power of 2, so I'm a bit stuck. It would be great if somebody could help me, as I'm almost finished with the project and this is one of the last few steps.
The problem is that there's no safe and portable way to use shifting for sign extension in C; at best it is implementation-defined. So if you want to do it portably, you need to convert your two's-complement value manually into a signed integer.
#include <stdint.h>

int32_t cvt24bit(uint32_t val) {
    val &= 0xffffff; // limit to 24 bits -- may not be necessary
    if (val >= (UINT32_C(1) << 23))
        return (int32_t)val - (INT32_C(1) << 24);
    else
        return val;
}
This will take your 24-bit two's-complement value in a uint32_t and convert it to a (signed) int32_t.
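Usage with the readData() function from the question would then be just:

int32_t value = cvt24bit(readData()); /* raw ADC reading, now correctly signed */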
Conversion from 24-bit two’s complement in a uint32_t to int32_t can be done with:
int32_t Convert(uint32_t x)
{
    int32_t t = x & 0xffffff;
    return t - (t >> 23 << 24);
}
The x & 0xffffff ensures the number has no spurious bits above bit 23. If it is certain no such bits are set, then the statement can be just int32_t t = x;.
Then t >> 23 removes bits 0 to 22, leaving just bit 23, which is the sign bit for a 24-bit integer. Then << 24 scales this, producing either 0 (for positive numbers) or 2^24 (for negative numbers). Subtracting that from t produces the desired value.
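A quick sanity check of Convert() with a few boundary values (a small hypothetical test):

#include <assert.h>
#include <stdint.h>

int32_t Convert(uint32_t x)
{
    int32_t t = x & 0xffffff;
    return t - (t >> 23 << 24);
}

int main(void)
{
    assert(Convert(0x000000) == 0);
    assert(Convert(0x7fffff) == 8388607);  /* largest positive 24-bit value */
    assert(Convert(0x800000) == -8388608); /* most negative 24-bit value */
    assert(Convert(0xffffff) == -1);       /* all ones is -1 in two's complement */
    return 0;
}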
Use int32_t instead of uint32_t for data32 (and for the return type). Then before returning the value, shift it left by 8, then right by 8, to sign extend it.
So this code:
int32_t readData(){
    Wire.beginTransmission(address);
    Wire.write(0x10);
    Wire.endTransmission();
    Wire.requestFrom(address, 3);
    byte dataMSB = Wire.read();
    byte data = Wire.read();
    byte dataLSB = Wire.read();
    int32_t data32 = dataMSB;
    data32 <<= 8;
    data32 |= data;
    data32 <<= 8;
    data32 |= dataLSB;
    return (data32 << 8) >> 8; // left then right shift to sign-extend the 24-bit value
}
I'm working with C and I'm trying to figure out how to change a set of bits in a 32-bit unsigned integer.
For example, if I have
int a = 17212403u;
In binary, that becomes 1000001101010001111110011. Now, suppose I label these bits so that the rightmost bit represents the ones, the second from the right the twos, and so on. How can I manually change a group of bits?
For example, suppose I wanted to change the bits such that the 11th through the 15th bit holds the decimal value 17. How would this be possible?
I was thinking of isolating that range by doing something like:
unsigned int range = (a << (sizeof(a) * 8) - 14) >> (28)
But I'm not sure where to go on from now.
You will (1) first have to clear bits 11..15, and (2) then set those bits according to the value you want. To achieve (1), create a "mask" that has all bits set to 1 except the ones you want to clear; then use a & bitMask to set those bits to 0. Then use | myValue to set the bits to the value wanted.
Use the bit shift operator << to place the mask and the value at the right positions:
int main(int argc, char** argv) {
    // Let's assume a range of 5 bits
    unsigned int bitRange = 0x1Fu;         // is ...00000000011111
    // Let's assume to position the range from bit 11 onwards (i.e. move 10 left):
    bitRange = bitRange << 10;             // something like 000000111110000000000
    unsigned int bitMask = ~bitRange;      // something like 111111000001111111111
    unsigned int valueToSet = (17u << 10); // corresponds to 000000100010000000000
    unsigned int a = (17212403u & bitMask) | valueToSet;
    return 0;
}
This is the long version to explain what's going on. In brief, you could also write:
unsigned int a = (17212403u & ~(0x1Fu << 10)) | (17u << 10);
The 11th to the 15th bit is 5 bits, assuming you meant to include the 15th bit. 5 set bits correspond to the hex value 0x1f.
Then you shift these 5 bits 11 positions to the left: 0x1f << 11
Now we have a mask for bits 11 through 15, which we want to clear in the original variable; we do that by inverting the mask and bitwise-ANDing it with the variable: a & ~(0x1f << 11)
Next, shift the value 17 up to the 11th bit: 17 << 11
Then we bitwise or that into the 5 bits we have cleared:
unsigned int b = (a & ~(0x1f << 11)) | (17 << 11);
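To double-check the result, the field can be read back out (a hypothetical check, using the same shift of 11 as above):

unsigned int readBack = (b >> 11) & 0x1f; /* should be 17 */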
Consider using bit fields. This allows you to name and access sub-sections of the integer as though they were integer members of a struct.
For info on C bitfields see:
https://www.tutorialspoint.com/cprogramming/c_bit_fields.htm
Below is code to do what you want, using bitfields. The "middle5" member of the struct holds bits 11-15. The "lower11" member is a filler for the lower 11 bits, so that the "middle5" member will be in the right place.
#include <stdio.h>

void showBits(unsigned int w)
{
    unsigned int bit = 1u << 31; // unsigned literal avoids signed-overflow UB
    while (bit > 0)
    {
        printf("%d", ((bit & w) != 0) ? 1 : 0);
        bit >>= 1;
    }
    printf("\n");
}

int main(int argc, char* argv[])
{
    struct aBitfield {
        unsigned int lower11: 11;
        unsigned int middle5: 5;
        unsigned int upper16: 16;
    };
    union uintBits {
        unsigned int whole;
        struct aBitfield parts;
    };

    union uintBits b;
    b.whole = 17212403u;

    printf("Before:\n");
    showBits(b.whole);

    b.parts.middle5 = 17;

    printf("After:\n");
    showBits(b.whole);
}
Output of the program:
Before:
00000001000001101010001111110011
After:
00000001000001101000101111110011
Of course, you would want to use more meaningful naming for the various fields.
Be careful though: bitfields may be implemented differently on different platforms, so this may not be completely portable.
I have a question about bit packing in C++.
Let's say we have a struct defined in C++, shown below:
typedef struct
{
    unsigned long byte_half       : 4;  // 0.5
    unsigned long byte_oneAndHalf : 12; // 2
    union
    {
        unsigned long byte_union_one_1 : 8; // 3
        unsigned long byte_union_one_2 : 8; // 3
        unsigned long byte_union_one_3 : 8; // 3
    };
    unsigned long byte_one : 8; // 4
} LongStruct;
It is a struct called LongStruct. From the looks of it, it occupies 4 bytes and fits into a long. Now I execute the following line:
int size = sizeof(LongStruct);
I take a look at size, expecting it to have the value 4. It turns out I get 12 instead. In what way am I visualizing my struct incorrectly?
Thank you in advance for any help you can give me.
The union is expanded to a long, so its size is 4 bytes instead of 1 byte.
As a result, it is aligned to a 4-byte offset from the beginning of the structure.
In addition, the entire structure is expanded to be a multiple of 4 bytes in size.
So the actual structure looks like this:
unsigned long byte_half       : 4;  // bits 0 - 3
unsigned long byte_oneAndHalf : 12; // bits 4 - 15
unsigned long byte_padding_1  : 16; // bits 16 - 31 // align union
union
{
    unsigned long byte_union_one_1 : 8;  // bits 32 - 39
    unsigned long byte_union_one_2 : 8;  // bits 32 - 39
    unsigned long byte_union_one_3 : 8;  // bits 32 - 39
    unsigned long byte_padding_2   : 24; // bits 40 - 63 // expand union
};
unsigned long byte_one       : 8;  // bits 64 - 71
unsigned long byte_padding_3 : 24; // bits 72 - 95 // expand structure
Hence the total size is 96 bits (12 bytes).
The anonymous union is not substituting its attributes, but is taking a four-byte chunk in the middle of your bit field struct. So your first two members are two bytes plus two bytes of padding. Then your union is one byte plus three of padding. Then your final member is one byte and three more of padding. The total is the 12 you observe.
I'll try to dig into the standard to see exactly what it says about anonymous union bitfields. Alternatively, if you describe the real problem you're trying to solve, we could look into answering that as well.
As an aside: you have this tagged C++, so strongly prefer struct X {}; over typedef struct {} X;.
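For what it's worth, if the three union members were only ever meant to be alternative names for the same 8 bits, collapsing them into a single field gives the 4-byte layout you expected, with the usual caveat that bit-field packing is implementation-defined. A sketch (LongStruct4 is an illustrative name):

#include <stdint.h>

typedef struct
{
    uint32_t byte_half       : 4;
    uint32_t byte_oneAndHalf : 12;
    uint32_t byte_union_one  : 8; /* one field in place of the three aliased views */
    uint32_t byte_one        : 8;
} LongStruct4;

/* 4 + 12 + 8 + 8 = 32 bits, which common compilers pack into one uint32_t;
   this check is not guaranteed to pass everywhere. */
_Static_assert(sizeof(LongStruct4) == 4, "expected a packed 4-byte layout");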