Reinterpret cast is not taking all my bytes - c++

I receive some bytes and then I want to cast them to a struct with the corresponding values.
My typedef struct is:
typedef struct SomethingHeader {
uint8_t PV;
uint8_t messageID;
uint32_t stID;
} SomethingHeader;
I receive the array with values:
somePDU[0] = {char} 2 [0x2]
somePDU[1] = {char} 6 [0x6]
somePDU[2] = {char} 41 [0x29]
somePDU[3] = {char} -90 [0xa6]
somePDU[4] = {char} 28 [0x1c]
somePDU[5] = {char} -93 [0xa3]
somePDU[6] = {char} 55 [0x37]
somePDU[7] = {char} -50 [0xce]
somePDU[8] = {char} 0 [0x0]
....
When I use reinterpret_cast<SomethingHeader*>(somePDU), in the debugger watch window I see:
PV = 2 [0x2]
messageID = 6 [0x6]
stID = -835214564 [0xce37a31c]
The reinterpret_cast skips two bytes, somePDU[2] and somePDU[3], but I need them, because my expected value is 698797623 (0x29a6ce37).
It seems to me that the reinterpret_cast only works well every 4 bytes (or with structures that occupy 4 bytes in a row).
How can I force the reinterpret_cast not to skip those two bytes?

You cannot use reinterpret_cast for this in C++. You will violate type aliasing rules and the behaviour of the program will be undefined.
You cannot define the struct in such a way that it is guaranteed to have no padding in standard C++. A structure like this is not a portable way to represent byte patterns in C++.
A working example:
// Copy each member individually and advance by its exact size, so struct padding never matters
// (needs <cstring> and <cstddef>).
std::size_t offs = 0;
SomethingHeader sh;
std::memcpy(&sh.PV, somePDU + offs, sizeof sh.PV);
offs += sizeof sh.PV;
std::memcpy(&sh.messageID, somePDU + offs, sizeof sh.messageID);
offs += sizeof sh.messageID;
std::memcpy(&sh.stID, somePDU + offs, sizeof sh.stID);
offs += sizeof sh.stID;
Note that this still assumes that the bytes of the integer are in the native byte order, which is not a portable assumption. You need to know the endianness of the source data and convert it to the native byte order. This can be done portably with shifts and bitwise OR.
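For example, a minimal sketch of reading a 32-bit field that arrives in big-endian order (the byte order and the unsigned char pointer are assumptions about your protocol and buffer, not taken from the question):
// p points at the first of four bytes holding the value most-significant-byte first.
// uint32_t comes from <cstdint>.
uint32_t read_be32(const unsigned char* p)
{
    return (uint32_t)p[0] << 24 | (uint32_t)p[1] << 16
         | (uint32_t)p[2] << 8  | (uint32_t)p[3];
}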

My problems were padding and endianness, as pointed out in the comments on my question.
My solution for padding is:
#pragma pack(push,1)
struct SomethingHeader {
uint8_t PV;
uint8_t messageID;
uint32_t stID;
};
#pragma pack(pop)
For the endianness just use ntohl().
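For example, a rough sketch combining both (assuming the data arrives in network byte order and that <arpa/inet.h> and <cstring> are included):
SomethingHeader sh;
std::memcpy(&sh, somePDU, sizeof sh);  // packed struct, so the bytes map 1:1
sh.stID = ntohl(sh.stID);              // convert the 32-bit field to host byte order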
Thank you

Related

AVR C++ uint32_t weird behaviour

uint32_t a = 65536;
uint32_t b = 1 << 16;
Why is a != b here, but
uint32_t a = 65536;
uint32_t b = 65536;
here a == b although it should technically be the same?
I'm using CLion as an IDE and CMake 3.7.1 with Arduino CMake.
uint32_t b = 1 << 16;
As you noticed, this breaks down if you don't cast 1 to a 32-bit integer first.
The literal 1 has your compiler's default int type; on AVR that is a 16-bit int.
Now, with a 16-bit int, shifting 1 left by 16 pushes the bit straight off the end (and shifting by the full width of the type is undefined behaviour anyway). So make your 1 a 32-bit int first, then shift.
I had to cast 1 to a uint32_t so that there are enough bits to shift into.
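A minimal sketch of the fix; either a cast or a suffix that forces a 32-bit type works:
uint32_t b = (uint32_t)1 << 16;  // shift is done in 32 bits
uint32_t c = 1UL << 16;          // same idea, using a long literal (32 bits on AVR)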

Safely convert 2 bytes to short

I'm making an emulator for the Intel 8080. One of the opcodes requires a 16-bit address formed by combining the b and c registers (each 1 byte). I have a struct with the registers adjacent to each other. The way I combine the two registers is:
using byte = char;
struct {
... code
byte b;
byte c;
... code
} state;
...somewhere in code
// memory is an array of byte with a size of 65535
memory[*reinterpret_cast<short*>(&state.b)]
I was thinking I can just OR them together, but that doesn't work.
short address = state.b | state.c;
Another way I tried doing this was by creating a short, and setting the 2 bytes individually.
short address;
*reinterpret_cast<byte*>(&address) = state.b;
*(reinterpret_cast<byte*>(&address) + 1) = state.c;
Is there a better/safer way to achieve what I am trying to do?
short j;
j = state.b;
j <<= 8;
j |= state.c;
Reverse the state.b and state.c if you need the opposite endianness.
short address = ((unsigned short)state.b << 8) | (unsigned char)state.c;
That's the portable way. Your way, with reinterpret_cast, is not really that terrible, as long as you understand that it will only work on an architecture with the right endianness.
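A usage sketch for the emulator (assuming, as on the 8080, that b holds the high byte of the BC pair):
unsigned short address = ((unsigned short)(unsigned char)state.b << 8)
                       | (unsigned char)state.c;
byte value = memory[address];  // index into the memory array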
As others have mentioned, there are endianness concerns, but you can also use a union to manipulate the memory without needing to do any shifting.
Example Code
#include <cstdint>
#include <iostream>
using byte = std::uint8_t;
struct Regs
{
union
{
std::uint16_t bc;
struct
{
// The order of these bytes matters
byte c;
byte b;
};
};
};
int main()
{
Regs regs;
regs.b = 1; // 0000 0001
regs.c = 7; // 0000 0111
// Bit place values (only the low nine bits shown): 256 128 64 32 16 8 4 2 1
//
// The overall binary: 0000 0001 0000 0111
//
// 256 + 4 + 2 + 1 = 263
std::cout << regs.bc << "\n";
return 0;
}
Example Output
263
You can use:
unsigned short address = state.b * 0x100u + state.c;
Using multiplication instead of shift avoids all the issues relating to shifting the sign bit etc.
The address should be unsigned otherwise you will cause out-of-range assignment, and probably you want to use 0 to 65535 as your address range anyway, instead of -32768 to 32767.

How to concatenate an array of bytes and convert to decimal?

How can I concatenate bytes? For example:
I have one byte array, BYTE buffer[2] = {0x00, 0x02}, but I want to concatenate these two bytes backwards,
something like that:
0x0200 <---
and later convert those bytes to decimal: 0x0200 = 512.
But I don't know how to do it in C, because I can't use memcpy or strcat, since buffer is BYTE and not char; I don't even know if I can do that.
Can somebody help me with code, or explain how I can concatenate bytes and convert them to decimal?
I have another byte array, buff = {0x00, 0x00, 0x0C, 0x00, 0x00, 0x00}, and need to do the same with it.
help please.
regards.
BYTE is not a standard type and is probably a typedef for unsigned char. Here, I'll use the definitions from <stdint.h>, which define integers of specified bit widths; a byte is uint8_t.
Concatenating two bytes "backwards" is easy if you think about it:
uint8_t buffer[2] = {0x00, 0x02};
uint16_t x = buffer[1] * 256 + buffer[0];
It isn't called backwards, by the way, but Little Endian byte order. The opposite would be Big Endian, where the most significant byte comes first:
uint16_t x = buffer[0] * 256 + buffer[1];
Then, there's no such thing as "converting to decimal". Internally, all numbers are binary. You can print them as decimal numbers or as hexadecimal numbers or as numbers of any base or even as Roman numerals if you like, but it's still the same number:
printf("dec: %u\n", x); // prints 512
printf("hex: %x\n", x); // prints 200
Now let's look at what happens with byte arrays of any length:
uint8_t buffer[4] = {0x11, 0x22, 0x33, 0x44};
uint32_t x = buffer[3] * 256 * 256 * 256
+ buffer[2] * 256 * 256
+ buffer[1] * 256
+ buffer[0];
See a pattern? You can rewrite this as:
uint32_t x = ( ( (buffer[3]) * 256
+ buffer[2]) * 256
+ buffer[1]) * 256
+ buffer[0];
You can convert this logic to a function easily:
uint64_t int_little_endian(uint8_t *arr, size_t n)
{
uint64_t res = 0ul;
while (n--) res = res * 256 + arr[n];
return res;
}
Likewise for Big Endian, where you move "forward":
uint64_t int_big_endian(uint8_t *arr, size_t n)
{
uint64_t res = 0ul;
while (n--) res = res * 256 + *arr++;
return res;
}
Lastly, code that deals with byte conversions usually doesn't use the arithmetic operations of multiplication and addition; it uses the so-called bitwise operators. A multiplication by 2 is a shift of all bits of the number one position to the left. (Much as a multiplication by 10 in decimal is done by shifting all digits by one and appending a zero.) Our multiplication by 256 becomes a shift of 8 bits to the left, which in C notation is x << 8.
Addition is done by applying the bitwise OR. The two operations are not identical, because the bitwise OR operates on bits and does not account for carry, but in our case, where the added bits never overlap, they behave the same. Your Little-Endian conversion function now looks like this:
uint64_t int_little_endian(uint8_t *arr, size_t n)
{
uint64_t res = 0ul;
while (n--) res = res << 8 | arr[n];
return res;
}
And if that doesn't look like some nifty C code, I don't know what does. (If these bitwise operators confuse you, leave them for now. In your example, you're fine with multiplication and addition.)
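A quick usage check against the arrays from the question (inside main, with <stdio.h> included; the casts just keep the printf format specifier portable):
uint8_t buffer[2] = {0x00, 0x02};
uint8_t buff[6] = {0x00, 0x00, 0x0C, 0x00, 0x00, 0x00};
printf("%llu\n", (unsigned long long)int_little_endian(buffer, 2)); /* prints 512 */
printf("%llu\n", (unsigned long long)int_little_endian(buff, 6));   /* prints 786432, i.e. 0x0C0000 */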

convert 4 bytes to 3 bytes in C++

I have a requirement where 3 bytes (24 bits) need to be populated in a binary protocol. The original value is stored in an int (32 bits). One way to achieve this would be as follows:
Technique 1:
long x = 24;
long y = htonl(x);
long z = y>>8;
memcpy(dest, &z, 3);
Please let me know if above is the correct way to do it?
The other way, which I don't understand, was implemented as below.
Technique 2:
typedef struct {
char data1;
char data2[3];
} some_data;
typedef union {
long original_data;
some_data data;
} combined_data;
long x = 24;
combined_data somedata;
somedata.original_data = htonl(x);
memcpy(dest, &somedata.data.data2, 3);
What I don't understand is: how did the 3 bytes end up in somedata.data.data2? Shouldn't the first byte go into somedata.data.data1 and the next 2 bytes into
somedata.data.data2?
This is an x86_64 platform running 2.6.x Linux and gcc.
PARTIALLY SOLVED:
The x86_64 platform is little-endian, so the least significant byte of an integer is stored at the lowest address. In the diagrams below, Byte1 is the first byte in memory (addresses grow from right to left). A variable of type long with value 24 will then have the following memory representation:
|--Byte4--|--Byte3--|--Byte2--|--Byte1--|
0 0 0 0x18
With htonl() performed on above long type, the memory becomes
|--Byte4--|--Byte3--|--Byte2--|--Byte1--|
0x18 0 0 0
In the struct some_data, the mapping is:
data1 = Byte1
data2[0] = Byte2
data2[1] = Byte3
data2[2] = Byte4
But my question still holds: why not simply right-shift by 8, as shown in Technique 1?
A byte takes 8 bits :-)
int x = 24;
int y = x<<8;
Shifting by 0 changes nothing; shifting by 1 multiplies by 2, by 2 multiplies by 4, by 8 multiplies by 256.
If we are on a BIG ENDIAN machine, the 4 bytes are put in memory as: 2143. And such algorithms won't work for numbers greater than 2^15. On the other hand, on a BIG ENDIAN machine you should define what "putting an integer in 3 bytes" means.
Hmm. I think the second proposed algorithm will be OK, but change the order of the bytes:
You have them as 2143. You need 321, I think. But better check it.
Edit: I checked on the wiki; x86 is little-endian, they say, so the algorithms are OK.
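For reference, a shift-based sketch of Technique 1 that works regardless of host byte order (assuming the protocol wants the 24-bit value most significant byte first; the function name is just illustrative):
void put24(unsigned char *dest, uint32_t x)
{
    dest[0] = (x >> 16) & 0xFF;  /* most significant of the three bytes */
    dest[1] = (x >> 8) & 0xFF;
    dest[2] = x & 0xFF;          /* least significant byte */
}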

How to create a byte out of 8 bool values (and vice versa)?

I have 8 bool variables, and I want to "merge" them into a byte.
Is there an easy/preferred method to do this?
How about the other way around, decoding a byte into 8 separate boolean values?
I come in assuming it's not an unreasonable question, but since I couldn't find relevant documentation via Google, it's probably another one of those "nonono all your intuition is wrong" cases.
The hard way:
unsigned char ToByte(bool b[8])
{
unsigned char c = 0;
for (int i=0; i < 8; ++i)
if (b[i])
c |= 1 << i;
return c;
}
And:
void FromByte(unsigned char c, bool b[8])
{
for (int i=0; i < 8; ++i)
b[i] = (c & (1<<i)) != 0;
}
Or the cool way:
struct Bits
{
unsigned b0:1, b1:1, b2:1, b3:1, b4:1, b5:1, b6:1, b7:1;
};
union CBits
{
Bits bits;
unsigned char byte;
};
Then you can assign to one member of the union and read from another. But note that the order of the bits in Bits is implementation defined.
Note that reading one union member after writing another is well-defined in ISO C99, and as an extension in several major C++ implementations (including MSVC and GNU-compatible C++ compilers), but is Undefined Behaviour in ISO C++. memcpy or C++20 std::bit_cast are the safe ways to type-pun in portable C++.
(Also, the bit-order of bitfields within a char is implementation defined, as is possible padding between bitfield members.)
You might want to look into std::bitset. It allows you to compactly store booleans as bits, with all of the operators you would expect.
No point fooling around with bit-flipping and whatnot when you can abstract away.
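A minimal sketch (going through to_ulong() is one reasonable way to get the raw byte, not the only one):
#include <bitset>
#include <cstdint>

uint8_t to_byte(const std::bitset<8>& bits)
{
    return static_cast<uint8_t>(bits.to_ulong());  // pack the 8 flags into one byte
}

std::bitset<8> from_byte(uint8_t byte)
{
    return std::bitset<8>(byte);                   // unpack a byte back into 8 flags
}
// e.g. std::bitset<8> b; b[3] = true; to_byte(b) == 8; from_byte(8)[3] == true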
The cool way (using the multiplication technique)
inline uint8_t pack8bools(bool* a)
{
uint64_t t;
memcpy(&t, a, sizeof t); // strict-aliasing & alignment safe load
return 0x8040201008040201ULL*t >> 56;
// bit order: a[0]<<7 | a[1]<<6 | ... | a[7]<<0 on little-endian
// for a[0] => LSB, use 0x0102040810204080ULL on little-endian
}
void unpack8bools(uint8_t b, bool* a)
{
// on little-endian, a[0] = (b>>7) & 1 like printing order
auto MAGIC = 0x8040201008040201ULL; // for opposite order, byte-reverse this
auto MASK = 0x8080808080808080ULL;
uint64_t t = ((MAGIC*b) & MASK) >> 7;
memcpy(a, &t, sizeof t); // store 8 bytes without UB
}
Assuming sizeof(bool) == 1
To portably do LSB <-> a[0] (like the pext/pdep version below) instead of using the opposite of host endianness, use htole64(0x0102040810204080ULL) as the magic multiplier in both versions. (htole64 is from BSD / GNU <endian.h>). That arranges the multiplier bytes to match little-endian order for the bool array. htobe64 with the same constant gives the other order, MSB-first like you'd use for printing a number in base 2.
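A sketch of that LSB-first variant (assuming htole64 from the BSD/GNU <endian.h> is available; that header is non-standard, so this is an assumption about your platform):
#include <endian.h>   // htole64 (BSD / GNU)
#include <cstring>
#include <cstdint>

inline uint8_t pack8bools_lsb(const bool* a)
{
    uint64_t t;
    std::memcpy(&t, a, sizeof t);                       // aliasing-safe 8-byte load
    return (htole64(0x0102040810204080ULL) * t) >> 56;  // a[0] ends up in bit 0
}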
You may want to make sure that the bool array is 8-byte aligned (alignas(8)) for performance, and that the compiler knows this. memcpy is always safe for any alignment, but on ISAs that require alignment, a compiler can only inline memcpy as a single load or store instruction if it knows the pointer is sufficiently aligned. *(uint64_t*)a would promise alignment, but also violate the strict-aliasing rule. Even on ISAs that allow unaligned loads, they can be faster when naturally aligned. But the compiler can still inline memcpy without seeing that guarantee at compile time.
How they work
Suppose we have 8 bools b[0] to b[7] whose least significant bits are named a-h respectively, and we want to pack them into a single byte. Treating those 8 consecutive bools as one 64-bit word and loading them, we get the bits in reversed order on a little-endian machine. Now we do a multiplication (here dots are zero bits):
| b7 || b6 || b5 || b4 || b3 || b2 || b1 || b0 |
.......h.......g.......f.......e.......d.......c.......b.......a
× 1000000001000000001000000001000000001000000001000000001000000001
────────────────────────────────────────────────────────────────
↑......h.↑.....g..↑....f...↑...e....↑..d.....↑.c......↑b.......a
↑.....g..↑....f...↑...e....↑..d.....↑.c......↑b.......a
↑....f...↑...e....↑..d.....↑.c......↑b.......a
+ ↑...e....↑..d.....↑.c......↑b.......a
↑..d.....↑.c......↑b.......a
↑.c......↑b.......a
↑b.......a
a
────────────────────────────────────────────────────────────────
= abcdefghxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
The arrows are added so it's easier to see the position of the set bits in the magic number. At this point the 8 bits we want have been put in the most significant byte; we just need to shift them down (the >> 56 in the code) and discard the rest.
So the magic number for packing is 0b1000000001000000001000000001000000001000000001000000001000000001, or 0x8040201008040201. If you're on a big-endian machine you'll need to use the magic number 0x0102040810204080, which is calculated in a similar manner.
For unpacking we can do a similar multiplication
| b7 || b6 || b5 || b4 || b3 || b2 || b1 || b0 |
abcdefgh
× 1000000001000000001000000001000000001000000001000000001000000001
────────────────────────────────────────────────────────────────
= h0abcdefgh0abcdefgh0abcdefgh0abcdefgh0abcdefgh0abcdefgh0abcdefgh
& 1000000010000000100000001000000010000000100000001000000010000000
────────────────────────────────────────────────────────────────
= h0000000g0000000f0000000e0000000d0000000c0000000b0000000a0000000
After multiplying we have the needed bits at the most significant position of each byte, so we mask out the irrelevant bits and shift the remaining ones down to the least significant positions. The output bytes then contain a to h in little-endian order.
The efficient way
On newer x86 CPUs with BMI2 there are PEXT and PDEP instructions for this purpose. The pack8bools function above can be replaced with
_pext_u64(*((uint64_t*)a), 0x0101010101010101ULL);
And the unpack8bools function can be implemented as
_pdep_u64(b, 0x0101010101010101ULL);
(This maps LSB -> LSB, like a 0x0102040810204080ULL multiplier constant, opposite of 0x8040201008040201ULL. x86 is little-endian: a[0] = (b>>0) & 1; after memcpy.)
Unfortunately those instructions are very slow on AMD CPUs before Zen 3, so you may need to compare with the multiplication method above to see which is better.
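A sketch of the pack direction with a strict-aliasing-safe load (compile with BMI2 enabled, e.g. -mbmi2; the function name is just illustrative):
#include <immintrin.h>
#include <cstring>
#include <cstdint>

inline uint8_t pack8bools_bmi2(const bool* a)
{
    uint64_t t;
    std::memcpy(&t, a, sizeof t);                // load the 8 bools as one 64-bit word
    return _pext_u64(t, 0x0101010101010101ULL);  // gather bit 0 of every byte
}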
The other fast way is SSE2
x86 SIMD has an operation that takes the high bit of every byte (or float or double) in a vector register, and gives it to you as an integer. The instruction for bytes is pmovmskb. This can of course do 16 bytes at a time with the same number of instructions, so it gets better than the multiply trick if you have lots of this to do.
#include <immintrin.h>
inline uint8_t pack8bools_SSE2(const bool* a)
{
__m128i v = _mm_loadl_epi64( (const __m128i*)a ); // 8-byte load, despite the pointer type.
// __m128 v = _mm_cvtsi64_si128( uint64 ); // alternative if you already have an 8-byte integer
v = _mm_slli_epi32(v, 7); // low bit of each byte becomes the highest
return _mm_movemask_epi8(v);
}
There isn't a single instruction to unpack until AVX-512, which has mask-to-vector instructions. It is doable with SIMD, but likely not as efficiently as the multiply trick. See Convert 16 bits mask to 16 bytes mask and more generally is there an inverse instruction to the movemask instruction in intel avx2? for unpacking bitmaps to other element sizes.
How to efficiently convert an 8-bit bitmap to array of 0/1 integers with x86 SIMD has some answers specifically for 8-bits -> 8-bytes, but if you can't do 16 bits at a time for that direction, the multiply trick is probably better, and pext certainly is (except on CPUs where it's disastrously slow, like AMD before Zen 3).
#include <stdint.h> // to get the uint8_t type
uint8_t GetByteFromBools(const bool eightBools[8])
{
uint8_t ret = 0;
for (int i=0; i<8; i++) if (eightBools[i] == true) ret |= (1<<i);
return ret;
}
void DecodeByteIntoEightBools(uint8_t theByte, bool eightBools[8])
{
for (int i=0; i<8; i++) eightBools[i] = ((theByte & (1<<i)) != 0);
}
bool a,b,c,d,e,f,g,h;
//do stuff
char y= a<<7 | b<<6 | c<<5 | d<<4 | e <<3 | f<<2 | g<<1 | h;//merge
although you are probably better off using a bitset
http://www.cplusplus.com/reference/stl/bitset/bitset/
I'd like to note that type punning through unions is UB in C++ (as rodrigo does in his answer). The safest way to do it is memcpy():
struct Bits
{
unsigned b0:1, b1:1, b2:1, b3:1, b4:1, b5:1, b6:1, b7:1;
};
unsigned char toByte(Bits b){
unsigned char ret;
memcpy(&ret, &b, 1);
return ret;
}
As others have said, the compiler is smart enough to optimize out memcpy().
BTW, this is the way that Boost does type punning.
There is no way to pack 8 bool variables into one byte directly, but there is a way to pack 8 logical true/false states into a single byte using bitmasking.
You would use bitwise shift operations and casting to achieve it. A function could work like this:
unsigned char toByte(bool *bools)
{
unsigned char byte = 0;
for(int i = 0; i < 8; ++i) byte |= ((unsigned char) bools[i]) << i;
return byte;
}
Thanks Christian Rau for the corrections!