Is Boost endian library platform-independent? - c++

There is a valid uint8_t* buffer holding four bytes.
I know the buffer contains two uint16_t numbers in little-endian format, and I want to extract them.
I can construct the required values manually, taking care of the correct endianness:
const std::uint8_t* data = ...
uint16_t first = (data[1]<<8) + data[0];
uint16_t second = (data[3]<<8) + data[2];
I have been told (thanks to @eerorika):
this works the same way on all systems. This reads the data as little
endian, whether the CPU is little, big or some other endian. This is
well portable.
And this works as intended.
Now let's consider another solution with Boost's endian library:
struct datatype {
    boost::endian::little_int16_buf_t first;
    boost::endian::little_int16_buf_t second;
} d;
memcpy(&d, data, sizeof d);
This solution also works, and my question is: is this in any way worse in terms of portability, platform- and CPU-dependency than the first?
If I compile and run this on a non little-endian architecture, will it produce the same values as on a little-endian one?
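For what it's worth, a minimal self-contained sketch of the Boost approach (assuming Boost.Endian is available; the byte values are made up for illustration):
#include <boost/endian/buffers.hpp>
#include <cstdint>
#include <cstring>
#include <iostream>
int main() {
    // Hypothetical buffer: two little-endian uint16_t values (1090 and 0).
    const std::uint8_t data[4] = { 0x42, 0x04, 0x00, 0x00 };
    struct datatype {
        boost::endian::little_int16_buf_t first;
        boost::endian::little_int16_buf_t second;
    } d;
    static_assert(sizeof d == 4, "no padding expected between the buffers");
    std::memcpy(&d, data, sizeof d);
    // value() decodes the stored little-endian bytes arithmetically,
    // so this should print "1090 0" on little- and big-endian hosts alike.
    std::cout << d.first.value() << ' ' << d.second.value() << '\n';
}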

Related

Converting uint8_t* buffer to uint16_t and changing endianness

I'd like to process data provided by an external library.
The lib holds the data and provides access to it like this:
const uint8_t* data;
std::pair<const uint8_t*, const uint8_t*> getvalue() const {
    return std::make_pair(data + offset, data + length);
}
I know that the current data contains two uint16_t numbers, but I need to change their endianness.
So altogether the data is 4 bytes long and contains these numbers:
66 4 0 0
So I'd like to get two uint16_t numbers with values 1090 and 0, respectively.
I can do basic arithmetic and change the endianness in one place:
pair<const uint8_t*, const uint8_t*> dataPtrs = library.getvalue();
vector<uint8_t> data(dataPtrs.first, dataPtrs.second);
uint16_t first = (data[1] << 8) + data[0];
uint16_t second = (data[3] << 8) + data[2];
However I'd like to do something more elegant (the vector is replaceable if there is a better way of getting the uint16_ts).
How can I better create a uint16_t from a uint8_t*? I'd avoid memcpy if possible, and use something more modern/safe.
Boost has a nice header-only endian library which could work, but it needs a uint16_t input.
For going further, Boost also provides data types for changing endianness, so I could create a struct:
struct datatype {
    big_int16_buf_t data1;
    big_int16_buf_t data2;
};
Is it possible to safely (padding, platform dependencies, etc.) cast a valid, 4-byte-long uint8_t* to datatype? Maybe with something like this union?
typedef union {
    uint8_t u8[4];
    datatype correct_data;
} mydata;
Maybe with something like this union?
No. Type punning with unions is not well defined in C++.
This would work assuming big_int16_buf_t and therefore datatype is trivially copyable:
datatype d{};
std::memcpy(&d, data, sizeof d);
uint16_t first = (data[1] << 8) + data[0];
uint16_t second = (data[3] << 8) + data[2];
However I'd like to do something more elegant
This is actually (subjectively, in my opinion) quite an elegant way because it works the same way on all systems. This reads the data as little endian, whether the CPU is little, big or some other endian. This is well portable.
However I'd like to do something more elegant (the vector is replaceable if there is better way for getting the uint16_ts).
The vector seems entirely pointless. You could just as well use:
const std::uint8_t* data = dataPtrs.first;
How can I better create uint16_t from uint8_t*?
If you are certain that the data sitting behind the uint8_t pointer is truly a uint16_t, C++ allows: auto u16 = *reinterpret_cast<uint16_t const*>(data); Otherwise, this is UB.
Given a big-endian value, transforming it into little-endian can be done with the ntohs function (on Linux; other OSes have similar functions).
But beware, if the pointer you hold points to two individual uint8_t values, you mustn't convert them by pointer-cast. In that case, you have to manually specify which value goes where (conceivably with a function template). This will be the most portable solution, and in all likelihood the compiler will create efficient code out of the shifts and ors.
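The function template alluded to above might look like this minimal sketch (read_be is a made-up name, not a library function):
#include <cstddef>
#include <cstdint>
// Reads an unsigned integer of type T from 'p', most significant byte
// first, one byte at a time - independent of the host's endianness.
template <class T>
T read_be(const std::uint8_t* p) {
    T value = 0;
    for (std::size_t i = 0; i < sizeof(T); ++i)
        value = static_cast<T>((value << 8) | p[i]);
    return value;
}
// Usage: std::uint16_t first = read_be<std::uint16_t>(data);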

Header with restrictions using bytes for a UDP socket

I am writing a header for a UDP socket which has byte-level size restrictions.
| Packet ID (1 byte) | Packet Size (2 bytes) | Subpacket ID (1 Byte) | etc
I wrote a struct to store these attributes:
typedef struct WHEATHER_STRUCT
{
    unsigned char packetID[1];
    unsigned char packetSize[2];
    unsigned char subPacketID[1];
    unsigned char subPacketOffset[2];
    ...
} wheather_struct;
I initialized this struct using new and updated the values. My question concerns the 2-byte Packet Size attribute: which of the two forms below is the correct one?
*weather_struct->packetSize = '50';
or
*weather_struct->packetSize = 50;
If you can use C++11 and gcc (or clang) then I would do this:
typedef struct WHEATHER_STRUCT
{
    uint8_t packetID;
    uint16_t packetSize;
    uint8_t subPacketID;
    uint16_t subPacketOffset;
    // ...
} __attribute__((packed)) wheather_struct;
If you can't use C++11 then you can use unsigned char and unsigned short instead.
If you're using Visual C then you can do:
#pragma pack (push, 1)
typedef struct ...
#pragma pack (pop)
Beware also byte ordering issues, depending on what architectures you need to support. You can use htons() and ntohs() to overcome this problem.
Packing and unpacking data from IP packets is a problem as old as the internet itself (indeed, older).
Different machine architectures have different layouts for representing integers, which can cause problems when communicating between machines.
For this reason, the IP stack standardises on encoding integers in 'network byte order' (which basically means most significant byte first).
Standard functions exist to convert values in network byte order to native types and vice versa. I urge you to consider using these as your code will then be more portable.
Furthermore, it makes sense to abstract data representations from the program's point of view. C++ compilers can perform the conversions very efficiently.
Example:
#include <arpa/inet.h>
#include <cstring>
#include <cstdint>
typedef struct WEATHER_STRUCT
{
    std::int8_t packetID;
    std::uint16_t packetSize;
    std::uint8_t subPacketID;
    std::uint16_t subPacketOffset;
} weather_struct;
const std::int8_t* populate(weather_struct& target, const std::int8_t* source)
{
    auto get16 = [&source]
    {
        std::uint16_t buf16;
        std::memcpy(&buf16, source, 2);
        source += 2;
        return ntohs(buf16);   // network (big-endian) to host order
    };
    target.packetID = *source++;
    target.packetSize = get16();
    target.subPacketID = *source++;
    target.subPacketOffset = get16();
    return source;
}
uint8_t* serialise(uint8_t* target, weather_struct const& source)
{
    auto write16 = [&target](std::uint16_t val)
    {
        val = htons(val);      // host to network (big-endian) order
        std::memcpy(target, &val, 2);
        target += 2;
    };
    *target++ = source.packetID;
    write16(source.packetSize);
    *target++ = source.subPacketID;
    write16(source.subPacketOffset);
    return target;
}
https://linux.die.net/man/3/htons
Here's a link to a C++17 version of the above:
https://godbolt.org/z/oRASjI
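For illustration, a possible round-trip through the two functions above (the field values are made up):
#include <cassert>
int main() {
    weather_struct ws = {};
    ws.packetID = 1;
    ws.packetSize = 50;
    ws.subPacketID = 2;
    ws.subPacketOffset = 6;
    std::uint8_t wire[6];                      // 1 + 2 + 1 + 2 bytes on the wire
    std::uint8_t* end = serialise(wire, ws);
    assert(end == wire + sizeof wire);
    weather_struct decoded = {};
    populate(decoded, reinterpret_cast<const std::int8_t*>(wire));
    assert(decoded.packetSize == 50 && decoded.subPacketOffset == 6);
}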
A further note on conversion costs:
Data arriving into or leaving your program is an event that happens once per payload. Suffering a conversion cost here incurs a negligible penalty.
Once the data has arrived in your program, or before it leaves, it may be manipulated many times by your code.
Some processor architectures suffer huge performance penalties during data access if data is not aligned on natural word boundaries - which is why compilers normally align data for you. Using a packed attribute is tantamount to deliberately telling the compiler to produce very suboptimal code.
For this reason, I would recommend not using packed structures (e.g. __attribute__((packed)) etc) for data that will be referred to by program logic.
Compared to RAM, networks are many orders of magnitude slower. A minuscule performance hit (literally nanoseconds) at the point of encoding or decoding a network packet is inconsequential compared to the cost of actually transmitting it.
Packing structures can cause horrible performance issues in program code and often leads to portability headaches.
Neither is correct: you need to treat the two bytes as a single 16-bit number. You probably also need to take into account the endianness difference between the network stream and your processor architecture (this depends on the protocol, but most are big-endian).
The correct code would therefore be:
*((uint16_t*)weather_struct->packetSize) = htons(50);
It would be simpler if packetSize were uint16_t to start with:
weather_struct->packetSize = htons(50);

Is the following data cross-platform compatible when written to a file?

I have a structure with the following format:
struct Serializable {
    uint64_t value1;
    uint32_t value2;
    uint16_t value3;
    uint8_t value4;
    // Returns the raw data after converting the integer fields to
    // big-endian format if the current architecture is little-endian.
    // If the current architecture is already big-endian, the return
    // expression will simply be "return (char*) (this);".
    char* convert_all_to_bigendian();
    // Checks whether the architecture is little-endian or big-endian.
    // After the contents of rawdata are copied back into the structure,
    // the integer fields are converted back to little-endian if needed
    // (serialized data follows big-endian format by default).
    char* get_and_restructure_serialized_data(char* rawdata);
    uint64_t size();
} __attribute__ ((__packed__));
The implementation of the size() member:
uint64_t Serializable::size() {
    return sizeof(uint64_t) + sizeof(uint32_t) +
           sizeof(uint16_t) + sizeof(uint8_t);
}
If I write an object of the above structure to the file using fstream, as given in the following code:
std::fstream fWrite ("dump.dat", std::ios_base::out | std::ios_base::binary);
// obj is an object of the structure Serializable.
fWrite.write (obj.convert_all_to_bigendian(), obj.size());
Will the contents written to the file dump.dat be cross-platform?
Assuming I write a comparable class and structure for Visual C++, will the Windows-side application interpret the dump.dat file the same way the Linux side does?
If not, can you please explain what other factors I should consider, besides padding and endianness differences (which depend on the processor architecture), to make this cross-platform?
I understand that there are many serialization libraries out there, all well tested and extensively used, but I'm doing this purely for learning purposes.
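One way to get the effect of convert_all_to_bigendian() without casts or packing tricks is to serialize field by field, most significant byte first; a sketch (write_be is a made-up helper, not part of the structure above):
#include <cstddef>
#include <cstdint>
#include <ostream>
// Writes 'value' most significant byte first, whatever the host order.
template <class T>
void write_be(std::ostream& out, T value) {
    for (std::size_t i = sizeof(T); i-- > 0; )
        out.put(static_cast<char>((value >> (8 * i)) & 0xFF));
}
// Usage with the fields above:
// write_be(fWrite, obj.value1);  // 8 bytes
// write_be(fWrite, obj.value2);  // 4 bytes
// write_be(fWrite, obj.value3);  // 2 bytes
// write_be(fWrite, obj.value4);  // 1 byte
Written this way, the file contains exactly 15 big-endian bytes regardless of compiler padding or host endianness.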

How to convert byte array to integral types (int, long, short, etc.) 'endian safely'?

template <class T>
T readData(size_t position)
{
    byte rawData[sizeof(T)] = { 0, };
    // some logic that writes data into rawData
    return *((T*)rawData);
}
Now I'm developing a cross-platform game engine, but I heard that this kind of cast is dangerous because of endianness differences. How can I convert rawData to type T endian-safely, without writing conditions on endianness?
You must know the endianness of the source data. Data is usually big-endian when transferred over a network. Then you need to determine whether your system is little-endian or big-endian. If the endianness of the data and of the system differ, just reverse the bytes, and then use the value.
You can determine the endianness of your system as follows:
int is_little_endian() {
    short a = 1;
    return *((char*)&a) & 1;
}
Convert from little/big endian to system endian and vice versa using these macros:
// memrev is assumed here to be a user-supplied helper that reverses the
// bytes of one item of the given size in place (not a standard function).
#define LITTLE_X_SYSTEM(dst_type, src) if(!is_little_endian()) memrev((src), 1, sizeof(dst_type))
#define BIG_X_SYSTEM(dst_type, src) if(is_little_endian()) memrev((src), 1, sizeof(dst_type))
You can use it like this:
template <class T>
T readData(size_t position)
{
    byte rawData[sizeof(T)] = { 0, };
    // ... fill rawData; the source data is assumed to be big endian
    BIG_X_SYSTEM(T, rawData);
    T result;
    std::memcpy(&result, rawData, sizeof result);  // avoids the strict-aliasing cast
    return result;
}
This answer gives some more insight into endianness.
There's no need for you to care, unless your rawData comes from a different system (network stream, external peripheral, ...). As you are developing a game engine, I presume that is not the case.
Yes, you can do twisted things like writing data byte by byte and then reading it back as an integer, but that is a design problem. You should avoid that rather than spend too much time worrying about endianness.

dealing with endianness in c++

I am working on translating a system from Python to C++. I need to be able to perform actions in C++ that are generally performed using Python's struct.unpack (interpreting binary strings as numerical values). For integer values, I am able to get this to (sort of) work using the data types in stdint.h:
struct.unpack("i", str) ==> *(int32_t*) str; //str is a char* containing the data
This works properly for little-endian binary strings, but fails on big-endian binary strings. Basically, I need an equivalent to using the > tag in struct.unpack:
struct.unpack(">i", str) ==> ???
Please note, if there is a better way to do this, I am all ears. However, I cannot use c++11, nor any 3rd party libraries other than Boost. I will also need to be able to interpret floats and doubles, as in struct.unpack(">f", str) and struct.unpack(">d", str), but I'll get to that when I solve this.
NOTE I should point out that the endianness of my machine is irrelevant in this case. I know that the bitstream I receive in my code will ALWAYS be big-endian, and that's why I need a solution that will always cover the big-endian case. The article pointed out by BoBTFish in the comments seems to offer a solution.
For 32 and 16-bit values:
This is exactly the problem you have for network data, which is big-endian. You can use the ntohl function to turn a 32-bit value into host order - little-endian in your case.
The ntohl() function converts the unsigned integer netlong from network byte order to
host byte order.
int res = ntohl(*((int32_t*) str));
This also takes care of the case where your host is big-endian, in which case it simply won't do anything.
For 64-bit values
Non-standardly, on Linux/BSD you can take a look at 64 bit ntohl() in C++?, which points to htobe64:
These functions convert the byte encoding of integer values from the byte order that
the current CPU (the "host") uses, to and from little-endian and big-endian byte
order.
For windows try: How do I convert between big-endian and little-endian values in C++?
This points to _byteswap_uint64, as well as a 16- and 32-bit solution, and the gcc-specific __builtin_bswap32/__builtin_bswap64 calls.
Other Sizes
Most systems don't have values that aren't 16/32/64 bits long. If you do hit one, I might try to store it in a 64-bit value, shift it, and then translate, as sketched below. I'd write some good tests. I suspect it is an uncommon situation, and more details would help.
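A sketch of that idea, assuming the odd-sized field is at most 8 bytes and big-endian on the wire (read_be_n is a made-up name; boost::uint64_t is used to respect the no-C++11 constraint):
#include <cstddef>
#include <boost/cstdint.hpp>
// Reads an n-byte big-endian unsigned integer (n <= 8).
boost::uint64_t read_be_n(const unsigned char* p, std::size_t n) {
    boost::uint64_t value = 0;
    for (std::size_t i = 0; i < n; ++i)
        value = (value << 8) | p[i];
    return value;
}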
Unpack the string one byte at a time.
unsigned char *str;
unsigned int result;
result = *str++ << 24;
result |= *str++ << 16;
result |= *str++ << 8;
result |= *str++;
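The question also asks about struct.unpack(">f", str). Assuming the host uses IEEE-754 floats (true on practically every current platform), the same byte-at-a-time idea extends to floating point: assemble the big-endian bits into a 32-bit integer, then memcpy them into a float (unpack_be_float is a made-up name; boost::uint32_t keeps it pre-C++11):
#include <cstring>
#include <boost/cstdint.hpp>
// str points at 4 big-endian bytes encoding an IEEE-754 float.
float unpack_be_float(const unsigned char* str) {
    boost::uint32_t bits = 0;
    for (int i = 0; i < 4; ++i)
        bits = (bits << 8) | str[i];
    float f;
    std::memcpy(&f, &bits, sizeof f);   // reinterpret the bits as a float
    return f;
}
The double case is analogous, with 8 bytes and a 64-bit integer.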
First, the cast you're doing:
char *str = ...;
int32_t i = *(int32_t*)str;
results in undefined behavior due to the strict aliasing rule (unless str is initialized with something like int32_t x; char *str = (char*)&x;). In practical terms that cast can result in an unaligned read which causes a bus error (a crash) on some platforms and slow performance on others.
Instead you should be doing something like:
int32_t i;
std::memcpy(&i, str, sizeof(i));
There are a number of functions for swapping bytes between the host's native byte ordering and a host-independent ordering: ntohl(), ntohs(), htonl(), htons() for the different sizes supported. Since different hosts may have different byte orderings, this may be what you want to use if the data you're reading uses a consistent serialized form on all platforms.
i = ntohl(i);
You can also manually move bytes around in str before copying it into the integer.
std::swap(str[0],str[3]);
std::swap(str[1],str[2]);
std::memcpy(&i,str,sizeof(i));
Or you can manually manipulate the integer's value using shifts and bitwise operators.
std::memcpy(&i,str,sizeof(i));
i = (i&0xFFFF0000)>>16 | (i&0x0000FFFF)<<16;
i = (i&0xFF00FF00)>>8 | (i&0x00FF00FF)<<8;
This falls in the realm of bit twiddling.
for (i=0;i<sizeof(struct foo);i++) dst[i] = src[i ^ mask];
where mask == (sizeof(type) - 1) if the stored and native endianness differ, and 0 if they match.
With this technique one can convert a struct to bit masks:
struct foo {
    byte a,b;  // mask = 0,0
    short e;   // mask = 1,1
    int g;     // mask = 3,3,3,3
    double i;  // mask = 7,7,7,7,7,7,7,7
} s; // note that all units must be aligned according to their native size
These masks can be encoded with two bits per byte, as (1<<n)-1, meaning that on a 64-bit machine one can encode the necessary masks for a 32-byte struct in a single constant (covering 1-, 2-, 4- and 8-byte alignments).
unsigned int mask = 0xffffaa50; // or zero if the endianness matches
for (i=0;i<16;i++) {
    dst[i] = src[i ^ ((1 << (mask & 3)) - 1)];
    mask >>= 2;
}
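For a single 32-bit value the technique reduces to the familiar index-XOR loop (a made-up illustration; the mask is 3 == sizeof(int) - 1 because stored and native endianness differ here):
unsigned char src[4] = { 0x00, 0x00, 0x04, 0x42 };  // big-endian 1090
unsigned char dst[4];
for (int i = 0; i < 4; i++)
    dst[i] = src[i ^ 3];  // dst = 42 04 00 00, i.e. 1090 on a little-endian host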
If the values you receive are truly strings (char* or std::string) and you know their format, sscanf() and atoi() (and its relatives such as atol()) will be your friends. They take well-formatted strings and convert them per passed-in formats (a kind of reverse printf).