I have the following typedefs
typedef unsigned char BYTE;
typedef unsigned short WORD;
Now, I have an array that looks like this
BYTE redundantMessage[6];
and a field which looks like this
WORD vehicleSpeedToWord = static_cast<WORD>(redundantVelocity);
I would like to set the third and fourth bytes of this message to the value of
vehicleSpeedToWord. Will this do so:
redundantMessage[3] = vehicleSpeedToWord;
Will the third byte of redundantMessage automatically be overwritten?
As you proposed, the best way to do it is with std::memcpy(). However, you need to pass the address of the value, not the value itself; and if you really mean the third and fourth bytes, the copy should start at index 2, not 3:
std::memcpy(&redundantMessage[2], &vehicleSpeedToWord, sizeof(vehicleSpeedToWord));
Of course, you may do it "manually" by fiddling with the bits, e.g. (assuming CHAR_BIT == 8):
const BYTE high = vehicleSpeedToWord >> 8;
const BYTE low = vehicleSpeedToWord & static_cast<WORD>(0x00FF);
redundantMessage[2] = high;
redundantMessage[3] = low;
Do not be concerned about the performance of std::memcpy(); the generated code should be the same.
Another point that you discuss in the comments is endianness. If you are dealing with a network protocol, you must implement whatever endianness it specifies, and convert accordingly. The best approach is to convert your WORD to the proper endianness beforehand (i.e. from your architecture's endianness to the protocol's; this conversion may be the identity function if they already match).
Compilers/environments typically define a set of functions to deal with that. If you need portable code, wrap them inside your own functions or implement your own; see How do I convert between big-endian and little-endian values in C++? for more details.
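For instance, assuming the protocol is big-endian ("network order") and a POSIX-like environment where htons() lives in <arpa/inet.h> (on Windows it comes from <winsock2.h>), a minimal sketch could look like this; the helper name write_speed is mine, not from the question:
#include <arpa/inet.h> // htons (use <winsock2.h> on Windows)
#include <cstring>     // std::memcpy
typedef unsigned char BYTE;  // as in the question
typedef unsigned short WORD; // as in the question
void write_speed(BYTE* redundantMessage, WORD vehicleSpeedToWord)
{
    const WORD wire = htons(vehicleSpeedToWord);            // host order -> big-endian
    std::memcpy(&redundantMessage[2], &wire, sizeof wire);  // third and fourth bytes
}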
I would like to set the third and fourth bytes of this message [fn. redundantMessage] to the value of vehicleSpeedToWord.
Little endian or big endian?
Assuming unsigned short is exactly 16 bits (!) (i.e. sizeof(unsigned short) == 2 && CHAR_BIT == 8), then:
// little endian
// set the third byte of redundantMessage to (vehicleSpeedToWord&0xff)
redundantMessage[2] = vehicleSpeedToWord;
// sets the fourth byte of redundantMessage to ((vehicleSpeedToWord&0xff00)>>8)
redundantMessage[3] = vehicleSpeedToWord>>8;
or
// big endian
redundantMessage[2] = vehicleSpeedToWord>>8;
redundantMessage[3] = vehicleSpeedToWord;
If you want to use your host's endianness, you need to tell the compiler to assign the data as a WORD:
*reinterpret_cast<WORD*>(&redundantMessage[2]) = vehicleSpeedToWord;
but this is not really reliable (it breaks strict aliasing and may fault on alignment-sensitive architectures).
short is not exactly 16 bits, only at least 16 bits; it may be 64 bits on an x64 machine, or 1024 bits on a 1024-bit machine. It is best to use the fixed-width integer types:
#include <cstdint>
typedef std::uint8_t BYTE;
typedef std::uint16_t WORD;
You don't say whether you want the data to be stored in little-endian format (e.g. intel processors) or big-endian (network byte order).
Here's how I would tackle the problem.
I have provided both versions for comparison.
#include <cstdint>
#include <type_traits>
#include <cstddef>
#include <iterator>
struct little_endian {}; // low bytes first
struct big_endian {}; // high bytes first
template<class T>
auto integral_to_bytes(T value, unsigned char* target, little_endian)
-> std::enable_if_t<std::is_unsigned_v<T>>
{
for(auto count = sizeof(T) ; count-- ; )
{
*target++ = static_cast<unsigned char>(value & T(0xff));
value /= 0x100;
}
}
template<class T>
auto integral_to_bytes(T value, unsigned char* target, big_endian)
-> std::enable_if_t<std::is_unsigned_v<T>>
{
auto count = sizeof(T);
auto first = std::make_reverse_iterator(target + count);
while(count--)
{
*first++ = static_cast<unsigned char>(value & T(0xff));
value /= 0x100;
}
}
int main()
{
extern std::uint16_t get_some_value();
extern void foo(unsigned char*);
unsigned char buffer[6];
std::uint16_t some_value = get_some_value();
// little_endian
integral_to_bytes(some_value, buffer + 3, little_endian());
foo(buffer);
// big-endian
integral_to_bytes(some_value, buffer + 3, big_endian());
foo(buffer);
}
You can take a look at the resulting assembler here. You can see that either way, the compiler does a very good job of converting logical intent into very efficient code.
Update: we can improve the style at no cost in emitted code. Modern C++ compilers are amazing:
#include <cstdint>
#include <type_traits>
#include <cstddef>
#include <iterator>
struct little_endian {}; // low bytes first
struct big_endian {}; // high bytes first
template<class T, class Iter>
void copy_bytes_le(T value, Iter first)
{
for(auto count = sizeof(T) ; count-- ; )
{
*first++ = static_cast<unsigned char>(value & T(0xff));
value /= 0x100;
}
}
template<class T, class Iter>
auto integral_to_bytes(T value, Iter target, little_endian)
-> std::enable_if_t<std::is_unsigned_v<T>>
{
copy_bytes_le(value, target);
}
template<class T, class Iter>
auto integral_to_bytes(T value, Iter target, big_endian)
-> std::enable_if_t<std::is_unsigned_v<T>>
{
copy_bytes_le(value,
std::make_reverse_iterator(target + sizeof(T)));
}
int main()
{
extern std::uint16_t get_some_value();
extern void foo(unsigned char*);
unsigned char buffer[6];
std::uint16_t some_value = get_some_value();
// little_endian
integral_to_bytes(some_value, buffer + 3, little_endian());
foo(buffer);
// big-endian
integral_to_bytes(some_value, buffer + 3, big_endian());
foo(buffer);
}
Related
I have an int64 variable with a random value. I want to set its lower 32 bits to 0xf0ffffff.
The variable is in the rdx register, but I want to edit the edx value:
ContextRecord->Rdx = 0xf0ffffff; // Not correct
The variable is in the rdx register, but I want to edit the edx value.
I assume that this means you want to keep the most significant 32 bits intact and only change the least significant 32 bits.
Assuming that the data member ContextRecord->Rdx contains the original value and you want to write back the edited value to this data member, then you could use the following code:
auto temp = ContextRecord->Rdx;
temp &= 0xffffffff00000000; //set least significant 32-bits to 00000000
temp |= 0x00000000f0ffffff; //set least significant 32-bits to f0ffffff
ContextRecord->Rdx = temp;
Of course, these lines could be combined into a single line, like this:
ContextRecord->Rdx = ContextRecord->Rdx & 0xffffffff00000000 | 0x00000000f0ffffff;
Please note that this line only works because & has a higher operator precedence than |, otherwise additional parentheses would be required.
Read the whole value, mask out the lower bits and bitwise-OR it with the 32 bit value you want there:
#include <stdint.h>
void f(int64_t *X)
{
*X = (*X & ~(uint64_t)0xffffffff) //mask out the lower 32 bits
| 0xf0ffffff; //<value to set into the lower 32 bits
}
gcc and clang on little-endian architectures optimize it to a direct mov into the lower 32 bits, i.e., the equivalent of:
#include <string.h>
//only works on little-endian architectures
void g(int64_t *X)
{
uint32_t y = 0xf0ffffff;
memcpy(X,&y,sizeof(y));
}
https://gcc.godbolt.org/z/nkMSvw
If you were doing this in straight assembly, you might be tempted to just
mov edx, 0xf0ffffff
since edx is an alias for the lower 32 bits of rdx; but note that on x86-64 a write to a 32-bit register zero-extends into the full 64-bit register, so this would also clear the upper half of rdx. Since it seems you want to do it in C/C++ anyway, you need to adjust Rdx directly. Something like -
CONTEXT ctx;
ctx.ContextFlags = CONTEXT_INTEGER;  // request the integer registers
GetThreadContext(hYourThread,&ctx);  // check return value, handle errors
DWORD64 newRdx = ctx.Rdx;
newRdx &= 0xfffffffff0ffffff;        // keep the upper 32 bits; clear the low bits that must end up 0
newRdx |= 0xf0ffffff;                // set the low bits that must end up 1
ctx.Rdx = newRdx;                    // store the edited value back into the context
I still need to write some unit tests, as I haven't tested this for all types on all architectures, but a template like the one below may be what you are looking for:
#include <cassert>
#include <cstdint>
#include <iostream>
#include <limits>
#include <type_traits>
template <const int bits, class /* This is where I wish concepts were completed */ T>
constexpr T modifyBits(T highPart, T lowPart)
{
// std::numeric_limits<T>::max() will fail on bits == 0 or float types
static_assert(bits != 0);
static_assert(std::is_signed<T>::value || std::is_unsigned<T>::value);
constexpr T almostallSetMask = std::numeric_limits<T>::max();
constexpr T bitmaskRaw = almostallSetMask >> (bits - (std::is_signed<T>::value ? 1 : 0));
constexpr T bitmaskHigh = bitmaskRaw << bits;
constexpr T bitmaskLow = ~bitmaskHigh;
return (highPart & bitmaskHigh) | (lowPart & bitmaskLow);
}
int main()
{
// Example usage
constexpr int64_t value = 0xFFFFFFFF00000000LL;
constexpr int64_t updated = modifyBits<32, int64_t>(value, 0xFFFFFFFFLL);
static_assert(updated == -1LL); // has to pass
return 0;
}
As you can see, static_assert and constexpr can be used in a generic way like this. The motto is: write it once, use it everywhere. Be warned though, without unit tests this is not complete at all. Feel free to copy this if you like; consider it CC0 or public domain.
A slightly more cheaty and less recommendable way to do it would be type punning:
struct splitBytes {
    __int32 lower, upper;
};

void changeLower(__int64* num) {
    splitBytes* pun = (splitBytes*)num;  // reinterpret the int64's storage as two 32-bit halves (assumes little-endian layout)
    pun->lower = 0xf0ffffff;
}
Note: type punning is pretty risky, so you really shouldn't use it unless it's unavoidable. It basically lets you treat a block of memory as if it were of a different type. Really, don't use it if you can avoid it. I'm just putting it out there.
The Problem
I'm currently trying to simulate some firmware in C++11. In the firmware we have a fixed data length of 32 bits; we split these 32 bits into smaller packets, e.g. a packet with a size of 9 bits and another of 6, which get packed into the 32-bit word.
In C++ I want to ensure the data I type in is of those lengths. I don't care if I overflow, just that only the 9 bits are operated on or passed onto another function.
Ideally I'd like some simple typedef like:
only_18_bits some_value;
My Attempt
struct sel_vals {
    int_fast32_t m_val : 18;
    int_fast8_t c_val : 5;
};
But this is a little annoying as I'd have to do this whenever I want to use it:
sel_vals somevals;
somevals.m_val = 5;
Seems a little verbose to me plus I have to declare the struct first.
Also for obvious reasons, I can't just do something like:
typedef sel_vals.m_val sel_vals_m_t;
typedef std::vector<sel_vals_m_t>;
I could use std::bitset<9>, but whenever I want to do some maths I have to convert it to unsigned; it just gets a little messy. I want to avoid mess.
Any ideas?
I would suggest a wrapper facade, something along these lines:
#include <cstdint>
template<int nbits> class bits {
uint64_t value;
static const uint64_t mask = (~(uint64_t)0) >> (64-nbits);
public:
bits(uint64_t initValue=0) : value(initValue & mask) {}
bits &operator=(uint64_t newValue)
{
value=newValue & mask;
return *this;
}
operator uint64_t() const { return value; }
};
//
bits<19> only_19_bits_of_precision;
With a little bit of work, you can define math operator overloads that directly operate on these templates.
With a little bit of more work, you could work this template to pick a smaller internal value, uint32_t, uint16_t, or uint8_t, if the nbits template parameter is small enough.
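As a sketch of that last idea (my own illustration, not part of the original answer, and assuming nbits is between 1 and 64), the internal type could be picked with std::conditional:
#include <cstdint>
#include <type_traits>
// Choose the narrowest unsigned type wide enough for nbits.
template<int nbits>
using storage_t =
    typename std::conditional<(nbits <= 8),  std::uint8_t,
    typename std::conditional<(nbits <= 16), std::uint16_t,
    typename std::conditional<(nbits <= 32), std::uint32_t,
                                             std::uint64_t>::type>::type>::type;
template<int nbits> class bits {
    typedef storage_t<nbits> value_type;
    value_type value;
    static constexpr value_type mask =
        static_cast<value_type>(~static_cast<std::uint64_t>(0) >> (64 - nbits));
public:
    bits(value_type initValue = 0) : value(initValue & mask) {}
    bits &operator=(value_type newValue) { value = newValue & mask; return *this; }
    operator value_type() const { return value; }
};
bits<9> nine_bit_value; // internally a uint16_t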
I've searched through many sites and cannot seem to find anything relevant.
I would like to be able to take the individual bytes of each default data type, such as short, unsigned short, int, unsigned int, float and double, and store each individual byte's information (binary part) into each index of an unsigned char array. How can this be achieved?
For example:
int main() {
short sVal = 1;
unsigned short usVal = 2;
int iVal = 3;
unsigned int uiVal = 4;
float fVal = 5.0f;
double dVal = 6.0;
const unsigned int uiLengthOfShort = sizeof(short);
const unsigned int uiLengthOfUShort = sizeof(unsigned short);
const unsigned int uiLengthOfInt = sizeof(int);
const unsigned int uiLengthOfUInt = sizeof(unsigned int);
const unsigned int uiLengthOfFloat = sizeof(float);
const unsigned int uiLengthOfDouble = sizeof(double);
unsigned char ucShort[uiLengthOfShort];
unsigned char ucUShort[uiLengthOfUShort];
unsigned char ucInt[uiLengthOfInt];
unsigned char ucUInt[uiLengthOfUInt];
unsigned char ucFloat[uiLengthOfFloat];
unsigned char ucDouble[uiLengthOfDouble];
// Above I declared a variable val for each data type to work with
// Next I created a const unsigned int of each type's size.
// Then I created unsigned char[] using each data types size respectively
// Now I would like to take each individual byte of the above val's
// and store them into the indexed location of each unsigned char array.
// For Example: - I'll not use int here since the int is
// machine and OS dependent.
// I will use a data type that is common across almost all machines.
// Here I will use the short as my example
// We know that a short is 2-bytes or has 16 bits encoded
// I would like to take the 1st byte of this short:
// (the first 8 bit sequence) and to store it into the first index of my unsigned char[].
// Then I would like to take the 2nd byte of this short:
// (the second 8 bit sequence) and store it into the second index of my unsigned char[].
// How would this be achieved for any of the data types?
// A Short in memory is 2 bytes here is a bit representation of an
// arbitrary short in memory { 0101 1101, 0011 1010 }
// I would like ucShort[0] = sVal's { 0101 1101 } &
// ucShort[1] = sVal's { 0011 1010 }
// ucShort[0] = sVal's first byte info (8-bit sequence)
// ucShort[1] = sVal's second byte info (8-bit sequence)
// ... and so on for each data type.
return 0;
}
OK, so first, don't do that if you can avoid it. It's dangerous and can be extremely architecture-dependent.
The commenters above are correct: union is the safest way to do it. You still have the endian problem, yes, but at least you don't have the stack-alignment problem (I assume this is for network code, so stack alignment is another potential architecture problem).
This is what I've found to be the most straight-forward way to do this:
uint32_t example_int;
char array[4];
//No endian switch
array[0] = ((char*) &example_int)[0];
array[1] = ((char*) &example_int)[1];
array[2] = ((char*) &example_int)[2];
array[3] = ((char*) &example_int)[3];
//Endian switch
array[0] = ((char*) &example_int)[3];
array[1] = ((char*) &example_int)[2];
array[2] = ((char*) &example_int)[1];
array[3] = ((char*) &example_int)[0];
If you're trying to write cross-architecture code, you will need to deal with endian problems one way or another. My suggestion is to construct a short endian test and build functions to "pack" and "unpack" byte arrays based on the above method. It should be noted that to "unpack" a byte array, simply reverse the above assignment statements.
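For what it's worth, a minimal endian test and "pack" helper along those lines could look like this (my sketch, not part of the answer above):
#include <cstdint>
#include <cstring>
// True if the machine stores the least significant byte at the lowest address.
bool is_little_endian()
{
    const std::uint32_t probe = 1;
    unsigned char first_byte;
    std::memcpy(&first_byte, &probe, 1);
    return first_byte == 1;
}
// "Pack" a 32-bit value into big-endian (network) order regardless of the host.
void pack_be32(std::uint32_t value, unsigned char out[4])
{
    out[0] = static_cast<unsigned char>(value >> 24);
    out[1] = static_cast<unsigned char>(value >> 16);
    out[2] = static_cast<unsigned char>(value >> 8);
    out[3] = static_cast<unsigned char>(value);
}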
The simplest correct way is:
// static_assert(sizeof ucShort == sizeof sVal);
memcpy( &ucShort, &sVal, sizeof ucShort);
The stuff you write in comments is not correct; all types have machine-dependent size, other than character types.
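If you want the same thing for every variable in the question, a tiny helper template is enough (a sketch of mine, not from the answer; to_bytes is a made-up name):
#include <cstring>
// Copies the object representation of any trivially copyable value into a byte array.
// The destination must provide at least sizeof(T) bytes.
template<typename T>
void to_bytes(const T& value, unsigned char* dest)
{
    std::memcpy(dest, &value, sizeof(T));
}
// Usage with the arrays from the question:
// to_bytes(sVal, ucShort);
// to_bytes(dVal, ucDouble);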
With the help of Raw N, who provided me a website, I did a search on byte manipulation and found this thread - http://www.cplusplus.com/forum/articles/12/ - which presents a solution similar to what I am looking for; however, I would have to repeat this process for every default data type.
After doing some testing, this is what I have come up with so far. It is dependent on machine architecture, but the concept is the same on other machines.
typedef struct packed_2bytes {
unsigned char c0;
unsigned char c1;
} packed_2bytes;
typedef struct packed_4bytes {
unsigned char c0;
unsigned char c1;
unsigned char c2;
unsigned char c3;
} packed_4bytes;
typedef struct packed_8bytes {
unsigned char c0;
unsigned char c1;
unsigned char c2;
unsigned char c3;
unsigned char c4;
unsigned char c5;
unsigned char c6;
unsigned char c7;
} packed_8bytes;
typedef union {
short s;
packed_2bytes bytes;
} packed_short;
typedef union {
unsigned short us;
packed_2bytes bytes;
} packed_ushort;
typedef union { // 32bit machine, os, compiler only
int i;
packed_4bytes bytes;
} packed_int;
typedef union { // 32 bit machine, os, compiler only
unsigned int ui;
packed_4bytes bytes;
} packed_uint;
typedef union {
float f;
packed_4bytes bytes;
} packed_float;
typedef union {
double d;
packed_8bytes bytes;
} packed_double;
There is no implementation here, only the declarations or definitions of these types. I do think that they should account for whichever endianness is being used, but the person using them has to know this ahead of time, just as they have to know the machine architecture's sizes for each of the default types. I am not sure whether there would be a problem with signed int due to one's complement, two's complement or sign-bit implementations, but it could also be something to consider.
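For illustration only (my addition, not part of the answer), using one of these unions could look like the snippet below; note that reading a union member other than the one last written is technically undefined behaviour in C++, even though most compilers tolerate it for this kind of byte inspection:
#include <cstdio>
typedef struct packed_2bytes { unsigned char c0; unsigned char c1; } packed_2bytes;
typedef union { short s; packed_2bytes bytes; } packed_short;
int main()
{
    packed_short ps;
    ps.s = 0x3A5D;                          // write through the short member
    std::printf("%02X %02X\n",              // inspect the two bytes individually
                ps.bytes.c0, ps.bytes.c1);  // which byte is c0 depends on host endianness
    return 0;
}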
Let's say that you are using <cstdint> and types like std::uint8_t and std::uint16_t, and want to do operations like += and *= on them. You'd like arithmetic on these numbers to wrap around modularly, as is typical in C/C++. This ordinarily works, and you find experimentally that it works with std::uint8_t, std::uint32_t and std::uint64_t, but not with std::uint16_t.
Specifically, multiplication with std::uint16_t sometimes fails spectacularly, with optimized builds producing all kinds of weird results. The reason? Undefined behavior due to signed integer overflow. The compiler is optimizing based upon the assumption that undefined behavior does not occur, and so starts pruning chunks of code from your program. The specific undefined behavior is the following:
std::uint16_t x = UINT16_C(0xFFFF);
x *= x;
The reason is C++'s promotion rules and the fact that you, like almost everyone else these days, are using a platform on which std::numeric_limits<int>::digits == 31. That is, int is 32-bit (digits counts bits but not the sign bit). x gets promoted to signed int, despite being unsigned, and 0xFFFF * 0xFFFF overflows for 32-bit signed arithmetic.
Demo of the general problem:
// Compile on a recent version of clang and run it:
// clang++ -std=c++11 -O3 -Wall -fsanitize=undefined stdint16.cpp -o stdint16
#include <cinttypes>
#include <cstdint>
#include <cstdio>
int main()
{
std::uint8_t a = UINT8_MAX; a *= a; // OK
std::uint16_t b = UINT16_MAX; b *= b; // undefined!
std::uint32_t c = UINT32_MAX; c *= c; // OK
std::uint64_t d = UINT64_MAX; d *= d; // OK
std::printf("%02" PRIX8 " %04" PRIX16 " %08" PRIX32 " %016" PRIX64 "\n",
a, b, c, d);
return 0;
}
You'll get a nice error:
main.cpp:11:55: runtime error: signed integer overflow: 65535 * 65535
cannot be represented in type 'int'
The way to avoid this, of course, is to cast to at least unsigned int before multiplying. Only the exact case where the number of bits of the unsigned type exactly equals half the number of bits of int is problematic. Any smaller would result in the multiplication being unable to overflow, as with std::uint8_t; any larger would result in the type exactly mapping to one of the promotion ranks, as with std::uint64_t matching unsigned long or unsigned long long depending on platform.
But this really sucks: it requires knowing which type is problematic based upon the size of int on the current platform. Is there some better way by which undefined behavior with unsigned integer multiplication can be avoided without #if mazes?
Some template metaprogramming with SFINAE, perhaps.
#include <type_traits>
template <typename T, typename std::enable_if<std::is_unsigned<T>::value && (sizeof(T) <= sizeof(unsigned int)) , int>::type = 0>
T safe_multiply(T a, T b) {
return (unsigned int)a * (unsigned int)b;
}
template <typename T, typename std::enable_if<std::is_unsigned<T>::value && (sizeof(T) > sizeof(unsigned int)) , int>::type = 0>
T safe_multiply(T a, T b) {
return a * b;
}
Demo.
Edit: simpler:
template <typename T, typename std::enable_if<std::is_unsigned<T>::value, int>::type = 0>
T safe_multiply(T a, T b) {
typedef typename std::make_unsigned<decltype(+a)>::type typ;
return (typ)a * (typ)b;
}
Demo.
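A quick usage sketch (mine, since the linked demos are not reproduced here; it assumes one of the safe_multiply templates above is in scope):
#include <cstdint>
#include <cstdio>
int main()
{
    std::uint16_t x = UINT16_MAX;
    std::uint16_t y = safe_multiply(x, x); // multiplies as unsigned int, no signed overflow
    std::printf("%04X\n", y);              // prints 0001, i.e. 0xFFFF * 0xFFFF mod 2^16
    return 0;
}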
Here's a relatively simple solution, which forces a promotion to unsigned int instead of int for unsigned type narrower than an int. I don't think any code is generated by promote, or at least no more code than the standard integer promotion; it will just force multiplication etc. to use unsigned ops instead of signed ones:
#include <limits>
#include <type_traits>
// Promote to unsigned if standard arithmetic promotion loses unsignedness
template<typename integer>
using promoted =
typename std::conditional<std::numeric_limits<decltype(integer() + 0)>::is_signed,
unsigned,
integer>::type;
// function for template deduction
template<typename integer>
constexpr promoted<integer> promote(integer x) { return x; }
// Quick test
#include <cstdint>
#include <iostream>
#include <limits>
int main() {
uint8_t i8 = std::numeric_limits<uint8_t>::max();
uint16_t i16 = std::numeric_limits<uint16_t>::max();
uint32_t i32 = std::numeric_limits<uint32_t>::max();
uint64_t i64 = std::numeric_limits<uint64_t>::max();
i8 *= promote(i8);
i16 *= promote(i16);
i32 *= promote(i32);
i64 *= promote(i64);
std::cout << " 8: " << static_cast<int>(i8) << std::endl
<< "16: " << i16 << std::endl
<< "32: " << i32 << std::endl
<< "64: " << i64 << std::endl;
return 0;
}
This article regarding a C solution to the case of uint32_t * uint32_t multiplication on a system in which int is 64 bits has a really simple solution that I hadn't thought of: 32 bit unsigned multiply on 64 bit causing undefined behavior?
That solution, translated to my problem, is simple:
// C++
static_cast<std::uint16_t>(1U * x * x)
// C
(uint16_t) (1U * x * x)
Simply involving 1U in the left side of the chain of arithmetic operations like that will promote the first parameter to the larger rank of unsigned int and std::uint16_t, then so on down the chain. The promotion will ensure that the answer is both unsigned and that the requested bits remain present. The final cast then reduces it back to the desired type.
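Wrapped into a tiny helper, the idiom might look like this (my sketch, with a made-up name mul_u16):
#include <cstdint>
// Multiply two uint16_t values with well-defined wraparound: the 1U forces the whole
// expression into unsigned arithmetic before the promotion to signed int can cause UB.
inline std::uint16_t mul_u16(std::uint16_t a, std::uint16_t b)
{
    return static_cast<std::uint16_t>(1U * a * b);
}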
This is really simple and elegant, and I wish I had thought of it a year ago. Thank you to everyone who responded before.
I have a BIG problem with the answer to this question Swap bits in c++ for a double
Yet, this question is more or less what I'm searching for:
I receive a double from the network and I want to encode it properly on my machine.
In the case where I receive an int, I perform this code using ntohl:
int * piData = reinterpret_cast<int*>((void*)pData);
//manage endianness of incomming network data
unsigned long ulValue = ntohl(*piData);
int iValue = static_cast<int>(ulValue);
But in the case where I receive a double, I don't know what to do.
The answer to the question suggest to do:
template <typename T>
void swap_endian(T& pX)
{
char& raw = reinterpret_cast<char&>(pX);
std::reverse(&raw, &raw + sizeof(T));
}
However, if I quote this site:
The ntohl() function converts the unsigned integer netlong from network byte order to host byte order.
When the two byte orders are different, this means the endian-ness of the data will be changed. When the two byte orders are the same, the data will not be changed.
On the contrary, #GManNickG's answer to the question always does the inversion with std::reverse.
Am I wrong in considering that this answer is false? (at least with respect to the network management of endianness which the use of ntohl suggests, though it was not precisely stated in the title of the OP's question)
In the end: should I split my double into two 4-byte parts and apply the ntohl function to the two parts? Are there more canonical solutions?
There's also this interesting question in C, host to network double?, but it is limited to 32-bit values. And the answer says doubles should be converted to strings because of architecture differences... I'm also going to work with audio samples; should I really consider converting all the samples to strings in my database? (The doubles come from a database that I query over the network.)
If your doubles are in IEEE 754 format, you should be relatively OK. Now you have to divide their 64 bits into two 32-bit halves and then transmit them in big-endian order (which is network order):
How about:
#include <cstdint> // std::int64_t, std::uint32_t
// htonl/ntohl come from <arpa/inet.h> (or <winsock2.h> on Windows)

void send_double(double d) {
    std::int64_t i64 = *reinterpret_cast<std::int64_t *>(&d); /* Ugly, but works */
    int hiword = htonl(static_cast<int>(i64 >> 32));
    send(hiword);
    int loword = htonl(static_cast<int>(i64));
    send(loword);
}

double recv_double() {
    int hiword = ntohl(recv_int());
    int loword = ntohl(recv_int());
    std::int64_t i64 = (static_cast<std::int64_t>(hiword) << 32)
                     | static_cast<std::uint32_t>(loword);
    return *reinterpret_cast<double *>(&i64);
}
Assuming you have a compile-time option to determine endianness:
#if BIG_ENDIAN
template <typename T>
void swap_endian(T& pX)
{
// Don't need to do anything here...
}
#else
template <typename T>
void swap_endian(T& pX)
{
char& raw = reinterpret_cast<char&>(pX);
std::reverse(&raw, &raw + sizeof(T));
}
#endif
Of course, the other option is to not send double across the network at all - considering that it's not guaranteed to be IEEE-754 compatible - there are machines out there using other floating point formats... Using for example a string would work much better...
I could not make John Källén's code work on my machine. Moreover, it might be more useful to convert the double into bytes (8 bits, one char each):
#include <cassert>
#include <string>

template<typename T>
std::string to_byte_string(const T& v)
{
    const char* begin_ = reinterpret_cast<const char*>(&v);
    return std::string(begin_, begin_ + sizeof(T));
}

template<typename T>
T from_byte_string(std::string& s)
{
    assert(s.size() == sizeof(T) && "Wrong Type Cast");
    return *(reinterpret_cast<T*>(&s[0]));
}
This code will also work for structs which use POD types.
If you really want the double as two ints
double d;
int* data = reinterpret_cast<int*>(&d);
int first = data[0];
int second = data[1];
Finally, long int will not always be a 64-bit integer (I had to use long long int to get a 64-bit int on my machine).
If you want to know the system endianness (C++20 only, i.e. #if __cplusplus > 201703L):
#include <bit>
#include <iostream>
using namespace std;
int main()
{
if constexpr (endian::native == endian::big)
cout << "big-endian";
else if constexpr (endian::native == endian::little)
cout << "little-endian";
else
cout << "mixed-endian";
}
For more info: https://en.cppreference.com/w/cpp/types/endian
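If you are stuck before C++20, a runtime check is a common fallback (a sketch of mine, not from the answer above):
#include <cstdint>
#include <cstring>
#include <iostream>
int main()
{
    const std::uint16_t probe = 0x0102;
    unsigned char first;
    std::memcpy(&first, &probe, 1);  // look at the lowest-addressed byte
    std::cout << (first == 0x02 ? "little-endian" : "big-endian") << '\n';
}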