Suppose I want to write decimal 31 into a binary file (which is already loaded into a vector) as 4 bytes, so I have to write it as 00 00 00 1f, but I don't know how to convert a decimal number into a hex string of 4 bytes.
So, expected hex in vector of unsigned char is:
0x00 0x00 0x00 0x1f // int value of this is 31
To do this I tried the following:
std::stringstream stream;
stream << std::setfill('0') << std::setw(sizeof(int) * 2) << std::hex << 31;
cout << stream.str();
Output:
0000001f
The above code gives the output as a string, but I want it in a vector of unsigned char in 0x format, so after conversion my output vector should have the elements 0x00 0x00 0x00 0x1F.
Without bothering with endianness you could copy the int value into a character buffer of the appropriate size. This buffer could be the vector itself.
Perhaps something like this:
#include <cstdint> // uint8_t
#include <cstring> // std::memcpy
#include <vector>

std::vector<uint8_t> int_to_vector(unsigned value)
{
// Create a vector of unsigned characters (bytes on a byte-oriented platform)
// The size will be set to the same size as the value type
std::vector<uint8_t> buffer(sizeof value);
// Do a byte-wise copy of the value into the vector data
std::memcpy(buffer.data(), &value, sizeof value);
return buffer;
}
The order of bytes in the vector will always be in the host's native order. If a specific order is mandated then each byte of the multi-byte value needs to be copied into a specific element of the vector using bitwise operations (std::memcpy can't be used).
Also note that this function will break strict aliasing if uint8_t isn't an alias of unsigned char. And uint8_t is an optional type; there are platforms which don't have 8-bit entities (though they are not common).
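If either of those caveats matters, a minimal sketch using unsigned char directly (the one character type that is always allowed to alias any object representation) looks the same:
std::vector<unsigned char> int_to_vector(unsigned value)
{
    std::vector<unsigned char> buffer(sizeof value);
    std::memcpy(buffer.data(), &value, sizeof value);
    return buffer;
}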
For an endianness-specific variant, where each value of a byte is extracted one by one and added to the vector, perhaps something like this:
std::vector<uint8_t> int_to_be_vector(unsigned value)
{
// Create a vector of unsigned characters (bytes on a byte-oriented platform)
// The size will be set to the same size as the value type
std::vector<uint8_t> buffer(sizeof value);
// For each byte in the multi-byte value, copy it to the "correct" place in the vector
for (size_t i = buffer.size(); i > 0; --i)
{
// The cast truncates the value, dropping all but the lowest eight bits
buffer[i - 1] = static_cast<uint8_t>(value);
value >>= 8;
}
return buffer;
}
Example of it working
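For instance, a minimal test driver (assuming the int_to_be_vector above and the headers already included) might be:
#include <cstdio>

int main()
{
    // 31 encoded big-endian should print: 00 00 00 1f
    auto bytes = int_to_be_vector(31u);
    for (auto b : bytes)
        std::printf("%02x ", b);
    std::printf("\n");
}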
You could use a loop to extract one byte at a time of the original number and store that in a vector.
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>
using u8 = std::uint8_t;
using u32 = std::uint32_t;
std::vector<u8> GetBytes(const u32 number) {
const u32 mask{0xFF};
u32 remaining{number};
std::vector<u8> result{};
while (remaining != 0u) {
const u32 bits{remaining & mask};
const u8 res{static_cast<u8>(bits)};
result.push_back(res);
remaining >>= 8u;
}
std::reverse(std::begin(result), std::end(result));
return result;
}
int main() {
const u32 myNumber{0xABC123};
const auto bytes{GetBytes(myNumber)};
std::cout << std::hex << std::showbase;
for (const auto b : bytes) {
std::cout << static_cast<u32>(b) << ' ';
}
std::cout << std::endl;
return 0;
}
The output of this program is:
0xab 0xc1 0x23
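Note that GetBytes drops leading zero bytes: GetBytes(31) yields just 0x1f, not four bytes. If a fixed four-byte output is required, as in the original question, a variant along these lines (the name is illustrative) pads to the full width:
std::vector<u8> GetBytesFixed(const u32 number) {
  std::vector<u8> result(sizeof(u32));
  u32 remaining{number};
  // Fill from the back so the most significant byte ends up first
  for (auto i = result.size(); i > 0; --i) {
    result[i - 1] = static_cast<u8>(remaining & 0xFF);
    remaining >>= 8u;
  }
  return result;
}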
I have a class that facilitates encoding/decoding raw memory. I ultimately store a void pointer to the memory and the number of bytes being referenced. I'm concerned about aliasing issues as well as getting the bit-shifting operations for the encoding correct. Essentially, which type should I use for WHAT_TYPE: char, unsigned char, int8_t, uint8_t, int_fast8_t, uint_fast8_t, int_least8_t, or uint_least8_t? Is there a definitive answer in the spec?
class sample_buffer {
size_t index; // For illustrative purposes
void *memory;
size_t num_bytes;
public:
sample_buffer(size_t n) :
index(0),
memory(malloc(n)),
num_bytes(memory == nullptr ? 0 : n) {
}
~sample_buffer() {
if (memory != nullptr) free(memory);
}
void put(uint32_t const value) {
WHAT_TYPE *bytes = static_cast<WHAT_TYPE *>(memory);
bytes[index] = value >> 24;
bytes[index + 1] = (value >> 16) & 0xFF;
bytes[index + 2] = (value >> 8) & 0xFF;
bytes[index + 3] = value & 0xFF;
index += 4;
}
void read(uint32_t &value) {
WHAT_TYPE const *bytes = static_cast<WHAT_TYPE const *>(memory);
value = (static_cast<uint32_t>(bytes[index]) << 24) |
(static_cast<uint32_t>(bytes[index + 1]) << 16) |
(static_cast<uint32_t>(bytes[index + 2]) << 8) |
static_cast<uint32_t>(bytes[index + 3]);
index += 4;
}
};
In C++17: std::byte. This type was created precisely for this purpose, to convey all the right semantic meaning. Moreover, it has all the operators you would need to use on raw data (like the << in your example), but none of the operators that you wouldn't.
Before C++17: unsigned char. The standard defines object representation as a sequence of unsigned char, so it's just a good type to use. Furthermore, as Mooing Duck rightly suggests, using unsigned char* would prevent many bugs caused by mistakenly using your char* that refers to raw bytes as if it were a string and passing it into a function like strlen.
If you really cannot use unsigned char, then you should use char. Both unsigned char and char are the types you're allowed to alias through, so either are preferred to any of the other integer types.
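As a sketch of what the C++17 variant might look like, written against the sample_buffer from the question (a static_cast from an integer to std::byte is well-formed because each shifted value fits in unsigned char):
void put(uint32_t const value) {
    std::byte *bytes = static_cast<std::byte *>(memory); // needs <cstddef>
    bytes[index]     = static_cast<std::byte>(value >> 24);
    bytes[index + 1] = static_cast<std::byte>((value >> 16) & 0xFF);
    bytes[index + 2] = static_cast<std::byte>((value >> 8) & 0xFF);
    bytes[index + 3] = static_cast<std::byte>(value & 0xFF);
    index += 4;
}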
Is this a good and fast enough way to set an integer value into the vector of char (d_vData)?
Or should I use memcpy for such a small operation?
{
int p1 = GetInt();
int p2 = GetInt();
if ( !d_bProtected )
{
d_vData.at(p2) = p1 & 0xFF;
d_vData.at(p2+1) = (p1 >> 8) & 0xFF;
d_vData.at(p2+2) = (p1 >> 16) & 0xFF;
d_vData.at(p2+3) = (p1 >> 24) & 0xFF;
//memcpy( &d_vData[p2], reinterpret_cast<char*>(&p1), sizeof(p1) );
}
}
Since memcpy is often a compiler intrinsic, it is likely to give you exactly the same performance as manual byte-copying, with the benefit of not having to do all that binary algebra yourself.
I vote for memcpy.
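For reference, a minimal sketch of the memcpy form, essentially the commented-out line from the question (the bounds check is an illustrative extra; std::memcpy needs <cstring>):
if (!d_bProtected && p2 >= 0 && p2 + sizeof p1 <= d_vData.size())
    std::memcpy(&d_vData[p2], &p1, sizeof p1);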
A potentially nicer interface for d_vData would be to make it a stringstream which, despite the name, acts as more of a binary stream buffer and has a nice interface for writing arbitrary binary data to an underlying buffer.
This would make your binary reading and writing entirely generic and wouldn't require you to write specialized functionality for every type you may potentially be writing to d_vData.
The example below shows how this is possible.
#include <cstring>
#include <sstream>
#include <iostream>
using namespace std;
template<typename T>
void write(stringstream& ss, const T& t)
{
ss.write(reinterpret_cast<const char*>(&t), sizeof(T));
}
template<typename T>
void read(stringstream& ss, T& t)
{
char d[sizeof(T)];
ss.read(d, sizeof(d));
std::memcpy(&t, d, sizeof(T)); // memcpy avoids strict-aliasing problems
}
void read_stringstream(stringstream& ss)
{
// ensure read position is at the beginning of the stream
ss.seekg(0, ios_base::beg);
// extract data from a string stream
int i;
double d;
read(ss, i);
read(ss, d);
cout << "i:" << i << ", d:" << d << endl;
}
int main()
{
stringstream ss;
int i = 42;
double d = 69.0;
write(ss, i);
write(ss, d);
read_stringstream(ss);
}
1) I have a big buffer.
2) I have a lot of variables of almost every type.
I use this buffer to send to multiple destinations, with different byte orders.
When I send in network byte order, I usually use htons or htonl, plus a customized function for specific data types.
So my issue: every time I construct the buffer, I change the byte order of each variable and then use memcpy.
Does anyone know a better way? I was wishing for an efficient memcpy with a specific intended byte order.
An example:
UINT32 dwordData = 0x01234567;
UINT32 dwordTmp = htonl(dwordData);
memcpy(&buffer[loc], &dwordTmp, sizeof(UINT32));
loc += sizeof(UINT32);
This is just an example I wrote off the cuff, by the way. I am hoping for a function that looks like
memcpyToNetwork(&buffer[loc], &dwordData, sizeof(UINT32));
if you know what I mean. The naming is just descriptive: depending on the data type, it would handle the byte order for that specific type, so I don't have to keep changing byte orders manually and copying through a temporary variable, saving the double copy.
There is no standard solution, but it is fairly easy to write yourself.
Off the top of my head, an outline could look like this:
// Macro to be able to switch easily between encodings. Just for convenience
#define WriteBuffer WriteBufferBE
// Generic template as interface specification. Not implemented itself
// Takes buffer (of sufficient size) and value, returns number of bytes written
template <typename T>
size_t WriteBufferBE(char* buffer, const T& value);
template <typename T>
size_t WriteBufferLE(char* buffer, const T& value);
// Specializations for specific types
template <>
size_t WriteBufferBE(char* buffer, const UINT32& value)
{
buffer[0] = (value >> 24) & 0xFF;
buffer[1] = (value >> 16) & 0xFF;
buffer[2] = (value >> 8) & 0xFF;
buffer[3] = (value) & 0xFF;
return 4;
}
template <>
size_t WriteBufferBE(char* buffer, const UINT16& value)
{
buffer[0] = (value >> 8) & 0xFF;
buffer[1] = (value) & 0xFF;
return 2;
}
template <>
size_t WriteBufferLE(char* buffer, const UINT32& value)
{
buffer[0] = (value) & 0xFF;
buffer[1] = (value >> 8) & 0xFF;
buffer[2] = (value >> 16) & 0xFF;
buffer[3] = (value >> 24) & 0xFF;
return 4;
}
template <>
size_t WriteBufferLE(char* buffer, const UINT16& value)
{
buffer[0] = (value) & 0xFF;
buffer[1] = (value >> 8) & 0xFF;
return 2;
}
// Other types left as an exercise. Can use the existing functions!
// Usage:
loc += WriteBuffer(&buffer[loc], dwordData);
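A matching read side is left out of the outline above; a hypothetical ReadBufferBE following the same pattern could look like this (the unsigned char casts avoid sign-extension surprises when char is signed):
// Generic interface; returns number of bytes consumed
template <typename T>
size_t ReadBufferBE(const char* buffer, T& value);

template <>
size_t ReadBufferBE(const char* buffer, UINT32& value)
{
    value = (static_cast<UINT32>(static_cast<unsigned char>(buffer[0])) << 24)
          | (static_cast<UINT32>(static_cast<unsigned char>(buffer[1])) << 16)
          | (static_cast<UINT32>(static_cast<unsigned char>(buffer[2])) << 8)
          |  static_cast<UINT32>(static_cast<unsigned char>(buffer[3]));
    return 4;
}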
The man pages for htonl() seem to suggest that you can only use it for up to 32 bit values. (In reality, ntohl() is defined for unsigned long, which on my platform is 32 bits. I suppose if the unsigned long were 8 bytes, it would work for 64 bit ints).
My problem is that I need to convert 64 bit integers (in my case, this is an unsigned long long) from big endian to little endian. Right now, I need to do that specific conversion. But it would be even nicer if the function (like ntohl()) would NOT convert my 64 bit value if the target platform WAS big endian. (I'd rather avoid adding my own preprocessor magic to do this).
What can I use? I would like something that is standard if it exists, but I am open to implementation suggestions. I have seen this type of conversion done in the past using unions. I suppose I could have a union with an unsigned long long and a char[8]. Then swap the bytes around accordingly. (Obviously would break on platforms that were big endian).
Documentation: man htobe64 on Linux (glibc >= 2.9) or FreeBSD.
Unfortunately OpenBSD, FreeBSD and glibc (Linux) did not quite work together smoothly to create one (non-kernel-API) libc standard for this, during an attempt in 2009.
Currently, this short bit of preprocessor code:
#if defined(__linux__)
# include <endian.h>
#elif defined(__FreeBSD__) || defined(__NetBSD__)
# include <sys/endian.h>
#elif defined(__OpenBSD__)
# include <sys/types.h>
# define be16toh(x) betoh16(x)
# define be32toh(x) betoh32(x)
# define be64toh(x) betoh64(x)
#endif
(tested on Linux and OpenBSD) should hide the differences. It gives you the Linux/FreeBSD-style macros on those 4 platforms.
Use example:
#include <stdint.h> // For 'uint64_t'
uint64_t host_int = 123;
uint64_t big_endian;
big_endian = htobe64( host_int );
host_int = be64toh( big_endian );
It's the most "standard C library"-ish approach available at the moment.
I would recommend reading this: http://commandcenter.blogspot.com/2012/04/byte-order-fallacy.html
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
uint64_t
ntoh64(const uint64_t *input)
{
uint64_t rval;
uint8_t *data = (uint8_t *)&rval;
data[0] = *input >> 56;
data[1] = *input >> 48;
data[2] = *input >> 40;
data[3] = *input >> 32;
data[4] = *input >> 24;
data[5] = *input >> 16;
data[6] = *input >> 8;
data[7] = *input >> 0;
return rval;
}
uint64_t
hton64(const uint64_t *input)
{
return (ntoh64(input));
}
int
main(void)
{
uint64_t ull;
ull = 1;
printf("%"PRIu64"\n", ull);
ull = ntoh64(&ull);
printf("%"PRIu64"\n", ull);
ull = hton64(&ull);
printf("%"PRIu64"\n", ull);
return 0;
}
Will show the following output:
1
72057594037927936
1
You can test this with ntohl() if you drop the upper 4 bytes.
You can also turn this into a nice templated function in C++ that will work on any size of integer:
template <typename T>
static inline T
hton_any(const T &input)
{
T output(0);
const std::size_t size = sizeof(input);
uint8_t *data = reinterpret_cast<uint8_t *>(&output);
for (std::size_t i = 0; i < size; i++) {
data[i] = input >> ((size - i - 1) * 8);
}
return output;
}
Now you're 128-bit safe too!
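Usage might look like this (assuming the template above; it only makes sense for unsigned integer types, since it relies on shifts):
uint64_t host = 0x1122334455667788ULL;
uint64_t net  = hton_any(host); // byte-swapped on little-endian hosts,
                                // a no-op on big-endian hosts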
Quick answer
#include <endian.h> // __BYTE_ORDER __LITTLE_ENDIAN
#include <byteswap.h> // bswap_64()
uint64_t value = 0x1122334455667788;
#if __BYTE_ORDER == __LITTLE_ENDIAN
value = bswap_64(value); // Compiler builtin GCC/Clang
#endif
Header file
As reported by zhaorufei (see her/his comment), endian.h is not a standard C++ header, and the macros __BYTE_ORDER and __LITTLE_ENDIAN may be undefined. Therefore the #if statement is not predictable, because undefined macros are treated as 0.
Please edit this answer if you want to share your C++ elegant trick to detect endianness.
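For what it's worth, C++20 finally added a standard way to detect endianness: std::endian in the <bit> header. A minimal sketch (only valid in C++20 or later):
#include <bit>

constexpr bool host_is_little_endian =
    (std::endian::native == std::endian::little);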
Portability
Moreover, the macro bswap_64() is available for the GCC and Clang compilers but not for the Visual C++ compiler. To provide portable source code, you may take inspiration from the following snippet:
#ifdef _MSC_VER
#include <stdlib.h>
#define bswap_16(x) _byteswap_ushort(x)
#define bswap_32(x) _byteswap_ulong(x)
#define bswap_64(x) _byteswap_uint64(x)
#else
#include <byteswap.h> // bswap_16 bswap_32 bswap_64
#endif
See also a more portable source code: Cross-platform _byteswap_uint64
C++14 constexpr template function
Generic hton() for 16 bits, 32 bits, 64 bits and more...
#include <endian.h> // __BYTE_ORDER __LITTLE_ENDIAN
#include <algorithm> // std::reverse()
template <typename T>
constexpr T htonT (T value) noexcept
{
#if __BYTE_ORDER == __LITTLE_ENDIAN
char* ptr = reinterpret_cast<char*>(&value);
std::reverse(ptr, ptr + sizeof(T));
#endif
return value;
}
C++11 constexpr template function
C++11 does not permit local variables in constexpr functions.
Therefore the trick is to use an argument with a default value.
Moreover, a C++11 constexpr function must consist of a single expression.
Therefore the body is composed of one return containing comma-separated statements.
template <typename T>
constexpr T htonT (T value, char* ptr=0) noexcept
{
return
#if __BYTE_ORDER == __LITTLE_ENDIAN
ptr = reinterpret_cast<char*>(&value),
std::reverse(ptr, ptr + sizeof(T)),
#endif
value;
}
No compilation warnings on either clang-3.5 or GCC-4.9 using -Wall -Wextra -pedantic
(see compilation and run output on coliru).
C++11 constexpr template SFINAE functions
However, the above version does not allow creating a constexpr variable such as:
constexpr int32_t hton_six = htonT( int32_t(6) );
Finally we need to separate (specialize) the functions depending on 16/32/64 bits.
But we can still keep generic functions.
(see the full snippet on coliru)
The C++11 snippet below uses the trait std::enable_if to exploit Substitution Failure Is Not An Error (SFINAE).
template <typename T>
constexpr typename std::enable_if<sizeof(T) == 2, T>::type
htonT (T value) noexcept
{
return ((value & 0x00FF) << 8)
| ((value & 0xFF00) >> 8);
}
template <typename T>
constexpr typename std::enable_if<sizeof(T) == 4, T>::type
htonT (T value) noexcept
{
return ((value & 0x000000FF) << 24)
| ((value & 0x0000FF00) << 8)
| ((value & 0x00FF0000) >> 8)
| ((value & 0xFF000000) >> 24);
}
template <typename T>
constexpr typename std::enable_if<sizeof(T) == 8, T>::type
htonT (T value) noexcept
{
return ((value & 0xFF00000000000000ull) >> 56)
| ((value & 0x00FF000000000000ull) >> 40)
| ((value & 0x0000FF0000000000ull) >> 24)
| ((value & 0x000000FF00000000ull) >> 8)
| ((value & 0x00000000FF000000ull) << 8)
| ((value & 0x0000000000FF0000ull) << 24)
| ((value & 0x000000000000FF00ull) << 40)
| ((value & 0x00000000000000FFull) << 56);
}
Or an even shorter version based on built-in compiler macros and the C++14 syntax std::enable_if_t<xxx> as a shortcut for typename std::enable_if<xxx>::type:
template <typename T>
constexpr typename std::enable_if_t<sizeof(T) == 2, T>
htonT (T value) noexcept
{
return bswap_16(value); // __bswap_constant_16
}
template <typename T>
constexpr typename std::enable_if_t<sizeof(T) == 4, T>
htonT (T value) noexcept
{
return bswap_32(value); // __bswap_constant_32
}
template <typename T>
constexpr typename std::enable_if_t<sizeof(T) == 8, T>
htonT (T value) noexcept
{
return bswap_64(value); // __bswap_constant_64
}
Test code of the first version
std::uint8_t uc = 'B'; std::cout <<std::setw(16)<< uc <<'\n';
uc = htonT( uc ); std::cout <<std::setw(16)<< uc <<'\n';
std::uint16_t us = 0x1122; std::cout <<std::setw(16)<< us <<'\n';
us = htonT( us ); std::cout <<std::setw(16)<< us <<'\n';
std::uint32_t ul = 0x11223344; std::cout <<std::setw(16)<< ul <<'\n';
ul = htonT( ul ); std::cout <<std::setw(16)<< ul <<'\n';
std::uint64_t uL = 0x1122334455667788; std::cout <<std::setw(16)<< uL <<'\n';
uL = htonT( uL ); std::cout <<std::setw(16)<< uL <<'\n';
Test code of the second version
constexpr uint8_t a1 = 'B'; std::cout<<std::setw(16)<<a1<<'\n';
constexpr auto b1 = htonT(a1); std::cout<<std::setw(16)<<b1<<'\n';
constexpr uint16_t a2 = 0x1122; std::cout<<std::setw(16)<<a2<<'\n';
constexpr auto b2 = htonT(a2); std::cout<<std::setw(16)<<b2<<'\n';
constexpr uint32_t a4 = 0x11223344; std::cout<<std::setw(16)<<a4<<'\n';
constexpr auto b4 = htonT(a4); std::cout<<std::setw(16)<<b4<<'\n';
constexpr uint64_t a8 = 0x1122334455667788;std::cout<<std::setw(16)<<a8<<'\n';
constexpr auto b8 = htonT(a8); std::cout<<std::setw(16)<<b8<<'\n';
Output
B
B
1122
2211
11223344
44332211
1122334455667788
8877665544332211
Code generation
The online compiler at gcc.godbolt.org shows the generated code.
g++-4.9.2 -std=c++14 -O3
std::enable_if<(sizeof (unsigned char))==(1), unsigned char>::type htonT<unsigned char>(unsigned char):
movl %edi, %eax
ret
std::enable_if<(sizeof (unsigned short))==(2), unsigned short>::type htonT<unsigned short>(unsigned short):
movl %edi, %eax
rolw $8, %ax
ret
std::enable_if<(sizeof (unsigned int))==(4), unsigned int>::type htonT<unsigned int>(unsigned int):
movl %edi, %eax
bswap %eax
ret
std::enable_if<(sizeof (unsigned long))==(8), unsigned long>::type htonT<unsigned long>(unsigned long):
movq %rdi, %rax
bswap %rax
ret
clang++-3.5.1 -std=c++14 -O3
std::enable_if<(sizeof (unsigned char))==(1), unsigned char>::type htonT<unsigned char>(unsigned char): # #std::enable_if<(sizeof (unsigned char))==(1), unsigned char>::type htonT<unsigned char>(unsigned char)
movl %edi, %eax
retq
std::enable_if<(sizeof (unsigned short))==(2), unsigned short>::type htonT<unsigned short>(unsigned short): # #std::enable_if<(sizeof (unsigned short))==(2), unsigned short>::type htonT<unsigned short>(unsigned short)
rolw $8, %di
movzwl %di, %eax
retq
std::enable_if<(sizeof (unsigned int))==(4), unsigned int>::type htonT<unsigned int>(unsigned int): # #std::enable_if<(sizeof (unsigned int))==(4), unsigned int>::type htonT<unsigned int>(unsigned int)
bswapl %edi
movl %edi, %eax
retq
std::enable_if<(sizeof (unsigned long))==(8), unsigned long>::type htonT<unsigned long>(unsigned long): # #std::enable_if<(sizeof (unsigned long))==(8), unsigned long>::type htonT<unsigned long>(unsigned long)
bswapq %rdi
movq %rdi, %rax
retq
Note: my original answer was not C++11-constexpr compliant.
This answer is in Public Domain CC0 1.0 Universal
To detect your endianness, use the following union:
union {
unsigned long long ull;
char c[8];
} x;
x.ull = 0x0123456789abcdef; // may need special suffix for ULL.
Then you can check the contents of x.c[] to detect where each byte went.
To do the conversion, I would use that detection code once to see what endian-ness the platform is using, then write my own function to do the swaps.
You could make it dynamic so that the code will run on any platform (detect once then use a switch inside your conversion code to choose the right conversion) but, if you're only going to be using one platform, I'd just do the detection once in a separate program then code up a simple conversion routine, making sure you document that it only runs (or has been tested) on that platform.
Here's some sample code I whipped up to illustrate it. It's been tested though not in a thorough manner, but should be enough to get you started.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define TYP_INIT 0
#define TYP_SMLE 1
#define TYP_BIGE 2
static unsigned long long cvt(unsigned long long src) {
static int typ = TYP_INIT;
unsigned char c;
union {
unsigned long long ull;
unsigned char c[8];
} x;
if (typ == TYP_INIT) {
x.ull = 0x01;
typ = (x.c[7] == 0x01) ? TYP_BIGE : TYP_SMLE;
}
if (typ == TYP_SMLE)
return src;
x.ull = src;
c = x.c[0]; x.c[0] = x.c[7]; x.c[7] = c;
c = x.c[1]; x.c[1] = x.c[6]; x.c[6] = c;
c = x.c[2]; x.c[2] = x.c[5]; x.c[5] = c;
c = x.c[3]; x.c[3] = x.c[4]; x.c[4] = c;
return x.ull;
}
int main (void) {
unsigned long long ull = 1;
ull = cvt (ull);
printf ("%llu\n",ull);
return 0;
}
Keep in mind that this just checks for pure big/little endian. If you have some weird variant where the bytes are stored in, for example, {5,2,3,1,0,7,6,4} order, cvt() will be a tad more complex. Such an architecture doesn't deserve to exist, but I'm not discounting the lunacy of our friends in the microprocessor industry :-)
Also keep in mind that this is technically undefined behaviour, as you're not supposed to access a union member by any field other than the last one written. It will probably work with most implementations but, for the purist point of view, you should probably just bite the bullet and use macros to define your own routines, something like:
// Assumes 64-bit unsigned long long.
unsigned long long switchOrderFn (unsigned long long in) {
in = (in & 0xff00000000000000ULL) >> 56
   | (in & 0x00ff000000000000ULL) >> 40
   | (in & 0x0000ff0000000000ULL) >> 24
   | (in & 0x000000ff00000000ULL) >> 8
   | (in & 0x00000000ff000000ULL) << 8
   | (in & 0x0000000000ff0000ULL) << 24
   | (in & 0x000000000000ff00ULL) << 40
   | (in & 0x00000000000000ffULL) << 56;
return in;
}
#ifdef ULONG_IS_NET_ORDER
#define switchOrder(n) (n)
#else
#define switchOrder(n) switchOrderFn(n)
#endif
Some BSD systems have betoh64, which does what you need.
A one-line macro for a 64-bit swap on little-endian machines:
#define bswap64(y) (((uint64_t)ntohl((uint32_t)(y))) << 32 | ntohl((uint32_t)((y) >> 32)))
How about a generic version, which doesn't depend on the input size (some of the implementations above assume that unsigned long long is 64 bits, which is not necessarily always true):
// converts an arbitrarily large integer (preferably >= 64 bits) from big endian to host machine endian
template<typename T> static inline T bigen2host(const T& x)
{
static const int one = 1;
static const char sig = *(char*)&one;
if (sig == 0) return x; // for big endian machine just return the input
T ret;
int size = sizeof(T);
char* src = (char*)&x + sizeof(T) - 1;
char* dst = (char*)&ret;
while (size-- > 0) *dst++ = *src--;
return ret;
}
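As an illustrative usage (the raw bytes here stand in for data read from a big-endian stream; std::memcpy needs <cstring>):
unsigned char raw[8] = {0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08};
uint64_t wire;
std::memcpy(&wire, raw, sizeof wire); // reinterpret the bytes as an integer
uint64_t host = bigen2host(wire);     // 0x0102030405060708 on any machine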
uint16_t SwapShort(uint16_t a)
{
a = ((a & 0x00FF) << 8) | ((a & 0xFF00) >> 8);
return a;
}
uint32_t SwapWord(uint32_t a)
{
a = ((a & 0x000000FF) << 24) |
((a & 0x0000FF00) << 8) |
((a & 0x00FF0000) >> 8) |
((a & 0xFF000000) >> 24);
return a;
}
uint64_t SwapDWord(uint64_t a)
{
a = ((a & 0x00000000000000FFULL) << 56) |
((a & 0x000000000000FF00ULL) << 40) |
((a & 0x0000000000FF0000ULL) << 24) |
((a & 0x00000000FF000000ULL) << 8) |
((a & 0x000000FF00000000ULL) >> 8) |
((a & 0x0000FF0000000000ULL) >> 24) |
((a & 0x00FF000000000000ULL) >> 40) |
((a & 0xFF00000000000000ULL) >> 56);
return a;
}
How about:
#define ntohll(x) ( ( (uint64_t)(ntohl( (uint32_t)(((x) << 32) >> 32) )) << 32) | \
                      ntohl( ((uint32_t)((x) >> 32)) ) )
#define htonll(x) ntohll(x)
I like the union answer, pretty neat. Typically I just bit shift to convert between little and big endian, although I think the union solution has fewer assignments and may be faster:
//note UINT64_C_LITERAL is a macro that appends the correct prefix
//for the literal on that platform
inline void endianFlip(unsigned long long& Value)
{
Value=
((Value & UINT64_C_LITERAL(0x00000000000000FF)) << 56) |
((Value & UINT64_C_LITERAL(0x000000000000FF00)) << 40) |
((Value & UINT64_C_LITERAL(0x0000000000FF0000)) << 24) |
((Value & UINT64_C_LITERAL(0x00000000FF000000)) << 8) |
((Value & UINT64_C_LITERAL(0x000000FF00000000)) >> 8) |
((Value & UINT64_C_LITERAL(0x0000FF0000000000)) >> 24) |
((Value & UINT64_C_LITERAL(0x00FF000000000000)) >> 40) |
((Value & UINT64_C_LITERAL(0xFF00000000000000)) >> 56);
}
Then, to detect whether you even need to do the flip without macro magic, you can do a similar thing to Pax's answer: when a short is assigned 0x0001, it will read as 0x0100 on an opposite-endian system.
So:
unsigned long long numberToSystemEndian
(
unsigned long long In,
unsigned short SourceEndian
)
{
if (SourceEndian != 1)
{
//from an opposite endian system
endianFlip(In);
}
return In;
}
So to use this, you'd need SourceEndian to be an indicator to communicate the endianness of the input number. This could be stored in the file (if this is a serialization problem), or communicated over the network (if it's a network serialization issue).
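As an illustrative sketch (all names hypothetical), the sender would write a marker with the value 1 as an unsigned short, and the receiver would pass that marker through:
unsigned short sourceEndian; // first field of the file/packet; the sender wrote the value 1
unsigned long long rawValue; // the 64-bit payload as read from the stream
// On an opposite-endian receiver the marker reads back as 0x0100, not 1,
// so numberToSystemEndian() knows it must flip:
unsigned long long value = numberToSystemEndian(rawValue, sourceEndian);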
An easy way would be to use ntohl on the two halves separately:
unsigned long long htonll(unsigned long long v) {
union { unsigned long lv[2]; unsigned long long llv; } u; // note: assumes 32-bit unsigned long
u.lv[0] = htonl(v >> 32);
u.lv[1] = htonl(v & 0xFFFFFFFFULL);
return u.llv;
}
unsigned long long ntohll(unsigned long long v) {
union { unsigned long lv[2]; unsigned long long llv; } u;
u.llv = v;
return ((unsigned long long)ntohl(u.lv[0]) << 32) | (unsigned long long)ntohl(u.lv[1]);
}
htonll can be done with the steps below:
If it's a big-endian system, return the value directly; no conversion is needed. If it's a little-endian system, do the conversion below:
Take the LSB 32 bits, apply htonl, and shift the result left 32 bits.
Take the MSB 32 bits (by shifting the uint64_t value right 32 bits) and apply htonl.
Now bitwise-OR the values obtained in the two steps above.
Similarly for ntohll:
#define HTONLL(x) ((1==htonl(1)) ? (x) : (((uint64_t)htonl((x) & 0xFFFFFFFFUL)) << 32) | htonl((uint32_t)((x) >> 32)))
#define NTOHLL(x) ((1==ntohl(1)) ? (x) : (((uint64_t)ntohl((x) & 0xFFFFFFFFUL)) << 32) | ntohl((uint32_t)((x) >> 32)))
You can also declare the above two definitions as functions.
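For example, as inline functions (a straightforward transliteration of the macros, with hypothetical names to avoid clashing with them; htonl/ntohl come from <arpa/inet.h> on POSIX systems):
inline uint64_t htonll_fn(uint64_t x)
{
    if (htonl(1) == 1) return x; // big-endian host: nothing to do
    return ((uint64_t)htonl((uint32_t)(x & 0xFFFFFFFFUL)) << 32)
         | htonl((uint32_t)(x >> 32));
}

inline uint64_t ntohll_fn(uint64_t x)
{
    if (ntohl(1) == 1) return x;
    return ((uint64_t)ntohl((uint32_t)(x & 0xFFFFFFFFUL)) << 32)
         | ntohl((uint32_t)(x >> 32));
}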
template <typename T>
static T ntoh_any(T t)
{
static const unsigned char int_bytes[sizeof(int)] = {0xFF};
static const unsigned msb_0xFF = 0xFFu << (sizeof(int) - 1) * CHAR_BIT;
static bool host_is_big_endian = (*(reinterpret_cast<const int *>(int_bytes)) & msb_0xFF ) != 0;
if (host_is_big_endian) { return t; }
unsigned char * ptr = reinterpret_cast<unsigned char *>(&t);
std::reverse(ptr, ptr + sizeof(t) );
return t;
}
Works for 2-byte, 4-byte, 8-byte, and 16-byte values (if you have 128-bit integers). Should be OS/platform independent (it needs <algorithm> for std::reverse and <climits> for CHAR_BIT).
This assumes you are coding on Linux with a 64-bit OS; most systems have htole64(x), be64toh(x), etc., which are typically macros around the various bswaps.
#include <endian.h>
#include <byteswap.h>
unsigned long long htonll(unsigned long long val)
{
if (__BYTE_ORDER == __BIG_ENDIAN) return (val);
else return __bswap_64(val);
}
unsigned long long ntohll(unsigned long long val)
{
if (__BYTE_ORDER == __BIG_ENDIAN) return (val);
else return __bswap_64(val);
}
Side note: these are just functions that swap the byte ordering. They assume a big-endian wire format; if you are exchanging little-endian data on the wire instead, they will unnecessarily reverse the byte ordering, so a little "#if __BYTE_ORDER == __LITTLE_ENDIAN" check might be required to make your code more portable, depending on your needs.
Update: edited to show an example of the endian check.
A universal function for any value size:
template <typename T>
T swap_endian (T value)
{
union {
T src;
unsigned char dst[sizeof(T)];
} source, dest;
source.src = value;
for (size_t k = 0; k < sizeof(T); ++k)
dest.dst[k] = source.dst[sizeof(T) - k - 1];
return dest.src;
}
union help64
{
unsigned char byte[8];
uint64_t quad;
};
uint64_t ntoh64(uint64_t src)
{
help64 tmp;
tmp.quad = src;
uint64_t dst = 0;
for(int i = 0; i < 8; ++i)
dst = (dst << 8) + tmp.byte[i];
return dst;
}
It isn't in general necessary to know the endianness of a machine to convert a host integer into network order. Unfortunately that only holds if you write out your net-order value in bytes, rather than as another integer:
static inline void short_to_network_order(unsigned char *output, uint16_t in)
{
    output[0] = (in >> 8) & 0xff;
    output[1] = in & 0xff;
}
(extend as required for larger numbers).
This will (a) work on any architecture, because at no point do I use special knowledge about the way an integer is laid out in memory, and (b) should mostly optimise away on big-endian architectures because modern compilers aren't stupid.
The disadvantage is, of course, that this is not the same, standard interface as htonl() and friends (which I don't see as a disadvantage, because the design of htonl() was a poor choice imo).
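For instance, the 32-bit version follows the same pattern (the name is illustrative):
static inline void long_to_network_order(unsigned char *output, uint32_t in)
{
    output[0] = (in >> 24) & 0xff;
    output[1] = (in >> 16) & 0xff;
    output[2] = (in >> 8) & 0xff;
    output[3] = in & 0xff;
}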