Bitwise operations. Is this code safe and portable? - c++

I need to compute the Hamming distance between bitsets that are represented as char arrays. This is a core operation, so it must be as fast as possible. I have something like this:
const int N = 32; // 32 always

// returns the number of bits that are ones in a char
int countOnes_uchar8(unsigned char v);

// pa and pb point to arrays of N items
int hamming(const unsigned char *pa, const unsigned char *pb)
{
    int ret = 0;
    for (int i = 0; i < N; ++i, ++pa, ++pb)
    {
        ret += countOnes_uchar8(*pa ^ *pb);
    }
    return ret;
}
After profiling, I noticed that operating on ints is faster, so I wrote:
const int N = 32; // 32 always

// returns the number of bits that are ones in an int of 32 bits
int countOnes_int32(unsigned int v);

// pa and pb point to arrays of N items
int hamming(const unsigned char *pa, const unsigned char *pb)
{
    const unsigned int *qa = reinterpret_cast<const unsigned int*>(pa);
    const unsigned int *qb = reinterpret_cast<const unsigned int*>(pb);
    int ret = 0;
    for (int i = 0; i < N / sizeof(unsigned int); ++i, ++qa, ++qb)
    {
        ret += countOnes_int32(*qa ^ *qb);
    }
    return ret;
}
Questions
1) Is that cast from unsigned char * to unsigned int * safe?
2) I work on a 32-bit machine, but I would like the code to work on a 64-bit machine. Does sizeof(unsigned int) return 4 on both machines, or is it 8 on a 64-bit one?
3) If sizeof(unsigned int) returned 4 on a 64-bit machine, how could I operate on a 64-bit type, such as long long?

Is that cast from unsigned char * to unsigned int * safe?
Formally, it gives undefined behaviour. Practically, it will work on just about any platform if the pointer is suitably aligned for unsigned int. On some platforms, it may fail, or perform poorly, if the alignment is wrong.
Does sizeof(unsigned int) return 4 on both machines, or is it 8 on a 64-bit one?
It depends. Some platforms have 64-bit int, and some have 32-bit. It would probably make sense to use uint64_t regardless of platform; on a 32-bit platform, you'd effectively be unrolling the loop (processing two 32-bit values per iteration), which might give a modest improvement.
how could I operate on a 64-bit type, such as long long?
uint64_t, if you have a C++11 or C99 library. long long is at least 64 bits, but might not exist on a pre-2011 implementation.
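Combining the two points above, here is a minimal sketch of an aliasing-safe 64-bit version: instead of casting the pointer, memcpy each 8-byte chunk into a properly typed word (countOnes_int64 is a hypothetical 64-bit popcount, analogous to the question's countOnes_int32):
#include <cstdint>
#include <cstring>

int countOnes_int64(std::uint64_t v); // hypothetical 64-bit popcount

int hamming64(const unsigned char *pa, const unsigned char *pb)
{
    const int N = 32; // 32 always, as in the question
    int ret = 0;
    for (int i = 0; i < N; i += 8)
    {
        std::uint64_t a, b;
        std::memcpy(&a, pa + i, sizeof a); // compilers turn these into plain loads
        std::memcpy(&b, pb + i, sizeof b);
        ret += countOnes_int64(a ^ b);
    }
    return ret;
}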

1) No, it is not safe or portable; it is undefined behavior. There are systems where char is wider than 8 bits, and there is no guarantee that the char pointer is suitably aligned for unsigned int.
2) sizeof(int) might in theory be anything on a 64-bit machine. In practice, it will be either 4 or 8.
3) long long is most likely 64 bits, but there are no guarantees there either. If you want guarantees, use uint64_t. However, for your specific algorithm I don't see why the size of the data chunk would matter.
Consider using the types in stdint.h instead; they are far more suitable for portable code. Instead of char, int or long long, use uint_fast8_t. This lets the compiler pick the fastest integer type of at least 8 bits for you, in a portable manner.
As a side note, you should consider implementing countOnes as a lookup table, working at the 4-, 8- or 32-bit level, whichever is fastest on your system. This will increase program size but reduce execution time. You could even try some form of adaptive lookup table that depends on sizeof(uint_fast8_t).
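A minimal sketch of the 8-bit flavour of that idea (the table and init-function names are illustrative):
#include <cstdint>

// 256-entry table: popcountTable[b] holds the number of set bits in the byte b.
static unsigned char popcountTable[256];

void initPopcountTable()
{
    // popcount(i) = popcount(i / 2) + (lowest bit of i); popcountTable[0] is 0.
    for (int i = 1; i < 256; ++i)
        popcountTable[i] = static_cast<unsigned char>(popcountTable[i / 2] + (i & 1));
}

int countOnes_uchar8(unsigned char v)
{
    return popcountTable[v]; // one array lookup instead of per-bit work
}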

Related

How would I put 4 chars into a single int? [closed]

I am working on a replica GM buffer system in C++ to get familiar with bits and such, and it's not bad, but I've run into a problem: how do I push 4 different chars into an int? I am not the best at bitwise stuff; I've never used it, and I've got no idea how to.
In this thing, I have an array of chars of size byteArraySize, and when I call the grab-int function, it should take the bytes from bufferPointer + 4 down to bufferPointer, backwards, to assemble the int properly.
I read a bit about bit shifting (lol), and I thought I could shift each char's bits into place. I've just got no clue where to start.
Any help is greatly appreciated.
Pedantically, in pure standard C++14 or C++11, you probably cannot.
AFAIK, nothing forbids a hypothetical C++14 implementation from making char, short, unsigned short, int, unsigned int, long, unsigned long, long long and unsigned long long all share the same internal representation, all be 64 bits (or 96, or 128), and all have sizeof equal to 1. The recent C and C++ standards mandate that long long has at least 64 bits.
IIRC, some unusual C implementation built on top of a Common Lisp system does something similar.
But of course, there is no such C++14 implementation in practice.
In practice, on most implementations, chars are 8-bit bytes (perhaps signed, perhaps unsigned), ints are 32-bit words (e.g. std::int32_t), and you obviously could code:
inline int pack4chars(char c1, char c2, char c3, char c4) {
    // Shift in unsigned arithmetic: left-shifting a byte value into the
    // sign bit of an int is not portable before C++20.
    return (int)((((unsigned)(unsigned char)c1) << 24)
               | (((unsigned)(unsigned char)c2) << 16)
               | (((unsigned)(unsigned char)c3) << 8)
               |  ((unsigned)(unsigned char)c4));
}
The cast to (unsigned char) is needed because some implementations have signed chars and others have unsigned ones; the further cast to unsigned keeps the shifts in unsigned arithmetic.
Read also about endianness, serialization, and htonl(3).
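Going the other way, a matching unpack might look like this (an illustrative counterpart, not part of the original answer):
inline void unpack4chars(int packed, unsigned char out[4]) {
    unsigned u = (unsigned)packed;
    out[0] = (u >> 24) & 0xFF; // the byte that went in as c1
    out[1] = (u >> 16) & 0xFF;
    out[2] = (u >> 8) & 0xFF;
    out[3] = u & 0xFF;
}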
Yes, you can pack 4 chars (actually sizeof(int) chars) into an int. Here's how you could do it:
#include <climits> // CHAR_BIT
#include <cstddef> // size_t

unsigned int packChars(const unsigned char *c)
{
    unsigned int val = 0;
    for (size_t idx = 0; idx < sizeof(unsigned int); ++idx) {
        // Cast before shifting so the shift happens in unsigned arithmetic.
        val |= static_cast<unsigned int>(c[idx]) << (idx * CHAR_BIT);
    }
    return val;
}
I'm using unsigned types, because bit shifting gets tricky when sign bits are involved. Also note that the code above is intentionally generic in the sizes used: sizeof(unsigned int) gives you the number of char units which fit into an unsigned int, and CHAR_BIT specifies the number of bits in a char.
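Illustrative usage, assuming 8-bit chars and a 32-bit unsigned int for the concrete value shown:
unsigned char bytes[] = { 0x01, 0x02, 0x03, 0x04 };
unsigned int v = packChars(bytes); // c[0] lands in the low-order byte: v == 0x04030201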
First of all you should be aware that sizeof(int) does not have to be 4 * sizeof(char). The standard only guarantees that sizeof(int) >= sizeof(char), and nothing more.
In fact, int can be the same size as char (or bigger), but you never know unless you check.
One possible solution is to use a union, which has all its members starting at the same offset in memory.
Example:
union Color
{
    std::uint32_t m_rgba;
    struct
    {
        std::uint8_t m_a;
        std::uint8_t m_b;
        std::uint8_t m_g;
        std::uint8_t m_r;
    };
};
Color white = { 0xffffffff };
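Be aware that reading a union member other than the one last written is technically undefined behaviour in C++ (it is well defined in C), although major compilers support it as an extension. A sketch of a portable alternative that avoids the type punning, assuming the little-endian layout the struct above implies:
std::uint32_t rgba = 0xff102030u;
std::uint8_t a = rgba & 0xff;          // lowest-order byte, m_a in the union above
std::uint8_t b = (rgba >> 8) & 0xff;
std::uint8_t g = (rgba >> 16) & 0xff;
std::uint8_t r = (rgba >> 24) & 0xff;  // highest-order byte, m_r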

Convert double to 8 length char array in c++

I want to convert a double to an 8-byte char array in C++. The problem is that I want to cover all of the bytes of the double type (a double is not always 8 bytes long in C++).
The char array is just used to store the bytes of the double, as if char type = byte type.
Any ideas?
Yes, you can always treat any object as an array of bytes. To access the bytes, use a reinterpret_cast:
T x; // any object
unsigned char const * bytes = reinterpret_cast<unsigned char const *>(&x);
for (std::size_t i = 0; i != sizeof(T); ++i)
{
    std::printf("Byte %zu is %02X\n", i, bytes[i]); // assuming CHAR_BIT == 8
}
Note that there isn't generally a way to know which of the bytes are part of the object representation and what their actual meaning is. For example, a long double may on certain platforms have size 12 or 16 but only have 10 relevant bytes, and you don't know which one is which. Though for a double with size 8 it's reasonable to assume that there's no padding and that the bytes make up an IEEE-754 representation in linear order. Your platform manual might tell you.
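If the goal is to copy the bytes out into the char array rather than just view them in place, here is a minimal sketch using std::memcpy (the function names are illustrative):
#include <cstring>

// buf must have room for sizeof(double) bytes (8 on most platforms, but not all).
void doubleToBytes(double d, unsigned char *buf)
{
    std::memcpy(buf, &d, sizeof d); // copies the whole object representation
}

double bytesToDouble(const unsigned char *buf)
{
    double d;
    std::memcpy(&d, buf, sizeof d);
    return d;
}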

Narrowing conversion in C++

In Beej's Guide to Network Programming, there is a function that was meant to provide a portable way to serialize a 16-bit integer.
/*
** packi16() -- store a 16-bit int into a char buffer (like htons())
*/
void packi16(unsigned char *buf, unsigned int i)
{
    *buf++ = i >> 8;
    *buf++ = i;
}
I don't understand why the statement *buf++ = i; is portable, as the assignment of an unsigned integer (i) to an unsigned character (*buf) would result in a narrowing conversion.
Does the C++ standard guarantee that in such a conversion, the unsigned int is always truncated and its least significant 8 bits are retained in the unsigned char?
If not, is there any preferred way to fix the issue? Is it adequate to change the function body to the following?
*buf++ = (i>>8) & 0xFFFFU; *buf++ = i & 0xFFFFU;
The code assumes an 8-bit byte, and that is not portable.
E.g. some Texas Instruments digital signal processors have 16-bit bytes.
The number of bits per byte is given by CHAR_BIT from <limits.h>.
Also, the code assumes that unsigned is 16 bits, which is not portable.
In summary, the code is not portable.
Re: "Does the C++ standard guarantee that in such a conversion, the unsigned int is always truncated and its least significant 8 bits are retained in the unsigned char?"
No, since the C++ standard does not guarantee that the number of bits per byte is 8.
The only guarantee is that it's at least 8 bits.
Unsigned arithmetic is guaranteed modular, however.
Re: "If not, is there any preferred way to fix the issue?"
Use a simple loop, iterating sizeof(unsigned) times, as sketched below.
The code in question appears to have been distilled from such a loop, since the post-increment in *buf++ = i; is otherwise meaningless (this is the last use of buf).
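A minimal sketch of such a loop, emitting the most significant byte first like the original (and, like the original, assuming 8-bit bytes):
#include <cstddef> // std::size_t

void packUnsigned(unsigned char *buf, unsigned int i)
{
    for (std::size_t n = sizeof i; n-- > 0; )
        *buf++ = (i >> (n * 8)) & 0xFF; // high-order byte first
}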
Yes, out-of-range assignments to unsigned types adjust the value modulo one greater than the maximum value representable in the type. In this case, mod UCHAR_MAX+1.
No fix is required. Some people like to write *buf++ = i % 0x100; or equivalent, to make it clear that this was intentional narrowing.
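Along the same lines, an illustrative masked version of the function, with a matching reader (the unpack function is an addition for symmetry, not from Beej's guide):
void packi16(unsigned char *buf, unsigned int i)
{
    buf[0] = (i >> 8) & 0xFF; // the masks spell out the intentional truncation
    buf[1] = i & 0xFF;
}

unsigned int unpacki16(const unsigned char *buf)
{
    return (static_cast<unsigned int>(buf[0]) << 8) | buf[1];
}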

can anyone explain why size_t type is used with an example?

I was wondering why size_t is used where I could use, say, int. It's said that size_t is the return type of the sizeof operator. What does that mean? If I use sizeof(int) and store the result in an int variable, that also works; it's not necessary to store it in a size_t variable. I just want to clearly understand the basic concept of using size_t, with an easily understandable example. Thanks.
size_t is guaranteed to be able to represent the largest object size possible; int is not. This means size_t is more portable.
For instance, what if int could only store up to 32767 but you could allocate arrays of 50000 bytes? Clearly this wouldn't work; with size_t, however, it will.
The simplest example is pretty dated: on an old 16-bit-int system with 64 k of RAM, the value of an int can be anywhere from -32768 to +32767, but after:
char buf[40960];
the buffer buf occupies 40 kbytes, so sizeof buf is too big to fit in an int, and it needs an unsigned int.
The same thing can happen today if you use 32-bit int but allow programs to access more than 4 GB of RAM at a time, as is the case on what are called "I32LP64" models (32-bit int, 64-bit long and pointer). Here the type size_t will have the same range as unsigned long.
size_t is also sometimes used for casting pointers into unsigned integers of the same size, to perform calculations on them as if they were plain integers, which would otherwise be rejected at compile time. Such code is intended to compile and build correctly across different pointer sizes, e.g. a 32-bit model versus a 64-bit one; strictly speaking, uintptr_t is the type designed for round-tripping pointers.
It is implementation-defined, but on 64-bit systems you will find that size_t is often 64-bit while int is still 32-bit (unless it's an ILP64 or SILP64 model).
Depending on what architecture you are on (16-bit, 32-bit or 64-bit), an int could be a different size.
If you want a specific size, use uint16_t or uint32_t. You can check out this thread for more information:
What does the C++ standard state the size of int, long type to be?
size_t is a typedef of an unsigned type used to store object sizes. It can represent the maximum object size supported by the target platform, which makes it portable.
For example:
void * memcpy(void * destination, const void * source, size_t num);
memcpy() copies num bytes from source into destination. The maximum number of bytes that can be copied depends on the platform. So giving num the type size_t makes memcpy portable.
Refer to https://stackoverflow.com/a/7706240/2820412 for further details.
size_t is a typedef for one of the fundamental unsigned integer types. It could be unsigned int, unsigned long, or unsigned long long depending on the implementation.
Its special property is that it can represent the size (in bytes) of any object, including the largest object possible. That is one of the reasons it is widely used in the standard library for array indexing and loop counting (it also solves the portability issue). Let me illustrate this with a simple example.
Consider a vector of length 2*UINT_MAX, where UINT_MAX denotes the maximum value of unsigned int (4294967295 for my implementation, with 4 bytes for unsigned int). Note that 2*UINT_MAX has to be computed in a wider type, because the product overflows unsigned int itself:
std::vector<size_t> vec(2ULL*UINT_MAX, 0);
If you wanted to fill the vector using a for loop such as this, it would not work, because unsigned int can count only up to UINT_MAX (beyond which it wraps around to 0):
for(size_t i = 0; i < 2ULL*UINT_MAX; ++i) vec[i] = i;
The solution is to use size_t, since it is guaranteed to be able to represent the size of any object (and therefore to index our vector vec, too). Note that for my implementation size_t is a typedef for unsigned long, and therefore its max value = ULONG_MAX = 18446744073709551615, with 8 bytes:
for(size_t i = 0; i < 2ULL*UINT_MAX; ++i) vec[i] = i;
References: https://en.cppreference.com/w/cpp/types/size_t

C++: how to cast 2 bytes in an array to an unsigned short

I have been working on a legacy C++ application and am definitely outside of my comfort-zone (a good thing). I was wondering if anyone out there would be so kind as to give me a few pointers (pun intended).
I need to cast 2 bytes in an unsigned char array to an unsigned short. The bytes are consecutive.
For an example of what I am trying to do:
I receive a string from a socket and place it in an unsigned char array. I can ignore the first byte, and then the next 2 bytes should be converted to an unsigned short. This will be on Windows only, so there are no big/little endian issues (that I am aware of).
Here is what I have now (not working obviously):
//packetBuffer is an unsigned char array containing the string "123456789" for testing
//I need to convert bytes 2 and 3 into the short, 2 being the most significant byte
//so I would expect to get 515 (2*256 + 3); instead, all the code I have tried gives me
//either errors or 2 (only converting one byte)
unsigned short myShort;
myShort = static_cast<unsigned_short>(packetBuffer[1])
Well, you are widening the char into a short value. What you want is to interpret two bytes as a short. static_cast cannot cast from unsigned char* to unsigned short*. You have to cast to void*, then to unsigned short*:
unsigned short *p = static_cast<unsigned short*>(static_cast<void*>(&packetBuffer[1]));
Now, you can dereference p and get the short value. But the problem with this approach is that you cast from unsigned char*, to void* and then to some different type. The Standard doesn't guarantee the address remains the same (and in addition, dereferencing that pointer would be undefined behavior). A better approach is to use bit-shifting, which will always work:
unsigned short p = (packetBuffer[1] << 8) | packetBuffer[2];
This is probably well below what you care about, but keep in mind that you could easily get an unaligned access doing this. x86 is forgiving: the fault that an unaligned access causes is handled internally and ends up with a copy and return of the value, so your app won't know any different (though it's significantly slower than an aligned access). If, however, this code will run on a non-x86 platform (you don't mention the target, so I'm assuming x86 desktop Windows), then doing this will cause a processor data abort and you'll have to manually copy the data to an aligned address before trying to cast it.
In short, if you're going to be doing this access a lot, you might look at adjusting the code so as not to have unaligned reads, and you'll see a performance benefit.
unsigned short myShort = *(unsigned short *)&packetBuffer[1];
The bit shift above is often said to have a bug:
unsigned short p = (packetBuffer[1] << 8) | packetBuffer[2];
The worry is that if packetBuffer holds bytes (8 bits wide) and the shift were done at 8-bit width, shifting left by 8 would turn packetBuffer[1] into zero, leaving you with only packetBuffer[2]. In C and C++, integral promotion means the shift actually happens at int width, so the line is fine, but being explicit about the widths costs nothing.
Despite that, this is still preferred to pointers. To make the widths explicit I spend a few extra lines of code, which (with quite literally zero optimization) result in the same machine code:
unsigned short p;
p = packetBuffer[1];
p <<= 8;
p |= packetBuffer[2];
Or to save some clock cycles and not shift the bits off the end:
unsigned short p;
p = (((unsigned short)packetBuffer[1])<<8) | packetBuffer[2];
You have to be careful with pointers: the optimizer will bite you, as will memory alignment and a long list of other problems. Yes, done right it is faster; done wrong, the bug can linger for a long time and strike when least desired.
Say you were lazy and wanted to do some 16-bit math on an 8-bit array (little endian):
unsigned short *s;
unsigned char b[10];

s = (unsigned short *)&b[0];
if (b[0] & 7)
{
    *s = *s + 8;
    *s &= ~7;
}
do_something_with(b);
*s = *s + 8;
do_something_with(b);
*s = *s + 8;
do_something_with(b);
There is no guarantee that a perfectly bug-free compiler will create the code you expect. The byte array b sent to the do_something_with() function may never get modified by the *s operations. Nothing in the code above says that it should. If you don't optimize your code, then you may never see this problem (until someone optimizes, or changes compilers or compiler versions). If you use a debugger you may never see this problem (until it is too late).
The compiler doesn't see the connection between s and b, they are two completely separate items. The optimizer may choose not to write *s back to memory because it sees that *s has a number of operations so it can keep that value in a register and only save it to memory at the end (if ever).
There are three basic ways to fix the pointer problem above:
Declare s as volatile.
Use a union.
Use a function or functions whenever changing types.
You should not cast an unsigned char pointer into an unsigned short pointer (or, for that matter, from a pointer to a smaller data type to one of a larger data type), because the address is assumed to be aligned correctly. A better approach is to shift the bytes into a real unsigned short object, or to memcpy into an unsigned short, as sketched below.
No doubt you can adjust the compiler settings to get around this limitation, but this is a very subtle thing that will break in the future if the code gets passed around and reused.
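A minimal sketch of the memcpy variant (this assumes the buffer holds raw bytes rather than ASCII digits; see the string-based answer further down):
#include <cstring>

unsigned short readShort(const unsigned char *packetBuffer)
{
    unsigned short s;
    std::memcpy(&s, &packetBuffer[1], sizeof s); // no alignment or aliasing problems
    return s; // the byte order is whatever the host byte order is
}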
Maybe this is a very late solution, but I just want to share it with you. When you want to convert primitives or other types, you can use a union. See below:
union CharToStruct {
    char charArray[2];
    unsigned short value;
};

short toShort(char* value) {
    CharToStruct cs;
    cs.charArray[0] = value[1]; // the most significant byte of the short is not the first byte of the char array
    cs.charArray[1] = value[0];
    return cs.value;
}
When you create an array with the hex values below and call the toShort function, you will get a short value of 3 (on a little-endian machine).
char array[2];
array[0] = 0x00;
array[1] = 0x03;
short i = toShort(array);
cout << i << endl; // or printf("%hd", i);
static_cast has a different syntax, plus you need to work with pointers. Note, though, that static_cast cannot convert between unrelated pointer types, so this actually needs reinterpret_cast (with all the alignment and aliasing caveats discussed above):
unsigned short *myShort = reinterpret_cast<unsigned short*>(&packetBuffer[1]);
Did nobody notice that the input was a string?
/* If it is a string as explicitly stated in the question.
*/
int byte1 = packetBuffer[1] - '0'; // convert 1st byte from char to number.
int byte2 = packetBuffer[2] - '0';
unsigned short result = (byte1 * 256) + byte2;
/* Alternatively if is an array of bytes.
*/
int byte1 = packetBuffer[1];
int byte2 = packetBuffer[2];
unsigned short result = (byte1 * 256) + byte2;
This also avoids the alignment problems that most of the other solutions may have on certain platforms. Note: a short is at least two bytes, and most systems will give you a memory error if you try to dereference a short pointer that is not 2-byte aligned (or whatever sizeof(short) is on your system)!
char packetBuffer[] = {1, 2, 3};
unsigned short myShort = * reinterpret_cast<unsigned short*>(&packetBuffer[1]);
I (had to) do this all the time. Big endian is an obvious problem; what will really get you is incorrect data when the machine dislikes misaligned reads (and writes)!
You may want to write a test case and an assert to see if it reads properly. That way, when it runs on a big-endian machine, or more importantly on a machine that dislikes misaligned reads, an assert error will occur instead of a weird, hard-to-trace 'bug' ;)
On windows you can use:
unsigned short i = MAKEWORD(lowbyte,hibyte);
I realize this is an old thread, and I can't say that I tried every suggestion made here. I'm just making myself comfortable with MFC, and I was looking for a way to convert a uint to two bytes, and back again at the other end of a socket.
There are a lot of bit-shifting examples you can find on the net, but none of them seemed to actually work. A lot of the examples seem overly complicated; I mean, we're just talking about grabbing 2 bytes out of a uint, sending them over the wire, and plugging them back into a uint at the other end, right?
This is the solution I finally came up with:
class ByteConverter
{
public:
    static void uIntToBytes(unsigned int theUint, char* bytes)
    {
        unsigned int tInt = theUint;
        void *uintConverter = &tInt;
        char *theBytes = (char*)uintConverter;
        // copies the two low-order bytes on a little-endian machine
        bytes[0] = theBytes[0];
        bytes[1] = theBytes[1];
    }
    static unsigned int bytesToUint(char *bytes)
    {
        unsigned int theUint = 0;
        void *uintConverter = &theUint;
        char *theBytes = (char*)uintConverter;
        theBytes[0] = bytes[0];
        theBytes[1] = bytes[1];
        return theUint;
    }
};
Used like this:
unsigned int theUint;
char bytes[2];
CString msg;
ByteConverter::uIntToBytes(65000,bytes);
theUint = ByteConverter::bytesToUint(bytes);
msg.Format(_T("theUint = %d"), theUint);
AfxMessageBox(msg, MB_ICONINFORMATION | MB_OK);
Hope this helps someone out.
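For comparison, a byte-order-independent sketch of the same round trip using shifts instead of pointer reinterpretation (function names are illustrative; like the class above, it only transports the low 16 bits of the uint):
static void uIntToBytesShift(unsigned int v, char *bytes)
{
    bytes[0] = static_cast<char>(v & 0xFF);        // low-order byte first, matching
    bytes[1] = static_cast<char>((v >> 8) & 0xFF); // the little-endian class above
}

static unsigned int bytesToUintShift(const char *bytes)
{
    return static_cast<unsigned char>(bytes[0])
         | (static_cast<unsigned int>(static_cast<unsigned char>(bytes[1])) << 8);
}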