Why does std::bitset expose bits in little-endian fashion?

When I construct a std::bitset via std::bitset<N>::bitset(unsigned long long) and then access it via operator[], the bits seem to be ordered in little-endian fashion. Example:
std::bitset<4> b(3ULL);
std::cout << b[0] << b[1] << b[2] << b[3];
prints 1100 instead of 0011, i.e. the low end (the LSB) is at the little (lower) index, index 0.
Looking up the standard, it says
initializing the first M bit positions to the corresponding bit values in val
Programmers naturally think of binary digits from LSB to MSB (right to left). So the first M bit positions are understandably LSB → MSB, and bit 0 ends up at b[0].
However, under shifting, the definition goes
The value of E1 << E2 is E1 left-shifted E2 bit positions; vacated bits are zero-filled.
Here one has to read the bits of E1 as going from MSB → LSB and then shift left by E2 positions. Had they been written LSB → MSB, only shifting right by E2 positions would give the same result.
I'm surprised that everywhere else in C++ the language seems to project the natural (English, left-to-right) writing order when doing bitwise operations like shifting, etc. Why be different here?

There is no notion of endian-ness as far as the standard is concerned. When it comes to std::bitset, [template.bitset]/3 defines bit position:
When converting between an object of class bitset<N> and a value of some integral type, bit position pos corresponds to the bit value 1 << pos. The integral value corresponding to two or more bits is the sum of their bit values.
Using this definition of bit position in your standard quote
initializing the first M bit positions to the corresponding bit values in val
a val with binary representation 11 leads to a bitset<N> b with b[0] = 1, b[1] = 1 and remaining bits set to 0.
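A minimal sketch illustrating this (note that indexing starts at the LSB, while streaming the whole bitset prints it MSB-first, as is conventional):
#include <bitset>
#include <iostream>

int main()
{
    std::bitset<4> b(3ULL);        // bit values 2^0 + 2^1
    std::cout << b[0] << b[1] << b[2] << b[3] << '\n'; // 1100: index 0 is the LSB
    std::cout << b << '\n';        // 0011: operator<< prints MSB first
}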

This is consistent with the way bits are usually numbered: bit 0 represents 2^0, bit 1 represents 2^1, etc. It has nothing to do with the endianness of the architecture, which concerns byte ordering, not bit ordering.
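A short example of that numbering, using std::bitset::to_ulong to recover the integral value:
#include <bitset>
#include <iostream>

int main()
{
    std::bitset<8> b;
    b[0] = 1;                          // contributes 2^0
    b[3] = 1;                          // contributes 2^3
    std::cout << b.to_ulong() << '\n'; // 9, the sum of the bit values
}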

Related

c++ combining 2 uint8_t into one uint16_t not working?

So I have a little piece of code that takes 2 uint8_t's and places them next to each other, and then returns a uint16_t. The point is not adding the 2 variables, but putting them next to each other and creating a uint16_t from them.
The way I expect this to work is that when the first uint8_t is 0, and the second uint8_t is 1, I expect the uint16_t to also be one.
However, this is in my code not the case.
This is my code:
uint8_t *bytes = new uint8_t[2];
bytes[0] = 0;
bytes[1] = 1;
uint16_t out = *((uint16_t*)bytes);
It is supposed to turn the bytes uint8_t pointer into a uint16_t pointer, and then read the value. I expect that value to be 1 since x86 is little endian. However it returns 256.
Setting the first byte to 1 and the second byte to 0 makes it work as expected. But I am wondering why I need to switch the bytes around in order for it to work.
Can anyone explain that to me?
Thanks!
There is no uint16_t or compatible object at that address, and so the behaviour of *((uint16_t*)bytes) is undefined.
I expect that value to be 1 since x86 is little endian. However it returns 256.
Even if the program were fixed to have well-defined behaviour, your expectation is backwards. In little endian, the least significant byte is stored in the lowest address. Thus the 2-byte value 1 is stored as 1, 0 and not 0, 1.
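Here is a minimal sketch showing this without undefined behaviour, by reading the object representation through unsigned char (which the language explicitly allows):
#include <cstdint>
#include <iostream>

int main()
{
    std::uint16_t value = 1;
    // Reading an object's bytes through unsigned char* is well defined.
    const auto *p = reinterpret_cast<const unsigned char *>(&value);
    std::cout << "byte 0: " << int(p[0]) << ", byte 1: " << int(p[1]) << '\n';
    // On a little-endian machine this prints: byte 0: 1, byte 1: 0
}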
Does endianness also affect the order of the bits in the byte or not?
There is no way to access a bit by "address"¹, so there is no concept of endianness. When converting to text, bits are conventionally shown most significant on the left and least significant on the right, just like the digits of decimal numbers. I don't know if this is true in right-to-left writing systems.
¹ You can sort of create "virtual addresses" for bits using bitfields. The order of bitfields, i.e. whether the first bitfield is most or least significant, is implementation-defined and not necessarily related to byte endianness at all.
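To illustrate the footnote, here is a hedged sketch; whether first lands in the least or most significant bits of the storage unit is implementation-defined:
#include <iostream>

struct Flags {
    unsigned first : 1; // may occupy the lowest or the highest bit:
    unsigned rest  : 7; // bitfield allocation order is implementation-defined
};

int main()
{
    Flags f{};
    f.first = 1;
    // Inspect the first byte of the object representation.
    std::cout << int(*reinterpret_cast<unsigned char *>(&f)) << '\n';
    // Typically prints 1 (first is the LSB of the byte) or 128 (first is
    // the MSB), depending on the implementation.
}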
Here is a correct way to set two octets as uint16_t. The result will depend on endianness of the system:
// no need to complicate a simple example with dynamic allocation
uint16_t out;
// note that there is an exception in language rules that
// allows accessing any object through narrow (unsigned) char
// or std::byte pointers; thus following is well defined
std::byte* data = reinterpret_cast<std::byte*>(&out);
data[0] = 1;
data[1] = 0;
Note that assuming the input is in native endianness is usually not a good choice, especially when compatibility across multiple systems is required, such as when communicating over a network or accessing files that may be shared with other systems.
In these cases, the communication protocol or the file format typically specifies that the data is in a specific endianness, which may or may not be the same as the native endianness of your target system. The de facto standard in network communication is big endian. Data in a particular endianness can be converted to native endianness using bit shifts, as shown in Frodyne's answer for example.
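For example, here is a minimal sketch of decoding a big-endian (network order) 16-bit value from a byte buffer; read_u16_be is a hypothetical helper name:
#include <cstdint>
#include <iostream>

// The result is the same on any host, because the shifts operate on
// values rather than on memory layout.
std::uint16_t read_u16_be(const unsigned char *buf)
{
    return static_cast<std::uint16_t>((buf[0] << 8) | buf[1]);
}

int main()
{
    const unsigned char wire[] = {0x01, 0x02}; // big-endian 0x0102
    std::cout << read_u16_be(wire) << '\n';    // 258
}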
In a little endian system the small bytes are placed first. In other words: The low byte is placed on offset 0, and the high byte on offset 1 (and so on). So this:
uint8_t* bytes = new uint8_t[2];
bytes[0] = 1;
bytes[1] = 0;
uint16_t out = *((uint16_t*)bytes);
Produces the out = 1 result you want.
However, as you can see this is easy to get wrong, so in general I would recommend that instead of trying to place stuff correctly in memory and then cast it around, you do something like this:
uint16_t out = lowByte + (highByte << 8);
That will work on any machine, regardless of endianness.
Edit: Bit shifting explanation added.
x << y means to shift the bits in x by y places to the left (>> moves them to the right instead).
If X contains the bit-pattern xxxxxxxx, and Y contains the bit-pattern yyyyyyyy, then (X << 8) produces the pattern: xxxxxxxx00000000, and Y + (X << 8) produces: xxxxxxxxyyyyyyyy.
(And Y + (X<<8) + (Z<<16) produces zzzzzzzzxxxxxxxxyyyyyyyy, etc.)
A single shift to the left is the same as multiplying by 2, so X << 8 is the same as X * 2^8 = X * 256. That means that you can also do: Y + (X*256) + (Z*65536), but I think the shifts are clearer and show the intent better.
Note that again: Endianness does not matter. Shifting 8 bits to the left will always clear the low 8 bits.
You can read more here: https://en.wikipedia.org/wiki/Bitwise_operation. Note the difference between arithmetic and logical shifts: in C/C++, unsigned values use logical shifts, while signed values typically get arithmetic shifts (strictly speaking, right-shifting a negative signed value is implementation-defined).
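Putting that together, a small self-contained sketch that combines two bytes and splits them back apart, independent of endianness:
#include <cstdint>
#include <iostream>

int main()
{
    std::uint8_t lowByte = 1, highByte = 0;

    // Combine: works the same on any machine, regardless of endianness.
    std::uint16_t out = static_cast<std::uint16_t>(lowByte | (highByte << 8));
    std::cout << out << '\n'; // 1

    // Split back apart the same way.
    std::uint8_t lo = out & 0xFF;
    std::uint8_t hi = (out >> 8) & 0xFF;
    std::cout << int(lo) << ' ' << int(hi) << '\n'; // 1 0
}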
If p is a pointer to some multi-byte value, then:
"Little-endian" means that the byte at p is the least-significant byte, in other words, it contains bits 0-7 of the value.
"Big-endian" means that the byte at p is the most-significant byte, which for a 16-bit value would be bits 8-15.
Since the Intel is little-endian, bytes[0] contains bits 0-7 of the uint16_t value and bytes[1] contains bits 8-15. Since you are trying to set bit 0, you need:
bytes[0] = 1; // Bits 0-7
bytes[1] = 0; // Bits 8-15
Your code works, but you misinterpreted how to read the bytes:
#include <cstdint>
#include <cstddef>
#include <iostream>
int main()
{
    uint8_t *in = new uint8_t[2];
    in[0] = 3;
    in[1] = 1;
    uint16_t out = *((uint16_t*)in);
    std::cout << "out: " << out << "\n in: " << in[1] * 256 + in[0] << std::endl;
    return 0;
}
By the way, you should take care of alignment when casting this way.
One way to think about numbers is in MSB/LSB order, where the MSB is the highest bit and the LSB is the lowest bit:
(u)int32: MSB: bit 31 ... LSB: bit 0
(u)int16: MSB: bit 15 ... LSB: bit 0
(u)int8 : MSB: bit 7  ... LSB: bit 0
With your cast to a 16-bit value on a little-endian machine, the bytes arrange themselves like this:
16-bit value: MSB (bit 15) ............ LSB (bit 0)
              BYTE[1] (bits 15..8)     BYTE[0] (bits 7..0)
              0000 0001                0000 0000
which is 256, the correct value.

C/C++ Bitwise Operations not resulting in expected output?

I'm currently working on bitwise operations but I am confused right now... Here's the scoop and why
I have the byte 0xCD, which in bits is 1100 1101
I am shifting the bits left 7, then I'm saying & 0xFF since 0xFF in bits is 1111 1111
unsigned int bit = (0xCD << 7) & 0xFF<<7;
Now I would make the assumption that both 0xCD and 0xFF get shifted to the left 7 times and the remaining bit would be 1 & 1 = 1, but that's not what I'm getting for output. I would also assume that shifting by 6 would give me bits 0 & 1 = 0, but again I'm getting a number above 1, like 205. Is there something incorrect about the way I am trying to process bit shifting in my head? If so, what is it that I am doing wrong?
Code Below:
unsigned char byte_now = 0xCD;
printf("Bits for byte_now: 0x%02x: ", byte_now);
/*
* We want to get the first bit in a byte.
* To do this we will shift the bits over 7 places for the last bit
* we will compare it to 0xFF since it's (1111 1111) if bit&1 then the bit is one
*/
unsigned int bit_flag = 0;
int bit_pos = 7;
bit_flag = (byte_now << bit_pos) & 0xFF;
printf("%d", bit_flag);
Is there something incorrect about the way I am trying to process bit shifting in my head?
There seems to be.
If so what is it that I am doing wrong?
That's unclear, so I offer a reasonably full explanation.
In the first place, it is important to understand that C does not perform any arithmetic directly on integers smaller than int. Consider, then, your expression byte_now << bit_pos. The integer promotions are performed on the operands, resulting in the left operand being converted to the int value 0xCD. The result has the same pattern of least-significant value bits as byte_now, but also a bunch of leading zero bits.
Left shifting the result by 7 bits produces the bit pattern 110 0110 1000 0000, equivalent to 0x6680. You then perform a bitwise and operation on the result, masking off all but the least-significant 8 bits, thus yielding 0x80. What happens when you assign that to bit_flag depends on the type of that variable, but if it is an integer type that is either unsigned or has more than 7 value bits then the assignment is well-defined and value-preserving. Note that it is bit 7 that is nonzero, not bit 0.
The type of bit_flag is more important when you pass it to printf(). You've paired it with a %d field descriptor, which is correct if bit_flag has type int and incorrect otherwise. If bit_flag does have type int, then I would expect the program to print 128.
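If the intent was to test whether bit 7 of the byte is set, the usual idiom is to shift right and mask with 1; a sketch of that approach:
#include <cstdio>

int main()
{
    unsigned char byte_now = 0xCD; // 1100 1101
    int bit_pos = 7;
    // Move the bit of interest down to position 0, then mask off the rest.
    unsigned int bit_flag = (byte_now >> bit_pos) & 1u;
    std::printf("%u\n", bit_flag); // prints 1: the top bit of 0xCD is set
}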

Why does shifting 0xff left by 24 bits result in an incorrect value?

I would like to shift 0xff left by 3 bytes and store it in a uint64_t, which should work as such:
uint64_t temp = 0xff << 24;
This yields a value of 0xffffffffff000000 which is most definitely not the expected 0xff000000.
However, if I shift it by fewer than 3 bytes, it results in the correct answer.
Furthermore, trying to shift 0x01 left by 3 bytes does work.
Here's my output:
0xff shifted by 0 bytes: 0xff
0x01 shifted by 0 bytes: 0x1
0xff shifted by 1 bytes: 0xff00
0x01 shifted by 1 bytes: 0x100
0xff shifted by 2 bytes: 0xff0000
0x01 shifted by 2 bytes: 0x10000
0xff shifted by 3 bytes: 0xffffffffff000000
0x01 shifted by 3 bytes: 0x1000000
With some experimentation, left shifting by 3 bytes works for each value up to 0x7f, which yields 0x7f000000. 0x80 yields 0xffffffff80000000.
Does anyone have an explanation for this bizarre behavior? 0xff000000 certainly falls within the 2^64 - 1 limits of uint64_t.
Does anyone have an explanation for this bizarre behavior?
Yes: the type of an operation always depends on the operand types and never on the result type:
double r = 1.0 / 2.0;
// double divided by double and result double assigned to r
// r == 0.5
double r = 1.0 / 2;
// 2 converted to double, double divided by double and result double assigned to r
// r == 0.5
double r = 1 / 2;
// int divided by int, result int converted to double and assigned to r
// r == 0.0
Once you understand and remember this, you won't make this mistake again.
I suspect the behavior is compiler dependent, but I am seeing the same thing.
The fix is simple. Be sure to cast the 0xff to a uint64_t type BEFORE performing the shift. That way the compiler will handle it as the correct type.
uint64_t temp = uint64_t(0xff) << 24;
Shifting left creates a negative (32-bit) number, which then gets sign-extended to 64 bits.
Try
0xffLL << 24;
(The suffix must go on the left operand; the type of the shift count has no bearing on the type of the result.)
Let's break your problem up into two pieces. The first is the shift operation, and the other is the conversion to uint64_t.
As far as the left shift is concerned, you are invoking undefined behavior on 32-bit (or smaller) architectures. As others have mentioned, the operands are int. A 32-bit int with the given value would be 0x000000ff. Note that this is a signed number, so the left-most bit is the sign. According to the standard, if the shift affects the sign bit, the result is undefined. It is up to the whims of the implementation; it is subject to change at any point, and it can even be completely optimized away if the compiler recognizes it at compile time. The latter is not realistic, but it is actually permitted. While you should never rely on code of this form, this is actually not the root of the behavior that puzzled you.
Now, for the second part. The undefined outcome of the left shift operation has to be converted to a uint64_t. The standard states for signed to unsigned integral conversions:
If the destination type is unsigned, the resulting value is the smallest unsigned value equal to the source value modulo 2^n where n is the number of bits used to represent the destination type.
That is, depending on whether the destination type is wider or narrower, signed integers are sign-extended[footnote 1] or truncated and unsigned integers are zero-extended or truncated respectively.
The footnote clarifies that sign-extension is true only for two's-complement representation which is used on every platform with a C++ compiler currently.
Sign-extension means that everything to the left of the sign bit in the destination variable is filled with the sign bit, which produces all the fs in your result. As you noted, you could left shift 0x7f by 3 bytes without this occurring. That's because 0x7f = 0b01111111. After the shift, you get 0x7f000000, which is the largest signed int, i.e. the largest number that doesn't affect the sign bit. Therefore, in the conversion, a 0 was extended.
Converting the left operand to a large enough type solves this.
uint64_t temp = uint64_t(0xff) << 24;
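Either spelling widens the left operand so the shift happens in 64 bits; a quick sketch:
#include <cstdint>
#include <iostream>

int main()
{
    std::uint64_t a = std::uint64_t(0xff) << 24; // explicit cast
    std::uint64_t b = 0xffULL << 24;             // ULL suffix on the operand
    std::cout << std::hex << a << ' ' << b << '\n'; // ff000000 ff000000
}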

Right shift with zeros at the beginning

I'm trying to do a kind of right shift that would add zeros at the beginning instead of ones. For example, if I left shift 0xff, I get this:
0xff << 3 = 11111000
However, if I right shift it, I get this:
0xff >> 3 = 11111111
Is there any operation I could use to get a right shift that fills with zeros? i.e. I would like to get this:
00011111
Any suggestion?
Edit
To answer the comments, here is the code I'm using:
int number = ~0;
number = number << 4;
std::cout << std::hex << number << std::endl;
number = ~0;
number = number >> 4;
std::cout << std::hex << number << std::endl;
output:
fffffff0
ffffffff
Since it seems that in general it should work, I'm interested as to why this specific code doesn't. Any idea?
This is how C and binary arithmetic both work:
If you left shift 0xff << 3, you get binary: 00000000 11111111 << 3 = 00000111 11111000
If you right shift 0xff >> 3, you get binary: 00000000 11111111 >> 3 = 00000000 00011111
0xff is a (signed) int with the positive value 255. Since it is positive, the outcome of shifting it is well-defined behavior in both C and C++. It will not do any arithmetic shifts nor any kind of poorly-defined behavior.
#include <stdio.h>
int main()
{
printf("%.4X %d\n", 0xff << 3, 0xff << 3);
printf("%.4X %d\n", 0xff >> 3, 0xff >> 3);
}
Output:
07F8 2040
001F 31
So you are doing something strange in your program because it doesn't work as expected. Perhaps you are using char variables or C++ character literals.
Source: ISO 9899:2011 6.5.7.
EDIT after question update
int number = ~0; gives you a negative number equivalent to -1, assuming two's complement.
number = number << 4; invokes undefined behavior, since you left shift a negative number. The program implements undefined behavior correctly, since it either does something or nothing at all. It may print fffffff0 or it may print a pink elephant, or it may format the hard drive.
number = number >> 4; invokes implementation-defined behavior. In your case, your compiler preserves the sign bit. This is known as arithmetic shift, and arithmetic right shift works in such a way that the MSB is filled with whatever bit value it had before the shift. So if you have a negative number, you will experience that the program is "shifting in ones".
In 99% of all real world cases, it doesn't make sense to use bitwise operators on signed numbers. Therefore, always ensure that you are using unsigned numbers, and that none of the dangerous implicit conversion rules in C/C++ transforms them into signed numbers (for more info about dangerous conversions, see "the integer promotion rules" and "the usual arithmetic conversions", plenty of good info about those on SO).
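A sketch contrasting the two; the signed result shown is what a typical two's-complement implementation produces, but it is implementation-defined:
#include <iostream>

int main()
{
    int s = -16;                  // 0xfffffff0 on a two's-complement machine
    unsigned int u = 0xfffffff0u;
    std::cout << std::hex
              << (s >> 4) << '\n'  // typically ffffffff: arithmetic shift
              << (u >> 4) << '\n'; // fffffff: logical shift, zeros shifted in
}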
EDIT 2, some info from the C99 standard's rationale document V5.10:
6.5.7 Bitwise shift operators
The description of shift operators in K&R suggests that shifting by a
long count should force the left operand to be widened to long before
being shifted. A more intuitive practice, endorsed by the C89
Committee, is that the type of the shift count has no bearing on the
type of the result.
QUIET CHANGE IN C89
Shifting by a long count no longer coerces the shifted operand to
long. The C89 Committee affirmed the freedom in implementation granted
by K&R in not requiring the signed right shift operation to sign
extend, since such a requirement might slow down fast code and since
the usefulness of sign extended shifts is marginal. (Shifting a
negative two’s complement integer arithmetically right one place is
not the same as dividing by two!)
If you explicitly shift 0xff it works as you expected
cout << (0xff >> 3) << endl; // 31
That result would only be possible if 0xff had a signed type of width 8 (char or signed char on popular platforms).
So, in common case:
You need to use unsigned ints
(unsigned type)0xff
A right shift works as division by 2 (with rounding down, if I understand correctly).
So when the first bit is 1, you have a negative value, and after the division it's negative again.
The two kinds of right shift you're talking about are called Logical Shift and Arithmetic Shift. C and C++ use logical shift for unsigned integers and most compilers will use arithmetic shift for a signed integer but this is not guaranteed by the standard meaning that the value of right shifting a negative signed int is implementation defined.
Since you want a logical shift you need to switch to using an unsigned integer. You can do this by replacing your constant with 0xffU.
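For instance, this minimal sketch gives the zero-filled result:
#include <iostream>

int main()
{
    // An unsigned operand guarantees a logical (zero-filling) right shift.
    std::cout << (0xffU >> 3) << '\n'; // 31
}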
To explain your real code you just need the C++ versions of the quotes from the C standard that Lundin gave in comments:
int number = ~0;
number = number << 4;
Undefined behavior. [expr.shift] says
The value of E1 << E2 is E1 left-shifted E2 bit positions; vacated bits are zero-filled. If E1 has an unsigned type, the value of the result is E1 × 2^E2, reduced modulo one more than the maximum value representable in the result type. Otherwise, if E1 has a signed type and non-negative value, and E1 × 2^E2 is representable in the result type, then that is the resulting value; otherwise, the behavior is undefined.
number = ~0;
number = number >> 4;
Implementation-defined result, in this case your implementation gave you an arithmetic shift:
The value of E1 >> E2 is E1 right-shifted E2 bit positions. If E1 has an unsigned type or if E1 has a signed type and a non-negative value, the value of the result is the integral part of the quotient of E1 / 2^E2. If E1 has a signed type and a negative value, the resulting value is implementation-defined.
You should use an unsigned type:
unsigned int number = -1;
number = number >> 4;
std::cout << std::hex << number << std::endl;
Output:
fffffff
To add my 5 cents worth here...
I'm facing exactly the same problem as this.lau! I've done some perfunctory research on this and these are my results:
typedef unsigned int Uint;
#define U31 0x7FFFFFFF
#define U32 0xFFFFFFFF
printf ("U31 right shifted: 0x%08x\n", (U31 >> 30));
printf ("U32 right shifted: 0x%08x\n", (U32 >> 30));
Output:
U31 right shifted: 0x00000001 (expected)
U32 right shifted: 0xffffffff (not expected)
It would appear (in the absence of anyone with detailed knowledge) that the C compiler in XCode for Mac OS X v5.0.1 reserves the MSB as a carry bit that gets pulled along with each shift.
Somewhat annoyingly, the converse is NOT true:
#define ST00 0x00000001
#define ST01 0x00000002
printf ("ST00 left shifted: 0x%08x\n", (ST00 << 30));
printf ("ST01 left shifted: 0x%08x\n", (ST01 << 30));
Output:
ST00 left shifted: 0x40000000
ST01 left shifted: 0x80000000
I concur completely with the people above that assert that the sign of the operand has no bearing on the behaviour of the shift operator.
Can anyone shed any light on the specification for the Posix4 implementation of C? I feel a definitive answer may rest there.
In the meantime, it appears that the only workaround is a construct along the following lines:
#define CARD2UNIVERSE(c) (((c) == 32) ? 0xFFFFFFFF : (U31 >> (31 - (c))))
This works - exasperating but necessary.
Just in case you want the first bit of a negative number to be 0 after a right shift: you can XOR that negative number with INT_MIN, which will make its MSB zero. I understand it's not a proper arithmetic shift, but it will get the work done.

integer operations with <<

I've recently been seeing a number of code examples with stuff like
1 << 20
Although I knew this operator could be used on integers, I'm not sure what it does, and every Google search I try returns stuff about cout << but nothing on the integer operation. Could someone tell me what this operator does to integers?
<< is the bitwise left-shift operator.
C++03 [5.8/2]
The value of E1 << E2 is E1 (interpreted as a bit pattern) left-shifted E2 bit positions; vacated bits are zero-filled. If E1 has an unsigned type, the value of the result is E1 multiplied by the quantity 2 raised to the power E2, reduced modulo ULONG_MAX+1 if E1 has type unsigned long, UINT_MAX+1 otherwise. [Note: the constants ULONG_MAX and UINT_MAX are defined in the header <climits>. ]
In addition in the expression E1 << E2 if E1 has a signed type and negative value the behaviour is undefined.
That means something like -1 << 4 invokes UB.
Bit shifting: http://en.wikipedia.org/wiki/Bitwise_shift#Bit_shifts
You are shifting 1 left by 20 bits, or...
1 << 20 == (binary) 1 0000 0000 0000 0000 0000 == 2^20 == 1048576
The use of << and >> for I/O is actually newer than their "original" purpose, bit shifting.
1 << 1 means to take the binary 1 (which is also just plain 1) and move everything along 1 to the left, resulting in a binary 10 or 2. It doubles it. 1 << 2 becomes 4, 1 << 3 becomes 8, and so on. (Starting with a more complicated number than 1 still works, again you just shift everything over to the left.) This kind of work is called "bit twiddling" by some. It can save time for certain kinds of arithmetical operations.
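A small sketch of both points:
#include <iostream>

int main()
{
    std::cout << (1 << 20) << '\n'; // 1048576, i.e. 2^20
    for (int i = 0; i < 5; ++i)
        std::cout << (1 << i) << ' '; // 1 2 4 8 16: each shift doubles
    std::cout << '\n';
}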