C++ Double to Binary Representation (Reinterpret Cast)

I've recently decided to create a program that'll allow me to print out the exact bit pattern of an instance of any type in C++. I'm starting with the primitive built-in types. I've run into an issue with printing the binary representation of a double type.
Here's my code:
#include <iostream>
using namespace std;

void toBinary(ostream& o, char a)
{
    const size_t size = sizeof(a) * 8;
    for (int i = size - 1; i >= 0; --i) {
        bool b = a & (1UL << i);
        o << b;
    }
}

void toBinary(ostream& o, double d)
{
    const size_t size = sizeof(d);
    for (int i = 0; i < size; ++i) {
        char* c = reinterpret_cast<char*>(&d) + i;
        toBinary(o, *c);
    }
}

int main()
{
    int a = 5;
    cout << a << " as binary: ";
    toBinary(cout, static_cast<char>(a));
    cout << "\n";

    double d = 5;
    cout << d << " as double binary: ";
    toBinary(cout, d);
    cout << "\n";
}
My output is the following:
5 as binary: 00000101
5 as double binary: 0000000000000000000000000000000000000000000000000001010001000000
However, I know that 5 as a floating point representation is:
01000000 00010100 00000000 00000000
00000000 00000000 00000000 00000000
Maybe I'm not understanding something here, but doesn't the reinterpret_cast<char*>(&d) + i line I've written allow me to treat a double* as a char*, so that adding i to it advances the pointer by sizeof(char) instead of sizeof(double) (which is what I want here)? What am I doing wrong?

If you interpret a numeric type as a "byte sequence", you are exposed to the machine's endianness: some platforms store the most significant byte first, others do the reverse.
Just read your number in 8-bit groups, from the last group towards the first, and you get exactly what you expect.
Note that the same thing also happens with integers: 5 (in 32 bits, on a little-endian machine) is
00000101-00000000-00000000-00000000
and not
00000000-00000000-00000000-00000101
as you would expect.
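One way to get the textbook order regardless of storage (a minimal sketch, assuming IEEE-754 doubles and that sizeof(double) == 8; the helper name is mine) is to memcpy the bits into a uint64_t and print from the most significant bit down, so the output no longer depends on the machine's byte order:

```cpp
#include <cstdint>
#include <cstring>
#include <string>

// Render a double's bit pattern most-significant bit first.
// memcpy into a uint64_t is a well-defined way to reinterpret the
// bits, and printing from bit 63 downwards sidesteps endianness.
std::string doubleBitsMSBFirst(double d)
{
    static_assert(sizeof(double) == sizeof(std::uint64_t),
                  "sketch assumes a 64-bit double");
    std::uint64_t u;
    std::memcpy(&u, &d, sizeof u);
    std::string s;
    for (int i = 63; i >= 0; --i)
        s += ((u >> i) & 1) ? '1' : '0';
    return s;
}
```

For 5.0 this yields 0100000000010100 followed by 48 zeros, i.e. the representation the questioner expected.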

Related

How does C++ compiler represent int numbers in binary code?

I've written a program which shows the binary representation of a particular integer value, using bitwise operators in C++. For even numbers it works as I expect, but for odd ones it adds a 1 to the left of the binary representation.
#include <iostream>
#include <cstdlib>
using std::cout;
using std::cin;
using std::endl;

int main()
{
    unsigned int a = 128;
    for (int i = sizeof(a) * 8; i >= 0; --i) {
        if (a & (1UL << i)) { // if i-th digit is 1
            cout << 1;        // Output 1
        }
        else {
            cout << 0;        // Otherwise output 0
        }
    }
    cout << endl;
    system("pause");
    return 0;
}
Results:
For a = 128: 000000000000000000000000010000000,
For a = 127: 100000000000000000000000001111111
You might prefer the CHAR_BIT macro to the raw 8 (#include <climits>).
Consider your start value! Assuming unsigned int has 32 bits, your start value is int i = 4*8, so 1U << i shifts the value out of range. This is undefined behaviour and could result in anything; evidently your specific compiler or hardware shifts modulo 32, so you get an initial value & 1, resulting in the unexpected leading 1. Did you notice that you actually printed 33 digits instead of only 32?
The problem is here:
for (int i = sizeof(a) * 8; i >= 0; --i) {
it should be:
for (int i = sizeof(a) * 8; i-- ; ) {
http://coliru.stacked-crooked.com/a/8cb2b745063883fa
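Wrapping the corrected loop in a small helper (a sketch; the name toBinaryString is mine) shows that exactly 32 digits come out and no out-of-range shift ever happens:

```cpp
#include <climits>
#include <string>

// Corrected loop: i starts at the bit width and is decremented before
// use, so the shifts run from bit 31 down to bit 0 -- never out of range.
std::string toBinaryString(unsigned int a)
{
    std::string s;
    for (unsigned i = sizeof(a) * CHAR_BIT; i--; )
        s += (a & (1UL << i)) ? '1' : '0';
    return s;
}
```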

Accessing 8-bit data as 7-bit

I have an array of 100 uint8_t's, which is to be treated as a stream of 800 bits and dealt with 7 bits at a time. So in other words, if the first element of the 8-bit array holds 0b11001100 and the second holds 0b11110000, then when I come to read it in 7-bit format, the first element of the 7-bit array would be 0b1100110 and the second would be 0b0111100, with the remaining 2 bits being held in the 3rd.
The first thing I tried was a union...
struct uint7_t {
    uint8_t i1 : 7;
};

union uint7_8_t {
    uint8_t u8[100];
    uint7_t u7[115];
};
but of course everything's byte-aligned and I essentially end up simply losing the 8th bit of each element.
Does anyone have any ideas on how I can go about doing this?
Just to be clear, this is something of a visual representation of the result of the union:
xxxxxxxx xxxxxxxx xxxxxxxx xxxxxxxx 32 bits of 8 bit data
0xxxxxxx 0xxxxxxx 0xxxxxxx 0xxxxxxx 32 bits of 7-bit data.
And this represents what it is that I want to do instead:
xxxxxxxx xxxxxxxx xxxxxxxx xxxxxxxx 32 bits of 8 bit data
xxxxxxx xxxxxxx xxxxxxx xxxxxxx xxxx 32 bits of 7-bit data.
I'm aware the last bits may be padded, but that's fine; I just want some way of accessing each byte 7 bits at a time without losing any of the 800 bits. So far the only way I can think of is lots of bit shifting, which of course would work, but I'm sure there's a cleaner way of going about it(?)
Thanks in advance for any answers.
Not sure what you mean by "cleaner". Generally people who work on this sort of problem regularly consider shifting and masking to be the right primitive tool to use. One can do something like defining a bitstream abstraction with a method to read an arbitrary number of bits off the stream. This abstraction sometimes shows up in compression applications. The internals of the method of course do use shifting and masking.
One fairly clean approach is to write a function which extracts a 7-bit number at any bit index in an array of unsigned chars. Use a division to convert the bit index to a byte index, and a modulus to get the bit index within the byte. Then shift and mask. The input bits can span two bytes, so you either have to glue together a 16-bit value before extracting, or do two smaller extractions and or them together to construct the result.
If I were aiming for something moderately performant, I'd likely take one of two approaches:
The first has two state variables saying how many bits to take from the current and next byte. It would use shifting, masking, and bitwise or, to produce the current output (a number between 0 and 127 as an int for example), then the loop would update both state variables via adding and modulus, and would increment the current byte pointers if all bits in the first byte were consumed.
The second approach is to load 56-bits (8 outputs worth of input) into a 64-bit integer and use a fully unrolled structure to extract each of the 8 outputs. Doing this without using unaligned memory reads requires constructing the 64-bit integer piecemeal. (56-bits is special because the starting bit position is byte aligned.)
To go real fast, I might try writing SIMD code in Halide. That's beyond scope here I believe. (And not clear it is going to win much actually.)
Designs which read more than one byte into an integer at a time will likely have to consider processor byte ordering.
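The second approach above can be sketched like this (assuming 7-bit values packed LSB-first; the name unpack8 is mine). 56 bits of input produce exactly 8 outputs, so the starting position is byte-aligned every iteration:

```cpp
#include <cstdint>

// Assemble 7 input bytes into a 64-bit integer piecemeal (no unaligned
// reads), then peel off eight 7-bit outputs with shifts and masks.
void unpack8(const std::uint8_t in[7], std::uint8_t out[8])
{
    std::uint64_t v = 0;
    for (int i = 0; i < 7; ++i)
        v |= std::uint64_t(in[i]) << (8 * i);  // little-endian glue, by value
    for (int i = 0; i < 8; ++i)
        out[i] = (v >> (7 * i)) & 0x7F;        // fully unrollable
}
```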
Process them in groups of 8 (since 8x7 nicely rounds to something 8-bit aligned). Bitwise operators are the order of the day here. Handling the last (up to) 7 numbers is a little fiddly, but not impossible. (This code assumes these are unsigned 7-bit integers! A signed conversion would require you to consider flipping the top bit if bit[6] is 1.)
// convert 8 x 7bit ints in one go
void extract8(const uint8_t input[7], uint8_t output[8])
{
    output[0] =  input[0] & 0x7F;
    output[1] = (input[0] >> 7) | ((input[1] << 1) & 0x7F);
    output[2] = (input[1] >> 6) | ((input[2] << 2) & 0x7F);
    output[3] = (input[2] >> 5) | ((input[3] << 3) & 0x7F);
    output[4] = (input[3] >> 4) | ((input[4] << 4) & 0x7F);
    output[5] = (input[4] >> 3) | ((input[5] << 5) & 0x7F);
    output[6] = (input[5] >> 2) | ((input[6] << 6) & 0x7F);
    output[7] =  input[6] >> 1;
}
// convert array of 7bit ints to 8bit
void seven_bit_to_8bit(const uint8_t* const input, uint8_t* const output, const size_t count)
{
    size_t count8 = count >> 3;
    for (size_t i = 0; i < count8; ++i)
    {
        extract8(input + 7 * i, output + 8 * i);
    }

    // handle remaining (up to) 7 values
    const size_t countr = (count % 8);
    if (countr)
    {
        // how many bytes do we need to copy from the input?
        size_t remaining_bits = 7 * countr;
        if (remaining_bits % 8)
        {
            // round up to the next multiple of 8
            remaining_bits += (8 - remaining_bits % 8);
        }
        remaining_bits /= 8; // now a byte count

        uint8_t in[7] = {0}, out[8] = {0};
        for (size_t i = 0; i < remaining_bits; ++i)
        {
            in[i] = input[count8 * 7 + i];
        }
        extract8(in, out);
        for (size_t i = 0; i < countr; ++i)
        {
            output[count8 * 8 + i] = out[i];
        }
    }
}
Here is a solution that uses the std::vector<bool> specialization. It also uses a similar mechanism to allow access to the seven-bit elements via reference objects.
The member functions allow for the following operations:
uint7_t x{5};                     // simple value
Arr<uint7_t> arr(10);             // array of size 10
arr[0] = x;                       // set element
uint7_t y = arr[0];               // get element
arr.push_back(uint7_t{9});        // add element
arr.push_back(x);                 //
std::cout << "Array size is "
          << arr.size() << '\n';  // get size
for (auto&& i : arr)
    std::cout << i << '\n';       // range-for to read values
int z{50};
for (auto&& i : arr)
    i = z++;                      // range-for to change values
auto&& v = arr[1];                // get reference to second element
v = 99;                           // change second element via reference
Full program:
#include <vector>
#include <iterator>
#include <iostream>

struct uint7_t {
    unsigned int i : 7;
};

struct seven_bit_ref {
    size_t begin;
    size_t end;
    std::vector<bool>& bits;

    seven_bit_ref& operator=(const uint7_t& right)
    {
        auto it{bits.begin() + begin};
        for (int mask{1}; mask != 1 << 7; mask <<= 1)
            *it++ = right.i & mask;
        return *this;
    }

    operator uint7_t() const
    {
        uint7_t r{};
        auto it{bits.begin() + begin};
        for (int i{}; i < 7; ++i)
            r.i += *it++ << i;
        return r;
    }

    seven_bit_ref operator*()
    {
        return *this;
    }

    void operator++()
    {
        begin += 7;
        end += 7;
    }

    bool operator!=(const seven_bit_ref& right)
    {
        return !(begin == right.begin && end == right.end);
    }

    seven_bit_ref operator=(int val)
    {
        uint7_t temp{};
        temp.i = val;
        operator=(temp);
        return *this;
    }
};

template<typename T>
class Arr;

template<>
class Arr<uint7_t> {
public:
    Arr(size_t size) : bits(size * 7, false) {}

    seven_bit_ref operator[](size_t index)
    {
        return {index * 7, index * 7 + 7, bits};
    }

    size_t size()
    {
        return bits.size() / 7;
    }

    void push_back(uint7_t val)
    {
        for (int mask{1}; mask != 1 << 7; mask <<= 1) {
            bits.push_back(val.i & mask);
        }
    }

    seven_bit_ref begin()
    {
        return {0, 7, bits};
    }

    seven_bit_ref end()
    {
        return {size() * 7, size() * 7 + 7, bits};
    }

    std::vector<bool> bits;
};

std::ostream& operator<<(std::ostream& os, uint7_t val)
{
    os << val.i;
    return os;
}

int main()
{
    uint7_t x{5};                     // simple value
    Arr<uint7_t> arr(10);             // array of size 10
    arr[0] = x;                       // set element
    uint7_t y = arr[0];               // get element
    arr.push_back(uint7_t{9});        // add element
    arr.push_back(x);                 //
    std::cout << "Array size is "
              << arr.size() << '\n';  // get size
    for (auto&& i : arr)
        std::cout << i << '\n';       // range-for to read values
    int z{50};
    for (auto&& i : arr)
        i = z++;                      // range-for to change values
    auto&& v = arr[1];                // get reference
    v = 99;                           // change via reference
    std::cout << "\nAfter changes:\n";
    for (auto&& i : arr)
        std::cout << i << '\n';
}
The following code does what you asked for; first the output, and a live example on ideone.
Output:
Before changing values...:
7 bit representation: 1111111 0000000 0000000 0000000 0000000 0000000 0000000 0000000
8 bit representation: 11111110 00000000 00000000 00000000 00000000 00000000 00000000
After changing values...:
7 bit representation: 1000000 1001100 1110010 1011010 1010100 0000111 1111110 0000000
8 bit representation: 10000001 00110011 10010101 10101010 10000001 11111111 00000000
8 Bits: 11111111 to ulong: 255
7 Bits: 1111110 to ulong: 126
After changing values...:
7 bit representation: 0010000 0101010 0100000 0000000 0000000 0000000 0000000 0000000
8 bit representation: 00100000 10101001 00000000 00000000 00000000 00000000 00000000
It is very straightforward using a std::bitset in a class called BitVector. I implement one getter and one setter. The getter returns a std::bitset of a given template-argument size M at the given index selIdx; the index is multiplied by M to find the right position. The returned bitset can then be converted to numeric or string values.
The setter takes a uint8_t value as input and again the index selIdx. The bits are shifted into the right positions in the bitset.
Furthermore, thanks to the template argument M, you can use the getter and setter with different sizes: you can work with either a 7- or 8-bit representation, but also 3 or whatever you like.
I'm sure this code is not the best in terms of speed, but I think it is a very clear and clean solution. It is also far from complete, as there is just one getter, one setter and two constructors. Remember to implement error checking for indexes and sizes.
Code:
#include <iostream>
#include <bitset>

template <size_t N> class BitVector
{
private:
    std::bitset<N> _data;

public:
    BitVector (unsigned long num) : _data (num) { };
    BitVector (const std::string& str) : _data (str) { };

    template <size_t M>
    std::bitset<M> getBits (size_t selIdx)
    {
        std::bitset<M> retBitset;
        for (size_t idx = 0; idx < M; ++idx)
        {
            retBitset |= (_data[M * selIdx + idx] << (M - 1 - idx));
        }
        return retBitset;
    }

    template <size_t M>
    void setBits (size_t selIdx, uint8_t num)
    {
        const unsigned char* curByte = reinterpret_cast<const unsigned char*> (&num);
        for (size_t bitIdx = 0; bitIdx < 8; ++bitIdx)
        {
            bool bitSet = (1 == ((*curByte & (1 << (8 - 1 - bitIdx))) >> (8 - 1 - bitIdx)));
            _data.set(M * selIdx + bitIdx, bitSet);
        }
    }

    void print_7_8()
    {
        std::cout << "\n7 bit representation: ";
        for (size_t idx = 0; idx < (N / 7); ++idx)
        {
            std::cout << getBits<7>(idx) << " ";
        }
        std::cout << "\n8 bit representation: ";
        for (size_t idx = 0; idx < N / 8; ++idx)
        {
            std::cout << getBits<8>(idx) << " ";
        }
    }
};

int main ()
{
    BitVector<56> num = 127;
    std::cout << "Before changing values...:";
    num.print_7_8();

    num.setBits<8>(0, 0x81);
    num.setBits<8>(1, 0b00110011);
    num.setBits<8>(2, 0b10010101);
    num.setBits<8>(3, 0xAA);
    num.setBits<8>(4, 0x81);
    num.setBits<8>(5, 0xFF);
    num.setBits<8>(6, 0x00);
    std::cout << "\n\nAfter changing values...:";
    num.print_7_8();

    std::cout << "\n\n8 Bits: " << num.getBits<8>(5) << " to ulong: " << num.getBits<8>(5).to_ulong();
    std::cout << "\n7 Bits: " << num.getBits<7>(6) << " to ulong: " << num.getBits<7>(6).to_ulong();

    num = BitVector<56>(std::string("1001010100000100"));
    std::cout << "\n\nAfter changing values...:";
    num.print_7_8();
    return 0;
}
Here is one approach without the manual shifting. This is just a crude proof of concept, but hopefully you will be able to get something out of it. I don't know whether you can easily transform your input into a bitset, but I think it should be possible.
int bytes = 0x01234567;
bitset<32> bs(bytes);
cout << "Input: " << bs << endl;
for (int i = 0; i < 5; i++)
{
    bitset<7> slice(bs.to_string().substr(i * 7, 7));
    cout << slice << endl;
}
Also, this is probably much less performant than the bit-shifting version, so I wouldn't recommend it for heavy lifting.
You can use this to get the index'th 7-bit element from in (note that it doesn't have proper end of array handling). Simple, fast.
int get7(const uint8_t *in, int index) {
    int fidx = index * 7;
    int idx = fidx >> 3;
    int sidx = fidx & 7;
    return (in[idx] >> sidx | in[idx + 1] << (8 - sidx)) & 0x7f;
}
You can use direct access or bulk bit packing/unpacking as in TurboPFor:Integer Compression
// Direct read access
// b : bit width 0-16 (7 in your case)
#define bzhi32(u,b) ((u) & ((1u <<(b))-1))
static inline unsigned bitgetx16(unsigned char *in,
unsigned idx,
unsigned b) {
unsigned bidx = b*idx;
return bzhi32( *(unsigned *)((uint16_t *)in+(bidx>>4)) >> (bidx& 0xf), b );
}

Combining Bits - Combining Characters to Integers C++

I have an idea on how to make a header of a file more efficient (for an assignment) but I want to know if I can carry out the implementation.
Is it possible to read 24 bits from a file and then put it into an integer and have it retain its value?
Let's say I have:
00000000 00000001 00000000 = 256
Can I read this from a file, separate it into three characters, and then combine these characters into one integer such that the value 256 is retained? Such that the end result would be:
00000000 00000000 00000001 00000000 = 256
unsigned char a, b, c;
if (f >> std::noskipws >> a >> b >> c)
{
    int n = a * 256 * 256 + b * 256 + c;
    // use n...
}
else
    std::cerr << "unable to read 3 characters from file\n";
Try this:
int combine_chars(char a, char b, char c) {
    char arr[4] = {c, b, a, '\0'};
    return *(int *)arr;
}
This assumes a little-endian representation of the integer, which means that the least significant byte has the smallest index. Basically the function creates an array of the 3 characters with a trailing 0 (since the most significant byte, i.e. the most significant 8 bits, must be 0). The least significant byte is c; then, since the second least significant byte is 1 in the example, that becomes 1 << 8, which is 256. The array in memory is:
c b a '\0'
This is then cast to an integer pointer (int *), which is dereferenced to produce the integer value that is returned.
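A more portable alternative (a sketch; the name combine3 is mine) builds the value with shifts instead of reinterpreting memory, so it works regardless of the host's byte order:

```cpp
// Combine three bytes into one int, with a as the most significant byte.
// Only arithmetic on values is used, never the in-memory byte order.
int combine3(unsigned char a, unsigned char b, unsigned char c)
{
    return (int(a) << 16) | (int(b) << 8) | int(c);
}
```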
You can try something like this:
int getNumberFromStdin() {
    std::string numberStr;
    for (int i = 0; i < 3; ++i) {
        std::string s;
        std::cin >> s;
        numberStr += s;
    }
    int number = 0;
    for (const auto c : numberStr) {
        number = (number << 1) + (c - '0');
    }
    return number;
}

Copy bytes from smaller types to bigger type

So, I want to pack the information of two shorts into one int for sending over the network later (this is just a demonstration; there is no network code here). I copied the two bytes of each short number into the prim (primitive) variable.
#include <iostream>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <string>

int main()
{
    unsigned int prim = 0;
    short major = 0x0001;
    short minor = 0x0000;

    prim = 0;
    memcpy(&prim, &major, 2);
    memcpy(&prim + 2, &minor, 2);
    std::cout << "Prim: " << prim << std::endl;

    return EXIT_SUCCESS;
}
If I copy the major first, then minor (as written in the code above) later, I will get the correct output:
Prim: 1
Because my CPU is little endian, the binary representation should be:
10000000 00000000 00000000 00000000
However, if I change the order, meaning minor first then major later like this:
memcpy(&prim, &minor, 2);
memcpy(&prim+2,&major, 2);
I get a 0.
Prim: 0
Why? It is supposed to be:
00000000 00000000 10000000 00000000
Which is equal to 65536.
A more portable way of putting integers into larger integers is by shifting.
uint16_t lo = 1;
uint16_t hi = 2;
uint32_t all = (hi << 16) + lo;
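Unpacking on the receiving side is the mirror image (a sketch; the helper names are mine, and fixed-width types make the intent explicit). Because the shifts operate on values rather than in-memory bytes, the result is the same on any endianness:

```cpp
#include <cstdint>

// Pack two 16-bit halves into 32 bits and split them again by value.
std::uint32_t pack(std::uint16_t hi, std::uint16_t lo)
{
    return (std::uint32_t(hi) << 16) | lo;
}
std::uint16_t high16(std::uint32_t v) { return v >> 16; }
std::uint16_t low16(std::uint32_t v)  { return v & 0xFFFF; }
```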
Because &prim + 2 points 8 bytes (2 × sizeof(unsigned int)) past &prim.
You need to cast it to a char* so that the pointer arithmetic advances in bytes:
memcpy((char*)&prim + 2, &major, 2);
Besides fixing the error in calculating the correct pointers, you can avoid all this pointer arithmetic by using a union:
#include <iostream>

union two_shorts_t {
    unsigned int prim;
    struct shorts_t {
        short major;
        short minor;
    } shorts;
};

int main ()
{
    two_shorts_t values;

    values.shorts.major = 1;
    values.shorts.minor = 0;
    std::cout << "Prim: " << values.prim << std::endl;

    values.shorts.major = 0;
    values.shorts.minor = 1;
    std::cout << "Prim: " << values.prim << std::endl;

    return 0;
}

Conceptual problem in Union

My code is this
// using_a_union.cpp
#include <stdio.h>

union NumericType
{
    int iValue;
    long lValue;
    double dValue;
};

int main()
{
    union NumericType Values = { 10 }; // iValue = 10
    printf("%d\n", Values.iValue);
    Values.dValue = 3.1416;
    printf("%d\n", Values.iValue); // garbage value
}
Why do I get a garbage value when I try to print Values.iValue after doing Values.dValue = 3.1416?
I thought the memory layout would be like this. What happens to Values.iValue and Values.lValue when I assign something to Values.dValue?
In a union, all of the data members overlap. You can only use one data member of a union at a time.
iValue, lValue, and dValue all occupy the same space.
As soon as you write to dValue, the iValue and lValue members are no longer usable: only dValue is usable.
Edit: To address the comments below: You cannot write to one data member of a union and then read from another data member. To do so results in undefined behavior. (There's one important exception: you can reinterpret any object in both C and C++ as an array of char. There are other minor exceptions, like being able to reinterpret a signed integer as an unsigned integer.) You can find more in both the C Standard (C99 6.5/6-7) and the C++ Standard (C++03 3.10, if I recall correctly).
Might this "work" in practice some of the time? Yes. But unless your compiler expressly states that such reinterpretation is guaranteed to work correctly and specifies the behaviour it guarantees, you cannot rely on it.
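The char-array exception mentioned above can be sketched like this (the helper name is mine): inspecting any object's bytes through unsigned char* is well-defined in both C and C++, even though reading another union member is not:

```cpp
#include <cstddef>

// Count the non-zero bytes in an object's representation. Accessing
// the representation via unsigned char* is sanctioned by the standard.
std::size_t count_nonzero_bytes(const void* p, std::size_t n)
{
    const unsigned char* bytes = static_cast<const unsigned char*>(p);
    std::size_t count = 0;
    for (std::size_t i = 0; i < n; ++i)
        if (bytes[i] != 0)
            ++count;
    return count;
}
```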
Because floating point numbers are represented differently than integers are.
All of those variables occupy the same area of memory (with the double occupying more obviously). If you try to read the first four bytes of that double as an int you are not going to get back what you think. You are dealing with raw memory layout here and you need to know how these types are represented.
EDIT: I should have also added (as James has already pointed out) that writing to one variable in a union and then reading from another does invoke undefined behavior and should be avoided (unless you are re-interpreting the data as an array of char).
Well, let's just look at simpler example first. Ed's answer describes the floating part, but how about we examine how ints and chars are stored first!
Here's an example I just coded up:
#include "stdafx.h"
#include <iostream>
using namespace std;

union Color {
    int value;
    struct {
        unsigned char R, G, B, A;
    };
};

int _tmain(int argc, _TCHAR* argv[])
{
    Color c;
    c.value = 0xFFCC0000;
    cout << (int)c.R << ", " << (int)c.G << ", " << (int)c.B << ", " << (int)c.A << endl;
    getchar();
    return 0;
}
What would you expect the output to be?
255, 204, 0, 0
Right?
If an int is 32 bits and each of the chars is 8 bits, then R should correspond to the left-most byte, G the second one, and so forth.
But that's wrong. At least on my machine/compiler, it appears ints are stored in reverse byte order. I get,
0, 0, 204, 255
So to make this give the output we'd expect (or the output I would have expected anyway), we have to change the struct to A,B,G,R. This has to do with endianness.
Anyway, I'm not an expert on this stuff, just something I stumbled upon when trying to decode some binaries. The point is, floats aren't necessarily encoded the way you'd expect either... you have to understand how they're stored internally to understand why you're getting that output.
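A portable way to get the channel values the struct ordering was supposed to give (a sketch; the helper names are mine, and it assumes R occupies the top byte, matching the 255, 204, 0, 0 expected for 0xFFCC0000) is to extract by value with shifts rather than relying on in-memory layout:

```cpp
#include <cstdint>

// Extract channels by value; the result is identical on any machine,
// unlike the union-of-struct trick, which depends on byte order.
std::uint8_t chanR(std::uint32_t c) { return (c >> 24) & 0xFF; }
std::uint8_t chanG(std::uint32_t c) { return (c >> 16) & 0xFF; }
std::uint8_t chanB(std::uint32_t c) { return (c >>  8) & 0xFF; }
std::uint8_t chanA(std::uint32_t c) { return  c        & 0xFF; }
```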
You've done this:
union NumericType Values = { 10 }; // iValue = 10
printf("%d\n", Values.iValue);
Values.dValue = 3.1416;
How a compiler uses memory for this union is similar to using the variable with the largest size and alignment (any of them, if there are several), and reinterpret-casting when one of the other types in the union is written/accessed, as in:
double dValue; // creates a variable with the alignment & space
               // of "union NumericType Values"

*reinterpret_cast<int*>(&dValue) = 10;            // separate step equiv. to = { 10 }
printf("%d\n", *reinterpret_cast<int*>(&dValue)); // print as int

dValue = 3.1416;                                  // assign as double
printf("%d\n", *reinterpret_cast<int*>(&dValue)); // now print as int
The problem is that in setting dValue to 3.1416 you've completely overwritten the bits that used to hold the number 10. The new value may appear to be garbage, but it's simply the result of interpreting the first sizeof(int) bytes of the double 3.1416 as if a useful int value were stored there.
If you want the two things to be independent - so setting the double doesn't affect the earlier-stored int - then you should use a struct/class.
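A minimal sketch of that struct alternative (the type name is mine): each member gets its own storage, so writing one never clobbers the other:

```cpp
// Unlike a union, a struct gives iValue and dValue separate storage.
struct NumericPair {
    int iValue;
    double dValue;
};
```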
It may help you to consider this program:
#include <iostream>
#include <cstdint>

void print_bits(std::ostream& os, const void* pv, size_t n)
{
    for (size_t i = 0; i < n; ++i)
    {
        uint8_t byte = static_cast<const uint8_t*>(pv)[i];
        for (int j = 0; j < 8; ++j)
            os << ((byte & (128 >> j)) ? '1' : '0');
        os << ' ';
    }
}

union X
{
    int i;
    double d;
};

int main()
{
    X x = { 10 };
    print_bits(std::cout, &x, sizeof x);
    std::cout << '\n';
    x.d = 3.1416;
    print_bits(std::cout, &x, sizeof x);
    std::cout << '\n';
}
Which, for me, produced this output:
00001010 00000000 00000000 00000000 00000000 00000000 00000000 00000000
10100111 11101000 01001000 00101110 11111111 00100001 00001001 01000000
Crucially, the first half of each line shows the 32 bits that are used for iValue: note the 1010 binary in the least significant byte (on the left on an Intel CPU like mine) is 10 decimal. Writing 3.1416 changes the entire 64-bits to a pattern representing 3.1416 (see http://en.wikipedia.org/wiki/Double_precision_floating-point_format). The old 1010 pattern is overwritten, clobbered, an electromagnetic memory no more.