Lemire's Nearly Divisionless Modulo Trick - C++

In https://lemire.me/blog/2019/06/06/nearly-divisionless-random-integer-generation-on-various-systems/, Lemire uses -s % s to compute what the paper says should be 2^L % s. According to https://shufflesharding.com/posts/dissecting-lemire these should be equivalent, but I'm getting different results. A 32-bit example:
#include <cstdint>
#include <iostream>

int main() {
    uint64_t s = 1440000000;
    uint64_t k1 = (1ULL << 32ULL) % s;
    uint64_t k2 = (-s) % s;
    std::cout << k1 << std::endl;
    std::cout << k2 << std::endl;
}
Output:
./main
1414967296
109551616
The results aren't matching. What am I missing?

Unary negation on an unsigned integer wraps modulo the width of the type (two's complement and all that): for a uint64_t, -s equals 2^64 - s, so (-s) % s computes 2^64 % s, not 2^32 % s.
So if you want to simulate 32-bit operations using uint64_t variables, you need to cast the value down to 32 bits for that step:
#include <cstdint>
#include <iostream>

int main() {
    uint64_t s = 1440000000;
    uint64_t k1 = (1ULL << 32ULL) % s;
    uint64_t k2 = (-uint32_t(s)) % s;
    std::cout << k1 << std::endl;
    std::cout << k2 << std::endl;
}
Which leads to the expected result:
Program returned: 0
Program stdout
1414967296
1414967296
See on godbolt
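For the bigger picture: -s % s is the threshold computation on the rare slow path of Lemire's bounded generator. Here is a sketch of the full nearly divisionless algorithm in the shape the blog post describes; random64() is a stand-in source (wired here to std::mt19937_64), and __uint128_t is a GCC/Clang extension, so treat this as illustrative rather than portable:

#include <cstdint>
#include <iostream>
#include <random>

static std::mt19937_64 rng{42};               // stand-in 64-bit source
static uint64_t random64() { return rng(); }

// Returns a uniform value in [0, s) for s >= 1.
uint64_t nearly_divisionless(uint64_t s) {
    uint64_t x = random64();
    __uint128_t m = static_cast<__uint128_t>(x) * s;
    uint64_t l = static_cast<uint64_t>(m);    // low 64 bits of the product
    if (l < s) {                              // slow path, taken rarely
        uint64_t t = -s % s;                  // == 2^64 % s, the rejection threshold
        while (l < t) {                       // reject to remove bias
            x = random64();
            m = static_cast<__uint128_t>(x) * s;
            l = static_cast<uint64_t>(m);
        }
    }
    return static_cast<uint64_t>(m >> 64);    // high 64 bits: the bounded result
}

int main() {
    for (int i = 0; i < 5; ++i)
        std::cout << nearly_divisionless(1440000000) << "\n";
}

The expensive % only runs when the low 64 bits of the product fall below s, which is why the method is "nearly" divisionless.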

Related

Float64 bit issue: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]

I'm trying the code below to get the lower and higher 32-bit parts of a 64-bit double value.
#define LSL_HI(x) *(1+(sInt32*)&x)
#define LSL_LO(x) *(sInt32*)&x
//warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
sInt32 lx = LSL_LO(1111.2222);
The -Wstrict-aliasing warning is enabled as part of -O2 optimization, and I don't want to have to disable that.
What is the solution to fix this issue?
I took these macros from the GCC math library itself.
If you want the literal bytes of a 64-bit double you can do this:
#include <cstdint>
#include <cstring>

static_assert(sizeof(double) == 8, "non 64-bit double!");

int main()
{
    double x = 1111.2222;
    std::uint64_t bits;
    std::memcpy(&bits, &x, sizeof bits);  // well-defined way to grab the representation
    std::uint32_t lo = static_cast<std::uint32_t>(bits & 0x00000000FFFFFFFFULL);
    std::uint32_t hi = static_cast<std::uint32_t>((bits & 0xFFFFFFFF00000000ULL) >> 32);
    return 0;
}
I don't think this is what you are trying to do. If you instead want the fractional part of the double and the whole part you will need to try something else.
I'm not sure what you are trying to accomplish, same as what Dylan mentioned.
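As an aside, if the goal really is the whole and fractional parts rather than the raw bytes, std::modf splits a double directly; a minimal sketch (the value is just illustrative):

#include <cmath>
#include <iostream>

int main() {
    double x = 1111.2222;
    double whole;
    double frac = std::modf(x, &whole);  // x == whole + frac
    std::cout << "whole: " << whole << " frac: " << frac << "\n";
}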
If you want to rip apart a double (assuming an IEEE 754 double), you could do this (I'm using ANSI escape sequences, which may be unsuitable for your environment). I've got both "ripping apart into hex bytes" and "ripping apart into sign / exponent / fraction":
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <iomanip>
#include <iostream>
#include <limits>
#include <sstream>
#include <string>

using std::cout;
using std::fpclassify;
using std::memcpy;
using std::nan;
using std::numeric_limits;
using std::reverse;
using std::setw;
using std::size_t;
using std::string;
using std::stringstream;
using std::uint32_t;
using std::uint64_t;

namespace {

uint32_t low32_from(double d) {
    char const* p = reinterpret_cast<char const*>(&d);
    uint32_t result;
    memcpy(&result, p, sizeof result);
    return result;
}

uint32_t high32_from(double d) {
    char const* p = reinterpret_cast<char const*>(&d);
    p += 4;
    uint32_t result;
    memcpy(&result, p, sizeof result);
    return result;
}

string hexstr(uint32_t value) {
    char hex[] = "0123456789ABCDEF";
    unsigned char buffer[4];
    memcpy(buffer, &value, sizeof buffer);
    auto p = &buffer[0];
    stringstream ss;
    char const* sep = "";
    for (size_t i = 0; i < sizeof buffer; ++i) {
        ss << sep << hex[(*p >> 4) & 0xF] << hex[*p & 0xF];
        sep = " ";
        ++p;
    }
    return ss.str();
}

string bits(uint64_t v, size_t len) {
    string s;
    int group = 0;
    while (len--) {
        if (group == 4) { s.push_back('\''); group = 0; }
        s.push_back(v & 1 ? '1' : '0');
        v >>= 1;
        ++group;
    }
    reverse(s.begin(), s.end());
    return s;
}

string doublebits(double d) {
    auto dx = fpclassify(d);
    unsigned char buffer[8];
    memcpy(buffer, &d, sizeof buffer);
    stringstream ss;
    uint64_t s = (buffer[7] >> 7) & 0x1;
    uint64_t e = ((buffer[7] & 0x7FU) << 4) | ((buffer[6] >> 4) & 0xFU);
    uint64_t f = buffer[6] & 0xFU;
    f = (f << 8) + (buffer[5] & 0xFFU);
    f = (f << 8) + (buffer[4] & 0xFFU);
    f = (f << 8) + (buffer[3] & 0xFFU);
    f = (f << 8) + (buffer[2] & 0xFFU);
    f = (f << 8) + (buffer[1] & 0xFFU);
    f = (f << 8) + (buffer[0] & 0xFFU);
    ss << "sign:\033[0;32m" << bits(s, 1) << "\033[0m ";
    if (s) ss << "(-) ";
    else ss << "(+) ";
    ss << "exp:\033[0;33m" << bits(e, 11) << "\033[0m ";
    ss << "(" << setw(5) << (static_cast<int>(e) - 1023) << ") ";
    ss << "frac:";
    // 'i' for implied 1 bit, '.' for not applicable (so things align correctly).
    if (dx == FP_NORMAL) ss << "\033[0;34mi";
    else ss << "\033[0;37m.\033[34m";
    ss << bits(f, 52) << "\033[0m";
    if (dx == FP_INFINITE) ss << " \033[35mInfinite\033[0m";
    else if (dx == FP_NAN) ss << " \033[35mNot-A-Number\033[0m";
    else if (dx == FP_NORMAL) ss << " \033[35mNormal\033[0m";
    else if (dx == FP_SUBNORMAL) ss << " \033[35mDenormalized\033[0m";
    else if (dx == FP_ZERO) ss << " \033[35mZero\033[0m";
    ss << " " << d;
    return ss.str();
}

} // anon

int main() {
    auto lo = low32_from(1111.2222);
    auto hi = high32_from(1111.2222);
    cout << hexstr(lo) << "\n";
    cout << hexstr(hi) << "\n";
    cout << doublebits(1111.2222) << "\n";
    cout << doublebits(1.0) << "\n";
    cout << doublebits(-1.0) << "\n";
    cout << doublebits(+0.0) << "\n";
    cout << doublebits(-0.0) << "\n";
    cout << doublebits(numeric_limits<double>::infinity()) << "\n";
    cout << doublebits(-numeric_limits<double>::infinity()) << "\n";
    cout << doublebits(nan("")) << "\n";
    double x = 1.0;
    while (x > 0.0) {
        cout << doublebits(x) << "\n";
        x = x / 2.0;
    }
}
The ability to cast a pointer and use the freshly-cast pointer to perform type punning "in a documented fashion characteristic of the environment" is one the Standards Committee referred to as a "popular extension" [see page 11 of the Rationale]. The Standard deliberately refrains from requiring such support, so that compilers which are not intended to be suitable for low-level programming tasks (such as decomposing 64-bit floating-point values directly into 32-bit chunks) need not provide it; instead, it allows compilers to support such constructs or not, based upon customers' needs or any other criteria the compiler writers see fit, and it allows such constructs to be used in programs that seek merely to be "conforming" rather than "strictly conforming".
Implementations that make a bona fide effort to be maximally suitable for low-level programming tasks will support that extension by processing such constructs "in a documented fashion characteristic of the environment". GCC is issuing a warning because it is not configured to be suitable for such usage; adding the compilation flag -fno-strict-aliasing will configure it properly and thus eliminate the warning.
While compilers designed for low-level programming need not guarantee that all possible situations involving type punning will be processed in the manner implied by object representations, they should have no difficulty supporting situations where code casts a pointer and immediately dereferences it as the new type for the purpose of accessing an object of the original type. Implementations which do not seek to be maximally suitable for low-level programming, however, may require the use of clunkier constructs which, depending upon the target platform, may or may not be processed as efficiently. When invoked without the -fno-strict-aliasing flag, the optimizers of clang and gcc fall into the latter category.
As a general principle, optimizations that assume a program won't do X may be useful in cases where there is no need to do X, but counter-productive for code whose purpose could best be described using X. If a program would benefit from being able to use low-level type punning constructs, documenting that it is only suitable for use with compiler configurations that support them is better than trying to work around their absence, especially given that clang and gcc don't reliably uphold the Standard in corner cases that would arise when trying to work around their lack of low-level programming support.
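If C++20 is available, std::bit_cast sidesteps the aliasing question entirely for this task: it copies the object representation, so there is no pointer cast for the optimizer to second-guess. A minimal sketch (the value is just illustrative):

#include <bit>       // std::bit_cast (C++20)
#include <cstdint>
#include <iostream>

int main() {
    double x = 1111.2222;
    auto bits = std::bit_cast<std::uint64_t>(x);                 // well-defined type punning
    std::uint32_t lo = static_cast<std::uint32_t>(bits);         // low 32 bits
    std::uint32_t hi = static_cast<std::uint32_t>(bits >> 32);   // high 32 bits
    std::cout << std::hex << hi << " " << lo << "\n";
}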

Why can't I pack these ints together?

I have the following code. The goal is to combine the two uint32_ts into a single uint64_t and then retrieve the values.
#include <iostream>
#include <cstdint>

int main()
{
    uint32_t first = 5;
    uint32_t second = 6;
    uint64_t combined = (first << 32) | second;
    uint32_t firstR = combined >> 32;
    uint32_t secondR = combined & 0xffffffff;
    std::cout << "F: " << firstR << " S: " << secondR << std::endl;
}
It outputs
F: 0 S: 7
How do I successfully retrieve the values correctly?
first is a 32-bit type and you bit-shift it by 32 bits. This is technically undefined behaviour; in practice, x86 masks the shift count, so first << 32 usually leaves first unchanged, which is why second reads back as 5 | 6 = 7 and the high half is 0. You need to cast it to a larger type before bit-shifting it.
uint64_t combined = (static_cast<uint64_t>(first) << 32) | second;
When you perform first << 32, you are shifting a 32-bit value by its full width, so none of its bits can survive in a 32-bit result (formally, the behaviour is undefined). You need to convert the first value to 64 bits before you shift it:
uint64_t combined = (uint64_t(first) << 32) | second;
As per the comments:
#include <iostream>
#include <cstdint>

int main()
{
    uint32_t first = 5;
    uint32_t second = 6;
    uint64_t combined = (uint64_t(first) << 32) | second;
    uint32_t firstR = combined >> 32;
    uint32_t secondR = combined & 0xffffffff;
    std::cout << "F: " << firstR << " S: " << secondR << std::endl;
}
The shift operators yield the (promoted) type of their left operand, so first << 32 is evaluated in just 32 bits. Cast first to uint64_t so the result has room for the second value.
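Pulling the answers together, the pattern generalizes into a pair of helpers. A small sketch (the helper names are mine, not from the question):

#include <cstdint>
#include <iostream>

// Pack two 32-bit values into one 64-bit value, and pull them back out.
constexpr std::uint64_t pack(std::uint32_t hi, std::uint32_t lo) {
    return (static_cast<std::uint64_t>(hi) << 32) | lo;  // widen *before* shifting
}
constexpr std::uint32_t unpack_hi(std::uint64_t v) { return static_cast<std::uint32_t>(v >> 32); }
constexpr std::uint32_t unpack_lo(std::uint64_t v) { return static_cast<std::uint32_t>(v); }

static_assert(unpack_hi(pack(5, 6)) == 5);
static_assert(unpack_lo(pack(5, 6)) == 6);

int main() {
    std::cout << "F: " << unpack_hi(pack(5, 6)) << " S: " << unpack_lo(pack(5, 6)) << "\n";
}

Because the helpers are constexpr, the static_asserts verify the round trip at compile time.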

How to print the binary value of negative numbers? [duplicate]

I'm following a college course about operating systems and we're learning how to convert from binary to hexadecimal, decimal to hexadecimal, etc. and today we just learned how signed/unsigned numbers are stored in memory using the two's complement (~number + 1).
We have a couple of exercises to do on paper and I would like to be able to verify my answers before submitting my work to the teacher. I wrote a C++ program for the first few exercises but now I'm stuck as to how I could verify my answer with the following problem:
char a, b;
short c;
a = -58;
c = -315;
b = a >> 3;
and we need to show the binary representation in memory of a, b and c.
I've done it on paper and it gives me the following results (all the binary representations in memory of the numbers after the two's complement):
a = 00111010 (it's a char, so 1 byte)
b = 00001000 (it's a char, so 1 byte)
c = 11111110 11000101 (it's a short, so 2 bytes)
Is there a way to verify my answer? Is there a standard way in C++ to show the binary representation in memory of a number, or do I have to code each step myself (calculate the two's complement and then convert to binary)? I know the latter wouldn't take long, but I'm curious as to whether there is a standard way to do so.
The easiest way is probably to create an std::bitset representing the value, then stream that to cout.
#include <bitset>
...
char a = -58;
std::bitset<8> x(a);
std::cout << x << '\n';
short c = -315;
std::bitset<16> y(c);
std::cout << y << '\n';
Use on-the-fly conversion to std::bitset. No temporary variables, no loops, no functions, no macros.
Live On Coliru
#include <iostream>
#include <bitset>

int main() {
    int a = -58, b = a >> 3, c = -315;
    std::cout << "a = " << std::bitset<8>(a) << std::endl;
    std::cout << "b = " << std::bitset<8>(b) << std::endl;
    std::cout << "c = " << std::bitset<16>(c) << std::endl;
}
Prints:
a = 11000110
b = 11111000
c = 1111111011000101
In C++20 you can use std::format to do this:
#include <format>

unsigned char a = -58;
std::cout << std::format("{:b}", a);
Output:
11000110
On older systems you can use the {fmt} library, which std::format is based on. {fmt} also provides the print function, which makes this even easier and more efficient (godbolt):
unsigned char a = -58;
fmt::print("{:b}", a);
Disclaimer: I'm the author of {fmt} and C++20 std::format.
If you want to display the bit representation of any object, not just an integer, remember to reinterpret as a char array first, then you can print the contents of that array, as hex, or even as binary (via bitset):
#include <iostream>
#include <bitset>
#include <climits>

template<typename T>
void show_binrep(const T& a)
{
    const char* beg = reinterpret_cast<const char*>(&a);
    const char* end = beg + sizeof(a);
    while (beg != end)
        std::cout << std::bitset<CHAR_BIT>(*beg++) << ' ';
    std::cout << '\n';
}

int main()
{
    char a, b;
    short c;
    a = -58;
    c = -315;
    b = a >> 3;
    show_binrep(a);
    show_binrep(b);
    show_binrep(c);
    float f = 3.14;
    show_binrep(f);
}
Note that most common systems are little-endian, so the output of show_binrep(c) is not the 11111110 11000101 you expect, because that's not how it's stored in memory. If you're looking for the value representation in binary, then a simple cout << bitset<16>(c) works.
Is there a standard way in C++ to show the binary representation in memory of a number [...]?
No. There's no std::bin, like std::hex or std::dec, but it's not hard to output a number in binary yourself:
You output the left-most bit by masking all the others, then left-shift, and repeat for all the bits you have, as sketched below.
(The number of bits in a type is sizeof(T) * CHAR_BIT.)
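A minimal rendering of that description (the function name is mine):

#include <climits>
#include <iostream>
#include <type_traits>

// Print the bits of any integer type, most significant bit first.
template <typename T>
void print_binary(T value) {
    auto u = static_cast<std::make_unsigned_t<T>>(value);  // unsigned copy: shifts are well defined
    for (int i = sizeof(T) * CHAR_BIT - 1; i >= 0; --i)
        std::cout << ((u >> i) & 1u);
    std::cout << '\n';
}

int main() {
    print_binary(static_cast<char>(-58));    // 11000110
    print_binary(static_cast<short>(-315));  // 1111111011000101
}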
Similar to what is already posted, just using bit-shift and mask to get each bit; usable for any type, being a template. (CHAR_BIT from <climits> gives the number of bits in a byte, so nothing is hard-coded to 8.)
#include <iostream>
#include <climits>

template<typename T>
void printBin(const T& t) {
    size_t nBytes = sizeof(T);
    const char* rawPtr = reinterpret_cast<const char*>(&t);
    for (size_t byte = 0; byte < nBytes; byte++) {
        for (size_t bit = 0; bit < CHAR_BIT; bit++) {
            std::cout << ((rawPtr[byte] >> bit) & 1);
        }
    }
    std::cout << std::endl;
}

int main(void) {
    for (int i = 0; i < 50; i++) {
        std::cout << i << ": ";
        printBin(i);
    }
}
Reusable function:
#include <bitset>
#include <sstream>
#include <string>

template<typename T>
static std::string toBinaryString(const T& x)
{
    std::stringstream ss;
    ss << std::bitset<sizeof(T) * 8>(x);
    return ss.str();
}
Usage:
int main() {
    uint16_t x = 8;
    std::cout << toBinaryString(x);
}
This works with all kinds of integers.
#include <iostream>
#include <string>
#include <cmath> // in order to use the pow() function
using namespace std;

string show_binary(unsigned int u, int num_of_bits);

int main()
{
    cout << show_binary(128, 8) << endl;  // should print 10000000
    cout << show_binary(128, 5) << endl;  // should print 00000
    cout << show_binary(128, 10) << endl; // should print 0010000000
    return 0;
}

string show_binary(unsigned int u, int num_of_bits)
{
    string a = "";
    unsigned int t = pow(2, num_of_bits); // one past the highest representable value
    u %= t;                               // keep only the low num_of_bits bits
    for (t /= 2; t > 0; t /= 2)           // t iterates down through the powers of 2
        if (u >= t) {                     // check if bit t is set in u
            u -= t;
            a += "1";                     // if so, add a 1
        }
        else {
            a += "0";                     // if not, add a 0
        }
    return a; // returns string
}
Using the std::bitset answers and convenience templates:
#include <iostream>
#include <bitset>
#include <climits>

template<typename T>
struct BinaryForm {
    BinaryForm(const T& v) : _bs(v) {}
    const std::bitset<sizeof(T) * CHAR_BIT> _bs;
};

template<typename T>
inline std::ostream& operator<<(std::ostream& os, const BinaryForm<T>& bf) {
    return os << bf._bs;
}
Using it like this:
auto c = 'A';
std::cout << "c: " << c << " binary: " << BinaryForm{c} << std::endl;
unsigned x = 1234;
std::cout << "x: " << x << " binary: " << BinaryForm{x} << std::endl;
int64_t z { -1024 };
std::cout << "z: " << z << " binary: " << BinaryForm{z} << std::endl;
Generates output:
c: A binary: 01000001
x: 1234 binary: 00000000000000000000010011010010
z: -1024 binary: 1111111111111111111111111111111111111111111111111111110000000000
With older C++ versions, you can use this snippet:
#include <iostream>
#include <string>
using namespace std;

template<typename T>
string toBinary(const T& t)
{
    string s = "";
    int n = sizeof(T) * 8;
    for (int i = n - 1; i >= 0; i--)
    {
        s += (t & (1 << i)) ? "1" : "0";
    }
    return s;
}

int main()
{
    char a, b;
    short c;
    a = -58;
    c = -315;
    b = a >> 3;
    cout << "a = " << a << " => " << toBinary(a) << endl;
    cout << "b = " << b << " => " << toBinary(b) << endl;
    cout << "c = " << c << " => " << toBinary(c) << endl;
}
a = => 11000110
b = => 11111000
c = -315 => 1111111011000101
I have had this problem when playing competitive coding games online. Here is a solution that is quick to implement and fairly intuitive. It also avoids printing leading zeros and doesn't rely on <bitset>:
unsigned r = 315;                  // value to print; this declaration is assumed, the original snippet left it out
std::string s;
do {
    s = std::to_string(r & 1) + s; // prepend the lowest bit
} while (r >>= 1);
std::cout << s;
Note, however, that this solution will increase your runtime, so if you are competing on speed, or not competing at all, you should use one of the other solutions on this page.
Here is a way to get at the raw binary representation of a number, though note that this pointer cast is exactly the kind of type punning that strict-aliasing rules disallow (memcpy is the well-defined equivalent):
unsigned int i = *(unsigned int*) &x;
Is this what you're looking for?
std::cout << std::hex << val << std::endl;
