Pad leading zeroes in multiples of x, using std::cout << std::hex - c++

There are many questions regarding how to pad a fixed number of leading zeroes when using C++ streams with variables we want represented in hexadecimal format:
std::cout << std::hex << std::setfill('0') << std::setw(8) << myByteSizedVar;
My question is how to do this not for a fixed width, but for a multiple of some fixed amount - likely 8, for the obvious reason that when comparing outputs we might want:
0x87b003a
0xab07
to match up in width so they can be compared more easily (okay, the top one is larger - but try a bitwise comparison in your head? Easily confused.)
0x87b003a
0x000ab07
Nice, the two values line up neatly. Except we only see 7 hex digits - which is not immediately obvious (especially if it were 15/16, 31/32, etc.), possibly causing us to mistake the top one for a negative number (assuming signed).
All well and good, we can set the width to 8 as above.
However, when making the comparison next to say a 32-bit word:
0x000000000000000000000000087b003a
0x238bfa700af26fa930b00b130b3b022b
the padding may be unnecessary, depending on the use, or even misleading if dealing with hardware where the top number actually has no context in which to be a 32-bit word.
What I would like, is to automagically set the width of the number to be a multiple of 8, like:
std::cout << std::hex << std::setfill('0') << setWidthMultiple(8) << myVar;
For results like:
0x00000000
0x388b7f62
0x0000000f388b7f62
How is this possible with standard libraries, or a minimal amount of code? Without something like Boost.Format.

How about this:
#include <iomanip>      // std::setw
#include <ostream>      // std::basic_ostream
#include <cstddef>      // std::size_t
#include <type_traits>  // std::is_unsigned

template <typename Int>
struct Padme
{
    static_assert(std::is_unsigned<Int>::value, "Unsigned ints only");

    Int n_;

    explicit Padme(Int n) : n_(n) {}

    template <typename Char, typename CTraits>
    friend std::basic_ostream<Char, CTraits>& operator<<(
        std::basic_ostream<Char, CTraits>& os, Padme p)
    {
        return os << std::setw(ComputeWidth(p.n_)) << p.n_;
    }

    static std::size_t ComputeWidth(Int n)
    {
        if (n < 0x10000)         { return 4; }
        if (n < 0x100000000)     { return 8; }
        if (n < 0x1000000000000) { return 12; }
        return 16;
    }
};

template <typename Int>
Padme<Int> pad(Int n) { return Padme<Int>(n); }
Usage:
std::cout << pad(129u) << "\n";
With some more work you could provide versions with different digit group sizes, different number bases etc. And for signed types you could stick std::make_unsigned into the pad function template.
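For instance, a hedged sketch of that signed-type extension (my code, not from the original answer; pad_signed is a made-up name):

#include <type_traits>  // std::make_unsigned

// Sketch only: route signed values through their unsigned counterpart so
// the static_assert in Padme still holds. Negative values wrap to the
// corresponding two's-complement bit pattern, which is usually what you
// want when printing hex.
template <typename Int>
Padme<typename std::make_unsigned<Int>::type> pad_signed(Int n)
{
    using UInt = typename std::make_unsigned<Int>::type;
    return Padme<UInt>(static_cast<UInt>(n));
}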

There's no immediate support for it, because it involves more than one input value (if I understand what you're asking for correctly). The only solution which comes to mind is to use std::max to find the largest value, and then derive the number of digits you need from that, say by using a table:
struct DigitCount
{
    unsigned long long maxValue;
    int digits;
};

static const DigitCount digitCount[] =
{
    {        255ULL, 2 },
    {      65535ULL, 4 },
    {   16777215ULL, 6 },
    { 4294967295ULL, 8 },
};
and looking up the width in it.
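A possible lookup helper built on that table (my sketch, not part of the original answer; widthFor is a made-up name and assumes the digitCount array above is in scope):

// Scan the table for the first entry that can hold the value;
// anything above the last entry gets the widest field.
int widthFor(unsigned long long value)
{
    for (const DigitCount& dc : digitCount)
        if (value <= dc.maxValue)
            return dc.digits;
    return 16; // 64-bit fallback, wider than the table covers
}

// Usage: compute one width from the largest value, apply it to all of them:
//   int w = widthFor(std::max(a, b));
//   std::cout << std::hex << std::setfill('0') << std::setw(w) << a << '\n'
//             << std::setw(w) << b << '\n';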

Related

How to extract a subset of a bitset larger than unsigned long long without using to_string or extracting bits one-by-one?

I want to extract a subset from a given bitset, where:
both G (given bitset size) and S (subset size) are known at compile-time
G > sizeof(unsigned long long) * 8
S <= sizeof(int) * 8
The subset may start anywhere in the given bitset
The resulting subset should be a bitset of the correct size, S
I can isolate the subset using bitwise math, but then bitset.to_ullong does not work because of an overflow error.
I know I can use bitset.to_string and do string manipulations/conversion. What I'm currently doing is extracting bits one-by-one manually (this is enough for my use case).
I'm just interested to know if there are other ways of doing this.
std::bitset has bit-shift operators, so for example you can write a function to place the bits [begin, end) (counting string positions from the most significant end) at the front:
template <size_t size>
std::bitset<size> extract_bits(std::bitset<size> x, int begin, int end) {
    // Drop everything from string position `end` onwards...
    (x >>= (size - end)) <<= (size - end);
    // ...then shift the kept range so it starts at position 0.
    x <<= begin;
    return x;
}
For example to get the bits 1 and 2:
#include <bitset>
#include <iostream>

int main()
{
    std::bitset<6> x{12};
    std::cout << x << "\n";
    std::cout << extract_bits(x, 1, 3);
}
Output:
001100
010000
If you know begin and end at compile-time you can get a std::bitset containing only the desired bits:
template <size_t begin, size_t end, size_t in>
std::bitset<end - begin> crop(const std::bitset<in>& x) {
    return std::bitset<end - begin>{ x.to_string().substr(begin, end - begin) };
}
auto c = crop<1,3>(x);
std::cout << c; // 01
If the bitset is too large to fit in an unsigned long long, I don't know another way that avoids to_string() or looping over individual bits.
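That said, here is one more hedged possibility (my sketch, not from the original answers): when the subset itself fits in 64 bits, you can shift the wanted bits down to the least-significant end and mask before calling to_ullong, so the overflow never occurs. Note that begin here is a bit index counted from the least-significant bit, not a string position, and extract_lsb is a made-up name:

#include <bitset>
#include <iostream>

template <size_t S, size_t G>
std::bitset<S> extract_lsb(const std::bitset<G>& x, size_t begin) {
    static_assert(S <= 64, "subset must fit in unsigned long long");
    std::bitset<G> t = x >> begin;           // wanted bits now start at bit 0
    t &= std::bitset<G>{}.set() >> (G - S);  // keep only the lowest S bits
    return std::bitset<S>{ t.to_ullong() };  // safe: the value is below 2^S
}

int main() {
    std::bitset<100> big;
    big.set(70);
    big.set(71);
    std::cout << extract_lsb<4>(big, 70) << "\n"; // 0011
}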

How to print the decimal value of a `std::bitset` bigger than a `unsigned long long`?

I'm programming a class to handle integers with more bits than an unsigned long long, backed by a bitset.
#include <bitset>
#include <string>

#define MAX_BITS 32000

class RossInt {
    std::bitset<MAX_BITS> bits;
public:
    RossInt(unsigned long long num);  /* Copies the bits of num into the bitset */
    RossInt operator+(const RossInt& op) const;
    std::string to_string() const;    /* Returns the number in decimal in a string */
};
Since the number is bigger than an unsigned long long, I'll put the decimal representation in a string. The issue is that I can't take the usual route of decimal_n += pow(2, bit_index), because I can't store the result in any variable, and I want to use as few external libraries as possible.
Is there a way for converting it using bitwise operators or any other way?
After losing two nights on it, I've finally found a way of printing it. It's not the most efficient way, and I'm going to rewrite it until I'm satisfied with the speed.
// Assumes RossInt also provides operator/, operator%, operator*, integer
// comparison, and a to_ullong() for values that fit in unsigned long long.
// Note: a value of 0 never enters the while loop, so the first i / 10 would
// be 0 and the division would be by zero - handle that case separately.
std::ostream& operator<<(std::ostream& os, const RossInt& value) {
    RossInt op = value;
    RossInt i = 1;
    while (op / i > 0) i = i * 10;  // overshoot: one power of ten above op
    do {
        i = i / 10;
        os << (op / i).to_ullong(); // print the current leading digit
        op = op % i;                // strip that digit off
    } while (i > 1);
    return os;
}
The logic used in this code is pretty simple: integer division by a power of ten chops that many digits off the bottom of a number, and the modulo by the same power keeps them. Using this property I wrote code that prints the number digit by digit.
The first line copies the number to avoid changes in the value.
RossInt i = 1;
while (op / i > 0) i = i * 10;
The initial value of i is 1, the multiplicative identity. The while loop keeps multiplying i by 10 until it exceeds the value of the number, which tells us how many decimal digits the number has without actually converting it to decimal.
do {
    i = i / 10;
    os << (op / i).to_ullong();
    op = op % i;
} while (i > 1);
This loop does the actual printing. With every iteration i loses a zero. Dividing op by i removes as many digits from the bottom of op as i has zeros, e.g. 12345 / 100 = 123. The modulo right after does the exact opposite: it keeps that many bottom digits and discards the rest, e.g. 12345 % 100 = 45. Each pass therefore prints one digit and then removes it, working through all of them.
What you are looking for is a Binary->BCD (Binary Coded Decimal) solution. Once you have a BCD, printing it is easy.
Given how heavily BCD math is used, I tried looking for a standard solution, but could not find one.
This example class should help. It's not traditional BCD representation, but it's easier to follow.
#include <algorithm>   // std::transform, std::for_each
#include <functional>  // std::plus
#include <ostream>
#include <vector>

class BCD
{
public:
    // function to make the "root" value.
    static BCD One() { BCD retval; retval.digits = {1}; return retval; }

    // we will generate all powers of two by doubling from one
    void Double()
    {
        for (auto& d : digits) d *= 2;
        Normalise();
    }

    // We can generate all other numbers by combining powers of two.
    void operator+=(const BCD& other)
    {
        if (other.digits.size() > digits.size())
            digits.resize(other.digits.size(), 0);
        // Iterate over other's digits (never past its end); digits is at
        // least as long after the resize above.
        std::transform(other.digits.begin(), other.digits.end(),
                       digits.begin(), digits.begin(), std::plus<char>{});
        Normalise();
    }

    friend inline std::ostream& operator<<(std::ostream&, const BCD&);

private:
    // "normal form" is all digits 0-9; if required, carry the one
    void Normalise()
    {
        int carry = 0;
        for (auto& d : digits) {
            d += carry;
            carry = d / 10;
            d = d % 10;
        }
        if (carry) digits.push_back(carry);
    }

    std::vector<char> digits;  // least significant digit first
};

inline std::ostream& operator<<(std::ostream& os, const BCD& bcd)
{
    std::for_each(bcd.digits.rbegin(), bcd.digits.rend(),
                  [&os](char d) { os << (char)(d + '0'); });
    return os;
}
Using this class it should be simple to build a BCD from a bitset. For example
#include <iostream>

int main()
{
    BCD power_of_two = BCD::One();
    BCD total;
    for (int i = 0; i < 10; ++i) {
        total += power_of_two;
        std::cout << power_of_two << " total = " << total << "\n";
        power_of_two.Double();
    }
    return 0;
}
produces
1 total = 1
2 total = 3
4 total = 7
8 total = 15
16 total = 31
32 total = 63
64 total = 127
128 total = 255
256 total = 511
512 total = 1023
All you have to do is make the "+=" conditional on the corresponding bit in the set. I'm not claiming this is the best way to do it, but it should be good enough to test your results.
Hope it helps
----------------- Update after comments -----------------
It has been pointed out that my solution is not a complete solution.
In the spirit of being helpful I will provide how you can use the BCD class above to build a "to_string" function.
If you don't understand this you'll have to read up on number theory. You have to do it this way because 10 = 5 * 2; 5 is odd, and therefore there is no 10^x = 2^y for positive integers x and y. In practice, this means that the first digit of the decimal representation depends on every digit of the binary representation. You must determine it by adding up all the powers of two, so you may as well generate the whole number using the same logic.
Please be advised though that stack overflow exists to help programmers who have got stuck, not to do other peoples work for them. Please do not expect to have other people post working programs for you.
std::string to_string() const  // needs <sstream> for std::ostringstream
{
    BCD power_of_two = BCD::One();
    BCD total;
    for (std::size_t idx = 0; idx < bits.size(); ++idx) {
        if (bits.test(idx))
            total += power_of_two;  // add 2^idx only when the bit is set
        power_of_two.Double();
    }
    std::ostringstream os;
    os << total;
    return os.str();
}

C++ - Getting size in bits of integer

I need to know whether an integer is 32 bits long or not; that is, whether its value takes exactly 32 bits (8 hexadecimal characters) to represent. How could I achieve this in C++? Should I work with the hexadecimal representation or with the unsigned int one?
My code is as follows:
mistream.open("myfile.txt");
if (mistream)
{
    for (int i = 0; i < longArray; i++)
    {
        mistream >> hex >> datos[i];
    }
}
mistream.close();
Where mistream is of type ifstream, and datos is an unsigned int array
Thank you
std::numeric_limits<unsigned>::digits
is a static integer constant (or constexpr in C++11) giving the number of bits (since unsigned is stored in base 2, it gives binary digits).
You need to #include <limits> to get this, and you'll notice here that this gives the same value as Thomas' answer (while also being generalizable to other primitive types).
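A minimal sketch of what those constants report (my example; the printed values are typical, not guaranteed):

#include <iostream>
#include <limits>

int main()
{
    // digits counts value bits only, so the sign bit of signed types is excluded
    std::cout << std::numeric_limits<unsigned>::digits << "\n";      // typically 32
    std::cout << std::numeric_limits<int>::digits << "\n";           // typically 31
    std::cout << std::numeric_limits<unsigned char>::digits << "\n"; // typically 8
}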
For reference (you changed your question after I answered): every integer of a given type (e.g., unsigned) in a given program is exactly the same size.
What you're now asking is not the size of the integer in bits, because that never varies, but whether the top bit is set. You can test this trivially with
bool isTopBitSet(uint32_t v) {
    return v & 0x80000000u;
}
(replace the unsigned hex literal with something like T{1} << (std::numeric_limits<T>::digits-1) if you want to generalise to unsigned T other than uint32_t).
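Spelling that hint out, a hedged sketch of the generalised test (my code, following the recipe above):

#include <limits>
#include <type_traits>

// Sketch: build the top-bit mask from numeric_limits so the test works
// for any unsigned integer type T, not just uint32_t.
template <typename T>
bool isTopBitSet(T v) {
    static_assert(std::is_unsigned<T>::value, "unsigned types only");
    return (v & (T{1} << (std::numeric_limits<T>::digits - 1))) != 0;
}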
As already hinted in a comment by #chux, you can use a combination of the sizeof operator and the CHAR_BIT macro constant. The former tells you (at compile-time) the size (in multiples of sizeof(char) aka bytes) of its argument type. The latter is the number of bits to the byte (usually 8).
You can encapsulate this nicely into a function template.
#include <climits>  // CHAR_BIT
#include <cstddef>  // std::size_t
#include <iostream> // std::cout, std::endl

template <typename T>
constexpr std::size_t
bit_size() noexcept
{
    return sizeof(T) * CHAR_BIT;
}

int
main()
{
    std::cout << bit_size<int>() << std::endl;
    std::cout << bit_size<long>() << std::endl;
}
On my implementation, it outputs 32 and 64.
Since the function is constexpr, you can use it in static contexts, such as in static_assert(bit_size<int>() >= 32, "too small");.
Try this:
#include <climits>
unsigned int bits_per_byte = CHAR_BIT;
unsigned int bits_per_integer = CHAR_BIT * sizeof(int);
The identifier CHAR_BIT represents the number of bits in a char.
The sizeof operator returns the number of char-sized locations occupied by the integer.
Multiplying them gives us the number of bits for an integer.
OP said "if it's exactly 32 bits long (8 hexadecimal characters)" and later clarified ".. interested in knowing if the value is between power(2, 31) and power(2, 32) - 1". So it is a little fuzzy on negative 32-bit numbers.
Certainly OP wants to know the result based on the value and not the type.
#include <climits>  // INT_MAX

bool integer_is_32_bits_long(int x) {
    return
        // int is exactly 32 bits: only negative values have bit 31 set
        ((INT_MAX == 0x7FFFFFFF) && (x < 0)) ||
        // int is wider than 32 bits: test the value range directly
        ((INT_MAX > 0x7FFFFFFF) && (x >= 0x80000000) && (x <= 0xFFFFFFFF));
}
Of course if int is 16-bit, then the result is always false.
I want to know if it's exactly 32 bits long (8 hexadecimal characters)
I am interested in knowing if the value is between power(2, 31) and power(2, 32) - 1
So you want to know if the upper bit is set? Then you can simply test if the number is negative:
bool upperBitSet(int x)
{
    return x < 0;
}
For unsigned numbers, you can simply shift left and back right and then check if you lost data:
bool upperBitSet(unsigned x)
{
    return (x << 1 >> 1) != x;
}
The simplest way probably is to check if the 32nd bit is set:
bool isReally32bitsLong(uint32_t in) {
    return (in >> 31) != 0;
}

bool isExactly32BitsLong(uint64_t in) {
    return ((in >> 31) != 0) && ((in >> 32) == 0);
}

How to get array of bits in a structure?

I was pondering (and therefore am looking for a way to learn this, and not a better solution) if it is possible to get an array of bits in a structure.
Let me demonstrate by an example. Imagine such a code:
#include <stdio.h>

struct A
{
    unsigned int bit0:1;
    unsigned int bit1:1;
    unsigned int bit2:1;
    unsigned int bit3:1;
};

int main()
{
    struct A a = {1, 0, 1, 1};
    printf("%u\n", a.bit0);
    printf("%u\n", a.bit1);
    printf("%u\n", a.bit2);
    printf("%u\n", a.bit3);
    return 0;
}
In this code, we have 4 individual bits packed in a struct. They can be accessed individually, leaving the job of bit manipulation to the compiler. What I was wondering is if such a thing is possible:
#include <stdio.h>

typedef unsigned int bit:1;   /* hypothetical - not valid C */

struct B
{
    bit bits[4];
};

int main()
{
    struct B b = {{1, 0, 1, 1}};
    for (int i = 0; i < 4; ++i)   /* i was undeclared in the original snippet */
        printf("%u\n", b.bits[i]);
    return 0;
}
I tried declaring bits in struct B as unsigned int bits[4]:1 or unsigned int bits:1[4] or similar things, to no avail. My best guess was to typedef unsigned int bit:1; and use bit as the type, yet it still doesn't work.
My question is, is such a thing possible? If yes, how? If not, why not? The 1 bit unsigned int is a valid type, so why shouldn't you be able to get an array of it?
Again, I don't want a replacement for this, I am just wondering how such a thing is possible.
P.S. I am tagging this as C++, although the code is written in C, because I assume the method would be existent in both languages. If there is a C++ specific way to do it (by using the language constructs, not the libraries) I would also be interested to know.
UPDATE: I am completely aware that I can do the bit operations myself. I have done it a thousand times in the past. I am NOT interested in an answer that says use an array/vector instead and do bit manipulation. I am only thinking if THIS CONSTRUCT is possible or not, NOT an alternative.
Update: Answer for the impatient (thanks to neagoegab):
Instead of
typedef unsigned int bit:1;
I could use
typedef struct
{
    unsigned int value:1;
} bit;
properly using #pragma pack
NOT POSSIBLE - A construct like that IS NOT possible (here) - NOT POSSIBLE
One could try to do this, but the result will be that one bit is stored in one byte
#include <cstdint>
#include <iostream>

using namespace std;

#pragma pack(push, 1)
struct Bit
{
    // one bit is stored in one BYTE
    uint8_t a_:1;
};
#pragma pack(pop)

typedef Bit bit;

struct B
{
    bit bits[4];
};

int main()
{
    struct B b = {{0, 0, 1, 1}};
    for (int i = 0; i < 4; ++i)
        cout << (int)b.bits[i].a_ << endl;  // cast so uint8_t prints as a number
    cout << sizeof(Bit) << endl;
    cout << sizeof(B) << endl;
    return 0;
}
output:
0 // bits[0] value
0 // bits[1] value
1 // bits[2] value
1 // bits[3] value
1 // sizeof(Bit) - one bit is stored in one byte!
4 // sizeof(B) - 4 bytes, each bit stored in one byte
In order to access individual bits from a byte here is an example (Please note that the layout of the bitfields is implementation dependent)
#include <cstdint>
#include <iostream>

using namespace std;

#pragma pack(push, 1)
struct Byte
{
    Byte(uint8_t value):
        _value(value)
    {
    }

    union
    {
        uint8_t _value;
        struct {  // anonymous struct: a common compiler extension
            uint8_t _bit0:1;
            uint8_t _bit1:1;
            uint8_t _bit2:1;
            uint8_t _bit3:1;
            uint8_t _bit4:1;
            uint8_t _bit5:1;
            uint8_t _bit6:1;
            uint8_t _bit7:1;
        };
    };
};
#pragma pack(pop)

int main()
{
    Byte myByte(8);
    cout << "Bit 0: " << (int)myByte._bit0 << endl;
    cout << "Bit 1: " << (int)myByte._bit1 << endl;
    cout << "Bit 2: " << (int)myByte._bit2 << endl;
    cout << "Bit 3: " << (int)myByte._bit3 << endl;
    cout << "Bit 4: " << (int)myByte._bit4 << endl;
    cout << "Bit 5: " << (int)myByte._bit5 << endl;
    cout << "Bit 6: " << (int)myByte._bit6 << endl;
    cout << "Bit 7: " << (int)myByte._bit7 << endl;

    if (myByte._bit3)
    {
        cout << "Bit 3 is on" << endl;
    }
}
In C++ you use std::bitset<4>. This will use a minimal number of words for storage and hide all the masking from you. It's really hard to separate the C++ library from the language because so much of the language is implemented in the standard library. In C there's no direct way to create an array of single bits like this, instead you'd create one element of four bits or do the manipulation manually.
EDIT:
The 1 bit unsigned int is a valid type, so why shouldn't you be able
to get an array of it?
Actually you can't use a 1 bit unsigned type anywhere other than the context of creating a struct/class member. At that point it's so different from other types it doesn't automatically follow that you could create an array of them.
C++ would use std::vector<bool> or std::bitset<N>.
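For illustration, a small sketch of both standard options (my example, plain typical usage):

#include <bitset>
#include <iostream>
#include <vector>

int main()
{
    // Fixed size known at compile time: std::bitset packs bits into words.
    std::bitset<4> fixed;            // 0000
    fixed[0] = true;
    fixed[2] = true;
    std::cout << fixed << "\n";      // 0101 (bit 0 printed rightmost)

    // Size chosen at run time: std::vector<bool> is a packed specialisation.
    std::vector<bool> dynamic(4, false);
    dynamic[1] = true;
    for (bool b : dynamic) std::cout << b;
    std::cout << "\n";               // 0100
}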
In C, to emulate std::vector<bool> semantics, you use a struct like this:

struct Bits {
    size_t word_count;
    Word word[];   /* flexible array member - must be the last member */
};

where Word is an implementation-defined type equal in width to the data bus of the CPU; wordsize, as used later on, is equal to the width of the data bus.

E.g. Word is uint_fast32_t for 32-bit machines and uint_fast64_t for 64-bit machines; wordsize is 32 for 32-bit machines, and 64 for 64-bit machines.
You use function-like macros to access bits:

#define GET_BIT(bits, bit)   ((bits)->word[(bit) / wordsize] &  ((Word)1 << ((bit) % wordsize)))
#define SET_BIT(bits, bit)   ((bits)->word[(bit) / wordsize] |= ((Word)1 << ((bit) % wordsize)))
#define CLEAR_BIT(bits, bit) ((bits)->word[(bit) / wordsize] &= ~((Word)1 << ((bit) % wordsize)))
#define FLIP_BIT(bits, bit)  ((bits)->word[(bit) / wordsize] ^=  ((Word)1 << ((bit) % wordsize)))

(The (Word)1 cast keeps the shift from overflowing plain int when Word is wider than 32 bits.)
To add resizeability as per std::vector<bool>, write a resize function which reallocates the whole struct (the flexible array lives inside it) and updates Bits.word_count accordingly. The exact details of this are left as an exercise; the same applies to proper range-checking of bit indices.
This is abusive, and relies on an extension... but it worked for me:

#include <stdio.h>

struct __attribute__((__packed__)) A
{
    unsigned int bit0:1;
    unsigned int bit1:1;
    unsigned int bit2:1;
    unsigned int bit3:1;
};

union U
{
    struct A structVal;
    int intVal;
};

int main()
{
    struct A a = {1, 0, 1, 1};
    union U u;
    u.structVal = a;
    for (int i = 0; i < 4; i++)
    {
        int mask = 1 << i;
        printf("%d\n", (u.intVal & mask) >> i);
    }
    return 0;
}
You can also use an array of integers (ints or longs) to build an arbitrarily large bit mask. The select() system call uses this approach for its fd_set type; each bit corresponds to the numbered file descriptor (0..N). Macros are defined: FD_CLR to clear a bit, FD_SET to set a bit, FD_ISSET to test a bit, and FD_SETSIZE is the total number of bits. The macros automatically figure out which integer in the array to access and which bit in the integer. On Unix, see "sys/select.h"; under Windows, I think it is in "winsock.h". You can use the FD technique to make your own definitions for a bit mask. In C++, I suppose you could create a bit-mask object and overload the [] operator to access individual bits.
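As a sketch of that last idea (my code; BitMask and Proxy are made-up names, not a standard API), here is a bit-mask object whose operator[] returns a proxy so individual bits read and write naturally:

#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

class BitMask {
    std::vector<std::uint32_t> words_;
public:
    explicit BitMask(std::size_t nbits) : words_((nbits + 31) / 32, 0) {}

    struct Proxy {
        std::uint32_t& word;
        std::uint32_t  mask;
        Proxy& operator=(bool b) {
            if (b) word |= mask; else word &= ~mask;
            return *this;
        }
        operator bool() const { return (word & mask) != 0; }
    };

    Proxy operator[](std::size_t i) {
        return Proxy{ words_[i / 32], std::uint32_t{1} << (i % 32) };
    }
};

int main() {
    BitMask m(64);
    m[5] = true;
    std::cout << (m[5] ? "set" : "clear") << "\n"; // set
}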
You can create a bit list by using a struct pointer. This will use more than one bit of space per bit stored though, since each array element is a separately addressable struct and therefore occupies at least one byte:
struct bitfield {
    unsigned int bit : 1;
};

struct bitfield *bitstream;

Then after this:

bitstream = malloc(sizeof(struct bitfield) * numberofbitswewant); /* needs <stdlib.h> */

You can access them like so:

bitstream[bitpointer].bit = ...;

How to convert an int to a binary string representation in C++

I have an int that I want to store as a binary string representation. How can this be done?
Try this:
#include <bitset>
#include <iostream>

int main()
{
    std::bitset<32> x(23456);
    std::cout << x << "\n";

    // If you don't want a variable, just create a temporary.
    std::cout << std::bitset<32>(23456) << "\n";
}
I have an int that I want to first convert to a binary number.
What exactly does that mean? There is no type "binary number". Well, an int is already represented in binary form internally unless you're using a very strange computer, but that's an implementation detail -- conceptually, it is just an integral number.
Each time you print a number to the screen, it must be converted to a string of characters. It just so happens that most I/O systems choose a decimal representation for this process so that humans have an easier time. But there is nothing inherently decimal about int.
Anyway, to generate a base b representation of an integral number x, simply follow this algorithm:

1. Initialize s with the empty string.
2. m = x % b
3. x = x / b
4. Convert m into a digit, d.
5. Append d on s.
6. If x is not zero, go to step 2.
7. Reverse s.
Step 4 is easy if b <= 10 and your computer uses a character encoding where the digits 0-9 are contiguous, because then it's simply d = '0' + m. Otherwise, you need a lookup table.
Steps 5 and 7 can be simplified to append d on the left of s if you know ahead of time how much space you will need and start from the right end in the string.
In the case of b == 2 (i.e. binary representation), step 2 can be simplified to m = x & 1, and step 3 can be simplified to x = x >> 1.
Solution with reverse:
#include <algorithm>
#include <string>

std::string binary(unsigned x)
{
    std::string s;
    do
    {
        s.push_back('0' + (x & 1));
    } while (x >>= 1);
    std::reverse(s.begin(), s.end());
    return s;
}
Solution without reverse:
#include <string>

std::string binary(unsigned x)
{
    // Warning: this breaks for numbers with more than 64 bits
    char buffer[64];
    char* p = buffer + 64;
    do
    {
        *--p = '0' + (x & 1);
    } while (x >>= 1);
    return std::string(p, buffer + 64);
}
AND the number with 100000..., then 010000..., 0010000..., etc. Each time, if the result is 0, put a '0' in a char array, otherwise put a '1'.
#include <string>

const int numberOfBits = sizeof(int) * 8;  // assumes this matches the 0x80000000 mask, i.e. 32-bit int
char binary[numberOfBits + 1];
int decimal = 29;

for (int i = 0; i < numberOfBits; ++i) {
    if ((decimal & (0x80000000 >> i)) == 0) {
        binary[i] = '0';
    } else {
        binary[i] = '1';
    }
}
binary[numberOfBits] = '\0';
std::string binaryString(binary);
http://www.phanderson.com/printer/bin_disp.html is a good example.
The basic principle of a simple approach:

- Loop until the # is 0.
- & (bitwise AND) the # with 1. Print the result (1 or 0) at the end of the string buffer.
- Shift the # right by 1 bit using >>=.
- Repeat the loop.
- Print the reversed string buffer.
To avoid reversing the string or needing to limit yourself to #s fitting the buffer string length, you can (a sketch follows this list):

1. Compute ceiling(log2(N)) - call it L.
2. Compute mask = 2^L (strictly, 2^(L-1) when N is not an exact power of two, or you print one leading zero).
3. Loop until mask == 0:
   - & (bitwise AND) the mask with the #. Print the result (1 or 0).
   - number &= (mask - 1)
   - mask >>= 1 (divide by 2)
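Here is the promised sketch of that mask-driven loop (my code, not from the original answer; to_binary is a made-up name, and the mask starts at the highest set bit so no leading zero appears):

#include <iostream>
#include <string>

std::string to_binary(unsigned n) {
    if (n == 0) return "0";            // log2 is undefined for 0
    unsigned mask = 1;
    while (mask <= n / 2) mask <<= 1;  // highest power of two <= n
    std::string s;
    while (mask != 0) {
        s += (n & mask) ? '1' : '0';
        n &= (mask - 1);               // step: number &= (mask - 1)
        mask >>= 1;                    // step: mask >>= 1
    }
    return s;
}

int main() {
    std::cout << to_binary(29) << "\n"; // 11101
}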
I assume this is related to your other question on extensible hashing.
First define some mnemonics for your bits:
const int FIRST_BIT  = 0x1;
const int SECOND_BIT = 0x2;
const int THIRD_BIT  = 0x4;
Then you have your number you want to convert to a bit string:
int x = someValue;
You can check if a bit is set by using the bitwise & operator.
if (x & FIRST_BIT)
{
    // The first bit is set.
}
You can keep a std::string, appending '1' if a bit is set and '0' if it is not. Depending on what order you want the string in, you can start with the last bit and move to the first, or just go first to last.
You can refactor this into a loop and using it for arbitrarily sized numbers by calculating the mnemonic bits above using current_bit_value<<=1 after each iteration.
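A hedged sketch of that refactor (my code; bits_of is a made-up name):

#include <iostream>
#include <string>

// Walk a single mask up the number instead of naming each bit,
// building the string from the last bit back to the first.
std::string bits_of(unsigned x, int width) {
    std::string s;
    unsigned current_bit_value = 1;
    for (int i = 0; i < width; ++i) {
        s.insert(s.begin(), (x & current_bit_value) ? '1' : '0');
        current_bit_value <<= 1;
    }
    return s;
}

int main() { std::cout << bits_of(5, 8) << "\n"; } // 00000101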
There isn't a direct function; you can just walk along the bits of the int (hint: see >>) and insert a '1' or '0' in the string.
Sounds like a standard interview / homework type question
Use the sprintf function to store the formatted output in a string variable, instead of printf for direct printing. Note, however, that these functions only work with C strings, not C++ strings - and that printf-style formatting has no standard conversion specifier for binary (C23 finally adds %b), so you would still have to build the digit string yourself.
There's a small header only library you can use for this here.
Example:
std::cout << ConvertInteger<Uint32>::ToBinaryString(21);
// Displays "10101"

auto x = ConvertInteger<Int8>::ToBinaryString(21, true);
std::cout << x << "\n"; // displays "00010101"

auto y = ConvertInteger<Uint8>::ToBinaryString(21, true, "0b");
std::cout << y << "\n"; // displays "0b00010101"
Solution without reverse, no additional copy, and with 0-padding:
#include <iostream>
#include <string>
template <short WIDTH>
std::string binary(unsigned x)
{
    // Precondition: x fits into WIDTH binary digits, or p runs off the front.
    std::string buffer(WIDTH, '0');
    char* p = &buffer[WIDTH];  // one past the last digit; decremented before every write
    do {
        --p;
        if (x & 1) *p = '1';
    } while (x >>= 1);
    return buffer;
}
int main()
{
    std::cout << "'" << binary<32>(0xf0f0f0f0) << "'" << std::endl;
    return 0;
}
This is my best implementation for converting integers of any type to a std::string. You can remove the template if you are only going to use it for a single integer type. To the best of my knowledge, I think there is a good balance here between the safety of C++ and the cryptic nature of C. Make sure to include the needed headers.
#include <string>

template <typename T>
std::string bstring(T n)
{
    std::string s;
    // Walk from the most significant bit down to bit 0.
    // (For negative signed values, >> is an arithmetic shift on most platforms.)
    for (int m = sizeof(n) * 8; m--;) {
        s.push_back('0' + ((n >> m) & 1));
    }
    return s;
}
Use it like so,
std::cout << bstring<size_t>(371) << '\n';
This is the output on my computer (it may differ between platforms):
0000000000000000000000000000000000000000000000000000000101110011
Note that the entire binary string is produced, padding zeros included, which helps show the bit size. So the length of the string is the size of size_t in bits.
Let's try a signed integer (negative number):
std::cout << bstring<signed int>(-1) << '\n';
This is the output on my computer (as stated, it may differ):
11111111111111111111111111111111
Note that now the string is shorter; this shows that signed int consumes less space than size_t on this machine. As you can see, my computer uses the two's complement method to represent signed integers (negative numbers). You can now see why unsigned short(-1) > signed int(1).
Here is a template-free version made just for signed integers; use this if you only intend to convert signed integers to strings.
std::string bstring(int n)
{
    std::string s;
    for (int m = sizeof(n) * 8; m--;) {
        s.push_back('0' + ((n >> m) & 1));
    }
    return s;
}