How do I Quickly Compare Variable-Length Bit Strings in C++?

I'm performing comparisons of objects based on the binary presence or absence of a set of features. These features can be represented by a bit string, such as this:
10011
This bit string has the first, fourth and fifth features.
I'm trying to calculate the similarity of a pair of bit strings as the number of features that both share in common. For a given set of bit strings I know that they'll all have the same length, but I don't know at compile time what that length will be.
For example, these two strings have two features in common, so I'd like the similarity function to return 2:
s(10011,10010) = 2
How do I efficiently represent and compare bit-strings in C++?

You can use the std::bitset STL class.
They can be constructed from bit strings, ANDed together, and asked to count the number of 1s:
#include <string>
#include <bitset>
int main()
{
std::bitset<5> option1(std::string("10011")), option2(std::string("10010"));
std::bitset<5> and_bit = option1 & option2; // the result has 1s only on the common options
size_t s = and_bit.count(); // count() returns the number of 1s in the bitset
return 0;
}
EDIT
If the number of bits is unknown at compile time, you can use boost::dynamic_bitset<>:
boost::dynamic_bitset<> option(bit_string);
The other parts of the example don't change, since boost::dynamic_bitset<> shares a common interface with std::bitset.
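Putting it together, a minimal sketch with boost::dynamic_bitset<> (assuming Boost is available and both strings have the same run-time length):

#include <iostream>
#include <string>
#include <boost/dynamic_bitset.hpp>

int main()
{
    // The length is only known at run time, so there is no size template parameter.
    boost::dynamic_bitset<> option1(std::string("10011")), option2(std::string("10010"));
    boost::dynamic_bitset<> common = option1 & option2; // 1s only where both have the feature
    std::cout << common.count() << '\n';                // prints 2
    return 0;
}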

A faster algorithm (a classic parallel bit count on the ANDed value, with no loop at all):
int similarity(unsigned int a, unsigned int b)
{
unsigned int r = a & b;
r = ( r & 0x55555555 ) + ((r >> 1) & 0x55555555 );
r = ( r & 0x33333333 ) + ((r >> 2) & 0x33333333 );
r = ( r & 0x0f0f0f0f ) + ((r >> 4) & 0x0f0f0f0f );
r = ( r & 0x00ff00ff ) + ((r >> 8) & 0x00ff00ff );
r = ( r & 0x0000ffff ) + ((r >>16) & 0x0000ffff );
return r;
}
#include <iostream>
int main() {
unsigned int a = 19; //10011
unsigned int b = 18; //10010
std::cout << similarity(a,b) << std::endl;
return 0;
}
Output:
2
Demonstration at ideone: http://www.ideone.com/bE4qb

As you don't know the bit length at compile time, you can use boost::dynamic_bitset instead of std::bitset.
You can then use operator& (or &=) to find the common bits, and count them using boost::dynamic_bitset::count().
The performance depends. For maximum speed, depending on your compiler, you may have to implement the loop yourself, e.g. using @Nawaz's method, or something from Bit Twiddling Hacks, or by writing the loop using assembler/compiler intrinsics for SSE/popcount/etc.
Notice that at least LLVM, GCC and ICC detect many patterns of this sort and optimize them for you, so profile/check the generated code before doing manual work.
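As a hedged illustration of the intrinsic route (these are GCC/Clang builtins; MSVC has __popcnt/__popcnt64 instead):

#include <cstdint>

int similarity(std::uint64_t a, std::uint64_t b)
{
    // Counts the set bits of the ANDed value; usually compiles down to a single
    // POPCNT instruction when the target CPU supports it (e.g. with -mpopcnt).
    return __builtin_popcountll(a & b);
}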

Use a std::bitset. If your set of features is smaller than the number of bits in an unsigned long, you can get an unsigned long representation of the bits (via to_ulong()), AND the two values, and use the bit twiddling tricks from here to count.
If you want to continue to use strings to represent your bit pattern, you could do something like the following, using the zip_iterator from boost.
#include <iostream>
#include <string>
#include <algorithm>
#include <boost/tuple/tuple.hpp>
#include <boost/iterator/zip_iterator.hpp>
struct check_is_set :
public std::unary_function<const boost::tuple<char const&, char const&>&, bool>
{
bool operator()(const boost::tuple<char const&, char const&>& t) const
{
const char& cv1 = boost::get<0>(t);
const char& cv2 = boost::get<1>(t);
return cv1 == char('1') && cv1 == cv2;
}
};
size_t count_same(std::string const& opt1, std::string const& opt2)
{
std::string::const_iterator beg1 = opt1.begin();
std::string::const_iterator beg2 = opt2.begin();
// need the same number of items for end (this really is daft, you get a runtime
// error if the sizes are different otherwise!! I think it's a bug in the
// zip_iterator implementation...)
size_t end_s = std::min(opt1.size(), opt2.size());
std::string::const_iterator end1 = opt1.begin() + end_s;
std::string::const_iterator end2 = opt2.begin() + end_s;
return std::count_if(
boost::make_zip_iterator(
boost::make_tuple(beg1, beg2)
),
boost::make_zip_iterator(
boost::make_tuple(end1, end2)
),
check_is_set()
);
}
int main(void)
{
std::string opt1("1010111");
std::string opt2("001101");
std::cout << "same: " << count_same(opt1, opt2) << std::endl;
return 0;
}

Related

Select value randomly from groups of values of different types

I have arrays of types int, bool and float:
std::array<int, 3> myInts = {15, 3, 6};
std::array<bool, 2> myBools = {true, false};
std::array<float, 5> myFloats = {0.1, 15.2, 100.6, 10.44, 5.5};
I would like to generate a random integer (I know how to do that) from 0 to the total number of elements (3 + 2 + 5), so the generated random integer represents one of the values. Next, based on that integer, I would like to retrieve my value and do further calculations with it. The problem I am facing is that I don't want to use if-else statements like these:
int randInt = RandIntGen(0, myInts.size() + myBools.size() + myFloats.size()); // generates a random integer
if(randInt < myInts.size()){ // if the random integer is less than the size of the integer array I can choose
// from the integer array
int myValue = myInts[randInt];
}
else if(randInt >= myInts.size() && randInt < myBools.size() + myInts.size()){ // if the random integer
// is between the size of the integer array and the size of the bool array + the size of the integer array
// then I can choose from the bool array
bool myValue = myBools[randInt - myInts.size()];
}
.
.
.
Then, for example, if randInt = 2 then myValue = 6, or if randInt = 4 then myValue = false.
However, I would like the selection algorithm to be more straightforward, something like:
int randInt = RandIntGen(0, myInts.size() + myBools.size() + myFloats.size());
allValues = {myInts, myBools, myFloats}
if(type_id(allValues[randInt]).name=="int")
int myValue = allValues[randInt] //(this value will be used for further calculations)
if(type_id(allValues[randInt]).name=="bool")
bool myValue = allValues[randInt] //(this value will be used for further calculations)
I've tried a mix of templates, inheritance and linked lists, but I cannot implement what I want. I think the solution should be really simple, but at this time I cannot think of anything else.
I am a novice in C++; I've been learning for one and a half months. Before, I was doing stuff in Python and everything was way easier, but then I decided to try C++. I am not an experienced programmer; I know some basic things and I am trying to learn new things. Thanks for the help.
Most probably you should think about how to satisfy your requirements in a simpler way, but it is possible to get literally what you want with C++17. If your compiler doesn't support C++17, you can use the corresponding Boost libraries. Here is the code:
#include <array>
#include <iostream>
#include <stdexcept>
#include <tuple>
#include <variant>
using Result = std::variant<int, bool, float>;
template<class T>
bool take_impl(int& i, const T& vec, Result& result)
{
if (i < static_cast<int>(std::size(vec)))
result = vec[i];
i -= std::size(vec);
return i < 0;
}
template<class T>
Result take(int i, const T& arrays)
{
if (i < 0)
throw std::runtime_error("i is too small");
Result res;
std::apply([&i, &res](const auto&... array) { return (take_impl(i, array, res) || ...); }, arrays);
if (i >= 0)
throw std::runtime_error("i is too large");
return res;
}
std::ostream& operator<<(std::ostream& s, const Result& v)
{
if (std::holds_alternative<int>(v))
s << "int(" << std::get<int>(v);
else if (std::holds_alternative<bool>(v))
s << "bool(" << std::get<bool>(v);
else
s << "float(" << std::get<float>(v);
return s << ')'; // write to the stream passed in, not std::cout
}
auto arrays = std::make_tuple(
std::array<int, 3>{15, 3, 6},
std::array<bool, 2>{true, false},
std::array<float,5>{0.1, 15.2, 100.6, 10.44, 5.5}
);
int main()
{
for (int i = 0; i < 10; ++i)
std::cout << take(i, arrays) << '\n';
}
If you are not required to keep separate arrays of different types, you can make one uniform array of std::variant<int, bool, float>. This will be significantly more efficient than using std::shared_ptr-s.
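A minimal sketch of that single-array alternative (the random selection here uses <random>; adapt it to whatever generator you already have):

#include <array>
#include <iostream>
#include <random>
#include <variant>

int main()
{
    // One uniform array holding every value, so no index arithmetic is needed.
    std::array<std::variant<int, bool, float>, 10> values{
        15, 3, 6, true, false, 0.1f, 15.2f, 100.6f, 10.44f, 5.5f};

    std::mt19937 gen{std::random_device{}()};
    std::uniform_int_distribution<std::size_t> dist(0, values.size() - 1);

    // std::visit dispatches on the type actually stored in the chosen element
    // (a bool prints as 1/0 here).
    std::visit([](auto v) { std::cout << v << '\n'; }, values[dist(gen)]);
}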

Convert vector<bool> to binary

I have a vector<bool> that contains 10 elements. How can I convert it to a binary type?
vector<bool> a={0,1,1,1,1,0,1,1,1,0}
I want to get binary values, something like this:
long long int x = convert2bin(s)
cout << "x = " << x << endl
x = 0b0111101110
Note: the size of the vector can change during run time, max size = 400.
0b is important, I want to use the gcc extension, or some literal type.
As I understand from the comment:
Yes it can even hold 400 values
and from the question:
0b is important
you need a string, not an int.
std::string convert2bin(const std::vector<bool>& v)
{
std::string out("0b");
out.reserve(v.size() + 2);
for (bool b : v)
{
out += b ? '1' : '0';
}
return out;
}
std::vector<bool> a = { 0, 1, 1, 1, 1, 0, 1, 1, 1, 0 };
std::string s = "";
for (bool b : a)
{
s += std::to_string(b);
}
int result = std::stoi(s, nullptr, 2); // parse the string as base 2
If you really want to do this, you start from the end, although I support Marius Bancila and advise using a bitset instead.
int myValue = 0;
for(int i=a.size()-1, pos=0; i>=0; i--, pos++)
{
// Here we create the bitmask for this value
if(a[i] == 1)
{
int mask = 1;
mask <<= pos;
myValue |= mask;
}
}
Your x is just the integer formed from a, so you can use std::accumulate like the following:
long long x = accumulate(a.begin(), a.end(), 0,
[](long long p, long long q)
{ return (p << 1) + q; }
);
For 400 bits, you need a std::string though.
First of all, the result of the conversion is not a literal, so you may not apply the prefix 0b to the variable x.
Here is an example
#include <iostream>
#include <iomanip>
#include <algorithm>
#include <numeric>
#include <vector>
#include <iterator>
#include <limits>
int main()
{
std::vector<bool> v = { 0, 1, 1, 1, 1, 0, 1, 1, 1, 0 };
typedef std::vector<bool>::size_type size_type;
size_type n = std::min<size_type>( v.size(),
std::numeric_limits<long long>::digits + 1 );
long long x = std::accumulate( v.begin(), std::next( v.begin(), n ), 0ll,
[]( long long acc, int value )
{
return acc << 1 | value;
} );
for ( int i : v ) std::cout << i;
std::cout << std::endl;
std::cout << std::hex << x << std::endl;
return 0;
}
The output is
0111101110
1ee
vector<bool> is already a "binary" type.
Converting to an int is not possible for more bits than available in an int. However if you want to be able to print in that format, you can use a facet and attach it to the locale then imbue() before you print your vector<bool>. Ideally you will "store" the locale once.
I don't know the GNU extension for printing an int with 0b prefix but you can get your print facet to do that.
A simpler way is to create a "wrapper" for your vector<bool> and print that.
Although vector<bool> is always internally implemented as a "bitset" there is no public method to extract the raw data out nor necessarily a standard representation for it.
You can of course convert it to a different type by iterating through it, although I guess you may have been looking for something else?
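A minimal sketch of that "wrapper" idea, assuming all you need is printing with a 0b prefix (BinaryView is a made-up name):

#include <iostream>
#include <vector>

struct BinaryView
{
    const std::vector<bool>& bits;
};

std::ostream& operator<<(std::ostream& os, const BinaryView& v)
{
    os << "0b";
    for (bool b : v.bits)
        os << (b ? '1' : '0');
    return os;
}

int main()
{
    std::vector<bool> a = { 0, 1, 1, 1, 1, 0, 1, 1, 1, 0 };
    std::cout << BinaryView{a} << '\n'; // prints 0b0111101110
}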
If the number of bits is known in advance and by some reason you need to start from an std::array rather than from an std::bitset directly, consider this option (inspired by this book):
#include <sstream>
#include <iostream>
#include <bitset>
#include <array>
#include <algorithm>
#include <iterator>
/**
* @brief Converts an array of bools to a bitset
* @tparam nBits the size of the array
* @param bits the array of bools
* @return a bitset with size nBits
* @see https://www.linuxtopia.org/online_books/programming_books/c++_practical_programming/c++_practical_programming_192.html
*/
template <size_t nBits>
std::bitset<nBits> BitsToBitset(const std::array<bool, nBits> bits)
{
std::ostringstream oss;
std::copy(std::begin(bits), std::end(bits), std::ostream_iterator<bool>(oss, ""));
return std::bitset<nBits>(oss.str());
}
int main()
{
std::array<bool, 10> a = { 0, 1, 1, 1, 1, 0, 1, 1, 1, 0 };
unsigned long int x = BitsToBitset(a).to_ulong();
std::cout << x << std::endl;
return x;
}

How to calculate (A*B)%C? [duplicate]

This question already has answers here: Multiply two overflowing integers modulo a third (2 answers). Closed 9 years ago.
Can someone help me calculate (A*B)%C, where 1 <= A,B,C <= 10^18, in C++ without big-num, just a mathematical approach?
Off the top of my head (not extensively tested)
typedef unsigned long long BIG;
BIG mod_multiply( BIG A, BIG B, BIG C )
{
BIG mod_product = 0;
A %= C;
while (A) {
B %= C;
if (A & 1) mod_product = (mod_product + B) % C;
A >>= 1;
B <<= 1;
}
return mod_product;
}
This has complexity O(log A) iterations. You can probably replace most of the % with a conditional subtraction, for a bit more performance.
typedef unsigned long long BIG;
BIG mod_multiply( BIG A, BIG B, BIG C )
{
BIG mod_product = 0;
// A %= C; may or may not help performance
B %= C;
while (A) {
if (A & 1) {
mod_product += B;
if (mod_product >= C) mod_product -= C;
}
A >>= 1;
B <<= 1;
if (B >= C) B -= C;
}
return mod_product;
}
This version has only one long integer modulo -- it may even be faster than the large-chunk method, depending on how your processor implements integer modulo.
Live demo: https://ideone.com/1pTldb -- same result as Yakk's.
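For a quick sanity check, here is a hedged sketch that compares mod_multiply against a 128-bit multiply (unsigned __int128 is a GCC/Clang extension, so this only builds on those compilers):

#include <iostream>

typedef unsigned long long BIG;

// The first version from above, repeated so the sketch compiles on its own.
BIG mod_multiply( BIG A, BIG B, BIG C )
{
    BIG mod_product = 0;
    A %= C;
    while (A) {
        B %= C;
        if (A & 1) mod_product = (mod_product + B) % C;
        A >>= 1;
        B <<= 1;
    }
    return mod_product;
}

int main()
{
    BIG a = 123456789012345678ULL;
    BIG b = 987654321098765432ULL;
    BIG c = 1000000000000000003ULL;

    // Reference result via 128-bit arithmetic.
    BIG expected = (BIG)((unsigned __int128)a * b % c);
    std::cout << mod_multiply(a, b, c) << " == " << expected << '\n';
}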
An implementation of a prior Stack Overflow answer:
#include <stdint.h>
#include <tuple>
#include <iostream>
typedef std::tuple< uint32_t, uint32_t > split_t;
split_t split( uint64_t a )
{
static const uint32_t mask = -1;
auto retval = std::make_tuple( mask&a, ( a >> 32 ) );
// std::cout << "(" << std::get<0>(retval) << "," << std::get<1>(retval) << ")\n";
return retval;
}
typedef std::tuple< uint64_t, uint64_t, uint64_t, uint64_t > cross_t;
template<typename Lambda>
cross_t cross( split_t lhs, split_t rhs, Lambda&& op )
{
return std::make_tuple(
op(std::get<0>(lhs), std::get<0>(rhs)),
op(std::get<1>(lhs), std::get<0>(rhs)),
op(std::get<0>(lhs), std::get<1>(rhs)),
op(std::get<1>(lhs), std::get<1>(rhs))
);
}
// c must have high bit unset:
uint64_t a_times_2_k_mod_c( uint64_t a, unsigned k, uint64_t c )
{
a %= c;
for (unsigned i = 0; i < k; ++i)
{
a <<= 1;
a %= c;
}
return a;
}
// c must have about 2 high bits unset:
uint64_t a_times_b_mod_c( uint64_t a, uint64_t b, uint64_t c )
{
// ensure a and b are < c:
a %= c;
b %= c;
auto Z = cross( split(a), split(b), [](uint32_t lhs, uint32_t rhs)->uint64_t {
return (uint64_t)lhs * (uint64_t)rhs;
} );
uint64_t to_the_0;
uint64_t to_the_32_a;
uint64_t to_the_32_b;
uint64_t to_the_64;
std::tie( to_the_0, to_the_32_a, to_the_32_b, to_the_64 ) = Z;
// std::cout << to_the_0 << "+ 2^32 *(" << to_the_32_a << "+" << to_the_32_b << ") + 2^64 * " << to_the_64 << "\n";
// this line is the one that requires 2 high bits in c to be clear
// if you just add 2 of them then do a %c, then add the third and do
// a %c, you can relax the requirement to "one high bit must be unset":
return
(to_the_0
+ a_times_2_k_mod_c(to_the_32_a+to_the_32_b, 32, c) // + will not overflow!
+ a_times_2_k_mod_c(to_the_64, 64, c) )
%c;
}
int main()
{
uint64_t retval = a_times_b_mod_c( 19010000000000000000, 1011000000000000, 1231231231231211 );
std::cout << retval << "\n";
}
The idea here is to split your 64-bit integer into a pair of 32-bit integers, which are safe to multiply in 64-bit land.
We express a*b as (a_high * 2^32 + a_low) * (b_high * 2^32 + b_low), do the 4-fold multiplication (keeping track of the 2^32 factors without storing them in our bits), then note that doing a * 2^k % c can be done via a series of k repeats of this pattern: ((a*2 %c) *2%c).... So we can take this 3 to 4 element polynomial of 64-bit integers in 2^32 and reduce it without having to worry about things.
The expensive part is the a_times_2_k_mod_c function (the only loop).
You can make it go many times faster if you know that c has more than one high bit clear.
You could instead replace the a %= c with subtraction a -= (a>=c)*c;
Doing both isn't all that practical.
Live example

How to convert an int to a binary string representation in C++

I have an int that I want to store as a binary string representation. How can this be done?
Try this:
#include <bitset>
#include <iostream>
int main()
{
std::bitset<32> x(23456);
std::cout << x << "\n";
// If you don't want a variable just create a temporary.
std::cout << std::bitset<32>(23456) << "\n";
}
I have an int that I want to first convert to a binary number.
What exactly does that mean? There is no type "binary number". Well, an int is already represented in binary form internally unless you're using a very strange computer, but that's an implementation detail -- conceptually, it is just an integral number.
Each time you print a number to the screen, it must be converted to a string of characters. It just so happens that most I/O systems chose a decimal representation for this process so that humans have an easier time. But there is nothing inherently decimal about int.
Anyway, to generate a base b representation of an integral number x, simply follow this algorithm:
1. Initialize s with the empty string.
2. m = x % b
3. x = x / b
4. Convert m into a digit, d.
5. Append d to s.
6. If x is not zero, go to step 2.
7. Reverse s.
Step 4 is easy if b <= 10 and your computer uses a character encoding where the digits 0-9 are contiguous, because then it's simply d = '0' + m. Otherwise, you need a lookup table.
Steps 5 and 7 can be simplified to append d on the left of s if you know ahead of time how much space you will need and start from the right end in the string.
In the case of b == 2 (e.g. binary representation), step 2 can be simplified to m = x & 1, and step 3 can be simplified to x = x >> 1.
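Here is a sketch of those numbered steps for an arbitrary base b, assuming 2 <= b <= 10 so that step 4 is just '0' + m (the solutions below specialize it to binary):

#include <algorithm>
#include <string>

std::string to_base(unsigned x, unsigned b)
{
    std::string s;                          // step 1
    do {
        unsigned m = x % b;                 // step 2
        x /= b;                             // step 3
        s.push_back(char('0' + m));         // steps 4 and 5
    } while (x != 0);                       // step 6
    std::reverse(s.begin(), s.end());       // step 7
    return s;
}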
Solution with reverse:
#include <string>
#include <algorithm>
std::string binary(unsigned x)
{
std::string s;
do
{
s.push_back('0' + (x & 1));
} while (x >>= 1);
std::reverse(s.begin(), s.end());
return s;
}
Solution without reverse:
#include <string>
std::string binary(unsigned x)
{
// Warning: this breaks for numbers with more than 64 bits
char buffer[64];
char* p = buffer + 64;
do
{
*--p = '0' + (x & 1);
} while (x >>= 1);
return std::string(p, buffer + 64);
}
AND the number with 100000..., then 010000..., 0010000..., etc. Each time, if the result is 0, put a '0' in a char array, otherwise put a '1'.
int numberOfBits = sizeof(int) * 8;
char binary[numberOfBits + 1];
int decimal = 29;
for(int i = 0; i < numberOfBits; ++i) {
if ((decimal & (0x80000000 >> i)) == 0) {
binary[i] = '0';
} else {
binary[i] = '1';
}
}
binary[numberOfBits] = '\0';
std::string binaryString(binary);
http://www.phanderson.com/printer/bin_disp.html is a good example.
The basic principle of a simple approach:
- Loop until the number is 0:
  - AND (bitwise and) the number with 1. Print the result (1 or 0) to the end of a string buffer.
  - Shift the number right by 1 bit using >>=.
  - Repeat the loop.
- Print the reversed string buffer.
To avoid reversing the string or needing to limit yourself to numbers fitting the buffer string length, you can (see the sketch below):
- Compute ceiling(log2(N)) -- say L.
- Compute mask = 2^L.
- Loop until mask == 0:
  - AND (bitwise and) the mask with the number. Print the result (1 or 0).
  - number &= (mask - 1)
  - mask >>= 1 (divide by 2)
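A rough sketch of that mask-based loop; note it computes L as floor(log2(n)) + 1 so no leading zeros are printed:

#include <cmath>
#include <iostream>
#include <string>

std::string to_binary(unsigned int n)
{
    if (n == 0) return "0";
    unsigned int L = static_cast<unsigned int>(std::log2(n)) + 1; // number of significant bits
    std::string s;
    for (unsigned int mask = 1u << (L - 1); mask != 0; mask >>= 1)
        s += (n & mask) ? '1' : '0';
    return s;
}

int main()
{
    std::cout << to_binary(29) << '\n'; // prints 11101
}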
I assume this is related to your other question on extensible hashing.
First define some mnemonics for your bits:
const int FIRST_BIT = 0x1;
const int SECOND_BIT = 0x2;
const int THIRD_BIT = 0x4;
Then you have your number you want to convert to a bit string:
int x = someValue;
You can check if a bit is set by using the bitwise & operator.
if(x & FIRST_BIT)
{
// The first bit is set.
}
And you can keep a std::string: append '1' to that string if a bit is set, and append '0' if it is not. Depending on what order you want the string in, you can start with the last bit and move to the first, or go first to last.
You can refactor this into a loop and use it for arbitrarily sized numbers by calculating the mnemonic bits above with current_bit_value <<= 1 after each iteration.
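For illustration, a hedged sketch of that loop refactor, walking from the lowest bit to the highest and prepending so the string reads left to right:

#include <iostream>
#include <string>

std::string to_bit_string(unsigned int x, int num_bits)
{
    std::string s;
    unsigned int current_bit_value = 1;
    for (int i = 0; i < num_bits; ++i)
    {
        s.insert(s.begin(), (x & current_bit_value) ? '1' : '0');
        current_bit_value <<= 1;  // move to the next bit, as described above
    }
    return s;
}

int main()
{
    std::cout << to_bit_string(29, 8) << '\n'; // prints 00011101
}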
There isn't a direct function; you can just walk along the bits of the int (hint: see >>) and insert a '1' or '0' into the string.
Sounds like a standard interview / homework type question
Use the sprintf function to store the formatted output in a string variable, instead of printf for direct printing. Note, however, that these functions only work with C strings, not C++ strings.
There's a small header only library you can use for this here.
Example:
std::cout << ConvertInteger<Uint32>::ToBinaryString(21);
// Displays "10101"
auto x = ConvertInteger<Int8>::ToBinaryString(21, true);
std::cout << x << "\n"; // displays "00010101"
auto y = ConvertInteger<Uint8>::ToBinaryString(21, true, "0b");
std::cout << y << "\n"; // displays "0b00010101"
Solution without reverse, no additional copy, and with 0-padding:
#include <iostream>
#include <string>
template <short WIDTH>
std::string binary( unsigned x )
{
std::string buffer( WIDTH, '0' );
char *p = &buffer[ WIDTH ];
do {
--p;
if (x & 1) *p = '1';
}
while (x >>= 1);
return buffer;
}
int main()
{
std::cout << "'" << binary<32>(0xf0f0f0f0) << "'" << std::endl;
return 0;
}
This is my best implementation for converting integers (any type) to a std::string. You can remove the template if you are only going to use it for a single integer type. To the best of my knowledge, it strikes a good balance between the safety of C++ and the cryptic nature of C. Make sure to include the needed headers (<string> here).
template<typename T>
std::string bstring(T n){
std::string s;
for(int m = sizeof(n) * 8;m--;){
s.push_back('0'+((n >> m) & 1));
}
return s;
}
Use it like so,
std::cout << bstring<size_t>(371) << '\n';
This is the output on my computer (it differs between computers):
0000000000000000000000000000000000000000000000000000000101110011
Note that the entire binary string is produced, including the padding zeros, which helps to show the bit size. So the length of the string is the size of size_t in bits.
Let's try a signed integer (a negative number):
std::cout << bstring<signed int>(-1) << '\n';
This is the output on my computer (as stated, it differs between computers):
11111111111111111111111111111111
Note that now the string is shorter; this shows that signed int consumes less space than size_t on this machine. As you can see, my computer uses the two's complement method to represent signed integers (negative numbers). You can now see why unsigned short(-1) > signed int(1).
Here is a version made just for signed integers, without templates, i.e. use this if you only intend to convert signed integers to a string.
std::string bstring(int n){
std::string s;
for(int m = sizeof(n) * 8;m--;){
s.push_back('0'+((n >> m) & 1));
}
return s;
}

Converting integer to a bit representation

How can I convert an integer to its bit representation? I want to take an integer and return a vector that contains the 1s and 0s of the integer's bit representation.
I'm having a heck of a time trying to do this myself, so I thought I would ask to see if there was a built-in library function that could help.
Doesn't work with negatives.
#include <vector>
#include <algorithm>
using namespace std; // the original snippet assumes this
vector<int> convert(int x) {
vector<int> ret;
while(x) {
if (x&1)
ret.push_back(1);
else
ret.push_back(0);
x>>=1;
}
reverse(ret.begin(),ret.end());
return ret;
}
It's not too hard to solve with a one-liner, but there is actually a standard-library solution.
#include <bitset>
#include <algorithm>
#include <climits>
#include <functional>
#include <string>
#include <vector>
std::vector< int > get_bits( unsigned long x ) {
std::string chars( std::bitset< sizeof(long) * CHAR_BIT >( x )
.to_string< char, std::char_traits<char>, std::allocator<char> >() );
std::transform( chars.begin(), chars.end(), chars.begin(), // output iterator was missing
std::bind2nd( std::minus<char>(), '0' ) );
return std::vector< int >( chars.begin(), chars.end() );
}
C++0x even makes it easier!
#include <bitset>
#include <climits>
#include <string>
#include <vector>
std::vector< int > get_bits( unsigned long x ) {
std::string chars( std::bitset< sizeof(long) * CHAR_BIT >( x )
.to_string( char(0), char(1) ) );
return std::vector< int >( chars.begin(), chars.end() );
}
This is one of the more bizarre corners of the library. Perhaps really what they were driving at was serialization.
cout << bitset< 8 >( x ) << endl; // print 8 low-order bits of x
A modification of DCP's answer. The behavior is implementation-defined for negative values of t. It provides all bits, even the leading zeros. The standard caveats about std::vector<bool> not being a proper container apply.
#include <vector> //for std::vector
#include <algorithm> //for std::reverse
#include <climits> //for CHAR_BIT
template<typename T>
std::vector<bool> convert(T t) {
std::vector<bool> ret;
for(unsigned int i = 0; i < sizeof(T) * CHAR_BIT; ++i, t >>= 1)
ret.push_back(t & 1);
std::reverse(ret.begin(), ret.end());
return ret;
}
And a version that [might] work with floating point values as well. And possibly other POD types. I haven't really tested this at all. It might work better for negative values, or it might work worse. I haven't put much thought into it.
template<typename T>
std::vector<bool> convert(T t) {
union {
T obj;
unsigned char bytes[sizeof(T)];
} uT;
uT.obj = t;
std::vector<bool> ret;
for(int i = sizeof(T)-1; i >= 0; --i)
for(unsigned int j = 0; j < CHAR_BIT; ++j, uT.bytes[i] >>= 1)
ret.push_back(uT.bytes[i] & 1);
std::reverse(ret.begin(), ret.end());
return ret;
}
Here is a version that works with negative numbers:
std::string get_bits(unsigned int x)
{
std::string ret;
for (unsigned int mask=0x80000000; mask; mask>>=1) {
ret += (x & mask) ? "1" : "0";
}
return ret;
}
The string can, of course, be replaced by a vector or indexed for bit values.
Returns a string instead of a vector, but can be easily changed.
#include <climits>
#include <string>
template<typename T>
std::string get_bits(T value) {
int size = sizeof(value) * CHAR_BIT;
std::string ret;
ret.reserve(size);
for (int i = size-1; i >= 0; --i)
ret += ((value >> i) & 1) == 0 ? '0' : '1'; // shift the value rather than 1 << i, which overflows for wide types
return ret;
}
The world's worst integer to bit as bytes converter:
#include <algorithm>
#include <functional>
#include <iterator>
#include <stdlib.h>
#include <string.h>
class zero_ascii_iterator: public std::iterator<std::input_iterator_tag, char>
{
public:
zero_ascii_iterator &operator++()
{
return *this;
}
char operator *() const
{
return '0';
}
};
int value = 29; // example input (the original snippet leaves value undeclared)
char bits[33];
_itoa(value, bits, 2); // note: _itoa is a non-standard (MSVC) function
std::transform(
bits,
bits + strlen(bits),
zero_ascii_iterator(),
bits,
std::minus<char>());