I'm trying to implement the Huffman coding algorithm in C++.
My question is: after I have obtained the equivalent binary string for each character, how can I write those zeros and ones to a file as actual bits, not as the characters '0' and '1'?
Thanks in advance.
I hope this code can help you.
You start from a sequence of chars (each holding the value 0 or 1) representing the continuous encoding of every character of the input file.
You take every element of the sequence and add its bit to a temporary byte (char byte).
Every time you fill a byte, you write it to file (for efficiency, you could also wait until you have accumulated more data).
At the end, you write the remaining bits to file, padded with trailing zeros, for example.
As akappa correctly pointed out, the else branch can be removed if byte is reset to 0 after each file write (or, more generally, every time it has been completely filled and flushed somewhere else), so only the 1s have to be written.
void writeBinary(const char *huffmanEncoding, int sequenceLength)
{
    char byte = 0;
    // For each bit of the sequence
    for (int i = 0; i < sequenceLength; i++) {
        char bit = huffmanEncoding[i];
        // Add a single bit to byte:
        // MSB of the sequence goes to the MSB of the file.
        // No else branch is needed: byte is reset to 0 after every
        // write, so the 0 bits are already in place.
        if (bit == 1) {
            byte |= (char)(1 << (7 - (i % 8)));
        }
        if ((i % 8) == 7) {
            // The byte is full: flush it and start a new one
            //writeByteToFile(byte);
            byte = 0;
        }
    }
    // Write the last, incomplete byte, if any; its unused low bits
    // are already filled with trailing zeros
    if ((sequenceLength % 8) != 0) {
        //writeByteToFile(byte);
    }
}
Obtaining the encoding of each character individually in a separate data structure is a broken solution, because you need to juxtapose the encodings of the characters in the resulting binary file: storing them individually makes that just as hard as storing them contiguously in a vector of bits from the start.
This consideration suggests using a std::vector<bool> to perform your task, but that is also a broken solution because it can't be treated as a C-style array, and you really need that at output time.
This question asks precisely what the valid alternatives to std::vector<bool> are, so I think the answers to it fit your question perfectly.
BTW, what I would do is just wrap a std::vector<uint8_t> in a class that suits your needs, like the code below:
#include <iostream>
#include <vector>
#include <cstdint>
#include <algorithm>
class bitstream {
private:
std::vector<std::uint8_t> storage;
unsigned int bits_used:3; // 3-bit field: counts 0..7 and wraps back to 0 on the eighth increment
void alloc_space();
public:
bitstream() : bits_used(0) { }
void push_bit(bool bit);
template <typename T>
void push(T t);
std::uint8_t *get_array();
size_t size() const;
// beware: no reference!
bool operator[](size_t pos) const;
};
void bitstream::alloc_space()
{
if (bits_used == 0) {
std::uint8_t push = 0;
storage.push_back(push);
}
}
void bitstream::push_bit(bool bit)
{
alloc_space();
storage.back() |= bit << (7 - bits_used++);
}
template <typename T>
void bitstream::push(T t)
{
// Note: bytes of t are pushed in memory order, so the result is endianness-dependent
std::uint8_t *t_byte = reinterpret_cast<std::uint8_t*>(&t);
for (size_t i = 0; i < sizeof(t); i++) {
uint8_t byte = t_byte[i];
if (bits_used > 0) {
// The high (8 - bits_used) bits of byte fill up the last storage byte
storage.back() |= byte >> bits_used;
// The remaining low bits_used bits of byte start a new storage byte
std::uint8_t to_push = (byte & ((1 << bits_used) - 1)) << (8 - bits_used);
storage.push_back(to_push);
} else {
storage.push_back(byte);
}
}
}
std::uint8_t *bitstream::get_array()
{
return &storage.front();
}
size_t bitstream::size() const
{
if (bits_used == 0)
return storage.size() * 8; // every stored byte is completely filled
return (storage.size() - 1) * 8 + bits_used;
}
bool bitstream::operator[](size_t pos) const
{
// No range checking
return static_cast<bool>((storage[pos / 8] >> (7 - (pos % 8))) & 0x1);
}
int main(int argc, char **argv)
{
bitstream bs;
bs.push_bit(true);
std::cout << bs[0] << std::endl;
bs.push_bit(false);
std::cout << bs[0] << "," << bs[1] << std::endl;
bs.push_bit(true);
bs.push_bit(true);
std::uint8_t to_push = 0xF0;
bs.push(to_push);
for (size_t i = 0; i < bs.size(); i++)
std::cout << bs[i] << ",";
std::cout << std::endl;
}
You can't write single bits to a binary file; the smallest unit of data that can be written is one byte (thus 8 bits).
So what you should do is create a buffer (of any size).
char BitBuffer = 0;
Writing to a buffer:
int Location;
bool Value;
if (Value)
BitBuffer |= (1 << Location);
else
BitBuffer &= ~(1 << Location);
The expression (1 << Location) generates a number with all bits 0 except the bit at the position specified by Location. Then, if Value is true, the corresponding bit in the buffer is set to 1; otherwise it is cleared to 0. The bitwise operations used here are fairly simple; if you don't understand them, any good C++ book or tutorial will cover them.
Location should be a number in the range <0, sizeof(BitBuffer)*8 - 1>, so <0, 7> in this case.
Writing buffer to a file is relatively simple when using fstream. Just remember to open it as binary.
ofstream File;
File.open("file.txt", ios::out | ios::binary);
File.write(&BitBuffer, sizeof(char));
EDIT: Noticed a bug and fixed it.
EDIT2: You can't use the << operator in binary mode; I forgot about that.
Alternative solution : Use std::vector<bool> or std::bitset as a buffer.
This should be even simpler, but I thought I could help you a little bit more.
void WriteData (std::vector<bool> const& data, std::ofstream& str)
{
    char Buffer = 0;
    for (unsigned int i = 0; i < data.size(); ++i)
    {
        if (i % 8 == 0 && i != 0)
        {
            // Buffer is full: write it and start a new one
            str.write(&Buffer, 1);
            Buffer = 0;
        }
        // The buffer-setting code from above, with Location = i % 8
        if (data[i])
            Buffer |= (1 << (i % 8));
    }
    // It might happen that data.size() % 8 != 0. The unused high bits of
    // the last buffer are trailing zeros; write it individually.
    if (!data.empty())
        str.write(&Buffer, 1);
}
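For completeness, a minimal call site might look like this (the file name is an arbitrary choice):
std::vector<bool> bits { true, false, true, true, false, true, false, true, true };
std::ofstream file("encoded.bin", std::ios::out | std::ios::binary);
WriteData(bits, file); // 9 bits -> two bytes, the second padded with zeros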
I have an array of 100 uint8_t's, which is to be treated as a stream of 800 bits and dealt with 7 bits at a time. So, in other words, if the first element of the 8-bit array holds 0b11001100 and the second holds 0b11110000, then when I come to read it in 7-bit format, the first element of the 7-bit array would be 0b1100110 and the second would be 0b0111100, with the remaining 2 bits being held in the 3rd.
The first thing I tried was a union...
struct uint7_t {
uint8_t i1:7;
};
union uint7_8_t {
uint8_t u8[100];
uint7_t u7[115];
};
but of course everything's byte-aligned and I essentially end up simply losing the 8th bit of each element.
Does anyone have any ideas on how I can go about doing this?
Just to be clear, this is something of a visual representation of the result of the union:
xxxxxxxx xxxxxxxx xxxxxxxx xxxxxxxx 32 bits of 8 bit data
0xxxxxxx 0xxxxxxx 0xxxxxxx 0xxxxxxx 32 bits of 7-bit data.
And this represents what it is that I want to do instead:
xxxxxxxx xxxxxxxx xxxxxxxx xxxxxxxx 32 bits of 8 bit data
xxxxxxx xxxxxxx xxxxxxx xxxxxxx xxxx 32 bits of 7-bit data.
I'm aware the last bits may be padded, but that's fine: I just want some way of accessing each byte 7 bits at a time without losing any of the 800 bits. So far the only way I can think of is lots of bit shifting, which would of course work, but I'm sure there's a cleaner way of going about it(?)
Thanks in advance for any answers.
Not sure what you mean by "cleaner". People who work on this sort of problem regularly consider shifting and masking to be exactly the right primitive tools to use. One can define a bitstream abstraction with a method to read an arbitrary number of bits off the stream; this abstraction sometimes shows up in compression applications. The internals of that method, of course, use shifting and masking.
One fairly clean approach is to write a function which extracts a 7-bit number at any bit index in an array of unsigned chars. Use a division to convert the bit index to a byte index, and a modulus to get the bit index within the byte. Then shift and mask. The input bits can span two bytes, so you either have to glue together a 16-bit value before extraction, or do two smaller extractions and OR them together to construct the result.
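A sketch of that extraction function, assuming MSB-first packing as in the question (the name get7bits and the absence of bounds checking are my choices, not anything prescribed above):
#include <cstddef>
#include <cstdint>

// Extract the 7-bit value starting at bit position bitIndex.
// No bounds checking: this always reads the byte after the one the
// field starts in, so size the input array accordingly.
uint8_t get7bits(const uint8_t *in, std::size_t bitIndex)
{
    std::size_t byteIndex = bitIndex / 8; // byte the field starts in
    unsigned bitOffset = bitIndex % 8;    // bit position within that byte
    // Glue two adjacent bytes into one 16-bit value, then shift the
    // seven wanted bits down to the bottom and mask.
    uint16_t two = uint16_t((in[byteIndex] << 8) | in[byteIndex + 1]);
    return uint8_t((two >> (9 - bitOffset)) & 0x7F);
}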
If I were aiming for something moderately performant, I'd likely take one of two approaches:
The first has two state variables saying how many bits to take from the current and next byte. The loop would use shifting, masking, and bitwise OR to produce the current output (a number between 0 and 127, as an int for example), then update both state variables via addition and modulus, and increment the current byte pointer once all bits in the first byte were consumed.
The second approach is to load 56 bits (8 outputs' worth of input) into a 64-bit integer and use a fully unrolled structure to extract each of the 8 outputs. Doing this without unaligned memory reads requires constructing the 64-bit integer piecemeal. (56 bits is special because after 56 bits the starting bit position is byte-aligned again.)
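A sketch of the second approach, again assuming MSB-first packing; a compiler will typically unroll both fixed-count loops:
#include <cstdint>

// Gather 7 input bytes (56 bits = 8 outputs) into a 64-bit integer
// piecemeal, avoiding unaligned reads, then peel off 7 bits at a time.
void unpack8(const uint8_t in[7], uint8_t out[8])
{
    uint64_t v = 0;
    for (int i = 0; i < 7; ++i)
        v = (v << 8) | in[i]; // first input byte ends up most significant
    for (int i = 0; i < 8; ++i)
        out[i] = uint8_t((v >> (49 - 7 * i)) & 0x7F); // fields at bits 55..49, 48..42, ...
}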
To go really fast, I might try writing SIMD code in Halide. That's beyond scope here, I believe. (And it's not clear it would actually win much.)
Designs which read more than one byte into an integer at a time will likely have to consider processor byte ordering.
Process them in groups of 8 (since 8 x 7 rounds nicely to something 8-bit aligned). Bitwise operators are the order of the day here. Faffing around with the last (up to) 7 numbers is a little faffy, but not impossible. (This code assumes these are unsigned 7-bit integers! Signed conversion would require you to consider flipping the top bit if bit[6] is 1.)
// convert 8 x 7bit ints in one go
void extract8(const uint8_t input[7], uint8_t output[8])
{
output[0] = input[0] & 0x7F;
output[1] = (input[0] >> 7) | ((input[1] << 1) & 0x7F);
output[2] = (input[1] >> 6) | ((input[2] << 2) & 0x7F);
output[3] = (input[2] >> 5) | ((input[3] << 3) & 0x7F);
output[4] = (input[3] >> 4) | ((input[4] << 4) & 0x7F);
output[5] = (input[4] >> 3) | ((input[5] << 5) & 0x7F);
output[6] = (input[5] >> 2) | ((input[6] << 6) & 0x7F);
output[7] = input[6] >> 1;
}
// convert array of 7bit ints to 8bit
void seven_bit_to_8bit(const uint8_t* const input, uint8_t* const output, const size_t count)
{
size_t count8 = count >> 3;
for(size_t i = 0; i < count8; ++i)
{
extract8(input + 7 * i, output + 8 * i);
}
// handle the remaining (up to) 7 numbers
const size_t countr = (count % 8);
if(countr)
{
// how many bytes do we need to copy from the input?
size_t remaining_bits = 7 * countr;
if(remaining_bits % 8)
{
// round to next nearest multiple of 8
remaining_bits += (8 - remaining_bits % 8);
}
remaining_bits /= 8; // remaining_bits is now a byte count
{
uint8_t in[7] = {0}, out[8] = {0};
for(size_t i = 0; i < remaining_bits; ++i)
{
in[i] = input[count8 * 7 + i];
}
extract8(in, out);
for(size_t i = 0; i < countr; ++i)
{
output[count8 * 8 + i] = out[i];
}
}
}
}
Here is a solution that uses the std::vector<bool> specialization. It also uses a similar mechanism to allow access to the seven-bit elements via reference objects.
The member functions allow for the following operations:
uint7_t x{5}; // simple value
Arr<uint7_t> arr(10); // array of size 10
arr[0] = x; // set element
uint7_t y = arr[0]; // get element
arr.push_back(uint7_t{9}); // add element
arr.push_back(x); //
std::cout << "Array size is "
<< arr.size() << '\n'; // get size
for(auto&& i : arr)
std::cout << i << '\n'; // range-for to read values
int z{50};
for(auto&& i : arr)
i = z++; // range-for to change values
auto&& v = arr[1]; // get reference to second element
v = 99; // change second element via reference
Full program:
#include <vector>
#include <iterator>
#include <iostream>
struct uint7_t {
unsigned int i : 7;
};
struct seven_bit_ref {
size_t begin;
size_t end;
std::vector<bool>& bits;
seven_bit_ref& operator=(const uint7_t& right)
{
auto it{bits.begin()+begin};
for(int mask{1}; mask != 1 << 7; mask <<= 1)
*it++ = right.i & mask;
return *this;
}
operator uint7_t() const
{
uint7_t r{};
auto it{bits.begin() + begin};
for(int i{}; i < 7; ++i)
r.i += *it++ << i;
return r;
}
seven_bit_ref operator*()
{
return *this;
}
void operator++()
{
begin += 7;
end += 7;
}
bool operator!=(const seven_bit_ref& right)
{
return !(begin == right.begin && end == right.end);
}
seven_bit_ref operator=(int val)
{
uint7_t temp{};
temp.i = val;
operator=(temp);
return *this;
}
};
template<typename T>
class Arr;
template<>
class Arr<uint7_t> {
public:
Arr(size_t size) : bits(size * 7, false) {}
seven_bit_ref operator[](size_t index)
{
return {index * 7, index * 7 + 7, bits};
}
size_t size()
{
return bits.size() / 7;
}
void push_back(uint7_t val)
{
for(int mask{1}; mask != 1 << 7; mask <<= 1){
bits.push_back(val.i & mask);
}
}
seven_bit_ref begin()
{
return {0, 7, bits};
}
seven_bit_ref end()
{
return {size() * 7, size() * 7 + 7, bits};
}
std::vector<bool> bits;
};
std::ostream& operator<<(std::ostream& os, uint7_t val)
{
os << val.i;
return os;
}
int main()
{
uint7_t x{5}; // simple value
Arr<uint7_t> arr(10); // array of size 10
arr[0] = x; // set element
uint7_t y = arr[0]; // get element
arr.push_back(uint7_t{9}); // add element
arr.push_back(x); //
std::cout << "Array size is "
<< arr.size() << '\n'; // get size
for(auto&& i : arr)
std::cout << i << '\n'; // range-for to read values
int z{50};
for(auto&& i : arr)
i = z++; // range-for to change values
auto&& v = arr[1]; // get reference
v = 99; // change via reference
std::cout << "\nAfter changes:\n";
for(auto&& i : arr)
std::cout << i << '\n';
}
The following code does what you asked for, but first the output; there is also a live example on ideone.
Output:
Before changing values...:
7 bit representation: 1111111 0000000 0000000 0000000 0000000 0000000 0000000 0000000
8 bit representation: 11111110 00000000 00000000 00000000 00000000 00000000 00000000
After changing values...:
7 bit representation: 1000000 1001100 1110010 1011010 1010100 0000111 1111110 0000000
8 bit representation: 10000001 00110011 10010101 10101010 10000001 11111111 00000000
8 Bits: 11111111 to ulong: 255
7 Bits: 1111110 to ulong: 126
After changing values...:
7 bit representation: 0010000 0101010 0100000 0000000 0000000 0000000 0000000 0000000
8 bit representation: 00100000 10101001 00000000 00000000 00000000 00000000 00000000
It is very straightforward using a std::bitset in a class called BitVector. I implement one getter and one setter. The getter returns a std::bitset of size M (a template argument) at the given index selIdx; the index is multiplied by M to get the right position. The returned bitset can then be converted to numeric or string values.
The setter takes a uint8_t value as input and, again, the index selIdx. The bits are shifted into the right position in the bitset.
Furthermore, thanks to the template argument M, you can use the getter and setter with different sizes: you can work with a 7- or 8-bit representation, but also 3 or whatever you like.
I'm sure this code is not the best concerning speed, but I think it is a very clear and clean solution. It is not complete at all, as there is just one getter, one setter and two constructors; remember to implement error checking for indexes and sizes.
Code:
#include <iostream>
#include <bitset>
template <size_t N> class BitVector
{
private:
std::bitset<N> _data;
public:
BitVector (unsigned long num) : _data (num) { };
BitVector (const std::string& str) : _data (str) { };
template <size_t M>
std::bitset<M> getBits (size_t selIdx)
{
std::bitset<M> retBitset;
for (size_t idx = 0; idx < M; ++idx)
{
retBitset |= (_data[M * selIdx + idx] << (M - 1 - idx));
}
return retBitset;
}
template <size_t M>
void setBits (size_t selIdx, uint8_t num)
{
const unsigned char* curByte = reinterpret_cast<const unsigned char*> (&num);
for (size_t bitIdx = 0; bitIdx < 8; ++bitIdx)
{
bool bitSet = (1 == ((*curByte & (1 << (8 - 1 - bitIdx))) >> (8 - 1 - bitIdx)));
_data.set(M * selIdx + bitIdx, bitSet);
}
}
void print_7_8()
{
std:: cout << "\n7 bit representation: ";
for (size_t idx = 0; idx < (N / 7); ++idx)
{
std::cout << getBits<7>(idx) << " ";
}
std:: cout << "\n8 bit representation: ";
for (size_t idx = 0; idx < N / 8; ++idx)
{
std::cout << getBits<8>(idx) << " ";
}
}
};
int main ()
{
BitVector<56> num = 127;
std::cout << "Before changing values...:";
num.print_7_8();
num.setBits<8>(0, 0x81);
num.setBits<8>(1, 0b00110011);
num.setBits<8>(2, 0b10010101);
num.setBits<8>(3, 0xAA);
num.setBits<8>(4, 0x81);
num.setBits<8>(5, 0xFF);
num.setBits<8>(6, 0x00);
std::cout << "\n\nAfter changing values...:";
num.print_7_8();
std::cout << "\n\n8 Bits: " << num.getBits<8>(5) << " to ulong: " << num.getBits<8>(5).to_ulong();
std::cout << "\n7 Bits: " << num.getBits<7>(6) << " to ulong: " << num.getBits<7>(6).to_ulong();
num = BitVector<56>(std::string("1001010100000100"));
std::cout << "\n\nAfter changing values...:";
num.print_7_8();
return 0;
}
Here is one approach without the manual shifting. This is just a crude POC, but hopefully you will be able to get something out of it. I don't know whether you can easily transform your input into a bitset, but I think it should be possible.
int bytes = 0x01234567;
bitset<32> bs(bytes);
cout << "Input: " << bs << endl;
for(int i = 0; i < 5; i++)
{
bitset<7> slice(bs.to_string().substr(i*7, 7));
cout << slice << endl;
}
Also, this is probably much less performant than the bit-shifting versions, so I wouldn't recommend it for heavy lifting.
You can use this to get the index'th 7-bit element from in (note that it doesn't have proper end-of-array handling: it always reads one byte past the byte the field starts in). Simple, fast.
int get7(const uint8_t *in, int index) {
int fidx = index*7;
int idx = fidx>>3;
int sidx = fidx&7;
return (in[idx]>>sidx|in[idx+1]<<(8-sidx))&0x7f;
}
You can use direct access or bulk bit packing/unpacking as in TurboPFor:Integer Compression
// Direct read access
// b : bit width 0-16 (7 in your case)
#define bzhi32(u,b) ((u) & ((1u <<(b))-1))
static inline unsigned bitgetx16(unsigned char *in,
unsigned idx,
unsigned b) {
unsigned bidx = b*idx;
return bzhi32( *(unsigned *)((uint16_t *)in+(bidx>>4)) >> (bidx& 0xf), b );
}
Let's say I have a string that contains a binary number, like this one: "0110110101011110110010010000010". Is there an easy way to output that string to a binary file so that the file contains the corresponding bits, 0110110101011110110010010000010? I understand that the computer writes one byte at a time, but I am having trouble coming up with a way to write the contents of the string as bits to a binary file.
Use a bitset:
//Added extra leading zero to make 32-bit.
std::bitset<32> b("00110110101011110110010010000010");
auto ull = b.to_ullong();
std::ofstream f;
f.open("test_file.dat", std::ios_base::out | std::ios_base::binary);
f.write(reinterpret_cast<char*>(&ull), sizeof(ull));
f.close();
I am not sure if that's what you need but here you go:
#include<iostream>
#include<fstream>
#include<string>
using namespace std;
int main() {
string tmp = "0110110101011110110010010000010";
ofstream out;
out.open("file.txt");
out << tmp;
out.close();
}
Make sure your output stream is in binary mode. This handles the case where the string size is not a multiple of the number of bits in a byte. Extra bits are set to 0.
const unsigned int BitsPerByte = CHAR_BIT;
unsigned char byte;
for (size_t i = 0; i < data.size(); ++i)
{
if ((i % BitsPerByte) == 0)
{
// first bit of a byte
byte = 0;
}
if (data[i] == '1')
{
// set a bit to 1
byte |= (1 << (i % BitsPerByte));
}
if (((i % BitsPerByte) == BitsPerByte - 1) || i + 1 == data.size())
{
// last bit of the byte
file << byte;
}
}
I have a string of 1s and 0s that I padded with enough 0s to make its length exactly divisible by 8. My goal is to convert this string into a number of bytes, ordered so that the first character I read is the least significant bit, the next is the next least significant, and so on until I have read 8 bits; I save that as a byte and then continue reading the string, storing the next bit as the least significant bit of the second byte.
As an example, the string "0101101101010010" has length 16, so it will be converted into two bytes. The first byte should be "11011010" and the second byte should be "01001010".
I am unsure how to do this because it is not as simple as reversing the whole string (I need to maintain the order of the bytes).
Any help is appreciated, thanks!
You could iterate backwards through the string, but reversing it like you suggest might be easier. From there, you can just build the bytes one at a time. A nested for loop would work nicely:
unsigned char bytes[8] = {0}; // zero-initialized, which matters below
for (int i=0, j=0; i<str.length(); j++) {
for (int k=0; k<8; k++, i++) {
bytes[j] >>= 1;
if (str[i] == '1') bytes[j] |= 0x80;
}
}
i is the current string index, j is the current byte array index, and k counts how many bits we've set in the current byte. We set the bit if the current character is 1, otherwise we leave it unset. It's important that the byte array is unsigned since we're using a right-shift.
You can get the number of bytes using string::size() / 8.
Then it is just a matter of reversing the sub-strings.
You can do something like this:
for(int i = 0; i < number_of_bytes; i++)
{
    std::string temp_substr = original.substr(i * 8, 8);
    // reverse the substring using reverse iterators
    std::string reversed(temp_substr.rbegin(), temp_substr.rend());
    // "reversed" now spells that byte MSB-first; convert it to a
    // numeric byte, for example via std::bitset
    unsigned char byte = static_cast<unsigned char>(std::bitset<8>(reversed).to_ulong());
    // ...save "byte" wherever you need it
}
It depends whether you want to expose it as a general-purpose function or encapsulate it in a class which ensures all the right constraints are applied, such as every character being either 0 or 1.
#include <cstdint>
#include <string>
#include <algorithm>
#include <stdexcept>
#include <iostream>
static const size_t BitsPerByte = 8;
// Suitable for a member function where you know all the constraints are met.
uint64_t crudeBinaryDecode(const std::string& src)
{
uint64_t value = 0;
const size_t numBits = src.size();
for (size_t bitNo = 0; bitNo < numBits; ++bitNo)
value |= uint64_t(src[bitNo] - '0') << bitNo;
return value;
}
uint64_t clearerBinaryDecode(const std::string& src)
{
if ((src.size() & (BitsPerByte - 1)) != 0)
throw std::invalid_argument("binary value must be padded to a byte size");
uint64_t value = 0;
const size_t numBits = std::min(src.size(), sizeof(value) * BitsPerByte);
for (size_t bitNo = 0; bitNo < numBits; ++bitNo) {
uint64_t bitValue = (src[bitNo] == '0') ? 0ULL : 1ULL;
value |= bitValue << bitNo;
}
return value;
}
int main()
{
std::string dead("1011" "0101" "0111" "1011");
std::string beef("1111" "0111" "0111" "1101");
std::string bse ("1111" "0111" "0111" "1101" "1011" "0101" "0111" "1011" "1111" "0111" "0111" "1101" "1011" "0111" "0111" "1111");
std::cout << std::hex;
std::cout << "'dead' is: " << crudeBinaryDecode(dead) << std::endl;
std::cout << "'beef' is: " << clearerBinaryDecode(beef) << std::endl;
std::cout << "'bse' is: " << crudeBinaryDecode(bse) << std::endl;
return 0;
}
How to write bitset data to a file?
The first answer doesn't answer the question correctly, since it takes 8 times more space than it should.
How would you do it? I really need it to save a lot of true/false values.
Simplest approach: take 8 consecutive boolean values, represent them as a single byte, and write that byte to your file. That saves a lot of space.
At the beginning of the file you can write the number of boolean values you intend to store; that number will help when reading the bytes back from the file and converting them into boolean values again! A sketch of both ideas follows.
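A minimal sketch of that scheme; the file name, the 64-bit count header and LSB-first packing are arbitrary choices here, not mandated by anything above:
#include <cstdint>
#include <fstream>
#include <vector>

// Write the value count first, then 8 values per byte.
void saveBools(const std::vector<bool>& v, const char* path)
{
    std::ofstream out(path, std::ios::binary);
    std::uint64_t n = v.size();
    out.write(reinterpret_cast<const char*>(&n), sizeof n); // header: value count
    char byte = 0;
    for (std::size_t i = 0; i < v.size(); ++i) {
        if (v[i])
            byte |= static_cast<char>(1u << (i % 8)); // pack LSB-first
        if (i % 8 == 7 || i + 1 == v.size()) {        // byte full, or last value
            out.put(byte);
            byte = 0;
        }
    }
}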
If you want the bitset class that best supports conversion to binary, and your bitset is larger than unsigned long, then the best option is boost::dynamic_bitset. (I presume it is more than 32, or even 64, bits if you are that concerned about saving space.)
From dynamic_bitset you can use to_block_range to write the bits into the underlying integral type. You can construct the dynamic_bitset back from the blocks by using from_block_range or its constructor from BlockInputIterator or by making append() calls.
Now you have the bytes in their native format (Block) you still have the issue of writing it to a stream and reading it back.
You will need to store a bit of "header" information first: the number of blocks you have, and potentially the endianness. Or you might use a macro to convert to a standard endianness (e.g. ntohl, but ideally one that is a no-op on your most common platform; so if that is little-endian, you probably want to store little-endian and convert only on big-endian systems).
(Note: I am assuming that boost::dynamic_bitset standardly converts integral types the same way regardless of underlying endianness. Their documentation does not say).
To write the numbers to a stream in binary, use os.write(reinterpret_cast<const char*>(&data[0]), sizeof(Block) * nBlocks), and to read them back use is.read(reinterpret_cast<char*>(&data[0]), sizeof(Block) * nBlocks), where data is assumed to be vector<Block>; before the read you must do data.resize(nBlocks) (not reserve()). (You can also do weird stuff with istream_iterator or istreambuf_iterator, but resize() is probably better.)
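Here is a rough sketch of that round trip, under the assumption that a bit-count header is enough (no endianness handling shown):
#include <boost/dynamic_bitset.hpp>
#include <cstdint>
#include <fstream>
#include <iterator>
#include <vector>

using Block = boost::dynamic_bitset<>::block_type;

void save(const boost::dynamic_bitset<>& bs, std::ofstream& os)
{
    std::vector<Block> data;
    boost::to_block_range(bs, std::back_inserter(data)); // blocks in native format
    std::uint64_t nBits = bs.size();                     // the "header"
    os.write(reinterpret_cast<const char*>(&nBits), sizeof nBits);
    os.write(reinterpret_cast<const char*>(data.data()),
             sizeof(Block) * data.size());
}

boost::dynamic_bitset<> load(std::ifstream& is)
{
    std::uint64_t nBits = 0;
    is.read(reinterpret_cast<char*>(&nBits), sizeof nBits);
    const std::size_t perBlock = boost::dynamic_bitset<>::bits_per_block;
    std::vector<Block> data((nBits + perBlock - 1) / perBlock); // resize, not reserve
    is.read(reinterpret_cast<char*>(data.data()), sizeof(Block) * data.size());
    boost::dynamic_bitset<> bs(data.begin(), data.end());
    bs.resize(nBits); // drop the padding bits of the last block
    return bs;
}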
Here is a try with two functions that will use a minimal number of bytes, without compressing the bitset.
#include <bitset>
#include <istream>
#include <ostream>
template<std::size_t I>
void bitset_dump(const std::bitset<I> &in, std::ostream &out)
{
// export a bitset consisting of I bits to an output stream.
// Eight bits are stored to a single stream byte.
unsigned int i = 0; // the current bit index
unsigned char c = 0; // the current byte
short bits = 0; // to process next byte
while(i < in.size())
{
c = c << 1; //
if(in.test(i)) ++c; // adding 1 if bit is true
++bits;
if(bits == 8)
{
out.put((char)c);
c = 0;
bits = 0;
}
++i;
}
// dump remaining
if(bits != 0) {
// pad the byte so that first bits are in the most significant positions.
while(bits != 8)
{
c = c << 1;
++bits;
}
out.put((char)c);
}
return;
}
template<std::size_t I>
void bitset_restore(std::istream &in, std::bitset<I> &out)
{
// read bytes from the input stream to a bitset of size I.
/* for debug */ // for(std::size_t n = 0; n < I; ++n) out[n] = false;
unsigned int i = 0; // current bit index
unsigned char mask = 0x80; // current byte mask
unsigned char c = 0; // current byte in stream
while(in.good() && (i < I))
{
if((i%8) == 0) // retrieve next character
{ c = in.get();
mask = 0x80;
}
else mask = mask >> 1; // shift mask
out[i] = (c & mask);
++i;
}
}
Note that a reinterpret_cast of the memory used by the bitset as an array of chars could probably also work, but it may not be portable across systems because you don't know what the internal representation of the bitset is (endianness?).
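A quick round-trip usage sketch for the two functions above (the file name is made up):
#include <bitset>
#include <fstream>

int main()
{
    std::bitset<12> original("101101110001");
    {
        std::ofstream out("bits.dat", std::ios::binary);
        bitset_dump(original, out);
    }
    std::bitset<12> restored;
    std::ifstream in("bits.dat", std::ios::binary);
    bitset_restore(in, restored);
    // restored now equals original; the file is 2 bytes instead of 12
}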
How about this? (Note it relies on the libstdc++ internals _M_p, _S_word_bit and _Bit_type, so it is not portable.)
#include <algorithm>
#include <cstdlib>
#include <ctime>
#include <fstream>
#include <iostream>
#include <vector>
...
{
std::srand(std::time(nullptr));
std::vector<bool> vct1, vct2;
vct1.resize(20000000, false);
vct2.resize(20000000, false);
// insert some data
for (size_t i = 0; i < 1000000; i++) {
vct1[std::rand() % 20000000] = true;
}
// serialize to file
std::ofstream ofs("bitset", std::ios::out | std::ios::trunc);
for (uint32_t i = 0; i < vct1.size(); i += std::_S_word_bit) {
auto vct1_iter = vct1.begin();
vct1_iter += i;
uint32_t block_num = i / std::_S_word_bit;
std::_Bit_type block_val = *(vct1_iter._M_p);
if (block_val != 0) {
// only write not-zero block
ofs.write(reinterpret_cast<char*>(&block_num), sizeof(uint32_t));
ofs.write(reinterpret_cast<char*>(&block_val), sizeof(std::_Bit_type));
}
}
ofs.close();
// deserialize
std::ifstream ifs("bitset", std::ios::in);
ifs.seekg(0, std::ios::end);
uint64_t file_size = ifs.tellg();
ifs.seekg(0);
uint64_t load_size = 0;
while (load_size < file_size) {
uint32_t block_num;
ifs.read(reinterpret_cast<char*>(&block_num), sizeof(uint32_t));
std::_Bit_type block_value;
ifs.read(reinterpret_cast<char*>(&block_value), sizeof(std::_Bit_type));
load_size += sizeof(uint32_t) + sizeof(std::_Bit_type);
auto offset = block_num * std::_S_word_bit;
if (offset >= vct2.size()) {
std::cout << "error! already touch end" << std::endl;
break;
}
auto iter = vct2.begin();
iter += offset;
*(iter._M_p) = block_value;
}
ifs.close();
// check result
int count_true1 = std::count(vct1.begin(), vct1.end(), true);
int count_true2 = std::count(vct2.begin(), vct2.end(), true);
std::cout << "count_true1: " << count_true1 << " count_true2: " << count_true2 << std::endl;
}
One way might be:
std::vector<bool> data = /* obtain bits somehow */
// Reserve an appropriate number of byte-sized buckets.
std::vector<char> bytes((int)std::ceil((float)data.size() / CHAR_BIT));
for(int byteIndex = 0; byteIndex < (int)bytes.size(); ++byteIndex) {
    for(int bitIndex = 0; bitIndex < CHAR_BIT; ++bitIndex) {
        std::size_t bitPos = (std::size_t)byteIndex * CHAR_BIT + bitIndex;
        if (bitPos >= data.size())
            break; // the last byte may be only partially filled
        bytes[byteIndex] |= data[bitPos] << bitIndex;
    }
}
Note that this assumes you don't care what the bit layout ends up being in memory, because it makes no adjustments for anything. But as long as you also serialize the number of bits that were actually stored (to cover cases where the bit count isn't a multiple of CHAR_BIT), you can deserialize exactly the same bitset or vector as you had originally.
(I'm not happy with that bucket size computation but it's 1am and I'm having trouble thinking of something more elegant).
#include "stdio"
#include "bitset"
...
FILE* pFile;
pFile = fopen("output.dat", "wb");
...
const unsigned int size = 1024;
bitset<size> bitbuffer;
...
fwrite (&bitbuffer, 1, size/8, pFile); // relies on bitset's internal layout; not portable
fclose(pFile);
Two options:
Spend the extra pounds (or pence, more likely) for a bigger disk.
Write a routine to extract 8 bits from the bitset at a time, compose them into bytes, and write them to your output stream.
I have a byte array generated by a random number generator. I want to put this into the STL bitset.
Unfortunately, it looks like Bitset only supports the following constructors:
A string of 1's and 0's like "10101011"
An unsigned long. (my byte array will be longer)
The only solution I can think of now is to read the byte array bit by bit and make a string of 1's and 0's. Does anyone have a more efficient solution?
Something like this?
#include <array>
#include <bitset>
#include <climits>
#include <cstdint>
#include <iostream>
template<size_t numBytes>
std::bitset<numBytes * CHAR_BIT> bytesToBitset(uint8_t *data)
{
std::bitset<numBytes * CHAR_BIT> b;
for(int i = 0; i < numBytes; ++i)
{
uint8_t cur = data[i];
int offset = i * CHAR_BIT;
for(int bit = 0; bit < CHAR_BIT; ++bit)
{
b[offset] = cur & 1;
++offset; // Move to next bit in b
cur >>= 1; // Move to next bit in array
}
}
return b;
}
And an example usage:
int main()
{
std::array<uint8_t, 4> bytes = { 0xDE, 0xAD, 0xBE, 0xEF };
auto bits = bytesToBitset<bytes.size()>(bytes.data());
std::cout << bits << std::endl;
}
There's a third constructor for bitset<>: it takes no parameters and sets all the bits to 0. I think you'll need to use that, then walk through the array calling set() for each bit in the byte array that's a 1.
A bit brute-force, but it'll work. There is a bit of complexity in converting the byte index and the bit offset within each byte to a bitset index, but it's nothing a little thought (and maybe a run through under the debugger) won't solve; a sketch follows. I think it's most likely simpler and more efficient than running the array through a string conversion or a stream.
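A sketch of that walk, with the byte-index/bit-offset arithmetic spelled out (placing bit 0 of byte 0 at bitset index 0 is an assumption; flip the index math for the opposite order):
#include <bitset>
#include <climits>
#include <cstddef>
#include <cstdint>

template <std::size_t NumBytes>
std::bitset<NumBytes * CHAR_BIT> arrayToBitset(const std::uint8_t (&bytes)[NumBytes])
{
    std::bitset<NumBytes * CHAR_BIT> result; // default-constructed: all zeros
    for (std::size_t byteIdx = 0; byteIdx < NumBytes; ++byteIdx)
        for (std::size_t bitIdx = 0; bitIdx < CHAR_BIT; ++bitIdx)
            if (bytes[byteIdx] & (1u << bitIdx))         // source bit set?
                result.set(byteIdx * CHAR_BIT + bitIdx); // byte/bit -> bitset index
    return result;
}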
I have spent a lot of time writing the reverse function (bitset -> byte/char array), so here it is:
bitset<SIZE> data = ...
// bitset to char array
char buf[SIZE / CHAR_BIT] = {0}; // assumes SIZE is a multiple of CHAR_BIT
char current = 0;
int offset = 0;
for (int i = 0; i < SIZE; ++i) {
    if (data[i]) { // if bit is true
        current |= (char)(1 << (i % CHAR_BIT)); // set that bit in the current byte
    } // otherwise leave it false
    if ((i + 1) % CHAR_BIT == 0) { // every 8 bits
        buf[offset++] = current; // save the byte and advance the buffer offset
        current = 0; // clear for the next byte
    }
}
// now we have the result in "buf" (final size of contents in buffer is "offset")
Here is my implementation using template meta-programming.
Loops are unrolled at compile time.
I took @strager's version and modified it to prepare for TMP:
changed order of iteration (so that I could make recursion from it);
reduced number of used variables.
Modified version, with the loops still executed at run time:
template <size_t nOfBytes>
void bytesToBitsetRunTimeOptimized(uint8_t* arr, std::bitset<nOfBytes * CHAR_BIT>& result) {
for(int i = nOfBytes - 1; i >= 0; --i) {
for(int bit = 0; bit < CHAR_BIT; ++bit) {
result[i * CHAR_BIT + bit] = ((arr[i] >> bit) & 1);
}
}
}
TMP version based on it:
template<size_t nOfBytes, int I, int BIT> struct LoopOnBIT {
static inline void bytesToBitset(uint8_t* arr, std::bitset<nOfBytes * CHAR_BIT>& result) {
result[I * CHAR_BIT + BIT] = ((arr[I] >> BIT) & 1);
LoopOnBIT<nOfBytes, I, BIT+1>::bytesToBitset(arr, result);
}
};
// stop case for LoopOnBIT
template<size_t nOfBytes, int I> struct LoopOnBIT<nOfBytes, I, CHAR_BIT> {
static inline void bytesToBitset(uint8_t* arr, std::bitset<nOfBytes * CHAR_BIT>& result) { }
};
template<size_t nOfBytes, int I> struct LoopOnI {
static inline void bytesToBitset(uint8_t* arr, std::bitset<nOfBytes * CHAR_BIT>& result) {
LoopOnBIT<nOfBytes, I, 0>::bytesToBitset(arr, result);
LoopOnI<nOfBytes, I-1>::bytesToBitset(arr, result);
}
};
// stop case for LoopOnI
template<size_t nOfBytes> struct LoopOnI<nOfBytes, -1> {
static inline void bytesToBitset(uint8_t* arr, std::bitset<nOfBytes * CHAR_BIT>& result) { }
};
template <size_t nOfBytes>
void bytesToBitset(uint8_t* arr, std::bitset<nOfBytes * CHAR_BIT>& result) {
LoopOnI<nOfBytes, nOfBytes - 1>::bytesToBitset(arr, result);
}
client code:
uint8_t arr[]={0x6A};
std::bitset<8> b;
bytesToBitset<1>(arr,b);
Well, let's be honest, I was bored and started to think there had to be a slightly faster way than setting each bit.
template<int numBytes>
std::bitset<numBytes * CHAR_BIT> bytesToBitset(const uint8_t *data)
{
    std::bitset<numBytes * CHAR_BIT> b = *data;
    for(int i = 1; i < numBytes; ++i)
    {
        b <<= CHAR_BIT; // make room for the next byte
        b |= data[i];   // set the lowest CHAR_BIT bits
    }
    return b;
}
This is indeed slightly faster, at least as long as the byte array is smaller than about 30 elements (depending on the optimization flags passed to the compiler). For larger arrays, the time spent shifting the bitset makes setting each bit individually faster.
You can initialize the bitset from a stream. I can't remember how to wrangle a byte[] into a stream, but...
from http://www.sgi.com/tech/stl/bitset.html
bitset<12> x;
bitset<12> mask(0xF0F); // the mask is assumed to be defined elsewhere on that page
cout << "Enter a 12-bit bitset in binary: " << flush;
if (cin >> x) {
cout << "x = " << x << endl;
cout << "As ulong: " << x.to_ulong() << endl;
cout << "And with mask: " << (x & mask) << endl;
cout << "Or with mask: " << (x | mask) << endl;
}
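For what it's worth, one way to do the wrangling is to render the bytes as '0'/'1' text first and feed that to the stream extractor; note this is no cheaper than using the string constructor directly, just stream-flavoured:
#include <bitset>
#include <cstdint>
#include <iostream>
#include <sstream>
#include <string>

int main()
{
    std::uint8_t bytes[] = { 0xDE, 0xAD };
    std::string text;
    for (std::uint8_t b : bytes)
        text += std::bitset<8>(b).to_string(); // MSB-first within each byte
    std::istringstream stream(text);
    std::bitset<16> x;
    stream >> x;                // bitset's operator>> reads '0'/'1' characters
    std::cout << x << '\n';     // prints 1101111010101101
}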