I have a constructor that creates a BitArray object, which asks the user how many 'bits' they would like to use. It then uses unsigned chars to store the bytes needed to hold them all. I then wish to create methods that allow a user to 'Set' a certain bit, and also to display the full set of bytes at the end. However, my Set method does not seem to be changing the bit, or my print function (the overload) does not seem to actually be printing the actual bit(s). Can somebody point out the problem, please?
Constructor
BitArray::BitArray(unsigned int n)
{
//Now let's find the minimum 'bits' needed
n++;
//If the bits do not fit evenly into whole bytes, we need one extra byte
if( (n % BYTE) != 0)
arraySize = (n / BYTE) + 1;
else
arraySize = (n / BYTE);
//Now dynamically create the array with full byte size
barray = new unsigned char[arraySize];
//Now initialize all bytes to 0
for(int i = 0; i < arraySize; i++)
{
barray[i] = 0;
}
}
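For example, with a highest bit index of 10, n becomes 11 after the increment; 11 % 8 != 0, so arraySize = 11/8 + 1 = 2 bytes.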
Set Method:
void BitArray::Set(unsigned int index)
{
//Set the Indexed Bit to ON
barray[index/BYTE] |= 0x01 << (index%BYTE);
}
Print Overload:
ostream &operator<<(ostream& os, const BitArray& a)
{
for(int i = 0; i < (a.Length()*BYTE+1); i++)
{
int curNum = i/BYTE;
char charToPrint = a.barray[curNum];
os << (charToPrint & 0X01);
charToPrint >>= 1;
}
return os;
}
for(int i = 0; i < (a.Length()*BYTE+1); i++)
{
int curNum = i/BYTE;
char charToPrint = a.barray[curNum];
os << (charToPrint & 0X01);
charToPrint >>= 1;
}
Each time your loop runs, you fetch a fresh value for charToPrint. That means the operation charToPrint >>= 1; is useless, since that modification is not carried over to the next iteration of the loop. Therefore, you will only ever print the first bit of each char of your array, once per iteration.
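A minimal fix (a sketch, assuming Length() returns the byte count arraySize and BYTE is 8) is to fetch each byte once and shift a working copy while walking its bits:

ostream &operator<<(ostream& os, const BitArray& a)
{
    for(int byteIdx = 0; byteIdx < a.Length(); byteIdx++)
    {
        unsigned char cur = a.barray[byteIdx]; // fetch the byte once
        for(int bit = 0; bit < BYTE; bit++)
        {
            os << (cur & 0x01); // print the current lowest bit
            cur >>= 1;          // now the shift carries over to the next iteration
        }
    }
    return os;
}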
Related
I've written the code below to convert and store the data from a string (an array of chars) called str into an array of 16-bit integers called arr16bit.
The code works. However, I'd say that there's a better or cleaner way to implement this logic, using fewer variables etc.
I don't want to use the index i to get the modulus % 2, because with little endian I use the same algorithm, but i starts at the last index of the string and counts down instead of up. Any recommendations are appreciated.
// assuming str had already been initialised before this ..
int strLength = CalculateStringLength(str); // function implementation not shown
uint16_t* arr16bit = new uint16_t[(strLength / 2) + 1]; // the only C++ feature used here, so I didn't want to tag it
int indexWrite = 0;
int counter = 0;
for(int i = 0; i < strLength; ++i)
{
arr16bit[indexWrite] <<= 8;
arr16bit[indexWrite] |= str[i];
if ( (counter % 2) != 0)
{
indexWrite++;
}
counter++;
}
Yes, there are some redundant variables here.
You have both counter and i, which do exactly the same thing and always hold the same value, and you have indexWrite, which is always exactly half (per integer division) of both of them. All three can collapse into a single loop index:
const std::size_t strLength = CalculateStringLength(str);
std::vector<uint16_t> arr16bit((strLength/2) + 1);
for (std::size_t i = 0; i < strLength; ++i)
{
arr16bit[i/2] <<= 8;
arr16bit[i/2] |= str[i];
}
Though I'd probably do it more like this to avoid N redundant |= operations (taking care not to read past the end of str when its length is odd):
const std::size_t strLength = CalculateStringLength(str);
std::vector<uint16_t> arr16bit((strLength/2) + 1);
for (std::size_t i = 0; i + 1 < strLength; i += 2)
{
arr16bit[i/2] = (str[i] << 8) | str[i+1];
}
if (strLength % 2 != 0) // an odd final char lands in the low byte, as before
arr16bit[strLength/2] = str[strLength-1];
You may also wish to consider a simple std::copy over the whole dang buffer, if your endianness is right for it.
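For illustration, that bulk-copy idea might look like the sketch below (pack_raw is a hypothetical name, and this is only valid when the machine's byte order already matches the big-endian pairing the shift loops produce):

#include <cstdint>
#include <cstring>
#include <vector>

std::vector<uint16_t> pack_raw(const char* str, std::size_t strLength)
{
    std::vector<uint16_t> arr16bit((strLength / 2) + 1);
    std::memcpy(arr16bit.data(), str, strLength); // one bulk byte copy, no per-character shifting
    return arr16bit;
}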
Hello, I am trying to write 8 bits from a std::vector to a binary file and read them back. Writing works fine; I have checked with a binary editor and all the values are correct. But once I try to read, I get bad data.
Data that I am writing:
11000111 //bits
Data that I get from reading:
11111111 //bits
Read function:
std::vector<bool> Read()
{
std::vector<bool> map;
std::ifstream fin("test.bin", std::ios::binary);
int size = 8 / 8.0f;
char * buffer = new char[size];
fin.read(buffer, size);
fin.close();
for (int i = 0; i < size; i++)
{
for (int id = 0; id < 8; id++)
{
map.emplace_back(buffer[i] << id);
}
}
delete[] buffer;
return map;
}
Write function (just so you guys know more of what's going on):
void Write(std::vector<bool>& map)
{
std::ofstream fout("test.bin", std::ios::binary);
char byte = 0;
int byte_index = 0;
for (size_t i = 0; i < map.size(); i++)
{
if (map[i])
{
byte |= (1 << byte_index);
}
byte_index++;
if (byte_index > 7)
{
byte_index = 0;
fout.write(&byte, sizeof(byte));
}
}
fout.close();
}
Your code spreads one byte (the value of buffer[i], where i is always 0) out over 8 bools. Since you only read one byte, which happens to be non-zero, you end up with 8 trues (any non-zero integer converts to true).
Instead of spreading one value out, you probably want to split it into its constituent bits:
for (int id = 0; id < 8; id++)
{
map.emplace_back((static_cast<unsigned char>(buffer[i]) & (1U << id)) >> id);
}
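With that change, each element is non-zero only when bit id of the byte is actually set; the trailing >> id just normalizes the result to 0 or 1 before the conversion to bool (strictly optional, since any non-zero value already converts to true).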
I have a base64 string containing bits; I have already decoded it with the code in here. But I'm unable to transform the resulting string into bits I could work with. Is there a way to convert the bytes of the decoded string to a vector of bools containing the bits of the string?
I have tried converting each char with this code, but it failed to convert to a proper char:
void DecodedStringToBit(std::string const& decodedString, std::vector<bool> &bits) {
int it = 0;
for (int i = 0; i < decodedString.size(); ++i) {
unsigned char c = decodedString[i];
for (unsigned char j = 128; j > 0; j <<= 1) {
if (c&j) bits[++it] = true;
else bits[++it] = false;
}
}
}
Your inner for loop is botched: it's shifting j the wrong way. And honestly, if you want to work with 8-bit values, you should use the proper <stdint.h> types instead of unsigned char:
for (uint8_t j = 128; j; j >>= 1)
bits.push_back(c & j);
Also, remember to call bits.reserve(decodedString.size() * 8); so your program doesn't waste a bunch of time on resizing.
I'm assuming the bit order is MSB first. If you want LSB first, the loop becomes:
for (uint8_t j = 1; j; j <<= 1)
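Put together, the whole conversion might look like this sketch (returning the vector instead of filling an out-parameter):

#include <cstdint>
#include <string>
#include <vector>

std::vector<bool> DecodedStringToBit(const std::string& decodedString)
{
    std::vector<bool> bits;
    bits.reserve(decodedString.size() * 8); // one allocation up front
    for (unsigned char c : decodedString)
        for (uint8_t j = 128; j; j >>= 1) // MSB first, as above
            bits.push_back(c & j);
    return bits;
}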
In the OP's code, it is not clear whether the vector bits has sufficient size, for example whether it is resized by the caller (it should not be!). If not, the vector has no space allocated, and bits[++it] may not work; the appropriate thing might be to push_back. (Moreover, the code needs a post-increment of it, i.e. bits[it++], to start from bits[0].)
Furthermore, in the OP's code, the purpose of unsigned char j = 128 with j <<= 1 is unclear: j overflows to zero after the first shift, so the inner loop only ever runs for one iteration.
I would try something like this (not compiled):
#include <climits> // CHAR_BIT
#include <string>
#include <vector>
void DecodedStringToBit(std::string const& decodedString,
std::vector<bool>& bits) {
for (std::size_t charIndex = 0; charIndex != decodedString.size(); ++charIndex) {
const unsigned char c = decodedString[charIndex];
for (int bitIndex = 0; bitIndex != CHAR_BIT; ++bitIndex) {
// CHAR_BIT = bits in a char = 8
const bool bit = c & (1 << bitIndex); // bitwise-AND with mask
bits.push_back(bit);
}
}
}
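Note that this emits the bits of each byte LSB first, whereas the loop in the answer above goes MSB first; pick whichever order the consumer of bits expects.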
I am building a class in C++ which can be used to store arbitrarily large integers. I am storing them as binary in a vector. I need to be able to print this vector in base 10 so it is easier for a human to understand. I know that I could convert it to an int and then output that int; however, my numbers will be much larger than any primitive type. How can I convert this directly to a string?
Here is my code so far. I am new to C++, so if you have any other suggestions, that would be great too. I need help filling in the string toBaseTenString() function.
class BinaryInt
{
private:
mutable bool lastDataUser = true; // mutable so the copy constructor can transfer ownership from a const reference
vector<bool> * data;
BinaryInt(vector<bool> * pointer)
{
data = pointer;
}
public:
BinaryInt(int n)
{
data = new vector<bool>();
while(n > 0)
{
data->push_back(n % 2);
n = n >> 1;
}
}
BinaryInt(const BinaryInt & from)
{
from.lastDataUser = false;
this->data = from.data;
}
~BinaryInt()
{
if(lastDataUser)
delete data;
}
string toBinaryString();
string toBaseTenString();
static BinaryInt add(BinaryInt a, BinaryInt b);
static BinaryInt mult(BinaryInt a, BinaryInt b);
};
BinaryInt BinaryInt::add(BinaryInt a, BinaryInt b)
{
int aSize = a.data->size();
int bSize = b.data->size();
int newDataSize = max(aSize, bSize);
vector<bool> * newData = new vector<bool>(newDataSize);
bool carry = 0;
for(int i = 0; i < newDataSize; i++)
{
int sum = (i < aSize ? a.data->at(i) : 0) + (i < bSize ? b.data->at(i) : 0) + carry;
(*newData)[i] = sum % 2;
carry = sum >> 1;
}
if(carry)
newData->push_back(carry);
return BinaryInt(newData);
}
string BinaryInt::toBinaryString()
{
stringstream ss;
for(int i = data->size() - 1; i >= 0; i--)
{
ss << (*data)[i];
}
return ss.str();
}
string BinaryInt::toBaseTenString()
{
//Not sure how to do this
}
I know you said in your OP that "my numbers will be much larger than any primitive types", but just hear me out on this.
In the past, I've used std::bitset to work with binary representations of numbers and to convert back and forth between various other representations. std::bitset is basically a fixed-size array of bits with some added functionality. You can read more about it here if it sounds interesting, but here's some small example code to show you how it could work:
std::bitset<8> myByte;
myByte |= 1; // myByte = 00000001
myByte <<= 4; // myByte = 00010000
myByte |= 1; // myByte = 00010001
std::cout << myByte.to_string() << '\n'; // Outputs '00010001'
std::cout << myByte.to_ullong() << '\n'; // Outputs '17'
You can access the bitset by standard array notation as well. By the way, that second conversion I showed (to_ullong) converts to an unsigned long long, which I believe has a max value of 18,446,744,073,709,551,615. If you need larger values than that, good luck!
Just iterate your vector<bool> backwards and accumulate the corresponding bit value whenever the element is true:
int base10(const std::vector<bool> &value)
{
int result = 0;
int bit = 1;
for (auto b = value.rbegin(), e = value.rend(); b != e; ++b, bit <<= 1)
result += (*b ? bit : 0);
return result;
}
Beware! This code is only a guide; you will need to take care of int overflow if the value is pretty big.
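If the value genuinely exceeds the range of int (the OP's stated case), one overflow-safe alternative, sketched below rather than taken from the OP's code, is to keep the base-10 result as a string of decimal digits and apply result = result * 2 + bit for each bit from MSB to LSB:

#include <algorithm>
#include <string>
#include <vector>

std::string toBaseTenString(const std::vector<bool>& bits)
{
    std::string digits = "0"; // decimal digits, least significant first
    for (auto it = bits.rbegin(); it != bits.rend(); ++it) // MSB -> LSB
    {
        int carry = *it ? 1 : 0; // result = result * 2 + bit
        for (char& d : digits)
        {
            int v = (d - '0') * 2 + carry;
            d = static_cast<char>('0' + v % 10);
            carry = v / 10;
        }
        if (carry)
            digits.push_back('1');
    }
    std::reverse(digits.begin(), digits.end()); // most significant digit first
    return digits;
}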
Hope it helps.
I want to convert an integer to a binary string and then store each bit of the integer string in an element of an integer array of a given size. I am sure that the input integer's binary representation won't exceed the size of the array specified. How can I do this in C++?
Pseudo code:
int value = ???? // assuming a 32 bit int
int i;
for (i = 0; i < 32; ++i) {
array[i] = (value >> i) & 1;
}
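A compilable version of that pseudo code might look like this (the input value is an arbitrary example; bit 0 lands in array[0], i.e. LSB first):

#include <cstdio>

int main()
{
    int value = 0x5A; // example input, assuming a 32-bit int
    int array[32];
    for (int i = 0; i < 32; ++i)
        array[i] = (value >> i) & 1; // extract bit i
    for (int i = 31; i >= 0; --i)    // print MSB first for readability
        std::printf("%d", array[i]);
    std::printf("\n");
    return 0;
}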
#include <climits> // CHAR_BIT
#include <iostream>
#include <iterator> // std::begin, std::end
template<class output_iterator>
void convert_number_to_array_of_digits(const unsigned number,
output_iterator first, output_iterator last)
{
const unsigned number_bits = CHAR_BIT*sizeof(int);
//extract bits one at a time
for(unsigned i=0; i<number_bits && first!=last; ++i) {
const unsigned shift_amount = number_bits-i-1;
const unsigned this_bit = (number>>shift_amount)&1;
*first = this_bit;
++first;
}
//pad the rest with zeros
while(first != last) {
*first = 0;
++first;
}
}
int main() {
int number = 413523152;
int array[32];
convert_number_to_array_of_digits(number, std::begin(array), std::end(array));
for(int i=0; i<32; ++i)
std::cout << array[i] << ' ';
}
You could use C++'s bitset library, as follows.
#include<iostream>
#include<bitset>
using namespace std;
int main()
{
int N;//input number in base 10
cin>>N;
int O[32];//The output array
bitset<32> A=N;//A will hold the binary representation of N
for(int i=0,j=31;i<32;i++,j--)
{
//Assigning the bits one by one.
O[i]=A[j];
}
return 0;
}
A couple of points to note here:
First, 32 in the bitset declaration statement tells the compiler that you want 32 bits to represent your number, so even if your number takes fewer bits to represent, the bitset variable will have 32 bits, possibly with many leading zeroes.
Second, bitset is a really flexible way of handling binary: you can give it a string or a number as input, and you can use the bitset as an array or as a string. It's a really handy library.
You can print out the bitset variable A as
cout<<A;
and see how it works.
You can do it like this (result must be zero-initialized and large enough to hold every bit):
int index = 0;
while (input != 0) {
if (input & 1)
result[index] = 1;
else
result[index] = 0;
input >>= 1; // dividing by two
index++;
}
As Mat mentioned above, an int is already a bit-vector (using bitwise operations, you can check each bit). So, you can simply try something like this:
#include <climits> // CHAR_BIT
int x = 0xdeadbeef; // Your integer?
int arr[sizeof(int)*CHAR_BIT];
for(unsigned i = 0 ; i < sizeof(int)*CHAR_BIT ; ++i) {
arr[i] = ((unsigned)x >> i) & 1; // Take the i-th bit, LSB first
}
Decimal to binary: size independent
Two ways: both store the binary representation (MSB to LSB) in a dynamically allocated array, bits.
First method:
#include<limits.h> // include for CHAR_BIT
#include<stdlib.h> // include for calloc
int* binary(int dec){
int* bits = (int*)calloc(sizeof(int) * CHAR_BIT, sizeof(int));
if(bits == NULL) return NULL;
int i = 0;
// conversion
int left = sizeof(int) * CHAR_BIT - 1;
for(i = 0; left >= 0; left--, i++){
bits[i] = !!(dec & ( 1u << left ));
}
return bits;
}
Second method:
#include<limits.h> // include for CHAR_BIT
#include<stdlib.h> // include for calloc
int* binary(unsigned int num)
{
unsigned int mask = 1u << ((sizeof(int) * CHAR_BIT) - 1);
//mask = 1000 0000 ... 0000 (only the top bit set; 32 bits for a 32-bit int)
int* bits = (int*)calloc(sizeof(int) * CHAR_BIT, sizeof(int));
if(bits == NULL) return NULL;
int i = 0;
//conversion
while(mask > 0){
if((num & mask) == 0 )
bits[i] = 0;
else
bits[i] = 1;
mask = mask >> 1 ; // Right Shift
i++;
}
return bits;
}
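A small usage sketch for either version (assuming one of the binary() definitions above is in scope; the caller owns the returned array and must free it):

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int* bits = binary(10); // 10 -> 00...01010
    if (bits == NULL) return 1;
    for (unsigned i = 0; i < sizeof(int) * CHAR_BIT; i++)
        printf("%d", bits[i]); // bits[] is stored MSB first
    printf("\n");
    free(bits); // allocated with calloc inside binary()
    return 0;
}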
I know it doesn't add as many zeros as you might wish for positive numbers, but for negative binary numbers it works pretty well. I just wanted to post a solution for once :)
int BinToDec(int Value, int Padding = 8)
{
// Packs the low 'Padding' bits of Value into an int whose decimal
// digits spell out the binary representation.
int Bin = 0;
for (int I = 1, Pos = 1; I < (Padding + 1); ++I, Pos *= 10)
{
Bin += ((Value >> (I - 1)) & 1) * Pos;
}
return Bin;
}
This is what I use; it also lets you give the number of bits that will be in the final vector, and it fills any unused bits with leading 0s.
std::vector<int> to_binary(int num_to_convert_to_binary, int num_bits_in_out_vec)
{
std::vector<int> r;
// build the binary vec backwards (LSB at .begin() and MSB at .end())
while (num_to_convert_to_binary > 0)
{
if (num_to_convert_to_binary % 2 == 0)
r.push_back(0);
else
r.push_back(1);
num_to_convert_to_binary = num_to_convert_to_binary / 2;
}
// pad to the requested width; these become the leading (high-order) zeros
while (static_cast<int>(r.size()) < num_bits_in_out_vec)
r.push_back(0);
return r;
}
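For example, a quick check (assuming the function above is in scope):

#include <iostream>
#include <vector>

int main()
{
    std::vector<int> bits = to_binary(5, 8); // 5 -> {1,0,1}, padded to 8 bits
    for (int b : bits)
        std::cout << b; // prints "10100000": LSB first, so read right to left
    std::cout << '\n';
    return 0;
}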