dynamic size of std::bitset initialization [duplicate] - c++

I want to make a simple program that takes a number of bits as input and outputs all binary numbers that fit in that many bits (example: I type 3; it shows 000, 001, 010, 011, 100, 101, 110, 111).
The only problem is in the second for loop, where I try to use a variable in bitset<bits>, but it wants a constant number.
If you could help me find the solution I would be really grateful.
Here's the code:
#include <iostream>
#include <bitset>
#include <cmath>
using namespace std;

int main() {
    int maximum_value = 0, x_temp = 10;
    //cin >> x_temp;
    int const bits = x_temp;
    for (int i = 1; i <= bits; i++) {
        maximum_value += pow(2, bits - i);
    }
    for (int i = maximum_value; i >= 0; i--)
        cout << bitset<bits>(maximum_value - i) << endl;
    return 0;
}

A numeric ("non-type", as C++ calls it) template parameter must be a compile-time constant, so you cannot use a user-supplied number. Use a large constant number (e.g. 64) instead. You need another integer that will limit your output:
    int x_temp = 10;
    cin >> x_temp;
    int const bits = 64;
    ...
Here 64 is a sort of maximal value you can use, because bitset has a constructor taking an unsigned long long argument, which has at least 64 bits (it may have more).
However, if you use int for your intermediate calculations, your code supports a maximum of 14 bits reliably (without overflow). If you want to support more than 14 bits (e.g. 64), use a larger type, like uint32_t or uint64_t.
A problem with holding more bits than needed is that the additional bits will be displayed. To cut them out, use substr:
    cout << bitset<64>(...).to_string().substr(64 - x_temp);
Here to_string converts the bitset to a string of 64 characters, and substr keeps only the last x_temp of them, cutting off the leading 64 - x_temp characters.
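Putting the advice together, a minimal sketch of the whole program might look like this (it assumes 1 <= x_temp <= 63 so the shift stays in range, and it replaces the pow loop with a single shift):
    #include <iostream>
    #include <bitset>
    #include <string>
    using namespace std;

    int main() {
        int x_temp = 10;
        cin >> x_temp;
        // 2^x_temp patterns: 0 .. 2^x_temp - 1
        unsigned long long maximum_value = (1ULL << x_temp) - 1;
        for (unsigned long long v = 0; v <= maximum_value; ++v)
            cout << bitset<64>(v).to_string().substr(64 - x_temp) << endl;
        return 0;
    }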

You have to make bits a compile-time constant; for example, define it as a global constant:
#include <iostream>
#include <math.h>
#include <bitset>
using namespace std;

const unsigned bits = 10;

int main() {
    int maximum_value = 0, x_temp = 10;
    for (int i = 1; i <= bits; i++) {
        maximum_value += pow(2, bits - i);
    }
    for (int i = maximum_value; i >= 0; i--)
        cout << bitset<bits>(maximum_value - i) << endl;
    return 0;
}
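Note that this fixes the width at compile time, so reading the bit count from cin can no longer change it; this variant only fits if 10 bits is always what you want.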

Related

How to convert an unsigned char variable that holds two hexadecimal digits into two chars, each with one hex digit

I need to get the hash (SHA-1) value of a given unsigned char array, so I have used OpenSSL. The SHA1 function generates the hash value in an unsigned char array which has 20 values; each value actually represents two hexadecimal digits.
But I need to convert the generated array (with length 20) to an array of chars with 40 values.
For example, now hashValue[0] is "a0", but I want to have hashValue[0] = "a" and hashValue[1] = "0"
#include <iostream>
#include <openssl/sha.h> // For sha1
using namespace std;

int main() {
    unsigned char plainText[] = "compute sha1";
    unsigned char hashValue[20];
    SHA1(plainText, sizeof(plainText), hashValue);
    for (int i = 0; i < 20; i++) {
        printf("%02x", hashValue[i]);
    }
    printf("\n");
    return 0;
}
You could create another array and use sprintf, or the safer snprintf, to print into it instead of to standard output.
Something like this:
#include <iostream>
#include <stdio.h>
#include <openssl/sha.h> // For sha1
using namespace std;

int main() {
    unsigned char plainText[] = "compute sha1";
    unsigned char hashValue[20];
    char output[41];
    SHA1(plainText, sizeof(plainText), hashValue);
    char *c_output = output;
    for (int i = 0; i < 20; i++, c_output += 2) {
        // Writes two hex chars plus a terminating '\0';
        // the next iteration overwrites the '\0'.
        snprintf(c_output, 3, "%02x", hashValue[i]);
    }
    return 0;
}
Now output[0] == 'a' and output[1] == '0'.
There might be other, even better solutions; this is just the first that comes to mind.
EDIT: Added fix from comments.
It seems like you want to separate the high-order and low-order nibbles.
To isolate the high-order nibble, shift right 4 bits.
To isolate the low-order nibble, apply a mask: AND with 0x0F.
    int x = 0x3A;
    int y = x >> 4;   // get high order nibble
    int z = x & 0x0F; // get low order nibble
    printf("%02x\n", x);
    printf("%02x\n", y);
    printf("%02x\n", z);
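Applied to the SHA-1 buffer from the question, a minimal sketch of this nibble approach (avoiding snprintf entirely; the names toHex, hexDigits, and output are illustrative, not from any library) could be:
    // Sketch: expand a 20-byte digest into 40 hex characters by
    // splitting each byte into its two nibbles.
    void toHex(const unsigned char hashValue[20], char output[41])
    {
        const char *hexDigits = "0123456789abcdef";
        for (int i = 0; i < 20; i++) {
            output[2 * i]     = hexDigits[hashValue[i] >> 4];   // high nibble
            output[2 * i + 1] = hexDigits[hashValue[i] & 0x0F]; // low nibble
        }
        output[40] = '\0'; // terminate so output is usable as a C string
    }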

How does C++ compiler represent int numbers in binary code?

I've written a program which shows the binary representation of a particular integer value, using bitwise operators in C++. For even numbers it works as I expect, but for odd ones it adds a 1 to the left of the binary representation.
#include <iostream>
using std::cout;
using std::cin;
using std::endl;

int main()
{
    unsigned int a = 128;
    for (int i = sizeof(a) * 8; i >= 0; --i) {
        if (a & (1UL << i)) { // if i-th digit is 1
            cout << 1;        // Output 1
        }
        else {
            cout << 0;        // Otherwise output 0
        }
    }
    cout << endl;
    system("pause");
    return 0;
}
Results:
For a = 128: 000000000000000000000000010000000,
For a = 127: 100000000000000000000000001111111
You might prefer CHAR_BIT macro instead of raw 8 (#include <climits>).
Consider your start value! Assuming unsigned int has 32 bits, your start value is int i = 4 * 8 = 32, so 1U << i shifts the value out of range. This is undefined behaviour and could result in anything; evidently your specific compiler or hardware shifts modulo 32, so 1U << 32 behaves like 1U << 0 and the first digit printed is a & 1, which explains the unexpected leading 1 for odd numbers. Did you notice that you actually printed 33 digits instead of only 32?
The problem is here:
    for (int i = sizeof(a) * 8; i >= 0; --i) {
It should be:
    for (int i = sizeof(a) * 8; i--; ) {
The i-- form tests i first and then decrements it, so the body sees indices 31 down to 0 and the out-of-range shift never happens.
http://coliru.stacked-crooked.com/a/8cb2b745063883fa
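Putting both suggestions together, a corrected sketch of the whole loop (not the poster's exact code) could look like:
    #include <iostream>
    #include <climits>
    using std::cout;
    using std::endl;

    int main()
    {
        unsigned int a = 127;
        // i-- tests first, then decrements: the body sees indices
        // (sizeof(a) * CHAR_BIT - 1) down to 0, so the shift never goes
        // out of range and exactly 32 digits are printed.
        for (int i = sizeof(a) * CHAR_BIT; i--; ) {
            cout << ((a & (1UL << i)) ? 1 : 0);
        }
        cout << endl;
        return 0;
    }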

setting bits in a 64bit int

I am trying to set a group of bits in a 64-bit int to ones.
As you can see in the loop in main, I'm setting bits 40 to 47 to 1 using the setBit function.
For a reason I don't understand, bits 16 to 23 are also set to 1, as you can see from the program's output:
0000000011111111000000000000000000000000111111110000000000000000
I couldn't mimic the same behavior with a regular int.
BTW, I also tried using an unsigned long long instead of int64_t, with the same problem.
What am I missing?
#include <iostream>
#include <cstdint>
using namespace std;

int64_t x = 0;

void setBit(int64_t *num, int index)
{
    *num |= (1 << index);
}

bool retreiveBit(int64_t *num, int index)
{
    return *num & (1 << index);
}

int main()
{
    for (int i = 40; i < 48; ++i)
        setBit(&x, i);
    for (int i = 0; i < 64; ++i)
    {
        int digit = retreiveBit(&x, i);
        cout << digit;
    }
    return 0;
}
In the sub-expression:
    (1 << index)
the type of the constant 1 is int, so this shift is done in an int. If your int isn't 64 bits wide (it probably isn't), then this shift has undefined behaviour.
You need to use a constant that is at least 64 bits wide:
    (1LL << index)
(You need to do this in both the setBit() and retreiveBit() functions.)
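For reference, a sketch of the two functions with the wider constant applied (if you ever need bit 63 of a signed type, prefer an unsigned type like uint64_t with 1ULL to stay clear of the sign bit):
    #include <cstdint>

    void setBit(int64_t *num, int index)
    {
        // 1LL is a long long, so the shift happens in a 64-bit type
        *num |= (1LL << index);
    }

    bool retreiveBit(int64_t *num, int index)
    {
        return *num & (1LL << index);
    }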

How do I convert bitset to array of bytes/uint8?

I need to extract bytes from a bitset which may (or may not) contain a multiple of CHAR_BIT bits. I know how many of the bits in the bitset I need to put into an array. For example,
the bitset is declared as std::bitset<40> id;
There is a separate variable nBits saying how many of the bits in id are usable. Now I want to extract those bits in multiples of CHAR_BIT. I also need to take care of cases where nBits % CHAR_BIT != 0. I am okay with putting this into an array of uint8_t.
You can use boost::dynamic_bitset, which can be converted to a range of "blocks" using boost::to_block_range.
#include <cstdlib>
#include <cstdint>
#include <iterator>
#include <vector>
#include <boost/dynamic_bitset.hpp>

int main()
{
    typedef uint8_t Block; // Make the block size one byte
    typedef boost::dynamic_bitset<Block> Bitset;
    Bitset bitset(40); // 40 bits

    // Assign random bits
    for (int i = 0; i < 40; ++i)
    {
        bitset[i] = std::rand() % 2;
    }

    // Copy bytes to buffer
    std::vector<Block> bytes;
    boost::to_block_range(bitset, std::back_inserter(bytes));
}
Unfortunately there's no good way within the language, assuming you need more than the number of bits in an unsigned long (in which case you could use to_ulong). You'll have to iterate over all the bits and generate the array of bytes yourself.
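A minimal sketch of that manual loop, assuming bit 0 of the bitset goes into the low bit of the first byte and rounding up when nBits % CHAR_BIT != 0 (the names toBytes and nBits are illustrative, not from any library):
    #include <bitset>
    #include <climits>
    #include <cstdint>
    #include <vector>

    std::vector<uint8_t> toBytes(const std::bitset<40>& id, int nBits)
    {
        // One byte per CHAR_BIT bits, rounded up for a partial last byte
        std::vector<uint8_t> bytes((nBits + CHAR_BIT - 1) / CHAR_BIT, 0);
        for (int i = 0; i < nBits; ++i) {
            if (id[i])
                bytes[i / CHAR_BIT] |= static_cast<uint8_t>(1u << (i % CHAR_BIT));
        }
        return bytes;
    }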
With standard C++11, you can get the bytes out of your 40-bit bitset with shifting and masking. I didn't deal with widths other than 8 and 40, or with cases where the number of bits is not a multiple of the byte size.
#include <bitset>
#include <iostream>
#include <iomanip>
#include <cstdint>

int main() {
    constexpr int numBits = 40;
    std::bitset<numBits> foo(0x1234567890);
    std::bitset<numBits> mask(0xff);
    for (int i = 0; i < numBits / 8; ++i) {
        // Shift the wanted byte down, mask everything else off,
        // then narrow the result to a uint8_t.
        auto byte =
            static_cast<uint8_t>(((foo >> (8 * i)) & mask).to_ulong());
        std::cout << std::hex << std::setfill('0') << std::setw(2)
                  << static_cast<int>(byte) << std::endl;
    }
}
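With the value 0x1234567890 above, this should print the bytes least significant first, one per line: 90, 78, 56, 34, 12.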

Converting to binary in C++

I made a function that converts numbers to binary. For some reason it's not working: it gives the wrong output. The output is in binary format, but it always gives the wrong result for binary numbers that end with a zero (at least, that's what I noticed).
unsigned long long to_binary(unsigned long long x)
{
    int rem;
    unsigned long long converted = 0;
    while (x > 1)
    {
        rem = x % 2;
        x /= 2;
        converted += rem;
        converted *= 10;
    }
    converted += x;
    return converted;
}
Please help me fix it, this is really frustrating..
Thanks!
Use std::bitset to do the translation:
#include <iostream>
#include <bitset>
#include <limits.h>

int main()
{
    int val;
    std::cin >> val;
    std::bitset<sizeof(int) * CHAR_BIT> bits(val);
    std::cout << bits << "\n";
}
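For example, entering 5 on a typical platform with a 32-bit int should print 00000000000000000000000000000101.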
You're reversing the bits, and you cannot use the remainder of x as an indicator of when to terminate the loop.
Consider e.g. 4.
After the first loop iteration:
    rem == 0
    converted == 0
    x == 2
After the second loop iteration:
    rem == 0
    converted == 0
    x == 1
And then you set converted to 1.
Try:
    int i = sizeof(x) * 8; // i is now number of bits in x
    while (i > 0) {
        --i;
        converted *= 10;
        converted |= (x >> i) & 1;
        // Shift x right to get bit number i in the rightmost position,
        // then AND with 1 to remove any bits left of bit number i,
        // and finally OR it into the rightmost position in converted
    }
Consider running the above code with x as an unsigned char (8 bits) with value 129 (binary 10000001). We start with i = 8 (size of unsigned char * 8). In the first loop iteration i will be 7. We then take x (129) and shift it right 7 bits, which gives the value 1. This is OR'ed into converted, which becomes 1. In the next iteration, we start by multiplying converted by 10 (so now it's 10), then shift x right 6 bits (the value becomes 2) and AND it with 1 (the value becomes 0). We OR 0 with converted, which is then still 10. The 3rd to 7th iterations do the same thing: converted is multiplied by 10, and one specific bit is extracted from x and OR'ed into converted. After these iterations, converted is 1000000.
In the last iteration, first converted is multiplied with 10 and becomes 10000000, we shift x right 0 bits, yielding the original value 129. We AND x with 1, this gives the value 1. 1 is then OR'ed into converted, which becomes 10000001.
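One caveat with this decimal-encoding trick (here and in the original function): converted holds the binary digits as a base-10 number, so an unsigned long long can only hold about 19 such digits before it overflows. It is fine for the 8-bit example above, but not for a full 64-bit input.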
You're doing it wrong ;)
http://www.bellaonline.com/articles/art31011.asp
The remainder of the first division is the rightmost bit in the binary form; with your function it becomes the leftmost bit.
You can do something like this :
unsigned long long to_binary(unsigned long long x)
{
    int rem;
    unsigned long long converted = 0;
    unsigned long long multiplicator = 1;
    while (x > 0)
    {
        rem = x % 2;
        x /= 2;
        converted += rem * multiplicator;
        multiplicator *= 10;
    }
    return converted;
}
edit: the code proposed by CygnusX1 is a little more efficient, but harder to follow I think; I'd advise taking his version.
improvement: I changed the stop condition of the while loop, so we can remove the line adding x at the end.
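For example, to_binary(6) computes rem = 0, 1, 1 from the least significant bit up and returns 0*1 + 1*10 + 1*100 = 110.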
You are actually reversing the binary number!
to_binary(2) will return 01 instead of 10, and when leading zeroes are truncated it will look the same as 1.
how about doing it this way:
    unsigned long long digit = 1;
    while (x > 0) {
        if (x % 2)
            converted += digit;
        x /= 2;
        digit *= 10;
    }
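Wrapped up as a self-contained sketch (the same idea, just with the declarations filled in):
    unsigned long long to_binary(unsigned long long x)
    {
        unsigned long long converted = 0;
        unsigned long long digit = 1; // decimal place value for the current bit
        while (x > 0) {
            if (x % 2)
                converted += digit; // set this "binary digit" in the result
            x /= 2;
            digit *= 10;
        }
        return converted;
    }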
What about std::bitset?
http://www.cplusplus.com/reference/stl/bitset/to_string/
If you want to display your number as binary, you need to format it as a string. The easiest way to do this that I know of is to use the STL bitset.
#include <bitset>
#include <iostream>
#include <sstream>

typedef std::bitset<64> bitset64;

std::string to_binary(const unsigned long long int& n)
{
    // Unsigned mask and halves avoid implementation-defined narrowing
    const static unsigned int mask = 0xffffffff;
    unsigned int upper = (n >> 32) & mask;
    unsigned int lower = n & mask;
    bitset64 upper_bs(upper);
    bitset64 lower_bs(lower);
    bitset64 result = (upper_bs << 32) | lower_bs;
    std::stringstream ss;
    ss << result;
    return ss.str();
}

int main()
{
    for (int i = 0; i < 10; ++i)
    {
        std::cout << i << ": " << to_binary(i) << "\n";
    }
    return 0;
}
The output from this program is:
0: 0000000000000000000000000000000000000000000000000000000000000000
1: 0000000000000000000000000000000000000000000000000000000000000001
2: 0000000000000000000000000000000000000000000000000000000000000010
3: 0000000000000000000000000000000000000000000000000000000000000011
4: 0000000000000000000000000000000000000000000000000000000000000100
5: 0000000000000000000000000000000000000000000000000000000000000101
6: 0000000000000000000000000000000000000000000000000000000000000110
7: 0000000000000000000000000000000000000000000000000000000000000111
8: 0000000000000000000000000000000000000000000000000000000000001000
9: 0000000000000000000000000000000000000000000000000000000000001001
If your purpose is only to display them in their binary representation, you may try itoa or std::bitset.
#include <stdlib.h>
#include <stdio.h>
#include <iostream>
#include <limits>
#include <bitset>
using namespace std;

int main()
{
    unsigned long long x = 1234567890;
    // C way (note: itoa is a nonstandard extension and takes an int)
    char buffer[sizeof(x) * 8 + 1]; // +1 for the terminating '\0'
    itoa (x, buffer, 2);
    printf ("binary: %s\n", buffer);
    // C++ way
    cout << bitset<numeric_limits<unsigned long long>::digits>(x) << endl;
    return EXIT_SUCCESS;
}
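Note that itoa ships with some compilers but is not part of standard C or C++, so the std::bitset line is the portable option (and it also avoids narrowing x to int).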
void To(long long num, char *buff, int base)
{
    if (buff == NULL) return;
    long long m = 0, no = num, i = 1;
    // Count how many digits num has in the given base
    while ((no /= base) > 0) i++;
    buff[i] = '\0';
    no = num;
    // Fill the buffer from the least significant digit backwards
    while (no > 0)
    {
        m = no % base;
        no = no / base;
        buff[--i] = (m > 9) ? ((base == 16) ? ('A' + m - 10) : m) : m + '0';
    }
}
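A hypothetical usage example (not from the original answer): the buffer just has to be big enough for the digits plus the terminating '\0'; 65 chars covers a 64-bit value in base 2.
    #include <cstdio>

    int main()
    {
        char buff[65];
        To(241, buff, 2);   // uses To() from above
        std::printf("%s\n", buff); // prints 11110001
        To(241, buff, 16);
        std::printf("%s\n", buff); // prints F1
        return 0;
    }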
Here is a simple solution.
#include <iostream>
using namespace std;

int main()
{
    int num = 241; // Assuming 16-bit integer
    for (int i = 15; i >= 0; i--) cout << ((num >> i) & 1); // MSB first
    cout << endl;
    for (int i = 0; i < 16; i++) cout << ((num >> i) & 1);  // LSB first
    cout << endl;
    return 0;
}
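With num = 241, the first loop prints the bits most significant first (0000000011110001) and the second prints the same bits least significant first (1000111100000000).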