So this is an update to my last post, but I'm still having a lot of trouble understanding how this works. I was given the main function:
#include <stdio.h>

void set_flag(int* flag_holder, int flag_position);
int check_flag(int flag_holder, int flag_position);

int main(int argc, char* argv[])
{
    int flag_holder = 0;
    int i;

    set_flag(&flag_holder, 3);
    set_flag(&flag_holder, 16);
    set_flag(&flag_holder, 31);

    for (i = 31; i >= 0; i--) {
        printf("%d", check_flag(flag_holder, i));
        if (i % 4 == 0)
            printf(" ");
    }
    printf("\n");
    return 0;
}
And for the assignment we are supposed to write the functions set_flag and check_flag, so that the output is equal to:
1000 0000 0000 0001 0000 0000 0000 1000
So from what I understand, we're supposed to use the "set_flag" function to make sure that the nth bit is 1. And the "check_flag" function returns an integer that is 0 when the nth bit is 0, and 1 when it is 1. I don't understand what "set_flag" is really doing, and how 3, 16, and 31 will be saved as "flags" which then come back as 1's from "check_flag".
When working with binary or hexadecimal values, a common approach is to define a mask that we apply to a main value.
You can easily set one or more bits to '1' with the inclusive OR operator '|'.
e.g.: we want to set bit #0 to '1'

main value  01011000 |
mask        00000001 =
result      01011001
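In code, that example might look like this (a minimal sketch; the values and names are just for illustration):

unsigned char value = 0x58; // 0101 1000
unsigned char mask  = 0x01; // 0000 0001, bit #0
value = value | mask;       // value is now 0x59: 0101 1001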
To test a particular bit you can use the AND operator '&'.
e.g.: we want to test bit #3

main value  01011000 &
mask        00001000 =
result      00001000
Note: you may need to normalize the result; here the & operation returns either zero or non-zero (but not necessarily '1').
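The same test, sketched in code (names again illustrative):

unsigned char value = 0x58;           // 0101 1000
unsigned char mask  = 0x08;           // 0000 1000, bit #3
int bit_is_set = (value & mask) != 0; // 1 here, since bit #3 is set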
So here are the 2 functions set_flag and check_flag:
void set_flag(int* flag_holder, int flag_position) {
    int mask = 1 << flag_position;
    *flag_holder = *flag_holder | mask;
}

int check_flag(int flag_holder, int flag_position) {
    int mask = 1 << flag_position;
    int check = flag_holder & mask;
    return (check != 0 ? 1 : 0);
}
In these scenarios we need a binary mask that sets/checks only one bit. The line "int mask = 1 << flag_position;" builds this single-bit mask: it sets bit #0 to '1' and then shifts that bit left to the position we want to set or check.
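For example, here is what the shift produces for a few positions:

1 << 0 // 0000 0001
1 << 3 // 0000 1000
1 << 7 // 1000 0000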
Run it along with the main program and you will get the desired output.
I am learning C++ and I wonder if it is possible to decompose a structure object into a sequence of bits?
// The task is this! I have a structure
struct test {
    // It contains an array
private:
    int arr[8];
public:
    void init() {
        for (int i = 0; i < 8; i++) {
            arr[i] = 5;
        }
    }
};

int main() {
    // At some point this array is initialized
    test h;
    h.init();
    // Without referring to the arr field and its elements, we must convert
    // the structure to this format. We know that an int is stored there,
    // and that is 32 bits -> 00000000 00000000 00000000 00000101,
    // and there are 8 such pieces, one per element of the array.
    return -1;
}
Well, we know the size of the array too. We need to convert the structure object to a sequence of bits:
00000000000000000000000000000101000000000000000000000000000001010000000000000000000000000000010100000000000000000000000000000101000000000000000000000000000001010000000000000000000000000000010100000000000000000000000000000000010100000000000000000000000000000101
The standard answer for converting numbers into bit strings is to use std::bitset. I will use a more low-level approach: a bit mask and the & operation to mask out single bits, and then assign the corresponding characters to the resulting string.
Masking works with the bitwise AND operator, which is based on the logical AND operation:
Bit  Mask  AND
 0    0     0
 0    1     0
 1    0     0
 1    1     1
You see, 0 and 1 is 0. And 1 and 1 is 1.
That allows us to access a bit in a byte.
Byte: 10101010
Mask: 00001111
--------------
      00001010
And this is the mechanism we will use.
But I cannot imagine that this is homework, because of the dirty reinterpret_cast that is needed to access the struct from outside.
Anyway, let me present this solution to you.
I find it utterly ugly.
#include <iostream>
#include <bitset>
#include <string>

// The task is this! I have a structure
struct test {
    // It contains an array
private:
    int arr[8];
public:
    void init() {
        for (int i = 0; i < 8; i++) {
            arr[i] = 5;
        }
    }
};

// Convert an int to a string with the bit representation of the int
std::string intToBits(int value) {
    // Here we will store the result
    std::string result{};
    // We want to mask the bits from MSB to LSB
    unsigned int mask = 1u << 31;
    // Now we will work on 4 bytes with 8 bits each
    for (unsigned int byteNumber = 0; byteNumber < 4; ++byteNumber) {
        for (unsigned int bitNumber = 0; bitNumber < 8; ++bitNumber) {
            // Mask out the bit and store the resulting '1' or '0' in the string
            result += (value & mask) ? '1' : '0';
            // Next mask
            mask >>= 1;
        }
        // Add a space between bytes
        result += ' ';
    }
    // At the end, we want a period instead of the trailing space
    result.back() = '.';
    return result;
}
int main() {
    // At some point this array is initialized
    test h;
    h.init();
    // Now do the dirty, ugly, non-compliant type cast
    int* p = reinterpret_cast<int*>(&h);
    // Convert all ints and show the result
    for (unsigned int k = 0; k < 8; ++k)
        std::cout << intToBits(p[k]) << ' ';
    return 0;
}
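For comparison, the std::bitset route mentioned at the top would make intToBits unnecessary. A minimal sketch, reusing the pointer p from main above (the same non-compliant cast caveat applies):

for (unsigned int k = 0; k < 8; ++k)
    std::cout << std::bitset<32>(static_cast<unsigned int>(p[k])) << ' ';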
Here I have a binary string, for example "01010011". The positions of the set bits are 0, 1, 4, 6 (from right to left). I have to do a series of operations, something like this:
for binary string - 01010011
unset the 0th set bit. - 01010010 (new set bit positions - 1, 4, 6)
unset the 0th set bit - 01010000 (new set bit positions - 4, 6)
unset the 1st set bit - 00010000 (new set bit positions - 4)
As you can see, after each operation my binary string changes, and the next operation should be done on the new string.
My approach was to make a copy of the binary string and loop k-1 times, unsetting the rightmost set bit each time; after those k-1 iterations the rightmost set bit is the actual kth set bit, so I can take its position and unset that position in my original binary string. But this method looks very inefficient to me.
I need a more efficient approach; C/C++ (bitset) or Python code is highly appreciated.
Note:
The kth set bit will always exist in my binary string.
I would define 3 functions to handle that, using string.length():
#include <iostream>
#include <string>
using namespace std;

void setBit(string& t, const int x);
void clearBit(string& t, const int x);
void toggleBit(string& t, const int x);
The implementation could look like:

void setBit(string& t, const int x) {
    t[t.length() - 1 - x] = '1';
    cout << "new val: " << t << endl;
}

void clearBit(string& t, const int x) {
    t[t.length() - 1 - x] = '0';
    cout << "new val: " << t << endl;
}

void toggleBit(string& t, const int x) {
    char d = t[t.length() - 1 - x];
    if (d == '0') {
        setBit(t, x);
    } else {
        clearBit(t, x);
    }
}
and test it like:
int main(int argc, char** argv)
{
string test = "01010011";
setBit(test, 0);
clearBit(test, 0);
toggleBit(test, 2);
toggleBit(test, 2);
return 0;
}
If you use bitset then you can loop and find the first (rightmost) set bit; with a regular integer it can be done this way:
unsigned i, N = ...;
for (i=0; i<sizeof(unsigned)*8; ++i)
{
if (N & (1<<i))
break;
}
At this point i contains the index of the rightmost set bit in N.
Also, most CPUs have dedicated instructions to count leading zeros or trailing zeros.
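GCC and Clang, for example, expose the trailing-zero count as a builtin; a minimal sketch (note that __builtin_ctz is compiler-specific and undefined for N == 0):

unsigned N = 0x53;             // 01010011
unsigned i = __builtin_ctz(N); // i == 0: bit 0 is the rightmost set bit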
How do I unset the kth set bit in a binary string?
unsigned i, N = ..., k = ...; // where k is [1..32] for a 32-bit unsigned int
for (i = 0; i < sizeof(unsigned) * 8; ++i)
{
    if (N & (1 << i))
    {
        if (--k == 0)
        {
            N &= ~(1 << i); // unset the k-th set bit in N
            break;
        }
    }
}
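As a side note, the "unset the rightmost set bit" step that the questioner's copy-and-loop approach relies on has a well-known branch-free form, n & (n - 1). A minimal illustration with the question's value:

unsigned n = 0x53; // 01010011
n = n & (n - 1);   // 01010010: rightmost set bit cleared
n = n & (n - 1);   // 01010000: and again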
How about using a lambda, as follows? I define a function that takes a reference to your bit string and the index k of the set bit to clear.
void unset_kth(std::string& bit, const size_t k) {
size_t found = 0;
std::reverse(bit.begin(), bit.end());
std::replace_if(bit.begin(), bit.end(),
[&found, k](char letter) -> bool {
if(letter == '1') {
if (found == k) {
found++;
return true;
} else {
found++;
}
}
return false;
}, '0');
std::reverse(bit.begin(), bit.end());
}
and use this function as you wish
std::string bit = "01010011";
unset_kth(bit, 0); // 01010010
unset_kth(bit, 0); // 01010000
unset_kth(bit, 1); // 00010000
This code needs the string and algorithm headers.
I need to fetch the last 6 bits of an integer or Uint32. For example, if I have a value of 183, I need the last six bits, which will be 110111, i.e. 55.
I have written a small piece of code, but it's not behaving as expected. Could you guys please point out where I am making a mistake?
int compress8bitTolessBit( int value_to_compress, int no_of_bits_to_compress )
{
int ret = 0;
while(no_of_bits_to_compress--)
{
std::cout << " the value of bits "<< no_of_bits_to_compress << std::endl;
ret >>= 1;
ret |= ( value_to_compress%2 );
value_to_compress /= 2;
}
return ret;
}
int _tmain(int argc, _TCHAR* argv[])
{
int val = compress8bitTolessBit( 183, 5 );
std::cout <<" the value is "<< val << std::endl;
system("pause>nul");
return 0;
}
You have entered the realm of binary arithmetic. C++ has built-in operators for this kind of thing. Getting certain bits of an integer is done with the binary AND operator:
    0101 0101
AND 0000 1111
    ---------
    0000 0101
In C++ this is:
int n = 0x55 & 0xF;
// n = 0x5
So to get the right-most 6 bits,
int n = original_value & 0x3F;
And to get the right-most N bits,
int n = original_value & ((1 << N) - 1);
Here is more information on
Binary arithmetic operators in C++
Binary operators in general
I don't get the problem; can't you just use bitwise operators? E.g.:
u32 trimmed = value & 0x3F;
This will keep just the 6 least significant bits by using the bitwise AND operator.
tl;dr:
int val = x & 0x3F;
int value = input & ((1 << no_of_bits_to_compress) - 1);
This one calculates the nth power of two, 1 << no_of_bits_to_compress, and subtracts 1 to get a mask with all n low bits set; for n = 6 that mask is 0x3F.
The last k bits of an integer A can be obtained in two ways:
1. A % (1<<k); // simply A % 2^k
2. A - ((A>>k)<<k);
The first method uses the fact that the last k bits are exactly what is trimmed off after doing k right shifts (dividing by 2^k).
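A quick check with the question's numbers (A = 183, k = 6):

int A = 183;                  // 10110111
int a1 = A % (1 << 6);        // 183 % 64  -> 55
int a2 = A - ((A >> 6) << 6); // 183 - 128 -> 55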
Let's say I have a dynamically allocated array.
int* array = new int[10];
That is 10*4=40 bytes, or 10*32=320 bits. I want to read the 2nd bit of the 30th byte, or the 242nd bit. What is the easiest way to do so? I know I can access the 30th byte using array[30], but accessing individual bits is more tricky.
bool bitset(void const * data, int bitindex) {
    int byte = bitindex / 8; // which byte the bit lives in
    int bit = bitindex % 8;  // position of the bit within that byte
    unsigned char const * u = (unsigned char const *) data;
    return (u[byte] & (1 << bit)) != 0;
}
This works!
#include <stdio.h>

#define GET_BIT(p, n) ((((unsigned char *)(p))[(n) / 8] >> ((n) % 8)) & 0x01)

int main()
{
    int myArray[2] = { 0xaaaaaaaa, 0x00ff00ff };
    for (int i = 0; i < 2 * 32; i++)
        printf("%d", GET_BIT(myArray, i));
    return 0;
}
Output:
0101010101010101010101010101010111111111000000001111111100000000
Be careful of the endianness!
First, if you're doing bitwise operations, it's usually preferable to make the elements an unsigned integral type (although in this case, it really doesn't make that much difference). As for accessing the bits: to access bit i in an array of n ints:
static int const bitsPerWord = sizeof(int) * CHAR_BIT;
assert( i >= 0 && i < n * bitsPerWord );
int wordIndex = i / bitsPerWord;
int bitIndex = i % bitsPerWord;
then to read:
return (array[wordIndex] & (1 << bitIndex)) != 0;
to set:
array[wordIndex] |= 1 << bitIndex;
and to reset:
array[wordIndex] &= ~(1 << bitIndex);
Or you can use bitset, if n is constant, or vector<bool> or boost::dynamic_bitset if it's not, and let someone else do the work.
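A minimal sketch of those standard-library alternatives (the size 320 and the names are just for illustration):

#include <bitset>
#include <vector>

std::bitset<320> fixedBits;     // n known at compile time
fixedBits.set(242);             // set bit 242
bool b = fixedBits.test(242);   // read it back

std::vector<bool> dynBits(320); // n known only at run time
dynBits[242] = true;
bool c = dynBits[242];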
You can use something like this:
!((array[30] & 2) == 0)
array[30] is the integer.
& 2 is an AND operation that masks off all but the second bit (2 = 00000010).
== 0 checks whether the masked result is 0.
! negates that result, because we're checking whether the bit is 1, not 0.
You need bit operations here...
if(array[5] & 0x1)
{
//the first bit in array[5] is 1
}
else
{
//the first bit is 0
}
if(array[5] & 0x8)
{
//the 4th bit in array[5] is 1
}
else
{
//the 4th bit is 0
}
0x8 is 00001000 in binary. The AND masks out all the other bits and lets you see whether that bit is 1 or 0.
int is typically 32 bits, so you would need to do some arithmetic to get a certain bit number in the entire array.
EDITED based on the comment below - the array contains 32-bit ints, not 8-bit uchars.
int pos = 241; // I start at index 0
bool bit242 = (array[pos/32] >> (pos%32)) & 1;
I made a function that converts numbers to binary. For some reason it's not working. It gives the wrong output. The output is in binary format, but it always gives the wrong result for binary numbers that end with a zero (at least, that's what I noticed).
unsigned long long to_binary(unsigned long long x)
{
int rem;
unsigned long long converted = 0;
while (x > 1)
{
rem = x % 2;
x /= 2;
converted += rem;
converted *= 10;
}
converted += x;
return converted;
}
Please help me fix it, this is really frustrating..
Thanks!
Use std::bitset to do the translation:
#include <iostream>
#include <bitset>
#include <limits.h>
int main()
{
int val;
std::cin >> val;
std::bitset<sizeof(int) * CHAR_BIT> bits(val);
std::cout << bits << "\n";
}
You're reversing the bits.
You cannot use the remaining value of x as an indicator of when to terminate the loop.
Consider e.g. 4.
After first loop iteration:
rem == 0
converted == 0
x == 2
After second loop iteration:
rem == 0
converted == 0
x == 1
And then you set converted to 1.
Try:
int i = sizeof(x) * 8; // i is now the number of bits in x
while (i > 0) {
    --i;
    converted *= 10;
    converted |= (x >> i) & 1;
    // Shift x right to get bit number i in the rightmost position,
    // then AND with 1 to remove any bits left of bit number i,
    // and finally OR it into the rightmost position in converted
}
Running the above code with x as an unsigned char (8 bits) with value 129 (binary 10000001):
We start with i = 8, the size of unsigned char times 8. In the first loop iteration i will be 7. We then take x (129) and shift it right 7 bits, which gives the value 1. This is OR'ed into converted, which becomes 1. In the next iteration, we start by multiplying converted by 10 (so it is now 10), then shift x right 6 bits (the value becomes 2) and AND it with 1 (the value becomes 0). We OR 0 into converted, which is then still 10. The 3rd through 7th iterations do the same thing: converted is multiplied by 10, and one specific bit is extracted from x and OR'ed into converted. After these iterations, converted is 1000000.
In the last iteration, converted is first multiplied by 10 and becomes 10000000; we shift x right 0 bits, yielding the original value 129. We AND x with 1, which gives the value 1. That 1 is then OR'ed into converted, which becomes 10000001.
You're doing it wrong ;)
http://www.bellaonline.com/articles/art31011.asp
The remainder of the first division is the rightmost bit in the binary form; with your function it becomes the leftmost bit.
You can do something like this:

unsigned long long to_binary(unsigned long long x)
{
    int rem;
    unsigned long long converted = 0;
    unsigned long long multiplicator = 1;
    while (x > 0)
    {
        rem = x % 2;
        x /= 2;
        converted += rem * multiplicator;
        multiplicator *= 10;
    }
    return converted;
}
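For instance, with this version:

to_binary(2) // returns 10
to_binary(5) // returns 101
to_binary(6) // returns 110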
Edit: the code proposed by CygnusX1 is a little more efficient, but I think it is a bit less readable; I'd advise taking his version.
Improvement: I changed the stop condition of the while loop so we can remove the line adding x at the end.
You are actually reversing the binary number!
to_binary(2) will return 01 instead of 10. When the leading zeros are truncated, it will look the same as 1.
How about doing it this way:

unsigned long long digit = 1;
while (x > 0) {
    if (x % 2)
        converted += digit;
    x /= 2;
    digit *= 10;
}
What about std::bitset?
http://www.cplusplus.com/reference/stl/bitset/to_string/
If you want to display your number as binary, you need to format it as a string. The easiest way to do this that I know of is to use the STL bitset.
#include <bitset>
#include <iostream>
#include <sstream>

typedef std::bitset<64> bitset64;

std::string to_binary(const unsigned long long int& n)
{
    const static unsigned long long mask = 0xffffffff;
    unsigned long long upper = (n >> 32) & mask;
    unsigned long long lower = n & mask;
    bitset64 upper_bs(upper);
    bitset64 lower_bs(lower);
    bitset64 result = (upper_bs << 32) | lower_bs;
    std::stringstream ss;
    ss << result;
    return ss.str();
}

int main()
{
    for (int i = 0; i < 10; ++i)
    {
        std::cout << i << ": " << to_binary(i) << "\n";
    }
    return 1;
}
The output from this program is:
0: 0000000000000000000000000000000000000000000000000000000000000000
1: 0000000000000000000000000000000000000000000000000000000000000001
2: 0000000000000000000000000000000000000000000000000000000000000010
3: 0000000000000000000000000000000000000000000000000000000000000011
4: 0000000000000000000000000000000000000000000000000000000000000100
5: 0000000000000000000000000000000000000000000000000000000000000101
6: 0000000000000000000000000000000000000000000000000000000000000110
7: 0000000000000000000000000000000000000000000000000000000000000111
8: 0000000000000000000000000000000000000000000000000000000000001000
9: 0000000000000000000000000000000000000000000000000000000000001001
If your purpose is only to display them in their binary representation, then you may try itoa (note that itoa is non-standard) or std::bitset.
#include <stdlib.h>
#include <stdio.h>
#include <iostream>
#include <bitset>
#include <limits>

using namespace std;

int main()
{
    unsigned long long x = 1234567890;

    // C way (itoa is non-standard; note the +1 for the terminating NUL)
    char buffer[sizeof(x) * 8 + 1];
    itoa(x, buffer, 2);
    printf("binary: %s\n", buffer);

    // C++ way
    cout << bitset<numeric_limits<unsigned long long>::digits>(x) << endl;

    return EXIT_SUCCESS;
}
// Convert num to a string of digits in the given base (2..16)
void To(long long num, char* buff, int base)
{
    if (buff == NULL) return;
    long long m = 0, no = num, i = 1;
    // Count how many digits are needed
    while ((no /= base) > 0) i++;
    buff[i] = '\0';
    no = num;
    // Emit the digits from least to most significant
    while (no > 0)
    {
        m = no % base;
        no = no / base;
        buff[--i] = (m > 9) ? ('A' + m - 10) : ('0' + m);
    }
}
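A hypothetical quick check, with a buffer sized for up to 64 binary digits plus the terminator:

char buff[65];
To(241, buff, 2); // buff now holds "11110001"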
Here is a simple solution.

#include <iostream>
using namespace std;

int main()
{
    int num = 241; // assuming a 16-bit integer
    for (int i = 15; i >= 0; i--) cout << ((num >> i) & 1); // MSB first
    cout << endl;
    for (int i = 0; i < 16; i++) cout << ((num >> i) & 1);  // LSB first
    cout << endl;
    return 0;
}