int to binary code explanation C++ [closed] - c++

I'm struggling with the piece of code below, which is used to convert an integer into binary. Can someone explain it more clearly? Especially the '0' +
for (; d > 0; d--) {
    buffer[index++] = '0' + (x & 1);
    x >>= 1;
}

First of all, "index" is initialized to 0. But what is the definition of "d"?
We have an array of characters named "buffer" and "x" is an integer to be converted.
Now, in "x & 1", "&" is Bitwise AND operator. If we operate "x & n", it changes the last n least significant bits as,
1 & 1 = 1,
1 & 0 = 0,
0 & 1 = 0,
0 & 0 = 0.
if we execute 4 & 1,
100
001
---
000
then, it returns 0.
if we execute 9 & 1,
1001
0001
----
0001
then, it returns 1.
Basically, if x is an even number, x & 1 returns 0; if x is odd, it returns 1.
Now, this 0 or 1 is added to '0' (ASCII 48):
if x is even, '0' + (x & 1) stays '0'; otherwise it becomes '1', since x & 1 returns 1 and '0' + 1 is '1'.
After that, in "x >>= 1", ">>" is the bitwise right shift operator, which is equivalent to x = x / 2 (or x /= 2). But since these are integers, any remainder is simply discarded.
Consider x = 12, which is 1100 in binary.
If we execute x >>= 1, x becomes 6: the last 0 of 1100 is shifted away, leaving 110.
If we execute x >>= 1 again, x becomes 3: the last 0 of 110 is shifted away, leaving 11.
If we execute x >>= 1 again, x becomes 1: the last 1 of 11 is shifted away, leaving 1.
If we execute x >>= 1 again, x becomes 0: the last 1 of 1 is shifted away, leaving 0.
Finally, on each pass the loop stores '0' in buffer[index] if x is even and '1' if it is odd, then shifts x right and moves on to the next bit, repeating d times.
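Putting it together, here is a minimal sketch of how that loop might sit in a complete conversion (the names buffer, index and d follow the snippet; the choice of d = 8 bits, the buffer size and the final reversal are assumptions, since the surrounding code isn't shown):

#include <algorithm>
#include <iostream>

int main() {
    int x = 9;             // value to convert
    int d = 8;             // assumed number of binary digits to emit
    char buffer[33] = {};  // room for up to 32 digits plus a terminating '\0'
    int index = 0;

    for (; d > 0; d--) {
        buffer[index++] = '0' + (x & 1);  // emit the lowest bit as '0' or '1'
        x >>= 1;                          // drop that bit
    }

    // The digits come out least significant first, so reverse them to get
    // the usual most-significant-first notation.
    std::reverse(buffer, buffer + index);
    std::cout << buffer << '\n';          // prints 00001001
}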

This is a loop that starts with a variable containing some value and then creates a string of character digits of ones and zeros.
The '0' + (x & 1) takes the character for the digit zero, '0', and adds to it the value of the rightmost bit of x, which is either zero or one. If the bit is zero, the result of the addition is '0'; if the bit is one, the result is '1'.
This character is then put into the buffer, and the variable x is right shifted by one bit to move the next binary digit into the rightmost place.
The addition is then repeated.
The result is that you have a text string of zeros and ones as character digits.
Are you sure this is the correct source code? Looks to me like the text string result would need to be reversed in order to correctly represent the binary value.

buffer[index++] = '0'+ (x & 1);
This line progresses through what is presumably a char array, setting each character to '0' plus a value that will be either 0 or 1. '0' + 0 is '0'; '0' + 1 is '1'. The reason x & 1 is either 0 or 1 is that this code checks whether the low bit of x is set. This works because the next line right shifts x by 1 and stores the result back into x, which knocks off the low bit and shifts every other bit over by one. In this way the whole of x is traversed and each bit is checked.
Please note, however. It appears that it will be written BACKWARDS!

In ASCII, '0' has the value 48 and '1' has the value 49. In other words, if you write putchar(48); you see a 0 on the screen.
The buffer, presumably a char array, is assigned either 48 or 49, because x & 1 evaluates to either 0 or 1 and that is added to 48.
so say you have a value x = 225 and want to convert it to readable text containing 0's and 1's
225 looks like this in binary
1110 0001
when you do 1110 0001 & 0x1 you mask out everything but the last bit, leaving 0000 0001
so adding 1 to 48 and converting the sum into a character gives '1'
next the bits are shifted one step right with x >>= 1
0111 0000
masking that with 0x1 gives 0000 0000
so adding 0 to 48 and converting the sum to a character gives '0'
and so on until x is 0
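Here is a small trace of that process (just a sketch; the eight-iteration count and the printed layout are illustrative assumptions):

#include <cstdio>

int main() {
    unsigned x = 225;                  // 1110 0001
    for (int d = 8; d > 0; d--) {
        char digit = '0' + (x & 1);    // character produced from the lowest bit
        std::printf("x = %3u emits '%c'\n", x, digit);
        x >>= 1;                       // shift the processed bit away
    }
}

The characters come out as '1','0','0','0','0','1','1','1', i.e. the digits of 1110 0001 starting from the lowest bit.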

Related

0 minus 0 gives carryout of 1 in adder-subtractor circuit

In this adder-subtractor design with the "M" input as the flag for subtraction, 0 minus 0 seems to produce an incorrect Cout. Let's assume that we're only using one full adder here (ignore A1/B1, A2/B2, A3/B3) for simplicity, and M=1, A0=0, B0=0:
The full adder will get the inputs of:
0 (B0) XOR 1 (M) = 1
0 (A0) = 0
1 (M) = 1
This results in 1 + 0 + 1 = 0 for the sum bit, with Cout = 1, but Cout should equal 0 for a full adder.
I think inverting the final Cout will provide the correct result, but everywhere I look online for this adder-subtractor circuit has no inverter for the final Cout. Is this circuit supposed to have an inverter at the final Cout to fix this problem?
The carry out equal to 1 is perfectly normal in this case.
When you work with unsigned logic, the carry out is used as an overflow flag: assuming you're working with 4-bit operands, the operation:
a = 1000, b = 1001 (Decimal a = 8, b = 9)
1000 +
1001 =
--------
1 0001
produces a carry out of 1'b1 because the result of 8+9 cannot be represented on 4 bits.
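To see the same thing in code (just a sketch; the 4-bit width and variable names are chosen to match the example):

#include <iostream>

int main() {
    unsigned a = 0b1000, b = 0b1001;           // 8 and 9 as 4-bit operands
    unsigned full  = a + b;                    // 1 0001 = 17
    unsigned sum   = full & 0xF;               // the 4 result bits: 0001
    unsigned carry = (full >> 4) & 1;          // the bit that fell off the top: 1
    std::cout << sum << ' ' << carry << '\n';  // prints 1 1
}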
On the other hand, when working with signed logic the carry out signal loses its 'overflow' meaning. Let's look at an example:
a = 0111, b = 0010 (Decimal a = 7, b = 2)
0111 +
0010 =
--------
0 1001
In this case the result is 1001, which is -7 in two's complement. It's obvious that we had an overflow, since we added two positive numbers and got a negative one. The carry out, however, is equal to 0. As a last case, if we consider:
a = 1111, b = 0001 (Decimal a = -1, b = 1)
1111 +
0001 =
--------
1 0000
we see that even though the result is correct -1+1=0, the carry out is set.
To conclude, if you work in signed logic and you need to understand whether there was an overflow, you need to check the sign of the two operands against the result's one.
Both operands positive (MSB = 0) and result negative (MSB = 1): overflow
Both operands negative (MSB = 1) and result positive (MSB = 0): overflow
Any other case: no overflow
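If you want to check that rule in code, here is a minimal sketch for 4-bit signed addition (the function name overflow4 and the 4-bit width are assumptions made for this illustration):

#include <cstdint>
#include <iostream>

// Overflow occurs when both operands have the same sign bit (bit 3)
// but the result's sign bit differs.
bool overflow4(std::uint8_t a, std::uint8_t b) {
    std::uint8_t sum = (a + b) & 0xF;  // keep only the 4 result bits
    bool sa = a & 0x8, sb = b & 0x8, ss = sum & 0x8;
    return (sa == sb) && (sa != ss);
}

int main() {
    std::cout << overflow4(0b0111, 0b0010) << '\n';  // 1: 7 + 2 overflows the 4-bit signed range
    std::cout << overflow4(0b1111, 0b0001) << '\n';  // 0: -1 + 1 = 0, no overflow
}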

Why is x^0 = x?

I have a very simple question.
Why does a number, when XOR'ed with 0, give the number itself?
Can someone please give a proof using an example?
Let's say I have the number 5
5^0==>
I think the answer should be just the last bit of 5 XOR'ed with 0, but the answer is still 5.
0 is false, and 1 is true.
By definition, the XOR operation A XOR B is "A or B, but not both A and B". So, since B is false, the result is simply A.
Also, the XOR truth table shows that it outputs true whenever the inputs differ:
A B | A XOR B
0 0 | 0
0 1 | 1
1 0 | 1
1 1 | 0
As you can see, whatever be the value of A, if it is XORed with 0, the result is the bit itself.
So, as you say:
5 = 101, 0 = 000
When performing XOR operation on the individual bits:
101
000
----
101 = 5.
Hence, the result of X^0 is X itself.
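You can also see this directly in code (a tiny sketch; std::bitset is used here only to print the bit patterns):

#include <bitset>
#include <iostream>

int main() {
    int x = 5;
    std::cout << std::bitset<8>(x) << '\n';      // 00000101
    std::cout << std::bitset<8>(x ^ 0) << '\n';  // 00000101, every bit unchanged
    std::cout << (x ^ 0) << '\n';                // 5
}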
What is there that you did not understand? Please read about XOR:
00000101 // = 5
00000000 // = 0
--------
00000101 // = 5
Bitwise operations operate on the whole set of bits in a number, not just on the last bit.
So if you perform a bitwise operation on a 32-bit integer, all 32 bits are affected. The integer 5 is 0.....0000101 (32 bits). If you need just the last bit that results from the XOR operation, apply a bitwise AND with 1:
console.log("%i\n",(5^0)&1);
console.log("%i\n",(6^0)&1);

Why does "number & (~(1 << 3))" not work for 0's?

I'm writing a program that exchanges the values of the bits on positions 3, 4 and 5 with bits on positions 24, 25 and 26 of a given 32-bit unsigned integer.
So let's say I use the number 15 and I want to turn the 4th bit into a 0. I'd use...
int number = 15;
int newnumber = number & (~(1 << 3));
// output is 7
This makes sense because I'm changing the 4th bit from 1 to 0, so 15 (1111) becomes 7 (0111).
However, this won't work the other way round (changing a 0 to a 1). Now, I know how to change a 0 to a 1 via a different method, but I really want to understand the code in this method.
So why won't it work?
The truth table for x AND y is:
x y Output
-----------
0 0 0
0 1 0
1 0 0
1 1 1
In other words, the output/result will only be 1 if both inputs are 1, which means that you cannot change a bit from 0 to 1 through a bitwise AND. Use a bitwise OR for that (e.g. int newnumber = number | (1 << 3);)
To summarize:
Use & ~(1 << n) to clear bit n.
Use | (1 << n) to set bit n.
To set the fourth bit to 0, you AND it with ~(1 << 3) which is the negation of 1000, or 0111.
By the same reasoning, you can set it to 1 by ORing with 1000.
To toggle it, XOR with 1000.
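Here is a short sketch showing all three operations on the questioner's value 15 (the variable names are made up for this illustration):

#include <iostream>

int main() {
    unsigned number  = 15;                    // 1111
    unsigned cleared = number & ~(1u << 3);   // clear bit 3  -> 0111 = 7
    unsigned set     = cleared | (1u << 3);   // set bit 3    -> 1111 = 15
    unsigned toggled = number ^ (1u << 3);    // toggle bit 3 -> 0111 = 7
    std::cout << cleared << ' ' << set << ' ' << toggled << '\n';  // prints 7 15 7
}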

How does condition statement work with bit-wise operators?

I am trying to understand how the if condition works with bitwise operators.
One way to check whether a number is even or odd is:
#include <iostream>
#include <string>
using namespace std;

string test()
{
    int i = 8; // a number
    if (i & 1)
        return "odd";
    else
        return "even";
}

int main()
{
    cout << test();
    return 0;
}
The part I don't understand is how the if condition works. In this case, if i = 8, then the if statement is doing 1000 & 1, which I thought should give back 1000, which equals 8.
If i = 7, then the if statement should be doing 111 & 1, which gives back 111, which equals 7.
Why is it the case that if(8) will return "even" and if(7) return "odd"? I guess I want to understand what the if statement checks to be true and what to be false when dealing with bitwise operators.
Just a thought as I wrote this question down: is it because it's actually doing
for 8: 1000 & 0001 which gives 0
for 7: 0111 & 0001 which gives 1?
Yes, you are right in the last part. Binary & and | are performed bit by bit. Since
1 & 1 == 1
1 & 0 == 0
0 & 1 == 0
0 & 0 == 0
we can see that:
8 & 1 == 1000 & 0001 == 0000
and
7 & 1 == 0111 & 0001 == 0001
Your test function does correctly compute whether a number is even or odd though, because a & 1 tests whether there is a 1 in the 1s place, which there only is for odd numbers.
Actually, in C, C++ and other major programming languages, the & operator performs an AND operation on each bit of its integral operands. The nth bit of a bitwise AND is equal to 1 if and only if the nth bit of both operands is equal to 1.
For example:
8 & 1 =
1000 - 8
0001 - 1
----
0000 - 0
7 & 1 =
0111 - 7
0001 - 1
----
0001 - 1
7 & 5 =
0111 - 7
0101 - 5
----
0101 - 5
For this reason, a bitwise AND between an even number and 1 will always equal 0, because only odd numbers have their least significant bit equal to 1.
if(x) in C++ converts x to boolean. An integer is considered true iff it is nonzero.
Thus, all if(i & 1) is doing is checking to see if the least-significant bit is set in i. If it is set, i&1 will be nonzero; if it is not set, i&1 will be zero.
The least significant bit is set in an integer iff that integer is odd, so thus i&1 is nonzero iff i is odd.
What you say the code is doing is actually how bit-wise operators are supposed to work. In your example of (8 & 1):
1000 & 0001 = 0000
because in the first value, the last bit is set to 0, while in the second value, the last bit is set to 1. 0 & 1 = 0.
0111 & 0001 = 0001
In both values, the last bit is set to 1, so the result is 1 since 1 & 1 = 1.
The expression i & 1, where i is an int, has type int. Its value is 1 or 0, depending on the value of the low bit of i. In the statement if(i & 1), the result of that expression is converted to bool, following the usual rule for integer types: 0 becomes false and non-zero becomes true.
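A quick sketch that makes that conversion visible (the value range 6..9 is arbitrary):

#include <iostream>

int main() {
    for (int i = 6; i <= 9; ++i) {
        int low = i & 1;  // 0 for even i, 1 for odd i
        std::cout << i << ": i & 1 = " << low
                  << ", so the if sees " << (low ? "true (odd)" : "false (even)") << '\n';
    }
}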

C++ Novice regarding Bitset operations with strings

I'm currently learning about bitsets, and one paragraph says this about their interaction with strings:
"The numbering conventions of strings and bitsets are inversely related: the rightmost character in the string--the one with the highest subscript--is used to initialize the low order bit in the bitset--the bit with subscript 0."
however later on they give an example + diagram which shows something like this:
string str("1111111000000011001101");
bitset<32> bitvec5(str, 5, 4); // 4 bits starting at str[5], 1100
value of str:
1 1 1 1 1 (1 1 0 0) 0 0 0 ...
value of bitvec5:
...0 0 0 0 0 0 0 (1 1 0 0)
This example seems to show that the last character of the selected substring ends up as the last (rightmost) position of the printed bitset, not the first.
Which is right?(or are both wrong?)
They are both right.
Traditionally the bits in a machine word are numbered from right to left, so the lowest bit (bit 0) is to the right, just like it is in the string.
The bitset looks like this
...1100 value
...3210 bit numbers
and the string that looks the same
"1100"
will have string[0] == '1' and string[3] == '0', the exact opposite!
string strval("1100"); //1100, so from rightmost to leftmost : 0 0 1 1
bitset<32> bitvec4(strval); //bitvec4 is 0 0 1 1
So what you are reading is correct (both the text and the example):
the rightmost character in the string--the one with the highest
subscript--is used to initialize the low order bit in the bitset--the
bit with subscript 0.
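Here is a small sketch that shows the opposite numbering directly (nothing beyond plain string and bitset indexing is assumed):

#include <bitset>
#include <iostream>
#include <string>

int main() {
    std::string s("1100");
    std::bitset<4> b(s);                       // bitset value is binary 1100 = 12
    std::cout << s[0] << ' ' << s[3] << '\n';  // prints 1 0 (string: index 0 is the leftmost character)
    std::cout << b[0] << ' ' << b[3] << '\n';  // prints 0 1 (bitset: bit 0 is the rightmost, lowest bit)
}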