How to divide two negative numbers using the two's complement method? - twos-complement

I was studying binary arithmetic and came across the problem -21/-3. The expected answer is 7. Of course I know we can do it by dropping the negative signs and dividing 21 by 3, but I have seen the beauty of two's complement in the subtraction of two numbers.
So I started out like this:
-21 is: 11101011
-3 is: 11111101
Initially I tried dividing directly, but since the denominator's bit pattern (253) is larger than the numerator's (235), the division just gives 0. Clearly this is the equivalent of dividing 235 by 253. So my question is: how can I divide two negative numbers using the two's complement method? Is this division allowed, or isn't it possible to divide like this?
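One common approach (and effectively what hardware dividers do) is to decode the patterns back to signed values, divide the magnitudes, and fix up the sign at the end. Here is a minimal sketch in Python, assuming 8-bit two's complement; the helper names `to_signed`, `to_unsigned`, and `signed_div` are my own illustration, not from the question:

```python
def to_signed(pattern, width=8):
    """Interpret an unsigned bit pattern as a two's-complement signed value."""
    if pattern & (1 << (width - 1)):
        return pattern - (1 << width)
    return pattern

def to_unsigned(val, width=8):
    """Encode a signed value back into a two's-complement bit pattern."""
    return val & ((1 << width) - 1)

def signed_div(a_bits, b_bits, width=8):
    """Divide two two's-complement patterns: divide magnitudes, fix the sign."""
    a = to_signed(a_bits, width)
    b = to_signed(b_bits, width)
    q = abs(a) // abs(b)
    if (a < 0) != (b < 0):   # signs differ -> negative quotient
        q = -q
    return to_unsigned(q, width)

# -21 / -3: 0b11101011 / 0b11111101
q = signed_div(0b11101011, 0b11111101)
print(q, to_signed(q))   # 7 7: both signs negative, so the quotient is positive
```

So the signed patterns are not divided directly as unsigned numbers; the sign handling happens outside the magnitude division.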


Can the bitwise AND of some positive integers be negative?
Let's say I have the numbers 1,2,3,...,N. For example, take N=7. I want to find a subarray whose bitwise AND is negative.
Taking 4,5,6,7 gives me 4 (100), but what subarray (for any N) could give a negative result?
In most commonly used encodings, the sign is stored in the first bit.
For the first bit to be 1 (meaning a negative number) after an AND, it must be 1 in both operands beforehand. So only two negative numbers can produce a negative number with AND.
This is true both for integers and floats (IEEE 754), the most common int and float implementations, but basically anything that can store negative numbers has to store the sign somewhere... and as positive numbers are the "default", it's minus that gets marked as 1.
Can the bitwise AND of some positive integers be negative?
No, it can't.
A negative number has the top bit set. All positive numbers have the top bit unset. A zero bit AND a zero bit results in a zero bit. This applies to any number of bitwise AND operations performed on any number of positive integers.
And it also applies if you include zero.
In fact, if you perform a bitwise AND on a collection of integers, the result will only be negative if all of the integers are less than zero.
Note that the above is for two's complement binary representations. All modern mainstream computer instruction sets use two's complement to represent integers.
With ones' complement you actually have two zeros +0 and -0. But if you treat -0 as negative then the same logic applies.
Bitwise AND'ing of floating point number representations ... and treating the results as numbers ... doesn't make a lot of sense. So I haven't considered that.
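The argument above is easy to verify exhaustively for small N; here is a quick Python check (my illustration, using N=7 as in the question):

```python
from itertools import combinations

nums = list(range(1, 8))  # N = 7, as in the question

# The sign bit is 0 in every positive operand, so it stays 0 in the AND of
# any non-empty subset: the result is never negative.
for r in range(1, len(nums) + 1):
    for subset in combinations(nums, r):
        acc = subset[0]
        for x in subset[1:]:
            acc &= x
        assert acc >= 0

# Conversely, AND of all-negative numbers keeps the sign bit set:
print(-5 & -3)   # -7: both sign bits are 1, so the result is negative
```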

What exactly happens while multiplying a double value by 10

I have recently been wondering about multiplying floating point numbers.
Let's assume I have a number, for example 3.1415, with a guaranteed 3-digit precision.
Now, I multiply this value by 10, and I get 31.415X, where X is a digit I cannot determine because of the limited precision.
Now, can I be sure that the five gets carried over to the precise digits?
If a number is proven to be precise up to 3 digits, I wouldn't expect this five to always pop up there, but after studying many cases in C++ I have noticed that it always happens.
From my point of view, however, this doesn't make any sense, because floating point numbers are stored base-two, so multiplying exactly by ten isn't really possible; it will always be multiplication by 10.something.
I ask this question because I wanted to create a function that calculates how precise a type is. I have come up with something like this:
template <typename T>
unsigned accuracy() {
    unsigned acc = 0;
    T num = (T)1 / (T)3;
    while ((unsigned)(num *= 10) == 3) {
        acc++;
        num -= 3;
    }
    return acc;
}
Now, this works for every type I've used it with, but I'm still not sure that the first imprecise digit will always be carried over unchanged.
I'll talk specifically about IEEE754 doubles since that what I think you're asking for.
Doubles are defined as a sign bit, an 11-bit exponent and a 52-bit mantissa, which are concatenated to form a 64-bit value:
sign|exponent|mantissa
Exponent bits are stored in a biased format, which means we store the actual exponent plus 1023 (for a double). The all-zeros and all-ones exponents are special, so we end up being able to represent scale factors from 2^-1022 to 2^+1023.
It's a common misconception that integer values can't be represented exactly by doubles, but we can actually store any integer in [0,2^53) exactly by setting the mantissa and exponent properly; in fact, every double in the range [2^52,2^53) is an integer. So 10 is easily stored exactly in a double.
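The exact-integer claim can be checked directly; a quick sketch (my addition, relying only on Python's built-in doubles):

```python
# Every integer up to 2**53 fits exactly in an IEEE 754 double; 10 does.
assert float(10) == 10
assert float(2**53) == 2**53

# One past 2**53, the spacing between adjacent doubles becomes 2, so
# 2**53 + 1 rounds back down and is no longer exactly representable:
assert float(2**53 + 1) == float(2**53)
```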
When it comes to multiplying doubles, we effectively have two numbers of this form:
A = (-1)^sA*mA*2^(eA-1023)
B = (-1)^sB*mB*2^(eB-1023)
Where sA,mA,eA are the sign,mantissa and exponent for A (and similarly for B).
If we multiply these:
A*B = (-1)^(sA+sB)*(mA*mB)*2^((eA-1023)+(eB-1023))
We can see that we merely sum the exponents, and then multiply the mantissas. This actually isn't bad for precision! We might overflow the exponent bits (and thus get an infinity), but other than that we just have to round the intermediate mantissa result back to 52 bits, but this will at worst only change the least significant bit in the new mantissa.
Ultimately, the error you'll see will be proportional to the magnitude of the result. But, doubles have an error proportional to their magnitude anyways so this is really as safe as we can get. The way to approximate the error in your number is as |magnitude|*2^-53. In your case, since 10 is exact, the only error will come in the representation of pi. It will have an error of ~2^-51 and thus the result will as well.
As a rule of thumb, I consider doubles to have ~15 digits of decimal precision when thinking about precision concerns.
Let's assume that in single precision 3.1415 is
0x40490E56
in IEEE 754 format, which is a very popular but not the only format in use.
01000000010010010000111001010110
0 10000000 10010010000111001010110
so the binary portion is 1.10010010000111001010110
110010010000111001010110
1100 1001 0000 1110 0101 0110
0xC90E56 * 10 = 0x7DA8F5C
Just like in grade school with decimal you worry about the decimal(/binary) point later, you just do a multiply.
01111.10110101000111101011100
to get into IEEE 754 format it needs to be shifted to a 1.mantissa format
so that is a shift of 3
1.11110110101000111101011
but look at the three bits chopped off, 100, specifically the leading 1. This means that depending on the rounding mode you round; in this case let's round up:
1.11110110101000111101100
0111 1011 0101 0001 1110 1100
0x7B51EC
now if I already computed the answer:
0x41FB51EC
0 10000011 11110110101000111101100
We moved the binary point by 3 and the exponent reflects that; the mantissa matches what we computed. We did lose one of the original non-zero bits off the right, but is that too much loss?
double and extended precision work the same way, just with more exponent and mantissa bits: more precision and more range. But at the end of the day it is nothing more than what we learned in grade school as far as the math goes; the format requires 1.mantissa, so you have to use your grade school math to adjust the exponent of the base to get it in that form.
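The bit patterns in the walkthrough above can be reproduced from Python's struct module; a small sketch I added (the helper `float_bits` is mine):

```python
import struct

def float_bits(x):
    """Hex of the IEEE 754 single-precision encoding of x."""
    return struct.pack('>f', x).hex()

# Round 3.1415 to single precision, then multiply by 10. The product has
# at most 28 significant bits, so it is exact in the double that Python
# uses; packing back to single performs the final 24-bit rounding.
f = struct.unpack('>f', struct.pack('>f', 3.1415))[0]
print(float_bits(f))       # 40490e56, as in the walkthrough
print(float_bits(f * 10))  # 41fb51ec: same mantissa as computed, exponent +3
```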
Now, can I be sure that the five gets carried over to the precise digits?
In general, no. You can only be sure about the precision of output when you know the exact representation format used by your system, and know that the correct output is exactly representable in that format.
If you want precise result for any rational input, then you cannot use finite precision.
It seems that your function attempts to calculate how accurately the floating point type can represent 1/3. This accuracy is not useful for evaluating accuracy of representing other numbers.
because floating point numbers are stored base-two
While very common, this is not universally true. Some systems use base-10 for example.

NOT(~) bitwise operator on signed type number

I'm having a little trouble understanding how the NOT (~) operator works for positive signed numbers.
As an example, negative numbers are stored in 2's complement:
int a=-5;
cout<<bitset<32>(a)<<endl; // 11111111111111111111111111111011
cout<<bitset<32>(~a)<<endl; // 00000000000000000000000000000100
cout<<(~a)<<endl; // 4
4 is the expected output,
but
int a=5;
cout<<bitset<32>(a)<<endl; // 00000000000000000000000000000101
cout<<bitset<32>(~a)<<endl; // 11111111111111111111111111111010
cout<<(~a)<<endl; // -6
how come -6?
The bitwise not (~) operator can basically be summarized as the following
~x == -(x+1)
The reason is that negation (unary -) generally uses 2's complement.
Two's complement is defined as taking the bitwise not and then adding one, i.e. -x == (~x) + 1. Transposing the + 1 to the other side and then factoring out the negative sign (the distributive property of multiplication by -1), we get the equation on top.
In a more mathematical sense:
-x == (~x) + 1 // Due to way negative numbers are stored in memory (in 2's complement)
-x - 1 == ~x // Transposing the 1 to the other side
-(x+1) == ~x // Distributing the negative sign
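The identity can be confirmed directly in Python, whose ints behave as arbitrarily wide two's complement (a quick check I added):

```python
# ~x == -(x + 1) holds for every integer, positive or negative.
for x in [-5, -1, 0, 1, 5, 123456]:
    assert ~x == -(x + 1)

print(~5)     # -6, as in the question
print(~(-5))  # 4
```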
The C++ standard only says that the binary NOT operator (~) flips all the bits that are used to represent the value. What this means for the integer value depends on what interpretation of the bits your machine uses. The C++ standard allows the following three representations of signed integers:
sign+magnitude: There is a sign bit followed by bits encoding the magnitude.
one's complement: The idea is to represent -x by flipping all bits of x. This is similar to sign+magnitude where in the negative cases the magnitude bits are all flipped. So, 1...1111 would be -0, 1...1110 would be -1, 1...1101 would be -2 and so on...
two's complement: This representation removes one of two possible encodings for zero and adds a new value at the lower end. In two's complement 1...1111 represents -1, 1...1110 represents -2 and so on. So, it's basically one shifted down compared to one's complement.
Two's complement is much more popular these days. X86, X64, ARM, MIPS all use this representation for signed numbers as far as I know. Your machine apparently also uses two's complement because you observed ~5 to be -6.
If you want to stay 100% portable you should not rely on the two's complement representation and only use these kinds of operators on unsigned types.
It is due to how the computer represents signed numbers.
Take a byte: 8 bits.
1 (the most significant bit) for the sign,
7 for the magnitude.
As a whole: from -128 to +127.
For negative numbers: -128 to -1;
for positive numbers: 0 to 127.
There is no need to have two zeros, therefore there is one more negative number than positive.
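That asymmetric range can be checked with Python's struct module, whose 'b' format packs a signed 8-bit integer (a sketch I added):

```python
import struct

# 'b' is the signed-byte format: it accepts exactly -128..127.
print(struct.pack('b', 127))    # b'\x7f': the largest positive value
print(struct.pack('b', -128))   # b'\x80': the one extra negative value
try:
    struct.pack('b', 128)       # out of range for a signed byte
except struct.error as e:
    print('128 does not fit:', e)
```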

Definition of Two's Complement?

I've been reading over a couple of questions/answers here:
twos-complement-in-python
is-twos-complement-notation-of-a-positive-number-the-same-number
Someone gave some sample code to create a two's complement of a number:
def twos_comp(val, bits):
    """compute the 2's compliment of int value val"""
    if (val & (1 << (bits-1))) != 0:
        val = val - (1 << bits)
    return val
In addition, someone defined two's complement as follows:
Two's complement notation uses the n-bit two's complement to flip the
sign. For 8-bit numbers, the number is subtracted from 2^8 to produce
its negative.
These declarations went unchallenged. However, that clashes with my understanding of what a two's complement is. I thought it's calculated by inverting the binary number and adding 1 (with the understanding that the representation has a limited number of bits).
Additionally, the two's complement is supposed to have the property that it is the additive inverse of the original number, but the output from twos_comp doesn't appear to have that property. In my hand calculations (and some test code I wrote) with my definition, I see that when a number and its two's complement are added together, a 1 overflows out the top and the remaining bits are all zero; thus it has the additive-inverse property.
Are there multiple definitions of two's complement, am I confused, or are that definition and function from the other posts just plain wrong?
Two's complement is in fact calculated by inverting the binary number and adding 1, for negative numbers. For example, with 8 bits, abs(-1) = 1 = 0x01, and bitwise_inv(0x01) + 1 = 0xFE + 1 = 0xFF, which represents -1. This is equivalent to the definition of subtracting the number from 2^8 (this should not be hard to see).
The sample code you provided does not calculate the two's complement in any useful way. I don't understand what it's trying to do; it appears to be quite different from "subtracting the number from 2^8", as it subtracts 2^8 from the number, while also failing to remember that when we refer to the value of a two's complement number, what we mean is its unsigned value.
Here is a more correct implementation, using the same template. Notice that this precisely does "subtract the number from 2^8."
def twos_c(val, bits):
    if (val & (1 << (bits-1))) != 0:
        val = (1 << bits) - abs(val)
    return val
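The additive-inverse property the asker describes is easy to demonstrate; a sketch assuming 8-bit words (the function name and the 0xFF mask are my illustration):

```python
def twos_complement(x, bits=8):
    """Invert the bits and add one, truncated to the word width."""
    mask = (1 << bits) - 1
    return ((~x) + 1) & mask

# 21 -> 235 (0b11101011), and the pair sums to 0 modulo 2**8:
neg = twos_complement(21)
print(neg)                # 235
print((21 + neg) & 0xFF)  # 0: the overflowed 1 is discarded, rest is zero
```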

Why is ~3 equal to -4 in Python?

I'm getting started in Python programming. I'm reading a basic tutorial, but this point is not very clear to me. I would appreciate any help you can give me.
~3 means 'invert' 3. With two's complement on signed integer types, this becomes -4, as the binary representation is inverted (all bits are flipped).
~3 means "change all the 1s to 0s and 0s to 1s", so if 3 in binary is 0000000000000011, then ~3 is 1111111111111100. Since the first bit of ~3 is a 1, it's a negative number. To find out which negative number, in 2's complement you invert all the bits and add 1: inverted, we are back to 3; adding 1 gives 4. So ~3 is -4.
Because signed integers are usually stored using two's complement, which means that the bitwise inverse of an integer is equal to its algebraic inverse minus one.
It's the invert operator, and returns the bitwise inverse of the number you give it.
It's not just Python, it's the integer numeric representation of almost all modern computers: two's complement. By the definition of two's complement, you get a negative number by complementing the positive number and adding one. In your example, you complemented with ~ but did not add one, so you got the negative of your number minus one.
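Putting those pieces together in Python (a quick check I added):

```python
# ~3 flips every bit; adding 1 afterwards would complete the negation.
x = 3
print(~x)              # -4: complement without the +1 step
print(~x + 1)          # -3: complement-and-add-one is exactly negation
print(bin(~x & 0xFF))  # 0b11111100: the low 8 bits, all flipped
```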