Hamming Code Confusion

I am having difficulty answering this problem. Here is the original question:
A word is encoded with check bits 0111 (c8, c4, c2, and c1). The word is read back as 11101011 (data). What is the original data word?
I thought that since there are 4 check bits, this must be a 4-bit memory word, for which there are only 16 possible words: 0000, 1000, 0100, 1100, 0010, 1010, 0110, 1110, 0001, 1001, 0101, 1101, 0011, 1011, 0111, 1111. Therefore each codeword has 8 bits, and the check bits are in positions 1, 2, 4, and 8.
bit 1 checks parity over bits: 1, 3, 5, 7, 9, 11
bit 2 checks parity over bits: 2, 3, 6, 7, 10, 11
bit 4 checks parity over bits: 4, 5, 6, 7, 12
bit 8 checks parity over bits: 8, 9, 10, 11, 12
I also know that the parity bit is set to 1 if the total number of 1's it checks is odd, and to 0 if that number is even.
I think the word read back must have an error in it, and that I have to correct it first; that should then let me find the original data word.
Is this what is happening in this question?
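For reference, here is a minimal sketch (mine, not from the question) of the parity test described above, assuming a 12-bit codeword indexed from 1 and even parity:

#include <iostream>

int main() {
    // bit[i] holds the bit at Hamming position i (1-based);
    // fill in the received word you want to test.
    int bit[13] = {0};

    int syndrome = 0;
    for (int c = 1; c <= 8; c <<= 1) {            // check bits c1, c2, c4, c8
        int parity = 0;
        for (int pos = 1; pos <= 12; ++pos)
            if (pos & c) parity ^= bit[pos];      // covered positions include the check bit itself
        if (parity != 0) syndrome += c;           // an odd count of 1's means this check fails
    }
    // With even parity, syndrome == 0 means no single-bit error;
    // otherwise syndrome is the position of the bit to flip.
    std::cout << "syndrome = " << syndrome << '\n';
}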

Message: 11000010
Method A:
CBA987654321   <-- bit positions, written in hexadecimal (C = 12, B = 11, A = 10)
1100?001?0??   <-- message bits in the data positions; ? marks the check-bit positions 8, 4, 2, 1
C = 1100
B = 1011
5 = 0101
--------
X = 0010
XOR the binary position numbers of every bit that is 1 (here positions C, B and 5), column by column, to get the check bits.
Solution: write X = 0010 into the check-bit positions 8, 4, 2 and 1:
1100?001?0??
    0   0 10
110000010010
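Here is a minimal sketch (my own, not part of the answer) of Method A in code, assuming the layout above: data in positions 12..9, 7..5 and 3, check bits in positions 8, 4, 2 and 1.

#include <iostream>

int main() {
    // 8 data bits, most significant first: the message 11000010 from above.
    int data[8] = {1, 1, 0, 0, 0, 0, 1, 0};

    // Data positions in the 12-bit codeword, in the same order as data[].
    int dataPos[8] = {12, 11, 10, 9, 7, 6, 5, 3};

    int word[13] = {0};                  // word[i] = bit at position i
    for (int i = 0; i < 8; ++i)
        word[dataPos[i]] = data[i];

    // Method A: XOR the position numbers of all 1 bits.
    int x = 0;
    for (int pos = 1; pos <= 12; ++pos)
        if (word[pos]) x ^= pos;

    // x holds the check bits: its 8-bit goes to position 8, its 4-bit to 4, etc.
    for (int c = 1; c <= 8; c <<= 1)
        word[c] = (x & c) ? 1 : 0;

    for (int pos = 12; pos >= 1; --pos)  // print the codeword, position 12 first
        std::cout << word[pos];
    std::cout << '\n';                   // prints 110000010010
}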
Method B:
CBA987654321 <-- Hexadecimal
1100?001?0??

How to find smallest connected label in equivalency list

I have a list of numbers stored in a standard vector. Some of the numbers are children of other numbers. Here is an example
3, 4
3, 5
5, 6
7, 3
8, 9
8, 1
8, 2
9, 8
Or as a graph:
1 2 3-4 5-6 7 8-9
|-------------|
  |-----------|
    |---|
    |-------|
That is, there are two clusters: 3,4,5,6,7 and 1,2,8,9. The root number is the smallest number in a cluster, here 3 and 1. I would like to know which algorithms I can use to extract a list like this:
3, 4
3, 5
3, 6
3, 7
1, 2
1, 8
1, 9
An algorithm similar to the disjoint-set union algorithm can help you:
Initialize N disjoint subsets, each containing exactly one number, and let the root of number i, written r(i), be i.
For each edge (u, v), assign:
t = min(r(u), r(v))
r(u) = t
r(v) = t
For each i with i != r(i), write out the pair (r(i), i).
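Here is a sketch of the same idea (my own, using a standard union-find with path compression rather than the single pass above, since a single pass can depend on the order of the edges); the names findRoot and smallest are just illustrative.

#include <algorithm>
#include <iostream>
#include <utility>
#include <vector>

// Find the representative of x, compressing the path as we go.
static int findRoot(std::vector<int>& parent, int x) {
    while (parent[x] != x) {
        parent[x] = parent[parent[x]];
        x = parent[x];
    }
    return x;
}

int main() {
    const int N = 10;                                    // labels 0..9
    std::vector<std::pair<int,int>> edges = {
        {3,4}, {3,5}, {5,6}, {7,3}, {8,9}, {8,1}, {8,2}, {9,8}};

    std::vector<int> parent(N);
    for (int i = 0; i < N; ++i) parent[i] = i;

    // Union step: merge the two sets of every edge.
    for (auto [u, v] : edges)
        parent[findRoot(parent, u)] = findRoot(parent, v);

    // For each set, find its smallest member; that becomes the cluster root.
    std::vector<int> smallest(N, N);
    for (int i = 0; i < N; ++i) {
        int r = findRoot(parent, i);
        smallest[r] = std::min(smallest[r], i);
    }

    // Print "root, member" for every non-root member, e.g. "3, 4" ... "1, 9".
    for (int i = 0; i < N; ++i) {
        int r = smallest[findRoot(parent, i)];
        if (r != i) std::cout << r << ", " << i << '\n';
    }
}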

Minimum Number of Bits Required for Two's Complement Form

On my midterm, there was a question stating:
Given the decimal values, what is the minimum number of bits required to represent each number in Two's Complement form?
The values were: -26, -1, 10, -15, -4.
I did not get this question right whatsoever, and the solutions are quite baffling.
The only part I really understand is finding the range in which the value is located. For example, -15 would be within the range of [-2^5, 2^5), and -4 would be in the range from [-2^2, 2^2). What steps are needed from here in order to find how many bits were necessary?
I tried finding some pattern to solve it, but it only worked for the first two cases. Here's my attempt:
First I found the range. -2^6 < -26 < 2^6
Then I found the value for 2^6 = 32.
Then I found the difference between the "closest" bound, and the value.
-26 - (-32) = 6
Again, this worked for the first two values by chance, and now I'm stumped as to the actual relation between the number of bits required to represent an integer in two's complement form and the integer itself.
Thanks in advance!
First off, you're off on your powers of 2. 32 = 2^5.
Anyway, I followed you through the first two steps. Your last step doesn't make sense.
Find the power-of-two range that brackets the number. You want a power-of-two range of the form [-2^N, 2^N - 1]. So, for -26, that would be -2^5 ≤ -26 ≤ 2^5 - 1. That corresponds to -32 ≤ -26 ≤ 31.
Number of bits for the 2s complement representation will then simply be N plus 1. The "plus 1" accounts for the sign bit. For -26, that's 5 + 1 = 6.
So, for each of the numbers you gave: -26, -1, 10, -15, -4.
-2^5 ≤ -26 ≤ 2^5 - 1 becomes -32 ≤ -26 ≤ 31, which gives 5 + 1 = 6.
-2^0 ≤ -1 ≤ 2^0 - 1 becomes -1 ≤ -1 ≤ 0, which gives 0 + 1 = 1.
-2^4 ≤ 10 ≤ 2^4 - 1 becomes -16 ≤ 10 ≤ 15, which gives 4 + 1 = 5.
-2^4 ≤ -15 ≤ 2^4 - 1 becomes -16 ≤ -15 ≤ 15, which gives 4 + 1 = 5.
-2^2 ≤ -4 ≤ 2^2 - 1 becomes -4 ≤ -4 ≤ 3, which gives 2 + 1 = 3.
Got it?
The -1 one is tricky...
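If you want to check these by machine, here is a small sketch (mine, not part of the answer above) that finds the smallest N with -2^N <= x <= 2^N - 1 and reports N + 1 bits:

#include <iostream>

// Smallest number of bits needed to hold x in two's complement:
// find the smallest N with -2^N <= x <= 2^N - 1, then add 1 for the sign bit.
static int minTwosComplementBits(long long x) {
    int n = 0;
    long long lo = -1, hi = 0;           // -2^0 and 2^0 - 1
    while (x < lo || x > hi) {
        ++n;
        lo *= 2;                         // -2^n
        hi = hi * 2 + 1;                 // 2^n - 1
    }
    return n + 1;
}

int main() {
    long long values[] = {-26, -1, 10, -15, -4};
    for (long long v : values)
        std::cout << v << " needs " << minTwosComplementBits(v) << " bits\n";
    // Prints 6, 1, 5, 5, 3, matching the worked examples above.
}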

Meaning of bitwise and(&) of a positive and negative number?

Can anyone explain what n & -n means?
And what is its significance?
It's an old trick that gives a number with a single bit in it, the bottom bit that was set in n. At least in two's complement arithmetic, which is just about universal these days.
The reason it works: the negative of a number is produced by inverting the number, then adding 1 (that's the definition of two's complement). When you add 1, every bit starting at the bottom that is set will overflow into the next higher bit; this stops once you reach a zero bit. Those overflowed bits will all be zero, and the bits above the last one affected will be the inverse of each other, so the only bit left after the AND is the one that stopped the cascade: the lowest bit that was set in n, which was inverted to 0 and then set back to 1 by the carry.
P.S. If you're worried about running across one's complement arithmetic here's a version that works with both:
n & (~n + 1)
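Here is a tiny sketch (mine, illustrative only) that prints n & -n next to the portable form n & (~n + 1); on a two's complement machine the two columns match and both give the lowest set bit:

#include <iostream>

int main() {
    for (int n = 1; n <= 12; ++n)
        std::cout << n << ": n & -n = " << (n & -n)
                  << ", n & (~n + 1) = " << (n & (~n + 1)) << '\n';
    // e.g. 12 (binary 1100) prints 4, its lowest set bit.
}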
On pretty much every system that most people actually care about, it will give you the highest power of 2 that n is evenly divisible by.
I believe it is a trick to figure out if n is a power of 2: for n > 0, (n == (n & -n)) if and only if n is a power of 2 (1, 2, 4, 8, ...).
N & (-N) gives you the lowest set bit of N, i.e. the value of the first '1' bit (counted from the right) in the binary form of N.
For example:
N = 144 (0b10010000) => N&(-N) = 0b10000
N = 7 (0b00000111) => N&(-N) = 0b1
One application of this trick is to convert an integer into a sum of powers of 2.
For example:
To convert 22 = 16 + 4 + 2 = 2^4 + 2^2 + 2^1
22&(-22) = 2, 22 - 2 = 20
20&(-20) = 4, 20 - 4 = 16
16&(-16) = 16, 16 - 16 = 0
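As an illustration (my own sketch, not from the answer), the decomposition can be done by repeatedly peeling off the lowest set bit with n & -n:

#include <iostream>

int main() {
    int n = 22;
    std::cout << n << " = ";
    while (n != 0) {
        int low = n & -n;              // lowest set bit, always a power of two
        std::cout << low;
        n -= low;
        if (n != 0) std::cout << " + ";
    }
    std::cout << '\n';                 // prints 22 = 2 + 4 + 16
}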
It's just a bitwise AND of the number with its own negative. Negative numbers are represented in two's complement.
So, for instance, 7 & (-7) is 0b00000111 & 0b11111001 = 0b00000001 = 1.
I would add a self-explanatory example to Mark Ransom's wonderful exposition.
010010000 | +144  ~
----------|-------
101101111 | -145  +
        1 |
----------|-------
101110000 | -144
101110000 | -144  &
010010000 | +144
----------|-------
000010000 |   16
Because x & -x = {0, 1, 2, 1, 4, 1, 2, 1, 8, 1, 2, 1, 4, 1, 2, 1, 16, 1, 2, 1, 4, 1, 2, 1, 8, 1, 2, 1, 4, 1, 2, 1, 32} for x from 0 to 32, it can be used to jump through the indices of a sequence in some applications, for example structures that store accumulated totals.
for (; x < N; x += x & -x) {
    // do something here, e.g. update every entry that covers index x
    ++tr[x];
}
The loop runs quickly because each iteration jumps ahead by the lowest set bit of x.
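The loop above looks like the update step of a Fenwick (binary indexed) tree; that reading is my assumption, the answer only says "accumulated" records. A minimal sketch of such a structure, with both loops driven by the lowest set bit:

#include <iostream>
#include <vector>

// A guess at what tr[] is used for above: a Fenwick (binary indexed) tree.
struct Fenwick {
    std::vector<long long> tr;
    explicit Fenwick(int n) : tr(n + 1, 0) {}

    void add(int i, long long v) {               // point update at index i
        for (; i < (int)tr.size(); i += i & -i)  // climb up by the lowest set bit
            tr[i] += v;
    }

    long long prefixSum(int i) const {           // sum of indices 1..i
        long long s = 0;
        for (; i > 0; i -= i & -i)               // climb down by the lowest set bit
            s += tr[i];
        return s;
    }
};

int main() {
    Fenwick f(16);
    f.add(3, 5);
    f.add(10, 7);
    std::cout << f.prefixSum(9) << ' ' << f.prefixSum(12) << '\n';  // prints 5 12
}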
As @aestrivex has mentioned, it is a way of writing 1. I also encountered this:
for (int y = x; y > 0; y -= y & -y)
and, when the low bit of y is set, it just means y = y - 1, because
7 & (-7) is 0b00000111 & 0b11111001 = 0b00000001 = 1.
In general, y -= y & -y clears the lowest set bit of y.

Hamming Code Finding Error

A 4-bit message has been encoded with the Hamming code H(7,4) and transmitted over a possibly noisy channel with at most one error. The message 0100101 (binary) is received.
Hi,
I found an error at parity 6, and the original 4-bit message is 0100111. I was told that I was wrong. Can someone help and explain why?
Thanks
There are only three parity bits in H(7,4); those bits are at (one-indexed) positions 1, 2 and 4. There is no 'parity 6' to check. Let's examine the received message:
Parity bit 1 at position 1 covers bits 1, 3, 5 and 7. Those bits are 0, 0, 1 and 1, respectively. We take the sum of these bits, which comes to 2. This is an even sum, so we assume this bit is safe.
Parity bit 2 at position 2 covers bits 2, 3, 6 and 7. Those bits are 1, 0, 0 and 1, respectively. Again, the sum of these bits is even, so no problem exists yet.
Parity bit 3 at position 4 covers bits 4, 5, 6 and 7. Those bits are 0, 1, 0 and 1, respectively. The sum is even, so no problem here either.
The parity checks all add up, so there's no indication of error in the received message.
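To make the three checks mechanical, here is a small sketch (mine, not part of the answer) that computes the H(7,4) syndrome for a received word, using the same one-indexed bit numbering as above:

#include <iostream>
#include <string>

int main() {
    std::string received = "0100101";       // bit 1 is the leftmost character

    // Check bit c (c = 1, 2, 4) covers every position whose index contains c.
    int syndrome = 0;
    for (int c = 1; c <= 4; c <<= 1) {
        int parity = 0;
        for (int pos = 1; pos <= 7; ++pos)
            if (pos & c) parity ^= received[pos - 1] - '0';
        if (parity) syndrome += c;           // an odd sum means this check fails
    }

    if (syndrome == 0)
        std::cout << "no single-bit error detected\n";
    else
        std::cout << "flip bit " << syndrome << '\n';
    // For 0100101 all three sums are even, so the syndrome is 0.
}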

How do I find permutations or combinations for byte?

A character (1 byte) can represent 255 characters, but how do I actually find it?
(answering the comment)
There are 256 different combinations of 8 0s and 1s.
This is true because 256 = 2^8.
Each digit that you add doubles the number of combinations.
In a fixed width binary number, there are two choices for the first bit, two choices for the second bit, two choices for the third, and so on. The total number of combinations for an 8-bit byte is:
2 * 2 * 2 * 2 * 2 * 2 * 2 * 2 = 2^8 = 256
do you mean
for (char c = ' '; c <= '~'; c++) std::cout << c << std::endl;
?
This should show you the printable characters in ASCII proper. To see all the characters in your font, try c = 0 and c < 255 (be careful: using 255 as the bound can give an infinite loop, since a char can never exceed 255), but this most probably won't work with your terminal.
8 bits can represent permutations of ones and zeros from binary 00000000 to 11111111. Just like 3 decimal digits can represent permutations of decimal numbers (0-9) from decimal 000 to 999.
You just start counting: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 and then after you reach the digit maximum, you carry over a 1 and start from 0: ..., 8, 9, 10. Right? And then continue this until you fill up all your digits with nines: ..., 997, 998, 999.
It's the same thing in binary: 0, 1 then carry over 1 and start from 0: 0, 1, 10. Continue: 10, 11, 100, 101, 110, 111, 1000, 1001 etc.
Simply counting from 0 to the maximum value that can be represented by your digits gives you all the permutations.
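A small sketch (mine) of exactly this counting for one byte, printing each of the 256 values in binary:

#include <bitset>
#include <iostream>

int main() {
    // Count from 00000000 to 11111111; that is every combination a byte can hold.
    // Using an int counter avoids the char-overflow pitfall mentioned above.
    for (int value = 0; value < 256; ++value)
        std::cout << std::bitset<8>(value) << '\n';
    // 256 lines are printed, because 2^8 = 256.
}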