Hamming Code Finding Error

A 4-bit message has been encoded with the Hamming code H(7,4) and transmitted over a possibly noisy channel with at most one error. The message 0100101 (binary) is received.
Hi,
I found an error at Parity 6 and the original 4 bit message is 0100111. I was told that I was wrong. Can someone help and explain why?
Thanks

There are only three parity bits in H(7,4); those bits are at (one-indexed) positions 1, 2 and 4. There is no 'parity 6' to check. Let's examine the received message:
Parity bit 1 at position 1 covers bits 1, 3, 5 and 7. Those bits are 0, 0, 1 and 1, respectively. We take the sum of these bits, which comes to 2. This is an even sum, so we assume this bit is safe.
Parity bit 2 at position 2 covers bits 2, 3, 6 and 7. Those bits are 1, 0, 0 and 1, respectively. Again, the sum of these bits is even, so no problem exists yet.
Parity bit 3 at position 4 covers bits 4, 5, 6 and 7. Those bits are 0, 1, 0 and 1, respectively. The sum is even, so no problem here either.
The parity checks all add up, so there's no indication of error in the received message.
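To make the check mechanical, here is a small sketch (standard C++, not from the answer above) that computes the three parity checks, assuming even parity and that position 1 is the leftmost character of the received string. It returns 0 when all checks pass, otherwise the position of the single flipped bit.

#include <iostream>
#include <string>

// Returns the (one-indexed) position of the single-bit error, or 0 if the
// received H(7,4) word passes all three parity checks.
int hammingSyndrome(const std::string& received)     // e.g. "0100101"
{
    int syndrome = 0;
    for (int p = 1; p <= 4; p <<= 1)                 // parity bits at positions 1, 2, 4
    {
        int sum = 0;
        for (int pos = 1; pos <= 7; ++pos)
            if (pos & p)                             // parity p covers positions with bit p set
                sum += received[pos - 1] - '0';
        if (sum % 2 != 0)                            // odd sum -> this check fails
            syndrome += p;
    }
    return syndrome;                                 // 0 means no error detected
}

int main()
{
    std::cout << hammingSyndrome("0100101") << "\n"; // prints 0: all checks pass
}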

Related

Efficient mapping from bits of one variable to another

I have 4 x Uint32 variables named lowLevelErrors1, lowLevelErrors2... up to 4. Each bit in those represents a low-level error. I need to map them to a Uint64 variable named userErrors. Each bit of userErrors represents an error shown to the user, which can be set due to one or more low-level errors. In other words, every low-level error is mapped to one user error, and two or more low-level errors can be mapped to the same user error.
Let's scale it down to 2 x Uint8 low-level error variables and 1 x Uint8 user-error variable so we can see an example.
Example: if any of the low-level errors {ERR_VOLT_LOW, ERR_NO_BATTERY, ERR_NOT_CHARGING} is set (which correspond to bit 0, bit 2 and bit 3 of lowLevelErrors1), then the user error US_ERR_POWER_FAIL is set (which is bit 5 of userErrors).
So the only way I could think of was to have a map array for each lowLevelErrors variable that is used to map to the corresponding bit of userErrors.
/* Let's say the lowLevelErrors have to be mapped like this:

   lowLevelErrors1 bit    maps to userErrors bit
   0                      5
   1                      1
   2                      5
   3                      5
   4                      0
   5                      2
   6                      7
   7                      0

   lowLevelErrors2 bit    maps to userErrors bit
   0                      1
   1                      1
   2                      0
   3                      3
   4                      6
   5                      6
   6                      4
   7                      7
*/
Uint8 lowLevelErrors1 = 0;
Uint8 lowLevelErrors2 = 0;
Uint8 userErrors = 0;
Uint8 mapLLE1[8] = {5, 1, 5, 5, 0, 2, 7, 0};
Uint8 mapLLE2[8] = {1, 1, 0, 3, 6, 6, 4, 7};
void mapErrors(void)
{
    for (Uint8 bitIndex = 0; bitIndex < 8; bitIndex++)
    {
        if (lowLevelErrors1 & (1 << bitIndex))        //If error bit is set
        {
            userErrors |= 1 << mapLLE1[bitIndex];     //Set the corresponding user error
        }
    }
    for (Uint8 bitIndex = 0; bitIndex < 8; bitIndex++)
    {
        if (lowLevelErrors2 & (1 << bitIndex))        //If error bit is set
        {
            userErrors |= 1 << mapLLE2[bitIndex];     //Set the corresponding user error
        }
    }
}
The problem with this implementation is the need for the map arrays. At full scale I will need 4 x uint8 array[32] = 128 uint8 variables, and we are running low on memory on the microcontroller.
Is there any other way to implement the same functionality using less RAM?
You have 128 input bits, each of which is mapped to a bit number from 0 to 63. So that is 128 * 6 = 768 bits of information, which needs at least 96 bytes of storage unless there is some regular pattern to it.
So you need at least 96 bytes; and even then, it would be stored as packed 6-bit integers. The code to unpack these integers might well cost more than the 32 bytes that you save by packing them.
So you basically have three choices: a 128-byte array, as you suggest; packed 6-bit integers; or some regular assignment of the error codes that is easier to unpack (which is not a possibility if the specific error code mapping is fixed).
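For reference, the packed option could look roughly like the sketch below. This is only a sketch of the idea: the array name, the LSB-first packing order and the unpacking helper are my own assumptions, and the 96-byte table would have to be generated offline from the full mapping. Whether the unpacking code is worth the 32 bytes it saves is exactly the trade-off described above.

#include <stdint.h>

// 128 map entries of 6 bits each, packed LSB-first into 96 bytes
// (entry i occupies bits i*6 .. i*6+5).
static const uint8_t packedMap[96] = { 0 /* generated offline from the full mapping */ };

// Unpack the 6-bit user-error bit number for low-level error bit 'index' (0..127).
static uint8_t mapEntry(uint8_t index)
{
    uint16_t bitPos  = (uint16_t)index * 6;
    uint16_t bytePos = bitPos >> 3;              // byte in which the entry starts
    uint8_t  offset  = (uint8_t)(bitPos & 7);    // bit offset inside that byte
    uint16_t window  = packedMap[bytePos];
    if (offset > 2)                              // entry spills into the next byte
        window |= (uint16_t)packedMap[bytePos + 1] << 8;
    return (uint8_t)((window >> offset) & 0x3F);
}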
Since you haven't given a complete example with ALL errors, it's hard to say what is the "best" method, but I would construct a table of "mask" and "value":
Something like this:
struct Translate
{
    uint32_t mask;
    // Maybe have mask[4]?
    uint64_t value;
};

// If not mask[4], then one table per lowLevelErrors variable.
Translate table[] =
{
    { ERR_VOLT_LOW | ERR_NO_BATTERY | ERR_NOT_CHARGING,
      // If mask[4] then add 3 more values here - expect typically zeros
      US_ERR_POWER_FAIL },
    ...
};
I'm not sure which will make more sense - to have 4 different mask values in each table entry, or to have 4 different tables; it would depend on how often your errors from LowLevel1 and LowLevel2, LowLevel3 and LowLevel4, etc. map to the same user error. But by storing a mask of multiple errors against one value, you should come out well ahead on memory.
Now, once we have a data structure, the code becomes something like:
for (auto a : table)
{
    if (a.mask & lowLevelErrors1)
    {
        userErrors |= a.value;
    }
}
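If you end up with one table per low-level word rather than mask[4], the lookup loop can be shared. A sketch under that assumption (the helper and the per-word table names are hypothetical, not from the original code):

#include <cstddef>
#include <cstdint>

struct Translate { uint32_t mask; uint64_t value; };   // same layout as above

static void applyTable(const Translate* t, std::size_t count,
                       uint32_t lowLevelErrors, uint64_t& userErrors)
{
    for (std::size_t i = 0; i < count; ++i)
        if (t[i].mask & lowLevelErrors)    // any low-level error in this group set?
            userErrors |= t[i].value;      // set the grouped user-error bit
}

// Usage (hypothetical table names): applyTable(table1, table1Count, lowLevelErrors1, userErrors);
// and likewise for lowLevelErrors2..4.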

Meaning of bitwise and(&) of a positive and negative number?

Can anyone explain what n & -n means?
And what is the significance of it?
It's an old trick that gives a number with a single bit in it, the bottom bit that was set in n. At least in two's complement arithmetic, which is just about universal these days.
The reason it works: the negative of a number is produced by inverting the number, then adding 1 (that's the definition of two's complement). When you add 1 to the inverted number, every 1 bit starting at the bottom overflows into the next higher bit, and the cascade stops at the first 0 bit, which flips to 1. The bits below that point end up 0, the bits above it remain the inverse of the corresponding bits of n, so the only bit that survives the AND is the one that stopped the cascade - the bit that was 1 in n and was inverted to 0.
P.S. If you're worried about running across one's complement arithmetic here's a version that works with both:
n & (~n + 1)
On pretty much every system that most people actually care about, it will give you the highest power of 2 that n is evenly divisible by.
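A quick way to see this in action (a minimal sketch; the range is arbitrary):

#include <iostream>

int main()
{
    for (int n = 1; n <= 12; ++n)
    {
        // On a two's complement machine, n & -n and n & (~n + 1) are identical:
        // both isolate the lowest set bit, i.e. the largest power of 2 dividing n.
        std::cout << n << " -> " << (n & -n) << "\n";
    }
    // Prints 1, 2, 1, 4, 1, 2, 1, 8, 1, 2, 1, 4 for n = 1..12.
}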
I believe it is also a trick to figure out if n is a power of 2: for n > 0, (n == (n & -n)) iff n is a power of 2 (1, 2, 4, 8, ...).
N & (-N) isolates the lowest '1' bit in the binary form of N - it gives the value of that bit, not its index.
For example:
N = 144 (0b10010000) => N&(-N) = 0b10000
N = 7 (0b00000111) => N&(-N) = 0b1
One application of this trick is to decompose an integer into a sum of powers of 2.
For example:
To convert 22 = 16 + 4 + 2 = 2^4 + 2^2 + 2^1
22&(-22) = 2, 22 - 2 = 20
20&(-20) = 4, 20 - 4 = 16
16&(-16) = 16, 16 - 16 = 0
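That decomposition is easy to put into code; here is a minimal sketch (the helper name is mine):

#include <iostream>

// Print n as a sum of powers of two by repeatedly stripping the lowest set bit.
void printAsPowersOfTwo(unsigned n)
{
    while (n != 0)
    {
        unsigned lowest = n & (~n + 1);                      // same as n & -n: the lowest set bit
        std::cout << lowest << (n != lowest ? " + " : "\n");
        n -= lowest;
    }
}

int main()
{
    printAsPowersOfTwo(22);   // prints "2 + 4 + 16"
}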
It's just a bitwise AND of the number with its negative, where negative numbers are represented in two's complement.
So for instance, 7 & (-7) is 0b00000111 & 0b11111001 = 0b00000001 = 1
I would add a self-explanatory example to Mark Randsom's wonderful exposition.
010010000 | +144  ~
----------|-------
101101111 | -145  +
        1 |
----------|-------
101110000 | -144

101110000 | -144  &
010010000 | +144
----------|-------
000010000 |   16
Because x & -x = {0, 1, 2, 1, 4, 1, 2, 1, 8, 1, 2, 1, 4, 1, 2, 1, 16, 1, 2, 1, 4, 1, 2, 1, 8, 1, 2, 1, 4, 1, 2, 1, 32} for x from 0 to 32. It is used to jump through a loop in variable-sized strides in some applications, for example to accumulate records in a binary indexed (Fenwick) tree:
for (; x < N; x += x & -x) {
    // do something here
    ++tr[x];
}
The loop advances quickly because each step jumps forward by x & -x, the lowest set bit of x.
As #aestrivex has mentioned, it can be a way of writing 1. I even encountered this:
for (int y = x; y > 0; y -= y & -y)
and there, whenever y is odd, it just means y = y - 1, because
7 & (-7) is 0b00000111 & 0b11111001 = 0b00000001 = 1

Hamming Code Confusion

I am having difficulty answering this problem. Here is the original question:
A word is encoded with check bits 0111 (c8, c4, c2, and c1). The word is read back as 11101011 (data). What is the original data word?
I thought that since there are 4 check bits then this must be a 4-bit memory word, for which there are only 16 possible words: 0000, 1000, 0100, 1100, 0010, 1010, 0110, 1110, 0001, 1001, 0101, 1101, 0011, 1011, 0111, 1111. Therefore, each codeword has 8 bits, and the check bits are in positions 1, 2, 4, and 8.
bit 1 checks parity over bits: 1, 3, 5, 7, 9, 11
bit 2 checks parity over bits: 2, 3, 6, 7, 10, 11
bit 4 checks parity over bits: 4, 5, 6, 7, 12
bit 8 checks parity over bits: 8, 9, 10, 11, 12
I also know that a parity bit is set to 1 if the total number of 1's it checks is odd, and to 0 if that total is even.
I think that the word read back must have an error in it, and that once I correct it I will be able to find the original data word.
Is this what is happening in this question?
Message (a worked example): 11000010
Method A:

CBA987654321   <-- bit positions, written in hexadecimal
1100?001?0??   <-- the message placed in the data positions; check-bit positions 8, 4, 2 and 1 left as ?

Take the position of every data bit that is 1, write it in binary, and test every bit vertically (XOR the columns):

C = 1100
B = 1011
5 = 0101
--------
X = 0010

Solution - fill the ? positions with the bits of X (c8 c4 c2 c1 = 0010):

1100?001?0??
    0   0 10
110000010010
Method B:
CBA987654321 <-- Hexadecimal
1100?001?0??
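For completeness, here is a small sketch (not from either answer) that performs the check mechanically. It assumes even parity and the position layout used above: the string lists the code bits from position 12 down to position 1, the check bits sit at positions 8, 4, 2 and 1, and each check covers exactly the positions listed in the question. The returned syndrome is 0 for a clean word; otherwise it is the position of the single flipped bit.

#include <iostream>
#include <string>

// 'word' lists the 12 code bits from position 12 down to position 1,
// matching the "CBA987654321" layout above.
int syndrome12(const std::string& word)
{
    int result = 0;
    for (int p = 1; p <= 8; p <<= 1)          // check bits at positions 1, 2, 4, 8
    {
        int sum = 0;
        for (int pos = 1; pos <= 12; ++pos)
            if (pos & p)                      // check p covers the positions that have bit p set
                sum += word[12 - pos] - '0';
        if (sum % 2 != 0)                     // even parity assumed
            result += p;
    }
    return result;
}

int main()
{
    // The completed codeword from the worked example above passes all four checks.
    std::cout << syndrome12("110000010010") << "\n";   // prints 0
}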

Find rank of a number on basis of number of 1's

Let f(k) = y where k is the y-th number in the increasing sequence of non-negative integers with the same number of ones in its binary representation as k, e.g. f(0) = 1, f(1) = 1, f(2) = 2, f(3) = 1, f(4) = 3, f(5) = 2, f(6) = 3, and so on. Given k >= 0, compute f(k).
Many of us have seen this question. One solution is to categorise numbers by their number of 1's and then find the rank. I did find some patterns going that way, but it would be a lengthy process. Can anyone suggest a better solution?
This is a counting problem. I think that if you approach it with this in mind, you can do much better than literally enumerating values and checking how many bits they have.
Consider the number 17. The binary representation is 10001. The number of 1s is 2. We can get smaller numbers with two 1s by (in this case) re-distributing the 1s to any of the four low-order bits. 4 choose 2 is 6, so 17 should be the 7th number with 2 ones in the binary representation. We can check this...
0 00000 -
1 00001 -
2 00010 -
3 00011 1
4 00100 -
5 00101 2
6 00110 3
7 00111 -
8 01000 -
9 01001 4
10 01010 5
11 01011 -
12 01100 6
13 01101 -
14 01110 -
15 01111 -
16 10000 -
17 10001 7
And we were right. Generalize that idea and you should get an efficient function for which you simply compute the rank of k.
EDIT: Hint for generalization
17 is special in that if you don't consider the high-order bit, the number has rank 1; that is, f(z) = 1 where z is everything except the high-order bit. For numbers where this is not the case, how can you account for the fact that you can get smaller numbers without moving the high-order bit?
f(k) counts the integers less than or equal to k that have the same number of ones in their binary representation as k.
Say k needs m bits, that is, k = 2^(m-1) + a where a < 2^(m-1). The number of integers less than 2^(m-1) that have the same number of ones as k is choose(m-1, bitcount(k)), since you can freely redistribute the ones among the m-1 least significant bits.
Integers that are greater than or equal to 2^(m-1) have the same most significant bit as k (which is 1), so there are f(k - 2^(m-1)) of them. This implies f(k) = choose(m-1, bitcount(k)) + f(k-2^(m-1)).
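A direct translation of that recurrence into code (a sketch; popcount and the binomial coefficient are written out by hand to keep it self-contained, and k is assumed to be below 2^63):

#include <cstdint>
#include <iostream>

// Number of k-element subsets of an n-element set.
static uint64_t choose(int n, int k)
{
    if (k < 0 || k > n) return 0;
    uint64_t result = 1;
    for (int i = 1; i <= k; ++i)
        result = result * (n - k + i) / i;   // exact at every step
    return result;
}

static int bitcount(uint64_t x)
{
    int c = 0;
    for (; x != 0; x &= x - 1) ++c;          // clear the lowest set bit each iteration
    return c;
}

// f(k): 1-based rank of k among non-negative integers with the same popcount as k.
uint64_t f(uint64_t k)
{
    if (k == 0) return 1;                    // 0 is the only number with zero ones
    int m = 0;
    while ((uint64_t)1 << m <= k) ++m;       // k needs m bits, so k = 2^(m-1) + a
    return choose(m - 1, bitcount(k)) + f(k - ((uint64_t)1 << (m - 1)));
}

int main()
{
    std::cout << f(17) << "\n";              // prints 7, matching the enumeration above
}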
See "Efficiently Enumerating the Subsets of a Set". Look at Table 3, the "Bankers sequence". This is a method to generate exactly the sequence you need (if you reverse the bit order). Just run K iterations for the word with K bits. There is code to generate it included in the paper.

How do I find permutations or combinations for byte?

A character (1 byte) can represent 255 characters, but how do I actually find them?
(answering the comment)
There are 256 different combinations of 8 0s and 1s.
This is true because 256 = 2^8.
Each digit that you add doubles the number of combinations.
In a fixed width binary number, there are two choices for the first bit, two choices for the second bit, two choices for the third, and so on. The total number of combinations for an 8-bit byte is:
2 * 2 * 2 * 2 * 2 * 2 * 2 * 2 = 2^8 = 256
do you mean
for (char c = ' '; c <= '~'; c++) std::cout << c << std::endl;
?
This should show you the printable characters of ASCII proper. To see all characters in your font, try c = 0 and c < 255 (be careful with 255 and an infinite loop) - but most probably this won't work with your terminal.
8 bits can represent permutations of ones and zeros from binary 00000000 to 11111111. Just like 3 decimal digits can represent permutations of decimal numbers (0-9) from decimal 000 to 999.
You just start counting: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 and then after you reach the digit maximum, you carry over a 1 and start from 0: ..., 8, 9, 10. Right? And then continue this until you fill up all your digits with nines: ..., 997, 998, 999.
It's the same thing in binary: 0, 1 then carry over 1 and start from 0: 0, 1, 10. Continue: 10, 11, 100, 101, 110, 111, 1000, 1001 etc.
Simply counting from 0 to the maximum value that can be represented by your digits gives you all the permutations.
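The same counting idea in code - a minimal sketch that enumerates every 8-bit pattern (using an int counter sidesteps the 8-bit overflow pitfall mentioned above):

#include <bitset>
#include <iostream>

int main()
{
    for (int value = 0; value < 256; ++value)        // 256 = 2^8 combinations
        std::cout << std::bitset<8>(value) << "\n";  // 00000000 ... 11111111
}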