I am taught that given:
message M = 101001
polynomial C = x^3 + x^2 + 1 = 1101
I should add k bits to the end of the message such that the result P is divisible by C (where k is the degree of the polynomial, 3 in this case).
I can find no 3-bit combination (XYZ) that, when appended to M, satisfies this criterion.
Does anyone know what is wrong with my understanding?
I'm 5 months late to this, but here goes:
Perhaps thinking about this as ordinary integer (or binary) division is counterproductive. It is better to work it out with the continuous XOR method (that is, polynomial division mod 2), which gives a checksum of 001 rather than the expected 100. This, when appended to the source, generates the check value 101001001.
Try this C code to see a somewhat descriptive view.
I'm no expert, but I got most of my CRC fundamentals from here. Hope that helps.
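For anyone who wants to check the arithmetic, here is a small Python sketch of that continuous-XOR (mod-2 division) step; it reproduces the 001 remainder for M = 101001 and C = 1101 (the function name is my own):

def crc_remainder(message_bits, divisor_bits):
    # Append len(divisor)-1 zeros, then cancel each leading 1 by XORing the divisor under it.
    k = len(divisor_bits) - 1                  # degree of the polynomial (3 here)
    work = list(message_bits + "0" * k)
    for i in range(len(message_bits)):
        if work[i] == "1":
            for j, d in enumerate(divisor_bits):
                work[i + j] = str(int(work[i + j]) ^ int(d))
    return "".join(work[-k:])                  # the last k bits are the remainder

crc = crc_remainder("101001", "1101")
print(crc)             # 001
print("101001" + crc)  # 101001001, which divides cleanly by 1101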
Related
For example, in this answer to a reverse-bits function problem posted 4 years ago:
[reverse_Bits function]
https://stackoverflow.com/a/50596723/19574301
Code:
def reverse_Bits(n, no_of_bits):
    result = 0
    for i in range(no_of_bits):
        result <<= 1       # make room on the right of result
        result |= n & 1    # copy the lowest bit of n into that slot
        n >>= 1            # move on to the next bit of n
    return result
I don't understand how to think about the problem at all.
You AND the number (n) with one in order to check its rightmost bit. Then you right-shift the number by one, so the next time you AND it you are checking the second bit, and so on for all the bits. So basically you are adding a 1 to the result whenever the current bit is a 1. Alongside that, you left-shift the result, so I understand you are trying to put the bit at its correct index, and if there is a one you add it... I get lost here.
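If it helps to see where each bit ends up, here is the same function instrumented with a print per iteration (a quick sketch; the sample input is arbitrary):

def reverse_Bits_traced(n, no_of_bits):
    result = 0
    for i in range(no_of_bits):
        result <<= 1
        result |= n & 1
        n >>= 1
        print(f"step {i}: result={result:0{no_of_bits}b}  n={n:0{no_of_bits}b}")
    return result

reverse_Bits_traced(0b1011, 4)
# step 0: result=0001  n=0101
# step 1: result=0011  n=0010
# step 2: result=0110  n=0001
# step 3: result=1101  n=0000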
I mean, I know the code works and I know how, but I couldn't have written it from scratch without this reference, because I don't know how you come up with each step of the algorithm.
I don't know if I've explained my problem well or if it's just a mess, but I'm hoping somebody can help me!
If your question is, "how would I write this from scratch, without any help?", then I personally find that it comes from a combination of sketching out simple cases, working through them manually, and progressive implementation.
For example, you might start with a simple case: you have the number 3 (because it is easy) and you want to reverse its bits:
3 = 0000 0011 b
need to &1 and if it is non-zero, write 1000 0000 b
need to &2 and if it is non-zero, write 0100 0000 b
need to &4 and as it is zero, write nothing...
...
Okay, how can I automate 1, 2, 4, 8, 16, 32, ...? I can have a variable that doubles each time, or I can left-shift a number by 1. Take your pick; it does not matter.
For writing the values, same thing: how can I write 1000 0000 b and then 0100 0000 b, etc.? Well, start at 1000 0000 b and divide by 2, or right-shift by 1.
With these two simple things, you will end up with something like this for one bit:
result = 0
src_mask = 0x01
dst_mask = 0x80
if number & src_mask != 0:
    result |= dst_mask
One bit working. Then you add a loop so that you can do all the bits, with a *2 for src_mask and a /2 for dst_mask each time through so that each bit is addressed (see the sketch below). Again, this is all figured out from the scribbles on paper listing what I want to happen for each bit.
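Putting those scribbles together, a minimal Python sketch of the looped version might look like this (8 bits assumed; the variable names follow the fragment above):

def reverse_bits_8(number):
    result = 0
    src_mask = 0x01              # walks up:   0000 0001, 0000 0010, ...
    dst_mask = 0x80              # walks down: 1000 0000, 0100 0000, ...
    for _ in range(8):
        if number & src_mask != 0:
            result |= dst_mask
        src_mask <<= 1           # the *2
        dst_mask >>= 1           # the /2
    return result

print(bin(reverse_bits_8(0b00000011)))   # 0b11000000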
Then comes optimization: I don't like the 'if', so can I figure out a way of adding the bit directly, without testing? If the bit is 0 it adds 0, and if the bit is set, it adds the bit.
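One way that optimization can come out, sketched in Python (a branchless variant, not necessarily the only one): shift the bit of interest down to position 0, mask it, and shift it straight to its mirrored position, so there is no 'if' at all.

def reverse_bits_8_branchless(number):
    result = 0
    for i in range(8):
        bit = (number >> i) & 1        # 0 or 1, no test needed
        result |= bit << (7 - i)       # drop it directly into the mirrored position
    return result

print(bin(reverse_bits_8_branchless(0b00000011)))   # 0b11000000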
This is generally the progression. Manual scribbles, first design and then step-by-step enhancements.
I have X1...X6. I have taken all the combinations of two. For each of those sub-samples I have taken the mean, and then the mean of all of those means:
[(X1+X2)/2 + ... +(X5+X6)/2]/15, where 15 is the total number of combinations.
Now the mean of all of those sub-sample means is equal to the overall mean:
(X1+X2+X3+X4+X5+X6)/6.
I am asking for some help to either PROVE it (as a generalization) or explain why this happens, because even if I increase the size of the combinations, for example combinations of 6 taken 3 or 4 at a time, the result is the same.
Thank you
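As a quick numerical sanity check of that claim, here is a small Python sketch using exact fractions (the sample values are made up):

from itertools import combinations
from fractions import Fraction

xs = [3, 7, 1, 9, 4, 6]                         # any six numbers will do
overall = Fraction(sum(xs), len(xs))            # (X1 + ... + X6) / 6

for r in (2, 3, 4):
    subs = list(combinations(xs, r))
    mean_of_means = sum(Fraction(sum(s), r) for s in subs) / len(subs)
    print(r, mean_of_means == overall)          # prints True for every r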
OK, here's a quick page of scribbles that shows that, no matter how many items you have, if you take the mean of every combination of 2 and then take the mean of those means, you will always get the mean of the original numbers.
Explanation...
I work out what the number of combinations is first. For later use.
Then it's just a matter of simplifying the calculation.
Each number is used n-1 times. X1 is obvious. X2 is used n-2 times but also used once in the sum with X1. (This bit is a bit harder with r > 2)
At the end I substitute in the actual values for the number of combinations.
This then cancels out to give the sum of all the numbers over n, which is the mean.
The next step is to show this for all values r but that shouldn't be too hard.
Substituting r instead of 2, I found that each number is used (n-1) choose (r-1) times.
But then I'm getting the wrong cancellation out of it.
I know where I went wrong... I miscalculated (n-1) choose (r-1).
With the correct formula the answer falls out to S/n.
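For completeness, the general-r calculation can be written out in a couple of lines (a sketch in LaTeX notation, writing S = X_1 + ... + X_n):

\frac{1}{\binom{n}{r}} \sum_{|A| = r} \frac{1}{r} \sum_{i \in A} X_i
  = \frac{\binom{n-1}{r-1}}{r \binom{n}{r}} \, S
  = \frac{\binom{n-1}{r-1}}{n \binom{n-1}{r-1}} \, S
  = \frac{S}{n}

since each X_i appears in exactly \binom{n-1}{r-1} of the r-element subsets, and r \binom{n}{r} = n \binom{n-1}{r-1}.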
I am searching for a function that takes as input a number x (say 15), a number of bits d (4), and a number of bit changes m (2). The output of the function will be all the numbers that differ from the given number x in exactly m of its d bits.
For the given numbers (x = 15, d = 4 and m = 2) we get \binom{4}{2} = 6 different numbers.
I would like to know if such a function already exists in the C++ standard library, Boost, etc., that returns those numbers...
P.S.
It would also help to know a function that returns all such numbers for every number of changes up to m.
regards
I looked again at the comment from @Gregory Pakosz and found that it was not a bad direction to start with. I tried to implement the suggested code from Bit Twiddling Hacks in my program, and after fixing some bugs in my code it worked.
Thanks
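For reference, a brute-force sketch of the same idea in Python, simply flipping every combination of m bit positions out of d (this is not the Bit Twiddling Hacks version, and the names are mine):

from itertools import combinations

def numbers_at_bit_distance(x, d, m):
    # All d-bit numbers that differ from x in exactly m bit positions.
    results = []
    for positions in combinations(range(d), m):
        y = x
        for p in positions:
            y ^= 1 << p              # flip bit p
        results.append(y)
    return results

print(sorted(numbers_at_bit_distance(15, 4, 2)))   # [3, 5, 6, 9, 10, 12], i.e. C(4,2) = 6 numbers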
To understand the problem, let us consider these examples first:
4^6 = (2^2)^6 = 2^12 = (2^3)^4 = 8^4 = 16^3 = 4096.
Thus, we can say that 4^6, 2^12, 8^4 and 16^3 are the same.
27^3 = 3^9 = 19683
so both 27^3 and 3^9 are identical.
Now the problem is: for any given pair a, b, how do I compute all other possible pairs x, y (if any) where a^b = x^y? I am interested in an algorithm that can be efficiently implemented in C/C++.
For example:
If the inputs are like this:
4, 6 desired output: (2,12), (8,4)
8, 4 desired output: (2,12), (4,6)
27, 3 desired output: (3,9)
12, 6 desired output: (144,3), (1728,2)
7, 5 desired output: no duplicate possible
This is mostly a math problem. You can extract all the prime factors of a number, and you'll get a list of prime numbers and their exponents, i.e., 216000 = 2^6 * 3^3 * 5^3. Then take the GCD of the exponents: GCD(6,3,3) = 3. Divide the exponents by the GCD to get the smallest root of the number, 2^2 * 3^1 * 5^1 = 60. Then factor the GCD: the factors of 3 are 1 and 3. There is one way to express the number as an integral power for each factor of the GCD. You can express it as (60^3)^1 or (60^1)^3.
EDIT: fixed math error.
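For illustration, here is a small Python sketch of that approach; it uses trial-division factoring, so it is only meant for modest inputs, and the function names are my own:

from math import gcd
from functools import reduce

def prime_factorization(n):
    # Return {prime: exponent} for n >= 2, by trial division.
    factors = {}
    p = 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def as_powers(n):
    # All (base, exponent) pairs with base**exponent == n, via the GCD of the exponents.
    factors = prime_factorization(n)
    g = reduce(gcd, factors.values())
    root = 1
    for p, e in factors.items():
        root *= p ** (e // g)                  # smallest root: exponents divided by the GCD
    return [(root ** (g // d), d) for d in range(1, g + 1) if g % d == 0]

print(as_powers(216000))   # [(216000, 1), (60, 3)]
print(as_powers(4096))     # [(4096, 1), (64, 2), (16, 3), (8, 4), (4, 6), (2, 12)]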
If integers are the only thing you're interested in, you could just start taking n-th roots of the target number and checking whether the result is an integer.
You even have a convenient stop condition - whenever the root is below 2 you can stop. That is, the algorithm:
Given a result
N <- 2
Compute Nth root.
If it's an integer: add to answers
If it's < 2, exit loop
N += 1, back to previous step
This algorithm will always terminate.
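A minimal Python sketch of that loop; the integer_nth_root helper is my own and uses a binary search so the check stays exact:

def integer_nth_root(x, n):
    # Largest r with r**n <= x.
    lo, hi = 1, x
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** n <= x:
            lo = mid
        else:
            hi = mid - 1
    return lo

def power_representations(result):
    # All (base, exponent) pairs with exponent >= 2 and base**exponent == result.
    answers = []
    n = 2
    while True:
        root = integer_nth_root(result, n)
        if root < 2:                 # the stop condition: the roots only get smaller
            break
        if root ** n == result:
            answers.append((root, n))
        n += 1
    return answers

print(power_representations(4096))   # [(64, 2), (16, 3), (8, 4), (4, 6), (2, 12)]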
I believe that this problem is equivalent to the Integer factorization problem.
I said this because we can convert any composite number to a unique product of prime numbers
(see Fundamental theorem of arithmetic) and then start creating combinations with the factors and the powers.
Update: for example, take 4^6.
We convert it to a power of a prime factor, so we have 2^12.
Now we increase the base and we have 4^6, 8^4, ... until the exponent becomes 1.
I finally solved it myself. Using a naive integer factorization algorithm, my solution looks like this. It can be optimized further by using Pollard's rho algorithm.
EDIT: Code updated; now it can handle composite bases. Please point out if it has other bugs too :)
The smallest base that makes sense is 2. Also, the smallest exponent that makes sense is 2.
Given the result, you can determine the largest possible exponent.
Example: 4096 = 2^12, biggest exponent is 12.
This also works with results that aren't powers of 2: 19683 is a bit bigger than 2^14, so you won't be seeing any exponents bigger than 14.
Now you can take your number and work your way down from the top exponent toward 2 (the smallest exponent). For every trial exponent exp, take the exp-th root of the result; if that comes out as a clean integer, then you've found one solution.
You can use logarithms to calculate the log2 of a result, and to take the n-th root of a number. But you will need to watch out for rounding errors.
The advantage of this approach is that once you've set things up, you can just run down a simple loop, and once done you have all your results.
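A small Python sketch of that top-down loop; it uses a floating-point root only as an initial guess and then verifies candidates with exact integer arithmetic, which sidesteps the rounding issue mentioned above (the function name is mine):

def powers_top_down(result):
    # Walk exponents from the largest possible value down to 2, keeping exact matches.
    solutions = []
    max_exp = result.bit_length() - 1              # largest e with 2**e <= result
    for exp in range(max_exp, 1, -1):
        guess = round(result ** (1.0 / exp))       # floating-point estimate of the exp-th root...
        for base in (guess - 1, guess, guess + 1): # ...checked exactly against the result
            if base >= 2 and base ** exp == result:
                solutions.append((base, exp))
    return solutions

print(powers_top_down(19683))   # [(3, 9), (27, 3)]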
At the bottom of page 5 is the phrase "changes k to k ⊕ (1^(j+1))_2". Isn't 1 to any power still 1, even in binary? I'm thinking this must be a typo. I sent an email to Dr. Knuth to report this, but I don't expect to hear back for months. In the meantime, I'm trying to figure out what this is supposed to be.
This can be resolved by using the convention that (...)_2 represents a binary (radix-2) bit string. (1^(j+1))_2 then consists solely of j+1 ones, rather than referring to an exponentiation. You can see this convention explained more explicitly in TAOCP Volume 4 Fascicle 1 at page 8, for example:
If x is almost any nonzero 2-adic integer, we can write its bits in the form
x = (g 0 1^(a+1) 0^b)_2
in other words, x consists of some arbitrary (but infinite) binary string g, followed by a 0, which is followed by a+1 ones and followed by b zeros, for some a >= 0 and b >= 0.
[I have substituted the symbol alpha by g to save encoding problems]
Going back to your original query: k ⊕ (1^(j+1))_2 is equated with k ⊕ (2^(j+1) - 1),
implying that (1^(j+1))_2 = 2^(j+1) - 1. This holds because the left-hand side is the integer whose significant bits are j+1 contiguous ones, while the right-hand side is an ordinary exponentiation. For example, with j = 3:
(1^4)_2 = (1111)_2 = 2^4 - 1 = 15
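In code terms, XORing k with j+1 low-order ones just flips the lowest j+1 bits of k; a tiny Python check (the values are chosen arbitrarily):

j = 3
k = 0b1010110
mask = (1 << (j + 1)) - 1    # 2**(j+1) - 1, i.e. (1^(j+1))_2 = 0b1111
print(bin(mask))             # 0b1111
print(bin(k ^ mask))         # 0b1011001: the lowest j+1 bits of k have been flipped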
Hope that helps.
A list of known typos can be found on the errata page:
http://www-cs-faculty.stanford.edu/~knuth/taocp.html
Your reported typo is not there. If it really is a typo, you might be eligible for a cash reward from Knuth himself.