Can you get a list of the powers in a polynomial? (Pari/GP, polynomials)

I'm working with single-variable polynomials with coefficients +1/-1 (and zero). These can be very long and the range of powers can be quite big. It would be convenient for me to view the powers as a vector - is there any way of doing this quickly? I had hoped there would already be a command in Pari to do this, but I can't seem to find one.
Just an example to confirm what I'm trying to do...
Input: x^10 - x^8 + x^5 - x^2 + x + 1
Desired output: [10, 8, 5, 2, 1, 0]

You can use Vecrev to get the polynomial coefficients. After that just enumerate them to select the zero-based positions of non-zeros. You want the following one-liner:
nonzeros(xs) = Vecrev([x[2]-1 | x <- select(x -> x[1] != 0, vector(#xs, i, [xs[i], i]))])
Now you can easily get the list of polynomial powers:
p = x^10 - x^8 + x^5 - x^2 + x + 1
nonzeros(Vecrev(p))
>> [10, 8, 5, 2, 1, 0]

Related

How to represent the elements of the Galois field GF(2^8) and perform arithmetic in the NTL library

I am new to the NTL library and its GF2X, GF2E, GF2EX, etc. Now, I want to perform multiplication in the Galois field GF(2^8). The problem is as follows:
Rijndael (standardised as AES) uses the characteristic 2 finite field with 256 elements,
which can also be called the Galois field GF(2^8).
It employs the following reducing polynomial for multiplication:
x^8 + x^4 + x^3 + x^1 + 1.
For example, {53} • {CA} = {01} in Rijndael's field because
(x^6 + x^4 + x + 1)(x^7 + x^6 + x^3 + x)
= (x^13 + x^12 + x^9 + x^7) + (x^11 + x^10 + x^7 + x^5) + (x^8 + x^7 + x^4 + x^2) + (x^7 + x^6 + x^3 + x)
= x^13 + x^12 + x^9 + x^11 + x^10 + x^5 + x^8 + x^4 + x^2 + x^6 + x^3 + x
= x^13 + x^12 + x^11 + x^10 + x^9 + x^8 + x^6 + x^5 + x^4 + x^3 + x^2 + x
and
x^13 + x^12 + x^11 + x^10 + x^9 + x^8 + x^6 + x^5 + x^4 + x^3 + x^2 + x modulo x^8 + x^4 + x^3 + x^1 + 1
= (11111101111110 mod 100011011)
= {3F7E mod 11B} = {01}
= 1 (decimal)
My question is how to represent the reducing polynomial x^8 + x^4 + x^3 + x^1 + 1 and the polynomials x^6 + x^4 + x + 1 and x^7 + x^6 + x^3 + x in NTL. Then perform multiplication on these polynomials, and get the result {01}.
This is a good example for me to use this library.
Again, I don't know NTL, and I'm running Visual Studio 2015 on Windows 7. I've downloaded what I need, but I have to build a library from all the supplied source files, which will take a while to figure out. However, based on another answer, this should get you started. First, initialize the reducing polynomial for GF(256):
GF2X P; // apparently the length doesn't need to be set
SetCoeff(P, 0, 1);
SetCoeff(P, 1, 1);
SetCoeff(P, 3, 1);
SetCoeff(P, 4, 1);
SetCoeff(P, 8, 1);
GF2E::init(P);
Next, assign variables as polynomials:
GF2X A;
SetCoeff(A, 0, 1);
SetCoeff(A, 1, 1);
SetCoeff(A, 4, 1);
SetCoeff(A, 6, 1);
GF2X B;
SetCoeff(B, 1, 1);
SetCoeff(B, 3, 1);
SetCoeff(B, 6, 1);
SetCoeff(B, 7, 1);
GF2X C;
It looks like there is an overload for multiplication, so the following should work, assuming that the overload reduces modulo the GF(2^8) extension field set up by GF2E::init(P).
C = A * B;
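If that assumption about the overload turns out not to hold, here is a minimal sketch of my own (untested, assuming a standard NTL install) that makes the reduction explicit: once at the GF2X level with MulMod, and once by converting to GF2E elements, whose multiplication reduces modulo the modulus passed to GF2E::init.

#include <NTL/GF2X.h>
#include <NTL/GF2E.h>
#include <iostream>
using namespace NTL;
using namespace std;

int main()
{
    GF2X P;                    // reducing polynomial x^8 + x^4 + x^3 + x + 1
    SetCoeff(P, 0); SetCoeff(P, 1); SetCoeff(P, 3); SetCoeff(P, 4); SetCoeff(P, 8);

    GF2X A;                    // x^6 + x^4 + x + 1, i.e. {53}
    SetCoeff(A, 0); SetCoeff(A, 1); SetCoeff(A, 4); SetCoeff(A, 6);

    GF2X B;                    // x^7 + x^6 + x^3 + x, i.e. {CA}
    SetCoeff(B, 1); SetCoeff(B, 3); SetCoeff(B, 6); SetCoeff(B, 7);

    // Option 1: reduce explicitly at the GF2X level.
    GF2X C;
    MulMod(C, A, B, P);        // C = (A * B) mod P
    cout << C << endl;         // expected: [1], i.e. {01}

    // Option 2: work in GF(2^8) via GF2E, whose operator* reduces
    // modulo the modulus registered with GF2E::init.
    GF2E::init(P);
    GF2E a = conv<GF2E>(A), b = conv<GF2E>(B);
    cout << a * b << endl;     // expected: [1]
    return 0;
}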
As commented after the question, NTL is more oriented toward large fields. For GF(256) it would be faster to use bytes and lookup tables. For fields up to GF(2^64), xmm register intrinsics with carryless multiply (PCLMULQDQ) can be used to implement finite field math quickly without tables (some constants will be needed: the polynomial and its multiplicative inverse). For fields greater than GF(2^64), extended-precision math methods would be needed. For fields GF(p^n), where p != 2 and n > 1, unsigned integers can be used with lookup tables. Building the tables would involve some mapping between integers and GF(p) polynomial coefficients.
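To illustrate the lookup-table route mentioned above, here is a small sketch of my own (not part of the original answer). It uses the AES polynomial 0x11B from the question and the usual generator 0x03 to build log/exp tables; every name in it is an assumption of this example, not an NTL facility.

#include <cstdint>
#include <cstdio>

static uint8_t gf_exp[512];               // exp table, doubled so exp[log a + log b] never wraps
static uint8_t gf_log[256];

static void init_tables()
{
    uint16_t x = 1;
    for (int i = 0; i < 255; ++i) {
        gf_exp[i] = (uint8_t)x;
        gf_log[(uint8_t)x] = (uint8_t)i;
        x = (uint16_t)((x << 1) ^ x);     // multiply by the generator 0x03 = x + 1
        if (x & 0x100) x ^= 0x11B;        // reduce mod x^8 + x^4 + x^3 + x + 1
    }
    for (int i = 255; i < 512; ++i)       // duplicate the table for index sums >= 255
        gf_exp[i] = gf_exp[i - 255];
}

static uint8_t gf_mul(uint8_t a, uint8_t b)
{
    if (a == 0 || b == 0) return 0;
    return gf_exp[gf_log[a] + gf_log[b]]; // a * b = g^(log a + log b)
}

int main()
{
    init_tables();
    printf("%02X\n", gf_mul(0x53, 0xCA)); // prints 01, matching the question's example
    return 0;
}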

Extended polynomials in the NTL library

Here is code written using the NTL library:
int main()
{
    ZZ_p::init(ZZ(5));   // define GF(5)
    ZZ_pX P;
    BuildIrred(P, 4);    // generate an irreducible polynomial P
                         // of degree 4 over GF(5)
    ZZ_pE::init(P);      // define GF(5^4)
    ZZ_pEX f, g, h;      // declare polynomials over GF(5^4)
    random(f, 3);        // f is a random, monic polynomial of degree 3
    SetCoeff(f, 3);
    cout << f << endl << endl;
}
The output is:
[[3 1 1 4] [2 1 3 2] [1 0 3 1] [1]]
For example, [1 2 3] means 3x² + 2x + 1.
What is the notation for a polynomial over GF in this case?
Your question is a little bit difficult to understand. If I understand you right, the question is how to interpret the NTL representation [[3 1 1 4] [2 1 3 2] [1 0 3 1] [1]] of a polynomial over the finite field with 5⁴ elements.
First: the elements of the finite field with 5⁴ elements (called GF(5⁴)) are represented as polynomials in GF(5)[X] mod f, where f is an irreducible polynomial of degree 4.
This means a polynomial over GF(5⁴) is a polynomial whose coefficients are themselves polynomials in GF(5)[X] mod f.
So [[3 1 1 4] [2 1 3 2] [1 0 3 1] [1]] can be interpreted as
Y³ + (X³ + 3X² + 1)⋅Y² + (2X³ + 3X² + X + 2)⋅Y + (4X³ + X² + X + 3)
Notice: The comment in
random(f, 3); // f is a random, monic polynomial of degree 3
SetCoeff(f, 3);
is a little bit misleading. random(f,3) sets f to a random polynomial of degree less than 3. SetCoeff(f, 3) sets the coefficient of Y³ to 1 and after that it is a polynomial of degree 3.
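As a small addition of my own (not from the answer), the following lines can be appended at the end of the main() above to make that interpretation visible; coeff and deg are standard NTL accessors, and each printed coefficient is an element of GF(5^4), i.e. a ZZ_pX of degree less than 4:

    for (long i = deg(f); i >= 0; i--)
        cout << "coefficient of Y^" << i << " = " << coeff(f, i) << endl;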

Algorithm about number theory

Given two positive integers a, b (1 <= a <= 30, 1 <= b <= 10000000), define two sets (without repeated elements) L and R:
L = {x * y | 1 <= x <= a, 1 <= y <= b, x, y integers}
R = {x ^ y | 1 <= x <= a, 1 <= y <= b, x, y integers},
where ^ is the XOR operator.
For any two integers A ∈ L, B ∈ R, pad B to n+1 decimal digits (n is the number of decimal digits of b) by filling zeros in front of B, then append B to the end of A to get a new integer AB.
Compute the sum of all generated integers AB (in case the sum overflows, just return "sum mod 1000000007", where mod means the modular operation).
Note: your algorithm must run in no more than 3 seconds.
My algorithm is very simple: we can easily get the maximum number in set R, and the elements of R are 0, 1, 2, 3, ..., maxXor (the element max(a, b) may not be in R); set L is computed using a hash table. But the algorithm takes 4 seconds when a = 30, b = 100000.
Give an example:
a = 2, b = 4, so
L = {1 * 1, 1 * 2, 1 * 3, 1 * 4, 2 * 1, 2 * 2, 2 * 3, 2 * 4} = {1, 2, 3, 4, 6, 8}
R = {1^1,1^2,1^3,1^4,2^1,2^2,2^3,2^4} = {0, 1, 2, 3, 5, 6}
All the generated integers AB are:
{
100, 101, 102, 103, 105, 106,
200, 201, 202, 203, 205, 206,
300, 301, 302, 303, 305, 306,
400, 401, 402, 403, 405, 406,
600, 601, 602, 603, 605, 606,
800, 801, 802, 803, 805, 806
}
The sum of all AB is 14502
So the number AB can be written as 10^(n+1) * A + B, which means that, summing over all A and B, the total is equal to
|R| * 10^(n+1) * Sum(A in L) + |L| * Sum(B in R)
In your example,
|L| = 6
|R| = 6
Sum(A in L) = 24
Sum(B in R) = 17
n = 1
which when plugged into the above formula gives 14,502.
This reduces the runtime in the size of the sets from quadratic to linear, so you should see quite a huge improvement.
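As a quick sanity check of that formula, here is a small brute-force snippet of my own (the names and structure are mine, not the answerer's): it builds L and R for a = 2, b = 4 and compares the pair-by-pair sum with |R| * 10^(n+1) * Sum(A in L) + |L| * Sum(B in R).

#include <cstdio>
#include <cstdint>
#include <set>

int main()
{
    const int a = 2, b = 4;
    std::set<int64_t> L, R;
    for (int x = 1; x <= a; ++x)
        for (int y = 1; y <= b; ++y) {
            L.insert((int64_t)x * y);
            R.insert((int64_t)(x ^ y));
        }

    int n = 0;                                    // number of decimal digits of b
    for (int t = b; t > 0; t /= 10) ++n;
    int64_t shift = 1;                            // 10^(n+1)
    for (int i = 0; i <= n; ++i) shift *= 10;

    int64_t sumL = 0, sumR = 0, direct = 0;
    for (int64_t A : L) sumL += A;
    for (int64_t B : R) sumR += B;
    for (int64_t A : L)
        for (int64_t B : R)
            direct += A * shift + B;              // the concatenated number AB

    int64_t viaFormula = (int64_t)R.size() * shift * sumL
                       + (int64_t)L.size() * sumR;
    std::printf("direct = %lld, formula = %lld\n",
                (long long)direct, (long long)viaFormula);   // both print 14502
    return 0;
}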
The next bits I haven't fleshed out fully because I don't have the time to, but they feel like they should work:
First, notice that Sum(A in L) would be trivial to calculate using
1 + 2 + .. + n = n(n+1)/2
if there wasn't the constraint that L doesn't contain repeats. You can get around this though by exploiting the fact that a is very small: iteratively calculate the sums 1, .., a using the triangular number formula and use that information to avoid counting a product more than once.
For Sum(B in R), notice that when you compare y and x^y, at most the lowest lg(a) bits have changed. So you can split a sum of x^y values into two sums: one which deals with the bits from lg(a)+1 upwards and which depends only on b, and a second, more complex sum which deals with the bits from lg(a) downwards and which depends on a and b.
Edit: the OP asked me to expand on how to quickly compute Sum(A in L). There was a lot of stuff in this section in previous edits, but I've now actually sat down and worked through it rather than haphazardly batting it around in my head. It also turned out to be more complicated than I expected, so my apologies for not sitting down and working through it sooner, @tenos.
So what we want to do is take the sum of all distinct products x*y such that 1 <= x <= a and 1 <= y <= b. Well, that turns out to be pretty hard so let's start with a simpler problem: given two integers x1, x2 with x1 < x2, how can we compute the sum of all distinct products x1*y or x2*y where 1 <= y <= b?
If we dropped the distinctness criterion, this'd be easy: it'd simply be
x1*Sum(b) + x2*Sum(b)
where Sum(j) denotes the sum of integers 1 through j inclusive, and can be calculated using Gauss's formula for the triangular numbers. So again we can reduce the problem into something simpler: how can we find the sum of all products that appear in both the left and right terms?
Well, two products are equal if
x1*y1 == x2*y2
This happens exactly when x1*y1 == x2*y2 == k*LCM(x1, x2), where LCM is the lowest common multiple and k is some integer.
The sum of this over all k such that 1 <= k*LCM(x1, x2) <= x1*b is
R(x1, x2) = LCM(x1, x2) * Sum(x1*b/LCM(x1, x2))
where R stands for "repeats". Which means that our sum of all distinct products x1*y or x2*y where 1 <= y <= b is
x1*Sum(b) + x2*Sum(b) - R(x1, x2)
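Here is a small check of that two-value formula, written by me under the same definitions (Sum(j) is the triangular number j(j+1)/2 and R(x1, x2) the repeat term; std::lcm requires C++17):

#include <cstdio>
#include <cstdint>
#include <numeric>   // std::lcm, std::accumulate (C++17)
#include <set>

static int64_t triangular(int64_t j) { return j * (j + 1) / 2; }   // Sum(j)

int main()
{
    const int64_t cases[][3] = { {2, 3, 10}, {4, 6, 25}, {3, 7, 100} };   // {x1, x2, b}
    for (const auto& c : cases) {
        int64_t x1 = c[0], x2 = c[1], b = c[2];               // x1 < x2
        int64_t l = std::lcm(x1, x2);
        int64_t repeats = l * triangular(x1 * b / l);         // R(x1, x2)
        int64_t formula = x1 * triangular(b) + x2 * triangular(b) - repeats;

        std::set<int64_t> products;                           // brute-force distinct products
        for (int64_t y = 1; y <= b; ++y) {
            products.insert(x1 * y);
            products.insert(x2 * y);
        }
        int64_t brute = std::accumulate(products.begin(), products.end(), int64_t(0));

        std::printf("x1=%lld x2=%lld b=%lld: formula=%lld brute=%lld\n",
                    (long long)x1, (long long)x2, (long long)b,
                    (long long)formula, (long long)brute);    // the two always match
    }
    return 0;
}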
Next, let's extend the definition of R to be defined on three variables x1 < x2 < x3 as
R(x1, x2, x3) = LCM(x1, x2, x3) * Sum(x1*b/LCM(x1, x2, x3))
and similarly for 4 variables, 5 variables, etc. Then the sum of distinct products for three x1 < x2 < x3 is
x1*Sum(b) + x2*Sum(b) + x3*Sum(b) - R(x1, x2) - R(x1, x3) - R(x2, x3) + R(x1, x2, x3)
by the inclusion-exclusion principle.
So, let's make use of this. Define
Sum for x = 1: 1*Sum(b)
Sum for x = 2: 2*Sum(b) - R(2, 1)
Sum for x = 3: 3*Sum(b) - R(3, 2) - R(3, 1) + R(3, 2, 1)
Etc. Then the sum of all these sums up to x = a is the sum of all distinct products.
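For completeness, here is my own small verifier of that recipe (helper names are mine; no pruning, so the subset enumeration is exponential in a and only usable for tiny inputs): it computes each per-x sum via inclusion-exclusion over the subsets of {1, ..., x-1} and compares the total with a brute-force set of distinct products.

#include <cstdio>
#include <cstdint>
#include <numeric>   // std::lcm, std::accumulate (C++17)
#include <set>

static int64_t tri(int64_t j) { return j * (j + 1) / 2; }   // Sum(j)

int main()
{
    const int64_t a = 6, b = 50;

    int64_t total = 0;
    for (int64_t x = 1; x <= a; ++x) {
        int64_t part = x * tri(b);
        // inclusion-exclusion over the non-empty subsets T of {1, ..., x-1}
        for (uint32_t mask = 1; mask < (1u << (x - 1)); ++mask) {
            int64_t l = x, mn = x;
            int bits = 0;
            for (int64_t s = 1; s < x; ++s)
                if (mask & (1u << (s - 1))) {
                    l = std::lcm(l, s);   // LCM of x and the chosen smaller values
                    if (s < mn) mn = s;   // the smallest value caps the common products at mn*b
                    ++bits;
                }
            int64_t r = l * tri(mn * b / l);      // R(x, T)
            part += (bits % 2 == 1) ? -r : r;     // subtract for odd |T|, add back for even |T|
        }
        total += part;
    }

    std::set<int64_t> distinct;                   // brute force for comparison
    for (int64_t x = 1; x <= a; ++x)
        for (int64_t y = 1; y <= b; ++y)
            distinct.insert(x * y);
    int64_t brute = std::accumulate(distinct.begin(), distinct.end(), int64_t(0));

    std::printf("inclusion-exclusion = %lld, brute force = %lld\n",
                (long long)total, (long long)brute);          // the two should match
    return 0;
}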
Edit: @tenos turned this into a useful solution. He noticed that since i*Sum(b) contains many repeats, we can replace it with i*Sum(k..b), where k = max(b/minPrimeFactor(i) + 1, i).
Further, when using the inclusion-exclusion principle, many unnecessary computations can be pruned. For instance, if R(1,2) = NULL, there is no need to compute R(1,2,3), R(1,2,4), etc. In fact, when b is very big, there are many R(i,..,j) = NULL.

Prolog: Putting elements in a list for a decimal to binary conversion

Hello, I was trying to modify a decimal-to-binary conversion function so that it would display the results in a list. I'm new to Prolog and I can't seem to get it to function properly.
dec2bin(0,0).
dec2bin(1,1).
dec2bin(N,L):- N>1,X is N mod 2,Y is N//2, dec2bin(Y,L1), L = [L1|[X]].
Then this is the result:
86 ?- dec2bin(26,L).
L = [[[[1, 1], 0], 1], 0]
Can someone help me understand what it is that I'm doing wrong?
Thanks
If you amend your code
dec2bin(0,[0]).
dec2bin(1,[1]).
dec2bin(N,L):-
    N > 1,
    X is N mod 2,
    Y is N // 2,
    dec2bin(Y,L1),
    L = [X|L1].
you will get your solution with bits in reverse order:
?- dec2bin(26,L).
L = [0, 1, 0, 1, 1]
Instead of appending each bit, consider a final reverse/2, or invert the order by means of an accumulator
dec2bin(N,L) :- dec2bin(N,[],L).
dec2bin(0,L,[0|L]).
dec2bin(1,L,[1|L]).
dec2bin(N,L,R):-
    N > 1,
    X is N mod 2,
    Y is N // 2,
    dec2bin(Y,[X|L],R).
You need to apply some list concatenation, but you are just creating nested terms with L = [L1|[X]], where L1 is treated as if it were just a number.
If you treat it as a list instead, you can simply append the newly created X to it, but to do so you have to rewrite the base cases of your recursion:
dec2bin(0,[0]).
dec2bin(1,[1]).
dec2bin(N,L):-
    N > 1,
    X is N mod 2,
    Y is N // 2,
    dec2bin(Y,L1),
    append(L1, [X], L).
yielding:
?- dec2bin(26,L).
L = [1, 1, 0, 1, 0]
where append/3 can be a library predicate or your own implementation.

Meaning of bitwise AND (&) of a positive and negative number?

Can anyone explain what n & -n means?
And what is its significance?
It's an old trick that gives a number with a single bit in it, the bottom bit that was set in n. At least in two's complement arithmetic, which is just about universal these days.
The reason it works: the negative of a number is produced by inverting the number, then adding 1 (that's the definition of two's complement). When you add 1, every bit starting at the bottom that is set will overflow into the next higher bit; this stops once you reach a zero bit. Those overflowed bits will all be zero, and the bits above the last one affected will be the inverse of each other, so the only bit left is the one that stopped the cascade - the one that started as 1 and was inverted to 0.
P.S. If you're worried about running across one's complement arithmetic here's a version that works with both:
n & (~n + 1)
On pretty much every system that most people actually care about, it will give you the highest power of 2 that n is evenly divisible by.
I believe it is a trick to figure out whether n is a power of 2: (n == (n & -n)) iff n is a power of 2 (1, 2, 4, 8, ...), at least for n > 0.
N & (-N) gives you the lowest '1' bit (as a value, not as a position) in the binary form of N.
For example:
N = 144 (0b10010000) => N&(-N) = 0b10000
N = 7 (0b00000111) => N&(-N) = 0b1
One application of this trick is to decompose an integer into a sum of powers of 2.
For example:
To convert 22 = 16 + 4 + 2 = 2^4 + 2^2 + 2^1
22&(-22) = 2, 22 - 2 = 20
20&(-20) = 4, 20 - 4 = 16
16&(-16) = 16, 16 - 16 = 0
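The same walk in code, as a small illustration of my own (not from the answer):

#include <cstdio>

int main()
{
    int n = 22;
    while (n > 0) {
        int low = n & -n;          // lowest set bit: 2, then 4, then 16
        std::printf("%d ", low);
        n -= low;                  // remove that bit and continue
    }
    std::printf("\n");             // prints: 2 4 16
    return 0;
}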
It's just a bitwise AND of the number with its negation. Negative numbers are represented in two's complement.
So for instance, the bitwise AND 7 & (-7) is 00000111 & 11111001 = 00000001 = 1
I would add a self-explanatory example to Mark Ransom's wonderful exposition.
010010000 | +144 ~
----------|-------
101101111 | -145 +
        1 |
----------|-------
101110000 | -144

101110000 | -144 &
010010000 | +144
----------|-------
000010000 |   16
Because x & -x = {0, 1, 2, 1, 4, 1, 2, 1, 8, 1, 2, 1, 4, 1, 2, 1, 16, 1, 2, 1, 4, 1, 2, 1, 8, 1, 2, 1, 4, 1, 2, 1, 32} for x from 0 to 32, it can be used to jump through a sequence in a for loop in some applications, for example to store accumulated records:
for (; x < N; x += x & -x) {
    // do something here
    ++tr[x];
}
The loop traverses the sequence very quickly because each iteration jumps ahead by the lowest set bit of x, a power of two.
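For a concrete illustration (my own sketch, not part of the answer): the same stepping pattern is what drives a binary indexed (Fenwick) tree, where x += x & -x walks the update path and x -= x & -x walks the prefix-sum query path.

#include <cstdio>

const int N = 16;
int tr[N + 1];                        // 1-based Fenwick tree of accumulated records

void update(int x, int v)             // add v at position x
{
    for (; x <= N; x += x & -x) tr[x] += v;
}

int query(int x)                      // sum of positions 1..x
{
    int s = 0;
    for (; x > 0; x -= x & -x) s += tr[x];
    return s;
}

int main()
{
    for (int i = 1; i <= 10; ++i) update(i, i);   // store the values 1..10
    std::printf("%d\n", query(10));               // prints 55
    return 0;
}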
As @aestrivex has mentioned, for odd y it is just a way of writing 1. I even encountered this
for (int y = x; y > 0; y -= y & -y)
where y -= y & -y clears the lowest set bit of y on each pass; when y is odd that lowest bit is 1, so the step is simply y = y - 1, because
7 & (-7) is 00000111 & 11111001 = 00000001 = 1