Consider the following MAGMA code:
R<x> := PolynomialRing(Rationals());
F := NumberField(x^2 + 1);
E := ext<F | x^2 - 2>;
f := DefiningPolynomial(AbsoluteField(E));
K := NumberField(f);
a := Roots(f, E)[1][1];
Embed(K, E, a);
IsSubfield(K, E);
M := RelativeField(K, E);
BaseField(M);
Executed with the MAGMA calculator, running Magma V2.27-7 (seed: 3598054538).
The MAGMA Manual defines the Function RelativeField(F, L) as "Given number fields L and F such that Magma knows that F is a subfield of L, return an isomorphic number field M defined as an extension over F."
Since IsSubfield(K, E) above returns true, Magma certainly knows that K is a subfield of E.
Why is M defined over the Rationals and not over K? How can I actually get M defined over K?
Note: Changing the definition of f to (e.g.) x^2 - 2 or x - 1 yields the results I would expect.
Something behaves differently when K and E are isomorphic.
I'm trying to implement polynomial long division for polynomials of type int array, where the highest-degree coefficients are at the end. I'm basing my code on the pseudocode available on Wikipedia:
function n / d is
    require d ≠ 0
    q ← 0
    r ← n                      // At each step n = d × q + r
    while r ≠ 0 and degree(r) ≥ degree(d) do
        t ← lead(r) / lead(d)  // Divide the leading terms
        q ← q + t
        r ← r − t × d
    return (q, r)
This line
q = poly_add q t
is testing for equality between q and poly_add q t; it is not an assignment, and the compiler is warning you that you are ignoring the result of this test. You are also misreading your pseudocode: t is supposed to be a polynomial of degree degree r - degree d.
You need to use references, or transform your while loop into a recursive function.
Similarly, since r is an array, whose length cannot be changed, in
while ((Array.length r) >= n2) && (Array.length r != 0) do
the test does not change during the loop. Also, structural inequality is <>.
Another issue is that this line
t.(0) <- r.((Array.length r)-1)/p2.(n2-1)
is mutating your zero polynomial which is a bad idea.
Overall, it is not clear whether your polynomials are supposed to be mutable or not.
If they are not supposed to be mutable, you should avoid a.( ) <- altogether.
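To make this concrete, here is a minimal immutable sketch in OCaml of the division from the pseudocode, written as a recursive function instead of a while loop. The polynomials are int arrays with the constant term first and the highest-degree coefficient last, as in the question; the helper names (degree, poly_add, monomial_mul, poly_div_mod) are only illustrative choices, not taken from your code:

(* Degree of a polynomial stored with the constant term first;
   the zero polynomial gets degree -1. *)
let degree p =
  let rec go i = if i < 0 then -1 else if p.(i) <> 0 then i else go (i - 1) in
  go (Array.length p - 1)

(* Coefficient-wise addition, padding the shorter array with zeros. *)
let poly_add a b =
  let coeff p i = if i < Array.length p then p.(i) else 0 in
  Array.init (max (Array.length a) (Array.length b))
    (fun i -> coeff a i + coeff b i)

(* Multiply a polynomial by the monomial c * x^k. *)
let monomial_mul c k p =
  Array.init (Array.length p + k)
    (fun i -> if i < k then 0 else c * p.(i - k))

let poly_neg p = Array.map (fun c -> -c) p

(* Long division: returns (q, r) with n = d * q + r. With int coefficients
   the leading-term division is integer division, so the result is exact
   only when the leading coefficient of d divides that of r. *)
let poly_div_mod n d =
  if degree d < 0 then invalid_arg "poly_div_mod: d must be non-zero";
  let rec loop q r =
    if degree r < degree d then (q, r)
    else
      let c = r.(degree r) / d.(degree d) in   (* lead r / lead d *)
      let k = degree r - degree d in           (* degree of t *)
      if c = 0 then (q, r)                     (* leading terms don't divide: stop *)
      else
        (* q <- q + t;  r <- r - t * d, with t = c * x^k *)
        loop (poly_add q (monomial_mul c k [| 1 |]))
             (poly_add r (poly_neg (monomial_mul c k d)))
  in
  loop [| 0 |] n

If your polynomials are meant to stay mutable, the same structure also works with a while loop over mutable references to q and r instead of the recursive loop.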
I need to write code that composes a function f(x) with itself N times, using a recursive function.
What I wrote is:
let f x = x + 1 (*it can be any function*)
let rec compose n f x = if n = 0 then
"Can't compose anymore"
else compose (n-1) f (f x);;
which is obviously not right. I know the code is not finished, but I do not know how to continue. Am I on the right path or not? Can you tell me how to solve the problem?
You are on the right path. Based on the requirements, I would try to start from these equations:
compunere 1 f x == f x
The above says that applying f once to x is exactly the same as doing (f x).
compunere 2 f x == f (f x)
Likewise, applying f twice should compute f (f x). If you replace (f x) by a call to compunere, you have:
compunere 2 f x == f (f x) = f (compunere 1 f x)
The general pattern of recursion seems to be:
compunere n f x == f (compunere (n - 1) f x)
Note that the most general type of f is a -> b, but when f is called again with a value of type b, that means that a and b should be the same type, and so f really is an endomorphism, a function of type a -> a. That is the case for N >= 1, but in the degenerate case N = 0 you could have a different behaviour.
Applying f zero times to x could mean "return x", which means that compunere could theoretically return a value of type a for zero, for any f of type a -> b, with a and b possibly distinct; you could distinguish both cases with more code, but here we can simply let the typechecker enforce the constraint that a = b in all cases and have a uniform behaviour. You can also make 0 invalid (like negative numbers) by throwing an exception (negative applications could theoretically be positive applications of the inverse function, but you cannot compute that while knowing nothing about f; f could be non-invertible).
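As a concrete sketch of those equations (my own transcription, with n = 0 returning x and negative n rejected):

let rec compunere n f x =
  if n < 0 then invalid_arg "compunere: negative count"
  else if n = 0 then x             (* apply f zero times *)
  else f (compunere (n - 1) f x)   (* compunere n f x = f (compunere (n-1) f x) *)

(* Example: applying (fun x -> x + 1) three times to 0 gives 3. *)
let () = assert (compunere 3 (fun x -> x + 1) 0 = 3)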
Your code is a little bit different:
compunere 3 f x == (compunere 2 f (f x))
== (compunere 1 f (f (f x)))
== (compunere 0 f (f (f (f x))))
...
The advantage of your approach is that the recursive call to compunere is directly giving the result for the current computation: it is in tail position which allows the compiler to perform tail-call elimination.
When you reach N = 0, the value locally bound to x gives the result you want. Here, for N = 0 as an input, the only natural interpretation is also to return x.
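Concretely, keeping the shape of your compose and just returning x when n reaches 0 (a sketch; the string case from your draft is dropped so the result type matches the input):

let rec compose n f x =
  if n = 0 then x                 (* the accumulated value is the result *)
  else compose (n - 1) f (f x)    (* tail call: eligible for tail-call elimination *)

(* Example: three increments of 0. *)
let () = assert (compose 3 (fun x -> x + 1) 0 = 3)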
I'm trying to prove that a number is prime using the Znumtheory library.
In Znumtheory primes are defined in terms of relative primes:
Inductive prime (p:Z) : Prop :=
prime_intro :
1 < p -> (forall n:Z, 1 <= n < p -> rel_prime n p) -> prime p.
So to prove that 3 is prime I should apply prime_intro to the goal. Here is my try:
Theorem prime3 : prime 3.
Proof.
apply prime_intro.
- omega.
- intros.
unfold rel_prime. apply Zis_gcd_intro.
+ apply Z.divide_1_l.
+ apply Z.divide_1_l.
+ intros. Abort.
I don't know how to use the hypothesis H : 1 <= n < 3 which says that n is 1 or 2. I could destruct it, apply lt_eq_cases and destruct it again, but I would be stuck with a useless 1 < n in the first case.
I wasn't expecting to have a hard time with something that looks so simple.
I have a variant of @larsr's proof.
Require Import ZArith.
Require Import Znumtheory.
Require Import Omega.
Theorem prime3 : prime 3.
Proof.
constructor.
- omega.
- intros.
assert (n = 1 \/ n = 2) as Ha by omega.
destruct Ha; subst n; apply Zgcd_is_gcd.
Qed.
Like @larsr's proof, we prove that 1 < 3 using omega and then prove that either n = 1 or n = 2 using omega again.
To prove rel_prime 1 3 and rel_prime 2 3, which are defined in terms of Zis_gcd, we apply Zgcd_is_gcd. This lemma states that computing the gcd is enough, and that computation is trivial on concrete inputs like (1, 3) and (2, 3).
EDIT: We can generalize this result, using only Gallina. We define a boolean function is_prime that we prove correct w.r.t. the inductive specification prime. I guess this can be done in a more elegant way, but I am confused with all the lemmas related to Z. Moreover, some of the definitions are opaque and cannot be used (at least directly) to define a computable function.
Require Import ZArith.
Require Import Znumtheory.
Require Import Omega.
Require Import Bool.
Require Import Recdef.
(** [for_all] checks that [f] is true for any integer between 1 and [n] *)
Function for_all (f:Z->bool) n {measure Z.to_nat n} :=
  if n <=? 1 then true
  else f (n-1) && for_all f (n-1).
Proof.
intros.
apply Z.leb_nle in teq.
apply Z2Nat.inj_lt. omega. omega. omega.
Defined.
Lemma for_all_spec : forall f n,
for_all f n = true -> forall k, 1 <= k < n -> f k = true.
Proof.
intros.
assert (0 <= n) by omega.
revert n H1 k H0 H.
apply (natlike_ind (fun n => forall k : Z, 1 <= k < n ->
for_all f n = true -> f k = true)); intros.
- omega.
- rewrite for_all_equation in H2.
destruct (Z.leb_spec0 (Z.succ x) 1).
+ omega.
+ replace (Z.succ x - 1) with x in H2 by omega. apply andb_true_iff in H2.
assert (k < x \/ k = x) by omega.
destruct H3.
* apply H0. omega. apply H2.
* subst k. apply H2.
Qed.
Definition is_prime (p:Z) :=
(1 <? p) && for_all (fun k => Z.gcd k p =? 1) p.
Theorem is_prime_correct : forall z, is_prime z = true -> prime z.
Proof.
intros. unfold is_prime in H.
apply andb_true_iff in H. destruct H as (H & H0).
constructor.
- apply Z.ltb_lt. assumption.
- intros.
apply for_all_spec with (k:=n) in H0; try assumption.
unfold rel_prime. apply Z.eqb_eq in H0. rewrite <- H0.
apply Zgcd_is_gcd.
Qed.
The proof becomes nearly as simple as @Arthur's.
Theorem prime113 : prime 113.
Proof.
apply is_prime_correct; reflexivity.
Qed.
The lemma you mentioned is actually proved in that library, under the name prime_3. You can look up its proof on GitHub.
You mentioned how strange it is to have such a hard time to prove something so simple. Indeed, the proof in the standard library is quite complicated. Luckily, there are much better ways to work out this result. The Mathematical Components library advocates for a different style of development based on boolean properties. There, prime is not an inductively defined predicate, but a function nat -> bool that checks whether its argument is prime. Because of this, we can prove such simple facts by computation:
From mathcomp Require Import ssreflect ssrbool ssrnat prime.
Lemma prime_3 : prime 3. Proof. reflexivity. Qed.
There is a bit of magic going on here: the library declares a coercion is_true : bool -> Prop that is automatically inserted whenever a boolean is used in a place where a proposition is expected. It is defined as follows:
Definition is_true (b : bool) : Prop := b = true.
Thus, what prime_3 really is proving above is prime 3 = true, which is what makes that simple proof possible.
The library allows you to connect this boolean notion of what a prime number is to a more conventional one via a reflection lemma:
Lemma primeP p :
reflect (p > 1 /\ forall d, d %| p -> xpred2 1 p d) (prime p).
Unpacking notations and definitions, what this statement says is that prime p equals true if and only if p > 1 and every d that divides p is equal to 1 or p. I am afraid it would be a lengthy detour to explain how this reflection lemma works exactly, but if you find this interesting I strongly encourage you to look up more about Mathematical Components.
Here is a proof that I think is quite understandable as one steps through it.
It stays at the level of number theory and doesn't unfold definitions that much. I put in some comments; I don't know if they make it more or less readable. But try to step through it in the IDE, if you care to...
Require Import ZArith.
Require Import Znumtheory.
Inductive prime (p:Z) : Prop :=
prime_intro :
1 < p -> (forall n:Z, 1 <= n < p -> rel_prime n p) -> prime p.
Require Import Omega.
Theorem prime3 : prime 3.
Proof.
constructor.
omega. (* prove 1 < 3 *)
intros; constructor. (* prove rel_prime n 3 *)
exists n. omega. (* prove (1 | n) *)
exists 3. omega. (* prove (1 | 3) *)
(* our goal is now (x | 1), and we know (x | n) and (x | 3) *)
assert (Hn: n=1 \/ n=2) by omega; clear H. (* because 1 <= n < 3 *)
case Hn. (* consider cases n=1 and n=2 *)
- intros; subst; trivial. (* case n = 1: proves (x | 1) because we know (x | n) *)
- intros; subst. (* case n = 2: we know (x | n) and (x | 3) *)
assert (Hgcd: (x | Z.gcd 2 3)) by (apply Z.gcd_greatest; trivial).
(* Z.gcd_greatest: (x | 2) -> (x | 3) -> (x | Z.gcd 2 3) *)
apply Hgcd. (* prove (x | 1), because Z.gcd 2 3 = 1 *)
Qed.
Fun fact: @epoiner's answer can be used together with Ltac in a proof script for any prime number.
Theorem prime113 : prime 113.
Proof.
constructor.
- omega.
- intros n H;
repeat match goal with | H : 1 <= ?n < ?a |- _ =>
assert (Hn: n = a -1 \/ 1 <= n < a - 1) by omega;
clear H; destruct Hn as [Hn | H];
[subst n; apply Zgcd_is_gcd | simpl in H; try omega ]
end.
Qed.
However, the proof term gets unwieldy, and checking becomes slower and slower. This is why small scale reflection (ssreflect), where computation is moved into type checking, is probably preferable in the long run. It's hard to beat @Arthur Azevedo De Amorim's proof:
Proof. reflexivity. Qed. :-) both in terms of computation time and memory use.
http://ayazdzulfikar.blogspot.in/2014/12/penggunaan-fenwick-tree-bit.html?showComment=1434865697025#c5391178275473818224
For example, suppose we are told that the value of the function at index i is f(i) = i^k for some fixed k >= 0, and that this stays fixed throughout the problem. The queries look like this:
Add a value v to array[i], for all a <= i <= b.
Compute the sum of array[i] * f(i), for a <= i <= b (recall the earlier clarification of the function values).
To handle this, each prefix query can be written as Query(x) = m * g(x) - c, where g(x) = f(1) + f(2) + ... + f(x).
To answer these queries we need to know the values of m and c, so we maintain two separate BITs. Consider an update of the form "a b v". Computing m is essentially the classic Range Update - Point Query problem: for each index i, the possible cases are
i < a: m = 0
a <= i <= b: m = v
b < i: m = 0
Given these observations, it is clear that Range Update - Point Query on one of the BITs suffices to maintain m. To compute the value of c, we again look at the possible cases for each index i:
i < a: c = 0
a <= i <= b: c = v * g(a - 1)
b < i: c = v * (g(b) - g(a - 1))
Again this is Range Update - Point Query, but on a second BIT.
As a little help, here are the closed forms of g(x) for k <= 3:
k = 0: g(x) = x
k = 1: g(x) = x(x + 1)/2
k = 2: g(x) = x(x + 1)(2x + 1)/6
k = 3: g(x) = (x(x + 1)/2)^2
Now, an example problem: SPOJ - Horrible Queries. It is the same kind of problem as described above, with k = 0. Note also that some problems are more extreme: the function is not a single power i^k but a whole polynomial, e.g. LA - Alien Abduction Again. There, the solution is to keep a separate m BIT for each power of the polynomial, while a single combined BIT is enough for the c counters.
How can we use this concept if:
Given an array of integers A1, A2, …, AN.
Given x, y: add 1×2 to Ax, add 2×3 to Ax+1, add 3×4 to Ax+2, add 4×5 to Ax+3, and so on up to Ay.
Then return the sum of the range [Ax, Ay].
I know that there is an algorithm for converting a regular expression to an NFA.
But I was wondering whether there is an algorithm to convert an NFA to a regular expression.
If there is, what is it?
And if there isn't, I am also wondering whether every NFA can be converted to a regular expression.
Is there an NFA that no regular expression can represent?
Thank you! :D
Here is an algorithm where each transition is incrementally replaced with a regex, until only an initial and a final state remain: https://courses.engr.illinois.edu/cs373/sp2009/lectures/lect_08.pdf [PDF]
An NFA can be written as a system of inequalities (over a Kleene algebra), with one variable for each state, one inequality q ≥ 1 for each final state q, and one inequality q ≥ x r for each transition on x from state q to state r. This is a right-affine fixed point system over a Kleene algebra, whose least fixed point solution gives you, for any q, the regular expression recognized by the NFA that has q as the start state. The system can be collated, so that all the inequalities q ≥ A, q ≥ B, ..., q ≥ Z, for each given q, are combined into q ≥ A + B + ... + Z. The result is a matrix system 𝐪 ≥ 𝐚 + H 𝐪, where 𝐪 is the vector of all the variables, 𝐚 the vector of affine coefficients (0 for non-final states, 1 for final states, though those details are not important for what follows), and H the matrix of all the linear coefficients.
To solve a right-affine system, do so one variable at a time. In Kleene algebra, the least fixed point solution to x ≥ a + b x is x = b* a. This applies both singly and matrix-wise, so that the least fixed point solution to 𝐪 ≥ 𝐚 + H 𝐪, in matrix form, is 𝐪 = H* 𝐚.
Matrices over a Kleene algebra form a Kleene algebra, with matrix addition and matrix multiplication, respectively, as the sum and product operators, and the matrix star as the Kleene star. Finding the matrix star is one and the same process as solving the corresponding system 𝐪 ≥ 𝐚 + H 𝐪.
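To make the procedure concrete, here is a small OCaml sketch (my own illustration, not from any library) of the elimination described above: regular expressions as a tiny AST, and a solver for a right-affine system q ≥ a + H q that repeatedly applies the rule x ≥ a + b x ⇒ x = b* a and substitutes, working from the last variable down to the first:

(* Regular expressions over characters, with 0 (Empty) and 1 (Eps). *)
type re =
  | Empty                 (* matches nothing: the 0 of the algebra *)
  | Eps                   (* the empty word: the 1 of the algebra *)
  | Sym of char
  | Alt of re * re        (* sum *)
  | Cat of re * re        (* product *)
  | Star of re

(* Smart constructors with a few obvious simplifications. *)
let alt a b = match a, b with Empty, x | x, Empty -> x | _ -> Alt (a, b)
let cat a b = match a, b with
  | Empty, _ | _, Empty -> Empty
  | Eps, x | x, Eps -> x
  | _ -> Cat (a, b)
let star = function Empty | Eps -> Eps | x -> Star x

(* Solve q >= a + H q for the least fixed point, eliminating variables from
   the highest index down to 0 and returning the solution for variable 0
   (taken to be the start state). h is n x n, a has length n. *)
let solve h a =
  let n = Array.length a in
  let h = Array.map Array.copy h and a = Array.copy a in
  for k = n - 1 downto 0 do
    (* q_k >= a_k + sum_{j <= k} H_kj q_j, so q_k = (H_kk)* (a_k + ...). *)
    let s = star h.(k).(k) in
    a.(k) <- cat s a.(k);
    for j = 0 to k - 1 do
      h.(k).(j) <- cat s h.(k).(j)
    done;
    h.(k).(k) <- Empty;
    (* Substitute q_k into the inequalities of the remaining variables. *)
    for i = 0 to k - 1 do
      let c = h.(i).(k) in
      if c <> Empty then begin
        a.(i) <- alt a.(i) (cat c a.(k));
        for j = 0 to k - 1 do
          h.(i).(j) <- alt h.(i).(j) (cat c h.(k).(j))
        done;
        h.(i).(k) <- Empty
      end
    done
  done;
  a.(0)

(* Example: the 3-state NFA of the "Optimized Example" below, with states
   q = 0, r = 1, s = 2, q the start state and s the only final state. *)
let () =
  let a = Sym 'a' and b = Sym 'b' in
  let h = [|
    [| Alt (a, b); a;     Empty      |];   (* q: a,b self-loops, a to r *)
    [| Empty;      Empty; b          |];   (* r: b to s *)
    [| Empty;      Empty; Alt (a, b) |]    (* s: a,b self-loops *)
  |] in
  let affine = [| Empty; Empty; Eps |] in  (* only s is final *)
  ignore (solve h affine)   (* evaluates to (a + b)* a b (a + b)*, up to grouping *)

This is a direct transcription of the back-substitution described above; with smarter simplification and a good elimination order you get the compact forms shown in the optimized example.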
A Generic Example:
Consider the NFA with states q, r, s, with q the start state and s the only final state, and with transitions:
a: q → q, b: q → r, c: q → s,
d: r → q, e: r → r, f: r → s,
g: s → q, h: s → r, i: s → s.
Let (x, y, z) = (0, 0, 1) denote the corresponding affine coefficients. Then, the corresponding right-affine system is:
q ≥ x + a q + b r + c s,
r ≥ y + d q + e r + f s,
s ≥ z + g q + h r + i s.
Solve for s, first, to obtain
s = i* (z + g q + h r) = i* z + i* g q + i* h r.
Substitute in the other inequalities to get:
q ≥ x + c i* z + (a + c i* g) q + (b + c i* h) r,
r ≥ y + f i* z + (d + f i* g) q + (e + f i* h) r.
Rewrite this as
q ≥ x' + a' q + b' r,
r ≥ y' + d' q + e' r,
where
x' = x + c i* z, a' = a + c i* g, b' = b + c i* h,
y' = y + f i* z, d' = d + f i* g, e' = e + f i* h.
Solve for r to get
r = e'* (y' + d' q) = e'* y' + e'* d' q.
Substitute into the inequality for q to get
q ≥ (x' + b' e'* y') + (a' + b' e'* d') q
and rewrite this as
q ≥ x" + a" q
where
x" = x' + b' e'* y', a" = a' + b e'* d'.
Finally, solve this for q to get
q = a"* x".
This is also the general form of the solution for any NFA with 3 states.
Since q is the start state, a"* x" is the regular expression sought, with a", x", a', b', d', e', x', y', x, y and z as indicated above. If you try to in-line substitute them all, the expression blows up to a size that is exponential in the number of states, and is already large for three states.
An Optimized Example:
Consider the system for the NFA whose states are q, r, s, with q the start state, s the final state, and the transitions
a: q → r, a: q → q, b: q → q, b: r → s, a: s → s, b: s → s
The corresponding right-affine system is
q ≥ a r + a q + b q
r ≥ b s
s ≥ 1 + a s + b s
Solve for s first:
s ≥ 1 + a s + b s = 1 + (a + b) s ⇒ s = (a + b)*.
Substitute into the inequality for r and solve to find the least fixed point:
r ≥ b (a + b)* ⇒ r = b (a + b)*.
Finally, substitute into the inequality for q and solve to find the least fixed point:
q ≥ a b (a + b)* + (a + b) q ⇒ q = (a + b)* a b (a + b)*.
The resulting regular expression is (a + b)* a b (a + b)*. So, with some chess-like strategizing over the order of elimination, simpler and more compact forms of the solution can be found.