I'm trying to prove the Monad laws (left and right unit + associativity) for the Continuation Passing Style (CPS) Monad.
I'm using a type-class-based Monad definition from https://coq.inria.fr/cocorico/AUGER_Monad:
Class Monad (m: Type -> Type): Type :=
{
return_ {A}: A -> m A;
bind {A B}: m A -> (A -> m B) -> m B;
right_unit {A}: forall (a: m A), bind a return_ = a;
left_unit {A}: forall (a: A) B (f: A -> m B),
bind (return_ a) f = f a;
associativity {A B C}:
forall a (f: A -> m B) (g: B -> m C),
bind a (fun x => bind (f x) g) = bind (bind a f) g
}.
Notation "a >>= f" := (bind a f) (at level 50, left associativity).
The CPS type constructor is from Ralf Hinze's Functional Pearl about Compile-time parsing in Haskell
Definition CPS (S:Type) := forall A, (S->A) -> A.
I defined bind and return_ like this
Instance CPSMonad : Monad CPS :=
{|
return_ := fun {A} a {B} => fun (f:A->B) => f a ;
bind A B := fun (m:CPS A) (k: A -> CPS B)
=>(fun C => (m _ (fun a => k a _))) : CPS B
|}.
but I'm stuck with the proof obligations for right_unit and associativity.
- unfold CPS; intros.
gives the obligation for right_unit:
A : Type
a : forall A0 : Type, (A -> A0) -> A0
============================
(fun C : Type => a ((A -> C) -> C) (fun (a0 : A) (f : A -> C) => f a0)) = a
Would be very grateful for help!
EDIT: András Kovács pointed out that eta conversion in the type checker is sufficient, so intros; apply eq_refl., or reflexivity. is enough.
But first I had to correct my faulty definition of bind. (The invisible argument c was on the wrong side of the closing parenthesis...)
Instance CPSMonad : Monad CPS :=
{|
return_ S s A f := f s ;
bind A B m k C c := m _ (fun a => k a _ c)
|}.
The solution, as mentioned in a comment by András Kovács on Mar 11 at 12:26, is
Maybe you could try going straight for reflexivity? From Coq 8.5 there's eta conversion for records, so all the laws should be apparent immediately by normalization and eta conversion.
That gives us the following instance:
Instance CPSMonad : Monad CPS :=
{|
return_ S s A f := f s ;
bind A B m k C c := m _ (fun a => k a _ c) ;
right_unit A a := eq_refl ;
left_unit A a B f := eq_refl ;
associativity A B C a f g := eq_refl
|}.
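For comparison with Hinze's Haskell setting, the same construction can be sketched in Haskell; the newtype wrapper and the name runCPS are illustrative, not taken from the pearl or the question:
{-# LANGUAGE RankNTypes #-}

newtype CPS s = CPS { runCPS :: forall a. (s -> a) -> a }

instance Functor CPS where
  fmap f m = CPS (\k -> runCPS m (k . f))

instance Applicative CPS where
  pure x    = CPS (\k -> k x)
  mf <*> mx = CPS (\k -> runCPS mf (\f -> runCPS mx (k . f)))

instance Monad CPS where
  -- bind mirrors the Coq definition: feed the continuation k through f
  m >>= f = CPS (\k -> runCPS m (\a -> runCPS (f a) k))
-- Here, too, the monad laws follow by unfolding the definitions and eta,
-- which is the same argument that reflexivity exploits on the Coq side.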
I'm new to F# and have some compilation problems in this code fragment:
let rec mywhile p f s =
    if p s then s
    else
        let s1 = f s
        mywhile p f s1
let eps = 0.001
let dichotomy f a b =
    let out (a, b) = a
    let c a b = (a + b) / 2.
    out (mywhile (fun (a, b) -> a - b < eps)
                 (fun (a, b) -> if f c * f a < 0 then (a, c) else (c, b))
                 (a, b))
In particular, here: a - b < eps, then (a, c)
Expecting a type supporting the operator '-' but given a function type. You may be missing an argument to a function.
Since c is defined as c : float -> float -> float, and you're writing f c, it must mean that f : (float -> float -> float) -> 'x (for some 'x that we don't know yet).
Since you also write f a, and we already know that f's argument is float -> float -> float, it means that a : float -> float -> float.
And this, in turn means that you can't subtract anything from a. It's a function, not a number. This is what the compiler is telling you.
Usually, when you get yourself into a situation where you don't understand what your types are doing, go ahead and add some type annotations. They provide anchors for the compiler, sort of walls that type inference cannot cross, and thus they contain the type inconsistencies.
For example, if you specify the type of f:
let dichotomy (f : float -> float) a b =
...
this immediately reveals an error at f c, stating that c was expected to be a float, but actually is a function.
If I understand correctly, what you meant to do is to apply f to (c a b), not to c itself. And then, correspondingly, return that same value in the tuples (a, c) and (c, b):
out (mywhile (fun (a, b) -> a - b < eps)
             (fun (a, b) ->
                 let d = c a b
                 if f d * f a < 0. then (a, d) else (d, b)
             )
             (a, b))
(also, your zero was an int; I made it into a float by adding a dot after it)
I know there are other posts about this, but mine is slightly different.
I have a function that performs the task of foldl, using foldr. I have the solution given to me, but would like help understanding.
foldlTest :: (b -> a -> b) -> [a] -> (b -> b)
foldlTest f xs = foldr (\x r b -> r (f b x))
                       (\b -> b)
                       xs
And It is called using something like this:
foldlTest (-) [1,2,3] 10 = 4
First thing I understand is that my function takes in 2 arguments, but 3 are given in the above test case. This means that the 10 will take part in a lambda expression I assume.
1) Does the 10 take the place of the b in b -> b? (then the b would be the initial accumulator value)
What I don't understand is what the (\x r b -> r (f b x)) part does.
2) What is the value of each of the variables? I am very confused about this lambda function.
3) What exactly does the lambda function do and how is it different from a regular foldr?
OK, since none of our resident Haskell experts has yet stepped up to explain this, I thought I'd have a go. Please everyone, feel free to correct anything you see wrong, because I'm really just feeling my way towards the answer here, and the following will by its very nature be a bit rambling.
First, as always in Haskell, it's a good idea to look at the types:
Prelude> :t foldl
foldl :: Foldable t => (b -> a -> b) -> b -> t a -> b
Since we're just interested in lists here, and not generic Foldables, let's specialise this to:
foldl :: (b -> a -> b) -> b -> [a] -> b
and compare with the function you've been given:
foldlTest:: (b -> a -> b) -> [a] -> (b -> b)
Since Haskell functions are curried, which is another way of saying that -> arrows in type signatures are right associative, the last pair of parentheses there is unnecessary, so this is the same as:
foldlTest:: (b -> a -> b) -> [a] -> b -> b
Comparing with that for foldl above, we see that they're identical except for the fact that the last two parameters - the [a] and the b - have been flipped over.
So we can already observe that, while the library function foldl takes a fold function, a starting accumulator, and a list to fold, to produce a new accumulator, the foldlTest version takes a fold function, a list to fold, and a starting accumulator, to produce a new accumulator. That sounds like the exact same thing, which it is, but if we now reintroduce the pair of brackets which I took off a few steps ago, we see that foldlTest, in the form you've shown, can be thought of as:
taking a fold function and a list, and producing a function b -> b which describes how folding over the list transforms the initial accumulator into the final result.
Note in particular that what it returns, in this formulation, is indeed a function.
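For instance, partially applying it already yields a reusable function (a small sketch, assuming the foldlTest definition from the question; the name sumFrom is mine):
sumFrom :: Int -> Int
sumFrom = foldlTest (+) [1,2,3]
-- sumFrom 10 == 16 and sumFrom 0 == 6: one b -> b function, applied to two different accumulators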
So now we're ready to look at the actual implementation that you've seen:
foldlTest f xs = foldr (\x r b -> r (f b x))
                       (\b -> b)
                       xs
I'll be the first to admit that this is rather complicated, even confusing. As ever, let's examine the types.
The input types are easy. We know from the above discussion that f has type b -> a -> b, and xs has type [a].
OK, what about that lambda? Let's take it in stages:
It takes 3 arguments, x, r and b, in that order.
The result of the function call is r (f b x). This already tells us a lot, if we sit down and think about it!
For one, we know f has type b -> a -> b, so from f b x we know that b has type b and x has type a.
As for r, we see that it must be a function, because it's applied to f b x. And we know the latter has type b (from the type signature of f). So r has type b -> c, for some type c.
Therefore our complicated lambda function has the type a -> (b -> c) -> b -> c, where c is some type we haven't yet determined.
Now comes a key point. This lambda is presented as the first argument (the fold function) to the foldr function. Therefore it must have the type d -> e -> e, for some types d and e.
Remember that functions are curried, and although it seems like the lambda's signature takes 3 arguments, we can reduce it to 2 by rewriting as a -> (b -> c) -> (b -> c). That's an exact match for the signature we know foldr is looking for, with d equal to a and e equal to b -> c.
And if we specialise foldr's signature so that it accepts this type of function, we find that it is:
foldr :: (a -> (b -> c) -> (b -> c)) -> (b -> c) -> [a] -> (b -> c)
We still don't know what c is - but we don't have to wonder much longer. The above is the signature for a fold which goes over a list of as, and produces a function from b to c. The next argument to foldr, which is of type b -> c, is given (by the implementation we're trying to decipher) as \b -> b. This is just the identity function, of course, and crucially it is a function from a type to itself. So the b -> c type must actually be b -> b, or in other words c was the same as b all along!
So that lambda must have the following type:
a -> (b -> b) -> (b -> b)
It takes an a, and an endomorphism on b (that just means a function from b to itself), and returns another endomorphism on b. And this is the function we will fold the list of as with, taking the identity function as a starting point, to produce the endomorphism b -> b that will implement the left fold we're after.
The above type signature on its own doesn't give us much clue of how to implement it, given that a and b could be anything. However, we do have our function f that relates them - recall it takes a b and an a, and produces a b. Given that (by undoing the currying again), the above function requires us, given an a, a b -> b function, and a b, to produce another b, I can only see two non-trivial ways to do it:
apply the function to the b, then apply f to the a and the result.
apply f to the a and the b, then apply the b -> b function to the result
The second of these two is exactly what that lambda you are asking about does, as hopefully is now obvious from looking at it. (The first option would be written \x r b -> f (r b) x. I'm not actually sure what overall effect this would produce, although I haven't thought about it much.)
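For what it's worth, giving that first option a name and trying it on a small input shows the effect (the name altFold is mine):
altFold :: (b -> a -> b) -> [a] -> b -> b
altFold f xs = foldr (\x r b -> f (r b) x) (\b -> b) xs
-- altFold (-) [1,2,3] 10 == ((10 - 3) - 2) - 1 == 4
-- in general it behaves like foldl f b (reverse xs), i.e. a left fold over the reversed list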
I've covered a lot of ground, although it feels like more than it really is, because I've tried to be very painstaking. To recap, what the foldlTest function does is, given a list of as and a function f :: b -> a -> b, produces a function b -> b that is built by starting with the identity function, and walking right-to-left along the list, changing the current function r :: b -> b to the one that sends b to r (f b x) - where x :: a is the element of the list we are currently at.
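In code, that recipe is just the lambda from the implementation with f pulled out as a parameter (the name step is mine, restating the definition above):
step :: (b -> a -> b) -> a -> (b -> b) -> (b -> b)
step f x r = \b -> r (f b x)
-- so that foldlTest f xs = foldr (step f) (\b -> b) xs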
That's a rather algorithmic description of what foldlTest does. Let's try to see what it does to an actual list - not a concrete one, but let's say a 3-element list [a1, a2, a3]. We start with the identity function \b -> b, and successively transform it into:
\b -> f b a3 (recall that r starts as the identity function)
\b -> f (f b a2) a3 (this is just substituting the previous function as r into \b -> r (f b x), with a2 now playing the role of x)
\b -> f (f (f b a1) a2) a3
I hope you can now see that this looks an awful lot like folding the list from the left with the same function f. And by "looks an awful lot like", I actually mean it's identical! (If you haven't seen or tried it before, try to write out the successive stages of foldl f b [a1, a2, a3] and you'll see the identical pattern.)
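For reference, that expansion, using the usual definition foldl f acc [] = acc; foldl f acc (x:xs) = foldl f (f acc x) xs, works out to:
foldl f b [a1,a2,a3]
  = foldl f (f b a1) [a2,a3]
  = foldl f (f (f b a1) a2) [a3]
  = foldl f (f (f (f b a1) a2) a3) []
  = f (f (f b a1) a2) a3
which is exactly the function from the last step above, applied to b.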
So apologies again that this has been a bit rambling, but I hope this has given you enough information to answer the questions you asked. And don't worry if it makes your brain hurt a bit - it does mine too! :)
The answer you've been given (not the one on SO, the one you cited in your question) seems to be more difficult than necessary. I assume it is intended to teach you some aspects of folds, but apparently this is not working well. I try to show what's happening and in the process answer your questions.
3) What exactly does the lambda function do and how is it different from a regular foldr?
The whole thing just builds a left fold (foldl) out of a right fold (foldr) and flips the last two arguments. It is equivalent to
foldlTest f = flip (foldl f)
and it does so in a rather obscure way, by accumulating a function and doing the flipping via a lambda.
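A quick sanity check of that equivalence, assuming the foldlTest definition from the question:
foldlTest (-) [1,2,3] (10 :: Int)     -- 4
flip (foldl (-)) [1,2,3] (10 :: Int)  -- 4 as well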
1) Does the 10 take the place of the b in b -> b? (then the b would be the initial accumulator value) What I don't understand is what the (\x r b -> r (f b x)) part does.
Yes, correct. The 10 takes the role of the initial accumulator of a left fold.
2) What is the value of each of the variables? I am very confused about this lambda function.
To get an intuition as to what is happening, I find it helpful to do the actual lambda calculus step by step:
foldlTest (-) [1,2,3] 10
foldlTest (-) (1:(2:(3:[]))) 10
-- remember the definition of foldlTest, which is written without its final argument
foldlTest f xs = foldr (\x r b -> r (f b x)) (\b -> b) xs
-- add the hidden parameter and you get
foldlTest f xs b' = (foldr (\x r b -> r (f b x)) (\b -> b) xs) b'
-- apply that definition with f = (-), xs = (1:(2:(3:[]))), b' = 10
(foldr (\x r b -> r ((-) b x)) (\b -> b) (1:(2:(3:[])))) 10
(foldr (\x r b -> r (b - x)) (\b -> b) (1:(2:(3:[])))) 10
-- the inner lambda function is curried, thus we can write it as
-- \x r -> (\b -> r (b - x)), which is equivalent but will be handy later on.
(
foldr (\x r -> (\b -> r (b - x))) (\b -> b) (1:(2:(3:[])))
) 10
-- keep in mind foldr is defined as
foldr f' b'' [] = b''
foldr f' b'' (x:xs') = f' x (foldr f' b'' xs')
-- apply second row of foldr with f' = (\x r -> (\b -> r (b - x))),
-- b'' = (\b -> b), x = 1 and xs' = (2:(3:[]))
(
(\x r -> (\b -> r (b - x))) 1 (foldr (\x r -> (\b -> r (b - x))) (\b -> b) (2:(3:[])))
) 10
-- apply accumulation lambda for the first time with x = 1,
-- r = foldr (\x r -> (\b -> r (b - x))) (\b -> b) (2:(3:[])) gives
(
\b -> (foldr (\x r -> (\b -> r (b - x))) (\b -> b) (2:(3:[]))) (b - 1)
) 10
-- now we repeat the process for the inner folds
(
\b -> (
foldr (\x r -> (\b -> r (b - x))) (\b -> b) (2:(3:[]))
) (b - 1)
) 10
(
\b -> (
(\x r -> (\b -> r (b - x))) 2 (foldr (\x r -> (\b -> r (b - x))) (\b -> b) (3:[]))
) (b - 1)
) 10
(
\b -> (
\b -> (foldr (\x r -> (\b -> r (b - x))) (\b -> b) (3:[])) (b - 2)
) (b - 1)
) 10
(
\b -> (
\b -> (
foldr (\x r -> (\b -> r (b - x))) (\b -> b) (3:[])
) (b - 2)
) (b - 1)
) 10
(
\b -> (
\b -> (
(\x r -> (\b -> r (b - x))) 3 (foldr (\x r -> (\b -> r (b - x))) (\b -> b) [])
) (b - 2)
) (b - 1)
) 10
(
\b -> (
\b -> (
\b -> (foldr (\x r -> (\b -> r (b - x))) (\b -> b) []) (b - 3)
) (b - 2)
) (b - 1)
) 10
(
\b -> (
\b -> (
\b -> (
foldr (\x r -> (\b -> r (b - x))) (\b -> b) []
) (b - 3)
) (b - 2)
) (b - 1)
) 10
-- Now the first line of foldr's definition comes in to play
(
\b -> (
\b -> (
\b -> (
\b -> b
) (b - 3)
) (b - 2)
) (b - 1)
) 10
-- So we can see that the foldlTest function built another function
-- doing what we want:
(\b -> (\b -> (\b -> (\b -> b) (b - 3)) (b - 2)) (b - 1)) 10
(\b -> (\b -> (\b -> b) (b - 3)) (b - 2)) (10 - 1)
(\b -> (\b -> b) (b - 3)) ((10 - 1) - 2)
(\b -> b) (((10 - 1) - 2) - 3)
(((10 - 1) - 2) - 3)
((9 - 2) - 3)
(7 - 3)
4
By the definition of foldlTest, we have
foldlTest (-) xs b = foldr g n xs b
where
n b = b
g x r b = r (b - x)
By the definition of foldr, we have
foldr g n [x,y,z] = g x (foldr g n [y,z])
but also
foldr g n [x,y,z] b = g x (foldr g n [y,z]) b -- (1)
---- r -----------
= foldr g n [y,z] (b-x)
(when used "inside" the foldlTest), and so, by repeated application of (1),
= g y (foldr g n [z]) (b-x)
= foldr g n [z] ((b-x)-y)
= g z (foldr g n [] ) ((b-x)-y)
= foldr g n [] (((b-x)-y)-z)
= n (((b-x)-y)-z)
= (((b-x)-y)-z)
Thus an expression which is equivalent to the left fold is built by the right fold straight up, because g uses its argument r in tail position. And thus
foldlTest (-) [1,2,3] 10
-- [x,y,z] b
==
(((10 - 1) - 2) - 3)
==
foldl (-) 10 [1,2,3]
and so we see that no, the b in the n = (\b -> b) does not accept the 10, but rather it accepts the whole expression equivalent to the left fold that has been built by the right fold.
But yes, 10 is the initial accumulator value in the expression equivalent of the left fold, as intended, that has been built by the right fold.
Agda makes use of the following operator to show inverses between sets:
_↔_ : ∀ {f t} → Set f → Set t → Set _
Is there an equivalent in Idris? I'm trying to define bag equality on lists
data Elem : a -> List a -> Type where
  Here : {xs : List a} -> Elem x (x :: xs)
  There : {xs : List a} -> Elem x xs -> Elem x (y :: xs)
(~~) : List a -> List a -> Type
xs ~~ ys {a} = Elem a xs <-> Elem a ys
So that we can construct l1 ~~ l2 when l1 and l2 have the same elements in any order.
The Agda definition of ↔ seems to be very complicated and I am not sure if there is something equivalent in the Idris standard library.
The basic idea behind Agda's ↔ is to package up two functions with two proofs of roundtripping, which is easy enough to do in Idris as well:
infix 7 ~~

data (~~) : Type -> Type -> Type where
  MkIso : {A : Type} -> {B : Type} ->
          (to : A -> B) -> (from : B -> A) ->
          (fromTo : (x : A) -> from (to x) = x) ->
          (toFrom : (y : B) -> to (from y) = y) ->
          A ~~ B
You can use it like in the following minimal example:
notNot : Bool ~~ Bool
notNot = MkIso not not prf prf
  where
    prf : (x : Bool) -> not (not x) = x
    prf True = Refl
    prf False = Refl
The reason the Agda version is more complicated is because it is parameterized over the choice of equality as well, so it doesn't have to be the propositional one (which is the strictest/finest there is). Parameterizing the Idris definition of ~~ above from = to arbitrary PA : A -> A -> Type and PB : B -> B -> Type is left as an exercise to the reader.
I'm confused about let polymorphism in OCaml.
Consider the following code:
A:
let f = (fun v -> v) in
((f 3), (f true))
B:
let f = (fun v -> v) in
((fun g ->
let f = g in
f) f)
C:
let f = (fun v -> v) in
((fun g ->
let f = g in
((f 3), (f true))) f)
For A and B, there is no problem. But for C, OCaml reports error:
Error: This expression has type bool but an expression was expected of type
int
So for A, when evaluating ((f 3), (f true)), f's type is 'a -> 'a,
for B, when evaluating let f = g in f, f's type is 'a -> 'a.
But for C, when evaluating ((f 3), (f true)), f's type is int -> int.
Why doesn't C's f have type 'a -> 'a?
I have difficulty understanding the implementation of OCaml's let polymorphism; I'd appreciate a concise description of it with respect to this question.
Your code is unnecessarily confusing because you're using the same name f for two different things in B and also two different things in C.
Inside C you have this function:
fun g -> let f = g in (f 3, f true)
Again this is unnecessarily complicated; it's the same as:
fun g -> (g 3, g true)
The reason this isn't allowed is that it only works if g is a polymorphic function. This requires rank 2 polymorphism, i.e., it requires the ability to define function parameters that are polymorphic.
I'm not exactly sure what you're trying to do, but you can have a record type whose field is a polymorphic function. You can then use this record type to define something like your function:
# type r = { f : 'a . 'a -> 'a };;
type r = { f : 'a. 'a -> 'a; }
# (fun { f = g } -> (g 3, g true)) { f = fun x -> x };;
- : int * bool = (3, true)
# let myfun { f = g } = (g 3, g true);;
val myfun : r -> int * bool = <fun>
# myfun { f = fun x -> x };;
- : int * bool = (3, true)
The downside is that you need to pack and unpack your polymorphic function.
As a side comment, your example doesn't seem very compelling, because the number of functions of type 'a -> 'a is quite limited.
I've found myself repeating a pattern over and over again, and I'd like to abstract it. I'm fairly confident that coq is sufficiently expressive to capture the pattern, but I'm having a bit of trouble figuring out how to do so. I'm defining a programming language, which has mutually recursive inductive datatypes representing the syntactic terms:
Inductive Expr : Set :=
| eLambda (x:TermVar) (e:Expr)
| eVar (x:TermVar)
| eAscribe (e:Expr) (t:IFType)
| ePlus (e1:Expr) (e2:Expr)
| ... many other forms ...
with DType : Set :=
| tArrow (x:TermVar) (t:DType) (c:Constraint) (t':DType)
| tInt
| ... many other forms ...
with Constraint : Set :=
| cEq (e1:Expr) (e2:Expr)
| ...
Now, there are a number of functions that I need to define over these types. For example, I'd like a function to find all of the free variables, a function to perform substitution, and a function to pull out the set of all constraints. These functions all have the following form:
Fixpoint doExpr (e:Expr) := match e with
(* one or two Interesting cases *)
| ...
(* lots and lots of boring cases,
** all of which just recurse on the subterms
** and then combine the results in the same way
*)
| ....
with doIFType (t:IFType) := match t with
(* same structure as above *)
with doConstraint (c:Constraint) := match c with
(* ditto *)
For example, to find free variables, I need to do something interesting in the variable cases and the cases that do binding, but for everything else I just recursively find all of the free variables of the subexpressions and then union those lists together. Similarly for the function that produces a list of all of the constraints. The substitution case is a little bit more tricky, because the result types of the three functions are different, and the constructors used to combine the subexpressions are also different:
Variable x:TermVar, v:Expr.
Fixpoint substInExpr (e:Expr) : **Expr** := match e with
(* interesting cases *)
| eLambda y e' =>
if x = y then eLambda y e' else eLambda y (substInExpr e')
| eVar y =>
if x = y then v else eVar y
(* boring cases *)
| eAscribe e' t => **eAscribe** (substInExpr e') (substInType t)
| ePlus e1 e2 => **ePlus** (substInExpr e1) (substInExpr e2)
| ...
with substInType (t:Type) : **Type** := match t with ...
with substInConstraint (c:Constraint) : **Constraint** := ...
.
Writing these functions is tedious and error prone, because I have to write out all of the uninteresting cases for each function, and I need to make sure I recurse on all of the subterms. What I would like to write is something like the following:
Fixpoint freeVars X:syntax := match X with
| syntaxExpr eVar x => [x]
| syntaxExpr eLambda x e => remove x (freeVars e)
| syntaxType tArrow x t1 c t2 => remove x (freeVars t1)++(freeVars c)++(freeVars t2)
| _ _ args => fold (++) (map freeVars args)
end.
Variable x:TermVar, v:Expr.
Fixpoint subst X:syntax := match X with
| syntaxExpr eVar y => if y = x then v else eVar y
| syntaxExpr eLambda y e => eLambda y (if y = x then e else (subst e))
| syntaxType tArrow ...
| _ cons args => cons (map subst args)
end.
The key to this idea is the ability to generally apply a constructor to some number of arguments, and to have some kind of "map" that preserves the type and number of arguments.
Clearly this pseudocode doesn't work, because the _ cases just aren't right. So my question is, is it possible to write code that is organized this way, or am I doomed to just manually listing out all of the boring cases?
Here's another way, though it's not everyone's cup of tea.
The idea is to move recursion out of the types and the evaluators, parameterizing it instead, and turning your expression values into folds. This offers convenience in some ways, but more effort in others -- it's really a question of where you end up spending the most time. The nice aspect is that evaluators can be easy to write, and you won't have to deal with mutually recursive definitions. However, some things that are simpler the other way can become brain-twisters in this style.
Require Import Ssreflect.ssreflect.
Require Import Ssreflect.ssrbool.
Require Import Ssreflect.eqtype.
Require Import Ssreflect.seq.
Require Import Ssreflect.ssrnat.
Inductive ExprF (d : (Type -> Type) -> Type -> Type)
(c : Type -> Type) (e : Type) : Type :=
| eLambda (x:nat) (e':e)
| eVar (x:nat)
| eAscribe (e':e) (t:d c e)
| ePlus (e1:e) (e2:e).
Inductive DTypeF (c : Type -> Type) (e : Type) : Type :=
| tArrow (x:nat) (t:e) (c':c e) (t':e)
| tInt.
Inductive ConstraintF (e : Type) : Type :=
| cEq (e1:e) (e2:e).
Definition Mu (f : Type -> Type) := forall a, (f a -> a) -> a.
Definition Constraint := Mu ConstraintF.
Definition DType := Mu (DTypeF ConstraintF).
Definition Expr := Mu (ExprF DTypeF ConstraintF).
Definition substInExpr (x:nat) (v:Expr) (e':Expr) : Expr := fun a phi =>
e' a (fun e => match e return a with
(* interesting cases *)
| eLambda y e' =>
if (x == y) then e' else phi e
| eVar y =>
if (x == y) then v _ phi else phi e
(* boring cases *)
| _ => phi e
end).
Definition varNum (x:ExprF DTypeF ConstraintF nat) : nat :=
match x with
| eLambda _ e => e
| eVar y => y
| _ => 0
end.
Compute (substInExpr 2 (fun a psi => psi (eVar _ _ _ 3))
(fun _ phi =>
phi (eLambda _ _ _ 1 (phi (eVar _ _ _ 2)))))
nat varNum.
Compute (substInExpr 1 (fun a psi => psi (eVar _ _ _ 3))
(fun _ phi =>
phi (eLambda _ _ _ 1 (phi (eVar _ _ _ 2)))))
nat varNum.
Here is a way to go, but it does not give very readable code: use tactics.
Let's say I have a language with many constructors of various arity, and I want to apply a specific goal only to the case given by constructor aaa, and I want to traverse all the other constructors, to get down to the aaa's that may appear under them. I can do the following:
Say you want to define a function A -> B (where A is the type of the language). You will need to keep track of which case you are in,
so you should define a phantom type over A that reduces to B.
Definition phant (x : A) : Type := B.
I suppose that the union function has type B -> B -> B and that you have a default value in B, called empty_B
Ltac generic_process f acc :=
match goal with
|- context [phant (aaa _)] => (* assume aaa has arity 1 *)
intros val_of_aaa_component; exact (process_this_value val_of_aaa_component)
| |- _ =>
(* This should be used when the next argument of the current
constructor is in type A, you want to process recursively
down this argument, using the function f, and keep this result
in the accumulator. *)
let v := fresh "val_in_A" in
intros v; generic_process f (union acc (f v))
(* This clause will fail if val_in_A is not in type A *)
| |- _ => let v := fresh "val_not_in_A" in
(* This should be used when the next argument of the current
constructor is not in type A, you want to ignore it *)
intros v; generic_process f acc
| |- phant _ =>
(* this rule should be used at the end, when all
the arguments of the constructor have been used. *)
exact acc
end.
Now, you define the function by a proof. Let's say the function is called process_aaa.
Definition process_aaa (x : A) : phant x.
fix process_aaa 1.
(* This adds process_aaa : forall x:A, phant x. in the context. *)
intros x; case x; generic_process process_aaa empty_B.
Defined.
Note that the definition of generic_process only mentions one constructor by name, aaa; all others
are treated in a systematic way. We use the type information to detect those sub-components in which we want to perform a recursive descent. If you have several mutually inductive types, you can add arguments to the generic_process function to indicate which function will be used for each type and have more clauses, one for each argument of each type.
Here is a test of this idea, where the language has 4 constructors, values to be processed are the ones that appear in the constructor var and the type nat is also used in another constructor (c2). We use the type of lists of natural numbers as the type B, with nil as the empty and singleton lists as result when encountering variables. The function collects all occurrences of var.
Require Import List.
Inductive expr : Type :=
var : nat -> expr
| c1 : expr -> expr -> expr -> expr
| c2 : expr -> nat -> expr
| c3 : expr -> expr -> expr
| c4 : expr -> expr -> expr
.
Definition phant (x : expr) : Type := list nat.
Definition union := (@List.app nat).
Ltac generic_process f acc :=
match goal with
|- context[phant (var _)] => exact (fun y => y::nil)
| |- _ => let v := fresh "val_in_expr" in
intros v; generic_process f (union acc (f v))
| |- _ => let v := fresh "val_not_in_expr" in
intros v; generic_process f acc
| |- phant _ => exact acc
end.
Definition collect_vars : forall x : expr, phant x.
fix collect_vars 1.
intros x; case x; generic_process collect_vars (@nil nat).
Defined.
Compute collect_vars (c1 (var 0) (c2 (var 4) 1)
(c3 (var 2) (var 3))).
The last computation returns a list containing the values 0, 4, 2 and 3 as expected, but not 1, which did not occur inside a var constructor.