Why isn't this FStar function accepted?

I'd like to understand why this function is not accepted as a terminating one:
val while_items: ni: nat -> ni_max: nat -> nw: nat -> bool
let rec while_items ni ni_max nw =
  (if ni < ni_max
   then while_items (ni + nw) ni_max nw
   else true)
FStar rejects it and reports the following message:
(Error) Could not prove termination of this recursive call: The solver found a (partial) counterexample...
It's likely not the same binding issue as the one in my related question, "FStar function strange behavior".
Can FStar show the counterexample?

As explained in the tutorial, by default F* uses a decreases measure that puts all arguments in lexicographic order:
val while_items: ni: nat -> ni_max: nat -> nw: nat -> Tot bool (decreases %[ni;ni_max;nw])
which doesn't work for proving this function terminating, since clearly ni + nw is not smaller than ni.
With a correct decreases measure and a precondition that nw is positive this does go through:
val while_items: ni: nat -> ni_max: nat -> nw: pos ->
  Tot bool (decreases (if ni < ni_max then ni_max - ni else 0))
This is not exactly the original example, but the original definitely loops forever for nw = 0! And even after this fix, the code makes little sense as is; such loops are not idiomatic in functional programming.
Finally, F* can't produce counterexamples, and this misleading error message has recently been fixed: https://github.com/FStarLang/FStar/pull/2075

This function does not terminate on all inputs. Consider ni < ni_max and nw=0. Unfortunately, F* does not find concrete counterexamples like this.
You can prove a variant of your function terminating, but you have to tell F* explicitly what's decreasing. Here we're using the well-founded ordering on nat.
let as_nat (x:int) : nat = if x < 0 then 0 else x
val while_items: ni: nat -> ni_max: nat -> nw: nat{ni < ni_max ==> nw > 0} ->
  Tot bool (decreases (as_nat (ni_max - ni)))
let rec while_items ni ni_max nw =
  (if ni < ni_max
   then while_items (ni + nw) ni_max nw
   else true)


Trying to call function i-times in OCaml with a for cycle

I have been trying to create a loop to call a function changing i times, but for some reason the loop itself always spits out an error. I have also tried a recursive function that calls itself, but that didn't work either.
Is it even possible to make it work with for loops?
r is a list of lists.
a and b are two immutable variables.
(List.nth (r) (i)) gives a list.
let rec changing (lista: 'a zlista) (a: int) (b: int) =
  match lista with
  | Vazio -> failwith "NO"
  | Nodo (n, l, r) ->
    if a <= n && n <= b then n
    else if a < n && b < n then changing l a b
    else changing r a b

let rec call_changing (a: int) (b: int) =
  for i=0 to ort do
    changing (List.nth (r) (i)) (a) (b)
  done;;
changing returns an int; in order to call it in a for loop, you have to ignore the result of the function:
for i = 0 to ort do
  let _ = changing .... in ()
done
(* Or *)
for i = 0 to ort do
  ignore (changing ....)
done
EDIT:
If you want to print the result, you can do:
for i = 0 to ort do
  Printf.printf "Result for iteration %d: %d\n" i (changing ....)
done
See the Printf documentation for more information
To perhaps generalize on Butanium's answer, OCaml is not a pure functional programming language. It does contain imperative features. Imperative features are all about side-effects. Functions which exist for the purpose of their side-effects on the system (like Printf.printf) by convention return () (the literal for the unit type).
A for loop is an imperative feature. As such, it expects that any expression (or expressions chained with ;) contained within will return unit. If they do not, you will receive warnings.
The for loop expression itself (for ... = ... to ... do ... done) returns unit so the warning can clue you in that any code in the loop which does not have side-effects is inconsequential, and while your code will compile and run, it may not do what you expect.
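To see the warning concretely, here is a minimal illustration (not from the original answers) of the non-unit body and of silencing the warning with ignore:
(* The loop body below has type int, so the compiler emits
   Warning 10: this expression should have type unit. *)
let () =
  for i = 0 to 2 do
    i + 1
  done

(* Explicitly discarding the value silences the warning. *)
let () =
  for i = 0 to 2 do
    ignore (i + 1)
  done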
As a side note, I believe you may be a little overzealous with the parentheses, likely making your code harder to read.
let rec call_changing (a: int) (b: int) =
  for i=0 to ort do
    changing (List.nth (r) (i)) (a) (b)
  done;;
Properly indented and with extraneous parens removed:
let rec call_changing (a: int) (b: int) =
  for i=0 to ort do
    changing (List.nth r i) a b
  done;;

Non-trivial algorithm conversion from imperative to functional

To reinforce my (weak) functional programming skills, I am studying The NURBS Book by Piegl and Tiller, converting all the algorithms to Haskell. It is a very nice and instructive process, but I got stuck on algorithm 2.2; here is (a C-like reworked-by-me version of) the pseudo-code:
double[] BasisFuns(int i, double u, int p, double U[]) {
  double N[p+1];
  double left[p+1];
  double right[p+1];
  N[0] = 1.0;
  for (j = 1; j <= p; j++) {
    left[j] = u - U[i+1-j];
    right[j] = U[i+j] - u;
    double saved = 0.0;
    for (r = 0; r < j; r++) {
      double temp = N[r] / (right[r+1] + left[j-r]);
      N[r] = saved + right[r+1] * temp;
      saved = left[j-r] * temp;
    }
    N[j] = saved;
  }
  return N;
}
The outer loop looks easy, but the inner one, with all those necessarily-ordered modifications to the elements of N, is giving me a headache.
I started to set it up like this:
baseFunc :: RealFrac a => Int -> Int -> a -> [a] -> [a]
baseFunc i p u knots =
  [ ??? | j <- [1..p], r <- [0..j] ]
  where
    left  = [ u - knots !! (i+1-j) | j <- [ 1 .. p ] ]
    right = [ knots !! (i+j) - u | j <- [ 1 .. p ] ]
but I feel that I may be completely off track.
I have already written a completely different and inefficient version of this function based on Eq. 2.5 in the book, so here I am looking to match the performance of the imperative version.
Although it is certainly possible to translate numerical algorithms into Haskell starting from Fortran/Matlab/C "everything is an array" style (and, using unboxed mutable vectors, performance will generally be not much worse), this is really missing the point of using a functional language. The underlying math is actually much closer to functional than to imperative programming, so the best thing is to start right there. Specifically, the recurrence formula
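(the standard Cox–de Boor recursion, which the code below mirrors)

$$N_{i,0}(u) = \begin{cases} 1 & \text{if } u_i \le u < u_{i+1} \\ 0 & \text{otherwise} \end{cases}$$

$$N_{i,p}(u) = \frac{u - u_i}{u_{i+p} - u_i}\,N_{i,p-1}(u) + \frac{u_{i+p+1} - u}{u_{i+p+1} - u_{i+1}}\,N_{i+1,p-1}(u)$$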
can be translated almost literally into Haskell, much better than into an imperative language:
baseFuncs :: [Double] -- ^ Knots, \(\{u_i\}_i\)
          -> Int      -- ^ Index \(i\) at which to evaluate
          -> Int      -- ^ Spline degree \(p\)
          -> Double   -- ^ Position \(u\) at which to evaluate
          -> Double
baseFuncs us i 0 u
  | u >= us!!i, u < us!!(i+1) = 1
  | otherwise = 0
baseFuncs us i p u
  = (u - us!!i)/(us!!(i+p) - us!!i) * baseFuncs us i (p-1) u
  + (us!!(i+p+1) - u)/(us!!(i+p+1) - us!!(i+1)) * baseFuncs us (i+1) (p-1) u
Unfortunately, this will not actually be efficient, for multiple reasons.
First, lists suck at random access. A simple fix is to switch to unboxed (but pure) vectors. While we're at it, let's wrap them in a newtype, because the u_i are supposed to be strictly increasing. Talking about types: the direct accesses are unsafe; we could fix this by bringing p and the number of segments to the type level and only allowing indices i < n-p, but I won't go into that here.
Also, it's awkward to pass us and u around all the way down the recursion; better to bind them once and then use a helper function to go down:
import Data.Vector.Unboxed (Vector, (!))
import qualified Data.Vector.Unboxed as VU

newtype Knots = Knots {getIncreasingKnotsSeq :: Vector Double}

baseFuncs :: Knots  -- ^ \(\{u_i\}_i\)
          -> Int    -- ^ Index \(i\) at which to evaluate
          -> Int    -- ^ Spline degree \(p\)
          -> Double -- ^ Position \(u\) at which to evaluate
          -> Double
baseFuncs (Knots us) i₀ p₀ u = go i₀ p₀
 where go i 0
         | u >= us!i
         , i >= VU.length us - 1 || u < us!(i+1) = 1
         | otherwise = 0
       go i p
         = (u - us!i)/(us!(i+p) - us!i) * go i (p-1)
         + (us!(i+p+1) - u)/(us!(i+p+1) - us!(i+1)) * go (i+1) (p-1)
The other thing that's not optimal is that we don't share the lower-level evaluations between neighbouring recursive calls. (The evaluation effectively spans a directed graph with about p²/2 nodes, but we evaluate it as a tree with about 2^p nodes.) That's a massive inefficiency for large p, but actually quite harmless for the typical low-degree splines.
The way to avoid this inefficiency is to memoise. The C version does this explicitly with the N array, but – this being Haskell – we can be lazy to save the effort of allocating the correct size, by using a generic memoisation library, e.g. MemoTrie:
import Data.MemoTrie (memo2)

baseFuncs (Knots us) i₀ p₀ u = go' i₀ p₀
 where go i 0
         | u >= us!i
         , i >= VU.length us || u < us!(i+1) = 1
         | otherwise = 0
       go i p
         = (u - us!i)/(us!(i+p) - us!i) * go' i (p-1)
         + (us!(i+p+1) - u)/(us!(i+p+1) - us!(i+1)) * go' (i+1) (p-1)
       go' = memo2 go
That was the no-brains version ("just memoise the entire domain of go"). As dfeuer remarks, it is easy enough to explicitly memoise only the region that actually gets evaluated, and then we can again use an efficient unboxed vector:
baseFuncs (Knots us) i₀ p₀ u = VU.unsafeHead $ gol i₀ p₀
 where gol i 0 = VU.generate (p₀+1) $ \j ->
         if u >= us!(i+j)
            && (i+j >= VU.length us || u < us!(i+j+1))
          then 1 else 0
       gol i p = case gol i (p-1) of
         res' -> VU.izipWith
                   (\j l r -> let i' = i+j
                              in (u - us!i')/(us!(i'+p) - us!i') * l
                               + (us!(i'+p+1) - u)/(us!(i'+p+1) - us!(i'+1)) * r)
                   res' (VU.unsafeTail res')
(I can safely use unsafeHead and unsafeTail here, because at each recursion level the zipping reduces the length by 1, so at the top-level I still have p₀ - (p₀-1) = 1 elements left.)
This version should, I think, have the same asymptotics as the C version. With some more small improvements like precomputing the interval lengths and pre-checking that the arguments are in the allowed range so all accesses can be made unsafe, it is probably very close in performance to the C version.
As – again – dfeuer remarks, it might not even be necessary to use vectors there, because I just zip together the result. For this kind of stuff, GHC can be very good at optimising code even when using plain lists. But I won't investigate the performance any further here.
The test I used to confirm it actually works:
https://gist.github.com/leftaroundabout/4fd6ef8642029607e1b222783b9d1c1e
(Disclaimer: I have zero idea of what is being calculated here.)
The access pattern in the array U seems to be as follows: from the index i outwards, we consume values to the left and to the right. We could imagine having two lists which consisted precisely in those sequences of elements. In fact, we could construct such lists out of a source list like this:
pryAt :: Int -> [a] -> ([a], [a])
pryAt i xs = go i ([], xs)
  where
    go 0 a = a
    go n (us, v : vs) = go (pred n) (v : us, vs)

-- pryAt 5 ['a'..'x']
-- ("edcba","fghijklmnopqrstuvwx")
For random-access containers, we could have specialized versions of pryAt, because traversing the whole list until we reach the ith element will be inefficient.
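For instance, here is a sketch of that idea over arrays (written in OCaml rather than Haskell, and not from the original answer; pry_at is a hypothetical counterpart of pryAt):
(* Hypothetical array variant of pryAt: the left part is built in
   reverse, like in the list version, but each access is O(1). *)
let pry_at (i : int) (xs : 'a array) : 'a list * 'a list =
  let left = List.init i (fun k -> xs.(i - 1 - k)) in
  let right = Array.to_list (Array.sub xs i (Array.length xs - i)) in
  (left, right)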
In the outer loop, we have arrays N, left and right which grow with each iteration (N seems to be fully reconstructed at each iteration, as well). We could represent them as lists. In each iteration, we also consume a pair of elements of U, to the left and right.
The following datatype represents the situation at the beginning of an iteration of the outer loop:
data State = State
{ ns :: [Double],
left :: [Double],
right :: [Double],
us :: ([Double], [Double])
}
Assuming we already had the outerStep :: State -> State implemented, we could simply turn the crank p times:
basis :: Int -> Double -> Int -> [Double] -> [Double]
basis i u p us =
  ns $ iterate outerStep initial !! p
  where
    initial =
      State
        { ns = [1.0],
          left = [],
          right = [],
          us = pryAt i us
        }
What is done at outerStep? We add new elements to left and right, then we re-create the whole N list from the beginning, while carrying a saved accumulator along the way. This is a mapAccumR. We need some extra info: the right values (in the same direction as N) and the left values (in reverse direction) so we need to zip them beforehand:
outerStep (State {ns, left, right, us = (ul : uls, ur : urs)}) =
  let left' = u - ul : left
      right' = ur - u : right
      (saved', ns') = mapAccumR innerStep 0.0 $ zip3 ns right' (reverse left')
   in State
        { ns = saved' : ns',
          left = left',
          right = right',
          us = (uls, urs)
        }
And here are the computations of the inner step:
innerStep saved (n, r, l) =
  let temp = n / (r + l)       -- matches the C code: N[r]/(right[r+1]+left[j-r])
      n' = saved + r * temp    -- matches: N[r] = saved + right[r+1]*temp
      saved' = l * temp
   in (saved', n')
In addition to correcting possible bugs, more work would remain because the basis function in its current form is likely to leak memory (in particular, that mapAccumR will create lots of thunks). Perhaps it could be rewritten to use functions like iterate' or foldl' that keep their accumulators strict.
I'm not sure that it will work, but...:
import Data.List (foldl')

-- insertAt is not in base; this version replaces the element at the
-- given index, matching the C code's assignment N[r] = ...
insertAt :: Int -> a -> [a] -> [a]
insertAt i x xs = let (before, _ : after) = splitAt i xs
                  in before ++ x : after

baseFunc :: RealFrac a => Int -> Int -> a -> [a] -> [a]
baseFunc i p u knots =
  foldl' helper [1.0] [1..p]
  where
    left  = [ u - knots !! (i+1-j) | j <- [ 1 .. p ] ]
    right = [ knots !! (i+j) - u | j <- [ 1 .. p ] ]
    helper ns j = outer_loop j ns left right

inner_loop :: RealFrac a => Int -> Int -> [a] -> [a] -> [a] -> a -> (a, a)
inner_loop r j ns left right saved =
  let temp = ns !! r / (right !! (r+1) + left !! (j-r))
  in (saved + right !! (r+1) * temp, left !! (j-r) * temp)

outer_loop :: RealFrac a => Int -> [a] -> [a] -> [a] -> [a]
outer_loop j ns left right =
  let (new_ns, saved) = foldl' helper (ns, 0.0) [0..j-1]
      helper (prev_ns, saved) r =
        let (ns_r, new_saved) = inner_loop r j prev_ns left right saved
        in (insertAt r ns_r prev_ns, new_saved)
  in new_ns ++ [saved]

Hint on FStar proof dead end

Can I get a brief explanation why this proof effort fails?
In my studies I'm trying to recognize simple patterns in generated lists of integers.
The generator below produces a list of alternating 0s and 1s. I'd like to prove that items at even indexes are 0.
val evenb : nat -> bool
let rec evenb n =
  match n with
  | 0 -> true
  | 1 -> false
  | n -> evenb (n - 2)

val nth_item : ls: list nat {length ls > 0} -> n: nat {n < length ls} -> nat
let rec nth_item ls n =
  match ls with
  | [h] -> h
  | h :: t -> if n = 0 then h else nth_item t (n - 1)

val gen_01 : lng: nat {lng >= 2 && evenb lng} -> ls: list nat {length ls = lng}
let rec gen_01 lng =
  match lng with
  | 2 -> [0; 1]
  | _ -> [0; 1] @ gen_01 (lng - 2)

let rec lemma_01 (lng: nat {lng >= 2 && evenb lng}) :
  Lemma (forall (n: nat {n <= lng - 2 && evenb n}) . (nth_item (gen_01 lng) n) = 0) =
  match lng with
  | 2 -> ()
  | _ -> lemma_01 (lng - 2)
FStar returns 'could not prove post-condition'.
I'd appreciate any help regarding the approach.
F* should also report a secondary error location pointing to the conjunct in the postcondition that was not provable; in this case, it's just the nth_item (gen_01 lng) n = 0 goal.
One way to diagnose this is to consider one branch of the proof at a time. E.g. if you add an admit(); in the second branch, then you'll see that the first branch is easily provable. So, what's going wrong is the inductive case. You don't have a strong enough induction hypothesis to prove the property you want.
Here's one proof of it ... there are probably many others.
First, I proved this:
let rec access_2n (l:nat{l >= 2 && evenb l}) (n:nat{2 * n < l})
  : Lemma (ensures nth_item (gen_01 l) (2 * n) = 0)
  = match n with
    | 0 -> ()
    | _ -> access_2n (l - 2) (n - 1)
Notice the induction on the pair l, n so that the length and the access index decrease together.
This is pretty much the property you wanted to prove, stated slightly differently. To massage it into the form you want, I did this:
First, a lemma to interpret evenb arithmetically:
[Edit: I added an open FStar.Mul to bring the * symbol into scope for multiplication]
open FStar.Mul

let rec evenb_is_even (n:nat{evenb n})
  : Lemma (2 * (n / 2) = n)
  = match n with
    | 0 -> ()
    | _ -> evenb_is_even (n - 2)
Then to prove something very like your lemma, but for an explicit n.
let lemma_01_aux (lng: nat {lng >= 2 && evenb lng}) (n:nat{n <= lng - 2 && evenb n})
  : Lemma (nth_item (gen_01 lng) n = 0)
  = access_2n lng (n / 2); evenb_is_even n
And finally to universally quantify over n using a library that turns lemmas into quantified postconditions.
let lemma_01 (lng: nat {lng >= 2 && evenb lng})
  : Lemma (forall (n: nat {n <= lng - 2 && evenb n}) . (nth_item (gen_01 lng) n) = 0)
  = FStar.Classical.forall_intro_2 lemma_01_aux

Simple timing profiler for functional languages

I needed a simple timing profiler to estimate the runtime of some parts of my program (written in OCaml, but I believe this could apply to other functional languages), and I couldn't find a very simple solution similar to what one would code in an imperative language, using functions such as timer.start/timer.stop. So I tried one using lazy evaluation, and it works quite well for what I need; however, I didn't find any references to this method, so I wonder if the approach is flawed or if there is a simpler solution.
So, the question is: do you know about similar implementations for functional languages (especially OCaml)? If so, please point me to them; I'd like to borrow some of their ideas to improve my "poorer man's profiler" (I've seen this question but it didn't help me much). From what I've seen, GHC already has a way to collect timing information, so it's probably not an issue for Haskell.
By the way, I tried doing timing profiling as indicated in the OCaml manual (17.4), but it was too "low-level" for what I needed: it gives lots of information at the C function level, which makes it harder to evaluate precisely which part of the OCaml code is the culprit.
Below follows my implementation in OCaml (note that I need to add the "lazy" expression every time I want to measure the time, but at the same time I can finely control how much information I need).
open Unix (* for the timers *)

(** 'timers' associates keys (strings) to time counters,
    to allow for multiple simultaneous measurements. *)
let timers : (string, (float * float)) Hashtbl.t = Hashtbl.create 1

(** Starts the timer associated with key <name>. *)
let timer_start (name : string) : unit =
  let now = Unix.times () in
  Hashtbl.replace timers name (now.tms_utime, now.tms_stime)

(** Returns the time elapsed between the corresponding call to
    timer_start and this call. *)
let timer_stop (name : string) : float =
  try
    let now = Unix.times () in
    let t = Hashtbl.find timers name in
    (now.tms_utime -. fst t) +. (now.tms_stime -. snd t)
  with
    Not_found -> 0.0

(** Wrapper for the timer function using lazy evaluation. *)
let time (s : string) (e : 'a Lazy.t) : 'a =
  timer_start s;
  let a = Lazy.force e in
  let t2 = timer_stop s in
  (* outputs timing information *)
  Printf.printf "TIMER,%s,%f\n" s t2; a

(** Example *)
let rec fibo n =
  match n with
  | 0 -> 1
  | 1 -> 1
  | n' -> fibo (n - 1) + fibo (n - 2)

let main =
  let f = time "fibo" (lazy (fibo 42)) in
  Printf.printf "f = %d\n" f
Unix.times measures CPU time, not wall-clock time, so this is suitable only for computational code that spends all of its time on the CPU. And by the way, the hash table is not needed, even for multiple simultaneous measurements: just return the start time from timer_start and subtract it in timer_stop.
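A minimal sketch of that suggestion (one possible reading, not code from the answer; timer_stop takes the value returned by timer_start):
(* timer_start returns the start times; timer_stop subtracts them.
   No hash table needed: the caller holds on to the returned pair. *)
let timer_start () : float * float =
  let t = Unix.times () in
  (t.Unix.tms_utime, t.Unix.tms_stime)

let timer_stop ((ustart, sstart) : float * float) : float =
  let t = Unix.times () in
  (t.Unix.tms_utime -. ustart) +. (t.Unix.tms_stime -. sstart)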
Merging the ideas from @Jeffrey_Scofield and @ygrek, the "poorest man's timing profiler" is indeed so simple it would barely require mention at all, which would explain why I hadn't found it. So I've merged their answers and produced a much simpler version:
open Unix (* for the timers *)

(* Wrapper for the timer function using a "unit -> 'a" thunk *)
let time (s : string) (e : unit -> 'a) : 'a =
  let tstart = Unix.times () in
  let a = e () in
  let tend = Unix.times () in
  let delta = (tend.tms_utime -. tstart.tms_utime) +.
              (tend.tms_stime -. tstart.tms_stime) in
  (* outputs timing information *)
  Printf.printf "TIMER,%s,%f\n" s delta; a

(* Example *)
let rec fibo n =
  match n with
  | 0 -> 1
  | 1 -> 1
  | n' -> fibo (n - 1) + fibo (n - 2)

let main =
  let f = time "fibo" (fun () -> fibo 42) in
  Printf.printf "f = %d\n" f

List is conceived as integer by length function

I'm trying to learn Erlang using the Karate Chop Kata. I translated the runit test supplied in the kata to an eunit test and coded up a small function to perform the task at hand.
-module(chop).
-export([chop/2]).
-import(lists).
-include_lib("eunit/include/eunit.hrl").

-ifdef(TEST).
chop_test_() -> [
  ?_assertMatch(-1, chop(3, [])),
  ?_assertMatch(-1, chop(3, [1])),
  ?_assertMatch(0, chop(1, [1])),
  ....several asserts deleted for brevity...
].
-endif.

chop(N,L) -> chop(N,L,0);
chop(_,[]) -> -1.

chop(_, [],_) -> -1;
chop(N, L, M) ->
  MidIndex = length(L) div 2,
  MidPoint = lists:nth(MidIndex,L),
  {Left,Right} = lists:split(MidIndex,L),
  case MidPoint of
    _ when MidPoint < N -> chop(N,Right,M+MidIndex);
    _ when MidPoint =:= N -> M+MidIndex;
    _ when MidPoint > N -> chop(N,Left,M)
  end.
Compiles OK. Running the test, however, gives (amongst others) the following failure:
::error:badarg
  in function erlang:length/1
    called as length(1)
  in call from chop:chop/3
I've tried different permutations of declaring chop(N,[L],M) .... and using length([L]), but have not been able to resolve this issue. Any suggestions are welcome.
PS: As you might have guessed, I'm a newbie when it comes to Erlang.
So I'm pressed for time at the moment, but the first problem I see is that
chop(N,L) -> chop(N,L,0);
chop(_,[]) -> -1.
is wrong, because chop(N,L) will always match. Reverse the clauses and see where that gets you.
Beyond that, in the case of the one-element list, nth(0, [1]) will fail: lists:nth/2 is 1-indexed.
The most significant thing to learn here is that using binary search on lists in Erlang is a bad idea, because lists:nth/2 is not an O(1) but an O(N) operation. Try list_to_tuple/1 and then do the search on the tuple; it is well worth the work.
It can also be worthwhile to try the array module.
The function erlang:length/1 returns the length of a list.
You called length(1), and 1 isn't a list.
length([1]) would return 1
length([1,2,3,4]) would return 4
etc, etc...
It appears that combining the remarks from Ben Hughes solves the problem. Just for completeness, I'm pasting the test-passing implementation of my binary search below.
chop(_,[]) -> -1;
chop(N,L) ->
  Array = array:from_list(L),
  chop(N, Array, 0, array:size(Array)-1).

chop(N, L, K, K) ->
  Element = array:get(K, L),
  if
    Element == N -> K;
    true -> -1
  end;
chop(_, _, K, M) when M < K -> -1;
chop(N, L, K, M) ->
  MidIndex = K + ((M - K) div 2),
  MidPoint = array:get(MidIndex, L),
  case MidPoint of
    N -> MidIndex;
    _ when MidPoint < N -> chop(N, L, MidIndex+1, M);
    _ -> chop(N, L, K, MidIndex-1)
  end.