We are verifying a system built around the three functions below, but we do not know how to proceed further with proofs like this. The actual Coq definitions of the functions can be shared if needed. Kindly guide us.
Parameter weights : nat -> list nat -> nat -> nat.
Parameter schedule: nat -> list nat -> list nat.
Parameter count: nat -> list nat -> nat.
Lemma high_weight_jobs: forall (s1 s2 jobs: nat) (S: list nat),
jobs > 0 ->
length S > 0 ->
weights s1 S 0 > weights s2 S 0 ->
count s1 (schedule jobs S) > count s2 (schedule jobs S).
Proof.
intros.
induction S as [ | h tl IHS].
+ simpl in *. inversion H0.
+
Admitted.
You should start by thinking about what properties of these three functions you would need in order to prove this. It is usually not a good idea to prove a combined property of three functions directly from their definitions. First prove properties of the three individual functions, and then prove the combined property from those individual properties.
You can first state such properties as axioms, check that they are indeed sufficient to prove what you want, and then prove them later. I sometimes find this approach more efficient, because it lets me reformulate the statements so that they are more convenient to use in the proof.
I also commonly use modules to abstract the definitions of the functions and use only their specified properties to prove derived properties. This way you separate the properties from the actual implementation of the functions, which is helpful if you later need to extend one of the functions in some way: as long as all the properties still hold, the derived proofs are unaffected. If you do the proofs based on the definitions, any such proof will very likely break when a function changes.
For further discussion, I suggest that you state the specification/properties of the three functions that you think you need to prove this.
I can understand that allowing mutation is the reason for the value restriction and weak polymorphism. Basically, a mutable ref inside a function may fix the type involved and affect future uses of the function, so real polymorphism cannot be introduced without risking a type mismatch.
For example,
# let remember =
    let cache = ref None in
    (fun x ->
       match !cache with
       | Some y -> y
       | None -> cache := Some x; x)
  ;;
val remember : '_a -> '_a = <fun>
In remember, cache originally has type 'a option ref, but once remember is called for the first time, say as remember 1, cache becomes an int option ref, so the type is fixed from then on. The value restriction guards against this potential problem.
What I still don't understand is the value restriction on partial application.
For example,
let identity x = x
val identity: 'a -> 'a = <fun>
let map_rep = List.map identity
val map_rep: '_a list -> '_a list = <fun>
In the functions above I don't see any ref or other mutable state, so why is the value restriction still applied?
Here is a good paper that describes OCaml's current handling of the value restriction:
Garrigue, Relaxing the Value Restriction
It has a good capsule summary of the problem and its history.
Here are some observations, for what they're worth. I'm not an expert, just an amateur observer:
The meaning of "value" in the term "value restriction" is highly technical, and isn't directly related to the values manipulated by a particular language. It's a syntactic term; i.e., you can recognize values by just looking at the symbols of the program, without knowing anything about types.
It's not hard at all to produce examples where the value restriction is too restrictive. I.e., where it would be safe to generalize a type when the value restriction forbids it. But attempts to do a better job (to allow more generalization) resulted in rules that were too difficult to remember and follow for mere mortals (such as myself).
The impediment to generalizing exactly when it would be safe to do so is not separate compilation (IMHO) but the halting problem. I.e., it's not possible in theory even if you see all the program text.
The value restriction is pretty simple: only let-bound expressions that are syntactically values are generalized. Applications, including partial applications, are not values and thus are not generalized.
Note that in general it is impossible to tell whether an application is partial, and thus whether the application could have an effect on the value of a reference cell. Of course in this particular case it is obvious that no such thing occurs, but the inference rules are designed to be sound in the event that it does.
A 'let' expression is not a (syntactic) value. While there is a precise definition of 'value', roughly the only values are identifiers, functions, constants, and constructors applied to values.
This paper and those it references explain the problem in detail.
Partial application doesn't preclude mutation. For example, here is a refactored version of your code that would also be incorrect without value restriction:
let aux cache x =
  match !cache with
  | Some y -> y
  | None -> cache := Some x; x

let remember = aux (ref None)
I am using Haskell and QuickCheck to write a test for the following function:
{-| Given a list of points and a direction, find the point furthest
along in that direction. -}
fn :: (Eq a, Ord a, DotProd a) => [a] -> a -> a
fn pnts dir = pnts !! index
  where index       = fromJust $ elemIndex (maximum dotproducts) dotproducts
        dotproducts = map (dot dir) pnts
I believe this implementation to be correct, since it's not a very complex function, but I want to use QuickCheck to test it for some edge cases.
However, I run into the problem that, when I define my QuickCheck tests, they are identical to the function I am testing.
How do I write a test in QuickCheck that tests the purpose of a function without repeating its implementation?
How do I write a test in QuickCheck that tests the purpose of a function without repeating its implementation?
First, note that sometimes a QuickCheck property that states that a function behaves according to its current implementation is not completely worthless. You can use it for regression tests if you ever change the implementation. For example, if you optimize the definition of your fn to use clever data structures, the QuickCheck property based on the old, more straightforward implementation might prove helpful.
Second, you often want your QuickCheck properties to check high-level and declarative properties of your functions, while the implementation is usually lower-level and more directly executable. In this case, you could specify such properties as:
forall lists of points ps and directions d, the point fn ps d is in the list ps.
forall lists of points ps, directions d, and forall points p in ps, the point p is not further along in the direction d than the point fn ps d.
forall points p and forall directions d, fn [p, origin] d is p.
I'm not totally sure about the underlying geometry, so my examples might be stupid. But I hope the examples convey the general idea: The quickcheck properties could check for properties of the specification "being further along in a direction" without mentioning the particular algorithm.
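To make this concrete, here is a self-contained sketch of a test module along those lines, with points specialized to pairs of Doubles so it compiles on its own (your real code would keep the DotProd class; dot, fnOptimized and the prop_* names below are just illustrative):
import Data.List   (elemIndex)
import Data.Maybe  (fromJust)
import Test.QuickCheck

-- Concrete stand-in for the asker's point type.
type Point = (Double, Double)

dot :: Point -> Point -> Double
dot (a, b) (c, d) = a * c + b * d

-- Specialized copy of fn from the question.
fn :: [Point] -> Point -> Point
fn pnts dir = pnts !! index
  where index       = fromJust $ elemIndex (maximum dotproducts) dotproducts
        dotproducts = map (dot dir) pnts

-- Property 1: the result is always one of the input points.
prop_resultIsElement :: NonEmptyList Point -> Point -> Bool
prop_resultIsElement (NonEmpty ps) d = fn ps d `elem` ps

-- Property 2: no input point is strictly further along the direction.
prop_resultIsMaximal :: NonEmptyList Point -> Point -> Bool
prop_resultIsMaximal (NonEmpty ps) d = all (\p -> dot d p <= dot d (fn ps d)) ps

-- Regression-style property: if fn is ever rewritten, the old
-- implementation serves as the oracle.  fnOptimized is a placeholder.
fnOptimized :: [Point] -> Point -> Point
fnOptimized = fn

prop_matchesOldImplementation :: NonEmptyList Point -> Point -> Bool
prop_matchesOldImplementation (NonEmpty ps) d = fnOptimized ps d == fn ps d

main :: IO ()
main = mapM_ quickCheck
  [ prop_resultIsElement
  , prop_resultIsMaximal
  , prop_matchesOldImplementation
  ]
None of these properties repeats the indexing logic of fn; they only state what any correct implementation must satisfy.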
I am trying to implement a parser and semantics for CSP in Scala. I have already implemented the parser, and now I am working on the semantics part of the language. I am completely new to the world of concurrent systems and non-deterministic choice, so here is my question:
I want to implement "Non-deterministic Choice" and "Interface Parallel" as explained
here.
I can understand the procedure, but I can't get my head straight when it comes to non-determinism. I need a good data type to implement this in Scala. My current idea is to put all the processes in a list, shuffle the list, and then pick an element from the shuffled list, but that doesn't sound very non-deterministic to me.
Has anyone dealt with this issue before and knows a good algorithm?
My very limited understanding of CSP is that the CSP operators correspond to the following Haskell types:
-- Prefixing corresponds to functions
x :: A
P :: B
x -> P :: A -> B
-- Choice corresponds to product
P :: A
Q :: B
P □ Q :: (A, B)
-- Non-determinism corresponds to sum
-- I don't know how to make the non-determinism symbol, so I use (△)
P :: A
Q :: B
(P △ Q) :: Either A B
Then you can use the algebraic isomorphisms to reduce CSP expressions. Using the Wikipedia example:
(coin -> STOP) □ (card -> STOP)
-- translates to the following Haskell type:
(coin -> Stop, card -> Stop)
-- which is algebraically isomorphic to:
(Either coin card -> Stop)
-- translates in reverse back to CSP:
coin □ card -> STOP
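To make the correspondence a little more concrete, here is a minimal Haskell sketch of the coin/card example under this encoding (the type synonyms and function names are purely illustrative, not from any real CSP library):
-- Prefixing: to get past the event you must be handed it.
type Prefix ev p = ev -> p

-- External choice: the environment chooses, so both branches must be offered.
type External p q = (p, q)

-- Internal (non-deterministic) choice: the process picks a branch itself.
type Internal p q = Either p q

data Stop = Stop

-- (coin -> STOP) □ (card -> STOP) from above:
type VendingMachine coin card = External (Prefix coin Stop) (Prefix card Stop)

-- The isomorphism used in the reduction, written out in both directions:
toSum :: VendingMachine coin card -> (Either coin card -> Stop)
toSum (p, q) = either p q

fromSum :: (Either coin card -> Stop) -> VendingMachine coin card
fromSum f = (f . Left, f . Right)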
Also, I think one of the Wikipedia examples is wrong (or I'm wrong). I believe this expression should reduce to:
(a -> a -> STOP) □ (a -> b -> STOP)
-- translates to the following Haskell type:
(a -> a -> Stop, a -> b -> Stop)
-- which is algebraically isomorphic to:
a -> Either a b -> Stop
-- translates in reverse back to CSP:
a -> (a △ b) -> STOP
I still haven't figured out the equivalent of interface parallel, though. It doesn't seem to correspond to an elegant concept.
This is an advanced topic. The parsing side is the easy bit. Writing correct concurrent primitives is very hard and will almost certainly require formal verification using a tool such as FDR.
If you're only interested in writing the expression parser part, not the concurrency primitives, you might prefer to build upon JCSP, which already provides these primitives in a Java API. The authors of this (UKC) used formal verification to validate its components, notably the channel and alternative (choice) parts.
Perhaps I'm going about this the wrong way, but I'm using HXT to read in some vertex data that I'd like to use in an array in HOpenGL. Vertex arrays need to be a Ptr which is created by calling newArray. Unfortunately newArray returns an IO Ptr, so I'm not sure how to go about using it inside an Arrow. I think I need something with a type declaration similar to IO a -> Arrow a?
The type IO a -> Arrow a doesn't make sense; Arrow is a type class, not a specific type, much like Monad or Num. Specifically, an instance of Arrow is a type constructor taking two parameters that describes things that can be composed like functions, matching types end-to-end. So, converting IO a to an arrow could perhaps be called a conceptual type error.
I'm not sure exactly what you're trying to do, but if you really want to be using IO operations as part of an Arrow, you need your Arrow instance to include that. The simplest form of that is to observe that functions with types like a -> m b for any Monad instance can be composed in the obvious way. The hxt package seems to provide a more complicated type:
newtype IOSLA s a b = IOSLA { runIOSLA :: s -> a -> IO (s, [b]) }
This is some mixture of the IO, State, and [] monads, attached to a function as above such that you can compose them going through all three monads at each step. I haven't really used hxt much, but if these are the Arrows you're working with, it's pretty simple to lift an arbitrary IO function to serve as one: just pass the state value s through unchanged, and turn the output of the function into a singleton list. There may already be a function to do this for you, but I didn't see one at a brief glance.
Basically, you'd want something like this:
liftArrIO :: (a -> IO b) -> IOSLA s a b
liftArrIO f = IOSLA $ \s x -> fmap (\y -> (s, [y])) (f x)
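With that in place, lifting newArray from Foreign.Marshal.Array (the call you mentioned needing for the HOpenGL vertex array) is a one-liner; verticesToPtr is just a made-up name for how it might look in your pipeline:
import Foreign.Marshal.Array (newArray)
import Foreign.Ptr (Ptr)
import Foreign.Storable (Storable)

-- Turn the list of vertices collected by your XML arrows into a Ptr,
-- ready to hand to HOpenGL (for whatever Storable vertex type you use).
verticesToPtr :: Storable v => IOSLA s [v] (Ptr v)
verticesToPtr = liftArrIO newArray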
I'm writing some code (a Metropolis-Hastings MCMC sampler) that will use a random number generator, and modify an array and potentially other structures based on this.
My initial idea was to use the ST monad, so that I could use ST arrays and the mersenne-random-pure64 package, keeping the PureMT generator as part of the state.
However, I want to be able to split off some of the work into separate helper functions (e.g. to sample a random integer in a given range, to update the array structure, and potentially more complicated things). To do this, I think I would need to pass the references to the PureMT gen and the array to all the functions, which could quickly become very ugly if I need to store more state.
My instinct is to group all of the state into a single data type that I can access anywhere, as I would using the State monad by defining a new datatype, but I don't know if that is possible with the ST monad, or the right way to go about it.
Are there any nice patterns for doing this sort of thing? I want to keep things as general as possible because I will probably need to add extra state and build more monadic code around the existing parts.
I have tried looking for examples of ST monad code, but it does not seem to be covered in Real World Haskell, and the Haskell wiki examples are very short and simple.
Thanks!
My instinct is to group all of the state into a single data type that I can access anywhere, as I would using the State monad by defining a new datatype, but I don't know if that is possible with the ST monad, or the right way to go about it.
Are there any nice patterns for doing this sort of thing? I want to keep things as general as possible because I will probably need to add extra state and build more monadic code around the existing parts.
The key point to realize here is that it's completely irrelevant that you're using ST. The ST references themselves are just regular values, which you need access to in a variety of places, but you don't actually want to change them! The mutability occurs in ST, but the STRef values and whatnot are basically read-only. They're names pointing to the mutable data.
Of course, read-only access to an ambient environment is what the Reader monad is for. The ugly passing of references to all the functions is exactly what it's doing for you, but because you're already in ST, you can just bolt it on as a monad transformer. As a simple example, you can do something like this:
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

import Control.Monad.Reader (ReaderT, ask, lift, runReaderT)
import Control.Monad.ST (ST)
import Data.STRef (STRef, readSTRef, writeSTRef)

newtype STEnv s e a = STEnv (ReaderT e (ST s) a)
  deriving (Functor, Applicative, Monad)
runEnv :: STEnv s e a -> ST s e -> ST s a
runEnv (STEnv r) e = runReaderT r =<< e
readSTEnv :: (e -> STRef s a) -> STEnv s e a
readSTEnv f = STEnv $ lift . readSTRef . f =<< ask
writeSTEnv :: (e -> STRef s a) -> a -> STEnv s e ()
writeSTEnv f x = STEnv $ lift . flip writeSTRef x . f =<< ask
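As a quick sketch of how these pieces fit together, assuming the definitions above are in scope (SamplerEnv, its fields, and bumpAccepted are made-up names for illustration):
import Control.Monad.ST (runST)
import Data.STRef (newSTRef)

-- Hypothetical environment for the sampler: the mutable pieces live in
-- one record, and helpers reach them through the field accessors.
data SamplerEnv s = SamplerEnv
  { envAccepted :: STRef s Int     -- e.g. number of accepted proposals
  , envCurrent  :: STRef s Double  -- e.g. current position of the chain
  }

-- Read the counter, bump it, and write it back, with no explicit ref passing.
bumpAccepted :: STEnv s (SamplerEnv s) ()
bumpAccepted = do
  n <- readSTEnv envAccepted
  writeSTEnv envAccepted (n + 1)

demo :: Int
demo = runST $ do
  env <- SamplerEnv <$> newSTRef 0 <*> newSTRef 0.0
  runEnv (bumpAccepted >> bumpAccepted) (return env)
  readSTRef (envAccepted env)   -- evaluates to 2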
For more generality, you could abstract over the details of the reference types, and make it into a general "environment with mutable references" monad.
You can use the ST monad just like the IO monad, bearing in mind that you only get arrays and refs and no other IO goodies. Just like IO, you can layer a StateT over it if you want to thread some state transparently through your computation.
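For example, here is a minimal sketch of that layering for the sampler in the question; the SamplerState record and helper names are illustrative, and randomDouble comes from mersenne-random-pure64:
import Control.Monad.ST (ST)
import Control.Monad.State (StateT, get, put)
import Control.Monad.Trans (lift)
import Data.Array.ST (STUArray, writeArray)
import System.Random.Mersenne.Pure64 (PureMT, randomDouble)

-- All the threaded state in one record, layered over ST for the arrays.
data SamplerState s = SamplerState
  { gen   :: PureMT                 -- pure Mersenne Twister state
  , chain :: STUArray s Int Double  -- mutable array holding the chain
  }

type Sampler s a = StateT (SamplerState s) (ST s) a

-- Draw a uniform Double, threading the generator through the state.
uniform :: Sampler s Double
uniform = do
  st <- get
  let (x, g') = randomDouble (gen st)
  put st { gen = g' }
  return x

-- Write a sample into the mutable chain array.
record :: Int -> Double -> Sampler s ()
record i x = do
  st <- get
  lift $ writeArray (chain st) i x
To run it, build the initial SamplerState (e.g. with pureMT for a seeded generator and Data.Array.ST's newArray for the chain) and use evalStateT inside runST.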