Prolog if-statement

I'm trying to implement a predicate that works as follows:
pred :-
    % do this always
    % if-statement
    %   do this only when the if-statement is true
    % do this also always, independent of whether the if-statement was true or false
I need this functionality for a program that optionally has a GUI (XPCE). You can call it with
start(true) % with gui
or
start(false) % without gui
Because I don't want to write two different predicates with the same logic, once with the GUI and once without, I want to have one predicate that invokes the GUI code only if start(true) was called.
Thanks for your help!

The standard Prolog "if-statement" is:
(If -> Then; Else)
where If, Then, and Else are goals. You can use it easily in the definition of your predicate to switch on the argument of start/1:
pred :-
    % common code
    (   start(true) ->
        % gui-only code
    ;   % non-gui code
    ),
    % common code
When there's no Else goal, you can replace it with the goal true. Note that the goal (If -> Then) fails when the If goal fails; that is, (If -> Then) is equivalent to (If -> Then; fail), not to (If -> Then; true).
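For instance, here is a minimal sketch of that pattern applied to the question; the names pred/1 and show_gui/0 are placeholders, not part of XPCE:

% Minimal sketch: pred/1 and show_gui/0 are placeholder names.
pred(Gui) :-
    write('common work before'), nl,   % do this always
    (   Gui == true
    ->  show_gui                       % GUI-only code
    ;   true                           % no GUI: nothing extra to do
    ),
    write('common work after'), nl.    % do this also always

show_gui :-
    write('opening the XPCE window ...'), nl.

Calling pred(true) runs the GUI branch, pred(false) skips it, and the common code runs in both cases.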

Related

Haskell - How to get index of elem in list?

I want to get the index of an element in a list as an Int, not a Maybe Int.
>elemIndex 'f' "BarFoof"
>Just 6
But I need 6.
You can use fromMaybe :: a -> Maybe a -> a to unwrap the value from a Just, and furthermore add a default value in case it is a Nothing.
So you can implement a function:
import Data.List(elemIndex)
import Data.Maybe(fromMaybe)

elemIndex' :: Eq a => a -> [a] -> Int
elemIndex' x = fromMaybe (-1) . elemIndex x
Here it will thus return -1 in case the element can not be found. For example:
Prelude Data.Maybe Data.List> elemIndex' 'f' "BarFoof"
6
Prelude Data.Maybe Data.List> elemIndex' 'q' "BarFoof"
-1
That being said, in Haskell a Maybe is often used to denote a computation that might "fail", such that if you post-process the result, you will take the Nothing (element not found in the list) into account as well.
As Willem Van Onsem says, you can use fromMaybe. However, there are other options which may be better suited to your particular use case. (It's not clear from your code what your use case is, though, so I'm just going to list some options.)
If you know for certain that the value you're looking for is going to be somewhere in the list, usually because you've just pulled it out of the same list using some sort of fold, you can use fromJust. fromJust is similar to fromMaybe, except that it doesn't take a fallback value as its first argument; instead it throws an error when given Nothing. People will argue about whether or not you should use functions that can throw an error (these functions are called "partial", because they only handle part of their input domain), because they can really cause headaches down the line when assumptions turn out to be wrong, but in carefully controlled circumstances they can be useful.
fromJust (Just 10)
> 10
fromJust Nothing
> *** Exception: Maybe.fromJust: Nothing
Depending on the semantics of your program, it may also be more idiomatic to change your function signature to return a Maybe whatever instead of a whatever. In this case, you should look into using things like fmap, >>=, and <$> to pass your value around without ever needing to unwrap it. I can't tell you if this is the right approach for your program in particular without seeing more of your code, but for most use cases, fromMaybe is not the right approach.
fmap (+ 5) (Just 10)
> Just 15
fmap (+ 5) Nothing
> Nothing
(+ 5) <$> Just 10
> Just 15
(+ 5) <$> Nothing
> Nothing
Just 10 >>= (\x -> if even x then Just $ x `div` 2 else Nothing)
> Just 5
Just 5 >>= (\x -> if even x then Just $ x `div` 2 else Nothing)
> Nothing
Nothing >>= (\x -> if even x then Just $ x `div` 2 else Nothing)
> Nothing
Using a value like -1 to indicate failure means that the rest of your program needs to know that -1 indicates a failure state. These sorts of failure values usually need to be handled separately from normal return values, which is exactly the use case that the Maybe type is designed to make more elegant. The quick test for whether you should be unwrapping a Maybe is to look and see if there is anywhere in your program where you are pattern matching (or using guards or if or case, etc.) to try to catch this -1 somewhere else to trigger specialized behavior. If you are, then you shouldn't be unwrapping Maybe (here).
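For example, a small sketch of keeping the Maybe and deciding what "not found" means only at the point of use (findAndReport is a made-up name):

import Data.List (elemIndex)

-- Keep the Maybe and handle the "not found" case where the result is consumed,
-- instead of smuggling a -1 sentinel through the rest of the program.
findAndReport :: Char -> String -> String
findAndReport c s =
  case elemIndex c s of
    Just i  -> "found at index " ++ show i
    Nothing -> "not found"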

Evaluation of arguments in function called by macro

Macros do not evaluate their arguments until explicitly told to do so, however functions do. In the following code:
(defn blah [xs]            ;; xs is an unevaluated list, even though blah is a function
  (println xs)
  xs)

(defmacro foo [xs]
  (println xs (type xs))   ;; prints the unevaluated list and its type
  (blah xs))

(foo (+ 1 2 3))
It seems that blah does not evaluate xs, since we still have the entire list: (+ 1 2 3) bound to xs in the body of blah.
I have basically just memorized this interaction between helper functions within macros and their evaluation of arguments, but to be honest it goes against what my instincts are (that xs would be evaluated before entering the body since function arguments are always evaluated).
My thinking was basically: "ok, in this macro body I have xs as the unevaluated list, but if I call a function with xs from within the macro it should evaluate that list".
Clearly I have an embarrassingly fundamental misunderstanding here of how things work. What am I missing in my interpretation? How does the evaluation actually occur?
EDIT
I've thought on this a bit more and it seems to me that maybe viewing macro arguments as "implicitly quoted" would solve some confusion on my part.
I think I just got mixed up in the various terminologies, but given that quoted forms are synonymous with unevaluated forms, and given macro arguments are unevaluated, they are implicitly quoted.
So in my above examples, saying xs is unquoted is somewhat misleading. For example, this macro:
(defmacro bluh [xs]
  `(+ 1 2 ~xs))
Is basically the same as the below macro (excluding namespacing on the symbols). Resolving xs in the call to list gives back an unevaluated (quoted?) list.
(defmacro bleh [xs]
  (list '+ '1 '2 xs)) ;; xs resolves to a quoted list (or effectively quoted)
Calling bleh (or bluh) is the same as saying:
(list '+ '1 '2 '(+ 1 2 3))
;; => (+ 1 2 (+ 1 2 3))
If xs did not resolve to a quoted list, then we would end up with:
(list '+ '1 '2 (+ 1 2 3))
;; => (+ 1 2 6)
So, in short, macro arguments are quoted.
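One way to check this reasoning at the REPL is macroexpand-1 (a sketch; since bleh builds its expansion from plain quoted symbols, the output carries no namespace qualifiers):

(defmacro bleh [xs]
  (list '+ '1 '2 xs))

(macroexpand-1 '(bleh (+ 1 2 3)))
;; => (+ 1 2 (+ 1 2 3))

(bleh (+ 1 2 3))
;; => 9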
I think part of my confusion came from thinking about the syntax quoted forms as templates with slots filled in, e.g. (+ 1 2 ~xs) I would mentally expand to (+ 1 2 (+ 1 2 3)), and seeing that (+ 1 2 3) was not quoted in that expansion, I found it confusing that function calls using xs (in the first example above, blah) would not evaluate immediately to 6.
The template metaphor is helpful, but if I instead look at it as a
shortcut for (list '+ '1 '2 xs) it becomes obvious that xs must be a quoted list otherwise the expansion would include 6 and not the entire list.
I'm not sure why I found this so confusing... have I got this right or did I just go down the wrong path entirely?
[This answer is an attempt to explain why macros and functions which don't evaluate their arguments are different things. I believe this applies to macros in Clojure but I am not an expert on Clojure. It's also much too long, sorry.]
I think you are conflating what Lisp calls macros with a construct which modern Lisps don't have but which used to be called FEXPRs.
There are two interesting, different, things you might want:
functions which, when called, do not immediately evaluate their arguments;
syntax transformers, which are called macros in Lisp.
I'll deal with them in order.
Functions which do not immediately evaluate their arguments
In a conventional Lisp, a form like (f x y ...), where f is a function, will:
establish that f is a function and not some special thing;
get the function corresponding to f and evaluate x, y, & the rest of the arguments in some order specified by the language (which may be 'in an unspecified order');
call f with the results of evaluating the arguments.
Step (1) is needed initially because f might be a special thing (like, say, if, or quote), and it might be that the function definition is retrieved in (1) as well: all of this, as well as the order in which things happen in (2), is something the language needs to define (or, in the case of Scheme say, leave explicitly undefined).
This ordering, and in particular the ordering of (2) & (3) is known as applicative order or eager evaluation (I'll call it applicative order below).
But there are other possibilities. One such is that the arguments are not evaluated: the function is called, and only when the values of the arguments are needed are they evaluated. There are two approaches to doing this.
The first approach is to define the language so that all functions work this way. This is called lazy evaluation or normal order evaluation (I'll call it normal order below). In a normal order language function arguments are evaluated, by magic, at the point they are needed. If they are never needed then they may never be evaluated at all. So in such a language (I am inventing the syntax for function definition here so as not to commit CL or Clojure or anything else):
(def foo (x y z)
  (if x y z))
Only one of y or z will be evaluated in a call to foo.
In a normal order language you don't need to explicitly care about when things get evaluated: the language makes sure that they are evaluated by the time they're needed.
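Haskell is the best-known normal order language, so the same point can be sketched there:

foo :: Bool -> a -> a -> a
foo x y z = if x then y else z

-- foo True 1 undefined evaluates to 1: the third argument is never
-- demanded, so the undefined is never evaluated.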
Normal order languages seem like they'd be an obvious win, but they tend to be quite hard to work with, I think. There are two problems, one obvious and one less so:
side-effects happen in a less predictable order than they do in applicative order languages and may not happen at all, so people used to writing in an imperative style (which is most people) find them hard to deal with;
even side-effect-free code can behave differently than in an applicative order language.
The side-effect problem could be treated as a non-problem: we all know that code with side-effects is bad, right, so who cares about that? But even without side-effects things are different. For instance here's a definition of the Y combinator in a normal order language (this is kind of a very austere, normal order subset of Scheme):
(define Y
  ((λ (y)
     (λ (f)
       (f ((y y) f))))
   (λ (y)
     (λ (f)
       (f ((y y) f))))))
If you try to use this version of Y in an applicative order language -- like ordinary Scheme -- it will loop for ever. Here's the applicative order version of Y:
(define Y
  ((λ (y)
     (λ (f)
       (f (λ (x)
            (((y y) f) x)))))
   (λ (y)
     (λ (f)
       (f (λ (x)
            (((y y) f) x)))))))
You can see it's kind of the same, but there are extra λs in there which essentially 'lazify' the evaluation to stop it looping.
The second approach to normal order evaluation is to have a language which is mostly applicative order but in which there is some special mechanism for defining functions which don't evaluate their arguments. In this case there often would need to be some special mechanism for saying, in the body of the function, 'now I want the value of this argument'. Historically such things were called FEXPRs, and they existed in some very old Lisp implementations: Lisp 1.5 had them, and I think that both MACLISP & InterLisp had them as well.
In an applicative order language with FEXPRs, you need somehow to be able to say 'now I want to evaluate this thing', and I think this is the problem you are running up against: at what point does the thing decide to evaluate the arguments? Well, in a really old Lisp which is purely dynamically scoped there's a disgusting hack to do this: when defining a FEXPR you can just pass in the source of the argument and then, when you want its value, you just call EVAL on it. That's just a terrible implementation because it means that FEXPRs can never really be compiled properly, and you have to use dynamic scope so variables can never really be compiled away. But this is how some (all?) early implementations did it.
But this implementation of FEXPRs allows an amazing hack: if you have a FEXPR which has been given the source of its arguments, and you know that this is how FEXPRs work, then, well, it can manipulate that source before calling EVAL on it: it can call EVAL on something derived from the source instead. And, in fact, the 'source' it gets given doesn't even need to be strictly legal Lisp at all: it can be something which the FEXPR knows how to manipulate to make something that is. That means you can, all of a sudden, extend the syntax of the language in pretty general ways. But the cost of being able to do that is that you can't compile any of this: the syntax you construct has to be interpreted at runtime, and the transformation happens each time the FEXPR is called.
Syntax transformers: macros
So, rather than use FEXPRs, you can do something else: you could change the way that evaluation works so that, before anything else happens, there is a stage during which the code is walked over and possibly transformed into some other code (simpler code, perhaps). And this need happen only once: once the code has been transformed, then the resulting thing can be stashed somewhere, and the transformation doesn't need to happen again. So the process now looks like this:
code is read in and structure built from it;
this initial structure is possibly transformed into other structure;
(the resulting structure is possibly compiled);
the resulting structure, or the result of compiling it is evaluated, probably many times.
So now the process of evaluation is divided into several 'times', which don't overlap (or don't overlap for a particular definition):
read time is when the initial structure is built;
macroexpansion time is when it is transformed;
compile time (which may not happen) is when the resulting thing is compiled;
evaluation time is when it is evaluated.
Well, compilers for all languages probably do something like this: before actually turning your source code into something that the machine understands they will do all sorts of source-to-source transformations. But these things are in the guts of the compiler and are operating on some representation of the source which is idiosyncratic to that compiler and not defined by the language.
Lisp opens this process to users. The language has two features which make this possible:
the structure that is created from source code once it has been read is defined by the language and the language has a rich set of tools for manipulating this structure;
the structure created is rather 'low commitment' or austere -- it does not particularly predispose you to any interpretation in many cases.
As an example of the second point, consider (in "my.file"): that's a function call of a function called in, right? Well, maybe: (with-open-file (in "my.file") ...) is almost certainly not a function call, but rather a form that binds in to a filehandle.
Because of these two features of the language (and in fact some others I won't go into) Lisp can do a wonderful thing: it can let users of the language write these syntax-transforming functions -- macros -- in portable Lisp.
The only thing that remains is to decide how these macros should be notated in source code. And the answer is the same way as functions are: when you define some macro m you use it just as (m ...) (some Lisps support more general things, such as CL's symbol macros). At macroexpansion time -- after the program is read but before it is (compiled and) run -- the system walks over the structure of the program looking for things which have macro definitions: when it finds them it calls the function corresponding to the macro with the source code specified by its arguments, and the macro returns some other chunk of source code, which gets walked in turn until there are no macros left (and yes, macros can expand to code involving other macros, and even to code involving themselves). Once this process is complete the resulting code can be (compiled and) run.
So although macros look like function calls in the code, they are not just functions which don't evaluate their arguments, like FEXPRs were: instead they are functions which take a bit of Lisp source code and return another bit of Lisp source code: they're syntax transformers, or functions which operate on source code (syntax) and return other source code. Macros run at macroexpansion time, which is properly before evaluation time (see above).
So, in fact, macros are functions, written in Lisp, and the functions they call evaluate their arguments perfectly conventionally: everything is perfectly ordinary. But the arguments to macros are programs (or the syntax of programs represented as Lisp objects of some kind) and their results are (the syntax of) other programs. Macros are functions at the meta-level, if you like. So a macro is a function which computes (parts of) programs: those programs may later themselves be run (perhaps much later, perhaps never), at which point the evaluation rules will be applied to them. But at the point a macro is called, what it's dealing with is just the syntax of programs, not evaluating parts of that syntax.
So, I think your mental model is that macros are something like FEXPRs in which case the 'how does the argument get evaluated' question is an obvious thing to ask. But they're not: they're functions which compute programs, and they run properly before the program they compute is run.
Sorry this answer has been so long and rambling.
What happened to FEXPRs?
FEXPRs were always pretty problematic. For instance, what should (apply f ...) do? Since f might be a FEXPR, but this can't generally be known until runtime, it's quite hard to know what the right thing to do is.
So I think that two things happened:
in the cases where people really wanted normal order languages, they implemented those, and for those languages the evaluation rules dealt with the problems FEXPRs were trying to deal with;
in applicative order languages, if you want to avoid evaluating some argument, you now do so explicitly, using constructs such as delay to construct a 'promise' and force to force evaluation of a promise (see the sketch below); because the semantics of the languages improved, it became possible to implement promises entirely in the language (CL does not have promises, but implementing them is essentially trivial).
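A small sketch of the delay/force style in Scheme, where these are standard:

(define p (delay (begin (display "computing...") (newline) (* 6 7))))

(force p)   ; prints "computing..." once, then returns 42
(force p)   ; returns the cached 42 without recomputing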
Is the history I've described correct?
I don't know: I think it may be but it may also be a rational reconstruction. I certainly, in very old programs in very old Lisps, have seen FEXPRs being used the way I describe. I think Kent Pitman's paper, Special Forms in Lisp may have some of the history: I've read it in the past but had forgotten about it until just now.
A macro definition is a definition of a function that transforms code. The inputs to the macro function are the forms in the macro call. The return value of the macro function will be treated as code and inserted where the macro form was. Clojure code is made of Clojure data structures (mostly lists, vectors, and maps).
In your foo macro, you define the macro function to return whatever blah did to your code. Since blah is (almost) the identity function, it just returns whatever was its input.
What is happening in your case is the following:
The string "(foo (+ 1 2 3))" is read, producing a nested list with two symbols and three integers: (foo (+ 1 2 3)).
The foo symbol is resolved to the macro foo.
The macro function foo is invoked with its argument xs bound to the list (+ 1 2 3).
The macro function (prints and then) calls the function blah with the list.
blah (prints and then) returns that list.
The macro function returns the list.
The macro is thus “expanded” to (+ 1 2 3).
The symbol + is resolved to the addition function.
The addition function is called with three arguments.
The addition function returns their sum.
If you wanted the macro foo to expand to a call to blah, you need to return such a form. Clojure provides a templating convenience syntax using backquote, so that you do not have to use list etc. to build the code:
(defmacro foo [xs]
  `(blah ~xs))
which is like:
(defmacro foo [xs]
  (list 'blah xs))
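With either version, a quick REPL check shows the difference (a sketch, assuming blah from the question is already defined in the current namespace):

(macroexpand-1 '(foo (+ 1 2 3)))
;; => (user/blah (+ 1 2 3))   ; the syntax-quote version qualifies blah

(foo (+ 1 2 3))
;; blah is now called at run time, so its argument has already been
;; evaluated: it prints 6 and returns 6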

Does Frege perform tail call optimization?

Are tail calls optimised in Frege? I know that there is TCO neither in Java nor in languages which compile to JVM bytecode, like Clojure and Scala. What about Frege?
Frege does Tail Recursion Optimization by simply generating while loops.
General tail calls are handled "by the way" through laziness. If the compiler sees a tail call to a function that is suspected to be (indirectly) recursive, a lazy result (a thunk) is returned. Thus, the real burden of calling that function lies with the caller. This way, stacks whose depth depends on the data are avoided.
That being said, the static stack depth is by nature already deeper in a functional language than in Java. Hence, some programs will need to be given a bigger stack (e.g. with -Xss1m).
There are pathological cases where big thunks are built, and when they are evaluated, a stack overflow will happen. A notorious example is the foldl function (the same problem exists in Haskell). Hence, the standard left fold in Frege is fold, which is tail recursive and strict in the accumulator and thus works in constant stack space (like Haskell's foldl').
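The analogous Haskell situation, as a sketch:

import Data.List (foldl')

lazySum, strictSum :: [Int] -> Int
lazySum   = foldl  (+) 0   -- builds a deep chain of (+) thunks; forcing it may overflow the stack
strictSum = foldl' (+) 0   -- strict in the accumulator; runs in constant stack space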
The following program should not overflow the stack, but should print "false" after 2 or 3 seconds:
module Test
    -- inline (odd)
where

even 0 = true
even 1 = false
even n = odd (pred n)

odd n = even (pred n)

main args = println (even 123_456_789)
This works as follows: println must have a value to print, so it tries to evaluate (even n). But all it gets is a thunk for (odd (pred n)). Hence it tries to evaluate this thunk, which yields another thunk for (even (pred (pred n))). even must evaluate (pred (pred n)) to see whether the argument was 0 or 1, before returning yet another thunk (odd (pred (n-2))), where n-2 is already evaluated.
This way, all the calling (at JVM level) is done from within println. At no time does even actually invoke odd, or vice versa.
If one uncomments the inline directive, one gets a tail recursive version of even, and the result is obtained ten times faster.
Needless to say, this clumsy algorithm is only for demonstration - normally one would check for even-ness with a bit operation.
Here is another version, that is pathological and will stack overflow:
even 0 = true
even 1 = false
even n = not . odd $ n
odd = even . pred
The problem here is that not is the tail call, and it is strict in its argument (i.e., to negate something, you must first have that something). Hence, when even n is computed, not must fully evaluate odd n, which in turn must fully evaluate even (pred n), and so it will take 2*n stack frames.
Unfortunately, this is not going to change, even if the JVM should get proper tail calls one day. The reason is the recursion in the argument of a strict function.

Can you recognize an infinite list in a Haskell program? [duplicate]

Possible Duplicate:
How to tell if a list is infinite?
In Haskell, you can define an infinite list, for example [1..]. Is there a built-in function in Haskell to recognize whether a list has finite length? I don't imagine it is possible to write a user-supplied function to do this, but the internal representation of lists by Haskell may be able to support it. If not in standard Haskell, is there an extension providing such a feature?
No, this is not possible. It would be impossible to write such a function, because you can have lists whose finiteness might be unknown: consider a recursive loop generating a list of all the twin primes it can find. Or, to follow up on what Daniel Pratt mentioned in the comments, you could have a list of all the steps a universal Turing machine takes during its execution, ending the list when the machine halts. Then, you could simply check whether such a list is infinite, and solve the Halting problem!
The only question an implementation could answer is whether a list is cyclic: if one of its tail pointers points back to a previous cell of the list. However, this is implementation-specific (Haskell doesn't specify anything about how implementations must represent values), impure (different ways of writing the same list would give different answers), and even dependent on things like whether the list you pass in to such a function has been evaluated yet. Even then, it still wouldn't be able to distinguish finite lists from infinite lists in the general case!
(I mention this because, in many languages (such as members of the Lisp family), cyclic lists are the only kind of infinite lists; there's no way to express something like "a list of all integers". So, in those languages, you can check whether a list is finite or not.)
There isn't any way to test for finiteness of lists other than iterating over the list to search for the final [], in any implementation I'm aware of. And in general, it is impossible to tell whether a list is finite or infinite without actually going to look for the end (which of course means that whenever you do get an answer, that answer is "finite").
You could write a wrapper type around list which keeps track of infiniteness, and limit yourself to "decidable" operations only (somehow similar to NonEmpty, which avoids empty lists):
import Control.Applicative

data List a = List (Maybe Int) [a]

infiniteList :: List a -> Bool
infiniteList (List Nothing _) = True
infiniteList _                = False

emptyList = List (Just 0) []
singletonList x = List (Just 1) [x]
cycleList xs = List Nothing (cycle xs)
numbersFromList n = List Nothing [n..]
appendList (List sx xs) (List sy ys) = List ((+) <$> sx <*> sy) (xs ++ ys)
tailList (List s xs) = List (fmap pred s) (tail xs)
...
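A quick usage sketch against this wrapper (demo is just an illustrative name):

demo :: [Bool]
demo =
  [ infiniteList (cycleList "ab")                          -- True
  , infiniteList (appendList emptyList (singletonList 3))  -- False
  , infiniteList (tailList (numbersFromList 0))            -- True
  ]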
As ehird wrote, your only hope is in finding out whether a list is cyclic. A way of doing so is to use an extension to Haskell called "observable sharing". See for instance: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.31.4053
When talking about the "internal representation of lists": from the standpoint of a Haskell implementation, there are no infinite lists. The "list" you ask about is actually a description of a computational process, not a data object. No data object is infinite inside a computer; such a thing simply does not exist.
As others have told you, internal list data might be cyclical, and implementation usually would be able to detect this, having a concept of pointer equality. But Haskell itself has no such concept.
Here's a Common Lisp function to detect the cyclicity of a list. cdr advances along a list by one notch, and cddr by two. eq is a pointer equality predicate.
(defun is-cyclical (p)
  ;; Floyd's "tortoise and hare": p advances one cell at a time, q two;
  ;; if the fast pointer ever catches the slow one, the list is cyclic.
  ;; (The helper is named scan rather than go, since GO names a special
  ;; operator in Common Lisp and cannot portably be rebound.)
  (labels ((scan (p q)
             (if (not (null q))
                 (if (eq p q)
                     t
                     (scan (cdr p) (cddr q))))))
    (scan p (cdr p))))

Why Prolog isn't printing this list

I have this Prolog fact:
schedule(mary,[ma424,ma387,eng301]).
and I have a predicate
taking(X,Y):- schedule(X, [Y | L]).
and when I try to figure out what classes she's taking by typing
taking(mary,Y).
I'm getting
Y = ma424
Why isn't it printing out ALL of her classes?
I've also tried this and other variations:
taking(X,Y):- schedule(X,[X|L]),schedule(Y, [Y | L]),schedule(Y,L),X\=Y,X\=L.
but it doesn't work.
How do I get it to print all the classes, given the way my rule is defined?
This is due to the way you defined the predicate.
taking(X, Y) :-          % X takes class Y if...
    schedule(X,          % in the schedule for X,
             [Y|L]).     % Y is the first element.
Your program will not magically decide to search through the list L if you don't tell it to. To do that, use the member/2 predicate:
taking(Student, Class) :-
    schedule(Student, Classes),
    member(Class, Classes).
member(Class, Classes).