Why does Lisp allow replacement of math operators in let? - clojure

I know in Scheme I can write this:
(let ((+ *)) (+ 2 3)) => 6
As well as this, in Clojure:
(let [+ *] (+ 2 3)) => 6
I know this works, but it feels so weird. In just about any language, the math operators are predefined. C++ and Scala can do operator overloading, but this doesn't seem to be that.
Doesn't this cause confusion? Why does Lisp allow this?

This is not a general Lisp feature.
In Common Lisp the effect of binding a core-language function is undefined. This means the developer should not expect that it works in portable code. An implementation may also signal a warning or an error.
For example the SBCL compiler will signal this error:
; caught ERROR:
; Lock on package COMMON-LISP violated when
; binding + as a local function while
; in package COMMON-LISP-USER.
; See also:
; The SBCL Manual, Node "Package Locks"
; The ANSI Standard, Section 11.1.2.1.2
; (DEFUN FOO (X Y)
;   (FLET ((+ (X Y)
;            (* X Y)))
;     (+ X Y)))
We can have our own + in Common Lisp, but it then has to be in a different package (= symbol namespace):
(defpackage "MYLISP"
(:use "CL")
(:shadow CL:+))
(in-package "MYLISP")
(defun foo (a b)
(flet ((+ (x y)
(* x y)))
(+ a b)))

Disclaimer: This is from a Clojure point of view.
+ is just another function. You can pass it around, sum with it, partially apply it, read its docs, ...:
user=> (apply + [1 2 3])
6
user=> (reduce + [1 2 3])
6
user=> (map (partial + 10) [1 2 3])
(11 12 13)
user=> `+
clojure.core/+
user=> (doc +)
-------------------------
clojure.core/+
([] [x] [x y] [x y & more])
Returns the sum of nums. (+) returns 0. Does not auto-promote
longs, will throw on overflow. See also: +'
So you can have many + in different namespaces. The core one gets "use"-d for you by default, but you can simply write your own. You can write your own DSL:
user=> (defn + [s] (re-pattern (str s "+")))
WARNING: + already refers to: #'clojure.core/+ in namespace: user, being replaced by: #'user/+
#'user/+
user=> (+ "\\d")
#"\d+"
user=> (re-find (+ "\\d") "666")
"666"
It's not a special form; it's no different from any other function. With that established, why should it not be allowed to be overridden?
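Note, too, that shadowing never destroys the original; the fully qualified name still reaches it:
user=> (let [+ *]
         [(+ 2 3) (clojure.core/+ 2 3)])
[6 5]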

In Scheme, with let you are making a local binding that shadows whatever binding is in an outer scope. Since + and * are just variables that happen to evaluate to procedures, you are merely giving the old procedures new variable names.
(let ((+ *))
  +)
; ==> #<procedure:*> (non standard visualization of a procedure)
In Scheme there are no reserved words; if you look at other languages, the list of reserved words is often quite long. Thus in Scheme you can do this:
(define (test v)
  (define let 10)            ; from here you cannot use let in this scope
  (define define (+ let v))  ; from here you cannot use define to define stuff
  define)                    ; this is the variable, not the special form
;; here let and define go out of scope and the special forms are OK again

(define define +) ; from here you cannot use top level define
(define 5 6)
; ==> 11
The really nice thing about this is that if you choose a name, and the next version of the standard happens to use the same name for something similar but not compatible, your code will not break. In other languages I have worked with, a new version might introduce conflicts.
R6RS makes it even easier
From R6RS we have libraries. That means that we have full control over what top level forms we get from the standard into our programs. You have several ways to do it:
#!r6rs
(import (rename (except (rnrs base) +) (* +)))
(+ 10 20)
; ==> 200
This is also OK.
#!r6rs
(import (except (rnrs base) +))
(define + *)
(+ 10 20)
; ==> 200 guaranteed
And finally:
#!r6rs
(import (rnrs base)) ; imports both * and +
(define + *) ; defines + as an alias to *
(+ 10 20)
; ==> 200 guaranteed
Other languages do this too:
JavaScript is perhaps the most obvious:
parseFloat = parseInt;
parseFloat("4.5")
// ==> 4
But you cannot touch their operators. They are reserved because the language needs to do a lot of work for operator precedence. Just like Scheme, JS is a nice language for duck typing.

Mainstream Lisp dialects do not have reserved tokens for infix operations. There is no categorical difference between +, expt, format or open-file: they are all just symbols.
A Lisp program which performs (let ((+ 3)) ...) is spiritually very similar to a C program which does something like { int sqrt = 42; ... }. There is a sqrt function in the standard C library, and since C has a single namespace (it's a Lisp-1), that sqrt is now shadowed.
What we can't do in C is { int + = 42; ... }, because + is an operator token. An identifier is called for, so there is a syntax error. We also can't do { struct interface *if = get_interface(...); } because if is a reserved keyword and not an identifier, even though it looks like one.

Lisps tend not to have reserved keywords, but some dialects have certain symbols or categories of symbols that can't be bound as variables. In ANSI Common Lisp, we can't use nil or t as variables (specifically, the symbols nil and t that come from the common-lisp package). This annoys some programmers, because they'd like a t variable for "time" or "type". Also, symbols from the keyword package, usually appearing with a leading colon, cannot be bound as variables. The reason is that all these symbols are self-evaluating: nil, t and the keyword symbols evaluate to themselves, and so do not act as variables to denote another value.
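Clojure draws a similar line: nil, true, false, and keywords evaluate to themselves and are rejected as binding names. A sketch (exact error messages vary by version):
user=> (let [true 1] true)
;; => compile-time error: true is not a valid binding symbol
user=> (let [:a 1] :a)
;; => compile-time error: keywords cannot be bound as locals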

The reason we allow this in lisp is that all bindings are done with lexical scope, which is a concept that comes from lambda calculus.
Lambda calculus is a simplified system for managing variable binding. In lambda calculus, the rules for expressions like
(lambda (x) (lambda (y) y))
and
(lambda (x) (lambda (y) x))
and even
(lambda (x) (lambda (x) x))
are carefully specified.
In Lisp, LET can be thought of as syntactic sugar for a lambda application. For example, your expression (let ([+ x]) (+ 2 3)) is equivalent to ((lambda (+) (+ 2 3)) x), which according to lambda calculus simplifies down to (x 2 3).
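The same equivalence is easy to check in Clojure:
user=> (let [+ *] (+ 2 3))
6
user=> ((fn [+] (+ 2 3)) *)
6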
In summary, lisp is based on uniformly applying a very simple and clear model (called lambda calculus). If it seems strange at first, that's because most other programming languages don't have such consistency or base their variable binding on a mathematical model.

Scheme's philosophy is to impose minimal restrictions so as to give maximal power to the programmer.
A reason to allow such things is that in Scheme you can embed other languages and in other languages you want to use the * operator with different semantics.
For example, if you implement a language to represent regular expressions, you want to give * the semantics of the algebraic Kleene star operator and write programs like this one
(* (+ "abc" "def") )
to represent a language that contains words like these:
empty
abc
abcabc
abcdef
def
defdef
defabc
....
Starting from the main language, untyped lambda calculus, it is possible to create a language in which you can redefine absolutely everything apart from the lambda symbol. This is the model of computation Scheme is built on.
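Here is a minimal Clojure sketch of such an embedding; the namespace and helper definitions are made up for illustration, not a real library:
(ns regex-dsl
  (:refer-clojure :exclude [+ *])
  (:require [clojure.string :as str]))

;; + is alternation, * is the Kleene star, over strings of regex syntax
(defn + [& alts] (str "(?:" (str/join "|" alts) ")"))
(defn * [expr]   (str "(?:" expr ")*"))

(re-matches (re-pattern (* (+ "abc" "def"))) "abcdefdef")
;; => "abcdefdef"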

It's not weird, because in Lisp there are no operators, only functions and special forms (like let or if) that can be built in or created as macros. So here + is not an operator but a function assigned to the symbol +, which adds its arguments (in Scheme and Clojure you can say it's just a variable that holds a function for adding numbers). Likewise * is not a multiplication operator but the asterisk symbol, bound to a function that multiplies its arguments. This is just convenient notation: the symbol could be add or sum, but + is shorter and familiar from other languages.
This is one of those mind-bending concepts when you encounter it for the first time, like functions as arguments and return values of other functions.
If you use very basic Lisp and lambda calculus, you don't even need numbers or a + operator in the base language. You can create numbers from functions, build plus and minus as functions using the same trick, and assign them to the symbols + and - (see Church encoding).
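Here is a small Clojure sketch of that trick using Church numerals (all names are illustrative):
(def zero (fn [f] (fn [x] x)))
(defn succ [n] (fn [f] (fn [x] (f ((n f) x)))))
(defn plus [m] (fn [n] (fn [f] (fn [x] ((m f) ((n f) x))))))
(defn church->int [n] ((n inc) 0))   ; interpret a numeral as an ordinary int

(church->int ((plus (succ zero)) (succ (succ zero))))
;; => 3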

Why does Lisp allow rebinding of math operators?
for consistency and
because it can be useful.
Doesn't this cause confusion?
No.
Most programming languages, following traditional algebraic notation, have special syntax for the elementary arithmetic functions of addition and subtraction and so on. Rules of priority and association make some function calls implicit. This syntax makes the expressions easier to read at the price of consistency.
LISPs tip the see-saw the other way, preferring consistency over legibility.
Consistency
In Clojure (the Lisp I know), the math operators (+, -, *, ... ) are nothing special.
Their names are just ordinary symbols.
They are core functions like any others.
So of course you can replace them.
Usefulness
Why would you want to override the core arithmetic operators? For example, the units2 library redefines them to accept dimensioned quantities as well as plain numbers.
Clojure algebra is harder to read.
All operators are prefix.
All operator applications are explicit - no priorities.
If you are determined to have infix operators with priorities, you can do it. Incanter does so: here are some examples and here is the source code.
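To get a feel for how little machinery a basic version takes, here is a toy infix macro with no precedence rules (purely illustrative; Incanter's implementation is far more complete):
user=> (defmacro infix [[a op b]] (list op a b))
#'user/infix
user=> (infix (2 + 3))
5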

Related

Is there any guideline to help a Clojure programmer determine in which situations macros should be used over functions? [duplicate]

While reading the book "On Lisp", I found an interesting topic about macros (chapter 7 in the book). I converted the example to Clojure and executed it in the REPL. Both versions behave almost identically, and I still don't know when something should be built as a macro when I can build it as a function. Is there any guideline to help a programmer determine in which situations to prefer a macro over a function?
Example in Macros
(defmacro nif [expr pos zero neg]
  `(case (Integer/signum ~expr)
     1  ~pos
     0  ~zero
     -1 ~neg))

(map #(nif % 'p 'z 'n) [0 2.5 -8])
=> (z p n)
Example in function
(defn nif2 [expr pos zero neg]
  (case (Integer/signum expr)
    1  pos
    0  zero
    -1 neg))

(map #(nif2 % 'p 'z 'n) [0 2.5 -8])
=> (z p n)
Please note that the new symbol nif is a language extension. The implementation of nif is a compiler extension which adds a layer of pre-processing of the source code AST before normal compilation resumes.
This is only useful if you cannot already do something in Clojure with either a built-in or user-defined function. For example, the macro clojure.core/when is a simplified version of clojure.core/if (see the source code), and could not be written as a function.
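For reference, clojure.core/when is essentially the following (metadata elided). A function could not do this, because it must receive body unevaluated:
(defmacro when
  "Evaluates test. If logical true, evaluates body in an implicit do."
  [test & body]
  (list 'if test (cons 'do body)))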
Also, note that functions can do things that macros cannot; e.g. you can compose functions, but you cannot compose macros.
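A quick illustration: functions are values that comp can combine, while a macro has no runtime value at all:
user=> (map (comp inc #(* 2 %)) [1 2 3])
(3 5 7)
user=> (comp when not)
;; => CompilerException: Can't take value of a macro: #'clojure.core/when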
I haven't seen that book, but for this example there is zero benefit to writing the macro. It is just an academic exercise to prove you could do it.
However, note that you cannot use map with a macro, only a function. The example only works at all because you wrap the macro with a function:
#(nif % 'p 'z 'n)
which is the same as the anonymous function
(fn [val]
  (nif val 'p 'z 'n))
Normally, you'd just write it as
(defn val->sign
  [val]
  (case (Integer/signum val)
    1  :p
    0  :z
    -1 :n))
and invoke it as
(mapv val->sign [0 2.5 -8])
;=> [:z :p :n]
This example also gains nothing from allowing custom symbols as the return values. It is more natural in Clojure to just use pre-defined keywords for the return values, as shown above.

Clojure Core function argument positions seem rather confusing. What's the logic behind it?

For me, as a new Clojurian, some core functions seem rather counter-intuitive and confusing when it comes to argument order/position. Here's an example:
> (nthrest (range 10) 5)
=> (5 6 7 8 9)
> (take-last 5 (range 10))
=> (5 6 7 8 9)
Perhaps there is some rule/logic behind it that I don't see yet?
I refuse to believe that the Clojure core team made so many brilliant technical decisions and forgot about consistency in function naming/argument ordering.
Or should I just remember it as it is?
Thanks
Slightly offtopic: rand & rand-int vs. random-sample is another example where function naming seems inconsistent, but those are rather rarely used functions, so it's not a big deal.
There is an FAQ on Clojure.org for this question: https://clojure.org/guides/faq#arg_order
What are the rules of thumb for arg order in core functions?
Primary collection operands come first. That way one can write -> and its ilk, and their position is independent of whether or not they have variable arity parameters. There is a tradition of this in OO languages and Common Lisp (slot-value, aref, elt).
One way to think about sequences is that they are read from the left, and fed from the right:
<- [1 2 3 4]
Most of the sequence functions consume and produce sequences. So one way to visualize that is as a chain:
map <- filter <- [1 2 3 4]
and one way to think about many of the seq functions is that they are parameterized in some way:
(map f) <- (filter pred) <- [1 2 3 4]
So, sequence functions take their source(s) last, and any other parameters before them, and partial allows for direct parameterization as above. There is a tradition of this in functional languages and Lisps.
Note that this is not the same as taking the primary operand last. Some sequence functions have more than one source (concat, interleave). When sequence functions are variadic, it is usually in their sources.
Adapted from comments by Rich Hickey.
Functions that work with seqs usually take the actual seq as the last argument (map, filter, remove, etc.).
Accessing and "changing" individual elements takes the collection as the first argument: conj, assoc, get, update.
That way, you can use the (->>) macro with a collection consistently, as well as create transducers consistently.
Only rarely does one have to resort to (as->) to change the argument order. And if you have to do so, it might be an opportunity to check whether your own functions follow that convention.
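For instance, the two conventions line up with the two threading macros:
;; seq functions take the seq last, so they chain with ->>
(->> (range 10) (filter odd?) (map inc) (reduce +))   ; => 30

;; collection functions take the collection first, so they chain with ->
(-> {:a 1} (assoc :b 2) (update :a inc) (get :b))     ; => 2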
For some functions (especially functions that are "seq in, seq out"), the args are ordered so that one can use partial as follows:
(ns tst.demo.core
  (:use tupelo.core tupelo.test))

(dotest
  (let [dozen      (range 12)
        odds-1     (filterv odd? dozen)
        filter-odd (partial filterv odd?)
        odds-2     (filter-odd dozen)]
    (is= odds-1 odds-2
         [1 3 5 7 9 11])))
For other functions, Clojure often follows the ordering of "biggest-first", or "most-important-first" (usually these have the same result). Thus, we see examples like:
(get <map> <key>)
(get <map> <key> <default-val>)
This also shows that any optional values must, by definition, be last (in order to use "rest" args). This is common in most languages (e.g. Java).
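Concretely, with get:
(get {:a 1} :a)    ; => 1
(get {:a 1} :b 42) ; => 42, the default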
For the record, I really dislike using partial functions, since they have user-defined names (at best) or are used inline (more common). Consider this code:
(let [dozen   (range 12)
      odds    (filterv odd? dozen)
      evens-1 (mapv (partial + 1) odds)
      evens-2 (mapv #(+ 1 %) odds)
      add-1   (fn [arg] (+ 1 arg))
      evens-3 (mapv add-1 odds)]
  (is= evens-1 evens-2 evens-3
       [2 4 6 8 10 12]))
Also, I personally find it really annoying to parse out code using partial, as with evens-1, especially for user-defined functions, or even standard functions that are not as simple as +.
This is especially so if the partial is used with 2 or more args.
For the 1-arg case, the function literal seen for evens-2 is much more readable to me.
If 2 or more args are present, please make a named function (either local, as shown for evens-3), or a regular (defn some-fn ...) global function.

~ 'unquote' vs normal reference to a variable in clojure?

I have been playing around with Clojure for some time, but I am not able to figure out the difference between ~ (unquote) and a normal reference.
For eg:
(defn f [a b] (+ a b))
(f 1 2)
outputs:
3
and on the other hand:
(defn g [a b] `(+ ~a ~b))
(g 1 2)
outputs:
(clojure.core/+ 1 2)
So my question is: what's the need for the different syntax?
There is a language feature called "syntax-quote" that provides syntactic shortcuts for building lists that look like Clojure expressions. You don't have to use it to build lists that are Clojure s-expressions; you can build whatever you want with it, though it's almost always used in code that is part of a macro, where that macro needs to build a Clojure s-expression and return it.
so your example
(defn g [a b] `(+ ~a ~b))
when it is read by the Clojure reader, runs the syntax-quote reader macro (which is named `)
and that syntax-quote macro will take the list
(+ ~a ~b)
as its argument and return the list
(clojure.core/+ 1 2)
because it interprets the symbol ~ to mean "include, in the list we are building, the result of evaluating this next thing".
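Comparing plain quote with syntax-quote at the REPL makes the difference visible:
user=> '(+ a b)    ; plain quote: nothing is resolved or evaluated
(+ a b)
user=> (def a 1)
#'user/a
user=> (def b 2)
#'user/b
user=> `(+ ~a ~b)  ; + is resolved, ~a and ~b are evaluated
(clojure.core/+ 1 2)
user=> (eval `(+ ~a ~b))
3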
The backquote symbol ` and the tilde ~ are normally only used when writing macros; you shouldn't need them when writing ordinary functions with defn etc.
You can find more information here and in other books.

emulating Clojure-style callable objects in Common Lisp

In Clojure, hash-maps and vectors implement invoke, so that they can be used as functions, for example
(let [dict {:species "Ursus horribilis"
            :ornery  :true
            :diet    "You"}]
  (dict :diet))
lein> "You"
or, for vectors,
(let [v [42 613 28]]
  (v 1))
lein> 613
One can make callable objects in Clojure by having them implement IFn. I'm new-ish to Common Lisp -- are callable objects possible, and if so, what would implementing them involve? I'd really like to be able to do things like
(let ((A (make-array (list n n) ...)))
  (loop for i from 0 to n
        for j from 0 to m
        do (setf (A i j) (something i j)))
  A)
rather than have code littered with aref. Likewise, it would be cool if you could access entries of other data structures, e.g. dictionaries, the same way.
I've looked at the wiki entry on function objects in Lisp/Scheme and it seems as if having a separate function namespace will complicate matters for CL, whereas in Scheme you can just do this with closures.
Example of callable objects in a precursor of Common Lisp
Callable objects have been provided before. For example in Lisp Machine Lisp:
Command: ("abc" 1) ; doesn't work in Common Lisp
#\b
Bindings in Common Lisp
Common Lisp has separate namespaces for function names and value names. So (array 10 1 20) would only make sense if array were a symbol denoting a function in the function namespace; the function value would then be a callable array.
Making values bound to variables act as functions mostly defeats the purpose of the different namespaces for functions and values.
(let ((v #(1 2 3)))
  (v 10)) ; doesn't work in Common Lisp
The above makes no sense in a language with different namespaces for functions and values. FLET, not LET, is used to bind functions:
(flet ((v #(1 2 3 4 5 6 7))) ; doesn't work in Common Lisp
  (v 4))
This would then mean we would put data into the function namespace. Do we want that? Not really.
Literal data as functions in function calls.
One could also think of at least allowing literal data to act as functions in direct function calls:
(#(1 2 3 4 5 6 7) 4) ; doesn't work in Common Lisp
instead of
(aref #(1 2 3 4 5 6 7) 4)
Common Lisp does not allow that in any trivial or relatively simple way.
Side remark:
One can implement something in the direction of integrating functions and values with CLOS, since CLOS generic functions are also CLOS instances of the class STANDARD-GENERIC-FUNCTION and it's possible to have and use user-defined subclasses of that. But that's usually not exploited.
Recommendation
So, best to adjust to a different language style and use CL as it is. In this case Common Lisp is not flexible enough to easily incorporate such a feature. It is general CL style not to omit symbols for minor code optimizations. The danger is obfuscation and write-only code, because a lot of information is then no longer directly in the source code.
Although there may not be a way to do exactly what you want to do, there are some ways to hack together something similar. One option is to define a new binding form, with-callable, that allows us to bind functions locally to callable objects. For example we could make
(with-callable ((x (make-array ...)))
  (x ...))
be roughly equivalent to
(let ((x (make-array ...)))
  (aref x ...))
Here is a possible definition for with-callable:
(defmacro with-callable (bindings &body body)
  "For each binding that contains a name and an expression, bind the
name to a local function which will be a callable form of the
value of the expression."
  (let ((gensyms (loop for b in bindings collect (gensym))))
    `(let ,(loop for (var val) in bindings
                 for g in gensyms
                 collect `(,g (make-callable ,val)))
       (flet ,(loop for (var val) in bindings
                    for g in gensyms
                    collect `(,var (&rest args) (apply ,g args)))
         ,@body))))
All that's left is to define different methods for make-callable that return closures for accessing into the objects. For example here is a method that would define it for arrays:
(defmethod make-callable ((obj array))
  "Make an array callable."
  (lambda (&rest indices)
    (apply #'aref obj indices)))
Since this syntax is kind of ugly we can use a macro to make it prettier.
(defmacro defcallable (type args &body body)
  "Define how a callable form of TYPE should get access into it."
  `(defmethod make-callable ((,(car args) ,type))
     ,(format nil "Make a ~A callable." type)
     (lambda ,(cdr args) ,@body)))
Now to make arrays callable we would use:
(defcallable array (obj &rest indices)
  (apply #'aref obj indices))
Much better. We now have a form, with-callable, which will define local functions that allow us to access into objects, and a macro, defcallable, that allows us to define how to make callable versions of other types. One flaw with this strategy is that we have to explicitly use with-callable every time we want to make an object callable.
Another option that is similar to callable objects is Arc's structure accessing ssyntax. Basically x.5 accesses the element at index five in x. I was able to implement this in Common Lisp. You can see the code I wrote for it here, and here. I also have tests for it so you can see what using it looks like here.
How my implementation works is I wrote a macro w/ssyntax which looks at all of the symbols in the body and defines macros and symbol-macros for some of them. For example the symbol-macro for x.5 would be (get x 5), where get is a generic function I defined that accesses into structures. The flaw with this is I always have to use w/ssyntax anywhere I want to use ssyntax. Fortunately I am able to hide it away inside a macro def which acts like defun.
I agree with Rainer Joswig's advice: It would be better to become comfortable with Common Lisp's way of doing things--just as it's better for a Common Lisp programmer to become comfortable with Clojure's way of doing things, when switching to Clojure. However, it is possible to do part of what you want, as malisper's sophisticated answer shows. Here is the start of a simpler strategy:
(defun make-array-fn (a)
  "Return a function that, when passed an integer i, will
return the element of array a at index i."
  (lambda (i) (aref a i)))

(setf (symbol-function 'foo) (make-array-fn #(4 5 6)))
(foo 0) ; => 4
(foo 1) ; => 5
(foo 2) ; => 6
symbol-function accesses the function cell of the symbol foo, and setf puts the function object created by make-array-fn into it. Since this function is then in the function cell, foo can be used in the function position of a list. If you wanted, you could wrap up the whole operation into a macro, e.g. like this:
(defmacro def-array-fn (sym a)
  "Define sym as a function that is the result of (make-array-fn a)."
  `(setf (symbol-function ',sym)
         (make-array-fn ,a)))

(def-array-fn bar #(10 20 30 40))
(bar 0) ; => 10
(bar 1) ; => 20
(bar 3) ; => 40
Of course, an "array" defined this way no longer looks like an array. I suppose you could do something fancy with CL's printing routines. It's also possible to allow setting values of the array as well, but this would probably require a separate symbols.

Is it possible to implement auto-currying to the Lisp-family languages?

That is, when you call a function with >1 arity with only one argument, it should, instead of displaying an error, curry that argument and return the resulting function with decreased arity. Is this possible to do using Lisp's macros?
It's possible, but not easy if you want a useful result.
If you want a language that always does simple currying, then the implementation is easy. You just convert every application of more than one input to a nested application, and the same for functions of more than one argument. With Racket's language facilities, this is a very simple exercise. (In other lisps you can get a similar effect by some macro around the code where you want to use it.)
(Incidentally, I have a language on top of Racket that does just this. It gets the full cuteness of auto-curried languages, but it's not intended to be practical.)
However, it's not too useful since it only works for functions of one argument. You could make it useful with some hacking, for example, treat the rest of the lisp system around your language as a foreign language and provide forms to use it. Another alternative is to provide your language with arity information about the surrounding lisp's functions. Either of these require much more work.
Another option is to just check every application. In other words, you turn every
(f x y z)
into code that checks the arity of f and creates a closure if there are not enough arguments. This is not too hard in itself, but it carries a significant overhead price. You could try a similar trick, using information about the arities of functions at the macro level to know where such closures should be created -- but that's difficult in essentially the same way.
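In Clojure, a sketch of that per-call check might look like the following. The arity is passed in explicitly because Clojure does not portably expose it; call-or-curry is a made-up helper:
(defn call-or-curry [f arity & args]
  (if (>= (count args) arity)
    (apply f args)                  ; enough arguments: call the function
    (fn [& more]                    ; otherwise: close over what we have
      (apply call-or-curry f arity (concat args more)))))

((call-or-curry + 3 1) 2 3) ; => 6
(call-or-curry + 3 1 2 3)   ; => 6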
But there is a much more serious problem, at the higher level of what you want to do. The thing is that variable-arity functions just don't play well with automatic currying. For example, take an expression like:
(+ 1 2 3)
How would you decide if this should be called as is, or whether it should be translated to ((+ 1 2) 3)? It seems like there's an easy answer here, but what about this? (translate to your favorite lisp dialect)
(define foo (lambda xs (lambda ys (list xs ys))))
In this case you can split a (foo 1 2 3) in a number of ways. Yet another issue is what do you do with something like:
(list +)
Here you have + as an expression, but you could decide that this is the same as applying it on zero inputs, which fits +'s arity, but then how do you write an expression that evaluates to the addition function? (Sidenote: ML and Haskell "solve" this by not having nullary functions...)
Some of these issues can be resolved by deciding that each "real" application must have parens for it, so a + by itself will never be applied. But that loses much of the cuteness of having an auto-curried language, and you still have problems to solve...
In Racket (and some other Scheme implementations) it's possible to curry a function using the curry procedure:
(define (add x y)
  (+ x y))
(add 1 2) ; non-curried procedure call
(curry add) ; curried procedure, expects two arguments
((curry add) 1) ; curried procedure, expects one argument
(((curry add) 1) 2) ; curried procedure call
From Racket's documentation:
[curry] returns a procedure that is a curried version of proc. When the resulting procedure is first applied, unless it is given the maximum number of arguments that it can accept, the result is a procedure to accept additional arguments.
You could easily implement a macro which automatically uses curry when defining new procedures, something like this:
(define-syntax define-curried
  (syntax-rules ()
    ((_ (f . a) body ...)
     (define f (curry (lambda a (begin body ...)))))))
Now the following definition of add will be curried:
(define-curried (add a b)
  (+ a b))
add
> #<procedure:curried>
(add 1)
> #<procedure:curried>
((add 1) 2)
> 3
(add 1 2)
> 3
The short answer is yes, though not easily.
You could implement this as a macro that wraps every call in partial, though only in a limited context. Clojure has some features that would make this rather difficult, such as variable-arity functions and dynamic calls. Clojure also lacks a formal type system to concretely decide when a call can take no more arguments and should actually be invoked.
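Without any macro, explicit partial application is already idiomatic Clojure:
user=> (def add3 (partial + 1 2))
#'user/add3
user=> (add3 4)
7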
As noted by Alex W, the Common Lisp Cookbook does give an example of a "curry" function for Common Lisp. The specific example is further down on that page:
(declaim (ftype (function (function &rest t) function) curry)
         (inline curry)) ;; optional

(defun curry (function &rest args)
  (lambda (&rest more-args)
    (apply function (append args more-args))))
Auto-currying shouldn't be that hard to implement, so I took a crack at it. Note that the following isn't extensively tested, and doesn't check that there aren't too many args (the function just completes when there are that number or more):
(defun auto-curry (function num-args)
  (lambda (&rest args)
    (if (>= (length args) num-args)
        (apply function args)
        (auto-curry (apply (curry #'curry function) args)
                    (- num-args (length args))))))
Seems to work, though:
* (auto-curry #'+ 3)
#<CLOSURE (LAMBDA (&REST ARGS)) {1002F78EB9}>
* (funcall (auto-curry #'+ 3) 1)
#<CLOSURE (LAMBDA (&REST ARGS)) {1002F7A689}>
* (funcall (funcall (funcall (auto-curry #'+ 3) 1) 2) 5)
8
* (funcall (funcall (auto-curry #'+ 3) 3 4) 7)
14
A primitive (doesn't handle full lambda lists properly, just simple parameter lists) version of some macro syntax sugar over the above:
(defmacro defun-auto-curry (fn-name (&rest args) &body body)
  (let ((currying-args (gensym)))
    `(defun ,fn-name (&rest ,currying-args)
       (apply (auto-curry (lambda (,@args) ,@body)
                          ,(length args))
              ,currying-args))))
Seems to work, though the need for funcall is still annoying:
* (defun-auto-curry auto-curry-+ (x y z)
    (+ x y z))
AUTO-CURRY-+
* (funcall (auto-curry-+ 1) 2 3)
6
* (auto-curry-+ 1)
#<CLOSURE (LAMBDA (&REST ARGS)) {1002B0DE29}>
Sure, you just have to decide the exact semantics for your language, and then implement your own loader which will translate your source files into the implementation language.
You could e.g. translate every user function call (f a b c ... z) into (...(((f a) b) c)... z), and every (define (f a b c ... z) ...) to (define f (lambda(a) (lambda(b) (lambda(c) (... (lambda(z) ...) ...))))) on top of a Scheme, to have an auto-currying Scheme (that would forbid varargs functions of course).
You will also need to define your own primitives, turning varargs functions like (+) into binary ones and turning their applications into fold forms, e.g. (+ 1 2 3 4) ==> (fold (+) (list 1 2 3 4) 0) or something - or perhaps just making such calls as (+ 1 2 3 4) illegal in your new language, expecting its user to write the fold forms themselves.
That's what I meant by "deciding ... semantics for your language".
The loader can be as simple as wrapping the file contents into a call to a macro - which you would then have to implement, as per your question.
Lisp already has Functional Currying:
* (defun adder (n)
    (lambda (x) (+ x n)))
ADDER
http://cl-cookbook.sourceforge.net/functions.html
Here's what I was reading about Lisp macros: https://web.archive.org/web/20060109115926/http://www.apl.jhu.edu/~hall/Lisp-Notes/Macros.html
It's possible to implement this in pure Lisp. It's possible to implement it using macros as well, however it seems as though macros would make it more confusing for very basic stuff.