I am trying to figure out how to programmatically evaluate a list of functions.
Let's say that I have this code:
(defn foo
[]
(println "foo"))
(defn bar
[]
(println "bar"))
(def funcs [foo bar])
I want to execute all of the functions in funcs programmatically.
I tried using eval, but with no success.
Thanks for any help.
Use for if you want the return values and are OK with lazy evaluation (your functions are not guaranteed to be called until you access the return values); use doseq if you don't need the values and are doing this for immediate side effects.
(doseq [f [foo bar]]
(f))
(def fs
(for [f [foo bar]]
(f)))
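Since the for version is lazy, nothing actually runs until fs is consumed. If you want the side effects to happen right away, you can force it, e.g.:
(doall fs)
; prints "foo" and "bar", returns (nil nil)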
You can use juxt:
((apply juxt funcs))
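With the foo and bar from the question, (apply juxt funcs) builds a single function that calls each of them; invoking it runs the side effects and collects the return values in a vector:
((apply juxt funcs))
; prints "foo" and "bar"
;=> [nil nil]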
You can simply map over the functions with a call:
(map #(%) funcs)
doall and dorun can be used to force effects. doall retains results, while dorun just returns nil.
(defn foo [] :foo) ; no side-effects
(doall (map #(%) [foo foo]))
;=> (:foo :foo)
(defn print-foo [] (println (foo))) ; with side-effects
(dorun (map #(%) [print-foo print-foo]))
;=> :foo
; :foo
; nil
Related
Currently I have some code like this:
(defn compute-issue [some args] (or (age-issue some args) (name-issue some args)))
More issue types are coming.
Is there something like this:
(defn compute-issue [some args] (first-not-nil [age-issue name-issue] some args))
; Where first-not-nil would be something like
(defn first-not-nil [fs & args]
(if (empty? fs)
nil
(let [result (apply (first fs) args)]
(if (nil? result)
(recur (rest fs) args)
result))))
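For example, calling it with two hypothetical issue functions would behave like this:
(first-not-nil [(fn [x] nil) (fn [x] (str "issue: " x))] "x1")
;=> "issue: x1"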
I'm new to Clojure. Am I reinventing an existing function?
There is a similar function some-fn in clojure.core:
Takes a set of predicates and returns a function f that returns the first logical true value
returned by one of its composing predicates against any of its arguments, else it returns
logical false. Note that f is short-circuiting in that it will stop execution on the first
argument that triggers a logical true result against the original predicates.
The key differences are that some-fn returns another function which you then call to do the actual work, and that this returned function also discards false results, which it sounds like you may not want. Here is another simple way to phrase it:
(defn first-not-nil [fs & args]
(first
(for [f fs
:let [r (apply f args)]
:when (some? r)]
r)))
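To make the difference concrete, here is a small sketch (the constantly-based functions are just placeholders):
((some-fn (constantly false) (constantly :x)) :ignored)        ;=> :x    (false results are skipped)
(first-not-nil [(constantly false) (constantly :x)] :ignored)  ;=> false (only nil results are skipped)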
I was writing an answer for this challenge, when I needed to give a recursive function an optional parameter. I ended up with something kind of equivalent to:
(defn func [a & [b?]]
(if b?
b?
(recur a a)))
My intent was for b? to act as an optional parameter. If it wasn't supplied, it would be defaulted to nil via destructuring.
Instead of running though, it gave me an error:
(func 1)
UnsupportedOperationException nth not supported on this type: Long clojure.lang.RT.nthFrom (RT.java:947)
After some debugging, I realized that for some reason the rest parameter wasn't a list as I'd expect, but just the passed number! The error was coming about because it tried to destructure the number.
I can fix it by getting rid of the wrapper list in the parameter list:
(defn func [a & b]
...
But this just looks wrong. I know the rest parameter should be a list, but b is actually just a number. If I use "unoptimized" recursion, it works as I'd expect:
(defn func2 [a & [b?]]
(if b?
b?
(func2 a a)))
(func2 1)
=> 1
Can anyone explain what's going on here?
This appears to be a known difference; the following example illustrates it:
; Note that recur can be surprising when using variadic functions.
(defn foo [& args]
(let [[x & more] args]
(prn x)
(if more (recur more) nil)))
(defn bar [& args]
(let [[x & more] args]
(prn x)
(if more (bar more) nil)))
; The key thing to note here is that foo and bar are identical, except
; that foo uses recur and bar uses "normal" recursion. And yet...
user=> (foo :a :b :c)
:a
:b
:c
nil
user=> (bar :a :b :c)
:a
(:b :c)
nil
; The difference arises because recur does not gather variadic/rest args
; into a seq.
It's the last comment that describes the difference.
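So with recur you have to pass the rest argument as a seq yourself. One way to repair the original func under that assumption:
(defn func [a & [b?]]
  (if b?
    b?
    (recur a [a]))) ; recur expects the rest args already collected in a seq
(func 1)
;=> 1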
I define a macro to bind a symbol derived from a string to the string like this:
lein repl
... Clojure 1.8.0 ...
user=> (defmacro foo [s] `(def ~(symbol s) ~s))
#'user/foo
It works as expected when invoked at top level:
user=> (foo "asdf")
#'user/asdf
user=> asdf
"asdf"
But when I try to map a function that invokes the macro over a sequence, the macro binds the function parameter symbol rather than the one I want:
user=> (map (fn [x] (foo x)) ["qwer"])
(#'user/x)
user=> x
"qwer"
user=> qwer
CompilerException ... Unable to resolve symbol: qwer ...
The following alternative binds the temporary symbol created by Clojure:
user=> (map #(foo %) ["qwer"])
(#'user/p1__1253#)
It also doesn't work when wrapped in doall as suggested by some of the existing answers I researched on StackOverflow.
How can I define a symbol-binding macro that I can map (in a function or otherwise) over a collection of strings?
map is a function and foo is a macro. Since macro expansion happens at compile time and functions are executed at run time, defining a symbol-binding macro that you can map (and thus expand at run time) is impossible.
What you can do is something like this:
(defn foo2 [s]
`(def ~(symbol s) ~s))
(defmacro foos [ss]
`(do ~#(map foo2 ss)))
(foos ["asdf" "qwer"])
asdf ;; => "asdf"
qwer ;; => "qwer"
Now it's the other way around: the macro is expanded using the functions map and foo.
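As a sanity check, expanding the call in the namespace where foo2 and foos are defined should show the generated defs:
(macroexpand-1 '(foos ["asdf" "qwer"]))
;=> (do (def asdf "asdf") (def qwer "qwer"))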
Here is a way of doing it. The code below first shows how the macro foo works, then an intermediate solution using a function map-foo-fn together with eval.
The final solution uses a second macro map-foo-mcr. This seems to be needed since (def ...) is a special form. This is similar (but not identical) to the problem of "turtles all the way down" where using a macro in one place requires all callers to also be macros, not functions.
(ns clj.core
(:require
[tupelo.core :as t] ))
(t/refer-tupelo)
(defmacro foo
[arg]
`(def ~(symbol arg) ~arg))
(foo "aa")
(spyx aa)
(defn map-foo-fn
[coll]
(cons 'do
(forv [elem coll]
(list 'foo elem))))
(newline)
(prn (map-foo-fn ["bb"] ))
(eval (map-foo-fn ["bb"] ))
(spyx bb)
(defmacro map-foo-mcr
[coll]
`(do
~#(forv [elem coll]
(list 'foo elem))))
(newline)
(println (macroexpand-1 '(map-foo-mcr ["cc" "dd"] )))
(map-foo-mcr ["cc" "dd"] )
(spyx cc)
(spyx dd)
Results:
aa => "aa"
(do (foo "bb"))
bb => "bb"
(do (foo cc) (foo dd))
cc => "cc"
dd => "dd"
Remember that, while macros can do one thing that functions can't (avoid evaluating their arguments), they cannot do other things that functions can. In particular, a macro can't be passed to map et al. where a higher-order function argument is required.
For more details see http://www.braveclojure.com/writing-macros and search for "Macros All the Way Down"
Note that project.clj needs
:dependencies [[tupelo "0.9.13"]]
for spyx to work.
I have functions that behave differently depending on which keyword arguments have values supplied. For this question, I am wondering about functions that behave slightly differently depending on the type of argument supplied.
Example function, that increments each element of a list:
(defn inc-list [& {:keys [list-str list]}]
(let [prepared-list (if-not (nil? list) list (clojure.string/split list-str #","))]
(map 'inc prepared-list)))
Does it make sense to make a multimethod that instead tests for the type of argument? I have not used multimethods before, and I am not sure about the right time to use them. If it is a good idea, would the below example make sense?
Example:
(defn inc-coll [col] (map inc col))
(defmulti inc-list class)
(defmethod inc-list ::collection [col] (inc-col col))
(defmethod inc-list String [list-str]
(inc-col
(map #(Integer/parseInt %)
(clojure.string/split list-str #",")))
First things first: (map 'inc x) treats each item in x as an associative collection, and looks up the value indexed by the key 'inc.
user> (map 'inc '[{inc 0} {inc 1} {inc 2}])
(0 1 2)
you probably want inc instead
user> (map inc [0 1 2])
(1 2 3)
Next, we have an attempt to inc a string, the args to string/split out of order, and some spelling errors.
If you define your multi to dispatch on class, then the methods should be parameterized by the Class, not a keyword placeholder. I changed the multi so it would work on anything Clojure knows how to treat as a seq. Also, as a bit of bikeshedding, it is better to use type, which supports some distinctions for differentiating inputs in Clojure code that class does not:
user> (type (with-meta {:a 0 :b 1} {:type "foo"}))
"foo"
Putting it all together:
user> (defn inc-coll [col] (map inc col))
#'user/inc-coll
user> (defmulti inc-list type)
nil
user> (defmethod inc-list String [list-str]
(inc-coll (map #(Integer/parseInt %) (clojure.string/split list-str #","))))
#<MultiFn clojure.lang.MultiFn#6507d1de>
user> (inc-list "1,10,11")
(2 11 12)
user> (defmethod inc-list clojure.lang.Seqable [col] (inc-coll (seq col)))
#<MultiFn clojure.lang.MultiFn#6507d1de>
user> (inc-list [1 2 3])
(2 3 4)
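Because multimethod dispatch uses isa?, any type implementing clojure.lang.Seqable will hit that second method as well; for example, a list should also work:
user> (inc-list '(1 2 3))
(2 3 4)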
Your first example is an obfuscated application of a technique called dispatching on type. It is obfuscated because in a message-passing style the caller must convey the type to your function.
Since in every case you only use one of the keyword args, you could as well define it as:
(defn inc-list
[m l]
(->> (case m ;; message dispatch
:list l
:list-str (map #(edn/read-string %) (str/split l #",")))
(map inc)))
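(This assumes the usual aliases, e.g. [clojure.edn :as edn] and [clojure.string :as str] in the ns form.) Usage would look like:
(inc-list :list [1 2 3])     ;=> (2 3 4)
(inc-list :list-str "1,2,3") ;=> (2 3 4)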
The caller could be relieved from having to pass m:
(defn inc-list
[l]
(->> (cond (string? l) (map ...)
:else l)
(map inc)))
This technique has the main disadvantage that the code of the operation procedure must be modified whenever a new type is introduced to the codebase.
In Clojure it is generally superseded by the polymorphism construct protocols, e.g.:
(defprotocol IncableList
(inc-list [this]))
It can be implemented for any type, e.g.:
(extend-type clojure.lang.Seqable
IncableList
(inc-list [this] (map inc this)))
(extend-type String
IncableList
(inc-list [this] (map #(inc ...) this)))
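For completeness, here is one way the String case could be filled in (the edn-based parsing is my assumption, not part of the sketch above):
(require '[clojure.edn :as edn]
         '[clojure.string :as str])
(extend-type String
  IncableList
  (inc-list [this]
    (map (comp inc edn/read-string) (str/split this #","))))
(inc-list "1,2,3") ;=> (2 3 4)
(inc-list [1 2 3]) ;=> (2 3 4), via the Seqable extension above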
Multimethods allow the same and provide additional flexibility over message passing and dispatching on type, by decoupling the dispatch mechanism from the operation procedures and providing the additivity of data-directed programming. They are slower than protocols, though.
In your example the intention is to dispatch based on type, so you don't need multimethods; protocols are the appropriate technique.
Suppose that I have a vector of key-value pairs that I want to put into a map.
(def v [k1 v1 k2 v2])
I do this sort of thing:
(apply assoc (cons my-map v))
And in fact, I've found myself doing this pattern,
(apply some-function (cons some-value some-seq))
several times in the past couple of days. Is this idiomatic, or is there a nicer way to move arguments from sequences into functions?
apply takes extra arguments between the function name and the last seq argument.
user> (doc apply)
-------------------------
clojure.core/apply
([f args* argseq])
Applies fn f to the argument list formed by prepending args to argseq.
That's what args* means. So you can do this:
user> (apply assoc {} :foo :bar [:baz :quux])
{:baz :quux, :foo :bar}
user> (apply conj [] :foo :bar [:baz :quux])
[:foo :bar :baz :quux]
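Applied to the original example, that means you can drop the cons entirely (assuming my-map and v are as in the question):
(apply assoc my-map v)
; same result as (apply assoc (cons my-map v))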