This question is related to one I asked recently.
If I rewrite (fn [n] (fn [x] (apply * (repeat n x)))) as
(defn power [n] (fn [x] (apply * (repeat n x))))
it works just fine when used like this
((power 2) 16)
I can substitute 2 with another power, but I'd like to make a function just for squares, cubed, and so on. What is the best way to do that? I've been fiddling with it in the REPL, but no luck so far.
Using a macro for this sidesteps the question entirely, which was: "I have a function that generates closures; how do I give those closures names?" The simple solution is:
(letfn [(power [n]
(fn [x]
(apply * (repeat n x))))]
(def square (power 2))
(def cube (power 3)))
If you really, truly hate repeating def and power a few times, then and only then is it time to get macros involved. But compared to the simplicity of doing it with functions, the effort you'll spend on even the simplest macro is wasted unless you're defining functions up to at least the tenth power.
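For the record, a minimal sketch of such a macro (the name def-power is made up here for illustration):
(defmacro def-power [name n]
  ;; sketch only: expands to (def name (fn [x] (apply * (repeat n x))))
  `(def ~name (fn [x#] (apply * (repeat ~n x#)))))
(def-power square 2)
(def-power cube 3)
(square 4) ;=> 16
(cube 3)   ;=> 27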
Not quite sure if this is what you're searching for, but macro templates might be it. Here's how I would write your code:
(use 'clojure.template)
(do-template [name n]
(defn name [x] (apply * (repeat n x)))
square 2
cube 3)
user> (cube 3)
;=> 27
For more complex tasks of a similar kind, you could write a macro that wraps do-template to perform some transformation on its arguments, e.g.:
(defmacro def-powers-of [& ns]
(let [->name #(->> % (str "power") symbol)]
`(do-template [~'name ~'n]
(defn ~'name [~'x] (apply * (repeat ~'n ~'x)))
~@(->> ns
(map #(vector (->name %) %))
flatten))))
(def-powers-of 2 3 4 5)
user> (power3 3)
;=> 27
user> (power5 3)
;=> 243
P.S.: That macro might look awkward if you're just starting with Clojure though, don't give up because of it! ;-)
I often have to run my data through a function if the data fulfill certain criteria. Typically, both the function f and the criteria checker pred are parameterized by the data. For this reason, I find myself wishing for a higher-order if-then-else which knows neither f nor pred.
For example, assume I want to add 10 to all even integers in (range 5). Instead of
(map #(if (even? %) (+ % 10) %) (range 5))
I would prefer to have a helper (let's call it fork) and do this:
(map (fork even? #(+ % 10)) (range 5))
I could go ahead and implement fork as a function. It would look like this:
(defn fork
([pred thenf elsef]
#(if (pred %) (thenf %) (elsef %)))
([pred thenf]
(fork pred thenf identity)))
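With that definition, the motivating example reads as hoped:
(map (fork even? #(+ % 10)) (range 5))
;=> (10 1 12 3 14)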
Can this be done by elegantly combining core functions? Some nice chain of juxt / apply / some maybe?
Alternatively, do you know any Clojure library which implements the above (or similar)?
As Alan Thompson mentions, cond-> is a fairly standard way of implicitly getting the "else" part to be "return the value unchanged" these days. It doesn't really address your hope of being higher-order, though. I have another reason to dislike cond->: I think (and argued when cond-> was being invented) that it's a mistake for it to thread through each matching test, instead of just the first. It makes it impossible to use cond-> as an analogue to cond.
If you agree with me, you might try flatland.useful.fn/fix, or one of the other tools in that family, which we wrote years before cond-> [1].
to-fix is exactly your fork, except that it can handle multiple clauses and accepts constants as well as functions (for example, maybe you want to add 10 to other even numbers but replace 0 with 20):
(map (to-fix zero? 20, even? #(+ % 10)) xs)
It's easy to replicate the behavior of cond-> using fix, but not the other way around, which is why I argue that fix was the better design choice.
[1] Apparently we're just a couple of weeks away from the 10-year anniversary of the final version of fix. How time flies.
I agree that it could be very useful to have some kind of higher-order functional construct for this, but I am not aware of any such construct. It is true that you could implement a higher-order fork function, but its usefulness would be quite limited, and the same effect can easily be achieved with if or the cond-> macro, as suggested in the other answers.
What comes to mind, however, are transducers. You could fairly easily implement a forking transducer that can be composed with other transducers to build powerful and concise sequence processing algorithms.
The implementation could look like this:
(defn forking [pred true-transducer false-transducer]
(fn [step]
(let [true-step (true-transducer step)
false-step (false-transducer step)]
(fn
([] (step))
([dst x] ((if (pred x) true-step false-step) dst x))
([dst] dst))))) ;; flushing not performed.
And this is how you would use it in your example:
(eduction (forking even?
(map #(+ 10 %))
identity)
(range 20))
;; => (10 1 12 3 14 5 16 7 18 9 20 11 22 13 24 15 26 17 28 19)
But it can also be composed with other transducers to build more complex sequence processing algorithms:
(into []
(comp (forking even?
(comp (drop 4)
(map #(+ 10 %)))
(comp (filter #(< 10 %))
(map #(vector % % %))
cat))
(partition-all 3))
(range 20))
;; => [[18 20 11] [11 11 22] [13 13 13] [24 15 15] [15 26 17] [17 17 28] [19 19 19]]
Another way to define fork (with three inputs) could be:
(defn fork [pred then else]
(comp
(partial apply apply)
(juxt (comp {true then, false else} pred) list)))
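For example, applied to the question's case (single-argument use):
(mapv (fork even? #(+ % 10) identity) (range 5))
;=> [10 1 12 3 14]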
Notice that in this version the supplied functions and the resulting function can receive zero or more arguments. But let's take a more structured approach and define some other useful combinators. Let's start by defining pick, which corresponds to the categorical coproduct (sum) of morphisms:
(defn pick [actions]
(fn [[tag val]]
((actions tag) val)))
;alternatively
(defn pick [actions]
(comp
(partial apply apply)
(juxt (comp actions first) rest)))
E.g. (mapv (pick [inc dec]) [[0 1] [1 1]]) gives [2 0]. Using pick we can define switch which works like case:
(defn switch [test actions]
(comp
(pick actions)
(juxt test identity)))
E.g. (mapv (switch #(mod % 3) [inc dec -]) [3 4 5]) gives [4 3 -5]. Using switch we can easily define fork:
(defn fork [pred then else]
(switch pred {true then, false else}))
E.g. (mapv (fork even? inc dec) [0 1]) gives [1 0]. Finally, using fork let's also define fork* which receives zero or more predicate and action pairs and works like cond:
(defn fork* [& args]
(->> args
(partition 2)
reverse
(reduce
(fn [else [pred then]]
(fork pred then else))
identity)))
;equivalently
(defn fork* [& args]
(->> args
(partition 2)
(map (partial apply (partial partial fork)))
(apply comp)
(#(% identity))))
E.g. (mapv (fork* neg? -, even? inc) [-1 0 1]) gives [1 1 1].
Depending on the details, it is often easiest to accomplish this goal using the cond-> macro and friends:
(let [myfn (fn [val]
             (cond-> val
               (even? val) (+ 10)))]
  (mapv myfn (range 5)))
;=> [10 1 12 3 14]
There is a variant in the Tupelo library that is sometimes helpful:
(mapv #(cond-it-> %
(even? it) (+ it 10))
(range 5))
that allows you to use the special symbol it as you thread the value through multiple stages.
As the examples show, you have the option to define and name the transformer function (my favorite), or to use the function literal syntax #(...).
Firstly, sorry if the terminology I'm using is incorrect; I'm still very new to Clojure and the paradigm shift is taking some time.
I am trying to write a function which takes the first item from a set which is greater than twelve (a 'teen' number). I can write this when I'm just applying it directly to a set, but I'm unsure how to write the function within a map. Can anyone point me in the right direction?
I tried a few things, typically along the lines of (partial (first (filter (partial < 12)))) but without any luck at all so far, and researching definitions of filter/partial has not yet proved fruitful.
TL;DR
I want to have, as a value in a map, a function which takes the first item in a list which is greater than 12.
(def test-set [1, 8, 15, 22, 29])
(def some-functions {
:first first
:last last
:teenth "I don't know what to put here"
})
(first (filter (partial < 12) test-set))
One way is to use an anonymous function when defining the map (https://riptutorial.com/clojure/example/15544/defining-anonymous-functions)
> (def some-functions {
:first first
:last last
:teenth #(first (filter (partial < 12) %))})
> ((:first some-functions) test-set)
1
> ((:last some-functions) test-set)
29
> ((:teenth some-functions) test-set)
15
Of course you could also have explicitly defined your function and used it in your map:
> (defn teenth [coll] (first (filter (partial < 12) coll)))
> (def some-functions {
:first first
:last last
:teenth teenth})
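Either way, the lookup behaves the same:
> ((:teenth some-functions) test-set)
15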
(As an aside, be careful with the word set. In clojure sets are unordered collections of unique values. https://clojure.org/reference/data_structures)
I found a small improvement to jas' answer.
The functions
#(first (filter (partial < 12) %))
and
(defn teenth [coll] (first (filter (partial < 12) coll)))
use a combination of first and filter.
This is a use case for the some function as stated on clojuredocs.org [1].
I would propose to refactor the functions to
(fn [coll] (some #(if (< 12 %) %) coll))
and
(defn teenth [coll] (some #(if (< 12 %) %) coll))
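For example:
(teenth [1 8 15 22 29]) ;=> 15
(teenth [1 2 3])        ;=> nil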
Due to the slightly more complex predicate function #(if (< 12 %) %) we cannot use partial anymore.
Please be aware that you cannot create nested anonymous functions by using the reader macro #() [2]. In this case, you have to use fn to create the nested anonymous function as shown above.
Actually, you could use fn twice, but in my opinion it's not readable anymore:
(fn [coll] (some (fn [e] (if (< 12 e) e)) coll))
[1] https://clojuredocs.org/clojure.core/some#example-542692c6c026201cdc326940
[2] https://clojure.org/reference/reader#_dispatch
I have the following bit of code that produces the correct results:
(ns scratch.core
  (:require [clojure.string :as str]))
(defn numberify [str]
(vec (map read-string (str/split str #" "))))
(defn process [acc sticks]
(let [smallest (apply min sticks)
cuts (filter #(> % 0) (map #(- % smallest) sticks))]
(if (empty? cuts)
acc
(process (conj acc (count cuts)) cuts))))
(defn print-result [[x & xs]]
(prn x)
(if (seq xs)
(recur xs)))
(let [input "8\n1 2 3 4 3 3 2 1"
lines (str/split-lines input)
length (read-string (first lines))
inputs (first (rest lines))]
(print-result (process [length] (numberify inputs))))
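For reference, with this input the snippet prints one count per line:
8
6
4
1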
The process function above calls itself recursively until the sequence of remaining sticks is empty (the empty? check).
I am curious to know if I could have used something like take-while or some other technique to make the code more succinct?
If ever I need to do some work on a sequence until it is empty then I use recursion but I can't help thinking there is a better way.
Your core problem can be described as:
1. stop if the count of sticks is zero
2. accumulate the count of sticks
3. subtract the smallest stick from each of the sticks
4. filter the positive sticks
5. go back to 1.
Identify the smallest sub-problem as steps 3 and 4 and put a box around it
(defn cuts [sticks]
(let [smallest (apply min sticks)]
(filter pos? (map #(- % smallest) sticks))))
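For example, with the sticks from the question:
(cuts [1 2 3 4 3 3 2 1])
;-> (1 2 3 2 2 1)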
Notice that nothing else happens to sticks between steps 5 and 3: cuts is a function from sticks to sticks, so use iterate to put a box around that:
(defn process [sticks]
(->> (iterate cuts sticks)
;; ----- 8< -------------------
This gives an infinite seq of sticks, (cuts sticks), (cuts (cuts sticks)) and so on
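A quick REPL check of the first few elements:
(take 4 (iterate cuts [1 2 3 4 3 3 2 1]))
;-> ([1 2 3 4 3 3 2 1] (1 2 3 2 2 1) (1 2 1 1) (1))
;; note: the full process below stops (via take-while) before cuts ever sees an empty seq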
Incorporate step 1 and 2
(defn process [sticks]
(->> (iterate cuts sticks)
(map count) ;; count each sticks
(take-while pos?))) ;; accumulate while counts are positive
(process [1 2 3 4 3 3 2 1])
;-> (8 6 4 1)
Behind the scenes this algorithm hardly differs from the one you posted, since lazy seqs are a delayed implementation of recursion. It is more idiomatic and more modular, though, and it uses take-while for termination, which adds to its expressiveness. It also doesn't require you to pass in the initial count, and it does the right thing if sticks is empty. I hope it is what you were looking for.
I think the way your code is written is a very lispy way of doing it. Certainly there are many, many examples in The Little Schemer that follow this format of reduction/recursion.
To replace recursion, I usually look for a solution that involves using higher order functions, in this case reduce. It replaces the min calls each iteration with a single sort at the start.
(defn process [sticks]
(drop-last (reduce (fn [a i]
(let [n (- (last a) (count i))]
(conj a n)))
[(count sticks)]
(partition-by identity (sort sticks)))))
(process [1 2 3 4 3 3 2 1])
=> (8 6 4 1)
I've changed the algorithm to fit reduce by grouping the same numbers after sorting, and then counting each group and reducing the count size.
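To see the grouping step in isolation:
(partition-by identity (sort [1 2 3 4 3 3 2 1]))
=> ((1 1) (2 2) (3 3 3) (4))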
Below is my answer for 4clojure Problem 108
I'm able to pass the first three tests but the last test times out. The code runs really, really slowly on this last test. What exactly is causing this?
((fn [& coll] (loop [coll coll m {}]
(do
(let [ct (count coll)
ns (mapv first coll)
m' (reduce #(update-in %1 [%2] (fnil inc 0)) m ns)]
(println m')
(if (some #(<= ct %) (mapv m' ns))
(apply min (map first (filter #(>= (val %) ct) m')))
(recur (mapv rest coll) m'))))))
(map #(* % % %) (range)) ;; perfect cubes
(filter #(zero? (bit-and % (dec %))) (range)) ;; powers of 2
(iterate inc 20))
You are gathering the next value from every input on every iteration: (recur (mapv rest coll) m').
One of your inputs generates values extremely slowly and accelerates to very high values very quickly: (filter #(zero? (bit-and % (dec %))) (range)).
Your code is spending most of its time discovering powers of two by incrementing by one and testing the bits.
You don't need a map of all inputs with counts of occurrences. You don't need to find the next value for items that are not the lowest found so far. I won't post a solution since it is an exercise, but eliminating the lowest non matched value on each iteration should be a start.
In addition to the other good answers here, you're doing a bunch of math, but all numbers are boxed as objects rather than being used as primitives. Many tips for doing this better here.
This is a really inefficient way of counting powers of 2:
(filter #(zero? (bit-and % (dec %))) (range))
This is essentially counting from 0 to infinity, testing each number along the way to see if it's a power of two. The further you get into the sequence, the more expensive each call to rest is.
Given that it's the test input, and you can't change it, I think you need to re-think your approach. Rather than calling (mapv rest coll), you probably only want to call rest on the sequence with the smallest first value.
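As a sketch of that idea (the helper name advance-smallest is made up here, and this is deliberately not a full solution), you could advance only the sequences whose head equals the current minimum:
(defn advance-smallest [seqs]
  ;; rest only the seqs whose first element is the current minimum;
  ;; leave the others untouched so their expensive lazy tails are not realized.
  (let [lowest (apply min (map first seqs))]
    (map #(if (= (first %) lowest) (rest %) %) seqs)))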
I wrote a piece of code to count the leading hash (#) characters of a line, much like a heading line in Markdown:
### Line one -> return 3
######## Line two -> return 6 (only care about the first 6 characters)
Version 1
(defn
count-leading-hash
[line]
(let [cnt (count (take-while #(= % \#) line))]
(if (> cnt 6) 6 cnt)))
Version 2
(defn
count-leading-hash
[line]
(loop [cnt 0]
(if (and (= (.charAt line cnt) \#) (< cnt 6))
(recur (inc cnt))
cnt)))
I used time to measure both implementations and found that the first version, based on take-while, is about 2x faster than version 2. Taking "###### Line one" as input, version 1 took 0.09 msecs and version 2 took about 0.19 msecs.
Question 1. Is it recur that slows down the second implementation?
Question 2. Version 1 is closer to functional programming paradigm , is it?
Question 3. Which one do you prefer? Why? (You're welcome to write your own implementation.)
--Update--
After reading the Clojure docs, I came up with a new version of this function, which I think is much clearer.
(defn
count-leading-hash
[line]
(->> line (take 6) (take-while #(= \# %)) count))
1. IMO it isn't useful to take time measurements for such small pieces of code.
2. Yes, version 1 is more functional.
3. I prefer version 1 because it is easier to spot errors, and because it is less code, thus less cost to maintain.
I would write the function like this:
(defn count-leading-hash [line]
(count (take-while #{\#} (take 6 line))))
1. No, it's the reflection used to invoke .charAt. Call (set! *warn-on-reflection* true) before creating the function and you'll see the warning (a hinted sketch follows below).
2. Insofar as it uses HOFs, sure.
3. The first, though its (if (> cnt 6) 6 cnt) is better written as (min 6 cnt).
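For illustration, here is a type-hinted sketch (count-leading-hash-hinted is a made-up name, and it adds a bounds check the original omits) that avoids the reflective call:
(set! *warn-on-reflection* true)
(defn count-leading-hash-hinted [^String line]
  ;; the ^String hint lets the compiler resolve .length and .charAt directly
  (loop [cnt 0]
    (if (and (< cnt 6)
             (< cnt (.length line))
             (= (.charAt line cnt) \#))
      (recur (inc cnt))
      cnt)))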
1: No. recur is pretty fast. For every function you call, there is a bit of overhead and "noise" from the VM: the REPL needs to parse and evaluate your call for example, or some garbage collection might happen. That's why benchmarks on such tiny bits of code don't mean anything.
Compare with:
(defn
count-leading-hash
[line]
(let [cnt (count (take-while #(= % \#) line))]
(if (> cnt 6) 6 cnt)))
(defn
count-leading-hash2
[line]
(loop [cnt 0]
(if (and (= (.charAt line cnt) \#) (< cnt 6))
(recur (inc cnt))
cnt)))
(def lines ["### Line one" "######## Line two"])
(time (dorun (repeatedly 10000 #(dorun (map count-leading-hash lines)))))
;; "Elapsed time: 620.628 msecs"
;; => nil
(time (dorun (repeatedly 10000 #(dorun (map count-leading-hash2 lines)))))
;; "Elapsed time: 592.721 msecs"
;; => nil
No significant difference.
2: Using loop/recur is not idiomatic in this instance; it's best to use it only when you really need it and use other available functions when you can. There are many useful functions that operate on collections/sequences; check ClojureDocs for a reference and examples. In my experience, people with imperative programming skills who are new to functional programming use loop/recur a lot more than those who have a lot of Clojure experience; loop/recur can be a code smell.
3: I like the first version better. There are lots of different approaches:
;; more expensive, because it iterates n times, where n is the number of #'s
(defn count-leading-hash [line]
(min 6 (count (take-while #(= \# %) line))))
;; takes only at most 6 characters from line, so less expensive
(defn count-leading-hash [line]
(count (take-while #(= \# %) (take 6 line))))
;; instead of an anonymous function, you can use `partial`
(defn count-leading-hash [line]
(count (take-while (partial = \#) (take 6 line))))
edit:
How to decide when to use partial vs an anonymous function?
In terms of performance it doesn't matter, because (partial = \#) evaluates to (fn [& args] (apply = \# args)), while #(= \# %) translates to (fn [arg] (= \# arg)). Both are very similar, but partial gives you a function that accepts an arbitrary number of arguments. I'd say use whichever is easier to read, or partial if you need a function that takes an arbitrary number of arguments.
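A small REPL illustration of that difference:
((partial = \#) \# \# \#)  ;=> true (any number of arguments)
(#(= \# %) \#)             ;=> true (exactly one argument)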
Micro-benchmarks on the JVM are almost always misleading, unless you really know what you're doing. So, I wouldn't put too much weight on the relative performance of your two solutions.
The first solution is more idiomatic. You only really see explicit loop/recur in Clojure code when it's the only reasonable alternative. In this case, clearly, there is a reasonable alternative.
Another option, if you're comfortable with regular expressions:
(defn count-leading-hash [line]
(count (or (re-find #"^#{1,6}" line) "")))
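For example, with the inputs from the question:
(count-leading-hash "### Line one")      ;=> 3
(count-leading-hash "######## Line two") ;=> 6
(count-leading-hash "no leading hashes") ;=> 0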