Clojure take-while with logical and - clojure

I am learning Clojure and trying to solve Project Euler (http://projecteuler.net/) problems with it.
The second problem asks for the sum of the even-valued terms in the Fibonacci sequence whose values do not exceed four million.
I've tried several approaches and would prefer the following one if I could find where it's broken. Right now it returns 0. I'm pretty sure the problem is in the take-while condition, but I can't figure it out.
(reduce +
        (take-while (and even? (partial < 4000000))
                    (map first (iterate (fn [[a b]] [b (+ a b)]) [0 1]))))

To compose multiple predicates in this way, you can use every-pred:
(every-pred even? (partial > 4000000))
The return value of this expression is a function that takes an argument and returns true if it is both even and less than 4000000, false otherwise.
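For instance, calling the composed predicate at the REPL (my own illustration, not part of the original answer):
user> ((every-pred even? (partial > 4000000)) 2)
true
user> ((every-pred even? (partial > 4000000)) 5000000)
false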

user> ((partial < 4000000) 1)
false
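For contrast (my addition, not in the original answer), flipping the comparison gives the intended test:
user> ((partial > 4000000) 1)
true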
partial puts the static arguments first and the free ones at the end, so it's building the opposite of what you want. It is essentially producing #(< 4000000 %) instead of #(< % 4000000) as you intended, so just change the < to >:
user> (reduce +
              (take-while (and even? (partial > 4000000))
                          (map first (iterate (fn [[a b]] [b (+ a b)]) [0 1]))))
9227464
or perhaps it would be clearer to use the anonymous function form directly:
user> (reduce +
              (take-while (and even? #(< % 4000000))
                          (map first (iterate (fn [[a b]] [b (+ a b)]) [0 1]))))
9227464
(Note that in both snippets the and is evaluated before take-while ever runs; since even? is truthy, the and expression simply returns the second predicate, so even? is silently ignored. That is why the result above, 9227464, is the sum of all Fibonacci numbers below four million rather than just the even ones.)
Now that we have covered a bit about partial, let's break down a working solution. I'll use the thread-last macro ->> to show each step separately.
user> (->> (iterate (fn [[a b]] [b (+ a b)]) [0 1]) ;; start with the fibs
           (map first)                   ;; keep only the answer
           (take-while #(< % 4000000))   ;; stop when they get too big
           (filter even?)                ;; take only the even? ones
           (reduce +))                   ;; sum it all together
4613732
From this we can see that we don't actually want to compose the predicates even? and less-than-4000000 into a single take-while predicate, because take-while would stop as soon as either condition was false, leaving only the number zero. Rather, we want to use one of the predicates as a limit and the other as a filter.
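To see that, here is what an actual composition with every-pred produces (my own check, not part of the answer): the first Fibonacci number, 0, passes both predicates, but the next one, 1, is odd, so take-while stops immediately:
user> (take-while (every-pred even? #(< % 4000000))
                  (map first (iterate (fn [[a b]] [b (+ a b)]) [0 1])))
(0)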

Related

Higher-order if-then-else in Clojure?

I often have to run my data through a function if the data fulfill certain criteria. Typically, both the function f and the criteria checker pred are parameterized to the data. For this reason, I find myself wishing for a higher-order if-then-else which knows neither f nor pred.
For example, assume I want to add 10 to all even integers in (range 5). Instead of
(map #(if (even? %) (+ % 10) %) (range 5))
I would prefer to have a helper, let's call it fork, and do this:
(map (fork even? #(+ % 10)) (range 5))
I could go ahead and implement fork as a function. It would look like this:
(defn fork
  ([pred thenf elsef]
   #(if (pred %) (thenf %) (elsef %)))
  ([pred thenf]
   (fork pred thenf identity)))
Can this be done by elegantly combining core functions? Some nice chain of juxt / apply / some maybe?
Alternatively, do you know any Clojure library which implements the above (or similar)?
As Alan Thompson mentions, cond-> is a fairly standard way of implicitly getting the "else" part to be "return the value unchanged" these days. It doesn't really address your hope of being higher-order, though. I have another reason to dislike cond->: I think (and argued when cond-> was being invented) that it's a mistake for it to thread through each matching test, instead of just the first. It makes it impossible to use cond-> as an analogue to cond.
If you agree with me, you might try flatland.useful.fn/fix, or one of the other tools in that family, which we wrote years before cond->.[1]
to-fix is exactly your fork, except that it can handle multiple clauses and accepts constants as well as functions (for example, maybe you want to add 10 to other even numbers but replace 0 with 20):
(map (to-fix zero? 20, even? #(+ % 10)) xs)
It's easy to replicate the behavior of cond-> using fix, but not the other way around, which is why I argue that fix was the better design choice.
[1] Apparently we're just a couple weeks away from the 10-year anniversary of the final version of fix. How time flies.
I agree that it could be very useful to have some kind of higher-order functional construct for this, but I am not aware of any such construct. It is true that you could implement a higher-order fork function, but its usefulness would be quite limited, and the same effect can easily be achieved using if or the cond-> macro, as suggested in the other answers.
What comes to mind, however, are transducers. You could fairly easily implement a forking transducer that can be composed with other transducers to build powerful and concise sequence processing algorithms.
The implementation could look like this:
(defn forking [pred true-transducer false-transducer]
  (fn [step]
    (let [true-step (true-transducer step)
          false-step (false-transducer step)]
      (fn
        ([] (step))
        ([dst x] ((if (pred x) true-step false-step) dst x))
        ([dst] dst))))) ;; flushing not performed
And this is how you would use it in your example:
(eduction (forking even?
                   (map #(+ 10 %))
                   identity)
          (range 20))
;; => (10 1 12 3 14 5 16 7 18 9 20 11 22 13 24 15 26 17 28 19)
But it can also be composed with other transducers to build more complex sequence processing algorithms:
(into []
      (comp (forking even?
                     (comp (drop 4)
                           (map #(+ 10 %)))
                     (comp (filter #(< 10 %))
                           (map #(vector % % %))
                           cat))
            (partition-all 3))
      (range 20))
;; => [[18 20 11] [11 11 22] [13 13 13] [24 15 15] [15 26 17] [17 17 28] [19 19 19]]
Another way to define fork (with three inputs) could be:
(defn fork [pred then else]
  (comp
    (partial apply apply)
    (juxt (comp {true then, false else} pred) list)))
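For instance (my own check, not part of the answer), this version also works when the branches take more than one argument:
((fork even? inc dec) 2) ;=> 3
((fork = + -) 2 2)       ;=> 4
((fork = + -) 2 3)       ;=> -1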
Notice that in this version the inputs and output can receive zero or more arguments. But let's take a more structured approach, defining some other useful combinators. Let's start by defining pick which corresponds to the categorical coproduct (sum) of morphisms:
(defn pick [actions]
  (fn [[tag val]]
    ((actions tag) val)))

;alternatively
(defn pick [actions]
  (comp
    (partial apply apply)
    (juxt (comp actions first) rest)))
E.g. (mapv (pick [inc dec]) [[0 1] [1 1]]) gives [2 0]. Using pick we can define switch which works like case:
(defn switch [test actions]
  (comp
    (pick actions)
    (juxt test identity)))
E.g. (mapv (switch #(mod % 3) [inc dec -]) [3 4 5]) gives [4 3 -5]. Using switch we can easily define fork:
(defn fork [pred then else]
  (switch pred {true then, false else}))
E.g. (mapv (fork even? inc dec) [0 1]) gives [1 0]. Finally, using fork let's also define fork* which receives zero or more predicate and action pairs and works like cond:
(defn fork* [& args]
  (->> args
       (partition 2)
       reverse
       (reduce
         (fn [else [pred then]]
           (fork pred then else))
         identity)))

;equivalently
(defn fork* [& args]
  (->> args
       (partition 2)
       (map (partial apply (partial partial fork)))
       (apply comp)
       (#(% identity))))
E.g. (mapv (fork* neg? -, even? inc) [-1 0 1]) gives [1 1 1].
Depending on the details, it is often easiest to accomplish this goal using the cond-> macro and friends:
(let [myfn (fn [val]
             (cond-> val
               (even? val) (+ 10)))]
  (mapv myfn (range 5)))
with the result
[10 1 12 3 14]
There is a variant in the Tupelo library that is sometimes helpful:
(mapv #(cond-it-> %
         (even? it) (+ it 10))
      (range 5))
that allows you to use the special symbol it as you thread the value through multiple stages.
As the examples show, you have the option to define and name the transformer function (my favorite), or to use the function literal syntax #(...).

How to print each elements of a hash map list using map function in clojure?

I am constructing a list of hash maps which is then passed to another function. When I try to print each hash map from the list using map, it does not work. I am able to print the full list or get the first element, etc.
(defn m [a]
  (println a)
  (map #(println %) a))
The following works from the REPL only:
(m (map #(hash-map :a %) [1 2 3]))
But from the program that I load using load-file it does not work: I see a printed, but not its individual elements. What's wrong?
In Clojure, transformation functions like map return a lazy sequence. So (map #(println %) a) returns a lazy sequence; only when that sequence is consumed is the mapping applied, and only then is the println side effect visible. At the REPL the returned sequence is printed, which forces it, but when the file is loaded nothing consumes the return value, so the individual elements are never printed.
If the purpose of the function is a side effect, like printing, you need to evaluate the transformation eagerly. The functions dorun and doall do exactly that:
(def a [1 2 3])

(dorun (map #(println %) a))
; prints each element, returns nil

(doall (map #(println %) a))
; prints each element, returns the realized sequence (here (nil nil nil), since println returns nil)
If you actually don't want to map but only want the side effect, you can use doseq. It is intended for iterating purely for side effects:
(def a [1 2 3])

(doseq [i a]
  (println i))
If your goal is simply to call an existing function on every item in a collection in order, ignoring the returned values, then you should use run!:
(run! println [1 2 3])
;; 1
;; 2
;; 3
;;=> nil
In some more complicated cases it may be preferable to use doseq as #Gamlor suggests, but in this case, doseq only adds boilerplate.
I recommend using tail recursion:
(defn printList [a]
  (let [head (first a)
        tail (rest a)]
    (when (not (nil? head))
      (println head)
      (recur tail))))   ; recur keeps the traversal from growing the stack

How to implement map using reduce in Clojure

In the book Clojure for the Brave and True at the end of the section covering reduce there's a challenge:
If you want an exercise that will really blow your hair back, try implementing map using reduce
It turns out that this was a lot harder (at least for me, a Clojure beginner) than I thought it would be. After quite a few hours I came up with this:
(defn map-as-reduce
  [f coll]
  (reduce #(cons (f %2) %1) '() (reverse coll)))
Is there a better way to do this? I'm particularly frustrated by the fact that I have to reverse the input collection in order for this to work correctly. It seems somehow inelegant!
Remember that you can efficiently insert at the end of a vector:
(defn map' [f coll]
  (reduce #(conj %1 (f %2)) [] coll))
Example:
(map' inc [1 2 3])
;=> [2 3 4]
One difference between this map' and the original map is that the original map returns an ISeq instead of just a Seqable:
(seq? (map inc [1 2 3]))
;=> true
(seq? (map' inc [1 2 3]))
;=> false
You could remedy this by composing the above implementation of map' with seq:
(defn map' [f coll]
  (seq (reduce #(conj %1 (f %2)) [] coll)))
The most important difference now is that, while the original map is lazy, this map' is eager, because reduce is eager.
Just for fun: map actually accepts more than one collection as an argument. Here is an extended implementation:
(defn map-with-reduce
  ([f coll] (seq (reduce #(conj %1 (f %2)) [] coll)))
  ([f coll & colls]
   (let [colls (cons coll colls)]
     (map-with-reduce (partial apply f)
                      (partition (count colls)
                                 (apply interleave colls))))))
In the REPL:
user> (map-with-reduce inc [1 2 3])
(2 3 4)
user> (map-with-reduce + [1 2 3] [4 5] [6 7 8])
(11 14)
The real map calls seq on its collection argument(s) and returns a lazy seq, so maybe this gets it a little closer to the real map?
(defn my-map
  [f coll]
  (lazy-seq (reduce #(conj %1 (f %2)) [] (seq coll))))
;; note: the lazy-seq wrapper only defers the eager reduce until the result is
;; first consumed; it does not make the mapping itself lazy
I would have added that as a comment, but I don't have the reputation. :)
You can use conj to append to a vector instead of prepending to a list:
(defn my-map [f coll]
  (reduce (fn [result item]
            (conj result (f item)))
          [] coll))

(my-map inc [1 2 3]) => [2 3 4]
It is more common to reverse the result, not the input. This is a common idiom when handling singly-linked lists in a recursive fashion. It preserves linear complexity with this data structure.
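For example, a minimal sketch of that reverse-the-result idiom (my own illustration; map-rev is a hypothetical name):
(defn map-rev [f coll]
  ;; accumulate with cons (building the list backwards), then reverse once at the end
  (reverse (reduce #(cons (f %2) %1) '() coll)))

(map-rev inc [1 2 3]) ;=> (2 3 4)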
You might want to offer different implementations for other seqs, e.g., vectors, maybe based on conj instead of cons.
I would not worry too much about elegance with this kind of exercise.
As has already been pointed out, you do not have to reverse the input.
cons adds an item to the beginning of a sequence (even when given a vector, the result is a seq with the new item at the front), whereas conj always adds in the most natural, fastest way for the collection: at the front for lists and at the end for vectors.
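A quick REPL contrast (my own addition):
(cons 0 [1 2 3])  ;=> (0 1 2 3)   always a seq, item at the front
(conj [1 2 3] 0)  ;=> [1 2 3 0]   vector: added at the end
(conj '(1 2 3) 0) ;=> (0 1 2 3)   list: added at the front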
I noticed that most suggested answers use 'reduce' so allow me to suggest this one using mainly recursion:
(defn my-map [f coll]
  (loop [f f coll coll acc []]
    (if (empty? coll)
      acc
      (recur f (rest coll) (conj acc (f (first coll)))))))

Why is my Clojure code running so slowly?

Below is my answer for 4clojure Problem 108
I'm able to pass the first three tests but the last test times out. The code runs really, really slowly on this last test. What exactly is causing this?
((fn [& coll]
   (loop [coll coll m {}]
     (do
       (let [ct (count coll)
             ns (mapv first coll)
             m' (reduce #(update-in %1 [%2] (fnil inc 0)) m ns)]
         (println m')
         (if (some #(<= ct %) (mapv m' ns))
           (apply min (map first (filter #(>= (val %) ct) m')))
           (recur (mapv rest coll) m'))))))
 (map #(* % % %) (range))                       ;; perfect cubes
 (filter #(zero? (bit-and % (dec %))) (range))  ;; powers of 2
 (iterate inc 20))
You are gathering the next value from every input on every iteration: (recur (mapv rest coll) m').
One of your inputs generates values extremely slowly and accelerates to very high values very quickly: (filter #(zero? (bit-and % (dec %))) (range)).
Your code is spending most of its time discovering powers of two by incrementing by one and testing the bits.
You don't need a map of all inputs with counts of occurrences. You don't need to find the next value for items that are not the lowest found so far. I won't post a solution since it is an exercise, but eliminating the lowest non-matched value on each iteration should be a start.
In addition to the other good answers here, you're doing a bunch of math, but all numbers are boxed as objects rather than being used as primitives. Many tips for doing this better here.
This is a really inefficient way of counting powers of 2:
(filter #(zero? (bit-and % (dec %))) (range))
This is essentially counting from 0 to infinity, testing each number along the way to see if it's a power of two. The further you get into the sequence, the more expensive each call to rest is.
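For comparison (my own sketch, not from the answer), powers of two can be generated directly and lazily, with each element produced in constant time:
(take 10 (iterate #(* 2 %) 1))
;=> (1 2 4 8 16 32 64 128 256 512)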
Given that it's the test input, and you can't change it, I think you need to re-think your approach. Rather than calling (mapv rest coll), you probably only want to call rest on the sequence with the smallest first value.

Clojure list (seq) traversal to compare items to other items

Say I have a list (a b c d e), I'm trying to figure out a "lazy" and Clojure-idiomatic way of producing a list or seq of each item with each other item, such as ((a b) (a c) (a d) (a e) (b c) (b d) (b e) (c d) (c e) (d e)).
Clojure's for doesn't seem to allow this; it just produces one item as it goes through a list and doesn't give access to a sub-list. The closest I've come so far is to turn the original list into a vector and have a for statement that iterates over the count of the vector and grabs indexed items,
(for [i (range vector-count) j (range i vector-count)]
  ...
but I hope that there's a better way.
You want combinations. There's a function to give you a lazy sequence of combinations right here in clojure-contrib.
user> (combinations [:a :b :c :d :e] 2)
((:a :b) (:a :c) (:a :d) (:a :e) (:b :c) (:b :d) (:b :e) (:c :d) (:c :e) (:d :e))
(Unfortunately, the monolithic clojure-contrib repo containing that file is deprecated in favor of splitting contrib up into smaller separate repos, and clojure.contrib.combinatorics doesn't seem to have made the transition yet, so there's no easy way currently to install that library, but you can snag the code from github if nothing else.)
FWIW, I tried writing this without looking at the code in contrib. I think my code is much easier to understand, and in my simple-minded benchmark it's more than twice as fast. It's available at https://gist.github.com/1042047, and reproduced below for convenience:
(defn combinations [n coll]
  (if (= 1 n)
    (map list coll)
    (lazy-seq
      (when-let [[head & tail] (seq coll)]
        (concat (for [x (combinations (dec n) tail)]
                  (cons head x))
                (combinations n tail))))))
user> (require '[clojure.contrib.combinatorics :as combine])
nil
user> (time (last (user/combinations 4 (range 100))))
"Elapsed time: 4379.959957 msecs"
(96 97 98 99)
user> (time (last (combine/combinations (range 100) 4)))
"Elapsed time: 10913.170605 msecs"
(96 97 98 99)
I strongly prefer the [n coll] argument order rather than [coll n]: Clojure likes the "important" argument to come last, especially for functions dealing with seqs. Mostly this is for ease of combination with ->> in scenarios like (->> (my-list) (filter even?) (take 10) (combinations 8)).
Why use range and index grabbing in the for loop?
(let [myseq (list :a :b :c :d)]
  (for [a myseq b myseq] (list a b)))
works, though note that this produces every ordered pair, including (:a :a) and both (:a :b) and (:b :a), rather than only the combinations the question asks for.