I have a function
(defn goneSeq [inseq uptil]
  (loop [counter 0 newSeq [] orginSeq inseq]
    (if (== counter uptil)
      newSeq
      (recur (inc counter) (conj newSeq (first orginSeq)) (rest orginSeq)))))
(defn insert [sorted-seq n]
  (loop [currentSeq sorted-seq counter 0]
    (cond (empty? currentSeq) (concat sorted-seq (vector n))
          (<= n (first currentSeq)) (concat (goneSeq sorted-seq counter) (vector n) currentSeq)
          :else (recur (rest currentSeq) (inc counter)))))
that takes a sorted sequence and inserts the number n at its appropriate position. For example, (insert [1 3 4] 2) returns [1 2 3 4].
Now I want to use this function with reduce to sort a given sequence so something like:
(reduce (insert seq n) givenSeq)
What is the correct way to achieve this?
If the function works for inserting a single value, then this would work:
(reduce insert [] givenSeq)
for example:
user> (reduce insert [] [0 1 2 30.5 0.88 2.2])
(0 0.88 1 2 2.2 30.5)
Also, it should be noted that sort and sort-by are built in and are better than most hand-rolled solutions.
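For example (a quick illustration of the built-ins, not part of the original answer):
(sort [0 1 2 30.5 0.88 2.2])     ;=> (0 0.88 1 2 2.2 30.5)
(sort-by count ["ccc" "a" "bb"]) ;=> ("a" "bb" "ccc")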
May I suggest some simpler ways to do insert:
A slowish lazy way is
(defn insert [s x]
  (let [[fore aft] (split-with #(> x %) s)]
    (concat fore (cons x aft))))
A faster eager way is
(defn insert [coll x]
  (loop [fore [], coll coll]
    (if (and (seq coll) (> x (first coll)))
      (recur (conj fore (first coll)) (rest coll))
      (concat fore (cons x coll)))))
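A quick check with the same input as above (my own sanity check, not from the original answer):
(reduce insert [] [0 1 2 30.5 0.88 2.2]) ;=> (0 0.88 1 2 2.2 30.5)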
By the way, you had better put your defns in bottom-up order, if possible. Use declare if there is mutual recursion. You had me thinking your solution did not compile.
I am new to Clojure and I want to define a function pt taking as arguments a number n and a sequence s and returning all the partitions of s into n parts, i.e. its factorizations with respect to n-concatenation. For example, (pt 3 [0 1 2]) should produce:
(([] [] [0 1 2]) ([] [0] [1 2]) ([] [0 1] [2]) ([] [0 1 2] []) ([0] [] [1 2]) ([0] [1] [2]) ([0] [1 2] []) ([0 1] [] [2]) ([0 1] [2] []) ([0 1 2] [] []))
with the order being unimportant.
Specifically, I want the result to be a lazy sequence of lazy sequences of vectors.
My first attempt for such a function was the following:
(defn pt [n s]
  (lazy-seq
    (if (zero? n)
      (when (empty? s) [nil])
      ((fn split [a b]
         (concat
           (map (partial cons a) (pt (dec n) b))
           (when-let [[bf & br] (seq b)] (split (conj a bf) br))))
       [] s))))
After that, I wrote a somewhat less concise version which reduces the time complexity by avoiding useless comparisons for 1-part partitions, given below:
(defn pt [n s]
  (lazy-seq
    (if (zero? n)
      (when (empty? s) [nil])
      ((fn pt>0 [n s]
         (lazy-seq
           (if (= 1 n)
             [(cons (vec s) nil)]
             ((fn split [a b]
                (concat
                  (map (partial cons a) (pt>0 (dec n) b))
                  (when-let [[bf & br] (seq b)] (split (conj a bf) br))))
              [] s))))
       n s))))
The problem with these solutions is that, although they work, they produce a lazy sequence of (non-lazy) conses, and I suspect that quite a different approach must be taken to achieve the "inner laziness". So any corrections, suggestions, or explanations are welcome!
EDIT: After reading l0st3d's answer I thought I should make clear that I do not want a partition just to be a LazySeq but to be "really lazy", in the sense that a part is computed and held in memory only when it is requested.
For example, both of the functions given below produce LazySeqs, but only the first one produces a "really lazy" sequence.
(defn f [n]
  (if (neg? n)
    (lazy-seq nil)
    (lazy-seq (cons n (f (dec n))))))

(defn f [n]
  (if (neg? n)
    (lazy-seq nil)
    (#(lazy-seq (cons n %)) (f (dec n)))))
So mapping (partial concat [a]) or #(lazy-seq (cons a %)) instead of (partial cons a) does not solve the problem.
The cons call in your split inline fn is the only place where eagerness is being introduced. You could replace that with something that lazily constructs a list, like concat:
(defn pt [n s]
  (lazy-seq
    (if (zero? n)
      (when (empty? s) [nil])
      ((fn split [a b]
         (concat
           (map (partial concat [a]) (pt (dec n) b))
           (when-let [[bf & br] (seq b)] (split (conj a bf) br))))
       [] s))))
(every? #(= clojure.lang.LazySeq (class %)) (pt 3 [0 1 2 3])) ;; => true
But, reading the code, I feel like it's fairly un-Clojurey, and I think that's to do with the use of recursion. Often you'd use things like reductions, partition-by, split-at and so on to do this sort of thing. I feel like there should also be a way to make this a transducer and separate out the laziness from the processing (so you can use sequence to say you want it lazily), but I haven't got time to work that out right now. I'll try and come back with a more complete answer soon.
I have written a function that uses recursion to find the number of elements in a list, and it works successfully; however, I don't particularly like the way I've written it. Now that I've written it one way, I can't seem to think of a different way of doing it.
My code is below:
(def length
  (fn [n]
    (loop [i n total 0]
      (cond (empty? i) total
            :t (recur (rest i) (inc total))))))
To me it seems like it is overcomplicated; can anyone think of another way this can be written, for comparison?
Any help greatly appreciated.
Here is a naive recursive version:
(defn my-count [coll]
  (if (empty? coll)
    0
    (inc (my-count (rest coll)))))
Bear in mind there's not going to be any tail-call optimization going on here, so for long lists the stack will overflow.
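For instance (illustrative only; the exact limit depends on your JVM's stack size):
(my-count (range 10))     ;=> 10
(my-count (range 100000)) ;; will typically throw StackOverflowError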
Here is a version using reduce:
(defn my-count [coll]
  (reduce (fn [acc x] (inc acc)) 0 coll))
Here is code showing some different solutions. Normally, you should use the built-in function count.
(def data [:one :two :three])

(defn count-loop [data]
  (loop [cnt 0
         remaining data]
    (if (empty? remaining)
      cnt
      (recur (inc cnt) (rest remaining)))))

(defn count-recursive [remaining]
  (if (empty? remaining)
    0
    (inc (count-recursive (rest remaining)))))
(defn count-imperative [data]
  (let [cnt (atom 0)]
    (doseq [elem data]
      (swap! cnt inc))
    @cnt))
;; assumes clojure.test is available, e.g. (:require [clojure.test :refer [deftest is]])
(deftest t-count
  (is (= 3 (count data)))
  (is (= 3 (count-loop data)))
  (is (= 3 (count-recursive data)))
  (is (= 3 (count-imperative data))))
Here's one that is tail-call optimized, and doesn't rely on loop. Basically the same as Alan Thompson's first one, but inner functions are the best things. (And feel more idiomatic to me.) :-)
(defn my-count [sq]
  (letfn [(inner-count [c s]
            (if (empty? s)
              c
              (recur (inc c) (rest s))))]
    (inner-count 0 sq)))
Just for completeness, here is another twist
(defn my-count
  ([data]
   (my-count data 0))
  ([data counter]
   (if (empty? data)
     counter
     (recur (rest data) (inc counter)))))
I have completed this problem on hackerrank and my solution passes most test cases but it is not fast enough for 4 out of the 11 test cases.
My solution looks like this:
(ns scratch.core
  (:require [clojure.string :as str]))

(defn ascii [char]
  (int (.charAt (str char) 0)))
(defn process [text]
  (let [parts (split-at (int (Math/floor (/ (count text) 2))) text)
        left (first parts)
        right (if (> (count (last parts)) (count (first parts)))
                (rest (last parts))
                (last parts))]
    (reduce (fn [acc i]
              (let [a (ascii (nth left i))
                    b (ascii (nth (reverse right) i))]
                (if (> a b)
                  (+ acc (- a b))
                  (+ acc (- b a)))))
            0
            (range (count left)))))
(defn print-result [[x & xs]]
  (prn x)
  (if (seq xs)
    (recur xs)))
(let [input (slurp "/Users/paulcowan/Downloads/input10.txt")
      inputs (str/split-lines input)
      length (read-string (first inputs))
      texts (rest inputs)]
  (time (print-result (map process texts))))
Can anyone give me any advice about what I should look at to make this faster?
Would using recursion instead of reduce be faster, or maybe this line is expensive:
right (if (> (count (last parts)) (count (first parts)))
        (rest (last parts))
        (last parts))
because I am calling count twice there?
You are redundantly calling reverse on every iteration of the reduce:
user=> (let [c [1 2 3]
             noisey-reverse #(doto (reverse %) println)]
         (reduce (fn [acc e] (conj acc (noisey-reverse c) e))
                 []
                 [:a :b :c]))
(3 2 1)
(3 2 1)
(3 2 1)
[(3 2 1) :a (3 2 1) :b (3 2 1) :c]
The reversed value could be calculated inside the containing let, and would then only need to be calculated once.
Also, due to the way your parts is defined, you are doing linear time lookups with each call to nth. It would be better to put parts in a vector and do indexed lookup. In fact you wouldn't need a reversed parts, and could do arithmetic based on the count of the vector to find the item to look up.
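A rough sketch of both suggestions (my own rewrite, untested against the HackerRank cases; the names text, n and half are mine): index straight into the string and compare mirrored positions, which removes the reverse and the repeated linear-time nth calls:
;; sketch only, not the original poster's code: same sum of absolute
;; char-code differences, computed by indexed lookup into the string
(defn process [^String text]
  (let [n    (count text)
        half (quot n 2)]
    (reduce (fn [acc i]
              (let [a (int (.charAt text i))           ;; char from the left half
                    b (int (.charAt text (- n 1 i)))]  ;; its mirror in the right half
                (+ acc (Math/abs (- a b)))))
            0
            (range half))))
The Math/abs call replaces the manual (if (> a b) ...) branch, and the middle character of odd-length strings is skipped, just as in the original.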
I have the following functions that check for odd parity in a sequence:
(defn countOf [a-seq elem]
  (loop [number 0 currentSeq a-seq]
    (cond (empty? currentSeq) number
          (= (first currentSeq) elem) (recur (inc number) (rest currentSeq))
          :else (recur number (rest currentSeq)))))
(defn filteredSeq [a-seq elemToRemove]
  (remove (set (vector (first a-seq))) a-seq))
(defn parity [a-seq]
  (loop [resultset [] currentSeq a-seq]
    (cond (empty? currentSeq) (set resultset)
          (odd? (countOf currentSeq (first currentSeq)))
          (recur (concat resultset (vector (first currentSeq)))
                 (filteredSeq currentSeq (first currentSeq)))
          :else (recur resultset (filteredSeq currentSeq (first currentSeq))))))
For example, (parity [1 1 1 2 2 3]) -> (1 3); that is, it picks the elements that occur an odd number of times in the sequence.
Is there a better way to achieve this?
How can this be done with Clojure's reduce function?
First, I decided to make more idiomatic versions of your code, so I could really see what it was doing:
;; idiomatic naming
;; no need to rewrite count and filter for this code
;; putting item and collection in idiomatic argument order
(defn count-of [elem a-seq]
  (count (filter #(= elem %) a-seq)))
;; idiomatic naming
;; putting item and collection in idiomatic argument order
;; actually used the elem-to-remove argument
(defn filtered-seq [elem-to-remove a-seq]
  (remove #(= elem-to-remove %) a-seq))
;; idiomatic naming
;; if you want a set, use a set from the beginning
;; destructuring rather than repeated usage of first
;; use rest to recur when the first item is guaranteed to be dropped
(defn idiomatic-parity [a-seq]
  (loop [result-set #{}
         [elem & others :as current-seq] a-seq]
    (cond (empty? current-seq)
          result-set
          (odd? (count-of elem current-seq))
          (recur (conj result-set elem) (filtered-seq elem others))
          :else
          (recur result-set (filtered-seq elem others)))))
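Checking it against the example from the question (my addition, not part of the original answer):
(idiomatic-parity [1 1 1 2 2 3]) ;=> #{1 3}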
Next, as requested, a version that uses reduce to accumulate the result:
;; mapcat allows us to return 0 or more results for each input
(defn reducing-parity [a-seq]
  (set
    (mapcat
      (fn [[k v]]
        (when (odd? v) [k]))
      (reduce (fn [result item]
                (update-in result [item] (fnil inc 0)))
              {}
              a-seq))))
But, reading over this, I notice that the reduce is just frequencies, a built-in Clojure function, and my mapcat was really just a hand-rolled keep, another built-in.
(defn most-idiomatic-parity [a-seq]
  (set
    (keep
      (fn [[k v]]
        (when (odd? v) k))
      (frequencies a-seq))))
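The intermediate step can be seen directly (my addition, for illustration):
(frequencies [1 1 1 2 2 3])           ;=> {1 3, 2 2, 3 1}
(most-idiomatic-parity [1 1 1 2 2 3]) ;=> #{1 3}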
In Clojure we can refine our code, and as we recognize places where our logic replicates built-in functionality, we can simplify the code and make it clearer. Also, there is a good chance the built-ins are better optimized than our own work-alikes.
Is there a better way to achieve this?
(defn parity [coll]
  (->> coll
       frequencies
       (filter (fn [[_ v]] (odd? v)))
       (map first)
       set))
For example,
(parity [1 1 1 2 1 2 1 3])
;#{1 3}
How can this be done with Clojure's reduce function?
We can use reduce to rewrite frequencies:
(defn frequencies [coll]
  (reduce
    (fn [acc x] (assoc acc x (inc (get acc x 0))))
    {}
    coll))
... and again to implement parity in terms of it:
(defn parity [coll]
  (let [freqs (frequencies coll)]
    (reduce (fn [s [k v]] (if (odd? v) (conj s k) s)) #{} freqs)))
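It gives the same result on the question's example (my check, not part of the original answer):
(parity [1 1 1 2 2 3]) ;=> #{1 3}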
Let's say we have a list of integers: 1, 2, 5, 13, 6, 5, 7, and I want to find the first number that repeats and return a vector of the two indices. In my sample, it's 5 at [2, 5]. What I have so far uses loop, but can I do it in a more elegant, shorter way?
(defn get-cycle
  [xs]
  (loop [[x & xs_rest] xs, indices {}, i 0]
    (if (nil? x)
      [0 i] ; Sequence is over before we found a duplicate.
      (if-let [x_index (indices x)]
        [x_index i]
        (recur xs_rest (assoc indices x i) (inc i))))))
There's no need to return the number itself, because I can get it by index and, besides, it may not always be there.
An option using list processing, but not significantly more concise:
(defn get-cycle [xs]
  (first (filter #(number? (first %))
                 (reductions
                   (fn [[m i] x] (if-let [xat (m x)] [xat i] [(assoc m x i) (inc i)]))
                   [(hash-map) 0] xs))))
Here is a version using reduced to stop consuming the sequence when you find the first duplicate:
(defn first-duplicate [coll]
  (reduce (fn [acc [idx x]]
            (if-let [v (get acc x)]
              (reduced (conj v idx))
              (assoc acc x [idx])))
          {} (map-indexed #(vector % %2) coll)))
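On the sequence from the question this stops as soon as the second 5 is seen (my check, not part of the original answer):
(first-duplicate [1 2 5 13 6 5 7]) ;=> [2 5]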
I know that you have only asked for the first. Here is a fully lazy implementation with little per-step allocation overhead:
(defn dups [coll]
  (letfn [(loop-fn [idx [elem & rest] cached]
            (if elem
              (if-let [last-idx (cached elem)]
                (cons [last-idx idx]
                      (lazy-seq (loop-fn (inc idx) rest (dissoc cached elem))))
                (lazy-seq (loop-fn (inc idx) rest (assoc cached elem idx))))))]
    (loop-fn 0 coll {})))
(first (dups [1 2 5 13 6 5 7]))
=> [2 5]
Edit: Here are some Criterium benchmarks:
The answer that got accepted: 7.819269 µs
This answer (first (dups [1 2 5 13 6 5 7])): 6.176290 µs
Beschastny's: 5.841101 µs
first-duplicate: 5.025445 µs
Actually, loop is a pretty good choice unless you want to find all duplicates. Things like reduce will cause the full scan of an input sequence even when it's not necessary.
Here is my version of get-cycle:
(defn get-cycle [coll]
  (loop [i 0 seen {} coll coll]
    (when-let [[x & xs] (seq coll)]
      (if-let [j (seen x)]
        [j i]
        (recur (inc i) (assoc seen x i) xs)))))
The only difference from your get-cycle is that my version returns nil when there are no duplicates.
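For example (my check, not part of the original answer):
(get-cycle [1 2 5 13 6 5 7]) ;=> [2 5]
(get-cycle [1 2 3])          ;=> nil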
The intent of your code seems different from your description in the comments, so I'm not totally confident I understand. That said, loop/recur is definitely a valid way to approach the problem.
Here's what I came up with:
(defn get-cycle [xs]
  (loop [xs xs index 0]
    (when-let [[x & more] (seq xs)]
      (when-let [[y] (seq more)]
        (if (= x y)
          {x [index (inc index)]}
          (recur more (inc index)))))))
This will return a map of the repeated item to a vector of the two indices the item was found at.
(get-cycle [1 1 2 1 2 4 2 1 4 5 6 7])
;=> {1 [0 1]}
(get-cycle [1 2 1 2 4 2 1 4 5 6 7 7])
;=> {7 [10 11]}
(get-cycle [1 2 1 2 4 2 1 4 5 6 7 8])
;=> nil
Here's an alternative solution using sequence functions. I like this way better but whether it's shorter or more elegant is probably subjective.
(defn pairwise [coll]
  (map vector coll (rest coll)))

(defn find-first [pred xs]
  (first (filter pred xs)))

(defn get-cycle [xs]
  (find-first #(apply = (val (first %)))
              (map-indexed hash-map (pairwise xs))))
Edited based on clarification from #demi
Ah, got it. Is this what you have in mind?
(defn get-cycle [xs]
  (loop [xs (map-indexed vector xs)]
    (when-let [[[i n] & more] (seq xs)]
      (if-let [[j _] (find-first #(= n (second %)) more)]
        {n [i j]}
        (recur more)))))
I re-used find-first from my earlier sequence-based solution.
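With the input from the question this gives (my check, not part of the original answer):
(get-cycle [1 2 5 13 6 5 7]) ;=> {5 [2 5]}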