I want to get the indices of the nil elements in a vector, e.g.
[1 nil 3 nil nil 4 3 nil] => [1 3 4 7]
(defn nil-indices [vec]
  (vec (remove nil? (map
                      #(if (= (second %) nil) (first %))
                      (partition-all 2 (interleave (range (count vec)) vec))))))
Running this code results in
java.lang.IllegalArgumentException: Key must be integer
(NO_SOURCE_FILE:0)
If I leave out the (vec) call surrounding everything, it seems to work, but returns a sequence instead of a vector.
Thank you!
Try this instead:
(defn nil-indices [v]
  (vec (remove nil? (map
                      #(if (= (second %) nil) (first %))
                      (partition-all 2 (interleave (range (count v)) v))))))
Clojure is a Lisp-1: it has a single namespace for both functions and data, so inside your function vec names the input vector, not the standard-library vec function. Your (vec ...) call was therefore invoking the input vector itself, with the result sequence as its argument.
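A quick REPL sketch of what goes wrong: a vector is itself a function of its indices, so invoking it with anything other than an integer throws the error you saw.
(let [vec [10 20 30]]   ; parameter name shadows clojure.core/vec
  (vec 1))
;=> 20
(let [vec [10 20 30]]
  (vec '(0 1)))
;=> IllegalArgumentException: Key must be integer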
See the other answer for your immediate problem (you are shadowing vec), but consider using a simpler approach.
map can take multiple arguments, in which case they are passed as additional arguments to the map function, e.g. (map f c1 c2 ...) calls (f (first c1) (first c2) ...) etc, until one of the sequence arguments is exhausted.
This means your (partition-all 2 (interleave ...)) is a very verbose way of saying (map list (range) v). There is also a function map-indexed which does the same thing. However, it only takes one sequence argument, so (map-indexed f c1 c2) is not legal.
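For instance, a quick REPL sketch of map over two collections, which is what the suggestion above boils down to:
(map list (range) [1 nil 3])
;=> ((0 1) (1 nil) (2 3))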
Here is your function rewritten for clarity using map-indexed, threading, and nil?:
(defn nil-indices [v]
  ; Note: map fn called like (f range-item v-item)
  ; Not like (f (range-item v-item)) as in your code.
  (->> (map-indexed #(when (nil? %2) %1) v) ;; like (map #(when ...) (range) v)
       (remove nil?)
       vec))
Alternatively, you can do this with the reduce-kv function. It is like reduce, except the reducing function receives three arguments instead of two: the accumulator, the key of the item in the collection (the index for vectors, the key for maps), and the item itself. Using reduce-kv you can rewrite this function even more clearly (and it will probably run faster, especially with transients):
(defn nil-indices [v]
  (reduce-kv #(if (nil? %3) (conj %1 %2) %1) [] v))
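For example, with the vector from the question:
(nil-indices [1 nil 3 nil nil 4 3 nil])
;=> [1 3 4 7]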
My try:
(defn inc-by-f [v]
  map #(+ (first v) %) v)
EDIT
(The original question was stupid; I missed the parenthesis. I am still leaving the question, so that perhaps I learn some new ways to deal with it.)
(defn inc-by-f [v]
  (map #(+ (first v) %) v))
What other cool “Clojure” ways to achieve the desired result?
"Cooler" way (answered later than https://stackoverflow.com/a/62536870/823470 by Bob Jarvis):
(defn inc-by-f
  [[v1 :as v]]
  (map (partial + v1) v))
This uses
sequential destructuring to extract the first element of the input vector while still maintaining a reference to the entire vector using :as
partial to avoid the need for an anonymous function literal, which increases readability in some people's opinion (count me in!)
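For example, calling this version:
(inc-by-f [1 2 3])
;=> (2 3 4)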
Note that the vector destructuring is only useful if the increment value is in a place that is easily accessible by destructuring. It could work if the value was the "2nd in the vector" ([_ v2 :as v]), for example, but not if the value was "the maximum element in the vector". In that case, the max would have to be obtained explicitly, e.g.
(defn inc-by-max
  [v]
  (map (partial + (apply max v)) v))
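For example:
(inc-by-max [1 5 2])
;=> (6 10 7)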
Also note that the body of an anonymous function is evaluated on each call, whereas partial is handed its arguments once and they are not re-evaluated afterwards. In other words, if we take the first element of a 1000-element v inside the anonymous function, that results in 1000 calls to first, instead of just one if we get the first element and pass it to partial. Demonstration:
user=> (dorun (map #(+ (do (println "called") 42) %) (range 3)))
called
called
called
=> nil
user=> (dorun (map (partial + (do (println "called") 42)) (range 3)))
called
=> nil
You're missing parentheses around the map invocation. The following works as you expect:
(defn inc-by-f [v]
  (map #(+ (first v) %) v))
I'm often writing code of the form
(->> init
     (map ...)
     (filter ...)
     (first))
When converting this into code that uses transducers I'll end up with something like
(transduce (comp (map ...) (filter ...)) (completing #(reduced %2)) nil init)
Writing (completing #(reduced %2)) instead of first doesn't sit well with me at all. It needlessly obscures a very straightforward task. Is there a more idiomatic way of performing this task?
I'd personally use your approach with a custom reducing function but here are some alternatives:
(let [[x] (into [] (comp (map inc) (filter even?) (take 1)) [0 1])]
  x)
Using destructuring :/
Or:
(first (eduction (map inc) (filter even?) [0 1]))
Here you save on calling comp, which eduction does for you. It's not super lazy, though: it'll realize up to 32 elements at a time, so it's potentially wasteful.
Fixed with a (take 1):
(first (eduction (map inc) (filter even?) (take 1) [0 1]))
Overall a bit shorter and not too unclear compared to:
(transduce (comp (map inc) (filter even?) (take 1)) (completing #(reduced %2)) nil [0 1])
If you need this a bunch, then I'd probably NOT create a custom reducing function but instead a function similar to transduce that takes an xform and a coll as arguments and returns the first value. It's clearer what it does and you can give it a nice docstring. If you want to save on calling comp you can also make it similar to eduction:
(defn single-xf
  "Returns the first item of transducing the xforms over collection"
  {:arglists '([xform* coll])}
  [& xforms]
  (transduce (apply comp (butlast xforms)) (completing #(reduced %2)) nil (last xforms)))
Example:
(single-xf (map inc) (filter even?) [0 1])
medley has find-first with a transducer arity and xforms has a reducing function called last. I think that the combination of the two is what you're after.
(ns foo.bar
  (:require
   [medley.core :as medley]
   [net.cgrand.xforms.rfs :as rfs]))

(transduce (comp (map ,,,) (medley/find-first ,,,)) rfs/last init)
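A concrete sketch with the question's (map inc) / (filter even?) pipeline, folding the filter-then-first step into find-first (this assumes a medley version whose find-first has the transducer arity):
(transduce (comp (map inc) (medley/find-first even?)) rfs/last nil [0 1])
;=> 2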
When doing
(map f [0 1 2] [:0 :1])
f will get called twice, with the arguments being
0 :0
1 :1
Is there a simple yet efficient way, i.e. without producing more intermediate sequences etc., to make f get called for every value of the first collection, with the following arguments?
0 :0
1 :1
2 nil
Edit: addressing a question by @fl00r in the comments.
The actual use case that triggered this question needed map to always run exactly (count first-coll) times, regardless of whether the second (or third, or ...) collection was longer.
It's a bit late in the game now and somewhat unfair after having accepted an answer, but if a good answer gets added that only does what I specifically asked for - mapping (count first-coll) times - I would accept that.
You could do:
(map f [0 1 2] (concat [:0 :1] (repeat nil)))
Basically, pad the second coll with an infinite sequence of nils. map stops when it reaches the end of the first collection.
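For example, with vector as f:
(map vector [0 1 2] (concat [:0 :1] (repeat nil)))
;=> ([0 :0] [1 :1] [2 nil])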
An (eager) loop/recur form that walks to end of longest:
(loop [c1 [0 1 2] c2 [:0 :1] o []]
  (if (or (seq c1) (seq c2))
    (recur (rest c1) (rest c2) (conj o (f (first c1) (first c2))))
    o))
Or you could write a lazy version of map that did something similar.
A general lazy version, as suggested by Alex Miller's answer, is
(defn map-all [f & colls]
  (lazy-seq
   (when-not (not-any? seq colls)
     (cons
      (apply f (map first colls))
      (apply map-all f (map rest colls))))))
For example,
(map-all vector [0 1 2] [:0 :1])
;([0 :0] [1 :1] [2 nil])
You would probably want to specialise map-all for one and two collections.
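A minimal sketch of such a two-collection specialisation (hypothetical helper name, following the same pattern as map-all above):
(defn map-all-2 [f c1 c2]
  ;; hypothetical special case of map-all for exactly two collections
  (lazy-seq
   (when (or (seq c1) (seq c2))
     (cons (f (first c1) (first c2))
           (map-all-2 f (rest c1) (rest c2))))))

(map-all-2 vector [0 1 2] [:0 :1])
;([0 :0] [1 :1] [2 nil])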
Just for fun:
this could easily be done with Common Lisp's do macro. We could implement it in Clojure and do this (and much more fun things) with it:
(defmacro cl-do [clauses [end-check result] & body]
  (let [clauses (map #(if (coll? %) % (list %)) clauses)
        bindings (mapcat (juxt first second) clauses)
        nexts (map #(nth % 2 (first %)) clauses)]
    `(loop [~@bindings]
       (if ~end-check
         ~result
         (do
           ~@body
           (recur ~@nexts))))))
and then just use it for mapping (notice it can operate on more than 2 colls):
(defn map-all [f & colls]
  (cl-do ((colls colls (map next colls))
          (res [] (conj res (apply f (map first colls)))))
         ((every? empty? colls) res)))
in repl:
user> (map-all vector [1 2 3] [:a :s] '[z x c v])
;;=> [[1 :a z] [2 :s x] [3 nil c] [nil nil v]]
I am playing around and trying to create my own reductions implementation, so far I have this which works with this test data:
((fn [func & args]
   (reduce (fn [acc item]
             (conj acc (func (last acc) item)))
           [(first args)]
           (first (rest args))))
 * 2 [3 4 5])
What I don't like is how I am separating the args.
(first args) is what I would expect, i.e. 2 but (rest args) is ([3 4 5]) and so I am getting the remainder like this (first (rest args)) which I do not like.
Am I missing some trick that makes it easier to work with variadic arguments?
Variadic arguments are just about getting an unspecified number of arguments in a list, so all list/destructuring operations can be applied here.
For example:
(let [[fst & rst] a-list]
  ; fst is the first element
  ; rst is the rest
  )
This is more readable than:
(let [fst (first a-list)
      rst (rest a-list)]
  ; ...
  )
You can go further and get the first and second elements of a list (assuming it has more than one element) in one line:
(let [[fst snd & rst] a-list]
  ; ...
  )
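For example:
(let [[fst snd & rst] [1 2 3 4]]
  [fst snd rst])
;=> [1 2 (3 4)]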
I originally misread your question and thought you were trying to reimplement the reduce function. Here is a sample implementation I wrote for this answer which doesn't use first or rest:
(defn myreduce
  ;; here we accept the form with no initial value
  ;; like in (myreduce * [2 3 4 5]), which is equivalent
  ;; to (myreduce * 2 [3 4 5]). Notice how we use destructuring
  ;; to get the first/rest of the list passed as a second
  ;; argument
  ([op [fst & rst]] (myreduce op fst rst))
  ;; we take an operator (function), accumulator and list of elements
  ([op acc els]
   ;; no elements? give the accumulator back
   (if (empty? els)
     acc
     ;; all the function's logic is in here
     ;; we're destructuring els to get its first (el) and rest (els)
     (let [[el & els] els]
       ;; then apply again the function on the same operator,
       ;; using (op acc el) as the new accumulator, and the
       ;; rest of the previous elements list as the new
       ;; elements list
       (recur op (op acc el) els)))))
I hope it helps you see how to work with list destructuring, which is probably what you want in your function. Here is a relevant blog post on this subject.
Tidying up your function.
As #bfontaine commented, you can use (second args) instead of (first (rest args)):
(defn reductions [func & args]
  (reduce
   (fn [acc item] (conj acc (func (last acc) item)))
   [(first args)]
   (second args)))
This uses
func
(first args)
(second args)
... but ignores the rest of args.
So we can use destructuring to name the first and second elements of args - init and coll seem suitable - giving
(defn reductions [func & [init coll & _]]
  (reduce
   (fn [acc item] (conj acc (func (last acc) item)))
   [init]
   coll))
... where _ is the conventional name for the ignored argument, in this case a sequence.
We can get rid of it, simplifying to
(defn reductions [func & [init coll]] ... )
... and then to
(defn reductions [func init coll] ... )
... - a straightforward function of three arguments.
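Whichever of these versions you use, the behaviour matches the original, e.g.
(reductions * 2 [3 4 5])
;=> [2 6 24 120]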
Dealing with the underlying problems.
Your function has two problems:
slowness
lack of laziness.
Slowness
The flashing red light in this function is the use of last in
(fn [acc item] (conj acc (func (last acc) item)))
This scans the whole of acc every time it is called, even if acc is a vector. So this reductions takes time proportional to the square of the length of coll: hopelessly slow for long sequences.
A simple fix is to replace (last acc) by (acc (dec (count acc))), which takes effectively constant time.
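A sketch of that change applied to the three-argument version above:
(defn reductions [func init coll]
  (reduce
   (fn [acc item] (conj acc (func (acc (dec (count acc))) item)))
   [init]
   coll))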
Lack of laziness
We still can't lazily use what the function produces. For example, it would be nice to encapsulate the sequence of factorials like this:
(def factorials (reductions * 1N (next (range))))
With your reductions, this definition never returns.
You have to entirely recast your function to make it lazy. Let's modify the standard lazy reductions to employ destructuring:
(defn reductions [f init coll]
  (cons
   init
   (lazy-seq
    (when-let [[x & xs] (seq coll)]
      (reductions f (f init x) xs)))))
Now we can define
(def factorials (reductions * 1N (next (range))))
Then, for example,
(take 10 factorials)
;(1N 1N 2N 6N 24N 120N 720N 5040N 40320N 362880N)
Another approach is to derive the sequence from itself, like a railway locomotive laying the track it travels on:
(defn reductions [f init coll]
  (let [answer (lazy-seq (reductions f init coll))]
    (cons init (map f answer coll))))
But this contains a hidden recursion (hidden from me, at least):
(nth (reductions * 1N (next (range))) 10000)
;StackOverflowError ...
4Clojure Problem 58 is stated as:
Write a function which allows you to create function compositions. The parameter list should take a variable number of functions, and create a function that applies them from right-to-left.
(= [3 2 1] ((__ rest reverse) [1 2 3 4]))
(= 5 ((__ (partial + 3) second) [1 2 3 4]))
(= true ((__ zero? #(mod % 8) +) 3 5 7 9))
(= "HELLO" ((__ #(.toUpperCase %) #(apply str %) take) 5 "hello world"))
Here __ should be replaced by the solution.
In this problem the function comp should not be employed.
A solution I found is:
(fn [& xs]
  (fn [& ys]
    (reduce #(%2 %1)
            (apply (last xs) ys)
            (rest (reverse xs)))))
It works. But I don't really understand how the reduce works here. How does it represent (apply f_1 (apply f_2 ... (apply f_n-1 (apply f_n args)) ...))?
Let's try modifying that solution in 3 stages. Stay with each for a while and see if you get it. Stop if and when you do lest I confuse you more.
First, let's have more descriptive names
(defn my-comp [& fns]
  (fn [& args]
    (reduce (fn [result-so-far next-fn] (next-fn result-so-far))
            (apply (last fns) args)
            (rest (reverse fns)))))
then factor up some
(defn my-comp [& fns]
  (fn [& args]
    (let [ordered-fns (reverse fns)
          first-result (apply (first ordered-fns) args)
          remaining-fns (rest ordered-fns)]
      (reduce
       (fn [result-so-far next-fn] (next-fn result-so-far))
       first-result
       remaining-fns))))
next replace reduce with a loop which does the same
(defn my-comp [& fns]
  (fn [& args]
    (let [ordered-fns (reverse fns)
          first-result (apply (first ordered-fns) args)]
      (loop [result-so-far first-result, remaining-fns (rest ordered-fns)]
        (if (empty? remaining-fns)
          result-so-far
          (let [next-fn (first remaining-fns)]
            (recur (next-fn result-so-far) (rest remaining-fns))))))))
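All three versions behave like the original, e.g. with the first test case:
((my-comp rest reverse) [1 2 3 4])
;=> (3 2 1)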
My solution was:
(fn [& fs]
  (reduce (fn [f g]
            #(f (apply g %&))) fs))
Let's try that for:
(((fn [& fs]
    (reduce (fn [f g]
              #(f (apply g %&))) fs))
  #(.toUpperCase %)
  #(apply str %)
  take)
 5 "hello world")
fs is a list of the functions:
#(.toUpperCase %)
#(apply str %)
take
The first time through the reduce, we set
f <--- #(.toUpperCase %)
g <--- #(apply str %)
We create an anonymous function, and assign this to the reduce function's accumulator.
#(f (apply g %&)) <---- uppercase the result of apply str
Next time through the reduce, we set
f <--- uppercase the result of apply str
g <--- take
Again we create a new anonymous function, and assign this to the reduce function's accumulator.
#(f (apply g %&)) <---- uppercase composed with apply str composed with take
fs is now empty, so this anonymous function is returned from reduce.
This function is passed 5 and "hello world"
The anonymous function then:
Does take 5 "hello world" to become (\h \e \l \l \o)
Does apply str to become "hello"
Does .toUpperCase to become "HELLO"
Here's an elegant (in my opinion) definition of comp:
(defn comp [& fs]
  (reduce (fn [result f]
            (fn [& args]
              (result (apply f args))))
          identity
          fs))
The nested anonymous functions might make it hard to read at first, so let's try to address that by pulling them out and giving them a name.
(defn chain [f g]
  (fn [& args]
    (f (apply g args))))
This function chain is just like comp except that it only accepts two arguments.
((chain inc inc) 1) ;=> 3
((chain rest reverse) [1 2 3 4]) ;=> (3 2 1)
((chain inc inc inc) 1) ;=> ArityException
The definition of comp atop chain is very simple and helps isolate what reduce is bringing to the show.
(defn comp [& fs]
  (reduce chain identity fs))
It chains together the first two functions, the result of which is a function. It then chains that function with the next, and so on.
So using your last example:
((comp #(.toUpperCase %) #(apply str %) take) 5 "hello world") ;=> "HELLO"
The equivalent only using chain (no reduce) is:
((chain identity
        (chain (chain #(.toUpperCase %)
                      #(apply str %))
               take))
 5 "hello world")
;=> "HELLO"
At a fundamental level, reduce is about iteration. Here's what an implementation in an imperative style might look like (ignoring the possibility of multiple arities, as Clojure's version supports):
def reduce(f, init, seq):
    result = init
    for item in seq:
        result = f(result, item)
    return result
It's just capturing the pattern of iterating over a sequence and accumulating a result. I think reduce has a sort of mystique around it which can actually make it much harder to understand than it needs to be, but if you just break it down you'll definitely get it (and probably be surprised how often you find it useful).
Here is my solution:
(defn my-comp
  ([] identity)
  ([f] f)
  ([f & r]
   (fn [& args]
     (f (apply (apply my-comp r) args)))))
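For example, with the second test case from the problem:
((my-comp (partial + 3) second) [1 2 3 4])
;=> 5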
I like A. Webb's solution better, though it does not behave exactly like comp because it does not return identity when called without any arguments. Simply adding a zero-arity body would fix that issue though.
Consider this example:
(def c (comp f1 ... fn-1 fn))
(c p1 p2 ... pm)
When c is called:
first comp's rightmost parameter fn is applied to the p* parameters ;
then fn-1 is applied to the result of the previous step ;
(...)
then f1 is applied to the result of the previous step, and its result is returned
Your sample solution does exactly the same.
first the rightmost parameter (last xs) is applied to the ys parameters:
(apply (last xs) ys)
the remaining parameters are reversed to be fed to reduce:
(rest (reverse xs))
reduce takes the provided initial result and list of functions and iteratively applies the functions to the result:
(reduce #(%2 %1) ..init.. ..functions..)
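As a concrete trace (a sketch, naming the sample solution my-comp and using the third test case from the question):
(def my-comp
  (fn [& xs]
    (fn [& ys]
      (reduce #(%2 %1)
              (apply (last xs) ys)
              (rest (reverse xs))))))

((my-comp zero? #(mod % 8) +) 3 5 7 9)
;; (apply + '(3 5 7 9))  => 24
;; (mod 24 8)            => 0
;; (zero? 0)             => true
;=> true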