I'm just starting out with functional programming and I'm going a bit crazy working without variables.
Every tutorial I read says that it isn't good practice to redefine a variable, but I don't know how to solve my current problem without saving state in a variable.
For example: I'm working on an API and I want to keep values across requests. Let's say I have an endpoint that adds a person, and I have a list of persons; I would like to redefine or alter the value of my persons list by adding the new person. How can I do this?
Is it OK to use var-set, alter-var-root, or conj!?
(For the API I'm using compojure-api, and each person would be a hash map.)
Clojure differentiates values from identities. You can use an atom to manage the state in your compojure application.
(def persons (atom [])) ;; init persons as empty vector
(swap! persons #(conj % {:name "John Doe"})) ;; append new value
You can find more in the docs:
https://clojure.org/reference/atoms
https://clojure.org/reference/data_structures
https://clojuredocs.org/clojure.core/atom
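For the endpoint itself, here's a minimal sketch assuming a Ring-style handler; add-person! and the request/response shapes are illustrative, not the exact compojure-api syntax:
(defn add-person! [person]
  (swap! persons conj person)) ; returns the new value of the vector

(defn add-person-handler [request]
  ;; assumes some middleware has already parsed the request body into a map
  (let [person (:body request)]
    {:status 201
     :body   (add-person! person)}))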
You'll likely need mutable state somewhere in a large application, but it isn't necessary in all cases.
I'm not familiar with compojure, but here's a small example using immutability that might give you a better idea:
(loop [requests []
       people []]
  (let [request (receive-request)]
    ; Use requests/people
    ; Then loop again with updated lists
    (recur (conj requests request)
           (conj people (make-person request)))))
I'm using hypothetical receive-request and make-person functions here.
The loop creates a couple bindings, and updates them at each recur. This is an easy way to "redefine a variable". This is comparable to pure recursion, where you don't mutate the end result at any point, you just change what value gets passed onto the next iteration.
Of course, this is super simple, and impractical since you're just receiving one request at a time. If you're receiving requests from multiple threads at the same time, this would be a justifiable case for an atom:
(defn listen [result-atom]
  (doto (Thread.
          (fn []
            (while true ; Infinite listener for simplicity
              (let [request (receive-request)]
                (swap! result-atom #(conj % (make-person request)))))))
    (.start))) ; start the thread so it actually listens

(defn listen-all []
  (let [result-atom (atom [])]
    (listen result-atom)
    (listen result-atom)
    result-atom)) ; return the atom so callers can deref it later
; result-atom now holds an updating list of people that you can do stuff with
swap! changes the atom's state by conjoining onto the vector it holds. The vector inside the atom isn't mutated; it's just replaced by a modified version of itself. Anyone holding a reference to the old vector of people is unaffected by the call to swap!.
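To make that concrete, a small REPL sketch (people-atom and the data are illustrative):
(def people-atom (atom [{:name "Alice"}]))
(def old-people @people-atom)          ; snapshot of the current value
(swap! people-atom conj {:name "Bob"})
old-people    ;=> [{:name "Alice"}]                 the old value is untouched
@people-atom  ;=> [{:name "Alice"} {:name "Bob"}]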
A better approach would be to use a library like core.async, but that's getting away from the question.
The point is, you may need mutable state somewhere, but you'll need it far less often than you're used to. In most cases, almost everything can be done with immutability, as in the first example.
I'm new to Clojure and I'm having trouble understanding some concepts, especially pure functions and immutability.
One thing I still can't comprehend is how you would solve a problem like this in Clojure:
A simple console application with a login method, where the user can't try to log in more than 3 times within a 1-minute interval.
In C#, for example, I could add the UserId and a timestamp to a collection every time the user tries to log in, then check whether there have been more than 3 attempts in the last minute.
How would I do that in Clojure, considering that I can't alter my collection?
This is not a practical question (although some code examples would be welcome); I want to understand how you approach a problem like this.
You don't alter objects in most cases, you create new versions of the old objects:
(loop [attempt-dates []]
  (if (login-is-correct)
    (login)
    (recur (conj attempt-dates (current-date-stamp)))))
In this case I'm using loop. Whatever I give to recur will be passed to the next iteration of loop. I'm creating a new list that contains the new stamp when I write (conj attempt-dates (current-date-stamp)), then that new list is passed to the next iteration of loop.
This is how it's done in most cases. Instead of thinking about altering the object, think about creating a transformed copy of the object and passing the copy along.
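If you also want the 3-attempts-per-minute check from the question, it can stay a pure function over that list. A minimal sketch, assuming attempt-dates holds millisecond timestamps (too-many-attempts? is a hypothetical helper):
(defn too-many-attempts? [attempt-dates now-ms]
  (let [one-minute-ago (- now-ms 60000)
        recent         (filter #(> % one-minute-ago) attempt-dates)]
    (>= (count recent) 3)))

(too-many-attempts? [1000 2000 3000] 30000)  ;=> true, all three attempts fall within the last minute
(too-many-attempts? [1000 2000 3000] 120000) ;=> false, all attempts are older than a minute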
If you really truly do need mutable state though, you can use a mutable atom to hold an immutable state:
(def mut-state (atom []))
(swap! mut-state conj 1)
(println @mut-state) ; Prints [1]
The vector is still immutable here; the new version just replaces the old version in the mutable atom container.
Unless you need to communicate with UI callbacks or something similar, you usually don't actually need mutability. Practice using loop/recur and reduce instead.
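For example, reduce can replace the "add to a collection and count" pattern from the C# version without any mutation (the data here is illustrative):
(reduce (fn [counts {:keys [user-id]}]
          (update counts user-id (fnil inc 0)))
        {}
        [{:user-id 1} {:user-id 1} {:user-id 2}])
;;=> {1 2, 2 1}  attempts counted per user, no collection was altered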
With the introduction of Spec, I try to write test.check generators for all of my functions. This is fine for simple data structures, but tends to become difficult with data structures that have parts that depend on each other. In other words, some state management within the generators is then required.
It would already help enormously to have generator-equivalents of Clojure's loop/recur or reduce, so that a value produced in one iteration can be stored in some aggregated value that is then accessible in a subsequent iteration.
One simple example where this would be required is writing a generator that splits a collection into exactly X partitions, with each partition having between zero and Y elements, and where the elements are randomly assigned to any of the partitions. (Note that test.chuck's partition function does not allow specifying X or Y.)
If you write this generator by looping through the collection, then this would require access to the partitions filled up during previous iterations, to avoid exceeding Y.
Does anybody have any ideas? Partial solutions I have found:
test.check's let and bind allow you to generate a value and then reuse that value later on, but they do not allow iterations.
You can iterate through a collection of previously generated values with a combination of the tuple and bind functions, but these iterations do not have access to the values generated during previous iterations.
(defn bind-each [k coll]
  (apply tcg/tuple (map (fn [x] (tcg/bind (tcg/return x) k)) coll)))
You can use atoms (or volatiles) to store and access values generated during previous iterations. This works, but is very un-Clojure-like, in particular because you need to reset! the atom/volatile before the generator is returned, to avoid their contents being reused in the next call of the generator.
Generators are monad-like due to their bind and return functions, which hints at the use of a monad library such as Cats in combination with a State monad. However, the State monad was removed in Cats 2.0 (because it was allegedly not a good fit for Clojure), while other support libraries I am aware of do not have formal ClojureScript support. Furthermore, when implementing a State monad in his own library, Jim Duey, one of Clojure's monad experts, seems to warn that using the State monad is not compatible with test.check's shrinking (see the bottom of http://www.clojure.net/2015/09/11/Extending-Generative-Testing/), which significantly reduces the merits of using test.check.
You can accomplish the iteration you're describing by combining gen/let (or equivalently gen/bind) with explicit recursion:
(defn make-foo-generator
  [state]
  (if (good-enough? state)
    (gen/return state)
    (gen/let [state' (gen-next-step state)]
      (make-foo-generator state'))))
However, it's worth trying to avoid this pattern if possible, because each use of let/bind undermines the shrinking process. Sometimes it's possible to reorganize the generator using gen/fmap. For example, to partition a collection into a sequence of X subsets (which I realize is not exactly what your example was, but I think it could be tweaked to fit), you could do something like this:
(defn partition
  [coll subset-count]
  (gen/let [idxs (gen/vector (gen/choose 0 (dec subset-count))
                             (count coll))]
    (->> (map vector coll idxs)
         (group-by second)
         (sort-by key)
         (map (fn [[_ pairs]] (map first pairs))))))
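A quick way to see what this produces, assuming test.check's gen/sample (the sample collection is illustrative and the exact output is random):
(require '[clojure.test.check.generators :as gen])

(gen/sample (partition [:a :b :c :d :e] 3) 5)
;; e.g. (((:a :b) (:c) (:d :e)) ...), one random split per sample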
I keep running into situations where I need to filter a collection of maps by some function, and then pull out one value from each of the resulting maps to make my final collection.
I often use this basic structure:
(map :key (filter some-predicate coll))
It occurred to me that this basically accomplishes the same thing as a for loop:
(for [x coll :when (some-predicate x)] (:key x))
Is one way more efficient than the other? I would think the for version would be more efficient since we only go through the collection once. Is this accurate?
Neither is significantly more efficient than the other.
Both of these return an unrealized lazy sequence where each item is computed when it is read. The first one does not traverse the list twice; instead it creates one lazy sequence that produces the items matching the filter, which is then immediately consumed (still lazily) by the map function. So in the first case you have one lazy sequence lazily consuming items from another lazy sequence. The call to for, on the other hand, produces a single lazy-seq with a lot of logic in each step.
You can see the code that the for example expands into with:
(pprint (macroexpand-1 '(for [x coll :when (some-predicate x)] (:key x))))
On the whole the performance will be very similar, with the second method perhaps producing slightly less garbage, so the only way to decide between these on the basis of performance is benchmarking. On the basis of style, I choose the first one because it is shorter, though I might write it with the thread-last macro if there were more stages:
(->> coll
     (filter some-predicate)
     (take some-limit)
     (map :key))
Though this basically comes down to personal style.
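If you do want to compare them on performance, here's a small sketch using the criterium library (some-predicate and coll are the placeholders from the question):
(require '[criterium.core :refer [quick-bench]])

;; doall forces the whole lazy sequence, otherwise you would only measure
;; the cost of building the unrealized sequence.
(quick-bench (doall (map :key (filter some-predicate coll))))
(quick-bench (doall (for [x coll :when (some-predicate x)] (:key x))))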
I have quite a few records in my program that I end up putting in a map, using one of their fields as the key. For example:
(defrecord Foo [id afield anotherfield])
And then I'd add instances to a map with the id as the key. This is all perfectly doable, but a bit tedious; e.g. when adding a new instance of Foo to a map I need to extract the key first. I'm wondering whether a data structure to do this already exists somewhere in clojure.core?
Basically I'd like to construct a set of Foo's by giving the set a value to key mapping function (i.e. :id) at construction time of the set, and then have that used when I want to add/find/remove/... a value.
So instead of:
(assoc my-map (:id a-foo) a-foo)
I could do, say:
(conj my-set a-foo)
And more interestingly, merge and merge-with support.
Sounds like a simple case where you would want to use a function to eliminate the "tedious" part.
e.g.
(defn my-assoc [some-map some-record]
  (assoc some-map (:id some-record) some-record))
If you are doing this a lot and need different key functions, you might want to try a higher order function:
(defn my-assoc-builder [id-function]
  (fn [some-map some-record]
    (assoc some-map (id-function some-record) some-record)))

(def my-assoc-by-id (my-assoc-builder :id))
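A quick usage sketch, building the map with reduce (the Foo record is the one from the question; the sample values are illustrative):
(reduce my-assoc-by-id {} [(->Foo 1 "a" "x") (->Foo 2 "b" "y")])
;;=> a map from each record's :id to the record itself, e.g. {1 #user.Foo{...}, 2 #user.Foo{...}}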
Finally, note that you could do the same with a macro. However, a useful general rule with macros is not to use them unless you really need them. In this case, since it can be done easily with a function, I'd recommend sticking to functions.
Well, as far as I know there is no such data structure (and even if there were, it would probably do the same tedious stuff in the background), so you can build your desired operations on top of your record fns (which will, in the background, do the same tedious stuff that needs to be done).
Basically I'd like to construct a set of Foo's by giving the set a value to key mapping function (i.e. :id) at construction time of the set, and then have that used when I want to add/find/remove/...
I didn't quite get this. If you are holding your records in a set and then want to, e.g., find one by id, you would have to do even more tedious work, looking at every record until you find the right one. That's O(n), whereas with a map you get effectively O(1) lookup.
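To make that concrete, a tiny sketch (the sample data is illustrative):
(def by-id {1 {:id 1 :afield "a"}
            2 {:id 2 :afield "b"}})
(get by-id 2)                          ;=> {:id 2 :afield "b"}, direct lookup by key

(def as-set #{{:id 1 :afield "a"} {:id 2 :afield "b"}})
(first (filter #(= 2 (:id %)) as-set)) ;=> {:id 2 :afield "b"}, but found via a linear scan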
Did I use "tedious" too much? My suggestion is: use a map and do some of the tedious stuff. It's all 1s and 0s after all :)
Given a PersistentQueue in a ref:
(def pq (ref clojure.lang.PersistentQueue/EMPTY))
What is the idiomatic way to pop the queue and get the result?
My best attempt for your critique:
(defn qpop [queue-ref]
  (dosync
    (let [item (peek @queue-ref)]
      (alter queue-ref pop)
      item)))
alter returns the in-transaction value of the queue, which has already been popped, so you can't just do the alter by itself.
I can't think of something more idiomatic short of abstracting the body of your dosync away.
However if you are in for a stunt, you can try the off-by-one hack: always consider the head of the PQ as garbage (it contains the previously popped item). It follows that you can rewrite qpop:
(defn qpop [queue-ref]
  (dosync
    (peek (alter queue-ref pop))))
It requires adding special checks for emptiness (in particular when you conj). It also means keeping a reference to the item around for longer than it should be; however, if you look at the implementation of PersistentQueue you'll see that by itself it may keep references to popped items for too long, so liveness is already murky.
I used this hack here.
Your dosync body could be simplified using Common Lisp's prog1 macro, although core Clojure seems to lack it. There is a straightforward implementation on the Google group, along with some discussion on how you can make it a function (instead of a macro) in Clojure.
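For illustration, here is one possible prog1-style macro and how qpop could use it; this is just a sketch, not the Google-group implementation:
;; Evaluate all forms, return the value of the first one.
(defmacro prog1 [first-form & more-forms]
  `(let [result# ~first-form]
     ~@more-forms
     result#))

(defn qpop [queue-ref]
  (dosync
    (prog1 (peek @queue-ref)
           (alter queue-ref pop))))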