How is it possible to intern macros in Clojure?

I want to do something like this for debugging purposes:
(intern 'clojure.core 'doc clojure.repl/doc)
but it is not letting me: the compiler says "Can't take value of a macro".
is there another way?

A macro is a function stored in a Var with :macro true in its metadata map. So you can obtain the macro function itself by derefing the Var:

@#'doc

Use intern to install a function as a macro by attaching appropriate metadata to the name symbol (see (doc intern), which promises to transfer any metadata provided in this way to the Var):

(intern 'clojure.core
        (with-meta 'doc {:macro true})
        @#'clojure.repl/doc)
Using reader metadata is possible too, just remember to put it "inside the quote":
;; attaches metadata to the symbol, which is what we want:
' ^:macro doc
;; attaches metadata to the list structure (quote doc):
^:macro 'doc
^:macro is shorthand for ^{:macro true}.
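Putting the pieces together, a hedged REPL sketch (interning under a made-up name my-doc in the user namespace, to avoid clobbering clojure.core/doc):

```clojure
;; Grab the underlying macro function by derefing the Var:
(def doc-fn @#'clojure.repl/doc)

;; Intern it under a metadata-tagged symbol; :macro true tells the
;; compiler to treat calls to it as macro calls again:
(intern 'user (with-meta 'my-doc {:macro true}) doc-fn)

;; Now (my-doc map) behaves like (clojure.repl/doc map).
```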

Related

Should I run checks outside or inside my swap! function?

When I'm going to swap! the value of an atom conditionally, should the condition wrap the swap! or should it be part of the function swap! calls?
(import '(java.time Instant))

(def not-nil? (comp not nil?))

(defonce users (atom {"example user 1" {:ts (Instant/now)}
                      "example user 2" {:ts (Instant/now)}}))

(defn update-ts [id]
  (if (not-nil? (get @users id))
    (swap! users assoc-in [id :ts] (Instant/now))))
In the above example, I'm doing the existence check for the user before doing the swap!. But couldn't the user be deleted from users after the check but before the swap!? So, then, is it safer to put the check inside the function run by swap!?
(defn update-ts [id]
  (swap! users (fn [users]
                 (if (not-nil? (get users id))
                   (assoc-in users [id :ts] (Instant/now))
                   users))))
But couldn't the user be deleted from users after the check but before the swap!? So, then, is it safer to put the check inside the function run by swap!?
Yes, exactly right. You should never make a decision about how to mutate an atom from anywhere but inside of a swap! on that atom. Since swap! is the only operation guaranteed to be atomic, every time you do otherwise (i.e., make a decision about an atom from outside of a swap! on it), you introduce a race condition.
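To make the race concrete, a minimal sketch (delete-user is an invented second writer): because the existence check runs inside swap!, update-ts always sees the same snapshot it updates, so it can never resurrect a user that delete-user just removed.

```clojure
(import '(java.time Instant))

(defonce users (atom {"u1" {:ts (Instant/now)}}))

;; Safe: the existence check runs inside the same swap! step as the
;; update, against the same snapshot of the map.
(defn update-ts [id]
  (swap! users (fn [m]
                 (if (contains? m id)
                   (assoc-in m [id :ts] (Instant/now))
                   m))))

;; Hypothetical deleter running on another thread:
(defn delete-user [id]
  (swap! users dissoc id))
```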
But couldn't the user be deleted from users after the check but before the swap!? So, then, is it safer to put the check inside the function run by swap!?
As amalloy said, if you need it to be bulletproof you must put the not-nil? check inside the swap function.
However, please keep in mind that you & your team are writing the rest of the program. Thus, you have a lot of outside information that may simplify your decision:
If you only ever have one thread (like most programs), you never need to worry about a race condition.
If you have 2 or more threads, maybe you never remove entries from the map (it only accumulates :ts values). Then you also don't need to worry about a conflict.
If your function is more complicated than the simple example above, you may wish to use a (dosync ...) form to wrap multiple steps instead of shoehorning everything into a single swap function.
In the third case, replace the atom with a ref. An example is:
(defonce users (ref {...})) ; ***** must use a ref here *****

(dosync
  (if (not-nil? (get @users id))
    <lots of other stuff...>
    (alter users assoc-in [id :ts] (Instant/now))))
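As a hedged sketch of the ref variant (the audit-log ref is invented here to motivate a transaction), dosync lets a single transaction read and alter several refs consistently:

```clojure
(import '(java.time Instant))

(defonce users (ref {"u1" {:ts nil}}))
(defonce audit-log (ref []))  ; a second ref, updated in the same transaction

(defn touch-user [id]
  (dosync
    (when (contains? @users id)
      (alter users assoc-in [id :ts] (Instant/now))
      (alter audit-log conj [:touched id]))))
```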

which is faster? map or reduce with condition fn or get-in?

I am using monger and fetching a batch from my mongo nosql database using find-maps. It returns an array that I plan to use as a datastore argument (reference) downstream in my chain of function calls. Within those future function calls, I will have access to a corresponding id. I hope to use this id as a lookup to fetch within my datastore so I don't have to make another monger call. A datastore in the form of an array doesn't seem like the fastest way to get access to the object by id, but I am not certain.
If I needed to derive an object from this datastore array, then I'd need to use a function like this (one that has to scan over every element):
(defn fetchObjFromArray [fetch_id inputarray]
  (reduce (fn [reduced_obj element_obj]
            (if (= fetch_id (get-in element_obj [:_id]))
              element_obj ;; ignoring duplicates for conversation
              reduced_obj))
          {}
          inputarray))
Instead, if after my initial monger call, I create a key/val hash object with a function like this:
(defn createReportFromObjArray [inputarray]
  (reduce (fn [returnobj elementobj]
            (let [_id (get-in elementobj [:_id])
                  keyword (keyword _id)]
              (assoc returnobj keyword elementobj))) ;; ignoring duplicates for conversation
          {}
          inputarray))
then perhaps my subsequent calls could instead use get-in and that would be much faster because I would be fetching by key?
I am confused because: when I use get-in, doesn't it have to iterate over each key in the key/val hash object until it finds a match between the key and the fetch_id?
(let [report (createReportFromObjArray inputarray)
      target_val (get-in report [(keyword fetch_id)])]
Why doesn't get-in have to iterate over every key? Maybe it's faster because it can stop when it finds the first "match", where map/reduce has to go the whole way through? How is this faster than having to iterate over each element in an array and checking whether id matches fetch_id?
I am very grateful for help you can offer.
In your second code example you are building a Clojure hash map in linear time. Hash maps have lookup performance of O(log32 N) via get and its derivatives, i.e. effectively constant time rather than a scan over the keys.
In the first example you scan the entire input and return the last element that matched the ID or the empty hash map, probably unintentionally.
I recommend using (group-by :_id) instead of the second code example, and (first (filter (comp #{fetch_id} :_id) inputarray)) in place of the first example.
Avoid casting strings to keywords via keyword; Clojure keywords should generally be known at compile time, and maps support arbitrary data types as keys.
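A sketch of both recommendations, with invented stand-in data in place of the monger result:

```clojure
(def inputarray [{:_id "a" :v 1} {:_id "b" :v 2}])  ; stand-in data
(def fetch_id "b")

;; Build an index once (O(n)); values are vectors of all matches:
(def report (group-by :_id inputarray))

;; Each lookup is then a hash-map access, not a scan:
(first (get report fetch_id))                        ;=> {:_id "b", :v 2}

;; Without an index, a single lookup scans lazily and stops at the
;; first match:
(first (filter (comp #{fetch_id} :_id) inputarray))  ;=> {:_id "b", :v 2}
```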

Documentation of records in clojure

I previously had an api which had a number of functions in it, all of which expected a map in a very particular format. When it came to documenting this API, I found that in the docstrings of each of these functions I was repeating "The map with which this function is called must be of such and such a format, and this field of the map means such and such."
So I thought it would be better for those functions to take a record, and that I could just document the record instead. However it doesn't seem to be possible to document records, at least in any way interpreted either by the doc macro or Marginalia.
A solution suggested here is "just add a :doc key in the record's meta".
I tried (defrecord ^{:doc "Here is some documentation"} MyRecord [field1 field2]), but macroexpanding this suggests it doesn't have any effect. Also, defrecord returns an instance of java.lang.Class, which doesn't implement IMeta, so I'm not sure we can give it metadata.
How should records be documented?
Are records an appropriate solution here?
TL;DR: Unfortunately you can't.
From the docs:
Symbols and collections support metadata
When you use defrecord you are actually creating a Java class. Since classes are neither symbols nor collections, you cannot attach documentation to them.
More Detailed Explanation
The following REPL session shows why it's not possible to append metadata to records.
user=> (defrecord A [a b])
#<Class#61f53f0e user.A>
user=> (meta A) ;; <= A contains no metadata
nil
The important bit to notice here is that A is a regular Java class.
If you try to set the metadata for A you will get an interesting error:
user=> (with-meta A {:doc "Hello"})
ClassCastException java.lang.Class cannot be cast to clojure.lang.IObj
Apparently with-meta expects a clojure.lang.IObj. Since java.lang.Class is a Java-land construct, it clearly knows nothing of clojure.lang.IObj.
Let's take a look now at the source code for with-meta
user=> (source with-meta)
(def
 ^{:arglists '([^clojure.lang.IObj obj m])
   :doc "Returns an object of the same type and value as obj, with
   map m as its metadata."
   :added "1.0"
   :static true}
 with-meta (fn ^:static with-meta [^clojure.lang.IObj x m]
             (. x (withMeta m))))
As you can see, this function expects x to have a withMeta method, which Java classes clearly don't have.
You can't put a docstring on the record. But if you really want to, then effectively you can.
If you want users reading the code to know your intent then you can add a comment to the code.
If you want users creating an instance of your record to have access to your docstring through tooling then you can modify the created constructor function metadata. e.g.:
(let [docstring "The string-representation *MUST* be ISO8601."
      arglists '([string-representation millis-since-epoch])
      arglists-map '([{:keys [:string-representation :millis-since-epoch]}])]
  (defrecord Timestamp [string-representation millis-since-epoch])
  (alter-meta! #'->Timestamp assoc :doc docstring)
  (alter-meta! #'->Timestamp assoc :arglists arglists)
  (alter-meta! #'map->Timestamp assoc :doc docstring)
  (alter-meta! #'map->Timestamp assoc :arglists arglists-map))
For me, using a REPL in Cursive, I see the arglists popup when I ask for 'parameter info' and the docstring when I ask for 'quick documentation'.
Alternatively a better approach might be to provide your own constructor functions with standard docstrings.
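A sketch of that alternative (make-timestamp is an invented name): a plain function carries a docstring in the usual way and delegates to the generated positional constructor.

```clojure
(defrecord Timestamp [string-representation millis-since-epoch])

(defn make-timestamp
  "Builds a Timestamp. string-representation *MUST* be ISO8601;
  millis-since-epoch is a long."
  [string-representation millis-since-epoch]
  (->Timestamp string-representation millis-since-epoch))
```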

Why does monger only update one record instead of all the records in the list?

I have a function that takes in a list of entries and saves them to mongo using monger.
What is strange is that only one record will be updated and the rest ignored unless I specify :multi true.
I don't understand why the multi flag is necessary for monger to persist all the updates to mongodb.
(defn update-entries
  [entries]
  (let [conn (mg/connect)
        db (mg/get-db conn "database")]
    (for [e entries]
      (mc/update db "posts" {"id" (:id e)} {$set {:data (:data e)}} {:multi true}))))
The multi flag is necessary for multi updates, since that's what mongo itself uses. Take a look at the documentation for update. Granted, that's the mongo shell, but most drivers try to follow it when it comes to operation semantics.
Note that if "id" is unique, then you're updating one record at a time so having :multi set to true shouldn't matter.
There is, however, another issue with your code.
You use a for comprehension, which in turn iterates a collection lazily, i.e. calls to mc/update won't be made until you force the realization of the collection returned by for.
Since mc/update is a call made for its side effects (updating a record in the db), using doseq would be more appropriate, unless you need the results.
If that's the case, wrap for in doall to force realization:
(doall
  (for [e entries]
    (mc/update db "posts" {"id" (:id e)} {$set {:data (:data e)}} {:multi true})))
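If the results are not needed, the doseq form mentioned above would look like this (db and entries as in the question); doseq is eager, so every mc/update call actually runs:

```clojure
(doseq [e entries]
  (mc/update db "posts" {"id" (:id e)} {$set {:data (:data e)}} {:multi true}))
```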

Clojure style / idiom: creating maps and adding them to other maps

I'm writing a Clojure programme to help me perform a security risk assessment (finally gotten fed up with Excel).
I have a question on Clojure idiom and style.
To create a new record about an asset in a risk assessment I pass in the risk-assessment I'm currently working with (a map) and a bunch of information about the asset and my make-asset function creates the asset, adds it to the R-A and returns the new R-A.
(defn make-asset
  "Makes a new asset, adds it to the given risk assessment
  and returns the new risk assessment."
  [risk-assessment name description owner categories
   & {:keys [author notes confidentiality integrity availability]
      :or {author "" notes "" confidentiality 3 integrity 3 availability 3}}]
  (let [ia-ref (inc (risk-assessment :current-ia-ref))]
    (assoc risk-assessment
           :current-ia-ref ia-ref
           :assets (conj (risk-assessment :assets)
                         {:ia-ref ia-ref
                          :name name
                          :desc description
                          :owner owner
                          :categories categories
                          :author author
                          :notes notes
                          :confidentiality confidentiality
                          :integrity integrity
                          :availability availability
                          :vulns []}))))
Does this look like a sensible way of going about it?
Could I make it more idiomatic, shorter, simpler?
Particular things I am thinking about are:
should make-asset add the asset to the risk-assessment? (An asset is meaningless outside of a risk assessment).
is there a simpler way of creating the asset; and
adding it to the risk-assessment?
Thank you
A few suggestions, which may or may not apply.
1. The Clojure idiom for no value is nil. Use it.
2. Present the asset as a flat map. The mixture of positional and keyword arguments is confusing and vulnerable to changes in what makes an asset valid.
3. As @Symfrog suggests, separate the validation of the asset from its association with a risk assessment.
4. Don't bother to keep :current-ia-ref as an entry in a risk assessment. It is just an asset count.
5. Pull out the default entries for an asset into a map in plain sight. You can change your assumed defaults as you wish.
This gives us something like the following (untested):
(def asset-defaults {:confidentiality 3, :integrity 3, :availability 3})

(defn asset-valid? [asset]
  (every? asset [:name :description :owner]))

(defn add-asset [risk-assessment asset]
  (if (asset-valid? asset)
    (update-in risk-assessment
               [:assets]
               conj
               (assoc (merge asset-defaults asset)
                      :ia-ref (inc (count (:assets risk-assessment)))
                      :vulns []))))
Responses to Comments
:current-ia-ref isn't a count. If an asset is deleted it shouldn't reduce :current-ia-ref.
Then (4) does not apply.
I'm not sure of the pertinence of your statement that the Clojure idiom for no value is nil. Could explain further in this context please?
Quoting Differences with other Lisps: In Clojure nil means 'nothing'. It signifies the absence of a value, of any type, and is not specific to lists or sequences.
In this case, we needn't give :author or :notes empty string values.
'Flat map': are you talking about the arguments into the function, if so then I agree.
Yes.
I'm not sure why you define an asset-valid? function. That seems to exceed the original need somewhat: and personally I prefer ensuring only valid assets can be created rather than checking after the fact.
Your add-asset function uses the structure of its argument list to make sure that risk-assessment, name, description, owner, and categories are present (I forgot to check for categories). If you move to presenting the data as a map - whether as a single argument or by destructuring - you lose this constraint. So you have to check the data explicitly (whether to do so in a separate function is moot). But there are benefits:
You can check for more than the presence of certain arguments.
You don't have to remember what order the arguments are in.
Wouldn't your version mean that if I decided to make asset a record in future I'd have to change all the code that called add-asset?
No. A record behaves as a map - it implements IPersistentMap. You'd have to change make-asset, obviously.
... whereas my approach the details of what an asset is is hidden?
In what sense are the contents of an asset hidden? An asset is a map required to have particular keys, and likely to have several other particular keys. Whether the asset is 'really' a record doesn't matter.
A core principle of Clojure (and any other Lisp dialect) is to create small composable functions.
It is not a problem if an asset is created outside of a risk assessment as long as the asset is not exposed to code that is expecting a fully formed asset before it has been added to a risk assessment.
So I would suggest the following (untested):
(defn add-asset-ra
  [{:keys [current-ia-ref] :as risk-assessment} asset]
  (let [ia-ref (if current-ia-ref
                 (inc current-ia-ref)
                 1)]
    (-> risk-assessment
        (assoc :current-ia-ref ia-ref)
        (update-in [:assets] #(conj % (assoc asset :ia-ref ia-ref))))))
(defn make-asset
  [name description owner categories
   & {:keys [author notes confidentiality integrity availability]
      :or {author "" notes "" confidentiality 3 integrity 3 availability 3}}]
  {:name name
   :desc description
   :owner owner
   :categories categories
   :author author
   :notes notes
   :confidentiality confidentiality
   :integrity integrity
   :availability availability
   :vulns []})
You may also find the Schema library useful to validate the shape of function arguments.
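For example, with Schema (the Asset shape below is invented for illustration):

```clojure
(require '[schema.core :as s])

(def Asset
  {:name s/Str
   :desc s/Str
   :owner s/Str
   (s/optional-key :notes) s/Str})

;; Returns the value when it matches; throws ExceptionInfo when it doesn't:
(s/validate Asset {:name "server" :desc "web tier" :owner "ops"})
```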