Summarising (grouping and counting) a sequence of maps - clojure

I'm trying to find an idiomatic way in Clojure of grouping a sequence of maps by certain keys and providing counts. Sort of like 'SELECT X, Y, COUNT(*) FROM Z GROUP BY X, Y' in SQL. The data looks like this:
({:status "Academy Sponsor Led",
:pupil-population "",
:locality "Northamptonshire",
:pupil-gender "Mixed",
:county "Northamptonshire",
:pupil-age "11-18",
:school "Wrenn School",
:website ""}
{:status "Academy Sponsor Led",
:pupil-population "915",
:locality "Plymouth",
:pupil-gender "Mixed",
:county "Devon",
:pupil-age "11-19",
:school "The All Saints Church of England Academy",
:website "http://www.asap.org.uk/"}
{:status "Academy Converter",
:pupil-population "735",
:locality "Somerset",
:pupil-gender "Mixed",
:county "Somerset",
:pupil-age "11-16",
:school "Stanchester Academy",
:website "www.Stanchester-Academy.co.uk"}
{:status "Community School",
:pupil-population "",
:locality "Herefordshire",
:pupil-gender "Mixed",
:county "Herefordshire",
:pupil-age "11-18",
:school "Lady Hawkins High School",
:website "http://www.lhs.hereford.sch.uk"}...
and my solution looks like this:
(defn summarise-locality-status
  "Return counts of status within locality"
  [data]
  (let [locality (group-by :locality data)
        locality-status (map #(vector (first %) (group-by :status (second %))) locality)
        counts-fn (fn [locality-status-item]
                    (let [statuses (second locality-status-item)]
                      (map #(vector % (count (get statuses %))) (keys statuses))))]
    (map #(vector (first %) (counts-fn %)) locality-status)))
However, it feels a bit clunky. What would be a better way of doing this?

Depending on your needs,
(frequencies (for [r data] (select-keys r [:locality :status])))
is closer to the SQL, in that it is not nested.
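On just the four sample records shown above, each locality/status pair occurs once, so the result is a flat map from key-pair to count, something like:
;;=> {{:locality "Northamptonshire", :status "Academy Sponsor Led"} 1,
;;    {:locality "Plymouth", :status "Academy Sponsor Led"} 1,
;;    {:locality "Somerset", :status "Academy Converter"} 1,
;;    {:locality "Herefordshire", :status "Community School"} 1}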

Another solution, introducing juxt and reduce-kv:
(->> data
     (group-by (juxt :locality :status))
     (reduce-kv #(assoc-in % %2 (count %3)) {}))
This might be closest to your original SQL and more intuitively understandable.
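For the four sample records shown, this nests the counts under locality and then status, something like:
;;=> {"Northamptonshire" {"Academy Sponsor Led" 1},
;;    "Plymouth" {"Academy Sponsor Led" 1},
;;    "Somerset" {"Academy Converter" 1},
;;    "Herefordshire" {"Community School" 1}}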

How about
(reduce #(update-in %1 [(:locality %2) (:status %2)] (fnil inc 0)) {} data)
or
(reduce #(update-in %1 ((juxt :locality :status) %2) (fnil inc 0)) {} data)
The output is a little different (hash maps instead of lists), but that's easy to change. Using a hash map makes group-by superfluous and the code a lot shorter/easier.
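The fnil wrapper is what handles the first occurrence of a key pair: it substitutes 0 for the missing (nil) value before inc is applied, e.g.
((fnil inc 0) nil) ;;=> 1
((fnil inc 0) 4)   ;;=> 5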

(for [[locality statuses] (group-by :locality data)]
  {:locality locality
   :all_status (for [[status items] (group-by :status statuses)]
                 {:status status :count (count items)})})
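This keeps a nested shape similar to the original solution, but built from maps; on the sample data it yields something like:
;;=> ({:locality "Northamptonshire"
;;     :all_status ({:status "Academy Sponsor Led" :count 1})}
;;    {:locality "Plymouth"
;;     :all_status ({:status "Academy Sponsor Led" :count 1})}
;;    ...)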

Related

Using Clojure, is there a better way to remove an item from a sequence which is the value in a map?

There is a map containing sequences. The sequences contain items.
I want to remove a given item from any sequence that contains it.
The solution I found does what it should, but I wonder if there is a better
or more elegant way to achieve the same.
My current solution:
(defn remove-item-from-map-value [my-map item]
  (apply merge (for [[k v] my-map] {k (remove #(= item %) v)})))
The tests describe the expected behaviour:
(require '[clojure.test :as t])
(def my-map {:keyOne ["itemOne"]
             :keyTwo ["itemTwo" "itemThree"]
             :keyThree ["itemFour" "itemFive" "itemSix"]})

(defn remove-item-from-map-value [my-map item]
  (apply merge (for [[k v] my-map] {k (remove #(= item %) v)})))

(t/is (= (remove-item-from-map-value my-map "unknown-item") my-map))

(t/is (= (remove-item-from-map-value my-map "itemFive")
         {:keyOne ["itemOne"]
          :keyTwo ["itemTwo" "itemThree"]
          :keyThree ["itemFour" "itemSix"]}))

(t/is (= (remove-item-from-map-value my-map "itemThree")
         {:keyOne ["itemOne"]
          :keyTwo ["itemTwo"]
          :keyThree ["itemFour" "itemFive" "itemSix"]}))

(t/is (= (remove-item-from-map-value my-map "itemOne")
         {:keyOne []
          :keyTwo ["itemTwo" "itemThree"]
          :keyThree ["itemFour" "itemFive" "itemSix"]}))
I'm fairly new to clojure and am interested in different solutions.
So any input is welcome.
I'll throw in the Specter version for good measure. It keeps the vectors inside the map, and it's really compact.
(setval [MAP-VALS ALL #{"itemFive"}] NONE my-map)
Example
user=> (use 'com.rpl.specter)
nil
user=> (def my-map {:keyOne ["itemOne"]
  #_=>              :keyTwo ["itemTwo" "itemThree"]
  #_=>              :keyThree ["itemFour" "itemFive" "itemSix"]})
#'user/my-map
user=> (setval [MAP-VALS ALL #{"itemFive"}] NONE my-map)
{:keyOne ["itemOne"],
 :keyThree ["itemFour" "itemSix"],
 :keyTwo ["itemTwo" "itemThree"]}
user=> (setval [MAP-VALS ALL #{"unknown"}] NONE my-map)
{:keyOne ["itemOne"],
 :keyThree ["itemFour" "itemFive" "itemSix"],
 :keyTwo ["itemTwo" "itemThree"]}
I would go with something like this:
user> (defn remove-item [my-map item]
        (into {}
              (map (fn [[k v]] [k (remove #{item} v)]))
              my-map))
#'user/remove-item
user> (remove-item my-map "itemFour")
;;=> {:keyOne ("itemOne"),
;;    :keyTwo ("itemTwo" "itemThree"),
;;    :keyThree ("itemFive" "itemSix")}
You could also make up a handy function, map-val, that maps a function over a map's values:
(defn map-val [f data]
  (reduce-kv
    (fn [acc k v] (assoc acc k (f v)))
    {} data))
or, more concisely:
(defn map-val [f data]
  (reduce #(update % %2 f) data (keys data)))

user> (map-val inc {:a 1 :b 2})
;;=> {:a 2, :b 3}

(defn remove-item [my-map item]
  (map-val (partial remove #{item}) my-map))

user> (remove-item my-map "itemFour")
;;=> {:keyOne ("itemOne"),
;;    :keyTwo ("itemTwo" "itemThree"),
;;    :keyThree ("itemFive" "itemSix")}
I think your solution is mostly okay, but I would try to avoid the apply merge part, as you can easily recreate a map from a sequence with into. You could also use map instead of for, which I think is a little more idiomatic here since you don't use any of for's list-comprehension features.
(defn remove-item-from-map-value [m item]
  (->> m
       (map (fn [[k vs]]
              {k (remove #(= item %) vs)}))
       (into {})))
Another solution, much like @leetwinski's:
(defn remove-item [m i]
  (zipmap (keys m)
          (map (fn [v] (remove #(= % i) v))
               (vals m))))
Here's a one-liner which does this in an elegant way. The perfect function for me to use in this scenario is clojure.walk/prewalk. It traverses all of the sub-forms of the form you pass to it and transforms them with the provided fn:
(defn remove-item-from-map-value [data item]
  (clojure.walk/prewalk
    #(if (map-entry? %)
       [(first %) (remove #{item} (second %))]
       %)
    data))
The remove-item-from-map-value fn checks whether the current form is a map entry and, if so, removes the specified item from its value (the second element of the map entry, which is a [key value] vector).
The best thing about this approach is that it is completely extensible: you could decide to do different things for different types of forms, you can also handle nested forms, etc.
It took me some time to master this fn but once I got it I found it extremely useful!
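A quick usage sketch against the my-map from the tests above (note the vectors come back as sequences):
(remove-item-from-map-value my-map "itemFive")
;;=> {:keyOne ("itemOne"), :keyTwo ("itemTwo" "itemThree"), :keyThree ("itemFour" "itemSix")}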

How to transform a list of maps to a nested map of maps?

Getting data from the database as a list of maps (LazySeq) leaves me in need of transforming it into a map of maps.
I tried to 'assoc' and 'merge', but that didn't bring the desired result because of the nesting.
This is the form of my data:
(def data (list {:structure 1 :cat "A" :item "item1" :val 0.1}
{:structure 1 :cat "A" :item "item2" :val 0.2}
{:structure 1 :cat "B" :item "item3" :val 0.4}
{:structure 2 :cat "A" :item "item1" :val 0.3}
{:structure 2 :cat "B" :item "item3" :val 0.5}))
I would like to get it in the form
=> {1 {"A" {"item1" 0.1
            "item2" 0.2}
       "B" {"item3" 0.4}}
    2 {"A" {"item1" 0.3}
       "B" {"item3" 0.5}}}
I tried
(->> data
     (map #(assoc {} (:structure %) {(:cat %) {(:item %) (:val %)}}))
     (apply merge-with into))
This gives
{1 {"A" {"item2" 0.2}, "B" {"item3" 0.4}},
2 {"A" {"item1" 0.3}, "B" {"item3" 0.5}}}
By merging I lose some entries, but I can't think of any other way. Is there a simple way? I was even about to try to use specter.
Any thoughts would be appreciated.
If I'm dealing with nested maps, first stop is usually to think about update-in or assoc-in - these take a sequence of the nested keys. For a problem like this where the data is very regular, it's straightforward.
(assoc-in {} [1 "A" "item1"] 0.1)
;; =>
{1 {"A" {"item1" 0.1}}}
To consume a sequence into something else, reduce is the idiomatic choice. The reducing function is right on the edge of the complexity level I'd consider an anonymous fn for, so I'll pull it out instead for clarity.
(defn- add-val [acc line]
  (assoc-in acc [(:structure line) (:cat line) (:item line)] (:val line)))

(reduce add-val {} data)
;; =>
{1 {"A" {"item1" 0.1, "item2" 0.2}, "B" {"item3" 0.4}},
2 {"A" {"item1" 0.3}, "B" {"item3" 0.5}}}
Which I think was the effect you were looking for.
Roads less travelled:
As your sequence is coming from a database, I wouldn't worry about using a transient collection to speed the aggregation up. Also, now I think about it, dealing with nested transient maps is a pain anyway.
update-in would be handy if you wanted to add up any values with the same key, for example, but the implication of your question is that structure/cat/item tuples are unique and so you just need the grouping.
juxt could be used to generate the key structure - i.e.
((juxt :structure :cat :item) (first data))
[1 "A" "item1"]
but it's not clear to me that there's any way to use this to make the add-val fn more readable.
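For what it's worth, here is one way juxt could be folded into add-val; whether it reads better is a matter of taste:
(defn- add-val [acc line]
  (assoc-in acc ((juxt :structure :cat :item) line) (:val line)))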
You may continue to use your existing code. Only the final merge has to change:
(defn deep-merge [& xs]
  (if (every? map? xs)
    (apply merge-with deep-merge xs)
    (apply merge xs)))

(->> data
     (map #(assoc {} (:structure %) {(:cat %) {(:item %) (:val %)}}))
     (apply deep-merge))
;; =>
{1
{"A" {"item1" 0.1, "item2" 0.2},
"B" {"item3" 0.4}},
2
{"A" {"item1" 0.3},
"B" {"item3" 0.5}}}
Explanation: your original (apply merge-with into) only merges one level down. deep-merge above recurses into all nested maps to do the merge.
Addendum: #pete23 - one use of juxt I can think of is to make the function reusable. For example, we can extract arbitrary fields with juxt, then convert them to nested maps (with yet another function ->nested) and finally do a deep-merge:
(->> data
     (map (juxt :structure :cat :item :val))
     (map ->nested)
     (apply deep-merge))
where ->nested can be implemented like:
(defn ->nested [[k & [v & r :as t]]]
  {k (if (seq r) (->nested t) v)})
(->nested [1 "A" "item1" 0.1])
;; => {1 {"A" {"item1" 0.1}}}
One sample application (sum val by category):
(let [ks [:cat :val]]
  (->> data
       (map (apply juxt ks))
       (map ->nested)
       (apply (partial deep-merge-with +))))
;; => {"A" 0.6000000000000001, "B" 0.9}
Note deep-merge-with is left as an exercise for our readers :)
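(If you would rather skip the exercise, one possible sketch mirrors deep-merge above, applying the supplied function at the leaves:)
(defn deep-merge-with [f & xs]
  (if (every? map? xs)
    (apply merge-with (partial deep-merge-with f) xs)
    (apply f xs)))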
(defn map-values [f m]
  (into {} (map (fn [[k v]] [k (f v)])) m))

(defn- transform-structures [ss]
  (map-values (fn [cs]
                (into {} (map (juxt :item :val) cs)))
              (group-by :cat ss)))

(defn transform [data]
  (map-values transform-structures (group-by :structure data)))
then
(transform data)
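which, for the sample data, gives the same nested shape as the reduce/assoc-in version:
;; => {1 {"A" {"item1" 0.1, "item2" 0.2}, "B" {"item3" 0.4}},
;;     2 {"A" {"item1" 0.3}, "B" {"item3" 0.5}}}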

Getting date intervals from a vector

I have a day ordered vector in clojure that's something like
(def a [{:day #inst "2017-01-01T21:57:14.873-00:00" :balance 100.00},
{:day #inst "2017-01-05T21:57:14.873-00:00" :balance -50.00},
{:day #inst "2017-01-10T21:57:14.873-00:00" :balance -100.00},
{:day #inst "2017-01-14T21:57:14.873-00:00" :balance 50.00},
{:day #inst "2017-01-17T21:57:14.873-00:00" :balance -200.00}])
I would like to get all date intervals where the balance is negative. A period ends when the balance becomes positive at the next position, or when the balance changes value but stays negative, like:
[{:start #inst "2017-01-05T21:57:14.873-00:00"
:end #inst "2017-01-09T21:57:14.873-00:00"
:value -50.00},
{:start "2017-01-10T21:57:14.873-00:00"
:end "2017-01-13T21:57:14.873-00:00"
:value -100.00},
{:start "2017-01-17T21:57:14.873-00:00"
:value -200.00}]
I've found this and this but I couldn't adapt to my data. How can I do it?
Cheating a little with the dates by using this not yet implemented function, which is supposed to decrement a date:
(defn dec-date [d] d)
This would be one way to solve the problem:
(->> a
     (partition-by #(-> % :balance neg?))
     (drop-while #(-> % first :balance pos?))
     (mapcat identity)
     (map (juxt :day :balance))
     (partition 2 1)
     (map (fn [[[date-1 val-1] [date-2 val-2]]]
            (if (neg? val-1)
              {:start date-1
               :end (dec-date date-2)
               :value val-1}
              {:start date-2
               :value val-1}))))
;;=> ({:start "05/01", :end "10/01", :value -50.0}
;;    {:start "10/01", :end "14/01", :value -100.0}
;;    {:start "17/01", :value 50.0})
If dec-date was properly implemented then the first :end would be "09/01" rather than "10/01" and the second :end would be "13/01" rather than "14/01", which would be the correct answer.
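For completeness, a minimal sketch of dec-date, assuming the :day values are java.util.Date instances (which #inst literals are by default) and that "decrement" means one day earlier (this ignores time-zone/DST subtleties):
(defn dec-date [^java.util.Date d]
  ;; shift the instant back by 24 hours' worth of milliseconds
  (java.util.Date. (- (.getTime d) (* 24 60 60 1000))))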
Now hopefully an improved answer that will work for more edge cases:
(->> a
     (partition-by #(-> % :balance neg?))
     (drop-while #(-> % first :balance pos?))
     (mapcat identity)
     (map (juxt :day :balance))
     (partition-all 2 1)
     (keep (fn [[[date-1 val-1] [date-2 val-2]]]
             (cond
               (neg? val-1) (cond-> {:start date-1
                                     :value val-1}
                              date-2 (assoc :end (dec-date date-2)))
               (pos? val-1) nil
               :else {:start date-2
                      :value val-1}))))

Alternatives to repeated use of partial when using clojure's `comp`

Given a collection:
[{:key "key_1" :value "value_1"}, {:key "key_2" :value "value_2"}]
I would like to convert this to:
{"key_1" "value_1" "key_2" "value_2"}
A function to do this would be:
(defn long->wide [xs]
  (apply hash-map (flatten (map vals xs))))
I might simplify this using the threading macro:
(defn long->wide [xs]
  (->> xs
       (map vals)
       (flatten)
       (apply hash-map)))
This still requires explicitly naming the function argument, which I do nothing with other than pass to the first function. I might then rewrite this using comp to remove it:
(def long->wide
  (comp (partial apply hash-map) flatten (partial map vals)))
This, however, requires repeated use of partial, which to me is a lot of noise in the function.
Is there some function in Clojure that combines comp and ->>, so I can create a higher-order function without repeated use of partial and without having to create a new function?
Since many of the answers here already don't answer the original question, but
suggest different approaches, I put that one back up too.
I'd go with reduce and destructuring:
(reduce
  (fn [m {:keys [key value]}]
    (assoc m key value))
  {}
  [{:key "key_1" :value "value_1"}, {:key "key_2" :value "value_2"}])
Note that this will also work with string keys (which you mentioned in the comments) - note :strs:
(reduce
  (fn [m {:strs [key value]}]
    (assoc m key value))
  {}
  [{"key" "key_1" "value" "value_1"}, {"key" "key_2" "value" "value_2"}])
Another (point-free) version, when using keywords:
(partial into {} (map (juxt :key :value)))
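Usage sketch, reusing the long->wide name from the question:
(def long->wide (partial into {} (map (juxt :key :value))))
(long->wide [{:key "key_1" :value "value_1"} {:key "key_2" :value "value_2"}])
;;=> {"key_1" "value_1", "key_2" "value_2"}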
Since you mentioned in the comments that you are getting the values from a DB, there is also a chance that you can switch to just returning value tuples. Then the whole operation is just:
(into {} [["key_1" "value_1"]["key_2" "value_2"]])
Also note that the use of vals on a map and expecting "insertion order" is
dangerous. Small maps are ordered only by accident:
user=> (take 3 (zipmap (range 3) (range 3)))
([0 0] [1 1] [2 2])
user=> (take 3 (zipmap (range 100) (range 100)))
([0 0] [65 65] [70 70])
Another alternative to the nice answers:
(apply hash-map (mapcat vals [{:key "key_1" :value "value_1"}, {:key "key_2" :value "value_2"}]))
or:
((comp #(apply hash-map %) #(mapcat vals %)) [{:key "key_1" :value "value_1"}, {:key "key_2" :value "value_2"}])
which are exactly the same.
As ever with Clojure, there are many ways to solve most problems.
(partial #(reduce (fn [r m] (assoc r (m :key) (m :value)))
                  {}
                  %))
Not sure if the creation of anonymous functions violates your condition, but this isn't adding functions to the namespace, so I thought I'd throw it out there. It also has the benefit of not requiring the keys in the input maps to be keywords, since :key and :value can be replaced with values of any type because the map is in the function position. For example:
(partial #(reduce (fn [r m] (assoc r (m "key") (m "value")))
                  {}
                  %))

How to parse XML and get a vector of some attributes on an element

I know how to extract one attribute using zip-xml/attr, but how do I extract multiple attributes?
E.g. I have the following:
<table>
  <column name="col1" type="varchar" length="8"/>
  <column name="col2" type="varchar" length="16"/>
  <column name="col3" type="int" length="16"/>
</table>
The expected result is shown below. A silly way is to call zip-xml/attr for each attribute, but is there a more elegant way to do that?
[["co11" "varchar" 8] [["co12" "varchar" 16] [["co13" "int" 16]
My advice is to use a tree-walking function to extract the interesting data from the XML tree. clojure.walk has several of these, but here I use tree-seq from core clojure to just produce a seq of nodes and work on that. This function takes two functions - a branch? predicate which checks if a node can have children and a children function which gets them. I use :content for both, as tags with no nested tags produce nil, which is a falsey value and so it works also as a predicate.
(->> (clojure.xml/parse "res/doc.xml")  ;; source file for your XML
     (tree-seq :content :content)       ;; produce a seq by walking the tree
     (filter #(= :column (:tag %)))     ;; take only :column tags
     (mapv (comp vec vals :attrs)))     ;; collect the values of each :attrs map into a vector,
                                        ;; and collect those into a vector with mapv
Your desired output had unmatched square brackets, but I assume it should be like
[["col1" "varchar" "8"] ["col2" "varchar" "16"] ["col3" "int" "16"]]
which was my return value. However, this is potentially brittle - you're relying on the maps returned by clojure.xml/parse preserving the ordering of the attributes in the XML in order to know what the data means. That's not really part of the contract of maps. As an implementation detail it creates clojure.lang.PersistentStructMaps which apparently do have this feature, but it might not always be so.
Alternatively you could use just (mapv :attrs) to keep the whole of the map in there.
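If the ordering worry matters, a more defensive sketch names the attributes explicitly instead of relying on the order of vals:
(->> (clojure.xml/parse "res/doc.xml")
     (tree-seq :content :content)
     (filter #(= :column (:tag %)))
     (mapv (comp (juxt :name :type :length) :attrs)))
;;=> [["col1" "varchar" "8"] ["col2" "varchar" "16"] ["col3" "int" "16"]]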
The right solution depends on how large and complex the XML is and, to some extent, what you know about its structure. If it needs to be very generic, then you need quite a lot of logic to navigate the nodes etc. However, if it is a known format and you know which nodes you are interested in, it's pretty straightforward.
I used clojure.zip to create a zipper from the XML file and then clojure.data.zip.xml to extract the nodes/paths I was interested in. I then defined simple helper functions to process specific nodes. This was pretty much my first bit of Clojure and I've not yet gone back to refactor it and refine some of my very rough idioms based on what I've learnt since, but in the spirit of an example being worth 1000 words, here it is -
(ns arcis.models.nessus
(:use [taoensso.timbre :only [trace debug info warn error fatal]])
(:require [arcis.util :as util]
[arcis.models.db :as db]
[clojure.java.io :as io]
[clojure.xml :as xml]
[clojure.zip :as zip]
[clojure.data.zip.xml :as zx]))
(def nessus-host-keys [:hostname :host_fqdn
:system_type :operating_system
:operating_system_unsupported])
(def used-nessus-host-keys (conj nessus-host-keys
:host_start :host_end
:items :traceroute_hop_0 :traceroute_hop_1
:traceroute_hop_2 :traceroute_hop_3
:traceroute_hop_4 :traceroute_hop_5
:traceroute_hop_6 :traceroute_hop_7
:traceroute_hop_8 :traceroute_hop_9
:traceroute_hop_10 :traceroute_hop_11
:traceroute_hop_12 :traceroute_hop_13
:traceroute_hop_14 :traceroute_hop_15
:traceroute_hop_16 :traceroute_hop_17
:host_ip :patch_summary_total_cves
:cpe_0 :cpe_1 :cpe_2 :cpe_3 :cpe_4 :cpe_5
:cpe_6 :cpe_7 :cpe_8 :cpe_9))
(def nessus-item-keys [:port :svc_name :protocol :severity :plugin_id
:plugin_output])
(def used-nessus-item-keys (conj nessus-item-keys
:plugin_details
:plugin_name
:plugin_family))
(def nessus-plugin-keys [:plugin_id :plugin_name :plugin_family :fname
:script_version :plugin_type :exploitability_ease
:vuln_publication_date :cvss_temporal_data
:solution :cvss_temporal_score :risk_factor
:description :cvss_vector :synopsis
:patch_publication_date :exploit_available
:plugin_publication_date :plugin_modification_date
:cve :bid :exploit_framework_canvas :edb_id
:exploit_framework_metasploit :exploit_framework_core
:metasploit_name :canvas_package :osvdb :cwe
:cvss_temporal_vector :cvss_base_score :cpe
:exploited_by_malware])
(def used-nessus-plugin-keys (conj nessus-plugin-keys
:xref :see_also :cert
:attachment :iava :stig_severity :hp
:secunia :iawb :msft))
(def show-unprocessed true)
(defn log-unprocessed [title vls]
(if (and show-unprocessed
(seq vls))
(println (str "Unprocessed " title ": " vls))))
;;; parse nessus report
(defn parse-xref [xref]
{:xref (first (:content xref))})
(defn parse-see-also [see-also]
{:see_also (first (:content see-also))})
(defn parse-plugin [plugin]
{(util/db-keyword (name (:tag plugin))) (first (:content plugin))})
(defn parse-contents [cont]
(let [xref (mapv parse-xref (filter #(= (:tag %) :xref) cont))
see-also (mapv parse-see-also (filter #(= (:tag %) :see-also) cont))
details (reduce merge {}
(map parse-plugin
(remove #(or (= (:tag %) :xref)
(= (:tag %) :see-also)) cont)))]
(assoc details
:see_also see-also
:xref xref)))
(defn fix-item-keywords [item]
(let [ks (keys item)]
(into {}
(for [k ks]
[(util/db-keyword (name k))
(k item)]))))
(defn parse-item [item]
(let [attrs (fix-item-keywords (:attrs item))
contents (parse-contents (:content item))]
(assoc attrs
:plugin_output (:plugin_output contents)
:plugin_details (assoc (dissoc contents :plugin_output)
:plugin_id (:plugin_id attrs)
:plugin_family (:plugin_family attrs)))))
(defn parse-properties [props]
(into {}
(for [p props]
[(util/db-keyword (:name (:attrs p)))
(first (:content p))])))
(defn parse-host [h]
(let [items (map first (zx/xml-> h :ReportItem))
properties (:content (first (zx/xml1-> h :HostProperties)))]
(assoc (parse-properties properties)
:hostname (zx/attr h :name)
:items (mapv parse-item items))))
(defn parse-hosts [hosts]
(mapv parse-host hosts))
(defn parse-file [f]
(let [root (zip/xml-zip (xml/parse (io/file f)))
report-xml (zx/xml1-> root :Report)
hosts (zx/xml-> report-xml :ReportHost)]
{:report_name (zx/attr report-xml :name)
:policy (zx/text (zx/xml1-> root :Policy :policyName))
:hosts (parse-hosts hosts)}))
;;; insert nessus records into db
(defn mk-host-rec [scan-id host]
(let [[id err] (db/get-sequence-nextval "host_seq")]
(if (nil? err)
(assoc (util/build-map host nessus-host-keys)
:ipv4 (:host_ip host)
:scan_start (util/from-nessus-date (:scan_start host))
:scan_end (util/from-nessus-date (:scan_end host))
:total_cves (:patch_summary_total_cves host)
:id id
:scan_id scan-id)
nil)))
(defn insert-patches [p]
(when (seq p)
(db/insert-nessus-host-patch (first p))
(recur (rest p))))
(defn insert-host-patch [id host]
(let [p-keys (filter #(re-find #"patch_summary_*" %) (map name (keys host)))
recs (map (fn [s]
{:id (first (db/get-sequence-nextval "patch_seq"))
:host_id id
:summary ((keyword (str "patch_summary_txt_" s)) host)
:cve_num ((keyword (str "patch_summary_cve_num_" s)) host)
:cves ((keyword (str "patch_summary_cves_" s)) host)})
(filter seq
(map #(second (re-find #"patch_summary_txt_(.*)" %))
p-keys)))]
(insert-patches recs)
(util/remove-keys host (map keyword p-keys))))
(defn mk-item-rec [host-id item]
(let [[id err] (db/get-sequence-nextval "item_seq")]
(assoc (util/build-map item nessus-item-keys)
:host_id host-id
:id id)))
(defn insert-item [host-id item]
(let [rec (mk-item-rec host-id item)
not-done (keys (util/remove-keys item used-nessus-item-keys))]
(log-unprocessed "Item Keys" not-done)
(db/insert-nessus-report-item rec)
(:plugin_id item)))
(defn mk-plugin-rec [item]
(let [rec (util/build-map (:plugin_details item) nessus-plugin-keys)
not-used (keys (util/remove-keys (:plugin_details item)
used-nessus-plugin-keys))]
(log-unprocessed "Plugin Keys" not-used)
(assoc rec
:vuln_publication_date (util/from-nessus-date
(:vuln_publication_date rec))
:patch_publication_date (util/from-nessus-date
(:patch_publication_date rec))
:plugin_publication_date (util/from-nessus-date
(:plugin_publication_date rec))
:plugin_modification_date (util/from-nessus-date
(:plugin_modification_date rec)))))
(defn insert-xref [plugin-id xrefs]
(when (seq xrefs)
(let [xref {:id (first (db/get-sequence-nextval "xref_seq"))
:plugin_id plugin-id
:xref (:xref (first xrefs))}]
(db/insert-nessus-xref xref)
(recur plugin-id (rest xrefs)))))
(defn insert-see-also [plugin-id see-also]
(when (seq see-also)
(let [sa {:id (first (db/get-sequence-nextval "ref_seq"))
:plugin_id plugin-id
:reference (:see_also (first see-also))}]
(db/insert-nessus-ref sa)
(recur plugin-id (rest see-also)))))
(defn insert-plugin [item]
(let [rec (mk-plugin-rec item)
xref (:xref (:plugin_details item))
see-also (:see_also (:plugin_details item))]
(if (seq xref)
(insert-xref (:plugin_id rec) xref))
(if (seq see-also)
(insert-see-also (:plugin_id rec) see-also))
(db/upsert-nessus-plugin rec)))
(defn insert-items [host-id items plugin-set]
(if (empty? items)
plugin-set
(let [p (insert-item host-id (first items))]
(if-not (contains? plugin-set p)
(insert-plugin (first items)))
(recur host-id (rest items) (conj plugin-set p)))))
(defn insert-host [scan-id host plugin-set]
(if-let [h-rec (mk-host-rec scan-id host)]
(let [[v err] (db/insert-nessus-host h-rec)
items (:items host)]
(if (nil? err)
(let [host2 (insert-host-patch (:id h-rec) host)]
(log-unprocessed "Host Keys" (keys (util/remove-keys
host2 used-nessus-host-keys)))
(insert-items (:id h-rec) items plugin-set))
plugin-set))
plugin-set))
(defn insert-hosts
([id hosts]
(insert-hosts id hosts #{}))
([id hosts plugins]
(if (empty? hosts)
plugins
(let [plugin-set (insert-host id (first hosts) plugins)]
(recur id (rest hosts) plugin-set)))))
(defn mk-scan-record [id report]
{:id id
:name (:report_name report)
:scan_dt (util/to-sql-date)
:policy (:policy report)
:entered_dt (util/to-sql-date)})
(defn store-report [update-plugins report]
(let [[id err] (db/get-sequence-nextval "nscan_seq")
scan-rec (mk-scan-record id report)]
(if (nil? err)
(let [[v e] (db/insert-nessus-scan scan-rec)]
(if (nil? e)
(if update-plugins
(let [plugin-list (set (first (db/select-nessus-plugin-ids)))]
[(insert-hosts id (:hosts report) plugin-list) nil])
[(insert-hosts id (:hosts report)) nil])
[v e]))
[id err])))
(defn process-nessus-report [update-plugins filename]
(let [report (parse-file filename)]
(println (str "Report: " (:report_name report)
"\nPolicy: " (:policy report)
"\nHost Records: " (count (:hosts report))))
(store-report update-plugins report)))
Magos's answer using tree-seq is perfectly fine, but there's no reason to abandon zippers; filtering using zippers is more succinct and arguably the "Clojure" way. (Note this example uses data.xml ([org.clojure/data.xml "0.0.8"]) instead of clojure.xml.)
(require '[clojure.data.zip.xml :as zf])
(require '[clojure.zip :as z])
(def ex
  "<table>
     <column name=\"col1\" type=\"varchar\" length=\"8\"/>
     <column name=\"col2\" type=\"varchar\" length=\"16\"/>
     <column name=\"col3\" type=\"int\" length=\"16\"/>
   </table>")

(let [x (z/xml-zip (clojure.data.xml/parse-str ex))]
  (->> (zf/xml-> x :column) ;; equivalent to (->> tree-seq ... filter)
       flatten
       (keep :attrs)
       (map vals)))
;>>> (("col1" "varchar" "8") ("col2" "varchar" "16") ("col3" "int" "16"))
But the xml-> macro simply applies functions in order, so you can do the following:
(let [x (z/xml-zip (clojure.data.xml/parse-str ex))]
  (->> (zf/xml-> x :column #(keep :attrs %))
       (map vals)))
;>>> (("col1" "varchar" "8") ("col2" "varchar" "16") ("col3" "int" "16"))