Create a record based on the string class name - clojure

I'm trying to create an event store where I have a table somewhat like this:
CREATE TABLE domain_events (
  id        serial PRIMARY KEY,
  timestamp timestamptz,
  entity_id int,
  type      text,
  data      jsonb
);
And I have a namespace like
(ns my-domain.domain-events)

(defrecord PurchaseOrderCreated
  [id timestamp purchase-order-id created-by])

(defrecord PurchaseOrderCancelled
  [id timestamp purchase-order-id cancelled-by])
So type is a string holding the fully qualified class name, something like my_domain.domain_events.PurchaseOrderCreated, which comes from getting the type of a record, e.g. (type (->PurchaseOrderCreated ...)). I should note that stringifying (type the-event) actually produces a string prefixed with class, such as class my_domain.domain_events.PurchaseOrderCreated, so I am trimming this off before storing it in the DB.
I'm trying to figure out how I can retrieve these event rows from the database and rehydrate them to domain events. I feel like I'm close but just haven't been able to get all the pieces.
I've tried to use new to construct a new record but I seem to have a hard time converting the string classname to a record.
(new (resolve (symbol "my_domain.domain_events.PurchaseOrderCreated")) prop1 prop2 ...)
Plus I'm not sure how easy it's going to be to use new, since my array of properties will need to be in the correct order. It may be better to use map->PurchaseOrderCreated, but I'm still not sure how to dynamically resolve this constructor from the string class name.
Can anybody advise on what the best approach would be here?

The following should work, though I'm not sure if there's a more idiomatic way to do it:
((resolve (symbol "my-domain.domain-events"
                  (str "map->"
                       "PurchaseOrderCreated")))
 {:id 123})
Note that the map->... constructor is a var living in the namespace, and namespace names use hyphens (my-domain.domain-events) while the class name you stored uses underscores, so you'll need to translate between the two. Also make sure the namespace has been required before resolving into it. symbol can take a ns argument:
https://clojuredocs.org/clojure.core/symbol
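Putting the pieces together, here is a sketch of a rehydration helper (the record and the row shape are illustrative stand-ins for your domain events). It translates the stored class name back into the map-> constructor var, converting the underscores of the class name to the hyphens used by the namespace:

```clojure
(require '[clojure.string :as str])

;; Stand-in for the real domain event (illustrative; substitute your
;; my-domain.domain-events records).
(defrecord PurchaseOrderCreated [id timestamp purchase-order-id created-by])

;; Translate a stored class name like "my_domain.domain_events.PurchaseOrderCreated"
;; into the map->... constructor var. The class name uses underscores, but the
;; namespace holding the var uses hyphens, so we convert back.
(defn class-name->map-ctor [class-name]
  (let [dot     (str/last-index-of class-name ".")
        ns-part (str/replace (subs class-name 0 dot) "_" "-")
        rec     (subs class-name (inc dot))]
    (resolve (symbol ns-part (str "map->" rec)))))

;; Rehydrate one DB row of the shape {:type <string> :data <parsed jsonb map>}.
(defn rehydrate [{:keys [type data]}]
  (if-let [ctor (class-name->map-ctor type)]
    (ctor data)
    (throw (ex-info "Unknown event type" {:event-type type}))))

(rehydrate {:type "user.PurchaseOrderCreated"
            :data {:id 123 :created-by "alice"}})
;; returns a PurchaseOrderCreated record with those keys set
```

Using the map-> constructor also sidesteps the positional-argument ordering problem with new. The record's namespace must already be loaded (require'd), otherwise resolve returns nil.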

Related

defrecord holding an incrementing `vector` / `java class`

Clojurians:
Thank you for your attention on this question !
Here is the case I'm thinking about: I want to define an immutable bank account record
(defrecord account [name balance statements])
(def cash-account (->account :cash 0.0 []))
I have a function that will deposit money into that account, and a new account record shall be returned:
(.deposit cash-account 100.0)
;; returns a new cash-account with attributes
;; name = :cash, balance = 100.0, statements = [[(2018,1,1), 100.0]]
With more and more deposits and withdrawals happening, the statements field will keep expanding with more and more transactions inside.
My question is:
after 1000 transactions, there are 1000 elements in the statements field of the latest account returned.
When the 1001st transaction happens:
will Clojure *copy* the 1000 transactions in the statements field of the old account record, append the new transaction, and save them all into the new account record?
or will Clojure just *append* the new transaction to the old account record and return a new pointer to it, making it look like a new account record, the way a persistent map works?
Appreciate your help & many thanks
From https://clojure.org/reference/datatypes#_deftype_and_defrecord :
defrecord provides a complete implementation of a persistent map
deftype supports mutable fields, defrecord does not
so, in your case, it will not copy the transactions; instead it will use a persistent data structure with structural sharing, so it will look like the transaction was appended.
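A small sketch of what that looks like in practice (deposit here is a plain function rather than a Java method, and the date handling is simplified):

```clojure
;; Immutable account record, as in the question (capitalized per convention).
(defrecord Account [name balance statements])

;; deposit returns a *new* Account; the old one is untouched. The new
;; :statements vector shares structure with the old one, so the existing
;; transactions are not copied.
(defn deposit [account amount]
  (-> account
      (update :balance + amount)
      (update :statements conj [(java.time.LocalDate/now) amount])))

(def cash-account (->Account :cash 0.0 []))
(def after (deposit cash-account 100.0))

(:balance cash-account)      ;; => 0.0  (unchanged)
(:balance after)             ;; => 100.0
(count (:statements after))  ;; => 1
```

After 1000 deposits, the 1001st call builds the new statements vector by structural sharing with the previous one, so the per-update cost is effectively constant rather than a 1000-element copy.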
Here are some more docs you should also check:
https://www.braveclojure.com/functional-programming/
https://clojure.org/guides/learn/sequential_colls
https://purelyfunctional.tv/guide/clojure-collections/
https://youtu.be/lJr6ot8jGQE

Slick: get table name

With a table definition like this one:
class Test(_tableTag: Tag) extends Table[TestRow](_tableTag, "test") { ... }
how can I get back the table name ("test", the string passed alongside the Tag) from an instance of Test?
The thing is I can perfectly execute some queries like db run TableQuery[Test].result, but to write raw sql, I need the table name.
If you look at Slick's TableQuery ScalaDoc there is a method called baseTableRow which says:
def baseTableRow: E
Get the "raw" table row that represents the table itself, as opposed
to a Path for a variable of the table's type. This method should
generally not be called from user code.
So you go to the ScalaDoc for E <: AbstractTable's definition (AbstractTable) and find what you need, namely val tableName: String. The trick here is knowing where to look (possible implicit conversions and other stuff...), that is, how to navigate the Scala(Doc) rabbit hole. xD

Recommended way to declare Datomic schema in Clojure application

I'm starting to develop a Datomic-backed Clojure app, and I'm wondering what's the best way to declare the schema, in order to address the following concerns:
Having a concise, readable representation for the schema
Ensuring the schema is installed and up-to-date prior to running a new version of my app.
Intuitively, my approach would be the following:
Declaring some helper functions to make schema declarations less verbose than with the raw maps
Automatically installing the schema as part of the initialization of the app (I'm not yet knowledgeable enough to know if that always works).
Is this the best way to go? How do people usually do it?
I use Conformity for this; see the Conformity repository. There is also a very useful blog post from Yeller here which will guide you through using Conformity.
Raw maps are verbose, but have some great advantages over using a high-level API:
Schema is defined in transaction form, what you specify is transactable (assuming the word exists)
Your schema is not tied to a particular library or spec version, it will always work.
Your schema is serializable (edn) without calling a spec API.
So you can store and deploy your schema more easily in a distributed environment since it's in data-form and not in code-form.
For those reasons I use raw maps.
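For illustration, here is what a single attribute looks like in raw-map form (the attribute name and doc string are just examples); this is plain edn that can live in a resources/schema.edn file and be transacted as-is:

```clojure
[{:db/id                 #db/id [:db.part/db]
  :db/ident              :person/name
  :db/valueType          :db.type/string
  :db/cardinality        :db.cardinality/one
  :db/doc                "A person's name"
  :db.install/_attribute :db.part/db}]
```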
Automatically installing schema.
This I don't do either.
Usually when you make a change to your schema many things may be happening:
Add new attribute
Change existing attribute type
Create full-text for an attribute
Create new attribute from other values
Others
These may require you to change your existing data in some non-obvious, non-generic way, in a process which may take some time.
I do use some automation for applying a list of schemas and schema changes, but always in a controlled "deployment" stage when more things regarding data updating may occur.
Assuming you have users.schema.edn and roles.schema.edn files:
(require '[datomic-manage.core :as manager])
(manager/create uri)
(manager/migrate uri [:users.schema
                      :roles.schema])
For #1, datomic-schema might be of help. I haven't used it, but the example looks promising.
My preference (and I'm biased, as the author of the library) lies with datomic-schema - It focusses on only doing the transformation to normal datomic schema - from there, you transact the schema as you would normally.
I am looking to use the same data to calculate the schema migration between the live Datomic instance and the definitions - so that the enums, types and cardinality get changed to conform to your definition.
The important part (for me) of datomic-schema is that the exit path is very clean - If you find it doesn't support something (that I can't implement for whatever reason) down the line, you can dump your schema as plain edn, save it off and remove the dependency.
Conformity will be useful beyond that if you want to do some kind of data migration, or more specific migrations (cleaning up the data, or renaming to something else first).
Proposal: use transaction functions to make declaring schema attributes less verbose in EDN, thus preserving the benefits of declaring your schema in EDN as demonstrated in @Guillermo Winkler's answer.
Example:
;; defining the helper function
[{:db/id    #db/id [:db.part/user]
  :db/doc   "Helper function for defining entity fields schema attributes in a concise way."
  :db/ident :utils/field
  :db/fn    #db/fn {:lang     :clojure
                    :requires [[datomic.api :as d]]
                    :params   [_ ident type doc opts]
                    :code     [(cond-> {:db/cardinality        :db.cardinality/one
                                        :db/fulltext           true
                                        :db/index              true
                                        :db.install/_attribute :db.part/db
                                        :db/id                 (d/tempid :db.part/db)
                                        :db/ident              ident
                                        :db/valueType          (condp get type
                                                                 #{:db.type/string  :string}  :db.type/string
                                                                 #{:db.type/boolean :boolean} :db.type/boolean
                                                                 #{:db.type/long    :long}    :db.type/long
                                                                 #{:db.type/bigint  :bigint}  :db.type/bigint
                                                                 #{:db.type/float   :float}   :db.type/float
                                                                 #{:db.type/double  :double}  :db.type/double
                                                                 #{:db.type/bigdec  :bigdec}  :db.type/bigdec
                                                                 #{:db.type/ref     :ref}     :db.type/ref
                                                                 #{:db.type/instant :instant} :db.type/instant
                                                                 #{:db.type/uuid    :uuid}    :db.type/uuid
                                                                 #{:db.type/uri     :uri}     :db.type/uri
                                                                 #{:db.type/bytes   :bytes}   :db.type/bytes
                                                                 type)}
                                 doc  (assoc :db/doc doc)
                                 opts (merge opts))]}}]
;; ... then (in a later transaction) using it to define application model attributes
[[:utils/field :person/name :string "A person's name" {:db/index true}]
 [:utils/field :person/age  :long   "A person's age"  nil]]
I would suggest using Tupelo Datomic to get started. I wrote this library to simplify Datomic schema creation and ease understanding, much like you allude to in your question.
As an example, suppose we're trying to keep track of information for the world's premier spy agency. Let's create a few attributes that will apply to our heroes & villains (see the executable code in the unit test).
(:require [tupelo.datomic :as td]
          [tupelo.schema  :as ts])
; Create some new attributes. Required args are the attribute name (an optionally namespaced
; keyword) and the attribute type (full listing at http://docs.datomic.com/schema.html). We wrap
; the new attribute definitions in a transaction and immediately commit them into the DB.
(td/transact *conn*
  ;;                required              required          zero-or-more
  ;;                <attr name>           <attr value type> <optional specs ...>
  (td/new-attribute :person/name          :db.type/string   :db.unique/value)     ; each name is unique
  (td/new-attribute :person/secret-id     :db.type/long     :db.unique/value)     ; each secret-id is unique
  (td/new-attribute :weapon/type          :db.type/ref      :db.cardinality/many) ; one may have many weapons
  (td/new-attribute :location             :db.type/string)                        ; all default values
  (td/new-attribute :favorite-weapon      :db.type/keyword))                      ; all default values
For the :weapon/type attribute, we want to use an enumerated type since there are only a limited number of choices available to our antagonists:
; Create some "enum" values. These are degenerate entities that serve the same purpose as an
; enumerated value in Java (these entities will never have any attributes). Again, we
; wrap our new enum values in a transaction and commit them into the DB.
(td/transact *conn*
  (td/new-enum :weapon/gun)
  (td/new-enum :weapon/knife)
  (td/new-enum :weapon/guile)
  (td/new-enum :weapon/wit))
Let’s create a few antagonists and load them into the DB. Note that we are just using plain Clojure values and literals here, and we don’t have to worry about any Datomic specific conversions.
; Create some antagonists and load them into the db. We can specify some of the attribute-value
; pairs at the time of creation, and add others later. Note that whenever we are adding multiple
; values for an attribute in a single step (e.g. :weapon/type), we must wrap all of the values
; in a set. Note that the set implies there can never be duplicate weapons for any one person.
; As before, we immediately commit the new entities into the DB.
(td/transact *conn*
  (td/new-entity {:person/name "James Bond" :location "London"    :weapon/type #{:weapon/gun :weapon/wit}})
  (td/new-entity {:person/name "M"          :location "London"    :weapon/type #{:weapon/gun :weapon/guile}})
  (td/new-entity {:person/name "Dr No"      :location "Caribbean" :weapon/type :weapon/gun}))
Enjoy!
Alan

Congomongo fetch returns nil

I have a simple app that should return a single record from a Mongo database.
(def movie (m/fetch-one :movie
                        :where {:_id id}))
The id is correct, but I keep getting nil back from this.
Here is what my :_id looks like:
:_id #<ObjectId 5245ca7d44aed3e864a1c830>
I guess my problem is somewhere here, but I just don't have enough experience with Clojure to find the error.
In this case the id passed to :where is "5245ca7d44aed3e864a1c830".
I think the problem is that your id is a string instead of an ObjectId. To create an ObjectId, use the object-id function. Note that there is also a fetch-by-id fn.
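A sketch of both options (assuming congomongo is required as m, and id is bound to the hex string from the question; this needs a live connection to run):

```clojure
;; Wrap the hex string into an ObjectId so it can match the _id field:
(def movie
  (m/fetch-one :movie
               :where {:_id (m/object-id id)}))

;; ...or use fetch-by-id directly:
(m/fetch-by-id :movie (m/object-id id))
```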

JPQL 2.0 - query entities based on superclass entity field

I have an Entity (not a MappedSuperclass) Person (with id, name, surname).
I also have an Entity Employee extends Person (with other attributes, unimportant).
The inheritance strategy is single table.
Now I want to create a namedQuery like this:
SELECT emp FROM Employee emp WHERE emp.name = ?1
In the IDE I get:
the state field path emp.name cannot be resolved to a valid type
I think the problem is that the attribute belongs to the superclass entity.
So far, I haven't found any solution other than using the TYPE operator to perform a selective query on Employee instances.
I'd like to perform the query above. Is that possible?
I'm on EclipseLink/JPA 2.0
Your JPQL seems valid. Did you try it at runtime? It could just be an issue with your IDE.
(Please include your code.)
Person has to be annotated with @MappedSuperclass.
http://www.objectdb.com/api/java/jpa/MappedSuperclass
Furthermore, you should use named parameters, e.g. :name instead of ?1.