Context
I am writing an app in Clojure, following the functional programming paradigm. The app has two HTTP endpoints: /rank and /invite. /rank ranks a list of customers based on their scores; /invite receives an invite from one customer to another, which should change the scores of some customers.
Problem
Customer data is kept in a single vector of maps called record.
Putting aside referential transparency for a moment, record should be a resource shared between the endpoints: one reads it and uses it in a ranking function to respond to an HTTP request, while the other reads it and updates the scores in it.
Now, with functional programming in mind, record cannot be updated in place, so the /invite endpoint should read it and return a new record'. The problem is that the /rank endpoint is set up to use record, but once a new record' is generated, /rank should use that instead of the original.
My ideas for solving this
I understand that in this context the whole app cannot be, functionally speaking, fully pure. It reads its initial input from a file and receives requests from the external environment, which makes the functions dealing with those parts not referentially transparent. Almost every program has these small portions of non-functional code. But since this app is just an exercise in functional programming, I am trying not to add more impure functions, so I am not persisting record to a database or anything similar; if I were, the problem would be solved, because I could just call a function to update record in the database.
My best idea so far: the endpoints are created with Compojure's routes function, so in the /invite endpoint I would process the new record' vector and recreate the /rank endpoint, supplying it with record'. Recreating /rank is the part I am struggling with. I tried calling routes again, redefining all the endpoints, in the hope that it would override the original call, but, as expected (Compojure itself follows functional principles), routes are immutable once created: a new call to routes just creates new routes that are left in the void, never attached to the incoming HTTP requests.
So, is it possible to do what I want with purely functional code? Or is breaking referential transparency unavoidable, meaning I should persist record to a file or database in order to update it?
P.S.: I don't know if this is relevant, but I am new to Clojure and to web programming in general.
Clojure (in contrast to Haskell) is not pure; it has its own constructs for managing changes to shared state. It doesn't isolate impurity with a type system (like Haskell's IO monad), but instead promotes using pure functions and managing state with different reference types (atom, agent, ref), each defining clear semantics for how and when the state is changed.
For your scenario Clojure's atom would be the simplest solution. It provides a clear contract on how its state is managed.
I would create a var holding your record within an atom:
(def record (atom [])) ;; initial record is empty
Then in your /rank endpoint you can read its current value using deref, or with @ as syntactic sugar:
(GET "/rank" []
  ;; deref the atom to get an immutable snapshot of the current record
  (calculate-rank @record))
Whereas your /invite endpoint can update the record value atomically using swap!:
(POST "/invite/:id" [id]
  (invite id)
  ;; swap! atomically replaces the atom's value with
  ;; (calculate-new-rank <current value> id)
  (swap! record calculate-new-rank id)
  (create-response))
Your calculate-new-rank function would look like this:
(defn calculate-new-rank [current-record id]
  ;; do some calculations
  ;; create a new record value and return it
  (let [new-record ...]
    new-record))
Your function will be called with the current value stored in the atom (plus any additional arguments you pass to swap!), and its result will be installed as the new value of the atom.
Related
A repository pattern is there to abstract away the actual data source, and I do see a lot of benefits in that. But a repository should not use IQueryable (to prevent leaking DB information), and it should always return domain objects, not DTOs or POCOs; it is this last point I have trouble getting my head around.
If a repository always has to return a domain object, doesn't that mean it fetches far too much data most of the time? Let's say it returns an employee domain object with forty properties, and the service and view layers consuming that object only use five of them.
It means the database has fetched a lot of unnecessary data and pumped it across the network. Doing that with one object is hardly noticeable, but if millions of records are pushed across that way and much of the data is thrown away every time, isn't that considered bad behavior?
Yes, when adding, editing, or deleting the object you will use the entire object, but reading the entire object and pushing it to another layer that uses only a fraction of it is not using the underlying database and network in the most optimal way. What am I missing here?
There's nothing preventing you from having a separate read model (which could be a separately stored projection of the domain, or a query-time projection) and separating out the command and query concerns: CQRS.
If you then put something like GraphQL in front of your read side then the consumer can decide exactly what data they want from the full model down to individual field/property level.
Your commands still interact with the full domain model as before (except where it's a performance no-brainer to use set-based operations).
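To make the read side concrete, here is a minimal sketch of a query-time projection, assuming an EF-style LINQ provider; MyContext, Employee, and EmployeeListItem are illustrative names, not from the question:
using System;
using System.Collections.Generic;
using System.Linq;

// Slim read model: only the five fields the view actually uses.
public class EmployeeListItem
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Department { get; set; }
    public string Email { get; set; }
    public DateTime HireDate { get; set; }
}

public static class EmployeeQueries
{
    // MyContext is a hypothetical DbContext exposing an Employees set.
    public static IList<EmployeeListItem> GetEmployeeList(MyContext context)
    {
        // The Select is translated to SQL, so only these five columns are
        // read and sent across the network, not all forty properties.
        return context.Employees
            .Select(e => new EmployeeListItem
            {
                Id = e.Id,
                Name = e.Name,
                Department = e.Department,
                Email = e.Email,
                HireDate = e.HireDate
            })
            .ToList();
    }
}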
In Glimmer.js, what is the best way to reset a tracked property to an initial value without using the constructor?
Note: I cannot use the constructor because it is only called once, on the initial page render, and never again on subsequent page clicks.
There are two parts to this answer, but the common theme between them is that they emphasize switching from an imperative style (explicitly setting values in a lifecycle hook) to a declarative style (using true one-way data flow and/or using decorators to clearly indicate where you’re doing some kind of transformation of local state based on arguments).
Are you sure you need to do that? A lot of times people think they do and they should actually just restructure their data flow. For example, much of the time in Ember Classic, people reached for a pattern of "forking" data using hooks like didInsertElement or didReceiveAttrs. In Glimmer components (whether in Ember Octane or in standalone Glimmer.js), it's idiomatic instead to simply manage your updates in the owner of the data: really doing data-down-actions-up.
Occasionally, it does actually make sense to create local copies of tracked data in a component: for example, when you want a clean separation between the data coming from your API and the way you handle data in a form (because user interfaces are API boundaries!). In those scenarios, the @localCopy and @trackedReset decorators from tracked-toolbox are great solutions.
@localCopy does roughly what its name suggests. It creates a local copy of data passed in via arguments, which you can change locally via actions, but which also switches back to the argument if the argument value changes.
@trackedReset creates some local state which resets when an argument updates. Unlike @localCopy, the state is not a copy of the argument; it just needs to reset when the argument updates.
With either of these approaches, you end up with a much more "declarative" data flow than in the old Ember Classic approach: "forking" the data is done via decorators (the second approach), and much of the time you don't end up forking it at all because you just push the changes back up to the owner of the original data (the first approach).
I'm trying to wrap my head around the difference between a "collection" and a "store" in REST. From what I've read so far,
a collection is:
"a server-managed directory of resources"
and a store is a:
"client-managed resource repository"
I found this post: How "store" REST archetype isn't creating a new resource and a new URI?
but it didn't really help me clarify the difference. I mean, I understand one is controlled by the server and the other by the client... but can someone give me a concrete example of what a store might be in a real world application?
I *think* it's something like this:
GET http://myrestapplication.com/widgets/{widget_id} -- retrieves a widget from db
POST http://myrestapplication.com/widgets/{widget_id} -- creates a new widget from db
PUT http://myrestapplication.com/widgets/{widget_id},[list of updated parms & their vals] -- update widget
PUT http://myrestapplication.com/users/johndoe/mywishlist/{widget_id} -- updates john doe's profile to add a widget that already exists in the database... but links to it as his favorite one or one that he wants to buy
Is this correct?
If so, could the last PUT also be expressed as a POST somehow?
EDIT 1
I found an online link to the book I'm reading, where it makes the distinction between the two:
https://books.google.ca/books?id=4lZcsRwXo6MC&pg=PA16&lpg=PA16&dq=A+store+is+a+client-managed+resource+repository.+A+store+resource+lets+an+API+client:+put+resources+in,+get+them+back+out,+and+decide+when+to+delete+them&source=bl&ots=F4CkbFkweL&sig=H6eKZMPR_jQdeBZkBL1h6hVkK_E&hl=en&sa=X&ei=BB-vVJX6HYWvyQTByYHIAg&ved=0CB0Q6AEwAA#v=onepage&q=A%20store%20is%20a%20client-managed%20resource%20repository.%20A%20store%20resource%20lets%20an%20API%20client%3A%20put%20resources%20in%2C%20get%20them%20back%20out%2C%20and%20decide%20when%20to%20delete%20them&f=false
REST uses HTTP verbs to manipulate resources. Full stop. That's it. To build some classes of browser-based application, developers sometimes use local storage (a store), but that has absolutely nothing to do with REST (in fact, it's the opposite). Collections are a special consideration in REST-based API design because the REST principles place considerable constraints on how they are represented in the results of your queries; special consideration also because there are no standards on how these things should be represented and accessed if you're using anything other than HTML as a resource type.
Edit:
REST suggests that when we ask for a resource we receive that resource, and only that resource, and that things referenced by that resource are returned as links, not as data. This mimics the HTTP standard, by which we return the requested page and links to other pages rather than embedding the linked pages. So our resources should return links to related resources, not the resources themselves.
So, what about collections?
Let's use as an example a college management system that has Course objects, each of which contains a huge list of Students.
When I GET the course I don't want to have the collection of students returned as an embedded list, because that could be huge and because my user might not be interested. Instead, I want to know that the course has a students collection and I want to be able to query that collection separately (when I need to) and I want to be able to page it on demand. For this to work, the course needs to link to the students collection URL (maybe with an appropriate type so that my code knows how to handle the link). Then, I want to use the given collection's url to request a paged list of resources. In this example, the collection's url could be something like: course/1/students, with the convention that I can add paging info to the search string to constrain the results with something like course/1/students?page=1&count=10. Embedding the students collection into the course resource would be a violation of REST. I would not be returning a course, I'd be returning course-and-students.
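A hedged sketch of that shape in ASP.NET Core (the framework choice and all names here are illustrative, not from the post):
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("course")]
public class CourseController : ControllerBase
{
    // GET course/1: return the course plus a link to its students collection,
    // not the students themselves.
    [HttpGet("{id}")]
    public IActionResult GetCourse(int id) =>
        Ok(new
        {
            id,
            title = "Example Course",
            links = new[] { new { rel = "students", href = $"/course/{id}/students" } }
        });

    // GET course/1/students?page=1&count=10: page the collection on demand.
    [HttpGet("{id}/students")]
    public IActionResult GetStudents(int id, int page = 1, int count = 10)
    {
        var students = LoadStudents(id, page, count); // stub for the real query
        return Ok(students);
    }

    private static object[] LoadStudents(int id, int page, int count) => new object[0];
}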
We have Unit of Work implemented in Entity Framework: when we use ObjectContext and make changes to an entity, the changes are tracked, and on SaveChanges they are all reflected in the underlying database.
But what if I want to track changes on my own custom class, so that every modification is tracked and then sent through a web service call?
I have a web service which provides some data; that data is displayed in a data grid and may then be modified. I want to track all the changes and then be able to send back through the web service only the data that has been modified. Is there any solution for that, like Entity Framework or POCO or whatever? Or do I have to implement my own Unit of Work pattern for it?
Change tracking works only when the entity is attached to the context. There is a special type of entity called self-tracking entities (STEs) which can track changes on the client side when exposed through a web service, but those classes are still your primary entities (not custom objects), and they apply their tracked state directly to the context.
What you describe has nothing to do with the unit-of-work pattern. You are looking for the change set pattern, which passes only the differences back to the service. Implementing such classes for your custom objects is completely up to you; .NET doesn't provide them generically, but it does offer two implementations of the change set pattern:
the self-tracking entities for EF mentioned above
DataSet and related classes
Both of these implementations transfer all data by default (moreover, DataSets at least carry both the old and the new state in the message), and both DataSets and STEs share the same limitation: they interoperate very poorly.
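To see the change set idea in action with a DataSet (a minimal sketch; the table and column names are made up):
using System;
using System.Data;

class ChangeSetDemo
{
    static void Main()
    {
        var table = new DataTable("Person");
        table.Columns.Add("Id", typeof(int));
        table.Columns.Add("Name", typeof(string));
        table.Rows.Add(1, "Alice");
        table.Rows.Add(2, "Bob");
        table.AcceptChanges();            // baseline: all rows are now Unchanged

        table.Rows[0]["Name"] = "Alicia"; // this edit is tracked as Modified

        // GetChanges extracts only the diff (keeping original and current row
        // versions), which is the part that would be serialized back.
        DataTable diff = table.GetChanges(DataRowState.Modified);
        Console.WriteLine(diff.Rows.Count); // prints 1
    }
}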
Change tracking at the property level should not be left to the client of a WCF call, for a variety of reasons. If you use a DTO (Data-Transfer Object) pattern, you should be able to keep your individual objects small enough to avoid having any significant overhead from sending the entire changed object across the wire. Then, on the server side, you load the current version of the object out of your database, set the values provided by the DTO, and let Entity Framework track the changed properties.
public void SavePerson(Person person)
{
    using (var context = _contextFactory.Get())
    {
        var persistentPerson = context.People.Single(p => p.PersonId == person.PersonId);
        persistentPerson.FirstName = person.FirstName;
        // etc. (This could be done with a tool like AutoMapper.)
        context.SaveChanges();
    }
}
If you're changing multiple objects on the client side and you want to keep track of which ones the user has changed, the client can be responsible for tracking the changed objects and sending only those to the web service in bulk (see the sketch below). There, you can apply the same pattern and wait to call SaveChanges until all of the objects have been updated.
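A minimal sketch of that client-side bookkeeping (ChangeTracker and SavePeople are hypothetical names, not .NET types):
using System.Collections.Generic;
using System.Linq;

// Tracks which objects the user has touched so only those cross the wire.
public class ChangeTracker<T>
{
    private readonly HashSet<T> _dirty = new HashSet<T>();

    // Call this from the UI whenever the user edits an object.
    public void MarkDirty(T item) => _dirty.Add(item);

    // Returns the changed objects and resets the tracker.
    public IReadOnlyList<T> TakeChanges()
    {
        var changes = _dirty.ToList();
        _dirty.Clear();
        return changes;
    }
}

// Usage: mark objects as the user edits them, then flush in bulk:
//   tracker.MarkDirty(person);
//   service.SavePeople(tracker.TakeChanges()); // server applies the DTO pattern per object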
Hopefully this helps.
We currently have an application that retrieves data from the server through a web service and populates a DataSet. The users of the API then manipulate it through objects, which in turn change the DataSet. The changes are then serialized, compressed, and sent back to the server to be applied.
However, I have begun using NHibernate in other projects and I really like the disconnected nature of POCO objects. The problem we have now is that our objects are so tied to the internal DataSet that they cannot be used in many situations, and we end up making duplicate POCO objects to pass back and forth.
Batch.GetBatch() -> calls to web server and populates an internal dataset
Batch.SaveBatch() -> send changes to web server from dataset
Is there a way to achieve a model similar to the one we are using, where all database access occurs through a web service, but using NHibernate?
Edit 1
I have a partial solution that is working and persisting through a web service, but it has two problems:
I have to serialize and send my whole collection, not just the changed items.
If I try to repopulate the collection from the returned objects, any references I had are lost.
Here is my example solution.
Client Side
public IList<Job> GetAll()
{
    return coreWebService
        .GetJobs()
        .BinaryDeserialize<IList<Job>>();
}

public IList<Job> Save(IList<Job> Jobs)
{
    return coreWebService
        .Save(Jobs.BinarySerialize())
        .BinaryDeserialize<IList<Job>>();
}
Server Side
[WebMethod]
public byte[] GetJobs()
{
    using (ISession session = NHibernateHelper.OpenSession())
    {
        return (from j in session.Linq<Job>()
                select j).ToList().BinarySerialize();
    }
}

[WebMethod]
public byte[] Save(byte[] JobBytes)
{
    var Jobs = JobBytes.BinaryDeserialize<IList<Job>>();
    using (ISession session = NHibernateHelper.OpenSession())
    using (ITransaction transaction = session.BeginTransaction())
    {
        foreach (var job in Jobs)
        {
            session.SaveOrUpdate(job);
        }
        transaction.Commit();
    }
    return Jobs.BinarySerialize();
}
As you can see, I am sending the whole collection to the server each time and then returning the whole collection. But I'm getting a replaced collection instead of a merged/updated one. Not to mention that it seems highly inefficient to send all the data back and forth when only part of it may have changed.
Edit 2
I have seen several references on the web to an almost transparent persistence mechanism. I'm not exactly sure whether these will work, and most of them look highly experimental:
ADO.NET Data Services w/NHibernate (Ayende)
ADO.NET Data Services w/NHibernate (Wildermuth)
Custom Lazy-loadable Business Collections with NHibernate
NHibernate and WCF is Not a Perfect Match
Spring.NET, NHibernate, WCF Services and Lazy Initialization
How to use NHibernate Lazy Initializing Proxies with Web Services or WCF
I'm having a hard time finding a replacement for the DataSet model we are using today. The reason I want to get away from that model is that it takes a lot of work to tie every property of every class to a row/cell of a DataSet, and it tightly couples all of my classes together.
I've only taken a cursory look at your question, so forgive me if my response is shortsighted, but here goes:
I don't think you can logically get away from doing a mapping from domain object to DTO.
By using domain objects over the wire you are tightly coupling your client and service; part of the reason to have a service in the first place is to promote loose coupling. So that's an immediate issue.
On top of that, you're going to end up with a brittle domain logic interface where you can't make changes on the service side without breaking your client.
I suspect your best bet would be to implement a loosely coupled service exposing REST or some other loosely coupled interface. You could use a product such as AutoMapper to make the conversions simpler and easier, and also to flatten data structures as necessary.
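For instance, a hedged AutoMapper sketch (Person and PersonDto are illustrative names):
using AutoMapper;

// Configure the domain-to-DTO mapping once at startup. By convention,
// AutoMapper also flattens nested members (e.g. Address.City -> AddressCity).
var config = new MapperConfiguration(cfg =>
{
    cfg.CreateMap<Person, PersonDto>();
});
IMapper mapper = config.CreateMapper();

// At the service boundary, hand out the DTO rather than the domain object.
PersonDto dto = mapper.Map<PersonDto>(person);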
At this point I don't know of any way to really cut down the verbosity involved in doing the interface layers, but having worked on large projects that didn't make the effort, I can honestly tell you the savings weren't worth it.
I think your issue revolves around this issue:
http://thatextramile.be/blog/2010/05/why-you-shouldnt-expose-your-entities-through-your-services/
Are you or are you not going to send ORM-Entities over the wire?
Since you have a service-oriented architecture, I (like the author) do not recommend this practice.
I use NHibernate. I call those ORM-Entities. They are THE POCO model, but they have "virtual" properties that allow for lazy loading.
However, I also have some DTO objects. These are also POCOs, but they do not have lazy-loading-friendly properties.
So I do a lot of "converting". I hydrate ORM-Entities (with NHibernate) and then I end up converting them to Domain-DTO-Objects. Yes, it stinks in the beginning.
The server sends out the Domain-DTO-Objects. There is NO lazy loading; I have to populate them with the "Goldilocks" ("just right") model. That is, if I need parent(s) with one level of children, I have to know that up front and send the Domain-DTO-Objects over that way, with just the right amount of hydration.
When I send Domain-DTO-Objects back (from client to server), I have to reverse the process: I convert the Domain-DTO-Objects into ORM-Entities and allow NHibernate to work with the ORM-Entities; see the converter sketch below.
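A sketch of one such converter, showing the "just right" hydration of exactly one level of children (all type names here are illustrative):
using System.Linq;

public static class ParentConverter
{
    // Copies exactly what the client needs: the parent's scalars plus one
    // level of children. Touching anything deeper would trigger lazy loading.
    public static ParentDto ToDto(ParentEntity entity)
    {
        return new ParentDto
        {
            Id = entity.Id,
            Name = entity.Name,
            Children = entity.Children
                .Select(c => new ChildDto { Id = c.Id, Name = c.Name })
                .ToList()
        };
    }
}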
Because the architecture is "disconnected", I do a lot of (NHibernate) .Merge() calls:
// ormItem is any NHibernate POCO
using (ISession session = ISessionCreator.OpenSession())
{
    using (ITransaction transaction = session.BeginTransaction())
    {
        // Merge attaches a detached instance to the session and returns
        // the persistent instance.
        ParkingAreaNHEntity mergedItem = session.Merge(ormItem);
        transaction.Commit();
    }
}
.Merge is a wonderful thing. Entity Framework does not have it. Boo.
Is this a lot of setup? Yes.
Do I think it is perfect? No.
However, because I send very basic DTOs (POCOs) that are not "flavored" to the ORM, I have the ability to switch ORMs without killing my contracts to the outside world.
My data layer can be ADO.NET, EF, NHibernate, or anything. If I switch, I have to write the "converters" and the ORM code, but everything else is isolated.
Many people argue with me. They say I'm doing too much, and that the ORM-Entities are fine.
Again, I prefer to not allow any lazy-loading surprises, and I prefer to have my data layer isolated. My clients should not know or care about my data layer or ORM of choice.
There are just enough subtle differences (or some not-so-subtle ones) between EF and NHibernate to screwball the game plan.
Do my Domain-DTO-Objects look 95% like my ORM-Entities? Yep. But it's the 5% that can screwball you.
Moving away from DataSets, especially if they are populated from stored procedures with a lot of biz logic in the T-SQL, isn't trivial. But now that I use an object model, and NEVER write a stored procedure that isn't a simple CRUD function, I'd never go back.
And I hate maintenance projects with voodoo T-SQL in the stored procedures. It ain't 1999 anymore. Well, in most places.
Good luck.
P.S.: Without .Merge (in EF), here is what you have to do in a disconnected world (boo, Microsoft):
http://www.entityframeworktutorial.net/EntityFramework4.3/update-many-to-many-entity-using-dbcontext.aspx
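In short, the workaround in that article amounts to attaching the detached object and flagging it as modified; a hedged sketch (MyDbContext, People, and detachedPerson are illustrative):
using System.Data.Entity; // EF6; the namespace differs in EF Core

public void SavePersonDisconnected(MyDbContext context, Person detachedPerson)
{
    // Attach the detached object, then mark it Modified so EF updates all of
    // its scalar properties on SaveChanges. Relationship changes (like the
    // many-to-many case in the linked article) still need manual handling.
    context.People.Attach(detachedPerson);
    context.Entry(detachedPerson).State = EntityState.Modified;
    context.SaveChanges();
}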