Repository pattern: isn't getting the entire domain object bad behavior (read method)?

The repository pattern is there to abstract away the actual data source, and I see a lot of benefits in that. A repository should not expose IQueryable, to avoid leaking database details, and it should always return domain objects, not DTOs or POCOs. It is this last point I have trouble getting my head around.
If a repository always has to return a domain object, doesn't that mean it fetches far too much data most of the time? Let's say it returns an employee domain object with forty properties, while the service and view layers consuming that object only use five of them.
That means the database has fetched a lot of unnecessary data and pumped it across the network. Doing that with one object is hardly noticeable, but if millions of records are pushed across that way and much of the data is thrown away every time, is that not considered bad behavior?
Yes, when adding, editing, or deleting the object you will use the entire object, but reading the entire object and pushing it to another layer that uses only a fraction of it is not using the underlying database and network in the most optimal way. What am I missing here?

There's nothing preventing you from having a separate read model (which could be a separately stored projection of the domain or a query-time projection) and separating out the command and query concerns (CQRS).
If you then put something like GraphQL in front of your read side then the consumer can decide exactly what data they want from the full model down to individual field/property level.
Your commands still interact with the full domain model as before (except where it's a performance no-brainer to use set based operations).
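A minimal sketch of what a query-time projection on the read side could look like, in Python for brevity; the EmployeeSummary read model, the employees table, and the column names are illustrative assumptions, not part of the original design:

import sqlite3
from dataclasses import dataclass

# Hypothetical read model: a thin projection carrying only the fields the
# consumers actually use, kept separate from the full Employee domain object
# that the command side works with.
@dataclass(frozen=True)
class EmployeeSummary:
    id: int
    name: str
    email: str
    department: str
    hire_date: str

def fetch_employee_summaries(conn: sqlite3.Connection) -> list[EmployeeSummary]:
    # Select only the five columns the read side cares about instead of
    # hydrating the forty-property domain object and discarding most of it.
    rows = conn.execute(
        "SELECT id, name, email, department, hire_date FROM employees"
    )
    return [EmployeeSummary(*row) for row in rows]

The point is that the read side is free to shape its queries around what consumers actually need, while the full domain model stays intact for the command side.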

Related

Django: Making sure a complex object is accessible throughout multiple view calls

For a project, I am trying to create a web app that, among other things, allows training of machine learning agents using Python libraries such as Dedupe or TensorFlow. In cases such as Dedupe, I need to provide an interface for active learning, which I currently realize through jQuery-based AJAX calls to a view that takes and sends the necessary training data.
The problem is that I need this agent object to stay alive throughout multiple view calls and be accessible by each individual call. I have tried realizing this via the built-in cache system using Memcached, but the serialization does not seem to keep all the info intact, and while I am technically able to restore the object from the cache, this appears to break the training algorithm.
Essentially, I want to keep the object alive within the application itself (rather than an external memory store) and be able to access it from another view, but I am at a bit of a loss as to how to realize this.
If someone knows the proper technique to achieve this, I would be very grateful.
Thanks in advance!
To follow up on this question: I have since realized that the behavior seemed to be an effect of using the result of a method call on the object loaded from cache directly in the return value of a view. Specifically, my code looked as follows:
#model is the object loaded from cache
#this returns the wrong object (same object as on an earlier call)
return JsonResponse({"pairs": model.uncertain_pairs()})
and was changed to the following
#model is the object loaded from cache
#this returns the correct object (calls and returns the model.uncertain_pairs() method properly)
uncertain = model.uncertain_pairs()
return JsonResponse({"pairs": uncertain})
I am unsure whether this happens because of the Dedupe or Django implementation, or because of Python itself, but it has undoubtedly fixed the issue.
To return to the question: Django does seem to be able to properly (de-)serialize objects and their properties in cache, as long as the cache is set up properly (see "Apparent bug storing large keys in django memcached", which I also had to deal with).
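For reference, a minimal sketch of the kind of cache setup and usage involved; the backend class, server address, and cache key are illustrative and depend on your Django version and memcached client:

# settings.py -- assumed memcached-backed cache configuration
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.memcached.PyMemcacheCache",
        "LOCATION": "127.0.0.1:11211",
    }
}

# views.py -- storing and retrieving the trained model between requests
from django.core.cache import cache

cache.set("dedupe_model", model, timeout=None)  # timeout=None keeps the entry until it is evicted
model = cache.get("dedupe_model")  # returns None if the key was never set or was evicted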

RESTful search. Return actual resources or URIs?

Pretty new to all this REST stuff.
I'm designing my API, and am not sure what I'm supposed to return from a search query. I was assuming I would just return all objects that match the query in their entirety, but after reading up a bit about HATEOAS I am thinking I should be returning a list of URIs instead?
I can see that this could help with caching of items, but I'm worried that there will be a lot of overhead generated by the subsequent multiple HTTP requests required to get the actual object info.
Am I misunderstanding? Is it acceptable to return object instances instead of URIs?
I would return a list of resources with links to more details on those resources.
From the RESTful Web Services Cookbook (2010) by Subbu Allamaraju:
Design the response of a query as a representation of a collection resource. Set the appropriate expiration caching headers. If the query does not match any resources, return an empty collection.
IMHO it is important to always remember that "pure REST" and "real world REST" are two quite different beasts.
How are you returning the list of URIs from your query in the first place? If you return e.g. application/json, this certainly does not tell the client how it is supposed to interpret the content; therefore, the interaction is already being driven by out-of-band information (the client magically already knows where to look for the data it needs) in conflict with HATEOAS.
So, to answer your question: I find it quite acceptable to return object instances instead of URIs -- but be careful because in the general case this means you are generating all this data without knowing if the client is even going to use it. That's why you will see a hybrid approach quite often: the object instances are not full objects (i.e. a portion of the information the server has is not returned), but they do contain a unique identifier that allows the client to fetch the full representation of selected objects if it chooses to do so.
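As an illustration of that hybrid approach, here is a minimal sketch of a search response written as a Django view (to match the other Python code above); the resource, field names, and URL scheme are made up for the example:

from django.http import JsonResponse

def search_products(request):
    # Partial representations plus a link per item, so the client can fetch
    # the full resource only for the objects it actually needs.
    results = [
        {"id": 42, "name": "Widget", "price": "9.99", "href": "/products/42"},
        {"id": 43, "name": "Gadget", "price": "4.50", "href": "/products/43"},
    ]
    return JsonResponse({"count": len(results), "items": results})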

How can I design a ColdFusion page to update an Array of data across requests without using the SESSION scope?

This is more of a conceptual question. I have a working application that allows users to upload a CSV file of addresses, then parses the data into an Array of Address objects, then validates each Address object against certain rules (certain fields are required, etc.). The page then displays any addresses that failed validation, giving the user the ability to edit or delete each.
Right now, I am storing the entire Array in a SESSION variable, assigning each Address an Id value, then updating each Address in the SESSION Array when the user makes edits and submits the form.
I'm trying to think of a way to do this without using the SESSION scope, or using a physical database, or physical file. Any ideas?
If you don't use a physical database you would have to use some sort of persistent scope. That means the SESSION scope, the CLIENT scope (if you have that enabled), the APPLICATION scope, or the SERVER scope. But I think the safest way (as all those persistent scopes are cleared if your server goes down) is to store the data in a database -- whether that database is an RDBMS, a text file, or a Verity or Solr collection. I apologize in advance if that doesn't answer your question.
The data needs to be stored / preserved somewhere if you want to work with it across multiple requests. Aside from the options your question rules out (session, database, file), I can think of two other (non-ideal) options:
External cache mechanism like memcached -- not necessarily recommended because it's inherently volatile and doesn't guarantee to preserve your data
Pass the contents of the CSV around from request to request, e.g. via a hidden FORM field containing JSON -- not recommended if the CSV can get large
Personally, my colleagues and I tend to use temporary database storage for this type of issue.
I don't know anything about your requirements or use cases, but if you can depend on your users to have modern browsers, another viable option might be HTML5 localStorage.
Skimming some of the localStorage questions might give you some ideas.

read objects persisted but not yet flushed with doctrine

I'm new to Symfony2 and Doctrine.
Here is the problem as I see it.
I cannot use:
$repository = $this->getDoctrine()->getRepository('entity');
$my_object = $repository->findOneBy($index);
on an object that has been persisted but not yet flushed.
I think getRepository reads from the DB, so it will not find an object that has not been flushed.
My question: how do I read those objects that have been persisted (I think they are held somewhere in a kind of Doctrine session) so I can re-use them before I flush the entire batch?
Every profile has 256 physical plumes.
Every profile has one plumeOptions record assigned to it.
In plumeOptions, I have a cartridgeplume, which is a FK to PhysicalPlume.
Every plume is identified by an ID (auto-generated) and an INDEX (user-generated).
Rule: say profile 1 has physical_plume_index number 3 (the index) connected to it.
Now I want to copy a profile, with all its related data, to another profile.
A new profile is created, and 256 new plumes are created and copied from the older profile.
I want to link the new profile to the new plume with index 3.
Check here: http://pastebin.com/WFa8vkt1
I think you might want to have a look at this function:
$entityManager->getUnitOfWork()->getScheduledEntityInsertions()
This gives you back the list of entity objects that have been persisted but not yet flushed.
Hmm, I didn't read your question carefully enough: with the above you will retrieve the full list (as an array), but you cannot query it the way you can with getRepository. I will try to find something for you.
I think you might be looking at the problem from the wrong angle. Doctrine is your persistence layer and database access layer. It is the responsibility of your domain model to provide access to objects once they are in memory. So the problem boils down to: how do you get a reference to an object without the persistence layer?
Where do you create the object you need to get hold of later? Can the method/service that creates the object return a reference to the controller so it can propagate it to the other place where you need it? Can you dispatch an event that you listen to elsewhere in your application to get hold of the object?
In my opinion, Doctrine should be used at the startup of the application (as early as possible), to initialize the domain model, and at the shutdown of the application, to persist any changes to the domain model during the request. To use a repository to get hold of objects in the middle of a request is, in my opinion, probably a code smell and you should look at how the application flow can be refactored to remove that need.
Yours is effectively a business logic problem.
Running a findBy query against the database for objects that are not flushed yet means hitting the DB layer to look for objects you already have in your function scope.
Also keep in mind that a findOneBy will also retrieve other objects previously saved with the same criteria.
If you only need to search among the newly created objects, you could, for example, keep them in a session array variable and iterate over them with a foreach.
If you need a mix of already saved items and new items, you should treat the two parts separately: one with a foreach, the other with the repository query.

Repository Pattern

I've got a quick question regarding the use of repositories, but the best way to ask is to show a bit of pseudocode and have you tell me what the result should be:
Get a record from the repository with ID of 1 (assume it exists)
Edit a couple of properties
Query the repository again for an item with ID of 1
Result = ??
Should I get the object with the updated values, or the object in its original state, bearing in mind that since updating the property values (step 2) I have not told the repository to update this record?
I think I should get a copy of the original item and not a reference to the edited version.
Please tell me what is correct.
Cheers
The repository pattern is supposed to act like a collection of your objects, so ideally I think it should return the same object instance, which would have the updates in it.
Generally there is an identity map somewhere so your repositories can keep track of what has already been loaded. With an identity map, when you fetch an object with the same Id you should always get the already loaded object back, no matter how many times you fetch it. This is how the more sophisticated ORMs work, and it is generally good practice. An identity map helps keep things in sync while you are in the same transaction and saves you some data access.
NHibernate's session has an identity map it keeps track of so you don't have to worry about trying to implement your own in your repositories. Also I believe you can use NHibernate's stateless session if you want to load another instance without change tracking, but I'm not positive on that.
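A minimal sketch of the identity-map idea, in Python for brevity; the class and method names are illustrative and not tied to any particular ORM:

class EmployeeRepository:
    # Repository with a simple identity map: repeated lookups for the same id
    # return the same in-memory instance, so edits made in step 2 are visible
    # when the object is fetched again in step 3.
    def __init__(self, load_from_db):
        self._identity_map = {}  # id -> already-loaded object
        self._load_from_db = load_from_db

    def get(self, employee_id):
        if employee_id not in self._identity_map:
            self._identity_map[employee_id] = self._load_from_db(employee_id)
        return self._identity_map[employee_id]

# The second get(1) returns the same instance, edits included.
repo = EmployeeRepository(load_from_db=lambda _id: {"id": _id, "name": "Alice"})
first = repo.get(1)
first["name"] = "Alicia"
assert repo.get(1)["name"] == "Alicia"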
Judging from your past questions I'm assuming you are using LINQ/C#?
If you are using a DataContext and you haven't called SubmitChanges() then you should get back the original unchanged object.
Just tested it. I was wrong, you get back the changed object.
If you set ObjectTrackingEnabled = false on the DataContext you will get the unchanged object.