Few things about Repository Pattern that I simply don't understand - repository-pattern

I've read quite a few topics on what a Repository is, but there are still a few things bugging me.
To my understanding, the only difference between a Repository and a traditional data access layer is the Repository's query construction capabilities (i.e. the Query Object pattern). But when reading the following definitions of the Repository pattern, it seems we can still have a Repository even if we don't implement the Query Object pattern:
a)
From:
Repositories are the single point where we hand off and fetch objects.
It is also the boundary where communication with the storage starts
and ends.
I think the above quote suggests that a Repository is an entry point into the DAL. In other words, according to the quote, the DAL consumer (say, the Service layer) communicates with the DAL via a Repository. But shouldn't the data context instead represent the entry point into the DAL (and thus the Repository reside within the data context)?
b)
From:
The primary thing that differentiates a Repository from a traditional
data access layer is that it is to all intents and purposes a
Collection semantic – just like IList in .Net
Don't most traditional DALs also have methods that return a collection (for example, List<Customer> GetAllCustomers())? So how exactly is the collection-like semantic of a Repository any different from the collection-like semantic of a traditional DAL?
c)
From:
In a nutshell, the Repository pattern means abstracting the
persistence layer, masking it as a collection. This way the
application doesn't care about databases and other persistence
details, it only deals with the abstraction (which usually is coded as
an interface).
As far as I know, the above definition isn't any different from the definition of a traditional DAL.
Thus, if a Repository implementation only performed two functions – providing collection-like semantics and isolating the domain objects from the details of the database access code – how would it be any different from a traditional DAL? In other words, would/should it still be called a Repository?
d)
What makes the following interface a Repository interface instead of just a regular DAL interface?
From:
public interface IPostsRepository
{
    void Save(Post mypost);
    Post Get(int id);
    PaginatedResult<Post> List(int skip, int pageSize);
    PaginatedResult<Post> SearchByTitle(string title, int skip, int pageSize);
}
Thank you

FYI I asked a very similar question over here and got some excellent answers.
The bottom line is that it appears to depend on the complexity of your architecture. The Repository pattern is most useful for creating a layer of abstraction when you need to access different types of data stores, e.g. some data is in Entity Framework, some is on the file system, etc. In simpler web apps with a (probably unchanging) single data store (e.g. all data in SQL Server, or Oracle, etc.), it is less important. At that point, something like the Entity Framework context object already functions as a repository for your entity objects.
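To make that abstraction concrete, here is a minimal sketch in TypeScript (all type and class names are invented for illustration): two stores sit behind one contract, so the service layer never learns which one it is talking to. The in-memory collections merely stand in for a real SQL table and a real directory of files.

```typescript
// Hypothetical illustration: one PostRepository contract, two backing stores.
interface Post { id: number; title: string; }

interface PostRepository {
  save(post: Post): void;
  get(id: number): Post | undefined;
}

// In a real app this could be backed by an ORM context.
class SqlPostRepository implements PostRepository {
  private rows = new Map<number, Post>(); // stand-in for a SQL table
  save(post: Post): void { this.rows.set(post.id, post); }
  get(id: number): Post | undefined { return this.rows.get(id); }
}

// In a real app this could be backed by JSON files on disk.
class FilePostRepository implements PostRepository {
  private files: Record<number, Post> = {}; // stand-in for a directory of files
  save(post: Post): void { this.files[post.id] = post; }
  get(id: number): Post | undefined { return this.files[id]; }
}
```

Swapping `SqlPostRepository` for `FilePostRepository` is then a composition-root change, not a change to every consumer of the repository.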

Related

How to call SQL functions / stored procedure when using the Repository pattern

What is the best way to call a SQL function or stored procedure when converting code to use the repository pattern? Specifically, I am interested in read/query capabilities.
Options
Add an ExecuteSqlQuery to IRepository
Add a new repository interface specific to the context (e.g. ILocationRepository) and add resource-specific methods
Add a special "repository" for all the random stored procedures until they are all converted
Don't. Just convert the stored procedures to code and place the logic in the service layer
Option #4 does seem to be the best long-term solution, but it will also take a lot more time, and I was hoping to push it to a future phase.
Which option (above or otherwise) would be "best"?
NOTE: my architecture is based on ardalis/CleanArchitecture using ardalis/Specification, though I'm open to all suggestions.
https://github.com/ardalis/CleanArchitecture/issues/291
If necessary, or create logically grouped Query services/classes for that purpose. It depends a bit on the functionality of the SPROC how I would do it. Repositories should be just simple CRUD, at most with a specification to help shape the result. More complex operations that span many entities and/or aggregates should not be added to repositories but modeled as separate Query objects or services. Makes it easier to follow SOLID that way, especially SRP and OCP (and ISP) since you're not constantly adding to your repo interfaces/implementations.
Don't treat stored procedures as second-class citizens. In general, avoid using them, because they very often pull your domain code out of the application and hide it inside the database; but sometimes, for performance reasons, they are your only choice. In that case, you should use option 2 and treat them the same as a simple database fetch.
Option 1 is really bad, because you will soon have tons of SQL in places you don't want it (the Application Service) and it will prevent portability to another storage medium.
Option 3 is unnecessary; stored procedures are no worse than simple Entity Framework Core database access requests.
Option 4 is the reason why you cannot always avoid stored procedures. Sometimes querying in the application services/repositories will create very big performance issues. That's when, and only when, you should step in with stored procedures.
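Option 2 can be sketched roughly like this (TypeScript, with invented names; the in-memory filtering merely stands in for the stored-procedure call, so callers see an ordinary fetch):

```typescript
// Hypothetical sketch of a resource-specific repository hiding a SPROC.
interface Location { id: number; name: string; distanceKm: number; }

interface LocationRepository {
  getById(id: number): Location | undefined;
  // Backed by a stored procedure in the real implementation;
  // the contract does not reveal that.
  findNearby(lat: number, lon: number, radiusKm: number): Location[];
}

class SqlLocationRepository implements LocationRepository {
  constructor(private locations: Location[]) {}
  getById(id: number): Location | undefined {
    return this.locations.find(l => l.id === id);
  }
  findNearby(lat: number, lon: number, radiusKm: number): Location[] {
    // A real implementation would execute the stored procedure here;
    // we filter in memory purely for illustration.
    return this.locations.filter(l => l.distanceKm <= radiusKm);
  }
}
```

The caller depends only on `LocationRepository`, so if the stored procedure is ever rewritten as application code (option 4), nothing above the repository changes.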

Interfaces for Rich Domain Models

Should Rich Domain Models have interfaces to assist with isolation during unit testing (e.g. when testing a service that uses the model)?
Or should the Rich Domain Model behaviour be involved in any related unit tests?
Edit:
By Rich Domain Model I'm specifically referring to domain entities that contain logic (i.e. non-anaemic).
Usually, the Domain Model is the part that you should keep isolated from everything else. The Domain Model may use interfaces so that it's isolated from external systems, etc.
However, in the most common scenarios, the Domain Model is what you're trying to protect from the deteriorating influences of external systems, UI logic, etc. - not the other way around.
Thus, there's little reason to put interfaces on the Domain Model.
You should definitely involve domain model behaviour in your unit tests. There's absolutely no point in mocking this part. You should really only be mocking external systems.
Should Rich Domain Models have interfaces
I'd say no, because
Domain behavior happens inside a well-delimited bubble. A given command triggers changes in a single aggregate. Conceptually, you don't have to isolate it since it basically just talks to itself. It might also issue messages (events) to the outside world, but testing that doesn't require the domain entities themselves to be mockable.
Concretely, domain entity (or value object) behavior is fluid: it all happens in memory and is not supposed to call lower-level, I/O-bound operations directly. There will be no performance impact from not mocking things in your tests, as long as the system under test is a small prepared cluster of objects (the Aggregate and maybe the thing calling it).
Domain model is a realm of concrete concepts. Ubiquitous language terms reflected in your entity or value object class/method names are usually prosaic and unequivocal. There's not much need for abstraction and polymorphism there -- which is what interfaces or abstract classes are for. The domain is less about generic contracts or interfaces providing services, and more about real world tasks that take place in the problem domain.
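As a minimal illustration of that point (TypeScript, invented names): the entity below is tested directly on the concrete class, with no interfaces or mocks, because everything happens in memory.

```typescript
// Hypothetical rich domain entity: behaviour lives on the entity itself.
class Order {
  private lines: { sku: string; price: number; qty: number }[] = [];

  addLine(sku: string, price: number, qty: number): void {
    // An invariant enforced by the domain model, not the caller.
    if (qty <= 0) throw new Error("quantity must be positive");
    this.lines.push({ sku, price, qty });
  }

  total(): number {
    return this.lines.reduce((sum, l) => sum + l.price * l.qty, 0);
  }
}

// A unit test exercises the behaviour on the concrete class:
const order = new Order();
order.addLine("book", 10, 2);
order.addLine("pen", 2, 5);
// total is 10*2 + 2*5 = 30
```

Nothing here is I/O-bound, so there is nothing worth mocking; only the things calling external systems around the aggregate would need test doubles.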

Clojure module dependencies

I'm trying to create a modular application in clojure.
Let's suppose that we have a blog engine which consists of two modules, for example a database module and an article module (something that stores articles for the blog), each with some configuration parameters.
So the article module depends on the storage module, and having two instances of the article module and the database module (with different parameters) allows us to host two different blogs in two different databases.
I tried to implement this by creating new namespaces for each initialized module on the fly, and defining functions in these namespaces with partially applied parameters. But this approach feels like a hack, I think.
What is the right way to do this?
A 'module' is a noun, as in the 'Kingdom of Nouns' by Steve Yegge.
Stick to non-side-effecting, pure functions of their parameters (verbs) as much as possible, except at the topmost levels of your abstractions. You can organize those functions however you like. At the topmost levels you will have some application state; there are many approaches to managing it, but the one I use most is to hide these top-level services behind a Clojure protocol, then implement it in a Clojure record (which may hold references to database connections or some such).
This approach maximizes flexibility and keeps you from writing yourself into a corner. It's analogous to Java's dependency injection. Stuart Sierra gave a good talk on these topics recently at Clojure/West 2013, but the video is not yet available.
Note the difference from your approach: you need to separate the management and resolution of objects from their lifecycles. Tying them to namespaces makes access quick, but it means any client functions that use that code are now reaching into global state. With protocols, you can separate the implementation detail of global state from the interface used to access it.
If you need a motivating example of why this is useful, consider: how would you intercept all access to a service that's globally accessible? You would have to push the full implementation down and make the entry point a wrapper function, instead of pushing the relevant details closer to the client code. And what if you wanted some behavior for some clients of the code and not others? Now you're stuck. The protocol approach just makes those inevitable trade-offs preemptively and makes your life easier.
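The protocol-plus-record idea is Clojure-specific, but a rough analogue can be sketched in TypeScript (all names invented): the interface plays the protocol's role, and a class instance holding its own state plays the record's, so nothing is tied to a global namespace.

```typescript
// Rough analogue of protocol + record: the interface is the "protocol",
// the class instance the "record" carrying its own connection state.
interface ArticleStore {
  saveArticle(id: number, body: string): void;
  fetchArticle(id: number): string | undefined;
}

class DbArticleStore implements ArticleStore {
  private db = new Map<number, string>(); // stand-in for a DB connection
  saveArticle(id: number, body: string): void { this.db.set(id, body); }
  fetchArticle(id: number): string | undefined { return this.db.get(id); }
}

// Client code takes the store as a parameter rather than reaching for a
// global, which is what makes wrapping/interception possible later.
function publish(store: ArticleStore, id: number, body: string): void {
  store.saveArticle(id, body);
}
```

Two blogs are then just two independent store instances passed to the same functions, which is exactly the two-databases scenario from the question.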

IRepository - Entity implementation

I'm using the Repository and UnitOfWork patterns in order to maintain decoupled code and to achieve a simple way to test my application.
The inner implementation uses Entity Framework with DB first, and everything works fine.
Tomorrow I might want to use some other concrete repository implementation, such as the file system rather than a database, and then repository methods like Find or Delete could be difficult to implement, because my entities don't expose anything about primary/foreign keys and so on. That implies my repository lookups would have to match on all the fields of the T object parameter.
So, is it good practice to enforce some interface implementation on my entities? For instance:
Is there some available example or tutorial about this?
repository methods like Find or Delete could be difficult to implement, because my entities don't expose anything about primary/foreign keys and so on. That implies my repository lookups would have to match on all the fields of the T object parameter.
That's how NOT to implement the repository. A repository interface (contract) should be ignorant of underlying implementation details such as Entity Framework. Only that way can you have a different implementation of the repository and achieve separation of concerns. Also, testing code that uses the repository should not involve EF or the database at all.
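One common answer to the "should entities implement some interface" part is yes, a small identity contract, so a generic repository never has to match on every field. A hedged TypeScript sketch with invented names:

```typescript
// Hypothetical identity contract: every entity exposes an id.
interface HasId { id: number; }

// A generic repository keyed on that identity, with no ORM knowledge.
class InMemoryRepository<T extends HasId> {
  private items = new Map<number, T>();
  save(item: T): void { this.items.set(item.id, item); }
  find(id: number): T | undefined { return this.items.get(id); }
  delete(id: number): void { this.items.delete(id); }
}

interface Customer extends HasId { name: string; }
```

Find and Delete now work the same way for a file-system or in-memory backing store as for a database, because identity lives on the entity, not in the persistence layer's key metadata.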

SOA: Is it preferable to implement a service instead of just writing service-ready code, when no external access is needed?

I'm working on the initial architecture for a solution for which an SOA approach has been recommended by a previous consultant. From reading the Erl book(s) and applying to previous work with services (and good design patterns in general), I can see the benefits of such an approach. However, this particular group does not currently have any traditional needs for implementing web services -- there are no external consumers, and no integration with other applications.
What I'm wondering is, are there any advantages to going with web services strictly to stick to SOA, that we couldn't get from just implementing objects that are "service ready"?
To explain, an example: let's say you implement the entity "Person" as a service. You have to implement:
1. Business object/logic
2. Translator to service data structure
3. Translator from service data structure
4. WSDL
5. Service data structure (XML/JSON/etc)
6. Assertions
Now, on the other hand, if you don't go with a service, you only have to implement #1 and make sure the other code accesses it through a loose reference (using dependency injection, a wrapper, etc.). Then, if it later becomes apparent that a service is needed, you can simply point that reference at the #2/#3 logic above in a wrapper object (so no caller objects need updating), and implement the same number of objects with no penalty in the amount of development you have to do: no extra objects or code have to be created compared to doing it all up front.
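The "service ready" idea above can be sketched as follows (TypeScript, hypothetical names): today only the local implementation exists, and the remote wrapper is added later without touching any callers.

```typescript
// Hypothetical service-ready contract: callers only ever see this interface.
interface PersonService {
  rename(id: number, newName: string): string;
}

// Step 1, implemented today: plain in-process business logic (#1 above).
class LocalPersonService implements PersonService {
  rename(id: number, newName: string): string {
    return `person ${id} renamed to ${newName}`;
  }
}

// Step 2, added only if a real service is ever needed: a wrapper that
// would translate to/from the service data structures (#2/#3 above) and
// call the remote endpoint. Delegation stands in for that call here.
class RemotePersonService implements PersonService {
  constructor(private inner: PersonService) {}
  rename(id: number, newName: string): string {
    // serialize request, call web service, deserialize response (omitted)
    return this.inner.rename(id, newName);
  }
}
```

Because both classes satisfy the same interface, wiring in `RemotePersonService` later is a configuration change at the composition root, not a rewrite of the callers.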
So, if the amount of work that has to be done is the same whether the service is implemented initially or as-needed, and there is no current need for external access through a service, is there any reason to initially implement it as a service just to stick to SOA?
Generally speaking, you'd do better to wait.
You could design and implement a web service which is simply a technical facade exposing the underlying functionality. The question is: would you just do a straight one-for-one 'reflection' of that underlying functionality? If so, did you design that underlying thing in such a way that it's fit for external callers? Does the API make sense? Does it expose members that should be private?
Another factor to consider is: do you really know what the callers of the service want or need? The risk you run with building the service now is that, since you're basically only guessing, you might need to re-write it when the first customers/callers come along. This could result in all sorts of work, including test cases, backwards compatibility if it drives change down to the lower levels, and so on.
Having said that, the advantage of putting something out there is that it might help spark use of the service and get people thinking: a more agile, principled approach.
If your application is an isolated client-type application (a UI that connects to a service just to get data out of the database), implementing an SOA-like architecture is usually overkill.
Nevertheless, there could be security, maintainability, or serviceability reasons why web services are a must, e.g. some clients need access to the data from outside the firewall, or you prefer to separate your business logic/data access from the UI and put it on one server so that you don't need to re-deploy the app every time some business rule changes.
Enterprise applications require many components interacting with each other and many developers working on them. In that type of scenario, an SOA-style architecture is the way to go.
The main reason to adopt SOA is to reduce the dependencies.
Enterprise applications usually depend on a lot of external components (logic or data), and you don't want to integrate those components by sharing assemblies.
Imagine that you share a component that implements some specific calculation: would you deploy this component to all the dependent applications? What would happen if you wanted to change some of the calculation logic? Would you ask all teams to upgrade their references, recompile, and redeploy their apps?
I recently posted on my blog a story where the former architect had likewise chosen not to use web services and thought that sharing assemblies was fine. The result was chaos. Read more here.
As I mentioned, it depends on your requirements. If it's a monolithic application and you're sure you'll never integrate this app and never reuse the business logic/data access, a 2-tier application (UI/DB) is good enough.
Nevertheless, this is an architectural decision, and like most architectural decisions it's costly to change. Of course you can still factor in a web service model later on, but it's not as easy as you might think. Refactoring an existing app to add a service layer is usually a difficult task, even with a good interface-based design. Examples of things that could go wrong: data structures that are not serializable, circular references in properties, constructor overloading, dependencies on some internal behaviors…