Interface Segregation: is it valid to use this on top of the 'composite' repository pattern?

I'm using Entity Framework as my ORM, and I'm using a repository of repositories (?) to abstract away EF so that I can mock it out for testing.
A single repo.
public interface IRepository<T> : IView<T>
{
IQueryable<T> GetAll();
void Update( T entity );
void Delete( T entity );
void Add(T entity);
T Default();
}
The repo of repos ;)
public interface IRepoOfRepos
{
IRepository<Table_a> Table_as { get; }
IRepository<Table_b> Table_bs { get; }
IRepository<Table_c> Table_cs { get; }
// etc.
}
In our application we have a series of 'modules' that perform discrete chunks of business logic and I was planning on 'injecting' the 'IRepoOfRepos' into each.
However, another team member has suggested that we should really be creating an additional layer (interface) with only the data access methods needed by each module (aka the 'I' in SOLID).
We have quite a large number of modules (30+), and this seems like a lot of extra work for a principle that, I feel, may not really apply to the Data Access Layer and is aimed more at the Business Layer. Or does it?
Your thoughts are much appreciated and thanks in advance!

There's a bunch of questions that cover this already:
How to use the repository pattern correctly? (best)
One repository per table or one per functional section?
What is best practise for repository pattern - repo per table?
Are you supposed to have one repository per table in JPA?
From my experience: be ruthlessly pragmatic in your repository design. Only implement the queries you actually need RIGHT NOW - e.g. a Customer.CreateOrder() operation that may require several different IRepository<T>.Add() calls.
You won't end up using all of the CRUD methods on every table, anyway. (YAGNI)
If your Data Access Layer just provides implementations of IRepository<T>, then it doesn't fulfill its purpose. Have a look at the first question I linked - it's very instructive.
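As an illustration of that pragmatism, a purpose-built repository can expose only the operations one use case needs today, rather than generic CRUD for every table. This is a hypothetical sketch in Java (the interface and class names are invented, not from the question):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical: a purpose-built repository exposing only the queries the
// business code needs right now, instead of full CRUD per table.
interface CustomerOrders {
    void add(String orderId);     // the one write this use case needs
    List<String> openOrders();    // the one read this use case needs
}

// In-memory implementation, standing in for the ORM-backed one.
class InMemoryCustomerOrders implements CustomerOrders {
    private final List<String> orders = new ArrayList<>();
    public void add(String orderId) { orders.add(orderId); }
    public List<String> openOrders() { return new ArrayList<>(orders); }
}
```

When a new query is actually needed, you add it then - not speculatively up front.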

The 'composite' repository pattern doesn't exist. A repository doesn't know about other repositories. If you need to use more than one, have all the relevant interfaces injected as arguments for the service using them.
A repository interface is defined only for the needs of a specific bounded context. You have 30 modules - that's OK; some of their needs are common, so you can have a common interface definition (because it's an abstraction, there's no tight coupling). You then define other interfaces specific to each module's needs. Each module service (regardless of the module) will use only the abstractions it needs.
When testing you'll be testing business service behaviour, using repository fakes/mocks. ORM is irrelevant, because your repo interface knows only about business objects and you never tell the repository how to do its work.
In conclusion, yes, Interface Segregation is valid to use, but no repository of repositories.
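To make that segregation concrete, here is a hypothetical Java sketch of narrow role interfaces injected into one module's service; all names are invented for illustration:

```java
// Hypothetical segregated interfaces: each module depends only on the
// operations it actually uses - no repo-of-repos in sight.
interface ReadsInvoices {
    String findInvoice(String id);
}

interface WritesAuditLog {
    void record(String entry);
}

// A module service declares exactly the abstractions it needs, and tests
// can substitute trivial fakes (here, lambdas) for each one.
class BillingModule {
    private final ReadsInvoices invoices;
    private final WritesAuditLog audit;

    BillingModule(ReadsInvoices invoices, WritesAuditLog audit) {
        this.invoices = invoices;
        this.audit = audit;
    }

    String describe(String id) {
        String invoice = invoices.findInvoice(id);
        audit.record("viewed " + id);
        return invoice;
    }
}
```

A single concrete class can still implement several of these interfaces; the point is that consumers never see more than their own slice.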

Related

TDD + DDD: Model abstractions

I've recently had an interesting experience but didn't find a satisfying answer so far: I'm a big fan of DDD and try to define rich domain objects with behavior and good information hiding, even if the team officially doesn't practice DDD. At the end of the day, it doesn't matter, as you have a well-defined object, which represents something in the problem domain.
That said, I would also like to practice TDD more. Unfortunately, if I test a service, which uses such rich domain models, the models are usually not abstracted. Therefore, to test the behavior of the service, I need to set up the model as well. This model comes with its own invariants etc., therefore with every service test, I also test the model the service is using.
This seems like a big no-go, as I'm not only "not really unit-testing", but it's also troublesome to set up the tests, as the arrange-code gets large.
In my opinion, there seems to be no way around this but to start creating interfaces for models. But it seems like I am the only person thinking so. For example, here is a big article, why this is an anti-pattern:
https://lostechies.com/jamesgregory/2009/05/09/entity-interface-anti-pattern/
I'm also not too delighted about creating interfaces for all models, as they should really represent something, and adding another layer of abstraction just for testing seems like overkill. That said, what would be the best solution here? How are people in the field who combine DDD and TDD handling this?
This seems like a big no-go, as I'm not only "not really unit-testing", but it's also troublesome to set up the tests, as the arrange-code gets large.
I think you can dismiss "not really unit-testing"; the important thing is to use tools that are fit for purpose, not the branding.
That said, troublesome to set up the tests is a legitimate concern, and all by itself sufficient excuse to look for a way to improve the design.
If your service were tightly coupled to some third party implementation, that offered no affordances for substitution, what would you do to decouple that from your tests? The usual answer would be to introduce a seam - a new design element between your code and the 3rd party code.
The two important characteristics of the seam:
it does afford substitution; which is to say, you have an interface.
the implementation of the interface that integrates with the third party code is "so simple there are obviously no deficiencies".
Then, in your tests, you introduce a substitute implementation.
The game with your "domain model" is exactly the same. Assuming that you are applying the usual lifecycle patterns, the seam includes a substitute for the repository and a substitute for the aggregate root entity.
Some good news - you don't necessarily need to shadow the entire aggregate: only the parts of the interface that your service cares about. In effect, what you are doing is defining - for each service - the contract that describes the interactions between your service and the domain model. "Role interfaces" will be a useful search term here.
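A minimal Java sketch of such a role interface, with invented names, might look like this:

```java
// Hypothetical role interface: this service only cares whether an order
// can be shipped, so that is all the seam exposes of the aggregate.
interface Shippable {
    boolean readyToShip();
}

// The production aggregate implements the role among its other behaviour.
class Order implements Shippable {
    private final int paidItems;
    Order(int paidItems) { this.paidItems = paidItems; }
    public boolean readyToShip() { return paidItems > 0; }
}

// The service depends on the role, not on the whole aggregate, so tests
// can substitute a trivial fake (even a lambda) for the real entity.
class ShippingService {
    boolean dispatch(Shippable order) {
        return order.readyToShip();
    }
}
```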
First I would make sure these two conditions are met:
Domain models are POJOs
Domain layer isolation (other layers can access the domain layer, but not the other way around)
Then a Factory, Builder or TestHelpers can be used to bring the models to the desired state for tests.
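For example, a hand-written test data builder (hypothetical names, sketched in Java) can hide the model's invariants behind valid defaults, so each test only states what it cares about:

```java
// Hypothetical domain model with an invariant enforced in the constructor.
class Account {
    final String owner;
    final int balance;
    Account(String owner, int balance) {
        if (owner == null) throw new IllegalArgumentException("owner required");
        this.owner = owner;
        this.balance = balance;
    }
}

// Test-data builder: defaults already satisfy the invariants, so a test
// only overrides the fields relevant to its scenario.
class AccountBuilder {
    private String owner = "default-owner";
    private int balance = 0;

    AccountBuilder owner(String o) { this.owner = o; return this; }
    AccountBuilder balance(int b) { this.balance = b; return this; }
    Account build() { return new Account(owner, balance); }
}
```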
Basics
Testing Scopes
Unit Testing
Integration Testing
Domain Models
These should be unit tests, which tests the Domain Models / Aggregate's methods.
Services
These should be integration tests, which tests the integration of Service methods and the associated models.
My Broad Approach
When you're testing your domain models, there may be many variants that you'll need to account for in your unit tests.
When these then translate over to a requirement to use within an integration test, I tend to go for some sort of CreationFactory (or ArrangementFactory) for your domain models.
You can then use these in both sets of tests.
So for example...
public class ArrangeUser {
public static User StandardUser() {
return new User(...standard...);
}
public static User AdminUser() {
return new User(...admin...);
}
}
Then in your Unit Test...
// Arrange
User standardUser = ArrangeUser.StandardUser();
// Act
bool canDoSomething = standardUser.CanDoSomething();
// Assert
Assert.True(canDoSomething);
Then in your Integration Test...
// Arrange
User standardUser = ArrangeUser.StandardUser();
ServiceToTest service = new ServiceToTest(standardUser); // replace with some sort of Repository Mock or whatever suits.
// Act
bool canDo = service.CanDoService();
// Assert
Assert.True(canDo);
This way you can test both the unit aspect and the service aspect, by creating a common way to build the arrangements, without having to abstract out the entities - and it solves the problem of recreating the same thing over and over again.
NB. This is just a basic code demo that can be made more complex, based on the scenario or your preferred test style.
I had a similar challenge and, together with my team, we created a tool that simplifies the test data arranging process by employing a random data generator: https://github.com/ocadotechnology/test-arranger. Especially take a look at:
How to organize tests with Test Arranger as it explicitly refers to the common DDD building blocks and explains how to arrange test data around them. In my case, following those recommendations resulted in a significant reduction in the amount and complexity of code for preparing the test data.
Custom Arrangers as it shows how to deal with the model invariants.
Besides the recommendations given on the test-arranger page, it is also handy to use Lombok's @Builder(toBuilder = true) (or an equivalent like Kotlin's copy method on data classes) on your domain classes. With the toBuilder method you can easily adjust randomly generated value objects and entities to the needs of a certain test case.
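Since Lombok generates that builder at compile time, here is a hand-rolled Java equivalent of the toBuilder idea (hypothetical names), showing how a test can tweak one field of an otherwise arbitrary instance:

```java
// Hand-rolled equivalent of Lombok's @Builder(toBuilder = true): copy an
// existing (e.g. randomly generated) instance and adjust only the fields
// a given test cares about.
class Product {
    final String name;
    final int priceCents;

    Product(String name, int priceCents) {
        this.name = name;
        this.priceCents = priceCents;
    }

    Builder toBuilder() { return new Builder(name, priceCents); }

    static class Builder {
        private String name;
        private int priceCents;
        Builder(String name, int priceCents) {
            this.name = name;
            this.priceCents = priceCents;
        }
        Builder priceCents(int p) { this.priceCents = p; return this; }
        Product build() { return new Product(name, priceCents); }
    }
}
```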

Domain Logic and Business Logic

In one of my projects, I'm using an n-tier architecture
DAL (Repository Pattern) <-> BLL (POCO Services) <-> Web UI (ASP.NET MVC)
I created a generic repository and everything is fine on the DAL layer.
In the Business Logic Layer, I have my service methods which operate like this (the example I love to use, because of pizza :)
myOven.Bake(myPizza);
However, I need some specific information that is internal to the myPizza object, like this:
myPizza.GetBakeTime();
I know, I can use something like:
myOven.GetBakeTimeFor(myPizza);
which can calculate it, but I don't want to put that specific logic into the myOven object (the service layer here); instead, I want to include it in myPizza, like
public partial class Pizza
{
public double GetBakeTime()
{
// calculate Bake Time and return, based on other variables
}
}
I mean, to extend my ORM-generated class and provide this functionality.
My question: I know that this can be done in theory, but are there any considerations I should take into account when using both Domain Logic and Business Logic on the same class?
The Domain Layer should handle ONLY business related functionality. The Repository handles persistence of data. Those two have different purposes and should not be mixed together.
Also the Domain layer pretty much is the Business Layer. For this particular example, where you only want the baking time, then a specialized query repository should know the answer without involving the Domain (because it's precomputed). If you want to know how much time is left for baking, then a Service (part of the Domain) can get the value using the Oven and Pizza entities.
However, this is already too specific and might not be suitable at all for the real problem you want to solve.
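As a rough Java sketch of that split (names invented, arithmetic simplified), the entity owns its own bake-time rule while a domain service combines the two entities without either leaking into the other:

```java
// Hypothetical: the Pizza entity owns the bake-time rule, because it
// depends only on the pizza's own state.
class Pizza {
    private final int toppings;
    Pizza(int toppings) { this.toppings = toppings; }
    int bakeTimeMinutes() { return 10 + 2 * toppings; } // invented formula
}

class Oven {
    private int elapsedMinutes;
    void bakeFor(int minutes) { elapsedMinutes += minutes; }
    int elapsed() { return elapsedMinutes; }
}

// A domain service answers questions that need BOTH entities, without
// pushing pizza-specific logic into the oven or vice versa.
class BakingService {
    int remainingMinutes(Oven oven, Pizza pizza) {
        return Math.max(0, pizza.bakeTimeMinutes() - oven.elapsed());
    }
}
```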

When implementing the repository pattern should lookup value / tables get their own Repository?

I am creating RESTful services for several database entities based on a modified version of the BISDM. Some of these entities have associated lookup tables, such as depicted below:
I have decided to use the repository pattern to provide a clean separation between data persistence and retrieval; however, I am not sure how lookups (as opposed to entities) should be represented in the repository.
Should lookups get their own repository interface, "share" one with the associated entity, or should there be a generic ILookupRepository interface?
For the moment, these lookups are read-only; however, there will be a time where we may want to edit the lookups via services.
Option 1:
ISpaceRepository.GetSpaceCategoryById(string id);
Option 2:
ISpaceCategoryRepository.GetById(string id);
Option 3:
ILookupRepository.GetSpaceCategoryById(string id);
Incidentally, this question is related to another one regarding look-up tables & RESTful web services.
No. Repositories should represent domain model concepts, not entity-level concepts, and certainly not database-level ones. Think about all the things you would want to do with a given component of your domain - for example, Spaces.
One of the things that you'll want to do is GetSpaceCategories(). This should definitely be included in the Spaces repository, as anyone dealing with Spaces will want access to the Space categories without having to instantiate some other repository.
A generic repository would be fairly counter-productive, I would think. Treating a repository like a utility class would virtually guarantee that any moderately complex operation would have to instantiate both repositories.
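A hypothetical Java sketch of a Spaces repository that also answers category lookups, so callers never need a second repository (names and data are invented):

```java
import java.util.List;
import java.util.Map;

// Hypothetical: the Space repository answers lookup queries about space
// categories alongside its entity operations.
interface SpaceRepository {
    String getSpaceCategoryById(String id);
    List<String> getSpaceCategories();
}

// In-memory implementation with invented sample data, standing in for
// the database-backed one.
class InMemorySpaceRepository implements SpaceRepository {
    private final Map<String, String> categories =
        Map.of("1", "Office", "2", "Classroom");

    public String getSpaceCategoryById(String id) {
        return categories.get(id);
    }

    public List<String> getSpaceCategories() {
        return List.copyOf(categories.values());
    }
}
```

If the lookups later become editable via services, the write methods can be added to this same interface at that point.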

Mocking Entity Framework Context

I'm using the entity framework to access my database and I want to mock the database context inside my unit tests so that I can test my middle tier classes free of their dependency on real data. I know that I'm not the first to ask about this (Mocking an Entity Framework Model), but after some googling I have an instinct that it might be possible to instantiate the context based on the model's metadata alone.
Has anyone been able to do this?
A well-known way of doing this is to use the Repository pattern. It acts as a layer over your concrete data access implementation and provides a place to inject test doubles.
You can do it with just metadata, there's a good article on it, and unit testing EF in general, here.
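A minimal Java sketch of that repository seam (hypothetical names; in the real system the production implementation would wrap the EF context):

```java
// Hypothetical seam: the middle tier depends on a repository interface,
// so tests inject an in-memory double instead of a real ORM context.
interface UserRepository {
    String findName(int id);
}

class GreetingService {
    private final UserRepository users;

    GreetingService(UserRepository users) { this.users = users; }

    String greet(int id) {
        return "Hello, " + users.findName(id);
    }
}
```

In a test the double can be a single lambda; in production the implementation delegates to the real data context.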

Testing strategy for large application with few public methods?

I'm working on a project which I'm really not sure how to unit test. It's an unobtrusive tag based framework for wiring up events between models, views and delegates in a GUI system.
Basically, you have one large JSON file which describes all of the events, event handlers and bindings. The user creates their models, views and delegates, all of which have no knowledge of the framework. The JSON file is passed to an init() method; the framework then creates all of the instances needed and takes care of all the bindings, listeners etc.
The problems I have are twofold:
1) There is basically only a single public method in the framework, everything else is communicated through the mark-up in the JSON file. Therefore I have a very small testing surface for what is a large and complicated application.
2) One of the big roles of the application is to instantiate classes if they haven't been instantiated previously and cached. This means that I need real classes in my test code; simple mocks aren't going to cut it.
At the moment I'm considering a couple of solutions. The first is to start testing the private methods. The second is to just stub the constructors.
Any one else have any ideas?
1) There is basically only a single public method in the framework, everything else is communicated through the mark-up in the JSON file. Therefore I have a very small testing surface for what is a large and complicated application.
How is that possible? Is this entire complicated framework stored in one class? If there are several classes involved, how do they share information without public methods?
A constructor is a public method too, by the way.
Are you just passing around the JSON object? That would couple your framework to the information source too tightly. You should have one class parsing the JSON, and the rest communicating without knowledge of the data source (via testable public methods).
List the features (scenarios, use-cases, whatever you want to call them) of the system, and establish JSON data/frameworks for each feature. These are your unit tests.
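A hypothetical Java sketch of that separation: a tiny parser turns markup into plain binding objects, so the wiring logic can be tested without the real JSON file (the markup format here is invented and far simpler than real JSON):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical: a plain value object describing one event binding.
class Binding {
    final String event;
    final String handler;
    Binding(String event, String handler) {
        this.event = event;
        this.handler = handler;
    }
}

// Stands in for real JSON parsing. Invented format:
// "event->handler;event->handler"
class MarkupParser {
    List<Binding> parse(String markup) {
        List<Binding> bindings = new ArrayList<>();
        for (String pair : markup.split(";")) {
            String[] parts = pair.split("->");
            bindings.add(new Binding(parts[0], parts[1]));
        }
        return bindings;
    }
}
```

The rest of the framework then consumes List&lt;Binding&gt; through public methods, which gives every feature scenario a testable entry point beyond init().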