Domain Logic and Business Logic - repository-pattern

In one of my projects, I'm using an n-tier architecture:
DAL (Repository Pattern) <-> BLL (POCO Services) <-> Web UI (ASP.NET MVC)
I created a generic repository and everything is fine on the DAL layer.
In the Business Logic Layer, I have my service methods, which operate like this (the example I love to use, because of pizza :)
myOven.Bake(myPizza);
However, I need some specific information that is internal to the myPizza object, like this:
myPizza.GetBakeTime();
I know, I can use something like:
myOven.GetBakeTimeFor(myPizza);
which could calculate it, but I don't want to put that specific logic into the myOven object (the service layer here); instead, I want to include it in myPizza, like this:
public partial class Pizza
{
    public double GetBakeTime()
    {
        // calculate the bake time based on other variables and return it
        double bakeTime = 0; // placeholder for the actual calculation
        return bakeTime;
    }
}
I mean, I want to extend my ORM-generated class and provide this functionality there.
My question: I know this can be done, but are there any considerations I should take into account when using both Domain Logic and Business Logic in the same class?

The Domain Layer should handle ONLY business related functionality. The Repository handles persistence of data. Those two have different purposes and should not be mixed together.
Also, the Domain Layer pretty much is the Business Layer. For this particular example, where you only want the baking time, a specialized query repository should know the answer without involving the Domain (because it's precomputed). If you want to know how much baking time is left, then a Service (part of the Domain) can get the value using the Oven and Pizza entities.
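As a minimal sketch of that second case (BakingService, BakeStartedUtc and the time arithmetic are illustrative assumptions, not something prescribed by the question; GetBakeTime() is assumed to return minutes):
// Hypothetical domain service that computes the remaining bake time from
// the two entities; none of these members come from the question itself.
public class BakingService
{
    public TimeSpan GetRemainingBakeTime(Pizza pizza, Oven oven)
    {
        var total = TimeSpan.FromMinutes(pizza.GetBakeTime()); // assumed minutes
        var elapsed = DateTime.UtcNow - oven.BakeStartedUtc;   // assumed member
        var remaining = total - elapsed;
        return remaining > TimeSpan.Zero ? remaining : TimeSpan.Zero;
    }
}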
However, this is already too specific and might not be suitable at all for the real problem you want to solve.

Related

TDD + DDD: Model abstractions

I've recently had an interesting experience, but haven't found a satisfying answer so far: I'm a big fan of DDD and try to define rich domain objects with behavior and good information hiding, even if the team officially doesn't practice DDD. At the end of the day it doesn't matter, as you end up with a well-defined object which represents something in the problem domain.
That said, I would also like to practice TDD more. Unfortunately, if I test a service which uses such rich domain models, the models are usually not abstracted. Therefore, to test the behavior of the service, I need to set up the model as well. The model comes with its own invariants etc., so with every service test I also test the model the service is using.
This seems like a big no-go, as I'm not only "not really unit-testing", but it's also troublesome to set up the tests, as the arrange-code gets large.
In my opinion, there seems to be no way around this but to start creating interfaces for models. But it seems like I am the only person thinking so. For example, here is a detailed article on why this is an anti-pattern:
https://lostechies.com/jamesgregory/2009/05/09/entity-interface-anti-pattern/
I'm also not too delighted to create interfaces for all models, as they should really represent something, and adding another layer of abstraction just for testing seems like overkill. That said, what would be the best solution here? How are people in the field who combine DDD and TDD handling this?
This seems like a big no-go, as I'm not only "not really unit-testing", but it's also troublesome to set up the tests, as the arrange-code gets large.
I think you can dismiss "not really unit-testing"; the important thing is to use tools that are fit for purpose, not the branding.
That said, troublesome to set up the tests is a legitimate concern, and all by itself sufficient excuse to look for a way to improve the design.
If your service were tightly coupled to some third party implementation, that offered no affordances for substitution, what would you do to decouple that from your tests? The usual answer would be to introduce a seam - a new design element between your code and the 3rd party code.
The two important characteristics of the seam:
it does afford substitution - which is to say, you have an interface.
the implementation of the interface that integrates with the third party code is "so simple there are obviously no deficiencies".
Then, in your tests, you introduce a substitute implementation.
The game with your "domain model" is exactly the same. Assuming that you are applying the usual lifecycle patterns, the seam includes a substitute for the repository and a substitute for the aggregate root entity.
Some good news - you don't necessarily need to shadow the entire aggregate: only the parts of the interface that your service cares about. In effect, what you are doing is defining - for each service - the contract that describes the interactions between your service and the domain model. "Role interfaces" will be a useful search term here.
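A minimal sketch of that idea, assuming a hypothetical discounting service (the interface, aggregate, and fake below are illustrative names, not from the question):
// Role interface: only the members this particular service interacts with.
public interface IDiscountable
{
    decimal Total { get; }
    void ApplyDiscount(decimal percent);
}

// The real aggregate root implements the role interface alongside its
// invariants and other behaviour.
public class Order : IDiscountable
{
    public decimal Total { get; private set; }
    public void ApplyDiscount(decimal percent) => Total -= Total * percent / 100m;
}

// In service tests, a trivial substitute stands in for the aggregate.
public class FakeDiscountable : IDiscountable
{
    public decimal Total { get; set; }
    public decimal LastDiscount { get; private set; }
    public void ApplyDiscount(decimal percent) => LastDiscount = percent;
}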
First I would make sure these two conditions are met:
Domain models are POJOs
Domain layer isolation (other layers can access domain layer but not the way around)
Then a Factory, a Builder, or TestHelpers can be used to bring the models into the desired state for the tests, as in the sketch below.
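A minimal test-data builder in that spirit (User and its constructor are illustrative assumptions):
// Builds User instances in a known, valid state for tests; every name
// here is hypothetical.
public class UserBuilder
{
    private string _name = "standard-user";
    private bool _isAdmin = false;

    public UserBuilder WithName(string name) { _name = name; return this; }
    public UserBuilder AsAdmin() { _isAdmin = true; return this; }

    public User Build() => new User(_name, _isAdmin);
}
A test then states only what it cares about, e.g. new UserBuilder().AsAdmin().Build().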
Basics
Testing Scopes
Unit Testing
Integration Testing
Domain Models
These should be unit tests, which test the Domain Model's / Aggregate's methods.
Services
These should be integration tests, which test the integration of Service methods with the associated models.
My Broad Approach
When you're testing your domain models, there may be many variants that you'll need to account for in your unit tests.
When these then translate over to a requirement to use within an integration test, I tend to go for some sort of CreationFactory (or ArrangementFactory) for your domain models.
You can then use these in both sets of tests.
So for example...
public class ArrangeUser {
    public static User ArrangeStandardUser() {
        return new User(...standard...);
    }

    public static User ArrangeAdminUser() {
        return new User(...admin...);
    }
}
Then in your Unit Test...
// Arrange
User standardUser = ArrangeUser.ArrangeStandardUser();
// Act
bool canDoSomething = standardUser.CanDoSomething();
// Assert
Assert.True(canDoSomething);
Then in your Integration Test...
// Arrange
User standardUser = ArrangeUser.ArrangeStandardUser();
ServiceToTest service = new ServiceToTest(standardUser); // replace with some sort of Repository Mock or whatever suits.
// Act
bool canDo = service.CanDoService();
// Assert
Assert.True(canDo);
This way you can test both the unit aspect and the service aspect, by creating a common way to build the arrangements. It avoids having to abstract out the entities and solves the problem of recreating the same thing over and over again.
NB. This is just a basic code demo that can be made more complex, based on the scenario or your preferred test style.
I had a similar challenge and, together with my team, we created a tool that simplifies the test data arranging process by employing a random data generator: https://github.com/ocadotechnology/test-arranger. Especially take a look at:
How to organize tests with Test Arranger as it explicitly refers to the common DDD building blocks and explains how to arrange test data around them. In my case, following those recommendations resulted in a significant reduction in the amount and complexity of code for preparing the test data.
Custom Arrangers as it shows how to deal with the model invariants.
Besides the recommendations given on the test-arranger page, it is also handy to use Lombok's @Builder(toBuilder = true) (or an equivalent like Kotlin's copy method from data classes) on your domain classes. With the toBuilder method you can easily adjust randomly generated value objects and entities to the needs of a certain test case.
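As an aside of mine (not something the answer mentions), C# records give a loose analogue of that copy-and-adjust style; Address is illustrative:
public record Address(string City, string Street);

public class AddressTests
{
    public void Example()
    {
        // take a generated instance, then adjust only what this test needs
        var generated = new Address("AnyCity", "AnyStreet");
        var forThisTest = generated with { City = "London" };
    }
}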

SOAP API parameter best practice

I have a set of SOAP web services that are tightly coupled to an application within the same architecture, but I need them to also be an API for other applications to hook into.
At the moment, the services use a parameter (and method) structure like this
Entity GetEntity(int entityId)
Entity GetEntityByName(string entityName)
.... etc.
In the case of creates I use:
void CreateEntity(Entity entity)
I am wondering, though, whether it would be better to do it like this:
EntityResponse GetEntity(EntityRequest requestObj) .....
and in the requestObj I have id and entityName; depending on what the user supplies, I can perform either function.
and for the create it would be:
CreateEntityResponse CreateEntity(CreateEntityRequest requestObj).
My thinking is that by doing it like this, the API can change internally (add new parameters, etc.) without immediately breaking any integration that has already been done.
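A rough sketch of that message-based style (the property names are my assumptions, not from the original services):
// Request/response message pattern: optional members can be appended to
// either class later without breaking existing callers.
public class EntityRequest
{
    public int? Id { get; set; }           // supply either the id...
    public string EntityName { get; set; } // ...or the name
}

public class EntityResponse
{
    public Entity Entity { get; set; }
    // room to grow: warnings, paging info, etc. can be added here later
}

public interface IEntityService
{
    EntityResponse GetEntity(EntityRequest request);
}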
I think there are several design principles that you may want to consider:
1) Database Entity vs Data Transfer Object (DTO)
It looks like those Entities come directly from a database mapping? Just exposing your Entities as an API is basically a fancy SQL query browser. It's not necessarily wrong, but you will achieve better decoupling if you expose DTOs in the API.
The DTOs can be more future-proof than the Entities, and more generic.
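A minimal sketch of that separation (EntityDto and the mapped members are illustrative assumptions):
// The database entity stays internal; the API only ever sees the DTO.
public class EntityDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    // only what clients need; internal columns never leak out
}

public static class EntityMapper
{
    public static EntityDto ToDto(Entity entity) =>
        new EntityDto { Id = entity.Id, Name = entity.Name };
}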
2) SOAP vs REST
If you want to achieve a maximum of future-proofing, you might want to consider REST. With the REST style you have more options to extend the API later.
For instance, if you look at APIs like Facebook's, they purely pass in parameters, and in return you receive a key-value map of the parameters you passed in. So it is very generic.
In SOAP you would always end up defining all of those eventual attributes upfront. You basically need to introduce placeholders, et cetera.
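For instance, the request DTO from the question could reserve such a placeholder upfront (the member below is purely illustrative):
public class EntityRequest
{
    public int? Id { get; set; }

    // generic key-value placeholder, declared upfront so the fixed SOAP
    // contract can carry parameters that don't exist yet
    public List<KeyValuePair<string, string>> ExtraParameters { get; set; }
}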
There are certainly reasons why SOAP is a good contract protocol, and it has advantages, such as more mature code-generation tooling, and lots more. But with REST you can be even more flexible, while losing some of the goodies you had in SOAP.
This is also a very good read:
https://www.mulesoft.com/lp/whitepaper/api/secrets-great-api
More generally, the RAML design spec from Mule is a very powerful tool when it comes to designing REST APIs.

ServiceStack Logical Separation of Procedures

I believe ServiceStack is an exceptional framework that works well toward removing the plumbing that typically goes with web services. That said, there is one deficiency that perhaps I just need clarification on.
When dealing with a large project, for example one with multiple products, some logical separation seems important. I understand how to separate security concerns and the like, but from a pure maintenance, team, and consumption standpoint it seems like I must be missing something.
It seems that something like:
api.domain.com/productA
api.domain.com/productB
api.domain.com/productC
are logical separations, and if you are just using the default documentation, assuming you have 150 procedures under each product (not to mention products D, E, F, and general services outside of any product), things could get a bit unwieldy.
So the question is: what is the best way to deal with and set up large projects like this? Does each product get its own AppHost with an endpoint for that project/product? Is there another approach I'm not thinking of?
The wiki docs on Modularizing ServiceStack show how you can configure your Service implementation to spread across multiple dependencies, e.g:
public class AppHost : AppHostBase
{
    // Tell ServiceStack which assemblies to scan for services
    public AppHost() : base("My Product Services",
        typeof(ServicesFromProductA).Assembly,
        typeof(ServicesFromProductB).Assembly
        /*, etc */) {}

    public override void Configure(Container container) {}
}
Your DTOs can also be spread across multiple dlls, although it's still recommended to keep your DTO assembly dependency-free.
Reverse Proxying / Url Rewriting to different ServiceStack instances
Another option worth considering, if there is a clean separation between your different products, is to separate them into distinct Services and conceptually have them appear under the same url structure by using a reverse proxy or rewrite rules in IIS. This lets you map different top-level paths to different independent ServiceStack instances, e.g:
http://api.domain.com/productA -> http://localhost:8000
http://api.domain.com/productB -> http://localhost:8001
http://api.domain.com/productC -> http://localhost:8002
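As a hedged sketch of the IIS side (assuming the URL Rewrite module plus Application Request Routing with proxying enabled; only productA shown, repeat per product):
<!-- web.config fragment: forwards /productA/* to the ServiceStack
     instance listening on port 8000 -->
<system.webServer>
  <rewrite>
    <rules>
      <rule name="ProductA" stopProcessing="true">
        <match url="^productA/(.*)" />
        <action type="Rewrite" url="http://localhost:8000/{R:1}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>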

Customizable Web Applications

At my company we develop prefabricated web applications. While our applications work as-is in many cases, we often receive complex customization requests, and we are having trouble performing these in a structured way: generic functionality should not be influenced by customizations. At the moment we are looking into Spring Web Flow, and it looks like it can handle part of what we need.
For example, we have an Online Shopping application, and we have a request from a client that at the moment of checking out the Shopping Basket, the order has to be written to a proprietary logging system.
With SWF, it is possible to have a ClientX Checkout Flow inherit from our Generic Checkout Flow and extend it with the states necessary to perform the custom log write. This scenario seems to be handled well: we can keep our Generic Checkout Flow as-is and extend it with custom functionality, according to the Open/Closed Principle. Over time, our team can add functionality to the Generic Checkout Flow, and this can be distributed to the client without modifying the extension.
However, sometimes clients request that our pages be customized. For example, in our Online Shopping app a client requests a multiple-currencies feature. In this case, you need to modify the View as well as the Flow (Controller). Is there a technology that would let me extend the Generic View without modifying it? So far, with the majority of template-based view technologies (JSP, Struts, Velocity, etc.), there seem to be only two solutions:
have a specific version of the view for each client, which obviously leads to an implementation explosion
make the application configurable depending on parameters (if multipleCurrency then ...), which leads to a code explosion - a number of configuration conditions that have to be checked in each page
What would be the best solution in this case? There are probably some other customization cases I am not able to recall. Is there maybe a component-based view technology that would let me extend a certain base view, and does that make sense?
What are typical solutions to a problem of configurable web applications?
Each customization point implies some level of conditionality.
Where possible, folks tend to use style sheets to control some aspects. For example, display of a currency selector could perhaps be done like that.
Another thought for that currency example: 1 is the limiting case of many. So the model provides the list of currencies, and the view displays a selector if there are many and a fixed field if there is only one. Quite well-defined behaviour - easy to test, reusable for other scenarios.
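A small sketch of that, as a view model (the names are illustrative; the view technology doesn't matter much):
// The model always provides a list; the view derives its shape from it.
public class CheckoutViewModel
{
    public IReadOnlyList<string> Currencies { get; set; }

    // Several currencies: render a selector. Exactly one: a fixed field.
    public bool ShowCurrencySelector => Currencies.Count > 1;
    public string FixedCurrency => Currencies.Count == 1 ? Currencies[0] : null;
}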

Admin interface to manage two related data sources

In the project there are two data sources: one is the project's own database, the other is a (semi-)legacy web service. The problem is that the admin part has to keep them in sync and manage both, so that the user doesn't have to know they're separate (or does know, but doesn't care).
Here's an example: there's a list of languages. Both apps - project and legacy - need to use them. However, they both add their own meaning: for example, the project may need active/inactive, and the legacy app will need a language code.
But the admin part has to manage everything - language name, active/inactive, language code. When loading, data from both systems has to be merged and presented, and when saving, data has to be updated in both systems.
Thus, what's the best way to represent this separated data (to be used in the admin page)? Notice that I use ASP.NET MVC / NHibernate.
How do I manage legacy data?
Do I connect the admin part to the legacy web service's external interface - which currently only has GetXXX() methods - and add the missing C[R]UD methods?
Or do I connect directly to the legacy database - which is possible, since I control it?
Where do I do split/merge of data - in the controller/service layer, or in the repository/data layer?
In the controller layer, I'd do var viewModel = new ViewModel { MyData = ..., LegacyData = ... }. The problem: code cluttered with legacy issues.
In the data layer, I'd do var model = repository.Get(id) and the model would contain data from both worlds; when I do repository.Save(entity), both data sources would be updated, with only project-specific fields stored in the local db. The problems: a) a possibly leaky abstraction; b) always fetching data from the web service when it is only needed sometimes, and usually only for the admin part.
A modification: use an ICombinedRepository<Language> which would provide the additional split/merge. Problem: I'd still need either a new model or IWithLegacy<Language, LegacyLanguage>...
Have a single "sync" method; this would remove legacy items not present in the project's item list, update those that are present, create legacy items that are missing, etc.
Well, to summarize the main issues:
do I develop a CRUD interface on the web service, or connect directly to its database (which is under my complete control, so I may even later decide to move the web service part into the main app or make it use the main db)?
do I have separate classes for the project's and the legacy entities, managed separately, or do the project's entities carry all the legacy fields, managed transparently when saved/loaded?
Anyway, are there any useful tips on managing mostly duplicated data from different sources? What are the best practices?
In the non-admin part, I'd like to completely hide the notion of the legacy data, which is what I do now, behind the repository interfaces. But for the admin part it's not that clear or easy...
What you are describing here seems to warrant the need for an Anti-Corruption Layer. You can find solutions related to this topic here: DDD, Anti Corruption layer, how-to?
When you have two conceptual Bounded Contexts, but you're only using DDD for one of them, the Anti-Corruption layer comes into play. When reading from your data source (performing a get operation [R]), the anti-corruption layer will translate your legacy data into usable objects for your project. When writing to your data source (performing a set operation [CUD]), the anti-corruption layer will translate your DDD objects into objects understood by your legacy code.
Whether or not to use the existing Web Service depends on whether or not you're willing to change existing code. Sticking with DRY practices, you don't want to duplicate what you already have. If you want to keep the Web Service, you can add CUD methods inside the anti-corruption layer without impacting your legacy application.
In the anti-corruption layer, you will want to make use of adapters and facades to bring together separate classes for your DDD project and the legacy application.
The anti-corruption layer is exactly where you handle splitting and merging.
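For the language example, a minimal sketch of that split/merge might look like this (every type and member name here is my assumption, not from the question):
// Merges the project database and the legacy web service behind one
// repository-like facade for the admin screens.
public class AdminLanguageRepository
{
    private readonly ILanguageRepository _local;     // project db: name, active flag
    private readonly ILegacyLanguageService _legacy; // web service: language code

    public AdminLanguageRepository(ILanguageRepository local,
                                   ILegacyLanguageService legacy)
    {
        _local = local;
        _legacy = legacy;
    }

    public AdminLanguage Get(int id)
    {
        var local = _local.Get(id);           // name, active/inactive
        var legacy = _legacy.GetLanguage(id); // language code
        // merge: one object for the admin UI
        return new AdminLanguage(id, local.Name, local.IsActive, legacy.Code);
    }

    public void Save(AdminLanguage language)
    {
        // split: each side receives only its own fields
        _local.Save(new Language(language.Id, language.Name, language.IsActive));
        _legacy.UpdateLanguage(new LegacyLanguage(language.Id, language.Code));
    }
}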
Let me know if you have any questions on this, as it can be a somewhat advanced topic. I'll try to answer as best I can.
Good luck!