ServiceStack Logical Separation of Procedures - web-services

I believe ServiceStack is an exceptional framework that goes a long way toward removing the plumbing that typically comes with web services. That said, there is one deficiency that perhaps I just need clarification on.
When dealing with a large project, for example one with multiple products, a logical separation seems important. I understand how to separate security concerns and the like, but from a maintenance, team, and consumption standpoint it seems like I must be missing something.
It seems that something like:
api.domain.com/productA
api.domain.com/productB
api.domain.com/productC
are logical separations, and if you are just using the default documentation, then assuming you have 150 procedures under each product (not to mention products D, E, F, and general services outside of any product), things could get a bit unwieldy.
So the question is: what is the best way to deal with and set up large projects like this? Does each one get its own AppHost with an endpoint for that project/product? Is there another approach I'm not thinking of?

The wiki docs on Modularizing ServiceStack show how you can spread your Service implementations across multiple assemblies, e.g.:
public class AppHost : AppHostBase
{
    // Tell ServiceStack which assemblies to scan for services
    public AppHost() : base("My Product Services",
        typeof(ServicesFromProductA).Assembly,
        typeof(ServicesFromProductB).Assembly
        /*, etc */) {}

    public override void Configure(Container container) {}
}
Your DTOs can also be spread across multiple dlls, although it's still recommended to keep your DTO assemblies dependency-free.
Reverse Proxying / Url Rewriting to different ServiceStack instances
Another option worth considering, if there is a clean separation between your different products, is to split them into distinct Services and conceptually have them appear under the same url structure by using a reverse proxy or rewrite rules in IIS. This lets you map different top-level paths to different independent ServiceStack instances, e.g.:
http://api.domain.com/productA -> http://localhost:8000
http://api.domain.com/productB -> http://localhost:8001
http://api.domain.com/productC -> http://localhost:8002
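For instance, with IIS this could look something like the following URL Rewrite sketch (assuming the URL Rewrite module and Application Request Routing are installed and proxying is enabled; the rule name and ports are illustrative):

<system.webServer>
  <rewrite>
    <rules>
      <!-- Forward /productA/* to the ServiceStack instance on port 8000 -->
      <rule name="ProductA" stopProcessing="true">
        <match url="^productA/(.*)" />
        <action type="Rewrite" url="http://localhost:8000/{R:1}" />
      </rule>
      <!-- repeat for productB -> :8001 and productC -> :8002 -->
    </rules>
  </rewrite>
</system.webServer>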


TDD + DDD: Model abstractions

I've recently had an interesting experience but haven't found a satisfying answer so far: I'm a big fan of DDD and try to define rich domain objects with behavior and good information hiding, even if the team officially doesn't practice DDD. At the end of the day, it doesn't matter, as you have a well-defined object which represents something in the problem domain.
That said, I would also like to practice TDD more. Unfortunately, if I test a service that uses such rich domain models, the models are usually not abstracted. Therefore, to test the behavior of the service, I need to set up the model as well. This model comes with its own invariants etc., so with every service test, I also test the model the service is using.
This seems like a big no-go, as I'm not only "not really unit-testing", but it's also troublesome to set up the tests, as the arrange-code gets large.
In my opinion, there seems to be no way around this except to start creating interfaces for models. But it seems like I am the only person thinking so. For example, here is a long article on why this is an anti-pattern:
https://lostechies.com/jamesgregory/2009/05/09/entity-interface-anti-pattern/
I'm also not too keen on creating interfaces for all models, as they should really represent something, and adding another layer of abstraction just for testing seems like overkill. That said, what would be the best solution here? How do people in the field who combine DDD and TDD handle this?
This seems like a big no-go, as I'm not only "not really unit-testing", but it's also troublesome to set up the tests, as the arrange-code gets large.
I think you can dismiss "not really unit-testing"; the important thing is to use tools that are fit for purpose, not the branding.
That said, troublesome to set up the tests is a legitimate concern, and all by itself sufficient excuse to look for a way to improve the design.
If your service were tightly coupled to some third party implementation, that offered no affordances for substitution, what would you do to decouple that from your tests? The usual answer would be to introduce a seam - a new design element between your code and the 3rd party code.
The two important characteristics of the seam:
it does afford substitution, which is to say, you have an interface.
the implementation of the interface that integrates with the third party code is "so simple there are obviously no deficiencies".
Then, in your tests, you introduce a substitute implementation.
The game with your "domain model" is exactly the same. Assuming that you are applying the usual lifecycle patterns, the seam includes a substitute for the repository and a substitute for the aggregate root entity.
Some good news: you don't necessarily need to shadow the entire aggregate, only the parts of the interface that your service cares about. In effect, what you are doing is defining - for each service - the contract that describes the interactions between your service and the domain model. "Role interfaces" will be a useful search term here.
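A minimal C# sketch of such a seam, with hypothetical names (ICanDoSomething, User, and PermissionService are illustrative): the service depends on a narrow role interface rather than the concrete aggregate, so a test can substitute a trivial stub:

// Role interface: only the behavior this particular service cares about
public interface ICanDoSomething
{
    bool CanDoSomething();
}

// The real aggregate implements the role alongside its invariants and other behavior
public class User : ICanDoSomething
{
    public bool CanDoSomething() => true; // real rules elided
}

// The service depends on the role, not on User, so tests can pass
// a hand-rolled stub instead of a fully arranged aggregate
public class PermissionService
{
    public bool CanDoService(ICanDoSomething user) => user.CanDoSomething();
}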
First I would make sure these two conditions are met:
Domain models are POJOs
Domain layer isolation (other layers can access the domain layer, but not the other way around)
Then a Factory, Builder, or TestHelpers can be used to bring models to the desired state for tests, e.g. the builder sketch below.
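A minimal test data builder sketch (UserBuilder is hypothetical and assumes a User(name, isAdmin) constructor): defaults keep the arrange code short, while construction still goes through the model's own constructor so its invariants hold:

public class UserBuilder
{
    private string _name = "any-name";   // sensible default
    private bool _isAdmin = false;

    public UserBuilder Named(string name) { _name = name; return this; }
    public UserBuilder AsAdmin() { _isAdmin = true; return this; }

    // Building goes through the real constructor (assumed signature),
    // so the model's invariants are still enforced
    public User Build() => new User(_name, _isAdmin);
}

// Usage: var admin = new UserBuilder().Named("alice").AsAdmin().Build();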
Basics
Testing Scopes
Unit Testing
Integration Testing
Domain Models
These should be unit tests, which test the Domain Model / Aggregate methods.
Services
These should be integration tests, which test the integration of Service methods and the associated models.
My Broad Approach
When you're testing your domain models, there may be many variants that you'll need to account for in your unit tests.
When these then need to be used within an integration test as well, I tend to go for some sort of CreationFactory (or ArrangementFactory) for your domain models.
You can then use these in both sets of tests.
So for example...
public class ArrangeUser
{
    public static User StandardUser()
    {
        return new User(/* ...standard... */);
    }

    public static User AdminUser()
    {
        return new User(/* ...admin... */);
    }
}
Then in your Unit Test...
// Arrange
User standardUser = ArrangeUser.StandardUser();
// Act
bool canDoSomething = standardUser.CanDoSomething();
// Assert
Assert.True(canDoSomething);
Then in your Integration Test...
// Arrange
User standardUser = ArrangeUser.StandardUser();
ServiceToTest service = new ServiceToTest(standardUser); // replace with some sort of Repository Mock or whatever suits.
// Act
bool canDo = service.CanDoService();
// Assert
Assert.True(canDo);
This way you can test both the unit aspect and the service aspect - by creating a common way to build the arrangements - without having to abstract out the entities, and it solves the problem of recreating the same thing over and over again.
NB: This is just a basic code demo that can be made more complex, based on the scenario or your preferred test style.
I had a similar challenge and, together with my team, we created a tool that simplifies the test data arranging process by employing a random data generator: https://github.com/ocadotechnology/test-arranger. Especially take a look at:
How to organize tests with Test Arranger as it explicitly refers to the common DDD building blocks and explains how to arrange test data around them. In my case, following those recommendations resulted in a significant reduction in the amount and complexity of code for preparing the test data.
Custom Arrangers as it shows how to deal with the model invariants.
Besides the recommendations given on the test-arranger page, it is also handy to use Lombok's @Builder(toBuilder = true) (or an equivalent like Kotlin's copy method on data classes) on your domain classes. With the toBuilder method you can easily adjust randomly generated value objects and entities to the needs of a particular test case.
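For example, a small Java sketch of that combination (assuming test-arranger's Arranger.some factory method; the Customer class and its fields are illustrative):

import com.ocadotechnology.gembus.test.Arranger;
import lombok.Builder;

@Builder(toBuilder = true)
class Customer {
    String name;
    String email;
}

class CustomerTestData {
    static Customer someVip() {
        // random but valid instance, then adjusted for the needs of this test case
        return Arranger.some(Customer.class)
                .toBuilder()
                .email("vip@example.com")
                .build();
    }
}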

Domain Logic and Business Logic

In one of my projects, I'm using an n-tier architecture
DAL (Repository Pattern) <-> BLL (POCO Services) <-> Web UI (ASP.NET MVC)
I created a generic repository and everything is fine on the DAL layer.
In the Business Logic Layer, I have my service methods which operate like this (the example I love to use, because pizza :)
myOven.Bake(myPizza);
Even so, I need some specific information that is internal to the myPizza object, like this:
myPizza.GetBakeTime();
I know I can use something like:
myOven.GetBakeTimeFor(myPizza);
which can calculate it, but I don't want to put that specific logic into the myOven object (the service layer here); instead, I want to include it in myPizza, like:
public partial class Pizza
{
    public double GetBakeTime()
    {
        // calculate bake time and return it, based on other variables
    }
}
I mean, to extend my ORM-generated class and provide this functionality.
My question: I know that this can be done theoretically, but are there any considerations I should take into account when using both Domain Logic and Business Logic in the same class?
The Domain Layer should handle ONLY business-related functionality. The Repository handles the persistence of data. Those two have different purposes and should not be mixed together.
Also the Domain layer pretty much is the Business Layer. For this particular example, where you only want the baking time, then a specialized query repository should know the answer without involving the Domain (because it's precomputed). If you want to know how much time is left for baking, then a Service (part of the Domain) can get the value using the Oven and Pizza entities.
However, this is already too specific and might not be suitable at all for the real problem you want to solve.
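To illustrate the split with a minimal sketch (Pizza, Oven, and BakingService are hypothetical, and all numbers are made up): the calculation that depends only on the pizza's own state lives on the entity, while logic that needs both entities goes into a domain service:

using System;

public class Pizza
{
    public double DiameterCm { get; set; }
    public double ToppingWeightGrams { get; set; }

    // Domain logic: derived purely from the pizza's own state
    public double GetBakeTime() => 8 + DiameterCm * 0.2 + ToppingWeightGrams * 0.01;
}

public class Oven
{
    public double TemperatureCelsius { get; set; }
}

// Logic that needs both entities belongs in a domain service
public class BakingService
{
    public double RemainingBakeTime(Oven oven, Pizza pizza, double elapsedMinutes)
    {
        var factor = 220.0 / oven.TemperatureCelsius; // hotter oven, shorter bake
        return Math.Max(0, pizza.GetBakeTime() * factor - elapsedMinutes);
    }
}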

RIA Services, how do I organize the services?

I am working on a project that is going to be using RIA services. The Visual Studio solution has 2 projects: one for the UI and the other for the domain logic. The initial approach was to have multiple domain service classes inside the domain logic project (to keep it organized). After receiving a certain compile error, I came across this issue with RIA:
http://forums.silverlight.net/forums/p/111058/257398.aspx
So my question is: if I have to use one service file, it is going to have thousands of stubs and queries because of the magnitude of the database/project. How would I go about organizing this? An even better question would be: is there a workaround to the problem listed in the forum post I linked?
Thanks in advance for the responses.
We ran into a similar situation. Although the design still smells a bit because the DomainService class has a lot of methods, we were able to organize it a bit.
Choose a way to divide up the responsibilities of the DomainService class. We chose to do this by entity, but it could be along some other line. You can create a separate file for each responsibility, where each file contains a partial class implementation of your DomainService. Our naming convention for each file was DomainService.Entity.cs.
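A sketch of that layout (CatalogDomainService, Customer, and Order are hypothetical names; entity key attributes and real queries are omitted):

using System.Linq;
using System.ServiceModel.DomainServices.Server;

public class Customer { public int Id { get; set; } }
public class Order { public int Id { get; set; } }

// CatalogDomainService.cs - the single DomainService, declared partial
public partial class CatalogDomainService : DomainService { }

// CatalogDomainService.Customer.cs - customer-related operations only
public partial class CatalogDomainService
{
    public IQueryable<Customer> GetCustomers() =>
        Enumerable.Empty<Customer>().AsQueryable(); // real query omitted
}

// CatalogDomainService.Order.cs - order-related operations only
public partial class CatalogDomainService
{
    public IQueryable<Order> GetOrders() =>
        Enumerable.Empty<Order>().AsQueryable(); // real query omitted
}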
Another thing we did was create a separate DomainService for some cross-cutting concerns. We have one for logging messages from the Silverlight client back to the server, and one for authentication only.

Admin interface to manage two related data sources

In the project there are two data sources: one is the project's own database, the other is a (semi-)legacy web service. The problem is that the admin part has to keep them in sync and manage both, so that the user doesn't have to know they're separate (or does know, but doesn't care).
Here's an example: there's a list of languages. Both apps - the project and the legacy one - need to use them. However, each adds its own meaning. For example, the project may need an active/inactive flag, and the legacy app will need a language code.
But the admin part has to manage everything - language name, active/inactive, language code. When loading, data from both systems has to be merged and presented; when saving, data has to be updated in both systems.
So, what's the best way to represent this separated data (to be used in the admin page)? Note that I use ASP.NET MVC / NHibernate.
How do I manage legacy data?
Do I connect the admin part to the legacy web service's external interface - which currently only has GetXXX() methods - and add the missing C[R]UD methods?
Or do I connect directly to the legacy database - which is possible, since I do control it?
Where do I do split/merge of data - in the controller/service layer, or in the repository/data layer?
In the controller layer I'd do var viewmodel = new ViewModel { MyData = ..., LegacyData = ... };. The problem: the code gets cluttered with legacy issues.
In the data layer, I'd do var model = repository.Get(id) and the model will contain data from both worlds; when I do repository.Save(entity), it will update both data sources - only project-specific fields will be stored in the local db. The problems: a) a possibly leaky abstraction; b) data is always fetched from the web service while it is only needed sometimes, and usually only for the admin part.
A modification: use an ICombinedRepository<Language> which provides the additional split/merge. Problem: I still need either a new model or an IWithLegacy<Language, LegacyLanguage>...
Have a single "sync" method; this would remove legacy items not present in the project's item list, update those that are present, create legacy items that are missing, etc.
Well, to summarize the main issues:
do I develop a CRUD interface on the web service, or connect directly to its database (which is under my complete control, so I may even later decide to move that web service part into the main app or make it use the main db)?
do I have separate classes for the project's and the legacy entities, managed separately, or do the project's entities carry all the legacy fields, managed transparently when saved/loaded?
Anyway, are there any useful tips on managing mostly duplicated data from different sources? What are the best practices?
In the non-admin part, I'd like to completely hide the notion of the legacy data, which is what I do now, behind the repository interfaces. But for the admin part it's not that clear or easy...
What you are describing here seems to warrant the need for an Anti-Corruption Layer. You can find solutions related to this topic here: DDD, Anti Corruption layer, how-to?
When you have two conceptual Bounded Contexts, but you're only using DDD for one of them, the Anti-Corruption layer comes into play. When reading from your data source (performing a get operation [R]), the anti-corruption layer will translate your legacy data into usable objects for your project. When writing to your data source (performing a set operation [CUD]), the anti-corruption layer will translate your DDD objects into objects understood by your legacy code.
Whether or not to use the existing Web Service depends on whether you're willing to change existing code. Sticking with DRY practices, you don't want to duplicate what you already have. If you want to keep the Web Service, you can add the CUD methods inside the anti-corruption layer without impacting your legacy application.
In the anti-corruption layer, you will want to make use of adapters and facades to bring together separate classes for your DDD project and the legacy application.
The anti-corruption layer is exactly where you handle splitting and merging.
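As a rough illustration of that split/merge for the language example above - a minimal sketch with hypothetical types (Language, LegacyLanguage, and LanguageTranslator are all illustrative, not an existing API):

public class Language              // the project's own model
{
    public string Name { get; set; }
    public bool IsActive { get; set; }    // project-specific
    public string Code { get; set; }      // owned by the legacy system
}

public class LegacyLanguage        // the shape the legacy web service understands
{
    public string Name { get; set; }
    public string Code { get; set; }
}

public class LanguageTranslator
{
    // Merge on read: combine legacy fields with locally stored ones
    public Language ToProject(LegacyLanguage legacy, bool isActive) =>
        new Language { Name = legacy.Name, Code = legacy.Code, IsActive = isActive };

    // Split on write: only the fields the legacy system knows about go back
    public LegacyLanguage ToLegacy(Language language) =>
        new LegacyLanguage { Name = language.Name, Code = language.Code };
}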
Let me know if you have any questions on this, as it can be a somewhat advanced topic. I'll try to answer as best I can.
Good luck!

Handling the Same Class Definition From Multiple Web Services

The situation:
We have a library project that houses much of our code for the various integrations we work on. Many of the integrations consume web service apis, and my supervisor doesn't want 5 gazillion web service references added to the project.
What we generally do, then, is add the reference in a new project, copy the generated Reference.vb into the solution, and just call the generated code. Not terribly convenient if changes are made to the service, but it works.
Recently, I ran into a problem where we have to use 3 web services for the same integration. Two of them contain the same class definitions; however, the classes are in different namespaces because they belong to different services. This became a problem for me because one of the services searches for a user based on user ID, and the other pulls back blocks of users. Both return an object (or list of objects) that is semantically exactly the same, and I need to process the data the same way whether it came from one service or the other.
My solution was to strip out the duplicated classes in the service proxies and replace them with classes inherited from common base classes. This allowed me to work with both objects as if they were the same; however, it required modifying the generated web service proxy, so this change will need to be made every time I regenerate the proxy.
I'm curious what you all might think a better solution to this would be.
You're going to regret playing games with copying Reference.vb and editing generated files.
Switch to WCF and you'll be able to tell it you want to reuse the types, instead of having multiple types that are more or less the same.
BTW, they would be "less" the same if not all of the web references are updated at the same time after a server change.
The other option would be to build an abstraction layer on top of the pre-generated web service proxies, such that when you make calls to the abstraction layer you can always use the same objects, which get squeezed into (and out of) the web service proxies inside the abstraction layer. This would also allow for unit testing :)
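A minimal sketch of that abstraction layer, with stand-in types (the ServiceA/ServiceB namespaces mimic the generated proxies; CommonUser and UserMapper are hypothetical):

// Stand-ins for the two generated proxy types (normally in each service's Reference.vb)
namespace ServiceA { public class User { public string Id; public string Name; } }
namespace ServiceB { public class User { public string Id; public string Name; } }

// The single shared type the rest of the integration code works with
public class CommonUser
{
    public string Id { get; set; }
    public string Name { get; set; }
}

public static class UserMapper
{
    // One overload per proxy type squeezes proxy objects into the shared type
    public static CommonUser From(ServiceA.User u) =>
        new CommonUser { Id = u.Id, Name = u.Name };

    public static CommonUser From(ServiceB.User u) =>
        new CommonUser { Id = u.Id, Name = u.Name };
}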
I think you really should be looking at WCF for .NET 3.5+, but for .NET 2.0 look at something like WSCF (Web Services Contract First), which defines the contracts in XML and generates a set of libraries reusable across services. E.g. you define a MyCompany.WS.Common namespace and use that namespace in multiple projects. The code generation then builds a shared library of types which gets used across all the web services. We use this extensively in our .NET 2 solutions and it's great. We had to do some additional work around the code generation to get it to fit into our build process, but once that was done we never looked back.
We're migrating to .NET 3.5 over time, so WSCF will become obsolete for us.
Here's the link to the thinktecture site for WSCF.
wsdl.exe with the /sharetypes switch allows the same types to be used across multiple service definitions, provided the wire signatures match. I was unable to use it in my situation, though, because the various WSDL contracts were carelessly namespaced.