Given a structure for, say, a bank account...
class Account
{
    public virtual int Id { get; set; }
    public virtual int Balance { get; set; }
}
And I want to track the transactions performed, say with a simple class...
class Transaction
{
    public virtual int Id { get; set; }
    public virtual Account Account { get; set; }
    public virtual DateTime Timestamp { get; set; }
    public virtual int Amount { get; set; }
}
Assuming I want to keep track of the transactions performed, which is the more intelligent approach here?
interface IAccountRepository
{
    void Deposit(int account, int amount);
}
or ...
class Account
{
    public void Deposit(int amount)
    {
        // this one is easier, but then I have to repeat
        // myself because I need to store the Transaction
        // in the database too.
    }
}
The Repository pattern seems to be the most encompassing, since the repository will have a handle to the unit of work/ORM/session (I'm using NHibernate) - but a class-level method seems more straightforward, since it's more in line with the standard object-oriented principle of 'do this to this object'.
The problem is that if I want to log the transactions, I have to make sure they get saved as database objects too. Going the second route, with a class-level method, I can't do this inside the Account class, so I would end up having to repeat myself.
My other option is another abstraction...
interface ITransactionRepository
{
    void CreateTransaction(int account, int amount);
}
This works fine, and it kind of wraps the two options together, because I would find the account in the TransactionRepository and then perform its Deposit method, but it doesn't feel like a wise approach. I don't know why; my gut just tells me it isn't the best way to go.
This applies to more than just this one set of classes, of course - it's a principle of design. I wanted to see what more veteran programmers would do in this situation, if you have any thoughts.
I would suggest using the Repository pattern for CRUD (create, read, update, delete) operations on the accounts.
interface IAccountRepository
{
    void Add(Account acc);
    void Remove(Account acc);
    Account GetAccountById(int account);
    void Update(Account acc);
}
Then put the Deposit method in the Account class, as you mentioned:
class Account
{
    public void Deposit(int amount)
    {
        Balance += amount;
    }
}
Then you access the account through the repository and update it:
// Get the account by id
Account acc = rep.GetAccountById(23143);

// Deposit the money (amounts are ints here, e.g. cents)
acc.Deposit(2123);

// Update if needed
rep.Update(acc);
Transactions could be done in a similar way, but I would probably store the account id rather than the Account instance in the Transaction:
class Transaction
{
    public virtual int Id { get; set; }
    public virtual int AccountId { get; set; }
    public virtual DateTime Timestamp { get; set; }
    public virtual int Amount { get; set; }
}
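To make that concrete, here is a minimal sketch of how a deposit and its log entry could be composed at the application level. The ITransactionRepository with an Add method is an assumption for illustration, not part of the answer above:

// Hypothetical composition; transactionRepository.Add is assumed.
Account acc = accountRepository.GetAccountById(23143);
acc.Deposit(2123);
accountRepository.Update(acc);

// Record the deposit as its own persistent object.
transactionRepository.Add(new Transaction
{
    AccountId = acc.Id,
    Timestamp = DateTime.UtcNow,
    Amount = 2123
});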
Your question is a nice example of the problems that arise from the structural DB/OO mismatch. In my experience, people tend to favor the first option you described: bypass the Account business object and store the Transaction in the DB by using the repository.
I strongly recommend not letting your DB structure influence the business layer design. Business objects are there to implement business behavior. When you access the repository directly for an operation that belongs to the Account class from a business point of view, your business rules end up in a strange location - somewhere outside the Account class.
To answer your question, the approach that I always take in these situations is as follows:
Adhere to OO principles in your business logic. Structure your classes and interactions according to real-world processes, so business rules end up where you'd expect them.
Provide a storage-oriented view of the DB. What I mean by this is that your repository need not mimic the structure of your business objects. For example, implement a repository that does CRUD for both the accounts and the transactions (see the sketch below).
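As a sketch of that separation (names are illustrative, and Transaction uses the AccountId variant from the earlier answer): let Account.Deposit create the Transaction itself, so the business rule stays in the business object, and leave persisting both objects to the repository.

class Account
{
    public virtual int Id { get; set; }
    public virtual int Balance { get; set; }

    // The business rule lives here; storage is the repository's job.
    public virtual Transaction Deposit(int amount)
    {
        Balance += amount;

        return new Transaction
        {
            AccountId = Id,
            Timestamp = DateTime.UtcNow,
            Amount = amount
        };
    }
}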
Related
I am currently in the middle of writing unit tests for my domain models.
To lay a bit of context, I have a Role Group class which has a list of Roles; it also has a list of Users that currently have this Role Group assigned to them.
A Role Group is assigned to a User, so all the methods to do this are in the User domain model. This means the Users property on the Role Group side is basically pulled in from the database.
Both Role Group and User are Aggregate Roots and they can both exist on their own.
Unit Testing a Domain Model Containing Lists Populated From The Database
What I am struggling with is that I cannot test the CanArchive method below, because I have no way of adding a User to the property - apart from the bad option of using the Add method, which I don't want to use as it breaks the whole idea of Domain Models controlling their own data.
So I am not sure if my Domain Models are wrong, or if this logic should be placed in a Service, as it is an interaction between two Aggregate Roots.
The Role Group Class:
public class RoleGroup
{
    public bool Archived { get; private set; }
    public int Id { get; private set; }
    public string Name { get; private set; }
    public virtual IList<Role> Roles { get; private set; }
    public virtual IList<User> Users { get; private set; }
}
Updating Archived Method:
private void UpdateArchived(bool archived)
{
    if (archived && !CanArchive())
    {
        throw new InvalidOperationException("Role Group can not be archived.");
    }

    Archived = archived;
}
Method to check if the Role Group can be archived:
private bool CanArchive()
{
    if (Users.Count > 0)
    {
        return false;
    }

    return true;
}
Method that sets the User's Role Group in the User class. This is called when a user is created or updated in the user interface:
private void UpdateRoleGroup(RoleGroup roleGroup)
{
    if (roleGroup == null)
    {
        throw new ArgumentNullException("roleGroup", "Role Group can not be null.");
    }

    RoleGroup = roleGroup;
}
A few thoughts:
Unit testing a domain object should not rely upon persistence layer stuff. As soon as you do that, you have an integration test.
Integrating changes between two aggregates through the database is theoretically not a good idea in DDD. Changes caused by User.UpdateRoleGroup() should either stay in the User aggregate, or trigger public domain methods on other aggregates to mutate them (in an eventually consistent way). If those methods are public, they should be accessible from tests as well.
With Entity Framework, none of that matters much, since it is not good at modelling read-only collections and you'll most likely have a mutable Users list. I don't see calling roleGroup.Users.Add(...) to set up your data in a test as a big problem, even though you should not do it in production code. Making the list internal with InternalsVisibleTo your test project - as @NikolaiDante suggests - and wrapping it in a public read-only collection would make that a bit less dangerous, as sketched below.
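A rough sketch of that compromise (the test assembly name is an assumption):

// using System.Collections.Generic;
// using System.Collections.ObjectModel;

// In the domain assembly, e.g. AssemblyInfo.cs:
// [assembly: InternalsVisibleTo("MyDomain.Tests")]

public class RoleGroup
{
    // Mutable only inside the assembly (and the test project).
    internal IList<User> UsersInternal { get; } = new List<User>();

    // Production code only ever sees a read-only view.
    public IReadOnlyCollection<User> Users
    {
        get { return new ReadOnlyCollection<User>(UsersInternal); }
    }
}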
What I am struggling with is that I cannot test the CanArchive method below, because I have no way of adding a User to the property.
How does the framework that loads the RoleGroup instances from the database populate the users? This is the question you must ask yourself to find the solution for your unit tests. Just do it the same way.
I don't know what language you use. In Java, for example, you can use the reflection API to set private fields. There are also a lot of test frameworks that provide convenience methods for this job, e.g. Deencapsulation or Whitebox.
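The same trick in C# uses System.Reflection to invoke the private setter. This sketch assumes the RoleGroup from the question, and that it can be constructed in a test:

// using System.Reflection;
var roleGroup = new RoleGroup(); // assumed constructible in tests
var users = new List<User> { new User() };

// Invoke the private setter of the public Users property.
var usersProperty = typeof(RoleGroup).GetProperty("Users");
usersProperty.GetSetMethod(nonPublic: true).Invoke(roleGroup, new object[] { users });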
I have an existing web service I need to expand, but it has not gone into production yet. So, I am free to change the contracts as I see fit. But I am not sure of the best way to define the methods.
I am leaning towards Method 2, for no other reason than that I cannot think of good names for the parameters classes!
Are there any major disadvantages to using Method 2 over Method 1?
Method 1
[DataContract(Namespace = Constants.ServiceNamespace)]
public class MyParameters
{
    [DataMember(Order = 1, IsRequired = true)]
    public int CompanyID { get; set; }

    [DataMember(Order = 2, IsRequired = true)]
    public string Filter { get; set; }
}
[ServiceContract(Namespace = Constants.ServiceNamespace)]
public interface IMyService
{
    [OperationContract, FaultContract(typeof(MyServiceFault))]
    MyResult MyMethod(MyParameters parameters);
}
Method 2
[ServiceContract(Namespace = Constants.ServiceNamespace)]
public interface IMyService
{
    [OperationContract, FaultContract(typeof(MyServiceFault))]
    MyResult MyMethod(int companyID, string filter);
}
Assuming you are using WCF and the WS-I Basic Profile, the biggest disadvantage of Method 2 over Method 1 is that it makes future evolution of the contract more difficult. Parameters classes allow the addition of new fields without creating a new version of the contract, whereas a straight method call does not (because overloaded methods are not allowed in WS-I Basic). In WCF there are some hoops you can jump through to get around this restriction, but they all lead to a less readable, more configuration-heavy solution.
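To make the evolution point concrete: a new optional member can be added to the parameters class without breaking existing clients. AsOfDate below is purely illustrative:

[DataContract(Namespace = Constants.ServiceNamespace)]
public class MyParameters
{
    [DataMember(Order = 1, IsRequired = true)]
    public int CompanyID { get; set; }

    [DataMember(Order = 2, IsRequired = true)]
    public string Filter { get; set; }

    // Added later; optional, so clients built against the
    // original contract can simply omit it.
    [DataMember(Order = 3, IsRequired = false)]
    public DateTime? AsOfDate { get; set; }
}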
For naming parameters classes, I find it helps to think of the method in terms of the underlying message that it represents: the method name is an action, and the parameters are the message associated with that action. If you tell WCF to generate message contracts when you add a service reference, you'll get to see all of that, and it can sometimes help you understand how it all hangs together, although it does make the API more verbose and is unnecessary most of the time.
I have, for example, a class like:
public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}
Now I have to access a list of type List<Person> from several pages, so I implemented a singleton class which stores this information.
Is there a better solution, or is my approach right?
As with most questions of this type, it depends on the app.
A common practice for objects/lists which are required by multiple pages is to have an application-level view model and reference it from each page.
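A minimal sketch of that idea (names are illustrative); each page receives the shared instance instead of reaching for a global singleton:

public class AppViewModel
{
    // Application-scoped state shared by all pages.
    public List<Person> People { get; } = new List<Person>();
}

public class PeoplePage
{
    private readonly AppViewModel viewModel;

    public PeoplePage(AppViewModel viewModel)
    {
        this.viewModel = viewModel;
    }
}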
I have some services at the moment that return a DTO with the following fields:
[DataMember]
public int Id { get; set; }

[DataMember]
public string Name { get; set; }
and I want to add more to this service by adding the following properties:
[DataMember]
public virtual DateTime StartDate { get; set; }
I'm not in a position where I can update the consumers of these services, though - the clients do that themselves.
My question is: will the old clients be able to just skip these new properties while the new ones take advantage of them, or will the new properties be a problem for serialization?
As long as the old properties do not change (and the new one is marked as optional), you should be all right.
That said, you should publish the new contract and get the clients to regenerate the service reference - or deploy the new version to a different endpoint, so that when they're ready to switch they are forced to point to the new one.
From what I have seen, the DataContractSerializer just puts null in for properties not found when deserializing. That makes tracking down some bugs quite tricky - sometimes I would prefer it to be stricter and throw an exception.
Another option to consider is subclassing the original DTO to create a new derived class.
In order for serialization to work properly, you need to specify the available derived classes for the supertype with an attribute:
[DataContract]
[KnownType(typeof(DerivedDTO))]
public class OriginalDTO
In code where you use the additional property, you will need to cast the object to a DerivedDTO to get access to it (I use the as keyword for this and check whether the resulting reference is null before using it), as in the sketch below.
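Putting those pieces together, using the StartDate addition from the question:

[DataContract]
[KnownType(typeof(DerivedDTO))]
public class OriginalDTO
{
    [DataMember]
    public int Id { get; set; }

    [DataMember]
    public string Name { get; set; }
}

[DataContract]
public class DerivedDTO : OriginalDTO
{
    [DataMember]
    public DateTime StartDate { get; set; }
}

// Consuming code that knows about the new property:
var derived = dto as DerivedDTO;
if (derived != null)
{
    // Safe to use the new property here.
    Console.WriteLine(derived.StartDate);
}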
This holds as long as the new member StartDate is not declared a required field - so this would not work:
[DataMember(IsRequired = true)]
public virtual DateTime StartDate { get; set; }
But as long as you leave out IsRequired = true, you should be fine:
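That is, the backward-compatible declaration is simply:

[DataMember] // optional by default, so old clients can ignore it
public virtual DateTime StartDate { get; set; }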
We have a requirement to add an event reminder when a user enters their email address on an event page. Event is another domain object. Our initial thought was to create a Customer domain object and related CustomerService:
public class CustomerService
{
    public void AddEventReminder(string emailAddress, int eventId)
    {
        var customer = new Customer(emailAddress);
        customer.AddEmailReminder(eventId);
    }
}
How can we verify in a unit test that the AddEmailReminder method was indeed called on the new customer?
My thoughts:
Use a factory to create the customer. This smells, because I thought you were only supposed to use a factory where there is some complexity in the object creation.
Bad code. Maybe there is a better way to do this?
Moq magic.
On a separate note (maybe it is related), how do we decide which is the aggregate root here? We have arbitrarily decided the customer is, but it could equally be the event. I have read and understand articles on aggregate roots, but it is unclear in this scenario.
In cases like this I would create a protected virtual method in the service that creates the customer, override that method in a test subclass, and make it return a mock Customer object. Then you can verify on the mock Customer that AddEmailReminder was called.
Something like:
public class CustomerService
{
    public void AddEventReminder(string emailAddress, int eventId)
    {
        var customer = CreateCustomer(emailAddress);
        customer.AddEmailReminder(eventId);
    }

    protected virtual Customer CreateCustomer(string emailAddress)
    {
        return new Customer(emailAddress);
    }
}
and in the test, override it with a small test subclass (this should illustrate the point):
class TestableCustomerService : CustomerService
{
    public Customer MockCustomer = new Customer("email");

    protected override Customer CreateCustomer(string emailAddress)
    {
        return MockCustomer;
    }
}

void TestCustomerCreation()
{
    var customerService = new TestableCustomerService();
    customerService.AddEventReminder("email", 14);

    Assert.AreEqual(14, customerService.MockCustomer.EventReminder /* ? */);
}
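Alternatively, the "Moq magic" option from the question works if you are willing to inject a factory and make AddEmailReminder virtual. ICustomerFactory and the CustomerService constructor below are assumptions for the sketch:

public interface ICustomerFactory
{
    Customer Create(string emailAddress);
}

[Test]
public void AddEventReminder_AddsReminderToNewCustomer()
{
    // Requires a Customer("email") constructor and a virtual AddEmailReminder.
    var mockCustomer = new Mock<Customer>("email");
    var mockFactory = new Mock<ICustomerFactory>();
    mockFactory.Setup(f => f.Create("email")).Returns(mockCustomer.Object);

    // Assumes CustomerService is changed to take the factory.
    var service = new CustomerService(mockFactory.Object);
    service.AddEventReminder("email", 14);

    // Verify the interaction instead of inspecting state.
    mockCustomer.Verify(c => c.AddEmailReminder(14), Times.Once());
}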
Thoughts on the CustomerService API
Are there any particular reasons why you have decided to encapsulate this operation in a CustomerService? This looks a little Anemic to me. Could it possibly be encapsulated directly on the Customer instead?
Perhaps you left something out of the CustomerService code example to simplify things...
However, if you must, changing the signature to take a Customer instance solves the problem:
public void AddEventReminder(Customer customer, int eventId)
but then again, an Int32 hardly qualifies as a Domain Object, so the signature should really be (note that event is a reserved word in C#)
public void AddEventReminder(Customer customer, Event @event)
The question now is whether this method adds any value at all.
Which is the aggregate root?
None of them, I would think. An aggregate root indicates that you manage children only through the root, and that wouldn't make sense either way in this case.
Consider the options:
If you make Event the root, it would mean that you could have no CustomerRepository, and the only way you could retrieve, edit and persist a Customer would be through an Event. That sounds very wrong to me.
If you make Customer the root, you can have no EventRepository, and the only way you could retrieve, edit and persist an Event would be through a specific Customer. That sounds just as wrong to me.
The only remaining possibility is that they are separate roots. This also means that they are only loosely connected to each other, and that you will need some kind of Domain Service to look up Events for a Customer, or Customers for an Event.
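Such a Domain Service can stay very small; a sketch (names are illustrative):

public interface IEventReminderService
{
    // Connects the two roots without making either one own the other.
    void AddEventReminder(Customer customer, Event @event);
    IEnumerable<Event> EventsFor(Customer customer);
    IEnumerable<Customer> CustomersFor(Event @event);
}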