I have a "MyUnits" application that let's manage Units (like meters, kilograms, pounds, miles, km/h...). The model is complex, it supports unit compatibilities, operations, conversions, etc.
I have another application (MyApp) that will need to use "units", so I want to make it use my "Units" application.
My idea is to have a Units web service, UnitService, that consumes and returns a Unit DTO, UnitDTO.
In MyApp, I have this model:
Operand
    value: float
    unit: UnitDTO

OperationAdd
    operand1: Operand
    operand2: Operand
    execute()
The problem: in OperationAdd.execute(), I need to check that the units are compatible (for example).
So either:
UnitDTO has a method that calls UnitService::areCompatible, but that is wrong! How can a DTO (which should only contain data) know about UnitService, which is a web service? It shouldn't.
OperationAdd.execute() calls UnitService::areCompatible, but that is wrong too! How can OperationAdd (an entity) know about UnitService, which is a web service? It shouldn't.
Or I have an OperationService that does the work (and that may call services), but then my Operation entities would be mere data containers, entities with no methods, and that's not really what DDD is about.
I don't want anemic entities, but when an entity needs a service, what should I do?
And: am I wrong in thinking that UnitDTO can be used as a VO?
A Unit should "advertise" its compatibilities. I don't know if the language you're using supports generics, but I would do it this way.
First of all, the UnitDto contains state for some Unit. Use the UnitDto to create the concrete Unit (which, by the way, is a VO). Each Unit should know its compatibilities. UnitDTO should remain only a DTO; create other VOs that do the work.
C#
public class UnitBase
{
    public virtual bool IsCompatible(UnitBase unit)
    {
        return false;
    }
}

public interface ICompatible<T>
{
    bool IsCompatible(T unit);
}

public class UnitFeet : UnitBase {}

public class UnitMeter : UnitBase, ICompatible<UnitFeet>
{
    public bool IsCompatible(UnitFeet unit) { return true; }

    public override bool IsCompatible(UnitBase unit)
    {
        // Dispatch to the specific overload only when the runtime type matches,
        // instead of blindly casting (which would throw for other unit types).
        var feet = unit as UnitFeet;
        return feet != null && IsCompatible(feet);
    }
}
The compiler should choose the right overload depending on the compared unit. The ICompatible interface can also have conversion methods from one unit to another (see the sketch at the end of this answer). But let's suppose you want things more abstract:
public class OperandValue : ICompatible<OperandValue>
{
    public decimal Value { get; set; }
    public UnitBase Unit { get; set; }

    public bool IsCompatible(OperandValue other)
    {
        return Unit.IsCompatible(other.Unit);
    }

    public static OperandValue FromDto(Operand data)
    {
        // Assumes a factory on UnitBase that builds the concrete Unit VO from the DTO.
        return new OperandValue { Value = data.Value, Unit = UnitBase.FromDto(data.Unit) };
    }
}
OperandValue first = OperandValue.FromDto(operand1);
OperandValue second = OperandValue.FromDto(operand2);

if (first.IsCompatible(second))
{
    OperationService.Add(first, second);
}
So you don't need a UnitService::AreCompatible method, just a lot of polymorphism and careful object design.
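And, as mentioned, the ICompatible&lt;T&gt; interface can carry the conversions too. A sketch of that extension (the ConvertTo member and the conversion factor are illustrative, not part of the original design):

public interface ICompatible<T>
{
    bool IsCompatible(T unit);
    decimal ConvertTo(T unit, decimal value);
}

public class UnitMeter : UnitBase, ICompatible<UnitFeet>
{
    public bool IsCompatible(UnitFeet unit) { return true; }

    public decimal ConvertTo(UnitFeet unit, decimal value)
    {
        return value * 3.2808m; // 1 meter is roughly 3.2808 feet
    }
}

This keeps each conversion next to the compatibility declaration, so adding a new unit pair means implementing one more ICompatible<T>.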
Are methods that return void but change the state of their arguments (i.e. provide a hidden or implicit return value) generally a bad practice?
I find them difficult to mock, which suggests they are possibly a sign of a bad design.
What patterns are there for avoiding them?
A highly contrived example:
public interface IMapper
{
void Map(SourceObject source, TargetObject target);
}
public class ClassUnderTest
{
private IMapper _mapper;
public ClassUnderTest(IMapper mapper)
{
_mapper = mapper;
}
public int SomeOperation()
{
var source = new SourceObject();
var target = new TargetObject();
_mapper.Map(source, target);
return target.SomeMappedValue;
}
}
Yes, to some extent.
What you describe is a typical side effect. Side effects make programs hard to understand, because the information you need isn't contained in the call stack; you also need to know which methods were called before, and in what order.
The solution is to program without side effects. This means you don't change variables, fields, or anything else; instead, you return a new version of what you would normally change.
This is a basic principle of functional programming.
Of course, this way of programming has its own challenges; just consider I/O.
Your code would be a lot easier to test if you did this:
public interface IMapper
{
TargetObject Map(SourceObject source);
}
public class ClassUnderTest
{
private IMapper _mapper;
public ClassUnderTest(IMapper mapper)
{
_mapper = mapper;
}
public int SomeOperation(SourceObject source)
{
    var target = _mapper.Map(source);
    return target.SomeMappedValue;
}
}
You can now test your Map operation and SomeOperation separately. The problem before was that you indeed changed the state of an object, which makes it hard to provide a stub for testing. When the new object is returned instead, you can return a test stub of the target and test your caller method.
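For example, testing SomeOperation then only needs a stub for Map (a sketch assuming Moq, and assuming SomeMappedValue is settable):

[TestMethod]
public void SomeOperation_ReturnsTheMappedValue()
{
    var target = new TargetObject { SomeMappedValue = 42 }; // settable property assumed
    var mapper = new Mock<IMapper>();
    mapper.Setup(m => m.Map(It.IsAny<SourceObject>())).Returns(target);

    var sut = new ClassUnderTest(mapper.Object);

    Assert.AreEqual(42, sut.SomeOperation(new SourceObject()));
}

No state mutation needs to be simulated; the stub simply returns a canned object.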
Mock objects are a good approach for deep behavior testing of a program unit: you pass the mocked dependency to the unit under test and check that it works with the dependency as it should.
Say you have two classes, A and B:
public enum BState { Normal, Special } // concrete stand-in for "some special value"

public class A
{
    private B b;

    public A(B b)
    {
        this.b = b;
    }

    public void DoSomething()
    {
        b.PerformSomeAction();
        if (b.State == BState.Special)
        {
            b.PerformAnotherAction();
        }
    }
}

public class B
{
    public BState State { get; private set; }

    public void PerformSomeAction()
    {
        // some actions
        State = BState.Special;
    }

    public void PerformAnotherAction()
    {
        if (State != BState.Special)
        {
            // fail; for example:
            throw new InvalidOperationException();
        }
    }
}
Imagine class B is being tested with unit test TestB.
To unit test class A, we can either pass B to its constructor (to do state-based testing) or pass B's mock to it (to do behavior-based testing).
Let's say we have chosen the second approach (for example, because we can't verify A's state directly but can do it indirectly) and created unit test TestA (which doesn't contain any reference to B).
So we introduce an interface IDependency, and the classes look like:
public interface IDependency
{
    BState State { get; } // A reads the state, so the interface must expose it
    void PerformSomeAction();
    void PerformAnotherAction();
}

public class A
{
    private IDependency d;

    public A(IDependency d)
    {
        this.d = d;
    }

    public void DoSomething()
    {
        d.PerformSomeAction();
        if (d.State == BState.Special)
        {
            d.PerformAnotherAction();
        }
    }
}

public class B : IDependency
{
    public BState State { get; private set; }

    public void PerformSomeAction()
    {
        // some actions
        State = BState.Special;
    }

    public void PerformAnotherAction()
    {
        if (State != BState.Special)
        {
            // fail; for example:
            throw new InvalidOperationException();
        }
    }
}
and unit test TestA is something similar to:
[TestClass]
public class TestA
{
    [TestMethod]
    public void ShouldPerformAnotherActionWhenDependencyReturnsSomeSpecialValue()
    {
        var d = CreateDependencyMockSuchThatItReturnsSomeSpecialValue();
        var a = CreateA(d.Object);
        a.DoSomething();
        AssertAnotherActionWasPerformedForDependency(d);
    }

    [TestMethod]
    public void ShouldNotPerformAnotherActionWhenDependencyReturnsSomeNormalValue()
    {
        var d = CreateDependencyMockSuchThatItReturnsSomeNormalValue();
        var a = CreateA(d.Object);
        a.DoSomething();
        AssertAnotherActionWasNotPerformedForDependency(d);
    }
}
OK, it's a happy moment for the developer: everything is tested and all tests are green.
But!
When someone modifies the logic of class B (for example, changes if (State != BState.Special) to compare against another value), only TestB fails.
That developer fixes this test and thinks that everything is fine again.
But if you now pass the real B to A's constructor, A.DoSomething will fail.
The root cause is our mock object: it froze the old behavior of B, and when B changed its behavior, the mock didn't reflect the change.
So, my question is: how do I make the mock of B follow the changes in B's behavior?
This is a question of viewpoint. Normally, you mock an interface, not a concrete class. In your example, B's mock is an implementation of IDependency, just as B is. B's mock must change whenever the behavior of IDependency changes, and you can ensure that by looking at all the implementations of IDependency whenever you change the interface's defined behavior.
So the enforcement comes through two simple rules that ought to be followed in the code base:
When a class implements an interface, it must fulfill all defined behavior of the interface, even after modifications.
When you change an interface, you must adapt all implementers to fulfill the new contract.
Ideally, you have unit tests in place that test against the defined behavior of an IDependency; these apply to both B and BMock and catch violations of these rules.
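One way to realize that is a shared, abstract contract fixture that both the real implementation and the hand-made mock must pass (a sketch; BMock stands for the hand-written mock of IDependency):

public abstract class DependencyContractTests
{
    protected abstract IDependency CreateDependency();

    [TestMethod]
    public void PerformSomeAction_PutsDependencyIntoSpecialState()
    {
        var d = CreateDependency();
        d.PerformSomeAction();
        Assert.AreEqual(BState.Special, d.State);
    }

    [TestMethod]
    public void PerformAnotherAction_AfterPerformSomeAction_DoesNotThrow()
    {
        var d = CreateDependency();
        d.PerformSomeAction();
        d.PerformAnotherAction(); // per the contract, this must not fail now
    }
}

[TestClass]
public class BContractTests : DependencyContractTests
{
    protected override IDependency CreateDependency() { return new B(); }
}

[TestClass]
public class BMockContractTests : DependencyContractTests
{
    protected override IDependency CreateDependency() { return new BMock(); }
}

If someone changes B's behavior without updating the contract (or the mock), one of the two derived fixtures goes red.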
I differ from the other answer, which seems to advocate subjecting both the real implementation and the (hand-made?) mock to a set of contract tests that specify the behavior of the role/interface. I've never seen tests that exercise mocks, though it could be done.
Normally you don't handcraft mocks; rather, you use a mocking framework. So, w.r.t. your example, my client tests would have inline statements such as:
new Mock<IDependency>()
    .Setup(d => d.Method(args))
    .Returns(expectedValue);
Your question then is: when the contract changes, how do I guarantee that the inline expectations in the client tests are also updated (or at least flagged) along with the dependency?
The compiler won't help here, and neither will the tests. What you have is a lack of shared agreement between the client and the dependency. You'd have to manually find-and-replace (or use IDE tooling to locate all references to the interface method) and fix them.
The way out is NOT to define lots of fine-grained IDependency interfaces recklessly. Most problems can be solved with a minimal number of chunky roles (realized as interfaces) with clearly defined, non-volatile behavior, so aim to minimize role-level changes. Initially this was a sticking point with me too; however, discussions with interaction-test experts and practical experience have won me over. If role changes do happen all too often, a quick retrospective into the cause of the fickle interfaces should yield better results.
I have a question about testing.
I have a class that returns anomalies. In this class I have two methods that return two different types of anomalies, and one that returns all anomalies (of both types).
This is the example code:
public interface IAnomalyService
{
IList<Anomaly> GetAllAnomalies(object parameter1, object parameter2);
IList<Anomaly> GetAnomalies_OfTypeA(object parameter1);
IList<Anomaly> GetAnomalies_OfTypeB(object parameter2);
}
public class AnomalyService : IAnomalyService
{
public IList<Anomaly> GetAllAnomalies(object parameter1, object parameter2)
{
var lstAll = new List<Anomaly>();
lstAll.AddRange(GetAnomalies_OfTypeA(parameter1));
lstAll.AddRange(GetAnomalies_OfTypeB(parameter2));
return lstAll;
}
public IList<Anomaly> GetAnomalies_OfTypeA(object parameter1)
{
//some elaborations
return new List<Anomaly> { new Anomaly { Id = 1 } };
}
public IList<Anomaly> GetAnomalies_OfTypeB(object parameter2)
{
//some elaborations
return new List<Anomaly> { new Anomaly { Id = 2 } };
}
}
class Anomaly
{
public int Id { get; set; }
}
I've created the tests for the two methods that retrieve the anomalies of type A and type B (GetAnomalies_OfTypeA and GetAnomalies_OfTypeB).
Now I want to test the method GetAllAnomalies, but I'm not sure what to do.
I think I have two ways of testing it:
1) declare GetAnomalies_OfTypeA and GetAnomalies_OfTypeB as virtual in AnomalyService, create a mock of AnomalyService, and use Moq with CallBase set to true to mock the two methods GetAnomalies_OfTypeA and GetAnomalies_OfTypeB.
2) move the method GetAllAnomalies into another class, AllAnomalyService (with interface IAllAnomalyService), pass an IAnomalyService into its constructor, and then test GetAllAnomalies by mocking the IAnomalyService interface.
I'm new to unit testing, so I don't know which solution is better: one of mine or another one entirely.
Can you help me?
Thank you,
Luca
Mocking is a good tool when a class resists testing. If you have the source, mocking is often not necessary. Try this approach:
Create a factory which can return AnomalyService instances with various, defined anomalies (only type A, only type B, both, none, only type C, ...).
Since the three methods are connected, check all three in each test: if only type A anomalies are expected, verify that GetAllAnomalies returns the same result as GetAnomalies_OfTypeA and that GetAnomalies_OfTypeB returns an empty list.
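A sketch of such a cross-check against the real service (the parameters are placeholders, and the assertions use LINQ, so System.Linq is needed):

[TestMethod]
public void GetAllAnomalies_IsTheUnionOfTypeAAndTypeB()
{
    var service = new AnomalyService(); // or a factory-configured instance, as above
    object param1 = null, param2 = null; // placeholders for the real parameters

    var all = service.GetAllAnomalies(param1, param2);
    var typeA = service.GetAnomalies_OfTypeA(param1);
    var typeB = service.GetAnomalies_OfTypeB(param2);

    Assert.AreEqual(typeA.Count + typeB.Count, all.Count);
    CollectionAssert.AreEquivalent(
        typeA.Concat(typeB).Select(a => a.Id).ToList(),
        all.Select(a => a.Id).ToList());
}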
I'm sure I'm missing something simple here, but I can't figure out why my NUnit object-comparison test keeps failing.
I have a simple object:
public virtual int Id { get; private set; }
public virtual string Description { get; set; }
public virtual string Address { get; set; }
public virtual string Ports { get; set; }
public virtual string Password { get; set; }
public virtual ServerGroup ServerGroup { get; set; }
I am persisting an instance of this object to my database and then fetching it back out using NHibernate. My NUnit test compares the saved object to the retrieved one. I understand that AreSame() would fail, as they are not the same object reference, but I would expect AreEqual() to pass.
If I debug the test, I can see that both objects appear to have the same values in these properties, yet my test still fails. Can someone tell me why?
Thanks!
You have to override the Equals() method on your class. Otherwise NUnit uses the base implementation, which compares references (which is certainly not what you are after here).
As suggested, you need to override Equals, but you do need to be aware of the side effects.
You should also override GetHashCode, or you could end up with objects where Equals returns true but the hashes don't match, so that using them as keys in a Dictionary results in multiple entries with "equal" Ids.
Also, you should override the == and != operators to maintain consistent behavior.
Imagine the confusion if .Equals were true but == were false.
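A minimal sketch of keeping all four members consistent, assuming identity is defined by Id (the class name is invented for illustration):

public class Entity
{
    public virtual int Id { get; private set; }

    public override bool Equals(object obj)
    {
        var other = obj as Entity;
        return other != null && other.Id == Id;
    }

    public override int GetHashCode()
    {
        return Id.GetHashCode();
    }

    // Route both operators through Equals so they can never disagree with it.
    public static bool operator ==(Entity left, Entity right)
    {
        return Equals(left, right);
    }

    public static bool operator !=(Entity left, Entity right)
    {
        return !Equals(left, right);
    }
}

(For persistent entities you would also decide how unsaved objects with a default Id compare; that policy varies by project.)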
You do need to override Equals, as Grzenio suggests, but watch out for a subtle source of confusion that can occur with NHibernate. Specifically, when lazy loading is enabled, a type-comparison test can fail. To illustrate, here is a piece of a well-written Equals method:
// override object.Equals
public override bool Equals(object obj)
{
    // See the full list of guidelines at
    // http://go.microsoft.com/fwlink/?LinkID=85237
    // and the guidance for operator== at
    // http://go.microsoft.com/fwlink/?LinkId=85238
    if (obj == null || GetType() != obj.GetType())
    {
        return false;
    }
    ....
}
But when lazy loading is enabled, the way NHibernate works is to generate a proxy of the actual object (thereby deferring unnecessary database hits). If an equality check is made between one object that has been 'proxified' by NHibernate and another that hasn't, it will fail because of the mismatch in types. The solution (courtesy of the S#arp Architecture project) is to modify the type test to something like this:
public override bool Equals(object obj)
{
    ...
    if (GetType() != obj.GetTypeUnproxied())
    {
        return false;
    }
    ...
}

// Virtual, so that when it is called on an NHibernate proxy the call is routed
// to the underlying instance, whose GetType() returns the real type.
protected virtual Type GetTypeUnproxied()
{
    return GetType();
}
This effectively returns the type of the underlying object in all cases, even when the object being compared is an NHibernate proxy.
An Equals method can be as tricky as it is important to get just right, so ideally you factor it into some sort of Layer Supertype (Fowler). Lots of open-source projects, including the S#arp one I mentioned earlier, provide examples of how to do this.
HTH,
Berryl
I have an adapter class for Linq-to-Sql:
public interface IAdapter : IDisposable
{
Table<Data.User> Users { get; }
}
Data.User is an object defined by Linq-to-Sql pointing to the User table in persistence.
The implementation for this is as follows:
public class Adapter : IAdapter
{
private readonly SecretDataContext _context = new SecretDataContext();
public void Dispose()
{
_context.Dispose();
}
public Table<Data.User> Users
{
get { return _context.Users; }
}
}
This makes mocking the persistence layer easy in unit testing, as I can just return whatever collection of data I want for Users (Rhino.Mocks):
Expect.Call(_adapter.Users).Return(users);
The problem is that I cannot create the 'users' object, since the constructors are not accessible and the Table class is sealed. One option I tried is to make IAdapter return IEnumerable<T> or IQueryable<T> instead, but then I no longer have access to the methods ITable provides (e.g. InsertOnSubmit()). Is there a way I can create the fake Table in the unit-test scenario so that I may be a happy TDD developer?
My current solution is to wrap the functionality I want from Table into a TableWrapper class:
public interface ITableWrapper<TEntity>
where TEntity : class
{
IEnumerable<TEntity> Collection { get; }
void InsertOnSubmit(TEntity entity);
}
And here's the implementation:
public class TableWrapper<TEntity> : ITableWrapper<TEntity>
where TEntity : class
{
private readonly Table<TEntity> _table;
public TableWrapper(Table<TEntity> table)
{
_table = table;
}
public IEnumerable<TEntity> Collection
{
get { return _table; }
}
public void InsertOnSubmit(TEntity entity)
{
_table.InsertOnSubmit(entity);
}
}
So now I can easily mock data from Collection, as well as keep the functionality of InsertOnSubmit (any other functions I need down the road can be added later).
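With the wrapper in place, the stubbing becomes trivial (a sketch using Rhino.Mocks' AAA syntax; the sample data is arbitrary):

var users = new List<Data.User> { new Data.User() };
var table = MockRepository.GenerateStub<ITableWrapper<Data.User>>();
table.Stub(t => t.Collection).Return(users);
// Code consuming ITableWrapper<Data.User> now sees the in-memory list,
// and calls to InsertOnSubmit can be asserted without a real Table.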
I have had success using the data access layer to produce domain-object collections and then using LINQ to Objects.
The object under test then only depends on List<T>, which is fairly easy to unit test.
I don't like logic entities having data access layer dependencies; those should stop at the service layer, if they get even that far. I usually go for the model where the service layer invokes a data access object to get a List, then passes that list into whichever logic object needs it (if necessary using LINQ to Objects to filter out the relevant data and injecting it as a flat list, a dictionary, or an object model).
The business objects become very testable, even though they don't benefit from the richness of the inferred data model.
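A minimal sketch of that layering (all names here are invented for illustration; the LINQ calls need System.Linq):

public class User
{
    public string Name { get; set; }
    public bool IsActive { get; set; }
}

public interface IUserDao
{
    List<User> GetAllUsers();
}

// Pure logic object: it depends only on an in-memory collection,
// so it can be unit tested with plain test data.
public class ActiveUserReport
{
    public List<string> ActiveNames { get; private set; }

    public ActiveUserReport(IEnumerable<User> users)
    {
        ActiveNames = users.Where(u => u.IsActive)
                           .Select(u => u.Name)
                           .ToList();
    }
}

// The service layer is the only place that touches the data access object.
public class UserService
{
    private readonly IUserDao _dao;

    public UserService(IUserDao dao) { _dao = dao; }

    public ActiveUserReport GetActiveUserReport()
    {
        return new ActiveUserReport(_dao.GetAllUsers());
    }
}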