Should class methods accept parameters or use class properties - unit-testing

Consider the following class
public class Class1
{
    public int A { get; set; }
    public int B { get; set; }

    public int GetComplexResult()
    {
        return A + B;
    }
}
In order to use GetComplexResult, a consumer of this class would have to know to set A and B before calling the method. If GetComplexResult accesses many properties to calculate its result, this can lead to wrong return values if the consumer doesn't set all the appropriate properties first. So you might write this class like this instead
public class Class2
{
    public int A { get; set; }
    public int B { get; set; }

    public int GetComplexResult(int a, int b)
    {
        return a + b;
    }
}
This way, a caller to GetComplexResult is forced to pass in all the required values, ensuring the expected return value is correctly calculated. But if there are many required values, the parameter list grows as well and this doesn't seem like good design either. It also seems to break the point of encapsulating A, B and GetComplexResult in a single class. I might even be tempted to make GetComplexResult static since it doesn't require an instance of the class to do its work. I don't want to go around making a bunch of static methods.
Are there terms to describe these 2 different ways of creating classes? They both seem to have pros and cons - is there something I'm not understanding that should tell me that one way is better than the other? How does unit testing influence this choice?

If you use a real-world example the answer becomes clearer.
public class person
{
    public string firstName { get; set; }
    public string lastName { get; set; }

    public string getFullName()
    {
        return firstName + " " + lastName;
    }
}
The point of an entity object is that it contains information about an entity, and can do the operations that the entity needs to do (based on the information it contains). So yes, there are situations in which certain operations won't work properly because the entity hasn't been fully initialized, but that's not a failure of design. If, in the real world, I ask you for the full name of a newborn baby who hasn't been named yet, that will fail also.
If certain properties are essential to an entity doing its job, they can be initialized in a constructor. Another approach is to have a boolean that checks whether the entity is in a state where a given method can be called:
while (person.hasAnotherQuestion()) {
    person.answerNextQuestion();
}
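For the constructor approach mentioned above, here is a minimal sketch (the constructor-based design and the exact member names are illustrative, not part of the original code):
public class Person
{
    public string FirstName { get; private set; }
    public string LastName { get; private set; }

    // Essential data is supplied up front, so GetFullName can never
    // run against a half-initialized entity.
    public Person(string firstName, string lastName)
    {
        FirstName = firstName;
        LastName = lastName;
    }

    public string GetFullName()
    {
        return FirstName + " " + LastName;
    }
}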

A good design rule is to make sure that all constructors initialize objects to valid states and that all property setters and methods then preserve that valid state. This way there will never be any objects in invalid states.
If the default values of A and B (both 0) do not form a valid state that yields a valid result from GetComplexResult, you should add a constructor that initializes A and B to a valid state.
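For example, a minimal sketch of that idea (the two-argument constructor is an assumption added for illustration; it is not part of the original class):
public class Class1
{
    public int A { get; private set; }
    public int B { get; private set; }

    // The constructor establishes a valid state up front, so callers
    // can no longer forget to set A or B before calling GetComplexResult.
    public Class1(int a, int b)
    {
        A = a;
        B = b;
    }

    public int GetComplexResult()
    {
        return A + B;
    }
}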

If some of the fields are never allowed to be null then you would typically make them parameters to the class constructor. If you don't always have all of the required values available at once then using a builder class may be helpful.
For example:
public class Builder {
    private int a;
    private int b;

    public Class1 create() {
        // some validation logic goes here
        // to make sure we have everything and
        // either fill in defaults or throw an error
        // if needed
        return new Class1(a, b);
    }

    public Builder a(int val) { this.a = val; return this; }
    public Builder b(int val) { this.b = val; return this; }
}
This Builder can then be used as follows.
Class1 obj1 = new Builder().a(5).b(6).create();
Builder builder = new Builder();
// do stuff to find value of a
builder.a(valueOfA);
// do stuff to find value of b
builder.b(valueOfB);
// do more stuff
Class1 obj2 = builder.create();
Class1 obj3 = builder.create();
This design allows you to lock down the Entity classes to whatever degree is appropriate while still allowing for a flexible construction process. It also opens the door to customizing the construction process with other implementations without changing the entity class contract.

Related

Mock class object as parameter of function

I am using JUnit and Mockito to write unit tests for my Java program.
public class MyClass {
    private ClassA a;

    public void process(ClassB b) {
        if (b.method()) a = ClassA.builder().build();
    }
}
Now I have written a MockClassA and MockClassB. But I don't know how to:
Pass a MockClassB instance to the process function
Verify whether the private variable a is set successfully
Can anybody help?
You can use something like:
@Test
public void shouldDoSomething() {
    // given
    ClassB mock = Mockito.mock(ClassB.class);
    Mockito.when(mock.method()).thenReturn(true);
    MyClass classUnderTest = new MyClass();

    // when
    classUnderTest.process(mock);

    // then
    // Insert assertions
}
However, if your field is private you are unable to test it properly. You should provide a getter for this field if you want to make some assertions against it.
But remember that the internal representation of MyClass should not be tested, only its behavior, so maybe you want to try a different approach.

Should we create an object of a class at class level or at function level

Which approach is better? I tried to find this on the web, but I couldn't get a good answer.
1.
public class OtherClass
{
    public int Add(int x, int y)
    {
        return x + y;
    }
}

public class TestClass
{
    OtherClass oClass = new OtherClass();

    public int Fun1()
    {
        return oClass.Add(1, 2);
    }

    public int Fun2()
    {
        return oClass.Add(1, 2);
    }
}
2.
public class TestClass
{
    public int Fun1()
    {
        OtherClass oClass = new OtherClass();
        return oClass.Add(1, 2);
    }

    public int Fun2()
    {
        OtherClass oClass = new OtherClass();
        return oClass.Add(1, 2);
    }
}
I think it depends on what you are trying to test.
If you're testing the effects of a sequence of functions being executed on the same class instance, then you might want to create a single instance (such as in stress testing).
But otherwise I'd say it's always better to create a new instance of the class in each test function, to ensure that the context of each test is predictable. If your test methods share an instance of a class and one test fails and corrupts the state of the object under test, subsequent tests may fail for no other reason than that corrupted state (it might appear that multiple tests are failing when in fact only one of the earlier ones is a true failure).
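A minimal NUnit-style sketch of the per-test approach (the test names and assertions are illustrative):
[TestFixture]
public class OtherClassTests
{
    [Test]
    public void Add_ReturnsSum()
    {
        // A fresh instance per test keeps each test's context predictable.
        OtherClass oClass = new OtherClass();
        Assert.AreEqual(3, oClass.Add(1, 2));
    }

    [Test]
    public void Add_HandlesNegativeNumbers()
    {
        OtherClass oClass = new OtherClass();
        Assert.AreEqual(-1, oClass.Add(1, -2));
    }
}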
It depends on the scenario. If the class is going to be shared by multiple functions and no specific arguments are needed to create an instance of it, then it's better off being at the class level.
Let's say you're using Fun1 and Fun2 often: creating the instance inside each method adds instance-creation overhead, whereas at the class level you have a single instance. Better yet, make it static, or make it a singleton if you're sure it's going to be a single instance throughout the whole app.
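A minimal sketch of the singleton option (one possible shape among several; the private constructor and static Instance property are the conventional pieces):
public sealed class OtherClass
{
    // Single shared instance for the whole app.
    private static readonly OtherClass instance = new OtherClass();
    public static OtherClass Instance { get { return instance; } }

    private OtherClass() { }

    public int Add(int x, int y)
    {
        return x + y;
    }
}
A caller would then write OtherClass.Instance.Add(1, 2) instead of newing up the class.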
One benefit of having it at the class level is that, if you're doing unit testing, you can make an interface like IOtherClass and inject it through the constructor of TestClass.
It would look something like this.
public class OtherClass : IOtherClass
{
    public int Add(int x, int y)
    {
        return x + y;
    }
}

public class TestClass
{
    IOtherClass oClass;

    public TestClass(IOtherClass _oClass)
    {
        oClass = _oClass;
    }

    public int Fun1()
    {
        return oClass.Add(1, 2);
    }

    public int Fun2()
    {
        return oClass.Add(1, 2);
    }
}
You're better off having it as a field in the class rather than declaring a new one in each method. The reason for this is simple: there won't be a line of code in each method declaring the variable, which means that if your declaration statement changes you only have to change it in one place, not in every method. It will also make your code easier to read and extend, because that line won't be duplicated everywhere.
Just remember that if the field needs to be disposed, your class should implement the IDisposable interface.
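A minimal sketch of that pattern (MemoryStream stands in here for whatever disposable dependency the class actually holds):
using System;
using System.IO;

public class TestClass : IDisposable
{
    // A disposable dependency held as a field.
    private readonly MemoryStream stream = new MemoryStream();

    public void Dispose()
    {
        stream.Dispose();
    }
}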

Partial Mock or new class or what else?

I have a question about testing.
I have a class that returns anomalies. In this class I have two different methods that simply return two different types of anomalies, and one that returns all anomalies (of both types).
This is the example code:
public interface IAnomalyService
{
    IList<Anomaly> GetAllAnomalies(object parameter1, object parameter2);
    IList<Anomaly> GetAnomalies_OfTypeA(object parameter1);
    IList<Anomaly> GetAnomalies_OfTypeB(object parameter2);
}

public class AnomalyService : IAnomalyService
{
    public IList<Anomaly> GetAllAnomalies(object parameter1, object parameter2)
    {
        var lstAll = new List<Anomaly>();
        lstAll.AddRange(GetAnomalies_OfTypeA(parameter1));
        lstAll.AddRange(GetAnomalies_OfTypeB(parameter2));
        return lstAll;
    }

    public IList<Anomaly> GetAnomalies_OfTypeA(object parameter1)
    {
        //some elaborations
        return new List<Anomaly> { new Anomaly { Id = 1 } };
    }

    public IList<Anomaly> GetAnomalies_OfTypeB(object parameter2)
    {
        //some elaborations
        return new List<Anomaly> { new Anomaly { Id = 2 } };
    }
}

class Anomaly
{
    public int Id { get; set; }
}
I've created the tests for the two methods that retrieve the anomalies of type A and type B (GetAnomalies_OfTypeA and GetAnomalies_OfTypeB).
Now I want to test the method GetAllAnomalies, but I'm not sure what I have to do.
I think I have two ways of testing it:
1) Declare GetAnomalies_OfTypeA and GetAnomalies_OfTypeB in the class AnomalyService as virtual, make a mock of the class AnomalyService, and, using Moq, set CallBase to true and mock the two methods GetAnomalies_OfTypeA and GetAnomalies_OfTypeB.
2) Move the method GetAllAnomalies into another class called AllAnomalyService (with interface IAllAnomalyService), pass an IAnomalyService into its constructor, and then test GetAllAnomalies by mocking the IAnomalyService interface.
I'm new to unit testing, so I don't know which solution is better, whether it's one of mine or some other approach.
Can you help me?
thank you
Luca
Mocking is a good tool when a class resists testing. If you have the source, mocking is often not necessary. Try this approach:
Create a factory which can return AnomalyServices with various, defined anomalies (only type A, only type B, both, none, only type C, ...)
Since the three methods are connected, you should check all three in each test. If only anomalies of type A are expected, you should check that GetAllAnomalies returns the same result as GetAnomalies_OfTypeA and that GetAnomalies_OfTypeB returns an empty list.
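For instance, a minimal NUnit-style sketch of such a check (the placeholder parameters and the test name are assumptions for illustration):
[Test]
public void GetAllAnomalies_CombinesTypeAAndTypeB()
{
    var service = new AnomalyService();
    object parameter1 = new object(); // assumed placeholder inputs
    object parameter2 = new object();

    var all = service.GetAllAnomalies(parameter1, parameter2);
    var typeA = service.GetAnomalies_OfTypeA(parameter1);
    var typeB = service.GetAnomalies_OfTypeB(parameter2);

    // The combined list should contain exactly the two type-specific results.
    Assert.AreEqual(typeA.Count + typeB.Count, all.Count);
}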

NUnit AreEqual always returns false

I'm sure I'm missing something simple here, but I can't figure out why my NUnit object comparison test continues to fail.
I have a simple object:
public virtual int Id { get; private set; }
public virtual string Description { get; set; }
public virtual string Address { get; set; }
public virtual string Ports { get; set; }
public virtual string Password { get; set; }
public virtual ServerGroup ServerGroup { get; set; }
I am persisting an instance of this object to my database and then fetching it out using NHibernate. My NUnit unit test compares the object that was saved against the object that was retrieved. I understand that AreSame() would fail, as they are not references to the same object, but I would expect AreEqual() to pass.
If I debug the test I can see that both objects appear to have the same values in these properties, yet my test still fails. Can someone tell me why?
Thanks!
You have to override the Equals() method on your class. Otherwise NUnit will use the base implementation, which compares references (which is certainly not what you are after here).
As suggested you need to override Equals. You do need to be aware of the side effects.
You should also override GetHashCode, or you could end up with objects where .Equals returns true but the hashes don't match, so using them as keys in a Dictionary would produce multiple entries with "equal" Ids.
Also, you would need to override the == and != operators to maintain consistent behavior.
Imagine the confusion if .Equals were true but == were false.
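A minimal sketch of what those overrides might look like for an Id-based entity (the class name and the decision to compare only by Id are assumptions for illustration):
public class Server
{
    public virtual int Id { get; private set; }

    public override bool Equals(object obj)
    {
        var other = obj as Server;
        return other != null && other.Id == Id;
    }

    public override int GetHashCode()
    {
        // Keep the hash consistent with Equals.
        return Id.GetHashCode();
    }

    public static bool operator ==(Server left, Server right)
    {
        // object.Equals(object, object) handles nulls and defers to the override above.
        return Equals(left, right);
    }

    public static bool operator !=(Server left, Server right)
    {
        return !Equals(left, right);
    }
}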
You do need to override Equals as Grzenio suggests, but watch out for a subtle source of confusion that can occur with NHibernate. Specifically, when lazy loading is enabled, a type comparison test can fail. To illustrate, here is a piece of a well written Equals method:
// override object.Equals
public override bool Equals(object obj)
{
    //
    // See the full list of guidelines at
    // http://go.microsoft.com/fwlink/?LinkID=85237
    // and also the guidance for operator== at
    // http://go.microsoft.com/fwlink/?LinkId=85238
    //
    if (GetType() != obj.GetType())
    {
        return false;
    }
    ....
}
But when lazy loading is enabled, the way NHib works is to generate a proxy of the actual object (thereby deferring unnecessary database hits). If an equality check is made between one object that has been 'proxified' by NHib and another that hasn't been, it will fail because of the mismatch in Types. The solution (courtesy of the S#arp Architecture project) is to modify the type test to be something like this:
public override bool Equals(object obj) {
    ...
    if (GetType() != obj.GetTypeUnproxied())
    {
        return false;
    }
    ...
}

protected virtual Type GetTypeUnproxied() { return GetType(); }
This effectively returns the type of the underlying object in all cases, even when the compareTo object is a NHib proxy.
An Equals method can be as tricky as it is important to get just right, so ideally you can factor that into some sort of Layer Supertype (Fowler). Lots of open source projects, including the S#arp one I mentioned earlier, provide examples of how to do this.
HTH,
Berryl

Immutable beans in Java

I am very curious about the possibility of providing immutability for Java beans (by beans here I mean classes with an empty constructor providing getters and setters for their members). Clearly these classes are not immutable, and where they are used to transport values from the data layer this seems like a real problem.
One approach to this problem has been mentioned here on StackOverflow, called "Immutable object pattern in C#", where the object is frozen once fully built. I have an alternative approach and would really like to hear people's opinions on it.
The pattern involves two classes, Immutable and Mutable, which both implement an interface providing the non-mutating bean methods.
For example
public interface DateBean {
    public Date getDate();
    public DateBean getImmutableInstance();
    public DateBean getMutableInstance();
}

public class ImmutableDate implements DateBean {
    private Date date;

    ImmutableDate(Date date) {
        this.date = new Date(date.getTime());
    }

    public Date getDate() {
        return new Date(date.getTime());
    }

    public DateBean getImmutableInstance() {
        return this;
    }

    public DateBean getMutableInstance() {
        MutableDate dateBean = new MutableDate();
        dateBean.setDate(getDate());
        return dateBean;
    }
}

public class MutableDate implements DateBean {
    private Date date;

    public Date getDate() {
        return date;
    }

    public void setDate(Date date) {
        this.date = date;
    }

    public DateBean getImmutableInstance() {
        return new ImmutableDate(this.date);
    }

    public DateBean getMutableInstance() {
        MutableDate dateBean = new MutableDate();
        dateBean.setDate(getDate());
        return dateBean;
    }
}
This approach allows the bean to be constructed using reflection (by the usual conventions) and also allows us to convert to an immutable variant at the earliest opportunity. Unfortunately there is clearly a large amount of boilerplate per bean.
I am very interested to hear other people's approach to this issue. (My apologies for not providing a good question, which can be answered rather than discussed :)
Some comments (not necessarily problems):
The Date class is itself mutable so you are correctly copying it to protect immutability, but personally I prefer to convert to long in the constructor and return a new Date(longValue) in the getter.
Both your getWhateverInstance() methods return DateBean, which will necessitate casting; it might be an idea to change the interface to return the specific type instead.
Having said all that, I would be inclined to just have two classes, one mutable and one immutable, sharing a common (i.e. get-only) interface if appropriate. If you think there will be a lot of conversion back and forth, then add a copy constructor to both classes.
I prefer immutable classes to declare fields as final to make the compiler enforce immutability as well.
e.g.
public interface DateBean {
    public Date getDate();
}

public class ImmutableDate implements DateBean {
    private final long date;

    ImmutableDate(long date) {
        this.date = date;
    }

    ImmutableDate(Date date) {
        this(date.getTime());
    }

    ImmutableDate(DateBean bean) {
        this(bean.getDate());
    }

    public Date getDate() {
        return new Date(date);
    }
}

public class MutableDate implements DateBean {
    private long date;

    MutableDate() {}

    MutableDate(long date) {
        this.date = date;
    }

    MutableDate(Date date) {
        this(date.getTime());
    }

    MutableDate(DateBean bean) {
        this(bean.getDate());
    }

    public Date getDate() {
        return new Date(date);
    }

    public void setDate(Date date) {
        this.date = date.getTime();
    }
}
I think I'd use the delegation pattern - make an ImmutableDate class with a single DateBean member that must be specified in the constructor:
public class ImmutableDate implements DateBean
{
    private DateBean delegate;

    public ImmutableDate(DateBean d)
    {
        this.delegate = d;
    }

    public Date getDate()
    {
        return delegate.getDate();
    }
}
If ever I need to force immutability on a DateBean d, I just call new ImmutableDate(d) on it. I could have been smart and made sure I didn't delegate the delegate, but you get the idea. That avoids the issue of a client trying to cast it into something mutable. This is much like what the JDK does with Collections.unmodifiableMap() etc. (in those cases, however, the mutation functions still have to be implemented, and are coded to throw a runtime exception; much easier if you have a base interface without the mutators).
Yet again it is tedious boilerplate code but it is the sort of thing that a good IDE like Eclipse can auto-generate for you with just a few mouse clicks.
If it's the sort of thing you end up doing to a lot of domain objects, you might want to consider using dynamic proxies or maybe even AOP. It would be relatively easy then to build a proxy for any object, delegating all the get methods, and trapping or ignoring the set methods as appropriate.
I use interfaces and casting to control the mutability of beans. I don't see a good reason to complicate my domain objects with methods like getImmutableInstance() and getMutableInstance().
Why not just make use of inheritance and abstraction? e.g.
public interface User {
    long getId();
    String getName();
    int getAge();
}

public interface MutableUser extends User {
    void setName(String name);
    void setAge(int age);
}
Here's what the client of the code will be doing:
public void validateUser(User user) {
    if (user.getName() == null) ...
}

public void updateUserAge(MutableUser user, int age) {
    user.setAge(age);
}
Does it answer your question?
yc