Immutable beans in Java - concurrency

I am very curious about the possibility of providing immutability for Java beans (by beans here I mean classes with an empty constructor providing getters and setters for members). Clearly these classes are not immutable, and where they are used to transport values from the data layer this seems like a real problem.
One approach to this problem has already been mentioned here on StackOverflow, "Immutable object pattern in C#", where the object is frozen once fully built. I have an alternative approach and would really like to hear people's opinions on it.
The pattern involves two classes, Immutable and Mutable, which both implement an interface providing the non-mutating bean methods.
For example
public interface DateBean {
    public Date getDate();
    public DateBean getImmutableInstance();
    public DateBean getMutableInstance();
}

public class ImmutableDate implements DateBean {
    private Date date;

    ImmutableDate(Date date) {
        this.date = new Date(date.getTime());
    }

    public Date getDate() {
        return new Date(date.getTime());
    }

    public DateBean getImmutableInstance() {
        return this;
    }

    public DateBean getMutableInstance() {
        MutableDate dateBean = new MutableDate();
        dateBean.setDate(getDate());
        return dateBean;
    }
}

public class MutableDate implements DateBean {
    private Date date;

    public Date getDate() {
        return date;
    }

    public void setDate(Date date) {
        this.date = date;
    }

    public DateBean getImmutableInstance() {
        return new ImmutableDate(this.date);
    }

    public DateBean getMutableInstance() {
        MutableDate dateBean = new MutableDate();
        dateBean.setDate(getDate());
        return dateBean;
    }
}
This approach allows the bean to be constructed using reflection (by the usual conventions) and also allows us to convert to an immutable variant at the earliest opportunity. Unfortunately, there is clearly a large amount of boilerplate per bean.
I am very interested to hear other people's approach to this issue. (My apologies for not providing a good question, which can be answered rather than discussed :)

Some comments (not necessarily problems):
The Date class is itself mutable so you are correctly copying it to protect immutability, but personally I prefer to convert to long in the constructor and return a new Date(longValue) in the getter.
Both your getWhateverInstance() methods return DateBean, which will necessitate casting; it might be an idea to change the interface to return the specific type instead.
Having said all that I would be inclined to just have two classes one mutable and one immutable, sharing a common (i.e. get only) interface if appropriate. If you think there will be a lot of conversion back and forth then add a copy constructor to both classes.
I prefer immutable classes to declare fields as final to make the compiler enforce immutability as well.
e.g.
public interface DateBean {
    public Date getDate();
}

public class ImmutableDate implements DateBean {
    private final long date;

    ImmutableDate(long date) {
        this.date = date;
    }

    ImmutableDate(Date date) {
        this(date.getTime());
    }

    ImmutableDate(DateBean bean) {
        this(bean.getDate());
    }

    public Date getDate() {
        return new Date(date);
    }
}

public class MutableDate implements DateBean {
    private long date;

    MutableDate() {}

    MutableDate(long date) {
        this.date = date;
    }

    MutableDate(Date date) {
        this(date.getTime());
    }

    MutableDate(DateBean bean) {
        this(bean.getDate());
    }

    public Date getDate() {
        return new Date(date);
    }

    public void setDate(Date date) {
        this.date = date.getTime();
    }
}

I think I'd use the delegation pattern - make an ImmutableDate class with a single DateBean member that must be specified in the constructor:
public class ImmutableDate implements DateBean
{
    private DateBean delegate;

    public ImmutableDate(DateBean d)
    {
        this.delegate = d;
    }

    public Date getDate()
    {
        return delegate.getDate();
    }
}
If ever I need to force immutability on a DateBean d, I just new ImmutableDate(d) on it. I could have been smart and made sure I didn't delegate the delegate, but you get the idea. That avoids the issue of a client trying to cast it into something mutable. This is much like what the JDK does with Collections.unmodifiableMap() etc. (in those cases, however, the mutation functions still have to be implemented, and are coded to throw a runtime exception. Much easier if you have a base interface without the mutators).
Yet again it is tedious boilerplate code but it is the sort of thing that a good IDE like Eclipse can auto-generate for you with just a few mouse clicks.
If it's the sort of thing you end up doing to a lot of domain objects, you might want to consider using dynamic proxies or maybe even AOP. It would be relatively easy then to build a proxy for any object, delegating all the get methods, and trapping or ignoring the set methods as appropriate.
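As a rough sketch of the dynamic-proxy idea (the Immutables helper, and the assumption that the bean interface declares both getters and setters, are mine rather than the poster's):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public final class Immutables {

    // Returns a read-only view of the bean: getters delegate, setters are trapped.
    @SuppressWarnings("unchecked")
    public static <T> T readOnlyView(final T bean, final Class<T> beanInterface) {
        return (T) Proxy.newProxyInstance(
                beanInterface.getClassLoader(),
                new Class<?>[] { beanInterface },
                new InvocationHandler() {
                    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                        if (method.getName().startsWith("set")) {
                            throw new UnsupportedOperationException(
                                    "immutable view of " + beanInterface.getName());
                        }
                        return method.invoke(bean, args);
                    }
                });
    }
}

A MutableDate wrapped this way would still answer getDate() but would refuse any setter at runtime, much as Collections.unmodifiableMap() does.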

I use interfaces and casting to control the mutability of beans. I don't see a good reason to complicate my domain objects with methods like getImmutableInstance() and getMutableInstance().
Why not just make use of inheritance and abstraction? e.g.
public interface User {
    long getId();
    String getName();
    int getAge();
}

public interface MutableUser extends User {
    void setName(String name);
    void setAge(int age);
}
Here's what the client of the code will be doing:
public void validateUser(User user) {
    if (user.getName() == null) ...
}

public void updateUserAge(MutableUser user, int age) {
    user.setAge(age);
}
Does it answer your question?
yc

Related

Are void return methods that change the state of their argument an anti-pattern?

Are methods that return void but change the state of their arguments (i.e. provide a hidden or implicit return value) generally a bad practice?
I find them difficult to mock, which suggests they are possibly a sign of a bad design.
What patterns are there for avoiding them?
A highly contrived example:
public interface IMapper
{
    void Map(SourceObject source, TargetObject target);
}

public class ClassUnderTest
{
    private IMapper _mapper;

    public ClassUnderTest(IMapper mapper)
    {
        _mapper = mapper;
    }

    public int SomeOperation()
    {
        var source = new SourceObject();
        var target = new TargetObject();
        _mapper.Map(source, target);
        return target.SomeMappedValue;
    }
}
Yes, to some extent.
What you describe is a typical side effect. Side effects make programs hard to understand, because the information you need isn't contained in the call stack. You need additional information, i.e. what methods got called before (and in what order).
The solution is to program without side effects. This means you don't change variables, fields or anything else. Instead you return a new version of what you normally would change.
This is a basic principle of functional programming.
Of course this way of programming has its own challenges. Just consider I/O.
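To make that concrete, here is a tiny sketch (in Java; the class and method names are invented for illustration) of returning a new value instead of mutating:

public final class Account {

    private final long balanceCents;

    public Account(long balanceCents) {
        this.balanceCents = balanceCents;
    }

    // No setter: "changing" the balance yields a new Account instead of a side effect.
    public Account deposit(long amountCents) {
        return new Account(balanceCents + amountCents);
    }

    public long getBalanceCents() {
        return balanceCents;
    }
}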
Your code would be a lot easier to test if you do this:
public interface IMapper
{
    TargetObject Map(SourceObject source);
}

public class ClassUnderTest
{
    private IMapper _mapper;

    public ClassUnderTest(IMapper mapper)
    {
        _mapper = mapper;
    }

    public int SomeOperation(SourceObject source)
    {
        var target = _mapper.Map(source);
        return target.SomeMappedValue;
    }
}
You can now test your Map operation and SomeOperation separately. The problem was that you did change the state of an object, which makes it hard to provide a stub for testing. When returning a new object you are able to return a test stub of the target and test your caller method.

How to mock static member variables

I have a class ClassToTest which has a dependency on ClassToMock.
public class ClassToMock {
    private static final String MEMBER_1 = FileReader.readMemeber1();

    protected void someMethod() {
        ...
    }
}
The unit test case for ClassToTest.
public class ClassToTestTest {
    private ClassToMock _mock;

    @Before
    public void setUp() throws Exception {
        _mock = mock(ClassToMock.class);
    }
}
When mock() is called in the setUp() method, FileReader.readMemeber1() is executed. Is there a way to avoid this? I think one way is to initialize MEMBER_1 inside a method. Any other alternatives?
Thanks!
Your ClassToMock is tightly coupled with FileReader; that's why you are not able to test/mock it. Instead of using a tool to hack the byte code so you can mock it, I would suggest some simple refactorings to break the dependency.
Step 1. Encapsulate Global References
This technique is introduced in Michael Feathers's wonderful book Working Effectively with Legacy Code.
The title is pretty much self-explanatory: instead of directly referencing a global variable, you encapsulate it inside a method.
In your case, ClassToMock can be refactored into this:
public class ClassToMock {
    private static final String MEMBER_1 = FileReader.readMemeber1();

    public String getMemberOne() {
        return MEMBER_1;
    }
}
Then you can easily use Mockito to mock getMemberOne().
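For illustration, the stubbing could look something like this (a minimal sketch assuming Mockito and JUnit 4; the test name is made up, and note the caveat in the update below — the static initializer still runs when the class is loaded):

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class ClassToTestTest {

    @Test
    public void usesStubbedMemberOne() {
        // The global read is now behind an instance method, so it can be stubbed.
        ClassToMock fake = mock(ClassToMock.class);
        when(fake.getMemberOne()).thenReturn("value for the test");

        // Anything that asks fake.getMemberOne() gets the canned value,
        // without going through FileReader on the stubbed path.
    }
}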
UPDATED: the old Step 1 cannot guarantee that Mockito mocks safely; if FileReader.readMemeber1() throws an exception, the test will still fail miserably. So I suggest adding another step to work around it.
Step 1.5. add Setter and Lazy Getter
Since the problem is that FileReader.readMemeber1() will be invoked as soon as ClassToMock is loaded, we have to delay it. So we make the getter call FileReader.readMemeber1() lazily, and open up a setter.
public class ClassToMock {
    private static String MEMBER_1 = null;

    protected String getMemberOne() {
        if (MEMBER_1 == null) {
            MEMBER_1 = FileReader.readMemeber1();
        }
        return MEMBER_1;
    }

    public void setMemberOne(String memberOne) {
        MEMBER_1 = memberOne;
    }
}
Now you should be able to make a fake ClassToMock even without Mockito. However, this should not be the final state of your code; once you have your test ready, you should continue to Step 2.
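As a rough sketch of such a hand-rolled fake (test code assumed by me; it has to live in the same package as ClassToMock because getMemberOne() is protected, and remember MEMBER_1 is static, so the value leaks across tests):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class ClassToMockFakeTest {

    @Test
    public void lazyGetterNeverTouchesFileReader() {
        ClassToMock fake = new ClassToMock();
        fake.setMemberOne("canned value");                  // pre-load the member
        assertEquals("canned value", fake.getMemberOne());  // FileReader is never called
    }
}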
Step 2. Dependency Injection
Once you have your test ready, you should refactor it further. Instead of reading MEMBER_1 by itself, this class should receive it from the outside world. You can use either a setter or a constructor to receive it. Below is the code that uses a setter.
public class ClassToMock {
    private String memberOne;

    public void setMemberOne(String memberOne) {
        this.memberOne = memberOne;
    }

    public String getMemberOne() {
        return memberOne;
    }
}
These two refactoring steps are really easy to do, and you can do them even without tests at hand. If the code is not that complex, you can just do Step 2. Then you can easily test ClassToTest.
UPDATE 12/8 (answering the comment): see my other answer to this question.

UPDATE 12/8 (answering the comment):
Question: What if FileReader is something very basic like Logging that needs to be there in every class. Would you suggest I follow the same approach there?
It depends.
There are some things you might want to think about before you do a massive refactoring like that.
If I move FileReader outside, do I have a suitable class which can read from the file and provide the result to every single class that needs it?
Besides making classes easier to test, do I gain any other benefit?
Do I have time?
If any of the answers is "NO", then you'd better not.
However, we can still break the dependency between all the classes and FileReader with minimal changes.
From your question and comment, I assume your system uses FileReader as a global reference for reading stuff from a properties file and then providing it to the rest of the system.
Again, this technique is introduced in Michael Feathers's wonderful book Working Effectively with Legacy Code.
Step 1. Delegate FileReader's static methods to an instance.
Change
public class FileReader {
    public static String getMemberOne() {
        // codes that read file.
    }
}
To
public class FileReader {

    private static FileReader singleton = new FileReader();

    public static String getMemberOne() {
        return singleton.getMemberOne();
    }

    public String getMemberOne() {
        // codes that read file.
    }
}
By doing this, the static methods in FileReader no longer have any knowledge of how to getMemberOne().
Step 2. Extract Interface from FileReader
public interface AppProperties {
    String getMemberOne();
}

public class FileReader implements AppProperties {

    private static AppProperties singleton = new FileReader();

    public static String getMemberOne() {
        return singleton.getMemberOne();
    }

    @Override
    public String getMemberOne() {
        // codes that read file.
    }
}
We extract all the methods to AppProperties, and the static instance in FileReader is now typed as AppProperties.
Step 3. Static setter
public class FileReader implements AppProperties {

    private static AppProperties singleton = new FileReader();

    public static void setAppProperties(AppProperties prop) {
        singleton = prop;
    }
    ...
    ...
}
We have opened a seam in FileReader. By doing this, we can swap the underlying instance in FileReader and its callers would never notice.
Step 4. Clean up
Now FileReader has two responsibilities. One is to read files and provide the result; the other is to provide a global reference for the system.
We can separate them and give them good names. Here is the result:
// This is the original FileReader,
// now an AppProperties implementation which reads properties from a file.
public class FileAppProperties implements AppProperties {
    // implementation.
}

// This is the class that provides the static methods.
public class GlobalAppProperties {

    private static AppProperties singleton = new FileAppProperties();

    public static void setAppProperties(AppProperties prop) {
        singleton = prop;
    }

    public static String getMemberOne() {
        return singleton.getMemberOne();
    }
    ...
    ...
}
END.
After this refactoring, whenever you want to test, you can set a mock AppProperties on GlobalAppProperties.
I think this refactoring would be better if all you want to do is break the same global dependency in many classes.
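For example, a test could swap in a hand-rolled fake like this (assumed code; a Mockito mock would work just as well):

// A trivial fake that answers with a canned value instead of reading a file.
public class FakeAppProperties implements AppProperties {

    @Override
    public String getMemberOne() {
        return "value for the test";
    }
}

// In test setup:
GlobalAppProperties.setAppProperties(new FakeAppProperties());
// Every class that calls GlobalAppProperties.getMemberOne() now sees the canned value.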
PowerMock core provides a convenient utility method that can be used for this purpose.
Add powermock-core to your project.
testImplementation group: 'org.powermock', name: 'powermock-core', version: '2.0.9'
FileReader fileReader = mock(FileReader.class);
Whitebox.setInternalState(ClassToMock.class, "MEMBER_1", fileReader);
Whitebox.setInternalState is just a convenient method to set the value of a field using reflection. So it could be used along with any Mockito tests.

Partial Mock or new class or what else?

I have a question about testing.
I have a class that returns anomalies. In this class I have two different methods that simply return two different types of anomalies, and one that returns all anomalies (of both types).
This is the example code:
public interface IAnomalyService
{
    IList<Anomaly> GetAllAnomalies(object parameter1, object parameter2);
    IList<Anomaly> GetAnomalies_OfTypeA(object parameter1);
    IList<Anomaly> GetAnomalies_OfTypeB(object parameter2);
}

public class AnomalyService : IAnomalyService
{
    public IList<Anomaly> GetAllAnomalies(object parameter1, object parameter2)
    {
        var lstAll = new List<Anomaly>();
        lstAll.AddRange(GetAnomalies_OfTypeA(parameter1));
        lstAll.AddRange(GetAnomalies_OfTypeB(parameter2));
        return lstAll;
    }

    public IList<Anomaly> GetAnomalies_OfTypeA(object parameter1)
    {
        //some elaborations
        return new List<Anomaly> { new Anomaly { Id = 1 } };
    }

    public IList<Anomaly> GetAnomalies_OfTypeB(object parameter2)
    {
        //some elaborations
        return new List<Anomaly> { new Anomaly { Id = 2 } };
    }
}

class Anomaly
{
    public int Id { get; set; }
}
I've created the tests for the two methods that retrieve the anomalies of type A and type B (GetAnomalies_OfTypeA and GetAnomalies_OfTypeB).
Now I want to test the function GetAllAnomalies but I'm not sure what I have to do.
I think I have two ways of testing it:
1) declare GetAnomalies_OfTypeA and GetAnomalies_OfTypeB in the class AnomalyService as virtual, make a mock of the class AnomalyService, and using Moq set CallBase to true and mock the two methods GetAnomalies_OfTypeA and GetAnomalies_OfTypeB.
2) move the method GetAllAnomalies into another class called AllAnomalyService (with interface IAllAnomalyService), pass an IAnomalyService into its constructor, and then test GetAllAnomalies by mocking the IAnomalyService interface.
I'm new at unit testing, so I don't know which solution is better, whether it is one of mine or another one entirely.
Can you help me?
thank you
Luca
Mocking is a good tool when a class resists testing. If you have the source, mocking is often not necessary. Try this approach:
Create a factory which can return AnomalyServices with various, defined anomalies (only type A, only type B, both, none, only type C, ...)
Since the three methods are connected in some way, you should check all three in each test. If only anomalies of type A are expected, you should check that GetAllAnomalies returns the same result as GetAnomalies_OfTypeA and that GetAnomalies_OfTypeB returns an empty list.

Should class methods accept parameters or use class properties

Consider the following class
public class Class1
{
    public int A { get; set; }
    public int B { get; set; }

    public int GetComplexResult()
    {
        return A + B;
    }
}
In order to use GetComplexResult, a consumer of this class would have to know to set A and B before calling the method. If GetComplexResult accesses many properties to calculate its result, this can lead to wrong return values if the consumer doesn't set all the appropriate properties first. So you might write this class like this instead
public class Class2
{
    public int A { get; set; }
    public int B { get; set; }

    public int GetComplexResult(int a, int b)
    {
        return a + b;
    }
}
This way, a caller to GetComplexResult is forced to pass in all the required values, ensuring the expected return value is correctly calculated. But if there are many required values, the parameter list grows as well and this doesn't seem like good design either. It also seems to break the point of encapsulating A, B and GetComplexResult in a single class. I might even be tempted to make GetComplexResult static since it doesn't require an instance of the class to do its work. I don't want to go around making a bunch of static methods.
Are there terms to describe these 2 different ways of creating classes? They both seem to have pros and cons - is there something I'm not understanding that should tell me that one way is better than the other? How does unit testing influence this choice?
If you use a real-world example the answer becomes clearer.
public class person
{
    public string firstName { get; set; }
    public string lastName { get; set; }

    public string getFullName()
    {
        return firstName + " " + lastName;
    }
}
The point of an entity object is that it contains information about an entity, and can do the operations that the entity needs to do (based on the information it contains). So yes, there are situations in which certain operations won't work properly because the entity hasn't been fully initialized, but that's not a failure of design. If, in the real world, I ask you for the full name of a newborn baby who hasn't been named yet, that will fail also.
If certain properties are essential to an entity doing its job, they can be initialized in a constructor. Another approach is to have a boolean that checks whether the entity is in a state where a given method can be called:
while (person.hasAnotherQuestion()) {
    person.answerNextQuestion();
}
A good design rule is to make sure that all constructors initialize objects to valid states and that all property setters and methods then enforce the valid state. This way there will never be any objects in invalid states.
If the default values for A and B (which are 0) are not a valid state that yields a valid result from GetComplexResult, you should add a constructor that initializes A and B to a valid state.
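As a minimal sketch of that rule (written in Java; the constructor is my addition, the names mirror the question):

public class Class1 {

    private int a;
    private int b;

    // The constructor guarantees the object never starts in an invalid state.
    public Class1(int a, int b) {
        this.a = a;
        this.b = b;
    }

    public int getComplexResult() {
        return a + b;
    }
}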
If some of the fields are never allowed to be null then you would typically make them parameters to the class constructor. If you don't always have all of the required values available at once then using a builder class may be helpful.
For example:
public class Builder {
    private int a;
    private int b;

    public Class1 create() {
        // some validation logic goes here
        // to make sure we have everything and
        // either fill in defaults or throw an error
        // if needed
        return new Class1(a, b);
    }

    public Builder a(int val) {
        a = val;
        return this;
    }

    public Builder b(int val) {
        b = val;
        return this;
    }
}
This Builder can then be used as follows.
Class1 obj1 = new Builder().a(5).b(6).create();
Builder builder = new Builder();
// do stuff to find value of a
builder.a(valueOfA);
// do stuff to find value of b
builder.b(valueOfB);
// do more stuff
Class1 obj2 = builder.create();
Class1 obj3 = builder.create();
This design allows you to lock down the Entity classes to whatever degree is appropriate while still allowing for a flexible construction process. It also opens the door to customizing the construction process with other implementations without changing the entity class contract.

How to allow derived class to call methods on other derived class? OO-design

Say I have something like -- this is just an example mind you.
class Car
{
    void accelerate();
    void stop();
}

class Person
{
    void drive(Car car);
}

class Toyota : public Car
{
    void accelerateUncontrollably();
}

class ToyotaDriver : public Person
{
    void drive(Car car)
    {
        // How to accelerateUncontrollably without dynamic cast?
    }
}
A couple of things: Toyotas and ToyotaDrivers go together, i.e. I can have a ToyotaFactory class which will return the driver and the car. So the pieces are interchangeable and used in different parts of the code, but a Toyota and a ToyotaDriver go together.
You can't and you shouldn't...
This is meant to protect you from yourself :)
Either accelerateUncontrollably can only be done by Toyotas (but not by other car models), in which case the definition is fine and you should first check whether the car is indeed a Toyota, or all cars can "accelerateUncontrollably", in which case the declaration should be in the Car class.
You can, of course, make a cast... but ask yourself... if you do know the subtype you're getting... why are you receiving a car and not a Toyota??
Edit: I still don't see why you can't edit it to look like:
interface IToyotaAccelerable
{
    void accelerateUncontrollably();
}

class Toyota : public Car, IToyotaAccelerable
{
    void accelerateUncontrollably();
}

class ToyotaDriver : public Person
{
    void drive(Car car)
    {
        // Do whatever logic you want with the car...
        // How to accelerateUncontrollably without dynamic cast?
        IToyotaAccelerable accel = car as IToyotaAccelerable;
        if (accel != null)
        {
            accel.accelerateUncontrollably();
        }
    }
}
Now you're programming against a behavioural property, something a given object can or cannot do... so you don't need to cast, and the function at least makes a little more sense from a semantic point of view...
You can avoid unsightly downcasting and breaking the Liskov Substitution Principle by simply delegating from the common interface like this:
class Toyota : public Car
{
    void accelerateUncontrollably() // can't have abstract methods here, btw
    {
        // throttle the engine unexpectedly, lock pedal beneath mat, etc.
    }

    void accelerate() // you have to implement accelerate anyway because of Car
    {
        accelerateUncontrollably();
    }
}
Now the ToyotaDriver will have no idea that simply accelerating will accelerate uncontrollably:
class ToyotaDriver : public Person
{
    void drive(Car car)
    {
        car.accelerate();
    }
}
Note also that any Driver object that finds itself with a Toyota Car object may experience the same effect:
LexusDriver driver = new LexusDriver();
driver.drive(ToyotaFactory.newPrius());      // whee!

GreenHornetDriver driver2 = new GreenHornetDriver();
driver2.drive(ToyotaFactory.newCorolla());   // wow!
This is the idea: Toyota Cars present themselves to a Driver as simply a "Car", not an "Uncontrollably Accelerating Car." The Driver isn't coupled to the accelerateUncontrollably interface, i.e., has no idea what's about to happen. Once they do call accelerate, and assuming doing so doesn't cause the system to crash, we may see a rare corollary to the Liskov Substitution Principle, the Universal Recall Principle.
Seems to me you need another type of car: extend Car with a type of car which can accelerate uncontrollably, then have Toyota inherit from that.
By your design you are saying not all cars can accelerate uncontrollably, and breaking that breaks your OO and is a no-no... sorry about the rhyme.
I assume that Person::drive() usually calls Car::accelerate() at some point. I would override the definition of Car::accelerate() in Toyota::accelerate() to include Toyota::accelerateUncontrollably().
If Car::accelerate() isn't virtual, and you can't add a virtual bool Car::isCrazy() function, then there's not a good way to do this. Humorous analogies aside, it appears what you're trying to do is add a property to the Car class without actually modifying the class. There's just not going to be a good OOD way of doing that.
My impression is that using a dynamic_cast is absolutely fine here. No need to avoid it.
You can use the pattern COM follows:
class Car
{
    void accelerate();
    void stop();

    virtual Car* specifyModel(int modelID)
    {
        return NULL;
    }
}

class Person
{
    void drive(Car car);
}

#define MODEL_TOYOTA 1

class Toyota : public Car
{
    virtual Car* specifyModel(int modelID)
    {
        if (modelID == MODEL_TOYOTA) return this;
        return NULL;
    }

    void accelerateUncontrollably();
}

class ToyotaDriver : public Person
{
    void drive(Car car)
    {
        Toyota* toyota = static_cast<Toyota*>(car.specifyModel(MODEL_TOYOTA));
        if (toyota != NULL)
        {
            toyota->accelerateUncontrollably();
        }
    }
}
The basic idea is that you define a virtual method in your base class that represents a "downcast" function that takes a type tag and returns a pointer. If the returned value is not null, the implication is that it can be safely downcasted to the type that matches that tag.
Derived classes override the method, check against their type tag, and return a valid pointer if it matches.