Why does Microsoft.VisualStudio.TestTools.UnitTesting.Assert.Equals() exist? - unit-testing

Description for Assert.Equals() from the MSDN Documentation:
Do not use this method.
That's it, the full explanation. Uh.. ok, but then ... why is it there?
Is it a deprecated method from an earlier version of the framework? Something that's supposed to be used only by other Microsoft Assemblies?
It just makes me want to use it all the more knowing I'm not supposed to. ;-)
Does anyone know?

.Equals is inherited from object. It's listed as "Do not use this method" so users won't confuse it with the AreEqual method.

All objects in .NET derive from Object.
Object has a .Equals() method.
Apparently the .Equals() method for this particular object doesn't do anything useful, so the docs are warning you that it doesn't do anything useful.

It was changed in Visual Studio 2008 (maybe in SP1) to fail the test when called, so that people who were using it by accident were told they really shouldn't be using it.

Assert.Equals, just like its base class method Object.Equals, is perfectly useful for comparing objects. However, neither method is useful for stand-alone detection and reporting of errors in unit testing, since Object.Equals returns a boolean rather than throwing when the values are not equal. This is a problem if used like this in a unit test:
Assert.Equals(42, ComputeMeaningOfLife());
Aside from the problem of this unit test possibly running too long :-), this test would silently succeed even if the Compute method provides the wrong result. The right method to use is Assert.AreEqual, which doesn't return anything, but throws an exception if the parameters aren't equal.
Assert.Equals was added so that code like the sample above doesn't fall back to Object.Equals and silently neuter the unit test. Instead, when called from a unit test, Assert.Equals always throws an exception reminding you not to use it.

Related

Mockito Verify method not giving consistent results

I'm learning GwtMockito but having trouble getting consistent verify() method results in one of my tests.
I'm trying to test that the correct GwtEvents are being fired by my application. So I've mocked the Event Bus like this in my @Before method:
eventBus = mock(HandlerManager.class);
This test passes as expected:
// Passes as expected
verify(eventBus).fireEvent(any(ErrorOccurredEvent.class));
I wanted to force the test to fail just to know it was running correctly. So I changed it to this and it still passes:
// Expected this to fail, but it passes
verify(eventBus).fireEvent(any(ErrorOccurredEvent.class));
verifyZeroInteractions(eventBus).fireEvent(any(ErrorOccurredEvent.class));
This seems contradictory to me. So I removed the first test:
// Fails as expected
verifyZeroInteractions(eventBus).fireEvent(any(ErrorOccurredEvent.class));
Finally I added an unrelated event that should cause it to fail
// Expected to fail, but passes
verify(eventBus).fireEvent(any(ErrorOccurredEvent.class));
verify(eventBus).fireEvent(any(ModelCreatedEvent.class)); // This event is not used at all by the class that I'm testing. It's not possible for it to be fired.
I'm not finding any documentation that explains what's going on. Both ErrorOccurredEvent and ModelCreatedEvent extend GwtEvent, and have been verified in manual testing. Am I testing my EventBus incorrectly? If so, what is a better way to go about it?
Update
I've done some additional experimenting. It appears to be an issue I'm having with the Mockito matcher. When I get the test to fail the exception reports the method signature as eventBus.fireEvent(<any>) so it doesn't appear to be taking into account the different classes I'm passing into the any method. Not sure what to do about this yet, but including it here for anyone else researching this problem.
The method you're looking for is isA, instead of any.
This doesn't explain my first attempt to force the test to fail, but it does explain the other confusion. From the Mockito documentation:
public static <T> T any(java.lang.Class<T> clazz)
Matches any object, including nulls.
This method doesn't do type checks with the given parameter, it is only there to avoid casting in your code. This might however change (type checks could be added) in a future major release.
So by design it doesn't do the type checks I was hoping for. I'll have to work out another way to design these tests. But this explains why they weren't behaving as I expected.
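To make the difference concrete, here is a minimal sketch using the mocks from the question (the no-arg event constructors are assumed; in a real test the class under test, not the test itself, would fire the event):
import static org.mockito.Mockito.*;

@Test
public void firesErrorEventButNoModelEvent() {
    HandlerManager eventBus = mock(HandlerManager.class);

    eventBus.fireEvent(new ErrorOccurredEvent());

    // Passes: an ErrorOccurredEvent really was fired
    verify(eventBus).fireEvent(isA(ErrorOccurredEvent.class));
    // Fails, as desired: no ModelCreatedEvent was fired, and unlike any(),
    // isA() actually checks the runtime type of the argument
    verify(eventBus).fireEvent(isA(ModelCreatedEvent.class));
}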

Mocking & Unit Testing - Why check that something was called only once?

I know that many mocking libraries let the programmer check that a method was called only once. But why is that useful?
Also, why is it useful to verify the parameters to a mock's method?
When doing unit tests you are testing an isolated method: all other methods are supposed to work correctly, and you test only that your method behaves in the expected (specified...) way.
But on many occasions the expected way implies calling methods of classes you depend on (via dependency injection, if you want to do unit testing).
For this reason you need to ensure that these calls are really made... and of course that they are called with the expected parameters.
Example:
In your real application you have a repository class that stores all your changes in the database (and does only this!). But to unit test your "business" class (where all your business rules are defined), you should mock that "repository" class: then you must check that the mocked class receives the correct updated data.
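For example, a minimal Mockito sketch of that check (CustomerService, CustomerRepository and their methods are invented names here):
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import org.junit.Test;

public class CustomerServiceTest {
    @Test
    public void renamingACustomerIsPersisted() {
        // The repository is mocked: this unit test never touches a real database
        CustomerRepository repository = mock(CustomerRepository.class);
        CustomerService service = new CustomerService(repository);

        service.rename(42L, "Alice");

        // The business rule is only correct if the repository was asked
        // to store the updated data, with exactly these parameters
        verify(repository).updateName(42L, "Alice");
    }
}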
In general, the answers to both of those questions are the same.
It's useful if the requirements of the unit/method you're testing specify that that behavior is required. If that behavior is required, then that's what you need to verify is actually happening.
If it's important to ensure that a particular method is only called once, then you can do that. If it doesn't matter that a method is called more than once, then don't test for it explicitly. Note that the default for the Mockito "verify" method is "times(1)", which means that it confirms that the method was called once and only once.
Concerning the second question, you might want to verify the parameters if it's entirely possible the method could be called with different parameters and you don't want to count those occurrences; you only care about a specific set of parameter values.
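In Mockito, both kinds of check look like this (the Mailer interface is made up for illustration):
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.never;
import static org.mockito.Mockito.verify;
import org.junit.Test;

public class MailerTest {
    interface Mailer { void send(String address); }

    @Test
    public void sendsExactlyOneMailToTheRightAddress() {
        Mailer mailer = mock(Mailer.class);

        mailer.send("bob@example.com");

        // verify(mailer) is shorthand for verify(mailer, times(1)):
        // the call must have happened once and only once
        verify(mailer).send("bob@example.com");
        // Parameter check: nothing was sent to any other address
        verify(mailer, never()).send("eve@example.com");
    }
}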

JMock, What should you do when the mock object gets cast to a concrete class?

Not sure how I should be asking the question, but when I define my mock objects and somewhere in the code it attempts to cast one to a different type, the test throws me
$Proxy6 cannot be cast to ...
How does one solve this problem?
Does this class really need to be mocked? I usually mock services and use concrete classes for value types passed in.
One thing you can do is outlined here: define an interface in your test.
If it really needs to be mocked and you can't do the above, you could provide your own implementation which does what you want the mock to do, e.g. records the values passed in and the methods called, returns the values you want, etc., and assert what you need at the end - that might be a lot of work, though.
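A hand-rolled fake along those lines might look like this (UserService and User are stand-ins for whatever type you are replacing):
import java.util.ArrayList;
import java.util.List;

// A hand-rolled fake: it records what was called so the test can
// assert on it afterwards, and returns canned values
public class RecordingUserService implements UserService {
    private final List<String> lookedUpNames = new ArrayList<>();

    @Override
    public User findByName(String name) {
        lookedUpNames.add(name);   // record the parameter
        return new User(name);     // return a canned value
    }

    // The test inspects this at the end and asserts what it needs
    public List<String> lookedUpNames() {
        return lookedUpNames;
    }
}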
Lastly, is this pointing you towards some unidentified interface in your design, or suggesting that the code under test needs some refactoring?
As always, the test is telling you something about your design. Why is the code trying to cast the object? Could you give us more detail?

Should you unit test the return value of Hardcoded Properties?

Should we be testing values that we already know the answer to?
If a value is important enough to be a dedicated hardcoded value, then should it be important enough to change a test at the same time as the value? Or is this just overkill?!
If by “hardcoded properties” you mean something like this (in Java):
int getTheAnswer() {
    return 42;
}
Then you need to ask yourself—why not make it a constant? For example,
static final int THE_ANSWER = 42;
In such a case, you don’t need to unit-test it.
In all other cases, you should consider unit testing.
More context is required to answer your question. It just depends. Is your getXXX method called from another method that is under test? If so, then it's already tested. If not, what is the client doing? Does the method even need to exist?
If you were developing your code using TDD, then your unit test is what would have created the hardcoded property in the first place.
No, but a better question is why are you hardcoding values? I know sometimes you have system-wide config settings that rarely (or never) change, but ideally, even those should be configurable and so should be tested. Values that will NEVER change should still be accessed via well-named constants. If your system is still in early development, maybe you haven't built those components yet, so don't worry about testing them - yet. ;)
Hmm, OK, I guess the exceptions might be math or physics constants.
Should we be testing values that we already know the answer to?
How do you know that? Someone might have changed the code, and as a regression, it returns something else now.
If those values are part of the public API you should test them.
And you should test them against the literal value, not a constant defined by the program you are testing (because those could have been changed in error).
// good
assertEquals("Number", myClass.getType(1234));
assertEquals("Number", MyClass.TYPE_NUMBER);
// not so good
assertEquals(MyClass.TYPE_NUMBER, myClass.getType(1234));
Apart from obvious points from other answers:
You might also want to add an integration test to verify that your hardcoded values actually work and play well with other components.
Sometimes you do need to hardcode values, e.g. when implementing interfaces that ask for a capability test, something like:
interface IDecoder { // in C#
    bool SupportStreaming { get; }
}
Of course, when implementing the IDecoder interface above, you'd have to hardcode a value for the SupportStreaming property.
But the important question would of course not be whether it returns the hardcoded value, but whether the hardcoded value actually is the correct value - which is only relevant when doing integration testing and/or testing other units/methods that depend on the value.
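A rough Java analogue, separating the trivial check from the meaningful one (Decoder and Mp3Decoder are invented names):
interface Decoder {
    boolean supportsStreaming();
}

class Mp3Decoder implements Decoder {
    @Override
    public boolean supportsStreaming() {
        return true; // the hardcoded capability value
    }
}

// Trivial unit test: merely restates the hardcoded value.
// assertTrue(new Mp3Decoder().supportsStreaming());
//
// The meaningful test is whether that value is *correct*: an integration
// test that actually feeds the decoder a stream and checks it copes.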
I personally don't bother with testing anything that's declarative.

How to use TDD when the fix involves changing the method under test's signature?

I'm trying to get my head around TDD methodology and have run into - what I think is - a chicken-and-egg problem: what to do if a bug fix involves the changing of a method's signature.
Consider the following method signature:
string RemoveTokenFromString (string delimited, string token)
As the name suggests, this method removes all instances of a token from delimited and returns the resultant string.
I find later that this method has a bug (e.g. the wrong bits are being removed from the string). So I write a test case describing the scenario where the bug occurs and make sure that the test fails.
When fixing the bug, I find that the method needs more information to be able to do its job properly - and this bit of information can only be sent in as a parameter (the method under test is part of a static class).
What do I do then? If I fix the bug, this compels me to change the unit test - would that be 'correct' TDD methodology?
You have fallen into the most dangerous trap in TDD: you think TDD is about testing, but it isn't. However, it is easy to fall into that trap, since all the terminology in TDD is about testing. This is why BDD was invented: it is essentially TDD, but without the confusing terminology.
In TDD, tests aren't really tests, they are examples. And assertions aren't really assertions, they are expectations. And you aren't dealing with units, you are dealing with behaviors. BDD just calls them that. (Note: BDD has evolved since it was first invented, and it now incorporates things that aren't part of TDD, but the original intention was just "many people do TDD wrong, so use different words to help them do it right".)
Anyway, if you think of a test not as a test, but a behavioral example of how the method should work, it should become obvious that as you develop a better understanding of the expected behavior, deleting or changing the test is not only allowed by TDD, it is the only correct choice! Always keep that in mind!
There is absolutely nothing wrong with bombing your tests when you discover that the intended behaviour of the unit changes.
//Up front
[Test]
public void should_remove_correct_token_from_string()
{
    var text = "do.it.correctly..";
    var expected = "doitcorrectly";
    Assert.AreEqual(expected, StaticClass.RemoveTokenFromString(text, "."));
}
//After finding that it doesn't do the right thing,
//delete the old test and *design* a new function that
//does what you want through a new test.
//Remember: TDD is about design, not testing!
[Test]
public void should_remove_correct_token_from_string()
{
    var text = "do.it.correctly..";
    var expected = "doitcorrectly";
    Assert.AreEqual(
        expected,
        StaticClass.RemoveTokenFromString(
            text,
            ".",
            System.Text.Encoding.UTF8));
}
//This will force you to add a new parameter to your function.
//Obviously now there are edge cases to deal with for your new parameter etc.,
//so more tests are required to further design your new function.
Keep it simple.
If your unit test is wrong, or obsolete, you have to rewrite it. If your specs change, or certain specs are no longer relevant, your unit tests have to reflect that.
Red, green, refactor also applies to your unit tests, not just the code you are testing.
There is a refactoring called Add Parameter that could help here.
If your language supports method overloading, you could create the new function with the new parameter first, copying the body of the existing function and fixing your problem.
Then when the problem is fixed, you can modify all the tests, one by one, to call the new method. Last you can delete the old method.
With a language that does not support method overloading, create a new function with a different name, copy the body of the existing function into that new function, and have the existing function call the new function, possibly with a dummy value for the new parameter. Then you should have all your tests passing again. Make your old tests call the new function, one by one. When the old function is not used anymore, it can be deleted and the new function renamed.
This is a bit process-extensive, but I think this is the TDD way to follow red-green-refactor.
A default value for the parameter could also help, if those are available in your language.
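In Java, which supports overloading, the intermediate state of that refactoring might look like this (the token-removal method mirrors the question's example; the body shown is only a placeholder):
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public final class StringUtils {
    // Old signature, kept temporarily so existing callers and tests still
    // compile; it delegates with a default value for the new parameter
    public static String removeTokenFromString(String delimited, String token) {
        return removeTokenFromString(delimited, token, StandardCharsets.UTF_8);
    }

    // New signature with the extra parameter the bug fix needs
    public static String removeTokenFromString(String delimited, String token,
                                               Charset encoding) {
        // ...the fixed implementation would use the encoding here...
        return delimited.replace(token, "");
    }
}
Once every test has been moved over to the three-parameter version, the two-parameter overload can be deleted.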
Red, Green, Refactor.
Whatever you do, you first want to get to a state where you have a compiling but failing test case that reproduces the bug. You can then proceed to add just the parameter to the test and the implementation, but do nothing with it yet, so you still have Red.
I'd say don't fret about the 'right'/'correct' way... whatever helps get you closer to the solution quicker.
If you find that you need to take in an extra parameter,
update the call in the test case
add the new parameter to the actual method
verify that your code builds and the test fails again.
proceed with making it green.
Only in cases where adding a new parameter would result in zillions of compile errors would I recommend taking it in baby steps... you don't want to update the whole source base before finding out you really didn't need the third param, or that you need a fourth one... time lost. So get the new version of the method 'working' before updating all references. (As philippe says here)
Write a new overload with the added parameter
Move the code of the old method into the new overload
Make the old overload relay or delegate to the new overload with some default value for the new param
Now you can get back to the task at hand and get the new test to go green.
If you don't need the old overload anymore, delete it and fix the resulting compile errors.
If a method is not doing its job correctly then it needs to be fixed, and if the fix requires a change in signature then there is nothing wrong with that. As per TDD, you write the test case first (which will certainly fail) and then you write the method to satisfy the test. Under this approach, if the method call in the test requires a new parameter for it to function, then you need to add it.