Explain unit testing please

I'm a little confused about unit testing. I see the value in things like automated testing, and I think a good example would be the best way to help me understand. Let's say I have a binary search function I want unit tested.
In testing, I would want to know things like: Does the search find the first element, the last element, and other elements? Does the search correctly compare Unicode characters? Does the search handle symbols and other "painful" characters? Would unit testing cover this, or am I missing it? How would you write unit tests for my binary search?
function search(collection, value) {
    var start = 0, end = collection.length - 1, mid;
    while (start <= end) {
        mid = start + Math.floor((end - start) / 2); // integer midpoint
        if (value === collection[mid])
            return mid;
        if (collection[mid] < value)
            start = mid + 1; // target is in the upper half
        else
            end = mid - 1;   // target is in the lower half
    }
    return -1; // not found
}
Pseudocode for unit tests would be lovely.
So, we might have:
function testFirst() {
    var collection = ['a', 'b', 'c', 'x', 'y', 'z'], first = 'a', findex = 0;
    assertEqual(search(collection, first), findex);
}
function testLast() {
    var collection = ['a', 'b', 'c', 'x', 'y', 'z'], last = 'z', lindex = 5;
    assertEqual(search(collection, last), lindex);
}

No, you're not missing it; this is what unit testing is designed to tell you. You have the right idea in testing good and bad input, edge cases, and so on. You need one test for each condition. A test will set up any preconditions and then assert that your calculation (or whatever it may be) matches your expectations.

You're correct in your expectations of unit testing; it's very much about validating and verifying the expected behaviour.
One value I think many folks miss about unit testing is that its value increases with time. When I write a piece of code and a unit test for it, I've basically just tested that the code does what I think it should, and that it's not failing in any of the ways I have chosen to check. These are good things, but they're of limited value, because they express only the knowledge you have of the system at the time; they can't help you with things you don't know about (is there a sneaky bug in my algorithm that I don't know about and didn't think to test for?).
The real value of unit tests, in my opinion, is the value they gain over time. This value takes two forms: documentation value and validation value.
The documentation value is the value of the unit test saying "this is what the author of the code expected this bit of code to do". It's hard to overstate the worth of this; when you've been on a project that has a large chunk of underdocumented legacy code, let me tell you, this sort of documentation is like a miracle.
The other value is that of validation: as code lives on in projects, things get refactored, changed, and shifted. Unit tests provide validation that the component you thought worked in one way continues to work in that way. This can be invaluable in helping find errors that creep into projects. For example, changing a database solution can sometimes be transparent, but sometimes those changes cause unexpected shifts in the way some things work; unit testing the components which depend on your ORM can catch critical, subtle shifts in the underlying behaviour. This gets really useful when you've got a chunk of code that's been working perfectly for years, and nobody thinks to consider its potential role in a failure; those types of bugs can take a VERY long time to find, because the last place you're going to look is the component that's been rock-solid for ages. Unit tests provide validation of that "rock solidity".

Yes, that's about it. Each of the questions you ask could be used as a test. Think of a unit test as three steps: set up some preconditions, run some code that is "under test", and write an assert that documents your expectations.
In your case, setting up 'collection' with some particular values (or no values) is setting the preconditions.
Calling your search method with a particular parameter is running the code under test.
Checking that the value returned by your method matches what you expect is the assert step.
Give those three things a name that describes what you are trying to do (DoesTheSearchMethodFailIfCollectionIsEmpty) and voilà, you have a unit test.
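For instance, sketched in Python's unittest (a rough port, assuming a search function like the one in the question that returns -1 when the value is absent):

import unittest

# Assumed Python port of the question's search(collection, value);
# returns the index of value in the sorted collection, or -1 if absent.
def search(collection, value):
    start, end = 0, len(collection) - 1
    while start <= end:
        mid = start + (end - start) // 2
        if collection[mid] == value:
            return mid
        if collection[mid] < value:
            start = mid + 1
        else:
            end = mid - 1
    return -1

class SearchTests(unittest.TestCase):
    # Arrange: build the collection. Act: call search. Assert: check the index.
    def test_finds_first_element(self):
        collection = ['a', 'b', 'c', 'x', 'y', 'z']
        self.assertEqual(search(collection, 'a'), 0)

    def test_finds_last_element(self):
        collection = ['a', 'b', 'c', 'x', 'y', 'z']
        self.assertEqual(search(collection, 'z'), 5)

    def test_does_the_search_method_fail_if_collection_is_empty(self):
        self.assertEqual(search([], 'a'), -1)

if __name__ == '__main__':
    unittest.main()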

Does property based testing make you duplicate code?

I'm trying to replace some old unit tests with property-based testing (PBT), concretely with Scala and ScalaTest/ScalaCheck, but I think the problem is more general. The simplified situation is: if I have a method I want to test:
def upcaseReverse(s:String) = s.toUpperCase.reverse
Normally, I would have written unit tests like:
assertEquals("GNIRTS", upcaseReverse("string"))
assertEquals("", upcaseReverse(""))
// ... corner cases I could think of
So, for each test, I write the output I expect; no problem. Now, with PBT, it'd be like:
property("strings are reversed and upper-cased") {
forAll { (s: String) =>
assert ( upcaseReverse(s) == ???) //this is the problem right here!
}
}
As I try to write a test that will be true for all String inputs, I find myself having to write the logic of the method again in the test. In this case the test would look like:
assert ( upcaseReverse(s) == s.toUpperCase.reverse)
That is, I had to write the implementation in the test to make sure the output is correct.
Is there a way out of this? Am I misunderstanding PBT, and should I be testing other properties instead, like:
"strings should have the same length as the original"
"strings should contain all the characters of the original"
"strings should not contain lower case characters"
...
That is also plausible, but it sounds rather contrived and less clear. Can anybody with more experience in PBT shed some light here?
EDIT: following Eric's sources I got to this post, and there's exactly an example of what I mean (at "Applying the categories one more time"): to test the method Times (in F#):
type Dollar(amount:int) =
    member val Amount = amount
    member this.Add add =
        Dollar (amount + add)
    member this.Times multiplier =
        Dollar (amount * multiplier)
    static member Create amount =
        Dollar amount
the author ends up writing a test that goes like:
let ``create then times should be same as times then create`` start multiplier =
    let d0 = Dollar.Create start
    let d1 = d0.Times(multiplier)
    let d2 = Dollar.Create (start * multiplier) // This one duplicates the code of Times!
    d1 = d2
So, in order to test a method, the code of the method is duplicated in the test. In this case it's something as trivial as multiplication, but I think it extrapolates to more complex cases.
This presentation gives some clues about the kind of properties you can write for your code without duplicating it.
In general it is useful to think about what happens when you compose the method you want to test with other methods on that class:
size
++
reverse
toUpperCase
contains
For example:
upcaseReverse(y) ++ upcaseReverse(x) == upcaseReverse(x ++ y)
Then think about what would break if the implementation was broken. Would the property fail if:
size was not preserved?
not all characters were uppercased?
the string was not properly reversed?
Property 1 is actually implied by property 3, and I think the property above would break for 3. However, it would not break for 2 (if there was no uppercasing at all, for example). Can we enhance it? What about:
upcaseReverse(y) ++ x.reverse.toUpperCase == upcaseReverse(x ++ y)
I think this one is ok but don't believe me and run the tests!
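To make that concrete, here is one way the two concatenation properties might be exercised with Python's hypothesis library (a sketch; upcase_reverse is a hypothetical port, and generation is restricted to ASCII because some characters expand under uppercasing, e.g. 'ﬁ'.upper() == 'FI', which breaks reversal laws):

from hypothesis import given
from hypothesis import strategies as st

def upcase_reverse(s: str) -> str:
    # Hypothetical Python port of the function under test.
    return s.upper()[::-1]

# ASCII-only strings: full Unicode case mapping can change string length.
ascii_text = st.text(alphabet=st.characters(max_codepoint=127))

@given(ascii_text, ascii_text)
def test_concatenation_property(x, y):
    assert upcase_reverse(y) + upcase_reverse(x) == upcase_reverse(x + y)

@given(ascii_text, ascii_text)
def test_enhanced_property(x, y):
    # Also fails if uppercasing is missing, unlike the first property.
    assert upcase_reverse(y) + x[::-1].upper() == upcase_reverse(x + y)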
Anyway I hope you get the idea:
compose with other methods
see if there are equalities which seem to hold (things like "round-tripping" or "idempotency" or "model-checking" in the presentation)
check if your property will break when the code is wrong
Note that 1. and 2. are implemented by a library named QuickSpec and 3. is "mutation testing".
Addendum
About your Edit: the Times operation is just a wrapper around * so there's not much to test. However in a more complex case you might want to check that the operation:
has a unit element
is associative
is commutative
is distributive with the addition
If any of these properties fails, this would be a big surprise. If you encode those properties as generic properties for any binary operation T x T -> T, you should be able to reuse them very easily in all sorts of contexts (see the Scalaz Monoid "laws").
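For illustration, encoding two of those laws as reusable generic properties might look like this in Python with hypothesis (a sketch; the helper names are my own, not from a particular library):

from hypothesis import given
from hypothesis import strategies as st

# Reusable laws for a binary operation op: (T, T) -> T.
# 'elements' is any strategy generating values of T.
def associativity_law(op, elements):
    @given(elements, elements, elements)
    def check(a, b, c):
        assert op(op(a, b), c) == op(a, op(b, c))
    return check

def identity_law(op, unit, elements):
    @given(elements)
    def check(a):
        assert op(a, unit) == a
        assert op(unit, a) == a
    return check

# The same laws reused for two different monoids:
test_int_add_associative = associativity_law(lambda a, b: a + b, st.integers())
test_int_add_identity = identity_law(lambda a, b: a + b, 0, st.integers())
test_str_concat_associative = associativity_law(lambda a, b: a + b, st.text())
test_str_concat_identity = identity_law(lambda a, b: a + b, "", st.text())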
Coming back to your upcaseReverse example, I would actually write 2 separate properties:
"upperCaseReverse must uppercase the string" >> forAll { s: String =>
upperCaseReverse(s).forall(_.isUpper)
}
"upperCaseReverse reverses the string regardless of case" >> forAll { s: String =>
upperCaseReverse(s).toLowerCase === s.reverse.toLowerCase
}
This doesn't duplicate the code and states 2 different things which can break if your code is wrong.
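Outside Scala, those two properties translate almost word for word; for example in Python with hypothesis (a sketch; ASCII-only again, since e.g. 'ß'.upper() == 'SS' changes string lengths and would break the second property):

from hypothesis import given
from hypothesis import strategies as st

def upcase_reverse(s: str) -> str:
    # Hypothetical port of the function under test.
    return s.upper()[::-1]

ascii_text = st.text(alphabet=st.characters(max_codepoint=127))

@given(ascii_text)
def test_must_uppercase_the_string(s):
    assert not any(c.islower() for c in upcase_reverse(s))

@given(ascii_text)
def test_reverses_regardless_of_case(s):
    assert upcase_reverse(s).lower() == s[::-1].lower()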
In conclusion, I had the same question as you before and felt pretty frustrated about it, but after a while I found more and more cases where I was not duplicating my code in properties, especially when I started thinking about
combining the tested function with other functions (the isLower check in the first property)
comparing the tested function with a simpler "model" of computation ("reverse regardless of case" in the second property)
I have called this problem "convergent testing", but I can't figure out why or where the term comes from, so take it with a grain of salt.
For any test you run the risk of the complexity of the test code approaching the complexity of the code under test.
In your case, the code winds up being basically the same, which is just writing the same code twice. Sometimes there is value in that. For example, if you are writing code to keep someone in intensive care alive, you could write it twice to be safe. I wouldn't fault you for the abundance of caution.
For other cases there comes a point where the likelihood of the test breaking invalidates the benefit of the test catching real issues. For that reason, even if it is against best practice in other ways (enumerating things that should be calculated, not writing DRY code), I try to write test code that is in some way simpler than the production code, so it is less likely to fail.
If I cannot find a way to write test code that is simpler than the production code and still maintainable (read: "that I also like"), I move that test to a "higher" level (for example, unit test -> functional test).
I just started playing with property-based testing, but from what I can tell it is hard to make it work for many unit tests. For complex units it can work, but so far I find it more helpful for functional testing.
For functional testing you can often state the rule a function has to satisfy much more simply than you can write a function that satisfies the rule. This feels to me a lot like the P vs NP problem, where you can write a program to VALIDATE a solution in polynomial time, but all known programs to FIND a solution take much longer. That seems like a wonderful case for property testing.

Unit testing checking for nulls

This is a very basic question, but I still cannot find an appropriate answer. In my test there is a possibility of null values, and because of that the last stage (Act) starts looking a little strange (it is no longer act only). What I mean is the following:
Assert.IsNotNull(variable);
var newVariable = variable.Property;
Assert.IsNotNull(newVariable);
var finalVariable = newVariable.AnotherProperty;
Assert.AreEqual(3, finalVariable.Count);
Now, they are obviously related, and I have to be sure the values are not null, but there are also three asserts in one test, and the Act part starts to look wrong.
So what is the general solution in such cases? Is there anything smarter than three tests with one assert each, plus null checks before the asserts in the last two?
Basically there are two ways of dealing with your problem:
Guard assertions: extra asserts making sure the data is in a known state before the proper test takes place (that's what you're doing now).
Moving guard assertions to their own tests.
Which option to choose largely depends on the code under test. If the preconditions would be duplicated in other tests, that's a hint for the separate-test approach. If the precondition is reflected in production code, that's again a hint for the separate-test approach.
On the other hand, if it's only something you do to boost your confidence, maybe a separate test is too much (yet, as noted in other answers, it might be a sign that you're not in full control of your test or that you're testing too many things at once).
I think you should split this test into three tests and name them according to what's happening. It's perfectly sensible even if the Act steps in those tests are the same; you are testing different scenarios by checking the return value of the method.
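Sketched in Python's unittest (names and the build_variable arrange step are illustrative, not from the question):

import unittest

class Result:
    # Hypothetical stand-in for the object the original test navigates.
    def __init__(self):
        self.property = self
        self.another_property = [1, 2, 3]

def build_variable():
    # Stand-in for the Arrange step of the original test.
    return Result()

class VariableTests(unittest.TestCase):
    # One expectation per test: a failure now names the exact scenario.
    def test_variable_is_created(self):
        self.assertIsNotNone(build_variable())

    def test_variable_exposes_its_property(self):
        self.assertIsNotNone(build_variable().property)

    def test_another_property_has_three_items(self):
        self.assertEqual(len(build_variable().property.another_property), 3)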
Nulls are a royal pain. The question is: can they legitimately exist?
Let's separate our discussion into code and tests.
If nulls shouldn't exist, then the code itself, not the tests, should check and verify that values are not null. For this reason, each and every method of my code is built using a snippet that checks the arguments:
public VideoPosition(FrameRate theFrameRate, TimeSpan theAirTime)
{
    Logger.LogMethod("theFrameRate", theFrameRate, "theAirTime", theAirTime);
    try
    {
        #region VerifyInputs
        Validator.Verify(theFrameRate);
        Validator.Verify(theAirTime);
        Validator.VerifyTrue(theAirTime.Ticks >= 0, "theAirTime.Ticks >= 0");
        #endregion
        // ... rest of the constructor ...
    }
    catch
    {
        throw; // placeholder: the original snippet was truncated here
    }
}
If nulls ARE legitimate in the code, but you are testing a scenario where the returned values shouldn't be null, then of course you have to verify this in your testing code.
In your unit test you should be able to control every input to your class under test. This means that you control whether your variable has a value or not.
So you would have one unit test that forces your variable to be null and then asserts this.
You will then have another test where you can be sure that your variable has a value, and you only need the other asserts.
I wrote a blog about this some time ago. Maybe it can help: Unit Testing, hell or heaven?
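A minimal sketch of that pair of tests in Python's unittest (the Processor class and its API are hypothetical):

import unittest

class Processor:
    # Hypothetical class under test; 'source' is the input you control.
    def __init__(self, source):
        self.source = source

    def result(self):
        return list(self.source) if self.source is not None else None

class ProcessorTests(unittest.TestCase):
    def test_result_is_none_when_source_is_missing(self):
        self.assertIsNone(Processor(source=None).result())

    def test_result_has_three_items_when_source_is_present(self):
        self.assertEqual(len(Processor(source=[1, 2, 3]).result()), 3)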

Unit testing with random data

I've read that generating random data in unit tests is generally a bad idea (and I do understand why), but testing on random data and then constructing a fixed unit test case from the random tests which uncovered bugs seems nice. However, I don't understand how to organize it nicely. My question is not related to a specific programming language or unit test framework, so I'll use Python and a pseudo unit test framework. Here's how I see coding it:
def random_test_cases():
    datasets = [
        dataset1,
        dataset2,
        ...,
        datasetn,
    ]
    for dataset in datasets:
        assertTrue(...)
        assertEquals(...)
        assertRaises(...)
        # and so on
The problem is: when this test case fails, I can't figure out which dataset caused the failure. I see two ways of solving it:
Create a single test case per dataset — the problem is the sheer number of test cases and the code duplication.
Usually the test framework lets us pass a message to the assert functions (in my example I could do something like assertTrue(..., message=str(dataset))). The problem is that I would have to pass such a message to every assert, which does not look elegant either.
Is there a simpler way of doing it?
I still think it's a bad idea.
Unit tests need to be straightforward: given the same piece of code and the same unit test, you should be able to run it infinitely and never get a different response unless there's an external factor coming into play. A goal contrary to this will increase the maintenance cost of your automation, which defeats the purpose.
Outside of the maintenance aspect, to me it seems lazy. If you put thought into your functionality and understand the positive as well as the negative test cases, developing unit tests is straightforward.
I also disagree with the user who shows how to run multiple test cases inside the same test case. When a test fails, you should be able to tell immediately which test failed and why it failed. Tests should be as simple as you can make them and as concise/relevant to the code under test as possible.
You could define tests by extension instead of enumeration, or you could call multiple test cases from a single case.
Calling multiple test cases from a single test case:
void MyTest()
{
    MyTestCase(1, "A");
    MyTestCase(1, "B");
    MyTestCase(2, "A");
    MyTestCase(2, "B");
    MyTestCase(3, "A");
    MyTestCase(3, "B");
}
And there are sometimes elegant ways to achieve this with some testing frameworks. Here is how to do it in NUnit:
[Test, Combinatorial]
public void MyTest(
    [Values(1,2,3)] int x,
    [Values("A","B")] string s)
{
    ...
}
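Since the question used Python pseudo-code, the pytest equivalent is parametrize: stacked decorators produce the same cross product as NUnit's Combinatorial, and each combination is reported as its own named test (a sketch; do_something is a hypothetical stand-in for the code under test):

import pytest

def do_something(x: int, s: str) -> str:
    # Hypothetical stand-in for the code under test.
    return f"{s}{x}"

@pytest.mark.parametrize("x", [1, 2, 3])
@pytest.mark.parametrize("s", ["A", "B"])
def test_my_case(x, s):
    # Runs once per (x, s) pair; a failure names the exact combination.
    assert do_something(x, s) == f"{s}{x}"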
I also think it's a bad idea.
Mind you, not throwing random data at your code, but having unit tests doing that. It all boils down to why you unit test in the first place. The answer is "to drive the design of the code". Random data doesn't drive the design of the code, because it depends on a very rigid public interface. Mind you, you can find bugs with it, but that's not what unit tests are about. And let me note that I'm talking about unit tests, and not tests in general.
That being said, I strongly suggest taking a look at QuickCheck. It's Haskell, so it's a bit dodgy on presentation and a bit PhD-ish on documentation, but you should be able to figure it out. I'm going to summarize how it works, though.
After you pick the code you want to test (let's say the sort() function), you establish invariants which should hold. In this example, you could have the following invariants, given result = sort(input):
Every element in result should be smaller than or equal to the next one.
Every element in input should be present in result the same number of times.
result and input should have the same length (this repeats the previous one, but let's keep it for illustration).
You encode each invariant in a simple function that takes the input and the result and checks whether the invariant holds.
Then, you tell QuickCheck how to generate input. Since this is Haskell and the type system kicks ass, it can see that the function takes a list of integers and it knows how to generate those. It basically generates random lists of random integers and random length. Of course, it can be more fine-grained if you have a more complex data type (for example, only positive integers, only squares, etc.).
Finally, when you have those two, you just run QuickCheck. It generates all that stuff randomly and checks the invariants. If some fail, it will show you exactly which ones. It would also tell you the random seed, so you can rerun this exact failure if you need to. And as an extra bonus, whenever it gets a failed invariant, it will try to reduce the input to the smallest possible subset that fails the invariant (if you think of a tree structure, it will reduce it to the smallest subtree that fails the invariant).
And there you have it. In my opinion, this is how you should go about testing stuff with random data. It's definitely not unit tests and I even think you should run it differently (say, have CI run it every now and then, as opposed to running it on every change (since it will quickly get slow)). And let me repeat, it's a different benefit from unit testing - QuickCheck finds bugs, while unit testing drives design.
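The same generate-check-shrink model exists outside Haskell; for instance, the three sort() invariants above translate directly to Python's hypothesis library (a sketch, checked against the built-in sorted()):

from collections import Counter
from hypothesis import given
from hypothesis import strategies as st

@given(st.lists(st.integers()))
def test_sort_invariants(xs):
    result = sorted(xs)
    # 1. Every element is <= the next one.
    assert all(a <= b for a, b in zip(result, result[1:]))
    # 2. Same elements, same multiplicities.
    assert Counter(result) == Counter(xs)
    # 3. Same length (implied by 2, kept for illustration).
    assert len(result) == len(xs)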
Usually, unit test frameworks support "informative failures" as long as you pick the right assertion method.
However, if all else fails, you could easily trace the dataset to the console or an output file. Low-tech, but it should work.
[TestCaseSource("GetDatasets")]
public Test.. (Dataset d)
{
Console.WriteLine(PrettyPrintDataset(d));
// proceed with checks
Console.WriteLine("Worked!");
}
In quickcheck for R we tried to solve this problem as follows:
the tests are actually pseudo-random (the seed is fixed), so you can always reproduce your test results (barring external factors, of course)
the test function returns enough data to reproduce the error, including the assertion that failed and the data that made it fail. A convenience function, repro, called on the return value of test, will land you in the debugger at the beginning of the failing assertion, with arguments set to the witnesses of the failure. If the tests are executed in batch mode, equivalent information is stored in a file and the command to retrieve it is printed to stderr, and you can then call repro as before.
Whether or not you program in R, I would love to know if this starts to address your requirements. Some aspects of this solution may be hard to implement in languages that are less dynamic or that lack first-class functions.

Automated testing feels a lot like duplicating the tested logic, am I doing it right?

I'm implementing automated testing with CppUTest in C++.
I realize I end up almost copying and pasting the logic to be tested into the tests themselves, so I can check the expected outcomes.
Am I doing it right? Should it be otherwise?
edit: I'll try to explain better:
The unit being tested takes input A, does some processing, and returns output B.
So apart from making some black-box checks, like checking that the output lies in an expected range, I would also like to see whether the output B that I got is the right outcome for input A, i.e. whether the logic is working as expected.
So, for example, if the unit just multiplies A by 2 to yield B, then in the test I have no other way of checking than doing the A-times-2 calculation again and comparing it against B to be sure it went alright.
That's the duplication I'm talking about.
// Actual function being tested:
int times2( int a )
{
    return a * 2;
}

// Test:
int test_a = 5;              // any sample input
int expected_b = test_a * 2; // here I'm duplicating times2()'s logic
int actual_b = times2( test_a );
CHECK( actual_b == expected_b );
PS: I think I will reformulate this in another question with my actual source code.
If your goal is to build automated tests for your existing code, you're probably doing it wrong. Hopefully you know what the result of frobozz.Gonkulate() should be for various inputs and can write tests to check that Gonkulate() is returning the right thing. If you have to copy Gonkulate()'s convoluted logic to figure out the answer, you might want to ask yourself how well you understand the logic to begin with.
If you're trying to do test-driven development, you're definitely doing it wrong. TDD consists of many quick cycles of:
Writing a test
Watching it fail
Making it pass
Refactoring as necessary to improve the overall design
Step 1 - writing the test first - is an essential part of TDD. I infer from your question that you're writing the code first and the tests later.
So, for example, if the unit just multiplies A by 2 to yield B, then in the test I have no other way of checking than doing the A-times-2 calculation again and comparing it against B to be sure it went alright.
Yes you do! You know how to calculate A times two, so you don't need to do it in code. If A is 4, then you know the answer is 8, so you can just use it as the expected value.
CHECK( actual_b == 8 )
If you are worried about magic numbers, don't be. Nobody will be confused about the meaning of the hard-coded numbers in the following line:
CHECK( times2(4) == 8 )
If you don't know what the result should be, then your unit test is useless. If you need to calculate the expected result, then you are either using the same logic as the function or using an alternate algorithm to work out the result. In the first case, if the logic you duplicated is incorrect, your test will still pass! In the second case, you are introducing another place for a bug to occur: when a test fails, you have to work out whether the function under test has a bug or your test method has one.
I think this one is a tough nut to crack because it's essentially a mentality shift. It was somewhat hard for me.
The point of tests is to have your expectations nailed down and to check whether your code really does what you think it does. Think of ways of exercising it, not checking its logic so directly, but as a whole. If that's too hard, maybe your function/method just does too much.
Try to think of your tests as working examples of what your code can do, not as a mathematical proof.
The programming language shouldn't matter.
var ANY_NUMBER = 4;
Assert.That(times2(ANY_NUMBER), Is.EqualTo(ANY_NUMBER * 2));
In this case, I wouldn't mind duplicating the logic: the expected value ANY_NUMBER * 2 is more readable than a bare 8, and this logic doesn't look like a change magnet; it's relatively static.
For cases where the logic is more involved (chunky) and prone to change, duplicating the logic in the test is definitely not recommended; duplication is evil, and any change to the logic would ripple changes into the test. In that case, I'd use hardcoded input/expected-output pairs with readable pair names.
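For instance, in pytest such hardcoded pairs with readable names might look like this (a sketch; times2 is a Python stand-in for the function under test):

import pytest

def times2(a: int) -> int:
    # Stand-in for the function under test.
    return a * 2

@pytest.mark.parametrize(
    "name, value, expected",
    [
        ("zero stays zero", 0, 0),
        ("positive doubles", 4, 8),
        ("negative doubles", -3, -6),
    ],
)
def test_times2(name, value, expected):
    # Expected values are enumerated, not recalculated with production logic.
    assert times2(value) == expected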

Unit testing specific values

Consider the following code (from a requirement that says that 3 is special for some reason):
bool IsSpecial(int value)
{
    if (value == 3)
        return true;
    else
        return false;
}
I would unit test this with a couple of functions: one called TEST(ThreeIsSpecial) that asserts that when passed 3 the function returns true, and another that passes some random value other than 3 and asserts that the function returns false.
When the requirement changes and say it now becomes 3 and 20 are special, I would write another test that verifies that when called with 20 this function returns true as well. That test would fail and I would then go and update the if condition in the function.
Now, what if there are people on my team who do not believe in unit testing and they make this change? They will go and change the code directly, and my second unit test might not test for 20 (it could be picking a random int or have some other int hardcoded), so my tests are no longer in sync with the code. How do I ensure that when they change the code, some unit test or other fails?
I could be doing something grossly wrong here so any other techniques to get around this are also welcome.
That's a good question. As you note, a Not3IsNotSpecial test picking a random non-3 value would be the traditional approach. This wouldn't catch a change in the definition of "special".
In a .NET environment you can use the new code contracts capability to write the test predicate (the postcondition) directly in the method. The static analyzer would catch the defect you proposed. For example:
Contract.Ensures(Contract.Result<bool>() == (value == 3)); // the result is true exactly for the special value
I think anybody that's a TDD fan is experimenting with contracts now to see use patterns. The idea that you have tools to prove correctness is very powerful. You can even specify these predicates for an interface.
The only testing approach I've seen that would address this is Model-Based Testing. The idea is similar to the contracts approach. You set up the Not3IsNotSpecial condition abstractly (e.g., IsSpecial(x => x != 3) == false) and let a model execution environment generate concrete tests. I'm not sure, but I think these environments do static analysis as well. Anyway, you let the model execution environment run continuously against your SUT. I've never used such an environment, but the concept is interesting.
Unfortunately, that specific scenario is something that is difficult to guard against. With a function like IsSpecial, it's unrealistic to test all four billion negative test cases, so, no, you're not doing something grossly wrong.
Here's what comes to me off the top of my head. Many repositories have hooks that allow you to run some process on each check-in, such as running the unit tests. It's possible to set a criterion that newly checked in code must reach some threshold of code coverage under unit tests. If the commit does not meet certain metrics, it is rejected.
I've never had to set one of these systems up, so I don't know what is involved, but I do know it's possible.
And believe me, I feel your pain. I work with people who are similarly resistant to unit testing.
One thing you need to think about is why 3 is a special value and others are not. If it is defining some aspect of your application, you can take that aspect out and make an enum of it.
Then the method simply checks whether the value exists in the enum, and for the enum you write a test that pins down the possible values. If a new possible value is added, that test should fail.
So your method will become:
bool IsSpecial(int value)
{
    if (SpecialValues.has(value))
        return true;
    else
        return false;
}
and your SpecialValues will be an enum like:
enum SpecialValues {
    THREE(3), TWENTY(20);

    public final int value;

    SpecialValues(int value) { this.value = value; }
}
Now you should write tests for the enum's possible values. A simple test can check the total number of possible values, and another can check the values themselves.
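In Python terms, the same idea might look like this (a sketch; is_special mirrors the hypothetical SpecialValues.has above):

from enum import Enum
import unittest

class SpecialValues(Enum):
    THREE = 3
    TWENTY = 20

def is_special(value: int) -> bool:
    # Membership test against the enum, like SpecialValues.has(value).
    return any(value == member.value for member in SpecialValues)

class SpecialValuesTests(unittest.TestCase):
    def test_number_of_special_values(self):
        # Fails as soon as a value is added without updating the tests.
        self.assertEqual(len(SpecialValues), 2)

    def test_the_values_themselves(self):
        self.assertEqual({m.value for m in SpecialValues}, {3, 20})

    def test_is_special_consults_the_enum(self):
        self.assertTrue(is_special(3))
        self.assertTrue(is_special(20))
        self.assertFalse(is_special(4))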
The other point to make is that in a less contrived example:
20 might have been some valid condition to test for based on knowledge of the business domain. Writing tests in a BDD style based on knowledge of the business problem might have helped you explicitly catch it.
4 might have been a good value to test for due to its status as a boundary condition. In the real world boundaries are more likely to change, so this would more likely show up in a full test suite.