Does property based testing make you duplicate code?

I'm trying to replace some old unit tests with property based testing (PBT), concretely with Scala and ScalaTest + ScalaCheck, but I think the problem is more general. The simplified situation is: if I have a method I want to test:
def upcaseReverse(s: String) = s.toUpperCase.reverse
Normally, I would have written unit tests like:
assertEquals("GNIRTS", upcaseReverse("string"))
assertEquals("", upcaseReverse(""))
// ... corner cases I could think of
So, for each test, I write the output I expect; no problem. Now, with PBT, it'd be like:
property("strings are reversed and upper-cased") {
forAll { (s: String) =>
assert ( upcaseReverse(s) == ???) //this is the problem right here!
}
}
As I try to write a test that will be true for all String inputs, I find myself having to write the logic of the method again in the test. In this case the test would look like:
assert(upcaseReverse(s) == s.toUpperCase.reverse)
That is, I had to write the implementation in the test to make sure the output is correct.
Is there a way out of this? Am I misunderstanding PBT, and should I be testing other properties instead, like :
"strings should have the same length as the original"
"strings should contain all the characters of the original"
"strings should not contain lower case characters"
...
That is also plausible, but it sounds much more contrived and less clear (a sketch of those properties follows below). Can anybody with more experience in PBT shed some light here?
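For concreteness, those three candidate properties might look like this in the same ScalaTest + ScalaCheck style as above (a sketch; note that Unicode edge cases such as 'ß', whose upper case is "SS", can change the length):
property("result has the same length as the original") {
  forAll { (s: String) =>
    assert(upcaseReverse(s).length == s.length) // may fail for 'ß' and friends
  }
}
property("result contains all the characters of the original, ignoring case") {
  forAll { (s: String) =>
    assert(upcaseReverse(s).toLowerCase.sorted == s.toLowerCase.sorted)
  }
}
property("result contains no lower-case characters") {
  forAll { (s: String) =>
    assert(upcaseReverse(s).forall(!_.isLower))
  }
}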
EDIT: following #Eric's sources I got to this post, and there is exactly an example of what I mean (at "Applying the categories one more time"): to test the method Times (in F#):
type Dollar(amount:int) =
    member val Amount = amount
    member this.Add add =
        Dollar (amount + add)
    member this.Times multiplier =
        Dollar (amount * multiplier)
    static member Create amount =
        Dollar amount
the author ends up writing a test that goes like:
let ``create then times should be same as times then create`` start multiplier =
    let d0 = Dollar.Create start
    let d1 = d0.Times(multiplier)
    let d2 = Dollar.Create (start * multiplier) // This one duplicates the code of Times!
    d1 = d2
So, in order to test a method, the code of the method is duplicated in the test. In this case it is something as trivial as multiplying, but I think it extrapolates to more complex cases.

This presentation gives some clues about the kind of properties you can write for your code without duplicating it.
In general it is useful to think about what happens when you compose the method you want to test with other methods on that class:
size
++
reverse
toUpperCase
contains
For example:
upcaseReverse(y) ++ upcaseReverse(x) == upcaseReverse(x ++ y)
Then think about what would break if the implementation was broken. Would the property fail if:
size was not preserved?
not all characters were uppercased?
the string was not properly reversed?
1. is actually implied by 3., and I think that the property above would break for 3. However, it would not break for 2 (if there was no uppercasing at all, for example). Can we enhance it? What about:
upcaseReverse(y) ++ x.reverse.toUpperCase == upcaseReverse(x ++ y)
I think this one is OK, but don't believe me: run the tests!
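Taking "run the tests" literally, both composition properties can be checked in the same style as the question's snippet (a sketch):
property("upcaseReverse distributes over ++, swapping the order") {
  forAll { (x: String, y: String) =>
    assert(upcaseReverse(y) ++ upcaseReverse(x) == upcaseReverse(x ++ y))
  }
}
property("enhanced: should also catch a missing uppercasing step") {
  forAll { (x: String, y: String) =>
    assert(upcaseReverse(y) ++ x.reverse.toUpperCase == upcaseReverse(x ++ y))
  }
}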
Anyway I hope you get the idea:
compose with other methods
see if there are equalities which seem to hold (things like "round-tripping" or "idempotency" or "model-checking" in the presentation)
check if your property will break when the code is wrong
Note that 1. and 2. are implemented by a library named QuickSpec and 3. is "mutation testing".
Addendum
About your Edit: the Times operation is just a wrapper around * so there's not much to test. However in a more complex case you might want to check that the operation:
has a unit element
is associative
is commutative
distributes over addition
If any of these properties fails, that would be a big surprise. If you encode those properties as generic properties for any binary operation T x T -> T, you should be able to reuse them very easily in all sorts of contexts (see the Scalaz Monoid "laws").
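A hedged sketch of what such reusable law checks might look like with ScalaCheck (the helper names here are mine, not a library API):
import org.scalacheck.{Arbitrary, Prop}
import org.scalacheck.Prop.forAll

// Reusable laws for any binary operation op: (T, T) => T.
def hasUnit[T: Arbitrary](op: (T, T) => T, unit: T): Prop =
  forAll { (a: T) => op(a, unit) == a && op(unit, a) == a }

def associative[T: Arbitrary](op: (T, T) => T): Prop =
  forAll { (a: T, b: T, c: T) => op(op(a, b), c) == op(a, op(b, c)) }

def commutative[T: Arbitrary](op: (T, T) => T): Prop =
  forAll { (a: T, b: T) => op(a, b) == op(b, a) }

// Reused for plain multiplication, or for Dollar's Times via its Amount:
val multiplicationIsAssociative = associative[Int](_ * _)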
Coming back to your upcaseReverse example, I would actually write 2 separate properties:
"upcaseReverse must uppercase the string" >> forAll { s: String =>
  upcaseReverse(s).forall(!_.isLower) // digits and symbols are neither upper nor lower case
}
"upcaseReverse reverses the string regardless of case" >> forAll { s: String =>
  upcaseReverse(s).toLowerCase === s.reverse.toLowerCase
}
This doesn't duplicate the code and states 2 different things which can break if your code is wrong.
In conclusion, I had the same question as you before and felt pretty frustrated about it, but after a while I found more and more cases where I was not duplicating my code in properties, especially when I started thinking about:
combining the tested function with other functions (.isUpper in the first property)
comparing the tested function with a simpler "model" of computation ("reverse regardless of case" in the second property)

I have called this problem "convergent testing", but I can't figure out why or where the term comes from, so take it with a grain of salt.
For any test, you run the risk of the complexity of the test code approaching the complexity of the code under test.
In your case, the code winds up being basically the same, which is just writing the same code twice. Sometimes there is value in that. For example, if you are writing code to keep someone in intensive care alive, you could write it twice to be safe. I wouldn't fault you for the abundance of caution.
For other cases there comes a point where the likelihood of the test breaking outweighs the benefit of the test catching real issues. For that reason, even if it is against best practice in other ways (enumerating things that should be calculated, not writing DRY code), I try to write test code that is in some way simpler than the production code, so it is less likely to fail.
If I cannot find a way to write test code that is simpler than the production code and still maintainable (read: "that I also like"), I move that test to a "higher" level (for example, unit test -> functional test).
I just started playing with property based testing, but from what I can tell it is hard to make it work for many unit tests. It can work for complex units, but so far I find it more helpful for functional testing.
For functional testing you can often state the rule a function has to satisfy much more simply than you can write a function that satisfies the rule. This feels to me a lot like the P vs NP problem, where you can write a program to VALIDATE a solution in polynomial time even though all known programs to FIND a solution take much longer. That seems like a wonderful case for property testing.
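To make that concrete, here is a hedged sketch in Scala: factorize stands in for a hypothetical function under test, and the property only validates its answers, never re-implementing the search.
import org.scalacheck.Gen
import org.scalacheck.Prop.forAll

// Naive stand-in for the function under test: the prime factorisation of n.
def factorize(n: Int): List[Int] = {
  var m = n
  var d = 2
  val fs = scala.collection.mutable.ListBuffer[Int]()
  while (m > 1) {
    if (m % d == 0) { fs += d; m /= d } else d += 1
  }
  fs.toList
}

// Cheap validation helper: trial division up to sqrt(k).
def isPrime(k: Int): Boolean =
  k > 1 && (2 to math.sqrt(k.toDouble).toInt).forall(k % _ != 0)

// Validating a factorisation is easy; finding one is the hard part.
val factorsAreCorrect = forAll(Gen.choose(2, 1000000)) { n =>
  val factors = factorize(n)
  factors.product == n && factors.forall(isPrime)
}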

Related

Test Driven Development Understanding Problems

Maybe somebody can help me understand the "Test Driven Development" method. I tried the following example by myself and I don't know where my understanding problem is.
Assume that we need a function that gives back the sum of two numbers a and b.
To ensure that the function works right, I write several tests, like creating the sum object, checking if a and b are numbers, and so on. But the first "real test" of correct calculation is the following:
a=3
b=3
expected value: 6
The TDD method only allows us to take the minimal steps needed to make the test pass.
So the function looks like:
sum(a, b){
    return 6
}
The Test "3+3" will pass.
Next test is "4+10" maybe.
I'll run the tests and the last test will fail. What a surprise ...
I'll change my function to:
sum(a, b){
    if(a == 3 and b == 3)
        return 6
    else
        return 14
}
The test will pass!
And this goes on and on ... I will just add another case for every new test. The function will pass every one of these tests, but for every case not listed it will not, and the result is an ineffective and stupidly written function.
So is there a foolproof "trick" to avoid falling into this way of thinking?
I thought test driven development was pretty straightforward and foolproof. Where is the "break-even" point when it's time to say that this way of doing tests isn't practicable anymore and it's time to switch to the right solution
return a+b;
???
This is a very simple example, but I can imagine that there are more complex functions which are obviously not as easy to correct as this one.
Thanks
The TDD workflow has a 3-part cycle ("red, green, refactor") and it's important not to skip the third part. For example, after your second version:
sum(a, b){
    if(a == 3 and b == 3)
        return 6
    else
        return 14
}
You should look at this and ask: is there a simpler way to write this? Well, yes, there is:
sum(a, b){
    return a+b
}
Of course, this is an unrealistically trivial example, but in real-life coding this third step will guide you to refine your code into a well-written, tested final version.
The basic idea of writing tests is to know whether your system is behaving as expected or not. In a test we state expectations and assumptions. Basically, we do the following:
Set your expectations
Run the code
Check expectations against the actual output
We set our expectations for given conditions and test them against the actual output. As developers and product owners, we always know how the system should behave for any given condition, and we write tests accordingly.
For example, take the pseudo code given below:
int sum(int a, int b) {
    return a + b;
}
Here the method sum should return the sum of the arguments a and b. We know that:
The arguments should always be integers.
The output should always be of integer type.
The output should be the sum of the two numbers a and b.
So we know exactly when it would fail, and we should write tests to cover at least 70% of those cases.
I am a PHP guy, so my examples are in PHP. Regarding ways to supply the arguments a and b: we have something called a data provider. I am giving PHP here as a reference; in PHPUnit the preferred way of passing different arguments is through a data provider. Visit the data provider documentation and you will see an example for additions.
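The idea is not PHP-specific; in the ScalaTest stack used at the top of this page, the rough equivalent of a data provider is a table-driven check. A sketch, with sum ported to Scala for the example:
import org.scalatest.prop.TableDrivenPropertyChecks._

def sum(a: Int, b: Int): Int = a + b // Scala port of the sum method above

// Each row of the table is one named set of arguments: the data provider.
val additions = Table(
  ("a", "b", "expected"),
  (0,    0,   0),
  (1,    1,   2),
  (2,   -3,  -1)
)

forAll(additions) { (a: Int, b: Int, expected: Int) =>
  assert(sum(a, b) == expected)
}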
And this goes on and on ... I will just add another case for every new test. The function will pass every one of these tests, but for every case not listed it will not, and the result is an ineffective and stupidly written function.
Yes, we try to cover as many of the cases as possible. The more tests covered, the more confident we become in our code. Let's say we have written a method that returns the subsets of an array, each having 4 unique elements. Now how do you approach writing test cases for it? One solution would be to compute the number of possible combinations and check that the length of the returned array does not exceed that maximum, and that each subset contains only unique elements.
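A sketch of that check in Scala, with subsetsOfFour as a naive stand-in for the hypothetical method under test:
// Naive stand-in: all 4-element subsets of the input.
def subsetsOfFour(xs: List[Int]): List[Set[Int]] =
  xs.distinct.combinations(4).map(_.toSet).toList

// C(n, 4): the maximum number of 4-element subsets of n distinct elements.
def choose4(n: Int): Long =
  if (n < 4) 0L else n.toLong * (n - 1) * (n - 2) * (n - 3) / 24

def looksRight(xs: List[Int]): Boolean = {
  val subsets = subsetsOfFour(xs)
  subsets.length <= choose4(xs.distinct.length) &&  // never more than C(n, 4)
  subsets.forall(_.size == 4)                       // each one has 4 unique elements
}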
Where is the "break even" point when its time to say, that this way of doing tests isn't practicable anymore and switch to the right solution
We don't have break even in test cases. But we make the choices among different types of test cases namely (unit tests, functional bests, behavioural test). It is upto the developer what type of tests should be implemented and depending upon the types of tests it may vary.
The best way is to implement the TDD in projects. Until we do it in real projects, the confusion would remain. I myself had very hard time getting to understand the Mock and Expectations. It's not something that can be learned overnight, so if you don't understand something it's normal. Try it yourself, give yourself sometime, do experiments ask with friends just don't get exhausted. Always be curious.
Let us know if you still have confusions on it.

How to guarantee FsCheck reproducibility

We want to use FsCheck as part of our unit testing in continuous integration. As such, deterministic and reproducible behaviour is very important to us.
FsCheck, being a random testing framework, can generate test cases that potentially sometimes break. The key point is that we do not only use properties that necessarily have to hold for every input, like, say, List.rev >> List.rev === id. Rather, we do some numerics, and some generated cases can cause a test to break because the input is badly conditioned.
The question is: how can we guarantee, that once the test succeeds it will always succeed?
So far I see the following options:
Hard-code the seed, e.g. 0. This would be the easiest solution.
Make very specific custom generators which avoid bad examples. Certainly possible, but it could turn out pretty hard, especially if there are many objects to generate.
Live with the fact that in some cases the build might be red due to pathological cases, and simply re-run.
What is the idiomatic way of using FsCheck in such a setting?
some generated cases can cause a test to break because the input is badly conditioned.
That sounds like you need a Conditional Property:
let isOk x =
    match x with
    | 42 -> false
    | _ -> true

let MyProperty (x:int) = isOk x ==> // check x here...
(assuming that you don't like the number 42.)
(I started writing a comment but it got so long I guess it deserved its own answer).
It's very common to test properties with FsCheck that don't hold for every input. For example, FsCheck will trivially refute your List.rev example if you run it for list<float>.
Numerical stability is a tricky problem in itself - there isn't any non-determinism in FsCheck to blame here (FsCheck is totally deterministic; it's just an input generator...). The "non-determinism" you're referring to may be things like bugs in floating point operations in certain processors and so on. But even in that case, wouldn't you like to know about them? And if your algorithm is numerically unstable for a class of inputs, wouldn't you like to know about it? If you don't, it seems to me like you're setting yourself up for some real non-determinism... in production.
The idiomatic way to write properties that don't hold for all inputs of a given type in FsCheck is to write a generator and shrinker. You can use ==> as a stepping stone to that, but it doesn't scale up well to complex preconditions. You say this could turn out pretty hard - that's true in the sense that I guarantee you'll learn something about your code. A good thing!
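In ScalaCheck (the stack from the top of this page) the same move looks roughly like this; wellConditioned is a stand-in for whatever predicate your numerics actually need:
import org.scalacheck.Gen
import org.scalacheck.Prop.forAll

// Stand-in predicate: exclude inputs the algorithm is known to be unstable on.
def wellConditioned(x: Double): Boolean =
  x.abs > 1e-6 && x.abs < 1e6

// Generator that only produces acceptable inputs (no fixed seed needed).
val goodInputs: Gen[Double] =
  Gen.choose(-1e6, 1e6).suchThat(wellConditioned)

val numericProperty = forAll(goodInputs) { x =>
  // run the numeric routine on x and check its postcondition here...
  true
}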
Fixing the seed is a bad idea, except for reproducing a previously discovered bug. I mean, in practice what would you do: keep re-running the test until it passes, then fix the seed and declare "job done"?

Unit testing with random data

I've read that generating random data in unit tests is generally a bad idea (and I do understand why), but testing on random data and then constructing a fixed unit test case from the random tests which uncovered bugs seems nice. However, I don't understand how to organize it nicely. My question is not related to a specific programming language or unit test framework, so I'll use Python and some pseudo unit test framework. Here's how I see coding it:
def random_test_cases():
    datasets = [
        dataset1,
        dataset2,
        ...
        datasetn
    ]
    for dataset in datasets:
        assertTrue(...)
        assertEquals(...)
        assertRaises(...)
        # and so on
The problem is: when this test case fails, I can't figure out which dataset caused the failure. I see two ways of solving it:
Create a single test case per dataset: the problem is the sheer number of test cases and the code duplication.
Usually the test framework lets us pass a message to assert functions (in my example I could do something like assertTrue(..., message = str(dataset))). The problem is that I would have to pass such a message to every assert, which does not look elegant either.
Is there a simpler way of doing it?
I still think it's a bad idea.
Unit tests need to be straightforward. Given the same piece of code and the same unit test, you should be able to run it infinitely and never get a different response unless there's an external factor coming into play. A goal contrary to this will increase the maintenance cost of your automation, which defeats the purpose.
Outside of the maintenance aspect, to me it seems lazy. If you put thought into your functionality and understand the positive as well as the negative test cases, developing unit tests is straightforward.
I also disagree with the user who shows how to do multiple test cases inside of the same test case. When a test fails, you should be able to tell immediately which test failed and why. Tests should be as simple as you can make them and as concise/relevant to the code under test as possible.
You could define tests by extension instead of enumeration, or you could call multiple test cases from a single case.
Calling multiple test cases from a single test case:
MyTest()
{
    MyTest(1, "A")
    MyTest(1, "B")
    MyTest(2, "A")
    MyTest(2, "B")
    MyTest(3, "A")
    MyTest(3, "B")
}
And there are sometimes elegant ways to achieve this with some testing frameworks. Here is how to do it in NUnit:
[Test, Combinatorial]
public void MyTest(
    [Values(1,2,3)] int x,
    [Values("A","B")] string s)
{
    ...
}
I also think it's a bad idea.
Mind you, not throwing random data at your code, but having unit tests do that. It all boils down to why you unit test in the first place. The answer is "to drive the design of the code". Random data doesn't drive the design of the code, because it depends on a very rigid public interface. Mind you, you can find bugs with it, but that's not what unit tests are about. And let me note that I'm talking about unit tests, not tests in general.
That being said, I strongly suggest taking a look at QuickCheck. It's Haskell, so it's a bit dodgy on presentation and a bit PhD-ish on documentation, but you should be able to figure it out. I'm going to summarize how it works, though.
After you pick the code you want to test (let's say the sort() function), you establish invariants which should hold. In this example, you can have the following invariants if result = sort(input):
Every element in result should be smaller than or equal to the next one.
Every element in input should be present in result the same number of times.
result and input should have the same length (this repeats the previous one, but let's keep it for illustration).
You encode each invariant in a simple function that takes the result and the input and checks whether the invariant holds.
Then, you tell QuickCheck how to generate input. Since this is Haskell and the type system kicks ass, it can see that the function takes a list of integers and it knows how to generate those. It basically generates random lists of random integers and random length. Of course, it can be more fine-grained if you have a more complex data type (for example, only positive integers, only squares, etc.).
Finally, when you have those two, you just run QuickCheck. It generates all that stuff randomly and checks the invariants. If some fail, it will show you exactly which ones. It will also tell you the random seed, so you can rerun the exact failure if you need to. And as an extra bonus, whenever it gets a failed invariant, it will try to reduce the input to the smallest possible subset that fails the invariant (if you think of a tree structure, it will reduce it to the smallest subtree that fails the invariant).
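For concreteness, here is how those three invariants might be encoded in ScalaCheck, the Scala port of QuickCheck (a sketch, using the standard library's sorted as a stand-in for the sort under test):
import org.scalacheck.Prop.forAll

val sortInvariants = forAll { (input: List[Int]) =>
  val result = input.sorted // stand-in for the sort() under test
  // 1. every element is <= the next one
  result.zip(result.drop(1)).forall { case (a, b) => a <= b } &&
  // 2. same elements with the same multiplicities
  result.groupBy(identity) == input.groupBy(identity) &&
  // 3. same length
  result.length == input.length
}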
And there you have it. In my opinion, this is how you should go about testing stuff with random data. It's definitely not unit testing, and I even think you should run it differently (say, have CI run it every now and then, as opposed to on every change, since it will quickly get slow). And let me repeat, it's a different benefit from unit testing: QuickCheck finds bugs, while unit testing drives design.
Usually the unit test frameworks support 'informative failures' as long as you pick the right assertion method.
However, if everything else fails, you could easily trace the dataset to the console/output file. Low tech, but it should work.
[TestCaseSource("GetDatasets")]
public Test.. (Dataset d)
{
Console.WriteLine(PrettyPrintDataset(d));
// proceed with checks
Console.WriteLine("Worked!");
}
In quickcheck for R we tried to solve this problem as follows:
The tests are actually pseudo-random (the seed is fixed), so you can always reproduce your test results (barring external factors, of course).
The test function returns enough data to reproduce the error, including the assertion that failed and the data that made it fail. A convenience function, repro, called on the return value of test, will land you in the debugger at the beginning of the failing assertion, with arguments set to the witnesses of the failure. If the tests are executed in batch mode, equivalent information is stored in a file and the command to retrieve it is printed on stderr; then you can call repro as before. Whether or not you program in R, I would love to know if this starts to address your requirements. Some aspects of this solution may be hard to implement in languages that are less dynamic or don't have first-class functions.

Automated testing feels a lot like duplicating the tested logic, am I doing it right?

I'm implementing automated testing with CppUTest in C++.
I realize I end up almost copying and pasting the logic being tested into the tests themselves, so I can check the expected outcomes.
Am I doing it right? Should it be otherwise?
edit: I'll try to explain better:
The unit being tested takes input A, does some processing, and returns output B.
So apart from making some black-box checks, like checking that the output lies in an expected range, I would also like to see if the output B that I got is the right outcome for input A, i.e. whether the logic is working as expected.
So for example, if the unit just multiplies A by 2 to yield B, then in the test I have no other way of checking than calculating A times 2 again to check it against B and be sure it went alright.
That's the duplication I'm talking about.
// Actual function being tested:
int times2( int a )
{
    return a * 2;
}
.
// Test:
int test_a = 5;                      // some arbitrary input
int expected_b = test_a * 2;         // here I'm duplicating times2()'s logic
int actual_b = times2( test_a );
CHECK( actual_b == expected_b );
.
PS: I think I will reformulate this in another question with my actual source code.
If your goal is to build automated tests for your existing code, you're probably doing it wrong. Hopefully you know what the result of frobozz.Gonkulate() should be for various inputs and can write tests to check that Gonkulate() is returning the right thing. If you have to copy Gonkulate()'s convoluted logic to figure out the answer, you might want to ask yourself how well you understand the logic to begin with.
If you're trying to do test-driven development, you're definitely doing it wrong. TDD consists of many quick cycles of:
Writing a test
Watching it fail
Making it pass
Refactoring as necessary to improve the overall design
Step 1 - writing the test first - is an essential part of TDD. I infer from your question that you're writing the code first and the tests later.
So for example, if the unit just multiplies A by 2 to yield B, then in the test I have no other way of checking than calculating A times 2 again to check it against B and be sure it went alright.
Yes you do! You know how to calculate A times two, so you don't need to do it in code. If A is 4 then you know the answer is 8, so you can just use it as the expected value.
CHECK( actual_b == 8 )
If you are worried about magic numbers, don't be. Nobody will be confused about the meaning of the hard-coded numbers in the following line:
CHECK( times_2(4) == 8 )
If you don't know what the result should be, then your unit test is useless. If you need to calculate the expected result, then you are either using the same logic as the function or using an alternate algorithm to work out the result. In the first case, if the logic you duplicate is incorrect, your test will still pass! In the second case, you are introducing another place for a bug to occur. If a test fails, you will need to work out whether it failed because the function under test has a bug, or because your test method has a bug.
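For instance, the "alternate algorithm" route for the times2 example might compare multiplication against repeated addition (a sketch in Scala; times2 is a port of the C++ function above):
import org.scalacheck.Prop.forAll

def times2(a: Int): Int = a * 2 // the function under test, ported to Scala

// Alternate algorithm: doubling via addition. A bug in the use of *
// would not automatically be mirrored here.
val doublingMatchesAddition = forAll { (a: Int) =>
  times2(a) == a + a
}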
I think this one is tough to crack because it's essentially a mentality shift. It was somewhat hard for me.
The thing about tests is to have your expectations nailed down and to check that your code really does what you think it does. Think of ways to exercise it as a whole, rather than checking its logic so directly. If that's too hard, maybe your function/method just does too much.
Try to think of your tests as working examples of what your code can do, not as a mathematical proof.
The programming language shouldn't matter.
var ANY_NUMBER = 4;
Assert.That(times_2(ANY_NUMBER), Is.EqualTo(ANY_NUMBER*2))
In this case, I wouldn't mind duplicating the logic. The expected value is readable compared to a bare 8. Also, this logic doesn't look like a change magnet; it's relatively static.
For cases where the logic is more involved (chunky) and prone to change, duplicating the logic in the test is definitely not recommended. Duplication is evil. Any change to the logic would ripple changes into the test. In that case, I'd use hard-coded input/expected-output pairs with some readable pair names.
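A sketch of such readable, hard-coded pairs in Scala (times2 is again a port of the function under test):
def times2(a: Int): Int = a * 2 // Scala port of the times_2 under test

// Named input -> expected pairs; the name shows up in the failure message.
val cases = List(
  "zero stays zero"     -> (0, 0),
  "positive is doubled" -> (2, 4),
  "negative is doubled" -> (-3, -6)
)

for ((name, (input, expected)) <- cases)
  assert(times2(input) == expected, name)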

Explain unit testing please

I'm a little confused about unit testing. I see the value in things like automated testing. I think perhaps a good example would be the best way to help me understand. Let's say I have a binary search function I want unit tested.
Now in testing, I would want to know things like: Does the search find the first element, the last element, and other elements? Does the search correctly compare Unicode characters? Does the search handle symbols and other "painful" characters? Would unit testing cover this, or am I missing it? How would you write unit tests for my binary search?
function search(collection, value){
    var start = 0, end = collection.length - 1, mid;
    while (start <= end) {
        mid = start + Math.floor((end - start) / 2);
        if (value == collection[mid])
            return mid;
        if (collection[mid] < value)
            start = mid + 1;
        else
            end = mid - 1;
    }
    return -1; // not found
}
Pseudo code for unit tests would be lovely.
So, we might have:
function testFirst(){
    var collection = ['a','b','c','x','y','z'], first = 'a', findex = 0;
    assert(search(collection, first), findex);
}
function testLast(){
    var collection = ['a','b','c','x','y','z'], last = 'z', lindex = 5;
    assert(search(collection, last), lindex);
}
No, you're not missing it; this is what unit testing is designed to tell you. You have the right idea by testing good and bad input, edge cases, etc. You need one test for each condition. A test will set up any preconditions and then assert that your calculation (or whatever it may be) matches your expectations.
You're correct in your expectations of unit testing; it's very much about validating and verifying the expected behaviour.
One value I think many folks miss about unit testing is that its value increases with time. When I write a piece of code and a unit test, I've basically just tested that the code does what I think it should, that it's not failing in any of the ways I have chosen to check, and so on. These are good things, but they're of limited value, because they express the knowledge you have of the system at the time; they can't help you with things you don't know about (is there a sneaky bug in my algorithm that I don't know about and didn't think to test for?).
The real value of Unit Tests, in my opinion, is the value they gain over time. This value takes two forms; documentation value and validation value.
The documentation value is the value of the unit test saying "this is what the author of the code expected this bit of code to do". It's hard to overstate the value of this sort of thing; when you've been on a project that has a large chunk of underdocumented legacy code, let me tell you, this sort of documentation value is like a miracle.
The other value is that of validation; as code lives on in projects, things get refactored, and changed, and shifted. Unit tests provide validation that the component that you thought worked in one way continues to work in that way. This can be invaluable in helping find errors that creep into projects. For example, changing a database solution can sometimes be see-through, but sometimes, those changes can cause unexpected shifts in the way some things work; unit testing the components which depend on your ORM can catch critical subtle shifts in the underlying behaviour. This really gets useful when you've got a chunk of code that's been working perfectly for years at a time, and nobody thinks to consider its potential role in a failure; those types of bugs can take a VERY long time to find, because the last place you're going to look is in the component that's been rock-solid for a very long time. Unit Testing provides validation of that "Rock Solidity".
Yes, that's about it. Each of those questions you ask could be used as a test. Think of the unit test as three steps. Set up some preconditions, run some code that is "under test", and write an assert that documents your expectations.
In your case, setting up 'collection' with some particular values (or no values) is setting the preconditions.
Calling your search method with a particular parameter is running the code under test.
Checking that the value returned by your method matches what you expect is the assert step.
Give those three things a name that describes what you are trying to do (DoesTheSearchMethodFailIfCollectionIsEmpty) and voila, you have a unit test.