Does Groovy have a simple solution for comparing two Java beans with nested beans without using their equals(), i.e. comparing all primitive/java.lang.* fields in them and then doing the same recursively for the remaining fields? In the case of inequality I'd of course like to get a nice report about what was equal and what wasn't. Unitils' assertReflectionEquals does exactly that, but I was wondering if there is something similar directly in Groovy.
Thanks!
The Groovy runtime itself has no such thing for general classes, no.
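If you wanted to roll your own, the core of such a check is a short reflective walk. Below is a minimal Java sketch of the idea; the class name is invented, everything under java.lang is treated as a leaf, and it deliberately ignores cycles, arrays/collections, and inherited fields, so treat it as a starting point rather than a substitute for Unitils:

import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.ArrayList;
import java.util.List;

public class ReflectiveDiff {

    // Recursively compares two beans field by field, collecting a
    // report of mismatching paths instead of relying on equals().
    public static List<String> diff(Object a, Object b, String path) {
        List<String> mismatches = new ArrayList<>();
        if (a == null || b == null) {
            if (a != b) {
                mismatches.add(path + ": " + a + " != " + b);
            }
            return mismatches;
        }
        Class<?> type = a.getClass();
        // Leaf types (boxed primitives, String, ...): compare directly.
        if (type.getName().startsWith("java.lang.")) {
            if (!a.equals(b)) {
                mismatches.add(path + ": " + a + " != " + b);
            }
            return mismatches;
        }
        // Anything else is treated as a nested bean: recurse into its fields.
        for (Field f : type.getDeclaredFields()) {
            if (Modifier.isStatic(f.getModifiers())) {
                continue; // skip statics, they are not bean state
            }
            f.setAccessible(true);
            try {
                mismatches.addAll(diff(f.get(a), f.get(b), path + "." + f.getName()));
            } catch (IllegalAccessException e) {
                mismatches.add(path + "." + f.getName() + ": not accessible");
            }
        }
        return mismatches;
    }
}

A call like diff(expectedBean, actualBean, "bean") returns an empty list when everything matches, and otherwise one line per mismatching path, which gives you the "nice report" you asked about.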
The module I'm working on holds a list of items and has a method to locate and return an item from that list based on certain criteria. The specification states that "...if several matching values are found, any one may be returned".
I'm trying to write some tests with NUnit, and I can't find anything that lets me express this condition well (i.e. the returned object must be either A or B, but I don't mind which).
Of course I could quite easily write code that sets a boolean to whether the result is as expected and then just do a simple assert on that boolean, but this whole question is making me wonder whether this is a "red flag" for unit testing and whether there's a better solution.
How do experienced unit testers generally handle the case where there are a range of acceptable outputs and you don't want to tie the test down to one specific implementation?
Since your question is in rather general form, I can only give a rather general answer, but for example...
Assert.That(someObject, Is.TypeOf<A>().Or.TypeOf<B>());
Assert.That(someObject, Is.EqualTo(objectA).Or.EqualTo(objectB));
Assert.That(listOfValidObjects, Contains.Item(someObject));
It depends on the details of what you are testing.
I am coming from Java, JUnit, and parameterized tests, but it seems that NUnit supports those as well (see here).
One could use that to generate values for your different variables (and the "generator" could keep track of the expected overall result, too).
Using that approach you might find ways to avoid hard-coding all the potential input-value combinations (as said: by really generating them); but at least you should be able to write code where the different input values and their expected results are more nicely colocated in your source code.
I'm using ScalaTest for my unit testing. I have a test result (JSON) that might look like the example below. The actual result is huge and complex; this is just an example.
[{"name":"George", "when":143828333, ...}, {"name":"Fred", "when":14857777, ... }]
The 'when' field values are dynamic and will change from test-to-test (i.e. current timestamp), so I can't test against these. I could use some regex to mask these out, basically replacing them with some inert token.
Does ScalaTest have a more elegant way of handling dynamic bits of data like this?
You can make a custom Equality for the types you're comparing. Your custom Equality can ignore the dynamic fields for the equality comparison. Info on Equality is here:
http://doc.scalatest.org/2.2.0/index.html#org.scalactic.Equality
All you need to do is define the areEqual method and then make it implicit. So Equality[JsonType] or Equality[String], whatever the type is. This will then be picked up by the === operator and the equal matcher in your assertions.
What I've done many times in the past in this situation is exactly what you propose: use a regex to replace the dates with a constant so your comparisons will work.
Can someone explain why and when to write parameterized test cases?
I would like to add to Duncan's answer. Whenever you have a case where you would like to test a method against multiple input values, you basically have three options: 1) copy/paste, 2) Parameterized, 3) Theories.
Obviously copy/paste is the wrong answer. Both Theories and Parameterized allow for running the same test(s) multiple times with different inputs. In general I suggest using Theories over Parameterized if the correct output of the test can be determined/calculated from the input. This is because when using Theories, you can mix @Theory and @Test methods and only the Theories will be run multiple times. This is unlike Parameterized, where EVERY test in the class is run once per input: if you have two tests, one that uses the input and one that does not, both are executed n times.
Another advantage of Theories is the ability to use @TestedOn to pass values in-line to a specific theory:
@Theory
public void theoryTest(@TestedOn(ints = {-1, 0, 1, 2, 55}) int input) { ... }
TestedOn explained
I have a GitHub project that includes @TestOn, which allows for ints, booleans, Strings, etc.
The only time I suggest using Parameterized over Theories is when it is not easy to calculate the expected output from the input, so it must be explicitly provided. One way around the issue of every test running multiple times is to use the Enclosed runner: put all the parameterized tests in one inner class and the non-parameterized tests in another, as sketched below.
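For illustration, here is a minimal sketch of that Enclosed layout (the class and test names are invented):

import static org.junit.Assert.assertTrue;
import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;
import org.junit.experimental.runners.Enclosed;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Enclosed.class)
public class MixedTests {

    @RunWith(Parameterized.class)
    public static class WithInputs {

        @Parameters
        public static Collection<Object[]> data() {
            return Arrays.asList(new Object[][] { { 1 }, { 2 }, { 3 } });
        }

        private final int input;

        public WithInputs(int input) {
            this.input = input;
        }

        @Test
        public void runsOncePerInput() {
            assertTrue(input > 0);
        }
    }

    public static class WithoutInputs {

        @Test
        public void runsExactlyOnce() {
            assertTrue(true);
        }
    }
}

This way the non-parameterized tests execute once, while the tests that need the inputs still run once per row of data.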
Parameterized tests are used in situations where one needs to perform exactly the same test using a large number of different input values.
A good example might be testing a piece of code that performs a calculation, where you have a large collection of known-correct answers that you would like to test.
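As a hedged illustration of such a known-answer table, here is what it might look like with JUnit 4's Parameterized runner; Integer.parseInt is used purely as a stand-in for the calculation under test:

import static org.junit.Assert.assertEquals;
import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class ParseIntTest {

    // Each { input, expected } row runs the test once.
    @Parameters
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] {
            { "0", 0 }, { "42", 42 }, { "-7", -7 }
        });
    }

    private final String input;
    private final int expected;

    public ParseIntTest(String input, int expected) {
        this.input = input;
        this.expected = expected;
    }

    @Test
    public void parsesCorrectly() {
        assertEquals(expected, Integer.parseInt(input));
    }
}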
Why is it that every unit testing framework (that I know of) requires the expected value in equality tests to always be the first argument:
Assert.AreEqual(42, Util.GetAnswerToLifeTheUniverseAndEverything());
assertEquals(42, Util.GetAnswerToLifeTheUniverseAndEverything());
etc.
I'm quite used to it now, but every coder I try to teach unit testing makes the mistake of reversing the arguments, which I understand perfectly. Google didn't help, maybe one of the hard-core unit-testers here knows the answer?
It seems that most early frameworks used expected before actual (for some unknown reason; a dice roll, perhaps?). Yet as programming languages developed and code grew more fluent, that order got reversed. Most fluent interfaces try to mimic natural language, and unit testing frameworks are no different.
In an assertion, we want to ensure that some object matches some condition. This is the natural-language form; if you were to explain your test code, you'd probably say
"In this test, I make sure that computed value is equal to 5"
instead of
"In this test, I make sure that 5 is equal to computed value".
Difference may not be huge, but let's push it further. Consider this:
Assert.That(Roses, Are(Red));
Sounds about right. Now:
Assert.That(Red, Are(Roses));
Hm..? You probably wouldn't be too surprised if somebody told you that roses are red. The other way around, that red are roses, raises suspicious questions. Yoda, anybody?
Yoda's making an important point - reversed order forces you to think.
It gets even more unnatural when your assertions are more complex:
Assert.That(Forest, Has.MoreThan(15, Trees));
How would you reverse that one? More than 15 trees are being had by forest?
This claim (fluency as a driving factor for the change) is somewhat reflected in the evolution NUnit has gone through: originally (Assert.AreEqual) it used expected before actual (the old style); its fluent extensions (or, to use NUnit's terminology, constraint-based asserts - Assert.That) reversed that order.
I think it is just a convention now and as you said it is adopted by "every unit testing framework (I know of)". If you are using a framework it would be annoying to switch to another framework that uses the opposite convention. So (if you are writing a new unit testing framework for example) it would be preferable for you as well to follow the existing convention.
I believe this comes from the way some developers prefer to write their equality tests:
if (4 == myVar)
The constant goes first to avoid an unwanted assignment caused by mistakenly writing one "=" instead of "==". With the constant on the left, the compiler catches the error, and you avoid a lot of trouble trying to fix a weird runtime bug.
Nobody knows, and it is a source of never-ending confusion. However, not all frameworks follow this pattern (adding to the confusion):
FEST-Assert uses normal order:
assertThat(Util.GetAnswerToLifeTheUniverseAndEverything()).isEqualTo(42);
Hamcrest:
assertThat(Util.GetAnswerToLifeTheUniverseAndEverything(), equalTo(42))
ScalaTest doesn't really make a distinction:
Util.GetAnswerToLifeTheUniverseAndEverything() should equal (42)
I don't know but I've been part of several animated discussions about the order of arguments to equality tests in general.
There are a lot of people who think
if (42 == answer) {
doSomething();
}
is preferable to
if (answer == 42) {
doSomething();
}
in C-based languages. The reason for this is that if you accidentally put a single equals sign:
if (42 = answer) {
doSomething();
}
will give you a compiler error, but
if (answer = 42) {
doSomething();
}
might not, and would definitely introduce a bug that might be hard to track down. So who knows, maybe the person/people who set up the unit testing framework were used to thinking of equality tests in this way -- or they were copying other unit testing frameworks that were already set up this way.
I think it's because JUnit was the precursor of most unit testing frameworks (not that it was the first unit testing framework, but it kicked off an explosion in unit testing). Since JUnit did it that way, all the subsequent frameworks copied this form and it became a convention.
Why did JUnit do it that way? I don't know; ask Kent Beck!
My view is that it avoids exceptions, e.g. 42.equals(null) vs. null.equals(42), where 42 is the expected value and null is the actual value. With the expected value first, a null actual value produces a failed assertion rather than a NullPointerException.
Well, they had to pick one convention. If you want to reverse it, try the Hamcrest matchers; they are meant to help increase readability. Here is a basic sample:
import org.junit.Test;
import static org.junit.Assert.assertThat;
import static org.hamcrest.core.Is.is;

public class HamcrestTest {

    @Test
    public void matcherShouldWork() {
        // Math.pow returns a double, so the expected value must be 8.0, not 8.
        assertThat( Math.pow( 2, 3 ), is( 8.0 ) );
    }
}
Surely it makes logical sense to put the expected value first, as it's the first known value.
Think about it in the context of manual tests. A manual test will have the expected value written in, with the actual value recorded afterwards.
(I'm using RSpec in RoR, but I believe this question is relevant to any testing system.)
I often find myself doing this kind of thing in my test suite:
actual_value = object_being_tested.tested_method(args)
expected_value = compute_expected_value(args)
assert(actual_value == expected_value, ...)
The problem is that my implementation of compute_expected_value() often ends up mimicking object_being_tested.tested_method(), so it's really not a good test because they may have identical bugs.
This is a rather open-ended question, but what techniques do people use to avoid this trap? (Points awarded for pointers to good treatises on the topic...)
Usually (for manually written unit tests) you would not compute the expected value. Rather, you would just assert against what you expect to be the result from the tested method for the given args. That is, you would have something like this:
actual_value = object_being_tested.tested_method(args)
expected_value = what_you_expect_to_be_the_result
assert(actual_value == expected_value, ...)
In other testing scenarios where the arguments (or even test methods being executed) are generated automatically, you need to devise a simple oracle which will give you the expected result (or an invariant that should hold for the expected result). Tools like Pex, Randoop, ASTGen, and UDITA enable such testing.
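For a hand-written flavor of the same idea, here is a minimal Java sketch of an invariant-style test; MySorter is an invented stand-in for the code under test:

import static org.junit.Assert.assertArrayEquals;
import static org.junit.Assert.assertTrue;
import java.util.Arrays;
import org.junit.Test;

public class SortInvariantTest {

    // Stand-in for the code under test (an assumption of this sketch).
    static class MySorter {
        static int[] sort(int[] xs) {
            int[] copy = xs.clone();
            Arrays.sort(copy);
            return copy;
        }
    }

    @Test
    public void sortedOutputSatisfiesInvariants() {
        int[] input = { 3, 1, 2, 2, -5 };
        int[] result = MySorter.sort(input);

        // Invariant 1: the result is in non-decreasing order.
        for (int i = 1; i < result.length; i++) {
            assertTrue(result[i - 1] <= result[i]);
        }

        // Invariant 2: the result is a permutation of the input,
        // checked against a trusted library sort rather than by
        // re-implementing the logic under test.
        int[] reference = input.clone();
        Arrays.sort(reference);
        assertArrayEquals(reference, result);
    }
}

Neither assertion re-implements the method being tested, so the test cannot share a bug with the implementation in the way the question describes.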
Well, here are my two cents:
a) If the calculation of the expected value is simple and does not encompass any business rules/conditions beyond the test case it is generating the expected result for, then it should be good enough... remember, your actual code will be as generic as possible.
There are cases where you will run into issues in the expected-value computation, but you can easily pinpoint the cause of the failure and fix it.
b) There are cases when the expected value cannot easily be calculated; in that case, keep the known results in flat files, or use some kind of constant expected value.
Also, there are tests where you just want to verify whether a particular method was called, and then you are done testing that unit. Remember to use all these different paradigms while testing, and always remember: KEEP IT SIMPLE.
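To illustrate that last kind of test, here is a minimal interaction-style sketch using Mockito; EmailService and OrderProcessor are invented stand-ins for your own types:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import org.junit.Test;

public class OrderProcessorTest {

    interface EmailService {
        void sendConfirmation(String orderId);
    }

    static class OrderProcessor {
        private final EmailService email;

        OrderProcessor(EmailService email) {
            this.email = email;
        }

        void process(String orderId) {
            email.sendConfirmation(orderId);
        }
    }

    @Test
    public void processSendsConfirmation() {
        EmailService email = mock(EmailService.class);
        new OrderProcessor(email).process("order-1");
        // The assertion is simply "the call happened": no expected value at all.
        verify(email).sendConfirmation("order-1");
    }
}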
You would not do that.
You do not compute the expected value; you know it already. It should be a constant value defined in your test (or be constructed from other functions that have already been tested).