I have a method under test that internally calls another method returning a bool value - for this question, let's call that method:
bool IsFilled(int value)
In the method I am testing, this is called multiple times:
if (IsFilled(0))
{
    if (IsFilled(1)) {
        ...
    } else {
        ...
    }
}

for (int i = 1; i < range; i++)
{
    if (IsFilled(i)) {
        ...
        if (IsFilled(0)) {
        }
    }
}
Now how would one test it correctly with gtest? I am mainly going for coverage and testing branches more than values. As such I was expecting to do something like this:
EXPECT_CALL(adapter, IsFilled(0)).Times(zeroCalls).WillOnce(IndexZeroResults);
EXPECT_CALL(adapter, IsFilled(1)).Times(oneCalls).WillOnce(IndexOneResults);
EXPECT_CALL(adapter, IsFilled(_)).Times(otherCalls).WillOnce(IndexOtherResults);
I need the 0 and 1 calls separated so that I can test all the branches; however, the "_" expectation would also match the calls made with 0 and 1. Is it possible to exclude those values from it?
TL;DR - swap the order of EXPECT_CALLs:
EXPECT_CALL(adapter, IsFilled(_)).Times(otherCalls).WillOnce(IndexOtherResults);
EXPECT_CALL(adapter, IsFilled(1)).Times(oneCalls).WillOnce(IndexOneResults);
EXPECT_CALL(adapter, IsFilled(0)).Times(zeroCalls).WillOnce(IndexZeroResults);
This works because of the GoogleMock rules for matching EXPECT_CALLs. Expectations are searched in the reverse order of their declaration (the last declared expectation is tried first). It is designed that way so you can define general, catch-all expectations in a test fixture and override them with more specific expectations in the test body.
I have a method that, among other things, generates an identifier, sends it to some external dependency and returns it. I want a unit test that checks that the returned value is the same value that was sent to the dependency.
Let's say that the test and the code look like this:
def "test"() {
given:
def testMock = Mock(TestMock)
and:
def x
when:
def testClass = new TestClass()
x = testClass.callMethod(testMock)
then:
1 * testMock.method(x)
}
static interface TestMock {
void method(double x)
}
static class TestClass {
double callMethod(TestMock mock) {
double x = Math.random()
mock.method(x)
return x
}
}
The code works correctly, but the test fails with this message:
One or more arguments(s) didn't match:
0: argument == expected
| | |
| | null
| false
0.5757686318956925
So it looks like the mock check in the then: block is done before the value is assigned in the when: block.
Is there a way to make Spock assign this value before it checks the mock call? Or can I do this check in a different way?
The only idea I have is to inject an id-generator to the method (or actually to the class) and stub it in the test, but it would complicate the code and I would like to avoid it.
I fixed the code example according to kriegaex's comment.
Fixing the sample code
Before we start, there are two problems with your sample code:
1 * testMock(x) should be 1 * testMock.method(x)
callMethod should return double, not int, because otherwise the result would always be 0 (a double between 0 and 1 would always yield 0 when converted to an integer).
Please, next time make sure that your sample code not only compiles but also does the expected thing. Sample code is only helpful if it does not have extra bugs, which a person trying to help you needs to fix first before being able to focus on the actual problem later on.
The problem at hand
As you already noticed, interactions, even though lexically defined in a then: block, are transformed by Spock's compiler AST transformations in such a way that they are registered on the mock when the mock is initialised. That is necessary because the mock must be ready before any methods are called on it in the when: block. Trying to directly use a result that only becomes known while the when: block is already executing causes the problem you described. Which came first, the chicken or the egg? In this case, you cannot specify a method argument constraint using the future result of the very call that invokes the mock method.
The workaround
A possible workaround is to stub the method and capture the argument in the closure that calculates the stub result, e.g. >> { args -> arg = args[0]; return "stubbed" }. Of course, the return keyword is redundant in the last statement of a closure or method in Groovy. In your case, the method is even void, so you do not need to return anything at all.
An example
I adapted your sample code, renaming classes, methods and variables to more clearly describe which is what and what is happening:
package de.scrum_master.stackoverflow.q72029050

import spock.lang.Specification

class InteractionOnCallResultTest extends Specification {
    def "dependency method is called with the expected argument"() {
        given:
        def dependency = Mock(Dependency)
        def randomNumber
        def dependencyMethodArg

        when:
        randomNumber = new UnderTest().getRandomNumber(dependency)

        then:
        1 * dependency.dependencyMethod(_) >> { args -> dependencyMethodArg = args[0] }
        dependencyMethodArg == randomNumber
    }
}

interface Dependency {
    void dependencyMethod(double x)
}

class UnderTest {
    double getRandomNumber(Dependency dependency) {
        double randomNumber = Math.random()
        dependency.dependencyMethod(randomNumber)
        return randomNumber
    }
}
Try it in the Groovy Web Console.
I have a class whose only task is to take a List<Object> and return a sorted List<Object>. Suppose, for example, that the sort method in the class actually places the objects in the list randomly.
What I am trying to do: write a test for that sorting method (or class) which must fail if the sorting is in fact just random. I assume that means the test needs to check the order of the List<Object>.
Code to be tested
class RootLoggerFirstSorter {

    List<LoggerConfig> sort(List<LoggerConfig> unSortedList) {
        List<LoggerConfig> levelSortedList = new ArrayList<>(unSortedList);
        Collections.sort(levelSortedList, new Comparator<LoggerConfig>() {
            @Override
            public int compare(LoggerConfig o1, LoggerConfig o2) {
                if (o1.getLevel().intLevel() == o2.getLevel().intLevel()) {
                    return 0;
                } else if (o1.getLevel().intLevel() < o2.getLevel().intLevel()) {
                    return 1;
                } else {
                    return -1;
                }
            }
        });

        LinkedList<LoggerConfig> sortedList = new LinkedList<LoggerConfig>();
        for (Iterator<LoggerConfig> i = levelSortedList.iterator(); i.hasNext();) {
            LoggerConfig cfg = i.next();
            addNextLoggerConfig(cfg, sortedList);
        }
        return sortedList;
    }

    private void addNextLoggerConfig(LoggerConfig cfg, LinkedList<LoggerConfig> sortedList) {
        if (cfg.getName() == null || cfg.getName().isEmpty()) {
            sortedList.addFirst(cfg);
        } else {
            sortedList.addLast(cfg);
        }
    }
}
Tried
.....
expect(item1.getLevel()).andStubReturn(Level.DEBUG);
expect(item2.getLevel()).andStubReturn(Level.ERROR);
expect(item3.getLevel()).andStubReturn(Level.INFO);
.....
// Ignore the prerequisites for the test setup
@Test
public void testSort() {
    List<LoggerConfig> unsortedList = makeUnsortedList();
    EasyMock.replay(item1, item2, item3);
    List<LoggerConfig> sortedList = tested.sort(unsortedList);
    assertThat("First item on the list is ERROR level: ", sortedList.get(0).getLevel(), is(Level.ERROR));
    assertTrue(sortedList.get(1).getLevel().equals(Level.INFO) || sortedList.get(1).getLevel().equals(Level.DEBUG));
    assertTrue(sortedList.get(2).getLevel().equals(Level.DEBUG) || sortedList.get(2).getLevel().equals(Level.INFO));
}
But this test will always pass, since the checks on indexes 1 and 2 accept either order, and index 0 will always contain the LoggerConfig with an empty name (the setup is done that way). So I thought: should I just unit test the compare method instead? If yes, how?
Problem
The issue is that I need to test the sort method against a particular object property, namely the level of the LoggerConfig. So the test must check the order of the list.
Many different aspects here:
Of course you do not need to test the built-in Collections.sort() method.
In that sense, you instead want to test two aspects: A) that you are actually calling that sort method, and B) that your comparator works as expected.
A) is achieved by the code you put in your own answer. Or, to be precise: you only need one test case where you provide a specific test input to your method and check the sorted result against the expected order.
B) is achieved by writing test code that simply checks that compare() returns the expected result for different inputs.
In the end, this is about properly dissecting your logic into classes. Of course you can declare that comparator as an anonymous inner class and just verify that the sort method returns the expected result.
But when you turn the comparator into, say, a named class of its own, you can write unit tests for just the comparator's functionality.
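To illustrate B), here is a rough sketch of what such a comparator test could look like, assuming the comparator is extracted into a class of its own (the name LevelDescendingComparator is made up, and the LoggerConfig mocks are stubbed with EasyMock just as in your existing setup):
import static org.easymock.EasyMock.createNiceMock;
import static org.easymock.EasyMock.expect;
import static org.easymock.EasyMock.replay;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.util.Comparator;

import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.core.config.LoggerConfig;
import org.junit.Test;

public class LevelDescendingComparatorTest {

    // Hypothetical extraction of the anonymous comparator from RootLoggerFirstSorter.
    static class LevelDescendingComparator implements Comparator<LoggerConfig> {
        @Override
        public int compare(LoggerConfig o1, LoggerConfig o2) {
            // same rule as the original anonymous class: higher intLevel() sorts first
            return Integer.compare(o2.getLevel().intLevel(), o1.getLevel().intLevel());
        }
    }

    private final Comparator<LoggerConfig> comparator = new LevelDescendingComparator();

    private LoggerConfig configWithLevel(Level level) {
        LoggerConfig cfg = createNiceMock(LoggerConfig.class);
        expect(cfg.getLevel()).andStubReturn(level);
        replay(cfg);
        return cfg;
    }

    @Test
    public void higherIntLevelComesFirst() {
        LoggerConfig error = configWithLevel(Level.ERROR);
        LoggerConfig debug = configWithLevel(Level.DEBUG); // DEBUG has the higher intLevel in log4j2

        assertTrue(comparator.compare(debug, error) < 0); // DEBUG sorts before ERROR
        assertTrue(comparator.compare(error, debug) > 0);
        assertEquals(0, comparator.compare(error, error));
    }
}
The sorter would then construct (or be given) this comparator instead of the anonymous class, without changing its observable behaviour.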
Finally: your test case does not meet the goal that you stated: it must fail if the sorting is in fact just random. You see, if the result of sort() is random, then it could still randomly give you a correct result. Meaning: you can't expect a single test run to verify "possibly random behavior". You would have to run many tests with a lot of different data, and verify that all of them pass, to achieve a certain confidence that sort() isn't purely random.
But as said: you are not sorting. You are calling the built-in sort method, which does not need to be tested.
I assumed the List<LoggerConfig> contained something like item1["", ERROR], item2["com.fwk.foo", DEBUG], item3["com.fwk.core.baa", INFO]. In that case I needed to check that item2 ends up at index 1 and item3 at index 2 of the list, i.e. that the implementation does the sort correctly. So the test I needed was as follows:
@Test
public void testSort() {
    List<LoggerConfig> unsortedList = makeUnsortedList();
    EasyMock.replay(item1, item2, item3);
    List<LoggerConfig> sortedList = tested.sort(unsortedList);
    assertFalse(unsortedList.equals(sortedList));
    assertTrue(sortedList.get(0).getName().isEmpty());
    LoggerConfig cfg1 = sortedList.get(1);
    LoggerConfig cfg2 = sortedList.get(2);
    assertThat(cfg1.getLevel(), is(Level.DEBUG));
    assertThat(cfg2.getLevel(), is(Level.INFO));
}
So I am accessing the items from the list and checking that they are in the expected positions.
Should I just unit test the compare method instead?
No, you should not. Such a test may fail if you refactor the sort method later. You are actually trying to assert that the sorting is done properly; the compare method is just an implementation detail. You might not even use that compare method to sort the list in the future.
Of course you also don't need to test the built-in sort method, because you are actually testing your custom sort method. Everything inside this sort method is an implementation detail, including the Collections.sort call. You should pretend that you don't know about it when you are writing the test.
Other than that, your sort method also contains some logic that is not related to the built-in sort method.
CGrunddaten m_grdDaten;

[SetUp]
public void Init()
{
    m_grdDaten = new CGrunddaten();
    m_grdDaten.m_cwdGeoH.m_dW = 325.0;
    m_grdDaten.m_cwd_tl.m_dW = 15;
}

[Test]
public void TestMethod()
{
    m_grdDaten.RechGrdDaten();
    Assert.That(m_grdDaten.m_cwd_pl.m_dW, Is.EqualTo(93344).Within(.1), "Außenluftdruck");
    Assert.That(m_grdDaten.m_cwd_pl_neb.m_dW, Is.EqualTo(93147.3).Within(.1), "Außenluftdruck Nebenluftberechnung");
    Assert.That(m_grdDaten.m_cwd_pl_pmax.m_dW, Is.EqualTo(92928.2).Within(.1), "Außenluftdruck max. zul. Unterdruck");
    Assert.That(m_grdDaten.m_cwdRho_l.m_dW, Is.EqualTo(1.124).Within(.001), "Dichte Außenluft");
    Assert.That(m_grdDaten.m_cwdRho_l_neb.m_dW, Is.EqualTo(1.184).Within(.001), "Dichte Außenluft Nebenluftberechnung");
    Assert.That(m_grdDaten.m_cwdRho_l_pmax.m_dW, Is.EqualTo(1.249).Within(.001), "Dichte Außenluft max. zul. Unterdruck");
}
Is there a way to get this into a TestCase or TestCaseSource, so that I have only one Assert line?
I'm talking about this:
m_grdDaten.m_cwd_pl.m_dW, 93344
m_grdDaten.m_cwd_pl_neb.m_dW, 93147.3
m_grdDaten.m_cwd_pl_pmax.m_dW, 92928.2
....
I know that TestCase and TestCaseSource are static.... but is there another way?
The best way to do this test would be using the not-yet-implemented multiple asserts feature, so that all the asserts would run even if some failed.
Since that's not available yet, I can understand your wanting to make this into multiple tests, where each gets reported separately. Using test cases makes this possible, of course, even though this is really logically just one test.
The fact that a test case source method must be static doesn't prevent it from creating an instance of your CGrunddaten class. The tests themselves are all just comparing two doubles for equality and don't need to know anything about that class.
You could write something like this:
private static IEnumerable<TestCaseData> GrundDatenDaten
{
    get
    {
        var gd = new CGrunddaten();
        gd.m_cwdGeoH.m_dW = 325.0;
        gd.m_cwd_tl.m_dW = 15;
        gd.RechGrdDaten();
        yield return new TestCaseData(gd.m_cwd_pl.m_dW, 93344, .1, "Außenluftdruck");
        // and so on
    }
}

[TestCaseSource("GrundDatenDaten")]
public void testMethod(object actual, object expected, object tolerance, string label)
{
    Assert.That(actual, Is.EqualTo(expected).Within(tolerance), label);
}
However, I don't like that very much as it hides the true function of the test in the data source. I think your original formulation is the best way to do it for now and leaves you with the ability to include the code in an Assert.Multiple block once that feature is implemented.
I'm writing unit tests using Pex. My problem is that not all code branches are being covered: Pex keeps generating parameter values that fail the same condition, so all the code after that condition is never exercised.
My method goes something like this:
public void SetUp(DbSyncScopeDescription SyncScopeDesc, BasicInfo info, string dbContext = "MyDBContext")
{
    // <pex>
    // contracts validation
    // </pex>
    string localDbConnStr = string.Empty;

    // this condition never gets a parameter that results in true
    if (IsContextExist(dbContext))
    {
        localDbConnStr = ConfigurationManager.ConnectionStrings[dbContext + "Context"].ConnectionString;
    }
    else
    {
        throw new MissingFieldException("dbcontext does not exist");
    }

    // this part is never reached
    ProvisionLocalScope(SyncScopeDesc, info.FarmId, localDbConnStr);
    info.Tables = GetSyncTablesAsSyncTableInfo(SyncScopeDesc);
    AdminOrm.Create(info.ToORM(), String.Format("name={0}AdminEntities", dbContext));
}
I wonder if it's possible to tell Pex to make that condition pass so that all of the code can be reached.
If this is not possible, is it possible to make Pex use the default values of the function parameters for one of the tests? (I think this would be a good feature if it's not already present.)
Thank you
I have a DTO which I'm populating from the request object, and the request object has many fields. I want to write a test to check whether the populateDTO() method puts values in the right places or not. If I follow the rule of one assert per test, I would have to write a large number of tests to cover each field. The other approach would be to write multiple asserts in a single test. Is it really recommended to follow the one-assert-per-test rule, or can we relax it in cases like this? How do I approach this problem?
Keep them separate. A unit test is supposed to tell you which unit failed. Keeping them separate also allows you to isolate the problem quickly w/o requiring you to go through a lengthy debug cycle.
Is it really recommended to have only one assert per unit test? Yes it is, there are people who make that recommendation. Are they right? I don't think so. I find it hard to believe such people have actually worked on real code for a long time.
So, imagine you have a mutator method you want to unit test. The mutator has some kind of effect, or effects, which you want to check. Typically the expected effects of a mutator are few in number, because many effects suggest an overly complicated design for the mutator. With one assert per effect and one test case per assert, you will not need many test cases per mutator, so the recommendation does not seem so bad.
But the flaw in this reasoning is that those tests are looking at only the expected effects of the mutator. But if the mutator has a bug in it, it might have unexpected faulty side effects. The tests are making the foolish assumption that the code does not have a whole class of bugs, and that no future refactoring will introduce such bugs. When the method was originally written it might be obvious to the author that particular side effects were impossible, but refactoring and addition of new functionality might make such side effects possible.
The only safe way to test long lived code is to check that the mutators do not have unexpected side effects. But how can you test for those? Most classes have some invariants: things that no mutator can ever change. The size method of a container will never return a negative value, for example. Each invariant is, in effect, a post condition for every mutator (and also the constructor). Each mutator also typically has a set of invariants that describe what kind of changes it does not make. A sort method does not change the length of the container, for example. The class and mutator invariants are, in effect, post conditions for every mutator call. Adding assertions for all of them is the only way of checking for unexpected side effects.
So, just add more test cases? In practice the number of invariants multiplied by the number of mutators to test is large, so one assertion per test leads to many test cases. And the information about your invariants is scattered over many test cases. A design change that tweaks one invariant will require alteration of many test cases. It becomes impractical. It's better to have parameterised test cases for a mutator, which check several invariants for the mutator, using several assertions.
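As a rough illustration of that style (the stack, its push mutator and the invariants below are invented for this sketch, not taken from the question), a parameterised JUnit test can check the expected effect of the mutator plus the class invariants in one go, with the invariants collected in a single helper:
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.ArrayDeque;
import java.util.Deque;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;

class PushInvariantsTest {

    private final Deque<Integer> stack = new ArrayDeque<>();

    // All class invariants in one place: every mutator test calls this helper,
    // so a change to an invariant is edited once instead of in dozens of tests.
    private void assertInvariants(Deque<Integer> s) {
        assertTrue(s.size() >= 0, "size is never negative");
        assertTrue(s.isEmpty() == (s.size() == 0), "isEmpty agrees with size");
    }

    @ParameterizedTest
    @ValueSource(ints = {0, 1, -5, Integer.MAX_VALUE})
    void pushHasExpectedEffectAndNoForbiddenSideEffects(int value) {
        int sizeBefore = stack.size();

        stack.push(value);

        // the expected effects of the mutator ...
        assertEquals(value, (int) stack.peek(), "pushed value is on top");
        assertEquals(sizeBefore + 1, stack.size(), "size grew by exactly one");
        // ... plus the invariants that no mutator may ever violate
        assertInvariants(stack);
    }
}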
And the authors of JUnit5 seem to agree. They provide an assertAll for checking several assertions in one test-case.
This construction helps you have one big assert (with small asserts inside):
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.is;
import static org.junit.jupiter.api.Assertions.assertAll;

assertAll(
    () -> assertThat(actual1, is(expected)),
    () -> assertThat(actual2, is(expected))
);
You can have a parameterized test where the first parameter is the property name and the second is the expected value.
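A possible sketch of that idea with JUnit 5 (the Request and PersonDto types, the populateDto stand-in and the field names are all made up for illustration; the record and switch syntax needs Java 16+):
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.stream.Stream;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.Arguments;
import org.junit.jupiter.params.provider.MethodSource;

class PopulateDtoTest {

    // Minimal stand-ins for the request and the DTO, just to keep the sketch self-contained.
    record Request(String firstName, String lastName, int age) {}
    record PersonDto(String firstName, String lastName, int age) {}

    // Stand-in for the real populateDTO() under test.
    static PersonDto populateDto(Request r) {
        return new PersonDto(r.firstName(), r.lastName(), r.age());
    }

    static final PersonDto dto = populateDto(new Request("Ada", "Lovelace", 36));

    // One test case per property: first argument is the property name, second the expected value.
    static Stream<Arguments> properties() {
        return Stream.of(
            Arguments.of("firstName", "Ada"),
            Arguments.of("lastName", "Lovelace"),
            Arguments.of("age", 36)
        );
    }

    @ParameterizedTest(name = "{0}")
    @MethodSource("properties")
    void propertyIsMappedCorrectly(String property, Object expected) {
        Object actual = switch (property) {
            case "firstName" -> dto.firstName();
            case "lastName" -> dto.lastName();
            case "age" -> dto.age();
            default -> throw new IllegalArgumentException(property);
        };
        assertEquals(expected, actual, property);
    }
}
Each property then shows up as its own test in the report, while the DTO itself is still populated only once.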
Does that rule extend to asserts inside a loop? Consider this:
Collection expectedValues = // populate expected values
populateDTO();
for (DTO dto : myDtoContainer)
    assert_equal(dto, expectedValues.get(someIndexRelatedToDto))
Now I'm not so big on the exact syntax, but this is just the notion I'm looking at.
EDIT:
After the comments...
The answer is ... Nope!
The reason the principle exists is so you can identify exactly which parts of the object fail. If you have all the asserts in one method, you only hit the first failing assertion; fix it, and you hit the next one, and then the next, and you never see them all at once.
So you can have it one of two ways:
One method, less boilerplate code.
Many methods, better reporting on the test run
It's up to you, both have ups and downs.
[caveat: I'm very "unfluent" in Java/JUnit, so beware of errors in the details below]
There's a couple of ways to do this:
1) Write multiple assertions in the same test. This should be ok if you are only testing the DTO generation once. You could start here, and move to another solution when this starts to hurt.
2) Write a helper assertion, e.g. assertDtoFieldsEqual, passing in the expected and actual DTO. Inside the helper assertion you assert each field separately. This at least gives you the illusion of only one assert per test and will make things clearer if you test DTO generation for multiple scenarios. (A sketch of such a helper appears after the examples below.)
3) Implement equals for the object that check each property and implement toString so that you at least can inspect the assertion result manually to find out what part is incorrect.
4) For each scenario where the DTO is generated, create a separate test fixture that generates the DTO and initializes the expected properties in the setUp method. Then create a separate test for each of the properties. This also results in a lot of tests, but they will at least be one-liners only. Example in pseudo-code:
public class WithDtoGeneratedFromXxx : TestFixture
{
    DTO dto = null;

    public void setUp()
    {
        dto = GenerateDtoFromXxx();
        expectedProp1 = "";
        ...
    }

    void testProp1IsGeneratedCorrectly()
    {
        assertEqual(expectedProp1, dto.prop1);
    }
    ...
}
If you need to test the DTO generation under different scenarios and choose this last method it could soon become tedious to write all those tests. If this is the case you could implement an abstract base fixture that leaves out the details on how to create the DTO and setup the expected properties to derived classes. Pseudo-code:
abstract class AbstractDtoTest : TestFixture
{
    DTO dto;
    SomeType expectedProp1;

    abstract DTO createDto();
    abstract SomeType getExpectedProp1();

    void setUp()
    {
        dto = createDto();
        ...
    }

    void testProp1IsGeneratedCorrectly()
    {
        assertEqual(getExpectedProp1(), dto.prop1);
    }
    ...
}

class WithDtoGeneratedFromXxx : AbstractDtoTest
{
    DTO createDto() { return GenerateDtoFromXxx(); }
    SomeType getExpectedProp1() { return new SomeType(); }
    ...
}
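For option 2) above, here is a rough sketch of the helper-assertion idea in JUnit terms (the PersonDto record and its fields are invented; real field names would come from your DTO):
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class PopulateDtoHelperAssertionTest {

    // Minimal stand-in DTO so the sketch is self-contained (Java 16+).
    record PersonDto(String firstName, String lastName, int age) {}

    // The helper assertion: one call per test, one assert per field inside,
    // each assert naming the field so a failure says exactly what is wrong.
    static void assertDtoFieldsEqual(PersonDto expected, PersonDto actual) {
        assertEquals(expected.firstName(), actual.firstName(), "firstName");
        assertEquals(expected.lastName(), actual.lastName(), "lastName");
        assertEquals(expected.age(), actual.age(), "age");
    }

    @Test
    void populatesAllFields() {
        PersonDto expected = new PersonDto("Ada", "Lovelace", 36);
        PersonDto actual = new PersonDto("Ada", "Lovelace", 36); // in a real test this would come from populateDTO()
        assertDtoFieldsEqual(expected, actual);
    }
}
The drawback, as noted above, is that the first failing field stops the check; wrapping the asserts in assertAll would report all mismatches at once.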
Or you can do some workaround.
import org.junit.After;
import org.junit.AfterClass;
import org.junit.Assert;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;

public class NewEmptyJUnitTest {

    public NewEmptyJUnitTest() {
    }

    @BeforeClass
    public static void setUpClass() throws Exception {
    }

    @AfterClass
    public static void tearDownClass() throws Exception {
    }

    @Before
    public void setUp() {
    }

    @After
    public void tearDown() {
    }

    @Test
    public void checkMultipleValues() {
        String errMessages = "";
        try {
            this.checkProperty1("someActualResult", "someExpectedResult");
        } catch (Exception e) {
            errMessages += e.getMessage();
        }
        try {
            this.checkProperty2("someActualResult", "someExpectedResult");
        } catch (Exception e) {
            errMessages += e.getMessage();
        }
        try {
            this.checkProperty3("someActualResult", "someExpectedResult");
        } catch (Exception e) {
            errMessages += e.getMessage();
        }
        Assert.assertTrue(errMessages, errMessages.isEmpty());
    }

    private boolean checkProperty1(String propertyValue, String expectedValue) throws Exception {
        if (propertyValue.equals(expectedValue)) {
            return true;
        } else {
            throw new Exception("Property1 has value: " + propertyValue + ", expected: " + expectedValue);
        }
    }

    private boolean checkProperty2(String propertyValue, String expectedValue) throws Exception {
        if (propertyValue.equals(expectedValue)) {
            return true;
        } else {
            throw new Exception("Property2 has value: " + propertyValue + ", expected: " + expectedValue);
        }
    }

    private boolean checkProperty3(String propertyValue, String expectedValue) throws Exception {
        if (propertyValue.equals(expectedValue)) {
            return true;
        } else {
            throw new Exception("Property3 has value: " + propertyValue + ", expected: " + expectedValue);
        }
    }
}
Maybe not the best approach, and if overused it can become confusing... but it is a possibility.