TDD behavior testing with no getters/setters - unit-testing

I'm applying TDD to my first event-centric project (CQRS, event sourcing, etc.) and I'm writing my tests according to Greg Young's simple Given, When, Expect testing framework. My test fixture takes a command, a command handler, and an aggregate root, and then tests the events outputted.
CommandTestFixture<TCommand, TCommandHandler, TAggregateRoot>
For example, here is a typical test:
[TestFixture]
public class When_moving_a_group :
    CommandTestFixture<MoveGroup, MoveGroupHandler, Foo>
I am very happy with these tests on the whole, but with the above test I've hit a problem. The aggregate root contains a collection of groups. The MoveGroup command reorders the collection, taking a from and to index. I set up the test and asserted that the correct GroupMoved event was generated with the correct data.
As an additional test I need to assert that the reordering of the groups collection actually took place correctly. How do I do this when the aggregate root has no public getters/setters? I could add a method to retrieve the group at a particular index, but isn't that breaking encapsulation simply to be testable?
What's the correct way to go about this?
EDIT
The reordering of the groups takes place in the GroupMoved handler on the Aggregate root.
private void Apply(GroupMoved e)
{
    var moved = groups[e.From];
    groups.RemoveAt(e.From);
    groups.Insert(e.To, moved);
}

The friction here comes because you want to assert something about the internal implementation, but what you have at hand is at the top level.
Your tests and assertions need to be at the same logical level. There are two ways to re-arrange this:
What effect does re-ordering groups have on subsequent commands or queries which you do have at the top level?
This should give you an avenue for asserting that the correct outcome occurs without needing to assert anything about the ordering of the groups directly. This keeps the test at the top level and would allow all sorts of internal refactoring (e.g. perhaps lazy sorting of the groups).
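For example, you might assert the move indirectly through a later command that acts on the collection by index. A minimal sketch in the fixture style from the question; the RemoveGroupAt command, the GroupAdded and GroupRemoved events, and the exact fixture API (Given/When/Expect members) are all illustrative assumptions, not part of the original model:

[TestFixture]
public class When_removing_the_group_at_the_target_index :
    CommandTestFixture<RemoveGroupAt, RemoveGroupAtHandler, Foo>
{
    // History: two groups added, then the second moved to the front.
    protected override IEnumerable<object> Given()
    {
        yield return new GroupAdded("A");
        yield return new GroupAdded("B");
        yield return new GroupMoved { From = 1, To = 0 };
    }

    protected override RemoveGroupAt When()
    {
        return new RemoveGroupAt { Index = 0 };
    }

    [Test]
    public void Then_the_previously_moved_group_is_removed()
    {
        // This expectation only holds if Apply(GroupMoved) reordered the list.
        Expect(new GroupRemoved("B"));
    }
}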
Can you test at a lower level?
If you feel that testing as described above is too complicated, you might want to frame your test at a more detailed level. I think of this like focusing in on a section of detail to get it right.
Down at this level (rather than at your aggregate root), the interfaces will know about groups and you'll have the opportunity to assert what you want to assert.
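One hedged way to get there is to extract the reordering into a small collection type that the aggregate delegates to; the GroupCollection class and the Group constructor below are illustrative, not part of the original code:

public class GroupCollection
{
    private readonly List<Group> groups = new List<Group>();

    public void Add(Group group) => groups.Add(group);

    public Group this[int index] => groups[index];

    // The same logic currently hidden in Apply(GroupMoved).
    public void Move(int from, int to)
    {
        var moved = groups[from];
        groups.RemoveAt(from);
        groups.Insert(to, moved);
    }
}

[Test]
public void Move_reorders_the_groups()
{
    var groups = new GroupCollection();
    var a = new Group("A");
    var b = new Group("B");
    groups.Add(a);
    groups.Add(b);

    groups.Move(from: 1, to: 0);

    Assert.AreSame(b, groups[0]);
    Assert.AreSame(a, groups[1]);
}

The aggregate keeps its encapsulation; the indexer lives on the lower-level type whose whole job is ordering.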
Alternatively, do you need this test at all?
If you cannot find a suitable test at either of the above levels, then are you sure you need this test at all? If there is no visible external difference, then there is no need to lock the behaviour in place with a test.

Related

Best way to setup test cases that are "late" in a process

What's the best way of handling many test cases that need to navigate to a particular place before they run their asserts? For example, if a process has 5 steps and a test case needs to test a part of step 5, how can I set it up? Should I call the test case methods of the previous steps inside this test case, and do that for all test cases that test step 5?
Similarly, if a test case goes deep into the website - through many pages - should that navigation be re-written for every test case, or just call some test that already does that?
Any tips on these situations?
Best way of handling many test cases needing to navigate to a particular place before they run their asserts? For example a process has 5 steps and a test case needs to test a part
of step 5, how can I set it up?
I would create a Transporter class / pattern that the test case can call to get to that state. That makes the navigation code reusable by other tests and keeps each test from becoming too big or complicated. If other tests need the same navigation, you can also place the navigator code in the setUp() method of your xUnit testing framework, which is called before each test.
Similarly, if a test case goes deep into the website - through many
pages - should that navigation be re-written for every test case, or
just call some test that already does that?
I would extract that code into a helper class called Transporter and have the tests call it to easily navigate to the deep page in one method call.
I wrote about this and other test design patterns in a conference paper at the Pacific Northwest Software Quality Conference. Look for the Transporter pattern in that paper.
Here's an example using a Transporter class where you have to login and navigate to the checkout page:
public class Transporter {
    public static void login() {
        // App-specific code to navigate to the login screen and log in to the application
    }

    public static void gotoCheckout() {
        // App-specific code to navigate to the checkout page
    }
}
Now your tests can just call this Transporter class to do the navigation for them.
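For example (a sketch assuming an NUnit-style test; the CheckoutPage helper is illustrative):

[Test]
public void Checkout_page_shows_the_order_summary()
{
    // Reuse the shared navigation instead of repeating it in every test.
    Transporter.login();
    Transporter.gotoCheckout();

    // App-specific assertion about the checkout page goes here.
    Assert.IsTrue(CheckoutPage.IsDisplayed());
}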
If you are using BDD, such as JBehave (not sure if Cucumber has the same feature), where you have the Given, When, Then story (feature) structure in Gherkin syntax, you can use the "GivenStories" feature: prequel test cases that set you up for your specific test case, exactly as you are describing.
There's nothing wrong, however, when using BDD, with simply writing multiple-step scenarios leading up to the particular test case, i.e. the first scenario logs in, the second scenario navigates to a certain page, and the third scenario performs your actual test.
By writing it as a separate story (feature), however, you can re-use those as "GivenStories" in JBehave as a shortcut to get where you need to be without duplicating the steps.

NUnit - Can I group tests by a key and run tests in each group in parallel, but each group in series?

I have a large suite of tests that are currently running in series. I would like to make the suite parallelizable as much as possible. One big problem I have is that some tests require certain global application state to be set one way, and some require it to be set another way. Is there a way I can make the NUnit test runner group my tests based on which global state they require, and then run the tests within each group in parallel with each other, and modify the global state between groups?
For example, let's say there is a global setting Foo and a custom attribute [RequiresFoo(X)] that can be used to annotate which Foo value a test requires. At runtime I want NUnit to group all tests by their argument to RequiresFoo, counting unmarked tests as having some default Foo value.
Then for each group, I want it to
Set Foo = N where N is the Foo value for that group.
Run all tests in that group in parallel.
In an ideal world I would have a mock system for this global state, but that would take a lot of time that I don't have right now. Can I get NUnit to do this or something like it?
Note, I need to be able to execute any method between groups, not just set a variable. The global context I'm actually dealing with can involve starting or stopping microservices, updating configuration files, etc. I can serialize any of these requirements to a string to pass to a custom attribute, but at runtime I need to be able to run arbitrary code to parse the requirements and reconfigure the environment.
Here is a pseudo-code example.
By default tests execute in series like this:
foreach (var t in allTests)
{
    Run(t);
}
NUnit's basic parallel behavior is like this:
Parallel.ForEach(allTests, t => Run(t));
I want something like this:
var grouped = allTests
    .GroupBy(t => GetRequirementsFromCustomAttributes(t));
foreach (var g in grouped)
{
    SetGlobalState(g.Key);
    Parallel.ForEach(g, t => Run(t));
}
As you state the problem, it's not yet possible in NUnit. There is a feature planned but not yet implemented that would allow arbitrary grouping of tests that may not run together.
Your workaround is to make each "group" a test, since NUnit only allows specification of parallelization on tests. Note that by test, we mean either a test case or a group of tests, i.e. fixture or namespace suite.
Putting [Parallelizable] on a test anywhere in the hierarchy causes that test to run in parallel with other tests at the same level. Putting [NonParallelizable] on it causes that same test to run in isolation.
Let's say you have five fixtures that require a particular value of Foo. You would make each of those fixtures non-parallelizable. That way, none of them can run at the same time and interfere with the others.
If you want to allow those fixtures to run in parallel with other, non-Foo fixtures, simply put them all in the same namespace, like Tests.FooRequired.
Create a SetUpFixture in that namespace - possibly a dummy without any actual setup action. Put the [Parallelizable] attribute on it.
The group of all foo tests would then run in parallel with other tests while the individual fixtures would not run in parallel with one another.
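A sketch of that layout, assuming NUnit 3; the namespace, class names, and the body of SetFoo are illustrative:

namespace Tests.FooRequired
{
    using NUnit.Framework;

    // Dummy setup fixture whose main job is to let the namespace as a
    // whole run in parallel with other namespaces.
    [SetUpFixture, Parallelizable]
    public class FooGroupSetup
    {
        [OneTimeSetUp]
        public void SetFoo()
        {
            // Arbitrary code to configure the shared state for this group,
            // e.g. starting services or rewriting configuration files.
        }
    }

    // Fixtures inside the namespace are non-parallelizable, so no two
    // Foo fixtures run at the same time.
    [TestFixture, NonParallelizable]
    public class FirstFooFixture
    {
        [Test]
        public void Works_with_foo() { /* ... */ }
    }
}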

Unit testing a factory method

Suppose I have several OrderProcessors, each of which handles an order a little differently.
The decision about which OrderProcessor to use is done according to the properties of the Order object, and is done by a factory method, like so:
public IOrderProcessor CreateOrderProcessor(IOrdersRepository repository, Order order, DiscountPercentages discountPercentages)
{
    if (order.Amount > 5 && order.Unit.Price < 8)
    {
        return new DiscountOrderProcessor(repository, order, discountPercentages.FullDiscountPercentage);
    }
    if (order.Amount < 5)
    {
        // Offer a more modest discount
        return new DiscountOrderProcessor(repository, order, discountPercentages.ModestDiscountPercentage);
    }
    return new OutrageousPriceOrderProcessor(repository, order);
}
Now, my problem is that I want to verify that the returned OrderProcessor has received the correct parameters (for example- the correct discount percentage).
However, those properties are not public on the OrderProcessor entities.
How would you suggest I handle this scenario?
The only solution I was able to come up with is making the discount percentage property of the OrderProcessors public, but it seems like overkill to do that just for the purpose of unit testing...
One way around this is to change the fields you want to test to internal instead of private and then set the project's internals visible to the testing project. You can read about this here: http://msdn.microsoft.com/en-us/library/system.runtime.compilerservices.internalsvisibletoattribute.aspx
You would do something like this in your AssemblyInfo.cs file:
[assembly:InternalsVisibleTo("Orders.Tests")]
Although you could argue that your unit tests should not necessarily care about the private fields of your class. Maybe it's better to pass the values into the factory method and write unit tests for the expected result when some method (say, Calculate() or something similar) is called on the interface.
Another approach would be to unit test the concrete types (DiscountOrderProcessor, etc.) and confirm the return values of their public methods/properties, and then write unit tests verifying that the factory method returns the correct type of interface implementation.
These are the approaches I usually take when writing similar code, however there are many different ways to tackle a problem like this. I would recommend figuring out where you would get the most value in unit tests and write according to that.
If discount percentage is not public, then it's not part of the IOrderProcessor contract and therefore doesn't need to be verified. Just have a set of unit tests for the DiscountOrderProcessor to verify it's properly computing your discounts based on the discount percent passed in via the constructor.
You have a couple of choices as I see it. You could create specializations of DiscountOrderProcessor:
public class FullDiscountOrderProcessor : DiscountOrderProcessor
{
    public FullDiscountOrderProcessor(IOrdersRepository repository, Order order, DiscountPercentages discountPercentages)
        : base(repository, order, discountPercentages.FullDiscountPercentage)
    {}
}

public class ModestDiscountOrderProcessor : DiscountOrderProcessor
{
    public ModestDiscountOrderProcessor(IOrdersRepository repository, Order order, DiscountPercentages discountPercentages)
        : base(repository, order, discountPercentages.ModestDiscountPercentage)
    {}
}
and check for the correct type returned (see the sketch after this list).
You could pass in a factory for creating the DiscountOrderProcessor which just takes an amount, and then check that it was called with the correct parameters.
You could provide a virtual method that creates the DiscountOrderProcessor and check that it is called with the correct parameters.
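For the first option, the factory test reduces to a type check. A sketch: the Order initialization assumes the properties used in the factory above, and repository / discountPercentages would be test doubles created in setup:

[Test]
public void Large_cheap_orders_get_the_full_discount_processor()
{
    // Amount > 5 and Unit.Price < 8 is the full-discount branch.
    var order = new Order { Amount = 6, Unit = new Unit { Price = 7 } };

    var processor = factory.CreateOrderProcessor(repository, order, discountPercentages);

    Assert.IsInstanceOf<FullDiscountOrderProcessor>(processor);
}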
I quite like the first option personally, but all of these approaches suffer from the same problem: in the end you can't check the actual value, so someone could change your discount amounts and you wouldn't know. Even with the first approach you'd end up not being able to test what value was applied by FullDiscountOrderProcessor.
You need to have some way to check the actual values, which leaves you with:
You could make the properties public (or internal, using InternalsVisibleTo) so you can interrogate them.
You could take the returned object and check that it correctly applies the discount to some object which you pass in to it.
Personally I'd go for making the properties internal, but it depends on how the objects interact; if passing a mock object into the discount order processor and verifying that it is acted on correctly is simple, then the second option might be better, as in the sketch below.
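A sketch of that second option, asserting on the observable result rather than the stored percentage; the Process() method and the Total property are illustrative, not part of the original interfaces:

[Test]
public void Full_discount_is_applied_to_the_order_total()
{
    var order = new Order { Amount = 6, Unit = new Unit { Price = 7 } };
    var percentages = new DiscountPercentages { FullDiscountPercentage = 50 };

    var processor = factory.CreateOrderProcessor(repository, order, percentages);
    processor.Process();  // illustrative IOrderProcessor member

    // 6 units at price 7 = 42; half price after the 50% discount.
    Assert.AreEqual(21, order.Total);
}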

Unit testing style question: should the creation and deletion of data be in the same method?

I am writing unit tests for a PHP class that maintains users in a database. I now want to test if creating a user works, but also if deleting a user works. I see multiple possibilities to do that:
I only write one method that creates a user and deletes it afterwards
I write two methods. The first one creates the user and saves its ID. The second one deletes that user with the saved ID.
I write two methods. The first one only creates a user. The second method creates a user so that there is one that can afterwards be deleted.
I have read that every test method should be independent of the others, which means the third possibility is the way to go, but that also means every method has to set up its test data by itself (e.g. if you want to test whether it's possible to add a user twice).
How would you do it? What is good unit testing style in this case?
Two different things = Two tests.
Test_DeleteUser() could be in a different test fixture as well, because it has different SetUp() code: it must ensure that a user already exists.
[SetUp]
public void SetUp()
{
    CreateUser("Me");
    Assert.IsTrue(User.Exists("Me"), "Setup failed!");
}

[Test]
public void Test_DeleteUser()
{
    DeleteUser("Me");
    Assert.IsFalse(User.Exists("Me"));
}
This means that if Test_CreateUser() passes and Test_DeleteUser() doesn't, you know that there is a bug in the section of the code that is responsible for deleting users.
Update: I was just giving some thought to Charlie's comments on the dependency issue - by which I mean that if creation is broken, both tests fail, even though deletion may be fine. The best I could do was to move a guard check into SetUp so that setup failures show up in the Errors and Failures tab, distinguishing them from genuine test failures. (In general, setup failures should be easy to spot because an entire test fixture shows red.)
How you do this depends on how you utilize mocks and stubs. I would go for the more granular approach and have two different tests.
Test A
CreateUser("testuser");
assertTrue(CheckUserInDatabase("testuser"));

Test B
LoadUserIntoDB("testuser2");
DeleteUser("testuser2");
assertFalse(CheckUserInDatabase("testuser2"));

TearDown
RemoveFromDB("testuser");
RemoveFromDB("testuser2");

CheckUserInDatabase(string user)
... // Access the DAL and check whether the user exists in the DB
If you utilize mocks and stubs, you don't need to access the DAL until you do your integration testing, so you won't need as much work on the asserts and the test data setup.
Usually, you should have two methods but reality still wins over text on paper in the following case:
You need a lot of expensive setup code to create the object to test. This is a code smell and should be fixed but sometimes, you really have no choice (think of some code that aggregates data from several places: You really need all those places). In this case, I write mega tests (where a test case can have thousands of lines of code spread over many methods). It creates the database, all tables, fills them with defined data, runs the code step by step, verifies each step.
This should be a rare case. If you need one, you must actively ignore the rule "Tests should be fast". This scenario is so complex that you want to check as many things as possible. I had a case where I would dump the contents of 7 database tables to files and compare them for each of the 15 SQL updates (which gave me 105 files to compare in a single test) plus about a million asserts that would run.
The goal here is to make the test fail in such a way that you notice the source of the problem right away. It's like pouring all the constraints into code and make them fail early so you know which line of app code to check. The main drawback is that these test cases are hell to maintain. Every change of the app code means that you'll have to update many of the 105 "expected data" files.

How to deal with setUp() addiction when writing tests?

I'm somewhat new to writing tests. I find myself struggling to keep my setUps clean and concise, and instead trying to accomplish too much with an uber-setUp.
My question is, how do you split up your testing?
Do your tests include one or two lines of independent step code?
def test_public_items():
    item1 = PublicItem()
    item2 = PublicItem()
    assertEqual(public_items, [item1, item2])
or do you factor that into the setUp no matter what?
If that's the case, how do you deal with test class separation? Do you create a new class when one set of tests needs a different setUp than another set of tests?
I believe you've hit a couple of anti-patterns here:
Excessive Setup
Inappropriately shared fixture.
The rule of thumb is that all tests in a particular test fixture should need the code in the Setup() method.
If you write a test that needs more or less setup than what is currently present, that may be a hint that it belongs in a new test fixture. Inertia against creating a new test fixture is what snowballs the setup code into one big ball of mud that tries to do everything for all tests. That hurts readability quite a bit: you can't see the test amid the setup code, most of which may not even be relevant to the test you're looking at.
That said, it is okay for a test to have some specific setup instructions right in the test, on top of the common setup. (Those belong to the first step of the Arrange-Act-Assert triad.) However, if you have duplication of those instructions in multiple tests, you should probably move all those tests out to a new test fixture, whose
setup_of_new_fixture = old_setup + recurring_arrange_instruction
Yes, a test fixture (embodied in a class) should be exactly a set of tests sharing common needs for set-up and tear-down.
Ideally, a unit test should be named testThisConditionHolds. public_items is not a "condition". Wherever the incredibly-black-magic public_items is supposed to come from, I'd be writing tests like:
def testNoPublicItemsRecordedIfNoneDefined(self):
    assertEqual([], public_items)

def testOnePublicItemIsRecordedRight(self):
    item = PublicItem()
    assertEqual([item], public_items)

def testTwoPublicItemsAreRecordedRight(self):
    item1 = PublicItem()
    item2 = PublicItem()
    assertEqual([item1, item2], public_items)
If public_items is a magical list supposed to be magically populated as a side effect of calling a magic function PublicItem, then calling the latter in setUp would in fact destroy the ability to test these simple cases properly, so of course I wouldn't do it!