How to unit test a method whose side effect is to call another method? - unit-testing

Here is my example:
void doneWithCurrentState(State state) {
    switch (state) {
        case State.Normal:
            // this method is never actually called with State.Normal
            break;
        case State.Editing:
            controller.updateViewState(State.Normal);
            database.updateObjectWithDetails(controller.getObjectDetailsFromViews());
            break;
        case State.Focus:
            controller.updateViewState(State.Editing);
            break;
    }
}
My controller calls the doneWithCurrentState when a specific button is pressed. The states are different positions on screen that the views for that controller can assume.
If the current state is Normal, the button will be hidden.
If the button is pressed while the current state is Editing, the doneWithCurrentState method (I say method because it actually lives inside a class) will be called; it should change the controller's views to the Normal state and update the Object in the database using the ObjectDetails (just a struct with the data used to update the Object), which should be retrieved from the controller's views (i.e., text fields, checkboxes, etc.).
If the button is pressed while the current state is Focus, it should just go back to the Editing state.
I am unit testing it like this:
void testDoneWithCurrentStateEditing() {
    mockController.objectDetails = ...;
    myClass.doneWithCurrentState(State.Editing);
    AssertEqual(mockController.viewState, State.Normal, "controller state should change to Normal");
    AssertTrue(mockDatabase.updateObjectWithDetailsWasCalled, "updateObjectWithDetails should be called");
    AssertEqual(mockDatabase.updatedWithObjectDetail, mockController.objectDetails, "database should be updated with corresponding objectDetails");
}
void testDoneWithCurrentStateFocus() {
    myClass.doneWithCurrentState(State.Focus);
    AssertEqual(mockController.viewState, State.Editing, "controller state should change to Editing");
    AssertFalse(mockDatabase.updateObjectWithDetailsWasCalled, "updateObjectWithDetails should not be called");
}
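(For reference, mockController and mockDatabase are hand-rolled test doubles along these lines; this is a simplified sketch, with the fields inferred from the assertions above.)

// Sketch of the hand-rolled doubles implied by the assertions above (names inferred, not original code).
class MockController extends Controller {
    ObjectDetails objectDetails;
    State viewState;
    @Override void updateViewState(State state) { viewState = state; }
    @Override ObjectDetails getObjectDetailsFromViews() { return objectDetails; }
}

class MockDatabase extends Database {
    boolean updateObjectWithDetailsWasCalled = false;
    ObjectDetails updatedWithObjectDetail;
    @Override void updateObjectWithDetails(ObjectDetails details) {
        updateObjectWithDetailsWasCalled = true;
        updatedWithObjectDetail = details;
    }
}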
But it seems wrong: it feels like I'm asserting that a method call is made and then making the call... it's just like asserting setter and getter methods.
What would be the right way of testing that doneWithCurrentState method?
As part of the answer, I do accept something like "first you should refactor the method to better separate these concerns...".
Thank you.

If you didn't write this test-first, an obvious way to write it would be to write one case, then copy-paste it into the next case. An easy mistake to make there would be forgetting to update the parameter to updateViewState(), so (for instance) you might find yourself going from State.Focus to State.Normal. The test you've written, although it may seem weak to you, protects against mistakes of that nature. So I think it's doing what it should.

First of all, consider using a state machine for your state transitions; it gets you out of the switch-statement branching business, which greatly simplifies your tests.
Next, treat your tests as a potential detector of code and design smells. If it is hard to write a test for a piece of code, the code probably lacks quality (it breaks SRP, is too coupled, etc.) and can be simplified or improved.
void doneWithCurrentState(State state) {
    State nextState = this.stateMachine.GetNextState(state);
    controller.updateViewState(nextState);
    if (nextState == State.Normal) // i.e. we were in Editing and are done with it
        database.updateObjectWithDetails(controller.getObjectDetailsFromViews());
}
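The state machine itself can be as small as a lookup over the transitions you already described; a minimal sketch (method name matching the snippet above, everything else assumed):

class StateMachine {
    // Transition table for the states in the question; Normal never reaches this code.
    State GetNextState(State current) {
        switch (current) {
            case Editing: return State.Normal;  // done editing: back to Normal (and persist)
            case Focus:   return State.Editing; // done focusing: back to Editing
            default:      throw new IllegalStateException("no transition from " + current);
        }
    }
}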
Then you may notice that you can pull the call to the state machine out of the method and pass in the nextState.
// whoever calls this method should get nextState from the state machine
void doneWithCurrentState(State nextState) {
    controller.updateViewState(nextState);
    if (nextState == State.Normal)
        database.updateObjectWithDetails(controller.getObjectDetailsFromViews());
}
And so forth: you write simple tests for the transitions in your state machine's own tests, your overall code complexity goes down, and all is goodness. Well, there is hardly a limit to the level of goodness you can achieve, and I can see multiple ways the code could be cleaned up even further.
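Those state-machine tests stay tiny, in the style of the question's own assertions; a sketch:

void testDoneWithEditingGoesBackToNormal() {
    StateMachine stateMachine = new StateMachine();
    AssertEqual(stateMachine.GetNextState(State.Editing), State.Normal, "done with Editing should go back to Normal");
}

void testDoneWithFocusGoesBackToEditing() {
    StateMachine stateMachine = new StateMachine();
    AssertEqual(stateMachine.GetNextState(State.Focus), State.Editing, "done with Focus should go back to Editing");
}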
As for your original question (how to test that your class calls 'database' or 'controller' with the proper parameters when a specific state is passed in): you are doing it "right"; that's what mocks are meant to do. However, there are better ways. Consider an event-based design: what if your class could fire an event like "NextState" and your 'database' object could simply subscribe to it? Then all your test needs to verify is that the proper event is fired, without involving the database at all (eliminating dependencies :))
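A rough sketch of that event idea (the listener interface and the registration method are assumptions, not existing API):

interface StateChangeListener {
    void onNextState(State nextState); // whoever cares (views, database) subscribes to this
}

void testFiresNextStateEventWhenDoneEditing() {
    final State[] fired = new State[1];
    myClass.addStateChangeListener(next -> fired[0] = next); // the test subscribes instead of the database
    myClass.doneWithCurrentState(State.Editing);
    AssertEqual(fired[0], State.Normal, "should publish Normal; nothing here knows about the database");
}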

I think Paul is spot on: put the state changes based on the incoming state into a state machine, i.e. an object whose responsibility is to determine what comes next. This may sound dumb, because you kind of move the same code to another object, but at least it puts the controller on a diet. To stay maintainable, it shouldn't worry about too many details itself.
I worry about updateViewState, though. Why does it take the same kind of parameter as the controller's callback for user interaction? Can you model this differently? It's hard to tell you anything specific without looking at the flow of information (a detailed sequence diagram with comments might help), because usually the real insight into problems like these lies multiple levels deeper in the call stack. Without knowledge about the meaning of all this, it's hard to come up with a canned solution that fits.
Questions that might help:
if State represents 3 (?) user interactions which all go through the same tunnel, can you model the actions to take as Strategy or Command? (a rough sketch follows this list)
if doneWithCurrentState represents finishing one of many interaction modes, do you really need to use a shared doneWithCurrentState method? Couldn't you use three different callbacks? Maybe this is the wrong kind of abstraction. ("Don't Repeat Yourself" isn't about code but about things that change (in)dependently)
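For the Strategy/Command idea in the first bullet, a rough sketch (all names here are made up for illustration, not taken from the question):

interface DoneAction {               // one small command per interaction mode
    void execute();
}

class DoneEditing implements DoneAction {
    private final Controller controller;
    private final Database database;
    DoneEditing(Controller controller, Database database) {
        this.controller = controller;
        this.database = database;
    }
    public void execute() {
        controller.updateViewState(State.Normal);
        database.updateObjectWithDetails(controller.getObjectDetailsFromViews());
    }
}

class DoneFocusing implements DoneAction {
    private final Controller controller;
    DoneFocusing(Controller controller) { this.controller = controller; }
    public void execute() { controller.updateViewState(State.Editing); }
}
// The button handler just executes whichever DoneAction the current mode installed,
// and each small command gets its own trivial test.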

Related

How do I unit test RxJava Completable doOnSuccess function?

I have seen how to test an Observable using TestSubscriber, but I have no idea how to test the Completable's doOnSuccess callback. Specifically, this method:
fun setAuthToken(authToken: AuthToken): Completable {
    this.authToken = authToken
    return Completable.fromSingle<User>(api
        .getCurrentUser()
        .doOnSuccess {
            user = it
        })
}
This might not need to be tested with RxJava test subscribers at all (depending on the rest of the code).
Remember: you don't want to test internal state, or at least you want to do it as rarely as possible. Internal state and class structure can change, and they probably will change often. So it's bad practice to check whether user is assigned to the field.
So you could make the Completable blocking and then assert the state of the class (let's call it 'server'), but I would highly discourage doing it this way:
server.setAuthToken(AuthToken("token"))
    .blockingAwait()
assertThat(server.user, equalTo(expectedUser))
What you want to test is behavior.
You are probably not assigning user to a field just for the sake of having some fields; you are doing it to use information from the user later on. So first you should call setAuthToken and then call a function that really uses information from the user. Then you can assert that the information used is correct and comes from the correct user.
So sample tests (depending on the class) could look like this:
server.setAuthToken(AuthToken("token"))
    .andThen(server.sendRequest())
    .blockingAwait()
// assert if correct user info was sent
or
server.setAuthToken(AuthToken("token"))
    .andThen(server.sendRequest())
    .test()
// assert if correct user info was sent
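To make that last comment concrete, here is a sketch with a Mockito-mocked api (written in Java; the send() call and the request's user id are assumptions about what sendRequest() eventually does):

// Sketch only: `api` is a Mockito mock injected into `server`.
when(api.getCurrentUser()).thenReturn(Single.just(expectedUser));

server.setAuthToken(new AuthToken("token"))
      .andThen(server.sendRequest())
      .test()
      .assertComplete();

// Verify behavior: whatever request went out carried data from the fetched user.
ArgumentCaptor<Request> request = ArgumentCaptor.forClass(Request.class);
verify(api).send(request.capture());
assertEquals(expectedUser.getId(), request.getValue().getUserId());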

What to do when TDD tests reveal new functionality that is needed that also needs tests?

What do you do when you are writing a test and you get to the point where you need to make the test pass, and you realize that you need an additional piece of functionality that should be separated into its own function? That new function needs to be tested as well, but the TDD cycle says to make a test fail, make it pass, then refactor. If I am on the step where I am trying to make my test pass, I'm not supposed to go off and start another failing test for the new functionality that I need to implement.
For example, I am writing a point class that has a function WillCollideWith(LineSegment):
public class Point {
    // Point data and constructor ...
    public bool CollidesWithLine(LineSegment lineSegment) {
        Vector PointEndOfMovement = new Vector(Position.X + Velocity.X,
                                               Position.Y + Velocity.Y);
        LineSegment pointPath = new LineSegment(Position, PointEndOfMovement);
        if (lineSegment.Intersects(pointPath)) return true;
        return false;
    }
}
I was writing a test for CollidesWithLine when I realized that I would need a LineSegment.Intersects(LineSegment) function. But, should I just stop what I am doing on my test cycle to go create this new functionality? That seems to break the "Red, Green, Refactor" principle.
Should I just write the code that detects that line segments intersect inside the CollidesWithLine function and refactor it out after it is working? That would work in this case, since I can access the data from LineSegment, but what about cases where that kind of data is private?
If you follow TDD to the letter as per how Kent Beck defines it in his book, when you come across something that you will also need to test, make a note of it on a piece of paper (he refers to this as a test list) and then focus on the current test. Kent suggests you should work on one test at a time.
From a test first perspective you should focus on making the test pass, which has several options:
Write the implementation of Intersects inline in the current method. "Green" means working, not pretty. Once working, refactor both the code AND tests.
Stub it out. Pass in a test double (mock) into the method that can simulate the contract.
Fake it. When you come across a method you need, make a note of it for other tests, then write a basic implementation (e.g. "return true").
I suggest your best option is to mock it, that way you stay in your workflow and you also test a limited amount of code at a time.
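A sketch of that option (written Java-style here; it assumes LineSegment.Intersects is overridable and that you already have fixture data for the point):

void testCollidesWhenPathsIntersect() {
    Point point = new Point(position, velocity);      // whatever fixture data you already use
    // The lineSegment passed in is the collaborator we have not written yet,
    // so hand in a double that fakes the Intersects() contract.
    LineSegment alwaysIntersects = new LineSegment(start, end) {
        @Override
        public boolean Intersects(LineSegment other) { return true; }
    };
    assertTrue(point.CollidesWithLine(alwaysIntersects));
}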
I like to use the [Ignore] attribute to mark tests that require attention (e.g. when a test is not completed). Such tests will not run. Ignored tests are highlighted in test runners (usually yellow or orange). Even if all other tests pass, you will not see a green bar while there are any ignored tests. This ensures that tests will not be forgotten.
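In JUnit the same idea is the @Ignore annotation (NUnit and MSTest spell it [Ignore]); a sketch:

@Test
@Ignore("LineSegment.Intersects is on the test list but not implemented yet")
public void intersectingSegmentsAreDetected() {
    // body comes later; the runner reports this test as skipped rather than green
}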

How to choose TDD starting point in a real world project?

I've read tons of articles and seen tons of screencasts about TDD, but I'm still struggling to use it in a real-world project. My main issue is that I don't know where to start and what the first test should be.
Suppose I have to write client library calling external system's methods (e.g. notification).
I want this client to work as follows
NotificationClient client = new NotificationClient("abcd1234"); // client ID
Response code = client.notifyOnEvent(Event.LIMIT_REACHED, 100); // some params of call
There is some translation and message format preparation behind the scenes, so I'd like to hide it from my client apps.
I don't know where and how to start.
Should I make up a rough set of classes for this library?
Should I start by testing NotificationClient as below?
public void testClientSendInvalidEventCommand() {
    NotificationClient client = new NotificationClient(...);
    Response code = client.notifyOnEvent(Event.WRONG_EVENT);
    assertEquals(1223, code.codeValue());
}
If so, with such a test I'm forced to write a complete working implementation at once, with none of the baby steps TDD prescribes. I could mock out something in the Client, but then I have to know upfront what will be mocked, so some upfront design needs to be made.
Maybe I should start from the bottom: test the message-formatting component first and then use it in the proper client test?
Which way is the right one to go?
Should we always start from the top (and how do we deal with the huge step that requires)?
Can we start with any class realizing a tiny part of the desired feature (such as the Formatter in this example)?
If I knew where to aim my tests, it would be a lot easier for me to proceed.
I'd start with this line:
NotificationClient client = new NotificationClient("abcd1234"); // client ID
Sounds like we need a NotificationClient, which needs a client ID. That's an easy thing to test for. My first test might look something like:
public void testNewClientAbcd1234HasClientId() {
    NotificationClient client = new NotificationClient("abcd1234");
    assertEquals("abcd1234", client.clientId());
}
Of course, it won't compile at first, not until I've written a NotificationClient class with a constructor that takes a string parameter and a clientId() method that returns a string, but that's part of the TDD cycle.
public class NotificationClient {
    public NotificationClient(string clientId) {
    }
    public string clientId() {
        return "";
    }
}
At this point, I can run my test and watch it fail (because I've hard-coded clientId()'s return to be an empty string). Once I've got my failing unit test, I write just enough production code (in NotificationClient) to get the test to pass:
public string clientId() {
    return "abcd1234";
}
Now all my tests pass, so I can consider what to do next. The obvious (well, obvious to me) next step is to make sure that I can create clients whose ID isn't "abcd1234":
public void testNewClientBcde2345HasClientId() {
    NotificationClient client = new NotificationClient("bcde2345");
    assertEquals("bcde2345", client.clientId());
}
I run my test suite and observe that testNewClientBcde2345HasClientId() fails while testNewClientAbcd1234HasClientId() passes, and now I've got a good reason to add a member variable to NotificationClient:
public class NotificationClient {
    private string _clientId;
    public NotificationClient(string clientId) {
        _clientId = clientId;
    }
    public string clientId() {
        return _clientId;
    }
}
Assuming no typographical errors have snuck in, that'll get all my tests to pass, and I can move on to whatever the next step is. (In your example, it would probably be testing that notifyOnEvent(Event.WRONG_EVENT) returns a Response whose codeValue() equals 1223.)
Does that help any?
Don't confuse acceptance tests, which hook into each end of your application and form an executable specification, with unit tests.
If you are doing 'pure' TDD you write an acceptance test which drives the unit tests that drive the implementation. testClientSendInvalidEventCommand is your acceptance test, but depending on how complicated things are you will delegate the implementation to multiple classes you can unit test separately.
How complicated things get before you have to split them up to test and understand them properly is why it is called Test Driven Design.
You can choose to let tests drive your design from the bottom up or from the top down. Both work well for different developers in different situations. Either approach will force you to make some of those "upfront" design decisions, but that's a good thing. Making those decisions in order to write your tests is test-driven design!
In your case you have an idea what the high level external interface to the system you are developing should be so let's start there. Write a test for how you think users of your notification client should interact with it and let it fail. This test is the basis for your acceptance or integration tests and they are going to continue failing until the features they describe are finished. That's ok.
Now step down one level. What are the steps that need to occur to provide that high-level interface? Can we write an integration or unit test for those steps? Do they have dependencies you had not considered, which might cause you to change the notification client interface you have started to define? Keep drilling down depth-first, defining behavior with failing tests, until you find that you have actually reached a unit test. Now implement enough to pass that unit test and continue. Get unit tests passing until you have built enough to pass an integration test, and so on. You'll eventually have completed a depth-first construction of a tree of tests and should have a well-tested feature whose design was driven by your tests.
One goal of TDD is that the testing informs the design. So the fact that you need to think about how to implement your NotificationClient is a good thing; it forces you to think of (hopefully) simple abstractions up front.
Also, TDD sort of assumes constant refactoring. Your first solution probably won't be the last; so as you refine your code the tests are there to tell you what breaks, from compile errors to actual runtime issues.
So I would just jump right in and start with the test you suggested. As you create mocks, you will need to create tests for the actual implementations of what you are mocking. You will find things make sense and need to be refactored, so you will need to modify your tests as you go. That's the way it's supposed to work...
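For example, if the message formatting ends up behind its own small interface, the first client test can stub it out while the real formatter gets its own tests later (the MessageFormatter name and the extra constructor parameter are assumptions, not part of the question):

interface MessageFormatter {          // hypothetical collaborator pulled out of NotificationClient
    String format(Event event, int param);
}

public void testNotifyOnEventUsesTheFormatter() {
    final String[] formatted = new String[1];
    MessageFormatter stubFormatter = (event, param) -> {
        formatted[0] = event + ":" + param;           // record that the client asked us to format
        return formatted[0];
    };
    NotificationClient client = new NotificationClient("abcd1234", stubFormatter);
    client.notifyOnEvent(Event.LIMIT_REACHED, 100);
    assertEquals("LIMIT_REACHED:100", formatted[0]);
}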

Unit test 'structure' of method?

Sorry for the long post...
While being introduced to a brownfield project, I'm having doubts regarding certain sets of unit tests and what to think of them. Say you had a repository class wrapping a stored procedure, and in the developer guide book a certain set of guidelines (rules) describes how this class should be constructed. The class could look like the following:
public class PersonRepository
{
    public PersonCollection FindPersonsByNameAndCity(string personName, string cityName)
    {
        using (new SomeProfiler("someKey"))
        {
            var sp = Ioc.Resolve<IPersonStoredProcedure>();
            sp.addNameArguement(personName);
            sp.addCityArguement(cityName);
            return sp.invoke();
        }
    }
}
Now, I would of course write some integration tests, testing that the SP can be invoked, and that the behavior is as expected. However, would I write unit tests that assert that:
The constructor for SomeProfiler is called with the input parameter "someKey"
The constructor of PersonStoredProcedure is called
The addNameArgument method on the stored procedure is called with parameter personName
The addCityArgument method on the stored procedure is called with parameter cityName
The invoke method is called on the stored procedure
If so, I would potentially be testing the whole structure of the method, besides its behavior. My initial thought is that it is overkill. However, with regard to the coding practices enforced by the team, these tests ensure a uniform and 'correct' structure and that the next layer is called correctly (from DAL to DB, BLL to DAL, etc.).
In my case, these types of tests are performed for each layer of the application.
Follow-up question: the use of the SomeProfiler class smells a little like a convention to me. Instead of creating explicit tests for this, could one create convention-style tests using static code analysis or unit tests + reflection?
Thanks in advance.
I think that your initial thought was right: this is overkill. Although you can use reflection to make sure that the class has the methods you expect, I'm not sure you want to test it that way.
Perhaps, instead of unit testing, you should use a tool such as FxCop/StyleCop or NDepend to make sure all of the classes in a specific assembly/dll have these properties.
Having said that, I'm a believer in "only code what you need": why test that a method exists? Either you use it somewhere in your code, in which case you can test that specific case, or you don't, and so it's irrelevant.
Unit tests should focus on behavior, not implementation. So writing a test to verify that certain arguments are set or passed in doesn't add much value to your testing strategy.
As the example provided appears to be communicating with your database, it can't truly be considered a "unit test" as it must communicate with physical dependencies that have additional setup and preconditions, such as availability of the environment, database schema, existing data, stored-procedures, etc. Any test you write is actually verifying these preconditions as well.
In its present condition, your best bet for these types of tests is to test the behavior provided by the class: invoke a method on your repository and then validate that the results are what you expected. However, you'll suddenly realize that there's a hidden cost here: the database maintains state between test runs, and you'll need additional setup or tear-down logic to ensure that the database is in a well-known state.
While I realize the intent of the question was about the testing a "black box", it seems obvious that there's some hidden magic here in your API. My preference to solve the well-known state problem is to use an in-memory database that is scoped to the current test, which isolates me from environment considerations and enables me to parallelize my integration tests. I'd wager that under the current design, there is no "seam" to programmatically introduce a database configuration so you're "hemmed in". In my experience, magic hurts.
However, a slight change to the existing design solves this problem and the "magic" goes away:
public class PersonRepository : IPersonRepository
{
    private ConnectionManager _mgr;

    public PersonRepository(ConnectionManager mgr)
    {
        _mgr = mgr;
    }

    public PersonCollection FindPersonsByNameAndCity(string personName, string cityName)
    {
        using (var p = _mgr.CreateProfiler("somekey"))
        {
            var sp = new PersonStoredProcedure(p);
            sp.addArguement("name", personName);
            sp.addArguement("city", cityName);
            return sp.invoke();
        }
    }
}
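With that seam in place, a test can hand the repository a ConnectionManager scoped to an in-memory database and assert on behavior rather than structure (sketch only; the constructor argument and the in-memory setup are assumptions):

// Sketch (Java-flavored): the injected ConnectionManager is the seam the test controls.
ConnectionManager mgr = new ConnectionManager("jdbc:h2:mem:person_test"); // hypothetical in-memory configuration
PersonRepository repository = new PersonRepository(mgr);

PersonCollection result = repository.FindPersonsByNameAndCity("Ada", "London");
// assert on the returned collection (the behavior), not on which internal calls were made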

Unit testing functions with side effects?

Let's say you're writing a function to check whether a page was reached by the appropriate URL. The page has a "canonical" stub: for example, while a page could be reached at stackoverflow.com/questions/123, we would prefer (for SEO reasons) to redirect it to stackoverflow.com/questions/123/how-do-i-move-the-turtle-in-logo. The actual redirect is safely contained in its own method (e.g. redirectPage($url)), but how do you properly test the function which calls it?
For example, take the following function:
function checkStub($questionId, $baseUrl, $stub) {
    $canonicalStub = $model->getStub($questionId);
    if ($stub != $canonicalStub) {
        redirectPage($baseUrl . $canonicalStub);
    }
}
If you were to unit test the checkStub() function, wouldn't the redirect get in the way?
This is part of a larger problem where certain functions seem to get too big and leave the realm of unit testing and into the world of integration testing. My mind immediately thinks of routers and controllers as having these sorts of problems, as testing them necessarily leads to the generation of pages rather than being confined to just their own function.
Do I just fail at unit testing?
You say...
This is part of a larger problem where certain functions seem to get too big and leave the realm of unit testing and into the world of integration testing
I think this is why unit testing is (1) hard and (2) leads to code that doesn't crumble under its own weight. You have to be meticulous about breaking all of your dependencies or you end up with unit tests == integration tests.
In your example, you would inject a redirector as a dependency. You use a mock, double or spy. Then you do the tests as #atk lays out. Sometimes it's not worth it. More often it forces you to write better code. And it's hard to do without an IOC container.
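A sketch of that injection (Java-flavored for brevity; the Redirector interface, the spy, and the StubChecker class are illustrative names, not part of the original PHP):

interface Redirector {
    void redirectPage(String url);
}

class SpyRedirector implements Redirector {
    String redirectedTo;                                   // records the call instead of actually redirecting
    public void redirectPage(String url) { redirectedTo = url; }
}

void testRedirectsToCanonicalUrlWhenStubIsWrong() {
    SpyRedirector redirector = new SpyRedirector();
    StubChecker checker = new StubChecker(model, redirector);   // model stubbed to return the canonical stub
    checker.checkStub(123, "stackoverflow.com/questions/123/", "wrong-stub");
    assertEquals("stackoverflow.com/questions/123/how-do-i-move-the-turtle-in-logo", redirector.redirectedTo);
}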
This is an old question, but I think this answer is relevant. #Rob states that you would inject a redirector as a dependency - and sure, this works. However, your problem is that you don't have a good separation of concerns.
You need to make your functions as atomic as possible, and then compose larger functionality using the granular functions you've created. You wrote this:
function checkStub($questionId, $baseUrl, $stub) {
    $canonicalStub = $model->getStub($questionId);
    if ($stub != $canonicalStub) {
        redirectPage($baseUrl . $canonicalStub);
    }
}
I'd write this:
function checkStubEquality($stub1, $stub2) {
    return $stub1 == $stub2;
}

$canonicalStub = $model->getStub($questionId);
if (!checkStubEquality($canonicalStub, $stub)) redirectPage($baseUrl . $canonicalStub);
It sounds like you just have another test case. You need to check that the stub is identified correctly as a stub with both positive and negative testing, and you need to check that the page to which you are redirected is correct.
Or do I totally misunderstand the question?