Sinon spy on switch statement - unit-testing

Is there a way to spy / stub a switch statement? I tried:
let spy = sandbox.spy(global, 'switch');
But that does not work unfortunately.

No. A switch is a language-level control statement and can't be affected like this.
This is no setback, however. You should be testing what your units do, not how they do it. That distinction can get muddy sometimes, but in this case? The presence or absence of a switch in a function is going to be a 'how' and not a 'what' 100% of the time. In other words, it's an implementation detail.
If you're at the point where you want something like this, I'd recommend taking a step back and thinking hard about why you want it. Do you really want tests to fail if you replace a switch with an equivalent chain of if/else-if/else? Probably not.
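To make that concrete: test one observable input/output pair per branch. Here is a minimal sketch (in C# with NUnit, since the principle is language-agnostic; PriceCalculator and its tiers are hypothetical, not from the question) whose tests keep passing whether the production code uses a switch, an if/else chain, or a lookup table:

using NUnit.Framework;

public static class PriceCalculator
{
    // Implementation detail: this could just as well be an if/else chain.
    public static double DiscountFor(string tier)
    {
        switch (tier)
        {
            case "gold": return 0.20;
            case "silver": return 0.10;
            default: return 0.0;
        }
    }
}

[TestFixture]
public class PriceCalculatorTests
{
    // One observable input/output pair per branch - no spying on the switch.
    [TestCase("gold", 0.20)]
    [TestCase("silver", 0.10)]
    [TestCase("bronze", 0.00)]
    public void DiscountFor_returns_the_expected_rate(string tier, double expected)
    {
        Assert.AreEqual(expected, PriceCalculator.DiscountFor(tier));
    }
}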

Related

Do I need to unit test functions with no control flow?

I am wondering if I should unit test functions that have no control flow. These functions take some input, call a sequence of five or six other functions, then return some output.
Testing them seems a waste of time, since I don't see what exactly I would be testing. The other functions called already have unit tests.
The main drawback for me is that I don't know a priori what the output should be; I would need to call the same functions in the test script to see whether the results coincide. And then what am I testing? That the tested function and the test have the same lines in the same order?
Thanks for any insight.
Note: same as my last question, if you think it's primarily opinion-based, reformulate it as "According to the principles advocated in Art of Unit Testing, should I unit test functions with no control flow?"
Short answer: yes, of course you do!
Long answer: how a method does something is, in the end, an "implementation detail". In that sense, you should not care at all whether a method uses a switch, some if/elses, a loop, or just calls other methods in sequence.
Instead, you should understand the contract that your method provides: which input it takes, and what comes out of it (possibly depending on those inputs).
That is what you focus on: creating a setup where your method can run, then checking whether the method upholds that contract.
Example:
public void foo(Bar bar) {
    FooBar fooBar = bar.wobbel();
    fooBar.throttle();
    fooBar.rattle(this.someField);
}
The code above doesn't contain any control-flow statements. But there are still various points where things could go wrong (for example, NullPointerExceptions). Don't you think it would be better to catch those with unit tests?
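For instance, a couple of tests can pin down both the happy path and the null case. A hedged sketch, transliterating the snippet to C# with NUnit and Moq to match the other examples on this page (the IBar/IFooBar interfaces are assumptions, since the original doesn't show them):

using System;
using Moq;
using NUnit.Framework;

// Hypothetical interfaces; the original snippet doesn't show them.
public interface IFooBar
{
    void Throttle();
    void Rattle(string field);
}

public interface IBar
{
    IFooBar Wobbel();
}

public class Widget
{
    private readonly string someField = "configured";

    // C# rendering of the question's foo(Bar bar).
    public void Foo(IBar bar)
    {
        IFooBar fooBar = bar.Wobbel();
        fooBar.Throttle();
        fooBar.Rattle(this.someField);
    }
}

[TestFixture]
public class WidgetTests
{
    [Test]
    public void Foo_relays_the_field_to_the_collaborator()
    {
        var fooBar = new Mock<IFooBar>();
        var bar = new Mock<IBar>();
        bar.Setup(b => b.Wobbel()).Returns(fooBar.Object);

        new Widget().Foo(bar.Object);

        fooBar.Verify(f => f.Rattle("configured"), Times.Once());
    }

    [Test]
    public void Foo_throws_when_wobbel_returns_null()
    {
        // A loose mock returns null for Wobbel() by default, hitting the null path.
        var bar = new Mock<IBar>();
        Assert.Throws<NullReferenceException>(() => new Widget().Foo(bar.Object));
    }
}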

Should I write a Test that already passes?

I'm talking about Uncle Bob's rules of TDD:
You are not allowed to write any production code unless it is to make a failing unit test pass.
You are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures.
You are not allowed to write any more production code than is sufficient to pass the one failing unit test.
My problem is: what happens when you set out to build a feature that can produce more than one result, and in the first iteration you implement code that satisfies every scenario?
I once wrote code like that, because it was the only solution that came to mind.
I'd say I didn't violate any of these 3 rules.
I wrote a test with the fewest possible conditions needed to fail.
Then I implemented the feature with just enough code to pass the test (and that was the only solution I came up with, so I'd say the least code I could have written).
And then I wrote the next test, only to discover that it was already passing.
Now what about the rules?
Am I not allowed to write this test even if the feature is a soooper important one? Or should I roll back and start over?
I'd also mention that this method cannot be refactored according to the result or the data fed to it. The example situation I can think of right now is a little silly, but please bear with me. Take a look at this, for example:
I want to create a method to add numbers; this is pretty much all I can do. Write a failing test:
public function it_can_add_numbers()
{
    $this->add(2, 3)->shouldReturn(5);
}
Then make it pass:
public function add($numberOne, $numberTwo)
{
    return $numberOne + $numberTwo;
}
Now one could argue that I should have just returned 5 in the first iteration, because that was enough to pass the test, but this is not the actual problem here, so please bear with me and suppose this is the only solution one could think of.
Now my company wants me to make sure that it can add 12 and 13, because these are some internal magical numbers we'd be using quite a lot. I go ahead and write another test, because that is how I'm supposed to verify a feature.
public function it_can_add_twelve_and_thirteen()
{
    $this->add(12, 13)->shouldReturn(25);
}
It turns out the test is already passing.
At this point I could choose not to write the test, but what if at a later time someone changes the actual code to:
public function add($numberOne, $numberTwo)
{
    return 5;
}
The test will still pass, but the feature is gone.
So what about those situations where you cannot immediately think of a possible flaw to guard against in the first iteration, before making improvements? Should I leave it as is and wait for someone to come along and screw it up? Should I leave this case for regression tests?
To be true to rule number 3 of Uncle Bob's rules, after writing the test:
public function it_can_add_numbers()
{
    $this->add(2, 3)->shouldReturn(5);
}
The correct code would be:
public function add($numberOne, $numberTwo)
{
    return 5;
}
Now, when you add the second test, it will fail, which will make you change the code to comply with both tests, and then refactor it to be DRY, resulting in:
public function add($numberOne, $numberTwo)
{
    return $numberOne + $numberTwo;
}
I won't say that this is the "only true way" to code, and @Oli and @Leo have a point that you should not stop thinking like a programmer because you are TDDing, but the above process is an example of following the 3 rules of TDD you stated...
If you write a passing test, you are not doing TDD.
That is not to say that the test has no value, but it says very clearly that your new test is not "driving design" (or "driving development", the "other" DD). You may need the new test for regression or to satisfy management or to bring up some code coverage metric, but you don't need it to drive your design or your development.
You did violate Uncle Bob's third rule, insofar as you wrote more logic than was required to pass the test you had.
It's OK to break the rules; it's just not OK to break the rules and say you're not breaking them. If you want to rigidly adhere to TDD as Uncle Bob defines it, you need to follow his rules.
If the feature is really simple, like the addition example, I would not worry about implementing it with just one test. In more complex, real-world situations, implementing the entire solution at once may leave you with poorly structured code because you don't get the chance to evolve the design and refactor the code. TDD is a design technique that helps you write better code and requires you to be constantly THINKING, not just following rules by rote.
TDD is a best practice, not a religion. If your case is trivial/obvious, just write the right code instead of going through some artificial, useless refactoring.

TDD duplication of test data

I'm new to test-driven development, and this is the first time I'm trying to use it in a simple project.
I have a class, and I need to test creation, insertion, and deletion of objects of this class. If I write three separate test functions, I have to duplicate the initialization code in each of them. On the other hand, if I put all the tests in one test function, that contradicts the one-test-per-function idea. What should I do?
Here the situation:
tst_create()
{
    createHead(head);
    createBody(body);
    createFoot(foot);
}
tst_insert()
{
    createHead(head);
    createBody(body);
    createFoot(foot);
    obj_id=insert(obj); //Also I need to delete obj_id somehow in order to preserve the old state
}
tst_delete()
{
    createHead(head);
    createBody(body);
    createFoot(foot);
    obj_id=insert(obj);
    delete(obj_id);
}
vs
tstCreateInsertDelete()
{
    createHead(head);
    createBody(body);
    createFoot(foot);
    obj_id=insert(obj);
    delete(obj_id);
}
Rather than "One test per function", try thinking about it as, "One aspect of behaviour per function".
What does inserting an object give you? How about deleting an object? Why are these valuable? How can you tell you've done them? Write an example of how the code might be used, and why that behaviour is valuable. That then becomes your test.
When you've worked out what the behaviour is that you're interested in, extract out the duplication only if it makes the test more readable. TDD isn't just about testing; it's also about providing documentation, and helping you think about the responsibility of each element of code and the design of that code. The tests will probably be read far more than they're written, so readability has to come first.
If necessary, put all the behaviour you're interested in in one method, and just make sure it's readable. You can add comments if required.
Factor out the duplication in your tests.
Depending on your test framework, there may be support for defining a setup method that's called before each test execution and a teardown method that's called after each test.
Regardless, you can extract the common stuff so that you only have to repeat a call to a single shared setup.
If you tell us what language and test framework you use, we might be able to give more specific advice.
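For instance, with NUnit the shared construction moves into a [SetUp] method that runs before every test. A hedged C# sketch, where Obj and Store are hypothetical stand-ins for the pseudo-code above:

using System.Collections.Generic;
using NUnit.Framework;

// Hypothetical stand-ins for the question's obj / insert / delete.
public class Obj
{
    public string Head, Body, Foot;
}

public static class Store
{
    private static readonly Dictionary<int, Obj> rows = new Dictionary<int, Obj>();
    private static int nextId = 1;

    public static int Insert(Obj o) { rows[nextId] = o; return nextId++; }
    public static void Delete(int id) { rows.Remove(id); }
    public static bool Contains(int id) { return rows.ContainsKey(id); }
}

[TestFixture]
public class ObjectLifecycleTests
{
    private Obj obj;

    [SetUp]
    public void CreateFreshObject()
    {
        // Runs before every test: the duplicated construction lives here, once.
        obj = new Obj { Head = "head", Body = "body", Foot = "foot" };
    }

    [Test]
    public void Insert_stores_the_object()
    {
        int id = Store.Insert(obj);
        Assert.IsTrue(Store.Contains(id));
        Store.Delete(id);   // restore the old state, as the question notes
    }

    [Test]
    public void Delete_removes_the_object()
    {
        int id = Store.Insert(obj);
        Store.Delete(id);
        Assert.IsFalse(Store.Contains(id));
    }
}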

Unit testing of extremely trivial methods (yes or no)

Suppose you have a method:
public void Save(Entity data)
{
    this.repositoryIocInstance.EntitySave(data);
}
Would you write a unit test at all?
public void TestSave()
{
    // arrange
    Mock<EntityRepository> repo = new Mock<EntityRepository>();
    repo.Setup(m => m.EntitySave(It.IsAny<Entity>()));
    // act
    MyClass c = new MyClass(repo.Object);
    c.Save(new Entity());
    // assert
    repo.Verify(m => m.EntitySave(It.IsAny<Entity>()), Times.Once());
}
Because later on, if you change the method's implementation to do more "complex" stuff, like:
public void Save(Entity data)
{
    if (this.repositoryIocInstance.Exists(data))
    {
        this.repositoryIocInstance.Update(data);
    }
    else
    {
        this.repositoryIocInstance.Create(data);
    }
}
...your unit test would fail but it probably wouldn't break your application...
Question
Should I even bother creating unit tests for methods that don't have any return type, or don't change anything outside of an internal mock?
Don't forget that unit testing isn't just about testing code. It's about letting you detect when behaviour changes.
So you may have something that's trivial. However, the implementation changes and you may get a side effect. You want your regression test suite to tell you.
e.g. people often say you shouldn't test setters/getters since they're trivial. I disagree: not because they're complicated methods, but because someone may inadvertently change them through ignorance, fat-finger scenarios, etc.
Given all that I've just said, I would definitely implement tests for the above (via mocking; perhaps it's also worth designing your classes with testability in mind and having them report status, etc.)
It's true that your test depends on your implementation, which is something you should avoid (though it is not always that simple...) and is not necessarily bad. But these kinds of tests are expected to break even when your change doesn't break the code.
You could take several approaches here:
- Create a test that really goes to the database and checks that the state was changed as expected (it won't be a unit test anymore).
- Create a test object that fakes the database and does its operations in-memory (another implementation of your repositoryIocInstance), and verify the state was changed as expected; see the sketch after this list. Changes to the repository interface would incur changes to this object as well. But your interfaces shouldn't be changing much, right?
- Treat all of this as too expensive, and use your approach, which may lead to unnecessarily breaking tests later (but if the chance is low, it is OK to take the risk).
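A hedged sketch of the second approach (C#; the interface members are inferred from the two versions of Save in the question, and InMemoryEntityRepository is a made-up name). The test asserts on the fake's resulting state, so it survives internal changes to Save:

using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

// Assumed shapes, inferred from the question's two versions of Save.
public class Entity
{
    public int Id;
}

public interface IEntityRepository
{
    void EntitySave(Entity data);
    bool Exists(Entity data);
    void Create(Entity data);
    void Update(Entity data);
}

// The fake: a real implementation that keeps its rows in memory.
public class InMemoryEntityRepository : IEntityRepository
{
    public readonly List<Entity> Rows = new List<Entity>();

    public bool Exists(Entity data) { return Rows.Any(e => e.Id == data.Id); }
    public void Create(Entity data) { Rows.Add(data); }
    public void Update(Entity data) { Rows[Rows.FindIndex(e => e.Id == data.Id)] = data; }
    public void EntitySave(Entity data) { if (Exists(data)) Update(data); else Create(data); }
}

[TestFixture]
public class SaveStateTests
{
    [Test]
    public void Save_leaves_the_entity_in_the_repository()
    {
        var repo = new InMemoryEntityRepository();
        var c = new MyClass(repo);   // MyClass as in the question's test

        c.Save(new Entity { Id = 1 });

        // Assert on the resulting state, not on which repository method was
        // called: this survives the switch from EntitySave to Exists/Update/Create.
        Assert.IsTrue(repo.Rows.Any(e => e.Id == 1));
    }
}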
Ask yourself two questions. "What is the manual equivalent of this unit test?" and "is it worth automating?". In your case it would be something like:
What is manual equivalent?
- start debugger
- step into "Save" method
- step into next, make sure you're inside IRepository.EntitySave implementation
Is it worth automating? My answer is "no". It is 100% obvious from the code.
Out of hundreds of similar wasteful tests, I haven't seen a single one that turned out to be useful.
The general rule of thumb is that you test everything that could plausibly break. If you are sure the method is simple enough (and will stay simple enough) not to be a problem, then leave it out of testing.
The second thing is that you should test the contract of the method, not the implementation. If the test fails after a change but the application doesn't break, then your test is testing the wrong thing. The test should cover the cases that are important for your application. That ensures that any change to the method that doesn't break the application also doesn't fail the test.
A method that does not return any result still changes the state of your application. Your unit test, in this case, should be testing whether the new state is as intended.
"your unit test would fail but it probably wouldn't break your application"
This is -- actually -- really important to know. It may seem annoying and trivial, but when someone else starts maintaining your code, they may make a really bad change to Save and (however improbably) break the application.
The trick is to prioritize.
Test the important stuff first. When things are slow, add tests for trivial stuff.
When there isn't an assertion in a method, you are essentially asserting that exceptions aren't thrown.
I'm also struggling with the question of how to test public void myMethod(). I guess if you do decide to add a return value for testability, the return value should represent all salient facts necessary to see what changed about the state of the application.
public void myMethod()
becomes
public ComplexObject myMethod()
{
    DoLotsOfSideEffects();
    // Return everything a test needs to see what changed:
    // rows changed, primary key, value of each column, etc.
    return new ComplexObject(rowsChanged, primaryKey, columnValues);
}
and not
public bool myMethod()
{
    DoLotsOfSideEffects();
    return true;
}
The short answer to your question is: Yes, you should definitely test methods like that.
I assume that it is important that the Save method actually saves the data. If you don't write a unit test for this, then how do you know?
Someone else may come along and remove that line of code that invokes the EntitySave method, and none of the unit tests will fail. Later on, you are wondering why items are never persisted...
In the case of your method, you could say that anyone deleting that line would only do so with malign intent, but the thing is: simple things don't necessarily stay simple, and you had better write the unit tests before things get complicated.
It is not an implementation detail that the Save method invokes EntitySave on the Repository - it is part of the expected behavior, and a pretty crucial part, if I may say so. You want to make sure that data is actually being saved.
Just because a method does not return a value doesn't mean that it isn't worth testing. In general, if you observe good Command/Query Separation (CQS), any void method should be expected to change the state of something.
Sometimes that something is the class itself, but other times, it may be the state of something else. In this case, it changes the state of the Repository, and that is what you should be testing.
This is called testing Indirect Outputs, as opposed to the more usual direct outputs (return values).
The trick is to write unit tests so that they don't break too often. When using mocks, it is easy to accidentally write Overspecified Tests, which is why most dynamic mock libraries (like Moq) default to Stub mode, where it doesn't really matter how many times you invoke a given method.
All this, and much more, is explained in the excellent xUnit Test Patterns.
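As a hedged illustration of that point about overspecification, reusing the names from the question's own test (Moq/NUnit; the looser Times.AtLeastOnce keeps the essential expectation without pinning the exact interaction):

using Moq;
using NUnit.Framework;

[TestFixture]
public class SaveInteractionTests
{
    [Test]
    public void Save_hands_the_entity_to_the_repository()
    {
        // Loose mode (Moq's default): calls that weren't set up are simply ignored.
        var repo = new Mock<EntityRepository>();
        var c = new MyClass(repo.Object);
        var entity = new Entity();

        c.Save(entity);

        // Verify only the essential indirect output (the entity reached the
        // repository) rather than pinning the exact call count and arguments.
        repo.Verify(m => m.EntitySave(entity), Times.AtLeastOnce());
    }
}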

How to use TDD when the fix involves changing the method under test's signature?

I'm trying to get my head around TDD methodology and have run into what I think is a chicken-and-egg problem: what to do when a bug fix involves changing a method's signature.
Consider the following method signature:
string RemoveTokenFromString (string delimited, string token)
As the name suggests, this method removes all instances of a token from delimited and returns the resultant string.
I find later that this method has a bug (e.g. the wrong bits are being removed from the string). So I write a test case describing the scenario where the bug occurs and make sure that the test fails.
When fixing the bug, I find that the method needs more information to be able to do its job properly - and this bit of information can only be sent in as a parameter (the method under test is part of a static class).
What do I do then? If I fix the bug, this compels me to change the unit test - would that be 'correct' TDD methodology?
You have fallen into the most dangerous trap in TDD: you think TDD is about testing, but it isn't. However, it is easy to fall into that trap, since all the terminology in TDD is about testing. This is why BDD was invented: it is essentially TDD, but without the confusing terminology.
In TDD, tests aren't really tests, they are examples. And assertions aren't really assertions, they are expectations. And you aren't dealing with units, you are dealing with behaviors. BDD just calls them that. (Note: BDD has evolved since it was first invented, and it now incorporates things that aren't part of TDD, but the original intention was just "many people do TDD wrong, so use different words to help them do it right".)
Anyway, if you think of a test not as a test, but a behavioral example of how the method should work, it should become obvious that as you develop a better understanding of the expected behavior, deleting or changing the test is not only allowed by TDD, it is the only correct choice! Always keep that in mind!
There is absolutely nothing wrong with scrapping your tests when you discover that the intended behaviour of the unit has changed.
//Up front
[Test]
public void should_remove_correct_token_from_string()
{
    var text = "do.it.correctly..";
    var expected = "doitcorrectly";
    Assert.AreEqual(expected, StaticClass.RemoveTokenFromString(text, "."));
}

//After finding that it doesn't do the right thing:
//delete the old test and *design* a new function that
//does what you want, through a new test.
//Remember, TDD is about design, not testing!
[Test]
public void should_remove_correct_token_from_string()
{
    var text = "do.it.correctly..";
    var expected = "doitcorrectly";
    Assert.AreEqual(
        expected,
        StaticClass.RemoveTokenFromString(
            text,
            ".",
            System.Text.Encoding.UTF8));
}
//This will force you to add a new parameter to your function.
//Obviously there are now edge cases to deal with for your new parameter, etc.,
//so more tests are required to further design your new function.
Keep it simple.
If your unit test is wrong, or obsolete, you have to rewrite it. If your specs change, or certain specs are no longer relevant, your unit tests have to reflect that.
Red, green, refactor also applies to your unit tests, not just the code you are testing.
There is a refactoring called Add Parameter that could help here.
If your language supports method overloading, you can create the new function with the new parameter first, copying the body of the existing function and fixing your problem there.
Then, once the problem is fixed, you can modify all the tests, one by one, to call the new method. Last, you can delete the old method.
In a language that does not support method overloading: create a new function with a different name, copy the body of the existing function into the new function, and have the existing function call the new one, possibly with a dummy value for the new parameter. Then all your tests still pass. Make the old tests call the new function, one by one. When the old function is no longer used, delete it and rename the new function.
This is a bit process-heavy, but I think it is the TDD way to follow red-green-refactor.
A default value for the new parameter could also help, where those are available.
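A hedged C# sketch of that overload dance, using RemoveTokenFromString from this question (the Encoding parameter stands in for the extra information the fix needs, as in the earlier answer's test; the body shown is a placeholder, not the real fix):

using System.Text;

public static class StaticClass
{
    // Step 1: a new overload carries the extra parameter; the real fix lives here.
    public static string RemoveTokenFromString(string delimited, string token, Encoding encoding)
    {
        // ...corrected implementation that actually uses `encoding`...
        return delimited.Replace(token, string.Empty);   // placeholder body
    }

    // Step 2: the old signature delegates with a default, so existing callers
    // and tests keep compiling while you migrate them one by one.
    // Step 3 (later): once nothing calls it, delete this overload.
    public static string RemoveTokenFromString(string delimited, string token)
    {
        return RemoveTokenFromString(delimited, token, Encoding.UTF8);
    }
}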
Red, Green, Refactor.
Whatever you do, you first want to get to a state where you have a compiling but failing test case that reproduces the bug. You can then proceed by adding just the parameter to the test and the implementation, but doing nothing with it, so you still have Red.
I'd say don't fret about the 'right'/'correct' way... do whatever helps get you closer to the solution quicker.
If you find that you need to take in an extra parameter:
- update the call in the test case
- add the new parameter to the actual method
- verify that your code builds and the test fails again
- proceed with making it green
Only in cases where adding a new parameter would result in zillions of compile errors would I recommend taking it in baby steps... you don't want to update the whole source base before finding out you really didn't need the third param, or that you need a fourth one... time lost. So get the new version of the method 'working' before updating all references (as philippe says here):
- write a new overload with the added parameter
- move the code of the old method into the new overload
- make the old overload relay or delegate to the new overload with some default value for the new param
Now you can get back to the task at hand and get the new test to go green.
If you don't need the old overload anymore, delete it and fix the resulting compile errors.
If a method is not doing its job correctly then it needs to be fixed, and if the fix requires a change in signature, there is nothing wrong with that. Per TDD, you write the test case first, which will certainly fail, and then you write the method to satisfy the test. Under this approach, if the method call in the test needs a new parameter in order to function, then that is what you add.