Is it okay to rewrite unit tests? - unit-testing

I am still learning OOP, and every day I discover something new to me. When it comes to writing unit tests, it seems common to have the function names, and the program design in general, already defined: "test this factory or that dependency container to see if it works as expected", for example.
Being a learner, I am pretty sure I'd want to change a lot of things over time, from function names to the structure of the code to what the functions themselves do. Obviously, this would mean rewriting the tests so they pass again. Have you faced this problem? A few things I have read make it sound like it is taboo to touch tests once they are written, so how do you deal with this?

it is taboo to touch tests once written
This is total nonsense, of course. Time passes, things change, code evolves, tests need to be touched. Feel free to amend and rewrite tests, but be careful not to accidentally lose coverage in the process (i.e. when the rewritten version no longer tests cases that the previous version did).
For me this problem simply doesn't exist.

As @Sergio said, of course you have to change tests if the class under test changes.
Just to note about changing tests in general: don't forget to make sure that the new test actually fails if the class under test is wrong. When you're doing new code and writing tests first, you get to see the test fail before you implement the new functionality and the test passes (TDD's "red/green" rhythm). When you're changing tests, you need to make sure you didn't just make a test that always passes.
For your question about changing things about the class under test (names, behaviour), you can easily do this in a test-first manner too:
Change the test to reflect the changes you would like to make to the class under test
Run the test to verify that it fails (or possibly won't compile, if name changes) [Red]
Update the class and see the test pass [Green]
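A minimal sketch of those three steps, using hypothetical names (the Order class and the rename from calc() to calculateTotal() are made up purely for illustration):

// OrderTest.java - steps 1 and 2 (Red): update the test to the new name first,
// so it fails (or no longer compiles) until the class catches up.
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class OrderTest {
    @Test
    public void calculateTotalSumsAllLineItems() {
        Order order = new Order();
        order.addLineItem(10);
        order.addLineItem(5);
        assertEquals(15, order.calculateTotal()); // was order.calc()
    }
}

// Order.java - step 3 (Green): rename the method and the test passes again.
import java.util.ArrayList;
import java.util.List;

public class Order {
    private final List<Integer> lineItems = new ArrayList<>();

    public void addLineItem(int amount) {
        lineItems.add(amount);
    }

    public int calculateTotal() { // renamed from calc()
        int total = 0;
        for (int item : lineItems) {
            total += item;
        }
        return total;
    }
}

Seeing the red step first proves the updated test can still fail, which addresses the concern above about accidentally creating a test that always passes.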

An important thing to remember is that test code is just as important as production code; if it isn't, why bother with it at all? With this in mind, it's just as important to maintain and refactor your tests as it is to maintain and refactor your production code.
That said, I wouldn't recommend rewriting a test just because your knowledge has moved forward and you don't like the way you did it initially. If a test is testing something worthwhile, is understandable, and always passes, I'd leave it alone.

I think this depends on your program. If you have a really huge program, you should not rewrite your tests if you are not sure whether other functions use the functions under test. If you are still in development, you can change them as long as you are sure it will cause no trouble.

Related

Adding tests while refactoring in test driven development

Let's say that I am refactoring some classes that have unit tests already written. Let's assume that the test coverage is reasonable in the sense that it covers most use cases.
While refactoring I change some implementations: move some variables, add or remove a few, abstract things out into functions, and so on. The API of the classes and their functions has remained the same, though.
Should I be adding tests when I am refactoring these classes? Or should I add a new test for every bit of refactoring? This is what I usually do when I am building up code rather than refactoring.
PS: Apologies if this is really vague.
Usually, unit tests are working/design/use-case specifications of how your refactored system under test or class under test (e.g. your classes) should really work. With that stated, I would simply go as follows:
Write the test according to your specification
Refactor the code to adhere to the specification
Check the assertion result of the test
In practice I have come to the conclusion that you don't need to test each and every line of your code just for the sake of a high code-coverage percentage, but make sure you always test the parts of your code where the behaviour or logic lives.
If your changes are verified by current tests, there's no need to add new ones. Check your code coverage. If there are holes in your new code, that means you made an unverified functional change.
New tests might be valuable if a newly extracted class is moved to another project, where you don't have the original tests.
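To make that concrete, here is a minimal sketch (PriceCalculator and Discount are hypothetical names): a test that asserts behaviour rather than implementation stays green whether the discount logic is inline or extracted, so the refactoring itself needs no new tests.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class PriceCalculatorTest {
    // Asserts observable behaviour only; it does not care whether the
    // discount logic lives inline or in an extracted helper class.
    @Test
    public void appliesTenPercentDiscountAboveThreshold() {
        PriceCalculator calculator = new PriceCalculator();
        assertEquals(90.0, calculator.finalPrice(100.0), 0.001);
    }
}

// Before the refactoring the discount logic was inline in finalPrice();
// afterwards it delegates to an extracted helper. The test passes either way.
class PriceCalculator {
    double finalPrice(double amount) {
        return new Discount().apply(amount);
    }
}

class Discount {
    double apply(double amount) {
        return amount > 50.0 ? amount * 0.9 : amount;
    }
}

Only if Discount starts being used elsewhere, or moves to another project as mentioned above, would it deserve direct tests of its own.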

How do I ensure that I don't break the test code when I refactor it?

Code evolves, and as it does, it also decays if not pruned, a bit like a garden in that respect. Pruning here means refactoring, so that the code keeps fulfilling its evolving purpose.
Refactoring is much safer if we have a good unit test coverage.
Test-driven development forces us to write the test code first, before the production code. Hence, we can't test the implementation, because there isn't any. This makes it much easier to refactor the production code.
The TDD cycle is something like this: write a test, test fails, write production code until the test succeeds, refactor the code.
But from what I've seen, people refactor the production code, but not the test code. As test code decays, the production code will go stale and then everything goes downhill. Therefore, I think it is necessary to refactor test code.
Here's the problem: How do you ensure that you don't break the test code when you refactor it?
(I've done one approach, https://thecomsci.wordpress.com/2011/12/19/double-dabble/, but I think there might be a better way.)
Apparently there's a book, http://www.amazon.com/dp/0131495054, which I haven't read yet.
There's also a Wiki page about this, http://c2.com/cgi/wiki?RefactoringTestCode, which doesn't have a solution.
Refactoring your tests is a two-step process. Simply stated: first, use your application under test to ensure that the tests keep passing while you refactor them. Then, after your refactored tests are green, you must ensure that they can still fail. Doing this properly requires some specific steps.
In order to properly test your refactored tests, you must change the application under test in a way that causes the test to fail. Only that test condition should fail; that way you ensure the test fails properly in addition to passing. You should strive for a single test failure, though that will not always be possible (i.e. outside unit tests). If you are refactoring correctly, there will be a single failure in the refactored tests, and any other failures will occur in tests unrelated to the current refactoring. Understanding your codebase is required to properly identify cascading failures of this kind, and such failures only apply to tests other than unit tests.
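A minimal sketch of that second step, using hypothetical names (Account, deposit): temporarily break the production code on purpose and confirm that the refactored test, and only that test, goes red before reverting the change.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Production class with a deliberate, temporary break used only to prove
// that the refactored test below can still fail when the behaviour is wrong.
class Account {
    private int balance;

    void deposit(int amount) {
        // balance += amount;      // original, correct line
        balance += amount + 1;     // broken on purpose; revert after the check
    }

    int balance() {
        return balance;
    }
}

public class AccountTest {
    // The refactored test: it must fail against the broken version above,
    // and pass again once the deliberate break is reverted.
    @Test
    public void depositIncreasesBalanceByTheDepositedAmount() {
        Account account = new Account();
        account.deposit(100);
        assertEquals(100, account.balance());
    }
}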
I think you should not change your test code.
Why?
In TDD, you define an interface for a class.
This interface contains methods that are defined with a certain set of functionality: the requirements/design.
First: These requirements do not change while refactoring your production code. Refactoring means: changing/cleaning the code without changing the functionality.
Second: The test checks a certain set of functionality, this set stays the same.
Conclusion: Refactoring test and refactoring your production code are two different things.
Tip: when writing your tests, write clean code. Make small tests, each of which really tests one piece of the functionality.
But "Your design changes because of unforeseen changes to the requirements". This may lead or may not lead to changes in the interface.
When your requirements change, your tests must change. This is not avoidable.
You have to keep in mind that this is a new TDD cycle. First test the new functionality and remove the old functionality tests. Then implement the new design.
To make this work properly, you need clean and small tests.
Example:
MethodOne does: changeA and changeB
Don't put this in 1 unit test, but make a test class with 2 unit tests.
Both execute MethodOne, but they check for different results (changeA and changeB).
When the specification of changeA changes, you only need to rewrite one test method.
When MethodOne gets a new specification changeC: Add a unit test.
With the above example your tests will be more agile and easier to change.
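A minimal sketch of that split (the Subject class and its state accessors are hypothetical, purely for illustration):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical class under test: methodOne() performs both changeA and changeB.
class Subject {
    private String a = "";
    private String b = "";

    void methodOne() {
        a = "changedA"; // changeA
        b = "changedB"; // changeB
    }

    String stateA() { return a; }

    String stateB() { return b; }
}

public class MethodOneTest {

    @Test
    public void methodOnePerformsChangeA() {
        Subject subject = new Subject();
        subject.methodOne();
        assertEquals("changedA", subject.stateA()); // checks changeA only
    }

    @Test
    public void methodOnePerformsChangeB() {
        Subject subject = new Subject();
        subject.methodOne();
        assertEquals("changedB", subject.stateB()); // checks changeB only
    }
}

If changeA's specification changes, only the first test method is rewritten; a new changeC would simply get a third test method.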
Summary:
Don't refactor your tests when refactoring your production code.
Write clean and agile tests.
Hope this helps.
Good luck with it.
Disclaimer: I do not want your money if this makes you rich.
How do you ensure that you don't break the test code when you refactor it?
Rerunning the tests should suffice in most cases.
There are some other strategies described here but they might be overkill compared to the few benefits you get.
Um.
FOR JAVA SOLUTION! I don't know what language you're programming in!
Ok, I just read "Clean Code" by one of the Martins, a book which argues that refactoring test code to keep it clean and readable is a fine idea, and indeed a goal. So the ambition to refactor and keep test code clean is good, not a silly idea like I first thought.
But that's not what you asked, so let's take a shot at answering!
I'd keep a db of your tests - or the last test result, anyway.
With a bit of java annotating, you can do something like this:
@SuperTestingFrameworkCapable
public class MyFancyTest {

    @TestEntry
    @Test
    public void testXEqualsYAfterConstructors() {
        @TestElement
        SomeObjectType X = new SomeObjectType(); // create my object X
        @TestElement
        SomeObjectType Y = new SomeObjectType(); // create my object Y
        @TheTest
        assertTrue(X.equals(Y));
    }
}
ANYWAY, you'd also need a reflection- and annotation-processing super class that would inspect this code. It could just be an extra step in your processing: write tests, pass them through this super processor, and then, if it passes, run the tests.
And your super processor is going to use a schema
MyFancyTest
And for each member you have in your class, it will use a new table - here the (only) table would be testXEqualsYAfterConstructors
And that table would have columns for each item marked with the @TestElement annotation. And it would also have a column for @TheTest.
I suppose you'd just call the columns TestElement1, TestElement2, etc.
And THEN, once it had set all this up, it would just save the variable names and the line annotated @TheTest.
So the table would be
testXEqualsYAfterConstructors
TestElement1 | TestElement2 | TheTest
SomeObjectType X | SomeObjectType Y | assertTrue(X.equals(Y));
So, if the super processor goes and finds the tables already exist, then it can compare what is already there with what is now in the code, and it can raise an alert for each differing entry. And you can create a new user - an Admin - who can get the changes, check over them, crucible style, and approve them or not.
And then you can market this solution for this problem, sell your company for 100M and give me 20%
cheers!
Slow day, here's the rationale:
Your solution uses a lot of extra overhead, most damagingly in the actual production code. Your prod code shouldn't be tied to your test code, ever, and it certainly shouldn't have random variables that are test-specific in it.
The next suggestion I have about the code you put up is that your framework doesn't stop people breaking tests. After all, you can have this:
@Test
public void equalsIfSameObject() {
    Person expected = createPerson();
    Person actual = expected;

    check(Person.FEATURE_EQUAL_IF_SAME_OBJECT);

    boolean isEqual = actual.equals(expected);
    assertThat(isEqual).isTrue();
}
But if I change the last two lines of code in some "refactoring" of test classes, then your framework is going to report a success, but the test won't do anything. You really need to ensure that an alert is raised and people can look at the "difference".
Then again, you might just want to use svn or perforce and crucible to compare and check this stuff!
Also, seeing as you're keen on a New Idea, you'll want to read about local annotations: http://stackoverflow.com/questions/3285652/how-can-i-create-an-annotation-processor-that-processes-a-local-variable
Um, so you might need to get that guy's custom Java compiler too (see the last comment in the link above).
Disclaimer: if you create a new company with code that pretty much follows the above, I reserve the right to 20% of the company if and when you're worth more than 30M, at a time of my choosing.
About two months ago your question was one of my main questions about refactoring. Just let me explain my experience:
When you want to refactor a method, you should cover it with unit tests (or any other tests) to be sure you are not breaking anything during the refactoring. (In my case the team knew the code worked well because they had been using it for 6 years; they just needed to improve it, so all of my unit tests passed in the first step.) So in the first step you have some passing unit tests that cover all the scenarios. If some of the unit tests fail, you should first fix the problem to be sure the method works correctly.
After all tests pass, you refactor the method and run your tests again to be sure everything is still right. Any changes in the test code?
You should write tests that are independent of the internal structure of the method. After refactoring you should only have to change a small part of the test code, and most of the time no changes are required at all, because refactoring just improves the structure and doesn't change the behavior. If your test code needs to change a lot, you never know whether you've broken something in the main code during the refactoring or not.
And the most important thing for me to remember: in every test, only one behavior should be considered.
I hope I have explained this well.

Is it useless to write unit tests after writing the code?

I finished an app, and after that I'm trying to write unit tests to cover all of its methods.
The thing is, I'm seeing that I'm testing my code with the knowledge of how it works.
To me this feels a bit pointless, because I know how the code works, so I'm only testing what my code actually does.
My question is:
Is this useless? I'm testing what the code does, not what it is supposed to do. My code works, but I could improve it. So:
Should I complete all my tests and then try to refactor my code, changing my tests to describe how the code should work, and making changes to the app so it passes those tests?
Thank you.
You need to test "how the code should work". That is, you have a clear idea of how a particular method should behave, so create the set of tests that covers that behaviour. If your code fails the tests, you can fix it.
Even though your code is working now, you still need the tests for regression testing. Later, when you modify your code or add new extensions to it, you need to ensure that you did not break the existing functionality. The set of tests you derive today will be able to tell you how well you did.
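As a small sketch of such a regression test (ShippingCost and its rates are hypothetical): the tests pin down today's observable behaviour so that tomorrow's changes cannot silently alter it.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical, already-working production code being pinned down after the fact.
class ShippingCost {
    double costFor(String country, double kilograms) {
        double base = "DE".equals(country) ? 5.0 : 12.5;
        return kilograms <= 1.0 ? base : base + 2.0;
    }
}

public class ShippingCostRegressionTest {

    @Test
    public void domesticParcelUnderOneKiloCostsTheBaseRate() {
        assertEquals(5.0, new ShippingCost().costFor("DE", 0.8), 0.001);
    }

    @Test
    public void internationalParcelCostsTheInternationalBaseRate() {
        assertEquals(12.5, new ShippingCost().costFor("US", 0.8), 0.001);
    }

    @Test
    public void heavyParcelsPayASurcharge() {
        assertEquals(7.0, new ShippingCost().costFor("DE", 2.5), 0.001);
    }
}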
While I don't always do Test-Driven Development (writing tests before implementing the class), I do always write tests after I implement the code, even some days later. At least I try to follow a path-coverage strategy (that is, covering every flow through the method from its start up to where it returns), as well as unexpected parameter values. This is useful for building confidence in the correct behaviour of your functions, especially when you change the code later.
Almost always I find unexpected behaviors or little bugs :) So it works.
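A minimal sketch of what that path-coverage strategy can look like (SafeDivider and its divide-by-zero rule are hypothetical): one test per flow through the method, plus one for an unexpected parameter value.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical method with two normal paths and one error path.
class SafeDivider {
    Integer divide(Integer a, Integer b) {
        if (a == null || b == null) {
            throw new IllegalArgumentException("arguments must not be null");
        }
        if (b == 0) {
            return 0; // business rule in this sketch: treat division by zero as 0
        }
        return a / b;
    }
}

public class SafeDividerTest {

    @Test
    public void dividesNormally() {
        assertEquals(Integer.valueOf(4), new SafeDivider().divide(8, 2));
    }

    @Test
    public void returnsZeroWhenDividingByZero() {
        assertEquals(Integer.valueOf(0), new SafeDivider().divide(8, 0));
    }

    @Test(expected = IllegalArgumentException.class)
    public void rejectsNullArguments() {
        new SafeDivider().divide(null, 2);
    }
}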
It will be extremely useful if you ever change the code.
Pretty useless, I'd say. It is called "test driven" for a reason. Every line of code is motivated by a test.
That means the code line is also protected against changes.
Test driven development requires a minimalistic approach and tons of discipline. Add a test to extend the functionality a little. Implement the functionality so it just makes the light green but nothing more. Make foolish implementations until the tests force you to make serious ones.
I have tried to add tests to existing code and found it difficult. The tendency is that only some of the functionality becomes tested. This is dangerous, since the test suite will make people think the code is protected. One method is to go through the code line by line and ensure there is a test for it. Make foolish changes to the code until a test forces you back to the original version. In the end, any change to the code should break the tests.
However, there is no way you can derive the requirements from the code. No test suite that is the result of reverse-engineering will be complete.
You should at least try to build the tests around how the code is supposed to work. Therefore it is better to build the tests in advance, but it is still not useless to build them now.
The reason: you don't build the tests to test your current code; you build them to test future modifications. Unit tests are especially useful for checking that a modification didn't break earlier code.
Better late than never!
Of course, the best option is to do the unit tests before you start implementing so that you can profit from them even more, but having a good set of unit tests will prevent you from introducing many bugs when you refactor or implement new features, regardless of whether you implemented the unit tests before or after the implementation.

Unit-Testing Acceptance-Tests?

I am currently writing some acceptance tests that will help drive the design of a program I am about to write. Everything seems fine, except I've realized that the acceptance tests are kind of complex; that is, although they are conceptually simple, they require quite a bit of tricky code to run. I'll need to make a couple of "helper" classes for my acceptance tests.
My question is about how to develop them:
Make unit tests of my acceptance tests (this seems odd -- has anyone done anything like it?)
Make unit tests for those helper classes. After I have finished the code of those helper classes, I can move on and start working on the real unit tests of my system. When using this approach, where would you put the helper classes? In the test project or in the real project? They don't necessarily have dependencies on testing/mocking frameworks.
Any other idea?
A friend is very keen on the notion that acceptance tests tell you whether your code is broken, while unit tests tell you where it's broken; these are complementary and valuable bits of information. Acceptance tests are better at letting you know when you're done.
But to get done, you need all the pieces along the way; you need them to be working, and that's what unit tests are great at. Done test-first, they'll also lead you to better design (not just code that works). A good approach is to write a big-picture acceptance test, and say to yourself: "when this is passing, I'm done." Then work to make it pass, working TDD: write a small unit test for the next little bit of functionality you need to make the AT pass; write the code to make it pass; refactor; repeat. As you progress, run the AT from time to time; you will probably find it failing later and later in the test. And, as mentioned above, when it's passing, you're done.
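A minimal sketch of that outer/inner rhythm (Shop, Basket, and the checkout feature are hypothetical; each class would normally live in its own file):

// Shop.java - production sketch, grown unit-test by unit-test
import java.util.ArrayList;
import java.util.List;

class Basket {
    private final List<String> lines = new ArrayList<>();

    void add(String item) { lines.add(item); }

    int lineCount() { return lines.size(); }
}

public class Shop {
    private final Basket basket = new Basket();

    public void addToBasket(String item, int quantity) {
        for (int i = 0; i < quantity; i++) {
            basket.add(item);
        }
    }

    public int receiptLineCount() {
        return basket.lineCount();
    }
}

// CheckoutAcceptanceTest.java - the big-picture test: "when this passes, I'm done"
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class CheckoutAcceptanceTest {
    @Test
    public void customerChecksOutABasketAndGetsAReceiptLinePerItem() {
        Shop shop = new Shop();
        shop.addToBasket("book", 2);
        assertEquals(2, shop.receiptLineCount());
    }
}

// BasketTest.java - the inner TDD loop: one small test for the next little piece
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class BasketTest {
    @Test
    public void addingAnItemIncreasesTheLineCount() {
        Basket basket = new Basket();
        basket.add("book");
        assertEquals(1, basket.lineCount());
    }
}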
I don't think unit testing the acceptance test itself makes much sense. But unit testing its helper classes - indeed, writing them test-first - is a very good way to go. You're likely to find some methods that you write "just for the test" working their way into the production code - and even if you don't, you still want to know that the code your ATs use is working right.
If your AT is simple enough, the old adage of "the test tests the code, and the code tests the test" is probably sufficient - when you have a failing test, it's either because the code is wrong or because the test is wrong, and it should be easy enough to figure out which. But when the test is complicated, it's good to have its underpinnings well tested, too.
If you think of software development like a car production plant, then the act of writing software is like developing a new car. Each component is tested separately because it's new. It's never been done before. (If it has, you can either get people who've done it before or buy it off the shelf.)
Your build system, which builds your software and also which tests it, is like the conveyor belt - the process which churns out car after car after car. Car manufacturers usually consider how they're going to automate production of new components and test their cars as part of creating new ones, and you can bet that they also test the machines which produce those cars.
So, yes, unit-testing your acceptance tests seems perfectly fine to me, especially if it helps you go faster and keep things easier to change.
There is nothing wrong with using a unit test framework (like JUnit) to write acceptance tests (or integration tests). People don't like calling them 'unit tests' for many reasons; to me the main reason is that integration/acceptance tests won't run every time someone checks in changes (they take too long and/or have no proper environment).
Your helper classes are a rather standard thing; they comprise "test infrastructure code". They don't belong anywhere else but in the test code. It's your choice whether to test them or not, but without them your tests won't be feasible in big systems.
So, your choice is #2 or no tests of tests at all. There is nothing wrong with refactoring infrastructure code to make it more transparent and simple.
Your option 2 is the way I'd do it: Write helper classes test-first - it sounds like you know what they should do.
The tests are tests, and although the helper classes are not strictly tests, they won't be referenced by your main code, just the tests, so they belong with the tests. Perhaps they could be in a separate package/namespace from regular tests.
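A minimal sketch of such a test-first helper (User and UserBuilder are hypothetical; the builder lives with the tests, for example in a support package, and has its own small unit test so the acceptance tests that rely on it can be trusted):

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

// Stand-in for a production type the acceptance tests need to construct a lot.
class User {
    final String name;
    final boolean admin;

    User(String name, boolean admin) {
        this.name = name;
        this.admin = admin;
    }
}

// Test-infrastructure helper, written test-first; referenced only by tests.
class UserBuilder {
    private String name = "default-user";
    private boolean admin = false;

    UserBuilder named(String name) {
        this.name = name;
        return this;
    }

    UserBuilder asAdmin() {
        this.admin = true;
        return this;
    }

    User build() {
        return new User(name, admin);
    }
}

public class UserBuilderTest {
    @Test
    public void buildsAUserWithTheRequestedValues() {
        User user = new UserBuilder().named("alice").asAdmin().build();
        assertEquals("alice", user.name);
        assertTrue(user.admin);
    }
}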

TDD - When introducing a class when refactoring - should that class be unit tested?

Presume you have a class which passes all its current unit tests.
If you were to add or pull out some methods, introduce a new class, and then use composition to incorporate the same functionality, would the new class require testing?
I'm torn on whether or not you should, so any advice would be great.
Edit:
I suppose I should have added that I use DI (Dependency Injection); should I therefore inject the new class as well?
Not in the context of TDD, no, IMHO. The existing tests justify everything about the existence of the class. If you need to add behavior to the class, that would be the time to introduce a test.
That being said, it may make your code and tests clearer to move the tests into a class that relates to the new class you made. That depends very much on the specific case.
EDIT: After your edit, I would say that that makes a good case for moving some existing tests (or a portion of the existing tests). If the class is so decoupled that it requires injection, then it sounds like the existing tests may not be obviously covering it if they stay where they are.
Initially, no, they're not necessary. If you had perfect coverage, extracted the class and did nothing more, you would still have perfect coverage (and those tests would confirm that the extraction was indeed a pure refactoring).
But eventually - and probably soon - yes. The extracted class is likely to be used outside its original context, and you want to constrain its behavior with tests that are specific to the new class, so that changes for a new context don't inadvertently affect behavior for the original caller. Of course the original tests would still reveal this, but good unit tests point directly to the problematic unit, and the original tests are now a step removed.
It's also good to have the new tests as executable documentation for the newly-extracted class.
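As a small sketch of that progression (Invoice and TaxCalculator are hypothetical names; each class would normally sit in its own file): the original tests stay green through the extraction, and the new, direct tests pinpoint failures and document the extracted class.

// Invoice.java - TaxCalculator was extracted from Invoice during refactoring
class TaxCalculator {
    double taxOn(double net) {
        return net * 0.2;
    }
}

public class Invoice {
    private final TaxCalculator taxCalculator = new TaxCalculator();

    public double total(double net) {
        return net + taxCalculator.taxOn(net);
    }
}

// InvoiceTest.java - the original test still passes unchanged after the extraction
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class InvoiceTest {
    @Test
    public void totalIncludesTwentyPercentTax() {
        assertEquals(120.0, new Invoice().total(100.0), 0.001);
    }
}

// TaxCalculatorTest.java - tests aimed directly at the extracted unit
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class TaxCalculatorTest {
    @Test
    public void chargesTwentyPercentOfTheNetAmount() {
        assertEquals(20.0, new TaxCalculator().taxOn(100.0), 0.001);
    }
}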
Well, yes and no.
If I understand correctly, you have written tests, and wrote production code that makes the tests pass - i.e. the simplest thing that works.
Now you are in the refactoring phase. You want to extract code from one class and put it in a class of its own, probably to keep up with the Single Responsibility Principle (or SRP).
You may make the refactoring without adding tests, since your tests are there precisely to allow you to refactor without fear. Remember: refactoring means changing the code without modifying the functionality.
However, it is quite likely that refactoring the code will break your tests. This is most likely caused by fragile tests that test behavior rather than state - i.e. you mocked the methods you ported out.
On the other hand, if your tests are primarily state-driven (i.e. you assert results and ignore the implementation), then your new service component (the block of code you extracted to a new class) will not be tested directly. If you use some form of code coverage tool, you'll find out. If that is the case, you may wish to test that it works - may, because 100% code coverage is neither desirable nor feasible. If possible, I'd try to add tests for that service.
In the end, it may very well boil down to a judgment call.
I would say no. It is already being tested by the tests run on the old class.
As others have said, it's probably not entirely needed right away, since all the same stuff is still under test. But once you start making changes to either of those two classes individually, you should separate the tests.
Of course, the tests shouldn't be too hard to write; since you have the stuff being tested already, it should be fairly trivial to break out the various bits of the tests.