The standard process for test-driven development seems to be to add a test, see it fail, write production code, see the test pass, refactor, and check it all into source control.
Is there anything that allows you to check out revision x of the test code, and revision x-1 of the production code, and see that the tests you've written in revision x fail? (I'd be interested in any language and source control system, but I use Ruby and Git.)
There may be circumstances where you might add tests that already pass, but they'd be more verification than development.
A couple of things:
After refactoring the test, you run the test again.
Then you refactor the code and run the test again.
Finally, you don't have to check in right away, but you could.
In TDD, there is no purpose in adding a test that passes. It's a waste of time. I've been tempted to do this in order to increase code coverage, but that code should have been covered by tests which actually failed first.
If the test doesn't fail first, then you don't know if the code you then add fixes the problem, and you don't know if the test actually tests anything. It's no longer a test - it's just some code that may or may not test anything.
Simply keep your tests and code in separate directories, and then you can check out one version of the tests and another of the code.
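For instance, with Git and Ruby (the asker's tools), a rough sketch of that check might look like the following; the lib/ and test/ paths and the test file name are assumptions:

    # roll lib/ back one commit while keeping test/ at HEAD, run the suite
    # against it, then restore the working tree
    system("git checkout HEAD~1 -- lib/") or abort "checkout failed"
    passed = system("ruby -Ilib -Itest test/my_class_test.rb")   # hypothetical test file
    system("git checkout HEAD -- lib/")                          # put lib/ back
    if passed
      puts "the new tests already pass against the old code"
    else
      puts "the new tests fail against the old code (as TDD expects)"
    end

Nothing is committed by this experiment; the final checkout simply restores lib/ to the current revision.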
That being said, in a multi-developer environment you generally don't want to be checking in code where the tests fail.
I would also question the motivation for doing this. If it is to "enforce" writing the failing test first, then I would point you to this comment from the father of (the promotion of) TDD:
"There may be circumstances where you might add tests that already pass, but they'd be more verification than development."
In TDD you always watch a test fail before making it pass so that you know it works.
As you've found, sometimes you want to explicitly describe behaviour that is covered by code you've already written but that, considered from outside the class under test, is a separate feature of the class. In that case the test will pass.
But still watch the test fail.
Either write the test with an obviously failing assertion and then fix the assertion to make it pass. Or, temporarily break the code and watch all affected tests fail, including the new one. And then fix the code to make it work again.
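As a small Ruby illustration (the class and expectations are entirely made up), the commented-out assertion below is the "obviously failing" one you watch go red before correcting it:

    require "minitest/autorun"

    # tiny made-up class under test, just for illustration
    class Validator
      def valid?(name)
        !name.to_s.strip.empty?
      end
    end

    class ValidatorTest < Minitest::Test
      def test_rejects_blank_names
        # step 1: an obviously wrong assertion -- run it and watch it fail:
        #   assert_equal true, Validator.new.valid?("")
        # step 2: fix the assertion and watch it pass:
        assert_equal false, Validator.new.valid?("")
      end
    end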
If you keep your production and test code in separate versioning areas (e.g. separate projects/source trees/libraries/etc.), most version control systems allow you to check out previous versions of code and rebuild them. In your case, you could check out the x-1 version of the production code, rebuild it, then run your test code against the newly built and deployed production code.
One thing that might help would be to tag/label all of your code when you do a release, so that you can easily fetch an entire source tree for a previous version of your code.
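With Git specifically, one hedged way to do this without disturbing your working copy is a throwaway worktree; the tag name v1.0 and the paths are assumptions, and since Ruby has no compile step, "rebuild" here just means pointing the current tests at the older source:

    # run the current test suite against the production code from a tagged release
    system("git worktree add /tmp/myapp-v1.0 v1.0") or abort "could not create worktree"
    system("ruby -I/tmp/myapp-v1.0/lib -Itest test/my_class_test.rb")
    system("git worktree remove /tmp/myapp-v1.0")

(git worktree needs a reasonably recent Git; with older versions a plain clone checked out at the tag works just as well.)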
"Is there anything that allows you to check out revision x of the test code, and revision x-1 of the production code, and see that the tests you've written in revision x fail?"
I think that you are looking for the keyword continuous integration. There are many tools that are actually implemented as post-commit hooks in version control systems (i.e. something that runs on the server/central repository after each commit): for example, they will run your unit tests after each commit, and email the committers if a revision introduces a regression.
Such tools are perfectly able to distinguish tests that are new and have never passed from old tests that used to pass and now fail because of a recent commit, which means that using TDD and continuous integration together works just fine: you will probably be able to configure your tools not to scream when a new failing test is introduced, and to complain only about regressions.
As always, I'll direct you to Wikipedia for a generic introduction to the topic. A more detailed, quite famous resource is the article by Martin Fowler.
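As a very rough illustration of the idea (a toy, not a replacement for a real CI server), a server-side Git hook can run the suite after every push; the branch name, the paths, and the test/suite.rb entry point are all assumptions:

    #!/usr/bin/env ruby
    # post-receive hook: check out the newly pushed master into a scratch
    # directory and run the tests there, reporting the result
    require "tmpdir"

    Dir.mktmpdir do |dir|
      system("git --work-tree=#{dir} checkout -f master") or abort "checkout failed"
      ok = system("ruby -I#{dir}/lib -I#{dir}/test #{dir}/test/suite.rb")
      puts(ok ? "tests passed" : "tests FAILED - notify the committer")
    end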
If you git commit after writing your failing tests, and then again when they are passing, you should at a later time be able to create a branch at the point where the tests fail.
You can then add more tests, verify that they also fail, git commit, git merge and then run the tests with the current code base to see if the work you already did will cause the test to pass or if you now need to do some more work.
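A sketch of that flow, scripted in Ruby (the commit hash abc123, the branch names, and the rake test task are all placeholders):

    # abc123 is the commit at which the new tests were still failing
    system("git checkout -b more-tests abc123")   # branch from the red state
    # ...write additional failing tests here, then:
    system("git add test")
    system("git commit -m 'More failing tests'")
    system("git checkout master")
    system("git merge more-tests")                # bring the new tests forward
    system("rake test")                           # does the current code already satisfy them?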
Related
We want to verify that none of the unit tests for the committed code fail before allowing the developers to commit.
Do you know any tool that will help us?
I'd recommend against doing this, because in practice there are times when you want to allow developers to submit code that does not pass all its unit tests. Also, consider that developers might try to work around the restriction by deleting unit tests, or by not writing them in the first place.
And how could the tool determine that all unit tests passed? It would have to build the code and run the unit test suite. A fault in the build environment or the test suite might therefore make it impossible to check in code.
I finished an app, and now I'm trying to write unit tests to cover all its methods.
The thing is, I'm finding that I'm testing my code with full knowledge of how it works.
That feels a bit pointless to me, because I know how the code works and I'm only testing what my code actually does.
My question is:
Is this useless? I'm testing what the code does, not what it is supposed to do. My code works, but I can improve it. So:
Should I complete all my tests, then try to refactor, changing my tests to describe how the code should work and changing the app to make those tests pass?
Thank you.
You need to test "how the code should work". That is, start with a clear idea of how a particular method should behave, then create the set of tests that covers that behaviour. If your code fails a test, you can fix it.
Even though your code is working now you still need the tests for regression testing. Later when you modify your code or add new extensions to it you need to ensure that you did not break the existing functionality. The set of tests that you derive today will be able to tell you how well you did.
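For example (a made-up Ruby/minitest sketch), tests written this way state the intended behaviour rather than echoing the implementation:

    require "minitest/autorun"

    # hypothetical method under test
    def shipping_cost(weight_kg)
      weight_kg <= 1 ? 5 : 5 + (weight_kg - 1) * 2
    end

    class ShippingCostTest < Minitest::Test
      # these say what the method SHOULD do, not how it happens to do it
      def test_small_parcels_cost_the_flat_rate
        assert_equal 5, shipping_cost(0.5)
      end

      def test_each_extra_kilo_adds_two
        assert_equal 9, shipping_cost(3)
      end
    end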
While I don't always do Test-Driven Development (writing tests before implementing the class), I do always write tests after I implement the code, even some days later. At least I try to follow a path-coverage strategy (that is, every flow from the start of the method up to the point where it returns), and I also test unexpected parameter values. This is useful for building confidence in the correct behaviour of your functions, especially when you change the code.
I almost always find unexpected behaviours or little bugs :) So it works.
It will be extremely useful if you ever change the code.
Pretty useless, I'd say. It is called "test driven" for a reason. Every line of code is motivated by a test.
That means the code line is also protected against changes.
Test driven development requires a minimalistic approach and tons of discipline. Add a test to extend the functionality a little. Implement the functionality so it just makes the light green but nothing more. Make foolish implementations until the tests force you to make serious ones.
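A tiny Ruby illustration of that rhythm (all names invented): the implementation below is deliberately "foolish" and only passes the single test written so far; the next test, say for the input 5, would force something more serious:

    require "minitest/autorun"

    class FizzBuzz
      def say(n)
        "fizz"   # just enough to turn the one existing test green, nothing more
      end
    end

    class FizzBuzzTest < Minitest::Test
      def test_three_is_fizz
        assert_equal "fizz", FizzBuzz.new.say(3)
      end
    end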
I have tried to add tests to existing code and found it difficult. The tendency is that only some of the functionality ends up tested. This is dangerous, since the test suite will make people think the code is protected. One method is to go through the code line by line and ensure there is a test for it. Make foolish changes to the code until a test forces you back to the original version. In the end, any change to the code should break the tests.
However, there is no way you can derive the requirements from the code. No test suite that is the result of reverse-engineering will be complete.
You should at least try to build the tests around how the code is supposed to work. That is why it is better to write the tests in advance, but it is still not useless to write them now.
The reason: you don't build the tests to test your current code; you build them to test future modifications. Unit tests are especially useful for checking that a modification didn't break earlier code.
Better late than never!
Of course, the best option is to do the unit tests before you start implementing so that you can profit from them even more, but having a good set of unit tests will prevent you from introducing many bugs when you refactor or implement new features, regardless of whether you implemented the unit tests before or after the implementation.
If one has a project whose tests are executed as part of the build procedure on a build machine, and a set of tests fails, should the entire build fail?
What are the things one should consider when answering that question? Does it matter which tests are failing?
Background information that prompted this question:
Currently I am working on a project that has NUnit tests that are executed as part of the build procedure on our CruiseControl.NET build machine.
The project used to be set up so that if any tests fail, the build fails. The reasoning being that if the tests fail, the product is not working/not complete, the project is a failure, and hence the build should fail.
We have added some tests that, although they fail, are not crucial to the project (see below for more details). So if those tests fail, the project is not a complete failure, and we would still want it to build.
One of the tests, which passes, verifies that incorrect arguments result in an exception; the test that does not pass is the one that checks that all the allowed arguments do not result in an exception. So the class rejects all invalid cases, but also some valid ones. This is not a problem for the project, since the rejected valid arguments are fringe cases on which the application will not rely.
If it's in any way doable, then do it. It greatly reduces the broken-window problem:
In a system with no (visible) flaws, introducing a small flaw is usually seen as a very bad idea. So if you've got a project with a green status (no unit test fails) and you introduce the first failing test, then you (and/or your peers) will be motivated to fix the problem.
If, on the other hand, there are known-failing tests, then adding just another broken test is seen as keeping the status quo.
Therefore you should always strive to keep all tests running (and not just "most of them"). And treating every single failing test as reason for failing the build goes a long way towards that goal.
If a unit test fails, some code is not behaving as expected. So the code shouldn't be released.
Although you can make the build for testing/bugfixing purposes.
If you felt that a case was important enough to write a test for, then if that test is failing, the software is failing. Based on that alone, yes, it should consider the build a failure and not continue. If you don't use that reasoning, then who decides what tests are not important? Where is the line between "if this fails it's ok, but if that fails it's not"? Failure is failure.
I think a nice setup like yours should always build successfully, with all unit tests passing.
Like Gamecat said, the build itself may succeed, but this code should never go to production.
Imagine one of your team members introducing a bug in the code that the always-failing unit test covers. It won't be discovered by the test, since you allow that one test to always fail.
In our team we have a simple policy: if not all tests pass, we don't go to production with the code. This is also very simple for our project manager to understand.
In my opinion it really depends on your unit tests...
If your unit tests are really UNIT tests (as they should be => "reference to endless books ;)" ),
then the build should fail, because something is not behaving as it should...
But all too often (unfortunately seen in so many projects) these unit tests only cover some 'edges' and/or are really integration tests; then the build should go on.
(Yes, this is a subjective answer ;)
In short:
if you know the unit tests to be sound: fail;
else: build on.
The real problem is with your failing tests. You should not have a unit test where it's OK to fail because it's an edge case. Decide whether the edge case is important or not - if not then delete the unit test; if yes then fix the code.
Like some of the other answers implied, it's definitely a code smell when unit tests fail. If you live with the smell, then you're less likely to spot the next problem.
All the answers have been great; here is what I decided to do:
Have the tests that are not crucial (splitting a failing test if need be) marked as Ignored in NUnit (I remembered this feature after asking the question). This allows the following:
The build can fail if any tests fail, hence reducing the smelliness
The tests that are ignored have to be defended to the project manager (or whoever is in charge)
Any tests that are ignored are marked in a special way
I think that is the best compromise: it forces people to fix the tests, but not necessarily right away (and they have to defend their decision not to fix them now, since everyone knows what they did).
What I actually did: fixed the broken tests.
How can you make sure that all developers on your team are unit testing their code? Code coverage metrics are the only way I can think of to objectively measure this. Is there another way?
(Of course if you're really following TDD then this shouldn't be an issue. But let's just suppose you've got some developers that don't quite "get" TDD yet.)
This is probably a social problem rather than a technological one. Firstly, do you want unit tests that result in 100% code coverage, or can you settle for less and trust your developers to put unit tests where they really matter and where they make sense? You could probably put some kind of system in place that would run code coverage checks to ensure that unit tests cover a certain percentage of the code. But then there would still be ways to game the system, and it still wouldn't result in code that was bug free. Due to things like the halting problem, it's impossible to cover every path in the code.
Run test coverage reports automatically during your build process. We do this with CruiseControl. Otherwise, you have to actually inspect what is getting tested in your test results reports.
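One way to wire this in, assuming a Ruby project using SimpleCov (both assumptions; an NUnit shop would reach for a .NET coverage tool instead), is to load the coverage library from the test helper and set a threshold so the build notices when coverage drops:

    # test/test_helper.rb -- minimal SimpleCov setup
    require "simplecov"
    SimpleCov.start do
      add_filter "/test/"     # don't count the tests themselves as covered code
      minimum_coverage 80     # make the test run fail if coverage drops below 80%
    end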
Code coverage tools are almost certainly superior to any ad hoc method you could come up with. That's why they exist.
Even developers who get TDD are far from immune to gaps in coverage. Often, code that fixes a bug breaks a lateral test or creates a branch that the original developer did not anticipate and the maintenance developer didn't realize was new.
A good way to get tests written is to increase accountability. If a developer has to explain to someone else exactly why they didn't write unit tests, they're more likely to do so. Most companies I've worked at have required that any proposed commit to a trunk be reviewed by another developer before the commit, and that the name of the reviewer be included in the commit comments. In this environment, you can tell your team that they should not allow code to "pass" peer code review unless unit tests are in place.
Now you have a chain of responsibility. If a developer commits code without naming the reviewer, you can ask them who reviewed the code (and, as I learned the hard way, having to say "nobody" to your boss when asked this question is no fun!). If you do become aware of code being committed without unit tests, you can ask both the developer and the code reviewer why unit tests were not included. The possibility of being asked this question encourages code reviewers to insist on unit tests.
One more step you can take is to install a commit hook in your version control system that e-mails the entire team when a commit is made, along with the files and even code that made up the commit. This provides a high level of transparency, and further encourages developers to "follow the rules." Of course, this only works if it scales to the number of commits your team does per day.
This is more of a psychological solution than a technical solution, but it's worked well for me when managing software teams. It's also a bit gentler than the rubber hose suggested in another answer. :-)
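To make the commit-notification hook mentioned above concrete, here is a rough Ruby sketch of a server-side post-receive hook that mails the team; the SMTP host, the addresses, and the branch name are all assumptions:

    #!/usr/bin/env ruby
    # post-receive hook: mail the whole team a summary of what was just pushed
    require "net/smtp"

    summary = `git log -1 --stat master`   # latest commit on master, with changed files

    message = <<~MAIL
      From: builds@example.com
      To: team@example.com
      Subject: New commit pushed

      #{summary}
    MAIL

    Net::SMTP.start("localhost", 25) do |smtp|
      smtp.send_message(message, "builds@example.com", "team@example.com")
    end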
Here we just have a test folder, with a package structure mirroring the actual code. To check in a class, policy states it must have an accompanying test class, with certain guidelines about which methods need to be tested and how (for example, we don't require pure getters and setters to be tested).
A quick glance at the testing folder shows when a class is missing, and the offending person can be beaten with a rubber hose (or whatever depending on your policy).
Go in and change a random variable or pass a null somewhere and you should expect to see a bunch of red. =D
One way to do it would be to write a script that searches all checkins for the term 'test' or 'testfixture' (obviously depending on your environment). If there is a commit log, or an email detailing the changes is sent to the manager, then it'd be trivial with your favorite text-processing language to scan the code files for signs of unit tests (the Assert keyword would probably be the best bet).
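A quick-and-dirty Ruby version of that scan over the most recent commit (the keywords are assumptions; tune them to your test framework):

    # flag checkins that show no sign of unit tests
    changed   = `git show --pretty=format: --name-only HEAD`.split("\n").reject(&:empty?)
    test_like = changed.select { |f| f =~ /test/i }
    asserts   = `git show HEAD`.scan(/\bassert\w*\b/i)

    puts "files changed:   #{changed.size}"
    puts "test-ish files:  #{test_like.size}"
    puts "assert keywords: #{asserts.size}"
    warn "WARNING: no sign of unit tests in this checkin" if test_like.empty? && asserts.empty?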
If there aren't unit tests, then during your next code review, take an example of a recent check-in, and spend ten minutes talking about the possible ways it could 'go wrong', and how unit tests would have found the error faster.
Have them submit a report or some sort of screenshot of the results of their unit tests on a regular basis. They can either fake it (which would most likely take more time than actually doing the tests) or actually do them.
In the end, you are going to know the ones who are not doing the tests, they will be the ones with the bugs that could have been easily caught with the unit tests.
The issue is as much social as it is technical. If you have developers who "don't quite 'get' TDD yet" then helping them understand the benefits of TDD may be a better long-term solution than technical measures that "make" them write tests because it's required. Reluctant developers can easily write tests that meet code coverage criteria and yet aren't valuable tests.
One thing that should be mentioned here is that you need a system for regularly running the unit tests. They should be part of your check-in gauntlet or nightly build system. Merely making sure the unit tests are written doesn't ensure you are getting value out of them.
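In Ruby terms (the layout below is an assumption), the simplest version of such a system is a Rake task, so that the nightly job or check-in gauntlet only ever has to run one command:

    # Rakefile -- `rake` on its own runs the whole suite, which makes it trivial
    # to call from a cron job, a CI server, or a pre-commit script
    require "rake/testtask"

    Rake::TestTask.new(:test) do |t|
      t.libs << "test"
      t.pattern = "test/**/*_test.rb"
    end

    task default: :test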
Sit down with them and observe what they do. If they don't unit test, remind them gently.
If I create a test suite for a development project, should those classes be kept under version control with the rest of the project code?
Yes, there is no reason not to put them in source control. What if the tests change? What if the interfaces change, necessitating that the tests change?
Yes, all the same reasons you put production code in to source control still apply to any unit tests you write.
It's the classic who, when, and why questions:
Who changed the code?
When did they change it?
What did they change it for?
These questions are just as pertinent to testing code as they are to production code. You absolutely should put your unit testing code in to the repository.
Absolutely. Test classes must stay up-to-date with the code. This means checking it in and running the tests under continuous integration.
Absolutely! Test classes are source code and should be managed like any other source code. You will need to modify them and keep track of versions and you want to know the maintenance history.
You should also keep test data under source control unless it is massively large.
Unit tests should be tied to a code base in your repository.
For no other reason than that, if you have to produce a maintenance release for a previous version, you can guarantee that, by the metric of your unit tests, your code is no worse than it was before (and hopefully is now better).
Indeed yes. How could anyone ever think otherwise?
If you use code branches, you should try to make your testing code naturally fit under the main codeline, so that when you branch, the right versions of the tests branch too.
Yes they should. People checking out the latest release should be able to unit test the code on their machine. This will help to identify missing dependencies and can also provide them with unofficial documentation on how the code works.
Yes.
Test code is code. It should be maintained, refactored, and versioned. It is part of your system's source.
Absolutely, they should be treated as first-class citizens of your code base. They'll need all the love and care (i.e. maintenance) that any piece of code does.
Yes they should. You should be checking the tests out and running them whenever you make code changes. If you put them somewhere else that is that much more trouble to go through to run them.
Yes. For all of the other reasons mentioned here, plus the fact that as functionality changes, your test suite will change, and it should be easy to get the right test suite for any given release, branch, etc. Having the tests not only in version control but in the same repository as your code is the way to achieve that.
Yes, for all the reasons above. Also, if you are using a continuous integration server that is "watching" your source control, you can have it run the latest unit tests on every commit.
This means that a broken build results from unit tests failing as well as from code not compiling.
Absolutely. You'll likely find that as your code changes your tests may need to change as well, so you'll likely want to have a record of those changes, especially if the tests or code all of a sudden stop working. ;-)
Also, the unit test cases should be kept as close as possible to the actual code they are testing (the bottom of the same file seems to be the standard). It's as much for convenience as it is for maintenance.
For some additional reading about what makes a good unit test, check out this stackoverflow post.